From: Linus Torvalds
Date: Mon, 31 Dec 2018 17:54:17 +0000 (-0800)
Subject: Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
X-Git-Tag: 4.21-smb3-small-fixes~35
X-Git-Url: http://git.samba.org/samba.git/?p=sfrench%2Fcifs-2.6.git;a=commitdiff_plain;h=e3ed513bcf0097c0b8a1f1b4d791a8d0d8933b3b;hp=c40f7d74c741a907cfaeb73a7697081881c497d0

Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fix from Ingo Molnar:
 "This is a revert for a lockup in cgroups-intense workloads - the real
  fixes will come later"

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
---

diff --git a/.gitignore b/.gitignore
index 97ba6b79834c..a20ac26aa2f5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -15,6 +15,7 @@
 *.bin
 *.bz2
 *.c.[012]*.*
+*.dt.yaml
 *.dtb
 *.dtb.S
 *.dwo
diff --git a/Documentation/ABI/testing/sysfs-block b/Documentation/ABI/testing/sysfs-block
index dea212db9df3..7710d4022b19 100644
--- a/Documentation/ABI/testing/sysfs-block
+++ b/Documentation/ABI/testing/sysfs-block
@@ -244,7 +244,7 @@ Description:
 
 What:		/sys/block/<disk>/queue/zoned
 Date:		September 2016
-Contact:	Damien Le Moal
+Contact:	Damien Le Moal
 Description:
 		zoned indicates if the device is a zoned block device
 		and the zone model of the device if it is indeed zoned.
@@ -259,6 +259,14 @@ Description:
 		zone commands, they will be treated as regular block devices
 		and zoned will report "none".
 
+What:		/sys/block/<disk>/queue/nr_zones
+Date:		November 2018
+Contact:	Damien Le Moal
+Description:
+		nr_zones indicates the total number of zones of a zoned block
+		device ("host-aware" or "host-managed" zone model). For regular
+		block devices, the value is always 0.
+
 What:		/sys/block/<disk>/queue/chunk_sectors
 Date:		September 2016
 Contact:	Hannes Reinecke
@@ -268,6 +276,6 @@ Description:
 		indicates the size in 512B sectors of the RAID volume
 		stripe segment. For a zoned block device, either
 		host-aware or host-managed, chunk_sectors indicates the
-		size of 512B sectors of the zones of the device, with
+		size in 512B sectors of the zones of the device, with
 		the eventual exception of the last zone of the device
 		which may be smaller.
diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
index c1513c756af1..9d2339a485c8 100644
--- a/Documentation/ABI/testing/sysfs-block-zram
+++ b/Documentation/ABI/testing/sysfs-block-zram
@@ -98,3 +98,35 @@ Description:
 		The backing_dev file is read-write and set up backing device
 		for zram to write incompressible pages.
 		For using, user should enable CONFIG_ZRAM_WRITEBACK.
+
+What:		/sys/block/zram<id>/idle
+Date:		November 2018
+Contact:	Minchan Kim
+Description:
+		The idle file is write-only and marks zram slots as idle.
+		If the system has debugfs mounted, the user can see which
+		slots are idle via
+		/sys/kernel/debug/zram/zram<id>/block_state
+
+What:		/sys/block/zram<id>/writeback
+Date:		November 2018
+Contact:	Minchan Kim
+Description:
+		The writeback file is write-only and triggers idle and/or
+		huge page writeback to the backing device.
+
+What:		/sys/block/zram<id>/bd_stat
+Date:		November 2018
+Contact:	Minchan Kim
+Description:
+		The bd_stat file is read-only and represents the backing
+		device's statistics (bd_count, bd_reads, bd_writes) in a
+		format similar to the block layer statistics file format.
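Taken together, idle, writeback and bd_stat implement an idle-page writeback
flow. A minimal sketch of that flow (assumes a zram0 device with
CONFIG_ZRAM_WRITEBACK enabled and a backing device already configured; the
"all" and "idle" keywords are taken from the zram idle-writeback interface)::

	echo all > /sys/block/zram0/idle        # mark all allocated slots idle
	echo idle > /sys/block/zram0/writeback  # write idle pages to the backing device
	cat /sys/block/zram0/bd_stat            # bd_count bd_reads bd_writes (4K units)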
+
+What:		/sys/block/zram<id>/writeback_limit
+Date:		November 2018
+Contact:	Minchan Kim
+Description:
+		The writeback_limit file is read-write and specifies the
+		maximum amount of writeback ZRAM can do. The limit can be
+		changed at runtime and "0" disables the limit.
+		No limit is the initial state.
diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index 151584a1f950..b21fba14689b 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -21,6 +21,15 @@ Description:	Holds a comma separated list of device unique_ids that
 		If a device is authorized automatically during boot its
 		boot attribute is set to 1.
 
+What:		/sys/bus/thunderbolt/devices/.../domainX/iommu_dma_protection
+Date:		Mar 2019
+KernelVersion:	4.21
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute tells whether the system uses IOMMU
+		for DMA protection. A value of 1 means the IOMMU is used,
+		0 means it is not (DMA protection is solely based on
+		Thunderbolt security levels).
+
 What:		/sys/bus/thunderbolt/devices/.../domainX/security
 Date:		Sep 2017
 KernelVersion:	4.13
diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
index 3ac41774ad3c..a7ce33199457 100644
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -92,6 +92,15 @@ Contact:	"Jaegeuk Kim"
 Description:
 		Controls the number of trials to find a victim segment.
 
+What:		/sys/fs/f2fs/<disk>/migration_granularity
+Date:		October 2018
+Contact:	"Chao Yu"
+Description:
+		Controls the migration granularity of garbage collection on a
+		large section. It lets GC move partial segment(s) of one
+		section in one GC cycle, so that one heavyweight GC is
+		dispersed into multiple lightweight ones.
+
 What:		/sys/fs/f2fs/<disk>/dir_level
 Date:		March 2014
 Contact:	"Jaegeuk Kim"
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index ac66ae2509a9..e133ccd60228 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -58,15 +58,6 @@ specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
 implementation may choose to ignore flags that affect the location of
 the returned memory, like GFP_DMA).
 
-::
-
-	void *
-	dma_zalloc_coherent(struct device *dev, size_t size,
-			    dma_addr_t *dma_handle, gfp_t flag)
-
-Wraps dma_alloc_coherent() and also zeroes the returned memory if the
-allocation attempt succeeded.
-
 ::
 
 	void
@@ -717,12 +708,15 @@ dma-api/num_errors	The number in this file shows how many
 dma-api/min_free_entries	This read-only file can be read to get the
 				minimum number of free dma_debug_entries the
 				allocator has ever seen. If this value goes
-				down to zero the code will disable itself
-				because it is not longer reliable.
+				down to zero the code will attempt to increase
+				nr_total_entries to compensate.
 
 dma-api/num_free_entries	The current number of free dma_debug_entries
 				in the allocator.
 
+dma-api/nr_total_entries	The total number of dma_debug_entries in the
+				allocator, both free and used.
+
 dma-api/driver-filter	You can write a name of a driver into this file
 			to limit the debug output to requests from that
 			particular driver. Write an empty string to
@@ -742,10 +736,15 @@ driver filter at boot time. The debug code will only print errors for that
 driver afterwards. This filter can be disabled or changed later using debugfs.
 
 When the code disables itself at runtime this is most likely because it ran
-out of dma_debug_entries. These entries are preallocated at boot. The number
-of preallocated entries is defined per architecture. If it is too low for you
-boot with 'dma_debug_entries=<number_of_requested_entries>' to overwrite the
-architectural default.
+out of dma_debug_entries and was unable to allocate more on-demand. 65536
+entries are preallocated at boot - if this is too low for you boot with
+'dma_debug_entries=<number_of_requested_entries>' to overwrite the default.
+Note that the code allocates entries in batches, so the exact number of
+preallocated entries may be greater than the actual number requested. The
+code will print to the kernel log each time it has dynamically allocated
+as many entries as were initially preallocated. This is to indicate that a
+larger preallocation size may be appropriate, or if it happens continually
+that a driver may be leaking mappings.
 
 ::
 
diff --git a/Documentation/EDID/1024x768.S b/Documentation/EDID/1024x768.S
index 6f3e4b75e49e..4aed3f9ab88a 100644
--- a/Documentation/EDID/1024x768.S
+++ b/Documentation/EDID/1024x768.S
@@ -31,14 +31,13 @@
 #define YBLANK 38
 #define XOFFSET 8
 #define XPULSE 144
-#define YOFFSET (63+3)
-#define YPULSE (63+6)
+#define YOFFSET 3
+#define YPULSE 6
 #define DPI 72
 #define VFREQ 60 /* Hz */
 #define TIMING_NAME "Linux XGA"
 #define ESTABLISHED_TIMING2_BITS 0x08 /* Bit 3 -> 1024x768 @60 Hz */
 #define HSYNC_POL 0
 #define VSYNC_POL 0
-#define CRC 0x55
 
 #include "edid.S"
diff --git a/Documentation/EDID/1280x1024.S b/Documentation/EDID/1280x1024.S
index bd9bef2a65af..b26dd424cad7 100644
--- a/Documentation/EDID/1280x1024.S
+++ b/Documentation/EDID/1280x1024.S
@@ -31,14 +31,13 @@
 #define YBLANK 42
 #define XOFFSET 48
 #define XPULSE 112
-#define YOFFSET (63+1)
-#define YPULSE (63+3)
+#define YOFFSET 1
+#define YPULSE 3
 #define DPI 72
 #define VFREQ 60 /* Hz */
 #define TIMING_NAME "Linux SXGA"
 /* No ESTABLISHED_TIMINGx_BITS */
 #define HSYNC_POL 1
 #define VSYNC_POL 1
-#define CRC 0xa0
 
 #include "edid.S"
diff --git a/Documentation/EDID/1600x1200.S b/Documentation/EDID/1600x1200.S
index a45101c6160c..0d091b282768 100644
--- a/Documentation/EDID/1600x1200.S
+++ b/Documentation/EDID/1600x1200.S
@@ -31,14 +31,13 @@
 #define YBLANK 50
 #define XOFFSET 64
 #define XPULSE 192
-#define YOFFSET (63+1)
-#define YPULSE (63+3)
+#define YOFFSET 1
+#define YPULSE 3
 #define DPI 72
 #define VFREQ 60 /* Hz */
 #define TIMING_NAME "Linux UXGA"
 /* No ESTABLISHED_TIMINGx_BITS */
 #define HSYNC_POL 1
 #define VSYNC_POL 1
-#define CRC 0x9d
 
 #include "edid.S"
diff --git a/Documentation/EDID/1680x1050.S b/Documentation/EDID/1680x1050.S
index b0d7c69282b4..7dfed9a33eab 100644
--- a/Documentation/EDID/1680x1050.S
+++ b/Documentation/EDID/1680x1050.S
@@ -31,14 +31,13 @@
 #define YBLANK 39
 #define XOFFSET 104
 #define XPULSE 176
-#define YOFFSET (63+3)
-#define YPULSE (63+6)
+#define YOFFSET 3
+#define YPULSE 6
 #define DPI 96
 #define VFREQ 60 /* Hz */
 #define TIMING_NAME "Linux WSXGA"
 /* No ESTABLISHED_TIMINGx_BITS */
 #define HSYNC_POL 1
 #define VSYNC_POL 1
-#define CRC 0x26
 
 #include "edid.S"
diff --git a/Documentation/EDID/1920x1080.S b/Documentation/EDID/1920x1080.S
index 3084355e81e7..d6ffbba28e95 100644
--- a/Documentation/EDID/1920x1080.S
+++ b/Documentation/EDID/1920x1080.S
@@ -31,14 +31,13 @@
 #define YBLANK 45
 #define XOFFSET 88
 #define XPULSE 44
-#define YOFFSET (63+4)
-#define YPULSE (63+5)
+#define YOFFSET 4
+#define YPULSE 5
 #define DPI 96
 #define VFREQ 60 /* Hz */
 #define TIMING_NAME "Linux FHD"
 /* No ESTABLISHED_TIMINGx_BITS */
 #define HSYNC_POL 1
 #define VSYNC_POL 1
-#define CRC 0x05
 
 #include "edid.S"
diff --git
a/Documentation/EDID/800x600.S b/Documentation/EDID/800x600.S index 6644e26d5801..a5616588de08 100644 --- a/Documentation/EDID/800x600.S +++ b/Documentation/EDID/800x600.S @@ -28,14 +28,13 @@ #define YBLANK 28 #define XOFFSET 40 #define XPULSE 128 -#define YOFFSET (63+1) -#define YPULSE (63+4) +#define YOFFSET 1 +#define YPULSE 4 #define DPI 72 #define VFREQ 60 /* Hz */ #define TIMING_NAME "Linux SVGA" #define ESTABLISHED_TIMING1_BITS 0x01 /* Bit 0: 800x600 @ 60Hz */ #define HSYNC_POL 1 #define VSYNC_POL 1 -#define CRC 0xc2 #include "edid.S" diff --git a/Documentation/EDID/HOWTO.txt b/Documentation/EDID/HOWTO.txt index 835db332289b..539871c3b785 100644 --- a/Documentation/EDID/HOWTO.txt +++ b/Documentation/EDID/HOWTO.txt @@ -45,14 +45,5 @@ EDID: #define YPIX vdisp #define YBLANK vtotal-vdisp -#define YOFFSET (63+(vsyncstart-vdisp)) -#define YPULSE (63+(vsyncend-vsyncstart)) - -The CRC value in the last line - #define CRC 0x55 -also is a bit tricky. After a first version of the binary data set is -created, it must be checked with the "edid-decode" utility which will -most probably complain about a wrong CRC. Fortunately, the utility also -displays the correct CRC which must then be inserted into the source -file. After the make procedure is repeated, the EDID data set is ready -to be used. +#define YOFFSET vsyncstart-vdisp +#define YPULSE vsyncend-vsyncstart diff --git a/Documentation/EDID/Makefile b/Documentation/EDID/Makefile index 17763ca3f12b..85a927dfab02 100644 --- a/Documentation/EDID/Makefile +++ b/Documentation/EDID/Makefile @@ -15,10 +15,21 @@ clean: %.o: %.S @cc -c $^ -%.bin: %.o +%.bin.nocrc: %.o @objcopy -Obinary $^ $@ -%.bin.ihex: %.o +%.crc: %.bin.nocrc + @list=$$(for i in `seq 1 127`; do head -c$$i $^ | tail -c1 \ + | hexdump -v -e '/1 "%02X+"'; done); \ + echo "ibase=16;100-($${list%?})%100" | bc >$@ + +%.p: %.crc %.S + @cc -c -DCRC="$$(cat $*.crc)" -o $@ $*.S + +%.bin: %.p + @objcopy -Obinary $^ $@ + +%.bin.ihex: %.p @objcopy -Oihex $^ $@ @dos2unix $@ 2>/dev/null diff --git a/Documentation/EDID/edid.S b/Documentation/EDID/edid.S index ef082dcc6084..c3d13815526d 100644 --- a/Documentation/EDID/edid.S +++ b/Documentation/EDID/edid.S @@ -47,9 +47,11 @@ #define mfgname2id(v1,v2,v3) \ ((((v1-'@')&0x1f)<<10)+(((v2-'@')&0x1f)<<5)+((v3-'@')&0x1f)) #define swap16(v1) ((v1>>8)+((v1&0xff)<<8)) +#define lsbs2(v1,v2) (((v1&0x0f)<<4)+(v2&0x0f)) #define msbs2(v1,v2) ((((v1>>8)&0x0f)<<4)+((v2>>8)&0x0f)) #define msbs4(v1,v2,v3,v4) \ - (((v1&0x03)>>2)+((v2&0x03)>>4)+((v3&0x03)>>6)+((v4&0x03)>>8)) + ((((v1>>8)&0x03)<<6)+(((v2>>8)&0x03)<<4)+\ + (((v3>>4)&0x03)<<2)+((v4>>4)&0x03)) #define pixdpi2mm(pix,dpi) ((pix*25)/dpi) #define xsize pixdpi2mm(XPIX,DPI) #define ysize pixdpi2mm(YPIX,DPI) @@ -200,9 +202,9 @@ y_msbs: .byte msbs2(YPIX,YBLANK) x_snc_off_lsb: .byte XOFFSET&0xff /* Horizontal sync pulse width pixels 8 lsbits (0-1023) */ x_snc_pls_lsb: .byte XPULSE&0xff -/* Bits 7-4 Vertical sync offset lines 4 lsbits -63) - Bits 3-0 Vertical sync pulse width lines 4 lsbits -63) */ -y_snc_lsb: .byte ((YOFFSET-63)<<4)+(YPULSE-63) +/* Bits 7-4 Vertical sync offset lines 4 lsbits (0-63) + Bits 3-0 Vertical sync pulse width lines 4 lsbits (0-63) */ +y_snc_lsb: .byte lsbs2(YOFFSET, YPULSE) /* Bits 7-6 Horizontal sync offset pixels 2 msbits Bits 5-4 Horizontal sync pulse width pixels 2 msbits Bits 3-2 Vertical sync offset lines 2 msbits diff --git a/Documentation/Makefile b/Documentation/Makefile index 2ca77ad0f238..9786957c6a35 100644 --- a/Documentation/Makefile +++ b/Documentation/Makefile @@ -2,7 +2,7 
@@ # Makefile for Sphinx documentation # -subdir-y := +subdir-y := devicetree/bindings/ # You can set these variables from the command line. SPHINXBUILD = sphinx-build diff --git a/Documentation/admin-guide/LSM/SELinux.rst b/Documentation/admin-guide/LSM/SELinux.rst index f722c9b4173a..520a1c2c6fd2 100644 --- a/Documentation/admin-guide/LSM/SELinux.rst +++ b/Documentation/admin-guide/LSM/SELinux.rst @@ -6,7 +6,7 @@ If you want to use SELinux, chances are you will want to use the distro-provided policies, or install the latest reference policy release from - http://oss.tresys.com/projects/refpolicy + https://github.com/SELinuxProject/refpolicy However, if you want to install a dummy policy for testing, you can do using ``mdp`` provided under diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index 476722b7b636..7bf3f129c68b 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -56,11 +56,13 @@ v1 is available under Documentation/cgroup-v1/. 5-3-3-2. IO Latency Interface Files 5-4. PID 5-4-1. PID Interface Files - 5-5. Device - 5-6. RDMA - 5-6-1. RDMA Interface Files - 5-7. Misc - 5-7-1. perf_event + 5-5. Cpuset + 5.5-1. Cpuset Interface Files + 5-6. Device + 5-7. RDMA + 5-7-1. RDMA Interface Files + 5-8. Misc + 5-8-1. perf_event 5-N. Non-normative information 5-N-1. CPU controller root cgroup process behaviour 5-N-2. IO controller root cgroup process behaviour @@ -1610,6 +1612,176 @@ through fork() or clone(). These will return -EAGAIN if the creation of a new process would cause a cgroup policy to be violated. +Cpuset +------ + +The "cpuset" controller provides a mechanism for constraining +the CPU and memory node placement of tasks to only the resources +specified in the cpuset interface files in a task's current cgroup. +This is especially valuable on large NUMA systems where placing jobs +on properly sized subsets of the systems with careful processor and +memory placement to reduce cross-node memory access and contention +can improve overall system performance. + +The "cpuset" controller is hierarchical. That means the controller +cannot use CPUs or memory nodes not allowed in its parent. + + +Cpuset Interface Files +~~~~~~~~~~~~~~~~~~~~~~ + + cpuset.cpus + A read-write multiple values file which exists on non-root + cpuset-enabled cgroups. + + It lists the requested CPUs to be used by tasks within this + cgroup. The actual list of CPUs to be granted, however, is + subjected to constraints imposed by its parent and can differ + from the requested CPUs. + + The CPU numbers are comma-separated numbers or ranges. + For example: + + # cat cpuset.cpus + 0-4,6,8-10 + + An empty value indicates that the cgroup is using the same + setting as the nearest cgroup ancestor with a non-empty + "cpuset.cpus" or all the available CPUs if none is found. + + The value of "cpuset.cpus" stays constant until the next update + and won't be affected by any CPU hotplug events. + + cpuset.cpus.effective + A read-only multiple values file which exists on all + cpuset-enabled cgroups. + + It lists the onlined CPUs that are actually granted to this + cgroup by its parent. These CPUs are allowed to be used by + tasks within the current cgroup. + + If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows + all the CPUs from the parent cgroup that can be available to + be used by this cgroup. Otherwise, it should be a subset of + "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus" + can be granted. 
In this case, it will be treated just like an empty
+	"cpuset.cpus".
+
+	Its value will be affected by CPU hotplug events.
+
+  cpuset.mems
+	A read-write multiple values file which exists on non-root
+	cpuset-enabled cgroups.
+
+	It lists the requested memory nodes to be used by tasks within
+	this cgroup. The actual list of memory nodes granted, however,
+	is subjected to constraints imposed by its parent and can differ
+	from the requested memory nodes.
+
+	The memory node numbers are comma-separated numbers or ranges.
+	For example:
+
+	  # cat cpuset.mems
+	  0-1,3
+
+	An empty value indicates that the cgroup is using the same
+	setting as the nearest cgroup ancestor with a non-empty
+	"cpuset.mems" or all the available memory nodes if none
+	is found.
+
+	The value of "cpuset.mems" stays constant until the next update
+	and won't be affected by any memory node hotplug events.
+
+  cpuset.mems.effective
+	A read-only multiple values file which exists on all
+	cpuset-enabled cgroups.
+
+	It lists the onlined memory nodes that are actually granted to
+	this cgroup by its parent. These memory nodes are allowed to
+	be used by tasks within the current cgroup.
+
+	If "cpuset.mems" is empty, it shows all the memory nodes from the
+	parent cgroup that will be available to be used by this cgroup.
+	Otherwise, it should be a subset of "cpuset.mems" unless none of
+	the memory nodes listed in "cpuset.mems" can be granted. In this
+	case, it will be treated just like an empty "cpuset.mems".
+
+	Its value will be affected by memory node hotplug events.
+
+  cpuset.cpus.partition
+	A read-write single value file which exists on non-root
+	cpuset-enabled cgroups. This flag is owned by the parent cgroup
+	and is not delegatable.
+
+	It accepts only the following input values when written to.
+
+	  "root"   - a partition root
+	  "member" - a non-root member of a partition
+
+	When set to be a partition root, the current cgroup is the
+	root of a new partition or scheduling domain that comprises
+	itself and all its descendants except those that are separate
+	partition roots themselves and their descendants. The root
+	cgroup is always a partition root.
+
+	There are constraints on where a partition root can be set.
+	It can only be set in a cgroup if all the following conditions
+	are true.
+
+	1) The "cpuset.cpus" is not empty and the list of CPUs is
+	   exclusive, i.e. they are not shared by any of its siblings.
+	2) The parent cgroup is a partition root.
+	3) The "cpuset.cpus" is also a proper subset of the parent's
+	   "cpuset.cpus.effective".
+	4) There are no child cgroups with cpuset enabled. This
+	   eliminates corner cases that would have to be handled if such
+	   a condition were allowed.
+
+	Setting it to partition root will take the CPUs away from the
+	effective CPUs of the parent cgroup. Once it is set, this
+	file cannot be reverted back to "member" if there are any child
+	cgroups with cpuset enabled.
+
+	A parent partition cannot distribute all its CPUs to its
+	child partitions. There must be at least one CPU left in the
+	parent partition.
+
+	Once it becomes a partition root, changes to "cpuset.cpus" are
+	generally allowed as long as the first condition above remains
+	true, the change does not take away all the CPUs from the parent
+	partition and the new "cpuset.cpus" value is a superset of its
+	children's "cpuset.cpus" values.
+
+	Sometimes, external factors like changes to ancestors'
+	"cpuset.cpus" or CPU hotplug can cause the state of the partition
+	root to change.
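As a sketch, a partition root satisfying the conditions above could be set up
as follows (hypothetical cgroup name; assumes cgroup2 is mounted at
/sys/fs/cgroup and CPUs 2-3 are not used by any sibling cgroup)::

	echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
	mkdir /sys/fs/cgroup/isolated
	echo 2-3 > /sys/fs/cgroup/isolated/cpuset.cpus
	echo root > /sys/fs/cgroup/isolated/cpuset.cpus.partition
	cat /sys/fs/cgroup/isolated/cpuset.cpus.partition   # "root" if all conditions hold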
On read, the "cpuset.sched.partition" file + can show the following values. + + "member" Non-root member of a partition + "root" Partition root + "root invalid" Invalid partition root + + It is a partition root if the first 2 partition root conditions + above are true and at least one CPU from "cpuset.cpus" is + granted by the parent cgroup. + + A partition root can become invalid if none of CPUs requested + in "cpuset.cpus" can be granted by the parent cgroup or the + parent cgroup is no longer a partition root itself. In this + case, it is not a real partition even though the restriction + of the first partition root condition above will still apply. + The cpu affinity of all the tasks in the cgroup will then be + associated with CPUs in the nearest ancestor partition. + + An invalid partition root can be transitioned back to a + real partition root if at least one of the requested CPUs + can now be granted by its parent. In this case, the cpu + affinity of all the tasks in the formerly invalid partition + will be associated to the CPUs of the newly formed partition. + Changing the partition state of an invalid partition root to + "member" is always allowed even if child cpusets are present. + + Device controller ----------------- @@ -1879,8 +2051,10 @@ following two functions. wbc_init_bio(@wbc, @bio) Should be called for each bio carrying writeback data and - associates the bio with the inode's owner cgroup. Can be - called anytime between bio allocation and submission. + associates the bio with the inode's owner cgroup and the + corresponding request queue. This must be called after + a queue (device) has been associated with the bio and + before submission. wbc_account_io(@wbc, @page, @bytes) Should be called for each data segment being written out. @@ -1899,7 +2073,7 @@ the configuration, the bio may be executed at a lower priority and if the writeback session is holding shared resources, e.g. a journal entry, may lead to priority inversion. There is no one easy solution for the problem. Filesystems can try to work around specific problem -cases by skipping wbc_init_bio() or using bio_associate_blkcg() +cases by skipping wbc_init_bio() and using bio_associate_blkg() directly. diff --git a/Documentation/admin-guide/devices.rst b/Documentation/admin-guide/devices.rst index 7fadc05330dd..d41671aeaef0 100644 --- a/Documentation/admin-guide/devices.rst +++ b/Documentation/admin-guide/devices.rst @@ -1,3 +1,4 @@ +.. _admin_devices: Linux allocated devices (4.x+ version) ====================================== diff --git a/Documentation/admin-guide/dynamic-debug-howto.rst b/Documentation/admin-guide/dynamic-debug-howto.rst index fdf72429f801..252e5ef324e5 100644 --- a/Documentation/admin-guide/dynamic-debug-howto.rst +++ b/Documentation/admin-guide/dynamic-debug-howto.rst @@ -110,8 +110,8 @@ If your query set is big, you can batch them too:: ~# cat query-batch-file > /dynamic_debug/control -A another way is to use wildcard. The match rule support ``*`` (matches -zero or more characters) and ``?`` (matches exactly one character).For +Another way is to use wildcards. The match rule supports ``*`` (matches +zero or more characters) and ``?`` (matches exactly one character). For example, you can match all usb drivers:: ~# echo "file drivers/usb/* +p" > /dynamic_debug/control @@ -258,7 +258,7 @@ this boot parameter for debugging purposes. If ``foo`` module is not built-in, ``foo.dyndbg`` will still be processed at boot time, without effect, but will be reprocessed when module is -loaded later. 
``dyndbg_query=`` and bare ``dyndbg=`` are only processed at +loaded later. ``ddebug_query=`` and bare ``dyndbg=`` are only processed at boot. @@ -301,7 +301,7 @@ The ``dyndbg`` option is a "fake" module parameter, which means: For ``CONFIG_DYNAMIC_DEBUG`` kernels, any settings given at boot-time (or enabled by ``-DDEBUG`` flag during compilation) can be disabled later via -the sysfs interface if the debug messages are no longer needed:: +the debugfs interface if the debug messages are no longer needed:: echo "module module_name -p" > /dynamic_debug/control diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst index 965745d5fb9a..0a491676685e 100644 --- a/Documentation/admin-guide/index.rst +++ b/Documentation/admin-guide/index.rst @@ -76,6 +76,7 @@ configure specific aspects of kernel behavior to your liking. thunderbolt LSM/index mm/index + perf-security .. only:: subproject and html diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 60536779ed24..37e235be1d35 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -331,7 +331,7 @@ APC and your system crashes randomly. apic= [APIC,X86] Advanced Programmable Interrupt Controller - Change the output verbosity whilst booting + Change the output verbosity while booting Format: { quiet (default) | verbose | debug } Change the amount of debugging information output when initialising the APIC and IO-APIC components. @@ -486,10 +486,14 @@ cut the overhead, others just disable the usage. So only cgroup_disable=memory is actually worthy} - cgroup_no_v1= [KNL] Disable one, multiple, all cgroup controllers in v1 - Format: { controller[,controller...] | "all" } + cgroup_no_v1= [KNL] Disable cgroup controllers and named hierarchies in v1 + Format: { { controller | "all" | "named" } + [,{ controller | "all" | "named" }...] } Like cgroup_disable, but only applies to cgroup v1; the blacklisted controllers remain available in cgroup2. + "all" blacklists all controllers and "named" disables + named mounts. Specifying both "all" and "named" disables + all v1 hierarchies. cgroup.memory= [KNL] Pass options to the cgroup memory controller. Format: @@ -2833,7 +2837,7 @@ check bypass). With this option data leaks are possible in the system. - nospectre_v2 [X86] Disable all mitigations for the Spectre variant 2 + nospectre_v2 [X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2 (indirect branch prediction) vulnerability. System may allow data leaks with this option, which is equivalent to spectre_v2=off. diff --git a/Documentation/admin-guide/mm/concepts.rst b/Documentation/admin-guide/mm/concepts.rst index 291699c810d4..c2531b14bf46 100644 --- a/Documentation/admin-guide/mm/concepts.rst +++ b/Documentation/admin-guide/mm/concepts.rst @@ -4,13 +4,13 @@ Concepts overview ================= -The memory management in Linux is complex system that evolved over the -years and included more and more functionality to support variety of +The memory management in Linux is a complex system that evolved over the +years and included more and more functionality to support a variety of systems from MMU-less microcontrollers to supercomputers. The memory -management for systems without MMU is called ``nommu`` and it +management for systems without an MMU is called ``nommu`` and it definitely deserves a dedicated document, which hopefully will be eventually written. 
Yet, although some of the concepts are the same, -here we assume that MMU is available and CPU can translate a virtual +here we assume that an MMU is available and a CPU can translate a virtual address to a physical address. .. contents:: :local: @@ -21,10 +21,10 @@ Virtual Memory Primer The physical memory in a computer system is a limited resource and even for systems that support memory hotplug there is a hard limit on the amount of memory that can be installed. The physical memory is not -necessary contiguous, it might be accessible as a set of distinct +necessarily contiguous; it might be accessible as a set of distinct address ranges. Besides, different CPU architectures, and even -different implementations of the same architecture have different view -how these address ranges defined. +different implementations of the same architecture have different views +of how these address ranges are defined. All this makes dealing directly with physical memory quite complex and to avoid this complexity a concept of virtual memory was developed. @@ -48,8 +48,8 @@ appropriate kernel configuration option. Each physical memory page can be mapped as one or more virtual pages. These mappings are described by page tables that allow -translation from virtual address used by programs to real address in -the physical memory. The page tables organized hierarchically. +translation from a virtual address used by programs to the physical +memory address. The page tables are organized hierarchically. The tables at the lowest level of the hierarchy contain physical addresses of actual pages used by the software. The tables at higher @@ -121,8 +121,8 @@ Nodes Many multi-processor machines are NUMA - Non-Uniform Memory Access - systems. In such systems the memory is arranged into banks that have different access latency depending on the "distance" from the -processor. Each bank is referred as `node` and for each node Linux -constructs an independent memory management subsystem. A node has it's +processor. Each bank is referred to as a `node` and for each node Linux +constructs an independent memory management subsystem. A node has its own set of zones, lists of free and used pages and various statistics counters. You can find more details about NUMA in :ref:`Documentation/vm/numa.rst ` and in @@ -149,9 +149,9 @@ for program's stack and heap or by explicit calls to mmap(2) system call. Usually, the anonymous mappings only define virtual memory areas that the program is allowed to access. The read accesses will result in creation of a page table entry that references a special physical -page filled with zeroes. When the program performs a write, regular +page filled with zeroes. When the program performs a write, a regular physical page will be allocated to hold the written data. The page -will be marked dirty and if the kernel will decide to repurpose it, +will be marked dirty and if the kernel decides to repurpose it, the dirty page will be swapped out. Reclaim @@ -181,8 +181,8 @@ pressure. The process of freeing the reclaimable physical memory pages and repurposing them is called (surprise!) `reclaim`. Linux can reclaim pages either asynchronously or synchronously, depending on the state -of the system. When system is not loaded, most of the memory is free -and allocation request will be satisfied immediately from the free +of the system. When the system is not loaded, most of the memory is free +and allocation requests will be satisfied immediately from the free pages supply. 
As the load increases, the amount of the free pages goes down and when it reaches a certain threshold (high watermark), an allocation request will awaken the ``kswapd`` daemon. It will @@ -190,7 +190,7 @@ asynchronously scan memory pages and either just free them if the data they contain is available elsewhere, or evict to the backing storage device (remember those dirty pages?). As memory usage increases even more and reaches another threshold - min watermark - an allocation -will trigger the `direct reclaim`. In this case allocation is stalled +will trigger `direct reclaim`. In this case allocation is stalled until enough memory pages are reclaimed to satisfy the request. Compaction @@ -200,7 +200,7 @@ As the system runs, tasks allocate and free the memory and it becomes fragmented. Although with virtual memory it is possible to present scattered physical pages as virtually contiguous range, sometimes it is necessary to allocate large physically contiguous memory areas. Such -need may arise, for instance, when a device driver requires large +need may arise, for instance, when a device driver requires a large buffer for DMA, or when THP allocates a huge page. Memory `compaction` addresses the fragmentation issue. This mechanism moves occupied pages from the lower part of a memory zone to free pages in the upper part @@ -208,15 +208,16 @@ of the zone. When a compaction scan is finished free pages are grouped together at the beginning of the zone and allocations of large physically contiguous areas become possible. -Like reclaim, the compaction may happen asynchronously in ``kcompactd`` -daemon or synchronously as a result of memory allocation request. +Like reclaim, the compaction may happen asynchronously in the ``kcompactd`` +daemon or synchronously as a result of a memory allocation request. OOM killer ========== -It may happen, that on a loaded machine memory will be exhausted. When -the kernel detects that the system runs out of memory (OOM) it invokes -`OOM killer`. Its mission is simple: all it has to do is to select a -task to sacrifice for the sake of the overall system health. The -selected task is killed in a hope that after it exits enough memory -will be freed to continue normal operation. +It is possible that on a loaded machine memory will be exhausted and the +kernel will be unable to reclaim enough memory to continue to operate. In +order to save the rest of the system, it invokes the `OOM killer`. + +The `OOM killer` selects a task to sacrifice for the sake of the overall +system health. The selected task is killed in a hope that after it exits +enough memory will be freed to continue normal operation. diff --git a/Documentation/admin-guide/perf-security.rst b/Documentation/admin-guide/perf-security.rst new file mode 100644 index 000000000000..f73ebfe9bfe2 --- /dev/null +++ b/Documentation/admin-guide/perf-security.rst @@ -0,0 +1,97 @@ +.. _perf_security: + +Perf Events and tool security +============================= + +Overview +-------- + +Usage of Performance Counters for Linux (perf_events) [1]_ , [2]_ , [3]_ can +impose a considerable risk of leaking sensitive data accessed by monitored +processes. The data leakage is possible both in scenarios of direct usage of +perf_events system call API [2]_ and over data files generated by Perf tool user +mode utility (Perf) [3]_ , [4]_ . The risk depends on the nature of data that +perf_events performance monitoring units (PMU) [2]_ collect and expose for +performance analysis. 
+Having said that, perf_events/Perf performance monitoring is subject to
+security access control management [5]_ .
+
+perf_events/Perf access control
+-------------------------------
+
+To perform security checks, the Linux implementation splits processes into two
+categories [6]_ : a) privileged processes (whose effective user ID is 0,
+referred to as superuser or root), and b) unprivileged processes (whose
+effective UID is nonzero). Privileged processes bypass all kernel security
+permission checks so perf_events performance monitoring is fully available to
+privileged processes without access, scope and resource restrictions.
+
+Unprivileged processes are subject to a full security permission check based on
+the process's credentials [5]_ (usually: effective UID, effective GID, and
+supplementary group list).
+
+Linux divides the privileges traditionally associated with superuser into
+distinct units, known as capabilities [6]_ , which can be independently enabled
+and disabled on a per-thread basis for processes and files of unprivileged
+users.
+
+Unprivileged processes with the CAP_SYS_ADMIN capability enabled are treated as
+privileged processes with respect to perf_events performance monitoring and
+bypass *scope* permissions checks in the kernel.
+
+Unprivileged processes using the perf_events system call API are also subject
+to a PTRACE_MODE_READ_REALCREDS ptrace access mode check [7]_ , whose outcome
+determines whether monitoring is permitted. So unprivileged processes provided
+with the CAP_SYS_PTRACE capability are effectively permitted to pass the check.
+
+Other capabilities granted to unprivileged processes can effectively enable
+capturing of additional data required for later performance analysis of
+monitored processes or a system. For example, the CAP_SYSLOG capability permits
+reading kernel space memory addresses from the /proc/kallsyms file.
+
+perf_events/Perf unprivileged users
+-----------------------------------
+
+perf_events/Perf *scope* and *access* control for unprivileged processes is
+governed by the perf_event_paranoid [2]_ setting:
+
+-1:
+     Impose no *scope* and *access* restrictions on using perf_events
+     performance monitoring. The per-user per-cpu perf_event_mlock_kb [2]_
+     locking limit is ignored when allocating memory buffers for storing
+     performance data. This is the least secure mode since the allowed
+     monitored *scope* is maximized and no perf_events specific limits are
+     imposed on *resources* allocated for performance monitoring.
+
+>=0:
+     *scope* includes per-process and system wide performance monitoring
+     but excludes raw tracepoints and ftrace function tracepoints monitoring.
+     CPU and system events that happen when executing either in user or
+     in kernel space can be monitored and captured for later analysis.
+     The per-user per-cpu perf_event_mlock_kb locking limit is imposed but
+     ignored for unprivileged processes with the CAP_IPC_LOCK [6]_ capability.
+
+>=1:
+     *scope* includes per-process performance monitoring only and excludes
+     system wide performance monitoring. CPU and system events that happen
+     when executing either in user or in kernel space can be monitored and
+     captured for later analysis. The per-user per-cpu perf_event_mlock_kb
+     locking limit is imposed but ignored for unprivileged processes with
+     the CAP_IPC_LOCK capability.
+
+>=2:
+     *scope* includes per-process performance monitoring only. CPU and system
+     events that happen when executing in user space only can be monitored
+     and captured for later analysis.
Per-user per-cpu perf_event_mlock_kb + locking limit is imposed but ignored for unprivileged processes with + CAP_IPC_LOCK capability. + +Bibliography +------------ + +.. [1] ``_ +.. [2] ``_ +.. [3] ``_ +.. [4] ``_ +.. [5] ``_ +.. [6] ``_ +.. [7] ``_ + diff --git a/Documentation/admin-guide/ras.rst b/Documentation/admin-guide/ras.rst index 197896718f81..c7495e42e6f4 100644 --- a/Documentation/admin-guide/ras.rst +++ b/Documentation/admin-guide/ras.rst @@ -54,7 +54,7 @@ those errors are correctable. Types of errors --------------- -Most mechanisms used on modern systems use use technologies like Hamming +Most mechanisms used on modern systems use technologies like Hamming Codes that allow error correction when the number of errors on a bit packet is below a threshold. If the number of errors is above, those mechanisms can indicate with a high degree of confidence that an error happened, but diff --git a/Documentation/admin-guide/security-bugs.rst b/Documentation/admin-guide/security-bugs.rst index 30187d49dc2c..dcd6c93c7aac 100644 --- a/Documentation/admin-guide/security-bugs.rst +++ b/Documentation/admin-guide/security-bugs.rst @@ -44,7 +44,7 @@ only valid reason for deferring the publication of a fix is to accommodate the logistics of QA and large scale rollouts which require release coordination. -Whilst embargoed information may be shared with trusted individuals in +While embargoed information may be shared with trusted individuals in order to develop a fix, such information will not be published alongside the fix or on any other disclosure channel without the permission of the reporter. This includes but is not limited to the original bug report diff --git a/Documentation/admin-guide/thunderbolt.rst b/Documentation/admin-guide/thunderbolt.rst index 35fccba6a9a6..898ad78f3cc7 100644 --- a/Documentation/admin-guide/thunderbolt.rst +++ b/Documentation/admin-guide/thunderbolt.rst @@ -133,6 +133,26 @@ If the user still wants to connect the device they can either approve the device without a key or write a new key and write 1 to the ``authorized`` file to get the new key stored on the device NVM. +DMA protection utilizing IOMMU +------------------------------ +Recent systems from 2018 and forward with Thunderbolt ports may natively +support IOMMU. This means that Thunderbolt security is handled by an IOMMU +so connected devices cannot access memory regions outside of what is +allocated for them by drivers. When Linux is running on such system it +automatically enables IOMMU if not enabled by the user already. These +systems can be identified by reading ``1`` from +``/sys/bus/thunderbolt/devices/domainX/iommu_dma_protection`` attribute. + +The driver does not do anything special in this case but because DMA +protection is handled by the IOMMU, security levels (if set) are +redundant. For this reason some systems ship with security level set to +``none``. 
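Whether a given machine reports this native DMA protection can be checked
before relying on it (a sketch; the domain number is hypothetical)::

	cat /sys/bus/thunderbolt/devices/domain0/iommu_dma_protection
	# prints 1 if the IOMMU is used for DMA protection, 0 otherwise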
Other systems have security level set to ``user`` in order to +support downgrade to older OS, so users who want to automatically +authorize devices when IOMMU DMA protection is enabled can use the +following ``udev`` rule:: + + ACTION=="add", SUBSYSTEM=="thunderbolt", ATTRS{iommu_dma_protection}=="1", ATTR{authorized}=="0", ATTR{authorized}="1" + Upgrading NVM on Thunderbolt device or host ------------------------------------------- Since most of the functionality is handled in firmware running on a diff --git a/Documentation/arm/Booting b/Documentation/arm/Booting index 259f00af3ab3..f1f965ce93d6 100644 --- a/Documentation/arm/Booting +++ b/Documentation/arm/Booting @@ -126,7 +126,7 @@ tagged list. The boot loader must pass at a minimum the size and location of the system memory, and the root filesystem location. The dtb must be placed in a region of memory where the kernel decompressor will not -overwrite it, whilst remaining within the region which will be covered +overwrite it, while remaining within the region which will be covered by the kernel's low-memory mapping. A safe location is just above the 128MiB boundary from start of RAM. diff --git a/Documentation/arm/Samsung-S3C24XX/GPIO.txt b/Documentation/arm/Samsung-S3C24XX/GPIO.txt index 0ebd7e2244d0..e8f918b96123 100644 --- a/Documentation/arm/Samsung-S3C24XX/GPIO.txt +++ b/Documentation/arm/Samsung-S3C24XX/GPIO.txt @@ -55,7 +55,7 @@ out s3c2410 API, then here are some notes on the process. as they have the same arguments, and can either take the pin specific values, or the more generic special-function-number arguments. -3) s3c2410_gpio_pullup() changes have the problem that whilst the +3) s3c2410_gpio_pullup() changes have the problem that while the s3c2410_gpio_pullup(x, 1) can be easily translated to the s3c_gpio_setpull(x, S3C_GPIO_PULL_NONE), the s3c2410_gpio_pullup(x, 0) are not so easy. diff --git a/Documentation/arm/Samsung-S3C24XX/Overview.txt b/Documentation/arm/Samsung-S3C24XX/Overview.txt index 359587b2367b..00d3c3141e21 100644 --- a/Documentation/arm/Samsung-S3C24XX/Overview.txt +++ b/Documentation/arm/Samsung-S3C24XX/Overview.txt @@ -17,7 +17,7 @@ Introduction versions. The S3C2416 and S3C2450 devices are very similar and S3C2450 support is - included under the arch/arm/mach-s3c2416 directory. Note, whilst core + included under the arch/arm/mach-s3c2416 directory. Note, while core support for these SoCs is in, work on some of the extra peripherals and extra interrupts is still ongoing. diff --git a/Documentation/arm/Samsung-S3C24XX/Suspend.txt b/Documentation/arm/Samsung-S3C24XX/Suspend.txt index 1ca63b3e5635..cb4f0c0cdf9d 100644 --- a/Documentation/arm/Samsung-S3C24XX/Suspend.txt +++ b/Documentation/arm/Samsung-S3C24XX/Suspend.txt @@ -87,7 +87,7 @@ Debugging suspending, which means that use of printascii() or similar direct access to the UARTs will cause the debug to stop. - 2) Whilst the pm code itself will attempt to re-enable the UART clocks, + 2) While the pm code itself will attempt to re-enable the UART clocks, care should be taken that any external clock sources that the UARTs rely on are still enabled at that point. diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt index 207eca58efaa..ac18b488cb5e 100644 --- a/Documentation/block/biodoc.txt +++ b/Documentation/block/biodoc.txt @@ -65,7 +65,6 @@ Description of Contents: 3.2.3 I/O completion 3.2.4 Implications for drivers that do not interpret bios (don't handle multiple segments) - 3.2.5 Request command tagging 3.3 I/O submission 4. 
The I/O scheduler 5. Scalability related changes @@ -708,93 +707,6 @@ is crossed on completion of a transfer. (The end*request* functions should be used if only if the request has come down from block/bio path, not for direct access requests which only specify rq->buffer without a valid rq->bio) -3.2.5 Generic request command tagging - -3.2.5.1 Tag helpers - -Block now offers some simple generic functionality to help support command -queueing (typically known as tagged command queueing), ie manage more than -one outstanding command on a queue at any given time. - - blk_queue_init_tags(struct request_queue *q, int depth) - - Initialize internal command tagging structures for a maximum - depth of 'depth'. - - blk_queue_free_tags((struct request_queue *q) - - Teardown tag info associated with the queue. This will be done - automatically by block if blk_queue_cleanup() is called on a queue - that is using tagging. - -The above are initialization and exit management, the main helpers during -normal operations are: - - blk_queue_start_tag(struct request_queue *q, struct request *rq) - - Start tagged operation for this request. A free tag number between - 0 and 'depth' is assigned to the request (rq->tag holds this number), - and 'rq' is added to the internal tag management. If the maximum depth - for this queue is already achieved (or if the tag wasn't started for - some other reason), 1 is returned. Otherwise 0 is returned. - - blk_queue_end_tag(struct request_queue *q, struct request *rq) - - End tagged operation on this request. 'rq' is removed from the internal - book keeping structures. - -To minimize struct request and queue overhead, the tag helpers utilize some -of the same request members that are used for normal request queue management. -This means that a request cannot both be an active tag and be on the queue -list at the same time. blk_queue_start_tag() will remove the request, but -the driver must remember to call blk_queue_end_tag() before signalling -completion of the request to the block layer. This means ending tag -operations before calling end_that_request_last()! For an example of a user -of these helpers, see the IDE tagged command queueing support. - -3.2.5.2 Tag info - -Some block functions exist to query current tag status or to go from a -tag number to the associated request. These are, in no particular order: - - blk_queue_tagged(q) - - Returns 1 if the queue 'q' is using tagging, 0 if not. - - blk_queue_tag_request(q, tag) - - Returns a pointer to the request associated with tag 'tag'. - - blk_queue_tag_depth(q) - - Return current queue depth. - - blk_queue_tag_queue(q) - - Returns 1 if the queue can accept a new queued command, 0 if we are - at the maximum depth already. - - blk_queue_rq_tagged(rq) - - Returns 1 if the request 'rq' is tagged. - -3.2.5.2 Internal structure - -Internally, block manages tags in the blk_queue_tag structure: - - struct blk_queue_tag { - struct request **tag_index; /* array or pointers to rq */ - unsigned long *tag_map; /* bitmap of free tags */ - struct list_head busy_list; /* fifo list of busy tags */ - int busy; /* queue depth */ - int max_depth; /* max queue depth */ - }; - -Most of the above is simple and straight forward, however busy_list may need -a bit of explaining. Normally we don't care too much about request ordering, -but in the event of any barrier requests in the tag queue we need to ensure -that requests are restarted in the order they were queue. 
- 3.3 I/O Submission The routine submit_bio() is used to submit a single io. Higher level i/o diff --git a/Documentation/block/cfq-iosched.txt b/Documentation/block/cfq-iosched.txt deleted file mode 100644 index 895bd3813115..000000000000 --- a/Documentation/block/cfq-iosched.txt +++ /dev/null @@ -1,291 +0,0 @@ -CFQ (Complete Fairness Queueing) -=============================== - -The main aim of CFQ scheduler is to provide a fair allocation of the disk -I/O bandwidth for all the processes which requests an I/O operation. - -CFQ maintains the per process queue for the processes which request I/O -operation(synchronous requests). In case of asynchronous requests, all the -requests from all the processes are batched together according to their -process's I/O priority. - -CFQ ioscheduler tunables -======================== - -slice_idle ----------- -This specifies how long CFQ should idle for next request on certain cfq queues -(for sequential workloads) and service trees (for random workloads) before -queue is expired and CFQ selects next queue to dispatch from. - -By default slice_idle is a non-zero value. That means by default we idle on -queues/service trees. This can be very helpful on highly seeky media like -single spindle SATA/SAS disks where we can cut down on overall number of -seeks and see improved throughput. - -Setting slice_idle to 0 will remove all the idling on queues/service tree -level and one should see an overall improved throughput on faster storage -devices like multiple SATA/SAS disks in hardware RAID configuration. The down -side is that isolation provided from WRITES also goes down and notion of -IO priority becomes weaker. - -So depending on storage and workload, it might be useful to set slice_idle=0. -In general I think for SATA/SAS disks and software RAID of SATA/SAS disks -keeping slice_idle enabled should be useful. For any configurations where -there are multiple spindles behind single LUN (Host based hardware RAID -controller or for storage arrays), setting slice_idle=0 might end up in better -throughput and acceptable latencies. - -back_seek_max -------------- -This specifies, given in Kbytes, the maximum "distance" for backward seeking. -The distance is the amount of space from the current head location to the -sectors that are backward in terms of distance. - -This parameter allows the scheduler to anticipate requests in the "backward" -direction and consider them as being the "next" if they are within this -distance from the current head location. - -back_seek_penalty ------------------ -This parameter is used to compute the cost of backward seeking. If the -backward distance of request is just 1/back_seek_penalty from a "front" -request, then the seeking cost of two requests is considered equivalent. - -So scheduler will not bias toward one or the other request (otherwise scheduler -will bias toward front request). Default value of back_seek_penalty is 2. - -fifo_expire_async ------------------ -This parameter is used to set the timeout of asynchronous requests. Default -value of this is 248ms. - -fifo_expire_sync ----------------- -This parameter is used to set the timeout of synchronous requests. Default -value of this is 124ms. In case to favor synchronous requests over asynchronous -one, this value should be decreased relative to fifo_expire_async. - -group_idle ------------ -This parameter forces idling at the CFQ group level instead of CFQ -queue level. 
This was introduced after a bottleneck was observed -in higher end storage due to idle on sequential queue and allow dispatch -from a single queue. The idea with this parameter is that it can be run with -slice_idle=0 and group_idle=8, so that idling does not happen on individual -queues in the group but happens overall on the group and thus still keeps the -IO controller working. -Not idling on individual queues in the group will dispatch requests from -multiple queues in the group at the same time and achieve higher throughput -on higher end storage. - -Default value for this parameter is 8ms. - -low_latency ------------ -This parameter is used to enable/disable the low latency mode of the CFQ -scheduler. If enabled, CFQ tries to recompute the slice time for each process -based on the target_latency set for the system. This favors fairness over -throughput. Disabling low latency (setting it to 0) ignores target latency, -allowing each process in the system to get a full time slice. - -By default low latency mode is enabled. - -target_latency --------------- -This parameter is used to calculate the time slice for a process if cfq's -latency mode is enabled. It will ensure that sync requests have an estimated -latency. But if sequential workload is higher(e.g. sequential read), -then to meet the latency constraints, throughput may decrease because of less -time for each process to issue I/O request before the cfq queue is switched. - -Though this can be overcome by disabling the latency_mode, it may increase -the read latency for some applications. This parameter allows for changing -target_latency through the sysfs interface which can provide the balanced -throughput and read latency. - -Default value for target_latency is 300ms. - -slice_async ------------ -This parameter is same as of slice_sync but for asynchronous queue. The -default value is 40ms. - -slice_async_rq --------------- -This parameter is used to limit the dispatching of asynchronous request to -device request queue in queue's slice time. The maximum number of request that -are allowed to be dispatched also depends upon the io priority. Default value -for this is 2. - -slice_sync ----------- -When a queue is selected for execution, the queues IO requests are only -executed for a certain amount of time(time_slice) before switching to another -queue. This parameter is used to calculate the time slice of synchronous -queue. - -time_slice is computed using the below equation:- -time_slice = slice_sync + (slice_sync/5 * (4 - prio)). To increase the -time_slice of synchronous queue, increase the value of slice_sync. Default -value is 100ms. - -quantum -------- -This specifies the number of request dispatched to the device queue. In a -queue's time slice, a request will not be dispatched if the number of request -in the device exceeds this parameter. This parameter is used for synchronous -request. - -In case of storage with several disk, this setting can limit the parallel -processing of request. Therefore, increasing the value can improve the -performance although this can cause the latency of some I/O to increase due -to more number of requests. - -CFQ Group scheduling -==================== - -CFQ supports blkio cgroup and has "blkio." prefixed files in each -blkio cgroup directory. It is weight-based and there are four knobs -for configuration - weight[_device] and leaf_weight[_device]. 
-Internal cgroup nodes (the ones with children) can also have tasks in -them, so the former two configure how much proportion the cgroup as a -whole is entitled to at its parent's level while the latter two -configure how much proportion the tasks in the cgroup have compared to -its direct children. - -Another way to think about it is assuming that each internal node has -an implicit leaf child node which hosts all the tasks whose weight is -configured by leaf_weight[_device]. Let's assume a blkio hierarchy -composed of five cgroups - root, A, B, AA and AB - with the following -weights where the names represent the hierarchy. - - weight leaf_weight - root : 125 125 - A : 500 750 - B : 250 500 - AA : 500 500 - AB : 1000 500 - -root never has a parent making its weight is meaningless. For backward -compatibility, weight is always kept in sync with leaf_weight. B, AA -and AB have no child and thus its tasks have no children cgroup to -compete with. They always get 100% of what the cgroup won at the -parent level. Considering only the weights which matter, the hierarchy -looks like the following. - - root - / | \ - A B leaf - 500 250 125 - / | \ - AA AB leaf - 500 1000 750 - -If all cgroups have active IOs and competing with each other, disk -time will be distributed like the following. - -Distribution below root. The total active weight at this level is -A:500 + B:250 + C:125 = 875. - - root-leaf : 125 / 875 =~ 14% - A : 500 / 875 =~ 57% - B(-leaf) : 250 / 875 =~ 28% - -A has children and further distributes its 57% among the children and -the implicit leaf node. The total active weight at this level is -AA:500 + AB:1000 + A-leaf:750 = 2250. - - A-leaf : ( 750 / 2250) * A =~ 19% - AA(-leaf) : ( 500 / 2250) * A =~ 12% - AB(-leaf) : (1000 / 2250) * A =~ 25% - -CFQ IOPS Mode for group scheduling -=================================== -Basic CFQ design is to provide priority based time slices. Higher priority -process gets bigger time slice and lower priority process gets smaller time -slice. Measuring time becomes harder if storage is fast and supports NCQ and -it would be better to dispatch multiple requests from multiple cfq queues in -request queue at a time. In such scenario, it is not possible to measure time -consumed by single queue accurately. - -What is possible though is to measure number of requests dispatched from a -single queue and also allow dispatch from multiple cfq queue at the same time. -This effectively becomes the fairness in terms of IOPS (IO operations per -second). - -If one sets slice_idle=0 and if storage supports NCQ, CFQ internally switches -to IOPS mode and starts providing fairness in terms of number of requests -dispatched. Note that this mode switching takes effect only for group -scheduling. For non-cgroup users nothing should change. - -CFQ IO scheduler Idling Theory -=============================== -Idling on a queue is primarily about waiting for the next request to come -on same queue after completion of a request. In this process CFQ will not -dispatch requests from other cfq queues even if requests are pending there. - -The rationale behind idling is that it can cut down on number of seeks -on rotational media. For example, if a process is doing dependent -sequential reads (next read will come on only after completion of previous -one), then not dispatching request from other queue should help as we -did not move the disk head and kept on dispatching sequential IO from -one queue. - -CFQ has following service trees and various queues are put on these trees. 
- - sync-idle sync-noidle async - -All cfq queues doing synchronous sequential IO go on to sync-idle tree. -On this tree we idle on each queue individually. - -All synchronous non-sequential queues go on sync-noidle tree. Also any -synchronous write request which is not marked with REQ_IDLE goes on this -service tree. On this tree we do not idle on individual queues instead idle -on the whole group of queues or the tree. So if there are 4 queues waiting -for IO to dispatch we will idle only once last queue has dispatched the IO -and there is no more IO on this service tree. - -All async writes go on async service tree. There is no idling on async -queues. - -CFQ has some optimizations for SSDs and if it detects a non-rotational -media which can support higher queue depth (multiple requests at in -flight at a time), then it cuts down on idling of individual queues and -all the queues move to sync-noidle tree and only tree idle remains. This -tree idling provides isolation with buffered write queues on async tree. - -FAQ -=== -Q1. Why to idle at all on queues not marked with REQ_IDLE. - -A1. We only do tree idle (all queues on sync-noidle tree) on queues not marked - with REQ_IDLE. This helps in providing isolation with all the sync-idle - queues. Otherwise in presence of many sequential readers, other - synchronous IO might not get fair share of disk. - - For example, if there are 10 sequential readers doing IO and they get - 100ms each. If a !REQ_IDLE request comes in, it will be scheduled - roughly after 1 second. If after completion of !REQ_IDLE request we - do not idle, and after a couple of milli seconds a another !REQ_IDLE - request comes in, again it will be scheduled after 1second. Repeat it - and notice how a workload can lose its disk share and suffer due to - multiple sequential readers. - - fsync can generate dependent IO where bunch of data is written in the - context of fsync, and later some journaling data is written. Journaling - data comes in only after fsync has finished its IO (atleast for ext4 - that seemed to be the case). Now if one decides not to idle on fsync - thread due to !REQ_IDLE, then next journaling write will not get - scheduled for another second. A process doing small fsync, will suffer - badly in presence of multiple sequential readers. - - Hence doing tree idling on threads using !REQ_IDLE flag on requests - provides isolation from multiple sequential readers and at the same - time we do not idle on individual threads. - -Q2. When to specify REQ_IDLE -A2. I would think whenever one is doing synchronous write and expecting - more writes to be dispatched from same context soon, should be able - to specify REQ_IDLE on writes and that probably should work well for - most of the cases. diff --git a/Documentation/block/queue-sysfs.txt b/Documentation/block/queue-sysfs.txt index 2c1e67058fd3..39e286d7afc9 100644 --- a/Documentation/block/queue-sysfs.txt +++ b/Documentation/block/queue-sysfs.txt @@ -64,7 +64,7 @@ guess, the kernel will put the process issuing IO to sleep for an amount of time, before entering a classic poll loop. This mode might be a little slower than pure classic polling, but it will be more efficient. If set to a value larger than 0, the kernel will put the process issuing -IO to sleep for this amont of microseconds before entering classic +IO to sleep for this amount of microseconds before entering classic polling. iostats (RW) @@ -194,4 +194,31 @@ blk-throttle makes decision based on the samplings. 
Lower time means cgroups have smoother throughput, but higher CPU overhead. This exists only when CONFIG_BLK_DEV_THROTTLING_LOW is enabled. +zoned (RO) +---------- +This indicates if the device is a zoned block device and the zone model of the +device if it is indeed zoned. The possible values indicated by zoned are +"none" for regular block devices and "host-aware" or "host-managed" for zoned +block devices. The characteristics of host-aware and host-managed zoned block +devices are described in the ZBC (Zoned Block Commands) and ZAC +(Zoned Device ATA Command Set) standards. These standards also define the +"drive-managed" zone model. However, since drive-managed zoned block devices +do not support zone commands, they will be treated as regular block devices +and zoned will report "none". + +nr_zones (RO) +------------- +For zoned block devices (zoned attribute indicating "host-managed" or +"host-aware"), this indicates the total number of zones of the device. +This is always 0 for regular block devices. + +chunk_sectors (RO) +------------------ +This has different meaning depending on the type of the block device. +For a RAID device (dm-raid), chunk_sectors indicates the size in 512B sectors +of the RAID volume stripe segment. For a zoned block device, either host-aware +or host-managed, chunk_sectors indicates the size in 512B sectors of the zones +of the device, with the eventual exception of the last zone of the device which +may be smaller. + Jens Axboe , February 2009
diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt index 3c1b5ab54bc0..436c5e98e1b6 100644 --- a/Documentation/blockdev/zram.txt +++ b/Documentation/blockdev/zram.txt @@ -164,11 +164,14 @@ reset WO trigger device reset mem_used_max WO reset the `mem_used_max' counter (see later) mem_limit WO specifies the maximum amount of memory ZRAM can use to store the compressed data +writeback_limit WO specifies the maximum amount of write IO zram can + write out to backing device in 4KB units max_comp_streams RW the number of possible concurrent compress operations comp_algorithm RW show and change the compression algorithm compact WO trigger memory compaction debug_stat RO this file is used for zram debugging purposes backing_dev RW set up backend storage for zram to write out +idle WO mark allocated slots as idle User space is advised to use the following files to read the device statistics. @@ -220,6 +223,17 @@ line of text and contains the following stats separated by whitespace: pages_compacted the number of pages freed during compaction huge_pages the number of incompressible pages +File /sys/block/zram/bd_stat + +The stat file represents the device's backing device statistics. It consists of +a single line of text and contains the following stats separated by whitespace: + bd_count size of data written in backing device. + Unit: 4K bytes + bd_reads the number of reads from backing device + Unit: 4K bytes + bd_writes the number of writes to backing device + Unit: 4K bytes + 9) Deactivate: swapoff /dev/zram0 umount /dev/zram1 @@ -237,11 +251,60 @@ line of text and contains the following stats separated by whitespace: = writeback -With incompressible pages, there is no memory saving with zram. -Instead, with CONFIG_ZRAM_WRITEBACK, zram can write incompressible page +With CONFIG_ZRAM_WRITEBACK, zram can write idle/incompressible page to backing storage rather than keeping it in memory. -User should set up backing device via /sys/block/zramX/backing_dev -before disksize setting.
+To use the feature, the admin should set up the backing device via + + "echo /dev/sda5 > /sys/block/zramX/backing_dev" + +before disksize setting. Only a partition is supported at the moment. +If the admin wants to use incompressible page writeback, they can trigger it via + + "echo huge > /sys/block/zramX/writeback" + +To use idle page writeback, the user first needs to declare zram pages +as idle. + + "echo all > /sys/block/zramX/idle" + +From now on, all pages on zram are idle pages. The idle mark +is removed when a block is accessed. +IOW, unless there is an access request, those pages remain idle pages. + +The admin can request writeback of those idle pages at the right time via + + "echo idle > /sys/block/zramX/writeback" + +With this command, zram writes back idle pages from memory to the storage. + +Lots of write IO to a flash device can potentially cause flash wearout, +so the admin may need to limit writeback +to guarantee storage health for the entire product life. +To address this concern, zram supports "writeback_limit". +The default value of "writeback_limit" is 0, meaning writeback is not limited. +If the admin wants to measure the writeback count over a certain +period, it can be read from the 3rd column of /sys/block/zram0/bd_stat. + +If the admin wants to limit writeback to 400MB per day, it can be done +like below. + + MB_SHIFT=20 + 4K_SHIFT=12 + echo $((400<<MB_SHIFT>>4K_SHIFT)) > \ + /sys/block/zram0/writeback_limit + +If the admin wants to allow further writeback again, it can be done like below + + echo 0 > /sys/block/zram0/writeback_limit + +If the admin wants to see the remaining writeback budget since the limit was set, + + cat /sys/block/zram0/writeback_limit + +The writeback_limit count is reset whenever zram is reset (e.g., +system reboot, echo 1 > /sys/block/zramX/reset), so it is the user's job +to track how much writeback has happened before the reset if extra writeback +budget should be carried over into the next setting. = memory tracking @@ -251,16 +314,17 @@ pages of the process with pagemap. If you enable the feature, you can see the block state via /sys/kernel/debug/zram/zram0/block_state. The output is as follows, - 300 75.033841 .wh - 301 63.806904 s.. - 302 63.806919 ..h + 300 75.033841 .wh. + 301 63.806904 s... + 302 63.806919 ..hi First column is zram's block index. Second column is access time since the system was booted. Third column is state of the block. (s: same page w: written page to backing store -h: huge page) +h: huge page +i: idle page) First line of above example says 300th block is accessed at 75.033841sec and the block's state is huge so it is written back to the backing
diff --git a/Documentation/core-api/assoc_array.rst b/Documentation/core-api/assoc_array.rst index 8231b915c939..792bbf9939e1 100644 --- a/Documentation/core-api/assoc_array.rst +++ b/Documentation/core-api/assoc_array.rst @@ -34,7 +34,7 @@ properties: 8. The array can be iterated over. The objects will not necessarily come out in key order. -9. The array can be iterated over whilst it is being modified, provided the +9. The array can be iterated over while it is being modified, provided the RCU readlock is being held by the iterator. Note, however, under these circumstances, some objects may be seen more than once. If this is a problem, the iterator should lock against modification. Objects will not @@ -42,7 +42,7 @@ properties: 10. Objects in the array can be looked up by means of their index key. -11.
Objects can be looked up while the array is being modified, provided the RCU readlock is being held by the thread doing the look up. The implementation uses a tree of 16-pointer nodes internally that are indexed @@ -273,7 +273,7 @@ The function will return ``0`` if successful and ``-ENOMEM`` if there wasn't enough memory. It is possible for other threads to iterate over or search the array under -the RCU read lock whilst this function is in progress. The caller should +the RCU read lock while this function is in progress. The caller should lock exclusively against other modifiers of the array. diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst index f8bb9aa120c4..8954a88ff5b7 100644 --- a/Documentation/core-api/memory-allocation.rst +++ b/Documentation/core-api/memory-allocation.rst @@ -1,3 +1,5 @@ +.. _memory-allocation: + ======================= Memory Allocation Guide ======================= diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst index 5ce1ec1dd066..aa8e54b85221 100644 --- a/Documentation/core-api/mm-api.rst +++ b/Documentation/core-api/mm-api.rst @@ -46,11 +46,20 @@ The Slab Cache .. kernel-doc:: mm/slab.c :export: +.. kernel-doc:: mm/slab_common.c + :export: + .. kernel-doc:: mm/util.c :functions: kfree_const kvmalloc_node kvfree -More Memory Management Functions -================================ +Virtually Contiguous Mappings +============================= + +.. kernel-doc:: mm/vmalloc.c + :export: + +File Mapping and Page Cache +=========================== .. kernel-doc:: mm/readahead.c :export: @@ -58,23 +67,28 @@ More Memory Management Functions .. kernel-doc:: mm/filemap.c :export: -.. kernel-doc:: mm/memory.c +.. kernel-doc:: mm/page-writeback.c :export: -.. kernel-doc:: mm/vmalloc.c +.. kernel-doc:: mm/truncate.c :export: -.. kernel-doc:: mm/page_alloc.c - :internal: +Memory pools +============ .. kernel-doc:: mm/mempool.c :export: +DMA pools +========= + .. kernel-doc:: mm/dmapool.c :export: -.. kernel-doc:: mm/page-writeback.c - :export: +More Memory Management Functions +================================ -.. kernel-doc:: mm/truncate.c +.. kernel-doc:: mm/memory.c :export: + +.. kernel-doc:: mm/page_alloc.c diff --git a/Documentation/crypto/api.rst b/Documentation/crypto/api.rst index 2e519193ab4a..b91b31736df8 100644 --- a/Documentation/crypto/api.rst +++ b/Documentation/crypto/api.rst @@ -1,15 +1,6 @@ Programming Interface ===================== -Please note that the kernel crypto API contains the AEAD givcrypt API -(crypto_aead_giv\* and aead_givcrypt\* function calls in -include/crypto/aead.h). This API is obsolete and will be removed in the -future. To obtain the functionality of an AEAD cipher with internal IV -generation, use the IV generator as a regular cipher. For example, -rfc4106(gcm(aes)) is the AEAD cipher with external IV generation and -seqniv(rfc4106(gcm(aes))) implies that the kernel crypto API generates -the IV. Different IV generators are available. - .. 
class:: toc-title Table of contents diff --git a/Documentation/crypto/architecture.rst b/Documentation/crypto/architecture.rst index ca2d09b991f5..ee8ff0762d7f 100644 --- a/Documentation/crypto/architecture.rst +++ b/Documentation/crypto/architecture.rst @@ -157,10 +157,6 @@ applicable to a cipher, it is not displayed: - rng for random number generator - - givcipher for cipher with associated IV generator (see the geniv - entry below for the specification of the IV generator type used by - the cipher implementation) - - kpp for a Key-agreement Protocol Primitive (KPP) cipher such as an ECDH or DH implementation @@ -174,16 +170,7 @@ applicable to a cipher, it is not displayed: - digestsize: output size of the message digest -- geniv: IV generation type: - - - eseqiv for encrypted sequence number based IV generation - - - seqiv for sequence number based IV generation - - - chainiv for chain iv generation - - - is a marker that the cipher implements IV generation and - handling as it is specific to the given cipher +- geniv: IV generator (obsolete) Key Sizes --------- @@ -218,10 +205,6 @@ the aforementioned cipher types: - CRYPTO_ALG_TYPE_ABLKCIPHER Asynchronous multi-block cipher -- CRYPTO_ALG_TYPE_GIVCIPHER Asynchronous multi-block cipher packed - together with an IV generator (see geniv field in the /proc/crypto - listing for the known IV generators) - - CRYPTO_ALG_TYPE_KPP Key-agreement Protocol Primitive (KPP) such as an ECDH or DH implementation @@ -338,18 +321,14 @@ uses the API applicable to the cipher type specified for the block. The following call sequence is applicable when the IPSEC layer triggers an encryption operation with the esp_output function. During -configuration, the administrator set up the use of rfc4106(gcm(aes)) as -the cipher for ESP. The following call sequence is now depicted in the -ASCII art above: +configuration, the administrator set up the use of seqiv(rfc4106(gcm(aes))) +as the cipher for ESP. The following call sequence is now depicted in +the ASCII art above: 1. esp_output() invokes crypto_aead_encrypt() to trigger an encryption operation of the AEAD cipher with IV generator. - In case of GCM, the SEQIV implementation is registered as GIVCIPHER - in crypto_rfc4106_alloc(). - - The SEQIV performs its operation to generate an IV where the core - function is seqiv_geniv(). + The SEQIV generates the IV. 2. Now, SEQIV uses the AEAD API function calls to invoke the associated AEAD cipher. In our case, during the instantiation of SEQIV, the diff --git a/Documentation/dev-tools/coccinelle.rst b/Documentation/dev-tools/coccinelle.rst index aa14f05cabb1..00a3409b0c28 100644 --- a/Documentation/dev-tools/coccinelle.rst +++ b/Documentation/dev-tools/coccinelle.rst @@ -4,6 +4,8 @@ .. highlight:: none +.. _devtools_coccinelle: + Coccinelle ========== diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst index e313925fb0fa..b0522a4dd107 100644 --- a/Documentation/dev-tools/index.rst +++ b/Documentation/dev-tools/index.rst @@ -3,8 +3,8 @@ Development tools for the kernel ================================ This document is a collection of documents about development tools that can -be used to work on the kernel. For now, the documents have been pulled -together without any significant effot to integrate them into a coherent +be used to work on the kernel. For now, the documents have been pulled +together without any significant effort to integrate them into a coherent whole; patches welcome! .. 
class:: toc-title
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst index aabc8738b3d8..b72d07d70239 100644 --- a/Documentation/dev-tools/kasan.rst +++ b/Documentation/dev-tools/kasan.rst @@ -4,15 +4,25 @@ The Kernel Address Sanitizer (KASAN) Overview -------- -KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides -a fast and comprehensive solution for finding use-after-free and out-of-bounds -bugs. +KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to +find out-of-bounds and use-after-free bugs. KASAN has two modes: generic KASAN +(similar to userspace ASan) and software tag-based KASAN (similar to userspace +HWASan). -KASAN uses compile-time instrumentation for checking every memory access, -therefore you will need a GCC version 4.9.2 or later. GCC 5.0 or later is -required for detection of out-of-bounds accesses to stack or global variables. +KASAN uses compile-time instrumentation to insert validity checks before every +memory access, and therefore requires a compiler version that supports that. -Currently KASAN is supported only for the x86_64 and arm64 architectures. +Generic KASAN is supported in both GCC and Clang. With GCC it requires version +4.9.2 or later for basic support and version 5.0 or later for detection of +out-of-bounds accesses for stack and global variables and for inline +instrumentation mode (see the Usage section). With Clang it requires version +7.0.0 or later and it doesn't support detection of out-of-bounds accesses for +global variables yet. + +Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later. + +Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390 +architectures, and tag-based KASAN is supported only for arm64. Usage ----- @@ -21,12 +31,14 @@ To enable KASAN, configure the kernel with:: CONFIG_KASAN = y -and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and -inline are compiler instrumentation types. The former produces smaller binary -the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC -version 5.0 or later. +and choose between CONFIG_KASAN_GENERIC (to enable generic KASAN) and +CONFIG_KASAN_SW_TAGS (to enable software tag-based KASAN). + +You also need to choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. +Outline and inline are compiler instrumentation types. The former produces a +smaller binary while the latter is 1.1 - 2 times faster. -KASAN works with both SLUB and SLAB memory allocators. +Both KASAN modes work with both SLUB and SLAB memory allocators. For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
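As a quick illustration of the configuration choices above, here is a minimal sketch using the in-tree scripts/config helper; the option set shown (generic KASAN with inline instrumentation) is just one valid combination and assumes an existing .config in the tree root::

    # enable generic KASAN with inline instrumentation, plus stack traces
    ./scripts/config -e KASAN -e KASAN_GENERIC -e KASAN_INLINE -e STACKTRACE
    make olddefconfig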
To disable instrumentation for specific files or directories, add a line @@ -43,85 +55,85 @@ similar to the following to the respective kernel Makefile: Error reports ~~~~~~~~~~~~~ -A typical out of bounds access report looks like this:: +A typical out-of-bounds access generic KASAN report looks like this:: ================================================================== - BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3 - Write of size 1 by task modprobe/1689 - ============================================================================= - BUG kmalloc-128 (Not tainted): kasan error - ----------------------------------------------------------------------------- - - Disabling lock debugging due to kernel taint - INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689 - __slab_alloc+0x4b4/0x4f0 - kmem_cache_alloc_trace+0x10b/0x190 - kmalloc_oob_right+0x3d/0x75 [test_kasan] - init_module+0x9/0x47 [test_kasan] - do_one_initcall+0x99/0x200 - load_module+0x2cb3/0x3b20 - SyS_finit_module+0x76/0x80 - system_call_fastpath+0x12/0x17 - INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080 - INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720 - - Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ - Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk - Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk - Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk - Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk - Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk - Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk - Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk - Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkkkkkk. - Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc ........ - Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ - CPU: 0 PID: 1689 Comm: modprobe Tainted: G B 3.18.0-rc1-mm1+ #98 - Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014 - ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78 - ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8 - ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558 + BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0xa8/0xbc [test_kasan] + Write of size 1 at addr ffff8801f44ec37b by task insmod/2760 + + CPU: 1 PID: 2760 Comm: insmod Not tainted 4.19.0-rc3+ #698 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014 Call Trace: - [] dump_stack+0x46/0x58 - [] print_trailer+0xf8/0x160 - [] ? kmem_cache_oob+0xc3/0xc3 [test_kasan] - [] object_err+0x35/0x40 - [] ? kmalloc_oob_right+0x65/0x75 [test_kasan] - [] kasan_report_error+0x38a/0x3f0 - [] ? kasan_poison_shadow+0x2f/0x40 - [] ? kasan_unpoison_shadow+0x14/0x40 - [] ? kasan_poison_shadow+0x2f/0x40 - [] ? kmem_cache_oob+0xc3/0xc3 [test_kasan] - [] __asan_store1+0x75/0xb0 - [] ? kmem_cache_oob+0x1d/0xc3 [test_kasan] - [] ? 
kmalloc_oob_right+0x65/0x75 [test_kasan] - [] kmalloc_oob_right+0x65/0x75 [test_kasan] - [] init_module+0x9/0x47 [test_kasan] - [] do_one_initcall+0x99/0x200 - [] ? __vunmap+0xec/0x160 - [] load_module+0x2cb3/0x3b20 - [] ? m_show+0x240/0x240 - [] SyS_finit_module+0x76/0x80 - [] system_call_fastpath+0x12/0x17 + dump_stack+0x94/0xd8 + print_address_description+0x73/0x280 + kasan_report+0x144/0x187 + __asan_report_store1_noabort+0x17/0x20 + kmalloc_oob_right+0xa8/0xbc [test_kasan] + kmalloc_tests_init+0x16/0x700 [test_kasan] + do_one_initcall+0xa5/0x3ae + do_init_module+0x1b6/0x547 + load_module+0x75df/0x8070 + __do_sys_init_module+0x1c6/0x200 + __x64_sys_init_module+0x6e/0xb0 + do_syscall_64+0x9f/0x2c0 + entry_SYSCALL_64_after_hwframe+0x44/0xa9 + RIP: 0033:0x7f96443109da + RSP: 002b:00007ffcf0b51b08 EFLAGS: 00000202 ORIG_RAX: 00000000000000af + RAX: ffffffffffffffda RBX: 000055dc3ee521a0 RCX: 00007f96443109da + RDX: 00007f96445cff88 RSI: 0000000000057a50 RDI: 00007f9644992000 + RBP: 000055dc3ee510b0 R08: 0000000000000003 R09: 0000000000000000 + R10: 00007f964430cd0a R11: 0000000000000202 R12: 00007f96445cff88 + R13: 000055dc3ee51090 R14: 0000000000000000 R15: 0000000000000000 + + Allocated by task 2760: + save_stack+0x43/0xd0 + kasan_kmalloc+0xa7/0xd0 + kmem_cache_alloc_trace+0xe1/0x1b0 + kmalloc_oob_right+0x56/0xbc [test_kasan] + kmalloc_tests_init+0x16/0x700 [test_kasan] + do_one_initcall+0xa5/0x3ae + do_init_module+0x1b6/0x547 + load_module+0x75df/0x8070 + __do_sys_init_module+0x1c6/0x200 + __x64_sys_init_module+0x6e/0xb0 + do_syscall_64+0x9f/0x2c0 + entry_SYSCALL_64_after_hwframe+0x44/0xa9 + + Freed by task 815: + save_stack+0x43/0xd0 + __kasan_slab_free+0x135/0x190 + kasan_slab_free+0xe/0x10 + kfree+0x93/0x1a0 + umh_complete+0x6a/0xa0 + call_usermodehelper_exec_async+0x4c3/0x640 + ret_from_fork+0x35/0x40 + + The buggy address belongs to the object at ffff8801f44ec300 + which belongs to the cache kmalloc-128 of size 128 + The buggy address is located 123 bytes inside of + 128-byte region [ffff8801f44ec300, ffff8801f44ec380) + The buggy address belongs to the page: + page:ffffea0007d13b00 count:1 mapcount:0 mapping:ffff8801f7001640 index:0x0 + flags: 0x200000000000100(slab) + raw: 0200000000000100 ffffea0007d11dc0 0000001a0000001a ffff8801f7001640 + raw: 0000000000000000 0000000080150015 00000001ffffffff 0000000000000000 + page dumped because: kasan: bad access detected + Memory state around the buggy address: - ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc - ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc - ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc - ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc - ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 - >ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc - ^ - ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc - ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc - ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb - ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb - ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb + ffff8801f44ec200: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb + ffff8801f44ec280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc + >ffff8801f44ec300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 + ^ + ffff8801f44ec380: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb + ffff8801f44ec400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc 
================================================================== -The header of the report discribe what kind of bug happened and what kind of -access caused it. It's followed by the description of the accessed slub object -(see 'SLUB Debug output' section in Documentation/vm/slub.rst for details) and -the description of the accessed memory page. +The header of the report provides a short summary of what kind of bug happened +and what kind of access caused it. It's followed by a stack trace of the bad +access, a stack trace of where the accessed memory was allocated (in case bad +access happens on a slab object), and a stack trace of where the object was +freed (in case of a use-after-free bug report). Next comes a description of +the accessed slab object and information about the accessed memory page. In the last section the report shows memory state around the accessed address. Reading this part requires some understanding of how KASAN works. @@ -138,18 +150,24 @@ inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h). In the report above the arrows point to the shadow byte 03, which means that the accessed address is partially accessible. +For tag-based KASAN this last report section shows the memory tags around the +accessed address (see Implementation details section). + Implementation details ---------------------- +Generic KASAN +~~~~~~~~~~~~~ + From a high level, our approach to memory error detection is similar to that of kmemcheck: use shadow memory to record whether each byte of memory is safe -to access, and use compile-time instrumentation to check shadow memory on each -memory access. +to access, and use compile-time instrumentation to insert checks of shadow +memory on each memory access. -AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory -(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and -offset to translate a memory address to its corresponding shadow address. +Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB +to cover 128TB on x86_64) and uses direct mapping with a scale and offset to +translate a memory address to its corresponding shadow address. Here is the function which translates an address to its corresponding shadow address:: @@ -162,12 +180,38 @@ address:: where ``KASAN_SHADOW_SCALE_SHIFT = 3``. -Compile-time instrumentation used for checking memory accesses. Compiler inserts -function calls (__asan_load*(addr), __asan_store*(addr)) before each memory -access of size 1, 2, 4, 8 or 16. These functions check whether memory access is -valid or not by checking corresponding shadow memory. +Compile-time instrumentation is used to insert memory access checks. Compiler +inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each +memory access of size 1, 2, 4, 8 or 16. These functions check whether memory +access is valid or not by checking corresponding shadow memory. GCC 5.0 can perform inline instrumentation. Instead of making function calls, GCC directly inserts code to check the shadow memory. This option significantly enlarges the kernel, but gives a x1.1-x2 performance boost over an outline-instrumented kernel. + +Software tag-based KASAN +~~~~~~~~~~~~~~~~~~~~~~~~ + +Tag-based KASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to +store a pointer tag in the top byte of kernel pointers.
Like generic KASAN it +uses shadow memory to store memory tags associated with each 16-byte memory +cell (therefore it dedicates 1/16th of the kernel memory for shadow memory). + +On each memory allocation tag-based KASAN generates a random tag, tags the +allocated memory with this tag, and embeds this tag into the returned pointer. +Software tag-based KASAN uses compile-time instrumentation to insert checks +before each memory access. These checks make sure that the tag of the memory +that is being accessed is equal to the tag of the pointer that is used to +access this memory. In case of a tag mismatch, tag-based KASAN prints a bug report. + +Software tag-based KASAN also has two instrumentation modes (outline, which +emits callbacks to check memory accesses; and inline, which performs the shadow +memory checks inline). With outline instrumentation mode, a bug report is +simply printed from the function that performs the access check. With inline +instrumentation, a brk instruction is emitted by the compiler, and a dedicated +brk handler is used to print bug reports. + +A potential expansion of this mode is a hardware tag-based mode, which would +use hardware memory tagging support instead of compiler instrumentation and +manual shadow memory manipulation.
diff --git a/Documentation/dev-tools/kselftest.rst b/Documentation/dev-tools/kselftest.rst index dad1bb8711e2..7756f7a7c23b 100644 --- a/Documentation/dev-tools/kselftest.rst +++ b/Documentation/dev-tools/kselftest.rst @@ -9,7 +9,7 @@ and booting a kernel. On some systems, hot-plug tests could hang forever waiting for cpu and memory to be ready to be offlined. A special hot-plug target is created -to run full range of hot-plug tests. In default mode, hot-plug tests run +to run the full range of hot-plug tests. In default mode, hot-plug tests run in safe mode with a limited scope. In limited mode, cpu-hotplug test is run on a single cpu as opposed to all hotplug capable cpus, and memory hotplug test is run on 2% of hotplug capable memory instead of 10%. @@ -89,9 +89,9 @@ Note that some tests will require root privileges. Install selftests ================= -You can use kselftest_install.sh tool installs selftests in default -location which is tools/testing/selftests/kselftest or a user specified -location. +You can use the kselftest_install.sh tool to install selftests in the +default location, which is tools/testing/selftests/kselftest, or in a +user-specified location. To install selftests in default location:: @@ -109,7 +109,7 @@ Running installed selftests Kselftest install as well as the Kselftest tarball provide a script named "run_kselftest.sh" to run the tests. -You can simply do the following to run the installed Kselftests. Please +You can simply do the following to run the installed Kselftests. Please note some tests will require root privileges:: $ cd kselftest @@ -139,7 +139,7 @@ Contributing new tests (details) default. TEST_CUSTOM_PROGS should be used by tests that require custom build - rule and prevent common build rule use. + rules and prevent common build rule use. TEST_PROGS are for test shell scripts. Please ensure the shell script has its exec bit set. Otherwise, lib.mk run_tests will generate a warning.
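Tying the install and run steps above together, a minimal sketch of the whole flow; the install directory is an arbitrary example path, and some tests require root::

    cd tools/testing/selftests
    ./kselftest_install.sh /tmp/kselftest    # user-specified location (example path)
    cd /tmp/kselftest
    sudo ./run_kselftest.sh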
diff --git a/Documentation/device-mapper/dm-raid.txt b/Documentation/device-mapper/dm-raid.txt index 52a719b49afd..2355bef14653 100644 --- a/Documentation/device-mapper/dm-raid.txt +++ b/Documentation/device-mapper/dm-raid.txt @@ -146,7 +146,7 @@ The target is named "raid" and it accepts the following parameters: [data_offset ] This option value defines the offset into each data device where the data starts. This is used to provide out-of-place - reshaping space to avoid writing over data whilst + reshaping space to avoid writing over data while changing the layout of stripes, hence an interruption/crash may happen at any time without the risk of losing data. E.g. when adding devices to an existing raid set during diff --git a/Documentation/devicetree/bindings/.gitignore b/Documentation/devicetree/bindings/.gitignore new file mode 100644 index 000000000000..ef82fcfcccab --- /dev/null +++ b/Documentation/devicetree/bindings/.gitignore @@ -0,0 +1,2 @@ +*.example.dts +processed-schema.yaml diff --git a/Documentation/devicetree/bindings/Makefile b/Documentation/devicetree/bindings/Makefile new file mode 100644 index 000000000000..6e5cef0ed6fb --- /dev/null +++ b/Documentation/devicetree/bindings/Makefile @@ -0,0 +1,27 @@ +# SPDX-License-Identifier: GPL-2.0 +DT_DOC_CHECKER ?= dt-doc-validate +DT_EXTRACT_EX ?= dt-extract-example +DT_MK_SCHEMA ?= dt-mk-schema +DT_MK_SCHEMA_FLAGS := $(if $(DT_SCHEMA_FILES), -u) + +quiet_cmd_chk_binding = CHKDT $(patsubst $(srctree)/%,%,$<) + cmd_chk_binding = $(DT_DOC_CHECKER) $< ; \ + $(DT_EXTRACT_EX) $< > $@ + +$(obj)/%.example.dts: $(src)/%.yaml FORCE + $(call if_changed,chk_binding) + +DT_TMP_SCHEMA := processed-schema.yaml +extra-y += $(DT_TMP_SCHEMA) + +quiet_cmd_mk_schema = SCHEMA $@ + cmd_mk_schema = $(DT_MK_SCHEMA) $(DT_MK_SCHEMA_FLAGS) -o $@ $(filter-out FORCE, $^) + +DT_DOCS = $(shell cd $(srctree)/$(src) && find * -name '*.yaml') +DT_SCHEMA_FILES ?= $(addprefix $(src)/,$(DT_DOCS)) + +extra-y += $(patsubst $(src)/%.yaml,%.example.dts, $(DT_SCHEMA_FILES)) +extra-y += $(patsubst $(src)/%.yaml,%.example.dtb, $(DT_SCHEMA_FILES)) + +$(obj)/$(DT_TMP_SCHEMA): $(DT_SCHEMA_FILES) FORCE + $(call if_changed,mk_schema) diff --git a/Documentation/devicetree/bindings/arm/altera.txt b/Documentation/devicetree/bindings/arm/altera.txt deleted file mode 100644 index 558735aacca8..000000000000 --- a/Documentation/devicetree/bindings/arm/altera.txt +++ /dev/null @@ -1,14 +0,0 @@ -Altera's SoCFPGA platform device tree bindings ---------------------------------------------- - -Boards with Cyclone 5 SoC: -Required root node properties: -compatible = "altr,socfpga-cyclone5", "altr,socfpga"; - -Boards with Arria 5 SoC: -Required root node properties: -compatible = "altr,socfpga-arria5", "altr,socfpga"; - -Boards with Arria 10 SoC: -Required root node properties: -compatible = "altr,socfpga-arria10", "altr,socfpga"; diff --git a/Documentation/devicetree/bindings/arm/altera.yaml b/Documentation/devicetree/bindings/arm/altera.yaml new file mode 100644 index 000000000000..49e0362ddc11 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/altera.yaml @@ -0,0 +1,20 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/altera.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Altera's SoCFPGA platform device tree bindings + +maintainers: + - Dinh Nguyen + +properties: + compatible: + items: + - enum: + - altr,socfpga-cyclone5 + - altr,socfpga-arria5 + - altr,socfpga-arria10 + - const: altr,socfpga +... 
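The bindings Makefile above drives the external dt-schema utilities, which can also be run by hand on a single binding. A minimal sketch, assuming dt-doc-validate and dt-mk-schema are installed and on PATH (how to obtain them is outside the scope of this patch)::

    # validate one binding document against the meta-schema
    dt-doc-validate Documentation/devicetree/bindings/arm/altera.yaml
    # build a processed schema from it, as the Makefile's mk_schema rule does
    dt-mk-schema -o processed-schema.yaml Documentation/devicetree/bindings/arm/altera.yaml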
diff --git a/Documentation/devicetree/bindings/arm/altera/socfpga-clk-manager.txt b/Documentation/devicetree/bindings/arm/altera/socfpga-clk-manager.txt deleted file mode 100644 index 2c28f1d12f45..000000000000 --- a/Documentation/devicetree/bindings/arm/altera/socfpga-clk-manager.txt +++ /dev/null @@ -1,11 +0,0 @@ -Altera SOCFPGA Clock Manager - -Required properties: -- compatible : "altr,clk-mgr" -- reg : Should contain base address and length for Clock Manager - -Example: - clkmgr@ffd04000 { - compatible = "altr,clk-mgr"; - reg = <0xffd04000 0x1000>; - }; diff --git a/Documentation/devicetree/bindings/arm/altera/socfpga-clk-manager.yaml b/Documentation/devicetree/bindings/arm/altera/socfpga-clk-manager.yaml new file mode 100644 index 000000000000..e4131fa42b26 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/altera/socfpga-clk-manager.yaml @@ -0,0 +1,31 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/altera/socfpga-clk-manager.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Altera SOCFPGA Clock Manager + +maintainers: + - Dinh Nguyen + +description: test + +properties: + compatible: + items: + - const: altr,clk-mgr + reg: + maxItems: 1 + +required: + - compatible + +examples: + - | + clkmgr@ffd04000 { + compatible = "altr,clk-mgr"; + reg = <0xffd04000 0x1000>; + }; + +... diff --git a/Documentation/devicetree/bindings/arm/atmel-sysregs.txt b/Documentation/devicetree/bindings/arm/atmel-sysregs.txt index 4b96608ad692..14f319f694b7 100644 --- a/Documentation/devicetree/bindings/arm/atmel-sysregs.txt +++ b/Documentation/devicetree/bindings/arm/atmel-sysregs.txt @@ -158,14 +158,24 @@ Security Module (SECUMOD) The Security Module macrocell provides all necessary secure functions to avoid voltage, temperature, frequency and mechanical attacks on the chip. It also -embeds secure memories that can be scrambled +embeds secure memories that can be scrambled. + +The Security Module also offers the PIOBU pins which can be used as GPIO pins. +Note that they maintain their voltage during Backup/Self-refresh. required properties: - compatible: Should be "atmel,-secumod", "syscon". can be "sama5d2". - reg: Should contain registers location and length +- gpio-controller: Marks the port as GPIO controller. +- #gpio-cells: There are 2. The pin number is the + first, the second represents additional + parameters such as GPIO_ACTIVE_HIGH/LOW. + secumod@fc040000 { compatible = "atmel,sama5d2-secumod", "syscon"; reg = <0xfc040000 0x100>; + gpio-controller; + #gpio-cells = <2>; }; diff --git a/Documentation/devicetree/bindings/arm/calxeda.txt b/Documentation/devicetree/bindings/arm/calxeda.txt deleted file mode 100644 index 25fcf96795ca..000000000000 --- a/Documentation/devicetree/bindings/arm/calxeda.txt +++ /dev/null @@ -1,15 +0,0 @@ -Calxeda Platforms Device Tree Bindings ------------------------------------------------ - -Boards with Calxeda Cortex-A9 based ECX-1000 (Highbank) SOC shall have the -following properties. - -Required root node properties: - - compatible = "calxeda,highbank"; - - -Boards with Calxeda Cortex-A15 based ECX-2000 SOC shall have the following -properties. 
- -Required root node properties: - - compatible = "calxeda,ecx-2000"; diff --git a/Documentation/devicetree/bindings/arm/calxeda.yaml b/Documentation/devicetree/bindings/arm/calxeda.yaml new file mode 100644 index 000000000000..aa5571d23c39 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/calxeda.yaml @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/calxeda.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Calxeda Platforms Device Tree Bindings + +maintainers: + - Rob Herring +description: |+ + Bindings for boards with Calxeda Cortex-A9 based ECX-1000 (Highbank) SOC + or Cortex-A15 based ECX-2000 SOCs + +properties: + $nodename: + const: '/' + compatible: + items: + - enum: + - calxeda,highbank + - calxeda,ecx-2000 diff --git a/Documentation/devicetree/bindings/arm/cpus.txt b/Documentation/devicetree/bindings/arm/cpus.txt deleted file mode 100644 index b0198a1cf403..000000000000 --- a/Documentation/devicetree/bindings/arm/cpus.txt +++ /dev/null @@ -1,490 +0,0 @@ -================= -ARM CPUs bindings -================= - -The device tree allows to describe the layout of CPUs in a system through -the "cpus" node, which in turn contains a number of subnodes (ie "cpu") -defining properties for every cpu. - -Bindings for CPU nodes follow the Devicetree Specification, available from: - -https://www.devicetree.org/specifications/ - -with updates for 32-bit and 64-bit ARM systems provided in this document. - -================================ -Convention used in this document -================================ - -This document follows the conventions described in the Devicetree -Specification, with the addition: - -- square brackets define bitfields, eg reg[7:0] value of the bitfield in - the reg property contained in bits 7 down to 0 - -===================================== -cpus and cpu node bindings definition -===================================== - -The ARM architecture, in accordance with the Devicetree Specification, -requires the cpus and cpu nodes to be present and contain the properties -described below. - -- cpus node - - Description: Container of cpu nodes - - The node name must be "cpus". - - A cpus node must define the following properties: - - - #address-cells - Usage: required - Value type: - - Definition depends on ARM architecture version and - configuration: - - # On uniprocessor ARM architectures previous to v7 - value must be 1, to enable a simple enumeration - scheme for processors that do not have a HW CPU - identification register. - # On 32-bit ARM 11 MPcore, ARM v7 or later systems - value must be 1, that corresponds to CPUID/MPIDR - registers sizes. - # On ARM v8 64-bit systems value should be set to 2, - that corresponds to the MPIDR_EL1 register size. - If MPIDR_EL1[63:32] value is equal to 0 on all CPUs - in the system, #address-cells can be set to 1, since - MPIDR_EL1[63:32] bits are not used for CPUs - identification. - - #size-cells - Usage: required - Value type: - Definition: must be set to 0 - -- cpu node - - Description: Describes a CPU in an ARM based system - - PROPERTIES - - - device_type - Usage: required - Value type: - Definition: must be "cpu" - - reg - Usage and definition depend on ARM architecture version and - configuration: - - # On uniprocessor ARM architectures previous to v7 - this property is required and must be set to 0. - - # On ARM 11 MPcore based systems this property is - required and matches the CPUID[11:0] register bits. 
- - Bits [11:0] in the reg cell must be set to - bits [11:0] in CPU ID register. - - All other bits in the reg cell must be set to 0. - - # On 32-bit ARM v7 or later systems this property is - required and matches the CPU MPIDR[23:0] register - bits. - - Bits [23:0] in the reg cell must be set to - bits [23:0] in MPIDR. - - All other bits in the reg cell must be set to 0. - - # On ARM v8 64-bit systems this property is required - and matches the MPIDR_EL1 register affinity bits. - - * If cpus node's #address-cells property is set to 2 - - The first reg cell bits [7:0] must be set to - bits [39:32] of MPIDR_EL1. - - The second reg cell bits [23:0] must be set to - bits [23:0] of MPIDR_EL1. - - * If cpus node's #address-cells property is set to 1 - - The reg cell bits [23:0] must be set to bits [23:0] - of MPIDR_EL1. - - All other bits in the reg cells must be set to 0. - - - compatible: - Usage: required - Value type: - Definition: should be one of: - "arm,arm710t" - "arm,arm720t" - "arm,arm740t" - "arm,arm7ej-s" - "arm,arm7tdmi" - "arm,arm7tdmi-s" - "arm,arm9es" - "arm,arm9ej-s" - "arm,arm920t" - "arm,arm922t" - "arm,arm925" - "arm,arm926e-s" - "arm,arm926ej-s" - "arm,arm940t" - "arm,arm946e-s" - "arm,arm966e-s" - "arm,arm968e-s" - "arm,arm9tdmi" - "arm,arm1020e" - "arm,arm1020t" - "arm,arm1022e" - "arm,arm1026ej-s" - "arm,arm1136j-s" - "arm,arm1136jf-s" - "arm,arm1156t2-s" - "arm,arm1156t2f-s" - "arm,arm1176jzf" - "arm,arm1176jz-s" - "arm,arm1176jzf-s" - "arm,arm11mpcore" - "arm,cortex-a5" - "arm,cortex-a7" - "arm,cortex-a8" - "arm,cortex-a9" - "arm,cortex-a12" - "arm,cortex-a15" - "arm,cortex-a17" - "arm,cortex-a53" - "arm,cortex-a57" - "arm,cortex-a72" - "arm,cortex-a73" - "arm,cortex-m0" - "arm,cortex-m0+" - "arm,cortex-m1" - "arm,cortex-m3" - "arm,cortex-m4" - "arm,cortex-r4" - "arm,cortex-r5" - "arm,cortex-r7" - "brcm,brahma-b15" - "brcm,brahma-b53" - "brcm,vulcan" - "cavium,thunder" - "cavium,thunder2" - "faraday,fa526" - "intel,sa110" - "intel,sa1100" - "marvell,feroceon" - "marvell,mohawk" - "marvell,pj4a" - "marvell,pj4b" - "marvell,sheeva-v5" - "nvidia,tegra132-denver" - "nvidia,tegra186-denver" - "nvidia,tegra194-carmel" - "qcom,krait" - "qcom,kryo" - "qcom,kryo385" - "qcom,scorpion" - - enable-method - Value type: - Usage and definition depend on ARM architecture version. - # On ARM v8 64-bit this property is required and must - be one of: - "psci" - "spin-table" - # On ARM 32-bit systems this property is optional and - can be one of: - "actions,s500-smp" - "allwinner,sun6i-a31" - "allwinner,sun8i-a23" - "allwinner,sun9i-a80-smp" - "amlogic,meson8-smp" - "amlogic,meson8b-smp" - "arm,realview-smp" - "brcm,bcm11351-cpu-method" - "brcm,bcm23550" - "brcm,bcm2836-smp" - "brcm,bcm-nsp-smp" - "brcm,brahma-b15" - "marvell,armada-375-smp" - "marvell,armada-380-smp" - "marvell,armada-390-smp" - "marvell,armada-xp-smp" - "marvell,98dx3236-smp" - "mediatek,mt6589-smp" - "mediatek,mt81xx-tz-smp" - "qcom,gcc-msm8660" - "qcom,kpss-acc-v1" - "qcom,kpss-acc-v2" - "renesas,apmu" - "renesas,r9a06g032-smp" - "rockchip,rk3036-smp" - "rockchip,rk3066-smp" - "ste,dbx500-smp" - - - cpu-release-addr - Usage: required for systems that have an "enable-method" - property value of "spin-table". - Value type: - Definition: - # On ARM v8 64-bit systems must be a two cell - property identifying a 64-bit zero-initialised - memory location. 
- - - qcom,saw - Usage: required for systems that have an "enable-method" - property value of "qcom,kpss-acc-v1" or - "qcom,kpss-acc-v2" - Value type: - Definition: Specifies the SAW[1] node associated with this CPU. - - - qcom,acc - Usage: required for systems that have an "enable-method" - property value of "qcom,kpss-acc-v1" or - "qcom,kpss-acc-v2" - Value type: - Definition: Specifies the ACC[2] node associated with this CPU. - - - cpu-idle-states - Usage: Optional - Value type: - Definition: - # List of phandles to idle state nodes supported - by this cpu [3]. - - - capacity-dmips-mhz - Usage: Optional - Value type: - Definition: - # u32 value representing CPU capacity [4] in - DMIPS/MHz, relative to highest capacity-dmips-mhz - in the system. - - - rockchip,pmu - Usage: optional for systems that have an "enable-method" - property value of "rockchip,rk3066-smp" - While optional, it is the preferred way to get access to - the cpu-core power-domains. - Value type: - Definition: Specifies the syscon node controlling the cpu core - power domains. - - - dynamic-power-coefficient - Usage: optional - Value type: - Definition: A u32 value that represents the running time dynamic - power coefficient in units of uW/MHz/V^2. The - coefficient can either be calculated from power - measurements or derived by analysis. - - The dynamic power consumption of the CPU is - proportional to the square of the Voltage (V) and - the clock frequency (f). The coefficient is used to - calculate the dynamic power as below - - - Pdyn = dynamic-power-coefficient * V^2 * f - - where voltage is in V, frequency is in MHz. - -Example 1 (dual-cluster big.LITTLE system 32-bit): - - cpus { - #size-cells = <0>; - #address-cells = <1>; - - cpu@0 { - device_type = "cpu"; - compatible = "arm,cortex-a15"; - reg = <0x0>; - }; - - cpu@1 { - device_type = "cpu"; - compatible = "arm,cortex-a15"; - reg = <0x1>; - }; - - cpu@100 { - device_type = "cpu"; - compatible = "arm,cortex-a7"; - reg = <0x100>; - }; - - cpu@101 { - device_type = "cpu"; - compatible = "arm,cortex-a7"; - reg = <0x101>; - }; - }; - -Example 2 (Cortex-A8 uniprocessor 32-bit system): - - cpus { - #size-cells = <0>; - #address-cells = <1>; - - cpu@0 { - device_type = "cpu"; - compatible = "arm,cortex-a8"; - reg = <0x0>; - }; - }; - -Example 3 (ARM 926EJ-S uniprocessor 32-bit system): - - cpus { - #size-cells = <0>; - #address-cells = <1>; - - cpu@0 { - device_type = "cpu"; - compatible = "arm,arm926ej-s"; - reg = <0x0>; - }; - }; - -Example 4 (ARM Cortex-A57 64-bit system): - -cpus { - #size-cells = <0>; - #address-cells = <2>; - - cpu@0 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x0>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@1 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x1>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x100>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@101 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x101>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@10000 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x10000>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@10001 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x10001>; - enable-method = "spin-table"; - 
cpu-release-addr = <0 0x20000000>; - }; - - cpu@10100 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x10100>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@10101 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x0 0x10101>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100000000 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x0>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100000001 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x1>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100000100 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x100>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100000101 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x101>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100010000 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x10000>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100010001 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x10001>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100010100 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x10100>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; - - cpu@100010101 { - device_type = "cpu"; - compatible = "arm,cortex-a57"; - reg = <0x1 0x10101>; - enable-method = "spin-table"; - cpu-release-addr = <0 0x20000000>; - }; -}; - --- -[1] arm/msm/qcom,saw2.txt -[2] arm/msm/qcom,kpss-acc.txt -[3] ARM Linux kernel documentation - idle states bindings - Documentation/devicetree/bindings/arm/idle-states.txt -[4] ARM Linux kernel documentation - cpu capacity bindings - Documentation/devicetree/bindings/arm/cpu-capacity.txt diff --git a/Documentation/devicetree/bindings/arm/cpus.yaml b/Documentation/devicetree/bindings/arm/cpus.yaml new file mode 100644 index 000000000000..298c17b327c6 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/cpus.yaml @@ -0,0 +1,507 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/cpus.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: ARM CPUs bindings + +maintainers: + - Lorenzo Pieralisi + +description: |+ + The device tree allows to describe the layout of CPUs in a system through + the "cpus" node, which in turn contains a number of subnodes (ie "cpu") + defining properties for every cpu. + + Bindings for CPU nodes follow the Devicetree Specification, available from: + + https://www.devicetree.org/specifications/ + + with updates for 32-bit and 64-bit ARM systems provided in this document. 
+ + ================================ + Convention used in this document + ================================ + + This document follows the conventions described in the Devicetree + Specification, with the addition: + + - square brackets define bitfields, eg reg[7:0] value of the bitfield in + the reg property contained in bits 7 down to 0 + + ===================================== + cpus and cpu node bindings definition + ===================================== + + The ARM architecture, in accordance with the Devicetree Specification, + requires the cpus and cpu nodes to be present and contain the properties + described below. + +properties: + $nodename: + const: cpus + description: Container of cpu nodes + + '#address-cells': + enum: [1, 2] + description: | + Definition depends on ARM architecture version and configuration: + + On uniprocessor ARM architectures previous to v7 + value must be 1, to enable a simple enumeration + scheme for processors that do not have a HW CPU + identification register. + On 32-bit ARM 11 MPcore, ARM v7 or later systems + value must be 1, that corresponds to CPUID/MPIDR + registers sizes. + On ARM v8 64-bit systems value should be set to 2, + that corresponds to the MPIDR_EL1 register size. + If MPIDR_EL1[63:32] value is equal to 0 on all CPUs + in the system, #address-cells can be set to 1, since + MPIDR_EL1[63:32] bits are not used for CPUs + identification. + + '#size-cells': + const: 0 + +patternProperties: + '^cpu@[0-9a-f]+$': + properties: + device_type: + const: cpu + + reg: + maxItems: 1 + description: | + Usage and definition depend on ARM architecture version and + configuration: + + On uniprocessor ARM architectures previous to v7 + this property is required and must be set to 0. + + On ARM 11 MPcore based systems this property is + required and matches the CPUID[11:0] register bits. + + Bits [11:0] in the reg cell must be set to + bits [11:0] in CPU ID register. + + All other bits in the reg cell must be set to 0. + + On 32-bit ARM v7 or later systems this property is + required and matches the CPU MPIDR[23:0] register + bits. + + Bits [23:0] in the reg cell must be set to + bits [23:0] in MPIDR. + + All other bits in the reg cell must be set to 0. + + On ARM v8 64-bit systems this property is required + and matches the MPIDR_EL1 register affinity bits. + + * If cpus node's #address-cells property is set to 2 + + The first reg cell bits [7:0] must be set to + bits [39:32] of MPIDR_EL1. + + The second reg cell bits [23:0] must be set to + bits [23:0] of MPIDR_EL1. + + * If cpus node's #address-cells property is set to 1 + + The reg cell bits [23:0] must be set to bits [23:0] + of MPIDR_EL1. + + All other bits in the reg cells must be set to 0. 
+ + compatible: + items: + - enum: + - arm,arm710t + - arm,arm720t + - arm,arm740t + - arm,arm7ej-s + - arm,arm7tdmi + - arm,arm7tdmi-s + - arm,arm9es + - arm,arm9ej-s + - arm,arm920t + - arm,arm922t + - arm,arm925 + - arm,arm926e-s + - arm,arm926ej-s + - arm,arm940t + - arm,arm946e-s + - arm,arm966e-s + - arm,arm968e-s + - arm,arm9tdmi + - arm,arm1020e + - arm,arm1020t + - arm,arm1022e + - arm,arm1026ej-s + - arm,arm1136j-s + - arm,arm1136jf-s + - arm,arm1156t2-s + - arm,arm1156t2f-s + - arm,arm1176jzf + - arm,arm1176jz-s + - arm,arm1176jzf-s + - arm,arm11mpcore + - arm,armv8 # Only for s/w models + - arm,cortex-a5 + - arm,cortex-a7 + - arm,cortex-a8 + - arm,cortex-a9 + - arm,cortex-a12 + - arm,cortex-a15 + - arm,cortex-a17 + - arm,cortex-a53 + - arm,cortex-a57 + - arm,cortex-a72 + - arm,cortex-a73 + - arm,cortex-m0 + - arm,cortex-m0+ + - arm,cortex-m1 + - arm,cortex-m3 + - arm,cortex-m4 + - arm,cortex-r4 + - arm,cortex-r5 + - arm,cortex-r7 + - brcm,brahma-b15 + - brcm,brahma-b53 + - brcm,vulcan + - cavium,thunder + - cavium,thunder2 + - faraday,fa526 + - intel,sa110 + - intel,sa1100 + - marvell,feroceon + - marvell,mohawk + - marvell,pj4a + - marvell,pj4b + - marvell,sheeva-v5 + - marvell,sheeva-v7 + - nvidia,tegra132-denver + - nvidia,tegra186-denver + - nvidia,tegra194-carmel + - qcom,krait + - qcom,kryo + - qcom,kryo385 + - qcom,scorpion + + enable-method: + allOf: + - $ref: '/schemas/types.yaml#/definitions/string' + - oneOf: + # On ARM v8 64-bit this property is required + - enum: + - psci + - spin-table + # On ARM 32-bit systems this property is optional + - enum: + - actions,s500-smp + - allwinner,sun6i-a31 + - allwinner,sun8i-a23 + - allwinner,sun9i-a80-smp + - allwinner,sun8i-a83t-smp + - amlogic,meson8-smp + - amlogic,meson8b-smp + - arm,realview-smp + - brcm,bcm11351-cpu-method + - brcm,bcm23550 + - brcm,bcm2836-smp + - brcm,bcm63138 + - brcm,bcm-nsp-smp + - brcm,brahma-b15 + - marvell,armada-375-smp + - marvell,armada-380-smp + - marvell,armada-390-smp + - marvell,armada-xp-smp + - marvell,98dx3236-smp + - mediatek,mt6589-smp + - mediatek,mt81xx-tz-smp + - qcom,gcc-msm8660 + - qcom,kpss-acc-v1 + - qcom,kpss-acc-v2 + - renesas,apmu + - renesas,r9a06g032-smp + - rockchip,rk3036-smp + - rockchip,rk3066-smp + - ste,dbx500-smp + + cpu-release-addr: + $ref: '/schemas/types.yaml#/definitions/uint64' + + description: + Required for systems that have an "enable-method" + property value of "spin-table". + On ARM v8 64-bit systems must be a two cell + property identifying a 64-bit zero-initialised + memory location. + + cpu-idle-states: + $ref: '/schemas/types.yaml#/definitions/phandle-array' + description: | + List of phandles to idle state nodes supported + by this cpu (see ./idle-states.txt). + + capacity-dmips-mhz: + $ref: '/schemas/types.yaml#/definitions/uint32' + description: + u32 value representing CPU capacity (see ./cpu-capacity.txt) in + DMIPS/MHz, relative to highest capacity-dmips-mhz + in the system. + + dynamic-power-coefficient: + $ref: '/schemas/types.yaml#/definitions/uint32' + description: + A u32 value that represents the running time dynamic + power coefficient in units of uW/MHz/V^2. The + coefficient can either be calculated from power + measurements or derived by analysis. + + The dynamic power consumption of the CPU is + proportional to the square of the Voltage (V) and + the clock frequency (f). The coefficient is used to + calculate the dynamic power as below - + + Pdyn = dynamic-power-coefficient * V^2 * f + + where voltage is in V, frequency is in MHz. 
+ + qcom,saw: + $ref: '/schemas/types.yaml#/definitions/phandle' + description: | + Specifies the SAW* node associated with this CPU. + + Required for systems that have an "enable-method" property + value of "qcom,kpss-acc-v1" or "qcom,kpss-acc-v2" + + * arm/msm/qcom,saw2.txt + + qcom,acc: + $ref: '/schemas/types.yaml#/definitions/phandle' + description: | + Specifies the ACC* node associated with this CPU. + + Required for systems that have an "enable-method" property + value of "qcom,kpss-acc-v1" or "qcom,kpss-acc-v2" + + * arm/msm/qcom,kpss-acc.txt + + rockchip,pmu: + $ref: '/schemas/types.yaml#/definitions/phandle' + description: | + Specifies the syscon node controlling the cpu core power domains. + + Optional for systems that have an "enable-method" + property value of "rockchip,rk3066-smp" + While optional, it is the preferred way to get access to + the cpu-core power-domains. + + required: + - device_type + - reg + - compatible + + dependencies: + cpu-release-addr: [enable-method] + rockchip,pmu: [enable-method] + +required: + - '#address-cells' + - '#size-cells' + +examples: + - | + cpus { + #size-cells = <0>; + #address-cells = <1>; + + cpu@0 { + device_type = "cpu"; + compatible = "arm,cortex-a15"; + reg = <0x0>; + }; + + cpu@1 { + device_type = "cpu"; + compatible = "arm,cortex-a15"; + reg = <0x1>; + }; + + cpu@100 { + device_type = "cpu"; + compatible = "arm,cortex-a7"; + reg = <0x100>; + }; + + cpu@101 { + device_type = "cpu"; + compatible = "arm,cortex-a7"; + reg = <0x101>; + }; + }; + + - | + // Example 2 (Cortex-A8 uniprocessor 32-bit system): + cpus { + #size-cells = <0>; + #address-cells = <1>; + + cpu@0 { + device_type = "cpu"; + compatible = "arm,cortex-a8"; + reg = <0x0>; + }; + }; + + - | + // Example 3 (ARM 926EJ-S uniprocessor 32-bit system): + cpus { + #size-cells = <0>; + #address-cells = <1>; + + cpu@0 { + device_type = "cpu"; + compatible = "arm,arm926ej-s"; + reg = <0x0>; + }; + }; + + - | + // Example 4 (ARM Cortex-A57 64-bit system): + cpus { + #size-cells = <0>; + #address-cells = <2>; + + cpu@0 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x0>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@1 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x1>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x100>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@101 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x101>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@10000 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x10000>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@10001 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x10001>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@10100 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x10100>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@10101 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x0 0x10101>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100000000 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x0>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + 
cpu@100000001 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x1>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100000100 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x100>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100000101 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x101>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100010000 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x10000>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100010001 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x10001>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100010100 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x10100>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + + cpu@100010101 { + device_type = "cpu"; + compatible = "arm,cortex-a57"; + reg = <0x1 0x10101>; + enable-method = "spin-table"; + cpu-release-addr = <0 0x20000000>; + }; + }; +... diff --git a/Documentation/devicetree/bindings/arm/davinci.txt b/Documentation/devicetree/bindings/arm/davinci.txt deleted file mode 100644 index 715622c36260..000000000000 --- a/Documentation/devicetree/bindings/arm/davinci.txt +++ /dev/null @@ -1,25 +0,0 @@ -Texas Instruments DaVinci Platforms Device Tree Bindings --------------------------------------------------------- - -DA850/OMAP-L138/AM18x Evaluation Module (EVM) board -Required root node properties: - - compatible = "ti,da850-evm", "ti,da850"; - -DA850/OMAP-L138/AM18x L138/C6748 Development Kit (LCDK) board -Required root node properties: - - compatible = "ti,da850-lcdk", "ti,da850"; - -EnBW AM1808 based CMC board -Required root node properties: - - compatible = "enbw,cmc", "ti,da850; - -LEGO MINDSTORMS EV3 (AM1808 based) -Required root node properties: - - compatible = "lego,ev3", "ti,da850"; - -Generic DaVinci Boards ----------------------- - -DA850/OMAP-L138/AM18x generic board -Required root node properties: - - compatible = "ti,da850"; diff --git a/Documentation/devicetree/bindings/arm/idle-states.txt b/Documentation/devicetree/bindings/arm/idle-states.txt index 2c73847499ab..8f0937db55c5 100644 --- a/Documentation/devicetree/bindings/arm/idle-states.txt +++ b/Documentation/devicetree/bindings/arm/idle-states.txt @@ -142,7 +142,7 @@ characterised by the following graph: The graph is split in two parts delimited by time 1ms on the X-axis. The graph curve with X-axis values = { x | 0 < x < 1ms } has a steep slope -and denotes the energy costs incurred whilst entering and leaving the idle +and denotes the energy costs incurred while entering and leaving the idle state. 
The graph curve in the area delimited by X-axis values = {x | x > 1ms } has shallower slope and essentially represents the energy consumption of the idle diff --git a/Documentation/devicetree/bindings/arm/mrvl/mrvl.txt b/Documentation/devicetree/bindings/arm/mrvl/mrvl.txt index 117d741a2e4f..951687528efb 100644 --- a/Documentation/devicetree/bindings/arm/mrvl/mrvl.txt +++ b/Documentation/devicetree/bindings/arm/mrvl/mrvl.txt @@ -11,4 +11,4 @@ Required root node properties: MMP2 Brownstone Board Required root node properties: - - compatible = "mrvl,mmp2-brownstone"; + - compatible = "mrvl,mmp2-brownstone", "mrvl,mmp2"; diff --git a/Documentation/devicetree/bindings/arm/nspire.txt b/Documentation/devicetree/bindings/arm/nspire.txt deleted file mode 100644 index 4d08518bd176..000000000000 --- a/Documentation/devicetree/bindings/arm/nspire.txt +++ /dev/null @@ -1,14 +0,0 @@ -TI-NSPIRE calculators - -Required properties: -- compatible: Compatible property value should contain "ti,nspire". - CX models should have "ti,nspire-cx" - Touchpad models should have "ti,nspire-tp" - Clickpad models should have "ti,nspire-clp" - -Example: - -/ { - model = "TI-NSPIRE CX"; - compatible = "ti,nspire-cx"; - ... diff --git a/Documentation/devicetree/bindings/arm/primecell.txt b/Documentation/devicetree/bindings/arm/primecell.txt deleted file mode 100644 index 0df6acacfaea..000000000000 --- a/Documentation/devicetree/bindings/arm/primecell.txt +++ /dev/null @@ -1,46 +0,0 @@ -* ARM Primecell Peripherals - -ARM, Ltd. Primecell peripherals have a standard id register that can be used to -identify the peripheral type, vendor, and revision. This value can be used for -driver matching. - -Required properties: - -- compatible : should be a specific name for the peripheral and - "arm,primecell". The specific name will match the ARM - engineering name for the logic block in the form: "arm,pl???" - -Optional properties: - -- arm,primecell-periphid : Value to override the h/w value with -- clocks : From common clock binding. First clock is phandle to clock for apb - pclk. Additional clocks are optional and specific to those peripherals. -- clock-names : From common clock binding. Shall be "apb_pclk" for first clock. -- dmas : From common DMA binding. If present, refers to one or more dma channels. -- dma-names : From common DMA binding, needs to match the 'dmas' property. - Devices with exactly one receive and transmit channel shall name - these "rx" and "tx", respectively. 
-- pinctrl-<n> : Pinctrl states as described in bindings/pinctrl/pinctrl-bindings.txt
-- pinctrl-names : Names corresponding to the numbered pinctrl states
-- interrupts : one or more interrupt specifiers
-- interrupt-names : names corresponding to the interrupts properties
-
-Example:
-
-serial@fff36000 {
-	compatible = "arm,pl011", "arm,primecell";
-	arm,primecell-periphid = <0x00341011>;
-
-	clocks = <&pclk>;
-	clock-names = "apb_pclk";
-
-	dmas = <&dma-controller 4>, <&dma-controller 5>;
-	dma-names = "rx", "tx";
-
-	pinctrl-0 = <&uart0_default_mux>, <&uart0_default_mode>;
-	pinctrl-1 = <&uart0_sleep_mode>;
-	pinctrl-names = "default","sleep";
-
-	interrupts = <0 11 0x4>;
-};
-
diff --git a/Documentation/devicetree/bindings/arm/primecell.yaml b/Documentation/devicetree/bindings/arm/primecell.yaml
new file mode 100644
index 000000000000..5aae37f1c563
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/primecell.yaml
@@ -0,0 +1,36 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/arm/primecell.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: ARM Primecell Peripherals
+
+maintainers:
+  - Rob Herring
+
+description: |+
+  ARM, Ltd. Primecell peripherals have a standard id register that can be
+  used to identify the peripheral type, vendor, and revision. This value can
+  be used for driver matching.
+
+properties:
+  compatible:
+    contains:
+      const: arm,primecell
+    description:
+      Should be a specific name for the peripheral followed by "arm,primecell".
+      The specific name will match the ARM engineering name for the logic block
+      in the form "arm,pl???"
+
+  arm,primecell-periphid:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: Value to override the h/w ID value
+  clocks:
+    minItems: 1
+    maxItems: 32
+  clock-names:
+    contains:
+      const: apb_pclk
+    additionalItems: true
+...
diff --git a/Documentation/devicetree/bindings/arm/qcom.txt b/Documentation/devicetree/bindings/arm/qcom.txt
deleted file mode 100644
index ee532e705d6c..000000000000
--- a/Documentation/devicetree/bindings/arm/qcom.txt
+++ /dev/null
@@ -1,57 +0,0 @@
-QCOM device tree bindings
--------------------------
-
-Some qcom based bootloaders identify the dtb blob based on a set of
-device properties like SoC and platform and revisions of those components.
-To support this scheme, we encode this information into the board compatible
-string.
-
-Each board must specify a top-level board compatible string with the following
-format:
-
-	compatible = "qcom,<SoC>[-<soc_version>][-<foundry_id>]-<board>[/<subtype>][-<board_version>]"
-
-The 'SoC' and 'board' elements are required. All other elements are optional.
-
-The 'SoC' element must be one of the following strings:
-
-	apq8016
-	apq8074
-	apq8084
-	apq8096
-	msm8916
-	msm8974
-	msm8992
-	msm8994
-	msm8996
-	mdm9615
-	ipq8074
-	sdm845
-
-The 'board' element must be one of the following strings:
-
-	cdp
-	liquid
-	dragonboard
-	mtp
-	sbc
-	hk01
-
-The 'soc_version' and 'board_version' elements take the form of v<Major>.<Minor>
-where the minor number may be omitted when it's zero, i.e. v1.0 is the same
-as v1. If all versions of the 'board_version' elements match, then a
-wildcard '*' should be used, e.g. 'v*'.
-
-The 'foundry_id' and 'subtype' elements are one or more digits from 0 to 9.
-
-Examples:
-
-	"qcom,msm8916-v1-cdp-pm8916-v2.1"
-
-A CDP board with an msm8916 SoC, version 1 paired with a pm8916 PMIC of version
-2.1.
-
-	"qcom,apq8074-v2.0-2-dragonboard/1-v0.1"
-
-A dragonboard board v0.1 of subtype 1 with an apq8074 SoC version 2, made in
-foundry 2.
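For reference, the compatible string assembled with this scheme ends up as the
first entry of the board's root-node compatible property. A minimal dts sketch
using the "qcom,msm8916-v1-cdp-pm8916-v2.1" string from the examples above (the
model string is illustrative, not taken from a shipped board file):

	/dts-v1/;

	/ {
		model = "Qualcomm MSM8916 CDP";	/* illustrative */
		compatible = "qcom,msm8916-v1-cdp-pm8916-v2.1";
	};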
diff --git a/Documentation/devicetree/bindings/arm/qcom.yaml b/Documentation/devicetree/bindings/arm/qcom.yaml
new file mode 100644
index 000000000000..f6316ab66385
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/qcom.yaml
@@ -0,0 +1,125 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/arm/qcom.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: QCOM device tree bindings
+
+maintainers:
+  - Stephen Boyd
+
+description: |
+  Some qcom based bootloaders identify the dtb blob based on a set of
+  device properties like SoC and platform and revisions of those components.
+  To support this scheme, we encode this information into the board compatible
+  string.
+
+  Each board must specify a top-level board compatible string with the following
+  format:
+
+    compatible = "qcom,<SoC>[-<soc_version>][-<foundry_id>]-<board>[/<subtype>][-<board_version>]"
+
+  The 'SoC' and 'board' elements are required. All other elements are optional.
+
+  The 'SoC' element must be one of the following strings:
+
+    apq8016
+    apq8074
+    apq8084
+    apq8096
+    msm8916
+    msm8974
+    msm8992
+    msm8994
+    msm8996
+    mdm9615
+    ipq8074
+    sdm845
+
+  The 'board' element must be one of the following strings:
+
+    cdp
+    liquid
+    dragonboard
+    mtp
+    sbc
+    hk01
+
+  The 'soc_version' and 'board_version' elements take the form of v<Major>.<Minor>
+  where the minor number may be omitted when it's zero, i.e. v1.0 is the same
+  as v1. If all versions of the 'board_version' elements match, then a
+  wildcard '*' should be used, e.g. 'v*'.
+
+  The 'foundry_id' and 'subtype' elements are one or more digits from 0 to 9.
+
+  Examples:
+
+    "qcom,msm8916-v1-cdp-pm8916-v2.1"
+
+  A CDP board with an msm8916 SoC, version 1 paired with a pm8916 PMIC of version
+  2.1.
+
+    "qcom,apq8074-v2.0-2-dragonboard/1-v0.1"
+
+  A dragonboard board v0.1 of subtype 1 with an apq8074 SoC version 2, made in
+  foundry 2.
+
+properties:
+  compatible:
+    oneOf:
+      - items:
+          - enum:
+              - qcom,apq8016-sbc
+          - const: qcom,apq8016
+
+      - items:
+          - enum:
+              - qcom,apq8064-cm-qs600
+              - qcom,apq8064-ifc6410
+          - const: qcom,apq8064
+
+      - items:
+          - enum:
+              - qcom,apq8074-dragonboard
+          - const: qcom,apq8074
+
+      - items:
+          - enum:
+              - qcom,apq8060-dragonboard
+              - qcom,msm8660-surf
+          - const: qcom,msm8660
+
+      - items:
+          - enum:
+              - qcom,apq8084-mtp
+              - qcom,apq8084-sbc
+          - const: qcom,apq8084
+
+      - items:
+          - enum:
+              - qcom,msm8960-cdp
+          - const: qcom,msm8960
+
+      - items:
+          - const: qcom,msm8916-mtp/1
+          - const: qcom,msm8916-mtp
+          - const: qcom,msm8916
+
+      - items:
+          - const: qcom,msm8996-mtp
+
+      - items:
+          - const: qcom,ipq4019
+
+      - items:
+          - enum:
+              - qcom,ipq8064-ap148
+          - const: qcom,ipq8064
+
+      - items:
+          - enum:
+              - qcom,ipq8074-hk01
+          - const: qcom,ipq8074
+
+...
diff --git a/Documentation/devicetree/bindings/arm/sirf.txt b/Documentation/devicetree/bindings/arm/sirf.txt
deleted file mode 100644
index 7b28ee6fee91..000000000000
--- a/Documentation/devicetree/bindings/arm/sirf.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-CSR SiRFprimaII and SiRFmarco device tree bindings.
-======================================== - -Required root node properties: - - compatible: - - "sirf,atlas6-cb" : atlas6 "cb" evaluation board - - "sirf,atlas6" : atlas6 device based board - - "sirf,atlas7-cb" : atlas7 "cb" evaluation board - - "sirf,atlas7" : atlas7 device based board - - "sirf,prima2-cb" : prima2 "cb" evaluation board - - "sirf,prima2" : prima2 device based board diff --git a/Documentation/devicetree/bindings/arm/sirf.yaml b/Documentation/devicetree/bindings/arm/sirf.yaml new file mode 100644 index 000000000000..0b597032c923 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/sirf.yaml @@ -0,0 +1,27 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/sirf.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: CSR SiRFprimaII and SiRFmarco device tree bindings. + +maintainers: + - Binghua Duan + - Barry Song + +properties: + $nodename: + const: '/' + compatible: + oneOf: + - items: + - const: sirf,atlas6-cb + - const: sirf,atlas6 + - items: + - const: sirf,atlas7-cb + - const: sirf,atlas7 + - items: + - const: sirf,prima2-cb + - const: sirf,prima2 +... diff --git a/Documentation/devicetree/bindings/arm/spear.txt b/Documentation/devicetree/bindings/arm/spear.txt deleted file mode 100644 index 0d42949df6c2..000000000000 --- a/Documentation/devicetree/bindings/arm/spear.txt +++ /dev/null @@ -1,26 +0,0 @@ -ST SPEAr Platforms Device Tree Bindings ---------------------------------------- - -Boards with the ST SPEAr600 SoC shall have the following properties: -Required root node property: -compatible = "st,spear600"; - -Boards with the ST SPEAr300 SoC shall have the following properties: -Required root node property: -compatible = "st,spear300"; - -Boards with the ST SPEAr310 SoC shall have the following properties: -Required root node property: -compatible = "st,spear310"; - -Boards with the ST SPEAr320 SoC shall have the following properties: -Required root node property: -compatible = "st,spear320"; - -Boards with the ST SPEAr1310 SoC shall have the following properties: -Required root node property: -compatible = "st,spear1310"; - -Boards with the ST SPEAr1340 SoC shall have the following properties: -Required root node property: -compatible = "st,spear1340"; diff --git a/Documentation/devicetree/bindings/arm/spear.yaml b/Documentation/devicetree/bindings/arm/spear.yaml new file mode 100644 index 000000000000..f6ec731c9531 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/spear.yaml @@ -0,0 +1,25 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/spear.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: ST SPEAr Platforms Device Tree Bindings + +maintainers: + - Viresh Kumar + - Stefan Roese + +properties: + $nodename: + const: '/' + compatible: + items: + - enum: + - st,spear600 + - st,spear300 + - st,spear310 + - st,spear320 + - st,spear1310 + - st,spear1340 +... 
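Like the other board schemas converted in this series, spear.yaml only
constrains the root node. A minimal dts sketch for one of the listed SoCs (the
model string is illustrative; only the "st,spear600" compatible comes from the
binding):

	/dts-v1/;

	/ {
		model = "ST SPEAr600 based board";	/* illustrative */
		compatible = "st,spear600";
	};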
diff --git a/Documentation/devicetree/bindings/arm/sti.txt b/Documentation/devicetree/bindings/arm/sti.txt deleted file mode 100644 index 8d27f6b084c7..000000000000 --- a/Documentation/devicetree/bindings/arm/sti.txt +++ /dev/null @@ -1,23 +0,0 @@ -ST STi Platforms Device Tree Bindings ---------------------------------------- - -Boards with the ST STiH415 SoC shall have the following properties: -Required root node property: -compatible = "st,stih415"; - -Boards with the ST STiH416 SoC shall have the following properties: -Required root node property: -compatible = "st,stih416"; - -Boards with the ST STiH407 SoC shall have the following properties: -Required root node property: -compatible = "st,stih407"; - -Boards with the ST STiH410 SoC shall have the following properties: -Required root node property: -compatible = "st,stih410"; - -Boards with the ST STiH418 SoC shall have the following properties: -Required root node property: -compatible = "st,stih418"; - diff --git a/Documentation/devicetree/bindings/arm/sti.yaml b/Documentation/devicetree/bindings/arm/sti.yaml new file mode 100644 index 000000000000..47f9b8eebaa0 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/sti.yaml @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/sti.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: ST STi Platforms Device Tree Bindings + +maintainers: + - Patrice Chotard + +properties: + $nodename: + const: '/' + compatible: + items: + - enum: + - st,stih415 + - st,stih416 + - st,stih407 + - st,stih410 + - st,stih418 +... diff --git a/Documentation/devicetree/bindings/arm/tegra.txt b/Documentation/devicetree/bindings/arm/tegra.txt deleted file mode 100644 index c59b15f64346..000000000000 --- a/Documentation/devicetree/bindings/arm/tegra.txt +++ /dev/null @@ -1,65 +0,0 @@ -NVIDIA Tegra device tree bindings -------------------------------------------- - -SoCs -------------------------------------------- - -Each device tree must specify which Tegra SoC it uses, using one of the -following compatible values: - - nvidia,tegra20 - nvidia,tegra30 - nvidia,tegra114 - nvidia,tegra124 - nvidia,tegra132 - nvidia,tegra210 - nvidia,tegra186 - nvidia,tegra194 - -Boards -------------------------------------------- - -Each device tree must specify which one or more of the following -board-specific compatible values: - - ad,medcom-wide - ad,plutux - ad,tamonten - ad,tec - compal,paz00 - compulab,trimslice - nvidia,beaver - nvidia,cardhu - nvidia,cardhu-a02 - nvidia,cardhu-a04 - nvidia,dalmore - nvidia,harmony - nvidia,jetson-tk1 - nvidia,norrin - nvidia,p2371-0000 - nvidia,p2371-2180 - nvidia,p2571 - nvidia,p2771-0000 - nvidia,p2972-0000 - nvidia,roth - nvidia,seaboard - nvidia,tn7 - nvidia,ventana - toradex,apalis_t30 - toradex,apalis_t30-eval - toradex,apalis_t30-v1.1 - toradex,apalis_t30-v1.1-eval - toradex,apalis-tk1 - toradex,apalis-tk1-eval - toradex,apalis-tk1-v1.2 - toradex,apalis-tk1-v1.2-eval - toradex,colibri_t20 - toradex,colibri_t20-eval-v3 - toradex,colibri_t20-iris - toradex,colibri_t30 - toradex,colibri_t30-eval-v3 - -Trusted Foundations -------------------------------------------- -Tegra supports the Trusted Foundation secure monitor. See the -"tlm,trusted-foundations" binding's documentation for more details. 
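The tegra.yaml schema that follows pairs each board compatible with its SoC
compatible. A minimal dts sketch combining two values documented there (the
model string is illustrative):

	/dts-v1/;

	/ {
		model = "NVIDIA Tegra20 Harmony";	/* illustrative */
		compatible = "nvidia,harmony", "nvidia,tegra20";
	};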
diff --git a/Documentation/devicetree/bindings/arm/tegra.yaml b/Documentation/devicetree/bindings/arm/tegra.yaml new file mode 100644 index 000000000000..fbcde8a7e067 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/tegra.yaml @@ -0,0 +1,101 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/tegra.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: NVIDIA Tegra device tree bindings + +maintainers: + - Thierry Reding + - Jonathan Hunter + +properties: + compatible: + oneOf: + - items: + - enum: + - compal,paz00 + - compulab,trimslice + - nvidia,harmony + - nvidia,seaboard + - nvidia,ventana + - const: nvidia,tegra20 + - items: + - enum: + - ad,medcom-wide + - ad,plutux + - ad,tec + - const: ad,tamonten + - const: nvidia,tegra20 + - items: + - enum: + - toradex,colibri_t20-eval-v3 + - toradex,colibri_t20-iris + - const: toradex,colibri_t20 + - const: nvidia,tegra20 + - items: + - enum: + - nvidia,beaver + - const: nvidia,tegra30 + - items: + - enum: + - nvidia,cardhu-a02 + - nvidia,cardhu-a04 + - const: nvidia,cardhu + - const: nvidia,tegra30 + - items: + - const: toradex,apalis_t30-eval + - const: toradex,apalis_t30 + - const: nvidia,tegra30 + - items: + - const: toradex,apalis_t30-eval-v1.1 + - const: toradex,apalis_t30-eval + - const: toradex,apalis_t30-v1.1 + - const: toradex,apalis_t30 + - const: nvidia,tegra30 + - items: + - enum: + - toradex,colibri_t30-eval-v3 + - const: toradex,colibri_t30 + - const: nvidia,tegra30 + - items: + - enum: + - nvidia,dalmore + - nvidia,roth + - nvidia,tn7 + - const: nvidia,tegra114 + - items: + - enum: + - nvidia,jetson-tk1 + - nvidia,venice2 + - const: nvidia,tegra124 + - items: + - const: toradex,apalis-tk1-eval + - const: toradex,apalis-tk1 + - const: nvidia,tegra124 + - items: + - const: toradex,apalis-tk1-v1.2-eval + - const: toradex,apalis-tk1-eval + - const: toradex,apalis-tk1-v1.2 + - const: toradex,apalis-tk1 + - const: nvidia,tegra124 + - items: + - enum: + - nvidia,norrin + - const: nvidia,tegra132 + - const: nvidia,tegra124 + - items: + - enum: + - nvidia,p2371-0000 + - nvidia,p2371-2180 + - nvidia,p2571 + - const: nvidia,tegra210 + - items: + - enum: + - nvidia,p2771-0000 + - const: nvidia,tegra186 + - items: + - enum: + - nvidia,p2972-0000 + - const: nvidia,tegra194 diff --git a/Documentation/devicetree/bindings/arm/ti/nspire.yaml b/Documentation/devicetree/bindings/arm/ti/nspire.yaml new file mode 100644 index 000000000000..e372b43da62f --- /dev/null +++ b/Documentation/devicetree/bindings/arm/ti/nspire.yaml @@ -0,0 +1,24 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/ti/nspire.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: TI-NSPIRE calculators + +maintainers: + - Daniel Tang + +properties: + $nodename: + const: '/' + compatible: + items: + - enum: + # CX models + - ti,nspire-cx + # Touchpad models + - ti,nspire-tp + # Clickpad models + - ti,nspire-clp +... 
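For reference, a root node matching the nspire binding above, mirroring the
example from the removed nspire.txt:

	/dts-v1/;

	/ {
		model = "TI-NSPIRE CX";
		compatible = "ti,nspire-cx";
	};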
diff --git a/Documentation/devicetree/bindings/arm/ti/ti,davinci.yaml b/Documentation/devicetree/bindings/arm/ti/ti,davinci.yaml
new file mode 100644
index 000000000000..4326d2cfa15d
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/ti/ti,davinci.yaml
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/arm/ti/davinci.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Texas Instruments DaVinci Platforms Device Tree Bindings
+
+maintainers:
+  - Sekhar Nori
+
+description:
+  DA850/OMAP-L138/AM18x based boards
+
+properties:
+  $nodename:
+    const: '/'
+  compatible:
+    items:
+      - enum:
+          - ti,da850-evm   # DA850/OMAP-L138/AM18x Evaluation Module (EVM) board
+          - ti,da850-lcdk  # DA850/OMAP-L138/AM18x L138/C6748 Development Kit (LCDK) board
+          - enbw,cmc       # EnBW AM1808 based CMC board
+          - lego,ev3       # LEGO MINDSTORMS EV3 (AM1808 based)
+      - const: ti,da850
+...
diff --git a/Documentation/devicetree/bindings/arm/vt8500.txt b/Documentation/devicetree/bindings/arm/vt8500.txt
deleted file mode 100644
index 87dc1ddf4770..000000000000
--- a/Documentation/devicetree/bindings/arm/vt8500.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-VIA/Wondermedia VT8500 Platforms Device Tree Bindings
----------------------------------------
-
-Boards with the VIA VT8500 SoC shall have the following properties:
-Required root node property:
-compatible = "via,vt8500";
-
-Boards with the Wondermedia WM8505 SoC shall have the following properties:
-Required root node property:
-compatible = "wm,wm8505";
-
-Boards with the Wondermedia WM8650 SoC shall have the following properties:
-Required root node property:
-compatible = "wm,wm8650";
-
-Boards with the Wondermedia WM8750 SoC shall have the following properties:
-Required root node property:
-compatible = "wm,wm8750";
-
-Boards with the Wondermedia WM8850 SoC shall have the following properties:
-Required root node property:
-compatible = "wm,wm8850";
diff --git a/Documentation/devicetree/bindings/arm/vt8500.yaml b/Documentation/devicetree/bindings/arm/vt8500.yaml
new file mode 100644
index 000000000000..7b25b6fa34e9
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/vt8500.yaml
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/arm/vt8500.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: VIA/Wondermedia VT8500 Platforms Device Tree Bindings
+
+maintainers:
+  - Tony Prisk
+description: VIA/Wondermedia VT8500 and compatible SoC based boards
+
+properties:
+  $nodename:
+    const: '/'
+  compatible:
+    items:
+      - enum:
+          - via,vt8500
+          - wm,wm8505
+          - wm,wm8650
+          - wm,wm8750
+          - wm,wm8850
diff --git a/Documentation/devicetree/bindings/arm/xilinx.txt b/Documentation/devicetree/bindings/arm/xilinx.txt
deleted file mode 100644
index 26fe5ecc4332..000000000000
--- a/Documentation/devicetree/bindings/arm/xilinx.txt
+++ /dev/null
@@ -1,83 +0,0 @@
-Xilinx Zynq Platforms Device Tree Bindings
-
-Boards with Zynq-7000 SOC based on an ARM Cortex A9 processor
-shall have the following properties.
- -Required root node properties: - - compatible = "xlnx,zynq-7000"; - -Additional compatible strings: - -- Adapteva Parallella board - "adapteva,parallella" - -- Avnet MicroZed board - "avnet,zynq-microzed" - "xlnx,zynq-microzed" - -- Avnet ZedBoard board - "avnet,zynq-zed" - "xlnx,zynq-zed" - -- Digilent Zybo board - "digilent,zynq-zybo" - -- Digilent Zybo Z7 board - "digilent,zynq-zybo-z7" - -- Xilinx CC108 internal board - "xlnx,zynq-cc108" - -- Xilinx ZC702 internal board - "xlnx,zynq-zc702" - -- Xilinx ZC706 internal board - "xlnx,zynq-zc706" - -- Xilinx ZC770 internal board, with different FMC cards - "xlnx,zynq-zc770-xm010" - "xlnx,zynq-zc770-xm011" - "xlnx,zynq-zc770-xm012" - "xlnx,zynq-zc770-xm013" - ---------------------------------------------------------------- - -Xilinx Zynq UltraScale+ MPSoC Platforms Device Tree Bindings - -Boards with ZynqMP SOC based on an ARM Cortex A53 processor -shall have the following properties. - -Required root node properties: - - compatible = "xlnx,zynqmp"; - - -Additional compatible strings: - -- Xilinx internal board zc1232 - "xlnx,zynqmp-zc1232-revA", "xlnx,zynqmp-zc1232" - -- Xilinx internal board zc1254 - "xlnx,zynqmp-zc1254-revA", "xlnx,zynqmp-zc1254" - -- Xilinx internal board zc1275 - "xlnx,zynqmp-zc1275-revA", "xlnx,zynqmp-zc1275" - -- Xilinx internal board zc1751 - "xlnx,zynqmp-zc1751" - -- Xilinx 96boards compatible board zcu100 - "xlnx,zynqmp-zcu100-revC", "xlnx,zynqmp-zcu100" - -- Xilinx evaluation board zcu102 - "xlnx,zynqmp-zcu102-revA", "xlnx,zynqmp-zcu102" - "xlnx,zynqmp-zcu102-revB", "xlnx,zynqmp-zcu102" - "xlnx,zynqmp-zcu102-rev1.0", "xlnx,zynqmp-zcu102" - -- Xilinx evaluation board zcu104 - "xlnx,zynqmp-zcu104-revA", "xlnx,zynqmp-zcu104" - -- Xilinx evaluation board zcu106 - "xlnx,zynqmp-zcu106-revA", "xlnx,zynqmp-zcu106" - -- Xilinx evaluation board zcu111 - "xlnx,zynqmp-zcu111-revA", "xlnx,zynqmp-zcu111" diff --git a/Documentation/devicetree/bindings/arm/xilinx.yaml b/Documentation/devicetree/bindings/arm/xilinx.yaml new file mode 100644 index 000000000000..c73b1f5c7f49 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/xilinx.yaml @@ -0,0 +1,114 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/arm/xilinx.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Xilinx Zynq Platforms Device Tree Bindings + +maintainers: + - Michal Simek + +description: | + Xilinx boards with Zynq-7000 SOC or Zynq UltraScale+ MPSoC + +properties: + $nodename: + const: '/' + compatible: + oneOf: + - items: + - enum: + - adapteva,parallella + - digilent,zynq-zybo + - digilent,zynq-zybo-z7 + - xlnx,zynq-cc108 + - xlnx,zynq-zc702 + - xlnx,zynq-zc706 + - xlnx,zynq-zc770-xm010 + - xlnx,zynq-zc770-xm011 + - xlnx,zynq-zc770-xm012 + - xlnx,zynq-zc770-xm013 + - const: xlnx,zynq-7000 + + - items: + - const: avnet,zynq-microzed + - const: xlnx,zynq-microzed + - const: xlnx,zynq-7000 + + - items: + - const: avnet,zynq-zed + - const: xlnx,zynq-zed + - const: xlnx,zynq-7000 + + - items: + - enum: + - xlnx,zynqmp-zc1751 + - const: xlnx,zynqmp + + - description: Xilinx internal board zc1232 + items: + - const: xlnx,zynqmp-zc1232-revA + - const: xlnx,zynqmp-zc1232 + - const: xlnx,zynqmp + + - description: Xilinx internal board zc1254 + items: + - const: xlnx,zynqmp-zc1254-revA + - const: xlnx,zynqmp-zc1254 + - const: xlnx,zynqmp + + - description: Xilinx internal board zc1275 + items: + - const: xlnx,zynqmp-zc1275-revA + - const: xlnx,zynqmp-zc1275 + - const: xlnx,zynqmp + + - description: Xilinx 
96boards compatible board zcu100
+        items:
+          - const: xlnx,zynqmp-zcu100-revC
+          - const: xlnx,zynqmp-zcu100
+          - const: xlnx,zynqmp
+
+      - description: Xilinx 96boards compatible board Ultra96
+        items:
+          - const: avnet,ultra96-rev1
+          - const: avnet,ultra96
+          - const: xlnx,zynqmp-zcu100-revC
+          - const: xlnx,zynqmp-zcu100
+          - const: xlnx,zynqmp
+
+      - description: Xilinx evaluation board zcu102
+        items:
+          - enum:
+              - xlnx,zynqmp-zcu102-revA
+              - xlnx,zynqmp-zcu102-revB
+              - xlnx,zynqmp-zcu102-rev1.0
+          - const: xlnx,zynqmp-zcu102
+          - const: xlnx,zynqmp
+
+      - description: Xilinx evaluation board zcu104
+        items:
+          - enum:
+              - xlnx,zynqmp-zcu104-revA
+              - xlnx,zynqmp-zcu104-rev1.0
+          - const: xlnx,zynqmp-zcu104
+          - const: xlnx,zynqmp
+
+      - description: Xilinx evaluation board zcu106
+        items:
+          - enum:
+              - xlnx,zynqmp-zcu106-revA
+              - xlnx,zynqmp-zcu106-rev1.0
+          - const: xlnx,zynqmp-zcu106
+          - const: xlnx,zynqmp
+
+      - description: Xilinx evaluation board zcu111
+        items:
+          - enum:
+              - xlnx,zynqmp-zcu111-revA
+              - xlnx,zynqmp-zcu111-rev1.0
+          - const: xlnx,zynqmp-zcu111
+          - const: xlnx,zynqmp
+
+...
diff --git a/Documentation/devicetree/bindings/arm/zte.txt b/Documentation/devicetree/bindings/arm/zte.txt
deleted file mode 100644
index 340612794a37..000000000000
--- a/Documentation/devicetree/bindings/arm/zte.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-ZTE platforms device tree bindings
-
----------------------------------------
-- ZX296702 board:
-  Required root node properties:
-    - compatible = "zte,zx296702-ad1", "zte,zx296702"
-
----------------------------------------
-- ZX296718 SoC:
-  Required root node properties:
-    - compatible = "zte,zx296718"
-
-ZX296718 EVB board:
-  - "zte,zx296718-evb"
diff --git a/Documentation/devicetree/bindings/arm/zte.yaml b/Documentation/devicetree/bindings/arm/zte.yaml
new file mode 100644
index 000000000000..2d3fefdccdff
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/zte.yaml
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/arm/zte.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: ZTE platforms device tree bindings
+
+maintainers:
+  - Jun Nie
+
+properties:
+  $nodename:
+    const: '/'
+  compatible:
+    oneOf:
+      - items:
+          - enum:
+              - zte,zx296702-ad1
+          - const: zte,zx296702
+      - items:
+          - enum:
+              - zte,zx296718-evb
+          - const: zte,zx296718
+
+...
diff --git a/Documentation/devicetree/bindings/clock/qoriq-clock.txt b/Documentation/devicetree/bindings/clock/qoriq-clock.txt
index 97f46adac85f..c655f28d5918 100644
--- a/Documentation/devicetree/bindings/clock/qoriq-clock.txt
+++ b/Documentation/devicetree/bindings/clock/qoriq-clock.txt
@@ -28,6 +28,12 @@ Required properties:
 	* "fsl,p4080-clockgen"
 	* "fsl,p5020-clockgen"
 	* "fsl,p5040-clockgen"
+	* "fsl,t1023-clockgen"
+	* "fsl,t1024-clockgen"
+	* "fsl,t1040-clockgen"
+	* "fsl,t1042-clockgen"
+	* "fsl,t2080-clockgen"
+	* "fsl,t2081-clockgen"
 	* "fsl,t4240-clockgen"
 	* "fsl,b4420-clockgen"
 	* "fsl,b4860-clockgen"
diff --git a/Documentation/devicetree/bindings/connector/usb-connector.txt b/Documentation/devicetree/bindings/connector/usb-connector.txt
index d90e17e2428b..a9a2f2fc44f2 100644
--- a/Documentation/devicetree/bindings/connector/usb-connector.txt
+++ b/Documentation/devicetree/bindings/connector/usb-connector.txt
@@ -14,6 +14,8 @@ Optional properties:
 - label: symbolic name for the connector,
 - type: size of the connector, should be specified in case of USB-A, USB-B
   non-fullsize connectors: "mini", "micro".
+- self-powered: Set this property if the USB device has its own power
+  source.
 
 Optional properties for usb-c-connector:
 - power-role: should be one of "source", "sink" or "dual"(DRP) if typec
diff --git a/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt b/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
index 999fb2a810f6..6130e6eb4af8 100644
--- a/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
+++ b/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
@@ -1,8 +1,12 @@
 Arm TrustZone CryptoCell cryptographic engine
 
 Required properties:
-- compatible: Should be one of: "arm,cryptocell-712-ree",
-  "arm,cryptocell-710-ree" or "arm,cryptocell-630p-ree".
+- compatible: Should be one of:
+  "arm,cryptocell-713-ree"
+  "arm,cryptocell-703-ree"
+  "arm,cryptocell-712-ree"
+  "arm,cryptocell-710-ree"
+  "arm,cryptocell-630p-ree"
 - reg: Base physical address of the engine and length of memory mapped
   region.
 - interrupts: Interrupt number for the device.
diff --git a/Documentation/devicetree/bindings/crypto/fsl-dcp.txt b/Documentation/devicetree/bindings/crypto/fsl-dcp.txt
index 76a0b4e80e83..4e4d387e38a5 100644
--- a/Documentation/devicetree/bindings/crypto/fsl-dcp.txt
+++ b/Documentation/devicetree/bindings/crypto/fsl-dcp.txt
@@ -6,6 +6,8 @@ Required properties:
 - interrupts : Should contain MXS DCP interrupt numbers, VMI IRQ and DCP IRQ
   must be supplied, optionally Secure IRQ can be present, but
   is currently not implemented and not used.
+- clocks : Clock reference (only required on some SOCs: 6ull and 6sll).
+- clock-names : Must be "dcp".
 
 Example:
diff --git a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt b/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
new file mode 100644
index 000000000000..3fe0961bcf64
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
@@ -0,0 +1,33 @@
+* Mediatek UART APDMA Controller
+
+Required properties:
+- compatible should contain:
+  * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
+  * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
+
+- reg: The base address of the APDMA register bank.
+
+- interrupts: A single interrupt specifier.
+
+- clocks : Must contain an entry for each entry in clock-names.
+  See ../clocks/clock-bindings.txt for details.
+- clock-names: The APDMA clock for register accesses
+
+Examples:
+
+	apdma: dma-controller@11000380 {
+		compatible = "mediatek,mt2712-uart-dma";
+		reg = <0 0x11000380 0 0x400>;
+		interrupts = ,
+			     ,
+			     ,
+			     ,
+			     ,
+			     ,
+			     ,
+			     ;
+		clocks = <&pericfg CLK_PERI_AP_DMA>;
+		clock-names = "apdma";
+		#dma-cells = <1>;
+	};
+
diff --git a/Documentation/devicetree/bindings/example-schema.yaml b/Documentation/devicetree/bindings/example-schema.yaml
new file mode 100644
index 000000000000..9175d67f355d
--- /dev/null
+++ b/Documentation/devicetree/bindings/example-schema.yaml
@@ -0,0 +1,170 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright 2018 Linaro Ltd.
+%YAML 1.2
+---
+# All the top-level keys are standard json-schema keywords except for
+# 'maintainers' and 'select'
+
+# $id is a unique identifier based on the filename. There may or may not be a
+# file present at the URL.
+$id: "http://devicetree.org/schemas/example-schema.yaml#"
+# $schema is the meta-schema this schema should be validated with.
+$schema: "http://devicetree.org/meta-schemas/core.yaml#" + +title: An example schema annotated with jsonschema details + +maintainers: + - Rob Herring + +description: | + A more detailed multi-line description of the binding. + + Details about the hardware device and any links to datasheets can go here. + + Literal blocks are marked with the '|' at the beginning. The end is marked by + indentation less than the first line of the literal block. Lines also cannot + begin with a tab character. + +select: false + # 'select' is a schema applied to a DT node to determine if this binding + # schema should be applied to the node. It is optional and by default the + # possible compatible strings are extracted and used to match. + + # In this case, a 'false' schema will never match. + +properties: + # A dictionary of DT properties for this binding schema + compatible: + # More complicated schema can use oneOf (XOR), anyOf (OR), or allOf (AND) + # to handle different conditions. + # In this case, it's needed to handle a variable number of values as there + # isn't another way to express a constraint of the last string value. + # The boolean schema must be a list of schemas. + oneOf: + - items: + # items is a list of possible values for the property. The number of + # values is determined by the number of elements in the list. + # Order in lists is significant, order in dicts is not + # Must be one of the 1st enums followed by the 2nd enum + # + # Each element in items should be 'enum' or 'const' + - enum: + - vendor,soc4-ip + - vendor,soc3-ip + - vendor,soc2-ip + - enum: + - vendor,soc1-ip + # additionalItems being false is implied + # minItems/maxItems equal to 2 is implied + - items: + # 'const' is just a special case of an enum with a single possible value + - const: vendor,soc1-ip + + reg: + # The core schema already checks that reg values are numbers, so device + # specific schema don't need to do those checks. + # The description of each element defines the order and implicitly defines + # the number of reg entries. + items: + - description: core registers + - description: aux registers + # minItems/maxItems equal to 2 is implied + + reg-names: + # The core schema enforces this is a string array + items: + - const: core + - const: aux + + clocks: + # Cases that have only a single entry just need to express that with maxItems + maxItems: 1 + description: bus clock + + clock-names: + items: + - const: bus + + interrupts: + # Either 1 or 2 interrupts can be present + minItems: 1 + maxItems: 2 + items: + - description: tx or combined interrupt + - description: rx interrupt + description: + A variable number of interrupts warrants a description of what conditions + affect the number of interrupts. Otherwise, descriptions on standard + properties are not necessary. + + interrupt-names: + # minItems must be specified here because the default would be 2 + minItems: 1 + maxItems: 2 + items: + - const: tx irq + - const: rx irq + + # Property names starting with '#' must be quoted + '#interrupt-cells': + # A simple case where the value must always be '2'. + # The core schema handles that this must be a single integer. + const: 2 + + interrupt-controller: true + # The core checks this is a boolean, so just have to list it here to be + # valid for this binding. + + clock-frequency: + # The type is set in the core schema. Per device schema only need to set + # constraints on the possible values. 
+    minimum: 100
+    maximum: 400000
+    # The value that should be used if the property is not present
+    default: 200
+
+  foo-gpios:
+    maxItems: 1
+    description: A connection of the 'foo' gpio line.
+
+  vendor,int-property:
+    description: Vendor specific properties must have a description
+    # 'allOf' is the json-schema way of subclassing a schema. Here the base
+    # type schema is referenced and then additional constraints on the values
+    # are added.
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/uint32
+      - enum: [2, 4, 6, 8, 10]
+
+  vendor,bool-property:
+    description: Vendor specific properties must have a description
+    # boolean properties is one case where the json-schema 'type' keyword
+    # can be used directly
+    type: boolean
+
+  vendor,string-array-property:
+    description: Vendor specific properties should reference a type in the
+      core schema.
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/string-array
+      - items:
+          - enum: [ foo, bar ]
+          - enum: [ baz, boo ]
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - interrupt-controller
+
+examples:
+  # Examples are now compiled with dtc
+  - |
+    node@1000 {
+        compatible = "vendor,soc4-ip", "vendor,soc1-ip";
+        reg = <0x1000 0x80>,
+              <0x3000 0x80>;
+        reg-names = "core", "aux";
+        interrupts = <10>;
+        interrupt-controller;
+    };
diff --git a/Documentation/devicetree/bindings/firmware/intel,stratix10-svc.txt b/Documentation/devicetree/bindings/firmware/intel,stratix10-svc.txt
new file mode 100644
index 000000000000..1fa66065acc6
--- /dev/null
+++ b/Documentation/devicetree/bindings/firmware/intel,stratix10-svc.txt
@@ -0,0 +1,57 @@
+Intel Service Layer Driver for Stratix10 SoC
+============================================
+Intel Stratix10 SoC is composed of a 64 bit quad-core ARM Cortex A53 hard
+processor system (HPS) and Secure Device Manager (SDM). When the FPGA is
+configured from HPS, there needs to be a way for HPS to notify SDM of the
+location and size of the configuration data. Then SDM will get the
+configuration data from that location and perform the FPGA configuration.
+
+To meet the whole system security needs and support virtual machine requesting
+communication with SDM, only the secure world of software (EL3, Exception
+Level 3) can interface with SDM. All software entities running at other
+exception levels must channel through the EL3 software whenever they need
+service from SDM.
+
+Intel Stratix10 service layer driver, running at privileged exception level
+(EL1, Exception Level 1), interfaces with the service providers and provides
+the services for FPGA configuration, QSPI, Crypto and warm reset. The service
+layer driver also manages secure monitor calls (SMC) to communicate with the
+secure monitor code running in EL3.
+
+Required properties:
+-------------------
+The svc node has the following mandatory properties and must be located under
+the firmware node.
+
+- compatible: "intel,stratix10-svc"
+- method: smc or hvc
+        smc - Secure Monitor Call
+        hvc - Hypervisor Call
+- memory-region:
+	phandle to the reserved memory node. See
+	Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+	for details
+
+Example:
+-------
+
+	reserved-memory {
+		#address-cells = <2>;
+		#size-cells = <2>;
+		ranges;
+
+		service_reserved: svcbuffer@0 {
+			compatible = "shared-dma-pool";
+			reg = <0x0 0x0 0x0 0x1000000>;
+			alignment = <0x1000>;
+			no-map;
+		};
+	};
+
+	firmware {
+		svc {
+			compatible = "intel,stratix10-svc";
+			method = "smc";
+			memory-region = <&service_reserved>;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/fpga/intel-stratix10-soc-fpga-mgr.txt b/Documentation/devicetree/bindings/fpga/intel-stratix10-soc-fpga-mgr.txt
new file mode 100644
index 000000000000..6e03f79287fb
--- /dev/null
+++ b/Documentation/devicetree/bindings/fpga/intel-stratix10-soc-fpga-mgr.txt
@@ -0,0 +1,17 @@
+Intel Stratix10 SoC FPGA Manager
+
+Required properties:
+The fpga_mgr node has the following mandatory property and must be located
+under the firmware/svc node.
+
+- compatible : should contain "intel,stratix10-soc-fpga-mgr"
+
+Example:
+
+	firmware {
+		svc {
+			fpga_mgr: fpga-mgr {
+				compatible = "intel,stratix10-soc-fpga-mgr";
+			};
+		};
+	};
diff --git a/Documentation/devicetree/bindings/fsi/ibm,p9-occ.txt b/Documentation/devicetree/bindings/fsi/ibm,p9-occ.txt
new file mode 100644
index 000000000000..99ca9862a586
--- /dev/null
+++ b/Documentation/devicetree/bindings/fsi/ibm,p9-occ.txt
@@ -0,0 +1,16 @@
+Device-tree bindings for FSI-attached POWER9 On-Chip Controller (OCC)
+---------------------------------------------------------------------
+
+This is the binding for the P9 On-Chip Controller accessed over FSI from a
+service processor. See fsi.txt for details on bindings for FSI slave and CFAM
+nodes. The OCC is not an FSI slave device itself; rather, it is accessed
+through the SBE fifo.
+
+Required properties:
+ - compatible = "ibm,p9-occ"
+
+Examples:
+
+    occ {
+        compatible = "ibm,p9-occ";
+    };
diff --git a/Documentation/devicetree/bindings/gpio/cdns,gpio.txt b/Documentation/devicetree/bindings/gpio/cdns,gpio.txt
new file mode 100644
index 000000000000..706ef00f5c64
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpio/cdns,gpio.txt
@@ -0,0 +1,43 @@
+Cadence GPIO controller bindings
+
+Required properties:
+- compatible: should be "cdns,gpio-r1p02".
+- reg: the register base address and size.
+- #gpio-cells: should be 2.
+  * first cell is the GPIO number.
+  * second cell specifies the GPIO flags, as defined in
+    <dt-bindings/gpio/gpio.h>. Only the GPIO_ACTIVE_HIGH
+    and GPIO_ACTIVE_LOW flags are supported.
+- gpio-controller: marks the device as a GPIO controller.
+- clocks: should contain one entry referencing the peripheral clock driving
+  the GPIO controller.
+
+Optional properties:
+- ngpios: integer number of gpio lines supported by this controller, up to 32.
+- interrupts: interrupt specifier for the controller's interrupt.
+- interrupt-controller: marks the device as an interrupt controller. When
+  defined, interrupts, interrupt-parent and #interrupt-cells
+  are required.
+- #interrupt-cells: should be 2.
+  * first cell is the GPIO number you want to use as an IRQ source.
+  * second cell specifies the IRQ type, as defined in
+    <dt-bindings/interrupt-controller/irq.h>.
+    Currently only level sensitive IRQs are supported.
+
+
+Example:
+	gpio0: gpio-controller@fd060000 {
+		compatible = "cdns,gpio-r1p02";
+		reg = <0xfd060000 0x1000>;
+
+		clocks = <&gpio_clk>;
+
+		interrupt-parent = <&gic>;
+		interrupts = <0 5 IRQ_TYPE_LEVEL_HIGH>;
+
+		gpio-controller;
+		#gpio-cells = <2>;
+
+		interrupt-controller;
+		#interrupt-cells = <2>;
+	};
diff --git a/Documentation/devicetree/bindings/gpio/gpio-omap.txt b/Documentation/devicetree/bindings/gpio/gpio-omap.txt
index 8d950522e7fa..e57b2cb28f6c 100644
--- a/Documentation/devicetree/bindings/gpio/gpio-omap.txt
+++ b/Documentation/devicetree/bindings/gpio/gpio-omap.txt
@@ -5,6 +5,8 @@ Required properties:
 - "ti,omap2-gpio" for OMAP2 controllers
 - "ti,omap3-gpio" for OMAP3 controllers
 - "ti,omap4-gpio" for OMAP4 controllers
+- reg : Physical base address of the controller and length of memory mapped
+  region.
 - gpio-controller : Marks the device node as a GPIO controller.
 - #gpio-cells : Should be two.
   - first cell is the pin number
@@ -18,6 +20,8 @@ Required properties:
   2 = high-to-low edge triggered.
   4 = active high level-sensitive.
   8 = active low level-sensitive.
+- interrupts : The interrupt the controller raises as output when an
+  interrupt occurs
 
 OMAP specific properties:
 - ti,hwmods: Name of the hwmod associated to the GPIO:
@@ -29,11 +33,13 @@ OMAP specific properties:
 
 Example:
 
-gpio4: gpio4 {
+gpio0: gpio@44e07000 {
     compatible = "ti,omap4-gpio";
-    ti,hwmods = "gpio4";
+    reg = <0x44e07000 0x1000>;
+    ti,hwmods = "gpio1";
     gpio-controller;
     #gpio-cells = <2>;
     interrupt-controller;
     #interrupt-cells = <2>;
+    interrupts = <96>;
 };
diff --git a/Documentation/devicetree/bindings/gpio/gpio-vf610.txt b/Documentation/devicetree/bindings/gpio/gpio-vf610.txt
index 0ccbae44019c..ae254aadee35 100644
--- a/Documentation/devicetree/bindings/gpio/gpio-vf610.txt
+++ b/Documentation/devicetree/bindings/gpio/gpio-vf610.txt
@@ -24,6 +24,12 @@ Required properties for GPIO node:
     4 = active high level-sensitive.
     8 = active low level-sensitive.
 
+Optional properties:
+- clocks : Must contain an entry for each entry in clock-names.
+  See common clock-bindings.txt for details.
+- clock-names : A list of clock names. For imx7ulp, it must contain
+  "gpio", "port".
+
 Note: Each GPIO port should have an alias correctly numbered in "aliases"
 node.
diff --git a/Documentation/devicetree/bindings/gpio/nxp,lpc1850-gpio.txt b/Documentation/devicetree/bindings/gpio/nxp,lpc1850-gpio.txt
index eb7cdd69e10b..627efc78ecf2 100644
--- a/Documentation/devicetree/bindings/gpio/nxp,lpc1850-gpio.txt
+++ b/Documentation/devicetree/bindings/gpio/nxp,lpc1850-gpio.txt
@@ -3,12 +3,24 @@ NXP LPC18xx/43xx GPIO controller Device Tree Bindings
 
 Required properties:
 - compatible : Should be "nxp,lpc1850-gpio"
-- reg : Address and length of the register set for the device
-- clocks : Clock specifier (see clock bindings for details)
-- gpio-controller : Marks the device node as a GPIO controller.
-- #gpio-cells : Should be two - - First cell is the GPIO line number - - Second cell is used to specify polarity +- reg : List of addresses and lengths of the GPIO controller + register sets +- reg-names : Should be "gpio", "gpio-pin-ic", "gpio-group0-ic" and + "gpio-gpoup1-ic" +- clocks : Phandle and clock specifier pair for GPIO controller +- resets : Phandle and reset specifier pair for GPIO controller +- gpio-controller : Marks the device node as a GPIO controller +- #gpio-cells : Should be two: + - The first cell is the GPIO line number + - The second cell is used to specify polarity +- interrupt-controller : Marks the device node as an interrupt controller +- #interrupt-cells : Should be two: + - The first cell is an interrupt number within + 0..9 range, for GPIO pin interrupts it is equal + to 'nxp,gpio-pin-interrupt' property value of + GPIO pin configuration, 8 is for GPIO GROUP0 + interrupt, 9 is for GPIO GROUP1 interrupt + - The second cell is used to specify interrupt type Optional properties: - gpio-ranges : Mapping between GPIO and pinctrl @@ -19,21 +31,29 @@ Example: gpio: gpio@400f4000 { compatible = "nxp,lpc1850-gpio"; - reg = <0x400f4000 0x4000>; + reg = <0x400f4000 0x4000>, <0x40087000 0x1000>, + <0x40088000 0x1000>, <0x40089000 0x1000>; + reg-names = "gpio", "gpio-pin-ic", + "gpio-group0-ic", "gpio-gpoup1-ic"; clocks = <&ccu1 CLK_CPU_GPIO>; + resets = <&rgu 28>; gpio-controller; #gpio-cells = <2>; + interrupt-controller; + #interrupt-cells = <2>; gpio-ranges = <&pinctrl LPC_GPIO(0,0) LPC_PIN(0,0) 2>, ... <&pinctrl LPC_GPIO(7,19) LPC_PIN(f,5) 7>; }; gpio_joystick { - compatible = "gpio-keys-polled"; + compatible = "gpio-keys"; ... - button@0 { + button0 { ... + interrupt-parent = <&gpio>; + interrupts = <1 IRQ_TYPE_EDGE_BOTH>; gpios = <&gpio LPC_GPIO(4,8) GPIO_ACTIVE_LOW>; }; }; diff --git a/Documentation/devicetree/bindings/gpio/renesas,gpio-rcar.txt b/Documentation/devicetree/bindings/gpio/renesas,gpio-rcar.txt index 2889bbcd7416..f3f2c468c1b6 100644 --- a/Documentation/devicetree/bindings/gpio/renesas,gpio-rcar.txt +++ b/Documentation/devicetree/bindings/gpio/renesas,gpio-rcar.txt @@ -8,6 +8,7 @@ Required Properties: - "renesas,gpio-r8a7745": for R8A7745 (RZ/G1E) compatible GPIO controller. - "renesas,gpio-r8a77470": for R8A77470 (RZ/G1C) compatible GPIO controller. - "renesas,gpio-r8a774a1": for R8A774A1 (RZ/G2M) compatible GPIO controller. + - "renesas,gpio-r8a774c0": for R8A774C0 (RZ/G2E) compatible GPIO controller. - "renesas,gpio-r8a7778": for R8A7778 (R-Car M1) compatible GPIO controller. - "renesas,gpio-r8a7779": for R8A7779 (R-Car H1) compatible GPIO controller. - "renesas,gpio-r8a7790": for R8A7790 (R-Car H2) compatible GPIO controller. 
diff --git a/Documentation/devicetree/bindings/gpio/snps-dwapb-gpio.txt b/Documentation/devicetree/bindings/gpio/snps-dwapb-gpio.txt
index 7276b50c3506..839dd32ffe11 100644
--- a/Documentation/devicetree/bindings/gpio/snps-dwapb-gpio.txt
+++ b/Documentation/devicetree/bindings/gpio/snps-dwapb-gpio.txt
@@ -43,7 +43,7 @@ gpio: gpio@20000 {
 	#address-cells = <1>;
 	#size-cells = <0>;
 
-	porta: gpio-controller@0 {
+	porta: gpio@0 {
 		compatible = "snps,dw-apb-gpio-port";
 		gpio-controller;
 		#gpio-cells = <2>;
@@ -55,7 +55,7 @@ gpio: gpio@20000 {
 		interrupts = <0>;
 	};
 
-	portb: gpio-controller@1 {
+	portb: gpio@1 {
 		compatible = "snps,dw-apb-gpio-port";
 		gpio-controller;
 		#gpio-cells = <2>;
diff --git a/Documentation/devicetree/bindings/hwmon/adm1275.txt b/Documentation/devicetree/bindings/hwmon/adm1275.txt
new file mode 100644
index 000000000000..1ecd03f3da4d
--- /dev/null
+++ b/Documentation/devicetree/bindings/hwmon/adm1275.txt
@@ -0,0 +1,25 @@
+adm1275 properties
+
+Required properties:
+- compatible: Must be one of the supported compatible strings:
+	- "adi,adm1075" for adm1075
+	- "adi,adm1272" for adm1272
+	- "adi,adm1275" for adm1275
+	- "adi,adm1276" for adm1276
+	- "adi,adm1278" for adm1278
+	- "adi,adm1293" for adm1293
+	- "adi,adm1294" for adm1294
+- reg: I2C address
+
+Optional properties:
+
+- shunt-resistor-micro-ohms
+	Shunt resistor value in micro-Ohm
+
+Example:
+
+adm1272@10 {
+	compatible = "adi,adm1272";
+	reg = <0x10>;
+	shunt-resistor-micro-ohms = <500>;
+};
diff --git a/Documentation/devicetree/bindings/hwmon/lm90.txt b/Documentation/devicetree/bindings/hwmon/lm90.txt
index 97581266e329..c76a7ac47c34 100644
--- a/Documentation/devicetree/bindings/hwmon/lm90.txt
+++ b/Documentation/devicetree/bindings/hwmon/lm90.txt
@@ -23,6 +23,7 @@ Required node properties:
 	"onnn,nct1008"
 	"winbond,w83l771"
 	"nxp,sa56004"
+	"ti,tmp451"
 
 - reg: I2C bus address of the device
diff --git a/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt b/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt
index c3b9c4cfe8df..37f18d684f6a 100644
--- a/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt
+++ b/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt
@@ -4,6 +4,7 @@ NTC Thermistor hwmon sensors
 Requires node properties:
 - "compatible" value : one of
 	"epcos,b57330v2103"
+	"epcos,b57891s0103"
 	"murata,ncp15wb473"
 	"murata,ncp18wb473"
 	"murata,ncp21wb473"
diff --git a/Documentation/devicetree/bindings/hwmon/tmp108.txt b/Documentation/devicetree/bindings/hwmon/tmp108.txt
index 8c4b10df86d9..54d4beed4ee5 100644
--- a/Documentation/devicetree/bindings/hwmon/tmp108.txt
+++ b/Documentation/devicetree/bindings/hwmon/tmp108.txt
@@ -7,6 +7,10 @@ Requires node properties:
 - compatible : "ti,tmp108"
 - reg : the I2C address of the device. This is 0x48, 0x49, 0x4a, or 0x4b.
 
+Optional properties:
+- interrupts: Reference to the TMP108 alert interrupt.
+- #thermal-sensor-cells: should be set to 0.
+
 Example:
 	tmp108@48 {
 		compatible = "ti,tmp108";
diff --git a/Documentation/devicetree/bindings/i2c/i2c-gpio.txt b/Documentation/devicetree/bindings/i2c/i2c-gpio.txt
deleted file mode 100644
index 38a05562d1d2..000000000000
--- a/Documentation/devicetree/bindings/i2c/i2c-gpio.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-Device-Tree bindings for i2c gpio driver
-
-Required properties:
-	- compatible = "i2c-gpio";
-	- sda-gpios: gpio used for the sda signal, this should be flagged as
-	  active high using open drain with (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)
-	  from <dt-bindings/gpio/gpio.h> since the signal is by definition
-	  open drain.
-	- scl-gpios: gpio used for the scl signal, this should be flagged as
-	  active high using open drain with (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)
-	  from <dt-bindings/gpio/gpio.h> since the signal is by definition
-	  open drain.
-
-Optional properties:
-	- i2c-gpio,scl-output-only: scl as output only
-	- i2c-gpio,delay-us: delay between GPIO operations (may depend on each platform)
-	- i2c-gpio,timeout-ms: timeout to get data
-
-Deprecated properties, do not use in new device tree sources:
-	- gpios: sda and scl gpio, alternative for {sda,scl}-gpios
-	- i2c-gpio,sda-open-drain: this means that something outside of our
-	  control has put the GPIO line used for SDA into open drain mode, and
-	  that something is not the GPIO chip. It is essentially an
-	  inconsistency flag.
-	- i2c-gpio,scl-open-drain: this means that something outside of our
-	  control has put the GPIO line used for SCL into open drain mode, and
-	  that something is not the GPIO chip. It is essentially an
-	  inconsistency flag.
-
-Example nodes:
-
-#include <dt-bindings/gpio/gpio.h>
-
-i2c@0 {
-	compatible = "i2c-gpio";
-	sda-gpios = <&pioA 23 (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)>;
-	scl-gpios = <&pioA 24 (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)>;
-	i2c-gpio,delay-us = <2>;	/* ~100 kHz */
-	#address-cells = <1>;
-	#size-cells = <0>;
-
-	rv3029c2@56 {
-		compatible = "rv3029c2";
-		reg = <0x56>;
-	};
-};
diff --git a/Documentation/devicetree/bindings/i2c/i2c-gpio.yaml b/Documentation/devicetree/bindings/i2c/i2c-gpio.yaml
new file mode 100644
index 000000000000..da6129090a8e
--- /dev/null
+++ b/Documentation/devicetree/bindings/i2c/i2c-gpio.yaml
@@ -0,0 +1,73 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/i2c/i2c-gpio.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Bindings for GPIO bitbanged I2C
+
+maintainers:
+  - Wolfram Sang
+
+allOf:
+  - $ref: /schemas/i2c/i2c-controller.yaml#
+
+properties:
+  compatible:
+    items:
+      - const: i2c-gpio
+
+  sda-gpios:
+    description:
+      gpio used for the sda signal, this should be flagged as
+      active high using open drain with (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)
+      from <dt-bindings/gpio/gpio.h> since the signal is by definition
+      open drain.
+    maxItems: 1
+
+  scl-gpios:
+    description:
+      gpio used for the scl signal, this should be flagged as
+      active high using open drain with (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)
+      from <dt-bindings/gpio/gpio.h> since the signal is by definition
+      open drain.
+    maxItems: 1
+
+  i2c-gpio,scl-output-only:
+    description: scl as output only
+    type: boolean
+
+  i2c-gpio,delay-us:
+    description: delay between GPIO operations (may depend on each platform)
+    $ref: /schemas/types.yaml#/definitions/uint32
+
+  i2c-gpio,timeout-ms:
+    description: timeout to get data
+    $ref: /schemas/types.yaml#/definitions/uint32
+
+  # Deprecated properties, do not use in new device tree sources:
+  gpios:
+    minItems: 2
+    maxItems: 2
+    description: sda and scl gpio, alternative for {sda,scl}-gpios
+
+  i2c-gpio,sda-open-drain:
+    # Generate a warning if present
+    not: true
+    description: this means that something outside of our control has put
+      the GPIO line used for SDA into open drain mode, and that something is
+      not the GPIO chip. It is essentially an inconsistency flag.
+
+  i2c-gpio,scl-open-drain:
+    # Generate a warning if present
+    not: true
+    description: this means that something outside of our control has put the
+      GPIO line used for SCL into open drain mode, and that something is not
+      the GPIO chip. It is essentially an inconsistency flag.
+
+required:
+  - compatible
+  - sda-gpios
+  - scl-gpios
+
+...
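Since the schema above ships without an example, here is a minimal consumer node for reference; it is a sketch that simply mirrors the example removed with the old text binding:

#include <dt-bindings/gpio/gpio.h>

i2c {
	compatible = "i2c-gpio";
	sda-gpios = <&pioA 23 (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)>;
	scl-gpios = <&pioA 24 (GPIO_ACTIVE_HIGH|GPIO_OPEN_DRAIN)>;
	i2c-gpio,delay-us = <2>;	/* ~100 kHz */
	#address-cells = <1>;
	#size-cells = <0>;
};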
diff --git a/Documentation/devicetree/bindings/i2c/ibm,p8-occ-hwmon.txt b/Documentation/devicetree/bindings/i2c/ibm,p8-occ-hwmon.txt new file mode 100644 index 000000000000..5dc5d2e2573d --- /dev/null +++ b/Documentation/devicetree/bindings/i2c/ibm,p8-occ-hwmon.txt @@ -0,0 +1,25 @@ +Device-tree bindings for I2C-based On-Chip Controller hwmon device +------------------------------------------------------------------ + +Required properties: + - compatible = "ibm,p8-occ-hwmon"; + - reg = ; : I2C bus address + +Examples: + + i2c-bus@100 { + #address-cells = <1>; + #size-cells = <0>; + clock-frequency = <100000>; + < more properties > + + occ-hwmon@1 { + compatible = "ibm,p8-occ-hwmon"; + reg = <0x50>; + }; + + occ-hwmon@2 { + compatible = "ibm,p8-occ-hwmon"; + reg = <0x51>; + }; + }; diff --git a/Documentation/devicetree/bindings/iio/accel/lis302.txt b/Documentation/devicetree/bindings/iio/accel/lis302.txt index dfdce67826ba..764e28ec1a0a 100644 --- a/Documentation/devicetree/bindings/iio/accel/lis302.txt +++ b/Documentation/devicetree/bindings/iio/accel/lis302.txt @@ -64,7 +64,7 @@ Optional properties for all bus drivers: Example for a SPI device node: - lis302@0 { + accelerometer@0 { compatible = "st,lis302dl-spi"; reg = <0>; spi-max-frequency = <1000000>; @@ -89,7 +89,7 @@ Example for a SPI device node: Example for a I2C device node: - lis331dlh: lis331dlh@18 { + lis331dlh: accelerometer@18 { compatible = "st,lis331dlh", "st,lis3lv02d"; reg = <0x18>; Vdd-supply = <&lis3_reg>; diff --git a/Documentation/devicetree/bindings/iio/adc/ad7949.txt b/Documentation/devicetree/bindings/iio/adc/ad7949.txt new file mode 100644 index 000000000000..c7f5057356b1 --- /dev/null +++ b/Documentation/devicetree/bindings/iio/adc/ad7949.txt @@ -0,0 +1,16 @@ +* Analog Devices AD7949/AD7682/AD7689 + +Required properties: + - compatible: Should be one of + * "adi,ad7949" + * "adi,ad7682" + * "adi,ad7689" + - reg: spi chip select number for the device + - vref-supply: The regulator supply for ADC reference voltage + +Example: +adc@0 { + compatible = "adi,ad7949"; + reg = <0>; + vref-supply = <&vdd_supply>; +}; diff --git a/Documentation/devicetree/bindings/iio/adc/adc.txt b/Documentation/devicetree/bindings/iio/adc/adc.txt new file mode 100644 index 000000000000..5bbaa330a250 --- /dev/null +++ b/Documentation/devicetree/bindings/iio/adc/adc.txt @@ -0,0 +1,23 @@ +Common ADCs properties + +Optional properties for child nodes: +- bipolar : Boolean, if set the channel is used in bipolar mode. +- diff-channels : Differential channels muxed for this ADC. The first value + specifies the positive input pin, the second value the negative + input pin. + +Example: + adc@0 { + compatible = "some,adc"; + ... + channel@0 { + bipolar; + diff-channels = <0 1>; + ... + }; + + channel@1 { + diff-channels = <2 3>; + ... + }; + }; diff --git a/Documentation/devicetree/bindings/iio/adc/adi,ad7124.txt b/Documentation/devicetree/bindings/iio/adc/adi,ad7124.txt new file mode 100644 index 000000000000..416273dce569 --- /dev/null +++ b/Documentation/devicetree/bindings/iio/adc/adi,ad7124.txt @@ -0,0 +1,75 @@ +Analog Devices AD7124 ADC device driver + +Required properties for the AD7124: + - compatible: Must be one of "adi,ad7124-4" or "adi,ad7124-8" + - reg: SPI chip select number for the device + - spi-max-frequency: Max SPI frequency to use + see: Documentation/devicetree/bindings/spi/spi-bus.txt + - clocks: phandle to the master clock (mclk) + see: Documentation/devicetree/bindings/clock/clock-bindings.txt + - clock-names: Must be "mclk". 
+ - interrupts: IRQ line for the ADC
+   see: Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
+
+  Required properties:
+   * #address-cells: Must be 1.
+   * #size-cells: Must be 0.
+
+  Subnode(s) represent the external channels which are connected to the ADC.
+  Each subnode represents one channel and has the following properties:
+	Required properties:
+	 * reg: The channel number. It can have up to 4 channels on ad7124-4
+	   and 8 channels on ad7124-8, numbered from 0 to 15.
+	 * diff-channels: see: Documentation/devicetree/bindings/iio/adc/adc.txt
+
+	Optional properties:
+	 * bipolar: see: Documentation/devicetree/bindings/iio/adc/adc.txt
+	 * adi,reference-select: Select the reference source to use when
+	   converting on the specific channel. Valid values are:
+	   0: REFIN1(+)/REFIN1(−).
+	   1: REFIN2(+)/REFIN2(−).
+	   3: AVDD
+	   If this field is left empty, internal reference is selected.
+
+Optional properties:
+ - refin1-supply: refin1 supply can be used as reference for conversion.
+ - refin2-supply: refin2 supply can be used as reference for conversion.
+ - avdd-supply: avdd supply can be used as reference for conversion.
+
+Example:
+	adc@0 {
+		compatible = "adi,ad7124-4";
+		reg = <0>;
+		spi-max-frequency = <5000000>;
+		interrupts = <25 2>;
+		interrupt-parent = <&gpio>;
+		refin1-supply = <&adc_vref>;
+		clocks = <&ad7124_mclk>;
+		clock-names = "mclk";
+
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		channel@0 {
+			reg = <0>;
+			diff-channels = <0 1>;
+			adi,reference-select = <0>;
+		};
+
+		channel@1 {
+			reg = <1>;
+			bipolar;
+			diff-channels = <2 3>;
+			adi,reference-select = <0>;
+		};
+
+		channel@2 {
+			reg = <2>;
+			diff-channels = <4 5>;
+		};
+
+		channel@3 {
+			reg = <3>;
+			diff-channels = <6 7>;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/iio/adc/amlogic,meson-saradc.txt b/Documentation/devicetree/bindings/iio/adc/amlogic,meson-saradc.txt
index 54b823f3a453..325090e43ce6 100644
--- a/Documentation/devicetree/bindings/iio/adc/amlogic,meson-saradc.txt
+++ b/Documentation/devicetree/bindings/iio/adc/amlogic,meson-saradc.txt
@@ -22,6 +22,12 @@ Required properties:
 - vref-supply: the regulator supply for the ADC reference voltage
 - #io-channel-cells: must be 1, see ../iio-bindings.txt

+Optional properties:
+- nvmem-cells: phandle to the temperature_calib eFuse cells
+- nvmem-cell-names: if present (to enable the temperature sensor
+		    calibration) this must contain "temperature_calib"
+
+
 Example:
 	saradc: adc@8680 {
 		compatible = "amlogic,meson-gxl-saradc", "amlogic,meson-saradc";
diff --git a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
index 6c49db7f8ad2..a10c1f89037d 100644
--- a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+++ b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
@@ -11,7 +11,7 @@ New driver handles the following

 Required properties:
 - compatible:		Must be "samsung,exynos-adc-v1"
-				for exynos4412/5250 and s5pv210 controllers.
+				for exynos4412/5250 controllers.
 			Must be "samsung,exynos-adc-v2" for
 				future controllers.
Must be "samsung,exynos3250-adc" for @@ -28,6 +28,8 @@ Required properties: the ADC in s3c2443 and compatibles Must be "samsung,s3c6410-adc" for the ADC in s3c6410 and compatibles + Must be "samsung,s5pv210-adc" for + the ADC in s5pv210 and compatibles - reg: List of ADC register address range - The base address and range of ADC register - The base address and range of ADC_PHY register (every diff --git a/Documentation/devicetree/bindings/iio/adc/ti-adc128s052.txt b/Documentation/devicetree/bindings/iio/adc/ti-adc128s052.txt index daa2b2c29428..c07ce1a3f5c4 100644 --- a/Documentation/devicetree/bindings/iio/adc/ti-adc128s052.txt +++ b/Documentation/devicetree/bindings/iio/adc/ti-adc128s052.txt @@ -1,7 +1,14 @@ * Texas Instruments' ADC128S052, ADC122S021 and ADC124S021 ADC chip Required properties: - - compatible: Should be "ti,adc128s052", "ti,adc122s021" or "ti,adc124s021" + - compatible: Should be one of: + - "ti,adc128s052" + - "ti,adc122s021" + - "ti,adc122s051" + - "ti,adc122s101" + - "ti,adc124s021" + - "ti,adc124s051" + - "ti,adc124s101" - reg: spi chip select number for the device - vref-supply: The regulator supply for ADC reference voltage diff --git a/Documentation/devicetree/bindings/iio/dac/ti,dac7311.txt b/Documentation/devicetree/bindings/iio/dac/ti,dac7311.txt new file mode 100644 index 000000000000..e5a507db5e01 --- /dev/null +++ b/Documentation/devicetree/bindings/iio/dac/ti,dac7311.txt @@ -0,0 +1,23 @@ +TI DAC7311 device tree bindings + +Required properties: +- compatible: must be set to: + * "ti,dac7311" + * "ti,dac6311" + * "ti,dac5311" +- reg: spi chip select number for the device +- vref-supply: The regulator supply for ADC reference voltage + +Optional properties: +- spi-max-frequency: Max SPI frequency to use + +Example: + + spi_master { + dac@0 { + compatible = "ti,dac7311"; + reg = <0>; /* CS0 */ + spi-max-frequency = <1000000>; + vref-supply = <&vdd_supply>; + }; + }; diff --git a/Documentation/devicetree/bindings/iio/imu/st_lsm6dsx.txt b/Documentation/devicetree/bindings/iio/imu/st_lsm6dsx.txt index 879322ad50fd..69d53d98d0f0 100644 --- a/Documentation/devicetree/bindings/iio/imu/st_lsm6dsx.txt +++ b/Documentation/devicetree/bindings/iio/imu/st_lsm6dsx.txt @@ -13,6 +13,7 @@ Required properties: Optional properties: - st,drdy-int-pin: the pin on the package that will be used to signal "data ready" (valid values: 1 or 2). +- st,pullups : enable/disable internal i2c controller pullup resistors. - drive-open-drain: the interrupt/data ready line will be configured as open drain, which is useful if several sensors share the same interrupt line. This is a boolean property. 
diff --git a/Documentation/devicetree/bindings/iio/light/vcnl4035.txt b/Documentation/devicetree/bindings/iio/light/vcnl4035.txt new file mode 100644 index 000000000000..c07c7f052556 --- /dev/null +++ b/Documentation/devicetree/bindings/iio/light/vcnl4035.txt @@ -0,0 +1,18 @@ +VISHAY VCNL4035 - Ambient Light and proximity sensor + +Link to datasheet: https://www.vishay.com/docs/84251/vcnl4035x01.pdf + +Required properties: + + -compatible: should be "vishay,vcnl4035" + -reg: I2C address of the sensor, should be 0x60 + -interrupts: interrupt mapping for GPIO IRQ (level active low) + +Example: + +light-sensor@60 { + compatible = "vishay,vcnl4035"; + reg = <0x60>; + interrupt-parent = <&gpio4>; + interrupts = <11 IRQ_TYPE_LEVEL_LOW>; +}; diff --git a/Documentation/devicetree/bindings/iio/magnetometer/mag3110.txt b/Documentation/devicetree/bindings/iio/magnetometer/mag3110.txt new file mode 100644 index 000000000000..bdd40bcaaa1f --- /dev/null +++ b/Documentation/devicetree/bindings/iio/magnetometer/mag3110.txt @@ -0,0 +1,27 @@ +* FREESCALE MAG3110 magnetometer sensor + +Required properties: + + - compatible : should be "fsl,mag3110" + - reg : the I2C address of the magnetometer + +Optional properties: + + - interrupts: the sole interrupt generated by the device + + Refer to interrupt-controller/interrupts.txt for generic interrupt client + node bindings. + + - vdd-supply: phandle to the regulator that provides power to the sensor. + - vddio-supply: phandle to the regulator that provides power to the sensor's IO. + +Example: + +magnetometer@e { + compatible = "fsl,mag3110"; + reg = <0x0e>; + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_i2c3_mag3110_int>; + interrupt-parent = <&gpio3>; + interrupts = <16 IRQ_TYPE_EDGE_RISING>; +}; diff --git a/Documentation/devicetree/bindings/iio/magnetometer/pni,rm3100.txt b/Documentation/devicetree/bindings/iio/magnetometer/pni,rm3100.txt new file mode 100644 index 000000000000..497c932e9e39 --- /dev/null +++ b/Documentation/devicetree/bindings/iio/magnetometer/pni,rm3100.txt @@ -0,0 +1,20 @@ +* PNI RM3100 3-axis magnetometer sensor + +Required properties: + +- compatible : should be "pni,rm3100" +- reg : the I2C address or SPI chip select number of the sensor. + +Optional properties: + +- interrupts: data ready (DRDY) from the chip. + The interrupts can be triggered on level high. + +Example: + +rm3100: rm3100@20 { + compatible = "pni,rm3100"; + reg = <0x20>; + interrupt-parent = <&gpio0>; + interrupts = <4 IRQ_TYPE_LEVEL_HIGH>; +}; diff --git a/Documentation/devicetree/bindings/iio/potentiometer/mcp41010.txt b/Documentation/devicetree/bindings/iio/potentiometer/mcp41010.txt new file mode 100644 index 000000000000..566711b9950c --- /dev/null +++ b/Documentation/devicetree/bindings/iio/potentiometer/mcp41010.txt @@ -0,0 +1,28 @@ +* Microchip MCP41010/41050/41100/42010/42050/42100 Digital Potentiometer + +Datasheet publicly available at: +http://ww1.microchip.com/downloads/en/devicedoc/11195c.pdf + +The node for this driver must be a child node of a SPI controller, hence +all mandatory properties described in + + Documentation/devicetree/bindings/spi/spi-bus.txt + +must be specified. 
+
+Required properties:
+	- compatible:	Must be one of the following, depending on the
+			model:
+			"microchip,mcp41010"
+			"microchip,mcp41050"
+			"microchip,mcp41100"
+			"microchip,mcp42010"
+			"microchip,mcp42050"
+			"microchip,mcp42100"
+
+Example:
+potentiometer@0 {
+	compatible = "microchip,mcp41010";
+	reg = <0>;
+	spi-max-frequency = <500000>;
+};
diff --git a/Documentation/devicetree/bindings/iio/resolver/ad2s90.txt b/Documentation/devicetree/bindings/iio/resolver/ad2s90.txt
new file mode 100644
index 000000000000..477d41fa6467
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/resolver/ad2s90.txt
@@ -0,0 +1,31 @@
+Analog Devices AD2S90 Resolver-to-Digital Converter
+
+https://www.analog.com/en/products/ad2s90.html
+
+Required properties:
+  - compatible: should be "adi,ad2s90"
+  - reg: SPI chip select number for the device
+  - spi-max-frequency: set maximum clock frequency, must be 830000
+  - spi-cpol and spi-cpha:
+    Either SPI mode (0,0) or (1,1) must be used, so specify none or both of
+    spi-cpha, spi-cpol.
+
+See for more details:
+    Documentation/devicetree/bindings/spi/spi-bus.txt
+
+Note about max frequency:
+    The chip's max frequency, as specified in its datasheet, is 2 MHz. But a
+    600 ns delay is expected between the application of a logic LO to CS and
+    the application of SCLK, as also specified. And since the delay is not
+    implemented in the spi code, to satisfy it, SCLK's period should be at
+    most 2 * 600 ns, so the max frequency should be 1 / (2 * 6e-7), which
+    gives roughly 830000 Hz.
+
+Example:
+resolver@0 {
+	compatible = "adi,ad2s90";
+	reg = <0>;
+	spi-max-frequency = <830000>;
+	spi-cpol;
+	spi-cpha;
+};
diff --git a/Documentation/devicetree/bindings/iio/st-sensors.txt b/Documentation/devicetree/bindings/iio/st-sensors.txt
index 6f626f73417e..ddcb95509599 100644
--- a/Documentation/devicetree/bindings/iio/st-sensors.txt
+++ b/Documentation/devicetree/bindings/iio/st-sensors.txt
@@ -48,6 +48,7 @@ Accelerometers:
 - st,lis3l02dq
 - st,lis2dw12
 - st,lis3dhh
+- st,lis3de

 Gyroscopes:
 - st,l3g4200d-gyro
@@ -67,6 +68,7 @@ Magnetometers:
 - st,lsm303dlm-magn
 - st,lis3mdl-magn
 - st,lis2mdl
+- st,lsm9ds1-magn

 Pressure sensors:
 - st,lps001wp-press
diff --git a/Documentation/devicetree/bindings/interrupt-controller/mrvl,intc.txt b/Documentation/devicetree/bindings/interrupt-controller/mrvl,intc.txt
index 8b53273cb22f..608fee15a4cf 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/mrvl,intc.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/mrvl,intc.txt
@@ -5,7 +5,7 @@ Required properties:
   "mrvl,mmp2-mux-intc"
 - reg : Address and length of the register set of the interrupt controller.
   If the interrupt controller is intc, address and length means the range
-  of the whold interrupt controller. If the interrupt controller is mux-intc,
+  of the whole interrupt controller. If the interrupt controller is mux-intc,
   address and length means one register. Since address of mux-intc is in
   the range of intc. mux-intc is secondary interrupt controller.
 - reg-names : Name of the register set of the interrupt controller. It's
diff --git a/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt b/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
index 01fdc33a41d0..bb7e896cb644 100644
--- a/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
+++ b/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
@@ -10,7 +10,7 @@ such as network interfaces, crypto accelerator instances, L2 switches, etc.
 For an overview of the DPAA2 architecture and fsl-mc bus see:
-Documentation/networking/dpaa2/overview.rst
+Documentation/networking/device_drivers/freescale/dpaa2/overview.rst

 As described in the above overview, all DPAA2 objects in a DPRC share the
 same hardware "isolation context" and a 10-bit value called an ICID
diff --git a/Documentation/devicetree/bindings/misc/pvpanic-mmio.txt b/Documentation/devicetree/bindings/misc/pvpanic-mmio.txt
new file mode 100644
index 000000000000..985e90736780
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/pvpanic-mmio.txt
@@ -0,0 +1,29 @@
+* QEMU PVPANIC MMIO Configuration bindings
+
+QEMU's emulation / virtualization targets provide the following PVPANIC
+MMIO Configuration interface on the "virt" machine type:
+
+- a read-write, 16-bit wide data register.
+
+QEMU exposes the data register to guests as memory mapped registers.
+
+Required properties:
+
+- compatible: "qemu,pvpanic-mmio".
+- reg: the MMIO region used by the device.
+	* Bytes 0x0  Write panic event to the reg when guest OS panics.
+	* Bytes 0x1  Reserved.
+
+Example:
+
+/ {
+	#size-cells = <0x2>;
+	#address-cells = <0x2>;
+
+	pvpanic-mmio@9060000 {
+		compatible = "qemu,pvpanic-mmio";
+		reg = <0x0 0x9060000 0x0 0x2>;
+	};
+};
+
diff --git a/Documentation/devicetree/bindings/mmc/arasan,sdhci.txt b/Documentation/devicetree/bindings/mmc/arasan,sdhci.txt
index e2effe17f05e..1edbb049cccb 100644
--- a/Documentation/devicetree/bindings/mmc/arasan,sdhci.txt
+++ b/Documentation/devicetree/bindings/mmc/arasan,sdhci.txt
@@ -16,6 +16,10 @@ Required Properties:
     - "rockchip,rk3399-sdhci-5.1", "arasan,sdhci-5.1": rk3399 eMMC PHY
       For this device it is strongly suggested to include arasan,soc-ctl-syscon.
     - "ti,am654-sdhci-5.1", "arasan,sdhci-5.1": TI AM654 MMC PHY
+	Note: This binding has been deprecated and moved to [5].
+
+  [5] Documentation/devicetree/bindings/mmc/sdhci-am654.txt
+
 - reg: From mmc bindings: Register location and length.
 - clocks: From clock bindings: Handles to clock inputs.
 - clock-names: From clock bindings: Tuple including "clk_xin" and "clk_ahb"
diff --git a/Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt b/Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt
index 3e29050ec769..9201a7d8d7b0 100644
--- a/Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt
+++ b/Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt
@@ -16,6 +16,7 @@ Required properties:
 	       "fsl,imx6sl-usdhc"
 	       "fsl,imx6sx-usdhc"
 	       "fsl,imx7d-usdhc"
+	       "fsl,imx8qxp-usdhc"

 Optional properties:
 - fsl,wp-controller : Indicate to use controller internal write protection
diff --git a/Documentation/devicetree/bindings/mmc/sdhci-am654.txt b/Documentation/devicetree/bindings/mmc/sdhci-am654.txt
new file mode 100644
index 000000000000..15dbbbace27e
--- /dev/null
+++ b/Documentation/devicetree/bindings/mmc/sdhci-am654.txt
@@ -0,0 +1,36 @@
+Device Tree Bindings for the SDHCI Controllers present on TI's AM654 SOCs
+
+The bindings follow the mmc[1], clock[2] and interrupt[3] bindings.
+Only deviations are documented here.
+
+  [1] Documentation/devicetree/bindings/mmc/mmc.txt
+  [2] Documentation/devicetree/bindings/clock/clock-bindings.txt
+  [3] Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
+
+Required Properties:
+	- compatible: should be "ti,am654-sdhci-5.1"
+	- reg: Must be two entries.
+		- The first should be the sdhci register space
+		- The second should be the subsystem/phy register space
+	- clocks: Handles to the clock inputs.
+	- clock-names: Tuple including "clk_xin" and "clk_ahb"
+	- interrupts: Interrupt specifiers
+	- ti,otap-del-sel: Output Tap Delay select
+	- ti,trm-icp: DLL trim select
+	- ti,driver-strength-ohm: driver strength in ohms.
+	  Valid values are 33, 40, 50, 66 and 100 ohms.
+
+Example:
+
+	sdhci0: sdhci@4f80000 {
+		compatible = "ti,am654-sdhci-5.1";
+		reg = <0x0 0x4f80000 0x0 0x260>, <0x0 0x4f90000 0x0 0x134>;
+		power-domains = <&k3_pds 47>;
+		clocks = <&k3_clks 47 0>, <&k3_clks 47 1>;
+		clock-names = "clk_ahb", "clk_xin";
+		interrupts = ;
+		sdhci-caps-mask = <0x80000007 0x0>;
+		mmc-ddr-1_8v;
+		ti,otap-del-sel = <0x2>;
+		ti,trm-icp = <0x8>;
+	};
diff --git a/Documentation/devicetree/bindings/mmc/sdhci-msm.txt b/Documentation/devicetree/bindings/mmc/sdhci-msm.txt
index 502b3b851ebb..da4edb146a98 100644
--- a/Documentation/devicetree/bindings/mmc/sdhci-msm.txt
+++ b/Documentation/devicetree/bindings/mmc/sdhci-msm.txt
@@ -4,15 +4,28 @@ This file documents differences between the core properties in mmc.txt
 and the properties used by the sdhci-msm driver.

 Required properties:
-- compatible: Should contain:
+- compatible: Should contain an SoC-specific string and an IP version string:
+	version strings:
 		"qcom,sdhci-msm-v4" for sdcc versions less than 5.0
-		"qcom,sdhci-msm-v5" for sdcc versions >= 5.0
+		"qcom,sdhci-msm-v5" for sdcc version 5.0
 		For SDCC version 5.0.0, MCI registers are removed from SDCC
 		interface and some registers are moved to HC. New compatible
 		string is added to support this change - "qcom,sdhci-msm-v5".
+	full compatible strings with SoC and version:
+		"qcom,apq8084-sdhci", "qcom,sdhci-msm-v4"
+		"qcom,msm8974-sdhci", "qcom,sdhci-msm-v4"
+		"qcom,msm8916-sdhci", "qcom,sdhci-msm-v4"
+		"qcom,msm8992-sdhci", "qcom,sdhci-msm-v4"
+		"qcom,msm8996-sdhci", "qcom,sdhci-msm-v4"
+		"qcom,sdm845-sdhci", "qcom,sdhci-msm-v5"
+		"qcom,qcs404-sdhci", "qcom,sdhci-msm-v5"
+	NOTE that some old device tree files may be floating around that only
+	have the string "qcom,sdhci-msm-v4" without the SoC compatible string
+	but doing that should be considered a deprecated practice.
+
 - reg: Base address and length of the register in the following order:
 	- Host controller register map (required)
-	- SD Core register map (required)
+	- SD Core register map (required for msm-v4 and below)
 - interrupts: Should contain interrupt specifiers for the interrupts:
 	- Host controller interrupt (required)
 - pinctrl-names: Should contain only one value - "default".
@@ -29,7 +42,7 @@ Required properties:

 Example:

 	sdhc_1: sdhci@f9824900 {
-		compatible = "qcom,sdhci-msm-v4";
+		compatible = "qcom,msm8974-sdhci", "qcom,sdhci-msm-v4";
 		reg = <0xf9824900 0x11c>, <0xf9824000 0x800>;
 		interrupts = <0 123 0>;
 		bus-width = <8>;
@@ -46,7 +59,7 @@ Example:
 	};

 	sdhc_2: sdhci@f98a4900 {
-		compatible = "qcom,sdhci-msm-v4";
+		compatible = "qcom,msm8974-sdhci", "qcom,sdhci-msm-v4";
 		reg = <0xf98a4900 0x11c>, <0xf98a4000 0x800>;
 		interrupts = <0 125 0>;
 		bus-width = <4>;
diff --git a/Documentation/devicetree/bindings/mmc/sdhci-omap.txt b/Documentation/devicetree/bindings/mmc/sdhci-omap.txt
index 393848c2138e..72c4dec7e1db 100644
--- a/Documentation/devicetree/bindings/mmc/sdhci-omap.txt
+++ b/Documentation/devicetree/bindings/mmc/sdhci-omap.txt
@@ -2,6 +2,8 @@

 Refer to mmc.txt for standard MMC bindings.

+For UHS devices which require tuning, the device tree should have a "cpu_thermal" node which maps to the appropriate thermal zone. This is used to get the temperature of the zone during tuning.
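Such a "cpu_thermal" zone is just a standard thermal zone; as a sketch (the sensor phandle and polling delays below are illustrative assumptions, see bindings/thermal/thermal.txt for the generic properties):

	thermal-zones {
		cpu_thermal: cpu_thermal {
			polling-delay-passive = <250>;	/* ms */
			polling-delay = <500>;		/* ms */
			thermal-sensors = <&bandgap>;
			...
		};
	};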
+ Required properties: - compatible: Should be "ti,dra7-sdhci" for DRA7 and DRA72 controllers Should be "ti,k2g-sdhci" for K2G diff --git a/Documentation/devicetree/bindings/mmc/tmio_mmc.txt b/Documentation/devicetree/bindings/mmc/tmio_mmc.txt index 27f2eab2981d..2b4f17ca9087 100644 --- a/Documentation/devicetree/bindings/mmc/tmio_mmc.txt +++ b/Documentation/devicetree/bindings/mmc/tmio_mmc.txt @@ -13,12 +13,14 @@ Required properties: - compatible: should contain one or more of the following: "renesas,sdhi-sh73a0" - SDHI IP on SH73A0 SoC "renesas,sdhi-r7s72100" - SDHI IP on R7S72100 SoC + "renesas,sdhi-r7s9210" - SDHI IP on R7S9210 SoC "renesas,sdhi-r8a73a4" - SDHI IP on R8A73A4 SoC "renesas,sdhi-r8a7740" - SDHI IP on R8A7740 SoC "renesas,sdhi-r8a7743" - SDHI IP on R8A7743 SoC "renesas,sdhi-r8a7744" - SDHI IP on R8A7744 SoC "renesas,sdhi-r8a7745" - SDHI IP on R8A7745 SoC "renesas,sdhi-r8a774a1" - SDHI IP on R8A774A1 SoC + "renesas,sdhi-r8a774c0" - SDHI IP on R8A774C0 SoC "renesas,sdhi-r8a77470" - SDHI IP on R8A77470 SoC "renesas,sdhi-mmc-r8a77470" - SDHI/MMC IP on R8A77470 SoC "renesas,sdhi-r8a7778" - SDHI IP on R8A7778 SoC @@ -56,7 +58,7 @@ Required properties: "core" and "cd". If the controller only has 1 clock, naming is not required. Devices which have more than 1 clock are listed below: - 2: R7S72100 + 2: R7S72100, R7S9210 Optional properties: - pinctrl-names: should be "default", "state_uhs" diff --git a/Documentation/devicetree/bindings/net/broadcom-bluetooth.txt b/Documentation/devicetree/bindings/net/broadcom-bluetooth.txt index 4194ff7e6ee6..c26f4e11037c 100644 --- a/Documentation/devicetree/bindings/net/broadcom-bluetooth.txt +++ b/Documentation/devicetree/bindings/net/broadcom-bluetooth.txt @@ -10,6 +10,8 @@ device the slave device is attached to. Required properties: - compatible: should contain one of the following: + * "brcm,bcm20702a1" + * "brcm,bcm4330-bt" * "brcm,bcm43438-bt" Optional properties: @@ -18,8 +20,13 @@ Optional properties: - shutdown-gpios: GPIO specifier, used to enable the BT module - device-wakeup-gpios: GPIO specifier, used to wakeup the controller - host-wakeup-gpios: GPIO specifier, used to wakeup the host processor - - clocks: clock specifier if external clock provided to the controller - - clock-names: should be "extclk" + - clocks: 1 or 2 clocks as defined in clock-names below, in that order + - clock-names: names for clock inputs, matching the clocks given + - "extclk": deprecated, replaced by "txco" + - "txco": external reference clock (not a standalone crystal) + - "lpo": external low power 32.768 kHz clock + - vbat-supply: phandle to regulator supply for VBAT + - vddio-supply: phandle to regulator supply for VDDIO Example: diff --git a/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt b/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt index bfc0c433654f..bc77477c6878 100644 --- a/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt +++ b/Documentation/devicetree/bindings/net/can/fsl-flexcan.txt @@ -24,6 +24,14 @@ Optional properties: if this property is present then controller is assumed to be big endian. +- fsl,stop-mode: register bits of stop mode control, the format is + <&gpr req_gpr req_bit ack_gpr ack_bit>. + gpr is the phandle to general purpose register node. + req_gpr is the gpr register offset of CAN stop request. + req_bit is the bit offset of CAN stop request. + ack_gpr is the gpr register offset of CAN stop acknowledge. + ack_bit is the bit offset of CAN stop acknowledge. 
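As a sketch of the new property in use (the GPR offsets and bit numbers below are illustrative i.MX6-style values, not normative; take the real ones from the SoC reference manual):

	flexcan1: can@2090000 {
		compatible = "fsl,imx6q-flexcan";
		...
		fsl,stop-mode = <&gpr 0x34 28 0x10 17>;
	};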
+
 Example:

	can@1c000 {
diff --git a/Documentation/devicetree/bindings/net/can/xilinx_can.txt b/Documentation/devicetree/bindings/net/can/xilinx_can.txt
index 060e2d46bad9..100cc40b8510 100644
--- a/Documentation/devicetree/bindings/net/can/xilinx_can.txt
+++ b/Documentation/devicetree/bindings/net/can/xilinx_can.txt
@@ -6,6 +6,7 @@ Required properties:
	  - "xlnx,zynq-can-1.0" for Zynq CAN controllers
	  - "xlnx,axi-can-1.00.a" for Axi CAN controllers
	  - "xlnx,canfd-1.0" for CAN FD controllers
+	  - "xlnx,canfd-2.0" for CAN FD 2.0 controllers
 - reg			: Physical base address and size of the controller
			  registers map.
 - interrupts		: Property with a value describing the interrupt
diff --git a/Documentation/devicetree/bindings/net/cpsw.txt b/Documentation/devicetree/bindings/net/cpsw.txt
index b3acebe08eb0..3264e1978d25 100644
--- a/Documentation/devicetree/bindings/net/cpsw.txt
+++ b/Documentation/devicetree/bindings/net/cpsw.txt
@@ -22,7 +22,8 @@ Required properties:
 - cpsw-phy-sel		: Specifies the phandle to the CPSW phy mode selection
			  device. See also cpsw-phy-sel.txt for its binding.
			  Note that in legacy cases cpsw-phy-sel may be
-			  a child device instead of a phandle.
+			  a child device instead of a phandle
+			  (DEPRECATED, use phys property instead).

 Optional properties:
 - ti,hwmods		: Must be "cpgmac0"
@@ -44,6 +45,7 @@ Optional properties:
 Slave Properties:
 Required properties:
 - phy-mode		: See ethernet.txt file in the same directory
+- phys			: phandle on phy-gmii-sel PHY (see phy/ti-phy-gmii-sel.txt)

 Optional properties:
 - dual_emac_res_vlan	: Specifies VID to be used to segregate the ports
@@ -85,12 +87,14 @@ Examples:
		phy-mode = "rgmii-txid";
		/* Filled in by U-Boot */
		mac-address = [ 00 00 00 00 00 00 ];
+		phys = <&phy_gmii_sel 1 0>;
	};
	cpsw_emac1: slave@1 {
		phy_id = <&davinci_mdio>, <1>;
		phy-mode = "rgmii-txid";
		/* Filled in by U-Boot */
		mac-address = [ 00 00 00 00 00 00 ];
+		phys = <&phy_gmii_sel 2 0>;
	};
 };
@@ -114,11 +118,13 @@ Examples:
		phy-mode = "rgmii-txid";
		/* Filled in by U-Boot */
		mac-address = [ 00 00 00 00 00 00 ];
+		phys = <&phy_gmii_sel 1 0>;
	};
	cpsw_emac1: slave@1 {
		phy_id = <&davinci_mdio>, <1>;
		phy-mode = "rgmii-txid";
		/* Filled in by U-Boot */
		mac-address = [ 00 00 00 00 00 00 ];
+		phys = <&phy_gmii_sel 2 0>;
	};
 };
diff --git a/Documentation/devicetree/bindings/net/dsa/ksz.txt b/Documentation/devicetree/bindings/net/dsa/ksz.txt
index ac145b885e95..0f407fb371ce 100644
--- a/Documentation/devicetree/bindings/net/dsa/ksz.txt
+++ b/Documentation/devicetree/bindings/net/dsa/ksz.txt
@@ -8,6 +8,10 @@ Required properties:
   - "microchip,ksz9477"
   - "microchip,ksz9897"

+Optional properties:
+
+- reset-gpios : Should be a gpio specifier for a reset line
+
 See Documentation/devicetree/bindings/net/dsa/dsa.txt for a list of additional
 required and optional properties.
diff --git a/Documentation/devicetree/bindings/net/icplus-ip101ag.txt b/Documentation/devicetree/bindings/net/icplus-ip101ag.txt
new file mode 100644
index 000000000000..a784592bbb15
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/icplus-ip101ag.txt
@@ -0,0 +1,19 @@
+IC Plus Corp. IP101A / IP101G Ethernet PHYs
+
+There are different models of the IP101G Ethernet PHY:
+- IP101GR (32-pin QFN package)
+- IP101G (die only, no package)
+- IP101GA (48-pin LQFP package)
+
+There are different models of the IP101A Ethernet PHY (which is the
+predecessor of the IP101G):
+- IP101A (48-pin LQFP package)
+- IP101AH (48-pin LQFP package)
+
+Optional properties for the IP101GR (32-pin QFN package):
+
+- icplus,select-rx-error:
+  pin 21 ("RXER/INTR_32") will output the receive error status.
+  Interrupts are not routed outside the PHY in this mode.
+- icplus,select-interrupt:
+  pin 21 ("RXER/INTR_32") will output the interrupt signal.
diff --git a/Documentation/devicetree/bindings/net/mediatek-dwmac.txt b/Documentation/devicetree/bindings/net/mediatek-dwmac.txt
new file mode 100644
index 000000000000..8a08621a5b54
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/mediatek-dwmac.txt
@@ -0,0 +1,78 @@
+MediaTek DWMAC glue layer controller
+
+This file documents platform glue layer for stmmac.
+Please see stmmac.txt for the other unchanged properties.
+
+The device node has following properties.
+
+Required properties:
+- compatible: Should be "mediatek,mt2712-gmac" for MT2712 SoC
+- reg: Address and length of the register set for the device
+- interrupts: Should contain the MAC interrupts
+- interrupt-names: Should contain a list of interrupt names corresponding to
+	the interrupts in the interrupts property, if available.
+	Should be "macirq" for the main MAC IRQ
+- clocks: Must contain a phandle for each entry in clock-names.
+- clock-names: The name of the clock listed in the clocks property. These are
+	"axi", "apb", "mac_main", "ptp_ref" for MT2712 SoC
+- mac-address: See ethernet.txt in the same directory
+- phy-mode: See ethernet.txt in the same directory
+- mediatek,pericfg: A phandle to the syscon node that control ethernet
+	interface and timing delay.
+
+Optional properties:
+- mediatek,tx-delay-ps: TX clock delay macro value. Default is 0.
+	It should be defined for RGMII/MII interface.
+- mediatek,rx-delay-ps: RX clock delay macro value. Default is 0.
+	It should be defined for RGMII/MII/RMII interface.
+Both delay properties need to be a multiple of 170 for the RGMII interface,
+or the value will be rounded down. Range 0~31*170.
+Both delay properties need to be a multiple of 550 for the MII/RMII interface,
+or the value will be rounded down. Range 0~31*550.
+
+- mediatek,rmii-rxc: boolean property, if present indicates that the RMII
+	reference clock, which is from external PHYs, is connected to RXC pin
+	on MT2712 SoC.
+	Otherwise, it is connected to the TXC pin.
+- mediatek,txc-inverse: boolean property, if present indicates that
+	1. tx clock will be inverted in MII/RGMII case,
+	2. tx clock inside MAC will be inverted relative to reference clock
+	   which is from external PHYs in RMII case, and this rarely happens.
+- mediatek,rxc-inverse: boolean property, if present indicates that
+	1. rx clock will be inverted in MII/RGMII case.
+	2. reference clock will be inverted when it arrives at the MAC in RMII case.
+- assigned-clocks: mac_main and ptp_ref clocks
+- assigned-clock-parents: parent clocks of the assigned clocks
+
+Example:
+	eth: ethernet@1101c000 {
+		compatible = "mediatek,mt2712-gmac";
+		reg = <0 0x1101c000 0 0x1300>;
+		interrupts = ;
+		interrupt-names = "macirq";
+		phy-mode = "rgmii";
+		mac-address = [00 55 7b b5 7d f7];
+		clock-names = "axi",
+			      "apb",
+			      "mac_main",
+			      "ptp_ref",
+			      "ptp_top";
+		clocks = <&pericfg CLK_PERI_GMAC>,
+			 <&pericfg CLK_PERI_GMAC_PCLK>,
+			 <&topckgen CLK_TOP_ETHER_125M_SEL>,
+			 <&topckgen CLK_TOP_ETHER_50M_SEL>;
+		assigned-clocks = <&topckgen CLK_TOP_ETHER_125M_SEL>,
+				  <&topckgen CLK_TOP_ETHER_50M_SEL>;
+		assigned-clock-parents = <&topckgen CLK_TOP_ETHERPLL_125M>,
+					 <&topckgen CLK_TOP_APLL1_D3>;
+		mediatek,pericfg = <&pericfg>;
+		mediatek,tx-delay-ps = <1530>;
+		mediatek,rx-delay-ps = <1530>;
+		mediatek,rmii-rxc;
+		mediatek,txc-inverse;
+		mediatek,rxc-inverse;
+		snps,txpbl = <32>;
+		snps,rxpbl = <32>;
+		snps,reset-gpio = <&pio 87 GPIO_ACTIVE_LOW>;
+		snps,reset-active-low;
+	};
diff --git a/Documentation/devicetree/bindings/net/renesas,ravb.txt b/Documentation/devicetree/bindings/net/renesas,ravb.txt
index 3530256a879c..7ad36213093e 100644
--- a/Documentation/devicetree/bindings/net/renesas,ravb.txt
+++ b/Documentation/devicetree/bindings/net/renesas,ravb.txt
@@ -18,6 +18,7 @@ Required properties:
	      R-Car Gen2 and RZ/G1 devices.

       - "renesas,etheravb-r8a774a1" for the R8A774A1 SoC.
+      - "renesas,etheravb-r8a774c0" for the R8A774C0 SoC.
       - "renesas,etheravb-r8a7795" for the R8A7795 SoC.
       - "renesas,etheravb-r8a7796" for the R8A7796 SoC.
       - "renesas,etheravb-r8a77965" for the R8A77965 SoC.
diff --git a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
index 2196d1ab3c8c..ae661e65354e 100644
--- a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
+++ b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
@@ -21,10 +21,22 @@ can be provided per device.
 SNOC based devices (i.e. wcn3990) uses compatible string "qcom,wcn3990-wifi".

-Optional properties:
 - reg: Address and length of the register set for the device.
 - reg-names: Must include the list of following reg names,
	       "membase"
+- interrupts: reference to the list of 17 interrupt numbers for "qcom,ipq4019-wifi"
+	      compatible target.
+	      reference to the list of 12 interrupt numbers for "qcom,wcn3990-wifi"
+	      compatible target.
+	      Must contain interrupt-names property per entry for
+	      "qcom,ath10k", "qcom,ipq4019-wifi" compatible targets.
+
+- interrupt-names: Must include the entries for MSI interrupt
+		   names ("msi0" to "msi15") and legacy interrupt
+		   name ("legacy") for "qcom,ath10k", "qcom,ipq4019-wifi"
+		   compatible targets.
+
+Optional properties:
 - resets: Must contain an entry for each entry in reset-names.
           See ../reset/reset.txt for details.
 - reset-names: Must include the list of following reset names,
@@ -37,12 +49,9 @@ Optional properties:
 - clocks: List of clock specifiers, must contain an entry for each required
           entry in clock-names.
 - clock-names: Should contain the clock names "wifi_wcss_cmd", "wifi_wcss_ref",
-               "wifi_wcss_rtc".
-- interrupts: List of interrupt lines. Must contain an entry
-              for each entry in the interrupt-names property.
-- interrupt-names: Must include the entries for MSI interrupt
-                   names ("msi0" to "msi15") and legacy interrupt
-                   name ("legacy"),
+               "wifi_wcss_rtc" for "qcom,ipq4019-wifi" compatible target and
+               "cxo_ref_clk_pin" for "qcom,wcn3990-wifi"
+               compatible target.
- qcom,msi_addr: MSI interrupt address. - qcom,msi_base: Base value to add before writing MSI data into MSI address register. @@ -55,14 +64,25 @@ Optional properties: - qcom,ath10k-pre-calibration-data : pre calibration data as an array, the length can vary between hw versions. - -supply: handle to the regulator device tree node - optional "supply-name" is "vdd-0.8-cx-mx". + optional "supply-name" are "vdd-0.8-cx-mx", + "vdd-1.8-xo", "vdd-1.3-rfa" and "vdd-3.3-ch0". - memory-region: Usage: optional Value type: Definition: reference to the reserved-memory for the msa region used by the wifi firmware running in Q6. +- iommus: + Usage: optional + Value type: + Definition: A list of phandle and IOMMU specifier pairs. +- ext-fem-name: + Usage: Optional + Value type: string + Definition: Name of external front end module used. Some valid FEM names + for example: "microsemi-lx5586", "sky85703-11" + and "sky85803" etc. -Example (to supply the calibration data alone): +Example (to supply PCI based wifi block details): In this example, the node is defined as child node of the PCI controller. @@ -74,10 +94,10 @@ pci { #address-cells = <3>; device_type = "pci"; - ath10k@0,0 { + wifi@0,0 { reg = <0 0 0 0 0>; - device_type = "pci"; qcom,ath10k-calibration-data = [ 01 02 03 ... ]; + ext-fem-name = "microsemi-lx5586"; }; }; }; @@ -138,21 +158,25 @@ wifi@18000000 { compatible = "qcom,wcn3990-wifi"; reg = <0x18800000 0x800000>; reg-names = "membase"; - clocks = <&clock_gcc clk_aggre2_noc_clk>; - clock-names = "smmu_aggre2_noc_clk" + clocks = <&clock_gcc clk_rf_clk2_pin>; + clock-names = "cxo_ref_clk_pin"; interrupts = - <0 130 0 /* CE0 */ >, - <0 131 0 /* CE1 */ >, - <0 132 0 /* CE2 */ >, - <0 133 0 /* CE3 */ >, - <0 134 0 /* CE4 */ >, - <0 135 0 /* CE5 */ >, - <0 136 0 /* CE6 */ >, - <0 137 0 /* CE7 */ >, - <0 138 0 /* CE8 */ >, - <0 139 0 /* CE9 */ >, - <0 140 0 /* CE10 */ >, - <0 141 0 /* CE11 */ >; + , + , + , + , + , + , + , + , + , + , + , + ; vdd-0.8-cx-mx-supply = <&pm8998_l5>; + vdd-1.8-xo-supply = <&vreg_l7a_1p8>; + vdd-1.3-rfa-supply = <&vreg_l17a_1p3>; + vdd-3.3-ch0-supply = <&vreg_l25a_3p3>; memory-region = <&wifi_msa_mem>; + iommus = <&apps_smmu 0x0040 0x1>; }; diff --git a/Documentation/devicetree/bindings/nvmem/amlogic-efuse.txt b/Documentation/devicetree/bindings/nvmem/amlogic-efuse.txt index e3298e18de26..2e0723ab3384 100644 --- a/Documentation/devicetree/bindings/nvmem/amlogic-efuse.txt +++ b/Documentation/devicetree/bindings/nvmem/amlogic-efuse.txt @@ -2,6 +2,8 @@ Required properties: - compatible: should be "amlogic,meson-gxbb-efuse" +- clocks: phandle to the efuse peripheral clock provided by the + clock controller. 
 = Data cells =
 Are child nodes of eFuse, bindings of which as described in
@@ -11,6 +13,7 @@ Example:

	efuse: efuse {
		compatible = "amlogic,meson-gxbb-efuse";
+		clocks = <&clkc CLKID_EFUSE>;
		#address-cells = <1>;
		#size-cells = <1>;
diff --git a/Documentation/devicetree/bindings/pci/host-generic-pci.txt b/Documentation/devicetree/bindings/pci/host-generic-pci.txt
index 3f1d3fca62bb..614b594f4e72 100644
--- a/Documentation/devicetree/bindings/pci/host-generic-pci.txt
+++ b/Documentation/devicetree/bindings/pci/host-generic-pci.txt
@@ -56,7 +56,7 @@ For CAM, this 24-bit offset is:
         cfg_offset(bus, device, function, register) =
                    bus << 16 | device << 11 | function << 8 | register

-Whilst ECAM extends this by 4 bits to accommodate 4k of function space:
+While ECAM extends this by 4 bits to accommodate 4k of function space:

         cfg_offset(bus, device, function, register) =
                    bus << 20 | device << 15 | function << 12 | register
diff --git a/Documentation/devicetree/bindings/perf/nds32v3-pmu.txt b/Documentation/devicetree/bindings/perf/nds32v3-pmu.txt
new file mode 100644
index 000000000000..1bd15785b4ae
--- /dev/null
+++ b/Documentation/devicetree/bindings/perf/nds32v3-pmu.txt
@@ -0,0 +1,17 @@
+* NDS32 Performance Monitor Units
+
+The NDS32 core has a PMU for counting CPU and cache events, such as cache
+misses. The NDS32 PMU representation in the device tree should be done as
+follows:
+
+Required properties:
+
+- compatible :
+	"andestech,nds32v3-pmu"
+
+- interrupts : The interrupt number for NDS32 PMU is 13.
+
+Example:
+pmu {
+	compatible = "andestech,nds32v3-pmu";
+	interrupts = <13>;
+};
diff --git a/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.txt b/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.txt
new file mode 100644
index 000000000000..a22e853d710c
--- /dev/null
+++ b/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.txt
@@ -0,0 +1,17 @@
+* Freescale i.MX8MQ USB3 PHY binding
+
+Required properties:
+- compatible:	Should be "fsl,imx8mq-usb-phy"
+- #phy-cells:	must be 0 (see phy-bindings.txt in this directory)
+- reg:		The base address and length of the registers
+- clocks:	phandles to the clocks for each clock listed in clock-names
+- clock-names:	must contain "phy"
+
+Example:
+	usb3_phy0: phy@381f0040 {
+		compatible = "fsl,imx8mq-usb-phy";
+		reg = <0x381f0040 0x40>;
+		clocks = <&clk IMX8MQ_CLK_USB1_PHY_ROOT>;
+		clock-names = "phy";
+		#phy-cells = <0>;
+	};
diff --git a/Documentation/devicetree/bindings/phy/phy-cadence-sierra.txt b/Documentation/devicetree/bindings/phy/phy-cadence-sierra.txt
new file mode 100644
index 000000000000..6e1b47bfce43
--- /dev/null
+++ b/Documentation/devicetree/bindings/phy/phy-cadence-sierra.txt
@@ -0,0 +1,67 @@
+Cadence Sierra PHY
+-----------------------
+
+Required properties:
+- compatible:	cdns,sierra-phy-t0
+- clocks:	Must contain an entry in clock-names.
+		See ../clocks/clock-bindings.txt for details.
+- clock-names:	Must be "phy_clk"
+- resets:	Must contain an entry for each in reset-names.
+		See ../reset/reset.txt for details.
+- reset-names:	Must include "sierra_reset" and "sierra_apb".
+		"sierra_reset" must control the reset line to the PHY.
+		"sierra_apb" must control the reset line to the APB PHY
+		interface.
+- reg:		register range for the PHY.
+- #address-cells:	Must be 1
+- #size-cells:	Must be 0
+
+Optional properties:
+- cdns,autoconf:	A boolean property whose presence indicates that the
+			PHY registers will be configured by hardware. If not
+			present, all sub-node optional properties must be
+			provided.
+
+Sub-nodes:
+  Each group of PHY lanes with a single master lane should be represented as
+  a sub-node. Note that the actual configuration of each lane is determined by
+  hardware strapping, and must match the configuration specified here.
+
+Sub-node required properties:
+- #phy-cells:	Generic PHY binding; must be 0.
+- reg:		The master lane number. This is the lowest numbered lane
+		in the lane group.
+- resets:	Must contain one entry which controls the reset line for the
+		master lane of the sub-node.
+		See ../reset/reset.txt for details.
+
+Sub-node optional properties:
+- cdns,num-lanes:	Number of lanes in this group. From 1 to 4. The
+			group is made up of consecutive lanes.
+- cdns,phy-type:	Can be PHY_TYPE_PCIE or PHY_TYPE_USB3, depending on
+			configuration of lanes.
+
+Example:
+	pcie_phy4: pcie-phy@fd240000 {
+		compatible = "cdns,sierra-phy-t0";
+		reg = <0x0 0xfd240000 0x0 0x40000>;
+		resets = <&phyrst 0>, <&phyrst 1>;
+		reset-names = "sierra_reset", "sierra_apb";
+		clocks = <&phyclock>;
+		clock-names = "phy_clk";
+		#address-cells = <1>;
+		#size-cells = <0>;
+		pcie0_phy0: pcie-phy@0 {
+			reg = <0>;
+			resets = <&phyrst 2>;
+			cdns,num-lanes = <2>;
+			#phy-cells = <0>;
+			cdns,phy-type = <PHY_TYPE_PCIE>;
+		};
+		pcie0_phy1: pcie-phy@2 {
+			reg = <2>;
+			resets = <&phyrst 4>;
+			cdns,num-lanes = <1>;
+			#phy-cells = <0>;
+			cdns,phy-type = <PHY_TYPE_PCIE>;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt b/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
index fbc198d5dd39..41a1074228ba 100644
--- a/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
+++ b/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
@@ -25,10 +25,6 @@ Required properties:
    - For all others:
      - The reg-names property shouldn't be defined.

- - #clock-cells: must be 1
-    - Phy pll outputs a bunch of clocks for Tx, Rx and Pipe
-      interface (for pipe based PHYs). These clock are then gate-controlled
-      by gcc.
 - #address-cells: must be 1
 - #size-cells: must be 1
 - ranges: must be present
@@ -82,27 +78,33 @@ Required nodes:
 - Each device node of QMP phy is required to have as many child nodes as
   the number of lanes the PHY has.

-Required properties for child node:
+Required properties for child nodes of PCIe PHYs (one child per lane):
  - reg: list of offset and length pairs of register sets for PHY blocks -
-        - index 0: tx
-        - index 1: rx
-        - index 2: pcs
-        - index 3: pcs_misc (optional)
+	tx, rx, pcs, and pcs_misc (optional).
+ - #phy-cells: must be 0

+Required properties for a single "lanes" child node of non-PCIe PHYs:
+ - reg: list of offset and length pairs of register sets for PHY blocks
+	For 1-lane devices:
+		tx, rx, pcs, and (optionally) pcs_misc
+	For 2-lane devices:
+		tx0, rx0, pcs, tx1, rx1, and (optionally) pcs_misc
 - #phy-cells: must be 0

-Required properties child node of pcie and usb3 qmp phys:
+Required properties for child node of PCIe and USB3 qmp phys:
  - clocks: a list of phandles and clock-specifier pairs,
	   one for each entry in clock-names.
  - clock-names: Must contain following:
		 "pipe" for pipe clock specific to each lane.
  - clock-output-names: Name of the PHY clock that will be the parent for
			the above pipe clock.
	For "qcom,ipq8074-qmp-pcie-phy":
		- "pcie20_phy0_pipe_clk" Pipe Clock parent
			(or) "pcie20_phy1_pipe_clk"
+ - #clock-cells: must be 0
+    - Phy pll outputs pipe clocks for pipe based PHYs. These clocks are then
+      gate-controlled by the gcc.
Required properties for child node of PHYs with lane reset, AKA: "qcom,msm8996-qmp-pcie-phy" @@ -115,7 +117,6 @@ Example: phy@34000 { compatible = "qcom,msm8996-qmp-pcie-phy"; reg = <0x34000 0x488>; - #clock-cells = <1>; #address-cells = <1>; #size-cells = <1>; ranges; @@ -137,6 +138,7 @@ Example: reg = <0x35000 0x130>, <0x35200 0x200>, <0x35400 0x1dc>; + #clock-cells = <0>; #phy-cells = <0>; clocks = <&gcc GCC_PCIE_0_PIPE_CLK>; @@ -150,3 +152,54 @@ Example: ... ... }; + + phy@88eb000 { + compatible = "qcom,sdm845-qmp-usb3-uni-phy"; + reg = <0x88eb000 0x18c>; + #address-cells = <1>; + #size-cells = <1>; + ranges; + + clocks = <&gcc GCC_USB3_SEC_PHY_AUX_CLK>, + <&gcc GCC_USB_PHY_CFG_AHB2PHY_CLK>, + <&gcc GCC_USB3_SEC_CLKREF_CLK>, + <&gcc GCC_USB3_SEC_PHY_COM_AUX_CLK>; + clock-names = "aux", "cfg_ahb", "ref", "com_aux"; + + resets = <&gcc GCC_USB3PHY_PHY_SEC_BCR>, + <&gcc GCC_USB3_PHY_SEC_BCR>; + reset-names = "phy", "common"; + + lane@88eb200 { + reg = <0x88eb200 0x128>, + <0x88eb400 0x1fc>, + <0x88eb800 0x218>, + <0x88eb600 0x70>; + #clock-cells = <0>; + #phy-cells = <0>; + clocks = <&gcc GCC_USB3_SEC_PHY_PIPE_CLK>; + clock-names = "pipe0"; + clock-output-names = "usb3_uni_phy_pipe_clk_src"; + }; + }; + + phy@1d87000 { + compatible = "qcom,sdm845-qmp-ufs-phy"; + reg = <0x1d87000 0x18c>; + #address-cells = <1>; + #size-cells = <1>; + ranges; + clock-names = "ref", + "ref_aux"; + clocks = <&gcc GCC_UFS_MEM_CLKREF_CLK>, + <&gcc GCC_UFS_PHY_PHY_AUX_CLK>; + + lanes@1d87400 { + reg = <0x1d87400 0x108>, + <0x1d87600 0x1e0>, + <0x1d87c00 0x1dc>, + <0x1d87800 0x108>, + <0x1d87a00 0x1e0>; + #phy-cells = <0>; + }; + }; diff --git a/Documentation/devicetree/bindings/phy/sun4i-usb-phy.txt b/Documentation/devicetree/bindings/phy/sun4i-usb-phy.txt index 07ca4ec4a745..f2e120af17f0 100644 --- a/Documentation/devicetree/bindings/phy/sun4i-usb-phy.txt +++ b/Documentation/devicetree/bindings/phy/sun4i-usb-phy.txt @@ -14,13 +14,14 @@ Required properties: * allwinner,sun8i-r40-usb-phy * allwinner,sun8i-v3s-usb-phy * allwinner,sun50i-a64-usb-phy + * allwinner,sun50i-h6-usb-phy - reg : a list of offset + length pairs - reg-names : * "phy_ctrl" - * "pmu0" for H3, V3s and A64 + * "pmu0" for H3, V3s, A64 or H6 * "pmu1" * "pmu2" for sun4i, sun6i, sun7i, sun8i-a83t or sun8i-h3 - * "pmu3" for sun8i-h3 + * "pmu3" for sun8i-h3 or sun50i-h6 - #phy-cells : from the generic phy bindings, must be 1 - clocks : phandle + clock specifier for the phy clocks - clock-names : @@ -29,12 +30,13 @@ Required properties: * "usb0_phy", "usb1_phy" for sun8i * "usb0_phy", "usb1_phy", "usb2_phy" and "usb2_hsic_12M" for sun8i-a83t * "usb0_phy", "usb1_phy", "usb2_phy" and "usb3_phy" for sun8i-h3 + * "usb0_phy" and "usb3_phy" for sun50i-h6 - resets : a list of phandle + reset specifier pairs - reset-names : * "usb0_reset" * "usb1_reset" * "usb2_reset" for sun4i, sun6i, sun7i, sun8i-a83t or sun8i-h3 - * "usb3_reset" for sun8i-h3 + * "usb3_reset" for sun8i-h3 and sun50i-h6 Optional properties: - usb0_id_det-gpios : gpio phandle for reading the otg id pin value diff --git a/Documentation/devicetree/bindings/phy/ti-phy-gmii-sel.txt b/Documentation/devicetree/bindings/phy/ti-phy-gmii-sel.txt new file mode 100644 index 000000000000..50ce9ae0f7a5 --- /dev/null +++ b/Documentation/devicetree/bindings/phy/ti-phy-gmii-sel.txt @@ -0,0 +1,68 @@ +CPSW Port's Interface Mode Selection PHY Tree Bindings +----------------------------------------------- + +TI am335x/am437x/dra7(am5)/dm814x CPSW3G Ethernet Subsystem supports +two 10/100/1000 Ethernet ports with 
+selectable G/MII, RMII, and RGMII interfaces.
+The interface mode is selected by configuring the MII mode selection register(s)
+(GMII_SEL) in the System Control Module (SCM). GMII_SEL register(s) and
+bit field placement in the SCM differ between SoCs, while the fields' meaning
+is the same.
+--------------+
+ +-------------------------------+ |SCM |
+ | CPSW | | +---------+ |
+ | +--------------------------------+gmii_sel | |
+ | | | | +---------+ |
+ | +----v---+ +--------+ | +--------------+
+ | |Port 1..<--+-->GMII/MII<------->
+ | | | | | | |
+ | +--------+ | +--------+ |
+ | | |
+ | | +--------+ |
+ | | | RMII <------->
+ | +--> | |
+ | | +--------+ |
+ | | |
+ | | +--------+ |
+ | | | RGMII <------->
+ | +--> | |
+ | +--------+ |
+ +-------------------------------+
+
+CPSW Port's Interface Mode Selection PHY describes the MII interface mode between
+the CPSW Port and the Ethernet PHY, which depends on the Ethernet PHY and board
+configuration.
+
+CPSW Port's Interface Mode Selection PHY device should be defined as a child
+device of the SCM node (scm_conf) and can be attached to each CPSW port node
+using standard PHY bindings (See phy/phy-bindings.txt).
+
+Required properties:
+- compatible	: Should be "ti,am3352-phy-gmii-sel" for am335x platform
+		  "ti,dra7xx-phy-gmii-sel" for dra7xx/am57xx platform
+		  "ti,am43xx-phy-gmii-sel" for am43xx platform
+		  "ti,dm814-phy-gmii-sel" for dm814x platform
+- reg		: Address and length of the register set for the device
+- #phy-cells	: must be 2.
+		  cell 1 - CPSW port number (starting from 1)
+		  cell 2 - RMII refclk mode
+
+Examples:
+	phy_gmii_sel: phy-gmii-sel {
+		compatible = "ti,am3352-phy-gmii-sel";
+		reg = <0x650 0x4>;
+		#phy-cells = <2>;
+	};
+
+	mac: ethernet@4a100000 {
+		compatible = "ti,am335x-cpsw","ti,cpsw";
+		...
+
+		cpsw_emac0: slave@4a100200 {
+			...
+			phys = <&phy_gmii_sel 1 1>;
+		};
+
+		cpsw_emac1: slave@4a100300 {
+			...
+			phys = <&phy_gmii_sel 2 1>;
+		};
+	};
diff --git a/Documentation/devicetree/bindings/power/reset/gpio-poweroff.txt b/Documentation/devicetree/bindings/power/reset/gpio-poweroff.txt
index 6d8980c18c34..3e56c1b34a4c 100644
--- a/Documentation/devicetree/bindings/power/reset/gpio-poweroff.txt
+++ b/Documentation/devicetree/bindings/power/reset/gpio-poweroff.txt
@@ -27,6 +27,8 @@ Optional properties:
   it to an output when the power-off handler is called. If this optional
   property is not specified, the GPIO is initialized as an output in its
   inactive state.
+- active-delay-ms: Delay (default 100) to wait after driving gpio active
+- inactive-delay-ms: Delay (default 100) to wait after driving gpio inactive
 - timeout-ms: Time to wait before asserting a WARN_ON(1). If nothing is
   specified, 3000 ms is used.
diff --git a/Documentation/devicetree/bindings/power/supply/axp20x_ac_power.txt b/Documentation/devicetree/bindings/power/supply/axp20x_ac_power.txt
index 826e8a879121..7a1fb532abe5 100644
--- a/Documentation/devicetree/bindings/power/supply/axp20x_ac_power.txt
+++ b/Documentation/devicetree/bindings/power/supply/axp20x_ac_power.txt
@@ -4,6 +4,7 @@ Required Properties:
 - compatible: One of:
			"x-powers,axp202-ac-power-supply"
			"x-powers,axp221-ac-power-supply"
+			"x-powers,axp813-ac-power-supply"

 This node is a subnode of the axp20x PMIC.

@@ -13,6 +14,8 @@ reading ADC channels from the AXP20X ADC.

 The AXP22X is only able to tell if an AC power supply is present and
 usable.
+AXP813/AXP803 are able to limit current and supply voltage.
+
 Example:

 &axp209 {
diff --git a/Documentation/devicetree/bindings/power/supply/battery.txt b/Documentation/devicetree/bindings/power/supply/battery.txt
index f4d3b4a10b43..89871ab8c704 100644
--- a/Documentation/devicetree/bindings/power/supply/battery.txt
+++ b/Documentation/devicetree/bindings/power/supply/battery.txt
@@ -22,6 +22,18 @@ Optional Properties:
  - charge-term-current-microamp: current for charge termination phase
  - constant-charge-current-max-microamp: maximum constant input current
  - constant-charge-voltage-max-microvolt: maximum constant input voltage
+ - factory-internal-resistance-micro-ohms: battery factory internal resistance
+ - ocv-capacity-table-0: An array providing the open circuit voltage (OCV)
+   of the battery and corresponding battery capacity percent, which is used
+   to look up battery capacity according to current OCV value. The open
+   circuit voltage unit is microvolt.
+ - ocv-capacity-table-1: Same as ocv-capacity-table-0
+ ......
+ - ocv-capacity-table-n: Same as ocv-capacity-table-0
+ - ocv-capacity-celsius: An array containing the temperature in degree Celsius,
+   one for each of the battery capacity lookup tables. The first temperature value
+   specifies the OCV table 0, and the second temperature value specifies the
+   OCV table 1, and so on.

 Battery properties are named, where possible, for the corresponding
 elements in enum power_supply_property, defined in
@@ -42,6 +54,11 @@ Example:
	charge-term-current-microamp = <128000>;
	constant-charge-current-max-microamp = <900000>;
	constant-charge-voltage-max-microvolt = <4200000>;
+	factory-internal-resistance-micro-ohms = <250000>;
+	ocv-capacity-celsius = <(-10) 0 10>;
+	ocv-capacity-table-0 = <4185000 100>, <4113000 95>, <4066000 90>, ...;
+	ocv-capacity-table-1 = <4200000 100>, <4185000 95>, <4113000 90>, ...;
+	ocv-capacity-table-2 = <4250000 100>, <4200000 95>, <4185000 90>, ...;
 };

 charger: charger@11 {
diff --git a/Documentation/devicetree/bindings/power/supply/bq24190.txt b/Documentation/devicetree/bindings/power/supply/bq24190.txt
index 9e517d307070..ffe2be408bb6 100644
--- a/Documentation/devicetree/bindings/power/supply/bq24190.txt
+++ b/Documentation/devicetree/bindings/power/supply/bq24190.txt
@@ -3,7 +3,9 @@ TI BQ24190 Li-Ion Battery Charger
 Required properties:
 - compatible: contains one of the following:
     * "ti,bq24190"
+    * "ti,bq24192"
     * "ti,bq24192i"
+    * "ti,bq24196"
 - reg: integer, I2C address of the charger.
 - interrupts[-extended]: configuration for charger INT pin.

@@ -19,6 +21,12 @@ Optional properties:
 - ti,system-minimum-microvolt: when power is connected and the battery is below
   minimum system voltage, the system will be regulated above this setting.

+Child nodes:
+- usb-otg-vbus:
+  Usage: optional
+  Description: Regulator that is used to control the VBUS voltage direction for
+  either USB host mode or for charging on the OTG port.
+
 Notes:
 - Some circuit boards wire the chip's "OTG" pin high (enabling 500mA default
   charge current on USB SDP ports, among other features).
To simulate this on @@ -39,6 +47,8 @@ Example: interrupts-extended = <&gpiochip 10 IRQ_TYPE_EDGE_FALLING>; monitored-battery = <&bat>; ti,system-minimum-microvolt = <3200000>; + + usb_otg_vbus: usb-otg-vbus { }; }; &twl_gpio { diff --git a/Documentation/devicetree/bindings/power/supply/sc27xx-fg.txt b/Documentation/devicetree/bindings/power/supply/sc27xx-fg.txt new file mode 100644 index 000000000000..fc35ac577401 --- /dev/null +++ b/Documentation/devicetree/bindings/power/supply/sc27xx-fg.txt @@ -0,0 +1,56 @@ +Spreadtrum SC27XX PMICs Fuel Gauge Unit Power Supply Bindings + +Required properties: +- compatible: Should be one of the following: + "sprd,sc2720-fgu", + "sprd,sc2721-fgu", + "sprd,sc2723-fgu", + "sprd,sc2730-fgu", + "sprd,sc2731-fgu". +- reg: The address offset of fuel gauge unit. +- battery-detect-gpios: GPIO for battery detection. +- io-channels: Specify the IIO ADC channel to get temperature. +- io-channel-names: Should be "bat-temp". +- nvmem-cells: A phandle to the calibration cells provided by eFuse device. +- nvmem-cell-names: Should be "fgu_calib". +- monitored-battery: Phandle of battery characteristics devicetree node. + See Documentation/devicetree/bindings/power/supply/battery.txt + +Example: + + bat: battery { + compatible = "simple-battery"; + charge-full-design-microamp-hours = <1900000>; + constant-charge-voltage-max-microvolt = <4350000>; + ocv-capacity-celsius = <20>; + ocv-capacity-table-0 = <4185000 100>, <4113000 95>, <4066000 90>, + <4022000 85>, <3983000 80>, <3949000 75>, + <3917000 70>, <3889000 65>, <3864000 60>, + <3835000 55>, <3805000 50>, <3787000 45>, + <3777000 40>, <3773000 35>, <3770000 30>, + <3765000 25>, <3752000 20>, <3724000 15>, + <3680000 10>, <3605000 5>, <3400000 0>; + ...... + }; + + sc2731_pmic: pmic@0 { + compatible = "sprd,sc2731"; + reg = <0>; + spi-max-frequency = <26000000>; + interrupts = ; + interrupt-controller; + #interrupt-cells = <2>; + #address-cells = <1>; + #size-cells = <0>; + + fgu@a00 { + compatible = "sprd,sc2731-fgu"; + reg = <0xa00>; + battery-detect-gpios = <&pmic_eic 9 GPIO_ACTIVE_HIGH>; + io-channels = <&pmic_adc 5>; + io-channel-names = "bat-temp"; + nvmem-cells = <&fgu_calib>; + nvmem-cell-names = "fgu_calib"; + monitored-battery = <&bat>; + }; + }; diff --git a/Documentation/devicetree/bindings/reserved-memory/xen,shared-memory.txt b/Documentation/devicetree/bindings/reserved-memory/xen,shared-memory.txt new file mode 100644 index 000000000000..d483a2103d70 --- /dev/null +++ b/Documentation/devicetree/bindings/reserved-memory/xen,shared-memory.txt @@ -0,0 +1,24 @@ +* Xen hypervisor reserved-memory binding + +Expose one or more memory regions as reserved-memory to the guest +virtual machine. Typically, a region is configured at VM creation time +to be a shared memory area across multiple virtual machines for +communication among them. + +For each of these pre-shared memory regions, a range is exposed under +the /reserved-memory node as a child node. Each range sub-node is named +xen-shmem@
<address> and has the following properties:
+
+- compatible:
+	compatible = "xen,shared-memory-v1"
+
+- reg:
+	the base guest physical address and size of the shared memory region
+
+- xen,offset: (borrower VMs only)
+	64-bit integer offset within the owner virtual machine's shared
+	memory region used for the mapping in the borrower VM.
+
+- xen,id:
+	a string that identifies the shared memory region as specified in
+	the VM config file
diff --git a/Documentation/devicetree/bindings/rng/mtk-rng.txt b/Documentation/devicetree/bindings/rng/mtk-rng.txt
index 366b99bff8cd..2bc89f133701 100644
--- a/Documentation/devicetree/bindings/rng/mtk-rng.txt
+++ b/Documentation/devicetree/bindings/rng/mtk-rng.txt
@@ -1,9 +1,10 @@
Device-Tree bindings for Mediatek random number generator
-found in Mediatek SoC family
+found in MediaTek SoC family
Required properties:
- compatible : Should be
 "mediatek,mt7622-rng", "mediatek,mt7623-rng" : for MT7622
+ "mediatek,mt7629-rng", "mediatek,mt7623-rng" : for MT7629
 "mediatek,mt7623-rng" : for MT7623
- clocks : list of clock specifiers, corresponding to entries in clock-names property;
diff --git a/Documentation/devicetree/bindings/rtc/rtc.txt b/Documentation/devicetree/bindings/rtc/rtc.txt
new file mode 100644
index 000000000000..7c8da6926095
--- /dev/null
+++ b/Documentation/devicetree/bindings/rtc/rtc.txt
@@ -0,0 +1,64 @@
+Generic device tree bindings for Real Time Clock devices
+========================================================
+
+This document describes generic bindings which can be used to describe Real Time
+Clock devices in a device tree.
+
+Required properties
+-------------------
+
+- compatible : name of RTC device following generic names recommended practice.
+
+For other required properties e.g. to describe register sets,
+clocks, etc. check the binding documentation of the specific driver.
+
+Optional properties
+-------------------
+
+- start-year : if provided, the default hardware range supported by the RTC is
+ shifted so the first usable year is the specified one.
+
+The following properties may not be supported by all drivers. However, if a
+driver wants to support one of the below features, it should adapt the bindings
+below.
+- trickle-resistor-ohms : Selected resistor for the trickle charger. Should be
+ given if the trickle charger should be enabled.
+- trickle-diode-disable : Do not use the internal trickle charger diode. Should
+ be given if the internal trickle charger diode should be disabled.
+- wakeup-source : Enables wake up of the host system on alarm.
+
+Trivial RTCs
+------------
+
+This is a list of trivial RTC devices that have simple device tree
+bindings, consisting only of a compatible field, an address and
+possibly an interrupt line.
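+
+For illustration only, a node for one of these trivial RTCs on an I2C bus
+might look like the following sketch (the unit address and the interrupt
+specifier are hypothetical and board specific; the compatible value is
+taken from the list below):
+
+	rtc@68 {
+		compatible = "nxp,pcf8563";
+		reg = <0x68>;
+		interrupts = <17 IRQ_TYPE_LEVEL_LOW>;
+	};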
+ + +Compatible Vendor / Chip +========== ============= +abracon,abb5zes3 AB-RTCMC-32.768kHz-B5ZE-S3: Real Time Clock/Calendar Module with I2C Interface +dallas,ds1374 I2C, 32-Bit Binary Counter Watchdog RTC with Trickle Charger and Reset Input/Output +dallas,ds1672 Dallas DS1672 Real-time Clock +dallas,ds3232 Extremely Accurate I²C RTC with Integrated Crystal and SRAM +epson,rx8010 I2C-BUS INTERFACE REAL TIME CLOCK MODULE +epson,rx8581 I2C-BUS INTERFACE REAL TIME CLOCK MODULE +emmicro,em3027 EM Microelectronic EM3027 Real-time Clock +isil,isl1208 Intersil ISL1208 Low Power RTC with Battery Backed SRAM +isil,isl1218 Intersil ISL1218 Low Power RTC with Battery Backed SRAM +isil,isl12022 Intersil ISL12022 Real-time Clock +microcrystal,rv3029 Real Time Clock Module with I2C-Bus +nxp,pcf2127 Real-time clock +nxp,pcf2129 Real-time clock +nxp,pcf8523 Real-time Clock +nxp,pcf8563 Real-time clock/calendar +nxp,pcf85063 Tiny Real-Time Clock +pericom,pt7c4338 Real-time Clock Module +ricoh,r2025sd I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC +ricoh,r2221tl I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC +ricoh,rs5c372a I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC +ricoh,rs5c372b I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC +ricoh,rv5c386 I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC +ricoh,rv5c387a I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC +sii,s35390a 2-wire CMOS real-time clock diff --git a/Documentation/devicetree/bindings/serial/8250.txt b/Documentation/devicetree/bindings/serial/8250.txt index aeb6db4e35c3..da50321da34d 100644 --- a/Documentation/devicetree/bindings/serial/8250.txt +++ b/Documentation/devicetree/bindings/serial/8250.txt @@ -51,6 +51,7 @@ Optional properties: - tx-threshold: Specify the TX FIFO low water indication for parts with programmable TX FIFO thresholds. - resets : phandle + reset specifier pairs +- overrun-throttle-ms : how long to pause uart rx when input overrun is encountered. Note: * fsl,ns16550: diff --git a/Documentation/devicetree/bindings/serial/fsl-lpuart.txt b/Documentation/devicetree/bindings/serial/fsl-lpuart.txt index 6bd3f2e93d61..21483ba820bc 100644 --- a/Documentation/devicetree/bindings/serial/fsl-lpuart.txt +++ b/Documentation/devicetree/bindings/serial/fsl-lpuart.txt @@ -8,6 +8,8 @@ Required properties: on LS1021A SoC with 32-bit big-endian register organization - "fsl,imx7ulp-lpuart" for lpuart compatible with the one integrated on i.MX7ULP SoC with 32-bit little-endian register organization + - "fsl,imx8qxp-lpuart" for lpuart compatible with the one integrated + on i.MX8QXP SoC with 32-bit little-endian register organization - reg : Address and length of the register set for the device - interrupts : Should contain uart interrupt - clocks : phandle + clock specifier pairs, one for each entry in clock-names diff --git a/Documentation/devicetree/bindings/serial/lantiq_asc.txt b/Documentation/devicetree/bindings/serial/lantiq_asc.txt index 3acbd309ab9d..40e81a5818f6 100644 --- a/Documentation/devicetree/bindings/serial/lantiq_asc.txt +++ b/Documentation/devicetree/bindings/serial/lantiq_asc.txt @@ -6,8 +6,23 @@ Required properties: - interrupts: the 3 (tx rx err) interrupt numbers. The interrupt specifier depends on the interrupt-parent interrupt controller. 
+Optional properties: +- clocks: Should contain frequency clock and gate clock +- clock-names: Should be "freq" and "asc" + Example: +asc0: serial@16600000 { + compatible = "lantiq,asc"; + reg = <0x16600000 0x100000>; + interrupt-parent = <&gic>; + interrupts = , + , + ; + clocks = <&cgu CLK_SSX4>, <&cgu GCLK_UART>; + clock-names = "freq", "asc"; +}; + asc1: serial@e100c00 { compatible = "lantiq,asc"; reg = <0xE100C00 0x400>; diff --git a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt index e52e16c6bc57..20232ad05d89 100644 --- a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt +++ b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt @@ -24,6 +24,10 @@ Required properties: - "renesas,hscif-r8a7745" for R8A7745 (RZ/G1E) HSCIF compatible UART. - "renesas,scif-r8a77470" for R8A77470 (RZ/G1C) SCIF compatible UART. - "renesas,hscif-r8a77470" for R8A77470 (RZ/G1C) HSCIF compatible UART. + - "renesas,scif-r8a774a1" for R8A774A1 (RZ/G2M) SCIF compatible UART. + - "renesas,hscif-r8a774a1" for R8A774A1 (RZ/G2M) HSCIF compatible UART. + - "renesas,scif-r8a774c0" for R8A774C0 (RZ/G2E) SCIF compatible UART. + - "renesas,hscif-r8a774c0" for R8A774C0 (RZ/G2E) HSCIF compatible UART. - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART. - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART. - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART. @@ -61,13 +65,13 @@ Required properties: - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART. - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART. - "renesas,rcar-gen1-scif" for R-Car Gen1 SCIF compatible UART, - - "renesas,rcar-gen2-scif" for R-Car Gen2 SCIF compatible UART, - - "renesas,rcar-gen3-scif" for R-Car Gen3 SCIF compatible UART, - - "renesas,rcar-gen2-scifa" for R-Car Gen2 SCIFA compatible UART, - - "renesas,rcar-gen2-scifb" for R-Car Gen2 SCIFB compatible UART, + - "renesas,rcar-gen2-scif" for R-Car Gen2 and RZ/G1 SCIF compatible UART, + - "renesas,rcar-gen3-scif" for R-Car Gen3 and RZ/G2 SCIF compatible UART, + - "renesas,rcar-gen2-scifa" for R-Car Gen2 and RZ/G1 SCIFA compatible UART, + - "renesas,rcar-gen2-scifb" for R-Car Gen2 and RZ/G1 SCIFB compatible UART, - "renesas,rcar-gen1-hscif" for R-Car Gen1 HSCIF compatible UART, - - "renesas,rcar-gen2-hscif" for R-Car Gen2 HSCIF compatible UART, - - "renesas,rcar-gen3-hscif" for R-Car Gen3 HSCIF compatible UART, + - "renesas,rcar-gen2-hscif" for R-Car Gen2 and RZ/G1 HSCIF compatible UART, + - "renesas,rcar-gen3-hscif" for R-Car Gen3 and RZ/G2 HSCIF compatible UART, - "renesas,scif" for generic SCIF compatible UART. - "renesas,scifa" for generic SCIFA compatible UART. - "renesas,scifb" for generic SCIFB compatible UART. diff --git a/Documentation/devicetree/bindings/serial/rs485.txt b/Documentation/devicetree/bindings/serial/rs485.txt index b7c29f74ebb2..b92592dff6dd 100644 --- a/Documentation/devicetree/bindings/serial/rs485.txt +++ b/Documentation/devicetree/bindings/serial/rs485.txt @@ -16,7 +16,7 @@ Optional properties: - linux,rs485-enabled-at-boot-time: empty property telling to enable the rs485 feature at boot time. It can be disabled later with proper ioctl. - rs485-rx-during-tx: empty property that enables the receiving of data even - whilst sending data. + while sending data. 
RS485 example for Atmel USART: usart0: serial@fff8c000 { diff --git a/Documentation/devicetree/bindings/timer/arm,arch_timer.txt b/Documentation/devicetree/bindings/timer/arm,arch_timer.txt deleted file mode 100644 index 68301b77e854..000000000000 --- a/Documentation/devicetree/bindings/timer/arm,arch_timer.txt +++ /dev/null @@ -1,112 +0,0 @@ -* ARM architected timer - -ARM cores may have a per-core architected timer, which provides per-cpu timers, -or a memory mapped architected timer, which provides up to 8 frames with a -physical and optional virtual timer per frame. - -The per-core architected timer is attached to a GIC to deliver its -per-processor interrupts via PPIs. The memory mapped timer is attached to a GIC -to deliver its interrupts via SPIs. - -** CP15 Timer node properties: - -- compatible : Should at least contain one of - "arm,armv7-timer" - "arm,armv8-timer" - -- interrupts : Interrupt list for secure, non-secure, virtual and - hypervisor timers, in that order. - -- clock-frequency : The frequency of the main counter, in Hz. Should be present - only where necessary to work around broken firmware which does not configure - CNTFRQ on all CPUs to a uniform correct value. Use of this property is - strongly discouraged; fix your firmware unless absolutely impossible. - -- always-on : a boolean property. If present, the timer is powered through an - always-on power domain, therefore it never loses context. - -- fsl,erratum-a008585 : A boolean property. Indicates the presence of - QorIQ erratum A-008585, which says that reading the counter is - unreliable unless the same value is returned by back-to-back reads. - This also affects writes to the tval register, due to the implicit - counter read. - -- hisilicon,erratum-161010101 : A boolean property. Indicates the - presence of Hisilicon erratum 161010101, which says that reading the - counters is unreliable in some cases, and reads may return a value 32 - beyond the correct value. This also affects writes to the tval - registers, due to the implicit counter read. - -** Optional properties: - -- arm,cpu-registers-not-fw-configured : Firmware does not initialize - any of the generic timer CPU registers, which contain their - architecturally-defined reset values. Only supported for 32-bit - systems which follow the ARMv7 architected reset values. - -- arm,no-tick-in-suspend : The main counter does not tick when the system is in - low-power system suspend on some SoCs. This behavior does not match the - Architecture Reference Manual's specification that the system counter "must - be implemented in an always-on power domain." - - -Example: - - timer { - compatible = "arm,cortex-a15-timer", - "arm,armv7-timer"; - interrupts = <1 13 0xf08>, - <1 14 0xf08>, - <1 11 0xf08>, - <1 10 0xf08>; - clock-frequency = <100000000>; - }; - -** Memory mapped timer node properties: - -- compatible : Should at least contain "arm,armv7-timer-mem". - -- clock-frequency : The frequency of the main counter, in Hz. Should be present - only when firmware has not configured the MMIO CNTFRQ registers. - -- reg : The control frame base address. - -Note that #address-cells, #size-cells, and ranges shall be present to ensure -the CPU can address a frame's registers. - -A timer node has up to 8 frame sub-nodes, each with the following properties: - -- frame-number: 0 to 7. - -- interrupts : Interrupt list for physical and virtual timers in that order. - The virtual timer interrupt is optional. - -- reg : The first and second view base addresses in that order. 
The second view - base address is optional. - -- status : "disabled" indicates the frame is not available for use. Optional. - -Example: - - timer@f0000000 { - compatible = "arm,armv7-timer-mem"; - #address-cells = <1>; - #size-cells = <1>; - ranges; - reg = <0xf0000000 0x1000>; - clock-frequency = <50000000>; - - frame@f0001000 { - frame-number = <0> - interrupts = <0 13 0x8>, - <0 14 0x8>; - reg = <0xf0001000 0x1000>, - <0xf0002000 0x1000>; - }; - - frame@f0003000 { - frame-number = <1> - interrupts = <0 15 0x8>; - reg = <0xf0003000 0x1000>; - }; - }; diff --git a/Documentation/devicetree/bindings/timer/arm,arch_timer.yaml b/Documentation/devicetree/bindings/timer/arm,arch_timer.yaml new file mode 100644 index 000000000000..6deead07728e --- /dev/null +++ b/Documentation/devicetree/bindings/timer/arm,arch_timer.yaml @@ -0,0 +1,103 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/timer/arm,arch_timer.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: ARM architected timer + +maintainers: + - Marc Zyngier + - Mark Rutland +description: |+ + ARM cores may have a per-core architected timer, which provides per-cpu timers, + or a memory mapped architected timer, which provides up to 8 frames with a + physical and optional virtual timer per frame. + + The per-core architected timer is attached to a GIC to deliver its + per-processor interrupts via PPIs. The memory mapped timer is attached to a GIC + to deliver its interrupts via SPIs. + +properties: + compatible: + oneOf: + - items: + - enum: + - arm,cortex-a15-timer + - enum: + - arm,armv7-timer + - items: + - enum: + - arm,armv7-timer + - items: + - enum: + - arm,armv8-timer + + interrupts: + items: + - description: secure timer irq + - description: non-secure timer irq + - description: virtual timer irq + - description: hypervisor timer irq + + clock-frequency: + description: The frequency of the main counter, in Hz. Should be present + only where necessary to work around broken firmware which does not configure + CNTFRQ on all CPUs to a uniform correct value. Use of this property is + strongly discouraged; fix your firmware unless absolutely impossible. + + always-on: + type: boolean + description: If present, the timer is powered through an always-on power + domain, therefore it never loses context. + + fsl,erratum-a008585: + type: boolean + description: Indicates the presence of QorIQ erratum A-008585, which says + that reading the counter is unreliable unless the same value is returned + by back-to-back reads. This also affects writes to the tval register, due + to the implicit counter read. + + hisilicon,erratum-161010101: + type: boolean + description: Indicates the presence of Hisilicon erratum 161010101, which + says that reading the counters is unreliable in some cases, and reads may + return a value 32 beyond the correct value. This also affects writes to + the tval registers, due to the implicit counter read. + + arm,cpu-registers-not-fw-configured: + type: boolean + description: Firmware does not initialize any of the generic timer CPU + registers, which contain their architecturally-defined reset values. Only + supported for 32-bit systems which follow the ARMv7 architected reset + values. + + arm,no-tick-in-suspend: + type: boolean + description: The main counter does not tick when the system is in + low-power system suspend on some SoCs. 
This behavior does not match the + Architecture Reference Manual's specification that the system counter "must + be implemented in an always-on power domain." + +required: + - compatible + +oneOf: + - required: + - interrupts + - required: + - interrupts-extended + +examples: + - | + timer { + compatible = "arm,cortex-a15-timer", + "arm,armv7-timer"; + interrupts = <1 13 0xf08>, + <1 14 0xf08>, + <1 11 0xf08>, + <1 10 0xf08>; + clock-frequency = <100000000>; + }; + +... diff --git a/Documentation/devicetree/bindings/timer/arm,arch_timer_mmio.yaml b/Documentation/devicetree/bindings/timer/arm,arch_timer_mmio.yaml new file mode 100644 index 000000000000..c4ab59550fc2 --- /dev/null +++ b/Documentation/devicetree/bindings/timer/arm,arch_timer_mmio.yaml @@ -0,0 +1,120 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/timer/arm,arch_timer_mmio.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: ARM memory mapped architected timer + +maintainers: + - Marc Zyngier + - Mark Rutland + +description: |+ + ARM cores may have a memory mapped architected timer, which provides up to 8 + frames with a physical and optional virtual timer per frame. + + The memory mapped timer is attached to a GIC to deliver its interrupts via SPIs. + +properties: + compatible: + items: + - enum: + - arm,armv7-timer-mem + + reg: + maxItems: 1 + description: The control frame base address + + '#address-cells': + enum: [1, 2] + + '#size-cells': + const: 1 + + clock-frequency: + description: The frequency of the main counter, in Hz. Should be present + only where necessary to work around broken firmware which does not configure + CNTFRQ on all CPUs to a uniform correct value. Use of this property is + strongly discouraged; fix your firmware unless absolutely impossible. + + always-on: + type: boolean + description: If present, the timer is powered through an always-on power + domain, therefore it never loses context. + + arm,cpu-registers-not-fw-configured: + type: boolean + description: Firmware does not initialize any of the generic timer CPU + registers, which contain their architecturally-defined reset values. Only + supported for 32-bit systems which follow the ARMv7 architected reset + values. + + arm,no-tick-in-suspend: + type: boolean + description: The main counter does not tick when the system is in + low-power system suspend on some SoCs. This behavior does not match the + Architecture Reference Manual's specification that the system counter "must + be implemented in an always-on power domain." + +patternProperties: + '^frame@[0-9a-z]*$': + description: A timer node has up to 8 frame sub-nodes, each with the following properties. 
+ properties: + frame-number: + allOf: + - $ref: "/schemas/types.yaml#/definitions/uint32" + - minimum: 0 + maximum: 7 + + interrupts: + minItems: 1 + maxItems: 2 + items: + - description: physical timer irq + - description: virtual timer irq + + reg : + minItems: 1 + maxItems: 2 + items: + - description: 1st view base address + - description: 2nd optional view base address + + required: + - frame-number + - interrupts + - reg + +required: + - compatible + - reg + - '#address-cells' + - '#size-cells' + +examples: + - | + timer@f0000000 { + compatible = "arm,armv7-timer-mem"; + #address-cells = <1>; + #size-cells = <1>; + ranges; + reg = <0xf0000000 0x1000>; + clock-frequency = <50000000>; + + frame@f0001000 { + frame-number = <0>; + interrupts = <0 13 0x8>, + <0 14 0x8>; + reg = <0xf0001000 0x1000>, + <0xf0002000 0x1000>; + }; + + frame@f0003000 { + frame-number = <1>; + interrupts = <0 15 0x8>; + reg = <0xf0003000 0x1000>; + }; + }; + +... diff --git a/Documentation/devicetree/bindings/timer/arm,global_timer.txt b/Documentation/devicetree/bindings/timer/arm,global_timer.txt deleted file mode 100644 index bdae3a818793..000000000000 --- a/Documentation/devicetree/bindings/timer/arm,global_timer.txt +++ /dev/null @@ -1,27 +0,0 @@ - -* ARM Global Timer - Cortex-A9 are often associated with a per-core Global timer. - -** Timer node required properties: - -- compatible : should contain - * "arm,cortex-a5-global-timer" for Cortex-A5 global timers. - * "arm,cortex-a9-global-timer" for Cortex-A9 global - timers or any compatible implementation. Note: driver - supports versions r2p0 and above. - -- interrupts : One interrupt to each core - -- reg : Specify the base address and the size of the GT timer - register window. - -- clocks : Should be phandle to a clock. - -Example: - - timer@2c000600 { - compatible = "arm,cortex-a9-global-timer"; - reg = <0x2c000600 0x20>; - interrupts = <1 13 0xf01>; - clocks = <&arm_periph_clk>; - }; diff --git a/Documentation/devicetree/bindings/timer/arm,global_timer.yaml b/Documentation/devicetree/bindings/timer/arm,global_timer.yaml new file mode 100644 index 000000000000..21c24a8e28fd --- /dev/null +++ b/Documentation/devicetree/bindings/timer/arm,global_timer.yaml @@ -0,0 +1,46 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/timer/arm,global_timer.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: ARM Global Timer + +maintainers: + - Stuart Menefy + +description: + Cortex-A9 are often associated with a per-core Global timer. + +properties: + compatible: + items: + - enum: + - arm,cortex-a5-global-timer + - arm,cortex-a9-global-timer + + description: driver supports versions r2p0 and above. + + reg: + maxItems: 1 + + interrupts: + maxItems: 1 + + clocks: + maxItems: 1 + +required: + - compatible + - reg + - clocks + +examples: + - | + timer@2c000600 { + compatible = "arm,cortex-a9-global-timer"; + reg = <0x2c000600 0x20>; + interrupts = <1 13 0xf01>; + clocks = <&arm_periph_clk>; + }; +... diff --git a/Documentation/devicetree/bindings/trivial-devices.txt b/Documentation/devicetree/bindings/trivial-devices.txt deleted file mode 100644 index 6ab001fa1ed4..000000000000 --- a/Documentation/devicetree/bindings/trivial-devices.txt +++ /dev/null @@ -1,190 +0,0 @@ -This is a list of trivial i2c devices that have simple device tree -bindings, consisting only of a compatible field, an address and -possibly an interrupt line. 
- -If a device needs more specific bindings, such as properties to -describe some aspect of it, there needs to be a specific binding -document for it just like any other devices. - - -Compatible Vendor / Chip -========== ============= -abracon,abb5zes3 AB-RTCMC-32.768kHz-B5ZE-S3: Real Time Clock/Calendar Module with I2C Interface -ad,ad7414 SMBus/I2C Digital Temperature Sensor in 6-Pin SOT with SMBus Alert and Over Temperature Pin -ad,adm9240 ADM9240: Complete System Hardware Monitor for uProcessor-Based Systems -adi,adt7461 +/-1C TDM Extended Temp Range I.C -adt7461 +/-1C TDM Extended Temp Range I.C -adi,adt7473 +/-1C TDM Extended Temp Range I.C -adi,adt7475 +/-1C TDM Extended Temp Range I.C -adi,adt7476 +/-1C TDM Extended Temp Range I.C -adi,adt7490 +/-1C TDM Extended Temp Range I.C -adi,adxl345 Three-Axis Digital Accelerometer -adi,adxl346 Three-Axis Digital Accelerometer (backward-compatibility value "adi,adxl345" must be listed too) -ams,iaq-core AMS iAQ-Core VOC Sensor -at,24c08 i2c serial eeprom (24cxx) -atmel,at97sc3204t i2c trusted platform module (TPM) -capella,cm32181 CM32181: Ambient Light Sensor -capella,cm3232 CM3232: Ambient Light Sensor -dallas,ds1374 I2C, 32-Bit Binary Counter Watchdog RTC with Trickle Charger and Reset Input/Output -dallas,ds1631 High-Precision Digital Thermometer -dallas,ds1672 Dallas DS1672 Real-time Clock -dallas,ds1682 Total-Elapsed-Time Recorder with Alarm -dallas,ds1775 Tiny Digital Thermometer and Thermostat -dallas,ds3232 Extremely Accurate I²C RTC with Integrated Crystal and SRAM -dallas,ds4510 CPU Supervisor with Nonvolatile Memory and Programmable I/O -dallas,ds75 Digital Thermometer and Thermostat -devantech,srf02 Devantech SRF02 ultrasonic ranger in I2C mode -devantech,srf08 Devantech SRF08 ultrasonic ranger -devantech,srf10 Devantech SRF10 ultrasonic ranger -dlg,da9053 DA9053: flexible system level PMIC with multicore support -dlg,da9063 DA9063: system PMIC for quad-core application processors -domintech,dmard09 DMARD09: 3-axis Accelerometer -domintech,dmard10 DMARD10: 3-axis Accelerometer -epson,rx8010 I2C-BUS INTERFACE REAL TIME CLOCK MODULE -epson,rx8581 I2C-BUS INTERFACE REAL TIME CLOCK MODULE -emmicro,em3027 EM Microelectronic EM3027 Real-time Clock -fsl,mag3110 MAG3110: Xtrinsic High Accuracy, 3D Magnetometer -fsl,mma7660 MMA7660FC: 3-Axis Orientation/Motion Detection Sensor -fsl,mma8450 MMA8450Q: Xtrinsic Low-power, 3-axis Xtrinsic Accelerometer -fsl,mpl3115 MPL3115: Absolute Digital Pressure Sensor -fsl,mpr121 MPR121: Proximity Capacitive Touch Sensor Controller -fsl,sgtl5000 SGTL5000: Ultra Low-Power Audio Codec -gmt,g751 G751: Digital Temperature Sensor and Thermal Watchdog with Two-Wire Interface -infineon,slb9635tt Infineon SLB9635 (Soft-) I2C TPM (old protocol, max 100khz) -infineon,slb9645tt Infineon SLB9645 I2C TPM (new protocol, max 400khz) -infineon,tlv493d-a1b6 Infineon TLV493D-A1B6 I2C 3D Magnetic Sensor -isil,isl1208 Intersil ISL1208 Low Power RTC with Battery Backed SRAM -isil,isl1218 Intersil ISL1218 Low Power RTC with Battery Backed SRAM -isil,isl12022 Intersil ISL12022 Real-time Clock -isil,isl29028 Intersil ISL29028 Ambient Light and Proximity Sensor -isil,isl29030 Intersil ISL29030 Ambient Light and Proximity Sensor -maxim,ds1050 5 Bit Programmable, Pulse-Width Modulator -maxim,max1237 Low-Power, 4-/12-Channel, 2-Wire Serial, 12-Bit ADCs -maxim,max6621 PECI-to-I2C translator for PECI-to-SMBus/I2C protocol conversion -maxim,max6625 9-Bit/12-Bit Temperature Sensors with I²C-Compatible Serial Interface -mcube,mc3230 
mCube 3-axis 8-bit digital accelerometer -memsic,mxc6225 MEMSIC 2-axis 8-bit digital accelerometer -microchip,mcp4017-502 Microchip 7-bit Single I2C Digital POT (5k) -microchip,mcp4017-103 Microchip 7-bit Single I2C Digital POT (10k) -microchip,mcp4017-503 Microchip 7-bit Single I2C Digital POT (50k) -microchip,mcp4017-104 Microchip 7-bit Single I2C Digital POT (100k) -microchip,mcp4018-502 Microchip 7-bit Single I2C Digital POT (5k) -microchip,mcp4018-103 Microchip 7-bit Single I2C Digital POT (10k) -microchip,mcp4018-503 Microchip 7-bit Single I2C Digital POT (50k) -microchip,mcp4018-104 Microchip 7-bit Single I2C Digital POT (100k) -microchip,mcp4019-502 Microchip 7-bit Single I2C Digital POT (5k) -microchip,mcp4019-103 Microchip 7-bit Single I2C Digital POT (10k) -microchip,mcp4019-503 Microchip 7-bit Single I2C Digital POT (50k) -microchip,mcp4019-104 Microchip 7-bit Single I2C Digital POT (100k) -microchip,mcp4531-502 Microchip 7-bit Single I2C Digital Potentiometer (5k) -microchip,mcp4531-103 Microchip 7-bit Single I2C Digital Potentiometer (10k) -microchip,mcp4531-503 Microchip 7-bit Single I2C Digital Potentiometer (50k) -microchip,mcp4531-104 Microchip 7-bit Single I2C Digital Potentiometer (100k) -microchip,mcp4532-502 Microchip 7-bit Single I2C Digital Potentiometer (5k) -microchip,mcp4532-103 Microchip 7-bit Single I2C Digital Potentiometer (10k) -microchip,mcp4532-503 Microchip 7-bit Single I2C Digital Potentiometer (50k) -microchip,mcp4532-104 Microchip 7-bit Single I2C Digital Potentiometer (100k) -microchip,mcp4541-502 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4541-103 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4541-503 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (50k) -microchip,mcp4541-104 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (100k) -microchip,mcp4542-502 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4542-103 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4542-503 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (50k) -microchip,mcp4542-104 Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (100k) -microchip,mcp4551-502 Microchip 8-bit Single I2C Digital Potentiometer (5k) -microchip,mcp4551-103 Microchip 8-bit Single I2C Digital Potentiometer (10k) -microchip,mcp4551-503 Microchip 8-bit Single I2C Digital Potentiometer (50k) -microchip,mcp4551-104 Microchip 8-bit Single I2C Digital Potentiometer (100k) -microchip,mcp4552-502 Microchip 8-bit Single I2C Digital Potentiometer (5k) -microchip,mcp4552-103 Microchip 8-bit Single I2C Digital Potentiometer (10k) -microchip,mcp4552-503 Microchip 8-bit Single I2C Digital Potentiometer (50k) -microchip,mcp4552-104 Microchip 8-bit Single I2C Digital Potentiometer (100k) -microchip,mcp4561-502 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4561-103 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4561-503 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (50k) -microchip,mcp4561-104 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (100k) -microchip,mcp4562-502 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4562-103 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4562-503 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory 
(50k) -microchip,mcp4562-104 Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (100k) -microchip,mcp4631-502 Microchip 7-bit Dual I2C Digital Potentiometer (5k) -microchip,mcp4631-103 Microchip 7-bit Dual I2C Digital Potentiometer (10k) -microchip,mcp4631-503 Microchip 7-bit Dual I2C Digital Potentiometer (50k) -microchip,mcp4631-104 Microchip 7-bit Dual I2C Digital Potentiometer (100k) -microchip,mcp4632-502 Microchip 7-bit Dual I2C Digital Potentiometer (5k) -microchip,mcp4632-103 Microchip 7-bit Dual I2C Digital Potentiometer (10k) -microchip,mcp4632-503 Microchip 7-bit Dual I2C Digital Potentiometer (50k) -microchip,mcp4632-104 Microchip 7-bit Dual I2C Digital Potentiometer (100k) -microchip,mcp4641-502 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4641-103 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4641-503 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (50k) -microchip,mcp4641-104 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (100k) -microchip,mcp4642-502 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4642-103 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4642-503 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (50k) -microchip,mcp4642-104 Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (100k) -microchip,mcp4651-502 Microchip 8-bit Dual I2C Digital Potentiometer (5k) -microchip,mcp4651-103 Microchip 8-bit Dual I2C Digital Potentiometer (10k) -microchip,mcp4651-503 Microchip 8-bit Dual I2C Digital Potentiometer (50k) -microchip,mcp4651-104 Microchip 8-bit Dual I2C Digital Potentiometer (100k) -microchip,mcp4652-502 Microchip 8-bit Dual I2C Digital Potentiometer (5k) -microchip,mcp4652-103 Microchip 8-bit Dual I2C Digital Potentiometer (10k) -microchip,mcp4652-503 Microchip 8-bit Dual I2C Digital Potentiometer (50k) -microchip,mcp4652-104 Microchip 8-bit Dual I2C Digital Potentiometer (100k) -microchip,mcp4661-502 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4661-103 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4661-503 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (50k) -microchip,mcp4661-104 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (100k) -microchip,mcp4662-502 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (5k) -microchip,mcp4662-103 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (10k) -microchip,mcp4662-503 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (50k) -microchip,mcp4662-104 Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (100k) -microchip,tc654 PWM Fan Speed Controller With Fan Fault Detection -microchip,tc655 PWM Fan Speed Controller With Fan Fault Detection -microcrystal,rv3029 Real Time Clock Module with I2C-Bus -miramems,da226 MiraMEMS DA226 2-axis 14-bit digital accelerometer -miramems,da280 MiraMEMS DA280 3-axis 14-bit digital accelerometer -miramems,da311 MiraMEMS DA311 3-axis 12-bit digital accelerometer -national,lm63 Temperature sensor with integrated fan control -national,lm75 I2C TEMP SENSOR -national,lm80 Serial Interface ACPI-Compatible Microprocessor System Hardware Monitor -national,lm85 Temperature sensor with integrated fan control -national,lm92 ±0.33°C Accurate, 12-Bit + Sign Temperature Sensor and Thermal Window Comparator with Two-Wire Interface -nuvoton,npct501 i2c trusted 
platform module (TPM) -nuvoton,npct601 i2c trusted platform module (TPM2) -nuvoton,w83773g Nuvoton Temperature Sensor -nxp,pca9556 Octal SMBus and I2C registered interface -nxp,pca9557 8-bit I2C-bus and SMBus I/O port with reset -nxp,pcf2127 Real-time clock -nxp,pcf2129 Real-time clock -nxp,pcf8523 Real-time Clock -nxp,pcf8563 Real-time clock/calendar -nxp,pcf85063 Tiny Real-Time Clock -oki,ml86v7667 OKI ML86V7667 video decoder -ovti,ov5642 OV5642: Color CMOS QSXGA (5-megapixel) Image Sensor with OmniBSI and Embedded TrueFocus -pericom,pt7c4338 Real-time Clock Module -plx,pex8648 48-Lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch -pulsedlight,lidar-lite-v2 Pulsedlight LIDAR range-finding sensor -ricoh,r2025sd I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC -ricoh,r2221tl I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC -ricoh,rs5c372a I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC -ricoh,rs5c372b I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC -ricoh,rv5c386 I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC -ricoh,rv5c387a I2C bus SERIAL INTERFACE REAL-TIME CLOCK IC -samsung,24ad0xd1 S524AD0XF1 (128K/256K-bit Serial EEPROM for Low Power) -sgx,vz89x SGX Sensortech VZ89X Sensors -sii,s35390a 2-wire CMOS real-time clock -silabs,si7020 Relative Humidity and Temperature Sensors -skyworks,sky81452 Skyworks SKY81452: Six-Channel White LED Driver with Touch Panel Bias Supply -st,24c256 i2c serial eeprom (24cxx) -taos,tsl2550 Ambient Light Sensor with SMBUS/Two Wire Serial Interface -ti,ads7828 8-Channels, 12-bit ADC -ti,ads7830 8-Channels, 8-bit ADC -ti,amc6821 Temperature Monitoring and Fan Control -ti,tsc2003 I2C Touch-Screen Controller -ti,tmp102 Low Power Digital Temperature Sensor with SMBUS/Two Wire Serial Interface -ti,tmp103 Low Power Digital Temperature Sensor with SMBUS/Two Wire Serial Interface -ti,tmp275 Digital Temperature Sensor -winbond,w83793 Winbond/Nuvoton H/W Monitor -winbond,wpct301 i2c trusted platform module (TPM) diff --git a/Documentation/devicetree/bindings/trivial-devices.yaml b/Documentation/devicetree/bindings/trivial-devices.yaml new file mode 100644 index 000000000000..cc64ec63a6ad --- /dev/null +++ b/Documentation/devicetree/bindings/trivial-devices.yaml @@ -0,0 +1,342 @@ +# SPDX-License-Identifier: GPL-2.0 +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/trivial-devices.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Trivial I2C and SPI devices that have simple device tree bindings + +maintainers: + - Rob Herring + +description: | + This is a list of trivial I2C and SPI devices that have simple device tree + bindings, consisting only of a compatible field, an address and possibly an + interrupt line. + + If a device needs more specific bindings, such as properties to + describe some aspect of it, there needs to be a specific binding + document for it just like any other devices. 
+ +properties: + reg: + maxItems: 1 + interrupts: + maxItems: 1 + compatible: + items: + - enum: + # SMBus/I2C Digital Temperature Sensor in 6-Pin SOT with SMBus Alert and Over Temperature Pin + - ad,ad7414 + # ADM9240: Complete System Hardware Monitor for uProcessor-Based Systems + - ad,adm9240 + # +/-1C TDM Extended Temp Range I.C + - adi,adt7461 + # +/-1C TDM Extended Temp Range I.C + - adt7461 + # +/-1C TDM Extended Temp Range I.C + - adi,adt7473 + # +/-1C TDM Extended Temp Range I.C + - adi,adt7475 + # +/-1C TDM Extended Temp Range I.C + - adi,adt7476 + # +/-1C TDM Extended Temp Range I.C + - adi,adt7490 + # Three-Axis Digital Accelerometer + - adi,adxl345 + # Three-Axis Digital Accelerometer (backward-compatibility value "adi,adxl345" must be listed too) + - adi,adxl346 + # AMS iAQ-Core VOC Sensor + - ams,iaq-core + # i2c serial eeprom (24cxx) + - at,24c08 + # i2c trusted platform module (TPM) + - atmel,at97sc3204t + # CM32181: Ambient Light Sensor + - capella,cm32181 + # CM3232: Ambient Light Sensor + - capella,cm3232 + # High-Precision Digital Thermometer + - dallas,ds1631 + # Total-Elapsed-Time Recorder with Alarm + - dallas,ds1682 + # Tiny Digital Thermometer and Thermostat + - dallas,ds1775 + # CPU Supervisor with Nonvolatile Memory and Programmable I/O + - dallas,ds4510 + # Digital Thermometer and Thermostat + - dallas,ds75 + # Devantech SRF02 ultrasonic ranger in I2C mode + - devantech,srf02 + # Devantech SRF08 ultrasonic ranger + - devantech,srf08 + # Devantech SRF10 ultrasonic ranger + - devantech,srf10 + # DA9053: flexible system level PMIC with multicore support + - dlg,da9053 + # DA9063: system PMIC for quad-core application processors + - dlg,da9063 + # DMARD09: 3-axis Accelerometer + - domintech,dmard09 + # DMARD10: 3-axis Accelerometer + - domintech,dmard10 + # MMA7660FC: 3-Axis Orientation/Motion Detection Sensor + - fsl,mma7660 + # MMA8450Q: Xtrinsic Low-power, 3-axis Xtrinsic Accelerometer + - fsl,mma8450 + # MPL3115: Absolute Digital Pressure Sensor + - fsl,mpl3115 + # MPR121: Proximity Capacitive Touch Sensor Controller + - fsl,mpr121 + # SGTL5000: Ultra Low-Power Audio Codec + - fsl,sgtl5000 + # G751: Digital Temperature Sensor and Thermal Watchdog with Two-Wire Interface + - gmt,g751 + # Infineon SLB9635 (Soft-) I2C TPM (old protocol, max 100khz) + - infineon,slb9635tt + # Infineon SLB9645 I2C TPM (new protocol, max 400khz) + - infineon,slb9645tt + # Infineon TLV493D-A1B6 I2C 3D Magnetic Sensor + - infineon,tlv493d-a1b6 + # Intersil ISL29028 Ambient Light and Proximity Sensor + - isil,isl29028 + # Intersil ISL29030 Ambient Light and Proximity Sensor + - isil,isl29030 + # 5 Bit Programmable, Pulse-Width Modulator + - maxim,ds1050 + # Low-Power, 4-/12-Channel, 2-Wire Serial, 12-Bit ADCs + - maxim,max1237 + # PECI-to-I2C translator for PECI-to-SMBus/I2C protocol conversion + - maxim,max6621 + # 9-Bit/12-Bit Temperature Sensors with I²C-Compatible Serial Interface + - maxim,max6625 + # mCube 3-axis 8-bit digital accelerometer + - mcube,mc3230 + # MEMSIC 2-axis 8-bit digital accelerometer + - memsic,mxc6225 + # Microchip 7-bit Single I2C Digital POT (5k) + - microchip,mcp4017-502 + # Microchip 7-bit Single I2C Digital POT (10k) + - microchip,mcp4017-103 + # Microchip 7-bit Single I2C Digital POT (50k) + - microchip,mcp4017-503 + # Microchip 7-bit Single I2C Digital POT (100k) + - microchip,mcp4017-104 + # Microchip 7-bit Single I2C Digital POT (5k) + - microchip,mcp4018-502 + # Microchip 7-bit Single I2C Digital POT (10k) + - microchip,mcp4018-103 + # Microchip 7-bit 
Single I2C Digital POT (50k) + - microchip,mcp4018-503 + # Microchip 7-bit Single I2C Digital POT (100k) + - microchip,mcp4018-104 + # Microchip 7-bit Single I2C Digital POT (5k) + - microchip,mcp4019-502 + # Microchip 7-bit Single I2C Digital POT (10k) + - microchip,mcp4019-103 + # Microchip 7-bit Single I2C Digital POT (50k) + - microchip,mcp4019-503 + # Microchip 7-bit Single I2C Digital POT (100k) + - microchip,mcp4019-104 + # Microchip 7-bit Single I2C Digital Potentiometer (5k) + - microchip,mcp4531-502 + # Microchip 7-bit Single I2C Digital Potentiometer (10k) + - microchip,mcp4531-103 + # Microchip 7-bit Single I2C Digital Potentiometer (50k) + - microchip,mcp4531-503 + # Microchip 7-bit Single I2C Digital Potentiometer (100k) + - microchip,mcp4531-104 + # Microchip 7-bit Single I2C Digital Potentiometer (5k) + - microchip,mcp4532-502 + # Microchip 7-bit Single I2C Digital Potentiometer (10k) + - microchip,mcp4532-103 + # Microchip 7-bit Single I2C Digital Potentiometer (50k) + - microchip,mcp4532-503 + # Microchip 7-bit Single I2C Digital Potentiometer (100k) + - microchip,mcp4532-104 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4541-502 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4541-103 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4541-503 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4541-104 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4542-502 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4542-103 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4542-503 + # Microchip 7-bit Single I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4542-104 + # Microchip 8-bit Single I2C Digital Potentiometer (5k) + - microchip,mcp4551-502 + # Microchip 8-bit Single I2C Digital Potentiometer (10k) + - microchip,mcp4551-103 + # Microchip 8-bit Single I2C Digital Potentiometer (50k) + - microchip,mcp4551-503 + # Microchip 8-bit Single I2C Digital Potentiometer (100k) + - microchip,mcp4551-104 + # Microchip 8-bit Single I2C Digital Potentiometer (5k) + - microchip,mcp4552-502 + # Microchip 8-bit Single I2C Digital Potentiometer (10k) + - microchip,mcp4552-103 + # Microchip 8-bit Single I2C Digital Potentiometer (50k) + - microchip,mcp4552-503 + # Microchip 8-bit Single I2C Digital Potentiometer (100k) + - microchip,mcp4552-104 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4561-502 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4561-103 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4561-503 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4561-104 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4562-502 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4562-103 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4562-503 + # Microchip 8-bit Single I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4562-104 + # Microchip 7-bit Dual I2C Digital Potentiometer (5k) + - microchip,mcp4631-502 + # Microchip 7-bit Dual I2C Digital Potentiometer (10k) + - 
microchip,mcp4631-103 + # Microchip 7-bit Dual I2C Digital Potentiometer (50k) + - microchip,mcp4631-503 + # Microchip 7-bit Dual I2C Digital Potentiometer (100k) + - microchip,mcp4631-104 + # Microchip 7-bit Dual I2C Digital Potentiometer (5k) + - microchip,mcp4632-502 + # Microchip 7-bit Dual I2C Digital Potentiometer (10k) + - microchip,mcp4632-103 + # Microchip 7-bit Dual I2C Digital Potentiometer (50k) + - microchip,mcp4632-503 + # Microchip 7-bit Dual I2C Digital Potentiometer (100k) + - microchip,mcp4632-104 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4641-502 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4641-103 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4641-503 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4641-104 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4642-502 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4642-103 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4642-503 + # Microchip 7-bit Dual I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4642-104 + # Microchip 8-bit Dual I2C Digital Potentiometer (5k) + - microchip,mcp4651-502 + # Microchip 8-bit Dual I2C Digital Potentiometer (10k) + - microchip,mcp4651-103 + # Microchip 8-bit Dual I2C Digital Potentiometer (50k) + - microchip,mcp4651-503 + # Microchip 8-bit Dual I2C Digital Potentiometer (100k) + - microchip,mcp4651-104 + # Microchip 8-bit Dual I2C Digital Potentiometer (5k) + - microchip,mcp4652-502 + # Microchip 8-bit Dual I2C Digital Potentiometer (10k) + - microchip,mcp4652-103 + # Microchip 8-bit Dual I2C Digital Potentiometer (50k) + - microchip,mcp4652-503 + # Microchip 8-bit Dual I2C Digital Potentiometer (100k) + - microchip,mcp4652-104 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4661-502 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4661-103 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4661-503 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4661-104 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (5k) + - microchip,mcp4662-502 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (10k) + - microchip,mcp4662-103 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (50k) + - microchip,mcp4662-503 + # Microchip 8-bit Dual I2C Digital Potentiometer with NV Memory (100k) + - microchip,mcp4662-104 + # PWM Fan Speed Controller With Fan Fault Detection + - microchip,tc654 + # PWM Fan Speed Controller With Fan Fault Detection + - microchip,tc655 + # MiraMEMS DA226 2-axis 14-bit digital accelerometer + - miramems,da226 + # MiraMEMS DA280 3-axis 14-bit digital accelerometer + - miramems,da280 + # MiraMEMS DA311 3-axis 12-bit digital accelerometer + - miramems,da311 + # Temperature sensor with integrated fan control + - national,lm63 + # I2C TEMP SENSOR + - national,lm75 + # Serial Interface ACPI-Compatible Microprocessor System Hardware Monitor + - national,lm80 + # Temperature sensor with integrated fan control + - national,lm85 + # ±0.33°C Accurate, 12-Bit + Sign Temperature Sensor and Thermal Window Comparator with Two-Wire Interface + - national,lm92 + # i2c trusted platform module 
(TPM) + - nuvoton,npct501 + # i2c trusted platform module (TPM2) + - nuvoton,npct601 + # Nuvoton Temperature Sensor + - nuvoton,w83773g + # Octal SMBus and I2C registered interface + - nxp,pca9556 + # 8-bit I2C-bus and SMBus I/O port with reset + - nxp,pca9557 + # OKI ML86V7667 video decoder + - oki,ml86v7667 + # OV5642: Color CMOS QSXGA (5-megapixel) Image Sensor with OmniBSI and Embedded TrueFocus + - ovti,ov5642 + # 48-Lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch + - plx,pex8648 + # Pulsedlight LIDAR range-finding sensor + - pulsedlight,lidar-lite-v2 + # S524AD0XF1 (128K/256K-bit Serial EEPROM for Low Power) + - samsung,24ad0xd1 + # SGX Sensortech VZ89X Sensors + - sgx,vz89x + # Relative Humidity and Temperature Sensors + - silabs,si7020 + # Skyworks SKY81452: Six-Channel White LED Driver with Touch Panel Bias Supply + - skyworks,sky81452 + # i2c serial eeprom (24cxx) + - st,24c256 + # Ambient Light Sensor with SMBUS/Two Wire Serial Interface + - taos,tsl2550 + # 8-Channels, 12-bit ADC + - ti,ads7828 + # 8-Channels, 8-bit ADC + - ti,ads7830 + # Temperature Monitoring and Fan Control + - ti,amc6821 + # I2C Touch-Screen Controller + - ti,tsc2003 + # Low Power Digital Temperature Sensor with SMBUS/Two Wire Serial Interface + - ti,tmp102 + # Low Power Digital Temperature Sensor with SMBUS/Two Wire Serial Interface + - ti,tmp103 + # Digital Temperature Sensor + - ti,tmp275 + # Winbond/Nuvoton H/W Monitor + - winbond,w83793 + # i2c trusted platform module (TPM) + - winbond,wpct301 + +required: + - compatible + - reg + +... diff --git a/Documentation/devicetree/bindings/ufs/cdns,ufshc.txt b/Documentation/devicetree/bindings/ufs/cdns,ufshc.txt new file mode 100644 index 000000000000..a04a4989ec7f --- /dev/null +++ b/Documentation/devicetree/bindings/ufs/cdns,ufshc.txt @@ -0,0 +1,31 @@ +* Cadence Universal Flash Storage (UFS) Controller + +UFS nodes are defined to describe on-chip UFS host controllers. +Each UFS controller instance should have its own node. +Please see the ufshcd-pltfrm.txt for a list of all available properties. + +Required properties: +- compatible : Compatible list, contains the following controller: + "cdns,ufshc" + complemented with the JEDEC version: + "jedec,ufs-2.0" + +- reg : Address and length of the UFS register set. +- interrupts : One interrupt mapping. +- freq-table-hz : Clock frequency table. + See the ufshcd-pltfrm.txt for details. +- clocks : List of phandle and clock specifier pairs. +- clock-names : List of clock input name strings sorted in the same + order as the clocks property. "core_clk" is mandatory. + Depending on a type of a PHY, + the "phy_clk" clock can also be added, if needed. + +Example: + ufs@fd030000 { + compatible = "cdns,ufshc", "jedec,ufs-2.0"; + reg = <0xfd030000 0x10000>; + interrupts = <0 1 IRQ_TYPE_LEVEL_HIGH>; + freq-table-hz = <0 0>, <0 0>; + clocks = <&ufs_core_clk>, <&ufs_phy_clk>; + clock-names = "core_clk", "phy_clk"; + }; diff --git a/Documentation/devicetree/bindings/ufs/ufshcd-pltfrm.txt b/Documentation/devicetree/bindings/ufs/ufshcd-pltfrm.txt index 2df00524bd21..8cf59452c675 100644 --- a/Documentation/devicetree/bindings/ufs/ufshcd-pltfrm.txt +++ b/Documentation/devicetree/bindings/ufs/ufshcd-pltfrm.txt @@ -33,6 +33,12 @@ Optional properties: - clocks : List of phandle and clock specifier pairs - clock-names : List of clock input name strings sorted in the same order as the clocks property. + "ref_clk" indicates reference clock frequency. 
+ The UFS host supplies a reference clock to the UFS device, and the UFS device
+ specification allows the host to provide one of four frequencies (19.2 MHz,
+ 26 MHz, 38.4 MHz, 52 MHz) for the reference clock. This "ref_clk" entry is
+ parsed and used to update the reference clock setting in the device.
+ Defaults to 26 MHz (as per the specification) if not specified by the host.
- freq-table-hz : Array of operating frequencies stored in the same order
 as the clocks property. If this property is not defined or a value in the array is "0" then it is assumed
diff --git a/Documentation/devicetree/bindings/usb/ci-hdrc-usb2.txt b/Documentation/devicetree/bindings/usb/ci-hdrc-usb2.txt
index 529e51879fb2..adae82385dd6 100644
--- a/Documentation/devicetree/bindings/usb/ci-hdrc-usb2.txt
+++ b/Documentation/devicetree/bindings/usb/ci-hdrc-usb2.txt
@@ -80,15 +80,19 @@ Optional properties:
 controller. It's expected that a mux state of 0 indicates device mode and a
 mux state of 1 indicates host mode.
- mux-control-names: Shall be "usb_switch" if mux-controls is specified.
-- pinctrl-names: Names for optional pin modes in "default", "host", "device"
+- pinctrl-names: Names for optional pin modes in "default", "host", "device".
+ In case of HSIC mode, the "idle" and "active" pin modes are mandatory. In this
+ case, the "idle" state needs to pull down the data and strobe pins
+ and the "active" state needs to pull up the strobe pin.
- pinctrl-n: alternate pin modes
i.mx specific properties
- fsl,usbmisc: phandle of the non-core register device, with one argument
 that indicates the USB controller index
- disable-over-current: disable over current detect
-- over-current-active-high: over current signal polarity is high active,
- typically over current signal polarity is low active.
+- over-current-active-low: over current signal polarity is active low.
+- over-current-active-high: over current signal polarity is active high.
+ It's recommended to specify the over current polarity.
- external-vbus-divider: enables off-chip resistor divider for Vbus
Example:
@@ -111,3 +115,29 @@ Example:
 mux-controls = <&usb_switch>;
 mux-control-names = "usb_switch";
 };
+
+Example for HSIC:
+
+ usb@2184400 {
+ compatible = "fsl,imx6q-usb", "fsl,imx27-usb";
+ reg = <0x02184400 0x200>;
+ interrupts = <0 41 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6QDL_CLK_USBOH3>;
+ fsl,usbphy = <&usbphynop1>;
+ fsl,usbmisc = <&usbmisc 2>;
+ phy_type = "hsic";
+ dr_mode = "host";
+ ahb-burst-config = <0x0>;
+ tx-burst-size-dword = <0x10>;
+ rx-burst-size-dword = <0x10>;
+ pinctrl-names = "idle", "active";
+ pinctrl-0 = <&pinctrl_usbh2_idle>;
+ pinctrl-1 = <&pinctrl_usbh2_active>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ usbnet: smsc@1 {
+ compatible = "usb424,9730";
+ reg = <1>;
+ };
+ };
diff --git a/Documentation/devicetree/bindings/usb/dwc3.txt b/Documentation/devicetree/bindings/usb/dwc3.txt
index 636630fb92d7..8e5265e9f658 100644
--- a/Documentation/devicetree/bindings/usb/dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/dwc3.txt
@@ -37,7 +37,11 @@ Optional properties:
- phy-names: from the *Generic PHY* bindings; supported names are "usb2-phy"
 or "usb3-phy".
- resets: a single pair of phandle and reset specifier
+ - snps,usb2-lpm-disable: indicates that USB2 hardware LPM should not be enabled
- snps,usb3_lpm_capable: determines if platform is USB3 LPM capable
+ - snps,dis-start-transfer-quirk: when set, disable the isoc START TRANSFER command
+ failure SW work-around for DWC_usb31 version 1.70a-ea06
+ and prior.
- snps,disable_scramble_quirk: true when SW should disable data scrambling.
 Only really useful for FPGA builds.
- snps,has-lpm-erratum: true when DWC3 was configured with LPM Erratum enabled
diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
index a2f4451cf3f9..dbe926b6b78a 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
+++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -171,6 +171,7 @@ holtek Holtek Semiconductor, Inc.
hwacom HwaCom Systems Inc.
i2se I2SE GmbH
ibm International Business Machines (IBM)
+icplus IC Plus Corp.
idt Integrated Device Technologies, Inc.
ifi Ingenieurburo Fur Ic-Technologie (I/F/I)
ilitek ILI Technology Corporation (ILITEK)
@@ -304,6 +305,7 @@ pixcir PIXCIR MICROELECTRONICS Co., Ltd
plathome Plat'Home Co., Ltd.
plda PLDA
plx Broadcom Corporation (formerly PLX Technology)
+pni PNI Sensor Corporation
portwell Portwell Inc.
poslab Poslab Technology Co., Ltd.
powervr PowerVR (deprecated, use img)
@@ -416,6 +418,7 @@ vamrs Vamrs Ltd.
variscite Variscite Ltd.
via VIA Technologies, Inc.
virtio Virtual I/O Device Specification, developed by the OASIS consortium
+vishay Vishay Intertechnology, Inc.
vitesse Vitesse Semiconductor Corporation
vivante Vivante Corporation
vocore VoCore Studio
diff --git a/Documentation/devicetree/todo.txt b/Documentation/devicetree/todo.txt
deleted file mode 100644
index b5139d1de811..000000000000
--- a/Documentation/devicetree/todo.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-Todo list for devicetree:
-
-=== General structure ===
-- Switch from custom lists to (h)list_head for nodes and properties structure
-
-=== CONFIG_OF_DYNAMIC ===
-- Switch to RCU for tree updates and get rid of global spinlock
-- Document node lifecycle for CONFIG_OF_DYNAMIC
-- Always set ->full_name at of_attach_node() time
-- pseries: Get rid of open-coded tree modification from arch/powerpc/platforms/pseries/dlpar.c
diff --git a/Documentation/devicetree/writing-schema.md b/Documentation/devicetree/writing-schema.md
new file mode 100644
index 000000000000..a3652d33a48f
--- /dev/null
+++ b/Documentation/devicetree/writing-schema.md
@@ -0,0 +1,130 @@
+# Writing DeviceTree Bindings in json-schema
+
+Devicetree bindings are written using json-schema vocabulary. Schema files are
+written in a JSON compatible subset of YAML. YAML is used instead of JSON as it
+is considered more human readable and has some advantages such as allowing
+comments (prefixed with '#').
+
+## Schema Contents
+
+Each schema doc is a structured json-schema which is defined by a set of
+top-level properties. Generally, there is one binding defined per file. The
+top-level json-schema properties used are:
+
+- __$id__ - A json-schema unique identifier string. The string must be a valid
+URI typically containing the binding's filename and path. For DT schema, it must
+begin with "http://devicetree.org/schemas/". The URL is used in constructing
+references to other files specified in schema "$ref" properties. A $ref value
+with a leading '/' will have the hostname prepended. A $ref value with only a
+relative path or filename will be prepended with the hostname and path
+components of the current schema file's '$id' value. A URL is used even for
+local files, but there may not actually be files present at those locations.
+
+- __$schema__ - Indicates the meta-schema the schema file adheres to.
+
+- __title__ - A one-line description of the contents of the binding schema.
+
+- __maintainers__ - A DT specific property.
Contains a list of email address(es) +for maintainers of this binding. + +- __description__ - Optional. A multi-line text block containing any detailed +information about this binding. It should contain things such as what the block +or device does, standards the device conforms to, and links to datasheets for +more information. + +- __select__ - Optional. A json-schema used to match nodes for applying the +schema. By default, without 'select', nodes are matched against their possible +compatible string values or node name. Most bindings should not need select. + +- __allOf__ - Optional. A list of other schemas to include. This is used to +include other schemas the binding conforms to. This may be schemas for a +particular class of devices such as I2C or SPI controllers. + +- __properties__ - A set of sub-schemas defining all the DT properties for the +binding. The exact schema syntax depends on whether the properties are known, +common properties (e.g. 'interrupts') or binding/vendor-specific properties. + + A property can also define a child DT node with child properties defined +under it. + + For more details on properties sections, see the 'Property Schema' section. + +- __patternProperties__ - Optional. Similar to 'properties', but the names are regexes. + +- __required__ - A list of DT properties from the 'properties' section that +must always be present. + +- __examples__ - Optional. A list of one or more DTS hunks implementing the +binding. Note: YAML doesn't allow leading tabs, so spaces must be used instead. + +Unless noted otherwise, all properties are required. + +## Property Schema + +The 'properties' section of the schema contains all the DT properties for a +binding. Each property contains a set of constraints using json-schema +vocabulary for that property. These property schemas are what the tools use for +validation of DT files. + +For common properties, only additional constraints not covered by the common +binding schema need to be defined, such as how many values are valid or what +possible values are valid. + +Vendor-specific properties will typically need more detailed schema. With the +exception of boolean properties, they should have a reference to a type in +schemas/types.yaml. A "description" property is always required. + +The Devicetree schemas don't exactly match the YAML-encoded DT data produced by +dtc. They are simplified to make them more compact and avoid a bunch of +boilerplate. The tools process the schema files to produce the final schema for +validation. There are currently two transformations the tools perform. + +The default for arrays in json-schema is that they are variable-sized and allow more +entries than explicitly defined. This can be restricted by defining 'minItems', +'maxItems', and 'additionalItems'. However, for DeviceTree schemas, a fixed +size is desired in most cases, so these properties are added based on the +number of entries in an 'items' list. + +The YAML Devicetree format also makes all string values an array and scalar +values a matrix (in order to define groupings) even when only a single value +is present. Single entries in schemas are fixed up to match this encoding. + +## Testing + +### Dependencies + +The DT schema project must be installed in order to validate the DT schema +binding documents and validate DTS files using the DT schema. The DT schema +project can be installed with pip: + +`pip3 install git+https://github.com/robherring/yaml-bindings.git@master` + +dtc must also be built with YAML output support enabled. This requires that +libyaml and its headers be installed on the host system.
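Putting the pieces above together, a complete schema is quite small. The following is a minimal, entirely hypothetical example: the "vendor" prefix, the "widget" device, and the maintainer address are invented for illustration, and the $schema URL should be checked against the meta-schemas actually installed:

```yaml
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/misc/vendor,widget.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Hypothetical Vendor Widget controller

maintainers:
  - A. Maintainer <maintainer@example.com>

description: |
  A made-up device, used only to illustrate the schema layout
  described in this document.

properties:
  compatible:
    const: vendor,widget

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

required:
  - compatible
  - reg

examples:
  - |
    widget@1000 {
        compatible = "vendor,widget";
        reg = <0x1000 0x100>;
        interrupts = <42>;
    };
```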
+ +### Running checks + +The DT schema binding documents must be validated using the meta-schema (the +schema for the schema) to ensure they are both valid json-schema and valid +binding schema. All of the DT binding documents can be validated using the +`dt_binding_check` target: + +`make dt_binding_check` + +In order to perform validation of DT source files, use the `dtbs_check` target: + +`make dtbs_check` + +This will first run `dt_binding_check`, which generates the processed schema. + +It is also possible to run checks against a single schema file by setting the +'DT_SCHEMA_FILES' variable to a specific schema file. + +`make dtbs_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/trivial-devices.yaml` + + +## json-schema Resources + +[JSON-Schema Specifications](http://json-schema.org/) + +[Using JSON Schema Book](http://usingjsonschema.com/) diff --git a/Documentation/doc-guide/kernel-doc.rst b/Documentation/doc-guide/kernel-doc.rst index 8db53cdc225f..51be62aa4385 100644 --- a/Documentation/doc-guide/kernel-doc.rst +++ b/Documentation/doc-guide/kernel-doc.rst @@ -77,7 +77,7 @@ The general format of a function and function-like macro kernel-doc comment is:: * Context: Describes whether the function can sleep, what locks it takes, * releases, or expects to be held. It can extend over multiple * lines. - * Return: Describe the return value of foobar. + * Return: Describe the return value of function_name. * * The return value description can also have multiple paragraphs, and should * be placed at the end of the comment block. diff --git a/Documentation/doc-guide/sphinx.rst b/Documentation/doc-guide/sphinx.rst index f0796daa95b4..02605ee1d876 100644 --- a/Documentation/doc-guide/sphinx.rst +++ b/Documentation/doc-guide/sphinx.rst @@ -1,3 +1,5 @@ +.. _sphinxdoc: + Introduction ============ diff --git a/Documentation/driver-api/dmaengine/dmatest.rst b/Documentation/driver-api/dmaengine/dmatest.rst index 7ce5e71c353e..49efebd2c043 100644 --- a/Documentation/driver-api/dmaengine/dmatest.rst +++ b/Documentation/driver-api/dmaengine/dmatest.rst @@ -11,6 +11,10 @@ This small document introduces how to test DMA drivers using dmatest module. capability of the following: DMA_MEMCPY (memory-to-memory), DMA_MEMSET (const-to-memory or memory-to-memory, when emulated), DMA_XOR, DMA_PQ. +.. note:: + In case of any related questions, use the official mailing list + dmaengine@vger.kernel.org. + Part 1 - How to build the test module ===================================== diff --git a/Documentation/driver-api/firmware/other_interfaces.rst b/Documentation/driver-api/firmware/other_interfaces.rst index 36c47b1e9824..a4ac54b5fd79 100644 --- a/Documentation/driver-api/firmware/other_interfaces.rst +++ b/Documentation/driver-api/firmware/other_interfaces.rst @@ -13,3 +13,33 @@ EDD Interfaces .. kernel-doc:: drivers/firmware/edd.c :internal: +Intel Stratix10 SoC Service Layer +--------------------------------- +Some features of the Intel Stratix10 SoC require a level of privilege +higher than the kernel is granted. Such secure features include +FPGA programming. In terms of the ARMv8 architecture, the kernel runs +at Exception Level 1 (EL1), while access to these features requires +Exception Level 3 (EL3). + +The Intel Stratix10 SoC service layer provides an in-kernel API for +drivers to request access to the secure features. The requests are queued +and processed one by one.
ARM’s SMCCC is used to pass the execution +of the requests on to a secure monitor (EL3). + +.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h + :functions: stratix10_svc_command_code + +.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h + :functions: stratix10_svc_client_msg + +.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h + :functions: stratix10_svc_command_reconfig_payload + +.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h + :functions: stratix10_svc_cb_data + +.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h + :functions: stratix10_svc_client + +.. kernel-doc:: drivers/firmware/stratix10-svc.c + :export: diff --git a/Documentation/driver-api/gpio/driver.rst b/Documentation/driver-api/gpio/driver.rst index a6c14ff0c54f..a92d8837b62b 100644 --- a/Documentation/driver-api/gpio/driver.rst +++ b/Documentation/driver-api/gpio/driver.rst @@ -434,7 +434,9 @@ try_module_get()). A GPIO driver can use the following functions instead to request and free descriptors without being pinned to the kernel forever:: struct gpio_desc *gpiochip_request_own_desc(struct gpio_desc *desc, - const char *label) + u16 hwnum, + const char *label, + enum gpiod_flags flags) void gpiochip_free_own_desc(struct gpio_desc *desc) diff --git a/Documentation/driver-api/pm/devices.rst b/Documentation/driver-api/pm/devices.rst index 1128705a5731..090c151aa86b 100644 --- a/Documentation/driver-api/pm/devices.rst +++ b/Documentation/driver-api/pm/devices.rst @@ -6,6 +6,8 @@ .. |struct wakeup_source| replace:: :c:type:`struct wakeup_source ` .. |struct device| replace:: :c:type:`struct device ` +.. _driverapi_pm_devices: + ============================== Device Power Management Basics ============================== diff --git a/Documentation/driver-api/usb/index.rst b/Documentation/driver-api/usb/index.rst index 8fe995a1ec94..cfa8797ea614 100644 --- a/Documentation/driver-api/usb/index.rst +++ b/Documentation/driver-api/usb/index.rst @@ -19,6 +19,7 @@ Linux USB API dwc3 writing_musb_glue_layer typec + typec_bus usb3-debug-port .. only:: subproject and html diff --git a/Documentation/driver-api/usb/typec.rst b/Documentation/driver-api/usb/typec.rst index 48ff58095f11..201163d8c13e 100644 --- a/Documentation/driver-api/usb/typec.rst +++ b/Documentation/driver-api/usb/typec.rst @@ -1,3 +1,4 @@ +.. _typec: USB Type-C connector class ========================== diff --git a/Documentation/driver-api/usb/typec_bus.rst b/Documentation/driver-api/usb/typec_bus.rst index d5eec1715b5b..f47a69bff498 100644 --- a/Documentation/driver-api/usb/typec_bus.rst +++ b/Documentation/driver-api/usb/typec_bus.rst @@ -13,10 +13,10 @@ every alternate mode, so every alternate mode will need a custom driver. USB Type-C bus allows binding a driver to the discovered partner alternate modes by using the SVID and the mode number. -USB Type-C Connector Class provides a device for every alternate mode a port -supports, and separate device for every alternate mode the partner supports. -The drivers for the alternate modes are bound to the partner alternate mode -devices, and the port alternate mode devices must be handled by the port +:ref:`USB Type-C Connector Class ` provides a device for every alternate +mode a port supports, and separate device for every alternate mode the partner +supports. The drivers for the alternate modes are bound to the partner alternate +mode devices, and the port alternate mode devices must be handled by the port drivers. 
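To make the binding model concrete, here is a minimal sketch of an alternate mode driver. The SVID value and all names are invented for illustration, and the field layout should be checked against ``include/linux/usb/typec_altmode.h``; this is a sketch of the pattern, not a reference implementation::

    #include <linux/module.h>
    #include <linux/usb/typec_altmode.h>

    #define EXAMPLE_SVID	0xff01	/* hypothetical SVID */

    static int example_altmode_probe(struct typec_altmode *alt)
    {
    	/* install the ->ops vector (including ->vdm), enter the mode, etc. */
    	return 0;
    }

    static void example_altmode_remove(struct typec_altmode *alt)
    {
    	/* exit the mode and undo whatever probe did */
    }

    /* bind by SVID and mode number, as described above */
    static const struct typec_device_id example_altmode_id[] = {
    	{ EXAMPLE_SVID, TYPEC_ANY_MODE },
    	{ }
    };
    MODULE_DEVICE_TABLE(typec, example_altmode_id);

    static struct typec_altmode_driver example_altmode_driver = {
    	.id_table = example_altmode_id,
    	.probe = example_altmode_probe,
    	.remove = example_altmode_remove,
    	.driver = {
    		.name = "example_altmode",
    		.owner = THIS_MODULE,
    	},
    };
    module_typec_altmode_driver(example_altmode_driver);

    MODULE_LICENSE("GPL");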
When a new partner alternate mode device is registered, it is linked to the @@ -46,7 +46,7 @@ enter any modes on their own. ``->vdm`` is the most important callback in the operation callbacks vector. It will be used to deliver all the SVID specific commands from the partner to the alternate mode driver, and vice versa in the case of port drivers. The drivers send -the SVID specific commands to each other using :c:func:`typec_altmode_vmd()`. +the SVID specific commands to each other using :c:func:`typec_altmode_vdm()`. If the communication with the partner using the SVID specific commands results in a need to reconfigure the pins on the connector, the alternate mode driver @@ -67,15 +67,15 @@ Type-C Specification, and also put the connector back to ``TYPEC_STATE_USB`` after the mode has been exited. An example of working definitions for SVID specific pin configurations would -look like this: +look like this:: -enum { - ALTMODEX_CONF_A = TYPEC_STATE_MODAL, - ALTMODEX_CONF_B, - ... -}; + enum { + ALTMODEX_CONF_A = TYPEC_STATE_MODAL, + ALTMODEX_CONF_B, + ... + }; -Helper macro ``TYPEC_MODAL_STATE()`` can also be used: +Helper macro ``TYPEC_MODAL_STATE()`` can also be used:: #define ALTMODEX_CONF_A = TYPEC_MODAL_STATE(0); #define ALTMODEX_CONF_B = TYPEC_MODAL_STATE(1); diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt index fc4cc24dfb97..841c99529d27 100644 --- a/Documentation/driver-model/devres.txt +++ b/Documentation/driver-model/devres.txt @@ -132,6 +132,13 @@ devres. Complexity is shifted from less maintained low level drivers to better maintained higher layer. Also, as init failure path is shared with exit path, both can get more testing. +Note though that when converting current calls or assignments to +managed devm_* versions, it is up to you to check whether internal operations, +like allocating memory, have failed. Managed resource handling pertains to the +freeing of these resources *only*; all other checks needed are still +on you. In some cases this may mean introducing checks that were not +necessary before moving to the managed devm_* calls. + 3. Devres group --------------- @@ -256,7 +263,6 @@ GPIO devm_gpiod_put() devm_gpiod_unhinge() devm_gpiochip_add_data() - devm_gpiochip_remove() devm_gpio_request() devm_gpio_request_one() devm_gpio_free() diff --git a/Documentation/early-userspace/README b/Documentation/early-userspace/README index 1e1057958dd3..955d667dc87e 100644 --- a/Documentation/early-userspace/README +++ b/Documentation/early-userspace/README @@ -52,7 +52,7 @@ user root (0). INITRAMFS_ROOT_GID can be set to a group ID that needs to be mapped to group root (0). A source file must consist of directives in the format required by the -usr/gen_init_cpio utility (run 'usr/gen_init_cpio --help' to get the +usr/gen_init_cpio utility (run 'usr/gen_init_cpio -h' to get the file format). The directives in the file will be passed directly to usr/gen_init_cpio.
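For illustration, a small source file using these directives might look like this; the paths, modes, and the "usr/my_init" location are only an example (run 'usr/gen_init_cpio -h' for the authoritative syntax):

	dir  /dev 0755 0 0
	nod  /dev/console 0600 0 0 c 5 1
	dir  /root 0700 0 0
	file /init usr/my_init 0755 0 0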
diff --git a/Documentation/features/core/jump-labels/arch-support.txt b/Documentation/features/core/jump-labels/arch-support.txt index 27cbd63abfd2..60111395f932 100644 --- a/Documentation/features/core/jump-labels/arch-support.txt +++ b/Documentation/features/core/jump-labels/arch-support.txt @@ -29,5 +29,5 @@ | um: | TODO | | unicore32: | TODO | | x86: | ok | - | xtensa: | TODO | + | xtensa: | ok | ----------------------- diff --git a/Documentation/features/io/sg-chain/arch-support.txt b/Documentation/features/io/sg-chain/arch-support.txt deleted file mode 100644 index 6554f0372c3f..000000000000 --- a/Documentation/features/io/sg-chain/arch-support.txt +++ /dev/null @@ -1,33 +0,0 @@ -# -# Feature name: sg-chain -# Kconfig: ARCH_HAS_SG_CHAIN -# description: arch supports chained scatter-gather lists -# - ----------------------- - | arch |status| - ----------------------- - | alpha: | TODO | - | arc: | ok | - | arm: | ok | - | arm64: | ok | - | c6x: | TODO | - | h8300: | TODO | - | hexagon: | TODO | - | ia64: | ok | - | m68k: | TODO | - | microblaze: | TODO | - | mips: | TODO | - | nds32: | TODO | - | nios2: | TODO | - | openrisc: | TODO | - | parisc: | TODO | - | powerpc: | ok | - | riscv: | TODO | - | s390: | ok | - | sh: | TODO | - | sparc: | ok | - | um: | TODO | - | unicore32: | TODO | - | x86: | ok | - | xtensa: | TODO | - ----------------------- diff --git a/Documentation/filesystems/caching/backend-api.txt b/Documentation/filesystems/caching/backend-api.txt index c0bd5677271b..c418280c915f 100644 --- a/Documentation/filesystems/caching/backend-api.txt +++ b/Documentation/filesystems/caching/backend-api.txt @@ -704,7 +704,7 @@ FS-Cache provides some utilities that a cache backend may make use of: void fscache_get_retrieval(struct fscache_retrieval *op); void fscache_put_retrieval(struct fscache_retrieval *op); - These two functions are used to retain a retrieval record whilst doing + These two functions are used to retain a retrieval record while doing asynchronous data retrieval and block allocation. diff --git a/Documentation/filesystems/caching/cachefiles.txt b/Documentation/filesystems/caching/cachefiles.txt index 748a1ae49e12..28aefcbb1442 100644 --- a/Documentation/filesystems/caching/cachefiles.txt +++ b/Documentation/filesystems/caching/cachefiles.txt @@ -45,7 +45,7 @@ filesystems are very specific in nature. CacheFiles creates a misc character device - "/dev/cachefiles" - that is used to communication with the daemon. Only one thing may have this open at once, -and whilst it is open, a cache is at least partially in existence. The daemon +and while it is open, a cache is at least partially in existence. The daemon opens this and sends commands down it to control the cache. CacheFiles is currently limited to a single cache. @@ -163,7 +163,7 @@ Do not mount other things within the cache as this will cause problems. The kernel module contains its own very cut-down path walking facility that ignores mountpoints, but the daemon can't avoid them. -Do not create, rename or unlink files and directories in the cache whilst the +Do not create, rename or unlink files and directories in the cache while the cache is active, as this may cause the state to become uncertain. 
Renaming files in the cache might make objects appear to be other objects (the diff --git a/Documentation/filesystems/caching/netfs-api.txt b/Documentation/filesystems/caching/netfs-api.txt index 2a6f7399c1f3..ba968e8f5704 100644 --- a/Documentation/filesystems/caching/netfs-api.txt +++ b/Documentation/filesystems/caching/netfs-api.txt @@ -382,7 +382,7 @@ MISCELLANEOUS OBJECT REGISTRATION An optional step is to request an object of miscellaneous type be created in the cache. This is almost identical to index cookie acquisition. The only difference is that the type in the object definition should be something other -than index type. Whilst the parent object could be an index, it's more likely +than index type. While the parent object could be an index, it's more likely it would be some other type of object such as a data file. xattr->cache = diff --git a/Documentation/filesystems/caching/operations.txt b/Documentation/filesystems/caching/operations.txt index a1c052cbba35..d8976c434718 100644 --- a/Documentation/filesystems/caching/operations.txt +++ b/Documentation/filesystems/caching/operations.txt @@ -171,7 +171,7 @@ Operations are used through the following procedure: (3) If the submitting thread wants to do the work itself, and has marked the operation with FSCACHE_OP_MYTHREAD, then it should monitor FSCACHE_OP_WAITING as described above and check the state of the object if - necessary (the object might have died whilst the thread was waiting). + necessary (the object might have died while the thread was waiting). When it has finished doing its processing, it should call fscache_op_complete() and fscache_put_operation() on it. diff --git a/Documentation/filesystems/configfs/configfs.txt b/Documentation/filesystems/configfs/configfs.txt index 3828e85345ae..16e606c11f40 100644 --- a/Documentation/filesystems/configfs/configfs.txt +++ b/Documentation/filesystems/configfs/configfs.txt @@ -216,7 +216,7 @@ be called whenever userspace asks for a write(2) on the attribute. [struct configfs_bin_attribute] - struct configfs_attribute { + struct configfs_bin_attribute { struct configfs_attribute cb_attr; void *cb_private; size_t cb_max_size; diff --git a/Documentation/filesystems/index.rst b/Documentation/filesystems/index.rst index 46d1b1be3a51..605befab300b 100644 --- a/Documentation/filesystems/index.rst +++ b/Documentation/filesystems/index.rst @@ -359,3 +359,24 @@ encryption of files and directories. :maxdepth: 2 fscrypt + +Pathname lookup +=============== + + +This write-up is based on three articles published at lwn.net: + +- Pathname lookup in Linux +- RCU-walk: faster pathname lookup in Linux +- A walk among the symlinks + +Written by Neil Brown with help from Al Viro and Jon Corbet. +It has subsequently been updated to reflect changes in the kernel +including: + +- per-directory parallel name lookup. + +.. toctree:: + :maxdepth: 2 + + path-lookup.rst diff --git a/Documentation/filesystems/path-lookup.md b/Documentation/filesystems/path-lookup.rst similarity index 56% rename from Documentation/filesystems/path-lookup.md rename to Documentation/filesystems/path-lookup.rst index e2edd45c4bc0..9d6b68853f5b 100644 --- a/Documentation/filesystems/path-lookup.md +++ b/Documentation/filesystems/path-lookup.rst @@ -1,20 +1,6 @@ - - - -Pathname lookup in Linux. 
-========================= - -This write-up is based on three articles published at lwn.net: - -- Pathname lookup in Linux -- RCU-walk: faster pathname lookup in Linux -- A walk among the symlinks - -Written by Neil Brown with help from Al Viro and Jon Corbet. - -Introduction ------------- +Introduction to pathname lookup +=============================== The most obvious aspect of pathname lookup, which very little exploration is needed to discover, is that it is complex. There are @@ -32,58 +18,58 @@ distinctions we need to clarify first. There are two sorts of ... -------------------------- -[`openat()`]: http://man7.org/linux/man-pages/man2/openat.2.html +.. _openat: http://man7.org/linux/man-pages/man2/openat.2.html Pathnames (sometimes "file names"), used to identify objects in the filesystem, will be familiar to most readers. They contain two sorts -of elements: "slashes" that are sequences of one or more "`/`" +of elements: "slashes" that are sequences of one or more "``/``" characters, and "components" that are sequences of one or more -non-"`/`" characters. These form two kinds of paths. Those that +non-"``/``" characters. These form two kinds of paths. Those that start with slashes are "absolute" and start from the filesystem root. The others are "relative" and start from the current directory, or from some other location specified by a file descriptor given to a -"xxx`at`" system call such as "[`openat()`]". +"``XXXat``" system call such as `openat() `_. -[`execveat()`]: http://man7.org/linux/man-pages/man2/execveat.2.html +.. _execveat: http://man7.org/linux/man-pages/man2/execveat.2.html It is tempting to describe the second kind as starting with a component, but that isn't always accurate: a pathname can lack both slashes and components, it can be empty, in other words. This is -generally forbidden in POSIX, but some of those "xxx`at`" system calls -in Linux permit it when the `AT_EMPTY_PATH` flag is given. For +generally forbidden in POSIX, but some of those "xxx``at``" system calls +in Linux permit it when the ``AT_EMPTY_PATH`` flag is given. For example, if you have an open file descriptor on an executable file you -can execute it by calling [`execveat()`] passing the file descriptor, -an empty path, and the `AT_EMPTY_PATH` flag. +can execute it by calling `execveat() `_ passing +the file descriptor, an empty path, and the ``AT_EMPTY_PATH`` flag. These paths can be divided into two sections: the final component and everything else. The "everything else" is the easy bit. In all cases it must identify a directory that already exists, otherwise an error -such as `ENOENT` or `ENOTDIR` will be reported. +such as ``ENOENT`` or ``ENOTDIR`` will be reported. The final component is not so simple. Not only do different system calls interpret it quite differently (e.g. some create it, some do not), but it might not even exist: neither the empty pathname nor the pathname that is just slashes have a final component. If it does -exist, it could be "`.`" or "`..`" which are handled quite differently +exist, it could be "``.``" or "``..``" which are handled quite differently from other components. -[POSIX]: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_12 +.. _POSIX: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_12 -If a pathname ends with a slash, such as "`/tmp/foo/`" it might be +If a pathname ends with a slash, such as "``/tmp/foo/``" it might be tempting to consider that to have an empty final component. 
In many ways that would lead to correct results, but not always. In -particular, `mkdir()` and `rmdir()` each create or remove a directory named +particular, ``mkdir()`` and ``rmdir()`` each create or remove a directory named by the final component, and they are required to work with pathnames -ending in "`/`". According to [POSIX] +ending in "``/``". According to POSIX_ -> A pathname that contains at least one non- <slash> character and -> that ends with one or more trailing <slash> characters shall not -> be resolved successfully unless the last pathname component before -> the trailing characters names an existing directory or a -> directory entry that is to be created for a directory immediately -> after the pathname is resolved. + A pathname that contains at least one non- <slash> character and + that ends with one or more trailing <slash> characters shall not + be resolved successfully unless the last pathname component before + the trailing characters names an existing directory or a + directory entry that is to be created for a directory immediately + after the pathname is resolved. -The Linux pathname walking code (mostly in `fs/namei.c`) deals with +The Linux pathname walking code (mostly in ``fs/namei.c``) deals with all of these issues: breaking the path into components, handling the "everything else" quite separately from the final component, and checking that the trailing slash is not used where it isn't @@ -100,15 +86,15 @@ of the possible races are seen most clearly in the context of the "dcache" and an understanding of that is central to understanding pathname lookup. -More than just a cache. ------------------------ +More than just a cache +---------------------- The "dcache" caches information about names in each filesystem to make them quickly available for lookup. Each entry (known as a "dentry") contains three significant fields: a component name, a pointer to a parent dentry, and a pointer to the "inode" which contains further information about the object in that parent with -the given name. The inode pointer can be `NULL` indicating that the +the given name. The inode pointer can be ``NULL`` indicating that the name doesn't exist in the parent. While there can be linkage in the dentry of a directory to the dentries of the children, that linkage is not used for pathname lookup, and so will not be considered here. @@ -135,7 +121,7 @@ whether remote filesystems like NFS and 9P, or cluster filesystems like ocfs2 or cephfs. These filesystems allow the VFS to revalidate cached information, and must provide their own protection against awkward races. The VFS can detect these filesystems by the -`DCACHE_OP_REVALIDATE` flag being set in the dentry. +``DCACHE_OP_REVALIDATE`` flag being set in the dentry. REF-walk: simple concurrency management with refcounts and spinlocks -------------------------------------------------------------------- @@ -144,22 +130,23 @@ With all of those divisions carefully classified, we can now start looking at the actual process of walking along a path. In particular we will start with the handling of the "everything else" part of a pathname, and focus on the "REF-walk" approach to concurrency -management. This code is found in the `link_path_walk()` function, if -you ignore all the places that only run when "`LOOKUP_RCU`" +management. This code is found in the ``link_path_walk()`` function, if +you ignore all the places that only run when "``LOOKUP_RCU``" (indicating the use of RCU-walk) is set. 
-[Meet the Lockers]: https://lwn.net/Articles/453685/ +.. _Meet the Lockers: https://lwn.net/Articles/453685/ REF-walk is fairly heavy-handed with locks and reference counts. Not as heavy-handed as in the old "big kernel lock" days, but certainly not afraid of taking a lock when one is needed. It uses a variety of different concurrency controls. A background understanding of the various primitives is assumed, or can be gleaned from elsewhere such -as in [Meet the Lockers]. +as in `Meet the Lockers`_. The locking mechanisms used by REF-walk include: -### dentry->d_lockref ### +dentry->d_lockref +~~~~~~~~~~~~~~~~~ This uses the lockref primitive to provide both a spinlock and a reference count. The special-sauce of this primitive is that the @@ -168,49 +155,51 @@ with a single atomic memory operation. Holding a reference on a dentry ensures that the dentry won't suddenly be freed and used for something else, so the values in various fields -will behave as expected. It also protects the `->d_inode` reference +will behave as expected. It also protects the ``->d_inode`` reference to the inode to some extent. The association between a dentry and its inode is fairly permanent. For example, when a file is renamed, the dentry and inode move together to the new location. When a file is created the dentry will -initially be negative (i.e. `d_inode` is `NULL`), and will be assigned +initially be negative (i.e. ``d_inode`` is ``NULL``), and will be assigned to the new inode as part of the act of creation. When a file is deleted, this can be reflected in the cache either by -setting `d_inode` to `NULL`, or by removing it from the hash table +setting ``d_inode`` to ``NULL``, or by removing it from the hash table (described shortly) used to look up the name in the parent directory. If the dentry is still in use the second option is used as it is perfectly legal to keep using an open file after it has been deleted and having the dentry around helps. If the dentry is not otherwise in -use (i.e. if the refcount in `d_lockref` is one), only then will -`d_inode` be set to `NULL`. Doing it this way is more efficient for a +use (i.e. if the refcount in ``d_lockref`` is one), only then will +``d_inode`` be set to ``NULL``. Doing it this way is more efficient for a very common case. -So as long as a counted reference is held to a dentry, a non-`NULL` `->d_inode` +So as long as a counted reference is held to a dentry, a non-``NULL`` ``->d_inode`` value will never be changed. -### dentry->d_lock ### +dentry->d_lock +~~~~~~~~~~~~~~ -`d_lock` is a synonym for the spinlock that is part of `d_lockref` above. +``d_lock`` is a synonym for the spinlock that is part of ``d_lockref`` above. For our purposes, holding this lock protects against the dentry being -renamed or unlinked. In particular, its parent (`d_parent`), and its -name (`d_name`) cannot be changed, and it cannot be removed from the +renamed or unlinked. In particular, its parent (``d_parent``), and its +name (``d_name``) cannot be changed, and it cannot be removed from the dentry hash table. -When looking for a name in a directory, REF-walk takes `d_lock` on +When looking for a name in a directory, REF-walk takes ``d_lock`` on each candidate dentry that it finds in the hash table and then checks that the parent and name are correct. So it doesn't lock the parent while searching in the cache; it only locks children. 
-When looking for the parent for a given name (to handle "``..``"), -REF-walk can take `d_lock` to get a stable reference to `d_parent`, +When looking for the parent for a given name (to handle "``..``"), +REF-walk can take ``d_lock`` to get a stable reference to ``d_parent``, but it first tries a more lightweight approach. As seen in -`dget_parent()`, if a reference can be claimed on the parent, and if -subsequently `d_parent` can be seen to have not changed, then there is +``dget_parent()``, if a reference can be claimed on the parent, and if +subsequently ``d_parent`` can be seen to have not changed, then there is no need to actually take the lock on the child. -### rename_lock ### +rename_lock +~~~~~~~~~~~ Looking up a given name in a given directory involves computing a hash from the two values (the name and the dentry of the directory), @@ -224,71 +213,117 @@ happened to be looking at a dentry that was moved in this way, it might end up continuing the search down the wrong chain, and so miss out on part of the correct chain. -The name-lookup process (`d_lookup()`) does _not_ try to prevent this +The name-lookup process (``d_lookup()``) does _not_ try to prevent this from happening, but only to detect when it happens. -`rename_lock` is a seqlock that is updated whenever any dentry is -renamed. If `d_lookup` finds that a rename happened while it +``rename_lock`` is a seqlock that is updated whenever any dentry is +renamed. If ``d_lookup`` finds that a rename happened while it unsuccessfully scanned a chain in the hash table, it simply tries again. -### inode->i_mutex ### +inode->i_rwsem +~~~~~~~~~~~~~~ -`i_mutex` is a mutex that serializes all changes to a particular -directory. This ensures that, for example, an `unlink()` and a `rename()` +``i_rwsem`` is a read/write semaphore that serializes all changes to a particular +directory. This ensures that, for example, an ``unlink()`` and a ``rename()`` cannot both happen at the same time. It also keeps the directory stable while the filesystem is asked to look up a name that is not -currently in the dcache. +currently in the dcache or, optionally, when the list of entries in a +directory is being retrieved with ``readdir()``. -This has a complementary role to that of `d_lock`: `i_mutex` on a -directory protects all of the names in that directory, while `d_lock` +This has a complementary role to that of ``d_lock``: ``i_rwsem`` on a +directory protects all of the names in that directory, while ``d_lock`` on a name protects just one name in a directory. Most changes to the -dcache hold `i_mutex` on the relevant directory inode and briefly take -`d_lock` on one or more the dentries while the change happens. One +dcache hold ``i_rwsem`` on the relevant directory inode and briefly take +``d_lock`` on one or more of the dentries while the change happens. One exception is when idle dentries are removed from the dcache due to -memory pressure. This uses `d_lock`, but `i_mutex` plays no role. +memory pressure. This uses ``d_lock``, but ``i_rwsem`` plays no role. -The mutex affects pathname lookup in two distinct ways. Firstly it -serializes lookup of a name in a directory. `walk_component()` uses -`lookup_fast()` first which, in turn, checks to see if the name is in the cache, -using only `d_lock` locking. If the name isn't found, then `walk_component()` -falls back to `lookup_slow()` which takes `i_mutex`, checks again that +The semaphore affects pathname lookup in two distinct ways. Firstly it +prevents changes during lookup of a name in a directory. ``walk_component()`` uses +``lookup_fast()`` first which, in turn, checks to see if the name is in the cache, +using only ``d_lock`` locking. If the name isn't found, then ``walk_component()`` +falls back to ``lookup_slow()`` which takes a shared lock on ``i_rwsem``, checks again that the name isn't in the cache, and then calls in to the filesystem to get a definitive answer. A new dentry will be added to the cache regardless of the result. Secondly, when pathname lookup reaches the final component, it will -sometimes need to take `i_mutex` before performing the last lookup so +sometimes need to take an exclusive lock on ``i_rwsem`` before performing the last lookup so that the required exclusion can be achieved. How path lookup chooses -to take, or not take, `i_mutex` is one of the +to take, or not take, ``i_rwsem`` is one of the issues addressed in a subsequent section. -### mnt->mnt_count ### - -`mnt_count` is a per-CPU reference counter on "`mount`" structures. +If two threads attempt to look up the same name at the same time - a +name that is not yet in the dcache - the shared lock on ``i_rwsem`` will +not prevent them both adding new dentries with the same name. As this +would result in confusion, an extra level of interlocking is used, +based around a secondary hash table (``in_lookup_hashtable``) and a +per-dentry flag bit (``DCACHE_PAR_LOOKUP``). + +To add a new dentry to the cache while only holding a shared lock on +``i_rwsem``, a thread must call ``d_alloc_parallel()``. This allocates a +dentry, stores the required name and parent in it, checks if there +is already a matching dentry in the primary or secondary hash +tables, and if not, stores the newly allocated dentry in the secondary +hash table, with ``DCACHE_PAR_LOOKUP`` set. + +If a matching dentry was found in the primary hash table then that is +returned and the caller can know that it lost a race with some other +thread adding the entry. If no matching dentry is found in either +cache, the newly allocated dentry is returned and the caller can +detect this from the presence of ``DCACHE_PAR_LOOKUP``. In this case it +knows that it has won any race and now is responsible for asking the +filesystem to perform the lookup and find the matching inode. When +the lookup is complete, it must call ``d_lookup_done()`` which clears +the flag and does some other housekeeping, including removing the +dentry from the secondary hash table - it will normally have been +added to the primary hash table already. Note that a ``struct +waitqueue_head`` is passed to ``d_alloc_parallel()``, and +``d_lookup_done()`` must be called while this ``waitqueue_head`` is still +in scope. + +If a matching dentry is found in the secondary hash table, +``d_alloc_parallel()`` has a little more work to do. It first waits for +``DCACHE_PAR_LOOKUP`` to be cleared, using a wait_queue that was passed +to the instance of ``d_alloc_parallel()`` that won the race and that +will be woken by the call to ``d_lookup_done()``. It then checks to see +if the dentry has now been added to the primary hash table. If it +has, the dentry is returned and the caller just sees that it lost any +race. If it hasn't been added to the primary hash table, the most +likely explanation is that some other dentry was added instead using +``d_splice_alias()``. In any case, ``d_alloc_parallel()`` repeats all the +lookups from the start and will normally return something from the +primary hash table.
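This protocol can be sketched in code. The following is a simplified sketch modelled on ``lookup_slow()``; the function name and error handling are invented for illustration, and several details are omitted::

    static struct dentry *example_lookup(struct inode *dir,
                                         struct dentry *parent,
                                         const struct qstr *name,
                                         unsigned int flags)
    {
    	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
    	struct dentry *dentry, *old;

    	dentry = d_alloc_parallel(parent, name, &wq);
    	if (IS_ERR(dentry) || !d_in_lookup(dentry))
    		return dentry;	/* error, or we lost the race */

    	/* we won the race: ask the filesystem for the inode */
    	old = dir->i_op->lookup(dir, dentry, flags);
    	d_lookup_done(dentry);	/* clear DCACHE_PAR_LOOKUP, wake waiters */
    	if (old) {
    		/* the filesystem spliced in a different dentry */
    		dput(dentry);
    		dentry = old;
    	}
    	return dentry;
    }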
+ +mnt->mnt_count +~~~~~~~~~~~~~~ + +``mnt_count`` is a per-CPU reference counter on "``mount``" structures. Per-CPU here means that incrementing the count is cheap as it only uses CPU-local memory, but checking if the count is zero is expensive as -it needs to check with every CPU. Taking a `mnt_count` reference +it needs to check with every CPU. Taking a ``mnt_count`` reference prevents the mount structure from disappearing as the result of regular unmount operations, but does not prevent a "lazy" unmount. So holding -`mnt_count` doesn't ensure that the mount remains in the namespace and, +``mnt_count`` doesn't ensure that the mount remains in the namespace and, in particular, doesn't stabilize the link to the mounted-on dentry. It -does, however, ensure that the `mount` data structure remains coherent, +does, however, ensure that the ``mount`` data structure remains coherent, and it provides a reference to the root dentry of the mounted -filesystem. So a reference through `->mnt_count` provides a stable +filesystem. So a reference through ``->mnt_count`` provides a stable reference to the mounted dentry, but not the mounted-on dentry. -### mount_lock ### +mount_lock +~~~~~~~~~~ -`mount_lock` is a global seqlock, a bit like `rename_lock`. It can be used to +``mount_lock`` is a global seqlock, a bit like ``rename_lock``. It can be used to check if any change has been made to any mount points. While walking down the tree (away from the root) this lock is used when crossing a mount point to check that the crossing was safe. That is, the value in the seqlock is read, then the code finds the mount that is mounted on the current directory, if there is one, and increments -the `mnt_count`. Finally the value in `mount_lock` is checked against +the ``mnt_count``. Finally the value in ``mount_lock`` is checked against the old value. If there is no change, then the crossing was safe. If there -was a change, the `mnt_count` is decremented and the whole process is +was a change, the ``mnt_count`` is decremented and the whole process is retried. When walking up the tree (towards the root) by following a ".." link, @@ -298,7 +333,8 @@ any changes to any mount points while stepping up. This locking is needed to stabilize the link to the mounted-on dentry, which the refcount on the mount itself doesn't ensure. -### RCU ### +RCU +~~~ Finally the global (but extremely lightweight) RCU read lock is held from time to time to ensure certain data structures don't get freed @@ -307,137 +343,141 @@ unexpectedly. In particular it is held while scanning chains in the dcache hash table, and the mount point hash table. -Bringing it together with `struct nameidata` +Bringing it together with ``struct nameidata`` -------------------------------------------- -[First edition Unix]: http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/u2.s +.. _First edition Unix: http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/u2.s Throughout the process of walking a path, the current status is stored -in a `struct nameidata`, "namei" being the traditional name - dating -all the way back to [First Edition Unix] - of the function that -converts a "name" to an "inode". `struct nameidata` contains (among +in a ``struct nameidata``, "namei" being the traditional name - dating +all the way back to `First Edition Unix`_ - of the function that +converts a "name" to an "inode". 
``struct nameidata`` contains (among other fields): -### `struct path path` ### +``struct path path`` +~~~~~~~~~~~~~~~~~~ -A `path` contains a `struct vfsmount` (which is -embedded in a `struct mount`) and a `struct dentry`. Together these +A ``path`` contains a ``struct vfsmount`` (which is +embedded in a ``struct mount``) and a ``struct dentry``. Together these record the current status of the walk. They start out referring to the starting point (the current working directory, the root directory, or some other directory identified by a file descriptor), and are updated on each -step. A reference through `d_lockref` and `mnt_count` is always +step. A reference through ``d_lockref`` and ``mnt_count`` is always held. -### `struct qstr last` ### +``struct qstr last`` +~~~~~~~~~~~~~~~~~~ -This is a string together with a length (i.e. _not_ `nul` terminated) +This is a string together with a length (i.e. _not_ ``nul`` terminated) that is the "next" component in the pathname. -### `int last_type` ### +``int last_type`` +~~~~~~~~~~~~~~~ -This is one of `LAST_NORM`, `LAST_ROOT`, `LAST_DOT`, `LAST_DOTDOT`, or -`LAST_BIND`. The `last` field is only valid if the type is -`LAST_NORM`. `LAST_BIND` is used when following a symlink and no +This is one of ``LAST_NORM``, ``LAST_ROOT``, ``LAST_DOT``, ``LAST_DOTDOT``, or +``LAST_BIND``. The ``last`` field is only valid if the type is +``LAST_NORM``. ``LAST_BIND`` is used when following a symlink and no components of the symlink have been processed yet. Others should be fairly self-explanatory. -### `struct path root` ### +``struct path root`` +~~~~~~~~~~~~~~~~~~ This is used to hold a reference to the effective root of the filesystem. Often that reference won't be needed, so this field is only assigned the first time it is used, or when a non-standard root -is requested. Keeping a reference in the `nameidata` ensures that +is requested. Keeping a reference in the ``nameidata`` ensures that only one root is in effect for the entire path walk, even if it races -with a `chroot()` system call. +with a ``chroot()`` system call. The root is needed when either of two conditions holds: (1) either the -pathname or a symbolic link starts with a "'/'", or (2) a "`..`" -component is being handled, since "`..`" from the root must always stay +pathname or a symbolic link starts with a "'/'", or (2) a "``..``" +component is being handled, since "``..``" from the root must always stay at the root. The value used is usually the current root directory of the calling process. An alternate root can be provided as when -`sysctl()` calls `file_open_root()`, and when NFSv4 or Btrfs call -`mount_subtree()`. In each case a pathname is being looked up in a very +``sysctl()`` calls ``file_open_root()``, and when NFSv4 or Btrfs call +``mount_subtree()``. In each case a pathname is being looked up in a very specific part of the filesystem, and the lookup must not be allowed to -escape that subtree. It works a bit like a local `chroot()`. +escape that subtree. It works a bit like a local ``chroot()``. Ignoring the handling of symbolic links, we can now describe the -"`link_path_walk()`" function, which handles the lookup of everything +"``link_path_walk()``" function, which handles the lookup of everything except the final component as: -> Given a path (`name`) and a nameidata structure (`nd`), check that the -> current directory has execute permission and then advance `name` -> over one component while updating `last_type` and `last`. 
If that -> was the final component, then return, otherwise call -> `walk_component()` and repeat from the top. + Given a path (``name``) and a nameidata structure (``nd``), check that the + current directory has execute permission and then advance ``name`` + over one component while updating ``last_type`` and ``last``. If that + was the final component, then return, otherwise call + ``walk_component()`` and repeat from the top. -`walk_component()` is even easier. If the component is `LAST_DOTS`, -it calls `handle_dots()` which does the necessary locking as already -described. If it finds a `LAST_NORM` component it first calls -"`lookup_fast()`" which only looks in the dcache, but will ask the +``walk_component()`` is even easier. If the component is ``LAST_DOTS``, +it calls ``handle_dots()`` which does the necessary locking as already +described. If it finds a ``LAST_NORM`` component it first calls +"``lookup_fast()``" which only looks in the dcache, but will ask the filesystem to revalidate the result if it is that sort of filesystem. -If that doesn't get a good result, it calls "`lookup_slow()`" which -takes the `i_mutex`, rechecks the cache, and then asks the filesystem +If that doesn't get a good result, it calls "``lookup_slow()``" which +takes ``i_rwsem``, rechecks the cache, and then asks the filesystem to find a definitive answer. Each of these will call -`follow_managed()` (as described below) to handle any mount points. +``follow_managed()`` (as described below) to handle any mount points. -In the absence of symbolic links, `walk_component()` creates a new -`struct path` containing a counted reference to the new dentry and a -reference to the new `vfsmount` which is only counted if it is -different from the previous `vfsmount`. It then calls -`path_to_nameidata()` to install the new `struct path` in the -`struct nameidata` and drop the unneeded references. +In the absence of symbolic links, ``walk_component()`` creates a new +``struct path`` containing a counted reference to the new dentry and a +reference to the new ``vfsmount`` which is only counted if it is +different from the previous ``vfsmount``. It then calls +``path_to_nameidata()`` to install the new ``struct path`` in the +``struct nameidata`` and drop the unneeded references. This "hand-over-hand" sequencing of getting a reference to the new dentry before dropping the reference to the previous dentry may seem obvious, but is worth pointing out so that we will recognize its analogue in the "RCU-walk" version. -Handling the final component. ------------------------------ +Handling the final component +---------------------------- -`link_path_walk()` only walks as far as setting `nd->last` and -`nd->last_type` to refer to the final component of the path. It does -not call `walk_component()` that last time. Handling that final +``link_path_walk()`` only walks as far as setting ``nd->last`` and +``nd->last_type`` to refer to the final component of the path. It does +not call ``walk_component()`` that last time. Handling that final component remains for the caller to sort out. Those callers are -`path_lookupat()`, `path_parentat()`, `path_mountpoint()` and -`path_openat()` each of which handles the differing requirements of +``path_lookupat()``, ``path_parentat()``, ``path_mountpoint()`` and +``path_openat()`` each of which handles the differing requirements of different system calls. 
-`path_parentat()` is clearly the simplest - it just wraps a little bit -of housekeeping around `link_path_walk()` and returns the parent +``path_parentat()`` is clearly the simplest - it just wraps a little bit +of housekeeping around ``link_path_walk()`` and returns the parent directory and final component to the caller. The caller will be either -aiming to create a name (via `filename_create()`) or remove or rename -a name (in which case `user_path_parent()` is used). They will use -`i_mutex` to exclude other changes while they validate and then +aiming to create a name (via ``filename_create()``) or remove or rename +a name (in which case ``user_path_parent()`` is used). They will use +``i_rwsem`` to exclude other changes while they validate and then perform their operation. -`path_lookupat()` is nearly as simple - it is used when an existing -object is wanted such as by `stat()` or `chmod()`. It essentially just -calls `walk_component()` on the final component through a call to -`lookup_last()`. `path_lookupat()` returns just the final dentry. +``path_lookupat()`` is nearly as simple - it is used when an existing +object is wanted such as by ``stat()`` or ``chmod()``. It essentially just +calls ``walk_component()`` on the final component through a call to +``lookup_last()``. ``path_lookupat()`` returns just the final dentry. -`path_mountpoint()` handles the special case of unmounting which must +``path_mountpoint()`` handles the special case of unmounting which must not try to revalidate the mounted filesystem. It effectively -contains, through a call to `mountpoint_last()`, an alternate -implementation of `lookup_slow()` which skips that step. This is +contains, through a call to ``mountpoint_last()``, an alternate +implementation of ``lookup_slow()`` which skips that step. This is important when unmounting a filesystem that is inaccessible, such as one provided by a dead NFS server. -Finally `path_openat()` is used for the `open()` system call; it -contains, in support functions starting with "`do_last()`", all the +Finally ``path_openat()`` is used for the ``open()`` system call; it +contains, in support functions starting with "``do_last()``", all the complexity needed to handle the different subtleties of O_CREAT (with -or without O_EXCL), final "`/`" characters, and trailing symbolic +or without O_EXCL), final "``/``" characters, and trailing symbolic links. We will revisit this in the final part of this series, which -focuses on those symbolic links. "`do_last()`" will sometimes, but -not always, take `i_mutex`, depending on what it finds. +focuses on those symbolic links. "``do_last()``" will sometimes, but +not always, take ``i_rwsem``, depending on what it finds. Each of these, or the functions which call them, need to be alert to -the possibility that the final component is not `LAST_NORM`. If the +the possibility that the final component is not ``LAST_NORM``. If the goal of the lookup is to create something, then any value for -`last_type` other than `LAST_NORM` will result in an error. For -example if `path_parentat()` reports `LAST_DOTDOT`, then the caller +``last_type`` other than ``LAST_NORM`` will result in an error. For +example if ``path_parentat()`` reports ``LAST_DOTDOT``, then the caller won't try to create that name. They also check for trailing slashes -by testing `last.name[last.len]`. If there is any character beyond +by testing ``last.name[last.len]``. If there is any character beyond the final component, it must be a trailing slash. 
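In sketch form, echoing the checks made by ``filename_create()`` (the surrounding code and the ``want_dir`` flag are invented for illustration)::

    if (nd->last_type != LAST_NORM)		/* ".", "..", or no component */
    	return ERR_PTR(-EEXIST);
    if (nd->last.name[nd->last.len] && !want_dir)
    	return ERR_PTR(-EISDIR);	/* trailing '/', but not making a directory */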
Revalidation and automounts @@ -448,12 +488,12 @@ process not yet covered. One is the handling of stale cache entries and the other is automounts. On filesystems that require it, the lookup routines will call the -`->d_revalidate()` dentry method to ensure that the cached information +``->d_revalidate()`` dentry method to ensure that the cached information is current. This will often confirm validity or update a few details from a server. In some cases it may find that there has been change further up the path and that something that was thought to be valid previously isn't really. When this happens the lookup of the whole -path is aborted and retried with the "`LOOKUP_REVAL`" flag set. This +path is aborted and retried with the "``LOOKUP_REVAL``" flag set. This forces revalidation to be more thorough. We will see more details of this retry process in the next article. @@ -465,52 +505,55 @@ tree, but a few notes specifically related to path lookup are in order here. The Linux VFS has a concept of "managed" dentries which is reflected -in function names such as "`follow_managed()`". There are three +in function names such as "``follow_managed()``". There are three potentially interesting things about these dentries corresponding -to three different flags that might be set in `dentry->d_flags`: +to three different flags that might be set in ``dentry->d_flags``: -### `DCACHE_MANAGE_TRANSIT` ### +``DCACHE_MANAGE_TRANSIT`` +~~~~~~~~~~~~~~~~~~~~~~~ If this flag has been set, then the filesystem has requested that the -`d_manage()` dentry operation be called before handling any possible +``d_manage()`` dentry operation be called before handling any possible mount point. This can perform two particular services: It can block to avoid races. If an automount point is being -unmounted, the `d_manage()` function will usually wait for that +unmounted, the ``d_manage()`` function will usually wait for that process to complete before letting the new lookup proceed and possibly trigger a new automount. It can selectively allow only some processes to transit through a mount point. When a server process is managing automounts, it may need to access a directory without triggering normal automount -processing. That server process can identify itself to the `autofs` +processing. That server process can identify itself to the ``autofs`` filesystem, which will then give it a special pass through -`d_manage()` by returning `-EISDIR`. +``d_manage()`` by returning ``-EISDIR``. -### `DCACHE_MOUNTED` ### +``DCACHE_MOUNTED`` +~~~~~~~~~~~~~~~~ This flag is set on every dentry that is mounted on. As Linux supports multiple filesystem namespaces, it is possible that the dentry may not be mounted on in *this* namespace, just in some other. So this flag is seen as a hint, not a promise. -If this flag is set, and `d_manage()` didn't return `-EISDIR`, -`lookup_mnt()` is called to examine the mount hash table (honoring the -`mount_lock` described earlier) and possibly return a new `vfsmount` -and a new `dentry` (both with counted references). +If this flag is set, and ``d_manage()`` didn't return ``-EISDIR``, +``lookup_mnt()`` is called to examine the mount hash table (honoring the +``mount_lock`` described earlier) and possibly return a new ``vfsmount`` +and a new ``dentry`` (both with counted references). 
-### `DCACHE_NEED_AUTOMOUNT` ### +``DCACHE_NEED_AUTOMOUNT`` +~~~~~~~~~~~~~~~~~~~~~~~ -If `d_manage()` allowed us to get this far, and `lookup_mnt()` didn't -find a mount point, then this flag causes the `d_automount()` dentry +If ``d_manage()`` allowed us to get this far, and ``lookup_mnt()`` didn't +find a mount point, then this flag causes the ``d_automount()`` dentry operation to be called. -The `d_automount()` operation can be arbitrarily complex and may +The ``d_automount()`` operation can be arbitrarily complex and may communicate with server processes etc. but it should ultimately either report that there was an error, that there was nothing to mount, or -should provide an updated `struct path` with new `dentry` and `vfsmount`. +should provide an updated ``struct path`` with new ``dentry`` and ``vfsmount``. -In the latter case, `finish_automount()` will be called to safely +In the latter case, ``finish_automount()`` will be called to safely install the new mount point into the mount table. There is no new locking of import here and it is important that no @@ -567,7 +610,7 @@ isn't in the cache, then it tries to stop gracefully and switch to REF-walk. This stopping requires getting a counted reference on the current -`vfsmount` and `dentry`, and ensuring that these are still valid - +``vfsmount`` and ``dentry``, and ensuring that these are still valid - that a path walk with REF-walk would have found the same entries. This is an invariant that RCU-walk must guarantee. It can only make decisions, such as selecting the next step, that are decisions which @@ -578,21 +621,21 @@ RCU-walk finds it cannot stop gracefully, it simply gives up and restarts from the top with REF-walk. This pattern of "try RCU-walk, if that fails try REF-walk" can be -clearly seen in functions like `filename_lookup()`, -`filename_parentat()`, `filename_mountpoint()`, -`do_filp_open()`, and `do_file_open_root()`. These five -correspond roughly to the four `path_`* functions we met earlier, -each of which calls `link_path_walk()`. The `path_*` functions are +clearly seen in functions like ``filename_lookup()``, +``filename_parentat()``, ``filename_mountpoint()``, +``do_filp_open()``, and ``do_file_open_root()``. These five +correspond roughly to the four ``path_``* functions we met earlier, +each of which calls ``link_path_walk()``. The ``path_*`` functions are called using different mode flags until a mode is found which works. -They are first called with `LOOKUP_RCU` set to request "RCU-walk". If -that fails with the error `ECHILD` they are called again with no +They are first called with ``LOOKUP_RCU`` set to request "RCU-walk". If +that fails with the error ``ECHILD`` they are called again with no special flag to request "REF-walk". If either of those report the -error `ESTALE` a final attempt is made with `LOOKUP_REVAL` set (and no -`LOOKUP_RCU`) to ensure that entries found in the cache are forcibly +error ``ESTALE`` a final attempt is made with ``LOOKUP_REVAL`` set (and no +``LOOKUP_RCU``) to ensure that entries found in the cache are forcibly revalidated - normally entries are only revalidated if the filesystem determines that they are too old to trust. -The `LOOKUP_RCU` attempt may drop that flag internally and switch to +The ``LOOKUP_RCU`` attempt may drop that flag internally and switch to REF-walk, but will never then try to switch back to RCU-walk. 
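The resulting cascade is compact; in simplified sketch form (compare ``filename_lookup()``)::

    retval = path_lookupat(&nd, flags | LOOKUP_RCU, path);
    if (retval == -ECHILD)
    	retval = path_lookupat(&nd, flags, path);
    if (retval == -ESTALE)
    	retval = path_lookupat(&nd, flags | LOOKUP_REVAL, path);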
Places that trip up RCU-walk are much more likely to be near the
leaves and so it is very unlikely that there will be much, if any,
benefit from
@@ -602,7 +645,7 @@ RCU and seqlocks: fast and light
--------------------------------

RCU is, unsurprisingly, critical to RCU-walk mode. The
-`rcu_read_lock()` is held for the entire time that RCU-walk is walking
+``rcu_read_lock()`` is held for the entire time that RCU-walk is walking
down a path. The particular guarantee it provides is that the key
data structures - dentries, inodes, super_blocks, and mounts - will not
be freed while the lock is held. They might be unlinked or
@@ -614,7 +657,7 @@ seqlocks.

As we saw above, REF-walk holds a counted reference to the current
dentry and the current vfsmount, and does not release those references
before taking references to the "next" dentry or vfsmount. It also
-sometimes takes the `d_lock` spinlock. These references and locks are
+sometimes takes the ``d_lock`` spinlock. These references and locks are
taken to prevent certain changes from happening. RCU-walk must not
take those references or locks and so cannot prevent such changes.
Instead, it checks to see if a change has been made, and aborts or
@@ -624,123 +667,126 @@ To preserve the invariant mentioned above (that RCU-walk may only make
decisions that REF-walk could have made), it must make the checks at
or near the same places that REF-walk holds the references. So, when
REF-walk increments a reference count or takes a spinlock, RCU-walk
-samples the status of a seqlock using `read_seqcount_begin()` or a
+samples the status of a seqlock using ``read_seqcount_begin()`` or a
similar function. When REF-walk decrements the count or drops the
lock, RCU-walk checks if the sampled status is still valid using
-`read_seqcount_retry()` or similar.
+``read_seqcount_retry()`` or similar.

However, there is a little bit more to seqlocks than that. If
RCU-walk accesses two different fields in a seqlock-protected
structure, or accesses the same field twice, there is no a priori
guarantee of any consistency between those accesses. When consistency
is needed - which it usually is - RCU-walk must take a copy and then
-use `read_seqcount_retry()` to validate that copy.
+use ``read_seqcount_retry()`` to validate that copy.

-`read_seqcount_retry()` not only checks the sequence number, but also
+``read_seqcount_retry()`` not only checks the sequence number, but also
imposes a memory barrier so that no memory-read instruction from
*before* the call can be delayed until *after* the call, either by the
CPU or by the compiler. A simple example of this can be seen in
-`slow_dentry_cmp()` which, for filesystems which do not use simple
+``slow_dentry_cmp()`` which, for filesystems which do not use simple
byte-wise name equality, calls into the filesystem to compare a name
against a dentry. The length and name pointer are copied into local
-variables, then `read_seqcount_retry()` is called to confirm the two
-are consistent, and only then is `->d_compare()` called. When
-standard filename comparison is used, `dentry_cmp()` is called
-instead. Notably it does _not_ use `read_seqcount_retry()`, but
+variables, then ``read_seqcount_retry()`` is called to confirm the two
+are consistent, and only then is ``->d_compare()`` called. When
+standard filename comparison is used, ``dentry_cmp()`` is called
+instead. Notably it does *not* use ``read_seqcount_retry()``, but
instead has a large comment explaining why the consistency guarantee
-isn't necessary.
A subsequent ``read_seqcount_retry()`` will be
sufficient to catch any problem that could occur at this point.

With that little refresher on seqlocks out of the way we can look at
the bigger picture of how RCU-walk uses seqlocks.

-### `mount_lock` and `nd->m_seq` ###
+``mount_lock`` and ``nd->m_seq``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-We already met the `mount_lock` seqlock when REF-walk used it to
+We already met the ``mount_lock`` seqlock when REF-walk used it to
ensure that crossing a mount point is performed safely. RCU-walk uses
it for that too, but for quite a bit more.

-Instead of taking a counted reference to each `vfsmount` as it
-descends the tree, RCU-walk samples the state of `mount_lock` at the
+Instead of taking a counted reference to each ``vfsmount`` as it
+descends the tree, RCU-walk samples the state of ``mount_lock`` at the
start of the walk and stores this initial sequence number in the
-`struct nameidata` in the `m_seq` field. This one lock and one
-sequence number are used to validate all accesses to all `vfsmounts`,
+``struct nameidata`` in the ``m_seq`` field. This one lock and one
+sequence number are used to validate all accesses to all ``vfsmounts``,
and all mount point crossings. As changes to the mount table are
relatively rare, it is reasonable to fall back on REF-walk any time
that any "mount" or "unmount" happens.

-`m_seq` is checked (using `read_seqretry()`) at the end of an RCU-walk
+``m_seq`` is checked (using ``read_seqretry()``) at the end of an RCU-walk
sequence, whether switching to REF-walk for the rest of the path or
when the end of the path is reached. It is also checked when stepping
-down over a mount point (in `__follow_mount_rcu()`) or up (in
-`follow_dotdot_rcu()`). If it is ever found to have changed, the
+down over a mount point (in ``__follow_mount_rcu()``) or up (in
+``follow_dotdot_rcu()``). If it is ever found to have changed, the
whole RCU-walk sequence is aborted and the path is processed again by
REF-walk.

-If RCU-walk finds that `mount_lock` hasn't changed then it can be sure
+If RCU-walk finds that ``mount_lock`` hasn't changed then it can be sure
that, had REF-walk taken counted references on each vfsmount, the
results would have been the same. This ensures the invariant holds,
at least for vfsmount structures.

-### `dentry->d_seq` and `nd->seq`. ###
+``dentry->d_seq`` and ``nd->seq``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-In place of taking a count or lock on `d_reflock`, RCU-walk samples
-the per-dentry `d_seq` seqlock, and stores the sequence number in the
-`seq` field of the nameidata structure, so `nd->seq` should always be
-the current sequence number of `nd->dentry`. This number needs to be
+In place of taking a count or lock on ``d_lockref``, RCU-walk samples
+the per-dentry ``d_seq`` seqlock, and stores the sequence number in the
+``seq`` field of the nameidata structure, so ``nd->seq`` should always be
+the current sequence number of ``nd->dentry``. This number needs to be
revalidated after copying, and before using, the name, parent, or
inode of the dentry.

The handling of the name we have already looked at, and the parent is
-only accessed in `follow_dotdot_rcu()` which fairly trivially follows
+only accessed in ``follow_dotdot_rcu()`` which fairly trivially follows
the required pattern, though it does so for three different cases.
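+
+That "required pattern" - sample the seqcount, copy the fields, then
+validate the copy before trusting it - is worth seeing once in
+schematic form. This is a generic sketch of a seqcount consumer, not
+a quote of any kernel function; ``obj`` and its fields are invented
+for illustration::
+
+	unsigned seq;
+	int len;
+	const char *name;
+
+	do {
+		seq = read_seqcount_begin(&obj->seqcount);
+		len = obj->len;		/* copy the protected fields */
+		name = obj->name;
+	} while (read_seqcount_retry(&obj->seqcount, seq));
+	/* len and name now form a consistent snapshot */
+
+The three cases handled by ``follow_dotdot_rcu()`` are as follows.
+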
-When not at a mount point, `d_parent` is followed and its `d_seq` is
+When not at a mount point, ``d_parent`` is followed and its ``d_seq`` is
collected. When we are at a mount point, we instead follow the
-`mnt->mnt_mountpoint` link to get a new dentry and collect its
-`d_seq`. Then, after finally finding a `d_parent` to follow, we must
+``mnt->mnt_mountpoint`` link to get a new dentry and collect its
+``d_seq``. Then, after finally finding a ``d_parent`` to follow, we must
check if we have landed on a mount point and, if so, must find that
-mount point and follow the `mnt->mnt_root` link. This would imply a
+mount point and follow the ``mnt->mnt_root`` link. This would imply a
somewhat unusual, but certainly possible, circumstance where the
starting point of the path lookup was in part of the filesystem that
was mounted on, and so not visible from the root.

-The inode pointer, stored in `->d_inode`, is a little more
+The inode pointer, stored in ``->d_inode``, is a little more
interesting. The inode will always need to be accessed at least
twice, once to determine if it is NULL and once to verify access
permissions. Symlink handling requires a validated inode pointer too.
Rather than revalidating on each access, a copy is made on the first
-access and it is stored in the `inode` field of `nameidata` from where
+access and it is stored in the ``inode`` field of ``nameidata`` from where
it can be safely accessed without further validation.

-`lookup_fast()` is the only lookup routine that is used in RCU-mode,
-`lookup_slow()` being too slow and requiring locks. It is in
-`lookup_fast()` that we find the important "hand over hand" tracking
+``lookup_fast()`` is the only lookup routine that is used in RCU-mode,
+``lookup_slow()`` being too slow and requiring locks. It is in
+``lookup_fast()`` that we find the important "hand over hand" tracking
of the current dentry.

-The current `dentry` and current `seq` number are passed to
-`__d_lookup_rcu()` which, on success, returns a new `dentry` and a
-new `seq` number. `lookup_fast()` then copies the inode pointer and
-revalidates the new `seq` number. It then validates the old `dentry`
-with the old `seq` number one last time and only then continues. This
-process of getting the `seq` number of the new dentry and then
-checking the `seq` number of the old exactly mirrors the process of
+The current ``dentry`` and current ``seq`` number are passed to
+``__d_lookup_rcu()`` which, on success, returns a new ``dentry`` and a
+new ``seq`` number. ``lookup_fast()`` then copies the inode pointer and
+revalidates the new ``seq`` number. It then validates the old ``dentry``
+with the old ``seq`` number one last time and only then continues. This
+process of getting the ``seq`` number of the new dentry and then
+checking the ``seq`` number of the old exactly mirrors the process of
getting a counted reference to the new dentry before dropping that
for the old dentry which we saw in REF-walk.

-### No `inode->i_mutex` or even `rename_lock` ###
+No ``inode->i_rwsem`` or even ``rename_lock``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-A mutex is a fairly heavyweight lock that can only be taken when it is
-permissible to sleep. As `rcu_read_lock()` forbids sleeping,
-`inode->i_mutex` plays no role in RCU-walk. If some other thread does
-take `i_mutex` and modifies the directory in a way that RCU-walk needs
+A semaphore is a fairly heavyweight lock that can only be taken when it is
+permissible to sleep.
As ``rcu_read_lock()`` forbids sleeping,
+``inode->i_rwsem`` plays no role in RCU-walk. If some other thread does
+take ``i_rwsem`` and modifies the directory in a way that RCU-walk needs
to notice, the result will be either that RCU-walk fails to find the
dentry that it is looking for, or it will find a dentry which
-`read_seqretry()` won't validate. In either case it will drop down to
+``read_seqretry()`` won't validate. In either case it will drop down to
REF-walk mode which can take whatever locks are needed.

-Though `rename_lock` could be used by RCU-walk as it doesn't require
-any sleeping, RCU-walk doesn't bother. REF-walk uses `rename_lock` to
+Though ``rename_lock`` could be used by RCU-walk as it doesn't require
+any sleeping, RCU-walk doesn't bother. REF-walk uses ``rename_lock`` to
protect against the possibility of hash chains in the dcache changing
while they are being searched. This can result in failing to find
something that actually is there. When RCU-walk fails to find
@@ -749,57 +795,57 @@ already drops down to REF-walk and tries again with appropriate
locking. This neatly handles all cases, so adding extra checks on
rename_lock would bring no significant value.

-`unlazy walk()` and `complete_walk()`
--------------------------------------
+``unlazy_walk()`` and ``complete_walk()``
+-----------------------------------------

That "dropping down to REF-walk" typically involves a call to
-`unlazy_walk()`, so named because "RCU-walk" is also sometimes
-referred to as "lazy walk". `unlazy_walk()` is called when
+``unlazy_walk()``, so named because "RCU-walk" is also sometimes
+referred to as "lazy walk". ``unlazy_walk()`` is called when
following the path down to the current vfsmount/dentry pair seems to
have proceeded successfully, but the next step is problematic. This
can happen if the next name cannot be found in the dcache, if
permission checking or name revalidation couldn't be achieved while
-the `rcu_read_lock()` is held (which forbids sleeping), if an
+the ``rcu_read_lock()`` is held (which forbids sleeping), if an
automount point is found, or in a couple of cases involving symlinks.
-It is also called from `complete_walk()` when the lookup has reached
+It is also called from ``complete_walk()`` when the lookup has reached
the final component, or the very end of the path, depending on which
particular flavor of lookup is used.

Other reasons for dropping out of RCU-walk that do not trigger a call
-to `unlazy_walk()` are when some inconsistency is found that cannot be
-handled immediately, such as `mount_lock` or one of the `d_seq`
+to ``unlazy_walk()`` are when some inconsistency is found that cannot be
+handled immediately, such as ``mount_lock`` or one of the ``d_seq``
seqlocks reporting a change. In these cases the relevant function
-will return `-ECHILD` which will percolate up until it triggers a new
+will return ``-ECHILD`` which will percolate up until it triggers a new
attempt from the top using REF-walk.

-For those cases where `unlazy_walk()` is an option, it essentially
+For those cases where ``unlazy_walk()`` is an option, it essentially
takes a reference on each of the pointers that it holds (vfsmount,
dentry, and possibly some symbolic links) and then verifies that the
relevant seqlocks have not been changed. If there have been changes,
-it, too, aborts with `-ECHILD`, otherwise the transition to REF-walk
+it, too, aborts with ``-ECHILD``, otherwise the transition to REF-walk
has been a success and the lookup process continues.
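+
+In outline, a successful transition looks something like the sketch
+below. This is a simplification of what ``unlazy_walk()`` does, not
+the function itself; the helpers it names are discussed next::
+
+	if (!legitimize_mnt(nd->path.mnt, nd->m_seq))
+		goto out_echild;	/* mount changed under us */
+	if (!lockref_get_not_dead(&nd->path.dentry->d_lockref))
+		goto out_echild;	/* dentry is being freed */
+	if (read_seqcount_retry(&nd->path.dentry->d_seq, nd->seq))
+		goto out_echild;	/* dentry changed under us */
+	rcu_read_unlock();		/* counted references now held */
+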
Taking a reference on those pointers is not quite as simple as just
incrementing a counter. That works to take a second reference if you
already have one (often indirectly through another object), but it
isn't sufficient if you don't actually have a counted reference at
-all. For `dentry->d_lockref`, it is safe to increment the reference
+all. For ``dentry->d_lockref``, it is safe to increment the reference
counter to get a reference unless it has been explicitly marked as
-"dead" which involves setting the counter to `-128`.
-`lockref_get_not_dead()` achieves this.
+"dead" which involves setting the counter to ``-128``.
+``lockref_get_not_dead()`` achieves this.

-For `mnt->mnt_count` it is safe to take a reference as long as
-`mount_lock` is then used to validate the reference. If that
+For ``mnt->mnt_count`` it is safe to take a reference as long as
+``mount_lock`` is then used to validate the reference. If that
validation fails, it may *not* be safe to just drop that reference in
-the standard way of calling `mnt_put()` - an unmount may have
-progressed too far. So the code in `legitimize_mnt()`, when it
+the standard way of calling ``mntput()`` - an unmount may have
+progressed too far. So the code in ``legitimize_mnt()``, when it
finds that the reference it got might not be safe, checks the
-`MNT_SYNC_UMOUNT` flag to determine if a simple `mnt_put()` is
+``MNT_SYNC_UMOUNT`` flag to determine if a simple ``mntput()`` is
correct, or if it should just decrement the count and pretend none of
this ever happened.

Taking care in filesystems
-----------------------------
+--------------------------

RCU-walk depends almost entirely on cached information and often will
not call into the filesystem at all. However there are two places,
@@ -809,26 +855,26 @@ careful.

If the filesystem has non-standard permission-checking requirements -
such as a networked filesystem which may need to check with the server
-- the `i_op->permission` interface might be called during RCU-walk.
-In this case an extra "`MAY_NOT_BLOCK`" flag is passed so that it
-knows not to sleep, but to return `-ECHILD` if it cannot complete
-promptly. `i_op->permission` is given the inode pointer, not the
+- the ``i_op->permission`` interface might be called during RCU-walk.
+In this case an extra "``MAY_NOT_BLOCK``" flag is passed so that it
+knows not to sleep, but to return ``-ECHILD`` if it cannot complete
+promptly. ``i_op->permission`` is given the inode pointer, not the
dentry, so it doesn't need to worry about further consistency checks.
However if it accesses any other filesystem data structures, it must
-ensure they are safe to be accessed with only the `rcu_read_lock()`
-held. This typically means they must be freed using `kfree_rcu()` or
+ensure they are safe to be accessed with only the ``rcu_read_lock()``
+held. This typically means they must be freed using ``kfree_rcu()`` or
similar.

-[`READ_ONCE()`]: https://lwn.net/Articles/624126/
+.. _READ_ONCE: https://lwn.net/Articles/624126/

If the filesystem may need to revalidate dcache entries, then
-`d_op->d_revalidate` may be called in RCU-walk too. This interface
-*is* passed the dentry but does not have access to the `inode` or the
-`seq` number from the `nameidata`, so it needs to be extra careful
+``d_op->d_revalidate`` may be called in RCU-walk too. This interface
+*is* passed the dentry but does not have access to the ``inode`` or the
+``seq`` number from the ``nameidata``, so it needs to be extra careful
when accessing fields in the dentry.
This "extra care" typically
-involves using [`READ_ONCE()`] to access fields, and verifying the
+involves using `READ_ONCE() <https://lwn.net/Articles/624126/>`_ to access fields, and verifying the
result is not NULL before using it. This pattern can be seen in
-`nfs_lookup_revalidate()`.
+``nfs_lookup_revalidate()``.

A pair of patterns
------------------

In various places in the details of REF-walk and RCU-walk, and also in
the big picture, there are a couple of related patterns that are worth
being aware of.

The first is "try quickly and check, if that fails try slowly". We
can see that in the high-level approach of first trying RCU-walk and
-then trying REF-walk, and in places where `unlazy_walk()` is used to
+then trying REF-walk, and in places where ``unlazy_walk()`` is used to
switch to REF-walk for the rest of the path. We also saw it earlier
-in `dget_parent()` when following a "`..`" link. It tries a quick way
+in ``dget_parent()`` when following a "``..``" link. It tries a quick way
to get a reference, then falls back to taking locks if needed.

The second pattern is "try quickly and check, if that fails try
-again - repeatedly". This is seen with the use of `rename_lock` and
-`mount_lock` in REF-walk. RCU-walk doesn't make use of this pattern -
+again - repeatedly". This is seen with the use of ``rename_lock`` and
+``mount_lock`` in REF-walk. RCU-walk doesn't make use of this pattern -
if anything goes wrong it is much safer to just abort and try a more
sedate approach.

@@ -882,8 +928,8 @@ Conceptually, symbolic links could be handled by editing the path. If
a component name refers to a symbolic link, then that component is
replaced by the body of the link and, if that body starts with a '/',
then all preceding parts of the path are discarded. This is what the
-"`readlink -f`" command does, though it also edits out "`.`" and
-"`..`" components.
+"``readlink -f``" command does, though it also edits out "``.``" and
+"``..``" components.

Directly editing the path string is not really necessary when looking
up a path, and discarding early components is pointless as they aren't
@@ -899,19 +945,19 @@ There are two reasons for placing limits on how many symlinks can
occur in a single path lookup. The most obvious is to avoid loops.
If a symlink referred to itself either directly or through
intermediaries, then following the symlink can never complete
-successfully - the error `ELOOP` must be returned. Loops can be
+successfully - the error ``ELOOP`` must be returned. Loops can be
detected without imposing limits, but limits are the simplest
solution and, given the second reason for restriction, quite
sufficient.

-[outlined recently]: http://thread.gmane.org/gmane.linux.kernel/1934390/focus=1934550
+.. _outlined recently: http://thread.gmane.org/gmane.linux.kernel/1934390/focus=1934550

-The second reason was [outlined recently] by Linus:
+The second reason was `outlined recently`_ by Linus:

-> Because it's a latency and DoS issue too. We need to react well to
-> true loops, but also to "very deep" non-loops. It's not about memory
-> use, it's about users triggering unreasonable CPU resources.
+   Because it's a latency and DoS issue too. We need to react well to
+   true loops, but also to "very deep" non-loops. It's not about memory
+   use, it's about users triggering unreasonable CPU resources.

-Linux imposes a limit on the length of any pathname: `PATH_MAX`, which
+Linux imposes a limit on the length of any pathname: ``PATH_MAX``, which
is 4096. There are a number of reasons for this limit; not letting the
kernel spend too much time on just one path is one of them.
With symbolic links you can effectively generate much longer paths so some @@ -921,7 +967,7 @@ further limit of eight on the maximum depth of recursion, but that was raised to 40 when a separate stack was implemented, so there is now just the one limit. -The `nameidata` structure that we met in an earlier article contains a +The ``nameidata`` structure that we met in an earlier article contains a small stack that can be used to store the remaining part of up to two symlinks. In many cases this will be sufficient. If it isn't, a separate stack is allocated with room for 40 symlinks. Pathname @@ -941,13 +987,13 @@ to external storage. It is particularly important for RCU-walk to be able to find and temporarily hold onto these cached entries, so that it doesn't need to drop down into REF-walk. -[object-oriented design pattern]: https://lwn.net/Articles/446317/ +.. _object-oriented design pattern: https://lwn.net/Articles/446317/ While each filesystem is free to make its own choice, symlinks are typically stored in one of two places. Short symlinks are often -stored directly in the inode. When a filesystem allocates a `struct -inode` it typically allocates extra space to store private data (a -common [object-oriented design pattern] in the kernel). This will +stored directly in the inode. When a filesystem allocates a ``struct +inode`` it typically allocates extra space to store private data (a +common `object-oriented design pattern`_ in the kernel). This will sometimes include space for a symlink. The other common location is in the page cache, which normally stores the content of files. The pathname in a symlink can be seen as the content of that symlink and @@ -962,13 +1008,13 @@ the inode which, itself, is protected by RCU or by a counted reference on the dentry. This means that the mechanisms that pathname lookup uses to access the dcache and icache (inode cache) safely are quite sufficient for accessing some cached symlinks safely. In these cases, -the `i_link` pointer in the inode is set to point to wherever the +the ``i_link`` pointer in the inode is set to point to wherever the symlink is stored and it can be accessed directly whenever needed. When the symlink is stored in the page cache or elsewhere, the situation is not so straightforward. A reference on a dentry or even on an inode does not imply any reference on cached pages of that -inode, and even an `rcu_read_lock()` is not sufficient to ensure that +inode, and even an ``rcu_read_lock()`` is not sufficient to ensure that a page will not disappear. So for these symlinks the pathname lookup code needs to ask the filesystem to provide a stable reference and, significantly, needs to release that reference when it is finished @@ -978,48 +1024,48 @@ Taking a reference to a cache page is often possible even in RCU-walk mode. It does require making changes to memory, which is best avoided, but that isn't necessarily a big cost and it is better than dropping out of RCU-walk mode completely. Even filesystems that allocate -space to copy the symlink into can use `GFP_ATOMIC` to often successfully +space to copy the symlink into can use ``GFP_ATOMIC`` to often successfully allocate memory without the need to drop out of RCU-walk. If a filesystem cannot successfully get a reference in RCU-walk mode, it -must return `-ECHILD` and `unlazy_walk()` will be called to return to +must return ``-ECHILD`` and ``unlazy_walk()`` will be called to return to REF-walk mode in which the filesystem is allowed to sleep. 
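+
+For the easy case just described, where the symlink body is reachable
+through ``i_link``, the filesystem's side can be trivial. The sketch
+below is modelled on the kernel's ``simple_follow_link()`` helper;
+since this interface is still in flux (as noted next), treat the
+signature as illustrative. No reference needs to be dropped later, so
+the cookie is left untouched::
+
+	static const char *example_follow_link(struct dentry *dentry,
+					       void **cookie)
+	{
+		/* the body was stored in the inode when the symlink
+		 * was created, so no reference or copy is needed */
+		return d_inode(dentry)->i_link;
+	}
+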
-The place for all this to happen is the `i_op->follow_link()` inode
+The place for all this to happen is the ``i_op->follow_link()`` inode
method. In the present mainline code this is never actually called in
RCU-walk mode as the rewrite is not quite complete. It is likely that
-in a future release this method will be passed an `inode` pointer when
+in a future release this method will be passed an ``inode`` pointer when
called in RCU-walk mode so it both (1) knows to be careful, and (2) has the
-validated pointer. Much like the `i_op->permission()` method we
-looked at previously, `->follow_link()` would need to be careful that
+validated pointer. Much like the ``i_op->permission()`` method we
+looked at previously, ``->follow_link()`` would need to be careful that
all the data structures it references are safe to be accessed while
holding no counted reference, only the RCU lock. Though getting a
-reference with `->follow_link()` is not yet done in RCU-walk mode, the
+reference with ``->follow_link()`` is not yet done in RCU-walk mode, the
code is ready to release the reference when that does happen.

This need to drop the reference to a symlink adds significant
complexity. It requires a reference to the inode so that the
-`i_op->put_link()` inode operation can be called. In REF-walk, that
+``i_op->put_link()`` inode operation can be called. In REF-walk, that
reference is kept implicitly through a reference to the dentry, so
-keeping the `struct path` of the symlink is easiest. For RCU-walk,
+keeping the ``struct path`` of the symlink is easiest. For RCU-walk,
the pointer to the inode is kept separately. To allow switching from
RCU-walk back to REF-walk in the middle of processing nested symlinks
we also need the seq number for the dentry so we can confirm that
switching back was safe.

Finally, when providing a reference to a symlink, the filesystem also
-provides an opaque "cookie" that must be passed to `->put_link()` so that it
+provides an opaque "cookie" that must be passed to ``->put_link()`` so that it
knows what to free. This might be the allocated memory area, or a
-pointer to the `struct page` in the page cache, or something else
+pointer to the ``struct page`` in the page cache, or something else
completely. Only the filesystem knows what it is.

In order for the reference to each symlink to be dropped when the walk completes,
whether in RCU-walk or REF-walk, the symlink stack needs to contain,
along with the path remnants:

-- the `struct path` to provide a reference to the inode in REF-walk
-- the `struct inode *` to provide a reference to the inode in RCU-walk
-- the `seq` to allow the path to be safely switched from RCU-walk to REF-walk
-- the `cookie` that tells `->put_path()` what to put.
+- the ``struct path`` to provide a reference to the inode in REF-walk
+- the ``struct inode *`` to provide a reference to the inode in RCU-walk
+- the ``seq`` to allow the path to be safely switched from RCU-walk to REF-walk
+- the ``cookie`` that tells ``->put_link()`` what to put.

This means that each entry in the symlink stack needs to hold five
pointers and an integer instead of just one pointer (the path
@@ -1028,28 +1074,28 @@ with 40 entries it adds up to 1600 bytes total, which is less than
half a page. So it might seem like a lot, but is by no means
excessive.

-Note that, in a given stack frame, the path remnant (`name`) is not
+Note that, in a given stack frame, the path remnant (``name``) is not
part of the symlink that the other fields refer to.
It is the remnant to be followed once that symlink has been fully parsed. Following the symlink --------------------- -The main loop in `link_path_walk()` iterates seamlessly over all +The main loop in ``link_path_walk()`` iterates seamlessly over all components in the path and all of the non-final symlinks. As symlinks -are processed, the `name` pointer is adjusted to point to a new +are processed, the ``name`` pointer is adjusted to point to a new symlink, or is restored from the stack, so that much of the loop -doesn't need to notice. Getting this `name` variable on and off the +doesn't need to notice. Getting this ``name`` variable on and off the stack is very straightforward; pushing and popping the references is a little more complex. -When a symlink is found, `walk_component()` returns the value `1` -(`0` is returned for any other sort of success, and a negative number -is, as usual, an error indicator). This causes `get_link()` to be +When a symlink is found, ``walk_component()`` returns the value ``1`` +(``0`` is returned for any other sort of success, and a negative number +is, as usual, an error indicator). This causes ``get_link()`` to be called; it then gets the link from the filesystem. Providing that -operation is successful, the old path `name` is placed on the stack, -and the new value is used as the `name` for a while. When the end of -the path is found (i.e. `*name` is `'\0'`) the old `name` is restored +operation is successful, the old path ``name`` is placed on the stack, +and the new value is used as the ``name`` for a while. When the end of +the path is found (i.e. ``*name`` is ``'\0'``) the old ``name`` is restored off the stack and path walking continues. Pushing and popping the reference pointers (inode, cookie, etc.) is more @@ -1060,113 +1106,114 @@ the symlink-just-found to avoid leaving empty path remnants that would just get in the way. It is most convenient to push the new symlink references onto the -stack in `walk_component()` immediately when the symlink is found; -`walk_component()` is also the last piece of code that needs to look at the +stack in ``walk_component()`` immediately when the symlink is found; +``walk_component()`` is also the last piece of code that needs to look at the old symlink as it walks that last component. So it is quite -convenient for `walk_component()` to release the old symlink and pop +convenient for ``walk_component()`` to release the old symlink and pop the references just before pushing the reference information for the -new symlink. It is guided in this by two flags; `WALK_GET`, which +new symlink. It is guided in this by two flags; ``WALK_GET``, which gives it permission to follow a symlink if it finds one, and -`WALK_PUT`, which tells it to release the current symlink after it has been -followed. `WALK_PUT` is tested first, leading to a call to -`put_link()`. `WALK_GET` is tested subsequently (by -`should_follow_link()`) leading to a call to `pick_link()` which sets +``WALK_PUT``, which tells it to release the current symlink after it has been +followed. ``WALK_PUT`` is tested first, leading to a call to +``put_link()``. ``WALK_GET`` is tested subsequently (by +``should_follow_link()``) leading to a call to ``pick_link()`` which sets up the stack frame. -### Symlinks with no final component ### +Symlinks with no final component +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A pair of special-case symlinks deserve a little further explanation. 
-Both result in a new `struct path` (with mount and dentry) being set -up in the `nameidata`, and result in `get_link()` returning `NULL`. +Both result in a new ``struct path`` (with mount and dentry) being set +up in the ``nameidata``, and result in ``get_link()`` returning ``NULL``. -The more obvious case is a symlink to "`/`". All symlinks starting -with "`/`" are detected in `get_link()` which resets the `nameidata` +The more obvious case is a symlink to "``/``". All symlinks starting +with "``/``" are detected in ``get_link()`` which resets the ``nameidata`` to point to the effective filesystem root. If the symlink only -contains "`/`" then there is nothing more to do, no components at all, -so `NULL` is returned to indicate that the symlink can be released and +contains "``/``" then there is nothing more to do, no components at all, +so ``NULL`` is returned to indicate that the symlink can be released and the stack frame discarded. -The other case involves things in `/proc` that look like symlinks but -aren't really. +The other case involves things in ``/proc`` that look like symlinks but +aren't really:: -> $ ls -l /proc/self/fd/1 -> lrwx------ 1 neilb neilb 64 Jun 13 10:19 /proc/self/fd/1 -> /dev/pts/4 + $ ls -l /proc/self/fd/1 + lrwx------ 1 neilb neilb 64 Jun 13 10:19 /proc/self/fd/1 -> /dev/pts/4 -Every open file descriptor in any process is represented in `/proc` by +Every open file descriptor in any process is represented in ``/proc`` by something that looks like a symlink. It is really a reference to the -target file, not just the name of it. When you `readlink` these +target file, not just the name of it. When you ``readlink`` these objects you get a name that might refer to the same file - unless it -has been unlinked or mounted over. When `walk_component()` follows -one of these, the `->follow_link()` method in "procfs" doesn't return -a string name, but instead calls `nd_jump_link()` which updates the -`nameidata` in place to point to that target. `->follow_link()` then -returns `NULL`. Again there is no final component and `get_link()` -reports this by leaving the `last_type` field of `nameidata` as -`LAST_BIND`. +has been unlinked or mounted over. When ``walk_component()`` follows +one of these, the ``->follow_link()`` method in "procfs" doesn't return +a string name, but instead calls ``nd_jump_link()`` which updates the +``nameidata`` in place to point to that target. ``->follow_link()`` then +returns ``NULL``. Again there is no final component and ``get_link()`` +reports this by leaving the ``last_type`` field of ``nameidata`` as +``LAST_BIND``. Following the symlink in the final component -------------------------------------------- -All this leads to `link_path_walk()` walking down every component, and +All this leads to ``link_path_walk()`` walking down every component, and following all symbolic links it finds, until it reaches the final -component. This is just returned in the `last` field of `nameidata`. +component. This is just returned in the ``last`` field of ``nameidata``. For some callers, this is all they need; they want to create that -`last` name if it doesn't exist or give an error if it does. Other +``last`` name if it doesn't exist or give an error if it does. Other callers will want to follow a symlink if one is found, and possibly apply special handling to the last component of that symlink, rather than just the last component of the original file name. 
These callers
-potentially need to call `link_path_walk()` again and again on
+potentially need to call ``link_path_walk()`` again and again on
successive symlinks until one is found that doesn't point to another
symlink.

-This case is handled by the relevant caller of `link_path_walk()`, such as
-`path_lookupat()` using a loop that calls `link_path_walk()`, and then
+This case is handled by the relevant caller of ``link_path_walk()``, such as
+``path_lookupat()`` using a loop that calls ``link_path_walk()``, and then
handles the final component. If the final component is a symlink
-that needs to be followed, then `trailing_symlink()` is called to set
-things up properly and the loop repeats, calling `link_path_walk()`
+that needs to be followed, then ``trailing_symlink()`` is called to set
+things up properly and the loop repeats, calling ``link_path_walk()``
again. This could loop as many as 40 times if the last component of
each symlink is another symlink.

The various functions that examine the final component and possibly
-report that it is a symlink are `lookup_last()`, `mountpoint_last()`
-and `do_last()`, each of which use the same convention as
-`walk_component()` of returning `1` if a symlink was found that needs
+report that it is a symlink are ``lookup_last()``, ``mountpoint_last()``
+and ``do_last()``, each of which uses the same convention as
+``walk_component()`` of returning ``1`` if a symlink was found that needs
to be followed.

-Of these, `do_last()` is the most interesting as it is used for
-opening a file. Part of `do_last()` runs with `i_mutex` held and this
-part is in a separate function: `lookup_open()`.
+Of these, ``do_last()`` is the most interesting as it is used for
+opening a file. Part of ``do_last()`` runs with ``i_rwsem`` held and this
+part is in a separate function: ``lookup_open()``.

-Explaining `do_last()` completely is beyond the scope of this article,
+Explaining ``do_last()`` completely is beyond the scope of this article,
but a few highlights should help those interested in exploring the
code.

-1. Rather than just finding the target file, `do_last()` needs to open
-   it. If the file was found in the dcache, then `vfs_open()` is used for
-   this. If not, then `lookup_open()` will either call `atomic_open()` (if
-   the filesystem provides it) to combine the final lookup with the open, or
-   will perform the separate `lookup_real()` and `vfs_create()` steps
-   directly. In the later case the actual "open" of this newly found or
-   created file will be performed by `vfs_open()`, just as if the name
-   were found in the dcache.
-
-2. `vfs_open()` can fail with `-EOPENSTALE` if the cached information
-   wasn't quite current enough. Rather than restarting the lookup from
-   the top with `LOOKUP_REVAL` set, `lookup_open()` is called instead,
-   giving the filesystem a chance to resolve small inconsistencies.
-   If that doesn't work, only then is the lookup restarted from the top.
+1. Rather than just finding the target file, ``do_last()`` needs to open
+   it. If the file was found in the dcache, then ``vfs_open()`` is used for
+   this. If not, then ``lookup_open()`` will either call ``atomic_open()`` (if
+   the filesystem provides it) to combine the final lookup with the open, or
+   will perform the separate ``lookup_real()`` and ``vfs_create()`` steps
+   directly. In the latter case the actual "open" of this newly found or
+   created file will be performed by ``vfs_open()``, just as if the name
+   were found in the dcache.
+
+2. 
``vfs_open()`` can fail with ``-EOPENSTALE`` if the cached information + wasn't quite current enough. Rather than restarting the lookup from + the top with ``LOOKUP_REVAL`` set, ``lookup_open()`` is called instead, + giving the filesystem a chance to resolve small inconsistencies. + If that doesn't work, only then is the lookup restarted from the top. 3. An open with O_CREAT **does** follow a symlink in the final component, - unlike other creation system calls (like `mkdir`). So the sequence: + unlike other creation system calls (like ``mkdir``). So the sequence:: - > ln -s bar /tmp/foo - > echo hello > /tmp/foo + ln -s bar /tmp/foo + echo hello > /tmp/foo - will create a file called `/tmp/bar`. This is not permitted if - `O_EXCL` is set but otherwise is handled for an O_CREAT open much - like for a non-creating open: `should_follow_link()` returns `1`, and - so does `do_last()` so that `trailing_symlink()` gets called and the - open process continues on the symlink that was found. + will create a file called ``/tmp/bar``. This is not permitted if + ``O_EXCL`` is set but otherwise is handled for an O_CREAT open much + like for a non-creating open: ``should_follow_link()`` returns ``1``, and + so does ``do_last()`` so that ``trailing_symlink()`` gets called and the + open process continues on the symlink that was found. Updating the access time ------------------------ @@ -1180,110 +1227,112 @@ footprints are best kept to a minimum. One other place where walking down a symlink can involve leaving footprints in a way that doesn't affect directories is in updating access times. In Unix (and Linux) every filesystem object has a "last accessed -time", or "`atime`". Passing through a directory to access a file +time", or "``atime``". Passing through a directory to access a file within is not considered to be an access for the purposes of -`atime`; only listing the contents of a directory can update its `atime`. -Symlinks are different it seems. Both reading a symlink (with `readlink()`) +``atime``; only listing the contents of a directory can update its ``atime``. +Symlinks are different it seems. Both reading a symlink (with ``readlink()``) and looking up a symlink on the way to some other destination can update the atime on that symlink. -[clearest statement]: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_08 +.. _clearest statement: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_08 It is not clear why this is the case; POSIX has little to say on the -subject. The [clearest statement] is that, if a particular implementation +subject. The `clearest statement`_ is that, if a particular implementation updates a timestamp in a place not specified by POSIX, this must be documented "except that any changes caused by pathname resolution need not be documented". This seems to imply that POSIX doesn't really care about access-time updates during pathname lookup. -[Linux 1.3.87]: https://git.kernel.org/cgit/linux/kernel/git/history/history.git/diff/fs/ext2/symlink.c?id=f806c6db77b8eaa6e00dcfb6b567706feae8dbb8 +.. _Linux 1.3.87: https://git.kernel.org/cgit/linux/kernel/git/history/history.git/diff/fs/ext2/symlink.c?id=f806c6db77b8eaa6e00dcfb6b567706feae8dbb8 -An examination of history shows that prior to [Linux 1.3.87], the ext2 +An examination of history shows that prior to `Linux 1.3.87`_, the ext2 filesystem, at least, didn't update atime when following a link. Unfortunately we have no record of why that behavior was changed. 
In any case, access time must now be updated and that operation can be quite complex. Trying to stay in RCU-walk while doing it is best -avoided. Fortunately it is often permitted to skip the `atime` -update. Because `atime` updates cause performance problems in various -areas, Linux supports the `relatime` mount option, which generally -limits the updates of `atime` to once per day on files that aren't +avoided. Fortunately it is often permitted to skip the ``atime`` +update. Because ``atime`` updates cause performance problems in various +areas, Linux supports the ``relatime`` mount option, which generally +limits the updates of ``atime`` to once per day on files that aren't being changed (and symlinks never change once created). Even without -`relatime`, many filesystems record `atime` with a one-second +``relatime``, many filesystems record ``atime`` with a one-second granularity, so only one update per second is required. -It is easy to test if an `atime` update is needed while in RCU-walk +It is easy to test if an ``atime`` update is needed while in RCU-walk mode and, if it isn't, the update can be skipped and RCU-walk mode -continues. Only when an `atime` update is actually required does the +continues. Only when an ``atime`` update is actually required does the path walk drop down to REF-walk. All of this is handled in the -`get_link()` function. +``get_link()`` function. A few flags ----------- A suitable way to wrap up this tour of pathname walking is to list -the various flags that can be stored in the `nameidata` to guide the +the various flags that can be stored in the ``nameidata`` to guide the lookup process. Many of these are only meaningful on the final component, others reflect the current state of the pathname lookup. -And then there is `LOOKUP_EMPTY`, which doesn't fit conceptually with +And then there is ``LOOKUP_EMPTY``, which doesn't fit conceptually with the others. If this is not set, an empty pathname causes an error very early on. If it is set, empty pathnames are not considered to be an error. -### Global state flags ### +Global state flags +~~~~~~~~~~~~~~~~~~ -We have already met two global state flags: `LOOKUP_RCU` and -`LOOKUP_REVAL`. These select between one of three overall approaches +We have already met two global state flags: ``LOOKUP_RCU`` and +``LOOKUP_REVAL``. These select between one of three overall approaches to lookup: RCU-walk, REF-walk, and REF-walk with forced revalidation. -`LOOKUP_PARENT` indicates that the final component hasn't been reached +``LOOKUP_PARENT`` indicates that the final component hasn't been reached yet. This is primarily used to tell the audit subsystem the full context of a particular access being audited. -`LOOKUP_ROOT` indicates that the `root` field in the `nameidata` was +``LOOKUP_ROOT`` indicates that the ``root`` field in the ``nameidata`` was provided by the caller, so it shouldn't be released when it is no longer needed. -`LOOKUP_JUMPED` means that the current dentry was chosen not because +``LOOKUP_JUMPED`` means that the current dentry was chosen not because it had the right name but for some other reason. This happens when -following "`..`", following a symlink to `/`, crossing a mount point -or accessing a "`/proc/$PID/fd/$FD`" symlink. In this case the +following "``..``", following a symlink to ``/``, crossing a mount point +or accessing a "``/proc/$PID/fd/$FD``" symlink. In this case the filesystem has not been asked to revalidate the name (with -`d_revalidate()`). 
In such cases the inode may still need to be -revalidated, so `d_op->d_weak_revalidate()` is called if -`LOOKUP_JUMPED` is set when the look completes - which may be at the +``d_revalidate()``). In such cases the inode may still need to be +revalidated, so ``d_op->d_weak_revalidate()`` is called if +``LOOKUP_JUMPED`` is set when the look completes - which may be at the final component or, when creating, unlinking, or renaming, at the penultimate component. -### Final-component flags ### +Final-component flags +~~~~~~~~~~~~~~~~~~~~~ Some of these flags are only set when the final component is being considered. Others are only checked for when considering that final component. -`LOOKUP_AUTOMOUNT` ensures that, if the final component is an automount +``LOOKUP_AUTOMOUNT`` ensures that, if the final component is an automount point, then the mount is triggered. Some operations would trigger it -anyway, but operations like `stat()` deliberately don't. `statfs()` -needs to trigger the mount but otherwise behaves a lot like `stat()`, so -it sets `LOOKUP_AUTOMOUNT`, as does "`quotactl()`" and the handling of -"`mount --bind`". +anyway, but operations like ``stat()`` deliberately don't. ``statfs()`` +needs to trigger the mount but otherwise behaves a lot like ``stat()``, so +it sets ``LOOKUP_AUTOMOUNT``, as does "``quotactl()``" and the handling of +"``mount --bind``". -`LOOKUP_FOLLOW` has a similar function to `LOOKUP_AUTOMOUNT` but for +``LOOKUP_FOLLOW`` has a similar function to ``LOOKUP_AUTOMOUNT`` but for symlinks. Some system calls set or clear it implicitly, while -others have API flags such as `AT_SYMLINK_FOLLOW` and -`UMOUNT_NOFOLLOW` to control it. Its effect is similar to -`WALK_GET` that we already met, but it is used in a different way. +others have API flags such as ``AT_SYMLINK_FOLLOW`` and +``UMOUNT_NOFOLLOW`` to control it. Its effect is similar to +``WALK_GET`` that we already met, but it is used in a different way. -`LOOKUP_DIRECTORY` insists that the final component is a directory. +``LOOKUP_DIRECTORY`` insists that the final component is a directory. Various callers set this and it is also set when the final component is found to be followed by a slash. -Finally `LOOKUP_OPEN`, `LOOKUP_CREATE`, `LOOKUP_EXCL`, and -`LOOKUP_RENAME_TARGET` are not used directly by the VFS but are made -available to the filesystem and particularly the `->d_revalidate()` +Finally ``LOOKUP_OPEN``, ``LOOKUP_CREATE``, ``LOOKUP_EXCL``, and +``LOOKUP_RENAME_TARGET`` are not used directly by the VFS but are made +available to the filesystem and particularly the ``->d_revalidate()`` method. A filesystem can choose not to bother revalidating too hard if it knows that it will be asked to open or create the file soon. -These flags were previously useful for `->lookup()` too but with the -introduction of `->atomic_open()` they are less relevant there. +These flags were previously useful for ``->lookup()`` too but with the +introduction of ``->atomic_open()`` they are less relevant there. End of the road --------------- diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt index 12a5e6e693b6..66cad5c86171 100644 --- a/Documentation/filesystems/proc.txt +++ b/Documentation/filesystems/proc.txt @@ -125,6 +125,13 @@ process running on the system, which is named after the process ID (PID). The link self points to the process reading the file system. Each process subdirectory has the entries listed in Table 1-1. 
+Note that an open file descriptor to /proc/<pid> or to any of its
+contained files or subdirectories does not prevent <pid> being reused
+for some other process in the event that <pid> exits. Operations on
+open /proc/<pid> file descriptors corresponding to dead processes
+never act on any new process that the kernel may, through chance, have
+also assigned the process ID <pid>. Instead, operations on these FDs
+usually fail with ESRCH.

Table 1-1: Process specific entries in /proc
..............................................................................
@@ -182,6 +189,7 @@ read the file /proc/PID/status:
 VmSwap: 0 kB
 HugetlbPages: 0 kB
 CoreDumping: 0
+ THP_enabled: 1
 Threads: 1
 SigQ: 0/28578
 SigPnd: 0000000000000000
@@ -193,8 +201,10 @@ read the file /proc/PID/status:
 CapPrm: 0000000000000000
 CapEff: 0000000000000000
 CapBnd: ffffffffffffffff
+ CapAmb: 0000000000000000
 NoNewPrivs: 0
 Seccomp: 0
+ Speculation_Store_Bypass: thread vulnerable
 voluntary_ctxt_switches: 0
 nonvoluntary_ctxt_switches: 1
@@ -214,7 +224,7 @@ asynchronous manner and the value may not be very precise. To see
a precise snapshot of a moment, you can see /proc/<pid>/smaps file
and scan page table. It's slow but very precise.

-Table 1-2: Contents of the status files (as of 4.8)
+Table 1-2: Contents of the status files (as of 4.19)
..............................................................................
 Field Content
 Name filename of the executable
@@ -256,6 +266,8 @@ Table 1-2: Contents of the status files (as of 4.8)
 HugetlbPages size of hugetlb memory portions
 CoreDumping process's memory is currently being dumped
 (killing the process may lead to a corrupted core)
+ THP_enabled process is allowed to use THP (returns 0 when
+ PR_SET_THP_DISABLE is set on the process)
 Threads number of threads
 SigQ number of signals queued/max. number for queue
 SigPnd bitmap of pending signals for the thread
@@ -267,8 +279,10 @@ Table 1-2: Contents of the status files (as of 4.8)
 CapPrm bitmap of permitted capabilities
 CapEff bitmap of effective capabilities
 CapBnd bitmap of capabilities bounding set
+ CapAmb bitmap of ambient capabilities
 NoNewPrivs no_new_privs, like prctl(PR_GET_NO_NEW_PRIV, ...)
 Seccomp seccomp mode, like prctl(PR_GET_SECCOMP, ...)
+ Speculation_Store_Bypass speculative store bypass mitigation status
 Cpus_allowed mask of CPUs on which this process may run
 Cpus_allowed_list Same as previous, but in "list format"
 Mems_allowed mask of memory nodes allowed to this process
@@ -425,6 +439,7 @@ SwapPss: 0 kB
 KernelPageSize: 4 kB
 MMUPageSize: 4 kB
 Locked: 0 kB
+THPeligible: 0
 VmFlags: rd ex mr mw me dw

the first of these lines shows the same information as is displayed for the
@@ -462,6 +477,8 @@ replaced by copy-on-write) part of the underlying shmem object out on swap.
"SwapPss" shows proportional swap share of this mapping. Unlike "Swap", this
does not take into account swapped out page of underlying shmem objects.
"Locked" indicates whether the mapping is locked in memory or not.
+"THPeligible" indicates whether the mapping is eligible for THP pages - 1 if
+true, 0 otherwise.

"VmFlags" field deserves a separate description. This member represents the kernel
flags associated with the particular virtual memory area in two letter encoded
manner. The codes are the following:
@@ -496,7 +513,9 @@ manner. The codes are the following:

Note that there is no guarantee that every flag and associated mnemonic will
be present in all further kernel releases. Things get changed, the flags may
-be vanished or the reverse -- new added.
+vanish or the reverse -- new ones may be added. Interpretation of their meaning
+might change in the future as well. So each consumer of these flags has to
+follow each specific kernel version for the exact semantics.

This file is only present if the CONFIG_MMU kernel configuration
option is enabled.

diff --git a/Documentation/filesystems/qnx6.txt b/Documentation/filesystems/qnx6.txt
index 4f3d6a882bdc..48ea68f15845 100644
--- a/Documentation/filesystems/qnx6.txt
+++ b/Documentation/filesystems/qnx6.txt
@@ -87,7 +87,7 @@ addressed with 16 direct blocks. For more than 16 blocks an indirect addressing
in form of another tree is used. (scheme is the same as the one used for the
superblock root nodes)

-The filesize is stored 64bit. Inode counting starts with 1. (whilst long
+The filesize is stored 64bit. Inode counting starts with 1. (while long
filename inodes start with 0)

Directories
@@ -155,7 +155,7 @@ Then userspace.
The requirement for a static, fixed preallocated system area comes from how
qnx6fs deals with writes.
Each superblock got it's own half of the system area. So superblock #1
-always uses blocks from the lower half whilst superblock #2 just writes to
+always uses blocks from the lower half while superblock #2 just writes to
blocks represented by the upper half bitmap system area bits.

Bitmap blocks, Inode blocks and indirect addressing blocks for those two
diff --git a/Documentation/filesystems/spufs.txt b/Documentation/filesystems/spufs.txt
index 1343d118a9b2..eb9e3aa63026 100644
--- a/Documentation/filesystems/spufs.txt
+++ b/Documentation/filesystems/spufs.txt
@@ -452,7 +452,7 @@ RETURN VALUE

ERRORS

-   EACCESS
+   EACCES
The current user does not have write access on the spufs mount
point.

diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index 5f71a252e2e0..8dc8e9c2913f 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -1131,7 +1131,7 @@ struct dentry_operations {
 d_manage: called to allow the filesystem to manage the transition from
 a dentry (optional). This allows autofs, for example, to hold up clients
- waiting to explore behind a 'mountpoint' whilst letting the daemon go
+ waiting to explore behind a 'mountpoint' while letting the daemon go
 past and construct the subtree there. 0 should be returned to let the
 calling process continue. -EISDIR can be returned to tell pathwalk to use
 this directory as an ordinary directory and to ignore anything

diff --git a/Documentation/filesystems/xfs-self-describing-metadata.txt b/Documentation/filesystems/xfs-self-describing-metadata.txt
index 05aa455163e3..68604e67a495 100644
--- a/Documentation/filesystems/xfs-self-describing-metadata.txt
+++ b/Documentation/filesystems/xfs-self-describing-metadata.txt
@@ -110,7 +110,7 @@ owner field in the metadata object, we can immediately do top down validation to
determine the scope of the problem.

Different types of metadata have different owner identifiers. For example,
-directory, attribute and extent tree blocks are all owned by an inode, whilst
+directory, attribute and extent tree blocks are all owned by an inode, while
freespace btree blocks are owned by an allocation group. Hence the size and
contents of the owner field are determined by the type of metadata object we
are looking at. The owner information can also identify misplaced writes (e.g.
diff --git a/Documentation/filesystems/xfs.txt b/Documentation/filesystems/xfs.txt
index a9ae82fb9d13..9ccfd1bc6201 100644
--- a/Documentation/filesystems/xfs.txt
+++ b/Documentation/filesystems/xfs.txt
@@ -417,7 +417,7 @@ level directory:
 filesystem from ever unmounting fully in the case of "retry forever"
 handler configurations.

- Note: there is no guarantee that fail_at_unmount can be set whilst an
+ Note: there is no guarantee that fail_at_unmount can be set while an
 unmount is in progress. It is possible that the sysfs entries are
 removed by the unmounting filesystem before a "retry forever" error
 handler configuration causes unmount to hang, and hence the filesystem
diff --git a/Documentation/gpu/drm-uapi.rst b/Documentation/gpu/drm-uapi.rst
index 4b4bf2c5eac5..a752aa561ea4 100644
--- a/Documentation/gpu/drm-uapi.rst
+++ b/Documentation/gpu/drm-uapi.rst
@@ -190,11 +190,11 @@ ENOSPC:
 Simply running out of kernel/system memory is signalled through
 ENOMEM.

-EPERM/EACCESS:
+EPERM/EACCES:
 Returned for an operation that is valid, but needs more privileges.
 E.g. root-only or much more common, DRM master-only operations return
 this when when called by unpriviledged clients. There's no clear
- difference between EACCESS and EPERM.
+ difference between EACCES and EPERM.

ENODEV:
 The device is not (yet) present or fully initialized.
diff --git a/Documentation/hwmon/adm1275 b/Documentation/hwmon/adm1275
index 39033538eb03..5e277b0d91ce 100644
--- a/Documentation/hwmon/adm1275
+++ b/Documentation/hwmon/adm1275
@@ -58,6 +58,9 @@ The ADM1075, unlike many other PMBus devices, does not support internal
voltage or current scaling. Reported voltages, currents, and power are raw
measurements, and will typically have to be scaled.

+The shunt value in micro-ohms can be set via device tree at compile-time. Please
+refer to Documentation/devicetree/bindings/hwmon/adm1275.txt for bindings
+if the device tree is used.

Platform data support
---------------------
diff --git a/Documentation/hwmon/adt7475 b/Documentation/hwmon/adt7475
index 09d73a10644c..01b46b290532 100644
--- a/Documentation/hwmon/adt7475
+++ b/Documentation/hwmon/adt7475
@@ -79,6 +79,18 @@ ADT7490:
 * 2 GPIO pins (not implemented)
 * system acoustics optimizations (not implemented)

+Sysfs Mapping
+-------------
+
+     ADT7490      ADT7476      ADT7475      ADT7473
+     -------      -------      -------      -------
+in0  2.5VIN (22)  2.5VIN (22)  -            -
+in1  VCCP (23)    VCCP (23)    VCCP (14)    VCCP (14)
+in2  VCC (4)      VCC (4)      VCC (4)      VCC (3)
+in3  5VIN (20)    5VIN (20)
+in4  12VIN (21)   12VIN (21)
+in5  VTT (8)
+
Special Features
----------------
diff --git a/Documentation/hwmon/hwmon-kernel-api.txt b/Documentation/hwmon/hwmon-kernel-api.txt
index eb7a78aebb38..8bdefb41be30 100644
--- a/Documentation/hwmon/hwmon-kernel-api.txt
+++ b/Documentation/hwmon/hwmon-kernel-api.txt
@@ -299,17 +299,25 @@ functions is used.
The header file linux/hwmon-sysfs.h provides a number of useful macros to
declare and use hardware monitoring sysfs attributes.

-In many cases, you can use the exsting define DEVICE_ATTR to declare such
-attributes. This is feasible if an attribute has no additional context. However,
-in many cases there will be additional information such as a sensor index which
-will need to be passed to the sysfs attribute handling function.
+In many cases, you can use the existing define DEVICE_ATTR or its variants
+DEVICE_ATTR_{RW,RO,WO} to declare such attributes. This is feasible if an
+attribute has no additional context.
However, in many cases there will be +additional information such as a sensor index which will need to be passed +to the sysfs attribute handling function. SENSOR_DEVICE_ATTR and SENSOR_DEVICE_ATTR_2 can be used to define attributes which need such additional context information. SENSOR_DEVICE_ATTR requires one additional argument, SENSOR_DEVICE_ATTR_2 requires two. -SENSOR_DEVICE_ATTR defines a struct sensor_device_attribute variable. -This structure has the following fields. +Simplified variants of SENSOR_DEVICE_ATTR and SENSOR_DEVICE_ATTR_2 are available +and should be used if standard attribute permissions and function names are +feasible. Standard permissions are 0644 for SENSOR_DEVICE_ATTR[_2]_RW, +0444 for SENSOR_DEVICE_ATTR[_2]_RO, and 0200 for SENSOR_DEVICE_ATTR[_2]_WO. +Standard functions, similar to DEVICE_ATTR_{RW,RO,WO}, have _show and _store +appended to the provided function name. + +SENSOR_DEVICE_ATTR and its variants define a struct sensor_device_attribute +variable. This structure has the following fields. struct sensor_device_attribute { struct device_attribute dev_attr; @@ -320,8 +328,8 @@ You can use to_sensor_dev_attr to get the pointer to this structure from the attribute read or write function. Its parameter is the device to which the attribute is attached. -SENSOR_DEVICE_ATTR_2 defines a struct sensor_device_attribute_2 variable, -which is defined as follows. +SENSOR_DEVICE_ATTR_2 and its variants define a struct sensor_device_attribute_2 +variable, which is defined as follows. struct sensor_device_attribute_2 { struct device_attribute dev_attr; diff --git a/Documentation/hwmon/ina2xx b/Documentation/hwmon/ina2xx index b8df81f6d6bc..0f36c021192d 100644 --- a/Documentation/hwmon/ina2xx +++ b/Documentation/hwmon/ina2xx @@ -62,3 +62,18 @@ bus and shunt voltage conversion times multiplied by the averaging rate. We don't touch the conversion times and only modify the number of averages. The lower limit of the update_interval is 2 ms, the upper limit is 2253 ms. The actual programmed interval may vary from the desired value. + +General sysfs entries +------------- + +in0_input Shunt voltage(mV) channel +in1_input Bus voltage(mV) channel +curr1_input Current(mA) measurement channel +power1_input Power(uW) measurement channel +shunt_resistor Shunt resistance(uOhm) channel + +Sysfs entries for ina226, ina230 and ina231 only +------------- + +update_interval data conversion time; affects number of samples used + to average results for shunt and bus voltages. 
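To make the SENSOR_DEVICE_ATTR_RO convention from the hwmon-kernel-api.txt
hunk above concrete, here is a minimal sketch; struct my_chip_data, its
temp[] array and the surrounding driver are hypothetical::

    static ssize_t temp_show(struct device *dev,
                             struct device_attribute *devattr, char *buf)
    {
            struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
            struct my_chip_data *data = dev_get_drvdata(dev);

            return sprintf(buf, "%d\n", data->temp[attr->index]);
    }

    /* Declares 0444 attributes; per the naming rule above, the show
     * callback resolves to temp_show(), with index 0 and 1. */
    static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0);
    static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, 1);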
diff --git a/Documentation/hwmon/lm75 b/Documentation/hwmon/lm75 index 2f1120f88c16..010583608f12 100644 --- a/Documentation/hwmon/lm75 +++ b/Documentation/hwmon/lm75 @@ -42,6 +42,11 @@ Supported chips: Addresses scanned: none Datasheet: Publicly available at the ST website http://www.st.com/internet/analog/product/121769.jsp + * ST Microelectronics STLM75 + Prefix: 'stlm75' + Addresses scanned: none + Datasheet: Publicly available at the ST website + https://www.st.com/resource/en/datasheet/stlm75.pdf * Texas Instruments TMP100, TMP101, TMP105, TMP112, TMP75, TMP75C, TMP175, TMP275 Prefixes: 'tmp100', 'tmp101', 'tmp105', 'tmp112', 'tmp175', 'tmp75', 'tmp75c', 'tmp275' Addresses scanned: none diff --git a/Documentation/hwmon/occ b/Documentation/hwmon/occ new file mode 100644 index 000000000000..e787596e03fe --- /dev/null +++ b/Documentation/hwmon/occ @@ -0,0 +1,112 @@ +Kernel driver occ-hwmon +======================= + +Supported chips: + * POWER8 + * POWER9 + +Author: Eddie James + +Description +----------- + +This driver supports hardware monitoring for the On-Chip Controller (OCC) +embedded on POWER processors. The OCC is a device that collects and aggregates +sensor data from the processor and the system. The OCC can provide the raw +sensor data as well as perform thermal and power management on the system. + +The P8 version of this driver is a client driver of I2C. It may be probed +manually if an "ibm,p8-occ-hwmon" compatible device is found under the +appropriate I2C bus node in the device-tree. + +The P9 version of this driver is a client driver of the FSI-based OCC driver. +It will be probed automatically by the FSI-based OCC driver. + +Sysfs entries +------------- + +The following attributes are supported. All attributes are read-only unless +specified. + +The OCC sensor ID is an integer that represents the unique identifier of the +sensor with respect to the OCC. For example, a temperature sensor for the third +DIMM slot in the system may have a sensor ID of 7. This mapping is unavailable +to the device driver, which must therefore export the sensor ID as-is. + +Some entries are only present with certain OCC sensor versions or only on +certain OCCs in the system. The version number is not exported to the user +but can be inferred. + +temp[1-n]_label OCC sensor ID. +[with temperature sensor version 1] + temp[1-n]_input Measured temperature of the component in millidegrees + Celsius. +[with temperature sensor version >= 2] + temp[1-n]_type The FRU (Field Replaceable Unit) type + (represented by an integer) for the component + that this sensor measures. + temp[1-n]_fault Temperature sensor fault boolean; 1 to indicate + that a fault is present or 0 to indicate that + no fault is present. + [with type == 3 (FRU type is VRM)] + temp[1-n]_alarm VRM temperature alarm boolean; 1 to indicate + alarm, 0 to indicate no alarm + [else] + temp[1-n]_input Measured temperature of the component in + millidegrees Celsius. + +freq[1-n]_label OCC sensor ID. +freq[1-n]_input Measured frequency of the component in MHz. + +power[1-n]_input Latest measured power reading of the component in + microwatts. +power[1-n]_average Average power of the component in microwatts. +power[1-n]_average_interval The amount of time over which the power average + was taken in microseconds. +[with power sensor version < 2] + power[1-n]_label OCC sensor ID. +[with power sensor version >= 2] + power[1-n]_label OCC sensor ID + function ID + channel in the form + of a string, delimited by underscores, i.e. "0_15_1". 
+ Both the function ID and channel are integers that + further identify the power sensor. +[with power sensor version 0xa0] + power[1-n]_label OCC sensor ID + sensor type in the form of a string, + delimited by an underscore, i.e. "0_system". Sensor + type will be one of "system", "proc", "vdd" or "vdn". + For this sensor version, OCC sensor ID will be the same + for all power sensors. +[present only on "master" OCC; represents the whole system power; only one of + this type of power sensor will be present] + power[1-n]_label "system" + power[1-n]_input Latest system output power in microwatts. + power[1-n]_cap Current system power cap in microwatts. + power[1-n]_cap_not_redundant System power cap in microwatts when + there is not redundant power. + power[1-n]_cap_max Maximum power cap that the OCC can enforce in + microwatts. + power[1-n]_cap_min Minimum power cap that the OCC can enforce in + microwatts. + power[1-n]_cap_user The power cap set by the user, in microwatts. + This attribute will return 0 if no user power + cap has been set. This attribute is read-write, + but writing any precision below watts will be + ignored, i.e. requesting a power cap of + 500900000 microwatts will result in a power cap + request of 500 watts. + [with caps sensor version > 1] + power[1-n]_cap_user_source Indicates how the user power cap was + set. This is an integer that maps to + system or firmware components that can + set the user power cap. + +The following "extn" sensors are exported as a way for the OCC to provide data +that doesn't fit anywhere else. The meaning of these sensors is entirely +dependent on their data, and cannot be statically defined. + +extn[1-n]_label ASCII ID or OCC sensor ID. +extn[1-n]_flags This is one byte hexadecimal value. Bit 7 indicates the + type of the label attribute; 1 for sensor ID, 0 for + ASCII ID. Other bits are reserved. +extn[1-n]_input 6 bytes of hexadecimal data, with a meaning defined by + the sensor ID. diff --git a/Documentation/kbuild/kbuild.txt b/Documentation/kbuild/kbuild.txt index 8390c360d4b3..c9e3d93e7a89 100644 --- a/Documentation/kbuild/kbuild.txt +++ b/Documentation/kbuild/kbuild.txt @@ -81,12 +81,7 @@ KBUILD_EXTMOD -------------------------------------------------- Set the directory to look for the kernel source when building external modules. -The directory can be specified in several ways: -1) Use "M=..." on the command line -2) Environment variable KBUILD_EXTMOD -3) Environment variable SUBDIRS -The possibilities are listed in the order they take precedence. -Using "M=..." will always override the others. +Setting "M=..." takes precedence over KBUILD_EXTMOD. KBUILD_OUTPUT -------------------------------------------------- diff --git a/Documentation/kobject.txt b/Documentation/kobject.txt index fc9485d79061..ff4c25098119 100644 --- a/Documentation/kobject.txt +++ b/Documentation/kobject.txt @@ -279,10 +279,14 @@ such a method has a form like:: One important point cannot be overstated: every kobject must have a release() method, and the kobject must persist (in a consistent state) until that method is called. If these constraints are not met, the code is -flawed. Note that the kernel will warn you if you forget to provide a +flawed. Note that the kernel will warn you if you forget to provide a release() method. Do not try to get rid of this warning by providing an -"empty" release function; you will be mocked mercilessly by the kobject -maintainer if you attempt this. +"empty" release function. 
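+
+A correct release() method frees the object that embeds the kobject, as
+in this minimal sketch (struct my_obj is a hypothetical example type)::
+
+    struct my_obj {
+            int value;
+            struct kobject kobj;
+    };
+
+    static void my_obj_release(struct kobject *kobj)
+    {
+            struct my_obj *obj = container_of(kobj, struct my_obj, kobj);
+
+            kfree(obj);
+    }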
+ +If all your cleanup function needs to do is call kfree(), then you must +create a wrapper function which uses container_of() to upcast to the correct +type (as shown in the example above) and then calls kfree() on the overall +structure. Note, the name of the kobject is available in the release function, but it must NOT be changed within this callback. Otherwise there will be a memory diff --git a/Documentation/leds/leds-class.txt b/Documentation/leds/leds-class.txt index 836cb16d6f09..8b39cc6b03ee 100644 --- a/Documentation/leds/leds-class.txt +++ b/Documentation/leds/leds-class.txt @@ -15,7 +15,7 @@ existing subsystems with minimal additional code. Examples are the disk-activity nand-disk and sharpsl-charge triggers. With led triggers disabled, the code optimises away. -Complex triggers whilst available to all LEDs have LED specific +Complex triggers while available to all LEDs have LED specific parameters and work on a per LED basis. The timer trigger is an example. The timer trigger will periodically change the LED brightness between LED_OFF and the current brightness setting. The "on" and "off" time can diff --git a/Documentation/media/uapi/v4l/extended-controls.rst b/Documentation/media/uapi/v4l/extended-controls.rst index c471408d9bf9..286a2dd7ec36 100644 --- a/Documentation/media/uapi/v4l/extended-controls.rst +++ b/Documentation/media/uapi/v4l/extended-controls.rst @@ -4003,7 +4003,7 @@ demodulator. It receives radio frequency (RF) from the antenna and converts that received signal to lower intermediate frequency (IF) or baseband frequency (BB). Tuners that could do baseband output are often called Zero-IF tuners. Older tuners were typically simple PLL tuners -inside a metal box, whilst newer ones are highly integrated chips +inside a metal box, while newer ones are highly integrated chips without a metal box "silicon tuners". These controls are mostly applicable for new feature rich silicon tuners, just because older tuners does not have much adjustable features. diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt index c1d913944ad8..1c22b21ae922 100644 --- a/Documentation/memory-barriers.txt +++ b/Documentation/memory-barriers.txt @@ -587,7 +587,7 @@ leading to the following situation: (Q == &B) and (D == 2) ???? -Whilst this may seem like a failure of coherency or causality maintenance, it +While this may seem like a failure of coherency or causality maintenance, it isn't, and this behaviour can be observed on certain real CPUs (such as the DEC Alpha). @@ -2008,7 +2008,7 @@ for each construct. These operations all imply certain barriers: Certain locking variants of the ACQUIRE operation may fail, either due to being unable to get the lock immediately, or due to receiving an unblocked - signal whilst asleep waiting for the lock to become available. Failed + signal while asleep waiting for the lock to become available. Failed locks do not imply any sort of barrier. [!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only @@ -2508,7 +2508,7 @@ CPU, that CPU's dependency ordering logic will take care of everything else. ATOMIC OPERATIONS ----------------- -Whilst they are technically interprocessor interaction considerations, atomic +While they are technically interprocessor interaction considerations, atomic operations are noted specially as some of them imply full memory barriers and some don't, but they're very heavily relied on as a group throughout the kernel. @@ -2531,7 +2531,7 @@ the device to malfunction. 
Inside of the Linux kernel, I/O should be done through the appropriate accessor routines - such as inb() or writel() - which know how to make such accesses -appropriately sequential. Whilst this, for the most part, renders the explicit +appropriately sequential. While this, for the most part, renders the explicit use of memory barriers unnecessary, there are a couple of situations where they might be needed: @@ -2555,7 +2555,7 @@ access the device. This may be alleviated - at least in part - by disabling local interrupts (a form of locking), such that the critical operations are all contained within -the interrupt-disabled section in the driver. Whilst the driver's interrupt +the interrupt-disabled section in the driver. While the driver's interrupt routine is executing, the driver's core may not run on the same CPU, and its interrupt is not permitted to happen again until the current interrupt has been handled, thus the interrupt handler does not need to lock against that. @@ -2763,7 +2763,7 @@ CACHE COHERENCY Life isn't quite as simple as it may appear above, however: for while the caches are expected to be coherent, there's no guarantee that that coherency -will be ordered. This means that whilst changes made on one CPU will +will be ordered. This means that while changes made on one CPU will eventually become visible on all CPUs, there's no guarantee that they will become apparent in the same order on those other CPUs. @@ -2799,7 +2799,7 @@ Imagine the system has the following properties: (*) an even-numbered cache line may be in cache B, cache D or it may still be resident in memory; - (*) whilst the CPU core is interrogating one cache, the other cache may be + (*) while the CPU core is interrogating one cache, the other cache may be making use of the bus to access the rest of the system - perhaps to displace a dirty cacheline or to do a speculative load; @@ -2835,7 +2835,7 @@ now imagine that the second CPU wants to read those values: x = *q; The above pair of reads may then fail to happen in the expected order, as the -cacheline holding p may get updated in one of the second CPU's caches whilst +cacheline holding p may get updated in one of the second CPU's caches while the update to the cacheline holding v is delayed in the other of the second CPU's caches by some other cache event: @@ -2855,7 +2855,7 @@ CPU's caches by some other cache event: -Basically, whilst both cachelines will be updated on CPU 2 eventually, there's +Basically, while both cachelines will be updated on CPU 2 eventually, there's no guarantee that, without intervention, the order of update will be the same as that committed on CPU 1. @@ -2885,7 +2885,7 @@ coherency queue before processing any further requests: This sort of problem can be encountered on DEC Alpha processors as they have a split cache that improves performance by making better use of the data bus. -Whilst most CPUs do imply a data dependency barrier on the read when a memory +While most CPUs do imply a data dependency barrier on the read when a memory access depends on a read, not all do, so it may not be relied on. 
Other CPUs may also have split caches, but must coordinate between the various @@ -2974,7 +2974,7 @@ assumption doesn't hold because: thus cutting down on transaction setup costs (memory and PCI devices may both be able to do this); and - (*) the CPU's data cache may affect the ordering, and whilst cache-coherency + (*) the CPU's data cache may affect the ordering, and while cache-coherency mechanisms may alleviate this - once the store has actually hit the cache - there's no guarantee that the coherency management will be propagated in order to other CPUs. diff --git a/Documentation/networking/3c509.txt b/Documentation/networking/device_drivers/3com/3c509.txt similarity index 100% rename from Documentation/networking/3c509.txt rename to Documentation/networking/device_drivers/3com/3c509.txt diff --git a/Documentation/networking/vortex.txt b/Documentation/networking/device_drivers/3com/vortex.txt similarity index 99% rename from Documentation/networking/vortex.txt rename to Documentation/networking/device_drivers/3com/vortex.txt index ad3dead052a4..587f3fcfbcae 100644 --- a/Documentation/networking/vortex.txt +++ b/Documentation/networking/device_drivers/3com/vortex.txt @@ -1,4 +1,4 @@ -Documentation/networking/vortex.txt +Documentation/networking/device_drivers/3com/vortex.txt Andrew Morton 30 April 2000 diff --git a/Documentation/networking/ena.txt b/Documentation/networking/device_drivers/amazon/ena.txt similarity index 100% rename from Documentation/networking/ena.txt rename to Documentation/networking/device_drivers/amazon/ena.txt diff --git a/Documentation/networking/cxgb.txt b/Documentation/networking/device_drivers/chelsio/cxgb.txt similarity index 100% rename from Documentation/networking/cxgb.txt rename to Documentation/networking/device_drivers/chelsio/cxgb.txt diff --git a/Documentation/networking/cs89x0.txt b/Documentation/networking/device_drivers/cirrus/cs89x0.txt similarity index 100% rename from Documentation/networking/cs89x0.txt rename to Documentation/networking/device_drivers/cirrus/cs89x0.txt diff --git a/Documentation/networking/dm9000.txt b/Documentation/networking/device_drivers/davicom/dm9000.txt similarity index 100% rename from Documentation/networking/dm9000.txt rename to Documentation/networking/device_drivers/davicom/dm9000.txt diff --git a/Documentation/networking/de4x5.txt b/Documentation/networking/device_drivers/dec/de4x5.txt similarity index 99% rename from Documentation/networking/de4x5.txt rename to Documentation/networking/device_drivers/dec/de4x5.txt index c8e4ca9b2c3e..452aac58341d 100644 --- a/Documentation/networking/de4x5.txt +++ b/Documentation/networking/device_drivers/dec/de4x5.txt @@ -84,7 +84,7 @@ Automedia detection is included so that in principle you can disconnect from, e.g. TP, reconnect to BNC and things will still work (after a - pause whilst the driver figures out where its media went). My tests + pause while the driver figures out where its media went). My tests using ping showed that it appears to work.... By default, the driver will now autodetect any DECchip based card. 
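The split-cache ordering discussed in the memory-barriers.txt hunks above is
the classic pointer-publication pattern. A minimal sketch using the kernel's
barrier primitives (do_something() is a placeholder)::

    int shared_data;
    int *published_ptr;

    /* CPU 1: initialise the data, then publish a pointer to it. */
    void producer(void)
    {
            WRITE_ONCE(shared_data, 42);
            smp_wmb();  /* order the data store before the pointer store */
            WRITE_ONCE(published_ptr, &shared_data);
    }

    /* CPU 2: READ_ONCE() supplies the address dependency the text
     * relies on, so *p is guaranteed to observe 42, even on Alpha. */
    void consumer(void)
    {
            int *p = READ_ONCE(published_ptr);

            if (p)
                    do_something(*p);
    }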
diff --git a/Documentation/networking/dmfe.txt b/Documentation/networking/device_drivers/dec/dmfe.txt similarity index 100% rename from Documentation/networking/dmfe.txt rename to Documentation/networking/device_drivers/dec/dmfe.txt diff --git a/Documentation/networking/dl2k.txt b/Documentation/networking/device_drivers/dlink/dl2k.txt similarity index 100% rename from Documentation/networking/dl2k.txt rename to Documentation/networking/device_drivers/dlink/dl2k.txt diff --git a/Documentation/networking/dpaa.txt b/Documentation/networking/device_drivers/freescale/dpaa.txt similarity index 100% rename from Documentation/networking/dpaa.txt rename to Documentation/networking/device_drivers/freescale/dpaa.txt diff --git a/Documentation/networking/dpaa2/dpio-driver.rst b/Documentation/networking/device_drivers/freescale/dpaa2/dpio-driver.rst similarity index 97% rename from Documentation/networking/dpaa2/dpio-driver.rst rename to Documentation/networking/device_drivers/freescale/dpaa2/dpio-driver.rst index 13588104161b..a188466b6698 100644 --- a/Documentation/networking/dpaa2/dpio-driver.rst +++ b/Documentation/networking/device_drivers/freescale/dpaa2/dpio-driver.rst @@ -19,8 +19,8 @@ pool management for network interfaces. This document provides an overview the Linux DPIO driver, its subcomponents, and its APIs. -See Documentation/networking/dpaa2/overview.rst for a general overview of DPAA2 -and the general DPAA2 driver architecture in Linux. +See Documentation/networking/device_drivers/freescale/dpaa2/overview.rst for +a general overview of DPAA2 and the general DPAA2 driver architecture in Linux. Driver Overview --------------- diff --git a/Documentation/networking/dpaa2/ethernet-driver.rst b/Documentation/networking/device_drivers/freescale/dpaa2/ethernet-driver.rst similarity index 98% rename from Documentation/networking/dpaa2/ethernet-driver.rst rename to Documentation/networking/device_drivers/freescale/dpaa2/ethernet-driver.rst index 90ec940749e8..cb4c9a0c5a17 100644 --- a/Documentation/networking/dpaa2/ethernet-driver.rst +++ b/Documentation/networking/device_drivers/freescale/dpaa2/ethernet-driver.rst @@ -33,7 +33,7 @@ hardware resources, like queues, do not have a corresponding MC object and are treated as internal resources of other objects. For a more detailed description of the DPAA2 architecture and its object -abstractions see *Documentation/networking/dpaa2/overview.rst*. +abstractions see *Documentation/networking/device_drivers/freescale/dpaa2/overview.rst*. 
Each Linux net device is built on top of a Datapath Network Interface (DPNI) object and uses Buffer Pools (DPBPs), I/O Portals (DPIOs) and Concentrators diff --git a/Documentation/networking/dpaa2/index.rst b/Documentation/networking/device_drivers/freescale/dpaa2/index.rst similarity index 100% rename from Documentation/networking/dpaa2/index.rst rename to Documentation/networking/device_drivers/freescale/dpaa2/index.rst diff --git a/Documentation/networking/dpaa2/overview.rst b/Documentation/networking/device_drivers/freescale/dpaa2/overview.rst similarity index 100% rename from Documentation/networking/dpaa2/overview.rst rename to Documentation/networking/device_drivers/freescale/dpaa2/overview.rst diff --git a/Documentation/networking/gianfar.txt b/Documentation/networking/device_drivers/freescale/gianfar.txt similarity index 100% rename from Documentation/networking/gianfar.txt rename to Documentation/networking/device_drivers/freescale/gianfar.txt diff --git a/Documentation/networking/e100.rst b/Documentation/networking/device_drivers/intel/e100.rst similarity index 100% rename from Documentation/networking/e100.rst rename to Documentation/networking/device_drivers/intel/e100.rst diff --git a/Documentation/networking/e1000.rst b/Documentation/networking/device_drivers/intel/e1000.rst similarity index 100% rename from Documentation/networking/e1000.rst rename to Documentation/networking/device_drivers/intel/e1000.rst diff --git a/Documentation/networking/e1000e.rst b/Documentation/networking/device_drivers/intel/e1000e.rst similarity index 100% rename from Documentation/networking/e1000e.rst rename to Documentation/networking/device_drivers/intel/e1000e.rst diff --git a/Documentation/networking/fm10k.rst b/Documentation/networking/device_drivers/intel/fm10k.rst similarity index 100% rename from Documentation/networking/fm10k.rst rename to Documentation/networking/device_drivers/intel/fm10k.rst diff --git a/Documentation/networking/i40e.rst b/Documentation/networking/device_drivers/intel/i40e.rst similarity index 100% rename from Documentation/networking/i40e.rst rename to Documentation/networking/device_drivers/intel/i40e.rst diff --git a/Documentation/networking/iavf.rst b/Documentation/networking/device_drivers/intel/iavf.rst similarity index 100% rename from Documentation/networking/iavf.rst rename to Documentation/networking/device_drivers/intel/iavf.rst diff --git a/Documentation/networking/ice.rst b/Documentation/networking/device_drivers/intel/ice.rst similarity index 100% rename from Documentation/networking/ice.rst rename to Documentation/networking/device_drivers/intel/ice.rst diff --git a/Documentation/networking/igb.rst b/Documentation/networking/device_drivers/intel/igb.rst similarity index 87% rename from Documentation/networking/igb.rst rename to Documentation/networking/device_drivers/intel/igb.rst index ba16b86d5593..e87a4a72ea2d 100644 --- a/Documentation/networking/igb.rst +++ b/Documentation/networking/device_drivers/intel/igb.rst @@ -177,6 +177,25 @@ rate limit using the IProute2 tool. Download the latest version of the IProute2 tool from Sourceforge if your version does not have all the features you require. +Credit Based Shaper (Qav Mode) +------------------------------ +When enabling the CBS qdisc in the hardware offload mode, traffic shaping using +the CBS (described in the IEEE 802.1Q-2018 Section 8.6.8.2 and discussed in the +Annex L) algorithm will run in the i210 controller, so it's more accurate and +uses less CPU. 
+
+When using offloaded CBS, and the traffic rate obeys the configured rate
+(doesn't go above it), CBS should have little to no effect on the latency.
+
+The offloaded version of the algorithm has some limits, caused by how the idle
+slope is expressed in the adapter's registers. It can only represent idle slopes
+in 16.38431 kbps units, which means that if an idle slope of 2576 kbps is
+requested, the controller will be configured to use an idle slope of ~2589 kbps,
+because the driver rounds the value up (2576 / 16.38431 is about 157.2 units,
+which is rounded up to 158 units, i.e. 158 * 16.38431 = ~2589 kbps). For more
+details, see the comments on :c:func:`igb_config_tx_modes()`.
+
+NOTE: This feature is exclusive to i210 models.
+
 Support
 =======
diff --git a/Documentation/networking/igbvf.rst b/Documentation/networking/device_drivers/intel/igbvf.rst
similarity index 100%
rename from Documentation/networking/igbvf.rst
rename to Documentation/networking/device_drivers/intel/igbvf.rst
diff --git a/Documentation/networking/README.ipw2100 b/Documentation/networking/device_drivers/intel/ipw2100.txt
similarity index 100%
rename from Documentation/networking/README.ipw2100
rename to Documentation/networking/device_drivers/intel/ipw2100.txt
diff --git a/Documentation/networking/README.ipw2200 b/Documentation/networking/device_drivers/intel/ipw2200.txt
similarity index 100%
rename from Documentation/networking/README.ipw2200
rename to Documentation/networking/device_drivers/intel/ipw2200.txt
diff --git a/Documentation/networking/ixgb.rst b/Documentation/networking/device_drivers/intel/ixgb.rst
similarity index 100%
rename from Documentation/networking/ixgb.rst
rename to Documentation/networking/device_drivers/intel/ixgb.rst
diff --git a/Documentation/networking/ixgbe.rst b/Documentation/networking/device_drivers/intel/ixgbe.rst
similarity index 97%
rename from Documentation/networking/ixgbe.rst
rename to Documentation/networking/device_drivers/intel/ixgbe.rst
index 725fc697fd8f..86d887a63606 100644
--- a/Documentation/networking/ixgbe.rst
+++ b/Documentation/networking/device_drivers/intel/ixgbe.rst
@@ -501,6 +501,19 @@ NOTE: This feature can be disabled for a specific Virtual Function (VF)::
 
   ip link set vf spoofchk {off|on}
 
+IPsec Offload
+-------------
+The ixgbe driver supports IPsec Hardware Offload. When creating Security
+Associations with "ip xfrm ..." the 'offload' tag option can be used to
+register the IPsec SA with the driver in order to get higher throughput in
+the secure communications.
+ +The offload is also supported for ixgbe's VFs, but the VF must be set as +'trusted' and the support must be enabled with:: + + ethtool --set-priv-flags eth vf-ipsec on + ip link set eth vf trust on + Known Issues/Troubleshooting ============================ diff --git a/Documentation/networking/ixgbevf.rst b/Documentation/networking/device_drivers/intel/ixgbevf.rst similarity index 100% rename from Documentation/networking/ixgbevf.rst rename to Documentation/networking/device_drivers/intel/ixgbevf.rst diff --git a/Documentation/networking/netvsc.txt b/Documentation/networking/device_drivers/microsoft/netvsc.txt similarity index 100% rename from Documentation/networking/netvsc.txt rename to Documentation/networking/device_drivers/microsoft/netvsc.txt diff --git a/Documentation/networking/s2io.txt b/Documentation/networking/device_drivers/neterion/s2io.txt similarity index 100% rename from Documentation/networking/s2io.txt rename to Documentation/networking/device_drivers/neterion/s2io.txt diff --git a/Documentation/networking/vxge.txt b/Documentation/networking/device_drivers/neterion/vxge.txt similarity index 100% rename from Documentation/networking/vxge.txt rename to Documentation/networking/device_drivers/neterion/vxge.txt diff --git a/Documentation/networking/LICENSE.qla3xxx b/Documentation/networking/device_drivers/qlogic/LICENSE.qla3xxx similarity index 100% rename from Documentation/networking/LICENSE.qla3xxx rename to Documentation/networking/device_drivers/qlogic/LICENSE.qla3xxx diff --git a/Documentation/networking/LICENSE.qlcnic b/Documentation/networking/device_drivers/qlogic/LICENSE.qlcnic similarity index 100% rename from Documentation/networking/LICENSE.qlcnic rename to Documentation/networking/device_drivers/qlogic/LICENSE.qlcnic diff --git a/Documentation/networking/LICENSE.qlge b/Documentation/networking/device_drivers/qlogic/LICENSE.qlge similarity index 100% rename from Documentation/networking/LICENSE.qlge rename to Documentation/networking/device_drivers/qlogic/LICENSE.qlge diff --git a/Documentation/networking/rmnet.txt b/Documentation/networking/device_drivers/qualcomm/rmnet.txt similarity index 100% rename from Documentation/networking/rmnet.txt rename to Documentation/networking/device_drivers/qualcomm/rmnet.txt diff --git a/Documentation/networking/README.sb1000 b/Documentation/networking/device_drivers/sb1000.txt similarity index 100% rename from Documentation/networking/README.sb1000 rename to Documentation/networking/device_drivers/sb1000.txt diff --git a/Documentation/networking/smc9.txt b/Documentation/networking/device_drivers/smsc/smc9.txt similarity index 100% rename from Documentation/networking/smc9.txt rename to Documentation/networking/device_drivers/smsc/smc9.txt diff --git a/Documentation/networking/stmmac.txt b/Documentation/networking/device_drivers/stmicro/stmmac.txt similarity index 100% rename from Documentation/networking/stmmac.txt rename to Documentation/networking/device_drivers/stmicro/stmmac.txt diff --git a/Documentation/networking/ti-cpsw.txt b/Documentation/networking/device_drivers/ti/cpsw.txt similarity index 100% rename from Documentation/networking/ti-cpsw.txt rename to Documentation/networking/device_drivers/ti/cpsw.txt diff --git a/Documentation/networking/tlan.txt b/Documentation/networking/device_drivers/ti/tlan.txt similarity index 100% rename from Documentation/networking/tlan.txt rename to Documentation/networking/device_drivers/ti/tlan.txt diff --git a/Documentation/networking/spider_net.txt 
b/Documentation/networking/device_drivers/toshiba/spider_net.txt similarity index 100% rename from Documentation/networking/spider_net.txt rename to Documentation/networking/device_drivers/toshiba/spider_net.txt diff --git a/Documentation/networking/devlink-params.txt b/Documentation/networking/devlink-params.txt index ae444ffe73ac..2d26434ddcf8 100644 --- a/Documentation/networking/devlink-params.txt +++ b/Documentation/networking/devlink-params.txt @@ -40,3 +40,12 @@ msix_vec_per_pf_min [DEVICE, GENERIC] for the device initialization. Value is same across all physical functions (PFs) in the device. Type: u32 + +fw_load_policy [DEVICE, GENERIC] + Controls the device's firmware loading policy. + Valid values: + * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DRIVER (0) + Load firmware version preferred by the driver. + * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_FLASH (1) + Load firmware currently stored in flash. + Type: u8 diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst index bd89dae8d578..6a47629ef8ed 100644 --- a/Documentation/networking/index.rst +++ b/Documentation/networking/index.rst @@ -31,6 +31,7 @@ Contents: net_failover alias bridge + snmp_counter .. only:: subproject diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt index 32b21571adfe..acdfb5d2bcaa 100644 --- a/Documentation/networking/ip-sysctl.txt +++ b/Documentation/networking/ip-sysctl.txt @@ -108,8 +108,8 @@ neigh/default/gc_thresh2 - INTEGER Default: 512 neigh/default/gc_thresh3 - INTEGER - Maximum number of neighbor entries allowed. Increase this - when using large numbers of interfaces and when communicating + Maximum number of non-PERMANENT neighbor entries allowed. Increase + this when using large numbers of interfaces and when communicating with large numbers of directly-connected peers. Default: 1024 @@ -370,6 +370,7 @@ tcp_l3mdev_accept - BOOLEAN derived from the listen socket to be bound to the L3 domain in which the packets originated. Only valid when the kernel was compiled with CONFIG_NET_L3_MASTER_DEV. + Default: 0 (disabled) tcp_low_latency - BOOLEAN This is a legacy option, it has no effect anymore. @@ -758,7 +759,7 @@ tcp_limit_output_bytes - INTEGER flows, for typical pfifo_fast qdiscs. tcp_limit_output_bytes limits the number of bytes on qdisc or device to reduce artificial RTT/cwnd and reduce bufferbloat. - Default: 262144 + Default: 1048576 (16 * 65536) tcp_challenge_ack_limit - INTEGER Limits number of Challenge ACK sent per second, as recommended @@ -773,6 +774,7 @@ udp_l3mdev_accept - BOOLEAN being received regardless of the L3 domain in which they originated. Only valid when the kernel was compiled with CONFIG_NET_L3_MASTER_DEV. + Default: 0 (disabled) udp_mem - vector of 3 INTEGERs: min, pressure, max Number of pages allowed for queueing by all UDP sockets. @@ -799,6 +801,16 @@ udp_wmem_min - INTEGER total pages of UDP sockets exceed udp_mem pressure. The unit is byte. Default: 4K +RAW variables: + +raw_l3mdev_accept - BOOLEAN + Enabling this option allows a "global" bound socket to work + across L3 master domains (e.g., VRFs) with packets capable of + being received regardless of the L3 domain in which they + originated. Only valid when the kernel was compiled with + CONFIG_NET_L3_MASTER_DEV. 
+ Default: 1 (enabled) + CIPSOv4 Variables: cipso_cache_enable - BOOLEAN diff --git a/Documentation/networking/netdev-features.txt b/Documentation/networking/netdev-features.txt index c4a54c162547..58dd1c1e3c65 100644 --- a/Documentation/networking/netdev-features.txt +++ b/Documentation/networking/netdev-features.txt @@ -115,7 +115,7 @@ set, be it TCPv4 (when NETIF_F_TSO is enabled) or TCPv6 (NETIF_F_TSO6). * Transmit UDP segmentation offload -NETIF_F_GSO_UDP_GSO_L4 accepts a single UDP header with a payload that exceeds +NETIF_F_GSO_UDP_L4 accepts a single UDP header with a payload that exceeds gso_size. On segmentation, it segments the payload on gso_size boundaries and replicates the network and UDP headers (fixing up the last one if less than gso_size). diff --git a/Documentation/networking/nf_conntrack-sysctl.txt b/Documentation/networking/nf_conntrack-sysctl.txt index 1669dc2419fd..f75c2ce6e136 100644 --- a/Documentation/networking/nf_conntrack-sysctl.txt +++ b/Documentation/networking/nf_conntrack-sysctl.txt @@ -157,7 +157,16 @@ nf_conntrack_udp_timeout - INTEGER (seconds) default 30 nf_conntrack_udp_timeout_stream - INTEGER (seconds) - default 180 + default 120 This extended timeout will be used in case there is an UDP stream detected. + +nf_conntrack_gre_timeout - INTEGER (seconds) + default 30 + +nf_conntrack_gre_timeout_stream - INTEGER (seconds) + default 180 + + This extended timeout will be used in case there is an GRE stream + detected. diff --git a/Documentation/networking/rxrpc.txt b/Documentation/networking/rxrpc.txt index 89f1302d593a..c9d052e0cf51 100644 --- a/Documentation/networking/rxrpc.txt +++ b/Documentation/networking/rxrpc.txt @@ -661,7 +661,7 @@ A server would be set up to accept operations in the following manner: setsockopt(server, SOL_RXRPC, RXRPC_SECURITY_KEYRING, "AFSkeys", 7); The keyring can be manipulated after it has been given to the socket. This - permits the server to add more keys, replace keys, etc. whilst it is live. + permits the server to add more keys, replace keys, etc. while it is live. (3) A local address must then be bound: @@ -1032,7 +1032,7 @@ The kernel interface functions are as follows: struct sockaddr_rxrpc *srx, struct key *key); - This attempts to partially reinitialise a call and submit it again whilst + This attempts to partially reinitialise a call and submit it again while reusing the original call's Tx queue to avoid the need to repackage and re-encrypt the data to be sent. call indicates the call to retry, srx the new address to send it to and key the encryption key to use for signing or @@ -1066,7 +1066,7 @@ The kernel interface functions are as follows: alive after waiting for a suitable interval. This allows the caller to work out if the server is still contactable and - if the call is still alive on the server whilst waiting for the server to + if the call is still alive on the server while waiting for the server to process a client operation. The second function causes a ping ACK to be transmitted to try to provoke @@ -1149,14 +1149,14 @@ adjusted through sysctls in /proc/net/rxrpc/: (*) connection_expiry The amount of time in seconds after a connection was last used before we - remove it from the connection list. Whilst a connection is in existence, + remove it from the connection list. While a connection is in existence, it serves as a placeholder for negotiated security; when it is deleted, the security must be renegotiated. 
(*) transport_expiry The amount of time in seconds after a transport was last used before we - remove it from the transport list. Whilst a transport is in existence, it + remove it from the transport list. While a transport is in existence, it serves to anchor the peer data and keeps the connection ID counter. (*) rxrpc_rx_window_size diff --git a/Documentation/networking/snmp_counter.rst b/Documentation/networking/snmp_counter.rst new file mode 100644 index 000000000000..f8eb77ddbd44 --- /dev/null +++ b/Documentation/networking/snmp_counter.rst @@ -0,0 +1,1190 @@ +=========== +SNMP counter +=========== + +This document explains the meaning of SNMP counters. + +General IPv4 counters +==================== +All layer 4 packets and ICMP packets will change these counters, but +these counters won't be changed by layer 2 packets (such as STP) or +ARP packets. + +* IpInReceives +Defined in `RFC1213 ipInReceives`_ + +.. _RFC1213 ipInReceives: https://tools.ietf.org/html/rfc1213#page-26 + +The number of packets received by the IP layer. It gets increasing at the +beginning of ip_rcv function, always be updated together with +IpExtInOctets. It will be increased even if the packet is dropped +later (e.g. due to the IP header is invalid or the checksum is wrong +and so on). It indicates the number of aggregated segments after +GRO/LRO. + +* IpInDelivers +Defined in `RFC1213 ipInDelivers`_ + +.. _RFC1213 ipInDelivers: https://tools.ietf.org/html/rfc1213#page-28 + +The number of packets delivers to the upper layer protocols. E.g. TCP, UDP, +ICMP and so on. If no one listens on a raw socket, only kernel +supported protocols will be delivered, if someone listens on the raw +socket, all valid IP packets will be delivered. + +* IpOutRequests +Defined in `RFC1213 ipOutRequests`_ + +.. _RFC1213 ipOutRequests: https://tools.ietf.org/html/rfc1213#page-28 + +The number of packets sent via IP layer, for both single cast and +multicast packets, and would always be updated together with +IpExtOutOctets. + +* IpExtInOctets and IpExtOutOctets +They are Linux kernel extensions, no RFC definitions. Please note, +RFC1213 indeed defines ifInOctets and ifOutOctets, but they +are different things. The ifInOctets and ifOutOctets include the MAC +layer header size but IpExtInOctets and IpExtOutOctets don't, they +only include the IP layer header and the IP layer data. + +* IpExtInNoECTPkts, IpExtInECT1Pkts, IpExtInECT0Pkts, IpExtInCEPkts +They indicate the number of four kinds of ECN IP packets, please refer +`Explicit Congestion Notification`_ for more details. + +.. _Explicit Congestion Notification: https://tools.ietf.org/html/rfc3168#page-6 + +These 4 counters calculate how many packets received per ECN +status. They count the real frame number regardless the LRO/GRO. So +for the same packet, you might find that IpInReceives count 1, but +IpExtInNoECTPkts counts 2 or more. + +* IpInHdrErrors +Defined in `RFC1213 ipInHdrErrors`_. It indicates the packet is +dropped due to the IP header error. It might happen in both IP input +and IP forward paths. + +.. _RFC1213 ipInHdrErrors: https://tools.ietf.org/html/rfc1213#page-27 + +* IpInAddrErrors +Defined in `RFC1213 ipInAddrErrors`_. It will be increased in two +scenarios: (1) The IP address is invalid. (2) The destination IP +address is not a local address and IP forwarding is not enabled + +.. 
_RFC1213 ipInAddrErrors: https://tools.ietf.org/html/rfc1213#page-27 + +* IpExtInNoRoutes +This counter means the packet is dropped when the IP stack receives a +packet and can't find a route for it from the route table. It might +happen when IP forwarding is enabled and the destination IP address is +not a local address and there is no route for the destination IP +address. + +* IpInUnknownProtos +Defined in `RFC1213 ipInUnknownProtos`_. It will be increased if the +layer 4 protocol is unsupported by kernel. If an application is using +raw socket, kernel will always deliver the packet to the raw socket +and this counter won't be increased. + +.. _RFC1213 ipInUnknownProtos: https://tools.ietf.org/html/rfc1213#page-27 + +* IpExtInTruncatedPkts +For IPv4 packet, it means the actual data size is smaller than the +"Total Length" field in the IPv4 header. + +* IpInDiscards +Defined in `RFC1213 ipInDiscards`_. It indicates the packet is dropped +in the IP receiving path and due to kernel internal reasons (e.g. no +enough memory). + +.. _RFC1213 ipInDiscards: https://tools.ietf.org/html/rfc1213#page-28 + +* IpOutDiscards +Defined in `RFC1213 ipOutDiscards`_. It indicates the packet is +dropped in the IP sending path and due to kernel internal reasons. + +.. _RFC1213 ipOutDiscards: https://tools.ietf.org/html/rfc1213#page-28 + +* IpOutNoRoutes +Defined in `RFC1213 ipOutNoRoutes`_. It indicates the packet is +dropped in the IP sending path and no route is found for it. + +.. _RFC1213 ipOutNoRoutes: https://tools.ietf.org/html/rfc1213#page-29 + +ICMP counters +============ +* IcmpInMsgs and IcmpOutMsgs +Defined by `RFC1213 icmpInMsgs`_ and `RFC1213 icmpOutMsgs`_ + +.. _RFC1213 icmpInMsgs: https://tools.ietf.org/html/rfc1213#page-41 +.. _RFC1213 icmpOutMsgs: https://tools.ietf.org/html/rfc1213#page-43 + +As mentioned in the RFC1213, these two counters include errors, they +would be increased even if the ICMP packet has an invalid type. The +ICMP output path will check the header of a raw socket, so the +IcmpOutMsgs would still be updated if the IP header is constructed by +a userspace program. + +* ICMP named types +| These counters include most of common ICMP types, they are: +| IcmpInDestUnreachs: `RFC1213 icmpInDestUnreachs`_ +| IcmpInTimeExcds: `RFC1213 icmpInTimeExcds`_ +| IcmpInParmProbs: `RFC1213 icmpInParmProbs`_ +| IcmpInSrcQuenchs: `RFC1213 icmpInSrcQuenchs`_ +| IcmpInRedirects: `RFC1213 icmpInRedirects`_ +| IcmpInEchos: `RFC1213 icmpInEchos`_ +| IcmpInEchoReps: `RFC1213 icmpInEchoReps`_ +| IcmpInTimestamps: `RFC1213 icmpInTimestamps`_ +| IcmpInTimestampReps: `RFC1213 icmpInTimestampReps`_ +| IcmpInAddrMasks: `RFC1213 icmpInAddrMasks`_ +| IcmpInAddrMaskReps: `RFC1213 icmpInAddrMaskReps`_ +| IcmpOutDestUnreachs: `RFC1213 icmpOutDestUnreachs`_ +| IcmpOutTimeExcds: `RFC1213 icmpOutTimeExcds`_ +| IcmpOutParmProbs: `RFC1213 icmpOutParmProbs`_ +| IcmpOutSrcQuenchs: `RFC1213 icmpOutSrcQuenchs`_ +| IcmpOutRedirects: `RFC1213 icmpOutRedirects`_ +| IcmpOutEchos: `RFC1213 icmpOutEchos`_ +| IcmpOutEchoReps: `RFC1213 icmpOutEchoReps`_ +| IcmpOutTimestamps: `RFC1213 icmpOutTimestamps`_ +| IcmpOutTimestampReps: `RFC1213 icmpOutTimestampReps`_ +| IcmpOutAddrMasks: `RFC1213 icmpOutAddrMasks`_ +| IcmpOutAddrMaskReps: `RFC1213 icmpOutAddrMaskReps`_ + +.. _RFC1213 icmpInDestUnreachs: https://tools.ietf.org/html/rfc1213#page-41 +.. _RFC1213 icmpInTimeExcds: https://tools.ietf.org/html/rfc1213#page-41 +.. _RFC1213 icmpInParmProbs: https://tools.ietf.org/html/rfc1213#page-42 +.. 
_RFC1213 icmpInSrcQuenchs: https://tools.ietf.org/html/rfc1213#page-42
+.. _RFC1213 icmpInRedirects: https://tools.ietf.org/html/rfc1213#page-42
+.. _RFC1213 icmpInEchos: https://tools.ietf.org/html/rfc1213#page-42
+.. _RFC1213 icmpInEchoReps: https://tools.ietf.org/html/rfc1213#page-42
+.. _RFC1213 icmpInTimestamps: https://tools.ietf.org/html/rfc1213#page-42
+.. _RFC1213 icmpInTimestampReps: https://tools.ietf.org/html/rfc1213#page-43
+.. _RFC1213 icmpInAddrMasks: https://tools.ietf.org/html/rfc1213#page-43
+.. _RFC1213 icmpInAddrMaskReps: https://tools.ietf.org/html/rfc1213#page-43
+
+.. _RFC1213 icmpOutDestUnreachs: https://tools.ietf.org/html/rfc1213#page-44
+.. _RFC1213 icmpOutTimeExcds: https://tools.ietf.org/html/rfc1213#page-44
+.. _RFC1213 icmpOutParmProbs: https://tools.ietf.org/html/rfc1213#page-44
+.. _RFC1213 icmpOutSrcQuenchs: https://tools.ietf.org/html/rfc1213#page-44
+.. _RFC1213 icmpOutRedirects: https://tools.ietf.org/html/rfc1213#page-44
+.. _RFC1213 icmpOutEchos: https://tools.ietf.org/html/rfc1213#page-45
+.. _RFC1213 icmpOutEchoReps: https://tools.ietf.org/html/rfc1213#page-45
+.. _RFC1213 icmpOutTimestamps: https://tools.ietf.org/html/rfc1213#page-45
+.. _RFC1213 icmpOutTimestampReps: https://tools.ietf.org/html/rfc1213#page-45
+.. _RFC1213 icmpOutAddrMasks: https://tools.ietf.org/html/rfc1213#page-45
+.. _RFC1213 icmpOutAddrMaskReps: https://tools.ietf.org/html/rfc1213#page-46
+
+Every ICMP type has two counters: 'In' and 'Out'. E.g., for the ICMP
+Echo packet, they are IcmpInEchos and IcmpOutEchos. Their meanings are
+straightforward. The 'In' counter means the kernel receives such a packet
+and the 'Out' counter means the kernel sends such a packet.
+
+* ICMP numeric types
+They are IcmpMsgInType[N] and IcmpMsgOutType[N]; the [N] indicates the
+ICMP type number. These counters track all kinds of ICMP packets. The
+ICMP type number definitions can be found in the `ICMP parameters`_
+document.
+
+.. _ICMP parameters: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml
+
+For example, if the Linux kernel sends an ICMP Echo packet,
+IcmpMsgOutType8 would increase by 1. And if the kernel gets an ICMP Echo
+Reply packet, IcmpMsgInType0 would increase by 1.
+
+* IcmpInCsumErrors
+This counter indicates the checksum of the ICMP packet is
+wrong. The kernel verifies the checksum after updating IcmpInMsgs and
+before updating IcmpMsgInType[N]. If a packet has a bad checksum,
+IcmpInMsgs would be updated but none of the IcmpMsgInType[N] would be
+updated.
+
+* IcmpInErrors and IcmpOutErrors
+Defined by `RFC1213 icmpInErrors`_ and `RFC1213 icmpOutErrors`_
+
+.. _RFC1213 icmpInErrors: https://tools.ietf.org/html/rfc1213#page-41
+.. _RFC1213 icmpOutErrors: https://tools.ietf.org/html/rfc1213#page-43
+
+When an error occurs in the ICMP packet handler path, these two
+counters would be updated. The receiving packet path uses IcmpInErrors
+and the sending packet path uses IcmpOutErrors. When IcmpInCsumErrors
+is increased, IcmpInErrors would always be increased too.
+
+Relationship of the ICMP counters
+---------------------------------
+The sum of IcmpMsgOutType[N] is always equal to IcmpOutMsgs, as they
+are updated at the same time. The sum of IcmpMsgInType[N] plus
+IcmpInErrors should be equal to or larger than IcmpInMsgs. When the
+kernel receives an ICMP packet, the kernel follows the logic below:
+
+1. increase IcmpInMsgs
+2. if there is any error, update IcmpInErrors and finish the process
+3. update IcmpMsgInType[N]
+4.
handle the packet depending on the type; if there is any error, update
+   IcmpInErrors and finish the process
+
+So if all errors occur in step (2), IcmpInMsgs should be equal to the
+sum of IcmpMsgInType[N] plus IcmpInErrors. If all errors occur in
+step (4), IcmpInMsgs should be equal to the sum of
+IcmpMsgInType[N]. If the errors occur in both step (2) and step (4),
+IcmpInMsgs should be less than the sum of IcmpMsgInType[N] plus
+IcmpInErrors.
+
+General TCP counters
+====================
+* TcpInSegs
+Defined in `RFC1213 tcpInSegs`_
+
+.. _RFC1213 tcpInSegs: https://tools.ietf.org/html/rfc1213#page-48
+
+The number of packets received by the TCP layer. As mentioned in
+RFC1213, it includes the packets received in error, such as checksum
+errors, invalid TCP headers and so on. Only one error won't be included:
+if the layer 2 destination address is not the NIC's layer 2
+address. It might happen if the packet is a multicast or broadcast
+packet, or the NIC is in promiscuous mode. In these situations, the
+packets would be delivered to the TCP layer, but the TCP layer will discard
+these packets before increasing TcpInSegs. The TcpInSegs counter
+isn't aware of GRO. So if two packets are merged by GRO, the TcpInSegs
+counter would only increase by 1.
+
+* TcpOutSegs
+Defined in `RFC1213 tcpOutSegs`_
+
+.. _RFC1213 tcpOutSegs: https://tools.ietf.org/html/rfc1213#page-48
+
+The number of packets sent by the TCP layer. As mentioned in RFC1213,
+it excludes the retransmitted packets, but it includes the SYN, ACK
+and RST packets. Unlike TcpInSegs, TcpOutSegs is aware of GSO, so if
+a packet is split into two by GSO, TcpOutSegs will increase by 2.
+
+* TcpActiveOpens
+Defined in `RFC1213 tcpActiveOpens`_
+
+.. _RFC1213 tcpActiveOpens: https://tools.ietf.org/html/rfc1213#page-47
+
+It means the TCP layer sends a SYN and comes into the SYN-SENT
+state. Every time TcpActiveOpens increases by 1, TcpOutSegs should
+always increase by 1.
+
+* TcpPassiveOpens
+Defined in `RFC1213 tcpPassiveOpens`_
+
+.. _RFC1213 tcpPassiveOpens: https://tools.ietf.org/html/rfc1213#page-47
+
+It means the TCP layer receives a SYN, replies with a SYN+ACK and comes
+into the SYN-RCVD state.
+
+* TcpExtTCPRcvCoalesce
+When packets are received by the TCP layer and are not read by the
+application, the TCP layer will try to merge them. This counter
+indicates how many packets are merged in such situations. If GRO is
+enabled, lots of packets would be merged by GRO; these packets
+wouldn't be counted in TcpExtTCPRcvCoalesce.
+
+* TcpExtTCPAutoCorking
+When sending packets, the TCP layer will try to merge small packets into
+a bigger one. This counter increases by 1 for every packet merged in such
+situations. Please refer to the LWN article for more details:
+https://lwn.net/Articles/576263/
+
+* TcpExtTCPOrigDataSent
+This counter is explained by `kernel commit f19c29e3e391`_; I have pasted
+the explanation below::
+
+  TCPOrigDataSent: number of outgoing packets with original data (excluding
+  retransmission but including data-in-SYN). This counter is different from
+  TcpOutSegs because TcpOutSegs also tracks pure ACKs. TCPOrigDataSent is
+  more useful to track the TCP retransmission rate.
+
+* TCPSynRetrans
+This counter is explained by `kernel commit f19c29e3e391`_; I have pasted
+the explanation below::
+
+  TCPSynRetrans: number of SYN and SYN/ACK retransmits to break down
+  retransmissions into SYN, fast-retransmits, timeout retransmits, etc.
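+
+As a concrete illustration of TcpActiveOpens above: each connect() call
+on a fresh TCP socket sends one SYN, so both TcpActiveOpens and
+TcpOutSegs increase by 1. A minimal sketch in C (the helper name and the
+omitted error handling are illustrative only)::
+
+  #include <arpa/inet.h>
+  #include <netinet/in.h>
+  #include <string.h>
+  #include <sys/socket.h>
+
+  static int active_open(const char *ip, unsigned short port)
+  {
+          struct sockaddr_in addr;
+          int fd = socket(AF_INET, SOCK_STREAM, 0);
+
+          memset(&addr, 0, sizeof(addr));
+          addr.sin_family = AF_INET;
+          addr.sin_port = htons(port);
+          inet_pton(AF_INET, ip, &addr.sin_addr);
+
+          /* The SYN goes out here: TcpActiveOpens and TcpOutSegs += 1 */
+          return connect(fd, (struct sockaddr *)&addr, sizeof(addr));
+  }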
+
+* TCPFastOpenActiveFail
+This counter is explained by `kernel commit f19c29e3e391`_; I have pasted
+the explanation below::
+
+  TCPFastOpenActiveFail: Fast Open attempts (SYN/data) failed because
+  the remote does not accept it or the attempts timed out.
+
+.. _kernel commit f19c29e3e391: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f19c29e3e391a66a273e9afebaf01917245148cd
+
+* TcpExtListenOverflows and TcpExtListenDrops
+When the kernel receives a SYN from a client and the TCP accept queue
+is full, the kernel will drop the SYN and add 1 to TcpExtListenOverflows.
+At the same time, the kernel will also add 1 to TcpExtListenDrops. When a
+TCP socket is in LISTEN state and the kernel needs to drop a packet, the
+kernel always adds 1 to TcpExtListenDrops. So an increase of
+TcpExtListenOverflows makes TcpExtListenDrops increase at the same time,
+but TcpExtListenDrops can also increase without TcpExtListenOverflows
+increasing, e.g. a memory allocation failure would also make
+TcpExtListenDrops increase.
+
+Note: The above explanation is based on kernel 4.10 or later; an older
+kernel's TCP stack behaves differently when the TCP accept queue is
+full. On an old kernel, the TCP stack won't drop the SYN; it completes
+the 3-way handshake. As the accept queue is full, the TCP stack keeps
+the socket in the TCP half-open queue. While it is in the half-open
+queue, the TCP stack sends SYN+ACK on an exponential backoff timer.
+After the client replies ACK, the TCP stack checks whether the accept
+queue is still full. If it is not full, it moves the socket to the
+accept queue; if it is full, it keeps the socket in the half-open
+queue, and the next time the client replies ACK, this socket gets
+another chance to move to the accept queue.
+
+
+TCP fast path
+=============
+When the kernel receives a TCP packet, it has two paths to handle the
+packet: the fast path and the slow path. The comment in the kernel code
+provides a good explanation of them; I have pasted it below::
+
+  It is split into a fast path and a slow path. The fast path is
+  disabled when:
+
+  - A zero window was announced from us - zero window probing
+    is only handled properly on the slow path.
+  - Out of order segments arrived.
+  - Urgent data is expected.
+  - There is no buffer space left
+  - Unexpected TCP flags/window values/header lengths are received
+    (detected by checking the TCP header against pred_flags)
+  - Data is sent in both directions. The fast path only supports pure senders
+    or pure receivers (this means either the sequence number or the ack
+    value must stay constant)
+  - Unexpected TCP option.
+
+The kernel will try to use the fast path unless any of the above
+conditions are satisfied. If the packets are out of order, the kernel
+will handle them in the slow path, which means the performance might not
+be very good. The kernel would also come into the slow path if the
+"Delayed ack" is used, because when using "Delayed ack", the data is
+sent in both directions. When the TCP window scale option is not used,
+the kernel will try to enable the fast path immediately when the
+connection comes into the established state, but if the TCP window scale
+option is used, the kernel will disable the fast path at first, and try
+to enable it after the kernel receives packets.
+
+* TcpExtTCPPureAcks and TcpExtTCPHPAcks
+If a packet sets the ACK flag and has no data, it is a pure ACK packet.
+If the kernel handles it in the fast path, TcpExtTCPHPAcks increases
+by 1; if the kernel handles it in the slow path, TcpExtTCPPureAcks
+increases by 1.
+
+* TcpExtTCPHPHits
+If a TCP packet has data (which means it is not a pure ACK packet),
+and this packet is handled in the fast path, TcpExtTCPHPHits will
+increase by 1.
+
+
+TCP abort
+=========
+
+
+* TcpExtTCPAbortOnData
+It means the TCP layer has data in flight but needs to close the
+connection, so it sends a RST to the other side, indicating that the
+connection was not closed gracefully. An easy way to increase this
+counter is to use the SO_LINGER option. Please refer to the SO_LINGER
+section of the `socket man page`_:
+
+.. _socket man page: http://man7.org/linux/man-pages/man7/socket.7.html
+
+By default, when an application closes a connection, the close function
+will return immediately and the kernel will try to send the in-flight
+data asynchronously. If you use the SO_LINGER option with l_onoff set
+to 1 and l_linger set to a positive number, the close function won't
+return immediately; it waits until the in-flight data is acked by the
+other side, for at most l_linger seconds. If l_onoff is set to 1 and
+l_linger is set to 0, the kernel sends a RST immediately when the
+application closes the connection and increases the TcpExtTCPAbortOnData
+counter.
+
+* TcpExtTCPAbortOnClose
+This counter means the application has unread data in the TCP layer when
+the application wants to close the TCP connection. In such a situation,
+the kernel will send a RST to the other side of the TCP connection.
+
+* TcpExtTCPAbortOnMemory
+When an application closes a TCP connection, the kernel still needs to
+track the connection and let it complete the TCP disconnect process.
+E.g. an app calls the close method of a socket, and the kernel sends a
+FIN to the other side of the connection; the app then has no
+relationship with the socket any more, but the kernel needs to keep the
+socket. This socket becomes an orphan socket: the kernel waits for the
+reply of the other side, and finally comes to the TIME_WAIT state. When
+the kernel does not have enough memory to keep the orphan socket, it
+sends a RST to the other side and deletes the socket; in such a
+situation the kernel increases TcpExtTCPAbortOnMemory by 1. Two
+conditions trigger TcpExtTCPAbortOnMemory:
+
+1. the memory used by the TCP protocol is higher than the third value of
+tcp_mem. Please refer to the tcp_mem section in the `TCP man page`_:
+
+.. _TCP man page: http://man7.org/linux/man-pages/man7/tcp.7.html
+
+2. the orphan socket count is higher than net.ipv4.tcp_max_orphans
+
+
+* TcpExtTCPAbortOnTimeout
+This counter will increase when any of the TCP timers expire. In such
+a situation, the kernel won't send a RST; it just gives up the
+connection.
+
+* TcpExtTCPAbortOnLinger
+When a TCP connection comes into the FIN_WAIT_2 state, instead of
+waiting for the FIN packet from the other side, the kernel could send
+a RST and delete the socket immediately. This is not the default
+behavior of the Linux kernel TCP stack. By configuring the TCP_LINGER2
+socket option, you can make the kernel follow this behavior.
+
+* TcpExtTCPAbortFailed
+The kernel TCP layer will send a RST if the `RFC2525 2.17 section`_ is
+satisfied. If an internal error occurs during this process,
+TcpExtTCPAbortFailed will be increased.
+
+.. _RFC2525 2.17 section: https://tools.ietf.org/html/rfc2525#page-50
+
+TCP Hybrid Slow Start
+=====================
+The Hybrid Slow Start algorithm is an enhancement of the traditional
+TCP congestion window Slow Start algorithm. It uses two pieces of
+information to detect whether the max bandwidth of the TCP path is
+approached.
+
+TCP Hybrid Slow Start
+=====================
+The Hybrid Slow Start algorithm is an enhancement of the traditional
+TCP congestion window Slow Start algorithm. It uses two pieces of
+information to detect whether the max bandwidth of the TCP path is
+approached: the ACK train length and the increase in packet delay. For
+detailed information, please refer to the `Hybrid Slow Start paper`_.
+When either the ACK train length or the packet delay hits a specific
+threshold, the congestion control algorithm will come into the
+Congestion Avoidance state. As of v4.20, two congestion control
+algorithms use Hybrid Slow Start: cubic (the default congestion
+control algorithm) and cdg. Four snmp counters relate to the Hybrid
+Slow Start algorithm.
+
+.. _Hybrid Slow Start paper: https://pdfs.semanticscholar.org/25e9/ef3f03315782c7f1cbcd31b587857adae7d1.pdf
+
+* TcpExtTCPHystartTrainDetect
+How many times the ACK train length threshold is detected.
+
+* TcpExtTCPHystartTrainCwnd
+The sum of CWND detected by ACK train length. Dividing this value by
+TcpExtTCPHystartTrainDetect gives the average CWND which was detected
+by the ACK train length.
+
+* TcpExtTCPHystartDelayDetect
+How many times the packet delay threshold is detected.
+
+* TcpExtTCPHystartDelayCwnd
+The sum of CWND detected by packet delay. Dividing this value by
+TcpExtTCPHystartDelayDetect gives the average CWND which was detected
+by the packet delay.
+
+TCP retransmission and congestion control
+=========================================
+The TCP protocol has two retransmission mechanisms: SACK and fast
+recovery. They are mutually exclusive: when SACK is enabled, the
+kernel TCP stack uses SACK; otherwise, the kernel uses fast recovery.
+SACK is a TCP option, defined in `RFC2018`_; fast recovery is defined
+in `RFC6582`_ and is also called 'Reno'.
+
+TCP congestion control is a big and complex topic. To understand the
+related snmp counters, we need to know the states of the congestion
+control state machine. There are 5 states: Open, Disorder, CWR,
+Recovery and Loss. For details about these states, please refer to
+page 5 and page 6 of this document:
+https://pdfs.semanticscholar.org/0e9c/968d09ab2e53e24c4dca5b2d67c7f7140f8e.pdf
+
+.. _RFC2018: https://tools.ietf.org/html/rfc2018
+.. _RFC6582: https://tools.ietf.org/html/rfc6582
+
+* TcpExtTCPRenoRecovery and TcpExtTCPSackRecovery
+When the congestion control comes into the Recovery state, if SACK is
+used, TcpExtTCPSackRecovery increases by 1; if SACK is not used,
+TcpExtTCPRenoRecovery increases by 1. These two counters mean the TCP
+stack begins to retransmit the lost packets.
+
+* TcpExtTCPSACKReneging
+A packet was acknowledged by SACK, but the receiver has dropped this
+packet, so the sender needs to retransmit it. In this situation, the
+sender adds 1 to TcpExtTCPSACKReneging. A receiver could drop a packet
+which has been acknowledged by SACK; although it is unusual, it is
+allowed by the TCP protocol. The sender doesn't really know what
+happened on the receiver side. The sender just waits until the RTO
+expires for this packet, then the sender assumes this packet has been
+dropped by the receiver.
+
+* TcpExtTCPRenoReorder
+A reorder packet was detected by fast recovery. This counter is only
+used if SACK is disabled. The fast recovery algorithm detects reorder
+by the duplicate ACK number. E.g., if retransmission is triggered, and
+the original retransmitted packet is not lost, but just out of order,
+the receiver acknowledges multiple times: once for the retransmitted
+packet and once for the arrival of the original out of order packet.
+Thus the sender sees more ACKs than expected, and the sender knows
+that reordering occurred.
+
+* TcpExtTCPTSReorder
+A reorder packet is detected when a hole is filled. E.g., assume the
+sender sends packets 1,2,3,4,5, and the receiving order is
+1,2,4,5,3. When the sender receives the ACK of packet 3 (which fills
+the hole), two conditions will let TcpExtTCPTSReorder increase by 1:
+(1) packet 3 has not been retransmitted yet; (2) packet 3 has been
+retransmitted, but the timestamp of packet 3's ACK is earlier than the
+retransmission timestamp.
+
+* TcpExtTCPSACKReorder
+A reorder packet was detected by SACK. SACK has two methods to detect
+reorder: (1) a DSACK is received by the sender. It means the sender
+sent the same packet more than once, and the only reason is that the
+sender believed an out of order packet was lost, so it sent the packet
+again. (2) Assume packets 1,2,3,4,5 are sent by the sender, and the
+sender has received SACKs for packets 2 and 5. Now the sender receives
+a SACK for packet 4, and the sender hasn't retransmitted that packet
+yet, so the sender knows packet 4 is out of order. The kernel TCP
+stack will increase TcpExtTCPSACKReorder for both of the above
+scenarios.
+
+
+DSACK
+=====
+The DSACK is defined in `RFC2883`_. The receiver uses DSACK to report
+duplicate packets to the sender. There are two kinds of duplication:
+(1) a packet which has already been acknowledged is duplicated; (2) an
+out of order packet is duplicated. The TCP stack counts these two
+kinds of duplication on both the receiver side and the sender side.
+
+.. _RFC2883: https://tools.ietf.org/html/rfc2883
+
+* TcpExtTCPDSACKOldSent
+The TCP stack receives a duplicate packet which has been acked, so it
+sends a DSACK to the sender.
+
+* TcpExtTCPDSACKOfoSent
+The TCP stack receives an out of order duplicate packet, so it sends a
+DSACK to the sender.
+
+* TcpExtTCPDSACKRecv
+The TCP stack receives a DSACK, which indicates that an acknowledged
+duplicate packet was received.
+
+* TcpExtTCPDSACKOfoRecv
+The TCP stack receives a DSACK, which indicates that an out of order
+duplicate packet was received.
+
+examples
+========
+
+ping test
+---------
+Run the ping command against the public DNS server 8.8.8.8::
+
+  nstatuser@nstat-a:~$ ping 8.8.8.8 -c 1
+  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
+  64 bytes from 8.8.8.8: icmp_seq=1 ttl=119 time=17.8 ms
+
+  --- 8.8.8.8 ping statistics ---
+  1 packets transmitted, 1 received, 0% packet loss, time 0ms
+  rtt min/avg/max/mdev = 17.875/17.875/17.875/0.000 ms
+
+The nstat result::
+
+  nstatuser@nstat-a:~$ nstat
+  #kernel
+  IpInReceives                    1                  0.0
+  IpInDelivers                    1                  0.0
+  IpOutRequests                   1                  0.0
+  IcmpInMsgs                      1                  0.0
+  IcmpInEchoReps                  1                  0.0
+  IcmpOutMsgs                     1                  0.0
+  IcmpOutEchos                    1                  0.0
+  IcmpMsgInType0                  1                  0.0
+  IcmpMsgOutType8                 1                  0.0
+  IpExtInOctets                   84                 0.0
+  IpExtOutOctets                  84                 0.0
+  IpExtInNoECTPkts                1                  0.0
+
+The Linux server sent an ICMP Echo packet, so IpOutRequests,
+IcmpOutMsgs, IcmpOutEchos and IcmpMsgOutType8 were increased by 1. The
+server got an ICMP Echo Reply from 8.8.8.8, so IpInReceives,
+IcmpInMsgs, IcmpInEchoReps and IcmpMsgInType0 were increased by 1. The
+ICMP Echo Reply was passed to the ICMP layer via the IP layer, so
+IpInDelivers was increased by 1. The default ping payload size is 56
+bytes, as the "56(84)" in the ping output above shows, so an ICMP Echo
+packet and its corresponding Echo Reply packet are constructed of:
+
+* 14 bytes MAC header
+* 20 bytes IP header
+* 8 bytes ICMP header
+* 56 bytes data (default value of the ping command)
+
+The MAC header is not counted at the IP layer, so the IpExtInOctets
+and IpExtOutOctets are 20+8+56=84.
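+
+The examples in this document read the counters with the nstat tool.
+If you want to snapshot the same counters from a script, a small
+reader like the sketch below can be used; it is not part of the
+original test, and it parses /proc/net/netstat, where each counter
+group is stored as a header line followed by a value line
+(/proc/net/snmp uses the same layout for the Ip, Icmp and Tcp
+groups)::
+
+  def read_netstat():
+      counters = {}
+      with open('/proc/net/netstat') as f:
+          lines = f.readlines()
+      # lines come in pairs: "IpExt: name1 name2 ..." followed by
+      # "IpExt: value1 value2 ..."
+      for header, values in zip(lines[::2], lines[1::2]):
+          group, names = header.split(':')
+          _, nums = values.split(':')
+          for name, num in zip(names.split(), nums.split()):
+              counters[group + name] = int(num)
+      return counters
+
+  before = read_netstat()
+  # run the test traffic here, then:
+  after = read_netstat()
+  print(after['IpExtInOctets'] - before['IpExtInOctets'])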
+
+tcp 3-way handshake
+-------------------
+On the server side, we run::
+
+  nstatuser@nstat-b:~$ nc -lknv 0.0.0.0 9000
+  Listening on [0.0.0.0] (family 0, port 9000)
+
+On the client side, we run::
+
+  nstatuser@nstat-a:~$ nc -nv 192.168.122.251 9000
+  Connection to 192.168.122.251 9000 port [tcp/*] succeeded!
+
+The server listened on tcp port 9000, the client connected to it, and
+they completed the 3-way handshake.
+
+On the server side, we can find the following nstat output::
+
+  nstatuser@nstat-b:~$ nstat | grep -i tcp
+  TcpPassiveOpens                 1                  0.0
+  TcpInSegs                       2                  0.0
+  TcpOutSegs                      1                  0.0
+  TcpExtTCPPureAcks               1                  0.0
+
+On the client side, we can find the following nstat output::
+
+  nstatuser@nstat-a:~$ nstat | grep -i tcp
+  TcpActiveOpens                  1                  0.0
+  TcpInSegs                       1                  0.0
+  TcpOutSegs                      2                  0.0
+
+When the server received the first SYN, it replied with a SYN+ACK and
+came into the SYN-RCVD state, so TcpPassiveOpens increased by 1. The
+server received a SYN, sent a SYN+ACK, and received an ACK, so the
+server sent 1 packet and received 2 packets; TcpInSegs increased by 2
+and TcpOutSegs increased by 1. The last ACK of the 3-way handshake is
+a pure ACK without data, so TcpExtTCPPureAcks increased by 1.
+
+When the client sent a SYN, the client came into the SYN-SENT state,
+so TcpActiveOpens increased by 1. The client sent a SYN, received a
+SYN+ACK, and sent an ACK, so the client sent 2 packets and received 1
+packet; TcpInSegs increased by 1 and TcpOutSegs increased by 2.
+
+TCP normal traffic
+------------------
+Run nc on the server::
+
+  nstatuser@nstat-b:~$ nc -lkv 0.0.0.0 9000
+  Listening on [0.0.0.0] (family 0, port 9000)
+
+Run nc on the client::
+
+  nstatuser@nstat-a:~$ nc -v nstat-b 9000
+  Connection to nstat-b 9000 port [tcp/*] succeeded!
+
+Input a string in the nc client ('hello' in our example)::
+
+  nstatuser@nstat-a:~$ nc -v nstat-b 9000
+  Connection to nstat-b 9000 port [tcp/*] succeeded!
+  hello
+
+The client side nstat output::
+
+  nstatuser@nstat-a:~$ nstat
+  #kernel
+  IpInReceives                    1                  0.0
+  IpInDelivers                    1                  0.0
+  IpOutRequests                   1                  0.0
+  TcpInSegs                       1                  0.0
+  TcpOutSegs                      1                  0.0
+  TcpExtTCPPureAcks               1                  0.0
+  TcpExtTCPOrigDataSent           1                  0.0
+  IpExtInOctets                   52                 0.0
+  IpExtOutOctets                  58                 0.0
+  IpExtInNoECTPkts                1                  0.0
+
+The server side nstat output::
+
+  nstatuser@nstat-b:~$ nstat
+  #kernel
+  IpInReceives                    1                  0.0
+  IpInDelivers                    1                  0.0
+  IpOutRequests                   1                  0.0
+  TcpInSegs                       1                  0.0
+  TcpOutSegs                      1                  0.0
+  IpExtInOctets                   58                 0.0
+  IpExtOutOctets                  52                 0.0
+  IpExtInNoECTPkts                1                  0.0
+
+Input a string on the nc client side again ('world' in our example)::
+
+  nstatuser@nstat-a:~$ nc -v nstat-b 9000
+  Connection to nstat-b 9000 port [tcp/*] succeeded!
+  hello
+  world
+
+Client side nstat output::
+
+  nstatuser@nstat-a:~$ nstat
+  #kernel
+  IpInReceives                    1                  0.0
+  IpInDelivers                    1                  0.0
+  IpOutRequests                   1                  0.0
+  TcpInSegs                       1                  0.0
+  TcpOutSegs                      1                  0.0
+  TcpExtTCPHPAcks                 1                  0.0
+  TcpExtTCPOrigDataSent           1                  0.0
+  IpExtInOctets                   52                 0.0
+  IpExtOutOctets                  58                 0.0
+  IpExtInNoECTPkts                1                  0.0
+
+
+Server side nstat output::
+
+  nstatuser@nstat-b:~$ nstat
+  #kernel
+  IpInReceives                    1                  0.0
+  IpInDelivers                    1                  0.0
+  IpOutRequests                   1                  0.0
+  TcpInSegs                       1                  0.0
+  TcpOutSegs                      1                  0.0
+  TcpExtTCPHPHits                 1                  0.0
+  IpExtInOctets                   58                 0.0
+  IpExtOutOctets                  52                 0.0
+  IpExtInNoECTPkts                1                  0.0
+
+Comparing the first client-side nstat with the second client-side
+nstat, we can find one difference: the first one had a
+'TcpExtTCPPureAcks', but the second one had a 'TcpExtTCPHPAcks'. The
+first server-side nstat and the second server-side nstat had a
+difference too: the second server-side nstat had a TcpExtTCPHPHits,
+but the first server-side nstat didn't have it.
+The network traffic patterns were exactly the same: the client sent a
+packet to the server, and the server replied with an ACK. But the
+kernel handled them in different ways. When the TCP window scale
+option is not used, the kernel will try to enable the fast path
+immediately when the connection comes into the established state, but
+if the TCP window scale option is used, the kernel will disable the
+fast path at first, and try to enable it after the kernel receives
+packets. We can use the 'ss' command to verify whether the window
+scale option is used, e.g. by running the command below on either the
+server or the client::
+
+  nstatuser@nstat-a:~$ ss -o state established -i '( dport = :9000 or sport = :9000 )'
+  Netid  Recv-Q  Send-Q  Local Address:Port      Peer Address:Port
+  tcp    0       0       192.168.122.250:40654   192.168.122.251:9000
+	 ts sack cubic wscale:7,7 rto:204 rtt:0.98/0.49 mss:1448 pmtu:1500 rcvmss:536 advmss:1448 cwnd:10 bytes_acked:1 segs_out:2 segs_in:1 send 118.2Mbps lastsnd:46572 lastrcv:46572 lastack:46572 pacing_rate 236.4Mbps rcv_space:29200 rcv_ssthresh:29200 minrtt:0.98
+
+The 'wscale:7,7' means both the server and the client set the window
+scale option to 7. Now we can explain the nstat output in our test:
+
+In the first nstat output of the client side, the client sent a packet
+and the server replied with an ACK; when the kernel handled this ACK,
+the fast path was not enabled, so the ACK was counted into
+'TcpExtTCPPureAcks'.
+
+In the second nstat output of the client side, the client sent a
+packet again and received another ACK from the server; this time, the
+fast path was enabled, and the ACK qualified for the fast path, so it
+was handled by the fast path and counted into 'TcpExtTCPHPAcks'.
+
+In the first nstat output of the server side, the fast path was not
+enabled, so there was no 'TcpExtTCPHPHits'.
+
+In the second nstat output of the server side, the fast path was
+enabled, and the packet received from the client qualified for the
+fast path, so it was counted into 'TcpExtTCPHPHits'.
+
+TcpExtTCPAbortOnClose
+---------------------
+On the server side, we run the python script below::
+
+  import socket
+  import time
+
+  port = 9000
+
+  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+  s.bind(('0.0.0.0', port))
+  s.listen(1)
+  sock, addr = s.accept()
+  while True:
+      time.sleep(9999999)
+
+This python script listens on port 9000, but doesn't read anything
+from the connection.
+
+On the client side, we send the string "hello" by nc::
+
+  nstatuser@nstat-a:~$ echo "hello" | nc nstat-b 9000
+
+Then we come back to the server side: the server has received the
+"hello" packet, and the TCP layer has acked this packet, but the
+application hasn't read it yet. We type Ctrl-C to terminate the server
+script. Then we can find that TcpExtTCPAbortOnClose increased by 1 on
+the server side::
+
+  nstatuser@nstat-b:~$ nstat | grep -i abort
+  TcpExtTCPAbortOnClose           1                  0.0
+
+If we run tcpdump on the server side, we can see that the server sent
+a RST after we typed Ctrl-C.
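+
+The Ctrl-C above simply makes the process exit, which closes the
+socket while "hello" is still unread. The same counter can be
+triggered programmatically; the sketch below is a hypothetical variant
+of the script above, which closes the accepted connection itself::
+
+  import socket
+  import time
+
+  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+  s.bind(('0.0.0.0', 9000))
+  s.listen(1)
+  sock, addr = s.accept()   # run: echo "hello" | nc nstat-b 9000
+  time.sleep(1)             # let the data arrive, but never recv() it
+  sock.close()              # unread data: the kernel sends a RST and
+                            # increases TcpExtTCPAbortOnClose by 1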
+
+TcpExtTCPAbortOnMemory and TcpExtTCPAbortOnTimeout
+--------------------------------------------------
+Below is an example which lets the orphan socket count be higher than
+net.ipv4.tcp_max_orphans.
+
+Change tcp_max_orphans to a smaller value on the client::
+
+  sudo bash -c "echo 10 > /proc/sys/net/ipv4/tcp_max_orphans"
+
+Client code (create 64 connections to the server)::
+
+  nstatuser@nstat-a:~$ cat client_orphan.py
+  import socket
+  import time
+
+  server = 'nstat-b' # server address
+  port = 9000
+
+  count = 64
+
+  connection_list = []
+
+  for i in range(count):
+      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+      s.connect((server, port))
+      connection_list.append(s)
+      print("connection_count: %d" % len(connection_list))
+
+  while True:
+      time.sleep(99999)
+
+Server code (accept 64 connections from the client)::
+
+  nstatuser@nstat-b:~$ cat server_orphan.py
+  import socket
+  import time
+
+  port = 9000
+  count = 64
+
+  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+  s.bind(('0.0.0.0', port))
+  s.listen(count)
+  connection_list = []
+  while True:
+      sock, addr = s.accept()
+      connection_list.append((sock, addr))
+      print("connection_count: %d" % len(connection_list))
+
+Run the python scripts on the server and the client.
+
+On the server::
+
+  python3 server_orphan.py
+
+On the client::
+
+  python3 client_orphan.py
+
+Run iptables on the server::
+
+  sudo iptables -A INPUT -i ens3 -p tcp --destination-port 9000 -j DROP
+
+Type Ctrl-C on the client to stop client_orphan.py.
+
+Check TcpExtTCPAbortOnMemory on the client::
+
+  nstatuser@nstat-a:~$ nstat | grep -i abort
+  TcpExtTCPAbortOnMemory          54                 0.0
+
+Check the orphan socket count on the client::
+
+  nstatuser@nstat-a:~$ ss -s
+  Total: 131 (kernel 0)
+  TCP:   14 (estab 1, closed 0, orphaned 10, synrecv 0, timewait 0/0), ports 0
+
+  Transport Total     IP        IPv6
+  *         0         -         -
+  RAW       1         0         1
+  UDP       1         1         0
+  TCP       14        13        1
+  INET      16        14        2
+  FRAG      0         0         0
+
+The explanation of the test: after running server_orphan.py and
+client_orphan.py, we set up 64 connections between the server and the
+client. After running the iptables command, the server drops all
+packets from the client. When we type Ctrl-C on client_orphan.py, the
+client system tries to close these connections, and before they are
+closed gracefully, these connections become orphan sockets. As the
+iptables rule on the server blocks packets from the client, the server
+won't receive the fin from the client, so all the connections on the
+client get stuck in the FIN_WAIT_1 state and stay orphan sockets until
+they time out. We have echoed 10 to
+/proc/sys/net/ipv4/tcp_max_orphans, so the client system keeps only 10
+orphan sockets; for all other orphan sockets, the client system sends
+a RST and deletes them. We have 64 connections, so the 'ss -s' command
+shows the system has 10 orphan sockets, and the value of
+TcpExtTCPAbortOnMemory is 54.
+
+An additional explanation about the orphan socket count: you can find
+the exact orphan socket count with the 'ss -s' command, but when the
+kernel decides whether to increase TcpExtTCPAbortOnMemory and send a
+RST, the kernel doesn't always check the exact orphan socket count.
+For performance, the kernel first checks an approximate count; if the
+approximate count is more than tcp_max_orphans, the kernel checks the
+exact count again. So if the approximate count is less than
+tcp_max_orphans, but the exact count is more than tcp_max_orphans, you
+would find that TcpExtTCPAbortOnMemory is not increased at all. If
+tcp_max_orphans is large enough, this won't occur, but if you decrease
+tcp_max_orphans to a small value like in our test, you might hit this
+issue. That is why, in our test, the client set up 64 connections
+although tcp_max_orphans is 10; if the client had only set up 11
+connections, we might not see TcpExtTCPAbortOnMemory change at all.
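+
+If you want to watch the kernel's orphan bookkeeping while the test
+runs, you can read the same orphan count that 'ss -s' reports from
+/proc/net/sockstat. The sketch below is not part of the original
+test::
+
+  with open('/proc/sys/net/ipv4/tcp_max_orphans') as f:
+      max_orphans = int(f.read())
+  with open('/proc/net/sockstat') as f:
+      for line in f:
+          # the TCP line looks like: "TCP: inuse 5 orphan 10 tw 0 ..."
+          if line.startswith('TCP:'):
+              fields = line.split()
+              orphans = int(fields[fields.index('orphan') + 1])
+  print('orphans: %d, tcp_max_orphans: %d' % (orphans, max_orphans))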
+
+Continuing the previous test, we wait for several minutes. Because the
+iptables rule on the server blocked the traffic, the server wouldn't
+receive the fin, and all of the client's orphan sockets would finally
+time out in the FIN_WAIT_1 state. After waiting a few minutes, we can
+find 10 timeouts on the client::
+
+  nstatuser@nstat-a:~$ nstat | grep -i abort
+  TcpExtTCPAbortOnTimeout         10                 0.0
+
+TcpExtTCPAbortOnLinger
+----------------------
+The server side code::
+
+  nstatuser@nstat-b:~$ cat server_linger.py
+  import socket
+  import time
+
+  port = 9000
+
+  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+  s.bind(('0.0.0.0', port))
+  s.listen(1)
+  sock, addr = s.accept()
+  while True:
+      time.sleep(9999999)
+
+The client side code::
+
+  nstatuser@nstat-a:~$ cat client_linger.py
+  import socket
+  import struct
+
+  server = 'nstat-b' # server address
+  port = 9000
+
+  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+  s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 10))
+  s.setsockopt(socket.SOL_TCP, socket.TCP_LINGER2, struct.pack('i', -1))
+  s.connect((server, port))
+  s.close()
+
+Run server_linger.py on the server::
+
+  nstatuser@nstat-b:~$ python3 server_linger.py
+
+Run client_linger.py on the client::
+
+  nstatuser@nstat-a:~$ python3 client_linger.py
+
+After running client_linger.py, check the output of nstat::
+
+  nstatuser@nstat-a:~$ nstat | grep -i abort
+  TcpExtTCPAbortOnLinger          1                  0.0
+
+TcpExtTCPRcvCoalesce
+--------------------
+On the server, we run a program which listens on TCP port 9000, but
+doesn't read any data::
+
+  import socket
+  import time
+  port = 9000
+  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+  s.bind(('0.0.0.0', port))
+  s.listen(1)
+  sock, addr = s.accept()
+  while True:
+      time.sleep(9999999)
+
+Save the above code as server_coalesce.py, and run::
+
+  python3 server_coalesce.py
+
+On the client, save the code below as client_coalesce.py::
+
+  import socket
+  server = 'nstat-b'
+  port = 9000
+  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+  s.connect((server, port))
+
+Run::
+
+  nstatuser@nstat-a:~$ python3 -i client_coalesce.py
+
+We use '-i' to come into the interactive mode, then send a packet::
+
+  >>> s.send(b'foo')
+  3
+
+Send a packet again::
+
+  >>> s.send(b'bar')
+  3
+
+On the server, run nstat::
+
+  ubuntu@nstat-b:~$ nstat
+  #kernel
+  IpInReceives                    2                  0.0
+  IpInDelivers                    2                  0.0
+  IpOutRequests                   2                  0.0
+  TcpInSegs                       2                  0.0
+  TcpOutSegs                      2                  0.0
+  TcpExtTCPRcvCoalesce            1                  0.0
+  IpExtInOctets                   110                0.0
+  IpExtOutOctets                  104                0.0
+  IpExtInNoECTPkts                2                  0.0
+
+The client sent two packets, and the server didn't read any data. When
+the second packet arrived at the server, the first packet was still in
+the receive queue. So the TCP layer merged the two packets, and we can
+find that TcpExtTCPRcvCoalesce increased by 1.
+
+TcpExtListenOverflows and TcpExtListenDrops
+-------------------------------------------
+On the server, run the nc command, listening on port 9000::
+
+  nstatuser@nstat-b:~$ nc -lkv 0.0.0.0 9000
+  Listening on [0.0.0.0] (family 0, port 9000)
+
+On the client, run 3 nc commands in different terminals::
+
+  nstatuser@nstat-a:~$ nc -v nstat-b 9000
+  Connection to nstat-b 9000 port [tcp/*] succeeded!
+
+The nc command only accepts 1 connection, and the accept queue length
+is 1. In the current Linux implementation, setting the queue length to
+n means the actual queue length is n+1. Now we create 3 connections:
+1 is accepted by nc, and 2 are in the accept queue, so the accept
+queue is full.
+
+Before running the 4th nc, we clean the nstat history on the server::
+
+  nstatuser@nstat-b:~$ nstat -n
+
+Run the 4th nc on the client::
+
+  nstatuser@nstat-a:~$ nc -v nstat-b 9000
+
+If the nc server is running on kernel 4.10 or a higher version, you
+won't see the "Connection to ... succeeded!" string, because the
+kernel will drop the SYN if the accept queue is full. If the nc server
+is running on an old kernel, you would see that the connection
+succeeded, because the kernel would complete the 3-way handshake and
+keep the socket in the half-open queue. I did the test on kernel
+4.15. Below is the nstat on the server::
+
+  nstatuser@nstat-b:~$ nstat
+  #kernel
+  IpInReceives                    4                  0.0
+  IpInDelivers                    4                  0.0
+  TcpInSegs                       4                  0.0
+  TcpExtListenOverflows           4                  0.0
+  TcpExtListenDrops               4                  0.0
+  IpExtInOctets                   240                0.0
+  IpExtInNoECTPkts                4                  0.0
+
+Both TcpExtListenOverflows and TcpExtListenDrops were 4. If the time
+between the 4th nc and the nstat was longer, the values of
+TcpExtListenOverflows and TcpExtListenDrops would be larger, because
+the SYN of the 4th nc was dropped and the client kept retrying.
+
+IpInAddrErrors, IpExtInNoRoutes and IpOutNoRoutes
+-------------------------------------------------
+server A IP address: 192.168.122.250
+server B IP address: 192.168.122.251
+
+Prepare on server A: add a route to 8.8.8.8 via server B::
+
+  $ sudo ip route add 8.8.8.8/32 via 192.168.122.251
+
+Prepare on server B: disable send_redirects for all interfaces::
+
+  $ sudo sysctl -w net.ipv4.conf.all.send_redirects=0
+  $ sudo sysctl -w net.ipv4.conf.ens3.send_redirects=0
+  $ sudo sysctl -w net.ipv4.conf.lo.send_redirects=0
+  $ sudo sysctl -w net.ipv4.conf.default.send_redirects=0
+
+We want server A to send a packet to 8.8.8.8, and to route that packet
+to server B. When server B receives such a packet, it might send an
+ICMP Redirect message to server A; setting send_redirects to 0
+disables this behavior.
+
+First, generate IpInAddrErrors. On server B, we disable IP
+forwarding::
+
+  $ sudo sysctl -w net.ipv4.conf.all.forwarding=0
+
+On server A, we send packets to 8.8.8.8::
+
+  $ nc -v 8.8.8.8 53
+
+On server B, we check the output of nstat::
+
+  $ nstat
+  #kernel
+  IpInReceives                    3                  0.0
+  IpInAddrErrors                  3                  0.0
+  IpExtInOctets                   180                0.0
+  IpExtInNoECTPkts                3                  0.0
+
+As we have let server A route 8.8.8.8 to server B, and we disabled IP
+forwarding on server B, server A sent packets to server B, and server
+B dropped the packets and increased IpInAddrErrors. As the nc command
+re-sends the SYN packet if it doesn't receive a SYN+ACK, we can find
+multiple IpInAddrErrors.
+
+Second, generate IpExtInNoRoutes. On server B, we enable IP
+forwarding::
+
+  $ sudo sysctl -w net.ipv4.conf.all.forwarding=1
+
+Check the route table of server B and remove the default route::
+
+  $ ip route show
+  default via 192.168.122.1 dev ens3 proto static
+  192.168.122.0/24 dev ens3 proto kernel scope link src 192.168.122.251
+  $ sudo ip route delete default via 192.168.122.1 dev ens3 proto static
+
+On server A, we contact 8.8.8.8 again::
+
+  $ nc -v 8.8.8.8 53
+  nc: connect to 8.8.8.8 port 53 (tcp) failed: Network is unreachable
+
+On server B, run nstat::
+
+  $ nstat
+  #kernel
+  IpInReceives                    1                  0.0
+  IpOutRequests                   1                  0.0
+  IcmpOutMsgs                     1                  0.0
+  IcmpOutDestUnreachs             1                  0.0
+  IcmpMsgOutType3                 1                  0.0
+  IpExtInNoRoutes                 1                  0.0
+  IpExtInOctets                   60                 0.0
+  IpExtOutOctets                  88                 0.0
+  IpExtInNoECTPkts                1                  0.0
+
+We enabled IP forwarding on server B; when server B received a packet
+whose destination IP address is 8.8.8.8, server B tried to forward
+this packet.
+But we had deleted the default route, so there was no route for
+8.8.8.8; server B therefore increased IpExtInNoRoutes and sent an
+"ICMP Destination Unreachable" message to server A.
+
+Third, generate IpOutNoRoutes. Run the ping command on server B::
+
+  $ ping -c 1 8.8.8.8
+  connect: Network is unreachable
+
+Run nstat on server B::
+
+  $ nstat
+  #kernel
+  IpOutNoRoutes                   1                  0.0
+
+We have deleted the default route on server B. Server B couldn't find
+a route for the 8.8.8.8 IP address, so server B increased
+IpOutNoRoutes.
diff --git a/Documentation/networking/vrf.txt b/Documentation/networking/vrf.txt
index 8ff7b4c8f91b..a5f103b083a0 100644
--- a/Documentation/networking/vrf.txt
+++ b/Documentation/networking/vrf.txt
@@ -103,19 +103,33 @@ VRF device:
       or to specify the output device using cmsg and IP_PKTINFO.
 
+By default the scope of the port bindings for unbound sockets is
+limited to the default VRF. That is, it will not be matched by packets
+arriving on interfaces enslaved to an l3mdev and processes may bind to
+the same port if they bind to an l3mdev.
+
 TCP & UDP services running in the default VRF context (ie., not bound
 to any VRF device) can work across all VRF domains by enabling the
 tcp_l3mdev_accept and udp_l3mdev_accept sysctl options:
+
     sysctl -w net.ipv4.tcp_l3mdev_accept=1
     sysctl -w net.ipv4.udp_l3mdev_accept=1
 
+These options are disabled by default so that a socket in a VRF is only
+selected for packets in that VRF. There is a similar option for RAW
+sockets, which is enabled by default for reasons of backwards compatibility.
+This is so as to allow specifying the output device with cmsg and IP_PKTINFO
+while using a socket not bound to the corresponding VRF. This allows e.g.
+older ping implementations to be run while specifying the device, but
+without executing it in the VRF. This option can be disabled so that packets
+received in a VRF context are only handled by a raw socket bound to the VRF,
+and packets in the default VRF are only handled by a socket not bound to
+any VRF:
+
+    sysctl -w net.ipv4.raw_l3mdev_accept=0
+
 netfilter rules on the VRF device can be used to limit access to services
 running in the default VRF context as well.
 
-The default VRF does not have limited scope with respect to port bindings.
-That is, if a process does a wildcard bind to a port in the default VRF it
-owns the port across all VRF domains within the network namespace.
-
 ################################################################################
 
 Using iproute2 for VRFs
diff --git a/Documentation/networking/xfrm_device.txt b/Documentation/networking/xfrm_device.txt
index 267f55b5f54a..a1c904dc70dc 100644
--- a/Documentation/networking/xfrm_device.txt
+++ b/Documentation/networking/xfrm_device.txt
@@ -111,9 +111,10 @@ the stack in xfrm_input().
 		xfrm_state_hold(xs);
 
 	store the state information into the skb
-		skb->sp = secpath_dup(skb->sp);
-		skb->sp->xvec[skb->sp->len++] = xs;
-		skb->sp->olen++;
+		sp = secpath_set(skb);
+		if (!sp) return;
+		sp->xvec[sp->len++] = xs;
+		sp->olen++;
 
 	indicate the success and/or error status of the offload
 		xo = xfrm_offload(skb);
diff --git a/Documentation/nvdimm/security.txt b/Documentation/nvdimm/security.txt
new file mode 100644
index 000000000000..4c36c05ca98e
--- /dev/null
+++ b/Documentation/nvdimm/security.txt
@@ -0,0 +1,141 @@
+NVDIMM SECURITY
+===============
+
+1. Introduction
+---------------
+
+With the introduction of the Intel Device Specific Methods (DSM) v1.8
+specification [1], security DSMs were introduced.
+The spec added the following security DSMs: "get security state", "set
+passphrase", "disable passphrase", "unlock unit", "freeze lock",
+"secure erase", and "overwrite". A security_ops data structure has
+been added to struct dimm in order to support the security operations,
+and generic APIs are exposed to allow vendor-neutral operations.
+
+2. Sysfs Interface
+------------------
+The "security" sysfs attribute is provided in the nvdimm sysfs directory. For
+example:
+/sys/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0012:00/ndbus0/nmem0/security
+
+The "show" operation of that attribute displays the security state of
+that DIMM. The following states are available: disabled, unlocked,
+locked, frozen, and overwrite. If security is not supported, the sysfs
+attribute will not be visible.
+
+The "store" operation of that attribute takes several commands in
+order to support some of the security functionalities:
+
+update - enable or update passphrase.
+disable - disable enabled security and remove key.
+freeze - freeze changing of security states.
+erase - delete existing user encryption key.
+overwrite - wipe the entire nvdimm.
+master_update - enable or update master passphrase.
+master_erase - delete existing user encryption key.
+
+3. Key Management
+-----------------
+
+The key is associated to the payload by the DIMM id. For example:
+
+# cat /sys/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0012:00/ndbus0/nmem0/nfit/id
+8089-a2-1740-00000133
+
+The DIMM id would be provided along with the key payload (passphrase) to
+the kernel.
+
+The security keys are managed on the basis of a single key per DIMM. The
+key "passphrase" is expected to be 32 bytes long. This is similar to the ATA
+security specification [2]. A key is initially acquired via the request_key()
+kernel API call during nvdimm unlock. It is up to the user to make sure that
+all the keys are in the kernel user keyring for unlock.
+
+An nvdimm encrypted-key of format enc32 has the description format of:
+nvdimm:<DIMM id>
+
+See the file ``Documentation/security/keys/trusted-encrypted.rst`` for
+creating encrypted-keys of enc32 format. TPM usage with a master trusted
+key is preferred for sealing the encrypted-keys.
+
+4. Unlocking
+------------
+When the DIMMs are being enumerated by the kernel, the kernel will attempt to
+retrieve the key from the kernel user keyring. This is the only time
+a locked DIMM can be unlocked. Once unlocked, the DIMM will remain unlocked
+until reboot. Typically an entity (e.g. a shell script) will inject all the
+relevant encrypted-keys into the kernel user keyring during the initramfs
+phase. This provides the unlock function access to all the related keys that
+contain the passphrase for the respective nvdimms. It is also recommended
+that the keys are injected before libnvdimm is loaded by modprobe.
+
+5. Update
+---------
+When doing an update, it is expected that the existing key is removed from
+the kernel user keyring and reinjected as a different (old) key. It's
+irrelevant what the key description is for the old key, since we are only
+interested in the keyid when doing the update operation. It is also expected
+that the new key is injected with the description format described earlier
+in this document. The update command written to the sysfs attribute has the
+format:
+
+update <old keyid> <new keyid>
+
+If there is no old keyid because security is being enabled for the first
+time, then a 0 should be passed in.
+
+6. Freeze
+---------
+The freeze operation does not require any keys. The security config can be
+frozen by a user with root privilege.
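+
+As a minimal illustration of the sysfs interface described above, the
+sketch below freezes the security state and reads it back. It is
+hypothetical: it assumes the nmem0 path from section 2, root
+privileges, and a DIMM that supports security::
+
+  security = ('/sys/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0012:00'
+              '/ndbus0/nmem0/security')
+  # "freeze" is the only command that needs no keyid
+  with open(security, 'w') as f:
+      f.write('freeze')
+  with open(security) as f:
+      print(f.read().strip())   # expected to report "frozen"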
+
+7. Disable
+----------
+The security disable command format is:
+
+disable <keyid>
+
+A key with the current passphrase payload that is tied to the nvdimm should
+be in the kernel user keyring.
+
+8. Secure Erase
+---------------
+The command format for doing a secure erase is:
+
+erase <keyid>
+
+A key with the current passphrase payload that is tied to the nvdimm should
+be in the kernel user keyring.
+
+9. Overwrite
+------------
+The command format for doing an overwrite is:
+
+overwrite <keyid>
+
+Overwrite can be done without a key if security is not enabled. A key serial
+of 0 can be passed in to indicate no key.
+
+The sysfs attribute "security" can be polled to wait on overwrite completion.
+Overwrite can last tens of minutes or more depending on nvdimm size.
+
+An encrypted-key with the current user passphrase that is tied to the nvdimm
+should be injected and its keyid should be passed in via sysfs.
+
+10. Master Update
+-----------------
+The command format for doing a master update is:
+
+master_update <old keyid> <new keyid>
+
+The operating mechanism for master update is identical to update, except
+that the master passphrase key is passed to the kernel. The master
+passphrase key is just another encrypted-key.
+
+This command is only available when security is disabled.
+
+11. Master Erase
+----------------
+The command format for doing a master erase is:
+
+master_erase <keyid>
+
+This command has the same operating mechanism as erase, except that the
+master passphrase key is passed to the kernel. The master passphrase key is
+just another encrypted-key.
+
+This command is only available when the master security is enabled, as
+indicated by the extended security status.
+
+[1]: http://pmem.io/documents/NVDIMM_DSM_Interface-V1.8.pdf
+[2]: http://www.t13.org/documents/UploadedDocuments/docs2006/e05179r4-ACS-SecurityClarifications.pdf
diff --git a/Documentation/power/regulator/overview.txt b/Documentation/power/regulator/overview.txt
index 40ca2d6e2742..721b4739ec32 100644
--- a/Documentation/power/regulator/overview.txt
+++ b/Documentation/power/regulator/overview.txt
@@ -22,7 +22,7 @@ Nomenclature
 Some terms used in this document:-
 
   o Regulator - Electronic device that supplies power to other devices.
-      Most regulators can enable and disable their output whilst
+      Most regulators can enable and disable their output while
       some can control their output voltage and or current.
 
   Input Voltage -> Regulator -> Output Voltage
diff --git a/Documentation/powerpc/firmware-assisted-dump.txt b/Documentation/powerpc/firmware-assisted-dump.txt
index bdd344aa18d9..18c5feef2577 100644
--- a/Documentation/powerpc/firmware-assisted-dump.txt
+++ b/Documentation/powerpc/firmware-assisted-dump.txt
@@ -113,7 +113,15 @@ header, is usually reserved at an offset greater than boot
 memory size (see Fig. 1). This area is *not* released: this region
 will be kept permanently reserved, so that it can act as a receptacle
 for a copy of the boot memory content in addition to CPU state
-and HPTE region, in the case a crash does occur.
+and HPTE region, in the case a crash does occur. Since this reserved
+memory area is used only after the system crash, there is no point in
+blocking this significant chunk of memory from the production kernel.
+Hence, the implementation uses the Linux kernel's Contiguous Memory
+Allocator (CMA) for memory reservation if CMA is configured for the
+kernel. With the CMA reservation this memory will be available for
+applications to use, while the kernel is prevented from using it.
+With this, fadump will still be able to capture all of the kernel
+memory and most of the user space memory, except the user pages that
+were present in the CMA region.
 
  o Memory Reservation during first kernel
 
@@ -162,6 +170,9 @@ How to enable firmware-assisted dump (fadump):
 1. Set config option CONFIG_FA_DUMP=y and build kernel.
 2. Boot into linux kernel with 'fadump=on' kernel cmdline option.
+   By default, fadump reserved memory will be initialized as a CMA area.
+   Alternatively, the user can boot the linux kernel with 'fadump=nocma'
+   to prevent fadump from using CMA.
 3. Optionally, user can also set 'crashkernel=' kernel cmdline to
    specify size of the memory to reserve for boot memory dump
    preservation.
@@ -172,6 +183,10 @@ NOTE: 1. 'fadump_reserve_mem=' parameter has been deprecated. Instead
       2. If firmware-assisted dump fails to reserve memory then it will
          fallback to existing kdump mechanism if 'crashkernel=' option
         is set at kernel cmdline.
+      3. If the user wants to capture all of the user space memory, and
+         is OK with the reserved memory not being available to the
+         production system, then the 'fadump=nocma' kernel parameter can
+         be used to fall back to the old behaviour.
 
 Sysfs/debugfs files:
 ------------
diff --git a/Documentation/powerpc/isa-versions.rst b/Documentation/powerpc/isa-versions.rst
new file mode 100644
index 000000000000..812e20cc898c
--- /dev/null
+++ b/Documentation/powerpc/isa-versions.rst
@@ -0,0 +1,74 @@
+CPU to ISA Version Mapping
+==========================
+
+Mapping of some CPU versions to relevant ISA versions.
+
+========= ====================
+CPU       Architecture version
+========= ====================
+Power9    Power ISA v3.0B
+Power8    Power ISA v2.07
+Power7    Power ISA v2.06
+Power6    Power ISA v2.05
+PA6T      Power ISA v2.04
+Cell PPU  - Power ISA v2.02 with some minor exceptions
+          - Plus Altivec/VMX ~= 2.03
+Power5++  Power ISA v2.04 (no VMX)
+Power5+   Power ISA v2.03
+Power5    - PowerPC User Instruction Set Architecture Book I v2.02
+          - PowerPC Virtual Environment Architecture Book II v2.02
+          - PowerPC Operating Environment Architecture Book III v2.02
+PPC970    - PowerPC User Instruction Set Architecture Book I v2.01
+          - PowerPC Virtual Environment Architecture Book II v2.01
+          - PowerPC Operating Environment Architecture Book III v2.01
+          - Plus Altivec/VMX ~= 2.03
+========= ====================
+
+
+Key Features
+------------
+
+========== ==================
+CPU        VMX (aka. Altivec)
+========== ==================
+Power9     Yes
+Power8     Yes
+Power7     Yes
+Power6     Yes
+PA6T       Yes
+Cell PPU   Yes
+Power5++   No
+Power5+    No
+Power5     No
+PPC970     Yes
+========== ==================
+
+========== ====
+CPU        VSX
+========== ====
+Power9     Yes
+Power8     Yes
+Power7     Yes
+Power6     No
+PA6T       No
+Cell PPU   No
+Power5++   No
+Power5+    No
+Power5     No
+PPC970     No
+========== ====
+
+========== ====================
+CPU        Transactional Memory
+========== ====================
+Power9     Yes (* see transactional_memory.txt)
+Power8     Yes
+Power7     No
+Power6     No
+PA6T       No
+Cell PPU   No
+Power5++   No
+Power5+    No
+Power5     No
+PPC970     No
+========== ====================
diff --git a/Documentation/process/1.Intro.rst b/Documentation/process/1.Intro.rst
index e782ae2eef58..c3d0270bbfb3 100644
--- a/Documentation/process/1.Intro.rst
+++ b/Documentation/process/1.Intro.rst
@@ -1,3 +1,5 @@
+.. 
_development_process_intro: + Introduction ============ diff --git a/Documentation/process/4.Coding.rst b/Documentation/process/4.Coding.rst index eb4b185d168c..cfe264889447 100644 --- a/Documentation/process/4.Coding.rst +++ b/Documentation/process/4.Coding.rst @@ -315,7 +315,8 @@ variety of potential coding problems; it can also propose fixes for those problems. Quite a few "semantic patches" for the kernel have been packaged under the scripts/coccinelle directory; running "make coccicheck" will run through those semantic patches and report on any problems found. See -Documentation/dev-tools/coccinelle.rst for more information. +:ref:`Documentation/dev-tools/coccinelle.rst ` +for more information. Other kinds of portability errors are best found by compiling your code for other architectures. If you do not happen to have an S/390 system or a diff --git a/Documentation/process/5.Posting.rst b/Documentation/process/5.Posting.rst index c418c5d6cae4..4213e580f273 100644 --- a/Documentation/process/5.Posting.rst +++ b/Documentation/process/5.Posting.rst @@ -9,9 +9,10 @@ kernel. Unsurprisingly, the kernel development community has evolved a set of conventions and procedures which are used in the posting of patches; following them will make life much easier for everybody involved. This document will attempt to cover these expectations in reasonable detail; -more information can also be found in the files process/submitting-patches.rst, -process/submitting-drivers.rst, and process/submit-checklist.rst in the kernel -documentation directory. +more information can also be found in the files +:ref:`Documentation/process/submitting-patches.rst `, +:ref:`Documentation/process/submitting-drivers.rst ` +and :ref:`Documentation/process/submit-checklist.rst `. When to post @@ -198,8 +199,10 @@ pass it to diff with the "-X" option. The tags mentioned above are used to describe how various developers have been associated with the development of this patch. They are described in -detail in the process/submitting-patches.rst document; what follows here is a -brief summary. Each of these lines has the format: +detail in +the :ref:`Documentation/process/submitting-patches.rst ` +document; what follows here is a brief summary. Each of these lines has +the format: :: @@ -210,8 +213,8 @@ The tags in common use are: - Signed-off-by: this is a developer's certification that he or she has the right to submit the patch for inclusion into the kernel. It is an agreement to the Developer's Certificate of Origin, the full text of - which can be found in Documentation/process/submitting-patches.rst. Code - without a proper signoff cannot be merged into the mainline. + which can be found in :ref:`Documentation/process/submitting-patches.rst ` + Code without a proper signoff cannot be merged into the mainline. - Co-developed-by: states that the patch was also created by another developer along with the original author. This is useful at times when multiple @@ -226,7 +229,7 @@ The tags in common use are: it to work. - Reviewed-by: the named developer has reviewed the patch for correctness; - see the reviewer's statement in Documentation/process/submitting-patches.rst + see the reviewer's statement in :ref:`Documentation/process/submitting-patches.rst ` for more detail. - Reported-by: names a user who reported a problem which is fixed by this @@ -253,8 +256,8 @@ take care of: be examined in any detail. If there is any doubt at all, mail the patch to yourself and convince yourself that it shows up intact. 
- Documentation/process/email-clients.rst has some helpful hints on making - specific mail clients work for sending patches. + :ref:`Documentation/process/email-clients.rst ` has some + helpful hints on making specific mail clients work for sending patches. - Are you sure your patch is free of silly mistakes? You should always run patches through scripts/checkpatch.pl and address the complaints it diff --git a/Documentation/process/8.Conclusion.rst b/Documentation/process/8.Conclusion.rst index 1c7f54cd0261..8395aa2c1f3a 100644 --- a/Documentation/process/8.Conclusion.rst +++ b/Documentation/process/8.Conclusion.rst @@ -5,9 +5,10 @@ For more information There are numerous sources of information on Linux kernel development and related topics. First among those will always be the Documentation -directory found in the kernel source distribution. The top-level process/howto.rst -file is an important starting point; process/submitting-patches.rst and -process/submitting-drivers.rst are also something which all kernel developers should +directory found in the kernel source distribution. The top-level :ref:`process/howto.rst ` +file is an important starting point; :ref:`process/submitting-patches.rst ` +and :ref:`process/submitting-drivers.rst ` +are also something which all kernel developers should read. Many internal kernel APIs are documented using the kerneldoc mechanism; "make htmldocs" or "make pdfdocs" can be used to generate those documents in HTML or PDF format (though the version of TeX shipped by some diff --git a/Documentation/process/adding-syscalls.rst b/Documentation/process/adding-syscalls.rst index 88a7d5c8bb2f..1c3a840d06b9 100644 --- a/Documentation/process/adding-syscalls.rst +++ b/Documentation/process/adding-syscalls.rst @@ -1,3 +1,6 @@ + +.. _addsyscalls: + Adding a New System Call ======================== diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst index d1bf143b446f..18735dc460a0 100644 --- a/Documentation/process/changes.rst +++ b/Documentation/process/changes.rst @@ -326,7 +326,7 @@ Kernel documentation Sphinx ------ -Please see :ref:`sphinx_install` in ``Documentation/doc-guide/sphinx.rst`` +Please see :ref:`sphinx_install` in :ref:`Documentation/doc-guide/sphinx.rst ` for details about Sphinx requirements. Getting updated software diff --git a/Documentation/process/coding-style.rst b/Documentation/process/coding-style.rst index 4e7c0a1c427a..277c113376a6 100644 --- a/Documentation/process/coding-style.rst +++ b/Documentation/process/coding-style.rst @@ -1075,5 +1075,5 @@ gcc internals and indent, all available from http://www.gnu.org/manual/ WG14 is the international standardization working group for the programming language C, URL: http://www.open-std.org/JTC1/SC22/WG14/ -Kernel process/coding-style.rst, by greg@kroah.com at OLS 2002: +Kernel :ref:`process/coding-style.rst `, by greg@kroah.com at OLS 2002: http://www.kroah.com/linux/talks/ols_2002_kernel_codingstyle_talk/html/ diff --git a/Documentation/process/howto.rst b/Documentation/process/howto.rst index dcb25f94188e..58b2f46c4f98 100644 --- a/Documentation/process/howto.rst +++ b/Documentation/process/howto.rst @@ -1,3 +1,5 @@ +.. _process_howto: + HOWTO do Linux kernel development ================================= @@ -296,9 +298,9 @@ two weeks, but it can be longer if there are no pressing problems. A security-related problem, instead, can cause a release to happen almost instantly. 
-The file Documentation/process/stable-kernel-rules.rst in the kernel tree -documents what kinds of changes are acceptable for the -stable tree, and -how the release process works. +The file :ref:`Documentation/process/stable-kernel-rules.rst ` +in the kernel tree documents what kinds of changes are acceptable for +the -stable tree, and how the release process works. 4.x -git patches ~~~~~~~~~~~~~~~~ @@ -358,7 +360,8 @@ tool. For details on how to use the kernel bugzilla, please see: https://bugzilla.kernel.org/page.cgi?id=faq.html -The file admin-guide/reporting-bugs.rst in the main kernel source directory has a good +The file :ref:`admin-guide/reporting-bugs.rst ` +in the main kernel source directory has a good template for how to report a possible kernel bug, and details what kind of information is needed by the kernel developers to help track down the problem. @@ -424,7 +427,7 @@ add your statements between the individual quoted sections instead of writing at the top of the mail. If you add patches to your mail, make sure they are plain readable text -as stated in Documentation/process/submitting-patches.rst. +as stated in :ref:`Documentation/process/submitting-patches.rst `. Kernel developers don't want to deal with attachments or compressed patches; they may want to comment on individual lines of your patch, which works only that way. Make sure you diff --git a/Documentation/process/kernel-driver-statement.rst b/Documentation/process/kernel-driver-statement.rst index e78452c2164c..a849790a68bc 100644 --- a/Documentation/process/kernel-driver-statement.rst +++ b/Documentation/process/kernel-driver-statement.rst @@ -1,3 +1,5 @@ +.. _process_statement_driver: + Kernel Driver Statement ----------------------- diff --git a/Documentation/process/kernel-enforcement-statement.rst b/Documentation/process/kernel-enforcement-statement.rst index 6816c12d6956..e5a1be476047 100644 --- a/Documentation/process/kernel-enforcement-statement.rst +++ b/Documentation/process/kernel-enforcement-statement.rst @@ -1,4 +1,6 @@ -Linux Kernel Enforcement Statement +.. _process_statement_kernel: + +Linux Kernel Enforcement Statement ---------------------------------- As developers of the Linux kernel, we have a keen interest in how our software diff --git a/Documentation/process/magic-number.rst b/Documentation/process/magic-number.rst index 633be1043690..547bbf28e615 100644 --- a/Documentation/process/magic-number.rst +++ b/Documentation/process/magic-number.rst @@ -1,3 +1,5 @@ +.. _magicnumbers: + Linux magic numbers =================== diff --git a/Documentation/process/management-style.rst b/Documentation/process/management-style.rst index 85ef8ca8f639..186753ff3d2d 100644 --- a/Documentation/process/management-style.rst +++ b/Documentation/process/management-style.rst @@ -5,8 +5,9 @@ Linux kernel management style This is a short document describing the preferred (or made up, depending on who you ask) management style for the linux kernel. It's meant to -mirror the process/coding-style.rst document to some degree, and mainly written to -avoid answering [#f1]_ the same (or similar) questions over and over again. +mirror the :ref:`process/coding-style.rst ` document to some +degree, and mainly written to avoid answering [#f1]_ the same (or similar) +questions over and over again. 
Management style is very personal and much harder to quantify than simple coding style rules, so this document may or may not have anything diff --git a/Documentation/process/submitting-drivers.rst b/Documentation/process/submitting-drivers.rst index b38bf2054ce3..58bc047e7b95 100644 --- a/Documentation/process/submitting-drivers.rst +++ b/Documentation/process/submitting-drivers.rst @@ -16,7 +16,8 @@ you should probably talk to XFree86 (http://www.xfree86.org/) and/or X.Org Oh, and we don't really recommend submitting changes to XFree86 :) -Also read the Documentation/process/submitting-patches.rst document. +Also read the :ref:`Documentation/process/submitting-patches.rst ` +document. Allocating Device Numbers @@ -27,7 +28,8 @@ by the Linux assigned name and number authority (currently this is Torben Mathiasen). The site is http://www.lanana.org/. This also deals with allocating numbers for devices that are not going to be submitted to the mainstream kernel. -See Documentation/admin-guide/devices.rst for more information on this. +See :ref:`Documentation/admin-guide/devices.rst ` +for more information on this. If you don't use assigned numbers then when your device is submitted it will be given an assigned number even if that is different from values you may @@ -117,7 +119,7 @@ PM support: anything. For the driver testing instructions see Documentation/power/drivers-testing.txt and for a relatively complete overview of the power management issues related to - drivers see Documentation/driver-api/pm/devices.rst. + drivers see :ref:`Documentation/driver-api/pm/devices.rst `. Control: In general if there is active maintenance of a driver by diff --git a/Documentation/s390/3270.ChangeLog b/Documentation/s390/3270.ChangeLog index 031c36081946..ecaf60b6c381 100644 --- a/Documentation/s390/3270.ChangeLog +++ b/Documentation/s390/3270.ChangeLog @@ -16,7 +16,7 @@ Sep 2002: Dynamically get 3270 input buffer Sep 2002: Fix tubfs kmalloc()s * Do read and write lengths correctly in fs3270_read() - and fs3270_write(), whilst never asking kmalloc() + and fs3270_write(), while never asking kmalloc() for more than 0x800 bytes. Affects tubfs.c and tubio.h. Sep 2002: Recognize 3270 control unit type 3174 diff --git a/Documentation/scsi/scsi-parameters.txt b/Documentation/scsi/scsi-parameters.txt index 92999d4e0cb8..25a4b4cf04a6 100644 --- a/Documentation/scsi/scsi-parameters.txt +++ b/Documentation/scsi/scsi-parameters.txt @@ -97,11 +97,6 @@ parameters may be changed at runtime by the command allowing boot to proceed. none ignores them, expecting user space to do the scan. - scsi_mod.use_blk_mq= - [SCSI] use blk-mq I/O path by default - See SCSI_MQ_DEFAULT in drivers/scsi/Kconfig. - Format: - sim710= [SCSI,HW] See header of drivers/scsi/sim710.c. 
diff --git a/Documentation/scsi/scsi_mid_low_api.txt b/Documentation/scsi/scsi_mid_low_api.txt index 177c031763c0..c1dd4939f4ae 100644 --- a/Documentation/scsi/scsi_mid_low_api.txt +++ b/Documentation/scsi/scsi_mid_low_api.txt @@ -1098,8 +1098,6 @@ of interest: unchecked_isa_dma - 1=>only use bottom 16 MB of ram (ISA DMA addressing restriction), 0=>can use full 32 bit (or better) DMA address space - use_clustering - 1=>SCSI commands in mid level's queue can be merged, - 0=>disallow SCSI command merging no_async_abort - 1=>Asynchronous aborts are not supported 0=>Timed-out commands will be aborted asynchronously hostt - pointer to driver's struct scsi_host_template from which diff --git a/Documentation/security/credentials.rst b/Documentation/security/credentials.rst index 5bb7125faeee..282e79feee6a 100644 --- a/Documentation/security/credentials.rst +++ b/Documentation/security/credentials.rst @@ -291,7 +291,7 @@ for example), it must be considered immutable, barring two exceptions: 1. The reference count may be altered. - 2. Whilst the keyring subscriptions of a set of credentials may not be + 2. While the keyring subscriptions of a set of credentials may not be changed, the keyrings subscribed to may have their contents altered. To catch accidental credential alteration at compile time, struct task_struct @@ -358,7 +358,7 @@ Once a reference has been obtained, it must be released with ``put_cred()``, Accessing Another Task's Credentials ------------------------------------ -Whilst a task may access its own credentials without the need for locking, the +While a task may access its own credentials without the need for locking, the same is not true of a task wanting to access another task's credentials. It must use the RCU read lock and ``rcu_dereference()``. @@ -382,7 +382,7 @@ This should be used inside the RCU read lock, as in the following example:: } Should it be necessary to hold another task's credentials for a long period of -time, and possibly to sleep whilst doing so, then the caller should get a +time, and possibly to sleep while doing so, then the caller should get a reference on them using:: const struct cred *get_task_cred(struct task_struct *task); @@ -442,7 +442,7 @@ duplicate of the current process's credentials, returning with the mutex still held if successful. It returns NULL if not successful (out of memory). The mutex prevents ``ptrace()`` from altering the ptrace state of a process -whilst security checks on credentials construction and changing is taking place +while security checks on credentials construction and changing is taking place as the ptrace state may alter the outcome, particularly in the case of ``execve()``. diff --git a/Documentation/security/keys/request-key.rst b/Documentation/security/keys/request-key.rst index 21e27238cec6..600ad67d1707 100644 --- a/Documentation/security/keys/request-key.rst +++ b/Documentation/security/keys/request-key.rst @@ -132,7 +132,7 @@ Negative Instantiation And Rejection Rather than instantiating a key, it is possible for the possessor of an authorisation key to negatively instantiate a key that's under construction. This is a short duration placeholder that causes any attempt at re-requesting -the key whilst it exists to fail with error ENOKEY if negated or the specified +the key while it exists to fail with error ENOKEY if negated or the specified error if rejected. 
This is provided to prevent excessive repeated spawning of /sbin/request-key diff --git a/Documentation/security/keys/trusted-encrypted.rst b/Documentation/security/keys/trusted-encrypted.rst index 3bb24e09a332..e8a1c35cd277 100644 --- a/Documentation/security/keys/trusted-encrypted.rst +++ b/Documentation/security/keys/trusted-encrypted.rst @@ -76,7 +76,7 @@ Usage:: Where:: - format:= 'default | ecryptfs' + format:= 'default | ecryptfs | enc32' key-type:= 'trusted' | 'user' @@ -173,3 +173,7 @@ are anticipated. In particular the new format 'ecryptfs' has been defined in in order to use encrypted keys to mount an eCryptfs filesystem. More details about the usage can be found in the file ``Documentation/security/keys/ecryptfs.rst``. + +Another new format 'enc32' has been defined in order to support encrypted keys +with payload size of 32 bytes. This will initially be used for nvdimm security +but may expand to other usages that require 32 bytes payload. diff --git a/Documentation/serial/serial-rs485.txt b/Documentation/serial/serial-rs485.txt index 389fcd4759e9..ce0c1a9b8aab 100644 --- a/Documentation/serial/serial-rs485.txt +++ b/Documentation/serial/serial-rs485.txt @@ -75,7 +75,7 @@ /* Set rts delay after send, if needed: */ rs485conf.delay_rts_after_send = ...; - /* Set this flag if you want to receive data even whilst sending data */ + /* Set this flag if you want to receive data even while sending data */ rs485conf.flags |= SER_RS485_RX_DURING_TX; if (ioctl (fd, TIOCSRS485, &rs485conf) < 0) { diff --git a/Documentation/sh/new-machine.txt b/Documentation/sh/new-machine.txt index f0354164cb0e..e0961a66130b 100644 --- a/Documentation/sh/new-machine.txt +++ b/Documentation/sh/new-machine.txt @@ -116,7 +116,6 @@ might look something like: * arch/sh/boards/vapor/setup.c - Setup code for imaginary board */ #include -#include /* for board_time_init() */ const char *get_system_type(void) { @@ -132,13 +131,6 @@ int __init platform_setup(void) * this board. */ - /* - * Presume all FooTech boards have the same broken timer, - * and also presume that we've defined foo_timer_init to - * do something useful. - */ - board_time_init = foo_timer_init; - /* Start-up imaginary PCI ... */ /* And whatever else ... */ diff --git a/Documentation/sound/soc/dai.rst b/Documentation/sound/soc/dai.rst index 55820e51708f..2e99183a7a47 100644 --- a/Documentation/sound/soc/dai.rst +++ b/Documentation/sound/soc/dai.rst @@ -24,7 +24,7 @@ I2S === I2S is a common 4 wire DAI used in HiFi, STB and portable devices. The Tx and -Rx lines are used for audio transmission, whilst the bit clock (BCLK) and +Rx lines are used for audio transmission, while the bit clock (BCLK) and left/right clock (LRC) synchronise the link. I2S is flexible in that either the controller or CODEC can drive (master) the BCLK and LRC clock lines. Bit clock usually varies depending on the sample rate and the master system clock @@ -49,9 +49,9 @@ PCM PCM is another 4 wire interface, very similar to I2S, which can support a more flexible protocol. It has bit clock (BCLK) and sync (SYNC) lines that are used -to synchronise the link whilst the Tx and Rx lines are used to transmit and +to synchronise the link while the Tx and Rx lines are used to transmit and receive the audio data. Bit clock usually varies depending on sample rate -whilst sync runs at the sample rate. PCM also supports Time Division +while sync runs at the sample rate. 
PCM also supports Time Division Multiplexing (TDM) in that several devices can use the bus simultaneously (this is sometimes referred to as network mode). diff --git a/Documentation/sound/soc/dpcm.rst b/Documentation/sound/soc/dpcm.rst index fe61e02277f8..f6845b2278ea 100644 --- a/Documentation/sound/soc/dpcm.rst +++ b/Documentation/sound/soc/dpcm.rst @@ -218,7 +218,7 @@ like a BT phone call :- * * <----DAI5-----> FM ************* -This allows the host CPU to sleep whilst the DSP, MODEM DAI and the BT DAI are +This allows the host CPU to sleep while the DSP, MODEM DAI and the BT DAI are still in operation. A BE DAI link can also set the codec to a dummy device if the code is a device diff --git a/Documentation/static-keys.txt b/Documentation/static-keys.txt index ab16efe0c79d..d68135560895 100644 --- a/Documentation/static-keys.txt +++ b/Documentation/static-keys.txt @@ -156,7 +156,7 @@ or increment/decrement function. Note that switching branches results in some locks being taken, particularly the CPU hotplug lock (in order to avoid races against -CPUs being brought in the kernel whilst the kernel is getting +CPUs being brought in the kernel while the kernel is getting patched). Calling the static key API from within a hotplug notifier is thus a sure deadlock recipe. In order to still allow use of the functionnality, the following functions are provided: diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt index 7d73882e2c27..187ce4f599a2 100644 --- a/Documentation/sysctl/vm.txt +++ b/Documentation/sysctl/vm.txt @@ -63,6 +63,7 @@ Currently, these files are in /proc/sys/vm: - swappiness - user_reserve_kbytes - vfs_cache_pressure +- watermark_boost_factor - watermark_scale_factor - zone_reclaim_mode @@ -856,6 +857,26 @@ ten times more freeable objects than there are. ============================================================= +watermark_boost_factor: + +This factor controls the level of reclaim when memory is being fragmented. +It defines the percentage of the high watermark of a zone that will be +reclaimed if pages of different mobility are being mixed within pageblocks. +The intent is to reduce the amount of work compaction must do in the future +and to increase the success rate of future high-order allocations such as +SLUB allocations, THP and hugetlbfs pages. + +To make it sensible with respect to the watermark_scale_factor parameter, +the unit is in fractions of 10,000. The default value of 15,000 means +that up to 150% of the high watermark will be reclaimed in the event of +a pageblock being mixed due to fragmentation. The level of reclaim is +determined by the number of fragmentation events that occurred in the +recent past. If this value is smaller than a pageblock then a pageblock's +worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor +of 0 will disable the feature. + +============================================================= + watermark_scale_factor: This factor controls the aggressiveness of kswapd. It defines the diff --git a/Documentation/thermal/power_allocator.txt b/Documentation/thermal/power_allocator.txt index a1ce2235f121..9fb0ff06dca9 100644 --- a/Documentation/thermal/power_allocator.txt +++ b/Documentation/thermal/power_allocator.txt @@ -110,7 +110,7 @@ the permitted thermal "ramp" of the system. For instance, a lower `k_pu` value will provide a slower ramp, at the cost of capping available capacity at a low temperature.
On the other hand, a high value of `k_pu` will result in the governor granting very high power -whilst temperature is low, and may lead to temperature overshooting. +while temperature is low, and may lead to temperature overshooting. The default value for `k_pu` is: diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst index f82434f2795e..0131df7f5968 100644 --- a/Documentation/trace/ftrace.rst +++ b/Documentation/trace/ftrace.rst @@ -24,13 +24,13 @@ It can be used for debugging or analyzing latencies and performance issues that take place outside of user-space. Although ftrace is typically considered the function tracer, it -is really a frame work of several assorted tracing utilities. +is really a framework of several assorted tracing utilities. There's latency tracing to examine what occurs between interrupts disabled and enabled, as well as for preemption and from a time a task is woken to the task is actually scheduled in. One of the most common uses of ftrace is the event tracing. -Through out the kernel is hundreds of static event points that +Throughout the kernel is hundreds of static event points that can be enabled via the tracefs file system to see what is going on in certain parts of the kernel. @@ -462,7 +462,7 @@ of ftrace. Here is a list of some of the key files: mono_raw: This is the raw monotonic clock (CLOCK_MONOTONIC_RAW) - which is montonic but is not subject to any rate adjustments + which is monotonic but is not subject to any rate adjustments and ticks at the same rate as the hardware clocksource. boot: @@ -914,8 +914,8 @@ The above is mostly meaningful for kernel developers. current trace and the next trace. - '$' - greater than 1 second - - '@' - greater than 100 milisecond - - '*' - greater than 10 milisecond + - '@' - greater than 100 millisecond + - '*' - greater than 10 millisecond - '#' - greater than 1000 microsecond - '!' - greater than 100 microsecond - '+' - greater than 10 microsecond @@ -2541,7 +2541,7 @@ At compile time every C file object is run through the recordmcount program (located in the scripts directory). This program will parse the ELF headers in the C object to find all the locations in the .text section that call mcount. Starting -with gcc verson 4.6, the -mfentry has been added for x86, which +with gcc version 4.6, the -mfentry has been added for x86, which calls "__fentry__" instead of "mcount". Which is called before the creation of the stack frame. @@ -2978,7 +2978,7 @@ The following commands are supported: When the function is hit, it will dump the contents of the ftrace ring buffer to the console. This is useful if you need to debug something, and want to dump the trace when a certain function - is hit. Perhaps its a function that is called before a tripple + is hit. Perhaps it's a function that is called before a triple fault happens and does not allow you to get a regular dump. - cpudump: diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst index 306997941ba1..6b4107cf4b98 100644 --- a/Documentation/trace/index.rst +++ b/Documentation/trace/index.rst @@ -22,3 +22,4 @@ Linux Tracing Technologies hwlat_detector intel_th stm + sys-t diff --git a/Documentation/translations/it_IT/admin-guide/README.rst b/Documentation/translations/it_IT/admin-guide/README.rst new file mode 100644 index 000000000000..80f5ffc94a9e --- /dev/null +++ b/Documentation/translations/it_IT/admin-guide/README.rst @@ -0,0 +1,12 @@ +.. 
include:: ../disclaimer-ita.rst + +:Original: :ref:`Documentation/admin-guide/README.rst ` + +.. _it_readme: + +Rilascio del kernel Linux 4.x +=================================================== + +.. warning:: + + TODO ancora da tradurre diff --git a/Documentation/translations/it_IT/admin-guide/security-bugs.rst b/Documentation/translations/it_IT/admin-guide/security-bugs.rst new file mode 100644 index 000000000000..18a5822c7d9a --- /dev/null +++ b/Documentation/translations/it_IT/admin-guide/security-bugs.rst @@ -0,0 +1,12 @@ +.. include:: ../disclaimer-ita.rst + +:Original: :ref:`Documentation/admin-guide/security-bugs.rst ` + +.. _it_securitybugs: + +Bachi di sicurezza +================== + +.. warning:: + + TODO ancora da tradurre diff --git a/Documentation/translations/it_IT/doc-guide/kernel-doc.rst b/Documentation/translations/it_IT/doc-guide/kernel-doc.rst index 2bf1c1e2f394..a4ecd8f27631 100644 --- a/Documentation/translations/it_IT/doc-guide/kernel-doc.rst +++ b/Documentation/translations/it_IT/doc-guide/kernel-doc.rst @@ -107,7 +107,7 @@ macro simil-funzioni è il seguente:: * Context: Describes whether the function can sleep, what locks it takes, * releases, or expects to be held. It can extend over multiple * lines. - * Return: Describe the return value of foobar. + * Return: Describe the return value of function_name. * * The return value description can also have multiple paragraphs, and should * be placed at the end of the comment block. diff --git a/Documentation/translations/it_IT/index.rst b/Documentation/translations/it_IT/index.rst index 898a7823a6f4..ea9b2916b3e4 100644 --- a/Documentation/translations/it_IT/index.rst +++ b/Documentation/translations/it_IT/index.rst @@ -86,6 +86,7 @@ vostre modifiche molto più semplice .. toctree:: :maxdepth: 2 + process/index doc-guide/index kernel-hacking/index diff --git a/Documentation/translations/it_IT/kernel-hacking/locking.rst b/Documentation/translations/it_IT/kernel-hacking/locking.rst index 753643622c23..0ef31666663b 100644 --- a/Documentation/translations/it_IT/kernel-hacking/locking.rst +++ b/Documentation/translations/it_IT/kernel-hacking/locking.rst @@ -593,8 +593,8 @@ l'opzione ``GFP_KERNEL`` che è permessa solo in contesto utente. Ho supposto che :c:func:`cache_add()` venga chiamata dal contesto utente, altrimenti questa opzione deve diventare un parametro di :c:func:`cache_add()`. -Exposing Objects Outside This File ----------------------------------- +Esporre gli oggetti al di fuori del file +---------------------------------------- Se i vostri oggetti contengono più informazioni, potrebbe non essere sufficiente copiare i dati avanti e indietro: per esempio, altre parti del diff --git a/Documentation/translations/it_IT/process/1.Intro.rst b/Documentation/translations/it_IT/process/1.Intro.rst new file mode 100644 index 000000000000..c1be6dc398a7 --- /dev/null +++ b/Documentation/translations/it_IT/process/1.Intro.rst @@ -0,0 +1,297 @@ +.. include:: ../disclaimer-ita.rst + +:Original: :ref:`Documentation/process/1.Intro.rst ` +:Translator: Alessia Mantegazza + +.. _it_development_intro: + +Introduzione +============ + +Riepilogo generale +------------------ + +Il resto di questa sezione riguarda il processo di sviluppo del kernel e +quella sorta di frustrazione che gli sviluppatori e i loro datori di lavoro +potrebbero dover affrontare. 
Ci sono molte ragioni per le quali il codice +per il kernel dovrebbe essere incorporato nel kernel ufficiale, fra le quali: +disponibilità immediata agli utilizzatori, supporto della comunità in +differenti modalità, e la capacità di influenzare la direzione dello sviluppo +del kernel. +Il codice che contribuisce al kernel Linux deve essere reso disponibile sotto +una licenza GPL-compatibile. + +La sezione :ref:`it_development_process` introduce il processo di sviluppo, +il ciclo di rilascio del kernel, ed i meccanismi della finestra +d'incorporazione. Il capitolo copre le varie fasi di una modifica: sviluppo, +revisione e ciclo d'incorporazione. Ci sono alcuni dibattiti su strumenti e +liste di discussione. Gli sviluppatori che vogliono iniziare a sviluppare +qualcosa per il kernel sono invitati ad individuare e sistemare bachi come +esercizio iniziale. + +La sezione :ref:`it_development_early_stage` copre i primi stadi della +pianificazione di un progetto di sviluppo, con particolare enfasi sul +coinvolgimento della comunità, il prima possibile. + +La sezione :ref:`it_development_coding` riguarda il processo di scrittura +del codice. Qui, sono esposte le diverse insidie che sono state già affrontate +da altri sviluppatori. Il capitolo copre anche alcuni dei requisiti per le +modifiche, ed esiste un'introduzione ad alcuni strumenti che possono aiutarvi +nell'assicurarvi che le modifiche per il kernel siano corrette. + +La sezione :ref:`it_development_posting` parla del processo di pubblicazione +delle modifiche per la revisione. Per essere prese in considerazione dalla +comunità di sviluppo, le modifiche devono essere propriamente formattate ed +esposte, e devono essere inviate nel posto giusto. Seguire i consigli presenti +in questa sezione dovrebbe essere d'aiuto nell'assicurare la migliore +accoglienza possibile del vostro lavoro. + +La sezione :ref:`it_development_followthrough` copre ciò che accade dopo +la pubblicazione delle modifiche; a questo punto il lavoro è lontano +dall'essere concluso. Lavorare con i revisori è una parte cruciale del +processo di sviluppo; questa sezione offre una serie di consigli su come +evitare problemi in questa importante fase. Gli sviluppatori vengono messi in +guardia dal ritenere che il lavoro sia concluso quando una modifica è +incorporata nei sorgenti principali. + +La sezione :ref:`it_development_advancedtopics` introduce un paio di argomenti +"avanzati": gestire le modifiche con git e controllare le modifiche pubblicate +da altri. + +La sezione :ref:`it_development_conclusion` chiude il documento con dei +riferimenti ad altre fonti che forniscono ulteriori informazioni sullo sviluppo +del kernel. + +Di cosa parla questo documento +------------------------------ + +Il kernel Linux ha oltre 8 milioni di linee di codice e ben oltre 1000 +contributori ad ogni rilascio; è uno dei più vasti e più attivi progetti di +software libero mai esistiti. Sin dal suo modesto inizio nel 1991, +questo kernel si è evoluto in un componente per sistemi operativi di +prim'ordine, che fa funzionare piccoli riproduttori musicali, PC, grandi +supercomputer e tutte le altre tipologie di sistemi fra questi estremi. È una +soluzione robusta, efficiente ed adattabile a praticamente qualsiasi +situazione. + +Con la crescita di Linux è arrivato anche un aumento di sviluppatori +(ed aziende) desiderosi di partecipare a questo sviluppo. I produttori di +hardware vogliono assicurarsi che i loro prodotti siano supportati da Linux, +rendendo questi prodotti attrattivi agli utenti Linux.
I produttori di +sistemi integrati, che usano Linux come componente di un prodotto integrato, +vogliono che Linux sia il più possibile capace e adeguato al compito da +svolgere. Fornitori ed altri produttori di software che basano i propri +prodotti su Linux hanno un chiaro interesse verso capacità, prestazioni ed +affidabilità del kernel Linux. Anche gli utenti finali, spesso, vorrebbero +cambiare Linux per renderlo più aderente alle proprie necessità. + +Una delle caratteristiche più coinvolgenti di Linux è quella dell'accessibilità +per gli sviluppatori; chiunque con le capacità richieste può migliorare +Linux ed influenzarne la direzione di sviluppo. Prodotti non open-source non +possono offrire questo tipo di apertura, che è una caratteristica del software +libero. Anzi, il kernel è persino più aperto rispetto a molti altri +progetti di software libero. Un classico ciclo di sviluppo trimestrale può +coinvolgere 1000 sviluppatori che lavorano per più di 100 differenti aziende +(o per nessuna azienda). + +Lavorare con la comunità di sviluppo del kernel non è particolarmente +difficile. Ma, ciononostante, diversi potenziali contributori hanno trovato +delle difficoltà quando hanno cercato di lavorare sul kernel. La comunità del +kernel utilizza un proprio modo di operare che le permette di funzionare +agevolmente (e di generare un prodotto di alta qualità) in un ambiente dove +migliaia di righe di codice sono modificate ogni giorno. Quindi non deve +sorprendere che il processo di sviluppo del kernel differisca notevolmente dai +metodi di sviluppo privati. + +Il processo di sviluppo del Kernel può, dall'altro lato, risultare +intimidatorio e strano ai nuovi sviluppatori, ma ha dietro di sé buone ragioni +e solide esperienze. Uno sviluppatore che non comprende i modi della comunità +del kernel (o, peggio, che cerchi di aggirarli o violarli) avrà un'esperienza +deludente nel proprio bagaglio. La comunità di sviluppo, sebbene sia utile +a coloro che cercano di imparare, ha poco tempo da dedicare a coloro che non +ascoltano o a coloro che non sono interessati al processo di sviluppo. + +Si spera che coloro che leggono questo documento saranno in grado di evitare +queste esperienze spiacevoli. C'è molto materiale qui, ma lo sforzo della +lettura sarà ripagato in breve tempo. La comunità di sviluppo ha sempre +bisogno di sviluppatori che vogliano aiutare a rendere il kernel migliore; +il testo seguente potrebbe esservi d'aiuto - o essere d'aiuto ai vostri +collaboratori - per entrare a far parte della nostra comunità. + +Crediti +------- + +Questo documento è stato scritto da Jonathan Corbet, corbet@lwn.net. +È stato migliorato da Johannes Berg, James Berry, Alex Chiang, Roland +Dreier, Randy Dunlap, Jake Edge, Jiri Kosina, Matt Mackall, Arthur Marsh, +Amanda McPherson, Andrew Morton, Andrew Price, Tsugikazu Shibata e Jochen Voß. + +Questo lavoro è stato supportato dalla Linux Foundation; un ringraziamento +speciale ad Amanda McPherson, che ha visto il valore di questo lavoro e lo ha +reso possibile. + +L'importanza d'avere il codice nei sorgenti principali +------------------------------------------------------ + +Alcune aziende e sviluppatori ogni tanto si domandano perché dovrebbero +preoccuparsi di apprendere come lavorare con la comunità del kernel e di +inserire il loro codice nel ramo di sviluppo principale (per ramo principale +s'intende quello mantenuto da Linus Torvalds e usato come base dai +distributori Linux).
Nel breve termine, contribuire al codice può sembrare +un costo inutile; può sembrare più facile tenere separato il proprio codice e +supportare direttamente i suoi utilizzatori. La verità è che il tenere il +codice separato ("fuori dai sorgenti", *"out-of-tree"*) è un falso risparmio. + +Per dimostrare i costi di un codice "fuori dai sorgenti", eccovi +alcuni aspetti rilevanti del processo di sviluppo kernel; la maggior parte +di essi saranno approfonditi dettagliatamente più avanti in questo documento. +Considerate: + +- Il codice che è stato inserito nel ramo principale del kernel è disponibile + a tutti gli utilizzatori Linux. Sarà automaticamente presente in tutte le + distribuzioni che lo consentono. Non c'è bisogno di dischi con i driver, di + scaricare file, o della scocciatura del dover supportare diverse versioni di + diverse distribuzioni; funziona già tutto, per gli sviluppatori e per gli + utilizzatori. L'inserimento nel ramo principale risolve un gran numero di + problemi di distribuzione e di supporto. + +- Nonostante gli sviluppatori kernel si sforzino di tenere stabile + l'interfaccia dello spazio utente, quella interna al kernel è in continuo + cambiamento. La mancanza di un'interfaccia interna stabile è una deliberata + decisione di progettazione; ciò permette che i miglioramenti fondamentali + vengano fatti in un qualsiasi momento e che risultino fatti con un codice di + alta qualità. Ma una delle conseguenze di questa politica è che qualsiasi + codice "fuori dai sorgenti" richiede costante manutenzione per renderlo + funzionante coi kernel più recenti. Tenere un codice "fuori dai sorgenti" + richiede una mole di lavoro significativa solo per farlo funzionare. + + Invece, il codice che si trova nel ramo principale non necessita di questo + tipo di lavoro poiché ad ogni sviluppatore che faccia una modifica alle + interfacce viene richiesto di sistemare anche il codice che utilizza + quell'interfaccia. Quindi, il codice che è stato inserito nel ramo principale + ha dei costi di mantenimento significativamente più bassi. + +- Oltre a ciò, spesso il codice che è all'interno del kernel sarà migliorato da + altri sviluppatori. Dare pieni poteri alla vostra comunità di utenti e ai + clienti può portare a sorprendenti risultati che migliorano i vostri + prodotti. + +- Il codice kernel è soggetto a revisioni, sia prima che dopo l'inserimento + nel ramo principale. Non importa quanto forti fossero le abilità dello + sviluppatore originale, il processo di revisione troverà il modo di migliorare + il codice. Spesso la revisione trova bachi importanti e problemi di + sicurezza. Questo è particolarmente vero per il codice che è stato + sviluppato in un ambiente chiuso; tale codice ottiene un forte beneficio + dalle revisioni provenienti da sviluppatori esterni. Il codice + "fuori dai sorgenti", invece, è un codice di bassa qualità. + +- La partecipazione al processo di sviluppo costituisce la vostra via per + influenzare la direzione di sviluppo del kernel. Gli utilizzatori che + "reclamano da bordo campo" sono ascoltati, ma gli sviluppatori attivi + hanno una voce più forte - e la capacità di implementare modifiche che + renderanno il kernel più funzionale alle loro necessità. + +- Quando il codice è gestito separatamente, esiste sempre la possibilità che + terze parti contribuiscano con una differente implementazione che fornisce + le stesse funzionalità. Se dovesse accadere, l'inserimento del codice + diventerà molto più difficile - fino all'impossibilità.
Poi, dovrete far + fronte a delle alternative poco piacevoli, come: (1) mantenere un elemento + non standard "fuori dai sorgenti" per un tempo indefinito, o (2) abbandonare + il codice e far migrare i vostri utenti alla versione "nei sorgenti". + +- Contribuire al codice è l'azione fondamentale che fa funzionare tutto il + processo. Contribuendo attraverso il vostro codice potete aggiungere nuove + funzioni al kernel e fornire competenze ed esempi che saranno utili ad + altri sviluppatori. Se avete sviluppato del codice Linux (o state pensando + di farlo), avete chiaramente interesse nel far proseguire il successo di + questa piattaforma. Contribuire al codice è una delle migliori vie per + aiutarne il successo. + +Il ragionamento sopra citato si applica ad ogni codice "fuori dai sorgenti" +dal kernel, incluso il codice proprietario distribuito solamente in formato +binario. Ci sono, comunque, dei fattori aggiuntivi che dovrebbero essere +tenuti in conto prima di prendere in considerazione qualsiasi tipo di +distribuzione binaria di codice kernel. Fra questi: + +- Le questioni legali legate alla distribuzione di moduli kernel proprietari + sono molto nebbiose; parecchi detentori di copyright sul kernel credono che + molti moduli binari siano prodotti derivati del kernel e che, come risultato, + la loro diffusione sia una violazione della licenza generale di GNU (della + quale si parlerà più avanti). L'autore qui non è un avvocato, e + niente in questo documento può essere considerato come un consiglio legale. + Il vero stato legale dei moduli proprietari può essere determinato + esclusivamente da un giudice. Ma l'incertezza che perseguita quei moduli + è lì comunque. + +- I moduli binari aumentano di molto la difficoltà di fare debugging del + kernel, al punto che la maggior parte degli sviluppatori del kernel non + vorrà nemmeno tentare. Quindi la diffusione di moduli esclusivamente + binari renderà difficile ai vostri utilizzatori trovare un supporto dalla + comunità. + +- Il supporto è anche difficile per i distributori di moduli binari che devono + fornire una versione del modulo per ogni distribuzione e per ogni versione + del kernel che vogliono supportare. Per fornire una copertura ragionevole e + comprensiva, può essere richiesto di produrre dozzine di singoli moduli. + E inoltre i vostri utilizzatori dovranno aggiornare il vostro modulo + separatamente ogni volta che aggiornano il loro kernel. + +- Tutto ciò che è stato detto prima riguardo alla revisione del codice si + applica doppiamente al codice proprietario. Dato che questo codice non è + affatto disponibile, non può essere revisionato dalla comunità e avrà, + senza dubbio, seri problemi. + +I produttori di sistemi integrati, in particolare, potrebbero esser tentati +dall'evitare molto di ciò che è stato detto in questa sezione, credendo che +stiano distribuendo un prodotto finito che utilizza una versione del kernel +immutabile e che non richiede un ulteriore sviluppo dopo il rilascio. Questa +idea non comprende il valore di una vasta revisione del codice e il valore +del permettere ai propri utenti di aggiungere funzionalità al vostro prodotto. +Ma anche questi prodotti hanno una vita commerciale limitata, dopo la quale +deve essere rilasciata una nuova versione. A quel punto, i produttori il cui +codice è nel ramo principale di sviluppo avranno un codice ben mantenuto e +saranno in una posizione migliore per ottenere velocemente un nuovo prodotto +pronto per essere distribuito.
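A puro titolo illustrativo rispetto alla discussione qui sopra su moduli e licenze, ecco uno schizzo minimo e ipotetico (i nomi esempio_init/esempio_exit sono inventati, non tratti da alcun modulo reale) di come un modulo del kernel dichiara esplicitamente la propria licenza con la macro MODULE_LICENSE; senza una dichiarazione di licenza GPL-compatibile, al caricamento del modulo il kernel verrebbe marcato come "tainted"::

    // SPDX-License-Identifier: GPL-2.0
    /* Schizzo minimo e ipotetico: modulo che dichiara la licenza GPL. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Esempio minimo di modulo con licenza esplicita");

    static int __init esempio_init(void)
    {
            pr_info("esempio: modulo caricato\n");
            return 0;
    }

    static void __exit esempio_exit(void)
    {
            pr_info("esempio: modulo scaricato\n");
    }

    module_init(esempio_init);
    module_exit(esempio_exit);

Una volta compilato contro i sorgenti del kernel, uno schizzo del genere si carica con insmod e si rimuove con rmmod; i messaggi di pr_info() finiscono nel log del kernel (visibile con dmesg).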
+ + +Licenza +------- + +Il codice Linux utilizza diverse licenze, ma tutto il codice deve essere +compatibile con la seconda versione della licenza GNU General Public License +(GPLv2), che è la licenza che copre la distribuzione del kernel. +Nella pratica, ciò significa che tutti i contributi al codice sono coperti +anch'essi dalla GPLv2 (con, opzionalmente, una dicitura che permette la +possibilità di distribuirlo con licenze più recenti di GPL) o dalla licenza +three-clause BSD. Qualsiasi contributo che non è coperto da una licenza +compatibile non verrà accettato nel kernel. + +Per il codice sottomesso al kernel non è necessaria (né richiesta) la +cessione del copyright. Tutto il codice inserito nel ramo principale del +kernel conserva la sua proprietà originale; ne risulta che ora il kernel abbia +migliaia di proprietari. + +Una conseguenza di questa organizzazione della proprietà è che qualsiasi +tentativo di modifica della licenza del kernel è destinato ad un quasi sicuro +fallimento. Esistono pochi scenari pratici nei quali il consenso di tutti +i detentori di copyright può essere ottenuto (o il loro codice verrà rimosso +dal kernel). Quindi, in sostanza, non esiste la possibilità che si giunga ad +una versione 3 della licenza GPL nel prossimo futuro. + +È imperativo che tutto il codice che contribuisce al kernel sia legittimamente +software libero. Per questa ragione, un codice proveniente da un contributore +anonimo (o sotto pseudonimo) non verrà accettato. È richiesto a tutti i +contributori di firmare il proprio codice, attestando così che quest'ultimo +può essere distribuito insieme al kernel sotto la licenza GPL. Il codice che +non è stato licenziato come software libero dal proprio creatore, o che +potrebbe creare problemi di copyright per il kernel (come il codice derivante +da processi di ingegneria inversa senza le opportune tutele), non può essere +diffuso. + +Domande relative a questioni legate al copyright sono frequenti nelle liste +di discussione dedicate allo sviluppo di Linux. Tali quesiti, normalmente, +non riceveranno alcuna risposta, ma una cosa deve essere tenuta presente: +le persone che risponderanno a quelle domande non sono avvocati e non possono +fornire supporto legale. Se avete questioni legali relative ai sorgenti +del codice Linux, non esiste alternativa che quella di parlare con un +avvocato esperto nel settore. Fare affidamento sulle risposte ottenute da +una lista di discussione tecnica è rischioso. diff --git a/Documentation/translations/it_IT/process/2.Process.rst b/Documentation/translations/it_IT/process/2.Process.rst new file mode 100644 index 000000000000..9af4d01617c4 --- /dev/null +++ b/Documentation/translations/it_IT/process/2.Process.rst @@ -0,0 +1,531 @@ +.. include:: ../disclaimer-ita.rst + +:Original: :ref:`Documentation/process/2.Process.rst ` +:Translator: Alessia Mantegazza + +.. _it_development_process: + +Come funziona il processo di sviluppo +===================================== + +Lo sviluppo del Kernel agli inizi degli anni '90 era abbastanza libero, con +un numero di utenti e sviluppatori relativamente basso. Con una base +di milioni di utenti e con 2000 sviluppatori coinvolti nel giro di un anno, +il kernel da allora ha messo in atto un certo numero di procedure per rendere +lo sviluppo più agevole. È richiesta una solida conoscenza di come tale +processo si svolge per poter esserne parte attiva.
+ +Il quadro d'insieme +------------------- + +Gli sviluppatori kernel utilizzano un calendario di rilascio generico, dove +ogni due o tre mesi viene effettuato un rilascio importante del kernel. +I rilasci più recenti sono stati: + + ====== ================= + 4.11 Aprile 30, 2017 + 4.12 Luglio 2, 2017 + 4.13 Settembre 3, 2017 + 4.14 Novembre 12, 2017 + 4.15 Gennaio 28, 2018 + 4.16 Aprile 1, 2018 + ====== ================= + +Ciascun rilascio 4.x è un importante rilascio del kernel con nuove +funzionalità, modifiche interne dell'API, e molto altro. Un tipico +rilascio 4.x contiene quasi 13,000 gruppi di modifiche con ulteriori +modifiche a parecchie centinaia di migliaia di linee di codice. La serie 4.x +è pertanto il fronte più avanzato dello sviluppo del kernel Linux; il kernel +utilizza un sistema di sviluppo continuo che integra costantemente nuove +importanti modifiche. + +Viene seguita una disciplina abbastanza lineare per l'inclusione delle +patch di ogni rilascio. All'inizio di ogni ciclo di sviluppo, la +"finestra di inclusione" viene dichiarata aperta. In quel momento il codice +ritenuto sufficientemente stabile (e che è accettato dalla comunità di +sviluppo) viene incluso nel ramo principale del kernel. La maggior parte delle +patch per un nuovo ciclo di sviluppo (e tutte le più importanti modifiche) +saranno inserite durante questo periodo, ad un ritmo che si attesta sulle +1000 modifiche ("patch" o "gruppo di modifiche") al giorno. + +(per inciso, vale la pena notare che i cambiamenti integrati durante la +"finestra di inclusione" non escono dal nulla; questi, infatti, sono stati +raccolti e verificati in anticipo. Il funzionamento di tale procedimento +verrà descritto dettagliatamente più avanti). + +La finestra di inclusione resta attiva approssimativamente per due settimane. +Al termine di questo periodo, Linus Torvalds dichiarerà che la finestra è +chiusa e rilascerà il primo degli "rc" del kernel. +Per il kernel che è destinato ad essere 2.6.40, per esempio, il rilascio +che emerge al termine della finestra d'inclusione si chiamerà 2.6.40-rc1. +Questo rilascio indica che il momento di aggiungere nuovi componenti è +passato, e che è iniziato il periodo di stabilizzazione del prossimo kernel. + +Nelle successive sei/dieci settimane, potranno essere sottoposte solo modifiche +che vanno a risolvere delle problematiche. Occasionalmente potrà essere +consentita una modifica più consistente, ma tali occasioni sono rare. +Gli sviluppatori che tenteranno di aggiungere nuovi elementi al di fuori della +finestra di inclusione, tendenzialmente, riceveranno un'accoglienza poco +amichevole. Come regola generale: se vi perdete la finestra di inclusione per +un dato componente, la cosa migliore da fare è aspettare il ciclo di sviluppo +successivo (un'eccezione può essere fatta per i driver per hardware non +supportati in precedenza; se non toccano codice già esistente, non possono +causare regressioni e possono essere aggiunti in sicurezza in un qualsiasi +momento). + +Mentre le correzioni si aprono la loro strada all'interno del ramo principale, +il ritmo delle modifiche rallenta col tempo. Linus rilascia un nuovo +kernel -rc circa una volta alla settimana; e ne usciranno fra 6 e 9 prima +che il kernel venga considerato sufficientemente stabile e che il rilascio +finale 2.6.x venga fatto. A quel punto tutto il processo ricomincerà.
+ +Esempio: ecco com'è andato il ciclo di sviluppo della versione 4.16 +(tutte le date si collocano nel 2018) + + + ============== ======================================= + Gennaio 28 4.15 rilascio stabile + Febbraio 11 4.16-rc1, finestra di inclusione chiusa + Febbraio 18 4.16-rc2 + Febbraio 25 4.16-rc3 + Marzo 4 4.16-rc4 + Marzo 11 4.16-rc5 + Marzo 18 4.16-rc6 + Marzo 25 4.16-rc7 + Aprile 1 4.16 rilascio stabile + ============== ======================================= + +In che modo gli sviluppatori decidono quando chiudere il ciclo di sviluppo e +creare quindi un rilascio stabile? Un metro valido è il numero di regressioni +rilevate nel precedente rilascio. Nessun baco è il benvenuto, ma quelli che +procurano problemi su sistemi che hanno funzionato in passato sono considerati +particolarmente seri. Per questa ragione, le modifiche che portano ad una +regressione sono viste sfavorevolmente e verranno quasi sicuramente annullate +durante il periodo di stabilizzazione. + +L'obiettivo degli sviluppatori è quello di aggiustare tutte le regressioni +conosciute prima che avvenga il rilascio stabile. Nel mondo reale, questo +tipo di perfezione difficilmente viene raggiunta; esistono troppe variabili +in un progetto di questa portata. Arriva un punto dove ritardare il rilascio +finale peggiora la situazione; la quantità di modifiche in attesa della +prossima finestra di inclusione crescerà enormemente, creando ancor più +regressioni al giro successivo. Quindi molti kernel 4.x escono con una +manciata di regressioni delle quali, si spera, nessuna è grave. + +Una volta che un rilascio stabile è fatto, il suo costante mantenimento è +affidato alla "squadra stabilità", attualmente composta da Greg Kroah-Hartman. +Questa squadra rilascia occasionalmente degli aggiornamenti relativi al +rilascio stabile usando la numerazione 4.x.y. Per essere presa in +considerazione per un rilascio d'aggiornamento, una modifica deve: +(1) correggere un baco importante e (2) essere già inserita nel ramo principale +per il prossimo sviluppo del kernel. Solitamente, passato il loro rilascio +iniziale, i kernel ricevono aggiornamenti per più di un ciclo di sviluppo. +Quindi, per esempio, la storia del kernel 4.13 appare così: + + ============== =============================== + Settembre 3 4.13 rilascio stabile + Settembre 13 4.13.1 + Settembre 20 4.13.2 + Settembre 27 4.13.3 + Ottobre 5 4.13.4 + Ottobre 12 4.13.5 + ... ... + Novembre 24 4.13.16 + ============== =============================== + +La 4.13.16 fu l'aggiornamento finale per la versione 4.13. + +Alcuni kernel sono destinati ad essere kernel a "lungo termine"; questi +riceveranno assistenza per un lungo periodo di tempo. Al momento in cui +scriviamo, i manutentori dei kernel stabili a lungo termine sono: + + ====== ====================== ========================================== + 3.16 Ben Hutchings (kernel stabile molto più a lungo termine) + 4.1 Sasha Levin + 4.4 Greg Kroah-Hartman (kernel stabile molto più a lungo termine) + 4.9 Greg Kroah-Hartman + 4.14 Greg Kroah-Hartman + ====== ====================== ========================================== + + +La selezione di questi kernel di lungo periodo dipende puramente dai loro +manutentori, dalla loro necessità e dal tempo disponibile per tenere +aggiornate proprio quelle versioni. Non ci sono altri kernel a lungo termine +in programma per alcun rilascio in arrivo.
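Sempre a puro titolo illustrativo, ecco uno schizzo in C per lo spazio utente che ricava con uname(2) la serie del kernel in esecuzione, per esempio per riconoscere una delle serie stabili 4.x.y elencate sopra; si ipotizza (senza garanzia che valga su ogni sistema) che u.release inizi nella forma consueta "maggiore.minore.aggiornamento"::

    /* Schizzo ipotetico: stampa la serie del kernel in esecuzione. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
            struct utsname u;
            int maggiore = 0, minore = 0, aggiornamento = 0;

            if (uname(&u) < 0) {
                    perror("uname");
                    return 1;
            }

            /* u.release è una stringa del tipo "4.13.16" o "4.16.0-rc7" */
            if (sscanf(u.release, "%d.%d.%d",
                       &maggiore, &minore, &aggiornamento) < 2) {
                    fprintf(stderr, "formato inatteso: %s\n", u.release);
                    return 1;
            }

            printf("kernel %s: serie %d.%d, aggiornamento %d\n",
                   u.release, maggiore, minore, aggiornamento);
            return 0;
    }

Su un sistema con il kernel stabile 4.13.16, per esempio, lo schizzo stamperebbe la serie 4.13 e l'aggiornamento 16.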
+ +Il ciclo di vita di una patch +----------------------------- + +Le patch non passano direttamente dalla tastiera degli sviluppatori +al ramo principale del kernel. Esiste, invece, una procedura progettata +per assicurare che ogni patch sia di buona qualità e desiderata nel +ramo principale. Questo processo avviene velocemente per le correzioni +meno importanti, o, nel caso di patch ampie e controverse, va avanti per anni. +Per uno sviluppatore la maggior frustrazione viene dalla mancanza di +comprensione di questo processo o dai tentativi di aggirarlo. + +Nella speranza di ridurre questa frustrazione, questo documento spiegherà +come una patch viene inserita nel kernel. Ciò che segue è un'introduzione +che descrive il processo ideale. Approfondimenti verranno invece trattati +più avanti. + +Una patch attraversa, generalmente, le seguenti fasi: + + - Progetto. In questa fase vengono stabiliti quelli che sono i requisiti + della modifica - e come verranno soddisfatti. Il lavoro di progettazione + viene spesso svolto senza coinvolgere la comunità, ma è meglio renderlo + il più aperto possibile; questo può far risparmiare molto tempo evitando + eventuali riprogettazioni successive. + + - Prima revisione. Le patch vengono pubblicate sulle liste di discussione + interessate, e gli sviluppatori in quella lista risponderanno coi loro + commenti. Se si svolge correttamente, questo procedimento potrebbe far + emergere problemi rilevanti in una patch. + + - Revisione più ampia. Quando la patch è quasi pronta per essere inserita + nel ramo principale, un manutentore di riferimento del sottosistema dovrebbe + accettarla - anche se questa accettazione non è una garanzia che la + patch arriverà nel ramo principale. La patch sarà visibile nei sorgenti + del sottosistema in questione e nei sorgenti -next (descritti sotto). + Quando il processo va a buon fine, questo passo porta ad una revisione + più estesa della patch e alla scoperta di problemi d'integrazione + con il lavoro altrui. + + - Per favore, tenete in conto che la maggior parte dei manutentori ha + anche un lavoro quotidiano, quindi integrare le vostre patch potrebbe + non essere la loro priorità più alta. Se una vostra patch riceve + dei suggerimenti su dei cambiamenti necessari, dovreste applicare + quei cambiamenti o giustificare perché non sono necessari. Se la vostra + patch non riceve alcuna critica ma non è stata integrata dal + manutentore del driver o sottosistema, allora dovreste continuare con + i necessari aggiornamenti per mantenere la patch aggiornata al kernel + più recente cosicché questa possa integrarsi senza problemi; continuate + ad inviare gli aggiornamenti per essere revisionati e integrati. + + - Inclusione nel ramo principale. Prima o poi, una buona patch verrà + inserita all'interno del repositorio principale, gestito da + Linus Torvalds. In questa fase potrebbero emergere nuovi problemi e/o + commenti; è importante che lo sviluppatore sia collaborativo e che sistemi + ogni questione che possa emergere. + + - Rilascio stabile. Ora, il numero di utilizzatori che sono potenzialmente + toccati dalla patch è aumentato, quindi, ancora una volta, potrebbero + emergere nuovi problemi. + + - Manutenzione di lungo periodo. Nonostante sia possibile che uno sviluppatore + si dimentichi del codice dopo la sua integrazione, questo comportamento + lascia una brutta impressione nella comunità di sviluppo.
Integrare il + codice elimina alcuni degli oneri facenti parte della manutenzione, in + particolare, sistemerà le problematiche causate dalle modifiche all'API. + Ma lo sviluppatore originario dovrebbe continuare ad assumersi la + responsabilità per il codice se quest'ultimo continua ad essere utile + nel lungo periodo. + +Uno dei più grandi errori fatti dagli sviluppatori kernel (o dai loro datori +di lavoro) è quello di cercare di ridurre tutta la procedura ad una singola +"integrazione nel ramo principale". Questo approccio inevitabilmente conduce +a una condizione di frustrazione per tutti coloro che sono coinvolti. + +Come le modifiche finiscono nel Kernel +-------------------------------------- + +Esiste una sola persona che può inserire le patch nel repositorio principale +del kernel: Linus Torvalds. Ma, di tutte le 9500 patch che entrarono nella +versione 2.6.38 del kernel, solo 112 (circa l'1,3%) furono scelte direttamente +da Linus in persona. Il progetto del kernel è cresciuto fino a raggiungere +una dimensione tale per cui un singolo sviluppatore non può controllare e +selezionare indipendentemente ogni modifica senza essere supportato. +La via scelta dagli sviluppatori per indirizzare tale crescita è stata quella +di utilizzare un sistema di "sottotenenti" basato sulla fiducia. + +Il codice base del kernel è spezzato in una serie di sottosistemi: rete, +supporto per specifiche architetture, gestione della memoria, video e +strumenti, etc. Molti sottosistemi hanno un manutentore designato: ovvero uno +sviluppatore che ha piena responsabilità di tutto il codice presente in quel +sottosistema. Tali manutentori di sottosistema sono i guardiani +(in un certo senso) della parte di kernel che gestiscono; sono coloro che +(solitamente) accetteranno una patch per l'inclusione nel ramo principale +del kernel. + +I manutentori di sottosistema gestiscono ciascuno la propria parte dei sorgenti +del kernel, utilizzando abitualmente (ma certamente non sempre) git. +Strumenti come git (e affini come quilt o mercurial) permettono ai manutentori +di stilare una lista delle patch, includendo informazioni sull'autore ed +altri metadati. In ogni momento, il manutentore può individuare quali patch +nel suo repositorio non si trovano nel ramo principale. + +Quando la "finestra di integrazione" si apre, i manutentori di alto livello +chiederanno a Linus di "prendere" dai loro repositori le modifiche che hanno +selezionato per l'inclusione. Se Linus acconsente, il flusso di patch si +convoglierà nel repositorio di quest'ultimo, divenendo così parte del ramo +principale del kernel. La quantità d'attenzione che Linus presta alle +singole patch ricevute durante l'operazione di integrazione varia. +È chiaro che, qualche volta, guardi più attentamente. Ma, come regola +generale, Linus confida nel fatto che i manutentori di sottosistema non +selezionino pessime patch. + +I manutentori di sottosistema, a loro volta, possono "prendere" patch +provenienti da altri manutentori. Per esempio, i sorgenti per la rete +sono costruiti da modifiche che si sono accumulate inizialmente nei sorgenti +dedicati ai driver per dispositivi di rete, rete senza fili, ecc. Tale +catena di repositori può essere più o meno lunga, benché raramente ecceda +i due o tre collegamenti. Questo processo è conosciuto come +"la catena della fiducia", perché ogni manutentore all'interno della +catena si fida di coloro che gestiscono i livelli più bassi.
+ +Chiaramente, in un sistema come questo, l'inserimento delle patch all'interno +del kernel si basa sul trovare il manutentore giusto. Di norma, inviare +patch direttamente a Linus non è la via giusta. + + +Sorgenti -next +-------------- + +La catena di sottosistemi guida il flusso di patch all'interno del kernel, +ma solleva anche un interessante quesito: cosa succederebbe se qualcuno +volesse vedere tutte le patch pronte per la prossima finestra di integrazione? +Gli sviluppatori si interesseranno alle patch in sospeso per verificare +che non ci siano altri conflitti di cui preoccuparsi; una modifica che, per +esempio, cambia il prototipo di una funzione fondamentale del kernel andrà in +conflitto con qualsiasi altra modifica che utilizzi la vecchia versione di +quella funzione. Revisori e tester vogliono invece avere accesso alle +modifiche nella loro totalità prima che approdino nel ramo principale del +kernel. Uno potrebbe prendere le patch provenienti da tutti i sottosistemi +d'interesse, ma questo sarebbe un lavoro enorme e soggetto ad errori. + +La risposta ci viene sotto forma di sorgenti -next, dove i sottosistemi sono +raccolti per essere testati e controllati. Il più vecchio di questi sorgenti, +gestito da Andrew Morton, è chiamato "-mm" (memory management, da cui tutto +è cominciato). L'-mm integra patch provenienti da una lunga lista di +sottosistemi; e ha, inoltre, alcune patch destinate al supporto del debugging. + +Oltre a questo, -mm contiene una raccolta significativa di patch che sono +state selezionate da Andrew direttamente. Queste patch potrebbero essere +state inviate in una lista di discussione, o possono essere applicate ad una +parte del kernel per la quale non esiste un sottosistema dedicato. +Di conseguenza, -mm opera come una specie di sottosistema "ultima spiaggia"; +se per una patch non esiste una via chiara per entrare nel ramo principale, +allora è probabile che finirà in -mm. Le patch passate per -mm +prima o poi finiranno nel sottosistema più appropriato o saranno inviate +direttamente a Linus. In un tipico ciclo di sviluppo, circa il 5-10% delle +patch andrà nel ramo principale attraverso -mm. + +Le patch -mm correnti sono disponibili nella cartella "mmotm" (-mm of +the moment) all'indirizzo: + + http://www.ozlabs.org/~akpm/mmotm/ + +È molto probabile che l'uso dei sorgenti MMOTM diventi un'esperienza +frustrante; ci sono buone probabilità che non compili nemmeno. + +Il sorgente principale per il prossimo ciclo d'integrazione delle patch +è linux-next, gestito da Stephen Rothwell. I sorgenti linux-next sono, per +definizione, un'istantanea di come dovrà apparire il ramo principale dopo che +la prossima finestra di inclusione si chiuderà. I linux-next sono annunciati +sulle liste di discussione linux-kernel e linux-next nel momento in cui +vengono assemblati; e possono essere scaricati da: + + http://www.kernel.org/pub/linux/kernel/next/ + +Linux-next è divenuto parte integrante del processo di sviluppo del kernel; +tutte le patch incorporate durante una finestra di integrazione dovrebbero +aver trovato la propria strada in linux-next, a volte anche prima dell'apertura +della finestra di integrazione. + + +Sorgenti in preparazione +------------------------ + +Nei sorgenti del kernel esiste la cartella drivers/staging/, dove risiedono +molte sotto-cartelle per i driver o i filesystem che stanno per essere aggiunti +al kernel.
Questi restano nella cartella drivers/staging fintanto che avranno +bisogno di maggior lavoro; una volta completato il lavoro, potranno essere +spostati all'interno del kernel nel posto più appropriato. Questo è il modo di +tener traccia dei driver che non sono ancora in linea con gli standard di +codifica o qualità, ma che le persone potrebbero voler usare ugualmente e +tracciarne lo sviluppo. + +Greg Kroah-Hartman attualmente gestisce i sorgenti in preparazione. I driver +che non sono completamente pronti vengono inviati a lui, e ciascun driver avrà +la propria sotto-cartella in drivers/staging/. Assieme ai file sorgenti +dei driver, dovrebbe essere presente nella stessa cartella anche un file TODO. +Il file TODO elenca il lavoro ancora da fare su questi driver per poter essere +accettati nel kernel, e indica anche la lista di persone da inserire in copia +conoscenza per ogni modifica fatta. Le regole attuali richiedono che i +driver debbano, come minimo, compilare adeguatamente. + +La *preparazione* può essere una via relativamente facile per inserire nuovi +driver all'interno del ramo principale, dove, con un po' di fortuna, saranno +notati da altri sviluppatori e migliorati velocemente. Entrare nella fase +di preparazione non è però la fine della storia: infatti, il codice che si +trova nella cartella staging e che non mostra regolari progressi potrebbe +essere rimosso. Le distribuzioni, inoltre, tendono a dimostrarsi relativamente +riluttanti nell'attivare driver in preparazione. Quindi la preparazione è, +nel migliore dei casi, una tappa sulla strada verso il divenire un driver +del ramo principale. + + +Strumenti +--------- + +Come è possibile notare dal testo sopra, il processo di sviluppo del kernel +dipende pesantemente dalla capacità di guidare la raccolta di patch in +diverse direzioni. L'intera cosa non funzionerebbe se non venisse svolta +con l'uso di strumenti appropriati e potenti. Spiegare l'uso di tali +strumenti non è lo scopo di questo documento, ma c'è spazio per alcuni +consigli. + +In assoluto, nella comunità del kernel, predomina l'uso di git come sistema +di gestione dei sorgenti. Git è una delle diverse tipologie di sistemi +distribuiti di controllo versione che sono stati sviluppati nella comunità +del software libero. Esso è calibrato per lo sviluppo del kernel, e si +comporta abbastanza bene quando ha a che fare con repositori grandi e con un +vasto numero di patch. Git ha inoltre la reputazione di essere difficile +da imparare e utilizzare, benché stia migliorando. Agli sviluppatori +del kernel viene richiesta un po' di familiarità con git; anche se non lo +utilizzano per il proprio lavoro, hanno bisogno di git per tenersi al passo +con il lavoro degli altri sviluppatori (e con il ramo principale). + +Git è ora compreso in quasi tutte le distribuzioni Linux. Esiste un sito che +potete consultare: + + http://git-scm.com/ + +Qui troverete i riferimenti alla documentazione e alle guide passo-passo. + +Tra gli sviluppatori Kernel che non usano git, la scelta alternativa più +popolare è quasi sicuramente Mercurial: + + http://www.selenic.com/mercurial/ + +Mercurial condivide diverse caratteristiche con git, ma fornisce +un'interfaccia che potrebbe risultare più semplice da utilizzare. + +L'altro strumento che vale la pena conoscere è Quilt: + + http://savannah.nongnu.org/projects/quilt/ + + +Quilt è un sistema di gestione delle patch, piuttosto che un sistema +di gestione dei sorgenti.
Non mantiene uno storico nel tempo; piuttosto +è orientato verso il tracciamento di uno specifico insieme di modifiche +rispetto ad un codice in evoluzione. Molti dei più grandi manutentori di +sottosistema utilizzano quilt per gestire le patch che dovrebbero essere +integrate. Per la gestione di certe tipologie di sorgenti (-mm, per esempio), +quilt è il miglior strumento per svolgere il lavoro. + + +Liste di discussione +-------------------- + +Una grossa parte del lavoro di sviluppo del Kernel Linux viene svolto tramite +le liste di discussione. È difficile essere un membro della comunità +pienamente coinvolto se non si partecipa almeno ad una lista da qualche +parte. Ma le liste di discussione di Linux rappresentano anche un potenziale +problema per gli sviluppatori, che rischiano di venir sepolti da un mare di +email, di restare incagliati nelle convenzioni in vigore nelle liste Linux, +o entrambi. + +Molte delle liste di discussione del Kernel girano su vger.kernel.org; +l'elenco principale lo si trova sul sito: + + http://vger.kernel.org/vger-lists.html + +Esistono liste gestite altrove; un certo numero di queste si trova su +lists.redhat.com. + +La lista di discussione principale per lo sviluppo del kernel è, ovviamente, +linux-kernel. Questa lista è un luogo ostile dove trovarsi; i volumi possono +raggiungere i 500 messaggi al giorno, la quantità di "rumore" è elevata, +la conversazione può essere strettamente tecnica e i partecipanti non sono +sempre preoccupati di mostrare un alto livello di educazione. Ma non esiste +altro luogo dove la comunità di sviluppo del kernel si riunisce per intero; +gli sviluppatori che evitano tale lista si perderanno informazioni importanti. + +Ci sono alcuni consigli che possono essere utili per sopravvivere a +linux-kernel: + +- Tenete la lista in una cartella separata, piuttosto che inserirla nella + casella di posta principale. Così da essere in grado di ignorare il flusso + di mail per un certo periodo di tempo. + +- Non cercate di seguire ogni conversazione - nessuno lo fa. È importante + filtrare solo gli argomenti d'interesse (sebbene vada notato che le + conversazioni di lungo periodo possono deviare dall'argomento originario + senza cambiare il titolo della mail) e le persone che stanno partecipando. + +- Non alimentate i troll. Se qualcuno cerca di creare nervosismo, ignoratelo. + +- Quando rispondete ad una mail linux-kernel (o ad altre liste) mantenete + tutti i Cc:. In assenza di importanti motivazioni (come una richiesta + esplicita), non dovreste mai togliere destinatari. Assicuratevi sempre che + la persona alla quale state rispondendo sia presente nella lista Cc. Questa + usanza fa sì che divenga inutile chiedere esplicitamente di essere inseriti + in copia nel rispondere al vostro messaggio. + +- Cercate nell'archivio della lista (e nella rete nella sua totalità) prima + di far domande. Molti sviluppatori possono divenire impazienti con le + persone che chiaramente non hanno svolto i propri compiti a casa. + +- Evitate il *top-posting* (cioè la pratica di mettere la vostra risposta sopra + alla frase alla quale state rispondendo). Ciò renderebbe la vostra risposta + difficile da leggere e genera scarsa impressione. + +- Chiedete nella lista di discussione corretta. Linux-kernel può essere un + punto di incontro generale, ma non è il miglior posto dove trovare + sviluppatori da tutti i sottosistemi. + +Infine, la scelta della lista di discussione sbagliata è uno degli errori più +comuni per gli sviluppatori principianti.
Qualcuno che pone una domanda +relativa alla rete su linux-kernel riceverà quasi certamente il suggerimento +di chiedere sulla lista netdev, che è la lista frequentata dagli sviluppatori +di rete. Ci sono poi altre liste per i sottosistemi SCSI, video4linux, IDE, +filesystem, etc. Il miglior posto dove cercare una lista di discussione è il +file MAINTAINERS che si trova nei sorgenti del kernel. + +Iniziare con lo sviluppo del Kernel +----------------------------------- + +Sono comuni le domande sul come iniziare con lo sviluppo del kernel - sia da +singole persone che da aziende. Altrettanto comuni sono i passi falsi che +rendono l'inizio di tale relazione più difficile di quello che dovrebbe essere. + +Le aziende spesso cercano di assumere sviluppatori noti per creare un gruppo +di sviluppo iniziale. Questa, in effetti, può essere una tecnica efficace. +Ma risulta anche essere dispendiosa e non va ad accrescere il bacino di +sviluppatori kernel con esperienza. È anche possibile far crescere +internamente degli sviluppatori kernel, investendo un po' di tempo nella loro +formazione. Prendersi questo tempo può fornire +al datore di lavoro un gruppo di sviluppatori che comprendono sia il kernel +che l'azienda stessa, e che possono supportare la formazione di altre persone. +Nel medio periodo, questa è spesso una delle soluzioni più proficue. + +I singoli sviluppatori, comprensibilmente, spesso non sanno da dove +cominciare. Iniziare con un grande progetto può rivelarsi intimidatorio; +spesso all'inizio si vuole solo verificare il terreno con qualcosa di piccolo. +Questa è una delle motivazioni per le quali molti sviluppatori si buttano sulla +creazione di patch che vanno a sistemare errori di battitura o +problematiche minori legate allo stile del codice. Sfortunatamente, tali +patch creano un certo livello di rumore che distrae l'intera comunità di +sviluppo, quindi, sempre di più, esse vengono viste con sfavore. I nuovi +sviluppatori che desiderano presentarsi alla comunità non riceveranno +l'accoglienza che vorrebbero con questi mezzi. + +Andrew Morton dà questo consiglio agli aspiranti sviluppatori kernel + +:: + + Il primo progetto per un neofita del kernel dovrebbe essere + sicuramente quello di "assicurarsi che il kernel funzioni alla + perfezione sempre e su tutte le macchine sulle quali potete stendere + la vostra mano". Solitamente il modo per fare ciò è quello di + collaborare con gli altri nel sistemare le cose (questo richiede + persistenza!) ma va bene - è parte dello sviluppo kernel. + +(http://lwn.net/Articles/283982/). + +In assenza di problemi ovvi da risolvere, si consiglia agli sviluppatori +di consultare, in generale, la lista di regressioni e di bachi aperti. +Non c'è mai carenza di problematiche bisognose di essere sistemate; +accollandosi tali questioni gli sviluppatori accumuleranno esperienza con +la procedura, ed allo stesso tempo, aumenteranno la loro rispettabilità +all'interno della comunità di sviluppo. diff --git a/Documentation/translations/it_IT/process/3.Early-stage.rst b/Documentation/translations/it_IT/process/3.Early-stage.rst new file mode 100644 index 000000000000..443ac1e5558f --- /dev/null +++ b/Documentation/translations/it_IT/process/3.Early-stage.rst @@ -0,0 +1,241 @@ +.. include:: ../disclaimer-ita.rst + +:Original: :ref:`Documentation/process/3.Early-stage.rst ` +:Translator: Alessia Mantegazza + +.. _it_development_early_stage:
+ +I primi passi della pianificazione +================================== + +Osservando un progetto di sviluppo per il kernel Linux, si potrebbe essere +tentati dal buttarsi subito a scrivere codice. Tuttavia, come per ogni +progetto significativo, molta della preparazione necessaria al successo +viene fatta prima che una sola linea di codice venga scritta. Il tempo speso +nella pianificazione e nella comunicazione può far risparmiare molto +tempo in futuro. + +Specificare il problema +----------------------- + +Come qualsiasi progetto ingegneristico, un miglioramento del kernel di +successo parte con una chiara descrizione del problema da risolvere. +In alcuni casi, questo passaggio è facile: ad esempio quando un driver è +richiesto per un particolare dispositivo. In altri casi invece, si +tende a confondere il problema reale con le soluzioni proposte e questo +può portare all'emergere di problemi. + +Facciamo un esempio: qualche anno fa, gli sviluppatori che lavoravano con +Linux audio cercarono un modo per far girare le applicazioni senza dropout +o altri artefatti dovuti all'eccessivo ritardo nel sistema. La soluzione +alla quale giunsero fu un modulo del kernel destinato ad agganciarsi al +framework Linux Security Module (LSM); questo modulo poteva essere +configurato per dare ad una specifica applicazione accesso allo +schedulatore *realtime*. Tale modulo fu implementato e inviato nella +lista di discussione linux-kernel, dove incontrò subito dei problemi. + +Per gli sviluppatori audio, questo modulo di sicurezza era sufficiente a +risolvere il loro problema nell'immediato. Per l'intera comunità kernel, +invece, era un uso improprio del framework LSM (che non è progettato per +conferire privilegi a processi che altrimenti non avrebbero potuto ottenerli) +e un rischio per la stabilità del sistema. Le soluzioni preferibili, nel +breve periodo, comportavano un accesso alla schedulazione realtime attraverso +il meccanismo rlimit, e nel lungo periodo un costante lavoro nella riduzione +dei ritardi. + +La comunità audio, comunque, non poteva vedere al di là della singola +soluzione che aveva implementato; era riluttante ad accettare alternative. +Il conseguente dissenso lasciò in quegli sviluppatori un senso di +disillusione nei confronti dell'intero processo di sviluppo; uno di loro +scrisse questo messaggio: + + Ci sono numerosi sviluppatori del kernel Linux davvero bravi, ma + rischiano di restare sovrastati da una vasta massa di stolti arroganti. + Cercare di comunicare le richieste degli utenti a queste persone è + una perdita di tempo. Loro sono troppo "intelligenti" per stare ad + ascoltare dei poveri mortali. + + (http://lwn.net/Articles/131776/). + +La realtà delle cose fu differente; gli sviluppatori del kernel erano molto +più preoccupati per la stabilità del sistema e per la manutenzione di lungo +periodo, e di trovare la giusta soluzione al problema, di quanto non lo +fossero per uno specifico modulo. La morale della storia è quella di +concentrarsi sul problema - non su una specifica soluzione - e di discuterne +con la comunità di sviluppo prima di investire tempo nella scrittura del +codice. + +Quindi, osservando un progetto di sviluppo del kernel, si dovrebbe +rispondere a questa lista di domande: + +- Qual è, precisamente, il problema che dev'essere risolto? + +- Chi sono gli utenti coinvolti da tale problema? A quale caso dovrebbe + essere indirizzata la soluzione?
+
+- How does the kernel fall short in addressing that problem?
+
+Only then does it make sense to start considering possible solutions.
+
+Early discussion
+----------------
+
+When planning a kernel development project, it makes great sense to
+hold discussions with the community before launching into
+implementation. Early communication can save time and trouble in a
+number of ways:
+
+ - It may well be that the problem is addressed by the kernel in ways
+   which you have not understood. The Linux kernel is large and has a
+   number of features and capabilities which are not immediately
+   obvious. Not all kernel capabilities are documented as well as one
+   might like, and it is easy to miss things. Your author has seen the
+   posting of a complete driver which duplicated an existing driver
+   that the new author had been unaware of. Code which reinvents
+   existing wheels is not only wasteful; it will also not be accepted
+   into the mainline kernel.
+
+ - There may be elements of the proposed solution which will not be
+   acceptable for mainline merging. It is better to find out about
+   problems like this before writing the code.
+
+ - It's entirely possible that other developers have thought about the
+   problem; they may have ideas for a better solution, and may be
+   willing to help in the creation of that solution.
+
+Years of experience with the kernel development community have taught a
+clear lesson: kernel code which is designed and developed behind closed
+doors invariably has problems which are only revealed when the code is
+released publicly. Sometimes these problems are severe, requiring
+months or years of effort before the code can be brought up to the
+kernel community's standards. Some examples include:
+
+ - The Devicescape network stack was designed and implemented for
+   single-processor systems. It could not be merged into the mainline
+   until it was made suitable for multiprocessor systems as well.
+   Retrofitting locking and such into code is a difficult task; as a
+   result, the merging of this code (now called mac80211) was delayed
+   for over a year.
+
+ - The Reiser4 filesystem included a number of capabilities which, in
+   the core kernel developers' opinion, should have been implemented in
+   the virtual filesystem layer instead. It also included features
+   which could not easily be implemented without exposing the system to
+   the risk of deadlocks. The late revelation of these problems - and
+   the refusal to address some of them - has caused Reiser4 to stay out
+   of the mainline kernel.
+
+ - The AppArmor security module made use of internal virtual filesystem
+   data structures in ways which were considered to be unsafe and
+   unreliable. These concerns (among others) kept AppArmor out of the
+   mainline for years.
+
+Each of these cases involved a great deal of pain and extra work, of
+the sort which could have been avoided with some early discussion with
+the kernel developers.
+
+Who do you talk to?
+-------------------
+
+When developers decide to take their plans public, the next question
+will be: where do we start? The answer is to find the right mailing
+list and the right maintainer.
+For mailing lists, the best approach is to search the MAINTAINERS file
+for the most relevant list. If there is a suitable subsystem mailing
+list, posting there is often preferable to posting on the general
+linux-kernel list; you are more likely to reach developers with
+expertise in the relevant subsystem, and the environment you find may
+be more supportive.
+
+Finding maintainers can be a bit harder. Again, the MAINTAINERS file is
+the place to start. That file may not always be up to date, though, and
+not all subsystems are represented there. The people listed in the
+MAINTAINERS file may, in fact, not be the ones who are actually acting
+in that role today. So, when there is doubt about who to contact, a
+useful trick is to use git ("git log" in particular) to see who is
+currently active within the subsystem of interest. Look at who is
+writing patches, and who, if anybody, is adding Signed-off-by lines to
+those patches. Those are the people who will be best positioned to help
+with a new development project.
+
+The task of finding the right maintainer is sometimes challenging
+enough that the kernel developers have written a script to ease the
+process:
+
+::
+
+	.../scripts/get_maintainer.pl
+
+When run with the "-f" option, this script will return the current
+maintainer(s) for a given file or directory. If a patch is passed on
+the command line, it will list the maintainers who should probably
+receive copies of it. There are a number of options regulating how hard
+get_maintainer.pl will search for maintainers; please be careful about
+using the more aggressive options, as you may end up including
+developers who have no real interest in the code you are modifying.
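+
+For example (the driver path and patch file name here are purely
+illustrative), the two modes just described might be exercised like
+this:
+
+::
+
+	$ ./scripts/get_maintainer.pl -f drivers/net/ethernet/realtek/8139too.c
+	$ ./scripts/get_maintainer.pl 0001-my-change.patch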
+
+If all else fails, talking to Andrew Morton can be an effective way to
+track down a maintainer for a specific piece of code.
+
+When to post?
+-------------
+
+If possible, posting your plans during the early stages can only be
+helpful. Describe the problem being solved and any plans that have been
+made on how the implementation will be done. Any information you can
+provide can help the development community provide useful input on the
+project.
+
+One discouraging thing which can happen at this stage is not a hostile
+reaction, but, instead, little or no reaction at all. The sad truth of
+the matter is that (1) kernel developers tend to be busy, (2) there is
+no shortage of people with grand plans and little code (or even
+prospect of code) to back them up, and (3) nobody is obligated to
+review or comment on ideas posted by others. Beyond that, high-level
+designs often hide problems which are only revealed when somebody
+actually tries to implement those designs; for that reason, kernel
+developers would rather see the code.
+
+So, if a request-for-comments posting yields little in the way of
+comments, do not assume that it means there is no interest in the
+project. Unfortunately, you also cannot assume that there are no
+problems with your idea. The best thing to do in this situation is to
+proceed, keeping the community informed as you go.
+
+Getting official buy-in
+-----------------------
+
+If your work is being done in a corporate environment - as most Linux
+kernel work is - you must, obviously, have permission from suitably
+empowered managers before you can post your company's plans or code to
+a public mailing list. The posting of code which has not been cleared
+for release under a GPL-compatible license can be especially
+problematic; the sooner that a company's management and legal staff
+reach a decision on the posting of a kernel development project, the
+better off everybody involved will be.
+
+Some readers may be thinking at this point that their kernel work is
+intended to support a product which does not yet have an officially
+acknowledged existence. Revealing their employer's plans on a public
+mailing list may not be a viable option. In cases like this, it is
+worth considering whether the secrecy is really necessary; there is
+often no real need to keep development plans behind closed doors.
+
+That said, there are also cases where a company legitimately cannot
+disclose its plans early in the development process. Companies with
+experienced kernel developers may choose to proceed on the assumption
+that they will be able to avoid, or deal with, integration problems
+later. For companies without that sort of in-house expertise, the best
+option is often to hire an outside developer to review the plans under
+a non-disclosure agreement. The Linux Foundation operates an NDA
+program designed to help with this sort of situation; more information
+can be found at:
+
+    http://www.linuxfoundation.org/en/NDA_program
+
+This kind of review is often enough to avoid serious problems later on
+without requiring public disclosure of the project.
diff --git a/Documentation/translations/it_IT/process/4.Coding.rst b/Documentation/translations/it_IT/process/4.Coding.rst
new file mode 100644
index 000000000000..c61059015e52
--- /dev/null
+++ b/Documentation/translations/it_IT/process/4.Coding.rst
@@ -0,0 +1,447 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/4.Coding.rst `
+:Translator: Alessia Mantegazza
+
+.. _it_development_coding:
+
+Getting the code right
+======================
+
+While there is much to be said for a solid and community-oriented
+design process, the proof of any kernel development project is in the
+resulting code. It is the code which will be examined by other
+developers and merged (or not) into the mainline tree. So it is the
+quality of this code which will determine the ultimate success of the
+project.
+
+This section will examine the coding process. We'll start with a look
+at some of the ways in which kernel developers can go wrong. Then the
+focus will shift toward doing things right and the tools which can help
+in that quest.
+
+Pitfalls
+--------
+
+Coding style
+************
+
+The kernel has long had a standard coding style, described in
+:ref:`Documentation/translations/it_IT/process/coding-style.rst `.
+For much of that time, the policies described in that file were taken
+as being, at most, advisory. As a result, there is a substantial amount
+of code in the kernel which does not meet the coding style guidelines.
+The presence of that code leads to two independent hazards for kernel
+developers.
+
+The first of these is to believe that the kernel coding standards do
+not matter and are not enforced. The truth of the matter is that adding
+new code to the kernel is very difficult if that code does not follow
+the standard; many developers will request that the code be reformatted
+before they will even review it. A code base as large as the kernel
+requires some uniformity to make it possible for developers to quickly
+understand any part of it. So there is no longer room for
+strangely-formatted code.
+
+Occasionally, the kernel's coding style will run into conflict with an
+employer's mandated style. In such cases, the kernel's style will have
+to win before the code can be merged. Putting code into the kernel
+means giving up a degree of control in a number of ways - including
+control over how the code is formatted.
+
+The other trap is to assume that code which is already in the kernel
+is urgently in need of coding style fixes. Developers may start to
+generate reformatting patches as a way of gaining familiarity with the
+process, or as a way of getting their names into the kernel changelogs
+- or both. But the development community sees pure coding style fixes
+as noise; such patches tend to get a chilly reception. So this type of
+patch is best avoided. It is natural to fix the style of a piece of
+code while working on it for other reasons, but coding style changes
+should not be made for their own sake.
+
+The coding style document also should not be read as an absolute law
+which can never be transgressed. If there is a good reason to go
+against the style (a line which becomes far less readable if split to
+fit within the 80-column limit, for example), just do it.
+
+Note that you can also use the clang-format tool to help you with
+these rules, to quickly re-format parts of your code automatically, and
+to review full files in order to spot coding style mistakes, typos and
+possible improvements. It is also handy for sorting ``#includes``, for
+aligning variables/macros, for reflowing text and other similar tasks.
+See the file
+:ref:`Documentation/translations/it_IT/process/clang-format.rst `
+for more details.
+
+
+Abstraction layers
+******************
+
+Computer Science professors teach students to make extensive use of
+abstraction layers in the name of flexibility and information hiding.
+Certainly the kernel makes extensive use of abstraction; no project
+involving several million lines of code could do otherwise and survive.
+But experience has shown that excessive or premature abstraction can be
+just as harmful as premature optimization. Abstraction should be used
+to the level required and no further.
+
+At a simple level, consider a function which has an argument which is
+always passed as zero by all callers. One could retain that argument
+just in case somebody eventually needs the flexibility it offers. By
+that time, though, chances are good that the code which implements this
+extra argument has been broken in some subtle way which was never
+noticed - because it has never been used. Or, when the need for extra
+flexibility arises, it does not do so in a way which matches what the
+argument actually provides. Kernel developers routinely submit patches
+to remove unused arguments; they should, in general, not be added in
+the first place.
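+
+As a purely illustrative sketch (the function and its "flags" parameter
+are invented for the example), this is the sort of speculative
+flexibility that tends to rot unnoticed:
+
+::
+
+	/* Every caller passes 0, so the flags path is never exercised. */
+	int foo_start_transfer(struct foo_dev *dev, unsigned int flags);
+
+	/* Better: add the parameter when a real user of it shows up. */
+	int foo_start_transfer(struct foo_dev *dev);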
+
+Abstraction layers which hide access to hardware - often to allow the
+bulk of a driver to be used with multiple operating systems - are
+especially frowned upon. Such layers obscure the code and may impose a
+performance penalty; they do not belong in the Linux kernel.
+
+On the other hand, if you find yourself copying significant amounts of
+code from another kernel subsystem, it is time to ask whether it would,
+in fact, make sense to pull some of that code out into a separate
+library, or to implement that functionality at a higher level. There is
+no value in replicating the same code throughout the kernel.
+
+
+#ifdef and preprocessor use in general
+**************************************
+
+The C preprocessor seems to present a powerful temptation to some C
+programmers, who see it as a way to efficiently encode a great deal of
+flexibility into a source file. But the preprocessor is not C, and
+heavy use of it results in code which is much harder for others to read
+and harder for the compiler to check for correctness. Heavy
+preprocessor use is almost always a sign of code which needs some
+cleanup work.
+
+Conditional compilation with #ifdef is, indeed, a powerful feature,
+and it is used within the kernel. But there is little desire to see
+code which is sprinkled liberally with #ifdef blocks. As a general
+rule, #ifdef use should be confined to header files whenever possible.
+Conditionally-compiled code can be confined to functions which, if the
+code is not to be present, simply become empty. The compiler will then
+quietly optimize out the call to the empty function. The result is far
+cleaner code which is easier to follow.
+
+C preprocessor macros present a number of hazards, including possible
+multiple evaluation of expressions with side effects and no type
+safety. If you are tempted to define a macro, consider creating an
+inline function instead. The code which results will be the same, but
+inline functions are easier to read, do not evaluate their arguments
+multiple times, and allow the compiler to perform type checking on the
+arguments and return value.
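+
+A minimal sketch of both points, built around a hypothetical
+CONFIG_FOO_STATS option and invented foo helpers:
+
+::
+
+	/* In a header: the #ifdef lives here, so .c files stay clean.
+	 * When CONFIG_FOO_STATS is disabled, calls to the empty stub
+	 * are optimized away entirely. */
+	#ifdef CONFIG_FOO_STATS
+	void foo_stats_update(struct foo_dev *dev);
+	#else
+	static inline void foo_stats_update(struct foo_dev *dev) { }
+	#endif
+
+	/* Macro hazard: the argument is evaluated twice, so something
+	 * like FOO_IS_BUSY(next_dev(d)) would misbehave, and a wrong
+	 * argument type is not diagnosed. */
+	#define FOO_IS_BUSY(dev) ((dev)->pending && (dev)->running)
+
+	/* The inline equivalent is type-checked and evaluates its
+	 * argument exactly once. */
+	static inline bool foo_is_busy(struct foo_dev *dev)
+	{
+		return dev->pending && dev->running;
+	}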
+
+
+Inline functions
+****************
+
+Inline functions present a hazard of their own, though. Programmers
+can become enamored of the perceived efficiency gained by avoiding a
+function call. Those functions, however, can actually reduce
+performance: since their code is replicated at each call site, they end
+up bloating the size of the compiled kernel. That, in turn, creates
+pressure on the processor's memory caches, which can slow execution
+dramatically. Inline functions, as a rule, should be quite small and
+relatively rare. The cost of a function call, after all, is not that
+high; the creation of large numbers of inline functions is a classic
+example of premature optimization.
+
+In general, kernel programmers ignore cache effects at their peril.
+The classic time/space tradeoff taught in beginning data structures
+classes often does not apply to contemporary hardware. Space *is* time,
+in that a larger program will run more slowly than one which is more
+compact.
+
+More recent compilers also take an increasingly active role in
+deciding whether a given function should actually be inlined or not. So
+the liberal placement of "inline" keywords may not just be excessive;
+it may also be irrelevant.
+
+
+Locking
+*******
+
+In May, 2006, the "Devicescape" networking stack was, with great
+fanfare, released under the GPL and made available for inclusion in the
+mainline kernel. This donation was welcome news; support for wireless
+networking in Linux was considered substandard at best, and the
+Devicescape stack offered the promise of fixing that situation. Yet,
+this code did not actually make it into the mainline until June, 2007
+(2.6.22). What happened?
+
+This code showed a number of signs of having been developed behind
+corporate doors. But one large problem in particular was that it was
+not designed to work on multiprocessor systems. Before this networking
+stack (now called mac80211) could be merged, a reworking of its locking
+schemes was needed.
+
+Once upon a time, Linux kernel code could be developed without
+thinking about the concurrency issues presented by multiprocessor
+systems. Now, however, this document is being written on a dual-core
+laptop. Even on single-processor systems, work being done to improve
+responsiveness will raise the level of concurrency within the kernel.
+The days when kernel code could be written without thinking about
+locking are long past.
+
+Any resource (data structures, hardware registers, etc.) which could
+be accessed concurrently by more than one thread must be synchronized.
+New code should be written with this requirement in mind; retrofitting
+locking after the fact is a rather more difficult task. Kernel
+developers should take the time to understand the available locking
+primitives well enough to pick the right tool for the job. Code which
+shows a lack of attention to concurrency will have a difficult path
+into the mainline.
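+
+As a schematic illustration (the structure and functions are invented),
+protecting a simple shared counter with a spinlock might look like:
+
+::
+
+	#include <linux/spinlock.h>
+
+	struct foo_stats {
+		spinlock_t lock;	/* protects tx_count */
+		unsigned long tx_count;
+	};
+
+	/* Called once, before the structure is visible to others. */
+	static void foo_stats_init(struct foo_stats *stats)
+	{
+		spin_lock_init(&stats->lock);
+		stats->tx_count = 0;
+	}
+
+	/* May run concurrently on several CPUs. */
+	static void foo_account_tx(struct foo_stats *stats)
+	{
+		spin_lock(&stats->lock);
+		stats->tx_count++;
+		spin_unlock(&stats->lock);
+	}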
+
+Regressions
+***********
+
+One final hazard worth mentioning is this: it can be tempting to make
+a change (which may bring big improvements) which causes something to
+break for existing users. This kind of change is called a "regression,"
+and regressions have become most unwelcome in the mainline kernel. With
+few exceptions, changes which cause regressions will be backed out if
+the regression cannot be fixed in a timely manner. Far better to avoid
+the regression in the first place.
+
+It is often argued that a regression can be justified if it causes
+things to work for more people than it creates problems for. Why not
+make a change if it brings new functionality to ten systems for each
+one it breaks? The best answer to this question was expressed by Linus
+in July, 2007:
+
+::
+
+	So we don't fix bugs by introducing new problems. That way lies
+	madness, and nobody ever knows if you actually make any real
+	progress at all. Is it two steps forwards, one step back, or one
+	step forward and two steps back?
+
+(http://lwn.net/Articles/243460/).
+
+An especially unwelcome type of regression is any sort of change to
+the user-space ABI. Once an interface has been exported to user space,
+it must be supported indefinitely. This fact makes the creation of
+user-space interfaces particularly challenging: since they cannot be
+changed in incompatible ways, they must be done right the first time.
+For this reason, a great deal of thought, clear documentation, and wide
+review for user-space interfaces is always required.
+
+
+Code checking tools
+-------------------
+
+For now, at least, the writing of error-free code remains an ideal
+that few of us can reach. What we can hope to do, though, is to catch
+and fix as many of those errors as possible before our code goes into
+the mainline kernel. To that end, the kernel developers have put
+together an impressive array of tools which can catch a wide variety of
+problems in an automated way. Any problem caught by the computer is a
+problem which will not afflict a user later on, so it stands to reason
+that the automated tools should be used whenever possible.
+
+The first step is simply to heed the warnings produced by the
+compiler. Contemporary versions of gcc can detect (and warn about) a
+large number of potential errors. Quite often, these warnings point to
+real problems. Code submitted for review should, as a rule, not produce
+any compiler warnings. When silencing warnings, take care to understand
+the real cause and try to avoid "fixes" which make the warning go away
+without addressing its cause.
+
+Note that not all compiler warnings are enabled by default. Build the
+kernel with "make EXTRA_CFLAGS=-W" to get the full set.
+
+The kernel provides several configuration options which turn on
+debugging features; most of these are found in the "kernel hacking"
+submenu. Several of these options should be turned on for any kernel
+used for development or testing purposes. In particular, you should
+turn on (a sample configuration fragment follows the list):
+
+ - ENABLE_WARN_DEPRECATED, ENABLE_MUST_CHECK, and FRAME_WARN to get an
+   extra set of warnings for problems like the use of deprecated
+   interfaces or ignoring an important return value from a function.
+   The output generated by these warnings can be verbose, but one need
+   not worry about warnings from other parts of the kernel.
+
+ - DEBUG_OBJECTS will add code to track the lifetime of various
+   objects created by the kernel and warn when things are done out of
+   order. If you are adding a subsystem which creates (and exports)
+   complex objects of its own, consider adding support for the object
+   debugging infrastructure.
+
+ - DEBUG_SLAB can find a variety of memory allocation and use errors;
+   it should be used on most development kernels.
+
+ - DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a
+   number of common locking errors.
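+
+As a rough illustration (option names as of this era; DEBUG_SLAB only
+applies when the SLAB allocator is selected), the corresponding .config
+fragment might look like:
+
+::
+
+	CONFIG_DEBUG_KERNEL=y
+	CONFIG_ENABLE_WARN_DEPRECATED=y
+	CONFIG_ENABLE_MUST_CHECK=y
+	CONFIG_FRAME_WARN=1024
+	CONFIG_DEBUG_OBJECTS=y
+	CONFIG_DEBUG_SLAB=y
+	CONFIG_DEBUG_SPINLOCK=y
+	CONFIG_DEBUG_ATOMIC_SLEEP=y
+	CONFIG_DEBUG_MUTEXES=y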
+
+There are quite a few other debugging options, some of which will be
+discussed below. Some of them have a significant performance impact and
+should not be used all of the time. But some time spent learning the
+available options will likely be paid back many times over in short
+order.
+
+One of the heavier debugging tools is the locking checker, or
+"lockdep." This tool will track the acquisition and release of every
+lock (spinlock or mutex) in the system, the order in which locks are
+acquired relative to each other, the current interrupt environment, and
+more. It can then ensure that locks are always acquired in the same
+order, that the same interrupt assumptions apply in all situations, and
+so on. In other words, lockdep can find a number of scenarios in which
+the system could, on rare occasion, deadlock. This kind of problem can
+be painful (for both developers and users) in a deployed system;
+lockdep allows such problems to be found in an automated manner ahead
+of time.
+
+As a diligent kernel programmer, you will, beyond doubt, check the
+return status of any operation (such as a memory allocation) which can
+fail. The fact of the matter, though, is that the resulting failure
+recovery paths are, probably, completely untested. Untested code tends
+to be broken code; you could be much more confident of your code if all
+those error-handling paths had been exercised a few times.
+
+The kernel provides a fault injection framework which can do exactly
+that, especially where memory allocations are involved. With fault
+injection enabled, a configurable percentage of memory allocations will
+be made to fail; these failures can be restricted to a specific range
+of code. Running with fault injection enabled allows the programmer to
+see how the code responds when things go badly. See
+Documentation/fault-injection/fault-injection.txt for more information
+on how to use this facility.
+
+Other kinds of errors can be found with the "sparse" static analysis
+tool. With sparse, the programmer can be warned about confusion between
+user-space and kernel-space addresses, mixture of big-endian and
+little-endian quantities, the passing of integer values where a set of
+bit flags is expected, and so on. Sparse must be installed separately
+(it can be found at
+https://sparse.wiki.kernel.org/index.php/Main_Page if your distributor
+does not package it); it can then be run on the code by adding "C=1" to
+your make command.
+
+The "Coccinelle" tool (http://coccinelle.lip6.fr/) is able to find a
+wide variety of potential coding problems; it can also propose fixes
+for those problems. Quite a few "semantic patches" for the kernel have
+been packaged under the scripts/coccinelle directory; running "make
+coccicheck" will run through those semantic patches and report on any
+problems found. See
+:ref:`Documentation/dev-tools/coccinelle.rst ` for more information.
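+
+For instance, assuming sparse is installed, typical invocations of
+these two checkers from the top of the source tree look like this (a
+sketch, not the complete option set):
+
+::
+
+	make C=1		# run sparse on files being recompiled
+	make C=2		# run sparse on all source files
+	make coccicheck MODE=report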
+
+Other kinds of portability errors are best found by compiling your
+code for other architectures. If you do not happen to have an S/390
+system or a Blackfin development board handy, you can still perform the
+compilation step. A large set of cross compilers for x86 systems can be
+found at
+
+	http://www.kernel.org/pub/tools/crosstool/
+
+Some time spent installing and using these compilers will help avoid
+embarrassment later.
+
+
+Documentation
+-------------
+
+Documentation has often been more the exception than the rule with
+kernel development. Even so, adequate documentation will help to ease
+the merging of new code into the kernel, make life easier for other
+developers, and will be helpful for your users. In many cases, the
+addition of documentation has become essentially mandatory.
+
+The first piece of documentation for any patch is its associated
+changelog. Log entries should describe the problem being solved, the
+form of the solution, the people who worked on the patch, any relevant
+effects on performance, and anything else that might be needed to
+understand the patch. Be sure that the changelog says *why* the patch
+is worth applying; a surprising number of developers fail to provide
+that information.
+
+Any code which adds a new user-space interface - including new sysfs
+or /proc files - should include documentation of that interface which
+enables user-space developers to know what they are working with. See
+Documentation/ABI/README for a description of how this documentation
+should be formatted and what information needs to be provided.
+
+The file
+:ref:`Documentation/translations/it_IT/admin-guide/kernel-parameters.rst `
+describes all of the kernel's boot-time parameters. Any patch which
+adds new parameters should add the appropriate entries to this file.
+
+Any new configuration option must be accompanied by help text which
+clearly explains the option and when the user might want to select it.
+
+Internal API information for many subsystems is documented by way of
+specially-formatted comments; these comments can be extracted and
+formatted in a number of ways by the "kernel-doc" script. If you are
+working within a subsystem which has kerneldoc comments, you should
+maintain them and add them, as appropriate, for externally-available
+functions. Even in areas which have not been so documented, there is no
+harm in adding kerneldoc comments for the future; indeed, this can be a
+useful activity for beginning kernel developers. The format of these
+comments, along with some information on how to create kerneldoc
+templates, can be found at
+:ref:`Documentation/translations/it_IT/doc-guide/ `.
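+
+A minimal kerneldoc sketch (the function itself is invented for the
+example):
+
+::
+
+	/**
+	 * foo_set_speed() - program a new transfer speed into a foo device
+	 * @dev: the device to be adjusted
+	 * @speed: the requested speed, in kilobits per second
+	 *
+	 * Return: 0 on success, or a negative errno value on failure.
+	 */
+	int foo_set_speed(struct foo_device *dev, int speed);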
+
+Anybody who reads through a significant amount of existing kernel code
+will note that, often, comments are most notable by their absence. Once
+again, the expectations for new code are higher than they were in the
+past; merging uncommented code will be harder. That said, there is
+little desire for verbosely-commented code. The code itself should be
+readable, with comments explaining the more subtle aspects.
+
+Certain things should always be commented. Uses of memory barriers
+should be accompanied by a line explaining why the barrier is
+necessary. The locking rules for data structures generally need to be
+explained somewhere. Major data structures need comprehensive
+documentation in general. Non-obvious dependencies between separate
+bits of code should be pointed out. Anything which might tempt a code
+janitor to make an incorrect "cleanup" needs a comment saying why it is
+done the way it is. And so on.
+
+
+Internal API changes
+--------------------
+
+The binary interface provided by the kernel to user space cannot be
+broken except under the most severe circumstances. The kernel's
+internal programming interfaces, instead, are highly fluid and can be
+changed when the need arises. If you find yourself having to work
+around a kernel API, or simply not using a specific functionality
+because it does not meet your needs, that may be a sign that the API
+needs to change. As a kernel developer, you are empowered to make such
+changes.
+
+There are, of course, some catches. API changes can be made, but they
+need to be well justified. So any patch making an internal API change
+should be accompanied by a description of what the change is and why it
+is necessary. This kind of change should also be broken out into a
+separate patch, rather than buried within a larger patch.
+
+The other catch is that a developer who changes an internal API is
+generally charged with the task of fixing any code within the kernel
+which is broken by the change. For a widely-used function, this duty
+can lead to literally hundreds or thousands of changes - many of which
+are likely to conflict with work being done by other developers.
+Needless to say, this can be a large job, so it is best to be sure that
+the justification is solid. Note that the Coccinelle tool can help with
+wide-ranging API changes.
+
+When making an incompatible API change, one should, whenever possible,
+ensure that code which has not been updated is caught by the compiler.
+This will help you to be sure that you have found all in-tree uses of
+that interface. It will also alert developers of out-of-tree code that
+there is a change they need to respond to. Supporting out-of-tree code
+is not something kernel developers need to be worried about, but we
+also do not have to make life harder for out-of-tree developers than it
+needs to be.
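+
+One common way to get that compiler catch, sketched here with invented
+names, is to change the function's name and signature together rather
+than silently changing its behaviour:
+
+::
+
+	/* Old interface: a bare int selects blocking behaviour. */
+	int foo_submit(struct foo_request *req, int wait);
+
+	/* New interface: renamed and retyped, so every caller that has
+	 * not been updated now fails to build instead of silently doing
+	 * the wrong thing. */
+	enum foo_wait_mode { FOO_NOWAIT, FOO_WAIT };
+	int foo_submit_request(struct foo_request *req,
+			       enum foo_wait_mode mode);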
diff --git a/Documentation/translations/it_IT/process/5.Posting.rst b/Documentation/translations/it_IT/process/5.Posting.rst
new file mode 100644
index 000000000000..b979266aa884
--- /dev/null
+++ b/Documentation/translations/it_IT/process/5.Posting.rst
@@ -0,0 +1,348 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/5.Posting.rst `
+:Translator: Federico Vaga
+
+.. _it_development_posting:
+
+Posting patches
+===============
+
+Sooner or later, the time comes when your work is ready to be
+presented to the community for review and, eventually, inclusion into
+the mainline kernel. Unsurprisingly, the kernel development community
+has evolved a set of conventions and procedures which are used in the
+posting of patches; following them will make life much easier for
+everybody involved. This document will attempt to cover these
+expectations in reasonable detail; more information can also be found
+in the 'Documentation' directory, in the files
+:ref:`translations/it_IT/process/submitting-patches.rst `,
+:ref:`translations/it_IT/process/submitting-drivers.rst `, and
+:ref:`translations/it_IT/process/submit-checklist.rst `.
+
+
+When to post
+------------
+
+There is a constant temptation to avoid posting patches before they
+are completely "ready." For simple patches, that is not a problem. If
+the work being done is complex, though, there is a lot to be gained by
+getting feedback from the community before the work is complete. So you
+should consider posting in-progress work, or even making a git tree
+available so that interested developers can catch up with your work at
+any time.
+
+When posting code which is not yet considered ready for inclusion, it
+is a good idea to say so in the posting itself. Also mention any major
+work which remains to be done and any known problems. Fewer people will
+look at patches which are known to be half-baked, but those who do will
+come in with the idea that they can help you drive the work in the
+right direction.
+
+
+Before creating patches
+-----------------------
+
+There are a number of things which should be done before you consider
+sending patches to the development community. These include:
+
+ - Test the code to the extent that you can. Make use of the kernel's
+   debugging tools, ensure that the kernel will build with all
+   reasonable combinations of configuration options, use
+   cross-compilers to build the code for different architectures, etc.
+
+ - Make sure your code is compliant with the kernel coding style
+   guidelines.
+
+ - Does your change have performance implications? If so, you should
+   run benchmarks showing what the impact (or benefit) of your change
+   is; a summary of the results should be included with the patch.
+
+ - Be sure that you have the right to post the code. If this work was
+   done for an employer, the employer likely has a right to the work
+   and must be agreeable to its release under the GPL.
+
+As a general rule, putting in some extra thought before posting code
+almost always pays back the effort in short order.
+
+
+Patch preparation
+-----------------
+
+The preparation of patches for posting can be a surprising amount of
+work, but, once again, attempting to save time here is not generally
+advisable, even in the short term.
+
+Patches must be prepared against a specific version of the kernel. As
+a general rule, a patch should be based on the current mainline as
+found in Linus's git tree. When basing on mainline, start with a
+well-known release point - a stable or -rc release - rather than
+branching off the mainline at some random spot.
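+
+For instance (the branch name and tag are only illustrative), starting
+a series from a known release point might look like:
+
+::
+
+	$ git fetch origin
+	$ git checkout -b foo-driver v4.20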
+
+It may become necessary to make versions against -mm, linux-next, or a
+subsystem tree, though, to facilitate wider testing and review. Basing
+a patch against these other trees can require a significant amount of
+work resolving conflicts and dealing with API changes; this will depend
+on the area of your patch and on what is going on elsewhere in the
+kernel.
+
+Only the most simple changes should be formatted as a single patch;
+everything else should be made as a logical series of changes.
+Splitting up patches is a bit of an art; some developers spend a long
+time figuring out how to do it in the way that the community expects.
+There are a few rules of thumb, however, which can help considerably:
+
+ - The patch series you post will almost certainly not be the series
+   of changes found in your working revision control system. Instead,
+   the changes you have made need to be considered in their final form,
+   then split apart in ways which make sense. The developers are
+   interested in discrete, self-contained changes, not the path you
+   took to get to those changes.
+
+ - Each logically independent change should be formatted as a separate
+   patch. These changes can be small ("add a field to this structure")
+   or large (adding a significant new driver, for example), but they
+   should be conceptually small enough to enable a one-line
+   description. Each patch should make a specific change which can be
+   reviewed on its own and verified to do what it says it does.
+
+ - As a way of restating the guideline above: do not mix different
+   types of changes in the same patch. If a single patch fixes a
+   critical security bug, rearranges a few data structures, and
+   reformats the code, there is a good chance that it will be passed
+   over and the important fix will be lost.
+
+ - Each patch should yield a kernel which builds and runs properly; if
+   your patch series is interrupted in the middle, the result should
+   still be a working kernel. Partial application of a patch series is
+   a common scenario when the "git bisect" tool is used to find
+   regressions; if the result is a broken kernel, you will make life
+   harder for developers and users who are engaging in the noble work
+   of tracking down problems.
+
+ - Do not overdo it, though. One developer once posted a set of edits
+   to a single file as 500 separate patches - an act which did not make
+   him the most popular person on the kernel mailing list. A single
+   patch can be reasonably large as long as it still contains a single
+   *logical* change.
+
+ - It can be tempting to add a whole new infrastructure with a series
+   of patches, but to leave that infrastructure unused until the final
+   patch in the series enables the whole thing. This temptation should
+   be avoided when possible; if that series adds regressions, bisection
+   will finger the last patch as the one which caused the problem, even
+   though the real bug is elsewhere.
+   Whenever possible, a patch which adds new code should make that code
+   active immediately.
+
+Working to create the perfect patch series can be a frustrating
+process which takes quite a bit of time and thought after the "real
+work" has been done. When done properly, though, it is time well spent.
+
+
+Patch formatting and changelogs
+-------------------------------
+
+So now you have a perfect series of patches for posting, but the work
+is not done quite yet. Each patch needs to be formatted into a message
+which quickly and clearly communicates its purpose to the rest of the
+world. To that end, each patch will be composed of the following:
+
+ - An optional "From" line naming the author of the patch. This line
+   is only necessary if you are passing on somebody else's patch via
+   email, but it never hurts to add it when in doubt.
+
+ - A one-line description of what the patch does. This message should
+   be enough for a reader who sees it with no other context to figure
+   out the scope of the patch. The message will usually be prefixed
+   with the name of the relevant subsystem, followed by the purpose of
+   the patch. For example:
+
+   ::
+
+	gpio: fix build on CONFIG_GPIO_SYSFS=n
+
+ - A blank line followed by a detailed description of the contents of
+   the patch. This description can be as long as is required; it should
+   say what the patch does and why it should be applied to the kernel.
+
+ - One or more tag lines, with, at a minimum, one Signed-off-by: line
+   from the author of the patch. These tags will be described in more
+   detail below.
+
+The items above, together, form the changelog for the patch. Writing
+good changelogs is a crucial but often-neglected art; it's worth
+spending another moment discussing this matter. When writing a
+changelog, you should bear in mind that a number of different people
+will be reading your words. These include subsystem maintainers and
+reviewers who need to decide whether the patch should be included,
+distributors and other maintainers trying to decide whether a patch
+should be backported to other kernels, bug hunters wondering whether
+the patch is responsible for a problem they are chasing, users who want
+to know how the kernel has changed, and more. A good changelog conveys
+the needed information to all of these people in the most direct and
+concise way possible.
+
+To that end, the summary line should describe the effects of and
+motivation for the change as well as possible given the one-line
+constraint. The detailed description can then amplify on those topics
+and provide any needed additional information. If the patch fixes a
+bug, cite the commit which introduced the bug if possible (and please
+provide both the commit ID and the title when citing commits). If a
+problem is associated with specific log or compiler output, include
+that output to help others searching for a solution to the same
+problem. If the change is meant to support other changes coming in
+later patches, say so. If internal APIs are changed, detail those
+changes and how other developers should respond. In general, the more
+you can put yourself into the shoes of everybody who will be reading
+your changelog, the better the changelog (and the kernel as a whole)
+will be. A sketch of a complete changelog follows.
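+
+Putting those elements together, a hypothetical changelog (the driver,
+commit ID, and names are all invented for the illustration) might read:
+
+::
+
+	fooether: fix NULL dereference when probing without an IRQ
+
+	When the device tree omits the interrupt property, foo_probe()
+	dereferences a NULL pointer and crashes the kernel. Fall back to
+	polled mode instead, which the hardware supports.
+
+	Fixes: 0123456789ab ("fooether: add foo ethernet driver")
+	Signed-off-by: Jane Hacker <jane@example.com>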
+
+Needless to say, the changelog should be the text used when committing
+the change to a revision control system. It will be followed by:
+
+ - The patch itself, in the unified ("-u") patch format. Using the
+   "-p" option to diff will associate function names with changes,
+   making the resulting patch easier for others to read.
+
+You should avoid including changes to irrelevant files (those
+generated by the build process, for example, or editor backup files) in
+the patch. The file "dontdiff" in the Documentation directory can help
+in this regard; pass it to diff with the "-X" option.
+
+The tags mentioned above are used to describe how various developers
+have been associated with the development of this patch. They are
+described in detail in the
+:ref:`translations/it_IT/process/submitting-patches.rst ` document;
+what follows here is a brief summary. Each of these lines has the
+format:
+
+::
+
+	tag: Full Name <email address>  optional-other-stuff
+
+The tags in common use are:
+
+ - Signed-off-by: this is a developer's certification that he or she
+   has the right to submit the patch for inclusion into the kernel. It
+   is an agreement to the Developer's Certificate of Origin, the full
+   text of which can be found in
+   :ref:`Documentation/translations/it_IT/process/submitting-patches.rst `.
+   Code without a proper signoff cannot be merged into the mainline.
+
+ - Co-developed-by: states that the patch was co-created by another
+   developer along with the original author. This is useful when
+   multiple people work on a single patch. Note that this person also
+   needs to have a Signed-off-by: line in the patch.
+
+ - Acked-by: indicates an agreement by another developer (often a
+   maintainer of the relevant code) that the patch is appropriate for
+   inclusion into the kernel.
+
+ - Tested-by: states that the named person has tested the patch and
+   found it to work.
+
+ - Reviewed-by: the named developer has reviewed the patch for
+   correctness; see the reviewer's statement in
+   :ref:`Documentation/translations/it_IT/process/submitting-patches.rst `
+   for more detail.
+
+ - Reported-by: names a user who reported a problem which is fixed by
+   this patch; this tag is used to give credit to the (often
+   underappreciated) people who test our code and let us know when
+   things do not work correctly.
+
+ - Cc: the named person received a copy of the patch and had the
+   opportunity to comment on it.
+
+Be careful in the addition of these tags to your patches: only Cc: may
+be added without the explicit permission of the person named.
+
+
+Sending the patch
+-----------------
+
+Before you mail your patches, there are a couple of other things you
+should take care of:
+
+ - Are you sure that your mailer will not corrupt the patches? Patches
+   which have had gratuitous white-space changes or line wrapping
+   performed by the mail client will not apply at the other end, and
+   often will not be examined in any detail. If there is any doubt at
+   all, mail the patch to yourself and convince yourself that it shows
+   up intact.
+
+   :ref:`Documentation/translations/it_IT/process/email-clients.rst `
+   has some helpful hints on making specific mail clients work for
+   sending patches.
+
+ - Are you sure your patch is free of silly mistakes?
+   You should always run patches through scripts/checkpatch.pl and
+   address the complaints it comes up with. Please bear in mind that
+   checkpatch.pl, while being the embodiment of a fair amount of
+   thought about what kernel patches should look like, is not smarter
+   than you. If fixing a checkpatch.pl complaint would make the code
+   worse, don't do it.
+
+Patches should always be sent as plain text. Please do not send them
+as attachments; that makes it much harder for reviewers to quote
+sections of the patch in their replies. Instead, just put the patch
+directly into your message.
+
+When mailing patches, it is important to send copies to anybody who
+might be interested in them. Unlike some other projects, the kernel
+encourages people to err on the side of sending too many copies; don't
+assume that the relevant people will see your posting on the mailing
+lists. In particular, copies should go to:
+
+ - The maintainer(s) of the affected subsystem(s). As described
+   earlier, the MAINTAINERS file is the first place to look for these
+   people.
+
+ - Other developers who have been working in the same area -
+   especially those who might be working there now. Using git to see
+   who else has modified the files you are working on can be helpful.
+
+ - If you are responding to a bug report or a feature request, copy
+   the original poster as well.
+
+ - Send a copy to the relevant mailing list, or, if nothing else
+   applies, to the linux-kernel list.
+
+ - If you are fixing a bug, think about whether the fix should go into
+   the next stable update. If so, the stable@vger.kernel.org mailing
+   list should get a copy of the patch. Also add a
+   "Cc: stable@vger.kernel.org" to the tags within the patch itself;
+   that will cause the stable team to be notified when your fix goes
+   into the mainline.
+
+When selecting recipients for a patch, it is good to have an idea of
+who you think will eventually accept the patch and get it merged. While
+it is possible to send patches directly to Linus Torvalds and have him
+merge them, things are not normally done that way. Linus is busy, and
+there are subsystem maintainers who watch over specific parts of the
+kernel. Usually you will want that maintainer to merge your patches. If
+there is no obvious maintainer, Andrew Morton is often the patch target
+of last resort.
+
+Patches also need good subject lines. The canonical format for a patch
+line looks like:
+
+::
+
+	[PATCH nn/mm] subsys: one-line description of the patch
+
+where "nn" is the ordinal number of the patch, "mm" is the total
+number of patches in the series, and "subsys" is the name of the
+affected subsystem. Clearly, nn/mm can be omitted for a series
+consisting of a single patch.
+
+If you have a significant series of patches, it is customary to send
+an introductory description as part zero. This convention is not
+universally followed, though; if you use it, remember that the
+information in the introduction does not make it into the kernel
+changelogs. So please ensure that the patches themselves have complete
+changelog information.
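+
+Much of this structure can be generated by the tools; one hedged
+sketch with git (the tag, branch, and addresses are invented) is:
+
+::
+
+	$ git format-patch --cover-letter -o outgoing/ v4.20..foo-driver
+	$ git send-email --thread --no-chain-reply-to \
+		--to=maintainer@example.com \
+		--cc=linux-kernel@vger.kernel.org outgoing/
+
+This writes the series as properly numbered [PATCH nn/mm] messages,
+with the cover letter as part zero, ready for editing before sending.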
+
+In general, the second and following parts of a multi-part patch
+should be sent as replies to the first part, so that they all thread
+together at the receiving end. Tools like git and quilt have commands
+to mail out a set of patches with the proper threading. If you have a
+long series, though, and are using git, please stay away from the
+--chain-reply-to option to avoid creating exceptionally deep nesting.
diff --git a/Documentation/translations/it_IT/process/6.Followthrough.rst b/Documentation/translations/it_IT/process/6.Followthrough.rst
new file mode 100644
index 000000000000..df7d5fb28832
--- /dev/null
+++ b/Documentation/translations/it_IT/process/6.Followthrough.rst
@@ -0,0 +1,240 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/6.Followthrough.rst `
+:Translator: Alessia Mantegazza
+
+.. _it_development_followthrough:
+
+=============
+Followthrough
+=============
+
+At this point, you have followed the guidelines given so far and, with
+the addition of your own engineering skills, have posted a perfect
+series of patches. One of the biggest mistakes that even experienced
+kernel developers can make is to conclude that their work is now done.
+In truth, posting patches indicates a transition into the next stage of
+the process, with, possibly, quite a bit of work yet to be done.
+
+It is a rare patch which is so good at its first posting that there is
+no room for improvement. The kernel development process recognizes this
+fact, and, as a result, is heavily oriented toward the improvement of
+posted code. You, as the author of that code, will be expected to work
+with the kernel community to ensure that your code is up to the
+kernel's quality standards. A failure to participate in this process is
+quite likely to prevent the inclusion of your patches into the
+mainline.
+
+
+Working with reviewers
+======================
+
+A patch of any significance will result in a number of comments from
+other developers as they review the code. Working with reviewers can
+be, for many developers, the most intimidating part of the kernel
+development process. Life can be made much easier, though, if you keep
+a few things in mind:
+
+ - If you have explained your patch well, reviewers will understand
+   its value and why you went to the trouble of writing it. But that
+   value will not keep them from asking a fundamental question: what
+   will it be like to maintain a kernel with this code in it five or
+   ten years later? Many of the changes you may be asked to make - from
+   coding style tweaks to substantial rewrites - come from the
+   understanding that Linux will still be around and under development
+   a decade from now.
+
+ - Code review is hard work, and it is a relatively thankless
+   occupation; people remember who wrote kernel code, but there is
+   little lasting fame for those who reviewed it. So reviewers can get
+   grumpy, especially when they see the same mistakes being made over
+   and over again. If you get a review which seems angry, insulting, or
+   outright offensive, resist the impulse to respond in kind. Code
+   review is about the code, not about the people, and code reviewers
+   are not attacking you personally.
+
+ - Similarly, code reviewers are not trying to promote their
+   employers' interests at your expense. Kernel developers often expect
+   to be working on the kernel years from now, but they understand that
+   their employer could change. They truly are, almost without
+   exception, working toward the creation of the best kernel they can;
+   they are not trying to create discomfort for their employers'
+   competitors.
+
+What all of this comes down to is that, when reviewers send you
+comments, you need to pay attention to the technical observations that
+they are making. Do not let their form of expression or your own pride
+keep that from happening. When you get review comments on a patch, take
+the time to understand what the reviewer is trying to say. If possible,
+fix the things that the reviewer is asking you to fix. And respond back
+to the reviewer: thank them, and describe how you will answer their
+questions.
+
+Note that you do not have to agree with every change suggested by
+reviewers. If you believe that the reviewer has misunderstood your
+code, explain what is really going on. If you have a technical
+objection to a suggested change, describe it and justify your solution
+to the problem. If your explanations make sense, the reviewer will
+accept them. Should your explanation not prove persuasive, though,
+especially if others start to agree with the reviewer, take some time
+to think things over again. It can be easy to become blinded by your
+own solution to a problem to the point that you do not realize that
+something is fundamentally wrong or, perhaps, you are not even solving
+the right problem.
+
+Andrew Morton has suggested that every review comment which does not
+result in a code change should result in an additional code comment
+instead; that can help future reviewers avoid the questions which came
+up the first time around.
+
+One fatal mistake is to ignore review comments in the hope that they
+will go away. They will not go away. If you repost code without having
+responded to the comments you got the time before, you are likely to
+find that your patches go nowhere.
+
+Speaking of reposting code: please bear in mind that reviewers are not
+going to remember all the details of the code you posted the last time
+around. So it is always a good idea to remind reviewers of previously
+raised issues and how you dealt with them. Reviewers should not have to
+search through list archives to familiarize themselves with what was
+said last time; if you help them get a running start, they will be in a
+better mood when they revisit your code.
+
+What if you have tried to do everything right and things still are not
+going anywhere? Most technical disagreements can be resolved through
+discussion, but there are times when somebody simply has to make a
+decision. If you honestly believe that this decision is going against
+you wrongfully, you can always try appealing to a higher power. For
+this kind of thing, that higher power tends to be Andrew Morton. Andrew
+is a highly respected figure within the kernel development community;
+he can often unjam a situation which seems to be hopelessly blocked.
+Appealing to Andrew should not be done lightly, though, and not before
+all other alternatives have been explored.
+E tenete a mente, ovviamente, che anche lui potrebbe non essere d'accordo
+con voi.
+
+Cosa accade poi
+===============
+
+Se la modifica è ritenuta un elemento valido da aggiungere al kernel,
+e una volta che la maggior parte degli appunti dei revisori sono stati
+sistemati, il passo successivo solitamente è quello di entrare in un
+sottosistema gestito da un manutentore. Come ciò avviene dipende dal
+sottosistema medesimo; ogni manutentore ha il proprio modo di fare le cose.
+In particolare, ci potrebbero essere diversi sorgenti - uno, magari, dedicato
+alle modifiche pianificate per la finestra di fusione successiva, e un altro
+per il lavoro di lungo periodo.
+
+Per le modifiche proposte in aree per le quali non esiste un sottosistema
+preciso (modifiche di gestione della memoria, per esempio), i sorgenti di
+ripiego finiscono per essere -mm. Ed anche le modifiche che riguardano
+più sottosistemi possono finire in quest'ultimo.
+
+L'inclusione nei sorgenti di un sottosistema può comportare, per una patch,
+un più alto livello di visibilità. Ora altri sviluppatori che stanno lavorando
+in quei medesimi sorgenti avranno le vostre modifiche. I sottosistemi
+solitamente riforniscono anche linux-next, rendendo i propri contenuti
+visibili all'intera comunità di sviluppo. A questo punto, ci sono buone
+possibilità per voi di ricevere ulteriori commenti da un nuovo gruppo di
+revisori; anche a questi commenti dovrete rispondere come avete già fatto per
+gli altri.
+
+A questo punto, in base alla natura della vostra modifica, potrebbero
+emergere conflitti con il lavoro svolto da altri.
+Nella peggiore delle situazioni, i conflitti più pesanti tra modifiche possono
+concludersi con la messa da parte di alcuni dei lavori svolti cosicché le
+modifiche restanti possano funzionare ed essere integrate. Altre volte, la
+risoluzione dei conflitti richiederà del lavoro con altri sviluppatori e,
+possibilmente, lo spostamento di alcune patch da dei sorgenti a degli altri
+in modo da assicurare che tutto sia applicato in modo pulito. Questo lavoro
+può rivelarsi una spina nel fianco, ma consideratevi fortunati: prima
+dell'avvento dei sorgenti linux-next, questi conflitti spesso emergevano solo
+durante l'apertura della finestra di integrazione e dovevano essere smaltiti
+in fretta. Ora essi possono essere risolti comodamente, prima dell'apertura
+della finestra.
+
+Un giorno, se tutto va bene, vi collegherete e vedrete che la vostra patch
+è stata inserita nel ramo principale del kernel. Congratulazioni! Terminati
+i festeggiamenti (nel frattempo avrete inserito il vostro nome nel file
+MAINTAINERS) vale la pena ricordare una piccola cosa, ma importante: il
+lavoro non è ancora finito. L'inserimento nel ramo principale porta con sé
+nuove sfide.
+
+Cominciamo con il dire che ora la visibilità della vostra modifica è
+ulteriormente cresciuta. Ciò potrebbe portare ad una nuova fase di
+commenti dagli sviluppatori che non erano ancora a conoscenza della vostra
+patch. Ignorarli potrebbe essere allettante dato che non ci sono più
+dubbi sull'integrazione della modifica. Resistete a tale tentazione: dovete
+mantenervi disponibili verso gli sviluppatori che hanno domande o suggerimenti
+per voi.
+
+Ancora più importante: l'inclusione nel ramo principale mette il vostro
+codice nelle mani di un gruppo di *tester* molto più esteso.
+Anche se avete contribuito ad un driver per un hardware che non è ancora
+disponibile, sarete sorpresi da quante persone inseriranno il vostro codice
+nei loro kernel. E, ovviamente, dove ci sono *tester*, ci saranno anche dei
+rapporti su eventuali bachi.
+
+I rapporti della specie peggiore sono quelli che indicano delle regressioni.
+Se la vostra modifica causa una regressione, avrete un gran numero di
+occhi puntati su di voi; la regressione deve essere sistemata il prima
+possibile. Se non vorrete o non sarete capaci di sistemarla (e nessuno
+lo farà per voi), la vostra modifica sarà quasi certamente rimossa durante
+la fase di stabilizzazione. Oltre alla perdita di tutto il lavoro svolto
+per far sì che la vostra modifica fosse inserita nel ramo principale,
+l'avere una modifica rimossa a causa del fallimento nel sistemare una
+regressione potrebbe rendere più difficile per voi far accettare
+il vostro lavoro in futuro.
+
+Dopo che ogni regressione è stata affrontata, ci potrebbero essere altri
+bachi ordinari da "sconfiggere". Il periodo di stabilizzazione è la
+vostra migliore opportunità per sistemare questi bachi e assicurarvi che
+il debutto del vostro codice nel ramo principale del kernel sia il più solido
+possibile. Quindi, per favore, rispondete ai rapporti sui bachi e ponete
+rimedio, se possibile, a tutti i problemi. È a questo che serve il periodo
+di stabilizzazione; potrete iniziare a creare nuove fantastiche modifiche
+una volta che ogni problema con le vecchie sia stato risolto.
+
+Non dimenticate che esistono altre pietre miliari che possono generare
+rapporti sui bachi: il successivo rilascio stabile, quando una distribuzione
+importante usa una versione del kernel nella quale è presente la vostra
+modifica, eccetera. Il continuare a rispondere a questi rapporti è fonte di
+orgoglio per il vostro lavoro. Se questa non è una motivazione sufficiente,
+considerate anche che la comunità di sviluppo ricorda gli sviluppatori che
+hanno perso interesse per il loro codice una volta integrato. La prossima
+volta che pubblicherete una patch, la comunità la valuterà anche sulla base
+dell'aspettativa che continuerete a prendervene cura in futuro.
+
+
+Altre cose che possono accadere
+===============================
+
+Un giorno, potreste aprire la vostra email e vedere che qualcuno vi ha
+inviato una patch per il vostro codice. Questo, dopo tutto, è uno dei
+vantaggi di avere il vostro codice "là fuori". Se siete d'accordo con
+la modifica, potrete anche inoltrarla ad un manutentore di sottosistema
+(assicuratevi di includere la riga "From:" cosicché l'attribuzione sia
+corretta, e aggiungete una vostra firma "Signed-off-by"), oppure inviate
+un "Acked-by:" e lasciate che l'autore originale la invii.
+
+Se non siete d'accordo con la patch, inviate una risposta educata
+spiegando il perché. Se possibile, dite all'autore quali cambiamenti
+servirebbero per rendere la patch accettabile da voi. C'è una certa
+riluttanza nell'integrare modifiche contro il parere del manutentore del
+codice, ma solo fino ad un certo punto. Se siete visti
+come qualcuno che blocca un buon lavoro senza motivo, quelle patch vi
+passeranno oltre e andranno nel ramo principale in ogni caso. Nel kernel
+Linux, nessuno ha potere di veto assoluto su alcun codice. Eccezione
+fatta per Linus, forse.
+
+In rarissime occasioni, potreste vedere qualcosa di completamente diverso:
+un altro sviluppatore che pubblica una soluzione differente al vostro
+problema.
+A questo punto, c'è una buona probabilità che una delle due
+modifiche non verrà integrata, e il "c'ero prima io" non è considerato
+un argomento tecnico rilevante. Se la modifica di qualcun altro rimpiazza
+la vostra ed entra nel ramo principale, esiste un unico modo di reagire:
+siate contenti che il vostro problema sia stato risolto e andate avanti con
+il vostro lavoro. Vedere il proprio lavoro accantonato in questo
+modo può essere avvilente e scoraggiante, ma la comunità ricorderà come
+avrete reagito anche dopo che avrà dimenticato quale fu la modifica accettata.
diff --git a/Documentation/translations/it_IT/process/7.AdvancedTopics.rst b/Documentation/translations/it_IT/process/7.AdvancedTopics.rst
new file mode 100644
index 000000000000..cc1cff5d23ae
--- /dev/null
+++ b/Documentation/translations/it_IT/process/7.AdvancedTopics.rst
@@ -0,0 +1,191 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/7.AdvancedTopics.rst `
+:Translator: Federico Vaga
+
+.. _it_development_advancedtopics:
+
+Argomenti avanzati
+==================
+
+A questo punto, si spera, dovreste avere un'idea su come funziona il processo
+di sviluppo. Ma rimane comunque molto da imparare! Questo capitolo copre
+alcuni argomenti che potrebbero essere utili per gli sviluppatori che stanno
+per diventare parte integrante del processo di sviluppo del kernel.
+
+Gestire le modifiche con git
+----------------------------
+
+L'uso di un sistema distribuito per il controllo delle versioni del kernel
+ebbe inizio nel 2002, quando Linus iniziò a provare il programma proprietario
+BitKeeper. Nonostante l'uso di BitKeeper fosse opinabile, di certo il suo
+approccio alla gestione dei sorgenti non lo era. Un sistema distribuito per
+il controllo delle versioni accelerò immediatamente lo sviluppo del kernel.
+Oggigiorno, ci sono diverse alternative libere a BitKeeper. Nel bene e nel
+male, il progetto del kernel ha deciso di usare git per gestire i sorgenti.
+
+Gestire le modifiche con git può rendere la vita dello sviluppatore molto
+più facile, specialmente quando il volume delle modifiche cresce.
+Git ha anche i suoi lati taglienti che possono essere pericolosi; è uno
+strumento giovane e potente che è ancora in fase di civilizzazione da parte
+dei suoi sviluppatori. Questo documento non ha lo scopo di insegnare l'uso
+di git ai suoi lettori; ci sarebbe materiale a sufficienza per un lungo
+documento al riguardo. Invece, qui ci concentriamo in particolare su come
+git è parte del processo di sviluppo del kernel. Gli sviluppatori che
+desiderassero acquisire dimestichezza con git troveranno più informazioni ai
+seguenti indirizzi:
+
+    http://git-scm.com/
+
+    http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
+
+e su varie guide che potrete trovare su internet.
+
+La prima cosa da fare prima di usarlo per produrre patch che saranno
+disponibili ad altri è quella di leggere i siti qui sopra e di acquisire una
+base solida su come funziona git. Uno sviluppatore che sappia usare git
+dovrebbe essere capace di ottenere una copia del repositorio principale,
+esplorare la storia delle revisioni, registrare le modifiche, usare i rami,
+eccetera. Una certa comprensione degli strumenti git per riscrivere la storia
+(come ``rebase``) è altrettanto utile. Git ha i propri concetti e la propria
+terminologia; un nuovo utente dovrebbe conoscere *refs*, *remote branch*,
+*index*, *fast-forward merge*, *push* e *pull*, *detached head*, eccetera.
+Il tutto potrebbe essere un po' intimidatorio visto da fuori, ma con un po'
+di studio i concetti non saranno così difficili da capire.
+
+Utilizzare git per produrre patch da sottomettere via email può essere
+un buon esercizio da fare mentre si sta prendendo confidenza con lo strumento.
+
+Quando sarete in grado di creare rami git che altri possano consultare,
+vi servirà, ovviamente, un server dal quale sia possibile attingere le vostre
+modifiche. Se avete un server accessibile da Internet, configurarlo per
+eseguire git-daemon è relativamente semplice. Altrimenti, cominciano a
+comparire piattaforme che offrono spazi pubblici e gratuiti (GitHub,
+per esempio). Gli sviluppatori affermati possono ottenere un account
+su kernel.org, ma non è proprio facile da ottenere; per maggiori informazioni
+consultate la pagina web http://kernel.org/faq/.
+
+In git è normale avere a che fare con tanti rami. Ogni linea di sviluppo
+può essere separata in "rami per argomenti" e gestita indipendentemente.
+In git i rami costano pochissimo, per cui non c'è motivo per non usarli
+in libertà. In ogni caso, non dovreste sviluppare su alcun ramo dal
+quale altri potrebbero attingere. I rami disponibili pubblicamente dovrebbero
+essere creati con attenzione; integrate patch dai rami di sviluppo
+solo quando sono complete e pronte ad essere consegnate - non prima.
+
+Git offre alcuni strumenti che vi permettono di riscrivere la storia del
+vostro sviluppo. Una modifica errata (diciamo, una che rompe la bisezione,
+oppure che ha un qualche tipo di baco evidente) può essere corretta sul posto
+o fatta sparire completamente dalla storia. Una serie di patch può essere
+riscritta come se fosse stata scritta in cima al ramo principale di oggi,
+anche se ci avete lavorato per mesi. Le modifiche possono essere spostate
+in modo trasparente da un ramo ad un altro. E così via. Un uso giudizioso
+di questa capacità di riscrivere la storia può aiutare nella creazione di
+una serie di patch pulite e con meno problemi.
+
+Un uso eccessivo, tuttavia, può portare ad altri tipi di problemi, oltre
+alla semplice ossessione per la creazione di una storia del progetto che sia
+perfetta. Riscrivere la storia riscriverà le patch contenute in quella
+storia, trasformando un kernel verificato (si spera) in uno da verificare.
+Ma, oltre a questo, gli sviluppatori non possono collaborare se non condividono
+la stessa vista sulla storia del progetto; se riscrivete la storia dalla quale
+altri sviluppatori hanno attinto per i loro repositori, renderete la loro vita
+molto più difficile. Quindi tenete conto di questa semplice regola generale:
+la storia che avete esposto ad altri, generalmente, dovrebbe essere vista come
+immutabile.
+
+Dunque, una volta che il vostro insieme di patch è stato reso disponibile
+pubblicamente non dovrebbe essere più sovrascritto. Git tenterà di imporre
+questa regola, e si rifiuterà di pubblicare nuove patch che non risultino
+essere dirette discendenti di quelle pubblicate in precedenza (in altre parole,
+patch che non condividono la stessa storia). È possibile ignorare questo
+controllo, e ci saranno momenti in cui sarà davvero necessario riscrivere
+un ramo già pubblicato. Un esempio è linux-next dove le patch vengono
+spostate da un ramo all'altro al fine di evitare conflitti. Ma questo tipo
+d'azione dovrebbe essere un'eccezione.
+Questo è uno dei motivi per cui lo
+sviluppo dovrebbe avvenire in rami privati (che possono essere sovrascritti
+quando lo si ritiene necessario) ed essere reso pubblico solo quando è in uno
+stato avanzato.
+
+Man mano che il ramo principale (o altri rami su cui avete basato le
+modifiche) avanza, diventa allettante l'idea di integrarlo
+per rimanere sempre aggiornati. Per un ramo privato, il *rebase* può essere
+un modo semplice per rimanere aggiornati, ma questa non è un'opzione nel
+momento in cui il vostro ramo è stato esposto al mondo intero.
+*Merge* occasionali possono essere considerati di buon senso, ma quando
+diventano troppo frequenti confondono inutilmente la storia. La tecnica
+suggerita in questi casi è quella di fare *merge* raramente, e più in generale
+solo nei momenti di rilascio (per esempio gli -rc del ramo principale).
+Se siete nervosi circa alcune patch in particolare, potete sempre fare
+dei *merge* di test in un ramo privato. In queste situazioni git "rerere"
+può essere utile; questo strumento si ricorda come i conflitti di *merge*
+furono risolti in passato cosicché non dovrete fare lo stesso lavoro due volte.
+
+Una delle lamentele più grosse e ricorrenti sull'uso di strumenti come git
+è il grande movimento di patch da un repositorio all'altro che rende
+facile l'integrazione nel ramo principale di modifiche mediocri, il tutto
+sotto il naso dei revisori. Gli sviluppatori del kernel tendono ad essere
+scontenti quando vedono succedere queste cose; presentarsi con un ramo git
+pieno di patch non revisionate o fuori tema potrebbe
+influire sulla vostra capacità di proporre, in futuro, l'integrazione dei
+vostri rami. Citando Linus
+
+::
+
+   Potete inviarmi le vostre patch, ma per far sì che io integri una
+   vostra modifica da git, devo sapere che voi sappiate cosa state
+   facendo, e ho bisogno di fidarmi *senza* dover passare tutte
+   le modifiche manualmente una per una.
+
+(http://lwn.net/Articles/224135/).
+
+Per evitare queste situazioni, assicuratevi che tutte le patch in un ramo
+siano strettamente correlate al tema delle modifiche; un ramo "driver fixes"
+non dovrebbe fare modifiche al codice principale per la gestione della memoria.
+E, più importante ancora, non usate un repositorio git per tentare di
+evitare il processo di revisione. Pubblicate un sommario di quello che il
+vostro ramo contiene sulle liste di discussione più opportune e, quando
+sarà il momento, richiedete che il vostro ramo venga integrato in linux-next.
+
+Se e quando altri inizieranno ad inviarvi patch per essere incluse nel
+vostro repositorio, non dovete dimenticare di revisionarle. Inoltre
+assicuratevi di mantenerne le informazioni di paternità; al riguardo git "am"
+fa del suo meglio, ma potreste dover aggiungere una riga "From:" alla patch
+nel caso in cui sia arrivata per vie traverse.
+
+Quando richiedete l'integrazione, siate certi di fornire tutte le informazioni:
+dov'è il vostro repositorio, quale ramo integrare, e quali cambiamenti si
+otterranno dall'integrazione. Il comando git request-pull può essere d'aiuto;
+preparerà una richiesta nel modo in cui gli altri sviluppatori se l'aspettano,
+e verificherà che vi siate ricordati di pubblicare quelle patch su un
+server pubblico.
+
+Revisionare le patch
+--------------------
+
+Alcuni lettori potrebbero avere obiezioni sulla presenza di questa sezione
+negli "argomenti avanzati" sulla base del fatto che anche gli sviluppatori
+principianti dovrebbero revisionare le patch.
+È certamente vero che non c'è modo
+migliore di imparare come programmare per il kernel che guardare il codice
+pubblicato dagli altri. In aggiunta, i revisori sono sempre troppo pochi;
+guardando il codice potete apportare un significativo contributo all'intero
+processo.
+
+Revisionare il codice potrebbe risultare intimidatorio, specialmente per i
+nuovi arrivati che potrebbero sentirsi un po' nervosi nel mettere in
+discussione - in pubblico - il codice pubblicato da sviluppatori più esperti.
+Perfino il codice scritto dagli sviluppatori più esperti può essere migliorato.
+Forse il suggerimento migliore per i revisori (tutti) è questo: formulate
+i commenti come domande e non come critiche. Chiedere "Come viene rilasciato
+il *lock* in questo percorso?" funziona sempre molto meglio che
+"qui la sincronizzazione è sbagliata".
+
+Diversi sviluppatori revisioneranno il codice con diversi punti di vista.
+Alcuni potrebbero concentrarsi principalmente sullo stile del codice e se
+alcune righe hanno degli spazi bianchi di troppo. Altri si chiederanno
+se integrare la modifica sia, nel complesso, una cosa positiva per il kernel
+o no. E altri ancora si focalizzeranno sui problemi di sincronizzazione,
+l'uso eccessivo di *stack*, problemi di sicurezza, duplicazione del codice
+in altri contesti, documentazione, effetti negativi sulle prestazioni, cambi
+all'ABI dello spazio utente, eccetera. Qualunque tipo di revisione è ben
+accetta e di valore, se porta ad avere un codice migliore nel kernel.
diff --git a/Documentation/translations/it_IT/process/8.Conclusion.rst b/Documentation/translations/it_IT/process/8.Conclusion.rst
new file mode 100644
index 000000000000..039bfc5a4108
--- /dev/null
+++ b/Documentation/translations/it_IT/process/8.Conclusion.rst
@@ -0,0 +1,85 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/8.Conclusion.rst `
+:Translator: Alessia Mantegazza
+
+.. _it_development_conclusion:
+
+Per maggiori informazioni
+=========================
+
+Esistono numerose fonti di informazioni sullo sviluppo del kernel Linux
+e argomenti correlati. Prima tra queste sarà sempre la cartella Documentation
+che si trova nei sorgenti del kernel.
+
+Il file :ref:`process/howto.rst ` è un punto di partenza
+importante; :ref:`process/submitting-patches.rst ` e
+:ref:`process/submitting-drivers.rst ` sono
+anch'essi qualcosa che tutti gli sviluppatori del kernel dovrebbero leggere.
+Molte API interne al kernel sono documentate utilizzando il meccanismo
+kerneldoc; "make htmldocs" o "make pdfdocs" possono essere usati per generare
+quei documenti in HTML o PDF (sebbene le versioni di TeX di alcune
+distribuzioni abbiano dei limiti interni e falliscano nel processare
+appropriatamente i documenti).
+
+Diversi siti web approfondiscono lo sviluppo del kernel ad ogni livello
+di dettaglio. Il vostro autore vorrebbe umilmente suggerirvi
+http://lwn.net/ come fonte; usando l'indice 'kernel' su LWN troverete
+molti argomenti specifici sul kernel:
+
+    http://lwn.net/Kernel/Index/
+
+Oltre a ciò, una risorsa valida per gli sviluppatori kernel è:
+
+    http://kernelnewbies.org/
+
+E, ovviamente, una fonte da non dimenticare è http://kernel.org/, il luogo
+definitivo per le informazioni sui rilasci del kernel.
+
+Ci sono numerosi libri sullo sviluppo del kernel:
+
+    Linux Device Drivers, 3rd Edition (Jonathan Corbet, Alessandro
+    Rubini, and Greg Kroah-Hartman). In linea all'indirizzo
+    http://lwn.net/Kernel/LDD3/.
+
+    Linux Kernel Development (Robert Love).
+
+    Understanding the Linux Kernel (Daniel Bovet and Marco Cesati).
+
+Tutti questi libri soffrono di un difetto comune: tendono a risultare in un
+certo senso obsoleti dal momento che si trovano in libreria da diverso
+tempo. Comunque contengono informazioni abbastanza buone.
+
+La documentazione per git la troverete su:
+
+    http://www.kernel.org/pub/software/scm/git/docs/
+
+    http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
+
+
+Conclusioni
+===========
+
+Congratulazioni a chiunque ce l'abbia fatta a terminare questo documento di
+lungo respiro. Si spera che abbia fornito un'utile comprensione d'insieme
+di come il kernel Linux viene sviluppato e di come potete partecipare a
+tale processo.
+
+Infine, quello che conta è partecipare. Qualsiasi progetto software
+open-source non è altro che la somma di quello che i suoi contributori
+mettono al suo interno. Il kernel Linux è cresciuto velocemente e bene
+perché ha ricevuto il supporto di un impressionante gruppo di sviluppatori,
+ognuno dei quali sta lavorando per renderlo migliore. Il kernel è un esempio
+importante di cosa può essere fatto quando migliaia di persone lavorano
+insieme verso un obiettivo comune.
+
+Il kernel può sempre beneficiare, tuttavia, di una più larga base di
+sviluppatori: c'è sempre molto lavoro da fare. Ma, cosa non meno importante,
+molti degli altri partecipanti all'ecosistema Linux possono trarre beneficio
+attraverso il contributo al kernel. Inserire codice nel ramo principale è la
+chiave per arrivare ad una qualità del codice più alta, minori costi di
+manutenzione e distribuzione, alti livelli d'influenza sulla direzione
+dello sviluppo del kernel, e molto altro. È una situazione nella quale
+tutti coloro che sono coinvolti vincono. Avviate il vostro editor e
+raggiungeteci; sarete più che benvenuti.
diff --git a/Documentation/translations/it_IT/process/adding-syscalls.rst b/Documentation/translations/it_IT/process/adding-syscalls.rst
new file mode 100644
index 000000000000..e0a64b0688a7
--- /dev/null
+++ b/Documentation/translations/it_IT/process/adding-syscalls.rst
@@ -0,0 +1,643 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/adding-syscalls.rst `
+:Translator: Federico Vaga
+
+.. _it_addsyscalls:
+
+Aggiungere una nuova chiamata di sistema
+========================================
+
+Questo documento descrive quello che è necessario sapere per aggiungere
+nuove chiamate di sistema al kernel Linux; questo è da considerarsi come
+un'aggiunta ai soliti consigli su come proporre nuove modifiche
+:ref:`Documentation/translations/it_IT/process/submitting-patches.rst `.
+
+
+Alternative alle chiamate di sistema
+------------------------------------
+
+La prima considerazione da fare quando si aggiunge una nuova chiamata di
+sistema è quella di valutare le alternative. Nonostante le chiamate di sistema
+siano il punto di interazione fra spazio utente e kernel più tradizionale ed
+ovvio, esistono altre possibilità - scegliete quella che meglio si adatta alla
+vostra interfaccia.
+
+ - Se le operazioni coinvolte possono rassomigliare a quelle di un filesystem,
+   allora potrebbe avere molto più senso la creazione di un nuovo filesystem o
+   dispositivo. Inoltre, questo rende più facile incapsulare la nuova
+   funzionalità in un modulo kernel piuttosto che doverla sviluppare nel cuore
+   del kernel.
+
+     - Se la nuova funzionalità prevede operazioni dove il kernel notifica
+       lo spazio utente su un avvenimento, allora restituire un descrittore
+       di file all'oggetto corrispondente permette allo spazio utente di
+       utilizzare ``poll``/``select``/``epoll`` per ricevere quelle notifiche.
+     - Tuttavia, le operazioni che non si sposano bene con operazioni tipo
+       :manpage:`read(2)`/:manpage:`write(2)` dovrebbero essere implementate
+       come chiamate :manpage:`ioctl(2)`, il che potrebbe portare ad un'API in
+       un qualche modo opaca.
+
+ - Se dovete esporre solo delle informazioni sul sistema, un nuovo nodo in
+   sysfs (vedere ``Documentation/translations/it_IT/filesystems/sysfs.txt``) o
+   in procfs potrebbe essere sufficiente. Tuttavia, l'accesso a questi
+   meccanismi richiede che il filesystem sia montato, il che potrebbe non
+   essere sempre vero (per esempio, in ambienti come namespace/sandbox/chroot).
+   Evitate d'aggiungere nuove API in debugfs perché questo filesystem non viene
+   considerato un'interfaccia di 'produzione' verso lo spazio utente.
+ - Se l'operazione è specifica ad un particolare file o descrittore, allora
+   potrebbe essere appropriata l'aggiunta di un comando :manpage:`fcntl(2)`.
+   Tuttavia, :manpage:`fcntl(2)` è una chiamata di sistema multiplatrice che
+   nasconde una notevole complessità, quindi è ottima solo quando la nuova
+   funzione assomiglia a quelle già esistenti in :manpage:`fcntl(2)`, oppure
+   la nuova funzionalità è veramente semplice (per esempio, leggere/scrivere
+   un semplice flag associato ad un descrittore di file).
+ - Se l'operazione è specifica ad un particolare processo, allora
+   potrebbe essere appropriata l'aggiunta di un comando :manpage:`prctl(2)`.
+   Come per :manpage:`fcntl(2)`, questa chiamata di sistema è un multiplatore
+   complesso, quindi è meglio usarla per cose molto simili a quelle esistenti
+   nel comando ``prctl`` oppure per leggere/scrivere un semplice flag relativo
+   al processo.
+
+
+Progettare l'API: pianificare le estensioni
+-------------------------------------------
+
+Una nuova chiamata di sistema diventerà parte dell'API del kernel, e
+dev'essere supportata per un periodo indefinito. Per questo, è davvero
+un'ottima idea quella di discutere apertamente l'interfaccia sulla lista
+di discussione del kernel, ed è altrettanto importante pianificarne eventuali
+estensioni future.
+
+(Nella tabella delle chiamate di sistema sono disseminati esempi dove questo
+non fu fatto, assieme ai corrispondenti aggiornamenti -
+``eventfd``/``eventfd2``, ``dup2``/``dup3``, ``inotify_init``/``inotify_init1``,
+``pipe``/``pipe2``, ``renameat``/``renameat2`` - quindi imparate dalla storia
+del kernel e pianificate le estensioni fin dall'inizio)
+
+Per semplici chiamate di sistema che accettano solo un paio di argomenti,
+il modo migliore di permettere l'estensibilità è quello di includere un
+argomento *flags* alla chiamata di sistema. Per assicurarsi che i programmi
+dello spazio utente possano usare in sicurezza *flags* con diverse versioni
+del kernel, verificate se *flags* contiene un qualsiasi valore sconosciuto,
+nel qual caso rifiutate la chiamata di sistema (con ``EINVAL``)::
+
+    if (flags & ~(THING_FLAG1 | THING_FLAG2 | THING_FLAG3))
+        return -EINVAL;
+
+(Se *flags* non viene ancora utilizzato, verificate che l'argomento sia zero)
+
+Per chiamate di sistema più sofisticate che coinvolgono un numero più grande di
+argomenti, il modo migliore è quello di incapsularne la maggior parte in una
+struttura dati che verrà passata per puntatore.
+Questa struttura potrà
+funzionare con future estensioni includendo un campo *size*::
+
+    struct xyzzy_params {
+            u32 size; /* userspace sets p->size = sizeof(struct xyzzy_params) */
+            u32 param_1;
+            u64 param_2;
+            u64 param_3;
+    };
+
+Fintanto che un qualsiasi campo nuovo, diciamo ``param_4``, è progettato per
+offrire il comportamento precedente quando vale zero, allora questo permetterà
+di gestire un conflitto di versione in entrambe le direzioni:
+
+ - un vecchio kernel può gestire l'accesso di una versione moderna di un
+   programma in spazio utente verificando che la memoria oltre la dimensione
+   della struttura dati attesa sia zero (in pratica verificare che
+   ``param_4 == 0``).
+ - un nuovo kernel può gestire l'accesso di una versione vecchia di un
+   programma in spazio utente estendendo la struttura dati con zeri (in pratica
+   ``param_4 = 0``).
+
+Vedere :manpage:`perf_event_open(2)` e la funzione ``perf_copy_attr()`` (in
+``kernel/events/core.c``) per un esempio pratico di questo approccio.
+
+
+Progettare l'API: altre considerazioni
+--------------------------------------
+
+Se la vostra nuova chiamata di sistema permette allo spazio utente di fare
+riferimento ad un oggetto del kernel, allora questa dovrebbe usare un
+descrittore di file per l'accesso all'oggetto - non inventatevi nuovi tipi di
+accesso da spazio utente quando il kernel ha già dei meccanismi e una semantica
+ben definita per utilizzare i descrittori di file.
+
+Se la vostra nuova chiamata di sistema :manpage:`xyzzy(2)` ritorna un nuovo
+descrittore di file, allora l'argomento *flags* dovrebbe includere un valore
+equivalente a ``O_CLOEXEC`` per i nuovi descrittori. Questo rende possibile,
+nello spazio utente, la chiusura della finestra temporale fra le chiamate a
+``xyzzy()`` e ``fcntl(fd, F_SETFD, FD_CLOEXEC)``, dove un inaspettato
+``fork()`` o ``execve()`` potrebbe trasferire il descrittore al programma
+eseguito (comunque, resistete alla tentazione di riutilizzare il valore di
+``O_CLOEXEC`` dato che è specifico dell'architettura e fa parte di una
+enumerazione di flag ``O_*`` già abbastanza affollata).
+
+Se la vostra nuova chiamata di sistema ritorna un nuovo descrittore di file,
+dovreste considerare che significato avrà l'uso delle chiamate di sistema
+della famiglia di :manpage:`poll(2)`. Rendere un descrittore di file pronto
+per la lettura o la scrittura è il tipico modo del kernel per notificare lo
+spazio utente circa un evento associato all'oggetto del kernel.
+
+Se la vostra nuova chiamata di sistema :manpage:`xyzzy(2)` ha un argomento
+che è il percorso ad un file::
+
+    int sys_xyzzy(const char __user *path, ..., unsigned int flags);
+
+dovreste anche considerare se non sia più appropriata una versione
+:manpage:`xyzzyat(2)`::
+
+    int sys_xyzzyat(int dfd, const char __user *path, ..., unsigned int flags);
+
+Questo permette più flessibilità su come lo spazio utente specificherà il file
+in questione; in particolare, permette allo spazio utente di richiedere la
+funzionalità su un descrittore di file già aperto utilizzando il *flag*
+``AT_EMPTY_PATH``, in pratica otterremmo gratuitamente l'operazione
+:manpage:`fxyzzy(3)`::
+
+    - xyzzyat(AT_FDCWD, path, ..., 0) is equivalent to xyzzy(path,...)
+    - xyzzyat(fd, "", ..., AT_EMPTY_PATH) is equivalent to fxyzzy(fd, ...)
+
+(Per maggiori dettagli sulla logica delle chiamate \*at(), leggete la pagina
+man :manpage:`openat(2)`; per un esempio di AT_EMPTY_PATH, leggete la pagina
+man :manpage:`fstatat(2)`).
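+
+A proposito dell'estensibilità tramite il campo *size* vista poco sopra: ecco
+uno schizzo minimale e puramente ipotetico di come un kernel potrebbe validare
+una simile struttura versionata proveniente dallo spazio utente. I nomi
+``xyzzy_copy_params()`` e ``struct xyzzy_params`` riprendono l'esempio di
+fantasia di questo documento; per del codice reale si veda la già citata
+``perf_copy_attr()``::
+
+    static int xyzzy_copy_params(struct xyzzy_params *kp,
+                                 const void __user *uparams, u32 usize)
+    {
+            u32 ksize = sizeof(*kp);
+
+            /* i campi che l'utente non fornisce restano a zero */
+            memset(kp, 0, ksize);
+
+            if (usize > ksize) {
+                    /* programma utente più recente del kernel: accettabile
+                     * solo se i byte in eccesso sono tutti zero */
+                    const char __user *p = (const char __user *)uparams + ksize;
+                    const char __user *end = (const char __user *)uparams + usize;
+                    char c;
+
+                    for (; p < end; p++) {
+                            if (get_user(c, p))
+                                    return -EFAULT;
+                            if (c)
+                                    return -E2BIG;
+                    }
+                    usize = ksize;
+            }
+
+            return copy_from_user(kp, uparams, usize) ? -EFAULT : 0;
+    }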
+
+Se la vostra nuova chiamata di sistema :manpage:`xyzzy(2)` prevede un parametro
+per descrivere uno scostamento all'interno di un file, usate ``loff_t`` come
+tipo cosicché scostamenti a 64-bit potranno essere supportati anche su
+architetture a 32-bit.
+
+Se la vostra nuova chiamata di sistema :manpage:`xyzzy(2)` prevede l'uso di
+funzioni riservate, allora dev'essere gestita da un opportuno bit di privilegio
+(verificato con una chiamata a ``capable()``), come descritto nella pagina man
+:manpage:`capabilities(7)`. Scegliete un bit di privilegio già esistente per
+gestire la funzionalità associata, ma evitate la combinazione di diverse
+funzionalità vagamente collegate dietro lo stesso bit, in quanto va contro il
+principio di *capabilities* di separare i poteri di root. In particolare,
+evitate di aggiungere nuovi usi al fin-troppo-generico privilegio
+``CAP_SYS_ADMIN``.
+
+Se la vostra nuova chiamata di sistema :manpage:`xyzzy(2)` manipola altri
+processi oltre a quello chiamante, allora dovrebbe essere limitata (usando
+la chiamata ``ptrace_may_access()``) di modo che solo un processo chiamante
+con gli stessi permessi del processo in oggetto, o con i necessari privilegi,
+possa manipolarlo.
+
+Infine, state attenti che in alcune architetture non-x86 la vita delle chiamate
+di sistema con argomenti a 64-bit viene semplificata se questi argomenti
+ricadono in posizioni dispari (in pratica, i parametri 1, 3, 5); questo
+permette l'uso di coppie contigue di registri a 32-bit. (Questo non conta se
+gli argomenti sono parte di una struttura dati che viene passata per
+puntatore).
+
+
+Proporre l'API
+--------------
+
+Al fine di rendere le nuove chiamate di sistema di facile revisione, è meglio
+che dividiate le modifiche in pezzi separati. Questi dovrebbero includere
+almeno le seguenti voci in *commit* distinti (ognuno dei quali sarà descritto
+più avanti):
+
+ - l'essenza dell'implementazione della chiamata di sistema, con i prototipi,
+   i numeri generici, le modifiche al Kconfig e l'implementazione *stub* di
+   ripiego.
+ - preparare la nuova chiamata di sistema per un'architettura specifica,
+   solitamente x86 (ovvero tutti: x86_64, x86_32 e x32).
+ - un programma di auto-verifica da mettere in ``tools/testing/selftests/``
+   che mostri l'uso della chiamata di sistema.
+ - una bozza di pagina man per la nuova chiamata di sistema. Può essere
+   scritta nell'email di presentazione, oppure come modifica vera e propria
+   al repositorio delle pagine man.
+
+Le proposte di nuove chiamate di sistema, come ogni altra modifica all'API del
+kernel, devono essere sottomesse alla lista di discussione
+linux-api@vger.kernel.org.
+
+
+Implementazione di chiamate di sistema generiche
+------------------------------------------------
+
+Il principale punto d'accesso alla vostra nuova chiamata di sistema
+:manpage:`xyzzy(2)` verrà chiamato ``sys_xyzzy()``; ma, piuttosto che in modo
+esplicito, lo aggiungerete tramite la macro ``SYSCALL_DEFINEn``. La 'n'
+indica il numero di argomenti della chiamata di sistema; la macro ha come
+argomento il nome della chiamata di sistema, seguito dalle coppie (tipo, nome)
+per definire i suoi parametri. L'uso di questa macro permette di avere
+i metadati della nuova chiamata di sistema disponibili anche per altri
+strumenti.
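+
+A titolo d'esempio, una ipotetica chiamata di sistema a tre argomenti potrebbe
+essere definita come segue (schizzo puramente illustrativo: il nome, gli
+argomenti e la maschera ``XYZZY_FLAG_ALL`` sono di fantasia)::
+
+    SYSCALL_DEFINE3(xyzzy, const char __user *, path,
+                    unsigned int, count, unsigned int, flags)
+    {
+            /* coppie (tipo, nome) separate da virgole, come descritto sopra;
+             * rifiutate subito i flag sconosciuti per l'estensibilità */
+            if (flags & ~XYZZY_FLAG_ALL)
+                    return -EINVAL;
+            /* ... implementazione vera e propria ... */
+            return 0;
+    }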
+
+Il nuovo punto d'accesso necessita anche del suo prototipo di funzione in
+``include/linux/syscalls.h``, marcato come asmlinkage in modo da abbinarlo
+al modo in cui quelle chiamate di sistema verranno invocate::
+
+    asmlinkage long sys_xyzzy(...);
+
+Alcune architetture (per esempio x86) hanno le loro specifiche tabelle di
+chiamate di sistema (syscall), ma molte altre architetture condividono una
+tabella comune di syscall. Aggiungete alla lista generica la vostra nuova
+chiamata di sistema aggiungendo un nuovo elemento alla lista in
+``include/uapi/asm-generic/unistd.h``::
+
+    #define __NR_xyzzy 292
+    __SYSCALL(__NR_xyzzy, sys_xyzzy)
+
+Aggiornate anche il contatore __NR_syscalls di modo che sia coerente con
+l'aggiunta delle nuove chiamate di sistema; va notato che se più di una nuova
+chiamata di sistema viene aggiunta nella stessa finestra di sviluppo, il numero
+della vostra nuova syscall potrebbe essere aggiustato al fine di risolvere i
+conflitti.
+
+Il file ``kernel/sys_ni.c`` fornisce le implementazioni *stub* di ripiego che
+ritornano ``-ENOSYS``. Aggiungete la vostra nuova chiamata di sistema anche
+qui::
+
+    COND_SYSCALL(xyzzy);
+
+La vostra nuova funzionalità del kernel, e la chiamata di sistema che la
+controlla, dovrebbero essere opzionali. Quindi, aggiungete un'opzione
+``CONFIG`` (solitamente in ``init/Kconfig``). Come al solito per le nuove
+opzioni ``CONFIG``:
+
+ - Includete una descrizione della nuova funzionalità e della chiamata di
+   sistema che la controlla.
+ - Rendete l'opzione dipendente da EXPERT se dev'essere nascosta agli utenti
+   normali.
+ - Nel Makefile, rendete tutti i nuovi file sorgenti, che implementano la
+   nuova funzionalità, dipendenti dall'opzione CONFIG (per esempio
+   ``obj-$(CONFIG_XYZZY_SYSCALL) += xyzzy.o``).
+ - Controllate due volte che sia possibile generare il kernel con la nuova
+   opzione CONFIG disabilitata.
+
+Per riassumere, vi serve un *commit* che includa:
+
+ - un'opzione ``CONFIG`` per la nuova funzione, normalmente in ``init/Kconfig``
+ - ``SYSCALL_DEFINEn(xyzzy, ...)`` per il punto d'accesso
+ - il corrispondente prototipo in ``include/linux/syscalls.h``
+ - un elemento nella tabella generica in ``include/uapi/asm-generic/unistd.h``
+ - lo *stub* di ripiego in ``kernel/sys_ni.c``
+
+
+Implementazione delle chiamate di sistema x86
+---------------------------------------------
+
+Per collegare la vostra nuova chiamata di sistema alle piattaforme x86,
+dovete aggiornare la tabella principale di syscall. Assumendo che la vostra
+nuova chiamata di sistema non sia particolarmente speciale (vedere sotto),
+dovete aggiungere un elemento *common* (per x86_64 e x32) in
+``arch/x86/entry/syscalls/syscall_64.tbl``::
+
+    333   common   xyzzy     sys_xyzzy
+
+e un elemento per *i386* in ``arch/x86/entry/syscalls/syscall_32.tbl``::
+
+    380   i386     xyzzy     sys_xyzzy
+
+Ancora una volta, questi numeri potrebbero essere cambiati se generano
+conflitti durante la finestra di integrazione.
+
+
+Chiamate di sistema compatibili (generico)
+------------------------------------------
+
+Per molte chiamate di sistema, la stessa implementazione a 64-bit può essere
+invocata anche quando il programma in spazio utente è a 32-bit; anche se la
+chiamata di sistema include esplicitamente un puntatore, questo viene gestito
+in modo trasparente.
+
+Tuttavia, ci sono un paio di situazioni dove diventa necessario avere un
+livello di gestione della compatibilità per risolvere le differenze di
+dimensioni fra 32-bit e 64-bit.
+
+Il primo caso è quando un kernel a 64-bit supporta anche programmi in spazio
+utente a 32-bit, perciò dovrà ispezionare aree della memoria (``__user``) che
+potrebbero contenere valori a 32-bit o a 64-bit. In particolar modo, questo
+è necessario quando un argomento di una chiamata di sistema è:
+
+ - un puntatore ad un puntatore
+ - un puntatore ad una struttura dati contenente a sua volta un puntatore
+   (ad esempio ``struct iovec __user *``)
+ - un puntatore ad un tipo intero di dimensione variabile (``time_t``,
+   ``off_t``, ``long``, ...)
+ - un puntatore ad una struttura dati contenente un tipo intero di dimensione
+   variabile.
+
+Il secondo caso che richiede un livello di gestione della compatibilità è
+quando uno degli argomenti di una chiamata di sistema è esplicitamente un tipo
+a 64-bit anche su architetture a 32-bit, per esempio ``loff_t`` o ``__u64``.
+In questo caso, un valore che arriva ad un kernel a 64-bit da un'applicazione
+a 32-bit verrà diviso in due valori a 32-bit che dovranno essere riassemblati
+in questo livello di compatibilità.
+
+(Da notare che non serve questo livello di compatibilità per argomenti che
+sono puntatori ad un tipo esplicitamente a 64-bit; per esempio, in
+:manpage:`splice(2)` l'argomento di tipo ``loff_t __user *`` non necessita
+di una chiamata di sistema ``compat_``)
+
+La versione compatibile della nostra chiamata di sistema si chiamerà
+``compat_sys_xyzzy()``, e viene aggiunta utilizzando la macro
+``COMPAT_SYSCALL_DEFINEn()`` (simile a SYSCALL_DEFINEn). Questa versione
+dell'implementazione è parte del kernel a 64-bit ma accetta parametri a 32-bit
+che trasformerà secondo le necessità (tipicamente, la versione
+``compat_sys_`` converte questi valori nel loro corrispondente a 64-bit e
+può chiamare la versione ``sys_`` oppure invocare una funzione che implementa
+le parti comuni).
+
+Il punto d'accesso *compat* deve avere il corrispondente prototipo di funzione
+in ``include/linux/compat.h``, marcato come asmlinkage in modo da abbinarlo
+al modo in cui quelle chiamate di sistema verranno invocate::
+
+    asmlinkage long compat_sys_xyzzy(...);
+
+Se la chiamata di sistema prevede una struttura dati organizzata in modo
+diverso per sistemi a 32-bit e per quelli a 64-bit, diciamo
+``struct xyzzy_args``, allora il file d'intestazione
+``include/linux/compat.h`` deve includere la sua versione
+*compatibile* (``struct compat_xyzzy_args``); ogni variabile con
+dimensione variabile deve avere il proprio tipo ``compat_`` corrispondente
+a quello in ``struct xyzzy_args``. La funzione ``compat_sys_xyzzy()``
+può usare la struttura ``compat_`` per analizzare gli argomenti ricevuti
+da una chiamata a 32-bit.
+
+Per esempio, se avete i seguenti campi::
+
+    struct xyzzy_args {
+            const char __user *ptr;
+            __kernel_long_t varying_val;
+            u64 fixed_val;
+            /* ... */
+    };
+
+nella struttura ``struct xyzzy_args``, allora la struttura
+``struct compat_xyzzy_args`` dovrebbe avere::
+
+    struct compat_xyzzy_args {
+            compat_uptr_t ptr;
+            compat_long_t varying_val;
+            u64 fixed_val;
+            /* ... */
+    };
+
+La lista generica delle chiamate di sistema ha bisogno di essere
+aggiustata al fine di permettere l'uso della versione *compatibile*;
+la voce in ``include/uapi/asm-generic/unistd.h`` dovrebbe usare
+``__SC_COMP`` piuttosto che ``__SYSCALL``::
+
+    #define __NR_xyzzy 292
+    __SC_COMP(__NR_xyzzy, sys_xyzzy, compat_sys_xyzzy)
+
+Riassumendo, vi serve:
+
+ - un ``COMPAT_SYSCALL_DEFINEn(xyzzy, ...)`` per il punto d'accesso
+   *compatibile*
+ - un prototipo in ``include/linux/compat.h``
+ - (se necessario) una struttura di compatibilità a 32-bit in
+   ``include/linux/compat.h``
+ - una voce ``__SC_COMP``, e non ``__SYSCALL``, in
+   ``include/uapi/asm-generic/unistd.h``
+
+Compatibilità delle chiamate di sistema (x86)
+---------------------------------------------
+
+Per collegare una chiamata di sistema, su un'architettura x86, con la sua
+versione *compatibile*, è necessario aggiustare la voce nella tabella
+delle syscall.
+
+Per prima cosa, la voce in ``arch/x86/entry/syscalls/syscall_32.tbl`` prende
+un argomento aggiuntivo per indicare che un programma in spazio utente
+a 32-bit, eseguito su un kernel a 64-bit, dovrebbe accedere tramite il punto
+d'accesso compatibile::
+
+    380   i386     xyzzy     sys_xyzzy    __ia32_compat_sys_xyzzy
+
+Secondo, dovete capire cosa dovrebbe succedere alla nuova chiamata di sistema
+per la versione dell'ABI x32. Qui c'è una scelta da fare: gli argomenti
+possono corrispondere alla versione a 64-bit o a quella a 32-bit.
+
+Se c'è un puntatore ad un puntatore, la decisione è semplice: x32 è ILP32,
+quindi gli argomenti dovrebbero corrispondere a quelli a 32-bit, e la voce in
+``arch/x86/entry/syscalls/syscall_64.tbl`` sarà divisa cosicché i programmi
+x32 eseguano la chiamata *compatibile*::
+
+    333   64       xyzzy     sys_xyzzy
+    ...
+    555   x32      xyzzy     __x32_compat_sys_xyzzy
+
+Se non ci sono puntatori, allora è preferibile riutilizzare la chiamata di
+sistema a 64-bit per l'ABI x32 (e di conseguenza la voce in
+``arch/x86/entry/syscalls/syscall_64.tbl`` rimane immutata).
+
+In ambo i casi, dovreste verificare che i tipi usati dagli argomenti
+abbiano un'esatta corrispondenza da x32 (-mx32) al loro equivalente a
+32-bit (-m32) o 64-bit (-m64).
+
+
+Chiamate di sistema che ritornano altrove
+-----------------------------------------
+
+Nella maggior parte delle chiamate di sistema, al termine della loro
+esecuzione, i programmi in spazio utente riprendono esattamente dal punto
+in cui si erano interrotti -- quindi dall'istruzione successiva, con lo
+stesso *stack* e con la maggior parte dei registri com'erano stati
+lasciati prima della chiamata di sistema, e anche con la stessa memoria
+virtuale.
+
+Tuttavia, alcune chiamate di sistema fanno le cose in modo differente.
+Potrebbero ritornare ad un punto diverso (``rt_sigreturn``) o cambiare
+la memoria in spazio utente (``fork``/``vfork``/``clone``) o perfino
+l'architettura del programma (``execve``/``execveat``).
+
+Per permettere tutto ciò, l'implementazione nel kernel di questo tipo di
+chiamate di sistema potrebbe dover salvare e ripristinare registri
+aggiuntivi nello *stack* del kernel, permettendo così un controllo completo
+su dove e come l'esecuzione dovrà continuare dopo l'esecuzione della
+chiamata di sistema.
+
+Queste saranno specifiche per ogni architettura, ma tipicamente si definiscono
+dei punti d'accesso in *assembly* per salvare/ripristinare i registri
+aggiuntivi e quindi chiamare il vero punto d'accesso per la chiamata di
+sistema.
+
+Per l'architettura x86_64, questo è implementato come un punto d'accesso
+``stub_xyzzy`` in ``arch/x86/entry/entry_64.S``, e la voce nella tabella
+di syscall (``arch/x86/entry/syscalls/syscall_64.tbl``) verrà corretta di
+conseguenza::
+
+    333   common   xyzzy     stub_xyzzy
+
+L'equivalente per programmi a 32-bit eseguiti su un kernel a 64-bit viene
+normalmente chiamato ``stub32_xyzzy`` e implementato in
+``arch/x86/entry/entry_64_compat.S`` con la corrispondente voce nella tabella
+di syscall ``arch/x86/entry/syscalls/syscall_32.tbl`` corretta nel
+seguente modo::
+
+    380   i386     xyzzy     sys_xyzzy    stub32_xyzzy
+
+Se una chiamata di sistema necessita di un livello di compatibilità (come
+nella sezione precedente), allora la versione ``stub32_`` deve invocare
+la versione ``compat_sys_`` piuttosto che quella nativa a 64-bit. In aggiunta,
+se l'implementazione dell'ABI x32 è diversa da quella x86_64, allora la sua
+voce nella tabella di syscall dovrà chiamare uno *stub* che invoca la versione
+``compat_sys_``.
+
+Per completezza, sarebbe carino impostare una mappatura cosicché
+*user-mode* Linux (UML) continui a funzionare -- la sua tabella di syscall
+farà riferimento a ``stub_xyzzy``, ma UML non include l'implementazione
+in ``arch/x86/entry/entry_64.S`` (perché UML simula i registri eccetera).
+Correggerlo è semplice, basta aggiungere una #define in
+``arch/x86/um/sys_call_table_64.c``::
+
+    #define stub_xyzzy sys_xyzzy
+
+
+Altri dettagli
+--------------
+
+La maggior parte del kernel tratta le chiamate di sistema allo stesso modo,
+ma possono esserci rare eccezioni per le quali potrebbe essere necessario
+un aggiornamento in funzione della vostra chiamata di sistema.
+
+Il sotto-sistema di controllo (*audit subsystem*) è uno di questi casi
+speciali; esso include (per architettura) funzioni che classificano alcuni
+tipi di chiamate di sistema -- in particolare apertura dei file
+(``open``/``openat``), esecuzione dei programmi (``execve``/``execveat``)
+oppure multiplatori di socket (``socketcall``). Se la vostra nuova chiamata
+di sistema è simile ad una di queste, allora il sistema di controllo dovrebbe
+essere aggiornato.
+
+Più in generale, se esiste una chiamata di sistema che è simile alla vostra,
+vale la pena cercare con ``grep``, in tutto il kernel, la chiamata di sistema
+esistente per verificare che non ci siano altri casi speciali.
+
+
+Verifica
+--------
+
+Una nuova chiamata di sistema dev'essere, ovviamente, provata; è utile fornire
+ai revisori un programma in spazio utente che mostri l'uso della chiamata di
+sistema. Un buon modo per combinare queste cose è quello di aggiungere un
+semplice programma di auto-verifica in una nuova cartella in
+``tools/testing/selftests/``.
+
+Per una nuova chiamata di sistema, ovviamente, non ci sarà alcuna funzione
+in libc e quindi il programma di verifica dovrà invocarla usando ``syscall()``;
+inoltre, se la nuova chiamata di sistema prevede una nuova struttura dati
+visibile in spazio utente, il file d'intestazione necessario dev'essere
+installato al fine di compilare il programma.
+
+Assicuratevi che il programma di auto-verifica possa essere eseguito
+correttamente su tutte le architetture supportate. Per esempio, verificate che
+funzioni quando viene compilato per x86_64 (-m64), x86_32 (-m32) e x32 (-mx32).
+
+Al fine di una più meticolosa ed estesa verifica della nuova funzionalità,
+dovreste considerare l'aggiunta di nuove verifiche al progetto *Linux Test*,
+oppure al progetto xfstests per cambiamenti relativi al filesystem.
+
+ - https://linux-test-project.github.io/
+ - git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
+
+
+Pagine man
+----------
+
+Tutte le nuove chiamate di sistema dovrebbero avere una pagina man completa,
+idealmente usando i marcatori groff, ma anche il puro testo può andare. Se
+state usando groff, è utile che includiate nella email di presentazione una
+versione già convertita in formato ASCII: semplificherà la vita dei revisori.
+
+Le pagine man dovrebbero essere inviate in copia conoscenza a
+linux-man@vger.kernel.org.
+Per maggiori dettagli, leggere
+https://www.kernel.org/doc/man-pages/patches.html
+
+
+Non invocate chiamate di sistema dal kernel
+-------------------------------------------
+
+Le chiamate di sistema sono, come già detto prima, punti di interazione fra
+lo spazio utente e il kernel. Perciò, le chiamate di sistema come
+``sys_xyzzy()`` o ``compat_sys_xyzzy()`` dovrebbero essere chiamate solo dallo
+spazio utente attraverso la tabella syscall, e non da nessun altro punto nel
+kernel. Se la nuova funzionalità è utile all'interno del kernel, per esempio
+dev'essere condivisa fra una vecchia e una nuova chiamata di sistema o
+dev'essere utilizzata da una chiamata di sistema e la sua variante compatibile,
+allora dev'essere implementata come una funzione di supporto
+(*helper function*) (per esempio ``kern_xyzzy()``). Questa funzione potrà
+essere chiamata dallo *stub* (``sys_xyzzy()``), dalla variante compatibile
+(``compat_sys_xyzzy()``), e/o da altre parti del kernel.
+
+Sui sistemi x86 a 64-bit, a partire dalla versione v4.17 è un requisito
+fondamentale quello di non invocare chiamate di sistema all'interno del kernel.
+Su di essi viene usata una diversa convenzione per l'invocazione delle
+chiamate di sistema, dove ``struct pt_regs`` viene decodificata al volo da una
+funzione *wrapper* che passa poi il controllo alla vera chiamata di sistema.
+Questo significa che verranno passati solo i parametri che sono davvero
+necessari ad una specifica chiamata di sistema, invece di riempire ogni volta
+6 registri del processore con contenuti presi dallo spazio utente (il che
+potrebbe causare seri problemi lungo la sequenza di chiamate).
+
+Inoltre, le regole su come i dati possano essere usati potrebbero differire
+fra i dati del kernel e quelli dell'utente. Questo è un altro motivo per cui
+invocare ``sys_xyzzy()`` è generalmente una brutta idea.
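+
+Riassumendo il modello con la funzione di supporto, ecco uno schizzo puramente
+ipotetico (la gestione del descrittore di file serve solo da illustrazione)::
+
+    /* logica condivisa: chiamabile da sys_xyzzy(), compat_sys_xyzzy()
+     * e/o da altre parti del kernel */
+    static long kern_xyzzy(struct file *file, unsigned long arg)
+    {
+            /* ... implementazione vera e propria ... */
+            return 0;
+    }
+
+    SYSCALL_DEFINE2(xyzzy, unsigned int, fd, unsigned long, arg)
+    {
+            struct fd f = fdget(fd);
+            long ret;
+
+            if (!f.file)
+                    return -EBADF;
+            ret = kern_xyzzy(f.file, arg);
+            fdput(f);
+            return ret;
+    }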
+
+Eccezioni a questa regola vengono accettate solo per funzioni specifiche
+dell'architettura che sovrascrivono quelle generiche, per funzioni di
+compatibilità specifiche dell'architettura, o per altro codice in arch/
+
+
+Riferimenti e fonti
+-------------------
+
+ - Articolo di Michael Kerrisk su LWN sull'uso dell'argomento flags nelle
+   chiamate di sistema: https://lwn.net/Articles/585415/
+ - Articolo di Michael Kerrisk su LWN su come gestire flag sconosciuti in
+   una chiamata di sistema: https://lwn.net/Articles/588444/
+ - Articolo di Jake Edge su LWN che descrive i limiti degli argomenti a 64-bit
+   delle chiamate di sistema: https://lwn.net/Articles/311630/
+ - Una coppia di articoli di David Drysdale che descrivono i dettagli del
+   percorso implementativo di una chiamata di sistema per la versione v3.14:
+
+   - https://lwn.net/Articles/604287/
+   - https://lwn.net/Articles/604515/
+
+ - Requisiti specifici delle architetture sono discussi nella pagina man
+   :manpage:`syscall(2)` :
+   http://man7.org/linux/man-pages/man2/syscall.2.html#NOTES
+ - Collezione di email di Linus Torvalds sui problemi relativi a ``ioctl()``:
+   http://yarchive.net/comp/linux/ioctl.html
+ - "Come non inventare interfacce del kernel", Arnd Bergmann,
+   http://www.ukuug.org/events/linux2007/2007/papers/Bergmann.pdf
+ - Articolo di Michael Kerrisk su LWN sull'evitare nuovi usi di CAP_SYS_ADMIN:
+   https://lwn.net/Articles/486306/
+ - Raccomandazioni da Andrew Morton circa il fatto che tutte le informazioni
+   su una nuova chiamata di sistema dovrebbero essere contenute nello stesso
+   filone di discussione di email: https://lkml.org/lkml/2014/7/24/641
+ - Raccomandazioni da Michael Kerrisk circa il fatto che le nuove chiamate di
+   sistema dovrebbero avere una pagina man: https://lkml.org/lkml/2014/6/13/309
+ - Consigli da Thomas Gleixner sul fatto che il collegamento all'architettura
+   x86 dovrebbe avvenire in un *commit* differente:
+   https://lkml.org/lkml/2014/11/19/254
+ - Consigli da Greg Kroah-Hartman circa la bontà d'avere una pagina man e un
+   programma di auto-verifica per le nuove chiamate di sistema:
+   https://lkml.org/lkml/2014/3/19/710
+ - Discussione di Michael Kerrisk sulle nuove chiamate di sistema contro
+   le estensioni :manpage:`prctl(2)`: https://lkml.org/lkml/2014/6/3/411
+ - Consigli da Ingo Molnar sul fatto che le chiamate di sistema con più
+   argomenti dovrebbero incapsularli in una struttura che includa un argomento
+   *size* per garantire l'estensibilità futura:
+   https://lkml.org/lkml/2015/7/30/117
+ - Un certo numero di casi strani emersi dall'uso (riuso) dei flag O_*:
+
+   - commit 75069f2b5bfb ("vfs: renumber FMODE_NONOTIFY and add to uniqueness
+     check")
+   - commit 12ed2e36c98a ("fanotify: FMODE_NONOTIFY and __O_SYNC in sparc
+     conflict")
+   - commit bb458c644a59 ("Safer ABI for O_TMPFILE")
+
+ - Discussione di Matthew Wilcox sulle restrizioni degli argomenti a 64-bit:
+   https://lkml.org/lkml/2008/12/12/187
+ - Raccomandazioni da Greg Kroah-Hartman sul fatto che i flag sconosciuti
+   dovrebbero essere controllati: https://lkml.org/lkml/2014/7/17/577
+ - Raccomandazioni da Linus Torvalds circa il fatto che le chiamate di sistema
+   x32 dovrebbero favorire la compatibilità con le versioni a 64-bit piuttosto
+   che con quelle a 32-bit: https://lkml.org/lkml/2011/8/31/244
diff --git a/Documentation/translations/it_IT/process/applying-patches.rst b/Documentation/translations/it_IT/process/applying-patches.rst
new file mode 100644
index 000000000000..f5e9c7d0b16d
--- /dev/null
+++ b/Documentation/translations/it_IT/process/applying-patches.rst
@@ -0,0 +1,13 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/applying-patches.rst `
+
+
+.. _it_applying_patches:
+
+Applicare modifiche al kernel Linux
+===================================
+
+.. warning::
+
+    TODO ancora da tradurre
diff --git a/Documentation/translations/it_IT/process/changes.rst b/Documentation/translations/it_IT/process/changes.rst
new file mode 100644
index 000000000000..956cf95a1214
--- /dev/null
+++ b/Documentation/translations/it_IT/process/changes.rst
@@ -0,0 +1,12 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/changes.rst `
+
+.. _it_changes:
+
+Requisiti minimi per compilare il kernel
+++++++++++++++++++++++++++++++++++++++++
+
+.. warning::
+
+    TODO ancora da tradurre
diff --git a/Documentation/translations/it_IT/process/clang-format.rst b/Documentation/translations/it_IT/process/clang-format.rst
new file mode 100644
index 000000000000..77eac809a639
--- /dev/null
+++ b/Documentation/translations/it_IT/process/clang-format.rst
@@ -0,0 +1,197 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/clang-format.rst `
+:Translator: Federico Vaga
+
+.. _it_clangformat:
+
+clang-format
+============
+``clang-format`` è uno strumento per formattare codice C/C++/... secondo
+un gruppo di regole ed euristiche. Come tutti gli strumenti, non è perfetto
+e non copre tutti i singoli casi, ma è abbastanza buono per essere utile.
+
+``clang-format`` può essere usato per diversi fini:
+
+ - Per riformattare rapidamente un blocco di codice secondo lo stile del
+   kernel. Particolarmente utile quando si sposta del codice e lo si
+   allinea/ordina. Vedere it_clangformatreformat_.
+
+ - Identificare errori di stile, refusi e possibili miglioramenti nei
+   file che mantenete, nelle modifiche che revisionate, nelle differenze,
+   eccetera. Vedere it_clangformatreview_.
+
+ - Vi aiuta a seguire lo stile del codice, particolarmente utile per i
+   nuovi arrivati o per coloro che lavorano allo stesso tempo su diversi
+   progetti con stili di codifica differenti.
+
+Il suo file di configurazione è ``.clang-format`` e si trova nella cartella
+principale dei sorgenti del kernel. Le regole scritte in quel file tentano
+di approssimare lo stile di codifica del kernel. Si tenta anche di seguire
+il più possibile
+:ref:`Documentation/translations/it_IT/process/coding-style.rst `.
+Dato che non tutto il kernel segue lo stesso stile, potreste voler aggiustare
+le regole di base per un particolare sottosistema o cartella. Per farlo,
+potete sovrascriverle scrivendole in un altro file ``.clang-format`` in
+una sottocartella.
+
+Questo strumento è già stato incluso da molto tempo nelle distribuzioni
+Linux più popolari. Cercate ``clang-format`` nel vostro repositorio.
+Altrimenti, potete scaricare una versione pre-generata dei binari di LLVM/clang
+oppure generarlo dai codici sorgenti:
+
+    http://releases.llvm.org/download.html
+
+Troverete più informazioni ai seguenti indirizzi:
+
+    https://clang.llvm.org/docs/ClangFormat.html
+
+    https://clang.llvm.org/docs/ClangFormatStyleOptions.html
+
+
+.. _it_clangformatreview:
+
+Revisionare lo stile di codifica per file e modifiche
+-----------------------------------------------------
+
+Eseguendo questo programma, potrete revisionare un intero sottosistema,
+una cartella o singoli file alla ricerca di errori di stile, refusi o
+miglioramenti.
+
+Per farlo, potete eseguire qualcosa del genere::
+
+    # Make sure your working directory is clean!
+    clang-format -i kernel/*.[ch]
+
+E poi date un'occhiata a *git diff*.
+
+Osservare le righe di questo diff è utile a migliorare/aggiustare
+le opzioni di stile nel file di configurazione; così come per verificare
+le nuove funzionalità/versioni di ``clang-format``.
+
+``clang-format`` è in grado di leggere diversi diff unificati, quindi
+potrete revisionare facilmente delle modifiche e dei *git diff*.
+La documentazione si trova al seguente indirizzo:
+
+    https://clang.llvm.org/docs/ClangFormat.html#script-for-patch-reformatting
+
+Per evitare che ``clang-format`` formatti alcune parti di un file, potete
+scrivere nel codice::
+
+    int formatted_code;
+    // clang-format off
+    void unformatted_code ;
+    // clang-format on
+    void formatted_code_again;
+
+Nonostante sia attraente l'idea di utilizzarlo per mantenere un file
+sempre in sintonia con ``clang-format``, specialmente per file nuovi o
+se siete un manutentore, ricordatevi che altre persone potrebbero usare
+una versione diversa di ``clang-format`` oppure non utilizzarlo del tutto.
+Quindi, dovreste trattenervi dall'usare questi marcatori nel codice del
+kernel; almeno finché non vediamo che ``clang-format`` è diventato largamente
+utilizzato.
+
+
+.. _it_clangformatreformat:
+
+Riformattare blocchi di codice
+------------------------------
+
+Utilizzando dei plugin per il vostro editor, potete riformattare un
+blocco (selezione) di codice con una singola combinazione di tasti.
+Questo è particolarmente utile: quando si riorganizza il codice, per codice
+complesso, macro multi-riga (e allineare le loro "barre"), eccetera.
+
+Ricordatevi che potete sempre aggiustare le modifiche in quei casi dove
+questo strumento non ha fatto un buon lavoro. Ma come prima approssimazione,
+può essere davvero molto utile.
+
+Questo programma si integra con molti dei più popolari editor. Alcuni di
+essi come vim, emacs, BBEdit, Visual Studio, lo supportano direttamente.
+Al seguente indirizzo troverete le istruzioni:
+
+    https://clang.llvm.org/docs/ClangFormat.html
+
+Per Atom, Eclipse, Sublime Text, Visual Studio Code, XCode e altri editor
+e IDE dovreste essere in grado di trovare dei plugin pronti all'uso.
+
+Per questo caso d'uso, considerate l'uso di un secondo ``.clang-format``
+che potete personalizzare con le vostre opzioni.
+Consultare it_clangformatextra_.
+
+
+.. _it_clangformatmissing:
+
+Cose non supportate
+-------------------
+
+``clang-format`` non ha il supporto per alcune cose che sono comuni nel
+codice del kernel. Sono facili da ricordare; quindi, se lo usate
+regolarmente, imparerete rapidamente a evitare/ignorare certi problemi.
+
+In particolare, quelli più comuni che noterete sono:
+
+  - Allineamento di ``#define`` su una singola riga, per esempio::
+
+        #define TRACING_MAP_BITS_DEFAULT       11
+        #define TRACING_MAP_BITS_MAX           17
+        #define TRACING_MAP_BITS_MIN           7
+
+    contro::
+
+        #define TRACING_MAP_BITS_DEFAULT 11
+        #define TRACING_MAP_BITS_MAX 17
+        #define TRACING_MAP_BITS_MIN 7
+
+  - Allineamento dei valori iniziali, per esempio::
+
+        static const struct file_operations uprobe_events_ops = {
+                .owner          = THIS_MODULE,
+                .open           = probes_open,
+                .read           = seq_read,
+                .llseek         = seq_lseek,
+                .release        = seq_release,
+                .write          = probes_write,
+        };
+
+    contro::
+
+        static const struct file_operations uprobe_events_ops = {
+                .owner = THIS_MODULE,
+                .open = probes_open,
+                .read = seq_read,
+                .llseek = seq_lseek,
+                .release = seq_release,
+                .write = probes_write,
+        };
+
+.. _it_clangformatextra:
+
+Funzionalità e opzioni aggiuntive
+---------------------------------
+
+Al fine di minimizzare le differenze fra il codice attuale e l'output
+del programma, alcune opzioni di stile e funzionalità non sono abilitate
+nella configurazione base. In altre parole, lo scopo è di rendere le
+differenze le più piccole possibili, permettendo la semplificazione
+della revisione di file, differenze e modifiche.
+
+In altri casi (per esempio un particolare sottosistema/cartella/file), lo
+stile del kernel potrebbe essere diverso e abilitare alcune di queste
+opzioni potrebbe dare risultati migliori.
+
+Per esempio:
+
+  - Allineare assegnamenti (``AlignConsecutiveAssignments``).
+
+  - Allineare dichiarazioni (``AlignConsecutiveDeclarations``).
+
+  - Riorganizzare il testo nei commenti (``ReflowComments``).
+
+  - Ordinare gli ``#include`` (``SortIncludes``).
+
+Piuttosto che per interi file, solitamente sono utili per la riformattazione
+di singoli blocchi. In alternativa, potete creare un altro file
+``.clang-format`` da utilizzare con il vostro editor/IDE.
diff --git a/Documentation/translations/it_IT/process/code-of-conduct.rst b/Documentation/translations/it_IT/process/code-of-conduct.rst
new file mode 100644
index 000000000000..7dbd7f55f37c
--- /dev/null
+++ b/Documentation/translations/it_IT/process/code-of-conduct.rst
@@ -0,0 +1,12 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/code-of-conduct.rst `
+
+.. _it_code_of_conduct:
+
+Accordo dei contributori sul codice di condotta
++++++++++++++++++++++++++++++++++++++++++++++++
+
+.. warning::
+
+    TODO ancora da tradurre
diff --git a/Documentation/translations/it_IT/process/coding-style.rst b/Documentation/translations/it_IT/process/coding-style.rst
new file mode 100644
index 000000000000..b707bdbe178c
--- /dev/null
+++ b/Documentation/translations/it_IT/process/coding-style.rst
@@ -0,0 +1,1094 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/coding-style.rst `
+:Translator: Federico Vaga
+
+.. _it_codingstyle:
+
+Stile del codice per il kernel Linux
+====================================
+
+Questo è un breve documento che descrive lo stile di codice preferito per
+il kernel Linux. Lo stile di codifica è molto personale e non voglio
+**forzare** nessuno ad accettare il mio, ma questo stile è quello che
+dev'essere usato per qualsiasi cosa che io sia in grado di mantenere, e l'ho
+preferito anche per molte altre cose. Per favore, almeno tenete in
+considerazione le osservazioni espresse qui.
+
+La prima cosa che suggerisco è quella di stamparsi una copia degli standard
+di codifica GNU e di NON leggerla. Bruciatela, è un grande gesto simbolico.
+
+Comunque, ecco i punti:
+
+1) Indentazione
+---------------
+
+La tabulazione (tab) è di 8 caratteri e così anche le indentazioni. Ci sono
+alcuni movimenti di eretici che vorrebbero l'indentazione a 4 (o perfino 2!)
+caratteri di profondità, che è simile al tentativo di definire il valore del
+pi-greco a 3.
+
+Motivazione: l'idea dell'indentazione è di definire chiaramente dove un blocco
+di controllo inizia e finisce. Specialmente quando siete rimasti a guardare lo
+schermo per 20 ore di fila, troverete molto più facile capire i livelli di
+indentazione se questi sono larghi.
+
+Ora, alcuni rivendicano che un'indentazione da 8 caratteri sposta il codice
+troppo a destra e che quindi rende difficile la lettura su schermi a 80
+caratteri.
+La risposta a questa affermazione è che se vi servono più di 3 livelli di
+indentazione, siete comunque fregati e dovreste correggere il vostro
+programma.
+
+In breve, l'indentazione ad 8 caratteri rende più facile la lettura, e in
+aggiunta vi avvisa quando state annidando troppo le vostre funzioni.
+Tenete ben a mente questo avviso.
+
+Al fine di facilitare l'indentazione del costrutto switch, si preferisce
+allineare sulla stessa colonna la parola chiave ``switch`` e i suoi
+subordinati ``case``. In questo modo si evita una doppia indentazione per
+i ``case``. Un esempio:
+
+.. code-block:: c
+
+	switch (suffix) {
+	case 'G':
+	case 'g':
+		mem <<= 30;
+		break;
+	case 'M':
+	case 'm':
+		mem <<= 20;
+		break;
+	case 'K':
+	case 'k':
+		mem <<= 10;
+		/* fall through */
+	default:
+		break;
+	}
+
+A meno che non vogliate nascondere qualcosa, non mettete più istruzioni sulla
+stessa riga:
+
+.. code-block:: c
+
+	if (condition) do_this;
+	do_something_everytime;
+
+né mettete più assegnamenti sulla stessa riga. Lo stile del kernel
+è ultrasemplice. Evitate espressioni intricate.
+
+Al di fuori dei commenti, della documentazione ed escludendo i Kconfig, gli
+spazi non vengono mai usati per l'indentazione, e l'esempio qui sopra è
+volutamente errato.
+
+Procuratevi un buon editor di testo e non lasciate spazi bianchi alla fine
+delle righe.
+
+
+2) Spezzare righe lunghe e stringhe
+-----------------------------------
+
+Lo stile del codice riguarda la leggibilità e la manutenibilità utilizzando
+strumenti comuni.
+
+Il limite delle righe è di 80 colonne e questo è un limite fortemente
+desiderato.
+
+Espressioni più lunghe di 80 colonne saranno spezzettate in pezzi più piccoli,
+a meno che eccedere le 80 colonne non aiuti ad aumentare la leggibilità senza
+nascondere informazioni. I pezzi derivati sono sostanzialmente più corti degli
+originali e vengono posizionati più a destra. Lo stesso si applica, nei file
+d'intestazione, alle funzioni con una lista di argomenti molto lunga. Tuttavia,
+non spezzettate mai le stringhe visibili agli utenti come i messaggi di
+printk, questo perché inibireste la possibilità d'utilizzare grep per cercarle.
+
+3) Posizionamento di parentesi graffe e spazi
+---------------------------------------------
+
+Un altro problema che s'affronta sempre quando si parla di stile in C è
+il posizionamento delle parentesi graffe. Al contrario della dimensione
+dell'indentazione, non ci sono motivi tecnici sulla base dei quali scegliere
+una strategia di posizionamento o un'altra; ma il modo qui preferito,
+come mostratoci dai profeti Kernighan e Ritchie, è quello di
+posizionare la parentesi graffa di apertura per ultima sulla riga, e quella
+di chiusura per prima su una nuova riga, così:
+
+.. code-block:: c
+
+	if (x is true) {
+		we do y
+	}
+
+Questo è valido per tutte le espressioni che non siano funzioni (if, switch,
+for, while, do). Per esempio:
+
+.. code-block:: c
+
+	switch (action) {
+	case KOBJ_ADD:
+		return "add";
+	case KOBJ_REMOVE:
+		return "remove";
+	case KOBJ_CHANGE:
+		return "change";
+	default:
+		return NULL;
+	}
+
+Tuttavia, c'è il caso speciale, le funzioni: queste hanno la parentesi graffa
+di apertura all'inizio della riga successiva, quindi:
+
+.. code-block:: c
+
+	int function(int x)
+	{
+		body of function
+	}
+
+Eretici da tutto il mondo affermano che questa incoerenza è ...
+insomma ... incoerente, ma tutte le persone ragionevoli sanno che (a)
+K&R hanno **ragione** e (b) K&R hanno ragione.
+A parte questo, le funzioni sono comunque speciali (non potete annidarle
+in C).
+
+Notate che la graffa di chiusura è da sola su una riga propria, ad
+**eccezione** di quei casi dove è seguita dalla continuazione della stessa
+espressione, in pratica ``while`` nell'espressione do-while, oppure ``else``
+nell'espressione if-else, come questo:
+
+.. code-block:: c
+
+	do {
+		body of do-loop
+	} while (condition);
+
+e
+
+.. code-block:: c
+
+	if (x == y) {
+		..
+	} else if (x > y) {
+		...
+	} else {
+		....
+	}
+
+Motivazione: K&R.
+
+Inoltre, notate che questo posizionamento delle graffe minimizza il numero
+di righe vuote senza perdere di leggibilità. In questo modo, dato che le
+righe sul vostro schermo non sono una risorsa illimitata (pensate ad un
+terminale con 25 righe), avrete delle righe vuote da riempire con dei
+commenti.
+
+Non usate inutilmente le graffe dove una singola espressione è sufficiente.
+
+.. code-block:: c
+
+	if (condition)
+		action();
+
+e
+
+.. code-block:: none
+
+	if (condition)
+		do_this();
+	else
+		do_that();
+
+Questo non vale nel caso in cui solo un ramo dell'espressione if-else
+contiene una sola espressione; in quest'ultimo caso usate le graffe per
+entrambi i rami:
+
+.. code-block:: c
+
+	if (condition) {
+		do_this();
+		do_that();
+	} else {
+		otherwise();
+	}
+
+Inoltre, usate le graffe se un ciclo contiene più di una semplice istruzione:
+
+.. code-block:: c
+
+	while (condition) {
+		if (test)
+			do_something();
+	}
+
+3.1) Spazi
+**********
+
+Lo stile del kernel Linux per quanto riguarda gli spazi dipende
+(principalmente) dalle funzioni e dalle parole chiave. Usate uno spazio dopo
+(quasi tutte) le parole chiave. Le eccezioni più evidenti sono sizeof, typeof,
+alignof, e __attribute__, il cui aspetto è molto simile a quello delle
+funzioni (e in Linux, solitamente, sono usate con le parentesi, anche se il
+linguaggio non lo richiede; come ``sizeof info`` dopo aver dichiarato
+``struct fileinfo info``).
+
+Quindi utilizzate uno spazio dopo le seguenti parole chiave::
+
+	if, switch, case, for, do, while
+
+ma non con sizeof, typeof, alignof, o __attribute__. Ad esempio,
+
+.. code-block:: c
+
+
+	s = sizeof(struct file);
+
+Non aggiungete spazi attorno (dentro) ad un'espressione fra parentesi. Questo
+esempio è **brutto**:
+
+.. code-block:: c
+
+
+	s = sizeof( struct file );
+
+Quando dichiarate un puntatore ad una variabile o una funzione che ritorna un
+puntatore, il posto suggerito per l'asterisco ``*`` è adiacente al nome della
+variabile o della funzione, e non adiacente al nome del tipo. Esempi:
+
+.. code-block:: c
+
+
+	char *linux_banner;
+	unsigned long long memparse(char *ptr, char **retptr);
+	char *match_strdup(substring_t *s);
+
+Usate uno spazio attorno (da ogni parte) alla maggior parte degli operatori
+binari o ternari, come i seguenti::
+
+	=  +  -  <  >  *  /  %  |  &  ^  <=  >=  ==  !=  ?  :
+
+ma non mettete spazi dopo gli operatori unari::
+
+	&  *  +  -  ~  !  sizeof  typeof  alignof  __attribute__  defined
+
+nessuno spazio prima dell'operatore unario suffisso di incremento o
+decremento::
+
+	++  --
+
+nessuno spazio dopo l'operatore unario prefisso di incremento o decremento::
+
+	++  --
+
+e nessuno spazio attorno agli operatori dei membri di una struttura ``.`` e
+``->``.
+
+Non lasciate spazi bianchi alla fine delle righe. Alcuni editor con
+l'indentazione ``furba`` inseriranno gli spazi bianchi all'inizio di una nuova
+riga in modo appropriato, quindi potrete scrivere la riga di codice successiva
+immediatamente.
+Tuttavia, alcuni di questi stessi editor non rimuovono
+questi spazi bianchi quando non scrivete nulla sulla nuova riga, ad esempio
+perché volete lasciare una riga vuota. Il risultato è che finirete per avere
+delle righe che contengono spazi bianchi in coda.
+
+Git vi avviserà delle modifiche che aggiungono questi spazi vuoti di fine riga,
+e può opzionalmente rimuoverli per conto vostro; tuttavia, se state applicando
+una serie di modifiche, questo potrebbe far fallire delle modifiche successive
+perché il contesto delle righe verrà cambiato.
+
+4) Assegnare nomi
+-----------------
+
+C è un linguaggio spartano, e così dovrebbero esserlo i vostri nomi. Al
+contrario dei programmatori Modula-2 o Pascal, i programmatori C non usano
+nomi graziosi come ThisVariableIsATemporaryCounter. Un programmatore C
+chiamerebbe questa variabile ``tmp``, che è molto più facile da scrivere e
+non è una delle più difficili da capire.
+
+TUTTAVIA, nonostante i nomi con notazione mista siano da condannare, i nomi
+descrittivi per variabili globali sono un dovere. Chiamare una funzione
+globale ``pippo`` è un insulto.
+
+Le variabili GLOBALI (da usare solo se vi servono **davvero**) devono avere
+dei nomi descrittivi, così come le funzioni globali. Se avete una funzione
+che conta gli utenti attivi, dovreste chiamarla ``count_active_users()`` o
+qualcosa di simile, **non** dovreste chiamarla ``cntusr()``.
+
+Codificare il tipo di funzione nel suo nome (quella cosa chiamata notazione
+ungherese) fa male al cervello - il compilatore conosce comunque i tipi e
+può verificarli, e inoltre confonde i programmatori. Non c'è da
+sorprendersi che MicroSoft faccia programmi bacati.
+
+Le variabili LOCALI dovrebbero avere nomi corti, e significativi. Se avete
+un qualsiasi contatore di ciclo, probabilmente sarà chiamato ``i``.
+Chiamarlo ``loop_counter`` non è produttivo, non ci sono possibilità che
+``i`` possa non essere capito. Analogamente, ``tmp`` può essere una qualsiasi
+variabile che viene usata per salvare temporaneamente un valore.
+
+Se avete paura di fare casino coi nomi delle vostre variabili locali, allora
+avete un altro problema che è chiamato sindrome dello squilibrio dell'ormone
+della crescita delle funzioni. Vedere il capitolo 6 (funzioni).
+
+5) Definizione di tipi (typedef)
+--------------------------------
+
+Per favore non usate cose come ``vps_t``.
+Usare il typedef per strutture e puntatori è uno **sbaglio**. Quando vedete:
+
+.. code-block:: c
+
+	vps_t a;
+
+nei sorgenti, cosa significa?
+Se, invece, dicesse:
+
+.. code-block:: c
+
+	struct virtual_container *a;
+
+potreste dire cos'è effettivamente ``a``.
+
+Molte persone pensano che la definizione dei tipi ``migliori la leggibilità``.
+Non molto. Sono utili per:
+
+ (a) gli oggetti completamente opachi (dove typedef viene proprio usato allo
+     scopo di **nascondere** cosa sia davvero l'oggetto).
+
+     Esempio: ``pte_t`` eccetera sono oggetti opachi che potete usare solamente
+     con le loro funzioni accessorie.
+
+     .. note::
+       Gli oggetti opachi e le ``funzioni accessorie`` non sono, di per sé,
+       una bella cosa. Il motivo per cui abbiamo cose come pte_t eccetera è
+       che davvero non c'è alcuna informazione portabile.
+
+ (b) i tipi chiaramente interi, dove l'astrazione **aiuta** ad evitare
+     confusione sul fatto che siano ``int`` oppure ``long``.
+
+     u8/u16/u32 sono typedef perfettamente accettabili, anche se ricadono
+     nella categoria (d) piuttosto che in questa.
+
+     .. note::
+
+       Ancora - dev'esserci una **ragione** per farlo.
+       Se qualcosa è ``unsigned long``, non c'è alcun bisogno di avere:
+
+	typedef unsigned long myflags_t;
+
+     ma se ci sono chiare circostanze in cui potrebbe essere ``unsigned int``
+     e in altre configurazioni ``unsigned long``, allora certamente typedef
+     è una buona scelta.
+
+ (c) quando di rado create letteralmente dei **nuovi** tipi su cui effettuare
+     verifiche.
+
+ (d) circostanze eccezionali, in cui si definiscono nuovi tipi identici a
+     quelli definiti dallo standard C99.
+
+     Nonostante ci voglia poco tempo per abituare occhi e cervello all'uso dei
+     tipi standard come ``uint32_t``, alcune persone ne obiettano l'uso.
+
+     Perciò, i tipi specifici di Linux ``u8/u16/u32/u64`` e i loro equivalenti
+     con segno, identici ai tipi standard, sono permessi; tuttavia, non sono
+     obbligatori per il nuovo codice.
+
+ (e) i tipi sicuri nello spazio utente.
+
+     In alcune strutture dati visibili dallo spazio utente non possiamo
+     richiedere l'uso dei tipi C99 e nemmeno i vari ``u32`` descritti prima.
+     Perciò, utilizziamo __u32 e tipi simili in tutte le strutture dati
+     condivise con lo spazio utente.
+
+Magari ci sono altri casi validi, ma la regola di base dovrebbe essere di
+non usare MAI MAI un typedef a meno che non rientri in una delle regole
+descritte qui.
+
+In generale, un puntatore, o una struttura a cui si ha accesso diretto in
+modo ragionevole, non dovrebbero **mai** essere definite con un typedef.
+
+6) Funzioni
+-----------
+
+Le funzioni dovrebbero essere brevi e carine, e fare una cosa sola. Dovrebbero
+occupare uno o due schermi di testo (come tutti sappiamo, la dimensione
+di uno schermo secondo ISO/ANSI è di 80x24), e fare una cosa sola e bene.
+
+La massima lunghezza di una funzione è inversamente proporzionale alla sua
+complessità e al livello di indentazione di quella funzione. Quindi, se avete
+una funzione che è concettualmente semplice ma che è implementata come una
+lunga (ma semplice) sequenza di caso-istruzione, dove avete molte piccole cose
+per molti casi differenti, allora va bene avere funzioni più lunghe.
+
+Comunque, se avete una funzione complessa e sospettate che uno studente
+non particolarmente dotato del primo anno delle scuole superiori potrebbe
+non capire cosa faccia la funzione, allora dovreste attenervi strettamente ai
+limiti. Usate funzioni di supporto con nomi descrittivi (potete chiedere al
+compilatore di renderle inline se credete che sia necessario per le
+prestazioni, e probabilmente farà un lavoro migliore di quanto avreste potuto
+fare voi).
+
+Un'altra misura delle funzioni è il numero di variabili locali. Non
+dovrebbero eccedere le 5-10, oppure state sbagliando qualcosa. Ripensate la
+funzione, e dividetela in pezzettini. Generalmente, un cervello umano può
+seguire facilmente circa 7 cose diverse, di più lo confonderebbe. Sapete
+d'essere brillanti, ma magari vorreste riuscire a capire cosa avevate fatto
+due settimane prima.
+
+Nei file sorgenti, separate le funzioni con una riga vuota. Se la funzione è
+esportata, la macro **EXPORT** per questa funzione deve seguire immediatamente
+la riga della parentesi graffa di chiusura. Ad esempio:
+
+.. code-block:: c
+
+	int system_is_up(void)
+	{
+		return system_state == SYSTEM_RUNNING;
+	}
+	EXPORT_SYMBOL(system_is_up);
+
+Nei prototipi di funzione, includete i nomi dei parametri e i loro tipi.
+Nonostante questo non sia richiesto dal linguaggio C, in Linux viene preferito
+perché è un modo semplice per aggiungere informazioni importanti per il
+lettore.
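+
+A titolo illustrativo, ecco uno schizzo minimale che mostra la differenza
+(la funzione ``calculate_checksum()`` e i suoi parametri sono nomi
+puramente ipotetici, non presi dai sorgenti del kernel):
+
+.. code-block:: c
+
+	/* Preferito: tipi e nomi dei parametri nel prototipo */
+	int calculate_checksum(const unsigned char *data, size_t len);
+
+	/* Da evitare: coi soli tipi il lettore deve cercare i nomi altrove */
+	int calculate_checksum(const unsigned char *, size_t);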
+
+7) Centralizzare il ritorno delle funzioni
+------------------------------------------
+
+Sebbene sia deprecato da molte persone, l'equivalente dell'istruzione goto è
+impiegato di frequente dai compilatori sotto forma di istruzione di salto
+incondizionato.
+
+L'istruzione goto diventa utile quando una funzione ha punti d'uscita multipli
+e vanno eseguite alcune procedure di pulizia in comune. Se non è necessario
+pulire alcunché, allora ritornate direttamente.
+
+Assegnate un nome all'etichetta di modo che suggerisca cosa fa la goto o
+perché esiste. Un esempio di un buon nome potrebbe essere ``out_free_buffer:``
+se la goto libera (free) un ``buffer``. Evitate l'uso di nomi GW-BASIC come
+``err1:`` ed ``err2:``, potreste doverli riordinare se aggiungete o rimuovete
+punti d'uscita, e inoltre rende difficile verificarne la correttezza.
+
+I motivi per usare le goto sono:
+
+- i salti incondizionati sono più facili da capire e seguire
+- l'annidamento si riduce
+- si evita di dimenticare, per errore, di aggiornare un singolo punto d'uscita
+- aiuta il compilatore ad ottimizzare il codice ridondante ;)
+
+.. code-block:: c
+
+	int fun(int a)
+	{
+		int result = 0;
+		char *buffer;
+
+		buffer = kmalloc(SIZE, GFP_KERNEL);
+		if (!buffer)
+			return -ENOMEM;
+
+		if (condition1) {
+			while (loop1) {
+				...
+			}
+			result = 1;
+			goto out_free_buffer;
+		}
+		...
+	out_free_buffer:
+		kfree(buffer);
+		return result;
+	}
+
+Un baco abbastanza comune di cui bisogna prendere nota è il ``one err bugs``
+che assomiglia a questo:
+
+.. code-block:: c
+
+	err:
+		kfree(foo->bar);
+		kfree(foo);
+		return ret;
+
+Il baco in questo codice è che in alcuni punti d'uscita la variabile ``foo`` è
+NULL. Normalmente si corregge questo baco dividendo la gestione dell'errore in
+due parti ``err_free_bar:`` e ``err_free_foo:``:
+
+.. code-block:: c
+
+	err_free_bar:
+		kfree(foo->bar);
+	err_free_foo:
+		kfree(foo);
+		return ret;
+
+Idealmente, dovreste simulare condizioni d'errore per verificare i vostri
+percorsi d'uscita.
+
+
+8) Commenti
+-----------
+
+I commenti sono una buona cosa, ma c'è anche il rischio di esagerare. MAI
+spiegare COME funziona il vostro codice in un commento: è molto meglio
+scrivere il codice di modo che il suo funzionamento sia ovvio, inoltre
+spiegare codice scritto male è una perdita di tempo.
+
+Solitamente, i commenti devono dire COSA fa il codice, e non COME lo fa.
+Inoltre, cercate di evitare i commenti nel corpo della funzione: se la
+funzione è così complessa che dovete commentarla a pezzi, allora dovreste
+tornare al punto 6 per un momento. Potete mettere dei piccoli commenti per
+annotare o avvisare il lettore circa un qualcosa di particolarmente arguto
+(o brutto), ma cercate di non esagerare. Invece, mettete i commenti in
+testa alla funzione spiegando alle persone cosa fa, e possibilmente anche
+il PERCHÉ.
+
+Per favore, quando commentate una funzione dell'API del kernel usate il
+formato kernel-doc. Per maggiori dettagli, leggete i file in
+:ref:`Documentation/translations/it_IT/doc-guide/ ` e in
+``scripts/kernel-doc``.
+
+Lo stile preferito per i commenti più lunghi (multi-riga) è:
+
+.. code-block:: c
+
+	/*
+	 * This is the preferred style for multi-line
+	 * comments in the Linux kernel source code.
+	 * Please use it consistently.
+	 *
+	 * Description:  A column of asterisks on the left side,
+	 * with beginning and ending almost-blank lines.
+	 */
+
+Per i file in net/ e in drivers/net/ lo stile preferito per i commenti
+più lunghi (multi-riga) è leggermente diverso.
+
+.. code-block:: c
+
+	/* The preferred comment style for files in net/ and drivers/net
+	 * looks like this.
+	 *
+	 * It is nearly the same as the generally preferred comment style,
+	 * but there is no initial almost-blank line.
+	 */
+
+È anche importante commentare i dati, sia per i tipi base che per tipi
+derivati. A questo scopo, dichiarate un dato per riga (niente virgole
+per una dichiarazione multipla). Questo vi lascerà spazio per un piccolo
+commento per spiegarne l'uso.
+
+
+9) Avete fatto un pasticcio
+---------------------------
+
+Va bene, li facciamo tutti. Probabilmente vi è stato detto dal vostro
+aiutante Unix di fiducia che ``GNU emacs`` formatta automaticamente il
+codice C per conto vostro, e avete notato che sì, in effetti lo fa, ma che
+i modi predefiniti non sono proprio allettanti (infatti, sono peggio che
+premere tasti a caso - un numero infinito di scimmie che scrivono in
+GNU emacs non faranno mai un buon programma).
+
+Quindi, potete sbarazzarvi di GNU emacs, o riconfigurarlo con valori più
+sensati. Per fare quest'ultima cosa, potete appiccicare il codice che
+segue nel vostro file .emacs:
+
+.. code-block:: none
+
+  (defun c-lineup-arglist-tabs-only (ignored)
+    "Line up argument lists by tabs, not spaces"
+    (let* ((anchor (c-langelem-pos c-syntactic-element))
+           (column (c-langelem-2nd-pos c-syntactic-element))
+           (offset (- (1+ column) anchor))
+           (steps (floor offset c-basic-offset)))
+      (* (max steps 1)
+         c-basic-offset)))
+
+  (add-hook 'c-mode-common-hook
+            (lambda ()
+              ;; Add kernel style
+              (c-add-style
+               "linux-tabs-only"
+               '("linux" (c-offsets-alist
+                          (arglist-cont-nonempty
+                           c-lineup-gcc-asm-reg
+                           c-lineup-arglist-tabs-only))))))
+
+  (add-hook 'c-mode-hook
+            (lambda ()
+              (let ((filename (buffer-file-name)))
+                ;; Enable kernel mode for the appropriate files
+                (when (and filename
+                           (string-match (expand-file-name "~/src/linux-trees")
+                                         filename))
+                  (setq indent-tabs-mode t)
+                  (setq show-trailing-whitespace t)
+                  (c-set-style "linux-tabs-only")))))
+
+Questo farà funzionare meglio emacs con lo stile del kernel per i file che
+si trovano nella cartella ``~/src/linux-trees``.
+
+Ma anche se doveste fallire nell'ottenere una formattazione sensata in emacs
+non tutto è perduto: usate ``indent``.
+
+Ora, ancora, GNU indent ha la stessa configurazione decerebrata di GNU emacs,
+ed è per questo che dovete passargli alcune opzioni da riga di comando.
+Tuttavia, non è così terribile, perché perfino i creatori di GNU indent
+riconoscono l'autorità di K&R (le persone del progetto GNU non sono cattive,
+sono solo mal indirizzate sull'argomento), quindi date ad indent le opzioni
+``-kr -i8`` (che significa ``K&R, 8 caratteri di indentazione``), o utilizzate
+``scripts/Lindent`` che indenterà usando l'ultimo stile.
+
+``indent`` ha un sacco di opzioni, e specialmente quando si tratta di
+riformattare i commenti dovreste dare un'occhiata alle pagine man.
+Ma ricordatevi: ``indent`` non è un correttore per una cattiva programmazione.
+
+Da notare che potete utilizzare anche ``clang-format`` per aiutarvi con queste
+regole, per riformattare rapidamente ed automaticamente alcune parti del
+vostro codice, e per revisionare interi file al fine di identificare errori
+di stile, refusi e possibilmente anche delle migliorie. È anche utile per
+ordinare gli ``#include``, per allineare variabili/macro, per ridistribuire
+il testo e altre cose simili.
+Per maggiori dettagli, consultate il file
+:ref:`Documentation/translations/it_IT/process/clang-format.rst `.
+
+10) File di configurazione Kconfig
+----------------------------------
+
+Per tutti i file di configurazione Kconfig* che si possono trovare nei
+sorgenti, l'indentazione è un po' differente. Le linee dopo un ``config``
+sono indentate con un tab, mentre il testo descrittivo è indentato di
+ulteriori due spazi. Esempio::
+
+  config AUDIT
+	bool "Auditing support"
+	depends on NET
+	help
+	  Enable auditing infrastructure that can be used with another
+	  kernel subsystem, such as SELinux (which requires this for
+	  logging of avc messages output). Does not do system-call
+	  auditing without CONFIG_AUDITSYSCALL.
+
+Le funzionalità davvero pericolose (per esempio il supporto alla scrittura
+per certi filesystem) dovrebbero essere dichiarate chiaramente come tali
+nella stringa di titolo::
+
+  config ADFS_FS_RW
+	bool "ADFS write support (DANGEROUS)"
+	depends on ADFS_FS
+	...
+
+Per la documentazione completa sui file di configurazione, consultate
+il documento Documentation/translations/it_IT/kbuild/kconfig-language.txt
+
+
+11) Strutture dati
+------------------
+
+Le strutture dati che hanno una visibilità superiore al contesto del
+singolo thread in cui vengono create e distrutte, dovrebbero sempre
+avere un contatore di riferimenti. Nel kernel non esiste un
+*garbage collector* (e fuori dal kernel i *garbage collector* sono lenti
+e inefficienti), questo significa che **dovete** assolutamente avere un
+contatore di riferimenti per ogni cosa che usate.
+
+Avere un contatore di riferimenti significa che potete evitare la
+sincronizzazione e permette a più utenti di accedere alla struttura dati
+in parallelo - e non doversi preoccupare di una struttura dati che
+improvvisamente sparisce dalla loro vista perché il loro processo dormiva
+o stava facendo altro per un attimo.
+
+Da notare che la sincronizzazione **non** si sostituisce al conteggio dei
+riferimenti. La sincronizzazione ha lo scopo di mantenere le strutture
+dati coerenti, mentre il conteggio dei riferimenti è una tecnica di gestione
+della memoria. Solitamente servono entrambe le cose, e non vanno confuse fra
+di loro.
+
+Quando si hanno diverse classi di utenti, le strutture dati possono avere
+due livelli di contatori di riferimenti. Il contatore di classe conta
+il numero dei suoi utenti, e il contatore globale viene decrementato una
+sola volta quando il contatore di classe va a zero.
+
+Un esempio di questo tipo di conteggio dei riferimenti multi-livello può
+essere trovato nel gestore della memoria (``struct mm_struct``: mm_users e
+mm_count), e nel codice dei filesystem (``struct super_block``: s_count e
+s_active).
+
+Ricordatevi: se un altro thread può trovare la vostra struttura dati, e non
+avete un contatore di riferimenti per essa, quasi certamente avete un baco.
+
+12) Macro, enumerati e RTL
+---------------------------
+
+I nomi delle macro che definiscono delle costanti e le etichette degli
+enumerati sono scritte in maiuscolo.
+
+.. code-block:: c
+
+	#define CONSTANT 0x12345
+
+Gli enumerati sono da preferire quando si definiscono molte costanti correlate.
+
+I nomi delle macro in MAIUSCOLO sono preferibili ma le macro che assomigliano
+a delle funzioni possono essere scritte in minuscolo.
+
+Generalmente, le funzioni inline sono preferibili rispetto alle macro che
+sembrano funzioni.
+
+Le macro che contengono più istruzioni dovrebbero essere sempre chiuse in un
+blocco do - while:
+
+.. code-block:: c
+
+	#define macrofun(a, b, c)			\
+		do {					\
+			if (a == 5)			\
+				do_this(b, c);		\
+		} while (0)
+
+Cose da evitare quando si usano le macro:
+
+1) le macro che hanno effetti sul flusso del codice:
+
+.. code-block:: c
+
+	#define FOO(x)					\
+		do {					\
+			if (blah(x) < 0)		\
+				return -EBUGGERED;	\
+		} while (0)
+
+sono **proprio** una pessima idea. Sembra una chiamata a funzione ma termina
+la funzione chiamante; non cercate di rompere il decodificatore interno di
+chi legge il codice.
+
+2) le macro che dipendono dall'uso di una variabile locale con un nome magico:
+
+.. code-block:: c
+
+	#define FOO(val) bar(index, val)
+
+potrebbe sembrare una bella cosa, ma è dannatamente confusionario quando uno
+legge il codice e potrebbe romperlo con un cambiamento che sembra innocente.
+
+3) le macro con argomenti che sono utilizzati come l-values; questo potrebbe
+ritorcervisi contro se qualcuno, per esempio, trasforma FOO in una funzione
+inline.
+
+4) dimenticatevi delle precedenze: le macro che definiscono espressioni devono
+essere racchiuse fra parentesi. State attenti a problemi simili con le macro
+parametrizzate.
+
+.. code-block:: c
+
+	#define CONSTANT 0x4000
+	#define CONSTEXP (CONSTANT | 3)
+
+5) collisione nello spazio dei nomi quando si definisce una variabile locale in
+una macro che sembra una funzione:
+
+.. code-block:: c
+
+	#define FOO(x)				\
+	({					\
+		typeof(x) ret;			\
+		ret = calc_ret(x);		\
+		(ret);				\
+	})
+
+ret è un nome comune per una variabile locale - __foo_ret difficilmente
+andrà in conflitto con una variabile già esistente.
+
+Il manuale di cpp si occupa esaustivamente delle macro. Il manuale di sviluppo
+di gcc copre anche l'RTL che viene usato frequentemente nel kernel per il
+linguaggio assembler.
+
+13) Visualizzare i messaggi del kernel
+--------------------------------------
+
+Agli sviluppatori del kernel piace essere visti come dotti. Tenete un occhio
+di riguardo per l'ortografia e farete una bella figura. In inglese, evitate
+l'uso di parole mozzate come ``dont``: usate ``do not`` oppure ``don't``.
+Scrivete messaggi concisi, chiari, e inequivocabili.
+
+I messaggi del kernel non devono terminare con un punto fermo.
+
+Scrivere i numeri fra parentesi (%d) non migliora alcunché e per questo
+dovrebbero essere evitati.
+
+Ci sono alcune macro per la diagnostica in ``<linux/device.h>`` che dovreste
+usare per assicurarvi che i messaggi vengano associati correttamente ai
+dispositivi e ai driver, e che siano etichettati correttamente: dev_err(),
+dev_warn(), dev_info(), e così via. Per messaggi che non sono associati ad
+alcun dispositivo, ``<linux/printk.h>`` definisce pr_info(), pr_warn(),
+pr_err(), eccetera.
+
+Tirar fuori un buon messaggio di debug può essere una vera sfida; e quando
+l'avete può essere d'enorme aiuto per risolvere problemi da remoto.
+Tuttavia, i messaggi di debug sono gestiti differentemente rispetto agli
+altri. Le funzioni pr_XXX() stampano incondizionatamente ma pr_debug() no;
+essa non viene compilata nella configurazione predefinita, a meno che
+DEBUG o CONFIG_DYNAMIC_DEBUG non vengano impostati. Questo vale anche per
+dev_dbg() e in aggiunta VERBOSE_DEBUG per aggiungere i messaggi dev_vdbg().
+
+Molti sottosistemi hanno delle opzioni di debug in Kconfig che aggiungono
+-DDEBUG nei corrispettivi Makefile, e in altri casi aggiungono #define DEBUG
+in specifici file. Infine, quando un messaggio di debug dev'essere stampato
+incondizionatamente, per esempio perché siete già in una sezione di debug
+racchiusa in #ifdef, potete usare printk(KERN_DEBUG ...).
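+
+A titolo puramente illustrativo, ecco uno schizzo minimale che mostra l'uso
+tipico di queste macro (il modulo e la funzione example_init() sono ipotetici,
+così come il controllo example_resource_ok()):
+
+.. code-block:: c
+
+	#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+	#include <linux/errno.h>
+	#include <linux/printk.h>
+	#include <linux/types.h>
+
+	static bool example_resource_ok(void)
+	{
+		return true;	/* segnaposto per un controllo reale */
+	}
+
+	static int example_init(void)
+	{
+		pr_info("inizializzazione del modulo d'esempio\n");
+
+		if (!example_resource_ok()) {
+			pr_err("risorsa d'esempio non disponibile\n");
+			return -ENODEV;
+		}
+
+		/* stampato solo con DEBUG oppure CONFIG_DYNAMIC_DEBUG */
+		pr_debug("valore interno: %d\n", 42);
+		return 0;
+	}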
+
+14) Assegnare memoria
+---------------------
+
+Il kernel fornisce i seguenti assegnatori ad uso generico:
+kmalloc(), kzalloc(), kmalloc_array(), kcalloc(), vmalloc(), e vzalloc().
+Per maggiori informazioni, consultate la documentazione dell'API.
+
+Il modo preferito per passare la dimensione di una struttura è il seguente:
+
+.. code-block:: c
+
+	p = kmalloc(sizeof(*p), ...);
+
+La forma alternativa, dove il nome della struttura viene scritto interamente,
+peggiora la leggibilità e introduce possibili bachi quando il tipo di
+puntatore cambia ma il corrispondente sizeof non viene aggiornato.
+
+Il valore di ritorno è un puntatore void, effettuare un cast su di esso è
+ridondante. La conversione fra un puntatore void e un qualsiasi altro tipo
+di puntatore è garantita dal linguaggio di programmazione C.
+
+Il modo preferito per assegnare un vettore è il seguente:
+
+.. code-block:: c
+
+	p = kmalloc_array(n, sizeof(...), ...);
+
+Il modo preferito per assegnare un vettore azzerato è il seguente:
+
+.. code-block:: c
+
+	p = kcalloc(n, sizeof(...), ...);
+
+Entrambe verificano la condizione di overflow per la dimensione
+d'assegnamento n * sizeof(...) e, se accade, ritornano NULL.
+
+15) Il morbo inline
+-------------------
+
+Sembra che ci sia la percezione errata che gcc abbia una qualche magica
+opzione "rendimi più veloce" chiamata ``inline``. In alcuni casi l'uso di
+inline è appropriato (per esempio in sostituzione delle macro, vedi
+capitolo 12), ma molto spesso non lo è. L'uso abbondante della parola chiave
+inline porta ad avere un kernel più grande, che si traduce in un sistema nel
+suo complesso più lento per via di una maggiore occupazione della cache per
+le istruzioni della CPU e poi semplicemente perché ci sarà meno spazio
+disponibile per la pagecache. Pensateci un attimo; un fallimento nella
+pagecache causa una ricerca su disco che può tranquillamente richiedere 5
+millisecondi. Ci sono TANTI cicli di CPU che potrebbero essere usati in
+questi 5 millisecondi.
+
+Spesso le persone dicono che aggiungere inline a delle funzioni dichiarate
+static e utilizzate una sola volta è sempre una scelta vincente perché non
+ci sono altri compromessi. Questo è tecnicamente vero ma gcc è in grado di
+trasformare automaticamente queste funzioni in inline; i problemi di
+manutenzione del codice per rimuovere gli inline quando compare un secondo
+utente surclassano il potenziale vantaggio nel suggerire a gcc di fare una
+cosa che avrebbe fatto comunque.
+
+16) Nomi e valori di ritorno delle funzioni
+-------------------------------------------
+
+Le funzioni possono ritornare diversi tipi di valori, e uno dei più comuni
+è quel valore che indica se una funzione ha completato con successo o meno.
+Questo valore può essere rappresentato come un codice di errore intero
+(-Exxx = fallimento, 0 = successo) oppure un booleano di successo
+(0 = fallimento, non-zero = successo).
+
+Mischiare questi due tipi di rappresentazioni è un terreno fertile per
+i bachi più insidiosi. Se il linguaggio C includesse una forte distinzione
+fra gli interi e i booleani, allora il compilatore potrebbe trovare questi
+errori per conto nostro ... ma questo non c'è. Per evitare di imbattersi
+in questo tipo di baco, seguite sempre la seguente convenzione::
+
+	Se il nome di una funzione è un'azione o un comando imperativo,
+	essa dovrebbe ritornare un codice di errore intero.
+	Se il nome è un predicato, la funzione dovrebbe ritornare un
+	booleano di "successo"
+
+Per esempio, ``add work`` è un comando, e la funzione add_work() ritorna 0
+in caso di successo o -EBUSY in caso di fallimento. Allo stesso modo,
+``PCI device present`` è un predicato, e la funzione pci_dev_present() ritorna
+1 se trova il dispositivo corrispondente con successo, altrimenti 0.
+
+Tutte le funzioni esportate (EXPORT) devono rispettare questa convenzione, e
+così dovrebbero anche tutte le funzioni pubbliche. Le funzioni private
+(static) possono non seguire questa convenzione, ma è comunque raccomandato
+che lo facciano.
+
+Le funzioni il cui valore di ritorno è il risultato di una computazione,
+piuttosto che l'indicazione sul successo di tale computazione, non sono
+soggette a questa regola. Solitamente si indicano gli errori ritornando un
+qualche valore fuori dai limiti. Un tipico esempio è quello delle funzioni
+che ritornano un puntatore; queste utilizzano NULL o ERR_PTR come meccanismo
+di notifica degli errori.
+
+17) Non reinventate le macro del kernel
+---------------------------------------
+
+Il file di intestazione include/linux/kernel.h contiene un certo numero
+di macro che dovreste usare piuttosto che implementarne una qualche variante.
+Per esempio, se dovete calcolare la lunghezza di un vettore, sfruttate la
+macro:
+
+.. code-block:: c
+
+	#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+Analogamente, se dovete calcolare la dimensione di un qualche campo di una
+struttura, usate
+
+.. code-block:: c
+
+	#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+
+Ci sono anche le macro min() e max() che, se vi serve, effettuano un controllo
+rigido sui tipi. Sentitevi liberi di leggere attentamente questo file
+d'intestazione per scoprire cos'altro è stato definito che non dovreste
+reinventare nel vostro codice.
+
+18) Linee di configurazione degli editor e altre schifezze
+-----------------------------------------------------------
+
+Alcuni editor possono interpretare dei parametri di configurazione integrati
+nei file sorgenti e indicati da marcatori speciali. Per esempio, emacs
+interpreta le linee marcate nel seguente modo:
+
+.. code-block:: c
+
+	-*- mode: c -*-
+
+O come queste:
+
+.. code-block:: c
+
+	/*
+	Local Variables:
+	compile-command: "gcc -DMAGIC_DEBUG_FLAG foo.c"
+	End:
+	*/
+
+Vim interpreta i marcatori come questi:
+
+.. code-block:: c
+
+	/* vim:set sw=8 noet */
+
+Non includete nessuna di queste cose nei file sorgenti. Le persone hanno le
+proprie configurazioni personali per l'editor, e i vostri sorgenti non
+dovrebbero sovrascrivergliele. Questo vale anche per i marcatori
+d'indentazione e di modalità d'uso. Le persone potrebbero aver configurato una
+modalità su misura, oppure potrebbero avere qualche altra magia per far
+funzionare bene l'indentazione.
+
+19) Inline assembly
+---------------------
+
+Nel codice specifico per un'architettura, potreste aver bisogno di codice
+*inline assembly* per interfacciarvi col processore o con una funzionalità
+specifica della piattaforma. Non esitate a farlo quando è necessario.
+Comunque, non usatelo gratuitamente quando il C può fare la stessa cosa.
+Potete e dovreste punzecchiare l'hardware in C quando è possibile.
+
+Considerate la scrittura di una semplice funzione che racchiude pezzi comuni
+di codice assembler piuttosto che continuare a riscrivere delle piccole
+varianti. Ricordatevi che l' *inline assembly* può utilizzare i parametri C.
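+
+Per esempio, ecco uno schizzo puramente illustrativo di una funzione del
+genere (l'istruzione ``magic_read`` è immaginaria, come nell'esempio più
+sotto, e serve solo a mostrare l'idea):
+
+.. code-block:: c
+
+	/*
+	 * Racchiude un'istruzione specifica della piattaforma in una
+	 * semplice funzione C riutilizzabile; i parametri C vengono
+	 * passati direttamente all'assembler.
+	 */
+	static inline unsigned long magic_read(unsigned long reg)
+	{
+		unsigned long val;
+
+		asm volatile("magic_read %0, %1" : "=r" (val) : "r" (reg));
+		return val;
+	}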
+
+Il codice assembler più corposo e non banale dovrebbe andare nei file .S,
+coi rispettivi prototipi C definiti nei file d'intestazione. I prototipi C
+per le funzioni assembler dovrebbero usare ``asmlinkage``.
+
+Potreste aver bisogno di marcare il vostro codice asm come volatile al fine
+d'evitare che GCC lo rimuova quando pensa che non ci siano effetti collaterali.
+Non c'è sempre bisogno di farlo, e farlo quando non serve limita le
+ottimizzazioni.
+
+Quando scrivete una singola espressione *inline assembly* contenente più
+istruzioni, mettete ognuna di queste istruzioni in una stringa e riga diversa;
+ad eccezione dell'ultima stringa/istruzione, ognuna deve terminare con ``\n\t``
+al fine di allineare correttamente l'assembler che verrà generato:
+
+.. code-block:: c
+
+	asm ("magic %reg1, #42\n\t"
+	     "more_magic %reg2, %reg3"
+	     : /* outputs */ : /* inputs */ : /* clobbers */);
+
+20) Compilazione sotto condizione
+---------------------------------
+
+Ovunque sia possibile, non usate le direttive condizionali del preprocessore
+(#if, #ifdef) nei file .c; farlo rende il codice difficile da leggere e da
+seguire. Invece, usate queste direttive nei file d'intestazione per definire
+le funzioni usate nei file .c, fornendo i relativi stub nel caso #else,
+e quindi chiamate queste funzioni senza condizioni di preprocessore. Il
+compilatore non produrrà alcun codice per le funzioni stub, produrrà gli
+stessi risultati, e la logica rimarrà semplice da seguire.
+
+È preferibile non compilare intere funzioni piuttosto che porzioni d'esse o
+porzioni d'espressioni. Piuttosto che mettere un ifdef in un'espressione,
+fattorizzate parte dell'espressione, o interamente, in funzioni e applicate
+la direttiva condizionale su di esse.
+
+Se avete una variabile o funzione che potrebbe non essere usata in alcune
+configurazioni, e quindi il compilatore potrebbe avvisarvi circa la definizione
+inutilizzata, marcate questa definizione come __maybe_unused piuttosto che
+racchiuderla in una direttiva condizionale del preprocessore. (Comunque,
+se una variabile o funzione è *sempre* inutilizzata, rimuovetela).
+
+Nel codice, dov'è possibile, usate la macro IS_ENABLED per convertire i
+simboli Kconfig in espressioni booleane C, e quindi usatela nelle classiche
+condizioni C:
+
+.. code-block:: c
+
+	if (IS_ENABLED(CONFIG_SOMETHING)) {
+		...
+	}
+
+Il compilatore valuterà la condizione come costante (constant-fold), e quindi
+includerà o escluderà il blocco di codice come se fosse in un #ifdef, quindi
+non ne aumenterà il tempo di esecuzione. Tuttavia, questo permette al
+compilatore C di vedere il codice nel blocco condizionale e verificarne la
+correttezza (sintassi, tipi, riferimenti ai simboli, eccetera). Quindi
+dovete comunque utilizzare #ifdef se il codice nel blocco condizionale esiste
+solo quando la condizione è soddisfatta.
+
+Alla fine di un blocco corposo di #if o #ifdef (più di alcune linee),
+mettete un commento sulla stessa riga di #endif, annotando la condizione
+che termina. Per esempio:
+
+.. code-block:: c
+
+	#ifdef CONFIG_SOMETHING
+	...
+	#endif /* CONFIG_SOMETHING */
+
+Appendice I) riferimenti
+------------------------
+
+The C Programming Language, Second Edition
+by Brian W. Kernighan and Dennis M. Ritchie.
+Prentice Hall, Inc., 1988.
+ISBN 0-13-110362-8 (paperback), 0-13-110370-9 (hardback).
+
+The Practice of Programming
+by Brian W. Kernighan and Rob Pike.
+Addison-Wesley, Inc., 1999.
+ISBN 0-201-61586-X.
+
+Manuali GNU - nei casi in cui sono compatibili con K&R e questo documento -
+per indent, cpp, gcc e i suoi dettagli interni, tutto disponibile qui
+http://www.gnu.org/manual/
+
+WG14 è il gruppo internazionale di standardizzazione per il linguaggio C,
+URL: http://www.open-std.org/JTC1/SC22/WG14/
+
+Kernel process/coding-style.rst, by greg@kroah.com at OLS 2002:
+http://www.kroah.com/linux/talks/ols_2002_kernel_codingstyle_talk/html/
diff --git a/Documentation/translations/it_IT/process/development-process.rst b/Documentation/translations/it_IT/process/development-process.rst
new file mode 100644
index 000000000000..f1a6eca30824
--- /dev/null
+++ b/Documentation/translations/it_IT/process/development-process.rst
@@ -0,0 +1,33 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/development-process.rst `
+:Translator: Alessia Mantegazza
+
+.. _it_development_process_main:
+
+Una guida al processo di sviluppo del Kernel
+============================================
+
+Contenuti:
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   1.Intro
+   2.Process
+   3.Early-stage
+   4.Coding
+   5.Posting
+   6.Followthrough
+   7.AdvancedTopics
+   8.Conclusion
+
+Lo scopo di questo documento è quello di aiutare gli sviluppatori (ed i loro
+supervisori) a lavorare con la comunità di sviluppo con il minimo sforzo. È
+un tentativo di documentare il funzionamento di questa comunità in modo che
+sia accessibile anche a coloro che non hanno familiarità con lo sviluppo del
+Kernel Linux (o, anzi, con lo sviluppo di software libero in generale). Benché
+qui sia presente del materiale tecnico, questa è una discussione rivolta in
+particolare al procedimento, e quindi per essere compreso non richiede una
+conoscenza approfondita sullo sviluppo del kernel.
diff --git a/Documentation/translations/it_IT/process/email-clients.rst b/Documentation/translations/it_IT/process/email-clients.rst
new file mode 100644
index 000000000000..224ab031ffd3
--- /dev/null
+++ b/Documentation/translations/it_IT/process/email-clients.rst
@@ -0,0 +1,12 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/email-clients.rst `
+
+.. _it_email_clients:
+
+Informazioni sui programmi di posta elettronica per Linux
+=========================================================
+
+.. warning::
+
+    TODO ancora da tradurre
diff --git a/Documentation/translations/it_IT/process/howto.rst b/Documentation/translations/it_IT/process/howto.rst
new file mode 100644
index 000000000000..909e6a55bc43
--- /dev/null
+++ b/Documentation/translations/it_IT/process/howto.rst
@@ -0,0 +1,655 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/howto.rst `
+:Translator: Alessia Mantegazza
+
+.. _it_process_howto:
+
+Come partecipare allo sviluppo del kernel Linux
+===============================================
+
+Questo è il documento fulcro di quanto trattato sull'argomento.
+Esso contiene le istruzioni su come diventare uno sviluppatore
+del kernel Linux e spiega come lavorare con la comunità di
+sviluppo kernel Linux. Il documento non tratterà alcun aspetto
+tecnico relativo alla programmazione del kernel, ma vi aiuterà
+indirizzandovi sulla corretta strada.
+
+Se qualsiasi cosa presente in questo documento diventasse obsoleta,
+vi preghiamo di inviare le correzioni agli amministratori di questo
+file, indicati in fondo al presente documento.
+
+Introduzione
+------------
+Dunque, volete imparare come diventare sviluppatori del kernel Linux?
+O vi è stato detto dal vostro capo, "Vai, scrivi un driver Linux per
+questo dispositivo". Bene, l'obbiettivo di questo documento è quello
+di insegnarvi tutto ciò che dovete sapere per raggiungere il vostro
+scopo descrivendo il procedimento da seguire e consigliandovi
+su come lavorare con la comunità. Il documento cercherà, inoltre,
+di spiegare alcune delle ragioni per le quali la comunità lavora in un
+modo suo particolare.
+
+Il kernel è scritto prevalentemente nel linguaggio C con alcune parti
+specifiche dell'architettura scritte in linguaggio assembly.
+Per lo sviluppo kernel è richiesta una buona conoscenza del linguaggio C.
+L'assembly (di qualsiasi architettura) non è richiesto, a meno che non
+pensiate di fare dello sviluppo di basso livello per un'architettura.
+Sebbene essi non siano un buon sostituto ad un solido studio del
+linguaggio C o ad anni di esperienza, i seguenti libri sono, se non
+altro, utili riferimenti:
+
+- "The C Programming Language" di Kernighan e Ritchie [Prentice Hall]
+- "Practical C Programming" di Steve Oualline [O'Reilly]
+- "C: A Reference Manual" di Harbison and Steele [Prentice Hall]
+
+Il kernel è stato scritto usando GNU C e la toolchain GNU.
+Sebbene si attenga allo standard ISO C89, esso utilizza una serie di
+estensioni che non sono previste in questo standard. Il kernel è un
+ambiente C indipendente, che non ha alcuna dipendenza dalle librerie
+C standard, così alcune parti del C standard non sono supportate.
+Le divisioni ``long long`` e i numeri in virgola mobile non sono permessi.
+Qualche volta è difficile comprendere gli assunti che il kernel ha
+riguardo gli strumenti e le estensioni in uso, e sfortunatamente non
+esiste alcuna indicazione definitiva. Per maggiori informazioni,
+controllate la pagina `info gcc`.
+
+Tenete a mente che state cercando di apprendere come lavorare con la comunità
+di sviluppo già esistente. Questo è un gruppo eterogeneo di persone, con alti
+standard di codifica, di stile e di procedura. Questi standard sono stati
+creati nel corso del tempo basandosi su quanto hanno riscontrato funzionare al
+meglio per una squadra così grande e geograficamente sparsa. Cercate di
+imparare, in anticipo, il più possibile circa questi standard, poiché ben
+spiegati; non aspettatevi che gli altri si adattino al vostro modo di fare
+o a quello della vostra azienda.
+
+Note legali
+------------
+Il codice sorgente del kernel Linux è rilasciato sotto GPL. Siete pregati
+di visionare il file COPYING, presente nella cartella principale dei
+sorgenti, per eventuali dettagli sulla licenza. Se avete ulteriori domande
+sulla licenza, contattate un avvocato, non chiedete sulle liste di discussione
+del kernel Linux. Le persone presenti in queste liste non sono avvocati,
+e non dovreste basarvi sulle loro dichiarazioni in materia giuridica.
+
+Per domande frequenti e risposte sulla licenza GPL, consultate:
+
+    https://www.gnu.org/licenses/gpl-faq.html
+
+Documentazione
+--------------
+I sorgenti del kernel Linux hanno una vasta base di documenti che vi
+insegneranno come interagire con la comunità del kernel. Quando nuove
+funzionalità vengono aggiunte al kernel, si raccomanda di aggiungere anche i
+relativi file di documentazione che spiegano come usarle.
+Quando un cambiamento del kernel genera anche un cambiamento nell'interfaccia
+con lo spazio utente, è raccomandabile che inviate una notifica o una
+correzione alle pagine *man* spiegando tale modifica agli amministratori di
+queste pagine all'indirizzo mtk.manpages@gmail.com, aggiungendo
+in CC la lista linux-api@vger.kernel.org.
+
+Di seguito una lista di file che sono presenti nei sorgenti del kernel e che
+è richiesto che voi leggiate:
+
+  :ref:`Documentation/translations/it_IT/admin-guide/README.rst `
+    Questo file dà una piccola anteprima del kernel Linux e descrive il
+    minimo necessario per configurare e generare il kernel. I novizi
+    del kernel dovrebbero iniziare da qui.
+
+  :ref:`Documentation/translations/it_IT/process/changes.rst `
+    Questo file fornisce una lista dei pacchetti software necessari
+    a compilare e far funzionare il kernel con successo.
+
+  :ref:`Documentation/translations/it_IT/process/coding-style.rst `
+    Questo file descrive lo stile della codifica per il kernel Linux,
+    e parte delle motivazioni che ne sono alla base. Tutto il nuovo codice
+    deve seguire le linee guida in questo documento. Molti amministratori
+    accetteranno patch solo se queste osserveranno tali regole, e molte
+    persone revisioneranno il codice solo se scritto nello stile appropriato.
+
+  :ref:`Documentation/translations/it_IT/process/submitting-patches.rst ` e
+  :ref:`Documentation/translations/it_IT/process/submitting-drivers.rst `
+    Questi file descrivono dettagliatamente come creare ed inviare una patch
+    con successo, includendo (ma non solo questo):
+
+       - Contenuto delle email
+       - Formato delle email
+       - I destinatari delle email
+
+    Seguire tali regole non garantirà il successo (tutte le patch sono
+    soggette a controlli relativi a contenuto e stile), ma non seguirle lo
+    precluderà sempre.
+
+    Altre ottime descrizioni di come creare buone patch sono:
+
+    "The Perfect Patch"
+        https://www.ozlabs.org/~akpm/stuff/tpp.txt
+
+    "Linux kernel patch submission format"
+        http://linux.yyz.us/patch-format.html
+
+  :ref:`Documentation/translations/it_IT/process/stable-api-nonsense.rst `
+    Questo file descrive le motivazioni sottostanti la conscia decisione di
+    non avere un'API stabile all'interno del kernel, incluse cose come:
+
+      - Sottosistemi shim-layers (per compatibilità?)
+      - Portabilità dei driver fra Sistemi Operativi.
+      - Attenuare i rapidi cambiamenti all'interno dei sorgenti del kernel
+        (o prevenirli)
+
+    Questo documento è vitale per la comprensione della filosofia alla base
+    dello sviluppo di Linux ed è molto importante per le persone che arrivano
+    da esperienze con altri Sistemi Operativi.
+
+  :ref:`Documentation/translations/it_IT/admin-guide/security-bugs.rst `
+    Se ritenete di aver trovato un problema di sicurezza nel kernel Linux,
+    seguite i passaggi scritti in questo documento per notificarlo agli
+    sviluppatori del kernel, ed aiutare la risoluzione del problema.
+
+  :ref:`Documentation/translations/it_IT/process/management-style.rst `
+    Questo documento descrive come i manutentori del kernel Linux operano
+    e la filosofia comune alla base del loro metodo. Questa è un'importante
+    lettura per tutti coloro che sono nuovi allo sviluppo del kernel (o per
+    chi è semplicemente curioso), poiché risolve molti dei più comuni
+    fraintendimenti e confusioni dovuti al particolare comportamento dei
+    manutentori del kernel.
+
+  :ref:`Documentation/translations/it_IT/process/stable-kernel-rules.rst `
+    Questo file descrive le regole sulle quali vengono basati i rilasci del
+    kernel, e spiega cosa fare se si vuole che una modifica venga inserita
+    in uno di questi rilasci.
+
+  :ref:`Documentation/translations/it_IT/process/kernel-docs.rst `
+    Una lista di documenti pertinenti allo sviluppo del kernel.
+    Per favore consultate questa lista se non trovate ciò che cercate nella
+    documentazione interna del kernel.
+
+  :ref:`Documentation/translations/it_IT/process/applying-patches.rst `
+    Una buona introduzione che descrive esattamente cos'è una patch e come
+    applicarla ai differenti rami di sviluppo del kernel.
+
+Il kernel inoltre ha un vasto numero di documenti che possono essere
+automaticamente generati dal codice sorgente stesso o da file
+ReStructuredText (ReST), come questo. Esso include una completa
+descrizione dell'API interna del kernel, e le regole su come gestire la
+sincronizzazione (locking) correttamente.
+
+Tutte queste tipologie di documenti possono essere generati in PDF o in
+HTML utilizzando::
+
+	make pdfdocs
+	make htmldocs
+
+rispettivamente dalla cartella principale dei sorgenti del kernel.
+
+I documenti che impiegano ReST saranno generati nella cartella
+Documentation/output.
+Questi possono essere generati anche in formato LaTeX e ePub con::
+
+	make latexdocs
+	make epubdocs
+
+Diventare uno sviluppatore del kernel
+-------------------------------------
+Se non sapete nulla sullo sviluppo del kernel Linux, dovreste dare uno
+sguardo al progetto *Linux KernelNewbies*:
+
+	https://kernelnewbies.org
+
+Esso prevede un'utile lista di discussione dove potete porre più o meno ogni
+tipo di quesito relativo ai concetti fondamentali sullo sviluppo del kernel
+(assicuratevi di cercare negli archivi, prima di chiedere qualcosa alla
+quale è già stata fornita risposta in passato). Esistono inoltre un canale
+IRC, che potete usare per formulare domande in tempo reale, e molti documenti
+utili che vi faciliteranno nell'apprendimento dello sviluppo del kernel Linux.
+
+Il sito internet contiene informazioni di base circa l'organizzazione del
+codice, sottosistemi e progetti attuali (sia interni che esterni a Linux).
+Esso descrive, inoltre, informazioni logistiche di base, riguardanti ad esempio
+la compilazione del kernel e l'applicazione di una modifica.
+
+Se non sapete dove cominciare, ma volete cercare delle attività dalle quali
+partire per partecipare alla comunità di sviluppo, andate al progetto Linux
+Kernel Janitor's.
+
+	https://kernelnewbies.org/KernelJanitors
+
+È un buon posto da cui iniziare. Esso presenta una lista di problematiche
+relativamente semplici da sistemare e pulire all'interno dei sorgenti del
+kernel Linux. Lavorando con gli sviluppatori incaricati di questo progetto,
+imparerete le basi per l'inserimento delle vostre modifiche all'interno dei
+sorgenti del kernel Linux, e possibilmente, sarete indirizzati al lavoro
+successivo da svolgere, se non ne avrete ancora idea.
+
+Prima di apportare una qualsiasi modifica al codice del kernel Linux,
+è imperativo comprendere come tale codice funziona. A questo scopo, non c'è
+nulla di meglio che leggerlo direttamente (la maggior parte dei bit più
+complessi sono ben commentati), eventualmente anche con l'aiuto di strumenti
+specializzati. Uno degli strumenti che è particolarmente raccomandato è
+il progetto Linux Cross-Reference, che è in grado di presentare codice
+sorgente in un formato autoreferenziale ed indicizzato. Un'eccellente ed
+An excellent and up-to-date place to browse the kernel code can be found
+at:
+
+    http://lxr.free-electrons.com/
+
+
+The development process
+-----------------------
+The Linux kernel development process consists of a few main "branches"
+and of many other branches for specific subsystems. These branches are:
+
+  - the 4.x kernel tree
+  - the 4.x.y -stable kernel tree
+  - the 4.x -git patches
+  - the subsystem kernel trees and their patches
+  - the 4.x -next kernel for integration tests
+
+The 4.x kernel tree
+~~~~~~~~~~~~~~~~~~~~~
+
+4.x kernels are maintained by Linus Torvalds, and can be found on
+https://kernel.org in the pub/linux/kernel/v4.x/ directory. The
+development process works as follows:
+
+  - As soon as a new kernel is released, a two-week window opens. During
+    this period maintainers can submit large changes to Linus; usually
+    these are changes that have already been included in the -next kernel
+    tree for a few weeks. The preferred way to submit changes is through
+    git (the tool used to manage the kernel sources; more information at
+    https://git-scm.com/), but plain patches are fine too.
+
+  - At the end of the two weeks an -rc1 kernel is released, and the goal
+    from then on is to make it as solid as possible. At this point most
+    patches should fix a regression. Bugs that have always existed do not
+    count as regressions, so send this kind of change only if it is
+    important. Note that an entirely new driver (or filesystem) may be
+    accepted after -rc1 because there is no risk of causing a regression
+    with such a change, as long as it is self-contained and does not
+    affect areas outside the code being added. git can be used to send
+    patches to Linus after -rc1 has been released, but the patches also
+    need to be sent to a public mailing list for further review.
+
+  - A new -rc is released whenever Linus deems the current tree to be in
+    a reasonably healthy state for testing. The goal is to release a new
+    -rc kernel every week.
+
+  - The process continues until the kernel is considered "ready"; this
+    should take about 6 weeks.
+
+It is worth mentioning what Andrew Morton wrote on the linux-kernel
+mailing list about kernel releases:
+
+    *"Nobody knows when a kernel will be released, because release time
+    depends on the state of known bugs, and not on a preconceived
+    timeline."*
+
+The 4.x.y -stable kernel tree
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Kernels with 3-part versions are "stable kernels". They contain
+relatively small, critical fixes for security problems or for significant
+regressions discovered in a given 4.x kernel.
+
+This is the recommended branch for users who want a recent and stable
+kernel and are not interested in helping test development or experimental
+versions.
+
+If no 4.x.y kernel is available, the highest-numbered 4.x kernel is the
+current stable kernel.
+
+4.x.y kernels are maintained by the "stable" team, and are released as
+needed.
+The usual release period is approximately two weeks, but it can be longer
+if there are no pressing problems. A security-related problem, instead,
+can cause a release to happen almost instantly.
+
+The file Documentation/process/stable-kernel-rules.rst (in the kernel
+sources) documents which kinds of changes are acceptable for the -stable
+tree, and how the release process works.
+
+The 4.x -git patches
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+These are daily snapshots of Linus's kernel tree, which are managed in a
+git repository (hence the name). These patches are usually released daily
+and represent the current state of Linus's tree. They are to be
+considered more experimental than an -rc kernel, since they are generated
+automatically without even a cursory glance to assess their state.
+
+
+Subsystem kernel trees and their patches
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The maintainers of the various kernel subsystems --- and also many
+subsystem developers --- expose their current state of development in
+their repositories. That way, others can see what is happening in the
+different areas of the kernel. In areas where development is fast, a
+developer may be asked to base their changes on such a repository in
+order to avoid conflicts between their submission and other work already
+in progress.
+
+Most of these repositories are git trees, but other SCMs are also in use,
+as well as patch queues published as quilt series.
+The addresses of the subsystem repositories are listed in the MAINTAINERS
+file. Many of them can be found on https://git.kernel.org/.
+
+Before a change is included in one of these subsystem trees, it is
+subject to a review which initially happens on mailing lists (see the
+dedicated section below). For several kernel subsystems, this review
+process is tracked with the patchwork tool.
+Patchwork offers a web interface that shows the posted patches, including
+any comments or revisions made, and maintainers can mark patches as
+"under review", "accepted", or "rejected". Several patchwork sites are
+listed at https://patchwork.kernel.org/.
+
+The 4.x -next kernel for integration tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before subsystem updates are merged into the 4.x mainline tree, they need
+to be integration-tested.
+For this purpose a special testing repository exists, into which
+virtually all subsystem branches are pulled on a daily basis:
+
+    https://git.kernel.org/?p=linux/kernel/git/next/linux-next.git
+
+This way, the -next kernels offer a summary outlook of what is expected
+to be in the mainline kernel at the next merge period.
+Adventurous testers who want to runtime-test the -next kernels are more
+than welcome.
+
+
+Reporting bugs
+--------------
+
+https://bugzilla.kernel.org is where the Linux kernel developers track
+kernel bugs. Users are encouraged to report all the bugs they find using
+this tool.
+For more details on how to use the kernel bugzilla, see:
+
+    https://bugzilla.kernel.org/page.cgi?id=faq.html
+
+The file admin-guide/reporting-bugs.rst in the main kernel source
+directory provides a good template for how to report a kernel bug, and
+explains which information the kernel developers need in order to help
+track down the problem.
+
+Managing bug reports
+--------------------
+
+One of the best ways to put your hacking skills into practice is to fix
+bugs reported by other people. Not only will you help make the kernel
+more stable, but you will also learn to fix real-world problems, improve
+your skills, and other developers will become aware of your presence.
+Fixing bugs is one of the best ways to earn merit among other developers,
+because not many people like wasting time fixing other people's bugs.
+
+To work on already reported bugs, go to https://bugzilla.kernel.org.
+
+Mailing lists
+-------------
+
+As described in many of the documents above, the majority of kernel
+developers participate in the Linux Kernel mailing list. Details on how
+to subscribe and unsubscribe from the list can be found at:
+
+    http://vger.kernel.org/vger-lists.html#linux-kernel
+
+There are several mailing list archives on the web. Use any search engine
+to find them. For example:
+
+    http://dir.gmane.org/gmane.linux.kernel
+
+It is highly recommended that you search the archives for the topic you
+want to raise before posting it to the list. Many things have already
+been discussed in detail and are recorded in the mailing list archives.
+
+Most of the individual kernel subsystems also have their own dedicated
+mailing list. Look in the MAINTAINERS file for a list of the mailing
+lists and how they are used.
+
+Many of these lists are hosted on kernel.org. For information, consult
+the following page:
+
+    http://vger.kernel.org/vger-lists.html
+
+Please remember your good manners when using these lists. Though a bit
+cheesy, the following URL contains some simple guidelines for interacting
+with the list (or with any other list):
+
+    http://www.albion.com/netiquette/
+
+If several people reply to your mail, the list of recipients (the CC
+list) can get pretty large. Do not remove anybody from the CC: list
+without a good reason, and do not reply only to the address of the
+mailing list. Get used to receiving the same email twice, once from the
+sender and once from the list, and do not try to tune that by adding
+fancy mail headers; people will not like it.
+
+Remember to always stay on topic and to keep the attributions of your
+replies intact; keep the "John Kernelhacker wrote ...:" line at the top
+of your reply and add your answers between the individual quoted blocks,
+instead of writing at the top of the email.
+
+If you add patches to your mail, make sure they are plainly readable as
+indicated in Documentation/process/submitting-patches.rst.
+Kernel developers do not want to deal with attachments or compressed
+patches; they want to be able to comment on the individual lines of your
+changes, which can only work that way.
+Make sure you use a mail program that does not mangle spaces and
+characters. An excellent first test is to send the mail to yourself and
+try to apply your own patch. If that does not work, fix your mail
+program, or change it, until it works.
+
+And finally, please remember to show respect for the other subscribers.
+
+Working with the community
+--------------------------
+
+The goal of this community is to provide the best kernel possible. When
+you submit a change that you want included, it will be evaluated strictly
+on its technical merits. So, what should you expect?
+
+  - criticism
+  - comments
+  - requests for change
+  - requests for justification
+  - nothing
+
+Remember, this is part of getting your change into the kernel. You have
+to be able to take the criticism, evaluate it on a technical level, and
+either rework your changes or provide clear and concise reasoning as to
+why the suggested changes should not be made.
+If there are no responses, wait a few days and try again; sometimes
+things get lost in the huge pile of emails.
+
+What should you not do?
+
+  - expect your change to be accepted without question
+  - become defensive
+  - ignore the comments
+  - resubmit the change without making any of the requested changes
+
+In a community that is searching for the best possible technical
+solutions, there will always be differing opinions on how beneficial a
+change is. Be cooperative, and be willing to adapt your idea so that it
+fits into the kernel. Or at least be willing to prove that your idea is
+worth it. Remember, being wrong is acceptable as long as you are willing
+to work toward a solution that is right.
+
+It is normal for the replies to your first change to be simply a list of
+a dozen things you should fix. This does **not** mean that your patch
+will not be accepted, and it is **not** meant against you personally.
+Simply correct all the issues raised against your change and resubmit it.
+
+Differences between the kernel community and corporate structures
+------------------------------------------------------------------
+
+The kernel community works differently from most corporate development
+environments. Here is a list of things you can try to do to avoid
+problems:
+
+  Good things to say about your proposed changes:
+
+    - "This solves multiple problems."
+    - "This removes 2000 lines of code."
+    - "Here is a patch that explains what I am trying to do."
+    - "I have tested it on 5 different architectures.."
+    - "Here is a series of small patches that.."
+    - "This increases performance on typical machines..."
+
+  Things you should avoid saying:
+
+    - "We did it this way in AIX/ptx/Solaris, so it must be right..."
+    - "I have been doing this for 20 years, so.."
+    - "My company needs this to make money"
+    - "This is for our company's product line"
+    - "Here is my 1000-page design document describing what I have in
+      mind"
+    - "I have been working on this for 6 months..."
+    - "Here is a 5000-line patch that.."
+    - "I rewrote the current mess, and here it is.."
+    - "I have a deadline, and this change needs to be approved now"
+
+Another way in which the kernel community differs from most traditional
+software engineering environments is the "faceless" nature of human
+interaction. One benefit of using email and IRC as the primary forms of
+communication is the absence of discrimination based on gender or race.
+The Linux working environment accepts women and minorities because all
+you are is an email address. The international aspect also helps level
+the playing field, because you cannot guess a person's gender from their
+name. A man may be named Andrea, and a woman may be named Pat. Most women
+who have worked on the Linux kernel and have expressed a personal opinion
+have had positive experiences.
+
+Language can be an obstacle for people who are not comfortable with
+English. A good command of the language may be needed to put your ideas
+across properly on the mailing lists, so it is recommended that you
+proofread your emails before sending them, to make sure they make sense
+in English.
+
+
+Break up your changes
+---------------------
+
+The Linux kernel community does not gladly accept large chunks of code
+dropped on it all at once. Changes need to be properly introduced,
+discussed, and broken up into small, independent portions. This is almost
+the exact opposite of what companies usually do. Your proposal should
+also be introduced very early in the development process, so that you can
+receive feedback on what you are doing. Let the community feel that you
+are working with them, and not simply using them as a dumping ground for
+your additions. In any case, do not send 50 emails to a mailing list all
+at once; most of the time your patch series should be smaller than that.
+
+The reasons for breaking things up are the following:
+
+1) Small changes increase the likelihood that your patches will be
+   accepted, since accepting them otherwise takes too much time or effort
+   to verify their correctness. A 5-line patch can be accepted by a
+   maintainer with barely a second glance, whereas a 500-line patch may
+   take hours of review to verify its correctness (the time needed grows
+   exponentially with the size of the patch, or thereabouts).
+
+   Small patches are also very easy to debug when something goes wrong.
+   It is much easier to back out changes one by one than it is to dissect
+   a very large patch after it has been applied (and broken something).
+
+2) It is important not only to send small patches, but also to rewrite
+   and simplify them (or simply reorder them) before submitting them.
+
+Here is an analogy from kernel developer Al Viro:
+
+    *"Think of a math teacher grading a (math) student's homework. The
+    teacher does not want to see the student's trials and errors before
+    they came up with the solution. They want to see the cleanest, most
+    elegant answer possible. A good student knows this, and would never
+    submit their intermediate work before the final solution."*
+    *"The same is true of kernel development. Maintainers and reviewers
+    do not want to see the thought process behind the problem being
+    solved. They want to see a simple and elegant solution."*
+
+It can be a real challenge to keep the balance between presenting an
+elegant solution, working together with the community, and discussing
+unfinished work. Therefore it is good to enter the review process early,
+to improve your work, but also to keep your changes in little, bite-sized
+pieces that can be accepted already, even if your whole task is not ready
+yet.
+
+Finally, realize that it is not acceptable to send incomplete changes
+with the promise that they will be "fixed up later".
+
+
+Justify your changes
+--------------------
+
+Along with breaking up your changes, it is just as important to let the
+Linux community know why they should accept them. New features must be
+justified as necessary and useful.
+
+
+Document your changes
+---------------------
+
+When sending your changes, pay special attention to what you write in
+your email. This will become the *ChangeLog* for the change, and will be
+visible to everyone, forever. It should describe the change completely,
+containing:
+
+  - why the change is necessary
+  - the overall design approach of the patch
+  - implementation details
+  - test results
+
+For more details on what all of this should look like, see the ChangeLog
+section of the document:
+
+  "The Perfect Patch"
+      http://www.ozlabs.org/~akpm/stuff/tpp.txt
+
+All of this is sometimes very hard to achieve. Perfecting these practices
+can take years (if ever). It is a continuous process of improvement that
+requires a lot of patience and determination. But do not give up, it can
+be done. Many have done it before, and each of them had to start exactly
+where you are now.
+
+
+
+
+----------
+
+Thanks to Paolo Ciarrocchi, who allowed the "Development Process" section
+(https://lwn.net/Articles/94386/) to be based on text he had written, and
+to Randy Dunlap and Gerrit Huizenga for the list of things you should and
+should not say. Also thanks to Pat Mochel, Hanna Linder, Randy Dunlap,
+Kay Sievers, Vojtech Pavlik, Jan Kara, Josh Boyer, Kees Cook, Andrew
+Morton, Andi Kleen, Vadim Lobanov, Jesper Juhl, Adrian Bunk, Keri Harris,
+Frans Pop, David A. Wheeler, Junio Hamano, Michael Kerrisk, and Alex
+Shepard for their review, comments, and contributions. Without their
+help, this document would not have been possible.
+
+Maintainer: Greg Kroah-Hartman
diff --git a/Documentation/translations/it_IT/process/index.rst b/Documentation/translations/it_IT/process/index.rst
new file mode 100644
index 000000000000..2eda85d5cd1e
--- /dev/null
+++ b/Documentation/translations/it_IT/process/index.rst
@@ -0,0 +1,67 @@
+.. raw:: latex
+
+	\renewcommand\thesection*
+	\renewcommand\thesubsection*
+
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/index.rst `
+:Translator: Federico Vaga
+
+.. _it_process_index:
+
+Working with the kernel development community
+===============================================
+
+So you want to become a kernel developer? Welcome! There is a lot to
+learn about the technical side of the kernel, but it is also important
+to understand how our community works.
+Reading these documents will make it much easier for your changes to be
+accepted with a minimum of effort.
+
+Below are the guides that every developer should read.
+
+.. toctree::
+   :maxdepth: 1
+
+   howto
+   code-of-conduct
+   development-process
+   submitting-patches
+   coding-style
+   maintainer-pgp-guide
+   email-clients
+   kernel-enforcement-statement
+   kernel-driver-statement
+
+Then there are other guides about the community that are of interest to
+many of the developers:
+
+.. toctree::
+   :maxdepth: 1
+
+   changes
+   submitting-drivers
+   stable-api-nonsense
+   management-style
+   stable-kernel-rules
+   submit-checklist
+   kernel-docs
+
+And finally, here are some more technical guides that ended up here only
+because no better place was found for them.
+
+.. toctree::
+   :maxdepth: 1
+
+   applying-patches
+   adding-syscalls
+   magic-number
+   volatile-considered-harmful
+   clang-format
+
+.. only:: subproject and html
+
+   Indices
+   =======
+
+   * :ref:`genindex`
diff --git a/Documentation/translations/it_IT/process/kernel-docs.rst b/Documentation/translations/it_IT/process/kernel-docs.rst
new file mode 100644
index 000000000000..7bd70d661737
--- /dev/null
+++ b/Documentation/translations/it_IT/process/kernel-docs.rst
@@ -0,0 +1,13 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/kernel-docs.rst `
+
+
+.. _it_kernel_docs:
+
+Index of documents for people interested in understanding and/or writing for the Linux kernel
+==============================================================================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/kernel-driver-statement.rst b/Documentation/translations/it_IT/process/kernel-driver-statement.rst
new file mode 100644
index 000000000000..f016a75a9d6e
--- /dev/null
+++ b/Documentation/translations/it_IT/process/kernel-driver-statement.rst
@@ -0,0 +1,211 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/kernel-driver-statement.rst `
+:Translator: Federico Vaga
+
+.. _it_process_statement_driver:
+
+Kernel driver statement
+=======================
+
+Position statement on Linux kernel modules
+------------------------------------------
+
+We, the undersigned kernel developers, consider any closed-source Linux
+kernel module or driver to be harmful and undesirable. We have repeatedly
+found them to be detrimental to Linux users, businesses, and the Linux
+ecosystem in general. Such modules negate the openness, stability,
+flexibility, and maintainability of the Linux development model and shut
+their users off from the expertise of the Linux community. Vendors that
+distribute closed-source code force their users to give up the key Linux
+advantages or to look for new vendors.
+Therefore, in order to take advantage of the benefits that open code has
+to offer, such as lower costs and shared support, we urge vendors to
+adopt a policy of supporting their Linux customers through the release
+of kernel source code.
+
+We speak only for ourselves, and not for any company we work for today,
+have worked for in the past, or will work for in the future.
+ + + - Dave Airlie + - Nick Andrew + - Jens Axboe + - Ralf Baechle + - Felipe Balbi + - Ohad Ben-Cohen + - Muli Ben-Yehuda + - Jiri Benc + - Arnd Bergmann + - Thomas Bogendoerfer + - Vitaly Bordug + - James Bottomley + - Josh Boyer + - Neil Brown + - Mark Brown + - David Brownell + - Michael Buesch + - Franck Bui-Huu + - Adrian Bunk + - François Cami + - Ralph Campbell + - Luiz Fernando N. Capitulino + - Mauro Carvalho Chehab + - Denis Cheng + - Jonathan Corbet + - Glauber Costa + - Alan Cox + - Magnus Damm + - Ahmed S. Darwish + - Robert P. J. Day + - Hans de Goede + - Arnaldo Carvalho de Melo + - Helge Deller + - Jean Delvare + - Mathieu Desnoyers + - Sven-Thorsten Dietrich + - Alexey Dobriyan + - Daniel Drake + - Alex Dubov + - Randy Dunlap + - Michael Ellerman + - Pekka Enberg + - Jan Engelhardt + - Mark Fasheh + - J. Bruce Fields + - Larry Finger + - Jeremy Fitzhardinge + - Mike Frysinger + - Kumar Gala + - Robin Getz + - Liam Girdwood + - Jan-Benedict Glaw + - Thomas Gleixner + - Brice Goglin + - Cyrill Gorcunov + - Andy Gospodarek + - Thomas Graf + - Krzysztof Halasa + - Harvey Harrison + - Stephen Hemminger + - Michael Hennerich + - Tejun Heo + - Benjamin Herrenschmidt + - Kristian Høgsberg + - Henrique de Moraes Holschuh + - Marcel Holtmann + - Mike Isely + - Takashi Iwai + - Olof Johansson + - Dave Jones + - Jesper Juhl + - Matthias Kaehlcke + - Kenji Kaneshige + - Jan Kara + - Jeremy Kerr + - Russell King + - Olaf Kirch + - Roel Kluin + - Hans-Jürgen Koch + - Auke Kok + - Peter Korsgaard + - Jiri Kosina + - Aaro Koskinen + - Mariusz Kozlowski + - Greg Kroah-Hartman + - Michael Krufky + - Aneesh Kumar + - Clemens Ladisch + - Christoph Lameter + - Gunnar Larisch + - Anders Larsen + - Grant Likely + - John W. Linville + - Yinghai Lu + - Tony Luck + - Pavel Machek + - Matt Mackall + - Paul Mackerras + - Roland McGrath + - Patrick McHardy + - Kyle McMartin + - Paul Menage + - Thierry Merle + - Eric Miao + - Akinobu Mita + - Ingo Molnar + - James Morris + - Andrew Morton + - Paul Mundt + - Oleg Nesterov + - Luca Olivetti + - S.Çağlar Onur + - Pierre Ossman + - Keith Owens + - Venkatesh Pallipadi + - Nick Piggin + - Nicolas Pitre + - Evgeniy Polyakov + - Richard Purdie + - Mike Rapoport + - Sam Ravnborg + - Gerrit Renker + - Stefan Richter + - David Rientjes + - Luis R. Rodriguez + - Stefan Roese + - Francois Romieu + - Rami Rosen + - Stephen Rothwell + - Maciej W. Rozycki + - Mark Salyzyn + - Yoshinori Sato + - Deepak Saxena + - Holger Schurig + - Amit Shah + - Yoshihiro Shimoda + - Sergei Shtylyov + - Kay Sievers + - Sebastian Siewior + - Rik Snel + - Jes Sorensen + - Alexey Starikovskiy + - Alan Stern + - Timur Tabi + - Hirokazu Takata + - Eliezer Tamir + - Eugene Teo + - Doug Thompson + - FUJITA Tomonori + - Dmitry Torokhov + - Marcelo Tosatti + - Steven Toth + - Theodore Tso + - Matthias Urlichs + - Geert Uytterhoeven + - Arjan van de Ven + - Ivo van Doorn + - Rik van Riel + - Wim Van Sebroeck + - Hans Verkuil + - Horst H. von Brand + - Dmitri Vorobiev + - Anton Vorontsov + - Daniel Walker + - Johannes Weiner + - Harald Welte + - Matthew Wilcox + - Dan J. Williams + - Darrick J. Wong + - David Woodhouse + - Chris Wright + - Bryan Wu + - Rafael J. 
Wysocki
+ - Herbert Xu
+ - Vlad Yasevich
+ - Peter Zijlstra
+ - Bartlomiej Zolnierkiewicz
+
diff --git a/Documentation/translations/it_IT/process/kernel-enforcement-statement.rst b/Documentation/translations/it_IT/process/kernel-enforcement-statement.rst
new file mode 100644
index 000000000000..4ddf5a35b270
--- /dev/null
+++ b/Documentation/translations/it_IT/process/kernel-enforcement-statement.rst
@@ -0,0 +1,13 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/kernel-enforcement-statement.rst `
+
+
+.. _it_process_statement_kernel:
+
+Linux kernel license enforcement
+================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/magic-number.rst b/Documentation/translations/it_IT/process/magic-number.rst
new file mode 100644
index 000000000000..5281d53e57ee
--- /dev/null
+++ b/Documentation/translations/it_IT/process/magic-number.rst
@@ -0,0 +1,170 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/magic-number.rst `
+:Translator: Federico Vaga
+
+.. _it_magicnumbers:
+
+Linux magic numbers
+===================
+
+This document is a registry of the magic numbers in use. When you add a
+magic number to a structure, you should also add it to this document; it
+is best if the magic numbers used by the various structures are all
+unique.
+
+It is a **really** good idea to protect kernel data structures with magic
+numbers. This allows you to check at run time whether (a) a structure has
+been clobbered, or (b) you have passed the wrong structure to a routine.
+The latter is very useful - particularly when a data structure is passed
+around via a void \* pointer. The tty code, for instance, does this
+regularly, passing driver-specific and line-discipline-specific
+structures back and forth.
+
+To use a magic number, declare it at the beginning of the data structure,
+like so::
+
+	struct tty_ldisc {
+		int magic;
+		...
+	};
+
+Please follow this discipline when you add future enhancements to the
+kernel! It has saved me countless hours of debugging, especially in the
+tricky cases where an array has been overrun and the data structure that
+followed it in memory was overwritten. Following this discipline, these
+cases are detected quickly and safely.
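+
+As a purely illustrative sketch of such a run-time check (the
+MY_DRIVER_MAGIC value, the structure, and the function below are invented
+for this example)::
+
+	#define MY_DRIVER_MAGIC	0x4d594452	/* invented for this example */
+
+	struct my_driver_state {
+		int magic;		/* set to MY_DRIVER_MAGIC at init */
+		/* ... */
+	};
+
+	static int my_driver_do_work(struct my_driver_state *state)
+	{
+		/* Catches a clobbered structure, or the wrong one entirely. */
+		if (state->magic != MY_DRIVER_MAGIC)
+			return -EINVAL;
+
+		/* safe to use the rest of the structure */
+		return 0;
+	}
+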
+Changelog::
+
+	Theodore Ts'o
+	31 Mar 94
+
+	  The magic table is current to Linux 2.1.55.
+
+	Michael Chastain
+
+	22 Sep 1997
+
+	  Now it should be up to date with Linux 2.1.112. Since we are in a
+	  feature freeze it is unlikely that anything will change before
+	  version 2.2.x. The entries are sorted by the number field.
+
+	Krzysztof G. Baranowski
+
+	29 Jul 1998
+
+	  Updated the table to Linux 2.5.45. Right over the feature freeze,
+	  but it is still possible that some new magic number will sneak in
+	  before kernel 2.6.x.
+
+	Petr Baudis
+
+	03 Nov 2002
+
+	  Updated the magic table to Linux 2.5.74.
+
+	Fabian Frederick
+
+	09 Jul 2003
+
+
+===================== ================ ======================== ==========================================
+Magic name            Number           Structure                File
+===================== ================ ======================== ==========================================
+PG_MAGIC              'P'              pg_{read,write}_hdr      ``include/linux/pg.h``
+CMAGIC                0x0111           user                     ``include/linux/a.out.h``
+MKISS_DRIVER_MAGIC    0x04bf           mkiss_channel            ``drivers/net/mkiss.h``
+HDLC_MAGIC            0x239e           n_hdlc                   ``drivers/char/n_hdlc.c``
+APM_BIOS_MAGIC        0x4101           apm_user                 ``arch/x86/kernel/apm_32.c``
+CYCLADES_MAGIC        0x4359           cyclades_port            ``include/linux/cyclades.h``
+DB_MAGIC              0x4442           fc_info                  ``drivers/net/iph5526_novram.c``
+DL_MAGIC              0x444d           fc_info                  ``drivers/net/iph5526_novram.c``
+FASYNC_MAGIC          0x4601           fasync_struct            ``include/linux/fs.h``
+FF_MAGIC              0x4646           fc_info                  ``drivers/net/iph5526_novram.c``
+ISICOM_MAGIC          0x4d54           isi_port                 ``include/linux/isicom.h``
+PTY_MAGIC             0x5001                                    ``drivers/char/pty.c``
+PPP_MAGIC             0x5002           ppp                      ``include/linux/if_pppvar.h``
+SERIAL_MAGIC          0x5301           async_struct             ``include/linux/serial.h``
+SSTATE_MAGIC          0x5302           serial_state             ``include/linux/serial.h``
+SLIP_MAGIC            0x5302           slip                     ``drivers/net/slip.h``
+STRIP_MAGIC           0x5303           strip                    ``drivers/net/strip.c``
+X25_ASY_MAGIC         0x5303           x25_asy                  ``drivers/net/x25_asy.h``
+SIXPACK_MAGIC         0x5304           sixpack                  ``drivers/net/hamradio/6pack.h``
+AX25_MAGIC            0x5316           ax_disp                  ``drivers/net/mkiss.h``
+TTY_MAGIC             0x5401           tty_struct               ``include/linux/tty.h``
+MGSL_MAGIC            0x5401           mgsl_info                ``drivers/char/synclink.c``
+TTY_DRIVER_MAGIC      0x5402           tty_driver               ``include/linux/tty_driver.h``
+MGSLPC_MAGIC          0x5402           mgslpc_info              ``drivers/char/pcmcia/synclink_cs.c``
+TTY_LDISC_MAGIC       0x5403           tty_ldisc                ``include/linux/tty_ldisc.h``
+USB_SERIAL_MAGIC      0x6702           usb_serial               ``drivers/usb/serial/usb-serial.h``
+FULL_DUPLEX_MAGIC     0x6969                                    ``drivers/net/ethernet/dec/tulip/de2104x.c``
+USB_BLUETOOTH_MAGIC   0x6d02           usb_bluetooth            ``drivers/usb/class/bluetty.c``
+RFCOMM_TTY_MAGIC      0x6d02                                    ``net/bluetooth/rfcomm/tty.c``
+USB_SERIAL_PORT_MAGIC 0x7301           usb_serial_port          ``drivers/usb/serial/usb-serial.h``
+CG_MAGIC              0x00090255       ufs_cylinder_group       ``include/linux/ufs_fs.h``
+RPORT_MAGIC           0x00525001       r_port                   ``drivers/char/rocket_int.h``
+LSEMAGIC              0x05091998       lse                      ``drivers/fc4/fc.c``
+GDTIOCTL_MAGIC        0x06030f07       gdth_iowr_str            ``drivers/scsi/gdth_ioctl.h``
+RIEBL_MAGIC           0x09051990                                ``drivers/net/atarilance.c``
+NBD_REQUEST_MAGIC     0x12560953       nbd_request              ``include/linux/nbd.h``
+RED_MAGIC2            0x170fc2a5       (any)                    ``mm/slab.c``
+BAYCOM_MAGIC          0x19730510       baycom_state             ``drivers/net/baycom_epp.c``
+ISDN_X25IFACE_MAGIC   0x1e75a2b9       isdn_x25iface_proto_data ``drivers/isdn/isdn_x25iface.h``
+ECP_MAGIC             0x21504345       cdkecpsig                ``include/linux/cdk.h``
+LSOMAGIC              0x27091997       lso                      ``drivers/fc4/fc.c``
+LSMAGIC               0x2a3b4d2a       ls                       ``drivers/fc4/fc.c``
+WANPIPE_MAGIC         0x414C4453       sdla_{dump,exec}         ``include/linux/wanpipe.h``
+CS_CARD_MAGIC         0x43525553       cs_card                  ``sound/oss/cs46xx.c``
+LABELCL_MAGIC         0x4857434c       labelcl_info_s           ``include/asm/ia64/sn/labelcl.h``
+ISDN_ASYNC_MAGIC      0x49344C01       modem_info               ``include/linux/isdn.h``
+CTC_ASYNC_MAGIC       0x49344C01       ctc_tty_info             ``drivers/s390/net/ctctty.c``
+ISDN_NET_MAGIC        0x49344C02       isdn_net_local_s         ``drivers/isdn/i4l/isdn_net_lib.h``
+SAVEKMSG_MAGIC2       0x4B4D5347       savekmsg                 ``arch/*/amiga/config.c``
+CS_STATE_MAGIC        0x4c4f4749       cs_state                 ``sound/oss/cs46xx.c``
+SLAB_C_MAGIC          0x4f17a36d       kmem_cache               ``mm/slab.c``
+COW_MAGIC             0x4f4f4f4d       cow_header_v1            ``arch/um/drivers/ubd_user.c``
+I810_CARD_MAGIC       0x5072696E       i810_card                ``sound/oss/i810_audio.c``
+TRIDENT_CARD_MAGIC    0x5072696E       trident_card             ``sound/oss/trident.c``
+ROUTER_MAGIC          0x524d4157       wan_device               [in ``wanrouter.h`` pre 3.9]
+SAVEKMSG_MAGIC1       0x53415645       savekmsg                 ``arch/*/amiga/config.c``
+GDA_MAGIC             0x58464552       gda                      ``arch/mips/include/asm/sn/gda.h``
+RED_MAGIC1            0x5a2cf071       (any)                    ``mm/slab.c``
+EEPROM_MAGIC_VALUE    0x5ab478d2       lanai_dev                ``drivers/atm/lanai.c``
+HDLCDRV_MAGIC         0x5ac6e778       hdlcdrv_state            ``include/linux/hdlcdrv.h``
+PCXX_MAGIC            0x5c6df104       channel                  ``drivers/char/pcxx.h``
+KV_MAGIC              0x5f4b565f       kernel_vars_s            ``arch/mips/include/asm/sn/klkernvars.h``
+I810_STATE_MAGIC      0x63657373       i810_state               ``sound/oss/i810_audio.c``
+TRIDENT_STATE_MAGIC   0x63657373       trient_state             ``sound/oss/trident.c``
+M3_CARD_MAGIC         0x646e6f50       m3_card                  ``sound/oss/maestro3.c``
+FW_HEADER_MAGIC       0x65726F66       fw_header                ``drivers/atm/fore200e.h``
+SLOT_MAGIC            0x67267321       slot                     ``drivers/hotplug/cpqphp.h``
+SLOT_MAGIC            0x67267322       slot                     ``drivers/hotplug/acpiphp.h``
+LO_MAGIC              0x68797548       nbd_device               ``include/linux/nbd.h``
+OPROFILE_MAGIC        0x6f70726f       super_block              ``drivers/oprofile/oprofilefs.h``
+M3_STATE_MAGIC        0x734d724d       m3_state                 ``sound/oss/maestro3.c``
+VMALLOC_MAGIC         0x87654320       snd_alloc_track          ``sound/core/memory.c``
+KMALLOC_MAGIC         0x87654321       snd_alloc_track          ``sound/core/memory.c``
+PWC_MAGIC             0x89DC10AB       pwc_device               ``drivers/usb/media/pwc.h``
+NBD_REPLY_MAGIC       0x96744668       nbd_reply                ``include/linux/nbd.h``
+ENI155_MAGIC          0xa54b872d       midway_eprom             ``drivers/atm/eni.h``
+CODA_MAGIC            0xC0DAC0DA       coda_file_info           ``fs/coda/coda_fs_i.h``
+DPMEM_MAGIC           0xc0ffee11       gdt_pci_sram             ``drivers/scsi/gdth.h``
+YAM_MAGIC             0xF10A7654       yam_port                 ``drivers/net/hamradio/yam.c``
+CCB_MAGIC             0xf2691ad2       ccb                      ``drivers/scsi/ncr53c8xx.c``
+QUEUE_MAGIC_FREE      0xf7e1c9a3       queue_entry              ``drivers/scsi/arm/queue.c``
+QUEUE_MAGIC_USED      0xf7e1cc33       queue_entry              ``drivers/scsi/arm/queue.c``
+HTB_CMAGIC            0xFEFAFEF1       htb_class                ``net/sched/sch_htb.c``
+NMI_MAGIC             0x48414d4d455201 nmi_s                    ``arch/mips/include/asm/sn/nmi.h``
+===================== ================ ======================== ==========================================
+
+Note that there are also driver-specific magic numbers in the *sound
+memory management*. See ``include/sound/sndmagic.h`` for a complete list.
+Many OSS sound drivers have their magic numbers constructed from the
+sound card's PCI ID - these are not listed here either.
+
+The HFS filesystem is another large user of magic numbers - you can find
+them in ``fs/hfs/hfs.h``.
diff --git a/Documentation/translations/it_IT/process/maintainer-pgp-guide.rst b/Documentation/translations/it_IT/process/maintainer-pgp-guide.rst
new file mode 100644
index 000000000000..24a133f0a51d
--- /dev/null
+++ b/Documentation/translations/it_IT/process/maintainer-pgp-guide.rst
@@ -0,0 +1,13 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/maintainer-pgp-guide.rst `
+
+.. _it_pgpguide:
+
+========================================
+Kernel maintainer PGP guide
+========================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/management-style.rst b/Documentation/translations/it_IT/process/management-style.rst
new file mode 100644
index 000000000000..07e68bfb8402
--- /dev/null
+++ b/Documentation/translations/it_IT/process/management-style.rst
@@ -0,0 +1,12 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/management-style.rst `
+
+.. _it_managementstyle:
+
+Linux kernel management style
+=================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/stable-api-nonsense.rst b/Documentation/translations/it_IT/process/stable-api-nonsense.rst
new file mode 100644
index 000000000000..d4fa4abf8dd3
--- /dev/null
+++ b/Documentation/translations/it_IT/process/stable-api-nonsense.rst
@@ -0,0 +1,13 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/stable-api-nonsense.rst `
+
+
+.. _it_stable_api_nonsense:
+
+The Linux kernel driver interface
+============================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/stable-kernel-rules.rst b/Documentation/translations/it_IT/process/stable-kernel-rules.rst
new file mode 100644
index 000000000000..6fa5ce9c3572
--- /dev/null
+++ b/Documentation/translations/it_IT/process/stable-kernel-rules.rst
@@ -0,0 +1,12 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/stable-kernel-rules.rst `
+
+.. _it_stable_kernel_rules:
+
+Everything you ever wanted to know about Linux -stable releases
+==============================================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/submit-checklist.rst b/Documentation/translations/it_IT/process/submit-checklist.rst
new file mode 100644
index 000000000000..b6b4dd94a660
--- /dev/null
+++ b/Documentation/translations/it_IT/process/submit-checklist.rst
@@ -0,0 +1,12 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/submit-checklist.rst `
+
+.. _it_submitchecklist:
+
+Linux kernel patch submission checklist
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/submitting-drivers.rst b/Documentation/translations/it_IT/process/submitting-drivers.rst
new file mode 100644
index 000000000000..16df950ef808
--- /dev/null
+++ b/Documentation/translations/it_IT/process/submitting-drivers.rst
@@ -0,0 +1,12 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/submitting-drivers.rst `
+
+.. _it_submittingdrivers:
+
+Submitting drivers for the Linux kernel
+=======================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/submitting-patches.rst b/Documentation/translations/it_IT/process/submitting-patches.rst
new file mode 100644
index 000000000000..d633775ed556
--- /dev/null
+++ b/Documentation/translations/it_IT/process/submitting-patches.rst
@@ -0,0 +1,13 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/submitting-patches.rst `
+
+
+.. _it_submittingpatches:
+
+Submitting patches: the essential guide to getting your code into the kernel
+==============================================================================
+
+.. warning::
+
+   TODO not yet translated
diff --git a/Documentation/translations/it_IT/process/volatile-considered-harmful.rst b/Documentation/translations/it_IT/process/volatile-considered-harmful.rst
new file mode 100644
index 000000000000..efc640cac596
--- /dev/null
+++ b/Documentation/translations/it_IT/process/volatile-considered-harmful.rst
@@ -0,0 +1,134 @@
+.. include:: ../disclaimer-ita.rst
+
+:Original: :ref:`Documentation/process/volatile-considered-harmful.rst `
+:Translator: Federico Vaga
+
+.. _it_volatile_considered_harmful:
+
+Why the "volatile" keyword should not be used
+------------------------------------------------------------
+
+C programmers have often taken volatile to mean that a variable could be
+changed outside of the current thread of execution; as a result, they are
+sometimes tempted to use it in the kernel for shared data structures. In
+other words, they have been taught to treat *volatile* as an easy-to-use
+atomic variable, which it is not. The use of *volatile* in the kernel is
+almost never correct; this document describes why.
+
+The key point to understand about *volatile* is that its purpose is to
+suppress optimization, which is almost never what one really wants. In
+the kernel, shared data structures must be protected against unwanted
+concurrent access, which is a very different task. The process of
+protecting against unwanted concurrency will also avoid most
+optimization-related problems, in a more efficient way.
+
+Like *volatile*, the kernel primitives that make access to data safe
+(spinlocks, mutexes, memory barriers, etc.) are designed to prevent
+unwanted optimization. If they are used properly, there will be no need
+to use *volatile* as well. If *volatile* still seems necessary, there is
+almost certainly a bug somewhere. In a properly written piece of kernel
+code, *volatile* can only serve to slow things down.
+
+Consider this typical block of kernel code::
+
+	spin_lock(&the_lock);
+	do_something_on(&shared_data);
+	do_something_else_with(&shared_data);
+	spin_unlock(&the_lock);
+
+If all the code follows the locking rules, the value of a shared datum
+cannot change unexpectedly while a lock is held. Any other block of code
+that wants to use that datum will wait on the lock. The spinlocks act as
+memory barriers - they were explicitly written to act that way - which
+means that accesses to the shared data will not be optimized across them.
+So the compiler might think it knows what is in the shared data, but the
+spin_lock() call, acting as a memory barrier, will force it to forget
+anything it knew about it.
+
+If the shared data were declared *volatile*, the locking would still be
+necessary. But the compiler would also be prevented from optimizing
+accesses to the data even _inside_ the critical section, where we know
+that nobody else can access it. While a lock is held, the shared data is
+not *volatile*. When dealing with shared data, proper locking makes
+*volatile* unnecessary - and potentially harmful.
+
+The use of *volatile* was originally intended for memory-mapped I/O
+registers. Within the kernel, register accesses should be protected by
+locks, but one also does not want the compiler "optimizing" register
+accesses within a critical section. But, within the kernel, I/O memory
+access is always done through accessor functions; accessing I/O memory
+directly through pointers is frowned upon and does not work on all
+architectures.
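+
+As a purely illustrative sketch of the difference (the physical address,
+the register offset, and the bit below are invented for this example)::
+
+	void __iomem *regs = ioremap(phys_addr, 0x100);
+	u32 status;
+
+	/* Discouraged: dereferencing a "volatile" pointer directly. */
+	status = *(volatile u32 *)(regs + 0x04);
+
+	/* Preferred: the readl()/writel() accessors from <linux/io.h>
+	 * already provide the guarantees that *volatile* was meant to. */
+	status = readl(regs + 0x04);
+	writel(status | 0x01, regs + 0x04);
+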
+These accessor functions are written to prevent unwanted optimization,
+so, once again, *volatile* is unnecessary.
+
+Another situation where one might be tempted to use *volatile* is when
+the processor is busy-waiting on the value of a variable. The right way
+to perform this kind of wait is::
+
+	while (my_variable != what_i_want)
+		cpu_relax();
+
+The cpu_relax() call can lower CPU power consumption or yield to a
+hyperthreaded twin processor; it also happens to serve as a compiler
+barrier, so, once again, *volatile* is unnecessary. Of course, for the
+record, busy-waiting is generally an antisocial act.
+
+There are still a few rare situations where *volatile* makes sense in
+the kernel:
+
+  - The above-mentioned accessor functions might use *volatile* on
+    architectures where direct I/O memory access does work. Essentially,
+    each accessor call becomes a little critical section on its own and
+    ensures that the access happens as expected by the programmer.
+
+  - Inline assembly code which changes memory, but which has no other
+    explicit side effects, risks being removed by GCC. Adding the
+    *volatile* keyword to such code prevents this removal.
+
+  - The jiffies variable is special in that it takes a different value
+    every time it is read, yet it can be read without any special
+    locking. So jiffies can be *volatile*, but adding other variables of
+    this kind is strongly discouraged. Jiffies is considered a "stupid
+    legacy" (Linus's words) in this regard; fixing it would not be worth
+    the trouble and would cause more problems.
+
+  - Pointers to data structures in coherent memory which might be
+    modified by I/O devices can, sometimes, legitimately be *volatile*.
+    A practical example is a network adapter that uses a pointer into a
+    ring buffer, which the adapter changes to indicate which descriptors
+    have been processed.
+
+For most code, none of the above justifications applies. As a result, the
+use of *volatile* is likely to be seen as a bug and will bring additional
+scrutiny to the code. Developers who are tempted to use *volatile* should
+stop and think about what they truly want to accomplish.
+
+Patches to remove *volatile* variables are generally welcome - as long
+as they come with a justification showing that the concurrency issues
+have been properly thought through.
+
+References
+==========
+
+[1] http://lwn.net/Articles/233481/
+
+[2] http://lwn.net/Articles/233482/
+
+Credits
+=======
+
+Original impetus and research by Randy Dunlap
+
+Written by Jonathan Corbet
+
+Improvements via comments from Satyam Sharma, Johannes Stezenbach, Jesper
+Juhl, Heikki Orsila, H. Peter Anvin, Philipp Hahn, and Stefan Richter.
diff --git a/Documentation/usb/authorization.txt b/Documentation/usb/authorization.txt
index c7e985f05d8f..f901ec77439c 100644
--- a/Documentation/usb/authorization.txt
+++ b/Documentation/usb/authorization.txt
@@ -119,5 +119,5 @@ If a deauthorized interface will be authorized so the driver probing must
 be triggered manually by writing INTERFACE to /sys/bus/usb/drivers_probe
 For drivers that need multiple interfaces all needed interfaces should be
-authroized first.
After that the drivers should be probed. +authorized first. After that the drivers should be probed. This avoids side effects. diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst index c4ded22197ca..2b3ab3a1ccf3 100644 --- a/Documentation/vm/index.rst +++ b/Documentation/vm/index.rst @@ -2,7 +2,9 @@ Linux Memory Management Documentation ===================================== -This is a collection of documents about Linux memory management (mm) subsystem. +This is a collection of documents about the Linux memory management (mm) +subsystem. If you are looking for advice on simply allocating memory, +see the :ref:`memory-allocation`. User guides for MM features =========================== diff --git a/Documentation/x86/boot.txt b/Documentation/x86/boot.txt index 5e9b826b5f62..f4c2a97bfdbd 100644 --- a/Documentation/x86/boot.txt +++ b/Documentation/x86/boot.txt @@ -58,7 +58,7 @@ Protocol 2.11: (Kernel 3.6) Added a field for offset of EFI handover protocol entry point. Protocol 2.12: (Kernel 3.8) Added the xloadflags field and extension fields - to struct boot_params for loading bzImage and ramdisk + to struct boot_params for loading bzImage and ramdisk above 4G in 64bit. **** MEMORY LAYOUT diff --git a/Documentation/x86/x86_64/boot-options.txt b/Documentation/x86/x86_64/boot-options.txt index ad6d2a80cf05..abc53886655e 100644 --- a/Documentation/x86/x86_64/boot-options.txt +++ b/Documentation/x86/x86_64/boot-options.txt @@ -209,7 +209,7 @@ IOMMU (input/output memory management unit) mapping with memory protection, etc. Kernel boot message: "PCI-DMA: Using Calgary IOMMU" - iommu=[][,noagp][,off][,force][,noforce][,leak[=] + iommu=[][,noagp][,off][,force][,noforce] [,memaper[=]][,merge][,fullflush][,nomerge] [,noaperture][,calgary] @@ -228,9 +228,6 @@ IOMMU (input/output memory management unit) allowed Overwrite iommu off workarounds for specific chipsets. fullflush Flush IOMMU on each allocation (default). nofullflush Don't use IOMMU fullflush. - leak Turn on simple iommu leak tracing (only when - CONFIG_IOMMU_LEAK is on). Default number of leak pages - is 20. 
memaper[=] Allocate an own aperture over RAM with size 32MB< $@ -endef define filechk_gentimeconst (echo $(CONFIG_HZ) | bc -q $< ) endef -$(obj)/$(timeconst-file): kernel/time/timeconst.bc FORCE +$(timeconst-file): kernel/time/timeconst.bc FORCE $(call filechk,gentimeconst) ##### @@ -50,12 +42,9 @@ offsets-file := include/generated/asm-offsets.h always += $(offsets-file) targets += arch/$(SRCARCH)/kernel/asm-offsets.s -# We use internal kbuild rules to avoid the "is up to date" message from make -arch/$(SRCARCH)/kernel/asm-offsets.s: arch/$(SRCARCH)/kernel/asm-offsets.c \ - $(obj)/$(timeconst-file) $(obj)/$(bounds-file) FORCE - $(call if_changed_dep,cc_s_c) +arch/$(SRCARCH)/kernel/asm-offsets.s: $(timeconst-file) $(bounds-file) -$(obj)/$(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE +$(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE $(call filechk,offsets,__ASM_OFFSETS_H__) ##### @@ -77,7 +66,7 @@ missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE extra-$(CONFIG_GDB_SCRIPTS) += build_constants_py -build_constants_py: $(obj)/$(timeconst-file) $(obj)/$(bounds-file) +build_constants_py: $(timeconst-file) $(bounds-file) @$(MAKE) $(build)=scripts/gdb/linux $@ # Keep these three files during make clean diff --git a/MAINTAINERS b/MAINTAINERS index 705555d9b3af..7d37f8a4743c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -140,7 +140,7 @@ Maintainers List (try to look for most precise areas first) M: Steffen Klassert L: netdev@vger.kernel.org S: Odd Fixes -F: Documentation/networking/vortex.txt +F: Documentation/networking/device_drivers/3com/vortex.txt F: drivers/net/ethernet/3com/3c59x.c 3CR990 NETWORK DRIVER @@ -740,7 +740,7 @@ R: Saeed Bishara R: Zorik Machulsky L: netdev@vger.kernel.org S: Supported -F: Documentation/networking/ena.txt +F: Documentation/networking/device_drivers/amazon/ena.txt F: drivers/net/ethernet/amazon/ AMD CRYPTOGRAPHIC COPROCESSOR (CCP) DRIVER @@ -846,6 +846,14 @@ S: Supported F: drivers/iio/dac/ad5758.c F: Documentation/devicetree/bindings/iio/dac/ad5758.txt +ANALOG DEVICES INC AD7124 DRIVER +M: Stefan Popa +L: linux-iio@vger.kernel.org +W: http://ez.analog.com/community/linux-device-drivers +S: Supported +F: drivers/iio/adc/ad7124.c +F: Documentation/devicetree/bindings/iio/adc/adi,ad7124.txt + ANALOG DEVICES INC AD9389B DRIVER M: Hans Verkuil L: linux-media@vger.kernel.org @@ -950,6 +958,7 @@ M: Arve Hjønnevåg M: Todd Kjos M: Martijn Coenen M: Joel Fernandes +M: Christian Brauner T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git L: devel@driverdev.osuosl.org S: Supported @@ -1434,6 +1443,7 @@ F: arch/arm/mach-ep93xx/micro9.c ARM/CORESIGHT FRAMEWORK AND DRIVERS M: Mathieu Poirier +R: Suzuki K Poulose L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: drivers/hwtracing/coresight/* @@ -2353,7 +2363,7 @@ F: drivers/pinctrl/zte/ F: drivers/soc/zte/ F: drivers/thermal/zx2967_thermal.c F: drivers/watchdog/zx2967_wdt.c -F: Documentation/devicetree/bindings/arm/zte.txt +F: Documentation/devicetree/bindings/arm/zte.yaml F: Documentation/devicetree/bindings/clock/zx2967*.txt F: Documentation/devicetree/bindings/dma/zxdma.txt F: Documentation/devicetree/bindings/gpio/zx296702-gpio.txt @@ -3484,6 +3494,7 @@ F: include/linux/spi/cc2520.h F: Documentation/devicetree/bindings/net/ieee802154/cc2520.txt CCREE ARM TRUSTZONE CRYPTOCELL REE DRIVER +M: Yael Chemla M: Gilad Ben-Yossef L: linux-crypto@vger.kernel.org S: Supported @@ -3687,6 +3698,8 @@ F: drivers/net/ethernet/cisco/enic/ CISCO VIC
LOW LATENCY NIC DRIVER M: Christian Benvenuti +M: Nelson Escobar +M: Parvi Kaustubhi S: Supported F: drivers/infiniband/hw/usnic/ @@ -4224,7 +4237,7 @@ F: net/ax25/sysctl_net_ax25.c DAVICOM FAST ETHERNET (DMFE) NETWORK DRIVER L: netdev@vger.kernel.org S: Orphan -F: Documentation/networking/dmfe.txt +F: Documentation/networking/device_drivers/dec/dmfe.txt F: drivers/net/ethernet/dec/tulip/dmfe.c DC390/AM53C974 SCSI driver @@ -5658,6 +5671,7 @@ F: include/linux/of_net.h F: include/linux/phy.h F: include/linux/phy_fixed.h F: include/linux/platform_data/mdio-bcm-unimac.h +F: include/linux/platform_data/mdio-gpio.h F: include/trace/events/mdio.h F: include/uapi/linux/mdio.h F: include/uapi/linux/mii.h @@ -6306,6 +6320,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy.git S: Supported F: drivers/phy/ F: include/linux/phy/ +F: Documentation/devicetree/bindings/phy/ GENERIC PINCTRL I2C DEMULTIPLEXER DRIVER M: Wolfram Sang @@ -6408,7 +6423,6 @@ F: drivers/media/rc/gpio-ir-tx.c GPIO MOCKUP DRIVER M: Bamvor Jian Zhang -R: Bartosz Golaszewski L: linux-gpio@vger.kernel.org S: Maintained F: drivers/gpio/gpio-mockup.c @@ -6902,6 +6916,14 @@ L: linux-input@vger.kernel.org S: Maintained F: drivers/input/touchscreen/htcpen.c +HTS221 TEMPERATURE-HUMIDITY IIO DRIVER +M: Lorenzo Bianconi +L: linux-iio@vger.kernel.org +W: http://www.st.com/ +S: Maintained +F: drivers/iio/humidity/hts221* +F: Documentation/devicetree/bindings/iio/humidity/hts221.txt + HUAWEI ETHERNET DRIVER M: Aviad Krawczyk L: netdev@vger.kernel.org @@ -6949,7 +6971,7 @@ M: Sasha Levin T: git git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux.git L: devel@linuxdriverproject.org S: Supported -F: Documentation/networking/netvsc.txt +F: Documentation/networking/device_drivers/microsoft/netvsc.txt F: arch/x86/include/asm/mshyperv.h F: arch/x86/include/asm/trace/hyperv.h F: arch/x86/include/asm/hyperv-tlfs.h @@ -7146,7 +7168,9 @@ F: crypto/842.c F: lib/842/ IBM Power in-Nest Crypto Acceleration -M: Paulo Flabiano Smorigo +M: Breno Leitão +M: Nayna Jain +M: Paulo Flabiano Smorigo L: linux-crypto@vger.kernel.org S: Supported F: drivers/crypto/nx/Makefile @@ -7210,7 +7234,9 @@ S: Supported F: drivers/scsi/ibmvscsi_tgt/ IBM Power VMX Cryptographic instructions -M: Paulo Flabiano Smorigo +M: Breno Leitão +M: Nayna Jain +M: Paulo Flabiano Smorigo L: linux-crypto@vger.kernel.org S: Supported F: drivers/crypto/vmx/Makefile @@ -7547,18 +7573,18 @@ Q: http://patchwork.ozlabs.org/project/intel-wired-lan/list/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git S: Supported -F: Documentation/networking/e100.rst -F: Documentation/networking/e1000.rst -F: Documentation/networking/e1000e.rst -F: Documentation/networking/fm10k.rst -F: Documentation/networking/igb.rst -F: Documentation/networking/igbvf.rst -F: Documentation/networking/ixgb.rst -F: Documentation/networking/ixgbe.rst -F: Documentation/networking/ixgbevf.rst -F: Documentation/networking/i40e.rst -F: Documentation/networking/iavf.rst -F: Documentation/networking/ice.rst +F: Documentation/networking/device_drivers/intel/e100.rst +F: Documentation/networking/device_drivers/intel/e1000.rst +F: Documentation/networking/device_drivers/intel/e1000e.rst +F: Documentation/networking/device_drivers/intel/fm10k.rst +F: Documentation/networking/device_drivers/intel/igb.rst +F: Documentation/networking/device_drivers/intel/igbvf.rst +F: 
Documentation/networking/device_drivers/intel/ixgb.rst +F: Documentation/networking/device_drivers/intel/ixgbe.rst +F: Documentation/networking/device_drivers/intel/ixgbevf.rst +F: Documentation/networking/device_drivers/intel/i40e.rst +F: Documentation/networking/device_drivers/intel/iavf.rst +F: Documentation/networking/device_drivers/intel/ice.rst F: drivers/net/ethernet/intel/ F: drivers/net/ethernet/intel/*/ F: include/linux/avf/virtchnl.h @@ -7740,8 +7766,8 @@ INTEL PRO/WIRELESS 2100, 2200BG, 2915ABG NETWORK CONNECTION SUPPORT M: Stanislav Yakovlev L: linux-wireless@vger.kernel.org S: Maintained -F: Documentation/networking/README.ipw2100 -F: Documentation/networking/README.ipw2200 +F: Documentation/networking/device_drivers/intel/ipw2100.txt +F: Documentation/networking/device_drivers/intel/ipw2200.txt F: drivers/net/wireless/intel/ipw2x00/ INTEL PSTATE DRIVER @@ -8001,13 +8027,6 @@ F: include/linux/isdn/ F: include/uapi/linux/isdn.h F: include/uapi/linux/isdn/ -ISDN SUBSYSTEM (Eicon active card driver) -M: Armin Schindler -L: isdn4linux@listserv.isdn4linux.de (subscribers-only) -W: http://www.melware.de -S: Maintained -F: drivers/isdn/hardware/eicon/ - IT87 HARDWARE MONITORING DRIVER M: Jean Delvare L: linux-hwmon@vger.kernel.org @@ -9932,6 +9951,12 @@ M: Nicolas Ferre S: Supported F: drivers/power/reset/at91-sama5d2_shdwc.c +MICROCHIP SAMA5D2-COMPATIBLE PIOBU GPIO +M: Andrei Stefanescu +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) +L: linux-gpio@vger.kernel.org +F: drivers/gpio/gpio-sama5d2-piobu.c + MICROCHIP SPI DRIVER M: Nicolas Ferre S: Supported @@ -9994,6 +10019,7 @@ F: Documentation/scsi/smartpqi.txt MICROSEMI ETHERNET SWITCH DRIVER M: Alexandre Belloni +M: Microchip Linux Driver Support L: netdev@vger.kernel.org S: Supported F: drivers/net/ethernet/mscc/ @@ -10405,8 +10431,8 @@ NETERION 10GbE DRIVERS (s2io/vxge) M: Jon Mason L: netdev@vger.kernel.org S: Supported -F: Documentation/networking/s2io.txt -F: Documentation/networking/vxge.txt +F: Documentation/networking/device_drivers/neterion/s2io.txt +F: Documentation/networking/device_drivers/neterion/vxge.txt F: drivers/net/ethernet/neterion/ NETFILTER @@ -10848,6 +10874,14 @@ L: linux-nfc@lists.01.org (moderated for non-subscribers) S: Supported F: drivers/nfc/nxp-nci +OBJAGG +M: Jiri Pirko +L: netdev@vger.kernel.org +S: Supported +F: lib/objagg.c +F: lib/test_objagg.c +F: include/linux/objagg.h + OBJTOOL M: Josh Poimboeuf M: Peter Zijlstra @@ -12039,6 +12073,13 @@ M: "Rafael J. 
Wysocki" S: Maintained F: drivers/pnp/ +PNI RM3100 IIO DRIVER +M: Song Qiang +L: linux-iio@vger.kernel.org +S: Maintained +F: drivers/iio/magnetometer/rm3100* +F: Documentation/devicetree/bindings/iio/magnetometer/pni,rm3100.txt + POSIX CLOCKS and TIMERS M: Thomas Gleixner L: linux-kernel@vger.kernel.org @@ -12411,7 +12452,7 @@ QLOGIC QLA3XXX NETWORK DRIVER M: Dept-GELinuxNICDev@cavium.com L: netdev@vger.kernel.org S: Supported -F: Documentation/networking/LICENSE.qla3xxx +F: Documentation/networking/device_drivers/qlogic/LICENSE.qla3xxx F: drivers/net/ethernet/qlogic/qla3xxx.* QLOGIC QLA4XXX iSCSI DRIVER @@ -12463,7 +12504,7 @@ L: linux-kernel@vger.kernel.org S: Maintained F: drivers/bus/fsl-mc/ F: Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt -F: Documentation/networking/dpaa2/overview.rst +F: Documentation/networking/device_drivers/freescale/dpaa2/overview.rst QT1010 MEDIA DRIVER M: Antti Palosaari @@ -12824,7 +12865,8 @@ RENESAS R-CAR GYROADC DRIVER M: Marek Vasut L: linux-iio@vger.kernel.org S: Supported -F: drivers/iio/adc/rcar_gyro_adc.c +F: Documentation/devicetree/bindings/iio/adc/renesas,gyroadc.txt +F: drivers/iio/adc/rcar-gyroadc.c RENESAS R-CAR I2C DRIVERS M: Wolfram Sang @@ -14225,7 +14267,7 @@ SPIDERNET NETWORK DRIVER for CELL M: Ishizaki Kou L: netdev@vger.kernel.org S: Supported -F: Documentation/networking/spider_net.txt +F: Documentation/networking/device_drivers/toshiba/spider_net.txt F: drivers/net/ethernet/toshiba/spider_net* SPMI SUBSYSTEM @@ -14259,6 +14301,14 @@ M: Jan-Benedict Glaw S: Maintained F: arch/alpha/kernel/srm_env.c +ST LSM6DSx IMU IIO DRIVER +M: Lorenzo Bianconi +L: linux-iio@vger.kernel.org +W: http://www.st.com/ +S: Maintained +F: drivers/iio/imu/st_lsm6dsx/ +F: Documentation/devicetree/bindings/iio/imu/st_lsm6dsx.txt + ST STM32 I2C/SMBUS DRIVER M: Pierre-Yves MORDRET L: linux-i2c@vger.kernel.org @@ -14344,8 +14394,8 @@ S: Odd Fixes F: drivers/staging/vt665?/ STAGING - WILC1000 WIFI DRIVER -M: Aditya Shankar -M: Ganesh Krishna +M: Adham Abozaeid +M: Ajay Singh L: linux-wireless@vger.kernel.org S: Supported F: drivers/staging/wilc1000/ @@ -15221,7 +15271,7 @@ M: Samuel Chessman L: tlan-devel@lists.sourceforge.net (subscribers-only) W: http://sourceforge.net/projects/tlan/ S: Maintained -F: Documentation/networking/tlan.txt +F: Documentation/networking/device_drivers/ti/tlan.txt F: drivers/net/ethernet/ti/tlan.* TM6000 VIDEO4LINUX DRIVER @@ -16194,7 +16244,7 @@ F: drivers/vme/ F: include/linux/vme* VMWARE BALLOON DRIVER -M: Xavier Deguillard +M: Julien Freche M: Nadav Amit M: "VMware, Inc." L: linux-kernel@vger.kernel.org diff --git a/Makefile b/Makefile index 7a2a9a175756..60a473247657 100644 --- a/Makefile +++ b/Makefile @@ -186,6 +186,10 @@ endif # Old syntax make ... 
SUBDIRS=$PWD is still supported # Setting the environment variable KBUILD_EXTMOD take precedence ifdef SUBDIRS + $(warning ================= WARNING ================) + $(warning 'SUBDIRS' will be removed after Linux 5.3) + $(warning Please use 'M=' or 'KBUILD_EXTMOD' instead) + $(warning ==========================================) KBUILD_EXTMOD ?= $(SUBDIRS) endif @@ -422,10 +426,10 @@ LINUXINCLUDE := \ -I$(objtree)/include \ $(USERINCLUDE) -KBUILD_AFLAGS := -D__ASSEMBLY__ -KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ - -fno-strict-aliasing -fno-common -fshort-wchar \ - -Werror-implicit-function-declaration \ +KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE +KBUILD_CFLAGS := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \ + -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \ + -Werror-implicit-function-declaration -Werror=implicit-int \ -Wno-format-security \ -std=gnu89 KBUILD_CPPFLAGS := -D__KERNEL__ @@ -487,18 +491,18 @@ endif ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),) ifneq ($(CROSS_COMPILE),) -CLANG_TARGET := --target=$(notdir $(CROSS_COMPILE:%-=%)) +CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%)) GCC_TOOLCHAIN_DIR := $(dir $(shell which $(LD))) -CLANG_PREFIX := --prefix=$(GCC_TOOLCHAIN_DIR) +CLANG_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR) GCC_TOOLCHAIN := $(realpath $(GCC_TOOLCHAIN_DIR)/..) endif ifneq ($(GCC_TOOLCHAIN),) -CLANG_GCC_TC := --gcc-toolchain=$(GCC_TOOLCHAIN) +CLANG_FLAGS += --gcc-toolchain=$(GCC_TOOLCHAIN) endif -KBUILD_CFLAGS += $(CLANG_TARGET) $(CLANG_GCC_TC) $(CLANG_PREFIX) -KBUILD_AFLAGS += $(CLANG_TARGET) $(CLANG_GCC_TC) $(CLANG_PREFIX) -KBUILD_CFLAGS += $(call cc-option, -no-integrated-as) -KBUILD_AFLAGS += $(call cc-option, -no-integrated-as) +CLANG_FLAGS += -no-integrated-as +KBUILD_CFLAGS += $(CLANG_FLAGS) +KBUILD_AFLAGS += $(CLANG_FLAGS) +export CLANG_FLAGS endif RETPOLINE_CFLAGS_GCC := -mindirect-branch=thunk-extern -mindirect-branch-register @@ -510,9 +514,6 @@ RETPOLINE_VDSO_CFLAGS := $(call cc-option,$(RETPOLINE_VDSO_CFLAGS_GCC),$(call cc export RETPOLINE_CFLAGS export RETPOLINE_VDSO_CFLAGS -KBUILD_CFLAGS += $(call cc-option,-fno-PIE) -KBUILD_AFLAGS += $(call cc-option,-fno-PIE) - # check for 'asm goto' ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC) $(KBUILD_CFLAGS)), y) CC_HAVE_ASM_GOTO := 1 @@ -828,12 +829,6 @@ KBUILD_CFLAGS += $(call cc-option,-fno-stack-check,) # conserve stack if available KBUILD_CFLAGS += $(call cc-option,-fconserve-stack) -# disallow errors like 'EXPORT_GPL(foo);' with missing header -KBUILD_CFLAGS += $(call cc-option,-Werror=implicit-int) - -# require functions to have arguments in prototypes, not empty 'int foo()' -KBUILD_CFLAGS += $(call cc-option,-Werror=strict-prototypes) - # Prohibit date/time macros, which would make the build non-deterministic KBUILD_CFLAGS += $(call cc-option,-Werror=date-time) @@ -1026,14 +1021,13 @@ cmd_link-vmlinux = \ $(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) $@, true) vmlinux: scripts/link-vmlinux.sh autoksyms_recursive $(vmlinux-deps) FORCE -ifdef CONFIG_HEADERS_CHECK - $(Q)$(MAKE) -f $(srctree)/Makefile headers_check -endif ifdef CONFIG_GDB_SCRIPTS $(Q)ln -fsn $(abspath $(srctree)/scripts/gdb/vmlinux-gdb.py) endif +$(call if_changed,link-vmlinux) +targets := vmlinux + # Build samples along the rest of the kernel. This needs headers_install. 
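The SUBDIRS deprecation warning added above steers out-of-tree builds toward M= (or KBUILD_EXTMOD). For context, a minimal sketch of an external module built that way; the module name, file names and messages are illustrative only, not part of this patch:

	// SPDX-License-Identifier: GPL-2.0
	/*
	 * hello.c - minimal out-of-tree module sketch. With a one-line
	 * Kbuild file ("obj-m += hello.o") it builds with:
	 *     make -C /lib/modules/$(uname -r)/build M=$PWD modules
	 * which replaces the deprecated "make ... SUBDIRS=$PWD".
	 */
	#include <linux/init.h>
	#include <linux/module.h>

	static int __init hello_init(void)
	{
		pr_info("hello: built via M=, not SUBDIRS=\n");
		return 0;
	}

	static void __exit hello_exit(void)
	{
		pr_info("hello: unloaded\n");
	}

	module_init(hello_init);
	module_exit(hello_exit);

	MODULE_LICENSE("GPL");
	MODULE_DESCRIPTION("Example module for the M= build flow");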
ifdef CONFIG_SAMPLES vmlinux-dirs += samples @@ -1051,7 +1045,7 @@ $(sort $(vmlinux-deps)): $(vmlinux-dirs) ; # Error messages still appears in the original language PHONY += $(vmlinux-dirs) -$(vmlinux-dirs): prepare scripts +$(vmlinux-dirs): prepare $(Q)$(MAKE) $(build)=$@ need-builtin=1 define filechk_kernel.release @@ -1066,7 +1060,7 @@ include/config/kernel.release: $(srctree)/Makefile FORCE # Carefully list dependencies so we do not try to build scripts twice # in parallel PHONY += scripts -scripts: scripts_basic scripts_dtc asm-generic gcc-plugins $(autoksyms_h) +scripts: scripts_basic scripts_dtc $(Q)$(MAKE) $(build)=$(@) # Things we need to do before we recursively start building the kernel @@ -1099,22 +1093,23 @@ prepare2: prepare3 outputmakefile asm-generic prepare1: prepare2 $(version_h) $(autoksyms_h) include/generated/utsrelease.h $(cmd_crmodverdir) -archprepare: archheaders archscripts prepare1 scripts_basic +archprepare: archheaders archscripts prepare1 scripts -prepare0: archprepare gcc-plugins +prepare0: archprepare + $(Q)$(MAKE) $(build)=scripts/mod $(Q)$(MAKE) $(build)=. # All the preparing.. prepare: prepare0 prepare-objtool # Support for using generic headers in asm-generic +asm-generic := -f $(srctree)/scripts/Makefile.asm-generic obj + PHONY += asm-generic uapi-asm-generic asm-generic: uapi-asm-generic - $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.asm-generic \ - src=asm obj=arch/$(SRCARCH)/include/generated/asm + $(Q)$(MAKE) $(asm-generic)=arch/$(SRCARCH)/include/generated/asm uapi-asm-generic: - $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.asm-generic \ - src=uapi/asm obj=arch/$(SRCARCH)/include/generated/uapi/asm + $(Q)$(MAKE) $(asm-generic)=arch/$(SRCARCH)/include/generated/uapi/asm PHONY += prepare-objtool prepare-objtool: $(objtool_target) @@ -1199,6 +1194,10 @@ headers_check: headers_install $(Q)$(MAKE) $(hdr-inst)=include/uapi dst=include HDRCHECK=1 $(Q)$(MAKE) $(hdr-inst)=arch/$(SRCARCH)/include/uapi $(hdr-dst) HDRCHECK=1 +ifdef CONFIG_HEADERS_CHECK +all: headers_check +endif + # --------------------------------------------------------------------------- # Kernel selftest @@ -1230,10 +1229,13 @@ ifneq ($(dtstree),) %.dtb: prepare3 scripts_dtc $(Q)$(MAKE) $(build)=$(dtstree) $(dtstree)/$@ -PHONY += dtbs dtbs_install -dtbs: prepare3 scripts_dtc +PHONY += dtbs dtbs_install dt_binding_check +dtbs dtbs_check: prepare3 scripts_dtc $(Q)$(MAKE) $(build)=$(dtstree) +dtbs_check: export CHECK_DTBS=1 +dtbs_check: dt_binding_check + dtbs_install: $(Q)$(MAKE) $(dtbinst)=$(dtstree) @@ -1247,6 +1249,9 @@ PHONY += scripts_dtc scripts_dtc: scripts_basic $(Q)$(MAKE) $(build)=scripts/dtc +dt_binding_check: scripts_dtc + $(Q)$(MAKE) $(build)=Documentation/devicetree/bindings + # --------------------------------------------------------------------------- # Modules @@ -1277,7 +1282,7 @@ modules.builtin: $(vmlinux-dirs:%=%/modules.builtin) # Target to prepare building external modules PHONY += modules_prepare -modules_prepare: prepare scripts +modules_prepare: prepare # Target to install modules PHONY += modules_install @@ -1545,9 +1550,6 @@ else # KBUILD_EXTMOD # We are always building modules KBUILD_MODULES := 1 -PHONY += crmodverdir -crmodverdir: - $(cmd_crmodverdir) PHONY += $(objtree)/Module.symvers $(objtree)/Module.symvers: @@ -1559,7 +1561,7 @@ $(objtree)/Module.symvers: module-dirs := $(addprefix _module_,$(KBUILD_EXTMOD)) PHONY += $(module-dirs) modules -$(module-dirs): crmodverdir $(objtree)/Module.symvers +$(module-dirs): prepare $(objtree)/Module.symvers $(Q)$(MAKE) 
$(build)=$(patsubst _module_%,%,$@) modules: $(module-dirs) @@ -1598,10 +1600,9 @@ help: @echo ' clean - remove generated files in module directory only' @echo '' -# Dummies... -PHONY += prepare scripts -prepare: ; -scripts: ; +PHONY += prepare +prepare: + $(cmd_crmodverdir) endif # KBUILD_EXTMOD clean: $(clean-dirs) @@ -1609,7 +1610,8 @@ clean: $(clean-dirs) $(call cmd,rmfiles) @find $(if $(KBUILD_EXTMOD), $(KBUILD_EXTMOD), .) $(RCS_FIND_IGNORE) \ \( -name '*.[aios]' -o -name '*.ko' -o -name '.*.cmd' \ - -o -name '*.ko.*' -o -name '*.dtb' -o -name '*.dtb.S' \ + -o -name '*.ko.*' \ + -o -name '*.dtb' -o -name '*.dtb.S' -o -name '*.dt.yaml' \ -o -name '*.dwo' -o -name '*.lst' \ -o -name '*.su' \ -o -name '.*.d' -o -name '.*.tmp' -o -name '*.mod.c' \ @@ -1705,36 +1707,33 @@ else target-dir = $(if $(KBUILD_EXTMOD),$(dir $<),$(dir $@)) endif -%.s: %.c prepare scripts FORCE +%.s: %.c prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -%.i: %.c prepare scripts FORCE +%.i: %.c prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -%.o: %.c prepare scripts FORCE +%.o: %.c prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -%.lst: %.c prepare scripts FORCE +%.lst: %.c prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -%.s: %.S prepare scripts FORCE +%.s: %.S prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -%.o: %.S prepare scripts FORCE +%.o: %.S prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -%.symtypes: %.c prepare scripts FORCE +%.symtypes: %.c prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -%.ll: %.c prepare scripts FORCE +%.ll: %.c prepare FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) # Modules -/: prepare scripts FORCE - $(cmd_crmodverdir) +/: prepare FORCE $(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \ $(build)=$(build-dir) # Make sure the latest headers are built for Documentation Documentation/ samples/: headers_install -%/: prepare scripts FORCE - $(cmd_crmodverdir) +%/: prepare FORCE $(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \ $(build)=$(build-dir) -%.ko: prepare scripts FORCE - $(cmd_crmodverdir) +%.ko: prepare FORCE $(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \ $(build)=$(build-dir) $(@:.ko=.o) $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost @@ -1758,13 +1757,12 @@ quiet_cmd_depmod = DEPMOD $(KERNELRELEASE) cmd_crmodverdir = $(Q)mkdir -p $(MODVERDIR) \ $(if $(KBUILD_MODULES),; rm -f $(MODVERDIR)/*) -# read all saved command lines -cmd_files := $(wildcard .*.cmd) +# read saved command lines for existing targets +existing-targets := $(wildcard $(sort $(targets))) -ifneq ($(cmd_files),) - $(cmd_files): ; # Do not try to update included dependency files - include $(cmd_files) -endif +cmd_files := $(foreach f,$(existing-targets),$(dir $(f)).$(notdir $(f)).cmd) +$(cmd_files): ; # Do not try to update included dependency files +-include $(cmd_files) endif # ifeq ($(config-targets),1) endif # ifeq ($(mixed-targets),1) diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig index 5b4f88363453..584a6e114853 100644 --- a/arch/alpha/Kconfig +++ b/arch/alpha/Kconfig @@ -5,7 +5,11 @@ config ALPHA select ARCH_MIGHT_HAVE_PC_PARPORT select ARCH_MIGHT_HAVE_PC_SERIO select ARCH_NO_PREEMPT + select ARCH_NO_SG_CHAIN select ARCH_USE_CMPXCHG_LOCKREF + select FORCE_PCI if !ALPHA_JENSEN + select PCI_DOMAINS if PCI + select PCI_SYSCALL if PCI select HAVE_AOUT select HAVE_IDE select HAVE_OPROFILE 
@@ -15,6 +19,7 @@ config ALPHA select NEED_SG_DMA_LENGTH select VIRT_TO_BUS select GENERIC_IRQ_PROBE + select GENERIC_PCI_IOMAP if PCI select AUTO_IRQ_AFFINITY if SMP select GENERIC_IRQ_SHOW select ARCH_WANT_IPC_PARSE_VERSION @@ -125,11 +130,13 @@ choice config ALPHA_GENERIC bool "Generic" depends on TTY + select HAVE_EISA help A generic kernel will run on all supported Alpha hardware. config ALPHA_ALCOR bool "Alcor/Alpha-XLT" + select HAVE_EISA help For systems using the Digital ALCOR chipset: 5 chips (4, 64-bit data slices (Data Switch, DSW) - 208-pin PQFP and 1 control (Control, I/O @@ -202,7 +209,7 @@ config ALPHA_EIGER config ALPHA_JENSEN bool "Jensen" depends on BROKEN - select DMA_DIRECT_OPS + select HAVE_EISA help DEC PC 150 AXP (aka Jensen): This is a very old Digital system - one of the first-generation Alpha systems. A number of these systems @@ -219,6 +226,7 @@ config ALPHA_LX164 config ALPHA_LYNX bool "Lynx" + select HAVE_EISA help AlphaServer 2100A-based systems. @@ -229,6 +237,7 @@ config ALPHA_MARVEL config ALPHA_MIATA bool "Miata" + select HAVE_EISA help The Digital PersonalWorkStation (PWS 433a, 433au, 500a, 500au, 600a, or 600au). @@ -248,6 +257,7 @@ config ALPHA_NONAME_CH config ALPHA_NORITAKE bool "Noritake" + select HAVE_EISA help AlphaServer 1000A, AlphaServer 600A, and AlphaServer 800-based systems. @@ -260,6 +270,7 @@ config ALPHA_P2K config ALPHA_RAWHIDE bool "Rawhide" + select HAVE_EISA help AlphaServer 1200, AlphaServer 4000 and AlphaServer 4100 machines. See HOWTO at @@ -279,6 +290,7 @@ config ALPHA_SX164 config ALPHA_SABLE bool "Sable" + select HAVE_EISA help Digital AlphaServer 2000 and 2100-based systems. @@ -319,24 +331,6 @@ config ISA_DMA_API bool default y -config PCI - bool - depends on !ALPHA_JENSEN - select GENERIC_PCI_IOMAP - default y - help - Find out whether you have a PCI motherboard. PCI is the name of a - bus system, i.e. the way the CPU talks to the other stuff inside - your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or - VESA. If you have PCI, say Y, otherwise N. - -config PCI_DOMAINS - bool - default y - -config PCI_SYSCALL - def_bool PCI - config ALPHA_NONAME bool depends on ALPHA_BOOK1 || ALPHA_NONAME_CH @@ -526,11 +520,6 @@ config ALPHA_SRM If unsure, say N. -config EISA - bool - depends on ALPHA_GENERIC || ALPHA_JENSEN || ALPHA_ALCOR || ALPHA_MIKASA || ALPHA_SABLE || ALPHA_LYNX || ALPHA_NORITAKE || ALPHA_RAWHIDE - default y - config ARCH_MAY_HAVE_PC_FDC def_bool y @@ -681,11 +670,6 @@ config HZ default 1200 if HZ_1200 default 1024 -source "drivers/pci/Kconfig" -source "drivers/eisa/Kconfig" - -source "drivers/pcmcia/Kconfig" - config SRM_ENV tristate "SRM environment through procfs" depends on PROC_FS diff --git a/arch/alpha/include/asm/dma-mapping.h b/arch/alpha/include/asm/dma-mapping.h index 8beeafd4f68e..0ee6a5c99b16 100644 --- a/arch/alpha/include/asm/dma-mapping.h +++ b/arch/alpha/include/asm/dma-mapping.h @@ -7,7 +7,7 @@ extern const struct dma_map_ops alpha_pci_ops; static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) { #ifdef CONFIG_ALPHA_JENSEN - return &dma_direct_ops; + return NULL; #else return &alpha_pci_ops; #endif diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c index 46e08e0d9181..aa0f50d0f823 100644 --- a/arch/alpha/kernel/pci_iommu.c +++ b/arch/alpha/kernel/pci_iommu.c @@ -291,7 +291,7 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size, use direct_map above, it now must be considered an error. */ if (! 
alpha_mv.mv_pci_tbi) { printk_once(KERN_WARNING "pci_map_single: no HW sg\n"); - return 0; + return DMA_MAPPING_ERROR; } arena = hose->sg_pci; @@ -307,7 +307,7 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size, if (dma_ofs < 0) { printk(KERN_WARNING "pci_map_single failed: " "could not allocate dma page tables\n"); - return 0; + return DMA_MAPPING_ERROR; } paddr &= PAGE_MASK; @@ -443,7 +443,7 @@ static void *alpha_pci_alloc_coherent(struct device *dev, size_t size, gfp &= ~GFP_DMA; try_again: - cpu_addr = (void *)__get_free_pages(gfp, order); + cpu_addr = (void *)__get_free_pages(gfp | __GFP_ZERO, order); if (! cpu_addr) { printk(KERN_INFO "pci_alloc_consistent: " "get_free_pages failed from %pf\n", @@ -455,7 +455,7 @@ try_again: memset(cpu_addr, 0, size); *dma_addrp = pci_map_single_1(pdev, cpu_addr, size, 0); - if (*dma_addrp == 0) { + if (*dma_addrp == DMA_MAPPING_ERROR) { free_pages((unsigned long)cpu_addr, order); if (alpha_mv.mv_pci_tbi || (gfp & GFP_DMA)) return NULL; @@ -671,7 +671,7 @@ static int alpha_pci_map_sg(struct device *dev, struct scatterlist *sg, sg->dma_address = pci_map_single_1(pdev, SG_ENT_VIRT_ADDRESS(sg), sg->length, dac_allowed); - return sg->dma_address != 0; + return sg->dma_address != DMA_MAPPING_ERROR; } start = sg; @@ -935,11 +935,6 @@ iommu_unbind(struct pci_iommu_arena *arena, long pg_start, long pg_count) return 0; } -static int alpha_pci_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == 0; -} - const struct dma_map_ops alpha_pci_ops = { .alloc = alpha_pci_alloc_coherent, .free = alpha_pci_free_coherent, @@ -947,7 +942,6 @@ const struct dma_map_ops alpha_pci_ops = { .unmap_page = alpha_pci_unmap_page, .map_sg = alpha_pci_map_sg, .unmap_sg = alpha_pci_unmap_sg, - .mapping_error = alpha_pci_mapping_error, .dma_supported = alpha_pci_supported, }; EXPORT_SYMBOL(alpha_pci_ops); diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index dadb494d83fd..376366a7db81 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -13,12 +13,10 @@ config ARC select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_SYNC_DMA_FOR_CPU select ARCH_HAS_SYNC_DMA_FOR_DEVICE - select ARCH_HAS_SG_CHAIN select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC select BUILDTIME_EXTABLE_SORT select CLONE_BACKWARDS select COMMON_CLK - select DMA_DIRECT_OPS select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC) select GENERIC_CLOCKEVENTS select GENERIC_FIND_FIRST_BIT @@ -47,14 +45,12 @@ config ARC select OF select OF_EARLY_FLATTREE select OF_RESERVED_MEM + select PCI_SYSCALL if PCI select PERF_USE_VMALLOC if ARC_CACHE_VIPT_ALIASING config ARCH_HAS_CACHE_LINE_SIZE def_bool y -config MIGHT_HAVE_PCI - bool - config TRACE_IRQFLAGS_SUPPORT def_bool y @@ -543,24 +539,4 @@ config FORCE_MAX_ZONEORDER default "12" if ARC_HUGEPAGE_16M default "11" -menu "Bus Support" - -config PCI - bool "PCI support" if MIGHT_HAVE_PCI - help - PCI is the name of a bus system, i.e., the way the CPU talks to - the other stuff inside your box. Find out if your board/platform - has PCI. - - Note: PCIe support for Synopsys Device will be available only - when HAPS DX is configured with PCIe RC bitmap. If you have PCI, - say Y, otherwise N. 
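The alpha IOMMU hunks above replace the old 0-as-error convention with the generic DMA_MAPPING_ERROR sentinel and delete alpha_pci_mapping_error() along with the .mapping_error op (the ARM dmabounce hunks further down get the same treatment). A hedged sketch of the consumer side after this conversion; the device and buffer here are hypothetical, not taken from the patch:

	#include <linux/dma-mapping.h>

	static int example_map_one(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

		/*
		 * dma_mapping_error() now compares against the common
		 * DMA_MAPPING_ERROR sentinel, so dma_map_ops no longer
		 * need their own .mapping_error hook.
		 */
		if (dma_mapping_error(dev, addr))
			return -ENOMEM;

		/* ... hand addr to the hardware, then unmap when done ... */
		dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
		return 0;
	}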
- -config PCI_SYSCALL - def_bool PCI - -source "drivers/pci/Kconfig" - -endmenu - source "kernel/power/Kconfig" diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c index cf9619d4efb4..4135abec3fb0 100644 --- a/arch/arc/mm/cache.c +++ b/arch/arc/mm/cache.c @@ -1294,7 +1294,7 @@ void __init arc_cache_init_master(void) /* * In case of IOC (say IOC+SLC case), pointers above could still be set * but end up not being relevant as the first function in chain is not - * called at all for @dma_direct_ops + * called at all for devices using coherent DMA. * arch_sync_dma_for_cpu() -> dma_cache_*() -> __dma_cache_*() */ } diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c index db203ff69ccf..1525ac00fd02 100644 --- a/arch/arc/mm/dma.c +++ b/arch/arc/mm/dma.c @@ -33,7 +33,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, */ BUG_ON(gfp & __GFP_HIGHMEM); - page = alloc_pages(gfp, order); + page = alloc_pages(gfp | __GFP_ZERO, order); if (!page) return NULL; diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index f8fe5668b30f..43bf4c3a1290 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -78,24 +78,6 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size) base, TO_MB(size), !in_use ? "Not used":""); } -#ifdef CONFIG_BLK_DEV_INITRD -static int __init early_initrd(char *p) -{ - unsigned long start, size; - char *endp; - - start = memparse(p, &endp); - if (*endp == ',') { - size = memparse(endp + 1, NULL); - - initrd_start = (unsigned long)__va(start); - initrd_end = (unsigned long)__va(start + size); - } - return 0; -} -early_param("initrd", early_initrd); -#endif - /* * First memory setup routine called from setup_arch() * 1. setup swapper's mm @init_mm @@ -140,8 +122,11 @@ void __init setup_arch_memory(void) memblock_reserve(low_mem_start, __pa(_end) - low_mem_start); #ifdef CONFIG_BLK_DEV_INITRD - if (initrd_start) - memblock_reserve(__pa(initrd_start), initrd_end - initrd_start); + if (phys_initrd_size) { + memblock_reserve(phys_initrd_start, phys_initrd_size); + initrd_start = (unsigned long)__va(phys_initrd_start); + initrd_end = initrd_start + phys_initrd_size; + } #endif early_init_fdt_reserve_self(); diff --git a/arch/arc/plat-axs10x/Kconfig b/arch/arc/plat-axs10x/Kconfig index 4e0df7b7a248..27b9eb97a6bf 100644 --- a/arch/arc/plat-axs10x/Kconfig +++ b/arch/arc/plat-axs10x/Kconfig @@ -11,7 +11,7 @@ menuconfig ARC_PLAT_AXS10X select DW_APB_ICTL select GPIO_DWAPB select OF_GPIO - select MIGHT_HAVE_PCI + select HAVE_PCI select GENERIC_IRQ_CHIP select GPIOLIB select AXS101 if ISA_ARCOMPACT diff --git a/arch/arc/plat-hsdk/Kconfig b/arch/arc/plat-hsdk/Kconfig index 9356753c2ed8..f25c085b9874 100644 --- a/arch/arc/plat-hsdk/Kconfig +++ b/arch/arc/plat-hsdk/Kconfig @@ -11,4 +11,4 @@ menuconfig ARC_SOC_HSDK select ARC_HAS_ACCL_REGS select CLK_HSDK select RESET_HSDK - select MIGHT_HAVE_PCI + select HAVE_PCI diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 91be74d8df65..a3f436ba554d 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -19,6 +19,7 @@ config ARM select ARCH_HAVE_CUSTOM_GPIO_H select ARCH_HAS_GCOV_PROFILE_ALL select ARCH_MIGHT_HAVE_PC_PARPORT + select ARCH_NO_SG_CHAIN if !ARM_HAS_SG_CHAIN select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7 select ARCH_SUPPORTS_ATOMIC_RMW @@ -29,7 +30,7 @@ config ARM select CLONE_BACKWARDS select CPU_PM if (SUSPEND || CPU_IDLE) select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS - select DMA_DIRECT_OPS if !MMU + select DMA_REMAP 
if MMU select EDAC_SUPPORT select EDAC_ATOMIC_SCRUB select GENERIC_ALLOCATOR @@ -103,6 +104,7 @@ config ARM select OF_RESERVED_MEM if OF select OLD_SIGACTION select OLD_SIGSUSPEND3 + select PCI_SYSCALL if PCI select PERF_USE_VMALLOC select REFCOUNT_FULL select RTC_LIB @@ -118,7 +120,6 @@ config ARM . config ARM_HAS_SG_CHAIN - select ARCH_HAS_SG_CHAIN bool config ARM_DMA_USE_IOMMU @@ -147,9 +148,6 @@ config ARM_DMA_IOMMU_ALIGNMENT endif -config MIGHT_HAVE_PCI - bool - config SYS_SUPPORTS_APM_EMULATION bool @@ -163,21 +161,6 @@ config HAVE_PROC_CPU config NO_IOPORT_MAP bool -config EISA - bool - ---help--- - The Extended Industry Standard Architecture (EISA) bus was - developed as an open alternative to the IBM MicroChannel bus. - - The EISA bus provided some of the features of the IBM MicroChannel - bus while maintaining backward compatibility with cards made for - the older ISA bus. The EISA bus saw limited use between 1988 and - 1995 when it was made obsolete by the PCI bus. - - Say Y here if you are building a kernel for an EISA-based machine. - - Otherwise, say N. - config SBUS bool @@ -333,8 +316,8 @@ config ARCH_MULTIPLATFORM select COMMON_CLK select GENERIC_CLOCKEVENTS select GENERIC_IRQ_MULTI_HANDLER - select MIGHT_HAVE_PCI - select PCI_DOMAINS if PCI + select HAVE_PCI + select PCI_DOMAINS_GENERIC if PCI select SPARSE_IRQ select USE_OF @@ -407,7 +390,7 @@ config ARCH_IOP13XX select CPU_XSC3 select NEED_MACH_MEMORY_H select NEED_RET_TO_USER - select PCI + select FORCE_PCI select PLAT_IOP select VMSPLIT_1G select SPARSE_IRQ @@ -421,7 +404,7 @@ config ARCH_IOP32X select GPIO_IOP select GPIOLIB select NEED_RET_TO_USER - select PCI + select FORCE_PCI select PLAT_IOP help Support for Intel's 80219 and IOP32X (XScale) family of @@ -434,7 +417,7 @@ config ARCH_IOP33X select GPIO_IOP select GPIOLIB select NEED_RET_TO_USER - select PCI + select FORCE_PCI select PLAT_IOP help Support for Intel's IOP33X (XScale) family of processors. @@ -449,7 +432,7 @@ config ARCH_IXP4XX select DMABOUNCE if PCI select GENERIC_CLOCKEVENTS select GPIOLIB - select MIGHT_HAVE_PCI + select HAVE_PCI select NEED_MACH_IO_H select USB_EHCI_BIG_ENDIAN_DESC select USB_EHCI_BIG_ENDIAN_MMIO @@ -462,7 +445,7 @@ config ARCH_DOVE select GENERIC_CLOCKEVENTS select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB - select MIGHT_HAVE_PCI + select HAVE_PCI select MVEBU_MBUS select PINCTRL select PINCTRL_DOVE @@ -910,7 +893,7 @@ config PLAT_VERSATILE source "arch/arm/firmware/Kconfig" -source arch/arm/mm/Kconfig +source "arch/arm/mm/Kconfig" config IWMMXT bool "Enable iWMMXt support" @@ -1230,46 +1213,18 @@ config ISA_DMA config ISA_DMA_API bool -config PCI - bool "PCI support" if MIGHT_HAVE_PCI - help - Find out whether you have a PCI motherboard. PCI is the name of a - bus system, i.e. the way the CPU talks to the other stuff inside - your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or - VESA. If you have PCI, say Y, otherwise N. - -config PCI_DOMAINS - bool "Support for multiple PCI domains" - depends on PCI - help - Enable PCI domains kernel management. Say Y if your machine - has a PCI bus hierarchy that requires more than one PCI - domain (aka segment) to be correctly managed. Say N otherwise. - - If you don't know what to do here, say N. - -config PCI_DOMAINS_GENERIC - def_bool PCI_DOMAINS - config PCI_NANOENGINE bool "BSE nanoEngine PCI support" depends on SA1100_NANOENGINE help Enable PCI on the BSE nanoEngine board. 
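One theme in the arch Kconfig hunks above: the opt-in ARCH_HAS_SG_CHAIN select is inverted into an opt-out ARCH_NO_SG_CHAIN. What the option gates is whether scatterlist tables may be chained into one logical list. A rough sketch of the chaining pattern, with hypothetical names and table sizes:

	#include <linux/scatterlist.h>

	static void example_chain(struct scatterlist *first, unsigned int n1,
				  struct scatterlist *second, unsigned int n2)
	{
		sg_init_table(first, n1);
		sg_init_table(second, n2);

		/*
		 * Link the tables; sg_chain() consumes the last slot of
		 * the first table, and is only valid on architectures
		 * that do not select ARCH_NO_SG_CHAIN.
		 */
		sg_chain(first, n1, second);
	}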
-config PCI_SYSCALL - def_bool PCI - config PCI_HOST_ITE8152 bool depends on PCI && MACH_ARMCORE default y select DMABOUNCE -source "drivers/pci/Kconfig" - -source "drivers/pcmcia/Kconfig" - endmenu menu "Kernel Features" @@ -1810,6 +1765,21 @@ config XEN help Say Y if you want to run Linux in a Virtual Machine on Xen on ARM. +config STACKPROTECTOR_PER_TASK + bool "Use a unique stack canary value for each task" + depends on GCC_PLUGINS && STACKPROTECTOR && SMP && !XIP_DEFLATED_DATA + select GCC_PLUGIN_ARM_SSP_PER_TASK + default y + help + Due to the fact that GCC uses an ordinary symbol reference from + which to load the value of the stack canary, this value can only + change at reboot time on SMP systems, and all tasks running in the + kernel's address space are forced to use the same canary value for + the entire duration that the system is up. + + Enable this option to switch to a different method that uses a + different canary value for each task. + endmenu menu "Boot options" diff --git a/arch/arm/Makefile b/arch/arm/Makefile index 05a91d8b89f3..0436002d5091 100644 --- a/arch/arm/Makefile +++ b/arch/arm/Makefile @@ -303,6 +303,18 @@ else KBUILD_IMAGE := $(boot)/zImage endif +ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y) +prepare: stack_protector_prepare +stack_protector_prepare: prepare0 + $(eval KBUILD_CFLAGS += \ + -fplugin-arg-arm_ssp_per_task_plugin-tso=$(shell \ + awk '{if ($$2 == "THREAD_SZ_ORDER") print $$3;}'\ + include/generated/asm-offsets.h) \ + -fplugin-arg-arm_ssp_per_task_plugin-offset=$(shell \ + awk '{if ($$2 == "TI_STACK_CANARY") print $$3;}'\ + include/generated/asm-offsets.h)) +endif + all: $(notdir $(KBUILD_IMAGE)) diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile index 1f5a5ffe7fcf..01bf2585a0fa 100644 --- a/arch/arm/boot/compressed/Makefile +++ b/arch/arm/boot/compressed/Makefile @@ -101,6 +101,7 @@ clean-files += piggy_data lib1funcs.S ashldi3.S bswapsdi2.S \ $(libfdt) $(libfdt_hdrs) hyp-stub.S KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING +KBUILD_CFLAGS += $(DISABLE_ARM_SSP_PER_TASK_PLUGIN) ifeq ($(CONFIG_FUNCTION_TRACER),y) ORIG_CFLAGS := $(KBUILD_CFLAGS) diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c index 9a92de63426f..5ba4622030ca 100644 --- a/arch/arm/common/dmabounce.c +++ b/arch/arm/common/dmabounce.c @@ -257,7 +257,7 @@ static inline dma_addr_t map_single(struct device *dev, void *ptr, size_t size, if (buf == NULL) { dev_err(dev, "%s: unable to map unsafe buffer %p!\n", __func__, ptr); - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n", @@ -327,7 +327,7 @@ static dma_addr_t dmabounce_map_page(struct device *dev, struct page *page, ret = needs_bounce(dev, dma_addr, size); if (ret < 0) - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; if (ret == 0) { arm_dma_ops.sync_single_for_device(dev, dma_addr, size, dir); @@ -336,7 +336,7 @@ static dma_addr_t dmabounce_map_page(struct device *dev, struct page *page, if (PageHighMem(page)) { dev_err(dev, "DMA buffer bouncing of HIGHMEM pages is not supported\n"); - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } return map_single(dev, page_address(page) + offset, size, dir, attrs); @@ -453,11 +453,6 @@ static int dmabounce_dma_supported(struct device *dev, u64 dma_mask) return arm_dma_ops.dma_supported(dev, dma_mask); } -static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return arm_dma_ops.mapping_error(dev, dma_addr); -} - static const struct 
dma_map_ops dmabounce_ops = { .alloc = arm_dma_alloc, .free = arm_dma_free, @@ -472,7 +467,6 @@ static const struct dma_map_ops dmabounce_ops = { .sync_sg_for_cpu = arm_dma_sync_sg_for_cpu, .sync_sg_for_device = arm_dma_sync_sg_for_device, .dma_supported = dmabounce_dma_supported, - .mapping_error = dmabounce_mapping_error, }; static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev, diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index ef0c7feea6e2..a95322b59799 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -69,6 +69,15 @@ config CRYPTO_AES_ARM help Use optimized AES assembler routines for ARM platforms. + On ARM processors without the Crypto Extensions, this is the + fastest AES implementation for single blocks. For multiple + blocks, the NEON bit-sliced implementation is usually faster. + + This implementation may be vulnerable to cache timing attacks, + since it uses lookup tables. However, as countermeasures it + disables IRQs and preloads the tables; it is hoped this makes + such attacks very difficult. + config CRYPTO_AES_ARM_BS tristate "Bit sliced AES using NEON instructions" depends on KERNEL_MODE_NEON @@ -117,9 +126,14 @@ config CRYPTO_CRC32_ARM_CE select CRYPTO_HASH config CRYPTO_CHACHA20_NEON - tristate "NEON accelerated ChaCha20 symmetric cipher" + tristate "NEON accelerated ChaCha stream cipher algorithms" depends on KERNEL_MODE_NEON select CRYPTO_BLKCIPHER select CRYPTO_CHACHA20 +config CRYPTO_NHPOLY1305_NEON + tristate "NEON accelerated NHPoly1305 hash function (for Adiantum)" + depends on KERNEL_MODE_NEON + select CRYPTO_NHPOLY1305 + endif diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile index bd5bceef0605..4180f3a13512 100644 --- a/arch/arm/crypto/Makefile +++ b/arch/arm/crypto/Makefile @@ -9,7 +9,8 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o -obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o +obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha-neon.o +obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o @@ -52,7 +53,8 @@ aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o -chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o +chacha-neon-y := chacha-neon-core.o chacha-neon-glue.o +nhpoly1305-neon-y := nh-neon-core.o nhpoly1305-neon-glue.o ifdef REGENERATE_ARM_CRYPTO quiet_cmd_perl = PERL $@ @@ -65,4 +67,4 @@ $(src)/sha512-core.S_shipped: $(src)/sha512-armv4.pl $(call cmd,perl) endif -targets += sha256-core.S sha512-core.S +clean-files += sha256-core.S sha512-core.S diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c index d0a9cec73707..5affb8482379 100644 --- a/arch/arm/crypto/aes-ce-glue.c +++ b/arch/arm/crypto/aes-ce-glue.c @@ -10,7 +10,6 @@ #include #include -#include #include #include #include diff --git a/arch/arm/crypto/aes-cipher-core.S b/arch/arm/crypto/aes-cipher-core.S index 184d6c2d15d5..f2d67c095e59 100644 --- a/arch/arm/crypto/aes-cipher-core.S +++ b/arch/arm/crypto/aes-cipher-core.S @@ -10,6 +10,7 @@ */ #include +#include #include .text @@ -41,7 +42,7 @@ .endif .endm - .macro __hround, out0, out1, in0, in1, in2, in3, t3, t4, enc, sz, op + .macro 
__hround, out0, out1, in0, in1, in2, in3, t3, t4, enc, sz, op, oldcpsr __select \out0, \in0, 0 __select t0, \in1, 1 __load \out0, \out0, 0, \sz, \op @@ -73,6 +74,14 @@ __load t0, t0, 3, \sz, \op __load \t4, \t4, 3, \sz, \op + .ifnb \oldcpsr + /* + * This is the final round and we're done with all data-dependent table + * lookups, so we can safely re-enable interrupts. + */ + restore_irqs \oldcpsr + .endif + eor \out1, \out1, t1, ror #24 eor \out0, \out0, t2, ror #16 ldm rk!, {t1, t2} @@ -83,14 +92,14 @@ eor \out1, \out1, t2 .endm - .macro fround, out0, out1, out2, out3, in0, in1, in2, in3, sz=2, op + .macro fround, out0, out1, out2, out3, in0, in1, in2, in3, sz=2, op, oldcpsr __hround \out0, \out1, \in0, \in1, \in2, \in3, \out2, \out3, 1, \sz, \op - __hround \out2, \out3, \in2, \in3, \in0, \in1, \in1, \in2, 1, \sz, \op + __hround \out2, \out3, \in2, \in3, \in0, \in1, \in1, \in2, 1, \sz, \op, \oldcpsr .endm - .macro iround, out0, out1, out2, out3, in0, in1, in2, in3, sz=2, op + .macro iround, out0, out1, out2, out3, in0, in1, in2, in3, sz=2, op, oldcpsr __hround \out0, \out1, \in0, \in3, \in2, \in1, \out2, \out3, 0, \sz, \op - __hround \out2, \out3, \in2, \in1, \in0, \in3, \in1, \in0, 0, \sz, \op + __hround \out2, \out3, \in2, \in1, \in0, \in3, \in1, \in0, 0, \sz, \op, \oldcpsr .endm .macro __rev, out, in @@ -118,13 +127,14 @@ .macro do_crypt, round, ttab, ltab, bsz push {r3-r11, lr} + // Load keys first, to reduce latency in case they're not cached yet. + ldm rk!, {r8-r11} + ldr r4, [in] ldr r5, [in, #4] ldr r6, [in, #8] ldr r7, [in, #12] - ldm rk!, {r8-r11} - #ifdef CONFIG_CPU_BIG_ENDIAN __rev r4, r4 __rev r5, r5 @@ -138,6 +148,25 @@ eor r7, r7, r11 __adrl ttab, \ttab + /* + * Disable interrupts and prefetch the 1024-byte 'ft' or 'it' table into + * L1 cache, assuming cacheline size >= 32. This is a hardening measure + * intended to make cache-timing attacks more difficult. They may not + * be fully prevented, however; see the paper + * https://cr.yp.to/antiforgery/cachetiming-20050414.pdf + * ("Cache-timing attacks on AES") for a discussion of the many + * difficulties involved in writing truly constant-time AES software. 
+ */ + save_and_disable_irqs t0 + .set i, 0 + .rept 1024 / 128 + ldr r8, [ttab, #i + 0] + ldr r9, [ttab, #i + 32] + ldr r10, [ttab, #i + 64] + ldr r11, [ttab, #i + 96] + .set i, i + 128 + .endr + push {t0} // oldcpsr tst rounds, #2 bne 1f @@ -151,8 +180,21 @@ \round r4, r5, r6, r7, r8, r9, r10, r11 b 0b -2: __adrl ttab, \ltab - \round r4, r5, r6, r7, r8, r9, r10, r11, \bsz, b +2: .ifb \ltab + add ttab, ttab, #1 + .else + __adrl ttab, \ltab + // Prefetch inverse S-box for final round; see explanation above + .set i, 0 + .rept 256 / 64 + ldr t0, [ttab, #i + 0] + ldr t1, [ttab, #i + 32] + .set i, i + 64 + .endr + .endif + + pop {rounds} // oldcpsr + \round r4, r5, r6, r7, r8, r9, r10, r11, \bsz, b, rounds #ifdef CONFIG_CPU_BIG_ENDIAN __rev r4, r4 @@ -175,7 +217,7 @@ .endm ENTRY(__aes_arm_encrypt) - do_crypt fround, crypto_ft_tab, crypto_ft_tab + 1, 2 + do_crypt fround, crypto_ft_tab,, 2 ENDPROC(__aes_arm_encrypt) .align 5 diff --git a/arch/arm/crypto/chacha20-neon-core.S b/arch/arm/crypto/chacha-neon-core.S similarity index 90% rename from arch/arm/crypto/chacha20-neon-core.S rename to arch/arm/crypto/chacha-neon-core.S index 50e7b9896818..eb22926d4912 100644 --- a/arch/arm/crypto/chacha20-neon-core.S +++ b/arch/arm/crypto/chacha-neon-core.S @@ -1,5 +1,5 @@ /* - * ChaCha20 256-bit cipher algorithm, RFC7539, ARM NEON functions + * ChaCha/XChaCha NEON helper functions * * Copyright (C) 2016 Linaro, Ltd. * @@ -27,9 +27,9 @@ * (d) vtbl.8 + vtbl.8 (multiple of 8 bits rotations only, * needs index vector) * - * ChaCha20 has 16, 12, 8, and 7-bit rotations. For the 12 and 7-bit - * rotations, the only choices are (a) and (b). We use (a) since it takes - * two-thirds the cycles of (b) on both Cortex-A7 and Cortex-A53. + * ChaCha has 16, 12, 8, and 7-bit rotations. For the 12 and 7-bit rotations, + * the only choices are (a) and (b). We use (a) since it takes two-thirds the + * cycles of (b) on both Cortex-A7 and Cortex-A53. * * For the 16-bit rotation, we use vrev32.16 since it's consistently fastest * and doesn't need a temporary register. @@ -52,30 +52,20 @@ .fpu neon .align 5 -ENTRY(chacha20_block_xor_neon) - // r0: Input state matrix, s - // r1: 1 data block output, o - // r2: 1 data block input, i - - // - // This function encrypts one ChaCha20 block by loading the state matrix - // in four NEON registers. It performs matrix operation on four words in - // parallel, but requireds shuffling to rearrange the words after each - // round. - // - - // x0..3 = s0..3 - add ip, r0, #0x20 - vld1.32 {q0-q1}, [r0] - vld1.32 {q2-q3}, [ip] - - vmov q8, q0 - vmov q9, q1 - vmov q10, q2 - vmov q11, q3 +/* + * chacha_permute - permute one block + * + * Permute one 64-byte block where the state matrix is stored in the four NEON + * registers q0-q3. It performs matrix operations on four words in parallel, + * but requires shuffling to rearrange the words after each round. + * + * The round count is given in r3. 
+ * + * Clobbers: r3, ip, q4-q5 + */ +chacha_permute: adr ip, .Lrol8_table - mov r3, #10 vld1.8 {d10}, [ip, :64] .Ldoubleround: @@ -139,9 +129,31 @@ ENTRY(chacha20_block_xor_neon) // x3 = shuffle32(x3, MASK(0, 3, 2, 1)) vext.8 q3, q3, q3, #4 - subs r3, r3, #1 + subs r3, r3, #2 bne .Ldoubleround + bx lr +ENDPROC(chacha_permute) + +ENTRY(chacha_block_xor_neon) + // r0: Input state matrix, s + // r1: 1 data block output, o + // r2: 1 data block input, i + // r3: nrounds + push {lr} + + // x0..3 = s0..3 + add ip, r0, #0x20 + vld1.32 {q0-q1}, [r0] + vld1.32 {q2-q3}, [ip] + + vmov q8, q0 + vmov q9, q1 + vmov q10, q2 + vmov q11, q3 + + bl chacha_permute + add ip, r2, #0x20 vld1.8 {q4-q5}, [r2] vld1.8 {q6-q7}, [ip] @@ -166,15 +178,33 @@ ENTRY(chacha20_block_xor_neon) vst1.8 {q0-q1}, [r1] vst1.8 {q2-q3}, [ip] - bx lr -ENDPROC(chacha20_block_xor_neon) + pop {pc} +ENDPROC(chacha_block_xor_neon) + +ENTRY(hchacha_block_neon) + // r0: Input state matrix, s + // r1: output (8 32-bit words) + // r2: nrounds + push {lr} + + vld1.32 {q0-q1}, [r0]! + vld1.32 {q2-q3}, [r0] + + mov r3, r2 + bl chacha_permute + + vst1.32 {q0}, [r1]! + vst1.32 {q3}, [r1] + + pop {pc} +ENDPROC(hchacha_block_neon) .align 4 .Lctrinc: .word 0, 1, 2, 3 .Lrol8_table: .byte 3, 0, 1, 2, 7, 4, 5, 6 .align 5 -ENTRY(chacha20_4block_xor_neon) +ENTRY(chacha_4block_xor_neon) push {r4-r5} mov r4, sp // preserve the stack pointer sub ip, sp, #0x20 // allocate a 32 byte buffer @@ -184,9 +214,10 @@ ENTRY(chacha20_4block_xor_neon) // r0: Input state matrix, s // r1: 4 data blocks output, o // r2: 4 data blocks input, i + // r3: nrounds // - // This function encrypts four consecutive ChaCha20 blocks by loading + // This function encrypts four consecutive ChaCha blocks by loading // the state matrix in NEON registers four times. The algorithm performs // each operation on the corresponding word of each state matrix, hence // requires no word shuffling. The words are re-interleaved before the @@ -219,7 +250,6 @@ ENTRY(chacha20_4block_xor_neon) vdup.32 q0, d0[0] adr ip, .Lrol8_table - mov r3, #10 b 1f .Ldoubleround4: @@ -417,7 +447,7 @@ ENTRY(chacha20_4block_xor_neon) vsri.u32 q5, q8, #25 vsri.u32 q6, q9, #25 - subs r3, r3, #1 + subs r3, r3, #2 bne .Ldoubleround4 // x0..7[0-3] are in q0-q7, x10..15[0-3] are in q10-q15. @@ -527,4 +557,4 @@ ENTRY(chacha20_4block_xor_neon) pop {r4-r5} bx lr -ENDPROC(chacha20_4block_xor_neon) +ENDPROC(chacha_4block_xor_neon) diff --git a/arch/arm/crypto/chacha-neon-glue.c b/arch/arm/crypto/chacha-neon-glue.c new file mode 100644 index 000000000000..9d6fda81986d --- /dev/null +++ b/arch/arm/crypto/chacha-neon-glue.c @@ -0,0 +1,201 @@ +/* + * ARM NEON accelerated ChaCha and XChaCha stream ciphers, + * including ChaCha20 (RFC7539) + * + * Copyright (C) 2016 Linaro, Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * Based on: + * ChaCha20 256-bit cipher algorithm, RFC7539, SIMD glue code + * + * Copyright (C) 2015 Martin Willi + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
+ */ + +#include +#include +#include +#include +#include + +#include +#include +#include + +asmlinkage void chacha_block_xor_neon(const u32 *state, u8 *dst, const u8 *src, + int nrounds); +asmlinkage void chacha_4block_xor_neon(const u32 *state, u8 *dst, const u8 *src, + int nrounds); +asmlinkage void hchacha_block_neon(const u32 *state, u32 *out, int nrounds); + +static void chacha_doneon(u32 *state, u8 *dst, const u8 *src, + unsigned int bytes, int nrounds) +{ + u8 buf[CHACHA_BLOCK_SIZE]; + + while (bytes >= CHACHA_BLOCK_SIZE * 4) { + chacha_4block_xor_neon(state, dst, src, nrounds); + bytes -= CHACHA_BLOCK_SIZE * 4; + src += CHACHA_BLOCK_SIZE * 4; + dst += CHACHA_BLOCK_SIZE * 4; + state[12] += 4; + } + while (bytes >= CHACHA_BLOCK_SIZE) { + chacha_block_xor_neon(state, dst, src, nrounds); + bytes -= CHACHA_BLOCK_SIZE; + src += CHACHA_BLOCK_SIZE; + dst += CHACHA_BLOCK_SIZE; + state[12]++; + } + if (bytes) { + memcpy(buf, src, bytes); + chacha_block_xor_neon(state, buf, buf, nrounds); + memcpy(dst, buf, bytes); + } +} + +static int chacha_neon_stream_xor(struct skcipher_request *req, + struct chacha_ctx *ctx, u8 *iv) +{ + struct skcipher_walk walk; + u32 state[16]; + int err; + + err = skcipher_walk_virt(&walk, req, false); + + crypto_chacha_init(state, ctx, iv); + + while (walk.nbytes > 0) { + unsigned int nbytes = walk.nbytes; + + if (nbytes < walk.total) + nbytes = round_down(nbytes, walk.stride); + + kernel_neon_begin(); + chacha_doneon(state, walk.dst.virt.addr, walk.src.virt.addr, + nbytes, ctx->nrounds); + kernel_neon_end(); + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + return err; +} + +static int chacha_neon(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (req->cryptlen <= CHACHA_BLOCK_SIZE || !may_use_simd()) + return crypto_chacha_crypt(req); + + return chacha_neon_stream_xor(req, ctx, req->iv); +} + +static int xchacha_neon(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + struct chacha_ctx subctx; + u32 state[16]; + u8 real_iv[16]; + + if (req->cryptlen <= CHACHA_BLOCK_SIZE || !may_use_simd()) + return crypto_xchacha_crypt(req); + + crypto_chacha_init(state, ctx, req->iv); + + kernel_neon_begin(); + hchacha_block_neon(state, subctx.key, ctx->nrounds); + kernel_neon_end(); + subctx.nrounds = ctx->nrounds; + + memcpy(&real_iv[0], req->iv + 24, 8); + memcpy(&real_iv[8], req->iv + 16, 8); + return chacha_neon_stream_xor(req, &subctx, real_iv); +} + +static struct skcipher_alg algs[] = { + { + .base.cra_name = "chacha20", + .base.cra_driver_name = "chacha20-neon", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = 4 * CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha20_setkey, + .encrypt = chacha_neon, + .decrypt = chacha_neon, + }, { + .base.cra_name = "xchacha20", + .base.cra_driver_name = "xchacha20-neon", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = 4 * CHACHA_BLOCK_SIZE, + .setkey = 
crypto_chacha20_setkey, + .encrypt = xchacha_neon, + .decrypt = xchacha_neon, + }, { + .base.cra_name = "xchacha12", + .base.cra_driver_name = "xchacha12-neon", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = 4 * CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha12_setkey, + .encrypt = xchacha_neon, + .decrypt = xchacha_neon, + } +}; + +static int __init chacha_simd_mod_init(void) +{ + if (!(elf_hwcap & HWCAP_NEON)) + return -ENODEV; + + return crypto_register_skciphers(algs, ARRAY_SIZE(algs)); +} + +static void __exit chacha_simd_mod_fini(void) +{ + crypto_unregister_skciphers(algs, ARRAY_SIZE(algs)); +} + +module_init(chacha_simd_mod_init); +module_exit(chacha_simd_mod_fini); + +MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers (NEON accelerated)"); +MODULE_AUTHOR("Ard Biesheuvel "); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS_CRYPTO("chacha20"); +MODULE_ALIAS_CRYPTO("chacha20-neon"); +MODULE_ALIAS_CRYPTO("xchacha20"); +MODULE_ALIAS_CRYPTO("xchacha20-neon"); +MODULE_ALIAS_CRYPTO("xchacha12"); +MODULE_ALIAS_CRYPTO("xchacha12-neon"); diff --git a/arch/arm/crypto/chacha20-neon-glue.c b/arch/arm/crypto/chacha20-neon-glue.c deleted file mode 100644 index 59a7be08e80c..000000000000 --- a/arch/arm/crypto/chacha20-neon-glue.c +++ /dev/null @@ -1,127 +0,0 @@ -/* - * ChaCha20 256-bit cipher algorithm, RFC7539, ARM NEON functions - * - * Copyright (C) 2016 Linaro, Ltd. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * Based on: - * ChaCha20 256-bit cipher algorithm, RFC7539, SIMD glue code - * - * Copyright (C) 2015 Martin Willi - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. 
- */ - -#include -#include -#include -#include -#include - -#include -#include -#include - -asmlinkage void chacha20_block_xor_neon(u32 *state, u8 *dst, const u8 *src); -asmlinkage void chacha20_4block_xor_neon(u32 *state, u8 *dst, const u8 *src); - -static void chacha20_doneon(u32 *state, u8 *dst, const u8 *src, - unsigned int bytes) -{ - u8 buf[CHACHA20_BLOCK_SIZE]; - - while (bytes >= CHACHA20_BLOCK_SIZE * 4) { - chacha20_4block_xor_neon(state, dst, src); - bytes -= CHACHA20_BLOCK_SIZE * 4; - src += CHACHA20_BLOCK_SIZE * 4; - dst += CHACHA20_BLOCK_SIZE * 4; - state[12] += 4; - } - while (bytes >= CHACHA20_BLOCK_SIZE) { - chacha20_block_xor_neon(state, dst, src); - bytes -= CHACHA20_BLOCK_SIZE; - src += CHACHA20_BLOCK_SIZE; - dst += CHACHA20_BLOCK_SIZE; - state[12]++; - } - if (bytes) { - memcpy(buf, src, bytes); - chacha20_block_xor_neon(state, buf, buf); - memcpy(dst, buf, bytes); - } -} - -static int chacha20_neon(struct skcipher_request *req) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm); - struct skcipher_walk walk; - u32 state[16]; - int err; - - if (req->cryptlen <= CHACHA20_BLOCK_SIZE || !may_use_simd()) - return crypto_chacha20_crypt(req); - - err = skcipher_walk_virt(&walk, req, true); - - crypto_chacha20_init(state, ctx, walk.iv); - - kernel_neon_begin(); - while (walk.nbytes > 0) { - unsigned int nbytes = walk.nbytes; - - if (nbytes < walk.total) - nbytes = round_down(nbytes, walk.stride); - - chacha20_doneon(state, walk.dst.virt.addr, walk.src.virt.addr, - nbytes); - err = skcipher_walk_done(&walk, walk.nbytes - nbytes); - } - kernel_neon_end(); - - return err; -} - -static struct skcipher_alg alg = { - .base.cra_name = "chacha20", - .base.cra_driver_name = "chacha20-neon", - .base.cra_priority = 300, - .base.cra_blocksize = 1, - .base.cra_ctxsize = sizeof(struct chacha20_ctx), - .base.cra_module = THIS_MODULE, - - .min_keysize = CHACHA20_KEY_SIZE, - .max_keysize = CHACHA20_KEY_SIZE, - .ivsize = CHACHA20_IV_SIZE, - .chunksize = CHACHA20_BLOCK_SIZE, - .walksize = 4 * CHACHA20_BLOCK_SIZE, - .setkey = crypto_chacha20_setkey, - .encrypt = chacha20_neon, - .decrypt = chacha20_neon, -}; - -static int __init chacha20_simd_mod_init(void) -{ - if (!(elf_hwcap & HWCAP_NEON)) - return -ENODEV; - - return crypto_register_skcipher(&alg); -} - -static void __exit chacha20_simd_mod_fini(void) -{ - crypto_unregister_skcipher(&alg); -} - -module_init(chacha20_simd_mod_init); -module_exit(chacha20_simd_mod_fini); - -MODULE_AUTHOR("Ard Biesheuvel "); -MODULE_LICENSE("GPL v2"); -MODULE_ALIAS_CRYPTO("chacha20"); diff --git a/arch/arm/crypto/nh-neon-core.S b/arch/arm/crypto/nh-neon-core.S new file mode 100644 index 000000000000..434d80ab531c --- /dev/null +++ b/arch/arm/crypto/nh-neon-core.S @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * NH - ε-almost-universal hash function, NEON accelerated version + * + * Copyright 2018 Google LLC + * + * Author: Eric Biggers + */ + +#include + + .text + .fpu neon + + KEY .req r0 + MESSAGE .req r1 + MESSAGE_LEN .req r2 + HASH .req r3 + + PASS0_SUMS .req q0 + PASS0_SUM_A .req d0 + PASS0_SUM_B .req d1 + PASS1_SUMS .req q1 + PASS1_SUM_A .req d2 + PASS1_SUM_B .req d3 + PASS2_SUMS .req q2 + PASS2_SUM_A .req d4 + PASS2_SUM_B .req d5 + PASS3_SUMS .req q3 + PASS3_SUM_A .req d6 + PASS3_SUM_B .req d7 + K0 .req q4 + K1 .req q5 + K2 .req q6 + K3 .req q7 + T0 .req q8 + T0_L .req d16 + T0_H .req d17 + T1 .req q9 + T1_L .req d18 + T1_H .req d19 + T2 .req q10 + T2_L .req d20 + T2_H .req 
d21 + T3 .req q11 + T3_L .req d22 + T3_H .req d23 + +.macro _nh_stride k0, k1, k2, k3 + + // Load next message stride + vld1.8 {T3}, [MESSAGE]! + + // Load next key stride + vld1.32 {\k3}, [KEY]! + + // Add message words to key words + vadd.u32 T0, T3, \k0 + vadd.u32 T1, T3, \k1 + vadd.u32 T2, T3, \k2 + vadd.u32 T3, T3, \k3 + + // Multiply 32x32 => 64 and accumulate + vmlal.u32 PASS0_SUMS, T0_L, T0_H + vmlal.u32 PASS1_SUMS, T1_L, T1_H + vmlal.u32 PASS2_SUMS, T2_L, T2_H + vmlal.u32 PASS3_SUMS, T3_L, T3_H +.endm + +/* + * void nh_neon(const u32 *key, const u8 *message, size_t message_len, + * u8 hash[NH_HASH_BYTES]) + * + * It's guaranteed that message_len % 16 == 0. + */ +ENTRY(nh_neon) + + vld1.32 {K0,K1}, [KEY]! + vmov.u64 PASS0_SUMS, #0 + vmov.u64 PASS1_SUMS, #0 + vld1.32 {K2}, [KEY]! + vmov.u64 PASS2_SUMS, #0 + vmov.u64 PASS3_SUMS, #0 + + subs MESSAGE_LEN, MESSAGE_LEN, #64 + blt .Lloop4_done +.Lloop4: + _nh_stride K0, K1, K2, K3 + _nh_stride K1, K2, K3, K0 + _nh_stride K2, K3, K0, K1 + _nh_stride K3, K0, K1, K2 + subs MESSAGE_LEN, MESSAGE_LEN, #64 + bge .Lloop4 + +.Lloop4_done: + ands MESSAGE_LEN, MESSAGE_LEN, #63 + beq .Ldone + _nh_stride K0, K1, K2, K3 + + subs MESSAGE_LEN, MESSAGE_LEN, #16 + beq .Ldone + _nh_stride K1, K2, K3, K0 + + subs MESSAGE_LEN, MESSAGE_LEN, #16 + beq .Ldone + _nh_stride K2, K3, K0, K1 + +.Ldone: + // Sum the accumulators for each pass, then store the sums to 'hash' + vadd.u64 T0_L, PASS0_SUM_A, PASS0_SUM_B + vadd.u64 T0_H, PASS1_SUM_A, PASS1_SUM_B + vadd.u64 T1_L, PASS2_SUM_A, PASS2_SUM_B + vadd.u64 T1_H, PASS3_SUM_A, PASS3_SUM_B + vst1.8 {T0-T1}, [HASH] + bx lr +ENDPROC(nh_neon) diff --git a/arch/arm/crypto/nhpoly1305-neon-glue.c b/arch/arm/crypto/nhpoly1305-neon-glue.c new file mode 100644 index 000000000000..49aae87cb2bc --- /dev/null +++ b/arch/arm/crypto/nhpoly1305-neon-glue.c @@ -0,0 +1,77 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NHPoly1305 - ε-almost-∆-universal hash function for Adiantum + * (NEON accelerated version) + * + * Copyright 2018 Google LLC + */ + +#include +#include +#include +#include +#include + +asmlinkage void nh_neon(const u32 *key, const u8 *message, size_t message_len, + u8 hash[NH_HASH_BYTES]); + +/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */ +static void _nh_neon(const u32 *key, const u8 *message, size_t message_len, + __le64 hash[NH_NUM_PASSES]) +{ + nh_neon(key, message, message_len, (u8 *)hash); +} + +static int nhpoly1305_neon_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen) +{ + if (srclen < 64 || !may_use_simd()) + return crypto_nhpoly1305_update(desc, src, srclen); + + do { + unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); + + kernel_neon_begin(); + crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon); + kernel_neon_end(); + src += n; + srclen -= n; + } while (srclen); + return 0; +} + +static struct shash_alg nhpoly1305_alg = { + .base.cra_name = "nhpoly1305", + .base.cra_driver_name = "nhpoly1305-neon", + .base.cra_priority = 200, + .base.cra_ctxsize = sizeof(struct nhpoly1305_key), + .base.cra_module = THIS_MODULE, + .digestsize = POLY1305_DIGEST_SIZE, + .init = crypto_nhpoly1305_init, + .update = nhpoly1305_neon_update, + .final = crypto_nhpoly1305_final, + .setkey = crypto_nhpoly1305_setkey, + .descsize = sizeof(struct nhpoly1305_state), +}; + +static int __init nhpoly1305_mod_init(void) +{ + if (!(elf_hwcap & HWCAP_NEON)) + return -ENODEV; + + return crypto_register_shash(&nhpoly1305_alg); +} + +static void __exit nhpoly1305_mod_exit(void) +{ + 
crypto_unregister_shash(&nhpoly1305_alg); +} + +module_init(nhpoly1305_mod_init); +module_exit(nhpoly1305_mod_exit); + +MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function (NEON-accelerated)"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("nhpoly1305"); +MODULE_ALIAS_CRYPTO("nhpoly1305-neon"); diff --git a/arch/arm/include/asm/dma-iommu.h b/arch/arm/include/asm/dma-iommu.h index 6821f1249300..772f48ef84b7 100644 --- a/arch/arm/include/asm/dma-iommu.h +++ b/arch/arm/include/asm/dma-iommu.h @@ -9,8 +9,6 @@ #include #include -#define ARM_MAPPING_ERROR (~(dma_addr_t)0x0) - struct dma_iommu_mapping { /* iommu specific data */ struct iommu_domain *domain; diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h index 965b7c846ecb..31d3b96f0f4b 100644 --- a/arch/arm/include/asm/dma-mapping.h +++ b/arch/arm/include/asm/dma-mapping.h @@ -18,7 +18,7 @@ extern const struct dma_map_ops arm_coherent_dma_ops; static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) { - return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : &dma_direct_ops; + return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : NULL; } #ifdef __arch_page_to_dma diff --git a/arch/arm/include/asm/module.h b/arch/arm/include/asm/module.h index 9e81b7c498d8..182163b55546 100644 --- a/arch/arm/include/asm/module.h +++ b/arch/arm/include/asm/module.h @@ -61,4 +61,15 @@ u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val); MODULE_ARCH_VERMAGIC_ARMTHUMB \ MODULE_ARCH_VERMAGIC_P2V +#ifdef CONFIG_THUMB2_KERNEL +#define HAVE_ARCH_KALLSYMS_SYMBOL_VALUE +static inline unsigned long kallsyms_symbol_value(const Elf_Sym *sym) +{ + if (ELF_ST_TYPE(sym->st_info) == STT_FUNC) + return sym->st_value & ~1; + + return sym->st_value; +} +#endif + #endif /* _ASM_ARM_MODULE_H */ diff --git a/arch/arm/include/asm/stackprotector.h b/arch/arm/include/asm/stackprotector.h index ef5f7b69443e..72a20c3a0a90 100644 --- a/arch/arm/include/asm/stackprotector.h +++ b/arch/arm/include/asm/stackprotector.h @@ -6,8 +6,10 @@ * the stack frame and verifying that it hasn't been overwritten when * returning from the function. The pattern is called stack canary * and gcc expects it to be defined by a global variable called - * "__stack_chk_guard" on ARM. This unfortunately means that on SMP - * we cannot have a different canary value per task. + * "__stack_chk_guard" on ARM. This prevents SMP systems from using a + * different value for each task unless we enable a GCC plugin that + * replaces these symbol references with references to each task's own + * value. 
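+ (The CONFIG_STACKPROTECTOR_PER_TASK paths added below keep that per-task + value in thread_info::stack_canary and copy it in copy_thread().)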
*/ #ifndef _ASM_STACKPROTECTOR_H @@ -16,6 +18,8 @@ #include #include +#include + extern unsigned long __stack_chk_guard; /* @@ -33,7 +37,11 @@ static __always_inline void boot_init_stack_canary(void) canary ^= LINUX_VERSION_CODE; current->stack_canary = canary; +#ifndef CONFIG_STACKPROTECTOR_PER_TASK __stack_chk_guard = current->stack_canary; +#else + current_thread_info()->stack_canary = current->stack_canary; +#endif } #endif /* _ASM_STACKPROTECTOR_H */ diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index 8f55dc520a3e..286eb61c632b 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -53,6 +53,9 @@ struct thread_info { struct task_struct *task; /* main task structure */ __u32 cpu; /* cpu */ __u32 cpu_domain; /* cpu domain */ +#ifdef CONFIG_STACKPROTECTOR_PER_TASK + unsigned long stack_canary; +#endif struct cpu_context_save cpu_context; /* cpu context */ __u32 syscall; /* syscall number */ __u8 used_cp[16]; /* thread used copro */ diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c index 3968d6c22455..28b27104ac0c 100644 --- a/arch/arm/kernel/asm-offsets.c +++ b/arch/arm/kernel/asm-offsets.c @@ -79,6 +79,10 @@ int main(void) #ifdef CONFIG_CRUNCH DEFINE(TI_CRUNCH_STATE, offsetof(struct thread_info, crunchstate)); #endif +#ifdef CONFIG_STACKPROTECTOR_PER_TASK + DEFINE(TI_STACK_CANARY, offsetof(struct thread_info, stack_canary)); +#endif + DEFINE(THREAD_SZ_ORDER, THREAD_SIZE_ORDER); BLANK(); DEFINE(S_R0, offsetof(struct pt_regs, ARM_r0)); DEFINE(S_R1, offsetof(struct pt_regs, ARM_r1)); diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index 82ab015bf42b..16601d1442d1 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -39,7 +39,7 @@ #include #include -#ifdef CONFIG_STACKPROTECTOR +#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK) #include unsigned long __stack_chk_guard __read_mostly; EXPORT_SYMBOL(__stack_chk_guard); @@ -267,6 +267,10 @@ copy_thread(unsigned long clone_flags, unsigned long stack_start, thread_notify(THREAD_NOTIFY_COPY, thread); +#ifdef CONFIG_STACKPROTECTOR_PER_TASK + thread->stack_canary = p->stack_canary; +#endif + return 0; } diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig index e2bd35b6780c..3f5320f46de2 100644 --- a/arch/arm/kvm/Kconfig +++ b/arch/arm/kvm/Kconfig @@ -55,6 +55,6 @@ config KVM_ARM_HOST ---help--- Provides host support for ARM processors. -source drivers/vhost/Kconfig +source "drivers/vhost/Kconfig" endif # VIRTUALIZATION diff --git a/arch/arm/mach-alpine/Kconfig b/arch/arm/mach-alpine/Kconfig index e3cbb07fe1b4..bc04c91294cf 100644 --- a/arch/arm/mach-alpine/Kconfig +++ b/arch/arm/mach-alpine/Kconfig @@ -9,7 +9,7 @@ config ARCH_ALPINE select HAVE_ARM_ARCH_TIMER select HAVE_SMP select MFD_SYSCON - select PCI + select FORCE_PCI select PCI_HOST_GENERIC help This enables support for the Annapurna Labs Alpine V1 boards. 
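The per-task canary work above moves the guard out of the single global __stack_chk_guard and into each task's thread_info, with copy_thread() propagating the value to new tasks. For readers who have not met the mechanism, here is a minimal userspace sketch of the check that -fstack-protector emits around a function; guard and copy_name() are invented stand-ins for __stack_chk_guard and a protected kernel function, and real compilers control the stack layout, so this models only the compare-and-abort flow:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Stand-in for __stack_chk_guard (or, per-task, thread_info::stack_canary) */
	static unsigned long guard;

	/* Hand-rolled version of what the compiler emits around a protected function */
	static void copy_name(const char *src)
	{
		unsigned long canary = guard;	/* prologue: stash a copy of the guard */
		char buf[16];

		strncpy(buf, src, sizeof(buf) - 1);
		buf[sizeof(buf) - 1] = '\0';
		printf("%s\n", buf);

		if (canary != guard) {		/* epilogue: a mismatch means corruption */
			fprintf(stderr, "stack smashing detected\n");
			abort();		/* __stack_chk_fail() analogue */
		}
	}

	int main(void)
	{
		guard = 0x3ac55aa1UL;	/* boot_init_stack_canary() analogue */
		copy_name("hello");
		return 0;
	}

With a single global guard every task must share one value; the hunks above let each task carry its own instead.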
diff --git a/arch/arm/mach-at91/Makefile b/arch/arm/mach-at91/Makefile index 7415f181907b..31b61f0e1c07 100644 --- a/arch/arm/mach-at91/Makefile +++ b/arch/arm/mach-at91/Makefile @@ -19,10 +19,9 @@ ifeq ($(CONFIG_PM_DEBUG),y) CFLAGS_pm.o += -DDEBUG endif -arch/arm/mach-at91/pm_data-offsets.s: arch/arm/mach-at91/pm_data-offsets.c - $(call if_changed_dep,cc_s_c) - include/generated/at91_pm_data-offsets.h: arch/arm/mach-at91/pm_data-offsets.s FORCE $(call filechk,offsets,__PM_DATA_OFFSETS_H__) arch/arm/mach-at91/pm_suspend.o: include/generated/at91_pm_data-offsets.h + +targets += pm_data-offsets.s diff --git a/arch/arm/mach-bcm/Kconfig b/arch/arm/mach-bcm/Kconfig index 25aac6ee2ab1..a3f375af673d 100644 --- a/arch/arm/mach-bcm/Kconfig +++ b/arch/arm/mach-bcm/Kconfig @@ -20,7 +20,7 @@ config ARCH_BCM_IPROC select GPIOLIB select ARM_AMBA select PINCTRL - select PCI_DOMAINS if PCI + select PCI_DOMAINS_GENERIC if PCI help This enables support for systems based on Broadcom IPROC architected SoCs. The IPROC complex contains one or more ARM CPUs along with common diff --git a/arch/arm/mach-ep93xx/simone.c b/arch/arm/mach-ep93xx/simone.c index 41aa57581356..80ccb984d521 100644 --- a/arch/arm/mach-ep93xx/simone.c +++ b/arch/arm/mach-ep93xx/simone.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -45,9 +46,15 @@ static struct ep93xxfb_mach_info __initdata simone_fb_info = { static struct mmc_spi_platform_data simone_mmc_spi_data = { .detect_delay = 500, .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .flags = MMC_SPI_USE_CD_GPIO, - .cd_gpio = EP93XX_GPIO_LINE_EGPIO0, - .cd_debounce = 1, +}; + +static struct gpiod_lookup_table simone_mmc_spi_gpio_table = { + .dev_id = "mmc_spi.0", /* "mmc_spi" @ CS0 */ + .table = { + /* Card detect */ + GPIO_LOOKUP_IDX("A", 0, NULL, 0, GPIO_ACTIVE_LOW), + { }, + }, }; static struct spi_board_info simone_spi_devices[] __initdata = { @@ -105,6 +112,7 @@ static void __init simone_init_machine(void) ep93xx_register_fb(&simone_fb_info); ep93xx_register_i2c(simone_i2c_board_info, ARRAY_SIZE(simone_i2c_board_info)); + gpiod_add_lookup_table(&simone_mmc_spi_gpio_table); ep93xx_register_spi(&simone_spi_info, simone_spi_devices, ARRAY_SIZE(simone_spi_devices)); simone_register_audio(); diff --git a/arch/arm/mach-ep93xx/vision_ep9307.c b/arch/arm/mach-ep93xx/vision_ep9307.c index 5a0b6187990a..767ee64628dc 100644 --- a/arch/arm/mach-ep93xx/vision_ep9307.c +++ b/arch/arm/mach-ep93xx/vision_ep9307.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -202,13 +203,20 @@ static struct mmc_spi_platform_data vision_spi_mmc_data = { .detect_delay = 100, .powerup_msecs = 100, .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .flags = MMC_SPI_USE_CD_GPIO | MMC_SPI_USE_RO_GPIO, - .cd_gpio = EP93XX_GPIO_LINE_EGPIO15, - .cd_debounce = 1, - .ro_gpio = EP93XX_GPIO_LINE_F(0), .caps2 = MMC_CAP2_RO_ACTIVE_HIGH, }; +static struct gpiod_lookup_table vision_spi_mmc_gpio_table = { + .dev_id = "mmc_spi.2", /* "mmc_spi @ CS2 */ + .table = { + /* Card detect */ + GPIO_LOOKUP_IDX("B", 7, NULL, 0, GPIO_ACTIVE_LOW), + /* Write protect */ + GPIO_LOOKUP_IDX("F", 0, NULL, 1, GPIO_ACTIVE_HIGH), + { }, + }, +}; + /************************************************************************* * SPI Bus *************************************************************************/ @@ -286,6 +294,7 @@ static void __init vision_init_machine(void) ep93xx_register_i2c(vision_i2c_info, ARRAY_SIZE(vision_i2c_info)); + gpiod_add_lookup_table(&vision_spi_mmc_gpio_table); 
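+ /* The lookup table must be registered before the mmc_spi device probes */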
ep93xx_register_spi(&vision_spi_master, vision_spi_board_info, ARRAY_SIZE(vision_spi_board_info)); vision_register_i2s(); diff --git a/arch/arm/mach-footbridge/Kconfig b/arch/arm/mach-footbridge/Kconfig index cbbdd84cf49a..816a5b89be25 100644 --- a/arch/arm/mach-footbridge/Kconfig +++ b/arch/arm/mach-footbridge/Kconfig @@ -9,7 +9,7 @@ config ARCH_CATS select FOOTBRIDGE_HOST select ISA select ISA_DMA - select PCI + select FORCE_PCI help Say Y here if you intend to run this kernel on the CATS. @@ -20,7 +20,7 @@ config ARCH_PERSONAL_SERVER select FOOTBRIDGE_HOST select ISA select ISA_DMA - select PCI + select FORCE_PCI ---help--- Say Y here if you intend to run this kernel on the Compaq Personal Server. @@ -53,7 +53,7 @@ config ARCH_EBSA285_HOST select ISA select ISA_DMA select ARCH_MAY_HAVE_PC_FDC - select PCI + select FORCE_PCI help Say Y here if you intend to run this kernel on the EBSA285 card in host ("central function") mode. @@ -67,7 +67,7 @@ config ARCH_NETWINDER select FOOTBRIDGE_HOST select ISA select ISA_DMA - select PCI + select FORCE_PCI help Say Y here if you intend to run this kernel on the Rebel.COM NetWinder. Information about this machine can be found at: diff --git a/arch/arm/mach-imx/mach-pcm043.c b/arch/arm/mach-imx/mach-pcm043.c index e595e5368676..46ba3348e8f0 100644 --- a/arch/arm/mach-imx/mach-pcm043.c +++ b/arch/arm/mach-imx/mach-pcm043.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -214,8 +215,6 @@ static const iomux_v3_cfg_t pcm043_pads[] __initconst = { #define AC97_GPIO_TXFS IMX_GPIO_NR(2, 31) #define AC97_GPIO_TXD IMX_GPIO_NR(2, 28) #define AC97_GPIO_RESET IMX_GPIO_NR(2, 0) -#define SD1_GPIO_WP IMX_GPIO_NR(2, 23) -#define SD1_GPIO_CD IMX_GPIO_NR(2, 24) static void pcm043_ac97_warm_reset(struct snd_ac97 *ac97) { @@ -341,12 +340,21 @@ static int __init pcm043_otg_mode(char *options) __setup("otg_mode=", pcm043_otg_mode); static struct esdhc_platform_data sd1_pdata = { - .wp_gpio = SD1_GPIO_WP, - .cd_gpio = SD1_GPIO_CD, .wp_type = ESDHC_WP_GPIO, .cd_type = ESDHC_CD_GPIO, }; +static struct gpiod_lookup_table sd1_gpio_table = { + .dev_id = "sdhci-esdhc-imx35.0", + .table = { + /* Card detect: bank 2 offset 24 */ + GPIO_LOOKUP("imx35-gpio.2", 24, "cd", GPIO_ACTIVE_LOW), + /* Write protect: bank 2 offset 23 */ + GPIO_LOOKUP("imx35-gpio.2", 23, "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; + /* * Board specific initialization. */ @@ -391,6 +399,7 @@ static void __init pcm043_late_init(void) { imx35_add_imx_ssi(0, &pcm043_ssi_pdata); + gpiod_add_lookup_table(&sd1_gpio_table); imx35_add_sdhci_esdhc_imx(0, &sd1_pdata); } diff --git a/arch/arm/mach-ixp4xx/Kconfig b/arch/arm/mach-ixp4xx/Kconfig index c342dc4e8a45..fea008123eb1 100644 --- a/arch/arm/mach-ixp4xx/Kconfig +++ b/arch/arm/mach-ixp4xx/Kconfig @@ -7,7 +7,7 @@ comment "IXP4xx Platforms" config MACH_NSLU2 bool prompt "Linksys NSLU2" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support Linksys's NSLU2 NAS device. For more information on this platform, @@ -15,7 +15,7 @@ config MACH_NSLU2 config MACH_AVILA bool "Avila" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support the Gateworks Avila Network Platform. For more information on this platform, @@ -31,7 +31,7 @@ config MACH_LOFT config ARCH_ADI_COYOTE bool "Coyote" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support the ADI Engineering Coyote Gateway Reference Platform. 
For more @@ -39,7 +39,7 @@ config ARCH_ADI_COYOTE config MACH_GATEWAY7001 bool "Gateway 7001" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support Gateway's 7001 Access Point. For more information on this platform, @@ -47,7 +47,7 @@ config MACH_GATEWAY7001 config MACH_WG302V2 bool "Netgear WG302 v2 / WAG302 v2" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support Netgear's WG302 v2 or WAG302 v2 Access Points. For more information @@ -107,7 +107,7 @@ config ARCH_PRPMC1100 config MACH_NAS100D bool prompt "NAS100D" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support Iomega's NAS 100d device. For more information on this platform, @@ -116,7 +116,7 @@ config MACH_NAS100D config MACH_DSMG600 bool prompt "D-Link DSM-G600 RevA" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support D-Link's DSM-G600 RevA device. For more information on this platform, @@ -130,7 +130,7 @@ config ARCH_IXDP4XX config MACH_FSG bool prompt "Freecom FSG-3" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support Freecom's FSG-3 device. For more information on this platform, @@ -139,7 +139,7 @@ config MACH_FSG config MACH_ARCOM_VULCAN bool prompt "Arcom/Eurotech Vulcan" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support Arcom's Vulcan board. @@ -160,7 +160,7 @@ config CPU_IXP43X config MACH_GTWX5715 bool "Gemtek WX5715 (Linksys WRV54G)" depends on ARCH_IXP4XX - select PCI + select FORCE_PCI help This board is currently inside the Linksys WRV54G Gateways. @@ -183,7 +183,7 @@ config MACH_DEVIXP config MACH_MICCPT bool "Omicron MICCPT" - select PCI + select FORCE_PCI help Say 'Y' here if you want your kernel to support the MICCPT board from OMICRON electronics GmbH. diff --git a/arch/arm/mach-ks8695/Kconfig b/arch/arm/mach-ks8695/Kconfig index a545976bdbd6..b3185c05fffa 100644 --- a/arch/arm/mach-ks8695/Kconfig +++ b/arch/arm/mach-ks8695/Kconfig @@ -4,7 +4,7 @@ menu "Kendin/Micrel KS8695 Implementations" config MACH_KS8695 bool "KS8695 development board" - select MIGHT_HAVE_PCI + select HAVE_PCI help Say 'Y' here if you want your kernel to run on the original Kendin-Micrel KS8695 development board. @@ -52,7 +52,7 @@ config MACH_CM4002 config MACH_CM4008 bool "OpenGear CM4008" - select MIGHT_HAVE_PCI + select HAVE_PCI help Say 'Y' here if you want your kernel to support the OpenGear CM4008 Console Server. See http://www.opengear.com for more @@ -60,7 +60,7 @@ config MACH_CM4008 config MACH_CM41xx bool "OpenGear CM41xx" - select MIGHT_HAVE_PCI + select HAVE_PCI help Say 'Y' here if you want your kernel to support the OpenGear CM4016 or CM4048 Console Servers. See http://www.opengear.com for @@ -68,7 +68,7 @@ config MACH_CM41xx config MACH_IM4004 bool "OpenGear IM4004" - select MIGHT_HAVE_PCI + select HAVE_PCI help Say 'Y' here if you want your kernel to support the OpenGear IM4004 Secure Access Server. See http://www.opengear.com for @@ -76,7 +76,7 @@ config MACH_IM4004 config MACH_IM42xx bool "OpenGear IM42xx" - select MIGHT_HAVE_PCI + select HAVE_PCI help Say 'Y' here if you want your kernel to support the OpenGear IM4216 or IM4248 Console Servers. 
See http://www.opengear.com for diff --git a/arch/arm/mach-mv78xx0/Kconfig b/arch/arm/mach-mv78xx0/Kconfig index 81c0f08a2684..d686a844a790 100644 --- a/arch/arm/mach-mv78xx0/Kconfig +++ b/arch/arm/mach-mv78xx0/Kconfig @@ -4,7 +4,7 @@ menuconfig ARCH_MV78XX0 select CPU_FEROCEON select GPIOLIB select MVEBU_MBUS - select PCI + select FORCE_PCI select PLAT_ORION_LEGACY help Support for the following Marvell MV78xx0 series SoCs: diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig index 2c20599cc350..5d6fbadd7849 100644 --- a/arch/arm/mach-mvebu/Kconfig +++ b/arch/arm/mach-mvebu/Kconfig @@ -124,7 +124,7 @@ config MACH_KIRKWOOD select MACH_MVEBU_ANY select ORION_IRQCHIP select ORION_TIMER - select PCI + select FORCE_PCI select PCI_QUIRKS select PINCTRL_KIRKWOOD help diff --git a/arch/arm/mach-omap1/ams-delta-fiq.c b/arch/arm/mach-omap1/ams-delta-fiq.c index b0dc7ddf5877..0324d0f209ea 100644 --- a/arch/arm/mach-omap1/ams-delta-fiq.c +++ b/arch/arm/mach-omap1/ams-delta-fiq.c @@ -103,7 +103,7 @@ void __init ams_delta_init_fiq(struct gpio_chip *chip, } for (i = 0; i < ARRAY_SIZE(irq_data); i++) { - gpiod = gpiochip_request_own_desc(chip, i, pin_name[i]); + gpiod = gpiochip_request_own_desc(chip, i, pin_name[i], 0); if (IS_ERR(gpiod)) { pr_err("%s: failed to get GPIO pin %d (%ld)\n", __func__, i, PTR_ERR(gpiod)); diff --git a/arch/arm/mach-omap1/board-ams-delta.c b/arch/arm/mach-omap1/board-ams-delta.c index 691a8da13fac..49e3adffbf5b 100644 --- a/arch/arm/mach-omap1/board-ams-delta.c +++ b/arch/arm/mach-omap1/board-ams-delta.c @@ -601,7 +601,7 @@ static void __init modem_assign_irq(struct gpio_chip *chip) struct gpio_desc *gpiod; gpiod = gpiochip_request_own_desc(chip, AMS_DELTA_GPIO_PIN_MODEM_IRQ, - "modem_irq"); + "modem_irq", 0); if (IS_ERR(gpiod)) { pr_err("%s: modem IRQ GPIO request failed (%ld)\n", __func__, PTR_ERR(gpiod)); @@ -809,7 +809,7 @@ static void __init ams_delta_led_init(struct gpio_chip *chip) int i; for (i = LATCH1_PIN_LED_CAMERA; i < LATCH1_PIN_DOCKIT1; i++) { - gpiod = gpiochip_request_own_desc(chip, i, NULL); + gpiod = gpiochip_request_own_desc(chip, i, "camera-led", 0); if (IS_ERR(gpiod)) { pr_warn("%s: %s GPIO %d request failed (%ld)\n", __func__, LATCH1_LABEL, i, PTR_ERR(gpiod)); diff --git a/arch/arm/mach-omap2/Makefile b/arch/arm/mach-omap2/Makefile index 899c60fac159..85d1b13c9215 100644 --- a/arch/arm/mach-omap2/Makefile +++ b/arch/arm/mach-omap2/Makefile @@ -236,10 +236,9 @@ obj-y += omap_phy_internal.o obj-$(CONFIG_MACH_OMAP2_TUSB6010) += usb-tusb6010.o -arch/arm/mach-omap2/pm-asm-offsets.s: arch/arm/mach-omap2/pm-asm-offsets.c - $(call if_changed_dep,cc_s_c) - include/generated/ti-pm-asm-offsets.h: arch/arm/mach-omap2/pm-asm-offsets.s FORCE $(call filechk,offsets,__TI_PM_ASM_OFFSETS_H__) $(obj)/sleep33xx.o $(obj)/sleep43xx.o: include/generated/ti-pm-asm-offsets.h + +targets += pm-asm-offsets.s diff --git a/arch/arm/mach-orion5x/Kconfig b/arch/arm/mach-orion5x/Kconfig index a810f4dd34b1..38c45a88c793 100644 --- a/arch/arm/mach-orion5x/Kconfig +++ b/arch/arm/mach-orion5x/Kconfig @@ -5,7 +5,7 @@ menuconfig ARCH_ORION5X select GENERIC_CLOCKEVENTS select GPIOLIB select MVEBU_MBUS - select PCI + select FORCE_PCI select PHYLIB if NETDEVICES select PLAT_ORION_LEGACY help diff --git a/arch/arm/mach-pxa/Kconfig b/arch/arm/mach-pxa/Kconfig index a68b34183107..b185794549be 100644 --- a/arch/arm/mach-pxa/Kconfig +++ b/arch/arm/mach-pxa/Kconfig @@ -125,7 +125,7 @@ config MACH_ARMCORE bool "CompuLab CM-X255/CM-X270 modules" select ARCH_HAS_DMA_SET_COHERENT_MASK if 
PCI select IWMMXT - select MIGHT_HAVE_PCI + select HAVE_PCI select NEED_MACH_IO_H if PCI select PXA25x select PXA27x diff --git a/arch/arm/mach-pxa/balloon3.c b/arch/arm/mach-pxa/balloon3.c index c52c081eb6d9..4bcbd3d55b36 100644 --- a/arch/arm/mach-pxa/balloon3.c +++ b/arch/arm/mach-pxa/balloon3.c @@ -290,9 +290,6 @@ static unsigned long balloon3_mmc_pin_config[] __initdata = { static struct pxamci_platform_data balloon3_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, .detect_delay_ms = 200, }; diff --git a/arch/arm/mach-pxa/cm-x270.c b/arch/arm/mach-pxa/cm-x270.c index be4a66166d61..f7081a50dc67 100644 --- a/arch/arm/mach-pxa/cm-x270.c +++ b/arch/arm/mach-pxa/cm-x270.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -288,14 +289,23 @@ static inline void cmx270_init_ohci(void) {} #if defined(CONFIG_MMC) || defined(CONFIG_MMC_MODULE) static struct pxamci_platform_data cmx270_mci_platform_data = { .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = GPIO83_MMC_IRQ, - .gpio_card_ro = -1, - .gpio_power = GPIO105_MMC_POWER, - .gpio_power_invert = 1, +}; + +static struct gpiod_lookup_table cmx270_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on GPIO 83 */ + GPIO_LOOKUP("gpio-pxa", GPIO83_MMC_IRQ, "cd", GPIO_ACTIVE_LOW), + /* Power on GPIO 105 */ + GPIO_LOOKUP("gpio-pxa", GPIO105_MMC_POWER, + "power", GPIO_ACTIVE_LOW), + { }, + }, }; static void __init cmx270_init_mmc(void) { + gpiod_add_lookup_table(&cmx270_mci_gpio_table); pxa_set_mci_info(&cmx270_mci_platform_data); } #else diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c index c5c0ab8ac9f9..109fab292f94 100644 --- a/arch/arm/mach-pxa/cm-x300.c +++ b/arch/arm/mach-pxa/cm-x300.c @@ -459,9 +459,17 @@ static inline void cm_x300_init_nand(void) {} static struct pxamci_platform_data cm_x300_mci_platform_data = { .detect_delay_ms = 200, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = GPIO82_MMC_IRQ, - .gpio_card_ro = GPIO85_MMC_WP, - .gpio_power = -1, +}; + +static struct gpiod_lookup_table cm_x300_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on GPIO 82 */ + GPIO_LOOKUP("gpio-pxa", GPIO82_MMC_IRQ, "cd", GPIO_ACTIVE_LOW), + /* Write protect on GPIO 85 */ + GPIO_LOOKUP("gpio-pxa", GPIO85_MMC_WP, "wp", GPIO_ACTIVE_LOW), + { }, + }, }; /* The second MMC slot of CM-X300 is hardwired to Libertas card and has @@ -482,13 +490,11 @@ static struct pxamci_platform_data cm_x300_mci2_platform_data = { .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, .init = cm_x300_mci2_init, .exit = cm_x300_mci2_exit, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static void __init cm_x300_init_mmc(void) { + gpiod_add_lookup_table(&cm_x300_mci_gpio_table); pxa_set_mci_info(&cm_x300_mci_platform_data); pxa3xx_set_mci2_info(&cm_x300_mci2_platform_data); } diff --git a/arch/arm/mach-pxa/colibri-evalboard.c b/arch/arm/mach-pxa/colibri-evalboard.c index 10e2278b7a28..2ccdef5de138 100644 --- a/arch/arm/mach-pxa/colibri-evalboard.c +++ b/arch/arm/mach-pxa/colibri-evalboard.c @@ -14,7 +14,7 @@ #include #include #include -#include +#include #include #include #include @@ -37,22 +37,44 @@ #if defined(CONFIG_MMC_PXA) || defined(CONFIG_MMC_PXA_MODULE) static struct pxamci_platform_data colibri_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_power = -1, - .gpio_card_ro = -1, .detect_delay_ms = 200, }; +static struct 
gpiod_lookup_table colibri_pxa270_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO0_COLIBRI_PXA270_SD_DETECT, + "cd", GPIO_ACTIVE_LOW), + { }, + }, +}; + +static struct gpiod_lookup_table colibri_pxa300_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO13_COLIBRI_PXA300_SD_DETECT, + "cd", GPIO_ACTIVE_LOW), + { }, + }, +}; + +static struct gpiod_lookup_table colibri_pxa320_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO28_COLIBRI_PXA320_SD_DETECT, + "cd", GPIO_ACTIVE_LOW), + { }, + }, +}; + static void __init colibri_mmc_init(void) { if (machine_is_colibri()) /* PXA270 Colibri */ - colibri_mci_platform_data.gpio_card_detect = - GPIO0_COLIBRI_PXA270_SD_DETECT; + gpiod_add_lookup_table(&colibri_pxa270_mci_gpio_table); if (machine_is_colibri300()) /* PXA300 Colibri */ - colibri_mci_platform_data.gpio_card_detect = - GPIO13_COLIBRI_PXA300_SD_DETECT; + gpiod_add_lookup_table(&colibri_pxa300_mci_gpio_table); else /* PXA320 Colibri */ - colibri_mci_platform_data.gpio_card_detect = - GPIO28_COLIBRI_PXA320_SD_DETECT; + gpiod_add_lookup_table(&colibri_pxa320_mci_gpio_table); pxa_set_mci_info(&colibri_mci_platform_data); } diff --git a/arch/arm/mach-pxa/colibri-pxa270-income.c b/arch/arm/mach-pxa/colibri-pxa270-income.c index 3ccf2a95569b..d203dd30cdd0 100644 --- a/arch/arm/mach-pxa/colibri-pxa270-income.c +++ b/arch/arm/mach-pxa/colibri-pxa270-income.c @@ -14,7 +14,7 @@ #include #include -#include +#include #include #include #include @@ -51,14 +51,25 @@ #if defined(CONFIG_MMC_PXA) || defined(CONFIG_MMC_PXA_MODULE) static struct pxamci_platform_data income_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_power = -1, - .gpio_card_detect = GPIO0_INCOME_SD_DETECT, - .gpio_card_ro = GPIO0_INCOME_SD_RO, .detect_delay_ms = 200, }; +static struct gpiod_lookup_table income_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on GPIO 0 */ + GPIO_LOOKUP("gpio-pxa", GPIO0_INCOME_SD_DETECT, + "cd", GPIO_ACTIVE_LOW), + /* Write protect on GPIO 1 */ + GPIO_LOOKUP("gpio-pxa", GPIO0_INCOME_SD_RO, + "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; + static void __init income_mmc_init(void) { + gpiod_add_lookup_table(&income_mci_gpio_table); pxa_set_mci_info(&income_mci_platform_data); } #else diff --git a/arch/arm/mach-pxa/corgi.c b/arch/arm/mach-pxa/corgi.c index 9a5a35e90769..c9732cace5e3 100644 --- a/arch/arm/mach-pxa/corgi.c +++ b/arch/arm/mach-pxa/corgi.c @@ -24,6 +24,7 @@ #include #include #include +#include #include #include #include @@ -493,11 +494,23 @@ static struct platform_device corgi_audio_device = { static struct pxamci_platform_data corgi_mci_platform_data = { .detect_delay_ms = 250, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = CORGI_GPIO_nSD_DETECT, - .gpio_card_ro = CORGI_GPIO_nSD_WP, - .gpio_power = CORGI_GPIO_SD_PWR, }; +static struct gpiod_lookup_table corgi_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on GPIO 9 */ + GPIO_LOOKUP("gpio-pxa", CORGI_GPIO_nSD_DETECT, + "cd", GPIO_ACTIVE_LOW), + /* Write protect on GPIO 7 */ + GPIO_LOOKUP("gpio-pxa", CORGI_GPIO_nSD_WP, + "wp", GPIO_ACTIVE_LOW), + /* Power on GPIO 33 */ + GPIO_LOOKUP("gpio-pxa", CORGI_GPIO_SD_PWR, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +}; /* * Irda @@ -731,6 +744,7 @@ static void __init corgi_init(void) corgi_init_spi(); pxa_set_udc_info(&udc_info); + gpiod_add_lookup_table(&corgi_mci_gpio_table); 
pxa_set_mci_info(&corgi_mci_platform_data); pxa_set_ficp_info(&corgi_ficp_platform_data); pxa_set_i2c_info(NULL); diff --git a/arch/arm/mach-pxa/csb726.c b/arch/arm/mach-pxa/csb726.c index 271aedae7542..e26e7e60a169 100644 --- a/arch/arm/mach-pxa/csb726.c +++ b/arch/arm/mach-pxa/csb726.c @@ -11,7 +11,7 @@ #include #include #include -#include +#include #include #include #include @@ -129,9 +129,19 @@ static struct pxamci_platform_data csb726_mci = { .detect_delay_ms = 500, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, /* FIXME setpower */ - .gpio_card_detect = CSB726_GPIO_MMC_DETECT, - .gpio_card_ro = CSB726_GPIO_MMC_RO, - .gpio_power = -1, +}; + +static struct gpiod_lookup_table csb726_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on GPIO 100 */ + GPIO_LOOKUP("gpio-pxa", CSB726_GPIO_MMC_DETECT, + "cd", GPIO_ACTIVE_LOW), + /* Write protect on GPIO 101 */ + GPIO_LOOKUP("gpio-pxa", CSB726_GPIO_MMC_RO, + "wp", GPIO_ACTIVE_LOW), + { }, + }, }; static struct pxaohci_platform_data csb726_ohci_platform_data = { @@ -264,6 +274,7 @@ static void __init csb726_init(void) pxa_set_stuart_info(NULL); pxa_set_i2c_info(NULL); pxa27x_set_i2c_power_info(NULL); + gpiod_add_lookup_table(&csb726_mci_gpio_table); pxa_set_mci_info(&csb726_mci); pxa_set_ohci_info(&csb726_ohci_platform_data); pxa_set_ac97_info(NULL); diff --git a/arch/arm/mach-pxa/em-x270.c b/arch/arm/mach-pxa/em-x270.c index 67e37df637f5..32c1edeb3f14 100644 --- a/arch/arm/mach-pxa/em-x270.c +++ b/arch/arm/mach-pxa/em-x270.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -546,6 +547,15 @@ static inline void em_x270_init_ohci(void) {} #if defined(CONFIG_MMC) || defined(CONFIG_MMC_MODULE) static struct regulator *em_x270_sdio_ldo; +static struct gpiod_lookup_table em_x270_mci_wp_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Write protect on GPIO 95 */ + GPIO_LOOKUP("gpio-pxa", GPIO95_MMC_WP, "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; + static int em_x270_mci_init(struct device *dev, irq_handler_t em_x270_detect_int, void *data) @@ -567,15 +577,7 @@ static int em_x270_mci_init(struct device *dev, goto err_irq; } - if (machine_is_em_x270()) { - err = gpio_request(GPIO95_MMC_WP, "MMC WP"); - if (err) { - dev_err(dev, "can't request MMC write protect: %d\n", - err); - goto err_gpio_wp; - } - gpio_direction_input(GPIO95_MMC_WP); - } else { + if (!machine_is_em_x270()) { err = gpio_request(GPIO38_SD_PWEN, "sdio power"); if (err) { dev_err(dev, "can't request MMC power control : %d\n", @@ -615,17 +617,10 @@ static void em_x270_mci_exit(struct device *dev, void *data) free_irq(gpio_to_irq(mmc_cd), data); regulator_put(em_x270_sdio_ldo); - if (machine_is_em_x270()) - gpio_free(GPIO95_MMC_WP); - else + if (!machine_is_em_x270()) gpio_free(GPIO38_SD_PWEN); } -static int em_x270_mci_get_ro(struct device *dev) -{ - return gpio_get_value(GPIO95_MMC_WP); -} - static struct pxamci_platform_data em_x270_mci_platform_data = { .detect_delay_ms = 250, .ocr_mask = MMC_VDD_20_21|MMC_VDD_21_22|MMC_VDD_22_23| @@ -635,15 +630,12 @@ static struct pxamci_platform_data em_x270_mci_platform_data = { .init = em_x270_mci_init, .setpower = em_x270_mci_setpower, .exit = em_x270_mci_exit, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static void __init em_x270_init_mmc(void) { if (machine_is_em_x270()) - em_x270_mci_platform_data.get_ro = em_x270_mci_get_ro; + gpiod_add_lookup_table(&em_x270_mci_wp_gpio_table); pxa_set_mci_info(&em_x270_mci_platform_data); } diff --git 
a/arch/arm/mach-pxa/gumstix.c b/arch/arm/mach-pxa/gumstix.c index 9c5b2fb054f9..4764acca5480 100644 --- a/arch/arm/mach-pxa/gumstix.c +++ b/arch/arm/mach-pxa/gumstix.c @@ -90,9 +90,6 @@ static struct platform_device *devices[] __initdata = { #ifdef CONFIG_MMC_PXA static struct pxamci_platform_data gumstix_mci_platform_data = { .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static void __init gumstix_mmc_init(void) diff --git a/arch/arm/mach-pxa/idp.c b/arch/arm/mach-pxa/idp.c index 88e0068f92a8..7bfc246a1d75 100644 --- a/arch/arm/mach-pxa/idp.c +++ b/arch/arm/mach-pxa/idp.c @@ -160,9 +160,6 @@ static struct pxafb_mach_info sharp_lm8v31 = { static struct pxamci_platform_data idp_mci_platform_data = { .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static void __init idp_init(void) diff --git a/arch/arm/mach-pxa/littleton.c b/arch/arm/mach-pxa/littleton.c index 9e132b3e48c6..8e0b60a33026 100644 --- a/arch/arm/mach-pxa/littleton.c +++ b/arch/arm/mach-pxa/littleton.c @@ -20,7 +20,7 @@ #include #include #include -#include +#include #include #include #include @@ -51,8 +51,6 @@ #include "generic.h" -#define GPIO_MMC1_CARD_DETECT mfp_to_gpio(MFP_PIN_GPIO15) - /* Littleton MFP configurations */ static mfp_cfg_t littleton_mfp_cfg[] __initdata = { /* LCD */ @@ -278,13 +276,21 @@ static inline void littleton_init_keypad(void) {} static struct pxamci_platform_data littleton_mci_platform_data = { .detect_delay_ms = 200, .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_card_detect = GPIO_MMC1_CARD_DETECT, - .gpio_card_ro = -1, - .gpio_power = -1, +}; + +static struct gpiod_lookup_table littleton_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on MFP (gpio-pxa) GPIO 15 */ + GPIO_LOOKUP("gpio-pxa", MFP_PIN_GPIO15, + "cd", GPIO_ACTIVE_LOW), + { }, + }, }; static void __init littleton_init_mmc(void) { + gpiod_add_lookup_table(&littleton_mci_gpio_table); pxa_set_mci_info(&littleton_mci_platform_data); } #else diff --git a/arch/arm/mach-pxa/lubbock.c b/arch/arm/mach-pxa/lubbock.c index fe2ef9b78602..c576e8462043 100644 --- a/arch/arm/mach-pxa/lubbock.c +++ b/arch/arm/mach-pxa/lubbock.c @@ -440,9 +440,6 @@ static struct pxamci_platform_data lubbock_mci_platform_data = { .init = lubbock_mci_init, .get_ro = lubbock_mci_get_ro, .exit = lubbock_mci_exit, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static void lubbock_irda_transceiver_mode(struct device *dev, int mode) diff --git a/arch/arm/mach-pxa/magician.c b/arch/arm/mach-pxa/magician.c index 14c0f80bc9e7..08b079653c3f 100644 --- a/arch/arm/mach-pxa/magician.c +++ b/arch/arm/mach-pxa/magician.c @@ -775,12 +775,31 @@ static struct pxamci_platform_data magician_mci_info = { .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, .init = magician_mci_init, .exit = magician_mci_exit, - .gpio_card_detect = -1, - .gpio_card_ro = EGPIO_MAGICIAN_nSD_READONLY, .gpio_card_ro_invert = 1, - .gpio_power = EGPIO_MAGICIAN_SD_POWER, }; +/* + * Write protect on EGPIO register 5 index 4, this is on the second HTC + * EGPIO chip which starts at register 4, so we need offset 8+4=12 on that + * particular chip. + */ +#define EGPIO_MAGICIAN_nSD_READONLY_OFFSET 12 +/* + * Power on EGPIO register 2 index 0, so this is on the first HTC EGPIO chip + * starting at register 0 so we need offset 2*8+0 = 16 on that chip. 
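+ * In both cases offset = (register - the chip's first register) * 8 + index, + * each EGPIO register being 8 bits wide.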
+ */ +#define EGPIO_MAGICIAN_nSD_POWER_OFFSET 16 + +static struct gpiod_lookup_table magician_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("htc-egpio-1", EGPIO_MAGICIAN_nSD_READONLY_OFFSET, + "wp", GPIO_ACTIVE_HIGH), + GPIO_LOOKUP("htc-egpio-0", EGPIO_MAGICIAN_nSD_POWER_OFFSET, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +}; /* * USB OHCI @@ -979,6 +998,7 @@ static void __init magician_init(void) i2c_register_board_info(1, ARRAY_AND_SIZE(magician_pwr_i2c_board_info)); + gpiod_add_lookup_table(&magician_mci_gpio_table); pxa_set_mci_info(&magician_mci_info); pxa_set_ohci_info(&magician_ohci_info); pxa_set_udc_info(&magician_udc_info); diff --git a/arch/arm/mach-pxa/mainstone.c b/arch/arm/mach-pxa/mainstone.c index afd62a94fdbf..9e39fc2ad2d9 100644 --- a/arch/arm/mach-pxa/mainstone.c +++ b/arch/arm/mach-pxa/mainstone.c @@ -361,9 +361,6 @@ static struct pxamci_platform_data mainstone_mci_platform_data = { .init = mainstone_mci_init, .setpower = mainstone_mci_setpower, .exit = mainstone_mci_exit, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static void mainstone_irda_transceiver_mode(struct device *dev, int mode) diff --git a/arch/arm/mach-pxa/mioa701.c b/arch/arm/mach-pxa/mioa701.c index 04dc78d0809f..d0fa5c72622d 100644 --- a/arch/arm/mach-pxa/mioa701.c +++ b/arch/arm/mach-pxa/mioa701.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include #include @@ -397,9 +398,22 @@ struct gpio_vbus_mach_info gpio_vbus_data = { static struct pxamci_platform_data mioa701_mci_info = { .detect_delay_ms = 250, .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_card_detect = GPIO15_SDIO_INSERT, - .gpio_card_ro = GPIO78_SDIO_RO, - .gpio_power = GPIO91_SDIO_EN, +}; + +static struct gpiod_lookup_table mioa701_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on GPIO 15 */ + GPIO_LOOKUP("gpio-pxa", GPIO15_SDIO_INSERT, + "cd", GPIO_ACTIVE_LOW), + /* Write protect on GPIO 78 */ + GPIO_LOOKUP("gpio-pxa", GPIO78_SDIO_RO, + "wp", GPIO_ACTIVE_LOW), + /* Power on GPIO 91 */ + GPIO_LOOKUP("gpio-pxa", GPIO91_SDIO_EN, + "power", GPIO_ACTIVE_HIGH), + { }, + }, }; /* FlashRAM */ @@ -743,6 +757,7 @@ static void __init mioa701_machine_init(void) pr_err("MioA701: Failed to request GPIOs: %d", rc); bootstrap_init(); pxa_set_fb_info(NULL, &mioa701_pxafb_info); + gpiod_add_lookup_table(&mioa701_mci_gpio_table); pxa_set_mci_info(&mioa701_mci_info); pxa_set_keypad_info(&mioa701_keypad_info); pxa_set_udc_info(&mioa701_udc_info); diff --git a/arch/arm/mach-pxa/mxm8x10.c b/arch/arm/mach-pxa/mxm8x10.c index 616b22397d73..e4248a3a8dfc 100644 --- a/arch/arm/mach-pxa/mxm8x10.c +++ b/arch/arm/mach-pxa/mxm8x10.c @@ -21,7 +21,7 @@ #include #include -#include +#include #include #include @@ -326,13 +326,24 @@ static mfp_cfg_t mfp_cfg[] __initdata = { static struct pxamci_platform_data mxm_8x10_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, .detect_delay_ms = 10, - .gpio_card_detect = MXM_8X10_SD_nCD, - .gpio_card_ro = MXM_8X10_SD_WP, - .gpio_power = -1 +}; + +static struct gpiod_lookup_table mxm_8x10_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + /* Card detect on GPIO 72 */ + GPIO_LOOKUP("gpio-pxa", MXM_8X10_SD_nCD, + "cd", GPIO_ACTIVE_LOW), + /* Write protect on GPIO 84 */ + GPIO_LOOKUP("gpio-pxa", MXM_8X10_SD_WP, + "wp", GPIO_ACTIVE_LOW), + { }, + }, }; void __init mxm_8x10_mmc_init(void) { + gpiod_add_lookup_table(&mxm_8x10_mci_gpio_table); pxa_set_mci_info(&mxm_8x10_mci_platform_data); } #endif diff --git 
a/arch/arm/mach-pxa/palm27x.c b/arch/arm/mach-pxa/palm27x.c index 1efe9bcf07fa..b94c45f65215 100644 --- a/arch/arm/mach-pxa/palm27x.c +++ b/arch/arm/mach-pxa/palm27x.c @@ -49,14 +49,10 @@ static struct pxamci_platform_data palm27x_mci_platform_data = { .detect_delay_ms = 200, }; -void __init palm27x_mmc_init(int detect, int ro, int power, - int power_inverted) +void __init palm27x_mmc_init(struct gpiod_lookup_table *gtable) { - palm27x_mci_platform_data.gpio_card_detect = detect; - palm27x_mci_platform_data.gpio_card_ro = ro; - palm27x_mci_platform_data.gpio_power = power; - palm27x_mci_platform_data.gpio_power_invert = power_inverted; - + if (gtable) + gpiod_add_lookup_table(gtable); pxa_set_mci_info(&palm27x_mci_platform_data); } #endif diff --git a/arch/arm/mach-pxa/palm27x.h b/arch/arm/mach-pxa/palm27x.h index d4eac3d6ffb5..cd071f876132 100644 --- a/arch/arm/mach-pxa/palm27x.h +++ b/arch/arm/mach-pxa/palm27x.h @@ -12,12 +12,12 @@ #ifndef __INCLUDE_MACH_PALM27X__ #define __INCLUDE_MACH_PALM27X__ +#include + #if defined(CONFIG_MMC_PXA) || defined(CONFIG_MMC_PXA_MODULE) -extern void __init palm27x_mmc_init(int detect, int ro, int power, - int power_inverted); +extern void __init palm27x_mmc_init(struct gpiod_lookup_table *gtable); #else -static inline void palm27x_mmc_init(int detect, int ro, int power, - int power_inverted) +static inline void palm27x_mmc_init(struct gpiod_lookup_table *gtable) {} #endif diff --git a/arch/arm/mach-pxa/palmld.c b/arch/arm/mach-pxa/palmld.c index 980f2847f5b5..bf2b0cfc86df 100644 --- a/arch/arm/mach-pxa/palmld.c +++ b/arch/arm/mach-pxa/palmld.c @@ -288,8 +288,20 @@ static struct platform_device palmld_ide_device = { .id = -1, }; +static struct gpiod_lookup_table palmld_ide_gpio_table = { + .dev_id = "pata_palmld", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMLD_IDE_PWEN, + "power", GPIO_ACTIVE_HIGH), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMLD_IDE_RESET, + "reset", GPIO_ACTIVE_LOW), + { }, + }, +}; + static void __init palmld_ide_init(void) { + gpiod_add_lookup_table(&palmld_ide_gpio_table); platform_device_register(&palmld_ide_device); } #else @@ -320,6 +332,19 @@ static void __init palmld_map_io(void) iotable_init(palmld_io_desc, ARRAY_SIZE(palmld_io_desc)); } +static struct gpiod_lookup_table palmld_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMLD_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMLD_SD_READONLY, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMLD_SD_POWER, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +}; + static void __init palmld_init(void) { pxa2xx_mfp_config(ARRAY_AND_SIZE(palmld_pin_config)); @@ -327,8 +352,7 @@ static void __init palmld_init(void) pxa_set_btuart_info(NULL); pxa_set_stuart_info(NULL); - palm27x_mmc_init(GPIO_NR_PALMLD_SD_DETECT_N, GPIO_NR_PALMLD_SD_READONLY, - GPIO_NR_PALMLD_SD_POWER, 0); + palm27x_mmc_init(&palmld_mci_gpio_table); palm27x_pm_init(PALMLD_STR_BASE); palm27x_lcd_init(-1, &palm_320x480_lcd_mode); palm27x_irda_init(GPIO_NR_PALMLD_IR_DISABLE); diff --git a/arch/arm/mach-pxa/palmt5.c b/arch/arm/mach-pxa/palmt5.c index 876144aa3564..8811f11f670e 100644 --- a/arch/arm/mach-pxa/palmt5.c +++ b/arch/arm/mach-pxa/palmt5.c @@ -182,6 +182,19 @@ static void __init palmt5_reserve(void) memblock_reserve(0xa0200000, 0x1000); } +static struct gpiod_lookup_table palmt5_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMT5_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + 
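/* Write protect */ +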
GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMT5_SD_READONLY, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMT5_SD_POWER, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +}; + static void __init palmt5_init(void) { pxa2xx_mfp_config(ARRAY_AND_SIZE(palmt5_pin_config)); @@ -189,8 +202,7 @@ static void __init palmt5_init(void) pxa_set_btuart_info(NULL); pxa_set_stuart_info(NULL); - palm27x_mmc_init(GPIO_NR_PALMT5_SD_DETECT_N, GPIO_NR_PALMT5_SD_READONLY, - GPIO_NR_PALMT5_SD_POWER, 0); + palm27x_mmc_init(&palmt5_mci_gpio_table); palm27x_pm_init(PALMT5_STR_BASE); palm27x_lcd_init(-1, &palm_320x480_lcd_mode); palm27x_udc_init(GPIO_NR_PALMT5_USB_DETECT_N, diff --git a/arch/arm/mach-pxa/palmtc.c b/arch/arm/mach-pxa/palmtc.c index 18946594a7c8..7ce4fc287115 100644 --- a/arch/arm/mach-pxa/palmtc.c +++ b/arch/arm/mach-pxa/palmtc.c @@ -20,7 +20,7 @@ #include #include #include -#include +#include #include #include #include @@ -120,14 +120,25 @@ static unsigned long palmtc_pin_config[] __initdata = { #if defined(CONFIG_MMC_PXA) || defined(CONFIG_MMC_PXA_MODULE) static struct pxamci_platform_data palmtc_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_power = GPIO_NR_PALMTC_SD_POWER, - .gpio_card_ro = GPIO_NR_PALMTC_SD_READONLY, - .gpio_card_detect = GPIO_NR_PALMTC_SD_DETECT_N, .detect_delay_ms = 200, }; +static struct gpiod_lookup_table palmtc_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTC_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTC_SD_READONLY, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTC_SD_POWER, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +}; + static void __init palmtc_mmc_init(void) { + gpiod_add_lookup_table(&palmtc_mci_gpio_table); pxa_set_mci_info(&palmtc_mci_platform_data); } #else diff --git a/arch/arm/mach-pxa/palmte2.c b/arch/arm/mach-pxa/palmte2.c index 36b46141a28b..e830005af8d0 100644 --- a/arch/arm/mach-pxa/palmte2.c +++ b/arch/arm/mach-pxa/palmte2.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -101,9 +102,19 @@ static unsigned long palmte2_pin_config[] __initdata = { ******************************************************************************/ static struct pxamci_platform_data palmte2_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_card_detect = GPIO_NR_PALMTE2_SD_DETECT_N, - .gpio_card_ro = GPIO_NR_PALMTE2_SD_READONLY, - .gpio_power = GPIO_NR_PALMTE2_SD_POWER, +}; + +static struct gpiod_lookup_table palmte2_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTE2_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTE2_SD_READONLY, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTE2_SD_POWER, + "power", GPIO_ACTIVE_HIGH), + { }, + }, }; #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE) @@ -354,6 +365,7 @@ static void __init palmte2_init(void) pxa_set_stuart_info(NULL); pxa_set_fb_info(NULL, &palmte2_lcd_screen); + gpiod_add_lookup_table(&palmte2_mci_gpio_table); pxa_set_mci_info(&palmte2_mci_platform_data); palmte2_udc_init(); pxa_set_ac97_info(&palmte2_ac97_pdata); diff --git a/arch/arm/mach-pxa/palmtreo.c b/arch/arm/mach-pxa/palmtreo.c index b66b0b11d717..70f1a8a3aa94 100644 --- a/arch/arm/mach-pxa/palmtreo.c +++ b/arch/arm/mach-pxa/palmtreo.c @@ -480,23 +480,46 @@ void __init treo680_gpio_init(void) gpio_free(GPIO_NR_TREO680_LCD_EN_N); } +static struct gpiod_lookup_table 
treo680_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_TREO_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_TREO680_SD_READONLY, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_TREO680_SD_POWER, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +}; + static void __init treo680_init(void) { pxa2xx_mfp_config(ARRAY_AND_SIZE(treo680_pin_config)); palmphone_common_init(); treo680_gpio_init(); - palm27x_mmc_init(GPIO_NR_TREO_SD_DETECT_N, GPIO_NR_TREO680_SD_READONLY, - GPIO_NR_TREO680_SD_POWER, 0); + palm27x_mmc_init(&treo680_mci_gpio_table); } #endif #ifdef CONFIG_MACH_CENTRO + +static struct gpiod_lookup_table centro685_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_TREO_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_CENTRO_SD_POWER, + "power", GPIO_ACTIVE_LOW), + { }, + }, +}; + static void __init centro_init(void) { pxa2xx_mfp_config(ARRAY_AND_SIZE(centro685_pin_config)); palmphone_common_init(); - palm27x_mmc_init(GPIO_NR_TREO_SD_DETECT_N, -1, - GPIO_NR_CENTRO_SD_POWER, 1); + palm27x_mmc_init(¢ro685_mci_gpio_table); } #endif diff --git a/arch/arm/mach-pxa/palmtx.c b/arch/arm/mach-pxa/palmtx.c index 1d06a8e91d8f..ef71bf2abb47 100644 --- a/arch/arm/mach-pxa/palmtx.c +++ b/arch/arm/mach-pxa/palmtx.c @@ -337,6 +337,19 @@ static void __init palmtx_map_io(void) iotable_init(palmtx_io_desc, ARRAY_SIZE(palmtx_io_desc)); } +static struct gpiod_lookup_table palmtx_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTX_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTX_SD_READONLY, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMTX_SD_POWER, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +}; + static void __init palmtx_init(void) { pxa2xx_mfp_config(ARRAY_AND_SIZE(palmtx_pin_config)); @@ -344,8 +357,7 @@ static void __init palmtx_init(void) pxa_set_btuart_info(NULL); pxa_set_stuart_info(NULL); - palm27x_mmc_init(GPIO_NR_PALMTX_SD_DETECT_N, GPIO_NR_PALMTX_SD_READONLY, - GPIO_NR_PALMTX_SD_POWER, 0); + palm27x_mmc_init(&palmtx_mci_gpio_table); palm27x_pm_init(PALMTX_STR_BASE); palm27x_lcd_init(-1, &palm_320x480_lcd_mode); palm27x_udc_init(GPIO_NR_PALMTX_USB_DETECT_N, diff --git a/arch/arm/mach-pxa/palmz72.c b/arch/arm/mach-pxa/palmz72.c index 4d475f6f4a77..ea1c7b2ed8d4 100644 --- a/arch/arm/mach-pxa/palmz72.c +++ b/arch/arm/mach-pxa/palmz72.c @@ -386,6 +386,19 @@ static void __init palmz72_camera_init(void) static inline void palmz72_camera_init(void) {} #endif +static struct gpiod_lookup_table palmz72_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMZ72_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMZ72_SD_RO, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO_NR_PALMZ72_SD_POWER_N, + "power", GPIO_ACTIVE_LOW), + { }, + }, +}; + /****************************************************************************** * Machine init ******************************************************************************/ @@ -396,8 +409,7 @@ static void __init palmz72_init(void) pxa_set_btuart_info(NULL); pxa_set_stuart_info(NULL); - palm27x_mmc_init(GPIO_NR_PALMZ72_SD_DETECT_N, GPIO_NR_PALMZ72_SD_RO, - GPIO_NR_PALMZ72_SD_POWER_N, 1); + palm27x_mmc_init(&palmz72_mci_gpio_table); palm27x_lcd_init(-1, &palm_320x320_lcd_mode); palm27x_udc_init(GPIO_NR_PALMZ72_USB_DETECT_N, GPIO_NR_PALMZ72_USB_PULLUP, 0); diff 
--git a/arch/arm/mach-pxa/pcm990-baseboard.c b/arch/arm/mach-pxa/pcm990-baseboard.c index 973568d4b9ec..be19e3a4eacc 100644 --- a/arch/arm/mach-pxa/pcm990-baseboard.c +++ b/arch/arm/mach-pxa/pcm990-baseboard.c @@ -370,9 +370,6 @@ static struct pxamci_platform_data pcm990_mci_platform_data = { .init = pcm990_mci_init, .setpower = pcm990_mci_setpower, .exit = pcm990_mci_exit, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static struct pxaohci_platform_data pcm990_ohci_platform_data = { diff --git a/arch/arm/mach-pxa/poodle.c b/arch/arm/mach-pxa/poodle.c index 1adde1251e2b..c2a43d4cfd3e 100644 --- a/arch/arm/mach-pxa/poodle.c +++ b/arch/arm/mach-pxa/poodle.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -288,11 +289,18 @@ static struct pxamci_platform_data poodle_mci_platform_data = { .init = poodle_mci_init, .setpower = poodle_mci_setpower, .exit = poodle_mci_exit, - .gpio_card_detect = POODLE_GPIO_nSD_DETECT, - .gpio_card_ro = POODLE_GPIO_nSD_WP, - .gpio_power = -1, }; +static struct gpiod_lookup_table poodle_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", POODLE_GPIO_nSD_DETECT, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", POODLE_GPIO_nSD_WP, + "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; /* * Irda @@ -439,6 +447,7 @@ static void __init poodle_init(void) pxa_set_fb_info(&poodle_locomo_device.dev, &poodle_fb_info); pxa_set_udc_info(&udc_info); + gpiod_add_lookup_table(&poodle_mci_gpio_table); pxa_set_mci_info(&poodle_mci_platform_data); pxa_set_ficp_info(&poodle_ficp_platform_data); pxa_set_i2c_info(NULL); diff --git a/arch/arm/mach-pxa/raumfeld.c b/arch/arm/mach-pxa/raumfeld.c index bd3c23ad6ce6..e1db072756f2 100644 --- a/arch/arm/mach-pxa/raumfeld.c +++ b/arch/arm/mach-pxa/raumfeld.c @@ -749,9 +749,6 @@ static struct pxamci_platform_data raumfeld_mci_platform_data = { .init = raumfeld_mci_init, .exit = raumfeld_mci_exit, .detect_delay_ms = 200, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; /* diff --git a/arch/arm/mach-pxa/spitz.c b/arch/arm/mach-pxa/spitz.c index 5d50025492b7..306818e2cf54 100644 --- a/arch/arm/mach-pxa/spitz.c +++ b/arch/arm/mach-pxa/spitz.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -615,13 +616,22 @@ static struct pxamci_platform_data spitz_mci_platform_data = { .detect_delay_ms = 250, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, .setpower = spitz_mci_setpower, - .gpio_card_detect = SPITZ_GPIO_nSD_DETECT, - .gpio_card_ro = SPITZ_GPIO_nSD_WP, - .gpio_power = -1, +}; + +static struct gpiod_lookup_table spitz_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", SPITZ_GPIO_nSD_DETECT, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", SPITZ_GPIO_nSD_WP, + "wp", GPIO_ACTIVE_LOW), + { }, + }, }; static void __init spitz_mmc_init(void) { + gpiod_add_lookup_table(&spitz_mci_gpio_table); pxa_set_mci_info(&spitz_mci_platform_data); } #else diff --git a/arch/arm/mach-pxa/stargate2.c b/arch/arm/mach-pxa/stargate2.c index bbea5fa9a140..e0d6c872270a 100644 --- a/arch/arm/mach-pxa/stargate2.c +++ b/arch/arm/mach-pxa/stargate2.c @@ -436,9 +436,6 @@ static int imote2_mci_get_ro(struct device *dev) static struct pxamci_platform_data imote2_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, /* default anyway */ .get_ro = imote2_mci_get_ro, - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; static struct gpio_led imote2_led_pins[] = { diff --git 
a/arch/arm/mach-pxa/tosa.c b/arch/arm/mach-pxa/tosa.c index cb5cd8e78c94..e8a93c088c35 100644 --- a/arch/arm/mach-pxa/tosa.c +++ b/arch/arm/mach-pxa/tosa.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include #include @@ -291,9 +292,19 @@ static struct pxamci_platform_data tosa_mci_platform_data = { .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, .init = tosa_mci_init, .exit = tosa_mci_exit, - .gpio_card_detect = TOSA_GPIO_nSD_DETECT, - .gpio_card_ro = TOSA_GPIO_SD_WP, - .gpio_power = TOSA_GPIO_PWR_ON, +}; + +static struct gpiod_lookup_table tosa_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_nSD_DETECT, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_SD_WP, + "wp", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_PWR_ON, + "power", GPIO_ACTIVE_HIGH), + { }, + }, }; /* @@ -908,6 +919,7 @@ static void __init tosa_init(void) /* enable batt_fault */ PMCR = 0x01; + gpiod_add_lookup_table(&tosa_mci_gpio_table); pxa_set_mci_info(&tosa_mci_platform_data); pxa_set_ficp_info(&tosa_ficp_platform_data); pxa_set_i2c_info(NULL); diff --git a/arch/arm/mach-pxa/trizeps4.c b/arch/arm/mach-pxa/trizeps4.c index 55b8c501b6fc..c76f1daecfc9 100644 --- a/arch/arm/mach-pxa/trizeps4.c +++ b/arch/arm/mach-pxa/trizeps4.c @@ -355,9 +355,6 @@ static struct pxamci_platform_data trizeps4_mci_platform_data = { .exit = trizeps4_mci_exit, .get_ro = NULL, /* write-protection not supported */ .setpower = NULL, /* power-switching not supported */ - .gpio_card_detect = -1, - .gpio_card_ro = -1, - .gpio_power = -1, }; /**************************************************************************** diff --git a/arch/arm/mach-pxa/vpac270.c b/arch/arm/mach-pxa/vpac270.c index f65dfb6e20e2..829284406fa3 100644 --- a/arch/arm/mach-pxa/vpac270.c +++ b/arch/arm/mach-pxa/vpac270.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -240,14 +241,23 @@ static void __init vpac270_onenand_init(void) {} #if defined(CONFIG_MMC_PXA) || defined(CONFIG_MMC_PXA_MODULE) static struct pxamci_platform_data vpac270_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_power = -1, - .gpio_card_detect = GPIO53_VPAC270_SD_DETECT_N, - .gpio_card_ro = GPIO52_VPAC270_SD_READONLY, .detect_delay_ms = 200, }; +static struct gpiod_lookup_table vpac270_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO53_VPAC270_SD_DETECT_N, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", GPIO52_VPAC270_SD_READONLY, + "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; + static void __init vpac270_mmc_init(void) { + gpiod_add_lookup_table(&vpac270_mci_gpio_table); pxa_set_mci_info(&vpac270_mci_platform_data); } #else diff --git a/arch/arm/mach-pxa/z2.c b/arch/arm/mach-pxa/z2.c index 6fffcfc4621e..e2353e75bb28 100644 --- a/arch/arm/mach-pxa/z2.c +++ b/arch/arm/mach-pxa/z2.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -290,14 +291,21 @@ static inline void z2_lcd_init(void) {} #if defined(CONFIG_MMC_PXA) || defined(CONFIG_MMC_PXA_MODULE) static struct pxamci_platform_data z2_mci_platform_data = { .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, - .gpio_card_detect = GPIO96_ZIPITZ2_SD_DETECT, - .gpio_power = -1, - .gpio_card_ro = -1, .detect_delay_ms = 200, }; +static struct gpiod_lookup_table z2_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", GPIO96_ZIPITZ2_SD_DETECT, + "cd", GPIO_ACTIVE_LOW), + { }, + }, +}; + static void __init z2_mmc_init(void) 
{ + gpiod_add_lookup_table(&z2_mci_gpio_table); pxa_set_mci_info(&z2_mci_platform_data); } #else diff --git a/arch/arm/mach-pxa/zeus.c b/arch/arm/mach-pxa/zeus.c index d53ea12fc766..897ef59fbe0c 100644 --- a/arch/arm/mach-pxa/zeus.c +++ b/arch/arm/mach-pxa/zeus.c @@ -663,10 +663,18 @@ static struct pxafb_mach_info zeus_fb_info = { static struct pxamci_platform_data zeus_mci_platform_data = { .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, .detect_delay_ms = 250, - .gpio_card_detect = ZEUS_MMC_CD_GPIO, - .gpio_card_ro = ZEUS_MMC_WP_GPIO, .gpio_card_ro_invert = 1, - .gpio_power = -1 +}; + +static struct gpiod_lookup_table zeus_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("gpio-pxa", ZEUS_MMC_CD_GPIO, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("gpio-pxa", ZEUS_MMC_WP_GPIO, + "wp", GPIO_ACTIVE_HIGH), + { }, + }, }; /* @@ -883,6 +891,7 @@ static void __init zeus_init(void) else pxa_set_fb_info(NULL, &zeus_fb_info); + gpiod_add_lookup_table(&zeus_mci_gpio_table); pxa_set_mci_info(&zeus_mci_platform_data); pxa_set_udc_info(&zeus_udc_info); pxa_set_ac97_info(&zeus_ac97_info); diff --git a/arch/arm/mach-pxa/zylonite.c b/arch/arm/mach-pxa/zylonite.c index 52e70a5c1281..1f88d7bae849 100644 --- a/arch/arm/mach-pxa/zylonite.c +++ b/arch/arm/mach-pxa/zylonite.c @@ -19,7 +19,7 @@ #include #include #include -#include +#include #include #include #include @@ -227,33 +227,68 @@ static inline void zylonite_init_lcd(void) {} static struct pxamci_platform_data zylonite_mci_platform_data = { .detect_delay_ms= 200, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = EXT_GPIO(0), - .gpio_card_ro = EXT_GPIO(2), - .gpio_power = -1, +}; + +#define PCA9539A_MCI_CD 0 +#define PCA9539A_MCI1_CD 1 +#define PCA9539A_MCI_WP 2 +#define PCA9539A_MCI1_WP 3 +#define PCA9539A_MCI3_CD 30 +#define PCA9539A_MCI3_WP 31 + +static struct gpiod_lookup_table zylonite_mci_gpio_table = { + .dev_id = "pxa2xx-mci.0", + .table = { + GPIO_LOOKUP("i2c-pca9539-a", PCA9539A_MCI_CD, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("i2c-pca9539-a", PCA9539A_MCI_WP, + "wp", GPIO_ACTIVE_LOW), + { }, + }, }; static struct pxamci_platform_data zylonite_mci2_platform_data = { .detect_delay_ms= 200, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = EXT_GPIO(1), - .gpio_card_ro = EXT_GPIO(3), - .gpio_power = -1, +}; + +static struct gpiod_lookup_table zylonite_mci2_gpio_table = { + .dev_id = "pxa2xx-mci.1", + .table = { + GPIO_LOOKUP("i2c-pca9539-a", PCA9539A_MCI1_CD, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("i2c-pca9539-a", PCA9539A_MCI1_WP, + "wp", GPIO_ACTIVE_LOW), + { }, + }, }; static struct pxamci_platform_data zylonite_mci3_platform_data = { .detect_delay_ms= 200, .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34, - .gpio_card_detect = EXT_GPIO(30), - .gpio_card_ro = EXT_GPIO(31), - .gpio_power = -1, +}; + +static struct gpiod_lookup_table zylonite_mci3_gpio_table = { + .dev_id = "pxa2xx-mci.2", + .table = { + GPIO_LOOKUP("i2c-pca9539-a", PCA9539A_MCI3_CD, + "cd", GPIO_ACTIVE_LOW), + GPIO_LOOKUP("i2c-pca9539-a", PCA9539A_MCI3_WP, + "wp", GPIO_ACTIVE_LOW), + { }, + }, }; static void __init zylonite_init_mmc(void) { + gpiod_add_lookup_table(&zylonite_mci_gpio_table); pxa_set_mci_info(&zylonite_mci_platform_data); + gpiod_add_lookup_table(&zylonite_mci2_gpio_table); pxa3xx_set_mci2_info(&zylonite_mci2_platform_data); - if (cpu_is_pxa310()) + if (cpu_is_pxa310()) { + gpiod_add_lookup_table(&zylonite_mci3_gpio_table); pxa3xx_set_mci3_info(&zylonite_mci3_platform_data); + } } #else static inline void zylonite_init_mmc(void) {} 
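Every board conversion in the hunks above follows the same recipe: drop the integer .gpio_card_detect/.gpio_card_ro/.gpio_power fields from pxamci_platform_data (or the s3c24xx/esdhc equivalents) and register a gpiod_lookup_table keyed by the host controller's device name, with "cd", "wp" and "power" as the connection IDs. A minimal sketch of both halves of that contract, with a hypothetical board and driver (the chip label, GPIO number and function names here are illustrative, not taken from any one hunk):

	#include <linux/gpio/machine.h>
	#include <linux/gpio/consumer.h>

	/* Board side: tie chip-relative GPIO 9 of "gpio-pxa" to the "cd"
	 * function of the device named "pxa2xx-mci.0" (driver name + id).
	 */
	static struct gpiod_lookup_table example_mci_gpio_table = {
		.dev_id = "pxa2xx-mci.0",
		.table = {
			GPIO_LOOKUP("gpio-pxa", 9, "cd", GPIO_ACTIVE_LOW),
			{ },
		},
	};

	static void __init example_board_init(void)
	{
		/* As in the hunks above: register before the host device probes */
		gpiod_add_lookup_table(&example_mci_gpio_table);
	}

	/* Driver side: ask for the descriptor by function name, not number */
	static int example_probe(struct device *dev)
	{
		struct gpio_desc *cd = devm_gpiod_get_optional(dev, "cd", GPIOD_IN);

		if (IS_ERR(cd))
			return PTR_ERR(cd);
		/*
		 * gpiolib folds GPIO_ACTIVE_LOW into the value, so logical 1
		 * means "card present" however the detect line is wired.
		 */
		return cd ? gpiod_get_value_cansleep(cd) : 0;
	}

The gain over the old integer fields is that polarity and the owning gpio_chip travel with the lookup, so drivers no longer need per-board inversion flags like the gpio_card_ro_invert still visible in the magician and zeus hunks.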
diff --git a/arch/arm/mach-pxa/zylonite_pxa300.c b/arch/arm/mach-pxa/zylonite_pxa300.c index 0ff4e218080f..8f930a9dd0fd 100644 --- a/arch/arm/mach-pxa/zylonite_pxa300.c +++ b/arch/arm/mach-pxa/zylonite_pxa300.c @@ -230,11 +230,13 @@ static struct pca953x_platform_data gpio_exp[] = { static struct i2c_board_info zylonite_i2c_board_info[] = { { .type = "pca9539", + .dev_name = "pca9539-a", .addr = 0x74, .platform_data = &gpio_exp[0], .irq = PXA_GPIO_TO_IRQ(18), }, { .type = "pca9539", + .dev_name = "pca9539-b", .addr = 0x75, .platform_data = &gpio_exp[1], .irq = PXA_GPIO_TO_IRQ(19), diff --git a/arch/arm/mach-s3c24xx/mach-at2440evb.c b/arch/arm/mach-s3c24xx/mach-at2440evb.c index 68a4fa94257a..58c5ef3cf1d7 100644 --- a/arch/arm/mach-s3c24xx/mach-at2440evb.c +++ b/arch/arm/mach-s3c24xx/mach-at2440evb.c @@ -9,7 +9,7 @@ #include #include -#include +#include #include #include #include @@ -136,7 +136,16 @@ static struct platform_device at2440evb_device_eth = { }; static struct s3c24xx_mci_pdata at2440evb_mci_pdata __initdata = { - .gpio_detect = S3C2410_GPG(10), + /* Intentionally left blank */ +}; + +static struct gpiod_lookup_table at2440evb_mci_gpio_table = { + .dev_id = "s3c2410-sdi", + .table = { + /* Card detect S3C2410_GPG(10) */ + GPIO_LOOKUP("GPG", 10, "cd", GPIO_ACTIVE_LOW), + { }, + }, }; /* 7" LCD panel */ @@ -200,6 +209,7 @@ static void __init at2440evb_init_time(void) static void __init at2440evb_init(void) { s3c24xx_fb_set_platdata(&at2440evb_fb_info); + gpiod_add_lookup_table(&at2440evb_mci_gpio_table); s3c24xx_mci_set_platdata(&at2440evb_mci_pdata); s3c_nand_set_platdata(&at2440evb_nand_info); s3c_i2c0_set_platdata(NULL); diff --git a/arch/arm/mach-s3c24xx/mach-h1940.c b/arch/arm/mach-s3c24xx/mach-h1940.c index e064c73a57d3..74d6b68e91c7 100644 --- a/arch/arm/mach-s3c24xx/mach-h1940.c +++ b/arch/arm/mach-s3c24xx/mach-h1940.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -459,12 +460,21 @@ static void h1940_set_mmc_power(unsigned char power_mode, unsigned short vdd) } static struct s3c24xx_mci_pdata h1940_mmc_cfg __initdata = { - .gpio_detect = S3C2410_GPF(5), - .gpio_wprotect = S3C2410_GPH(8), .set_power = h1940_set_mmc_power, .ocr_avail = MMC_VDD_32_33, }; +static struct gpiod_lookup_table h1940_mmc_gpio_table = { + .dev_id = "s3c2410-sdi", + .table = { + /* Card detect S3C2410_GPF(5) */ + GPIO_LOOKUP("GPF", 5, "cd", GPIO_ACTIVE_LOW), + /* Write protect S3C2410_GPH(8) */ + GPIO_LOOKUP("GPH", 8, "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; + static struct pwm_lookup h1940_pwm_lookup[] = { PWM_LOOKUP("samsung-pwm", 0, "pwm-backlight", NULL, 36296, PWM_POLARITY_NORMAL), @@ -680,6 +690,7 @@ static void __init h1940_init(void) u32 tmp; s3c24xx_fb_set_platdata(&h1940_fb_info); + gpiod_add_lookup_table(&h1940_mmc_gpio_table); s3c24xx_mci_set_platdata(&h1940_mmc_cfg); s3c24xx_udc_set_platdata(&h1940_udc_cfg); s3c24xx_ts_set_platdata(&h1940_ts_cfg); diff --git a/arch/arm/mach-s3c24xx/mach-mini2440.c b/arch/arm/mach-s3c24xx/mach-mini2440.c index 50d67d760efd..9035f868fb34 100644 --- a/arch/arm/mach-s3c24xx/mach-mini2440.c +++ b/arch/arm/mach-s3c24xx/mach-mini2440.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -234,13 +235,22 @@ static struct s3c2410fb_mach_info mini2440_fb_info __initdata = { /* MMC/SD */ static struct s3c24xx_mci_pdata mini2440_mmc_cfg __initdata = { - .gpio_detect = S3C2410_GPG(8), - .gpio_wprotect = S3C2410_GPH(8), .wprotect_invert = 1, .set_power = NULL, .ocr_avail = MMC_VDD_32_33|MMC_VDD_33_34, 
}; +static struct gpiod_lookup_table mini2440_mmc_gpio_table = { + .dev_id = "s3c2410-sdi", + .table = { + /* Card detect S3C2410_GPG(8) */ + GPIO_LOOKUP("GPG", 8, "cd", GPIO_ACTIVE_LOW), + /* Write protect S3C2410_GPH(8) */ + GPIO_LOOKUP("GPH", 8, "wp", GPIO_ACTIVE_HIGH), + { }, + }, +}; + /* NAND Flash on MINI2440 board */ static struct mtd_partition mini2440_default_nand_part[] __initdata = { @@ -696,6 +706,7 @@ static void __init mini2440_init(void) } s3c24xx_udc_set_platdata(&mini2440_udc_cfg); + gpiod_add_lookup_table(&mini2440_mmc_gpio_table); s3c24xx_mci_set_platdata(&mini2440_mmc_cfg); s3c_nand_set_platdata(&mini2440_nand_info); s3c_i2c0_set_platdata(NULL); diff --git a/arch/arm/mach-s3c24xx/mach-n30.c b/arch/arm/mach-s3c24xx/mach-n30.c index eec51fadb14a..d856f23939af 100644 --- a/arch/arm/mach-s3c24xx/mach-n30.c +++ b/arch/arm/mach-s3c24xx/mach-n30.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -350,12 +351,21 @@ static void n30_sdi_set_power(unsigned char power_mode, unsigned short vdd) } static struct s3c24xx_mci_pdata n30_mci_cfg __initdata = { - .gpio_detect = S3C2410_GPF(1), - .gpio_wprotect = S3C2410_GPG(10), .ocr_avail = MMC_VDD_32_33, .set_power = n30_sdi_set_power, }; +static struct gpiod_lookup_table n30_mci_gpio_table = { + .dev_id = "s3c2410-sdi", + .table = { + /* Card detect S3C2410_GPF(1) */ + GPIO_LOOKUP("GPF", 1, "cd", GPIO_ACTIVE_LOW), + /* Write protect S3C2410_GPG(10) */ + GPIO_LOOKUP("GPG", 10, "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; + static struct platform_device *n30_devices[] __initdata = { &s3c_device_lcd, &s3c_device_wdt, @@ -549,6 +559,7 @@ static void __init n30_init(void) s3c24xx_fb_set_platdata(&n30_fb_info); s3c24xx_udc_set_platdata(&n30_udc_cfg); + gpiod_add_lookup_table(&n30_mci_gpio_table); s3c24xx_mci_set_platdata(&n30_mci_cfg); s3c_i2c0_set_platdata(&n30_i2ccfg); diff --git a/arch/arm/mach-s3c24xx/mach-rx1950.c b/arch/arm/mach-s3c24xx/mach-rx1950.c index 7f5a18fa305b..29f9b345a531 100644 --- a/arch/arm/mach-s3c24xx/mach-rx1950.c +++ b/arch/arm/mach-s3c24xx/mach-rx1950.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -558,12 +559,21 @@ static void rx1950_set_mmc_power(unsigned char power_mode, unsigned short vdd) } static struct s3c24xx_mci_pdata rx1950_mmc_cfg __initdata = { - .gpio_detect = S3C2410_GPF(5), - .gpio_wprotect = S3C2410_GPH(8), .set_power = rx1950_set_mmc_power, .ocr_avail = MMC_VDD_32_33, }; +static struct gpiod_lookup_table rx1950_mmc_gpio_table = { + .dev_id = "s3c2410-sdi", + .table = { + /* Card detect S3C2410_GPF(5) */ + GPIO_LOOKUP("GPF", 5, "cd", GPIO_ACTIVE_LOW), + /* Write protect S3C2410_GPH(8) */ + GPIO_LOOKUP("GPH", 8, "wp", GPIO_ACTIVE_LOW), + { }, + }, +}; + static struct mtd_partition rx1950_nand_part[] = { [0] = { .name = "Boot0", @@ -762,6 +772,7 @@ static void __init rx1950_init_machine(void) s3c24xx_fb_set_platdata(&rx1950_lcd_cfg); s3c24xx_udc_set_platdata(&rx1950_udc_cfg); s3c24xx_ts_set_platdata(&rx1950_ts_cfg); + gpiod_add_lookup_table(&rx1950_mmc_gpio_table); s3c24xx_mci_set_platdata(&rx1950_mmc_cfg); s3c_i2c0_set_platdata(NULL); s3c_nand_set_platdata(&rx1950_nand_info); diff --git a/arch/arm/mach-sa1100/Kconfig b/arch/arm/mach-sa1100/Kconfig index fde7ef1ab192..acb2c520ae8b 100644 --- a/arch/arm/mach-sa1100/Kconfig +++ b/arch/arm/mach-sa1100/Kconfig @@ -120,7 +120,7 @@ config SA1100_LART config SA1100_NANOENGINE bool "nanoEngine" select ARM_SA1110_CPUFREQ - select PCI + select FORCE_PCI select PCI_NANOENGINE help Say Y here if 
you are using the Bright Star Engineering nanoEngine. diff --git a/arch/arm/mach-socfpga/Kconfig b/arch/arm/mach-socfpga/Kconfig index 4adb901dd5eb..d43798defdba 100644 --- a/arch/arm/mach-socfpga/Kconfig +++ b/arch/arm/mach-socfpga/Kconfig @@ -10,7 +10,7 @@ menuconfig ARCH_SOCFPGA select HAVE_ARM_SCU select HAVE_ARM_TWD if SMP select MFD_SYSCON - select PCI_DOMAINS if PCI + select PCI_DOMAINS_GENERIC if PCI if ARCH_SOCFPGA config SOCFPGA_SUSPEND diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c index 712416ecd8e6..f304b10e23a4 100644 --- a/arch/arm/mm/dma-mapping-nommu.c +++ b/arch/arm/mm/dma-mapping-nommu.c @@ -22,7 +22,7 @@ #include "dma.h" /* - * dma_direct_ops is used if + * The generic direct mapping code is used if * - MMU/MPU is off * - cpu is v7m w/o cache support * - device is coherent @@ -209,16 +209,9 @@ const struct dma_map_ops arm_nommu_dma_ops = { }; EXPORT_SYMBOL(arm_nommu_dma_ops); -static const struct dma_map_ops *arm_nommu_get_dma_map_ops(bool coherent) -{ - return coherent ? &dma_direct_ops : &arm_nommu_dma_ops; -} - void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, const struct iommu_ops *iommu, bool coherent) { - const struct dma_map_ops *dma_ops; - if (IS_ENABLED(CONFIG_CPU_V7M)) { /* * Cache support for v7m is optional, so can be treated as @@ -234,7 +227,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, dev->archdata.dma_coherent = (get_cr() & CR_M) ? coherent : true; } - dma_ops = arm_nommu_get_dma_map_ops(dev->archdata.dma_coherent); - - set_dma_ops(dev, dma_ops); + if (!dev->archdata.dma_coherent) + set_dma_ops(dev, &arm_nommu_dma_ops); } diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 78de138aa66d..f1e2922e447c 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -179,11 +179,6 @@ static void arm_dma_sync_single_for_device(struct device *dev, __dma_page_cpu_to_dev(page, offset, size, dir); } -static int arm_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == ARM_MAPPING_ERROR; -} - const struct dma_map_ops arm_dma_ops = { .alloc = arm_dma_alloc, .free = arm_dma_free, @@ -197,7 +192,6 @@ const struct dma_map_ops arm_dma_ops = { .sync_single_for_device = arm_dma_sync_single_for_device, .sync_sg_for_cpu = arm_dma_sync_sg_for_cpu, .sync_sg_for_device = arm_dma_sync_sg_for_device, - .mapping_error = arm_dma_mapping_error, .dma_supported = arm_dma_supported, }; EXPORT_SYMBOL(arm_dma_ops); @@ -217,7 +211,6 @@ const struct dma_map_ops arm_coherent_dma_ops = { .get_sgtable = arm_dma_get_sgtable, .map_page = arm_coherent_dma_map_page, .map_sg = arm_dma_map_sg, - .mapping_error = arm_dma_mapping_error, .dma_supported = arm_dma_supported, }; EXPORT_SYMBOL(arm_coherent_dma_ops); @@ -774,7 +767,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp &= ~(__GFP_COMP); args.gfp = gfp; - *handle = ARM_MAPPING_ERROR; + *handle = DMA_MAPPING_ERROR; allowblock = gfpflags_allow_blocking(gfp); cma = allowblock ? 
dev_get_cma_area(dev) : false; @@ -1217,7 +1210,7 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping, if (i == mapping->nr_bitmaps) { if (extend_iommu_mapping(mapping)) { spin_unlock_irqrestore(&mapping->lock, flags); - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } start = bitmap_find_next_zero_area(mapping->bitmaps[i], @@ -1225,7 +1218,7 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping, if (start > mapping->bits) { spin_unlock_irqrestore(&mapping->lock, flags); - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } bitmap_set(mapping->bitmaps[i], start, count); @@ -1409,7 +1402,7 @@ __iommu_create_mapping(struct device *dev, struct page **pages, size_t size, int i; dma_addr = __alloc_iova(mapping, size); - if (dma_addr == ARM_MAPPING_ERROR) + if (dma_addr == DMA_MAPPING_ERROR) return dma_addr; iova = dma_addr; @@ -1436,7 +1429,7 @@ __iommu_create_mapping(struct device *dev, struct page **pages, size_t size, fail: iommu_unmap(mapping->domain, dma_addr, iova-dma_addr); __free_iova(mapping, dma_addr, size); - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size) @@ -1497,7 +1490,7 @@ static void *__iommu_alloc_simple(struct device *dev, size_t size, gfp_t gfp, return NULL; *handle = __iommu_create_mapping(dev, &page, size, attrs); - if (*handle == ARM_MAPPING_ERROR) + if (*handle == DMA_MAPPING_ERROR) goto err_mapping; return addr; @@ -1525,7 +1518,7 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size, struct page **pages; void *addr = NULL; - *handle = ARM_MAPPING_ERROR; + *handle = DMA_MAPPING_ERROR; size = PAGE_ALIGN(size); if (coherent_flag == COHERENT || !gfpflags_allow_blocking(gfp)) @@ -1546,7 +1539,7 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size, return NULL; *handle = __iommu_create_mapping(dev, pages, size, attrs); - if (*handle == ARM_MAPPING_ERROR) + if (*handle == DMA_MAPPING_ERROR) goto err_buffer; if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) @@ -1696,10 +1689,10 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg, int prot; size = PAGE_ALIGN(size); - *handle = ARM_MAPPING_ERROR; + *handle = DMA_MAPPING_ERROR; iova_base = iova = __alloc_iova(mapping, size); - if (iova == ARM_MAPPING_ERROR) + if (iova == DMA_MAPPING_ERROR) return -ENOMEM; for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) { @@ -1739,7 +1732,7 @@ static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents, for (i = 1; i < nents; i++) { s = sg_next(s); - s->dma_address = ARM_MAPPING_ERROR; + s->dma_address = DMA_MAPPING_ERROR; s->dma_length = 0; if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) { @@ -1914,7 +1907,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p int ret, prot, len = PAGE_ALIGN(size + offset); dma_addr = __alloc_iova(mapping, len); - if (dma_addr == ARM_MAPPING_ERROR) + if (dma_addr == DMA_MAPPING_ERROR) return dma_addr; prot = __dma_info_to_prot(dir, attrs); @@ -1926,7 +1919,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p return dma_addr + offset; fail: __free_iova(mapping, dma_addr, len); - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } /** @@ -2020,7 +2013,7 @@ static dma_addr_t arm_iommu_map_resource(struct device *dev, size_t len = PAGE_ALIGN(size + offset); dma_addr = __alloc_iova(mapping, len); - if (dma_addr == ARM_MAPPING_ERROR) + if (dma_addr == 
DMA_MAPPING_ERROR) return dma_addr; prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO; @@ -2032,7 +2025,7 @@ static dma_addr_t arm_iommu_map_resource(struct device *dev, return dma_addr + offset; fail: __free_iova(mapping, dma_addr, len); - return ARM_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } /** @@ -2105,7 +2098,6 @@ const struct dma_map_ops iommu_ops = { .map_resource = arm_iommu_map_resource, .unmap_resource = arm_iommu_unmap_resource, - .mapping_error = arm_dma_mapping_error, .dma_supported = arm_dma_supported, }; @@ -2124,7 +2116,6 @@ const struct dma_map_ops iommu_coherent_ops = { .map_resource = arm_iommu_map_resource, .unmap_resource = arm_iommu_unmap_resource, - .mapping_error = arm_dma_mapping_error, .dma_supported = arm_dma_supported, }; diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index 32e4845af2b6..478ea8b7db87 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -50,26 +50,7 @@ unsigned long __init __clear_cr(unsigned long mask) } #endif -static phys_addr_t phys_initrd_start __initdata = 0; -static unsigned long phys_initrd_size __initdata = 0; - -static int __init early_initrd(char *p) -{ - phys_addr_t start; - unsigned long size; - char *endp; - - start = memparse(p, &endp); - if (*endp == ',') { - size = memparse(endp + 1, NULL); - - phys_initrd_start = start; - phys_initrd_size = size; - } - return 0; -} -early_param("initrd", early_initrd); - +#ifdef CONFIG_BLK_DEV_INITRD static int __init parse_tag_initrd(const struct tag *tag) { pr_warn("ATAG_INITRD is deprecated; " @@ -89,6 +70,7 @@ static int __init parse_tag_initrd2(const struct tag *tag) } __tagtable(ATAG_INITRD2, parse_tag_initrd2); +#endif static void __init find_limits(unsigned long *min, unsigned long *max_low, unsigned long *max_high) @@ -236,12 +218,6 @@ static void __init arm_initrd_init(void) phys_addr_t start; unsigned long size; - /* FDT scan will populate initrd_start */ - if (initrd_start && !phys_initrd_size) { - phys_initrd_start = __virt_to_phys(initrd_start); - phys_initrd_size = initrd_end - initrd_start; - } - initrd_start = initrd_end = 0; if (!phys_initrd_size) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 6b7bf0fc190d..a4168d366127 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -23,7 +23,6 @@ config ARM64 select ARCH_HAS_MEMBARRIER_SYNC_CORE select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_SET_MEMORY - select ARCH_HAS_SG_CHAIN select ARCH_HAS_STRICT_KERNEL_RWX select ARCH_HAS_STRICT_MODULE_RWX select ARCH_HAS_SYNC_DMA_FOR_DEVICE @@ -81,7 +80,7 @@ config ARM64 select CPU_PM if (SUSPEND || CPU_IDLE) select CRC32 select DCACHE_WORD_ACCESS - select DMA_DIRECT_OPS + select DMA_DIRECT_REMAP select EDAC_SUPPORT select FRAME_POINTER select GENERIC_ALLOCATOR @@ -103,6 +102,7 @@ config ARM64 select GENERIC_TIME_VSYSCALL select HANDLE_DOMAIN_IRQ select HARDIRQS_SW_RESEND + select HAVE_PCI select HAVE_ACPI_APEI if (ACPI && EFI) select HAVE_ALIGNED_STRUCT_PAGE if SLUB select HAVE_ARCH_AUDITSYSCALL @@ -111,6 +111,7 @@ config ARM64 select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_JUMP_LABEL_RELATIVE select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) + select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN select HAVE_ARCH_KGDB select HAVE_ARCH_MMAP_RND_BITS select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT @@ -163,7 +164,9 @@ config ARM64 select OF select OF_EARLY_FLATTREE select OF_RESERVED_MEM + select PCI_DOMAINS_GENERIC if PCI select PCI_ECAM if (ACPI && PCI) + select PCI_SYSCALL if PCI select POWER_RESET select POWER_SUPPLY select REFCOUNT_FULL @@ -290,28 
+293,6 @@ config ARCH_PROC_KCORE_TEXT source "arch/arm64/Kconfig.platforms" -menu "Bus support" - -config PCI - bool "PCI support" - help - This feature enables support for PCI bus system. If you say Y - here, the kernel will include drivers and infrastructure code - to support PCI bus devices. - -config PCI_DOMAINS - def_bool PCI - -config PCI_DOMAINS_GENERIC - def_bool PCI - -config PCI_SYSCALL - def_bool PCI - -source "drivers/pci/Kconfig" - -endmenu - menu "Kernel Features" menu "ARM errata workarounds via the alternatives framework" @@ -857,7 +838,7 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK config HOLES_IN_ZONE def_bool y -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" config ARCH_SUPPORTS_DEBUG_PAGEALLOC def_bool y diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 398bdb81a900..b025304bde46 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -101,10 +101,19 @@ else TEXT_OFFSET := 0x00080000 endif +ifeq ($(CONFIG_KASAN_SW_TAGS), y) +KASAN_SHADOW_SCALE_SHIFT := 4 +else +KASAN_SHADOW_SCALE_SHIFT := 3 +endif + +KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) +KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) +KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) + # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT)) # - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT)) # in 32-bit arithmetic -KASAN_SHADOW_SCALE_SHIFT := 3 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \ (0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \ + (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \ diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi index fef7351e9f67..a20df0d9c96d 100644 --- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi +++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi @@ -24,6 +24,19 @@ #address-cells = <2>; #size-cells = <2>; + reserved-memory { + #address-cells = <2>; + #size-cells = <2>; + ranges; + + service_reserved: svcbuffer@0 { + compatible = "shared-dma-pool"; + reg = <0x0 0x0 0x0 0x1000000>; + alignment = <0x1000>; + no-map; + }; + }; + cpus { #address-cells = <1>; #size-cells = <0>; @@ -93,6 +106,14 @@ interrupt-parent = <&intc>; ranges = <0 0 0 0xffffffff>; + base_fpga_region { + #address-cells = <0x1>; + #size-cells = <0x1>; + + compatible = "fpga-region"; + fpga-mgr = <&fpga_mgr>; + }; + clkmgr: clock-controller@ffd10000 { compatible = "intel,stratix10-clkmgr"; reg = <0xffd10000 0x1000>; @@ -537,5 +558,17 @@ status = "disabled"; }; + + firmware { + svc { + compatible = "intel,stratix10-svc"; + method = "smc"; + memory-region = <&service_reserved>; + + fpga_mgr: fpga-mgr { + compatible = "intel,stratix10-soc-fpga-mgr"; + }; + }; + }; }; }; diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index a5606823ed4d..d9a523ecdd83 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -101,11 +101,16 @@ config CRYPTO_AES_ARM64_NEON_BLK select CRYPTO_SIMD config CRYPTO_CHACHA20_NEON - tristate "NEON accelerated ChaCha20 symmetric cipher" + tristate "ChaCha20, XChaCha20, and XChaCha12 stream ciphers using NEON instructions" depends on KERNEL_MODE_NEON select CRYPTO_BLKCIPHER select CRYPTO_CHACHA20 +config CRYPTO_NHPOLY1305_NEON + tristate "NHPoly1305 hash function using NEON instructions (for Adiantum)" + depends on KERNEL_MODE_NEON + select CRYPTO_NHPOLY1305 + config CRYPTO_AES_ARM64_BS tristate "AES in ECB/CBC/CTR/XTS modes using bit-sliced 
NEON algorithm" depends on KERNEL_MODE_NEON diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile index f476fede09ba..e766daf43b7c 100644 --- a/arch/arm64/crypto/Makefile +++ b/arch/arm64/crypto/Makefile @@ -50,8 +50,11 @@ sha256-arm64-y := sha256-glue.o sha256-core.o obj-$(CONFIG_CRYPTO_SHA512_ARM64) += sha512-arm64.o sha512-arm64-y := sha512-glue.o sha512-core.o -obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o -chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o +obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha-neon.o +chacha-neon-y := chacha-neon-core.o chacha-neon-glue.o + +obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o +nhpoly1305-neon-y := nh-neon-core.o nhpoly1305-neon-glue.o obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o @@ -75,4 +78,4 @@ $(src)/sha512-core.S_shipped: $(src)/sha512-armv8.pl $(call cmd,perlasm) endif -targets += sha256-core.S sha512-core.S +clean-files += sha256-core.S sha512-core.S diff --git a/arch/arm64/crypto/chacha20-neon-core.S b/arch/arm64/crypto/chacha-neon-core.S similarity index 52% rename from arch/arm64/crypto/chacha20-neon-core.S rename to arch/arm64/crypto/chacha-neon-core.S index 13c85e272c2a..021bb9e9784b 100644 --- a/arch/arm64/crypto/chacha20-neon-core.S +++ b/arch/arm64/crypto/chacha-neon-core.S @@ -1,13 +1,13 @@ /* - * ChaCha20 256-bit cipher algorithm, RFC7539, arm64 NEON functions + * ChaCha/XChaCha NEON helper functions * - * Copyright (C) 2016 Linaro, Ltd. + * Copyright (C) 2016-2018 Linaro, Ltd. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. * - * Based on: + * Originally based on: * ChaCha20 256-bit cipher algorithm, RFC7539, x64 SSSE3 functions * * Copyright (C) 2015 Martin Willi @@ -19,29 +19,27 @@ */ #include +#include +#include .text .align 6 -ENTRY(chacha20_block_xor_neon) - // x0: Input state matrix, s - // x1: 1 data block output, o - // x2: 1 data block input, i - - // - // This function encrypts one ChaCha20 block by loading the state matrix - // in four NEON registers. It performs matrix operation on four words in - // parallel, but requires shuffling to rearrange the words after each - // round. - // - - // x0..3 = s0..3 - adr x3, ROT8 - ld1 {v0.4s-v3.4s}, [x0] - ld1 {v8.4s-v11.4s}, [x0] - ld1 {v12.4s}, [x3] +/* + * chacha_permute - permute one block + * + * Permute one 64-byte block where the state matrix is stored in the four NEON + * registers v0-v3. It performs matrix operations on four words in parallel, + * but requires shuffling to rearrange the words after each round. + * + * The round count is given in w3. + * + * Clobbers: w3, x10, v4, v12 + */ +chacha_permute: - mov x3, #10 + adr_l x10, ROT8 + ld1 {v12.4s}, [x10] .Ldoubleround: // x0 += x1, x3 = rotl32(x3 ^ x0, 16) @@ -102,9 +100,27 @@ ENTRY(chacha20_block_xor_neon) // x3 = shuffle32(x3, MASK(0, 3, 2, 1)) ext v3.16b, v3.16b, v3.16b, #4 - subs x3, x3, #1 + subs w3, w3, #2 b.ne .Ldoubleround + ret +ENDPROC(chacha_permute) + +ENTRY(chacha_block_xor_neon) + // x0: Input state matrix, s + // x1: 1 data block output, o + // x2: 1 data block input, i + // w3: nrounds + + stp x29, x30, [sp, #-16]! 
+ mov x29, sp + + // x0..3 = s0..3 + ld1 {v0.4s-v3.4s}, [x0] + ld1 {v8.4s-v11.4s}, [x0] + + bl chacha_permute + ld1 {v4.16b-v7.16b}, [x2] // o0 = i0 ^ (x0 + s0) @@ -125,71 +141,156 @@ ENTRY(chacha20_block_xor_neon) st1 {v0.16b-v3.16b}, [x1] + ldp x29, x30, [sp], #16 ret -ENDPROC(chacha20_block_xor_neon) +ENDPROC(chacha_block_xor_neon) + +ENTRY(hchacha_block_neon) + // x0: Input state matrix, s + // x1: output (8 32-bit words) + // w2: nrounds + + stp x29, x30, [sp, #-16]! + mov x29, sp + + ld1 {v0.4s-v3.4s}, [x0] + + mov w3, w2 + bl chacha_permute + + st1 {v0.16b}, [x1], #16 + st1 {v3.16b}, [x1] + + ldp x29, x30, [sp], #16 + ret +ENDPROC(hchacha_block_neon) + + a0 .req w12 + a1 .req w13 + a2 .req w14 + a3 .req w15 + a4 .req w16 + a5 .req w17 + a6 .req w19 + a7 .req w20 + a8 .req w21 + a9 .req w22 + a10 .req w23 + a11 .req w24 + a12 .req w25 + a13 .req w26 + a14 .req w27 + a15 .req w28 .align 6 -ENTRY(chacha20_4block_xor_neon) +ENTRY(chacha_4block_xor_neon) + frame_push 10 + // x0: Input state matrix, s // x1: 4 data blocks output, o // x2: 4 data blocks input, i + // w3: nrounds + // x4: byte count + + adr_l x10, .Lpermute + and x5, x4, #63 + add x10, x10, x5 + add x11, x10, #64 // - // This function encrypts four consecutive ChaCha20 blocks by loading + // This function encrypts four consecutive ChaCha blocks by loading // the state matrix in NEON registers four times. The algorithm performs // each operation on the corresponding word of each state matrix, hence // requires no word shuffling. For final XORing step we transpose the // matrix by interleaving 32- and then 64-bit words, which allows us to // do XOR in NEON registers. // - adr x3, CTRINC // ... and ROT8 - ld1 {v30.4s-v31.4s}, [x3] + // At the same time, a fifth block is encrypted in parallel using + // scalar registers + // + adr_l x9, CTRINC // ... 
and ROT8 + ld1 {v30.4s-v31.4s}, [x9] // x0..15[0-3] = s0..3[0..3] - mov x4, x0 - ld4r { v0.4s- v3.4s}, [x4], #16 - ld4r { v4.4s- v7.4s}, [x4], #16 - ld4r { v8.4s-v11.4s}, [x4], #16 - ld4r {v12.4s-v15.4s}, [x4] - - // x12 += counter values 0-3 + add x8, x0, #16 + ld4r { v0.4s- v3.4s}, [x0] + ld4r { v4.4s- v7.4s}, [x8], #16 + ld4r { v8.4s-v11.4s}, [x8], #16 + ld4r {v12.4s-v15.4s}, [x8] + + mov a0, v0.s[0] + mov a1, v1.s[0] + mov a2, v2.s[0] + mov a3, v3.s[0] + mov a4, v4.s[0] + mov a5, v5.s[0] + mov a6, v6.s[0] + mov a7, v7.s[0] + mov a8, v8.s[0] + mov a9, v9.s[0] + mov a10, v10.s[0] + mov a11, v11.s[0] + mov a12, v12.s[0] + mov a13, v13.s[0] + mov a14, v14.s[0] + mov a15, v15.s[0] + + // x12 += counter values 1-4 add v12.4s, v12.4s, v30.4s - mov x3, #10 - .Ldoubleround4: // x0 += x4, x12 = rotl32(x12 ^ x0, 16) // x1 += x5, x13 = rotl32(x13 ^ x1, 16) // x2 += x6, x14 = rotl32(x14 ^ x2, 16) // x3 += x7, x15 = rotl32(x15 ^ x3, 16) add v0.4s, v0.4s, v4.4s + add a0, a0, a4 add v1.4s, v1.4s, v5.4s + add a1, a1, a5 add v2.4s, v2.4s, v6.4s + add a2, a2, a6 add v3.4s, v3.4s, v7.4s + add a3, a3, a7 eor v12.16b, v12.16b, v0.16b + eor a12, a12, a0 eor v13.16b, v13.16b, v1.16b + eor a13, a13, a1 eor v14.16b, v14.16b, v2.16b + eor a14, a14, a2 eor v15.16b, v15.16b, v3.16b + eor a15, a15, a3 rev32 v12.8h, v12.8h + ror a12, a12, #16 rev32 v13.8h, v13.8h + ror a13, a13, #16 rev32 v14.8h, v14.8h + ror a14, a14, #16 rev32 v15.8h, v15.8h + ror a15, a15, #16 // x8 += x12, x4 = rotl32(x4 ^ x8, 12) // x9 += x13, x5 = rotl32(x5 ^ x9, 12) // x10 += x14, x6 = rotl32(x6 ^ x10, 12) // x11 += x15, x7 = rotl32(x7 ^ x11, 12) add v8.4s, v8.4s, v12.4s + add a8, a8, a12 add v9.4s, v9.4s, v13.4s + add a9, a9, a13 add v10.4s, v10.4s, v14.4s + add a10, a10, a14 add v11.4s, v11.4s, v15.4s + add a11, a11, a15 eor v16.16b, v4.16b, v8.16b + eor a4, a4, a8 eor v17.16b, v5.16b, v9.16b + eor a5, a5, a9 eor v18.16b, v6.16b, v10.16b + eor a6, a6, a10 eor v19.16b, v7.16b, v11.16b + eor a7, a7, a11 shl v4.4s, v16.4s, #12 shl v5.4s, v17.4s, #12 @@ -197,42 +298,66 @@ ENTRY(chacha20_4block_xor_neon) shl v7.4s, v19.4s, #12 sri v4.4s, v16.4s, #20 + ror a4, a4, #20 sri v5.4s, v17.4s, #20 + ror a5, a5, #20 sri v6.4s, v18.4s, #20 + ror a6, a6, #20 sri v7.4s, v19.4s, #20 + ror a7, a7, #20 // x0 += x4, x12 = rotl32(x12 ^ x0, 8) // x1 += x5, x13 = rotl32(x13 ^ x1, 8) // x2 += x6, x14 = rotl32(x14 ^ x2, 8) // x3 += x7, x15 = rotl32(x15 ^ x3, 8) add v0.4s, v0.4s, v4.4s + add a0, a0, a4 add v1.4s, v1.4s, v5.4s + add a1, a1, a5 add v2.4s, v2.4s, v6.4s + add a2, a2, a6 add v3.4s, v3.4s, v7.4s + add a3, a3, a7 eor v12.16b, v12.16b, v0.16b + eor a12, a12, a0 eor v13.16b, v13.16b, v1.16b + eor a13, a13, a1 eor v14.16b, v14.16b, v2.16b + eor a14, a14, a2 eor v15.16b, v15.16b, v3.16b + eor a15, a15, a3 tbl v12.16b, {v12.16b}, v31.16b + ror a12, a12, #24 tbl v13.16b, {v13.16b}, v31.16b + ror a13, a13, #24 tbl v14.16b, {v14.16b}, v31.16b + ror a14, a14, #24 tbl v15.16b, {v15.16b}, v31.16b + ror a15, a15, #24 // x8 += x12, x4 = rotl32(x4 ^ x8, 7) // x9 += x13, x5 = rotl32(x5 ^ x9, 7) // x10 += x14, x6 = rotl32(x6 ^ x10, 7) // x11 += x15, x7 = rotl32(x7 ^ x11, 7) add v8.4s, v8.4s, v12.4s + add a8, a8, a12 add v9.4s, v9.4s, v13.4s + add a9, a9, a13 add v10.4s, v10.4s, v14.4s + add a10, a10, a14 add v11.4s, v11.4s, v15.4s + add a11, a11, a15 eor v16.16b, v4.16b, v8.16b + eor a4, a4, a8 eor v17.16b, v5.16b, v9.16b + eor a5, a5, a9 eor v18.16b, v6.16b, v10.16b + eor a6, a6, a10 eor v19.16b, v7.16b, v11.16b + eor a7, a7, a11 shl v4.4s, v16.4s, #7 shl v5.4s, v17.4s, 
#7 @@ -240,42 +365,66 @@ ENTRY(chacha20_4block_xor_neon) shl v7.4s, v19.4s, #7 sri v4.4s, v16.4s, #25 + ror a4, a4, #25 sri v5.4s, v17.4s, #25 + ror a5, a5, #25 sri v6.4s, v18.4s, #25 + ror a6, a6, #25 sri v7.4s, v19.4s, #25 + ror a7, a7, #25 // x0 += x5, x15 = rotl32(x15 ^ x0, 16) // x1 += x6, x12 = rotl32(x12 ^ x1, 16) // x2 += x7, x13 = rotl32(x13 ^ x2, 16) // x3 += x4, x14 = rotl32(x14 ^ x3, 16) add v0.4s, v0.4s, v5.4s + add a0, a0, a5 add v1.4s, v1.4s, v6.4s + add a1, a1, a6 add v2.4s, v2.4s, v7.4s + add a2, a2, a7 add v3.4s, v3.4s, v4.4s + add a3, a3, a4 eor v15.16b, v15.16b, v0.16b + eor a15, a15, a0 eor v12.16b, v12.16b, v1.16b + eor a12, a12, a1 eor v13.16b, v13.16b, v2.16b + eor a13, a13, a2 eor v14.16b, v14.16b, v3.16b + eor a14, a14, a3 rev32 v15.8h, v15.8h + ror a15, a15, #16 rev32 v12.8h, v12.8h + ror a12, a12, #16 rev32 v13.8h, v13.8h + ror a13, a13, #16 rev32 v14.8h, v14.8h + ror a14, a14, #16 // x10 += x15, x5 = rotl32(x5 ^ x10, 12) // x11 += x12, x6 = rotl32(x6 ^ x11, 12) // x8 += x13, x7 = rotl32(x7 ^ x8, 12) // x9 += x14, x4 = rotl32(x4 ^ x9, 12) add v10.4s, v10.4s, v15.4s + add a10, a10, a15 add v11.4s, v11.4s, v12.4s + add a11, a11, a12 add v8.4s, v8.4s, v13.4s + add a8, a8, a13 add v9.4s, v9.4s, v14.4s + add a9, a9, a14 eor v16.16b, v5.16b, v10.16b + eor a5, a5, a10 eor v17.16b, v6.16b, v11.16b + eor a6, a6, a11 eor v18.16b, v7.16b, v8.16b + eor a7, a7, a8 eor v19.16b, v4.16b, v9.16b + eor a4, a4, a9 shl v5.4s, v16.4s, #12 shl v6.4s, v17.4s, #12 @@ -283,42 +432,66 @@ ENTRY(chacha20_4block_xor_neon) shl v4.4s, v19.4s, #12 sri v5.4s, v16.4s, #20 + ror a5, a5, #20 sri v6.4s, v17.4s, #20 + ror a6, a6, #20 sri v7.4s, v18.4s, #20 + ror a7, a7, #20 sri v4.4s, v19.4s, #20 + ror a4, a4, #20 // x0 += x5, x15 = rotl32(x15 ^ x0, 8) // x1 += x6, x12 = rotl32(x12 ^ x1, 8) // x2 += x7, x13 = rotl32(x13 ^ x2, 8) // x3 += x4, x14 = rotl32(x14 ^ x3, 8) add v0.4s, v0.4s, v5.4s + add a0, a0, a5 add v1.4s, v1.4s, v6.4s + add a1, a1, a6 add v2.4s, v2.4s, v7.4s + add a2, a2, a7 add v3.4s, v3.4s, v4.4s + add a3, a3, a4 eor v15.16b, v15.16b, v0.16b + eor a15, a15, a0 eor v12.16b, v12.16b, v1.16b + eor a12, a12, a1 eor v13.16b, v13.16b, v2.16b + eor a13, a13, a2 eor v14.16b, v14.16b, v3.16b + eor a14, a14, a3 tbl v15.16b, {v15.16b}, v31.16b + ror a15, a15, #24 tbl v12.16b, {v12.16b}, v31.16b + ror a12, a12, #24 tbl v13.16b, {v13.16b}, v31.16b + ror a13, a13, #24 tbl v14.16b, {v14.16b}, v31.16b + ror a14, a14, #24 // x10 += x15, x5 = rotl32(x5 ^ x10, 7) // x11 += x12, x6 = rotl32(x6 ^ x11, 7) // x8 += x13, x7 = rotl32(x7 ^ x8, 7) // x9 += x14, x4 = rotl32(x4 ^ x9, 7) add v10.4s, v10.4s, v15.4s + add a10, a10, a15 add v11.4s, v11.4s, v12.4s + add a11, a11, a12 add v8.4s, v8.4s, v13.4s + add a8, a8, a13 add v9.4s, v9.4s, v14.4s + add a9, a9, a14 eor v16.16b, v5.16b, v10.16b + eor a5, a5, a10 eor v17.16b, v6.16b, v11.16b + eor a6, a6, a11 eor v18.16b, v7.16b, v8.16b + eor a7, a7, a8 eor v19.16b, v4.16b, v9.16b + eor a4, a4, a9 shl v5.4s, v16.4s, #7 shl v6.4s, v17.4s, #7 @@ -326,11 +499,15 @@ ENTRY(chacha20_4block_xor_neon) shl v4.4s, v19.4s, #7 sri v5.4s, v16.4s, #25 + ror a5, a5, #25 sri v6.4s, v17.4s, #25 + ror a6, a6, #25 sri v7.4s, v18.4s, #25 + ror a7, a7, #25 sri v4.4s, v19.4s, #25 + ror a4, a4, #25 - subs x3, x3, #1 + subs w3, w3, #2 b.ne .Ldoubleround4 ld4r {v16.4s-v19.4s}, [x0], #16 @@ -344,9 +521,17 @@ ENTRY(chacha20_4block_xor_neon) // x2[0-3] += s0[2] // x3[0-3] += s0[3] add v0.4s, v0.4s, v16.4s + mov w6, v16.s[0] + mov w7, v17.s[0] add v1.4s, v1.4s, v17.4s + mov w8, v18.s[0] + mov 
w9, v19.s[0] add v2.4s, v2.4s, v18.4s + add a0, a0, w6 + add a1, a1, w7 add v3.4s, v3.4s, v19.4s + add a2, a2, w8 + add a3, a3, w9 ld4r {v24.4s-v27.4s}, [x0], #16 ld4r {v28.4s-v31.4s}, [x0] @@ -356,95 +541,304 @@ ENTRY(chacha20_4block_xor_neon) // x6[0-3] += s1[2] // x7[0-3] += s1[3] add v4.4s, v4.4s, v20.4s + mov w6, v20.s[0] + mov w7, v21.s[0] add v5.4s, v5.4s, v21.4s + mov w8, v22.s[0] + mov w9, v23.s[0] add v6.4s, v6.4s, v22.4s + add a4, a4, w6 + add a5, a5, w7 add v7.4s, v7.4s, v23.4s + add a6, a6, w8 + add a7, a7, w9 // x8[0-3] += s2[0] // x9[0-3] += s2[1] // x10[0-3] += s2[2] // x11[0-3] += s2[3] add v8.4s, v8.4s, v24.4s + mov w6, v24.s[0] + mov w7, v25.s[0] add v9.4s, v9.4s, v25.4s + mov w8, v26.s[0] + mov w9, v27.s[0] add v10.4s, v10.4s, v26.4s + add a8, a8, w6 + add a9, a9, w7 add v11.4s, v11.4s, v27.4s + add a10, a10, w8 + add a11, a11, w9 // x12[0-3] += s3[0] // x13[0-3] += s3[1] // x14[0-3] += s3[2] // x15[0-3] += s3[3] add v12.4s, v12.4s, v28.4s + mov w6, v28.s[0] + mov w7, v29.s[0] add v13.4s, v13.4s, v29.4s + mov w8, v30.s[0] + mov w9, v31.s[0] add v14.4s, v14.4s, v30.4s + add a12, a12, w6 + add a13, a13, w7 add v15.4s, v15.4s, v31.4s + add a14, a14, w8 + add a15, a15, w9 // interleave 32-bit words in state n, n+1 + ldp w6, w7, [x2], #64 zip1 v16.4s, v0.4s, v1.4s + ldp w8, w9, [x2, #-56] + eor a0, a0, w6 zip2 v17.4s, v0.4s, v1.4s + eor a1, a1, w7 zip1 v18.4s, v2.4s, v3.4s + eor a2, a2, w8 zip2 v19.4s, v2.4s, v3.4s + eor a3, a3, w9 + ldp w6, w7, [x2, #-48] zip1 v20.4s, v4.4s, v5.4s + ldp w8, w9, [x2, #-40] + eor a4, a4, w6 zip2 v21.4s, v4.4s, v5.4s + eor a5, a5, w7 zip1 v22.4s, v6.4s, v7.4s + eor a6, a6, w8 zip2 v23.4s, v6.4s, v7.4s + eor a7, a7, w9 + ldp w6, w7, [x2, #-32] zip1 v24.4s, v8.4s, v9.4s + ldp w8, w9, [x2, #-24] + eor a8, a8, w6 zip2 v25.4s, v8.4s, v9.4s + eor a9, a9, w7 zip1 v26.4s, v10.4s, v11.4s + eor a10, a10, w8 zip2 v27.4s, v10.4s, v11.4s + eor a11, a11, w9 + ldp w6, w7, [x2, #-16] zip1 v28.4s, v12.4s, v13.4s + ldp w8, w9, [x2, #-8] + eor a12, a12, w6 zip2 v29.4s, v12.4s, v13.4s + eor a13, a13, w7 zip1 v30.4s, v14.4s, v15.4s + eor a14, a14, w8 zip2 v31.4s, v14.4s, v15.4s + eor a15, a15, w9 + + mov x3, #64 + subs x5, x4, #128 + add x6, x5, x2 + csel x3, x3, xzr, ge + csel x2, x2, x6, ge // interleave 64-bit words in state n, n+2 zip1 v0.2d, v16.2d, v18.2d zip2 v4.2d, v16.2d, v18.2d + stp a0, a1, [x1], #64 zip1 v8.2d, v17.2d, v19.2d zip2 v12.2d, v17.2d, v19.2d - ld1 {v16.16b-v19.16b}, [x2], #64 + stp a2, a3, [x1, #-56] + ld1 {v16.16b-v19.16b}, [x2], x3 + + subs x6, x4, #192 + ccmp x3, xzr, #4, lt + add x7, x6, x2 + csel x3, x3, xzr, eq + csel x2, x2, x7, eq zip1 v1.2d, v20.2d, v22.2d zip2 v5.2d, v20.2d, v22.2d + stp a4, a5, [x1, #-48] zip1 v9.2d, v21.2d, v23.2d zip2 v13.2d, v21.2d, v23.2d - ld1 {v20.16b-v23.16b}, [x2], #64 + stp a6, a7, [x1, #-40] + ld1 {v20.16b-v23.16b}, [x2], x3 + + subs x7, x4, #256 + ccmp x3, xzr, #4, lt + add x8, x7, x2 + csel x3, x3, xzr, eq + csel x2, x2, x8, eq zip1 v2.2d, v24.2d, v26.2d zip2 v6.2d, v24.2d, v26.2d + stp a8, a9, [x1, #-32] zip1 v10.2d, v25.2d, v27.2d zip2 v14.2d, v25.2d, v27.2d - ld1 {v24.16b-v27.16b}, [x2], #64 + stp a10, a11, [x1, #-24] + ld1 {v24.16b-v27.16b}, [x2], x3 + + subs x8, x4, #320 + ccmp x3, xzr, #4, lt + add x9, x8, x2 + csel x2, x2, x9, eq zip1 v3.2d, v28.2d, v30.2d zip2 v7.2d, v28.2d, v30.2d + stp a12, a13, [x1, #-16] zip1 v11.2d, v29.2d, v31.2d zip2 v15.2d, v29.2d, v31.2d + stp a14, a15, [x1, #-8] ld1 {v28.16b-v31.16b}, [x2] // xor with corresponding input, write to output + tbnz x5, #63, 0f eor 
v16.16b, v16.16b, v0.16b eor v17.16b, v17.16b, v1.16b eor v18.16b, v18.16b, v2.16b eor v19.16b, v19.16b, v3.16b + st1 {v16.16b-v19.16b}, [x1], #64 + cbz x5, .Lout + + tbnz x6, #63, 1f eor v20.16b, v20.16b, v4.16b eor v21.16b, v21.16b, v5.16b - st1 {v16.16b-v19.16b}, [x1], #64 eor v22.16b, v22.16b, v6.16b eor v23.16b, v23.16b, v7.16b + st1 {v20.16b-v23.16b}, [x1], #64 + cbz x6, .Lout + + tbnz x7, #63, 2f eor v24.16b, v24.16b, v8.16b eor v25.16b, v25.16b, v9.16b - st1 {v20.16b-v23.16b}, [x1], #64 eor v26.16b, v26.16b, v10.16b eor v27.16b, v27.16b, v11.16b - eor v28.16b, v28.16b, v12.16b st1 {v24.16b-v27.16b}, [x1], #64 + cbz x7, .Lout + + tbnz x8, #63, 3f + eor v28.16b, v28.16b, v12.16b eor v29.16b, v29.16b, v13.16b eor v30.16b, v30.16b, v14.16b eor v31.16b, v31.16b, v15.16b st1 {v28.16b-v31.16b}, [x1] +.Lout: frame_pop ret -ENDPROC(chacha20_4block_xor_neon) -CTRINC: .word 0, 1, 2, 3 + // fewer than 128 bytes of in/output +0: ld1 {v8.16b}, [x10] + ld1 {v9.16b}, [x11] + movi v10.16b, #16 + sub x2, x1, #64 + add x1, x1, x5 + ld1 {v16.16b-v19.16b}, [x2] + tbl v4.16b, {v0.16b-v3.16b}, v8.16b + tbx v20.16b, {v16.16b-v19.16b}, v9.16b + add v8.16b, v8.16b, v10.16b + add v9.16b, v9.16b, v10.16b + tbl v5.16b, {v0.16b-v3.16b}, v8.16b + tbx v21.16b, {v16.16b-v19.16b}, v9.16b + add v8.16b, v8.16b, v10.16b + add v9.16b, v9.16b, v10.16b + tbl v6.16b, {v0.16b-v3.16b}, v8.16b + tbx v22.16b, {v16.16b-v19.16b}, v9.16b + add v8.16b, v8.16b, v10.16b + add v9.16b, v9.16b, v10.16b + tbl v7.16b, {v0.16b-v3.16b}, v8.16b + tbx v23.16b, {v16.16b-v19.16b}, v9.16b + + eor v20.16b, v20.16b, v4.16b + eor v21.16b, v21.16b, v5.16b + eor v22.16b, v22.16b, v6.16b + eor v23.16b, v23.16b, v7.16b + st1 {v20.16b-v23.16b}, [x1] + b .Lout + + // fewer than 192 bytes of in/output +1: ld1 {v8.16b}, [x10] + ld1 {v9.16b}, [x11] + movi v10.16b, #16 + add x1, x1, x6 + tbl v0.16b, {v4.16b-v7.16b}, v8.16b + tbx v20.16b, {v16.16b-v19.16b}, v9.16b + add v8.16b, v8.16b, v10.16b + add v9.16b, v9.16b, v10.16b + tbl v1.16b, {v4.16b-v7.16b}, v8.16b + tbx v21.16b, {v16.16b-v19.16b}, v9.16b + add v8.16b, v8.16b, v10.16b + add v9.16b, v9.16b, v10.16b + tbl v2.16b, {v4.16b-v7.16b}, v8.16b + tbx v22.16b, {v16.16b-v19.16b}, v9.16b + add v8.16b, v8.16b, v10.16b + add v9.16b, v9.16b, v10.16b + tbl v3.16b, {v4.16b-v7.16b}, v8.16b + tbx v23.16b, {v16.16b-v19.16b}, v9.16b + + eor v20.16b, v20.16b, v0.16b + eor v21.16b, v21.16b, v1.16b + eor v22.16b, v22.16b, v2.16b + eor v23.16b, v23.16b, v3.16b + st1 {v20.16b-v23.16b}, [x1] + b .Lout + + // fewer than 256 bytes of in/output +2: ld1 {v4.16b}, [x10] + ld1 {v5.16b}, [x11] + movi v6.16b, #16 + add x1, x1, x7 + tbl v0.16b, {v8.16b-v11.16b}, v4.16b + tbx v24.16b, {v20.16b-v23.16b}, v5.16b + add v4.16b, v4.16b, v6.16b + add v5.16b, v5.16b, v6.16b + tbl v1.16b, {v8.16b-v11.16b}, v4.16b + tbx v25.16b, {v20.16b-v23.16b}, v5.16b + add v4.16b, v4.16b, v6.16b + add v5.16b, v5.16b, v6.16b + tbl v2.16b, {v8.16b-v11.16b}, v4.16b + tbx v26.16b, {v20.16b-v23.16b}, v5.16b + add v4.16b, v4.16b, v6.16b + add v5.16b, v5.16b, v6.16b + tbl v3.16b, {v8.16b-v11.16b}, v4.16b + tbx v27.16b, {v20.16b-v23.16b}, v5.16b + + eor v24.16b, v24.16b, v0.16b + eor v25.16b, v25.16b, v1.16b + eor v26.16b, v26.16b, v2.16b + eor v27.16b, v27.16b, v3.16b + st1 {v24.16b-v27.16b}, [x1] + b .Lout + + // fewer than 320 bytes of in/output +3: ld1 {v4.16b}, [x10] + ld1 {v5.16b}, [x11] + movi v6.16b, #16 + add x1, x1, x8 + tbl v0.16b, {v12.16b-v15.16b}, v4.16b + tbx v28.16b, {v24.16b-v27.16b}, v5.16b + add v4.16b, v4.16b, v6.16b + add v5.16b, v5.16b, 
v6.16b + tbl v1.16b, {v12.16b-v15.16b}, v4.16b + tbx v29.16b, {v24.16b-v27.16b}, v5.16b + add v4.16b, v4.16b, v6.16b + add v5.16b, v5.16b, v6.16b + tbl v2.16b, {v12.16b-v15.16b}, v4.16b + tbx v30.16b, {v24.16b-v27.16b}, v5.16b + add v4.16b, v4.16b, v6.16b + add v5.16b, v5.16b, v6.16b + tbl v3.16b, {v12.16b-v15.16b}, v4.16b + tbx v31.16b, {v24.16b-v27.16b}, v5.16b + + eor v28.16b, v28.16b, v0.16b + eor v29.16b, v29.16b, v1.16b + eor v30.16b, v30.16b, v2.16b + eor v31.16b, v31.16b, v3.16b + st1 {v28.16b-v31.16b}, [x1] + b .Lout +ENDPROC(chacha_4block_xor_neon) + + .section ".rodata", "a", %progbits + .align L1_CACHE_SHIFT +.Lpermute: + .set .Li, 0 + .rept 192 + .byte (.Li - 64) + .set .Li, .Li + 1 + .endr + +CTRINC: .word 1, 2, 3, 4 ROT8: .word 0x02010003, 0x06050407, 0x0a09080b, 0x0e0d0c0f diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c new file mode 100644 index 000000000000..bece1d85bd81 --- /dev/null +++ b/arch/arm64/crypto/chacha-neon-glue.c @@ -0,0 +1,198 @@ +/* + * ARM NEON accelerated ChaCha and XChaCha stream ciphers, + * including ChaCha20 (RFC7539) + * + * Copyright (C) 2016 - 2017 Linaro, Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * Based on: + * ChaCha20 256-bit cipher algorithm, RFC7539, SIMD glue code + * + * Copyright (C) 2015 Martin Willi + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + */ + +#include +#include +#include +#include +#include + +#include +#include +#include + +asmlinkage void chacha_block_xor_neon(u32 *state, u8 *dst, const u8 *src, + int nrounds); +asmlinkage void chacha_4block_xor_neon(u32 *state, u8 *dst, const u8 *src, + int nrounds, int bytes); +asmlinkage void hchacha_block_neon(const u32 *state, u32 *out, int nrounds); + +static void chacha_doneon(u32 *state, u8 *dst, const u8 *src, + int bytes, int nrounds) +{ + while (bytes > 0) { + int l = min(bytes, CHACHA_BLOCK_SIZE * 5); + + if (l <= CHACHA_BLOCK_SIZE) { + u8 buf[CHACHA_BLOCK_SIZE]; + + memcpy(buf, src, l); + chacha_block_xor_neon(state, buf, buf, nrounds); + memcpy(dst, buf, l); + state[12] += 1; + break; + } + chacha_4block_xor_neon(state, dst, src, nrounds, l); + bytes -= CHACHA_BLOCK_SIZE * 5; + src += CHACHA_BLOCK_SIZE * 5; + dst += CHACHA_BLOCK_SIZE * 5; + state[12] += 5; + } +} + +static int chacha_neon_stream_xor(struct skcipher_request *req, + struct chacha_ctx *ctx, u8 *iv) +{ + struct skcipher_walk walk; + u32 state[16]; + int err; + + err = skcipher_walk_virt(&walk, req, false); + + crypto_chacha_init(state, ctx, iv); + + while (walk.nbytes > 0) { + unsigned int nbytes = walk.nbytes; + + if (nbytes < walk.total) + nbytes = rounddown(nbytes, walk.stride); + + kernel_neon_begin(); + chacha_doneon(state, walk.dst.virt.addr, walk.src.virt.addr, + nbytes, ctx->nrounds); + kernel_neon_end(); + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + return err; +} + +static int chacha_neon(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (req->cryptlen <= CHACHA_BLOCK_SIZE || !may_use_simd()) + return crypto_chacha_crypt(req); + + return chacha_neon_stream_xor(req, ctx, 
req->iv); +} + +static int xchacha_neon(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + struct chacha_ctx subctx; + u32 state[16]; + u8 real_iv[16]; + + if (req->cryptlen <= CHACHA_BLOCK_SIZE || !may_use_simd()) + return crypto_xchacha_crypt(req); + + crypto_chacha_init(state, ctx, req->iv); + + kernel_neon_begin(); + hchacha_block_neon(state, subctx.key, ctx->nrounds); + kernel_neon_end(); + subctx.nrounds = ctx->nrounds; + + memcpy(&real_iv[0], req->iv + 24, 8); + memcpy(&real_iv[8], req->iv + 16, 8); + return chacha_neon_stream_xor(req, &subctx, real_iv); +} + +static struct skcipher_alg algs[] = { + { + .base.cra_name = "chacha20", + .base.cra_driver_name = "chacha20-neon", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = 5 * CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha20_setkey, + .encrypt = chacha_neon, + .decrypt = chacha_neon, + }, { + .base.cra_name = "xchacha20", + .base.cra_driver_name = "xchacha20-neon", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = 5 * CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha20_setkey, + .encrypt = xchacha_neon, + .decrypt = xchacha_neon, + }, { + .base.cra_name = "xchacha12", + .base.cra_driver_name = "xchacha12-neon", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = 5 * CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha12_setkey, + .encrypt = xchacha_neon, + .decrypt = xchacha_neon, + } +}; + +static int __init chacha_simd_mod_init(void) +{ + if (!(elf_hwcap & HWCAP_ASIMD)) + return -ENODEV; + + return crypto_register_skciphers(algs, ARRAY_SIZE(algs)); +} + +static void __exit chacha_simd_mod_fini(void) +{ + crypto_unregister_skciphers(algs, ARRAY_SIZE(algs)); +} + +module_init(chacha_simd_mod_init); +module_exit(chacha_simd_mod_fini); + +MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers (NEON accelerated)"); +MODULE_AUTHOR("Ard Biesheuvel "); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS_CRYPTO("chacha20"); +MODULE_ALIAS_CRYPTO("chacha20-neon"); +MODULE_ALIAS_CRYPTO("xchacha20"); +MODULE_ALIAS_CRYPTO("xchacha20-neon"); +MODULE_ALIAS_CRYPTO("xchacha12"); +MODULE_ALIAS_CRYPTO("xchacha12-neon"); diff --git a/arch/arm64/crypto/chacha20-neon-glue.c b/arch/arm64/crypto/chacha20-neon-glue.c deleted file mode 100644 index 727579c93ded..000000000000 --- a/arch/arm64/crypto/chacha20-neon-glue.c +++ /dev/null @@ -1,133 +0,0 @@ -/* - * ChaCha20 256-bit cipher algorithm, RFC7539, arm64 NEON functions - * - * Copyright (C) 2016 - 2017 Linaro, Ltd. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- * - * Based on: - * ChaCha20 256-bit cipher algorithm, RFC7539, SIMD glue code - * - * Copyright (C) 2015 Martin Willi - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - */ - -#include -#include -#include -#include -#include - -#include -#include -#include - -asmlinkage void chacha20_block_xor_neon(u32 *state, u8 *dst, const u8 *src); -asmlinkage void chacha20_4block_xor_neon(u32 *state, u8 *dst, const u8 *src); - -static void chacha20_doneon(u32 *state, u8 *dst, const u8 *src, - unsigned int bytes) -{ - u8 buf[CHACHA20_BLOCK_SIZE]; - - while (bytes >= CHACHA20_BLOCK_SIZE * 4) { - kernel_neon_begin(); - chacha20_4block_xor_neon(state, dst, src); - kernel_neon_end(); - bytes -= CHACHA20_BLOCK_SIZE * 4; - src += CHACHA20_BLOCK_SIZE * 4; - dst += CHACHA20_BLOCK_SIZE * 4; - state[12] += 4; - } - - if (!bytes) - return; - - kernel_neon_begin(); - while (bytes >= CHACHA20_BLOCK_SIZE) { - chacha20_block_xor_neon(state, dst, src); - bytes -= CHACHA20_BLOCK_SIZE; - src += CHACHA20_BLOCK_SIZE; - dst += CHACHA20_BLOCK_SIZE; - state[12]++; - } - if (bytes) { - memcpy(buf, src, bytes); - chacha20_block_xor_neon(state, buf, buf); - memcpy(dst, buf, bytes); - } - kernel_neon_end(); -} - -static int chacha20_neon(struct skcipher_request *req) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm); - struct skcipher_walk walk; - u32 state[16]; - int err; - - if (!may_use_simd() || req->cryptlen <= CHACHA20_BLOCK_SIZE) - return crypto_chacha20_crypt(req); - - err = skcipher_walk_virt(&walk, req, false); - - crypto_chacha20_init(state, ctx, walk.iv); - - while (walk.nbytes > 0) { - unsigned int nbytes = walk.nbytes; - - if (nbytes < walk.total) - nbytes = round_down(nbytes, walk.stride); - - chacha20_doneon(state, walk.dst.virt.addr, walk.src.virt.addr, - nbytes); - err = skcipher_walk_done(&walk, walk.nbytes - nbytes); - } - - return err; -} - -static struct skcipher_alg alg = { - .base.cra_name = "chacha20", - .base.cra_driver_name = "chacha20-neon", - .base.cra_priority = 300, - .base.cra_blocksize = 1, - .base.cra_ctxsize = sizeof(struct chacha20_ctx), - .base.cra_module = THIS_MODULE, - - .min_keysize = CHACHA20_KEY_SIZE, - .max_keysize = CHACHA20_KEY_SIZE, - .ivsize = CHACHA20_IV_SIZE, - .chunksize = CHACHA20_BLOCK_SIZE, - .walksize = 4 * CHACHA20_BLOCK_SIZE, - .setkey = crypto_chacha20_setkey, - .encrypt = chacha20_neon, - .decrypt = chacha20_neon, -}; - -static int __init chacha20_simd_mod_init(void) -{ - if (!(elf_hwcap & HWCAP_ASIMD)) - return -ENODEV; - - return crypto_register_skcipher(&alg); -} - -static void __exit chacha20_simd_mod_fini(void) -{ - crypto_unregister_skcipher(&alg); -} - -module_init(chacha20_simd_mod_init); -module_exit(chacha20_simd_mod_fini); - -MODULE_AUTHOR("Ard Biesheuvel "); -MODULE_LICENSE("GPL v2"); -MODULE_ALIAS_CRYPTO("chacha20"); diff --git a/arch/arm64/crypto/nh-neon-core.S b/arch/arm64/crypto/nh-neon-core.S new file mode 100644 index 000000000000..e05570c38de7 --- /dev/null +++ b/arch/arm64/crypto/nh-neon-core.S @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * NH - ε-almost-universal hash function, ARM64 NEON accelerated version + * + * Copyright 2018 Google LLC + * + * Author: Eric Biggers + */ + +#include + + KEY .req x0 + MESSAGE .req x1 + MESSAGE_LEN .req x2 + HASH .req x3 
+ + PASS0_SUMS .req v0 + PASS1_SUMS .req v1 + PASS2_SUMS .req v2 + PASS3_SUMS .req v3 + K0 .req v4 + K1 .req v5 + K2 .req v6 + K3 .req v7 + T0 .req v8 + T1 .req v9 + T2 .req v10 + T3 .req v11 + T4 .req v12 + T5 .req v13 + T6 .req v14 + T7 .req v15 + +.macro _nh_stride k0, k1, k2, k3 + + // Load next message stride + ld1 {T3.16b}, [MESSAGE], #16 + + // Load next key stride + ld1 {\k3\().4s}, [KEY], #16 + + // Add message words to key words + add T0.4s, T3.4s, \k0\().4s + add T1.4s, T3.4s, \k1\().4s + add T2.4s, T3.4s, \k2\().4s + add T3.4s, T3.4s, \k3\().4s + + // Multiply 32x32 => 64 and accumulate + mov T4.d[0], T0.d[1] + mov T5.d[0], T1.d[1] + mov T6.d[0], T2.d[1] + mov T7.d[0], T3.d[1] + umlal PASS0_SUMS.2d, T0.2s, T4.2s + umlal PASS1_SUMS.2d, T1.2s, T5.2s + umlal PASS2_SUMS.2d, T2.2s, T6.2s + umlal PASS3_SUMS.2d, T3.2s, T7.2s +.endm + +/* + * void nh_neon(const u32 *key, const u8 *message, size_t message_len, + * u8 hash[NH_HASH_BYTES]) + * + * It's guaranteed that message_len % 16 == 0. + */ +ENTRY(nh_neon) + + ld1 {K0.4s,K1.4s}, [KEY], #32 + movi PASS0_SUMS.2d, #0 + movi PASS1_SUMS.2d, #0 + ld1 {K2.4s}, [KEY], #16 + movi PASS2_SUMS.2d, #0 + movi PASS3_SUMS.2d, #0 + + subs MESSAGE_LEN, MESSAGE_LEN, #64 + blt .Lloop4_done +.Lloop4: + _nh_stride K0, K1, K2, K3 + _nh_stride K1, K2, K3, K0 + _nh_stride K2, K3, K0, K1 + _nh_stride K3, K0, K1, K2 + subs MESSAGE_LEN, MESSAGE_LEN, #64 + bge .Lloop4 + +.Lloop4_done: + ands MESSAGE_LEN, MESSAGE_LEN, #63 + beq .Ldone + _nh_stride K0, K1, K2, K3 + + subs MESSAGE_LEN, MESSAGE_LEN, #16 + beq .Ldone + _nh_stride K1, K2, K3, K0 + + subs MESSAGE_LEN, MESSAGE_LEN, #16 + beq .Ldone + _nh_stride K2, K3, K0, K1 + +.Ldone: + // Sum the accumulators for each pass, then store the sums to 'hash' + addp T0.2d, PASS0_SUMS.2d, PASS1_SUMS.2d + addp T1.2d, PASS2_SUMS.2d, PASS3_SUMS.2d + st1 {T0.16b,T1.16b}, [HASH] + ret +ENDPROC(nh_neon) diff --git a/arch/arm64/crypto/nhpoly1305-neon-glue.c b/arch/arm64/crypto/nhpoly1305-neon-glue.c new file mode 100644 index 000000000000..22cc32ac9448 --- /dev/null +++ b/arch/arm64/crypto/nhpoly1305-neon-glue.c @@ -0,0 +1,77 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NHPoly1305 - ε-almost-∆-universal hash function for Adiantum + * (ARM64 NEON accelerated version) + * + * Copyright 2018 Google LLC + */ + +#include +#include +#include +#include +#include + +asmlinkage void nh_neon(const u32 *key, const u8 *message, size_t message_len, + u8 hash[NH_HASH_BYTES]); + +/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */ +static void _nh_neon(const u32 *key, const u8 *message, size_t message_len, + __le64 hash[NH_NUM_PASSES]) +{ + nh_neon(key, message, message_len, (u8 *)hash); +} + +static int nhpoly1305_neon_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen) +{ + if (srclen < 64 || !may_use_simd()) + return crypto_nhpoly1305_update(desc, src, srclen); + + do { + unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); + + kernel_neon_begin(); + crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon); + kernel_neon_end(); + src += n; + srclen -= n; + } while (srclen); + return 0; +} + +static struct shash_alg nhpoly1305_alg = { + .base.cra_name = "nhpoly1305", + .base.cra_driver_name = "nhpoly1305-neon", + .base.cra_priority = 200, + .base.cra_ctxsize = sizeof(struct nhpoly1305_key), + .base.cra_module = THIS_MODULE, + .digestsize = POLY1305_DIGEST_SIZE, + .init = crypto_nhpoly1305_init, + .update = nhpoly1305_neon_update, + .final = crypto_nhpoly1305_final, + .setkey = 
crypto_nhpoly1305_setkey, + .descsize = sizeof(struct nhpoly1305_state), +}; + +static int __init nhpoly1305_mod_init(void) +{ + if (!(elf_hwcap & HWCAP_ASIMD)) + return -ENODEV; + + return crypto_register_shash(&nhpoly1305_alg); +} + +static void __exit nhpoly1305_mod_exit(void) +{ + crypto_unregister_shash(&nhpoly1305_alg); +} + +module_init(nhpoly1305_mod_init); +module_exit(nhpoly1305_mod_exit); + +MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function (NEON-accelerated)"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("nhpoly1305"); +MODULE_ALIAS_CRYPTO("nhpoly1305-neon"); diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h index ed693c5bcec0..2945fe6cd863 100644 --- a/arch/arm64/include/asm/brk-imm.h +++ b/arch/arm64/include/asm/brk-imm.h @@ -16,10 +16,12 @@ * 0x400: for dynamic BRK instruction * 0x401: for compile time BRK instruction * 0x800: kernel-mode BUG() and WARN() traps + * 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff) */ #define FAULT_BRK_IMM 0x100 #define KGDB_DYN_DBG_BRK_IMM 0x400 #define KGDB_COMPILED_DBG_BRK_IMM 0x401 #define BUG_BRK_IMM 0x800 +#define KASAN_BRK_IMM 0x900 #endif diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h index c41f3fb1446c..95dbf3ef735a 100644 --- a/arch/arm64/include/asm/dma-mapping.h +++ b/arch/arm64/include/asm/dma-mapping.h @@ -24,15 +24,9 @@ #include #include -extern const struct dma_map_ops dummy_dma_ops; - static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) { - /* - * We expect no ISA devices, and all other DMA masters are expected to - * have someone call arch_setup_dma_ops at device creation time. - */ - return &dummy_dma_ops; + return NULL; } void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h index 8758bb008436..b52aacd2c526 100644 --- a/arch/arm64/include/asm/kasan.h +++ b/arch/arm64/include/asm/kasan.h @@ -4,12 +4,16 @@ #ifndef __ASSEMBLY__ -#ifdef CONFIG_KASAN - #include #include #include +#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag) +#define arch_kasan_reset_tag(addr) __tag_reset(addr) +#define arch_kasan_get_tag(addr) __tag_get(addr) + +#ifdef CONFIG_KASAN + /* * KASAN_SHADOW_START: beginning of the kernel virtual addresses. * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses, diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index a0ee78c208c3..e1ec947e7c0c 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -53,8 +53,11 @@ #define PAGE_OFFSET (UL(0xffffffffffffffff) - \ (UL(1) << (VA_BITS - 1)) + 1) #define KIMAGE_VADDR (MODULES_END) +#define BPF_JIT_REGION_START (VA_START + KASAN_SHADOW_SIZE) +#define BPF_JIT_REGION_SIZE (SZ_128M) +#define BPF_JIT_REGION_END (BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE) #define MODULES_END (MODULES_VADDR + MODULES_VSIZE) -#define MODULES_VADDR (VA_START + KASAN_SHADOW_SIZE) +#define MODULES_VADDR (BPF_JIT_REGION_END) #define MODULES_VSIZE (SZ_128M) #define VMEMMAP_START (PAGE_OFFSET - VMEMMAP_SIZE) #define PCI_IO_END (VMEMMAP_START - SZ_2M) @@ -71,13 +74,11 @@ #endif /* - * KASAN requires 1/8th of the kernel virtual address space for the shadow - * region. KASAN can bloat the stack significantly, so double the (minimum) - * stack size when KASAN is in use, and then double it again if KASAN_EXTRA is - * on. 
+ * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual + * address space for the shadow region respectively. They can bloat the stack + * significantly, so double the (minimum) stack size when they are in use. */ #ifdef CONFIG_KASAN -#define KASAN_SHADOW_SCALE_SHIFT 3 #define KASAN_SHADOW_SIZE (UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT)) #ifdef CONFIG_KASAN_EXTRA #define KASAN_THREAD_SHIFT 2 @@ -170,14 +171,6 @@ #define IOREMAP_MAX_ORDER (PMD_SHIFT) #endif -#ifdef CONFIG_BLK_DEV_INITRD -#define __early_init_dt_declare_initrd(__start, __end) \ - do { \ - initrd_start = (__start); \ - initrd_end = (__end); \ - } while (0) -#endif - #ifndef __ASSEMBLY__ #include @@ -217,6 +210,26 @@ extern u64 vabits_user; */ #define PHYS_PFN_OFFSET (PHYS_OFFSET >> PAGE_SHIFT) +/* + * When dealing with data aborts, watchpoints, or instruction traps we may end + * up with a tagged userland pointer. Clear the tag to get a sane pointer to + * pass on to access_ok(), for instance. + */ +#define untagged_addr(addr) \ + ((__typeof__(addr))sign_extend64((u64)(addr), 55)) + +#ifdef CONFIG_KASAN_SW_TAGS +#define __tag_shifted(tag) ((u64)(tag) << 56) +#define __tag_set(addr, tag) (__typeof__(addr))( \ + ((u64)(addr) & ~__tag_shifted(0xff)) | __tag_shifted(tag)) +#define __tag_reset(addr) untagged_addr(addr) +#define __tag_get(addr) (__u8)((u64)(addr) >> 56) +#else +#define __tag_set(addr, tag) (addr) +#define __tag_reset(addr) (addr) +#define __tag_get(addr) 0 +#endif + /* * Physical vs virtual RAM address space conversion. These are * private definitions which should NOT be used outside memory.h @@ -300,7 +313,13 @@ static inline void *phys_to_virt(phys_addr_t x) #define __virt_to_pgoff(kaddr) (((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page)) #define __page_to_voff(kaddr) (((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page)) -#define page_to_virt(page) ((void *)((__page_to_voff(page)) | PAGE_OFFSET)) +#define page_to_virt(page) ({ \ + unsigned long __addr = \ + ((__page_to_voff(page)) | PAGE_OFFSET); \ + __addr = __tag_set(__addr, page_kasan_tag(page)); \ + ((void *)__addr); \ +}) + #define virt_to_page(vaddr) ((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START)) #define _virt_addr_valid(kaddr) pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \ @@ -308,9 +327,10 @@ static inline void *phys_to_virt(phys_addr_t x) #endif #endif -#define _virt_addr_is_linear(kaddr) (((u64)(kaddr)) >= PAGE_OFFSET) -#define virt_addr_valid(kaddr) (_virt_addr_is_linear(kaddr) && \ - _virt_addr_valid(kaddr)) +#define _virt_addr_is_linear(kaddr) \ + (__tag_reset((u64)(kaddr)) >= PAGE_OFFSET) +#define virt_addr_valid(kaddr) \ + (_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr)) #include diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index 22bb3ae514f5..e9b0a7d75184 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -299,6 +299,7 @@ #define TCR_A1 (UL(1) << 22) #define TCR_ASID16 (UL(1) << 36) #define TCR_TBI0 (UL(1) << 37) +#define TCR_TBI1 (UL(1) << 38) #define TCR_HA (UL(1) << 39) #define TCR_HD (UL(1) << 40) #define TCR_NFD1 (UL(1) << 54) diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index fad33f5fde47..ed252435fd92 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -95,13 +95,6 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si return ret; } -/* - * When dealing with data aborts, 
watchpoints, or instruction traps we may end - * up with a tagged userland pointer. Clear the tag to get a sane pointer to - * pass on to access_ok(), for instance. - */ -#define untagged_addr(addr) sign_extend64(addr, 55) - #define access_ok(type, addr, size) __range_ok(addr, size) #define user_addr_max get_fs diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 5f4d9acb32f5..cdc71cf70aad 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include @@ -969,6 +970,58 @@ static struct break_hook bug_break_hook = { .fn = bug_handler, }; +#ifdef CONFIG_KASAN_SW_TAGS + +#define KASAN_ESR_RECOVER 0x20 +#define KASAN_ESR_WRITE 0x10 +#define KASAN_ESR_SIZE_MASK 0x0f +#define KASAN_ESR_SIZE(esr) (1 << ((esr) & KASAN_ESR_SIZE_MASK)) + +static int kasan_handler(struct pt_regs *regs, unsigned int esr) +{ + bool recover = esr & KASAN_ESR_RECOVER; + bool write = esr & KASAN_ESR_WRITE; + size_t size = KASAN_ESR_SIZE(esr); + u64 addr = regs->regs[0]; + u64 pc = regs->pc; + + if (user_mode(regs)) + return DBG_HOOK_ERROR; + + kasan_report(addr, size, write, pc); + + /* + * The instrumentation allows to control whether we can proceed after + * a crash was detected. This is done by passing the -recover flag to + * the compiler. Disabling recovery allows to generate more compact + * code. + * + * Unfortunately disabling recovery doesn't work for the kernel right + * now. KASAN reporting is disabled in some contexts (for example when + * the allocator accesses slab object metadata; this is controlled by + * current->kasan_depth). All these accesses are detected by the tool, + * even though the reports for them are not printed. + * + * This is something that might be fixed at some point in the future. + */ + if (!recover) + die("Oops - KASAN", regs, 0); + + /* If thread survives, skip over the brk instruction and continue: */ + arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); + return DBG_HOOK_HANDLED; +} + +#define KASAN_ESR_VAL (0xf2000000 | KASAN_BRK_IMM) +#define KASAN_ESR_MASK 0xffffff00 + +static struct break_hook kasan_break_hook = { + .esr_val = KASAN_ESR_VAL, + .esr_mask = KASAN_ESR_MASK, + .fn = kasan_handler, +}; +#endif + /* * Initial handler for AArch64 BRK exceptions * This handler only used until debug_traps_init(). 
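/*
 * Editor's sketch, not part of the patch: how a tag check fault is
 * encoded in the BRK immediate that kasan_break_hook matches above.
 * For a BRK instruction the low ESR bits carry the immediate, so an
 * access of 2^n bytes yields imm = 0x900 | recover | write | n. The
 * values fed to decode() below are hypothetical examples; the masks
 * are the same ones kasan_handler() uses.
 */
#include <stdbool.h>
#include <stdio.h>

#define KASAN_ESR_RECOVER	0x20
#define KASAN_ESR_WRITE		0x10
#define KASAN_ESR_SIZE_MASK	0x0f

static void decode(unsigned int esr)
{
	bool recover = esr & KASAN_ESR_RECOVER;
	bool write = esr & KASAN_ESR_WRITE;
	unsigned int size = 1U << (esr & KASAN_ESR_SIZE_MASK);

	printf("esr=%#x: %s of %u bytes, %srecoverable\n", esr,
	       write ? "write" : "read", size, recover ? "" : "non-");
}

int main(void)
{
	decode(0xf2000000 | 0x900 | KASAN_ESR_WRITE | 3);	/* 8-byte write */
	decode(0xf2000000 | 0x900 | KASAN_ESR_RECOVER | 2);	/* 4-byte read */
	return 0;
}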
@@ -976,6 +1029,10 @@ static struct break_hook bug_break_hook = { int __init early_brk64(unsigned long addr, unsigned int esr, struct pt_regs *regs) { +#ifdef CONFIG_KASAN_SW_TAGS + if ((esr & KASAN_ESR_MASK) == KASAN_ESR_VAL) + return kasan_handler(regs, esr) != DBG_HOOK_HANDLED; +#endif return bug_handler(regs, esr) != DBG_HOOK_HANDLED; } @@ -983,4 +1040,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr, void __init trap_init(void) { register_break_hook(&bug_break_hook); +#ifdef CONFIG_KASAN_SW_TAGS + register_break_hook(&kasan_break_hook); +#endif } diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig index 47b23bf617c7..a3f85624313e 100644 --- a/arch/arm64/kvm/Kconfig +++ b/arch/arm64/kvm/Kconfig @@ -61,6 +61,6 @@ config KVM_ARM_PMU config KVM_INDIRECT_VECTORS def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS) -source drivers/vhost/Kconfig +source "drivers/vhost/Kconfig" endif # VIRTUALIZATION diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c index a53704406099..fb0908456a1f 100644 --- a/arch/arm64/mm/dma-mapping.c +++ b/arch/arm64/mm/dma-mapping.c @@ -33,113 +33,6 @@ #include -static struct gen_pool *atomic_pool __ro_after_init; - -#define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K -static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE; - -static int __init early_coherent_pool(char *p) -{ - atomic_pool_size = memparse(p, &p); - return 0; -} -early_param("coherent_pool", early_coherent_pool); - -static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags) -{ - unsigned long val; - void *ptr = NULL; - - if (!atomic_pool) { - WARN(1, "coherent pool not initialised!\n"); - return NULL; - } - - val = gen_pool_alloc(atomic_pool, size); - if (val) { - phys_addr_t phys = gen_pool_virt_to_phys(atomic_pool, val); - - *ret_page = phys_to_page(phys); - ptr = (void *)val; - memset(ptr, 0, size); - } - - return ptr; -} - -static bool __in_atomic_pool(void *start, size_t size) -{ - return addr_in_gen_pool(atomic_pool, (unsigned long)start, size); -} - -static int __free_from_pool(void *start, size_t size) -{ - if (!__in_atomic_pool(start, size)) - return 0; - - gen_pool_free(atomic_pool, (unsigned long)start, size); - - return 1; -} - -void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, - gfp_t flags, unsigned long attrs) -{ - struct page *page; - void *ptr, *coherent_ptr; - pgprot_t prot = pgprot_writecombine(PAGE_KERNEL); - - size = PAGE_ALIGN(size); - - if (!gfpflags_allow_blocking(flags)) { - struct page *page = NULL; - void *addr = __alloc_from_pool(size, &page, flags); - - if (addr) - *dma_handle = phys_to_dma(dev, page_to_phys(page)); - - return addr; - } - - ptr = dma_direct_alloc_pages(dev, size, dma_handle, flags, attrs); - if (!ptr) - goto no_mem; - - /* remove any dirty cache lines on the kernel alias */ - __dma_flush_area(ptr, size); - - /* create a coherent mapping */ - page = virt_to_page(ptr); - coherent_ptr = dma_common_contiguous_remap(page, size, VM_USERMAP, - prot, __builtin_return_address(0)); - if (!coherent_ptr) - goto no_map; - - return coherent_ptr; - -no_map: - dma_direct_free_pages(dev, size, ptr, *dma_handle, attrs); -no_mem: - return NULL; -} - -void arch_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - if (!__free_from_pool(vaddr, PAGE_ALIGN(size))) { - void *kaddr = phys_to_virt(dma_to_phys(dev, dma_handle)); - - vunmap(vaddr); - dma_direct_free_pages(dev, size, kaddr, dma_handle, attrs); - } -} 
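/*
 * Editor's sketch, not part of the patch: the allocation policy that
 * the arch_dma_alloc()/arch_dma_free() pair deleted above implemented,
 * and which the generic DMA remap code now centralizes (this diff
 * switches arm64 to dma_atomic_pool_init()/dma_alloc_from_pool() a few
 * hunks below). Standalone C model; pool_alloc() and remap_alloc() are
 * hypothetical stand-ins for the atomic-pool and vmap paths.
 */
#include <stdbool.h>
#include <stdlib.h>

static void *pool_alloc(size_t size)  { return malloc(size); }		/* stand-in */
static void *remap_alloc(size_t size) { return calloc(1, size); }	/* stand-in */

static void *coherent_alloc(size_t size, bool may_block)
{
	/*
	 * Callers whose GFP flags forbid sleeping (e.g. GFP_ATOMIC) cannot
	 * take the page-allocator-plus-remap path, so they are served from
	 * a preallocated non-cacheable pool instead.
	 */
	if (!may_block)
		return pool_alloc(size);
	return remap_alloc(size);
}

int main(void)
{
	void *p = coherent_alloc(4096, false);	/* atomic-context caller */

	free(p);
	return 0;
}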
- -long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr, - dma_addr_t dma_addr) -{ - return __phys_to_pfn(dma_to_phys(dev, dma_addr)); -} - pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs) { @@ -160,6 +53,11 @@ void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr, __dma_unmap_area(phys_to_virt(paddr), size, dir); } +void arch_dma_prep_coherent(struct page *page, size_t size) +{ + __dma_flush_area(page_address(page), size); +} + #ifdef CONFIG_IOMMU_DMA static int __swiotlb_get_sgtable_page(struct sg_table *sgt, struct page *page, size_t size) @@ -191,167 +89,13 @@ static int __swiotlb_mmap_pfn(struct vm_area_struct *vma, } #endif /* CONFIG_IOMMU_DMA */ -static int __init atomic_pool_init(void) -{ - pgprot_t prot = __pgprot(PROT_NORMAL_NC); - unsigned long nr_pages = atomic_pool_size >> PAGE_SHIFT; - struct page *page; - void *addr; - unsigned int pool_size_order = get_order(atomic_pool_size); - - if (dev_get_cma_area(NULL)) - page = dma_alloc_from_contiguous(NULL, nr_pages, - pool_size_order, false); - else - page = alloc_pages(GFP_DMA32, pool_size_order); - - if (page) { - int ret; - void *page_addr = page_address(page); - - memset(page_addr, 0, atomic_pool_size); - __dma_flush_area(page_addr, atomic_pool_size); - - atomic_pool = gen_pool_create(PAGE_SHIFT, -1); - if (!atomic_pool) - goto free_page; - - addr = dma_common_contiguous_remap(page, atomic_pool_size, - VM_USERMAP, prot, atomic_pool_init); - - if (!addr) - goto destroy_genpool; - - ret = gen_pool_add_virt(atomic_pool, (unsigned long)addr, - page_to_phys(page), - atomic_pool_size, -1); - if (ret) - goto remove_mapping; - - gen_pool_set_algo(atomic_pool, - gen_pool_first_fit_order_align, - NULL); - - pr_info("DMA: preallocated %zu KiB pool for atomic allocations\n", - atomic_pool_size / 1024); - return 0; - } - goto out; - -remove_mapping: - dma_common_free_remap(addr, atomic_pool_size, VM_USERMAP); -destroy_genpool: - gen_pool_destroy(atomic_pool); - atomic_pool = NULL; -free_page: - if (!dma_release_from_contiguous(NULL, page, nr_pages)) - __free_pages(page, pool_size_order); -out: - pr_err("DMA: failed to allocate %zu KiB pool for atomic coherent allocation\n", - atomic_pool_size / 1024); - return -ENOMEM; -} - -/******************************************** - * The following APIs are for dummy DMA ops * - ********************************************/ - -static void *__dummy_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flags, - unsigned long attrs) -{ - return NULL; -} - -static void __dummy_free(struct device *dev, size_t size, - void *vaddr, dma_addr_t dma_handle, - unsigned long attrs) -{ -} - -static int __dummy_mmap(struct device *dev, - struct vm_area_struct *vma, - void *cpu_addr, dma_addr_t dma_addr, size_t size, - unsigned long attrs) -{ - return -ENXIO; -} - -static dma_addr_t __dummy_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction dir, - unsigned long attrs) -{ - return 0; -} - -static void __dummy_unmap_page(struct device *dev, dma_addr_t dev_addr, - size_t size, enum dma_data_direction dir, - unsigned long attrs) -{ -} - -static int __dummy_map_sg(struct device *dev, struct scatterlist *sgl, - int nelems, enum dma_data_direction dir, - unsigned long attrs) -{ - return 0; -} - -static void __dummy_unmap_sg(struct device *dev, - struct scatterlist *sgl, int nelems, - enum dma_data_direction dir, - unsigned long attrs) -{ -} - -static void __dummy_sync_single(struct device 
*dev, - dma_addr_t dev_addr, size_t size, - enum dma_data_direction dir) -{ -} - -static void __dummy_sync_sg(struct device *dev, - struct scatterlist *sgl, int nelems, - enum dma_data_direction dir) -{ -} - -static int __dummy_mapping_error(struct device *hwdev, dma_addr_t dma_addr) -{ - return 1; -} - -static int __dummy_dma_supported(struct device *hwdev, u64 mask) -{ - return 0; -} - -const struct dma_map_ops dummy_dma_ops = { - .alloc = __dummy_alloc, - .free = __dummy_free, - .mmap = __dummy_mmap, - .map_page = __dummy_map_page, - .unmap_page = __dummy_unmap_page, - .map_sg = __dummy_map_sg, - .unmap_sg = __dummy_unmap_sg, - .sync_single_for_cpu = __dummy_sync_single, - .sync_single_for_device = __dummy_sync_single, - .sync_sg_for_cpu = __dummy_sync_sg, - .sync_sg_for_device = __dummy_sync_sg, - .mapping_error = __dummy_mapping_error, - .dma_supported = __dummy_dma_supported, -}; -EXPORT_SYMBOL(dummy_dma_ops); - static int __init arm64_dma_init(void) { WARN_TAINT(ARCH_DMA_MINALIGN < cache_line_size(), TAINT_CPU_OUT_OF_SPEC, "ARCH_DMA_MINALIGN smaller than CTR_EL0.CWG (%d < %d)", ARCH_DMA_MINALIGN, cache_line_size()); - - return atomic_pool_init(); + return dma_atomic_pool_init(GFP_DMA32, __pgprot(PROT_NORMAL_NC)); } arch_initcall(arm64_dma_init); @@ -397,17 +141,17 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size, page = alloc_pages(gfp, get_order(size)); addr = page ? page_address(page) : NULL; } else { - addr = __alloc_from_pool(size, &page, gfp); + addr = dma_alloc_from_pool(size, &page, gfp); } if (!addr) return NULL; *handle = iommu_dma_map_page(dev, page, 0, iosize, ioprot); - if (iommu_dma_mapping_error(dev, *handle)) { + if (*handle == DMA_MAPPING_ERROR) { if (coherent) __free_pages(page, get_order(size)); else - __free_from_pool(addr, size); + dma_free_from_pool(addr, size); addr = NULL; } } else if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) { @@ -420,7 +164,7 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size, return NULL; *handle = iommu_dma_map_page(dev, page, 0, iosize, ioprot); - if (iommu_dma_mapping_error(dev, *handle)) { + if (*handle == DMA_MAPPING_ERROR) { dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT); return NULL; @@ -471,9 +215,9 @@ static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr, * coherent devices. * Hence how dodgy the below logic looks... 
*/ - if (__in_atomic_pool(cpu_addr, size)) { + if (dma_in_atomic_pool(cpu_addr, size)) { iommu_dma_unmap_page(dev, handle, iosize, 0, 0); - __free_from_pool(cpu_addr, size); + dma_free_from_pool(cpu_addr, size); } else if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) { struct page *page = vmalloc_to_page(cpu_addr); @@ -580,7 +324,7 @@ static dma_addr_t __iommu_map_page(struct device *dev, struct page *page, dma_addr_t dev_addr = iommu_dma_map_page(dev, page, offset, size, prot); if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - !iommu_dma_mapping_error(dev, dev_addr)) + dev_addr != DMA_MAPPING_ERROR) __dma_map_area(page_address(page) + offset, size, dir); return dev_addr; @@ -663,7 +407,6 @@ static const struct dma_map_ops iommu_dma_ops = { .sync_sg_for_device = __iommu_sync_sg_for_device, .map_resource = iommu_dma_map_resource, .unmap_resource = iommu_dma_unmap_resource, - .mapping_error = iommu_dma_mapping_error, }; static int __init __iommu_dma_init(void) @@ -719,9 +462,6 @@ static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, const struct iommu_ops *iommu, bool coherent) { - if (!dev->dma_ops) - dev->dma_ops = &swiotlb_dma_ops; - dev->dma_coherent = coherent; __iommu_setup_dma_ops(dev, dma_base, size, iommu); diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 5fe6d2e40e9b..efb7b2cbead5 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -40,6 +40,7 @@ #include #include #include +#include #include #include #include @@ -132,6 +133,18 @@ static void mem_abort_decode(unsigned int esr) data_abort_decode(esr); } +static inline bool is_ttbr0_addr(unsigned long addr) +{ + /* entry assembly clears tags for TTBR0 addrs */ + return addr < TASK_SIZE; +} + +static inline bool is_ttbr1_addr(unsigned long addr) +{ + /* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */ + return arch_kasan_reset_tag(addr) >= VA_START; +} + /* * Dump out the page tables associated with 'addr' in the currently active mm. */ @@ -141,7 +154,7 @@ void show_pte(unsigned long addr) pgd_t *pgdp; pgd_t pgd; - if (addr < TASK_SIZE) { + if (is_ttbr0_addr(addr)) { /* TTBR0 */ mm = current->active_mm; if (mm == &init_mm) { @@ -149,7 +162,7 @@ void show_pte(unsigned long addr) addr); return; } - } else if (addr >= VA_START) { + } else if (is_ttbr1_addr(addr)) { /* TTBR1 */ mm = &init_mm; } else { @@ -254,7 +267,7 @@ static inline bool is_el1_permission_fault(unsigned long addr, unsigned int esr, if (fsc_type == ESR_ELx_FSC_PERM) return true; - if (addr < TASK_SIZE && system_uses_ttbr0_pan()) + if (is_ttbr0_addr(addr) && system_uses_ttbr0_pan()) return fsc_type == ESR_ELx_FSC_FAULT && (regs->pstate & PSR_PAN_BIT); @@ -319,7 +332,7 @@ static void set_thread_esr(unsigned long address, unsigned int esr) * type", so we ignore this wrinkle and just return the translation * fault.) 
*/ - if (current->thread.fault_address >= TASK_SIZE) { + if (!is_ttbr0_addr(current->thread.fault_address)) { switch (ESR_ELx_EC(esr)) { case ESR_ELx_EC_DABT_LOW: /* @@ -455,7 +468,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr, mm_flags |= FAULT_FLAG_WRITE; } - if (addr < TASK_SIZE && is_el1_permission_fault(addr, esr, regs)) { + if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) { /* regs->orig_addr_limit may be 0 if we entered from EL0 */ if (regs->orig_addr_limit == KERNEL_DS) die_kernel_fault("access to user memory with fs=KERNEL_DS", @@ -603,7 +616,7 @@ static int __kprobes do_translation_fault(unsigned long addr, unsigned int esr, struct pt_regs *regs) { - if (addr < TASK_SIZE) + if (is_ttbr0_addr(addr)) return do_page_fault(addr, esr, regs); do_bad_area(addr, esr, regs); @@ -758,7 +771,7 @@ asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr, * re-enabled IRQs. If the address is a kernel address, apply * BP hardening prior to enabling IRQs and pre-emption. */ - if (addr > TASK_SIZE) + if (!is_ttbr0_addr(addr)) arm64_apply_bp_hardening(); local_daif_restore(DAIF_PROCCTX); @@ -771,7 +784,7 @@ asmlinkage void __exception do_sp_pc_abort(unsigned long addr, struct pt_regs *regs) { if (user_mode(regs)) { - if (instruction_pointer(regs) > TASK_SIZE) + if (!is_ttbr0_addr(instruction_pointer(regs))) arm64_apply_bp_hardening(); local_daif_restore(DAIF_PROCCTX); } @@ -825,7 +838,7 @@ asmlinkage int __exception do_debug_exception(unsigned long addr, if (interrupts_enabled(regs)) trace_hardirqs_off(); - if (user_mode(regs) && instruction_pointer(regs) > TASK_SIZE) + if (user_mode(regs) && !is_ttbr0_addr(instruction_pointer(regs))) arm64_apply_bp_hardening(); if (!inf->fn(addr, esr, regs)) { diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index cbba537ba3d2..a8f2e4792ef9 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -63,24 +63,6 @@ EXPORT_SYMBOL(memstart_addr); phys_addr_t arm64_dma_phys_limit __ro_after_init; -#ifdef CONFIG_BLK_DEV_INITRD -static int __init early_initrd(char *p) -{ - unsigned long start, size; - char *endp; - - start = memparse(p, &endp); - if (*endp == ',') { - size = memparse(endp + 1, NULL); - - initrd_start = start; - initrd_end = start + size; - } - return 0; -} -early_param("initrd", early_initrd); -#endif - #ifdef CONFIG_KEXEC_CORE /* * reserve_crashkernel() - reserves memory for crash kernel @@ -417,14 +399,14 @@ void __init arm64_memblock_init(void) memblock_add(__pa_symbol(_text), (u64)(_end - _text)); } - if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) { + if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) { /* * Add back the memory we just removed if it results in the * initrd to become inaccessible via the linear mapping. * Otherwise, this is a no-op */ - u64 base = initrd_start & PAGE_MASK; - u64 size = PAGE_ALIGN(initrd_end) - base; + u64 base = phys_initrd_start & PAGE_MASK; + u64 size = PAGE_ALIGN(phys_initrd_size); /* * We can only add back the initrd memory if we don't end up @@ -468,15 +450,11 @@ void __init arm64_memblock_init(void) * pagetables with memblock. 
*/ memblock_reserve(__pa_symbol(_text), _end - _text); -#ifdef CONFIG_BLK_DEV_INITRD - if (initrd_start) { - memblock_reserve(initrd_start, initrd_end - initrd_start); - + if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) { /* the generic initrd code expects virtual addresses */ - initrd_start = __phys_to_virt(initrd_start); - initrd_end = __phys_to_virt(initrd_end); + initrd_start = __phys_to_virt(phys_initrd_start); + initrd_end = initrd_start + phys_initrd_size; } -#endif early_init_fdt_scan_reserved_mem(); diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c index 63527e585aac..4b55b15707a3 100644 --- a/arch/arm64/mm/kasan_init.c +++ b/arch/arm64/mm/kasan_init.c @@ -39,7 +39,15 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node) { void *p = memblock_alloc_try_nid(PAGE_SIZE, PAGE_SIZE, __pa(MAX_DMA_ADDRESS), - MEMBLOCK_ALLOC_ACCESSIBLE, node); + MEMBLOCK_ALLOC_KASAN, node); + return __pa(p); +} + +static phys_addr_t __init kasan_alloc_raw_page(int node) +{ + void *p = memblock_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE, + __pa(MAX_DMA_ADDRESS), + MEMBLOCK_ALLOC_KASAN, node); return __pa(p); } @@ -47,8 +55,9 @@ static pte_t *__init kasan_pte_offset(pmd_t *pmdp, unsigned long addr, int node, bool early) { if (pmd_none(READ_ONCE(*pmdp))) { - phys_addr_t pte_phys = early ? __pa_symbol(kasan_zero_pte) - : kasan_alloc_zeroed_page(node); + phys_addr_t pte_phys = early ? + __pa_symbol(kasan_early_shadow_pte) + : kasan_alloc_zeroed_page(node); __pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE); } @@ -60,8 +69,9 @@ static pmd_t *__init kasan_pmd_offset(pud_t *pudp, unsigned long addr, int node, bool early) { if (pud_none(READ_ONCE(*pudp))) { - phys_addr_t pmd_phys = early ? __pa_symbol(kasan_zero_pmd) - : kasan_alloc_zeroed_page(node); + phys_addr_t pmd_phys = early ? + __pa_symbol(kasan_early_shadow_pmd) + : kasan_alloc_zeroed_page(node); __pud_populate(pudp, pmd_phys, PMD_TYPE_TABLE); } @@ -72,8 +82,9 @@ static pud_t *__init kasan_pud_offset(pgd_t *pgdp, unsigned long addr, int node, bool early) { if (pgd_none(READ_ONCE(*pgdp))) { - phys_addr_t pud_phys = early ? __pa_symbol(kasan_zero_pud) - : kasan_alloc_zeroed_page(node); + phys_addr_t pud_phys = early ? + __pa_symbol(kasan_early_shadow_pud) + : kasan_alloc_zeroed_page(node); __pgd_populate(pgdp, pud_phys, PMD_TYPE_TABLE); } @@ -87,8 +98,11 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr, pte_t *ptep = kasan_pte_offset(pmdp, addr, node, early); do { - phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page) - : kasan_alloc_zeroed_page(node); + phys_addr_t page_phys = early ? 
+ __pa_symbol(kasan_early_shadow_page) + : kasan_alloc_raw_page(node); + if (!early) + memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE); next = addr + PAGE_SIZE; set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL)); } while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep))); @@ -205,14 +219,14 @@ void __init kasan_init(void) kasan_map_populate(kimg_shadow_start, kimg_shadow_end, early_pfn_to_nid(virt_to_pfn(lm_alias(_text)))); - kasan_populate_zero_shadow((void *)KASAN_SHADOW_START, - (void *)mod_shadow_start); - kasan_populate_zero_shadow((void *)kimg_shadow_end, - kasan_mem_to_shadow((void *)PAGE_OFFSET)); + kasan_populate_early_shadow((void *)KASAN_SHADOW_START, + (void *)mod_shadow_start); + kasan_populate_early_shadow((void *)kimg_shadow_end, + kasan_mem_to_shadow((void *)PAGE_OFFSET)); if (kimg_shadow_start > mod_shadow_end) - kasan_populate_zero_shadow((void *)mod_shadow_end, - (void *)kimg_shadow_start); + kasan_populate_early_shadow((void *)mod_shadow_end, + (void *)kimg_shadow_start); for_each_memblock(memory, reg) { void *start = (void *)__phys_to_virt(reg->base); @@ -227,16 +241,19 @@ void __init kasan_init(void) } /* - * KAsan may reuse the contents of kasan_zero_pte directly, so we - * should make sure that it maps the zero page read-only. + * KAsan may reuse the contents of kasan_early_shadow_pte directly, + * so we should make sure that it maps the zero page read-only. */ for (i = 0; i < PTRS_PER_PTE; i++) - set_pte(&kasan_zero_pte[i], - pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO)); + set_pte(&kasan_early_shadow_pte[i], + pfn_pte(sym_to_pfn(kasan_early_shadow_page), + PAGE_KERNEL_RO)); - memset(kasan_zero_page, 0, PAGE_SIZE); + memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE); cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); + kasan_init_tags(); + /* At this point kasan is fully initialized. 
Enable error messages */ init_task.kasan_depth = 0; pr_info("KernelAddressSanitizer initialized\n"); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index da513a1facf4..b6f5aa52ac67 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1003,10 +1003,8 @@ int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr) pmd = READ_ONCE(*pmdp); - if (!pmd_present(pmd)) - return 1; if (!pmd_table(pmd)) { - VM_WARN_ON(!pmd_table(pmd)); + VM_WARN_ON(1); return 1; } @@ -1026,10 +1024,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr) pud = READ_ONCE(*pudp); - if (!pud_present(pud)) - return 1; if (!pud_table(pud)) { - VM_WARN_ON(!pud_table(pud)); + VM_WARN_ON(1); return 1; } @@ -1047,6 +1043,11 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr) return 1; } +int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) +{ + return 0; /* Don't attempt a block mapping */ +} + #ifdef CONFIG_MEMORY_HOTPLUG int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, bool want_memblock) diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index e05b3ce1db6b..73886a5f1f30 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -47,6 +47,12 @@ /* PTWs cacheable, inner/outer WBWA */ #define TCR_CACHE_FLAGS TCR_IRGN_WBWA | TCR_ORGN_WBWA +#ifdef CONFIG_KASAN_SW_TAGS +#define TCR_KASAN_FLAGS TCR_TBI1 +#else +#define TCR_KASAN_FLAGS 0 +#endif + #define MAIR(attr, mt) ((attr) << ((mt) * 8)) /* @@ -449,7 +455,7 @@ ENTRY(__cpu_setup) */ ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \ TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ - TCR_TBI0 | TCR_A1 + TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS #ifdef CONFIG_ARM64_USER_VA_BITS_52 ldr_l x9, vabits_user diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 89198017e8e6..1542df00b23c 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -134,10 +134,9 @@ static inline void emit_a64_mov_i64(const int reg, const u64 val, } /* - * This is an unoptimized 64 immediate emission used for BPF to BPF call - * addresses. It will always do a full 64 bit decomposition as otherwise - * more complexity in the last extra pass is required since we previously - * reserved 4 instructions for the address. + * Kernel addresses in the vmalloc space use at most 48 bits, and the + * remaining bits are guaranteed to be 0x1. So we can compose the address + * with a fixed length movn/movk/movk sequence. */ static inline void emit_addr_mov_i64(const int reg, const u64 val, struct jit_ctx *ctx) @@ -145,8 +144,8 @@ static inline void emit_addr_mov_i64(const int reg, const u64 val, u64 tmp = val; int shift = 0; - emit(A64_MOVZ(1, reg, tmp & 0xffff, shift), ctx); - for (;shift < 48;) { + emit(A64_MOVN(1, reg, ~tmp & 0xffff, shift), ctx); + while (shift < 32) { tmp >>= 16; shift += 16; emit(A64_MOVK(1, reg, tmp & 0xffff, shift), ctx); @@ -634,11 +633,7 @@ emit_cond_jmp: &func_addr, &func_addr_fixed); if (ret < 0) return ret; - if (func_addr_fixed) - /* We can use optimized emission here. 
*/ - emit_a64_mov_i64(tmp, func_addr, ctx); - else - emit_addr_mov_i64(tmp, func_addr, ctx); + emit_addr_mov_i64(tmp, func_addr, ctx); emit(A64_BLR(tmp), ctx); emit(A64_MOV(1, r0, A64_R(0)), ctx); break; @@ -937,6 +932,7 @@ skip_init_ctx: prog->jited_len = image_size; if (!prog->is_func || extra_pass) { + bpf_prog_fill_jited_linfo(prog, ctx.offset); out_off: kfree(ctx.offset); kfree(jit_data); @@ -948,3 +944,16 @@ out: tmp : orig_prog); return prog; } + +void *bpf_jit_alloc_exec(unsigned long size) +{ + return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START, + BPF_JIT_REGION_END, GFP_KERNEL, + PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, + __builtin_return_address(0)); +} + +void bpf_jit_free_exec(void *addr) +{ + return vfree(addr); +} diff --git a/arch/c6x/Kconfig b/arch/c6x/Kconfig index 84420109113d..456e154674d1 100644 --- a/arch/c6x/Kconfig +++ b/arch/c6x/Kconfig @@ -9,7 +9,6 @@ config C6X select ARCH_HAS_SYNC_DMA_FOR_CPU select ARCH_HAS_SYNC_DMA_FOR_DEVICE select CLKDEV_LOOKUP - select DMA_DIRECT_OPS select GENERIC_ATOMIC64 select GENERIC_IRQ_SHOW select HAVE_ARCH_TRACEHOOK diff --git a/arch/c6x/mm/dma-coherent.c b/arch/c6x/mm/dma-coherent.c index 01305c787201..75b79571732c 100644 --- a/arch/c6x/mm/dma-coherent.c +++ b/arch/c6x/mm/dma-coherent.c @@ -78,6 +78,7 @@ static void __free_dma_pages(u32 addr, int order) void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp, unsigned long attrs) { + void *ret; u32 paddr; int order; @@ -94,7 +95,9 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, if (!paddr) return NULL; - return phys_to_virt(paddr); + ret = phys_to_virt(paddr); + memset(ret, 0, 1 << order); + return ret; } /* diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig index cb64f8dacd08..37bed8aadf95 100644 --- a/arch/csky/Kconfig +++ b/arch/csky/Kconfig @@ -7,8 +7,7 @@ config CSKY select COMMON_CLK select CLKSRC_MMIO select CLKSRC_OF - select DMA_DIRECT_OPS - select DMA_NONCOHERENT_OPS + select DMA_DIRECT_REMAP select IRQ_DOMAIN select HANDLE_DOMAIN_IRQ select DW_APB_TIMER_OF diff --git a/arch/csky/mm/dma-mapping.c b/arch/csky/mm/dma-mapping.c index 85437b21e045..80783bb71c5c 100644 --- a/arch/csky/mm/dma-mapping.c +++ b/arch/csky/mm/dma-mapping.c @@ -14,73 +14,13 @@ #include #include -static struct gen_pool *atomic_pool; -static size_t atomic_pool_size __initdata = SZ_256K; - -static int __init early_coherent_pool(char *p) -{ - atomic_pool_size = memparse(p, &p); - return 0; -} -early_param("coherent_pool", early_coherent_pool); - static int __init atomic_pool_init(void) { - struct page *page; - size_t size = atomic_pool_size; - void *ptr; - int ret; - - atomic_pool = gen_pool_create(PAGE_SHIFT, -1); - if (!atomic_pool) - BUG(); - - page = alloc_pages(GFP_KERNEL | GFP_DMA, get_order(size)); - if (!page) - BUG(); - - ptr = dma_common_contiguous_remap(page, size, VM_ALLOC, - pgprot_noncached(PAGE_KERNEL), - __builtin_return_address(0)); - if (!ptr) - BUG(); - - ret = gen_pool_add_virt(atomic_pool, (unsigned long)ptr, - page_to_phys(page), atomic_pool_size, -1); - if (ret) - BUG(); - - gen_pool_set_algo(atomic_pool, gen_pool_first_fit_order_align, NULL); - - pr_info("DMA: preallocated %zu KiB pool for atomic coherent pool\n", - atomic_pool_size / 1024); - - pr_info("DMA: vaddr: 0x%x phy: 0x%lx,\n", (unsigned int)ptr, - page_to_phys(page)); - - return 0; + return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL)); } postcore_initcall(atomic_pool_init); -static void *csky_dma_alloc_atomic(struct device *dev, size_t 
size, - dma_addr_t *dma_handle) -{ - unsigned long addr; - - addr = gen_pool_alloc(atomic_pool, size); - if (addr) - *dma_handle = gen_pool_virt_to_phys(atomic_pool, addr); - - return (void *)addr; -} - -static void csky_dma_free_atomic(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - gen_pool_free(atomic_pool, (unsigned long)vaddr, size); -} - -static void __dma_clear_buffer(struct page *page, size_t size) +void arch_dma_prep_coherent(struct page *page, size_t size) { if (PageHighMem(page)) { unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT; @@ -107,84 +47,6 @@ static void __dma_clear_buffer(struct page *page, size_t size) } } -static void *csky_dma_alloc_nonatomic(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t gfp, - unsigned long attrs) -{ - void *vaddr; - struct page *page; - unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT; - - if (DMA_ATTR_NON_CONSISTENT & attrs) { - pr_err("csky %s can't support DMA_ATTR_NON_CONSISTENT.\n", __func__); - return NULL; - } - - if (IS_ENABLED(CONFIG_DMA_CMA)) - page = dma_alloc_from_contiguous(dev, count, get_order(size), - gfp); - else - page = alloc_pages(gfp, get_order(size)); - - if (!page) { - pr_err("csky %s no more free pages.\n", __func__); - return NULL; - } - - *dma_handle = page_to_phys(page); - - __dma_clear_buffer(page, size); - - if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) - return page; - - vaddr = dma_common_contiguous_remap(page, PAGE_ALIGN(size), VM_USERMAP, - pgprot_noncached(PAGE_KERNEL), __builtin_return_address(0)); - if (!vaddr) - BUG(); - - return vaddr; -} - -static void csky_dma_free_nonatomic( - struct device *dev, - size_t size, - void *vaddr, - dma_addr_t dma_handle, - unsigned long attrs - ) -{ - struct page *page = phys_to_page(dma_handle); - unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT; - - if ((unsigned int)vaddr >= VMALLOC_START) - dma_common_free_remap(vaddr, size, VM_USERMAP); - - if (IS_ENABLED(CONFIG_DMA_CMA)) - dma_release_from_contiguous(dev, page, count); - else - __free_pages(page, get_order(size)); -} - -void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, - gfp_t gfp, unsigned long attrs) -{ - if (gfpflags_allow_blocking(gfp)) - return csky_dma_alloc_nonatomic(dev, size, dma_handle, gfp, - attrs); - else - return csky_dma_alloc_atomic(dev, size, dma_handle); -} - -void arch_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - if (!addr_in_gen_pool(atomic_pool, (unsigned int) vaddr, size)) - csky_dma_free_nonatomic(dev, size, vaddr, dma_handle, attrs); - else - csky_dma_free_atomic(dev, size, vaddr, dma_handle, attrs); -} - static inline void cache_op(phys_addr_t paddr, size_t size, void (*fn)(unsigned long start, unsigned long end)) { diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c index dc07c078f9b8..66e597053488 100644 --- a/arch/csky/mm/init.c +++ b/arch/csky/mm/init.c @@ -71,7 +71,7 @@ void free_initrd_mem(unsigned long start, unsigned long end) ClearPageReserved(virt_to_page(start)); init_page_count(virt_to_page(start)); free_page(start); - totalram_pages++; + totalram_pages_inc(); } } #endif @@ -88,7 +88,7 @@ void free_initmem(void) ClearPageReserved(virt_to_page(addr)); init_page_count(virt_to_page(addr)); free_page(addr); - totalram_pages++; + totalram_pages_inc(); addr += PAGE_SIZE; } diff --git a/arch/h8300/Kconfig b/arch/h8300/Kconfig index d19c6b16cd5d..6472a0685470 100644 --- a/arch/h8300/Kconfig +++ b/arch/h8300/Kconfig @@ -22,7 +22,6 @@ 
config H8300 select HAVE_ARCH_KGDB select HAVE_ARCH_HASH select CPU_NO_EFFICIENT_FFS - select DMA_DIRECT_OPS config CPU_BIG_ENDIAN def_bool y diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig index 2b688af379e6..fb2fbfcfc532 100644 --- a/arch/hexagon/Kconfig +++ b/arch/hexagon/Kconfig @@ -31,7 +31,6 @@ config HEXAGON select GENERIC_CLOCKEVENTS_BROADCAST select MODULES_USE_ELF_RELA select GENERIC_CPU_DEVICES - select DMA_DIRECT_OPS ---help--- Qualcomm Hexagon is a processor architecture designed for high performance and low power across a wide variety of applications. @@ -47,9 +46,6 @@ config FRAME_POINTER config LOCKDEP_SUPPORT def_bool y -config PCI - def_bool n - config EARLY_PRINTK def_bool y diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig index 36773def6920..ccd56f5df8cd 100644 --- a/arch/ia64/Kconfig +++ b/arch/ia64/Kconfig @@ -10,11 +10,13 @@ config IA64 bool select ARCH_MIGHT_HAVE_PC_PARPORT select ARCH_MIGHT_HAVE_PC_SERIO - select PCI if (!IA64_HP_SIM) select ACPI if (!IA64_HP_SIM) select ARCH_SUPPORTS_ACPI if (!IA64_HP_SIM) select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI + select FORCE_PCI if (!IA64_HP_SIM) + select PCI_DOMAINS if PCI + select PCI_SYSCALL if PCI select HAVE_UNSTABLE_SCHED_CLOCK select HAVE_EXIT_THREAD select HAVE_IDE @@ -28,8 +30,8 @@ config IA64 select HAVE_ARCH_TRACEHOOK select HAVE_MEMBLOCK_NODE_MAP select HAVE_VIRT_CPU_ACCOUNTING - select ARCH_HAS_DMA_MARK_CLEAN - select ARCH_HAS_SG_CHAIN + select ARCH_HAS_DMA_COHERENT_TO_PFN if SWIOTLB + select ARCH_HAS_SYNC_DMA_FOR_CPU select VIRT_TO_BUS select ARCH_DISCARD_MEMBLOCK select GENERIC_IRQ_PROBE @@ -261,7 +263,7 @@ config HZ endif if !IA64_HP_SIM -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" endif config IA64_BRL_EMU @@ -540,30 +542,6 @@ endif endmenu -if !IA64_HP_SIM - -menu "Bus options (PCI, PCMCIA)" - -config PCI - bool "PCI support" - help - Real IA-64 machines all have PCI/PCI-X/PCI Express busses. Say Y - here unless you are using a simulator without PCI support. - -config PCI_DOMAINS - def_bool PCI - -config PCI_SYSCALL - def_bool PCI - -source "drivers/pci/Kconfig" - -source "drivers/pcmcia/Kconfig" - -endmenu - -endif - source "arch/ia64/hp/sim/Kconfig" config MSPEC diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c index 58969039bed2..8840ed97712f 100644 --- a/arch/ia64/hp/common/hwsw_iommu.c +++ b/arch/ia64/hp/common/hwsw_iommu.c @@ -38,7 +38,7 @@ static inline int use_swiotlb(struct device *dev) const struct dma_map_ops *hwsw_dma_get_ops(struct device *dev) { if (use_swiotlb(dev)) - return &swiotlb_dma_ops; + return NULL; return &sba_dma_ops; } EXPORT_SYMBOL(hwsw_dma_get_ops); diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c index e8a93b07283e..5a361e51cb1e 100644 --- a/arch/ia64/hp/common/sba_iommu.c +++ b/arch/ia64/hp/common/sba_iommu.c @@ -907,11 +907,12 @@ sba_mark_invalid(struct ioc *ioc, dma_addr_t iova, size_t byte_cnt) } /** - * sba_map_single_attrs - map one buffer and return IOVA for DMA + * sba_map_page - map one buffer and return IOVA for DMA * @dev: instance of PCI owned by the driver that's asking. - * @addr: driver buffer to map. - * @size: number of bytes to map in driver buffer. - * @dir: R/W or both. 
+ * @page: page to map + * @poff: offset into page + * @size: number of bytes to map + * @dir: dma direction * @attrs: optional dma attributes * * See Documentation/DMA-API-HOWTO.txt @@ -944,7 +945,7 @@ static dma_addr_t sba_map_page(struct device *dev, struct page *page, ** Device is bit capable of DMA'ing to the buffer... ** just return the PCI address of ptr */ - DBG_BYPASS("sba_map_single_attrs() bypass mask/addr: " + DBG_BYPASS("sba_map_page() bypass mask/addr: " "0x%lx/0x%lx\n", to_pci_dev(dev)->dma_mask, pci_addr); return pci_addr; @@ -966,14 +967,14 @@ static dma_addr_t sba_map_page(struct device *dev, struct page *page, #ifdef ASSERT_PDIR_SANITY spin_lock_irqsave(&ioc->res_lock, flags); - if (sba_check_pdir(ioc,"Check before sba_map_single_attrs()")) + if (sba_check_pdir(ioc,"Check before sba_map_page()")) panic("Sanity check failed"); spin_unlock_irqrestore(&ioc->res_lock, flags); #endif pide = sba_alloc_range(ioc, dev, size); if (pide < 0) - return 0; + return DMA_MAPPING_ERROR; iovp = (dma_addr_t) pide << iovp_shift; @@ -997,20 +998,12 @@ static dma_addr_t sba_map_page(struct device *dev, struct page *page, /* form complete address */ #ifdef ASSERT_PDIR_SANITY spin_lock_irqsave(&ioc->res_lock, flags); - sba_check_pdir(ioc,"Check after sba_map_single_attrs()"); + sba_check_pdir(ioc,"Check after sba_map_page()"); spin_unlock_irqrestore(&ioc->res_lock, flags); #endif return SBA_IOVA(ioc, iovp, offset); } -static dma_addr_t sba_map_single_attrs(struct device *dev, void *addr, - size_t size, enum dma_data_direction dir, - unsigned long attrs) -{ - return sba_map_page(dev, virt_to_page(addr), - (unsigned long)addr & ~PAGE_MASK, size, dir, attrs); -} - #ifdef ENABLE_MARK_CLEAN static SBA_INLINE void sba_mark_clean(struct ioc *ioc, dma_addr_t iova, size_t size) @@ -1036,7 +1029,7 @@ sba_mark_clean(struct ioc *ioc, dma_addr_t iova, size_t size) #endif /** - * sba_unmap_single_attrs - unmap one IOVA and free resources + * sba_unmap_page - unmap one IOVA and free resources * @dev: instance of PCI owned by the driver that's asking. * @iova: IOVA of driver buffer previously mapped. * @size: number of bytes mapped in driver buffer. @@ -1063,7 +1056,7 @@ static void sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size, /* ** Address does not fall w/in IOVA, must be bypassing */ - DBG_BYPASS("sba_unmap_single_attrs() bypass addr: 0x%lx\n", + DBG_BYPASS("sba_unmap_page() bypass addr: 0x%lx\n", iova); #ifdef ENABLE_MARK_CLEAN @@ -1114,12 +1107,6 @@ static void sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size, #endif /* DELAYED_RESOURCE_CNT == 0 */ } -void sba_unmap_single_attrs(struct device *dev, dma_addr_t iova, size_t size, - enum dma_data_direction dir, unsigned long attrs) -{ - sba_unmap_page(dev, iova, size, dir, attrs); -} - /** * sba_alloc_coherent - allocate/map shared mem for DMA * @dev: instance of PCI owned by the driver that's asking. 
@@ -1132,30 +1119,24 @@ static void * sba_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs) { + struct page *page; struct ioc *ioc; + int node = -1; void *addr; ioc = GET_IOC(dev); ASSERT(ioc); - #ifdef CONFIG_NUMA - { - struct page *page; - - page = alloc_pages_node(ioc->node, flags, get_order(size)); - if (unlikely(!page)) - return NULL; - - addr = page_address(page); - } -#else - addr = (void *) __get_free_pages(flags, get_order(size)); + node = ioc->node; #endif - if (unlikely(!addr)) + + page = alloc_pages_node(node, flags, get_order(size)); + if (unlikely(!page)) return NULL; + addr = page_address(page); memset(addr, 0, size); - *dma_handle = virt_to_phys(addr); + *dma_handle = page_to_phys(page); #ifdef ALLOW_IOV_BYPASS ASSERT(dev->coherent_dma_mask); @@ -1174,9 +1155,10 @@ sba_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, * If device can't bypass or bypass is disabled, pass the 32bit fake * device to map single to get an iova mapping. */ - *dma_handle = sba_map_single_attrs(&ioc->sac_only_dev->dev, addr, - size, 0, 0); - + *dma_handle = sba_map_page(&ioc->sac_only_dev->dev, page, 0, size, + DMA_BIDIRECTIONAL, 0); + if (dma_mapping_error(dev, *dma_handle)) + return NULL; return addr; } @@ -1193,7 +1175,7 @@ sba_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, static void sba_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, unsigned long attrs) { - sba_unmap_single_attrs(dev, dma_handle, size, 0, 0); + sba_unmap_page(dev, dma_handle, size, 0, 0); free_pages((unsigned long) vaddr, get_order(size)); } @@ -1483,7 +1465,10 @@ static int sba_map_sg_attrs(struct device *dev, struct scatterlist *sglist, /* Fast path single entry scatterlists. */ if (nents == 1) { sglist->dma_length = sglist->length; - sglist->dma_address = sba_map_single_attrs(dev, sba_sg_address(sglist), sglist->length, dir, attrs); + sglist->dma_address = sba_map_page(dev, sg_page(sglist), + sglist->offset, sglist->length, dir, attrs); + if (dma_mapping_error(dev, sglist->dma_address)) + return 0; return 1; } @@ -1572,8 +1557,8 @@ static void sba_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist, while (nents && sglist->dma_length) { - sba_unmap_single_attrs(dev, sglist->dma_address, - sglist->dma_length, dir, attrs); + sba_unmap_page(dev, sglist->dma_address, sglist->dma_length, + dir, attrs); sglist = sg_next(sglist); nents--; } @@ -2080,8 +2065,6 @@ static int __init acpi_sba_ioc_init_acpi(void) /* This has to run before acpi_scan_init(). */ arch_initcall(acpi_sba_ioc_init_acpi); -extern const struct dma_map_ops swiotlb_dma_ops; - static int __init sba_init(void) { @@ -2095,7 +2078,7 @@ sba_init(void) * a successful kdump kernel boot is to use the swiotlb. */ if (is_kdump_kernel()) { - dma_ops = &swiotlb_dma_ops; + dma_ops = NULL; if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0) panic("Unable to initialize software I/O TLB:" " Try machvec=dig boot option"); @@ -2117,7 +2100,7 @@ sba_init(void) * If we didn't find something sba_iommu can claim, we * need to setup the swiotlb and switch to the dig machvec. 
*/ - dma_ops = &swiotlb_dma_ops; + dma_ops = NULL; if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0) panic("Unable to find SBA IOMMU or initialize " "software I/O TLB: Try machvec=dig boot option"); @@ -2170,11 +2153,6 @@ static int sba_dma_supported (struct device *dev, u64 mask) return ((mask & 0xFFFFFFFFUL) == 0xFFFFFFFFUL); } -static int sba_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return 0; -} - __setup("nosbagart", nosbagart); static int __init @@ -2208,7 +2186,6 @@ const struct dma_map_ops sba_dma_ops = { .map_sg = sba_map_sg_attrs, .unmap_sg = sba_unmap_sg_attrs, .dma_supported = sba_dma_supported, - .mapping_error = sba_dma_mapping_error, }; void sba_dma_init(void) diff --git a/arch/ia64/hp/sim/simscsi.c b/arch/ia64/hp/sim/simscsi.c index 7e1426e76d96..f86844fc0725 100644 --- a/arch/ia64/hp/sim/simscsi.c +++ b/arch/ia64/hp/sim/simscsi.c @@ -347,7 +347,7 @@ static struct scsi_host_template driver_template = { .sg_tablesize = SG_ALL, .max_sectors = 1024, .cmd_per_lun = SIMSCSI_REQ_QUEUE_LEN, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, }; static int __init diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile index d0c0ccdd656a..4ba05140b249 100644 --- a/arch/ia64/kernel/Makefile +++ b/arch/ia64/kernel/Makefile @@ -50,10 +50,7 @@ CFLAGS_traps.o += -mfixed-range=f2-f5,f16-f31 # The gate DSO image is built using a special linker script. include $(src)/Makefile.gate -# We use internal kbuild rules to avoid the "is up to date" message from make -arch/$(SRCARCH)/kernel/nr-irqs.s: arch/$(SRCARCH)/kernel/nr-irqs.c - $(Q)mkdir -p $(dir $@) - $(call if_changed_dep,cc_s_c) - include/generated/nr-irqs.h: arch/$(SRCARCH)/kernel/nr-irqs.s FORCE $(call filechk,offsets,__ASM_NR_IRQS_H__) + +targets += nr-irqs.s diff --git a/arch/ia64/kernel/dma-mapping.c b/arch/ia64/kernel/dma-mapping.c index 7a471d8d67d4..ad7d9963de34 100644 --- a/arch/ia64/kernel/dma-mapping.c +++ b/arch/ia64/kernel/dma-mapping.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -#include +#include #include #include @@ -16,9 +16,26 @@ const struct dma_map_ops *dma_get_ops(struct device *dev) EXPORT_SYMBOL(dma_get_ops); #ifdef CONFIG_SWIOTLB +void *arch_dma_alloc(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) +{ + return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs); +} + +void arch_dma_free(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t dma_addr, unsigned long attrs) +{ + dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs); +} + +long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr, + dma_addr_t dma_addr) +{ + return page_to_pfn(virt_to_page(cpu_addr)); +} + void __init swiotlb_dma_init(void) { - dma_ops = &swiotlb_dma_ops; swiotlb_init(1); } #endif diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c index d5e12ff1d73c..055382622f07 100644 --- a/arch/ia64/mm/init.c +++ b/arch/ia64/mm/init.c @@ -8,6 +8,7 @@ #include #include +#include #include #include #include @@ -71,18 +72,14 @@ __ia64_sync_icache_dcache (pte_t pte) * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to * flush them when they get mapped into an executable vm-area. 
*/ -void -dma_mark_clean(void *addr, size_t size) +void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr, + size_t size, enum dma_data_direction dir) { - unsigned long pg_addr, end; - - pg_addr = PAGE_ALIGN((unsigned long) addr); - end = (unsigned long) addr + size; - while (pg_addr + PAGE_SIZE <= end) { - struct page *page = virt_to_page(pg_addr); - set_bit(PG_arch_1, &page->flags); - pg_addr += PAGE_SIZE; - } + unsigned long pfn = PHYS_PFN(paddr); + + do { + set_bit(PG_arch_1, &pfn_to_page(pfn)->flags); + } while (++pfn <= PHYS_PFN(paddr + size - 1)); } inline void @@ -661,7 +658,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, } #ifdef CONFIG_MEMORY_HOTREMOVE -int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) +int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; diff --git a/arch/ia64/sn/pci/pci_dma.c b/arch/ia64/sn/pci/pci_dma.c index 4ce4ee4ef9f2..b7d42e4edc1f 100644 --- a/arch/ia64/sn/pci/pci_dma.c +++ b/arch/ia64/sn/pci/pci_dma.c @@ -196,7 +196,7 @@ static dma_addr_t sn_dma_map_page(struct device *dev, struct page *page, if (!dma_addr) { printk(KERN_ERR "%s: out of ATEs\n", __func__); - return 0; + return DMA_MAPPING_ERROR; } return dma_addr; } @@ -314,11 +314,6 @@ static int sn_dma_map_sg(struct device *dev, struct scatterlist *sgl, return nhwentries; } -static int sn_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return 0; -} - static u64 sn_dma_get_required_mask(struct device *dev) { return DMA_BIT_MASK(64); @@ -441,7 +436,6 @@ static struct dma_map_ops sn_dma_ops = { .unmap_page = sn_dma_unmap_page, .map_sg = sn_dma_map_sg, .unmap_sg = sn_dma_unmap_sg, - .mapping_error = sn_dma_mapping_error, .dma_supported = sn_dma_supported, .get_required_mask = sn_dma_get_required_mask, }; diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig index 1bc9f1ba759a..e173ea2ff395 100644 --- a/arch/m68k/Kconfig +++ b/arch/m68k/Kconfig @@ -26,7 +26,6 @@ config M68K select MODULES_USE_ELF_RELA select OLD_SIGSUSPEND3 select OLD_SIGACTION - select DMA_DIRECT_OPS if HAS_DMA select ARCH_DISCARD_MEMBLOCK config CPU_BIG_ENDIAN @@ -123,11 +122,11 @@ config BOOTINFO_PROC menu "Platform setup" -source arch/m68k/Kconfig.cpu +source "arch/m68k/Kconfig.cpu" -source arch/m68k/Kconfig.machine +source "arch/m68k/Kconfig.machine" -source arch/m68k/Kconfig.bus +source "arch/m68k/Kconfig.bus" endmenu diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus index aef698fa50e5..9d0a3a23d50e 100644 --- a/arch/m68k/Kconfig.bus +++ b/arch/m68k/Kconfig.bus @@ -63,22 +63,9 @@ source "drivers/zorro/Kconfig" endif -config PCI - bool "PCI support" - depends on M54xx - help - Enable the PCI bus. Support for the PCI bus hardware built into the - ColdFire 547x and 548x processors. 
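# Editor's note, not part of the patch: hunks like this one follow the
# 4.21 Kconfig rework that deleted the per-architecture "config PCI"
# entries in favour of a single definition in drivers/pci/Kconfig. An
# architecture now merely advertises the capability (HAVE_PCI) or
# forces it on (FORCE_PCI). Simplified sketch of that pattern, not a
# verbatim copy of drivers/pci/Kconfig:
#
#	config HAVE_PCI
#		bool
#
#	config FORCE_PCI
#		bool
#		select HAVE_PCI
#		select PCI
#
#	config PCI
#		bool "PCI support"
#		depends on HAVE_PCI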
- -if PCI -source "drivers/pci/Kconfig" -endif - if !MMU config ISA_DMA_API def_bool !M5272 -source "drivers/pcmcia/Kconfig" - endif diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu index 21f00349af52..60ac1cd8b96f 100644 --- a/arch/m68k/Kconfig.cpu +++ b/arch/m68k/Kconfig.cpu @@ -299,6 +299,7 @@ config M53xx bool config M54xx + select HAVE_PCI bool endif # COLDFIRE diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c index e99993c57d6b..b4aa853051bd 100644 --- a/arch/m68k/kernel/dma.c +++ b/arch/m68k/kernel/dma.c @@ -32,7 +32,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, size = PAGE_ALIGN(size); order = get_order(size); - page = alloc_pages(flag, order); + page = alloc_pages(flag | __GFP_ZERO, order); if (!page) return NULL; diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig index effed2efd306..58aff2653d86 100644 --- a/arch/microblaze/Kconfig +++ b/arch/microblaze/Kconfig @@ -12,7 +12,6 @@ config MICROBLAZE select TIMER_OF select CLONE_BACKWARDS3 select COMMON_CLK - select DMA_DIRECT_OPS select GENERIC_ATOMIC64 select GENERIC_CLOCKEVENTS select GENERIC_CPU_DEVICES @@ -30,11 +29,14 @@ config MICROBLAZE select HAVE_FUNCTION_TRACER select HAVE_MEMBLOCK_NODE_MAP select HAVE_OPROFILE + select HAVE_PCI select IRQ_DOMAIN select XILINX_INTC select MODULES_USE_ELF_RELA select OF select OF_EARLY_FLATTREE + select PCI_DOMAINS_GENERIC if PCI + select PCI_SYSCALL if PCI select TRACING_SUPPORT select VIRT_TO_BUS select CPU_NO_EFFICIENT_FFS @@ -266,22 +268,8 @@ endmenu menu "Bus Options" -config PCI - bool "PCI support" - -config PCI_DOMAINS - def_bool PCI - -config PCI_DOMAINS_GENERIC - def_bool PCI_DOMAINS - -config PCI_SYSCALL - def_bool PCI - config PCI_XILINX bool "Xilinx PCI host bridge support" depends on PCI -source "drivers/pci/Kconfig" - endmenu diff --git a/arch/microblaze/Kconfig.platform b/arch/microblaze/Kconfig.platform index f7f1739c11b9..7361974417dc 100644 --- a/arch/microblaze/Kconfig.platform +++ b/arch/microblaze/Kconfig.platform @@ -65,6 +65,6 @@ config XILINX_MICROBLAZE0_USE_FPU config XILINX_MICROBLAZE0_HW_VER string "Core version number" - default 7.10.d + default "7.10.d" endmenu diff --git a/arch/microblaze/mm/consistent.c b/arch/microblaze/mm/consistent.c index 45e0a1aa9357..3002cbca3059 100644 --- a/arch/microblaze/mm/consistent.c +++ b/arch/microblaze/mm/consistent.c @@ -81,7 +81,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, size = PAGE_ALIGN(size); order = get_order(size); - vaddr = __get_free_pages(gfp, order); + vaddr = __get_free_pages(gfp | __GFP_ZERO, order); if (!vaddr) return NULL; diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index e49b5a0c8585..787290781b8c 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -18,7 +18,6 @@ config MIPS select CLONE_BACKWARDS select CPU_NO_EFFICIENT_FFS if (TARGET_ISA_REV < 1) select CPU_PM if CPU_IDLE - select DMA_DIRECT_OPS select GENERIC_ATOMIC64 if !64BIT select GENERIC_CLOCKEVENTS select GENERIC_CMOS_UPDATE @@ -26,6 +25,7 @@ config MIPS select GENERIC_IOMAP select GENERIC_IRQ_PROBE select GENERIC_IRQ_SHOW + select GENERIC_ISA_DMA if EISA select GENERIC_LIB_ASHLDI3 select GENERIC_LIB_ASHRDI3 select GENERIC_LIB_CMPDI2 @@ -75,6 +75,7 @@ config MIPS select HAVE_SYSCALL_TRACEPOINTS select HAVE_VIRT_CPU_ACCOUNTING_GEN if 64BIT || !SMP select IRQ_FORCED_THREADING + select ISA if EISA select MODULES_USE_ELF_RELA if MODULES && 64BIT select MODULES_USE_ELF_REL if MODULES select PERF_USE_VMALLOC @@ -99,7 +100,7 @@ config 
MIPS_GENERIC select CPU_MIPSR2_IRQ_EI select CSRC_R4K select DMA_PERDEV_COHERENT - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select LIBFDT select MIPS_AUTO_PFN_OFFSET @@ -260,7 +261,7 @@ config BCM47XX select CEVT_R4K select CSRC_R4K select DMA_NONCOHERENT - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select SYS_HAS_CPU_MIPS32_R1 select NO_EXCEPT_FILL @@ -303,13 +304,12 @@ config MIPS_COBALT select CSRC_R4K select CEVT_GT641XX select DMA_NONCOHERENT - select HW_HAS_PCI + select FORCE_PCI select I8253 select I8259 select IRQ_MIPS_CPU select IRQ_GT641XX select PCI_GT64XXX_PCI0 - select PCI select SYS_HAS_CPU_NEVADA select SYS_HAS_EARLY_PRINTK select SYS_SUPPORTS_32BIT_KERNEL @@ -426,7 +426,7 @@ config LASAT select CSRC_R4K select DMA_NONCOHERENT select SYS_HAS_EARLY_PRINTK - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select PCI_GT64XXX_PCI0 select MIPS_NILE4 @@ -504,7 +504,7 @@ config MIPS_MALTA select DMA_MAYBE_COHERENT select GENERIC_ISA_DMA select HAVE_PCSPKR_PLATFORM - select HW_HAS_PCI + select HAVE_PCI select I8253 select I8259 select IRQ_MIPS_CPU @@ -558,7 +558,7 @@ config MACH_PIC32 config NEC_MARKEINS bool "NEC EMMA2RH Mark-eins board" select SOC_EMMA2RH - select HW_HAS_PCI + select HAVE_PCI help This enables support for the NEC Electronics Mark-eins boards. @@ -635,7 +635,7 @@ config SGI_IP22 select CSRC_R4K select DEFAULT_SGI_PARTITION select DMA_NONCOHERENT - select HW_HAS_EISA + select HAVE_EISA select I8253 select I8259 select IP22_CPU_SCACHE @@ -675,7 +675,7 @@ config SGI_IP27 select BOOT_ELF64 select DEFAULT_SGI_PARTITION select SYS_HAS_EARLY_PRINTK - select HW_HAS_PCI + select HAVE_PCI select NR_CPUS_DEFAULT_64 select SYS_HAS_CPU_R10000 select SYS_SUPPORTS_64BIT_KERNEL @@ -700,7 +700,7 @@ config SGI_IP28 select DMA_NONCOHERENT select GENERIC_ISA_DMA_SUPPORT_BROKEN select IRQ_MIPS_CPU - select HW_HAS_EISA + select HAVE_EISA select I8253 select I8259 select SGI_HAS_I8042 @@ -735,7 +735,7 @@ config SGI_IP32 select CEVT_R4K select CSRC_R4K select DMA_NONCOHERENT - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select R5000_CPU_SCACHE select RM7000_CPU_SCACHE @@ -847,9 +847,9 @@ config SNI_RM select DEFAULT_SGI_PARTITION if CPU_BIG_ENDIAN select DMA_NONCOHERENT select GENERIC_ISA_DMA + select HAVE_EISA select HAVE_PCSPKR_PLATFORM - select HW_HAS_EISA - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select I8253 select I8259 @@ -882,7 +882,7 @@ config MIKROTIK_RB532 select CEVT_R4K select CSRC_R4K select DMA_NONCOHERENT - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select SYS_HAS_CPU_MIPS32_R1 select SYS_SUPPORTS_32BIT_KERNEL @@ -899,7 +899,7 @@ config CAVIUM_OCTEON_SOC bool "Cavium Networks Octeon SoC based boards" select CEVT_R4K select ARCH_HAS_PHYS_TO_DMA - select HAS_RAPIDIO + select HAVE_RAPIDIO select PHYS_ADDR_T_64BIT select SYS_SUPPORTS_64BIT_KERNEL select SYS_SUPPORTS_BIG_ENDIAN @@ -909,7 +909,7 @@ config CAVIUM_OCTEON_SOC select SYS_SUPPORTS_HOTPLUG_CPU if CPU_BIG_ENDIAN select SYS_HAS_EARLY_PRINTK select SYS_HAS_CPU_CAVIUM_OCTEON - select HW_HAS_PCI + select HAVE_PCI select ZONE_DMA32 select HOLES_IN_ZONE select GPIOLIB @@ -942,7 +942,7 @@ config NLM_XLR_BOARD select NLM_COMMON select SYS_HAS_CPU_XLR select SYS_SUPPORTS_SMP - select HW_HAS_PCI + select HAVE_PCI select SWAP_IO_SPACE select SYS_SUPPORTS_32BIT_KERNEL select SYS_SUPPORTS_64BIT_KERNEL @@ -968,7 +968,7 @@ config NLM_XLP_BOARD select NLM_COMMON select SYS_HAS_CPU_XLP select SYS_SUPPORTS_SMP - select HW_HAS_PCI + select HAVE_PCI select 
SYS_SUPPORTS_32BIT_KERNEL select SYS_SUPPORTS_64BIT_KERNEL select PHYS_ADDR_T_64BIT @@ -1003,7 +1003,7 @@ config MIPS_PARAVIRT select SYS_HAS_CPU_MIPS32_R2 select SYS_HAS_CPU_MIPS64_R2 select SYS_HAS_CPU_CAVIUM_OCTEON - select HW_HAS_PCI + select HAVE_PCI select SWAP_IO_SPACE help This option supports guest running under ???? @@ -3064,47 +3064,14 @@ config MIPS_AUTO_PFN_OFFSET menu "Bus options (PCI, PCMCIA, EISA, ISA, TC)" -config HW_HAS_EISA - bool -config HW_HAS_PCI - bool - -config PCI - bool "Support for PCI controller" - depends on HW_HAS_PCI - select PCI_DOMAINS - help - Find out whether you have a PCI motherboard. PCI is the name of a - bus system, i.e. the way the CPU talks to the other stuff inside - your box. Other bus systems are ISA, EISA, or VESA. If you have PCI, - say Y, otherwise N. - -config HT_PCI - bool "Support for HT-linked PCI" - default y - depends on CPU_LOONGSON3 - select PCI - select PCI_DOMAINS - help - Loongson family machines use Hyper-Transport bus for inter-core - connection and device connection. The PCI bus is a subordinate - linked at HT. Choose Y for Loongson-3 based machines. - -config PCI_DOMAINS - bool - -config PCI_DOMAINS_GENERIC - bool - config PCI_DRIVERS_GENERIC - select PCI_DOMAINS_GENERIC if PCI_DOMAINS + select PCI_DOMAINS_GENERIC if PCI bool config PCI_DRIVERS_LEGACY def_bool !PCI_DRIVERS_GENERIC select NO_GENERIC_PCI_IOPORT_MAP - -source "drivers/pci/Kconfig" + select PCI_DOMAINS if PCI # # ISA support is now enabled via select. Too many systems still have the one @@ -3114,26 +3081,6 @@ source "drivers/pci/Kconfig" config ISA bool -config EISA - bool "EISA support" - depends on HW_HAS_EISA - select ISA - select GENERIC_ISA_DMA - ---help--- - The Extended Industry Standard Architecture (EISA) bus was - developed as an open alternative to the IBM MicroChannel bus. - - The EISA bus provided some of the features of the IBM MicroChannel - bus while maintaining backward compatibility with cards made for - the older ISA bus. The EISA bus saw limited use between 1988 and - 1995 when it was made obsolete by the PCI bus. - - Say Y here if you are building a kernel for an EISA-based machine. - - Otherwise, say N. - -source "drivers/eisa/Kconfig" - config TC bool "TURBOchannel support" depends on MACH_DECSTATION @@ -3177,21 +3124,6 @@ config ZONE_DMA config ZONE_DMA32 bool -source "drivers/pcmcia/Kconfig" - -config HAS_RAPIDIO - bool - default n - -config RAPIDIO - tristate "RapidIO support" - depends on HAS_RAPIDIO || PCI - help - If you say Y here, the kernel will include drivers and - infrastructure code to support RapidIO interconnect devices. 
- -source "drivers/rapidio/Kconfig" - endmenu config TRAD_SIGNALS diff --git a/arch/mips/alchemy/Kconfig b/arch/mips/alchemy/Kconfig index 7d73f7f4202b..83b288b95b16 100644 --- a/arch/mips/alchemy/Kconfig +++ b/arch/mips/alchemy/Kconfig @@ -14,7 +14,7 @@ choice config MIPS_MTX1 bool "4G Systems MTX-1 board" - select HW_HAS_PCI + select HAVE_PCI select ALCHEMY_GPIOINT_AU1000 select SYS_SUPPORTS_LITTLE_ENDIAN select SYS_HAS_EARLY_PRINTK @@ -22,7 +22,7 @@ config MIPS_MTX1 config MIPS_DB1XXX bool "Alchemy DB1XXX / PB1XXX boards" select GPIOLIB - select HW_HAS_PCI + select HAVE_PCI select SYS_SUPPORTS_LITTLE_ENDIAN select SYS_HAS_EARLY_PRINTK help @@ -40,7 +40,7 @@ config MIPS_XXS1500 config MIPS_GPR bool "Trapeze ITS GPR board" select ALCHEMY_GPIOINT_AU1000 - select HW_HAS_PCI + select HAVE_PCI select SYS_SUPPORTS_LITTLE_ENDIAN select SYS_HAS_EARLY_PRINTK diff --git a/arch/mips/ath25/Kconfig b/arch/mips/ath25/Kconfig index 2c1dfd06c366..3014c80cf581 100644 --- a/arch/mips/ath25/Kconfig +++ b/arch/mips/ath25/Kconfig @@ -13,6 +13,5 @@ config PCI_AR2315 bool "Atheros AR2315 PCI controller support" depends on SOC_AR2315 select ARCH_HAS_PHYS_TO_DMA - select HW_HAS_PCI - select PCI + select FORCE_PCI default y diff --git a/arch/mips/ath79/Kconfig b/arch/mips/ath79/Kconfig index 9547cf1ea38d..191c3910eac5 100644 --- a/arch/mips/ath79/Kconfig +++ b/arch/mips/ath79/Kconfig @@ -75,11 +75,11 @@ config ATH79_MACH_UBNT_XM endmenu config SOC_AR71XX - select HW_HAS_PCI + select HAVE_PCI def_bool n config SOC_AR724X - select HW_HAS_PCI + select HAVE_PCI select PCI_AR724X if PCI def_bool n @@ -90,12 +90,12 @@ config SOC_AR933X def_bool n config SOC_AR934X - select HW_HAS_PCI + select HAVE_PCI select PCI_AR724X if PCI def_bool n config SOC_QCA955X - select HW_HAS_PCI + select HAVE_PCI select PCI_AR724X if PCI def_bool n diff --git a/arch/mips/bcm63xx/Kconfig b/arch/mips/bcm63xx/Kconfig index 96ed735a4f4a..837f6e5a2f37 100644 --- a/arch/mips/bcm63xx/Kconfig +++ b/arch/mips/bcm63xx/Kconfig @@ -5,17 +5,17 @@ menu "CPU support" config BCM63XX_CPU_3368 bool "support 3368 CPU" select SYS_HAS_CPU_BMIPS4350 - select HW_HAS_PCI + select HAVE_PCI config BCM63XX_CPU_6328 bool "support 6328 CPU" select SYS_HAS_CPU_BMIPS4350 - select HW_HAS_PCI + select HAVE_PCI config BCM63XX_CPU_6338 bool "support 6338 CPU" select SYS_HAS_CPU_BMIPS32_3300 - select HW_HAS_PCI + select HAVE_PCI config BCM63XX_CPU_6345 bool "support 6345 CPU" @@ -24,22 +24,22 @@ config BCM63XX_CPU_6345 config BCM63XX_CPU_6348 bool "support 6348 CPU" select SYS_HAS_CPU_BMIPS32_3300 - select HW_HAS_PCI + select HAVE_PCI config BCM63XX_CPU_6358 bool "support 6358 CPU" select SYS_HAS_CPU_BMIPS4350 - select HW_HAS_PCI + select HAVE_PCI config BCM63XX_CPU_6362 bool "support 6362 CPU" select SYS_HAS_CPU_BMIPS4350 - select HW_HAS_PCI + select HAVE_PCI config BCM63XX_CPU_6368 bool "support 6368 CPU" select SYS_HAS_CPU_BMIPS4350 - select HW_HAS_PCI + select HAVE_PCI endmenu source "arch/mips/bcm63xx/boards/Kconfig" diff --git a/arch/mips/include/asm/dma-mapping.h b/arch/mips/include/asm/dma-mapping.h index b4c477eb46ce..20dfaad3a55d 100644 --- a/arch/mips/include/asm/dma-mapping.h +++ b/arch/mips/include/asm/dma-mapping.h @@ -10,10 +10,8 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) { #if defined(CONFIG_MACH_JAZZ) return &jazz_dma_ops; -#elif defined(CONFIG_SWIOTLB) - return &swiotlb_dma_ops; #else - return &dma_direct_ops; + return NULL; #endif } diff --git a/arch/mips/include/asm/jazzdma.h b/arch/mips/include/asm/jazzdma.h index 
d913439c738c..d13f940022d5 100644 --- a/arch/mips/include/asm/jazzdma.h +++ b/arch/mips/include/asm/jazzdma.h @@ -39,12 +39,6 @@ extern int vdma_get_enable(int channel); #define VDMA_PAGE(a) ((unsigned int)(a) >> 12) #define VDMA_OFFSET(a) ((unsigned int)(a) & (VDMA_PAGESIZE-1)) -/* - * error code returned by vdma_alloc() - * (See also arch/mips/kernel/jazzdma.c) - */ -#define VDMA_ERROR 0xffffffff - /* * VDMA pagetable entry description */ diff --git a/arch/mips/include/asm/mach-jz4740/jz4740_mmc.h b/arch/mips/include/asm/mach-jz4740/jz4740_mmc.h index e9cc62cfac99..9a7de47c7c79 100644 --- a/arch/mips/include/asm/mach-jz4740/jz4740_mmc.h +++ b/arch/mips/include/asm/mach-jz4740/jz4740_mmc.h @@ -3,12 +3,8 @@ #define __LINUX_MMC_JZ4740_MMC struct jz4740_mmc_platform_data { - int gpio_power; - int gpio_card_detect; - int gpio_read_only; unsigned card_detect_active_low:1; unsigned read_only_active_low:1; - unsigned power_active_low:1; unsigned data_1bit:1; }; diff --git a/arch/mips/include/asm/mach-loongson64/loongson.h b/arch/mips/include/asm/mach-loongson64/loongson.h index d0ae5d55413b..b6870fec0f99 100644 --- a/arch/mips/include/asm/mach-loongson64/loongson.h +++ b/arch/mips/include/asm/mach-loongson64/loongson.h @@ -113,7 +113,7 @@ static inline void do_perfcnt_IRQ(void) #define LOONGSON_PCICFG_SIZE 0x00000800 /* 2K */ #define LOONGSON_PCICFG_TOP (LOONGSON_PCICFG_BASE+LOONGSON_PCICFG_SIZE-1) -#if defined(CONFIG_HT_PCI) +#ifdef CONFIG_CPU_LOONGSON3 #define LOONGSON_PCIIO_BASE loongson_sysconf.pci_io_base #else #define LOONGSON_PCIIO_BASE 0x1fd00000 diff --git a/arch/mips/include/asm/mach-rc32434/rb.h b/arch/mips/include/asm/mach-rc32434/rb.h index aac8ce8902e7..5dfd4d66d6fc 100644 --- a/arch/mips/include/asm/mach-rc32434/rb.h +++ b/arch/mips/include/asm/mach-rc32434/rb.h @@ -71,12 +71,6 @@ struct korina_device { struct net_device *dev; }; -struct cf_device { - int gpio_pin; - void *dev; - struct gendisk *gd; -}; - struct mpmc_device { unsigned char state; spinlock_t lock; diff --git a/arch/mips/include/asm/uasm.h b/arch/mips/include/asm/uasm.h index 59dae37f6b8d..b1990dd75f27 100644 --- a/arch/mips/include/asm/uasm.h +++ b/arch/mips/include/asm/uasm.h @@ -157,6 +157,7 @@ Ip_u2u1s3(_slti); Ip_u2u1s3(_sltiu); Ip_u3u1u2(_sltu); Ip_u2u1u3(_sra); +Ip_u3u2u1(_srav); Ip_u2u1u3(_srl); Ip_u3u2u1(_srlv); Ip_u3u1u2(_subu); diff --git a/arch/mips/include/uapi/asm/inst.h b/arch/mips/include/uapi/asm/inst.h index c05dcf5ab414..40fbb5dd66df 100644 --- a/arch/mips/include/uapi/asm/inst.h +++ b/arch/mips/include/uapi/asm/inst.h @@ -369,8 +369,9 @@ enum mm_32a_minor_op { mm_ext_op = 0x02c, mm_pool32axf_op = 0x03c, mm_srl32_op = 0x040, + mm_srlv32_op = 0x050, mm_sra_op = 0x080, - mm_srlv32_op = 0x090, + mm_srav_op = 0x090, mm_rotr_op = 0x0c0, mm_lwxs_op = 0x118, mm_addu32_op = 0x150, diff --git a/arch/mips/jazz/jazzdma.c b/arch/mips/jazz/jazzdma.c index 4c41ed0a637e..6256d35dbf4d 100644 --- a/arch/mips/jazz/jazzdma.c +++ b/arch/mips/jazz/jazzdma.c @@ -104,12 +104,12 @@ unsigned long vdma_alloc(unsigned long paddr, unsigned long size) if (vdma_debug) printk("vdma_alloc: Invalid physical address: %08lx\n", paddr); - return VDMA_ERROR; /* invalid physical address */ + return DMA_MAPPING_ERROR; /* invalid physical address */ } if (size > 0x400000 || size == 0) { if (vdma_debug) printk("vdma_alloc: Invalid size: %08lx\n", size); - return VDMA_ERROR; /* invalid physical address */ + return DMA_MAPPING_ERROR; /* invalid physical address */ } spin_lock_irqsave(&vdma_lock, flags); @@ -123,7 +123,7 @@ unsigned long 
vdma_alloc(unsigned long paddr, unsigned long size) first < VDMA_PGTBL_ENTRIES) first++; if (first + pages > VDMA_PGTBL_ENTRIES) { /* nothing free */ spin_unlock_irqrestore(&vdma_lock, flags); - return VDMA_ERROR; + return DMA_MAPPING_ERROR; } last = first + 1; @@ -569,7 +569,7 @@ static void *jazz_dma_alloc(struct device *dev, size_t size, return NULL; *dma_handle = vdma_alloc(virt_to_phys(ret), size); - if (*dma_handle == VDMA_ERROR) { + if (*dma_handle == DMA_MAPPING_ERROR) { dma_direct_free_pages(dev, size, ret, *dma_handle, attrs); return NULL; } @@ -620,7 +620,7 @@ static int jazz_dma_map_sg(struct device *dev, struct scatterlist *sglist, arch_sync_dma_for_device(dev, sg_phys(sg), sg->length, dir); sg->dma_address = vdma_alloc(sg_phys(sg), sg->length); - if (sg->dma_address == VDMA_ERROR) + if (sg->dma_address == DMA_MAPPING_ERROR) return 0; sg_dma_len(sg) = sg->length; } @@ -674,11 +674,6 @@ static void jazz_dma_sync_sg_for_cpu(struct device *dev, arch_sync_dma_for_cpu(dev, sg_phys(sg), sg->length, dir); } -static int jazz_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == VDMA_ERROR; -} - const struct dma_map_ops jazz_dma_ops = { .alloc = jazz_dma_alloc, .free = jazz_dma_free, @@ -692,6 +687,5 @@ const struct dma_map_ops jazz_dma_ops = { .sync_sg_for_device = jazz_dma_sync_sg_for_device, .dma_supported = dma_direct_supported, .cache_sync = arch_dma_cache_sync, - .mapping_error = jazz_dma_mapping_error, }; EXPORT_SYMBOL(jazz_dma_ops); diff --git a/arch/mips/jz4740/board-qi_lb60.c b/arch/mips/jz4740/board-qi_lb60.c index af0c8ace0141..6718efb400f4 100644 --- a/arch/mips/jz4740/board-qi_lb60.c +++ b/arch/mips/jz4740/board-qi_lb60.c @@ -43,9 +43,6 @@ #include "clock.h" /* GPIOs */ -#define QI_LB60_GPIO_SD_CD JZ_GPIO_PORTD(0) -#define QI_LB60_GPIO_SD_VCC_EN_N JZ_GPIO_PORTD(2) - #define QI_LB60_GPIO_KEYOUT(x) (JZ_GPIO_PORTC(10) + (x)) #define QI_LB60_GPIO_KEYIN(x) (JZ_GPIO_PORTD(18) + (x)) #define QI_LB60_GPIO_KEYIN8 JZ_GPIO_PORTD(26) @@ -386,10 +383,16 @@ static struct platform_device qi_lb60_gpio_keys = { }; static struct jz4740_mmc_platform_data qi_lb60_mmc_pdata = { - .gpio_card_detect = QI_LB60_GPIO_SD_CD, - .gpio_read_only = -1, - .gpio_power = QI_LB60_GPIO_SD_VCC_EN_N, - .power_active_low = 1, + /* Intentionally left blank */ +}; + +static struct gpiod_lookup_table qi_lb60_mmc_gpio_table = { + .dev_id = "jz4740-mmc.0", + .table = { + GPIO_LOOKUP("GPIOD", 0, "cd", GPIO_ACTIVE_HIGH), + GPIO_LOOKUP("GPIOD", 2, "power", GPIO_ACTIVE_LOW), + { }, + }, }; /* beeper */ @@ -500,6 +503,7 @@ static int __init qi_lb60_init_platform_devices(void) gpiod_add_lookup_table(&qi_lb60_audio_gpio_table); gpiod_add_lookup_table(&qi_lb60_nand_gpio_table); gpiod_add_lookup_table(&qi_lb60_spigpio_gpio_table); + gpiod_add_lookup_table(&qi_lb60_mmc_gpio_table); spi_register_board_info(qi_lb60_spi_board_info, ARRAY_SIZE(qi_lb60_spi_board_info)); diff --git a/arch/mips/kvm/Kconfig b/arch/mips/kvm/Kconfig index 760aec70dce5..4528bc9c3cb1 100644 --- a/arch/mips/kvm/Kconfig +++ b/arch/mips/kvm/Kconfig @@ -73,6 +73,6 @@ config KVM_MIPS_DEBUG_COP0_COUNTERS If unsure, say N. 
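The jazzdma hunks above replace the driver-private VDMA_ERROR sentinel with the generic DMA_MAPPING_ERROR, and further down the diff deletes the jazz_dma_mapping_error() callback entirely: once every mapper returns the one well-known sentinel, the core dma_mapping_error() helper works without a per-ops .mapping_error hook. The caller-side pattern this preserves looks roughly like the sketch below; queue_buffer() is a hypothetical driver helper, not code from this patch:

/*
 * Sketch: the generic caller-side error check that DMA_MAPPING_ERROR
 * enables.  dma_map_single() returns the well-known sentinel on failure,
 * and dma_mapping_error() tests for it.
 */
#include <linux/dma-mapping.h>

static int queue_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;	/* mapping failed; nothing to unmap */

	/* ... hand "addr" to the device and wait for completion ... */

	dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
	return 0;
}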
-source drivers/vhost/Kconfig +source "drivers/vhost/Kconfig" endif # VIRTUALIZATION diff --git a/arch/mips/lantiq/Kconfig b/arch/mips/lantiq/Kconfig index 8e3a1fc2bc39..188de95d6dbd 100644 --- a/arch/mips/lantiq/Kconfig +++ b/arch/mips/lantiq/Kconfig @@ -19,7 +19,7 @@ config SOC_AMAZON_SE config SOC_XWAY bool "XWAY" select SOC_TYPE_XWAY - select HW_HAS_PCI + select HAVE_PCI select MFD_SYSCON select MFD_CORE diff --git a/arch/mips/loongson64/Kconfig b/arch/mips/loongson64/Kconfig index c865b4b9b775..4c14a11525f4 100644 --- a/arch/mips/loongson64/Kconfig +++ b/arch/mips/loongson64/Kconfig @@ -15,7 +15,7 @@ config LEMOTE_FULOONG2E select DMA_NONCOHERENT select BOOT_ELF32 select BOARD_SCACHE - select HW_HAS_PCI + select HAVE_PCI select I8259 select ISA select IRQ_MIPS_CPU @@ -46,7 +46,7 @@ config LEMOTE_MACH2F select DMA_NONCOHERENT select GENERIC_ISA_DMA_SUPPORT_BROKEN select HAVE_CLK - select HW_HAS_PCI + select HAVE_PCI select I8259 select IRQ_MIPS_CPU select ISA @@ -74,9 +74,8 @@ config LOONGSON_MACH3X select CSRC_R4K select CEVT_R4K select CPU_HAS_WB - select HW_HAS_PCI + select FORCE_PCI select ISA - select HT_PCI select I8259 select IRQ_MIPS_CPU select NR_CPUS_DEFAULT_4 diff --git a/arch/mips/mm/uasm-micromips.c b/arch/mips/mm/uasm-micromips.c index 24e5b0d06899..75ef90486fe6 100644 --- a/arch/mips/mm/uasm-micromips.c +++ b/arch/mips/mm/uasm-micromips.c @@ -104,6 +104,7 @@ static const struct insn insn_table_MM[insn_invalid] = { [insn_sltiu] = {M(mm_sltiu32_op, 0, 0, 0, 0, 0), RT | RS | SIMM}, [insn_sltu] = {M(mm_pool32a_op, 0, 0, 0, 0, mm_sltu_op), RT | RS | RD}, [insn_sra] = {M(mm_pool32a_op, 0, 0, 0, 0, mm_sra_op), RT | RS | RD}, + [insn_srav] = {M(mm_pool32a_op, 0, 0, 0, 0, mm_srav_op), RT | RS | RD}, [insn_srl] = {M(mm_pool32a_op, 0, 0, 0, 0, mm_srl32_op), RT | RS | RD}, [insn_srlv] = {M(mm_pool32a_op, 0, 0, 0, 0, mm_srlv32_op), RT | RS | RD}, [insn_rotr] = {M(mm_pool32a_op, 0, 0, 0, 0, mm_rotr_op), RT | RS | RD}, diff --git a/arch/mips/mm/uasm-mips.c b/arch/mips/mm/uasm-mips.c index 60ceb93c71a0..6abe40fc413d 100644 --- a/arch/mips/mm/uasm-mips.c +++ b/arch/mips/mm/uasm-mips.c @@ -171,6 +171,7 @@ static const struct insn insn_table[insn_invalid] = { [insn_sltiu] = {M(sltiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM}, [insn_sltu] = {M(spec_op, 0, 0, 0, 0, sltu_op), RS | RT | RD}, [insn_sra] = {M(spec_op, 0, 0, 0, 0, sra_op), RT | RD | RE}, + [insn_srav] = {M(spec_op, 0, 0, 0, 0, srav_op), RS | RT | RD}, [insn_srl] = {M(spec_op, 0, 0, 0, 0, srl_op), RT | RD | RE}, [insn_srlv] = {M(spec_op, 0, 0, 0, 0, srlv_op), RS | RT | RD}, [insn_subu] = {M(spec_op, 0, 0, 0, 0, subu_op), RS | RT | RD}, diff --git a/arch/mips/mm/uasm.c b/arch/mips/mm/uasm.c index 57570c0649b4..45b6264ff308 100644 --- a/arch/mips/mm/uasm.c +++ b/arch/mips/mm/uasm.c @@ -61,10 +61,10 @@ enum opcode { insn_mthc0, insn_mthi, insn_mtlo, insn_mul, insn_multu, insn_nor, insn_or, insn_ori, insn_pref, insn_rfe, insn_rotr, insn_sb, insn_sc, insn_scd, insn_sd, insn_sh, insn_sll, insn_sllv, - insn_slt, insn_slti, insn_sltiu, insn_sltu, insn_sra, insn_srl, - insn_srlv, insn_subu, insn_sw, insn_sync, insn_syscall, insn_tlbp, - insn_tlbr, insn_tlbwi, insn_tlbwr, insn_wait, insn_wsbh, insn_xor, - insn_xori, insn_yield, + insn_slt, insn_slti, insn_sltiu, insn_sltu, insn_sra, insn_srav, + insn_srl, insn_srlv, insn_subu, insn_sw, insn_sync, insn_syscall, + insn_tlbp, insn_tlbr, insn_tlbwi, insn_tlbwr, insn_wait, insn_wsbh, + insn_xor, insn_xori, insn_yield, insn_invalid /* insn_invalid must be last */ }; @@ -353,6 +353,7 @@ I_u2u1s3(_slti) 
I_u2u1s3(_sltiu) I_u3u1u2(_sltu) I_u2u1u3(_sra) +I_u3u2u1(_srav) I_u2u1u3(_srl) I_u3u2u1(_srlv) I_u2u1u3(_rotr) diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c index 4d8cb9bb8365..3a0e34f4e615 100644 --- a/arch/mips/net/bpf_jit.c +++ b/arch/mips/net/bpf_jit.c @@ -1159,19 +1159,19 @@ jmp_cmp: emit_load(r_A, r_skb, off, ctx); break; case BPF_ANC | SKF_AD_VLAN_TAG: - case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: ctx->flags |= SEEN_SKB | SEEN_A; BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); off = offsetof(struct sk_buff, vlan_tci); - emit_half_load_unsigned(r_s0, r_skb, off, ctx); - if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { - emit_andi(r_A, r_s0, (u16)~VLAN_TAG_PRESENT, ctx); - } else { - emit_andi(r_A, r_s0, VLAN_TAG_PRESENT, ctx); - /* return 1 if present */ - emit_sltu(r_A, r_zero, r_A, ctx); - } + emit_half_load_unsigned(r_A, r_skb, off, ctx); + break; + case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: + ctx->flags |= SEEN_SKB | SEEN_A; + emit_load_byte(r_A, r_skb, PKT_VLAN_PRESENT_OFFSET(), ctx); + if (PKT_VLAN_PRESENT_BIT) + emit_srl(r_A, r_A, PKT_VLAN_PRESENT_BIT, ctx); + if (PKT_VLAN_PRESENT_BIT < 7) + emit_andi(r_A, r_A, 1, ctx); break; case BPF_ANC | SKF_AD_PKTTYPE: ctx->flags |= SEEN_SKB; diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c index aeb7b1b0f202..b16710a8a9e7 100644 --- a/arch/mips/net/ebpf_jit.c +++ b/arch/mips/net/ebpf_jit.c @@ -854,6 +854,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_ALU | BPF_MOD | BPF_X: /* ALU_REG */ case BPF_ALU | BPF_LSH | BPF_X: /* ALU_REG */ case BPF_ALU | BPF_RSH | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_ARSH | BPF_X: /* ALU_REG */ src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp); dst = ebpf_to_mips_reg(ctx, insn, dst_reg); if (src < 0 || dst < 0) @@ -913,6 +914,9 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_RSH: emit_instr(ctx, srlv, dst, dst, src); break; + case BPF_ARSH: + emit_instr(ctx, srav, dst, dst, src); + break; default: pr_err("ALU_REG NOT HANDLED\n"); return -EINVAL; diff --git a/arch/mips/pmcs-msp71xx/Kconfig b/arch/mips/pmcs-msp71xx/Kconfig index d319bc0c3df6..b185b7620c97 100644 --- a/arch/mips/pmcs-msp71xx/Kconfig +++ b/arch/mips/pmcs-msp71xx/Kconfig @@ -6,25 +6,25 @@ choice config PMC_MSP4200_EVAL bool "PMC-Sierra MSP4200 Eval Board" select IRQ_MSP_SLP - select HW_HAS_PCI + select HAVE_PCI select MIPS_L1_CACHE_SHIFT_4 config PMC_MSP4200_GW bool "PMC-Sierra MSP4200 VoIP Gateway" select IRQ_MSP_SLP - select HW_HAS_PCI + select HAVE_PCI config PMC_MSP7120_EVAL bool "PMC-Sierra MSP7120 Eval Board" select SYS_SUPPORTS_MULTITHREADING select IRQ_MSP_CIC - select HW_HAS_PCI + select HAVE_PCI config PMC_MSP7120_GW bool "PMC-Sierra MSP7120 Residential Gateway" select SYS_SUPPORTS_MULTITHREADING select IRQ_MSP_CIC - select HW_HAS_PCI + select HAVE_PCI select MSP_HAS_USB select MSP_ETH @@ -32,7 +32,7 @@ config PMC_MSP7120_FPGA bool "PMC-Sierra MSP7120 FPGA" select SYS_SUPPORTS_MULTITHREADING select IRQ_MSP_CIC - select HW_HAS_PCI + select HAVE_PCI endchoice diff --git a/arch/mips/ralink/Kconfig b/arch/mips/ralink/Kconfig index 1f9cb0e3c79a..4c8006b4a5f7 100644 --- a/arch/mips/ralink/Kconfig +++ b/arch/mips/ralink/Kconfig @@ -27,18 +27,18 @@ choice config SOC_RT288X bool "RT288x" select MIPS_L1_CACHE_SHIFT_4 - select HW_HAS_PCI + select HAVE_PCI config SOC_RT305X bool "RT305x" config SOC_RT3883 bool "RT3883" - select HW_HAS_PCI + select HAVE_PCI config SOC_MT7620 bool "MT7620/8" - select HW_HAS_PCI + select HAVE_PCI config 
SOC_MT7621 bool "MT7621" @@ -50,7 +50,7 @@ choice select MIPS_GIC select COMMON_CLK select CLKSRC_MIPS_GIC - select HW_HAS_PCI + select HAVE_PCI endchoice choice diff --git a/arch/mips/rb532/devices.c b/arch/mips/rb532/devices.c index 2b23ad640f39..828d8cc3a5df 100644 --- a/arch/mips/rb532/devices.c +++ b/arch/mips/rb532/devices.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -127,14 +128,18 @@ static struct resource cf_slot0_res[] = { } }; -static struct cf_device cf_slot0_data = { - .gpio_pin = CF_GPIO_NUM +static struct gpiod_lookup_table cf_slot0_gpio_table = { + .dev_id = "pata-rb532-cf", + .table = { + GPIO_LOOKUP("gpio0", CF_GPIO_NUM, + NULL, GPIO_ACTIVE_HIGH), + { }, + }, }; static struct platform_device cf_slot0 = { .id = -1, .name = "pata-rb532-cf", - .dev.platform_data = &cf_slot0_data, .resource = cf_slot0_res, .num_resources = ARRAY_SIZE(cf_slot0_res), }; @@ -305,6 +310,7 @@ static int __init plat_setup_devices(void) dev_set_drvdata(&korina_dev0.dev, &korina_dev0_data); + gpiod_add_lookup_table(&cf_slot0_gpio_table); return platform_add_devices(rb532_devs, ARRAY_SIZE(rb532_devs)); } diff --git a/arch/mips/sibyte/Kconfig b/arch/mips/sibyte/Kconfig index 7ec278d72096..470d46183677 100644 --- a/arch/mips/sibyte/Kconfig +++ b/arch/mips/sibyte/Kconfig @@ -3,7 +3,7 @@ config SIBYTE_SB1250 bool select CEVT_SB1250 select CSRC_SB1250 - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select SIBYTE_ENABLE_LDT_IF_PCI select SIBYTE_HAS_ZBUS_PROFILING @@ -23,7 +23,7 @@ config SIBYTE_BCM1125 bool select CEVT_SB1250 select CSRC_SB1250 - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select SIBYTE_BCM112X select SIBYTE_HAS_ZBUS_PROFILING @@ -33,7 +33,7 @@ config SIBYTE_BCM1125H bool select CEVT_SB1250 select CSRC_SB1250 - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select SIBYTE_BCM112X select SIBYTE_ENABLE_LDT_IF_PCI @@ -52,7 +52,7 @@ config SIBYTE_BCM1x80 bool select CEVT_BCM1480 select CSRC_BCM1480 - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select SIBYTE_HAS_ZBUS_PROFILING select SIBYTE_SB1xxx_SOC @@ -62,7 +62,7 @@ config SIBYTE_BCM1x55 bool select CEVT_BCM1480 select CSRC_BCM1480 - select HW_HAS_PCI + select HAVE_PCI select IRQ_MIPS_CPU select SIBYTE_SB1xxx_SOC select SIBYTE_HAS_ZBUS_PROFILING diff --git a/arch/mips/txx9/Kconfig b/arch/mips/txx9/Kconfig index d2509c93f0ee..9a22a182b7a4 100644 --- a/arch/mips/txx9/Kconfig +++ b/arch/mips/txx9/Kconfig @@ -59,7 +59,7 @@ config SOC_TX3927 bool select CEVT_TXX9 select HAS_TXX9_SERIAL - select HW_HAS_PCI + select HAVE_PCI select IRQ_TXX9 select GPIO_TXX9 @@ -67,7 +67,7 @@ config SOC_TX4927 bool select CEVT_TXX9 select HAS_TXX9_SERIAL - select HW_HAS_PCI + select HAVE_PCI select IRQ_TXX9 select PCI_TX4927 select GPIO_TXX9 @@ -77,7 +77,7 @@ config SOC_TX4938 bool select CEVT_TXX9 select HAS_TXX9_SERIAL - select HW_HAS_PCI + select HAVE_PCI select IRQ_TXX9 select PCI_TX4927 select GPIO_TXX9 @@ -87,7 +87,7 @@ config SOC_TX4939 bool select CEVT_TXX9 select HAS_TXX9_SERIAL - select HW_HAS_PCI + select HAVE_PCI select PCI_TX4927 select HAS_TXX9_ACLC diff --git a/arch/mips/vr41xx/Kconfig b/arch/mips/vr41xx/Kconfig index 992c988b83b0..e0b651db371d 100644 --- a/arch/mips/vr41xx/Kconfig +++ b/arch/mips/vr41xx/Kconfig @@ -30,7 +30,7 @@ config TANBAC_TB022X select CSRC_R4K select DMA_NONCOHERENT select IRQ_MIPS_CPU - select HW_HAS_PCI + select HAVE_PCI select SYS_SUPPORTS_32BIT_KERNEL select SYS_SUPPORTS_LITTLE_ENDIAN help @@ -46,7 +46,7 @@ config VICTOR_MPC30X select CSRC_R4K 
select DMA_NONCOHERENT select IRQ_MIPS_CPU - select HW_HAS_PCI + select HAVE_PCI select PCI_VR41XX select SYS_SUPPORTS_32BIT_KERNEL select SYS_SUPPORTS_LITTLE_ENDIAN @@ -57,7 +57,7 @@ config ZAO_CAPCELLA select CSRC_R4K select DMA_NONCOHERENT select IRQ_MIPS_CPU - select HW_HAS_PCI + select HAVE_PCI select PCI_VR41XX select SYS_SUPPORTS_32BIT_KERNEL select SYS_SUPPORTS_LITTLE_ENDIAN @@ -99,6 +99,6 @@ endchoice config PCI_VR41XX bool "Add PCI control unit support of NEC VR4100 series" - depends on MACH_VR41XX && HW_HAS_PCI + depends on MACH_VR41XX && HAVE_PCI default y select PCI diff --git a/arch/nds32/Kconfig b/arch/nds32/Kconfig index 7a04adacb2f0..dda1906bba11 100644 --- a/arch/nds32/Kconfig +++ b/arch/nds32/Kconfig @@ -11,7 +11,6 @@ config NDS32 select CLKSRC_MMIO select CLONE_BACKWARDS select COMMON_CLK - select DMA_DIRECT_OPS select GENERIC_ATOMIC64 select GENERIC_CPU_DEVICES select GENERIC_CLOCKEVENTS @@ -29,7 +28,9 @@ config NDS32 select HANDLE_DOMAIN_IRQ select HAVE_ARCH_TRACEHOOK select HAVE_DEBUG_KMEMLEAK + select HAVE_EXIT_THREAD select HAVE_REGS_AND_STACK_ACCESS_API + select HAVE_PERF_EVENTS select IRQ_DOMAIN select LOCKDEP_SUPPORT select MODULES_USE_ELF_RELA @@ -92,3 +93,13 @@ endmenu menu "Kernel Features" source "kernel/Kconfig.hz" endmenu + +menu "Power management options" +config SYS_SUPPORTS_APM_EMULATION + bool + +config ARCH_SUSPEND_POSSIBLE + def_bool y + +source "kernel/power/Kconfig" +endmenu diff --git a/arch/nds32/Kconfig.cpu b/arch/nds32/Kconfig.cpu index b8c8984d1456..f16edf0582b4 100644 --- a/arch/nds32/Kconfig.cpu +++ b/arch/nds32/Kconfig.cpu @@ -7,6 +7,40 @@ config CPU_LITTLE_ENDIAN bool "Little endian" default y +config FPU + bool "FPU support" + default n + help + If the FPU ISA is used in user space, this configuration shall be Y to + enable the required kernel support, such as FPU context switching and + the FPU exception handler. + + If no FPU ISA is used in user space, say N. + +config LAZY_FPU + bool "lazy FPU support" + depends on FPU + default y + help + Say Y here to enable the lazy FPU scheme. The lazy FPU scheme can + enhance system performance by reducing the context switch + frequency of the FPU registers. + + For the normal case, say Y. + +config SUPPORT_DENORMAL_ARITHMETIC + bool "Denormal arithmetic support" + depends on FPU + default n + help + Say Y here to enable arithmetic on denormalized numbers. Enabling + this feature can enhance the precision of tiny (subnormal) numbers. + However, the performance loss in floating-point calculations may be + significant due to the additional FPU exceptions. + + If the calculated tolerance for tiny numbers is not critical, + say N to prevent the performance loss. + config HWZOL bool "hardware zero overhead loop support" depends on CPU_D10 || CPU_D15 @@ -143,6 +177,13 @@ config CACHE_L2 Say Y here to enable the L2 cache if your SoC is integrated with an L2CC. If unsure, say N. +config HW_PRE + bool "Enable hardware prefetcher" + default y + help + Say Y here to enable the hardware prefetcher feature. + It is only supported when CPU_VER.REV >= 0x09.
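The SUPPORT_DENORMAL_ARITHMETIC help text above is easier to follow with a concrete picture of what a "denormalized" (subnormal) value is: any single-precision result smaller in magnitude than FLT_MIN can only be represented as a subnormal, which is exactly the case the nds32 FPU hands off to the underflow trap and the kernel's FPU emulator. The stand-alone C sketch below only demonstrates the number class itself and makes no nds32-specific assumptions:

/*
 * Sketch: producing a subnormal ("denormal") single-precision value.
 * On hardware without denormal support, computing "sub" would raise an
 * underflow exception instead of completing in silicon.
 * Build with: cc denormal.c -o denormal -lm
 */
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
	float tiny = FLT_MIN;      /* smallest normalized single */
	float sub = tiny / 16.0f;  /* below FLT_MIN: subnormal */

	printf("FLT_MIN      = %e\n", tiny);
	printf("FLT_MIN / 16 = %e (%s)\n", sub,
	       fpclassify(sub) == FP_SUBNORMAL ? "subnormal" : "normal");
	return 0;
}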
+ menu "Memory configuration" choice diff --git a/arch/nds32/Makefile b/arch/nds32/Makefile index 9f525ed70049..0a935c136ec2 100644 --- a/arch/nds32/Makefile +++ b/arch/nds32/Makefile @@ -5,10 +5,14 @@ KBUILD_DEFCONFIG := defconfig comma = , + ifdef CONFIG_FUNCTION_TRACER arch-y += -malways-save-lp -mno-relax endif +# Avoid generating FPU instructions +arch-y += -mno-ext-fpu-sp -mno-ext-fpu-dp -mfloat-abi=soft + KBUILD_CFLAGS += $(call cc-option, -mno-sched-prolog-epilog) KBUILD_CFLAGS += -mcmodel=large @@ -26,6 +30,7 @@ export TEXTADDR # If we have a machine-specific directory, then include it in the build. core-y += arch/nds32/kernel/ arch/nds32/mm/ +core-$(CONFIG_FPU) += arch/nds32/math-emu/ libs-y += arch/nds32/lib/ ifneq '$(CONFIG_NDS32_BUILTIN_DTB)' '""' diff --git a/arch/nds32/boot/dts/ae3xx.dts b/arch/nds32/boot/dts/ae3xx.dts index bb39749a6673..16a9f54a805e 100644 --- a/arch/nds32/boot/dts/ae3xx.dts +++ b/arch/nds32/boot/dts/ae3xx.dts @@ -82,4 +82,9 @@ interrupts = <18>; }; }; + + pmu { + compatible = "andestech,nds32v3-pmu"; + interrupts = <13>; + }; }; diff --git a/arch/nds32/include/asm/Kbuild b/arch/nds32/include/asm/Kbuild index dbc4e5422550..f81b633d5379 100644 --- a/arch/nds32/include/asm/Kbuild +++ b/arch/nds32/include/asm/Kbuild @@ -36,6 +36,7 @@ generic-y += kprobes.h generic-y += kvm_para.h generic-y += limits.h generic-y += local.h +generic-y += local64.h generic-y += mm-arch-hooks.h generic-y += mman.h generic-y += parport.h diff --git a/arch/nds32/include/asm/bitfield.h b/arch/nds32/include/asm/bitfield.h index 8e84fc385b94..7414fcbbab4e 100644 --- a/arch/nds32/include/asm/bitfield.h +++ b/arch/nds32/include/asm/bitfield.h @@ -251,6 +251,11 @@ #define ITYPE_mskSTYPE ( 0xF << ITYPE_offSTYPE ) #define ITYPE_mskCPID ( 0x3 << ITYPE_offCPID ) +/* Additional definitions of ITYPE register for FPU */ +#define FPU_DISABLE_EXCEPTION (0x1 << ITYPE_offSTYPE) +#define FPU_EXCEPTION (0x2 << ITYPE_offSTYPE) +#define FPU_CPID 0 /* FPU Co-Processor ID is 0 */ + #define NDS32_VECTOR_mskNONEXCEPTION 0x78 #define NDS32_VECTOR_offEXCEPTION 8 #define NDS32_VECTOR_offINTERRUPT 9 @@ -692,8 +697,8 @@ #define PFM_CTL_offKU1 13 /* Enable user mode event counting for PFMC1 */ #define PFM_CTL_offKU2 14 /* Enable user mode event counting for PFMC2 */ #define PFM_CTL_offSEL0 15 /* The event selection for PFMC0 */ -#define PFM_CTL_offSEL1 21 /* The event selection for PFMC1 */ -#define PFM_CTL_offSEL2 27 /* The event selection for PFMC2 */ +#define PFM_CTL_offSEL1 16 /* The event selection for PFMC1 */ +#define PFM_CTL_offSEL2 22 /* The event selection for PFMC2 */ /* bit 28:31 reserved */ #define PFM_CTL_mskEN0 ( 0x01 << PFM_CTL_offEN0 ) @@ -735,14 +740,20 @@ #define N13MISC_CTL_offRTP 1 /* Disable Return Target Predictor */ #define N13MISC_CTL_offPTEPF 2 /* Disable HPTWK L2 PTE prefetch */ #define N13MISC_CTL_offSP_SHADOW_EN 4 /* Enable shadow stack pointers */ +#define MISC_CTL_offHWPRE 11 /* Enable hardware prefetch */ /* bit 6, 9:31 reserved */ #define N13MISC_CTL_makBTB ( 0x1 << N13MISC_CTL_offBTB ) #define N13MISC_CTL_makRTP ( 0x1 << N13MISC_CTL_offRTP ) #define N13MISC_CTL_makPTEPF ( 0x1 << N13MISC_CTL_offPTEPF ) #define N13MISC_CTL_makSP_SHADOW_EN ( 0x1 << N13MISC_CTL_offSP_SHADOW_EN ) +#define MISC_CTL_makHWPRE_EN ( 0x1 << MISC_CTL_offHWPRE ) +#ifdef CONFIG_HW_PRE +#define MISC_init (N13MISC_CTL_makBTB|N13MISC_CTL_makRTP|N13MISC_CTL_makSP_SHADOW_EN|MISC_CTL_makHWPRE_EN) +#else +#define MISC_init (N13MISC_CTL_makBTB|N13MISC_CTL_makRTP|N13MISC_CTL_makSP_SHADOW_EN) +#endif
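The #ifdef block just above is plain bit arithmetic: MISC_init always sets the BTB, RTP and SP_SHADOW_EN bits, and CONFIG_HW_PRE additionally ORs in bit 11 for the hardware prefetcher. A stand-alone sketch of the same computation follows; the BTB offset is not visible in this hunk, so 0 is assumed here purely for illustration:

/*
 * Sketch of the MISC_init mask composition.  offBTB is an assumption
 * (its #define lies outside this hunk); the other offsets come from the
 * bitfield.h lines above.
 */
#include <stdio.h>

#define offBTB          0   /* assumed for this sketch */
#define offRTP          1
#define offSP_SHADOW_EN 4
#define offHWPRE        11

int main(void)
{
	unsigned long base = (1UL << offBTB) | (1UL << offRTP) |
			     (1UL << offSP_SHADOW_EN);

	printf("MISC_init without CONFIG_HW_PRE: 0x%03lx\n", base);
	printf("MISC_init with CONFIG_HW_PRE:    0x%03lx\n",
	       base | (1UL << offHWPRE));
	return 0;
}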
/****************************************************************************** * PRUSR_ACC_CTL (Privileged Resource User Access Control Registers) *****************************************************************************/ @@ -926,6 +937,7 @@ #define FPCSR_mskDNIT ( 0x1 << FPCSR_offDNIT ) #define FPCSR_mskRIT ( 0x1 << FPCSR_offRIT ) #define FPCSR_mskALL (FPCSR_mskIVO | FPCSR_mskDBZ | FPCSR_mskOVF | FPCSR_mskUDF | FPCSR_mskIEX) +#define FPCSR_mskALLE_NO_UDFE (FPCSR_mskIVOE | FPCSR_mskDBZE | FPCSR_mskOVFE | FPCSR_mskIEXE) #define FPCSR_mskALLE (FPCSR_mskIVOE | FPCSR_mskDBZE | FPCSR_mskOVFE | FPCSR_mskUDFE | FPCSR_mskIEXE) #define FPCSR_mskALLT (FPCSR_mskIVOT | FPCSR_mskDBZT | FPCSR_mskOVFT | FPCSR_mskUDFT | FPCSR_mskIEXT |FPCSR_mskDNIT | FPCSR_mskRIT) @@ -946,6 +958,15 @@ #define FPCFG_mskIMVER ( 0x1F << FPCFG_offIMVER ) #define FPCFG_mskAVER ( 0x1F << FPCFG_offAVER ) +/* 8 Single precision or 4 double precision registers are available */ +#define SP8_DP4_reg 0 +/* 16 Single precision or 8 double precision registers are available */ +#define SP16_DP8_reg 1 +/* 32 Single precision or 16 double precision registers are available */ +#define SP32_DP16_reg 2 +/* 32 Single precision or 32 double precision registers are available */ +#define SP32_DP32_reg 3 + /****************************************************************************** * fucpr: FUCOP_CTL (FPU and Coprocessor Enable Control Register) *****************************************************************************/ diff --git a/arch/nds32/include/asm/elf.h b/arch/nds32/include/asm/elf.h index f5f9cf7e0544..95f3ea253e4c 100644 --- a/arch/nds32/include/asm/elf.h +++ b/arch/nds32/include/asm/elf.h @@ -9,6 +9,7 @@ */ #include +#include typedef unsigned long elf_greg_t; typedef unsigned long elf_freg_t[3]; @@ -159,8 +160,18 @@ struct elf32_hdr; #endif + +#if IS_ENABLED(CONFIG_FPU) +#define FPU_AUX_ENT NEW_AUX_ENT(AT_FPUCW, FPCSR_INIT) +#else +#define FPU_AUX_ENT NEW_AUX_ENT(AT_IGNORE, 0) +#endif + #define ARCH_DLINFO \ do { \ + /* Optional FPU initialization */ \ + FPU_AUX_ENT; \ + \ NEW_AUX_ENT(AT_SYSINFO_EHDR, \ (elf_addr_t)current->mm->context.vdso); \ } while (0) diff --git a/arch/nds32/include/asm/fpu.h b/arch/nds32/include/asm/fpu.h new file mode 100644 index 000000000000..019f1bcfc5ee --- /dev/null +++ b/arch/nds32/include/asm/fpu.h @@ -0,0 +1,126 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2005-2018 Andes Technology Corporation */ + +#ifndef __ASM_NDS32_FPU_H +#define __ASM_NDS32_FPU_H + +#if IS_ENABLED(CONFIG_FPU) +#ifndef __ASSEMBLY__ +#include +#include +#include + +extern bool has_fpu; + +extern void save_fpu(struct task_struct *__tsk); +extern void load_fpu(const struct fpu_struct *fpregs); +extern bool do_fpu_exception(unsigned int subtype, struct pt_regs *regs); +extern int do_fpuemu(struct pt_regs *regs, struct fpu_struct *fpu); + +#define test_tsk_fpu(regs) (regs->fucop_ctl & FUCOP_CTL_mskCP0EN) + +/* + * Initially load the FPU with signalling NANS. This bit pattern + * has the property that no matter whether considered as single or as + * double precision, it still represents a signalling NAN. + */ + +#define sNAN64 0xFFFFFFFFFFFFFFFFULL +#define sNAN32 0xFFFFFFFFUL + +#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) +/* + * Denormalized numbers are unsupported by the nds32 FPU, so an operation + * is treated as an underflow case when its final result is a denormalized + * number. To enhance precision, the underflow exception trap should be + * enabled by default, and the kernel will re-execute the instruction via + * the FPU emulator when an underflow exception is raised.
+ */ +#define FPCSR_INIT FPCSR_mskUDFE +#else +#define FPCSR_INIT 0x0UL +#endif + +extern const struct fpu_struct init_fpuregs; + +static inline void disable_ptreg_fpu(struct pt_regs *regs) +{ + regs->fucop_ctl &= ~FUCOP_CTL_mskCP0EN; +} + +static inline void enable_ptreg_fpu(struct pt_regs *regs) +{ + regs->fucop_ctl |= FUCOP_CTL_mskCP0EN; +} + +static inline void enable_fpu(void) +{ + unsigned long fucop_ctl; + + fucop_ctl = __nds32__mfsr(NDS32_SR_FUCOP_CTL) | FUCOP_CTL_mskCP0EN; + __nds32__mtsr(fucop_ctl, NDS32_SR_FUCOP_CTL); + __nds32__isb(); +} + +static inline void disable_fpu(void) +{ + unsigned long fucop_ctl; + + fucop_ctl = __nds32__mfsr(NDS32_SR_FUCOP_CTL) & ~FUCOP_CTL_mskCP0EN; + __nds32__mtsr(fucop_ctl, NDS32_SR_FUCOP_CTL); + __nds32__isb(); +} + +static inline void lose_fpu(void) +{ + preempt_disable(); +#if IS_ENABLED(CONFIG_LAZY_FPU) + if (last_task_used_math == current) { + last_task_used_math = NULL; +#else + if (test_tsk_fpu(task_pt_regs(current))) { +#endif + save_fpu(current); + } + disable_ptreg_fpu(task_pt_regs(current)); + preempt_enable(); +} + +static inline void own_fpu(void) +{ + preempt_disable(); +#if IS_ENABLED(CONFIG_LAZY_FPU) + if (last_task_used_math != current) { + if (last_task_used_math != NULL) + save_fpu(last_task_used_math); + load_fpu(&current->thread.fpu); + last_task_used_math = current; + } +#else + if (!test_tsk_fpu(task_pt_regs(current))) { + load_fpu(&current->thread.fpu); + } +#endif + enable_ptreg_fpu(task_pt_regs(current)); + preempt_enable(); +} + +#if !IS_ENABLED(CONFIG_LAZY_FPU) +static inline void unlazy_fpu(struct task_struct *tsk) +{ + preempt_disable(); + if (test_tsk_fpu(task_pt_regs(tsk))) + save_fpu(tsk); + preempt_enable(); +} +#endif /* !CONFIG_LAZY_FPU */ +static inline void clear_fpu(struct pt_regs *regs) +{ + preempt_disable(); + if (test_tsk_fpu(regs)) + disable_ptreg_fpu(regs); + preempt_enable(); +} +#endif /* CONFIG_FPU */ +#endif /* __ASSEMBLY__ */ +#endif /* __ASM_NDS32_FPU_H */ diff --git a/arch/nds32/include/asm/fpuemu.h b/arch/nds32/include/asm/fpuemu.h new file mode 100644 index 000000000000..c4bd0c7faa75 --- /dev/null +++ b/arch/nds32/include/asm/fpuemu.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2005-2018 Andes Technology Corporation */ + +#ifndef __ARCH_NDS32_FPUEMU_H +#define __ARCH_NDS32_FPUEMU_H + +/* + * single precision + */ + +void fadds(void *ft, void *fa, void *fb); +void fsubs(void *ft, void *fa, void *fb); +void fmuls(void *ft, void *fa, void *fb); +void fdivs(void *ft, void *fa, void *fb); +void fs2d(void *ft, void *fa); +void fsqrts(void *ft, void *fa); +void fnegs(void *ft, void *fa); +int fcmps(void *ft, void *fa, void *fb, int cop); + +/* + * double precision + */ +void faddd(void *ft, void *fa, void *fb); +void fsubd(void *ft, void *fa, void *fb); +void fmuld(void *ft, void *fa, void *fb); +void fdivd(void *ft, void *fa, void *fb); +void fsqrtd(void *ft, void *fa); +void fd2s(void *ft, void *fa); +void fnegd(void *ft, void *fa); +int fcmpd(void *ft, void *fa, void *fb, int cop); + +#endif /* __ARCH_NDS32_FPUEMU_H */ diff --git a/arch/nds32/include/asm/nds32_fpu_inst.h b/arch/nds32/include/asm/nds32_fpu_inst.h new file mode 100644 index 000000000000..1e4b86a90a48 --- /dev/null +++ b/arch/nds32/include/asm/nds32_fpu_inst.h @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2005-2018 Andes Technology Corporation */ + +#ifndef __NDS32_FPU_INST_H +#define __NDS32_FPU_INST_H + +#define cop0_op 0x35 + +/* + * COP0 field of opcodes.
+ */ +#define fs1_op 0x0 +#define fs2_op 0x4 +#define fd1_op 0x8 +#define fd2_op 0xc + +/* + * FS1 opcode. + */ +enum fs1 { + fadds_op, fsubs_op, fcpynss_op, fcpyss_op, + fmadds_op, fmsubs_op, fcmovns_op, fcmovzs_op, + fnmadds_op, fnmsubs_op, + fmuls_op = 0xc, fdivs_op, + fs1_f2op_op = 0xf +}; + +/* + * FS1/F2OP opcode. + */ +enum fs1_f2 { + fs2d_op, fsqrts_op, + fui2s_op = 0x8, fsi2s_op = 0xc, + fs2ui_op = 0x10, fs2ui_z_op = 0x14, + fs2si_op = 0x18, fs2si_z_op = 0x1c +}; + +/* + * FS2 opcode. + */ +enum fs2 { + fcmpeqs_op, fcmpeqs_e_op, fcmplts_op, fcmplts_e_op, + fcmples_op, fcmples_e_op, fcmpuns_op, fcmpuns_e_op +}; + +/* + * FD1 opcode. + */ +enum fd1 { + faddd_op, fsubd_op, fcpynsd_op, fcpysd_op, + fmaddd_op, fmsubd_op, fcmovnd_op, fcmovzd_op, + fnmaddd_op, fnmsubd_op, + fmuld_op = 0xc, fdivd_op, fd1_f2op_op = 0xf +}; + +/* + * FD1/F2OP opcode. + */ +enum fd1_f2 { + fd2s_op, fsqrtd_op, + fui2d_op = 0x8, fsi2d_op = 0xc, + fd2ui_op = 0x10, fd2ui_z_op = 0x14, + fd2si_op = 0x18, fd2si_z_op = 0x1c +}; + +/* + * FD2 opcode. + */ +enum fd2 { + fcmpeqd_op, fcmpeqd_e_op, fcmpltd_op, fcmpltd_e_op, + fcmpled_op, fcmpled_e_op, fcmpund_op, fcmpund_e_op +}; + +#define NDS32Insn(x) x + +#define I_OPCODE_off 25 +#define NDS32Insn_OPCODE(x) (NDS32Insn(x) >> I_OPCODE_off) + +#define I_OPCODE_offRt 20 +#define I_OPCODE_mskRt (0x1fUL << I_OPCODE_offRt) +#define NDS32Insn_OPCODE_Rt(x) \ + ((NDS32Insn(x) & I_OPCODE_mskRt) >> I_OPCODE_offRt) + +#define I_OPCODE_offRa 15 +#define I_OPCODE_mskRa (0x1fUL << I_OPCODE_offRa) +#define NDS32Insn_OPCODE_Ra(x) \ + ((NDS32Insn(x) & I_OPCODE_mskRa) >> I_OPCODE_offRa) + +#define I_OPCODE_offRb 10 +#define I_OPCODE_mskRb (0x1fUL << I_OPCODE_offRb) +#define NDS32Insn_OPCODE_Rb(x) \ + ((NDS32Insn(x) & I_OPCODE_mskRb) >> I_OPCODE_offRb) + +#define I_OPCODE_offbit1014 10 +#define I_OPCODE_mskbit1014 (0x1fUL << I_OPCODE_offbit1014) +#define NDS32Insn_OPCODE_BIT1014(x) \ + ((NDS32Insn(x) & I_OPCODE_mskbit1014) >> I_OPCODE_offbit1014) + +#define I_OPCODE_offbit69 6 +#define I_OPCODE_mskbit69 (0xfUL << I_OPCODE_offbit69) +#define NDS32Insn_OPCODE_BIT69(x) \ + ((NDS32Insn(x) & I_OPCODE_mskbit69) >> I_OPCODE_offbit69) + +#define I_OPCODE_offCOP0 0 +#define I_OPCODE_mskCOP0 (0x3fUL << I_OPCODE_offCOP0) +#define NDS32Insn_OPCODE_COP0(x) \ + ((NDS32Insn(x) & I_OPCODE_mskCOP0) >> I_OPCODE_offCOP0) + +#endif /* __NDS32_FPU_INST_H */ diff --git a/arch/nds32/include/asm/perf_event.h b/arch/nds32/include/asm/perf_event.h new file mode 100644 index 000000000000..fcdff02acc14 --- /dev/null +++ b/arch/nds32/include/asm/perf_event.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2008-2018 Andes Technology Corporation */ + +#ifndef __ASM_PERF_EVENT_H +#define __ASM_PERF_EVENT_H + +/* + * This file is requested by perf; + * please refer to tools/perf/design.txt for more details + */ +struct pt_regs; +unsigned long perf_instruction_pointer(struct pt_regs *regs); +unsigned long perf_misc_flags(struct pt_regs *regs); +#define perf_misc_flags(regs) perf_misc_flags(regs) + +#endif diff --git a/arch/nds32/include/asm/pmu.h b/arch/nds32/include/asm/pmu.h new file mode 100644 index 000000000000..e1ac0b0b8bcf --- /dev/null +++ b/arch/nds32/include/asm/pmu.h @@ -0,0 +1,386 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2008-2018 Andes Technology Corporation */ + +#ifndef __ASM_PMU_H +#define __ASM_PMU_H + +#include +#include +#include +#include + +/* Has special meaning for perf core implementation */ +#define HW_OP_UNSUPPORTED 0x0 +#define C(_x)
PERF_COUNT_HW_CACHE_##_x +#define CACHE_OP_UNSUPPORTED 0x0 + +/* Enough for both software and hardware defined events */ +#define SOFTWARE_EVENT_MASK 0xFF + +#define PFM_OFFSET_MAGIC_0 2 /* DO NOT START FROM 0 */ +#define PFM_OFFSET_MAGIC_1 (PFM_OFFSET_MAGIC_0 + 36) +#define PFM_OFFSET_MAGIC_2 (PFM_OFFSET_MAGIC_1 + 36) + +enum { PFMC0, PFMC1, PFMC2, MAX_COUNTERS }; + +u32 PFM_CTL_OVF[3] = { PFM_CTL_mskOVF0, PFM_CTL_mskOVF1, + PFM_CTL_mskOVF2 }; +u32 PFM_CTL_EN[3] = { PFM_CTL_mskEN0, PFM_CTL_mskEN1, + PFM_CTL_mskEN2 }; +u32 PFM_CTL_OFFSEL[3] = { PFM_CTL_offSEL0, PFM_CTL_offSEL1, + PFM_CTL_offSEL2 }; +u32 PFM_CTL_IE[3] = { PFM_CTL_mskIE0, PFM_CTL_mskIE1, PFM_CTL_mskIE2 }; +u32 PFM_CTL_KS[3] = { PFM_CTL_mskKS0, PFM_CTL_mskKS1, PFM_CTL_mskKS2 }; +u32 PFM_CTL_KU[3] = { PFM_CTL_mskKU0, PFM_CTL_mskKU1, PFM_CTL_mskKU2 }; +u32 PFM_CTL_SEL[3] = { PFM_CTL_mskSEL0, PFM_CTL_mskSEL1, PFM_CTL_mskSEL2 }; +/* + * Perf Events' indices + */ +#define NDS32_IDX_CYCLE_COUNTER 0 +#define NDS32_IDX_COUNTER0 1 +#define NDS32_IDX_COUNTER1 2 + +/* The events for a given PMU register set. */ +struct pmu_hw_events { + /* + * The events that are active on the PMU for the given index. + */ + struct perf_event *events[MAX_COUNTERS]; + + /* + * A 1 bit for an index indicates that the counter is being used for + * an event. A 0 means that the counter can be used. + */ + unsigned long used_mask[BITS_TO_LONGS(MAX_COUNTERS)]; + + /* + * Hardware lock to serialize accesses to PMU registers. Needed for the + * read/modify/write sequences. + */ + raw_spinlock_t pmu_lock; +}; + +struct nds32_pmu { + struct pmu pmu; + cpumask_t active_irqs; + char *name; + irqreturn_t (*handle_irq)(int irq_num, void *dev); + void (*enable)(struct perf_event *event); + void (*disable)(struct perf_event *event); + int (*get_event_idx)(struct pmu_hw_events *hw_events, + struct perf_event *event); + int (*set_event_filter)(struct hw_perf_event *evt, + struct perf_event_attr *attr); + u32 (*read_counter)(struct perf_event *event); + void (*write_counter)(struct perf_event *event, u32 val); + void (*start)(struct nds32_pmu *nds32_pmu); + void (*stop)(struct nds32_pmu *nds32_pmu); + void (*reset)(void *data); + int (*request_irq)(struct nds32_pmu *nds32_pmu, irq_handler_t handler); + void (*free_irq)(struct nds32_pmu *nds32_pmu); + int (*map_event)(struct perf_event *event); + int num_events; + atomic_t active_events; + u64 max_period; + struct platform_device *plat_device; + struct pmu_hw_events *(*get_hw_events)(void); +}; + +#define to_nds32_pmu(p) (container_of(p, struct nds32_pmu, pmu)) + +int nds32_pmu_register(struct nds32_pmu *nds32_pmu, int type); + +u64 nds32_pmu_event_update(struct perf_event *event); + +int nds32_pmu_event_set_period(struct perf_event *event); + +/* + * Common NDS32 SPAv3 event types + * + * Note: An implementation may not be able to count all of these events + * but the encodings are considered to be `reserved' in the case that + * they are not available. + * + * SEL_TOTAL_CYCLES gets an added offset because ZERO is defined as the + * NOT_SUPPORTED event mapping in generic perf code. + * You will need to handle this in the event-write implementation.
+ */ +enum spav3_counter_0_perf_types { + SPAV3_0_SEL_BASE = -1 + PFM_OFFSET_MAGIC_0, /* counting symbol */ + SPAV3_0_SEL_TOTAL_CYCLES = 0 + PFM_OFFSET_MAGIC_0, + SPAV3_0_SEL_COMPLETED_INSTRUCTION = 1 + PFM_OFFSET_MAGIC_0, + SPAV3_0_SEL_LAST /* counting symbol */ +}; + +enum spav3_counter_1_perf_types { + SPAV3_1_SEL_BASE = -1 + PFM_OFFSET_MAGIC_1, /* counting symbol */ + SPAV3_1_SEL_TOTAL_CYCLES = 0 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_COMPLETED_INSTRUCTION = 1 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_CONDITIONAL_BRANCH = 2 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_TAKEN_CONDITIONAL_BRANCH = 3 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_PREFETCH_INSTRUCTION = 4 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_RET_INST = 5 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_JR_INST = 6 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_JAL_JRAL_INST = 7 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_NOP_INST = 8 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_SCW_INST = 9 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_ISB_DSB_INST = 10 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_CCTL_INST = 11 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_TAKEN_INTERRUPTS = 12 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_LOADS_COMPLETED = 13 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_UITLB_ACCESS = 14 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_UDTLB_ACCESS = 15 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_MTLB_ACCESS = 16 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_CODE_CACHE_ACCESS = 17 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_DATA_DEPENDENCY_STALL_CYCLES = 18 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_DATA_CACHE_MISS_STALL_CYCLES = 19 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_DATA_CACHE_ACCESS = 20 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_DATA_CACHE_MISS = 21 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_LOAD_DATA_CACHE_ACCESS = 22 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_STORE_DATA_CACHE_ACCESS = 23 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_ILM_ACCESS = 24 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_LSU_BIU_CYCLES = 25 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_HPTWK_BIU_CYCLES = 26 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_DMA_BIU_CYCLES = 27 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_CODE_CACHE_FILL_BIU_CYCLES = 28 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_LEGAL_UNALIGN_DCACHE_ACCESS = 29 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_PUSH25 = 30 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_SYSCALLS_INST = 31 + PFM_OFFSET_MAGIC_1, + SPAV3_1_SEL_LAST /* counting symbol */ +}; + +enum spav3_counter_2_perf_types { + SPAV3_2_SEL_BASE = -1 + PFM_OFFSET_MAGIC_2, /* counting symbol */ + SPAV3_2_SEL_TOTAL_CYCLES = 0 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_COMPLETED_INSTRUCTION = 1 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_CONDITIONAL_BRANCH_MISPREDICT = 2 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_TAKEN_CONDITIONAL_BRANCH_MISPREDICT = + 3 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_PREFETCH_INSTRUCTION_CACHE_HIT = 4 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_RET_MISPREDICT = 5 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_IMMEDIATE_J_INST = 6 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_MULTIPLY_INST = 7 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_16_BIT_INST = 8 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_FAILED_SCW_INST = 9 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_LD_AFTER_ST_CONFLICT_REPLAYS = 10 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_TAKEN_EXCEPTIONS = 12 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_STORES_COMPLETED = 13 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_UITLB_MISS = 14 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_UDTLB_MISS = 15 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_MTLB_MISS = 16 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_CODE_CACHE_MISS = 17 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_EMPTY_INST_QUEUE_STALL_CYCLES = 18 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_DATA_WRITE_BACK = 19 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_DATA_CACHE_MISS = 21 + PFM_OFFSET_MAGIC_2, + 
SPAV3_2_SEL_LOAD_DATA_CACHE_MISS = 22 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_STORE_DATA_CACHE_MISS = 23 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_DLM_ACCESS = 24 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_LSU_BIU_REQUEST = 25 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_HPTWK_BIU_REQUEST = 26 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_DMA_BIU_REQUEST = 27 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_CODE_CACHE_FILL_BIU_REQUEST = 28 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_EXTERNAL_EVENTS = 29 + PFM_OFFSET_MAGIC_2, + SPAV3_1_SEL_POP25 = 30 + PFM_OFFSET_MAGIC_2, + SPAV3_2_SEL_LAST /* counting symbol */ +}; + +/* Get converted event counter index */ +static inline int get_converted_event_idx(unsigned long event) +{ + int idx; + + if ((event) > SPAV3_0_SEL_BASE && event < SPAV3_0_SEL_LAST) { + idx = 0; + } else if ((event) > SPAV3_1_SEL_BASE && event < SPAV3_1_SEL_LAST) { + idx = 1; + } else if ((event) > SPAV3_2_SEL_BASE && event < SPAV3_2_SEL_LAST) { + idx = 2; + } else { + pr_err("GET_CONVERTED_EVENT_IDX PFM counter range error\n"); + return -EPERM; + } + + return idx; +} + +/* Get converted hardware event number */ +static inline u32 get_converted_evet_hw_num(u32 event) +{ + if (event > SPAV3_0_SEL_BASE && event < SPAV3_0_SEL_LAST) + event -= PFM_OFFSET_MAGIC_0; + else if (event > SPAV3_1_SEL_BASE && event < SPAV3_1_SEL_LAST) + event -= PFM_OFFSET_MAGIC_1; + else if (event > SPAV3_2_SEL_BASE && event < SPAV3_2_SEL_LAST) + event -= PFM_OFFSET_MAGIC_2; + else if (event != 0) + pr_err("GET_CONVERTED_EVENT_HW_NUM PFM counter range error\n"); + + return event; +} + +/* + * NDS32 HW events mapping + * + * The hardware events that we support. We do support cache operations but + * we have harvard caches and no way to combine instruction and data + * accesses/misses in hardware. + */ +static const unsigned int nds32_pfm_perf_map[PERF_COUNT_HW_MAX] = { + [PERF_COUNT_HW_CPU_CYCLES] = SPAV3_0_SEL_TOTAL_CYCLES, + [PERF_COUNT_HW_INSTRUCTIONS] = SPAV3_1_SEL_COMPLETED_INSTRUCTION, + [PERF_COUNT_HW_CACHE_REFERENCES] = SPAV3_1_SEL_DATA_CACHE_ACCESS, + [PERF_COUNT_HW_CACHE_MISSES] = SPAV3_2_SEL_DATA_CACHE_MISS, + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = HW_OP_UNSUPPORTED, + [PERF_COUNT_HW_BRANCH_MISSES] = HW_OP_UNSUPPORTED, + [PERF_COUNT_HW_BUS_CYCLES] = HW_OP_UNSUPPORTED, + [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = HW_OP_UNSUPPORTED, + [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = HW_OP_UNSUPPORTED, + [PERF_COUNT_HW_REF_CPU_CYCLES] = HW_OP_UNSUPPORTED +}; + +static const unsigned int nds32_pfm_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX] = { + [C(L1D)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = + SPAV3_1_SEL_LOAD_DATA_CACHE_ACCESS, + [C(RESULT_MISS)] = + SPAV3_2_SEL_LOAD_DATA_CACHE_MISS, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = + SPAV3_1_SEL_STORE_DATA_CACHE_ACCESS, + [C(RESULT_MISS)] = + SPAV3_2_SEL_STORE_DATA_CACHE_MISS, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + }, + [C(L1I)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = + SPAV3_1_SEL_CODE_CACHE_ACCESS, + [C(RESULT_MISS)] = + SPAV3_2_SEL_CODE_CACHE_MISS, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = + SPAV3_1_SEL_CODE_CACHE_ACCESS, + [C(RESULT_MISS)] = + SPAV3_2_SEL_CODE_CACHE_MISS, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED, + }, + }, + /* TODO: L2CC */ + [C(LL)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED, + }, + 
[C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED, + }, + }, + /* The NDS32 PMU does not support separate TLB read/write hit/miss counts. + * However, it can count accesses/misses, which mix reads and writes. + * Therefore, only the READ counters use these events. + * We do the best we can. + */ + [C(DTLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = + SPAV3_1_SEL_UDTLB_ACCESS, + [C(RESULT_MISS)] = + SPAV3_2_SEL_UDTLB_MISS, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + }, + [C(ITLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = + SPAV3_1_SEL_UITLB_ACCESS, + [C(RESULT_MISS)] = + SPAV3_2_SEL_UITLB_MISS, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + }, + [C(BPU)] = { /* BPU: branch prediction unit (no events supported) */ + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + }, + [C(NODE)] = { /* NODE: NUMA node accesses (no events supported) */ + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = + CACHE_OP_UNSUPPORTED, + [C(RESULT_MISS)] = + CACHE_OP_UNSUPPORTED, + }, + }, +}; + +int nds32_pmu_map_event(struct perf_event *event, + const unsigned int (*event_map)[PERF_COUNT_HW_MAX], + const unsigned int (*cache_map)[PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX], u32 raw_event_mask); + +#endif /* __ASM_PMU_H */ diff --git a/arch/nds32/include/asm/processor.h b/arch/nds32/include/asm/processor.h index c2660f566bac..72024f8bc129 100644 --- a/arch/nds32/include/asm/processor.h +++ b/arch/nds32/include/asm/processor.h @@ -35,6 +35,8 @@ struct thread_struct { unsigned long address; unsigned long trap_no; unsigned long error_code; + + struct fpu_struct fpu; }; #define INIT_THREAD { } @@ -72,6 +74,11 @@ struct task_struct; /* Free all resources held by a thread.
*/ #define release_thread(thread) do { } while(0) +#if IS_ENABLED(CONFIG_FPU) +#if IS_ENABLED(CONFIG_LAZY_FPU) +extern struct task_struct *last_task_used_math; +#endif +#endif /* Prepare to copy thread state - unlazy all lazy status */ #define prepare_to_copy(tsk) do { } while (0) diff --git a/arch/nds32/include/asm/sfp-machine.h b/arch/nds32/include/asm/sfp-machine.h new file mode 100644 index 000000000000..b1a5caa332b5 --- /dev/null +++ b/arch/nds32/include/asm/sfp-machine.h @@ -0,0 +1,158 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2005-2018 Andes Technology Corporation */ + +#include + +#define _FP_W_TYPE_SIZE 32 +#define _FP_W_TYPE unsigned long +#define _FP_WS_TYPE signed long +#define _FP_I_TYPE long + +#define __ll_B ((UWtype) 1 << (W_TYPE_SIZE / 2)) +#define __ll_lowpart(t) ((UWtype) (t) & (__ll_B - 1)) +#define __ll_highpart(t) ((UWtype) (t) >> (W_TYPE_SIZE / 2)) + +#define _FP_MUL_MEAT_S(R, X, Y) \ + _FP_MUL_MEAT_1_wide(_FP_WFRACBITS_S, R, X, Y, umul_ppmm) +#define _FP_MUL_MEAT_D(R, X, Y) \ + _FP_MUL_MEAT_2_wide(_FP_WFRACBITS_D, R, X, Y, umul_ppmm) +#define _FP_MUL_MEAT_Q(R, X, Y) \ + _FP_MUL_MEAT_4_wide(_FP_WFRACBITS_Q, R, X, Y, umul_ppmm) + +#define _FP_MUL_MEAT_DW_S(R, X, Y) \ + _FP_MUL_MEAT_DW_1_wide(_FP_WFRACBITS_S, R, X, Y, umul_ppmm) +#define _FP_MUL_MEAT_DW_D(R, X, Y) \ + _FP_MUL_MEAT_DW_2_wide(_FP_WFRACBITS_D, R, X, Y, umul_ppmm) + +#define _FP_DIV_MEAT_S(R, X, Y) _FP_DIV_MEAT_1_udiv_norm(S, R, X, Y) +#define _FP_DIV_MEAT_D(R, X, Y) _FP_DIV_MEAT_2_udiv(D, R, X, Y) + +#define _FP_NANFRAC_S ((_FP_QNANBIT_S << 1) - 1) +#define _FP_NANFRAC_D ((_FP_QNANBIT_D << 1) - 1), -1 +#define _FP_NANFRAC_Q ((_FP_QNANBIT_Q << 1) - 1), -1, -1, -1 +#define _FP_NANSIGN_S 0 +#define _FP_NANSIGN_D 0 +#define _FP_NANSIGN_Q 0 + +#define _FP_KEEPNANFRACP 1 +#define _FP_QNANNEGATEDP 0 + +#define _FP_CHOOSENAN(fs, wc, R, X, Y, OP) \ +do { \ + if ((_FP_FRAC_HIGH_RAW_##fs(X) & _FP_QNANBIT_##fs) \ + && !(_FP_FRAC_HIGH_RAW_##fs(Y) & _FP_QNANBIT_##fs)) { \ + R##_s = Y##_s; \ + _FP_FRAC_COPY_##wc(R, Y); \ + } else { \ + R##_s = X##_s; \ + _FP_FRAC_COPY_##wc(R, X); \ + } \ + R##_c = FP_CLS_NAN; \ +} while (0) + +#define __FPU_FPCSR (current->thread.fpu.fpcsr) + +/* Obtain the current rounding mode.
+ */ +#define FP_ROUNDMODE \ +({ \ + __FPU_FPCSR & FPCSR_mskRM; \ +}) + +#define FP_RND_NEAREST 0 +#define FP_RND_PINF 1 +#define FP_RND_MINF 2 +#define FP_RND_ZERO 3 + +#define FP_EX_INVALID FPCSR_mskIVO +#define FP_EX_DIVZERO FPCSR_mskDBZ +#define FP_EX_OVERFLOW FPCSR_mskOVF +#define FP_EX_UNDERFLOW FPCSR_mskUDF +#define FP_EX_INEXACT FPCSR_mskIEX + +#define SF_CEQ 2 +#define SF_CLT 1 +#define SF_CGT 3 +#define SF_CUN 4 + +#include + +#ifdef __BIG_ENDIAN__ +#define __BYTE_ORDER __BIG_ENDIAN +#define __LITTLE_ENDIAN 0 +#else +#define __BYTE_ORDER __LITTLE_ENDIAN +#define __BIG_ENDIAN 0 +#endif + +#define abort() do { } while (0) +#define umul_ppmm(w1, w0, u, v) \ +do { \ + UWtype __x0, __x1, __x2, __x3; \ + UHWtype __ul, __vl, __uh, __vh; \ + \ + __ul = __ll_lowpart(u); \ + __uh = __ll_highpart(u); \ + __vl = __ll_lowpart(v); \ + __vh = __ll_highpart(v); \ + \ + __x0 = (UWtype) __ul * __vl; \ + __x1 = (UWtype) __ul * __vh; \ + __x2 = (UWtype) __uh * __vl; \ + __x3 = (UWtype) __uh * __vh; \ + \ + __x1 += __ll_highpart(__x0); \ + __x1 += __x2; \ + if (__x1 < __x2) \ + __x3 += __ll_B; \ + \ + (w1) = __x3 + __ll_highpart(__x1); \ + (w0) = __ll_lowpart(__x1) * __ll_B + __ll_lowpart(__x0); \ +} while (0) + +#define add_ssaaaa(sh, sl, ah, al, bh, bl) \ +do { \ + UWtype __x; \ + __x = (al) + (bl); \ + (sh) = (ah) + (bh) + (__x < (al)); \ + (sl) = __x; \ +} while (0) + +#define sub_ddmmss(sh, sl, ah, al, bh, bl) \ +do { \ + UWtype __x; \ + __x = (al) - (bl); \ + (sh) = (ah) - (bh) - (__x > (al)); \ + (sl) = __x; \ +} while (0) + +#define udiv_qrnnd(q, r, n1, n0, d) \ +do { \ + UWtype __d1, __d0, __q1, __q0, __r1, __r0, __m; \ + __d1 = __ll_highpart(d); \ + __d0 = __ll_lowpart(d); \ + \ + __r1 = (n1) % __d1; \ + __q1 = (n1) / __d1; \ + __m = (UWtype) __q1 * __d0; \ + __r1 = __r1 * __ll_B | __ll_highpart(n0); \ + if (__r1 < __m) { \ + __q1--, __r1 += (d); \ + if (__r1 >= (d)) \ + if (__r1 < __m) \ + __q1--, __r1 += (d); \ + } \ + __r1 -= __m; \ + __r0 = __r1 % __d1; \ + __q0 = __r1 / __d1; \ + __m = (UWtype) __q0 * __d0; \ + __r0 = __r0 * __ll_B | __ll_lowpart(n0); \ + if (__r0 < __m) { \ + __q0--, __r0 += (d); \ + if (__r0 >= (d)) \ + if (__r0 < __m) \ + __q0--, __r0 += (d); \ + } \ + __r0 -= __m; \ + (q) = (UWtype) __q1 * __ll_B | __q0; \ + (r) = __r0; \ +} while (0) diff --git a/arch/nds32/include/asm/stacktrace.h b/arch/nds32/include/asm/stacktrace.h new file mode 100644 index 000000000000..6bf7c777bda4 --- /dev/null +++ b/arch/nds32/include/asm/stacktrace.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2008-2018 Andes Technology Corporation */ + +#ifndef __ASM_STACKTRACE_H +#define __ASM_STACKTRACE_H + +/* Kernel callchain */ +struct stackframe { + unsigned long fp; + unsigned long sp; + unsigned long lp; +}; + +/* + * struct frame_tail: User callchain + * IMPORTANT: + * This struct is used for call-stack walking, + * so the order and types matter.
+ * Do not use an array; each field stores sizeof(pointer) bytes. + * + * For details, refer to arch/arm/kernel/perf_event.c + */ +struct frame_tail { + unsigned long stack_fp; + unsigned long stack_lp; +}; + +/* For User callchain with optimize for size */ +struct frame_tail_opt_size { + unsigned long stack_r6; + unsigned long stack_fp; + unsigned long stack_gp; + unsigned long stack_lp; +}; + +extern void +get_real_ret_addr(unsigned long *addr, struct task_struct *tsk, int *graph); + +#endif /* __ASM_STACKTRACE_H */ diff --git a/arch/nds32/include/asm/suspend.h b/arch/nds32/include/asm/suspend.h new file mode 100644 index 000000000000..6ed2418af1ac --- /dev/null +++ b/arch/nds32/include/asm/suspend.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +// Copyright (C) 2008-2017 Andes Technology Corporation + +#ifndef __ASM_NDS32_SUSPEND_H +#define __ASM_NDS32_SUSPEND_H + +extern void suspend2ram(void); +extern void cpu_resume(void); +extern unsigned long wake_mask; + +#endif diff --git a/arch/nds32/include/asm/syscalls.h b/arch/nds32/include/asm/syscalls.h index 78778ecff60c..da32101b455d 100644 --- a/arch/nds32/include/asm/syscalls.h +++ b/arch/nds32/include/asm/syscalls.h @@ -7,6 +7,7 @@ asmlinkage long sys_cacheflush(unsigned long addr, unsigned long len, unsigned int op); asmlinkage long sys_fadvise64_64_wrapper(int fd, int advice, loff_t offset, loff_t len); asmlinkage long sys_rt_sigreturn_wrapper(void); +asmlinkage long sys_udftrap(int option); #include diff --git a/arch/nds32/include/uapi/asm/auxvec.h b/arch/nds32/include/uapi/asm/auxvec.h index 56043ce4972f..2d3213f5e595 100644 --- a/arch/nds32/include/uapi/asm/auxvec.h +++ b/arch/nds32/include/uapi/asm/auxvec.h @@ -4,6 +4,13 @@ #ifndef __ASM_AUXVEC_H #define __ASM_AUXVEC_H +/* + * This entry gives some information about the FPU initialization + * performed by the kernel. + */ +#define AT_FPUCW 18 /* Used FPU control word. */ + + /* VDSO location */ #define AT_SYSINFO_EHDR 33 diff --git a/arch/nds32/include/uapi/asm/sigcontext.h b/arch/nds32/include/uapi/asm/sigcontext.h index 00567b237b0c..58afc416473e 100644 --- a/arch/nds32/include/uapi/asm/sigcontext.h +++ b/arch/nds32/include/uapi/asm/sigcontext.h @@ -9,6 +9,19 @@ * before the signal handler was invoked. Note: only add new entries * to the end of the structure. */ +struct fpu_struct { + unsigned long long fd_regs[32]; + unsigned long fpcsr; + /* + * UDF_trap is used to recognize whether underflow trap is enabled + * or not. When UDF_trap == 1, this process will be trapped and then + * get a SIGFPE signal when encountering an underflow exception. + * UDF_trap is only modified through the udftrap syscall. Therefore, + * UDF_trap need not be saved or restored on each context + * switch.
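+ * For example, a user program that wants SIGFPE on underflow can issue the arch-specific syscall added later in this patch: syscall(__NR_udftrap, ENABLE_UDFTRAP); the current setting can be read back with GET_UDFTRAP.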
+ */ + unsigned long UDF_trap; +}; struct zol_struct { unsigned long nds32_lc; /* $LC */ @@ -54,6 +67,7 @@ struct sigcontext { unsigned long fault_address; unsigned long used_math_flag; /* FPU Registers */ + struct fpu_struct fpu; struct zol_struct zol; }; diff --git a/arch/nds32/include/uapi/asm/udftrap.h b/arch/nds32/include/uapi/asm/udftrap.h new file mode 100644 index 000000000000..433f79d679c0 --- /dev/null +++ b/arch/nds32/include/uapi/asm/udftrap.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2005-2018 Andes Technology Corporation */ +#ifndef _ASM_SETFPUTRAP +#define _ASM_SETFPUTRAP + +/* + * Options for the udftrap system call + */ +#define DISABLE_UDFTRAP 0 /* disable underflow exception trap */ +#define ENABLE_UDFTRAP 1 /* enable underflow exception trap */ +#define GET_UDFTRAP 2 /* only get underflow exception trap status */ + +#endif /* _ASM_SETFPUTRAP */ diff --git a/arch/nds32/include/uapi/asm/unistd.h b/arch/nds32/include/uapi/asm/unistd.h index 603e826e0449..c2c3a3e34083 100644 --- a/arch/nds32/include/uapi/asm/unistd.h +++ b/arch/nds32/include/uapi/asm/unistd.h @@ -9,4 +9,6 @@ /* Additional NDS32 specific syscalls. */ #define __NR_cacheflush (__NR_arch_specific_syscall) +#define __NR_udftrap (__NR_arch_specific_syscall + 1) __SYSCALL(__NR_cacheflush, sys_cacheflush) +__SYSCALL(__NR_udftrap, sys_udftrap) diff --git a/arch/nds32/kernel/Makefile b/arch/nds32/kernel/Makefile index 27cded39fa66..a1a1d61509e5 100644 --- a/arch/nds32/kernel/Makefile +++ b/arch/nds32/kernel/Makefile @@ -4,7 +4,6 @@ CPPFLAGS_vmlinux.lds := -DTEXTADDR=$(TEXTADDR) AFLAGS_head.o := -DTEXTADDR=$(TEXTADDR) - # Object file lists. obj-y := ex-entry.o ex-exit.o ex-scall.o irq.o \ @@ -14,11 +13,15 @@ obj-y := ex-entry.o ex-exit.o ex-scall.o irq.o \ obj-$(CONFIG_MODULES) += nds32_ksyms.o module.o obj-$(CONFIG_STACKTRACE) += stacktrace.o +obj-$(CONFIG_FPU) += fpu.o obj-$(CONFIG_OF) += devtree.o obj-$(CONFIG_CACHE_L2) += atl2c.o - +obj-$(CONFIG_PERF_EVENTS) += perf_event_cpu.o +obj-$(CONFIG_PM) += pm.o sleep.o extra-y := head.o vmlinux.lds +CFLAGS_fpu.o += -mext-fpu-sp -mext-fpu-dp + obj-y += vdso/ diff --git a/arch/nds32/kernel/ex-entry.S b/arch/nds32/kernel/ex-entry.S index 21a144071566..107d98a1d1b8 100644 --- a/arch/nds32/kernel/ex-entry.S +++ b/arch/nds32/kernel/ex-entry.S @@ -7,6 +7,7 @@ #include #include #include +#include #ifdef CONFIG_HWZOL .macro push_zol @@ -15,12 +16,31 @@ mfusr $r16, $LC .endm #endif + .macro skip_save_fucop_ctl +#if defined(CONFIG_FPU) +skip_fucop_ctl: + smw.adm $p0, [$sp], $p0, #0x1 + j fucop_ctl_done +#endif + .endm .macro save_user_regs - +#if defined(CONFIG_FPU) + sethi $p0, hi20(has_fpu) + lbsi $p0, [$p0+lo12(has_fpu)] + beqz $p0, skip_fucop_ctl + mfsr $p0, $FUCOP_CTL + smw.adm $p0, [$sp], $p0, #0x1 + bclr $p0, $p0, #FUCOP_CTL_offCP0EN + mtsr $p0, $FUCOP_CTL +fucop_ctl_done: + /* move $SP to the bottom of pt_regs */ + addi $sp, $sp, -FUCOP_CTL_OFFSET +#else smw.adm $sp, [$sp], $sp, #0x1 /* move $SP to the bottom of pt_regs */ addi $sp, $sp, -OSP_OFFSET +#endif /* push $r0 ~ $r25 */ smw.bim $r0, [$sp], $r25 @@ -79,6 +99,7 @@ exception_handlers: .long eh_syscall !Syscall .long asm_do_IRQ !IRQ + skip_save_fucop_ctl common_exception_handler: save_user_regs mfsr $p0, $ITYPE @@ -103,7 +124,6 @@ common_exception_handler: mtsr $r21, $PSW dsb jr $p1 - /* syscall */ 1: addi $p1, $p0, #-NDS32_VECTOR_offEXCEPTION diff --git a/arch/nds32/kernel/ex-exit.S b/arch/nds32/kernel/ex-exit.S index f00af92f7e22..97ba15cd4180 --- a/arch/nds32/kernel/ex-exit.S +++
b/arch/nds32/kernel/ex-exit.S @@ -8,6 +8,7 @@ #include #include #include +#include @@ -22,10 +23,18 @@ .macro restore_user_regs_first setgie.d isb - +#if defined(CONFIG_FPU) + addi $sp, $sp, OSP_OFFSET + lmw.adm $r12, [$sp], $r25, #0x0 + sethi $p0, hi20(has_fpu) + lbsi $p0, [$p0+lo12(has_fpu)] + beqz $p0, 2f + mtsr $r25, $FUCOP_CTL +2: +#else addi $sp, $sp, FUCOP_CTL_OFFSET - lmw.adm $r12, [$sp], $r24, #0x0 +#endif mtsr $r12, $SP_USR mtsr $r13, $IPC #ifdef CONFIG_HWZOL diff --git a/arch/nds32/kernel/ex-scall.S b/arch/nds32/kernel/ex-scall.S index 36aa87ecdabd..270050f1b7b1 100644 --- a/arch/nds32/kernel/ex-scall.S +++ b/arch/nds32/kernel/ex-scall.S @@ -19,11 +19,13 @@ ENTRY(__switch_to) la $p0, __entry_task sw $r1, [$p0] - move $p1, $r0 - addi $p1, $p1, #THREAD_CPU_CONTEXT + addi $p1, $r0, #THREAD_CPU_CONTEXT smw.bi $r6, [$p1], $r14, #0xb ! push r6~r14, fp, lp, sp move $r25, $r1 - addi $r1, $r1, #THREAD_CPU_CONTEXT +#if defined(CONFIG_FPU) + call _switch_fpu +#endif + addi $r1, $r25, #THREAD_CPU_CONTEXT lmw.bi $r6, [$r1], $r14, #0xb ! pop r6~r14, fp, lp, sp ret diff --git a/arch/nds32/kernel/fpu.c b/arch/nds32/kernel/fpu.c new file mode 100644 index 000000000000..fddd40c7a16f --- /dev/null +++ b/arch/nds32/kernel/fpu.c @@ -0,0 +1,269 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation + +#include +#include +#include +#include +#include +#include +#include +#include + +const struct fpu_struct init_fpuregs = { + .fd_regs = {[0 ... 31] = sNAN64}, + .fpcsr = FPCSR_INIT, +#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) + .UDF_trap = 0 +#endif +}; + +void save_fpu(struct task_struct *tsk) +{ + unsigned int fpcfg, fpcsr; + + enable_fpu(); + fpcfg = ((__nds32__fmfcfg() & FPCFG_mskFREG) >> FPCFG_offFREG); + switch (fpcfg) { + case SP32_DP32_reg: + asm volatile ("fsdi $fd31, [%0+0xf8]\n\t" + "fsdi $fd30, [%0+0xf0]\n\t" + "fsdi $fd29, [%0+0xe8]\n\t" + "fsdi $fd28, [%0+0xe0]\n\t" + "fsdi $fd27, [%0+0xd8]\n\t" + "fsdi $fd26, [%0+0xd0]\n\t" + "fsdi $fd25, [%0+0xc8]\n\t" + "fsdi $fd24, [%0+0xc0]\n\t" + "fsdi $fd23, [%0+0xb8]\n\t" + "fsdi $fd22, [%0+0xb0]\n\t" + "fsdi $fd21, [%0+0xa8]\n\t" + "fsdi $fd20, [%0+0xa0]\n\t" + "fsdi $fd19, [%0+0x98]\n\t" + "fsdi $fd18, [%0+0x90]\n\t" + "fsdi $fd17, [%0+0x88]\n\t" + "fsdi $fd16, [%0+0x80]\n\t" + : /* no output */ + : "r" (&tsk->thread.fpu) + : "memory"); + /* fall through */ + case SP32_DP16_reg: + asm volatile ("fsdi $fd15, [%0+0x78]\n\t" + "fsdi $fd14, [%0+0x70]\n\t" + "fsdi $fd13, [%0+0x68]\n\t" + "fsdi $fd12, [%0+0x60]\n\t" + "fsdi $fd11, [%0+0x58]\n\t" + "fsdi $fd10, [%0+0x50]\n\t" + "fsdi $fd9, [%0+0x48]\n\t" + "fsdi $fd8, [%0+0x40]\n\t" + : /* no output */ + : "r" (&tsk->thread.fpu) + : "memory"); + /* fall through */ + case SP16_DP8_reg: + asm volatile ("fsdi $fd7, [%0+0x38]\n\t" + "fsdi $fd6, [%0+0x30]\n\t" + "fsdi $fd5, [%0+0x28]\n\t" + "fsdi $fd4, [%0+0x20]\n\t" + : /* no output */ + : "r" (&tsk->thread.fpu) + : "memory"); + /* fall through */ + case SP8_DP4_reg: + asm volatile ("fsdi $fd3, [%1+0x18]\n\t" + "fsdi $fd2, [%1+0x10]\n\t" + "fsdi $fd1, [%1+0x8]\n\t" + "fsdi $fd0, [%1+0x0]\n\t" + "fmfcsr %0\n\t" + "swi %0, [%1+0x100]\n\t" + : "=&r" (fpcsr) + : "r"(&tsk->thread.fpu) + : "memory"); + } + disable_fpu(); +} + +void load_fpu(const struct fpu_struct *fpregs) +{ + unsigned int fpcfg, fpcsr; + + enable_fpu(); + fpcfg = ((__nds32__fmfcfg() & FPCFG_mskFREG) >> FPCFG_offFREG); + switch (fpcfg) { + case SP32_DP32_reg: + asm volatile ("fldi $fd31, [%0+0xf8]\n\t" + "fldi $fd30, [%0+0xf0]\n\t" + "fldi $fd29, 
[%0+0xe8]\n\t" + "fldi $fd28, [%0+0xe0]\n\t" + "fldi $fd27, [%0+0xd8]\n\t" + "fldi $fd26, [%0+0xd0]\n\t" + "fldi $fd25, [%0+0xc8]\n\t" + "fldi $fd24, [%0+0xc0]\n\t" + "fldi $fd23, [%0+0xb8]\n\t" + "fldi $fd22, [%0+0xb0]\n\t" + "fldi $fd21, [%0+0xa8]\n\t" + "fldi $fd20, [%0+0xa0]\n\t" + "fldi $fd19, [%0+0x98]\n\t" + "fldi $fd18, [%0+0x90]\n\t" + "fldi $fd17, [%0+0x88]\n\t" + "fldi $fd16, [%0+0x80]\n\t" + : /* no output */ + : "r" (fpregs)); + /* fall through */ + case SP32_DP16_reg: + asm volatile ("fldi $fd15, [%0+0x78]\n\t" + "fldi $fd14, [%0+0x70]\n\t" + "fldi $fd13, [%0+0x68]\n\t" + "fldi $fd12, [%0+0x60]\n\t" + "fldi $fd11, [%0+0x58]\n\t" + "fldi $fd10, [%0+0x50]\n\t" + "fldi $fd9, [%0+0x48]\n\t" + "fldi $fd8, [%0+0x40]\n\t" + : /* no output */ + : "r" (fpregs)); + /* fall through */ + case SP16_DP8_reg: + asm volatile ("fldi $fd7, [%0+0x38]\n\t" + "fldi $fd6, [%0+0x30]\n\t" + "fldi $fd5, [%0+0x28]\n\t" + "fldi $fd4, [%0+0x20]\n\t" + : /* no output */ + : "r" (fpregs)); + /* fall through */ + case SP8_DP4_reg: + asm volatile ("fldi $fd3, [%1+0x18]\n\t" + "fldi $fd2, [%1+0x10]\n\t" + "fldi $fd1, [%1+0x8]\n\t" + "fldi $fd0, [%1+0x0]\n\t" + "lwi %0, [%1+0x100]\n\t" + "fmtcsr %0\n\t":"=&r" (fpcsr) + : "r"(fpregs)); + } + disable_fpu(); +} +void store_fpu_for_suspend(void) +{ +#ifdef CONFIG_LAZY_FPU + if (last_task_used_math != NULL) + save_fpu(last_task_used_math); + last_task_used_math = NULL; +#else + if (!used_math()) + return; + unlazy_fpu(current); +#endif + clear_fpu(task_pt_regs(current)); +} +inline void do_fpu_context_switch(struct pt_regs *regs) +{ + /* Enable to use FPU. */ + + if (!user_mode(regs)) { + pr_err("BUG: FPU is used in kernel mode.\n"); + BUG(); + return; + } + + enable_ptreg_fpu(regs); +#ifdef CONFIG_LAZY_FPU //Lazy FPU is used + if (last_task_used_math == current) + return; + if (last_task_used_math != NULL) + /* Other processes fpu state, save away */ + save_fpu(last_task_used_math); + last_task_used_math = current; +#endif + if (used_math()) { + load_fpu(¤t->thread.fpu); + } else { + /* First time FPU user. 
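+ * The task starts from init_fpuregs above (every $fdN preset to sNAN64, fpcsr set to FPCSR_INIT) rather than from a previously saved context, and set_used_math() below marks it so that later switches save and restore its state.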
*/ + load_fpu(&init_fpuregs); +#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) + current->thread.fpu.UDF_trap = init_fpuregs.UDF_trap; +#endif + set_used_math(); + } + +} + +inline void fill_sigfpe_signo(unsigned int fpcsr, int *signo) +{ + if (fpcsr & FPCSR_mskOVFT) + *signo = FPE_FLTOVF; +#ifndef CONFIG_SUPPORT_DENORMAL_ARITHMETIC + else if (fpcsr & FPCSR_mskUDFT) + *signo = FPE_FLTUND; +#endif + else if (fpcsr & FPCSR_mskIVOT) + *signo = FPE_FLTINV; + else if (fpcsr & FPCSR_mskDBZT) + *signo = FPE_FLTDIV; + else if (fpcsr & FPCSR_mskIEXT) + *signo = FPE_FLTRES; +} + +inline void handle_fpu_exception(struct pt_regs *regs) +{ + unsigned int fpcsr; + int si_code = 0, si_signo = SIGFPE; +#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) + unsigned long redo_except = FPCSR_mskDNIT|FPCSR_mskUDFT; +#else + unsigned long redo_except = FPCSR_mskDNIT; +#endif + + lose_fpu(); + fpcsr = current->thread.fpu.fpcsr; + + if (fpcsr & redo_except) { +#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) + if (fpcsr & FPCSR_mskUDFT) + current->thread.fpu.fpcsr &= ~FPCSR_mskIEX; +#endif + si_signo = do_fpuemu(regs, ¤t->thread.fpu); + fpcsr = current->thread.fpu.fpcsr; + if (!si_signo) + goto done; + } else if (fpcsr & FPCSR_mskRIT) { + if (!user_mode(regs)) + do_exit(SIGILL); + si_signo = SIGILL; + } + + + switch (si_signo) { + case SIGFPE: + fill_sigfpe_signo(fpcsr, &si_code); + break; + case SIGILL: + show_regs(regs); + si_code = ILL_COPROC; + break; + case SIGBUS: + si_code = BUS_ADRERR; + break; + default: + break; + } + + force_sig_fault(si_signo, si_code, + (void __user *)instruction_pointer(regs), current); +done: + own_fpu(); +} + +bool do_fpu_exception(unsigned int subtype, struct pt_regs *regs) +{ + int done = true; + /* Coprocessor disabled exception */ + if (subtype == FPU_DISABLE_EXCEPTION) { + preempt_disable(); + do_fpu_context_switch(regs); + preempt_enable(); + } + /* Coprocessor exception such as underflow and overflow */ + else if (subtype == FPU_EXCEPTION) + handle_fpu_exception(regs); + else + done = false; + return done; +} diff --git a/arch/nds32/kernel/head.S b/arch/nds32/kernel/head.S index c5fdae174ced..db64b78b1232 100644 --- a/arch/nds32/kernel/head.S +++ b/arch/nds32/kernel/head.S @@ -123,21 +123,12 @@ _image_size_check: andi $r0, $r0, MMU_CFG_mskTBS srli $r6, $r6, MMU_CFG_offTBW srli $r0, $r0, MMU_CFG_offTBS - /* - * we just map the kernel to the maximum way - 1 of tlb - * reserver one way for UART VA mapping - * it will cause page fault if UART mapping cover the kernel mapping - * - * direct mapping is not supported now. - */ - li $r2, 't' - beqz $r6, __error ! MMU_CFG.TBW = 0 is direct mappin + addi $r6, $r6, #0x1 ! MMU_CFG.TBW value -> meaning addi $r0, $r0, #0x2 ! MMU_CFG.TBS value -> meaning sll $r0, $r6, $r0 ! entries = k-way * n-set mul $r6, $r0, $r5 ! max size = entries * page size /* check kernel image size */ la $r3, (_end - PAGE_OFFSET) - li $r2, 's' bgt $r3, $r6, __error li $r2, #(PHYS_OFFSET + TLB_DATA_kernel_text_attr) @@ -160,7 +151,7 @@ _tlb: #endif mtsr $r3, $TLB_MISC - mfsr $r0, $MISC_CTL ! Enable BTB and RTP and shadow sp + mfsr $r0, $MISC_CTL ! 
Enable BTB, RTP, shadow sp, and HW_PRE ori $r0, $r0, #MISC_init mtsr $r0, $MISC_CTL diff --git a/arch/nds32/kernel/perf_event_cpu.c b/arch/nds32/kernel/perf_event_cpu.c new file mode 100644 index 000000000000..5e00ce54d0ff --- /dev/null +++ b/arch/nds32/kernel/perf_event_cpu.c @@ -0,0 +1,1522 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2008-2017 Andes Technology Corporation + * + * Reference ARMv7: Jean Pihet + * 2010 (c) MontaVista Software, LLC. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +/* Set at runtime when we know what CPU type we are. */ +static struct nds32_pmu *cpu_pmu; + +static DEFINE_PER_CPU(struct pmu_hw_events, cpu_hw_events); +static void nds32_pmu_start(struct nds32_pmu *cpu_pmu); +static void nds32_pmu_stop(struct nds32_pmu *cpu_pmu); +static struct platform_device_id cpu_pmu_plat_device_ids[] = { + {.name = "nds32-pfm"}, + {}, +}; + +static int nds32_pmu_map_cache_event(const unsigned int (*cache_map) + [PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX], u64 config) +{ + unsigned int cache_type, cache_op, cache_result, ret; + + cache_type = (config >> 0) & 0xff; + if (cache_type >= PERF_COUNT_HW_CACHE_MAX) + return -EINVAL; + + cache_op = (config >> 8) & 0xff; + if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX) + return -EINVAL; + + cache_result = (config >> 16) & 0xff; + if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX) + return -EINVAL; + + ret = (int)(*cache_map)[cache_type][cache_op][cache_result]; + + if (ret == CACHE_OP_UNSUPPORTED) + return -ENOENT; + + return ret; +} + +static int +nds32_pmu_map_hw_event(const unsigned int (*event_map)[PERF_COUNT_HW_MAX], + u64 config) +{ + int mapping; + + if (config >= PERF_COUNT_HW_MAX) + return -ENOENT; + + mapping = (*event_map)[config]; + return mapping == HW_OP_UNSUPPORTED ? 
-ENOENT : mapping; +} + +static int nds32_pmu_map_raw_event(u32 raw_event_mask, u64 config) +{ + int ev_type = (int)(config & raw_event_mask); + int idx = config >> 8; + + switch (idx) { + case 0: + ev_type = PFM_OFFSET_MAGIC_0 + ev_type; + if (ev_type >= SPAV3_0_SEL_LAST || ev_type <= SPAV3_0_SEL_BASE) + return -ENOENT; + break; + case 1: + ev_type = PFM_OFFSET_MAGIC_1 + ev_type; + if (ev_type >= SPAV3_1_SEL_LAST || ev_type <= SPAV3_1_SEL_BASE) + return -ENOENT; + break; + case 2: + ev_type = PFM_OFFSET_MAGIC_2 + ev_type; + if (ev_type >= SPAV3_2_SEL_LAST || ev_type <= SPAV3_2_SEL_BASE) + return -ENOENT; + break; + default: + return -ENOENT; + } + + return ev_type; +} + +int +nds32_pmu_map_event(struct perf_event *event, + const unsigned int (*event_map)[PERF_COUNT_HW_MAX], + const unsigned int (*cache_map) + [PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX], u32 raw_event_mask) +{ + u64 config = event->attr.config; + + switch (event->attr.type) { + case PERF_TYPE_HARDWARE: + return nds32_pmu_map_hw_event(event_map, config); + case PERF_TYPE_HW_CACHE: + return nds32_pmu_map_cache_event(cache_map, config); + case PERF_TYPE_RAW: + return nds32_pmu_map_raw_event(raw_event_mask, config); + } + + return -ENOENT; +} + +static int nds32_spav3_map_event(struct perf_event *event) +{ + return nds32_pmu_map_event(event, &nds32_pfm_perf_map, + &nds32_pfm_perf_cache_map, SOFTWARE_EVENT_MASK); +} + +static inline u32 nds32_pfm_getreset_flags(void) +{ + /* Read overflow status */ + u32 val = __nds32__mfsr(NDS32_SR_PFM_CTL); + u32 old_val = val; + + /* Write overflow bit to clear status, and others keep it 0 */ + u32 ov_flag = PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]; + + __nds32__mtsr(val | ov_flag, NDS32_SR_PFM_CTL); + + return old_val; +} + +static inline int nds32_pfm_has_overflowed(u32 pfm) +{ + u32 ov_flag = PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]; + + return pfm & ov_flag; +} + +static inline int nds32_pfm_counter_has_overflowed(u32 pfm, int idx) +{ + u32 mask = 0; + + switch (idx) { + case 0: + mask = PFM_CTL_OVF[0]; + break; + case 1: + mask = PFM_CTL_OVF[1]; + break; + case 2: + mask = PFM_CTL_OVF[2]; + break; + default: + pr_err("%s index wrong\n", __func__); + break; + } + return pfm & mask; +} + +/* + * Set the next IRQ period, based on the hwc->period_left value. 
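+ * Example with illustrative numbers: for sample_period = 100000 the counter is seeded with (u64)(-100000) & max_period, so it wraps and raises the PFM interrupt after exactly 100000 events.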
+ * To be called with the event disabled in hw: + */ +int nds32_pmu_event_set_period(struct perf_event *event) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + s64 left = local64_read(&hwc->period_left); + s64 period = hwc->sample_period; + int ret = 0; + + /* The period may have been changed by PERF_EVENT_IOC_PERIOD */ + if (unlikely(period != hwc->last_period)) + left = period - (hwc->last_period - left); + + if (unlikely(left <= -period)) { + left = period; + local64_set(&hwc->period_left, left); + hwc->last_period = period; + ret = 1; + } + + if (unlikely(left <= 0)) { + left += period; + local64_set(&hwc->period_left, left); + hwc->last_period = period; + ret = 1; + } + + if (left > (s64)nds32_pmu->max_period) + left = nds32_pmu->max_period; + + /* + * The hw event starts counting from this event offset, + * mark it to be able to extract future "deltas": + */ + local64_set(&hwc->prev_count, (u64)(-left)); + + nds32_pmu->write_counter(event, (u64)(-left) & nds32_pmu->max_period); + + perf_event_update_userpage(event); + + return ret; +} + +static irqreturn_t nds32_pmu_handle_irq(int irq_num, void *dev) +{ + u32 pfm; + struct perf_sample_data data; + struct nds32_pmu *cpu_pmu = (struct nds32_pmu *)dev; + struct pmu_hw_events *cpuc = cpu_pmu->get_hw_events(); + struct pt_regs *regs; + int idx; + /* + * Get and reset the IRQ flags + */ + pfm = nds32_pfm_getreset_flags(); + + /* + * Did an overflow occur? + */ + if (!nds32_pfm_has_overflowed(pfm)) + return IRQ_NONE; + + /* + * Handle the counter(s) overflow(s) + */ + regs = get_irq_regs(); + + nds32_pmu_stop(cpu_pmu); + for (idx = 0; idx < cpu_pmu->num_events; ++idx) { + struct perf_event *event = cpuc->events[idx]; + struct hw_perf_event *hwc; + + /* Ignore if we don't have an event. */ + if (!event) + continue; + + /* + * We have a single interrupt for all counters. Check that + * each counter has overflowed before we process it. + */ + if (!nds32_pfm_counter_has_overflowed(pfm, idx)) + continue; + + hwc = &event->hw; + nds32_pmu_event_update(event); + perf_sample_data_init(&data, 0, hwc->last_period); + if (!nds32_pmu_event_set_period(event)) + continue; + + if (perf_event_overflow(event, &data, regs)) + cpu_pmu->disable(event); + } + nds32_pmu_start(cpu_pmu); + /* + * Handle the pending perf events. + * + * Note: this call *must* be run with interrupts disabled. For + * platforms that can have the PMU interrupts raised as an NMI, this + * will not work. + */ + irq_work_run(); + + return IRQ_HANDLED; +} + +static inline int nds32_pfm_counter_valid(struct nds32_pmu *cpu_pmu, int idx) +{ + return ((idx >= 0) && (idx < cpu_pmu->num_events)); +} + +static inline int nds32_pfm_disable_counter(int idx) +{ + unsigned int val = __nds32__mfsr(NDS32_SR_PFM_CTL); + u32 mask = 0; + + mask = PFM_CTL_EN[idx]; + val &= ~mask; + val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); + return idx; +} + +/* + * Add an event filter to a given event. + */ +static int nds32_pmu_set_event_filter(struct hw_perf_event *event, + struct perf_event_attr *attr) +{ + unsigned long config_base = 0; + int idx = event->idx; + unsigned long no_kernel_tracing = 0; + unsigned long no_user_tracing = 0; + /* If index is -1, do not do anything */ + if (idx == -1) + return 0; + + no_kernel_tracing = PFM_CTL_KS[idx]; + no_user_tracing = PFM_CTL_KU[idx]; + /* + * Default: enable both kernel and user mode tracing. 
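+ * For example, "perf record -e cycles:u" sets attr->exclude_kernel, which installs the per-counter PFM_CTL_KS (suppress-kernel) bit; ":k" sets attr->exclude_user and installs PFM_CTL_KU instead.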
+ */ + if (attr->exclude_user) + config_base |= no_user_tracing; + + if (attr->exclude_kernel) + config_base |= no_kernel_tracing; + + /* + * Install the filter into config_base as this is used to + * construct the event type. + */ + event->config_base |= config_base; + return 0; +} + +static inline void nds32_pfm_write_evtsel(int idx, u32 evnum) +{ + u32 offset = 0; + u32 ori_val = __nds32__mfsr(NDS32_SR_PFM_CTL); + u32 ev_mask = 0; + u32 no_kernel_mask = 0; + u32 no_user_mask = 0; + u32 val; + + offset = PFM_CTL_OFFSEL[idx]; + /* Clear previous mode selection, and write new one */ + no_kernel_mask = PFM_CTL_KS[idx]; + no_user_mask = PFM_CTL_KU[idx]; + ori_val &= ~no_kernel_mask; + ori_val &= ~no_user_mask; + if (evnum & no_kernel_mask) + ori_val |= no_kernel_mask; + + if (evnum & no_user_mask) + ori_val |= no_user_mask; + + /* Clear previous event selection */ + ev_mask = PFM_CTL_SEL[idx]; + ori_val &= ~ev_mask; + evnum &= SOFTWARE_EVENT_MASK; + + /* undo the linear mapping */ + evnum = get_converted_evet_hw_num(evnum); + val = ori_val | (evnum << offset); + val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); +} + +static inline int nds32_pfm_enable_counter(int idx) +{ + unsigned int val = __nds32__mfsr(NDS32_SR_PFM_CTL); + u32 mask = 0; + + mask = PFM_CTL_EN[idx]; + val |= mask; + val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); + return idx; +} + +static inline int nds32_pfm_enable_intens(int idx) +{ + unsigned int val = __nds32__mfsr(NDS32_SR_PFM_CTL); + u32 mask = 0; + + mask = PFM_CTL_IE[idx]; + val |= mask; + val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); + return idx; +} + +static inline int nds32_pfm_disable_intens(int idx) +{ + unsigned int val = __nds32__mfsr(NDS32_SR_PFM_CTL); + u32 mask = 0; + + mask = PFM_CTL_IE[idx]; + val &= ~mask; + val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); + return idx; +} + +static int event_requires_mode_exclusion(struct perf_event_attr *attr) +{ + /* Other modes NDS32 does not support */ + return attr->exclude_user || attr->exclude_kernel; +} + +static void nds32_pmu_enable_event(struct perf_event *event) +{ + unsigned long flags; + unsigned int evnum = 0; + struct hw_perf_event *hwc = &event->hw; + struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); + int idx = hwc->idx; + + if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { + pr_err("CPU enabling wrong pfm counter IRQ enable\n"); + return; + } + + /* + * Enable counter and interrupt, and set the counter to count + * the event that we're interested in. + */ + raw_spin_lock_irqsave(&events->pmu_lock, flags); + + /* + * Disable counter + */ + nds32_pfm_disable_counter(idx); + + /* + * Check whether we need to exclude the counter from certain modes. 
+ */ + if ((!cpu_pmu->set_event_filter || + cpu_pmu->set_event_filter(hwc, &event->attr)) && + event_requires_mode_exclusion(&event->attr)) { + pr_notice + ("NDS32 performance counters do not support mode exclusion\n"); + hwc->config_base = 0; + } + /* Write event */ + evnum = hwc->config_base; + nds32_pfm_write_evtsel(idx, evnum); + + /* + * Enable interrupt for this counter + */ + nds32_pfm_enable_intens(idx); + + /* + * Enable counter + */ + nds32_pfm_enable_counter(idx); + + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); +} + +static void nds32_pmu_disable_event(struct perf_event *event) +{ + unsigned long flags; + struct hw_perf_event *hwc = &event->hw; + struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); + int idx = hwc->idx; + + if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { + pr_err("CPU disabling wrong pfm counter IRQ enable %d\n", idx); + return; + } + + /* + * Disable counter and interrupt + */ + raw_spin_lock_irqsave(&events->pmu_lock, flags); + + /* + * Disable counter + */ + nds32_pfm_disable_counter(idx); + + /* + * Disable interrupt for this counter + */ + nds32_pfm_disable_intens(idx); + + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); +} + +static inline u32 nds32_pmu_read_counter(struct perf_event *event) +{ + struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + int idx = hwc->idx; + u32 count = 0; + + if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { + pr_err("CPU reading wrong counter %d\n", idx); + } else { + switch (idx) { + case PFMC0: + count = __nds32__mfsr(NDS32_SR_PFMC0); + break; + case PFMC1: + count = __nds32__mfsr(NDS32_SR_PFMC1); + break; + case PFMC2: + count = __nds32__mfsr(NDS32_SR_PFMC2); + break; + default: + pr_err + ("%s: CPU has no performance counters %d\n", + __func__, idx); + } + } + return count; +} + +static inline void nds32_pmu_write_counter(struct perf_event *event, u32 value) +{ + struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + int idx = hwc->idx; + + if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { + pr_err("CPU writing wrong counter %d\n", idx); + } else { + switch (idx) { + case PFMC0: + __nds32__mtsr_isb(value, NDS32_SR_PFMC0); + break; + case PFMC1: + __nds32__mtsr_isb(value, NDS32_SR_PFMC1); + break; + case PFMC2: + __nds32__mtsr_isb(value, NDS32_SR_PFMC2); + break; + default: + pr_err + ("%s: CPU has no performance counters %d\n", + __func__, idx); + } + } +} + +static int nds32_pmu_get_event_idx(struct pmu_hw_events *cpuc, + struct perf_event *event) +{ + int idx; + struct hw_perf_event *hwc = &event->hw; + /* + * Current implementation maps cycles, instruction count and cache-miss + * to specific counters. + * However, multiple of the 3 counters are able to count these events. + * + * + * SOFTWARE_EVENT_MASK is the mask for getting the event number. + * This is defined by Jia-Rung; you can change the policies. + * However, do not exceed 8 bits. This is hardware specific. + * The last number is SPAv3_2_SEL_LAST.
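+ * For example, evtype == SPAV3_0_SEL_TOTAL_CYCLES first tries the cycle counter and then falls back to counters 0 and 1, as the test_and_set_bit() chain below implements.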
+ */ + unsigned long evtype = hwc->config_base & SOFTWARE_EVENT_MASK; + + idx = get_converted_event_idx(evtype); + /* + * Try to get the counter for the corresponding event + */ + if (evtype == SPAV3_0_SEL_TOTAL_CYCLES) { + if (!test_and_set_bit(idx, cpuc->used_mask)) + return idx; + if (!test_and_set_bit(NDS32_IDX_COUNTER0, cpuc->used_mask)) + return NDS32_IDX_COUNTER0; + if (!test_and_set_bit(NDS32_IDX_COUNTER1, cpuc->used_mask)) + return NDS32_IDX_COUNTER1; + } else if (evtype == SPAV3_1_SEL_COMPLETED_INSTRUCTION) { + if (!test_and_set_bit(idx, cpuc->used_mask)) + return idx; + else if (!test_and_set_bit(NDS32_IDX_COUNTER1, cpuc->used_mask)) + return NDS32_IDX_COUNTER1; + else if (!test_and_set_bit + (NDS32_IDX_CYCLE_COUNTER, cpuc->used_mask)) + return NDS32_IDX_CYCLE_COUNTER; + } else { + if (!test_and_set_bit(idx, cpuc->used_mask)) + return idx; + } + return -EAGAIN; +} + +static void nds32_pmu_start(struct nds32_pmu *cpu_pmu) +{ + unsigned long flags; + unsigned int val; + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); + + raw_spin_lock_irqsave(&events->pmu_lock, flags); + + /* Enable all counters; the NDS PFM has 3 counters */ + val = __nds32__mfsr(NDS32_SR_PFM_CTL); + val |= (PFM_CTL_EN[0] | PFM_CTL_EN[1] | PFM_CTL_EN[2]); + val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); + + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); +} + +static void nds32_pmu_stop(struct nds32_pmu *cpu_pmu) +{ + unsigned long flags; + unsigned int val; + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); + + raw_spin_lock_irqsave(&events->pmu_lock, flags); + + /* Disable all counters; the NDS PFM has 3 counters */ + val = __nds32__mfsr(NDS32_SR_PFM_CTL); + val &= ~(PFM_CTL_EN[0] | PFM_CTL_EN[1] | PFM_CTL_EN[2]); + val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); + + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); +} + +static void nds32_pmu_reset(void *info) +{ + u32 val = 0; + + val |= (PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); + __nds32__mtsr(val, NDS32_SR_PFM_CTL); + __nds32__mtsr(0, NDS32_SR_PFM_CTL); + __nds32__mtsr(0, NDS32_SR_PFMC0); + __nds32__mtsr(0, NDS32_SR_PFMC1); + __nds32__mtsr(0, NDS32_SR_PFMC2); +} + +static void nds32_pmu_init(struct nds32_pmu *cpu_pmu) +{ + cpu_pmu->handle_irq = nds32_pmu_handle_irq; + cpu_pmu->enable = nds32_pmu_enable_event; + cpu_pmu->disable = nds32_pmu_disable_event; + cpu_pmu->read_counter = nds32_pmu_read_counter; + cpu_pmu->write_counter = nds32_pmu_write_counter; + cpu_pmu->get_event_idx = nds32_pmu_get_event_idx; + cpu_pmu->start = nds32_pmu_start; + cpu_pmu->stop = nds32_pmu_stop; + cpu_pmu->reset = nds32_pmu_reset; + cpu_pmu->max_period = 0xFFFFFFFF; /* Maximum counts */ +}; + +static u32 nds32_read_num_pfm_events(void) +{ + /* The NDS32 SPAv3 PMU supports 3 counters */ + return 3; +} + +static int device_pmu_init(struct nds32_pmu *cpu_pmu) +{ + nds32_pmu_init(cpu_pmu); + /* + * This name should be a device-specific name, whatever you like :) + * I think "PMU" will be a good generic name. + */ + cpu_pmu->name = "nds32v3-pmu"; + cpu_pmu->map_event = nds32_spav3_map_event; + cpu_pmu->num_events = nds32_read_num_pfm_events(); + cpu_pmu->set_event_filter = nds32_pmu_set_event_filter; + return 0; +} + +/* + * CPU PMU identification and probing.
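+ * A matching device tree node (see cpu_pmu_of_device_ids below) would look roughly like: pmu { compatible = "andestech,nds32v3-pmu"; interrupts = <13>; }; (the interrupt number is illustrative).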
+ */ +static int probe_current_pmu(struct nds32_pmu *pmu) +{ + int ret; + + get_cpu(); + ret = -ENODEV; + /* + * If there are various CPU types, each with its own PMU, + * initialize the corresponding one + */ + device_pmu_init(pmu); + put_cpu(); + return ret; +} + +static void nds32_pmu_enable(struct pmu *pmu) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(pmu); + struct pmu_hw_events *hw_events = nds32_pmu->get_hw_events(); + int enabled = bitmap_weight(hw_events->used_mask, + nds32_pmu->num_events); + + if (enabled) + nds32_pmu->start(nds32_pmu); +} + +static void nds32_pmu_disable(struct pmu *pmu) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(pmu); + + nds32_pmu->stop(nds32_pmu); +} + +static void nds32_pmu_release_hardware(struct nds32_pmu *nds32_pmu) +{ + nds32_pmu->free_irq(nds32_pmu); + pm_runtime_put_sync(&nds32_pmu->plat_device->dev); +} + +static irqreturn_t nds32_pmu_dispatch_irq(int irq, void *dev) +{ + struct nds32_pmu *nds32_pmu = (struct nds32_pmu *)dev; + int ret; + u64 start_clock, finish_clock; + + start_clock = local_clock(); + ret = nds32_pmu->handle_irq(irq, dev); + finish_clock = local_clock(); + + perf_sample_event_took(finish_clock - start_clock); + return ret; +} + +static int nds32_pmu_reserve_hardware(struct nds32_pmu *nds32_pmu) +{ + int err; + struct platform_device *pmu_device = nds32_pmu->plat_device; + + if (!pmu_device) + return -ENODEV; + + pm_runtime_get_sync(&pmu_device->dev); + err = nds32_pmu->request_irq(nds32_pmu, nds32_pmu_dispatch_irq); + if (err) { + nds32_pmu_release_hardware(nds32_pmu); + return err; + } + + return 0; +} + +static int +validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events, + struct perf_event *event) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + + if (is_software_event(event)) + return 1; + + if (event->pmu != pmu) + return 0; + + if (event->state < PERF_EVENT_STATE_OFF) + return 1; + + if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec) + return 1; + + return nds32_pmu->get_event_idx(hw_events, event) >= 0; +} + +static int validate_group(struct perf_event *event) +{ + struct perf_event *sibling, *leader = event->group_leader; + struct pmu_hw_events fake_pmu; + DECLARE_BITMAP(fake_used_mask, MAX_COUNTERS); + /* + * Initialize the fake PMU. We only need to populate the + * used_mask for the purposes of validation. + */ + memset(fake_used_mask, 0, sizeof(fake_used_mask)); + + if (!validate_event(event->pmu, &fake_pmu, leader)) + return -EINVAL; + + for_each_sibling_event(sibling, leader) { + if (!validate_event(event->pmu, &fake_pmu, sibling)) + return -EINVAL; + } + + if (!validate_event(event->pmu, &fake_pmu, event)) + return -EINVAL; + + return 0; +} + +static int __hw_perf_event_init(struct perf_event *event) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + int mapping; + + mapping = nds32_pmu->map_event(event); + + if (mapping < 0) { + pr_debug("event %x:%llx not supported\n", event->attr.type, + event->attr.config); + return mapping; + } + + /* + * We don't assign an index until we actually place the event onto + * hardware. Use -1 to signify that we haven't decided where to put it + * yet. For SMP systems, each core has its own PMU so we can't do any + * clever allocation or constraints checking at this point. + */ + hwc->idx = -1; + hwc->config_base = 0; + hwc->config = 0; + hwc->event_base = 0; + + /* + * Check whether we need to exclude the counter from certain modes.
+ */ + if ((!nds32_pmu->set_event_filter || + nds32_pmu->set_event_filter(hwc, &event->attr)) && + event_requires_mode_exclusion(&event->attr)) { + pr_debug + ("NDS performance counters do not support mode exclusion\n"); + return -EOPNOTSUPP; + } + + /* + * Store the event encoding into the config_base field. + */ + hwc->config_base |= (unsigned long)mapping; + + if (!hwc->sample_period) { + /* + * For non-sampling runs, limit the sample_period to half + * of the counter width. That way, the new counter value + * is far less likely to overtake the previous one unless + * you have some serious IRQ latency issues. + */ + hwc->sample_period = nds32_pmu->max_period >> 1; + hwc->last_period = hwc->sample_period; + local64_set(&hwc->period_left, hwc->sample_period); + } + + if (event->group_leader != event) { + if (validate_group(event) != 0) + return -EINVAL; + } + + return 0; +} + +static int nds32_pmu_event_init(struct perf_event *event) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + int err = 0; + atomic_t *active_events = &nds32_pmu->active_events; + + /* does not support taken branch sampling */ + if (has_branch_stack(event)) + return -EOPNOTSUPP; + + if (nds32_pmu->map_event(event) == -ENOENT) + return -ENOENT; + + if (!atomic_inc_not_zero(active_events)) { + if (atomic_read(active_events) == 0) { + /* Register irq handler */ + err = nds32_pmu_reserve_hardware(nds32_pmu); + } + + if (!err) + atomic_inc(active_events); + } + + if (err) + return err; + + err = __hw_perf_event_init(event); + + return err; +} + +static void nds32_start(struct perf_event *event, int flags) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + /* + * NDS pmu always has to reprogram the period, so ignore + * PERF_EF_RELOAD, see the comment below. + */ + if (flags & PERF_EF_RELOAD) + WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE)); + + hwc->state = 0; + /* Set the period for the event. */ + nds32_pmu_event_set_period(event); + + nds32_pmu->enable(event); +} + +static int nds32_pmu_add(struct perf_event *event, int flags) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + struct pmu_hw_events *hw_events = nds32_pmu->get_hw_events(); + struct hw_perf_event *hwc = &event->hw; + int idx; + int err = 0; + + perf_pmu_disable(event->pmu); + + /* If we don't have a space for the counter then finish early. */ + idx = nds32_pmu->get_event_idx(hw_events, event); + if (idx < 0) { + err = idx; + goto out; + } + + /* + * If there is an event in the counter we are going to use then make + * sure it is disabled. + */ + event->hw.idx = idx; + nds32_pmu->disable(event); + hw_events->events[idx] = event; + + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; + if (flags & PERF_EF_START) + nds32_start(event, PERF_EF_RELOAD); + + /* Propagate our changes to the userspace mapping. 
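+ * (a self-monitoring task that mmap()s the event fd then sees the refreshed perf_event_mmap_page metadata, e.g. its "index" and "offset" fields)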
*/ + perf_event_update_userpage(event); + +out: + perf_pmu_enable(event->pmu); + return err; +} + +u64 nds32_pmu_event_update(struct perf_event *event) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + u64 delta, prev_raw_count, new_raw_count; + +again: + prev_raw_count = local64_read(&hwc->prev_count); + new_raw_count = nds32_pmu->read_counter(event); + + if (local64_cmpxchg(&hwc->prev_count, prev_raw_count, + new_raw_count) != prev_raw_count) { + goto again; + } + /* + * Whether it overflowed or not, "unsigned subtraction" + * will always yield the correct delta + */ + delta = (new_raw_count - prev_raw_count) & nds32_pmu->max_period; + + local64_add(delta, &event->count); + local64_sub(delta, &hwc->period_left); + + return new_raw_count; +} + +static void nds32_stop(struct perf_event *event, int flags) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + /* + * NDS pmu always has to update the counter, so ignore + * PERF_EF_UPDATE, see comments in nds32_start(). + */ + if (!(hwc->state & PERF_HES_STOPPED)) { + nds32_pmu->disable(event); + nds32_pmu_event_update(event); + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; + } +} + +static void nds32_pmu_del(struct perf_event *event, int flags) +{ + struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); + struct pmu_hw_events *hw_events = nds32_pmu->get_hw_events(); + struct hw_perf_event *hwc = &event->hw; + int idx = hwc->idx; + + nds32_stop(event, PERF_EF_UPDATE); + hw_events->events[idx] = NULL; + clear_bit(idx, hw_events->used_mask); + + perf_event_update_userpage(event); +} + +static void nds32_pmu_read(struct perf_event *event) +{ + nds32_pmu_event_update(event); +} + +/* Please refer to SPAv3 for more hardware-specific details */ +PMU_FORMAT_ATTR(event, "config:0-63"); + +static struct attribute *nds32_arch_formats_attr[] = { + &format_attr_event.attr, + NULL, +}; + +static struct attribute_group nds32_pmu_format_group = { + .name = "format", + .attrs = nds32_arch_formats_attr, +}; + +static ssize_t nds32_pmu_cpumask_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + return 0; +} + +static DEVICE_ATTR(cpus, 0444, nds32_pmu_cpumask_show, NULL); + +static struct attribute *nds32_pmu_common_attrs[] = { + &dev_attr_cpus.attr, + NULL, +}; + +static struct attribute_group nds32_pmu_common_group = { + .attrs = nds32_pmu_common_attrs, +}; + +static const struct attribute_group *nds32_pmu_attr_groups[] = { + &nds32_pmu_format_group, + &nds32_pmu_common_group, + NULL, +}; + +static void nds32_init(struct nds32_pmu *nds32_pmu) +{ + atomic_set(&nds32_pmu->active_events, 0); + + nds32_pmu->pmu = (struct pmu) { + .pmu_enable = nds32_pmu_enable, + .pmu_disable = nds32_pmu_disable, + .attr_groups = nds32_pmu_attr_groups, + .event_init = nds32_pmu_event_init, + .add = nds32_pmu_add, + .del = nds32_pmu_del, + .start = nds32_start, + .stop = nds32_stop, + .read = nds32_pmu_read, + }; +} + +int nds32_pmu_register(struct nds32_pmu *nds32_pmu, int type) +{ + nds32_init(nds32_pmu); + pm_runtime_enable(&nds32_pmu->plat_device->dev); + pr_info("enabled with %s PMU driver, %d counters available\n", + nds32_pmu->name, nds32_pmu->num_events); + return perf_pmu_register(&nds32_pmu->pmu, nds32_pmu->name, type); +} + +static struct pmu_hw_events *cpu_pmu_get_cpu_events(void) +{ + return this_cpu_ptr(&cpu_hw_events); +} + +static int cpu_pmu_request_irq(struct nds32_pmu *cpu_pmu, irq_handler_t handler) +{ + int err, irq, irqs; + struct
platform_device *pmu_device = cpu_pmu->plat_device; + + if (!pmu_device) + return -ENODEV; + + irqs = min(pmu_device->num_resources, num_possible_cpus()); + if (irqs < 1) { + pr_err("no irqs for PMUs defined\n"); + return -ENODEV; + } + + irq = platform_get_irq(pmu_device, 0); + err = request_irq(irq, handler, IRQF_NOBALANCING, "nds32-pfm", + cpu_pmu); + if (err) { + pr_err("unable to request IRQ%d for NDS PMU counters\n", + irq); + return err; + } + return 0; +} + +static void cpu_pmu_free_irq(struct nds32_pmu *cpu_pmu) +{ + int irq; + struct platform_device *pmu_device = cpu_pmu->plat_device; + + irq = platform_get_irq(pmu_device, 0); + if (irq >= 0) + free_irq(irq, cpu_pmu); +} + +static void cpu_pmu_init(struct nds32_pmu *cpu_pmu) +{ + int cpu; + struct pmu_hw_events *events; + + for_each_possible_cpu(cpu) { + events = &per_cpu(cpu_hw_events, cpu); + raw_spin_lock_init(&events->pmu_lock); + } + + cpu_pmu->get_hw_events = cpu_pmu_get_cpu_events; + cpu_pmu->request_irq = cpu_pmu_request_irq; + cpu_pmu->free_irq = cpu_pmu_free_irq; + + /* Ensure the PMU has sane values out of reset. */ + if (cpu_pmu->reset) + on_each_cpu(cpu_pmu->reset, cpu_pmu, 1); +} + +static const struct of_device_id cpu_pmu_of_device_ids[] = { + {.compatible = "andestech,nds32v3-pmu", + .data = device_pmu_init}, + {}, +}; + +static int cpu_pmu_device_probe(struct platform_device *pdev) +{ + const struct of_device_id *of_id; + int (*init_fn)(struct nds32_pmu *nds32_pmu); + struct device_node *node = pdev->dev.of_node; + struct nds32_pmu *pmu; + int ret = -ENODEV; + + if (cpu_pmu) { + pr_notice("[perf] attempt to register multiple PMU devices!\n"); + return -ENOSPC; + } + + pmu = kzalloc(sizeof(*pmu), GFP_KERNEL); + if (!pmu) + return -ENOMEM; + + of_id = of_match_node(cpu_pmu_of_device_ids, pdev->dev.of_node); + if (node && of_id) { + init_fn = of_id->data; + ret = init_fn(pmu); + } else { + ret = probe_current_pmu(pmu); + } + + if (ret) { + pr_notice("[perf] failed to probe PMU!\n"); + goto out_free; + } + + cpu_pmu = pmu; + cpu_pmu->plat_device = pdev; + cpu_pmu_init(cpu_pmu); + ret = nds32_pmu_register(cpu_pmu, PERF_TYPE_RAW); + + if (!ret) + return 0; + +out_free: + pr_notice("[perf] failed to register PMU devices!\n"); + kfree(pmu); + return ret; +} + +static struct platform_driver cpu_pmu_driver = { + .driver = { + .name = "nds32-pfm", + .of_match_table = cpu_pmu_of_device_ids, + }, + .probe = cpu_pmu_device_probe, + .id_table = cpu_pmu_plat_device_ids, +}; + +static int __init register_pmu_driver(void) +{ + int err = 0; + + err = platform_driver_register(&cpu_pmu_driver); + if (err) + pr_notice("[perf] PMU initialization failed\n"); + else + pr_notice("[perf] PMU initialization done\n"); + + return err; +} + +device_initcall(register_pmu_driver); + +/* + * References: arch/nds32/kernel/traps.c:__dump() + * You will need to know the NDS ABI first. + */ +static int unwind_frame_kernel(struct stackframe *frame) +{ + int graph = 0; +#ifdef CONFIG_FRAME_POINTER + /* 0x3 means misalignment */ + if (!kstack_end((void *)frame->fp) && + !((unsigned long)frame->fp & 0x3) && + ((unsigned long)frame->fp >= TASK_SIZE)) { + /* + * The array index is based on the ABI; the graph below + * illustrates the reasons. + * Function call procedure: "smw" and "lmw" will always + * update SP and FP for you automatically.
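+ * An illustrative prologue: "smw.adm $sp, [$sp], $sp, #0xa" pushes $fp and $lp (enable bits illustrative; compare the #0xb used in __switch_to above), and a matching "lmw" pops them on return.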
+ * + * Stack Relative Address + * | | 0 + * ---- + * |LP| <-- SP(before smw) <-- FP(after smw) -1 + * ---- + * |FP| -2 + * ---- + * | | <-- SP(after smw) -3 + */ + frame->lp = ((unsigned long *)frame->fp)[-1]; + frame->fp = ((unsigned long *)frame->fp)[FP_OFFSET]; + /* make sure CONFIG_FUNCTION_GRAPH_TRACER is turned on */ + if (__kernel_text_address(frame->lp)) + frame->lp = ftrace_graph_ret_addr + (NULL, &graph, frame->lp, NULL); + + return 0; + } else { + return -EPERM; + } +#else + /* + * You can refer to arch/nds32/kernel/traps.c:__dump() + * Treat "sp" as "fp", but the "sp" is one frame ahead of "fp". + * And, the "sp" is not always correct. + * + * Stack Relative Address + * | | 0 + * ---- + * |LP| <-- SP(before smw) -1 + * ---- + * | | <-- SP(after smw) -2 + * ---- + */ + if (!kstack_end((void *)frame->sp)) { + frame->lp = ((unsigned long *)frame->sp)[1]; + /* TODO: handle the case where the value in the first + * "sp" is not correct + */ + if (__kernel_text_address(frame->lp)) + frame->lp = ftrace_graph_ret_addr + (NULL, &graph, frame->lp, NULL); + + frame->sp = (unsigned long)((unsigned long *)frame->sp + 1); + + return 0; + } else { + return -EPERM; + } +#endif +} + +static void notrace +walk_stackframe(struct stackframe *frame, + int (*fn_record)(struct stackframe *, void *), + void *data) +{ + while (1) { + int ret; + + if (fn_record(frame, data)) + break; + + ret = unwind_frame_kernel(frame); + if (ret < 0) + break; + } +} + +/* + * Gets called by walk_stackframe() for every stackframe. This will be called + * whilst unwinding the stackframe and is like a subroutine return so we use + * the PC. + */ +static int callchain_trace(struct stackframe *fr, void *data) +{ + struct perf_callchain_entry_ctx *entry = data; + + perf_callchain_store(entry, fr->lp); + return 0; +} + +/* + * Get the return address for a single stackframe and return a pointer to the + * next frame tail.
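+ * The callers below simply iterate it, along the lines of: while (entry->nr < PERF_MAX_STACK_DEPTH && fp && fp > sp) fp = user_backtrace(entry, fp); (condensed from perf_callchain_user() further down).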
+ */ +static unsigned long +user_backtrace(struct perf_callchain_entry_ctx *entry, unsigned long fp) +{ + struct frame_tail buftail; + unsigned long lp = 0; + unsigned long *user_frame_tail = + (unsigned long *)(fp - (unsigned long)sizeof(buftail)); + + /* Check accessibility of one struct frame_tail beyond */ + if (!access_ok(VERIFY_READ, user_frame_tail, sizeof(buftail))) + return 0; + if (__copy_from_user_inatomic + (&buftail, user_frame_tail, sizeof(buftail))) + return 0; + + /* + * Refer to unwind_frame_kernel() for more illustration + */ + lp = buftail.stack_lp; /* ((unsigned long *)fp)[-1] */ + fp = buftail.stack_fp; /* ((unsigned long *)fp)[FP_OFFSET] */ + perf_callchain_store(entry, lp); + return fp; +} + +static unsigned long +user_backtrace_opt_size(struct perf_callchain_entry_ctx *entry, + unsigned long fp) +{ + struct frame_tail_opt_size buftail; + unsigned long lp = 0; + + unsigned long *user_frame_tail = + (unsigned long *)(fp - (unsigned long)sizeof(buftail)); + + /* Check accessibility of one struct frame_tail beyond */ + if (!access_ok(VERIFY_READ, user_frame_tail, sizeof(buftail))) + return 0; + if (__copy_from_user_inatomic + (&buftail, user_frame_tail, sizeof(buftail))) + return 0; + + /* + * Refer to unwind_frame_kernel() for more illustration + */ + lp = buftail.stack_lp; /* ((unsigned long *)fp)[-1] */ + fp = buftail.stack_fp; /* ((unsigned long *)fp)[FP_OFFSET] */ + + perf_callchain_store(entry, lp); + return fp; +} + +/* + * This will be called when the target is in user mode. + * This function will only be called when we use + * "PERF_SAMPLE_CALLCHAIN" in + * kernel/events/core.c:perf_prepare_sample() + * + * How to trigger perf_callchain_[user/kernel]: + * $ perf record -e cpu-clock --call-graph fp ./program + * $ perf report --call-graph + */ +unsigned long leaf_fp; +void +perf_callchain_user(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs) +{ + unsigned long fp = 0; + unsigned long gp = 0; + unsigned long lp = 0; + unsigned long sp = 0; + unsigned long *user_frame_tail; + + leaf_fp = 0; + + if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) { + /* We don't support guest os callchain now */ + return; + } + + perf_callchain_store(entry, regs->ipc); + fp = regs->fp; + gp = regs->gp; + lp = regs->lp; + sp = regs->sp; + if (entry->nr < PERF_MAX_STACK_DEPTH && + (unsigned long)fp && !((unsigned long)fp & 0x7) && fp > sp) { + user_frame_tail = + (unsigned long *)(fp - (unsigned long)sizeof(fp)); + + if (!access_ok(VERIFY_READ, user_frame_tail, sizeof(fp))) + return; + + if (__copy_from_user_inatomic + (&leaf_fp, user_frame_tail, sizeof(fp))) + return; + + if (leaf_fp == lp) { + /* + * Maybe this is a non-leaf function + * compiled with optimize-for-size, + * or maybe a function compiled + * without optimize-for-size + */ + struct frame_tail buftail; + + user_frame_tail = + (unsigned long *)(fp - + (unsigned long)sizeof(buftail)); + + if (!access_ok + (VERIFY_READ, user_frame_tail, sizeof(buftail))) + return; + + if (__copy_from_user_inatomic + (&buftail, user_frame_tail, sizeof(buftail))) + return; + + if (buftail.stack_fp == gp) { + /* non-leaf function compiled with + * optimize-for-size + */ + struct frame_tail_opt_size buftail_opt_size; + + user_frame_tail = + (unsigned long *)(fp - (unsigned long) + sizeof(buftail_opt_size)); + + if (!access_ok(VERIFY_READ, user_frame_tail, + sizeof(buftail_opt_size))) + return; + + if (__copy_from_user_inatomic + (&buftail_opt_size, user_frame_tail, + sizeof(buftail_opt_size))) + return; + + perf_callchain_store(entry, lp); + fp = buftail_opt_size.stack_fp; + + while ((entry->nr < PERF_MAX_STACK_DEPTH) && + (unsigned long)fp && + !((unsigned long)fp & 0x7) && + fp > sp) { + sp = fp; + fp = user_backtrace_opt_size(entry, fp); + } + + } else { + /* this is a function compiled + * without optimize-for-size + */ + fp = buftail.stack_fp; + perf_callchain_store(entry, lp); + while ((entry->nr < PERF_MAX_STACK_DEPTH) && + (unsigned long)fp && + !((unsigned long)fp & 0x7) && + fp > sp) { + sp = fp; + fp = user_backtrace(entry, fp); + } + } + } else { + /* this is a leaf function */ + fp = leaf_fp; + perf_callchain_store(entry, lp); + + /* walk the previous function call chain */ + while ((entry->nr < PERF_MAX_STACK_DEPTH) && + (unsigned long)fp && + !((unsigned long)fp & 0x7) && fp > sp) { + sp = fp; + fp = user_backtrace(entry, fp); + } + } + return; + } +} + +/* This will be called when the target is in kernel mode */ +void +perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs) +{ + struct stackframe fr; + + if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) { + /* We don't support guest os callchain now */ + return; + } + fr.fp = regs->fp; + fr.lp = regs->lp; + fr.sp = regs->sp; + walk_stackframe(&fr, callchain_trace, entry); +} + +unsigned long perf_instruction_pointer(struct pt_regs *regs) +{ + /* However, NDS32 does not support virtualization */ + if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) + return perf_guest_cbs->get_guest_ip(); + + return instruction_pointer(regs); +} + +unsigned long perf_misc_flags(struct pt_regs *regs) +{ + int misc = 0; + + /* However, NDS32 does not support virtualization */ + if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) { + if (perf_guest_cbs->is_user_mode()) + misc |= PERF_RECORD_MISC_GUEST_USER; + else + misc |= PERF_RECORD_MISC_GUEST_KERNEL; + } else { + if (user_mode(regs)) + misc |= PERF_RECORD_MISC_USER; + else + misc |= PERF_RECORD_MISC_KERNEL; + } + + return misc; +} diff --git a/arch/nds32/kernel/pm.c b/arch/nds32/kernel/pm.c new file mode 100644 index 000000000000..ffa8040d8be7 --- /dev/null +++ b/arch/nds32/kernel/pm.c @@ -0,0 +1,78 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2008-2017 Andes Technology Corporation + +#include +#include +#include +#include +#include +#include + +unsigned int resume_addr; +unsigned int *phy_addr_sp_tmp; + +static void nds32_suspend2ram(void) +{ + pgd_t *pgdv; + pud_t *pudv; + pmd_t *pmdv; + pte_t *ptev; + + pgdv = (pgd_t *)__va((__nds32__mfsr(NDS32_SR_L1_PPTB) & + L1_PPTB_mskBASE)) + pgd_index((unsigned int)cpu_resume); + + pudv = pud_offset(pgdv, (unsigned int)cpu_resume); + pmdv = pmd_offset(pudv, (unsigned int)cpu_resume); + ptev = pte_offset_map(pmdv, (unsigned int)cpu_resume); + + resume_addr = ((*ptev) & TLB_DATA_mskPPN) + | ((unsigned int)cpu_resume & 0x00000fff); + + suspend2ram(); +} + +static void nds32_suspend_cpu(void) +{ + while (!(__nds32__mfsr(NDS32_SR_INT_PEND) & wake_mask)) + __asm__ volatile ("standby no_wake_grant\n\t"); +} + +static int nds32_pm_valid(suspend_state_t state) +{ + switch (state) { + case PM_SUSPEND_ON: + case PM_SUSPEND_STANDBY: + case PM_SUSPEND_MEM: + return 1; + default: + return 0; + } +} + +static int nds32_pm_enter(suspend_state_t state) +{ + pr_debug("%s:state:%d\n", __func__, state); + switch (state) { + case PM_SUSPEND_STANDBY: + nds32_suspend_cpu(); + return 0; + case PM_SUSPEND_MEM: + nds32_suspend2ram(); + return 0; + default: + return -EINVAL; + } +} + +static const struct platform_suspend_ops nds32_pm_ops = { + .valid = nds32_pm_valid, + .enter
= nds32_pm_enter, +}; + +static int __init nds32_pm_init(void) +{ + pr_debug("Enter %s\n", __func__); + suspend_set_ops(&nds32_pm_ops); + return 0; +} +late_initcall(nds32_pm_init); diff --git a/arch/nds32/kernel/process.c b/arch/nds32/kernel/process.c index 65fda986e55f..ab7ab46234b1 100644 --- a/arch/nds32/kernel/process.c +++ b/arch/nds32/kernel/process.c @@ -9,15 +9,16 @@ #include #include #include +#include #include #include -extern void setup_mm_for_reboot(char mode); -#ifdef CONFIG_PROC_FS -struct proc_dir_entry *proc_dir_cpu; -EXPORT_SYMBOL(proc_dir_cpu); +#if IS_ENABLED(CONFIG_LAZY_FPU) +struct task_struct *last_task_used_math; #endif +extern void setup_mm_for_reboot(char mode); + extern inline void arch_reset(char mode) { if (mode == 's') { @@ -125,15 +126,31 @@ void show_regs(struct pt_regs *regs) EXPORT_SYMBOL(show_regs); +void exit_thread(struct task_struct *tsk) +{ +#if defined(CONFIG_FPU) && defined(CONFIG_LAZY_FPU) + if (last_task_used_math == tsk) + last_task_used_math = NULL; +#endif +} + void flush_thread(void) { +#if defined(CONFIG_FPU) + clear_fpu(task_pt_regs(current)); + clear_used_math(); +# ifdef CONFIG_LAZY_FPU + if (last_task_used_math == current) + last_task_used_math = NULL; +# endif +#endif } DEFINE_PER_CPU(struct task_struct *, __entry_task); asmlinkage void ret_from_fork(void) __asm__("ret_from_fork"); int copy_thread(unsigned long clone_flags, unsigned long stack_start, - unsigned long stk_sz, struct task_struct *p) + unsigned long stk_sz, struct task_struct *p) { struct pt_regs *childregs = task_pt_regs(p); @@ -159,6 +176,22 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, p->thread.cpu_context.pc = (unsigned long)ret_from_fork; p->thread.cpu_context.sp = (unsigned long)childregs; +#if IS_ENABLED(CONFIG_FPU) + if (used_math()) { +# if !IS_ENABLED(CONFIG_LAZY_FPU) + unlazy_fpu(current); +# else + preempt_disable(); + if (last_task_used_math == current) + save_fpu(current); + preempt_enable(); +# endif + p->thread.fpu = current->thread.fpu; + clear_fpu(task_pt_regs(p)); + set_stopped_child_used_math(p); + } +#endif + #ifdef CONFIG_HWZOL childregs->lb = 0; childregs->le = 0; @@ -168,12 +201,33 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, return 0; } +#if IS_ENABLED(CONFIG_FPU) +struct task_struct *_switch_fpu(struct task_struct *prev, struct task_struct *next) +{ +#if !IS_ENABLED(CONFIG_LAZY_FPU) + unlazy_fpu(prev); +#endif + if (!(next->flags & PF_KTHREAD)) + clear_fpu(task_pt_regs(next)); + return prev; +} +#endif + /* * fill in the fpe structure for a core dump... 
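* (called from the ELF core dump path for the NT_PRFPREG note; a return of 0 means the task never used the FPU and the note is omitted)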
*/ int dump_fpu(struct pt_regs *regs, elf_fpregset_t * fpu) { int fpvalid = 0; +#if IS_ENABLED(CONFIG_FPU) + struct task_struct *tsk = current; + + fpvalid = tsk_used_math(tsk); + if (fpvalid) { + lose_fpu(); + memcpy(fpu, &tsk->thread.fpu, sizeof(*fpu)); + } +#endif return fpvalid; } diff --git a/arch/nds32/kernel/setup.c b/arch/nds32/kernel/setup.c index eacc79024879..31d29d92478e 100644 --- a/arch/nds32/kernel/setup.c +++ b/arch/nds32/kernel/setup.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #define HWCAP_MFUSR_PC 0x000001 @@ -38,8 +39,10 @@ #define HWCAP_FPU_DP 0x040000 #define HWCAP_V2 0x080000 #define HWCAP_DX_REGS 0x100000 +#define HWCAP_HWPRE 0x200000 unsigned long cpu_id, cpu_rev, cpu_cfgid; +bool has_fpu = false; char cpu_series; char *endianness = NULL; @@ -70,8 +73,10 @@ static const char *hwcap_str[] = { "div", "mac", "l2c", - "dx_regs", + "fpu_dp", "v2", + "dx_regs", + "hw_pre", NULL, }; @@ -136,6 +141,11 @@ static void __init dump_cpu_info(int cpu) (aliasing_num - 1) << PAGE_SHIFT; } #endif +#ifdef CONFIG_FPU + /* Disable fpu and enable when it is used. */ + if (has_fpu) + disable_fpu(); +#endif } static void __init setup_cpuinfo(void) @@ -180,9 +190,10 @@ static void __init setup_cpuinfo(void) if (cpu_cfgid & 0x0004) elf_hwcap |= HWCAP_EXT2; - if (cpu_cfgid & 0x0008) + if (cpu_cfgid & 0x0008) { elf_hwcap |= HWCAP_FPU; - + has_fpu = true; + } if (cpu_cfgid & 0x0010) elf_hwcap |= HWCAP_STRING; @@ -212,6 +223,11 @@ static void __init setup_cpuinfo(void) if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskL2C) elf_hwcap |= HWCAP_L2C; +#ifdef CONFIG_HW_PRE + if (__nds32__mfsr(NDS32_SR_MISC_CTL) & MISC_CTL_makHWPRE_EN) + elf_hwcap |= HWCAP_HWPRE; +#endif + tmp = __nds32__mfsr(NDS32_SR_CACHE_CTL); if (!IS_ENABLED(CONFIG_CPU_DCACHE_DISABLE)) tmp |= CACHE_CTL_mskDC_EN; diff --git a/arch/nds32/kernel/signal.c b/arch/nds32/kernel/signal.c index 5d01f6e33cb8..5b5be082cfa4 100644 --- a/arch/nds32/kernel/signal.c +++ b/arch/nds32/kernel/signal.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -20,6 +21,60 @@ struct rt_sigframe { struct siginfo info; struct ucontext uc; }; +#if IS_ENABLED(CONFIG_FPU) +static inline int restore_sigcontext_fpu(struct pt_regs *regs, + struct sigcontext __user *sc) +{ + struct task_struct *tsk = current; + unsigned long used_math_flag; + int ret = 0; + + clear_used_math(); + __get_user_error(used_math_flag, &sc->used_math_flag, ret); + + if (!used_math_flag) + return 0; + set_used_math(); + +#if IS_ENABLED(CONFIG_LAZY_FPU) + preempt_disable(); + if (current == last_task_used_math) { + last_task_used_math = NULL; + disable_ptreg_fpu(regs); + } + preempt_enable(); +#else + clear_fpu(regs); +#endif + + return __copy_from_user(&tsk->thread.fpu, &sc->fpu, + sizeof(struct fpu_struct)); +} + +static inline int setup_sigcontext_fpu(struct pt_regs *regs, + struct sigcontext __user *sc) +{ + struct task_struct *tsk = current; + int ret = 0; + + __put_user_error(used_math(), &sc->used_math_flag, ret); + + if (!used_math()) + return ret; + + preempt_disable(); +#if IS_ENABLED(CONFIG_LAZY_FPU) + if (last_task_used_math == tsk) + save_fpu(last_task_used_math); +#else + unlazy_fpu(tsk); +#endif + ret = __copy_to_user(&sc->fpu, &tsk->thread.fpu, + sizeof(struct fpu_struct)); + preempt_enable(); + return ret; +} +#endif static int restore_sigframe(struct pt_regs *regs, struct rt_sigframe __user * sf) @@ -69,7 +124,9 @@ static int restore_sigframe(struct pt_regs *regs, __get_user_error(regs->le, &sf->uc.uc_mcontext.zol.nds32_le, 
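	/*
	 * __get_user_error() records the fault status of each fetch in
	 * "err", so a whole run of copies can be issued back to back and
	 * checked once at the end; a minimal sketch of the pattern:
	 *
	 *	int err = 0;
	 *	__get_user_error(regs->le, &sf->uc.uc_mcontext.zol.nds32_le, err);
	 *	__get_user_error(regs->lb, &sf->uc.uc_mcontext.zol.nds32_lb, err);
	 *	if (err)
	 *		return err;
	 */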
err); __get_user_error(regs->lb, &sf->uc.uc_mcontext.zol.nds32_lb, err); #endif - +#if IS_ENABLED(CONFIG_FPU) + err |= restore_sigcontext_fpu(regs, &sf->uc.uc_mcontext); +#endif /* * Avoid sys_rt_sigreturn() restarting. */ @@ -153,6 +210,9 @@ setup_sigframe(struct rt_sigframe __user * sf, struct pt_regs *regs, __put_user_error(regs->le, &sf->uc.uc_mcontext.zol.nds32_le, err); __put_user_error(regs->lb, &sf->uc.uc_mcontext.zol.nds32_lb, err); #endif +#if IS_ENABLED(CONFIG_FPU) + err |= setup_sigcontext_fpu(regs, &sf->uc.uc_mcontext); +#endif __put_user_error(current->thread.trap_no, &sf->uc.uc_mcontext.trap_no, err); diff --git a/arch/nds32/kernel/sleep.S b/arch/nds32/kernel/sleep.S new file mode 100644 index 000000000000..ca4e61f3656f --- /dev/null +++ b/arch/nds32/kernel/sleep.S @@ -0,0 +1,131 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2017 Andes Technology Corporation */ + +#include + +.data +.global sp_tmp +sp_tmp: +.long + +.text +.globl suspend2ram +.globl cpu_resume + +suspend2ram: + pushm $r0, $r31 +#if defined(CONFIG_HWZOL) + mfusr $r0, $lc + mfusr $r1, $le + mfusr $r2, $lb +#endif + mfsr $r3, $mr0 + mfsr $r4, $mr1 + mfsr $r5, $mr4 + mfsr $r6, $mr6 + mfsr $r7, $mr7 + mfsr $r8, $mr8 + mfsr $r9, $ir0 + mfsr $r10, $ir1 + mfsr $r11, $ir2 + mfsr $r12, $ir3 + mfsr $r13, $ir9 + mfsr $r14, $ir10 + mfsr $r15, $ir12 + mfsr $r16, $ir13 + mfsr $r17, $ir14 + mfsr $r18, $ir15 + pushm $r0, $r19 +#if defined(CONFIG_FPU) + jal store_fpu_for_suspend +#endif + tlbop FlushAll + isb + + // transfer $sp from va to pa + sethi $r0, hi20(PAGE_OFFSET) + ori $r0, $r0, lo12(PAGE_OFFSET) + movi $r2, PHYS_OFFSET + sub $r1, $sp, $r0 + add $r2, $r1, $r2 + + // store pa($sp) to sp_tmp + sethi $r1, hi20(sp_tmp) + swi $r2, [$r1 + lo12(sp_tmp)] + + pushm $r16, $r25 + pushm $r29, $r30 +#ifdef CONFIG_CACHE_L2 + jal dcache_wb_all_level +#else + jal cpu_dcache_wb_all +#endif + popm $r29, $r30 + popm $r16, $r25 + + // get wake_mask and loop in standby + la $r1, wake_mask + lwi $r1, [$r1] +self_loop: + standby wake_grant + mfsr $r2, $ir15 + and $r2, $r1, $r2 + beqz $r2, self_loop + + // set ipc to resume address + la $r1, resume_addr + lwi $r1, [$r1] + mtsr $r1, $ipc + isb + + // reset psw, turn off the address translation + li $r2, 0x7000a + mtsr $r2, $ipsw + isb + + iret +cpu_resume: + // translate the address of sp_tmp variable to pa + la $r1, sp_tmp + sethi $r0, hi20(PAGE_OFFSET) + ori $r0, $r0, lo12(PAGE_OFFSET) + movi $r2, PHYS_OFFSET + sub $r1, $r1, $r0 + add $r1, $r1, $r2 + + // access the sp_tmp to get stack pointer + lwi $sp, [$r1] + + popm $r0, $r19 +#if defined(CONFIG_HWZOL) + mtusr $r0, $lb + mtusr $r1, $lc + mtusr $r2, $le +#endif + mtsr $r3, $mr0 + mtsr $r4, $mr1 + mtsr $r5, $mr4 + mtsr $r6, $mr6 + mtsr $r7, $mr7 + mtsr $r8, $mr8 + // set original psw to ipsw + mtsr $r9, $ir1 + + mtsr $r11, $ir2 + mtsr $r12, $ir3 + + // set ipc to RR + la $r13, RR + mtsr $r13, $ir9 + + mtsr $r14, $ir10 + mtsr $r15, $ir12 + mtsr $r16, $ir13 + mtsr $r17, $ir14 + mtsr $r18, $ir15 + popm $r0, $r31 + + isb + iret +RR: + ret diff --git a/arch/nds32/kernel/sys_nds32.c b/arch/nds32/kernel/sys_nds32.c index 9de93ab4c52b..0835277636ce 100644 --- a/arch/nds32/kernel/sys_nds32.c +++ b/arch/nds32/kernel/sys_nds32.c @@ -6,6 +6,8 @@ #include #include +#include +#include SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len, unsigned long, prot, unsigned long, flags, @@ -48,3 +50,33 @@ SYSCALL_DEFINE3(cacheflush, unsigned int, start, unsigned int, end, int, cache) return 0; } + +SYSCALL_DEFINE1(udftrap, int, 
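+	/*
+	 * udftrap controls whether denormal (underflow) results trap to
+	 * the kernel or are handled silently.  Hypothetical user-space
+	 * usage, assuming a __NR_udftrap syscall number:
+	 *
+	 *	int old = syscall(__NR_udftrap, ENABLE_UDFTRAP);
+	 *	...			// FP code that must trap on underflow
+	 *	syscall(__NR_udftrap, old ? ENABLE_UDFTRAP : DISABLE_UDFTRAP);
+	 *
+	 * The previous UDF_trap setting is returned so callers can
+	 * restore it.
+	 */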
option) +{ +#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) + int old_udftrap; + + if (!used_math()) { + load_fpu(&init_fpuregs); + current->thread.fpu.UDF_trap = init_fpuregs.UDF_trap; + set_used_math(); + } + + old_udftrap = current->thread.fpu.UDF_trap; + switch (option) { + case DISABLE_UDFTRAP: + current->thread.fpu.UDF_trap = 0; + break; + case ENABLE_UDFTRAP: + current->thread.fpu.UDF_trap = FPCSR_mskUDFE; + break; + case GET_UDFTRAP: + break; + default: + return -EINVAL; + } + return old_udftrap; +#else + return -ENOTSUPP; +#endif +} diff --git a/arch/nds32/kernel/traps.c b/arch/nds32/kernel/traps.c index 1496aab48998..5aa7c17da27a 100644 --- a/arch/nds32/kernel/traps.c +++ b/arch/nds32/kernel/traps.c @@ -12,6 +12,7 @@ #include #include +#include #include #include @@ -357,6 +358,21 @@ void do_dispatch_general(unsigned long entry, unsigned long addr, } else if (type == ETYPE_RESERVED_INSTRUCTION) { /* Reserved instruction */ do_revinsn(regs); + } else if (type == ETYPE_COPROCESSOR) { + /* Coprocessor */ +#if IS_ENABLED(CONFIG_FPU) + unsigned int fucop_exist = __nds32__mfsr(NDS32_SR_FUCOP_EXIST); + unsigned int cpid = ((itype & ITYPE_mskCPID) >> ITYPE_offCPID); + + if ((cpid == FPU_CPID) && + (fucop_exist & FUCOP_EXIST_mskCP0ISFPU)) { + unsigned int subtype = (itype & ITYPE_mskSTYPE); + + if (true == do_fpu_exception(subtype, regs)) + return; + } +#endif + unhandled_exceptions(entry, addr, type, regs); } else if (type == ETYPE_TRAP && swid == SWID_RAISE_INTERRUPT_LEVEL) { /* trap, used on v3 EDM target debugging workaround */ /* diff --git a/arch/nds32/math-emu/Makefile b/arch/nds32/math-emu/Makefile new file mode 100644 index 000000000000..947fe0c3d52f --- /dev/null +++ b/arch/nds32/math-emu/Makefile @@ -0,0 +1,7 @@ +# +# Makefile for the Linux/nds32 kernel FPU emulation. 
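+#
+# Each f*.c file wraps exactly one soft-fp operation in single (s) or
+# double (d) precision; fpuemu.o decodes the trapped instruction and
+# dispatches to the right wrapper.  Every wrapper has the same
+# unpack / operate / pack / latch-exceptions shape, sketched here for
+# single-precision add:
+#
+#	FP_DECL_S(A); FP_DECL_S(B); FP_DECL_S(R); FP_DECL_EX;
+#	FP_UNPACK_SP(A, fa); FP_UNPACK_SP(B, fb);
+#	FP_ADD_S(R, A, B);
+#	FP_PACK_SP(ft, R);
+#	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;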
+# + +obj-y := fpuemu.o \ + fdivd.o fmuld.o fsubd.o faddd.o fs2d.o fsqrtd.o fcmpd.o fnegs.o \ + fdivs.o fmuls.o fsubs.o fadds.o fd2s.o fsqrts.o fcmps.o fnegd.o diff --git a/arch/nds32/math-emu/faddd.c b/arch/nds32/math-emu/faddd.c new file mode 100644 index 000000000000..f7fd4e3c3904 --- /dev/null +++ b/arch/nds32/math-emu/faddd.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void faddd(void *ft, void *fa, void *fb) +{ + FP_DECL_D(A); + FP_DECL_D(B); + FP_DECL_D(R); + FP_DECL_EX; + + FP_UNPACK_DP(A, fa); + FP_UNPACK_DP(B, fb); + + FP_ADD_D(R, A, B); + + FP_PACK_DP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; + +} diff --git a/arch/nds32/math-emu/fadds.c b/arch/nds32/math-emu/fadds.c new file mode 100644 index 000000000000..f5af6ca8cca5 --- /dev/null +++ b/arch/nds32/math-emu/fadds.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fadds(void *ft, void *fa, void *fb) +{ + FP_DECL_S(A); + FP_DECL_S(B); + FP_DECL_S(R); + FP_DECL_EX; + + FP_UNPACK_SP(A, fa); + FP_UNPACK_SP(B, fb); + + FP_ADD_S(R, A, B); + + FP_PACK_SP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; + +} diff --git a/arch/nds32/math-emu/fcmpd.c b/arch/nds32/math-emu/fcmpd.c new file mode 100644 index 000000000000..0ea225abe880 --- /dev/null +++ b/arch/nds32/math-emu/fcmpd.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include +#include +#include +int fcmpd(void *ft, void *fa, void *fb, int cmpop) +{ + FP_DECL_D(A); + FP_DECL_D(B); + FP_DECL_EX; + long cmp; + + FP_UNPACK_DP(A, fa); + FP_UNPACK_DP(B, fb); + + FP_CMP_D(cmp, A, B, SF_CUN); + cmp += 2; + if (cmp == SF_CGT) + *(long *)ft = 0; + else + *(long *)ft = (cmp & cmpop) ? 1 : 0; + + return 0; +} diff --git a/arch/nds32/math-emu/fcmps.c b/arch/nds32/math-emu/fcmps.c new file mode 100644 index 000000000000..681480758213 --- /dev/null +++ b/arch/nds32/math-emu/fcmps.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include +#include +#include +int fcmps(void *ft, void *fa, void *fb, int cmpop) +{ + FP_DECL_S(A); + FP_DECL_S(B); + FP_DECL_EX; + long cmp; + + FP_UNPACK_SP(A, fa); + FP_UNPACK_SP(B, fb); + + FP_CMP_S(cmp, A, B, SF_CUN); + cmp += 2; + if (cmp == SF_CGT) + *(int *)ft = 0x0; + else + *(int *)ft = (cmp & cmpop) ? 
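+	/*
+	 * FP_CMP_S() yields -1/0/+1 for less/equal/greater (unordered
+	 * operands take the SF_CUN path); "cmp += 2" maps -1/0/+1 onto
+	 * the SF_CLT/SF_CEQ/SF_CGT encoding so the result can be tested
+	 * as a bitmask against cmpop from cmptab[], e.g. "less or equal"
+	 * is SF_CLT | SF_CEQ.  Greater-than always stores 0 because no
+	 * fcmp* condition in the table includes it.
+	 */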
0x1 : 0x0; + + return 0; +} diff --git a/arch/nds32/math-emu/fd2s.c b/arch/nds32/math-emu/fd2s.c new file mode 100644 index 000000000000..1328371e8170 --- /dev/null +++ b/arch/nds32/math-emu/fd2s.c @@ -0,0 +1,22 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +#include +void fd2s(void *ft, void *fa) +{ + FP_DECL_D(A); + FP_DECL_S(R); + FP_DECL_EX; + + FP_UNPACK_DP(A, fa); + + FP_CONV(S, D, 1, 2, R, A); + + FP_PACK_SP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fdivd.c b/arch/nds32/math-emu/fdivd.c new file mode 100644 index 000000000000..458e7e98b08e --- /dev/null +++ b/arch/nds32/math-emu/fdivd.c @@ -0,0 +1,27 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation + +#include +#include +#include +#include + +void fdivd(void *ft, void *fa, void *fb) +{ + FP_DECL_D(A); + FP_DECL_D(B); + FP_DECL_D(R); + FP_DECL_EX; + + FP_UNPACK_DP(A, fa); + FP_UNPACK_DP(B, fb); + + if (B_c == FP_CLS_ZERO && A_c != FP_CLS_ZERO) + FP_SET_EXCEPTION(FP_EX_DIVZERO); + + FP_DIV_D(R, A, B); + + FP_PACK_DP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fdivs.c b/arch/nds32/math-emu/fdivs.c new file mode 100644 index 000000000000..c7d202159ce2 --- /dev/null +++ b/arch/nds32/math-emu/fdivs.c @@ -0,0 +1,26 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fdivs(void *ft, void *fa, void *fb) +{ + FP_DECL_S(A); + FP_DECL_S(B); + FP_DECL_S(R); + FP_DECL_EX; + + FP_UNPACK_SP(A, fa); + FP_UNPACK_SP(B, fb); + + if (B_c == FP_CLS_ZERO && A_c != FP_CLS_ZERO) + FP_SET_EXCEPTION(FP_EX_DIVZERO); + + FP_DIV_S(R, A, B); + + FP_PACK_SP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fmuld.c b/arch/nds32/math-emu/fmuld.c new file mode 100644 index 000000000000..f3c77a45ddc2 --- /dev/null +++ b/arch/nds32/math-emu/fmuld.c @@ -0,0 +1,23 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fmuld(void *ft, void *fa, void *fb) +{ + FP_DECL_D(A); + FP_DECL_D(B); + FP_DECL_D(R); + FP_DECL_EX; + + FP_UNPACK_DP(A, fa); + FP_UNPACK_DP(B, fb); + + FP_MUL_D(R, A, B); + + FP_PACK_DP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fmuls.c b/arch/nds32/math-emu/fmuls.c new file mode 100644 index 000000000000..cf150df938f9 --- /dev/null +++ b/arch/nds32/math-emu/fmuls.c @@ -0,0 +1,23 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fmuls(void *ft, void *fa, void *fb) +{ + FP_DECL_S(A); + FP_DECL_S(B); + FP_DECL_S(R); + FP_DECL_EX; + + FP_UNPACK_SP(A, fa); + FP_UNPACK_SP(B, fb); + + FP_MUL_S(R, A, B); + + FP_PACK_SP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fnegd.c b/arch/nds32/math-emu/fnegd.c new file mode 100644 index 000000000000..de7ea6a0873e --- /dev/null +++ b/arch/nds32/math-emu/fnegd.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fnegd(void *ft, void *fa) +{ + FP_DECL_D(A); + FP_DECL_D(R); + FP_DECL_EX; + + FP_UNPACK_DP(A, fa); + + FP_NEG_D(R, A); + + FP_PACK_DP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git 
a/arch/nds32/math-emu/fnegs.c b/arch/nds32/math-emu/fnegs.c new file mode 100644 index 000000000000..07270b326a77 --- /dev/null +++ b/arch/nds32/math-emu/fnegs.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fnegs(void *ft, void *fa) +{ + FP_DECL_S(A); + FP_DECL_S(R); + FP_DECL_EX; + + FP_UNPACK_SP(A, fa); + + FP_NEG_S(R, A); + + FP_PACK_SP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fpuemu.c b/arch/nds32/math-emu/fpuemu.c new file mode 100644 index 000000000000..75cf1643fa78 --- /dev/null +++ b/arch/nds32/math-emu/fpuemu.c @@ -0,0 +1,357 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation + +#include +#include +#include +#include +#include + +#define DPFROMREG(dp, x) (dp = (void *)((unsigned long *)fpu_reg + 2*x)) +#ifdef __NDS32_EL__ +#define SPFROMREG(sp, x)\ + ((sp) = (void *)((unsigned long *)fpu_reg + (x^1))) +#else +#define SPFROMREG(sp, x) ((sp) = (void *)((unsigned long *)fpu_reg + x)) +#endif + +#define DEF3OP(name, p, f1, f2) \ +void fpemu_##name##p(void *ft, void *fa, void *fb) \ +{ \ + f1(fa, fa, fb); \ + f2(ft, ft, fa); \ +} + +#define DEF3OPNEG(name, p, f1, f2, f3) \ +void fpemu_##name##p(void *ft, void *fa, void *fb) \ +{ \ + f1(fa, fa, fb); \ + f2(ft, ft, fa); \ + f3(ft, ft); \ +} +DEF3OP(fmadd, s, fmuls, fadds); +DEF3OP(fmsub, s, fmuls, fsubs); +DEF3OP(fmadd, d, fmuld, faddd); +DEF3OP(fmsub, d, fmuld, fsubd); +DEF3OPNEG(fnmadd, s, fmuls, fadds, fnegs); +DEF3OPNEG(fnmsub, s, fmuls, fsubs, fnegs); +DEF3OPNEG(fnmadd, d, fmuld, faddd, fnegd); +DEF3OPNEG(fnmsub, d, fmuld, fsubd, fnegd); + +static const unsigned char cmptab[8] = { + SF_CEQ, + SF_CEQ, + SF_CLT, + SF_CLT, + SF_CLT | SF_CEQ, + SF_CLT | SF_CEQ, + SF_CUN, + SF_CUN +}; + +enum ARGTYPE { + S1S = 1, + S2S, + S1D, + CS, + D1D, + D2D, + D1S, + CD +}; +union func_t { + void (*t)(void *ft, void *fa, void *fb); + void (*b)(void *ft, void *fa); +}; +/* + * Emulate a single FPU arithmetic instruction. 
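+ *
+ * The COP0 opcode field selects single vs. double precision and
+ * arithmetic vs. compare; "ftype" then tells the dispatch code below
+ * how many operands to pull from the register file and at what width,
+ * while "func" points at the matching soft-fp wrapper.  Returns 0 on
+ * success, or a signal number: SIGILL for an undecodable encoding,
+ * SIGFPE when the operation raises an enabled exception.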
+ */ +static int fpu_emu(struct fpu_struct *fpu_reg, unsigned long insn) +{ + int rfmt; /* resulting format */ + union func_t func; + int ftype = 0; + + switch (rfmt = NDS32Insn_OPCODE_COP0(insn)) { + case fs1_op:{ + switch (NDS32Insn_OPCODE_BIT69(insn)) { + case fadds_op: + func.t = fadds; + ftype = S2S; + break; + case fsubs_op: + func.t = fsubs; + ftype = S2S; + break; + case fmadds_op: + func.t = fpemu_fmadds; + ftype = S2S; + break; + case fmsubs_op: + func.t = fpemu_fmsubs; + ftype = S2S; + break; + case fnmadds_op: + func.t = fpemu_fnmadds; + ftype = S2S; + break; + case fnmsubs_op: + func.t = fpemu_fnmsubs; + ftype = S2S; + break; + case fmuls_op: + func.t = fmuls; + ftype = S2S; + break; + case fdivs_op: + func.t = fdivs; + ftype = S2S; + break; + case fs1_f2op_op: + switch (NDS32Insn_OPCODE_BIT1014(insn)) { + case fs2d_op: + func.b = fs2d; + ftype = S1D; + break; + case fsqrts_op: + func.b = fsqrts; + ftype = S1S; + break; + default: + return SIGILL; + } + break; + default: + return SIGILL; + } + break; + } + case fs2_op: + switch (NDS32Insn_OPCODE_BIT69(insn)) { + case fcmpeqs_op: + case fcmpeqs_e_op: + case fcmplts_op: + case fcmplts_e_op: + case fcmples_op: + case fcmples_e_op: + case fcmpuns_op: + case fcmpuns_e_op: + ftype = CS; + break; + default: + return SIGILL; + } + break; + case fd1_op:{ + switch (NDS32Insn_OPCODE_BIT69(insn)) { + case faddd_op: + func.t = faddd; + ftype = D2D; + break; + case fsubd_op: + func.t = fsubd; + ftype = D2D; + break; + case fmaddd_op: + func.t = fpemu_fmaddd; + ftype = D2D; + break; + case fmsubd_op: + func.t = fpemu_fmsubd; + ftype = D2D; + break; + case fnmaddd_op: + func.t = fpemu_fnmaddd; + ftype = D2D; + break; + case fnmsubd_op: + func.t = fpemu_fnmsubd; + ftype = D2D; + break; + case fmuld_op: + func.t = fmuld; + ftype = D2D; + break; + case fdivd_op: + func.t = fdivd; + ftype = D2D; + break; + case fd1_f2op_op: + switch (NDS32Insn_OPCODE_BIT1014(insn)) { + case fd2s_op: + func.b = fd2s; + ftype = D1S; + break; + case fsqrtd_op: + func.b = fsqrtd; + ftype = D1D; + break; + default: + return SIGILL; + } + break; + default: + return SIGILL; + + } + break; + } + + case fd2_op: + switch (NDS32Insn_OPCODE_BIT69(insn)) { + case fcmpeqd_op: + case fcmpeqd_e_op: + case fcmpltd_op: + case fcmpltd_e_op: + case fcmpled_op: + case fcmpled_e_op: + case fcmpund_op: + case fcmpund_e_op: + ftype = CD; + break; + default: + return SIGILL; + } + break; + + default: + return SIGILL; + } + + switch (ftype) { + case S1S:{ + void *ft, *fa; + + SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn)); + SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + func.b(ft, fa); + break; + } + case S2S:{ + void *ft, *fa, *fb; + + SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn)); + SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + SPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn)); + func.t(ft, fa, fb); + break; + } + case S1D:{ + void *ft, *fa; + + DPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn)); + SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + func.b(ft, fa); + break; + } + case CS:{ + unsigned int cmpop = NDS32Insn_OPCODE_BIT69(insn); + void *ft, *fa, *fb; + + SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn)); + SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + SPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn)); + if (cmpop < 0x8) { + cmpop = cmptab[cmpop]; + fcmps(ft, fa, fb, cmpop); + } else + return SIGILL; + break; + } + case D1D:{ + void *ft, *fa; + + DPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn)); + DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + func.b(ft, fa); + break; + } + case D2D:{ + void *ft, *fa, *fb; + + DPFROMREG(ft, 
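+			/*
+			 * SPFROMREG()/DPFROMREG() map a register number
+			 * from the instruction to a pointer into the
+			 * saved FPU register file: doubles sit at word
+			 * offset 2*x, and on little-endian
+			 * (__NDS32_EL__) the single-precision index is
+			 * XORed with 1 so the 32-bit value lands on the
+			 * right half of the underlying 64-bit register.
+			 */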
NDS32Insn_OPCODE_Rt(insn)); + DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + DPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn)); + func.t(ft, fa, fb); + break; + } + case D1S:{ + void *ft, *fa; + + SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn)); + DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + func.b(ft, fa); + break; + } + case CD:{ + unsigned int cmpop = NDS32Insn_OPCODE_BIT69(insn); + void *ft, *fa, *fb; + + SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn)); + DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn)); + DPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn)); + if (cmpop < 0x8) { + cmpop = cmptab[cmpop]; + fcmpd(ft, fa, fb, cmpop); + } else + return SIGILL; + break; + } + default: + return SIGILL; + } + + /* + * If an exception is required, generate a tidy SIGFPE exception. + */ +#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) + if (((fpu_reg->fpcsr << 5) & fpu_reg->fpcsr & FPCSR_mskALLE_NO_UDFE) || + ((fpu_reg->fpcsr & FPCSR_mskUDF) && (fpu_reg->UDF_trap))) +#else + if ((fpu_reg->fpcsr << 5) & fpu_reg->fpcsr & FPCSR_mskALLE) +#endif + return SIGFPE; + return 0; +} + + +int do_fpuemu(struct pt_regs *regs, struct fpu_struct *fpu) +{ + unsigned long insn = 0, addr = regs->ipc; + unsigned long emulpc, contpc; + unsigned char *pc = (void *)&insn; + char c; + int i = 0, ret; + + for (i = 0; i < 4; i++) { + if (__get_user(c, (unsigned char *)addr++)) + return SIGBUS; + *pc++ = c; + } + + insn = be32_to_cpu(insn); + + emulpc = regs->ipc; + contpc = regs->ipc + 4; + + if (NDS32Insn_OPCODE(insn) != cop0_op) + return SIGILL; + switch (NDS32Insn_OPCODE_COP0(insn)) { + case fs1_op: + case fs2_op: + case fd1_op: + case fd2_op: + { + /* a real fpu computation instruction */ + ret = fpu_emu(fpu, insn); + if (!ret) + regs->ipc = contpc; + } + break; + + default: + return SIGILL; + } + + return ret; +} diff --git a/arch/nds32/math-emu/fs2d.c b/arch/nds32/math-emu/fs2d.c new file mode 100644 index 000000000000..0e8db9035631 --- /dev/null +++ b/arch/nds32/math-emu/fs2d.c @@ -0,0 +1,23 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation + +#include +#include +#include +#include +#include + +void fs2d(void *ft, void *fa) +{ + FP_DECL_S(A); + FP_DECL_D(R); + FP_DECL_EX; + + FP_UNPACK_SP(A, fa); + + FP_CONV(D, S, 2, 1, R, A); + + FP_PACK_DP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fsqrtd.c b/arch/nds32/math-emu/fsqrtd.c new file mode 100644 index 000000000000..c3a8dbd81d4e --- /dev/null +++ b/arch/nds32/math-emu/fsqrtd.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation + +#include +#include +#include +#include +void fsqrtd(void *ft, void *fa) +{ + FP_DECL_D(A); + FP_DECL_D(R); + FP_DECL_EX; + + FP_UNPACK_DP(A, fa); + + FP_SQRT_D(R, A); + + FP_PACK_DP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fsqrts.c b/arch/nds32/math-emu/fsqrts.c new file mode 100644 index 000000000000..4c6f94b27328 --- /dev/null +++ b/arch/nds32/math-emu/fsqrts.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation + +#include +#include +#include +#include +void fsqrts(void *ft, void *fa) +{ + FP_DECL_S(A); + FP_DECL_S(R); + FP_DECL_EX; + + FP_UNPACK_SP(A, fa); + + FP_SQRT_S(R, A); + + FP_PACK_SP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fsubd.c b/arch/nds32/math-emu/fsubd.c new file mode 100644 index 000000000000..81b6a0d02a1f --- /dev/null +++ b/arch/nds32/math-emu/fsubd.c @@ -0,0 +1,27 @@ +// 
SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fsubd(void *ft, void *fa, void *fb) +{ + + FP_DECL_D(A); + FP_DECL_D(B); + FP_DECL_D(R); + FP_DECL_EX; + + FP_UNPACK_DP(A, fa); + FP_UNPACK_DP(B, fb); + + if (B_c != FP_CLS_NAN) + B_s ^= 1; + + FP_ADD_D(R, A, B); + + FP_PACK_DP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/math-emu/fsubs.c b/arch/nds32/math-emu/fsubs.c new file mode 100644 index 000000000000..61ddd9708465 --- /dev/null +++ b/arch/nds32/math-emu/fsubs.c @@ -0,0 +1,27 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2005-2018 Andes Technology Corporation +#include + +#include +#include +#include +void fsubs(void *ft, void *fa, void *fb) +{ + + FP_DECL_S(A); + FP_DECL_S(B); + FP_DECL_S(R); + FP_DECL_EX; + + FP_UNPACK_SP(A, fa); + FP_UNPACK_SP(B, fb); + + if (B_c != FP_CLS_NAN) + B_s ^= 1; + + FP_ADD_S(R, A, B); + + FP_PACK_SP(ft, R); + + __FPU_FPCSR |= FP_CUR_EXCEPTIONS; +} diff --git a/arch/nds32/mm/Makefile b/arch/nds32/mm/Makefile index 6b6855852223..7c5c15ad854a 100644 --- a/arch/nds32/mm/Makefile +++ b/arch/nds32/mm/Makefile @@ -4,4 +4,8 @@ obj-y := extable.o tlb.o \ obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o obj-$(CONFIG_HIGHMEM) += highmem.o -CFLAGS_proc-n13.o += -fomit-frame-pointer + +ifdef CONFIG_FUNCTION_TRACER +CFLAGS_REMOVE_proc.o = $(CC_FLAGS_FTRACE) +endif +CFLAGS_proc.o += -fomit-frame-pointer diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c index b740534b152c..68d5f2a27f38 100644 --- a/arch/nds32/mm/fault.c +++ b/arch/nds32/mm/fault.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include @@ -169,8 +170,6 @@ good_area: mask = VM_EXEC; else { mask = VM_READ | VM_WRITE; - if (vma->vm_flags & VM_WRITE) - flags |= FAULT_FLAG_WRITE; } } else if (entry == ENTRY_TLB_MISC) { switch (error_code & ITYPE_mskETYPE) { @@ -231,11 +230,17 @@ good_area: * attempt. If we go through a retry, it is extremely likely that the * page will be found in page cache at that point. 
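	 *
	 * The perf_sw_event() calls below feed the software PMU: every
	 * fault bumps PERF_COUNT_SW_PAGE_FAULTS, and once the outcome is
	 * known it is also counted as PERF_COUNT_SW_PAGE_FAULTS_MAJ or
	 * _MIN, so "perf stat -e page-faults,major-faults,minor-faults"
	 * works on nds32 like on other architectures.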
*/ + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr); if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) + if (fault & VM_FAULT_MAJOR) { tsk->maj_flt++; - else + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, + 1, regs, addr); + } else { tsk->min_flt++; + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, + 1, regs, addr); + } if (fault & VM_FAULT_RETRY) { flags &= ~FAULT_FLAG_ALLOW_RETRY; flags |= FAULT_FLAG_TRIED; diff --git a/arch/nds32/mm/init.c b/arch/nds32/mm/init.c index 131104bd2538..253f79fc7196 100644 --- a/arch/nds32/mm/init.c +++ b/arch/nds32/mm/init.c @@ -21,8 +21,6 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_gathers); DEFINE_SPINLOCK(anon_alias_lock); extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; -extern unsigned long phys_initrd_start; -extern unsigned long phys_initrd_size; /* * empty_zero_page is a special page that is used for diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig index 7e95506e957a..f6c4b0f49997 100644 --- a/arch/nios2/Kconfig +++ b/arch/nios2/Kconfig @@ -4,7 +4,6 @@ config NIOS2 select ARCH_HAS_SYNC_DMA_FOR_CPU select ARCH_HAS_SYNC_DMA_FOR_DEVICE select ARCH_NO_SWAP - select DMA_DIRECT_OPS select TIMER_OF select GENERIC_ATOMIC64 select GENERIC_CLOCKEVENTS diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig index 285f7d05c8ed..09ab59e942ae 100644 --- a/arch/openrisc/Kconfig +++ b/arch/openrisc/Kconfig @@ -7,7 +7,6 @@ config OPENRISC def_bool y select ARCH_HAS_SYNC_DMA_FOR_DEVICE - select DMA_DIRECT_OPS select OF select OF_EARLY_FLATTREE select IRQ_DOMAIN @@ -139,7 +138,7 @@ config SMP If you don't know what to do here, say N. -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" config OPENRISC_NO_SPR_SR_DSX bool "use SPR_SR_DSX software emulation" if OR1K_1200 diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c index 159336adfa2f..f79457cb3741 100644 --- a/arch/openrisc/kernel/dma.c +++ b/arch/openrisc/kernel/dma.c @@ -89,7 +89,7 @@ arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, .mm = &init_mm }; - page = alloc_pages_exact(size, gfp); + page = alloc_pages_exact(size, gfp | __GFP_ZERO); if (!page) return NULL; diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig index 92a339ee28b3..7ca2c3ebad64 100644 --- a/arch/parisc/Kconfig +++ b/arch/parisc/Kconfig @@ -11,12 +11,14 @@ config PARISC select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_STRICT_KERNEL_RWX select ARCH_HAS_UBSAN_SANITIZE_ALL + select ARCH_NO_SG_CHAIN select ARCH_SUPPORTS_MEMORY_FAILURE select RTC_CLASS select RTC_DRV_GENERIC select INIT_ALL_POSSIBLE select BUG select BUILDTIME_EXTABLE_SORT + select HAVE_PCI select HAVE_PERF_EVENTS select HAVE_KERNEL_BZIP2 select HAVE_KERNEL_GZIP @@ -184,7 +186,6 @@ config PA11 depends on PA7000 || PA7100LC || PA7200 || PA7300LC select ARCH_HAS_SYNC_DMA_FOR_CPU select ARCH_HAS_SYNC_DMA_FOR_DEVICE - select DMA_DIRECT_OPS select DMA_NONCOHERENT_CACHE_SYNC config PREFETCH diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c index 04c48f1ef3fb..239162355b58 100644 --- a/arch/parisc/kernel/pci-dma.c +++ b/arch/parisc/kernel/pci-dma.c @@ -404,7 +404,7 @@ static void *pcxl_dma_alloc(struct device *dev, size_t size, order = get_order(size); size = 1 << (order + PAGE_SHIFT); vaddr = pcxl_alloc_range(size); - paddr = __get_free_pages(flag, order); + paddr = __get_free_pages(flag | __GFP_ZERO, order); flush_kernel_dcache_range(paddr, size); paddr = __pa(paddr); map_uncached_pages(vaddr, size, paddr); @@ -429,7 +429,7 @@ static void *pcx_dma_alloc(struct device *dev, size_t size, if ((attrs & 
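	/*
	 * dma_alloc_coherent() is now expected to return zeroed memory on
	 * every architecture, so each arch back end ORs __GFP_ZERO into
	 * its page-allocator flags itself, as in the two hunks here, e.g.:
	 *
	 *	addr = (void *)__get_free_pages(flag | __GFP_ZERO,
	 *					get_order(size));
	 */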
DMA_ATTR_NON_CONSISTENT) == 0) return NULL; - addr = (void *)__get_free_pages(flag, get_order(size)); + addr = (void *)__get_free_pages(flag | __GFP_ZERO, get_order(size)); if (addr) *dma_handle = (dma_addr_t)virt_to_phys(addr); diff --git a/arch/parisc/kernel/setup.c b/arch/parisc/kernel/setup.c index 2b108ee3b217..f2cf86ac279b 100644 --- a/arch/parisc/kernel/setup.c +++ b/arch/parisc/kernel/setup.c @@ -99,10 +99,6 @@ void __init dma_ops_init(void) case pcxl2: pa7300lc_init(); - case pcxl: /* falls through */ - case pcxs: - case pcxt: - hppa_dma_ops = &dma_direct_ops; break; default: break; diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 8be31261aec8..2890d36eb531 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -128,6 +128,7 @@ config PPC # # Please keep this list sorted alphabetically. # + select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_DMA_SET_COHERENT_MASK select ARCH_HAS_ELF_RANDOMIZE @@ -138,7 +139,6 @@ config PPC select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_MEMBARRIER_CALLBACKS select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE && PPC64 - select ARCH_HAS_SG_CHAIN select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION) select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAS_UACCESS_FLUSHCACHE if PPC64 @@ -168,6 +168,7 @@ config PPC select GENERIC_CPU_VULNERABILITIES if PPC_BARRIER_NOSPEC select GENERIC_IRQ_SHOW select GENERIC_IRQ_SHOW_LEVEL + select GENERIC_PCI_IOMAP if PCI select GENERIC_SMP_IDLE_THREAD select GENERIC_STRNCPY_FROM_USER select GENERIC_STRNLEN_USER @@ -235,6 +236,8 @@ config PPC select OF_RESERVED_MEM select OLD_SIGACTION if PPC32 select OLD_SIGSUSPEND + select PCI_DOMAINS if PCI + select PCI_SYSCALL if PCI select RTC_LIB select SPARSE_IRQ select SYSCTL_EXCEPTION_TRACE @@ -374,9 +377,9 @@ config PPC_ADV_DEBUG_DAC_RANGE depends on PPC_ADV_DEBUG_REGS && 44x default y -config ZONE_DMA32 +config ZONE_DMA bool - default y if PPC64 + default y if PPC_BOOK3E_64 config PGTABLE_LEVELS int @@ -393,7 +396,7 @@ config HIGHMEM bool "High memory support" depends on PPC32 -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" config HUGETLB_PAGE_SIZE_VARIABLE bool @@ -556,7 +559,7 @@ config RELOCATABLE_TEST config CRASH_DUMP bool "Build a dump capture kernel" - depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP) + depends on PPC64 || PPC_BOOK3S_32 || FSL_BOOKE || (44x && !SMP) select RELOCATABLE if PPC64 || 44x || FSL_BOOKE help Build a kernel suitable for use as a dump capture kernel. @@ -816,7 +819,7 @@ config ARCH_WANTS_FREEZER_CONTROL def_bool y depends on ADB_PMU -source kernel/power/Kconfig +source "kernel/power/Kconfig" config SECCOMP bool "Enable seccomp to safely compute untrusted bytecode" @@ -869,10 +872,6 @@ config ISA have an IBM RS/6000 or pSeries machine, say Y. If you have an embedded board, consult your board documentation. -config ZONE_DMA - bool - default y - config GENERIC_ISA_DMA bool depends on ISA_DMA_API @@ -883,9 +882,6 @@ config PPC_INDIRECT_PCI depends on PCI default y if 40x || 44x -config EISA - bool - config SBUS bool @@ -930,59 +926,20 @@ config FSL_GTM help Freescale General-purpose Timers support -# Platforms that what PCI turned unconditionally just do select PCI -# in their config node. 
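# (With the generic PCI rework, the "config PCI" prompt lives in
# drivers/pci/Kconfig and architectures advertise support by selecting
# HAVE_PCI, which is why the powerpc-local PCI, PCI_DOMAINS and
# PCI_SYSCALL definitions below can be dropped.)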
Platforms that want to choose at config -# time should select PPC_PCI_CHOICE -config PPC_PCI_CHOICE - bool - -config PCI - bool "PCI support" if PPC_PCI_CHOICE - default y if !40x && !CPM2 && !PPC_8xx && !PPC_83xx \ - && !PPC_85xx && !PPC_86xx && !GAMECUBE_COMMON - select GENERIC_PCI_IOMAP - help - Find out whether your system includes a PCI bus. PCI is the name of - a bus system, i.e. the way the CPU talks to the other stuff inside - your box. If you say Y here, the kernel will include drivers and - infrastructure code to support PCI bus devices. - -config PCI_DOMAINS - def_bool PCI - -config PCI_SYSCALL - def_bool PCI - config PCI_8260 bool depends on PCI && 8260 select PPC_INDIRECT_PCI default y -source "drivers/pci/Kconfig" - -source "drivers/pcmcia/Kconfig" - -config HAS_RAPIDIO - bool - -config RAPIDIO - tristate "RapidIO support" - depends on HAS_RAPIDIO || PCI - help - If you say Y here, the kernel will include drivers and - infrastructure code to support RapidIO interconnect devices. - config FSL_RIO bool "Freescale Embedded SRIO Controller support" - depends on RAPIDIO = y && HAS_RAPIDIO + depends on RAPIDIO = y && HAVE_RAPIDIO default "n" ---help--- Include support for RapidIO controller on Freescale embedded processors (MPC8548, MPC8641, etc). -source "drivers/rapidio/Kconfig" - endmenu config NONSTATIC_KERNEL @@ -1096,7 +1053,7 @@ config PHYSICAL_START_BOOL config PHYSICAL_START hex "Physical address where the kernel is loaded" if PHYSICAL_START_BOOL - default "0x02000000" if PPC_STD_MMU && CRASH_DUMP && !NONSTATIC_KERNEL + default "0x02000000" if PPC_BOOK3S && CRASH_DUMP && !NONSTATIC_KERNEL default "0x00000000" config PHYSICAL_ALIGN @@ -1146,7 +1103,7 @@ config PIN_TLB_DATA config PIN_TLB_IMMR bool "Pinned TLB for IMMR" - depends on PIN_TLB + depends on PIN_TLB || PPC_EARLY_DEBUG_CPM default y config PIN_TLB_TEXT diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile index 854199c9ab7e..488c9edffa58 100644 --- a/arch/powerpc/Makefile +++ b/arch/powerpc/Makefile @@ -30,6 +30,10 @@ endif endif endif +ifdef CONFIG_PPC_BOOK3S_32 +KBUILD_CFLAGS += -mcpu=powerpc +endif + ifeq ($(CROSS_COMPILE),) KBUILD_DEFCONFIG := $(shell uname -m)_defconfig else @@ -152,7 +156,14 @@ endif CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcmodel=medium,$(call cc-option,-mminimal-toc)) CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mno-pointers-to-nested-functions) -CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 $(MULTIPLEWORD) +# Clang unconditionally reserves r2 on ppc32 and does not support the flag +# https://bugs.llvm.org/show_bug.cgi?id=39555 +CFLAGS-$(CONFIG_PPC32) := $(call cc-option, -ffixed-r2) + +# Clang doesn't support -mmultiple / -mno-multiple +# https://bugs.llvm.org/show_bug.cgi?id=39556 +CFLAGS-$(CONFIG_PPC32) += $(call cc-option, $(MULTIPLEWORD)) + CFLAGS-$(CONFIG_PPC32) += $(call cc-option,-mno-readonly-in-sdata) ifdef CONFIG_PPC_BOOK3S_64 @@ -237,10 +248,6 @@ KBUILD_CFLAGS += $(call cc-option,-fno-dwarf2-cfi-asm) # often slow when they are implemented at all KBUILD_CFLAGS += $(call cc-option,-mno-string) -ifdef CONFIG_6xx -KBUILD_CFLAGS += -mcpu=powerpc -endif - cpu-as-$(CONFIG_4xx) += -Wa,-m405 cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec) cpu-as-$(CONFIG_E200) += -Wa,-me200 @@ -313,6 +320,14 @@ PHONY += ppc64le_defconfig ppc64le_defconfig: $(call merge_into_defconfig,ppc64_defconfig,le) +PHONY += ppc64le_guest_defconfig +ppc64le_guest_defconfig: + $(call merge_into_defconfig,ppc64_defconfig,le guest) + +PHONY += ppc64_guest_defconfig +ppc64_guest_defconfig: + $(call 
merge_into_defconfig,ppc64_defconfig,be guest) + PHONY += powernv_be_defconfig powernv_be_defconfig: $(call merge_into_defconfig,powernv_defconfig,be) @@ -398,6 +413,9 @@ archclean: archprepare: checkbin +archheaders: + $(Q)$(MAKE) $(build)=arch/powerpc/kernel/syscalls all + ifdef CONFIG_STACKPROTECTOR prepare: stack_protector_prepare diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile index ed9883169190..0e8dadd011bc 100644 --- a/arch/powerpc/boot/Makefile +++ b/arch/powerpc/boot/Makefile @@ -55,6 +55,11 @@ BOOTAFLAGS := -D__ASSEMBLY__ $(BOOTCFLAGS) -traditional -nostdinc BOOTARFLAGS := -cr$(KBUILD_ARFLAGS) +ifdef CONFIG_CC_IS_CLANG +BOOTCFLAGS += $(CLANG_FLAGS) +BOOTAFLAGS += $(CLANG_FLAGS) +endif + ifdef CONFIG_DEBUG_INFO BOOTCFLAGS += -g endif diff --git a/arch/powerpc/boot/dts/bamboo.dts b/arch/powerpc/boot/dts/bamboo.dts index 538e42b1120d..b5861fa3836c 100644 --- a/arch/powerpc/boot/dts/bamboo.dts +++ b/arch/powerpc/boot/dts/bamboo.dts @@ -268,8 +268,10 @@ /* Outbound ranges, one memory and one IO, * later cannot be changed. Chip supports a second * IO range but we don't use it for now + * The chip also supports a larger memory range but + * it's not naturally aligned, so our code will break */ - ranges = <0x02000000 0x00000000 0xa0000000 0x00000000 0xa0000000 0x00000000 0x40000000 + ranges = <0x02000000 0x00000000 0xa0000000 0x00000000 0xa0000000 0x00000000 0x20000000 0x02000000 0x00000000 0x00000000 0x00000000 0xe0000000 0x00000000 0x00100000 0x01000000 0x00000000 0x00000000 0x00000000 0xe8000000 0x00000000 0x00010000>; diff --git a/arch/powerpc/boot/dts/fsl/b4420si-pre.dtsi b/arch/powerpc/boot/dts/fsl/b4420si-pre.dtsi index 88d8423f8ac5..bb7b9b9f3f5f 100644 --- a/arch/powerpc/boot/dts/fsl/b4420si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/b4420si-pre.dtsi @@ -70,14 +70,14 @@ cpu0: PowerPC,e6500@0 { device_type = "cpu"; reg = <0 1>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu1: PowerPC,e6500@2 { device_type = "cpu"; reg = <2 3>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; diff --git a/arch/powerpc/boot/dts/fsl/b4860si-pre.dtsi b/arch/powerpc/boot/dts/fsl/b4860si-pre.dtsi index f3f968c51f4b..388ba1b15f8c 100644 --- a/arch/powerpc/boot/dts/fsl/b4860si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/b4860si-pre.dtsi @@ -75,28 +75,28 @@ cpu0: PowerPC,e6500@0 { device_type = "cpu"; reg = <0 1>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu1: PowerPC,e6500@2 { device_type = "cpu"; reg = <2 3>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu2: PowerPC,e6500@4 { device_type = "cpu"; reg = <4 5>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu3: PowerPC,e6500@6 { device_type = "cpu"; reg = <6 7>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; diff --git a/arch/powerpc/boot/dts/fsl/b4si-post.dtsi b/arch/powerpc/boot/dts/fsl/b4si-post.dtsi index 1b33f5157c8a..4f044b41a776 100644 --- a/arch/powerpc/boot/dts/fsl/b4si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/b4si-post.dtsi @@ -398,21 +398,6 @@ }; /include/ "qoriq-clockgen2.dtsi" - clockgen: global-utilities@e1000 { - compatible = "fsl,b4-clockgen", "fsl,qoriq-clockgen-2.0"; - reg = <0xe1000 0x1000>; - - mux0: mux0@0 { - 
#clock-cells = <0>; - reg = <0x0 0x4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>; - clock-names = "pll0", "pll0-div2", "pll0-div4", - "pll1", "pll1-div2", "pll1-div4"; - clock-output-names = "cmux0"; - }; - }; rcpm: global-utilities@e2000 { compatible = "fsl,b4-rcpm", "fsl,qoriq-rcpm-2.0"; diff --git a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts index 11bea3e6a43f..58ac17496c89 100644 --- a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts +++ b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts @@ -169,100 +169,100 @@ interrupt-map-mask = <0xff00 0 0 7>; interrupt-map = < /* IDSEL 0x11 func 0 - PCI slot 1 */ - 0x8800 0 0 1 &mpic 2 1 - 0x8800 0 0 2 &mpic 3 1 - 0x8800 0 0 3 &mpic 4 1 - 0x8800 0 0 4 &mpic 1 1 + 0x8800 0 0 1 &mpic 2 1 0 0 + 0x8800 0 0 2 &mpic 3 1 0 0 + 0x8800 0 0 3 &mpic 4 1 0 0 + 0x8800 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 1 - PCI slot 1 */ - 0x8900 0 0 1 &mpic 2 1 - 0x8900 0 0 2 &mpic 3 1 - 0x8900 0 0 3 &mpic 4 1 - 0x8900 0 0 4 &mpic 1 1 + 0x8900 0 0 1 &mpic 2 1 0 0 + 0x8900 0 0 2 &mpic 3 1 0 0 + 0x8900 0 0 3 &mpic 4 1 0 0 + 0x8900 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 2 - PCI slot 1 */ - 0x8a00 0 0 1 &mpic 2 1 - 0x8a00 0 0 2 &mpic 3 1 - 0x8a00 0 0 3 &mpic 4 1 - 0x8a00 0 0 4 &mpic 1 1 + 0x8a00 0 0 1 &mpic 2 1 0 0 + 0x8a00 0 0 2 &mpic 3 1 0 0 + 0x8a00 0 0 3 &mpic 4 1 0 0 + 0x8a00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 3 - PCI slot 1 */ - 0x8b00 0 0 1 &mpic 2 1 - 0x8b00 0 0 2 &mpic 3 1 - 0x8b00 0 0 3 &mpic 4 1 - 0x8b00 0 0 4 &mpic 1 1 + 0x8b00 0 0 1 &mpic 2 1 0 0 + 0x8b00 0 0 2 &mpic 3 1 0 0 + 0x8b00 0 0 3 &mpic 4 1 0 0 + 0x8b00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 4 - PCI slot 1 */ - 0x8c00 0 0 1 &mpic 2 1 - 0x8c00 0 0 2 &mpic 3 1 - 0x8c00 0 0 3 &mpic 4 1 - 0x8c00 0 0 4 &mpic 1 1 + 0x8c00 0 0 1 &mpic 2 1 0 0 + 0x8c00 0 0 2 &mpic 3 1 0 0 + 0x8c00 0 0 3 &mpic 4 1 0 0 + 0x8c00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 5 - PCI slot 1 */ - 0x8d00 0 0 1 &mpic 2 1 - 0x8d00 0 0 2 &mpic 3 1 - 0x8d00 0 0 3 &mpic 4 1 - 0x8d00 0 0 4 &mpic 1 1 + 0x8d00 0 0 1 &mpic 2 1 0 0 + 0x8d00 0 0 2 &mpic 3 1 0 0 + 0x8d00 0 0 3 &mpic 4 1 0 0 + 0x8d00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 6 - PCI slot 1 */ - 0x8e00 0 0 1 &mpic 2 1 - 0x8e00 0 0 2 &mpic 3 1 - 0x8e00 0 0 3 &mpic 4 1 - 0x8e00 0 0 4 &mpic 1 1 + 0x8e00 0 0 1 &mpic 2 1 0 0 + 0x8e00 0 0 2 &mpic 3 1 0 0 + 0x8e00 0 0 3 &mpic 4 1 0 0 + 0x8e00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 7 - PCI slot 1 */ - 0x8f00 0 0 1 &mpic 2 1 - 0x8f00 0 0 2 &mpic 3 1 - 0x8f00 0 0 3 &mpic 4 1 - 0x8f00 0 0 4 &mpic 1 1 + 0x8f00 0 0 1 &mpic 2 1 0 0 + 0x8f00 0 0 2 &mpic 3 1 0 0 + 0x8f00 0 0 3 &mpic 4 1 0 0 + 0x8f00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x12 func 0 - PCI slot 2 */ - 0x9000 0 0 1 &mpic 3 1 - 0x9000 0 0 2 &mpic 4 1 - 0x9000 0 0 3 &mpic 1 1 - 0x9000 0 0 4 &mpic 2 1 + 0x9000 0 0 1 &mpic 3 1 0 0 + 0x9000 0 0 2 &mpic 4 1 0 0 + 0x9000 0 0 3 &mpic 1 1 0 0 + 0x9000 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 1 - PCI slot 2 */ - 0x9100 0 0 1 &mpic 3 1 - 0x9100 0 0 2 &mpic 4 1 - 0x9100 0 0 3 &mpic 1 1 - 0x9100 0 0 4 &mpic 2 1 + 0x9100 0 0 1 &mpic 3 1 0 0 + 0x9100 0 0 2 &mpic 4 1 0 0 + 0x9100 0 0 3 &mpic 1 1 0 0 + 0x9100 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 2 - PCI slot 2 */ - 0x9200 0 0 1 &mpic 3 1 - 0x9200 0 0 2 &mpic 4 1 - 0x9200 0 0 3 &mpic 1 1 - 0x9200 0 0 4 &mpic 2 1 + 0x9200 0 0 1 &mpic 3 1 0 0 + 0x9200 0 0 2 &mpic 4 1 0 0 + 0x9200 0 0 3 &mpic 1 1 0 0 + 0x9200 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 3 - PCI slot 2 */ - 0x9300 0 0 1 &mpic 3 1 - 0x9300 0 0 2 &mpic 4 
1 - 0x9300 0 0 3 &mpic 1 1 - 0x9300 0 0 4 &mpic 2 1 + 0x9300 0 0 1 &mpic 3 1 0 0 + 0x9300 0 0 2 &mpic 4 1 0 0 + 0x9300 0 0 3 &mpic 1 1 0 0 + 0x9300 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 4 - PCI slot 2 */ - 0x9400 0 0 1 &mpic 3 1 - 0x9400 0 0 2 &mpic 4 1 - 0x9400 0 0 3 &mpic 1 1 - 0x9400 0 0 4 &mpic 2 1 + 0x9400 0 0 1 &mpic 3 1 0 0 + 0x9400 0 0 2 &mpic 4 1 0 0 + 0x9400 0 0 3 &mpic 1 1 0 0 + 0x9400 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 5 - PCI slot 2 */ - 0x9500 0 0 1 &mpic 3 1 - 0x9500 0 0 2 &mpic 4 1 - 0x9500 0 0 3 &mpic 1 1 - 0x9500 0 0 4 &mpic 2 1 + 0x9500 0 0 1 &mpic 3 1 0 0 + 0x9500 0 0 2 &mpic 4 1 0 0 + 0x9500 0 0 3 &mpic 1 1 0 0 + 0x9500 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 6 - PCI slot 2 */ - 0x9600 0 0 1 &mpic 3 1 - 0x9600 0 0 2 &mpic 4 1 - 0x9600 0 0 3 &mpic 1 1 - 0x9600 0 0 4 &mpic 2 1 + 0x9600 0 0 1 &mpic 3 1 0 0 + 0x9600 0 0 2 &mpic 4 1 0 0 + 0x9600 0 0 3 &mpic 1 1 0 0 + 0x9600 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 7 - PCI slot 2 */ - 0x9700 0 0 1 &mpic 3 1 - 0x9700 0 0 2 &mpic 4 1 - 0x9700 0 0 3 &mpic 1 1 - 0x9700 0 0 4 &mpic 2 1 + 0x9700 0 0 1 &mpic 3 1 0 0 + 0x9700 0 0 2 &mpic 4 1 0 0 + 0x9700 0 0 3 &mpic 1 1 0 0 + 0x9700 0 0 4 &mpic 2 1 0 0 // IDSEL 0x1c USB 0xe000 0 0 1 &i8259 12 2 diff --git a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts index 7ff62046a9ea..e64b91e321f6 100644 --- a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts +++ b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts @@ -136,100 +136,100 @@ interrupt-map-mask = <0xff00 0 0 7>; interrupt-map = < /* IDSEL 0x11 func 0 - PCI slot 1 */ - 0x8800 0 0 1 &mpic 2 1 - 0x8800 0 0 2 &mpic 3 1 - 0x8800 0 0 3 &mpic 4 1 - 0x8800 0 0 4 &mpic 1 1 + 0x8800 0 0 1 &mpic 2 1 0 0 + 0x8800 0 0 2 &mpic 3 1 0 0 + 0x8800 0 0 3 &mpic 4 1 0 0 + 0x8800 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 1 - PCI slot 1 */ - 0x8900 0 0 1 &mpic 2 1 - 0x8900 0 0 2 &mpic 3 1 - 0x8900 0 0 3 &mpic 4 1 - 0x8900 0 0 4 &mpic 1 1 + 0x8900 0 0 1 &mpic 2 1 0 0 + 0x8900 0 0 2 &mpic 3 1 0 0 + 0x8900 0 0 3 &mpic 4 1 0 0 + 0x8900 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 2 - PCI slot 1 */ - 0x8a00 0 0 1 &mpic 2 1 - 0x8a00 0 0 2 &mpic 3 1 - 0x8a00 0 0 3 &mpic 4 1 - 0x8a00 0 0 4 &mpic 1 1 + 0x8a00 0 0 1 &mpic 2 1 0 0 + 0x8a00 0 0 2 &mpic 3 1 0 0 + 0x8a00 0 0 3 &mpic 4 1 0 0 + 0x8a00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 3 - PCI slot 1 */ - 0x8b00 0 0 1 &mpic 2 1 - 0x8b00 0 0 2 &mpic 3 1 - 0x8b00 0 0 3 &mpic 4 1 - 0x8b00 0 0 4 &mpic 1 1 + 0x8b00 0 0 1 &mpic 2 1 0 0 + 0x8b00 0 0 2 &mpic 3 1 0 0 + 0x8b00 0 0 3 &mpic 4 1 0 0 + 0x8b00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 4 - PCI slot 1 */ - 0x8c00 0 0 1 &mpic 2 1 - 0x8c00 0 0 2 &mpic 3 1 - 0x8c00 0 0 3 &mpic 4 1 - 0x8c00 0 0 4 &mpic 1 1 + 0x8c00 0 0 1 &mpic 2 1 0 0 + 0x8c00 0 0 2 &mpic 3 1 0 0 + 0x8c00 0 0 3 &mpic 4 1 0 0 + 0x8c00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 5 - PCI slot 1 */ - 0x8d00 0 0 1 &mpic 2 1 - 0x8d00 0 0 2 &mpic 3 1 - 0x8d00 0 0 3 &mpic 4 1 - 0x8d00 0 0 4 &mpic 1 1 + 0x8d00 0 0 1 &mpic 2 1 0 0 + 0x8d00 0 0 2 &mpic 3 1 0 0 + 0x8d00 0 0 3 &mpic 4 1 0 0 + 0x8d00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 6 - PCI slot 1 */ - 0x8e00 0 0 1 &mpic 2 1 - 0x8e00 0 0 2 &mpic 3 1 - 0x8e00 0 0 3 &mpic 4 1 - 0x8e00 0 0 4 &mpic 1 1 + 0x8e00 0 0 1 &mpic 2 1 0 0 + 0x8e00 0 0 2 &mpic 3 1 0 0 + 0x8e00 0 0 3 &mpic 4 1 0 0 + 0x8e00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x11 func 7 - PCI slot 1 */ - 0x8f00 0 0 1 &mpic 2 1 - 0x8f00 0 0 2 &mpic 3 1 - 0x8f00 0 0 3 &mpic 4 1 - 0x8f00 0 0 4 &mpic 1 1 + 0x8f00 0 0 1 &mpic 2 1 0 0 + 0x8f00 0 0 2 &mpic 3 1 0 0 + 0x8f00 0 0 3 &mpic 4 1 0 0 + 
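+				/* The mpic interrupt specifier grows from
+				 * two cells (number, sense) to four,
+				 * matching the controller's
+				 * #interrupt-cells = <4>; the two trailing
+				 * zeroes select an ordinary external
+				 * interrupt with no type-specific argument.
+				 */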
0x8f00 0 0 4 &mpic 1 1 0 0 /* IDSEL 0x12 func 0 - PCI slot 2 */ - 0x9000 0 0 1 &mpic 3 1 - 0x9000 0 0 2 &mpic 4 1 - 0x9000 0 0 3 &mpic 1 1 - 0x9000 0 0 4 &mpic 2 1 + 0x9000 0 0 1 &mpic 3 1 0 0 + 0x9000 0 0 2 &mpic 4 1 0 0 + 0x9000 0 0 3 &mpic 1 1 0 0 + 0x9000 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 1 - PCI slot 2 */ - 0x9100 0 0 1 &mpic 3 1 - 0x9100 0 0 2 &mpic 4 1 - 0x9100 0 0 3 &mpic 1 1 - 0x9100 0 0 4 &mpic 2 1 + 0x9100 0 0 1 &mpic 3 1 0 0 + 0x9100 0 0 2 &mpic 4 1 0 0 + 0x9100 0 0 3 &mpic 1 1 0 0 + 0x9100 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 2 - PCI slot 2 */ - 0x9200 0 0 1 &mpic 3 1 - 0x9200 0 0 2 &mpic 4 1 - 0x9200 0 0 3 &mpic 1 1 - 0x9200 0 0 4 &mpic 2 1 + 0x9200 0 0 1 &mpic 3 1 0 0 + 0x9200 0 0 2 &mpic 4 1 0 0 + 0x9200 0 0 3 &mpic 1 1 0 0 + 0x9200 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 3 - PCI slot 2 */ - 0x9300 0 0 1 &mpic 3 1 - 0x9300 0 0 2 &mpic 4 1 - 0x9300 0 0 3 &mpic 1 1 - 0x9300 0 0 4 &mpic 2 1 + 0x9300 0 0 1 &mpic 3 1 0 0 + 0x9300 0 0 2 &mpic 4 1 0 0 + 0x9300 0 0 3 &mpic 1 1 0 0 + 0x9300 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 4 - PCI slot 2 */ - 0x9400 0 0 1 &mpic 3 1 - 0x9400 0 0 2 &mpic 4 1 - 0x9400 0 0 3 &mpic 1 1 - 0x9400 0 0 4 &mpic 2 1 + 0x9400 0 0 1 &mpic 3 1 0 0 + 0x9400 0 0 2 &mpic 4 1 0 0 + 0x9400 0 0 3 &mpic 1 1 0 0 + 0x9400 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 5 - PCI slot 2 */ - 0x9500 0 0 1 &mpic 3 1 - 0x9500 0 0 2 &mpic 4 1 - 0x9500 0 0 3 &mpic 1 1 - 0x9500 0 0 4 &mpic 2 1 + 0x9500 0 0 1 &mpic 3 1 0 0 + 0x9500 0 0 2 &mpic 4 1 0 0 + 0x9500 0 0 3 &mpic 1 1 0 0 + 0x9500 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 6 - PCI slot 2 */ - 0x9600 0 0 1 &mpic 3 1 - 0x9600 0 0 2 &mpic 4 1 - 0x9600 0 0 3 &mpic 1 1 - 0x9600 0 0 4 &mpic 2 1 + 0x9600 0 0 1 &mpic 3 1 0 0 + 0x9600 0 0 2 &mpic 4 1 0 0 + 0x9600 0 0 3 &mpic 1 1 0 0 + 0x9600 0 0 4 &mpic 2 1 0 0 /* IDSEL 0x12 func 7 - PCI slot 2 */ - 0x9700 0 0 1 &mpic 3 1 - 0x9700 0 0 2 &mpic 4 1 - 0x9700 0 0 3 &mpic 1 1 - 0x9700 0 0 4 &mpic 2 1 + 0x9700 0 0 1 &mpic 3 1 0 0 + 0x9700 0 0 2 &mpic 4 1 0 0 + 0x9700 0 0 3 &mpic 1 1 0 0 + 0x9700 0 0 4 &mpic 2 1 0 0 // IDSEL 0x1c USB 0xe000 0 0 1 &i8259 12 2 diff --git a/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi b/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi index eeb7c65d5f22..50039d4fa278 100644 --- a/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi @@ -97,6 +97,7 @@ &pci0 { compatible = "fsl,mpc8641-pcie"; device_type = "pci"; + #interrupt-cells = <1>; #size-cells = <2>; #address-cells = <3>; bus-range = <0x0 0xff>; @@ -123,6 +124,7 @@ &pci1 { compatible = "fsl,mpc8641-pcie"; device_type = "pci"; + #interrupt-cells = <1>; #size-cells = <2>; #address-cells = <3>; bus-range = <0x0 0xff>; diff --git a/arch/powerpc/boot/dts/fsl/p1020rdb-pc.dtsi b/arch/powerpc/boot/dts/fsl/p1020rdb-pc.dtsi index 25f81eea60e0..a13876c05c1e 100644 --- a/arch/powerpc/boot/dts/fsl/p1020rdb-pc.dtsi +++ b/arch/powerpc/boot/dts/fsl/p1020rdb-pc.dtsi @@ -205,13 +205,13 @@ mdio@24000 { phy0: ethernet-phy@0 { interrupt-parent = <&mpic>; - interrupts = <3 1>; + interrupts = <3 1 0 0>; reg = <0x0>; }; phy1: ethernet-phy@1 { interrupt-parent = <&mpic>; - interrupts = <2 1>; + interrupts = <2 1 0 0>; reg = <0x1>; }; diff --git a/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi b/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi index 51e975d7631a..872e4485dc3f 100644 --- a/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/p2041si-post.dtsi @@ -327,24 +327,6 @@ /include/ "qoriq-clockgen1.dtsi" global-utilities@e1000 { compatible = "fsl,p2041-clockgen", 
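		/* The hand-written pll/mux subnodes are gone: the qoriq
		 * clockgen driver now creates them from the compatible
		 * string alone, and consumers use a two-cell specifier
		 * instead of a mux phandle, e.g. for CPU 2:
		 *
		 *	clocks = <&clockgen 1 2>;  (type 1 = core mux, index 2)
		 */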
"fsl,qoriq-clockgen-1.0"; - - mux2: mux2@40 { - #clock-cells = <0>; - reg = <0x40 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux2"; - }; - - mux3: mux3@60 { - #clock-cells = <0>; - reg = <0x60 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux3"; - }; }; rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/p2041si-pre.dtsi b/arch/powerpc/boot/dts/fsl/p2041si-pre.dtsi index 941274c41f21..6318962e8d14 100644 --- a/arch/powerpc/boot/dts/fsl/p2041si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/p2041si-pre.dtsi @@ -89,7 +89,7 @@ cpu0: PowerPC,e500mc@0 { device_type = "cpu"; reg = <0>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_0>; fsl,portid-mapping = <0x80000000>; L2_0: l2-cache { @@ -99,7 +99,7 @@ cpu1: PowerPC,e500mc@1 { device_type = "cpu"; reg = <1>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x40000000>; L2_1: l2-cache { @@ -109,7 +109,7 @@ cpu2: PowerPC,e500mc@2 { device_type = "cpu"; reg = <2>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x20000000>; L2_2: l2-cache { @@ -119,7 +119,7 @@ cpu3: PowerPC,e500mc@3 { device_type = "cpu"; reg = <3>; - clocks = <&mux3>; + clocks = <&clockgen 1 3>; next-level-cache = <&L2_3>; fsl,portid-mapping = <0x10000000>; L2_3: l2-cache { diff --git a/arch/powerpc/boot/dts/fsl/p3041si-post.dtsi b/arch/powerpc/boot/dts/fsl/p3041si-post.dtsi index 187676fa8d83..81bc75aca2e0 100644 --- a/arch/powerpc/boot/dts/fsl/p3041si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/p3041si-post.dtsi @@ -354,24 +354,6 @@ /include/ "qoriq-clockgen1.dtsi" global-utilities@e1000 { compatible = "fsl,p3041-clockgen", "fsl,qoriq-clockgen-1.0"; - - mux2: mux2@40 { - #clock-cells = <0>; - reg = <0x40 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux2"; - }; - - mux3: mux3@60 { - #clock-cells = <0>; - reg = <0x60 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux3"; - }; }; rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/p3041si-pre.dtsi b/arch/powerpc/boot/dts/fsl/p3041si-pre.dtsi index 50b73e8e638f..db92f1151a48 100644 --- a/arch/powerpc/boot/dts/fsl/p3041si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/p3041si-pre.dtsi @@ -90,7 +90,7 @@ cpu0: PowerPC,e500mc@0 { device_type = "cpu"; reg = <0>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_0>; fsl,portid-mapping = <0x80000000>; L2_0: l2-cache { @@ -100,7 +100,7 @@ cpu1: PowerPC,e500mc@1 { device_type = "cpu"; reg = <1>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x40000000>; L2_1: l2-cache { @@ -110,7 +110,7 @@ cpu2: PowerPC,e500mc@2 { device_type = "cpu"; reg = <2>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x20000000>; L2_2: l2-cache { @@ -120,7 +120,7 @@ cpu3: PowerPC,e500mc@3 { device_type = "cpu"; reg = <3>; - clocks = <&mux3>; + clocks = <&clockgen 1 3>; next-level-cache = <&L2_3>; 
fsl,portid-mapping = <0x10000000>; L2_3: l2-cache { diff --git a/arch/powerpc/boot/dts/fsl/p4080si-post.dtsi b/arch/powerpc/boot/dts/fsl/p4080si-post.dtsi index a0252085f858..4da49b6dd3f5 100644 --- a/arch/powerpc/boot/dts/fsl/p4080si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/p4080si-post.dtsi @@ -374,76 +374,6 @@ /include/ "qoriq-clockgen1.dtsi" global-utilities@e1000 { compatible = "fsl,p4080-clockgen", "fsl,qoriq-clockgen-1.0"; - - pll2: pll2@840 { - #clock-cells = <1>; - reg = <0x840 0x4>; - compatible = "fsl,qoriq-core-pll-1.0"; - clocks = <&sysclk>; - clock-output-names = "pll2", "pll2-div2"; - }; - - pll3: pll3@860 { - #clock-cells = <1>; - reg = <0x860 0x4>; - compatible = "fsl,qoriq-core-pll-1.0"; - clocks = <&sysclk>; - clock-output-names = "pll3", "pll3-div2"; - }; - - mux2: mux2@40 { - #clock-cells = <0>; - reg = <0x40 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux2"; - }; - - mux3: mux3@60 { - #clock-cells = <0>; - reg = <0x60 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux3"; - }; - - mux4: mux4@80 { - #clock-cells = <0>; - reg = <0x80 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll2 0>, <&pll2 1>, <&pll3 0>, <&pll3 1>; - clock-names = "pll2", "pll2-div2", "pll3", "pll3-div2"; - clock-output-names = "cmux4"; - }; - - mux5: mux5@a0 { - #clock-cells = <0>; - reg = <0xa0 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll2 0>, <&pll2 1>, <&pll3 0>, <&pll3 1>; - clock-names = "pll2", "pll2-div2", "pll3", "pll3-div2"; - clock-output-names = "cmux5"; - }; - - mux6: mux6@c0 { - #clock-cells = <0>; - reg = <0xc0 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll2 0>, <&pll2 1>, <&pll3 0>, <&pll3 1>; - clock-names = "pll2", "pll2-div2", "pll3", "pll3-div2"; - clock-output-names = "cmux6"; - }; - - mux7: mux7@e0 { - #clock-cells = <0>; - reg = <0xe0 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll2 0>, <&pll2 1>, <&pll3 0>, <&pll3 1>; - clock-names = "pll2", "pll2-div2", "pll3", "pll3-div2"; - clock-output-names = "cmux7"; - }; }; rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/p4080si-pre.dtsi b/arch/powerpc/boot/dts/fsl/p4080si-pre.dtsi index d56a546b73e6..0a7c65a00e5e 100644 --- a/arch/powerpc/boot/dts/fsl/p4080si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/p4080si-pre.dtsi @@ -94,7 +94,7 @@ cpu0: PowerPC,e500mc@0 { device_type = "cpu"; reg = <0>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_0>; fsl,portid-mapping = <0x80000000>; L2_0: l2-cache { @@ -104,7 +104,7 @@ cpu1: PowerPC,e500mc@1 { device_type = "cpu"; reg = <1>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x40000000>; L2_1: l2-cache { @@ -114,7 +114,7 @@ cpu2: PowerPC,e500mc@2 { device_type = "cpu"; reg = <2>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x20000000>; L2_2: l2-cache { @@ -124,7 +124,7 @@ cpu3: PowerPC,e500mc@3 { device_type = "cpu"; reg = <3>; - clocks = <&mux3>; + clocks = <&clockgen 1 3>; next-level-cache = <&L2_3>; fsl,portid-mapping = <0x10000000>; L2_3: l2-cache { @@ -134,7 +134,7 @@ cpu4: PowerPC,e500mc@4 { device_type = "cpu"; reg = <4>; - clocks = <&mux4>; + clocks = <&clockgen 1 4>; next-level-cache = <&L2_4>; 
fsl,portid-mapping = <0x08000000>; L2_4: l2-cache { @@ -144,7 +144,7 @@ cpu5: PowerPC,e500mc@5 { device_type = "cpu"; reg = <5>; - clocks = <&mux5>; + clocks = <&clockgen 1 5>; next-level-cache = <&L2_5>; fsl,portid-mapping = <0x04000000>; L2_5: l2-cache { @@ -154,7 +154,7 @@ cpu6: PowerPC,e500mc@6 { device_type = "cpu"; reg = <6>; - clocks = <&mux6>; + clocks = <&clockgen 1 6>; next-level-cache = <&L2_6>; fsl,portid-mapping = <0x02000000>; L2_6: l2-cache { @@ -164,7 +164,7 @@ cpu7: PowerPC,e500mc@7 { device_type = "cpu"; reg = <7>; - clocks = <&mux7>; + clocks = <&clockgen 1 7>; next-level-cache = <&L2_7>; fsl,portid-mapping = <0x01000000>; L2_7: l2-cache { diff --git a/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi b/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi index bfba0b4f1cbb..2d74ea85e5df 100644 --- a/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi @@ -96,7 +96,7 @@ cpu0: PowerPC,e5500@0 { device_type = "cpu"; reg = <0>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_0>; fsl,portid-mapping = <0x80000000>; L2_0: l2-cache { @@ -106,7 +106,7 @@ cpu1: PowerPC,e5500@1 { device_type = "cpu"; reg = <1>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x40000000>; L2_1: l2-cache { diff --git a/arch/powerpc/boot/dts/fsl/p5040si-post.dtsi b/arch/powerpc/boot/dts/fsl/p5040si-post.dtsi index e2bd9313e632..16b454b504e2 100644 --- a/arch/powerpc/boot/dts/fsl/p5040si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/p5040si-post.dtsi @@ -319,24 +319,6 @@ /include/ "qoriq-clockgen1.dtsi" global-utilities@e1000 { compatible = "fsl,p5040-clockgen", "fsl,qoriq-clockgen-1.0"; - - mux2: mux2@40 { - #clock-cells = <0>; - reg = <0x40 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux2"; - }; - - mux3: mux3@60 { - #clock-cells = <0>; - reg = <0x60 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux3"; - }; }; rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi b/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi index dbd57750fc02..ed89dbbdacf0 100644 --- a/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi @@ -102,7 +102,7 @@ cpu0: PowerPC,e5500@0 { device_type = "cpu"; reg = <0>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_0>; fsl,portid-mapping = <0x80000000>; L2_0: l2-cache { @@ -112,7 +112,7 @@ cpu1: PowerPC,e5500@1 { device_type = "cpu"; reg = <1>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x40000000>; L2_1: l2-cache { @@ -122,7 +122,7 @@ cpu2: PowerPC,e5500@2 { device_type = "cpu"; reg = <2>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x20000000>; L2_2: l2-cache { @@ -132,7 +132,7 @@ cpu3: PowerPC,e5500@3 { device_type = "cpu"; reg = <3>; - clocks = <&mux3>; + clocks = <&clockgen 1 3>; next-level-cache = <&L2_3>; fsl,portid-mapping = <0x10000000>; L2_3: l2-cache { diff --git a/arch/powerpc/boot/dts/fsl/qoriq-clockgen1.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-clockgen1.dtsi index 88cd70de4f86..463c1ed9ffdd 100644 --- a/arch/powerpc/boot/dts/fsl/qoriq-clockgen1.dtsi +++ b/arch/powerpc/boot/dts/fsl/qoriq-clockgen1.dtsi @@ -34,53 
+34,6 @@ clockgen: global-utilities@e1000 { compatible = "fsl,qoriq-clockgen-1.0"; - ranges = <0x0 0xe1000 0x1000>; reg = <0xe1000 0x1000>; - clock-frequency = <0>; - #address-cells = <1>; - #size-cells = <1>; #clock-cells = <2>; - - sysclk: sysclk { - #clock-cells = <0>; - compatible = "fsl,qoriq-sysclk-1.0", "fixed-clock"; - clock-output-names = "sysclk"; - }; - pll0: pll0@800 { - #clock-cells = <1>; - reg = <0x800 0x4>; - compatible = "fsl,qoriq-core-pll-1.0"; - clocks = <&sysclk>; - clock-output-names = "pll0", "pll0-div2"; - }; - pll1: pll1@820 { - #clock-cells = <1>; - reg = <0x820 0x4>; - compatible = "fsl,qoriq-core-pll-1.0"; - clocks = <&sysclk>; - clock-output-names = "pll1", "pll1-div2"; - }; - mux0: mux0@0 { - #clock-cells = <0>; - reg = <0x0 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux0"; - }; - mux1: mux1@20 { - #clock-cells = <0>; - reg = <0x20 0x4>; - compatible = "fsl,qoriq-core-mux-1.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; - clock-output-names = "cmux1"; - }; - platform_pll: platform-pll@c00 { - #clock-cells = <1>; - reg = <0xc00 0x4>; - compatible = "fsl,qoriq-platform-pll-1.0"; - clocks = <&sysclk>; - clock-output-names = "platform-pll", "platform-pll-div2"; - }; }; diff --git a/arch/powerpc/boot/dts/fsl/qoriq-clockgen2.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-clockgen2.dtsi index 6dfd7c5357ab..0361050bb56a 100644 --- a/arch/powerpc/boot/dts/fsl/qoriq-clockgen2.dtsi +++ b/arch/powerpc/boot/dts/fsl/qoriq-clockgen2.dtsi @@ -34,36 +34,6 @@ clockgen: global-utilities@e1000 { compatible = "fsl,qoriq-clockgen-2.0"; - ranges = <0x0 0xe1000 0x1000>; reg = <0xe1000 0x1000>; - #address-cells = <1>; - #size-cells = <1>; #clock-cells = <2>; - - sysclk: sysclk { - #clock-cells = <0>; - compatible = "fsl,qoriq-sysclk-2.0", "fixed-clock"; - clock-output-names = "sysclk"; - }; - pll0: pll0@800 { - #clock-cells = <1>; - reg = <0x800 0x4>; - compatible = "fsl,qoriq-core-pll-2.0"; - clocks = <&sysclk>; - clock-output-names = "pll0", "pll0-div2", "pll0-div4"; - }; - pll1: pll1@820 { - #clock-cells = <1>; - reg = <0x820 0x4>; - compatible = "fsl,qoriq-core-pll-2.0"; - clocks = <&sysclk>; - clock-output-names = "pll1", "pll1-div2", "pll1-div4"; - }; - platform_pll: platform-pll@c00 { - #clock-cells = <1>; - reg = <0xc00 0x4>; - compatible = "fsl,qoriq-platform-pll-2.0"; - clocks = <&sysclk>; - clock-output-names = "platform-pll", "platform-pll-div2"; - }; }; diff --git a/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi b/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi index 4908af501098..d552044c5afc 100644 --- a/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi @@ -345,22 +345,6 @@ /include/ "qoriq-clockgen2.dtsi" global-utilities@e1000 { compatible = "fsl,t1023-clockgen", "fsl,qoriq-clockgen-2.0"; - mux0: mux0@0 { - #clock-cells = <0>; - reg = <0x0 4>; - compatible = "fsl,core-mux-clock"; - clocks = <&pll0 0>, <&pll0 1>; - clock-names = "pll0_0", "pll0_1"; - clock-output-names = "cmux0"; - }; - mux1: mux1@20 { - #clock-cells = <0>; - reg = <0x20 4>; - compatible = "fsl,core-mux-clock"; - clocks = <&pll0 0>, <&pll0 1>; - clock-names = "pll0_0", "pll0_1"; - clock-output-names = "cmux1"; - }; }; rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/t102xsi-pre.dtsi b/arch/powerpc/boot/dts/fsl/t102xsi-pre.dtsi index 
9d08a363bab3..d87ea13164f2 100644 --- a/arch/powerpc/boot/dts/fsl/t102xsi-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/t102xsi-pre.dtsi @@ -74,7 +74,7 @@ cpu0: PowerPC,e5500@0 { device_type = "cpu"; reg = <0>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; #cooling-cells = <2>; L2_1: l2-cache { @@ -84,7 +84,7 @@ cpu1: PowerPC,e5500@1 { device_type = "cpu"; reg = <1>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_2>; #cooling-cells = <2>; L2_2: l2-cache { diff --git a/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi b/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi index 145c7f43b5b6..315d0557eefc 100644 --- a/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi @@ -425,50 +425,6 @@ /include/ "qoriq-clockgen2.dtsi" global-utilities@e1000 { compatible = "fsl,t1040-clockgen", "fsl,qoriq-clockgen-2.0"; - - mux0: mux0@0 { - #clock-cells = <0>; - reg = <0x0 4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>; - clock-names = "pll0", "pll0-div2", "pll1-div4", - "pll1", "pll1-div2", "pll1-div4"; - clock-output-names = "cmux0"; - }; - - mux1: mux1@20 { - #clock-cells = <0>; - reg = <0x20 4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>; - clock-names = "pll0", "pll0-div2", "pll1-div4", - "pll1", "pll1-div2", "pll1-div4"; - clock-output-names = "cmux1"; - }; - - mux2: mux2@40 { - #clock-cells = <0>; - reg = <0x40 4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>; - clock-names = "pll0", "pll0-div2", "pll1-div4", - "pll1", "pll1-div2", "pll1-div4"; - clock-output-names = "cmux2"; - }; - - mux3: mux3@60 { - #clock-cells = <0>; - reg = <0x60 4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>; - clock-names = "pll0_0", "pll0_1", "pll0_2", - "pll1_0", "pll1_1", "pll1_2"; - clock-output-names = "cmux3"; - }; }; rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/t104xsi-pre.dtsi b/arch/powerpc/boot/dts/fsl/t104xsi-pre.dtsi index 6db0ee8b1384..dd59e4b69480 100644 --- a/arch/powerpc/boot/dts/fsl/t104xsi-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/t104xsi-pre.dtsi @@ -74,7 +74,7 @@ cpu0: PowerPC,e5500@0 { device_type = "cpu"; reg = <0>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; #cooling-cells = <2>; L2_1: l2-cache { @@ -84,7 +84,7 @@ cpu1: PowerPC,e5500@1 { device_type = "cpu"; reg = <1>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_2>; #cooling-cells = <2>; L2_2: l2-cache { @@ -94,7 +94,7 @@ cpu2: PowerPC,e5500@2 { device_type = "cpu"; reg = <2>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_3>; #cooling-cells = <2>; L2_3: l2-cache { @@ -104,7 +104,7 @@ cpu3: PowerPC,e5500@3 { device_type = "cpu"; reg = <3>; - clocks = <&mux3>; + clocks = <&clockgen 1 3>; next-level-cache = <&L2_4>; #cooling-cells = <2>; L2_4: l2-cache { diff --git a/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi b/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi index a97296c64eb2..ecbb447920bc 100644 --- a/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi @@ -535,28 +535,6 @@ /include/ "qoriq-clockgen2.dtsi" global-utilities@e1000 { compatible = "fsl,t2080-clockgen", "fsl,qoriq-clockgen-2.0"; - - mux0: mux0@0 { - #clock-cells = <0>; - 
reg = <0x0 4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>; - clock-names = "pll0", "pll0-div2", "pll0-div4", - "pll1", "pll1-div2", "pll1-div4"; - clock-output-names = "cmux0"; - }; - - mux1: mux1@20 { - #clock-cells = <0>; - reg = <0x20 4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>; - clock-names = "pll0", "pll0-div2", "pll0-div4", - "pll1", "pll1-div2", "pll1-div4"; - clock-output-names = "cmux1"; - }; }; rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/t208xsi-pre.dtsi b/arch/powerpc/boot/dts/fsl/t208xsi-pre.dtsi index c2e57203910d..3f745de44284 100644 --- a/arch/powerpc/boot/dts/fsl/t208xsi-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/t208xsi-pre.dtsi @@ -81,28 +81,28 @@ cpu0: PowerPC,e6500@0 { device_type = "cpu"; reg = <0 1>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu1: PowerPC,e6500@2 { device_type = "cpu"; reg = <2 3>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu2: PowerPC,e6500@4 { device_type = "cpu"; reg = <4 5>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu3: PowerPC,e6500@6 { device_type = "cpu"; reg = <6 7>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; diff --git a/arch/powerpc/boot/dts/fsl/t4240si-post.dtsi b/arch/powerpc/boot/dts/fsl/t4240si-post.dtsi index 68c4eadc19e3..fcac73486d48 100644 --- a/arch/powerpc/boot/dts/fsl/t4240si-post.dtsi +++ b/arch/powerpc/boot/dts/fsl/t4240si-post.dtsi @@ -950,67 +950,6 @@ /include/ "qoriq-clockgen2.dtsi" global-utilities@e1000 { compatible = "fsl,t4240-clockgen", "fsl,qoriq-clockgen-2.0"; - - pll2: pll2@840 { - #clock-cells = <1>; - reg = <0x840 0x4>; - compatible = "fsl,qoriq-core-pll-2.0"; - clocks = <&sysclk>; - clock-output-names = "pll2", "pll2-div2", "pll2-div4"; - }; - - pll3: pll3@860 { - #clock-cells = <1>; - reg = <0x860 0x4>; - compatible = "fsl,qoriq-core-pll-2.0"; - clocks = <&sysclk>; - clock-output-names = "pll3", "pll3-div2", "pll3-div4"; - }; - - pll4: pll4@880 { - #clock-cells = <1>; - reg = <0x880 0x4>; - compatible = "fsl,qoriq-core-pll-2.0"; - clocks = <&sysclk>; - clock-output-names = "pll4", "pll4-div2", "pll4-div4"; - }; - - mux0: mux0@0 { - #clock-cells = <0>; - reg = <0x0 0x4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>, - <&pll2 0>, <&pll2 1>, <&pll2 2>; - clock-names = "pll0", "pll0-div2", "pll0-div4", - "pll1", "pll1-div2", "pll1-div4", - "pll2", "pll2-div2", "pll2-div4"; - clock-output-names = "cmux0"; - }; - - mux1: mux1@20 { - #clock-cells = <0>; - reg = <0x20 0x4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>, - <&pll1 0>, <&pll1 1>, <&pll1 2>, - <&pll2 0>, <&pll2 1>, <&pll2 2>; - clock-names = "pll0", "pll0-div2", "pll0-div4", - "pll1", "pll1-div2", "pll1-div4", - "pll2", "pll2-div2", "pll2-div4"; - clock-output-names = "cmux1"; - }; - - mux2: mux2@40 { - #clock-cells = <0>; - reg = <0x40 0x4>; - compatible = "fsl,qoriq-core-mux-2.0"; - clocks = <&pll3 0>, <&pll3 1>, <&pll3 2>, - <&pll4 0>, <&pll4 1>, <&pll4 2>; - clock-names = "pll3", "pll3-div2", "pll3-div4", - "pll4", "pll4-div2", "pll4-div4"; - clock-output-names = "cmux2"; - }; }; 
rcpm: global-utilities@e2000 { diff --git a/arch/powerpc/boot/dts/fsl/t4240si-pre.dtsi b/arch/powerpc/boot/dts/fsl/t4240si-pre.dtsi index 038cf8fadee4..632314c6faa9 100644 --- a/arch/powerpc/boot/dts/fsl/t4240si-pre.dtsi +++ b/arch/powerpc/boot/dts/fsl/t4240si-pre.dtsi @@ -90,84 +90,84 @@ cpu0: PowerPC,e6500@0 { device_type = "cpu"; reg = <0 1>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu1: PowerPC,e6500@2 { device_type = "cpu"; reg = <2 3>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu2: PowerPC,e6500@4 { device_type = "cpu"; reg = <4 5>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu3: PowerPC,e6500@6 { device_type = "cpu"; reg = <6 7>; - clocks = <&mux0>; + clocks = <&clockgen 1 0>; next-level-cache = <&L2_1>; fsl,portid-mapping = <0x80000000>; }; cpu4: PowerPC,e6500@8 { device_type = "cpu"; reg = <8 9>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x40000000>; }; cpu5: PowerPC,e6500@10 { device_type = "cpu"; reg = <10 11>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x40000000>; }; cpu6: PowerPC,e6500@12 { device_type = "cpu"; reg = <12 13>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x40000000>; }; cpu7: PowerPC,e6500@14 { device_type = "cpu"; reg = <14 15>; - clocks = <&mux1>; + clocks = <&clockgen 1 1>; next-level-cache = <&L2_2>; fsl,portid-mapping = <0x40000000>; }; cpu8: PowerPC,e6500@16 { device_type = "cpu"; reg = <16 17>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_3>; fsl,portid-mapping = <0x20000000>; }; cpu9: PowerPC,e6500@18 { device_type = "cpu"; reg = <18 19>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_3>; fsl,portid-mapping = <0x20000000>; }; cpu10: PowerPC,e6500@20 { device_type = "cpu"; reg = <20 21>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_3>; fsl,portid-mapping = <0x20000000>; }; cpu11: PowerPC,e6500@22 { device_type = "cpu"; reg = <22 23>; - clocks = <&mux2>; + clocks = <&clockgen 1 2>; next-level-cache = <&L2_3>; fsl,portid-mapping = <0x20000000>; }; diff --git a/arch/powerpc/boot/dts/mpc832x_rdb.dts b/arch/powerpc/boot/dts/mpc832x_rdb.dts index 647cae14c16d..be6ef3531b28 100644 --- a/arch/powerpc/boot/dts/mpc832x_rdb.dts +++ b/arch/powerpc/boot/dts/mpc832x_rdb.dts @@ -311,13 +311,9 @@ compatible = "fsl,ucc-mdio"; phy00:ethernet-phy@0 { - interrupt-parent = <&ipic>; - interrupts = <0>; reg = <0x0>; }; phy04:ethernet-phy@4 { - interrupt-parent = <&ipic>; - interrupts = <0>; reg = <0x4>; }; }; diff --git a/arch/powerpc/boot/serial.c b/arch/powerpc/boot/serial.c index f045f8494bf9..b0491b8c0199 100644 --- a/arch/powerpc/boot/serial.c +++ b/arch/powerpc/boot/serial.c @@ -93,7 +93,8 @@ static void *serial_get_stdout_devp(void) if (devp == NULL) goto err_out; - if (getprop(devp, "linux,stdout-path", path, MAX_PATH_LEN) > 0) { + if (getprop(devp, "linux,stdout-path", path, MAX_PATH_LEN) > 0 || + getprop(devp, "stdout-path", path, MAX_PATH_LEN) > 0) { devp = finddevice(path); if (devp == NULL) goto err_out; diff --git a/arch/powerpc/configs/fsl-emb-nonhw.config b/arch/powerpc/configs/fsl-emb-nonhw.config index e0567dc41968..d592ba27b122 100644 --- a/arch/powerpc/configs/fsl-emb-nonhw.config +++ 
b/arch/powerpc/configs/fsl-emb-nonhw.config @@ -25,6 +25,7 @@ CONFIG_CRYPTO_SHA256=y CONFIG_CRYPTO_SHA512=y CONFIG_DEBUG_FS=y CONFIG_DEBUG_INFO=y +CONFIG_DEBUG_KERNEL=y CONFIG_DEBUG_SHIRQ=y CONFIG_DETECT_HUNG_TASK=y CONFIG_DEVTMPFS_MOUNT=y diff --git a/arch/powerpc/configs/g5_defconfig b/arch/powerpc/configs/g5_defconfig index f686cc1eac0b..ceb3c770786f 100644 --- a/arch/powerpc/configs/g5_defconfig +++ b/arch/powerpc/configs/g5_defconfig @@ -246,7 +246,6 @@ CONFIG_DEBUG_KERNEL=y CONFIG_DEBUG_MUTEXES=y CONFIG_LATENCYTOP=y CONFIG_BOOTX_TEXT=y -CONFIG_PPC_EARLY_DEBUG=y CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_PCBC=m CONFIG_CRYPTO_HMAC=y diff --git a/arch/powerpc/configs/guest.config b/arch/powerpc/configs/guest.config new file mode 100644 index 000000000000..8b8cd18ecd7c --- /dev/null +++ b/arch/powerpc/configs/guest.config @@ -0,0 +1,13 @@ +CONFIG_VIRTIO_BLK=y +CONFIG_VIRTIO_BLK_SCSI=y +CONFIG_SCSI_VIRTIO=y +CONFIG_VIRTIO_NET=y +CONFIG_NET_FAILOVER=y +CONFIG_VIRTIO_CONSOLE=y +CONFIG_VIRTIO=y +CONFIG_VIRTIO_PCI=y +CONFIG_KVM_GUEST=y +CONFIG_EPAPR_PARAVIRT=y +CONFIG_VIRTIO_BALLOON=y +CONFIG_VHOST_NET=y +CONFIG_VHOST=y diff --git a/arch/powerpc/configs/maple_defconfig b/arch/powerpc/configs/maple_defconfig index f71eddafb02f..c5f2005005d3 100644 --- a/arch/powerpc/configs/maple_defconfig +++ b/arch/powerpc/configs/maple_defconfig @@ -108,7 +108,6 @@ CONFIG_LATENCYTOP=y CONFIG_XMON=y CONFIG_XMON_DEFAULT=y CONFIG_BOOTX_TEXT=y -CONFIG_PPC_EARLY_DEBUG=y CONFIG_CRYPTO_ECB=m CONFIG_CRYPTO_PCBC=m # CONFIG_CRYPTO_HW is not set diff --git a/arch/powerpc/configs/pmac32_defconfig b/arch/powerpc/configs/pmac32_defconfig index 62948d198d7f..50b610b48914 100644 --- a/arch/powerpc/configs/pmac32_defconfig +++ b/arch/powerpc/configs/pmac32_defconfig @@ -297,7 +297,6 @@ CONFIG_LATENCYTOP=y CONFIG_XMON=y CONFIG_XMON_DEFAULT=y CONFIG_BOOTX_TEXT=y -CONFIG_PPC_EARLY_DEBUG=y CONFIG_CRYPTO_PCBC=m CONFIG_CRYPTO_MD4=m CONFIG_CRYPTO_SHA512=m diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig index f2515674a1e2..91fdb619b484 100644 --- a/arch/powerpc/configs/ppc64_defconfig +++ b/arch/powerpc/configs/ppc64_defconfig @@ -1,4 +1,3 @@ -CONFIG_PPC64=y CONFIG_SYSVIPC=y CONFIG_POSIX_MQUEUE=y CONFIG_NO_HZ=y @@ -9,21 +8,22 @@ CONFIG_IKCONFIG=y CONFIG_IKCONFIG_PROC=y CONFIG_LOG_BUF_SHIFT=18 CONFIG_LOG_CPU_MAX_BUF_SHIFT=13 +CONFIG_NUMA_BALANCING=y CONFIG_CGROUPS=y +CONFIG_MEMCG=y +CONFIG_CGROUP_SCHED=y +CONFIG_CGROUP_FREEZER=y CONFIG_CPUSETS=y +CONFIG_CGROUP_DEVICE=y +CONFIG_CGROUP_CPUACCT=y +CONFIG_CGROUP_PERF=y CONFIG_CGROUP_BPF=y CONFIG_BLK_DEV_INITRD=y CONFIG_BPF_SYSCALL=y # CONFIG_COMPAT_BRK is not set CONFIG_PROFILING=y -CONFIG_OPROFILE=m -CONFIG_KPROBES=y -CONFIG_JUMP_LABEL=y -CONFIG_MODULES=y -CONFIG_MODULE_UNLOAD=y -CONFIG_MODVERSIONS=y -CONFIG_MODULE_SRCVERSION_ALL=y -CONFIG_PARTITION_ADVANCED=y +CONFIG_PPC64=y +CONFIG_NR_CPUS=2048 CONFIG_PPC_SPLPAR=y CONFIG_DTL=y CONFIG_SCANLOG=m @@ -45,14 +45,11 @@ CONFIG_CPU_FREQ_GOV_USERSPACE=y CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y CONFIG_CPU_FREQ_PMAC64=y CONFIG_HZ_100=y -CONFIG_BINFMT_MISC=m CONFIG_PPC_TRANSACTIONAL_MEM=y CONFIG_KEXEC=y CONFIG_KEXEC_FILE=y CONFIG_CRASH_DUMP=y CONFIG_IRQ_ALL_CPUS=y -CONFIG_KSM=y -CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_PPC_64K_PAGES=y CONFIG_SCHED_SMT=y CONFIG_HOTPLUG_PCI=y @@ -60,6 +57,23 @@ CONFIG_HOTPLUG_PCI_RPA=m CONFIG_HOTPLUG_PCI_RPA_DLPAR=m CONFIG_PCCARD=y CONFIG_ELECTRA_CF=y +CONFIG_VIRTUALIZATION=y +CONFIG_KVM_BOOK3S_64=m +CONFIG_KVM_BOOK3S_64_HV=m +CONFIG_VHOST_NET=m +CONFIG_OPROFILE=m +CONFIG_KPROBES=y 
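# A usage sketch for the new guest.config fragment above (the invocation is
# an assumption, not part of this patch): kbuild merges *.config fragments
# onto a base defconfig through scripts/kconfig/merge_config.sh, e.g.
#
#	make ppc64_defconfig guest.config
#
# which folds the virtio/KVM-guest options listed in the fragment into the
# generated .config.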
+CONFIG_JUMP_LABEL=y +CONFIG_MODULES=y +CONFIG_MODULE_UNLOAD=y +CONFIG_MODVERSIONS=y +CONFIG_MODULE_SRCVERSION_ALL=y +CONFIG_PARTITION_ADVANCED=y +CONFIG_BINFMT_MISC=m +CONFIG_MEMORY_HOTPLUG=y +CONFIG_MEMORY_HOTREMOVE=y +CONFIG_KSM=y +CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_NET=y CONFIG_PACKET=y CONFIG_UNIX=y @@ -163,7 +177,6 @@ CONFIG_TIGON3=y CONFIG_BNX2X=m CONFIG_CHELSIO_T1=m CONFIG_BE2NET=m -CONFIG_S2IO=m CONFIG_IBMVETH=m CONFIG_EHEA=m CONFIG_E100=y @@ -174,6 +187,7 @@ CONFIG_IXGBE=m CONFIG_I40E=m CONFIG_MLX4_EN=m CONFIG_MYRI10GE=m +CONFIG_S2IO=m CONFIG_PASEMI_MAC=y CONFIG_QLGE=m CONFIG_NETXEN_NIC=m @@ -284,7 +298,7 @@ CONFIG_REISERFS_FS_SECURITY=y CONFIG_JFS_FS=m CONFIG_JFS_POSIX_ACL=y CONFIG_JFS_SECURITY=y -CONFIG_XFS_FS=m +CONFIG_XFS_FS=y CONFIG_XFS_POSIX_ACL=y CONFIG_BTRFS_FS=m CONFIG_BTRFS_FS_POSIX_ACL=y @@ -323,25 +337,6 @@ CONFIG_NLS_CODEPAGE_437=y CONFIG_NLS_ASCII=y CONFIG_NLS_ISO8859_1=y CONFIG_NLS_UTF8=y -CONFIG_MAGIC_SYSRQ=y -CONFIG_DEBUG_KERNEL=y -CONFIG_DEBUG_STACK_USAGE=y -CONFIG_DEBUG_STACKOVERFLOW=y -CONFIG_SOFTLOCKUP_DETECTOR=y -CONFIG_HARDLOCKUP_DETECTOR=y -CONFIG_DEBUG_MUTEXES=y -CONFIG_LATENCYTOP=y -CONFIG_FTRACE=y -CONFIG_FUNCTION_TRACER=y -CONFIG_FUNCTION_GRAPH_TRACER=y -CONFIG_SCHED_TRACER=y -CONFIG_BLK_DEV_IO_TRACE=y -CONFIG_CODE_PATCHING_SELFTEST=y -CONFIG_FTR_FIXUP_SELFTEST=y -CONFIG_MSI_BITMAP_SELFTEST=y -CONFIG_XMON=y -CONFIG_BOOTX_TEXT=y -CONFIG_PPC_EARLY_DEBUG=y CONFIG_CRYPTO_TEST=m CONFIG_CRYPTO_PCBC=m CONFIG_CRYPTO_HMAC=y @@ -364,8 +359,20 @@ CONFIG_CRYPTO_LZO=m CONFIG_CRYPTO_DEV_NX=y CONFIG_CRYPTO_DEV_NX_ENCRYPT=m CONFIG_CRYPTO_DEV_VMX=y -CONFIG_VIRTUALIZATION=y -CONFIG_KVM_BOOK3S_64=m -CONFIG_KVM_BOOK3S_64_HV=m -CONFIG_VHOST_NET=m CONFIG_PRINTK_TIME=y +CONFIG_MAGIC_SYSRQ=y +CONFIG_DEBUG_KERNEL=y +CONFIG_DEBUG_STACK_USAGE=y +CONFIG_DEBUG_STACKOVERFLOW=y +CONFIG_SOFTLOCKUP_DETECTOR=y +CONFIG_HARDLOCKUP_DETECTOR=y +CONFIG_DEBUG_MUTEXES=y +CONFIG_LATENCYTOP=y +CONFIG_FUNCTION_TRACER=y +CONFIG_SCHED_TRACER=y +CONFIG_BLK_DEV_IO_TRACE=y +CONFIG_CODE_PATCHING_SELFTEST=y +CONFIG_FTR_FIXUP_SELFTEST=y +CONFIG_MSI_BITMAP_SELFTEST=y +CONFIG_XMON=y +CONFIG_BOOTX_TEXT=y diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig index 7ee736f20774..53687c3a70c4 100644 --- a/arch/powerpc/configs/ppc6xx_defconfig +++ b/arch/powerpc/configs/ppc6xx_defconfig @@ -1155,7 +1155,6 @@ CONFIG_STACK_TRACER=y CONFIG_BLK_DEV_IO_TRACE=y CONFIG_XMON=y CONFIG_BOOTX_TEXT=y -CONFIG_PPC_EARLY_DEBUG=y CONFIG_SECURITY=y CONFIG_SECURITY_NETWORK=y CONFIG_SECURITY_NETWORK_XFRM=y diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig index 5e09a40cbcbf..ea79c519863d 100644 --- a/arch/powerpc/configs/pseries_defconfig +++ b/arch/powerpc/configs/pseries_defconfig @@ -290,9 +290,7 @@ CONFIG_DEBUG_STACKOVERFLOW=y CONFIG_SOFTLOCKUP_DETECTOR=y CONFIG_HARDLOCKUP_DETECTOR=y CONFIG_LATENCYTOP=y -CONFIG_FTRACE=y CONFIG_FUNCTION_TRACER=y -CONFIG_FUNCTION_GRAPH_TRACER=y CONFIG_SCHED_TRACER=y CONFIG_BLK_DEV_IO_TRACE=y CONFIG_CODE_PATCHING_SELFTEST=y diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild index 3196d227e351..77ff7fb24823 100644 --- a/arch/powerpc/include/asm/Kbuild +++ b/arch/powerpc/include/asm/Kbuild @@ -1,3 +1,7 @@ +generated-y += syscall_table_32.h +generated-y += syscall_table_64.h +generated-y += syscall_table_c32.h +generated-y += syscall_table_spu.h generic-y += div64.h generic-y += export.h generic-y += irq_regs.h diff --git a/arch/powerpc/include/asm/asm-prototypes.h 
b/arch/powerpc/include/asm/asm-prototypes.h index ec691d489656..6f201b199c02 100644 --- a/arch/powerpc/include/asm/asm-prototypes.h +++ b/arch/powerpc/include/asm/asm-prototypes.h @@ -61,7 +61,6 @@ void RunModeException(struct pt_regs *regs); void single_step_exception(struct pt_regs *regs); void program_check_exception(struct pt_regs *regs); void alignment_exception(struct pt_regs *regs); -void slb_miss_bad_addr(struct pt_regs *regs); void StackOverflow(struct pt_regs *regs); void kernel_fp_unavailable_exception(struct pt_regs *regs); void altivec_unavailable_exception(struct pt_regs *regs); diff --git a/arch/powerpc/include/asm/book3s/32/hash.h b/arch/powerpc/include/asm/book3s/32/hash.h index f2892c7ab73e..2a0a467d2985 100644 --- a/arch/powerpc/include/asm/book3s/32/hash.h +++ b/arch/powerpc/include/asm/book3s/32/hash.h @@ -26,6 +26,7 @@ #define _PAGE_WRITETHRU 0x040 /* W: cache write-through */ #define _PAGE_DIRTY 0x080 /* C: page changed */ #define _PAGE_ACCESSED 0x100 /* R: page referenced */ +#define _PAGE_EXEC 0x200 /* software: exec allowed */ #define _PAGE_RW 0x400 /* software: user write access allowed */ #define _PAGE_SPECIAL 0x800 /* software: Special page */ diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h b/arch/powerpc/include/asm/book3s/32/mmu-hash.h index e38c91388c40..0c261ba2c826 100644 --- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h +++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_ #define _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_ + /* * 32-bit hash table MMU support */ @@ -9,6 +10,8 @@ * BATs */ +#include + /* Block size masks */ #define BL_128K 0x000 #define BL_256K 0x001 @@ -34,14 +37,20 @@ #define BAT_PHYS_ADDR(x) ((u32)((x & 0x00000000fffe0000ULL) | \ ((x & 0x0000000e00000000ULL) >> 24) | \ ((x & 0x0000000100000000ULL) >> 30))) +#define PHYS_BAT_ADDR(x) (((u64)(x) & 0x00000000fffe0000ULL) | \ + (((u64)(x) << 24) & 0x0000000e00000000ULL) | \ + (((u64)(x) << 30) & 0x0000000100000000ULL)) #else #define BAT_PHYS_ADDR(x) (x) +#define PHYS_BAT_ADDR(x) ((x) & 0xfffe0000) #endif struct ppc_bat { u32 batu; u32 batl; }; + +typedef pte_t *pgtable_t; #endif /* !__ASSEMBLY__ */ /* @@ -83,6 +92,12 @@ typedef struct { unsigned long vdso_base; } mm_context_t; +/* patch sites */ +extern s32 patch__hash_page_A0, patch__hash_page_A1, patch__hash_page_A2; +extern s32 patch__hash_page_B, patch__hash_page_C; +extern s32 patch__flush_hash_A0, patch__flush_hash_A1, patch__flush_hash_A2; +extern s32 patch__flush_hash_B; + #endif /* !__ASSEMBLY__ */ /* We happily ignore the smaller BATs on 601, we don't actually use diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h index 82e44b1a00ae..b5b955eb2fb7 100644 --- a/arch/powerpc/include/asm/book3s/32/pgalloc.h +++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h @@ -25,10 +25,7 @@ extern void __bad_pte(pmd_t *pmd); extern struct kmem_cache *pgtable_cache[]; -#define PGT_CACHE(shift) ({ \ - BUG_ON(!(shift)); \ - pgtable_cache[(shift) - 1]; \ - }) +#define PGT_CACHE(shift) pgtable_cache[shift] static inline pgd_t *pgd_alloc(struct mm_struct *mm) { @@ -50,8 +47,6 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) #define __pmd_free_tlb(tlb,x,a) do { } while (0) /* #define pgd_populate(mm, pmd, pte) BUG() */ -#ifndef CONFIG_BOOKE - static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *pte) { @@ -61,46 +56,31 @@ static inline void 
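/* What the "typedef pte_t *pgtable_t" added in the mmu-hash.h hunk above
 * implies for the pgalloc changes that follow (a reading of this patch, no
 * new API assumed): a pgtable_t is now a kernel virtual address rather than
 * a struct page pointer, so populating a pmd becomes a plain physical
 * address store,
 *
 *	*pmdp = __pmd(__pa(pte_page) | _PMD_PRESENT);
 *
 * and pmd_pgtable() recovers the table with pmd_page_vaddr() instead of
 * pmd_page().
 */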
pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t pte_page) { - *pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_PRESENT); -} - -#define pmd_pgtable(pmd) pmd_page(pmd) -#else - -static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, - pte_t *pte) -{ - *pmdp = __pmd((unsigned long)pte | _PMD_PRESENT); -} - -static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, - pgtable_t pte_page) -{ - *pmdp = __pmd((unsigned long)lowmem_page_address(pte_page) | _PMD_PRESENT); + *pmdp = __pmd(__pa(pte_page) | _PMD_PRESENT); } -#define pmd_pgtable(pmd) pmd_page(pmd) -#endif +#define pmd_pgtable(pmd) ((pgtable_t)pmd_page_vaddr(pmd)) extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr); extern pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr); +void pte_frag_destroy(void *pte_frag); +pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel); +void pte_fragment_free(unsigned long *table, int kernel); static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_page((unsigned long)pte); + pte_fragment_free((unsigned long *)pte, 1); } static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage) { - pgtable_page_dtor(ptepage); - __free_page(ptepage); + pte_fragment_free((unsigned long *)ptepage, 0); } static inline void pgtable_free(void *table, unsigned index_size) { if (!index_size) { - pgtable_page_dtor(virt_to_page(table)); - free_page((unsigned long)table); + pte_fragment_free((unsigned long *)table, 0); } else { BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE); kmem_cache_free(PGT_CACHE(index_size), table); @@ -138,6 +118,6 @@ static inline void pgtable_free_tlb(struct mmu_gather *tlb, static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, unsigned long address) { - pgtable_free_tlb(tlb, page_address(table), 0); + pgtable_free_tlb(tlb, table, 0); } #endif /* _ASM_POWERPC_BOOK3S_32_PGALLOC_H */ diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h index c21d33704633..49d76adb9bc5 100644 --- a/arch/powerpc/include/asm/book3s/32/pgtable.h +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h @@ -10,9 +10,9 @@ /* And here we include common definitions */ #define _PAGE_KERNEL_RO 0 -#define _PAGE_KERNEL_ROX 0 +#define _PAGE_KERNEL_ROX (_PAGE_EXEC) #define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW) -#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW) +#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC) #define _PAGE_HPTEFLAGS _PAGE_HASHPTE @@ -66,11 +66,11 @@ static inline bool pte_user(pte_t pte) */ #define PAGE_NONE __pgprot(_PAGE_BASE) #define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW) -#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW) +#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC) #define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER) -#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER) +#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC) #define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER) -#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER) +#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC) /* Permission masks used for kernel mappings */ #define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW) @@ -318,7 +318,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, int psize) { unsigned long set = 
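/* Context for the _PAGE_EXEC additions around this hunk (a sketch of the
 * intent): book3s/32 now tracks execute permission in software, so after
 * something like
 *
 *	pte = pte_mkexec(pte);	// sets the new _PAGE_EXEC bit
 *
 * an access-flags update must carry the bit through, which is why the set
 * mask below grows _PAGE_EXEC alongside DIRTY/ACCESSED/RW (and why the
 * CPU_FTRS_* masks elsewhere in this patch gain CPU_FTR_NOEXECUTE).
 */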
pte_val(entry) & - (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW); + (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC); pte_update(ptep, 0, set); @@ -328,24 +328,10 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, #define __HAVE_ARCH_PTE_SAME #define pte_same(A,B) (((pte_val(A) ^ pte_val(B)) & ~_PAGE_HASHPTE) == 0) -/* - * Note that on Book E processors, the pmd contains the kernel virtual - * (lowmem) address of the pte page. The physical address is less useful - * because everything runs with translation enabled (even the TLB miss - * handler). On everything else the pmd contains the physical address - * of the pte page. -- paulus - */ -#ifndef CONFIG_BOOKE #define pmd_page_vaddr(pmd) \ - ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)) + ((unsigned long)__va(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1))) #define pmd_page(pmd) \ pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT) -#else -#define pmd_page_vaddr(pmd) \ - ((unsigned long) (pmd_val(pmd) & PAGE_MASK)) -#define pmd_page(pmd) \ - pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT)) -#endif /* to find an entry in a kernel page-table-directory */ #define pgd_offset_k(address) pgd_offset(&init_mm, address) @@ -360,7 +346,8 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, #define pte_offset_kernel(dir, addr) \ ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(addr)) #define pte_offset_map(dir, addr) \ - ((pte_t *) kmap_atomic(pmd_page(*(dir))) + pte_index(addr)) + ((pte_t *)(kmap_atomic(pmd_page(*(dir))) + \ + (pmd_page_vaddr(*(dir)) & ~PAGE_MASK)) + pte_index(addr)) #define pte_unmap(pte) kunmap_atomic(pte) /* @@ -384,7 +371,7 @@ static inline int pte_dirty(pte_t pte) { return !!(pte_val(pte) & _PAGE_DIRTY); static inline int pte_young(pte_t pte) { return !!(pte_val(pte) & _PAGE_ACCESSED); } static inline int pte_special(pte_t pte) { return !!(pte_val(pte) & _PAGE_SPECIAL); } static inline int pte_none(pte_t pte) { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; } -static inline bool pte_exec(pte_t pte) { return true; } +static inline bool pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_EXEC; } static inline int pte_present(pte_t pte) { @@ -451,7 +438,7 @@ static inline pte_t pte_wrprotect(pte_t pte) static inline pte_t pte_exprotect(pte_t pte) { - return pte; + return __pte(pte_val(pte) & ~_PAGE_EXEC); } static inline pte_t pte_mkclean(pte_t pte) @@ -466,7 +453,7 @@ static inline pte_t pte_mkold(pte_t pte) static inline pte_t pte_mkexec(pte_t pte) { - return pte; + return __pte(pte_val(pte) | _PAGE_EXEC); } static inline pte_t pte_mkpte(pte_t pte) @@ -524,7 +511,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte, int percpu) { -#if defined(CONFIG_PPC_STD_MMU_32) && defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT) +#if defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT) /* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the * helper pte_update() which does an atomic update. We need to do that * because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a @@ -537,7 +524,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, else pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte)); -#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT) +#elif defined(CONFIG_PTE_64BIT) /* Second case is 32-bit with 64-bit PTE. In this case, we * can just store as long as we do the two halves in the right order * with a barrier in between. 
This is possible because we take care, @@ -560,7 +547,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, : "=m" (*ptep), "=m" (*((unsigned char *)ptep+4)) : "r" (pte) : "memory"); -#elif defined(CONFIG_PPC_STD_MMU_32) +#else /* Third case is 32-bit hash table in UP mode, we need to preserve * the _PAGE_HASHPTE bit since we may not have invalidated the previous * translation in the hash yet (done in a subsequent flush_tlb_xxx()) */ *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE) | (pte_val(pte) & ~_PAGE_HASHPTE)); - -#else -#error "Not supported " #endif } diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h index 15bc16b1dc9c..cf5ba5254299 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h @@ -1,11 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _ASM_POWERPC_BOOK3S_64_HASH_4K_H #define _ASM_POWERPC_BOOK3S_64_HASH_4K_H -/* - * Entries per page directory level. The PTE level must use a 64b record - * for each page table entry. The PMD and PGD level use a 32b record for - * each entry by assuming that each entry is page aligned. - */ + #define H_PTE_INDEX_SIZE 9 #define H_PMD_INDEX_SIZE 7 #define H_PUD_INDEX_SIZE 9 diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h index 6328857f259f..1ceee000c18d 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu.h +++ b/arch/powerpc/include/asm/book3s/64/mmu.h @@ -2,6 +2,8 @@ #ifndef _ASM_POWERPC_BOOK3S_64_MMU_H_ #define _ASM_POWERPC_BOOK3S_64_MMU_H_ +#include + #ifndef __ASSEMBLY__ /* * Page size definition @@ -24,6 +26,13 @@ struct mmu_psize_def { }; extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT]; +/* + * For Book3S 64 with 4K and 64K Linux page sizes + * we want to use pointers, because the page table + * actually stores the pfn + */ +typedef pte_t *pgtable_t; + #endif /* __ASSEMBLY__ */ /* 64-bit classic hash table MMU */ diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h index 391ed2c3b697..4aba625389c4 100644 --- a/arch/powerpc/include/asm/book3s/64/pgalloc.h +++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h @@ -37,10 +37,7 @@ extern struct vmemmap_backing *vmemmap_list; #define MAX_PGTABLE_INDEX_SIZE 0xf extern struct kmem_cache *pgtable_cache[]; -#define PGT_CACHE(shift) ({ \ - BUG_ON(!(shift)); \ - pgtable_cache[(shift) - 1]; \ - }) +#define PGT_CACHE(shift) pgtable_cache[shift] extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int); extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long); @@ -50,6 +47,7 @@ extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift); #ifdef CONFIG_SMP extern void __tlb_remove_table(void *_table); #endif +void pte_frag_destroy(void *pte_frag); static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm) { diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 6c99e846a8c9..2e6ada28da64 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -1304,7 +1304,7 @@ static inline int pgd_devmap(pgd_t pgd) } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ -static inline const int pud_pfn(pud_t pud) +static inline int pud_pfn(pud_t pud) { /* * Currently all calls to pud_pfn() are gated around a pud_devmap() diff --git 
a/arch/powerpc/include/asm/cache.h b/arch/powerpc/include/asm/cache.h index 66298461b640..40ea5b3781c6 100644 --- a/arch/powerpc/include/asm/cache.h +++ b/arch/powerpc/include/asm/cache.h @@ -71,7 +71,7 @@ extern struct ppc64_caches ppc64_caches; #else #define __read_mostly __attribute__((__section__(".data..read_mostly"))) -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 extern long _get_L2CR(void); extern long _get_L3CR(void); extern void _set_L2CR(unsigned long); diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h index 3d5acd2b113a..2074b40f3fb5 100644 --- a/arch/powerpc/include/asm/code-patching.h +++ b/arch/powerpc/include/asm/code-patching.h @@ -33,14 +33,33 @@ unsigned int create_cond_branch(const unsigned int *addr, int patch_branch(unsigned int *addr, unsigned long target, int flags); int patch_instruction(unsigned int *addr, unsigned int instr); int raw_patch_instruction(unsigned int *addr, unsigned int instr); -int patch_instruction_site(s32 *addr, unsigned int instr); -int patch_branch_site(s32 *site, unsigned long target, int flags); static inline unsigned long patch_site_addr(s32 *site) { return (unsigned long)site + *site; } +static inline int patch_instruction_site(s32 *site, unsigned int instr) +{ + return patch_instruction((unsigned int *)patch_site_addr(site), instr); +} + +static inline int patch_branch_site(s32 *site, unsigned long target, int flags) +{ + return patch_branch((unsigned int *)patch_site_addr(site), target, flags); +} + +static inline int modify_instruction(unsigned int *addr, unsigned int clr, + unsigned int set) +{ + return patch_instruction(addr, (*addr & ~clr) | set); +} + +static inline int modify_instruction_site(s32 *site, unsigned int clr, unsigned int set) +{ + return modify_instruction((unsigned int *)patch_site_addr(site), clr, set); +} + int instr_is_relative_branch(unsigned int instr); int instr_is_relative_link_branch(unsigned int instr); int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr); diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h index 29f49a35d6ee..d05f0c28e515 100644 --- a/arch/powerpc/include/asm/cputable.h +++ b/arch/powerpc/include/asm/cputable.h @@ -44,6 +44,7 @@ extern int machine_check_e500(struct pt_regs *regs); extern int machine_check_e200(struct pt_regs *regs); extern int machine_check_47x(struct pt_regs *regs); int machine_check_8xx(struct pt_regs *regs); +int machine_check_83xx(struct pt_regs *regs); extern void cpu_down_flush_e500v2(void); extern void cpu_down_flush_e500mc(void); @@ -296,7 +297,7 @@ static inline void cpu_feature_keys_init(void) { } #define CPU_FTRS_PPC601 (CPU_FTR_COMMON | CPU_FTR_601 | \ CPU_FTR_COHERENT_ICACHE | CPU_FTR_UNIFIED_ID_CACHE | CPU_FTR_USE_RTC) #define CPU_FTRS_603 (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | \ - CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_PPC_LE) + CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_PPC_LE | CPU_FTR_NOEXECUTE) #define CPU_FTRS_604 (CPU_FTR_COMMON | CPU_FTR_PPC_LE) #define CPU_FTRS_740_NOTAU (CPU_FTR_COMMON | \ CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_L2CR | \ @@ -367,15 +368,15 @@ static inline void cpu_feature_keys_init(void) { } CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \ CPU_FTR_SPEC7450 | CPU_FTR_NAP_DISABLE_L2_PR | \ CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX) -#define CPU_FTRS_82XX (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE) +#define CPU_FTRS_82XX (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_NOEXECUTE) #define CPU_FTRS_G2_LE (CPU_FTR_COMMON | 
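/* A usage sketch for the patch-site helpers made static inline in the
 * code-patching.h hunk above; the site name is real (declared in the
 * mmu-44x.h hunk of this patch) but the operands are illustrative:
 *
 *	// clear the low halfword of the patched instruction, then set it
 *	modify_instruction_site(&patch__tlb_44x_hwater_D, 0xffff, new_bound);
 *
 * patch_site_addr() converts the self-relative s32 offset back into the
 * instruction address, so no out-of-line helpers are needed.
 */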
CPU_FTR_MAYBE_CAN_DOZE | \ CPU_FTR_MAYBE_CAN_NAP) #define CPU_FTRS_E300 (CPU_FTR_MAYBE_CAN_DOZE | \ CPU_FTR_MAYBE_CAN_NAP | \ - CPU_FTR_COMMON) + CPU_FTR_COMMON | CPU_FTR_NOEXECUTE) #define CPU_FTRS_E300C2 (CPU_FTR_MAYBE_CAN_DOZE | \ CPU_FTR_MAYBE_CAN_NAP | \ - CPU_FTR_COMMON | CPU_FTR_FPU_UNAVAILABLE) + CPU_FTR_COMMON | CPU_FTR_FPU_UNAVAILABLE | CPU_FTR_NOEXECUTE) #define CPU_FTRS_CLASSIC32 (CPU_FTR_COMMON) #define CPU_FTRS_8XX (CPU_FTR_NOEXECUTE) #define CPU_FTRS_40X (CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE) diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h index 8fa394520af6..ebf66809f2d3 100644 --- a/arch/powerpc/include/asm/dma-mapping.h +++ b/arch/powerpc/include/asm/dma-mapping.h @@ -39,9 +39,6 @@ extern int dma_nommu_mmap_coherent(struct device *dev, * to ensure it is consistent. */ struct device; -extern void *__dma_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *handle, gfp_t gfp); -extern void __dma_free_coherent(size_t size, void *vaddr); extern void __dma_sync(void *vaddr, size_t size, int direction); extern void __dma_sync_page(struct page *page, unsigned long offset, size_t size, int direction); @@ -52,8 +49,6 @@ extern unsigned long __dma_get_coherent_pfn(unsigned long cpu_addr); * Cache coherent cores. */ -#define __dma_alloc_coherent(dev, gfp, size, handle) NULL -#define __dma_free_coherent(size, addr) ((void)0) #define __dma_sync(addr, size, rw) ((void)0) #define __dma_sync_page(pg, off, sz, rw) ((void)0) @@ -108,11 +103,8 @@ static inline void set_dma_offset(struct device *dev, dma_addr_t off) } #define HAVE_ARCH_DMA_SET_MASK 1 -extern int dma_set_mask(struct device *dev, u64 dma_mask); extern u64 __dma_get_required_mask(struct device *dev); -#define ARCH_HAS_DMA_MMAP_COHERENT - #endif /* __KERNEL__ */ #endif /* _ASM_DMA_MAPPING_H */ diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h index 1e7a33592e29..188776befaf9 100644 --- a/arch/powerpc/include/asm/fadump.h +++ b/arch/powerpc/include/asm/fadump.h @@ -48,6 +48,10 @@ #define memblock_num_regions(memblock_type) (memblock.memblock_type.cnt) +/* Alignment per CMA requirement. 
*/ +#define FADUMP_CMA_ALIGNMENT (PAGE_SIZE << \ + max_t(unsigned long, MAX_ORDER - 1, pageblock_order)) + /* Firmware provided dump sections */ #define FADUMP_CPU_STATE_DATA 0x0001 #define FADUMP_HPTE_REGION 0x0002 @@ -141,6 +145,7 @@ struct fw_dump { unsigned long fadump_supported:1; unsigned long dump_active:1; unsigned long dump_registered:1; + unsigned long nocma:1; }; /* @@ -200,7 +205,7 @@ struct fad_crash_memory_ranges { unsigned long long size; }; -extern int is_fadump_boot_memory_area(u64 addr, ulong size); +extern int is_fadump_memory_area(u64 addr, ulong size); extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname, int depth, void *data); extern int fadump_reserve_mem(void); diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h index 33b6f9c892c8..40a6c9261a6b 100644 --- a/arch/powerpc/include/asm/feature-fixups.h +++ b/arch/powerpc/include/asm/feature-fixups.h @@ -221,6 +221,17 @@ label##3: \ FTR_ENTRY_OFFSET 953b-954b; \ .popsection; +#define START_BTB_FLUSH_SECTION \ +955: \ + +#define END_BTB_FLUSH_SECTION \ +956: \ + .pushsection __btb_flush_fixup,"a"; \ + .align 2; \ +957: \ + FTR_ENTRY_OFFSET 955b-957b; \ + FTR_ENTRY_OFFSET 956b-957b; \ + .popsection; #ifndef __ASSEMBLY__ #include @@ -230,6 +241,7 @@ extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup; extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup; extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup; extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup; +extern long __start__btb_flush_fixup, __stop__btb_flush_fixup; void apply_feature_fixups(void); void setup_feature_keys(void); diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h index 383da1ab9e23..8d40565ad0c3 100644 --- a/arch/powerpc/include/asm/hugetlb.h +++ b/arch/powerpc/include/asm/hugetlb.h @@ -5,8 +5,6 @@ #ifdef CONFIG_HUGETLB_PAGE #include -extern struct kmem_cache *hugepte_cache; - #ifdef CONFIG_PPC_BOOK3S_64 #include @@ -76,7 +74,9 @@ static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr, unsigned long idx = 0; pte_t *dir = hugepd_page(hpd); -#ifndef CONFIG_PPC_FSL_BOOK3E +#ifdef CONFIG_PPC_8xx + idx = (addr & ((1UL << pdshift) - 1)) >> PAGE_SHIFT; +#elif !defined(CONFIG_PPC_FSL_BOOK3E) idx = (addr & ((1UL << pdshift) - 1)) >> hugepd_shift(hpd); #endif @@ -129,15 +129,14 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, static inline void huge_ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { - pte_t pte; - pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); + huge_ptep_get_and_clear(vma->vm_mm, addr, ptep); flush_hugetlb_page(vma, addr); } #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS -extern int huge_ptep_set_access_flags(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep, - pte_t pte, int dirty); +int huge_ptep_set_access_flags(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, + pte_t pte, int dirty); static inline void arch_clear_hugepage_flags(struct page *page) { diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h index e746becd9d6f..7f19fbd3ba55 100644 --- a/arch/powerpc/include/asm/io.h +++ b/arch/powerpc/include/asm/io.h @@ -29,12 +29,14 @@ extern struct pci_dev *isa_bridge_pcidev; #include #include +#include #include #include #include #include #include #include +#include #ifdef CONFIG_PPC64 #include @@ -804,6 +806,8 @@ extern void 
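/* A worked example for the FADUMP_CMA_ALIGNMENT definition in the fadump.h
 * hunk above, assuming 4K pages, MAX_ORDER = 11 and pageblock_order = 10:
 *
 *	max_t(unsigned long, MAX_ORDER - 1, pageblock_order) = 10
 *	PAGE_SIZE << 10 = 4K << 10 = 4M
 *
 * i.e. the reservation is aligned to the larger of the buddy allocator's
 * biggest order and a pageblock, which is what CMA requires.
 */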
__iounmap_at(void *ea, unsigned long size); */ static inline unsigned long virt_to_phys(volatile void * address) { + WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && !virt_addr_valid(address)); + return __pa((unsigned long)address); } @@ -827,7 +831,14 @@ static inline void * phys_to_virt(unsigned long address) /* * Change "struct page" to physical address. */ -#define page_to_phys(page) ((phys_addr_t)page_to_pfn(page) << PAGE_SHIFT) +static inline phys_addr_t page_to_phys(struct page *page) +{ + unsigned long pfn = page_to_pfn(page); + + WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && !pfn_valid(pfn)); + + return PFN_PHYS(pfn); +} /* * 32 bits still uses virt_to_bus() for its implementation of DMA diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h index 35db0cbc9222..17524d222a7b 100644 --- a/arch/powerpc/include/asm/iommu.h +++ b/arch/powerpc/include/asm/iommu.h @@ -143,8 +143,6 @@ struct scatterlist; #ifdef CONFIG_PPC64 -#define IOMMU_MAPPING_ERROR (~(dma_addr_t)0x0) - static inline void set_iommu_table_base(struct device *dev, struct iommu_table *base) { @@ -215,11 +213,12 @@ struct iommu_table_group { extern void iommu_register_group(struct iommu_table_group *table_group, int pci_domain_number, unsigned long pe_num); -extern int iommu_add_device(struct device *dev); +extern int iommu_add_device(struct iommu_table_group *table_group, + struct device *dev); extern void iommu_del_device(struct device *dev); -extern int __init tce_iommu_bus_notifier_init(void); -extern long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry, - unsigned long *hpa, enum dma_data_direction *direction); +extern long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl, + unsigned long entry, unsigned long *hpa, + enum dma_data_direction *direction); #else static inline void iommu_register_group(struct iommu_table_group *table_group, int pci_domain_number, @@ -227,7 +226,8 @@ static inline void iommu_register_group(struct iommu_table_group *table_group, { } -static inline int iommu_add_device(struct device *dev) +static inline int iommu_add_device(struct iommu_table_group *table_group, + struct device *dev) { return 0; } @@ -235,15 +235,8 @@ static inline int iommu_add_device(struct device *dev) static inline void iommu_del_device(struct device *dev) { } - -static inline int __init tce_iommu_bus_notifier_init(void) -{ - return 0; -} #endif /* !CONFIG_IOMMU_API */ -int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr); - #else static inline void *get_iommu_table_base(struct device *dev) diff --git a/arch/powerpc/include/asm/ipic.h b/arch/powerpc/include/asm/ipic.h index fb59829983b8..3dbd47f2bffe 100644 --- a/arch/powerpc/include/asm/ipic.h +++ b/arch/powerpc/include/asm/ipic.h @@ -69,7 +69,6 @@ enum ipic_mcp_irq { IPIC_MCP_MU = 7, }; -extern int ipic_set_priority(unsigned int irq, unsigned int priority); extern void ipic_set_highest_priority(unsigned int irq); extern void ipic_set_default_priority(void); extern void ipic_enable_mcp(enum ipic_mcp_irq mcp_irq); diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h index eb20eb3b8fb0..25607604a7a5 100644 --- a/arch/powerpc/include/asm/mmu.h +++ b/arch/powerpc/include/asm/mmu.h @@ -48,7 +48,7 @@ #define MMU_FTR_USE_HIGH_BATS ASM_CONST(0x00010000) /* Enable >32-bit physical addresses on 32-bit processor, only used - * by CONFIG_6xx currently as BookE supports that from day 1 + * by CONFIG_PPC_BOOK3S_32 currently as BookE supports that from day 1 */ #define MMU_FTR_BIG_PHYS 
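/* What the CONFIG_DEBUG_VIRTUAL checks added to virt_to_phys() and
 * page_to_phys() above catch, as a sketch (the caller is hypothetical):
 *
 *	void *p = vmalloc(PAGE_SIZE);
 *	phys_addr_t pa = virt_to_phys(p);	// now WARNs
 *
 * virt_addr_valid() only accepts linear-mapping addresses, so passing
 * vmalloc or ioremap pointers into __pa() warns instead of silently
 * yielding a bogus physical address.
 */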
ASM_CONST(0x00020000) @@ -131,16 +131,37 @@ DECLARE_PER_CPU(int, next_tlbcam_idx); #endif enum { - MMU_FTRS_POSSIBLE = MMU_FTR_HPTE_TABLE | MMU_FTR_TYPE_8xx | - MMU_FTR_TYPE_40x | MMU_FTR_TYPE_44x | MMU_FTR_TYPE_FSL_E | - MMU_FTR_TYPE_47x | MMU_FTR_USE_HIGH_BATS | MMU_FTR_BIG_PHYS | - MMU_FTR_USE_TLBIVAX_BCAST | MMU_FTR_USE_TLBILX | - MMU_FTR_LOCK_BCAST_INVAL | MMU_FTR_NEED_DTLB_SW_LRU | + MMU_FTRS_POSSIBLE = +#ifdef CONFIG_PPC_BOOK3S + MMU_FTR_HPTE_TABLE | +#endif +#ifdef CONFIG_PPC_8xx + MMU_FTR_TYPE_8xx | +#endif +#ifdef CONFIG_40x + MMU_FTR_TYPE_40x | +#endif +#ifdef CONFIG_44x + MMU_FTR_TYPE_44x | +#endif +#if defined(CONFIG_E200) || defined(CONFIG_E500) + MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS | MMU_FTR_USE_TLBILX | +#endif +#ifdef CONFIG_PPC_47x + MMU_FTR_TYPE_47x | MMU_FTR_USE_TLBIVAX_BCAST | MMU_FTR_LOCK_BCAST_INVAL | +#endif +#ifdef CONFIG_PPC_BOOK3S_32 + MMU_FTR_USE_HIGH_BATS | MMU_FTR_NEED_DTLB_SW_LRU | +#endif +#ifdef CONFIG_PPC_BOOK3E_64 MMU_FTR_USE_TLBRSRV | MMU_FTR_USE_PAIRED_MAS | +#endif +#ifdef CONFIG_PPC_BOOK3S_64 MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE | MMU_FTR_1T_SEGMENT | MMU_FTR_TLBIE_CROP_VA | MMU_FTR_KERNEL_RO | MMU_FTR_68_BIT_VA | +#endif #ifdef CONFIG_PPC_RADIX_MMU MMU_FTR_TYPE_RADIX | #endif @@ -338,21 +359,11 @@ static inline void mmu_early_init_devtree(void) { } #endif /* __ASSEMBLY__ */ #endif -#if defined(CONFIG_PPC_STD_MMU_32) +#if defined(CONFIG_PPC_BOOK3S_32) /* 32-bit classic hash table MMU */ #include -#elif defined(CONFIG_40x) -/* 40x-style software loaded TLB */ -# include -#elif defined(CONFIG_44x) -/* 44x-style software loaded TLB */ -# include -#elif defined(CONFIG_PPC_BOOK3E_MMU) -/* Freescale Book-E software loaded TLB or Book-3e (ISA 2.06+) MMU */ -# include -#elif defined (CONFIG_PPC_8xx) -/* Motorola/Freescale 8xx software loaded TLB */ -# include +#elif defined(CONFIG_PPC_MMU_NOHASH) +#include #endif #endif /* __KERNEL__ */ diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index 0381394a425b..6ee8195a2ffb 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -21,9 +21,12 @@ struct mm_iommu_table_group_mem_t; extern int isolate_lru_page(struct page *page); /* from internal.h */ extern bool mm_iommu_preregistered(struct mm_struct *mm); -extern long mm_iommu_get(struct mm_struct *mm, +extern long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries, struct mm_iommu_table_group_mem_t **pmem); +extern long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua, + unsigned long entries, unsigned long dev_hpa, + struct mm_iommu_table_group_mem_t **pmem); extern long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem); extern void mm_iommu_init(struct mm_struct *mm); @@ -32,15 +35,23 @@ extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup(struct mm_struct *mm, unsigned long ua, unsigned long size); extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm( struct mm_struct *mm, unsigned long ua, unsigned long size); -extern struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm, +extern struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries); extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem, unsigned long ua, unsigned int pageshift, unsigned long *hpa); extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, unsigned long ua, unsigned int 
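/* A reading of the mm_iommu_* renames above (sketch, no extra API assumed):
 * registration moved from mm_iommu_get() to mm_iommu_new(), and the old
 * mm_iommu_find() lookup is now called mm_iommu_get(), so a caller does
 *
 *	mem = mm_iommu_get(mm, ua, entries);		// lookup + reference
 *	if (!mem)
 *		ret = mm_iommu_new(mm, ua, entries, &mem);	// register
 *
 * with mm_iommu_newdev() covering the new device-memory (dev_hpa) case.
 */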
pageshift, unsigned long *hpa); extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua); +extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa, + unsigned int pageshift, unsigned long *size); extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem); extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem); +#else +static inline bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa, + unsigned int pageshift, unsigned long *size) +{ + return false; +} #endif extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm); extern void set_context(unsigned long id, pgd_t *pgd); @@ -217,13 +228,7 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, #endif } -static inline int arch_dup_mmap(struct mm_struct *oldmm, - struct mm_struct *mm) -{ - return 0; -} - -#ifndef CONFIG_PPC_BOOK3S_64 +#ifdef CONFIG_PPC_BOOK3E_64 static inline void arch_exit_mmap(struct mm_struct *mm) { } @@ -247,6 +252,7 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm, #ifdef CONFIG_PPC_MEM_KEYS bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write, bool execute, bool foreign); +void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm); #else /* CONFIG_PPC_MEM_KEYS */ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write, bool execute, bool foreign) @@ -259,6 +265,7 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, #define thread_pkey_regs_save(thread) #define thread_pkey_regs_restore(new_thread, old_thread) #define thread_pkey_regs_init(thread) +#define arch_dup_pkeys(oldmm, mm) static inline u64 pte_to_hpte_pkey_bits(u64 pteflags) { @@ -267,5 +274,12 @@ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags) #endif /* CONFIG_PPC_MEM_KEYS */ +static inline int arch_dup_mmap(struct mm_struct *oldmm, + struct mm_struct *mm) +{ + arch_dup_pkeys(oldmm, mm); + return 0; +} + #endif /* __KERNEL__ */ #endif /* __ASM_POWERPC_MMU_CONTEXT_H */ diff --git a/arch/powerpc/include/asm/mmu-40x.h b/arch/powerpc/include/asm/nohash/32/mmu-40x.h similarity index 100% rename from arch/powerpc/include/asm/mmu-40x.h rename to arch/powerpc/include/asm/nohash/32/mmu-40x.h diff --git a/arch/powerpc/include/asm/mmu-44x.h b/arch/powerpc/include/asm/nohash/32/mmu-44x.h similarity index 98% rename from arch/powerpc/include/asm/mmu-44x.h rename to arch/powerpc/include/asm/nohash/32/mmu-44x.h index 295b3dbb2698..28aa3b339c5e 100644 --- a/arch/powerpc/include/asm/mmu-44x.h +++ b/arch/powerpc/include/asm/nohash/32/mmu-44x.h @@ -111,6 +111,9 @@ typedef struct { unsigned long vdso_base; } mm_context_t; +/* patch sites */ +extern s32 patch__tlb_44x_hwater_D, patch__tlb_44x_hwater_I; + #endif /* !__ASSEMBLY__ */ #ifndef CONFIG_PPC_EARLY_DEBUG_44x diff --git a/arch/powerpc/include/asm/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h similarity index 98% rename from arch/powerpc/include/asm/mmu-8xx.h rename to arch/powerpc/include/asm/nohash/32/mmu-8xx.h index fa05aa566ece..b0f764c827c0 100644 --- a/arch/powerpc/include/asm/mmu-8xx.h +++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h @@ -190,6 +190,7 @@ typedef struct { struct slice_mask mask_8m; # endif #endif + void *pte_frag; } mm_context_t; #define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000) @@ -244,6 +245,9 @@ extern s32 patch__itlbmiss_perf, patch__dtlbmiss_perf; #define mmu_virtual_psize MMU_PAGE_4K #elif defined(CONFIG_PPC_16K_PAGES) #define mmu_virtual_psize MMU_PAGE_16K +#define PTE_FRAG_NR 4 +#define 
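/* The arithmetic behind these fragment constants (sketch):
 *
 *	PTE_FRAG_SIZE = 1UL << PTE_FRAG_SIZE_SHIFT	(4K)
 *	PTE_FRAG_NR   = 16K page / 4K fragment		= 4
 *
 * so on 8xx with 16K pages a single page hosts four page tables, handed out
 * by pte_fragment_alloc() and released via pte_fragment_free(), tracked by
 * the pte_frag pointer added to mm_context_t above.
 */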
PTE_FRAG_SIZE_SHIFT 12 +#define PTE_FRAG_SIZE (1UL << 12) #else #error "Unsupported PAGE_SIZE" #endif diff --git a/arch/powerpc/include/asm/nohash/32/mmu.h b/arch/powerpc/include/asm/nohash/32/mmu.h new file mode 100644 index 000000000000..7d94a36d57d2 --- /dev/null +++ b/arch/powerpc/include/asm/nohash/32/mmu.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_NOHASH_32_MMU_H_ +#define _ASM_POWERPC_NOHASH_32_MMU_H_ + +#include + +#if defined(CONFIG_40x) +/* 40x-style software loaded TLB */ +#include +#elif defined(CONFIG_44x) +/* 44x-style software loaded TLB */ +#include +#elif defined(CONFIG_PPC_BOOK3E_MMU) +/* Freescale Book-E software loaded TLB or Book-3e (ISA 2.06+) MMU */ +#include +#elif defined (CONFIG_PPC_8xx) +/* Motorola/Freescale 8xx software loaded TLB */ +#include +#endif + +#ifndef __ASSEMBLY__ +typedef pte_t *pgtable_t; +#endif + +#endif /* _ASM_POWERPC_NOHASH_32_MMU_H_ */ diff --git a/arch/powerpc/include/asm/nohash/32/pgalloc.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h index 8825953c225b..17963951bdb0 100644 --- a/arch/powerpc/include/asm/nohash/32/pgalloc.h +++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h @@ -25,10 +25,7 @@ extern void __bad_pte(pmd_t *pmd); extern struct kmem_cache *pgtable_cache[]; -#define PGT_CACHE(shift) ({ \ - BUG_ON(!(shift)); \ - pgtable_cache[(shift) - 1]; \ - }) +#define PGT_CACHE(shift) pgtable_cache[shift] static inline pgd_t *pgd_alloc(struct mm_struct *mm) { @@ -61,11 +58,10 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t pte_page) { - *pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_USER | - _PMD_PRESENT); + *pmdp = __pmd(__pa(pte_page) | _PMD_USER | _PMD_PRESENT); } -#define pmd_pgtable(pmd) pmd_page(pmd) +#define pmd_pgtable(pmd) ((pgtable_t)pmd_page_vaddr(pmd)) #else static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, @@ -77,31 +73,32 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t pte_page) { - *pmdp = __pmd((unsigned long)lowmem_page_address(pte_page) | _PMD_PRESENT); + *pmdp = __pmd((unsigned long)pte_page | _PMD_PRESENT); } -#define pmd_pgtable(pmd) pmd_page(pmd) +#define pmd_pgtable(pmd) ((pgtable_t)pmd_page_vaddr(pmd)) #endif extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr); extern pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr); +void pte_frag_destroy(void *pte_frag); +pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel); +void pte_fragment_free(unsigned long *table, int kernel); static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_page((unsigned long)pte); + pte_fragment_free((unsigned long *)pte, 1); } static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage) { - pgtable_page_dtor(ptepage); - __free_page(ptepage); + pte_fragment_free((unsigned long *)ptepage, 0); } static inline void pgtable_free(void *table, unsigned index_size) { if (!index_size) { - pgtable_page_dtor(virt_to_page(table)); - free_page((unsigned long)table); + pte_fragment_free((unsigned long *)table, 0); } else { BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE); kmem_cache_free(PGT_CACHE(index_size), table); @@ -140,6 +137,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, unsigned long address) { tlb_flush_pgtable(tlb, address); - 
pgtable_free_tlb(tlb, page_address(table), 0); + pgtable_free_tlb(tlb, table, 0); } #endif /* _ASM_POWERPC_PGALLOC_32_H */ diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h index 3ffb0ff5a038..bed433358260 100644 --- a/arch/powerpc/include/asm/nohash/32/pgtable.h +++ b/arch/powerpc/include/asm/nohash/32/pgtable.h @@ -232,7 +232,13 @@ static inline unsigned long pte_update(pte_t *p, : "cc" ); #else /* PTE_ATOMIC_UPDATES */ unsigned long old = pte_val(*p); - *p = __pte((old & ~clr) | set); + unsigned long new = (old & ~clr) | set; + +#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES) + p->pte = p->pte1 = p->pte2 = p->pte3 = new; +#else + *p = __pte(new); +#endif #endif /* !PTE_ATOMIC_UPDATES */ #ifdef CONFIG_44x @@ -333,12 +339,12 @@ static inline int pte_young(pte_t pte) */ #ifndef CONFIG_BOOKE #define pmd_page_vaddr(pmd) \ - ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)) + ((unsigned long)__va(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1))) #define pmd_page(pmd) \ pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT) #else #define pmd_page_vaddr(pmd) \ - ((unsigned long) (pmd_val(pmd) & PAGE_MASK)) + ((unsigned long)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1))) #define pmd_page(pmd) \ pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT)) #endif @@ -357,7 +363,8 @@ static inline int pte_young(pte_t pte) (pmd_bad(*(dir)) ? NULL : (pte_t *)pmd_page_vaddr(*(dir)) + \ pte_index(addr)) #define pte_offset_map(dir, addr) \ - ((pte_t *) kmap_atomic(pmd_page(*(dir))) + pte_index(addr)) + ((pte_t *)(kmap_atomic(pmd_page(*(dir))) + \ + (pmd_page_vaddr(*(dir)) & ~PAGE_MASK)) + pte_index(addr)) #define pte_unmap(pte) kunmap_atomic(pte) /* diff --git a/arch/powerpc/include/asm/nohash/32/pte-40x.h b/arch/powerpc/include/asm/nohash/32/pte-40x.h index 661f4599f2fc..12c6811e344b 100644 --- a/arch/powerpc/include/asm/nohash/32/pte-40x.h +++ b/arch/powerpc/include/asm/nohash/32/pte-40x.h @@ -33,7 +33,7 @@ * is cleared in the TLB miss handler before the TLB entry is loaded. * - All other bits of the PTE are loaded into TLBLO without * modification, leaving us only the bits 20, 21, 24, 25, 26, 30 for - * software PTE bits. We actually use use bits 21, 24, 25, and + * software PTE bits. We actually use bits 21, 24, 25, and * 30 respectively for the software bits: ACCESSED, DIRTY, RW, and * PRESENT. 
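A note on the pgalloc/pgtable hunks above: they bring the 64-bit "PTE fragment" scheme to 32-bit nohash. Several page tables can now share one physical page, so a PMD entry may point into the middle of a page; that is why pmd_page_vaddr() now masks with ~(PTE_TABLE_SIZE - 1) instead of PAGE_MASK, and why pte_offset_map() re-adds the sub-page offset after kmap_atomic(). The sketch below is a minimal userspace model of the idea, not the kernel code: the cursor handling, sizes and names are illustrative, and the real pte_fragment_alloc() also refcounts the backing page.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE	16384UL			/* e.g. 8xx with 16K pages */
#define PTE_FRAG_NR	4
#define PTE_FRAG_SIZE	(PAGE_SIZE / PTE_FRAG_NR)

struct ctx { void *pte_frag; };			/* per-mm cursor, as in mm_context_t */

static void *pte_frag_alloc(struct ctx *ctx)
{
	char *p = ctx->pte_frag;

	if (!p) {				/* no partially used page: start one */
		p = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
		if (!p)
			return NULL;
		memset(p, 0, PAGE_SIZE);
	}
	/* remember the next fragment, or NULL once the page is used up */
	if (((uintptr_t)p + PTE_FRAG_SIZE) & (PAGE_SIZE - 1))
		ctx->pte_frag = p + PTE_FRAG_SIZE;
	else
		ctx->pte_frag = NULL;
	return p;
}

Because a fragment can start mid-page, the table address recovered from a PMD has to keep those low bits; masking with PAGE_MASK would silently point at the first fragment of the page.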
*/ diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h index 6bfe041ef59d..c9e4b2d90f65 100644 --- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h +++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h @@ -65,9 +65,6 @@ #define _PTE_NONE_MASK 0 -/* Until my rework is finished, 8xx still needs atomic PTE updates */ -#define PTE_ATOMIC_UPDATES 1 - #ifdef CONFIG_PPC_16K_PAGES #define _PAGE_PSIZE _PAGE_SPS #else diff --git a/arch/powerpc/include/asm/nohash/64/mmu.h b/arch/powerpc/include/asm/nohash/64/mmu.h new file mode 100644 index 000000000000..e6585480dfc4 --- /dev/null +++ b/arch/powerpc/include/asm/nohash/64/mmu.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_NOHASH_64_MMU_H_ +#define _ASM_POWERPC_NOHASH_64_MMU_H_ + +/* Freescale Book-E software loaded TLB or Book-3e (ISA 2.06+) MMU */ +#include + +#ifndef __ASSEMBLY__ +typedef struct page *pgtable_t; +#endif + +#endif /* _ASM_POWERPC_NOHASH_64_MMU_H_ */ diff --git a/arch/powerpc/include/asm/nohash/64/pgalloc.h b/arch/powerpc/include/asm/nohash/64/pgalloc.h index e2d62d033708..e95eb499a174 100644 --- a/arch/powerpc/include/asm/nohash/64/pgalloc.h +++ b/arch/powerpc/include/asm/nohash/64/pgalloc.h @@ -36,10 +36,7 @@ extern struct vmemmap_backing *vmemmap_list; #define MAX_PGTABLE_INDEX_SIZE 0xf extern struct kmem_cache *pgtable_cache[]; -#define PGT_CACHE(shift) ({ \ - BUG_ON(!(shift)); \ - pgtable_cache[(shift) - 1]; \ - }) +#define PGT_CACHE(shift) pgtable_cache[shift] static inline pgd_t *pgd_alloc(struct mm_struct *mm) { diff --git a/arch/powerpc/include/asm/mmu-book3e.h b/arch/powerpc/include/asm/nohash/mmu-book3e.h similarity index 100% rename from arch/powerpc/include/asm/mmu-book3e.h rename to arch/powerpc/include/asm/nohash/mmu-book3e.h diff --git a/arch/powerpc/include/asm/nohash/mmu.h b/arch/powerpc/include/asm/nohash/mmu.h new file mode 100644 index 000000000000..a037cb1efb57 --- /dev/null +++ b/arch/powerpc/include/asm/nohash/mmu.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_NOHASH_MMU_H_ +#define _ASM_POWERPC_NOHASH_MMU_H_ + +#ifdef CONFIG_PPC64 +#include +#else +#include +#endif + +#endif /* _ASM_POWERPC_NOHASH_MMU_H_ */ diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h index 70ff23974b59..1ca1c1864b32 100644 --- a/arch/powerpc/include/asm/nohash/pgtable.h +++ b/arch/powerpc/include/asm/nohash/pgtable.h @@ -209,7 +209,11 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, /* Anything else just stores the PTE normally. That covers all 64-bit * cases, and 32-bit non-hash with 32-bit PTEs. 
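Background for the __set_pte_at()/pte_update() changes in this area: with CONFIG_PPC_16K_PAGES the 8xx hardware tablewalk still consumes 4K-sized entries, so one 16K Linux PTE has to be mirrored into four consecutive hardware slots, which the quadruple assignment in the following hunk does in one go. A reduced model, with illustrative names:

#include <stdint.h>

typedef uint32_t pte_basic_t;

/* one 16K Linux page is described by four consecutive 4K hardware PTEs */
typedef struct { pte_basic_t pte, pte1, pte2, pte3; } pte_t;

static inline void set_pte(pte_t *ptep, pte_basic_t val)
{
	/* all four copies must stay identical; the walker may read any one */
	ptep->pte = ptep->pte1 = ptep->pte2 = ptep->pte3 = val;
}

The deleted "until my rework is finished" comment above suggests the TLB-handling rework landing with this series is also what lets 8xx drop PTE_ATOMIC_UPDATES: PTE updates become plain stores again rather than lwarx/stwcx. sequences.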
*/ +#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES) + ptep->pte = ptep->pte1 = ptep->pte2 = ptep->pte3 = pte_val(pte); +#else *ptep = pte; +#endif /* * With hardware tablewalk, a sync is needed to ensure that diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h index ff3866473afe..a55b01c90bb1 100644 --- a/arch/powerpc/include/asm/opal.h +++ b/arch/powerpc/include/asm/opal.h @@ -347,6 +347,7 @@ extern int opal_async_comp_init(void); extern int opal_sensor_init(void); extern int opal_hmi_handler_init(void); extern int opal_event_init(void); +int opal_power_control_init(void); extern int opal_machine_check(struct pt_regs *regs); extern bool opal_mce_check_early_recovery(struct pt_regs *regs); diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h index f6a1265face2..5c5ea2413413 100644 --- a/arch/powerpc/include/asm/page.h +++ b/arch/powerpc/include/asm/page.h @@ -289,7 +289,7 @@ static inline bool pfn_valid(unsigned long pfn) * page tables at arbitrary addresses, this breaks and will have to change. */ #ifdef CONFIG_PPC64 -#define PD_HUGE 0x8000000000000000 +#define PD_HUGE 0x8000000000000000UL #else #define PD_HUGE 0x80000000 #endif @@ -335,23 +335,11 @@ void arch_free_page(struct page *page, int order); #endif struct vm_area_struct; -#ifdef CONFIG_PPC_BOOK3S_64 -/* - * For BOOK3s 64 with 4k and 64K linux page size - * we want to use pointers, because the page table - * actually store pfn - */ -typedef pte_t *pgtable_t; -#else -#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC64) -typedef pte_t *pgtable_t; -#else -typedef struct page *pgtable_t; -#endif -#endif #include #endif /* __ASSEMBLY__ */ #include +#define ARCH_ZONE_DMA_BITS 31 + #endif /* _ASM_POWERPC_PAGE_H */ diff --git a/arch/powerpc/include/asm/page_32.h b/arch/powerpc/include/asm/page_32.h index 5c378e9b78c8..683dfbc67ca8 100644 --- a/arch/powerpc/include/asm/page_32.h +++ b/arch/powerpc/include/asm/page_32.h @@ -22,7 +22,8 @@ #define PTE_FLAGS_OFFSET 0 #endif -#ifdef CONFIG_PPC_256K_PAGES +#if defined(CONFIG_PPC_256K_PAGES) || \ + (defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)) #define PTE_SHIFT (PAGE_SHIFT - PTE_T_LOG2 - 2) /* 1/4 of a page */ #else #define PTE_SHIFT (PAGE_SHIFT - PTE_T_LOG2) /* full page */ diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h index 94d449031b18..aee4fcc24990 100644 --- a/arch/powerpc/include/asm/pci-bridge.h +++ b/arch/powerpc/include/asm/pci-bridge.h @@ -129,6 +129,7 @@ struct pci_controller { #endif /* CONFIG_PPC64 */ void *private_data; + struct npu *npu; }; /* These are used for config access before all the PCI probing diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h index 2af9ded80540..0c72f1897063 100644 --- a/arch/powerpc/include/asm/pci.h +++ b/arch/powerpc/include/asm/pci.h @@ -129,5 +129,9 @@ extern void pcibios_scan_phb(struct pci_controller *hose); extern struct pci_dev *pnv_pci_get_gpu_dev(struct pci_dev *npdev); extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index); +extern int pnv_npu2_init(struct pci_controller *hose); +extern int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid, + unsigned long msr); +extern int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev); #endif /* __ASM_POWERPC_PCI_H */ diff --git a/arch/powerpc/include/asm/perf_event.h b/arch/powerpc/include/asm/perf_event.h index 16a49819da9a..35926cd6cd0b 100644 --- a/arch/powerpc/include/asm/perf_event.h +++ 
b/arch/powerpc/include/asm/perf_event.h @@ -39,4 +39,7 @@ (regs)->gpr[1] = current_stack_pointer(); \ asm volatile("mfmsr %0" : "=r" ((regs)->msr)); \ } while (0) + +/* To support perf_regs sier update */ +extern bool is_sier_available(void); #endif diff --git a/arch/powerpc/include/asm/perf_event_server.h b/arch/powerpc/include/asm/perf_event_server.h index 67a8a9585d50..e60aeb46d6a0 100644 --- a/arch/powerpc/include/asm/perf_event_server.h +++ b/arch/powerpc/include/asm/perf_event_server.h @@ -41,6 +41,8 @@ struct power_pmu { void (*get_mem_data_src)(union perf_mem_data_src *dsrc, u32 flags, struct pt_regs *regs); void (*get_mem_weight)(u64 *weight); + unsigned long group_constraint_mask; + unsigned long group_constraint_val; u64 (*bhrb_filter_map)(u64 branch_sample_type); void (*config_bhrb)(u64 pmu_bhrb_filter); void (*disable_pmc)(unsigned int pmc, unsigned long mmcr[]); diff --git a/arch/powerpc/include/asm/pgtable-types.h b/arch/powerpc/include/asm/pgtable-types.h index eccb30b38b47..3b0edf041b2e 100644 --- a/arch/powerpc/include/asm/pgtable-types.h +++ b/arch/powerpc/include/asm/pgtable-types.h @@ -3,7 +3,11 @@ #define _ASM_POWERPC_PGTABLE_TYPES_H /* PTE level */ +#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES) +typedef struct { pte_basic_t pte, pte1, pte2, pte3; } pte_t; +#else typedef struct { pte_basic_t pte; } pte_t; +#endif #define __pte(x) ((pte_t) { (x) }) static inline pte_basic_t pte_val(pte_t x) { diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h index 9679b7519a35..dad1d27e196d 100644 --- a/arch/powerpc/include/asm/pgtable.h +++ b/arch/powerpc/include/asm/pgtable.h @@ -66,7 +66,6 @@ extern unsigned long empty_zero_page[]; extern pgd_t swapper_pg_dir[]; -void limit_zone_pfn(enum zone_type zone, unsigned long max_pfn); int dma_pfn_limit_to_zone(u64 pfn_limit); extern void paging_init(void); @@ -101,7 +100,7 @@ extern int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, /* can we use this in kvm */ unsigned long vmalloc_to_phys(void *vmalloc_addr); -void pgtable_cache_add(unsigned shift, void (*ctor)(void *)); +void pgtable_cache_add(unsigned int shift); void pgtable_cache_init(void); #if defined(CONFIG_STRICT_KERNEL_RWX) || defined(CONFIG_PPC32) @@ -110,6 +109,35 @@ void mark_initmem_nx(void); static inline void mark_initmem_nx(void) { } #endif +/* + * When used, PTE_FRAG_NR is defined in subarch pgtable.h + * so we are sure it is included when arriving here. 
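A note on the pte_frag_get()/pte_frag_set() accessors added just below: when a subarch defines PTE_FRAG_NR the accessors touch the real per-context cursor; otherwise the fallback block defines PTE_FRAG_NR as 1, a fragment the size of a full page, and no-op accessors, so the shared allocator code needs no #ifdefs at its call sites. The pattern in isolation, using a hypothetical feature macro:

struct ctx { int widget; };

#ifdef HAVE_WIDGET			/* hypothetical feature switch */
static inline int  widget_get(struct ctx *c)        { return c->widget; }
static inline void widget_set(struct ctx *c, int v) { c->widget = v; }
#else
static inline int  widget_get(struct ctx *c)        { return 0; } /* benign default */
static inline void widget_set(struct ctx *c, int v) { }           /* no-op */
#endif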
+ */ +#ifdef PTE_FRAG_NR +static inline void *pte_frag_get(mm_context_t *ctx) +{ + return ctx->pte_frag; +} + +static inline void pte_frag_set(mm_context_t *ctx, void *p) +{ + ctx->pte_frag = p; +} +#else +#define PTE_FRAG_NR 1 +#define PTE_FRAG_SIZE_SHIFT PAGE_SHIFT +#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT) + +static inline void *pte_frag_get(mm_context_t *ctx) +{ + return NULL; +} + +static inline void pte_frag_set(mm_context_t *ctx, void *p) +{ +} +#endif + #endif /* __ASSEMBLY__ */ #endif /* _ASM_POWERPC_PGTABLE_H */ diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h index a6e9e314c707..19a8834e0398 100644 --- a/arch/powerpc/include/asm/ppc-opcode.h +++ b/arch/powerpc/include/asm/ppc-opcode.h @@ -257,6 +257,7 @@ #define PPC_INST_MTSPR_DSCR_USER_MASK 0xfc1ffffe #define PPC_INST_MFVSRD 0x7c000066 #define PPC_INST_MTVSRD 0x7c000166 +#define PPC_INST_SC 0x44000002 #define PPC_INST_SLBFEE 0x7c0007a7 #define PPC_INST_SLBIA 0x7c0003e4 @@ -342,6 +343,8 @@ #define PPC_INST_SLW 0x7c000030 #define PPC_INST_SLD 0x7c000036 #define PPC_INST_SRW 0x7c000430 +#define PPC_INST_SRAW 0x7c000630 +#define PPC_INST_SRAWI 0x7c000670 #define PPC_INST_SRD 0x7c000436 #define PPC_INST_SRAD 0x7c000634 #define PPC_INST_SRADI 0x7c000674 diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h index b5d023680801..e0637730a8e7 100644 --- a/arch/powerpc/include/asm/ppc_asm.h +++ b/arch/powerpc/include/asm/ppc_asm.h @@ -480,26 +480,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601) ori rd,rd,((KERNELBASE>>48)&0xFFFF);\ rotldi rd,rd,48 #else -/* - * On APUS (Amiga PowerPC cpu upgrade board), we don't know the - * physical base address of RAM at compile time. - */ #define toreal(rd) tophys(rd,rd) #define fromreal(rd) tovirt(rd,rd) -#define tophys(rd,rs) \ -0: addis rd,rs,-PAGE_OFFSET@h; \ - .section ".vtop_fixup","aw"; \ - .align 1; \ - .long 0b; \ - .previous - -#define tovirt(rd,rs) \ -0: addis rd,rs,PAGE_OFFSET@h; \ - .section ".ptov_fixup","aw"; \ - .align 1; \ - .long 0b; \ - .previous +#define tophys(rd, rs) addis rd, rs, -PAGE_OFFSET@h +#define tovirt(rd, rs) addis rd, rs, PAGE_OFFSET@h #endif #ifdef CONFIG_PPC_BOOK3S_64 @@ -821,4 +806,14 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601) stringify_in_c(.long (_target) - . 
;) \ stringify_in_c(.previous) +#ifdef CONFIG_PPC_FSL_BOOK3E +#define BTB_FLUSH(reg) \ + lis reg,BUCSR_INIT@h; \ + ori reg,reg,BUCSR_INIT@l; \ + mtspr SPRN_BUCSR,reg; \ + isync; +#else +#define BTB_FLUSH(reg) +#endif /* CONFIG_PPC_FSL_BOOK3E */ + #endif /* _ASM_POWERPC_PPC_ASM_H */ diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index de52c3166ba4..1c98ef1f2d5b 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -582,7 +582,7 @@ #define HID0_POWER9_RADIX __MASK(63 - 8) #define SPRN_HID1 0x3F1 /* Hardware Implementation Register 1 */ -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 #define HID1_EMCP (1<<31) /* 7450 Machine Check Pin Enable */ #define HID1_DFS (1<<22) /* 7447A Dynamic Frequency Scaling */ #define HID1_PC0 (1<<16) /* 7450 PLL_CFG[0] */ @@ -769,6 +769,8 @@ #define SRR1_PROGTRAP 0x00020000 /* Trap */ #define SRR1_PROGADDR 0x00010000 /* SRR0 contains subsequent addr */ +#define SRR1_MCE_MCP 0x00080000 /* Machine check signal caused interrupt */ + #define SPRN_HSRR0 0x13A /* Save/Restore Register 0 */ #define SPRN_HSRR1 0x13B /* Save/Restore Register 1 */ #define HSRR1_DENORM 0x00100000 /* Denorm exception */ diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h index 1fffbba8d6a5..65676e2325b8 100644 --- a/arch/powerpc/include/asm/setup.h +++ b/arch/powerpc/include/asm/setup.h @@ -67,6 +67,13 @@ void do_barrier_nospec_fixups_range(bool enable, void *start, void *end); static inline void do_barrier_nospec_fixups_range(bool enable, void *start, void *end) { }; #endif +#ifdef CONFIG_PPC_FSL_BOOK3E +void setup_spectre_v2(void); +#else +static inline void setup_spectre_v2(void) {}; +#endif +void do_btb_flush_fixups(void); + #endif /* !__ASSEMBLY__ */ #endif /* _ASM_POWERPC_SETUP_H */ diff --git a/arch/powerpc/include/asm/sfp-machine.h b/arch/powerpc/include/asm/sfp-machine.h index d89beaba26ff..8b957aabb826 100644 --- a/arch/powerpc/include/asm/sfp-machine.h +++ b/arch/powerpc/include/asm/sfp-machine.h @@ -213,30 +213,18 @@ * respectively. The result is placed in HIGH_SUM and LOW_SUM. Overflow * (i.e. carry out) is not stored anywhere, and is lost. */ -#define add_ssaaaa(sh, sl, ah, al, bh, bl) \ +#define add_ssaaaa(sh, sl, ah, al, bh, bl) \ do { \ if (__builtin_constant_p (bh) && (bh) == 0) \ - __asm__ ("{a%I4|add%I4c} %1,%3,%4\n\t{aze|addze} %0,%2" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "%r" ((USItype)(ah)), \ - "%r" ((USItype)(al)), \ - "rI" ((USItype)(bl))); \ - else if (__builtin_constant_p (bh) && (bh) ==~(USItype) 0) \ - __asm__ ("{a%I4|add%I4c} %1,%3,%4\n\t{ame|addme} %0,%2" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "%r" ((USItype)(ah)), \ - "%r" ((USItype)(al)), \ - "rI" ((USItype)(bl))); \ + __asm__ ("add%I4c %1,%3,%4\n\taddze %0,%2" \ + : "=r" (sh), "=&r" (sl) : "r" (ah), "%r" (al), "rI" (bl));\ + else if (__builtin_constant_p (bh) && (bh) == ~(USItype) 0) \ + __asm__ ("add%I4c %1,%3,%4\n\taddme %0,%2" \ + : "=r" (sh), "=&r" (sl) : "r" (ah), "%r" (al), "rI" (bl));\ else \ - __asm__ ("{a%I5|add%I5c} %1,%4,%5\n\t{ae|adde} %0,%2,%3" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "%r" ((USItype)(ah)), \ - "r" ((USItype)(bh)), \ - "%r" ((USItype)(al)), \ - "rI" ((USItype)(bl))); \ + __asm__ ("add%I5c %1,%4,%5\n\tadde %0,%2,%3" \ + : "=r" (sh), "=&r" (sl) \ + : "%r" (ah), "r" (bh), "%r" (al), "rI" (bl)); \ } while (0) /* sub_ddmmss is used in op-2.h and udivmodti4.c and should be equivalent to @@ -248,44 +236,24 @@ * and LOW_DIFFERENCE. 
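On the sfp-machine.h rewrite around here: the old {a%I4|add%I4c} spellings were the dual POWER/PowerPC mnemonics that modern assemblers no longer need, and the casts on the output operands were lvalue casts that newer compilers reject, hence the plain add%I4c/addze forms with bare operands. What add_ssaaaa computes is simply a 64-bit addition from 32-bit halves; a portable C model (illustrative name):

#include <stdint.h>

/* (sh:sl) = (ah:al) + (bh:bl), a 64-bit add built from 32-bit words */
static void add_ssaaaa_c(uint32_t *sh, uint32_t *sl,
			 uint32_t ah, uint32_t al,
			 uint32_t bh, uint32_t bl)
{
	*sl = al + bl;
	*sh = ah + bh + (*sl < al);	/* (*sl < al) detects the carry out */
}

In the asm versions, addc sets the carry bit, adde consumes it, and addze/addme handle the constant-high-word special cases selected via __builtin_constant_p().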
Overflow (i.e. carry out) is not stored anywhere, * and is lost. */ -#define sub_ddmmss(sh, sl, ah, al, bh, bl) \ +#define sub_ddmmss(sh, sl, ah, al, bh, bl) \ do { \ if (__builtin_constant_p (ah) && (ah) == 0) \ - __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{sfze|subfze} %0,%2" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "r" ((USItype)(bh)), \ - "rI" ((USItype)(al)), \ - "r" ((USItype)(bl))); \ - else if (__builtin_constant_p (ah) && (ah) ==~(USItype) 0) \ - __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{sfme|subfme} %0,%2" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "r" ((USItype)(bh)), \ - "rI" ((USItype)(al)), \ - "r" ((USItype)(bl))); \ + __asm__ ("subf%I3c %1,%4,%3\n\tsubfze %0,%2" \ + : "=r" (sh), "=&r" (sl) : "r" (bh), "rI" (al), "r" (bl));\ + else if (__builtin_constant_p (ah) && (ah) == ~(USItype) 0) \ + __asm__ ("subf%I3c %1,%4,%3\n\tsubfme %0,%2" \ + : "=r" (sh), "=&r" (sl) : "r" (bh), "rI" (al), "r" (bl));\ else if (__builtin_constant_p (bh) && (bh) == 0) \ - __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{ame|addme} %0,%2" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "r" ((USItype)(ah)), \ - "rI" ((USItype)(al)), \ - "r" ((USItype)(bl))); \ - else if (__builtin_constant_p (bh) && (bh) ==~(USItype) 0) \ - __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{aze|addze} %0,%2" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "r" ((USItype)(ah)), \ - "rI" ((USItype)(al)), \ - "r" ((USItype)(bl))); \ + __asm__ ("subf%I3c %1,%4,%3\n\taddme %0,%2" \ + : "=r" (sh), "=&r" (sl) : "r" (ah), "rI" (al), "r" (bl));\ + else if (__builtin_constant_p (bh) && (bh) == ~(USItype) 0) \ + __asm__ ("subf%I3c %1,%4,%3\n\taddze %0,%2" \ + : "=r" (sh), "=&r" (sl) : "r" (ah), "rI" (al), "r" (bl));\ else \ - __asm__ ("{sf%I4|subf%I4c} %1,%5,%4\n\t{sfe|subfe} %0,%3,%2" \ - : "=r" ((USItype)(sh)), \ - "=&r" ((USItype)(sl)) \ - : "r" ((USItype)(ah)), \ - "r" ((USItype)(bh)), \ - "rI" ((USItype)(al)), \ - "r" ((USItype)(bl))); \ + __asm__ ("subf%I4c %1,%5,%4\n\tsubfe %0,%3,%2" \ + : "=r" (sh), "=&r" (sl) \ + : "r" (ah), "r" (bh), "rI" (al), "r" (bl)); \ } while (0) /* asm fragments for mul and div */ @@ -294,13 +262,10 @@ * UWtype integers MULTIPLER and MULTIPLICAND, and generates a two UWtype * word product in HIGH_PROD and LOW_PROD. */ -#define umul_ppmm(ph, pl, m0, m1) \ +#define umul_ppmm(ph, pl, m0, m1) \ do { \ USItype __m0 = (m0), __m1 = (m1); \ - __asm__ ("mulhwu %0,%1,%2" \ - : "=r" ((USItype)(ph)) \ - : "%r" (__m0), \ - "r" (__m1)); \ + __asm__ ("mulhwu %0,%1,%2" : "=r" (ph) : "%r" (m0), "r" (m1)); \ (pl) = __m0 * __m1; \ } while (0) @@ -312,9 +277,10 @@ * significant bit of DENOMINATOR must be 1, then the pre-processor symbol * UDIV_NEEDS_NORMALIZATION is defined to 1. */ -#define udiv_qrnnd(q, r, n1, n0, d) \ +#define udiv_qrnnd(q, r, n1, n0, d) \ do { \ - UWtype __d1, __d0, __q1, __q0, __r1, __r0, __m; \ + UWtype __d1, __d0, __q1, __q0; \ + UWtype __r1, __r0, __m; \ __d1 = __ll_highpart (d); \ __d0 = __ll_lowpart (d); \ \ @@ -325,7 +291,7 @@ if (__r1 < __m) \ { \ __q1--, __r1 += (d); \ - if (__r1 >= (d)) /* we didn't get carry when adding to __r1 */ \ + if (__r1 >= (d)) /* i.e. 
we didn't get carry when adding to __r1 */\ if (__r1 < __m) \ __q1--, __r1 += (d); \ } \ diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h index a595461c9cb0..44816cbc4198 100644 --- a/arch/powerpc/include/asm/slice.h +++ b/arch/powerpc/include/asm/slice.h @@ -10,6 +10,10 @@ #include #endif +#ifndef __ASSEMBLY__ + +struct mm_struct; + #ifdef CONFIG_PPC_MM_SLICES #ifdef CONFIG_HUGETLB_PAGE @@ -18,10 +22,6 @@ #define HAVE_ARCH_UNMAPPED_AREA #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN -#ifndef __ASSEMBLY__ - -struct mm_struct; - unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, unsigned long flags, unsigned int psize, int topdown); @@ -34,8 +34,12 @@ void slice_set_range_psize(struct mm_struct *mm, unsigned long start, void slice_init_new_context_exec(struct mm_struct *mm); void slice_setup_new_exec(void); -#endif /* __ASSEMBLY__ */ +#else /* CONFIG_PPC_MM_SLICES */ + +static inline void slice_init_new_context_exec(struct mm_struct *mm) {} #endif /* CONFIG_PPC_MM_SLICES */ +#endif /* __ASSEMBLY__ */ + #endif /* _ASM_POWERPC_SLICE_H */ diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h index ab9f3f0a8637..1a0e7a8b1c81 100644 --- a/arch/powerpc/include/asm/syscall.h +++ b/arch/powerpc/include/asm/syscall.h @@ -18,9 +18,8 @@ #include /* ftrace syscalls requires exporting the sys_call_table */ -#ifdef CONFIG_FTRACE_SYSCALLS extern const unsigned long sys_call_table[]; -#endif /* CONFIG_FTRACE_SYSCALLS */ +extern const unsigned long compat_sys_call_table[]; static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs) { diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h deleted file mode 100644 index 01b5171ea189..000000000000 --- a/arch/powerpc/include/asm/systbl.h +++ /dev/null @@ -1,396 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * List of powerpc syscalls. 
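The file being deleted here, together with the __NR_* block removed from the uapi header further down, was the hand-maintained syscall list. It is replaced by a generated scheme: the Kbuild hunk later in this patch declares unistd_32.h and unistd_64.h as generated-y, and NR_syscalls becomes an alias for the generated __NR_syscalls. The usual source of truth for such schemes is a syscall.tbl with one row per call; the excerpt below shows the expected shape only, as an assumption, not quoted contents:

# <number> <abi> <name> <native entry point> <compat entry point>
0	nospu	restart_syscall		sys_restart_syscall
1	nospu	exit			sys_exit
2	nospu	fork			ppc_fork

Once both the headers and the dispatch table come from one table, the systbl_chk consistency check removed from the Makefile later in this patch becomes unnecessary.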
For the meaning of the _SPU suffix see - * arch/powerpc/platforms/cell/spu_callbacks.c - */ - -SYSCALL(restart_syscall) -SYSCALL(exit) -PPC_SYS(fork) -SYSCALL_SPU(read) -SYSCALL_SPU(write) -COMPAT_SYS_SPU(open) -SYSCALL_SPU(close) -SYSCALL_SPU(waitpid) -SYSCALL_SPU(creat) -SYSCALL_SPU(link) -SYSCALL_SPU(unlink) -COMPAT_SYS(execve) -SYSCALL_SPU(chdir) -COMPAT_SYS_SPU(time) -SYSCALL_SPU(mknod) -SYSCALL_SPU(chmod) -SYSCALL_SPU(lchown) -SYSCALL(ni_syscall) -OLDSYS(stat) -COMPAT_SYS_SPU(lseek) -SYSCALL_SPU(getpid) -COMPAT_SYS(mount) -SYSX(sys_ni_syscall,sys_oldumount,sys_oldumount) -SYSCALL_SPU(setuid) -SYSCALL_SPU(getuid) -COMPAT_SYS_SPU(stime) -COMPAT_SYS(ptrace) -SYSCALL_SPU(alarm) -OLDSYS(fstat) -SYSCALL(pause) -COMPAT_SYS(utime) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL_SPU(access) -SYSCALL_SPU(nice) -SYSCALL(ni_syscall) -SYSCALL_SPU(sync) -SYSCALL_SPU(kill) -SYSCALL_SPU(rename) -SYSCALL_SPU(mkdir) -SYSCALL_SPU(rmdir) -SYSCALL_SPU(dup) -SYSCALL_SPU(pipe) -COMPAT_SYS_SPU(times) -SYSCALL(ni_syscall) -SYSCALL_SPU(brk) -SYSCALL_SPU(setgid) -SYSCALL_SPU(getgid) -SYSCALL(signal) -SYSCALL_SPU(geteuid) -SYSCALL_SPU(getegid) -SYSCALL(acct) -SYSCALL(umount) -SYSCALL(ni_syscall) -COMPAT_SYS_SPU(ioctl) -COMPAT_SYS_SPU(fcntl) -SYSCALL(ni_syscall) -SYSCALL_SPU(setpgid) -SYSCALL(ni_syscall) -SYSX(sys_ni_syscall,sys_olduname,sys_olduname) -SYSCALL_SPU(umask) -SYSCALL_SPU(chroot) -COMPAT_SYS(ustat) -SYSCALL_SPU(dup2) -SYSCALL_SPU(getppid) -SYSCALL_SPU(getpgrp) -SYSCALL_SPU(setsid) -SYS32ONLY(sigaction) -SYSCALL_SPU(sgetmask) -SYSCALL_SPU(ssetmask) -SYSCALL_SPU(setreuid) -SYSCALL_SPU(setregid) -#define compat_sys_sigsuspend sys_sigsuspend -SYS32ONLY(sigsuspend) -SYSX(sys_ni_syscall,compat_sys_sigpending,sys_sigpending) -SYSCALL_SPU(sethostname) -COMPAT_SYS_SPU(setrlimit) -SYSX(sys_ni_syscall,compat_sys_old_getrlimit,sys_old_getrlimit) -COMPAT_SYS_SPU(getrusage) -COMPAT_SYS_SPU(gettimeofday) -COMPAT_SYS_SPU(settimeofday) -SYSCALL_SPU(getgroups) -SYSCALL_SPU(setgroups) -SYSX(sys_ni_syscall,sys_ni_syscall,ppc_select) -SYSCALL_SPU(symlink) -OLDSYS(lstat) -SYSCALL_SPU(readlink) -SYSCALL(uselib) -SYSCALL(swapon) -SYSCALL(reboot) -SYSX(sys_ni_syscall,compat_sys_old_readdir,sys_old_readdir) -SYSCALL_SPU(mmap) -SYSCALL_SPU(munmap) -COMPAT_SYS_SPU(truncate) -COMPAT_SYS_SPU(ftruncate) -SYSCALL_SPU(fchmod) -SYSCALL_SPU(fchown) -SYSCALL_SPU(getpriority) -SYSCALL_SPU(setpriority) -SYSCALL(ni_syscall) -COMPAT_SYS(statfs) -COMPAT_SYS(fstatfs) -SYSCALL(ni_syscall) -COMPAT_SYS_SPU(socketcall) -SYSCALL_SPU(syslog) -COMPAT_SYS_SPU(setitimer) -COMPAT_SYS_SPU(getitimer) -COMPAT_SYS_SPU(newstat) -COMPAT_SYS_SPU(newlstat) -COMPAT_SYS_SPU(newfstat) -SYSX(sys_ni_syscall,sys_uname,sys_uname) -SYSCALL(ni_syscall) -SYSCALL_SPU(vhangup) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -COMPAT_SYS_SPU(wait4) -SYSCALL(swapoff) -COMPAT_SYS_SPU(sysinfo) -COMPAT_SYS(ipc) -SYSCALL_SPU(fsync) -SYS32ONLY(sigreturn) -PPC_SYS(clone) -SYSCALL_SPU(setdomainname) -SYSCALL_SPU(newuname) -SYSCALL(ni_syscall) -COMPAT_SYS_SPU(adjtimex) -SYSCALL_SPU(mprotect) -SYSX(sys_ni_syscall,compat_sys_sigprocmask,sys_sigprocmask) -SYSCALL(ni_syscall) -SYSCALL(init_module) -SYSCALL(delete_module) -SYSCALL(ni_syscall) -SYSCALL(quotactl) -SYSCALL_SPU(getpgid) -SYSCALL_SPU(fchdir) -SYSCALL_SPU(bdflush) -SYSCALL_SPU(sysfs) -SYSX_SPU(ppc64_personality,ppc64_personality,sys_personality) -SYSCALL(ni_syscall) -SYSCALL_SPU(setfsuid) -SYSCALL_SPU(setfsgid) -SYSCALL_SPU(llseek) -COMPAT_SYS_SPU(getdents) -COMPAT_SPU_NEW(select) -SYSCALL_SPU(flock) -SYSCALL_SPU(msync) 
-COMPAT_SYS_SPU(readv) -COMPAT_SYS_SPU(writev) -SYSCALL_SPU(getsid) -SYSCALL_SPU(fdatasync) -COMPAT_SYS(sysctl) -SYSCALL_SPU(mlock) -SYSCALL_SPU(munlock) -SYSCALL_SPU(mlockall) -SYSCALL_SPU(munlockall) -SYSCALL_SPU(sched_setparam) -SYSCALL_SPU(sched_getparam) -SYSCALL_SPU(sched_setscheduler) -SYSCALL_SPU(sched_getscheduler) -SYSCALL_SPU(sched_yield) -SYSCALL_SPU(sched_get_priority_max) -SYSCALL_SPU(sched_get_priority_min) -COMPAT_SYS_SPU(sched_rr_get_interval) -COMPAT_SYS_SPU(nanosleep) -SYSCALL_SPU(mremap) -SYSCALL_SPU(setresuid) -SYSCALL_SPU(getresuid) -SYSCALL(ni_syscall) -SYSCALL_SPU(poll) -SYSCALL(ni_syscall) -SYSCALL_SPU(setresgid) -SYSCALL_SPU(getresgid) -SYSCALL_SPU(prctl) -COMPAT_SYS(rt_sigreturn) -COMPAT_SYS(rt_sigaction) -COMPAT_SYS(rt_sigprocmask) -COMPAT_SYS(rt_sigpending) -COMPAT_SYS(rt_sigtimedwait) -COMPAT_SYS(rt_sigqueueinfo) -COMPAT_SYS(rt_sigsuspend) -COMPAT_SYS_SPU(pread64) -COMPAT_SYS_SPU(pwrite64) -SYSCALL_SPU(chown) -SYSCALL_SPU(getcwd) -SYSCALL_SPU(capget) -SYSCALL_SPU(capset) -COMPAT_SYS(sigaltstack) -SYSX_SPU(sys_sendfile64,compat_sys_sendfile,sys_sendfile) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -PPC_SYS(vfork) -COMPAT_SYS_SPU(getrlimit) -COMPAT_SYS_SPU(readahead) -SYS32ONLY(mmap2) -SYS32ONLY(truncate64) -SYS32ONLY(ftruncate64) -SYSX(sys_ni_syscall,sys_stat64,sys_stat64) -SYSX(sys_ni_syscall,sys_lstat64,sys_lstat64) -SYSX(sys_ni_syscall,sys_fstat64,sys_fstat64) -SYSCALL(pciconfig_read) -SYSCALL(pciconfig_write) -SYSCALL(pciconfig_iobase) -SYSCALL(ni_syscall) -SYSCALL_SPU(getdents64) -SYSCALL_SPU(pivot_root) -SYSX(sys_ni_syscall,compat_sys_fcntl64,sys_fcntl64) -SYSCALL_SPU(madvise) -SYSCALL_SPU(mincore) -SYSCALL_SPU(gettid) -SYSCALL_SPU(tkill) -SYSCALL_SPU(setxattr) -SYSCALL_SPU(lsetxattr) -SYSCALL_SPU(fsetxattr) -SYSCALL_SPU(getxattr) -SYSCALL_SPU(lgetxattr) -SYSCALL_SPU(fgetxattr) -SYSCALL_SPU(listxattr) -SYSCALL_SPU(llistxattr) -SYSCALL_SPU(flistxattr) -SYSCALL_SPU(removexattr) -SYSCALL_SPU(lremovexattr) -SYSCALL_SPU(fremovexattr) -COMPAT_SYS_SPU(futex) -COMPAT_SYS_SPU(sched_setaffinity) -COMPAT_SYS_SPU(sched_getaffinity) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYS32ONLY(sendfile64) -COMPAT_SYS_SPU(io_setup) -SYSCALL_SPU(io_destroy) -COMPAT_SYS_SPU(io_getevents) -COMPAT_SYS_SPU(io_submit) -SYSCALL_SPU(io_cancel) -SYSCALL(set_tid_address) -SYSX_SPU(sys_fadvise64,ppc32_fadvise64,sys_fadvise64) -SYSCALL(exit_group) -COMPAT_SYS(lookup_dcookie) -SYSCALL_SPU(epoll_create) -SYSCALL_SPU(epoll_ctl) -SYSCALL_SPU(epoll_wait) -SYSCALL_SPU(remap_file_pages) -COMPAT_SYS_SPU(timer_create) -COMPAT_SYS_SPU(timer_settime) -COMPAT_SYS_SPU(timer_gettime) -SYSCALL_SPU(timer_getoverrun) -SYSCALL_SPU(timer_delete) -COMPAT_SYS_SPU(clock_settime) -COMPAT_SYS_SPU(clock_gettime) -COMPAT_SYS_SPU(clock_getres) -COMPAT_SYS_SPU(clock_nanosleep) -SYSX(ppc64_swapcontext,ppc32_swapcontext,ppc_swapcontext) -SYSCALL_SPU(tgkill) -COMPAT_SYS_SPU(utimes) -COMPAT_SYS_SPU(statfs64) -COMPAT_SYS_SPU(fstatfs64) -SYSX(sys_ni_syscall,ppc_fadvise64_64,ppc_fadvise64_64) -SYSCALL_SPU(rtas) -OLDSYS(debug_setcontext) -SYSCALL(ni_syscall) -COMPAT_SYS(migrate_pages) -COMPAT_SYS(mbind) -COMPAT_SYS(get_mempolicy) -COMPAT_SYS(set_mempolicy) -COMPAT_SYS(mq_open) -SYSCALL(mq_unlink) -COMPAT_SYS(mq_timedsend) -COMPAT_SYS(mq_timedreceive) -COMPAT_SYS(mq_notify) -COMPAT_SYS(mq_getsetattr) -COMPAT_SYS(kexec_load) -SYSCALL(add_key) -SYSCALL(request_key) -COMPAT_SYS(keyctl) -COMPAT_SYS(waitid) -SYSCALL(ioprio_set) -SYSCALL(ioprio_get) -SYSCALL(inotify_init) -SYSCALL(inotify_add_watch) -SYSCALL(inotify_rm_watch) 
-SYSCALL(spu_run) -SYSCALL(spu_create) -COMPAT_SYS(pselect6) -COMPAT_SYS(ppoll) -SYSCALL_SPU(unshare) -SYSCALL_SPU(splice) -SYSCALL_SPU(tee) -COMPAT_SYS_SPU(vmsplice) -COMPAT_SYS_SPU(openat) -SYSCALL_SPU(mkdirat) -SYSCALL_SPU(mknodat) -SYSCALL_SPU(fchownat) -COMPAT_SYS_SPU(futimesat) -SYSX_SPU(sys_newfstatat,sys_fstatat64,sys_fstatat64) -SYSCALL_SPU(unlinkat) -SYSCALL_SPU(renameat) -SYSCALL_SPU(linkat) -SYSCALL_SPU(symlinkat) -SYSCALL_SPU(readlinkat) -SYSCALL_SPU(fchmodat) -SYSCALL_SPU(faccessat) -COMPAT_SYS_SPU(get_robust_list) -COMPAT_SYS_SPU(set_robust_list) -COMPAT_SYS_SPU(move_pages) -SYSCALL_SPU(getcpu) -COMPAT_SYS(epoll_pwait) -COMPAT_SYS_SPU(utimensat) -COMPAT_SYS_SPU(signalfd) -SYSCALL_SPU(timerfd_create) -SYSCALL_SPU(eventfd) -COMPAT_SYS_SPU(sync_file_range2) -COMPAT_SYS(fallocate) -SYSCALL(subpage_prot) -COMPAT_SYS_SPU(timerfd_settime) -COMPAT_SYS_SPU(timerfd_gettime) -COMPAT_SYS_SPU(signalfd4) -SYSCALL_SPU(eventfd2) -SYSCALL_SPU(epoll_create1) -SYSCALL_SPU(dup3) -SYSCALL_SPU(pipe2) -SYSCALL(inotify_init1) -SYSCALL_SPU(perf_event_open) -COMPAT_SYS_SPU(preadv) -COMPAT_SYS_SPU(pwritev) -COMPAT_SYS(rt_tgsigqueueinfo) -SYSCALL(fanotify_init) -COMPAT_SYS(fanotify_mark) -SYSCALL_SPU(prlimit64) -SYSCALL_SPU(socket) -SYSCALL_SPU(bind) -SYSCALL_SPU(connect) -SYSCALL_SPU(listen) -SYSCALL_SPU(accept) -SYSCALL_SPU(getsockname) -SYSCALL_SPU(getpeername) -SYSCALL_SPU(socketpair) -SYSCALL_SPU(send) -SYSCALL_SPU(sendto) -COMPAT_SYS_SPU(recv) -COMPAT_SYS_SPU(recvfrom) -SYSCALL_SPU(shutdown) -COMPAT_SYS_SPU(setsockopt) -COMPAT_SYS_SPU(getsockopt) -COMPAT_SYS_SPU(sendmsg) -COMPAT_SYS_SPU(recvmsg) -COMPAT_SYS_SPU(recvmmsg) -SYSCALL_SPU(accept4) -SYSCALL_SPU(name_to_handle_at) -COMPAT_SYS_SPU(open_by_handle_at) -COMPAT_SYS_SPU(clock_adjtime) -SYSCALL_SPU(syncfs) -COMPAT_SYS_SPU(sendmmsg) -SYSCALL_SPU(setns) -COMPAT_SYS(process_vm_readv) -COMPAT_SYS(process_vm_writev) -SYSCALL(finit_module) -SYSCALL(kcmp) /* sys_kcmp */ -SYSCALL_SPU(sched_setattr) -SYSCALL_SPU(sched_getattr) -SYSCALL_SPU(renameat2) -SYSCALL_SPU(seccomp) -SYSCALL_SPU(getrandom) -SYSCALL_SPU(memfd_create) -SYSCALL_SPU(bpf) -COMPAT_SYS(execveat) -PPC64ONLY(switch_endian) -SYSCALL_SPU(userfaultfd) -SYSCALL_SPU(membarrier) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(ni_syscall) -SYSCALL(mlock2) -SYSCALL(copy_file_range) -COMPAT_SYS_SPU(preadv2) -COMPAT_SYS_SPU(pwritev2) -SYSCALL(kexec_file_load) -SYSCALL(statx) -SYSCALL(pkey_alloc) -SYSCALL(pkey_free) -SYSCALL(pkey_mprotect) -SYSCALL(rseq) -COMPAT_SYS(io_pgetevents) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index b80d492ceb29..54bf7e68a7e1 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -43,7 +43,7 @@ struct div_result { /* Accessor functions for the timebase (RTC on 601) registers. 
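The CONFIG_6xx to CONFIG_PPC_BOOK3S_32 switch just below (and earlier in reg.h and the kernel Makefile) is a mechanical rename, but the __USE_RTC() test it guards is worth a note: only the original 601 exposes RTC registers in place of the timebase, and either way a 64-bit count read through two 32-bit halves needs the classic retry loop so the high half cannot tick over mid-read. A sketch with hypothetical accessors:

#include <stdint.h>

/* read a 64-bit counter exposed as two 32-bit halves (mftbu/mftb style) */
static uint64_t read_tb64(uint32_t (*upper)(void), uint32_t (*lower)(void))
{
	uint32_t hi, lo;

	do {
		hi = upper();
		lo = lower();
	} while (upper() != hi);	/* retry if the upper half ticked over */

	return ((uint64_t)hi << 32) | lo;
}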
*/ /* If one day CONFIG_POWER is added just define __USE_RTC as 1 */ -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 #define __USE_RTC() (cpu_has_feature(CPU_FTR_USE_RTC)) #else #define __USE_RTC() 0 diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h index f0e571b2dc7c..e24c67d5ba75 100644 --- a/arch/powerpc/include/asm/tlb.h +++ b/arch/powerpc/include/asm/tlb.h @@ -40,7 +40,7 @@ extern void flush_hash_entry(struct mm_struct *mm, pte_t *ptep, static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long address) { -#ifdef CONFIG_PPC_STD_MMU_32 +#ifdef CONFIG_PPC_BOOK3S_32 if (pte_val(*ptep) & _PAGE_HASHPTE) flush_hash_entry(tlb->mm, ptep, address); #endif diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h index 15bea9a0f260..ebc0b916dcf9 100644 --- a/arch/powerpc/include/asm/uaccess.h +++ b/arch/powerpc/include/asm/uaccess.h @@ -63,7 +63,7 @@ static inline int __access_ok(unsigned long addr, unsigned long size, #endif #define access_ok(type, addr, size) \ - (__chk_user_ptr(addr), \ + (__chk_user_ptr(addr), (void)(type), \ __access_ok((__force unsigned long)(addr), (size), get_fs())) /* diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h index b0de85b477e1..a3c35e6d6ffb 100644 --- a/arch/powerpc/include/asm/unistd.h +++ b/arch/powerpc/include/asm/unistd.h @@ -11,8 +11,7 @@ #include - -#define NR_syscalls 389 +#define NR_syscalls __NR_syscalls #define __NR__exit __NR_exit diff --git a/arch/powerpc/include/uapi/asm/Kbuild b/arch/powerpc/include/uapi/asm/Kbuild index 3712152206f3..8ab8ba1b71bc 100644 --- a/arch/powerpc/include/uapi/asm/Kbuild +++ b/arch/powerpc/include/uapi/asm/Kbuild @@ -1,6 +1,8 @@ # UAPI Header export list include include/uapi/asm-generic/Kbuild.asm +generated-y += unistd_32.h +generated-y += unistd_64.h generic-y += param.h generic-y += poll.h generic-y += resource.h diff --git a/arch/powerpc/include/uapi/asm/perf_regs.h b/arch/powerpc/include/uapi/asm/perf_regs.h index 9e52c86ccbd3..ff91192407d1 100644 --- a/arch/powerpc/include/uapi/asm/perf_regs.h +++ b/arch/powerpc/include/uapi/asm/perf_regs.h @@ -46,6 +46,7 @@ enum perf_event_powerpc_regs { PERF_REG_POWERPC_TRAP, PERF_REG_POWERPC_DAR, PERF_REG_POWERPC_DSISR, + PERF_REG_POWERPC_SIER, PERF_REG_POWERPC_MAX, }; #endif /* _UAPI_ASM_POWERPC_PERF_REGS_H */ diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h index 985534d0b448..5f84e3dc98d0 100644 --- a/arch/powerpc/include/uapi/asm/unistd.h +++ b/arch/powerpc/include/uapi/asm/unistd.h @@ -10,395 +10,10 @@ #ifndef _UAPI_ASM_POWERPC_UNISTD_H_ #define _UAPI_ASM_POWERPC_UNISTD_H_ - -#define __NR_restart_syscall 0 -#define __NR_exit 1 -#define __NR_fork 2 -#define __NR_read 3 -#define __NR_write 4 -#define __NR_open 5 -#define __NR_close 6 -#define __NR_waitpid 7 -#define __NR_creat 8 -#define __NR_link 9 -#define __NR_unlink 10 -#define __NR_execve 11 -#define __NR_chdir 12 -#define __NR_time 13 -#define __NR_mknod 14 -#define __NR_chmod 15 -#define __NR_lchown 16 -#define __NR_break 17 -#define __NR_oldstat 18 -#define __NR_lseek 19 -#define __NR_getpid 20 -#define __NR_mount 21 -#define __NR_umount 22 -#define __NR_setuid 23 -#define __NR_getuid 24 -#define __NR_stime 25 -#define __NR_ptrace 26 -#define __NR_alarm 27 -#define __NR_oldfstat 28 -#define __NR_pause 29 -#define __NR_utime 30 -#define __NR_stty 31 -#define __NR_gtty 32 -#define __NR_access 33 -#define __NR_nice 34 -#define __NR_ftime 35 -#define __NR_sync 
36 -#define __NR_kill 37 -#define __NR_rename 38 -#define __NR_mkdir 39 -#define __NR_rmdir 40 -#define __NR_dup 41 -#define __NR_pipe 42 -#define __NR_times 43 -#define __NR_prof 44 -#define __NR_brk 45 -#define __NR_setgid 46 -#define __NR_getgid 47 -#define __NR_signal 48 -#define __NR_geteuid 49 -#define __NR_getegid 50 -#define __NR_acct 51 -#define __NR_umount2 52 -#define __NR_lock 53 -#define __NR_ioctl 54 -#define __NR_fcntl 55 -#define __NR_mpx 56 -#define __NR_setpgid 57 -#define __NR_ulimit 58 -#define __NR_oldolduname 59 -#define __NR_umask 60 -#define __NR_chroot 61 -#define __NR_ustat 62 -#define __NR_dup2 63 -#define __NR_getppid 64 -#define __NR_getpgrp 65 -#define __NR_setsid 66 -#define __NR_sigaction 67 -#define __NR_sgetmask 68 -#define __NR_ssetmask 69 -#define __NR_setreuid 70 -#define __NR_setregid 71 -#define __NR_sigsuspend 72 -#define __NR_sigpending 73 -#define __NR_sethostname 74 -#define __NR_setrlimit 75 -#define __NR_getrlimit 76 -#define __NR_getrusage 77 -#define __NR_gettimeofday 78 -#define __NR_settimeofday 79 -#define __NR_getgroups 80 -#define __NR_setgroups 81 -#define __NR_select 82 -#define __NR_symlink 83 -#define __NR_oldlstat 84 -#define __NR_readlink 85 -#define __NR_uselib 86 -#define __NR_swapon 87 -#define __NR_reboot 88 -#define __NR_readdir 89 -#define __NR_mmap 90 -#define __NR_munmap 91 -#define __NR_truncate 92 -#define __NR_ftruncate 93 -#define __NR_fchmod 94 -#define __NR_fchown 95 -#define __NR_getpriority 96 -#define __NR_setpriority 97 -#define __NR_profil 98 -#define __NR_statfs 99 -#define __NR_fstatfs 100 -#define __NR_ioperm 101 -#define __NR_socketcall 102 -#define __NR_syslog 103 -#define __NR_setitimer 104 -#define __NR_getitimer 105 -#define __NR_stat 106 -#define __NR_lstat 107 -#define __NR_fstat 108 -#define __NR_olduname 109 -#define __NR_iopl 110 -#define __NR_vhangup 111 -#define __NR_idle 112 -#define __NR_vm86 113 -#define __NR_wait4 114 -#define __NR_swapoff 115 -#define __NR_sysinfo 116 -#define __NR_ipc 117 -#define __NR_fsync 118 -#define __NR_sigreturn 119 -#define __NR_clone 120 -#define __NR_setdomainname 121 -#define __NR_uname 122 -#define __NR_modify_ldt 123 -#define __NR_adjtimex 124 -#define __NR_mprotect 125 -#define __NR_sigprocmask 126 -#define __NR_create_module 127 -#define __NR_init_module 128 -#define __NR_delete_module 129 -#define __NR_get_kernel_syms 130 -#define __NR_quotactl 131 -#define __NR_getpgid 132 -#define __NR_fchdir 133 -#define __NR_bdflush 134 -#define __NR_sysfs 135 -#define __NR_personality 136 -#define __NR_afs_syscall 137 /* Syscall for Andrew File System */ -#define __NR_setfsuid 138 -#define __NR_setfsgid 139 -#define __NR__llseek 140 -#define __NR_getdents 141 -#define __NR__newselect 142 -#define __NR_flock 143 -#define __NR_msync 144 -#define __NR_readv 145 -#define __NR_writev 146 -#define __NR_getsid 147 -#define __NR_fdatasync 148 -#define __NR__sysctl 149 -#define __NR_mlock 150 -#define __NR_munlock 151 -#define __NR_mlockall 152 -#define __NR_munlockall 153 -#define __NR_sched_setparam 154 -#define __NR_sched_getparam 155 -#define __NR_sched_setscheduler 156 -#define __NR_sched_getscheduler 157 -#define __NR_sched_yield 158 -#define __NR_sched_get_priority_max 159 -#define __NR_sched_get_priority_min 160 -#define __NR_sched_rr_get_interval 161 -#define __NR_nanosleep 162 -#define __NR_mremap 163 -#define __NR_setresuid 164 -#define __NR_getresuid 165 -#define __NR_query_module 166 -#define __NR_poll 167 -#define __NR_nfsservctl 168 -#define __NR_setresgid 169 
-#define __NR_getresgid 170 -#define __NR_prctl 171 -#define __NR_rt_sigreturn 172 -#define __NR_rt_sigaction 173 -#define __NR_rt_sigprocmask 174 -#define __NR_rt_sigpending 175 -#define __NR_rt_sigtimedwait 176 -#define __NR_rt_sigqueueinfo 177 -#define __NR_rt_sigsuspend 178 -#define __NR_pread64 179 -#define __NR_pwrite64 180 -#define __NR_chown 181 -#define __NR_getcwd 182 -#define __NR_capget 183 -#define __NR_capset 184 -#define __NR_sigaltstack 185 -#define __NR_sendfile 186 -#define __NR_getpmsg 187 /* some people actually want streams */ -#define __NR_putpmsg 188 /* some people actually want streams */ -#define __NR_vfork 189 -#define __NR_ugetrlimit 190 /* SuS compliant getrlimit */ -#define __NR_readahead 191 -#ifndef __powerpc64__ /* these are 32-bit only */ -#define __NR_mmap2 192 -#define __NR_truncate64 193 -#define __NR_ftruncate64 194 -#define __NR_stat64 195 -#define __NR_lstat64 196 -#define __NR_fstat64 197 -#endif -#define __NR_pciconfig_read 198 -#define __NR_pciconfig_write 199 -#define __NR_pciconfig_iobase 200 -#define __NR_multiplexer 201 -#define __NR_getdents64 202 -#define __NR_pivot_root 203 -#ifndef __powerpc64__ -#define __NR_fcntl64 204 -#endif -#define __NR_madvise 205 -#define __NR_mincore 206 -#define __NR_gettid 207 -#define __NR_tkill 208 -#define __NR_setxattr 209 -#define __NR_lsetxattr 210 -#define __NR_fsetxattr 211 -#define __NR_getxattr 212 -#define __NR_lgetxattr 213 -#define __NR_fgetxattr 214 -#define __NR_listxattr 215 -#define __NR_llistxattr 216 -#define __NR_flistxattr 217 -#define __NR_removexattr 218 -#define __NR_lremovexattr 219 -#define __NR_fremovexattr 220 -#define __NR_futex 221 -#define __NR_sched_setaffinity 222 -#define __NR_sched_getaffinity 223 -/* 224 currently unused */ -#define __NR_tuxcall 225 #ifndef __powerpc64__ -#define __NR_sendfile64 226 -#endif -#define __NR_io_setup 227 -#define __NR_io_destroy 228 -#define __NR_io_getevents 229 -#define __NR_io_submit 230 -#define __NR_io_cancel 231 -#define __NR_set_tid_address 232 -#define __NR_fadvise64 233 -#define __NR_exit_group 234 -#define __NR_lookup_dcookie 235 -#define __NR_epoll_create 236 -#define __NR_epoll_ctl 237 -#define __NR_epoll_wait 238 -#define __NR_remap_file_pages 239 -#define __NR_timer_create 240 -#define __NR_timer_settime 241 -#define __NR_timer_gettime 242 -#define __NR_timer_getoverrun 243 -#define __NR_timer_delete 244 -#define __NR_clock_settime 245 -#define __NR_clock_gettime 246 -#define __NR_clock_getres 247 -#define __NR_clock_nanosleep 248 -#define __NR_swapcontext 249 -#define __NR_tgkill 250 -#define __NR_utimes 251 -#define __NR_statfs64 252 -#define __NR_fstatfs64 253 -#ifndef __powerpc64__ -#define __NR_fadvise64_64 254 -#endif -#define __NR_rtas 255 -#define __NR_sys_debug_setcontext 256 -/* Number 257 is reserved for vserver */ -#define __NR_migrate_pages 258 -#define __NR_mbind 259 -#define __NR_get_mempolicy 260 -#define __NR_set_mempolicy 261 -#define __NR_mq_open 262 -#define __NR_mq_unlink 263 -#define __NR_mq_timedsend 264 -#define __NR_mq_timedreceive 265 -#define __NR_mq_notify 266 -#define __NR_mq_getsetattr 267 -#define __NR_kexec_load 268 -#define __NR_add_key 269 -#define __NR_request_key 270 -#define __NR_keyctl 271 -#define __NR_waitid 272 -#define __NR_ioprio_set 273 -#define __NR_ioprio_get 274 -#define __NR_inotify_init 275 -#define __NR_inotify_add_watch 276 -#define __NR_inotify_rm_watch 277 -#define __NR_spu_run 278 -#define __NR_spu_create 279 -#define __NR_pselect6 280 -#define __NR_ppoll 281 -#define __NR_unshare 
282 -#define __NR_splice 283 -#define __NR_tee 284 -#define __NR_vmsplice 285 -#define __NR_openat 286 -#define __NR_mkdirat 287 -#define __NR_mknodat 288 -#define __NR_fchownat 289 -#define __NR_futimesat 290 -#ifdef __powerpc64__ -#define __NR_newfstatat 291 +#include #else -#define __NR_fstatat64 291 +#include #endif -#define __NR_unlinkat 292 -#define __NR_renameat 293 -#define __NR_linkat 294 -#define __NR_symlinkat 295 -#define __NR_readlinkat 296 -#define __NR_fchmodat 297 -#define __NR_faccessat 298 -#define __NR_get_robust_list 299 -#define __NR_set_robust_list 300 -#define __NR_move_pages 301 -#define __NR_getcpu 302 -#define __NR_epoll_pwait 303 -#define __NR_utimensat 304 -#define __NR_signalfd 305 -#define __NR_timerfd_create 306 -#define __NR_eventfd 307 -#define __NR_sync_file_range2 308 -#define __NR_fallocate 309 -#define __NR_subpage_prot 310 -#define __NR_timerfd_settime 311 -#define __NR_timerfd_gettime 312 -#define __NR_signalfd4 313 -#define __NR_eventfd2 314 -#define __NR_epoll_create1 315 -#define __NR_dup3 316 -#define __NR_pipe2 317 -#define __NR_inotify_init1 318 -#define __NR_perf_event_open 319 -#define __NR_preadv 320 -#define __NR_pwritev 321 -#define __NR_rt_tgsigqueueinfo 322 -#define __NR_fanotify_init 323 -#define __NR_fanotify_mark 324 -#define __NR_prlimit64 325 -#define __NR_socket 326 -#define __NR_bind 327 -#define __NR_connect 328 -#define __NR_listen 329 -#define __NR_accept 330 -#define __NR_getsockname 331 -#define __NR_getpeername 332 -#define __NR_socketpair 333 -#define __NR_send 334 -#define __NR_sendto 335 -#define __NR_recv 336 -#define __NR_recvfrom 337 -#define __NR_shutdown 338 -#define __NR_setsockopt 339 -#define __NR_getsockopt 340 -#define __NR_sendmsg 341 -#define __NR_recvmsg 342 -#define __NR_recvmmsg 343 -#define __NR_accept4 344 -#define __NR_name_to_handle_at 345 -#define __NR_open_by_handle_at 346 -#define __NR_clock_adjtime 347 -#define __NR_syncfs 348 -#define __NR_sendmmsg 349 -#define __NR_setns 350 -#define __NR_process_vm_readv 351 -#define __NR_process_vm_writev 352 -#define __NR_finit_module 353 -#define __NR_kcmp 354 -#define __NR_sched_setattr 355 -#define __NR_sched_getattr 356 -#define __NR_renameat2 357 -#define __NR_seccomp 358 -#define __NR_getrandom 359 -#define __NR_memfd_create 360 -#define __NR_bpf 361 -#define __NR_execveat 362 -#define __NR_switch_endian 363 -#define __NR_userfaultfd 364 -#define __NR_membarrier 365 -#define __NR_mlock2 378 -#define __NR_copy_file_range 379 -#define __NR_preadv2 380 -#define __NR_pwritev2 381 -#define __NR_kexec_file_load 382 -#define __NR_statx 383 -#define __NR_pkey_alloc 384 -#define __NR_pkey_free 385 -#define __NR_pkey_mprotect 386 -#define __NR_rseq 387 -#define __NR_io_pgetevents 388 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */ diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile index 53d4b8d5b54d..cb7f0bb9ee71 100644 --- a/arch/powerpc/kernel/Makefile +++ b/arch/powerpc/kernel/Makefile @@ -69,7 +69,7 @@ obj-$(CONFIG_FA_DUMP) += fadump.o ifdef CONFIG_PPC32 obj-$(CONFIG_E500) += idle_e500.o endif -obj-$(CONFIG_6xx) += idle_6xx.o l2cr_6xx.o cpu_setup_6xx.o +obj-$(CONFIG_PPC_BOOK3S_32) += idle_6xx.o l2cr_6xx.o cpu_setup_6xx.o obj-$(CONFIG_TAU) += tau_6xx.o obj-$(CONFIG_HIBERNATION) += swsusp.o suspend.o ifdef CONFIG_FSL_BOOKE @@ -160,16 +160,6 @@ extra-$(CONFIG_ALTIVEC) += vector.o extra-$(CONFIG_PPC64) += entry_64.o extra-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init.o -extra-y += systbl_chk.i -$(obj)/systbl.o: systbl_chk - -quiet_cmd_systbl_chk = 
CALL $< - cmd_systbl_chk = $(CONFIG_SHELL) $< $(obj)/systbl_chk.i - -PHONY += systbl_chk -systbl_chk: $(src)/systbl_chk.sh $(obj)/systbl_chk.i - $(call cmd,systbl_chk) - ifdef CONFIG_PPC_OF_BOOT_TRAMPOLINE $(obj)/built-in.a: prom_init_check diff --git a/arch/powerpc/kernel/btext.c b/arch/powerpc/kernel/btext.c index b4241ed1456e..6dfceaa820e4 100644 --- a/arch/powerpc/kernel/btext.c +++ b/arch/powerpc/kernel/btext.c @@ -232,20 +232,12 @@ static int btext_initialize(struct device_node *np) int __init btext_find_display(int allow_nonstdout) { - const char *name; - struct device_node *np = NULL; + struct device_node *np = of_stdout; int rc = -ENODEV; - name = of_get_property(of_chosen, "linux,stdout-path", NULL); - if (name != NULL) { - np = of_find_node_by_path(name); - if (np != NULL) { - if (strcmp(np->type, "display") != 0) { - printk("boot stdout isn't a display !\n"); - of_node_put(np); - np = NULL; - } - } + if (!of_node_is_type(np, "display")) { + printk("boot stdout isn't a display !\n"); + np = NULL; } if (np) rc = btext_initialize(np); diff --git a/arch/powerpc/kernel/cacheinfo.c b/arch/powerpc/kernel/cacheinfo.c index be57bd07596d..53102764fd2f 100644 --- a/arch/powerpc/kernel/cacheinfo.c +++ b/arch/powerpc/kernel/cacheinfo.c @@ -428,7 +428,7 @@ static void link_cache_lists(struct cache *smaller, struct cache *bigger) static void do_subsidiary_caches_debugcheck(struct cache *cache) { WARN_ON_ONCE(cache->level != 1); - WARN_ON_ONCE(strcmp(cache->ofnode->type, "cpu")); + WARN_ON_ONCE(!of_node_is_type(cache->ofnode, "cpu")); } static void do_subsidiary_caches(struct cache *cache) diff --git a/arch/powerpc/kernel/cpu_setup_6xx.S b/arch/powerpc/kernel/cpu_setup_6xx.S index fa3c2c91290c..8c069e96c478 100644 --- a/arch/powerpc/kernel/cpu_setup_6xx.S +++ b/arch/powerpc/kernel/cpu_setup_6xx.S @@ -326,7 +326,7 @@ _GLOBAL(__save_cpu_setup) lis r5,cpu_state_storage@h ori r5,r5,cpu_state_storage@l - /* Save HID0 (common to all CONFIG_6xx cpus) */ + /* Save HID0 (common to all CONFIG_PPC_BOOK3S_32 cpus) */ mfspr r3,SPRN_HID0 stw r3,CS_HID0(r5) diff --git a/arch/powerpc/kernel/cpu_setup_fsl_booke.S b/arch/powerpc/kernel/cpu_setup_fsl_booke.S index 8d142e5d84cd..5fbc890d1094 100644 --- a/arch/powerpc/kernel/cpu_setup_fsl_booke.S +++ b/arch/powerpc/kernel/cpu_setup_fsl_booke.S @@ -17,7 +17,7 @@ #include #include #include -#include +#include #include #include diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c index 2da01340c84c..1eab54bc6ee9 100644 --- a/arch/powerpc/kernel/cputable.c +++ b/arch/powerpc/kernel/cputable.c @@ -1141,6 +1141,7 @@ static struct cpu_spec __initdata cpu_specs[] = { .machine_check = machine_check_generic, .platform = "ppc603", }, +#ifdef CONFIG_PPC_83xx { /* e300c1 (a 603e core, plus some) on 83xx */ .pvr_mask = 0x7fff0000, .pvr_value = 0x00830000, @@ -1151,7 +1152,7 @@ static struct cpu_spec __initdata cpu_specs[] = { .icache_bsize = 32, .dcache_bsize = 32, .cpu_setup = __setup_cpu_603, - .machine_check = machine_check_generic, + .machine_check = machine_check_83xx, .platform = "ppc603", }, { /* e300c2 (an e300c1 core, plus some, minus FPU) on 83xx */ @@ -1165,7 +1166,7 @@ static struct cpu_spec __initdata cpu_specs[] = { .icache_bsize = 32, .dcache_bsize = 32, .cpu_setup = __setup_cpu_603, - .machine_check = machine_check_generic, + .machine_check = machine_check_83xx, .platform = "ppc603", }, { /* e300c3 (e300c1, plus one IU, half cache size) on 83xx */ @@ -1179,7 +1180,7 @@ static struct cpu_spec __initdata cpu_specs[] = { .icache_bsize = 32, 
.dcache_bsize = 32, .cpu_setup = __setup_cpu_603, - .machine_check = machine_check_generic, + .machine_check = machine_check_83xx, .num_pmcs = 4, .oprofile_cpu_type = "ppc/e300", .oprofile_type = PPC_OPROFILE_FSL_EMB, @@ -1196,12 +1197,13 @@ static struct cpu_spec __initdata cpu_specs[] = { .icache_bsize = 32, .dcache_bsize = 32, .cpu_setup = __setup_cpu_603, - .machine_check = machine_check_generic, + .machine_check = machine_check_83xx, .num_pmcs = 4, .oprofile_cpu_type = "ppc/e300", .oprofile_type = PPC_OPROFILE_FSL_EMB, .platform = "ppc603", }, +#endif { /* default match, we assume split I/D cache & TB (non-601)... */ .pvr_mask = 0x00000000, .pvr_value = 0x00000000, diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c index f9fe2080ceb9..9c9bcaae2f75 100644 --- a/arch/powerpc/kernel/dma-iommu.c +++ b/arch/powerpc/kernel/dma-iommu.c @@ -6,7 +6,6 @@ * busses using the iommu infrastructure */ -#include #include /* @@ -106,11 +105,6 @@ static u64 dma_iommu_get_required_mask(struct device *dev) return mask; } -int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == IOMMU_MAPPING_ERROR; -} - struct dma_map_ops dma_iommu_ops = { .alloc = dma_iommu_alloc_coherent, .free = dma_iommu_free_coherent, @@ -121,6 +115,4 @@ struct dma_map_ops dma_iommu_ops = { .map_page = dma_iommu_map_page, .unmap_page = dma_iommu_unmap_page, .get_required_mask = dma_iommu_get_required_mask, - .mapping_error = dma_iommu_mapping_error, }; -EXPORT_SYMBOL(dma_iommu_ops); diff --git a/arch/powerpc/kernel/dma-swiotlb.c b/arch/powerpc/kernel/dma-swiotlb.c index 5fc335f4d9cd..7d5fc9751622 100644 --- a/arch/powerpc/kernel/dma-swiotlb.c +++ b/arch/powerpc/kernel/dma-swiotlb.c @@ -50,16 +50,15 @@ const struct dma_map_ops powerpc_swiotlb_dma_ops = { .alloc = __dma_nommu_alloc_coherent, .free = __dma_nommu_free_coherent, .mmap = dma_nommu_mmap_coherent, - .map_sg = swiotlb_map_sg_attrs, - .unmap_sg = swiotlb_unmap_sg_attrs, + .map_sg = dma_direct_map_sg, + .unmap_sg = dma_direct_unmap_sg, .dma_supported = swiotlb_dma_supported, - .map_page = swiotlb_map_page, - .unmap_page = swiotlb_unmap_page, - .sync_single_for_cpu = swiotlb_sync_single_for_cpu, - .sync_single_for_device = swiotlb_sync_single_for_device, - .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu, - .sync_sg_for_device = swiotlb_sync_sg_for_device, - .mapping_error = dma_direct_mapping_error, + .map_page = dma_direct_map_page, + .unmap_page = dma_direct_unmap_page, + .sync_single_for_cpu = dma_direct_sync_single_for_cpu, + .sync_single_for_device = dma_direct_sync_single_for_device, + .sync_sg_for_cpu = dma_direct_sync_sg_for_cpu, + .sync_sg_for_device = dma_direct_sync_sg_for_device, .get_required_mask = swiotlb_powerpc_get_required, }; @@ -108,12 +107,8 @@ int __init swiotlb_setup_bus_notifier(void) void __init swiotlb_detect_4g(void) { - if ((memblock_end_of_DRAM() - 1) > 0xffffffff) { + if ((memblock_end_of_DRAM() - 1) > 0xffffffff) ppc_swiotlb_enable = 1; -#ifdef CONFIG_ZONE_DMA32 - limit_zone_pfn(ZONE_DMA32, (1ULL << 32) >> PAGE_SHIFT); -#endif - } } static int __init check_swiotlb_enabled(void) diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c index dbfc7056d7df..b1903ebb2e9c 100644 --- a/arch/powerpc/kernel/dma.c +++ b/arch/powerpc/kernel/dma.c @@ -50,7 +50,8 @@ static int dma_nommu_dma_supported(struct device *dev, u64 mask) return 1; #ifdef CONFIG_FSL_SOC - /* Freescale gets another chance via ZONE_DMA/ZONE_DMA32, however + /* + * Freescale gets another chance via ZONE_DMA, however * 
that will have to be refined if/when they support iommus */ return 1; @@ -62,18 +63,12 @@ static int dma_nommu_dma_supported(struct device *dev, u64 mask) #endif } +#ifndef CONFIG_NOT_COHERENT_CACHE void *__dma_nommu_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) { void *ret; -#ifdef CONFIG_NOT_COHERENT_CACHE - ret = __dma_alloc_coherent(dev, size, dma_handle, flag); - if (ret == NULL) - return NULL; - *dma_handle += get_dma_offset(dev); - return ret; -#else struct page *page; int node = dev_to_node(dev); #ifdef CONFIG_FSL_SOC @@ -94,13 +89,10 @@ void *__dma_nommu_alloc_coherent(struct device *dev, size_t size, } switch (zone) { +#ifdef CONFIG_ZONE_DMA case ZONE_DMA: flag |= GFP_DMA; break; -#ifdef CONFIG_ZONE_DMA32 - case ZONE_DMA32: - flag |= GFP_DMA32; - break; #endif }; #endif /* CONFIG_FSL_SOC */ @@ -113,19 +105,15 @@ void *__dma_nommu_alloc_coherent(struct device *dev, size_t size, *dma_handle = __pa(ret) + get_dma_offset(dev); return ret; -#endif } void __dma_nommu_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, unsigned long attrs) { -#ifdef CONFIG_NOT_COHERENT_CACHE - __dma_free_coherent(size, vaddr); -#else free_pages((unsigned long)vaddr, get_order(size)); -#endif } +#endif /* !CONFIG_NOT_COHERENT_CACHE */ static void *dma_nommu_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag, @@ -210,10 +198,15 @@ static int dma_nommu_map_sg(struct device *dev, struct scatterlist *sgl, return nents; } -static void dma_nommu_unmap_sg(struct device *dev, struct scatterlist *sg, +static void dma_nommu_unmap_sg(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction direction, unsigned long attrs) { + struct scatterlist *sg; + int i; + + for_each_sg(sgl, sg, nents, i) + __dma_sync_page(sg_page(sg), sg->offset, sg->length, direction); } static u64 dma_nommu_get_required_mask(struct device *dev) @@ -247,6 +240,8 @@ static inline void dma_nommu_unmap_page(struct device *dev, enum dma_data_direction direction, unsigned long attrs) { + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + __dma_sync(bus_to_virt(dma_address), size, direction); } #ifdef CONFIG_NOT_COHERENT_CACHE diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c index 6cae6b56ffd6..3230137469ab 100644 --- a/arch/powerpc/kernel/eeh.c +++ b/arch/powerpc/kernel/eeh.c @@ -1808,10 +1808,10 @@ static int eeh_freeze_dbgfs_get(void *data, u64 *val) return 0; } -DEFINE_SIMPLE_ATTRIBUTE(eeh_enable_dbgfs_ops, eeh_enable_dbgfs_get, - eeh_enable_dbgfs_set, "0x%llx\n"); -DEFINE_SIMPLE_ATTRIBUTE(eeh_freeze_dbgfs_ops, eeh_freeze_dbgfs_get, - eeh_freeze_dbgfs_set, "0x%llx\n"); +DEFINE_DEBUGFS_ATTRIBUTE(eeh_enable_dbgfs_ops, eeh_enable_dbgfs_get, + eeh_enable_dbgfs_set, "0x%llx\n"); +DEFINE_DEBUGFS_ATTRIBUTE(eeh_freeze_dbgfs_ops, eeh_freeze_dbgfs_get, + eeh_freeze_dbgfs_set, "0x%llx\n"); #endif static int __init eeh_init_proc(void) @@ -1819,12 +1819,12 @@ static int __init eeh_init_proc(void) if (machine_is(pseries) || machine_is(powernv)) { proc_create_single("powerpc/eeh", 0, NULL, proc_eeh_show); #ifdef CONFIG_DEBUG_FS - debugfs_create_file("eeh_enable", 0600, - powerpc_debugfs_root, NULL, - &eeh_enable_dbgfs_ops); - debugfs_create_file("eeh_max_freezes", 0600, - powerpc_debugfs_root, NULL, - &eeh_freeze_dbgfs_ops); + debugfs_create_file_unsafe("eeh_enable", 0600, + powerpc_debugfs_root, NULL, + &eeh_enable_dbgfs_ops); + debugfs_create_file_unsafe("eeh_max_freezes", 0600, + powerpc_debugfs_root, NULL, 
+ &eeh_freeze_dbgfs_ops); #endif } diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c index 9446248eb6b8..99eab7bc7edc 100644 --- a/arch/powerpc/kernel/eeh_driver.c +++ b/arch/powerpc/kernel/eeh_driver.c @@ -60,7 +60,7 @@ static int eeh_result_priority(enum pci_ers_result result) } }; -const char *pci_ers_result_name(enum pci_ers_result result) +static const char *pci_ers_result_name(enum pci_ers_result result) { switch (result) { case PCI_ERS_RESULT_NONE: diff --git a/arch/powerpc/kernel/eeh_event.c b/arch/powerpc/kernel/eeh_event.c index 61c9356bf9c9..227e57f980df 100644 --- a/arch/powerpc/kernel/eeh_event.c +++ b/arch/powerpc/kernel/eeh_event.c @@ -35,7 +35,7 @@ */ static DEFINE_SPINLOCK(eeh_eventlist_lock); -static struct semaphore eeh_eventlist_sem; +static DECLARE_COMPLETION(eeh_eventlist_event); static LIST_HEAD(eeh_eventlist); /** @@ -55,7 +55,7 @@ static int eeh_event_handler(void * dummy) struct eeh_pe *pe; while (!kthread_should_stop()) { - if (down_interruptible(&eeh_eventlist_sem)) + if (wait_for_completion_interruptible(&eeh_eventlist_event)) break; /* Fetch EEH event from the queue */ @@ -102,9 +102,6 @@ int eeh_event_init(void) struct task_struct *t; int ret = 0; - /* Initialize semaphore */ - sema_init(&eeh_eventlist_sem, 0); - t = kthread_run(eeh_event_handler, NULL, "eehd"); if (IS_ERR(t)) { ret = PTR_ERR(t); @@ -142,7 +139,7 @@ int eeh_send_failure_event(struct eeh_pe *pe) spin_unlock_irqrestore(&eeh_eventlist_lock, flags); /* For EEH daemon to kick in */ - up(&eeh_eventlist_sem); + complete(&eeh_eventlist_event); return 0; } diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S index 77decded1175..0768dfd8a64e 100644 --- a/arch/powerpc/kernel/entry_32.S +++ b/arch/powerpc/kernel/entry_32.S @@ -200,14 +200,14 @@ transfer_to_handler: cmplw r1,r9 /* if r1 <= ksp_limit */ ble- stack_ovf /* then the kernel stack overflowed */ 5: -#if defined(CONFIG_6xx) || defined(CONFIG_E500) +#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500) CURRENT_THREAD_INFO(r9, r1) tophys(r9,r9) /* check local flags */ lwz r12,TI_LOCAL_FLAGS(r9) mtcrf 0x01,r12 bt- 31-TLF_NAPPING,4f bt- 31-TLF_SLEEPING,7f -#endif /* CONFIG_6xx || CONFIG_E500 */ +#endif /* CONFIG_PPC_BOOK3S_32 || CONFIG_E500 */ .globl transfer_to_handler_cont transfer_to_handler_cont: 3: @@ -273,7 +273,7 @@ reenable_mmu: /* re-enable mmu so we can */ RFI /* jump to handler, enable MMU */ #endif /* CONFIG_TRACE_IRQFLAGS */ -#if defined (CONFIG_6xx) || defined(CONFIG_E500) +#if defined (CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500) 4: rlwinm r12,r12,0,~_TLF_NAPPING stw r12,TI_LOCAL_FLAGS(r9) b power_save_ppc32_restore @@ -612,7 +612,7 @@ ppc_swapcontext: handle_page_fault: stw r4,_DAR(r1) addi r3,r1,STACK_FRAME_OVERHEAD -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 andis. r0,r5,DSISR_DABRMATCH@h bne- handle_dabr_fault #endif @@ -629,7 +629,7 @@ handle_page_fault: bl bad_page_fault b ret_from_except_full -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 /* We have a data breakpoint exception - handle it */ handle_dabr_fault: SAVE_NVGPRS(r1) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 7b1693adff2a..435927f549c4 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -54,6 +54,9 @@ SYS_CALL_TABLE: .tc sys_call_table[TC],sys_call_table +COMPAT_SYS_CALL_TABLE: + .tc compat_sys_call_table[TC],compat_sys_call_table + /* This value is used to mark exception frames on the stack.
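The eeh_event.c hunk above replaces a counting semaphore with a completion as the wakeup mechanism for the eehd kernel thread; since DECLARE_COMPLETION() initializes the object statically, the sema_init() call in eeh_event_init() can go away too. A minimal sketch of the same producer/consumer shape (names are illustrative, not from this series):

#include <linux/completion.h>
#include <linux/kthread.h>

static DECLARE_COMPLETION(event_ready);		/* statically initialized */

static int event_thread(void *unused)
{
	while (!kthread_should_stop()) {
		/* sleep until a producer signals, or until interrupted */
		if (wait_for_completion_interruptible(&event_ready))
			break;
		/* ... dequeue one event under the list lock and handle it ... */
	}
	return 0;
}

static void post_event(void)
{
	/* ... queue the event under the list lock ... */
	complete(&event_ready);			/* wake the handler thread */
}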
*/ exception_marker: .tc ID_EXC_MARKER[TC],STACK_FRAME_REGS_MARKER @@ -80,6 +83,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) std r0,GPR0(r1) std r10,GPR1(r1) beq 2f /* if from kernel mode */ +#ifdef CONFIG_PPC_FSL_BOOK3E +START_BTB_FLUSH_SECTION + BTB_FLUSH(r10) +END_BTB_FLUSH_SECTION +#endif ACCOUNT_CPU_USER_ENTRY(r13, r10, r11) 2: std r2,GPR2(r1) std r3,GPR3(r1) @@ -173,7 +181,7 @@ system_call: /* label this so stack traces look sane */ ld r11,SYS_CALL_TABLE@toc(2) andis. r10,r10,_TIF_32BIT@h beq 15f - addi r11,r11,8 /* use 32-bit syscall entries */ + ld r11,COMPAT_SYS_CALL_TABLE@toc(2) clrldi r3,r3,32 clrldi r4,r4,32 clrldi r5,r5,32 @@ -181,7 +189,7 @@ system_call: /* label this so stack traces look sane */ clrldi r7,r7,32 clrldi r8,r8,32 15: - slwi r0,r0,4 + slwi r0,r0,3 barrier_nospec_asm /* @@ -286,6 +294,10 @@ BEGIN_FTR_SECTION HMT_MEDIUM_LOW END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + std r8, PACATMSCRATCH(r13) +#endif + ld r13,GPR13(r1) /* only restore r13 if returning to usermode */ ld r2,GPR2(r1) ld r1,GPR1(r1) diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S index 6d6e144a28ce..afb638778f44 100644 --- a/arch/powerpc/kernel/exceptions-64e.S +++ b/arch/powerpc/kernel/exceptions-64e.S @@ -296,7 +296,8 @@ ret_from_mc_except: andi. r10,r11,MSR_PR; /* save stack pointer */ \ beq 1f; /* branch around if supervisor */ \ ld r1,PACAKSAVE(r13); /* get kernel stack coming from usr */\ -1: cmpdi cr1,r1,0; /* check if SP makes sense */ \ +1: type##_BTB_FLUSH \ + cmpdi cr1,r1,0; /* check if SP makes sense */ \ bge- cr1,exc_##n##_bad_stack;/* bad stack (TODO: out of line) */ \ mfspr r10,SPRN_##type##_SRR0; /* read SRR0 before touching stack */ @@ -328,6 +329,29 @@ ret_from_mc_except: #define SPRN_MC_SRR0 SPRN_MCSRR0 #define SPRN_MC_SRR1 SPRN_MCSRR1 +#ifdef CONFIG_PPC_FSL_BOOK3E +#define GEN_BTB_FLUSH \ + START_BTB_FLUSH_SECTION \ + beq 1f; \ + BTB_FLUSH(r10) \ + 1: \ + END_BTB_FLUSH_SECTION + +#define CRIT_BTB_FLUSH \ + START_BTB_FLUSH_SECTION \ + BTB_FLUSH(r10) \ + END_BTB_FLUSH_SECTION + +#define DBG_BTB_FLUSH CRIT_BTB_FLUSH +#define MC_BTB_FLUSH CRIT_BTB_FLUSH +#define GDBELL_BTB_FLUSH GEN_BTB_FLUSH +#else +#define GEN_BTB_FLUSH +#define CRIT_BTB_FLUSH +#define DBG_BTB_FLUSH +#define MC_BTB_FLUSH +#define GDBELL_BTB_FLUSH +#endif + #define NORMAL_EXCEPTION_PROLOG(n, intnum, addition) \ EXCEPTION_PROLOG(n, intnum, GEN, addition##_GEN(n)) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index db2691ff4c0b..9e253ce27e08 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1040,7 +1040,7 @@ TRAMP_REAL_BEGIN(hmi_exception_early) EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN) EXCEPTION_PROLOG_COMMON_3(0xe60) addi r3,r1,STACK_FRAME_OVERHEAD - BRANCH_LINK_TO_FAR(hmi_exception_realmode) /* Function call ABI */ + BRANCH_LINK_TO_FAR(DOTSYM(hmi_exception_realmode)) /* Function call ABI */ cmpdi cr0,r3,0 /* Windup the stack.
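The entry_64.S hunk above stops indexing one interleaved 64-bit/32-bit table (16-byte entries, hence the old slwi r0,r0,4) and instead selects one of two plain pointer tables (8-byte entries, slwi r0,r0,3). Roughly the same dispatch in C, as a sketch (the typedef and extern declarations are illustrative, not the kernel's exact types):

#include <linux/types.h>

typedef long (*syscall_fn)(unsigned long, unsigned long, unsigned long,
			   unsigned long, unsigned long, unsigned long);

extern const syscall_fn sys_call_table[];
extern const syscall_fn compat_sys_call_table[];

static syscall_fn syscall_lookup(unsigned int nr, bool is_32bit_task)
{
	/* one pointer per entry: the index scales by 8 bytes on 64-bit */
	const syscall_fn *tbl = is_32bit_task ? compat_sys_call_table
					      : sys_call_table;
	return tbl[nr];
}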
*/ diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c index 761b28b1427d..45a8d0be1c96 100644 --- a/arch/powerpc/kernel/fadump.c +++ b/arch/powerpc/kernel/fadump.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include @@ -46,6 +47,9 @@ static struct fw_dump fw_dump; static struct fadump_mem_struct fdm; static const struct fadump_mem_struct *fdm_active; +#ifdef CONFIG_CMA +static struct cma *fadump_cma; +#endif static DEFINE_MUTEX(fadump_mutex); struct fad_crash_memory_ranges *crash_memory_ranges; @@ -53,6 +57,67 @@ int crash_memory_ranges_size; int crash_mem_ranges; int max_crash_mem_ranges; +#ifdef CONFIG_CMA +/* + * fadump_cma_init() - Initialize CMA area from fadump reserved memory + * + * This function initializes CMA area from fadump reserved memory. + * The total size of fadump reserved memory covers boot memory size + * + cpu data size + hpte size and metadata. + * Initialize only the area equivalent to boot memory size for CMA use. + * The remaining portion of fadump reserved memory will not be given + * to CMA and pages for those will stay reserved. Boot memory size is + * aligned per CMA requirement to satisfy the cma_init_reserved_mem() call. + * Even if that call fails for some reason, we still have the memory + * reservation with us and we can still continue doing fadump. + */ +int __init fadump_cma_init(void) +{ + unsigned long long base, size; + int rc; + + if (!fw_dump.fadump_enabled) + return 0; + + /* + * Do not use CMA if user has provided fadump=nocma kernel parameter. + * Return 1 to continue with fadump old behaviour. + */ + if (fw_dump.nocma) + return 1; + + base = fw_dump.reserve_dump_area_start; + size = fw_dump.boot_memory_size; + + if (!size) + return 0; + + rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma); + if (rc) { + pr_err("Failed to init cma area for firmware-assisted dump, %d\n", rc); + /* + * Though the CMA init has failed we still have memory + * reservation with us. The reserved memory will be + * blocked from production system usage. Hence return 1, + * so that we can continue with fadump. + */ + return 1; + } + + /* + * So we now have successfully initialized cma area for fadump. + */ + pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx " + "bytes of memory reserved for firmware-assisted dump\n", + cma_get_size(fadump_cma), + (unsigned long)cma_get_base(fadump_cma) >> 20, + fw_dump.reserve_dump_area_size); + return 1; +} +#else +static int __init fadump_cma_init(void) { return 1; } +#endif /* CONFIG_CMA */ + /* Scan the Firmware Assisted dump configuration details. */ int __init early_init_dt_scan_fw_dump(unsigned long node, const char *uname, int depth, void *data) @@ -118,13 +183,19 @@ int __init early_init_dt_scan_fw_dump(unsigned long node, /* * If fadump is registered, check if the memory provided - * falls within boot memory area. + * falls within boot memory area and reserved memory area. */ -int is_fadump_boot_memory_area(u64 addr, ulong size) +int is_fadump_memory_area(u64 addr, ulong size) { + u64 d_start = fw_dump.reserve_dump_area_start; + u64 d_end = d_start + fw_dump.reserve_dump_area_size; + if (!fw_dump.dump_registered) return 0; + if (((addr + size) > d_start) && (addr <= d_end)) + return 1; + return (addr + size) > RMA_START && addr <= fw_dump.boot_memory_size; } @@ -172,6 +243,35 @@ static int is_boot_memory_area_contiguous(void) return ret; } +/* + * Returns true if there are no holes in the reserved memory area, + * false otherwise.
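fadump_cma_init() above deliberately treats cma_init_reserved_mem() failure as non-fatal: the underlying memblock reservation survives either way, so fadump can proceed without CMA backing. The shape of that fallback, sketched with made-up names:

#include <linux/cma.h>
#include <linux/printk.h>
#include <linux/types.h>

static struct cma *example_cma;

static int __init example_cma_init(phys_addr_t base, phys_addr_t size)
{
	int rc = cma_init_reserved_mem(base, size, 0, "example", &example_cma);

	if (rc) {
		pr_warn("CMA init failed (%d), keeping raw reservation\n", rc);
		return 1;	/* reservation still in place, carry on */
	}
	return 1;		/* CMA-backed from here on */
}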
+ */ +static bool is_reserved_memory_area_contiguous(void) +{ + struct memblock_region *reg; + unsigned long start, end; + unsigned long d_start = fw_dump.reserve_dump_area_start; + unsigned long d_end = d_start + fw_dump.reserve_dump_area_size; + + for_each_memblock(memory, reg) { + start = max(d_start, (unsigned long)reg->base); + end = min(d_end, (unsigned long)(reg->base + reg->size)); + if (d_start < end) { + /* Memory hole from d_start to start */ + if (start > d_start) + break; + + if (end == d_end) + return true; + + d_start = end + 1; + } + } + + return false; +} + /* Print firmware assisted dump configurations for debugging purpose. */ static void fadump_show_config(void) { @@ -378,8 +478,15 @@ int __init fadump_reserve_mem(void) */ if (fdm_active) fw_dump.boot_memory_size = be64_to_cpu(fdm_active->rmr_region.source_len); - else + else { fw_dump.boot_memory_size = fadump_calculate_reserve_size(); +#ifdef CONFIG_CMA + if (!fw_dump.nocma) + fw_dump.boot_memory_size = + ALIGN(fw_dump.boot_memory_size, + FADUMP_CMA_ALIGNMENT); +#endif + } /* * Calculate the memory boundary. @@ -426,8 +533,9 @@ int __init fadump_reserve_mem(void) fw_dump.fadumphdr_addr = be64_to_cpu(fdm_active->rmr_region.destination_address) + be64_to_cpu(fdm_active->rmr_region.source_len); - pr_debug("fadumphdr_addr = %p\n", - (void *) fw_dump.fadumphdr_addr); + pr_debug("fadumphdr_addr = %pa\n", &fw_dump.fadumphdr_addr); + fw_dump.reserve_dump_area_start = base; + fw_dump.reserve_dump_area_size = size; } else { size = get_fadump_area_size(); @@ -455,10 +563,11 @@ int __init fadump_reserve_mem(void) (unsigned long)(size >> 20), (unsigned long)(base >> 20), (unsigned long)(memblock_phys_mem_size() >> 20)); - } - fw_dump.reserve_dump_area_start = base; - fw_dump.reserve_dump_area_size = size; + fw_dump.reserve_dump_area_start = base; + fw_dump.reserve_dump_area_size = size; + return fadump_cma_init(); + } return 1; } @@ -477,6 +586,10 @@ static int __init early_fadump_param(char *p) fw_dump.fadump_enabled = 1; else if (strncmp(p, "off", 3) == 0) fw_dump.fadump_enabled = 0; + else if (strncmp(p, "nocma", 5) == 0) { + fw_dump.fadump_enabled = 1; + fw_dump.nocma = 1; + } return 0; } @@ -525,8 +638,10 @@ static int register_fw_dump(struct fadump_mem_struct *fdm) break; case -3: if (!is_boot_memory_area_contiguous()) - pr_err("Can't have holes in boot memory area while " - "registering fadump\n"); + pr_err("Can't have holes in boot memory area while registering fadump\n"); + else if (!is_reserved_memory_area_contiguous()) + pr_err("Can't have holes in reserved memory area while" + " registering fadump\n"); printk(KERN_ERR "Failed to register firmware-assisted kernel" " dump. Parameter Error(%d).\n", rc); @@ -1229,7 +1344,7 @@ static int fadump_unregister_dump(struct fadump_mem_struct *fdm) return 0; } -static int fadump_invalidate_dump(struct fadump_mem_struct *fdm) +static int fadump_invalidate_dump(const struct fadump_mem_struct *fdm) { int rc = 0; unsigned int wait_time; @@ -1260,9 +1375,8 @@ void fadump_cleanup(void) { /* Invalidate the registration only if dump is active. */ if (fw_dump.dump_active) { - init_fadump_mem_struct(&fdm, - be64_to_cpu(fdm_active->cpu_state_data.destination_address)); - fadump_invalidate_dump(&fdm); + /* pass the same memory dump structure provided by platform */ + fadump_invalidate_dump(fdm_active); } else if (fw_dump.dump_registered) { /* Un-register Firmware-assisted dump if it was registered. 
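is_reserved_memory_area_contiguous() above walks the sorted, non-overlapping memory memblock regions looking for a hole inside the reserved dump range. The same walk with half-open ranges, as a simplified sketch (hypothetical helper, not the kernel's exact arithmetic):

#include <linux/kernel.h>
#include <linux/memblock.h>

static bool range_fully_backed(phys_addr_t start, phys_addr_t end)
{
	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		phys_addr_t r_start = max(start, reg->base);
		phys_addr_t r_end = min(end, reg->base + reg->size);

		if (r_start >= r_end)
			continue;	/* region does not overlap */
		if (r_start > start)
			return false;	/* hole before this region */
		if (r_end >= end)
			return true;	/* backed through to the end */
		start = r_end;		/* resume after this region */
	}
	return false;			/* ran out of regions */
}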
*/ fadump_unregister_dump(&fdm); @@ -1531,17 +1645,7 @@ static struct kobj_attribute fadump_register_attr = __ATTR(fadump_registered, 0644, fadump_register_show, fadump_register_store); -static int fadump_region_open(struct inode *inode, struct file *file) -{ - return single_open(file, fadump_region_show, inode->i_private); -} - -static const struct file_operations fadump_region_fops = { - .open = fadump_region_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(fadump_region); static void fadump_init_files(void) { diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S index 61ca27929355..05b08db3901d 100644 --- a/arch/powerpc/kernel/head_32.S +++ b/arch/powerpc/kernel/head_32.S @@ -176,10 +176,10 @@ __after_mmu_off: bl reloc_offset li r24,0 /* cpu# */ bl call_setup_cpu /* Call setup_cpu for this CPU */ -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 bl reloc_offset bl init_idle_6xx -#endif /* CONFIG_6xx */ +#endif /* CONFIG_PPC_BOOK3S_32 */ /* @@ -393,7 +393,9 @@ DataAccess: bne 1f /* if not, try to put a PTE */ mfspr r4,SPRN_DAR /* into the hash table */ rlwinm r3,r10,32-15,21,21 /* DSISR_STORE -> _PAGE_RW */ +BEGIN_MMU_FTR_SECTION bl hash_page +END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) 1: lwz r5,_DSISR(r11) /* get DSISR value */ mfspr r4,SPRN_DAR EXC_XFER_LITE(0x300, handle_page_fault) @@ -408,7 +410,9 @@ InstructionAccess: beq 1f /* if so, try to put a PTE */ li r3,0 /* into the hash table */ mr r4,r12 /* SRR0 is fault address */ +BEGIN_MMU_FTR_SECTION bl hash_page +END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) 1: mr r4,r12 andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */ EXC_XFER_LITE(0x400, handle_page_fault) @@ -499,7 +503,7 @@ InstructionTLBMiss: lis r1,PAGE_OFFSET@h /* check if kernel address */ cmplw 0,r1,r3 mfspr r2,SPRN_SPRG_THREAD - li r1,_PAGE_USER|_PAGE_PRESENT /* low addresses tested as user */ + li r1,_PAGE_USER|_PAGE_PRESENT|_PAGE_EXEC /* low addresses tested as user */ lwz r2,PGDIR(r2) bge- 112f mfspr r2,SPRN_SRR1 /* and MSR_PR bit from SRR1 */ @@ -836,10 +840,10 @@ __secondary_start: lis r3,-KERNELBASE@h mr r4,r24 bl call_setup_cpu /* Call setup_cpu for this CPU */ -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 lis r3,-KERNELBASE@h bl init_idle_6xx -#endif /* CONFIG_6xx */ +#endif /* CONFIG_PPC_BOOK3S_32 */ /* get current_thread_info and current */ lis r1,secondary_ti@ha @@ -880,14 +884,14 @@ __secondary_start: /* * Those generic dummy functions are kept for CPUs not - * included in CONFIG_6xx + * included in CONFIG_PPC_BOOK3S_32 */ -#if !defined(CONFIG_6xx) +#if !defined(CONFIG_PPC_BOOK3S_32) _ENTRY(__save_cpu_setup) blr _ENTRY(__restore_cpu_setup) blr -#endif /* !defined(CONFIG_6xx) */ +#endif /* !defined(CONFIG_PPC_BOOK3S_32) */ /* diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S index 37e4a7cf0065..bf23c19c92d6 100644 --- a/arch/powerpc/kernel/head_44x.S +++ b/arch/powerpc/kernel/head_44x.S @@ -40,6 +40,7 @@ #include #include #include +#include #include "head_booke.h" @@ -382,10 +383,9 @@ interrupt_base: /* Increment, rollover, and store TLB index */ addi r13,r13,1 + patch_site 0f, patch__tlb_44x_hwater_D /* Compare with watermark (instruction gets patched) */ - .globl tlb_44x_patch_hwater_D -tlb_44x_patch_hwater_D: - cmpwi 0,r13,1 /* reserve entries */ +0: cmpwi 0,r13,1 /* reserve entries */ ble 5f li r13,0 5: @@ -478,10 +478,9 @@ tlb_44x_patch_hwater_D: /* Increment, rollover, and store TLB index */ addi r13,r13,1 + patch_site 0f, 
patch__tlb_44x_hwater_I /* Compare with watermark (instruction gets patched) */ - .globl tlb_44x_patch_hwater_I -tlb_44x_patch_hwater_I: - cmpwi 0,r13,1 /* reserve entries */ +0: cmpwi 0,r13,1 /* reserve entries */ ble 5f li r13,0 5: diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S index 3b67b9533c82..57deb1e9ffea 100644 --- a/arch/powerpc/kernel/head_8xx.S +++ b/arch/powerpc/kernel/head_8xx.S @@ -106,6 +106,23 @@ turn_on_mmu: mtspr SPRN_SRR0,r0 rfi /* enables MMU */ + +#ifdef CONFIG_PERF_EVENTS + .align 4 + + .globl itlb_miss_counter +itlb_miss_counter: + .space 4 + + .globl dtlb_miss_counter +dtlb_miss_counter: + .space 4 + + .globl instruction_counter +instruction_counter: + .space 4 +#endif + /* * Exception entry code. This code runs with address translation * turned off, i.e. using physical addresses. @@ -149,6 +166,9 @@ turn_on_mmu: li r10,MSR_KERNEL & ~(MSR_IR|MSR_DR); /* can take exceptions */ \ mtmsr r10; \ stw r0,GPR0(r11); \ + lis r10, STACK_FRAME_REGS_MARKER@ha; /* exception frame marker */ \ + addi r10, r10, STACK_FRAME_REGS_MARKER@l; \ + stw r10, 8(r11); \ SAVE_4GPRS(3, r11); \ SAVE_2GPRS(7, r11) @@ -275,7 +295,7 @@ SystemCall: . = 0x1100 /* * For the MPC8xx, this is a software tablewalk to load the instruction - * TLB. The task switch loads the M_TW register with the pointer to the first + * TLB. The task switch loads the M_TWB register with the pointer to the first * level table. * If we discover there is no second level table (value is zero) or if there * is an invalid pte, we load that into the TLB, which causes another fault @@ -285,186 +305,154 @@ SystemCall: */ #ifdef CONFIG_8xx_CPU15 -#define INVALIDATE_ADJACENT_PAGES_CPU15(tmp, addr) \ - addi tmp, addr, PAGE_SIZE; \ - tlbie tmp; \ - addi tmp, addr, -PAGE_SIZE; \ - tlbie tmp +#define INVALIDATE_ADJACENT_PAGES_CPU15(addr) \ + addi addr, addr, PAGE_SIZE; \ + tlbie addr; \ + addi addr, addr, -(PAGE_SIZE << 1); \ + tlbie addr; \ + addi addr, addr, PAGE_SIZE #else -#define INVALIDATE_ADJACENT_PAGES_CPU15(tmp, addr) +#define INVALIDATE_ADJACENT_PAGES_CPU15(addr) #endif InstructionTLBMiss: mtspr SPRN_SPRG_SCRATCH0, r10 +#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) mtspr SPRN_SPRG_SCRATCH1, r11 -#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_HUGETLB_PAGE) - mtspr SPRN_SPRG_SCRATCH2, r12 #endif /* If we are faulting a kernel address, we have to use the * kernel page tables. */ mfspr r10, SPRN_SRR0 /* Get effective address of fault */ - INVALIDATE_ADJACENT_PAGES_CPU15(r11, r10) + INVALIDATE_ADJACENT_PAGES_CPU15(r10) + mtspr SPRN_MD_EPN, r10 /* Only modules will cause ITLB Misses as we always * pin the first 8MB of kernel memory */ -#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_HUGETLB_PAGE) - mfcr r12 -#endif #ifdef ITLB_MISS_KERNEL + mfcr r11 #if defined(SIMPLE_KERNEL_ADDRESS) && defined(CONFIG_PIN_TLB_TEXT) - andis. 
r11, r10, 0x8000 /* Address >= 0x80000000 */ + cmpi cr0, r10, 0 /* Address >= 0x80000000 */ #else - rlwinm r11, r10, 16, 0xfff8 - cmpli cr0, r11, PAGE_OFFSET@h + rlwinm r10, r10, 16, 0xfff8 + cmpli cr0, r10, PAGE_OFFSET@h #ifndef CONFIG_PIN_TLB_TEXT /* It is assumed that kernel code fits into the first 8M page */ -0: cmpli cr7, r11, (PAGE_OFFSET + 0x0800000)@h +0: cmpli cr7, r10, (PAGE_OFFSET + 0x0800000)@h patch_site 0b, patch__itlbmiss_linmem_top #endif #endif #endif - mfspr r11, SPRN_M_TW /* Get level 1 table */ + mfspr r10, SPRN_M_TWB /* Get level 1 table */ #ifdef ITLB_MISS_KERNEL #if defined(SIMPLE_KERNEL_ADDRESS) && defined(CONFIG_PIN_TLB_TEXT) - beq+ 3f + bge+ 3f #else blt+ 3f #endif #ifndef CONFIG_PIN_TLB_TEXT blt cr7, ITLBMissLinear #endif - lis r11, (swapper_pg_dir-PAGE_OFFSET)@ha + rlwinm r10, r10, 0, 20, 31 + oris r10, r10, (swapper_pg_dir - PAGE_OFFSET)@ha 3: #endif - /* Insert level 1 index */ - rlwimi r11, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29 - lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11) /* Get the level 1 entry */ + lwz r10, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */ + mtspr SPRN_MI_TWC, r10 /* Set segment attributes */ - /* Extract level 2 index */ - rlwinm r10, r10, 32 - (PAGE_SHIFT - 2), 32 - PAGE_SHIFT, 29 -#ifdef CONFIG_HUGETLB_PAGE - mtcr r11 - bt- 28, 10f /* bit 28 = Large page (8M) */ - bt- 29, 20f /* bit 29 = Large page (8M or 512k) */ -#endif - rlwimi r10, r11, 0, 0, 32 - PAGE_SHIFT - 1 /* Add level 2 base */ + mtspr SPRN_MD_TWC, r10 + mfspr r10, SPRN_MD_TWC lwz r10, 0(r10) /* Get the pte */ -4: -#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_HUGETLB_PAGE) - mtcr r12 +#ifdef ITLB_MISS_KERNEL + mtcr r11 #endif - /* Load the MI_TWC with the attributes for this "segment." */ - mtspr SPRN_MI_TWC, r11 /* Set segment attributes */ - #ifdef CONFIG_SWAP rlwinm r11, r10, 32-5, _PAGE_PRESENT and r11, r11, r10 rlwimi r10, r11, 0, _PAGE_PRESENT #endif - li r11, RPN_PATTERN | 0x200 /* The Linux PTE won't go exactly into the MMU TLB. * Software indicator bits 20 and 23 must be clear. * Software indicator bits 22, 24, 25, 26, and 27 must be * set. All other Linux PTE bits control the behavior * of the MMU. 
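Looping back to the fadump.c hunk a little earlier: DEFINE_SHOW_ATTRIBUTE(fadump_region) collapses the hand-rolled single_open()/seq_read boilerplate. The macro only needs a <name>_show(struct seq_file *, void *) function and generates <name>_open plus <name>_fops. A sketch with a hypothetical attribute:

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int example_show(struct seq_file *m, void *unused)
{
	seq_puts(m, "state: ok\n");
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(example);	/* emits example_open and example_fops */

/* registration then becomes:
 * debugfs_create_file("example", 0444, parent, NULL, &example_fops);
 */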
*/ - rlwimi r11, r10, 4, 0x0400 /* Copy _PAGE_EXEC into bit 21 */ - rlwimi r10, r11, 0, 0x0ff0 /* Set 22, 24-27, clear 20,23 */ + rlwimi r10, r10, 0, 0x0f00 /* Clear bits 20-23 */ + rlwimi r10, r10, 4, 0x0400 /* Copy _PAGE_EXEC into bit 21 */ + ori r10, r10, RPN_PATTERN | 0x200 /* Set 22 and 24-27 */ mtspr SPRN_MI_RPN, r10 /* Update TLB entry */ /* Restore registers */ 0: mfspr r10, SPRN_SPRG_SCRATCH0 +#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) mfspr r11, SPRN_SPRG_SCRATCH1 -#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_HUGETLB_PAGE) - mfspr r12, SPRN_SPRG_SCRATCH2 #endif rfi patch_site 0b, patch__itlbmiss_exit_1 #ifdef CONFIG_PERF_EVENTS patch_site 0f, patch__itlbmiss_perf -0: lis r10, (itlb_miss_counter - PAGE_OFFSET)@ha - lwz r11, (itlb_miss_counter - PAGE_OFFSET)@l(r10) - addi r11, r11, 1 - stw r11, (itlb_miss_counter - PAGE_OFFSET)@l(r10) -#endif +0: lwz r10, (itlb_miss_counter - PAGE_OFFSET)@l(0) + addi r10, r10, 1 + stw r10, (itlb_miss_counter - PAGE_OFFSET)@l(0) mfspr r10, SPRN_SPRG_SCRATCH0 +#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) mfspr r11, SPRN_SPRG_SCRATCH1 -#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_HUGETLB_PAGE) - mfspr r12, SPRN_SPRG_SCRATCH2 #endif rfi - -#ifdef CONFIG_HUGETLB_PAGE -10: /* 8M pages */ -#ifdef CONFIG_PPC_16K_PAGES - /* Extract level 2 index */ - rlwinm r10, r10, 32 - (PAGE_SHIFT_8M - PAGE_SHIFT), 32 + PAGE_SHIFT_8M - (PAGE_SHIFT << 1), 29 - /* Add level 2 base */ - rlwimi r10, r11, 0, 0, 32 + PAGE_SHIFT_8M - (PAGE_SHIFT << 1) - 1 -#else - /* Level 2 base */ - rlwinm r10, r11, 0, ~HUGEPD_SHIFT_MASK #endif - lwz r10, 0(r10) /* Get the pte */ - b 4b -20: /* 512k pages */ - /* Extract level 2 index */ - rlwinm r10, r10, 32 - (PAGE_SHIFT_512K - PAGE_SHIFT), 32 + PAGE_SHIFT_512K - (PAGE_SHIFT << 1), 29 - /* Add level 2 base */ - rlwimi r10, r11, 0, 0, 32 + PAGE_SHIFT_512K - (PAGE_SHIFT << 1) - 1 - lwz r10, 0(r10) /* Get the pte */ - b 4b +#ifndef CONFIG_PIN_TLB_TEXT +ITLBMissLinear: + mtcr r11 + /* Set 8M byte page and mark it valid */ + li r11, MI_PS8MEG | MI_SVALID + mtspr SPRN_MI_TWC, r11 + rlwinm r10, r10, 20, 0x0f800000 /* 8xx supports max 256Mb RAM */ + ori r10, r10, 0xf0 | MI_SPS16K | _PAGE_SH | _PAGE_DIRTY | \ + _PAGE_PRESENT + mtspr SPRN_MI_RPN, r10 /* Update TLB entry */ + +0: mfspr r10, SPRN_SPRG_SCRATCH0 + mfspr r11, SPRN_SPRG_SCRATCH1 + rfi + patch_site 0b, patch__itlbmiss_exit_2 #endif . = 0x1200 DataStoreTLBMiss: mtspr SPRN_SPRG_SCRATCH0, r10 mtspr SPRN_SPRG_SCRATCH1, r11 - mtspr SPRN_SPRG_SCRATCH2, r12 - mfcr r12 + mfcr r11 /* If we are faulting a kernel address, we have to use the * kernel page tables. 
*/ mfspr r10, SPRN_MD_EPN - rlwinm r11, r10, 16, 0xfff8 - cmpli cr0, r11, PAGE_OFFSET@h - mfspr r11, SPRN_M_TW /* Get level 1 table */ - blt+ 3f - rlwinm r11, r10, 16, 0xfff8 + rlwinm r10, r10, 16, 0xfff8 + cmpli cr0, r10, PAGE_OFFSET@h #ifndef CONFIG_PIN_TLB_IMMR - cmpli cr0, r11, VIRT_IMMR_BASE@h + cmpli cr6, r10, VIRT_IMMR_BASE@h #endif -0: cmpli cr7, r11, (PAGE_OFFSET + 0x1800000)@h +0: cmpli cr7, r10, (PAGE_OFFSET + 0x1800000)@h patch_site 0b, patch__dtlbmiss_linmem_top + + mfspr r10, SPRN_M_TWB /* Get level 1 table */ + blt+ 3f #ifndef CONFIG_PIN_TLB_IMMR -0: beq- DTLBMissIMMR +0: beq- cr6, DTLBMissIMMR patch_site 0b, patch__dtlbmiss_immr_jmp #endif blt cr7, DTLBMissLinear - lis r11, (swapper_pg_dir-PAGE_OFFSET)@ha + rlwinm r10, r10, 0, 20, 31 + oris r10, r10, (swapper_pg_dir - PAGE_OFFSET)@ha 3: - - /* Insert level 1 index */ - rlwimi r11, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29 - lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11) /* Get the level 1 entry */ - - /* We have a pte table, so load fetch the pte from the table. - */ - /* Extract level 2 index */ - rlwinm r10, r10, 32 - (PAGE_SHIFT - 2), 32 - PAGE_SHIFT, 29 -#ifdef CONFIG_HUGETLB_PAGE mtcr r11 - bt- 28, 10f /* bit 28 = Large page (8M) */ - bt- 29, 20f /* bit 29 = Large page (8M or 512k) */ -#endif - rlwimi r10, r11, 0, 0, 32 - PAGE_SHIFT - 1 /* Add level 2 base */ + lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */ + + mtspr SPRN_MD_TWC, r11 + mfspr r10, SPRN_MD_TWC lwz r10, 0(r10) /* Get the pte */ -4: - mtcr r12 /* Insert the Guarded flag into the TWC from the Linux PTE. * It is bit 27 of both the Linux PTE and the TWC (at least @@ -503,44 +491,55 @@ DataStoreTLBMiss: 0: mfspr r10, SPRN_SPRG_SCRATCH0 mfspr r11, SPRN_SPRG_SCRATCH1 - mfspr r12, SPRN_SPRG_SCRATCH2 rfi patch_site 0b, patch__dtlbmiss_exit_1 #ifdef CONFIG_PERF_EVENTS patch_site 0f, patch__dtlbmiss_perf -0: lis r10, (dtlb_miss_counter - PAGE_OFFSET)@ha - lwz r11, (dtlb_miss_counter - PAGE_OFFSET)@l(r10) - addi r11, r11, 1 - stw r11, (dtlb_miss_counter - PAGE_OFFSET)@l(r10) -#endif +0: lwz r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0) + addi r10, r10, 1 + stw r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0) mfspr r10, SPRN_SPRG_SCRATCH0 mfspr r11, SPRN_SPRG_SCRATCH1 - mfspr r12, SPRN_SPRG_SCRATCH2 rfi - -#ifdef CONFIG_HUGETLB_PAGE -10: /* 8M pages */ - /* Extract level 2 index */ -#ifdef CONFIG_PPC_16K_PAGES - rlwinm r10, r10, 32 - (PAGE_SHIFT_8M - PAGE_SHIFT), 32 + PAGE_SHIFT_8M - (PAGE_SHIFT << 1), 29 - /* Add level 2 base */ - rlwimi r10, r11, 0, 0, 32 + PAGE_SHIFT_8M - (PAGE_SHIFT << 1) - 1 -#else - /* Level 2 base */ - rlwinm r10, r11, 0, ~HUGEPD_SHIFT_MASK #endif - lwz r10, 0(r10) /* Get the pte */ - b 4b -20: /* 512k pages */ - /* Extract level 2 index */ - rlwinm r10, r10, 32 - (PAGE_SHIFT_512K - PAGE_SHIFT), 32 + PAGE_SHIFT_512K - (PAGE_SHIFT << 1), 29 - /* Add level 2 base */ - rlwimi r10, r11, 0, 0, 32 + PAGE_SHIFT_512K - (PAGE_SHIFT << 1) - 1 - lwz r10, 0(r10) /* Get the pte */ - b 4b -#endif +DTLBMissIMMR: + mtcr r11 + /* Set 512k byte guarded page and mark it valid */ + li r10, MD_PS512K | MD_GUARDED | MD_SVALID + mtspr SPRN_MD_TWC, r10 + mfspr r10, SPRN_IMMR /* Get current IMMR */ + rlwinm r10, r10, 0, 0xfff80000 /* Get 512 kbytes boundary */ + ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \ + _PAGE_PRESENT | _PAGE_NO_CACHE + mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ + + li r11, RPN_PATTERN + mtspr SPRN_DAR, r11 /* Tag DAR */ + +0: mfspr r10, SPRN_SPRG_SCRATCH0 + mfspr r11, SPRN_SPRG_SCRATCH1 + rfi + 
patch_site 0b, patch__dtlbmiss_exit_2 + +DTLBMissLinear: + mtcr r11 + /* Set 8M byte page and mark it valid */ + li r11, MD_PS8MEG | MD_SVALID + mtspr SPRN_MD_TWC, r11 + rlwinm r10, r10, 20, 0x0f800000 /* 8xx supports max 256Mb RAM */ + ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \ + _PAGE_PRESENT + mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ + + li r11, RPN_PATTERN + mtspr SPRN_DAR, r11 /* Tag DAR */ + +0: mfspr r10, SPRN_SPRG_SCRATCH0 + mfspr r11, SPRN_SPRG_SCRATCH1 + rfi + patch_site 0b, patch__dtlbmiss_exit_3 /* This is an instruction TLB error on the MPC8xx. This could be due * to many reasons, such as executing guarded memory or illegal instruction @@ -625,16 +624,13 @@ DataBreakpoint: . = 0x1d00 InstructionBreakpoint: mtspr SPRN_SPRG_SCRATCH0, r10 - mtspr SPRN_SPRG_SCRATCH1, r11 - lis r10, (instruction_counter - PAGE_OFFSET)@ha - lwz r11, (instruction_counter - PAGE_OFFSET)@l(r10) - addi r11, r11, -1 - stw r11, (instruction_counter - PAGE_OFFSET)@l(r10) + lwz r10, (instruction_counter - PAGE_OFFSET)@l(0) + addi r10, r10, -1 + stw r10, (instruction_counter - PAGE_OFFSET)@l(0) lis r10, 0xffff ori r10, r10, 0x01 mtspr SPRN_COUNTA, r10 mfspr r10, SPRN_SPRG_SCRATCH0 - mfspr r11, SPRN_SPRG_SCRATCH1 rfi #else EXCEPTION(0x1d00, Trap_1d, unknown_exception, EXC_XFER_EE) @@ -644,67 +640,6 @@ InstructionBreakpoint: . = 0x2000 -/* - * Bottom part of DataStoreTLBMiss handlers for IMMR area and linear RAM. - * not enough space in the DataStoreTLBMiss area. - */ -DTLBMissIMMR: - mtcr r12 - /* Set 512k byte guarded page and mark it valid */ - li r10, MD_PS512K | MD_GUARDED | MD_SVALID - mtspr SPRN_MD_TWC, r10 - mfspr r10, SPRN_IMMR /* Get current IMMR */ - rlwinm r10, r10, 0, 0xfff80000 /* Get 512 kbytes boundary */ - ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \ - _PAGE_PRESENT | _PAGE_NO_CACHE - mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ - - li r11, RPN_PATTERN - mtspr SPRN_DAR, r11 /* Tag DAR */ - -0: mfspr r10, SPRN_SPRG_SCRATCH0 - mfspr r11, SPRN_SPRG_SCRATCH1 - mfspr r12, SPRN_SPRG_SCRATCH2 - rfi - patch_site 0b, patch__dtlbmiss_exit_2 - -DTLBMissLinear: - mtcr r12 - /* Set 8M byte page and mark it valid */ - li r11, MD_PS8MEG | MD_SVALID - mtspr SPRN_MD_TWC, r11 - rlwinm r10, r10, 0, 0x0f800000 /* 8xx supports max 256Mb RAM */ - ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \ - _PAGE_PRESENT - mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ - - li r11, RPN_PATTERN - mtspr SPRN_DAR, r11 /* Tag DAR */ - -0: mfspr r10, SPRN_SPRG_SCRATCH0 - mfspr r11, SPRN_SPRG_SCRATCH1 - mfspr r12, SPRN_SPRG_SCRATCH2 - rfi - patch_site 0b, patch__dtlbmiss_exit_3 - -#ifndef CONFIG_PIN_TLB_TEXT -ITLBMissLinear: - mtcr r12 - /* Set 8M byte page and mark it valid */ - li r11, MI_PS8MEG | MI_SVALID - mtspr SPRN_MI_TWC, r11 - rlwinm r10, r10, 0, 0x0f800000 /* 8xx supports max 256Mb RAM */ - ori r10, r10, 0xf0 | MI_SPS16K | _PAGE_SH | _PAGE_DIRTY | \ - _PAGE_PRESENT - mtspr SPRN_MI_RPN, r10 /* Update TLB entry */ - -0: mfspr r10, SPRN_SPRG_SCRATCH0 - mfspr r11, SPRN_SPRG_SCRATCH1 - mfspr r12, SPRN_SPRG_SCRATCH2 - rfi - patch_site 0b, patch__itlbmiss_exit_2 -#endif - /* This is the procedure to calculate the data EA for buggy dcbx,dcbi instructions * by decoding the registers used by the dcbx instruction and adding them. * DAR is set to the calculated address. @@ -712,12 +647,13 @@ ITLBMissLinear: /* define if you don't want to use self modifying code */ #define NO_SELF_MODIFYING_CODE FixupDAR:/* Entry point for dcbx workaround. 
*/ - mtspr SPRN_SPRG_SCRATCH2, r10 + mtspr SPRN_M_TW, r10 /* fetch instruction from memory. */ mfspr r10, SPRN_SRR0 + mtspr SPRN_MD_EPN, r10 rlwinm r11, r10, 16, 0xfff8 cmpli cr0, r11, PAGE_OFFSET@h - mfspr r11, SPRN_M_TW /* Get level 1 table */ + mfspr r11, SPRN_M_TWB /* Get level 1 table */ blt+ 3f rlwinm r11, r10, 16, 0xfff8 @@ -727,17 +663,17 @@ FixupDAR:/* Entry point for dcbx workaround. */ /* create physical page address from effective address */ tophys(r11, r10) blt- cr7, 201f - lis r11, (swapper_pg_dir-PAGE_OFFSET)@ha - /* Insert level 1 index */ -3: rlwimi r11, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29 + mfspr r11, SPRN_M_TWB /* Get level 1 table */ + rlwinm r11, r11, 0, 20, 31 + oris r11, r11, (swapper_pg_dir - PAGE_OFFSET)@ha +3: lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11) /* Get the level 1 entry */ + mtspr SPRN_MD_TWC, r11 mtcr r11 + mfspr r11, SPRN_MD_TWC + lwz r11, 0(r11) /* Get the pte */ bt 28,200f /* bit 28 = Large page (8M) */ bt 29,202f /* bit 29 = Large page (8M or 512K) */ - rlwinm r11, r11,0,0,19 /* Extract page descriptor page address */ - /* Insert level 2 index */ - rlwimi r11, r10, 32 - (PAGE_SHIFT - 2), 32 - PAGE_SHIFT, 29 - lwz r11, 0(r11) /* Get the pte */ /* concat physical page address(r11) and page offset(r10) */ rlwimi r11, r10, 0, 32 - PAGE_SHIFT, 31 201: lwz r11,0(r11) @@ -756,26 +692,15 @@ FixupDAR:/* Entry point for dcbx workaround. */ beq+ 142f cmpwi cr0, r10, 1964 /* Is icbi? */ beq+ 142f -141: mfspr r10,SPRN_SPRG_SCRATCH2 +141: mfspr r10,SPRN_M_TW b DARFixed /* Nope, go back to normal TLB processing */ - /* concat physical page address(r11) and page offset(r10) */ 200: -#ifdef CONFIG_PPC_16K_PAGES - rlwinm r11, r11, 0, 0, 32 + PAGE_SHIFT_8M - (PAGE_SHIFT << 1) - 1 - rlwimi r11, r10, 32 - (PAGE_SHIFT_8M - 2), 32 + PAGE_SHIFT_8M - (PAGE_SHIFT << 1), 29 -#else - rlwinm r11, r10, 0, ~HUGEPD_SHIFT_MASK -#endif - lwz r11, 0(r11) /* Get the pte */ /* concat physical page address(r11) and page offset(r10) */ rlwimi r11, r10, 0, 32 - PAGE_SHIFT_8M, 31 b 201b 202: - rlwinm r11, r11, 0, 0, 32 + PAGE_SHIFT_512K - (PAGE_SHIFT << 1) - 1 - rlwimi r11, r10, 32 - (PAGE_SHIFT_512K - 2), 32 + PAGE_SHIFT_512K - (PAGE_SHIFT << 1), 29 - lwz r11, 0(r11) /* Get the pte */ /* concat physical page address(r11) and page offset(r10) */ rlwimi r11, r10, 0, 32 - PAGE_SHIFT_512K, 31 b 201b @@ -802,7 +727,7 @@ modified_instr: bne+ 143f subf r10,r0,r10 /* r10=r10-r0, only if reg RA is r0 */ 143: mtdar r10 /* store faulting EA in DAR */ - mfspr r10,SPRN_SPRG_SCRATCH2 + mfspr r10,SPRN_M_TW b DARFixed /* Go back to normal TLB handling */ #else mfctr r10 @@ -856,7 +781,7 @@ modified_instr: mfdar r11 mtctr r11 /* restore ctr reg from DAR */ mtdar r10 /* save fault EA to DAR */ - mfspr r10,SPRN_SPRG_SCRATCH2 + mfspr r10,SPRN_M_TW b DARFixed /* Go back to normal TLB handling */ /* special handling for r10,r11 since these are modified already */ @@ -891,7 +816,7 @@ start_here: lis r6, swapper_pg_dir@ha tophys(r6,r6) - mtspr SPRN_M_TW, r6 + mtspr SPRN_M_TWB, r6 bl early_init /* We have to do this with MMU on */ @@ -1065,17 +990,3 @@ swapper_pg_dir: */ abatron_pteptrs: .space 8 - -#ifdef CONFIG_PERF_EVENTS - .globl itlb_miss_counter -itlb_miss_counter: - .space 4 - - .globl dtlb_miss_counter -dtlb_miss_counter: - .space 4 - - .globl instruction_counter -instruction_counter: - .space 4 -#endif diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h index d0862a100d29..15ac51072eb3 100644 --- a/arch/powerpc/kernel/head_booke.h +++ 
b/arch/powerpc/kernel/head_booke.h @@ -43,6 +43,9 @@ andi. r11, r11, MSR_PR; /* check whether user or kernel */\ mr r11, r1; \ beq 1f; \ +START_BTB_FLUSH_SECTION \ + BTB_FLUSH(r11) \ +END_BTB_FLUSH_SECTION \ /* if from user, start at top of this thread's kernel stack */ \ lwz r11, THREAD_INFO-THREAD(r10); \ ALLOC_STACK_FRAME(r11, THREAD_SIZE); \ @@ -128,6 +131,9 @@ stw r9,_CCR(r8); /* save CR on stack */\ mfspr r11,exc_level_srr1; /* check whether user or kernel */\ DO_KVM BOOKE_INTERRUPT_##intno exc_level_srr1; \ +START_BTB_FLUSH_SECTION \ + BTB_FLUSH(r10) \ +END_BTB_FLUSH_SECTION \ andi. r11,r11,MSR_PR; \ mfspr r11,SPRN_SPRG_THREAD; /* if from user, start at top of */\ lwz r11,THREAD_INFO-THREAD(r11); /* this thread's kernel stack */\ diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S index e2750b856c8f..2386ce2a9c6e 100644 --- a/arch/powerpc/kernel/head_fsl_booke.S +++ b/arch/powerpc/kernel/head_fsl_booke.S @@ -453,6 +453,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV) mfcr r13 stw r13, THREAD_NORMSAVE(3)(r10) DO_KVM BOOKE_INTERRUPT_DTLB_MISS SPRN_SRR1 +START_BTB_FLUSH_SECTION + mfspr r11, SPRN_SRR1 + andi. r10,r11,MSR_PR + beq 1f + BTB_FLUSH(r10) +1: +END_BTB_FLUSH_SECTION mfspr r10, SPRN_DEAR /* Get faulting address */ /* If we are faulting a kernel address, we have to use the @@ -547,6 +554,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV) mfcr r13 stw r13, THREAD_NORMSAVE(3)(r10) DO_KVM BOOKE_INTERRUPT_ITLB_MISS SPRN_SRR1 +START_BTB_FLUSH_SECTION + mfspr r11, SPRN_SRR1 + andi. r10,r11,MSR_PR + beq 1f + BTB_FLUSH(r10) +1: +END_BTB_FLUSH_SECTION + mfspr r10, SPRN_SRR0 /* Get faulting address */ /* If we are faulting a kernel address, we have to use the diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c index f0dc680e659a..d0625480b59e 100644 --- a/arch/powerpc/kernel/iommu.c +++ b/arch/powerpc/kernel/iommu.c @@ -47,6 +47,7 @@ #include #include #include +#include #define DBG(...) @@ -197,11 +198,11 @@ static unsigned long iommu_range_alloc(struct device *dev, if (unlikely(npages == 0)) { if (printk_ratelimit()) WARN_ON(1); - return IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } if (should_fail_iommu(dev)) - return IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; /* * We don't need to disable preemption here because any CPU can @@ -277,7 +278,7 @@ again: } else { /* Give up */ spin_unlock_irqrestore(&(pool->lock), flags); - return IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } } @@ -309,13 +310,13 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl, unsigned long attrs) { unsigned long entry; - dma_addr_t ret = IOMMU_MAPPING_ERROR; + dma_addr_t ret = DMA_MAPPING_ERROR; int build_fail; entry = iommu_range_alloc(dev, tbl, npages, NULL, mask, align_order); - if (unlikely(entry == IOMMU_MAPPING_ERROR)) - return IOMMU_MAPPING_ERROR; + if (unlikely(entry == DMA_MAPPING_ERROR)) + return DMA_MAPPING_ERROR; entry += tbl->it_offset; /* Offset into real TCE table */ ret = entry << tbl->it_page_shift; /* Set the return dma address */ @@ -327,12 +328,12 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl, /* tbl->it_ops->set() only returns non-zero for transient errors. * Clean up the table bitmap in this case and return - * IOMMU_MAPPING_ERROR. For all other errors the functionality is + * DMA_MAPPING_ERROR. For all other errors the functionality is * not altered. 
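The iommu.c conversion above retires the subsystem-private IOMMU_MAPPING_ERROR sentinel in favour of the generic DMA_MAPPING_ERROR, which callers are expected to test through dma_mapping_error() rather than by comparing against a bus-specific value. Typical caller side, sketched:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int example_map(struct device *dev, void *buf, size_t len,
		       dma_addr_t *handle)
{
	*handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *handle))	/* catches DMA_MAPPING_ERROR */
		return -ENOMEM;
	return 0;
}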
*/ if (unlikely(build_fail)) { __iommu_free(tbl, ret, npages); - return IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } /* Flush/invalidate TLB caches if necessary */ @@ -477,7 +478,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl, DBG(" - vaddr: %lx, size: %lx\n", vaddr, slen); /* Handle failure */ - if (unlikely(entry == IOMMU_MAPPING_ERROR)) { + if (unlikely(entry == DMA_MAPPING_ERROR)) { if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit()) dev_info(dev, "iommu_alloc failed, tbl %p " @@ -544,7 +545,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl, */ if (outcount < incount) { outs = sg_next(outs); - outs->dma_address = IOMMU_MAPPING_ERROR; + outs->dma_address = DMA_MAPPING_ERROR; outs->dma_length = 0; } @@ -562,7 +563,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl, npages = iommu_num_pages(s->dma_address, s->dma_length, IOMMU_PAGE_SIZE(tbl)); __iommu_free(tbl, vaddr, npages); - s->dma_address = IOMMU_MAPPING_ERROR; + s->dma_address = DMA_MAPPING_ERROR; s->dma_length = 0; } if (s == outs) @@ -776,7 +777,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl, unsigned long mask, enum dma_data_direction direction, unsigned long attrs) { - dma_addr_t dma_handle = IOMMU_MAPPING_ERROR; + dma_addr_t dma_handle = DMA_MAPPING_ERROR; void *vaddr; unsigned long uaddr; unsigned int npages, align; @@ -796,7 +797,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl, dma_handle = iommu_alloc(dev, tbl, vaddr, npages, direction, mask >> tbl->it_page_shift, align, attrs); - if (dma_handle == IOMMU_MAPPING_ERROR) { + if (dma_handle == DMA_MAPPING_ERROR) { if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit()) { dev_info(dev, "iommu_alloc failed, tbl %p " @@ -868,7 +869,7 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl, io_order = get_iommu_order(size, tbl); mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL, mask >> tbl->it_page_shift, io_order, 0); - if (mapping == IOMMU_MAPPING_ERROR) { + if (mapping == DMA_MAPPING_ERROR) { free_pages((unsigned long)ret, order); return NULL; } @@ -993,15 +994,19 @@ int iommu_tce_check_gpa(unsigned long page_shift, unsigned long gpa) } EXPORT_SYMBOL_GPL(iommu_tce_check_gpa); -long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry, - unsigned long *hpa, enum dma_data_direction *direction) +long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl, + unsigned long entry, unsigned long *hpa, + enum dma_data_direction *direction) { long ret; + unsigned long size = 0; ret = tbl->it_ops->exchange(tbl, entry, hpa, direction); if (!ret && ((*direction == DMA_FROM_DEVICE) || - (*direction == DMA_BIDIRECTIONAL))) + (*direction == DMA_BIDIRECTIONAL)) && + !mm_iommu_is_devmem(mm, *hpa, tbl->it_page_shift, + &size)) SetPageDirty(pfn_to_page(*hpa >> PAGE_SHIFT)); /* if (unlikely(ret)) @@ -1073,11 +1078,8 @@ void iommu_release_ownership(struct iommu_table *tbl) } EXPORT_SYMBOL_GPL(iommu_release_ownership); -int iommu_add_device(struct device *dev) +int iommu_add_device(struct iommu_table_group *table_group, struct device *dev) { - struct iommu_table *tbl; - struct iommu_table_group_link *tgl; - /* * The sysfs entries should be populated before * binding IOMMU group. 
If sysfs entries isn't @@ -1093,32 +1095,10 @@ int iommu_add_device(struct device *dev) return -EBUSY; } - tbl = get_iommu_table_base(dev); - if (!tbl) { - pr_debug("%s: Skipping device %s with no tbl\n", - __func__, dev_name(dev)); - return 0; - } - - tgl = list_first_entry_or_null(&tbl->it_group_list, - struct iommu_table_group_link, next); - if (!tgl) { - pr_debug("%s: Skipping device %s with no group\n", - __func__, dev_name(dev)); - return 0; - } pr_debug("%s: Adding %s to iommu group %d\n", - __func__, dev_name(dev), - iommu_group_id(tgl->table_group->group)); - - if (PAGE_SIZE < IOMMU_PAGE_SIZE(tbl)) { - pr_err("%s: Invalid IOMMU page size %lx (%lx) on %s\n", - __func__, IOMMU_PAGE_SIZE(tbl), - PAGE_SIZE, dev_name(dev)); - return -EINVAL; - } + __func__, dev_name(dev), iommu_group_id(table_group->group)); - return iommu_group_add_device(tgl->table_group->group, dev); + return iommu_group_add_device(table_group->group, dev); } EXPORT_SYMBOL_GPL(iommu_add_device); @@ -1138,31 +1118,4 @@ void iommu_del_device(struct device *dev) iommu_group_remove_device(dev); } EXPORT_SYMBOL_GPL(iommu_del_device); - -static int tce_iommu_bus_notifier(struct notifier_block *nb, - unsigned long action, void *data) -{ - struct device *dev = data; - - switch (action) { - case BUS_NOTIFY_ADD_DEVICE: - return iommu_add_device(dev); - case BUS_NOTIFY_DEL_DEVICE: - if (dev->iommu_group) - iommu_del_device(dev); - return 0; - default: - return 0; - } -} - -static struct notifier_block tce_iommu_bus_nb = { - .notifier_call = tce_iommu_bus_notifier, -}; - -int __init tce_iommu_bus_notifier_init(void) -{ - bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb); - return 0; -} #endif /* CONFIG_IOMMU_API */ diff --git a/arch/powerpc/kernel/isa-bridge.c b/arch/powerpc/kernel/isa-bridge.c index fda3ae48480c..0e7099da4f25 100644 --- a/arch/powerpc/kernel/isa-bridge.c +++ b/arch/powerpc/kernel/isa-bridge.c @@ -327,8 +327,7 @@ static int isa_bridge_notify(struct notifier_block *nb, unsigned long action, /* Check if we have no ISA device, and this happens to be one, * register it as such if it has an OF device */ - if (!isa_bridge_devnode && devnode && devnode->type && - !strcmp(devnode->type, "isa")) + if (!isa_bridge_devnode && of_node_is_type(devnode, "isa")) isa_bridge_find_late(pdev, devnode); return 0; diff --git a/arch/powerpc/kernel/legacy_serial.c b/arch/powerpc/kernel/legacy_serial.c index 5b9dce17f0c9..7cea5978f21f 100644 --- a/arch/powerpc/kernel/legacy_serial.c +++ b/arch/powerpc/kernel/legacy_serial.c @@ -192,7 +192,7 @@ static int __init add_legacy_soc_port(struct device_node *np, /* Add port, irq will be dealt with later. We passed a translated * IO port value. 
It will be fixed up later along with the irq */ - if (tsi && !strcmp(tsi->type, "tsi-bridge")) + if (of_node_is_type(tsi, "tsi-bridge")) return add_legacy_port(np, -1, UPIO_TSI, addr, addr, 0, legacy_port_flags, 0); else @@ -400,8 +400,7 @@ void __init find_legacy_serial_ports(void) /* Next, fill our array with ISA ports */ for_each_node_by_type(np, "serial") { struct device_node *isa = of_get_parent(np); - if (isa && (!strcmp(isa->name, "isa") || - !strcmp(isa->name, "lpc"))) { + if (of_node_name_eq(isa, "isa") || of_node_name_eq(isa, "lpc")) { if (of_device_is_available(np)) { index = add_legacy_isa_port(np, isa); if (index >= 0 && np == stdout) @@ -415,11 +414,12 @@ void __init find_legacy_serial_ports(void) /* Next, try to locate PCI ports */ for (np = NULL; (np = of_find_all_nodes(np));) { struct device_node *pci, *parent = of_get_parent(np); - if (parent && !strcmp(parent->name, "isa")) { + if (of_node_name_eq(parent, "isa")) { of_node_put(parent); continue; } - if (strcmp(np->name, "serial") && strcmp(np->type, "serial")) { + if (!of_node_name_eq(np, "serial") && + !of_node_is_type(np, "serial")) { of_node_put(parent); continue; } diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S index 695b24a2d954..57d2ffb2d45c 100644 --- a/arch/powerpc/kernel/misc_32.S +++ b/arch/powerpc/kernel/misc_32.S @@ -153,7 +153,7 @@ _GLOBAL(call_setup_cpu) mtctr r5 bctr -#if defined(CONFIG_CPU_FREQ_PMAC) && defined(CONFIG_6xx) +#if defined(CONFIG_CPU_FREQ_PMAC) && defined(CONFIG_PPC_BOOK3S_32) /* This gets called by via-pmu.c to switch the PLL selection * on 750fx CPU. This function should really be moved to some @@ -223,7 +223,7 @@ _GLOBAL(low_choose_7447a_dfs) mtmsr r7 blr -#endif /* CONFIG_CPU_FREQ_PMAC && CONFIG_6xx */ +#endif /* CONFIG_CPU_FREQ_PMAC && CONFIG_PPC_BOOK3S_32 */ /* * complement mask on the msr then "or" some values on. 
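The isa-bridge.c and legacy_serial.c hunks above are part of a treewide move from raw strcmp() on np->name and np->type to of_node_name_eq() and of_node_is_type(); besides hiding the struct fields, both helpers tolerate a NULL node. A sketch of the pattern (hypothetical check):

#include <linux/of.h>

static bool is_isa_serial_port(struct device_node *np)
{
	struct device_node *parent = of_get_parent(np);
	/* both helpers are NULL-safe, so no explicit parent check needed */
	bool ret = of_node_is_type(np, "serial") &&
		   of_node_name_eq(parent, "isa");

	of_node_put(parent);
	return ret;
}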
diff --git a/arch/powerpc/kernel/nvram_64.c b/arch/powerpc/kernel/nvram_64.c index 22e9d281324d..38b03a330cd2 100644 --- a/arch/powerpc/kernel/nvram_64.c +++ b/arch/powerpc/kernel/nvram_64.c @@ -563,8 +563,6 @@ static int nvram_pstore_init(void) nvram_pstore_info.buf = oops_data; nvram_pstore_info.bufsize = oops_data_sz; - spin_lock_init(&nvram_pstore_info.buf_lock); - rc = pstore_register(&nvram_pstore_info); if (rc && (rc != -EPERM)) /* Print error only when pstore.backend == nvram */ @@ -809,6 +807,7 @@ static long dev_nvram_ioctl(struct file *file, unsigned int cmd, #ifdef CONFIG_PPC_PMAC case OBSOLETE_PMAC_NVRAM_GET_OFFSET: printk(KERN_WARNING "nvram: Using obsolete PMAC_NVRAM_GET_OFFSET ioctl\n"); + /* fall through */ case IOC_NVRAM_GET_OFFSET: { int part, offset; diff --git a/arch/powerpc/kernel/pci_of_scan.c b/arch/powerpc/kernel/pci_of_scan.c index 98f04725def7..24191ea2d9a7 100644 --- a/arch/powerpc/kernel/pci_of_scan.c +++ b/arch/powerpc/kernel/pci_of_scan.c @@ -125,16 +125,13 @@ struct pci_dev *of_create_pci_dev(struct device_node *node, struct pci_bus *bus, int devfn) { struct pci_dev *dev; - const char *type; dev = pci_alloc_dev(bus); if (!dev) return NULL; - type = of_get_property(node, "device_type", NULL); - if (type == NULL) - type = ""; - pr_debug(" create device, devfn: %x, type: %s\n", devfn, type); + pr_debug(" create device, devfn: %x, type: %s\n", devfn, + of_node_get_device_type(node)); dev->dev.of_node = of_node_get(node); dev->dev.parent = bus->bridge; @@ -167,12 +164,12 @@ struct pci_dev *of_create_pci_dev(struct device_node *node, /* Early fixups, before probing the BARs */ pci_fixup_device(pci_fixup_early, dev); - if (!strcmp(type, "pci") || !strcmp(type, "pciex")) { + if (of_node_is_type(node, "pci") || of_node_is_type(node, "pciex")) { /* a PCI-PCI bridge */ dev->hdr_type = PCI_HEADER_TYPE_BRIDGE; dev->rom_base_reg = PCI_ROM_ADDRESS1; set_pcie_hotplug_bridge(dev); - } else if (!strcmp(type, "cardbus")) { + } else if (of_node_is_type(node, "cardbus")) { dev->hdr_type = PCI_HEADER_TYPE_CARDBUS; } else { dev->hdr_type = PCI_HEADER_TYPE_NORMAL; diff --git a/arch/powerpc/kernel/pmc.c b/arch/powerpc/kernel/pmc.c index 58eaa3ddf7b9..2de71faca911 100644 --- a/arch/powerpc/kernel/pmc.c +++ b/arch/powerpc/kernel/pmc.c @@ -29,7 +29,7 @@ static void dummy_perf(struct pt_regs *regs) { #if defined(CONFIG_FSL_EMB_PERFMON) mtpmr(PMRN_PMGC0, mfpmr(PMRN_PMGC0) & ~PMGC0_PMIE); -#elif defined(CONFIG_PPC64) || defined(CONFIG_6xx) +#elif defined(CONFIG_PPC64) || defined(CONFIG_PPC_BOOK3S_32) if (cur_cpu_spec->pmc_type == PPC_PMC_IBM) mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~(MMCR0_PMXE|MMCR0_PMAO)); #else diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c index fe758cedb93f..4181ec715f88 100644 --- a/arch/powerpc/kernel/prom.c +++ b/arch/powerpc/kernel/prom.c @@ -124,12 +124,12 @@ static void __init move_device_tree(void) size = fdt_totalsize(initial_boot_params); if ((memory_limit && (start + size) > PHYSICAL_START + memory_limit) || - overlaps_crashkernel(start, size) || - overlaps_initrd(start, size)) { + !memblock_is_memory(start + size - 1) || + overlaps_crashkernel(start, size) || overlaps_initrd(start, size)) { p = __va(memblock_phys_alloc(size, PAGE_SIZE)); memcpy(p, initial_boot_params, size); initial_boot_params = p; - DBG("Moved device tree to 0x%p\n", p); + DBG("Moved device tree to 0x%px\n", p); } DBG("<- move_device_tree\n"); @@ -689,7 +689,7 @@ void __init early_init_devtree(void *params) { phys_addr_t limit; - DBG(" -> early_init_devtree(%p)\n", 
params); + DBG(" -> early_init_devtree(%px)\n", params); /* Too early to BUG_ON(), do it by hand */ if (!early_init_dt_verify(params)) @@ -749,7 +749,7 @@ void __init early_init_devtree(void *params) memblock_allow_resize(); memblock_dump_all(); - DBG("Phys. mem: %llx\n", memblock_phys_mem_size()); + DBG("Phys. mem: %llx\n", (unsigned long long)memblock_phys_mem_size()); /* We may need to relocate the flat tree, do it now. * FIXME .. and the initrd too? */ diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c index 714c3480c52d..cdd5d1d3ae41 100644 --- a/arch/powerpc/kernel/ptrace.c +++ b/arch/powerpc/kernel/ptrace.c @@ -3263,32 +3263,40 @@ static inline int do_seccomp(struct pt_regs *regs) { return 0; } */ long do_syscall_trace_enter(struct pt_regs *regs) { + u32 flags; + user_exit(); - if (test_thread_flag(TIF_SYSCALL_EMU)) { - /* - * A nonzero return code from tracehook_report_syscall_entry() - * tells us to prevent the syscall execution, but we are not - * going to execute it anyway. - * - * Returning -1 will skip the syscall execution. We want to - * avoid clobbering any register also, thus, not 'gotoing' - * skip label. - */ - if (tracehook_report_syscall_entry(regs)) - ; - return -1; - } + flags = READ_ONCE(current_thread_info()->flags) & + (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE); - /* - * The tracer may decide to abort the syscall, if so tracehook - * will return !0. Note that the tracer may also just change - * regs->gpr[0] to an invalid syscall number, that is handled - * below on the exit path. - */ - if (test_thread_flag(TIF_SYSCALL_TRACE) && - tracehook_report_syscall_entry(regs)) - goto skip; + if (flags) { + int rc = tracehook_report_syscall_entry(regs); + + if (unlikely(flags & _TIF_SYSCALL_EMU)) { + /* + * A nonzero return code from + * tracehook_report_syscall_entry() tells us to prevent + * the syscall execution, but we are not going to + * execute it anyway. + * + * Returning -1 will skip the syscall execution. We want + * to avoid clobbering any registers, so we don't goto + * the skip label below. + */ + return -1; + } + + if (rc) { + /* + * The tracer decided to abort the syscall. Note that + * the tracer may also just change regs->gpr[0] to an + * invalid syscall number, that is handled below on the + * exit path. + */ + goto skip; + } + } /* Run seccomp after ptrace; allow it to set gpr[3]. */ if (do_seccomp(regs)) diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c index f6f469fc4073..9b8631533e02 100644 --- a/arch/powerpc/kernel/security.c +++ b/arch/powerpc/kernel/security.c @@ -4,6 +4,7 @@ // // Copyright 2018, Michael Ellerman, IBM Corporation. 
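The do_syscall_trace_enter() rewrite above snapshots the thread flags once with READ_ONCE() so the _TIF_SYSCALL_EMU and _TIF_SYSCALL_TRACE tests cannot observe two different values, and tracehook_report_syscall_entry() runs at most once. The snapshot idiom on its own, sketched (the _TIF_* masks come from the arch headers):

#include <linux/thread_info.h>

static bool wants_syscall_tracing(void)
{
	/* read the flag word once; every later test uses this snapshot */
	u32 flags = READ_ONCE(current_thread_info()->flags);

	return flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE);
}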
+#include #include #include #include @@ -22,10 +23,14 @@ enum count_cache_flush_type { COUNT_CACHE_FLUSH_SW = 0x2, COUNT_CACHE_FLUSH_HW = 0x4, }; -static enum count_cache_flush_type count_cache_flush_type; +static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE; bool barrier_nospec_enabled; static bool no_nospec; +static bool btb_flush_enabled; +#ifdef CONFIG_PPC_FSL_BOOK3E +static bool no_spectrev2; +#endif static void enable_barrier_nospec(bool enable) { @@ -101,6 +106,23 @@ static __init int barrier_nospec_debugfs_init(void) device_initcall(barrier_nospec_debugfs_init); #endif /* CONFIG_DEBUG_FS */ +#ifdef CONFIG_PPC_FSL_BOOK3E +static int __init handle_nospectre_v2(char *p) +{ + no_spectrev2 = true; + + return 0; +} +early_param("nospectre_v2", handle_nospectre_v2); +void setup_spectre_v2(void) +{ + if (no_spectrev2) + do_btb_flush_fixups(); + else + btb_flush_enabled = true; +} +#endif /* CONFIG_PPC_FSL_BOOK3E */ + #ifdef CONFIG_PPC_BOOK3S_64 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf) { @@ -191,8 +213,11 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW) seq_buf_printf(&s, "(hardware accelerated)"); - } else + } else if (btb_flush_enabled) { + seq_buf_printf(&s, "Mitigation: Branch predictor state flush"); + } else { seq_buf_printf(&s, "Vulnerable"); + } seq_buf_printf(&s, "\n"); diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c index 93ee3703b42f..ca00fbb97cf8 100644 --- a/arch/powerpc/kernel/setup-common.c +++ b/arch/powerpc/kernel/setup-common.c @@ -687,7 +687,7 @@ int check_legacy_ioport(unsigned long base_port) return ret; parent = of_get_parent(np); if (parent) { - if (strcmp(parent->type, "isa") == 0) + if (of_node_is_type(parent, "isa")) ret = 0; of_node_put(parent); } @@ -800,7 +800,7 @@ static __init void print_system_info(void) #ifdef CONFIG_PPC_BOOK3S_64 pr_info("ppc64_pft_size = 0x%llx\n", ppc64_pft_size); #endif -#ifdef CONFIG_PPC_STD_MMU_32 +#ifdef CONFIG_PPC_BOOK3S_32 pr_info("Hash_size = 0x%lx\n", Hash_size); #endif pr_info("phys_mem_size = 0x%llx\n", @@ -830,7 +830,7 @@ static __init void print_system_info(void) if (htab_hash_mask) pr_info("htab_hash_mask = 0x%lx\n", htab_hash_mask); #endif -#ifdef CONFIG_PPC_STD_MMU_32 +#ifdef CONFIG_PPC_BOOK3S_32 if (Hash) pr_info("Hash = 0x%p\n", Hash); if (Hash_mask) @@ -974,6 +974,7 @@ void __init setup_arch(char **cmdline_p) ppc_md.setup_arch(); setup_barrier_nospec(); + setup_spectre_v2(); paging_init(); diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c index 81909600013a..947f904688b0 100644 --- a/arch/powerpc/kernel/setup_32.c +++ b/arch/powerpc/kernel/setup_32.c @@ -59,7 +59,6 @@ unsigned long ISA_DMA_THRESHOLD; unsigned int DMA_MODE_READ; unsigned int DMA_MODE_WRITE; -EXPORT_SYMBOL(ISA_DMA_THRESHOLD); EXPORT_SYMBOL(DMA_MODE_READ); EXPORT_SYMBOL(DMA_MODE_WRITE); @@ -101,8 +100,7 @@ notrace unsigned long __init early_init(unsigned long dt_ptr) */ notrace void __init machine_init(u64 dt_ptr) { - unsigned int *addr = (unsigned int *)((unsigned long)&patch__memset_nocache + - patch__memset_nocache); + unsigned int *addr = (unsigned int *)patch_site_addr(&patch__memset_nocache); unsigned long insn; /* Configure static keys first, now that we're relocated. 
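handle_nospectre_v2() above follows the usual early_param() pattern: the handler runs while the kernel command line is parsed, well before initcalls, so all it can safely do is record a flag that setup_spectre_v2() acts on later in boot. The skeleton, sketched with a hypothetical option:

#include <linux/init.h>
#include <linux/types.h>

static bool example_disabled;

static int __init handle_noexample(char *p)
{
	example_disabled = true;
	return 0;		/* 0 = option handled */
}
early_param("noexample", handle_noexample);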
*/ @@ -240,7 +238,7 @@ void __init exc_lvl_early_init(void) void __init setup_power_save(void) { -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 if (cpu_has_feature(CPU_FTR_CAN_DOZE) || cpu_has_feature(CPU_FTR_CAN_NAP)) ppc_md.power_save = ppc6xx_idle; diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index e6474a45cef5..2d47cc79e5b3 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -470,9 +470,9 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame, return 1; if (sigret) { - /* Set up the sigreturn trampoline: li r0,sigret; sc */ - if (__put_user(0x38000000UL + sigret, &frame->tramp[0]) - || __put_user(0x44000002UL, &frame->tramp[1])) + /* Set up the sigreturn trampoline: li 0,sigret; sc */ + if (__put_user(PPC_INST_ADDI + sigret, &frame->tramp[0]) + || __put_user(PPC_INST_SC, &frame->tramp[1])) return 1; flush_icache_range((unsigned long) &frame->tramp[0], (unsigned long) &frame->tramp[2]); @@ -619,9 +619,9 @@ static int save_tm_user_regs(struct pt_regs *regs, if (__put_user(msr, &frame->mc_gregs[PT_MSR])) return 1; if (sigret) { - /* Set up the sigreturn trampoline: li r0,sigret; sc */ - if (__put_user(0x38000000UL + sigret, &frame->tramp[0]) - || __put_user(0x44000002UL, &frame->tramp[1])) + /* Set up the sigreturn trampoline: li 0,sigret; sc */ + if (__put_user(PPC_INST_ADDI + sigret, &frame->tramp[0]) + || __put_user(PPC_INST_SC, &frame->tramp[1])) return 1; flush_icache_range((unsigned long) &frame->tramp[0], (unsigned long) &frame->tramp[2]); @@ -848,7 +848,23 @@ static long restore_tm_user_regs(struct pt_regs *regs, /* If TM bits are set to the reserved value, it's an invalid context */ if (MSR_TM_RESV(msr_hi)) return 1; - /* Pull in the MSR TM bits from the user context */ + + /* + * Disabling preemption, since it is unsafe to be preempted + * with MSR[TS] set without recheckpointing. + */ + preempt_disable(); + + /* + * CAUTION: + * After regs->MSR[TS] being updated, make sure that get_user(), + * put_user() or similar functions are *not* called. These + * functions can generate page faults which will cause the process + * to be de-scheduled with MSR[TS] set but without calling + * tm_recheckpoint(). This can cause a bug. + * + * Pull in the MSR TM bits from the user context + */ regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr_hi & MSR_TS_MASK); /* Now, recheckpoint. This loads up all of the checkpointed (older) * registers, including FP and V[S]Rs. After recheckpointing, the @@ -873,6 +889,8 @@ static long restore_tm_user_regs(struct pt_regs *regs, } #endif + preempt_enable(); + return 0; } #endif @@ -1140,11 +1158,11 @@ SYSCALL_DEFINE0(rt_sigreturn) { struct rt_sigframe __user *rt_sf; struct pt_regs *regs = current_pt_regs(); + int tm_restore = 0; #ifdef CONFIG_PPC_TRANSACTIONAL_MEM struct ucontext __user *uc_transact; unsigned long msr_hi; unsigned long tmp; - int tm_restore = 0; #endif /* Always make any pending restarted system calls return -EINTR */ current->restart_block.fn = do_no_restart_syscall; @@ -1192,11 +1210,19 @@ SYSCALL_DEFINE0(rt_sigreturn) goto bad; } } - if (!tm_restore) - /* Fall through, for non-TM restore */ + if (!tm_restore) { + /* + * Unset regs->msr because ucontext MSR TS is not + * set, and recheckpoint was not called. 
This avoid + * hitting a TM Bad thing at RFID + */ + regs->msr &= ~MSR_TS_MASK; + } + /* Fall through, for non-TM restore */ #endif - if (do_setcontext(&rt_sf->uc, regs, 1)) - goto bad; + if (!tm_restore) + if (do_setcontext(&rt_sf->uc, regs, 1)) + goto bad; /* * It's not clear whether or why it is desirable to save the diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c index 83d51bf586c7..0935fe6c282a 100644 --- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -467,20 +467,6 @@ static long restore_tm_sigcontexts(struct task_struct *tsk, if (MSR_TM_RESV(msr)) return -EINVAL; - /* pull in MSR TS bits from user context */ - regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr & MSR_TS_MASK); - - /* - * Ensure that TM is enabled in regs->msr before we leave the signal - * handler. It could be the case that (a) user disabled the TM bit - * through the manipulation of the MSR bits in uc_mcontext or (b) the - * TM bit was disabled because a sufficient number of context switches - * happened whilst in the signal handler and load_tm overflowed, - * disabling the TM bit. In either case we can end up with an illegal - * TM state leading to a TM Bad Thing when we return to userspace. - */ - regs->msr |= MSR_TM; - /* pull in MSR LE from user context */ regs->msr = (regs->msr & ~MSR_LE) | (msr & MSR_LE); @@ -572,6 +558,34 @@ static long restore_tm_sigcontexts(struct task_struct *tsk, tm_enable(); /* Make sure the transaction is marked as failed */ tsk->thread.tm_texasr |= TEXASR_FS; + + /* + * Disabling preemption, since it is unsafe to be preempted + * with MSR[TS] set without recheckpointing. + */ + preempt_disable(); + + /* pull in MSR TS bits from user context */ + regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr & MSR_TS_MASK); + + /* + * Ensure that TM is enabled in regs->msr before we leave the signal + * handler. It could be the case that (a) user disabled the TM bit + * through the manipulation of the MSR bits in uc_mcontext or (b) the + * TM bit was disabled because a sufficient number of context switches + * happened whilst in the signal handler and load_tm overflowed, + * disabling the TM bit. In either case we can end up with an illegal + * TM state leading to a TM Bad Thing when we return to userspace. + * + * CAUTION: + * After regs->MSR[TS] being updated, make sure that get_user(), + * put_user() or similar functions are *not* called. These + * functions can generate page faults which will cause the process + * to be de-scheduled with MSR[TS] set but without calling + * tm_recheckpoint(). This can cause a bug. 
+ */ + regs->msr |= MSR_TM; + /* This loads the checkpointed FP/VEC state, if used */ tm_recheckpoint(&tsk->thread); @@ -585,6 +599,8 @@ static long restore_tm_sigcontexts(struct task_struct *tsk, regs->msr |= MSR_VEC; } + preempt_enable(); + return err; } #endif @@ -598,11 +614,12 @@ static long setup_trampoline(unsigned int syscall, unsigned int __user *tramp) long err = 0; /* addi r1, r1, __SIGNAL_FRAMESIZE # Pop the dummy stackframe */ - err |= __put_user(0x38210000UL | (__SIGNAL_FRAMESIZE & 0xffff), &tramp[0]); + err |= __put_user(PPC_INST_ADDI | __PPC_RT(R1) | __PPC_RA(R1) | + (__SIGNAL_FRAMESIZE & 0xffff), &tramp[0]); /* li r0, __NR_[rt_]sigreturn| */ - err |= __put_user(0x38000000UL | (syscall & 0xffff), &tramp[1]); + err |= __put_user(PPC_INST_ADDI | (syscall & 0xffff), &tramp[1]); /* sc */ - err |= __put_user(0x44000002UL, &tramp[2]); + err |= __put_user(PPC_INST_SC, &tramp[2]); /* Minimal traceback info */ for (i=TRAMP_TRACEBACK; i < TRAMP_SIZE ;i++) @@ -740,11 +757,23 @@ SYSCALL_DEFINE0(rt_sigreturn) &uc_transact->uc_mcontext)) goto badframe; } - else - /* Fall through, for non-TM restore */ #endif - if (restore_sigcontext(current, NULL, 1, &uc->uc_mcontext)) - goto badframe; + /* Fall through, for non-TM restore */ + if (!MSR_TM_ACTIVE(msr)) { + /* + * Unset MSR[TS] on the thread regs since MSR from user + * context does not have MSR active, and recheckpoint was + * not called since restore_tm_sigcontexts() was not called + * also. + * + * If not unsetting it, the code can RFID to userspace with + * MSR[TS] set, but without CPU in the proper state, + * causing a TM bad thing. + */ + current->thread.regs->msr &= ~MSR_TS_MASK; + if (restore_sigcontext(current, NULL, 1, &uc->uc_mcontext)) + goto badframe; + } if (restore_altstack(&uc->uc_stack)) goto badframe; diff --git a/arch/powerpc/kernel/syscalls/Makefile b/arch/powerpc/kernel/syscalls/Makefile new file mode 100644 index 000000000000..27b48954808d --- /dev/null +++ b/arch/powerpc/kernel/syscalls/Makefile @@ -0,0 +1,63 @@ +# SPDX-License-Identifier: GPL-2.0 +kapi := arch/$(SRCARCH)/include/generated/asm +uapi := arch/$(SRCARCH)/include/generated/uapi/asm + +_dummy := $(shell [ -d '$(uapi)' ] || mkdir -p '$(uapi)') \ + $(shell [ -d '$(kapi)' ] || mkdir -p '$(kapi)') + +syscall := $(srctree)/$(src)/syscall.tbl +syshdr := $(srctree)/$(src)/syscallhdr.sh +systbl := $(srctree)/$(src)/syscalltbl.sh + +quiet_cmd_syshdr = SYSHDR $@ + cmd_syshdr = $(CONFIG_SHELL) '$(syshdr)' '$<' '$@' \ + '$(syshdr_abis_$(basetarget))' \ + '$(syshdr_pfx_$(basetarget))' \ + '$(syshdr_offset_$(basetarget))' + +quiet_cmd_systbl = SYSTBL $@ + cmd_systbl = $(CONFIG_SHELL) '$(systbl)' '$<' '$@' \ + '$(systbl_abis_$(basetarget))' \ + '$(systbl_abi_$(basetarget))' \ + '$(systbl_offset_$(basetarget))' + +syshdr_abis_unistd_32 := common,nospu,32 +$(uapi)/unistd_32.h: $(syscall) $(syshdr) + $(call if_changed,syshdr) + +syshdr_abis_unistd_64 := common,nospu,64 +$(uapi)/unistd_64.h: $(syscall) $(syshdr) + $(call if_changed,syshdr) + +systbl_abis_syscall_table_32 := common,nospu,32 +systbl_abi_syscall_table_32 := 32 +$(kapi)/syscall_table_32.h: $(syscall) $(systbl) + $(call if_changed,systbl) + +systbl_abis_syscall_table_64 := common,nospu,64 +systbl_abi_syscall_table_64 := 64 +$(kapi)/syscall_table_64.h: $(syscall) $(systbl) + $(call if_changed,systbl) + +systbl_abis_syscall_table_c32 := common,nospu,32 +systbl_abi_syscall_table_c32 := c32 +$(kapi)/syscall_table_c32.h: $(syscall) $(systbl) + $(call if_changed,systbl) + +systbl_abis_syscall_table_spu := common,spu 
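The Makefile above generates the unistd_*.h numbering headers and the per-ABI syscall_table_*.h fragments from the single syscall.tbl source; the systbl.S change later in this diff then consumes those fragments by defining __SYSCALL() before including them. The sketch below shows that X-macro consumption pattern in plain C; the table fragment and the syscall stubs are invented for illustration and only mirror the mechanism, not the real generated output.

/*
 * X-macro consumption of a generated syscall table fragment: the
 * fragment is a list of __SYSCALL(nr, entry, nargs) invocations, and
 * the includer defines __SYSCALL to stamp out what it needs, here a
 * C array of handler pointers.
 */
#include <stdio.h>

typedef long (*syscall_fn)(void);

static long sys_ni_syscall(void) { return -38; /* mimics -ENOSYS */ }
static long sys_read(void)       { return 0;   /* stub */ }
static long sys_write(void)      { return 0;   /* stub */ }

/* Stand-in for an included, generated syscall_table_*.h fragment. */
#define GENERATED_TABLE \
        __SYSCALL(0, sys_ni_syscall, 0) \
        __SYSCALL(1, sys_read, 3) \
        __SYSCALL(2, sys_write, 3)

#define __SYSCALL(nr, entry, nargs) [nr] = entry,
static const syscall_fn sys_call_table[] = { GENERATED_TABLE };
#undef __SYSCALL

int main(void)
{
        /* Dispatch syscall number 1 through the table. */
        printf("sys_call_table[1]() = %ld\n", sys_call_table[1]());
        return 0;
}

Keeping the numbering in one table and stamping it out per ABI is what lets the 32-bit, 64-bit, compat (c32) and SPU tables built above stay in sync from a single source file.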
+systbl_abi_syscall_table_spu := spu +$(kapi)/syscall_table_spu.h: $(syscall) $(systbl) + $(call if_changed,systbl) + +uapisyshdr-y += unistd_32.h unistd_64.h +kapisyshdr-y += syscall_table_32.h \ + syscall_table_64.h \ + syscall_table_c32.h \ + syscall_table_spu.h + +targets += $(uapisyshdr-y) $(kapisyshdr-y) + +PHONY += all +all: $(addprefix $(uapi)/,$(uapisyshdr-y)) +all: $(addprefix $(kapi)/,$(kapisyshdr-y)) + @: diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl new file mode 100644 index 000000000000..db3bbb8744af --- /dev/null +++ b/arch/powerpc/kernel/syscalls/syscall.tbl @@ -0,0 +1,427 @@ +# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note +# +# system call numbers and entry vectors for powerpc +# +# The format is: +# +# +# The can be common, spu, nospu, 64, or 32 for this file. +# +0 nospu restart_syscall sys_restart_syscall +1 nospu exit sys_exit +2 nospu fork ppc_fork +3 common read sys_read +4 common write sys_write +5 common open sys_open compat_sys_open +6 common close sys_close +7 common waitpid sys_waitpid +8 common creat sys_creat +9 common link sys_link +10 common unlink sys_unlink +11 nospu execve sys_execve compat_sys_execve +12 common chdir sys_chdir +13 common time sys_time compat_sys_time +14 common mknod sys_mknod +15 common chmod sys_chmod +16 common lchown sys_lchown +17 common break sys_ni_syscall +18 32 oldstat sys_stat sys_ni_syscall +18 64 oldstat sys_ni_syscall +18 spu oldstat sys_ni_syscall +19 common lseek sys_lseek compat_sys_lseek +20 common getpid sys_getpid +21 nospu mount sys_mount compat_sys_mount +22 32 umount sys_oldumount +22 64 umount sys_ni_syscall +22 spu umount sys_ni_syscall +23 common setuid sys_setuid +24 common getuid sys_getuid +25 common stime sys_stime compat_sys_stime +26 nospu ptrace sys_ptrace compat_sys_ptrace +27 common alarm sys_alarm +28 32 oldfstat sys_fstat sys_ni_syscall +28 64 oldfstat sys_ni_syscall +28 spu oldfstat sys_ni_syscall +29 nospu pause sys_pause +30 nospu utime sys_utime compat_sys_utime +31 common stty sys_ni_syscall +32 common gtty sys_ni_syscall +33 common access sys_access +34 common nice sys_nice +35 common ftime sys_ni_syscall +36 common sync sys_sync +37 common kill sys_kill +38 common rename sys_rename +39 common mkdir sys_mkdir +40 common rmdir sys_rmdir +41 common dup sys_dup +42 common pipe sys_pipe +43 common times sys_times compat_sys_times +44 common prof sys_ni_syscall +45 common brk sys_brk +46 common setgid sys_setgid +47 common getgid sys_getgid +48 nospu signal sys_signal +49 common geteuid sys_geteuid +50 common getegid sys_getegid +51 nospu acct sys_acct +52 nospu umount2 sys_umount +53 common lock sys_ni_syscall +54 common ioctl sys_ioctl compat_sys_ioctl +55 common fcntl sys_fcntl compat_sys_fcntl +56 common mpx sys_ni_syscall +57 common setpgid sys_setpgid +58 common ulimit sys_ni_syscall +59 32 oldolduname sys_olduname +59 64 oldolduname sys_ni_syscall +59 spu oldolduname sys_ni_syscall +60 common umask sys_umask +61 common chroot sys_chroot +62 nospu ustat sys_ustat compat_sys_ustat +63 common dup2 sys_dup2 +64 common getppid sys_getppid +65 common getpgrp sys_getpgrp +66 common setsid sys_setsid +67 32 sigaction sys_sigaction compat_sys_sigaction +67 64 sigaction sys_ni_syscall +67 spu sigaction sys_ni_syscall +68 common sgetmask sys_sgetmask +69 common ssetmask sys_ssetmask +70 common setreuid sys_setreuid +71 common setregid sys_setregid +72 32 sigsuspend sys_sigsuspend +72 64 sigsuspend sys_ni_syscall +72 spu sigsuspend 
sys_ni_syscall +73 32 sigpending sys_sigpending compat_sys_sigpending +73 64 sigpending sys_ni_syscall +73 spu sigpending sys_ni_syscall +74 common sethostname sys_sethostname +75 common setrlimit sys_setrlimit compat_sys_setrlimit +76 32 getrlimit sys_old_getrlimit compat_sys_old_getrlimit +76 64 getrlimit sys_ni_syscall +76 spu getrlimit sys_ni_syscall +77 common getrusage sys_getrusage compat_sys_getrusage +78 common gettimeofday sys_gettimeofday compat_sys_gettimeofday +79 common settimeofday sys_settimeofday compat_sys_settimeofday +80 common getgroups sys_getgroups +81 common setgroups sys_setgroups +82 32 select ppc_select sys_ni_syscall +82 64 select sys_ni_syscall +82 spu select sys_ni_syscall +83 common symlink sys_symlink +84 32 oldlstat sys_lstat sys_ni_syscall +84 64 oldlstat sys_ni_syscall +84 spu oldlstat sys_ni_syscall +85 common readlink sys_readlink +86 nospu uselib sys_uselib +87 nospu swapon sys_swapon +88 nospu reboot sys_reboot +89 32 readdir sys_old_readdir compat_sys_old_readdir +89 64 readdir sys_ni_syscall +89 spu readdir sys_ni_syscall +90 common mmap sys_mmap +91 common munmap sys_munmap +92 common truncate sys_truncate compat_sys_truncate +93 common ftruncate sys_ftruncate compat_sys_ftruncate +94 common fchmod sys_fchmod +95 common fchown sys_fchown +96 common getpriority sys_getpriority +97 common setpriority sys_setpriority +98 common profil sys_ni_syscall +99 nospu statfs sys_statfs compat_sys_statfs +100 nospu fstatfs sys_fstatfs compat_sys_fstatfs +101 common ioperm sys_ni_syscall +102 common socketcall sys_socketcall compat_sys_socketcall +103 common syslog sys_syslog +104 common setitimer sys_setitimer compat_sys_setitimer +105 common getitimer sys_getitimer compat_sys_getitimer +106 common stat sys_newstat compat_sys_newstat +107 common lstat sys_newlstat compat_sys_newlstat +108 common fstat sys_newfstat compat_sys_newfstat +109 32 olduname sys_uname +109 64 olduname sys_ni_syscall +109 spu olduname sys_ni_syscall +110 common iopl sys_ni_syscall +111 common vhangup sys_vhangup +112 common idle sys_ni_syscall +113 common vm86 sys_ni_syscall +114 common wait4 sys_wait4 compat_sys_wait4 +115 nospu swapoff sys_swapoff +116 common sysinfo sys_sysinfo compat_sys_sysinfo +117 nospu ipc sys_ipc compat_sys_ipc +118 common fsync sys_fsync +119 32 sigreturn sys_sigreturn compat_sys_sigreturn +119 64 sigreturn sys_ni_syscall +119 spu sigreturn sys_ni_syscall +120 nospu clone ppc_clone +121 common setdomainname sys_setdomainname +122 common uname sys_newuname +123 common modify_ldt sys_ni_syscall +124 common adjtimex sys_adjtimex compat_sys_adjtimex +125 common mprotect sys_mprotect +126 32 sigprocmask sys_sigprocmask compat_sys_sigprocmask +126 64 sigprocmask sys_ni_syscall +126 spu sigprocmask sys_ni_syscall +127 common create_module sys_ni_syscall +128 nospu init_module sys_init_module +129 nospu delete_module sys_delete_module +130 common get_kernel_syms sys_ni_syscall +131 nospu quotactl sys_quotactl +132 common getpgid sys_getpgid +133 common fchdir sys_fchdir +134 common bdflush sys_bdflush +135 common sysfs sys_sysfs +136 32 personality sys_personality ppc64_personality +136 64 personality ppc64_personality +136 spu personality ppc64_personality +137 common afs_syscall sys_ni_syscall +138 common setfsuid sys_setfsuid +139 common setfsgid sys_setfsgid +140 common _llseek sys_llseek +141 common getdents sys_getdents compat_sys_getdents +142 common _newselect sys_select compat_sys_select +143 common flock sys_flock +144 common msync sys_msync +145 common 
readv sys_readv compat_sys_readv +146 common writev sys_writev compat_sys_writev +147 common getsid sys_getsid +148 common fdatasync sys_fdatasync +149 nospu _sysctl sys_sysctl compat_sys_sysctl +150 common mlock sys_mlock +151 common munlock sys_munlock +152 common mlockall sys_mlockall +153 common munlockall sys_munlockall +154 common sched_setparam sys_sched_setparam +155 common sched_getparam sys_sched_getparam +156 common sched_setscheduler sys_sched_setscheduler +157 common sched_getscheduler sys_sched_getscheduler +158 common sched_yield sys_sched_yield +159 common sched_get_priority_max sys_sched_get_priority_max +160 common sched_get_priority_min sys_sched_get_priority_min +161 common sched_rr_get_interval sys_sched_rr_get_interval compat_sys_sched_rr_get_interval +162 common nanosleep sys_nanosleep compat_sys_nanosleep +163 common mremap sys_mremap +164 common setresuid sys_setresuid +165 common getresuid sys_getresuid +166 common query_module sys_ni_syscall +167 common poll sys_poll +168 common nfsservctl sys_ni_syscall +169 common setresgid sys_setresgid +170 common getresgid sys_getresgid +171 common prctl sys_prctl +172 nospu rt_sigreturn sys_rt_sigreturn compat_sys_rt_sigreturn +173 nospu rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction +174 nospu rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask +175 nospu rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending +176 nospu rt_sigtimedwait sys_rt_sigtimedwait compat_sys_rt_sigtimedwait +177 nospu rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo +178 nospu rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend +179 common pread64 sys_pread64 compat_sys_pread64 +180 common pwrite64 sys_pwrite64 compat_sys_pwrite64 +181 common chown sys_chown +182 common getcwd sys_getcwd +183 common capget sys_capget +184 common capset sys_capset +185 nospu sigaltstack sys_sigaltstack compat_sys_sigaltstack +186 32 sendfile sys_sendfile compat_sys_sendfile +186 64 sendfile sys_sendfile64 +186 spu sendfile sys_sendfile64 +187 common getpmsg sys_ni_syscall +188 common putpmsg sys_ni_syscall +189 nospu vfork ppc_vfork +190 common ugetrlimit sys_getrlimit compat_sys_getrlimit +191 common readahead sys_readahead compat_sys_readahead +192 32 mmap2 sys_mmap2 compat_sys_mmap2 +193 32 truncate64 sys_truncate64 compat_sys_truncate64 +194 32 ftruncate64 sys_ftruncate64 compat_sys_ftruncate64 +195 32 stat64 sys_stat64 +196 32 lstat64 sys_lstat64 +197 32 fstat64 sys_fstat64 +198 nospu pciconfig_read sys_pciconfig_read +199 nospu pciconfig_write sys_pciconfig_write +200 nospu pciconfig_iobase sys_pciconfig_iobase +201 common multiplexer sys_ni_syscall +202 common getdents64 sys_getdents64 +203 common pivot_root sys_pivot_root +204 32 fcntl64 sys_fcntl64 compat_sys_fcntl64 +205 common madvise sys_madvise +206 common mincore sys_mincore +207 common gettid sys_gettid +208 common tkill sys_tkill +209 common setxattr sys_setxattr +210 common lsetxattr sys_lsetxattr +211 common fsetxattr sys_fsetxattr +212 common getxattr sys_getxattr +213 common lgetxattr sys_lgetxattr +214 common fgetxattr sys_fgetxattr +215 common listxattr sys_listxattr +216 common llistxattr sys_llistxattr +217 common flistxattr sys_flistxattr +218 common removexattr sys_removexattr +219 common lremovexattr sys_lremovexattr +220 common fremovexattr sys_fremovexattr +221 common futex sys_futex compat_sys_futex +222 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity +223 common sched_getaffinity sys_sched_getaffinity 
compat_sys_sched_getaffinity +# 224 unused +225 common tuxcall sys_ni_syscall +226 32 sendfile64 sys_sendfile64 compat_sys_sendfile64 +227 common io_setup sys_io_setup compat_sys_io_setup +228 common io_destroy sys_io_destroy +229 common io_getevents sys_io_getevents compat_sys_io_getevents +230 common io_submit sys_io_submit compat_sys_io_submit +231 common io_cancel sys_io_cancel +232 nospu set_tid_address sys_set_tid_address +233 common fadvise64 sys_fadvise64 ppc32_fadvise64 +234 nospu exit_group sys_exit_group +235 nospu lookup_dcookie sys_lookup_dcookie compat_sys_lookup_dcookie +236 common epoll_create sys_epoll_create +237 common epoll_ctl sys_epoll_ctl +238 common epoll_wait sys_epoll_wait +239 common remap_file_pages sys_remap_file_pages +240 common timer_create sys_timer_create compat_sys_timer_create +241 common timer_settime sys_timer_settime compat_sys_timer_settime +242 common timer_gettime sys_timer_gettime compat_sys_timer_gettime +243 common timer_getoverrun sys_timer_getoverrun +244 common timer_delete sys_timer_delete +245 common clock_settime sys_clock_settime compat_sys_clock_settime +246 common clock_gettime sys_clock_gettime compat_sys_clock_gettime +247 common clock_getres sys_clock_getres compat_sys_clock_getres +248 common clock_nanosleep sys_clock_nanosleep compat_sys_clock_nanosleep +249 32 swapcontext ppc_swapcontext ppc32_swapcontext +249 64 swapcontext ppc64_swapcontext +249 spu swapcontext sys_ni_syscall +250 common tgkill sys_tgkill +251 common utimes sys_utimes compat_sys_utimes +252 common statfs64 sys_statfs64 compat_sys_statfs64 +253 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64 +254 32 fadvise64_64 ppc_fadvise64_64 +254 spu fadvise64_64 sys_ni_syscall +255 common rtas sys_rtas +256 32 sys_debug_setcontext sys_debug_setcontext sys_ni_syscall +256 64 sys_debug_setcontext sys_ni_syscall +256 spu sys_debug_setcontext sys_ni_syscall +# 257 reserved for vserver +258 nospu migrate_pages sys_migrate_pages compat_sys_migrate_pages +259 nospu mbind sys_mbind compat_sys_mbind +260 nospu get_mempolicy sys_get_mempolicy compat_sys_get_mempolicy +261 nospu set_mempolicy sys_set_mempolicy compat_sys_set_mempolicy +262 nospu mq_open sys_mq_open compat_sys_mq_open +263 nospu mq_unlink sys_mq_unlink +264 nospu mq_timedsend sys_mq_timedsend compat_sys_mq_timedsend +265 nospu mq_timedreceive sys_mq_timedreceive compat_sys_mq_timedreceive +266 nospu mq_notify sys_mq_notify compat_sys_mq_notify +267 nospu mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr +268 nospu kexec_load sys_kexec_load compat_sys_kexec_load +269 nospu add_key sys_add_key +270 nospu request_key sys_request_key +271 nospu keyctl sys_keyctl compat_sys_keyctl +272 nospu waitid sys_waitid compat_sys_waitid +273 nospu ioprio_set sys_ioprio_set +274 nospu ioprio_get sys_ioprio_get +275 nospu inotify_init sys_inotify_init +276 nospu inotify_add_watch sys_inotify_add_watch +277 nospu inotify_rm_watch sys_inotify_rm_watch +278 nospu spu_run sys_spu_run +279 nospu spu_create sys_spu_create +280 nospu pselect6 sys_pselect6 compat_sys_pselect6 +281 nospu ppoll sys_ppoll compat_sys_ppoll +282 common unshare sys_unshare +283 common splice sys_splice +284 common tee sys_tee +285 common vmsplice sys_vmsplice compat_sys_vmsplice +286 common openat sys_openat compat_sys_openat +287 common mkdirat sys_mkdirat +288 common mknodat sys_mknodat +289 common fchownat sys_fchownat +290 common futimesat sys_futimesat compat_sys_futimesat +291 32 fstatat64 sys_fstatat64 +291 64 newfstatat sys_newfstatat +291 spu 
newfstatat sys_newfstatat +292 common unlinkat sys_unlinkat +293 common renameat sys_renameat +294 common linkat sys_linkat +295 common symlinkat sys_symlinkat +296 common readlinkat sys_readlinkat +297 common fchmodat sys_fchmodat +298 common faccessat sys_faccessat +299 common get_robust_list sys_get_robust_list compat_sys_get_robust_list +300 common set_robust_list sys_set_robust_list compat_sys_set_robust_list +301 common move_pages sys_move_pages compat_sys_move_pages +302 common getcpu sys_getcpu +303 nospu epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait +304 common utimensat sys_utimensat compat_sys_utimensat +305 common signalfd sys_signalfd compat_sys_signalfd +306 common timerfd_create sys_timerfd_create +307 common eventfd sys_eventfd +308 common sync_file_range2 sys_sync_file_range2 compat_sys_sync_file_range2 +309 nospu fallocate sys_fallocate compat_sys_fallocate +310 nospu subpage_prot sys_subpage_prot +311 common timerfd_settime sys_timerfd_settime compat_sys_timerfd_settime +312 common timerfd_gettime sys_timerfd_gettime compat_sys_timerfd_gettime +313 common signalfd4 sys_signalfd4 compat_sys_signalfd4 +314 common eventfd2 sys_eventfd2 +315 common epoll_create1 sys_epoll_create1 +316 common dup3 sys_dup3 +317 common pipe2 sys_pipe2 +318 nospu inotify_init1 sys_inotify_init1 +319 common perf_event_open sys_perf_event_open +320 common preadv sys_preadv compat_sys_preadv +321 common pwritev sys_pwritev compat_sys_pwritev +322 nospu rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo +323 nospu fanotify_init sys_fanotify_init +324 nospu fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark +325 common prlimit64 sys_prlimit64 +326 common socket sys_socket +327 common bind sys_bind +328 common connect sys_connect +329 common listen sys_listen +330 common accept sys_accept +331 common getsockname sys_getsockname +332 common getpeername sys_getpeername +333 common socketpair sys_socketpair +334 common send sys_send +335 common sendto sys_sendto +336 common recv sys_recv compat_sys_recv +337 common recvfrom sys_recvfrom compat_sys_recvfrom +338 common shutdown sys_shutdown +339 common setsockopt sys_setsockopt compat_sys_setsockopt +340 common getsockopt sys_getsockopt compat_sys_getsockopt +341 common sendmsg sys_sendmsg compat_sys_sendmsg +342 common recvmsg sys_recvmsg compat_sys_recvmsg +343 common recvmmsg sys_recvmmsg compat_sys_recvmmsg +344 common accept4 sys_accept4 +345 common name_to_handle_at sys_name_to_handle_at +346 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at +347 common clock_adjtime sys_clock_adjtime compat_sys_clock_adjtime +348 common syncfs sys_syncfs +349 common sendmmsg sys_sendmmsg compat_sys_sendmmsg +350 common setns sys_setns +351 nospu process_vm_readv sys_process_vm_readv compat_sys_process_vm_readv +352 nospu process_vm_writev sys_process_vm_writev compat_sys_process_vm_writev +353 nospu finit_module sys_finit_module +354 nospu kcmp sys_kcmp +355 common sched_setattr sys_sched_setattr +356 common sched_getattr sys_sched_getattr +357 common renameat2 sys_renameat2 +358 common seccomp sys_seccomp +359 common getrandom sys_getrandom +360 common memfd_create sys_memfd_create +361 common bpf sys_bpf +362 nospu execveat sys_execveat compat_sys_execveat +363 32 switch_endian sys_ni_syscall +363 64 switch_endian ppc_switch_endian +363 spu switch_endian sys_ni_syscall +364 common userfaultfd sys_userfaultfd +365 common membarrier sys_membarrier +378 nospu mlock2 sys_mlock2 +379 nospu copy_file_range 
sys_copy_file_range +380 common preadv2 sys_preadv2 compat_sys_preadv2 +381 common pwritev2 sys_pwritev2 compat_sys_pwritev2 +382 nospu kexec_file_load sys_kexec_file_load +383 nospu statx sys_statx +384 nospu pkey_alloc sys_pkey_alloc +385 nospu pkey_free sys_pkey_free +386 nospu pkey_mprotect sys_pkey_mprotect +387 nospu rseq sys_rseq +388 nospu io_pgetevents sys_io_pgetevents compat_sys_io_pgetevents diff --git a/arch/powerpc/kernel/syscalls/syscallhdr.sh b/arch/powerpc/kernel/syscalls/syscallhdr.sh new file mode 100644 index 000000000000..c0a9a32937f1 --- /dev/null +++ b/arch/powerpc/kernel/syscalls/syscallhdr.sh @@ -0,0 +1,37 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 + +in="$1" +out="$2" +my_abis=`echo "($3)" | tr ',' '|'` +prefix="$4" +offset="$5" + +fileguard=_UAPI_ASM_POWERPC_`basename "$out" | sed \ + -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/' \ + -e 's/[^A-Z0-9_]/_/g' -e 's/__/_/g'` +grep -E "^[0-9A-Fa-fXx]+[[:space:]]+${my_abis}" "$in" | sort -n | ( + printf "#ifndef %s\n" "${fileguard}" + printf "#define %s\n" "${fileguard}" + printf "\n" + + nxt=0 + while read nr abi name entry compat ; do + if [ -z "$offset" ]; then + printf "#define __NR_%s%s\t%s\n" \ + "${prefix}" "${name}" "${nr}" + else + printf "#define __NR_%s%s\t(%s + %s)\n" \ + "${prefix}" "${name}" "${offset}" "${nr}" + fi + nxt=$((nr+1)) + done + + printf "\n" + printf "#ifdef __KERNEL__\n" + printf "#define __NR_syscalls\t%s\n" "${nxt}" + printf "#endif\n" + printf "\n" + printf "#endif /* %s */" "${fileguard}" + printf "\n" +) > "$out" diff --git a/arch/powerpc/kernel/syscalls/syscalltbl.sh b/arch/powerpc/kernel/syscalls/syscalltbl.sh new file mode 100644 index 000000000000..fd620490a542 --- /dev/null +++ b/arch/powerpc/kernel/syscalls/syscalltbl.sh @@ -0,0 +1,36 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 + +in="$1" +out="$2" +my_abis=`echo "($3)" | tr ',' '|'` +my_abi="$4" +offset="$5" + +emit() { + t_nxt="$1" + t_nr="$2" + t_entry="$3" + + while [ $t_nxt -lt $t_nr ]; do + printf "__SYSCALL(%s,sys_ni_syscall, )\n" "${t_nxt}" + t_nxt=$((t_nxt+1)) + done + printf "__SYSCALL(%s,%s, )\n" "${t_nxt}" "${t_entry}" +} + +grep -E "^[0-9A-Fa-fXx]+[[:space:]]+${my_abis}" "$in" | sort -n | ( + nxt=0 + if [ -z "$offset" ]; then + offset=0 + fi + + while read nr abi name entry compat ; do + if [ "$my_abi" = "c32" ] && [ ! 
-z "$compat" ]; then + emit $((nxt+offset)) $((nr+offset)) $compat + else + emit $((nxt+offset)) $((nr+offset)) $entry + fi + nxt=$((nr+1)) + done +) > "$out" diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c index 755dc98a57ae..e8e93c2c7d03 100644 --- a/arch/powerpc/kernel/sysfs.c +++ b/arch/powerpc/kernel/sysfs.c @@ -457,7 +457,7 @@ static ssize_t __used \ #define HAS_PPC_PMC_CLASSIC 1 #define HAS_PPC_PMC_IBM 1 #define HAS_PPC_PMC_PA6T 1 -#elif defined(CONFIG_6xx) +#elif defined(CONFIG_PPC_BOOK3S_32) #define HAS_PPC_PMC_CLASSIC 1 #define HAS_PPC_PMC_IBM 1 #define HAS_PPC_PMC_G4 1 diff --git a/arch/powerpc/kernel/systbl.S b/arch/powerpc/kernel/systbl.S index 919a32746ede..23265a28740b 100644 --- a/arch/powerpc/kernel/systbl.S +++ b/arch/powerpc/kernel/systbl.S @@ -16,28 +16,6 @@ #include -#ifdef CONFIG_PPC64 -#define SYSCALL(func) .8byte DOTSYM(sys_##func),DOTSYM(sys_##func) -#define COMPAT_SYS(func) .8byte DOTSYM(sys_##func),DOTSYM(compat_sys_##func) -#define PPC_SYS(func) .8byte DOTSYM(ppc_##func),DOTSYM(ppc_##func) -#define OLDSYS(func) .8byte DOTSYM(sys_ni_syscall),DOTSYM(sys_ni_syscall) -#define SYS32ONLY(func) .8byte DOTSYM(sys_ni_syscall),DOTSYM(compat_sys_##func) -#define PPC64ONLY(func) .8byte DOTSYM(ppc_##func),DOTSYM(sys_ni_syscall) -#define SYSX(f, f3264, f32) .8byte DOTSYM(f),DOTSYM(f3264) -#else -#define SYSCALL(func) .long sys_##func -#define COMPAT_SYS(func) .long sys_##func -#define PPC_SYS(func) .long ppc_##func -#define OLDSYS(func) .long sys_##func -#define SYS32ONLY(func) .long sys_##func -#define PPC64ONLY(func) .long sys_ni_syscall -#define SYSX(f, f3264, f32) .long f32 -#endif -#define SYSCALL_SPU(func) SYSCALL(func) -#define COMPAT_SYS_SPU(func) COMPAT_SYS(func) -#define COMPAT_SPU_NEW(func) COMPAT_SYS(func) -#define SYSX_SPU(f, f3264, f32) SYSX(f, f3264, f32) - .section .rodata,"a" #ifdef CONFIG_PPC64 @@ -46,5 +24,21 @@ .globl sys_call_table sys_call_table: +#ifdef CONFIG_PPC64 +#define __SYSCALL(nr, entry, nargs) .8byte DOTSYM(entry) +#include +#undef __SYSCALL +#else +#define __SYSCALL(nr, entry, nargs) .long entry +#include +#undef __SYSCALL +#endif -#include +#ifdef CONFIG_COMPAT +.globl compat_sys_call_table +compat_sys_call_table: +#define compat_sys_sigsuspend sys_sigsuspend +#define __SYSCALL(nr, entry, nargs) .8byte DOTSYM(entry) +#include +#undef __SYSCALL +#endif diff --git a/arch/powerpc/kernel/systbl_chk.c b/arch/powerpc/kernel/systbl_chk.c deleted file mode 100644 index 4653258722ac..000000000000 --- a/arch/powerpc/kernel/systbl_chk.c +++ /dev/null @@ -1,60 +0,0 @@ -/* - * This file, when run through CPP produces a list of syscall numbers - * in the order of systbl.h. That way we can check for gaps and syscalls - * that are out of order. - * - * Unfortunately, we cannot check for the correct ordering of entries - * using SYSX(). - * - * Copyright © IBM Corporation - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. 
- */ -#include - -#define SYSCALL(func) __NR_##func -#define COMPAT_SYS(func) __NR_##func -#define PPC_SYS(func) __NR_##func -#ifdef CONFIG_PPC64 -#define OLDSYS(func) -1 -#define SYS32ONLY(func) -1 -#define PPC64ONLY(func) __NR_##func -#else -#define OLDSYS(func) __NR_old##func -#define SYS32ONLY(func) __NR_##func -#define PPC64ONLY(func) -1 -#endif -#define SYSX(f, f3264, f32) -1 - -#define SYSCALL_SPU(func) SYSCALL(func) -#define COMPAT_SYS_SPU(func) COMPAT_SYS(func) -#define COMPAT_SPU_NEW(func) COMPAT_SYS(_new##func) -#define SYSX_SPU(f, f3264, f32) SYSX(f, f3264, f32) - -/* Just insert a marker for ni_syscalls */ -#define __NR_ni_syscall -1 - -/* - * These are the known exceptions. - * Hopefully, there will be no more. - */ -#define __NR_llseek __NR__llseek -#undef __NR_umount -#define __NR_umount __NR_umount2 -#define __NR_old_getrlimit __NR_getrlimit -#define __NR_newstat __NR_stat -#define __NR_newlstat __NR_lstat -#define __NR_newfstat __NR_fstat -#define __NR_newuname __NR_uname -#define __NR_sysctl __NR__sysctl -#define __NR_olddebug_setcontext __NR_sys_debug_setcontext - -/* We call sys_ugetrlimit for syscall number __NR_getrlimit */ -#define getrlimit ugetrlimit - -START_TABLE -#include -END_TABLE NR_syscalls diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c index b65c8a34ad6e..29746dc28df5 100644 --- a/arch/powerpc/kernel/trace/ftrace.c +++ b/arch/powerpc/kernel/trace/ftrace.c @@ -107,7 +107,7 @@ static int is_b_op(unsigned int op) static unsigned long find_bl_target(unsigned long ip, unsigned int op) { - static int offset; + int offset; offset = (op & 0x03fffffc); /* make it signed */ diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c index 9a86572db1ef..00af2c4febf4 100644 --- a/arch/powerpc/kernel/traps.c +++ b/arch/powerpc/kernel/traps.c @@ -1434,7 +1434,8 @@ void program_check_exception(struct pt_regs *regs) goto bail; } else { printk(KERN_EMERG "Unexpected TM Bad Thing exception " - "at %lx (msr 0x%lx)\n", regs->nip, regs->msr); + "at %lx (msr 0x%lx) tm_scratch=%llx\n", + regs->nip, regs->msr, get_paca()->tm_scratch); die("Unrecoverable exception", regs, SIGABRT); } } diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c index 65b3bdb99f0b..7725a9714736 100644 --- a/arch/powerpc/kernel/vdso.c +++ b/arch/powerpc/kernel/vdso.c @@ -671,15 +671,18 @@ static void __init vdso_setup_syscall_map(void) { unsigned int i; extern unsigned long *sys_call_table; +#ifdef CONFIG_PPC64 + extern unsigned long *compat_sys_call_table; +#endif extern unsigned long sys_ni_syscall; for (i = 0; i < NR_syscalls; i++) { #ifdef CONFIG_PPC64 - if (sys_call_table[i*2] != sys_ni_syscall) + if (sys_call_table[i] != sys_ni_syscall) vdso_data->syscall_map_64[i >> 5] |= 0x80000000UL >> (i & 0x1f); - if (sys_call_table[i*2+1] != sys_ni_syscall) + if (compat_sys_call_table[i] != sys_ni_syscall) vdso_data->syscall_map_32[i >> 5] |= 0x80000000UL >> (i & 0x1f); #else /* CONFIG_PPC64 */ diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S index 434581bcd5b4..ad1c77f71f54 100644 --- a/arch/powerpc/kernel/vmlinux.lds.S +++ b/arch/powerpc/kernel/vmlinux.lds.S @@ -170,6 +170,14 @@ SECTIONS } #endif /* CONFIG_PPC_BARRIER_NOSPEC */ +#ifdef CONFIG_PPC_FSL_BOOK3E + . 
= ALIGN(8); + __spec_btb_flush_fixup : AT(ADDR(__spec_btb_flush_fixup) - LOAD_OFFSET) { + __start__btb_flush_fixup = .; + *(__btb_flush_fixup) + __stop__btb_flush_fixup = .; + } +#endif EXCEPTION_TABLE(0) NOTES :kernel :notes @@ -206,12 +214,6 @@ SECTIONS .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) { INIT_DATA - __vtop_table_begin = .; - KEEP(*(.vtop_fixup)); - __vtop_table_end = .; - __ptov_table_begin = .; - KEEP(*(.ptov_fixup)); - __ptov_table_end = .; } .init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) { @@ -308,6 +310,10 @@ SECTIONS #ifdef CONFIG_PPC32 .data : AT(ADDR(.data) - LOAD_OFFSET) { DATA_DATA +#ifdef CONFIG_UBSAN + *(.data..Lubsan_data*) + *(.data..Lubsan_type*) +#endif *(.data.rel*) *(SDATA_MAIN) *(.sdata2) diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig index 68a0e9d5b440..bfdde04e4905 100644 --- a/arch/powerpc/kvm/Kconfig +++ b/arch/powerpc/kvm/Kconfig @@ -204,6 +204,6 @@ config KVM_XIVE default y depends on KVM_XICS && PPC_XIVE_NATIVE && KVM_BOOK3S_HV_POSSIBLE -source drivers/vhost/Kconfig +source "drivers/vhost/Kconfig" endif # VIRTUALIZATION diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c index 62a8d03ba7e9..532ab79734c7 100644 --- a/arch/powerpc/kvm/book3s_64_vio.c +++ b/arch/powerpc/kvm/book3s_64_vio.c @@ -397,12 +397,13 @@ static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt, return H_SUCCESS; } -static void kvmppc_clear_tce(struct iommu_table *tbl, unsigned long entry) +static void kvmppc_clear_tce(struct mm_struct *mm, struct iommu_table *tbl, + unsigned long entry) { unsigned long hpa = 0; enum dma_data_direction dir = DMA_NONE; - iommu_tce_xchg(tbl, entry, &hpa, &dir); + iommu_tce_xchg(mm, tbl, entry, &hpa, &dir); } static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm, @@ -433,7 +434,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm, unsigned long hpa = 0; long ret; - if (WARN_ON_ONCE(iommu_tce_xchg(tbl, entry, &hpa, &dir))) + if (WARN_ON_ONCE(iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir))) return H_TOO_HARD; if (dir == DMA_NONE) @@ -441,7 +442,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm, ret = kvmppc_tce_iommu_mapped_dec(kvm, tbl, entry); if (ret != H_SUCCESS) - iommu_tce_xchg(tbl, entry, &hpa, &dir); + iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir); return ret; } @@ -487,7 +488,7 @@ long kvmppc_tce_iommu_do_map(struct kvm *kvm, struct iommu_table *tbl, if (mm_iommu_mapped_inc(mem)) return H_TOO_HARD; - ret = iommu_tce_xchg(tbl, entry, &hpa, &dir); + ret = iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir); if (WARN_ON_ONCE(ret)) { mm_iommu_mapped_dec(mem); return H_TOO_HARD; @@ -566,7 +567,7 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn, entry, ua, dir); if (ret != H_SUCCESS) { - kvmppc_clear_tce(stit->tbl, entry); + kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry); goto unlock_exit; } } @@ -655,7 +656,8 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu, iommu_tce_direction(tce)); if (ret != H_SUCCESS) { - kvmppc_clear_tce(stit->tbl, entry); + kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, + entry); goto unlock_exit; } } @@ -704,7 +706,7 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu, return ret; WARN_ON_ONCE(1); - kvmppc_clear_tce(stit->tbl, entry); + kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry); } } diff --git a/arch/powerpc/kvm/bookehv_interrupts.S b/arch/powerpc/kvm/bookehv_interrupts.S index 051af7d97327..4e5081e58409 100644 --- a/arch/powerpc/kvm/bookehv_interrupts.S +++ b/arch/powerpc/kvm/bookehv_interrupts.S @@ -75,6 +75,10 @@ 
PPC_LL r1, VCPU_HOST_STACK(r4) PPC_LL r2, HOST_R2(r1) +START_BTB_FLUSH_SECTION + BTB_FLUSH(r10) +END_BTB_FLUSH_SECTION + mfspr r10, SPRN_PID lwz r8, VCPU_HOST_PID(r4) PPC_LL r11, VCPU_SHARED(r4) diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h index 94f04fcb373e..962ee90a0dfe 100644 --- a/arch/powerpc/kvm/e500.h +++ b/arch/powerpc/kvm/e500.h @@ -20,7 +20,7 @@ #define KVM_E500_H #include -#include +#include #include #include diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c index 3f8189eb56ed..fde1de08b4d7 100644 --- a/arch/powerpc/kvm/e500_emulate.c +++ b/arch/powerpc/kvm/e500_emulate.c @@ -277,6 +277,13 @@ int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn, ulong spr_va vcpu->arch.pwrmgtcr0 = spr_val; break; + case SPRN_BUCSR: + /* + * If we are here, it means that we have already flushed the + * branch predictor, so just return to guest. + */ + break; + /* extra exceptions */ #ifdef CONFIG_SPE_POSSIBLE case SPRN_IVOR32: diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index 89502cbccb1b..506413a2c25e 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -204,22 +204,6 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags) return patch_instruction(addr, create_branch(addr, target, flags)); } -int patch_branch_site(s32 *site, unsigned long target, int flags) -{ - unsigned int *addr; - - addr = (unsigned int *)((unsigned long)site + *site); - return patch_instruction(addr, create_branch(addr, target, flags)); -} - -int patch_instruction_site(s32 *site, unsigned int instr) -{ - unsigned int *addr; - - addr = (unsigned int *)((unsigned long)site + *site); - return patch_instruction(addr, instr); -} - bool is_offset_in_branch_range(long offset) { /* diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c index e613b02bb2f0..5169cc805464 100644 --- a/arch/powerpc/lib/feature-fixups.c +++ b/arch/powerpc/lib/feature-fixups.c @@ -118,7 +118,7 @@ void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end) } #ifdef CONFIG_PPC_BOOK3S_64 -void do_stf_entry_barrier_fixups(enum stf_barrier_type types) +static void do_stf_entry_barrier_fixups(enum stf_barrier_type types) { unsigned int instrs[3], *dest; long *start, *end; @@ -168,7 +168,7 @@ void do_stf_entry_barrier_fixups(enum stf_barrier_type types) : "unknown"); } -void do_stf_exit_barrier_fixups(enum stf_barrier_type types) +static void do_stf_exit_barrier_fixups(enum stf_barrier_type types) { unsigned int instrs[6], *dest; long *start, *end; @@ -347,6 +347,29 @@ void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_ printk(KERN_DEBUG "barrier-nospec: patched %d locations\n", i); } + +static void patch_btb_flush_section(long *curr) +{ + unsigned int *start, *end; + + start = (void *)curr + *curr; + end = (void *)curr + *(curr + 1); + for (; start < end; start++) { + pr_devel("patching dest %lx\n", (unsigned long)start); + patch_instruction(start, PPC_INST_NOP); + } +} + +void do_btb_flush_fixups(void) +{ + long *start, *end; + + start = PTRRELOC(&__start__btb_flush_fixup); + end = PTRRELOC(&__stop__btb_flush_fixup); + + for (; start < end; start += 2) + patch_btb_flush_section(start); +} #endif /* CONFIG_PPC_FSL_BOOK3E */ void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end) diff --git a/arch/powerpc/mm/44x_mmu.c b/arch/powerpc/mm/44x_mmu.c index 12d92518e898..ea2b9af08a48 100644 --- a/arch/powerpc/mm/44x_mmu.c 
+++ b/arch/powerpc/mm/44x_mmu.c @@ -29,6 +29,7 @@ #include #include #include +#include #include "mmu_decl.h" @@ -43,22 +44,13 @@ unsigned long tlb_47x_boltmap[1024/8]; static void ppc44x_update_tlb_hwater(void) { - extern unsigned int tlb_44x_patch_hwater_D[]; - extern unsigned int tlb_44x_patch_hwater_I[]; - /* The TLB miss handlers hard codes the watermark in a cmpli * instruction to improve performances rather than loading it * from the global variable. Thus, we patch the instructions * in the 2 TLB miss handlers when updating the value */ - tlb_44x_patch_hwater_D[0] = (tlb_44x_patch_hwater_D[0] & 0xffff0000) | - tlb_44x_hwater; - flush_icache_range((unsigned long)&tlb_44x_patch_hwater_D[0], - (unsigned long)&tlb_44x_patch_hwater_D[1]); - tlb_44x_patch_hwater_I[0] = (tlb_44x_patch_hwater_I[0] & 0xffff0000) | - tlb_44x_hwater; - flush_icache_range((unsigned long)&tlb_44x_patch_hwater_I[0], - (unsigned long)&tlb_44x_patch_hwater_I[1]); + modify_instruction_site(&patch__tlb_44x_hwater_D, 0xffff, tlb_44x_hwater); + modify_instruction_site(&patch__tlb_44x_hwater_I, 0xffff, tlb_44x_hwater); } /* diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c index 01b7f5107c3a..bfa503cff351 100644 --- a/arch/powerpc/mm/8xx_mmu.c +++ b/arch/powerpc/mm/8xx_mmu.c @@ -100,11 +100,7 @@ static void __init mmu_mapin_immr(void) static void __init mmu_patch_cmp_limit(s32 *site, unsigned long mapped) { - unsigned int instr = *(unsigned int *)patch_site_addr(site); - - instr &= 0xffff0000; - instr |= (unsigned long)__va(mapped) >> 16; - patch_instruction_site(site, instr); + modify_instruction_site(site, 0xffff, (unsigned long)__va(mapped) >> 16); } unsigned long __init mmu_mapin_ram(unsigned long top) @@ -175,12 +171,12 @@ void set_context(unsigned long id, pgd_t *pgd) *(ptr + 1) = pgd; #endif - /* Register M_TW will contain base address of level 1 table minus the + /* Register M_TWB will contain base address of level 1 table minus the * lower part of the kernel PGDIR base address, so that all accesses to * level 1 table are done relative to lower part of kernel PGDIR base * address. 
*/ - mtspr(SPRN_M_TW, __pa(pgd) - offset); + mtspr(SPRN_M_TWB, __pa(pgd) - offset); /* Update context */ mtspr(SPRN_M_CASID, id - 1); diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile index ca96e7be4d0e..f965fc33a8b7 100644 --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -15,10 +15,13 @@ obj-$(CONFIG_PPC_MMU_NOHASH) += mmu_context_nohash.o tlb_nohash.o \ obj-$(CONFIG_PPC_BOOK3E) += tlb_low_$(BITS)e.o hash64-$(CONFIG_PPC_NATIVE) := hash_native_64.o obj-$(CONFIG_PPC_BOOK3E_64) += pgtable-book3e.o -obj-$(CONFIG_PPC_BOOK3S_64) += pgtable-hash64.o hash_utils_64.o slb.o $(hash64-y) mmu_context_book3s64.o pgtable-book3s64.o +obj-$(CONFIG_PPC_BOOK3S_64) += pgtable-hash64.o hash_utils_64.o slb.o \ + $(hash64-y) mmu_context_book3s64.o \ + pgtable-book3s64.o pgtable-frag.o +obj-$(CONFIG_PPC32) += pgtable-frag.o obj-$(CONFIG_PPC_RADIX_MMU) += pgtable-radix.o tlb-radix.o -obj-$(CONFIG_PPC_STD_MMU_32) += ppc_mmu_32.o hash_low_32.o mmu_context_hash32.o -obj-$(CONFIG_PPC_STD_MMU) += tlb_hash$(BITS).o +obj-$(CONFIG_PPC_BOOK3S_32) += ppc_mmu_32.o hash_low_32.o mmu_context_hash32.o +obj-$(CONFIG_PPC_BOOK3S) += tlb_hash$(BITS).o ifdef CONFIG_PPC_BOOK3S_64 obj-$(CONFIG_PPC_4K_PAGES) += hash64_4k.o obj-$(CONFIG_PPC_64K_PAGES) += hash64_64k.o @@ -47,7 +50,7 @@ ifdef CONFIG_PPC_PTDUMP obj-$(CONFIG_4xx) += dump_linuxpagetables-generic.o obj-$(CONFIG_PPC_8xx) += dump_linuxpagetables-8xx.o obj-$(CONFIG_PPC_BOOK3E_MMU) += dump_linuxpagetables-generic.o -obj-$(CONFIG_PPC_BOOK3S_32) += dump_linuxpagetables-generic.o +obj-$(CONFIG_PPC_BOOK3S_32) += dump_linuxpagetables-generic.o dump_bats.o dump_sr.o obj-$(CONFIG_PPC_BOOK3S_64) += dump_linuxpagetables-book3s64.o endif obj-$(CONFIG_PPC_HTDUMP) += dump_hashpagetable.o diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c index b6e7b5952ab5..e955539686a4 100644 --- a/arch/powerpc/mm/dma-noncoherent.c +++ b/arch/powerpc/mm/dma-noncoherent.c @@ -29,7 +29,7 @@ #include #include #include -#include +#include #include #include @@ -151,8 +151,8 @@ static struct ppc_vm_region *ppc_vm_region_find(struct ppc_vm_region *head, unsi * Allocate DMA-coherent memory space and return both the kernel remapped * virtual and bus address for that space. */ -void * -__dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp) +void *__dma_nommu_alloc_coherent(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) { struct page *page; struct ppc_vm_region *c; @@ -223,7 +223,7 @@ __dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t /* * Set the "dma handle" */ - *handle = page_to_phys(page); + *dma_handle = phys_to_dma(dev, page_to_phys(page)); do { SetPageReserved(page); @@ -249,12 +249,12 @@ __dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t no_page: return NULL; } -EXPORT_SYMBOL(__dma_alloc_coherent); /* * free a page as defined by the above mapping. */ -void __dma_free_coherent(size_t size, void *vaddr) +void __dma_nommu_free_coherent(struct device *dev, size_t size, void *vaddr, + dma_addr_t dma_handle, unsigned long attrs) { struct ppc_vm_region *c; unsigned long flags, addr; @@ -309,7 +309,6 @@ void __dma_free_coherent(size_t size, void *vaddr) __func__, vaddr); dump_stack(); } -EXPORT_SYMBOL(__dma_free_coherent); /* * make an area consistent. @@ -401,7 +400,7 @@ EXPORT_SYMBOL(__dma_sync_page); /* * Return the PFN for a given cpu virtual address returned by - * __dma_alloc_coherent. 
This is used by dma_mmap_coherent() + * __dma_nommu_alloc_coherent. This is used by dma_mmap_coherent() */ unsigned long __dma_get_coherent_pfn(unsigned long cpu_addr) { diff --git a/arch/powerpc/mm/dump_bats.c b/arch/powerpc/mm/dump_bats.c new file mode 100644 index 000000000000..a0d23e96e841 --- /dev/null +++ b/arch/powerpc/mm/dump_bats.c @@ -0,0 +1,173 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright 2018, Christophe Leroy CS S.I. + * + * + * This dumps the content of BATS + */ + +#include +#include +#include + +static char *pp_601(int k, int pp) +{ + if (pp == 0) + return k ? "NA" : "RWX"; + if (pp == 1) + return k ? "ROX" : "RWX"; + if (pp == 2) + return k ? "RWX" : "RWX"; + return k ? "ROX" : "ROX"; +} + +static void bat_show_601(struct seq_file *m, int idx, u32 lower, u32 upper) +{ + u32 blpi = upper & 0xfffe0000; + u32 k = (upper >> 2) & 3; + u32 pp = upper & 3; + phys_addr_t pbn = PHYS_BAT_ADDR(lower); + u32 bsm = lower & 0x3ff; + u32 size = (bsm + 1) << 17; + + seq_printf(m, "%d: ", idx); + if (!(lower & 0x40)) { + seq_puts(m, " -\n"); + return; + } + + seq_printf(m, "0x%08x-0x%08x ", blpi, blpi + size - 1); +#ifdef CONFIG_PHYS_64BIT + seq_printf(m, "0x%016llx ", pbn); +#else + seq_printf(m, "0x%08x ", pbn); +#endif + + seq_printf(m, "Kernel %s User %s", pp_601(k & 2, pp), pp_601(k & 1, pp)); + + if (lower & _PAGE_WRITETHRU) + seq_puts(m, "write through "); + if (lower & _PAGE_NO_CACHE) + seq_puts(m, "no cache "); + if (lower & _PAGE_COHERENT) + seq_puts(m, "coherent "); + seq_puts(m, "\n"); +} + +#define BAT_SHOW_601(_m, _n, _l, _u) bat_show_601(_m, _n, mfspr(_l), mfspr(_u)) + +static int bats_show_601(struct seq_file *m, void *v) +{ + seq_puts(m, "---[ Block Address Translation ]---\n"); + + BAT_SHOW_601(m, 0, SPRN_IBAT0L, SPRN_IBAT0U); + BAT_SHOW_601(m, 1, SPRN_IBAT1L, SPRN_IBAT1U); + BAT_SHOW_601(m, 2, SPRN_IBAT2L, SPRN_IBAT2U); + BAT_SHOW_601(m, 3, SPRN_IBAT3L, SPRN_IBAT3U); + + return 0; +} + +static void bat_show_603(struct seq_file *m, int idx, u32 lower, u32 upper, bool is_d) +{ + u32 bepi = upper & 0xfffe0000; + u32 bl = (upper >> 2) & 0x7ff; + u32 k = upper & 3; + phys_addr_t brpn = PHYS_BAT_ADDR(lower); + u32 size = (bl + 1) << 17; + + seq_printf(m, "%d: ", idx); + if (k == 0) { + seq_puts(m, " -\n"); + return; + } + + seq_printf(m, "0x%08x-0x%08x ", bepi, bepi + size - 1); +#ifdef CONFIG_PHYS_64BIT + seq_printf(m, "0x%016llx ", brpn); +#else + seq_printf(m, "0x%08x ", brpn); +#endif + + if (k == 1) + seq_puts(m, "User "); + else if (k == 2) + seq_puts(m, "Kernel "); + else + seq_puts(m, "Kernel/User "); + + if (lower & BPP_RX) + seq_puts(m, is_d ? "RO " : "EXEC "); + else if (lower & BPP_RW) + seq_puts(m, is_d ? "RW " : "EXEC "); + else + seq_puts(m, is_d ? 
"NA " : "NX "); + + if (lower & _PAGE_WRITETHRU) + seq_puts(m, "write through "); + if (lower & _PAGE_NO_CACHE) + seq_puts(m, "no cache "); + if (lower & _PAGE_COHERENT) + seq_puts(m, "coherent "); + if (lower & _PAGE_GUARDED) + seq_puts(m, "guarded "); + seq_puts(m, "\n"); +} + +#define BAT_SHOW_603(_m, _n, _l, _u, _d) bat_show_603(_m, _n, mfspr(_l), mfspr(_u), _d) + +static int bats_show_603(struct seq_file *m, void *v) +{ + seq_puts(m, "---[ Instruction Block Address Translation ]---\n"); + + BAT_SHOW_603(m, 0, SPRN_IBAT0L, SPRN_IBAT0U, false); + BAT_SHOW_603(m, 1, SPRN_IBAT1L, SPRN_IBAT1U, false); + BAT_SHOW_603(m, 2, SPRN_IBAT2L, SPRN_IBAT2U, false); + BAT_SHOW_603(m, 3, SPRN_IBAT3L, SPRN_IBAT3U, false); + if (mmu_has_feature(MMU_FTR_USE_HIGH_BATS)) { + BAT_SHOW_603(m, 4, SPRN_IBAT4L, SPRN_IBAT4U, false); + BAT_SHOW_603(m, 5, SPRN_IBAT5L, SPRN_IBAT5U, false); + BAT_SHOW_603(m, 6, SPRN_IBAT6L, SPRN_IBAT6U, false); + BAT_SHOW_603(m, 7, SPRN_IBAT7L, SPRN_IBAT7U, false); + } + + seq_puts(m, "\n---[ Data Block Address Translation ]---\n"); + + BAT_SHOW_603(m, 0, SPRN_DBAT0L, SPRN_DBAT0U, true); + BAT_SHOW_603(m, 1, SPRN_DBAT1L, SPRN_DBAT1U, true); + BAT_SHOW_603(m, 2, SPRN_DBAT2L, SPRN_DBAT2U, true); + BAT_SHOW_603(m, 3, SPRN_DBAT3L, SPRN_DBAT3U, true); + if (mmu_has_feature(MMU_FTR_USE_HIGH_BATS)) { + BAT_SHOW_603(m, 4, SPRN_DBAT4L, SPRN_DBAT4U, true); + BAT_SHOW_603(m, 5, SPRN_DBAT5L, SPRN_DBAT5U, true); + BAT_SHOW_603(m, 6, SPRN_DBAT6L, SPRN_DBAT6U, true); + BAT_SHOW_603(m, 7, SPRN_DBAT7L, SPRN_DBAT7U, true); + } + + return 0; +} + +static int bats_open(struct inode *inode, struct file *file) +{ + if (cpu_has_feature(CPU_FTR_601)) + return single_open(file, bats_show_601, NULL); + + return single_open(file, bats_show_603, NULL); +} + +static const struct file_operations bats_fops = { + .open = bats_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + +static int __init bats_init(void) +{ + struct dentry *debugfs_file; + + debugfs_file = debugfs_create_file("block_address_translation", 0400, + powerpc_debugfs_root, NULL, &bats_fops); + return debugfs_file ? 0 : -ENOMEM; +} +device_initcall(bats_init); diff --git a/arch/powerpc/mm/dump_linuxpagetables-generic.c b/arch/powerpc/mm/dump_linuxpagetables-generic.c index 1e3829ec1348..3fe98a0974c6 100644 --- a/arch/powerpc/mm/dump_linuxpagetables-generic.c +++ b/arch/powerpc/mm/dump_linuxpagetables-generic.c @@ -21,13 +21,11 @@ static const struct flag_info flag_array[] = { .set = "rw", .clear = "r ", }, { -#ifndef CONFIG_PPC_BOOK3S_32 .mask = _PAGE_EXEC, .val = _PAGE_EXEC, .set = " X ", .clear = " ", }, { -#endif .mask = _PAGE_PRESENT, .val = _PAGE_PRESENT, .set = "present", diff --git a/arch/powerpc/mm/dump_sr.c b/arch/powerpc/mm/dump_sr.c new file mode 100644 index 000000000000..501843664bb9 --- /dev/null +++ b/arch/powerpc/mm/dump_sr.c @@ -0,0 +1,64 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright 2018, Christophe Leroy CS S.I. 
+ * + * + * This dumps the content of Segment Registers + */ + +#include + +static void seg_show(struct seq_file *m, int i) +{ + u32 val = mfsrin(i << 28); + + seq_printf(m, "0x%01x0000000-0x%01xfffffff ", i, i); + seq_printf(m, "Kern key %d ", (val >> 30) & 1); + seq_printf(m, "User key %d ", (val >> 29) & 1); + if (val & 0x80000000) { + seq_printf(m, "Device 0x%03x", (val >> 20) & 0x1ff); + seq_printf(m, "-0x%05x", val & 0xfffff); + } else { + if (val & 0x10000000) + seq_puts(m, "No Exec "); + seq_printf(m, "VSID 0x%06x", val & 0xffffff); + } + seq_puts(m, "\n"); +} + +static int sr_show(struct seq_file *m, void *v) +{ + int i; + + seq_puts(m, "---[ User Segments ]---\n"); + for (i = 0; i < TASK_SIZE >> 28; i++) + seg_show(m, i); + + seq_puts(m, "\n---[ Kernel Segments ]---\n"); + for (; i < 16; i++) + seg_show(m, i); + + return 0; +} + +static int sr_open(struct inode *inode, struct file *file) +{ + return single_open(file, sr_show, NULL); +} + +static const struct file_operations sr_fops = { + .open = sr_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + +static int __init sr_init(void) +{ + struct dentry *debugfs_file; + + debugfs_file = debugfs_create_file("segment_registers", 0400, + powerpc_debugfs_root, NULL, &sr_fops); + return debugfs_file ? 0 : -ENOMEM; +} +device_initcall(sr_init); diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 2e6fb1d758c3..a6dcfda3e11e 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -226,7 +226,9 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, static bool bad_kernel_fault(bool is_exec, unsigned long error_code, unsigned long address) { - if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT))) { + /* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */ + if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT | + DSISR_PROTFAULT))) { printk_ratelimited(KERN_CRIT "kernel tried to execute" " exec-protected page (%lx) -" "exploit attempt? (uid: %d)\n", @@ -341,9 +343,20 @@ static inline void cmo_account_page_fault(void) static inline void cmo_account_page_fault(void) { } #endif /* CONFIG_PPC_SMLPAR */ -#ifdef CONFIG_PPC_STD_MMU -static void sanity_check_fault(bool is_write, unsigned long error_code) +#ifdef CONFIG_PPC_BOOK3S +static void sanity_check_fault(bool is_write, bool is_user, + unsigned long error_code, unsigned long address) { + /* + * Userspace trying to access kernel address, we get PROTFAULT for that. + */ + if (is_user && address >= TASK_SIZE) { + pr_crit_ratelimited("%s[%d]: User access of kernel address (%lx) - exploit attempt? (uid: %d)\n", + current->comm, current->pid, address, + from_kuid(&init_user_ns, current_uid())); + return; + } + /* * For hash translation mode, we should never get a * PROTFAULT. Any update to pte to reduce access will result in us @@ -373,12 +386,15 @@ static void sanity_check_fault(bool is_write, unsigned long error_code) * For radix, we can get prot fault for autonuma case, because radix * page table will have them marked noaccess for user. 
*/ - if (!radix_enabled() && !is_write) - WARN_ON_ONCE(error_code & DSISR_PROTFAULT); + if (radix_enabled() || is_write) + return; + + WARN_ON_ONCE(error_code & DSISR_PROTFAULT); } #else -static void sanity_check_fault(bool is_write, unsigned long error_code) { } -#endif /* CONFIG_PPC_STD_MMU */ +static void sanity_check_fault(bool is_write, bool is_user, + unsigned long error_code, unsigned long address) { } +#endif /* CONFIG_PPC_BOOK3S */ /* * Define the correct "is_write" bit in error_code based @@ -435,7 +451,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address, } /* Additional sanity check(s) */ - sanity_check_fault(is_write, error_code); + sanity_check_fault(is_write, is_user, error_code, address); /* * The kernel should never take an execute fault nor should it @@ -637,21 +653,22 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig) case 0x300: case 0x380: case 0xe00: - printk(KERN_ALERT "Unable to handle kernel paging request for " - "data at address 0x%08lx\n", regs->dar); + pr_alert("BUG: %s at 0x%08lx\n", + regs->dar < PAGE_SIZE ? "Kernel NULL pointer dereference" : + "Unable to handle kernel data access", regs->dar); break; case 0x400: case 0x480: - printk(KERN_ALERT "Unable to handle kernel paging request for " - "instruction fetch\n"); + pr_alert("BUG: Unable to handle kernel instruction fetch%s", + regs->nip < PAGE_SIZE ? " (NULL pointer?)\n" : "\n"); break; case 0x600: - printk(KERN_ALERT "Unable to handle kernel paging request for " - "unaligned access at address 0x%08lx\n", regs->dar); + pr_alert("BUG: Unable to handle kernel unaligned access at 0x%08lx\n", + regs->dar); break; default: - printk(KERN_ALERT "Unable to handle kernel paging request for " - "unknown fault\n"); + pr_alert("BUG: Unable to handle unknown paging fault at 0x%08lx\n", + regs->dar); break; } printk(KERN_ALERT "Faulting instruction address: 0x%08lx\n", diff --git a/arch/powerpc/mm/hash_low_32.S b/arch/powerpc/mm/hash_low_32.S index 26acf6c8c20c..1e2df3e9f9ea 100644 --- a/arch/powerpc/mm/hash_low_32.S +++ b/arch/powerpc/mm/hash_low_32.S @@ -28,6 +28,7 @@ #include #include #include +#include #ifdef CONFIG_SMP .section .bss @@ -337,11 +338,13 @@ END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT) rlwimi r5,r4,10,26,31 /* put in API (abbrev page index) */ SET_V(r5) /* set V (valid) bit */ + patch_site 0f, patch__hash_page_A0 + patch_site 1f, patch__hash_page_A1 + patch_site 2f, patch__hash_page_A2 /* Get the address of the primary PTE group in the hash table (r3) */ -_GLOBAL(hash_page_patch_A) - addis r0,r7,Hash_base@h /* base address of hash table */ - rlwimi r0,r3,LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* VSID -> hash */ - rlwinm r3,r4,20+LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* PI -> hash */ +0: addis r0,r7,Hash_base@h /* base address of hash table */ +1: rlwimi r0,r3,LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* VSID -> hash */ +2: rlwinm r3,r4,20+LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* PI -> hash */ xor r3,r3,r0 /* make primary hash */ li r0,8 /* PTEs/group */ @@ -366,10 +369,10 @@ _GLOBAL(hash_page_patch_A) bdnzf 2,1b /* loop while ctr != 0 && !cr0.eq */ beq+ found_slot + patch_site 0f, patch__hash_page_B /* Search the secondary PTEG for a matching PTE */ ori r5,r5,PTE_H /* set H (secondary hash) bit */ -_GLOBAL(hash_page_patch_B) - xoris r4,r3,Hash_msk>>16 /* compute secondary hash */ +0: xoris r4,r3,Hash_msk>>16 /* compute secondary hash */ xori r4,r4,(-PTEG_SIZE & 0xffff) addi r4,r4,-HPTE_SIZE mtctr r0 @@ -393,10 +396,10 @@ _GLOBAL(hash_page_patch_B) addi r6,r6,1 stw 
r6,primary_pteg_full@l(r4) + patch_site 0f, patch__hash_page_C /* Search the secondary PTEG for an empty slot */ ori r5,r5,PTE_H /* set H (secondary hash) bit */ -_GLOBAL(hash_page_patch_C) - xoris r4,r3,Hash_msk>>16 /* compute secondary hash */ +0: xoris r4,r3,Hash_msk>>16 /* compute secondary hash */ xori r4,r4,(-PTEG_SIZE & 0xffff) addi r4,r4,-HPTE_SIZE mtctr r0 @@ -577,11 +580,13 @@ _GLOBAL(flush_hash_pages) stwcx. r8,0,r5 /* update the pte */ bne- 33b + patch_site 0f, patch__flush_hash_A0 + patch_site 1f, patch__flush_hash_A1 + patch_site 2f, patch__flush_hash_A2 /* Get the address of the primary PTE group in the hash table (r3) */ -_GLOBAL(flush_hash_patch_A) - addis r8,r7,Hash_base@h /* base address of hash table */ - rlwimi r8,r3,LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* VSID -> hash */ - rlwinm r0,r4,20+LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* PI -> hash */ +0: addis r8,r7,Hash_base@h /* base address of hash table */ +1: rlwimi r8,r3,LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* VSID -> hash */ +2: rlwinm r0,r4,20+LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* PI -> hash */ xor r8,r0,r8 /* make primary hash */ /* Search the primary PTEG for a PTE whose 1st (d)word matches r5 */ @@ -593,11 +598,11 @@ _GLOBAL(flush_hash_patch_A) bdnzf 2,1b /* loop while ctr != 0 && !cr0.eq */ beq+ 3f + patch_site 0f, patch__flush_hash_B /* Search the secondary PTEG for a matching PTE */ ori r11,r11,PTE_H /* set H (secondary hash) bit */ li r0,8 /* PTEs/group */ -_GLOBAL(flush_hash_patch_B) - xoris r12,r8,Hash_msk>>16 /* compute secondary hash */ +0: xoris r12,r8,Hash_msk>>16 /* compute secondary hash */ xori r12,r12,(-PTEG_SIZE & 0xffff) addi r12,r12,-HPTE_SIZE mtctr r0 diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index 4c01e9a01a74..9e732bb2c84a 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -42,6 +42,8 @@ EXPORT_SYMBOL(HPAGE_SHIFT); #define hugepd_none(hpd) (hpd_val(hpd) == 0) +#define PTE_T_ORDER (__builtin_ffs(sizeof(pte_t)) - __builtin_ffs(sizeof(void *))) + pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long sz) { /* @@ -61,14 +63,17 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp, int num_hugepd; if (pshift >= pdshift) { - cachep = hugepte_cache; + cachep = PGT_CACHE(PTE_T_ORDER); num_hugepd = 1 << (pshift - pdshift); + } else if (IS_ENABLED(CONFIG_PPC_8xx)) { + cachep = PGT_CACHE(PTE_INDEX_SIZE); + num_hugepd = 1; } else { cachep = PGT_CACHE(pdshift - pshift); num_hugepd = 1; } - new = kmem_cache_zalloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL)); + new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL)); BUG_ON(pshift > HUGEPD_SHIFT_MASK); BUG_ON((unsigned long)new & HUGEPD_SHIFT_MASK); @@ -264,7 +269,7 @@ static void hugepd_free_rcu_callback(struct rcu_head *head) unsigned int i; for (i = 0; i < batch->index; i++) - kmem_cache_free(hugepte_cache, batch->ptes[i]); + kmem_cache_free(PGT_CACHE(PTE_T_ORDER), batch->ptes[i]); free_page((unsigned long)batch); } @@ -277,7 +282,7 @@ static void hugepd_free(struct mmu_gather *tlb, void *hugepte) if (atomic_read(&tlb->mm->mm_users) < 2 || mm_is_thread_local(tlb->mm)) { - kmem_cache_free(hugepte_cache, hugepte); + kmem_cache_free(PGT_CACHE(PTE_T_ORDER), hugepte); put_cpu_var(hugepd_freelist_cur); return; } @@ -329,6 +334,9 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif if (shift >= pdshift) hugepd_free(tlb, hugepte); + else if (IS_ENABLED(CONFIG_PPC_8xx)) + pgtable_free_tlb(tlb, hugepte, + 
get_hugepd_cache_index(PTE_INDEX_SIZE)); else pgtable_free_tlb(tlb, hugepte, get_hugepd_cache_index(pdshift - shift)); @@ -652,7 +660,6 @@ static int __init hugepage_setup_sz(char *str) } __setup("hugepagesz=", hugepage_setup_sz); -struct kmem_cache *hugepte_cache; static int __init hugetlbpage_init(void) { int psize; @@ -699,24 +706,13 @@ static int __init hugetlbpage_init(void) * if we have pdshift and shift value same, we don't * use pgt cache for hugepd. */ - if (pdshift > shift) - pgtable_cache_add(pdshift - shift, NULL); + if (pdshift > shift && IS_ENABLED(CONFIG_PPC_8xx)) + pgtable_cache_add(PTE_INDEX_SIZE); + else if (pdshift > shift) + pgtable_cache_add(pdshift - shift); #if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_8xx) - else if (!hugepte_cache) { - /* - * Create a kmem cache for hugeptes. The bottom bits in - * the pte have size information encoded in them, so - * align them to allow this - */ - hugepte_cache = kmem_cache_create("hugepte-cache", - sizeof(pte_t), - HUGEPD_SHIFT_MASK + 1, - 0, NULL); - if (hugepte_cache == NULL) - panic("%s: Unable to create kmem cache " - "for hugeptes\n", __func__); - - } + else + pgtable_cache_add(PTE_T_ORDER); #endif } diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c index 2b656e67f2ea..1e6910eb70ed 100644 --- a/arch/powerpc/mm/init-common.c +++ b/arch/powerpc/mm/init-common.c @@ -25,22 +25,40 @@ #include #include -static void pgd_ctor(void *addr) -{ - memset(addr, 0, PGD_TABLE_SIZE); +#define CTOR(shift) static void ctor_##shift(void *addr) \ +{ \ + memset(addr, 0, sizeof(void *) << (shift)); \ } -static void pud_ctor(void *addr) -{ - memset(addr, 0, PUD_TABLE_SIZE); -} +CTOR(0); CTOR(1); CTOR(2); CTOR(3); CTOR(4); CTOR(5); CTOR(6); CTOR(7); +CTOR(8); CTOR(9); CTOR(10); CTOR(11); CTOR(12); CTOR(13); CTOR(14); CTOR(15); -static void pmd_ctor(void *addr) +static inline void (*ctor(int shift))(void *) { - memset(addr, 0, PMD_TABLE_SIZE); + BUILD_BUG_ON(MAX_PGTABLE_INDEX_SIZE != 15); + + switch (shift) { + case 0: return ctor_0; + case 1: return ctor_1; + case 2: return ctor_2; + case 3: return ctor_3; + case 4: return ctor_4; + case 5: return ctor_5; + case 6: return ctor_6; + case 7: return ctor_7; + case 8: return ctor_8; + case 9: return ctor_9; + case 10: return ctor_10; + case 11: return ctor_11; + case 12: return ctor_12; + case 13: return ctor_13; + case 14: return ctor_14; + case 15: return ctor_15; + } + return NULL; } -struct kmem_cache *pgtable_cache[MAX_PGTABLE_INDEX_SIZE]; +struct kmem_cache *pgtable_cache[MAX_PGTABLE_INDEX_SIZE + 1]; EXPORT_SYMBOL_GPL(pgtable_cache); /* used by kvm_hv module */ /* @@ -50,7 +68,7 @@ EXPORT_SYMBOL_GPL(pgtable_cache); /* used by kvm_hv module */ * everything else. Caches created by this function are used for all * the higher level pagetables, and for hugepage pagetables. */ -void pgtable_cache_add(unsigned shift, void (*ctor)(void *)) +void pgtable_cache_add(unsigned int shift) { char *name; unsigned long table_size = sizeof(void *) << shift; @@ -71,19 +89,19 @@ void pgtable_cache_add(unsigned shift, void (*ctor)(void *)) * moment, gcc doesn't seem to recognize is_power_of_2 as a * constant expression, so so much for that. 
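 *
 * For scale: table_size is sizeof(void *) << shift, so on a 64-bit
 * kernel shift 9 means a 4 KiB table and shift 12 a 32 KiB one.
 * shift 0 - a table of exactly one pointer, which PTE_T_ORDER
 * produces when pte_t and void * have the same size - is now
 * accepted too, hence the relaxed BUG_ON just below and
 * pgtable_cache[] being indexed by shift instead of shift - 1.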
*/ BUG_ON(!is_power_of_2(minalign)); - BUG_ON((shift < 1) || (shift > MAX_PGTABLE_INDEX_SIZE)); + BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE); if (PGT_CACHE(shift)) return; /* Already have a cache of this size */ align = max_t(unsigned long, align, minalign); name = kasprintf(GFP_KERNEL, "pgtable-2^%d", shift); - new = kmem_cache_create(name, table_size, align, 0, ctor); + new = kmem_cache_create(name, table_size, align, 0, ctor(shift)); if (!new) panic("Could not allocate pgtable cache for order %d", shift); kfree(name); - pgtable_cache[shift - 1] = new; + pgtable_cache[shift] = new; pr_debug("Allocated pgtable cache for order %d\n", shift); } @@ -91,15 +109,15 @@ EXPORT_SYMBOL_GPL(pgtable_cache_add); /* used by kvm_hv module */ void pgtable_cache_init(void) { - pgtable_cache_add(PGD_INDEX_SIZE, pgd_ctor); + pgtable_cache_add(PGD_INDEX_SIZE); - if (PMD_CACHE_INDEX && !PGT_CACHE(PMD_CACHE_INDEX)) - pgtable_cache_add(PMD_CACHE_INDEX, pmd_ctor); + if (PMD_CACHE_INDEX) + pgtable_cache_add(PMD_CACHE_INDEX); /* * In all current configs, when the PUD index exists it's the * same size as either the pgd or pmd index except with THP enabled * on book3s 64 */ - if (PUD_CACHE_INDEX && !PGT_CACHE(PUD_CACHE_INDEX)) - pgtable_cache_add(PUD_CACHE_INDEX, pud_ctor); + if (PUD_CACHE_INDEX) + pgtable_cache_add(PUD_CACHE_INDEX); } diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 0a64fffabee1..33cc6f676fa6 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -139,7 +139,8 @@ int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap * } #ifdef CONFIG_MEMORY_HOTREMOVE -int __meminit arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) +int __meminit arch_remove_memory(int nid, u64 start, u64 size, + struct vmem_altmap *altmap) { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; @@ -246,35 +247,19 @@ static int __init mark_nonram_nosave(void) } #endif -static bool zone_limits_final; - /* - * The memory zones past TOP_ZONE are managed by generic mm code. - * These should be set to zero since that's what every other - * architecture does. + * Zones usage: + * + * We setup ZONE_DMA to be 31-bits on all platforms and ZONE_NORMAL to be + * everything else. GFP_DMA32 page allocations automatically fall back to + * ZONE_DMA. + * + * By using 31-bit unconditionally, we can exploit ARCH_ZONE_DMA_BITS to + * inform the generic DMA mapping code. 32-bit only devices (if not handled + * by an IOMMU anyway) will take a first dip into ZONE_NORMAL and get + * otherwise served by ZONE_DMA. */ -static unsigned long max_zone_pfns[MAX_NR_ZONES] = { - [0 ... TOP_ZONE ] = ~0UL, - [TOP_ZONE + 1 ... MAX_NR_ZONES - 1] = 0 -}; - -/* - * Restrict the specified zone and all more restrictive zones - * to be below the specified pfn. May not be called after - * paging_init(). 
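In place of this limit_zone_pfn() machinery (deleted just below), the
reworked paging_init() pins the zone boundaries directly, and the only
computed limit left is the 31-bit ZONE_DMA cap described in the new
"Zones usage" comment. A stand-alone sketch of what the cap evaluates
to, assuming 4 KiB pages:

	#include <stdio.h>

	#define PAGE_SHIFT 12	/* assuming 4 KiB pages */

	int main(void)
	{
		/* the reworked paging_init() does:
		 * max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
		 *                               0x7fffffffUL >> PAGE_SHIFT);
		 */
		unsigned long dma_top = 0x7fffffffUL >> PAGE_SHIFT;

		printf("ZONE_DMA ends at pfn 0x%lx (%lu MiB)\n", dma_top,
		       (dma_top << PAGE_SHIFT) >> 20);	/* the low ~2 GiB */
		return 0;
	}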
- */ -void __init limit_zone_pfn(enum zone_type zone, unsigned long pfn_limit) -{ - int i; - - if (WARN_ON(zone_limits_final)) - return; - - for (i = zone; i >= 0; i--) { - if (max_zone_pfns[i] > pfn_limit) - max_zone_pfns[i] = pfn_limit; - } -} +static unsigned long max_zone_pfns[MAX_NR_ZONES]; /* * Find the least restrictive zone that is entirely below the @@ -324,11 +309,14 @@ void __init paging_init(void) printk(KERN_DEBUG "Memory hole size: %ldMB\n", (long int)((top_of_ram - total_ram) >> 20)); +#ifdef CONFIG_ZONE_DMA + max_zone_pfns[ZONE_DMA] = min(max_low_pfn, 0x7fffffffUL >> PAGE_SHIFT); +#endif + max_zone_pfns[ZONE_NORMAL] = max_low_pfn; #ifdef CONFIG_HIGHMEM - limit_zone_pfn(ZONE_NORMAL, lowmem_end_addr >> PAGE_SHIFT); + max_zone_pfns[ZONE_HIGHMEM] = max_pfn; #endif - limit_zone_pfn(TOP_ZONE, top_of_ram >> PAGE_SHIFT); - zone_limits_final = true; + free_area_init_nodes(max_zone_pfns); mark_nonram_nosave(); @@ -503,7 +491,7 @@ EXPORT_SYMBOL(flush_icache_user_range); void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) { -#ifdef CONFIG_PPC_STD_MMU +#ifdef CONFIG_PPC_BOOK3S /* * We don't need to worry about _PAGE_PRESENT here because we are * called with either mm->page_table_lock held or ptl lock held @@ -541,7 +529,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, } hash_preload(vma->vm_mm, address, is_exec, trap); -#endif /* CONFIG_PPC_STD_MMU */ +#endif /* CONFIG_PPC_BOOK3S */ #if (defined(CONFIG_PPC_BOOK3E_64) || defined(CONFIG_PPC_FSL_BOOK3E)) \ && defined(CONFIG_HUGETLB_PAGE) if (is_vm_hugetlb_page(vma)) diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c index f84e14f23e50..bb52320b7369 100644 --- a/arch/powerpc/mm/mmu_context.c +++ b/arch/powerpc/mm/mmu_context.c @@ -15,6 +15,7 @@ #include #include +#include #if defined(CONFIG_PPC32) static inline void switch_mm_pgdir(struct task_struct *tsk, @@ -97,3 +98,12 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, switch_mmu_context(prev, next, tsk); } +#ifdef CONFIG_PPC32 +void arch_exit_mmap(struct mm_struct *mm) +{ + void *frag = pte_frag_get(&mm->context); + + if (frag) + pte_frag_destroy(frag); +} +#endif diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c index 510f103d7813..f720c5cc0b5e 100644 --- a/arch/powerpc/mm/mmu_context_book3s64.c +++ b/arch/powerpc/mm/mmu_context_book3s64.c @@ -164,21 +164,6 @@ static void destroy_contexts(mm_context_t *ctx) } } -static void pte_frag_destroy(void *pte_frag) -{ - int count; - struct page *page; - - page = virt_to_page(pte_frag); - /* drop all the pending references */ - count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT; - /* We allow PTE_FRAG_NR fragments from a PTE page */ - if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) { - pgtable_page_dtor(page); - __free_page(page); - } -} - static void pmd_frag_destroy(void *pmd_frag) { int count; diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c index 56c2234cc6ae..a712a650a8b6 100644 --- a/arch/powerpc/mm/mmu_context_iommu.c +++ b/arch/powerpc/mm/mmu_context_iommu.c @@ -36,6 +36,8 @@ struct mm_iommu_table_group_mem_t { u64 ua; /* userspace address */ u64 entries; /* number of entries in hpas[] */ u64 *hpas; /* vmalloc'ed */ +#define MM_IOMMU_TABLE_INVALID_HPA ((uint64_t)-1) + u64 dev_hpa; /* Device memory base address */ }; static long mm_iommu_adjust_locked_vm(struct mm_struct *mm, @@ -126,7 +128,8 @@ static int 
mm_iommu_move_page_from_cma(struct page *page) return 0; } -long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries, +static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua, + unsigned long entries, unsigned long dev_hpa, struct mm_iommu_table_group_mem_t **pmem) { struct mm_iommu_table_group_mem_t *mem; @@ -140,12 +143,6 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries, list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) { - if ((mem->ua == ua) && (mem->entries == entries)) { - ++mem->used; - *pmem = mem; - goto unlock_exit; - } - /* Overlap? */ if ((mem->ua < (ua + (entries << PAGE_SHIFT))) && (ua < (mem->ua + @@ -156,11 +153,13 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries, } - ret = mm_iommu_adjust_locked_vm(mm, entries, true); - if (ret) - goto unlock_exit; + if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) { + ret = mm_iommu_adjust_locked_vm(mm, entries, true); + if (ret) + goto unlock_exit; - locked_entries = entries; + locked_entries = entries; + } mem = kzalloc(sizeof(*mem), GFP_KERNEL); if (!mem) { @@ -168,6 +167,13 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries, goto unlock_exit; } + if (dev_hpa != MM_IOMMU_TABLE_INVALID_HPA) { + mem->pageshift = __ffs(dev_hpa | (entries << PAGE_SHIFT)); + mem->dev_hpa = dev_hpa; + goto good_exit; + } + mem->dev_hpa = MM_IOMMU_TABLE_INVALID_HPA; + /* * For a starting point for a maximum page size calculation * we use @ua and @entries natural alignment to allow IOMMU pages @@ -236,6 +242,7 @@ populate: mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT; } +good_exit: atomic64_set(&mem->mapped, 1); mem->used = 1; mem->ua = ua; @@ -252,13 +259,31 @@ unlock_exit: return ret; } -EXPORT_SYMBOL_GPL(mm_iommu_get); + +long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries, + struct mm_iommu_table_group_mem_t **pmem) +{ + return mm_iommu_do_alloc(mm, ua, entries, MM_IOMMU_TABLE_INVALID_HPA, + pmem); +} +EXPORT_SYMBOL_GPL(mm_iommu_new); + +long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua, + unsigned long entries, unsigned long dev_hpa, + struct mm_iommu_table_group_mem_t **pmem) +{ + return mm_iommu_do_alloc(mm, ua, entries, dev_hpa, pmem); +} +EXPORT_SYMBOL_GPL(mm_iommu_newdev); static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem) { long i; struct page *page = NULL; + if (!mem->hpas) + return; + for (i = 0; i < mem->entries; ++i) { if (!mem->hpas[i]) continue; @@ -300,6 +325,7 @@ static void mm_iommu_release(struct mm_iommu_table_group_mem_t *mem) long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem) { long ret = 0; + unsigned long entries, dev_hpa; mutex_lock(&mem_list_mutex); @@ -321,9 +347,12 @@ long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem) } /* @mapped became 0 so now mappings are disabled, release the region */ + entries = mem->entries; + dev_hpa = mem->dev_hpa; mm_iommu_release(mem); - mm_iommu_adjust_locked_vm(mm, mem->entries, false); + if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) + mm_iommu_adjust_locked_vm(mm, entries, false); unlock_exit: mutex_unlock(&mem_list_mutex); @@ -368,27 +397,32 @@ struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(struct mm_struct *mm, return ret; } -struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm, +struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries) { struct 
mm_iommu_table_group_mem_t *mem, *ret = NULL; + mutex_lock(&mem_list_mutex); + list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) { if ((mem->ua == ua) && (mem->entries == entries)) { ret = mem; + ++mem->used; break; } } + mutex_unlock(&mem_list_mutex); + return ret; } -EXPORT_SYMBOL_GPL(mm_iommu_find); +EXPORT_SYMBOL_GPL(mm_iommu_get); long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem, unsigned long ua, unsigned int pageshift, unsigned long *hpa) { const long entry = (ua - mem->ua) >> PAGE_SHIFT; - u64 *va = &mem->hpas[entry]; + u64 *va; if (entry >= mem->entries) return -EFAULT; @@ -396,6 +430,12 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem, if (pageshift > mem->pageshift) return -EFAULT; + if (!mem->hpas) { + *hpa = mem->dev_hpa + (ua - mem->ua); + return 0; + } + + va = &mem->hpas[entry]; *hpa = (*va & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK); return 0; @@ -406,7 +446,6 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, unsigned long ua, unsigned int pageshift, unsigned long *hpa) { const long entry = (ua - mem->ua) >> PAGE_SHIFT; - void *va = &mem->hpas[entry]; unsigned long *pa; if (entry >= mem->entries) @@ -415,7 +454,12 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, if (pageshift > mem->pageshift) return -EFAULT; - pa = (void *) vmalloc_to_phys(va); + if (!mem->hpas) { + *hpa = mem->dev_hpa + (ua - mem->ua); + return 0; + } + + pa = (void *) vmalloc_to_phys(&mem->hpas[entry]); if (!pa) return -EFAULT; @@ -435,6 +479,9 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua) if (!mem) return; + if (mem->dev_hpa != MM_IOMMU_TABLE_INVALID_HPA) + return; + entry = (ua - mem->ua) >> PAGE_SHIFT; va = &mem->hpas[entry]; @@ -445,6 +492,33 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua) *pa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY; } +bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa, + unsigned int pageshift, unsigned long *size) +{ + struct mm_iommu_table_group_mem_t *mem; + unsigned long end; + + list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) { + if (mem->dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) + continue; + + end = mem->dev_hpa + (mem->entries << PAGE_SHIFT); + if ((mem->dev_hpa <= hpa) && (hpa < end)) { + /* + * Since the IOMMU page size might be bigger than + * PAGE_SIZE, the amount of preregistered memory + * starting from @hpa might be smaller than 1<mapped)) diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c index 2faca46ad720..22d71a58167f 100644 --- a/arch/powerpc/mm/mmu_context_nohash.c +++ b/arch/powerpc/mm/mmu_context_nohash.c @@ -372,7 +372,6 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm) { pr_hard("initing context for mm @%p\n", mm); -#ifdef CONFIG_PPC_MM_SLICES /* * We have MMU_NO_CONTEXT set to be ~0. Hence check * explicitly against context.id == 0. 
This ensures that we properly @@ -382,9 +381,9 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm) */ if (mm->context.id == 0) slice_init_new_context_exec(mm); -#endif mm->context.id = MMU_NO_CONTEXT; mm->context.active = 0; + pte_frag_set(&mm->context, NULL); return 0; } @@ -487,4 +486,3 @@ void __init mmu_context_init(void) next_context = FIRST_CONTEXT; nr_free_contexts = LAST_CONTEXT - FIRST_CONTEXT + 1; } - diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h index 8574fbbc45e0..c4a717da65eb 100644 --- a/arch/powerpc/mm/mmu_decl.h +++ b/arch/powerpc/mm/mmu_decl.h @@ -155,7 +155,7 @@ struct tlbcam { }; #endif -#if defined(CONFIG_6xx) || defined(CONFIG_FSL_BOOKE) || defined(CONFIG_PPC_8xx) +#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_FSL_BOOKE) || defined(CONFIG_PPC_8xx) /* 6xx have BATS */ /* FSL_BOOKE have TLBCAM */ /* 8xx have LTLB */ diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index ce28ae5ca080..87f0dd004295 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -1475,7 +1475,7 @@ static int dt_update_callback(struct notifier_block *nb, switch (action) { case OF_RECONFIG_UPDATE_PROPERTY: - if (!of_prop_cmp(update->dn->type, "cpu") && + if (of_node_is_type(update->dn, "cpu") && !of_prop_cmp(update->prop->name, "ibm,associativity")) { u32 core_id; of_property_read_u32(update->dn, "reg", &core_id); diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c index 9f93c9f985c5..f3c31f5e1026 100644 --- a/arch/powerpc/mm/pgtable-book3s64.c +++ b/arch/powerpc/mm/pgtable-book3s64.c @@ -244,6 +244,9 @@ static pmd_t *get_pmd_from_cache(struct mm_struct *mm) { void *pmd_frag, *ret; + if (PMD_FRAG_NR == 1) + return NULL; + spin_lock(&mm->page_table_lock); ret = mm->context.pmd_frag; if (ret) { @@ -322,91 +325,6 @@ void pmd_fragment_free(unsigned long *pmd) } } -static pte_t *get_pte_from_cache(struct mm_struct *mm) -{ - void *pte_frag, *ret; - - spin_lock(&mm->page_table_lock); - ret = mm->context.pte_frag; - if (ret) { - pte_frag = ret + PTE_FRAG_SIZE; - /* - * If we have taken up all the fragments mark PTE page NULL - */ - if (((unsigned long)pte_frag & ~PAGE_MASK) == 0) - pte_frag = NULL; - mm->context.pte_frag = pte_frag; - } - spin_unlock(&mm->page_table_lock); - return (pte_t *)ret; -} - -static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) -{ - void *ret = NULL; - struct page *page; - - if (!kernel) { - page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT); - if (!page) - return NULL; - if (!pgtable_page_ctor(page)) { - __free_page(page); - return NULL; - } - } else { - page = alloc_page(PGALLOC_GFP); - if (!page) - return NULL; - } - - atomic_set(&page->pt_frag_refcount, 1); - - ret = page_address(page); - /* - * if we support only one fragment just return the - * allocated page. - */ - if (PTE_FRAG_NR == 1) - return ret; - spin_lock(&mm->page_table_lock); - /* - * If we find pgtable_page set, we return - * the allocated page with single fragement - * count. 
- */ - if (likely(!mm->context.pte_frag)) { - atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR); - mm->context.pte_frag = ret + PTE_FRAG_SIZE; - } - spin_unlock(&mm->page_table_lock); - - return (pte_t *)ret; -} - -pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel) -{ - pte_t *pte; - - pte = get_pte_from_cache(mm); - if (pte) - return pte; - - return __alloc_for_ptecache(mm, kernel); -} - -void pte_fragment_free(unsigned long *table, int kernel) -{ - struct page *page = virt_to_page(table); - - BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); - if (atomic_dec_and_test(&page->pt_frag_refcount)) { - if (!kernel) - pgtable_page_dtor(page); - __free_page(page); - } -} - static inline void pgtable_free(void *table, int index) { switch (index) { diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c new file mode 100644 index 000000000000..af23a587f019 --- /dev/null +++ b/arch/powerpc/mm/pgtable-frag.c @@ -0,0 +1,119 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Handling Page Tables through page fragments + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +void pte_frag_destroy(void *pte_frag) +{ + int count; + struct page *page; + + page = virt_to_page(pte_frag); + /* drop all the pending references */ + count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT; + /* We allow PTE_FRAG_NR fragments from a PTE page */ + if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) { + pgtable_page_dtor(page); + __free_page(page); + } +} + +static pte_t *get_pte_from_cache(struct mm_struct *mm) +{ + void *pte_frag, *ret; + + if (PTE_FRAG_NR == 1) + return NULL; + + spin_lock(&mm->page_table_lock); + ret = pte_frag_get(&mm->context); + if (ret) { + pte_frag = ret + PTE_FRAG_SIZE; + /* + * If we have taken up all the fragments mark PTE page NULL + */ + if (((unsigned long)pte_frag & ~PAGE_MASK) == 0) + pte_frag = NULL; + pte_frag_set(&mm->context, pte_frag); + } + spin_unlock(&mm->page_table_lock); + return (pte_t *)ret; +} + +static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) +{ + void *ret = NULL; + struct page *page; + + if (!kernel) { + page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT); + if (!page) + return NULL; + if (!pgtable_page_ctor(page)) { + __free_page(page); + return NULL; + } + } else { + page = alloc_page(PGALLOC_GFP); + if (!page) + return NULL; + } + + atomic_set(&page->pt_frag_refcount, 1); + + ret = page_address(page); + /* + * if we support only one fragment just return the + * allocated page. + */ + if (PTE_FRAG_NR == 1) + return ret; + spin_lock(&mm->page_table_lock); + /* + * If we find pgtable_page set, we return + * the allocated page with single fragement + * count. 
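The refcounting this new file carries over is compact but easy to
misread, so here is a user-space model of the lifecycle (the names
mirror the surrounding code, the atomic is reduced to a plain int,
and PTE_FRAG_NR is an assumed example value):

	#include <assert.h>
	#include <stdio.h>

	#define PTE_FRAG_NR 4		/* fragments carved from one page */

	static int pt_frag_refcount;	/* page->pt_frag_refcount stand-in */

	static void pte_fragment_free_model(void)
	{
		assert(pt_frag_refcount > 0);
		if (--pt_frag_refcount == 0)
			printf("last reference gone, page freed\n");
	}

	int main(void)
	{
		int handed_out = 2;	/* say the mm only used 2 fragments */

		/* whoever publishes the page into the per-mm cache bumps
		 * the count so that every fragment owns one reference */
		pt_frag_refcount = PTE_FRAG_NR;

		/* pte_frag_destroy(): drop the references belonging to
		 * fragments that were never handed out... */
		pt_frag_refcount -= PTE_FRAG_NR - handed_out;

		/* ...so the page is released exactly when the last
		 * fragment actually in use is freed */
		while (handed_out--)
			pte_fragment_free_model();
		return 0;
	}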
+ */ + if (likely(!pte_frag_get(&mm->context))) { + atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR); + pte_frag_set(&mm->context, ret + PTE_FRAG_SIZE); + } + spin_unlock(&mm->page_table_lock); + + return (pte_t *)ret; +} + +pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel) +{ + pte_t *pte; + + pte = get_pte_from_cache(mm); + if (pte) + return pte; + + return __alloc_for_ptecache(mm, kernel); +} + +void pte_fragment_free(unsigned long *table, int kernel) +{ + struct page *page = virt_to_page(table); + + BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); + if (atomic_dec_and_test(&page->pt_frag_refcount)) { + if (!kernel) + pgtable_page_dtor(page); + __free_page(page); + } +} diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index 010e1c616cb2..d3d61d29b4f1 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -74,7 +74,7 @@ static struct page *maybe_pte_to_page(pte_t pte) * support falls into the same category. */ -static pte_t set_pte_filter(pte_t pte) +static pte_t set_pte_filter_hash(pte_t pte) { if (radix_enabled()) return pte; @@ -93,14 +93,12 @@ static pte_t set_pte_filter(pte_t pte) return pte; } -static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, - int dirty) -{ - return pte; -} - #else /* CONFIG_PPC_BOOK3S */ +static pte_t set_pte_filter_hash(pte_t pte) { return pte; } + +#endif /* CONFIG_PPC_BOOK3S */ + /* Embedded type MMU with HW exec support. This is a bit more complicated * as we don't have two bits to spare for _PAGE_EXEC and _PAGE_HWEXEC so * instead we "filter out" the exec permission for non clean pages. @@ -109,6 +107,9 @@ static pte_t set_pte_filter(pte_t pte) { struct page *pg; + if (mmu_has_feature(MMU_FTR_HPTE_TABLE)) + return set_pte_filter_hash(pte); + /* No exec permission in the first place, move on */ if (!pte_exec(pte) || !pte_looks_normal(pte)) return pte; @@ -138,6 +139,9 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, { struct page *pg; + if (mmu_has_feature(MMU_FTR_HPTE_TABLE)) + return pte; + /* So here, we only care about exec faults, as we use them * to recover lost _PAGE_EXEC and perform I$/D$ coherency * if necessary. Also if _PAGE_EXEC is already set, same deal, @@ -172,8 +176,6 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, return pte_mkexec(pte); } -#endif /* CONFIG_PPC_BOOK3S */ - /* * set_pte stores a linux PTE into the linux page table. 
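One consequence of the set_pte_filter() rework above is worth spelling
out: the embedded flavour of the filters is now always built, and on
book3s the choice between it and the hash flavour moves from #ifdef
CONFIG_PPC_BOOK3S to a run-time mmu_has_feature(MMU_FTR_HPTE_TABLE)
check. That appears to be aimed at book3s/32 parts such as the 603
that have no hash table: they can now take the lazy exec-filtering
path that embedded MMUs use instead of the hash one.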
*/ @@ -221,9 +223,9 @@ int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, } #ifdef CONFIG_HUGETLB_PAGE -extern int huge_ptep_set_access_flags(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep, - pte_t pte, int dirty) +int huge_ptep_set_access_flags(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, + pte_t pte, int dirty) { #ifdef HUGETLB_NEED_PRELOAD /* diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c index bda3c6f1bd32..d67215248d82 100644 --- a/arch/powerpc/mm/pgtable_32.c +++ b/arch/powerpc/mm/pgtable_32.c @@ -45,32 +45,15 @@ extern char etext[], _stext[], _sinittext[], _einittext[]; __ref pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address) { - pte_t *pte; + if (!slab_is_available()) + return memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE); - if (slab_is_available()) { - pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO); - } else { - pte = __va(memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE)); - if (pte) - clear_page(pte); - } - return pte; + return (pte_t *)pte_fragment_alloc(mm, address, 1); } pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address) { - struct page *ptepage; - - gfp_t flags = GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT; - - ptepage = alloc_pages(flags, 0); - if (!ptepage) - return NULL; - if (!pgtable_page_ctor(ptepage)) { - __free_page(ptepage); - return NULL; - } - return ptepage; + return (pgtable_t)pte_fragment_alloc(mm, address, 0); } void __iomem * @@ -160,7 +143,7 @@ __ioremap_caller(phys_addr_t addr, unsigned long size, pgprot_t prot, void *call * Don't allow anybody to remap normal RAM that we're using. * mem_init() sets high_memory so only do the check after that. */ - if (slab_is_available() && (p < virt_to_phys(high_memory)) && + if (slab_is_available() && p <= virt_to_phys(high_memory - 1) && page_is_ram(__phys_to_pfn(p))) { printk("__ioremap(): phys addr 0x%llx is RAM lr %ps\n", (unsigned long long)p, __builtin_return_address(0)); @@ -260,7 +243,7 @@ static void __init __mapin_ram_chunk(unsigned long offset, unsigned long top) ktext = ((char *)v >= _stext && (char *)v < etext) || ((char *)v >= _sinittext && (char *)v < _einittext); map_kernel_page(v, p, ktext ? 
PAGE_KERNEL_TEXT : PAGE_KERNEL); -#ifdef CONFIG_PPC_STD_MMU_32 +#ifdef CONFIG_PPC_BOOK3S_32 if (ktext) hash_preload(&init_mm, v, false, 0x300); #endif diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index b271b283c785..587807763737 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -6,20 +6,21 @@ */ #include +#include #include #include #include DEFINE_STATIC_KEY_TRUE(pkey_disabled); -bool pkey_execute_disable_supported; int pkeys_total; /* Total pkeys as per device tree */ -bool pkeys_devtree_defined; /* pkey property exported by device tree */ u32 initial_allocation_mask; /* Bits set for the initially allocated keys */ u32 reserved_allocation_mask; /* Bits set for reserved keys */ -u64 pkey_amr_mask; /* Bits in AMR not to be touched */ -u64 pkey_iamr_mask; /* Bits in AMR not to be touched */ -u64 pkey_uamor_mask; /* Bits in UMOR not to be touched */ -int execute_only_key = 2; +static bool pkey_execute_disable_supported; +static bool pkeys_devtree_defined; /* property exported by device tree */ +static u64 pkey_amr_mask; /* Bits in AMR not to be touched */ +static u64 pkey_iamr_mask; /* Bits in AMR not to be touched */ +static u64 pkey_uamor_mask; /* Bits in UMOR not to be touched */ +static int execute_only_key = 2; #define AMR_BITS_PER_PKEY 2 #define AMR_RD_BIT 0x1UL @@ -57,7 +58,7 @@ static inline bool pkey_mmu_enabled(void) return cpu_has_feature(CPU_FTR_PKEY); } -int pkey_initialize(void) +static int pkey_initialize(void) { int os_reserved, i; @@ -414,3 +415,13 @@ bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write, return pkey_access_permitted(vma_pkey(vma), write, execute); } + +void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm) +{ + if (static_branch_likely(&pkey_disabled)) + return; + + /* Duplicate the oldmm pkey state in mm: */ + mm_pkey_allocation_map(mm) = mm_pkey_allocation_map(oldmm); + mm->context.execute_only_pkey = oldmm->context.execute_only_pkey; +} diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c index f6f575bae3bc..3f4193201ee7 100644 --- a/arch/powerpc/mm/ppc_mmu_32.c +++ b/arch/powerpc/mm/ppc_mmu_32.c @@ -31,6 +31,7 @@ #include #include #include +#include #include "mmu_decl.h" @@ -52,7 +53,7 @@ struct batrange { /* stores address ranges mapped by BATs */ phys_addr_t v_block_mapped(unsigned long va) { int b; - for (b = 0; b < 4; ++b) + for (b = 0; b < ARRAY_SIZE(bat_addrs); ++b) if (va >= bat_addrs[b].start && va < bat_addrs[b].limit) return bat_addrs[b].phys + (va - bat_addrs[b].start); return 0; @@ -64,7 +65,7 @@ phys_addr_t v_block_mapped(unsigned long va) unsigned long p_block_mapped(phys_addr_t pa) { int b; - for (b = 0; b < 4; ++b) + for (b = 0; b < ARRAY_SIZE(bat_addrs); ++b) if (pa >= bat_addrs[b].phys && pa < (bat_addrs[b].limit-bat_addrs[b].start) +bat_addrs[b].phys) @@ -182,22 +183,8 @@ void __init MMU_init_hw(void) unsigned int hmask, mb, mb2; unsigned int n_hpteg, lg_n_hpteg; - extern unsigned int hash_page_patch_A[]; - extern unsigned int hash_page_patch_B[], hash_page_patch_C[]; - extern unsigned int hash_page[]; - extern unsigned int flush_hash_patch_A[], flush_hash_patch_B[]; - - if (!mmu_has_feature(MMU_FTR_HPTE_TABLE)) { - /* - * Put a blr (procedure return) instruction at the - * start of hash_page, since we can still get DSI - * exceptions on a 603. 
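The rewrite in this function swaps the old writable-text approach -
direct stores into hash_page_patch_A[] and friends followed by
flush_icache_range() - for modify_instruction_site() calls against the
patch_site labels added to hash_low_32.S above. At heart the helper is
a masked field rewrite plus icache maintenance; a stand-alone sketch
of the transform applied at patch__hash_page_A0, assuming the hash
table sits at 0x00c00000:

	#include <stdio.h>
	#include <stdint.h>

	/* what modify_instruction_site(site, mask, val) does to the
	 * instruction word (the real helper also flushes the icache) */
	static uint32_t patch_field(uint32_t insn, uint32_t mask,
				    uint32_t val)
	{
		return (insn & ~mask) | (val & mask);
	}

	int main(void)
	{
		uint32_t insn = 0x3c070000;	/* addis r0,r7,0 - the A0 site */

		/* insert Hash >> 16 into the 16-bit immediate field */
		printf("%08x -> %08x\n", insn,
		       patch_field(insn, 0xffff, 0x00c00000 >> 16));
		return 0;
	}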
- */ - hash_page[0] = 0x4e800020; - flush_icache_range((unsigned long) &hash_page[0], - (unsigned long) &hash_page[1]); + if (!mmu_has_feature(MMU_FTR_HPTE_TABLE)) return; - } if ( ppc_md.progress ) ppc_md.progress("hash:enter", 0x105); @@ -244,31 +231,19 @@ void __init MMU_init_hw(void) if (lg_n_hpteg > 16) mb2 = 16 - LG_HPTEG_SIZE; - hash_page_patch_A[0] = (hash_page_patch_A[0] & ~0xffff) - | ((unsigned int)(Hash) >> 16); - hash_page_patch_A[1] = (hash_page_patch_A[1] & ~0x7c0) | (mb << 6); - hash_page_patch_A[2] = (hash_page_patch_A[2] & ~0x7c0) | (mb2 << 6); - hash_page_patch_B[0] = (hash_page_patch_B[0] & ~0xffff) | hmask; - hash_page_patch_C[0] = (hash_page_patch_C[0] & ~0xffff) | hmask; - - /* - * Ensure that the locations we've patched have been written - * out from the data cache and invalidated in the instruction - * cache, on those machines with split caches. - */ - flush_icache_range((unsigned long) &hash_page_patch_A[0], - (unsigned long) &hash_page_patch_C[1]); + modify_instruction_site(&patch__hash_page_A0, 0xffff, (unsigned int)Hash >> 16); + modify_instruction_site(&patch__hash_page_A1, 0x7c0, mb << 6); + modify_instruction_site(&patch__hash_page_A2, 0x7c0, mb2 << 6); + modify_instruction_site(&patch__hash_page_B, 0xffff, hmask); + modify_instruction_site(&patch__hash_page_C, 0xffff, hmask); /* * Patch up the instructions in hashtable.S:flush_hash_page */ - flush_hash_patch_A[0] = (flush_hash_patch_A[0] & ~0xffff) - | ((unsigned int)(Hash) >> 16); - flush_hash_patch_A[1] = (flush_hash_patch_A[1] & ~0x7c0) | (mb << 6); - flush_hash_patch_A[2] = (flush_hash_patch_A[2] & ~0x7c0) | (mb2 << 6); - flush_hash_patch_B[0] = (flush_hash_patch_B[0] & ~0xffff) | hmask; - flush_icache_range((unsigned long) &flush_hash_patch_A[0], - (unsigned long) &flush_hash_patch_B[1]); + modify_instruction_site(&patch__flush_hash_A0, 0xffff, (unsigned int)Hash >> 16); + modify_instruction_site(&patch__flush_hash_A1, 0x7c0, mb << 6); + modify_instruction_site(&patch__flush_hash_A2, 0x7c0, mb2 << 6); + modify_instruction_site(&patch__flush_hash_B, 0xffff, hmask); if ( ppc_md.progress ) ppc_md.progress("hash:done", 0x205); } diff --git a/arch/powerpc/mm/tlb_low_64e.S b/arch/powerpc/mm/tlb_low_64e.S index 7fd20c52a8ec..9ed90064f542 100644 --- a/arch/powerpc/mm/tlb_low_64e.S +++ b/arch/powerpc/mm/tlb_low_64e.S @@ -70,6 +70,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV) std r15,EX_TLB_R15(r12) std r10,EX_TLB_CR(r12) #ifdef CONFIG_PPC_FSL_BOOK3E +START_BTB_FLUSH_SECTION + mfspr r11, SPRN_SRR1 + andi. 
r10,r11,MSR_PR + beq 1f + BTB_FLUSH(r10) +1: +END_BTB_FLUSH_SECTION std r7,EX_TLB_R7(r12) #endif TLB_MISS_PROLOG_STATS diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h index 47fc6660845d..c2d5192ed64f 100644 --- a/arch/powerpc/net/bpf_jit.h +++ b/arch/powerpc/net/bpf_jit.h @@ -152,6 +152,10 @@ ___PPC_RS(a) | ___PPC_RB(s)) #define PPC_SRW(d, a, s) EMIT(PPC_INST_SRW | ___PPC_RA(d) | \ ___PPC_RS(a) | ___PPC_RB(s)) +#define PPC_SRAW(d, a, s) EMIT(PPC_INST_SRAW | ___PPC_RA(d) | \ + ___PPC_RS(a) | ___PPC_RB(s)) +#define PPC_SRAWI(d, a, i) EMIT(PPC_INST_SRAWI | ___PPC_RA(d) | \ + ___PPC_RS(a) | __PPC_SH(i)) #define PPC_SRD(d, a, s) EMIT(PPC_INST_SRD | ___PPC_RA(d) | \ ___PPC_RS(a) | ___PPC_RB(s)) #define PPC_SRAD(d, a, s) EMIT(PPC_INST_SRAD | ___PPC_RA(d) | \ diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c index d5bfe24bb3b5..91d223cf512b 100644 --- a/arch/powerpc/net/bpf_jit_comp.c +++ b/arch/powerpc/net/bpf_jit_comp.c @@ -379,18 +379,17 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, hash)); break; case BPF_ANC | SKF_AD_VLAN_TAG: - case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); - BUILD_BUG_ON(VLAN_TAG_PRESENT != 0x1000); PPC_LHZ_OFFS(r_A, r_skb, offsetof(struct sk_buff, vlan_tci)); - if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { - PPC_ANDI(r_A, r_A, ~VLAN_TAG_PRESENT); - } else { - PPC_ANDI(r_A, r_A, VLAN_TAG_PRESENT); - PPC_SRWI(r_A, r_A, 12); - } + break; + case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: + PPC_LBZ_OFFS(r_A, r_skb, PKT_VLAN_PRESENT_OFFSET()); + if (PKT_VLAN_PRESENT_BIT) + PPC_SRWI(r_A, r_A, PKT_VLAN_PRESENT_BIT); + if (PKT_VLAN_PRESENT_BIT < 7) + PPC_ANDI(r_A, r_A, 1); break; case BPF_ANC | SKF_AD_QUEUE: BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c index 9393e231cbc2..7ce57657d3b8 100644 --- a/arch/powerpc/net/bpf_jit_comp64.c +++ b/arch/powerpc/net/bpf_jit_comp64.c @@ -529,9 +529,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, if (imm != 0) PPC_SRDI(dst_reg, dst_reg, imm); break; + case BPF_ALU | BPF_ARSH | BPF_X: /* (s32) dst >>= src */ + PPC_SRAW(dst_reg, dst_reg, src_reg); + goto bpf_alu32_trunc; case BPF_ALU64 | BPF_ARSH | BPF_X: /* (s64) dst >>= src */ PPC_SRAD(dst_reg, dst_reg, src_reg); break; + case BPF_ALU | BPF_ARSH | BPF_K: /* (s32) dst >>= imm */ + PPC_SRAWI(dst_reg, dst_reg, imm); + goto bpf_alu32_trunc; case BPF_ALU64 | BPF_ARSH | BPF_K: /* (s64) dst >>= imm */ if (imm != 0) PPC_SRADI(dst_reg, dst_reg, imm); diff --git a/arch/powerpc/oprofile/Makefile b/arch/powerpc/oprofile/Makefile index 8d26d7416481..bb2d94c8cbe6 100644 --- a/arch/powerpc/oprofile/Makefile +++ b/arch/powerpc/oprofile/Makefile @@ -16,4 +16,4 @@ oprofile-$(CONFIG_OPROFILE_CELL) += op_model_cell.o \ cell/spu_task_sync.o oprofile-$(CONFIG_PPC_BOOK3S_64) += op_model_power4.o op_model_pa6t.o oprofile-$(CONFIG_FSL_EMB_PERFMON) += op_model_fsl_emb.o -oprofile-$(CONFIG_6xx) += op_model_7450.o +oprofile-$(CONFIG_PPC_BOOK3S_32) += op_model_7450.o diff --git a/arch/powerpc/oprofile/common.c b/arch/powerpc/oprofile/common.c index bf094c5a4bd9..a11132865504 100644 --- a/arch/powerpc/oprofile/common.c +++ b/arch/powerpc/oprofile/common.c @@ -212,7 +212,7 @@ int __init oprofile_arch_init(struct oprofile_operations *ops) model = &op_model_pa6t; break; #endif -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 case PPC_OPROFILE_G4: model = &op_model_7450; break; diff --git a/arch/powerpc/perf/core-book3s.c 
b/arch/powerpc/perf/core-book3s.c index 81f8a0c838ae..b0723002a396 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -10,6 +10,7 @@ */ #include #include +#include #include #include #include @@ -130,6 +131,14 @@ static inline void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw) {} static void pmao_restore_workaround(bool ebb) { } #endif /* CONFIG_PPC32 */ +bool is_sier_available(void) +{ + if (ppmu->flags & PPMU_HAS_SIER) + return true; + + return false; +} + static bool regs_use_siar(struct pt_regs *regs) { /* @@ -864,6 +873,8 @@ static int power_check_constraints(struct cpu_hw_events *cpuhw, int i, j; unsigned long addf = ppmu->add_fields; unsigned long tadd = ppmu->test_adder; + unsigned long grp_mask = ppmu->group_constraint_mask; + unsigned long grp_val = ppmu->group_constraint_val; if (n_ev > ppmu->n_counter) return -1; @@ -884,15 +895,23 @@ static int power_check_constraints(struct cpu_hw_events *cpuhw, for (i = 0; i < n_ev; ++i) { nv = (value | cpuhw->avalues[i][0]) + (value & cpuhw->avalues[i][0] & addf); - if ((((nv + tadd) ^ value) & mask) != 0 || - (((nv + tadd) ^ cpuhw->avalues[i][0]) & - cpuhw->amasks[i][0]) != 0) + + if (((((nv + tadd) ^ value) & mask) & (~grp_mask)) != 0) + break; + + if (((((nv + tadd) ^ cpuhw->avalues[i][0]) & cpuhw->amasks[i][0]) + & (~grp_mask)) != 0) break; + value = nv; mask |= cpuhw->amasks[i][0]; } - if (i == n_ev) - return 0; /* all OK */ + if (i == n_ev) { + if ((value & mask & grp_mask) != (mask & grp_val)) + return -1; + else + return 0; /* all OK */ + } /* doesn't work, gather alternatives... */ if (!ppmu->get_alternatives) @@ -2148,7 +2167,7 @@ static bool pmc_overflow(unsigned long val) /* * Performance monitor interrupt stuff */ -static void perf_event_interrupt(struct pt_regs *regs) +static void __perf_event_interrupt(struct pt_regs *regs) { int i, j; struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); @@ -2232,6 +2251,14 @@ static void perf_event_interrupt(struct pt_regs *regs) irq_exit(); } +static void perf_event_interrupt(struct pt_regs *regs) +{ + u64 start_clock = sched_clock(); + + __perf_event_interrupt(regs); + perf_sample_event_took(sched_clock() - start_clock); +} + static int power_pmu_prepare_cpu(unsigned int cpu) { struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu); diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c index 6954636b16d1..f292a3f284f1 100644 --- a/arch/powerpc/perf/imc-pmu.c +++ b/arch/powerpc/perf/imc-pmu.c @@ -28,13 +28,13 @@ static DEFINE_MUTEX(nest_init_lock); static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc); static struct imc_pmu **per_nest_pmu_arr; static cpumask_t nest_imc_cpumask; -struct imc_pmu_ref *nest_imc_refc; +static struct imc_pmu_ref *nest_imc_refc; static int nest_pmus; /* Core IMC data structures and variables */ static cpumask_t core_imc_cpumask; -struct imc_pmu_ref *core_imc_refc; +static struct imc_pmu_ref *core_imc_refc; static struct imc_pmu *core_imc_pmu; /* Thread IMC data structures and variables */ @@ -43,7 +43,7 @@ static DEFINE_PER_CPU(u64 *, thread_imc_mem); static struct imc_pmu *thread_imc_pmu; static int thread_imc_mem_size; -struct imc_pmu *imc_event_to_pmu(struct perf_event *event) +static struct imc_pmu *imc_event_to_pmu(struct perf_event *event) { return container_of(event->pmu, struct imc_pmu, pmu); } diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c index 177de814286f..a6c24d866b2f 100644 --- a/arch/powerpc/perf/isa207-common.c +++ 
b/arch/powerpc/perf/isa207-common.c @@ -148,6 +148,14 @@ static bool is_thresh_cmp_valid(u64 event) return true; } +static unsigned int dc_ic_rld_quad_l1_sel(u64 event) +{ + unsigned int cache; + + cache = (event >> EVENT_CACHE_SEL_SHIFT) & MMCR1_DC_IC_QUAL_MASK; + return cache; +} + static inline u64 isa207_find_source(u64 idx, u32 sub_idx) { u64 ret = PERF_MEM_NA; @@ -226,8 +234,13 @@ void isa207_get_mem_weight(u64 *weight) u64 mmcra = mfspr(SPRN_MMCRA); u64 exp = MMCRA_THR_CTR_EXP(mmcra); u64 mantissa = MMCRA_THR_CTR_MANT(mmcra); + u64 sier = mfspr(SPRN_SIER); + u64 val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT; - *weight = mantissa << (2 * exp); + if (val == 0 || val == 7) + *weight = 0; + else + *weight = mantissa << (2 * exp); } int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp) @@ -274,19 +287,27 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp) } if (unit >= 6 && unit <= 9) { - /* - * L2/L3 events contain a cache selector field, which is - * supposed to be programmed into MMCRC. However MMCRC is only - * HV writable, and there is no API for guest kernels to modify - * it. The solution is for the hypervisor to initialise the - * field to zeroes, and for us to only ever allow events that - * have a cache selector of zero. The bank selector (bit 3) is - * irrelevant, as long as the rest of the value is 0. - */ - if (cache & 0x7) + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + mask |= CNST_CACHE_GROUP_MASK; + value |= CNST_CACHE_GROUP_VAL(event & 0xff); + + mask |= CNST_CACHE_PMC4_MASK; + if (pmc == 4) + value |= CNST_CACHE_PMC4_VAL; + } else if (cache & 0x7) { + /* + * L2/L3 events contain a cache selector field, which is + * supposed to be programmed into MMCRC. However MMCRC is only + * HV writable, and there is no API for guest kernels to modify + * it. The solution is for the hypervisor to initialise the + * field to zeroes, and for us to only ever allow events that + * have a cache selector of zero. The bank selector (bit 3) is + * irrelevant, as long as the rest of the value is 0. + */ return -1; + } - } else if (event & EVENT_IS_L1) { + } else if (cpu_has_feature(CPU_FTR_ARCH_300) || (event & EVENT_IS_L1)) { mask |= CNST_L1_QUAL_MASK; value |= CNST_L1_QUAL_VAL(cache); } @@ -389,11 +410,14 @@ int isa207_compute_mmcr(u64 event[], int n_ev, /* In continuous sampling mode, update SDAR on TLB miss */ mmcra_sdar_mode(event[i], &mmcra); - if (event[i] & EVENT_IS_L1) { - cache = event[i] >> EVENT_CACHE_SEL_SHIFT; - mmcr1 |= (cache & 1) << MMCR1_IC_QUAL_SHIFT; - cache >>= 1; - mmcr1 |= (cache & 1) << MMCR1_DC_QUAL_SHIFT; + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + cache = dc_ic_rld_quad_l1_sel(event[i]); + mmcr1 |= (cache) << MMCR1_DC_IC_QUAL_SHIFT; + } else { + if (event[i] & EVENT_IS_L1) { + cache = dc_ic_rld_quad_l1_sel(event[i]); + mmcr1 |= (cache) << MMCR1_DC_IC_QUAL_SHIFT; + } } if (is_event_marked(event[i])) { diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h index 0028f4b9490d..91350f42a662 100644 --- a/arch/powerpc/perf/isa207-common.h +++ b/arch/powerpc/perf/isa207-common.h @@ -134,6 +134,11 @@ #define CNST_SAMPLE_VAL(v) (((v) & EVENT_SAMPLE_MASK) << 16) #define CNST_SAMPLE_MASK CNST_SAMPLE_VAL(EVENT_SAMPLE_MASK) +#define CNST_CACHE_GROUP_VAL(v) (((v) & 0xffull) << 55) +#define CNST_CACHE_GROUP_MASK CNST_CACHE_GROUP_VAL(0xff) +#define CNST_CACHE_PMC4_VAL (1ull << 54) +#define CNST_CACHE_PMC4_MASK CNST_CACHE_PMC4_VAL + /* * For NC we are counting up to 4 events. 
This requires three bits, and we need * the fifth event to overflow and set the 4th bit. To achieve that we bias the @@ -163,8 +168,8 @@ #define MMCR1_COMBINE_SHIFT(pmc) (35 - ((pmc) - 1)) #define MMCR1_PMCSEL_SHIFT(pmc) (24 - (((pmc) - 1)) * 8) #define MMCR1_FAB_SHIFT 36 -#define MMCR1_DC_QUAL_SHIFT 47 -#define MMCR1_IC_QUAL_SHIFT 46 +#define MMCR1_DC_IC_QUAL_MASK 0x3 +#define MMCR1_DC_IC_QUAL_SHIFT 46 /* MMCR1 Combine bits macro for power9 */ #define p9_MMCR1_COMBINE_SHIFT(pmc) (38 - ((pmc - 1) * 2)) diff --git a/arch/powerpc/perf/perf_regs.c b/arch/powerpc/perf/perf_regs.c index 09ceea6175ba..5c36b3a8d47a 100644 --- a/arch/powerpc/perf/perf_regs.c +++ b/arch/powerpc/perf/perf_regs.c @@ -69,6 +69,7 @@ static unsigned int pt_regs_offset[PERF_REG_POWERPC_MAX] = { PT_REGS_OFFSET(PERF_REG_POWERPC_TRAP, trap), PT_REGS_OFFSET(PERF_REG_POWERPC_DAR, dar), PT_REGS_OFFSET(PERF_REG_POWERPC_DSISR, dsisr), + PT_REGS_OFFSET(PERF_REG_POWERPC_SIER, dar), }; u64 perf_reg_value(struct pt_regs *regs, int idx) @@ -76,6 +77,12 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) if (WARN_ON_ONCE(idx >= PERF_REG_POWERPC_MAX)) return 0; + if (idx == PERF_REG_POWERPC_SIER && + (IS_ENABLED(CONFIG_FSL_EMB_PERF_EVENT) || + IS_ENABLED(CONFIG_PPC32) || + !is_sier_available())) + return 0; + return regs_get_register(regs, pt_regs_offset[idx]); } diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c index e012b1030a5b..0ff9c43733e9 100644 --- a/arch/powerpc/perf/power9-pmu.c +++ b/arch/powerpc/perf/power9-pmu.c @@ -63,16 +63,8 @@ * MMCRA[9:11] = thresh_cmp[0:2] * MMCRA[12:18] = thresh_cmp[3:9] * - * if unit == 6 or unit == 7 - * MMCRC[53:55] = cache_sel[1:3] (L2EVENT_SEL) - * else if unit == 8 or unit == 9: - * if cache_sel[0] == 0: # L3 bank - * MMCRC[47:49] = cache_sel[1:3] (L3EVENT_SEL0) - * else if cache_sel[0] == 1: - * MMCRC[50:51] = cache_sel[2:3] (L3EVENT_SEL1) - * else if cache_sel[1]: # L1 event - * MMCR1[16] = cache_sel[2] - * MMCR1[17] = cache_sel[3] + * MMCR1[16] = cache_sel[2] + * MMCR1[17] = cache_sel[3] * * if mark: * MMCRA[63] = 1 (SAMPLE_ENABLE) @@ -179,8 +171,6 @@ CACHE_EVENT_ATTR(L1-icache-prefetches, PM_IC_PREF_WRITE); CACHE_EVENT_ATTR(LLC-load-misses, PM_DATA_FROM_L3MISS); CACHE_EVENT_ATTR(LLC-loads, PM_DATA_FROM_L3); CACHE_EVENT_ATTR(LLC-prefetches, PM_L3_PREF_ALL); -CACHE_EVENT_ATTR(LLC-store-misses, PM_L2_ST_MISS); -CACHE_EVENT_ATTR(LLC-stores, PM_L2_ST); CACHE_EVENT_ATTR(branch-load-misses, PM_BR_MPRED_CMPL); CACHE_EVENT_ATTR(branch-loads, PM_BR_CMPL); CACHE_EVENT_ATTR(dTLB-load-misses, PM_DTLB_MISS); @@ -205,8 +195,6 @@ static struct attribute *power9_events_attr[] = { CACHE_EVENT_PTR(PM_DATA_FROM_L3MISS), CACHE_EVENT_PTR(PM_DATA_FROM_L3), CACHE_EVENT_PTR(PM_L3_PREF_ALL), - CACHE_EVENT_PTR(PM_L2_ST_MISS), - CACHE_EVENT_PTR(PM_L2_ST), CACHE_EVENT_PTR(PM_BR_MPRED_CMPL), CACHE_EVENT_PTR(PM_BR_CMPL), CACHE_EVENT_PTR(PM_DTLB_MISS), @@ -354,8 +342,8 @@ static int power9_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = { [ C(RESULT_MISS) ] = PM_DATA_FROM_L3MISS, }, [ C(OP_WRITE) ] = { - [ C(RESULT_ACCESS) ] = PM_L2_ST, - [ C(RESULT_MISS) ] = PM_L2_ST_MISS, + [ C(RESULT_ACCESS) ] = 0, + [ C(RESULT_MISS) ] = 0, }, [ C(OP_PREFETCH) ] = { [ C(RESULT_ACCESS) ] = PM_L3_PREF_ALL, @@ -427,6 +415,8 @@ static struct power_pmu power9_pmu = { .n_counter = MAX_PMU_COUNTERS, .add_fields = ISA207_ADD_FIELDS, .test_adder = ISA207_TEST_ADDER, + .group_constraint_mask = CNST_CACHE_PMC4_MASK, + .group_constraint_val = CNST_CACHE_PMC4_VAL, .compute_mmcr = isa207_compute_mmcr, .config_bhrb = 
power9_config_bhrb, .bhrb_filter_map = power9_bhrb_filter_map, diff --git a/arch/powerpc/platforms/40x/Kconfig b/arch/powerpc/platforms/40x/Kconfig index 5326ece36120..ad2bb1408b4c 100644 --- a/arch/powerpc/platforms/40x/Kconfig +++ b/arch/powerpc/platforms/40x/Kconfig @@ -11,7 +11,7 @@ config EP405 bool "EP405/EP405PC" depends on 40x select 405GP - select PCI + select FORCE_PCI help This option enables support for the EP405/EP405PC boards. @@ -19,7 +19,7 @@ config HOTFOOT bool "Hotfoot" depends on 40x select PPC40x_SIMPLE - select PCI + select FORCE_PCI help This option enables support for the ESTEEM 195E Hotfoot board. @@ -29,7 +29,7 @@ config KILAUEA select 405EX select PPC40x_SIMPLE select PPC4xx_PCI_EXPRESS - select PCI + select FORCE_PCI select PCI_MSI select PPC4xx_MSI help @@ -39,7 +39,7 @@ config MAKALU bool "Makalu" depends on 40x select 405EX - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS select PPC40x_SIMPLE help @@ -50,7 +50,7 @@ config WALNUT depends on 40x default y select 405GP - select PCI + select FORCE_PCI select OF_RTC help This option enables support for the IBM PPC405GP evaluation board. diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms/44x/Kconfig index 9a85d350b1b6..4a9a72d01c3c 100644 --- a/arch/powerpc/platforms/44x/Kconfig +++ b/arch/powerpc/platforms/44x/Kconfig @@ -12,7 +12,7 @@ config BAMBOO depends on 44x select PPC44x_SIMPLE select 440EP - select PCI + select FORCE_PCI help This option enables support for the IBM PPC440EP evaluation board. @@ -21,7 +21,7 @@ config BLUESTONE depends on 44x select PPC44x_SIMPLE select APM821xx - select PCI + select FORCE_PCI select PCI_MSI select PPC4xx_MSI select PPC4xx_PCI_EXPRESS @@ -34,7 +34,7 @@ config EBONY depends on 44x default y select 440GP - select PCI + select FORCE_PCI select OF_RTC help This option enables support for the IBM PPC440GP evaluation board. @@ -43,7 +43,7 @@ config SAM440EP bool "Sam440ep" depends on 44x select 440EP - select PCI + select FORCE_PCI help This option enables support for the ACube Sam440ep board. @@ -60,7 +60,7 @@ config TAISHAN depends on 44x select PPC44x_SIMPLE select 440GX - select PCI + select FORCE_PCI help This option enables support for the AMCC PPC440GX "Taishan" evaluation board. @@ -70,7 +70,7 @@ config KATMAI depends on 44x select PPC44x_SIMPLE select 440SPe - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS select PCI_MSI select PPC4xx_MSI @@ -82,7 +82,7 @@ config RAINIER depends on 44x select PPC44x_SIMPLE select 440GRX - select PCI + select FORCE_PCI help This option enables support for the AMCC PPC440GRX evaluation board. @@ -103,7 +103,7 @@ config ARCHES depends on 44x select PPC44x_SIMPLE select 460EX # Odd since it uses 460GT but the effects are the same - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS help This option enables support for the AMCC Dual PPC460GT evaluation board. 
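A note on the recurring "select PCI" -> "select FORCE_PCI" change in
these platform Kconfig files: with the 4.21 consolidation of the PCI
menu under drivers/pci, CONFIG_PCI is a user-visible option gated on
HAVE_PCI and is not meant to be selected directly any more. Boards
that cannot work at all without PCI therefore select FORCE_PCI, which
selects PCI behind the scenes, and the old PPC_PCI_CHOICE enabler
becomes HAVE_PCI, as the XILINX_ML510 hunk below shows.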
@@ -112,7 +112,7 @@ config CANYONLANDS bool "Canyonlands" depends on 44x select 460EX - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS select PCI_MSI select PPC4xx_MSI @@ -126,7 +126,7 @@ config GLACIER depends on 44x select PPC44x_SIMPLE select 460EX # Odd since it uses 460GT but the effects are the same - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS select IBM_EMAC_RGMII if IBM_EMAC select IBM_EMAC_ZMII if IBM_EMAC @@ -138,7 +138,7 @@ config REDWOOD depends on 44x select PPC44x_SIMPLE select 460SX - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS select PCI_MSI select PPC4xx_MSI @@ -150,7 +150,7 @@ config EIGER depends on 44x select PPC44x_SIMPLE select 460SX - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS select IBM_EMAC_RGMII if IBM_EMAC help @@ -161,7 +161,7 @@ config YOSEMITE depends on 44x select PPC44x_SIMPLE select 440EP - select PCI + select FORCE_PCI help This option enables support for the AMCC PPC440EP evaluation board. @@ -201,7 +201,7 @@ config AKEBONO select SWIOTLB select 476FPE select PPC4xx_PCI_EXPRESS - select PCI + select FORCE_PCI select PCI_MSI select PPC4xx_HSTA_MSI select I2C @@ -226,7 +226,7 @@ config ICON depends on 44x select PPC44x_SIMPLE select 440SPe - select PCI + select FORCE_PCI select PPC4xx_PCI_EXPRESS help This option enables support for the AMCC PPC440SPe evaluation board. @@ -250,7 +250,7 @@ config XILINX_VIRTEX440_GENERIC_BOARD config XILINX_ML510 bool "Xilinx ML510 extra support" depends on XILINX_VIRTEX440_GENERIC_BOARD - select PPC_PCI_CHOICE + select HAVE_PCI select XILINX_PCI if PCI select PPC_INDIRECT_PCI if PCI select PPC_I8259 if PCI diff --git a/arch/powerpc/platforms/44x/warp.c b/arch/powerpc/platforms/44x/warp.c index a886c2c22097..f467247fd1c4 100644 --- a/arch/powerpc/platforms/44x/warp.c +++ b/arch/powerpc/platforms/44x/warp.c @@ -47,7 +47,7 @@ static int __init warp_probe(void) if (!of_machine_is_compatible("pika,warp")) return 0; - /* For __dma_alloc_coherent */ + /* For __dma_nommu_alloc_coherent */ ISA_DMA_THRESHOLD = ~0L; return 1; @@ -179,9 +179,9 @@ static int pika_setup_leds(void) } for_each_child_of_node(np, child) - if (strcmp(child->name, "green") == 0) + if (of_node_name_eq(child, "green")) green_led = of_get_gpio(child, 0); - else if (strcmp(child->name, "red") == 0) + else if (of_node_name_eq(child, "red")) red_led = of_get_gpio(child, 0); of_node_put(np); diff --git a/arch/powerpc/platforms/4xx/ocm.c b/arch/powerpc/platforms/4xx/ocm.c index f5bbd4563342..f2610a02844a 100644 --- a/arch/powerpc/platforms/4xx/ocm.c +++ b/arch/powerpc/platforms/4xx/ocm.c @@ -223,8 +223,6 @@ static void __init ocm_init_node(int count, struct device_node *node) INIT_LIST_HEAD(&ocm->c.list); ocm->ready = 1; - - return; } static int ocm_debugfs_show(struct seq_file *m, void *v) @@ -242,9 +240,7 @@ static int ocm_debugfs_show(struct seq_file *m, void *v) seq_printf(m, "PhysAddr : 0x%llx\n", ocm->phys); seq_printf(m, "MemTotal : %d Bytes\n", ocm->memtotal); seq_printf(m, "MemTotal(NC) : %d Bytes\n", ocm->nc.memtotal); - seq_printf(m, "MemTotal(C) : %d Bytes\n", ocm->c.memtotal); - - seq_printf(m, "\n"); + seq_printf(m, "MemTotal(C) : %d Bytes\n\n", ocm->c.memtotal); seq_printf(m, "NC.PhysAddr : 0x%llx\n", ocm->nc.phys); seq_printf(m, "NC.VirtAddr : 0x%p\n", ocm->nc.virt); @@ -256,9 +252,7 @@ static int ocm_debugfs_show(struct seq_file *m, void *v) blk->size, blk->owner); } - seq_printf(m, "\n"); - - seq_printf(m, "C.PhysAddr : 0x%llx\n", ocm->c.phys); + seq_printf(m, "\nC.PhysAddr : 0x%llx\n", ocm->c.phys); 
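	/* (Aside on the warp.c hunk above and 4xx/pci.c below: the
	 * strcmp()-on-node-name conversions are part of the tree-wide
	 * move to of_node_name_eq()/of_node_is_type(). The helpers are
	 * NULL-safe, keep working once the name/type pointers leave
	 * struct device_node, and match with or without a unit address,
	 * so "green" also matches a "green@0" node.) */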
seq_printf(m, "C.VirtAddr : 0x%p\n", ocm->c.virt); seq_printf(m, "C.MemTotal : %d Bytes\n", ocm->c.memtotal); seq_printf(m, "C.MemFree : %d Bytes\n", ocm->c.memfree); @@ -268,7 +262,7 @@ static int ocm_debugfs_show(struct seq_file *m, void *v) blk->size, blk->owner); } - seq_printf(m, "\n"); + seq_putc(m, '\n'); } return 0; @@ -338,7 +332,6 @@ void *ppc4xx_ocm_alloc(phys_addr_t *phys, int size, int align, ocm_blk = kzalloc(sizeof(*ocm_blk), GFP_KERNEL); if (!ocm_blk) { - printk(KERN_ERR "PPC4XX OCM: could not allocate ocm block"); rh_free(ocm_reg->rh, offset); break; } @@ -392,10 +385,8 @@ static int __init ppc4xx_ocm_init(void) return 0; ocm_nodes = kzalloc((count * sizeof(struct ocm_info)), GFP_KERNEL); - if (!ocm_nodes) { - printk(KERN_ERR "PPC4XX OCM: failed to allocate OCM nodes!\n"); + if (!ocm_nodes) return -ENOMEM; - } ocm_count = count; count = 0; diff --git a/arch/powerpc/platforms/4xx/pci.c b/arch/powerpc/platforms/4xx/pci.c index 5aca523551ae..e6e2adcc7b64 100644 --- a/arch/powerpc/platforms/4xx/pci.c +++ b/arch/powerpc/platforms/4xx/pci.c @@ -1399,7 +1399,6 @@ static void __init ppc_476fpe_pciex_check_link(struct ppc4xx_pciex_port *port) printk(KERN_WARNING "PCIE%d: Link up failed\n", port->index); iounmap(mbase); - return; } static struct ppc4xx_pciex_hwops ppc_476fpe_pcie_hwops __initdata = @@ -2081,7 +2080,6 @@ static void __init ppc4xx_probe_pciex_bridge(struct device_node *np) const u32 *pval; int portno; unsigned int dcrs; - const char *val; /* First, proceed to core initialization as we assume there's * only one PCIe core in the system @@ -2127,10 +2125,9 @@ static void __init ppc4xx_probe_pciex_bridge(struct device_node *np) * Resulting from this setup this PCIe port will be configured * as root-complex or as endpoint. */ - val = of_get_property(port->node, "device_type", NULL); - if (!strcmp(val, "pci-endpoint")) { + if (of_node_is_type(port->node, "pci-endpoint")) { port->endpoint = 1; - } else if (!strcmp(val, "pci")) { + } else if (of_node_is_type(port->node, "pci")) { port->endpoint = 0; } else { printk(KERN_ERR "PCIE: missing or incorrect device_type for %pOF\n", diff --git a/arch/powerpc/platforms/512x/Kconfig b/arch/powerpc/platforms/512x/Kconfig index b59eab6cbb1b..deecede78776 100644 --- a/arch/powerpc/platforms/512x/Kconfig +++ b/arch/powerpc/platforms/512x/Kconfig @@ -1,11 +1,11 @@ # SPDX-License-Identifier: GPL-2.0 config PPC_MPC512x bool "512x-based boards" - depends on 6xx + depends on PPC_BOOK3S_32 select COMMON_CLK select FSL_SOC select IPIC - select PPC_PCI_CHOICE + select HAVE_PCI select FSL_PCI if PCI select USB_EHCI_BIG_ENDIAN_MMIO if USB_EHCI_HCD select USB_EHCI_BIG_ENDIAN_DESC if USB_EHCI_HCD diff --git a/arch/powerpc/platforms/52xx/Kconfig b/arch/powerpc/platforms/52xx/Kconfig index 55a587070342..99d60acc20c8 100644 --- a/arch/powerpc/platforms/52xx/Kconfig +++ b/arch/powerpc/platforms/52xx/Kconfig @@ -1,9 +1,9 @@ # SPDX-License-Identifier: GPL-2.0 config PPC_MPC52xx bool "52xx-based boards" - depends on 6xx + depends on PPC_BOOK3S_32 select COMMON_CLK - select PPC_PCI_CHOICE + select HAVE_PCI config PPC_MPC5200_SIMPLE bool "Generic support for simple MPC5200 based boards" diff --git a/arch/powerpc/platforms/52xx/efika.c b/arch/powerpc/platforms/52xx/efika.c index 1ecbf176d35a..61538869e88a 100644 --- a/arch/powerpc/platforms/52xx/efika.c +++ b/arch/powerpc/platforms/52xx/efika.c @@ -82,11 +82,9 @@ static void __init efika_pcisetup(void) return; } - for (pcictrl = NULL;;) { - pcictrl = of_get_next_child(root, pcictrl); - if ((pcictrl == NULL) || 
(strcmp(pcictrl->name, "pci") == 0)) + for_each_child_of_node(root, pcictrl) + if (of_node_name_eq(pcictrl, "pci")) break; - } of_node_put(root); diff --git a/arch/powerpc/platforms/82xx/Kconfig b/arch/powerpc/platforms/82xx/Kconfig index 1947a88bc69f..1af81de1c4e6 100644 --- a/arch/powerpc/platforms/82xx/Kconfig +++ b/arch/powerpc/platforms/82xx/Kconfig @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 menuconfig PPC_82xx bool "82xx-based boards (PQ II)" - depends on 6xx + depends on PPC_BOOK3S_32 if PPC_82xx @@ -54,7 +54,7 @@ config PQ2ADS config 8260 bool - depends on 6xx + depends on PPC_BOOK3S_32 select CPM2 help The MPC8260 is a typical embedded CPU made by Freescale. Selecting diff --git a/arch/powerpc/platforms/83xx/Kconfig b/arch/powerpc/platforms/83xx/Kconfig index 071f53b0c0a0..bee119725f61 100644 --- a/arch/powerpc/platforms/83xx/Kconfig +++ b/arch/powerpc/platforms/83xx/Kconfig @@ -1,9 +1,9 @@ # SPDX-License-Identifier: GPL-2.0 menuconfig PPC_83xx bool "83xx-based boards" - depends on 6xx + depends on PPC_BOOK3S_32 select PPC_UDBG_16550 - select PPC_PCI_CHOICE + select HAVE_PCI select FSL_PCI if PCI select FSL_SOC select IPIC diff --git a/arch/powerpc/platforms/83xx/misc.c b/arch/powerpc/platforms/83xx/misc.c index d75c9816a5c9..2b6589fe812d 100644 --- a/arch/powerpc/platforms/83xx/misc.c +++ b/arch/powerpc/platforms/83xx/misc.c @@ -14,6 +14,7 @@ #include #include +#include #include #include #include @@ -150,3 +151,19 @@ void __init mpc83xx_setup_arch(void) mpc83xx_setup_pci(); } + +int machine_check_83xx(struct pt_regs *regs) +{ + u32 mask = 1 << (31 - IPIC_MCP_WDT); + + if (!(regs->msr & SRR1_MCE_MCP) || !(ipic_get_mcp_status() & mask)) + return machine_check_generic(regs); + ipic_clear_mcp_status(mask); + + if (debugger_fault_handler(regs)) + return 1; + + die("Watchdog NMI Reset", regs, 0); + + return 1; +} diff --git a/arch/powerpc/platforms/85xx/Kconfig b/arch/powerpc/platforms/85xx/Kconfig index 68920d42b4bc..d1af0ee2f8c8 100644 --- a/arch/powerpc/platforms/85xx/Kconfig +++ b/arch/powerpc/platforms/85xx/Kconfig @@ -5,7 +5,7 @@ menuconfig FSL_SOC_BOOKE select FSL_SOC select PPC_UDBG_16550 select MPIC - select PPC_PCI_CHOICE + select HAVE_PCI select FSL_PCI if PCI select SERIAL_8250_EXTENDED if SERIAL_8250 select SERIAL_8250_SHARE_IRQ if SERIAL_8250 @@ -66,7 +66,7 @@ config MPC85xx_CDS bool "Freescale MPC85xx CDS" select DEFAULT_UIMAGE select PPC_I8259 - select HAS_RAPIDIO + select HAVE_RAPIDIO help This option enables support for the MPC85xx CDS board @@ -74,7 +74,7 @@ config MPC85xx_MDS bool "Freescale MPC85xx MDS" select DEFAULT_UIMAGE select PHYLIB if NETDEVICES - select HAS_RAPIDIO + select HAVE_RAPIDIO select SWIOTLB help This option enables support for the MPC85xx MDS board @@ -219,7 +219,7 @@ config PPA8548 help This option enables support for the Prodrive PPA8548 board. select DEFAULT_UIMAGE - select HAS_RAPIDIO + select HAVE_RAPIDIO config GE_IMP3A bool "GE Intelligent Platforms IMP3A" @@ -277,7 +277,7 @@ config CORENET_GENERIC select SWIOTLB select GPIOLIB select GPIO_MPC8XXX - select HAS_RAPIDIO + select HAVE_RAPIDIO select PPC_EPAPR_HV_PIC help This option enables support for the FSL CoreNet based boards. 
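The misc.c hunk above adds machine_check_83xx(), which only treats the event as a watchdog NMI when both SRR1_MCE_MCP is set in the saved MSR and the IPIC MCP watchdog status is pending; anything else falls back to machine_check_generic(). Platform machine-check handlers like this are wired through the machine description; a hedged sketch of that hookup (the board name, compatible string, and probe are illustrative; only .machine_check corresponds directly to the hunk):

    #include <linux/of.h>
    #include <asm/machdep.h>

    static int demo_83xx_probe(void)        /* hypothetical board match */
    {
            return of_machine_is_compatible("demo,mpc83xx-board");
    }

    define_machine(demo_83xx) {
            .name           = "Demo MPC83xx board",
            .probe          = demo_83xx_probe,
            .setup_arch     = mpc83xx_setup_arch,
            .machine_check  = machine_check_83xx,
    };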
diff --git a/arch/powerpc/platforms/85xx/corenet_generic.c b/arch/powerpc/platforms/85xx/corenet_generic.c index ac191a7a1337..b0dac307bebf 100644 --- a/arch/powerpc/platforms/85xx/corenet_generic.c +++ b/arch/powerpc/platforms/85xx/corenet_generic.c @@ -68,16 +68,6 @@ void __init corenet_gen_setup_arch(void) swiotlb_detect_4g(); -#if defined(CONFIG_FSL_PCI) && defined(CONFIG_ZONE_DMA32) - /* - * Inbound windows don't cover the full lower 4 GiB - * due to conflicts with PCICSRBAR and outbound windows, - * so limit the DMA32 zone to 2 GiB, to allow consistent - * allocations to succeed. - */ - limit_zone_pfn(ZONE_DMA32, 1UL << (31 - PAGE_SHIFT)); -#endif - pr_info("%s board\n", ppc_md.name); mpc85xx_qe_init(); diff --git a/arch/powerpc/platforms/85xx/qemu_e500.c b/arch/powerpc/platforms/85xx/qemu_e500.c index b63a8548366f..27631c607f3d 100644 --- a/arch/powerpc/platforms/85xx/qemu_e500.c +++ b/arch/powerpc/platforms/85xx/qemu_e500.c @@ -45,15 +45,6 @@ static void __init qemu_e500_setup_arch(void) fsl_pci_assign_primary(); swiotlb_detect_4g(); -#if defined(CONFIG_FSL_PCI) && defined(CONFIG_ZONE_DMA32) - /* - * Inbound windows don't cover the full lower 4 GiB - * due to conflicts with PCICSRBAR and outbound windows, - * so limit the DMA32 zone to 2 GiB, to allow consistent - * allocations to succeed. - */ - limit_zone_pfn(ZONE_DMA32, 1UL << (31 - PAGE_SHIFT)); -#endif mpc85xx_smp_init(); } diff --git a/arch/powerpc/platforms/85xx/t1042rdb_diu.c b/arch/powerpc/platforms/85xx/t1042rdb_diu.c index dac36ba82fea..2d1652108ba1 100644 --- a/arch/powerpc/platforms/85xx/t1042rdb_diu.c +++ b/arch/powerpc/platforms/85xx/t1042rdb_diu.c @@ -39,7 +39,7 @@ struct device_node *cpld_node; */ static void t1042rdb_set_monitor_port(enum fsl_diu_monitor_port port) { - static void __iomem *cpld_base; + void __iomem *cpld_base; cpld_base = of_iomap(cpld_node, 0); if (!cpld_base) { diff --git a/arch/powerpc/platforms/86xx/Kconfig b/arch/powerpc/platforms/86xx/Kconfig index bcd179d3ed92..0a610114bc38 100644 --- a/arch/powerpc/platforms/86xx/Kconfig +++ b/arch/powerpc/platforms/86xx/Kconfig @@ -2,7 +2,7 @@ config PPC_86xx menuconfig PPC_86xx bool "86xx-based boards" - depends on 6xx + depends on PPC_BOOK3S_32 select FSL_SOC select ALTIVEC help @@ -15,7 +15,7 @@ config MPC8641_HPCN select PPC_I8259 select DEFAULT_UIMAGE select FSL_ULI1575 if PCI - select HAS_RAPIDIO + select HAVE_RAPIDIO select SWIOTLB help This option enables support for the MPC8641 HPCN board. @@ -57,7 +57,7 @@ config GEF_SBC610 select MMIO_NVRAM select GPIOLIB select GE_FPGA - select HAS_RAPIDIO + select HAVE_RAPIDIO help This option enables support for the GE SBC610. 
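Dropping "static" from cpld_base in t1042rdb_set_monitor_port() above is a correctness fix, not a cleanup: the function maps the CPLD registers on every invocation, so the pointer must be a per-call local rather than one shared static slot that each call silently overwrites. A sketch of the usual pairing, assuming the mapping is released in the same function:

    #include <linux/of_address.h>
    #include <linux/io.h>

    static void demo_poke_cpld(struct device_node *np)
    {
            void __iomem *base = of_iomap(np, 0);   /* map register bank 0 */

            if (!base)
                    return;
            /* ... program the device through base ... */
            iounmap(base);          /* release the per-call mapping */
    }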
@@ -70,7 +70,7 @@ endif config MPC8641 bool - select PPC_PCI_CHOICE + select HAVE_PCI select FSL_PCI if PCI select PPC_UDBG_16550 select MPIC @@ -79,7 +79,7 @@ config MPC8641 config MPC8610 bool - select PPC_PCI_CHOICE + select HAVE_PCI select FSL_PCI if PCI select PPC_UDBG_16550 select MPIC diff --git a/arch/powerpc/platforms/86xx/mpc86xx_smp.c b/arch/powerpc/platforms/86xx/mpc86xx_smp.c index 020e84a47a32..9f2c1ecc85c3 100644 --- a/arch/powerpc/platforms/86xx/mpc86xx_smp.c +++ b/arch/powerpc/platforms/86xx/mpc86xx_smp.c @@ -86,8 +86,7 @@ smp_86xx_kick_cpu(int nr) mdelay(1); /* Restore the exception vector */ - *vector = save_vector; - flush_icache_range((unsigned long) vector, (unsigned long) vector + 4); + patch_instruction(vector, save_vector); local_irq_restore(flags); diff --git a/arch/powerpc/platforms/Kconfig b/arch/powerpc/platforms/Kconfig index 260a56b7602d..f3fb79fccc72 100644 --- a/arch/powerpc/platforms/Kconfig +++ b/arch/powerpc/platforms/Kconfig @@ -40,7 +40,7 @@ config EPAPR_PARAVIRT config PPC_NATIVE bool - depends on 6xx || PPC64 + depends on PPC_BOOK3S_32 || PPC64 help Support for running natively on the hardware, i.e. without a hypervisor. This option is not user-selectable but should @@ -48,7 +48,7 @@ config PPC_NATIVE config PPC_OF_BOOT_TRAMPOLINE bool "Support booting from Open Firmware or yaboot" - depends on 6xx || PPC64 + depends on PPC_BOOK3S_32 || PPC64 default y help Support from booting from Open Firmware or yaboot using an @@ -197,7 +197,7 @@ endmenu config PPC601_SYNC_FIX bool "Workarounds for PPC601 bugs" - depends on 6xx && PPC_PMAC + depends on PPC_BOOK3S_32 && PPC_PMAC help Some versions of the PPC601 (the first PowerPC chip) have bugs which mean that extra synchronization instructions are required near @@ -211,7 +211,7 @@ config PPC601_SYNC_FIX config TAU bool "On-chip CPU temperature sensor support" - depends on 6xx + depends on PPC_BOOK3S_32 help G3 and G4 processors have an on-chip temperature sensor called the 'Thermal Assist Unit (TAU)', which, in theory, can measure the on-die @@ -265,7 +265,7 @@ config CPM2 bool "Enable support for the CPM2 (Communications Processor Module)" depends on (FSL_SOC_BOOKE && PPC32) || 8260 select CPM - select PPC_PCI_CHOICE + select HAVE_PCI select GPIOLIB help The CPM2 (Communications Processor Module) is a coprocessor on diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype index f4e2c5729374..8c7464c3f27f 100644 --- a/arch/powerpc/platforms/Kconfig.cputype +++ b/arch/powerpc/platforms/Kconfig.cputype @@ -24,6 +24,7 @@ choice config PPC_BOOK3S_32 bool "512x/52xx/6xx/7xx/74xx/82xx/83xx/86xx" select PPC_FPU + select PPC_HAVE_PMU_SUPPORT config PPC_85xx bool "Freescale 85xx" @@ -39,14 +40,14 @@ config 40x select PPC_DCR_NATIVE select PPC_UDBG_16550 select 4xx_SOC - select PPC_PCI_CHOICE + select HAVE_PCI config 44x bool "AMCC 44x, 46x or 47x" select PPC_DCR_NATIVE select PPC_UDBG_16550 select 4xx_SOC - select PPC_PCI_CHOICE + select HAVE_PCI select PHYS_64BIT config E200 @@ -179,11 +180,6 @@ config PPC_BOOK3E def_bool y depends on PPC_BOOK3E_64 -config 6xx - def_bool y - depends on PPC32 && PPC_BOOK3S - select PPC_HAVE_PMU_SUPPORT - config E500 select FSL_EMB_PERFMON select PPC_FSL_BOOK3E @@ -266,7 +262,7 @@ config PHYS_64BIT config ALTIVEC bool "AltiVec Support" - depends on 6xx || PPC_BOOK3S_64 || (PPC_E500MC && PPC64) + depends on PPC_BOOK3S_32 || PPC_BOOK3S_64 || (PPC_E500MC && PPC64) ---help--- This option enables kernel support for the Altivec extensions to the PowerPC 
processor. The kernel currently supports saving and restoring @@ -316,14 +312,6 @@ config SPE If in doubt, say Y here. -config PPC_STD_MMU - def_bool y - depends on PPC_BOOK3S - -config PPC_STD_MMU_32 - def_bool y - depends on PPC_STD_MMU && PPC32 - config ARCH_ENABLE_SPLIT_PMD_PTLOCK def_bool y depends on PPC_BOOK3S_64 @@ -358,7 +346,7 @@ config ARCH_ENABLE_HUGEPAGE_MIGRATION config PPC_MMU_NOHASH def_bool y - depends on !PPC_STD_MMU + depends on !PPC_BOOK3S config PPC_BOOK3E_MMU def_bool y @@ -412,7 +400,8 @@ config NR_CPUS config NOT_COHERENT_CACHE bool - depends on 4xx || PPC_8xx || E200 || PPC_MPC512x || GAMECUBE_COMMON + depends on 4xx || PPC_8xx || E200 || PPC_MPC512x || \ + GAMECUBE_COMMON || AMIGAONE default n if PPC_47x default y diff --git a/arch/powerpc/platforms/amigaone/Kconfig b/arch/powerpc/platforms/amigaone/Kconfig index 03dc1e37c25b..0741edb10b7b 100644 --- a/arch/powerpc/platforms/amigaone/Kconfig +++ b/arch/powerpc/platforms/amigaone/Kconfig @@ -1,11 +1,11 @@ # SPDX-License-Identifier: GPL-2.0 config AMIGAONE bool "Eyetech AmigaOne/MAI Teron" - depends on 6xx && BROKEN_ON_SMP + depends on PPC_BOOK3S_32 && BROKEN_ON_SMP select PPC_I8259 select PPC_INDIRECT_PCI select PPC_UDBG_16550 - select PCI + select FORCE_PCI select NOT_COHERENT_CACHE select CHECK_CACHE_COHERENCY select DEFAULT_UIMAGE diff --git a/arch/powerpc/platforms/cell/Kconfig b/arch/powerpc/platforms/cell/Kconfig index 4b2f114f3116..0f7c8241912b 100644 --- a/arch/powerpc/platforms/cell/Kconfig +++ b/arch/powerpc/platforms/cell/Kconfig @@ -27,7 +27,7 @@ config PPC_IBM_CELL_BLADE depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN select PPC_CELL_NATIVE select PPC_OF_PLATFORM_PCI - select PCI + select FORCE_PCI select MMIO_NVRAM select PPC_UDBG_16550 select UDBG_RTAS_CONSOLE diff --git a/arch/powerpc/platforms/cell/cbe_regs.c b/arch/powerpc/platforms/cell/cbe_regs.c index b926438d73af..27ee65b89099 100644 --- a/arch/powerpc/platforms/cell/cbe_regs.c +++ b/arch/powerpc/platforms/cell/cbe_regs.c @@ -53,7 +53,7 @@ static struct cbe_regs_map *cbe_find_map(struct device_node *np) int i; struct device_node *tmp_np; - if (strcasecmp(np->type, "spe")) { + if (!of_node_is_type(np, "spe")) { for (i = 0; i < cbe_regs_map_count; i++) if (cbe_regs_maps[i].cpu_node == np || cbe_regs_maps[i].be_node == np) @@ -70,8 +70,8 @@ static struct cbe_regs_map *cbe_find_map(struct device_node *np) tmp_np = tmp_np->parent; /* on a correct devicetree we wont get up to root */ BUG_ON(!tmp_np); - } while (strcasecmp(tmp_np->type, "cpu") && - strcasecmp(tmp_np->type, "be")); + } while (!of_node_is_type(tmp_np, "cpu") || + !of_node_is_type(tmp_np, "be")); np->data = cbe_find_map(tmp_np); diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c index 12352a58072a..af2a3c15e0ec 100644 --- a/arch/powerpc/platforms/cell/iommu.c +++ b/arch/powerpc/platforms/cell/iommu.c @@ -654,7 +654,6 @@ static const struct dma_map_ops dma_iommu_fixed_ops = { .dma_supported = dma_suported_and_switch, .map_page = dma_fixed_map_page, .unmap_page = dma_fixed_unmap_page, - .mapping_error = dma_iommu_mapping_error, }; static void cell_dma_dev_setup(struct device *dev) diff --git a/arch/powerpc/platforms/cell/setup.c b/arch/powerpc/platforms/cell/setup.c index 7d31b8d14661..e2e1371a71e2 100644 --- a/arch/powerpc/platforms/cell/setup.c +++ b/arch/powerpc/platforms/cell/setup.c @@ -131,7 +131,7 @@ static int cell_setup_phb(struct pci_controller *phb) np = phb->dn; model = of_get_property(np, "model", NULL); - if (model == NULL || 
strcmp(np->name, "pci")) + if (model == NULL || !of_node_name_eq(np, "pci")) return 0; /* Setup workarounds for spider */ @@ -168,8 +168,7 @@ static int __init cell_publish_devices(void) * platform devices for the PCI host bridges */ for_each_child_of_node(root, np) { - if (np->type == NULL || (strcmp(np->type, "pci") != 0 && - strcmp(np->type, "pciex") != 0)) + if (!of_node_is_type(np, "pci") && !of_node_is_type(np, "pciex")) continue; of_platform_device_create(np, NULL, NULL); } diff --git a/arch/powerpc/platforms/cell/spu_callbacks.c b/arch/powerpc/platforms/cell/spu_callbacks.c index 8ae86200ef6c..125f2a5f02de 100644 --- a/arch/powerpc/platforms/cell/spu_callbacks.c +++ b/arch/powerpc/platforms/cell/spu_callbacks.c @@ -34,20 +34,9 @@ */ static void *spu_syscall_table[] = { -#define SYSCALL(func) sys_ni_syscall, -#define COMPAT_SYS(func) sys_ni_syscall, -#define PPC_SYS(func) sys_ni_syscall, -#define OLDSYS(func) sys_ni_syscall, -#define SYS32ONLY(func) sys_ni_syscall, -#define PPC64ONLY(func) sys_ni_syscall, -#define SYSX(f, f3264, f32) sys_ni_syscall, - -#define SYSCALL_SPU(func) sys_##func, -#define COMPAT_SYS_SPU(func) sys_##func, -#define COMPAT_SPU_NEW(func) sys_##func, -#define SYSX_SPU(f, f3264, f32) f, - -#include +#define __SYSCALL(nr, entry, nargs) entry, +#include +#undef __SYSCALL }; long spu_sys_callback(struct spu_syscall_block *s) diff --git a/arch/powerpc/platforms/cell/spu_manage.c b/arch/powerpc/platforms/cell/spu_manage.c index f7e36373f6e0..bed935c51ec2 100644 --- a/arch/powerpc/platforms/cell/spu_manage.c +++ b/arch/powerpc/platforms/cell/spu_manage.c @@ -458,7 +458,6 @@ static void init_affinity_node(int cbe) struct device_node *vic_dn, *last_spu_dn; phandle avoid_ph; const phandle *vic_handles; - const char *name; int lenp, i, added; last_spu = list_first_entry(&cbe_spu_info[cbe].spus, struct spu, @@ -480,12 +479,7 @@ static void init_affinity_node(int cbe) if (!vic_dn) continue; - /* a neighbour might be spe, mic-tm, or bif0 */ - name = of_get_property(vic_dn, "name", NULL); - if (!name) - continue; - - if (strcmp(name, "spe") == 0) { + if (of_node_name_eq(vic_dn, "spe") ) { spu = devnode_spu(cbe, vic_dn); avoid_ph = last_spu_dn->phandle; } else { @@ -498,7 +492,7 @@ static void init_affinity_node(int cbe) spu = neighbour_spu(cbe, vic_dn, last_spu_dn); if (!spu) continue; - if (!strcmp(name, "mic-tm")) { + if (of_node_name_eq(vic_dn, "mic-tm")) { last_spu->has_mem_affinity = 1; spu->has_mem_affinity = 1; } diff --git a/arch/powerpc/platforms/chrp/Kconfig b/arch/powerpc/platforms/chrp/Kconfig index ead99eff875a..9b5c5505718a 100644 --- a/arch/powerpc/platforms/chrp/Kconfig +++ b/arch/powerpc/platforms/chrp/Kconfig @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 config PPC_CHRP bool "Common Hardware Reference Platform (CHRP) based machines" - depends on 6xx + depends on PPC_BOOK3S_32 select HAVE_PCSPKR_PLATFORM select MPIC select PPC_I8259 @@ -12,5 +12,5 @@ config PPC_CHRP select PPC_MPC106 select PPC_UDBG_16550 select PPC_NATIVE - select PCI + select FORCE_PCI default y diff --git a/arch/powerpc/platforms/chrp/pci.c b/arch/powerpc/platforms/chrp/pci.c index 5ddb57b82921..b020c757d2bf 100644 --- a/arch/powerpc/platforms/chrp/pci.c +++ b/arch/powerpc/platforms/chrp/pci.c @@ -230,8 +230,8 @@ chrp_find_bridges(void) else if (strncmp(machine, "Pegasos", 7) == 0) is_pegasos = 1; } - for (dev = root->child; dev != NULL; dev = dev->sibling) { - if (dev->type == NULL || strcmp(dev->type, "pci") != 0) + for_each_child_of_node(root, dev) { + if (!of_node_is_type(dev, 
"pci")) continue; ++index; /* The GG2 bridge on the LongTrail doesn't have an address */ diff --git a/arch/powerpc/platforms/chrp/setup.c b/arch/powerpc/platforms/chrp/setup.c index d6d8ffc0271e..e66644e0fb40 100644 --- a/arch/powerpc/platforms/chrp/setup.c +++ b/arch/powerpc/platforms/chrp/setup.c @@ -280,20 +280,14 @@ static __init void chrp_init(void) node = of_find_node_by_path(property); if (!node) return; - property = of_get_property(node, "device_type", NULL); - if (!property) - goto out_put; - if (strcmp(property, "serial")) + if (!of_node_is_type(node, "serial")) goto out_put; /* * The 9pin connector is either /failsafe * or /pci@80000000/isa@C/serial@i2F8 * The optional graphics card has also type 'serial' in VGA mode. */ - property = of_get_property(node, "name", NULL); - if (!property) - goto out_put; - if (!strcmp(property, "failsafe") || !strcmp(property, "serial")) + if (of_node_name_eq(node, "failsafe") || of_node_name_eq(node, "serial")) add_preferred_console("ttyS", 0, NULL); out_put: of_node_put(node); diff --git a/arch/powerpc/platforms/embedded6xx/Kconfig b/arch/powerpc/platforms/embedded6xx/Kconfig index 8ea16db5ff48..c1920961f410 100644 --- a/arch/powerpc/platforms/embedded6xx/Kconfig +++ b/arch/powerpc/platforms/embedded6xx/Kconfig @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 config EMBEDDED6xx bool "Embedded 6xx/7xx/7xxx-based boards" - depends on 6xx && BROKEN_ON_SMP + depends on PPC_BOOK3S_32 && BROKEN_ON_SMP config LINKSTATION bool "Linkstation / Kurobox(HG) from Buffalo" @@ -52,7 +52,7 @@ config MVME5100 bool "Motorola/Emerson MVME5100" depends on EMBEDDED6xx select MPIC - select PCI + select FORCE_PCI select PPC_INDIRECT_PCI select PPC_I8259 select PPC_NATIVE @@ -63,7 +63,7 @@ config MVME5100 config TSI108_BRIDGE bool - select PCI + select FORCE_PCI select MPIC select MPIC_WEIRD diff --git a/arch/powerpc/platforms/maple/Kconfig b/arch/powerpc/platforms/maple/Kconfig index 2601fac50354..08d530a2a8b1 100644 --- a/arch/powerpc/platforms/maple/Kconfig +++ b/arch/powerpc/platforms/maple/Kconfig @@ -2,7 +2,7 @@ config PPC_MAPLE depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN bool "Maple 970FX Evaluation Board" - select PCI + select FORCE_PCI select MPIC select U3_DART select MPIC_U3_HT_IRQS diff --git a/arch/powerpc/platforms/maple/pci.c b/arch/powerpc/platforms/maple/pci.c index e3821379e86f..13fba004b7e7 100644 --- a/arch/powerpc/platforms/maple/pci.c +++ b/arch/powerpc/platforms/maple/pci.c @@ -604,10 +604,8 @@ void __init maple_pci_init(void) printk(KERN_CRIT "maple_find_bridges: can't find root of device tree\n"); return; } - for (np = NULL; (np = of_get_next_child(root, np)) != NULL;) { - if (!np->type) - continue; - if (strcmp(np->type, "pci") && strcmp(np->type, "ht")) + for_each_child_of_node(root, np) { + if (!of_node_is_type(np, "pci") && !of_node_is_type(np, "ht")) continue; if ((of_device_is_compatible(np, "u4-pcie") || of_device_is_compatible(np, "u3-agp")) && diff --git a/arch/powerpc/platforms/pasemi/Kconfig b/arch/powerpc/platforms/pasemi/Kconfig index 98e3bc22bebc..c52731a7773f 100644 --- a/arch/powerpc/platforms/pasemi/Kconfig +++ b/arch/powerpc/platforms/pasemi/Kconfig @@ -3,7 +3,7 @@ config PPC_PASEMI depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN bool "PA Semi SoC-based platforms" select MPIC - select PCI + select FORCE_PCI select PPC_UDBG_16550 select PPC_NATIVE select MPIC_BROKEN_REGREAD diff --git a/arch/powerpc/platforms/pasemi/dma_lib.c b/arch/powerpc/platforms/pasemi/dma_lib.c index 53384eb42a76..d18d16489a15 100644 --- 
a/arch/powerpc/platforms/pasemi/dma_lib.c +++ b/arch/powerpc/platforms/pasemi/dma_lib.c @@ -255,15 +255,13 @@ int pasemi_dma_alloc_ring(struct pasemi_dmachan *chan, int ring_size) chan->ring_size = ring_size; - chan->ring_virt = dma_alloc_coherent(&dma_pdev->dev, + chan->ring_virt = dma_zalloc_coherent(&dma_pdev->dev, ring_size * sizeof(u64), &chan->ring_dma, GFP_KERNEL); if (!chan->ring_virt) return -ENOMEM; - memset(chan->ring_virt, 0, ring_size * sizeof(u64)); - return 0; } EXPORT_SYMBOL(pasemi_dma_alloc_ring); diff --git a/arch/powerpc/platforms/pasemi/pci.c b/arch/powerpc/platforms/pasemi/pci.c index c3c64172482d..fdc839d93837 100644 --- a/arch/powerpc/platforms/pasemi/pci.c +++ b/arch/powerpc/platforms/pasemi/pci.c @@ -27,6 +27,7 @@ #include #include +#include #include #include @@ -108,6 +109,61 @@ static int workaround_5945(struct pci_bus *bus, unsigned int devfn, return 1; } +#ifdef CONFIG_PPC_PASEMI_NEMO +#define PXP_ERR_CFG_REG 0x4 +#define PXP_IGNORE_PCIE_ERRORS 0x800 +#define SB600_BUS 5 + +static void sb600_set_flag(int bus) +{ + static void __iomem *iob_mapbase = NULL; + struct resource res; + struct device_node *dn; + int err; + + if (iob_mapbase == NULL) { + dn = of_find_compatible_node(NULL, "isa", "pasemi,1682m-iob"); + if (!dn) { + pr_crit("NEMO SB600 missing iob node\n"); + return; + } + + err = of_address_to_resource(dn, 0, &res); + of_node_put(dn); + + if (err) { + pr_crit("NEMO SB600 missing resource\n"); + return; + } + + pr_info("NEMO SB600 IOB base %08llx\n",res.start); + + iob_mapbase = ioremap(res.start + 0x100, 0x94); + } + + if (iob_mapbase != NULL) { + if (bus == SB600_BUS) { + /* + * This is the SB600's bus, tell the PCI-e root port + * to allow non-zero devices to enumerate. + */ + out_le32(iob_mapbase + PXP_ERR_CFG_REG, in_le32(iob_mapbase + PXP_ERR_CFG_REG) | PXP_IGNORE_PCIE_ERRORS); + } else { + /* + * Only scan device 0 on other busses + */ + out_le32(iob_mapbase + PXP_ERR_CFG_REG, in_le32(iob_mapbase + PXP_ERR_CFG_REG) & ~PXP_IGNORE_PCIE_ERRORS); + } + } +} + +#else + +static void sb600_set_flag(int bus) +{ +} +#endif + static int pa_pxp_read_config(struct pci_bus *bus, unsigned int devfn, int offset, int len, u32 *val) { @@ -126,6 +182,8 @@ static int pa_pxp_read_config(struct pci_bus *bus, unsigned int devfn, addr = pa_pxp_cfg_addr(hose, bus->number, devfn, offset); + sb600_set_flag(bus->number); + /* * Note: the caller has already checked that offset is * suitably aligned and that len is 1, 2 or 4. @@ -160,6 +218,8 @@ static int pa_pxp_write_config(struct pci_bus *bus, unsigned int devfn, addr = pa_pxp_cfg_addr(hose, bus->number, devfn, offset); + sb600_set_flag(bus->number); + /* * Note: the caller has already checked that offset is * suitably aligned and that len is 1, 2 or 4. @@ -210,6 +270,12 @@ static int __init pas_add_bridge(struct device_node *dev) /* Interpret the "ranges" property */ pci_process_bridge_OF_ranges(hose, dev, 1); + /* + * Scan for an isa bridge. This is needed to find the SB600 on the nemo + * and does nothing on machines without one. 
+ */ + isa_bridge_find_early(hose); + return 0; } diff --git a/arch/powerpc/platforms/pasemi/setup.c b/arch/powerpc/platforms/pasemi/setup.c index 9a6eb04cca83..c0532999f854 100644 --- a/arch/powerpc/platforms/pasemi/setup.c +++ b/arch/powerpc/platforms/pasemi/setup.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -72,6 +73,40 @@ static void __noreturn pas_restart(char *cmd) out_le32(reset_reg, 0x6000000); } +#ifdef CONFIG_PPC_PASEMI_NEMO +void pas_shutdown(void) +{ + /* Set the PLD bit that makes the SB600 think the power button is being pressed */ + void __iomem *pld_map = ioremap(0xf5000000,4096); + while (1) + out_8(pld_map+7,0x01); +} + +/* RTC platform device structure as is not in device tree */ +static struct resource rtc_resource[] = {{ + .name = "rtc", + .start = 0x70, + .end = 0x71, + .flags = IORESOURCE_IO, +}, { + .name = "rtc", + .start = 8, + .end = 8, + .flags = IORESOURCE_IRQ, +}}; + +static inline void nemo_init_rtc(void) +{ + platform_device_register_simple("rtc_cmos", -1, rtc_resource, 2); +} + +#else + +static inline void nemo_init_rtc(void) +{ +} +#endif + #ifdef CONFIG_SMP static arch_spinlock_t timebase_lock; static unsigned long timebase; @@ -183,6 +218,42 @@ static int __init pas_setup_mce_regs(void) } machine_device_initcall(pasemi, pas_setup_mce_regs); +#ifdef CONFIG_PPC_PASEMI_NEMO +static void sb600_8259_cascade(struct irq_desc *desc) +{ + struct irq_chip *chip = irq_desc_get_chip(desc); + unsigned int cascade_irq = i8259_irq(); + + if (cascade_irq) + generic_handle_irq(cascade_irq); + + chip->irq_eoi(&desc->irq_data); +} + +static void nemo_init_IRQ(struct mpic *mpic) +{ + struct device_node *np; + int gpio_virq; + /* Connect the SB600's legacy i8259 controller */ + np = of_find_node_by_path("/pxp@0,e0000000"); + i8259_init(np, 0); + of_node_put(np); + + gpio_virq = irq_create_mapping(NULL, 3); + irq_set_irq_type(gpio_virq, IRQ_TYPE_LEVEL_HIGH); + irq_set_chained_handler(gpio_virq, sb600_8259_cascade); + mpic_unmask_irq(irq_get_irq_data(gpio_virq)); + + irq_set_default_host(mpic->irqhost); +} + +#else + +static inline void nemo_init_IRQ(struct mpic *mpic) +{ +} +#endif + static __init void pas_init_IRQ(void) { struct device_node *np; @@ -243,6 +314,8 @@ static __init void pas_init_IRQ(void) mpic_unmask_irq(irq_get_irq_data(nmi_virq)); } + nemo_init_IRQ(mpic); + of_node_put(mpic_node); of_node_put(root); } @@ -404,6 +477,8 @@ static int __init pasemi_publish_devices(void) /* Publish OF platform devices for SDC and other non-PCI devices */ of_platform_bus_probe(NULL, pasemi_bus_ids, NULL); + nemo_init_rtc(); + return 0; } machine_device_initcall(pasemi, pasemi_publish_devices); @@ -418,6 +493,17 @@ static int __init pas_probe(void) !of_machine_is_compatible("pasemi,pwrficient")) return 0; +#ifdef CONFIG_PPC_PASEMI_NEMO + /* + * Check for the Nemo motherboard here, if we are running on one + * change the machine definition to fit + */ + if (of_machine_is_compatible("pasemi,nemo")) { + pm_power_off = pas_shutdown; + ppc_md.name = "A-EON Amigaone X1000"; + } +#endif + iommu_init_early_pasemi(); return 1; diff --git a/arch/powerpc/platforms/powermac/Kconfig b/arch/powerpc/platforms/powermac/Kconfig index fc90cb35cea3..f834a19ed772 100644 --- a/arch/powerpc/platforms/powermac/Kconfig +++ b/arch/powerpc/platforms/powermac/Kconfig @@ -3,7 +3,7 @@ config PPC_PMAC bool "Apple PowerMac based machines" depends on PPC_BOOK3S && CPU_BIG_ENDIAN select MPIC - select PCI + select FORCE_PCI select PPC_INDIRECT_PCI if PPC32 select PPC_MPC106 
if PPC32 select PPC_NATIVE diff --git a/arch/powerpc/platforms/powermac/cache.S b/arch/powerpc/platforms/powermac/cache.S index 27862feee4a5..f0641b6e6075 100644 --- a/arch/powerpc/platforms/powermac/cache.S +++ b/arch/powerpc/platforms/powermac/cache.S @@ -28,7 +28,7 @@ */ _GLOBAL(flush_disable_caches) -#ifndef CONFIG_6xx +#ifndef CONFIG_PPC_BOOK3S_32 blr #else BEGIN_FTR_SECTION @@ -356,4 +356,4 @@ END_FTR_SECTION_IFSET(CPU_FTR_L3CR) mtmsr r11 /* restore DR and EE */ isync blr -#endif /* CONFIG_6xx */ +#endif /* CONFIG_PPC_BOOK3S_32 */ diff --git a/arch/powerpc/platforms/powermac/feature.c b/arch/powerpc/platforms/powermac/feature.c index ed2f54b3f173..c3e5ee8b5175 100644 --- a/arch/powerpc/platforms/powermac/feature.c +++ b/arch/powerpc/platforms/powermac/feature.c @@ -51,7 +51,7 @@ #define DBG(fmt...) #endif -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 extern int powersave_lowspeed; #endif @@ -173,9 +173,9 @@ static long ohare_htw_scc_enable(struct device_node *node, long param, macio = macio_find(node, 0); if (!macio) return -ENODEV; - if (!strcmp(node->name, "ch-a")) + if (of_node_name_eq(node, "ch-a")) chan_mask = MACIO_FLAG_SCCA_ON; - else if (!strcmp(node->name, "ch-b")) + else if (of_node_name_eq(node, "ch-b")) chan_mask = MACIO_FLAG_SCCB_ON; else return -ENODEV; @@ -610,9 +610,9 @@ static long core99_scc_enable(struct device_node *node, long param, long value) macio = macio_find(node, 0); if (!macio) return -ENODEV; - if (!strcmp(node->name, "ch-a")) + if (of_node_name_eq(node, "ch-a")) chan_mask = MACIO_FLAG_SCCA_ON; - else if (!strcmp(node->name, "ch-b")) + else if (of_node_name_eq(node, "ch-b")) chan_mask = MACIO_FLAG_SCCB_ON; else return -ENODEV; @@ -1392,8 +1392,7 @@ static long g5_mpic_enable(struct device_node *node, long param, long value) if (parent == NULL) return 0; - is_u3 = strcmp(parent->name, "u3") == 0 || - strcmp(parent->name, "u4") == 0; + is_u3 = of_node_name_eq(parent, "u3") || of_node_name_eq(parent, "u4"); of_node_put(parent); if (!is_u3) return 0; @@ -1471,6 +1470,7 @@ static long g5_i2s_enable(struct device_node *node, long param, long value) case 2: if (macio->type == macio_shasta) break; + /* fall through */ default: return -ENODEV; } diff --git a/arch/powerpc/platforms/powermac/low_i2c.c b/arch/powerpc/platforms/powermac/low_i2c.c index d4d411820597..4de058a20d2b 100644 --- a/arch/powerpc/platforms/powermac/low_i2c.c +++ b/arch/powerpc/platforms/powermac/low_i2c.c @@ -617,7 +617,7 @@ static void __init kw_i2c_probe(void) * but not for now */ child = of_get_next_child(np, NULL); - multibus = !child || strcmp(child->name, "i2c-bus"); + multibus = !of_node_name_eq(child, "i2c-bus"); of_node_put(child); /* For a multibus setup, we get the bus count based on the @@ -917,10 +917,9 @@ static void __init smu_i2c_probe(void) * type as older device trees mix i2c busses and other things * at the same level */ - for (busnode = NULL; - (busnode = of_get_next_child(controller, busnode)) != NULL;) { - if (strcmp(busnode->type, "i2c") && - strcmp(busnode->type, "i2c-bus")) + for_each_child_of_node(controller, busnode) { + if (!of_node_is_type(busnode, "i2c") && + !of_node_is_type(busnode, "i2c-bus")) continue; reg = of_get_property(busnode, "reg", NULL); if (reg == NULL) @@ -1206,7 +1205,7 @@ static void pmac_i2c_devscan(void (*callback)(struct device_node *dev, if (bus != pmac_i2c_find_bus(np)) continue; for (p = whitelist; p->name != NULL; p++) { - if (strcmp(np->name, p->name)) + if (!of_node_name_eq(np, p->name)) continue; if (p->compatible && 
!of_device_is_compatible(np, p->compatible)) diff --git a/arch/powerpc/platforms/powermac/pci.c b/arch/powerpc/platforms/powermac/pci.c index 04527d13d5a4..3d7420503c37 100644 --- a/arch/powerpc/platforms/powermac/pci.c +++ b/arch/powerpc/platforms/powermac/pci.c @@ -501,9 +501,7 @@ static void __init init_p2pbridge(void) /* XXX it would be better here to identify the specific PCI-PCI bridge chip we have. */ p2pbridge = of_find_node_by_name(NULL, "pci-bridge"); - if (p2pbridge == NULL - || p2pbridge->parent == NULL - || strcmp(p2pbridge->parent->name, "pci") != 0) + if (p2pbridge == NULL || !of_node_name_eq(p2pbridge->parent, "pci")) goto done; if (pci_device_from_OF_node(p2pbridge, &bus, &devfn) < 0) { DBG("Can't find PCI infos for PCI<->PCI bridge\n"); @@ -828,14 +826,14 @@ static int __init pmac_add_bridge(struct device_node *dev) if (of_device_is_compatible(dev, "uni-north")) { primary = setup_uninorth(hose, &rsrc); disp_name = "UniNorth"; - } else if (strcmp(dev->name, "pci") == 0) { + } else if (of_node_name_eq(dev, "pci")) { /* XXX assume this is a mpc106 (grackle) */ setup_grackle(hose); disp_name = "Grackle (MPC106)"; - } else if (strcmp(dev->name, "bandit") == 0) { + } else if (of_node_name_eq(dev, "bandit")) { setup_bandit(hose, &rsrc); disp_name = "Bandit"; - } else if (strcmp(dev->name, "chaos") == 0) { + } else if (of_node_name_eq(dev, "chaos")) { setup_chaos(hose, &rsrc); disp_name = "Chaos"; primary = 0; @@ -914,16 +912,14 @@ void __init pmac_pci_init(void) "of device tree\n"); return; } - for (np = NULL; (np = of_get_next_child(root, np)) != NULL;) { - if (np->name == NULL) - continue; - if (strcmp(np->name, "bandit") == 0 - || strcmp(np->name, "chaos") == 0 - || strcmp(np->name, "pci") == 0) { + for_each_child_of_node(root, np) { + if (of_node_name_eq(np, "bandit") + || of_node_name_eq(np, "chaos") + || of_node_name_eq(np, "pci")) { if (pmac_add_bridge(np) == 0) of_node_get(np); } - if (strcmp(np->name, "ht") == 0) { + if (of_node_name_eq(np, "ht")) { of_node_get(np); ht = np; } @@ -983,7 +979,7 @@ static bool pmac_pci_enable_device_hook(struct pci_dev *dev) /* Firewire & GMAC were disabled after PCI probe, the driver is * claiming them, we must re-enable them now. 
*/ - if (uninorth_child && !strcmp(node->name, "firewire") && + if (uninorth_child && of_node_name_eq(node, "firewire") && (of_device_is_compatible(node, "pci106b,18") || of_device_is_compatible(node, "pci106b,30") || of_device_is_compatible(node, "pci11c1,5811"))) { @@ -991,7 +987,7 @@ static bool pmac_pci_enable_device_hook(struct pci_dev *dev) pmac_call_feature(PMAC_FTR_1394_ENABLE, node, 0, 1); updatecfg = 1; } - if (uninorth_child && !strcmp(node->name, "ethernet") && + if (uninorth_child && of_node_name_eq(node, "ethernet") && of_device_is_compatible(node, "gmac")) { pmac_call_feature(PMAC_FTR_GMAC_ENABLE, node, 0, 1); updatecfg = 1; @@ -1262,4 +1258,3 @@ struct pci_controller_ops pmac_pci_controller_ops = { .enable_device_hook = pmac_pci_enable_device_hook, #endif }; - diff --git a/arch/powerpc/platforms/powermac/pfunc_base.c b/arch/powerpc/platforms/powermac/pfunc_base.c index fd2e210559c8..62311e84a423 100644 --- a/arch/powerpc/platforms/powermac/pfunc_base.c +++ b/arch/powerpc/platforms/powermac/pfunc_base.c @@ -101,9 +101,8 @@ static void macio_gpio_init_one(struct macio_chip *macio) * Find the "gpio" parent node */ - for (gparent = NULL; - (gparent = of_get_next_child(macio->of_node, gparent)) != NULL;) - if (strcmp(gparent->name, "gpio") == 0) + for_each_child_of_node(macio->of_node, gparent) + if (of_node_name_eq(gparent, "gpio")) break; if (gparent == NULL) return; @@ -313,7 +312,7 @@ static void uninorth_install_pfunc(void) * Install handlers for the hwclock child if any */ for (np = NULL; (np = of_get_next_child(uninorth_node, np)) != NULL;) - if (strcmp(np->name, "hw-clock") == 0) { + if (of_node_name_eq(np, "hw-clock")) { unin_hwclock = np; break; } diff --git a/arch/powerpc/platforms/powermac/pic.c b/arch/powerpc/platforms/powermac/pic.c index 57bbff465964..c292ffac2ed4 100644 --- a/arch/powerpc/platforms/powermac/pic.c +++ b/arch/powerpc/platforms/powermac/pic.c @@ -417,7 +417,7 @@ int of_irq_parse_oldworld(struct device_node *device, int index, if (ints != NULL) break; device = device->parent; - if (device && strcmp(device->type, "pci") != 0) + if (!of_node_is_type(device, "pci")) break; } if (ints == NULL) @@ -553,13 +553,13 @@ void __init pmac_pic_init(void) for_each_node_with_property(np, "interrupt-controller") { /* Skip /chosen/interrupt-controller */ - if (strcmp(np->name, "chosen") == 0) + if (of_node_name_eq(np, "chosen")) continue; /* It seems like at least one person wants * to use BootX on a machine with an AppleKiwi * controller which happens to pretend to be an * interrupt controller too. */ - if (strcmp(np->name, "AppleKiwi") == 0) + if (of_node_name_eq(np, "AppleKiwi")) continue; /* I think we found one ! 
*/ of_irq_dflt_pic = np; diff --git a/arch/powerpc/platforms/powermac/setup.c b/arch/powerpc/platforms/powermac/setup.c index 2f00e3daafb0..2e8221e20ee8 100644 --- a/arch/powerpc/platforms/powermac/setup.c +++ b/arch/powerpc/platforms/powermac/setup.c @@ -560,15 +560,9 @@ static int __init check_pmac_serial_console(void) } pr_debug("stdout is %pOF\n", prom_stdout); - name = of_get_property(prom_stdout, "name", NULL); - if (!name) { - pr_debug(" stdout package has no name !\n"); - goto not_found; - } - - if (strcmp(name, "ch-a") == 0) + if (of_node_name_eq(prom_stdout, "ch-a")) offset = 0; - else if (strcmp(name, "ch-b") == 0) + else if (of_node_name_eq(prom_stdout, "ch-b")) offset = 1; else goto not_found; diff --git a/arch/powerpc/platforms/powermac/sleep.S b/arch/powerpc/platforms/powermac/sleep.S index f89808b9713d..fb64b09cad9d 100644 --- a/arch/powerpc/platforms/powermac/sleep.S +++ b/arch/powerpc/platforms/powermac/sleep.S @@ -56,7 +56,7 @@ * vector that will be called by the ROM on wakeup */ _GLOBAL(low_sleep_handler) -#ifndef CONFIG_6xx +#ifndef CONFIG_PPC_BOOK3S_32 blr #else mflr r0 @@ -394,5 +394,5 @@ sleep_storage: .long 0 .balign L1_CACHE_BYTES, 0 -#endif /* CONFIG_6xx */ +#endif /* CONFIG_PPC_BOOK3S_32 */ .section .text diff --git a/arch/powerpc/platforms/powermac/smp.c b/arch/powerpc/platforms/powermac/smp.c index 447da6db450a..35be6e0b886d 100644 --- a/arch/powerpc/platforms/powermac/smp.c +++ b/arch/powerpc/platforms/powermac/smp.c @@ -832,8 +832,7 @@ static int smp_core99_kick_cpu(int nr) mdelay(1); /* Restore our exception vector */ - *vector = save_vector; - flush_icache_range((unsigned long) vector, (unsigned long) vector + 4); + patch_instruction(vector, save_vector); local_irq_restore(flags); if (ppc_md.progress) ppc_md.progress("smp_core99_kick_cpu done", 0x347); diff --git a/arch/powerpc/platforms/powermac/udbg_adb.c b/arch/powerpc/platforms/powermac/udbg_adb.c index 64f38f0d15ed..12158bb4fed7 100644 --- a/arch/powerpc/platforms/powermac/udbg_adb.c +++ b/arch/powerpc/platforms/powermac/udbg_adb.c @@ -194,7 +194,7 @@ int __init udbg_adb_init(int force_btext) */ for_each_node_by_name(np, "keyboard") { struct device_node *parent = of_get_parent(np); - int found = (parent && strcmp(parent->type, "adb") == 0); + int found = of_node_is_type(parent, "adb"); of_node_put(parent); if (found) break; diff --git a/arch/powerpc/platforms/powermac/udbg_scc.c b/arch/powerpc/platforms/powermac/udbg_scc.c index 8901973ed683..415b74d7c253 100644 --- a/arch/powerpc/platforms/powermac/udbg_scc.c +++ b/arch/powerpc/platforms/powermac/udbg_scc.c @@ -87,7 +87,7 @@ void udbg_scc_init(int force_scc) for (ch = NULL; (ch = of_get_next_child(escc, ch)) != NULL;) { if (ch == stdout) ch_def = of_node_get(ch); - if (strcmp(ch->name, "ch-a") == 0) + if (of_node_name_eq(ch, "ch-a")) ch_a = of_node_get(ch); } if (ch_def == NULL && !force_scc) diff --git a/arch/powerpc/platforms/powernv/Kconfig b/arch/powerpc/platforms/powernv/Kconfig index 99083fe992d5..850eee860cf2 100644 --- a/arch/powerpc/platforms/powernv/Kconfig +++ b/arch/powerpc/platforms/powernv/Kconfig @@ -7,7 +7,7 @@ config PPC_POWERNV select PPC_ICP_NATIVE select PPC_XIVE_NATIVE select PPC_P7_NAP - select PCI + select FORCE_PCI select PCI_MSI select EPAPR_BOOT select PPC_INDIRECT_PIO diff --git a/arch/powerpc/platforms/powernv/eeh-powernv.c b/arch/powerpc/platforms/powernv/eeh-powernv.c index abc0be7507c8..f38078976c5d 100644 --- a/arch/powerpc/platforms/powernv/eeh-powernv.c +++ b/arch/powerpc/platforms/powernv/eeh-powernv.c @@ -564,8 
+564,8 @@ static void pnv_eeh_get_phb_diag(struct eeh_pe *pe) static int pnv_eeh_get_phb_state(struct eeh_pe *pe) { struct pnv_phb *phb = pe->phb->private_data; - u8 fstate; - __be16 pcierr; + u8 fstate = 0; + __be16 pcierr = 0; s64 rc; int result = 0; @@ -603,8 +603,8 @@ static int pnv_eeh_get_phb_state(struct eeh_pe *pe) static int pnv_eeh_get_pe_state(struct eeh_pe *pe) { struct pnv_phb *phb = pe->phb->private_data; - u8 fstate; - __be16 pcierr; + u8 fstate = 0; + __be16 pcierr = 0; s64 rc; int result; diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c index 75b935252981..d7f742ed48ba 100644 --- a/arch/powerpc/platforms/powernv/npu-dma.c +++ b/arch/powerpc/platforms/powernv/npu-dma.c @@ -9,32 +9,19 @@ * License as published by the Free Software Foundation. */ -#include #include #include #include -#include #include #include -#include #include #include -#include #include -#include -#include -#include -#include -#include -#include #include -#include "powernv.h" #include "pci.h" -#define npu_to_phb(x) container_of(x, struct pnv_phb, npu) - /* * spinlock to protect initialisation of an npu_context for a particular * mm_struct. @@ -133,15 +120,25 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe, return pe; } -long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num, +static long pnv_npu_unset_window(struct iommu_table_group *table_group, + int num); + +static long pnv_npu_set_window(struct iommu_table_group *table_group, int num, struct iommu_table *tbl) { + struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe, + table_group); struct pnv_phb *phb = npe->phb; int64_t rc; const unsigned long size = tbl->it_indirect_levels ? tbl->it_level_size : tbl->it_size; const __u64 start_addr = tbl->it_offset << tbl->it_page_shift; const __u64 win_size = tbl->it_size << tbl->it_page_shift; + int num2 = (num == 0) ? 1 : 0; + + /* NPU has just one TVE so if there is another table, remove it first */ + if (npe->table_group.tables[num2]) + pnv_npu_unset_window(&npe->table_group, num2); pe_info(npe, "Setting up window %llx..%llx pg=%lx\n", start_addr, start_addr + win_size - 1, @@ -167,11 +164,16 @@ long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num, return 0; } -long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num) +static long pnv_npu_unset_window(struct iommu_table_group *table_group, int num) { + struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe, + table_group); struct pnv_phb *phb = npe->phb; int64_t rc; + if (!npe->table_group.tables[num]) + return 0; + pe_info(npe, "Removing DMA window\n"); rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number, @@ -210,7 +212,8 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe) if (!gpe) return; - rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]); + rc = pnv_npu_set_window(&npe->table_group, 0, + gpe->table_group.tables[0]); /* * NVLink devices use the same TCE table configuration as @@ -235,7 +238,7 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe) if (phb->type != PNV_PHB_NPU_NVLINK || !npe->pdev) return -EINVAL; - rc = pnv_npu_unset_window(npe, 0); + rc = pnv_npu_unset_window(&npe->table_group, 0); if (rc != OPAL_SUCCESS) return rc; @@ -288,11 +291,15 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass) } } +#ifdef CONFIG_IOMMU_API /* Switch ownership from platform code to external user (e.g. 
VFIO) */ -void pnv_npu_take_ownership(struct pnv_ioda_pe *npe) +static void pnv_npu_take_ownership(struct iommu_table_group *table_group) { + struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe, + table_group); struct pnv_phb *phb = npe->phb; int64_t rc; + struct pci_dev *gpdev = NULL; /* * Note: NPU has just a single TVE in the hardware which means that @@ -301,7 +308,7 @@ void pnv_npu_take_ownership(struct pnv_ioda_pe *npe) * if it was enabled at the moment of ownership change. */ if (npe->table_group.tables[0]) { - pnv_npu_unset_window(npe, 0); + pnv_npu_unset_window(&npe->table_group, 0); return; } @@ -314,30 +321,315 @@ void pnv_npu_take_ownership(struct pnv_ioda_pe *npe) return; } pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false); + + get_gpu_pci_dev_and_pe(npe, &gpdev); + if (gpdev) + pnv_npu2_unmap_lpar_dev(gpdev); } -struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe) +static void pnv_npu_release_ownership(struct iommu_table_group *table_group) { - struct pnv_phb *phb = npe->phb; - struct pci_bus *pbus = phb->hose->bus; - struct pci_dev *npdev, *gpdev = NULL, *gptmp; - struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev); + struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe, + table_group); + struct pci_dev *gpdev = NULL; + + get_gpu_pci_dev_and_pe(npe, &gpdev); + if (gpdev) + pnv_npu2_map_lpar_dev(gpdev, 0, MSR_DR | MSR_PR | MSR_HV); +} + +static struct iommu_table_group_ops pnv_pci_npu_ops = { + .set_window = pnv_npu_set_window, + .unset_window = pnv_npu_unset_window, + .take_ownership = pnv_npu_take_ownership, + .release_ownership = pnv_npu_release_ownership, +}; +#endif /* !CONFIG_IOMMU_API */ + +/* + * NPU2 ATS + */ +/* Maximum possible number of ATSD MMIO registers per NPU */ +#define NV_NMMU_ATSD_REGS 8 +#define NV_NPU_MAX_PE_NUM 16 + +/* + * A compound NPU IOMMU group which might consist of 1 GPU + 2xNPUs (POWER8) or + * up to 3 x (GPU + 2xNPUs) (POWER9). + */ +struct npu_comp { + struct iommu_table_group table_group; + int pe_num; + struct pnv_ioda_pe *pe[NV_NPU_MAX_PE_NUM]; +}; + +/* An NPU descriptor, valid for POWER9 only */ +struct npu { + int index; + __be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS]; + unsigned int mmio_atsd_count; + + /* Bitmask for MMIO register usage */ + unsigned long mmio_atsd_usage; + + /* Do we need to explicitly flush the nest mmu? 
*/ + bool nmmu_flush; + + struct npu_comp npucomp; +}; + +#ifdef CONFIG_IOMMU_API +static long pnv_npu_peers_create_table_userspace( + struct iommu_table_group *table_group, + int num, __u32 page_shift, __u64 window_size, __u32 levels, + struct iommu_table **ptbl) +{ + struct npu_comp *npucomp = container_of(table_group, struct npu_comp, + table_group); + + if (!npucomp->pe_num || !npucomp->pe[0] || + !npucomp->pe[0]->table_group.ops || + !npucomp->pe[0]->table_group.ops->create_table) + return -EFAULT; + + return npucomp->pe[0]->table_group.ops->create_table( + &npucomp->pe[0]->table_group, num, page_shift, + window_size, levels, ptbl); +} + +static long pnv_npu_peers_set_window(struct iommu_table_group *table_group, + int num, struct iommu_table *tbl) +{ + int i, j; + long ret = 0; + struct npu_comp *npucomp = container_of(table_group, struct npu_comp, + table_group); + + for (i = 0; i < npucomp->pe_num; ++i) { + struct pnv_ioda_pe *pe = npucomp->pe[i]; + + if (!pe->table_group.ops->set_window) + continue; + + ret = pe->table_group.ops->set_window(&pe->table_group, + num, tbl); + if (ret) + break; + } + + if (ret) { + for (j = 0; j < i; ++j) { + struct pnv_ioda_pe *pe = npucomp->pe[j]; + + if (!pe->table_group.ops->unset_window) + continue; + + ret = pe->table_group.ops->unset_window( + &pe->table_group, num); + if (ret) + break; + } + } else { + table_group->tables[num] = iommu_tce_table_get(tbl); + } + + return ret; +} - if (!gpe || !gpdev) +static long pnv_npu_peers_unset_window(struct iommu_table_group *table_group, + int num) +{ + int i, j; + long ret = 0; + struct npu_comp *npucomp = container_of(table_group, struct npu_comp, + table_group); + + for (i = 0; i < npucomp->pe_num; ++i) { + struct pnv_ioda_pe *pe = npucomp->pe[i]; + + WARN_ON(npucomp->table_group.tables[num] != + table_group->tables[num]); + if (!npucomp->table_group.tables[num]) + continue; + + if (!pe->table_group.ops->unset_window) + continue; + + ret = pe->table_group.ops->unset_window(&pe->table_group, num); + if (ret) + break; + } + + if (ret) { + for (j = 0; j < i; ++j) { + struct pnv_ioda_pe *pe = npucomp->pe[j]; + + if (!npucomp->table_group.tables[num]) + continue; + + if (!pe->table_group.ops->set_window) + continue; + + ret = pe->table_group.ops->set_window(&pe->table_group, + num, table_group->tables[num]); + if (ret) + break; + } + } else if (table_group->tables[num]) { + iommu_tce_table_put(table_group->tables[num]); + table_group->tables[num] = NULL; + } + + return ret; +} + +static void pnv_npu_peers_take_ownership(struct iommu_table_group *table_group) +{ + int i; + struct npu_comp *npucomp = container_of(table_group, struct npu_comp, + table_group); + + for (i = 0; i < npucomp->pe_num; ++i) { + struct pnv_ioda_pe *pe = npucomp->pe[i]; + + if (!pe->table_group.ops->take_ownership) + continue; + pe->table_group.ops->take_ownership(&pe->table_group); + } +} + +static void pnv_npu_peers_release_ownership( + struct iommu_table_group *table_group) +{ + int i; + struct npu_comp *npucomp = container_of(table_group, struct npu_comp, + table_group); + + for (i = 0; i < npucomp->pe_num; ++i) { + struct pnv_ioda_pe *pe = npucomp->pe[i]; + + if (!pe->table_group.ops->release_ownership) + continue; + pe->table_group.ops->release_ownership(&pe->table_group); + } +} + +static struct iommu_table_group_ops pnv_npu_peers_ops = { + .get_table_size = pnv_pci_ioda2_get_table_size, + .create_table = pnv_npu_peers_create_table_userspace, + .set_window = pnv_npu_peers_set_window, + .unset_window = pnv_npu_peers_unset_window, 
+ .take_ownership = pnv_npu_peers_take_ownership, + .release_ownership = pnv_npu_peers_release_ownership, +}; + +static void pnv_comp_attach_table_group(struct npu_comp *npucomp, + struct pnv_ioda_pe *pe) +{ + if (WARN_ON(npucomp->pe_num == NV_NPU_MAX_PE_NUM)) + return; + + npucomp->pe[npucomp->pe_num] = pe; + ++npucomp->pe_num; +} + +struct iommu_table_group *pnv_try_setup_npu_table_group(struct pnv_ioda_pe *pe) +{ + struct iommu_table_group *table_group; + struct npu_comp *npucomp; + struct pci_dev *gpdev = NULL; + struct pci_controller *hose; + struct pci_dev *npdev = NULL; + + list_for_each_entry(gpdev, &pe->pbus->devices, bus_list) { + npdev = pnv_pci_get_npu_dev(gpdev, 0); + if (npdev) + break; + } + + if (!npdev) + /* It is not an NPU attached device, skip */ + return NULL; + + hose = pci_bus_to_host(npdev->bus); + + if (hose->npu) { + table_group = &hose->npu->npucomp.table_group; + + if (!table_group->group) { + table_group->ops = &pnv_npu_peers_ops; + iommu_register_group(table_group, + hose->global_number, + pe->pe_number); + } + } else { + /* Create a group for 1 GPU and attached NPUs for POWER8 */ + pe->npucomp = kzalloc(sizeof(pe->npucomp), GFP_KERNEL); + table_group = &pe->npucomp->table_group; + table_group->ops = &pnv_npu_peers_ops; + iommu_register_group(table_group, hose->global_number, + pe->pe_number); + } + + /* Steal capabilities from a GPU PE */ + table_group->max_dynamic_windows_supported = + pe->table_group.max_dynamic_windows_supported; + table_group->tce32_start = pe->table_group.tce32_start; + table_group->tce32_size = pe->table_group.tce32_size; + table_group->max_levels = pe->table_group.max_levels; + if (!table_group->pgsizes) + table_group->pgsizes = pe->table_group.pgsizes; + + npucomp = container_of(table_group, struct npu_comp, table_group); + pnv_comp_attach_table_group(npucomp, pe); + + return table_group; +} + +struct iommu_table_group *pnv_npu_compound_attach(struct pnv_ioda_pe *pe) +{ + struct iommu_table_group *table_group; + struct npu_comp *npucomp; + struct pci_dev *gpdev = NULL; + struct pci_dev *npdev; + struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(pe, &gpdev); + + WARN_ON(!(pe->flags & PNV_IODA_PE_DEV)); + if (!gpe) return NULL; - list_for_each_entry(npdev, &pbus->devices, bus_list) { - gptmp = pnv_pci_get_gpu_dev(npdev); + /* + * IODA2 bridges get this set up from pci_controller_ops::setup_bridge + * but NPU bridges do not have this hook defined so we do it here. + * We do not setup other table group parameters as they won't be used + * anyway - NVLink bridges are subordinate PEs. + */ + pe->table_group.ops = &pnv_pci_npu_ops; + + table_group = iommu_group_get_iommudata( + iommu_group_get(&gpdev->dev)); + + /* + * On P9 NPU PHB and PCI PHB support different page sizes, + * keep only matching. We expect here that NVLink bridge PE pgsizes is + * initialized by the caller. 
+ */ + table_group->pgsizes &= pe->table_group.pgsizes; + npucomp = container_of(table_group, struct npu_comp, table_group); + pnv_comp_attach_table_group(npucomp, pe); + + list_for_each_entry(npdev, &pe->phb->hose->bus->devices, bus_list) { + struct pci_dev *gpdevtmp = pnv_pci_get_gpu_dev(npdev); - if (gptmp != gpdev) + if (gpdevtmp != gpdev) continue; - pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev)); - iommu_group_add_device(gpe->table_group.group, &npdev->dev); + iommu_add_device(table_group, &npdev->dev); } - return gpe; + return table_group; } +#endif /* CONFIG_IOMMU_API */ /* Maximum number of nvlinks per npu */ #define NV_MAX_LINKS 6 @@ -490,7 +782,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context, int i, j; struct npu *npu; struct pci_dev *npdev; - struct pnv_phb *nphb; for (i = 0; i <= max_npu2_index; i++) { mmio_atsd_reg[i].reg = -1; @@ -505,8 +796,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context, if (!npdev) continue; - nphb = pci_bus_to_host(npdev->bus)->private_data; - npu = &nphb->npu; + npu = pci_bus_to_host(npdev->bus)->npu; + if (!npu) + continue; + mmio_atsd_reg[i].npu = npu; mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu); while (mmio_atsd_reg[i].reg < 0) { @@ -671,9 +964,9 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev, u32 nvlink_index; struct device_node *nvlink_dn; struct mm_struct *mm = current->mm; - struct pnv_phb *nphb; struct npu *npu; struct npu_context *npu_context; + struct pci_controller *hose; /* * At present we don't support GPUs connected to multiple NPUs and I'm @@ -681,13 +974,14 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev, */ struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0); - if (!firmware_has_feature(FW_FEATURE_OPAL)) - return ERR_PTR(-ENODEV); - if (!npdev) /* No nvlink associated with this GPU device */ return ERR_PTR(-ENODEV); + /* We only support DR/PR/HV in pnv_npu2_map_lpar_dev() */ + if (flags & ~(MSR_DR | MSR_PR | MSR_HV)) + return ERR_PTR(-EINVAL); + nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0); if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index", &nvlink_index))) @@ -701,20 +995,10 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev, return ERR_PTR(-EINVAL); } - nphb = pci_bus_to_host(npdev->bus)->private_data; - npu = &nphb->npu; - - /* - * Setup the NPU context table for a particular GPU. These need to be - * per-GPU as we need the tables to filter ATSDs when there are no - * active contexts on a particular GPU. It is safe for these to be - * called concurrently with destroy as the OPAL call takes appropriate - * locks and refcounts on init/destroy. 
 /* Maximum number of nvlinks per npu */
 #define NV_MAX_LINKS 6
@@ -490,7 +782,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 	int i, j;
 	struct npu *npu;
 	struct pci_dev *npdev;
-	struct pnv_phb *nphb;
 
 	for (i = 0; i <= max_npu2_index; i++) {
 		mmio_atsd_reg[i].reg = -1;
@@ -505,8 +796,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 			if (!npdev)
 				continue;
 
-			nphb = pci_bus_to_host(npdev->bus)->private_data;
-			npu = &nphb->npu;
+			npu = pci_bus_to_host(npdev->bus)->npu;
+			if (!npu)
+				continue;
+
 			mmio_atsd_reg[i].npu = npu;
 			mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
 			while (mmio_atsd_reg[i].reg < 0) {
@@ -671,9 +964,9 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	u32 nvlink_index;
 	struct device_node *nvlink_dn;
 	struct mm_struct *mm = current->mm;
-	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct npu_context *npu_context;
+	struct pci_controller *hose;
 
 	/*
 	 * At present we don't support GPUs connected to multiple NPUs and I'm
@@ -681,13 +974,14 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	 */
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return ERR_PTR(-ENODEV);
-
 	if (!npdev)
 		/* No nvlink associated with this GPU device */
 		return ERR_PTR(-ENODEV);
 
+	/* We only support DR/PR/HV in pnv_npu2_map_lpar_dev() */
+	if (flags & ~(MSR_DR | MSR_PR | MSR_HV))
+		return ERR_PTR(-EINVAL);
+
 	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
 	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
 							&nvlink_index)))
@@ -701,20 +995,10 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 		return ERR_PTR(-EINVAL);
 	}
 
-	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
-
-	/*
-	 * Setup the NPU context table for a particular GPU. These need to be
-	 * per-GPU as we need the tables to filter ATSDs when there are no
-	 * active contexts on a particular GPU. It is safe for these to be
-	 * called concurrently with destroy as the OPAL call takes appropriate
-	 * locks and refcounts on init/destroy.
-	 */
-	rc = opal_npu_init_context(nphb->opal_id, mm->context.id, flags,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn));
-	if (rc < 0)
-		return ERR_PTR(-ENOSPC);
+	hose = pci_bus_to_host(npdev->bus);
+	npu = hose->npu;
+	if (!npu)
+		return ERR_PTR(-ENODEV);
 
 	/*
 	 * We store the npu pci device so we can more easily get at the
@@ -726,9 +1010,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 		if (npu_context->release_cb != cb ||
 			npu_context->priv != priv) {
 			spin_unlock(&npu_context_lock);
-			opal_npu_destroy_context(nphb->opal_id, mm->context.id,
-						PCI_DEVID(gpdev->bus->number,
-							gpdev->devfn));
 			return ERR_PTR(-EINVAL);
 		}
 
@@ -754,9 +1035,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 
 		if (rc) {
 			kfree(npu_context);
-			opal_npu_destroy_context(nphb->opal_id, mm->context.id,
-					PCI_DEVID(gpdev->bus->number,
-						gpdev->devfn));
 			return ERR_PTR(rc);
 		}
 
@@ -776,7 +1054,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	 */
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev);
 
-	if (!nphb->npu.nmmu_flush) {
+	if (!npu->nmmu_flush) {
 		/*
 		 * If we're not explicitly flushing ourselves we need to mark
 		 * the thread for global flushes
@@ -809,27 +1087,24 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 		struct pci_dev *gpdev)
 {
 	int removed;
-	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 	struct device_node *nvlink_dn;
 	u32 nvlink_index;
+	struct pci_controller *hose;
 
 	if (WARN_ON(!npdev))
 		return;
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
+	hose = pci_bus_to_host(npdev->bus);
+	npu = hose->npu;
+	if (!npu)
 		return;
-
-	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
 	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
 	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
 							&nvlink_index)))
 		return;
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL);
-	opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn));
 	spin_lock(&npu_context_lock);
 	removed = kref_put(&npu_context->kref, pnv_npu2_release_context);
 	spin_unlock(&npu_context_lock);
@@ -857,13 +1132,12 @@ int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
 	u64 rc = 0, result = 0;
 	int i, is_write;
 	struct page *page[1];
+	const char __user *u;
+	char c;
 
 	/* mmap_sem should be held so the struct_mm must be present */
 	struct mm_struct *mm = context->mm;
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return -ENODEV;
-
 	WARN_ON(!rwsem_is_locked(&mm->mmap_sem));
 
 	for (i = 0; i < count; i++) {
@@ -872,18 +1146,17 @@ int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
 				is_write ? FOLL_WRITE : 0, page, NULL,
 				NULL);
 
-		/*
-		 * To support virtualised environments we will have to do an
-		 * access to the page to ensure it gets faulted into the
-		 * hypervisor. For the moment virtualisation is not supported in
-		 * other areas so leave the access out.
-		 */
 		if (rc != 1) {
 			status[i] = rc;
 			result = -EFAULT;
 			continue;
 		}
 
+		/* Make sure partition scoped tree gets a pte */
+		u = page_address(page[0]);
+		if (__get_user(c, u))
+			result = -EFAULT;
+
 		status[i] = 0;
 		put_page(page[0]);
 	}
@@ -892,42 +1165,127 @@ int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
 }
 EXPORT_SYMBOL(pnv_npu2_handle_fault);
 
-int pnv_npu2_init(struct pnv_phb *phb)
+int pnv_npu2_init(struct pci_controller *hose)
 {
 	unsigned int i;
 	u64 mmio_atsd;
-	struct device_node *dn;
-	struct pci_dev *gpdev;
 	static int npu_index;
-	uint64_t rc = 0;
-
-	phb->npu.nmmu_flush =
-		of_property_read_bool(phb->hose->dn, "ibm,nmmu-flush");
-	for_each_child_of_node(phb->hose->dn, dn) {
-		gpdev = pnv_pci_get_gpu_dev(get_pci_dev(dn));
-		if (gpdev) {
-			rc = opal_npu_map_lpar(phb->opal_id,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn),
-				0, 0);
-			if (rc)
-				dev_err(&gpdev->dev,
-					"Error %lld mapping device to LPAR\n",
-					rc);
-		}
-	}
+	struct npu *npu;
+	int ret;
+
+	npu = kzalloc(sizeof(*npu), GFP_KERNEL);
+	if (!npu)
+		return -ENOMEM;
 
-	for (i = 0; !of_property_read_u64_index(phb->hose->dn, "ibm,mmio-atsd",
-							i, &mmio_atsd); i++)
-		phb->npu.mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
+	npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush");
 
-	pr_info("NPU%lld: Found %d MMIO ATSD registers", phb->opal_id, i);
-	phb->npu.mmio_atsd_count = i;
-	phb->npu.mmio_atsd_usage = 0;
+	for (i = 0; i < ARRAY_SIZE(npu->mmio_atsd_regs) &&
+			!of_property_read_u64_index(hose->dn, "ibm,mmio-atsd",
+				i, &mmio_atsd); i++)
+		npu->mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
+
+	pr_info("NPU%d: Found %d MMIO ATSD registers", hose->global_number, i);
+	npu->mmio_atsd_count = i;
+	npu->mmio_atsd_usage = 0;
 	npu_index++;
-	if (WARN_ON(npu_index >= NV_MAX_NPUS))
-		return -ENOSPC;
+	if (WARN_ON(npu_index >= NV_MAX_NPUS)) {
+		ret = -ENOSPC;
+		goto fail_exit;
+	}
 	max_npu2_index = npu_index;
-	phb->npu.index = npu_index;
+	npu->index = npu_index;
+	hose->npu = npu;
+
+	return 0;
+
+fail_exit:
+	for (i = 0; i < npu->mmio_atsd_count; ++i)
+		iounmap(npu->mmio_atsd_regs[i]);
+
+	kfree(npu);
+
+	return ret;
+}
+
+int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
+		unsigned long msr)
+{
+	int ret;
+	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
+	struct pci_controller *hose;
+	struct pnv_phb *nphb;
+
+	if (!npdev)
+		return -ENODEV;
+
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+
+	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=%u\n",
+			nphb->opal_id, lparid);
+	/*
+	 * Currently we only support radix and non-zero LPCR only makes sense
+	 * for hash tables so skiboot expects the LPCR parameter to be a zero.
+	 */
+	ret = opal_npu_map_lpar(nphb->opal_id,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn), lparid,
+			0 /* LPCR bits */);
+	if (ret) {
+		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
+		return ret;
+	}
+
+	dev_dbg(&gpdev->dev, "init context opalid=%llu msr=%lx\n",
+			nphb->opal_id, msr);
+	ret = opal_npu_init_context(nphb->opal_id, 0/*__unused*/, msr,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+	if (ret < 0)
+		dev_err(&gpdev->dev, "Failed to init context: %d\n", ret);
+	else
+		ret = 0;
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(pnv_npu2_map_lpar_dev);
+
+void pnv_npu2_map_lpar(struct pnv_ioda_pe *gpe, unsigned long msr)
+{
+	struct pci_dev *gpdev;
+
+	list_for_each_entry(gpdev, &gpe->pbus->devices, bus_list)
+		pnv_npu2_map_lpar_dev(gpdev, 0, msr);
+}
+
+int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev)
+{
+	int ret;
+	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
+	struct pci_controller *hose;
+	struct pnv_phb *nphb;
+
+	if (!npdev)
+		return -ENODEV;
+
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+
+	dev_dbg(&gpdev->dev, "destroy context opalid=%llu\n",
+			nphb->opal_id);
+	ret = opal_npu_destroy_context(nphb->opal_id, 0/*__unused*/,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+	if (ret < 0) {
+		dev_err(&gpdev->dev, "Failed to destroy context: %d\n", ret);
+		return ret;
+	}
+
+	/* Set LPID to 0 anyway, just to be safe */
+	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=0\n", nphb->opal_id);
+	ret = opal_npu_map_lpar(nphb->opal_id,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn), 0 /*LPID*/,
+			0 /* LPCR bits */);
	if (ret)
		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pnv_npu2_unmap_lpar_dev);
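For orientation, the intended pairing of the two exports above might look like this in a hypothetical consumer (this caller, its name, and the MSR flag choice are illustrative only and not taken from this patch):

	/* Hypothetical caller, not part of this patch: pass a GPU through and back. */
	static int demo_gpu_passthrough(struct pci_dev *gpdev, unsigned int lparid)
	{
		int ret;

		/* Point the GPU's NPU links at the target LPAR and init an ATS context */
		ret = pnv_npu2_map_lpar_dev(gpdev, lparid, MSR_DR | MSR_PR);
		if (ret)
			return ret;

		/* ... the guest owns the device here ... */

		/* Destroy the context and park the links back on LPID 0 */
		return pnv_npu2_unmap_lpar_dev(gpdev);
	}

Note that pnv_npu2_unmap_lpar_dev() deliberately re-maps to LPID 0 on the way out, so a failed teardown never leaves the links pointing at a dead partition.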
diff --git a/arch/powerpc/platforms/powernv/opal-power.c b/arch/powerpc/platforms/powernv/opal-power.c
index 58dc3308237f..89ab1da57657 100644
--- a/arch/powerpc/platforms/powernv/opal-power.c
+++ b/arch/powerpc/platforms/powernv/opal-power.c
@@ -138,7 +138,7 @@ static struct notifier_block opal_power_control_nb = {
 	.priority	= 0,
 };
 
-static int __init opal_power_control_init(void)
+int __init opal_power_control_init(void)
 {
 	int ret, supported = 0;
 	struct device_node *np;
@@ -176,4 +176,3 @@ static int __init opal_power_control_init(void)
 
 	return 0;
 }
-machine_subsys_initcall(powernv, opal_power_control_init);
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index beed86f4224b..79586f127521 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -877,7 +877,7 @@ static int __init opal_init(void)
 	consoles = of_find_node_by_path("/ibm,opal/consoles");
 	if (consoles) {
 		for_each_child_of_node(consoles, np) {
-			if (strcmp(np->name, "serial"))
+			if (!of_node_name_eq(np, "serial"))
 				continue;
 			of_platform_device_create(np, NULL, NULL);
 		}
@@ -960,6 +960,9 @@ static int __init opal_init(void)
 	/* Initialise OPAL sensor groups */
 	opal_sensor_groups_init();
 
+	/* Initialise OPAL Power control interface */
+	opal_power_control_init();
+
 	return 0;
 }
 machine_subsys_initcall(powernv, opal_init);
diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
index fe9691040f54..697449afb3f7 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
@@ -299,7 +299,7 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
 	if (alloc_userspace_copy) {
 		offset = 0;
 		uas = pnv_pci_ioda2_table_do_alloc_pages(nid, level_shift,
-				levels, tce_table_size, &offset,
+				tmplevels, tce_table_size, &offset,
 				&total_allocated_uas);
 		if (!uas)
 			goto free_tces_exit;
@@ -368,6 +368,7 @@ void pnv_pci_unlink_table_and_group(struct iommu_table *tbl,
 	found = false;
 	for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) {
 		if (table_group->tables[i] == tbl) {
+			iommu_tce_table_put(tbl);
 			table_group->tables[i] = NULL;
 			found = true;
 			break;
@@ -393,7 +394,7 @@ long pnv_pci_link_table_and_group(int node, int num,
 	tgl->table_group = table_group;
 	list_add_rcu(&tgl->next, &tbl->it_group_list);
 
-	table_group->tables[num] = tbl;
+	table_group->tables[num] = iommu_tce_table_get(tbl);
 
 	return 0;
 }
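The two pci-ioda-tce.c hunks above make the link/unlink paths symmetric with respect to the table's reference count: linking a table into a group now takes a reference via iommu_tce_table_get(), and unlinking drops it with iommu_tce_table_put(). A toy user-space model of the invariant being established (plain counter standing in for the kernel's kref):

	#include <assert.h>

	/* Toy model of the get/put discipline the hunks above establish. */
	struct table { int refs; };

	static struct table *table_get(struct table *t) { t->refs++; return t; }
	static void table_put(struct table *t)          { t->refs--; }

	int main(void)
	{
		struct table tbl = { .refs = 1 };	/* creator's reference */

		table_get(&tbl);	/* pnv_pci_link_table_and_group() */
		table_put(&tbl);	/* pnv_pci_unlink_table_and_group() */
		assert(tbl.refs == 1);	/* creator's reference intact */
		return 0;
	}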
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index dd807446801e..1d6406a051f1 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -190,7 +190,8 @@ static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe)
 	unsigned int pe_num = pe->pe_number;
 
 	WARN_ON(pe->pdev);
-
+	WARN_ON(pe->npucomp); /* NPUs are not supposed to be freed */
+	kfree(pe->npucomp);
 	memset(pe, 0, sizeof(struct pnv_ioda_pe));
 	clear_bit(pe_num, phb->ioda.pe_alloc);
 }
@@ -517,8 +518,6 @@ static void __init pnv_ioda_parse_m64_window(struct pnv_phb *phb)
 		phb->init_m64 = pnv_ioda1_init_m64;
 	else
 		phb->init_m64 = pnv_ioda2_init_m64;
-	phb->reserve_m64_pe = pnv_ioda_reserve_m64_pe;
-	phb->pick_m64_pe = pnv_ioda_pick_m64_pe;
 }
 
 static void pnv_ioda_freeze_pe(struct pnv_phb *phb, int pe_no)
@@ -604,8 +603,8 @@ static int pnv_ioda_unfreeze_pe(struct pnv_phb *phb, int pe_no, int opt)
 static int pnv_ioda_get_pe_state(struct pnv_phb *phb, int pe_no)
 {
 	struct pnv_ioda_pe *slave, *pe;
-	u8 fstate, state;
-	__be16 pcierr;
+	u8 fstate = 0, state;
+	__be16 pcierr = 0;
 	s64 rc;
 
 	/* Sanity check on PE number */
@@ -663,10 +662,6 @@ static int pnv_ioda_get_pe_state(struct pnv_phb *phb, int pe_no)
 	return state;
 }
 
-/* Currently those 2 are only used when MSIs are enabled, this will change
- * but in the meantime, we need to protect them to avoid warnings
- */
-#ifdef CONFIG_PCI_MSI
 struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev)
 {
 	struct pci_controller *hose = pci_bus_to_host(dev->bus);
@@ -679,7 +674,6 @@ struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev)
 		return NULL;
 	return &phb->ioda.pe_array[pdn->pe_number];
 }
-#endif /* CONFIG_PCI_MSI */
 
 static int pnv_ioda_set_one_peltv(struct pnv_phb *phb,
 				  struct pnv_ioda_pe *parent,
@@ -1160,8 +1154,8 @@ static struct pnv_ioda_pe *pnv_ioda_setup_bus_PE(struct pci_bus *bus, bool all)
 		pe = &phb->ioda.pe_array[phb->ioda.root_pe_idx];
 
 	/* Check if PE is determined by M64 */
-	if (!pe && phb->pick_m64_pe)
-		pe = phb->pick_m64_pe(bus, all);
+	if (!pe)
+		pe = pnv_ioda_pick_m64_pe(bus, all);
 
 	/* The PE number isn't pinned by M64 */
 	if (!pe)
@@ -1273,19 +1267,20 @@ static void pnv_ioda_setup_npu_PEs(struct pci_bus *bus)
 
 static void pnv_pci_ioda_setup_PEs(void)
 {
-	struct pci_controller *hose, *tmp;
+	struct pci_controller *hose;
 	struct pnv_phb *phb;
 	struct pci_bus *bus;
 	struct pci_dev *pdev;
+	struct pnv_ioda_pe *pe;
 
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+	list_for_each_entry(hose, &hose_list, list_node) {
 		phb = hose->private_data;
 		if (phb->type == PNV_PHB_NPU_NVLINK) {
 			/* PE#0 is needed for error reporting */
 			pnv_ioda_reserve_pe(phb, 0);
 			pnv_ioda_setup_npu_PEs(hose->bus);
 			if (phb->model == PNV_PHB_MODEL_NPU2)
-				pnv_npu2_init(phb);
+				WARN_ON_ONCE(pnv_npu2_init(hose));
 		}
 		if (phb->type == PNV_PHB_NPU_OCAPI) {
 			bus = hose->bus;
@@ -1293,6 +1288,14 @@ static void pnv_pci_ioda_setup_PEs(void)
 			pnv_ioda_setup_dev_PE(pdev);
 		}
 	}
+	list_for_each_entry(hose, &hose_list, list_node) {
+		phb = hose->private_data;
+		if (phb->type != PNV_PHB_IODA2)
+			continue;
+
+		list_for_each_entry(pe, &phb->ioda.pe_list, list)
+			pnv_npu2_map_lpar(pe, MSR_DR | MSR_PR | MSR_HV);
+	}
 }
 
 #ifdef CONFIG_PCI_IOV
@@ -1531,6 +1534,11 @@ void pnv_pci_sriov_disable(struct pci_dev *pdev)
 
 static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
 				       struct pnv_ioda_pe *pe);
+#ifdef CONFIG_IOMMU_API
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus);
+
+#endif
 static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
 {
 	struct pci_bus *bus;
@@ -1584,6 +1592,9 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
 		mutex_unlock(&phb->ioda.pe_list_mutex);
 
 		pnv_pci_ioda2_setup_dma_pe(phb, pe);
+#ifdef CONFIG_IOMMU_API
+		pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL);
+#endif
 	}
 }
 
@@ -1923,21 +1934,16 @@ static u64 pnv_pci_ioda_dma_get_required_mask(struct pci_dev *pdev)
 	return mask;
 }
 
-static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe,
-				   struct pci_bus *bus,
-				   bool add_to_group)
+static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe, struct pci_bus *bus)
 {
 	struct pci_dev *dev;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		set_iommu_table_base(&dev->dev, pe->table_group.tables[0]);
 		set_dma_offset(&dev->dev, pe->tce_bypass_base);
-		if (add_to_group)
-			iommu_add_device(&dev->dev);
 
 		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
-			pnv_ioda_setup_bus_dma(pe, dev->subordinate,
-					add_to_group);
+			pnv_ioda_setup_bus_dma(pe, dev->subordinate);
 	}
 }
 
@@ -2366,16 +2372,8 @@ found:
 	pe->table_group.tce32_size = tbl->it_size << tbl->it_page_shift;
 	iommu_init_table(tbl, phb->hose->node);
 
-	if (pe->flags & PNV_IODA_PE_DEV) {
-		/*
-		 * Setting table base here only for carrying iommu_group
-		 * further down to let iommu_add_device() do the job.
-		 * pnv_pci_ioda_dma_dev_setup will override it later anyway.
-		 */
-		set_iommu_table_base(&pe->pdev->dev, tbl);
-		iommu_add_device(&pe->pdev->dev);
-	} else if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
+	if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
 		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 
 	return;
 fail:
@@ -2527,14 +2525,6 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
 	if (!pnv_iommu_bypass_disabled)
 		pnv_pci_ioda2_set_bypass(pe, true);
 
-	/*
-	 * Setting table base here only for carrying iommu_group
-	 * further down to let iommu_add_device() do the job.
-	 * pnv_pci_ioda_dma_dev_setup will override it later anyway.
-	 */
-	if (pe->flags & PNV_IODA_PE_DEV)
-		set_iommu_table_base(&pe->pdev->dev, tbl);
-
 	return 0;
 }
 
@@ -2565,7 +2555,7 @@ static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group,
 #endif
 
 #ifdef CONFIG_IOMMU_API
-static unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
+unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
 		__u64 window_size, __u32 levels)
 {
 	unsigned long bytes = 0;
@@ -2616,7 +2606,7 @@ static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)
 	pnv_pci_ioda2_set_bypass(pe, false);
 	pnv_pci_ioda2_unset_window(&pe->table_group, 0);
 	if (pe->pbus)
-		pnv_ioda_setup_bus_dma(pe, pe->pbus, false);
+		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 
 	iommu_tce_table_put(tbl);
 }
@@ -2627,7 +2617,7 @@ static void pnv_ioda2_release_ownership(struct iommu_table_group *table_group)
 
 	pnv_pci_ioda2_setup_default_config(pe);
 	if (pe->pbus)
-		pnv_ioda_setup_bus_dma(pe, pe->pbus, false);
+		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 }
 
 static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
@@ -2639,131 +2629,100 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
 	.release_ownership = pnv_ioda2_release_ownership,
 };
 
-static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
+static void pnv_ioda_setup_bus_iommu_group_add_devices(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group,
+		struct pci_bus *bus)
 {
-	struct pci_controller *hose;
-	struct pnv_phb *phb;
-	struct pnv_ioda_pe **ptmppe = opaque;
-	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
-	struct pci_dn *pdn = pci_get_pdn(pdev);
-
-	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
-		return 0;
-
-	hose = pci_bus_to_host(pdev->bus);
-	phb = hose->private_data;
-	if (phb->type != PNV_PHB_NPU_NVLINK)
-		return 0;
+	struct pci_dev *dev;
 
-	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
+	list_for_each_entry(dev, &bus->devices, bus_list) {
+		iommu_add_device(table_group, &dev->dev);
 
-	return 1;
+		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
+			pnv_ioda_setup_bus_iommu_group_add_devices(pe,
+					table_group, dev->subordinate);
+	}
 }
 
-/*
- * This returns PE of associated NPU.
- * This assumes that NPU is in the same IOMMU group with GPU and there is
- * no other PEs.
- */
-static struct pnv_ioda_pe *gpe_table_group_to_npe(
-		struct iommu_table_group *table_group)
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus)
 {
-	struct pnv_ioda_pe *npe = NULL;
-	int ret = iommu_group_for_each_dev(table_group->group, &npe,
-			gpe_table_group_to_npe_cb);
-
-	BUG_ON(!ret || !npe);
+	if (pe->flags & PNV_IODA_PE_DEV)
+		iommu_add_device(table_group, &pe->pdev->dev);
 
-	return npe;
+	if ((pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL)) || bus)
+		pnv_ioda_setup_bus_iommu_group_add_devices(pe, table_group,
+				bus);
 }
 
-static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
-		int num, struct iommu_table *tbl)
-{
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	int num2 = (num == 0) ? 1 : 0;
-	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
-
-	if (ret)
-		return ret;
-
-	if (table_group->tables[num2])
-		pnv_npu_unset_window(npe, num2);
-
-	ret = pnv_npu_set_window(npe, num, tbl);
-	if (ret) {
-		pnv_pci_ioda2_unset_window(table_group, num);
-		if (table_group->tables[num2])
-			pnv_npu_set_window(npe, num2,
-					table_group->tables[num2]);
-	}
-
-	return ret;
-}
+static unsigned long pnv_ioda_parse_tce_sizes(struct pnv_phb *phb);
 
-static long pnv_pci_ioda2_npu_unset_window(
-		struct iommu_table_group *table_group,
-		int num)
+static void pnv_pci_ioda_setup_iommu_api(void)
 {
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	int num2 = (num == 0) ? 1 : 0;
-	long ret = pnv_pci_ioda2_unset_window(table_group, num);
-
-	if (ret)
-		return ret;
-
-	if (!npe->table_group.tables[num])
-		return 0;
-
-	ret = pnv_npu_unset_window(npe, num);
-	if (ret)
-		return ret;
-
-	if (table_group->tables[num2])
-		ret = pnv_npu_set_window(npe, num2, table_group->tables[num2]);
-
-	return ret;
-}
+	struct pci_controller *hose;
+	struct pnv_phb *phb;
+	struct pnv_ioda_pe *pe;
 
-static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
-{
 	/*
-	 * Detach NPU first as pnv_ioda2_take_ownership() will destroy
-	 * the iommu_table if 32bit DMA is enabled.
+	 * There are 4 types of PEs:
+	 * - PNV_IODA_PE_BUS: a downstream port with an adapter,
+	 *   created from pnv_pci_setup_bridge();
+	 * - PNV_IODA_PE_BUS_ALL: a PCI-PCIX bridge with devices behind it,
+	 *   created from pnv_pci_setup_bridge();
+	 * - PNV_IODA_PE_VF: a SRIOV virtual function,
+	 *   created from pnv_pcibios_sriov_enable();
+	 * - PNV_IODA_PE_DEV: an NPU or OCAPI device,
+	 *   created from pnv_pci_ioda_fixup().
+	 *
+	 * Normally a PE is represented by an IOMMU group, however for
+	 * devices with side channels the groups need to be more strict.
 	 */
-	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
-	pnv_ioda2_take_ownership(table_group);
-}
+	list_for_each_entry(hose, &hose_list, list_node) {
+		phb = hose->private_data;
 
-static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
-	.get_table_size = pnv_pci_ioda2_get_table_size,
-	.create_table = pnv_pci_ioda2_create_table_userspace,
-	.set_window = pnv_pci_ioda2_npu_set_window,
-	.unset_window = pnv_pci_ioda2_npu_unset_window,
-	.take_ownership = pnv_ioda2_npu_take_ownership,
-	.release_ownership = pnv_ioda2_release_ownership,
-};
+		if (phb->type == PNV_PHB_NPU_NVLINK)
+			continue;
 
-static void pnv_pci_ioda_setup_iommu_api(void)
-{
-	struct pci_controller *hose, *tmp;
-	struct pnv_phb *phb;
-	struct pnv_ioda_pe *pe, *gpe;
+		list_for_each_entry(pe, &phb->ioda.pe_list, list) {
+			struct iommu_table_group *table_group;
+
+			table_group = pnv_try_setup_npu_table_group(pe);
+			if (!table_group) {
+				if (!pnv_pci_ioda_pe_dma_weight(pe))
+					continue;
+
+				table_group = &pe->table_group;
+				iommu_register_group(&pe->table_group,
+						pe->phb->hose->global_number,
+						pe->pe_number);
+			}
+			pnv_ioda_setup_bus_iommu_group(pe, table_group,
+					pe->pbus);
+		}
+	}
 
 	/*
 	 * Now we have all PHBs discovered, time to add NPU devices to
 	 * the corresponding IOMMU groups.
 	 */
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+	list_for_each_entry(hose, &hose_list, list_node) {
+		unsigned long pgsizes;
+
 		phb = hose->private_data;
 
 		if (phb->type != PNV_PHB_NPU_NVLINK)
 			continue;
 
+		pgsizes = pnv_ioda_parse_tce_sizes(phb);
 		list_for_each_entry(pe, &phb->ioda.pe_list, list) {
-			gpe = pnv_pci_npu_setup_iommu(pe);
-			if (gpe)
-				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
+			/*
+			 * IODA2 bridges get this set up from
+			 * pci_controller_ops::setup_bridge but NPU bridges
+			 * do not have this hook defined so we do it here.
+			 */
+			pe->table_group.pgsizes = pgsizes;
+			pnv_npu_compound_attach(pe);
 		}
 	}
 }
@@ -2810,9 +2769,6 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
 	/* TVE #1 is selected by PCI address bit 59 */
 	pe->tce_bypass_base = 1ull << 59;
 
-	iommu_register_group(&pe->table_group, phb->hose->global_number,
-			pe->pe_number);
-
 	/* The PE will reserve all possible 32-bits space */
 	pe_info(pe, "Setting up 32-bit TCE table at 0..%08x\n",
 		phb->ioda.m32_pci_base);
@@ -2833,10 +2789,9 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
 		return;
 
 	if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
-		pnv_ioda_setup_bus_dma(pe, pe->pbus, true);
+		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 }
 
-#ifdef CONFIG_PCI_MSI
 int64_t pnv_opal_pci_msi_eoi(struct irq_chip *chip, unsigned int hw_irq)
 {
 	struct pnv_phb *phb = container_of(chip, struct pnv_phb,
@@ -2982,9 +2937,6 @@ static void pnv_pci_init_ioda_msis(struct pnv_phb *phb)
 	pr_info("  Allocated bitmap for %d MSIs (base IRQ 0x%x)\n",
 		count, phb->msi_base);
 }
-#else
-static void pnv_pci_init_ioda_msis(struct pnv_phb *phb) { }
-#endif /* CONFIG_PCI_MSI */
 
 #ifdef CONFIG_PCI_IOV
 static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev)
@@ -3402,8 +3354,7 @@ static void pnv_pci_setup_bridge(struct pci_bus *bus, unsigned long type)
 		return;
 
 	/* Reserve PEs according to used M64 resources */
-	if (phb->reserve_m64_pe)
-		phb->reserve_m64_pe(bus, NULL, all);
+	pnv_ioda_reserve_m64_pe(bus, NULL, all);
 
 	/*
 	 * Assign PE. We might run here because of partial hotplug.
@@ -3687,6 +3638,15 @@ static void pnv_pci_release_device(struct pci_dev *pdev)
 	pnv_ioda_release_pe(pe);
 }
 
+static void pnv_npu_disable_device(struct pci_dev *pdev)
+{
+	struct eeh_dev *edev = pci_dev_to_eeh_dev(pdev);
+	struct eeh_pe *eehpe = edev ? edev->pe : NULL;
+
+	if (eehpe && eeh_ops && eeh_ops->reset)
+		eeh_ops->reset(eehpe, EEH_RESET_HOT);
+}
+
 static void pnv_pci_ioda_shutdown(struct pci_controller *hose)
 {
 	struct pnv_phb *phb = hose->private_data;
@@ -3698,10 +3658,8 @@ static void pnv_pci_ioda_shutdown(struct pci_controller *hose)
 static const struct pci_controller_ops pnv_pci_ioda_controller_ops = {
 	.dma_dev_setup		= pnv_pci_dma_dev_setup,
 	.dma_bus_setup		= pnv_pci_dma_bus_setup,
-#ifdef CONFIG_PCI_MSI
 	.setup_msi_irqs		= pnv_setup_msi_irqs,
 	.teardown_msi_irqs	= pnv_teardown_msi_irqs,
-#endif
 	.enable_device_hook	= pnv_pci_enable_device_hook,
 	.release_device		= pnv_pci_release_device,
 	.window_alignment	= pnv_pci_window_alignment,
@@ -3722,15 +3680,14 @@ static int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask)
 
 static const struct pci_controller_ops pnv_npu_ioda_controller_ops = {
 	.dma_dev_setup		= pnv_pci_dma_dev_setup,
-#ifdef CONFIG_PCI_MSI
 	.setup_msi_irqs		= pnv_setup_msi_irqs,
 	.teardown_msi_irqs	= pnv_teardown_msi_irqs,
-#endif
 	.enable_device_hook	= pnv_pci_enable_device_hook,
 	.window_alignment	= pnv_pci_window_alignment,
 	.reset_secondary_bus	= pnv_pci_reset_secondary_bus,
 	.dma_set_mask		= pnv_npu_dma_set_mask,
 	.shutdown		= pnv_pci_ioda_shutdown,
+	.disable_device		= pnv_npu_disable_device,
 };
 
 static const struct pci_controller_ops pnv_npu_ocapi_ioda_controller_ops = {
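The new pnv_ioda_setup_bus_iommu_group_add_devices() above recurses into subordinate buses only for PNV_IODA_PE_BUS_ALL PEs, so every device behind a legacy bridge lands in the bridge PE's group. A self-contained toy model of that traversal (toy types, not the kernel's):

	#include <stdio.h>

	/* Toy bus tree mirroring the PNV_IODA_PE_BUS_ALL recursion above. */
	struct bus;
	struct dev { const char *name; struct bus *subordinate; };
	struct bus { struct dev *devs; int ndevs; };

	static void add_devices(struct bus *bus, int recurse)
	{
		for (int i = 0; i < bus->ndevs; i++) {
			printf("add %s\n", bus->devs[i].name);	/* iommu_add_device() */
			if (recurse && bus->devs[i].subordinate)
				add_devices(bus->devs[i].subordinate, recurse);
		}
	}

	int main(void)
	{
		struct dev leaf = { "ep0", NULL };
		struct bus sub = { &leaf, 1 };
		struct dev bridge = { "bridge", &sub };
		struct bus root = { &bridge, 1 };

		add_devices(&root, 1);	/* prints bridge, then ep0 */
		return 0;
	}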
diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
index 13aef2323bbc..45fb70b4bfa7 100644
--- a/arch/powerpc/platforms/powernv/pci.c
+++ b/arch/powerpc/platforms/powernv/pci.c
@@ -160,7 +160,6 @@ exit:
 }
 EXPORT_SYMBOL_GPL(pnv_pci_set_power_state);
 
-#ifdef CONFIG_PCI_MSI
 int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
 {
 	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
@@ -229,7 +228,6 @@ void pnv_teardown_msi_irqs(struct pci_dev *pdev)
 		msi_bitmap_free_hwirqs(&phb->msi_bmp, hwirq - phb->msi_base, 1);
 	}
 }
-#endif /* CONFIG_PCI_MSI */
 
 /* Nicely print the contents of the PE State Tables (PEST). */
 static void pnv_pci_dump_pest(__be64 pestA[], __be64 pestB[], int pest_size)
@@ -602,8 +600,8 @@ static void pnv_pci_handle_eeh_config(struct pnv_phb *phb, u32 pe_no)
 static void pnv_pci_config_check_eeh(struct pci_dn *pdn)
 {
 	struct pnv_phb *phb = pdn->phb->private_data;
-	u8 fstate;
-	__be16 pcierr;
+	u8 fstate = 0;
+	__be16 pcierr = 0;
 	unsigned int pe_no;
 	s64 rc;
@@ -1127,4 +1125,45 @@ void __init pnv_pci_init(void)
 	set_pci_dma_ops(&dma_iommu_ops);
 }
 
-machine_subsys_initcall_sync(powernv, tce_iommu_bus_notifier_init);
+static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
+		unsigned long action, void *data)
+{
+	struct device *dev = data;
+	struct pci_dev *pdev;
+	struct pci_dn *pdn;
+	struct pnv_ioda_pe *pe;
+	struct pci_controller *hose;
+	struct pnv_phb *phb;
+
+	switch (action) {
+	case BUS_NOTIFY_ADD_DEVICE:
+		pdev = to_pci_dev(dev);
+		pdn = pci_get_pdn(pdev);
+		hose = pci_bus_to_host(pdev->bus);
+		phb = hose->private_data;
+
+		WARN_ON_ONCE(!phb);
+		if (!pdn || pdn->pe_number == IODA_INVALID_PE || !phb)
+			return 0;
+
+		pe = &phb->ioda.pe_array[pdn->pe_number];
+		iommu_add_device(&pe->table_group, dev);
+		return 0;
+	case BUS_NOTIFY_DEL_DEVICE:
+		iommu_del_device(dev);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static struct notifier_block pnv_tce_iommu_bus_nb = {
+	.notifier_call = pnv_tce_iommu_bus_notifier,
+};
+
+static int __init pnv_tce_iommu_bus_notifier_init(void)
+{
+	bus_register_notifier(&pci_bus_type, &pnv_tce_iommu_bus_nb);
+	return 0;
+}
+machine_subsys_initcall_sync(powernv, pnv_tce_iommu_bus_notifier_init);
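The notifier added above follows the standard bus-notifier shape: a notifier_block wrapping a callback that switches on the action code. A minimal sketch of the same pattern for reference (demo_notifier and demo_nb are made-up names; note the kernel's NOTIFY_DONE is 0, which is why returning 0 as above is equivalent):

	/* Minimal shape of a PCI bus notifier; illustrative, not from this patch. */
	static int demo_notifier(struct notifier_block *nb, unsigned long action,
				 void *data)
	{
		struct device *dev = data;

		switch (action) {
		case BUS_NOTIFY_ADD_DEVICE:
			dev_info(dev, "device added\n");
			return NOTIFY_OK;
		default:
			return NOTIFY_DONE;	/* not interested */
		}
	}

	static struct notifier_block demo_nb = {
		.notifier_call = demo_notifier,
	};

	/* registered from an initcall: bus_register_notifier(&pci_bus_type, &demo_nb); */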
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 8b37b28e3831..8e36da379252 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -8,9 +8,6 @@
 
 struct pci_dn;
 
-/* Maximum possible number of ATSD MMIO registers per NPU */
-#define NV_NMMU_ATSD_REGS 8
-
 enum pnv_phb_type {
 	PNV_PHB_IODA1		= 0,
 	PNV_PHB_IODA2		= 1,
@@ -65,6 +62,7 @@ struct pnv_ioda_pe {
 
 	/* "Base" iommu table, ie, 4K TCEs, 32-bit DMA */
 	struct iommu_table_group table_group;
+	struct npu_comp		*npucomp;
 
 	/* 64-bit TCE bypass region */
 	bool			tce_bypass_enabled;
@@ -106,20 +104,14 @@ struct pnv_phb {
 	struct dentry		*dbgfs;
 #endif
 
-#ifdef CONFIG_PCI_MSI
 	unsigned int		msi_base;
 	unsigned int		msi32_support;
 	struct msi_bitmap	msi_bmp;
-#endif
 	int (*msi_setup)(struct pnv_phb *phb, struct pci_dev *dev,
 			 unsigned int hwirq, unsigned int virq,
 			 unsigned int is_64, struct msi_msg *msg);
 	void (*dma_dev_setup)(struct pnv_phb *phb, struct pci_dev *pdev);
-	void (*fixup_phb)(struct pci_controller *hose);
 	int (*init_m64)(struct pnv_phb *phb);
-	void (*reserve_m64_pe)(struct pci_bus *bus,
-			       unsigned long *pe_bitmap, bool all);
-	struct pnv_ioda_pe *(*pick_m64_pe)(struct pci_bus *bus, bool all);
 	int (*get_pe_state)(struct pnv_phb *phb, int pe_no);
 	void (*freeze_pe)(struct pnv_phb *phb, int pe_no);
 	int (*unfreeze_pe)(struct pnv_phb *phb, int pe_no, int opt);
@@ -180,19 +172,6 @@ struct pnv_phb {
 	unsigned int		diag_data_size;
 	u8			*diag_data;
 
-	/* Nvlink2 data */
-	struct npu {
-		int index;
-		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
-		unsigned int mmio_atsd_count;
-
-		/* Bitmask for MMIO register usage */
-		unsigned long mmio_atsd_usage;
-
-		/* Do we need to explicitly flush the nest mmu? */
-		bool nmmu_flush;
-	} npu;
-
 	int p2p_target_count;
 };
@@ -210,6 +189,7 @@ extern void pnv_pci_init_ioda_hub(struct device_node *np);
 extern void pnv_pci_init_ioda2_phb(struct device_node *np);
 extern void pnv_pci_init_npu_phb(struct device_node *np);
 extern void pnv_pci_init_npu2_opencapi_phb(struct device_node *np);
+extern void pnv_npu2_map_lpar(struct pnv_ioda_pe *gpe, unsigned long msr);
 extern void pnv_pci_reset_secondary_bus(struct pci_dev *dev);
 extern int pnv_eeh_phb_reset(struct pci_controller *hose, int option);
@@ -220,6 +200,8 @@ extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 extern struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev);
 extern void pnv_set_msi_irq_chip(struct pnv_phb *phb, unsigned int virq);
 extern void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable);
+extern unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
+		__u64 window_size, __u32 levels);
 extern int pnv_eeh_post_init(void);
 
 extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
@@ -235,12 +217,10 @@ extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
-extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
-		struct iommu_table *tbl);
-extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
-extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
-extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
-extern int pnv_npu2_init(struct pnv_phb *phb);
+extern struct iommu_table_group *pnv_try_setup_npu_table_group(
+		struct pnv_ioda_pe *pe);
+extern struct iommu_table_group *pnv_npu_compound_attach(
+		struct pnv_ioda_pe *pe);
 
 /* pci-ioda-tce.c */
 #define POWERNV_IOMMU_DEFAULT_LEVELS	1
diff --git a/arch/powerpc/platforms/powernv/vas-debug.c b/arch/powerpc/platforms/powernv/vas-debug.c
index 4f7276ebdf9c..4d3929fbc08f 100644
--- a/arch/powerpc/platforms/powernv/vas-debug.c
+++ b/arch/powerpc/platforms/powernv/vas-debug.c
@@ -30,7 +30,7 @@ static char *cop_to_str(int cop)
 	}
 }
 
-static int info_dbg_show(struct seq_file *s, void *private)
+static int info_show(struct seq_file *s, void *private)
 {
 	struct vas_window *window = s->private;
 
@@ -49,17 +49,7 @@ unlock:
 	return 0;
 }
 
-static int info_dbg_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, info_dbg_show, inode->i_private);
-}
-
-static const struct file_operations info_fops = {
-	.open		= info_dbg_open,
-	.read		= seq_read,
-	.llseek		= seq_lseek,
-	.release	= single_release,
-};
+DEFINE_SHOW_ATTRIBUTE(info);
 
 static inline void print_reg(struct seq_file *s, struct vas_window *win,
 			char *name, u32 reg)
@@ -67,7 +57,7 @@ static inline void print_reg(struct seq_file *s, struct vas_window *win,
 	seq_printf(s, "0x%016llx %s\n", read_hvwc_reg(win, name, reg), name);
 }
 
-static int hvwc_dbg_show(struct seq_file *s, void *private)
+static int hvwc_show(struct seq_file *s, void *private)
 {
 	struct vas_window *window = s->private;
 
@@ -115,17 +105,7 @@ unlock:
 	return 0;
 }
 
-static int hvwc_dbg_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, hvwc_dbg_show, inode->i_private);
-}
-
-static const struct file_operations hvwc_fops = {
-	.open		= hvwc_dbg_open,
-	.read		= seq_read,
-	.llseek		= seq_lseek,
-	.release	= single_release,
-};
+DEFINE_SHOW_ATTRIBUTE(hvwc);
 
 void vas_window_free_dbgdir(struct vas_window *window)
 {
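The vas-debug.c conversion above works because DEFINE_SHOW_ATTRIBUTE(name) generates the boilerplate from a function named name_show() (hence the info_dbg_show to info_show renames). Paraphrased from memory of include/linux/seq_file.h of this era, the macro expands to roughly the following; treat it as a sketch rather than the exact upstream text:

	#define DEFINE_SHOW_ATTRIBUTE(__name)					\
	static int __name ## _open(struct inode *inode, struct file *file)	\
	{									\
		return single_open(file, __name ## _show, inode->i_private);	\
	}									\
										\
	static const struct file_operations __name ## _fops = {		\
		.owner		= THIS_MODULE,					\
		.open		= __name ## _open,				\
		.read		= seq_read,					\
		.llseek		= seq_lseek,					\
		.release	= single_release,				\
	}

So the diff deletes the hand-written _open helpers and file_operations because the macro now emits equivalent ones.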
diff --git a/arch/powerpc/platforms/ps3/Kconfig b/arch/powerpc/platforms/ps3/Kconfig
index 24864b8aaf5d..e32406e918d0 100644
--- a/arch/powerpc/platforms/ps3/Kconfig
+++ b/arch/powerpc/platforms/ps3/Kconfig
@@ -6,7 +6,7 @@ config PPC_PS3
 	select USB_OHCI_LITTLE_ENDIAN
 	select USB_OHCI_BIG_ENDIAN_MMIO
 	select USB_EHCI_BIG_ENDIAN_MMIO
-	select PPC_PCI_CHOICE
+	select HAVE_PCI
 	help
 	  This option enables support for the Sony PS3 game console
 	  and other platforms using the PS3 hypervisor.  Enabling this
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 472b784f01eb..9c6b3d860518 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -5,7 +5,7 @@ config PPC_PSERIES
 	select HAVE_PCSPKR_PLATFORM
 	select MPIC
 	select OF_DYNAMIC
-	select PCI
+	select FORCE_PCI
 	select PCI_MSI
 	select PPC_XICS
 	select PPC_XIVE_SPAPR
diff --git a/arch/powerpc/platforms/pseries/cmm.c b/arch/powerpc/platforms/pseries/cmm.c
index 25427a48feae..e8d63a6a9002 100644
--- a/arch/powerpc/platforms/pseries/cmm.c
+++ b/arch/powerpc/platforms/pseries/cmm.c
@@ -208,7 +208,7 @@ static long cmm_alloc_pages(long nr)
 		pa->page[pa->index++] = addr;
 		loaned_pages++;
-		totalram_pages--;
+		totalram_pages_dec();
 		spin_unlock(&cmm_lock);
 		nr--;
 	}
@@ -247,7 +247,7 @@ static long cmm_free_pages(long nr)
 		free_page(addr);
 		loaned_pages--;
 		nr--;
-		totalram_pages++;
+		totalram_pages_inc();
 	}
 	spin_unlock(&cmm_lock);
 	cmm_dbg("End request with %ld pages unfulfilled\n", nr);
@@ -291,7 +291,7 @@ static void cmm_get_mpp(void)
 	int rc;
 	struct hvcall_mpp_data mpp_data;
 	signed long active_pages_target, page_loan_request, target;
-	signed long total_pages = totalram_pages + loaned_pages;
+	signed long total_pages = totalram_pages() + loaned_pages;
 	signed long min_mem_pages = (min_mem_mb * 1024 * 1024) / PAGE_SIZE;
 
 	rc = h_get_mpp(&mpp_data);
@@ -322,7 +322,7 @@ static void cmm_get_mpp(void)
 
 	cmm_dbg("delta = %ld, loaned = %lu, target = %lu, oom = %lu, totalram = %lu\n",
 		page_loan_request, loaned_pages, loaned_pages_target,
-		oom_freed_pages, totalram_pages);
+		oom_freed_pages, totalram_pages());
 }
 
 static struct notifier_block cmm_oom_nb = {
@@ -581,7 +581,7 @@ static int cmm_mem_going_offline(void *arg)
 			free_page(pa_curr->page[idx]);
 			freed++;
 			loaned_pages--;
-			totalram_pages++;
+			totalram_pages_inc();
 			pa_curr->page[idx] = pa_last->page[--pa_last->index];
 			if (pa_last->index == 0) {
 				if (pa_curr == pa_last)
diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
index 7625546caefd..17958043e7f7 100644
--- a/arch/powerpc/platforms/pseries/dlpar.c
+++ b/arch/powerpc/platforms/pseries/dlpar.c
@@ -270,6 +270,8 @@ int dlpar_detach_node(struct device_node *dn)
 	if (rc)
 		return rc;
 
+	of_node_put(dn);
+
 	return 0;
 }
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 2a983b5a52e1..d291b618a559 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -197,6 +197,7 @@ static int update_lmb_associativity_index(struct drmem_lmb *lmb)
 
 	found = find_aa_index(dr_node, ala_prop, lmb_assoc, &aa_index);
 
+	of_node_put(dr_node);
 	dlpar_free_cc_nodes(lmb_node);
 
 	if (!found) {
@@ -313,7 +314,6 @@ out:
 
 static int pseries_remove_mem_node(struct device_node *np)
 {
-	const char *type;
 	const __be32 *regs;
 	unsigned long base;
 	unsigned int lmb_size;
@@ -322,8 +322,7 @@ static int pseries_remove_mem_node(struct device_node *np)
 	/*
 	 * Check to see if we are actually removing memory
 	 */
-	type = of_get_property(np, "device_type", NULL);
-	if (type == NULL || strcmp(type, "memory") != 0)
+	if (!of_node_is_type(np, "memory"))
 		return 0;
 
 	/*
@@ -355,8 +354,11 @@ static bool lmb_is_removable(struct drmem_lmb *lmb)
 	phys_addr = lmb->base_addr;
 
 #ifdef CONFIG_FA_DUMP
-	/* Don't hot-remove memory that falls in fadump boot memory area */
-	if (is_fadump_boot_memory_area(phys_addr, block_sz))
+	/*
+	 * Don't hot-remove memory that falls in fadump boot memory area
+	 * and memory that is reserved for capturing old kernel memory.
+	 */
+	if (is_fadump_memory_area(phys_addr, block_sz))
 		return false;
 #endif
 
@@ -936,7 +938,6 @@ int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
 
 static int pseries_add_mem_node(struct device_node *np)
 {
-	const char *type;
 	const __be32 *regs;
 	unsigned long base;
 	unsigned int lmb_size;
@@ -945,8 +946,7 @@ static int pseries_add_mem_node(struct device_node *np)
 	/*
 	 * Check to see if we are actually adding memory
 	 */
-	type = of_get_property(np, "device_type", NULL);
-	if (type == NULL || strcmp(type, "memory") != 0)
+	if (!of_node_is_type(np, "memory"))
 		return 0;
 
 	/*
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 06f02960b439..8fc8fe0b9848 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -57,7 +57,6 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
 {
 	struct iommu_table_group *table_group;
 	struct iommu_table *tbl;
-	struct iommu_table_group_link *tgl;
 
 	table_group = kzalloc_node(sizeof(struct iommu_table_group), GFP_KERNEL,
 			   node);
@@ -68,22 +67,13 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
 	if (!tbl)
 		goto free_group;
 
-	tgl = kzalloc_node(sizeof(struct iommu_table_group_link), GFP_KERNEL,
-			node);
-	if (!tgl)
-		goto free_table;
-
 	INIT_LIST_HEAD_RCU(&tbl->it_group_list);
 	kref_init(&tbl->it_kref);
-	tgl->table_group = table_group;
-	list_add_rcu(&tgl->next, &tbl->it_group_list);
 
 	table_group->tables[0] = tbl;
 
 	return table_group;
 
-free_table:
-	kfree(tbl);
 free_group:
 	kfree(table_group);
 	return NULL;
@@ -93,23 +83,12 @@ static void iommu_pseries_free_group(struct iommu_table_group *table_group,
 		const char *node_name)
 {
 	struct iommu_table *tbl;
-#ifdef CONFIG_IOMMU_API
-	struct iommu_table_group_link *tgl;
-#endif
 
 	if (!table_group)
 		return;
 
 	tbl = table_group->tables[0];
 #ifdef CONFIG_IOMMU_API
-	tgl = list_first_entry_or_null(&tbl->it_group_list,
-			struct iommu_table_group_link, next);
-
-	WARN_ON_ONCE(!tgl);
-	if (tgl) {
-		list_del_rcu(&tgl->next);
-		kfree(tgl);
-	}
 	if (table_group->group) {
 		iommu_group_put(table_group->group);
 		BUG_ON(table_group->group);
@@ -645,7 +624,6 @@ static void pci_dma_bus_setup_pSeries(struct pci_bus *bus)
 	iommu_table_setparms(pci->phb, dn, tbl);
 	tbl->it_ops = &iommu_table_pseries_ops;
 	iommu_init_table(tbl, pci->phb->node);
-	iommu_register_group(pci->table_group, pci_domain_nr(bus), 0);
 
 	/* Divide the rest (1.75GB) among the children */
 	pci->phb->dma_window_size = 0x80000000ul;
@@ -756,10 +734,7 @@ static void pci_dma_dev_setup_pSeries(struct pci_dev *dev)
 		iommu_table_setparms(phb, dn, tbl);
 		tbl->it_ops = &iommu_table_pseries_ops;
 		iommu_init_table(tbl, phb->node);
-		iommu_register_group(PCI_DN(dn)->table_group,
-				pci_domain_nr(phb->bus), 0);
 		set_iommu_table_base(&dev->dev, tbl);
-		iommu_add_device(&dev->dev);
 		return;
 	}
 
@@ -770,11 +745,10 @@ static void pci_dma_dev_setup_pSeries(struct pci_dev *dev)
 	while (dn && PCI_DN(dn) && PCI_DN(dn)->table_group == NULL)
 		dn = dn->parent;
 
-	if (dn && PCI_DN(dn)) {
+	if (dn && PCI_DN(dn))
 		set_iommu_table_base(&dev->dev,
 				PCI_DN(dn)->table_group->tables[0]);
-		iommu_add_device(&dev->dev);
-	} else
+	else
 		printk(KERN_WARNING "iommu: Device %s has no iommu table\n",
 		       pci_name(dev));
 }
@@ -964,6 +938,37 @@ struct failed_ddw_pdn {
 
 static LIST_HEAD(failed_ddw_pdn_list);
 
+static phys_addr_t ddw_memory_hotplug_max(void)
+{
+	phys_addr_t max_addr = memory_hotplug_max();
+	struct device_node *memory;
+
+	for_each_node_by_type(memory, "memory") {
+		unsigned long start, size;
+		int ranges, n_mem_addr_cells, n_mem_size_cells, len;
+		const __be32 *memcell_buf;
+
+		memcell_buf = of_get_property(memory, "reg", &len);
+		if (!memcell_buf || len <= 0)
+			continue;
+
+		n_mem_addr_cells = of_n_addr_cells(memory);
+		n_mem_size_cells = of_n_size_cells(memory);
+
+		/* ranges in cell */
+		ranges = (len >> 2) / (n_mem_addr_cells + n_mem_size_cells);
+
+		start = of_read_number(memcell_buf, n_mem_addr_cells);
+		memcell_buf += n_mem_addr_cells;
+		size = of_read_number(memcell_buf, n_mem_size_cells);
+		memcell_buf += n_mem_size_cells;
+
+		max_addr = max_t(phys_addr_t, max_addr, start + size);
+	}
+
+	return max_addr;
+}
+
 /*
  * If the PE supports dynamic dma windows, and there is space for a table
  * that can map all pages in a linear offset, then setup such a table,
@@ -1053,7 +1058,7 @@ static u64 enable_ddw(struct pci_dev *dev, struct device_node *pdn)
 	}
 	/* verify the window * number of ptes will map the partition */
 	/* check largest block * page size > max memory hotplug addr */
-	max_addr = memory_hotplug_max();
+	max_addr = ddw_memory_hotplug_max();
 	if (query.largest_available_block < (max_addr >> page_shift)) {
 		dev_dbg(&dev->dev, "can't map partition max 0x%llx with %u "
			  "%llu-sized pages\n", max_addr, query.largest_available_block,
@@ -1190,7 +1195,7 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
 	}
 
 	set_iommu_table_base(&dev->dev, pci->table_group->tables[0]);
-	iommu_add_device(&dev->dev);
+	iommu_add_device(pci->table_group, &dev->dev);
 }
 
 static int dma_set_mask_pSeriesLP(struct device *dev, u64 dma_mask)
@@ -1395,4 +1400,27 @@ static int __init disable_multitce(char *str)
 
 __setup("multitce=", disable_multitce);
 
+static int tce_iommu_bus_notifier(struct notifier_block *nb,
+		unsigned long action, void *data)
+{
+	struct device *dev = data;
+
+	switch (action) {
+	case BUS_NOTIFY_DEL_DEVICE:
+		iommu_del_device(dev);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static struct notifier_block tce_iommu_bus_nb = {
+	.notifier_call = tce_iommu_bus_notifier,
+};
+
+static int __init tce_iommu_bus_notifier_init(void)
+{
+	bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
+	return 0;
+}
 machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
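The new ddw_memory_hotplug_max() above decodes each "memory" node's reg property as address cells followed by size cells. A self-contained toy of the cell decoding (real OF cells are big-endian __be32 read via of_read_number(); this sketch assumes host-order words for simplicity):

	#include <stdint.h>
	#include <stdio.h>

	/* Toy decode of an OF "reg" property: N 32-bit cells per number. */
	static uint64_t read_cells(const uint32_t *buf, int cells)
	{
		uint64_t v = 0;

		while (cells--)
			v = (v << 32) | *buf++;	/* of_read_number() equivalent */
		return v;
	}

	int main(void)
	{
		/* One range, 2 address cells + 2 size cells: base 4 GiB, size 1 GiB */
		const uint32_t reg[] = { 0x1, 0x00000000, 0x0, 0x40000000 };
		uint64_t base = read_cells(reg, 2);
		uint64_t size = read_cells(reg + 2, 2);

		printf("end = 0x%llx\n",
		       (unsigned long long)(base + size));	/* 0x140000000 */
		return 0;
	}

The function keeps the maximum of base + size across all ranges, which is exactly the bound the DDW sizing check needs.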
diff --git a/arch/powerpc/platforms/pseries/pci.c b/arch/powerpc/platforms/pseries/pci.c
index 41d8a4d1d02e..7725825d887d 100644
--- a/arch/powerpc/platforms/pseries/pci.c
+++ b/arch/powerpc/platforms/pseries/pci.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 
 #include "pseries.h"
 
 #if 0
@@ -237,6 +238,8 @@ static void __init pSeries_request_regions(void)
 
 void __init pSeries_final_fixup(void)
 {
+	struct pci_controller *hose;
+
 	pSeries_request_regions();
 
 	eeh_probe_devices();
@@ -246,6 +249,25 @@ void __init pSeries_final_fixup(void)
 	ppc_md.pcibios_sriov_enable = pseries_pcibios_sriov_enable;
 	ppc_md.pcibios_sriov_disable = pseries_pcibios_sriov_disable;
 #endif
+	list_for_each_entry(hose, &hose_list, list_node) {
+		struct device_node *dn = hose->dn, *nvdn;
+
+		while (1) {
+			dn = of_find_all_nodes(dn);
+			if (!dn)
+				break;
+			nvdn = of_parse_phandle(dn, "ibm,nvlink", 0);
+			if (!nvdn)
+				continue;
+			if (!of_device_is_compatible(nvdn, "ibm,npu-link"))
+				continue;
+			if (!of_device_is_compatible(nvdn->parent,
+						"ibm,power9-npu"))
+				continue;
+			WARN_ON_ONCE(pnv_npu2_init(hose));
+			break;
+		}
+	}
 }
diff --git a/arch/powerpc/platforms/pseries/pmem.c b/arch/powerpc/platforms/pseries/pmem.c
index a27f40eb57b1..27f0a915c8a9 100644
--- a/arch/powerpc/platforms/pseries/pmem.c
+++ b/arch/powerpc/platforms/pseries/pmem.c
@@ -52,8 +52,8 @@ static ssize_t pmem_drc_add_node(u32 drc_index)
 	/* NB: The of reconfig notifier creates platform device from the node */
 	rc = dlpar_attach_node(dn, pmem_node);
 	if (rc) {
-		pr_err("Failed to attach node %s, rc: %d, drc index: %x\n",
-			dn->name, rc, drc_index);
+		pr_err("Failed to attach node %pOF, rc: %d, drc index: %x\n",
+			dn, rc, drc_index);
 
 		if (dlpar_release_drc(drc_index))
 			dlpar_free_cc_nodes(dn);
@@ -93,8 +93,8 @@ static ssize_t pmem_drc_remove_node(u32 drc_index)
 
 	rc = dlpar_release_drc(drc_index);
 	if (rc) {
-		pr_err("Failed to release drc (%x) for CPU %s, rc: %d\n",
-			drc_index, dn->name, rc);
+		pr_err("Failed to release drc (%x) for CPU %pOFn, rc: %d\n",
+			drc_index, dn, rc);
 		dlpar_attach_node(dn, pmem_node);
 		return rc;
 	}
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 0f553dcfa548..41f62ca27c63 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -190,7 +190,7 @@ static void __init pseries_setup_i8259_cascade(void)
 		of_node_put(old);
 		if (np == NULL)
 			break;
-		if (strcmp(np->name, "pci") != 0)
+		if (!of_node_name_eq(np, "pci"))
 			continue;
 		addrp = of_get_property(np, "8259-interrupt-acknowledge", NULL);
 		if (addrp == NULL)
@@ -469,8 +469,8 @@ static void __init find_and_init_phbs(void)
 	struct device_node *root = of_find_node_by_path("/");
 
 	for_each_child_of_node(root, node) {
-		if (node->type == NULL || (strcmp(node->type, "pci") != 0 &&
-					   strcmp(node->type, "pciex") != 0))
+		if (!of_node_is_type(node, "pci") &&
+		    !of_node_is_type(node, "pciex"))
 			continue;
 
 		phb = pcibios_alloc_controller(node);
@@ -978,11 +978,7 @@ static void pseries_power_off(void)
 
 static int __init pSeries_probe(void)
 {
-	const char *dtype = of_get_property(of_root, "device_type", NULL);
-
-	if (dtype == NULL)
-		return 0;
-	if (strcmp(dtype, "chrp"))
+	if (!of_node_is_type(of_root, "chrp"))
 		return 0;
 
 	/* Cell blades firmware claims to be chrp while it's not. Until this
diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
index 88f1ad1d6309..1fad4649735b 100644
--- a/arch/powerpc/platforms/pseries/vio.c
+++ b/arch/powerpc/platforms/pseries/vio.c
@@ -519,7 +519,7 @@ static dma_addr_t vio_dma_iommu_map_page(struct device *dev, struct page *page,
 {
 	struct vio_dev *viodev = to_vio_dev(dev);
 	struct iommu_table *tbl;
-	dma_addr_t ret = IOMMU_MAPPING_ERROR;
+	dma_addr_t ret = DMA_MAPPING_ERROR;
 
 	tbl = get_iommu_table_base(dev);
 	if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl)))) {
@@ -625,7 +625,6 @@ static const struct dma_map_ops vio_dma_mapping_ops = {
 	.unmap_page        = vio_dma_iommu_unmap_page,
 	.dma_supported     = vio_dma_iommu_dma_supported,
 	.get_required_mask = vio_dma_get_required_mask,
-	.mapping_error	   = dma_iommu_mapping_error,
 };
 
 /**
@@ -1356,9 +1355,9 @@ struct vio_dev *vio_register_device_node(struct device_node *of_node)
 	 */
 	parent_node = of_get_parent(of_node);
 	if (parent_node) {
-		if (!strcmp(parent_node->type, "ibm,platform-facilities"))
+		if (of_node_is_type(parent_node, "ibm,platform-facilities"))
 			family = PFO;
-		else if (!strcmp(parent_node->type, "vdevice"))
+		else if (of_node_is_type(parent_node, "vdevice"))
 			family = VDEVICE;
 		else {
 			pr_warn("%s: parent(%pOF) of %pOFn not recognized.\n",
@@ -1395,9 +1394,8 @@ struct vio_dev *vio_register_device_node(struct device_node *of_node)
 	if (viodev->family == VDEVICE) {
 		unsigned int unit_address;
 
-		if (of_node->type != NULL)
-			viodev->type = of_node->type;
-		else {
+		viodev->type = of_node_get_device_type(of_node);
+		if (!viodev->type) {
 			pr_warn("%s: node %pOFn is missing the 'device_type' "
 				"property.\n", __func__, of_node);
 			goto out;
@@ -1672,32 +1670,30 @@ struct vio_dev *vio_find_node(struct device_node *vnode)
 {
 	char kobj_name[20];
 	struct device_node *vnode_parent;
-	const char *dev_type;
 
 	vnode_parent = of_get_parent(vnode);
 	if (!vnode_parent)
 		return NULL;
 
-	dev_type = of_get_property(vnode_parent, "device_type", NULL);
-	of_node_put(vnode_parent);
-	if (!dev_type)
-		return NULL;
-
 	/* construct the kobject name from the device node */
-	if (!strcmp(dev_type, "vdevice")) {
+	if (of_node_is_type(vnode_parent, "vdevice")) {
 		const __be32 *prop;
 
 		prop = of_get_property(vnode, "reg", NULL);
 		if (!prop)
-			return NULL;
+			goto out;
 		snprintf(kobj_name, sizeof(kobj_name), "%x",
 			 (uint32_t)of_read_number(prop, 1));
-	} else if (!strcmp(dev_type, "ibm,platform-facilities"))
+	} else if (of_node_is_type(vnode_parent, "ibm,platform-facilities"))
 		snprintf(kobj_name, sizeof(kobj_name), "%pOFn", vnode);
 	else
-		return NULL;
+		goto out;
 
+	of_node_put(vnode_parent);
 	return vio_find_name(kobj_name);
+out:
+	of_node_put(vnode_parent);
+	return NULL;
 }
 EXPORT_SYMBOL(vio_find_node);
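A recurring conversion in the pseries hunks above (and in opal.c earlier) replaces open-coded device_type/name string compares with of_node_is_type() and of_node_name_eq(). Beyond brevity, the accessors are NULL-safe on both the node and the property, which removes a class of crashes the old code had to guard against explicitly. A hedged before/after fragment (do_memory_node() is a placeholder):

	/* Before: needs an explicit NULL check, or it crashes when
	 * "device_type" is absent. */
	const char *type = of_get_property(np, "device_type", NULL);
	if (type && strcmp(type, "memory") == 0)
		do_memory_node(np);

	/* After: one call, safe for a NULL node or a missing property. */
	if (of_node_is_type(np, "memory"))
		do_memory_node(np);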
diff --git a/arch/powerpc/sysdev/Makefile b/arch/powerpc/sysdev/Makefile
index 2caa4defdfb6..aaf23283ba0c 100644
--- a/arch/powerpc/sysdev/Makefile
+++ b/arch/powerpc/sysdev/Makefile
@@ -48,7 +48,7 @@ obj-$(CONFIG_PPC_MPC512x)	+= mpc5xxx_clocks.o
 obj-$(CONFIG_PPC_MPC52xx)	+= mpc5xxx_clocks.o
 
 ifdef CONFIG_SUSPEND
-obj-$(CONFIG_6xx)		+= 6xx-suspend.o
+obj-$(CONFIG_PPC_BOOK3S_32)	+= 6xx-suspend.o
 endif
 
 obj-$(CONFIG_PPC_SCOM)		+= scom.o
diff --git a/arch/powerpc/sysdev/fsl_rio.h b/arch/powerpc/sysdev/fsl_rio.h
index 12dd18fd4795..6c13d9a7b7b2 100644
--- a/arch/powerpc/sysdev/fsl_rio.h
+++ b/arch/powerpc/sysdev/fsl_rio.h
@@ -41,7 +41,7 @@
 #define DOORBELL_ROWAR_PCI	0x02000000 /* PCI window */
 #define DOORBELL_ROWAR_NREAD	0x00040000 /* NREAD */
 #define DOORBELL_ROWAR_MAINTRD	0x00070000 /* maintenance read */
-#define DOORBELL_ROWAR_RES	0x00002000 /* wrtpy: reserverd */
+#define DOORBELL_ROWAR_RES	0x00002000 /* wrtpy: reserved */
 #define DOORBELL_ROWAR_MAINTWD	0x00007000
 #define DOORBELL_ROWAR_SIZE	0x0000000b /* window size is 4k */
diff --git a/arch/powerpc/sysdev/fsl_rmu.c b/arch/powerpc/sysdev/fsl_rmu.c
index 88b35a3dcdc5..8b0ebf3940d2 100644
--- a/arch/powerpc/sysdev/fsl_rmu.c
+++ b/arch/powerpc/sysdev/fsl_rmu.c
@@ -756,15 +756,13 @@ fsl_open_outb_mbox(struct rio_mport *mport, void *dev_id, int mbox, int entries)
 	}
 
 	/* Initialize outbound message descriptor ring */
-	rmu->msg_tx_ring.virt = dma_alloc_coherent(priv->dev,
+	rmu->msg_tx_ring.virt = dma_zalloc_coherent(priv->dev,
 				rmu->msg_tx_ring.size * RIO_MSG_DESC_SIZE,
 				&rmu->msg_tx_ring.phys, GFP_KERNEL);
 	if (!rmu->msg_tx_ring.virt) {
 		rc = -ENOMEM;
 		goto out_dma;
 	}
-	memset(rmu->msg_tx_ring.virt, 0,
-	       rmu->msg_tx_ring.size * RIO_MSG_DESC_SIZE);
 	rmu->msg_tx_ring.tx_slot = 0;
 
 	/* Point dequeue/enqueue pointers at first entry in ring */
diff --git a/arch/powerpc/sysdev/ipic.c b/arch/powerpc/sysdev/ipic.c
index 6300123ce965..8030a0f55e96 100644
--- a/arch/powerpc/sysdev/ipic.c
+++ b/arch/powerpc/sysdev/ipic.c
@@ -771,34 +771,6 @@ struct ipic * __init ipic_init(struct device_node *node, unsigned int flags)
 	return ipic;
 }
 
-int ipic_set_priority(unsigned int virq, unsigned int priority)
-{
-	struct ipic *ipic = ipic_from_irq(virq);
-	unsigned int src = virq_to_hw(virq);
-	u32 temp;
-
-	if (priority > 7)
-		return -EINVAL;
-	if (src > 127)
-		return -EINVAL;
-	if (ipic_info[src].prio == 0)
-		return -EINVAL;
-
-	temp = ipic_read(ipic->regs, ipic_info[src].prio);
-
-	if (priority < 4) {
-		temp &= ~(0x7 << (20 + (3 - priority) * 3));
-		temp |= ipic_info[src].prio_mask << (20 + (3 - priority) * 3);
-	} else {
-		temp &= ~(0x7 << (4 + (7 - priority) * 3));
-		temp |= ipic_info[src].prio_mask << (4 + (7 - priority) * 3);
-	}
-
-	ipic_write(ipic->regs, ipic_info[src].prio, temp);
-
-	return 0;
-}
-
 void ipic_set_highest_priority(unsigned int virq)
 {
 	struct ipic *ipic = ipic_from_irq(virq);
diff --git a/arch/powerpc/sysdev/scom.c b/arch/powerpc/sysdev/scom.c
index 0f6fd5d04d33..a707b24a7ddb 100644
--- a/arch/powerpc/sysdev/scom.c
+++ b/arch/powerpc/sysdev/scom.c
@@ -60,7 +60,7 @@ scom_map_t scom_map_device(struct device_node *dev, int index)
 
 	parent = scom_find_parent(dev);
 	if (parent == NULL)
-		return 0;
+		return NULL;
 
 	/*
	 * We support "scom-reg" properties for adding scom registers
@@ -83,7 +83,7 @@ scom_map_t scom_map_device(struct device_node *dev, int index)
 		size >>= 2;
 
 	if (index >= (size / (2*cells)))
-		return 0;
+		return NULL;
 
 	reg = of_read_number(&prop[index * cells * 2], cells);
 	cnt = of_read_number(&prop[index * cells * 2 + cells], cells);
diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
index 9824074ec1b5..94a69a62f5db 100644
--- a/arch/powerpc/sysdev/xive/common.c
+++ b/arch/powerpc/sysdev/xive/common.c
@@ -309,7 +309,7 @@ static void xive_do_queue_eoi(struct xive_cpu *xc)
 * EOI an interrupt at the source. There are several methods
 * to do this depending on the HW version and source type
 */
-void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
+static void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
 {
 	/* If the XIVE supports the new "store EOI facility, use it */
 	if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)
diff --git a/arch/powerpc/tools/checkpatch.sh b/arch/powerpc/tools/checkpatch.sh
index 1fad3fb90e7c..3ce5c093b19d 100755
--- a/arch/powerpc/tools/checkpatch.sh
+++ b/arch/powerpc/tools/checkpatch.sh
@@ -19,4 +19,5 @@ exec $script_base/../../../scripts/checkpatch.pl \
 	--ignore GLOBAL_INITIALISERS \
 	--ignore LINE_SPACING \
 	--ignore MULTIPLE_ASSIGNMENTS \
+	--ignore DT_SPLIT_BINDING_PATCH \
 	$@
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 36b8dc47a3c3..757b8499aba2 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -75,6 +75,9 @@ static int xmon_gate;
 #define xmon_owner 0
 #endif /* CONFIG_SMP */
 
+#ifdef CONFIG_PPC_PSERIES
+static int set_indicator_token = RTAS_UNKNOWN_SERVICE;
+#endif
 static unsigned long in_xmon __read_mostly = 0;
 static int xmon_on = IS_ENABLED(CONFIG_XMON_DEFAULT);
@@ -273,7 +276,7 @@ Commands:\n\
  X	exit monitor and don't recover\n"
 #if defined(CONFIG_PPC64) && !defined(CONFIG_PPC_BOOK3E)
 " u	dump segment table or SLB\n"
-#elif defined(CONFIG_PPC_STD_MMU_32)
+#elif defined(CONFIG_PPC_BOOK3S_32)
 " u	dump segment registers\n"
 #elif defined(CONFIG_44x) || defined(CONFIG_PPC_BOOK3E)
 " u	dump TLB\n"
@@ -358,7 +361,6 @@ static inline void disable_surveillance(void)
 #ifdef CONFIG_PPC_PSERIES
 	/* Since this can't be a module, args should end up below 4GB. */
 	static struct rtas_args args;
-	int token;
 
 	/*
 	 * At this point we have got all the cpus we can into
@@ -367,11 +369,11 @@ static inline void disable_surveillance(void)
 	 * If we did try to take rtas.lock there would be a
 	 * real possibility of deadlock.
 	 */
-	token = rtas_token("set-indicator");
-	if (token == RTAS_UNKNOWN_SERVICE)
+	if (set_indicator_token == RTAS_UNKNOWN_SERVICE)
 		return;
 
-	rtas_call_unlocked(&args, token, 3, 1, NULL, SURVEILLANCE_TOKEN, 0, 0);
+	rtas_call_unlocked(&args, set_indicator_token, 3, 1, NULL,
+			   SURVEILLANCE_TOKEN, 0, 0);
 
 #endif /* CONFIG_PPC_PSERIES */
 }
@@ -1058,7 +1060,7 @@ cmds(struct pt_regs *excp)
 		case 'P':
 			show_tasks();
 			break;
-#ifdef CONFIG_PPC_STD_MMU
+#ifdef CONFIG_PPC_BOOK3S
 		case 'u':
 			dump_segments();
 			break;
@@ -2793,7 +2795,7 @@ print_address(unsigned long addr)
 	xmon_print_symbol(addr, "\t# ", "");
 }
 
-void
+static void
 dump_log_buf(void)
 {
 	struct kmsg_dumper dumper = { .active = 1 };
@@ -2994,13 +2996,13 @@ static void show_task(struct task_struct *tsk)
 
 	printf("%px %016lx %6d %6d %c %2d %s\n", tsk,
 		tsk->thread.ksp,
-		tsk->pid, tsk->parent->pid,
+		tsk->pid, rcu_dereference(tsk->parent)->pid,
 		state, task_thread_info(tsk)->cpu,
 		tsk->comm);
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
-void format_pte(void *ptep, unsigned long pte)
+static void format_pte(void *ptep, unsigned long pte)
 {
 	pte_t entry = __pte(pte);
 
@@ -3495,14 +3497,14 @@ void dump_segments(void)
 }
 #endif
 
-#ifdef CONFIG_PPC_STD_MMU_32
+#ifdef CONFIG_PPC_BOOK3S_32
 void dump_segments(void)
 {
 	int i;
 
 	printf("sr0-15 =");
 	for (i = 0; i < 16; ++i)
-		printf(" %x", mfsrin(i));
+		printf(" %x", mfsrin(i << 28));
 	printf("\n");
 }
 #endif
@@ -3688,6 +3690,14 @@ static void xmon_init(int enable)
 		__debugger_iabr_match = xmon_iabr_match;
 		__debugger_break_match = xmon_break_match;
 		__debugger_fault_handler = xmon_fault_handler;
+
+#ifdef CONFIG_PPC_PSERIES
+		/*
+		 * Get the token here to avoid trying to get a lock
+		 * during the crash, causing a deadlock.
+		 */
+		set_indicator_token = rtas_token("set-indicator");
+#endif
 	} else {
 		__debugger = NULL;
 		__debugger_ipi = NULL;
@@ -4033,6 +4043,7 @@ static int do_spu_cmd(void)
 		subcmd = inchar();
 		if (isxdigit(subcmd) || subcmd == '\n')
 			termch = subcmd;
+		/* fall through */
 	case 'f':
 		scanhex(&num);
 		if (num >= XMON_NUM_SPUS || !spu_info[num].spu) {
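The xmon change above is an instance of a general crash-path rule: anything that may take a lock (here, rtas_token() taking rtas.lock) must be resolved at init time, so the crash path itself stays lock-free. A hedged sketch of the shape of the fix; lookup_token() and fire_token() are hypothetical stand-ins, not kernel APIs:

	/* General shape of the fix: resolve lock-taking lookups at init time. */
	static int cached_token = -1;	/* plays the role of RTAS_UNKNOWN_SERVICE */

	static void debugger_init(void)
	{
		cached_token = lookup_token("set-indicator");	/* may take locks */
	}

	static void debugger_crash_path(void)
	{
		if (cached_token < 0)
			return;			/* never resolved; stay lock-free */
		fire_token(cached_token);	/* safe: no allocation, no locks */
	}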
- -config PCI_DOMAINS - def_bool PCI - -config PCI_DOMAINS_GENERIC - def_bool PCI - -source "drivers/pci/Kconfig" - -endmenu - menu "Power management options" -source kernel/power/Kconfig +source "kernel/power/Kconfig" endmenu diff --git a/arch/riscv/include/asm/dma-mapping.h b/arch/riscv/include/asm/dma-mapping.h deleted file mode 100644 index 8facc1c8fa05..000000000000 --- a/arch/riscv/include/asm/dma-mapping.h +++ /dev/null @@ -1,15 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#ifndef _RISCV_ASM_DMA_MAPPING_H -#define _RISCV_ASM_DMA_MAPPING_H 1 - -#ifdef CONFIG_SWIOTLB -#include -static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) -{ - return &swiotlb_dma_ops; -} -#else -#include -#endif /* CONFIG_SWIOTLB */ - -#endif /* _RISCV_ASM_DMA_MAPPING_H */ diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index 5173366af8f3..ed554b09eb3f 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -73,7 +73,6 @@ config S390 select ARCH_HAS_KCOV select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_SET_MEMORY - select ARCH_HAS_SG_CHAIN select ARCH_HAS_STRICT_KERNEL_RWX select ARCH_HAS_STRICT_MODULE_RWX select ARCH_HAS_UBSAN_SANITIZE_ALL @@ -140,7 +139,6 @@ config S390 select HAVE_COPY_THREAD_TLS select HAVE_DEBUG_KMEMLEAK select HAVE_DMA_CONTIGUOUS - select DMA_DIRECT_OPS select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EFFICIENT_UNALIGNED_ACCESS @@ -168,14 +166,21 @@ config S390 select HAVE_MOD_ARCH_SPECIFIC select HAVE_NOP_MCOUNT select HAVE_OPROFILE + select HAVE_PCI select HAVE_PERF_EVENTS select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RSEQ select HAVE_SYSCALL_TRACEPOINTS select HAVE_VIRT_CPU_ACCOUNTING + select IOMMU_HELPER if PCI + select IOMMU_SUPPORT if PCI select MODULES_USE_ELF_RELA + select NEED_DMA_MAP_STATE if PCI + select NEED_SG_DMA_LENGTH if PCI select OLD_SIGACTION select OLD_SIGSUSPEND3 + select PCI_DOMAINS if PCI + select PCI_MSI if PCI select SPARSE_IRQ select SYSCTL_EXCEPTION_TRACE select THREAD_INFO_IN_TASK @@ -520,7 +525,7 @@ config SCHED_TOPOLOGY making when dealing with machines that have multi-threading, multiple cores or multiple books. -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" config KEXEC def_bool y @@ -706,17 +711,6 @@ config QDIO If unsure, say Y. -menuconfig PCI - bool "PCI support" - select PCI_MSI - select IOMMU_HELPER - select IOMMU_SUPPORT - select NEED_DMA_MAP_STATE - select NEED_SG_DMA_LENGTH - - help - Enable PCI support. - if PCI config PCI_NR_FUNCTIONS @@ -727,13 +721,8 @@ config PCI_NR_FUNCTIONS This allows you to specify the maximum number of PCI functions which this kernel will support. 
-source "drivers/pci/Kconfig" - endif # PCI -config PCI_DOMAINS - def_bool PCI - config HAS_IOMEM def_bool PCI @@ -837,9 +826,6 @@ source "kernel/power/Kconfig" endmenu -config PCMCIA - def_bool n - config CCW def_bool y diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index 812d9498d97b..dd456725189f 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -137,7 +137,7 @@ static int fallback_init_cip(struct crypto_tfm *tfm) struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm); sctx->fallback.cip = crypto_alloc_cipher(name, 0, - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(sctx->fallback.cip)) { pr_err("Allocating AES fallback algorithm %s failed\n", diff --git a/arch/s390/kvm/Kconfig b/arch/s390/kvm/Kconfig index a3dbd459cce9..767453faacfc 100644 --- a/arch/s390/kvm/Kconfig +++ b/arch/s390/kvm/Kconfig @@ -57,6 +57,6 @@ config KVM_S390_UCONTROL # OK, it's a little counter-intuitive to do this, but it puts it neatly under # the virtualization menu. -source drivers/vhost/Kconfig +source "drivers/vhost/Kconfig" endif # VIRTUALIZATION diff --git a/arch/s390/mm/dump_pagetables.c b/arch/s390/mm/dump_pagetables.c index 363f6470d742..3b93ba0b5d8d 100644 --- a/arch/s390/mm/dump_pagetables.c +++ b/arch/s390/mm/dump_pagetables.c @@ -111,11 +111,12 @@ static void note_page(struct seq_file *m, struct pg_state *st, } #ifdef CONFIG_KASAN -static void note_kasan_zero_page(struct seq_file *m, struct pg_state *st) +static void note_kasan_early_shadow_page(struct seq_file *m, + struct pg_state *st) { unsigned int prot; - prot = pte_val(*kasan_zero_pte) & + prot = pte_val(*kasan_early_shadow_pte) & (_PAGE_PROTECT | _PAGE_INVALID | _PAGE_NOEXEC); note_page(m, st, prot, 4); } @@ -154,8 +155,8 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, int i; #ifdef CONFIG_KASAN - if ((pud_val(*pud) & PAGE_MASK) == __pa(kasan_zero_pmd)) { - note_kasan_zero_page(m, st); + if ((pud_val(*pud) & PAGE_MASK) == __pa(kasan_early_shadow_pmd)) { + note_kasan_early_shadow_page(m, st); return; } #endif @@ -185,8 +186,8 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, int i; #ifdef CONFIG_KASAN - if ((p4d_val(*p4d) & PAGE_MASK) == __pa(kasan_zero_pud)) { - note_kasan_zero_page(m, st); + if ((p4d_val(*p4d) & PAGE_MASK) == __pa(kasan_early_shadow_pud)) { + note_kasan_early_shadow_page(m, st); return; } #endif @@ -215,8 +216,8 @@ static void walk_p4d_level(struct seq_file *m, struct pg_state *st, int i; #ifdef CONFIG_KASAN - if ((pgd_val(*pgd) & PAGE_MASK) == __pa(kasan_zero_p4d)) { - note_kasan_zero_page(m, st); + if ((pgd_val(*pgd) & PAGE_MASK) == __pa(kasan_early_shadow_p4d)) { + note_kasan_early_shadow_page(m, st); return; } #endif diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 76d0708438e9..3e82f66d5c61 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -59,7 +59,7 @@ static void __init setup_zero_pages(void) order = 7; /* Limit number of empty zero pages for small memory sizes */ - while (order > 2 && (totalram_pages >> 10) < (1UL << order)) + while (order > 2 && (totalram_pages() >> 10) < (1UL << order)) order--; empty_zero_page = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order); @@ -242,7 +242,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, } #ifdef CONFIG_MEMORY_HOTREMOVE -int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) +int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { /* * There is no 
hardware or firmware interface which could trigger a diff --git a/arch/s390/mm/kasan_init.c b/arch/s390/mm/kasan_init.c index acb9645b762b..bac5c27d11fc 100644 --- a/arch/s390/mm/kasan_init.c +++ b/arch/s390/mm/kasan_init.c @@ -107,7 +107,8 @@ static void __init kasan_early_vmemmap_populate(unsigned long address, if (mode == POPULATE_ZERO_SHADOW && IS_ALIGNED(address, PGDIR_SIZE) && end - address >= PGDIR_SIZE) { - pgd_populate(&init_mm, pg_dir, kasan_zero_p4d); + pgd_populate(&init_mm, pg_dir, + kasan_early_shadow_p4d); address = (address + PGDIR_SIZE) & PGDIR_MASK; continue; } @@ -120,7 +121,8 @@ static void __init kasan_early_vmemmap_populate(unsigned long address, if (mode == POPULATE_ZERO_SHADOW && IS_ALIGNED(address, P4D_SIZE) && end - address >= P4D_SIZE) { - p4d_populate(&init_mm, p4_dir, kasan_zero_pud); + p4d_populate(&init_mm, p4_dir, + kasan_early_shadow_pud); address = (address + P4D_SIZE) & P4D_MASK; continue; } @@ -133,7 +135,8 @@ static void __init kasan_early_vmemmap_populate(unsigned long address, if (mode == POPULATE_ZERO_SHADOW && IS_ALIGNED(address, PUD_SIZE) && end - address >= PUD_SIZE) { - pud_populate(&init_mm, pu_dir, kasan_zero_pmd); + pud_populate(&init_mm, pu_dir, + kasan_early_shadow_pmd); address = (address + PUD_SIZE) & PUD_MASK; continue; } @@ -146,7 +149,8 @@ static void __init kasan_early_vmemmap_populate(unsigned long address, if (mode == POPULATE_ZERO_SHADOW && IS_ALIGNED(address, PMD_SIZE) && end - address >= PMD_SIZE) { - pmd_populate(&init_mm, pm_dir, kasan_zero_pte); + pmd_populate(&init_mm, pm_dir, + kasan_early_shadow_pte); address = (address + PMD_SIZE) & PMD_MASK; continue; } @@ -188,7 +192,7 @@ static void __init kasan_early_vmemmap_populate(unsigned long address, pte_val(*pt_dir) = __pa(page) | pgt_prot; break; case POPULATE_ZERO_SHADOW: - page = kasan_zero_page; + page = kasan_early_shadow_page; pte_val(*pt_dir) = __pa(page) | pgt_prot_zero; break; } @@ -256,14 +260,14 @@ void __init kasan_early_init(void) unsigned long vmax; unsigned long pgt_prot = pgprot_val(PAGE_KERNEL_RO); pte_t pte_z; - pmd_t pmd_z = __pmd(__pa(kasan_zero_pte) | _SEGMENT_ENTRY); - pud_t pud_z = __pud(__pa(kasan_zero_pmd) | _REGION3_ENTRY); - p4d_t p4d_z = __p4d(__pa(kasan_zero_pud) | _REGION2_ENTRY); + pmd_t pmd_z = __pmd(__pa(kasan_early_shadow_pte) | _SEGMENT_ENTRY); + pud_t pud_z = __pud(__pa(kasan_early_shadow_pmd) | _REGION3_ENTRY); + p4d_t p4d_z = __p4d(__pa(kasan_early_shadow_pud) | _REGION2_ENTRY); kasan_early_detect_facilities(); if (!has_nx) pgt_prot &= ~_PAGE_NOEXEC; - pte_z = __pte(__pa(kasan_zero_page) | pgt_prot); + pte_z = __pte(__pa(kasan_early_shadow_page) | pgt_prot); memsize = get_mem_detect_end(); if (!memsize) @@ -292,10 +296,13 @@ void __init kasan_early_init(void) } /* init kasan zero shadow */ - crst_table_init((unsigned long *)kasan_zero_p4d, p4d_val(p4d_z)); - crst_table_init((unsigned long *)kasan_zero_pud, pud_val(pud_z)); - crst_table_init((unsigned long *)kasan_zero_pmd, pmd_val(pmd_z)); - memset64((u64 *)kasan_zero_pte, pte_val(pte_z), PTRS_PER_PTE); + crst_table_init((unsigned long *)kasan_early_shadow_p4d, + p4d_val(p4d_z)); + crst_table_init((unsigned long *)kasan_early_shadow_pud, + pud_val(pud_z)); + crst_table_init((unsigned long *)kasan_early_shadow_pmd, + pmd_val(pmd_z)); + memset64((u64 *)kasan_early_shadow_pte, pte_val(pte_z), PTRS_PER_PTE); shadow_alloc_size = memsize >> KASAN_SHADOW_SCALE_SHIFT; pgalloc_low = round_up((unsigned long)_end, _SEGMENT_SIZE); diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c 
index d7052cbe984f..3ff758eeb71d 100644 --- a/arch/s390/net/bpf_jit_comp.c +++ b/arch/s390/net/bpf_jit_comp.c @@ -821,10 +821,22 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i /* * BPF_ARSH */ + case BPF_ALU | BPF_ARSH | BPF_X: /* ((s32) dst) >>= src */ + /* sra %dst,%dst,0(%src) */ + EMIT4_DISP(0x8a000000, dst_reg, src_reg, 0); + EMIT_ZERO(dst_reg); + break; case BPF_ALU64 | BPF_ARSH | BPF_X: /* ((s64) dst) >>= src */ /* srag %dst,%dst,0(%src) */ EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0); break; + case BPF_ALU | BPF_ARSH | BPF_K: /* ((s32) dst >> imm */ + if (imm == 0) + break; + /* sra %dst,imm(%r0) */ + EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm); + EMIT_ZERO(dst_reg); + break; case BPF_ALU64 | BPF_ARSH | BPF_K: /* ((s64) dst) >>= imm */ if (imm == 0) break; diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c index d387a0fbdd7e..9e52d1527f71 100644 --- a/arch/s390/pci/pci_dma.c +++ b/arch/s390/pci/pci_dma.c @@ -15,8 +15,6 @@ #include #include -#define S390_MAPPING_ERROR (~(dma_addr_t) 0x0) - static struct kmem_cache *dma_region_table_cache; static struct kmem_cache *dma_page_table_cache; static int s390_iommu_strict; @@ -301,7 +299,7 @@ static dma_addr_t dma_alloc_address(struct device *dev, int size) out_error: spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags); - return S390_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } static void dma_free_address(struct device *dev, dma_addr_t dma_addr, int size) @@ -349,7 +347,7 @@ static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page, /* This rounds up number of pages based on size and offset */ nr_pages = iommu_num_pages(pa, size, PAGE_SIZE); dma_addr = dma_alloc_address(dev, nr_pages); - if (dma_addr == S390_MAPPING_ERROR) { + if (dma_addr == DMA_MAPPING_ERROR) { ret = -ENOSPC; goto out_err; } @@ -372,7 +370,7 @@ out_free: out_err: zpci_err("map error:\n"); zpci_err_dma(ret, pa); - return S390_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } static void s390_dma_unmap_pages(struct device *dev, dma_addr_t dma_addr, @@ -406,7 +404,7 @@ static void *s390_dma_alloc(struct device *dev, size_t size, dma_addr_t map; size = PAGE_ALIGN(size); - page = alloc_pages(flag, get_order(size)); + page = alloc_pages(flag | __GFP_ZERO, get_order(size)); if (!page) return NULL; @@ -449,7 +447,7 @@ static int __s390_dma_map_sg(struct device *dev, struct scatterlist *sg, int ret; dma_addr_base = dma_alloc_address(dev, nr_pages); - if (dma_addr_base == S390_MAPPING_ERROR) + if (dma_addr_base == DMA_MAPPING_ERROR) return -ENOMEM; dma_addr = dma_addr_base; @@ -496,7 +494,7 @@ static int s390_dma_map_sg(struct device *dev, struct scatterlist *sg, for (i = 1; i < nr_elements; i++) { s = sg_next(s); - s->dma_address = S390_MAPPING_ERROR; + s->dma_address = DMA_MAPPING_ERROR; s->dma_length = 0; if (s->offset || (size & ~PAGE_MASK) || @@ -546,11 +544,6 @@ static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg, } } -static int s390_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == S390_MAPPING_ERROR; -} - int zpci_dma_init_device(struct zpci_dev *zdev) { int rc; @@ -675,7 +668,6 @@ const struct dma_map_ops s390_pci_dma_ops = { .unmap_sg = s390_dma_unmap_sg, .map_page = s390_dma_map_pages, .unmap_page = s390_dma_unmap_pages, - .mapping_error = s390_mapping_error, /* dma_supported is unconditionally true without a callback */ }; EXPORT_SYMBOL_GPL(s390_pci_dma_ops); diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig index 
f82a4da7adf3..a9c36f95744a 100644 --- a/arch/sh/Kconfig +++ b/arch/sh/Kconfig @@ -7,7 +7,6 @@ config SUPERH select ARCH_NO_COHERENT_DMA_MMAP if !MMU select HAVE_PATA_PLATFORM select CLKDEV_LOOKUP - select DMA_DIRECT_OPS select HAVE_IDE if HAS_IOPORT_MAP select HAVE_MEMBLOCK_NODE_MAP select ARCH_DISCARD_MEMBLOCK @@ -40,13 +39,16 @@ config SUPERH select GENERIC_IDLE_POLL_SETUP select GENERIC_CLOCKEVENTS select GENERIC_CMOS_UPDATE if SH_SH03 || SH_DREAMCAST + select GENERIC_PCI_IOMAP if PCI select GENERIC_SCHED_CLOCK select GENERIC_STRNCPY_FROM_USER select GENERIC_STRNLEN_USER select HAVE_MOD_ARCH_SPECIFIC if DWARF_UNWINDER select MODULES_USE_ELF_RELA + select NO_GENERIC_PCI_IOPORT_MAP if PCI select OLD_SIGSUSPEND select OLD_SIGACTION + select PCI_DOMAINS if PCI select HAVE_ARCH_AUDITSYSCALL select HAVE_FUTEX_CMPXCHG if FUTEX select HAVE_NMI @@ -130,9 +132,6 @@ config SYS_SUPPORTS_SMP config SYS_SUPPORTS_NUMA bool -config SYS_SUPPORTS_PCI - bool - config STACKTRACE_SUPPORT def_bool y @@ -597,7 +596,7 @@ endmenu menu "Kernel features" -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" config KEXEC bool "kexec system call (EXPERIMENTAL)" @@ -855,24 +854,6 @@ config MAPLE Dreamcast with a serial line terminal or a remote network connection. -config PCI - bool "PCI support" - depends on SYS_SUPPORTS_PCI - select PCI_DOMAINS - select GENERIC_PCI_IOMAP - select NO_GENERIC_PCI_IOPORT_MAP - help - Find out whether you have a PCI motherboard. PCI is the name of a - bus system, i.e. the way the CPU talks to the other stuff inside - your box. If you have PCI, say Y, otherwise N. - -config PCI_DOMAINS - bool - -source "drivers/pci/Kconfig" - -source "drivers/pcmcia/Kconfig" - endmenu menu "Power management options (EXPERIMENTAL)" diff --git a/arch/sh/boards/Kconfig b/arch/sh/boards/Kconfig index 6394b4f0a69b..b9a37057b77a 100644 --- a/arch/sh/boards/Kconfig +++ b/arch/sh/boards/Kconfig @@ -101,7 +101,7 @@ config SH_7751_SOLUTION_ENGINE config SH_7780_SOLUTION_ENGINE bool "SolutionEngine7780" select SOLUTION_ENGINE - select SYS_SUPPORTS_PCI + select HAVE_PCI depends on CPU_SUBTYPE_SH7780 help Select 7780 SolutionEngine if configuring for a Renesas SH7780 @@ -129,7 +129,7 @@ config SH_HP6XX config SH_DREAMCAST bool "Dreamcast" - select SYS_SUPPORTS_PCI + select HAVE_PCI depends on CPU_SUBTYPE_SH7091 help Select Dreamcast if configuring for a SEGA Dreamcast. @@ -139,7 +139,7 @@ config SH_SH03 bool "Interface CTP/PCI-SH03" depends on CPU_SUBTYPE_SH7751 select CPU_HAS_IPR_IRQ - select SYS_SUPPORTS_PCI + select HAVE_PCI help CTP/PCI-SH03 is a CPU module computer that is produced by Interface Corporation. @@ -149,7 +149,7 @@ config SH_SECUREEDGE5410 bool "SecureEdge5410" depends on CPU_SUBTYPE_SH7751R select CPU_HAS_IPR_IRQ - select SYS_SUPPORTS_PCI + select HAVE_PCI help Select SecureEdge5410 if configuring for a SnapGear SH board. This includes both the OEM SecureEdge products as well as the @@ -158,7 +158,7 @@ config SH_SECUREEDGE5410 config SH_RTS7751R2D bool "RTS7751R2D" depends on CPU_SUBTYPE_SH7751R - select SYS_SUPPORTS_PCI + select HAVE_PCI select IO_TRAPPED if MMU help Select RTS7751R2D if configuring for a Renesas Technology @@ -176,7 +176,7 @@ config SH_RSK config SH_SDK7780 bool "SDK7780R3" depends on CPU_SUBTYPE_SH7780 - select SYS_SUPPORTS_PCI + select HAVE_PCI help Select SDK7780 if configuring for a Renesas SH7780 SDK7780R3 evaluation board. 
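The arch/s390/pci/pci_dma.c hunks earlier are part of the tree-wide switch from per-implementation error cookies (here S390_MAPPING_ERROR, ~0) and the .mapping_error dma_map_ops callback to a single shared DMA_MAPPING_ERROR sentinel: map routines return the sentinel directly, and the core dma_mapping_error() becomes a plain comparison, so the callback can be deleted. A standalone sketch of the convention (map_page_stub() and mapping_error_stub() are stand-ins for dma_map_page() and dma_mapping_error(); the sentinel value matches the real definition):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t dma_addr_t;

    /* The shared sentinel, as defined in <linux/dma-mapping.h>. */
    #define DMA_MAPPING_ERROR (~(dma_addr_t)0)

    /* Stand-in for a map routine such as dma_map_page(): on failure
     * it returns the generic sentinel, not a per-arch cookie. */
    static dma_addr_t map_page_stub(int fail)
    {
    	return fail ? DMA_MAPPING_ERROR : 0x1000;
    }

    /* Stand-in for dma_mapping_error(): a single comparison, so the
     * per-implementation .mapping_error callback is no longer needed. */
    static int mapping_error_stub(dma_addr_t addr)
    {
    	return addr == DMA_MAPPING_ERROR;
    }

    int main(void)
    {
    	dma_addr_t addr = map_page_stub(1);

    	if (mapping_error_stub(addr))
    		printf("mapping failed\n");
    	return 0;
    }
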
@@ -184,7 +184,7 @@ config SH_SDK7780 config SH_SDK7786 bool "SDK7786" depends on CPU_SUBTYPE_SH7786 - select SYS_SUPPORTS_PCI + select HAVE_PCI select NO_IOPORT_MAP if !PCI select HAVE_SRAM_POOL select REGULATOR_FIXED_VOLTAGE if REGULATOR @@ -195,7 +195,7 @@ config SH_SDK7786 config SH_HIGHLANDER bool "Highlander" depends on CPU_SUBTYPE_SH7780 || CPU_SUBTYPE_SH7785 - select SYS_SUPPORTS_PCI + select HAVE_PCI select IO_TRAPPED if MMU config SH_SH7757LCR @@ -207,7 +207,7 @@ config SH_SH7757LCR config SH_SH7785LCR bool "SH7785LCR" depends on CPU_SUBTYPE_SH7785 - select SYS_SUPPORTS_PCI + select HAVE_PCI config SH_SH7785LCR_29BIT_PHYSMAPS bool "SH7785LCR 29bit physmaps" @@ -229,7 +229,7 @@ config SH_URQUELL bool "Urquell" depends on CPU_SUBTYPE_SH7786 select GPIOLIB - select SYS_SUPPORTS_PCI + select HAVE_PCI select NO_IOPORT_MAP if !PCI config SH_MIGOR @@ -302,7 +302,7 @@ config SH_SH4202_MICRODEV config SH_LANDISK bool "LANDISK" depends on CPU_SUBTYPE_SH7751R - select SYS_SUPPORTS_PCI + select HAVE_PCI help I-O DATA DEVICE, INC. "LANDISK Series" support. @@ -310,7 +310,7 @@ config SH_TITAN bool "TITAN" depends on CPU_SUBTYPE_SH7751R select CPU_HAS_IPR_IRQ - select SYS_SUPPORTS_PCI + select HAVE_PCI help Select Titan if you are configuring for a Nimble Microsystems NetEngine NP51R. @@ -325,7 +325,7 @@ config SH_SHMIN config SH_LBOX_RE2 bool "L-BOX RE2" depends on CPU_SUBTYPE_SH7751R - select SYS_SUPPORTS_PCI + select HAVE_PCI help Select L-BOX RE2 if configuring for the NTT COMWARE L-BOX RE2. @@ -346,7 +346,7 @@ config SH_MAGIC_PANEL_R2 config SH_CAYMAN bool "Hitachi Cayman" depends on CPU_SUBTYPE_SH5_101 || CPU_SUBTYPE_SH5_103 - select SYS_SUPPORTS_PCI + select HAVE_PCI select ARCH_MIGHT_HAVE_PC_SERIO config SH_POLARIS @@ -380,7 +380,7 @@ config SH_APSH4A3A config SH_APSH4AD0A bool "AP-SH4AD-0A" select SH_ALPHA_BOARD - select SYS_SUPPORTS_PCI + select HAVE_PCI select REGULATOR_FIXED_VOLTAGE if REGULATOR depends on CPU_SUBTYPE_SH7786 help diff --git a/arch/sh/boards/board-apsh4a3a.c b/arch/sh/boards/board-apsh4a3a.c index 0a39c241628a..346eda7a2ef6 100644 --- a/arch/sh/boards/board-apsh4a3a.c +++ b/arch/sh/boards/board-apsh4a3a.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * ALPHAPROJECT AP-SH4A-3A Support. * * Copyright (C) 2010 ALPHAPROJECT Co.,Ltd. * Copyright (C) 2008 Yoshihiro Shimoda * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/board-apsh4ad0a.c b/arch/sh/boards/board-apsh4ad0a.c index 92eac3a99187..4efa9c571f64 100644 --- a/arch/sh/boards/board-apsh4ad0a.c +++ b/arch/sh/boards/board-apsh4ad0a.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * ALPHAPROJECT AP-SH4AD-0A Support. * * Copyright (C) 2010 ALPHAPROJECT Co.,Ltd. * Copyright (C) 2010 Matt Fleming * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/boards/board-edosk7760.c b/arch/sh/boards/board-edosk7760.c index bab5b9513904..0fbe91cba67a 100644 --- a/arch/sh/boards/board-edosk7760.c +++ b/arch/sh/boards/board-edosk7760.c @@ -1,22 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0+ /* * Renesas Europe EDOSK7760 Board Support * * Copyright (C) 2008 SPES Societa' Progettazione Elettronica e Software Ltd. * Author: Luca Santini - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ #include #include diff --git a/arch/sh/boards/board-espt.c b/arch/sh/boards/board-espt.c index 4d6be53058d6..f478fee3b48a 100644 --- a/arch/sh/boards/board-espt.c +++ b/arch/sh/boards/board-espt.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Data Technology Inc. ESPT-GIGA board support * * Copyright (C) 2008, 2009 Renesas Solutions Corp. * Copyright (C) 2008, 2009 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/board-magicpanelr2.c b/arch/sh/boards/board-magicpanelr2.c index 20500858b56c..56bd386ff3b0 100644 --- a/arch/sh/boards/board-magicpanelr2.c +++ b/arch/sh/boards/board-magicpanelr2.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/magicpanel/setup.c * * Copyright (C) 2007 Markus Brunner, Mark Jonas * * Magic Panel Release 2 board setup - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/board-sh7757lcr.c b/arch/sh/boards/board-sh7757lcr.c index 1bde08dc067d..c32b4c6229d3 100644 --- a/arch/sh/boards/board-sh7757lcr.c +++ b/arch/sh/boards/board-sh7757lcr.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas R0P7757LC0012RL Support. * * Copyright (C) 2009 - 2010 Renesas Solutions Corp. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/boards/board-sh7785lcr.c b/arch/sh/boards/board-sh7785lcr.c index 3cba60ff7aab..d964c4d6b139 100644 --- a/arch/sh/boards/board-sh7785lcr.c +++ b/arch/sh/boards/board-sh7785lcr.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Technology Corp. R0P7785LC0011RL Support. * * Copyright (C) 2008 Yoshihiro Shimoda * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/boards/board-titan.c b/arch/sh/boards/board-titan.c index 94c36c7bc0b3..074a848d8b56 100644 --- a/arch/sh/boards/board-titan.c +++ b/arch/sh/boards/board-titan.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/titan/setup.c - Setup for Titan * * Copyright (C) 2006 Jamie Lenehan - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/board-urquell.c b/arch/sh/boards/board-urquell.c index b52abcc5259a..799af57c0b81 100644 --- a/arch/sh/boards/board-urquell.c +++ b/arch/sh/boards/board-urquell.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Technology Corp. SH7786 Urquell Support. * @@ -6,10 +7,6 @@ * * Based on board-sh7785lcr.c * Copyright (C) 2008 Yoshihiro Shimoda - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-ap325rxa/Makefile b/arch/sh/boards/mach-ap325rxa/Makefile index 4cf1774d2613..dba5d0c20261 100644 --- a/arch/sh/boards/mach-ap325rxa/Makefile +++ b/arch/sh/boards/mach-ap325rxa/Makefile @@ -1,2 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y := setup.o sdram.o diff --git a/arch/sh/boards/mach-ap325rxa/sdram.S b/arch/sh/boards/mach-ap325rxa/sdram.S index db24fbed4fca..541c82cc30b1 100644 --- a/arch/sh/boards/mach-ap325rxa/sdram.S +++ b/arch/sh/boards/mach-ap325rxa/sdram.S @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * AP325RXA sdram self/auto-refresh setup code * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/boards/mach-cayman/Makefile b/arch/sh/boards/mach-cayman/Makefile index 00fa3eaecb1b..775a4be57434 100644 --- a/arch/sh/boards/mach-cayman/Makefile +++ b/arch/sh/boards/mach-cayman/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the Hitachi Cayman specific parts of the kernel # diff --git a/arch/sh/boards/mach-cayman/irq.c b/arch/sh/boards/mach-cayman/irq.c index 724e8b7271f4..9108789fafef 100644 --- a/arch/sh/boards/mach-cayman/irq.c +++ b/arch/sh/boards/mach-cayman/irq.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/mach-cayman/irq.c - SH-5 Cayman Interrupt Support * * This file handles the board specific parts of the Cayman interrupt system * * Copyright (C) 2002 Stuart Menefy - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-cayman/panic.c b/arch/sh/boards/mach-cayman/panic.c index d1e67306d07c..cfc46314e7d9 100644 --- a/arch/sh/boards/mach-cayman/panic.c +++ b/arch/sh/boards/mach-cayman/panic.c @@ -1,9 +1,6 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2003 Richard Curnow, SuperH UK Limited - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include diff --git a/arch/sh/boards/mach-cayman/setup.c b/arch/sh/boards/mach-cayman/setup.c index 9c292c27e0d7..4cec14700adc 100644 --- a/arch/sh/boards/mach-cayman/setup.c +++ b/arch/sh/boards/mach-cayman/setup.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/mach-cayman/setup.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2002 David J. Mckay & Benedict Gaster * Copyright (C) 2003 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-dreamcast/Makefile b/arch/sh/boards/mach-dreamcast/Makefile index 7b97546c7e5f..37b2452206aa 100644 --- a/arch/sh/boards/mach-dreamcast/Makefile +++ b/arch/sh/boards/mach-dreamcast/Makefile @@ -1,6 +1,7 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the Sega Dreamcast specific parts of the kernel # -obj-y := setup.o irq.o rtc.o - +obj-y := setup.o irq.o +obj-$(CONFIG_RTC_DRV_GENERIC) += rtc.o diff --git a/arch/sh/boards/mach-dreamcast/irq.c b/arch/sh/boards/mach-dreamcast/irq.c index 2789647abebe..a929f764ae04 100644 --- a/arch/sh/boards/mach-dreamcast/irq.c +++ b/arch/sh/boards/mach-dreamcast/irq.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/dreamcast/irq.c * @@ -6,7 +7,6 @@ * Copyright (c) 2001, 2002 M. R. Brown * * This file is part of the LinuxDC project (www.linuxdc.org) - * Released under the terms of the GNU GPL v2.0 */ #include #include diff --git a/arch/sh/boards/mach-dreamcast/rtc.c b/arch/sh/boards/mach-dreamcast/rtc.c index 061d65714fcc..7873cd27e4e0 100644 --- a/arch/sh/boards/mach-dreamcast/rtc.c +++ b/arch/sh/boards/mach-dreamcast/rtc.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/dreamcast/rtc.c * @@ -5,14 +6,12 @@ * * Copyright (c) 2001, 2002 M. R. Brown * Copyright (c) 2002 Paul Mundt - * - * Released under the terms of the GNU GPL v2.0. - * */ #include -#include -#include +#include +#include +#include /* The AICA RTC has an Epoch of 1/1/1950, so we must subtract 20 years (in seconds) to get the standard Unix Epoch when getting the time, and add @@ -26,13 +25,15 @@ /** * aica_rtc_gettimeofday - Get the time from the AICA RTC - * @ts: pointer to resulting timespec + * @dev: the RTC device (ignored) + * @tm: pointer to resulting RTC time structure * * Grabs the current RTC seconds counter and adjusts it to the Unix Epoch. */ -static void aica_rtc_gettimeofday(struct timespec *ts) +static int aica_rtc_gettimeofday(struct device *dev, struct rtc_time *tm) { unsigned long val1, val2; + time64_t t; do { val1 = ((__raw_readl(AICA_RTC_SECS_H) & 0xffff) << 16) | @@ -42,22 +43,26 @@ static void aica_rtc_gettimeofday(struct timespec *ts) (__raw_readl(AICA_RTC_SECS_L) & 0xffff); } while (val1 != val2); - ts->tv_sec = val1 - TWENTY_YEARS; + /* normalize to 1970..2106 time range */ + t = (u32)(val1 - TWENTY_YEARS); - /* Can't get nanoseconds with just a seconds counter. */ - ts->tv_nsec = 0; + rtc_time64_to_tm(t, tm); + + return 0; } /** * aica_rtc_settimeofday - Set the AICA RTC to the current time - * @secs: contains the time_t to set + * @dev: the RTC device (ignored) + * @tm: pointer to new RTC time structure * * Adjusts the given @tv to the AICA Epoch and sets the RTC seconds counter. 
*/ -static int aica_rtc_settimeofday(const time_t secs) +static int aica_rtc_settimeofday(struct device *dev, struct rtc_time *tm) { unsigned long val1, val2; - unsigned long adj = secs + TWENTY_YEARS; + time64_t secs = rtc_tm_to_time64(tm); + u32 adj = secs + TWENTY_YEARS; do { __raw_writel((adj & 0xffff0000) >> 16, AICA_RTC_SECS_H); @@ -73,9 +78,19 @@ static int aica_rtc_settimeofday(const time_t secs) return 0; } -void aica_time_init(void) +static const struct rtc_class_ops rtc_generic_ops = { + .read_time = aica_rtc_gettimeofday, + .set_time = aica_rtc_settimeofday, +}; + +static int __init aica_time_init(void) { - rtc_sh_get_time = aica_rtc_gettimeofday; - rtc_sh_set_time = aica_rtc_settimeofday; -} + struct platform_device *pdev; + pdev = platform_device_register_data(NULL, "rtc-generic", -1, + &rtc_generic_ops, + sizeof(rtc_generic_ops)); + + return PTR_ERR_OR_ZERO(pdev); +} +arch_initcall(aica_time_init); diff --git a/arch/sh/boards/mach-dreamcast/setup.c b/arch/sh/boards/mach-dreamcast/setup.c index ad1a4db72e04..2d966c1c2cc1 100644 --- a/arch/sh/boards/mach-dreamcast/setup.c +++ b/arch/sh/boards/mach-dreamcast/setup.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/dreamcast/setup.c * @@ -8,8 +9,6 @@ * * This file is part of the LinuxDC project (www.linuxdc.org) * - * Released under the terms of the GNU GPL v2.0. - * * This file originally bore the message (with enclosed-$): * Id: setup_dc.c,v 1.5 2001/05/24 05:09:16 mrbrown Exp * SEGA Dreamcast support @@ -30,7 +29,6 @@ static void __init dreamcast_setup(char **cmdline_p) { - board_time_init = aica_time_init; } static struct sh_machine_vector mv_dreamcast __initmv = { diff --git a/arch/sh/boards/mach-ecovec24/Makefile b/arch/sh/boards/mach-ecovec24/Makefile index e69bc82208fc..d78d4904ddee 100644 --- a/arch/sh/boards/mach-ecovec24/Makefile +++ b/arch/sh/boards/mach-ecovec24/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the R0P7724LC0011/21RL (EcoVec) # @@ -6,4 +7,4 @@ # for more details. # -obj-y := setup.o sdram.o \ No newline at end of file +obj-y := setup.o sdram.o diff --git a/arch/sh/boards/mach-ecovec24/sdram.S b/arch/sh/boards/mach-ecovec24/sdram.S index 3963c6f23d52..d2f269169abb 100644 --- a/arch/sh/boards/mach-ecovec24/sdram.S +++ b/arch/sh/boards/mach-ecovec24/sdram.S @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Ecovec24 sdram self/auto-refresh setup code * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/boards/mach-ecovec24/setup.c b/arch/sh/boards/mach-ecovec24/setup.c index 06a894526a0b..22b4106b8084 100644 --- a/arch/sh/boards/mach-ecovec24/setup.c +++ b/arch/sh/boards/mach-ecovec24/setup.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2009 Renesas Solutions Corp. * * Kuninori Morimoto - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
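In the Dreamcast rtc.c conversion above, note the deliberate narrowing in "t = (u32)(val1 - TWENTY_YEARS)": the AICA counter starts at 1950, and after subtracting the twenty-year offset the value is kept as unsigned 32-bit before widening to time64_t. That gives the stated 1970..2106 range, where a signed 32-bit interpretation would wrap in 2038. A small standalone illustration (the sample counter value is arbitrary, and TWENTY_YEARS is assumed to be the offset defined in rtc.c, twenty years including five leap days):

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed offset between the AICA epoch (1950) and the Unix
     * epoch (1970): twenty years including five leap days. */
    #define TWENTY_YEARS ((20 * 365LU + 5) * 86400)

    int main(void)
    {
    	uint32_t val1 = 0xf0000000u;	/* arbitrary raw counter value */
    	int64_t t;

    	/* Keeping the result unsigned 32-bit maps it into 1970..2106;
    	 * a signed 32-bit value would only reach 2038. */
    	t = (int64_t)(uint32_t)(val1 - TWENTY_YEARS);
    	printf("%lld seconds since the Unix epoch\n", (long long)t);
    	return 0;
    }
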
*/ #include #include @@ -696,13 +693,20 @@ static struct gpiod_lookup_table sdhi0_power_gpiod_table = { }, }; +static struct gpiod_lookup_table sdhi0_gpio_table = { + .dev_id = "sh_mobile_sdhi.0", + .table = { + /* Card detect */ + GPIO_LOOKUP("sh7724_pfc", GPIO_PTY7, "cd", GPIO_ACTIVE_LOW), + { }, + }, +}; + static struct tmio_mmc_data sdhi0_info = { .chan_priv_tx = (void *)SHDMA_SLAVE_SDHI0_TX, .chan_priv_rx = (void *)SHDMA_SLAVE_SDHI0_RX, .capabilities = MMC_CAP_SDIO_IRQ | MMC_CAP_POWER_OFF_CARD | MMC_CAP_NEEDS_POLL, - .flags = TMIO_MMC_USE_GPIO_CD, - .cd_gpio = GPIO_PTY7, }; static struct resource sdhi0_resources[] = { @@ -735,8 +739,15 @@ static struct tmio_mmc_data sdhi1_info = { .chan_priv_rx = (void *)SHDMA_SLAVE_SDHI1_RX, .capabilities = MMC_CAP_SDIO_IRQ | MMC_CAP_POWER_OFF_CARD | MMC_CAP_NEEDS_POLL, - .flags = TMIO_MMC_USE_GPIO_CD, - .cd_gpio = GPIO_PTW7, +}; + +static struct gpiod_lookup_table sdhi1_gpio_table = { + .dev_id = "sh_mobile_sdhi.1", + .table = { + /* Card detect */ + GPIO_LOOKUP("sh7724_pfc", GPIO_PTW7, "cd", GPIO_ACTIVE_LOW), + { }, + }, }; static struct resource sdhi1_resources[] = { @@ -776,9 +787,19 @@ static struct mmc_spi_platform_data mmc_spi_info = { .caps2 = MMC_CAP2_RO_ACTIVE_HIGH, .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, /* 3.3V only */ .setpower = mmc_spi_setpower, - .flags = MMC_SPI_USE_CD_GPIO | MMC_SPI_USE_RO_GPIO, - .cd_gpio = GPIO_PTY7, - .ro_gpio = GPIO_PTY6, +}; + +static struct gpiod_lookup_table mmc_spi_gpio_table = { + .dev_id = "mmc_spi.0", /* device "mmc_spi" @ CS0 */ + .table = { + /* Card detect */ + GPIO_LOOKUP_IDX("sh7724_pfc", GPIO_PTY7, NULL, 0, + GPIO_ACTIVE_LOW), + /* Write protect */ + GPIO_LOOKUP_IDX("sh7724_pfc", GPIO_PTY6, NULL, 1, + GPIO_ACTIVE_HIGH), + { }, + }, }; static struct spi_board_info spi_bus[] = { @@ -1282,6 +1303,7 @@ static int __init arch_setup(void) gpio_request(GPIO_PTB6, NULL); /* 3.3V power control */ gpio_direction_output(GPIO_PTB6, 0); /* disable power by default */ + gpiod_add_lookup_table(&mmc_spi_gpio_table); spi_register_board_info(spi_bus, ARRAY_SIZE(spi_bus)); #endif @@ -1434,6 +1456,10 @@ static int __init arch_setup(void) gpiod_add_lookup_table(&cn12_power_gpiod_table); #if defined(CONFIG_MMC_SDHI) || defined(CONFIG_MMC_SDHI_MODULE) gpiod_add_lookup_table(&sdhi0_power_gpiod_table); + gpiod_add_lookup_table(&sdhi0_gpio_table); +#if !defined(CONFIG_MMC_SH_MMCIF) && !defined(CONFIG_MMC_SH_MMCIF_MODULE) + gpiod_add_lookup_table(&sdhi1_gpio_table); +#endif #endif return platform_add_devices(ecovec_devices, diff --git a/arch/sh/boards/mach-highlander/irq-r7780mp.c b/arch/sh/boards/mach-highlander/irq-r7780mp.c index 9893fd3a1358..f46637377b6a 100644 --- a/arch/sh/boards/mach-highlander/irq-r7780mp.c +++ b/arch/sh/boards/mach-highlander/irq-r7780mp.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Solutions Highlander R7780MP Support. * * Copyright (C) 2002 Atom Create Engineering Co., Ltd. * Copyright (C) 2006 Paul Mundt * Copyright (C) 2007 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
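The ecovec24 hunks above drop the TMIO_MMC_USE_GPIO_CD/.cd_gpio and MMC_SPI_USE_CD_GPIO/.ro_gpio platform-data fields in favour of gpiod_lookup_table: the board registers mappings keyed by the consumer's dev_id ("sh_mobile_sdhi.0") plus a function name ("cd") or index (the NULL-named mmc_spi entries), and the driver then asks for its line by name, with no knowledge of board wiring. A toy model of that indirection (all names here are illustrative stand-ins, not the kernel gpiod API):

    #include <stdio.h>
    #include <string.h>

    /* The board publishes (dev_id, con_id) -> line mappings;
     * drivers look their lines up by name only. */
    struct gpiod_lookup {
    	const char *dev_id;	/* consumer, e.g. "sh_mobile_sdhi.0" */
    	const char *con_id;	/* function, e.g. "cd" */
    	int gpio;		/* controller-relative line (illustrative) */
    };

    static const struct gpiod_lookup board_table[] = {
    	{ "sh_mobile_sdhi.0", "cd", 7 },	/* card detect */
    };

    static int gpiod_get_stub(const char *dev_id, const char *con_id)
    {
    	for (unsigned i = 0; i < sizeof(board_table)/sizeof(board_table[0]); i++)
    		if (!strcmp(board_table[i].dev_id, dev_id) &&
    		    !strcmp(board_table[i].con_id, con_id))
    			return board_table[i].gpio;
    	return -1;
    }

    int main(void)
    {
    	/* Driver side: no board knowledge, just "my card-detect line". */
    	printf("cd line = %d\n", gpiod_get_stub("sh_mobile_sdhi.0", "cd"));
    	return 0;
    }
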
*/ #include #include diff --git a/arch/sh/boards/mach-highlander/irq-r7780rp.c b/arch/sh/boards/mach-highlander/irq-r7780rp.c index 0805b2151452..c61177e8724b 100644 --- a/arch/sh/boards/mach-highlander/irq-r7780rp.c +++ b/arch/sh/boards/mach-highlander/irq-r7780rp.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Solutions Highlander R7780RP-1 Support. * * Copyright (C) 2002 Atom Create Engineering Co., Ltd. * Copyright (C) 2006 Paul Mundt * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-highlander/irq-r7785rp.c b/arch/sh/boards/mach-highlander/irq-r7785rp.c index 558b24862776..0ebebbed0d63 100644 --- a/arch/sh/boards/mach-highlander/irq-r7785rp.c +++ b/arch/sh/boards/mach-highlander/irq-r7785rp.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Solutions Highlander R7785RP Support. * * Copyright (C) 2002 Atom Create Engineering Co., Ltd. * Copyright (C) 2006 - 2008 Paul Mundt * Copyright (C) 2007 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-highlander/pinmux-r7785rp.c b/arch/sh/boards/mach-highlander/pinmux-r7785rp.c index c77a2bea8f2a..703179faf652 100644 --- a/arch/sh/boards/mach-highlander/pinmux-r7785rp.c +++ b/arch/sh/boards/mach-highlander/pinmux-r7785rp.c @@ -1,9 +1,6 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-highlander/psw.c b/arch/sh/boards/mach-highlander/psw.c index 40e2b585d488..d445c54f74e4 100644 --- a/arch/sh/boards/mach-highlander/psw.c +++ b/arch/sh/boards/mach-highlander/psw.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/renesas/r7780rp/psw.c * * push switch support for RDBRP-1/RDBREVRP-1 debug boards. * * Copyright (C) 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-highlander/setup.c b/arch/sh/boards/mach-highlander/setup.c index 4a52590fe3d8..533393d779c2 100644 --- a/arch/sh/boards/mach-highlander/setup.c +++ b/arch/sh/boards/mach-highlander/setup.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/renesas/r7780rp/setup.c * @@ -8,10 +9,6 @@ * * This contains support for the R7780RP-1, R7780MP, and R7785RP * Highlander modules. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/boards/mach-hp6xx/Makefile b/arch/sh/boards/mach-hp6xx/Makefile index b3124278247c..4b0fe29e5612 100644 --- a/arch/sh/boards/mach-hp6xx/Makefile +++ b/arch/sh/boards/mach-hp6xx/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the HP6xx specific parts of the kernel # diff --git a/arch/sh/boards/mach-hp6xx/hp6xx_apm.c b/arch/sh/boards/mach-hp6xx/hp6xx_apm.c index 865d8d6e823f..e5c4c7d34139 100644 --- a/arch/sh/boards/mach-hp6xx/hp6xx_apm.c +++ b/arch/sh/boards/mach-hp6xx/hp6xx_apm.c @@ -1,11 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * bios-less APM driver for hp680 * * Copyright 2005 (c) Andriy Skulysh * Copyright 2008 (c) Kristoffer Ericson - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License. */ #include #include diff --git a/arch/sh/boards/mach-hp6xx/pm.c b/arch/sh/boards/mach-hp6xx/pm.c index 8b50cf763c06..fe505ec168d0 100644 --- a/arch/sh/boards/mach-hp6xx/pm.c +++ b/arch/sh/boards/mach-hp6xx/pm.c @@ -1,10 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * hp6x0 Power Management Routines * * Copyright (c) 2006 Andriy Skulysh - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License. */ #include #include diff --git a/arch/sh/boards/mach-hp6xx/pm_wakeup.S b/arch/sh/boards/mach-hp6xx/pm_wakeup.S index 4f18d44e0541..0fd43301f083 100644 --- a/arch/sh/boards/mach-hp6xx/pm_wakeup.S +++ b/arch/sh/boards/mach-hp6xx/pm_wakeup.S @@ -1,10 +1,6 @@ -/* - * Copyright (c) 2006 Andriy Skulysh - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. +/* SPDX-License-Identifier: GPL-2.0 * + * Copyright (c) 2006 Andriy Skulysh */ #include diff --git a/arch/sh/boards/mach-hp6xx/setup.c b/arch/sh/boards/mach-hp6xx/setup.c index 05797b33f68e..2ceead68d7bf 100644 --- a/arch/sh/boards/mach-hp6xx/setup.c +++ b/arch/sh/boards/mach-hp6xx/setup.c @@ -1,12 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/hp6xx/setup.c * * Copyright (C) 2002 Andriy Skulysh * Copyright (C) 2007 Kristoffer Ericson * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. - * * Setup code for HP620/HP660/HP680/HP690 (internal peripherials only) */ #include diff --git a/arch/sh/boards/mach-kfr2r09/Makefile b/arch/sh/boards/mach-kfr2r09/Makefile index 60dd63f4a427..4a4a35ad7ba0 100644 --- a/arch/sh/boards/mach-kfr2r09/Makefile +++ b/arch/sh/boards/mach-kfr2r09/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y := setup.o sdram.o ifneq ($(CONFIG_FB_SH_MOBILE_LCDC),) obj-y += lcd_wqvga.o diff --git a/arch/sh/boards/mach-kfr2r09/lcd_wqvga.c b/arch/sh/boards/mach-kfr2r09/lcd_wqvga.c index 355a78a3b313..f6bbac106d13 100644 --- a/arch/sh/boards/mach-kfr2r09/lcd_wqvga.c +++ b/arch/sh/boards/mach-kfr2r09/lcd_wqvga.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * KFR2R09 LCD panel support * @@ -5,10 +6,6 @@ * * Register settings based on the out-of-tree t33fb.c driver * Copyright (C) 2008 Lineo Solutions, Inc. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file COPYING in the main directory of this archive for - * more details. 
*/ #include diff --git a/arch/sh/boards/mach-kfr2r09/sdram.S b/arch/sh/boards/mach-kfr2r09/sdram.S index 0c9f55bec2fe..f1b8985cb922 100644 --- a/arch/sh/boards/mach-kfr2r09/sdram.S +++ b/arch/sh/boards/mach-kfr2r09/sdram.S @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * KFR2R09 sdram self/auto-refresh setup code * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/boards/mach-kfr2r09/setup.c b/arch/sh/boards/mach-kfr2r09/setup.c index e59c577ed871..203d249a0a2b 100644 --- a/arch/sh/boards/mach-kfr2r09/setup.c +++ b/arch/sh/boards/mach-kfr2r09/setup.c @@ -25,7 +25,6 @@ #include #include #include -#include #include #include #include @@ -478,7 +477,7 @@ extern char kfr2r09_sdram_leave_end; static int __init kfr2r09_devices_setup(void) { - static struct clk *camera_clk; + struct clk *camera_clk; /* register board specific self-refresh code */ sh_mobile_register_self_refresh(SUSP_SH_STANDBY | SUSP_SH_SF | diff --git a/arch/sh/boards/mach-landisk/Makefile b/arch/sh/boards/mach-landisk/Makefile index a696b4277fa9..6cba041fffe0 100644 --- a/arch/sh/boards/mach-landisk/Makefile +++ b/arch/sh/boards/mach-landisk/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for I-O DATA DEVICE, INC. "LANDISK Series" # diff --git a/arch/sh/boards/mach-landisk/gio.c b/arch/sh/boards/mach-landisk/gio.c index 32c317f5d991..1c0da99dfc60 100644 --- a/arch/sh/boards/mach-landisk/gio.c +++ b/arch/sh/boards/mach-landisk/gio.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/landisk/gio.c - driver for landisk * @@ -6,11 +7,6 @@ * * Copylight (C) 2006 kogiidena * Copylight (C) 2002 Atom Create Engineering Co., Ltd. * - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include #include diff --git a/arch/sh/boards/mach-landisk/irq.c b/arch/sh/boards/mach-landisk/irq.c index c00ace38db3f..29b8b1f85246 100644 --- a/arch/sh/boards/mach-landisk/irq.c +++ b/arch/sh/boards/mach-landisk/irq.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/mach-landisk/irq.c * @@ -8,10 +9,6 @@ * * Copyright (C) 2001 Ian da Silva, Jeremy Siegel * Based largely on io_se.c. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/boards/mach-landisk/psw.c b/arch/sh/boards/mach-landisk/psw.c index 5192b1f43ada..e171d9af48f3 100644 --- a/arch/sh/boards/mach-landisk/psw.c +++ b/arch/sh/boards/mach-landisk/psw.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/landisk/psw.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2006-2007 Paul Mundt * Copyright (C) 2007 kogiidena - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/boards/mach-landisk/setup.c b/arch/sh/boards/mach-landisk/setup.c index f1147caebacf..16b4d8b0bb85 100644 --- a/arch/sh/boards/mach-landisk/setup.c +++ b/arch/sh/boards/mach-landisk/setup.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/landisk/setup.c * @@ -7,10 +8,6 @@ * Copyright (C) 2002 Paul Mundt * Copylight (C) 2002 Atom Create Engineering Co., Ltd. * Copyright (C) 2005-2007 kogiidena - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-lboxre2/Makefile b/arch/sh/boards/mach-lboxre2/Makefile index e9ed140c06f6..0fbd0822911a 100644 --- a/arch/sh/boards/mach-lboxre2/Makefile +++ b/arch/sh/boards/mach-lboxre2/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the L-BOX RE2 specific parts of the kernel # Copyright (c) 2007 Nobuhiro Iwamatsu diff --git a/arch/sh/boards/mach-lboxre2/irq.c b/arch/sh/boards/mach-lboxre2/irq.c index 8aa171ab833e..a250e3b9019d 100644 --- a/arch/sh/boards/mach-lboxre2/irq.c +++ b/arch/sh/boards/mach-lboxre2/irq.c @@ -1,14 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/lboxre2/irq.c * * Copyright (C) 2007 Nobuhiro Iwamatsu * * NTT COMWARE L-BOX RE2 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include #include diff --git a/arch/sh/boards/mach-lboxre2/setup.c b/arch/sh/boards/mach-lboxre2/setup.c index 6660622aa457..20d01b430f2a 100644 --- a/arch/sh/boards/mach-lboxre2/setup.c +++ b/arch/sh/boards/mach-lboxre2/setup.c @@ -1,14 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/lbox/setup.c * * Copyright (C) 2007 Nobuhiro Iwamatsu * * NTT COMWARE L-BOX RE2 Support - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include diff --git a/arch/sh/boards/mach-microdev/Makefile b/arch/sh/boards/mach-microdev/Makefile index 4e3588e8806b..05c5698dcad0 100644 --- a/arch/sh/boards/mach-microdev/Makefile +++ b/arch/sh/boards/mach-microdev/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the SuperH MicroDev specific parts of the kernel # diff --git a/arch/sh/boards/mach-microdev/fdc37c93xapm.c b/arch/sh/boards/mach-microdev/fdc37c93xapm.c index 458a7cf5fb46..2a04f72dd145 100644 --- a/arch/sh/boards/mach-microdev/fdc37c93xapm.c +++ b/arch/sh/boards/mach-microdev/fdc37c93xapm.c @@ -1,5 +1,5 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * * Setup for the SMSC FDC37C93xAPM * * Copyright (C) 2003 Sean McGoogan (Sean.McGoogan@superh.com) @@ -7,9 +7,6 @@ * Copyright (C) 2004, 2005 Paul Mundt * * SuperH SH4-202 MicroDev board support. - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. 
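The long run of sh/boards hunks through here is a mechanical conversion of boilerplate licence text to SPDX-License-Identifier tags, and the diff itself shows the per-file-type comment convention: Makefiles take a leading # line, .c files take a C++-style // first line, while .S and header files keep the block-comment form (the usual rationale being that not every assembler accepts // comments). As C source, the two C-side forms look like this (comment-only, so it compiles to an empty translation unit):

    /* SPDX-License-Identifier: GPL-2.0
     *
     * First line of a .S or .h file: block-comment form.
     */

    // SPDX-License-Identifier: GPL-2.0
    /* First line of a .c file: the C++-style one-liner. */
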
*/ #include #include diff --git a/arch/sh/boards/mach-microdev/io.c b/arch/sh/boards/mach-microdev/io.c index acdafb0c6404..a76c12721e63 100644 --- a/arch/sh/boards/mach-microdev/io.c +++ b/arch/sh/boards/mach-microdev/io.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/superh/microdev/io.c * @@ -6,9 +7,6 @@ * Copyright (C) 2004 Paul Mundt * * SuperH SH4-202 MicroDev board support. - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. */ #include diff --git a/arch/sh/boards/mach-microdev/irq.c b/arch/sh/boards/mach-microdev/irq.c index 9a8aff339619..dc27492c83d7 100644 --- a/arch/sh/boards/mach-microdev/irq.c +++ b/arch/sh/boards/mach-microdev/irq.c @@ -1,12 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/superh/microdev/irq.c * * Copyright (C) 2003 Sean McGoogan (Sean.McGoogan@superh.com) * * SuperH SH4-202 MicroDev board support. - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. */ #include diff --git a/arch/sh/boards/mach-microdev/setup.c b/arch/sh/boards/mach-microdev/setup.c index 6c66ee4d842b..706b48f797be 100644 --- a/arch/sh/boards/mach-microdev/setup.c +++ b/arch/sh/boards/mach-microdev/setup.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/superh/microdev/setup.c * @@ -6,9 +7,6 @@ * Copyright (C) 2004, 2005 Paul Mundt * * SuperH SH4-202 MicroDev board support. - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. */ #include #include diff --git a/arch/sh/boards/mach-migor/Makefile b/arch/sh/boards/mach-migor/Makefile index 4601a89e5ac7..c223d759fcb1 100644 --- a/arch/sh/boards/mach-migor/Makefile +++ b/arch/sh/boards/mach-migor/Makefile @@ -1,2 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y := setup.o sdram.o obj-$(CONFIG_SH_MIGOR_QVGA) += lcd_qvga.o diff --git a/arch/sh/boards/mach-migor/lcd_qvga.c b/arch/sh/boards/mach-migor/lcd_qvga.c index 8bccd345b69c..4ebf130510bc 100644 --- a/arch/sh/boards/mach-migor/lcd_qvga.c +++ b/arch/sh/boards/mach-migor/lcd_qvga.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Support for SuperH MigoR Quarter VGA LCD Panel * @@ -5,10 +6,6 @@ * * Based on lcd_powertip.c from Kenati Technologies Pvt Ltd. * Copyright (c) 2007 Ujjwal Pande , - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ #include diff --git a/arch/sh/boards/mach-migor/sdram.S b/arch/sh/boards/mach-migor/sdram.S index 614aa3a1398c..3a6bee1270aa 100644 --- a/arch/sh/boards/mach-migor/sdram.S +++ b/arch/sh/boards/mach-migor/sdram.S @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Migo-R sdram self/auto-refresh setup code * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include diff --git a/arch/sh/boards/mach-r2d/Makefile b/arch/sh/boards/mach-r2d/Makefile index 0d4c75a72be0..7e7ac5e05662 100644 --- a/arch/sh/boards/mach-r2d/Makefile +++ b/arch/sh/boards/mach-r2d/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the RTS7751R2D specific parts of the kernel # diff --git a/arch/sh/boards/mach-r2d/setup.c b/arch/sh/boards/mach-r2d/setup.c index 4b98a5251f83..3bc52f651d96 100644 --- a/arch/sh/boards/mach-r2d/setup.c +++ b/arch/sh/boards/mach-r2d/setup.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Technology Sales RTS7751R2D Support. * * Copyright (C) 2002 - 2006 Atom Create Engineering Co., Ltd. * Copyright (C) 2004 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-rsk/Makefile b/arch/sh/boards/mach-rsk/Makefile index 6a4e1b538a62..43cca39a9fe6 100644 --- a/arch/sh/boards/mach-rsk/Makefile +++ b/arch/sh/boards/mach-rsk/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y := setup.o obj-$(CONFIG_SH_RSK7203) += devices-rsk7203.o obj-$(CONFIG_SH_RSK7264) += devices-rsk7264.o diff --git a/arch/sh/boards/mach-rsk/devices-rsk7203.c b/arch/sh/boards/mach-rsk/devices-rsk7203.c index a8089f79d058..e6b05d4588b7 100644 --- a/arch/sh/boards/mach-rsk/devices-rsk7203.c +++ b/arch/sh/boards/mach-rsk/devices-rsk7203.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Technology Europe RSK+ 7203 Support. * * Copyright (C) 2008 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-rsk/devices-rsk7264.c b/arch/sh/boards/mach-rsk/devices-rsk7264.c index 7251e37a842f..eaf700a20b83 100644 --- a/arch/sh/boards/mach-rsk/devices-rsk7264.c +++ b/arch/sh/boards/mach-rsk/devices-rsk7264.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * RSK+SH7264 Support. * * Copyright (C) 2012 Renesas Electronics Europe - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-rsk/devices-rsk7269.c b/arch/sh/boards/mach-rsk/devices-rsk7269.c index 4a544591d6f0..4b1e386b51dd 100644 --- a/arch/sh/boards/mach-rsk/devices-rsk7269.c +++ b/arch/sh/boards/mach-rsk/devices-rsk7269.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * RSK+SH7269 Support * * Copyright (C) 2012 Renesas Electronics Europe Ltd * Copyright (C) 2012 Phil Edworthy - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-rsk/setup.c b/arch/sh/boards/mach-rsk/setup.c index 6bc134bd7ec2..9370c4fdc41e 100644 --- a/arch/sh/boards/mach-rsk/setup.c +++ b/arch/sh/boards/mach-rsk/setup.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Technology Europe RSK+ Support. * * Copyright (C) 2008 Paul Mundt * Copyright (C) 2008 Peter Griffin - * - * This file is subject to the terms and conditions of the GNU General Public - * License. 
See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-sdk7780/Makefile b/arch/sh/boards/mach-sdk7780/Makefile index 3d8f0befc35d..37e857f9a55a 100644 --- a/arch/sh/boards/mach-sdk7780/Makefile +++ b/arch/sh/boards/mach-sdk7780/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the SDK7780 specific parts of the kernel # diff --git a/arch/sh/boards/mach-sdk7780/irq.c b/arch/sh/boards/mach-sdk7780/irq.c index e5f7564f2511..fa392f3dce26 100644 --- a/arch/sh/boards/mach-sdk7780/irq.c +++ b/arch/sh/boards/mach-sdk7780/irq.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/renesas/sdk7780/irq.c * * Renesas Technology Europe SDK7780 Support. * * Copyright (C) 2008 Nicholas Beck - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-sdk7780/setup.c b/arch/sh/boards/mach-sdk7780/setup.c index 2241659c3299..482761b780e4 100644 --- a/arch/sh/boards/mach-sdk7780/setup.c +++ b/arch/sh/boards/mach-sdk7780/setup.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/renesas/sdk7780/setup.c * * Renesas Solutions SH7780 SDK Support * Copyright (C) 2008 Nicholas Beck - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-sdk7786/Makefile b/arch/sh/boards/mach-sdk7786/Makefile index 45d32e3590b9..731a87c694b3 100644 --- a/arch/sh/boards/mach-sdk7786/Makefile +++ b/arch/sh/boards/mach-sdk7786/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y := fpga.o irq.o nmi.o setup.o obj-$(CONFIG_GPIOLIB) += gpio.o diff --git a/arch/sh/boards/mach-sdk7786/fpga.c b/arch/sh/boards/mach-sdk7786/fpga.c index 3e4ec66a0417..6d2a3d381c2a 100644 --- a/arch/sh/boards/mach-sdk7786/fpga.c +++ b/arch/sh/boards/mach-sdk7786/fpga.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SDK7786 FPGA Support. * * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-sdk7786/gpio.c b/arch/sh/boards/mach-sdk7786/gpio.c index 47997010b77a..c4587d1013e6 100644 --- a/arch/sh/boards/mach-sdk7786/gpio.c +++ b/arch/sh/boards/mach-sdk7786/gpio.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SDK7786 FPGA USRGPIR Support. * * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-sdk7786/irq.c b/arch/sh/boards/mach-sdk7786/irq.c index 46943a0da5b7..340c306ea952 100644 --- a/arch/sh/boards/mach-sdk7786/irq.c +++ b/arch/sh/boards/mach-sdk7786/irq.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SDK7786 FPGA IRQ Controller Support. * * Copyright (C) 2010 Matt Fleming * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/boards/mach-sdk7786/nmi.c b/arch/sh/boards/mach-sdk7786/nmi.c index edcfa1f568ba..c2e09d798537 100644 --- a/arch/sh/boards/mach-sdk7786/nmi.c +++ b/arch/sh/boards/mach-sdk7786/nmi.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SDK7786 FPGA NMI Support. * * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-sdk7786/setup.c b/arch/sh/boards/mach-sdk7786/setup.c index c29268bfd34a..65721c3a482c 100644 --- a/arch/sh/boards/mach-sdk7786/setup.c +++ b/arch/sh/boards/mach-sdk7786/setup.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas Technology Europe SDK7786 Support. * * Copyright (C) 2010 Matt Fleming * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-sdk7786/sram.c b/arch/sh/boards/mach-sdk7786/sram.c index c81c3abbe01c..d76cdb7ede39 100644 --- a/arch/sh/boards/mach-sdk7786/sram.c +++ b/arch/sh/boards/mach-sdk7786/sram.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SDK7786 FPGA SRAM Support. * * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt diff --git a/arch/sh/boards/mach-se/7206/Makefile b/arch/sh/boards/mach-se/7206/Makefile index 5c9eaa0535b9..b40b30853ce3 100644 --- a/arch/sh/boards/mach-se/7206/Makefile +++ b/arch/sh/boards/mach-se/7206/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the 7206 SolutionEngine specific parts of the kernel # diff --git a/arch/sh/boards/mach-se/7343/Makefile b/arch/sh/boards/mach-se/7343/Makefile index 4c3666a93790..e058661091a2 100644 --- a/arch/sh/boards/mach-se/7343/Makefile +++ b/arch/sh/boards/mach-se/7343/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the 7343 SolutionEngine specific parts of the kernel # diff --git a/arch/sh/boards/mach-se/7343/irq.c b/arch/sh/boards/mach-se/7343/irq.c index 6129aef6db76..39a3175e72b2 100644 --- a/arch/sh/boards/mach-se/7343/irq.c +++ b/arch/sh/boards/mach-se/7343/irq.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Hitachi UL SolutionEngine 7343 FPGA IRQ Support. * @@ -6,10 +7,6 @@ * * Based on linux/arch/sh/boards/se/7343/irq.c * Copyright (C) 2007 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #define DRV_NAME "SE7343-FPGA" #define pr_fmt(fmt) DRV_NAME ": " fmt diff --git a/arch/sh/boards/mach-se/770x/Makefile b/arch/sh/boards/mach-se/770x/Makefile index 43ea14feef51..900d93cfb6a5 100644 --- a/arch/sh/boards/mach-se/770x/Makefile +++ b/arch/sh/boards/mach-se/770x/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the 770x SolutionEngine specific parts of the kernel # diff --git a/arch/sh/boards/mach-se/7721/Makefile b/arch/sh/boards/mach-se/7721/Makefile index 7f09030980b3..09436f10ddf1 100644 --- a/arch/sh/boards/mach-se/7721/Makefile +++ b/arch/sh/boards/mach-se/7721/Makefile @@ -1 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y := setup.o irq.o diff --git a/arch/sh/boards/mach-se/7721/irq.c b/arch/sh/boards/mach-se/7721/irq.c index d85022ea3f12..e6ef2a2655c3 100644 --- a/arch/sh/boards/mach-se/7721/irq.c +++ b/arch/sh/boards/mach-se/7721/irq.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/se/7721/irq.c * * Copyright (C) 2008 Renesas Solutions Corp. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-se/7721/setup.c b/arch/sh/boards/mach-se/7721/setup.c index a0b3dba34ebf..3af724dc4ba4 100644 --- a/arch/sh/boards/mach-se/7721/setup.c +++ b/arch/sh/boards/mach-se/7721/setup.c @@ -1,14 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/se/7721/setup.c * * Copyright (C) 2008 Renesas Solutions Corp. * * Hitachi UL SolutionEngine 7721 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include #include diff --git a/arch/sh/boards/mach-se/7722/Makefile b/arch/sh/boards/mach-se/7722/Makefile index 8694373389e5..a5e89c0c6bb2 100644 --- a/arch/sh/boards/mach-se/7722/Makefile +++ b/arch/sh/boards/mach-se/7722/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the HITACHI UL SolutionEngine 7722 specific parts of the kernel # diff --git a/arch/sh/boards/mach-se/7722/irq.c b/arch/sh/boards/mach-se/7722/irq.c index 24c74a88290c..f6e3009edd4e 100644 --- a/arch/sh/boards/mach-se/7722/irq.c +++ b/arch/sh/boards/mach-se/7722/irq.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Hitachi UL SolutionEngine 7722 FPGA IRQ Support. * * Copyright (C) 2007 Nobuhiro Iwamatsu * Copyright (C) 2012 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #define DRV_NAME "SE7722-FPGA" #define pr_fmt(fmt) DRV_NAME ": " fmt diff --git a/arch/sh/boards/mach-se/7722/setup.c b/arch/sh/boards/mach-se/7722/setup.c index e04e2bc46984..2cd4a2e84b93 100644 --- a/arch/sh/boards/mach-se/7722/setup.c +++ b/arch/sh/boards/mach-se/7722/setup.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/se/7722/setup.c * @@ -5,11 +6,6 @@ * Copyright (C) 2012 Paul Mundt * * Hitachi UL SolutionEngine 7722 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
- * */ #include #include diff --git a/arch/sh/boards/mach-se/7724/Makefile b/arch/sh/boards/mach-se/7724/Makefile index a08b36830f0e..6c6112b24617 100644 --- a/arch/sh/boards/mach-se/7724/Makefile +++ b/arch/sh/boards/mach-se/7724/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the HITACHI UL SolutionEngine 7724 specific parts of the kernel # diff --git a/arch/sh/boards/mach-se/7724/irq.c b/arch/sh/boards/mach-se/7724/irq.c index 64e681e66c57..14ce3024738f 100644 --- a/arch/sh/boards/mach-se/7724/irq.c +++ b/arch/sh/boards/mach-se/7724/irq.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/se/7724/irq.c * @@ -9,10 +10,6 @@ * Copyright (C) 2007 Nobuhiro Iwamatsu * * Hitachi UL SolutionEngine 7724 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-se/7724/sdram.S b/arch/sh/boards/mach-se/7724/sdram.S index 6fa4734d09c7..61c1fe78d71a 100644 --- a/arch/sh/boards/mach-se/7724/sdram.S +++ b/arch/sh/boards/mach-se/7724/sdram.S @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * MS7724SE sdram self/auto-refresh setup code * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/boards/mach-se/7751/Makefile b/arch/sh/boards/mach-se/7751/Makefile index a338fd9d5039..2406d3e35352 100644 --- a/arch/sh/boards/mach-se/7751/Makefile +++ b/arch/sh/boards/mach-se/7751/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the 7751 SolutionEngine specific parts of the kernel # diff --git a/arch/sh/boards/mach-se/7780/Makefile b/arch/sh/boards/mach-se/7780/Makefile index 6b88adae3ecc..1f6669ab1bc0 100644 --- a/arch/sh/boards/mach-se/7780/Makefile +++ b/arch/sh/boards/mach-se/7780/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the HITACHI UL SolutionEngine 7780 specific parts of the kernel # diff --git a/arch/sh/boards/mach-se/7780/irq.c b/arch/sh/boards/mach-se/7780/irq.c index d5c9edc172a3..d427dfd711f1 100644 --- a/arch/sh/boards/mach-se/7780/irq.c +++ b/arch/sh/boards/mach-se/7780/irq.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/se/7780/irq.c * * Copyright (C) 2006,2007 Nobuhiro Iwamatsu * * Hitachi UL SolutionEngine 7780 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-se/7780/setup.c b/arch/sh/boards/mach-se/7780/setup.c index ae5a1d84fdf8..309f2681381b 100644 --- a/arch/sh/boards/mach-se/7780/setup.c +++ b/arch/sh/boards/mach-se/7780/setup.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/se/7780/setup.c * * Copyright (C) 2006,2007 Nobuhiro Iwamatsu * * Hitachi UL SolutionEngine 7780 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/boards/mach-sh03/Makefile b/arch/sh/boards/mach-sh03/Makefile index 400306a796ec..f89c25c6a39c 100644 --- a/arch/sh/boards/mach-sh03/Makefile +++ b/arch/sh/boards/mach-sh03/Makefile @@ -1,5 +1,7 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the Interface (CTP/PCI-SH03) specific parts of the kernel # -obj-y := setup.o rtc.o +obj-y := setup.o +obj-$(CONFIG_RTC_DRV_GENERIC) += rtc.o diff --git a/arch/sh/boards/mach-sh03/rtc.c b/arch/sh/boards/mach-sh03/rtc.c index dc3d50e3b7a2..8b23ed7c201c 100644 --- a/arch/sh/boards/mach-sh03/rtc.c +++ b/arch/sh/boards/mach-sh03/rtc.c @@ -13,8 +13,9 @@ #include #include #include -#include -#include +#include +#include +#include #define RTC_BASE 0xb0000000 #define RTC_SEC1 (RTC_BASE + 0) @@ -38,7 +39,7 @@ static DEFINE_SPINLOCK(sh03_rtc_lock); -unsigned long get_cmos_time(void) +static int sh03_rtc_gettimeofday(struct device *dev, struct rtc_time *tm) { unsigned int year, mon, day, hour, min, sec; @@ -75,17 +76,18 @@ unsigned long get_cmos_time(void) } spin_unlock(&sh03_rtc_lock); - return mktime(year, mon, day, hour, min, sec); -} -void sh03_rtc_gettimeofday(struct timespec *tv) -{ + tm->tm_sec = sec; + tm->tm_min = min; + tm->tm_hour = hour; + tm->tm_mday = day; + tm->tm_mon = mon; + tm->tm_year = year - 1900; - tv->tv_sec = get_cmos_time(); - tv->tv_nsec = 0; + return 0; } -static int set_rtc_mmss(unsigned long nowtime) +static int set_rtc_mmss(struct rtc_time *tm) { int retval = 0; int real_seconds, real_minutes, cmos_minutes; @@ -97,8 +99,8 @@ static int set_rtc_mmss(unsigned long nowtime) if (!(__raw_readb(RTC_CTL) & RTC_BUSY)) break; cmos_minutes = (__raw_readb(RTC_MIN1) & 0xf) + (__raw_readb(RTC_MIN10) & 0xf) * 10; - real_seconds = nowtime % 60; - real_minutes = nowtime / 60; + real_seconds = tm->tm_sec; + real_minutes = tm->tm_min; if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1) real_minutes += 30; /* correct for half hour time zone */ real_minutes %= 60; @@ -112,22 +114,31 @@ static int set_rtc_mmss(unsigned long nowtime) printk_once(KERN_NOTICE "set_rtc_mmss: can't update from %d to %d\n", cmos_minutes, real_minutes); - retval = -1; + retval = -EINVAL; } spin_unlock(&sh03_rtc_lock); return retval; } -int sh03_rtc_settimeofday(const time_t secs) +int sh03_rtc_settimeofday(struct device *dev, struct rtc_time *tm) { - unsigned long nowtime = secs; - - return set_rtc_mmss(nowtime); + return set_rtc_mmss(tm); } -void sh03_time_init(void) +static const struct rtc_class_ops rtc_generic_ops = { + .read_time = sh03_rtc_gettimeofday, + .set_time = sh03_rtc_settimeofday, +}; + +static int __init sh03_time_init(void) { - rtc_sh_get_time = sh03_rtc_gettimeofday; - rtc_sh_set_time = sh03_rtc_settimeofday; + struct platform_device *pdev; + + pdev = platform_device_register_data(NULL, "rtc-generic", -1, + &rtc_generic_ops, + sizeof(rtc_generic_ops)); + + return PTR_ERR_OR_ZERO(pdev); } +arch_initcall(sh03_time_init); diff --git a/arch/sh/boards/mach-sh03/setup.c b/arch/sh/boards/mach-sh03/setup.c index 85e7059a77e9..3901b6031ad5 100644 --- a/arch/sh/boards/mach-sh03/setup.c +++ b/arch/sh/boards/mach-sh03/setup.c @@ -22,14 +22,6 @@ static void __init init_sh03_IRQ(void) plat_irq_setup_pins(IRQ_MODE_IRQ); } -/* arch/sh/boards/sh03/rtc.c */ -void sh03_time_init(void); - -static void __init sh03_setup(char **cmdline_p) -{ - board_time_init = sh03_time_init; -} - static struct resource cf_ide_resources[] = { [0] = { .start = 0x1f0, @@ -101,6 +93,5 @@ device_initcall(sh03_devices_setup); static struct 
sh_machine_vector mv_sh03 __initmv = { .mv_name = "Interface (CTP/PCI-SH03)", - .mv_setup = sh03_setup, .mv_init_irq = init_sh03_IRQ, }; diff --git a/arch/sh/boards/mach-sh7763rdp/Makefile b/arch/sh/boards/mach-sh7763rdp/Makefile index f6c0b55516d2..d6341310444a 100644 --- a/arch/sh/boards/mach-sh7763rdp/Makefile +++ b/arch/sh/boards/mach-sh7763rdp/Makefile @@ -1 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y := setup.o irq.o diff --git a/arch/sh/boards/mach-sh7763rdp/irq.c b/arch/sh/boards/mach-sh7763rdp/irq.c index add698c8f2b4..efd382b7dad4 100644 --- a/arch/sh/boards/mach-sh7763rdp/irq.c +++ b/arch/sh/boards/mach-sh7763rdp/irq.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/renesas/sh7763rdp/irq.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2008 Renesas Solutions Corp. * Copyright (C) 2008 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/boards/mach-sh7763rdp/setup.c b/arch/sh/boards/mach-sh7763rdp/setup.c index 6e62686b81b1..97e715e4e9b3 100644 --- a/arch/sh/boards/mach-sh7763rdp/setup.c +++ b/arch/sh/boards/mach-sh7763rdp/setup.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * linux/arch/sh/boards/renesas/sh7763rdp/setup.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2008 Renesas Solutions Corp. * Copyright (C) 2008 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/mach-x3proto/Makefile b/arch/sh/boards/mach-x3proto/Makefile index 0cbe3d02dea3..6caefa114598 100644 --- a/arch/sh/boards/mach-x3proto/Makefile +++ b/arch/sh/boards/mach-x3proto/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 obj-y += setup.o ilsel.o obj-$(CONFIG_GPIOLIB) += gpio.o diff --git a/arch/sh/boards/mach-x3proto/gpio.c b/arch/sh/boards/mach-x3proto/gpio.c index cea88b0effa2..efc992f641a6 100644 --- a/arch/sh/boards/mach-x3proto/gpio.c +++ b/arch/sh/boards/mach-x3proto/gpio.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/mach-x3proto/gpio.c * * Renesas SH-X3 Prototype Baseboard GPIO Support. * * Copyright (C) 2010 - 2012 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt diff --git a/arch/sh/boards/mach-x3proto/ilsel.c b/arch/sh/boards/mach-x3proto/ilsel.c index 95e346139515..f0d5eb41521a 100644 --- a/arch/sh/boards/mach-x3proto/ilsel.c +++ b/arch/sh/boards/mach-x3proto/ilsel.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/mach-x3proto/ilsel.c * * Helper routines for SH-X3 proto board ILSEL. * * Copyright (C) 2007 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt diff --git a/arch/sh/boards/mach-x3proto/setup.c b/arch/sh/boards/mach-x3proto/setup.c index d682e2b6a856..95b85f2e13dd 100644 --- a/arch/sh/boards/mach-x3proto/setup.c +++ b/arch/sh/boards/mach-x3proto/setup.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/boards/mach-x3proto/setup.c * * Renesas SH-X3 Prototype Board Support. * * Copyright (C) 2007 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/boards/of-generic.c b/arch/sh/boards/of-generic.c index cde370cad4ae..958f46da3a79 100644 --- a/arch/sh/boards/of-generic.c +++ b/arch/sh/boards/of-generic.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH generic board support, using device tree * * Copyright (C) 2015-2016 Smart Energy Instruments, Inc. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include @@ -117,18 +114,10 @@ static void __init sh_of_mem_reserve(void) early_init_fdt_scan_reserved_mem(); } -static void __init sh_of_time_init(void) -{ - pr_info("SH generic board support: scanning for clocksource devices\n"); - timer_probe(); -} - static void __init sh_of_setup(char **cmdline_p) { struct device_node *root; - board_time_init = sh_of_time_init; - sh_mv.mv_name = "Unknown SH model"; root = of_find_node_by_path("/"); if (root) { diff --git a/arch/sh/configs/dreamcast_defconfig b/arch/sh/configs/dreamcast_defconfig index 3f08dc54480b..1d27666c029f 100644 --- a/arch/sh/configs/dreamcast_defconfig +++ b/arch/sh/configs/dreamcast_defconfig @@ -70,3 +70,5 @@ CONFIG_PROC_KCORE=y CONFIG_TMPFS=y CONFIG_HUGETLBFS=y # CONFIG_CRYPTO_ANSI_CPRNG is not set +CONFIG_RTC_CLASS=y +CONFIG_RTC_DRV_GENERIC=y diff --git a/arch/sh/configs/sh03_defconfig b/arch/sh/configs/sh03_defconfig index 2156223405a1..489ffdfb1517 100644 --- a/arch/sh/configs/sh03_defconfig +++ b/arch/sh/configs/sh03_defconfig @@ -130,3 +130,5 @@ CONFIG_CRYPTO_SHA1=y CONFIG_CRYPTO_DEFLATE=y # CONFIG_CRYPTO_ANSI_CPRNG is not set CONFIG_CRC_CCITT=y +CONFIG_RTC_CLASS=y +CONFIG_RTC_DRV_GENERIC=y diff --git a/arch/sh/drivers/dma/Makefile b/arch/sh/drivers/dma/Makefile index d88c9484762c..d2fdd56208f6 100644 --- a/arch/sh/drivers/dma/Makefile +++ b/arch/sh/drivers/dma/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the SuperH DMA specific kernel interface routines under Linux. # diff --git a/arch/sh/drivers/dma/dma-api.c b/arch/sh/drivers/dma/dma-api.c index b05be597b19f..ab9170494dcc 100644 --- a/arch/sh/drivers/dma/dma-api.c +++ b/arch/sh/drivers/dma/dma-api.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/dma/dma-api.c * * SuperH-specific DMA management API * * Copyright (C) 2003, 2004, 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include @@ -417,4 +414,4 @@ subsys_initcall(dma_api_init); MODULE_AUTHOR("Paul Mundt "); MODULE_DESCRIPTION("DMA API for SuperH"); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); diff --git a/arch/sh/drivers/dma/dma-g2.c b/arch/sh/drivers/dma/dma-g2.c index e1ab6eb3c04b..52a8ae5e30d2 100644 --- a/arch/sh/drivers/dma/dma-g2.c +++ b/arch/sh/drivers/dma/dma-g2.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/dma/dma-g2.c * * G2 bus DMA support * * Copyright (C) 2003 - 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include @@ -197,4 +194,4 @@ module_exit(g2_dma_exit); MODULE_AUTHOR("Paul Mundt "); MODULE_DESCRIPTION("G2 bus DMA driver"); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); diff --git a/arch/sh/drivers/dma/dma-pvr2.c b/arch/sh/drivers/dma/dma-pvr2.c index 706a3434af7a..b5dbd1f75768 100644 --- a/arch/sh/drivers/dma/dma-pvr2.c +++ b/arch/sh/drivers/dma/dma-pvr2.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/dma/dma-pvr2.c * * NEC PowerVR 2 (Dreamcast) DMA support * * Copyright (C) 2003, 2004 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include @@ -105,4 +102,4 @@ module_exit(pvr2_dma_exit); MODULE_AUTHOR("Paul Mundt "); MODULE_DESCRIPTION("NEC PowerVR 2 DMA driver"); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); diff --git a/arch/sh/drivers/dma/dma-sh.c b/arch/sh/drivers/dma/dma-sh.c index afde2a7d3eb3..96c626c2cd0a 100644 --- a/arch/sh/drivers/dma/dma-sh.c +++ b/arch/sh/drivers/dma/dma-sh.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/dma/dma-sh.c * @@ -6,10 +7,6 @@ * Copyright (C) 2000 Takashi YOSHII * Copyright (C) 2003, 2004 Paul Mundt * Copyright (C) 2005 Andriy Skulysh - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include @@ -414,4 +411,4 @@ module_exit(sh_dmac_exit); MODULE_AUTHOR("Takashi YOSHII, Paul Mundt, Andriy Skulysh"); MODULE_DESCRIPTION("SuperH On-Chip DMAC Support"); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); diff --git a/arch/sh/drivers/dma/dma-sysfs.c b/arch/sh/drivers/dma/dma-sysfs.c index 4b15feda54b0..8ef318150f84 100644 --- a/arch/sh/drivers/dma/dma-sysfs.c +++ b/arch/sh/drivers/dma/dma-sysfs.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/dma/dma-sysfs.c * * sysfs interface for SH DMA API * * Copyright (C) 2004 - 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/dma/dmabrg.c b/arch/sh/drivers/dma/dmabrg.c index e5a57a109d6c..5b2c1fd254d7 100644 --- a/arch/sh/drivers/dma/dmabrg.c +++ b/arch/sh/drivers/dma/dmabrg.c @@ -1,9 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7760 DMABRG IRQ handling * * (c) 2007 MSC Vertriebsges.m.b.H, Manuel Lauss - * licensed under the GPLv2. 
- * */ #include diff --git a/arch/sh/drivers/heartbeat.c b/arch/sh/drivers/heartbeat.c index e8af2ff29bc3..cf2fcccca812 100644 --- a/arch/sh/drivers/heartbeat.c +++ b/arch/sh/drivers/heartbeat.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Generic heartbeat driver for regular LED banks * @@ -13,10 +14,6 @@ * traditionally used for strobing the load average. This use case is * handled by this driver, rather than giving each LED bit position its * own struct device. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/pci/fixups-dreamcast.c b/arch/sh/drivers/pci/fixups-dreamcast.c index 48aaefd8f5d6..dfdbd05b6eb1 100644 --- a/arch/sh/drivers/pci/fixups-dreamcast.c +++ b/arch/sh/drivers/pci/fixups-dreamcast.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/pci/fixups-dreamcast.c * @@ -9,10 +10,6 @@ * This file originally bore the message (with enclosed-$): * Id: pci.c,v 1.3 2003/05/04 19:29:46 lethal Exp * Dreamcast PCI: Supports SEGA Broadband Adaptor only. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/drivers/pci/fixups-landisk.c b/arch/sh/drivers/pci/fixups-landisk.c index db5b40a98e62..53fa2fc87eec 100644 --- a/arch/sh/drivers/pci/fixups-landisk.c +++ b/arch/sh/drivers/pci/fixups-landisk.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/pci/fixups-landisk.c * @@ -5,9 +6,6 @@ * * Copyright (C) 2006 kogiidena * Copyright (C) 2010 Nobuhiro Iwamatsu - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. */ #include #include diff --git a/arch/sh/drivers/pci/fixups-r7780rp.c b/arch/sh/drivers/pci/fixups-r7780rp.c index 2c9b58f848dd..3c9139c5955e 100644 --- a/arch/sh/drivers/pci/fixups-r7780rp.c +++ b/arch/sh/drivers/pci/fixups-r7780rp.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/pci/fixups-r7780rp.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2003 Lineo uSolutions, Inc. * Copyright (C) 2004 - 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/pci/fixups-rts7751r2d.c b/arch/sh/drivers/pci/fixups-rts7751r2d.c index 358ac104f08c..3f0a6fe1610b 100644 --- a/arch/sh/drivers/pci/fixups-rts7751r2d.c +++ b/arch/sh/drivers/pci/fixups-rts7751r2d.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/pci/fixups-rts7751r2d.c * @@ -6,10 +7,6 @@ * Copyright (C) 2003 Lineo uSolutions, Inc. * Copyright (C) 2004 Paul Mundt * Copyright (C) 2007 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/drivers/pci/fixups-sdk7780.c b/arch/sh/drivers/pci/fixups-sdk7780.c index 24e96dfbdb22..c306040485bd 100644 --- a/arch/sh/drivers/pci/fixups-sdk7780.c +++ b/arch/sh/drivers/pci/fixups-sdk7780.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/pci/fixups-sdk7780.c * @@ -6,10 +7,6 @@ * Copyright (C) 2003 Lineo uSolutions, Inc. * Copyright (C) 2004 - 2006 Paul Mundt * Copyright (C) 2006 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/pci/fixups-sdk7786.c b/arch/sh/drivers/pci/fixups-sdk7786.c index 36eb6fc3c18a..8cbfa5310a4b 100644 --- a/arch/sh/drivers/pci/fixups-sdk7786.c +++ b/arch/sh/drivers/pci/fixups-sdk7786.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SDK7786 FPGA PCIe mux handling * * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #define pr_fmt(fmt) "PCI: " fmt diff --git a/arch/sh/drivers/pci/fixups-snapgear.c b/arch/sh/drivers/pci/fixups-snapgear.c index a931e5928f58..317225c09413 100644 --- a/arch/sh/drivers/pci/fixups-snapgear.c +++ b/arch/sh/drivers/pci/fixups-snapgear.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/pci/ops-snapgear.c * @@ -7,9 +8,6 @@ * * Highly leveraged from pci-bigsur.c, written by Dustin McIntire. * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. - * * PCI initialization for the SnapGear boards */ #include diff --git a/arch/sh/drivers/pci/fixups-titan.c b/arch/sh/drivers/pci/fixups-titan.c index a9d563e479d5..b5bb65caa16d 100644 --- a/arch/sh/drivers/pci/fixups-titan.c +++ b/arch/sh/drivers/pci/fixups-titan.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/pci/ops-titan.c * @@ -6,9 +7,6 @@ * Modified from ops-snapgear.c written by David McCullough * Highly leveraged from pci-bigsur.c, written by Dustin McIntire. * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. - * * PCI initialization for the Titan boards */ #include diff --git a/arch/sh/drivers/pci/ops-dreamcast.c b/arch/sh/drivers/pci/ops-dreamcast.c index 16e0a1baad88..517a8a9702f6 100644 --- a/arch/sh/drivers/pci/ops-dreamcast.c +++ b/arch/sh/drivers/pci/ops-dreamcast.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * PCI operations for the Sega Dreamcast * * Copyright (C) 2001, 2002 M. R. Brown * Copyright (C) 2002, 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/drivers/pci/ops-sh4.c b/arch/sh/drivers/pci/ops-sh4.c index b6234203e0ac..a205be3bfc4a 100644 --- a/arch/sh/drivers/pci/ops-sh4.c +++ b/arch/sh/drivers/pci/ops-sh4.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Generic SH-4 / SH-4A PCIC operations (SH7751, SH7780). * * Copyright (C) 2002 - 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License v2. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/drivers/pci/ops-sh5.c b/arch/sh/drivers/pci/ops-sh5.c index 45361946460f..9fbaf72949ab 100644 --- a/arch/sh/drivers/pci/ops-sh5.c +++ b/arch/sh/drivers/pci/ops-sh5.c @@ -1,12 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Support functions for the SH5 PCI hardware. * * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) * Copyright (C) 2003, 2004 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. */ #include #include diff --git a/arch/sh/drivers/pci/ops-sh7786.c b/arch/sh/drivers/pci/ops-sh7786.c index 128421009e3f..a10f9f4ebd7f 100644 --- a/arch/sh/drivers/pci/ops-sh7786.c +++ b/arch/sh/drivers/pci/ops-sh7786.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Generic SH7786 PCI-Express operations. * * Copyright (C) 2009 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License v2. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/pci/pci-dreamcast.c b/arch/sh/drivers/pci/pci-dreamcast.c index 633694193af8..4cff2a8107bf 100644 --- a/arch/sh/drivers/pci/pci-dreamcast.c +++ b/arch/sh/drivers/pci/pci-dreamcast.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * PCI support for the Sega Dreamcast * @@ -7,10 +8,6 @@ * This file originally bore the message (with enclosed-$): * Id: pci.c,v 1.3 2003/05/04 19:29:46 lethal Exp * Dreamcast PCI: Supports SEGA Broadband Adaptor only. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/drivers/pci/pci-sh5.c b/arch/sh/drivers/pci/pci-sh5.c index 8229114c6a58..49303fab187b 100644 --- a/arch/sh/drivers/pci/pci-sh5.c +++ b/arch/sh/drivers/pci/pci-sh5.c @@ -1,11 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) * Copyright (C) 2003, 2004 Paul Mundt * Copyright (C) 2004 Richard Curnow * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. - * * Support functions for the SH5 PCI hardware. */ diff --git a/arch/sh/drivers/pci/pci-sh5.h b/arch/sh/drivers/pci/pci-sh5.h index 3f01decb4307..91348af0ef6c 100644 --- a/arch/sh/drivers/pci/pci-sh5.h +++ b/arch/sh/drivers/pci/pci-sh5.h @@ -1,8 +1,6 @@ -/* - * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) +/* SPDX-License-Identifier: GPL-2.0 * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. + * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) * * Definitions for the SH5 PCI hardware. */ diff --git a/arch/sh/drivers/pci/pci-sh7751.c b/arch/sh/drivers/pci/pci-sh7751.c index 86adb1e235cd..1b9e5caac389 100644 --- a/arch/sh/drivers/pci/pci-sh7751.c +++ b/arch/sh/drivers/pci/pci-sh7751.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Low-Level PCI Support for the SH7751 * @@ -5,10 +6,6 @@ * Copyright (C) 2001 Dustin McIntire * * With cleanup by Paul van Gool , 2003. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/drivers/pci/pci-sh7751.h b/arch/sh/drivers/pci/pci-sh7751.h index 5ede38c330d3..d1951e50effc 100644 --- a/arch/sh/drivers/pci/pci-sh7751.h +++ b/arch/sh/drivers/pci/pci-sh7751.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Low-Level PCI Support for SH7751 targets * * Dustin McIntire (dustin@sensoria.com) (c) 2001 * Paul Mundt (lethal@linux-sh.org) (c) 2003 - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. - * */ #ifndef _PCI_SH7751_H_ diff --git a/arch/sh/drivers/pci/pci-sh7780.c b/arch/sh/drivers/pci/pci-sh7780.c index 5a6dab6e27d9..3fd0f392a0ee 100644 --- a/arch/sh/drivers/pci/pci-sh7780.c +++ b/arch/sh/drivers/pci/pci-sh7780.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Low-Level PCI Support for the SH7780 * * Copyright (C) 2005 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/pci/pci-sh7780.h b/arch/sh/drivers/pci/pci-sh7780.h index 1742e2c9db7a..e2ac770f8e35 100644 --- a/arch/sh/drivers/pci/pci-sh7780.h +++ b/arch/sh/drivers/pci/pci-sh7780.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Low-Level PCI Support for SH7780 targets * * Dustin McIntire (dustin@sensoria.com) (c) 2001 * Paul Mundt (lethal@linux-sh.org) (c) 2003 - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. - * */ #ifndef _PCI_SH7780_H_ diff --git a/arch/sh/drivers/pci/pci.c b/arch/sh/drivers/pci/pci.c index 8256626bc53c..c7784e156964 100644 --- a/arch/sh/drivers/pci/pci.c +++ b/arch/sh/drivers/pci/pci.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * New-style PCI core. * @@ -6,10 +7,6 @@ * * Modelled after arch/mips/pci/pci.c: * Copyright (C) 2003, 04 Ralf Baechle (ralf@linux-mips.org) - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/pci/pcie-sh7786.c b/arch/sh/drivers/pci/pcie-sh7786.c index 3d81a8b80942..a58b77cea295 100644 --- a/arch/sh/drivers/pci/pcie-sh7786.c +++ b/arch/sh/drivers/pci/pcie-sh7786.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Low-Level PCI Express Support for the SH7786 * * Copyright (C) 2009 - 2011 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #define pr_fmt(fmt) "PCI: " fmt diff --git a/arch/sh/drivers/pci/pcie-sh7786.h b/arch/sh/drivers/pci/pcie-sh7786.h index 4a6ff55f759b..ffe383681a0b 100644 --- a/arch/sh/drivers/pci/pcie-sh7786.h +++ b/arch/sh/drivers/pci/pcie-sh7786.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * SH7786 PCI-Express controller definitions. * * Copyright (C) 2008, 2009 Renesas Technology Corp. * All rights reserved. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __PCI_SH7786_H #define __PCI_SH7786_H diff --git a/arch/sh/drivers/push-switch.c b/arch/sh/drivers/push-switch.c index 762bc5619910..2813140fd92b 100644 --- a/arch/sh/drivers/push-switch.c +++ b/arch/sh/drivers/push-switch.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Generic push-switch framework * * Copyright (C) 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/drivers/superhyway/Makefile b/arch/sh/drivers/superhyway/Makefile index 5b8e0c7ca3a5..aa6e3267c055 100644 --- a/arch/sh/drivers/superhyway/Makefile +++ b/arch/sh/drivers/superhyway/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the SuperHyway specific kernel interface routines under Linux. # diff --git a/arch/sh/drivers/superhyway/ops-sh4-202.c b/arch/sh/drivers/superhyway/ops-sh4-202.c index 6da62e9475c4..490142274e3b 100644 --- a/arch/sh/drivers/superhyway/ops-sh4-202.c +++ b/arch/sh/drivers/superhyway/ops-sh4-202.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/drivers/superhyway/ops-sh4-202.c * * SuperHyway bus support for SH4-202 * * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU - * General Public License. See the file "COPYING" in the main - * directory of this archive for more details. */ #include #include diff --git a/arch/sh/include/asm/Kbuild b/arch/sh/include/asm/Kbuild index b15caf34813a..a6ef3fee5f85 100644 --- a/arch/sh/include/asm/Kbuild +++ b/arch/sh/include/asm/Kbuild @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 generated-y += syscall_table.h generic-y += compat.h generic-y += current.h diff --git a/arch/sh/include/asm/addrspace.h b/arch/sh/include/asm/addrspace.h index 3d1ae2bfaa6f..34bfbcddcce0 100644 --- a/arch/sh/include/asm/addrspace.h +++ b/arch/sh/include/asm/addrspace.h @@ -1,7 +1,4 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. +/* SPDX-License-Identifier: GPL-2.0 * * Copyright (C) 1999 by Kaz Kojima * diff --git a/arch/sh/include/asm/asm-offsets.h b/arch/sh/include/asm/asm-offsets.h index d370ee36a182..9f8535716392 100644 --- a/arch/sh/include/asm/asm-offsets.h +++ b/arch/sh/include/asm/asm-offsets.h @@ -1 +1,2 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #include diff --git a/arch/sh/include/asm/bl_bit_64.h b/arch/sh/include/asm/bl_bit_64.h index 6cc8711af435..aac9780fe864 100644 --- a/arch/sh/include/asm/bl_bit_64.h +++ b/arch/sh/include/asm/bl_bit_64.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASM_SH_BL_BIT_64_H #define __ASM_SH_BL_BIT_64_H diff --git a/arch/sh/include/asm/cache_insns_64.h b/arch/sh/include/asm/cache_insns_64.h index 70b6357eaf1a..ed682b987b0d 100644 --- a/arch/sh/include/asm/cache_insns_64.h +++ b/arch/sh/include/asm/cache_insns_64.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_CACHE_INSNS_64_H #define __ASM_SH_CACHE_INSNS_64_H diff --git a/arch/sh/include/asm/checksum_32.h b/arch/sh/include/asm/checksum_32.h index 9c84386d35cb..b58f3d95dc19 100644 --- a/arch/sh/include/asm/checksum_32.h +++ b/arch/sh/include/asm/checksum_32.h @@ -1,11 +1,8 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_CHECKSUM_H #define __ASM_SH_CHECKSUM_H /* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * * Copyright (C) 1999 by Kaz Kojima & Niibe Yutaka */ diff --git a/arch/sh/include/asm/cmpxchg-xchg.h b/arch/sh/include/asm/cmpxchg-xchg.h index 593a9704782b..c373f21efe4d 100644 --- a/arch/sh/include/asm/cmpxchg-xchg.h +++ b/arch/sh/include/asm/cmpxchg-xchg.h @@ -1,12 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_CMPXCHG_XCHG_H #define __ASM_SH_CMPXCHG_XCHG_H /* * Copyright (C) 2016 Red Hat, Inc. * Author: Michael S. Tsirkin - * - * This work is licensed under the terms of the GNU GPL, version 2. See the - * file "COPYING" in the main directory of this archive for more details. */ #include #include diff --git a/arch/sh/include/asm/device.h b/arch/sh/include/asm/device.h index 071bcb4d4bfd..6f3e686a1c6f 100644 --- a/arch/sh/include/asm/device.h +++ b/arch/sh/include/asm/device.h @@ -1,7 +1,6 @@ -/* - * Arch specific extensions to struct device +/* SPDX-License-Identifier: GPL-2.0 * - * This file is released under the GPLv2 + * Arch specific extensions to struct device */ #ifndef __ASM_SH_DEVICE_H #define __ASM_SH_DEVICE_H diff --git a/arch/sh/include/asm/dma-register.h b/arch/sh/include/asm/dma-register.h index c757b47e6b64..724dab912b71 100644 --- a/arch/sh/include/asm/dma-register.h +++ b/arch/sh/include/asm/dma-register.h @@ -1,14 +1,11 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Common header for the legacy SH DMA driver and the new dmaengine driver * * extracted from arch/sh/include/asm/dma-sh.h: * * Copyright (C) 2000 Takashi YOSHII * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef DMA_REGISTER_H #define DMA_REGISTER_H diff --git a/arch/sh/include/asm/dma.h b/arch/sh/include/asm/dma.h index fb6e4f7b00a2..4d5a21a891c0 100644 --- a/arch/sh/include/asm/dma.h +++ b/arch/sh/include/asm/dma.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/dma.h * * Copyright (C) 2003, 2004 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASM_SH_DMA_H #define __ASM_SH_DMA_H diff --git a/arch/sh/include/asm/dwarf.h b/arch/sh/include/asm/dwarf.h index d62abd1d0c05..571954474122 100644 --- a/arch/sh/include/asm/dwarf.h +++ b/arch/sh/include/asm/dwarf.h @@ -1,10 +1,6 @@ -/* - * Copyright (C) 2009 Matt Fleming - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. +/* SPDX-License-Identifier: GPL-2.0 * + * Copyright (C) 2009 Matt Fleming */ #ifndef __ASM_SH_DWARF_H #define __ASM_SH_DWARF_H diff --git a/arch/sh/include/asm/fb.h b/arch/sh/include/asm/fb.h index d92e99cd8c8a..9a0bca2686fd 100644 --- a/arch/sh/include/asm/fb.h +++ b/arch/sh/include/asm/fb.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef _ASM_FB_H_ #define _ASM_FB_H_ diff --git a/arch/sh/include/asm/fixmap.h b/arch/sh/include/asm/fixmap.h index 4daf91c3b725..e30348c58073 100644 --- a/arch/sh/include/asm/fixmap.h +++ b/arch/sh/include/asm/fixmap.h @@ -1,9 +1,6 @@ -/* - * fixmap.h: compile-time virtual memory allocation +/* SPDX-License-Identifier: GPL-2.0 * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. + * fixmap.h: compile-time virtual memory allocation * * Copyright (C) 1998 Ingo Molnar * diff --git a/arch/sh/include/asm/flat.h b/arch/sh/include/asm/flat.h index 275fcae23539..843d458b8329 100644 --- a/arch/sh/include/asm/flat.h +++ b/arch/sh/include/asm/flat.h @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/flat.h * * uClinux flat-format executables * * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive for - * more details. */ #ifndef __ASM_SH_FLAT_H #define __ASM_SH_FLAT_H diff --git a/arch/sh/include/asm/freq.h b/arch/sh/include/asm/freq.h index 4ece90b09b9c..18133bf83738 100644 --- a/arch/sh/include/asm/freq.h +++ b/arch/sh/include/asm/freq.h @@ -1,12 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0+ + * * include/asm-sh/freq.h * * Copyright (C) 2002, 2003 Paul Mundt - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2 of the License, or (at your - * option) any later version. */ #ifndef __ASM_SH_FREQ_H #define __ASM_SH_FREQ_H diff --git a/arch/sh/include/asm/gpio.h b/arch/sh/include/asm/gpio.h index 7dfe15e2e990..351918894e86 100644 --- a/arch/sh/include/asm/gpio.h +++ b/arch/sh/include/asm/gpio.h @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/gpio.h * * Generic GPIO API and pinmux table support for SuperH. * * Copyright (c) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASM_SH_GPIO_H #define __ASM_SH_GPIO_H diff --git a/arch/sh/include/asm/machvec.h b/arch/sh/include/asm/machvec.h index d3324e4f372e..f7d05546beca 100644 --- a/arch/sh/include/asm/machvec.h +++ b/arch/sh/include/asm/machvec.h @@ -1,10 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/machvec.h * * Copyright 2000 Stuart Menefy (stuart.menefy@st.com) - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. */ #ifndef _ASM_SH_MACHVEC_H diff --git a/arch/sh/include/asm/mmu_context_64.h b/arch/sh/include/asm/mmu_context_64.h index de121025d87f..bacafe0b887d 100644 --- a/arch/sh/include/asm/mmu_context_64.h +++ b/arch/sh/include/asm/mmu_context_64.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_MMU_CONTEXT_64_H #define __ASM_SH_MMU_CONTEXT_64_H @@ -6,10 +7,6 @@ * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h index f6abfe2bca93..3587103afe59 100644 --- a/arch/sh/include/asm/pgtable.h +++ b/arch/sh/include/asm/pgtable.h @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * This file contains the functions and defines necessary to modify and * use the SuperH page table tree. * * Copyright (C) 1999 Niibe Yutaka * Copyright (C) 2002 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General - * Public License. See the file "COPYING" in the main directory of this - * archive for more details. */ #ifndef __ASM_SH_PGTABLE_H #define __ASM_SH_PGTABLE_H diff --git a/arch/sh/include/asm/pgtable_64.h b/arch/sh/include/asm/pgtable_64.h index 07424968df62..1778bc5971e7 100644 --- a/arch/sh/include/asm/pgtable_64.h +++ b/arch/sh/include/asm/pgtable_64.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_PGTABLE_64_H #define __ASM_SH_PGTABLE_64_H @@ -10,10 +11,6 @@ * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003, 2004 Paul Mundt * Copyright (C) 2003, 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/include/asm/processor_64.h b/arch/sh/include/asm/processor_64.h index f3d7075648d0..53efc9f51ef1 100644 --- a/arch/sh/include/asm/processor_64.h +++ b/arch/sh/include/asm/processor_64.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_PROCESSOR_64_H #define __ASM_SH_PROCESSOR_64_H @@ -7,10 +8,6 @@ * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASSEMBLY__ diff --git a/arch/sh/include/asm/rtc.h b/arch/sh/include/asm/rtc.h index c63555ee1255..69dbae2949b0 100644 --- a/arch/sh/include/asm/rtc.h +++ b/arch/sh/include/asm/rtc.h @@ -3,9 +3,6 @@ #define _ASM_RTC_H void time_init(void); -extern void (*board_time_init)(void); -extern void (*rtc_sh_get_time)(struct timespec *); -extern int (*rtc_sh_set_time)(const time_t); #define RTC_CAP_4_DIGIT_YEAR (1 << 0) diff --git a/arch/sh/include/asm/sfp-machine.h b/arch/sh/include/asm/sfp-machine.h index d3c548443f2a..cbc7cf8c97ce 100644 --- a/arch/sh/include/asm/sfp-machine.h +++ b/arch/sh/include/asm/sfp-machine.h @@ -1,4 +1,6 @@ -/* Machine-dependent software floating-point definitions. +/* SPDX-License-Identifier: GPL-2.0+ + * + * Machine-dependent software floating-point definitions. SuperH kernel version. Copyright (C) 1997,1998,1999 Free Software Foundation, Inc. This file is part of the GNU C Library. @@ -6,21 +8,7 @@ Jakub Jelinek (jj@ultra.linux.cz), David S. Miller (davem@redhat.com) and Peter Maydell (pmaydell@chiark.greenend.org.uk). - - The GNU C Library is free software; you can redistribute it and/or - modify it under the terms of the GNU Library General Public License as - published by the Free Software Foundation; either version 2 of the - License, or (at your option) any later version. - - The GNU C Library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Library General Public License for more details. - - You should have received a copy of the GNU Library General Public - License along with the GNU C Library; see the file COPYING.LIB. If - not, write to the Free Software Foundation, Inc., - 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ +*/ #ifndef _SFP_MACHINE_H #define _SFP_MACHINE_H diff --git a/arch/sh/include/asm/shmparam.h b/arch/sh/include/asm/shmparam.h index ba1758d90106..6c580a644a78 100644 --- a/arch/sh/include/asm/shmparam.h +++ b/arch/sh/include/asm/shmparam.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/shmparam.h * * Copyright (C) 1999 Niibe Yutaka * Copyright (C) 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_SHMPARAM_H #define __ASM_SH_SHMPARAM_H diff --git a/arch/sh/include/asm/siu.h b/arch/sh/include/asm/siu.h index 580b7ac228b7..35e4839d381e 100644 --- a/arch/sh/include/asm/siu.h +++ b/arch/sh/include/asm/siu.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * platform header for the SIU ASoC driver * * Copyright (C) 2009-2010 Guennadi Liakhovetski - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ #ifndef ASM_SIU_H diff --git a/arch/sh/include/asm/spinlock-cas.h b/arch/sh/include/asm/spinlock-cas.h index 270ee4d3e25b..3d49985ebf41 100644 --- a/arch/sh/include/asm/spinlock-cas.h +++ b/arch/sh/include/asm/spinlock-cas.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/spinlock-cas.h * * Copyright (C) 2015 SEI - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASM_SH_SPINLOCK_CAS_H #define __ASM_SH_SPINLOCK_CAS_H diff --git a/arch/sh/include/asm/spinlock-llsc.h b/arch/sh/include/asm/spinlock-llsc.h index 715595de286a..786ee0fde3b0 100644 --- a/arch/sh/include/asm/spinlock-llsc.h +++ b/arch/sh/include/asm/spinlock-llsc.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/spinlock-llsc.h * * Copyright (C) 2002, 2003 Paul Mundt * Copyright (C) 2006, 2007 Akio Idehara - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_SPINLOCK_LLSC_H #define __ASM_SH_SPINLOCK_LLSC_H diff --git a/arch/sh/include/asm/spinlock.h b/arch/sh/include/asm/spinlock.h index c2c61ea6a8e2..fa6801f63551 100644 --- a/arch/sh/include/asm/spinlock.h +++ b/arch/sh/include/asm/spinlock.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/spinlock.h * * Copyright (C) 2002, 2003 Paul Mundt * Copyright (C) 2006, 2007 Akio Idehara - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_SPINLOCK_H #define __ASM_SH_SPINLOCK_H diff --git a/arch/sh/include/asm/string_32.h b/arch/sh/include/asm/string_32.h index 55f8db6bc1d7..3558b1d7123e 100644 --- a/arch/sh/include/asm/string_32.h +++ b/arch/sh/include/asm/string_32.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_STRING_H #define __ASM_SH_STRING_H diff --git a/arch/sh/include/asm/switch_to.h b/arch/sh/include/asm/switch_to.h index bcd722fc8347..9eec80ab5aa2 100644 --- a/arch/sh/include/asm/switch_to.h +++ b/arch/sh/include/asm/switch_to.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_SWITCH_TO_H #define __ASM_SH_SWITCH_TO_H diff --git a/arch/sh/include/asm/switch_to_64.h b/arch/sh/include/asm/switch_to_64.h index ba3129d6bc21..2dbf2311669f 100644 --- a/arch/sh/include/asm/switch_to_64.h +++ b/arch/sh/include/asm/switch_to_64.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_SWITCH_TO_64_H #define __ASM_SH_SWITCH_TO_64_H diff --git a/arch/sh/include/asm/tlb_64.h b/arch/sh/include/asm/tlb_64.h index ef0ae2a28f23..59fa0a23dad7 100644 --- a/arch/sh/include/asm/tlb_64.h +++ b/arch/sh/include/asm/tlb_64.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/tlb_64.h * * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASM_SH_TLB_64_H #define __ASM_SH_TLB_64_H diff --git a/arch/sh/include/asm/traps_64.h b/arch/sh/include/asm/traps_64.h index ef5eff919449..f28db6dfbe45 100644 --- a/arch/sh/include/asm/traps_64.h +++ b/arch/sh/include/asm/traps_64.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_TRAPS_64_H #define __ASM_SH_TRAPS_64_H diff --git a/arch/sh/include/asm/uaccess_64.h b/arch/sh/include/asm/uaccess_64.h index ca5073dd4596..0c19d02dc566 100644 --- a/arch/sh/include/asm/uaccess_64.h +++ b/arch/sh/include/asm/uaccess_64.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_UACCESS_64_H #define __ASM_SH_UACCESS_64_H @@ -15,10 +16,6 @@ * MIPS implementation version 1.15 by * Copyright (C) 1996, 1997, 1998 by Ralf Baechle * and i386 version. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #define __get_user_size(x,ptr,size,retval) \ diff --git a/arch/sh/include/asm/vga.h b/arch/sh/include/asm/vga.h index 06a5de8ace1a..089fbdc6c0b1 100644 --- a/arch/sh/include/asm/vga.h +++ b/arch/sh/include/asm/vga.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_VGA_H #define __ASM_SH_VGA_H diff --git a/arch/sh/include/asm/watchdog.h b/arch/sh/include/asm/watchdog.h index 85a7aca7fb8f..cecd0fc507f9 100644 --- a/arch/sh/include/asm/watchdog.h +++ b/arch/sh/include/asm/watchdog.h @@ -1,14 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0+ + * * include/asm-sh/watchdog.h * * Copyright (C) 2002, 2003 Paul Mundt * Copyright (C) 2009 Siemens AG * Copyright (C) 2009 Valentin Sitdikov - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2 of the License, or (at your - * option) any later version. */ #ifndef __ASM_SH_WATCHDOG_H #define __ASM_SH_WATCHDOG_H diff --git a/arch/sh/include/cpu-common/cpu/addrspace.h b/arch/sh/include/cpu-common/cpu/addrspace.h index 2b9ab93efa4e..d8bf5d7d2fdf 100644 --- a/arch/sh/include/cpu-common/cpu/addrspace.h +++ b/arch/sh/include/cpu-common/cpu/addrspace.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Definitions for the address spaces of the SH-2 CPUs. * * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH2_ADDRSPACE_H #define __ASM_CPU_SH2_ADDRSPACE_H diff --git a/arch/sh/include/cpu-common/cpu/mmu_context.h b/arch/sh/include/cpu-common/cpu/mmu_context.h index beeb299e01ec..cef3a30dbf97 100644 --- a/arch/sh/include/cpu-common/cpu/mmu_context.h +++ b/arch/sh/include/cpu-common/cpu/mmu_context.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh2/mmu_context.h * * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASM_CPU_SH2_MMU_CONTEXT_H #define __ASM_CPU_SH2_MMU_CONTEXT_H diff --git a/arch/sh/include/cpu-common/cpu/pfc.h b/arch/sh/include/cpu-common/cpu/pfc.h index e538813286a8..879d2c9da537 100644 --- a/arch/sh/include/cpu-common/cpu/pfc.h +++ b/arch/sh/include/cpu-common/cpu/pfc.h @@ -1,16 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * SH Pin Function Control Initialization * * Copyright (C) 2012 Renesas Solutions Corp. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; version 2 of the License. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #ifndef __ARCH_SH_CPU_PFC_H__ diff --git a/arch/sh/include/cpu-common/cpu/timer.h b/arch/sh/include/cpu-common/cpu/timer.h index a39c241e8195..af51438755e0 100644 --- a/arch/sh/include/cpu-common/cpu/timer.h +++ b/arch/sh/include/cpu-common/cpu/timer.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_CPU_SH2_TIMER_H #define __ASM_CPU_SH2_TIMER_H diff --git a/arch/sh/include/cpu-sh2/cpu/cache.h b/arch/sh/include/cpu-sh2/cpu/cache.h index aa1b2b9088a7..070aa9f50d3f 100644 --- a/arch/sh/include/cpu-sh2/cpu/cache.h +++ b/arch/sh/include/cpu-sh2/cpu/cache.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh2/cache.h * * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH2_CACHE_H #define __ASM_CPU_SH2_CACHE_H diff --git a/arch/sh/include/cpu-sh2/cpu/freq.h b/arch/sh/include/cpu-sh2/cpu/freq.h index 31de475da70b..fb2e5d2831bc 100644 --- a/arch/sh/include/cpu-sh2/cpu/freq.h +++ b/arch/sh/include/cpu-sh2/cpu/freq.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh2/freq.h * * Copyright (C) 2006 Yoshinori Sato - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH2_FREQ_H #define __ASM_CPU_SH2_FREQ_H diff --git a/arch/sh/include/cpu-sh2/cpu/watchdog.h b/arch/sh/include/cpu-sh2/cpu/watchdog.h index 1eab8aa63a6d..141fe296d751 100644 --- a/arch/sh/include/cpu-sh2/cpu/watchdog.h +++ b/arch/sh/include/cpu-sh2/cpu/watchdog.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh2/watchdog.h * * Copyright (C) 2002, 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH2_WATCHDOG_H #define __ASM_CPU_SH2_WATCHDOG_H diff --git a/arch/sh/include/cpu-sh2a/cpu/cache.h b/arch/sh/include/cpu-sh2a/cpu/cache.h index b27ce92cb600..06efb233eb35 100644 --- a/arch/sh/include/cpu-sh2a/cpu/cache.h +++ b/arch/sh/include/cpu-sh2a/cpu/cache.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh2a/cache.h * * Copyright (C) 2004 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. 
See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH2A_CACHE_H #define __ASM_CPU_SH2A_CACHE_H diff --git a/arch/sh/include/cpu-sh2a/cpu/freq.h b/arch/sh/include/cpu-sh2a/cpu/freq.h index 830fd43b6cdc..fb0813f47043 100644 --- a/arch/sh/include/cpu-sh2a/cpu/freq.h +++ b/arch/sh/include/cpu-sh2a/cpu/freq.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh2a/freq.h * * Copyright (C) 2006 Yoshinori Sato - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH2A_FREQ_H #define __ASM_CPU_SH2A_FREQ_H diff --git a/arch/sh/include/cpu-sh2a/cpu/watchdog.h b/arch/sh/include/cpu-sh2a/cpu/watchdog.h index e7e8259e468c..8f932b733c67 100644 --- a/arch/sh/include/cpu-sh2a/cpu/watchdog.h +++ b/arch/sh/include/cpu-sh2a/cpu/watchdog.h @@ -1 +1,2 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #include diff --git a/arch/sh/include/cpu-sh3/cpu/cache.h b/arch/sh/include/cpu-sh3/cpu/cache.h index 29700fd88c75..f57124826943 100644 --- a/arch/sh/include/cpu-sh3/cpu/cache.h +++ b/arch/sh/include/cpu-sh3/cpu/cache.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh3/cache.h * * Copyright (C) 1999 Niibe Yutaka - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH3_CACHE_H #define __ASM_CPU_SH3_CACHE_H diff --git a/arch/sh/include/cpu-sh3/cpu/dma-register.h b/arch/sh/include/cpu-sh3/cpu/dma-register.h index 2349e488c9a6..c0f921fb4edc 100644 --- a/arch/sh/include/cpu-sh3/cpu/dma-register.h +++ b/arch/sh/include/cpu-sh3/cpu/dma-register.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * SH3 CPU-specific DMA definitions, used by both DMA drivers * * Copyright (C) 2010 Guennadi Liakhovetski - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ #ifndef CPU_DMA_REGISTER_H #define CPU_DMA_REGISTER_H diff --git a/arch/sh/include/cpu-sh3/cpu/freq.h b/arch/sh/include/cpu-sh3/cpu/freq.h index 53c62302b2e3..7290f02b7173 100644 --- a/arch/sh/include/cpu-sh3/cpu/freq.h +++ b/arch/sh/include/cpu-sh3/cpu/freq.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh3/freq.h * * Copyright (C) 2002, 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH3_FREQ_H #define __ASM_CPU_SH3_FREQ_H diff --git a/arch/sh/include/cpu-sh3/cpu/gpio.h b/arch/sh/include/cpu-sh3/cpu/gpio.h index 9a22b882f3dc..aeb0588ace98 100644 --- a/arch/sh/include/cpu-sh3/cpu/gpio.h +++ b/arch/sh/include/cpu-sh3/cpu/gpio.h @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh3/gpio.h * * Copyright (C) 2007 Markus Brunner, Mark Jonas * * Addresses for the Pin Function Controller - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef _CPU_SH3_GPIO_H #define _CPU_SH3_GPIO_H diff --git a/arch/sh/include/cpu-sh3/cpu/mmu_context.h b/arch/sh/include/cpu-sh3/cpu/mmu_context.h index 0c7c735ea82a..ead9a6f72113 100644 --- a/arch/sh/include/cpu-sh3/cpu/mmu_context.h +++ b/arch/sh/include/cpu-sh3/cpu/mmu_context.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh3/mmu_context.h * * Copyright (C) 1999 Niibe Yutaka - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH3_MMU_CONTEXT_H #define __ASM_CPU_SH3_MMU_CONTEXT_H diff --git a/arch/sh/include/cpu-sh3/cpu/watchdog.h b/arch/sh/include/cpu-sh3/cpu/watchdog.h index 4ee0347298d8..9d7e9d986809 100644 --- a/arch/sh/include/cpu-sh3/cpu/watchdog.h +++ b/arch/sh/include/cpu-sh3/cpu/watchdog.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh3/watchdog.h * * Copyright (C) 2002, 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH3_WATCHDOG_H #define __ASM_CPU_SH3_WATCHDOG_H diff --git a/arch/sh/include/cpu-sh4/cpu/addrspace.h b/arch/sh/include/cpu-sh4/cpu/addrspace.h index d51da25da72c..f006c9489f5a 100644 --- a/arch/sh/include/cpu-sh4/cpu/addrspace.h +++ b/arch/sh/include/cpu-sh4/cpu/addrspace.h @@ -1,7 +1,4 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. +/* SPDX-License-Identifier: GPL-2.0 * * Copyright (C) 1999 by Kaz Kojima * diff --git a/arch/sh/include/cpu-sh4/cpu/cache.h b/arch/sh/include/cpu-sh4/cpu/cache.h index 92c4cd119b66..72b4d13da127 100644 --- a/arch/sh/include/cpu-sh4/cpu/cache.h +++ b/arch/sh/include/cpu-sh4/cpu/cache.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh4/cache.h * * Copyright (C) 1999 Niibe Yutaka - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH4_CACHE_H #define __ASM_CPU_SH4_CACHE_H diff --git a/arch/sh/include/cpu-sh4/cpu/dma-register.h b/arch/sh/include/cpu-sh4/cpu/dma-register.h index 9cd81e54056a..53f7ab990d88 100644 --- a/arch/sh/include/cpu-sh4/cpu/dma-register.h +++ b/arch/sh/include/cpu-sh4/cpu/dma-register.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * SH4 CPU-specific DMA definitions, used by both DMA drivers * * Copyright (C) 2010 Guennadi Liakhovetski - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ #ifndef CPU_DMA_REGISTER_H #define CPU_DMA_REGISTER_H diff --git a/arch/sh/include/cpu-sh4/cpu/fpu.h b/arch/sh/include/cpu-sh4/cpu/fpu.h index febef7342528..29f451bfef19 100644 --- a/arch/sh/include/cpu-sh4/cpu/fpu.h +++ b/arch/sh/include/cpu-sh4/cpu/fpu.h @@ -1,12 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * linux/arch/sh/kernel/cpu/sh4/sh4_fpu.h * * Copyright (C) 2006 STMicroelectronics Limited * Author: Carl Shaw * - * May be copied or modified under the terms of the GNU General Public - * License Version 2. See linux/COPYING for more information. 
- * * Definitions for SH4 FPU operations */ diff --git a/arch/sh/include/cpu-sh4/cpu/freq.h b/arch/sh/include/cpu-sh4/cpu/freq.h index 1631fc238e6f..662f0f30e106 100644 --- a/arch/sh/include/cpu-sh4/cpu/freq.h +++ b/arch/sh/include/cpu-sh4/cpu/freq.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh4/freq.h * * Copyright (C) 2002, 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH4_FREQ_H #define __ASM_CPU_SH4_FREQ_H diff --git a/arch/sh/include/cpu-sh4/cpu/mmu_context.h b/arch/sh/include/cpu-sh4/cpu/mmu_context.h index e46ec708105a..421b56d5c595 100644 --- a/arch/sh/include/cpu-sh4/cpu/mmu_context.h +++ b/arch/sh/include/cpu-sh4/cpu/mmu_context.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh4/mmu_context.h * * Copyright (C) 1999 Niibe Yutaka - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH4_MMU_CONTEXT_H #define __ASM_CPU_SH4_MMU_CONTEXT_H diff --git a/arch/sh/include/cpu-sh4/cpu/sh7786.h b/arch/sh/include/cpu-sh4/cpu/sh7786.h index 96b8cb1f754a..8f9bfbf3cdb1 100644 --- a/arch/sh/include/cpu-sh4/cpu/sh7786.h +++ b/arch/sh/include/cpu-sh4/cpu/sh7786.h @@ -1,14 +1,11 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * SH7786 Pinmux * * Copyright (C) 2008, 2009 Renesas Solutions Corp. * Kuninori Morimoto * * Based on sh7785.h - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __CPU_SH7786_H__ diff --git a/arch/sh/include/cpu-sh4/cpu/sq.h b/arch/sh/include/cpu-sh4/cpu/sq.h index 74716ba2dc3c..81966e41fc21 100644 --- a/arch/sh/include/cpu-sh4/cpu/sq.h +++ b/arch/sh/include/cpu-sh4/cpu/sq.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh4/sq.h * * Copyright (C) 2001, 2002, 2003 Paul Mundt * Copyright (C) 2001, 2002 M. R. Brown - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_CPU_SH4_SQ_H #define __ASM_CPU_SH4_SQ_H diff --git a/arch/sh/include/cpu-sh4/cpu/watchdog.h b/arch/sh/include/cpu-sh4/cpu/watchdog.h index 7f62b9380938..fa7bcb398b8c 100644 --- a/arch/sh/include/cpu-sh4/cpu/watchdog.h +++ b/arch/sh/include/cpu-sh4/cpu/watchdog.h @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/cpu-sh4/watchdog.h * * Copyright (C) 2002, 2003 Paul Mundt * Copyright (C) 2009 Siemens AG * Copyright (C) 2009 Sitdikov Valentin - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #ifndef __ASM_CPU_SH4_WATCHDOG_H #define __ASM_CPU_SH4_WATCHDOG_H diff --git a/arch/sh/include/cpu-sh5/cpu/cache.h b/arch/sh/include/cpu-sh5/cpu/cache.h index ed050ab526f2..ef49538f386f 100644 --- a/arch/sh/include/cpu-sh5/cpu/cache.h +++ b/arch/sh/include/cpu-sh5/cpu/cache.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_CPU_SH5_CACHE_H #define __ASM_SH_CPU_SH5_CACHE_H @@ -6,10 +7,6 @@ * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003, 2004 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #define L1_CACHE_SHIFT 5 diff --git a/arch/sh/include/cpu-sh5/cpu/irq.h b/arch/sh/include/cpu-sh5/cpu/irq.h index 0ccf257a72d1..4aa6ac54b9d6 100644 --- a/arch/sh/include/cpu-sh5/cpu/irq.h +++ b/arch/sh/include/cpu-sh5/cpu/irq.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_CPU_SH5_IRQ_H #define __ASM_SH_CPU_SH5_IRQ_H @@ -5,10 +6,6 @@ * include/asm-sh/cpu-sh5/irq.h * * Copyright (C) 2000, 2001 Paolo Alberelli - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ diff --git a/arch/sh/include/cpu-sh5/cpu/registers.h b/arch/sh/include/cpu-sh5/cpu/registers.h index 6664ea6f1566..372c1e1978b3 100644 --- a/arch/sh/include/cpu-sh5/cpu/registers.h +++ b/arch/sh/include/cpu-sh5/cpu/registers.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_CPU_SH5_REGISTERS_H #define __ASM_SH_CPU_SH5_REGISTERS_H @@ -6,10 +7,6 @@ * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifdef __ASSEMBLY__ diff --git a/arch/sh/include/mach-common/mach/hp6xx.h b/arch/sh/include/mach-common/mach/hp6xx.h index 6aaaf8596e6a..71241f0d02a1 100644 --- a/arch/sh/include/mach-common/mach/hp6xx.h +++ b/arch/sh/include/mach-common/mach/hp6xx.h @@ -1,14 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright (C) 2003, 2004, 2005 Andriy Skulysh + */ #ifndef __ASM_SH_HP6XX_H #define __ASM_SH_HP6XX_H -/* - * Copyright (C) 2003, 2004, 2005 Andriy Skulysh - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - */ #include #define HP680_BTN_IRQ evt2irq(0x600) /* IRQ0_IRQ */ diff --git a/arch/sh/include/mach-common/mach/lboxre2.h b/arch/sh/include/mach-common/mach/lboxre2.h index 3a4dcc5c74ee..5b6bb8e3cf28 100644 --- a/arch/sh/include/mach-common/mach/lboxre2.h +++ b/arch/sh/include/mach-common/mach/lboxre2.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_LBOXRE2_H #define __ASM_SH_LBOXRE2_H @@ -5,11 +6,6 @@ * Copyright (C) 2007 Nobuhiro Iwamatsu * * NTT COMWARE L-BOX RE2 support - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
- * */ #include diff --git a/arch/sh/include/mach-common/mach/magicpanelr2.h b/arch/sh/include/mach-common/mach/magicpanelr2.h index eb0cf205176f..c2d218cea74b 100644 --- a/arch/sh/include/mach-common/mach/magicpanelr2.h +++ b/arch/sh/include/mach-common/mach/magicpanelr2.h @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/magicpanelr2.h * * Copyright (C) 2007 Markus Brunner, Mark Jonas * * I/O addresses and bitmasks for Magic Panel Release 2 board - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_MAGICPANELR2_H diff --git a/arch/sh/include/mach-common/mach/mangle-port.h b/arch/sh/include/mach-common/mach/mangle-port.h index 4ca1769a0f12..dd5a761a52ee 100644 --- a/arch/sh/include/mach-common/mach/mangle-port.h +++ b/arch/sh/include/mach-common/mach/mangle-port.h @@ -1,9 +1,6 @@ -/* - * SH version cribbed from the MIPS copy: +/* SPDX-License-Identifier: GPL-2.0 * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. + * SH version cribbed from the MIPS copy: * * Copyright (C) 2003, 2004 Ralf Baechle */ diff --git a/arch/sh/include/mach-common/mach/microdev.h b/arch/sh/include/mach-common/mach/microdev.h index dcb05fa8c164..0e2f9ab11976 100644 --- a/arch/sh/include/mach-common/mach/microdev.h +++ b/arch/sh/include/mach-common/mach/microdev.h @@ -1,12 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * linux/include/asm-sh/microdev.h * * Copyright (C) 2003 Sean McGoogan (Sean.McGoogan@superh.com) * * Definitions for the SuperH SH4-202 MicroDev board. - * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. */ #ifndef __ASM_SH_MICRODEV_H #define __ASM_SH_MICRODEV_H diff --git a/arch/sh/include/mach-common/mach/sdk7780.h b/arch/sh/include/mach-common/mach/sdk7780.h index ce64e02e9b50..a27dbe4184b3 100644 --- a/arch/sh/include/mach-common/mach/sdk7780.h +++ b/arch/sh/include/mach-common/mach/sdk7780.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_RENESAS_SDK7780_H #define __ASM_SH_RENESAS_SDK7780_H @@ -6,10 +7,6 @@ * * Renesas Solutions SH7780 SDK Support * Copyright (C) 2008 Nicholas Beck - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/include/mach-common/mach/secureedge5410.h b/arch/sh/include/mach-common/mach/secureedge5410.h index 3653b9a4bacc..dfc68aa91003 100644 --- a/arch/sh/include/mach-common/mach/secureedge5410.h +++ b/arch/sh/include/mach-common/mach/secureedge5410.h @@ -1,11 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/snapgear.h * * Modified version of io_se.h for the snapgear-specific functions. * - * May be copied or modified under the terms of the GNU General Public - * License. See linux/COPYING for more information. 
- * * IO functions for a SnapGear */ diff --git a/arch/sh/include/mach-common/mach/sh7763rdp.h b/arch/sh/include/mach-common/mach/sh7763rdp.h index 8750cc852977..301f85a1c044 100644 --- a/arch/sh/include/mach-common/mach/sh7763rdp.h +++ b/arch/sh/include/mach-common/mach/sh7763rdp.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_SH7763RDP_H #define __ASM_SH_SH7763RDP_H @@ -6,11 +7,6 @@ * * Copyright (C) 2008 Renesas Solutions * Copyright (C) 2008 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include diff --git a/arch/sh/include/mach-dreamcast/mach/dma.h b/arch/sh/include/mach-dreamcast/mach/dma.h index 1dbfdf701c9d..a773a763843a 100644 --- a/arch/sh/include/mach-dreamcast/mach/dma.h +++ b/arch/sh/include/mach-dreamcast/mach/dma.h @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/dreamcast/dma.h * * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_DREAMCAST_DMA_H #define __ASM_SH_DREAMCAST_DMA_H diff --git a/arch/sh/include/mach-dreamcast/mach/pci.h b/arch/sh/include/mach-dreamcast/mach/pci.h index 0314d975e626..c037c1ec63a9 100644 --- a/arch/sh/include/mach-dreamcast/mach/pci.h +++ b/arch/sh/include/mach-dreamcast/mach/pci.h @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * include/asm-sh/dreamcast/pci.h * * Copyright (C) 2001, 2002 M. R. Brown * Copyright (C) 2002, 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #ifndef __ASM_SH_DREAMCAST_PCI_H #define __ASM_SH_DREAMCAST_PCI_H diff --git a/arch/sh/include/mach-dreamcast/mach/sysasic.h b/arch/sh/include/mach-dreamcast/mach/sysasic.h index 58f710e1ebc2..ed69ce7f2030 100644 --- a/arch/sh/include/mach-dreamcast/mach/sysasic.h +++ b/arch/sh/include/mach-dreamcast/mach/sysasic.h @@ -1,4 +1,6 @@ -/* include/asm-sh/dreamcast/sysasic.h +/* SPDX-License-Identifier: GPL-2.0 + * + * include/asm-sh/dreamcast/sysasic.h * * Definitions for the Dreamcast System ASIC and related peripherals. * @@ -6,9 +8,6 @@ * Copyright (C) 2003 Paul Mundt * * This file is part of the LinuxDC project (www.linuxdc.org) - * - * Released under the terms of the GNU GPL v2.0. 
- * */ #ifndef __ASM_SH_DREAMCAST_SYSASIC_H #define __ASM_SH_DREAMCAST_SYSASIC_H @@ -42,7 +41,6 @@ /* arch/sh/boards/mach-dreamcast/irq.c */ extern int systemasic_irq_demux(int); extern void systemasic_irq_init(void); -extern void aica_time_init(void); #endif /* __ASM_SH_DREAMCAST_SYSASIC_H */ diff --git a/arch/sh/include/mach-ecovec24/mach/partner-jet-setup.txt b/arch/sh/include/mach-ecovec24/mach/partner-jet-setup.txt index cc737b807334..2d685cc2d54c 100644 --- a/arch/sh/include/mach-ecovec24/mach/partner-jet-setup.txt +++ b/arch/sh/include/mach-ecovec24/mach/partner-jet-setup.txt @@ -1,3 +1,4 @@ +LIST "SPDX-License-Identifier: GPL-2.0" LIST "partner-jet-setup.txt" LIST "(C) Copyright 2009 Renesas Solutions Corp" LIST "Kuninori Morimoto " diff --git a/arch/sh/include/mach-kfr2r09/mach/partner-jet-setup.txt b/arch/sh/include/mach-kfr2r09/mach/partner-jet-setup.txt index 3a65503714ee..a67b1926be22 100644 --- a/arch/sh/include/mach-kfr2r09/mach/partner-jet-setup.txt +++ b/arch/sh/include/mach-kfr2r09/mach/partner-jet-setup.txt @@ -1,3 +1,4 @@ +LIST "SPDX-License-Identifier: GPL-2.0" LIST "partner-jet-setup.txt - 20090729 Magnus Damm" LIST "set up enough of the kfr2r09 hardware to boot the kernel" diff --git a/arch/sh/include/mach-se/mach/se7721.h b/arch/sh/include/mach-se/mach/se7721.h index eabd0538de44..82226d40faf5 100644 --- a/arch/sh/include/mach-se/mach/se7721.h +++ b/arch/sh/include/mach-se/mach/se7721.h @@ -1,12 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Copyright (C) 2008 Renesas Solutions Corp. * * Hitachi UL SolutionEngine 7721 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #ifndef __ASM_SH_SE7721_H diff --git a/arch/sh/include/mach-se/mach/se7722.h b/arch/sh/include/mach-se/mach/se7722.h index 637e7ac753f8..efb761f9f6e0 100644 --- a/arch/sh/include/mach-se/mach/se7722.h +++ b/arch/sh/include/mach-se/mach/se7722.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_SE7722_H #define __ASM_SH_SE7722_H @@ -7,11 +8,6 @@ * Copyright (C) 2007 Nobuhiro Iwamatsu * * Hitachi UL SolutionEngine 7722 Support. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include #include diff --git a/arch/sh/include/mach-se/mach/se7724.h b/arch/sh/include/mach-se/mach/se7724.h index be842dd1ca02..1fe28820dfa9 100644 --- a/arch/sh/include/mach-se/mach/se7724.h +++ b/arch/sh/include/mach-se/mach/se7724.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_SE7724_H #define __ASM_SH_SE7724_H @@ -12,11 +13,6 @@ * * Based on se7722.h * Copyright (C) 2007 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include #include diff --git a/arch/sh/include/mach-se/mach/se7780.h b/arch/sh/include/mach-se/mach/se7780.h index bde357cf81bd..24f0ac82f8b3 100644 --- a/arch/sh/include/mach-se/mach/se7780.h +++ b/arch/sh/include/mach-se/mach/se7780.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_SH_SE7780_H #define __ASM_SH_SE7780_H @@ -7,10 +8,6 @@ * Copyright (C) 2006,2007 Nobuhiro Iwamatsu * * Hitachi UL SolutionEngine 7780 Support. 
- * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/include/uapi/asm/Kbuild b/arch/sh/include/uapi/asm/Kbuild index a55e317c1ef2..dcb93543f55d 100644 --- a/arch/sh/include/uapi/asm/Kbuild +++ b/arch/sh/include/uapi/asm/Kbuild @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # UAPI Header export list include include/uapi/asm-generic/Kbuild.asm diff --git a/arch/sh/include/uapi/asm/setup.h b/arch/sh/include/uapi/asm/setup.h index 552df83f1a49..1170dd2fb998 100644 --- a/arch/sh/include/uapi/asm/setup.h +++ b/arch/sh/include/uapi/asm/setup.h @@ -1 +1,2 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #include diff --git a/arch/sh/include/uapi/asm/types.h b/arch/sh/include/uapi/asm/types.h index b9e79bc580dd..f83795fdc0da 100644 --- a/arch/sh/include/uapi/asm/types.h +++ b/arch/sh/include/uapi/asm/types.h @@ -1 +1,2 @@ +/* SPDX-License-Identifier: GPL-2.0 */ #include diff --git a/arch/sh/kernel/cpu/clock.c b/arch/sh/kernel/cpu/clock.c index fca9b1e78a63..6fb34410d630 100644 --- a/arch/sh/kernel/cpu/clock.c +++ b/arch/sh/kernel/cpu/clock.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/clock.c - SuperH clock framework * @@ -9,10 +10,6 @@ * Written by Tuukka Tikkanen * * Modified for omap shared clock framework by Tony Lindgren - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/init.c b/arch/sh/kernel/cpu/init.c index c4f01c5c8736..ce7291e12a30 100644 --- a/arch/sh/kernel/cpu/init.c +++ b/arch/sh/kernel/cpu/init.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/init.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2002 - 2009 Paul Mundt * Copyright (C) 2003 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/irq/Makefile b/arch/sh/kernel/cpu/irq/Makefile index 3f8e79402d7d..8b91cb96411b 100644 --- a/arch/sh/kernel/cpu/irq/Makefile +++ b/arch/sh/kernel/cpu/irq/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the Linux/SuperH CPU-specific IRQ handlers. # diff --git a/arch/sh/kernel/cpu/irq/intc-sh5.c b/arch/sh/kernel/cpu/irq/intc-sh5.c index 9e056a3a0c73..744f903b4df3 100644 --- a/arch/sh/kernel/cpu/irq/intc-sh5.c +++ b/arch/sh/kernel/cpu/irq/intc-sh5.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/irq/intc-sh5.c * @@ -9,10 +10,6 @@ * Per-interrupt selective. IRLM=0 (Fixed priority) is not * supported being useless without a cascaded interrupt * controller. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/irq/ipr.c b/arch/sh/kernel/cpu/irq/ipr.c index 5de6dff5c21b..d41bce71f211 100644 --- a/arch/sh/kernel/cpu/irq/ipr.c +++ b/arch/sh/kernel/cpu/irq/ipr.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Interrupt handling for IPR-based IRQ. * @@ -11,10 +12,6 @@ * On-chip supporting modules for SH7709/SH7709A/SH7729. 
* Hitachi SolutionEngine external I/O: * MS7709SE01, MS7709ASE01, and MS7750SE01 - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/pfc.c b/arch/sh/kernel/cpu/pfc.c index d766564ef7c2..062056ede88d 100644 --- a/arch/sh/kernel/cpu/pfc.c +++ b/arch/sh/kernel/cpu/pfc.c @@ -1,16 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH Pin Function Control Initialization * * Copyright (C) 2012 Renesas Solutions Corp. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; version 2 of the License. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2/Makefile b/arch/sh/kernel/cpu/sh2/Makefile index 904c4283d923..214c3a5b184a 100644 --- a/arch/sh/kernel/cpu/sh2/Makefile +++ b/arch/sh/kernel/cpu/sh2/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the Linux/SuperH SH-2 backends. # diff --git a/arch/sh/kernel/cpu/sh2/clock-sh7619.c b/arch/sh/kernel/cpu/sh2/clock-sh7619.c index e80252ae5bca..d66d194c7731 100644 --- a/arch/sh/kernel/cpu/sh2/clock-sh7619.c +++ b/arch/sh/kernel/cpu/sh2/clock-sh7619.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2/clock-sh7619.c * @@ -7,10 +8,6 @@ * * Based on clock-sh4.c * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2/entry.S b/arch/sh/kernel/cpu/sh2/entry.S index 1ee0a6e774c6..0a1c2bf216bc 100644 --- a/arch/sh/kernel/cpu/sh2/entry.S +++ b/arch/sh/kernel/cpu/sh2/entry.S @@ -1,14 +1,11 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh2/entry.S * * The SH-2 exception entry * * Copyright (C) 2005-2008 Yoshinori Sato * Copyright (C) 2005 AXE,Inc. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2/ex.S b/arch/sh/kernel/cpu/sh2/ex.S index 85b0bf81fc1d..dd0cc887a3ca 100644 --- a/arch/sh/kernel/cpu/sh2/ex.S +++ b/arch/sh/kernel/cpu/sh2/ex.S @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh2/ex.S * * The SH-2 exception vector table * * Copyright (C) 2005 Yoshinori Sato - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2/probe.c b/arch/sh/kernel/cpu/sh2/probe.c index a5bd03642678..d342ea08843f 100644 --- a/arch/sh/kernel/cpu/sh2/probe.c +++ b/arch/sh/kernel/cpu/sh2/probe.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2/probe.c * * CPU Subtype Probing for SH-2. * * Copyright (C) 2002 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. 
See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2/setup-sh7619.c b/arch/sh/kernel/cpu/sh2/setup-sh7619.c index d08db08dec38..f5b6841ef7e1 100644 --- a/arch/sh/kernel/cpu/sh2/setup-sh7619.c +++ b/arch/sh/kernel/cpu/sh2/setup-sh7619.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7619 Setup * * Copyright (C) 2006 Yoshinori Sato * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2/smp-j2.c b/arch/sh/kernel/cpu/sh2/smp-j2.c index 6ccd7e4dc008..ae44dc24c455 100644 --- a/arch/sh/kernel/cpu/sh2/smp-j2.c +++ b/arch/sh/kernel/cpu/sh2/smp-j2.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SMP support for J2 processor * * Copyright (C) 2015-2016 Smart Energy Instruments, Inc. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2a/clock-sh7201.c b/arch/sh/kernel/cpu/sh2a/clock-sh7201.c index 532a36c72322..5a5daaafb27a 100644 --- a/arch/sh/kernel/cpu/sh2a/clock-sh7201.c +++ b/arch/sh/kernel/cpu/sh2a/clock-sh7201.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2a/clock-sh7201.c * @@ -7,10 +8,6 @@ * * Based on clock-sh4.c * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/clock-sh7203.c b/arch/sh/kernel/cpu/sh2a/clock-sh7203.c index 529f719b6e33..c62053945664 100644 --- a/arch/sh/kernel/cpu/sh2a/clock-sh7203.c +++ b/arch/sh/kernel/cpu/sh2a/clock-sh7203.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2a/clock-sh7203.c * @@ -10,10 +11,6 @@ * * Based on clock-sh4.c * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/clock-sh7206.c b/arch/sh/kernel/cpu/sh2a/clock-sh7206.c index 177789834678..d286d7b918d5 100644 --- a/arch/sh/kernel/cpu/sh2a/clock-sh7206.c +++ b/arch/sh/kernel/cpu/sh2a/clock-sh7206.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2a/clock-sh7206.c * @@ -7,10 +8,6 @@ * * Based on clock-sh4.c * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/clock-sh7264.c b/arch/sh/kernel/cpu/sh2a/clock-sh7264.c index 7e06e39b0958..d9acc1ed7981 100644 --- a/arch/sh/kernel/cpu/sh2a/clock-sh7264.c +++ b/arch/sh/kernel/cpu/sh2a/clock-sh7264.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2a/clock-sh7264.c * * SH7264 clock framework support * * Copyright (C) 2012 Phil Edworthy - * - * This file is subject to the terms and conditions of the GNU General Public - * License. 
See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/clock-sh7269.c b/arch/sh/kernel/cpu/sh2a/clock-sh7269.c index 663a97bed554..c17ab0d76538 100644 --- a/arch/sh/kernel/cpu/sh2a/clock-sh7269.c +++ b/arch/sh/kernel/cpu/sh2a/clock-sh7269.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2a/clock-sh7269.c * * SH7269 clock framework support * * Copyright (C) 2012 Phil Edworthy - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/entry.S b/arch/sh/kernel/cpu/sh2a/entry.S index da77a8ef4696..9f11fc8b5052 100644 --- a/arch/sh/kernel/cpu/sh2a/entry.S +++ b/arch/sh/kernel/cpu/sh2a/entry.S @@ -1,14 +1,11 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh2a/entry.S * * The SH-2A exception entry * * Copyright (C) 2008 Yoshinori Sato * Based on arch/sh/kernel/cpu/sh2/entry.S - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2a/ex.S b/arch/sh/kernel/cpu/sh2a/ex.S index 4568066700cf..ed91996287c7 100644 --- a/arch/sh/kernel/cpu/sh2a/ex.S +++ b/arch/sh/kernel/cpu/sh2a/ex.S @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh2a/ex.S * * The SH-2A exception vector table * * Copyright (C) 2008 Yoshinori Sato - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2a/fpu.c b/arch/sh/kernel/cpu/sh2a/fpu.c index 352f894bece1..74b48db86dd7 100644 --- a/arch/sh/kernel/cpu/sh2a/fpu.c +++ b/arch/sh/kernel/cpu/sh2a/fpu.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Save/restore floating point context for signal handlers. * * Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * * FIXME! These routines can be optimized in big endian case. */ #include diff --git a/arch/sh/kernel/cpu/sh2a/opcode_helper.c b/arch/sh/kernel/cpu/sh2a/opcode_helper.c index 72aa61c81e48..c509081d90b9 100644 --- a/arch/sh/kernel/cpu/sh2a/opcode_helper.c +++ b/arch/sh/kernel/cpu/sh2a/opcode_helper.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2a/opcode_helper.c * * Helper for the SH-2A 32-bit opcodes. * * Copyright (C) 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2a/pinmux-sh7203.c b/arch/sh/kernel/cpu/sh2a/pinmux-sh7203.c index eef17dcc3a41..a6777e6fc8cd 100644 --- a/arch/sh/kernel/cpu/sh2a/pinmux-sh7203.c +++ b/arch/sh/kernel/cpu/sh2a/pinmux-sh7203.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7203 Pinmux * * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include diff --git a/arch/sh/kernel/cpu/sh2a/pinmux-sh7264.c b/arch/sh/kernel/cpu/sh2a/pinmux-sh7264.c index 569decbd6d93..7a103e16cf01 100644 --- a/arch/sh/kernel/cpu/sh2a/pinmux-sh7264.c +++ b/arch/sh/kernel/cpu/sh2a/pinmux-sh7264.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7264 Pinmux * * Copyright (C) 2012 Renesas Electronics Europe Ltd - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2a/pinmux-sh7269.c b/arch/sh/kernel/cpu/sh2a/pinmux-sh7269.c index 4c17fb6970b1..4da432ef1b40 100644 --- a/arch/sh/kernel/cpu/sh2a/pinmux-sh7269.c +++ b/arch/sh/kernel/cpu/sh2a/pinmux-sh7269.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7269 Pinmux * * Copyright (C) 2012 Renesas Electronics Europe Ltd * Copyright (C) 2012 Phil Edworthy - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh2a/probe.c b/arch/sh/kernel/cpu/sh2a/probe.c index 3f87971082f1..c66a3bc882bf 100644 --- a/arch/sh/kernel/cpu/sh2a/probe.c +++ b/arch/sh/kernel/cpu/sh2a/probe.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh2a/probe.c * * CPU Subtype Probing for SH-2A. * * Copyright (C) 2004 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/setup-mxg.c b/arch/sh/kernel/cpu/sh2a/setup-mxg.c index 060fdd369f09..52350ad0b0a2 100644 --- a/arch/sh/kernel/cpu/sh2a/setup-mxg.c +++ b/arch/sh/kernel/cpu/sh2a/setup-mxg.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas MX-G (R8A03022BG) Setup * * Copyright (C) 2008, 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7201.c b/arch/sh/kernel/cpu/sh2a/setup-sh7201.c index c1301f68d3cd..b51ed761ae08 100644 --- a/arch/sh/kernel/cpu/sh2a/setup-sh7201.c +++ b/arch/sh/kernel/cpu/sh2a/setup-sh7201.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7201 setup * * Copyright (C) 2008 Peter Griffin pgriffin@mpc-data.co.uk * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7203.c b/arch/sh/kernel/cpu/sh2a/setup-sh7203.c index 32ec732e28e5..89b3e49fc250 100644 --- a/arch/sh/kernel/cpu/sh2a/setup-sh7203.c +++ b/arch/sh/kernel/cpu/sh2a/setup-sh7203.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7203 and SH7263 Setup * * Copyright (C) 2007 - 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7206.c b/arch/sh/kernel/cpu/sh2a/setup-sh7206.c index 8d8d354851ce..36ff3a3139da 100644 --- a/arch/sh/kernel/cpu/sh2a/setup-sh7206.c +++ b/arch/sh/kernel/cpu/sh2a/setup-sh7206.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7206 Setup * * Copyright (C) 2006 Yoshinori Sato * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7264.c b/arch/sh/kernel/cpu/sh2a/setup-sh7264.c index ab71eab690fd..d199618d877c 100644 --- a/arch/sh/kernel/cpu/sh2a/setup-sh7264.c +++ b/arch/sh/kernel/cpu/sh2a/setup-sh7264.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7264 Setup * * Copyright (C) 2012 Renesas Electronics Europe Ltd - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7269.c b/arch/sh/kernel/cpu/sh2a/setup-sh7269.c index c7e81b20967c..9095c960b455 100644 --- a/arch/sh/kernel/cpu/sh2a/setup-sh7269.c +++ b/arch/sh/kernel/cpu/sh2a/setup-sh7269.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7269 Setup * * Copyright (C) 2012 Renesas Electronics Europe Ltd * Copyright (C) 2012 Phil Edworthy - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/clock-sh3.c b/arch/sh/kernel/cpu/sh3/clock-sh3.c index 90faa44ca94d..d7765728cadf 100644 --- a/arch/sh/kernel/cpu/sh3/clock-sh3.c +++ b/arch/sh/kernel/cpu/sh3/clock-sh3.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh3/clock-sh3.c * @@ -11,10 +12,6 @@ * Copyright (C) 2000 Philipp Rumpf * Copyright (C) 2002, 2003, 2004 Paul Mundt * Copyright (C) 2002 M. R. Brown - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/clock-sh7705.c b/arch/sh/kernel/cpu/sh3/clock-sh7705.c index a8da4a9986b3..4947114af090 100644 --- a/arch/sh/kernel/cpu/sh3/clock-sh7705.c +++ b/arch/sh/kernel/cpu/sh3/clock-sh7705.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh3/clock-sh7705.c * @@ -11,10 +12,6 @@ * Copyright (C) 2000 Philipp Rumpf * Copyright (C) 2002, 2003, 2004 Paul Mundt * Copyright (C) 2002 M. R. Brown - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/clock-sh7706.c b/arch/sh/kernel/cpu/sh3/clock-sh7706.c index a4088e5b2203..17855022c118 100644 --- a/arch/sh/kernel/cpu/sh3/clock-sh7706.c +++ b/arch/sh/kernel/cpu/sh3/clock-sh7706.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh3/clock-sh7706.c * @@ -7,10 +8,6 @@ * * Based on arch/sh/kernel/cpu/sh3/clock-sh7709.c * Copyright (C) 2005 Andriy Skulysh - * - * This file is subject to the terms and conditions of the GNU General Public - * License. 
See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/clock-sh7709.c b/arch/sh/kernel/cpu/sh3/clock-sh7709.c index 54a6d4bcc0db..54701bbf7caa 100644 --- a/arch/sh/kernel/cpu/sh3/clock-sh7709.c +++ b/arch/sh/kernel/cpu/sh3/clock-sh7709.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh3/clock-sh7709.c * @@ -7,10 +8,6 @@ * * Based on arch/sh/kernel/cpu/sh3/clock-sh7705.c * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/clock-sh7710.c b/arch/sh/kernel/cpu/sh3/clock-sh7710.c index ce601b2e3976..e60d0bc19cbe 100644 --- a/arch/sh/kernel/cpu/sh3/clock-sh7710.c +++ b/arch/sh/kernel/cpu/sh3/clock-sh7710.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh3/clock-sh7710.c * @@ -11,10 +12,6 @@ * Copyright (C) 2000 Philipp Rumpf * Copyright (C) 2002, 2003, 2004 Paul Mundt * Copyright (C) 2002 M. R. Brown - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/clock-sh7712.c b/arch/sh/kernel/cpu/sh3/clock-sh7712.c index 21438a9a1ae1..5af553f38d3a 100644 --- a/arch/sh/kernel/cpu/sh3/clock-sh7712.c +++ b/arch/sh/kernel/cpu/sh3/clock-sh7712.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh3/clock-sh7712.c * @@ -7,10 +8,6 @@ * * Based on arch/sh/kernel/cpu/sh3/clock-sh3.c * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/entry.S b/arch/sh/kernel/cpu/sh3/entry.S index 262db6ec067b..25eb80905416 100644 --- a/arch/sh/kernel/cpu/sh3/entry.S +++ b/arch/sh/kernel/cpu/sh3/entry.S @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh3/entry.S * * Copyright (C) 1999, 2000, 2002 Niibe Yutaka * Copyright (C) 2003 - 2012 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/ex.S b/arch/sh/kernel/cpu/sh3/ex.S index 99b4d020179a..ee2113f4215c 100644 --- a/arch/sh/kernel/cpu/sh3/ex.S +++ b/arch/sh/kernel/cpu/sh3/ex.S @@ -1,14 +1,11 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh3/ex.S * * The SH-3 and SH-4 exception vector table. - + * * Copyright (C) 1999, 2000, 2002 Niibe Yutaka * Copyright (C) 2003 - 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include diff --git a/arch/sh/kernel/cpu/sh3/pinmux-sh7720.c b/arch/sh/kernel/cpu/sh3/pinmux-sh7720.c index 26e90a66ebb7..34015e608ee9 100644 --- a/arch/sh/kernel/cpu/sh3/pinmux-sh7720.c +++ b/arch/sh/kernel/cpu/sh3/pinmux-sh7720.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7720 Pinmux * * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh3/probe.c b/arch/sh/kernel/cpu/sh3/probe.c index 426e1e1dcedc..5e7ad591ab16 100644 --- a/arch/sh/kernel/cpu/sh3/probe.c +++ b/arch/sh/kernel/cpu/sh3/probe.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh3/probe.c * @@ -5,10 +6,6 @@ * * Copyright (C) 1999, 2000 Niibe Yutaka * Copyright (C) 2002 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh3/setup-sh3.c b/arch/sh/kernel/cpu/sh3/setup-sh3.c index 53be70b98116..8058c01cf09d 100644 --- a/arch/sh/kernel/cpu/sh3/setup-sh3.c +++ b/arch/sh/kernel/cpu/sh3/setup-sh3.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Shared SH3 Setup code * * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh3/setup-sh7705.c b/arch/sh/kernel/cpu/sh3/setup-sh7705.c index f6e392e0d27e..e19d1ce7b6ad 100644 --- a/arch/sh/kernel/cpu/sh3/setup-sh7705.c +++ b/arch/sh/kernel/cpu/sh3/setup-sh7705.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7705 Setup * * Copyright (C) 2006 - 2009 Paul Mundt * Copyright (C) 2007 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/setup-sh770x.c b/arch/sh/kernel/cpu/sh3/setup-sh770x.c index 59a88611df55..5c5144bee6bc 100644 --- a/arch/sh/kernel/cpu/sh3/setup-sh770x.c +++ b/arch/sh/kernel/cpu/sh3/setup-sh770x.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH3 Setup code for SH7706, SH7707, SH7708, SH7709 * @@ -7,10 +8,6 @@ * Based on setup-sh7709.c * * Copyright (C) 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/setup-sh7710.c b/arch/sh/kernel/cpu/sh3/setup-sh7710.c index ea52410b430d..4776e2495738 100644 --- a/arch/sh/kernel/cpu/sh3/setup-sh7710.c +++ b/arch/sh/kernel/cpu/sh3/setup-sh7710.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH3 Setup code for SH7710, SH7712 * * Copyright (C) 2006 - 2009 Paul Mundt * Copyright (C) 2007 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh3/setup-sh7720.c b/arch/sh/kernel/cpu/sh3/setup-sh7720.c index bf34b4e2e9ef..1d4c34e7b7db 100644 --- a/arch/sh/kernel/cpu/sh3/setup-sh7720.c +++ b/arch/sh/kernel/cpu/sh3/setup-sh7720.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Setup code for SH7720, SH7721. * @@ -8,10 +9,6 @@ * * Copyright (C) 2006 Paul Mundt * Copyright (C) 2006 Jamie Lenehan - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh3/swsusp.S b/arch/sh/kernel/cpu/sh3/swsusp.S index 01145426a2b8..dc111c4ccf21 100644 --- a/arch/sh/kernel/cpu/sh3/swsusp.S +++ b/arch/sh/kernel/cpu/sh3/swsusp.S @@ -1,11 +1,8 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh3/swsusp.S * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4/clock-sh4-202.c b/arch/sh/kernel/cpu/sh4/clock-sh4-202.c index 4b5bab5f875f..c1cdef763cb2 100644 --- a/arch/sh/kernel/cpu/sh4/clock-sh4-202.c +++ b/arch/sh/kernel/cpu/sh4/clock-sh4-202.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4/clock-sh4-202.c * * Additional SH4-202 support for the clock framework * * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4/clock-sh4.c b/arch/sh/kernel/cpu/sh4/clock-sh4.c index 99e5ec8b483d..ee3c5537a9d8 100644 --- a/arch/sh/kernel/cpu/sh4/clock-sh4.c +++ b/arch/sh/kernel/cpu/sh4/clock-sh4.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4/clock-sh4.c * @@ -11,10 +12,6 @@ * Copyright (C) 2000 Philipp Rumpf * Copyright (C) 2002, 2003, 2004 Paul Mundt * Copyright (C) 2002 M. R. Brown - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4/fpu.c b/arch/sh/kernel/cpu/sh4/fpu.c index 95fd2dcb83da..1ff56e5ba990 100644 --- a/arch/sh/kernel/cpu/sh4/fpu.c +++ b/arch/sh/kernel/cpu/sh4/fpu.c @@ -1,10 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Save/restore floating point context for signal handlers. * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * * Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka * Copyright (C) 2006 ST Microelectronics Ltd. (denorm support) * diff --git a/arch/sh/kernel/cpu/sh4/perf_event.c b/arch/sh/kernel/cpu/sh4/perf_event.c index fa4f724b295a..db5847bb7330 100644 --- a/arch/sh/kernel/cpu/sh4/perf_event.c +++ b/arch/sh/kernel/cpu/sh4/perf_event.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Performance events support for SH7750-style performance counters * * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh4/probe.c b/arch/sh/kernel/cpu/sh4/probe.c index a521bcf50695..ef4dd6295263 100644 --- a/arch/sh/kernel/cpu/sh4/probe.c +++ b/arch/sh/kernel/cpu/sh4/probe.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4/probe.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2001 - 2007 Paul Mundt * Copyright (C) 2003 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4/setup-sh4-202.c b/arch/sh/kernel/cpu/sh4/setup-sh4-202.c index 2623f820d510..a40ef35d101a 100644 --- a/arch/sh/kernel/cpu/sh4/setup-sh4-202.c +++ b/arch/sh/kernel/cpu/sh4/setup-sh4-202.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH4-202 Setup * * Copyright (C) 2006 Paul Mundt * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4/setup-sh7750.c b/arch/sh/kernel/cpu/sh4/setup-sh7750.c index 57d30689204d..b37bda66a532 100644 --- a/arch/sh/kernel/cpu/sh4/setup-sh7750.c +++ b/arch/sh/kernel/cpu/sh4/setup-sh7750.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7091/SH7750/SH7750S/SH7750R/SH7751/SH7751R Setup * * Copyright (C) 2006 Paul Mundt * Copyright (C) 2006 Jamie Lenehan - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4/setup-sh7760.c b/arch/sh/kernel/cpu/sh4/setup-sh7760.c index e51fe1734e13..86845da85997 100644 --- a/arch/sh/kernel/cpu/sh4/setup-sh7760.c +++ b/arch/sh/kernel/cpu/sh4/setup-sh7760.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7760 Setup * * Copyright (C) 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4/sq.c b/arch/sh/kernel/cpu/sh4/sq.c index 4ca78ed71ad2..934ff84844fa 100644 --- a/arch/sh/kernel/cpu/sh4/sq.c +++ b/arch/sh/kernel/cpu/sh4/sq.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4/sq.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2001 - 2006 Paul Mundt * Copyright (C) 2001, 2002 M. R. Brown - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7343.c b/arch/sh/kernel/cpu/sh4a/clock-sh7343.c index a907ee2388bf..32cb5d1fd3b3 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7343.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7343.c @@ -1,22 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7343.c * * SH7343 clock framework support * * Copyright (C) 2009 Magnus Damm - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7366.c b/arch/sh/kernel/cpu/sh4a/clock-sh7366.c index ac9854179dee..aa3444b41e72 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7366.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7366.c @@ -1,22 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7366.c * * SH7366 clock framework support * * Copyright (C) 2009 Magnus Damm - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7722.c b/arch/sh/kernel/cpu/sh4a/clock-sh7722.c index d85091ec4b01..38b057703eaa 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7722.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7722.c @@ -1,22 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7722.c * * SH7722 clock framework support * * Copyright (C) 2009 Magnus Damm - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7723.c b/arch/sh/kernel/cpu/sh4a/clock-sh7723.c index af01664f7b4c..9dc3a987d7cf 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7723.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7723.c @@ -1,22 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7723.c * * SH7723 clock framework support * * Copyright (C) 2009 Magnus Damm - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7724.c b/arch/sh/kernel/cpu/sh4a/clock-sh7724.c index 3194336a3599..2a1f0d847a2e 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7724.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7724.c @@ -1,22 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7724.c * * SH7724 clock framework support * * Copyright (C) 2009 Magnus Damm - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7734.c b/arch/sh/kernel/cpu/sh4a/clock-sh7734.c index 354dcac5e4cd..c81ee60eddb8 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7734.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7734.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7734.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2011, 2012 Nobuhiro Iwamatsu * Copyright (C) 2011, 2012 Renesas Solutions Corp. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7757.c b/arch/sh/kernel/cpu/sh4a/clock-sh7757.c index b10af2ae9f35..9acb72210fed 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7757.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7757.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4/clock-sh7757.c * * SH7757 support for the clock framework * * Copyright (C) 2009-2010 Renesas Solutions Corp. 
- * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7763.c b/arch/sh/kernel/cpu/sh4a/clock-sh7763.c index 7707e35aea46..aaff4b96812c 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7763.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7763.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7763.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2005 Paul Mundt * Copyright (C) 2007 Yoshihiro Shimoda - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7770.c b/arch/sh/kernel/cpu/sh4a/clock-sh7770.c index 5d36f334bb0a..f356dfcd17b7 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7770.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7770.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7770.c * * SH7770 support for the clock framework * * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7780.c b/arch/sh/kernel/cpu/sh4a/clock-sh7780.c index 793dae42a2f8..fc0a3efb53d5 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7780.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7780.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7780.c * * SH7780 support for the clock framework * * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7785.c b/arch/sh/kernel/cpu/sh4a/clock-sh7785.c index 1aafd5496752..fca351378bbc 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7785.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7785.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7785.c * * SH7785 support for the clock framework * * Copyright (C) 2007 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-sh7786.c b/arch/sh/kernel/cpu/sh4a/clock-sh7786.c index ac3dcfe5d303..f23862df3e8f 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-sh7786.c +++ b/arch/sh/kernel/cpu/sh4a/clock-sh7786.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/clock-sh7786.c * * SH7786 support for the clock framework * * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/clock-shx3.c b/arch/sh/kernel/cpu/sh4a/clock-shx3.c index b1bdbc3cbc21..6c7b6ab6cab5 100644 --- a/arch/sh/kernel/cpu/sh4a/clock-shx3.c +++ b/arch/sh/kernel/cpu/sh4a/clock-shx3.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4/clock-shx3.c * @@ -6,10 +7,6 @@ * Copyright (C) 2006-2007 Renesas Technology Corp. * Copyright (C) 2006-2007 Renesas Solutions Corp. * Copyright (C) 2006-2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/intc-shx3.c b/arch/sh/kernel/cpu/sh4a/intc-shx3.c index 78c971486b4e..eea87d25efbb 100644 --- a/arch/sh/kernel/cpu/sh4a/intc-shx3.c +++ b/arch/sh/kernel/cpu/sh4a/intc-shx3.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Shared support for SH-X3 interrupt controllers. * * Copyright (C) 2009 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/perf_event.c b/arch/sh/kernel/cpu/sh4a/perf_event.c index 84a2c396ceee..3beb8fed3d28 100644 --- a/arch/sh/kernel/cpu/sh4a/perf_event.c +++ b/arch/sh/kernel/cpu/sh4a/perf_event.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Performance events support for SH-4A performance counters * * Copyright (C) 2009, 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/pinmux-sh7723.c b/arch/sh/kernel/cpu/sh4a/pinmux-sh7723.c index 99c637d5bf7a..b67abc0637a4 100644 --- a/arch/sh/kernel/cpu/sh4a/pinmux-sh7723.c +++ b/arch/sh/kernel/cpu/sh4a/pinmux-sh7723.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7723 Pinmux * * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh4a/pinmux-sh7724.c b/arch/sh/kernel/cpu/sh4a/pinmux-sh7724.c index 63be4749e341..b43c3259060b 100644 --- a/arch/sh/kernel/cpu/sh4a/pinmux-sh7724.c +++ b/arch/sh/kernel/cpu/sh4a/pinmux-sh7724.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7724 Pinmux * @@ -7,10 +8,6 @@ * * Based on SH7723 Pinmux * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh4a/pinmux-sh7734.c b/arch/sh/kernel/cpu/sh4a/pinmux-sh7734.c index ea2db632a764..46256b19619a 100644 --- a/arch/sh/kernel/cpu/sh4a/pinmux-sh7734.c +++ b/arch/sh/kernel/cpu/sh4a/pinmux-sh7734.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7734 processor support - PFC hardware block * * Copyright (C) 2012 Renesas Solutions Corp. * Copyright (C) 2012 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/pinmux-sh7757.c b/arch/sh/kernel/cpu/sh4a/pinmux-sh7757.c index 567745d44221..c92f304cb4ba 100644 --- a/arch/sh/kernel/cpu/sh4a/pinmux-sh7757.c +++ b/arch/sh/kernel/cpu/sh4a/pinmux-sh7757.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7757 (B0 step) Pinmux * @@ -7,10 +8,6 @@ * * Based on SH7723 Pinmux * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh4a/pinmux-sh7785.c b/arch/sh/kernel/cpu/sh4a/pinmux-sh7785.c index e336ab8b5125..f329de6e758a 100644 --- a/arch/sh/kernel/cpu/sh4a/pinmux-sh7785.c +++ b/arch/sh/kernel/cpu/sh4a/pinmux-sh7785.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7785 Pinmux * * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh4a/pinmux-sh7786.c b/arch/sh/kernel/cpu/sh4a/pinmux-sh7786.c index 9a459556a2f7..47e8639f3e71 100644 --- a/arch/sh/kernel/cpu/sh4a/pinmux-sh7786.c +++ b/arch/sh/kernel/cpu/sh4a/pinmux-sh7786.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7786 Pinmux * @@ -7,10 +8,6 @@ * Based on SH7785 pinmux * * Copyright (C) 2008 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh4a/pinmux-shx3.c b/arch/sh/kernel/cpu/sh4a/pinmux-shx3.c index 444bf25c60fa..6c02f6256467 100644 --- a/arch/sh/kernel/cpu/sh4a/pinmux-shx3.c +++ b/arch/sh/kernel/cpu/sh4a/pinmux-shx3.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH-X3 prototype CPU pinmux * * Copyright (C) 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7343.c b/arch/sh/kernel/cpu/sh4a/setup-sh7343.c index 5788073a7c30..a15e25690b5f 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7343.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7343.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7343 Setup * * Copyright (C) 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7366.c b/arch/sh/kernel/cpu/sh4a/setup-sh7366.c index 646918713d9a..7bd2776441ba 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7366.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7366.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7366 Setup * * Copyright (C) 2008 Renesas Solutions * * Based on linux/arch/sh/kernel/cpu/sh4a/setup-sh7722.c - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7722.c b/arch/sh/kernel/cpu/sh4a/setup-sh7722.c index 6b3a26e61abb..1ce65f88f060 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7722.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7722.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7722 Setup * * Copyright (C) 2006 - 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7723.c b/arch/sh/kernel/cpu/sh4a/setup-sh7723.c index 1c1b3c469831..edb649950662 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7723.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7723.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7723 Setup * * Copyright (C) 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c index c20258b18775..3e9825031d3d 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7724 Setup * @@ -7,10 +8,6 @@ * * Based on SH7723 Setup * Copyright (C) 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7734.c b/arch/sh/kernel/cpu/sh4a/setup-sh7734.c index 8c0c9da6b5b3..06a91569697a 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7734.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7734.c @@ -1,14 +1,11 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/setup-sh7734.c - + * * SH7734 Setup * * Copyright (C) 2011,2012 Nobuhiro Iwamatsu * Copyright (C) 2011,2012 Renesas Solutions Corp. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7757.c b/arch/sh/kernel/cpu/sh4a/setup-sh7757.c index a46a19b49e08..2501ce656511 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7757.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7757.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7757 Setup * * Copyright (C) 2009, 2011 Renesas Solutions Corp. * * based on setup-sh7785.c : Copyright (C) 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7763.c b/arch/sh/kernel/cpu/sh4a/setup-sh7763.c index 40e6cda914d3..419c5efe4a17 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7763.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7763.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7763 Setup * * Copyright (C) 2006 Paul Mundt * Copyright (C) 2007 Yoshihiro Shimoda * Copyright (C) 2008, 2009 Nobuhiro Iwamatsu - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7770.c b/arch/sh/kernel/cpu/sh4a/setup-sh7770.c index 82e3bdf2e1b6..5fb4cf9b58c6 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7770.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7770.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7770 Setup * * Copyright (C) 2006 - 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7780.c b/arch/sh/kernel/cpu/sh4a/setup-sh7780.c index d90ff67a4633..ab7d6b715865 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7780.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7780.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7780 Setup * * Copyright (C) 2006 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7785.c b/arch/sh/kernel/cpu/sh4a/setup-sh7785.c index b0d6f82f2d71..a438da47285d 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7785.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7785.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7785 Setup * * Copyright (C) 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7786.c b/arch/sh/kernel/cpu/sh4a/setup-sh7786.c index 17aac38a6e90..d894165a0ef6 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-sh7786.c +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7786.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH7786 Setup * @@ -8,10 +9,6 @@ * Based on SH7785 Setup * * Copyright (C) 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/setup-shx3.c b/arch/sh/kernel/cpu/sh4a/setup-shx3.c index ee14d92d840f..14aa4552bc45 100644 --- a/arch/sh/kernel/cpu/sh4a/setup-shx3.c +++ b/arch/sh/kernel/cpu/sh4a/setup-shx3.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH-X3 Prototype Setup * * Copyright (C) 2007 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/smp-shx3.c b/arch/sh/kernel/cpu/sh4a/smp-shx3.c index 0d3637c494bf..f8a2bec0f260 100644 --- a/arch/sh/kernel/cpu/sh4a/smp-shx3.c +++ b/arch/sh/kernel/cpu/sh4a/smp-shx3.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH-X3 SMP * * Copyright (C) 2007 - 2010 Paul Mundt * Copyright (C) 2007 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh4a/ubc.c b/arch/sh/kernel/cpu/sh4a/ubc.c index efb2745bcb36..25eacd9c47d1 100644 --- a/arch/sh/kernel/cpu/sh4a/ubc.c +++ b/arch/sh/kernel/cpu/sh4a/ubc.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh4a/ubc.c * * On-chip UBC support for SH-4A CPUs. * * Copyright (C) 2009 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh5/clock-sh5.c b/arch/sh/kernel/cpu/sh5/clock-sh5.c index c48b93d4c081..43763c26a752 100644 --- a/arch/sh/kernel/cpu/sh5/clock-sh5.c +++ b/arch/sh/kernel/cpu/sh5/clock-sh5.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh5/clock-sh5.c * * SH-5 support for the clock framework * * Copyright (C) 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh5/entry.S b/arch/sh/kernel/cpu/sh5/entry.S index 0c8d0377d40b..de68ffdfffbf 100644 --- a/arch/sh/kernel/cpu/sh5/entry.S +++ b/arch/sh/kernel/cpu/sh5/entry.S @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh5/entry.S * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2004 - 2008 Paul Mundt * Copyright (C) 2003, 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh5/fpu.c b/arch/sh/kernel/cpu/sh5/fpu.c index 9f8713aa7184..9218d9ed787e 100644 --- a/arch/sh/kernel/cpu/sh5/fpu.c +++ b/arch/sh/kernel/cpu/sh5/fpu.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh5/fpu.c * @@ -7,10 +8,6 @@ * * Started from SH4 version: * Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh5/probe.c b/arch/sh/kernel/cpu/sh5/probe.c index eca427c2f2f3..947250188065 100644 --- a/arch/sh/kernel/cpu/sh5/probe.c +++ b/arch/sh/kernel/cpu/sh5/probe.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh5/probe.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/sh5/setup-sh5.c b/arch/sh/kernel/cpu/sh5/setup-sh5.c index 084a9cc99175..41c1673afc0b 100644 --- a/arch/sh/kernel/cpu/sh5/setup-sh5.c +++ b/arch/sh/kernel/cpu/sh5/setup-sh5.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SH5-101/SH5-103 CPU Setup * * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/cpu/sh5/switchto.S b/arch/sh/kernel/cpu/sh5/switchto.S index 45c351b0f1ba..d1beff755632 100644 --- a/arch/sh/kernel/cpu/sh5/switchto.S +++ b/arch/sh/kernel/cpu/sh5/switchto.S @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh5/switchto.S * * sh64 context switch * * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ .section .text..SHmedia32,"ax" diff --git a/arch/sh/kernel/cpu/sh5/unwind.c b/arch/sh/kernel/cpu/sh5/unwind.c index 3a4fed406fc6..3cb0cd9cea29 100644 --- a/arch/sh/kernel/cpu/sh5/unwind.c +++ b/arch/sh/kernel/cpu/sh5/unwind.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/sh5/unwind.c * * Copyright (C) 2004 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/shmobile/Makefile b/arch/sh/kernel/cpu/shmobile/Makefile index e8a5111e848a..7581d5f03ce1 100644 --- a/arch/sh/kernel/cpu/shmobile/Makefile +++ b/arch/sh/kernel/cpu/shmobile/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the Linux/SuperH SH-Mobile backends. # diff --git a/arch/sh/kernel/cpu/shmobile/cpuidle.c b/arch/sh/kernel/cpu/shmobile/cpuidle.c index c32e66079f7c..dbd2cdec2ddb 100644 --- a/arch/sh/kernel/cpu/shmobile/cpuidle.c +++ b/arch/sh/kernel/cpu/shmobile/cpuidle.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/shmobile/cpuidle.c * * Cpuidle support code for SuperH Mobile * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/shmobile/pm.c b/arch/sh/kernel/cpu/shmobile/pm.c index fba2be5d72e9..ca9945f51e51 100644 --- a/arch/sh/kernel/cpu/shmobile/pm.c +++ b/arch/sh/kernel/cpu/shmobile/pm.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/cpu/shmobile/pm.c * * Power management support code for SuperH Mobile * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/cpu/shmobile/sleep.S b/arch/sh/kernel/cpu/shmobile/sleep.S index e6aac65f5750..f928c0315129 100644 --- a/arch/sh/kernel/cpu/shmobile/sleep.S +++ b/arch/sh/kernel/cpu/shmobile/sleep.S @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/cpu/sh4a/sleep-sh_mobile.S * * Sleep mode and Standby modes support for SuperH Mobile * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include diff --git a/arch/sh/kernel/debugtraps.S b/arch/sh/kernel/debugtraps.S index 7a1b46fec0f4..ad07527e2a99 100644 --- a/arch/sh/kernel/debugtraps.S +++ b/arch/sh/kernel/debugtraps.S @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/debugtraps.S * * Debug trap jump tables for SuperH * * Copyright (C) 2006 - 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/disassemble.c b/arch/sh/kernel/disassemble.c index 015fee58014b..defebf1a9c8a 100644 --- a/arch/sh/kernel/disassemble.c +++ b/arch/sh/kernel/disassemble.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Disassemble SuperH instructions. * * Copyright (C) 1999 kaz Kojima * Copyright (C) 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/dma-coherent.c b/arch/sh/kernel/dma-coherent.c index a0021eef956b..b17514619b7e 100644 --- a/arch/sh/kernel/dma-coherent.c +++ b/arch/sh/kernel/dma-coherent.c @@ -1,9 +1,6 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2004 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/dumpstack.c b/arch/sh/kernel/dumpstack.c index b564b1eae4ae..93c6c0e691ee 100644 --- a/arch/sh/kernel/dumpstack.c +++ b/arch/sh/kernel/dumpstack.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 1991, 1992 Linus Torvalds * Copyright (C) 2000, 2001, 2002 Andi Kleen, SuSE Labs * Copyright (C) 2009 Matt Fleming * Copyright (C) 2002 - 2012 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/dwarf.c b/arch/sh/kernel/dwarf.c index bb511e2d9d68..9e1d26c8a0c4 100644 --- a/arch/sh/kernel/dwarf.c +++ b/arch/sh/kernel/dwarf.c @@ -1,10 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2009 Matt Fleming * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * * This is an implementation of a DWARF unwinder. Its main purpose is * for generating stacktrace information. Based on the DWARF 3 * specification from http://www.dwarfstd.org. diff --git a/arch/sh/kernel/entry-common.S b/arch/sh/kernel/entry-common.S index 28cc61216b64..d31f66e82ce5 100644 --- a/arch/sh/kernel/entry-common.S +++ b/arch/sh/kernel/entry-common.S @@ -1,11 +1,7 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * Copyright (C) 1999, 2000, 2002 Niibe Yutaka * Copyright (C) 2003 - 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ ! 
NOTE: diff --git a/arch/sh/kernel/head_32.S b/arch/sh/kernel/head_32.S index 4e352c3f79e6..4adbd4ade319 100644 --- a/arch/sh/kernel/head_32.S +++ b/arch/sh/kernel/head_32.S @@ -1,14 +1,11 @@ -/* $Id: head.S,v 1.7 2003/09/01 17:58:19 lethal Exp $ +/* SPDX-License-Identifier: GPL-2.0 + * $Id: head.S,v 1.7 2003/09/01 17:58:19 lethal Exp $ * * arch/sh/kernel/head.S * * Copyright (C) 1999, 2000 Niibe Yutaka & Kaz Kojima * Copyright (C) 2010 Matt Fleming * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * * Head.S contains the SH exception handlers and startup code. */ #include diff --git a/arch/sh/kernel/head_64.S b/arch/sh/kernel/head_64.S index cca491397a28..67685e1f00e1 100644 --- a/arch/sh/kernel/head_64.S +++ b/arch/sh/kernel/head_64.S @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/head_64.S * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003, 2004 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/hw_breakpoint.c b/arch/sh/kernel/hw_breakpoint.c index d9ff3b42da7c..bc96b16288c1 100644 --- a/arch/sh/kernel/hw_breakpoint.c +++ b/arch/sh/kernel/hw_breakpoint.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/hw_breakpoint.c * * Unified kernel/user-space hardware breakpoint facility for the on-chip UBC. * * Copyright (C) 2009 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/idle.c b/arch/sh/kernel/idle.c index be616ee0cf87..c20fc5487e05 100644 --- a/arch/sh/kernel/idle.c +++ b/arch/sh/kernel/idle.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * The idle loop for all SuperH platforms. * * Copyright (C) 2002 - 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/io.c b/arch/sh/kernel/io.c index 5c51b794ba2a..da22f3b32d30 100644 --- a/arch/sh/kernel/io.c +++ b/arch/sh/kernel/io.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/io.c - Machine independent I/O functions. * * Copyright (C) 2000 - 2009 Stuart Menefy * Copyright (C) 2005 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/io_trapped.c b/arch/sh/kernel/io_trapped.c index 4d4e7a2a774b..bacad6da4fe4 100644 --- a/arch/sh/kernel/io_trapped.c +++ b/arch/sh/kernel/io_trapped.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Trapped io support * * Copyright (C) 2008 Magnus Damm * * Intercept io operations by trapping. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/iomap.c b/arch/sh/kernel/iomap.c index 2e8e8b9b9cef..ef9e2c97cbb7 100644 --- a/arch/sh/kernel/iomap.c +++ b/arch/sh/kernel/iomap.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/iomap.c * * Copyright (C) 2000 Niibe Yutaka * Copyright (C) 2005 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/ioport.c b/arch/sh/kernel/ioport.c index cca14ba84a37..34f8cdbbcf0b 100644 --- a/arch/sh/kernel/ioport.c +++ b/arch/sh/kernel/ioport.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/ioport.c * * Copyright (C) 2000 Niibe Yutaka * Copyright (C) 2005 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/irq_32.c b/arch/sh/kernel/irq_32.c index e5a755be9129..e09cdc4ada68 100644 --- a/arch/sh/kernel/irq_32.c +++ b/arch/sh/kernel/irq_32.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SHcompact irqflags support * * Copyright (C) 2006 - 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/irq_64.c b/arch/sh/kernel/irq_64.c index 8fc05b997b6d..7a1f50435e33 100644 --- a/arch/sh/kernel/irq_64.c +++ b/arch/sh/kernel/irq_64.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SHmedia irqflags support * * Copyright (C) 2006 - 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/kgdb.c b/arch/sh/kernel/kgdb.c index 4f04c6638a4d..d24bd2d2ffad 100644 --- a/arch/sh/kernel/kgdb.c +++ b/arch/sh/kernel/kgdb.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SuperH KGDB support * * Copyright (C) 2008 - 2012 Paul Mundt * * Single stepping taken from the old stub by Henry Bell and Jeremy Siegel. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/kprobes.c b/arch/sh/kernel/kprobes.c index 241e903dd3ee..1f8c0d30567f 100644 --- a/arch/sh/kernel/kprobes.c +++ b/arch/sh/kernel/kprobes.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Kernel probes (kprobes) for SuperH * * Copyright (C) 2007 Chris Smith * Copyright (C) 2006 Lineo Solutions, Inc. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/machine_kexec.c b/arch/sh/kernel/machine_kexec.c index 9fea49f6e667..b9f9f1a5afdc 100644 --- a/arch/sh/kernel/machine_kexec.c +++ b/arch/sh/kernel/machine_kexec.c @@ -1,12 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * machine_kexec.c - handle transition of Linux booting another kernel * Copyright (C) 2002-2003 Eric Biederman * * GameCube/ppc32 port Copyright (C) 2004 Albert Herranz * LANDISK/sh4 supported by kogiidena - * - * This source code is licensed under the GNU General Public License, - * Version 2. See the file COPYING for more details. */ #include #include diff --git a/arch/sh/kernel/machvec.c b/arch/sh/kernel/machvec.c index ec05f491c347..beadbbdb4486 100644 --- a/arch/sh/kernel/machvec.c +++ b/arch/sh/kernel/machvec.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/machvec.c * @@ -5,10 +6,6 @@ * * Copyright (C) 1999 Niibe Yutaka * Copyright (C) 2002 - 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/module.c b/arch/sh/kernel/module.c index 1b525dedd29a..bbc78d1d618e 100644 --- a/arch/sh/kernel/module.c +++ b/arch/sh/kernel/module.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0+ /* Kernel module help for SH. SHcompact version by Kaz Kojima and Paul Mundt. @@ -9,20 +10,6 @@ Based on the sh version, and on code from the sh64-specific parts of modutils, originally written by Richard Curnow and Ben Gaster. - - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ #include #include diff --git a/arch/sh/kernel/nmi_debug.c b/arch/sh/kernel/nmi_debug.c index 730d928f0d12..11777867c6f5 100644 --- a/arch/sh/kernel/nmi_debug.c +++ b/arch/sh/kernel/nmi_debug.c @@ -1,9 +1,6 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2007 Atmel Corporation - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ #include #include diff --git a/arch/sh/kernel/perf_callchain.c b/arch/sh/kernel/perf_callchain.c index fa2c0cd23eaa..6281f2fdf9ca 100644 --- a/arch/sh/kernel/perf_callchain.c +++ b/arch/sh/kernel/perf_callchain.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Performance event callchain support - SuperH architecture code * * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/perf_event.c b/arch/sh/kernel/perf_event.c index ba3269a8304b..445e3ece4c23 100644 --- a/arch/sh/kernel/perf_event.c +++ b/arch/sh/kernel/perf_event.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Performance event support framework for SuperH hardware counters. * @@ -15,10 +16,6 @@ * * ppc: * Copyright 2008-2009 Paul Mackerras, IBM Corporation. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/process_32.c b/arch/sh/kernel/process_32.c index 27fddb56b3e1..a094633874c3 100644 --- a/arch/sh/kernel/process_32.c +++ b/arch/sh/kernel/process_32.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/process.c * @@ -8,10 +9,6 @@ * SuperH version: Copyright (C) 1999, 2000 Niibe Yutaka & Kaz Kojima * Copyright (C) 2006 Lineo Solutions Inc. support SH4A UBC * Copyright (C) 2002 - 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/process_64.c b/arch/sh/kernel/process_64.c index ee2abe96f9f3..c2844a2e18cd 100644 --- a/arch/sh/kernel/process_64.c +++ b/arch/sh/kernel/process_64.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/process_64.c * @@ -12,10 +13,6 @@ * * In turn started from i386 version: * Copyright (C) 1995 Linus Torvalds - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/ptrace_32.c b/arch/sh/kernel/ptrace_32.c index 5fc3ff606210..d5052c30a0e9 100644 --- a/arch/sh/kernel/ptrace_32.c +++ b/arch/sh/kernel/ptrace_32.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * SuperH process tracing * @@ -5,10 +6,6 @@ * Copyright (C) 2002 - 2009 Paul Mundt * * Audit support by Yuichi Nakamura - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/ptrace_64.c b/arch/sh/kernel/ptrace_64.c index 1e0656d9e7af..3390349ff976 100644 --- a/arch/sh/kernel/ptrace_64.c +++ b/arch/sh/kernel/ptrace_64.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/ptrace_64.c * @@ -10,10 +11,6 @@ * Original x86 implementation: * By Ross Biro 1/23/92 * edited by Linus Torvalds - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/relocate_kernel.S b/arch/sh/kernel/relocate_kernel.S index fcc9934fb97b..d9bf2b727b42 100644 --- a/arch/sh/kernel/relocate_kernel.S +++ b/arch/sh/kernel/relocate_kernel.S @@ -1,13 +1,11 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * relocate_kernel.S - put the kernel image in place to boot * 2005.9.17 kogiidena@eggplant.ddo.jp * * LANDISK/sh4 is supported. Maybe, SH archtecture works well. * * 2009-03-18 Magnus Damm - Added Kexec Jump support - * - * This source code is licensed under the GNU General Public License, - * Version 2. See the file COPYING for more details. 
*/ #include #include diff --git a/arch/sh/kernel/return_address.c b/arch/sh/kernel/return_address.c index 5124aeb28c3f..8838094c9ff9 100644 --- a/arch/sh/kernel/return_address.c +++ b/arch/sh/kernel/return_address.c @@ -1,12 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/return_address.c * * Copyright (C) 2009 Matt Fleming * Copyright (C) 2009 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/sh_bios.c b/arch/sh/kernel/sh_bios.c index fe584e516964..250dbdf3fa74 100644 --- a/arch/sh/kernel/sh_bios.c +++ b/arch/sh/kernel/sh_bios.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * C interface for trapping into the standard LinuxSH BIOS. * @@ -5,10 +6,6 @@ * Copyright (C) 1999, 2000 Niibe Yutaka * Copyright (C) 2002 M. R. Brown * Copyright (C) 2004 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/sh_ksyms_64.c b/arch/sh/kernel/sh_ksyms_64.c index 6ee3740e009e..9de17065afb4 100644 --- a/arch/sh/kernel/sh_ksyms_64.c +++ b/arch/sh/kernel/sh_ksyms_64.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/sh_ksyms_64.c * * Copyright (C) 2000, 2001 Paolo Alberelli - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/signal_64.c b/arch/sh/kernel/signal_64.c index 7b77f1812434..76661dee3c65 100644 --- a/arch/sh/kernel/signal_64.c +++ b/arch/sh/kernel/signal_64.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/signal_64.c * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003 - 2008 Paul Mundt * Copyright (C) 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c index c483422ea4d0..372acdc9033e 100644 --- a/arch/sh/kernel/smp.c +++ b/arch/sh/kernel/smp.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/smp.c * @@ -5,10 +6,6 @@ * * Copyright (C) 2002 - 2010 Paul Mundt * Copyright (C) 2006 - 2007 Akio Idehara - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/stacktrace.c b/arch/sh/kernel/stacktrace.c index 7a73d2763e1b..f3cb2cccb262 100644 --- a/arch/sh/kernel/stacktrace.c +++ b/arch/sh/kernel/stacktrace.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/stacktrace.c * * Stack trace management functions * * Copyright (C) 2006 - 2008 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/swsusp.c b/arch/sh/kernel/swsusp.c index 12b64a0f2f01..0b772d6d714f 100644 --- a/arch/sh/kernel/swsusp.c +++ b/arch/sh/kernel/swsusp.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * swsusp.c - SuperH hibernation support * * Copyright (C) 2009 Magnus Damm - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/syscalls_32.S b/arch/sh/kernel/syscalls_32.S index 54978e01bf94..96e9c54a07f5 100644 --- a/arch/sh/kernel/syscalls_32.S +++ b/arch/sh/kernel/syscalls_32.S @@ -1,15 +1,11 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/syscalls.S * * System call table for SuperH * * Copyright (C) 1999, 2000, 2002 Niibe Yutaka * Copyright (C) 2003 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * */ #include #include diff --git a/arch/sh/kernel/syscalls_64.S b/arch/sh/kernel/syscalls_64.S index d6a27f7a4c54..1bcb86f0b728 100644 --- a/arch/sh/kernel/syscalls_64.S +++ b/arch/sh/kernel/syscalls_64.S @@ -1,13 +1,10 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/kernel/syscalls_64.S * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2004 - 2007 Paul Mundt * Copyright (C) 2003, 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include diff --git a/arch/sh/kernel/time.c b/arch/sh/kernel/time.c index fcd5e41977d1..e16b2cd269a3 100644 --- a/arch/sh/kernel/time.c +++ b/arch/sh/kernel/time.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/time.c * @@ -5,10 +6,6 @@ * Copyright (C) 2000 Philipp Rumpf * Copyright (C) 2002 - 2009 Paul Mundt * Copyright (C) 2002 M. R. Brown - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include @@ -22,77 +19,6 @@ #include #include -/* Dummy RTC ops */ -static void null_rtc_get_time(struct timespec *tv) -{ - tv->tv_sec = mktime(2000, 1, 1, 0, 0, 0); - tv->tv_nsec = 0; -} - -static int null_rtc_set_time(const time_t secs) -{ - return 0; -} - -void (*rtc_sh_get_time)(struct timespec *) = null_rtc_get_time; -int (*rtc_sh_set_time)(const time_t) = null_rtc_set_time; - -void read_persistent_clock(struct timespec *ts) -{ - rtc_sh_get_time(ts); -} - -#ifdef CONFIG_GENERIC_CMOS_UPDATE -int update_persistent_clock(struct timespec now) -{ - return rtc_sh_set_time(now.tv_sec); -} -#endif - -static int rtc_generic_get_time(struct device *dev, struct rtc_time *tm) -{ - struct timespec tv; - - rtc_sh_get_time(&tv); - rtc_time_to_tm(tv.tv_sec, tm); - return 0; -} - -static int rtc_generic_set_time(struct device *dev, struct rtc_time *tm) -{ - unsigned long secs; - - rtc_tm_to_time(tm, &secs); - if ((rtc_sh_set_time == null_rtc_set_time) || - (rtc_sh_set_time(secs) < 0)) - return -EOPNOTSUPP; - - return 0; -} - -static const struct rtc_class_ops rtc_generic_ops = { - .read_time = rtc_generic_get_time, - .set_time = rtc_generic_set_time, -}; - -static int __init rtc_generic_init(void) -{ - struct platform_device *pdev; - - if (rtc_sh_get_time == null_rtc_get_time) - return -ENODEV; - - pdev = platform_device_register_data(NULL, "rtc-generic", -1, - &rtc_generic_ops, - sizeof(rtc_generic_ops)); - - - return PTR_ERR_OR_ZERO(pdev); -} -device_initcall(rtc_generic_init); - -void (*board_time_init)(void); - static void __init sh_late_time_init(void) { /* @@ -110,8 +36,7 @@ static void __init sh_late_time_init(void) void __init time_init(void) { - if (board_time_init) - board_time_init(); + timer_probe(); clk_init(); diff --git a/arch/sh/kernel/topology.c b/arch/sh/kernel/topology.c index c82912a61d74..7a989eed3b18 100644 --- a/arch/sh/kernel/topology.c +++ b/arch/sh/kernel/topology.c @@ -1,11 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/topology.c * * Copyright (C) 2007 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/traps_32.c b/arch/sh/kernel/traps_32.c index 60709ad17fc7..f2a18b5fafd8 100644 --- a/arch/sh/kernel/traps_32.c +++ b/arch/sh/kernel/traps_32.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * 'traps.c' handles hardware traps and faults after we have saved some * state in 'entry.S'. @@ -6,10 +7,6 @@ * Copyright (C) 2000 Philipp Rumpf * Copyright (C) 2000 David Howells * Copyright (C) 2002 - 2010 Paul Mundt - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/kernel/traps_64.c b/arch/sh/kernel/traps_64.c index 014fb08cf133..c52bda4d2574 100644 --- a/arch/sh/kernel/traps_64.c +++ b/arch/sh/kernel/traps_64.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/traps_64.c * * Copyright (C) 2000, 2001 Paolo Alberelli * Copyright (C) 2003, 2004 Paul Mundt * Copyright (C) 2003, 2004 Richard Curnow - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
*/ #include #include diff --git a/arch/sh/kernel/unwinder.c b/arch/sh/kernel/unwinder.c index 521b5432471f..7a54b72dd923 100644 --- a/arch/sh/kernel/unwinder.c +++ b/arch/sh/kernel/unwinder.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2009 Matt Fleming * diff --git a/arch/sh/kernel/vsyscall/vsyscall.c b/arch/sh/kernel/vsyscall/vsyscall.c index cc0cc5b4ff18..98494480f048 100644 --- a/arch/sh/kernel/vsyscall/vsyscall.c +++ b/arch/sh/kernel/vsyscall/vsyscall.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/kernel/vsyscall/vsyscall.c * @@ -5,10 +6,6 @@ * * vDSO randomization * Copyright(C) 2005-2006, Red Hat, Inc., Ingo Molnar - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/lib/ashiftrt.S b/arch/sh/lib/ashiftrt.S index 45ce86558f46..0f7145e3c51e 100644 --- a/arch/sh/lib/ashiftrt.S +++ b/arch/sh/lib/ashiftrt.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. - -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/lib/ashlsi3.S b/arch/sh/lib/ashlsi3.S index 70a6434945ab..4df4401cdf31 100644 --- a/arch/sh/lib/ashlsi3.S +++ b/arch/sh/lib/ashlsi3.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. - -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. 
(The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/lib/ashrsi3.S b/arch/sh/lib/ashrsi3.S index 602599d80209..bf3c4e03e6ff 100644 --- a/arch/sh/lib/ashrsi3.S +++ b/arch/sh/lib/ashrsi3.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. - -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/lib/checksum.S b/arch/sh/lib/checksum.S index 356c8ec92893..97b5c2d9fec4 100644 --- a/arch/sh/lib/checksum.S +++ b/arch/sh/lib/checksum.S @@ -1,4 +1,6 @@ -/* $Id: checksum.S,v 1.10 2001/07/06 13:11:32 gniibe Exp $ +/* SPDX-License-Identifier: GPL-2.0+ + * + * $Id: checksum.S,v 1.10 2001/07/06 13:11:32 gniibe Exp $ * * INET An implementation of the TCP/IP protocol suite for the LINUX * operating system. INET is implemented using the BSD Socket @@ -21,11 +23,6 @@ * converted to pure assembler * * SuperH version: Copyright (C) 1999 Niibe Yutaka - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. 
*/ #include diff --git a/arch/sh/lib/io.c b/arch/sh/lib/io.c index 88dfe6e396bc..ebcf7c0a7335 100644 --- a/arch/sh/lib/io.c +++ b/arch/sh/lib/io.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * arch/sh/lib/io.c - SH32 optimized I/O routines * @@ -6,10 +7,6 @@ * * Provide real functions which expand to whatever the header file defined. * Also definitions of machine independent IO functions. - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/lib/libgcc.h b/arch/sh/lib/libgcc.h index 05909d58e2fe..58ada9e8f1c2 100644 --- a/arch/sh/lib/libgcc.h +++ b/arch/sh/lib/libgcc.h @@ -1,3 +1,5 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + #ifndef __ASM_LIBGCC_H #define __ASM_LIBGCC_H diff --git a/arch/sh/lib/lshrsi3.S b/arch/sh/lib/lshrsi3.S index f2a6959f526d..b79b8170061f 100644 --- a/arch/sh/lib/lshrsi3.S +++ b/arch/sh/lib/lshrsi3.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. - -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/lib/mcount.S b/arch/sh/lib/mcount.S index 7a8572f9d58b..c6ca90cc9606 100644 --- a/arch/sh/lib/mcount.S +++ b/arch/sh/lib/mcount.S @@ -1,12 +1,9 @@ -/* +/* SPDX-License-Identifier: GPL-2.0 + * * arch/sh/lib/mcount.S * * Copyright (C) 2008, 2009 Paul Mundt * Copyright (C) 2008, 2009 Matt Fleming - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. */ #include #include diff --git a/arch/sh/lib/movmem.S b/arch/sh/lib/movmem.S index 62075f6bc67c..8ac54d6b38a1 100644 --- a/arch/sh/lib/movmem.S +++ b/arch/sh/lib/movmem.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. 
- -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/lib/udiv_qrnnd.S b/arch/sh/lib/udiv_qrnnd.S index 32b9a36de943..28938daccd6b 100644 --- a/arch/sh/lib/udiv_qrnnd.S +++ b/arch/sh/lib/udiv_qrnnd.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. - -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/lib/udivsi3.S b/arch/sh/lib/udivsi3.S index 72157ab5c314..09ed1f9deb2e 100644 --- a/arch/sh/lib/udivsi3.S +++ b/arch/sh/lib/udivsi3.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc. 
- -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/lib/udivsi3_i4i-Os.S b/arch/sh/lib/udivsi3_i4i-Os.S index 4835553e1ea9..fa4e4dff3da1 100644 --- a/arch/sh/lib/udivsi3_i4i-Os.S +++ b/arch/sh/lib/udivsi3_i4i-Os.S @@ -1,28 +1,7 @@ -/* Copyright (C) 2006 Free Software Foundation, Inc. - -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + * + * Copyright (C) 2006 Free Software Foundation, Inc. + */ /* Moderately Space-optimized libgcc routines for the Renesas SH / STMicroelectronics ST40 CPUs. diff --git a/arch/sh/lib/udivsi3_i4i.S b/arch/sh/lib/udivsi3_i4i.S index f1a79d9c5015..6944eb6b4a75 100644 --- a/arch/sh/lib/udivsi3_i4i.S +++ b/arch/sh/lib/udivsi3_i4i.S @@ -1,30 +1,9 @@ -/* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, +/* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 + + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. 
- -This file is free software; you can redistribute it and/or modify it -under the terms of the GNU General Public License as published by the -Free Software Foundation; either version 2, or (at your option) any -later version. - -In addition to the permissions in the GNU General Public License, the -Free Software Foundation gives you unlimited permission to link the -compiled version of this file into combinations with other programs, -and to distribute those combinations without any restriction coming -from the use of this file. (The General Public License restrictions -do apply in other respects; for example, they cover modification of -the file, and distribution when not linked into a combine -executable.) - -This file is distributed in the hope that it will be useful, but -WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; see the file COPYING. If not, write to -the Free Software Foundation, 51 Franklin Street, Fifth Floor, -Boston, MA 02110-1301, USA. */ +*/ !! libgcc routines for the Renesas / SuperH SH CPUs. !! Contributed by Steve Chamberlain. diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c index c8c13c777162..a8e5c0e00fca 100644 --- a/arch/sh/mm/init.c +++ b/arch/sh/mm/init.c @@ -443,7 +443,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid); #endif #ifdef CONFIG_MEMORY_HOTREMOVE -int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) +int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { unsigned long start_pfn = PFN_DOWN(start); unsigned long nr_pages = size >> PAGE_SHIFT; diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 490b2c95c212..d5dd652fb8cc 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -21,6 +21,7 @@ config SPARC select HAVE_ARCH_KGDB if !SMP || SPARC64 select HAVE_ARCH_TRACEHOOK select HAVE_EXIT_THREAD + select HAVE_PCI select SYSCTL_EXCEPTION_TRACE select RTC_CLASS select RTC_DRV_M48T59 @@ -38,9 +39,9 @@ config SPARC select GENERIC_STRNCPY_FROM_USER select GENERIC_STRNLEN_USER select MODULES_USE_ELF_RELA + select PCI_SYSCALL if PCI select ODD_RT_SIGACTION select OLD_SIGSUSPEND - select ARCH_HAS_SG_CHAIN select CPU_NO_EFFICIENT_FFS select LOCKDEP_SMALL if LOCKDEP select NEED_DMA_MAP_STATE @@ -49,7 +50,6 @@ config SPARC config SPARC32 def_bool !64BIT select ARCH_HAS_SYNC_DMA_FOR_CPU - select DMA_DIRECT_OPS select GENERIC_ATOMIC64 select CLZ_TAB select HAVE_UID16 @@ -89,6 +89,7 @@ config SPARC64 select GENERIC_TIME_VSYSCALL select ARCH_CLOCKSOURCE_DATA select ARCH_HAS_PTE_SPECIAL + select PCI_DOMAINS if PCI config ARCH_DEFCONFIG string @@ -187,7 +188,7 @@ config NR_CPUS default 32 if SPARC32 default 4096 if SPARC64 -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" config RWSEM_GENERIC_SPINLOCK bool @@ -472,24 +473,6 @@ config SUN_LDOMS Say Y here is you want to support virtual devices via Logical Domains. -config PCI - bool "Support for PCI and PS/2 keyboard/mouse" - help - Find out whether your system includes a PCI bus. PCI is the name of - a bus system, i.e. the way the CPU talks to the other stuff inside - your box. If you say Y here, the kernel will include drivers and - infrastructure code to support PCI bus devices. - - CONFIG_PCI is needed for all JavaStation's (including MrCoffee), - CP-1200, JavaEngine-1, Corona, Red October, and Serengeti SGSC. 
- All of these platforms are extremely obscure, so say N if unsure. - -config PCI_DOMAINS - def_bool PCI if SPARC64 - -config PCI_SYSCALL - def_bool PCI - config PCIC_PCI bool depends on PCI && SPARC32 && !SPARC_LEON @@ -518,10 +501,6 @@ config SPARC_GRPCI2 help Say Y here to include the GRPCI2 Host Bridge Driver. -source "drivers/pci/Kconfig" - -source "drivers/pcmcia/Kconfig" - config SUN_OPENPROMFS tristate "Openprom tree appears in /proc/openprom" help diff --git a/arch/sparc/crypto/aes_glue.c b/arch/sparc/crypto/aes_glue.c index 3cd4f6b198b6..a9b8b0b94a8d 100644 --- a/arch/sparc/crypto/aes_glue.c +++ b/arch/sparc/crypto/aes_glue.c @@ -476,11 +476,6 @@ static bool __init sparc64_has_aes_opcode(void) static int __init aes_sparc64_mod_init(void) { - int i; - - for (i = 0; i < ARRAY_SIZE(algs); i++) - INIT_LIST_HEAD(&algs[i].cra_list); - if (sparc64_has_aes_opcode()) { pr_info("Using sparc64 aes opcodes optimized AES implementation\n"); return crypto_register_algs(algs, ARRAY_SIZE(algs)); diff --git a/arch/sparc/crypto/camellia_glue.c b/arch/sparc/crypto/camellia_glue.c index 561a84d93cf6..900d5c617e83 100644 --- a/arch/sparc/crypto/camellia_glue.c +++ b/arch/sparc/crypto/camellia_glue.c @@ -299,11 +299,6 @@ static bool __init sparc64_has_camellia_opcode(void) static int __init camellia_sparc64_mod_init(void) { - int i; - - for (i = 0; i < ARRAY_SIZE(algs); i++) - INIT_LIST_HEAD(&algs[i].cra_list); - if (sparc64_has_camellia_opcode()) { pr_info("Using sparc64 camellia opcodes optimized CAMELLIA implementation\n"); return crypto_register_algs(algs, ARRAY_SIZE(algs)); diff --git a/arch/sparc/crypto/des_glue.c b/arch/sparc/crypto/des_glue.c index 61af794aa2d3..56499ea39fd3 100644 --- a/arch/sparc/crypto/des_glue.c +++ b/arch/sparc/crypto/des_glue.c @@ -510,11 +510,6 @@ static bool __init sparc64_has_des_opcode(void) static int __init des_sparc64_mod_init(void) { - int i; - - for (i = 0; i < ARRAY_SIZE(algs); i++) - INIT_LIST_HEAD(&algs[i].cra_list); - if (sparc64_has_des_opcode()) { pr_info("Using sparc64 des opcodes optimized DES implementation\n"); return crypto_register_algs(algs, ARRAY_SIZE(algs)); diff --git a/arch/sparc/include/asm/dma-mapping.h b/arch/sparc/include/asm/dma-mapping.h index b0bb2fcaf1c9..ed32845bd2d2 100644 --- a/arch/sparc/include/asm/dma-mapping.h +++ b/arch/sparc/include/asm/dma-mapping.h @@ -2,9 +2,7 @@ #ifndef ___ASM_SPARC_DMA_MAPPING_H #define ___ASM_SPARC_DMA_MAPPING_H -#include -#include -#include +#include extern const struct dma_map_ops *dma_ops; @@ -14,11 +12,11 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) { #ifdef CONFIG_SPARC_LEON if (sparc_cpu_model == sparc_leon) - return &dma_direct_ops; + return NULL; #endif #if defined(CONFIG_SPARC32) && defined(CONFIG_PCI) if (bus == &pci_bus_type) - return &dma_direct_ops; + return NULL; #endif return dma_ops; } diff --git a/arch/sparc/include/asm/dma.h b/arch/sparc/include/asm/dma.h index a1d7c86917c6..462e7c794a09 100644 --- a/arch/sparc/include/asm/dma.h +++ b/arch/sparc/include/asm/dma.h @@ -91,54 +91,10 @@ extern int isa_dma_bridge_buggy; #endif #ifdef CONFIG_SPARC32 - -/* Routines for data transfer buffers. 
*/ struct device; -struct scatterlist; - -struct sparc32_dma_ops { - __u32 (*get_scsi_one)(struct device *, char *, unsigned long); - void (*get_scsi_sgl)(struct device *, struct scatterlist *, int); - void (*release_scsi_one)(struct device *, __u32, unsigned long); - void (*release_scsi_sgl)(struct device *, struct scatterlist *,int); -#ifdef CONFIG_SBUS - int (*map_dma_area)(struct device *, dma_addr_t *, unsigned long, unsigned long, int); - void (*unmap_dma_area)(struct device *, unsigned long, int); -#endif -}; -extern const struct sparc32_dma_ops *sparc32_dma_ops; - -#define mmu_get_scsi_one(dev,vaddr,len) \ - sparc32_dma_ops->get_scsi_one(dev, vaddr, len) -#define mmu_get_scsi_sgl(dev,sg,sz) \ - sparc32_dma_ops->get_scsi_sgl(dev, sg, sz) -#define mmu_release_scsi_one(dev,vaddr,len) \ - sparc32_dma_ops->release_scsi_one(dev, vaddr,len) -#define mmu_release_scsi_sgl(dev,sg,sz) \ - sparc32_dma_ops->release_scsi_sgl(dev, sg, sz) - -#ifdef CONFIG_SBUS -/* - * mmu_map/unmap are provided by iommu/iounit; Invalid to call on IIep. - * - * The mmu_map_dma_area establishes two mappings in one go. - * These mappings point to pages normally mapped at 'va' (linear address). - * First mapping is for CPU visible address at 'a', uncached. - * This is an alias, but it works because it is an uncached mapping. - * Second mapping is for device visible address, or "bus" address. - * The bus address is returned at '*pba'. - * - * These functions seem distinct, but are hard to split. - * On sun4m, page attributes depend on the CPU type, so we have to - * know if we are mapping RAM or I/O, so it has to be an additional argument - * to a separate mapping function for CPU visible mappings. - */ -#define sbus_map_dma_area(dev,pba,va,a,len) \ - sparc32_dma_ops->map_dma_area(dev, pba, va, a, len) -#define sbus_unmap_dma_area(dev,ba,len) \ - sparc32_dma_ops->unmap_dma_area(dev, ba, len) -#endif /* CONFIG_SBUS */ +unsigned long sparc_dma_alloc_resource(struct device *dev, size_t len); +bool sparc_dma_free_resource(void *cpu_addr, size_t size); #endif #endif /* !(_ASM_SPARC_DMA_H) */ diff --git a/arch/sparc/include/asm/leon.h b/arch/sparc/include/asm/leon.h index 8c01f0f6b1ed..c1e05e4ab9e3 100644 --- a/arch/sparc/include/asm/leon.h +++ b/arch/sparc/include/asm/leon.h @@ -254,4 +254,13 @@ extern int leon_ipi_irq; #define _pfn_valid(pfn) ((pfn < last_valid_pfn) && (pfn >= PFN(phys_base))) #define _SRMMU_PTE_PMASK_LEON 0xffffffff +/* + * On LEON PCI Memory space is mapped 1:1 with physical address space. + * + * I/O space is located at low 64Kbytes in PCI I/O space. The I/O addresses + * are converted into CPU addresses to virtual addresses that are mapped with + * MMU to the PCI Host PCI I/O space window which are translated to the low + * 64Kbytes by the Host controller. + */ + #endif diff --git a/arch/sparc/include/asm/pci.h b/arch/sparc/include/asm/pci.h index cad79a6ce0e4..cfec79bb1831 100644 --- a/arch/sparc/include/asm/pci.h +++ b/arch/sparc/include/asm/pci.h @@ -1,9 +1,54 @@ /* SPDX-License-Identifier: GPL-2.0 */ #ifndef ___ASM_SPARC_PCI_H #define ___ASM_SPARC_PCI_H -#if defined(__sparc__) && defined(__arch64__) -#include + + +/* Can be used to override the logic in pci_scan_bus for skipping + * already-configured bus numbers - to be used for buggy BIOSes + * or architectures with incomplete PCI setup by the loader. 
+ */ +#define pcibios_assign_all_busses() 0 + +#define PCIBIOS_MIN_IO 0UL +#define PCIBIOS_MIN_MEM 0UL + +#define PCI_IRQ_NONE 0xffffffff + + +#ifdef CONFIG_SPARC64 + +/* PCI IOMMU mapping bypass support. */ + +/* PCI 64-bit addressing works for all slots on all controller + * types on sparc64. However, it requires that the device + * can drive enough of the 64 bits. + */ +#define PCI64_REQUIRED_MASK (~(u64)0) +#define PCI64_ADDR_BASE 0xfffc000000000000UL + +/* Return the index of the PCI controller for device PDEV. */ +int pci_domain_nr(struct pci_bus *bus); +static inline int pci_proc_domain(struct pci_bus *bus) +{ + return 1; +} + +/* Platform support for /proc/bus/pci/X/Y mmap()s. */ +#define HAVE_PCI_MMAP +#define arch_can_pci_mmap_io() 1 +#define HAVE_ARCH_PCI_GET_UNMAPPED_AREA +#define get_pci_unmapped_area get_fb_unmapped_area + +#define HAVE_ARCH_PCI_RESOURCE_TO_USER +#endif /* CONFIG_SPARC64 */ + +#if defined(CONFIG_SPARC64) || defined(CONFIG_LEON_PCI) +static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) +{ + return PCI_IRQ_NONE; +} #else -#include -#endif +#include #endif + +#endif /* ___ASM_SPARC_PCI_H */ diff --git a/arch/sparc/include/asm/pci_32.h b/arch/sparc/include/asm/pci_32.h deleted file mode 100644 index cfc0ee9476c6..000000000000 --- a/arch/sparc/include/asm/pci_32.h +++ /dev/null @@ -1,41 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef __SPARC_PCI_H -#define __SPARC_PCI_H - -#ifdef __KERNEL__ - -#include - -/* Can be used to override the logic in pci_scan_bus for skipping - * already-configured bus numbers - to be used for buggy BIOSes - * or architectures with incomplete PCI setup by the loader. - */ -#define pcibios_assign_all_busses() 0 - -#define PCIBIOS_MIN_IO 0UL -#define PCIBIOS_MIN_MEM 0UL - -#define PCI_IRQ_NONE 0xffffffff - -#endif /* __KERNEL__ */ - -#ifndef CONFIG_LEON_PCI -/* generic pci stuff */ -#include -#else -/* - * On LEON PCI Memory space is mapped 1:1 with physical address space. - * - * I/O space is located at low 64Kbytes in PCI I/O space. The I/O addresses - * are converted into CPU addresses to virtual addresses that are mapped with - * MMU to the PCI Host PCI I/O space window which are translated to the low - * 64Kbytes by the Host controller. - */ - -static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) -{ - return PCI_IRQ_NONE; -} -#endif - -#endif /* __SPARC_PCI_H */ diff --git a/arch/sparc/include/asm/pci_64.h b/arch/sparc/include/asm/pci_64.h deleted file mode 100644 index fac77813402c..000000000000 --- a/arch/sparc/include/asm/pci_64.h +++ /dev/null @@ -1,52 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef __SPARC64_PCI_H -#define __SPARC64_PCI_H - -#ifdef __KERNEL__ - -#include - -/* Can be used to override the logic in pci_scan_bus for skipping - * already-configured bus numbers - to be used for buggy BIOSes - * or architectures with incomplete PCI setup by the loader. - */ -#define pcibios_assign_all_busses() 0 - -#define PCIBIOS_MIN_IO 0UL -#define PCIBIOS_MIN_MEM 0UL - -#define PCI_IRQ_NONE 0xffffffff - -/* PCI IOMMU mapping bypass support. */ - -/* PCI 64-bit addressing works for all slots on all controller - * types on sparc64. However, it requires that the device - * can drive enough of the 64 bits. - */ -#define PCI64_REQUIRED_MASK (~(u64)0) -#define PCI64_ADDR_BASE 0xfffc000000000000UL - -/* Return the index of the PCI controller for device PDEV. 
*/ - -int pci_domain_nr(struct pci_bus *bus); -static inline int pci_proc_domain(struct pci_bus *bus) -{ - return 1; -} - -/* Platform support for /proc/bus/pci/X/Y mmap()s. */ - -#define HAVE_PCI_MMAP -#define arch_can_pci_mmap_io() 1 -#define HAVE_ARCH_PCI_GET_UNMAPPED_AREA -#define get_pci_unmapped_area get_fb_unmapped_area - -static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) -{ - return PCI_IRQ_NONE; -} - -#define HAVE_ARCH_PCI_RESOURCE_TO_USER -#endif /* __KERNEL__ */ - -#endif /* __SPARC64_PCI_H */ diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c index 05eb016fc41b..b1a09080e8da 100644 --- a/arch/sparc/kernel/iommu.c +++ b/arch/sparc/kernel/iommu.c @@ -314,7 +314,7 @@ bad: bad_no_ctx: if (printk_ratelimit()) WARN_ON(1); - return SPARC_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } static void strbuf_flush(struct strbuf *strbuf, struct iommu *iommu, @@ -547,7 +547,7 @@ static int dma_4u_map_sg(struct device *dev, struct scatterlist *sglist, if (outcount < incount) { outs = sg_next(outs); - outs->dma_address = SPARC_MAPPING_ERROR; + outs->dma_address = DMA_MAPPING_ERROR; outs->dma_length = 0; } @@ -573,7 +573,7 @@ iommu_map_failed: iommu_tbl_range_free(&iommu->tbl, vaddr, npages, IOMMU_ERROR_CODE); - s->dma_address = SPARC_MAPPING_ERROR; + s->dma_address = DMA_MAPPING_ERROR; s->dma_length = 0; } if (s == outs) @@ -741,11 +741,6 @@ static void dma_4u_sync_sg_for_cpu(struct device *dev, spin_unlock_irqrestore(&iommu->lock, flags); } -static int dma_4u_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == SPARC_MAPPING_ERROR; -} - static int dma_4u_supported(struct device *dev, u64 device_mask) { struct iommu *iommu = dev->archdata.iommu; @@ -771,7 +766,6 @@ static const struct dma_map_ops sun4u_dma_ops = { .sync_single_for_cpu = dma_4u_sync_single_for_cpu, .sync_sg_for_cpu = dma_4u_sync_sg_for_cpu, .dma_supported = dma_4u_supported, - .mapping_error = dma_4u_mapping_error, }; const struct dma_map_ops *dma_ops = &sun4u_dma_ops; diff --git a/arch/sparc/kernel/iommu_common.h b/arch/sparc/kernel/iommu_common.h index e3c02ba32500..d62ed9c5682d 100644 --- a/arch/sparc/kernel/iommu_common.h +++ b/arch/sparc/kernel/iommu_common.h @@ -48,6 +48,4 @@ static inline int is_span_boundary(unsigned long entry, return iommu_is_span_boundary(entry, nr, shift, boundary_size); } -#define SPARC_MAPPING_ERROR (~(dma_addr_t)0x0) - #endif /* _IOMMU_COMMON_H */ diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c index aeaad04fdd14..f89603855f1e 100644 --- a/arch/sparc/kernel/ioport.c +++ b/arch/sparc/kernel/ioport.c @@ -52,8 +52,6 @@ #include #include -const struct sparc32_dma_ops *sparc32_dma_ops; - /* This function must make sure that caches and memory are coherent after DMA * On LEON systems without cache snooping it flushes the entire D-CACHE. */ @@ -247,178 +245,60 @@ static void _sparc_free_io(struct resource *res) release_resource(res); } -#ifdef CONFIG_SBUS - -void sbus_set_sbus64(struct device *dev, int x) -{ - printk("sbus_set_sbus64: unsupported\n"); -} -EXPORT_SYMBOL(sbus_set_sbus64); - -/* - * Allocate a chunk of memory suitable for DMA. - * Typically devices use them for control blocks. - * CPU may access them without any explicit flushing. 
- */ -static void *sbus_alloc_coherent(struct device *dev, size_t len, - dma_addr_t *dma_addrp, gfp_t gfp, - unsigned long attrs) +unsigned long sparc_dma_alloc_resource(struct device *dev, size_t len) { - struct platform_device *op = to_platform_device(dev); - unsigned long len_total = PAGE_ALIGN(len); - unsigned long va; struct resource *res; - int order; - - /* XXX why are some lengths signed, others unsigned? */ - if (len <= 0) { - return NULL; - } - /* XXX So what is maxphys for us and how do drivers know it? */ - if (len > 256*1024) { /* __get_free_pages() limit */ - return NULL; - } - - order = get_order(len_total); - va = __get_free_pages(gfp, order); - if (va == 0) - goto err_nopages; - if ((res = kzalloc(sizeof(struct resource), GFP_KERNEL)) == NULL) - goto err_nomem; + res = kzalloc(sizeof(*res), GFP_KERNEL); + if (!res) + return 0; + res->name = dev->of_node->full_name; - if (allocate_resource(&_sparc_dvma, res, len_total, - _sparc_dvma.start, _sparc_dvma.end, PAGE_SIZE, NULL, NULL) != 0) { - printk("sbus_alloc_consistent: cannot occupy 0x%lx", len_total); - goto err_nova; + if (allocate_resource(&_sparc_dvma, res, len, _sparc_dvma.start, + _sparc_dvma.end, PAGE_SIZE, NULL, NULL) != 0) { + printk("%s: cannot occupy 0x%zx", __func__, len); + kfree(res); + return 0; } - // XXX The sbus_map_dma_area does this for us below, see comments. - // srmmu_mapiorange(0, virt_to_phys(va), res->start, len_total); - /* - * XXX That's where sdev would be used. Currently we load - * all iommu tables with the same translations. - */ - if (sbus_map_dma_area(dev, dma_addrp, va, res->start, len_total) != 0) - goto err_noiommu; - - res->name = op->dev.of_node->full_name; - - return (void *)(unsigned long)res->start; - -err_noiommu: - release_resource(res); -err_nova: - kfree(res); -err_nomem: - free_pages(va, order); -err_nopages: - return NULL; + return res->start; } -static void sbus_free_coherent(struct device *dev, size_t n, void *p, - dma_addr_t ba, unsigned long attrs) +bool sparc_dma_free_resource(void *cpu_addr, size_t size) { + unsigned long addr = (unsigned long)cpu_addr; struct resource *res; - struct page *pgv; - if ((res = lookup_resource(&_sparc_dvma, - (unsigned long)p)) == NULL) { - printk("sbus_free_consistent: cannot free %p\n", p); - return; + res = lookup_resource(&_sparc_dvma, addr); + if (!res) { + printk("%s: cannot free %p\n", __func__, cpu_addr); + return false; } - if (((unsigned long)p & (PAGE_SIZE-1)) != 0) { - printk("sbus_free_consistent: unaligned va %p\n", p); - return; + if ((addr & (PAGE_SIZE - 1)) != 0) { + printk("%s: unaligned va %p\n", __func__, cpu_addr); + return false; } - n = PAGE_ALIGN(n); - if (resource_size(res) != n) { - printk("sbus_free_consistent: region 0x%lx asked 0x%zx\n", - (long)resource_size(res), n); - return; + size = PAGE_ALIGN(size); + if (resource_size(res) != size) { + printk("%s: region 0x%lx asked 0x%zx\n", + __func__, (long)resource_size(res), size); + return false; } release_resource(res); kfree(res); - - pgv = virt_to_page(p); - sbus_unmap_dma_area(dev, ba, n); - - __free_pages(pgv, get_order(n)); -} - -/* - * Map a chunk of memory so that devices can see it. - * CPU view of this memory may be inconsistent with - * a device view and explicit flushing is necessary. - */ -static dma_addr_t sbus_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t len, - enum dma_data_direction dir, - unsigned long attrs) -{ - void *va = page_address(page) + offset; - - /* XXX why are some lengths signed, others unsigned? 
*/ - if (len <= 0) { - return 0; - } - /* XXX So what is maxphys for us and how do drivers know it? */ - if (len > 256*1024) { /* __get_free_pages() limit */ - return 0; - } - return mmu_get_scsi_one(dev, va, len); -} - -static void sbus_unmap_page(struct device *dev, dma_addr_t ba, size_t n, - enum dma_data_direction dir, unsigned long attrs) -{ - mmu_release_scsi_one(dev, ba, n); -} - -static int sbus_map_sg(struct device *dev, struct scatterlist *sg, int n, - enum dma_data_direction dir, unsigned long attrs) -{ - mmu_get_scsi_sgl(dev, sg, n); - return n; -} - -static void sbus_unmap_sg(struct device *dev, struct scatterlist *sg, int n, - enum dma_data_direction dir, unsigned long attrs) -{ - mmu_release_scsi_sgl(dev, sg, n); -} - -static void sbus_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, - int n, enum dma_data_direction dir) -{ - BUG(); + return true; } -static void sbus_sync_sg_for_device(struct device *dev, struct scatterlist *sg, - int n, enum dma_data_direction dir) -{ - BUG(); -} +#ifdef CONFIG_SBUS -static int sbus_dma_supported(struct device *dev, u64 mask) +void sbus_set_sbus64(struct device *dev, int x) { - return 0; + printk("sbus_set_sbus64: unsupported\n"); } - -static const struct dma_map_ops sbus_dma_ops = { - .alloc = sbus_alloc_coherent, - .free = sbus_free_coherent, - .map_page = sbus_map_page, - .unmap_page = sbus_unmap_page, - .map_sg = sbus_map_sg, - .unmap_sg = sbus_unmap_sg, - .sync_sg_for_cpu = sbus_sync_sg_for_cpu, - .sync_sg_for_device = sbus_sync_sg_for_device, - .dma_supported = sbus_dma_supported, -}; +EXPORT_SYMBOL(sbus_set_sbus64); static int __init sparc_register_ioport(void) { @@ -438,45 +318,30 @@ arch_initcall(sparc_register_ioport); void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) { - unsigned long len_total = PAGE_ALIGN(size); + unsigned long addr; void *va; - struct resource *res; - int order; - if (size == 0) { + if (!size || size > 256 * 1024) /* __get_free_pages() limit */ return NULL; - } - if (size > 256*1024) { /* __get_free_pages() limit */ - return NULL; - } - order = get_order(len_total); - va = (void *) __get_free_pages(gfp, order); - if (va == NULL) { - printk("%s: no %ld pages\n", __func__, len_total>>PAGE_SHIFT); - goto err_nopages; + size = PAGE_ALIGN(size); + va = (void *) __get_free_pages(gfp | __GFP_ZERO, get_order(size)); + if (!va) { + printk("%s: no %zd pages\n", __func__, size >> PAGE_SHIFT); + return NULL; } - if ((res = kzalloc(sizeof(struct resource), GFP_KERNEL)) == NULL) { - printk("%s: no core\n", __func__); + addr = sparc_dma_alloc_resource(dev, size); + if (!addr) goto err_nomem; - } - if (allocate_resource(&_sparc_dvma, res, len_total, - _sparc_dvma.start, _sparc_dvma.end, PAGE_SIZE, NULL, NULL) != 0) { - printk("%s: cannot occupy 0x%lx", __func__, len_total); - goto err_nova; - } - srmmu_mapiorange(0, virt_to_phys(va), res->start, len_total); + srmmu_mapiorange(0, virt_to_phys(va), addr, size); *dma_handle = virt_to_phys(va); - return (void *) res->start; + return (void *)addr; -err_nova: - kfree(res); err_nomem: - free_pages((unsigned long)va, order); -err_nopages: + free_pages((unsigned long)va, get_order(size)); return NULL; } @@ -491,31 +356,11 @@ err_nopages: void arch_dma_free(struct device *dev, size_t size, void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs) { - struct resource *res; - - if ((res = lookup_resource(&_sparc_dvma, - (unsigned long)cpu_addr)) == NULL) { - printk("%s: cannot free %p\n", __func__, cpu_addr); + if 
(!sparc_dma_free_resource(cpu_addr, PAGE_ALIGN(size))) return; - } - - if (((unsigned long)cpu_addr & (PAGE_SIZE-1)) != 0) { - printk("%s: unaligned va %p\n", __func__, cpu_addr); - return; - } - - size = PAGE_ALIGN(size); - if (resource_size(res) != size) { - printk("%s: region 0x%lx asked 0x%zx\n", __func__, - (long)resource_size(res), size); - return; - } dma_make_coherent(dma_addr, size); srmmu_unmapiorange((unsigned long)cpu_addr, size); - - release_resource(res); - kfree(res); free_pages((unsigned long)phys_to_virt(dma_addr), get_order(size)); } @@ -528,7 +373,7 @@ void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr, dma_make_coherent(paddr, PAGE_ALIGN(size)); } -const struct dma_map_ops *dma_ops = &sbus_dma_ops; +const struct dma_map_ops *dma_ops; EXPORT_SYMBOL(dma_ops); #ifdef CONFIG_PROC_FS diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c index 565d9ac883d0..fa0e42b4cbfb 100644 --- a/arch/sparc/kernel/pci_sun4v.c +++ b/arch/sparc/kernel/pci_sun4v.c @@ -414,12 +414,12 @@ static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page, bad: if (printk_ratelimit()) WARN_ON(1); - return SPARC_MAPPING_ERROR; + return DMA_MAPPING_ERROR; iommu_map_fail: local_irq_restore(flags); iommu_tbl_range_free(tbl, bus_addr, npages, IOMMU_ERROR_CODE); - return SPARC_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } static void dma_4v_unmap_page(struct device *dev, dma_addr_t bus_addr, @@ -592,7 +592,7 @@ static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist, if (outcount < incount) { outs = sg_next(outs); - outs->dma_address = SPARC_MAPPING_ERROR; + outs->dma_address = DMA_MAPPING_ERROR; outs->dma_length = 0; } @@ -609,7 +609,7 @@ iommu_map_failed: iommu_tbl_range_free(tbl, vaddr, npages, IOMMU_ERROR_CODE); /* XXX demap? XXX */ - s->dma_address = SPARC_MAPPING_ERROR; + s->dma_address = DMA_MAPPING_ERROR; s->dma_length = 0; } if (s == outs) @@ -688,11 +688,6 @@ static int dma_4v_supported(struct device *dev, u64 device_mask) return pci64_dma_supported(to_pci_dev(dev), device_mask); } -static int dma_4v_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == SPARC_MAPPING_ERROR; -} - static const struct dma_map_ops sun4v_dma_ops = { .alloc = dma_4v_alloc_coherent, .free = dma_4v_free_coherent, @@ -701,7 +696,6 @@ static const struct dma_map_ops sun4v_dma_ops = { .map_sg = dma_4v_map_sg, .unmap_sg = dma_4v_unmap_sg, .dma_supported = dma_4v_supported, - .mapping_error = dma_4v_mapping_error, }; static void pci_sun4v_scan_bus(struct pci_pbm_info *pbm, struct device *parent) diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c index c8cb27d3ea75..f770ee7229d8 100644 --- a/arch/sparc/mm/io-unit.c +++ b/arch/sparc/mm/io-unit.c @@ -12,7 +12,7 @@ #include #include /* pte_offset_map => kmap_atomic */ #include -#include +#include #include #include @@ -140,34 +140,44 @@ nexti: scan = find_next_zero_bit(iounit->bmap, limit, scan); return vaddr; } -static __u32 iounit_get_scsi_one(struct device *dev, char *vaddr, unsigned long len) +static dma_addr_t iounit_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t len, enum dma_data_direction dir, + unsigned long attrs) { + void *vaddr = page_address(page) + offset; struct iounit_struct *iounit = dev->archdata.iommu; unsigned long ret, flags; + /* XXX So what is maxphys for us and how do drivers know it? 
*/ + if (!len || len > 256 * 1024) + return DMA_MAPPING_ERROR; + spin_lock_irqsave(&iounit->lock, flags); ret = iounit_get_area(iounit, (unsigned long)vaddr, len); spin_unlock_irqrestore(&iounit->lock, flags); return ret; } -static void iounit_get_scsi_sgl(struct device *dev, struct scatterlist *sg, int sz) +static int iounit_map_sg(struct device *dev, struct scatterlist *sgl, int nents, + enum dma_data_direction dir, unsigned long attrs) { struct iounit_struct *iounit = dev->archdata.iommu; + struct scatterlist *sg; unsigned long flags; + int i; /* FIXME: Cache some resolved pages - often several sg entries are to the same page */ spin_lock_irqsave(&iounit->lock, flags); - while (sz != 0) { - --sz; + for_each_sg(sgl, sg, nents, i) { sg->dma_address = iounit_get_area(iounit, (unsigned long) sg_virt(sg), sg->length); sg->dma_length = sg->length; - sg = sg_next(sg); } spin_unlock_irqrestore(&iounit->lock, flags); + return nents; } -static void iounit_release_scsi_one(struct device *dev, __u32 vaddr, unsigned long len) +static void iounit_unmap_page(struct device *dev, dma_addr_t vaddr, size_t len, + enum dma_data_direction dir, unsigned long attrs) { struct iounit_struct *iounit = dev->archdata.iommu; unsigned long flags; @@ -181,34 +191,47 @@ static void iounit_release_scsi_one(struct device *dev, __u32 vaddr, unsigned lo spin_unlock_irqrestore(&iounit->lock, flags); } -static void iounit_release_scsi_sgl(struct device *dev, struct scatterlist *sg, int sz) +static void iounit_unmap_sg(struct device *dev, struct scatterlist *sgl, + int nents, enum dma_data_direction dir, unsigned long attrs) { struct iounit_struct *iounit = dev->archdata.iommu; - unsigned long flags; - unsigned long vaddr, len; + unsigned long flags, vaddr, len; + struct scatterlist *sg; + int i; spin_lock_irqsave(&iounit->lock, flags); - while (sz != 0) { - --sz; + for_each_sg(sgl, sg, nents, i) { len = ((sg->dma_address & ~PAGE_MASK) + sg->length + (PAGE_SIZE-1)) >> PAGE_SHIFT; vaddr = (sg->dma_address - IOUNIT_DMA_BASE) >> PAGE_SHIFT; IOD(("iounit_release %08lx-%08lx\n", (long)vaddr, (long)len+vaddr)); for (len += vaddr; vaddr < len; vaddr++) clear_bit(vaddr, iounit->bmap); - sg = sg_next(sg); } spin_unlock_irqrestore(&iounit->lock, flags); } #ifdef CONFIG_SBUS -static int iounit_map_dma_area(struct device *dev, dma_addr_t *pba, unsigned long va, unsigned long addr, int len) +static void *iounit_alloc(struct device *dev, size_t len, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) { struct iounit_struct *iounit = dev->archdata.iommu; - unsigned long page, end; + unsigned long va, addr, page, end, ret; pgprot_t dvma_prot; iopte_t __iomem *iopte; - *pba = addr; + /* XXX So what is maxphys for us and how do drivers know it? 
*/ + if (!len || len > 256 * 1024) + return NULL; + + len = PAGE_ALIGN(len); + va = __get_free_pages(gfp | __GFP_ZERO, get_order(len)); + if (!va) + return NULL; + + addr = ret = sparc_dma_alloc_resource(dev, len); + if (!addr) + goto out_free_pages; + *dma_handle = addr; dvma_prot = __pgprot(SRMMU_CACHE | SRMMU_ET_PTE | SRMMU_PRIV); end = PAGE_ALIGN((addr + len)); @@ -237,27 +260,32 @@ static int iounit_map_dma_area(struct device *dev, dma_addr_t *pba, unsigned lon flush_cache_all(); flush_tlb_all(); - return 0; + return (void *)ret; + +out_free_pages: + free_pages(va, get_order(len)); + return NULL; } -static void iounit_unmap_dma_area(struct device *dev, unsigned long addr, int len) +static void iounit_free(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t dma_addr, unsigned long attrs) { /* XXX Somebody please fill this in */ } #endif -static const struct sparc32_dma_ops iounit_dma_ops = { - .get_scsi_one = iounit_get_scsi_one, - .get_scsi_sgl = iounit_get_scsi_sgl, - .release_scsi_one = iounit_release_scsi_one, - .release_scsi_sgl = iounit_release_scsi_sgl, +static const struct dma_map_ops iounit_dma_ops = { #ifdef CONFIG_SBUS - .map_dma_area = iounit_map_dma_area, - .unmap_dma_area = iounit_unmap_dma_area, + .alloc = iounit_alloc, + .free = iounit_free, #endif + .map_page = iounit_map_page, + .unmap_page = iounit_unmap_page, + .map_sg = iounit_map_sg, + .unmap_sg = iounit_unmap_sg, }; void __init ld_mmu_iounit(void) { - sparc32_dma_ops = &iounit_dma_ops; + dma_ops = &iounit_dma_ops; } diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c index 2c5f8a648f8c..e8d5d73ca40d 100644 --- a/arch/sparc/mm/iommu.c +++ b/arch/sparc/mm/iommu.c @@ -13,7 +13,7 @@ #include #include #include /* pte_offset_map => kmap_atomic */ -#include +#include #include #include @@ -205,59 +205,67 @@ static u32 iommu_get_one(struct device *dev, struct page *page, int npages) return busa0; } -static u32 iommu_get_scsi_one(struct device *dev, char *vaddr, unsigned int len) +static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t len) { - unsigned long off; - int npages; - struct page *page; - u32 busa; - - off = (unsigned long)vaddr & ~PAGE_MASK; - npages = (off + len + PAGE_SIZE-1) >> PAGE_SHIFT; - page = virt_to_page((unsigned long)vaddr & PAGE_MASK); - busa = iommu_get_one(dev, page, npages); - return busa + off; + void *vaddr = page_address(page) + offset; + unsigned long off = (unsigned long)vaddr & ~PAGE_MASK; + unsigned long npages = (off + len + PAGE_SIZE - 1) >> PAGE_SHIFT; + + /* XXX So what is maxphys for us and how do drivers know it? 
*/ + if (!len || len > 256 * 1024) + return DMA_MAPPING_ERROR; + return iommu_get_one(dev, virt_to_page(vaddr), npages) + off; } -static __u32 iommu_get_scsi_one_gflush(struct device *dev, char *vaddr, unsigned long len) +static dma_addr_t sbus_iommu_map_page_gflush(struct device *dev, + struct page *page, unsigned long offset, size_t len, + enum dma_data_direction dir, unsigned long attrs) { flush_page_for_dma(0); - return iommu_get_scsi_one(dev, vaddr, len); + return __sbus_iommu_map_page(dev, page, offset, len); } -static __u32 iommu_get_scsi_one_pflush(struct device *dev, char *vaddr, unsigned long len) +static dma_addr_t sbus_iommu_map_page_pflush(struct device *dev, + struct page *page, unsigned long offset, size_t len, + enum dma_data_direction dir, unsigned long attrs) { - unsigned long page = ((unsigned long) vaddr) & PAGE_MASK; + void *vaddr = page_address(page) + offset; + unsigned long p = ((unsigned long)vaddr) & PAGE_MASK; - while(page < ((unsigned long)(vaddr + len))) { - flush_page_for_dma(page); - page += PAGE_SIZE; + while (p < (unsigned long)vaddr + len) { + flush_page_for_dma(p); + p += PAGE_SIZE; } - return iommu_get_scsi_one(dev, vaddr, len); + + return __sbus_iommu_map_page(dev, page, offset, len); } -static void iommu_get_scsi_sgl_gflush(struct device *dev, struct scatterlist *sg, int sz) +static int sbus_iommu_map_sg_gflush(struct device *dev, struct scatterlist *sgl, + int nents, enum dma_data_direction dir, unsigned long attrs) { - int n; + struct scatterlist *sg; + int i, n; flush_page_for_dma(0); - while (sz != 0) { - --sz; + + for_each_sg(sgl, sg, nents, i) { n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; sg->dma_address = iommu_get_one(dev, sg_page(sg), n) + sg->offset; sg->dma_length = sg->length; - sg = sg_next(sg); } + + return nents; } -static void iommu_get_scsi_sgl_pflush(struct device *dev, struct scatterlist *sg, int sz) +static int sbus_iommu_map_sg_pflush(struct device *dev, struct scatterlist *sgl, + int nents, enum dma_data_direction dir, unsigned long attrs) { unsigned long page, oldpage = 0; - int n, i; - - while(sz != 0) { - --sz; + struct scatterlist *sg; + int i, j, n; + for_each_sg(sgl, sg, nents, j) { n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; /* @@ -277,8 +285,9 @@ static void iommu_get_scsi_sgl_pflush(struct device *dev, struct scatterlist *sg sg->dma_address = iommu_get_one(dev, sg_page(sg), n) + sg->offset; sg->dma_length = sg->length; - sg = sg_next(sg); } + + return nents; } static void iommu_release_one(struct device *dev, u32 busa, int npages) @@ -297,40 +306,52 @@ static void iommu_release_one(struct device *dev, u32 busa, int npages) bit_map_clear(&iommu->usemap, ioptex, npages); } -static void iommu_release_scsi_one(struct device *dev, __u32 vaddr, unsigned long len) +static void sbus_iommu_unmap_page(struct device *dev, dma_addr_t dma_addr, + size_t len, enum dma_data_direction dir, unsigned long attrs) { - unsigned long off; + unsigned long off = dma_addr & ~PAGE_MASK; int npages; - off = vaddr & ~PAGE_MASK; npages = (off + len + PAGE_SIZE-1) >> PAGE_SHIFT; - iommu_release_one(dev, vaddr & PAGE_MASK, npages); + iommu_release_one(dev, dma_addr & PAGE_MASK, npages); } -static void iommu_release_scsi_sgl(struct device *dev, struct scatterlist *sg, int sz) +static void sbus_iommu_unmap_sg(struct device *dev, struct scatterlist *sgl, + int nents, enum dma_data_direction dir, unsigned long attrs) { - int n; - - while(sz != 0) { - --sz; + struct scatterlist *sg; + int i, n; + for_each_sg(sgl, sg, nents, i) 
{ n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; iommu_release_one(dev, sg->dma_address & PAGE_MASK, n); sg->dma_address = 0x21212121; - sg = sg_next(sg); } } #ifdef CONFIG_SBUS -static int iommu_map_dma_area(struct device *dev, dma_addr_t *pba, unsigned long va, - unsigned long addr, int len) +static void *sbus_iommu_alloc(struct device *dev, size_t len, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) { struct iommu_struct *iommu = dev->archdata.iommu; - unsigned long page, end; + unsigned long va, addr, page, end, ret; iopte_t *iopte = iommu->page_table; iopte_t *first; int ioptex; + /* XXX So what is maxphys for us and how do drivers know it? */ + if (!len || len > 256 * 1024) + return NULL; + + len = PAGE_ALIGN(len); + va = __get_free_pages(gfp | __GFP_ZERO, get_order(len)); + if (va == 0) + return NULL; + + addr = ret = sparc_dma_alloc_resource(dev, len); + if (!addr) + goto out_free_pages; + BUG_ON((va & ~PAGE_MASK) != 0); BUG_ON((addr & ~PAGE_MASK) != 0); BUG_ON((len & ~PAGE_MASK) != 0); @@ -385,16 +406,25 @@ static int iommu_map_dma_area(struct device *dev, dma_addr_t *pba, unsigned long flush_tlb_all(); iommu_invalidate(iommu->regs); - *pba = iommu->start + (ioptex << PAGE_SHIFT); - return 0; + *dma_handle = iommu->start + (ioptex << PAGE_SHIFT); + return (void *)ret; + +out_free_pages: + free_pages(va, get_order(len)); + return NULL; } -static void iommu_unmap_dma_area(struct device *dev, unsigned long busa, int len) +static void sbus_iommu_free(struct device *dev, size_t len, void *cpu_addr, + dma_addr_t busa, unsigned long attrs) { struct iommu_struct *iommu = dev->archdata.iommu; iopte_t *iopte = iommu->page_table; - unsigned long end; + struct page *page = virt_to_page(cpu_addr); int ioptex = (busa - iommu->start) >> PAGE_SHIFT; + unsigned long end; + + if (!sparc_dma_free_resource(cpu_addr, len)) + return; BUG_ON((busa & ~PAGE_MASK) != 0); BUG_ON((len & ~PAGE_MASK) != 0); @@ -408,38 +438,40 @@ static void iommu_unmap_dma_area(struct device *dev, unsigned long busa, int len flush_tlb_all(); iommu_invalidate(iommu->regs); bit_map_clear(&iommu->usemap, ioptex, len >> PAGE_SHIFT); + + __free_pages(page, get_order(len)); } #endif -static const struct sparc32_dma_ops iommu_dma_gflush_ops = { - .get_scsi_one = iommu_get_scsi_one_gflush, - .get_scsi_sgl = iommu_get_scsi_sgl_gflush, - .release_scsi_one = iommu_release_scsi_one, - .release_scsi_sgl = iommu_release_scsi_sgl, +static const struct dma_map_ops sbus_iommu_dma_gflush_ops = { #ifdef CONFIG_SBUS - .map_dma_area = iommu_map_dma_area, - .unmap_dma_area = iommu_unmap_dma_area, + .alloc = sbus_iommu_alloc, + .free = sbus_iommu_free, #endif + .map_page = sbus_iommu_map_page_gflush, + .unmap_page = sbus_iommu_unmap_page, + .map_sg = sbus_iommu_map_sg_gflush, + .unmap_sg = sbus_iommu_unmap_sg, }; -static const struct sparc32_dma_ops iommu_dma_pflush_ops = { - .get_scsi_one = iommu_get_scsi_one_pflush, - .get_scsi_sgl = iommu_get_scsi_sgl_pflush, - .release_scsi_one = iommu_release_scsi_one, - .release_scsi_sgl = iommu_release_scsi_sgl, +static const struct dma_map_ops sbus_iommu_dma_pflush_ops = { #ifdef CONFIG_SBUS - .map_dma_area = iommu_map_dma_area, - .unmap_dma_area = iommu_unmap_dma_area, + .alloc = sbus_iommu_alloc, + .free = sbus_iommu_free, #endif + .map_page = sbus_iommu_map_page_pflush, + .unmap_page = sbus_iommu_unmap_page, + .map_sg = sbus_iommu_map_sg_pflush, + .unmap_sg = sbus_iommu_unmap_sg, }; void __init ld_mmu_iommu(void) { if (flush_page_for_dma_global) { /* flush_page_for_dma flushes 
everything, no matter of what page is it */ - sparc32_dma_ops = &iommu_dma_gflush_ops; + dma_ops = &sbus_iommu_dma_gflush_ops; } else { - sparc32_dma_ops = &iommu_dma_pflush_ops; + dma_ops = &sbus_iommu_dma_pflush_ops; } if (viking_mxcc_present || srmmu_modtype == HyperSparc) { diff --git a/arch/sparc/net/bpf_jit_comp_32.c b/arch/sparc/net/bpf_jit_comp_32.c index a5ff88643d5c..84cc8f7f83e9 100644 --- a/arch/sparc/net/bpf_jit_comp_32.c +++ b/arch/sparc/net/bpf_jit_comp_32.c @@ -552,15 +552,14 @@ void bpf_jit_compile(struct bpf_prog *fp) emit_skb_load32(hash, r_A); break; case BPF_ANC | SKF_AD_VLAN_TAG: - case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: emit_skb_load16(vlan_tci, r_A); - if (code != (BPF_ANC | SKF_AD_VLAN_TAG)) { - emit_alu_K(SRL, 12); + break; + case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: + __emit_skb_load8(__pkt_vlan_present_offset, r_A); + if (PKT_VLAN_PRESENT_BIT) + emit_alu_K(SRL, PKT_VLAN_PRESENT_BIT); + if (PKT_VLAN_PRESENT_BIT < 7) emit_andi(r_A, 1, r_A); - } else { - emit_loadimm(~VLAN_TAG_PRESENT, r_TMP); - emit_and(r_A, r_TMP, r_A); - } break; case BPF_LD | BPF_W | BPF_LEN: emit_skb_load32(len, r_A); diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c index 5fda4f7bf15d..65428e79b2f3 100644 --- a/arch/sparc/net/bpf_jit_comp_64.c +++ b/arch/sparc/net/bpf_jit_comp_64.c @@ -1575,6 +1575,7 @@ skip_init_ctx: prog->jited_len = image_size; if (!prog->is_func || extra_pass) { + bpf_prog_fill_jited_linfo(prog, ctx.offset); out_off: kfree(ctx.offset); kfree(jit_data); diff --git a/arch/um/Kconfig b/arch/um/Kconfig index 6b9938919f0b..a238547671d6 100644 --- a/arch/um/Kconfig +++ b/arch/um/Kconfig @@ -31,12 +31,6 @@ config ISA config SBUS bool -config PCI - bool - -config PCMCIA - bool - config TRACE_IRQFLAGS_SUPPORT bool default y diff --git a/arch/um/Makefile b/arch/um/Makefile index ab1066c38944..273130cf91d1 100644 --- a/arch/um/Makefile +++ b/arch/um/Makefile @@ -23,8 +23,6 @@ OS := $(shell uname -s) # features. SHELL := /bin/bash -filechk_gen_header = $< - core-y += $(ARCH_DIR)/kernel/ \ $(ARCH_DIR)/drivers/ \ $(ARCH_DIR)/os-$(OS)/ @@ -116,7 +114,8 @@ endef archheaders: $(Q)$(MAKE) -f $(srctree)/Makefile ARCH=$(HEADER_ARCH) asm-generic archheaders -archprepare: include/generated/user_constants.h +archprepare: + $(Q)$(MAKE) $(build)=$(HOST_DIR)/um include/generated/user_constants.h LINK-$(CONFIG_LD_SCRIPT_STATIC) += -static LINK-$(CONFIG_LD_SCRIPT_DYN) += -Wl,-rpath,/lib $(call cc-option, -no-pie) @@ -146,25 +145,4 @@ archclean: @find . 
\( -name '*.bb' -o -name '*.bbg' -o -name '*.da' \ -o -name '*.gcov' \) -type f -print | xargs rm -f -# Generated files - -$(HOST_DIR)/um/user-offsets.s: __headers FORCE - $(Q)$(MAKE) $(build)=$(HOST_DIR)/um $@ - -define filechk_gen-asm-offsets - (set -e; \ - echo "/*"; \ - echo " * DO NOT MODIFY."; \ - echo " *"; \ - echo " * This file was generated by arch/$(ARCH)/Makefile"; \ - echo " *"; \ - echo " */"; \ - echo ""; \ - sed -ne "/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}" < $<; \ - echo ""; ) -endef - -include/generated/user_constants.h: $(HOST_DIR)/um/user-offsets.s - $(call filechk,gen-asm-offsets) - export HEADER_ARCH SUBARCH USER_CFLAGS CFLAGS_NO_HARDENING OS DEV_NULL_PATH diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c index 1067469ba2ea..8d21a83dd289 100644 --- a/arch/um/kernel/mem.c +++ b/arch/um/kernel/mem.c @@ -51,8 +51,8 @@ void __init mem_init(void) /* this will put all low memory onto the freelists */ memblock_free_all(); - max_low_pfn = totalram_pages; - max_pfn = totalram_pages; + max_low_pfn = totalram_pages(); + max_pfn = max_low_pfn; mem_init_print_info(NULL); kmalloc_ok = 1; } diff --git a/arch/unicore32/Kconfig b/arch/unicore32/Kconfig index a4c05159dca5..c3a41bfe161b 100644 --- a/arch/unicore32/Kconfig +++ b/arch/unicore32/Kconfig @@ -4,13 +4,13 @@ config UNICORE32 select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_MIGHT_HAVE_PC_PARPORT select ARCH_MIGHT_HAVE_PC_SERIO - select DMA_DIRECT_OPS select HAVE_GENERIC_DMA_COHERENT select HAVE_KERNEL_GZIP select HAVE_KERNEL_BZIP2 select GENERIC_ATOMIC64 select HAVE_KERNEL_LZO select HAVE_KERNEL_LZMA + select HAVE_PCI select VIRT_TO_BUS select ARCH_HAVE_CUSTOM_GPIO_H select GENERIC_FIND_FIRST_BIT @@ -116,22 +116,6 @@ config UNICORE_FPU_F64 endmenu -menu "Bus support" - -config PCI - bool "PCI Support" - help - Find out whether you have a PCI motherboard. PCI is the name of a - bus system, i.e. the way the CPU talks to the other stuff inside - your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or - VESA. If you have PCI, say Y, otherwise N. 
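For context on the unicore32 mm/init.c hunk further below: the architecture's private
"initrd=" parser goes away because the generic early_param handler now supplies
phys_initrd_start/phys_initrd_size. A minimal sketch of the memparse-based pattern being
deleted (it mirrors the removed handler; the names here are illustrative stand-ins, not a
new API):

#include <linux/init.h>
#include <linux/kernel.h>

/* illustrative stand-ins for the removed __initdata variables */
static unsigned long sketch_initrd_start __initdata;
static unsigned long sketch_initrd_size __initdata;

static int __init sketch_early_initrd(char *p)
{
	unsigned long start, size;
	char *endp;

	start = memparse(p, &endp);			/* parse "<start>" */
	if (*endp == ',') {
		size = memparse(endp + 1, NULL);	/* parse ",<size>" */
		sketch_initrd_start = start;
		sketch_initrd_size  = size;
	}
	return 0;
}
early_param("initrd", sketch_early_initrd);
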
- -source "drivers/pci/Kconfig" - -source "drivers/pcmcia/Kconfig" - -endmenu - menu "Kernel Features" source "kernel/Kconfig.hz" diff --git a/arch/unicore32/mm/init.c b/arch/unicore32/mm/init.c index cf4eb9481fd6..85ef2c624090 100644 --- a/arch/unicore32/mm/init.c +++ b/arch/unicore32/mm/init.c @@ -30,25 +30,6 @@ #include "mm.h" -static unsigned long phys_initrd_start __initdata = 0x01000000; -static unsigned long phys_initrd_size __initdata = SZ_8M; - -static int __init early_initrd(char *p) -{ - unsigned long start, size; - char *endp; - - start = memparse(p, &endp); - if (*endp == ',') { - size = memparse(endp + 1, NULL); - - phys_initrd_start = start; - phys_initrd_size = size; - } - return 0; -} -early_param("initrd", early_initrd); - /* * This keeps memory configuration data used by a couple memory * initialization functions, as well as show_mem() for the skipping @@ -156,6 +137,11 @@ void __init uc32_memblock_init(struct meminfo *mi) memblock_reserve(__pa(_text), _end - _text); #ifdef CONFIG_BLK_DEV_INITRD + if (!phys_initrd_size) { + phys_initrd_start = 0x01000000; + phys_initrd_size = SZ_8M; + } + if (phys_initrd_size) { memblock_reserve(phys_initrd_start, phys_initrd_size); diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index c7094f813183..e260460210e1 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -66,7 +66,6 @@ config X86 select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64 select ARCH_HAS_UACCESS_MCSAFE if X86_64 && X86_MCE select ARCH_HAS_SET_MEMORY - select ARCH_HAS_SG_CHAIN select ARCH_HAS_STRICT_KERNEL_RWX select ARCH_HAS_STRICT_MODULE_RWX select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE @@ -90,7 +89,6 @@ config X86 select CLOCKSOURCE_VALIDATE_LAST_CYCLE select CLOCKSOURCE_WATCHDOG select DCACHE_WORD_ACCESS - select DMA_DIRECT_OPS select EDAC_ATOMIC_SCRUB select EDAC_SUPPORT select GENERIC_CLOCKEVENTS @@ -147,6 +145,7 @@ config X86 select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EBPF_JIT select HAVE_EFFICIENT_UNALIGNED_ACCESS + select HAVE_EISA select HAVE_EXIT_THREAD select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE select HAVE_FTRACE_MCOUNT_RECORD @@ -180,6 +179,7 @@ config X86 select HAVE_PERF_EVENTS select HAVE_PERF_EVENTS_NMI select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI + select HAVE_PCI select HAVE_PERF_REGS select HAVE_PERF_USER_STACK_DUMP select HAVE_RCU_TABLE_FREE if PARAVIRT @@ -196,6 +196,7 @@ config X86 select HOTPLUG_SMT if SMP select IRQ_FORCED_THREADING select NEED_SG_DMA_LENGTH + select PCI_DOMAINS if PCI select PCI_LOCKLESS_CONFIG select PERF_EVENTS select RTC_LIB @@ -1979,7 +1980,7 @@ config SECCOMP If unsure, say Y. Only embedded should say N here. -source kernel/Kconfig.hz +source "kernel/Kconfig.hz" config KEXEC bool "kexec system call" @@ -2576,15 +2577,6 @@ endmenu menu "Bus options (PCI etc.)" -config PCI - bool "PCI support" - default y - ---help--- - Find out whether you have a PCI motherboard. PCI is the name of a - bus system, i.e. the way the CPU talks to the other stuff inside - your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or - VESA. If you have PCI, say Y, otherwise N. - choice prompt "PCI access mode" depends on X86_32 && PCI @@ -2646,10 +2638,6 @@ config PCI_XEN depends on PCI && XEN select SWIOTLB_XEN -config PCI_DOMAINS - def_bool y - depends on PCI - config MMCONF_FAM10H def_bool y depends on X86_64 && PCI_MMCONFIG && ACPI @@ -2667,8 +2655,6 @@ config PCI_CNB20LE_QUIRK You should say N unless you know you need this. 
-source "drivers/pci/Kconfig" - config ISA_BUS bool "ISA bus support on modern systems" if EXPERT help @@ -2699,24 +2685,6 @@ config ISA (MCA) or VESA. ISA is an older system, now being displaced by PCI; newer boards don't support it. If you have ISA, say Y, otherwise N. -config EISA - bool "EISA support" - depends on ISA - ---help--- - The Extended Industry Standard Architecture (EISA) bus was - developed as an open alternative to the IBM MicroChannel bus. - - The EISA bus provided some of the features of the IBM MicroChannel - bus while maintaining backward compatibility with cards made for - the older ISA bus. The EISA bus saw limited use between 1988 and - 1995 when it was made obsolete by the PCI bus. - - Say Y here if you are building a kernel for an EISA-based machine. - - Otherwise, say N. - -source "drivers/eisa/Kconfig" - config SCx200 tristate "NatSemi SCx200 support" ---help--- @@ -2828,17 +2796,6 @@ config AMD_NB def_bool y depends on CPU_SUP_AMD && PCI -source "drivers/pcmcia/Kconfig" - -config RAPIDIO - tristate "RapidIO support" - depends on PCI - help - If enabled this option will include drivers and the core - infrastructure code to support RapidIO interconnect devices. - -source "drivers/rapidio/Kconfig" - config X86_SYSFB bool "Mark VGA/VBE/EFI FB as generic system framebuffer" help diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig index 6c3ab05c231d..4bb95d7ad947 100644 --- a/arch/x86/configs/i386_defconfig +++ b/arch/x86/configs/i386_defconfig @@ -69,6 +69,7 @@ CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE=y CONFIG_CPU_FREQ_GOV_PERFORMANCE=y CONFIG_CPU_FREQ_GOV_ONDEMAND=y CONFIG_X86_ACPI_CPUFREQ=y +CONFIG_PCI=y CONFIG_PCIEPORTBUS=y CONFIG_PCI_MSI=y CONFIG_PCCARD=y diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig index ac9ae487cfeb..0fed049422a8 100644 --- a/arch/x86/configs/x86_64_defconfig +++ b/arch/x86/configs/x86_64_defconfig @@ -67,6 +67,7 @@ CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE=y CONFIG_CPU_FREQ_GOV_PERFORMANCE=y CONFIG_CPU_FREQ_GOV_ONDEMAND=y CONFIG_X86_ACPI_CPUFREQ=y +CONFIG_PCI=y CONFIG_PCI_MMCONFIG=y CONFIG_PCIEPORTBUS=y CONFIG_PCCARD=y diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index a4b0007a54e1..45734e1cf967 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -8,6 +8,7 @@ OBJECT_FILES_NON_STANDARD := y avx_supported := $(call as-instr,vpxor %xmm0$(comma)%xmm0$(comma)%xmm0,yes,no) avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\ $(comma)4)$(comma)%ymm2,yes,no) +avx512_supported :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,yes,no) sha1_ni_supported :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,yes,no) sha256_ni_supported :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,yes,no) @@ -23,7 +24,7 @@ obj-$(CONFIG_CRYPTO_CAMELLIA_X86_64) += camellia-x86_64.o obj-$(CONFIG_CRYPTO_BLOWFISH_X86_64) += blowfish-x86_64.o obj-$(CONFIG_CRYPTO_TWOFISH_X86_64) += twofish-x86_64.o obj-$(CONFIG_CRYPTO_TWOFISH_X86_64_3WAY) += twofish-x86_64-3way.o -obj-$(CONFIG_CRYPTO_CHACHA20_X86_64) += chacha20-x86_64.o +obj-$(CONFIG_CRYPTO_CHACHA20_X86_64) += chacha-x86_64.o obj-$(CONFIG_CRYPTO_SERPENT_SSE2_X86_64) += serpent-sse2-x86_64.o obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o @@ -46,6 +47,9 @@ obj-$(CONFIG_CRYPTO_MORUS1280_GLUE) += morus1280_glue.o obj-$(CONFIG_CRYPTO_MORUS640_SSE2) += morus640-sse2.o obj-$(CONFIG_CRYPTO_MORUS1280_SSE2) += morus1280-sse2.o 
+obj-$(CONFIG_CRYPTO_NHPOLY1305_SSE2) += nhpoly1305-sse2.o +obj-$(CONFIG_CRYPTO_NHPOLY1305_AVX2) += nhpoly1305-avx2.o + # These modules require assembler to support AVX. ifeq ($(avx_supported),yes) obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64) += \ @@ -74,7 +78,7 @@ camellia-x86_64-y := camellia-x86_64-asm_64.o camellia_glue.o blowfish-x86_64-y := blowfish-x86_64-asm_64.o blowfish_glue.o twofish-x86_64-y := twofish-x86_64-asm_64.o twofish_glue.o twofish-x86_64-3way-y := twofish-x86_64-asm_64-3way.o twofish_glue_3way.o -chacha20-x86_64-y := chacha20-ssse3-x86_64.o chacha20_glue.o +chacha-x86_64-y := chacha-ssse3-x86_64.o chacha_glue.o serpent-sse2-x86_64-y := serpent-sse2-x86_64-asm_64.o serpent_sse2_glue.o aegis128-aesni-y := aegis128-aesni-asm.o aegis128-aesni-glue.o @@ -84,6 +88,8 @@ aegis256-aesni-y := aegis256-aesni-asm.o aegis256-aesni-glue.o morus640-sse2-y := morus640-sse2-asm.o morus640-sse2-glue.o morus1280-sse2-y := morus1280-sse2-asm.o morus1280-sse2-glue.o +nhpoly1305-sse2-y := nh-sse2-x86_64.o nhpoly1305-sse2-glue.o + ifeq ($(avx_supported),yes) camellia-aesni-avx-x86_64-y := camellia-aesni-avx-asm_64.o \ camellia_aesni_avx_glue.o @@ -97,10 +103,16 @@ endif ifeq ($(avx2_supported),yes) camellia-aesni-avx2-y := camellia-aesni-avx2-asm_64.o camellia_aesni_avx2_glue.o - chacha20-x86_64-y += chacha20-avx2-x86_64.o + chacha-x86_64-y += chacha-avx2-x86_64.o serpent-avx2-y := serpent-avx2-asm_64.o serpent_avx2_glue.o morus1280-avx2-y := morus1280-avx2-asm.o morus1280-avx2-glue.o + + nhpoly1305-avx2-y := nh-avx2-x86_64.o nhpoly1305-avx2-glue.o +endif + +ifeq ($(avx512_supported),yes) + chacha-x86_64-y += chacha-avx512vl-x86_64.o endif aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S index 1985ea0b551b..91c039ab5699 100644 --- a/arch/x86/crypto/aesni-intel_avx-x86_64.S +++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S @@ -182,43 +182,30 @@ aad_shift_arr: .text -##define the fields of the gcm aes context -#{ -# u8 expanded_keys[16*11] store expanded keys -# u8 shifted_hkey_1[16] store HashKey <<1 mod poly here -# u8 shifted_hkey_2[16] store HashKey^2 <<1 mod poly here -# u8 shifted_hkey_3[16] store HashKey^3 <<1 mod poly here -# u8 shifted_hkey_4[16] store HashKey^4 <<1 mod poly here -# u8 shifted_hkey_5[16] store HashKey^5 <<1 mod poly here -# u8 shifted_hkey_6[16] store HashKey^6 <<1 mod poly here -# u8 shifted_hkey_7[16] store HashKey^7 <<1 mod poly here -# u8 shifted_hkey_8[16] store HashKey^8 <<1 mod poly here -# u8 shifted_hkey_1_k[16] store XOR HashKey <<1 mod poly here (for Karatsuba purposes) -# u8 shifted_hkey_2_k[16] store XOR HashKey^2 <<1 mod poly here (for Karatsuba purposes) -# u8 shifted_hkey_3_k[16] store XOR HashKey^3 <<1 mod poly here (for Karatsuba purposes) -# u8 shifted_hkey_4_k[16] store XOR HashKey^4 <<1 mod poly here (for Karatsuba purposes) -# u8 shifted_hkey_5_k[16] store XOR HashKey^5 <<1 mod poly here (for Karatsuba purposes) -# u8 shifted_hkey_6_k[16] store XOR HashKey^6 <<1 mod poly here (for Karatsuba purposes) -# u8 shifted_hkey_7_k[16] store XOR HashKey^7 <<1 mod poly here (for Karatsuba purposes) -# u8 shifted_hkey_8_k[16] store XOR HashKey^8 <<1 mod poly here (for Karatsuba purposes) -#} gcm_ctx# - -HashKey = 16*11 # store HashKey <<1 mod poly here -HashKey_2 = 16*12 # store HashKey^2 <<1 mod poly here -HashKey_3 = 16*13 # store HashKey^3 <<1 mod poly here -HashKey_4 = 16*14 # store HashKey^4 <<1 mod poly here -HashKey_5 = 16*15 # store HashKey^5 <<1 
mod poly here -HashKey_6 = 16*16 # store HashKey^6 <<1 mod poly here -HashKey_7 = 16*17 # store HashKey^7 <<1 mod poly here -HashKey_8 = 16*18 # store HashKey^8 <<1 mod poly here -HashKey_k = 16*19 # store XOR of HashKey <<1 mod poly here (for Karatsuba purposes) -HashKey_2_k = 16*20 # store XOR of HashKey^2 <<1 mod poly here (for Karatsuba purposes) -HashKey_3_k = 16*21 # store XOR of HashKey^3 <<1 mod poly here (for Karatsuba purposes) -HashKey_4_k = 16*22 # store XOR of HashKey^4 <<1 mod poly here (for Karatsuba purposes) -HashKey_5_k = 16*23 # store XOR of HashKey^5 <<1 mod poly here (for Karatsuba purposes) -HashKey_6_k = 16*24 # store XOR of HashKey^6 <<1 mod poly here (for Karatsuba purposes) -HashKey_7_k = 16*25 # store XOR of HashKey^7 <<1 mod poly here (for Karatsuba purposes) -HashKey_8_k = 16*26 # store XOR of HashKey^8 <<1 mod poly here (for Karatsuba purposes) +#define AadHash 16*0 +#define AadLen 16*1 +#define InLen (16*1)+8 +#define PBlockEncKey 16*2 +#define OrigIV 16*3 +#define CurCount 16*4 +#define PBlockLen 16*5 + +HashKey = 16*6 # store HashKey <<1 mod poly here +HashKey_2 = 16*7 # store HashKey^2 <<1 mod poly here +HashKey_3 = 16*8 # store HashKey^3 <<1 mod poly here +HashKey_4 = 16*9 # store HashKey^4 <<1 mod poly here +HashKey_5 = 16*10 # store HashKey^5 <<1 mod poly here +HashKey_6 = 16*11 # store HashKey^6 <<1 mod poly here +HashKey_7 = 16*12 # store HashKey^7 <<1 mod poly here +HashKey_8 = 16*13 # store HashKey^8 <<1 mod poly here +HashKey_k = 16*14 # store XOR of HashKey <<1 mod poly here (for Karatsuba purposes) +HashKey_2_k = 16*15 # store XOR of HashKey^2 <<1 mod poly here (for Karatsuba purposes) +HashKey_3_k = 16*16 # store XOR of HashKey^3 <<1 mod poly here (for Karatsuba purposes) +HashKey_4_k = 16*17 # store XOR of HashKey^4 <<1 mod poly here (for Karatsuba purposes) +HashKey_5_k = 16*18 # store XOR of HashKey^5 <<1 mod poly here (for Karatsuba purposes) +HashKey_6_k = 16*19 # store XOR of HashKey^6 <<1 mod poly here (for Karatsuba purposes) +HashKey_7_k = 16*20 # store XOR of HashKey^7 <<1 mod poly here (for Karatsuba purposes) +HashKey_8_k = 16*21 # store XOR of HashKey^8 <<1 mod poly here (for Karatsuba purposes) #define arg1 %rdi #define arg2 %rsi @@ -229,6 +216,8 @@ HashKey_8_k = 16*26 # store XOR of HashKey^8 <<1 mod poly here (for Karatsu #define arg7 STACK_OFFSET+8*1(%r14) #define arg8 STACK_OFFSET+8*2(%r14) #define arg9 STACK_OFFSET+8*3(%r14) +#define arg10 STACK_OFFSET+8*4(%r14) +#define keysize 2*15*16(arg1) i = 0 j = 0 @@ -267,1356 +256,1619 @@ VARIABLE_OFFSET = 16*8 # Utility Macros ################################ +.macro FUNC_SAVE + #the number of pushes must equal STACK_OFFSET + push %r12 + push %r13 + push %r14 + push %r15 + + mov %rsp, %r14 + + + + sub $VARIABLE_OFFSET, %rsp + and $~63, %rsp # align rsp to 64 bytes +.endm + +.macro FUNC_RESTORE + mov %r14, %rsp + + pop %r15 + pop %r14 + pop %r13 + pop %r12 +.endm + # Encryption of a single block -.macro ENCRYPT_SINGLE_BLOCK XMM0 +.macro ENCRYPT_SINGLE_BLOCK REP XMM0 vpxor (arg1), \XMM0, \XMM0 - i = 1 - setreg -.rep 9 + i = 1 + setreg +.rep \REP vaesenc 16*i(arg1), \XMM0, \XMM0 - i = (i+1) - setreg + i = (i+1) + setreg .endr - vaesenclast 16*10(arg1), \XMM0, \XMM0 + vaesenclast 16*i(arg1), \XMM0, \XMM0 .endm -#ifdef CONFIG_AS_AVX -############################################################################### -# GHASH_MUL MACRO to implement: Data*HashKey mod (128,127,126,121,0) -# Input: A and B (128-bits each, bit-reflected) -# Output: C = A*B*x mod poly, (i.e. 
>>1 ) -# To compute GH = GH*HashKey mod poly, give HK = HashKey<<1 mod poly as input -# GH = GH * HK * x mod poly which is equivalent to GH*HashKey mod poly. -############################################################################### -.macro GHASH_MUL_AVX GH HK T1 T2 T3 T4 T5 +# combined for GCM encrypt and decrypt functions +# clobbering all xmm registers +# clobbering r10, r11, r12, r13, r14, r15 +.macro GCM_ENC_DEC INITIAL_BLOCKS GHASH_8_ENCRYPT_8_PARALLEL GHASH_LAST_8 GHASH_MUL ENC_DEC REP + vmovdqu AadHash(arg2), %xmm8 + vmovdqu HashKey(arg2), %xmm13 # xmm13 = HashKey + add arg5, InLen(arg2) - vpshufd $0b01001110, \GH, \T2 - vpshufd $0b01001110, \HK, \T3 - vpxor \GH , \T2, \T2 # T2 = (a1+a0) - vpxor \HK , \T3, \T3 # T3 = (b1+b0) + # initialize the data pointer offset as zero + xor %r11d, %r11d - vpclmulqdq $0x11, \HK, \GH, \T1 # T1 = a1*b1 - vpclmulqdq $0x00, \HK, \GH, \GH # GH = a0*b0 - vpclmulqdq $0x00, \T3, \T2, \T2 # T2 = (a1+a0)*(b1+b0) - vpxor \GH, \T2,\T2 - vpxor \T1, \T2,\T2 # T2 = a0*b1+a1*b0 + PARTIAL_BLOCK \GHASH_MUL, arg3, arg4, arg5, %r11, %xmm8, \ENC_DEC + sub %r11, arg5 - vpslldq $8, \T2,\T3 # shift-L T3 2 DWs - vpsrldq $8, \T2,\T2 # shift-R T2 2 DWs - vpxor \T3, \GH, \GH - vpxor \T2, \T1, \T1 # = GH x HK + mov arg5, %r13 # save the number of bytes of plaintext/ciphertext + and $-16, %r13 # r13 = r13 - (r13 mod 16) - #first phase of the reduction - vpslld $31, \GH, \T2 # packed right shifting << 31 - vpslld $30, \GH, \T3 # packed right shifting shift << 30 - vpslld $25, \GH, \T4 # packed right shifting shift << 25 + mov %r13, %r12 + shr $4, %r12 + and $7, %r12 + jz _initial_num_blocks_is_0\@ - vpxor \T3, \T2, \T2 # xor the shifted versions - vpxor \T4, \T2, \T2 + cmp $7, %r12 + je _initial_num_blocks_is_7\@ + cmp $6, %r12 + je _initial_num_blocks_is_6\@ + cmp $5, %r12 + je _initial_num_blocks_is_5\@ + cmp $4, %r12 + je _initial_num_blocks_is_4\@ + cmp $3, %r12 + je _initial_num_blocks_is_3\@ + cmp $2, %r12 + je _initial_num_blocks_is_2\@ - vpsrldq $4, \T2, \T5 # shift-R T5 1 DW + jmp _initial_num_blocks_is_1\@ - vpslldq $12, \T2, \T2 # shift-L T2 3 DWs - vpxor \T2, \GH, \GH # first phase of the reduction complete +_initial_num_blocks_is_7\@: + \INITIAL_BLOCKS \REP, 7, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + sub $16*7, %r13 + jmp _initial_blocks_encrypted\@ - #second phase of the reduction +_initial_num_blocks_is_6\@: + \INITIAL_BLOCKS \REP, 6, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + sub $16*6, %r13 + jmp _initial_blocks_encrypted\@ - vpsrld $1,\GH, \T2 # packed left shifting >> 1 - vpsrld $2,\GH, \T3 # packed left shifting >> 2 - vpsrld $7,\GH, \T4 # packed left shifting >> 7 - vpxor \T3, \T2, \T2 # xor the shifted versions - vpxor \T4, \T2, \T2 +_initial_num_blocks_is_5\@: + \INITIAL_BLOCKS \REP, 5, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + sub $16*5, %r13 + jmp _initial_blocks_encrypted\@ - vpxor \T5, \T2, \T2 - vpxor \T2, \GH, \GH - vpxor \T1, \GH, \GH # the result is in GH +_initial_num_blocks_is_4\@: + \INITIAL_BLOCKS \REP, 4, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + sub $16*4, %r13 + jmp _initial_blocks_encrypted\@ +_initial_num_blocks_is_3\@: + \INITIAL_BLOCKS \REP, 3, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, 
%xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + sub $16*3, %r13 + jmp _initial_blocks_encrypted\@ -.endm +_initial_num_blocks_is_2\@: + \INITIAL_BLOCKS \REP, 2, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + sub $16*2, %r13 + jmp _initial_blocks_encrypted\@ -.macro PRECOMPUTE_AVX HK T1 T2 T3 T4 T5 T6 +_initial_num_blocks_is_1\@: + \INITIAL_BLOCKS \REP, 1, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + sub $16*1, %r13 + jmp _initial_blocks_encrypted\@ - # Haskey_i_k holds XORed values of the low and high parts of the Haskey_i - vmovdqa \HK, \T5 +_initial_num_blocks_is_0\@: + \INITIAL_BLOCKS \REP, 0, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_k(arg1) - GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^2<<1 mod poly - vmovdqa \T5, HashKey_2(arg1) # [HashKey_2] = HashKey^2<<1 mod poly - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_2_k(arg1) +_initial_blocks_encrypted\@: + cmp $0, %r13 + je _zero_cipher_left\@ - GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^3<<1 mod poly - vmovdqa \T5, HashKey_3(arg1) - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_3_k(arg1) + sub $128, %r13 + je _eight_cipher_left\@ - GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^4<<1 mod poly - vmovdqa \T5, HashKey_4(arg1) - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_4_k(arg1) - GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^5<<1 mod poly - vmovdqa \T5, HashKey_5(arg1) - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_5_k(arg1) - GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^6<<1 mod poly - vmovdqa \T5, HashKey_6(arg1) - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_6_k(arg1) - GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^7<<1 mod poly - vmovdqa \T5, HashKey_7(arg1) - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_7_k(arg1) + vmovd %xmm9, %r15d + and $255, %r15d + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^8<<1 mod poly - vmovdqa \T5, HashKey_8(arg1) - vpshufd $0b01001110, \T5, \T1 - vpxor \T5, \T1, \T1 - vmovdqa \T1, HashKey_8_k(arg1) -.endm +_encrypt_by_8_new\@: + cmp $(255-8), %r15d + jg _encrypt_by_8\@ -## if a = number of total plaintext bytes -## b = floor(a/16) -## num_initial_blocks = b mod 4# -## encrypt the initial num_initial_blocks blocks and apply ghash on the ciphertext -## r10, r11, r12, rax are clobbered -## arg1, arg2, arg3, r14 are used as a pointer only, not modified -.macro INITIAL_BLOCKS_AVX num_initial_blocks T1 T2 T3 T4 T5 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T6 T_key ENC_DEC - i = (8-\num_initial_blocks) - j = 0 - setreg - mov arg6, %r10 # r10 = AAD - mov arg7, %r12 # r12 = aadLen + add $8, %r15b + \GHASH_8_ENCRYPT_8_PARALLEL \REP, %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm15, out_order, \ENC_DEC + add $128, %r11 + sub $128, %r13 + jne _encrypt_by_8_new\@ + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 + jmp _eight_cipher_left\@ - mov %r12, %r11 
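A note on the GHASH arithmetic these macros lean on (the moved GHASH_MUL_AVX definition
appears further below): multiplication in GF(2^128) uses a Karatsuba split, where
vpclmulqdq supplies a1*b1 and a0*b0 and the middle term falls out of (a1^a0)*(b1^b0) with
two extra XORs, because addition and subtraction are both XOR in GF(2). A bitwise C model
of the 64x64 carry-less multiply primitive (illustration only; the hardware instruction
does this in one step):

#include <stdint.h>

/* what a single vpclmulqdq lane computes, modelled bit by bit */
static void clmul64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
	uint64_t h = 0, l = 0;
	int i;

	for (i = 0; i < 64; i++) {
		if (b & (1ULL << i)) {
			l ^= a << i;
			if (i)
				h ^= a >> (64 - i);
		}
	}
	*hi = h;
	*lo = l;
}

/*
 * Karatsuba in GF(2): with a = a1*x^64 + a0 and b = b1*x^64 + b0,
 *   a*b = a1b1*x^128 + (a1b1 ^ a0b0 ^ (a1^a0)*(b1^b0))*x^64 + a0b0
 * so three carry-less multiplies plus XORs replace four.
 */
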
+_encrypt_by_8\@: + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 + add $8, %r15b + \GHASH_8_ENCRYPT_8_PARALLEL \REP, %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm15, in_order, \ENC_DEC + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 + add $128, %r11 + sub $128, %r13 + jne _encrypt_by_8_new\@ - vpxor reg_j, reg_j, reg_j - vpxor reg_i, reg_i, reg_i - cmp $16, %r11 - jl _get_AAD_rest8\@ -_get_AAD_blocks\@: - vmovdqu (%r10), reg_i - vpshufb SHUF_MASK(%rip), reg_i, reg_i - vpxor reg_i, reg_j, reg_j - GHASH_MUL_AVX reg_j, \T2, \T1, \T3, \T4, \T5, \T6 - add $16, %r10 - sub $16, %r12 - sub $16, %r11 - cmp $16, %r11 - jge _get_AAD_blocks\@ - vmovdqu reg_j, reg_i - cmp $0, %r11 - je _get_AAD_done\@ + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - vpxor reg_i, reg_i, reg_i - /* read the last <16B of AAD. since we have at least 4B of - data right after the AAD (the ICV, and maybe some CT), we can - read 4B/8B blocks safely, and then get rid of the extra stuff */ -_get_AAD_rest8\@: - cmp $4, %r11 - jle _get_AAD_rest4\@ - movq (%r10), \T1 - add $8, %r10 - sub $8, %r11 - vpslldq $8, \T1, \T1 - vpsrldq $8, reg_i, reg_i - vpxor \T1, reg_i, reg_i - jmp _get_AAD_rest8\@ -_get_AAD_rest4\@: - cmp $0, %r11 - jle _get_AAD_rest0\@ - mov (%r10), %eax - movq %rax, \T1 - add $4, %r10 - sub $4, %r11 - vpslldq $12, \T1, \T1 - vpsrldq $4, reg_i, reg_i - vpxor \T1, reg_i, reg_i -_get_AAD_rest0\@: - /* finalize: shift out the extra bytes we read, and align - left. since pslldq can only shift by an immediate, we use - vpshufb and an array of shuffle masks */ - movq %r12, %r11 - salq $4, %r11 - movdqu aad_shift_arr(%r11), \T1 - vpshufb \T1, reg_i, reg_i -_get_AAD_rest_final\@: - vpshufb SHUF_MASK(%rip), reg_i, reg_i - vpxor reg_j, reg_i, reg_i - GHASH_MUL_AVX reg_i, \T2, \T1, \T3, \T4, \T5, \T6 -_get_AAD_done\@: - # initialize the data pointer offset as zero - xor %r11d, %r11d - # start AES for num_initial_blocks blocks - mov arg5, %rax # rax = *Y0 - vmovdqu (%rax), \CTR # CTR = Y0 - vpshufb SHUF_MASK(%rip), \CTR, \CTR +_eight_cipher_left\@: + \GHASH_LAST_8 %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8 - i = (9-\num_initial_blocks) - setreg -.rep \num_initial_blocks - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, reg_i - vpshufb SHUF_MASK(%rip), reg_i, reg_i # perform a 16Byte swap - i = (i+1) - setreg -.endr +_zero_cipher_left\@: + vmovdqu %xmm14, AadHash(arg2) + vmovdqu %xmm9, CurCount(arg2) - vmovdqa (arg1), \T_key - i = (9-\num_initial_blocks) - setreg -.rep \num_initial_blocks - vpxor \T_key, reg_i, reg_i - i = (i+1) - setreg -.endr + # check for 0 length + mov arg5, %r13 + and $15, %r13 # r13 = (arg5 mod 16) - j = 1 - setreg -.rep 9 - vmovdqa 16*j(arg1), \T_key - i = (9-\num_initial_blocks) - setreg -.rep \num_initial_blocks - vaesenc \T_key, reg_i, reg_i - i = (i+1) - setreg -.endr + je _multiple_of_16_bytes\@ - j = (j+1) - setreg -.endr + # handle the last <16 Byte block separately + mov %r13, PBlockLen(arg2) - vmovdqa 16*10(arg1), \T_key - i = (9-\num_initial_blocks) - setreg -.rep \num_initial_blocks - vaesenclast \T_key, reg_i, reg_i - i = (i+1) - setreg -.endr + vpaddd ONE(%rip), %xmm9, %xmm9 # INCR CNT to get Yn + vmovdqu %xmm9, CurCount(arg2) + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - i = (9-\num_initial_blocks) - setreg -.rep \num_initial_blocks - vmovdqu (arg3, %r11), \T1 - vpxor \T1, reg_i, reg_i - vmovdqu reg_i, (arg2 , %r11) # write back ciphertext for num_initial_blocks blocks - add 
$16, %r11 -.if \ENC_DEC == DEC - vmovdqa \T1, reg_i -.endif - vpshufb SHUF_MASK(%rip), reg_i, reg_i # prepare ciphertext for GHASH computations - i = (i+1) - setreg -.endr + ENCRYPT_SINGLE_BLOCK \REP, %xmm9 # E(K, Yn) + vmovdqu %xmm9, PBlockEncKey(arg2) + cmp $16, arg5 + jge _large_enough_update\@ - i = (8-\num_initial_blocks) - j = (9-\num_initial_blocks) - setreg + lea (arg4,%r11,1), %r10 + mov %r13, %r12 -.rep \num_initial_blocks - vpxor reg_i, reg_j, reg_j - GHASH_MUL_AVX reg_j, \T2, \T1, \T3, \T4, \T5, \T6 # apply GHASH on num_initial_blocks blocks - i = (i+1) - j = (j+1) - setreg -.endr - # XMM8 has the combined result here + READ_PARTIAL_BLOCK %r10 %r12 %xmm1 - vmovdqa \XMM8, TMP1(%rsp) - vmovdqa \XMM8, \T3 + lea SHIFT_MASK+16(%rip), %r12 + sub %r13, %r12 # adjust the shuffle mask pointer to be + # able to shift 16-r13 bytes (r13 is the + # number of bytes in plaintext mod 16) - cmp $128, %r13 - jl _initial_blocks_done\@ # no need for precomputed constants + jmp _final_ghash_mul\@ -############################################################################### -# Haskey_i_k holds XORed values of the low and high parts of the Haskey_i - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM1 - vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap +_large_enough_update\@: + sub $16, %r11 + add %r13, %r11 - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM2 - vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap + # receive the last <16 Byte block + vmovdqu (arg4, %r11, 1), %xmm1 - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM3 - vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap + sub %r13, %r11 + add $16, %r11 - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM4 - vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap + lea SHIFT_MASK+16(%rip), %r12 + # adjust the shuffle mask pointer to be able to shift 16-r13 bytes + # (r13 is the number of bytes in plaintext mod 16) + sub %r13, %r12 + # get the appropriate shuffle mask + vmovdqu (%r12), %xmm2 + # shift right 16-r13 bytes + vpshufb %xmm2, %xmm1, %xmm1 - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM5 - vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap +_final_ghash_mul\@: + .if \ENC_DEC == DEC + vmovdqa %xmm1, %xmm2 + vpxor %xmm1, %xmm9, %xmm9 # Plaintext XOR E(K, Yn) + vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 # get the appropriate mask to + # mask out top 16-r13 bytes of xmm9 + vpand %xmm1, %xmm9, %xmm9 # mask out top 16-r13 bytes of xmm9 + vpand %xmm1, %xmm2, %xmm2 + vpshufb SHUF_MASK(%rip), %xmm2, %xmm2 + vpxor %xmm2, %xmm14, %xmm14 - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM6 - vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap + vmovdqu %xmm14, AadHash(arg2) + .else + vpxor %xmm1, %xmm9, %xmm9 # Plaintext XOR E(K, Yn) + vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 # get the appropriate mask to + # mask out top 16-r13 bytes of xmm9 + vpand %xmm1, %xmm9, %xmm9 # mask out top 16-r13 bytes of xmm9 + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 + vpxor %xmm9, %xmm14, %xmm14 - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM7 - vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap + vmovdqu %xmm14, AadHash(arg2) + vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 # shuffle xmm9 back to output as ciphertext + .endif - vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 - vmovdqa \CTR, \XMM8 - vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap - vmovdqa (arg1), \T_key - vpxor \T_key, \XMM1, \XMM1 - vpxor \T_key, \XMM2, 
\XMM2 - vpxor \T_key, \XMM3, \XMM3 - vpxor \T_key, \XMM4, \XMM4 - vpxor \T_key, \XMM5, \XMM5 - vpxor \T_key, \XMM6, \XMM6 - vpxor \T_key, \XMM7, \XMM7 - vpxor \T_key, \XMM8, \XMM8 + ############################# + # output r13 Bytes + vmovq %xmm9, %rax + cmp $8, %r13 + jle _less_than_8_bytes_left\@ - i = 1 - setreg -.rep 9 # do 9 rounds - vmovdqa 16*i(arg1), \T_key - vaesenc \T_key, \XMM1, \XMM1 - vaesenc \T_key, \XMM2, \XMM2 - vaesenc \T_key, \XMM3, \XMM3 - vaesenc \T_key, \XMM4, \XMM4 - vaesenc \T_key, \XMM5, \XMM5 - vaesenc \T_key, \XMM6, \XMM6 - vaesenc \T_key, \XMM7, \XMM7 - vaesenc \T_key, \XMM8, \XMM8 - i = (i+1) - setreg -.endr + mov %rax, (arg3 , %r11) + add $8, %r11 + vpsrldq $8, %xmm9, %xmm9 + vmovq %xmm9, %rax + sub $8, %r13 +_less_than_8_bytes_left\@: + movb %al, (arg3 , %r11) + add $1, %r11 + shr $8, %rax + sub $1, %r13 + jne _less_than_8_bytes_left\@ + ############################# - vmovdqa 16*i(arg1), \T_key - vaesenclast \T_key, \XMM1, \XMM1 - vaesenclast \T_key, \XMM2, \XMM2 - vaesenclast \T_key, \XMM3, \XMM3 - vaesenclast \T_key, \XMM4, \XMM4 - vaesenclast \T_key, \XMM5, \XMM5 - vaesenclast \T_key, \XMM6, \XMM6 - vaesenclast \T_key, \XMM7, \XMM7 - vaesenclast \T_key, \XMM8, \XMM8 +_multiple_of_16_bytes\@: +.endm - vmovdqu (arg3, %r11), \T1 - vpxor \T1, \XMM1, \XMM1 - vmovdqu \XMM1, (arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM1 - .endif - vmovdqu 16*1(arg3, %r11), \T1 - vpxor \T1, \XMM2, \XMM2 - vmovdqu \XMM2, 16*1(arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM2 - .endif +# GCM_COMPLETE Finishes update of tag of last partial block +# Output: Authorization Tag (AUTH_TAG) +# Clobbers rax, r10-r12, and xmm0, xmm1, xmm5-xmm15 +.macro GCM_COMPLETE GHASH_MUL REP AUTH_TAG AUTH_TAG_LEN + vmovdqu AadHash(arg2), %xmm14 + vmovdqu HashKey(arg2), %xmm13 - vmovdqu 16*2(arg3, %r11), \T1 - vpxor \T1, \XMM3, \XMM3 - vmovdqu \XMM3, 16*2(arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM3 - .endif + mov PBlockLen(arg2), %r12 + cmp $0, %r12 + je _partial_done\@ - vmovdqu 16*3(arg3, %r11), \T1 - vpxor \T1, \XMM4, \XMM4 - vmovdqu \XMM4, 16*3(arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM4 - .endif + #GHASH computation for the last <16 Byte block + \GHASH_MUL %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 - vmovdqu 16*4(arg3, %r11), \T1 - vpxor \T1, \XMM5, \XMM5 - vmovdqu \XMM5, 16*4(arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM5 - .endif +_partial_done\@: + mov AadLen(arg2), %r12 # r12 = aadLen (number of bytes) + shl $3, %r12 # convert into number of bits + vmovd %r12d, %xmm15 # len(A) in xmm15 - vmovdqu 16*5(arg3, %r11), \T1 - vpxor \T1, \XMM6, \XMM6 - vmovdqu \XMM6, 16*5(arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM6 - .endif + mov InLen(arg2), %r12 + shl $3, %r12 # len(C) in bits (*128) + vmovq %r12, %xmm1 + vpslldq $8, %xmm15, %xmm15 # xmm15 = len(A)|| 0x0000000000000000 + vpxor %xmm1, %xmm15, %xmm15 # xmm15 = len(A)||len(C) - vmovdqu 16*6(arg3, %r11), \T1 - vpxor \T1, \XMM7, \XMM7 - vmovdqu \XMM7, 16*6(arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM7 - .endif + vpxor %xmm15, %xmm14, %xmm14 + \GHASH_MUL %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 # final GHASH computation + vpshufb SHUF_MASK(%rip), %xmm14, %xmm14 # perform a 16Byte swap - vmovdqu 16*7(arg3, %r11), \T1 - vpxor \T1, \XMM8, \XMM8 - vmovdqu \XMM8, 16*7(arg2 , %r11) - .if \ENC_DEC == DEC - vmovdqa \T1, \XMM8 - .endif + vmovdqu OrigIV(arg2), %xmm9 - add $128, %r11 + ENCRYPT_SINGLE_BLOCK \REP, %xmm9 # E(K, Y0) - vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform 
a 16Byte swap - vpxor TMP1(%rsp), \XMM1, \XMM1 # combine GHASHed value with the corresponding ciphertext - vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap + vpxor %xmm14, %xmm9, %xmm9 -############################################################################### -_initial_blocks_done\@: -.endm +_return_T\@: + mov \AUTH_TAG, %r10 # r10 = authTag + mov \AUTH_TAG_LEN, %r11 # r11 = auth_tag_len -# encrypt 8 blocks at a time -# ghash the 8 previously encrypted ciphertext blocks -# arg1, arg2, arg3 are used as pointers only, not modified -# r11 is the data offset value -.macro GHASH_8_ENCRYPT_8_PARALLEL_AVX T1 T2 T3 T4 T5 T6 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T7 loop_idx ENC_DEC + cmp $16, %r11 + je _T_16\@ - vmovdqa \XMM1, \T2 - vmovdqa \XMM2, TMP2(%rsp) - vmovdqa \XMM3, TMP3(%rsp) - vmovdqa \XMM4, TMP4(%rsp) - vmovdqa \XMM5, TMP5(%rsp) - vmovdqa \XMM6, TMP6(%rsp) - vmovdqa \XMM7, TMP7(%rsp) - vmovdqa \XMM8, TMP8(%rsp) + cmp $8, %r11 + jl _T_4\@ -.if \loop_idx == in_order - vpaddd ONE(%rip), \CTR, \XMM1 # INCR CNT - vpaddd ONE(%rip), \XMM1, \XMM2 - vpaddd ONE(%rip), \XMM2, \XMM3 - vpaddd ONE(%rip), \XMM3, \XMM4 - vpaddd ONE(%rip), \XMM4, \XMM5 - vpaddd ONE(%rip), \XMM5, \XMM6 - vpaddd ONE(%rip), \XMM6, \XMM7 - vpaddd ONE(%rip), \XMM7, \XMM8 - vmovdqa \XMM8, \CTR +_T_8\@: + vmovq %xmm9, %rax + mov %rax, (%r10) + add $8, %r10 + sub $8, %r11 + vpsrldq $8, %xmm9, %xmm9 + cmp $0, %r11 + je _return_T_done\@ +_T_4\@: + vmovd %xmm9, %eax + mov %eax, (%r10) + add $4, %r10 + sub $4, %r11 + vpsrldq $4, %xmm9, %xmm9 + cmp $0, %r11 + je _return_T_done\@ +_T_123\@: + vmovd %xmm9, %eax + cmp $2, %r11 + jl _T_1\@ + mov %ax, (%r10) + cmp $2, %r11 + je _return_T_done\@ + add $2, %r10 + sar $16, %eax +_T_1\@: + mov %al, (%r10) + jmp _return_T_done\@ - vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap -.else - vpaddd ONEf(%rip), \CTR, \XMM1 # INCR CNT - vpaddd ONEf(%rip), \XMM1, \XMM2 - vpaddd ONEf(%rip), \XMM2, \XMM3 - vpaddd ONEf(%rip), \XMM3, \XMM4 - vpaddd ONEf(%rip), \XMM4, \XMM5 - vpaddd ONEf(%rip), \XMM5, \XMM6 - vpaddd ONEf(%rip), \XMM6, \XMM7 - vpaddd ONEf(%rip), \XMM7, \XMM8 - vmovdqa \XMM8, \CTR -.endif +_T_16\@: + vmovdqu %xmm9, (%r10) +_return_T_done\@: +.endm - ####################################################################### - - vmovdqu (arg1), \T1 - vpxor \T1, \XMM1, \XMM1 - vpxor \T1, \XMM2, \XMM2 - vpxor \T1, \XMM3, \XMM3 - vpxor \T1, \XMM4, \XMM4 - vpxor \T1, \XMM5, \XMM5 - vpxor \T1, \XMM6, \XMM6 - vpxor \T1, \XMM7, \XMM7 - vpxor \T1, \XMM8, \XMM8 - - ####################################################################### - +.macro CALC_AAD_HASH GHASH_MUL AAD AADLEN T1 T2 T3 T4 T5 T6 T7 T8 + mov \AAD, %r10 # r10 = AAD + mov \AADLEN, 
%r12 # r12 = aadLen + mov %r12, %r11 - vmovdqu 16*1(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 + vpxor \T8, \T8, \T8 + vpxor \T7, \T7, \T7 + cmp $16, %r11 + jl _get_AAD_rest8\@ +_get_AAD_blocks\@: + vmovdqu (%r10), \T7 + vpshufb SHUF_MASK(%rip), \T7, \T7 + vpxor \T7, \T8, \T8 + \GHASH_MUL \T8, \T2, \T1, \T3, \T4, \T5, \T6 + add $16, %r10 + sub $16, %r12 + sub $16, %r11 + cmp $16, %r11 + jge _get_AAD_blocks\@ + vmovdqu \T8, \T7 + cmp $0, %r11 + je _get_AAD_done\@ - vmovdqu 16*2(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 + vpxor \T7, \T7, \T7 + /* read the last <16B of AAD. since we have at least 4B of + data right after the AAD (the ICV, and maybe some CT), we can + read 4B/8B blocks safely, and then get rid of the extra stuff */ +_get_AAD_rest8\@: + cmp $4, %r11 + jle _get_AAD_rest4\@ + movq (%r10), \T1 + add $8, %r10 + sub $8, %r11 + vpslldq $8, \T1, \T1 + vpsrldq $8, \T7, \T7 + vpxor \T1, \T7, \T7 + jmp _get_AAD_rest8\@ +_get_AAD_rest4\@: + cmp $0, %r11 + jle _get_AAD_rest0\@ + mov (%r10), %eax + movq %rax, \T1 + add $4, %r10 + sub $4, %r11 + vpslldq $12, \T1, \T1 + vpsrldq $4, \T7, \T7 + vpxor \T1, \T7, \T7 +_get_AAD_rest0\@: + /* finalize: shift out the extra bytes we read, and align + left. since pslldq can only shift by an immediate, we use + vpshufb and an array of shuffle masks */ + movq %r12, %r11 + salq $4, %r11 + vmovdqu aad_shift_arr(%r11), \T1 + vpshufb \T1, \T7, \T7 +_get_AAD_rest_final\@: + vpshufb SHUF_MASK(%rip), \T7, \T7 + vpxor \T8, \T7, \T7 + \GHASH_MUL \T7, \T2, \T1, \T3, \T4, \T5, \T6 - ####################################################################### +_get_AAD_done\@: + vmovdqu \T7, AadHash(arg2) +.endm - vmovdqa HashKey_8(arg1), \T5 - vpclmulqdq $0x11, \T5, \T2, \T4 # T4 = a1*b1 - vpclmulqdq $0x00, \T5, \T2, \T7 # T7 = a0*b0 +.macro INIT GHASH_MUL PRECOMPUTE + mov arg6, %r11 + mov %r11, AadLen(arg2) # ctx_data.aad_length = aad_length + xor %r11d, %r11d + mov %r11, InLen(arg2) # ctx_data.in_length = 0 - vpshufd $0b01001110, \T2, \T6 - vpxor \T2, \T6, \T6 + mov %r11, PBlockLen(arg2) # ctx_data.partial_block_length = 0 + mov %r11, PBlockEncKey(arg2) # ctx_data.partial_block_enc_key = 0 + mov arg3, %rax + movdqu (%rax), %xmm0 + movdqu %xmm0, OrigIV(arg2) # ctx_data.orig_IV = iv - vmovdqa HashKey_8_k(arg1), \T5 - vpclmulqdq $0x00, \T5, \T6, \T6 + vpshufb SHUF_MASK(%rip), %xmm0, %xmm0 + movdqu %xmm0, CurCount(arg2) # ctx_data.current_counter = iv - vmovdqu 16*3(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - vmovdqa TMP2(%rsp), \T1 - vmovdqa HashKey_7(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpshufd $0b01001110, \T1, \T3 - vpxor \T1, \T3, \T3 - vmovdqa HashKey_7_k(arg1), \T5 - vpclmulqdq $0x10, \T5, \T3, \T3 - vpxor \T3, \T6, \T6 - - vmovdqu 16*4(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, 
\XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 + vmovdqu (arg4), %xmm6 # xmm6 = HashKey + vpshufb SHUF_MASK(%rip), %xmm6, %xmm6 + ############### PRECOMPUTATION of HashKey<<1 mod poly from the HashKey + vmovdqa %xmm6, %xmm2 + vpsllq $1, %xmm6, %xmm6 + vpsrlq $63, %xmm2, %xmm2 + vmovdqa %xmm2, %xmm1 + vpslldq $8, %xmm2, %xmm2 + vpsrldq $8, %xmm1, %xmm1 + vpor %xmm2, %xmm6, %xmm6 + #reduction + vpshufd $0b00100100, %xmm1, %xmm2 + vpcmpeqd TWOONE(%rip), %xmm2, %xmm2 + vpand POLY(%rip), %xmm2, %xmm2 + vpxor %xmm2, %xmm6, %xmm6 # xmm6 holds the HashKey<<1 mod poly ####################################################################### + vmovdqu %xmm6, HashKey(arg2) # store HashKey<<1 mod poly - vmovdqa TMP3(%rsp), \T1 - vmovdqa HashKey_6(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpshufd $0b01001110, \T1, \T3 - vpxor \T1, \T3, \T3 - vmovdqa HashKey_6_k(arg1), \T5 - vpclmulqdq $0x10, \T5, \T3, \T3 - vpxor \T3, \T6, \T6 - - vmovdqu 16*5(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - vmovdqa TMP4(%rsp), \T1 - vmovdqa HashKey_5(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpshufd $0b01001110, \T1, \T3 - vpxor \T1, \T3, \T3 - vmovdqa HashKey_5_k(arg1), \T5 - vpclmulqdq $0x10, \T5, \T3, \T3 - vpxor \T3, \T6, \T6 - - vmovdqu 16*6(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - - vmovdqa TMP5(%rsp), \T1 - vmovdqa HashKey_4(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpshufd $0b01001110, \T1, \T3 - vpxor \T1, \T3, \T3 - vmovdqa HashKey_4_k(arg1), \T5 - vpclmulqdq $0x10, \T5, \T3, \T3 - vpxor \T3, \T6, \T6 + CALC_AAD_HASH \GHASH_MUL, arg5, arg6, %xmm2, %xmm6, %xmm3, %xmm4, %xmm5, %xmm7, %xmm1, %xmm0 - vmovdqu 16*7(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - vmovdqa TMP6(%rsp), \T1 - vmovdqa HashKey_3(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpshufd $0b01001110, \T1, \T3 - vpxor \T1, \T3, \T3 - vmovdqa HashKey_3_k(arg1), \T5 - vpclmulqdq $0x10, \T5, \T3, \T3 - vpxor \T3, \T6, \T6 + \PRECOMPUTE %xmm6, %xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5 +.endm - vmovdqu 16*8(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 +# Reads DLEN bytes starting at DPTR and stores in XMMDst +# where 0 < DLEN < 16 +# Clobbers %rax, DLEN +.macro READ_PARTIAL_BLOCK DPTR DLEN XMMDst + vpxor \XMMDst, \XMMDst, \XMMDst + + cmp $8, \DLEN + jl _read_lt8_\@ + mov (\DPTR), %rax + vpinsrq $0, %rax, \XMMDst, \XMMDst + sub $8, \DLEN + jz _done_read_partial_block_\@ + xor %eax, %eax +_read_next_byte_\@: + shl $8, %rax + mov 7(\DPTR, \DLEN, 1), %al + 
dec \DLEN + jnz _read_next_byte_\@ + vpinsrq $1, %rax, \XMMDst, \XMMDst + jmp _done_read_partial_block_\@ +_read_lt8_\@: + xor %eax, %eax +_read_next_byte_lt8_\@: + shl $8, %rax + mov -1(\DPTR, \DLEN, 1), %al + dec \DLEN + jnz _read_next_byte_lt8_\@ + vpinsrq $0, %rax, \XMMDst, \XMMDst +_done_read_partial_block_\@: +.endm - vmovdqa TMP7(%rsp), \T1 - vmovdqa HashKey_2(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 +# PARTIAL_BLOCK: Handles encryption/decryption and the tag partial blocks +# between update calls. +# Requires the input data be at least 1 byte long due to READ_PARTIAL_BLOCK +# Outputs encrypted bytes, and updates hash and partial info in gcm_data_context +# Clobbers rax, r10, r12, r13, xmm0-6, xmm9-13 +.macro PARTIAL_BLOCK GHASH_MUL CYPH_PLAIN_OUT PLAIN_CYPH_IN PLAIN_CYPH_LEN DATA_OFFSET \ + AAD_HASH ENC_DEC + mov PBlockLen(arg2), %r13 + cmp $0, %r13 + je _partial_block_done_\@ # Leave Macro if no partial blocks + # Read in input data without over reading + cmp $16, \PLAIN_CYPH_LEN + jl _fewer_than_16_bytes_\@ + vmovdqu (\PLAIN_CYPH_IN), %xmm1 # If more than 16 bytes, just fill xmm + jmp _data_read_\@ + +_fewer_than_16_bytes_\@: + lea (\PLAIN_CYPH_IN, \DATA_OFFSET, 1), %r10 + mov \PLAIN_CYPH_LEN, %r12 + READ_PARTIAL_BLOCK %r10 %r12 %xmm1 + + mov PBlockLen(arg2), %r13 + +_data_read_\@: # Finished reading in data + + vmovdqu PBlockEncKey(arg2), %xmm9 + vmovdqu HashKey(arg2), %xmm13 + + lea SHIFT_MASK(%rip), %r12 + + # adjust the shuffle mask pointer to be able to shift r13 bytes + # r16-r13 is the number of bytes in plaintext mod 16) + add %r13, %r12 + vmovdqu (%r12), %xmm2 # get the appropriate shuffle mask + vpshufb %xmm2, %xmm9, %xmm9 # shift right r13 bytes + +.if \ENC_DEC == DEC + vmovdqa %xmm1, %xmm3 + pxor %xmm1, %xmm9 # Cyphertext XOR E(K, Yn) + + mov \PLAIN_CYPH_LEN, %r10 + add %r13, %r10 + # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling + sub $16, %r10 + # Determine if if partial block is not being filled and + # shift mask accordingly + jge _no_extra_mask_1_\@ + sub %r10, %r12 +_no_extra_mask_1_\@: + + vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 + # get the appropriate mask to mask out bottom r13 bytes of xmm9 + vpand %xmm1, %xmm9, %xmm9 # mask out bottom r13 bytes of xmm9 + + vpand %xmm1, %xmm3, %xmm3 + vmovdqa SHUF_MASK(%rip), %xmm10 + vpshufb %xmm10, %xmm3, %xmm3 + vpshufb %xmm2, %xmm3, %xmm3 + vpxor %xmm3, \AAD_HASH, \AAD_HASH + + cmp $0, %r10 + jl _partial_incomplete_1_\@ + + # GHASH computation for the last <16 Byte block + \GHASH_MUL \AAD_HASH, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 + xor %eax,%eax + + mov %rax, PBlockLen(arg2) + jmp _dec_done_\@ +_partial_incomplete_1_\@: + add \PLAIN_CYPH_LEN, PBlockLen(arg2) +_dec_done_\@: + vmovdqu \AAD_HASH, AadHash(arg2) +.else + vpxor %xmm1, %xmm9, %xmm9 # Plaintext XOR E(K, Yn) + + mov \PLAIN_CYPH_LEN, %r10 + add %r13, %r10 + # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling + sub $16, %r10 + # Determine if if partial block is not being filled and + # shift mask accordingly + jge _no_extra_mask_2_\@ + sub %r10, %r12 +_no_extra_mask_2_\@: + + vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 + # get the appropriate mask to mask out bottom r13 bytes of xmm9 + vpand %xmm1, %xmm9, %xmm9 + + vmovdqa SHUF_MASK(%rip), %xmm1 + vpshufb %xmm1, %xmm9, %xmm9 + vpshufb %xmm2, %xmm9, %xmm9 + vpxor %xmm9, \AAD_HASH, \AAD_HASH + + cmp $0, %r10 + jl _partial_incomplete_2_\@ + + # GHASH computation for the last <16 Byte block + 
\GHASH_MUL \AAD_HASH, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 + xor %eax,%eax + + mov %rax, PBlockLen(arg2) + jmp _encode_done_\@ +_partial_incomplete_2_\@: + add \PLAIN_CYPH_LEN, PBlockLen(arg2) +_encode_done_\@: + vmovdqu \AAD_HASH, AadHash(arg2) + + vmovdqa SHUF_MASK(%rip), %xmm10 + # shuffle xmm9 back to output as ciphertext + vpshufb %xmm10, %xmm9, %xmm9 + vpshufb %xmm2, %xmm9, %xmm9 +.endif + # output encrypted Bytes + cmp $0, %r10 + jl _partial_fill_\@ + mov %r13, %r12 + mov $16, %r13 + # Set r13 to be the number of bytes to write out + sub %r12, %r13 + jmp _count_set_\@ +_partial_fill_\@: + mov \PLAIN_CYPH_LEN, %r13 +_count_set_\@: + vmovdqa %xmm9, %xmm0 + vmovq %xmm0, %rax + cmp $8, %r13 + jle _less_than_8_bytes_left_\@ + + mov %rax, (\CYPH_PLAIN_OUT, \DATA_OFFSET, 1) + add $8, \DATA_OFFSET + psrldq $8, %xmm0 + vmovq %xmm0, %rax + sub $8, %r13 +_less_than_8_bytes_left_\@: + movb %al, (\CYPH_PLAIN_OUT, \DATA_OFFSET, 1) + add $1, \DATA_OFFSET + shr $8, %rax + sub $1, %r13 + jne _less_than_8_bytes_left_\@ +_partial_block_done_\@: +.endm # PARTIAL_BLOCK - vpshufd $0b01001110, \T1, \T3 - vpxor \T1, \T3, \T3 - vmovdqa HashKey_2_k(arg1), \T5 - vpclmulqdq $0x10, \T5, \T3, \T3 - vpxor \T3, \T6, \T6 +#ifdef CONFIG_AS_AVX +############################################################################### +# GHASH_MUL MACRO to implement: Data*HashKey mod (128,127,126,121,0) +# Input: A and B (128-bits each, bit-reflected) +# Output: C = A*B*x mod poly, (i.e. >>1 ) +# To compute GH = GH*HashKey mod poly, give HK = HashKey<<1 mod poly as input +# GH = GH * HK * x mod poly which is equivalent to GH*HashKey mod poly. +############################################################################### +.macro GHASH_MUL_AVX GH HK T1 T2 T3 T4 T5 - ####################################################################### + vpshufd $0b01001110, \GH, \T2 + vpshufd $0b01001110, \HK, \T3 + vpxor \GH , \T2, \T2 # T2 = (a1+a0) + vpxor \HK , \T3, \T3 # T3 = (b1+b0) - vmovdqu 16*9(arg1), \T5 - vaesenc \T5, \XMM1, \XMM1 - vaesenc \T5, \XMM2, \XMM2 - vaesenc \T5, \XMM3, \XMM3 - vaesenc \T5, \XMM4, \XMM4 - vaesenc \T5, \XMM5, \XMM5 - vaesenc \T5, \XMM6, \XMM6 - vaesenc \T5, \XMM7, \XMM7 - vaesenc \T5, \XMM8, \XMM8 + vpclmulqdq $0x11, \HK, \GH, \T1 # T1 = a1*b1 + vpclmulqdq $0x00, \HK, \GH, \GH # GH = a0*b0 + vpclmulqdq $0x00, \T3, \T2, \T2 # T2 = (a1+a0)*(b1+b0) + vpxor \GH, \T2,\T2 + vpxor \T1, \T2,\T2 # T2 = a0*b1+a1*b0 - vmovdqa TMP8(%rsp), \T1 - vmovdqa HashKey(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 + vpslldq $8, \T2,\T3 # shift-L T3 2 DWs + vpsrldq $8, \T2,\T2 # shift-R T2 2 DWs + vpxor \T3, \GH, \GH + vpxor \T2, \T1, \T1 # = GH x HK - vpshufd $0b01001110, \T1, \T3 - vpxor \T1, \T3, \T3 - vmovdqa HashKey_k(arg1), \T5 - vpclmulqdq $0x10, \T5, \T3, \T3 - vpxor \T3, \T6, \T6 + #first phase of the reduction + vpslld $31, \GH, \T2 # packed right shifting << 31 + vpslld $30, \GH, \T3 # packed right shifting shift << 30 + vpslld $25, \GH, \T4 # packed right shifting shift << 25 - vpxor \T4, \T6, \T6 - vpxor \T7, \T6, \T6 + vpxor \T3, \T2, \T2 # xor the shifted versions + vpxor \T4, \T2, \T2 - vmovdqu 16*10(arg1), \T5 + vpsrldq $4, \T2, \T5 # shift-R T5 1 DW - i = 0 - j = 1 - setreg -.rep 8 - vpxor 16*i(arg3, %r11), \T5, \T2 - .if \ENC_DEC == ENC - vaesenclast \T2, reg_j, reg_j - .else - vaesenclast \T2, reg_j, \T3 - vmovdqu 16*i(arg3, %r11), reg_j - vmovdqu \T3, 16*i(arg2, %r11) - .endif - i = (i+1) - j = (j+1) - setreg -.endr - 
####################################################################### + vpslldq $12, \T2, \T2 # shift-L T2 3 DWs + vpxor \T2, \GH, \GH # first phase of the reduction complete + #second phase of the reduction - vpslldq $8, \T6, \T3 # shift-L T3 2 DWs - vpsrldq $8, \T6, \T6 # shift-R T2 2 DWs - vpxor \T3, \T7, \T7 - vpxor \T4, \T6, \T6 # accumulate the results in T6:T7 + vpsrld $1,\GH, \T2 # packed left shifting >> 1 + vpsrld $2,\GH, \T3 # packed left shifting >> 2 + vpsrld $7,\GH, \T4 # packed left shifting >> 7 + vpxor \T3, \T2, \T2 # xor the shifted versions + vpxor \T4, \T2, \T2 + vpxor \T5, \T2, \T2 + vpxor \T2, \GH, \GH + vpxor \T1, \GH, \GH # the result is in GH - ####################################################################### - #first phase of the reduction - ####################################################################### - vpslld $31, \T7, \T2 # packed right shifting << 31 - vpslld $30, \T7, \T3 # packed right shifting shift << 30 - vpslld $25, \T7, \T4 # packed right shifting shift << 25 +.endm - vpxor \T3, \T2, \T2 # xor the shifted versions - vpxor \T4, \T2, \T2 +.macro PRECOMPUTE_AVX HK T1 T2 T3 T4 T5 T6 - vpsrldq $4, \T2, \T1 # shift-R T1 1 DW + # Haskey_i_k holds XORed values of the low and high parts of the Haskey_i + vmovdqa \HK, \T5 - vpslldq $12, \T2, \T2 # shift-L T2 3 DWs - vpxor \T2, \T7, \T7 # first phase of the reduction complete - ####################################################################### - .if \ENC_DEC == ENC - vmovdqu \XMM1, 16*0(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM2, 16*1(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM3, 16*2(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM4, 16*3(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM5, 16*4(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM6, 16*5(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM7, 16*6(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM8, 16*7(arg2,%r11) # Write to the Ciphertext buffer - .endif + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_k(arg2) - ####################################################################### - #second phase of the reduction - vpsrld $1, \T7, \T2 # packed left shifting >> 1 - vpsrld $2, \T7, \T3 # packed left shifting >> 2 - vpsrld $7, \T7, \T4 # packed left shifting >> 7 - vpxor \T3, \T2, \T2 # xor the shifted versions - vpxor \T4, \T2, \T2 + GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^2<<1 mod poly + vmovdqu \T5, HashKey_2(arg2) # [HashKey_2] = HashKey^2<<1 mod poly + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_2_k(arg2) - vpxor \T1, \T2, \T2 - vpxor \T2, \T7, \T7 - vpxor \T7, \T6, \T6 # the result is in T6 - ####################################################################### + GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^3<<1 mod poly + vmovdqu \T5, HashKey_3(arg2) + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_3_k(arg2) - vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap + 
GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^4<<1 mod poly + vmovdqu \T5, HashKey_4(arg2) + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_4_k(arg2) + GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^5<<1 mod poly + vmovdqu \T5, HashKey_5(arg2) + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_5_k(arg2) - vpxor \T6, \XMM1, \XMM1 + GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^6<<1 mod poly + vmovdqu \T5, HashKey_6(arg2) + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_6_k(arg2) + GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^7<<1 mod poly + vmovdqu \T5, HashKey_7(arg2) + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_7_k(arg2) + GHASH_MUL_AVX \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^8<<1 mod poly + vmovdqu \T5, HashKey_8(arg2) + vpshufd $0b01001110, \T5, \T1 + vpxor \T5, \T1, \T1 + vmovdqu \T1, HashKey_8_k(arg2) .endm +## if a = number of total plaintext bytes +## b = floor(a/16) +## num_initial_blocks = b mod 4# +## encrypt the initial num_initial_blocks blocks and apply ghash on the ciphertext +## r10, r11, r12, rax are clobbered +## arg1, arg3, arg4, r14 are used as a pointer only, not modified -# GHASH the last 4 ciphertext blocks. -.macro GHASH_LAST_8_AVX T1 T2 T3 T4 T5 T6 T7 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 +.macro INITIAL_BLOCKS_AVX REP num_initial_blocks T1 T2 T3 T4 T5 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T6 T_key ENC_DEC + i = (8-\num_initial_blocks) + setreg + vmovdqu AadHash(arg2), reg_i - ## Karatsuba Method + # start AES for num_initial_blocks blocks + vmovdqu CurCount(arg2), \CTR + i = (9-\num_initial_blocks) + setreg +.rep \num_initial_blocks + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, reg_i + vpshufb SHUF_MASK(%rip), reg_i, reg_i # perform a 16Byte swap + i = (i+1) + setreg +.endr - vpshufd $0b01001110, \XMM1, \T2 - vpxor \XMM1, \T2, \T2 - vmovdqa HashKey_8(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM1, \T6 - vpclmulqdq $0x00, \T5, \XMM1, \T7 + vmovdqa (arg1), \T_key + i = (9-\num_initial_blocks) + setreg +.rep \num_initial_blocks + vpxor \T_key, reg_i, reg_i + i = (i+1) + setreg +.endr - vmovdqa HashKey_8_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \XMM1 + j = 1 + setreg +.rep \REP + vmovdqa 16*j(arg1), \T_key + i = (9-\num_initial_blocks) + setreg +.rep \num_initial_blocks + vaesenc \T_key, reg_i, reg_i + i = (i+1) + setreg +.endr - ###################### + j = (j+1) + setreg +.endr - vpshufd $0b01001110, \XMM2, \T2 - vpxor \XMM2, \T2, \T2 - vmovdqa HashKey_7(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM2, \T4 - vpxor \T4, \T6, \T6 + vmovdqa 16*j(arg1), \T_key + i = (9-\num_initial_blocks) + setreg +.rep \num_initial_blocks + vaesenclast \T_key, reg_i, reg_i + i = (i+1) + setreg +.endr - vpclmulqdq $0x00, \T5, \XMM2, \T4 - vpxor \T4, \T7, \T7 + i = (9-\num_initial_blocks) + setreg +.rep \num_initial_blocks + vmovdqu (arg4, %r11), \T1 + vpxor \T1, reg_i, reg_i + vmovdqu reg_i, (arg3 , %r11) # write back ciphertext for num_initial_blocks blocks + add $16, %r11 +.if \ENC_DEC == DEC + vmovdqa \T1, reg_i +.endif + vpshufb SHUF_MASK(%rip), reg_i, reg_i # prepare ciphertext for GHASH computations + i = (i+1) + setreg +.endr - vmovdqa HashKey_7_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \T2 - vpxor \T2, \XMM1, \XMM1 - ###################### + i = (8-\num_initial_blocks) + j = (9-\num_initial_blocks) + setreg - vpshufd $0b01001110, \XMM3, \T2 - vpxor \XMM3, \T2, \T2 - vmovdqa 
HashKey_6(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM3, \T4 - vpxor \T4, \T6, \T6 +.rep \num_initial_blocks + vpxor reg_i, reg_j, reg_j + GHASH_MUL_AVX reg_j, \T2, \T1, \T3, \T4, \T5, \T6 # apply GHASH on num_initial_blocks blocks + i = (i+1) + j = (j+1) + setreg +.endr + # XMM8 has the combined result here - vpclmulqdq $0x00, \T5, \XMM3, \T4 - vpxor \T4, \T7, \T7 + vmovdqa \XMM8, TMP1(%rsp) + vmovdqa \XMM8, \T3 - vmovdqa HashKey_6_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \T2 - vpxor \T2, \XMM1, \XMM1 + cmp $128, %r13 + jl _initial_blocks_done\@ # no need for precomputed constants - ###################### +############################################################################### +# Haskey_i_k holds XORed values of the low and high parts of the Haskey_i + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM1 + vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap - vpshufd $0b01001110, \XMM4, \T2 - vpxor \XMM4, \T2, \T2 - vmovdqa HashKey_5(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM4, \T4 - vpxor \T4, \T6, \T6 + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM2 + vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap - vpclmulqdq $0x00, \T5, \XMM4, \T4 - vpxor \T4, \T7, \T7 + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM3 + vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap - vmovdqa HashKey_5_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \T2 - vpxor \T2, \XMM1, \XMM1 + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM4 + vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap - ###################### + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM5 + vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap - vpshufd $0b01001110, \XMM5, \T2 - vpxor \XMM5, \T2, \T2 - vmovdqa HashKey_4(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM5, \T4 - vpxor \T4, \T6, \T6 + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM6 + vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap - vpclmulqdq $0x00, \T5, \XMM5, \T4 - vpxor \T4, \T7, \T7 + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM7 + vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap - vmovdqa HashKey_4_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \T2 - vpxor \T2, \XMM1, \XMM1 + vpaddd ONE(%rip), \CTR, \CTR # INCR Y0 + vmovdqa \CTR, \XMM8 + vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap - ###################### + vmovdqa (arg1), \T_key + vpxor \T_key, \XMM1, \XMM1 + vpxor \T_key, \XMM2, \XMM2 + vpxor \T_key, \XMM3, \XMM3 + vpxor \T_key, \XMM4, \XMM4 + vpxor \T_key, \XMM5, \XMM5 + vpxor \T_key, \XMM6, \XMM6 + vpxor \T_key, \XMM7, \XMM7 + vpxor \T_key, \XMM8, \XMM8 - vpshufd $0b01001110, \XMM6, \T2 - vpxor \XMM6, \T2, \T2 - vmovdqa HashKey_3(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM6, \T4 - vpxor \T4, \T6, \T6 + i = 1 + setreg +.rep \REP # do REP rounds + vmovdqa 16*i(arg1), \T_key + vaesenc \T_key, \XMM1, \XMM1 + vaesenc \T_key, \XMM2, \XMM2 + vaesenc \T_key, \XMM3, \XMM3 + vaesenc \T_key, \XMM4, \XMM4 + vaesenc \T_key, \XMM5, \XMM5 + vaesenc \T_key, \XMM6, \XMM6 + vaesenc \T_key, \XMM7, \XMM7 + vaesenc \T_key, \XMM8, \XMM8 + i = (i+1) + setreg +.endr - vpclmulqdq $0x00, \T5, \XMM6, \T4 - vpxor \T4, \T7, \T7 + vmovdqa 16*i(arg1), \T_key + vaesenclast \T_key, \XMM1, \XMM1 + vaesenclast \T_key, \XMM2, \XMM2 + vaesenclast \T_key, \XMM3, \XMM3 + vaesenclast \T_key, \XMM4, \XMM4 + vaesenclast \T_key, \XMM5, \XMM5 + vaesenclast \T_key, \XMM6, \XMM6 + vaesenclast \T_key, \XMM7, \XMM7 + vaesenclast \T_key, \XMM8, \XMM8 - 
vmovdqa HashKey_3_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \T2 - vpxor \T2, \XMM1, \XMM1 + vmovdqu (arg4, %r11), \T1 + vpxor \T1, \XMM1, \XMM1 + vmovdqu \XMM1, (arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM1 + .endif - ###################### + vmovdqu 16*1(arg4, %r11), \T1 + vpxor \T1, \XMM2, \XMM2 + vmovdqu \XMM2, 16*1(arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM2 + .endif - vpshufd $0b01001110, \XMM7, \T2 - vpxor \XMM7, \T2, \T2 - vmovdqa HashKey_2(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM7, \T4 - vpxor \T4, \T6, \T6 + vmovdqu 16*2(arg4, %r11), \T1 + vpxor \T1, \XMM3, \XMM3 + vmovdqu \XMM3, 16*2(arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM3 + .endif - vpclmulqdq $0x00, \T5, \XMM7, \T4 - vpxor \T4, \T7, \T7 + vmovdqu 16*3(arg4, %r11), \T1 + vpxor \T1, \XMM4, \XMM4 + vmovdqu \XMM4, 16*3(arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM4 + .endif - vmovdqa HashKey_2_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \T2 - vpxor \T2, \XMM1, \XMM1 + vmovdqu 16*4(arg4, %r11), \T1 + vpxor \T1, \XMM5, \XMM5 + vmovdqu \XMM5, 16*4(arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM5 + .endif - ###################### + vmovdqu 16*5(arg4, %r11), \T1 + vpxor \T1, \XMM6, \XMM6 + vmovdqu \XMM6, 16*5(arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM6 + .endif - vpshufd $0b01001110, \XMM8, \T2 - vpxor \XMM8, \T2, \T2 - vmovdqa HashKey(arg1), \T5 - vpclmulqdq $0x11, \T5, \XMM8, \T4 - vpxor \T4, \T6, \T6 + vmovdqu 16*6(arg4, %r11), \T1 + vpxor \T1, \XMM7, \XMM7 + vmovdqu \XMM7, 16*6(arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM7 + .endif + + vmovdqu 16*7(arg4, %r11), \T1 + vpxor \T1, \XMM8, \XMM8 + vmovdqu \XMM8, 16*7(arg3 , %r11) + .if \ENC_DEC == DEC + vmovdqa \T1, \XMM8 + .endif + + add $128, %r11 + + vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap + vpxor TMP1(%rsp), \XMM1, \XMM1 # combine GHASHed value with the corresponding ciphertext + vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap - vpclmulqdq $0x00, \T5, \XMM8, \T4 - vpxor \T4, \T7, \T7 +############################################################################### - vmovdqa HashKey_k(arg1), \T3 - vpclmulqdq $0x00, \T3, \T2, \T2 +_initial_blocks_done\@: - vpxor \T2, \XMM1, \XMM1 - vpxor \T6, \XMM1, \XMM1 - vpxor \T7, \XMM1, \T2 +.endm +# encrypt 8 blocks at a time +# ghash the 8 previously encrypted ciphertext blocks +# arg1, arg3, arg4 are used as pointers only, not modified +# r11 is the data offset value +.macro GHASH_8_ENCRYPT_8_PARALLEL_AVX REP T1 T2 T3 T4 T5 T6 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T7 loop_idx ENC_DEC + vmovdqa \XMM1, \T2 + vmovdqa \XMM2, TMP2(%rsp) + vmovdqa \XMM3, TMP3(%rsp) + vmovdqa \XMM4, TMP4(%rsp) + vmovdqa \XMM5, TMP5(%rsp) + vmovdqa \XMM6, TMP6(%rsp) + vmovdqa \XMM7, TMP7(%rsp) + vmovdqa \XMM8, TMP8(%rsp) +.if \loop_idx == in_order + vpaddd ONE(%rip), \CTR, \XMM1 # INCR CNT + vpaddd ONE(%rip), \XMM1, \XMM2 + vpaddd ONE(%rip), \XMM2, \XMM3 + vpaddd ONE(%rip), \XMM3, \XMM4 + vpaddd ONE(%rip), \XMM4, \XMM5 + vpaddd ONE(%rip), \XMM5, \XMM6 + vpaddd ONE(%rip), \XMM6, \XMM7 + vpaddd ONE(%rip), \XMM7, \XMM8 + vmovdqa \XMM8, \CTR - vpslldq $8, \T2, \T4 - vpsrldq $8, 
\T2, \T2 + vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap +.else + vpaddd ONEf(%rip), \CTR, \XMM1 # INCR CNT + vpaddd ONEf(%rip), \XMM1, \XMM2 + vpaddd ONEf(%rip), \XMM2, \XMM3 + vpaddd ONEf(%rip), \XMM3, \XMM4 + vpaddd ONEf(%rip), \XMM4, \XMM5 + vpaddd ONEf(%rip), \XMM5, \XMM6 + vpaddd ONEf(%rip), \XMM6, \XMM7 + vpaddd ONEf(%rip), \XMM7, \XMM8 + vmovdqa \XMM8, \CTR +.endif - vpxor \T4, \T7, \T7 - vpxor \T2, \T6, \T6 # holds the result of - # the accumulated carry-less multiplications ####################################################################### - #first phase of the reduction - vpslld $31, \T7, \T2 # packed right shifting << 31 - vpslld $30, \T7, \T3 # packed right shifting shift << 30 - vpslld $25, \T7, \T4 # packed right shifting shift << 25 - vpxor \T3, \T2, \T2 # xor the shifted versions - vpxor \T4, \T2, \T2 - - vpsrldq $4, \T2, \T1 # shift-R T1 1 DW + vmovdqu (arg1), \T1 + vpxor \T1, \XMM1, \XMM1 + vpxor \T1, \XMM2, \XMM2 + vpxor \T1, \XMM3, \XMM3 + vpxor \T1, \XMM4, \XMM4 + vpxor \T1, \XMM5, \XMM5 + vpxor \T1, \XMM6, \XMM6 + vpxor \T1, \XMM7, \XMM7 + vpxor \T1, \XMM8, \XMM8 - vpslldq $12, \T2, \T2 # shift-L T2 3 DWs - vpxor \T2, \T7, \T7 # first phase of the reduction complete ####################################################################### - #second phase of the reduction - vpsrld $1, \T7, \T2 # packed left shifting >> 1 - vpsrld $2, \T7, \T3 # packed left shifting >> 2 - vpsrld $7, \T7, \T4 # packed left shifting >> 7 - vpxor \T3, \T2, \T2 # xor the shifted versions - vpxor \T4, \T2, \T2 - vpxor \T1, \T2, \T2 - vpxor \T2, \T7, \T7 - vpxor \T7, \T6, \T6 # the result is in T6 -.endm + vmovdqu 16*1(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 -# combined for GCM encrypt and decrypt functions -# clobbering all xmm registers -# clobbering r10, r11, r12, r13, r14, r15 -.macro GCM_ENC_DEC_AVX ENC_DEC + vmovdqu 16*2(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 - #the number of pushes must equal STACK_OFFSET - push %r12 - push %r13 - push %r14 - push %r15 - mov %rsp, %r14 + ####################################################################### + vmovdqu HashKey_8(arg2), \T5 + vpclmulqdq $0x11, \T5, \T2, \T4 # T4 = a1*b1 + vpclmulqdq $0x00, \T5, \T2, \T7 # T7 = a0*b0 + vpshufd $0b01001110, \T2, \T6 + vpxor \T2, \T6, \T6 + vmovdqu HashKey_8_k(arg2), \T5 + vpclmulqdq $0x00, \T5, \T6, \T6 - sub $VARIABLE_OFFSET, %rsp - and $~63, %rsp # align rsp to 64 bytes + vmovdqu 16*3(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 + vmovdqa TMP2(%rsp), \T1 + vmovdqu 
HashKey_7(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 - vmovdqu HashKey(arg1), %xmm13 # xmm13 = HashKey + vpshufd $0b01001110, \T1, \T3 + vpxor \T1, \T3, \T3 + vmovdqu HashKey_7_k(arg2), \T5 + vpclmulqdq $0x10, \T5, \T3, \T3 + vpxor \T3, \T6, \T6 - mov arg4, %r13 # save the number of bytes of plaintext/ciphertext - and $-16, %r13 # r13 = r13 - (r13 mod 16) + vmovdqu 16*4(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 - mov %r13, %r12 - shr $4, %r12 - and $7, %r12 - jz _initial_num_blocks_is_0\@ + ####################################################################### - cmp $7, %r12 - je _initial_num_blocks_is_7\@ - cmp $6, %r12 - je _initial_num_blocks_is_6\@ - cmp $5, %r12 - je _initial_num_blocks_is_5\@ - cmp $4, %r12 - je _initial_num_blocks_is_4\@ - cmp $3, %r12 - je _initial_num_blocks_is_3\@ - cmp $2, %r12 - je _initial_num_blocks_is_2\@ + vmovdqa TMP3(%rsp), \T1 + vmovdqu HashKey_6(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 - jmp _initial_num_blocks_is_1\@ + vpshufd $0b01001110, \T1, \T3 + vpxor \T1, \T3, \T3 + vmovdqu HashKey_6_k(arg2), \T5 + vpclmulqdq $0x10, \T5, \T3, \T3 + vpxor \T3, \T6, \T6 -_initial_num_blocks_is_7\@: - INITIAL_BLOCKS_AVX 7, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*7, %r13 - jmp _initial_blocks_encrypted\@ + vmovdqu 16*5(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 -_initial_num_blocks_is_6\@: - INITIAL_BLOCKS_AVX 6, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*6, %r13 - jmp _initial_blocks_encrypted\@ + vmovdqa TMP4(%rsp), \T1 + vmovdqu HashKey_5(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 -_initial_num_blocks_is_5\@: - INITIAL_BLOCKS_AVX 5, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*5, %r13 - jmp _initial_blocks_encrypted\@ + vpshufd $0b01001110, \T1, \T3 + vpxor \T1, \T3, \T3 + vmovdqu HashKey_5_k(arg2), \T5 + vpclmulqdq $0x10, \T5, \T3, \T3 + vpxor \T3, \T6, \T6 -_initial_num_blocks_is_4\@: - INITIAL_BLOCKS_AVX 4, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*4, %r13 - jmp _initial_blocks_encrypted\@ + vmovdqu 16*6(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 -_initial_num_blocks_is_3\@: - INITIAL_BLOCKS_AVX 3, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*3, %r13 - jmp _initial_blocks_encrypted\@ -_initial_num_blocks_is_2\@: - INITIAL_BLOCKS_AVX 2, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, 
%xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*2, %r13 - jmp _initial_blocks_encrypted\@ + vmovdqa TMP5(%rsp), \T1 + vmovdqu HashKey_4(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 -_initial_num_blocks_is_1\@: - INITIAL_BLOCKS_AVX 1, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*1, %r13 - jmp _initial_blocks_encrypted\@ + vpshufd $0b01001110, \T1, \T3 + vpxor \T1, \T3, \T3 + vmovdqu HashKey_4_k(arg2), \T5 + vpclmulqdq $0x10, \T5, \T3, \T3 + vpxor \T3, \T6, \T6 -_initial_num_blocks_is_0\@: - INITIAL_BLOCKS_AVX 0, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + vmovdqu 16*7(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 + vmovdqa TMP6(%rsp), \T1 + vmovdqu HashKey_3(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 -_initial_blocks_encrypted\@: - cmp $0, %r13 - je _zero_cipher_left\@ + vpshufd $0b01001110, \T1, \T3 + vpxor \T1, \T3, \T3 + vmovdqu HashKey_3_k(arg2), \T5 + vpclmulqdq $0x10, \T5, \T3, \T3 + vpxor \T3, \T6, \T6 - sub $128, %r13 - je _eight_cipher_left\@ + vmovdqu 16*8(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 + vmovdqa TMP7(%rsp), \T1 + vmovdqu HashKey_2(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 + vpshufd $0b01001110, \T1, \T3 + vpxor \T1, \T3, \T3 + vmovdqu HashKey_2_k(arg2), \T5 + vpclmulqdq $0x10, \T5, \T3, \T3 + vpxor \T3, \T6, \T6 - vmovd %xmm9, %r15d - and $255, %r15d - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 + ####################################################################### + vmovdqu 16*9(arg1), \T5 + vaesenc \T5, \XMM1, \XMM1 + vaesenc \T5, \XMM2, \XMM2 + vaesenc \T5, \XMM3, \XMM3 + vaesenc \T5, \XMM4, \XMM4 + vaesenc \T5, \XMM5, \XMM5 + vaesenc \T5, \XMM6, \XMM6 + vaesenc \T5, \XMM7, \XMM7 + vaesenc \T5, \XMM8, \XMM8 -_encrypt_by_8_new\@: - cmp $(255-8), %r15d - jg _encrypt_by_8\@ + vmovdqa TMP8(%rsp), \T1 + vmovdqu HashKey(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 + vpshufd $0b01001110, \T1, \T3 + vpxor \T1, \T3, \T3 + vmovdqu HashKey_k(arg2), \T5 + vpclmulqdq $0x10, \T5, \T3, \T3 + vpxor \T3, \T6, \T6 + vpxor \T4, \T6, \T6 + vpxor \T7, \T6, \T6 - add $8, %r15b - GHASH_8_ENCRYPT_8_PARALLEL_AVX %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm15, out_order, \ENC_DEC - add $128, %r11 - sub $128, %r13 - jne _encrypt_by_8_new\@ + vmovdqu 16*10(arg1), \T5 - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - jmp _eight_cipher_left\@ + i = 11 + setreg +.rep (\REP-9) + + vaesenc \T5, \XMM1, \XMM1 + vaesenc \T5, \XMM2, \XMM2 + vaesenc \T5, \XMM3, \XMM3 + vaesenc \T5, \XMM4, \XMM4 + vaesenc \T5, \XMM5, \XMM5 + vaesenc \T5, \XMM6, \XMM6 + vaesenc \T5, \XMM7, \XMM7 + vaesenc \T5, \XMM8, \XMM8 + + vmovdqu 16*i(arg1), \T5 + i = i + 1 + setreg +.endr 
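The .rep (\REP-9) block just above is what lets a single macro body serve all three AES key sizes: nine vaesenc rounds are emitted unconditionally, and the remainder comes from the REP argument. A minimal C model of the mapping that the new entry points select with their keysize compares (the helper name is illustrative, not a kernel function):

/* vaesenc rounds before the final vaesenclast, per AES key size.
 * Mirrors the cmp $16/cmp $32 dispatch in the *_update entry points. */
static int gcm_avx_rep_rounds(unsigned int key_bytes)
{
	switch (key_bytes) {
	case 16: return 9;	/* AES-128: 10 rounds total */
	case 24: return 11;	/* AES-192: 12 rounds total */
	case 32: return 13;	/* AES-256: 14 rounds total */
	default: return -1;	/* unsupported key size */
	}
}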
-_encrypt_by_8\@: - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - add $8, %r15b - GHASH_8_ENCRYPT_8_PARALLEL_AVX %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm15, in_order, \ENC_DEC - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - add $128, %r11 - sub $128, %r13 - jne _encrypt_by_8_new\@ + i = 0 + j = 1 + setreg +.rep 8 + vpxor 16*i(arg4, %r11), \T5, \T2 + .if \ENC_DEC == ENC + vaesenclast \T2, reg_j, reg_j + .else + vaesenclast \T2, reg_j, \T3 + vmovdqu 16*i(arg4, %r11), reg_j + vmovdqu \T3, 16*i(arg3, %r11) + .endif + i = (i+1) + j = (j+1) + setreg +.endr + ####################################################################### - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 + vpslldq $8, \T6, \T3 # shift-L T3 2 DWs + vpsrldq $8, \T6, \T6 # shift-R T2 2 DWs + vpxor \T3, \T7, \T7 + vpxor \T4, \T6, \T6 # accumulate the results in T6:T7 -_eight_cipher_left\@: - GHASH_LAST_8_AVX %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8 + ####################################################################### + #first phase of the reduction + ####################################################################### + vpslld $31, \T7, \T2 # packed right shifting << 31 + vpslld $30, \T7, \T3 # packed right shifting shift << 30 + vpslld $25, \T7, \T4 # packed right shifting shift << 25 + vpxor \T3, \T2, \T2 # xor the shifted versions + vpxor \T4, \T2, \T2 -_zero_cipher_left\@: - cmp $16, arg4 - jl _only_less_than_16\@ + vpsrldq $4, \T2, \T1 # shift-R T1 1 DW - mov arg4, %r13 - and $15, %r13 # r13 = (arg4 mod 16) + vpslldq $12, \T2, \T2 # shift-L T2 3 DWs + vpxor \T2, \T7, \T7 # first phase of the reduction complete + ####################################################################### + .if \ENC_DEC == ENC + vmovdqu \XMM1, 16*0(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM2, 16*1(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM3, 16*2(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM4, 16*3(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM5, 16*4(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM6, 16*5(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM7, 16*6(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM8, 16*7(arg3,%r11) # Write to the Ciphertext buffer + .endif - je _multiple_of_16_bytes\@ + ####################################################################### + #second phase of the reduction + vpsrld $1, \T7, \T2 # packed left shifting >> 1 + vpsrld $2, \T7, \T3 # packed left shifting >> 2 + vpsrld $7, \T7, \T4 # packed left shifting >> 7 + vpxor \T3, \T2, \T2 # xor the shifted versions + vpxor \T4, \T2, \T2 - # handle the last <16 Byte block seperately + vpxor \T1, \T2, \T2 + vpxor \T2, \T7, \T7 + vpxor \T7, \T6, \T6 # the result is in T6 + ####################################################################### + vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap - vpaddd ONE(%rip), %xmm9, %xmm9 # INCR CNT to get Yn - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - 
ENCRYPT_SINGLE_BLOCK %xmm9 # E(K, Yn) - sub $16, %r11 - add %r13, %r11 - vmovdqu (arg3, %r11), %xmm1 # receive the last <16 Byte block + vpxor \T6, \XMM1, \XMM1 - lea SHIFT_MASK+16(%rip), %r12 - sub %r13, %r12 # adjust the shuffle mask pointer to be - # able to shift 16-r13 bytes (r13 is the - # number of bytes in plaintext mod 16) - vmovdqu (%r12), %xmm2 # get the appropriate shuffle mask - vpshufb %xmm2, %xmm1, %xmm1 # shift right 16-r13 bytes - jmp _final_ghash_mul\@ -_only_less_than_16\@: - # check for 0 length - mov arg4, %r13 - and $15, %r13 # r13 = (arg4 mod 16) - je _multiple_of_16_bytes\@ +.endm - # handle the last <16 Byte block seperately +# GHASH the last 4 ciphertext blocks. +.macro GHASH_LAST_8_AVX T1 T2 T3 T4 T5 T6 T7 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 - vpaddd ONE(%rip), %xmm9, %xmm9 # INCR CNT to get Yn - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - ENCRYPT_SINGLE_BLOCK %xmm9 # E(K, Yn) + ## Karatsuba Method - lea SHIFT_MASK+16(%rip), %r12 - sub %r13, %r12 # adjust the shuffle mask pointer to be - # able to shift 16-r13 bytes (r13 is the - # number of bytes in plaintext mod 16) + vpshufd $0b01001110, \XMM1, \T2 + vpxor \XMM1, \T2, \T2 + vmovdqu HashKey_8(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM1, \T6 + vpclmulqdq $0x00, \T5, \XMM1, \T7 -_get_last_16_byte_loop\@: - movb (arg3, %r11), %al - movb %al, TMP1 (%rsp , %r11) - add $1, %r11 - cmp %r13, %r11 - jne _get_last_16_byte_loop\@ + vmovdqu HashKey_8_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \XMM1 - vmovdqu TMP1(%rsp), %xmm1 + ###################### - sub $16, %r11 + vpshufd $0b01001110, \XMM2, \T2 + vpxor \XMM2, \T2, \T2 + vmovdqu HashKey_7(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM2, \T4 + vpxor \T4, \T6, \T6 -_final_ghash_mul\@: - .if \ENC_DEC == DEC - vmovdqa %xmm1, %xmm2 - vpxor %xmm1, %xmm9, %xmm9 # Plaintext XOR E(K, Yn) - vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 # get the appropriate mask to - # mask out top 16-r13 bytes of xmm9 - vpand %xmm1, %xmm9, %xmm9 # mask out top 16-r13 bytes of xmm9 - vpand %xmm1, %xmm2, %xmm2 - vpshufb SHUF_MASK(%rip), %xmm2, %xmm2 - vpxor %xmm2, %xmm14, %xmm14 - #GHASH computation for the last <16 Byte block - GHASH_MUL_AVX %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 - sub %r13, %r11 - add $16, %r11 - .else - vpxor %xmm1, %xmm9, %xmm9 # Plaintext XOR E(K, Yn) - vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 # get the appropriate mask to - # mask out top 16-r13 bytes of xmm9 - vpand %xmm1, %xmm9, %xmm9 # mask out top 16-r13 bytes of xmm9 - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - vpxor %xmm9, %xmm14, %xmm14 - #GHASH computation for the last <16 Byte block - GHASH_MUL_AVX %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 - sub %r13, %r11 - add $16, %r11 - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 # shuffle xmm9 back to output as ciphertext - .endif + vpclmulqdq $0x00, \T5, \XMM2, \T4 + vpxor \T4, \T7, \T7 + vmovdqu HashKey_7_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \T2 + vpxor \T2, \XMM1, \XMM1 - ############################# - # output r13 Bytes - vmovq %xmm9, %rax - cmp $8, %r13 - jle _less_than_8_bytes_left\@ + ###################### - mov %rax, (arg2 , %r11) - add $8, %r11 - vpsrldq $8, %xmm9, %xmm9 - vmovq %xmm9, %rax - sub $8, %r13 + vpshufd $0b01001110, \XMM3, \T2 + vpxor \XMM3, \T2, \T2 + vmovdqu HashKey_6(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM3, \T4 + vpxor \T4, \T6, \T6 -_less_than_8_bytes_left\@: - movb %al, (arg2 , %r11) - add $1, %r11 - shr $8, %rax - sub $1, %r13 - jne _less_than_8_bytes_left\@ - ############################# + vpclmulqdq $0x00, \T5, \XMM3, \T4 + vpxor \T4, \T7, 
\T7 -_multiple_of_16_bytes\@: - mov arg7, %r12 # r12 = aadLen (number of bytes) - shl $3, %r12 # convert into number of bits - vmovd %r12d, %xmm15 # len(A) in xmm15 + vmovdqu HashKey_6_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \T2 + vpxor \T2, \XMM1, \XMM1 - shl $3, arg4 # len(C) in bits (*128) - vmovq arg4, %xmm1 - vpslldq $8, %xmm15, %xmm15 # xmm15 = len(A)|| 0x0000000000000000 - vpxor %xmm1, %xmm15, %xmm15 # xmm15 = len(A)||len(C) + ###################### - vpxor %xmm15, %xmm14, %xmm14 - GHASH_MUL_AVX %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 # final GHASH computation - vpshufb SHUF_MASK(%rip), %xmm14, %xmm14 # perform a 16Byte swap + vpshufd $0b01001110, \XMM4, \T2 + vpxor \XMM4, \T2, \T2 + vmovdqu HashKey_5(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM4, \T4 + vpxor \T4, \T6, \T6 - mov arg5, %rax # rax = *Y0 - vmovdqu (%rax), %xmm9 # xmm9 = Y0 + vpclmulqdq $0x00, \T5, \XMM4, \T4 + vpxor \T4, \T7, \T7 - ENCRYPT_SINGLE_BLOCK %xmm9 # E(K, Y0) + vmovdqu HashKey_5_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \T2 + vpxor \T2, \XMM1, \XMM1 - vpxor %xmm14, %xmm9, %xmm9 + ###################### + vpshufd $0b01001110, \XMM5, \T2 + vpxor \XMM5, \T2, \T2 + vmovdqu HashKey_4(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM5, \T4 + vpxor \T4, \T6, \T6 + vpclmulqdq $0x00, \T5, \XMM5, \T4 + vpxor \T4, \T7, \T7 -_return_T\@: - mov arg8, %r10 # r10 = authTag - mov arg9, %r11 # r11 = auth_tag_len + vmovdqu HashKey_4_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \T2 + vpxor \T2, \XMM1, \XMM1 - cmp $16, %r11 - je _T_16\@ + ###################### - cmp $8, %r11 - jl _T_4\@ + vpshufd $0b01001110, \XMM6, \T2 + vpxor \XMM6, \T2, \T2 + vmovdqu HashKey_3(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM6, \T4 + vpxor \T4, \T6, \T6 -_T_8\@: - vmovq %xmm9, %rax - mov %rax, (%r10) - add $8, %r10 - sub $8, %r11 - vpsrldq $8, %xmm9, %xmm9 - cmp $0, %r11 - je _return_T_done\@ -_T_4\@: - vmovd %xmm9, %eax - mov %eax, (%r10) - add $4, %r10 - sub $4, %r11 - vpsrldq $4, %xmm9, %xmm9 - cmp $0, %r11 - je _return_T_done\@ -_T_123\@: - vmovd %xmm9, %eax - cmp $2, %r11 - jl _T_1\@ - mov %ax, (%r10) - cmp $2, %r11 - je _return_T_done\@ - add $2, %r10 - sar $16, %eax -_T_1\@: - mov %al, (%r10) - jmp _return_T_done\@ + vpclmulqdq $0x00, \T5, \XMM6, \T4 + vpxor \T4, \T7, \T7 -_T_16\@: - vmovdqu %xmm9, (%r10) + vmovdqu HashKey_3_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \T2 + vpxor \T2, \XMM1, \XMM1 -_return_T_done\@: - mov %r14, %rsp + ###################### - pop %r15 - pop %r14 - pop %r13 - pop %r12 -.endm + vpshufd $0b01001110, \XMM7, \T2 + vpxor \XMM7, \T2, \T2 + vmovdqu HashKey_2(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM7, \T4 + vpxor \T4, \T6, \T6 + vpclmulqdq $0x00, \T5, \XMM7, \T4 + vpxor \T4, \T7, \T7 -############################################################# -#void aesni_gcm_precomp_avx_gen2 -# (gcm_data *my_ctx_data, -# u8 *hash_subkey)# /* H, the Hash sub key input. Data starts on a 16-byte boundary. 
*/ -############################################################# -ENTRY(aesni_gcm_precomp_avx_gen2) - #the number of pushes must equal STACK_OFFSET - push %r12 - push %r13 - push %r14 - push %r15 + vmovdqu HashKey_2_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \T2 + vpxor \T2, \XMM1, \XMM1 - mov %rsp, %r14 + ###################### + + vpshufd $0b01001110, \XMM8, \T2 + vpxor \XMM8, \T2, \T2 + vmovdqu HashKey(arg2), \T5 + vpclmulqdq $0x11, \T5, \XMM8, \T4 + vpxor \T4, \T6, \T6 + vpclmulqdq $0x00, \T5, \XMM8, \T4 + vpxor \T4, \T7, \T7 + vmovdqu HashKey_k(arg2), \T3 + vpclmulqdq $0x00, \T3, \T2, \T2 - sub $VARIABLE_OFFSET, %rsp - and $~63, %rsp # align rsp to 64 bytes + vpxor \T2, \XMM1, \XMM1 + vpxor \T6, \XMM1, \XMM1 + vpxor \T7, \XMM1, \T2 - vmovdqu (arg2), %xmm6 # xmm6 = HashKey - vpshufb SHUF_MASK(%rip), %xmm6, %xmm6 - ############### PRECOMPUTATION of HashKey<<1 mod poly from the HashKey - vmovdqa %xmm6, %xmm2 - vpsllq $1, %xmm6, %xmm6 - vpsrlq $63, %xmm2, %xmm2 - vmovdqa %xmm2, %xmm1 - vpslldq $8, %xmm2, %xmm2 - vpsrldq $8, %xmm1, %xmm1 - vpor %xmm2, %xmm6, %xmm6 - #reduction - vpshufd $0b00100100, %xmm1, %xmm2 - vpcmpeqd TWOONE(%rip), %xmm2, %xmm2 - vpand POLY(%rip), %xmm2, %xmm2 - vpxor %xmm2, %xmm6, %xmm6 # xmm6 holds the HashKey<<1 mod poly + + + vpslldq $8, \T2, \T4 + vpsrldq $8, \T2, \T2 + + vpxor \T4, \T7, \T7 + vpxor \T2, \T6, \T6 # holds the result of + # the accumulated carry-less multiplications + ####################################################################### - vmovdqa %xmm6, HashKey(arg1) # store HashKey<<1 mod poly + #first phase of the reduction + vpslld $31, \T7, \T2 # packed right shifting << 31 + vpslld $30, \T7, \T3 # packed right shifting shift << 30 + vpslld $25, \T7, \T4 # packed right shifting shift << 25 + vpxor \T3, \T2, \T2 # xor the shifted versions + vpxor \T4, \T2, \T2 - PRECOMPUTE_AVX %xmm6, %xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5 + vpsrldq $4, \T2, \T1 # shift-R T1 1 DW - mov %r14, %rsp + vpslldq $12, \T2, \T2 # shift-L T2 3 DWs + vpxor \T2, \T7, \T7 # first phase of the reduction complete + ####################################################################### - pop %r15 - pop %r14 - pop %r13 - pop %r12 - ret -ENDPROC(aesni_gcm_precomp_avx_gen2) -############################################################################### -#void aesni_gcm_enc_avx_gen2( -# gcm_data *my_ctx_data, /* aligned to 16 Bytes */ -# u8 *out, /* Ciphertext output. Encrypt in-place is allowed. */ -# const u8 *in, /* Plaintext input */ -# u64 plaintext_len, /* Length of data in Bytes for encryption. */ + #second phase of the reduction + vpsrld $1, \T7, \T2 # packed left shifting >> 1 + vpsrld $2, \T7, \T3 # packed left shifting >> 2 + vpsrld $7, \T7, \T4 # packed left shifting >> 7 + vpxor \T3, \T2, \T2 # xor the shifted versions + vpxor \T4, \T2, \T2 + + vpxor \T1, \T2, \T2 + vpxor \T2, \T7, \T7 + vpxor \T7, \T6, \T6 # the result is in T6 + +.endm + +############################################################# +#void aesni_gcm_precomp_avx_gen2 +# (gcm_data *my_ctx_data, +# gcm_context_data *data, +# u8 *hash_subkey# /* H, the Hash sub key input. Data starts on a 16-byte boundary. */ # u8 *iv, /* Pre-counter block j0: 4 byte salt # (from Security Association) concatenated with 8 byte # Initialisation Vector (from IPSec ESP Payload) # concatenated with 0x00000001. 16-byte aligned pointer. */ # const u8 *aad, /* Additional Authentication Data (AAD)*/ -# u64 aad_len, /* Length of AAD in bytes. 
With RFC4106 this is going to be 8 or 12 Bytes */ -# u8 *auth_tag, /* Authenticated Tag output. */ -# u64 auth_tag_len)# /* Authenticated Tag Length in bytes. -# Valid values are 16 (most likely), 12 or 8. */ +# u64 aad_len) /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */ +############################################################# +ENTRY(aesni_gcm_init_avx_gen2) + FUNC_SAVE + INIT GHASH_MUL_AVX, PRECOMPUTE_AVX + FUNC_RESTORE + ret +ENDPROC(aesni_gcm_init_avx_gen2) + ############################################################################### -ENTRY(aesni_gcm_enc_avx_gen2) - GCM_ENC_DEC_AVX ENC - ret -ENDPROC(aesni_gcm_enc_avx_gen2) +#void aesni_gcm_enc_update_avx_gen2( +# gcm_data *my_ctx_data, /* aligned to 16 Bytes */ +# gcm_context_data *data, +# u8 *out, /* Ciphertext output. Encrypt in-place is allowed. */ +# const u8 *in, /* Plaintext input */ +# u64 plaintext_len) /* Length of data in Bytes for encryption. */ +############################################################################### +ENTRY(aesni_gcm_enc_update_avx_gen2) + FUNC_SAVE + mov keysize, %eax + cmp $32, %eax + je key_256_enc_update + cmp $16, %eax + je key_128_enc_update + # must be 192 + GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 11 + FUNC_RESTORE + ret +key_128_enc_update: + GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 9 + FUNC_RESTORE + ret +key_256_enc_update: + GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 13 + FUNC_RESTORE + ret +ENDPROC(aesni_gcm_enc_update_avx_gen2) ############################################################################### -#void aesni_gcm_dec_avx_gen2( +#void aesni_gcm_dec_update_avx_gen2( # gcm_data *my_ctx_data, /* aligned to 16 Bytes */ +# gcm_context_data *data, # u8 *out, /* Plaintext output. Decrypt in-place is allowed. */ # const u8 *in, /* Ciphertext input */ -# u64 plaintext_len, /* Length of data in Bytes for encryption. */ -# u8 *iv, /* Pre-counter block j0: 4 byte salt -# (from Security Association) concatenated with 8 byte -# Initialisation Vector (from IPSec ESP Payload) -# concatenated with 0x00000001. 16-byte aligned pointer. */ -# const u8 *aad, /* Additional Authentication Data (AAD)*/ -# u64 aad_len, /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */ +# u64 plaintext_len) /* Length of data in Bytes for encryption. */ +############################################################################### +ENTRY(aesni_gcm_dec_update_avx_gen2) + FUNC_SAVE + mov keysize,%eax + cmp $32, %eax + je key_256_dec_update + cmp $16, %eax + je key_128_dec_update + # must be 192 + GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 11 + FUNC_RESTORE + ret +key_128_dec_update: + GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 9 + FUNC_RESTORE + ret +key_256_dec_update: + GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 13 + FUNC_RESTORE + ret +ENDPROC(aesni_gcm_dec_update_avx_gen2) + +############################################################################### +#void aesni_gcm_finalize_avx_gen2( +# gcm_data *my_ctx_data, /* aligned to 16 Bytes */ +# gcm_context_data *data, # u8 *auth_tag, /* Authenticated Tag output. */ # u64 auth_tag_len)# /* Authenticated Tag Length in bytes. 
# Valid values are 16 (most likely), 12 or 8. */ ############################################################################### -ENTRY(aesni_gcm_dec_avx_gen2) - GCM_ENC_DEC_AVX DEC - ret -ENDPROC(aesni_gcm_dec_avx_gen2) +ENTRY(aesni_gcm_finalize_avx_gen2) + FUNC_SAVE + mov keysize,%eax + cmp $32, %eax + je key_256_finalize + cmp $16, %eax + je key_128_finalize + # must be 192 + GCM_COMPLETE GHASH_MUL_AVX, 11, arg3, arg4 + FUNC_RESTORE + ret +key_128_finalize: + GCM_COMPLETE GHASH_MUL_AVX, 9, arg3, arg4 + FUNC_RESTORE + ret +key_256_finalize: + GCM_COMPLETE GHASH_MUL_AVX, 13, arg3, arg4 + FUNC_RESTORE + ret +ENDPROC(aesni_gcm_finalize_avx_gen2) + #endif /* CONFIG_AS_AVX */ #ifdef CONFIG_AS_AVX2 @@ -1670,113 +1922,42 @@ ENDPROC(aesni_gcm_dec_avx_gen2) # Haskey_i_k holds XORed values of the low and high parts of the Haskey_i vmovdqa \HK, \T5 GHASH_MUL_AVX2 \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^2<<1 mod poly - vmovdqa \T5, HashKey_2(arg1) # [HashKey_2] = HashKey^2<<1 mod poly + vmovdqu \T5, HashKey_2(arg2) # [HashKey_2] = HashKey^2<<1 mod poly GHASH_MUL_AVX2 \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^3<<1 mod poly - vmovdqa \T5, HashKey_3(arg1) + vmovdqu \T5, HashKey_3(arg2) GHASH_MUL_AVX2 \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^4<<1 mod poly - vmovdqa \T5, HashKey_4(arg1) + vmovdqu \T5, HashKey_4(arg2) GHASH_MUL_AVX2 \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^5<<1 mod poly - vmovdqa \T5, HashKey_5(arg1) + vmovdqu \T5, HashKey_5(arg2) GHASH_MUL_AVX2 \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^6<<1 mod poly - vmovdqa \T5, HashKey_6(arg1) + vmovdqu \T5, HashKey_6(arg2) GHASH_MUL_AVX2 \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^7<<1 mod poly - vmovdqa \T5, HashKey_7(arg1) + vmovdqu \T5, HashKey_7(arg2) GHASH_MUL_AVX2 \T5, \HK, \T1, \T3, \T4, \T6, \T2 # T5 = HashKey^8<<1 mod poly - vmovdqa \T5, HashKey_8(arg1) + vmovdqu \T5, HashKey_8(arg2) .endm - ## if a = number of total plaintext bytes ## b = floor(a/16) ## num_initial_blocks = b mod 4# ## encrypt the initial num_initial_blocks blocks and apply ghash on the ciphertext ## r10, r11, r12, rax are clobbered -## arg1, arg2, arg3, r14 are used as a pointer only, not modified +## arg1, arg3, arg4, r14 are used as a pointer only, not modified -.macro INITIAL_BLOCKS_AVX2 num_initial_blocks T1 T2 T3 T4 T5 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T6 T_key ENC_DEC VER +.macro INITIAL_BLOCKS_AVX2 REP num_initial_blocks T1 T2 T3 T4 T5 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T6 T_key ENC_DEC VER i = (8-\num_initial_blocks) - j = 0 setreg - - mov arg6, %r10 # r10 = AAD - mov arg7, %r12 # r12 = aadLen - - - mov %r12, %r11 - - vpxor reg_j, reg_j, reg_j - vpxor reg_i, reg_i, reg_i - - cmp $16, %r11 - jl _get_AAD_rest8\@ -_get_AAD_blocks\@: - vmovdqu (%r10), reg_i - vpshufb SHUF_MASK(%rip), reg_i, reg_i - vpxor reg_i, reg_j, reg_j - GHASH_MUL_AVX2 reg_j, \T2, \T1, \T3, \T4, \T5, \T6 - add $16, %r10 - sub $16, %r12 - sub $16, %r11 - cmp $16, %r11 - jge _get_AAD_blocks\@ - vmovdqu reg_j, reg_i - cmp $0, %r11 - je _get_AAD_done\@ - - vpxor reg_i, reg_i, reg_i - - /* read the last <16B of AAD. 
since we have at least 4B of - data right after the AAD (the ICV, and maybe some CT), we can - read 4B/8B blocks safely, and then get rid of the extra stuff */ -_get_AAD_rest8\@: - cmp $4, %r11 - jle _get_AAD_rest4\@ - movq (%r10), \T1 - add $8, %r10 - sub $8, %r11 - vpslldq $8, \T1, \T1 - vpsrldq $8, reg_i, reg_i - vpxor \T1, reg_i, reg_i - jmp _get_AAD_rest8\@ -_get_AAD_rest4\@: - cmp $0, %r11 - jle _get_AAD_rest0\@ - mov (%r10), %eax - movq %rax, \T1 - add $4, %r10 - sub $4, %r11 - vpslldq $12, \T1, \T1 - vpsrldq $4, reg_i, reg_i - vpxor \T1, reg_i, reg_i -_get_AAD_rest0\@: - /* finalize: shift out the extra bytes we read, and align - left. since pslldq can only shift by an immediate, we use - vpshufb and an array of shuffle masks */ - movq %r12, %r11 - salq $4, %r11 - movdqu aad_shift_arr(%r11), \T1 - vpshufb \T1, reg_i, reg_i -_get_AAD_rest_final\@: - vpshufb SHUF_MASK(%rip), reg_i, reg_i - vpxor reg_j, reg_i, reg_i - GHASH_MUL_AVX2 reg_i, \T2, \T1, \T3, \T4, \T5, \T6 - -_get_AAD_done\@: - # initialize the data pointer offset as zero - xor %r11d, %r11d + vmovdqu AadHash(arg2), reg_i # start AES for num_initial_blocks blocks - mov arg5, %rax # rax = *Y0 - vmovdqu (%rax), \CTR # CTR = Y0 - vpshufb SHUF_MASK(%rip), \CTR, \CTR - + vmovdqu CurCount(arg2), \CTR i = (9-\num_initial_blocks) setreg @@ -1799,7 +1980,7 @@ _get_AAD_done\@: j = 1 setreg -.rep 9 +.rep \REP vmovdqa 16*j(arg1), \T_key i = (9-\num_initial_blocks) setreg @@ -1814,7 +1995,7 @@ _get_AAD_done\@: .endr - vmovdqa 16*10(arg1), \T_key + vmovdqa 16*j(arg1), \T_key i = (9-\num_initial_blocks) setreg .rep \num_initial_blocks @@ -1826,9 +2007,9 @@ _get_AAD_done\@: i = (9-\num_initial_blocks) setreg .rep \num_initial_blocks - vmovdqu (arg3, %r11), \T1 + vmovdqu (arg4, %r11), \T1 vpxor \T1, reg_i, reg_i - vmovdqu reg_i, (arg2 , %r11) # write back ciphertext for + vmovdqu reg_i, (arg3 , %r11) # write back ciphertext for # num_initial_blocks blocks add $16, %r11 .if \ENC_DEC == DEC @@ -1905,7 +2086,7 @@ _get_AAD_done\@: i = 1 setreg -.rep 9 # do 9 rounds +.rep \REP # do REP rounds vmovdqa 16*i(arg1), \T_key vaesenc \T_key, \XMM1, \XMM1 vaesenc \T_key, \XMM2, \XMM2 @@ -1930,58 +2111,58 @@ _get_AAD_done\@: vaesenclast \T_key, \XMM7, \XMM7 vaesenclast \T_key, \XMM8, \XMM8 - vmovdqu (arg3, %r11), \T1 + vmovdqu (arg4, %r11), \T1 vpxor \T1, \XMM1, \XMM1 - vmovdqu \XMM1, (arg2 , %r11) + vmovdqu \XMM1, (arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM1 .endif - vmovdqu 16*1(arg3, %r11), \T1 + vmovdqu 16*1(arg4, %r11), \T1 vpxor \T1, \XMM2, \XMM2 - vmovdqu \XMM2, 16*1(arg2 , %r11) + vmovdqu \XMM2, 16*1(arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM2 .endif - vmovdqu 16*2(arg3, %r11), \T1 + vmovdqu 16*2(arg4, %r11), \T1 vpxor \T1, \XMM3, \XMM3 - vmovdqu \XMM3, 16*2(arg2 , %r11) + vmovdqu \XMM3, 16*2(arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM3 .endif - vmovdqu 16*3(arg3, %r11), \T1 + vmovdqu 16*3(arg4, %r11), \T1 vpxor \T1, \XMM4, \XMM4 - vmovdqu \XMM4, 16*3(arg2 , %r11) + vmovdqu \XMM4, 16*3(arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM4 .endif - vmovdqu 16*4(arg3, %r11), \T1 + vmovdqu 16*4(arg4, %r11), \T1 vpxor \T1, \XMM5, \XMM5 - vmovdqu \XMM5, 16*4(arg2 , %r11) + vmovdqu \XMM5, 16*4(arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM5 .endif - vmovdqu 16*5(arg3, %r11), \T1 + vmovdqu 16*5(arg4, %r11), \T1 vpxor \T1, \XMM6, \XMM6 - vmovdqu \XMM6, 16*5(arg2 , %r11) + vmovdqu \XMM6, 16*5(arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM6 .endif - vmovdqu 16*6(arg3, %r11), \T1 + vmovdqu 16*6(arg4, %r11), \T1 vpxor \T1, \XMM7, 
\XMM7 - vmovdqu \XMM7, 16*6(arg2 , %r11) + vmovdqu \XMM7, 16*6(arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM7 .endif - vmovdqu 16*7(arg3, %r11), \T1 + vmovdqu 16*7(arg4, %r11), \T1 vpxor \T1, \XMM8, \XMM8 - vmovdqu \XMM8, 16*7(arg2 , %r11) + vmovdqu \XMM8, 16*7(arg3 , %r11) .if \ENC_DEC == DEC vmovdqa \T1, \XMM8 .endif @@ -2010,9 +2191,9 @@ _initial_blocks_done\@: # encrypt 8 blocks at a time # ghash the 8 previously encrypted ciphertext blocks -# arg1, arg2, arg3 are used as pointers only, not modified +# arg1, arg3, arg4 are used as pointers only, not modified # r11 is the data offset value -.macro GHASH_8_ENCRYPT_8_PARALLEL_AVX2 T1 T2 T3 T4 T5 T6 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T7 loop_idx ENC_DEC +.macro GHASH_8_ENCRYPT_8_PARALLEL_AVX2 REP T1 T2 T3 T4 T5 T6 CTR XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 T7 loop_idx ENC_DEC vmovdqa \XMM1, \T2 vmovdqa \XMM2, TMP2(%rsp) @@ -2073,87 +2254,7 @@ _initial_blocks_done\@: - vmovdqu 16*1(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - vmovdqu 16*2(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - - ####################################################################### - - vmovdqa HashKey_8(arg1), \T5 - vpclmulqdq $0x11, \T5, \T2, \T4 # T4 = a1*b1 - vpclmulqdq $0x00, \T5, \T2, \T7 # T7 = a0*b0 - vpclmulqdq $0x01, \T5, \T2, \T6 # T6 = a1*b0 - vpclmulqdq $0x10, \T5, \T2, \T5 # T5 = a0*b1 - vpxor \T5, \T6, \T6 - - vmovdqu 16*3(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - vmovdqa TMP2(%rsp), \T1 - vmovdqa HashKey_7(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpclmulqdq $0x01, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - vpclmulqdq $0x10, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - vmovdqu 16*4(arg1), \T1 - vaesenc \T1, \XMM1, \XMM1 - vaesenc \T1, \XMM2, \XMM2 - vaesenc \T1, \XMM3, \XMM3 - vaesenc \T1, \XMM4, \XMM4 - vaesenc \T1, \XMM5, \XMM5 - vaesenc \T1, \XMM6, \XMM6 - vaesenc \T1, \XMM7, \XMM7 - vaesenc \T1, \XMM8, \XMM8 - - ####################################################################### - - vmovdqa TMP3(%rsp), \T1 - vmovdqa HashKey_6(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpclmulqdq $0x01, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - vpclmulqdq $0x10, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - vmovdqu 16*5(arg1), \T1 + vmovdqu 16*1(arg1), \T1 vaesenc \T1, \XMM1, \XMM1 vaesenc \T1, \XMM2, \XMM2 vaesenc \T1, \XMM3, \XMM3 @@ -2163,21 +2264,7 @@ _initial_blocks_done\@: vaesenc \T1, \XMM7, \XMM7 vaesenc \T1, \XMM8, \XMM8 - vmovdqa TMP4(%rsp), \T1 - vmovdqa HashKey_5(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpclmulqdq $0x01, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - vpclmulqdq $0x10, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - vmovdqu 16*6(arg1), \T1 + vmovdqu 16*2(arg1), \T1 vaesenc \T1, \XMM1, \XMM1 vaesenc \T1, \XMM2, \XMM2 
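Throughout these hunks arg2 has been repurposed: it now points at the per-request gcm_context_data rather than the key schedule, which is why every in/out buffer reference moves to arg3/arg4 and every AadHash/CurCount/PBlockLen/HashKey_N access is rebased onto arg2. The authoritative layout lives in the C glue, not in this file; the sketch below is only a reading aid for the offsets used here, and the field order and sizes are assumptions.

#include <stdint.h>

/* Assumed shape of the context the assembly addresses via AadHash(arg2),
 * CurCount(arg2), PBlockLen(arg2), HashKey_N(arg2) and friends. */
struct gcm_context_data_sketch {
	uint8_t  aad_hash[16];		/* AadHash: running GHASH value */
	uint64_t aad_length;
	uint64_t in_length;
	uint8_t  partial_block_enc_key[16];
	uint8_t  orig_iv[16];
	uint8_t  current_counter[16];	/* CurCount: next counter block */
	uint64_t partial_block_len;	/* PBlockLen: bytes carried over */
	uint64_t unused;
	uint8_t  hash_keys[16 * 16];	/* HashKey^1..^8 plus the _k terms */
};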
vaesenc \T1, \XMM3, \XMM3 @@ -2188,21 +2275,16 @@ _initial_blocks_done\@: vaesenc \T1, \XMM8, \XMM8 - vmovdqa TMP5(%rsp), \T1 - vmovdqa HashKey_4(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpclmulqdq $0x01, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 + ####################################################################### - vpclmulqdq $0x10, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 + vmovdqu HashKey_8(arg2), \T5 + vpclmulqdq $0x11, \T5, \T2, \T4 # T4 = a1*b1 + vpclmulqdq $0x00, \T5, \T2, \T7 # T7 = a0*b0 + vpclmulqdq $0x01, \T5, \T2, \T6 # T6 = a1*b0 + vpclmulqdq $0x10, \T5, \T2, \T5 # T5 = a0*b1 + vpxor \T5, \T6, \T6 - vmovdqu 16*7(arg1), \T1 + vmovdqu 16*3(arg1), \T1 vaesenc \T1, \XMM1, \XMM1 vaesenc \T1, \XMM2, \XMM2 vaesenc \T1, \XMM3, \XMM3 @@ -2212,8 +2294,8 @@ _initial_blocks_done\@: vaesenc \T1, \XMM7, \XMM7 vaesenc \T1, \XMM8, \XMM8 - vmovdqa TMP6(%rsp), \T1 - vmovdqa HashKey_3(arg1), \T5 + vmovdqa TMP2(%rsp), \T1 + vmovdqu HashKey_7(arg2), \T5 vpclmulqdq $0x11, \T5, \T1, \T3 vpxor \T3, \T4, \T4 @@ -2226,7 +2308,7 @@ _initial_blocks_done\@: vpclmulqdq $0x10, \T5, \T1, \T3 vpxor \T3, \T6, \T6 - vmovdqu 16*8(arg1), \T1 + vmovdqu 16*4(arg1), \T1 vaesenc \T1, \XMM1, \XMM1 vaesenc \T1, \XMM2, \XMM2 vaesenc \T1, \XMM3, \XMM3 @@ -2236,35 +2318,12 @@ _initial_blocks_done\@: vaesenc \T1, \XMM7, \XMM7 vaesenc \T1, \XMM8, \XMM8 - vmovdqa TMP7(%rsp), \T1 - vmovdqa HashKey_2(arg1), \T5 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T4 - - vpclmulqdq $0x00, \T5, \T1, \T3 - vpxor \T3, \T7, \T7 - - vpclmulqdq $0x01, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - vpclmulqdq $0x10, \T5, \T1, \T3 - vpxor \T3, \T6, \T6 - - ####################################################################### - vmovdqu 16*9(arg1), \T5 - vaesenc \T5, \XMM1, \XMM1 - vaesenc \T5, \XMM2, \XMM2 - vaesenc \T5, \XMM3, \XMM3 - vaesenc \T5, \XMM4, \XMM4 - vaesenc \T5, \XMM5, \XMM5 - vaesenc \T5, \XMM6, \XMM6 - vaesenc \T5, \XMM7, \XMM7 - vaesenc \T5, \XMM8, \XMM8 - - vmovdqa TMP8(%rsp), \T1 - vmovdqa HashKey(arg1), \T5 + vmovdqa TMP3(%rsp), \T1 + vmovdqu HashKey_6(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 vpclmulqdq $0x00, \T5, \T1, \T3 vpxor \T3, \T7, \T7 @@ -2275,672 +2334,510 @@ _initial_blocks_done\@: vpclmulqdq $0x10, \T5, \T1, \T3 vpxor \T3, \T6, \T6 - vpclmulqdq $0x11, \T5, \T1, \T3 - vpxor \T3, \T4, \T1 - - - vmovdqu 16*10(arg1), \T5 - - i = 0 - j = 1 - setreg -.rep 8 - vpxor 16*i(arg3, %r11), \T5, \T2 - .if \ENC_DEC == ENC - vaesenclast \T2, reg_j, reg_j - .else - vaesenclast \T2, reg_j, \T3 - vmovdqu 16*i(arg3, %r11), reg_j - vmovdqu \T3, 16*i(arg2, %r11) - .endif - i = (i+1) - j = (j+1) - setreg -.endr - ####################################################################### - - - vpslldq $8, \T6, \T3 # shift-L T3 2 DWs - vpsrldq $8, \T6, \T6 # shift-R T2 2 DWs - vpxor \T3, \T7, \T7 - vpxor \T6, \T1, \T1 # accumulate the results in T1:T7 - - - - ####################################################################### - #first phase of the reduction - vmovdqa POLY2(%rip), \T3 - - vpclmulqdq $0x01, \T7, \T3, \T2 - vpslldq $8, \T2, \T2 # shift-L xmm2 2 DWs - - vpxor \T2, \T7, \T7 # first phase of the reduction complete - ####################################################################### - .if \ENC_DEC == ENC - vmovdqu \XMM1, 16*0(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM2, 16*1(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM3, 16*2(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu 
\XMM4, 16*3(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM5, 16*4(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM6, 16*5(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM7, 16*6(arg2,%r11) # Write to the Ciphertext buffer - vmovdqu \XMM8, 16*7(arg2,%r11) # Write to the Ciphertext buffer - .endif - - ####################################################################### - #second phase of the reduction - vpclmulqdq $0x00, \T7, \T3, \T2 - vpsrldq $4, \T2, \T2 # shift-R xmm2 1 DW (Shift-R only 1-DW to obtain 2-DWs shift-R) - - vpclmulqdq $0x10, \T7, \T3, \T4 - vpslldq $4, \T4, \T4 # shift-L xmm0 1 DW (Shift-L 1-DW to obtain result with no shifts) - - vpxor \T2, \T4, \T4 # second phase of the reduction complete - ####################################################################### - vpxor \T4, \T1, \T1 # the result is in T1 - - vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap - vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap - - - vpxor \T1, \XMM1, \XMM1 - - - -.endm - - -# GHASH the last 4 ciphertext blocks. -.macro GHASH_LAST_8_AVX2 T1 T2 T3 T4 T5 T6 T7 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 - - ## Karatsuba Method - - vmovdqa HashKey_8(arg1), \T5 - - vpshufd $0b01001110, \XMM1, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM1, \T2, \T2 - vpxor \T5, \T3, \T3 - - vpclmulqdq $0x11, \T5, \XMM1, \T6 - vpclmulqdq $0x00, \T5, \XMM1, \T7 - - vpclmulqdq $0x00, \T3, \T2, \XMM1 - - ###################### - - vmovdqa HashKey_7(arg1), \T5 - vpshufd $0b01001110, \XMM2, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM2, \T2, \T2 - vpxor \T5, \T3, \T3 - - vpclmulqdq $0x11, \T5, \XMM2, \T4 - vpxor \T4, \T6, \T6 - - vpclmulqdq $0x00, \T5, \XMM2, \T4 - vpxor \T4, \T7, \T7 - - vpclmulqdq $0x00, \T3, \T2, \T2 - - vpxor \T2, \XMM1, \XMM1 - - ###################### - - vmovdqa HashKey_6(arg1), \T5 - vpshufd $0b01001110, \XMM3, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM3, \T2, \T2 - vpxor \T5, \T3, \T3 - - vpclmulqdq $0x11, \T5, \XMM3, \T4 - vpxor \T4, \T6, \T6 - - vpclmulqdq $0x00, \T5, \XMM3, \T4 - vpxor \T4, \T7, \T7 - - vpclmulqdq $0x00, \T3, \T2, \T2 - - vpxor \T2, \XMM1, \XMM1 - - ###################### - - vmovdqa HashKey_5(arg1), \T5 - vpshufd $0b01001110, \XMM4, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM4, \T2, \T2 - vpxor \T5, \T3, \T3 - - vpclmulqdq $0x11, \T5, \XMM4, \T4 - vpxor \T4, \T6, \T6 - - vpclmulqdq $0x00, \T5, \XMM4, \T4 - vpxor \T4, \T7, \T7 - - vpclmulqdq $0x00, \T3, \T2, \T2 - - vpxor \T2, \XMM1, \XMM1 - - ###################### - - vmovdqa HashKey_4(arg1), \T5 - vpshufd $0b01001110, \XMM5, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM5, \T2, \T2 - vpxor \T5, \T3, \T3 - - vpclmulqdq $0x11, \T5, \XMM5, \T4 - vpxor \T4, \T6, \T6 - - vpclmulqdq $0x00, \T5, \XMM5, \T4 - vpxor \T4, \T7, \T7 - - vpclmulqdq $0x00, \T3, \T2, \T2 - - vpxor \T2, \XMM1, \XMM1 - - ###################### - - vmovdqa HashKey_3(arg1), \T5 - vpshufd $0b01001110, \XMM6, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM6, \T2, \T2 - vpxor \T5, \T3, \T3 - - vpclmulqdq $0x11, \T5, \XMM6, \T4 - vpxor \T4, \T6, \T6 - - vpclmulqdq $0x00, \T5, \XMM6, \T4 - vpxor \T4, \T7, \T7 
- - vpclmulqdq $0x00, \T3, \T2, \T2 - - vpxor \T2, \XMM1, \XMM1 - - ###################### - - vmovdqa HashKey_2(arg1), \T5 - vpshufd $0b01001110, \XMM7, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM7, \T2, \T2 - vpxor \T5, \T3, \T3 + vmovdqu 16*5(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 - vpclmulqdq $0x11, \T5, \XMM7, \T4 - vpxor \T4, \T6, \T6 + vmovdqa TMP4(%rsp), \T1 + vmovdqu HashKey_5(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \XMM7, \T4 - vpxor \T4, \T7, \T7 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 - vpclmulqdq $0x00, \T3, \T2, \T2 + vpclmulqdq $0x01, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 - vpxor \T2, \XMM1, \XMM1 + vpclmulqdq $0x10, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 - ###################### + vmovdqu 16*6(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 - vmovdqa HashKey(arg1), \T5 - vpshufd $0b01001110, \XMM8, \T2 - vpshufd $0b01001110, \T5, \T3 - vpxor \XMM8, \T2, \T2 - vpxor \T5, \T3, \T3 - vpclmulqdq $0x11, \T5, \XMM8, \T4 - vpxor \T4, \T6, \T6 + vmovdqa TMP5(%rsp), \T1 + vmovdqu HashKey_4(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 - vpclmulqdq $0x00, \T5, \XMM8, \T4 - vpxor \T4, \T7, \T7 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 - vpclmulqdq $0x00, \T3, \T2, \T2 + vpclmulqdq $0x01, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 - vpxor \T2, \XMM1, \XMM1 - vpxor \T6, \XMM1, \XMM1 - vpxor \T7, \XMM1, \T2 + vpclmulqdq $0x10, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 + vmovdqu 16*7(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 + vmovdqa TMP6(%rsp), \T1 + vmovdqu HashKey_3(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 - vpslldq $8, \T2, \T4 - vpsrldq $8, \T2, \T2 + vpclmulqdq $0x01, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 - vpxor \T4, \T7, \T7 - vpxor \T2, \T6, \T6 # holds the result of the - # accumulated carry-less multiplications + vpclmulqdq $0x10, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 - ####################################################################### - #first phase of the reduction - vmovdqa POLY2(%rip), \T3 + vmovdqu 16*8(arg1), \T1 + vaesenc \T1, \XMM1, \XMM1 + vaesenc \T1, \XMM2, \XMM2 + vaesenc \T1, \XMM3, \XMM3 + vaesenc \T1, \XMM4, \XMM4 + vaesenc \T1, \XMM5, \XMM5 + vaesenc \T1, \XMM6, \XMM6 + vaesenc \T1, \XMM7, \XMM7 + vaesenc \T1, \XMM8, \XMM8 - vpclmulqdq $0x01, \T7, \T3, \T2 - vpslldq $8, \T2, \T2 # shift-L xmm2 2 DWs + vmovdqa TMP7(%rsp), \T1 + vmovdqu HashKey_2(arg2), \T5 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T4 - vpxor \T2, \T7, \T7 # first phase of the reduction complete - ####################################################################### + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 + vpclmulqdq $0x01, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 - #second phase of the reduction - vpclmulqdq $0x00, \T7, \T3, \T2 - vpsrldq $4, \T2, \T2 # shift-R T2 1 DW (Shift-R only 1-DW to obtain 2-DWs shift-R) + vpclmulqdq $0x10, \T5, 
\T1, \T3 + vpxor \T3, \T6, \T6 - vpclmulqdq $0x10, \T7, \T3, \T4 - vpslldq $4, \T4, \T4 # shift-L T4 1 DW (Shift-L 1-DW to obtain result with no shifts) - vpxor \T2, \T4, \T4 # second phase of the reduction complete ####################################################################### - vpxor \T4, \T6, \T6 # the result is in T6 -.endm - - -# combined for GCM encrypt and decrypt functions -# clobbering all xmm registers -# clobbering r10, r11, r12, r13, r14, r15 -.macro GCM_ENC_DEC_AVX2 ENC_DEC + vmovdqu 16*9(arg1), \T5 + vaesenc \T5, \XMM1, \XMM1 + vaesenc \T5, \XMM2, \XMM2 + vaesenc \T5, \XMM3, \XMM3 + vaesenc \T5, \XMM4, \XMM4 + vaesenc \T5, \XMM5, \XMM5 + vaesenc \T5, \XMM6, \XMM6 + vaesenc \T5, \XMM7, \XMM7 + vaesenc \T5, \XMM8, \XMM8 - #the number of pushes must equal STACK_OFFSET - push %r12 - push %r13 - push %r14 - push %r15 + vmovdqa TMP8(%rsp), \T1 + vmovdqu HashKey(arg2), \T5 - mov %rsp, %r14 + vpclmulqdq $0x00, \T5, \T1, \T3 + vpxor \T3, \T7, \T7 + vpclmulqdq $0x01, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 + vpclmulqdq $0x10, \T5, \T1, \T3 + vpxor \T3, \T6, \T6 + vpclmulqdq $0x11, \T5, \T1, \T3 + vpxor \T3, \T4, \T1 - sub $VARIABLE_OFFSET, %rsp - and $~63, %rsp # align rsp to 64 bytes + vmovdqu 16*10(arg1), \T5 - vmovdqu HashKey(arg1), %xmm13 # xmm13 = HashKey + i = 11 + setreg +.rep (\REP-9) + vaesenc \T5, \XMM1, \XMM1 + vaesenc \T5, \XMM2, \XMM2 + vaesenc \T5, \XMM3, \XMM3 + vaesenc \T5, \XMM4, \XMM4 + vaesenc \T5, \XMM5, \XMM5 + vaesenc \T5, \XMM6, \XMM6 + vaesenc \T5, \XMM7, \XMM7 + vaesenc \T5, \XMM8, \XMM8 + + vmovdqu 16*i(arg1), \T5 + i = i + 1 + setreg +.endr - mov arg4, %r13 # save the number of bytes of plaintext/ciphertext - and $-16, %r13 # r13 = r13 - (r13 mod 16) + i = 0 + j = 1 + setreg +.rep 8 + vpxor 16*i(arg4, %r11), \T5, \T2 + .if \ENC_DEC == ENC + vaesenclast \T2, reg_j, reg_j + .else + vaesenclast \T2, reg_j, \T3 + vmovdqu 16*i(arg4, %r11), reg_j + vmovdqu \T3, 16*i(arg3, %r11) + .endif + i = (i+1) + j = (j+1) + setreg +.endr + ####################################################################### - mov %r13, %r12 - shr $4, %r12 - and $7, %r12 - jz _initial_num_blocks_is_0\@ - cmp $7, %r12 - je _initial_num_blocks_is_7\@ - cmp $6, %r12 - je _initial_num_blocks_is_6\@ - cmp $5, %r12 - je _initial_num_blocks_is_5\@ - cmp $4, %r12 - je _initial_num_blocks_is_4\@ - cmp $3, %r12 - je _initial_num_blocks_is_3\@ - cmp $2, %r12 - je _initial_num_blocks_is_2\@ + vpslldq $8, \T6, \T3 # shift-L T3 2 DWs + vpsrldq $8, \T6, \T6 # shift-R T2 2 DWs + vpxor \T3, \T7, \T7 + vpxor \T6, \T1, \T1 # accumulate the results in T1:T7 - jmp _initial_num_blocks_is_1\@ -_initial_num_blocks_is_7\@: - INITIAL_BLOCKS_AVX2 7, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*7, %r13 - jmp _initial_blocks_encrypted\@ -_initial_num_blocks_is_6\@: - INITIAL_BLOCKS_AVX2 6, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*6, %r13 - jmp _initial_blocks_encrypted\@ + ####################################################################### + #first phase of the reduction + vmovdqa POLY2(%rip), \T3 -_initial_num_blocks_is_5\@: - INITIAL_BLOCKS_AVX2 5, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*5, %r13 - jmp _initial_blocks_encrypted\@ + vpclmulqdq $0x01, \T7, \T3, \T2 + vpslldq $8, \T2, \T2 # shift-L xmm2 2 DWs 
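For reference, the vpclmulqdq partial products accumulated in this macro form a 256-bit carry-less product, and the POLY2 "first phase" / "second phase" steps fold it back to 128 bits modulo the GHASH polynomial g(x) = x^128 + x^7 + x^2 + x + 1. Below is a minimal bit-serial sketch of the same multiply, following NIST SP 800-38D rather than the kernel's vectorized path; the struct and function names are illustrative only, not kernel APIs.

	#include <stdint.h>

	/* Big-endian 128-bit value; GHASH bit 0 is the MSB of .hi. */
	struct be128 { uint64_t hi, lo; };

	static struct be128 gf128_mul_ref(struct be128 x, struct be128 y)
	{
		const uint64_t R = 0xe100000000000000ULL; /* x^128 = x^7+x^2+x+1 */
		struct be128 z = { 0, 0 }, v = y;
		int i;

		for (i = 0; i < 128; i++) {
			uint64_t lsb = v.lo & 1;

			if ((i < 64 ? x.hi >> (63 - i) : x.lo >> (127 - i)) & 1) {
				z.hi ^= v.hi;
				z.lo ^= v.lo;
			}
			v.lo = (v.lo >> 1) | (v.hi << 63);	/* V >>= 1 ... */
			v.hi >>= 1;
			if (lsb)
				v.hi ^= R;			/* ... mod g(x) */
		}
		return z;
	}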
-_initial_num_blocks_is_4\@: - INITIAL_BLOCKS_AVX2 4, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*4, %r13 - jmp _initial_blocks_encrypted\@ + vpxor \T2, \T7, \T7 # first phase of the reduction complete + ####################################################################### + .if \ENC_DEC == ENC + vmovdqu \XMM1, 16*0(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM2, 16*1(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM3, 16*2(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM4, 16*3(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM5, 16*4(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM6, 16*5(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM7, 16*6(arg3,%r11) # Write to the Ciphertext buffer + vmovdqu \XMM8, 16*7(arg3,%r11) # Write to the Ciphertext buffer + .endif -_initial_num_blocks_is_3\@: - INITIAL_BLOCKS_AVX2 3, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*3, %r13 - jmp _initial_blocks_encrypted\@ + ####################################################################### + #second phase of the reduction + vpclmulqdq $0x00, \T7, \T3, \T2 + vpsrldq $4, \T2, \T2 # shift-R xmm2 1 DW (Shift-R only 1-DW to obtain 2-DWs shift-R) -_initial_num_blocks_is_2\@: - INITIAL_BLOCKS_AVX2 2, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*2, %r13 - jmp _initial_blocks_encrypted\@ + vpclmulqdq $0x10, \T7, \T3, \T4 + vpslldq $4, \T4, \T4 # shift-L xmm0 1 DW (Shift-L 1-DW to obtain result with no shifts) -_initial_num_blocks_is_1\@: - INITIAL_BLOCKS_AVX2 1, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC - sub $16*1, %r13 - jmp _initial_blocks_encrypted\@ + vpxor \T2, \T4, \T4 # second phase of the reduction complete + ####################################################################### + vpxor \T4, \T1, \T1 # the result is in T1 -_initial_num_blocks_is_0\@: - INITIAL_BLOCKS_AVX2 0, %xmm12, %xmm13, %xmm14, %xmm15, %xmm11, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm10, %xmm0, \ENC_DEC + vpshufb SHUF_MASK(%rip), \XMM1, \XMM1 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM2, \XMM2 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM3, \XMM3 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM4, \XMM4 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM5, \XMM5 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM6, \XMM6 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM7, \XMM7 # perform a 16Byte swap + vpshufb SHUF_MASK(%rip), \XMM8, \XMM8 # perform a 16Byte swap -_initial_blocks_encrypted\@: - cmp $0, %r13 - je _zero_cipher_left\@ + vpxor \T1, \XMM1, \XMM1 - sub $128, %r13 - je _eight_cipher_left\@ +.endm - vmovd %xmm9, %r15d - and $255, %r15d - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 +# GHASH the last 4 ciphertext blocks. 
+.macro GHASH_LAST_8_AVX2 T1 T2 T3 T4 T5 T6 T7 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 + ## Karatsuba Method -_encrypt_by_8_new\@: - cmp $(255-8), %r15d - jg _encrypt_by_8\@ + vmovdqu HashKey_8(arg2), \T5 + vpshufd $0b01001110, \XMM1, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM1, \T2, \T2 + vpxor \T5, \T3, \T3 + vpclmulqdq $0x11, \T5, \XMM1, \T6 + vpclmulqdq $0x00, \T5, \XMM1, \T7 - add $8, %r15b - GHASH_8_ENCRYPT_8_PARALLEL_AVX2 %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm15, out_order, \ENC_DEC - add $128, %r11 - sub $128, %r13 - jne _encrypt_by_8_new\@ + vpclmulqdq $0x00, \T3, \T2, \XMM1 - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - jmp _eight_cipher_left\@ + ###################### -_encrypt_by_8\@: - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - add $8, %r15b - GHASH_8_ENCRYPT_8_PARALLEL_AVX2 %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm9, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm15, in_order, \ENC_DEC - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - add $128, %r11 - sub $128, %r13 - jne _encrypt_by_8_new\@ + vmovdqu HashKey_7(arg2), \T5 + vpshufd $0b01001110, \XMM2, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM2, \T2, \T2 + vpxor \T5, \T3, \T3 - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 + vpclmulqdq $0x11, \T5, \XMM2, \T4 + vpxor \T4, \T6, \T6 + vpclmulqdq $0x00, \T5, \XMM2, \T4 + vpxor \T4, \T7, \T7 + vpclmulqdq $0x00, \T3, \T2, \T2 + vpxor \T2, \XMM1, \XMM1 -_eight_cipher_left\@: - GHASH_LAST_8_AVX2 %xmm0, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8 + ###################### + vmovdqu HashKey_6(arg2), \T5 + vpshufd $0b01001110, \XMM3, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM3, \T2, \T2 + vpxor \T5, \T3, \T3 -_zero_cipher_left\@: - cmp $16, arg4 - jl _only_less_than_16\@ + vpclmulqdq $0x11, \T5, \XMM3, \T4 + vpxor \T4, \T6, \T6 - mov arg4, %r13 - and $15, %r13 # r13 = (arg4 mod 16) + vpclmulqdq $0x00, \T5, \XMM3, \T4 + vpxor \T4, \T7, \T7 - je _multiple_of_16_bytes\@ + vpclmulqdq $0x00, \T3, \T2, \T2 - # handle the last <16 Byte block seperately + vpxor \T2, \XMM1, \XMM1 + ###################### - vpaddd ONE(%rip), %xmm9, %xmm9 # INCR CNT to get Yn - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - ENCRYPT_SINGLE_BLOCK %xmm9 # E(K, Yn) + vmovdqu HashKey_5(arg2), \T5 + vpshufd $0b01001110, \XMM4, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM4, \T2, \T2 + vpxor \T5, \T3, \T3 - sub $16, %r11 - add %r13, %r11 - vmovdqu (arg3, %r11), %xmm1 # receive the last <16 Byte block + vpclmulqdq $0x11, \T5, \XMM4, \T4 + vpxor \T4, \T6, \T6 - lea SHIFT_MASK+16(%rip), %r12 - sub %r13, %r12 # adjust the shuffle mask pointer - # to be able to shift 16-r13 bytes - # (r13 is the number of bytes in plaintext mod 16) - vmovdqu (%r12), %xmm2 # get the appropriate shuffle mask - vpshufb %xmm2, %xmm1, %xmm1 # shift right 16-r13 bytes - jmp _final_ghash_mul\@ - -_only_less_than_16\@: - # check for 0 length - mov arg4, %r13 - and $15, %r13 # r13 = (arg4 mod 16) + vpclmulqdq $0x00, \T5, \XMM4, \T4 + vpxor \T4, \T7, \T7 - je _multiple_of_16_bytes\@ + vpclmulqdq $0x00, \T3, \T2, \T2 - # handle the last <16 Byte block seperately + vpxor \T2, \XMM1, \XMM1 + ###################### - vpaddd ONE(%rip), %xmm9, %xmm9 # INCR CNT to get Yn - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - ENCRYPT_SINGLE_BLOCK %xmm9 # E(K, Yn) + vmovdqu HashKey_4(arg2), \T5 + vpshufd $0b01001110, \XMM5, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM5, \T2, \T2 + vpxor \T5, \T3, \T3 + vpclmulqdq $0x11, \T5, 
\XMM5, \T4 + vpxor \T4, \T6, \T6 - lea SHIFT_MASK+16(%rip), %r12 - sub %r13, %r12 # adjust the shuffle mask pointer to be - # able to shift 16-r13 bytes (r13 is the - # number of bytes in plaintext mod 16) + vpclmulqdq $0x00, \T5, \XMM5, \T4 + vpxor \T4, \T7, \T7 -_get_last_16_byte_loop\@: - movb (arg3, %r11), %al - movb %al, TMP1 (%rsp , %r11) - add $1, %r11 - cmp %r13, %r11 - jne _get_last_16_byte_loop\@ + vpclmulqdq $0x00, \T3, \T2, \T2 - vmovdqu TMP1(%rsp), %xmm1 + vpxor \T2, \XMM1, \XMM1 - sub $16, %r11 + ###################### -_final_ghash_mul\@: - .if \ENC_DEC == DEC - vmovdqa %xmm1, %xmm2 - vpxor %xmm1, %xmm9, %xmm9 # Plaintext XOR E(K, Yn) - vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 # get the appropriate mask to mask out top 16-r13 bytes of xmm9 - vpand %xmm1, %xmm9, %xmm9 # mask out top 16-r13 bytes of xmm9 - vpand %xmm1, %xmm2, %xmm2 - vpshufb SHUF_MASK(%rip), %xmm2, %xmm2 - vpxor %xmm2, %xmm14, %xmm14 - #GHASH computation for the last <16 Byte block - GHASH_MUL_AVX2 %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 - sub %r13, %r11 - add $16, %r11 - .else - vpxor %xmm1, %xmm9, %xmm9 # Plaintext XOR E(K, Yn) - vmovdqu ALL_F-SHIFT_MASK(%r12), %xmm1 # get the appropriate mask to mask out top 16-r13 bytes of xmm9 - vpand %xmm1, %xmm9, %xmm9 # mask out top 16-r13 bytes of xmm9 - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 - vpxor %xmm9, %xmm14, %xmm14 - #GHASH computation for the last <16 Byte block - GHASH_MUL_AVX2 %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 - sub %r13, %r11 - add $16, %r11 - vpshufb SHUF_MASK(%rip), %xmm9, %xmm9 # shuffle xmm9 back to output as ciphertext - .endif + vmovdqu HashKey_3(arg2), \T5 + vpshufd $0b01001110, \XMM6, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM6, \T2, \T2 + vpxor \T5, \T3, \T3 + vpclmulqdq $0x11, \T5, \XMM6, \T4 + vpxor \T4, \T6, \T6 - ############################# - # output r13 Bytes - vmovq %xmm9, %rax - cmp $8, %r13 - jle _less_than_8_bytes_left\@ + vpclmulqdq $0x00, \T5, \XMM6, \T4 + vpxor \T4, \T7, \T7 - mov %rax, (arg2 , %r11) - add $8, %r11 - vpsrldq $8, %xmm9, %xmm9 - vmovq %xmm9, %rax - sub $8, %r13 + vpclmulqdq $0x00, \T3, \T2, \T2 -_less_than_8_bytes_left\@: - movb %al, (arg2 , %r11) - add $1, %r11 - shr $8, %rax - sub $1, %r13 - jne _less_than_8_bytes_left\@ - ############################# + vpxor \T2, \XMM1, \XMM1 -_multiple_of_16_bytes\@: - mov arg7, %r12 # r12 = aadLen (number of bytes) - shl $3, %r12 # convert into number of bits - vmovd %r12d, %xmm15 # len(A) in xmm15 + ###################### - shl $3, arg4 # len(C) in bits (*128) - vmovq arg4, %xmm1 - vpslldq $8, %xmm15, %xmm15 # xmm15 = len(A)|| 0x0000000000000000 - vpxor %xmm1, %xmm15, %xmm15 # xmm15 = len(A)||len(C) + vmovdqu HashKey_2(arg2), \T5 + vpshufd $0b01001110, \XMM7, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM7, \T2, \T2 + vpxor \T5, \T3, \T3 - vpxor %xmm15, %xmm14, %xmm14 - GHASH_MUL_AVX2 %xmm14, %xmm13, %xmm0, %xmm10, %xmm11, %xmm5, %xmm6 # final GHASH computation - vpshufb SHUF_MASK(%rip), %xmm14, %xmm14 # perform a 16Byte swap + vpclmulqdq $0x11, \T5, \XMM7, \T4 + vpxor \T4, \T6, \T6 - mov arg5, %rax # rax = *Y0 - vmovdqu (%rax), %xmm9 # xmm9 = Y0 + vpclmulqdq $0x00, \T5, \XMM7, \T4 + vpxor \T4, \T7, \T7 - ENCRYPT_SINGLE_BLOCK %xmm9 # E(K, Y0) + vpclmulqdq $0x00, \T3, \T2, \T2 - vpxor %xmm14, %xmm9, %xmm9 + vpxor \T2, \XMM1, \XMM1 + ###################### + vmovdqu HashKey(arg2), \T5 + vpshufd $0b01001110, \XMM8, \T2 + vpshufd $0b01001110, \T5, \T3 + vpxor \XMM8, \T2, \T2 + vpxor \T5, \T3, \T3 -_return_T\@: - mov arg8, %r10 # r10 = authTag - mov 
arg9, %r11 # r11 = auth_tag_len + vpclmulqdq $0x11, \T5, \XMM8, \T4 + vpxor \T4, \T6, \T6 - cmp $16, %r11 - je _T_16\@ + vpclmulqdq $0x00, \T5, \XMM8, \T4 + vpxor \T4, \T7, \T7 - cmp $8, %r11 - jl _T_4\@ + vpclmulqdq $0x00, \T3, \T2, \T2 -_T_8\@: - vmovq %xmm9, %rax - mov %rax, (%r10) - add $8, %r10 - sub $8, %r11 - vpsrldq $8, %xmm9, %xmm9 - cmp $0, %r11 - je _return_T_done\@ -_T_4\@: - vmovd %xmm9, %eax - mov %eax, (%r10) - add $4, %r10 - sub $4, %r11 - vpsrldq $4, %xmm9, %xmm9 - cmp $0, %r11 - je _return_T_done\@ -_T_123\@: - vmovd %xmm9, %eax - cmp $2, %r11 - jl _T_1\@ - mov %ax, (%r10) - cmp $2, %r11 - je _return_T_done\@ - add $2, %r10 - sar $16, %eax -_T_1\@: - mov %al, (%r10) - jmp _return_T_done\@ + vpxor \T2, \XMM1, \XMM1 + vpxor \T6, \XMM1, \XMM1 + vpxor \T7, \XMM1, \T2 -_T_16\@: - vmovdqu %xmm9, (%r10) -_return_T_done\@: - mov %r14, %rsp - pop %r15 - pop %r14 - pop %r13 - pop %r12 -.endm + vpslldq $8, \T2, \T4 + vpsrldq $8, \T2, \T2 -############################################################# -#void aesni_gcm_precomp_avx_gen4 -# (gcm_data *my_ctx_data, -# u8 *hash_subkey)# /* H, the Hash sub key input. -# Data starts on a 16-byte boundary. */ -############################################################# -ENTRY(aesni_gcm_precomp_avx_gen4) - #the number of pushes must equal STACK_OFFSET - push %r12 - push %r13 - push %r14 - push %r15 + vpxor \T4, \T7, \T7 + vpxor \T2, \T6, \T6 # holds the result of the + # accumulated carry-less multiplications - mov %rsp, %r14 + ####################################################################### + #first phase of the reduction + vmovdqa POLY2(%rip), \T3 + vpclmulqdq $0x01, \T7, \T3, \T2 + vpslldq $8, \T2, \T2 # shift-L xmm2 2 DWs + + vpxor \T2, \T7, \T7 # first phase of the reduction complete + ####################################################################### - sub $VARIABLE_OFFSET, %rsp - and $~63, %rsp # align rsp to 64 bytes + #second phase of the reduction + vpclmulqdq $0x00, \T7, \T3, \T2 + vpsrldq $4, \T2, \T2 # shift-R T2 1 DW (Shift-R only 1-DW to obtain 2-DWs shift-R) - vmovdqu (arg2), %xmm6 # xmm6 = HashKey + vpclmulqdq $0x10, \T7, \T3, \T4 + vpslldq $4, \T4, \T4 # shift-L T4 1 DW (Shift-L 1-DW to obtain result with no shifts) - vpshufb SHUF_MASK(%rip), %xmm6, %xmm6 - ############### PRECOMPUTATION of HashKey<<1 mod poly from the HashKey - vmovdqa %xmm6, %xmm2 - vpsllq $1, %xmm6, %xmm6 - vpsrlq $63, %xmm2, %xmm2 - vmovdqa %xmm2, %xmm1 - vpslldq $8, %xmm2, %xmm2 - vpsrldq $8, %xmm1, %xmm1 - vpor %xmm2, %xmm6, %xmm6 - #reduction - vpshufd $0b00100100, %xmm1, %xmm2 - vpcmpeqd TWOONE(%rip), %xmm2, %xmm2 - vpand POLY(%rip), %xmm2, %xmm2 - vpxor %xmm2, %xmm6, %xmm6 # xmm6 holds the HashKey<<1 mod poly + vpxor \T2, \T4, \T4 # second phase of the reduction complete ####################################################################### - vmovdqa %xmm6, HashKey(arg1) # store HashKey<<1 mod poly - + vpxor \T4, \T6, \T6 # the result is in T6 +.endm - PRECOMPUTE_AVX2 %xmm6, %xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5 - mov %r14, %rsp - pop %r15 - pop %r14 - pop %r13 - pop %r12 +############################################################# +#void aesni_gcm_init_avx_gen4 +# (gcm_data *my_ctx_data, +# gcm_context_data *data, +# u8 *iv, /* Pre-counter block j0: 4 byte salt +# (from Security Association) concatenated with 8 byte +# Initialisation Vector (from IPSec ESP Payload) +# concatenated with 0x00000001. 16-byte aligned pointer. */ +# u8 *hash_subkey# /* H, the Hash sub key input. Data starts on a 16-byte boundary. 
*/ +# const u8 *aad, /* Additional Authentication Data (AAD)*/ +# u64 aad_len) /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */ +############################################################# +ENTRY(aesni_gcm_init_avx_gen4) + FUNC_SAVE + INIT GHASH_MUL_AVX2, PRECOMPUTE_AVX2 + FUNC_RESTORE ret -ENDPROC(aesni_gcm_precomp_avx_gen4) - +ENDPROC(aesni_gcm_init_avx_gen4) ############################################################################### #void aesni_gcm_enc_avx_gen4( # gcm_data *my_ctx_data, /* aligned to 16 Bytes */ +# gcm_context_data *data, # u8 *out, /* Ciphertext output. Encrypt in-place is allowed. */ # const u8 *in, /* Plaintext input */ -# u64 plaintext_len, /* Length of data in Bytes for encryption. */ -# u8 *iv, /* Pre-counter block j0: 4 byte salt -# (from Security Association) concatenated with 8 byte -# Initialisation Vector (from IPSec ESP Payload) -# concatenated with 0x00000001. 16-byte aligned pointer. */ -# const u8 *aad, /* Additional Authentication Data (AAD)*/ -# u64 aad_len, /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */ -# u8 *auth_tag, /* Authenticated Tag output. */ -# u64 auth_tag_len)# /* Authenticated Tag Length in bytes. -# Valid values are 16 (most likely), 12 or 8. */ +# u64 plaintext_len) /* Length of data in Bytes for encryption. */ ############################################################################### -ENTRY(aesni_gcm_enc_avx_gen4) - GCM_ENC_DEC_AVX2 ENC +ENTRY(aesni_gcm_enc_update_avx_gen4) + FUNC_SAVE + mov keysize,%eax + cmp $32, %eax + je key_256_enc_update4 + cmp $16, %eax + je key_128_enc_update4 + # must be 192 + GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 11 + FUNC_RESTORE + ret +key_128_enc_update4: + GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 9 + FUNC_RESTORE ret -ENDPROC(aesni_gcm_enc_avx_gen4) +key_256_enc_update4: + GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 13 + FUNC_RESTORE + ret +ENDPROC(aesni_gcm_enc_update_avx_gen4) ############################################################################### -#void aesni_gcm_dec_avx_gen4( +#void aesni_gcm_dec_update_avx_gen4( # gcm_data *my_ctx_data, /* aligned to 16 Bytes */ +# gcm_context_data *data, # u8 *out, /* Plaintext output. Decrypt in-place is allowed. */ # const u8 *in, /* Ciphertext input */ -# u64 plaintext_len, /* Length of data in Bytes for encryption. */ -# u8 *iv, /* Pre-counter block j0: 4 byte salt -# (from Security Association) concatenated with 8 byte -# Initialisation Vector (from IPSec ESP Payload) -# concatenated with 0x00000001. 16-byte aligned pointer. */ -# const u8 *aad, /* Additional Authentication Data (AAD)*/ -# u64 aad_len, /* Length of AAD in bytes. With RFC4106 this is going to be 8 or 12 Bytes */ +# u64 plaintext_len) /* Length of data in Bytes for encryption. 
*/ +############################################################################### +ENTRY(aesni_gcm_dec_update_avx_gen4) + FUNC_SAVE + mov keysize,%eax + cmp $32, %eax + je key_256_dec_update4 + cmp $16, %eax + je key_128_dec_update4 + # must be 192 + GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 11 + FUNC_RESTORE + ret +key_128_dec_update4: + GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 9 + FUNC_RESTORE + ret +key_256_dec_update4: + GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 13 + FUNC_RESTORE + ret +ENDPROC(aesni_gcm_dec_update_avx_gen4) + +############################################################################### +#void aesni_gcm_finalize_avx_gen4( +# gcm_data *my_ctx_data, /* aligned to 16 Bytes */ +# gcm_context_data *data, # u8 *auth_tag, /* Authenticated Tag output. */ # u64 auth_tag_len)# /* Authenticated Tag Length in bytes. -# Valid values are 16 (most likely), 12 or 8. */ +# Valid values are 16 (most likely), 12 or 8. */ ############################################################################### -ENTRY(aesni_gcm_dec_avx_gen4) - GCM_ENC_DEC_AVX2 DEC - ret -ENDPROC(aesni_gcm_dec_avx_gen4) +ENTRY(aesni_gcm_finalize_avx_gen4) + FUNC_SAVE + mov keysize,%eax + cmp $32, %eax + je key_256_finalize4 + cmp $16, %eax + je key_128_finalize4 + # must be 192 + GCM_COMPLETE GHASH_MUL_AVX2, 11, arg3, arg4 + FUNC_RESTORE + ret +key_128_finalize4: + GCM_COMPLETE GHASH_MUL_AVX2, 9, arg3, arg4 + FUNC_RESTORE + ret +key_256_finalize4: + GCM_COMPLETE GHASH_MUL_AVX2, 13, arg3, arg4 + FUNC_RESTORE + ret +ENDPROC(aesni_gcm_finalize_avx_gen4) #endif /* CONFIG_AS_AVX2 */ diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 661f7daf43da..1321700d6647 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -84,7 +84,7 @@ struct gcm_context_data { u8 current_counter[GCM_BLOCK_LEN]; u64 partial_block_len; u64 unused; - u8 hash_keys[GCM_BLOCK_LEN * 8]; + u8 hash_keys[GCM_BLOCK_LEN * 16]; }; asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, @@ -175,6 +175,32 @@ asmlinkage void aesni_gcm_finalize(void *ctx, struct gcm_context_data *gdata, u8 *auth_tag, unsigned long auth_tag_len); +static struct aesni_gcm_tfm_s { +void (*init)(void *ctx, + struct gcm_context_data *gdata, + u8 *iv, + u8 *hash_subkey, const u8 *aad, + unsigned long aad_len); +void (*enc_update)(void *ctx, + struct gcm_context_data *gdata, u8 *out, + const u8 *in, + unsigned long plaintext_len); +void (*dec_update)(void *ctx, + struct gcm_context_data *gdata, u8 *out, + const u8 *in, + unsigned long ciphertext_len); +void (*finalize)(void *ctx, + struct gcm_context_data *gdata, + u8 *auth_tag, unsigned long auth_tag_len); +} *aesni_gcm_tfm; + +struct aesni_gcm_tfm_s aesni_gcm_tfm_sse = { + .init = &aesni_gcm_init, + .enc_update = &aesni_gcm_enc_update, + .dec_update = &aesni_gcm_dec_update, + .finalize = &aesni_gcm_finalize, +}; + #ifdef CONFIG_AS_AVX asmlinkage void aes_ctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys, u8 *out, unsigned int num_bytes); @@ -183,136 +209,94 @@ asmlinkage void aes_ctr_enc_192_avx_by8(const u8 *in, u8 *iv, asmlinkage void aes_ctr_enc_256_avx_by8(const u8 *in, u8 *iv, void *keys, u8 *out, unsigned int num_bytes); /* - * asmlinkage void aesni_gcm_precomp_avx_gen2() + * asmlinkage void aesni_gcm_init_avx_gen2() * gcm_data 
*my_ctx_data, context data * u8 *hash_subkey, the Hash sub key input. Data starts on a 16-byte boundary. */ -asmlinkage void aesni_gcm_precomp_avx_gen2(void *my_ctx_data, u8 *hash_subkey); +asmlinkage void aesni_gcm_init_avx_gen2(void *my_ctx_data, + struct gcm_context_data *gdata, + u8 *iv, + u8 *hash_subkey, + const u8 *aad, + unsigned long aad_len); + +asmlinkage void aesni_gcm_enc_update_avx_gen2(void *ctx, + struct gcm_context_data *gdata, u8 *out, + const u8 *in, unsigned long plaintext_len); +asmlinkage void aesni_gcm_dec_update_avx_gen2(void *ctx, + struct gcm_context_data *gdata, u8 *out, + const u8 *in, + unsigned long ciphertext_len); +asmlinkage void aesni_gcm_finalize_avx_gen2(void *ctx, + struct gcm_context_data *gdata, + u8 *auth_tag, unsigned long auth_tag_len); -asmlinkage void aesni_gcm_enc_avx_gen2(void *ctx, u8 *out, +asmlinkage void aesni_gcm_enc_avx_gen2(void *ctx, + struct gcm_context_data *gdata, u8 *out, const u8 *in, unsigned long plaintext_len, u8 *iv, const u8 *aad, unsigned long aad_len, u8 *auth_tag, unsigned long auth_tag_len); -asmlinkage void aesni_gcm_dec_avx_gen2(void *ctx, u8 *out, +asmlinkage void aesni_gcm_dec_avx_gen2(void *ctx, + struct gcm_context_data *gdata, u8 *out, const u8 *in, unsigned long ciphertext_len, u8 *iv, const u8 *aad, unsigned long aad_len, u8 *auth_tag, unsigned long auth_tag_len); -static void aesni_gcm_enc_avx(void *ctx, - struct gcm_context_data *data, u8 *out, - const u8 *in, unsigned long plaintext_len, u8 *iv, - u8 *hash_subkey, const u8 *aad, unsigned long aad_len, - u8 *auth_tag, unsigned long auth_tag_len) -{ - struct crypto_aes_ctx *aes_ctx = (struct crypto_aes_ctx*)ctx; - if ((plaintext_len < AVX_GEN2_OPTSIZE) || (aes_ctx-> key_length != AES_KEYSIZE_128)){ - aesni_gcm_enc(ctx, data, out, in, - plaintext_len, iv, hash_subkey, aad, - aad_len, auth_tag, auth_tag_len); - } else { - aesni_gcm_precomp_avx_gen2(ctx, hash_subkey); - aesni_gcm_enc_avx_gen2(ctx, out, in, plaintext_len, iv, aad, - aad_len, auth_tag, auth_tag_len); - } -} +struct aesni_gcm_tfm_s aesni_gcm_tfm_avx_gen2 = { + .init = &aesni_gcm_init_avx_gen2, + .enc_update = &aesni_gcm_enc_update_avx_gen2, + .dec_update = &aesni_gcm_dec_update_avx_gen2, + .finalize = &aesni_gcm_finalize_avx_gen2, +}; -static void aesni_gcm_dec_avx(void *ctx, - struct gcm_context_data *data, u8 *out, - const u8 *in, unsigned long ciphertext_len, u8 *iv, - u8 *hash_subkey, const u8 *aad, unsigned long aad_len, - u8 *auth_tag, unsigned long auth_tag_len) -{ - struct crypto_aes_ctx *aes_ctx = (struct crypto_aes_ctx*)ctx; - if ((ciphertext_len < AVX_GEN2_OPTSIZE) || (aes_ctx-> key_length != AES_KEYSIZE_128)) { - aesni_gcm_dec(ctx, data, out, in, - ciphertext_len, iv, hash_subkey, aad, - aad_len, auth_tag, auth_tag_len); - } else { - aesni_gcm_precomp_avx_gen2(ctx, hash_subkey); - aesni_gcm_dec_avx_gen2(ctx, out, in, ciphertext_len, iv, aad, - aad_len, auth_tag, auth_tag_len); - } -} #endif #ifdef CONFIG_AS_AVX2 /* - * asmlinkage void aesni_gcm_precomp_avx_gen4() + * asmlinkage void aesni_gcm_init_avx_gen4() * gcm_data *my_ctx_data, context data * u8 *hash_subkey, the Hash sub key input. Data starts on a 16-byte boundary. 
*/ -asmlinkage void aesni_gcm_precomp_avx_gen4(void *my_ctx_data, u8 *hash_subkey); +asmlinkage void aesni_gcm_init_avx_gen4(void *my_ctx_data, + struct gcm_context_data *gdata, + u8 *iv, + u8 *hash_subkey, + const u8 *aad, + unsigned long aad_len); + +asmlinkage void aesni_gcm_enc_update_avx_gen4(void *ctx, + struct gcm_context_data *gdata, u8 *out, + const u8 *in, unsigned long plaintext_len); +asmlinkage void aesni_gcm_dec_update_avx_gen4(void *ctx, + struct gcm_context_data *gdata, u8 *out, + const u8 *in, + unsigned long ciphertext_len); +asmlinkage void aesni_gcm_finalize_avx_gen4(void *ctx, + struct gcm_context_data *gdata, + u8 *auth_tag, unsigned long auth_tag_len); -asmlinkage void aesni_gcm_enc_avx_gen4(void *ctx, u8 *out, +asmlinkage void aesni_gcm_enc_avx_gen4(void *ctx, + struct gcm_context_data *gdata, u8 *out, const u8 *in, unsigned long plaintext_len, u8 *iv, const u8 *aad, unsigned long aad_len, u8 *auth_tag, unsigned long auth_tag_len); -asmlinkage void aesni_gcm_dec_avx_gen4(void *ctx, u8 *out, +asmlinkage void aesni_gcm_dec_avx_gen4(void *ctx, + struct gcm_context_data *gdata, u8 *out, const u8 *in, unsigned long ciphertext_len, u8 *iv, const u8 *aad, unsigned long aad_len, u8 *auth_tag, unsigned long auth_tag_len); -static void aesni_gcm_enc_avx2(void *ctx, - struct gcm_context_data *data, u8 *out, - const u8 *in, unsigned long plaintext_len, u8 *iv, - u8 *hash_subkey, const u8 *aad, unsigned long aad_len, - u8 *auth_tag, unsigned long auth_tag_len) -{ - struct crypto_aes_ctx *aes_ctx = (struct crypto_aes_ctx*)ctx; - if ((plaintext_len < AVX_GEN2_OPTSIZE) || (aes_ctx-> key_length != AES_KEYSIZE_128)) { - aesni_gcm_enc(ctx, data, out, in, - plaintext_len, iv, hash_subkey, aad, - aad_len, auth_tag, auth_tag_len); - } else if (plaintext_len < AVX_GEN4_OPTSIZE) { - aesni_gcm_precomp_avx_gen2(ctx, hash_subkey); - aesni_gcm_enc_avx_gen2(ctx, out, in, plaintext_len, iv, aad, - aad_len, auth_tag, auth_tag_len); - } else { - aesni_gcm_precomp_avx_gen4(ctx, hash_subkey); - aesni_gcm_enc_avx_gen4(ctx, out, in, plaintext_len, iv, aad, - aad_len, auth_tag, auth_tag_len); - } -} +struct aesni_gcm_tfm_s aesni_gcm_tfm_avx_gen4 = { + .init = &aesni_gcm_init_avx_gen4, + .enc_update = &aesni_gcm_enc_update_avx_gen4, + .dec_update = &aesni_gcm_dec_update_avx_gen4, + .finalize = &aesni_gcm_finalize_avx_gen4, +}; -static void aesni_gcm_dec_avx2(void *ctx, - struct gcm_context_data *data, u8 *out, - const u8 *in, unsigned long ciphertext_len, u8 *iv, - u8 *hash_subkey, const u8 *aad, unsigned long aad_len, - u8 *auth_tag, unsigned long auth_tag_len) -{ - struct crypto_aes_ctx *aes_ctx = (struct crypto_aes_ctx*)ctx; - if ((ciphertext_len < AVX_GEN2_OPTSIZE) || (aes_ctx-> key_length != AES_KEYSIZE_128)) { - aesni_gcm_dec(ctx, data, out, in, - ciphertext_len, iv, hash_subkey, - aad, aad_len, auth_tag, auth_tag_len); - } else if (ciphertext_len < AVX_GEN4_OPTSIZE) { - aesni_gcm_precomp_avx_gen2(ctx, hash_subkey); - aesni_gcm_dec_avx_gen2(ctx, out, in, ciphertext_len, iv, aad, - aad_len, auth_tag, auth_tag_len); - } else { - aesni_gcm_precomp_avx_gen4(ctx, hash_subkey); - aesni_gcm_dec_avx_gen4(ctx, out, in, ciphertext_len, iv, aad, - aad_len, auth_tag, auth_tag_len); - } -} #endif -static void (*aesni_gcm_enc_tfm)(void *ctx, - struct gcm_context_data *data, u8 *out, - const u8 *in, unsigned long plaintext_len, - u8 *iv, u8 *hash_subkey, const u8 *aad, - unsigned long aad_len, u8 *auth_tag, - unsigned long auth_tag_len); - -static void (*aesni_gcm_dec_tfm)(void *ctx, - struct gcm_context_data 
*data, u8 *out, - const u8 *in, unsigned long ciphertext_len, - u8 *iv, u8 *hash_subkey, const u8 *aad, - unsigned long aad_len, u8 *auth_tag, - unsigned long auth_tag_len); - static inline struct aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm) { @@ -794,6 +778,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, { struct crypto_aead *tfm = crypto_aead_reqtfm(req); unsigned long auth_tag_len = crypto_aead_authsize(tfm); + struct aesni_gcm_tfm_s *gcm_tfm = aesni_gcm_tfm; struct gcm_context_data data AESNI_ALIGN_ATTR; struct scatter_walk dst_sg_walk = {}; unsigned long left = req->cryptlen; @@ -811,6 +796,15 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, if (!enc) left -= auth_tag_len; +#ifdef CONFIG_AS_AVX2 + if (left < AVX_GEN4_OPTSIZE && gcm_tfm == &aesni_gcm_tfm_avx_gen4) + gcm_tfm = &aesni_gcm_tfm_avx_gen2; +#endif +#ifdef CONFIG_AS_AVX + if (left < AVX_GEN2_OPTSIZE && gcm_tfm == &aesni_gcm_tfm_avx_gen2) + gcm_tfm = &aesni_gcm_tfm_sse; +#endif + /* Linearize assoc, if not already linear */ if (req->src->length >= assoclen && req->src->length && (!PageHighMem(sg_page(req->src)) || @@ -835,7 +829,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, } kernel_fpu_begin(); - aesni_gcm_init(aes_ctx, &data, iv, + gcm_tfm->init(aes_ctx, &data, iv, hash_subkey, assoc, assoclen); if (req->src != req->dst) { while (left) { @@ -846,10 +840,10 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, len = min(srclen, dstlen); if (len) { if (enc) - aesni_gcm_enc_update(aes_ctx, &data, + gcm_tfm->enc_update(aes_ctx, &data, dst, src, len); else - aesni_gcm_dec_update(aes_ctx, &data, + gcm_tfm->dec_update(aes_ctx, &data, dst, src, len); } left -= len; @@ -867,10 +861,10 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, len = scatterwalk_clamp(&src_sg_walk, left); if (len) { if (enc) - aesni_gcm_enc_update(aes_ctx, &data, + gcm_tfm->enc_update(aes_ctx, &data, src, src, len); else - aesni_gcm_dec_update(aes_ctx, &data, + gcm_tfm->dec_update(aes_ctx, &data, src, src, len); } left -= len; @@ -879,7 +873,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, scatterwalk_done(&src_sg_walk, 1, left); } } - aesni_gcm_finalize(aes_ctx, &data, authTag, auth_tag_len); + gcm_tfm->finalize(aes_ctx, &data, authTag, auth_tag_len); kernel_fpu_end(); if (!assocmem) @@ -912,147 +906,15 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, static int gcmaes_encrypt(struct aead_request *req, unsigned int assoclen, u8 *hash_subkey, u8 *iv, void *aes_ctx) { - u8 one_entry_in_sg = 0; - u8 *src, *dst, *assoc; - struct crypto_aead *tfm = crypto_aead_reqtfm(req); - unsigned long auth_tag_len = crypto_aead_authsize(tfm); - struct scatter_walk src_sg_walk; - struct scatter_walk dst_sg_walk = {}; - struct gcm_context_data data AESNI_ALIGN_ATTR; - - if (((struct crypto_aes_ctx *)aes_ctx)->key_length != AES_KEYSIZE_128 || - aesni_gcm_enc_tfm == aesni_gcm_enc || - req->cryptlen < AVX_GEN2_OPTSIZE) { - return gcmaes_crypt_by_sg(true, req, assoclen, hash_subkey, iv, - aes_ctx); - } - if (sg_is_last(req->src) && - (!PageHighMem(sg_page(req->src)) || - req->src->offset + req->src->length <= PAGE_SIZE) && - sg_is_last(req->dst) && - (!PageHighMem(sg_page(req->dst)) || - req->dst->offset + req->dst->length <= PAGE_SIZE)) { - one_entry_in_sg = 1; - scatterwalk_start(&src_sg_walk, req->src); - assoc = scatterwalk_map(&src_sg_walk); - src = assoc + req->assoclen; - dst = src; - if (unlikely(req->src 
!= req->dst)) { - scatterwalk_start(&dst_sg_walk, req->dst); - dst = scatterwalk_map(&dst_sg_walk) + req->assoclen; - } - } else { - /* Allocate memory for src, dst, assoc */ - assoc = kmalloc(req->cryptlen + auth_tag_len + req->assoclen, - GFP_ATOMIC); - if (unlikely(!assoc)) - return -ENOMEM; - scatterwalk_map_and_copy(assoc, req->src, 0, - req->assoclen + req->cryptlen, 0); - src = assoc + req->assoclen; - dst = src; - } - - kernel_fpu_begin(); - aesni_gcm_enc_tfm(aes_ctx, &data, dst, src, req->cryptlen, iv, - hash_subkey, assoc, assoclen, - dst + req->cryptlen, auth_tag_len); - kernel_fpu_end(); - - /* The authTag (aka the Integrity Check Value) needs to be written - * back to the packet. */ - if (one_entry_in_sg) { - if (unlikely(req->src != req->dst)) { - scatterwalk_unmap(dst - req->assoclen); - scatterwalk_advance(&dst_sg_walk, req->dst->length); - scatterwalk_done(&dst_sg_walk, 1, 0); - } - scatterwalk_unmap(assoc); - scatterwalk_advance(&src_sg_walk, req->src->length); - scatterwalk_done(&src_sg_walk, req->src == req->dst, 0); - } else { - scatterwalk_map_and_copy(dst, req->dst, req->assoclen, - req->cryptlen + auth_tag_len, 1); - kfree(assoc); - } - return 0; + return gcmaes_crypt_by_sg(true, req, assoclen, hash_subkey, iv, + aes_ctx); } static int gcmaes_decrypt(struct aead_request *req, unsigned int assoclen, u8 *hash_subkey, u8 *iv, void *aes_ctx) { - u8 one_entry_in_sg = 0; - u8 *src, *dst, *assoc; - unsigned long tempCipherLen = 0; - struct crypto_aead *tfm = crypto_aead_reqtfm(req); - unsigned long auth_tag_len = crypto_aead_authsize(tfm); - u8 authTag[16]; - struct scatter_walk src_sg_walk; - struct scatter_walk dst_sg_walk = {}; - struct gcm_context_data data AESNI_ALIGN_ATTR; - int retval = 0; - - if (((struct crypto_aes_ctx *)aes_ctx)->key_length != AES_KEYSIZE_128 || - aesni_gcm_enc_tfm == aesni_gcm_enc || - req->cryptlen < AVX_GEN2_OPTSIZE) { - return gcmaes_crypt_by_sg(false, req, assoclen, hash_subkey, iv, - aes_ctx); - } - tempCipherLen = (unsigned long)(req->cryptlen - auth_tag_len); - - if (sg_is_last(req->src) && - (!PageHighMem(sg_page(req->src)) || - req->src->offset + req->src->length <= PAGE_SIZE) && - sg_is_last(req->dst) && req->dst->length && - (!PageHighMem(sg_page(req->dst)) || - req->dst->offset + req->dst->length <= PAGE_SIZE)) { - one_entry_in_sg = 1; - scatterwalk_start(&src_sg_walk, req->src); - assoc = scatterwalk_map(&src_sg_walk); - src = assoc + req->assoclen; - dst = src; - if (unlikely(req->src != req->dst)) { - scatterwalk_start(&dst_sg_walk, req->dst); - dst = scatterwalk_map(&dst_sg_walk) + req->assoclen; - } - } else { - /* Allocate memory for src, dst, assoc */ - assoc = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC); - if (!assoc) - return -ENOMEM; - scatterwalk_map_and_copy(assoc, req->src, 0, - req->assoclen + req->cryptlen, 0); - src = assoc + req->assoclen; - dst = src; - } - - - kernel_fpu_begin(); - aesni_gcm_dec_tfm(aes_ctx, &data, dst, src, tempCipherLen, iv, - hash_subkey, assoc, assoclen, - authTag, auth_tag_len); - kernel_fpu_end(); - - /* Compare generated tag with passed in tag. */ - retval = crypto_memneq(src + tempCipherLen, authTag, auth_tag_len) ? 
- -EBADMSG : 0; - - if (one_entry_in_sg) { - if (unlikely(req->src != req->dst)) { - scatterwalk_unmap(dst - req->assoclen); - scatterwalk_advance(&dst_sg_walk, req->dst->length); - scatterwalk_done(&dst_sg_walk, 1, 0); - } - scatterwalk_unmap(assoc); - scatterwalk_advance(&src_sg_walk, req->src->length); - scatterwalk_done(&src_sg_walk, req->src == req->dst, 0); - } else { - scatterwalk_map_and_copy(dst, req->dst, req->assoclen, - tempCipherLen, 1); - kfree(assoc); - } - return retval; - + return gcmaes_crypt_by_sg(false, req, assoclen, hash_subkey, iv, + aes_ctx); } static int helper_rfc4106_encrypt(struct aead_request *req) @@ -1420,21 +1282,18 @@ static int __init aesni_init(void) #ifdef CONFIG_AS_AVX2 if (boot_cpu_has(X86_FEATURE_AVX2)) { pr_info("AVX2 version of gcm_enc/dec engaged.\n"); - aesni_gcm_enc_tfm = aesni_gcm_enc_avx2; - aesni_gcm_dec_tfm = aesni_gcm_dec_avx2; + aesni_gcm_tfm = &aesni_gcm_tfm_avx_gen4; } else #endif #ifdef CONFIG_AS_AVX if (boot_cpu_has(X86_FEATURE_AVX)) { pr_info("AVX version of gcm_enc/dec engaged.\n"); - aesni_gcm_enc_tfm = aesni_gcm_enc_avx; - aesni_gcm_dec_tfm = aesni_gcm_dec_avx; + aesni_gcm_tfm = &aesni_gcm_tfm_avx_gen2; } else #endif { pr_info("SSE version of gcm_enc/dec engaged.\n"); - aesni_gcm_enc_tfm = aesni_gcm_enc; - aesni_gcm_dec_tfm = aesni_gcm_dec; + aesni_gcm_tfm = &aesni_gcm_tfm_sse; } aesni_ctr_enc_tfm = aesni_ctr_enc; #ifdef CONFIG_AS_AVX diff --git a/arch/x86/crypto/chacha-avx2-x86_64.S b/arch/x86/crypto/chacha-avx2-x86_64.S new file mode 100644 index 000000000000..32903fd450af --- /dev/null +++ b/arch/x86/crypto/chacha-avx2-x86_64.S @@ -0,0 +1,1025 @@ +/* + * ChaCha 256-bit cipher algorithm, x64 AVX2 functions + * + * Copyright (C) 2015 Martin Willi + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + */ + +#include + +.section .rodata.cst32.ROT8, "aM", @progbits, 32 +.align 32 +ROT8: .octa 0x0e0d0c0f0a09080b0605040702010003 + .octa 0x0e0d0c0f0a09080b0605040702010003 + +.section .rodata.cst32.ROT16, "aM", @progbits, 32 +.align 32 +ROT16: .octa 0x0d0c0f0e09080b0a0504070601000302 + .octa 0x0d0c0f0e09080b0a0504070601000302 + +.section .rodata.cst32.CTRINC, "aM", @progbits, 32 +.align 32 +CTRINC: .octa 0x00000003000000020000000100000000 + .octa 0x00000007000000060000000500000004 + +.section .rodata.cst32.CTR2BL, "aM", @progbits, 32 +.align 32 +CTR2BL: .octa 0x00000000000000000000000000000000 + .octa 0x00000000000000000000000000000001 + +.section .rodata.cst32.CTR4BL, "aM", @progbits, 32 +.align 32 +CTR4BL: .octa 0x00000000000000000000000000000002 + .octa 0x00000000000000000000000000000003 + +.text + +ENTRY(chacha_2block_xor_avx2) + # %rdi: Input state matrix, s + # %rsi: up to 2 data blocks output, o + # %rdx: up to 2 data blocks input, i + # %rcx: input/output length in bytes + # %r8d: nrounds + + # This function encrypts two ChaCha blocks by loading the state + # matrix twice across four AVX registers. It performs matrix operations + # on four words in each matrix in parallel, but requires shuffling to + # rearrange the words after each round. 
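In scalar terms, each annotated step in this routine is one line of the ChaCha quarter-round, applied to a whole 128-bit row of the state at a time. A minimal C sketch of the quarter-round that the vpaddd/vpxor/vpshufb/shift sequences implement (rotl32 and quarterround are illustrative helper names, not the glue code's API):

	#include <stdint.h>

	static uint32_t rotl32(uint32_t v, int n)
	{
		return (v << n) | (v >> (32 - n));
	}

	/* One ChaCha quarter-round; the vector code runs four of these
	 * in parallel, one per 32-bit word of a 128-bit row. */
	static void quarterround(uint32_t *a, uint32_t *b,
				 uint32_t *c, uint32_t *d)
	{
		*a += *b; *d = rotl32(*d ^ *a, 16);
		*c += *d; *b = rotl32(*b ^ *c, 12);
		*a += *b; *d = rotl32(*d ^ *a, 8);
		*c += *d; *b = rotl32(*b ^ *c, 7);
	}

The vpshufd $0x39/$0x4e/$0x93 shuffles between the two half-rounds rotate rows 1-3 of the 4x4 state so that the same row-wise quarter-round alternately covers the columns and then the diagonals; that is the word rearrangement the comment above refers to.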
+ + vzeroupper + + # x0..3[0-2] = s0..3 + vbroadcasti128 0x00(%rdi),%ymm0 + vbroadcasti128 0x10(%rdi),%ymm1 + vbroadcasti128 0x20(%rdi),%ymm2 + vbroadcasti128 0x30(%rdi),%ymm3 + + vpaddd CTR2BL(%rip),%ymm3,%ymm3 + + vmovdqa %ymm0,%ymm8 + vmovdqa %ymm1,%ymm9 + vmovdqa %ymm2,%ymm10 + vmovdqa %ymm3,%ymm11 + + vmovdqa ROT8(%rip),%ymm4 + vmovdqa ROT16(%rip),%ymm5 + + mov %rcx,%rax + +.Ldoubleround: + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm5,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm6 + vpslld $12,%ymm6,%ymm6 + vpsrld $20,%ymm1,%ymm1 + vpor %ymm6,%ymm1,%ymm1 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm4,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm7 + vpslld $7,%ymm7,%ymm7 + vpsrld $25,%ymm1,%ymm1 + vpor %ymm7,%ymm1,%ymm1 + + # x1 = shuffle32(x1, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm1,%ymm1 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + # x3 = shuffle32(x3, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm3,%ymm3 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm5,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm6 + vpslld $12,%ymm6,%ymm6 + vpsrld $20,%ymm1,%ymm1 + vpor %ymm6,%ymm1,%ymm1 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm4,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm7 + vpslld $7,%ymm7,%ymm7 + vpsrld $25,%ymm1,%ymm1 + vpor %ymm7,%ymm1,%ymm1 + + # x1 = shuffle32(x1, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm1,%ymm1 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + # x3 = shuffle32(x3, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm3,%ymm3 + + sub $2,%r8d + jnz .Ldoubleround + + # o0 = i0 ^ (x0 + s0) + vpaddd %ymm8,%ymm0,%ymm7 + cmp $0x10,%rax + jl .Lxorpart2 + vpxor 0x00(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x00(%rsi) + vextracti128 $1,%ymm7,%xmm0 + # o1 = i1 ^ (x1 + s1) + vpaddd %ymm9,%ymm1,%ymm7 + cmp $0x20,%rax + jl .Lxorpart2 + vpxor 0x10(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x10(%rsi) + vextracti128 $1,%ymm7,%xmm1 + # o2 = i2 ^ (x2 + s2) + vpaddd %ymm10,%ymm2,%ymm7 + cmp $0x30,%rax + jl .Lxorpart2 + vpxor 0x20(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x20(%rsi) + vextracti128 $1,%ymm7,%xmm2 + # o3 = i3 ^ (x3 + s3) + vpaddd %ymm11,%ymm3,%ymm7 + cmp $0x40,%rax + jl .Lxorpart2 + vpxor 0x30(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x30(%rsi) + vextracti128 $1,%ymm7,%xmm3 + + # xor and write second block + vmovdqa %xmm0,%xmm7 + cmp $0x50,%rax + jl .Lxorpart2 + vpxor 0x40(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x40(%rsi) + + vmovdqa %xmm1,%xmm7 + cmp $0x60,%rax + jl .Lxorpart2 + vpxor 0x50(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x50(%rsi) + + vmovdqa %xmm2,%xmm7 + cmp $0x70,%rax + jl .Lxorpart2 + vpxor 0x60(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x60(%rsi) + + vmovdqa %xmm3,%xmm7 + cmp $0x80,%rax + jl .Lxorpart2 + vpxor 0x70(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x70(%rsi) + +.Ldone2: + vzeroupper + ret + +.Lxorpart2: + # xor remaining bytes from partial register into output + mov %rax,%r9 + and $0x0f,%r9 + jz .Ldone2 + and $~0x0f,%rax + + mov %rsi,%r11 + + lea 8(%rsp),%r10 + sub $0x10,%rsp + and $~31,%rsp + + lea (%rdx,%rax),%rsi + mov 
%rsp,%rdi + mov %r9,%rcx + rep movsb + + vpxor 0x00(%rsp),%xmm7,%xmm7 + vmovdqa %xmm7,0x00(%rsp) + + mov %rsp,%rsi + lea (%r11,%rax),%rdi + mov %r9,%rcx + rep movsb + + lea -8(%r10),%rsp + jmp .Ldone2 + +ENDPROC(chacha_2block_xor_avx2) + +ENTRY(chacha_4block_xor_avx2) + # %rdi: Input state matrix, s + # %rsi: up to 4 data blocks output, o + # %rdx: up to 4 data blocks input, i + # %rcx: input/output length in bytes + # %r8d: nrounds + + # This function encrypts four ChaCha blocks by loading the state + # matrix four times across eight AVX registers. It performs matrix + # operations on four words in two matrices in parallel, sequentially + # to the operations on the four words of the other two matrices. The + # required word shuffling has a rather high latency, we can do the + # arithmetic on two matrix-pairs without much slowdown. + + vzeroupper + + # x0..3[0-4] = s0..3 + vbroadcasti128 0x00(%rdi),%ymm0 + vbroadcasti128 0x10(%rdi),%ymm1 + vbroadcasti128 0x20(%rdi),%ymm2 + vbroadcasti128 0x30(%rdi),%ymm3 + + vmovdqa %ymm0,%ymm4 + vmovdqa %ymm1,%ymm5 + vmovdqa %ymm2,%ymm6 + vmovdqa %ymm3,%ymm7 + + vpaddd CTR2BL(%rip),%ymm3,%ymm3 + vpaddd CTR4BL(%rip),%ymm7,%ymm7 + + vmovdqa %ymm0,%ymm11 + vmovdqa %ymm1,%ymm12 + vmovdqa %ymm2,%ymm13 + vmovdqa %ymm3,%ymm14 + vmovdqa %ymm7,%ymm15 + + vmovdqa ROT8(%rip),%ymm8 + vmovdqa ROT16(%rip),%ymm9 + + mov %rcx,%rax + +.Ldoubleround4: + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm9,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxor %ymm4,%ymm7,%ymm7 + vpshufb %ymm9,%ymm7,%ymm7 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm10 + vpslld $12,%ymm10,%ymm10 + vpsrld $20,%ymm1,%ymm1 + vpor %ymm10,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxor %ymm6,%ymm5,%ymm5 + vmovdqa %ymm5,%ymm10 + vpslld $12,%ymm10,%ymm10 + vpsrld $20,%ymm5,%ymm5 + vpor %ymm10,%ymm5,%ymm5 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm8,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxor %ymm4,%ymm7,%ymm7 + vpshufb %ymm8,%ymm7,%ymm7 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm10 + vpslld $7,%ymm10,%ymm10 + vpsrld $25,%ymm1,%ymm1 + vpor %ymm10,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxor %ymm6,%ymm5,%ymm5 + vmovdqa %ymm5,%ymm10 + vpslld $7,%ymm10,%ymm10 + vpsrld $25,%ymm5,%ymm5 + vpor %ymm10,%ymm5,%ymm5 + + # x1 = shuffle32(x1, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm1,%ymm1 + vpshufd $0x39,%ymm5,%ymm5 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + vpshufd $0x4e,%ymm6,%ymm6 + # x3 = shuffle32(x3, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm3,%ymm3 + vpshufd $0x93,%ymm7,%ymm7 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm9,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxor %ymm4,%ymm7,%ymm7 + vpshufb %ymm9,%ymm7,%ymm7 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm10 + vpslld $12,%ymm10,%ymm10 + vpsrld $20,%ymm1,%ymm1 + vpor %ymm10,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxor %ymm6,%ymm5,%ymm5 + vmovdqa %ymm5,%ymm10 + vpslld $12,%ymm10,%ymm10 + vpsrld $20,%ymm5,%ymm5 + vpor %ymm10,%ymm5,%ymm5 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxor %ymm0,%ymm3,%ymm3 + vpshufb %ymm8,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxor %ymm4,%ymm7,%ymm7 + vpshufb %ymm8,%ymm7,%ymm7 + 
+ # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxor %ymm2,%ymm1,%ymm1 + vmovdqa %ymm1,%ymm10 + vpslld $7,%ymm10,%ymm10 + vpsrld $25,%ymm1,%ymm1 + vpor %ymm10,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxor %ymm6,%ymm5,%ymm5 + vmovdqa %ymm5,%ymm10 + vpslld $7,%ymm10,%ymm10 + vpsrld $25,%ymm5,%ymm5 + vpor %ymm10,%ymm5,%ymm5 + + # x1 = shuffle32(x1, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm1,%ymm1 + vpshufd $0x93,%ymm5,%ymm5 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + vpshufd $0x4e,%ymm6,%ymm6 + # x3 = shuffle32(x3, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm3,%ymm3 + vpshufd $0x39,%ymm7,%ymm7 + + sub $2,%r8d + jnz .Ldoubleround4 + + # o0 = i0 ^ (x0 + s0), first block + vpaddd %ymm11,%ymm0,%ymm10 + cmp $0x10,%rax + jl .Lxorpart4 + vpxor 0x00(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x00(%rsi) + vextracti128 $1,%ymm10,%xmm0 + # o1 = i1 ^ (x1 + s1), first block + vpaddd %ymm12,%ymm1,%ymm10 + cmp $0x20,%rax + jl .Lxorpart4 + vpxor 0x10(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x10(%rsi) + vextracti128 $1,%ymm10,%xmm1 + # o2 = i2 ^ (x2 + s2), first block + vpaddd %ymm13,%ymm2,%ymm10 + cmp $0x30,%rax + jl .Lxorpart4 + vpxor 0x20(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x20(%rsi) + vextracti128 $1,%ymm10,%xmm2 + # o3 = i3 ^ (x3 + s3), first block + vpaddd %ymm14,%ymm3,%ymm10 + cmp $0x40,%rax + jl .Lxorpart4 + vpxor 0x30(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x30(%rsi) + vextracti128 $1,%ymm10,%xmm3 + + # xor and write second block + vmovdqa %xmm0,%xmm10 + cmp $0x50,%rax + jl .Lxorpart4 + vpxor 0x40(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x40(%rsi) + + vmovdqa %xmm1,%xmm10 + cmp $0x60,%rax + jl .Lxorpart4 + vpxor 0x50(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x50(%rsi) + + vmovdqa %xmm2,%xmm10 + cmp $0x70,%rax + jl .Lxorpart4 + vpxor 0x60(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x60(%rsi) + + vmovdqa %xmm3,%xmm10 + cmp $0x80,%rax + jl .Lxorpart4 + vpxor 0x70(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x70(%rsi) + + # o0 = i0 ^ (x0 + s0), third block + vpaddd %ymm11,%ymm4,%ymm10 + cmp $0x90,%rax + jl .Lxorpart4 + vpxor 0x80(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x80(%rsi) + vextracti128 $1,%ymm10,%xmm4 + # o1 = i1 ^ (x1 + s1), third block + vpaddd %ymm12,%ymm5,%ymm10 + cmp $0xa0,%rax + jl .Lxorpart4 + vpxor 0x90(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x90(%rsi) + vextracti128 $1,%ymm10,%xmm5 + # o2 = i2 ^ (x2 + s2), third block + vpaddd %ymm13,%ymm6,%ymm10 + cmp $0xb0,%rax + jl .Lxorpart4 + vpxor 0xa0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xa0(%rsi) + vextracti128 $1,%ymm10,%xmm6 + # o3 = i3 ^ (x3 + s3), third block + vpaddd %ymm15,%ymm7,%ymm10 + cmp $0xc0,%rax + jl .Lxorpart4 + vpxor 0xb0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xb0(%rsi) + vextracti128 $1,%ymm10,%xmm7 + + # xor and write fourth block + vmovdqa %xmm4,%xmm10 + cmp $0xd0,%rax + jl .Lxorpart4 + vpxor 0xc0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xc0(%rsi) + + vmovdqa %xmm5,%xmm10 + cmp $0xe0,%rax + jl .Lxorpart4 + vpxor 0xd0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xd0(%rsi) + + vmovdqa %xmm6,%xmm10 + cmp $0xf0,%rax + jl .Lxorpart4 + vpxor 0xe0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xe0(%rsi) + + vmovdqa %xmm7,%xmm10 + cmp $0x100,%rax + jl .Lxorpart4 + vpxor 0xf0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xf0(%rsi) + +.Ldone4: + vzeroupper + ret + +.Lxorpart4: + # xor remaining bytes from partial register into output + mov %rax,%r9 + and $0x0f,%r9 + jz .Ldone4 + and $~0x0f,%rax + + mov %rsi,%r11 + + lea 8(%rsp),%r10 + sub $0x10,%rsp + and $~31,%rsp + + lea (%rdx,%rax),%rsi + mov %rsp,%rdi + mov %r9,%rcx + rep movsb + + vpxor 0x00(%rsp),%xmm10,%xmm10 + vmovdqa 
%xmm10,0x00(%rsp) + + mov %rsp,%rsi + lea (%r11,%rax),%rdi + mov %r9,%rcx + rep movsb + + lea -8(%r10),%rsp + jmp .Ldone4 + +ENDPROC(chacha_4block_xor_avx2) + +ENTRY(chacha_8block_xor_avx2) + # %rdi: Input state matrix, s + # %rsi: up to 8 data blocks output, o + # %rdx: up to 8 data blocks input, i + # %rcx: input/output length in bytes + # %r8d: nrounds + + # This function encrypts eight consecutive ChaCha blocks by loading + # the state matrix in AVX registers eight times. As we need some + # scratch registers, we save the first four registers on the stack. The + # algorithm performs each operation on the corresponding word of each + # state matrix, hence requires no word shuffling. For final XORing step + # we transpose the matrix by interleaving 32-, 64- and then 128-bit + # words, which allows us to do XOR in AVX registers. 8/16-bit word + # rotation is done with the slightly better performing byte shuffling, + # 7/12-bit word rotation uses traditional shift+OR. + + vzeroupper + # 4 * 32 byte stack, 32-byte aligned + lea 8(%rsp),%r10 + and $~31, %rsp + sub $0x80, %rsp + mov %rcx,%rax + + # x0..15[0-7] = s[0..15] + vpbroadcastd 0x00(%rdi),%ymm0 + vpbroadcastd 0x04(%rdi),%ymm1 + vpbroadcastd 0x08(%rdi),%ymm2 + vpbroadcastd 0x0c(%rdi),%ymm3 + vpbroadcastd 0x10(%rdi),%ymm4 + vpbroadcastd 0x14(%rdi),%ymm5 + vpbroadcastd 0x18(%rdi),%ymm6 + vpbroadcastd 0x1c(%rdi),%ymm7 + vpbroadcastd 0x20(%rdi),%ymm8 + vpbroadcastd 0x24(%rdi),%ymm9 + vpbroadcastd 0x28(%rdi),%ymm10 + vpbroadcastd 0x2c(%rdi),%ymm11 + vpbroadcastd 0x30(%rdi),%ymm12 + vpbroadcastd 0x34(%rdi),%ymm13 + vpbroadcastd 0x38(%rdi),%ymm14 + vpbroadcastd 0x3c(%rdi),%ymm15 + # x0..3 on stack + vmovdqa %ymm0,0x00(%rsp) + vmovdqa %ymm1,0x20(%rsp) + vmovdqa %ymm2,0x40(%rsp) + vmovdqa %ymm3,0x60(%rsp) + + vmovdqa CTRINC(%rip),%ymm1 + vmovdqa ROT8(%rip),%ymm2 + vmovdqa ROT16(%rip),%ymm3 + + # x12 += counter values 0-3 + vpaddd %ymm1,%ymm12,%ymm12 + +.Ldoubleround8: + # x0 += x4, x12 = rotl32(x12 ^ x0, 16) + vpaddd 0x00(%rsp),%ymm4,%ymm0 + vmovdqa %ymm0,0x00(%rsp) + vpxor %ymm0,%ymm12,%ymm12 + vpshufb %ymm3,%ymm12,%ymm12 + # x1 += x5, x13 = rotl32(x13 ^ x1, 16) + vpaddd 0x20(%rsp),%ymm5,%ymm0 + vmovdqa %ymm0,0x20(%rsp) + vpxor %ymm0,%ymm13,%ymm13 + vpshufb %ymm3,%ymm13,%ymm13 + # x2 += x6, x14 = rotl32(x14 ^ x2, 16) + vpaddd 0x40(%rsp),%ymm6,%ymm0 + vmovdqa %ymm0,0x40(%rsp) + vpxor %ymm0,%ymm14,%ymm14 + vpshufb %ymm3,%ymm14,%ymm14 + # x3 += x7, x15 = rotl32(x15 ^ x3, 16) + vpaddd 0x60(%rsp),%ymm7,%ymm0 + vmovdqa %ymm0,0x60(%rsp) + vpxor %ymm0,%ymm15,%ymm15 + vpshufb %ymm3,%ymm15,%ymm15 + + # x8 += x12, x4 = rotl32(x4 ^ x8, 12) + vpaddd %ymm12,%ymm8,%ymm8 + vpxor %ymm8,%ymm4,%ymm4 + vpslld $12,%ymm4,%ymm0 + vpsrld $20,%ymm4,%ymm4 + vpor %ymm0,%ymm4,%ymm4 + # x9 += x13, x5 = rotl32(x5 ^ x9, 12) + vpaddd %ymm13,%ymm9,%ymm9 + vpxor %ymm9,%ymm5,%ymm5 + vpslld $12,%ymm5,%ymm0 + vpsrld $20,%ymm5,%ymm5 + vpor %ymm0,%ymm5,%ymm5 + # x10 += x14, x6 = rotl32(x6 ^ x10, 12) + vpaddd %ymm14,%ymm10,%ymm10 + vpxor %ymm10,%ymm6,%ymm6 + vpslld $12,%ymm6,%ymm0 + vpsrld $20,%ymm6,%ymm6 + vpor %ymm0,%ymm6,%ymm6 + # x11 += x15, x7 = rotl32(x7 ^ x11, 12) + vpaddd %ymm15,%ymm11,%ymm11 + vpxor %ymm11,%ymm7,%ymm7 + vpslld $12,%ymm7,%ymm0 + vpsrld $20,%ymm7,%ymm7 + vpor %ymm0,%ymm7,%ymm7 + + # x0 += x4, x12 = rotl32(x12 ^ x0, 8) + vpaddd 0x00(%rsp),%ymm4,%ymm0 + vmovdqa %ymm0,0x00(%rsp) + vpxor %ymm0,%ymm12,%ymm12 + vpshufb %ymm2,%ymm12,%ymm12 + # x1 += x5, x13 = rotl32(x13 ^ x1, 8) + vpaddd 0x20(%rsp),%ymm5,%ymm0 + vmovdqa %ymm0,0x20(%rsp) + vpxor %ymm0,%ymm13,%ymm13 + 
vpshufb %ymm2,%ymm13,%ymm13 + # x2 += x6, x14 = rotl32(x14 ^ x2, 8) + vpaddd 0x40(%rsp),%ymm6,%ymm0 + vmovdqa %ymm0,0x40(%rsp) + vpxor %ymm0,%ymm14,%ymm14 + vpshufb %ymm2,%ymm14,%ymm14 + # x3 += x7, x15 = rotl32(x15 ^ x3, 8) + vpaddd 0x60(%rsp),%ymm7,%ymm0 + vmovdqa %ymm0,0x60(%rsp) + vpxor %ymm0,%ymm15,%ymm15 + vpshufb %ymm2,%ymm15,%ymm15 + + # x8 += x12, x4 = rotl32(x4 ^ x8, 7) + vpaddd %ymm12,%ymm8,%ymm8 + vpxor %ymm8,%ymm4,%ymm4 + vpslld $7,%ymm4,%ymm0 + vpsrld $25,%ymm4,%ymm4 + vpor %ymm0,%ymm4,%ymm4 + # x9 += x13, x5 = rotl32(x5 ^ x9, 7) + vpaddd %ymm13,%ymm9,%ymm9 + vpxor %ymm9,%ymm5,%ymm5 + vpslld $7,%ymm5,%ymm0 + vpsrld $25,%ymm5,%ymm5 + vpor %ymm0,%ymm5,%ymm5 + # x10 += x14, x6 = rotl32(x6 ^ x10, 7) + vpaddd %ymm14,%ymm10,%ymm10 + vpxor %ymm10,%ymm6,%ymm6 + vpslld $7,%ymm6,%ymm0 + vpsrld $25,%ymm6,%ymm6 + vpor %ymm0,%ymm6,%ymm6 + # x11 += x15, x7 = rotl32(x7 ^ x11, 7) + vpaddd %ymm15,%ymm11,%ymm11 + vpxor %ymm11,%ymm7,%ymm7 + vpslld $7,%ymm7,%ymm0 + vpsrld $25,%ymm7,%ymm7 + vpor %ymm0,%ymm7,%ymm7 + + # x0 += x5, x15 = rotl32(x15 ^ x0, 16) + vpaddd 0x00(%rsp),%ymm5,%ymm0 + vmovdqa %ymm0,0x00(%rsp) + vpxor %ymm0,%ymm15,%ymm15 + vpshufb %ymm3,%ymm15,%ymm15 + # x1 += x6, x12 = rotl32(x12 ^ x1, 16)%ymm0 + vpaddd 0x20(%rsp),%ymm6,%ymm0 + vmovdqa %ymm0,0x20(%rsp) + vpxor %ymm0,%ymm12,%ymm12 + vpshufb %ymm3,%ymm12,%ymm12 + # x2 += x7, x13 = rotl32(x13 ^ x2, 16) + vpaddd 0x40(%rsp),%ymm7,%ymm0 + vmovdqa %ymm0,0x40(%rsp) + vpxor %ymm0,%ymm13,%ymm13 + vpshufb %ymm3,%ymm13,%ymm13 + # x3 += x4, x14 = rotl32(x14 ^ x3, 16) + vpaddd 0x60(%rsp),%ymm4,%ymm0 + vmovdqa %ymm0,0x60(%rsp) + vpxor %ymm0,%ymm14,%ymm14 + vpshufb %ymm3,%ymm14,%ymm14 + + # x10 += x15, x5 = rotl32(x5 ^ x10, 12) + vpaddd %ymm15,%ymm10,%ymm10 + vpxor %ymm10,%ymm5,%ymm5 + vpslld $12,%ymm5,%ymm0 + vpsrld $20,%ymm5,%ymm5 + vpor %ymm0,%ymm5,%ymm5 + # x11 += x12, x6 = rotl32(x6 ^ x11, 12) + vpaddd %ymm12,%ymm11,%ymm11 + vpxor %ymm11,%ymm6,%ymm6 + vpslld $12,%ymm6,%ymm0 + vpsrld $20,%ymm6,%ymm6 + vpor %ymm0,%ymm6,%ymm6 + # x8 += x13, x7 = rotl32(x7 ^ x8, 12) + vpaddd %ymm13,%ymm8,%ymm8 + vpxor %ymm8,%ymm7,%ymm7 + vpslld $12,%ymm7,%ymm0 + vpsrld $20,%ymm7,%ymm7 + vpor %ymm0,%ymm7,%ymm7 + # x9 += x14, x4 = rotl32(x4 ^ x9, 12) + vpaddd %ymm14,%ymm9,%ymm9 + vpxor %ymm9,%ymm4,%ymm4 + vpslld $12,%ymm4,%ymm0 + vpsrld $20,%ymm4,%ymm4 + vpor %ymm0,%ymm4,%ymm4 + + # x0 += x5, x15 = rotl32(x15 ^ x0, 8) + vpaddd 0x00(%rsp),%ymm5,%ymm0 + vmovdqa %ymm0,0x00(%rsp) + vpxor %ymm0,%ymm15,%ymm15 + vpshufb %ymm2,%ymm15,%ymm15 + # x1 += x6, x12 = rotl32(x12 ^ x1, 8) + vpaddd 0x20(%rsp),%ymm6,%ymm0 + vmovdqa %ymm0,0x20(%rsp) + vpxor %ymm0,%ymm12,%ymm12 + vpshufb %ymm2,%ymm12,%ymm12 + # x2 += x7, x13 = rotl32(x13 ^ x2, 8) + vpaddd 0x40(%rsp),%ymm7,%ymm0 + vmovdqa %ymm0,0x40(%rsp) + vpxor %ymm0,%ymm13,%ymm13 + vpshufb %ymm2,%ymm13,%ymm13 + # x3 += x4, x14 = rotl32(x14 ^ x3, 8) + vpaddd 0x60(%rsp),%ymm4,%ymm0 + vmovdqa %ymm0,0x60(%rsp) + vpxor %ymm0,%ymm14,%ymm14 + vpshufb %ymm2,%ymm14,%ymm14 + + # x10 += x15, x5 = rotl32(x5 ^ x10, 7) + vpaddd %ymm15,%ymm10,%ymm10 + vpxor %ymm10,%ymm5,%ymm5 + vpslld $7,%ymm5,%ymm0 + vpsrld $25,%ymm5,%ymm5 + vpor %ymm0,%ymm5,%ymm5 + # x11 += x12, x6 = rotl32(x6 ^ x11, 7) + vpaddd %ymm12,%ymm11,%ymm11 + vpxor %ymm11,%ymm6,%ymm6 + vpslld $7,%ymm6,%ymm0 + vpsrld $25,%ymm6,%ymm6 + vpor %ymm0,%ymm6,%ymm6 + # x8 += x13, x7 = rotl32(x7 ^ x8, 7) + vpaddd %ymm13,%ymm8,%ymm8 + vpxor %ymm8,%ymm7,%ymm7 + vpslld $7,%ymm7,%ymm0 + vpsrld $25,%ymm7,%ymm7 + vpor %ymm0,%ymm7,%ymm7 + # x9 += x14, x4 = rotl32(x4 ^ x9, 7) + vpaddd 
%ymm14,%ymm9,%ymm9 + vpxor %ymm9,%ymm4,%ymm4 + vpslld $7,%ymm4,%ymm0 + vpsrld $25,%ymm4,%ymm4 + vpor %ymm0,%ymm4,%ymm4 + + sub $2,%r8d + jnz .Ldoubleround8 + + # x0..15[0-3] += s[0..15] + vpbroadcastd 0x00(%rdi),%ymm0 + vpaddd 0x00(%rsp),%ymm0,%ymm0 + vmovdqa %ymm0,0x00(%rsp) + vpbroadcastd 0x04(%rdi),%ymm0 + vpaddd 0x20(%rsp),%ymm0,%ymm0 + vmovdqa %ymm0,0x20(%rsp) + vpbroadcastd 0x08(%rdi),%ymm0 + vpaddd 0x40(%rsp),%ymm0,%ymm0 + vmovdqa %ymm0,0x40(%rsp) + vpbroadcastd 0x0c(%rdi),%ymm0 + vpaddd 0x60(%rsp),%ymm0,%ymm0 + vmovdqa %ymm0,0x60(%rsp) + vpbroadcastd 0x10(%rdi),%ymm0 + vpaddd %ymm0,%ymm4,%ymm4 + vpbroadcastd 0x14(%rdi),%ymm0 + vpaddd %ymm0,%ymm5,%ymm5 + vpbroadcastd 0x18(%rdi),%ymm0 + vpaddd %ymm0,%ymm6,%ymm6 + vpbroadcastd 0x1c(%rdi),%ymm0 + vpaddd %ymm0,%ymm7,%ymm7 + vpbroadcastd 0x20(%rdi),%ymm0 + vpaddd %ymm0,%ymm8,%ymm8 + vpbroadcastd 0x24(%rdi),%ymm0 + vpaddd %ymm0,%ymm9,%ymm9 + vpbroadcastd 0x28(%rdi),%ymm0 + vpaddd %ymm0,%ymm10,%ymm10 + vpbroadcastd 0x2c(%rdi),%ymm0 + vpaddd %ymm0,%ymm11,%ymm11 + vpbroadcastd 0x30(%rdi),%ymm0 + vpaddd %ymm0,%ymm12,%ymm12 + vpbroadcastd 0x34(%rdi),%ymm0 + vpaddd %ymm0,%ymm13,%ymm13 + vpbroadcastd 0x38(%rdi),%ymm0 + vpaddd %ymm0,%ymm14,%ymm14 + vpbroadcastd 0x3c(%rdi),%ymm0 + vpaddd %ymm0,%ymm15,%ymm15 + + # x12 += counter values 0-3 + vpaddd %ymm1,%ymm12,%ymm12 + + # interleave 32-bit words in state n, n+1 + vmovdqa 0x00(%rsp),%ymm0 + vmovdqa 0x20(%rsp),%ymm1 + vpunpckldq %ymm1,%ymm0,%ymm2 + vpunpckhdq %ymm1,%ymm0,%ymm1 + vmovdqa %ymm2,0x00(%rsp) + vmovdqa %ymm1,0x20(%rsp) + vmovdqa 0x40(%rsp),%ymm0 + vmovdqa 0x60(%rsp),%ymm1 + vpunpckldq %ymm1,%ymm0,%ymm2 + vpunpckhdq %ymm1,%ymm0,%ymm1 + vmovdqa %ymm2,0x40(%rsp) + vmovdqa %ymm1,0x60(%rsp) + vmovdqa %ymm4,%ymm0 + vpunpckldq %ymm5,%ymm0,%ymm4 + vpunpckhdq %ymm5,%ymm0,%ymm5 + vmovdqa %ymm6,%ymm0 + vpunpckldq %ymm7,%ymm0,%ymm6 + vpunpckhdq %ymm7,%ymm0,%ymm7 + vmovdqa %ymm8,%ymm0 + vpunpckldq %ymm9,%ymm0,%ymm8 + vpunpckhdq %ymm9,%ymm0,%ymm9 + vmovdqa %ymm10,%ymm0 + vpunpckldq %ymm11,%ymm0,%ymm10 + vpunpckhdq %ymm11,%ymm0,%ymm11 + vmovdqa %ymm12,%ymm0 + vpunpckldq %ymm13,%ymm0,%ymm12 + vpunpckhdq %ymm13,%ymm0,%ymm13 + vmovdqa %ymm14,%ymm0 + vpunpckldq %ymm15,%ymm0,%ymm14 + vpunpckhdq %ymm15,%ymm0,%ymm15 + + # interleave 64-bit words in state n, n+2 + vmovdqa 0x00(%rsp),%ymm0 + vmovdqa 0x40(%rsp),%ymm2 + vpunpcklqdq %ymm2,%ymm0,%ymm1 + vpunpckhqdq %ymm2,%ymm0,%ymm2 + vmovdqa %ymm1,0x00(%rsp) + vmovdqa %ymm2,0x40(%rsp) + vmovdqa 0x20(%rsp),%ymm0 + vmovdqa 0x60(%rsp),%ymm2 + vpunpcklqdq %ymm2,%ymm0,%ymm1 + vpunpckhqdq %ymm2,%ymm0,%ymm2 + vmovdqa %ymm1,0x20(%rsp) + vmovdqa %ymm2,0x60(%rsp) + vmovdqa %ymm4,%ymm0 + vpunpcklqdq %ymm6,%ymm0,%ymm4 + vpunpckhqdq %ymm6,%ymm0,%ymm6 + vmovdqa %ymm5,%ymm0 + vpunpcklqdq %ymm7,%ymm0,%ymm5 + vpunpckhqdq %ymm7,%ymm0,%ymm7 + vmovdqa %ymm8,%ymm0 + vpunpcklqdq %ymm10,%ymm0,%ymm8 + vpunpckhqdq %ymm10,%ymm0,%ymm10 + vmovdqa %ymm9,%ymm0 + vpunpcklqdq %ymm11,%ymm0,%ymm9 + vpunpckhqdq %ymm11,%ymm0,%ymm11 + vmovdqa %ymm12,%ymm0 + vpunpcklqdq %ymm14,%ymm0,%ymm12 + vpunpckhqdq %ymm14,%ymm0,%ymm14 + vmovdqa %ymm13,%ymm0 + vpunpcklqdq %ymm15,%ymm0,%ymm13 + vpunpckhqdq %ymm15,%ymm0,%ymm15 + + # interleave 128-bit words in state n, n+4 + # xor/write first four blocks + vmovdqa 0x00(%rsp),%ymm1 + vperm2i128 $0x20,%ymm4,%ymm1,%ymm0 + cmp $0x0020,%rax + jl .Lxorpart8 + vpxor 0x0000(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0000(%rsi) + vperm2i128 $0x31,%ymm4,%ymm1,%ymm4 + + vperm2i128 $0x20,%ymm12,%ymm8,%ymm0 + cmp $0x0040,%rax + jl .Lxorpart8 + vpxor 0x0020(%rdx),%ymm0,%ymm0 + vmovdqu 
%ymm0,0x0020(%rsi) + vperm2i128 $0x31,%ymm12,%ymm8,%ymm12 + + vmovdqa 0x40(%rsp),%ymm1 + vperm2i128 $0x20,%ymm6,%ymm1,%ymm0 + cmp $0x0060,%rax + jl .Lxorpart8 + vpxor 0x0040(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0040(%rsi) + vperm2i128 $0x31,%ymm6,%ymm1,%ymm6 + + vperm2i128 $0x20,%ymm14,%ymm10,%ymm0 + cmp $0x0080,%rax + jl .Lxorpart8 + vpxor 0x0060(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0060(%rsi) + vperm2i128 $0x31,%ymm14,%ymm10,%ymm14 + + vmovdqa 0x20(%rsp),%ymm1 + vperm2i128 $0x20,%ymm5,%ymm1,%ymm0 + cmp $0x00a0,%rax + jl .Lxorpart8 + vpxor 0x0080(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0080(%rsi) + vperm2i128 $0x31,%ymm5,%ymm1,%ymm5 + + vperm2i128 $0x20,%ymm13,%ymm9,%ymm0 + cmp $0x00c0,%rax + jl .Lxorpart8 + vpxor 0x00a0(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x00a0(%rsi) + vperm2i128 $0x31,%ymm13,%ymm9,%ymm13 + + vmovdqa 0x60(%rsp),%ymm1 + vperm2i128 $0x20,%ymm7,%ymm1,%ymm0 + cmp $0x00e0,%rax + jl .Lxorpart8 + vpxor 0x00c0(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x00c0(%rsi) + vperm2i128 $0x31,%ymm7,%ymm1,%ymm7 + + vperm2i128 $0x20,%ymm15,%ymm11,%ymm0 + cmp $0x0100,%rax + jl .Lxorpart8 + vpxor 0x00e0(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x00e0(%rsi) + vperm2i128 $0x31,%ymm15,%ymm11,%ymm15 + + # xor remaining blocks, write to output + vmovdqa %ymm4,%ymm0 + cmp $0x0120,%rax + jl .Lxorpart8 + vpxor 0x0100(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0100(%rsi) + + vmovdqa %ymm12,%ymm0 + cmp $0x0140,%rax + jl .Lxorpart8 + vpxor 0x0120(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0120(%rsi) + + vmovdqa %ymm6,%ymm0 + cmp $0x0160,%rax + jl .Lxorpart8 + vpxor 0x0140(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0140(%rsi) + + vmovdqa %ymm14,%ymm0 + cmp $0x0180,%rax + jl .Lxorpart8 + vpxor 0x0160(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0160(%rsi) + + vmovdqa %ymm5,%ymm0 + cmp $0x01a0,%rax + jl .Lxorpart8 + vpxor 0x0180(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x0180(%rsi) + + vmovdqa %ymm13,%ymm0 + cmp $0x01c0,%rax + jl .Lxorpart8 + vpxor 0x01a0(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x01a0(%rsi) + + vmovdqa %ymm7,%ymm0 + cmp $0x01e0,%rax + jl .Lxorpart8 + vpxor 0x01c0(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x01c0(%rsi) + + vmovdqa %ymm15,%ymm0 + cmp $0x0200,%rax + jl .Lxorpart8 + vpxor 0x01e0(%rdx),%ymm0,%ymm0 + vmovdqu %ymm0,0x01e0(%rsi) + +.Ldone8: + vzeroupper + lea -8(%r10),%rsp + ret + +.Lxorpart8: + # xor remaining bytes from partial register into output + mov %rax,%r9 + and $0x1f,%r9 + jz .Ldone8 + and $~0x1f,%rax + + mov %rsi,%r11 + + lea (%rdx,%rax),%rsi + mov %rsp,%rdi + mov %r9,%rcx + rep movsb + + vpxor 0x00(%rsp),%ymm0,%ymm0 + vmovdqa %ymm0,0x00(%rsp) + + mov %rsp,%rsi + lea (%r11,%rax),%rdi + mov %r9,%rcx + rep movsb + + jmp .Ldone8 + +ENDPROC(chacha_8block_xor_avx2) diff --git a/arch/x86/crypto/chacha-avx512vl-x86_64.S b/arch/x86/crypto/chacha-avx512vl-x86_64.S new file mode 100644 index 000000000000..848f9c75fd4f --- /dev/null +++ b/arch/x86/crypto/chacha-avx512vl-x86_64.S @@ -0,0 +1,836 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * ChaCha 256-bit cipher algorithm, x64 AVX-512VL functions + * + * Copyright (C) 2018 Martin Willi + */ + +#include <linux/linkage.h> + +.section .rodata.cst32.CTR2BL, "aM", @progbits, 32 +.align 32 +CTR2BL: .octa 0x00000000000000000000000000000000 + .octa 0x00000000000000000000000000000001 + +.section .rodata.cst32.CTR4BL, "aM", @progbits, 32 +.align 32 +CTR4BL: .octa 0x00000000000000000000000000000002 + .octa 0x00000000000000000000000000000003 + +.section .rodata.cst32.CTR8BL, "aM", @progbits, 32 +.align 32 +CTR8BL: .octa 0x00000003000000020000000100000000 + .octa 0x00000007000000060000000500000004 + +.text + 
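The CTR2BL/CTR4BL/CTR8BL constants above are the per-lane block-counter offsets that get added to the counter word (state word 12) so that each vector lane keystreams a different block; CTR8BL, for instance, is the little-endian dwords {0,1,2,3} and {4,5,6,7}, one increment per 32-bit lane. Conceptually (a plain-C sketch of the idea, not kernel code):

	#include <stdint.h>

	/* Hypothetical helper: seed the per-lane block counters of an
	 * n-block kernel from the shared ChaCha state word 12.
	 */
	static void seed_lane_counters(const uint32_t state[16],
				       uint32_t *lane_counter, unsigned int n)
	{
		unsigned int lane;

		for (lane = 0; lane < n; lane++)
			lane_counter[lane] = state[12] + lane;
	}

The 2- and 4-block kernels below add CTR2BL/CTR4BL to whole copies of state row 3, so only the counter dword of each 128-bit lane is actually offset; the 8-block kernel broadcasts word 12 on its own and adds CTR8BL to it directly.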
+ENTRY(chacha_2block_xor_avx512vl) + # %rdi: Input state matrix, s + # %rsi: up to 2 data blocks output, o + # %rdx: up to 2 data blocks input, i + # %rcx: input/output length in bytes + # %r8d: nrounds + + # This function encrypts two ChaCha blocks by loading the state + # matrix twice across four AVX registers. It performs matrix operations + # on four words in each matrix in parallel, but requires shuffling to + # rearrange the words after each round. + + vzeroupper + + # x0..3[0-1] = s0..3 + vbroadcasti128 0x00(%rdi),%ymm0 + vbroadcasti128 0x10(%rdi),%ymm1 + vbroadcasti128 0x20(%rdi),%ymm2 + vbroadcasti128 0x30(%rdi),%ymm3 + + vpaddd CTR2BL(%rip),%ymm3,%ymm3 + + vmovdqa %ymm0,%ymm8 + vmovdqa %ymm1,%ymm9 + vmovdqa %ymm2,%ymm10 + vmovdqa %ymm3,%ymm11 + +.Ldoubleround: + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $16,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $12,%ymm1,%ymm1 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $8,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $7,%ymm1,%ymm1 + + # x1 = shuffle32(x1, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm1,%ymm1 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + # x3 = shuffle32(x3, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm3,%ymm3 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $16,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $12,%ymm1,%ymm1 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $8,%ymm3,%ymm3 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $7,%ymm1,%ymm1 + + # x1 = shuffle32(x1, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm1,%ymm1 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + # x3 = shuffle32(x3, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm3,%ymm3 + + sub $2,%r8d + jnz .Ldoubleround + + # o0 = i0 ^ (x0 + s0) + vpaddd %ymm8,%ymm0,%ymm7 + cmp $0x10,%rcx + jl .Lxorpart2 + vpxord 0x00(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x00(%rsi) + vextracti128 $1,%ymm7,%xmm0 + # o1 = i1 ^ (x1 + s1) + vpaddd %ymm9,%ymm1,%ymm7 + cmp $0x20,%rcx + jl .Lxorpart2 + vpxord 0x10(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x10(%rsi) + vextracti128 $1,%ymm7,%xmm1 + # o2 = i2 ^ (x2 + s2) + vpaddd %ymm10,%ymm2,%ymm7 + cmp $0x30,%rcx + jl .Lxorpart2 + vpxord 0x20(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x20(%rsi) + vextracti128 $1,%ymm7,%xmm2 + # o3 = i3 ^ (x3 + s3) + vpaddd %ymm11,%ymm3,%ymm7 + cmp $0x40,%rcx + jl .Lxorpart2 + vpxord 0x30(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x30(%rsi) + vextracti128 $1,%ymm7,%xmm3 + + # xor and write second block + vmovdqa %xmm0,%xmm7 + cmp $0x50,%rcx + jl .Lxorpart2 + vpxord 0x40(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x40(%rsi) + + vmovdqa %xmm1,%xmm7 + cmp $0x60,%rcx + jl .Lxorpart2 + vpxord 0x50(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x50(%rsi) + + vmovdqa %xmm2,%xmm7 + cmp $0x70,%rcx + jl .Lxorpart2 + vpxord 0x60(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x60(%rsi) + + vmovdqa %xmm3,%xmm7 + cmp $0x80,%rcx + jl .Lxorpart2 + vpxord 0x70(%rdx),%xmm7,%xmm6 + vmovdqu %xmm6,0x70(%rsi) + +.Ldone2: + vzeroupper + ret + +.Lxorpart2: + # xor remaining bytes from partial register into output + mov %rcx,%rax + and $0xf,%rcx + jz .Ldone2 + mov %rax,%r9 + and 
$~0xf,%r9 + + mov $1,%rax + shld %cl,%rax,%rax + sub $1,%rax + kmovq %rax,%k1 + + vmovdqu8 (%rdx,%r9),%xmm1{%k1}{z} + vpxord %xmm7,%xmm1,%xmm1 + vmovdqu8 %xmm1,(%rsi,%r9){%k1} + + jmp .Ldone2 + +ENDPROC(chacha_2block_xor_avx512vl) + +ENTRY(chacha_4block_xor_avx512vl) + # %rdi: Input state matrix, s + # %rsi: up to 4 data blocks output, o + # %rdx: up to 4 data blocks input, i + # %rcx: input/output length in bytes + # %r8d: nrounds + + # This function encrypts four ChaCha blocks by loading the state + # matrix four times across eight AVX registers. It performs matrix + # operations on four words in two matrices in parallel, followed + # sequentially by the same operations on the other two matrices. Since + # the required word shuffling has a rather high latency, we can do the + # arithmetic on two matrix-pairs without much slowdown. + + vzeroupper + + # x0..3[0-3] = s0..3 + vbroadcasti128 0x00(%rdi),%ymm0 + vbroadcasti128 0x10(%rdi),%ymm1 + vbroadcasti128 0x20(%rdi),%ymm2 + vbroadcasti128 0x30(%rdi),%ymm3 + + vmovdqa %ymm0,%ymm4 + vmovdqa %ymm1,%ymm5 + vmovdqa %ymm2,%ymm6 + vmovdqa %ymm3,%ymm7 + + vpaddd CTR2BL(%rip),%ymm3,%ymm3 + vpaddd CTR4BL(%rip),%ymm7,%ymm7 + + vmovdqa %ymm0,%ymm11 + vmovdqa %ymm1,%ymm12 + vmovdqa %ymm2,%ymm13 + vmovdqa %ymm3,%ymm14 + vmovdqa %ymm7,%ymm15 + +.Ldoubleround4: + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $16,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxord %ymm4,%ymm7,%ymm7 + vprold $16,%ymm7,%ymm7 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $12,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxord %ymm6,%ymm5,%ymm5 + vprold $12,%ymm5,%ymm5 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $8,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxord %ymm4,%ymm7,%ymm7 + vprold $8,%ymm7,%ymm7 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $7,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxord %ymm6,%ymm5,%ymm5 + vprold $7,%ymm5,%ymm5 + + # x1 = shuffle32(x1, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm1,%ymm1 + vpshufd $0x39,%ymm5,%ymm5 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + vpshufd $0x4e,%ymm6,%ymm6 + # x3 = shuffle32(x3, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm3,%ymm3 + vpshufd $0x93,%ymm7,%ymm7 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 16) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $16,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxord %ymm4,%ymm7,%ymm7 + vprold $16,%ymm7,%ymm7 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 12) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $12,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxord %ymm6,%ymm5,%ymm5 + vprold $12,%ymm5,%ymm5 + + # x0 += x1, x3 = rotl32(x3 ^ x0, 8) + vpaddd %ymm1,%ymm0,%ymm0 + vpxord %ymm0,%ymm3,%ymm3 + vprold $8,%ymm3,%ymm3 + + vpaddd %ymm5,%ymm4,%ymm4 + vpxord %ymm4,%ymm7,%ymm7 + vprold $8,%ymm7,%ymm7 + + # x2 += x3, x1 = rotl32(x1 ^ x2, 7) + vpaddd %ymm3,%ymm2,%ymm2 + vpxord %ymm2,%ymm1,%ymm1 + vprold $7,%ymm1,%ymm1 + + vpaddd %ymm7,%ymm6,%ymm6 + vpxord %ymm6,%ymm5,%ymm5 + vprold $7,%ymm5,%ymm5 + + # x1 = shuffle32(x1, MASK(2, 1, 0, 3)) + vpshufd $0x93,%ymm1,%ymm1 + vpshufd $0x93,%ymm5,%ymm5 + # x2 = shuffle32(x2, MASK(1, 0, 3, 2)) + vpshufd $0x4e,%ymm2,%ymm2 + vpshufd $0x4e,%ymm6,%ymm6 + # x3 = shuffle32(x3, MASK(0, 3, 2, 1)) + vpshufd $0x39,%ymm3,%ymm3 + vpshufd $0x39,%ymm7,%ymm7 + + sub $2,%r8d + jnz .Ldoubleround4 + + # o0 = i0 ^ (x0 
+ s0), first block + vpaddd %ymm11,%ymm0,%ymm10 + cmp $0x10,%rcx + jl .Lxorpart4 + vpxord 0x00(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x00(%rsi) + vextracti128 $1,%ymm10,%xmm0 + # o1 = i1 ^ (x1 + s1), first block + vpaddd %ymm12,%ymm1,%ymm10 + cmp $0x20,%rcx + jl .Lxorpart4 + vpxord 0x10(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x10(%rsi) + vextracti128 $1,%ymm10,%xmm1 + # o2 = i2 ^ (x2 + s2), first block + vpaddd %ymm13,%ymm2,%ymm10 + cmp $0x30,%rcx + jl .Lxorpart4 + vpxord 0x20(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x20(%rsi) + vextracti128 $1,%ymm10,%xmm2 + # o3 = i3 ^ (x3 + s3), first block + vpaddd %ymm14,%ymm3,%ymm10 + cmp $0x40,%rcx + jl .Lxorpart4 + vpxord 0x30(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x30(%rsi) + vextracti128 $1,%ymm10,%xmm3 + + # xor and write second block + vmovdqa %xmm0,%xmm10 + cmp $0x50,%rcx + jl .Lxorpart4 + vpxord 0x40(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x40(%rsi) + + vmovdqa %xmm1,%xmm10 + cmp $0x60,%rcx + jl .Lxorpart4 + vpxord 0x50(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x50(%rsi) + + vmovdqa %xmm2,%xmm10 + cmp $0x70,%rcx + jl .Lxorpart4 + vpxord 0x60(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x60(%rsi) + + vmovdqa %xmm3,%xmm10 + cmp $0x80,%rcx + jl .Lxorpart4 + vpxord 0x70(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x70(%rsi) + + # o0 = i0 ^ (x0 + s0), third block + vpaddd %ymm11,%ymm4,%ymm10 + cmp $0x90,%rcx + jl .Lxorpart4 + vpxord 0x80(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x80(%rsi) + vextracti128 $1,%ymm10,%xmm4 + # o1 = i1 ^ (x1 + s1), third block + vpaddd %ymm12,%ymm5,%ymm10 + cmp $0xa0,%rcx + jl .Lxorpart4 + vpxord 0x90(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0x90(%rsi) + vextracti128 $1,%ymm10,%xmm5 + # o2 = i2 ^ (x2 + s2), third block + vpaddd %ymm13,%ymm6,%ymm10 + cmp $0xb0,%rcx + jl .Lxorpart4 + vpxord 0xa0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xa0(%rsi) + vextracti128 $1,%ymm10,%xmm6 + # o3 = i3 ^ (x3 + s3), third block + vpaddd %ymm15,%ymm7,%ymm10 + cmp $0xc0,%rcx + jl .Lxorpart4 + vpxord 0xb0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xb0(%rsi) + vextracti128 $1,%ymm10,%xmm7 + + # xor and write fourth block + vmovdqa %xmm4,%xmm10 + cmp $0xd0,%rcx + jl .Lxorpart4 + vpxord 0xc0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xc0(%rsi) + + vmovdqa %xmm5,%xmm10 + cmp $0xe0,%rcx + jl .Lxorpart4 + vpxord 0xd0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xd0(%rsi) + + vmovdqa %xmm6,%xmm10 + cmp $0xf0,%rcx + jl .Lxorpart4 + vpxord 0xe0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xe0(%rsi) + + vmovdqa %xmm7,%xmm10 + cmp $0x100,%rcx + jl .Lxorpart4 + vpxord 0xf0(%rdx),%xmm10,%xmm9 + vmovdqu %xmm9,0xf0(%rsi) + +.Ldone4: + vzeroupper + ret + +.Lxorpart4: + # xor remaining bytes from partial register into output + mov %rcx,%rax + and $0xf,%rcx + jz .Ldone4 + mov %rax,%r9 + and $~0xf,%r9 + + mov $1,%rax + shld %cl,%rax,%rax + sub $1,%rax + kmovq %rax,%k1 + + vmovdqu8 (%rdx,%r9),%xmm1{%k1}{z} + vpxord %xmm10,%xmm1,%xmm1 + vmovdqu8 %xmm1,(%rsi,%r9){%k1} + + jmp .Ldone4 + +ENDPROC(chacha_4block_xor_avx512vl) + +ENTRY(chacha_8block_xor_avx512vl) + # %rdi: Input state matrix, s + # %rsi: up to 8 data blocks output, o + # %rdx: up to 8 data blocks input, i + # %rcx: input/output length in bytes + # %r8d: nrounds + + # This function encrypts eight consecutive ChaCha blocks by loading + # the state matrix in AVX registers eight times. Compared to AVX2, this + # mostly benefits from the new rotate instructions in VL and the + # additional registers. 
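The per-word annotations used throughout these listings (x0 += x4, x12 = rotl32(x12 ^ x0, 16), and so on) trace the standard ChaCha quarter-round. As a plain-C reference for what one double round computes in every vector lane (a sketch of the algorithm only, not kernel code):

	#include <stdint.h>

	static inline uint32_t rotl32(uint32_t v, int n)
	{
		return (v << n) | (v >> (32 - n));
	}

	/* One ChaCha quarter-round on state words a, b, c, d. */
	static void quarterround(uint32_t x[16], int a, int b, int c, int d)
	{
		x[a] += x[b]; x[d] = rotl32(x[d] ^ x[a], 16);
		x[c] += x[d]; x[b] = rotl32(x[b] ^ x[c], 12);
		x[a] += x[b]; x[d] = rotl32(x[d] ^ x[a], 8);
		x[c] += x[d]; x[b] = rotl32(x[b] ^ x[c], 7);
	}

	/* One double round: a column round, then a diagonal round. */
	static void doubleround(uint32_t x[16])
	{
		quarterround(x, 0, 4,  8, 12);	/* columns */
		quarterround(x, 1, 5,  9, 13);
		quarterround(x, 2, 6, 10, 14);
		quarterround(x, 3, 7, 11, 15);
		quarterround(x, 0, 5, 10, 15);	/* diagonals */
		quarterround(x, 1, 6, 11, 12);
		quarterround(x, 2, 7,  8, 13);
		quarterround(x, 3, 4,  9, 14);
	}

The nrounds/2 iterations of .Ldoubleround8 below run exactly this, once per 32-bit lane; AVX-512VL's vprold covers each rotl32 in a single instruction, where the AVX2 code above needed a vpshufb (for the 8/16-bit rotates) or a shift/shift/or triple (for 7/12).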
+ + vzeroupper + + # x0..15[0-7] = s[0..15] + vpbroadcastd 0x00(%rdi),%ymm0 + vpbroadcastd 0x04(%rdi),%ymm1 + vpbroadcastd 0x08(%rdi),%ymm2 + vpbroadcastd 0x0c(%rdi),%ymm3 + vpbroadcastd 0x10(%rdi),%ymm4 + vpbroadcastd 0x14(%rdi),%ymm5 + vpbroadcastd 0x18(%rdi),%ymm6 + vpbroadcastd 0x1c(%rdi),%ymm7 + vpbroadcastd 0x20(%rdi),%ymm8 + vpbroadcastd 0x24(%rdi),%ymm9 + vpbroadcastd 0x28(%rdi),%ymm10 + vpbroadcastd 0x2c(%rdi),%ymm11 + vpbroadcastd 0x30(%rdi),%ymm12 + vpbroadcastd 0x34(%rdi),%ymm13 + vpbroadcastd 0x38(%rdi),%ymm14 + vpbroadcastd 0x3c(%rdi),%ymm15 + + # x12 += counter values 0-3 + vpaddd CTR8BL(%rip),%ymm12,%ymm12 + + vmovdqa64 %ymm0,%ymm16 + vmovdqa64 %ymm1,%ymm17 + vmovdqa64 %ymm2,%ymm18 + vmovdqa64 %ymm3,%ymm19 + vmovdqa64 %ymm4,%ymm20 + vmovdqa64 %ymm5,%ymm21 + vmovdqa64 %ymm6,%ymm22 + vmovdqa64 %ymm7,%ymm23 + vmovdqa64 %ymm8,%ymm24 + vmovdqa64 %ymm9,%ymm25 + vmovdqa64 %ymm10,%ymm26 + vmovdqa64 %ymm11,%ymm27 + vmovdqa64 %ymm12,%ymm28 + vmovdqa64 %ymm13,%ymm29 + vmovdqa64 %ymm14,%ymm30 + vmovdqa64 %ymm15,%ymm31 + +.Ldoubleround8: + # x0 += x4, x12 = rotl32(x12 ^ x0, 16) + vpaddd %ymm0,%ymm4,%ymm0 + vpxord %ymm0,%ymm12,%ymm12 + vprold $16,%ymm12,%ymm12 + # x1 += x5, x13 = rotl32(x13 ^ x1, 16) + vpaddd %ymm1,%ymm5,%ymm1 + vpxord %ymm1,%ymm13,%ymm13 + vprold $16,%ymm13,%ymm13 + # x2 += x6, x14 = rotl32(x14 ^ x2, 16) + vpaddd %ymm2,%ymm6,%ymm2 + vpxord %ymm2,%ymm14,%ymm14 + vprold $16,%ymm14,%ymm14 + # x3 += x7, x15 = rotl32(x15 ^ x3, 16) + vpaddd %ymm3,%ymm7,%ymm3 + vpxord %ymm3,%ymm15,%ymm15 + vprold $16,%ymm15,%ymm15 + + # x8 += x12, x4 = rotl32(x4 ^ x8, 12) + vpaddd %ymm12,%ymm8,%ymm8 + vpxord %ymm8,%ymm4,%ymm4 + vprold $12,%ymm4,%ymm4 + # x9 += x13, x5 = rotl32(x5 ^ x9, 12) + vpaddd %ymm13,%ymm9,%ymm9 + vpxord %ymm9,%ymm5,%ymm5 + vprold $12,%ymm5,%ymm5 + # x10 += x14, x6 = rotl32(x6 ^ x10, 12) + vpaddd %ymm14,%ymm10,%ymm10 + vpxord %ymm10,%ymm6,%ymm6 + vprold $12,%ymm6,%ymm6 + # x11 += x15, x7 = rotl32(x7 ^ x11, 12) + vpaddd %ymm15,%ymm11,%ymm11 + vpxord %ymm11,%ymm7,%ymm7 + vprold $12,%ymm7,%ymm7 + + # x0 += x4, x12 = rotl32(x12 ^ x0, 8) + vpaddd %ymm0,%ymm4,%ymm0 + vpxord %ymm0,%ymm12,%ymm12 + vprold $8,%ymm12,%ymm12 + # x1 += x5, x13 = rotl32(x13 ^ x1, 8) + vpaddd %ymm1,%ymm5,%ymm1 + vpxord %ymm1,%ymm13,%ymm13 + vprold $8,%ymm13,%ymm13 + # x2 += x6, x14 = rotl32(x14 ^ x2, 8) + vpaddd %ymm2,%ymm6,%ymm2 + vpxord %ymm2,%ymm14,%ymm14 + vprold $8,%ymm14,%ymm14 + # x3 += x7, x15 = rotl32(x15 ^ x3, 8) + vpaddd %ymm3,%ymm7,%ymm3 + vpxord %ymm3,%ymm15,%ymm15 + vprold $8,%ymm15,%ymm15 + + # x8 += x12, x4 = rotl32(x4 ^ x8, 7) + vpaddd %ymm12,%ymm8,%ymm8 + vpxord %ymm8,%ymm4,%ymm4 + vprold $7,%ymm4,%ymm4 + # x9 += x13, x5 = rotl32(x5 ^ x9, 7) + vpaddd %ymm13,%ymm9,%ymm9 + vpxord %ymm9,%ymm5,%ymm5 + vprold $7,%ymm5,%ymm5 + # x10 += x14, x6 = rotl32(x6 ^ x10, 7) + vpaddd %ymm14,%ymm10,%ymm10 + vpxord %ymm10,%ymm6,%ymm6 + vprold $7,%ymm6,%ymm6 + # x11 += x15, x7 = rotl32(x7 ^ x11, 7) + vpaddd %ymm15,%ymm11,%ymm11 + vpxord %ymm11,%ymm7,%ymm7 + vprold $7,%ymm7,%ymm7 + + # x0 += x5, x15 = rotl32(x15 ^ x0, 16) + vpaddd %ymm0,%ymm5,%ymm0 + vpxord %ymm0,%ymm15,%ymm15 + vprold $16,%ymm15,%ymm15 + # x1 += x6, x12 = rotl32(x12 ^ x1, 16) + vpaddd %ymm1,%ymm6,%ymm1 + vpxord %ymm1,%ymm12,%ymm12 + vprold $16,%ymm12,%ymm12 + # x2 += x7, x13 = rotl32(x13 ^ x2, 16) + vpaddd %ymm2,%ymm7,%ymm2 + vpxord %ymm2,%ymm13,%ymm13 + vprold $16,%ymm13,%ymm13 + # x3 += x4, x14 = rotl32(x14 ^ x3, 16) + vpaddd %ymm3,%ymm4,%ymm3 + vpxord %ymm3,%ymm14,%ymm14 + vprold $16,%ymm14,%ymm14 + + # x10 += x15, x5 = rotl32(x5 ^ 
x10, 12) + vpaddd %ymm15,%ymm10,%ymm10 + vpxord %ymm10,%ymm5,%ymm5 + vprold $12,%ymm5,%ymm5 + # x11 += x12, x6 = rotl32(x6 ^ x11, 12) + vpaddd %ymm12,%ymm11,%ymm11 + vpxord %ymm11,%ymm6,%ymm6 + vprold $12,%ymm6,%ymm6 + # x8 += x13, x7 = rotl32(x7 ^ x8, 12) + vpaddd %ymm13,%ymm8,%ymm8 + vpxord %ymm8,%ymm7,%ymm7 + vprold $12,%ymm7,%ymm7 + # x9 += x14, x4 = rotl32(x4 ^ x9, 12) + vpaddd %ymm14,%ymm9,%ymm9 + vpxord %ymm9,%ymm4,%ymm4 + vprold $12,%ymm4,%ymm4 + + # x0 += x5, x15 = rotl32(x15 ^ x0, 8) + vpaddd %ymm0,%ymm5,%ymm0 + vpxord %ymm0,%ymm15,%ymm15 + vprold $8,%ymm15,%ymm15 + # x1 += x6, x12 = rotl32(x12 ^ x1, 8) + vpaddd %ymm1,%ymm6,%ymm1 + vpxord %ymm1,%ymm12,%ymm12 + vprold $8,%ymm12,%ymm12 + # x2 += x7, x13 = rotl32(x13 ^ x2, 8) + vpaddd %ymm2,%ymm7,%ymm2 + vpxord %ymm2,%ymm13,%ymm13 + vprold $8,%ymm13,%ymm13 + # x3 += x4, x14 = rotl32(x14 ^ x3, 8) + vpaddd %ymm3,%ymm4,%ymm3 + vpxord %ymm3,%ymm14,%ymm14 + vprold $8,%ymm14,%ymm14 + + # x10 += x15, x5 = rotl32(x5 ^ x10, 7) + vpaddd %ymm15,%ymm10,%ymm10 + vpxord %ymm10,%ymm5,%ymm5 + vprold $7,%ymm5,%ymm5 + # x11 += x12, x6 = rotl32(x6 ^ x11, 7) + vpaddd %ymm12,%ymm11,%ymm11 + vpxord %ymm11,%ymm6,%ymm6 + vprold $7,%ymm6,%ymm6 + # x8 += x13, x7 = rotl32(x7 ^ x8, 7) + vpaddd %ymm13,%ymm8,%ymm8 + vpxord %ymm8,%ymm7,%ymm7 + vprold $7,%ymm7,%ymm7 + # x9 += x14, x4 = rotl32(x4 ^ x9, 7) + vpaddd %ymm14,%ymm9,%ymm9 + vpxord %ymm9,%ymm4,%ymm4 + vprold $7,%ymm4,%ymm4 + + sub $2,%r8d + jnz .Ldoubleround8 + + # x0..15[0-3] += s[0..15] + vpaddd %ymm16,%ymm0,%ymm0 + vpaddd %ymm17,%ymm1,%ymm1 + vpaddd %ymm18,%ymm2,%ymm2 + vpaddd %ymm19,%ymm3,%ymm3 + vpaddd %ymm20,%ymm4,%ymm4 + vpaddd %ymm21,%ymm5,%ymm5 + vpaddd %ymm22,%ymm6,%ymm6 + vpaddd %ymm23,%ymm7,%ymm7 + vpaddd %ymm24,%ymm8,%ymm8 + vpaddd %ymm25,%ymm9,%ymm9 + vpaddd %ymm26,%ymm10,%ymm10 + vpaddd %ymm27,%ymm11,%ymm11 + vpaddd %ymm28,%ymm12,%ymm12 + vpaddd %ymm29,%ymm13,%ymm13 + vpaddd %ymm30,%ymm14,%ymm14 + vpaddd %ymm31,%ymm15,%ymm15 + + # interleave 32-bit words in state n, n+1 + vpunpckldq %ymm1,%ymm0,%ymm16 + vpunpckhdq %ymm1,%ymm0,%ymm17 + vpunpckldq %ymm3,%ymm2,%ymm18 + vpunpckhdq %ymm3,%ymm2,%ymm19 + vpunpckldq %ymm5,%ymm4,%ymm20 + vpunpckhdq %ymm5,%ymm4,%ymm21 + vpunpckldq %ymm7,%ymm6,%ymm22 + vpunpckhdq %ymm7,%ymm6,%ymm23 + vpunpckldq %ymm9,%ymm8,%ymm24 + vpunpckhdq %ymm9,%ymm8,%ymm25 + vpunpckldq %ymm11,%ymm10,%ymm26 + vpunpckhdq %ymm11,%ymm10,%ymm27 + vpunpckldq %ymm13,%ymm12,%ymm28 + vpunpckhdq %ymm13,%ymm12,%ymm29 + vpunpckldq %ymm15,%ymm14,%ymm30 + vpunpckhdq %ymm15,%ymm14,%ymm31 + + # interleave 64-bit words in state n, n+2 + vpunpcklqdq %ymm18,%ymm16,%ymm0 + vpunpcklqdq %ymm19,%ymm17,%ymm1 + vpunpckhqdq %ymm18,%ymm16,%ymm2 + vpunpckhqdq %ymm19,%ymm17,%ymm3 + vpunpcklqdq %ymm22,%ymm20,%ymm4 + vpunpcklqdq %ymm23,%ymm21,%ymm5 + vpunpckhqdq %ymm22,%ymm20,%ymm6 + vpunpckhqdq %ymm23,%ymm21,%ymm7 + vpunpcklqdq %ymm26,%ymm24,%ymm8 + vpunpcklqdq %ymm27,%ymm25,%ymm9 + vpunpckhqdq %ymm26,%ymm24,%ymm10 + vpunpckhqdq %ymm27,%ymm25,%ymm11 + vpunpcklqdq %ymm30,%ymm28,%ymm12 + vpunpcklqdq %ymm31,%ymm29,%ymm13 + vpunpckhqdq %ymm30,%ymm28,%ymm14 + vpunpckhqdq %ymm31,%ymm29,%ymm15 + + # interleave 128-bit words in state n, n+4 + # xor/write first four blocks + vmovdqa64 %ymm0,%ymm16 + vperm2i128 $0x20,%ymm4,%ymm0,%ymm0 + cmp $0x0020,%rcx + jl .Lxorpart8 + vpxord 0x0000(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0000(%rsi) + vmovdqa64 %ymm16,%ymm0 + vperm2i128 $0x31,%ymm4,%ymm0,%ymm4 + + vperm2i128 $0x20,%ymm12,%ymm8,%ymm0 + cmp $0x0040,%rcx + jl .Lxorpart8 + vpxord 0x0020(%rdx),%ymm0,%ymm0 + vmovdqu64 
%ymm0,0x0020(%rsi) + vperm2i128 $0x31,%ymm12,%ymm8,%ymm12 + + vperm2i128 $0x20,%ymm6,%ymm2,%ymm0 + cmp $0x0060,%rcx + jl .Lxorpart8 + vpxord 0x0040(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0040(%rsi) + vperm2i128 $0x31,%ymm6,%ymm2,%ymm6 + + vperm2i128 $0x20,%ymm14,%ymm10,%ymm0 + cmp $0x0080,%rcx + jl .Lxorpart8 + vpxord 0x0060(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0060(%rsi) + vperm2i128 $0x31,%ymm14,%ymm10,%ymm14 + + vperm2i128 $0x20,%ymm5,%ymm1,%ymm0 + cmp $0x00a0,%rcx + jl .Lxorpart8 + vpxord 0x0080(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0080(%rsi) + vperm2i128 $0x31,%ymm5,%ymm1,%ymm5 + + vperm2i128 $0x20,%ymm13,%ymm9,%ymm0 + cmp $0x00c0,%rcx + jl .Lxorpart8 + vpxord 0x00a0(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x00a0(%rsi) + vperm2i128 $0x31,%ymm13,%ymm9,%ymm13 + + vperm2i128 $0x20,%ymm7,%ymm3,%ymm0 + cmp $0x00e0,%rcx + jl .Lxorpart8 + vpxord 0x00c0(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x00c0(%rsi) + vperm2i128 $0x31,%ymm7,%ymm3,%ymm7 + + vperm2i128 $0x20,%ymm15,%ymm11,%ymm0 + cmp $0x0100,%rcx + jl .Lxorpart8 + vpxord 0x00e0(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x00e0(%rsi) + vperm2i128 $0x31,%ymm15,%ymm11,%ymm15 + + # xor remaining blocks, write to output + vmovdqa64 %ymm4,%ymm0 + cmp $0x0120,%rcx + jl .Lxorpart8 + vpxord 0x0100(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0100(%rsi) + + vmovdqa64 %ymm12,%ymm0 + cmp $0x0140,%rcx + jl .Lxorpart8 + vpxord 0x0120(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0120(%rsi) + + vmovdqa64 %ymm6,%ymm0 + cmp $0x0160,%rcx + jl .Lxorpart8 + vpxord 0x0140(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0140(%rsi) + + vmovdqa64 %ymm14,%ymm0 + cmp $0x0180,%rcx + jl .Lxorpart8 + vpxord 0x0160(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0160(%rsi) + + vmovdqa64 %ymm5,%ymm0 + cmp $0x01a0,%rcx + jl .Lxorpart8 + vpxord 0x0180(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x0180(%rsi) + + vmovdqa64 %ymm13,%ymm0 + cmp $0x01c0,%rcx + jl .Lxorpart8 + vpxord 0x01a0(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x01a0(%rsi) + + vmovdqa64 %ymm7,%ymm0 + cmp $0x01e0,%rcx + jl .Lxorpart8 + vpxord 0x01c0(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x01c0(%rsi) + + vmovdqa64 %ymm15,%ymm0 + cmp $0x0200,%rcx + jl .Lxorpart8 + vpxord 0x01e0(%rdx),%ymm0,%ymm0 + vmovdqu64 %ymm0,0x01e0(%rsi) + +.Ldone8: + vzeroupper + ret + +.Lxorpart8: + # xor remaining bytes from partial register into output + mov %rcx,%rax + and $0x1f,%rcx + jz .Ldone8 + mov %rax,%r9 + and $~0x1f,%r9 + + mov $1,%rax + shld %cl,%rax,%rax + sub $1,%rax + kmovq %rax,%k1 + + vmovdqu8 (%rdx,%r9),%ymm1{%k1}{z} + vpxord %ymm0,%ymm1,%ymm1 + vmovdqu8 %ymm1,(%rsi,%r9){%k1} + + jmp .Ldone8 + +ENDPROC(chacha_8block_xor_avx512vl) diff --git a/arch/x86/crypto/chacha20-ssse3-x86_64.S b/arch/x86/crypto/chacha-ssse3-x86_64.S similarity index 76% rename from arch/x86/crypto/chacha20-ssse3-x86_64.S rename to arch/x86/crypto/chacha-ssse3-x86_64.S index 512a2b500fd1..c05a7a963dc3 100644 --- a/arch/x86/crypto/chacha20-ssse3-x86_64.S +++ b/arch/x86/crypto/chacha-ssse3-x86_64.S @@ -1,5 +1,5 @@ /* - * ChaCha20 256-bit cipher algorithm, RFC7539, x64 SSSE3 functions + * ChaCha 256-bit cipher algorithm, x64 SSSE3 functions * * Copyright (C) 2015 Martin Willi * @@ -10,6 +10,7 @@ */ #include <linux/linkage.h> +#include <asm/frame.h> .section .rodata.cst16.ROT8, "aM", @progbits, 16 .align 16 @@ -23,35 +24,25 @@ CTRINC: .octa 0x00000003000000020000000100000000 .text -ENTRY(chacha20_block_xor_ssse3) - # %rdi: Input state matrix, s - # %rsi: 1 data block output, o - # %rdx: 1 data block input, i - - # This function encrypts one ChaCha20 block by loading the state matrix - # in four SSE registers. 
It performs matrix operation on four words in - # parallel, but requireds shuffling to rearrange the words after each - # round. 8/16-bit word rotation is done with the slightly better - # performing SSSE3 byte shuffling, 7/12-bit word rotation uses - # traditional shift+OR. - - # x0..3 = s0..3 - movdqa 0x00(%rdi),%xmm0 - movdqa 0x10(%rdi),%xmm1 - movdqa 0x20(%rdi),%xmm2 - movdqa 0x30(%rdi),%xmm3 - movdqa %xmm0,%xmm8 - movdqa %xmm1,%xmm9 - movdqa %xmm2,%xmm10 - movdqa %xmm3,%xmm11 +/* + * chacha_permute - permute one block + * + * Permute one 64-byte block where the state matrix is in %xmm0-%xmm3. This + * function performs matrix operations on four words in parallel, but requires + * shuffling to rearrange the words after each round. 8/16-bit word rotation is + * done with the slightly better performing SSSE3 byte shuffling, 7/12-bit word + * rotation uses traditional shift+OR. + * + * The round count is given in %r8d. + * + * Clobbers: %r8d, %xmm4-%xmm7 + */ +chacha_permute: movdqa ROT8(%rip),%xmm4 movdqa ROT16(%rip),%xmm5 - mov $10,%ecx - .Ldoubleround: - # x0 += x1, x3 = rotl32(x3 ^ x0, 16) paddd %xmm1,%xmm0 pxor %xmm0,%xmm3 @@ -118,39 +109,129 @@ ENTRY(chacha20_block_xor_ssse3) # x3 = shuffle32(x3, MASK(0, 3, 2, 1)) pshufd $0x39,%xmm3,%xmm3 - dec %ecx + sub $2,%r8d jnz .Ldoubleround + ret +ENDPROC(chacha_permute) + +ENTRY(chacha_block_xor_ssse3) + # %rdi: Input state matrix, s + # %rsi: up to 1 data block output, o + # %rdx: up to 1 data block input, i + # %rcx: input/output length in bytes + # %r8d: nrounds + FRAME_BEGIN + + # x0..3 = s0..3 + movdqa 0x00(%rdi),%xmm0 + movdqa 0x10(%rdi),%xmm1 + movdqa 0x20(%rdi),%xmm2 + movdqa 0x30(%rdi),%xmm3 + movdqa %xmm0,%xmm8 + movdqa %xmm1,%xmm9 + movdqa %xmm2,%xmm10 + movdqa %xmm3,%xmm11 + + mov %rcx,%rax + call chacha_permute + # o0 = i0 ^ (x0 + s0) - movdqu 0x00(%rdx),%xmm4 paddd %xmm8,%xmm0 + cmp $0x10,%rax + jl .Lxorpart + movdqu 0x00(%rdx),%xmm4 pxor %xmm4,%xmm0 movdqu %xmm0,0x00(%rsi) # o1 = i1 ^ (x1 + s1) - movdqu 0x10(%rdx),%xmm5 paddd %xmm9,%xmm1 - pxor %xmm5,%xmm1 - movdqu %xmm1,0x10(%rsi) + movdqa %xmm1,%xmm0 + cmp $0x20,%rax + jl .Lxorpart + movdqu 0x10(%rdx),%xmm0 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x10(%rsi) # o2 = i2 ^ (x2 + s2) - movdqu 0x20(%rdx),%xmm6 paddd %xmm10,%xmm2 - pxor %xmm6,%xmm2 - movdqu %xmm2,0x20(%rsi) + movdqa %xmm2,%xmm0 + cmp $0x30,%rax + jl .Lxorpart + movdqu 0x20(%rdx),%xmm0 + pxor %xmm2,%xmm0 + movdqu %xmm0,0x20(%rsi) # o3 = i3 ^ (x3 + s3) - movdqu 0x30(%rdx),%xmm7 paddd %xmm11,%xmm3 - pxor %xmm7,%xmm3 - movdqu %xmm3,0x30(%rsi) + movdqa %xmm3,%xmm0 + cmp $0x40,%rax + jl .Lxorpart + movdqu 0x30(%rdx),%xmm0 + pxor %xmm3,%xmm0 + movdqu %xmm0,0x30(%rsi) + +.Ldone: + FRAME_END + ret + +.Lxorpart: + # xor remaining bytes from partial register into output + mov %rax,%r9 + and $0x0f,%r9 + jz .Ldone + and $~0x0f,%rax + + mov %rsi,%r11 + + lea 8(%rsp),%r10 + sub $0x10,%rsp + and $~31,%rsp + + lea (%rdx,%rax),%rsi + mov %rsp,%rdi + mov %r9,%rcx + rep movsb + + pxor 0x00(%rsp),%xmm0 + movdqa %xmm0,0x00(%rsp) + mov %rsp,%rsi + lea (%r11,%rax),%rdi + mov %r9,%rcx + rep movsb + + lea -8(%r10),%rsp + jmp .Ldone + +ENDPROC(chacha_block_xor_ssse3) + +ENTRY(hchacha_block_ssse3) + # %rdi: Input state matrix, s + # %rsi: output (8 32-bit words) + # %edx: nrounds + FRAME_BEGIN + + movdqa 0x00(%rdi),%xmm0 + movdqa 0x10(%rdi),%xmm1 + movdqa 0x20(%rdi),%xmm2 + movdqa 0x30(%rdi),%xmm3 + + mov %edx,%r8d + call chacha_permute + + movdqu %xmm0,0x00(%rsi) + movdqu %xmm3,0x10(%rsi) + + FRAME_END ret -ENDPROC(chacha20_block_xor_ssse3) 
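hchacha_block_ssse3() above runs the bare permutation and then emits state words 0..3 and 12..15, with no feed-forward addition of the input state; this is the HChaCha construction used below for XChaCha key derivation. A plain-C sketch of what it computes (reference only, not kernel code):

	#include <stdint.h>
	#include <string.h>

	#define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))
	#define QR(a, b, c, d) do {				\
		(a) += (b); (d) = ROTL32((d) ^ (a), 16);	\
		(c) += (d); (b) = ROTL32((b) ^ (c), 12);	\
		(a) += (b); (d) = ROTL32((d) ^ (a), 8);		\
		(c) += (d); (b) = ROTL32((b) ^ (c), 7);		\
	} while (0)

	static void hchacha_ref(const uint32_t state[16], uint32_t out[8],
				int nrounds)
	{
		uint32_t x[16];
		int i;

		memcpy(x, state, sizeof(x));
		for (i = 0; i < nrounds; i += 2) {
			QR(x[0], x[4], x[8],  x[12]);	/* column round */
			QR(x[1], x[5], x[9],  x[13]);
			QR(x[2], x[6], x[10], x[14]);
			QR(x[3], x[7], x[11], x[15]);
			QR(x[0], x[5], x[10], x[15]);	/* diagonal round */
			QR(x[1], x[6], x[11], x[12]);
			QR(x[2], x[7], x[8],  x[13]);
			QR(x[3], x[4], x[9],  x[14]);
		}
		memcpy(&out[0], &x[0],  4 * sizeof(uint32_t));	/* words 0..3   */
		memcpy(&out[4], &x[12], 4 * sizeof(uint32_t));	/* words 12..15 */
	}

In the assembly, %xmm0 holds words 0..3 and %xmm3 words 12..15 after chacha_permute returns, so the two movdqu stores produce exactly this 8-word output.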
+ENDPROC(hchacha_block_ssse3) -ENTRY(chacha20_4block_xor_ssse3) +ENTRY(chacha_4block_xor_ssse3) # %rdi: Input state matrix, s - # %rsi: 4 data blocks output, o - # %rdx: 4 data blocks input, i + # %rsi: up to 4 data blocks output, o + # %rdx: up to 4 data blocks input, i + # %rcx: input/output length in bytes + # %r8d: nrounds - # This function encrypts four consecutive ChaCha20 blocks by loading the + # This function encrypts four consecutive ChaCha blocks by loading the # the state matrix in SSE registers four times. As we need some scratch # registers, we save the first four registers on the stack. The # algorithm performs each operation on the corresponding word of each @@ -163,6 +244,7 @@ ENTRY(chacha20_4block_xor_ssse3) lea 8(%rsp),%r10 sub $0x80,%rsp and $~63,%rsp + mov %rcx,%rax # x0..15[0-3] = s0..3[0..3] movq 0x00(%rdi),%xmm1 @@ -202,8 +284,6 @@ ENTRY(chacha20_4block_xor_ssse3) # x12 += counter values 0-3 paddd %xmm1,%xmm12 - mov $10,%ecx - .Ldoubleround4: # x0 += x4, x12 = rotl32(x12 ^ x0, 16) movdqa 0x00(%rsp),%xmm0 @@ -421,7 +501,7 @@ ENTRY(chacha20_4block_xor_ssse3) psrld $25,%xmm4 por %xmm0,%xmm4 - dec %ecx + sub $2,%r8d jnz .Ldoubleround4 # x0[0-3] += s0[0] @@ -573,58 +653,143 @@ ENTRY(chacha20_4block_xor_ssse3) # xor with corresponding input, write to output movdqa 0x00(%rsp),%xmm0 + cmp $0x10,%rax + jl .Lxorpart4 movdqu 0x00(%rdx),%xmm1 pxor %xmm1,%xmm0 movdqu %xmm0,0x00(%rsi) - movdqa 0x10(%rsp),%xmm0 - movdqu 0x80(%rdx),%xmm1 + + movdqu %xmm4,%xmm0 + cmp $0x20,%rax + jl .Lxorpart4 + movdqu 0x10(%rdx),%xmm1 pxor %xmm1,%xmm0 - movdqu %xmm0,0x80(%rsi) + movdqu %xmm0,0x10(%rsi) + + movdqu %xmm8,%xmm0 + cmp $0x30,%rax + jl .Lxorpart4 + movdqu 0x20(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x20(%rsi) + + movdqu %xmm12,%xmm0 + cmp $0x40,%rax + jl .Lxorpart4 + movdqu 0x30(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x30(%rsi) + movdqa 0x20(%rsp),%xmm0 + cmp $0x50,%rax + jl .Lxorpart4 movdqu 0x40(%rdx),%xmm1 pxor %xmm1,%xmm0 movdqu %xmm0,0x40(%rsi) + + movdqu %xmm6,%xmm0 + cmp $0x60,%rax + jl .Lxorpart4 + movdqu 0x50(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x50(%rsi) + + movdqu %xmm10,%xmm0 + cmp $0x70,%rax + jl .Lxorpart4 + movdqu 0x60(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x60(%rsi) + + movdqu %xmm14,%xmm0 + cmp $0x80,%rax + jl .Lxorpart4 + movdqu 0x70(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x70(%rsi) + + movdqa 0x10(%rsp),%xmm0 + cmp $0x90,%rax + jl .Lxorpart4 + movdqu 0x80(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x80(%rsi) + + movdqu %xmm5,%xmm0 + cmp $0xa0,%rax + jl .Lxorpart4 + movdqu 0x90(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0x90(%rsi) + + movdqu %xmm9,%xmm0 + cmp $0xb0,%rax + jl .Lxorpart4 + movdqu 0xa0(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0xa0(%rsi) + + movdqu %xmm13,%xmm0 + cmp $0xc0,%rax + jl .Lxorpart4 + movdqu 0xb0(%rdx),%xmm1 + pxor %xmm1,%xmm0 + movdqu %xmm0,0xb0(%rsi) + movdqa 0x30(%rsp),%xmm0 + cmp $0xd0,%rax + jl .Lxorpart4 movdqu 0xc0(%rdx),%xmm1 pxor %xmm1,%xmm0 movdqu %xmm0,0xc0(%rsi) - movdqu 0x10(%rdx),%xmm1 - pxor %xmm1,%xmm4 - movdqu %xmm4,0x10(%rsi) - movdqu 0x90(%rdx),%xmm1 - pxor %xmm1,%xmm5 - movdqu %xmm5,0x90(%rsi) - movdqu 0x50(%rdx),%xmm1 - pxor %xmm1,%xmm6 - movdqu %xmm6,0x50(%rsi) + + movdqu %xmm7,%xmm0 + cmp $0xe0,%rax + jl .Lxorpart4 movdqu 0xd0(%rdx),%xmm1 - pxor %xmm1,%xmm7 - movdqu %xmm7,0xd0(%rsi) - movdqu 0x20(%rdx),%xmm1 - pxor %xmm1,%xmm8 - movdqu %xmm8,0x20(%rsi) - movdqu 0xa0(%rdx),%xmm1 - pxor %xmm1,%xmm9 - movdqu %xmm9,0xa0(%rsi) - movdqu 0x60(%rdx),%xmm1 - pxor %xmm1,%xmm10 - 
movdqu %xmm10,0x60(%rsi) + pxor %xmm1,%xmm0 + movdqu %xmm0,0xd0(%rsi) + + movdqu %xmm11,%xmm0 + cmp $0xf0,%rax + jl .Lxorpart4 movdqu 0xe0(%rdx),%xmm1 - pxor %xmm1,%xmm11 - movdqu %xmm11,0xe0(%rsi) - movdqu 0x30(%rdx),%xmm1 - pxor %xmm1,%xmm12 - movdqu %xmm12,0x30(%rsi) - movdqu 0xb0(%rdx),%xmm1 - pxor %xmm1,%xmm13 - movdqu %xmm13,0xb0(%rsi) - movdqu 0x70(%rdx),%xmm1 - pxor %xmm1,%xmm14 - movdqu %xmm14,0x70(%rsi) + pxor %xmm1,%xmm0 + movdqu %xmm0,0xe0(%rsi) + + movdqu %xmm15,%xmm0 + cmp $0x100,%rax + jl .Lxorpart4 movdqu 0xf0(%rdx),%xmm1 - pxor %xmm1,%xmm15 - movdqu %xmm15,0xf0(%rsi) + pxor %xmm1,%xmm0 + movdqu %xmm0,0xf0(%rsi) +.Ldone4: lea -8(%r10),%rsp ret -ENDPROC(chacha20_4block_xor_ssse3) + +.Lxorpart4: + # xor remaining bytes from partial register into output + mov %rax,%r9 + and $0x0f,%r9 + jz .Ldone4 + and $~0x0f,%rax + + mov %rsi,%r11 + + lea (%rdx,%rax),%rsi + mov %rsp,%rdi + mov %r9,%rcx + rep movsb + + pxor 0x00(%rsp),%xmm0 + movdqa %xmm0,0x00(%rsp) + + mov %rsp,%rsi + lea (%r11,%rax),%rdi + mov %r9,%rcx + rep movsb + + jmp .Ldone4 + +ENDPROC(chacha_4block_xor_ssse3) diff --git a/arch/x86/crypto/chacha20-avx2-x86_64.S b/arch/x86/crypto/chacha20-avx2-x86_64.S deleted file mode 100644 index f3cd26f48332..000000000000 --- a/arch/x86/crypto/chacha20-avx2-x86_64.S +++ /dev/null @@ -1,448 +0,0 @@ -/* - * ChaCha20 256-bit cipher algorithm, RFC7539, x64 AVX2 functions - * - * Copyright (C) 2015 Martin Willi - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - */ - -#include - -.section .rodata.cst32.ROT8, "aM", @progbits, 32 -.align 32 -ROT8: .octa 0x0e0d0c0f0a09080b0605040702010003 - .octa 0x0e0d0c0f0a09080b0605040702010003 - -.section .rodata.cst32.ROT16, "aM", @progbits, 32 -.align 32 -ROT16: .octa 0x0d0c0f0e09080b0a0504070601000302 - .octa 0x0d0c0f0e09080b0a0504070601000302 - -.section .rodata.cst32.CTRINC, "aM", @progbits, 32 -.align 32 -CTRINC: .octa 0x00000003000000020000000100000000 - .octa 0x00000007000000060000000500000004 - -.text - -ENTRY(chacha20_8block_xor_avx2) - # %rdi: Input state matrix, s - # %rsi: 8 data blocks output, o - # %rdx: 8 data blocks input, i - - # This function encrypts eight consecutive ChaCha20 blocks by loading - # the state matrix in AVX registers eight times. As we need some - # scratch registers, we save the first four registers on the stack. The - # algorithm performs each operation on the corresponding word of each - # state matrix, hence requires no word shuffling. For final XORing step - # we transpose the matrix by interleaving 32-, 64- and then 128-bit - # words, which allows us to do XOR in AVX registers. 8/16-bit word - # rotation is done with the slightly better performing byte shuffling, - # 7/12-bit word rotation uses traditional shift+OR. 
- - vzeroupper - # 4 * 32 byte stack, 32-byte aligned - lea 8(%rsp),%r10 - and $~31, %rsp - sub $0x80, %rsp - - # x0..15[0-7] = s[0..15] - vpbroadcastd 0x00(%rdi),%ymm0 - vpbroadcastd 0x04(%rdi),%ymm1 - vpbroadcastd 0x08(%rdi),%ymm2 - vpbroadcastd 0x0c(%rdi),%ymm3 - vpbroadcastd 0x10(%rdi),%ymm4 - vpbroadcastd 0x14(%rdi),%ymm5 - vpbroadcastd 0x18(%rdi),%ymm6 - vpbroadcastd 0x1c(%rdi),%ymm7 - vpbroadcastd 0x20(%rdi),%ymm8 - vpbroadcastd 0x24(%rdi),%ymm9 - vpbroadcastd 0x28(%rdi),%ymm10 - vpbroadcastd 0x2c(%rdi),%ymm11 - vpbroadcastd 0x30(%rdi),%ymm12 - vpbroadcastd 0x34(%rdi),%ymm13 - vpbroadcastd 0x38(%rdi),%ymm14 - vpbroadcastd 0x3c(%rdi),%ymm15 - # x0..3 on stack - vmovdqa %ymm0,0x00(%rsp) - vmovdqa %ymm1,0x20(%rsp) - vmovdqa %ymm2,0x40(%rsp) - vmovdqa %ymm3,0x60(%rsp) - - vmovdqa CTRINC(%rip),%ymm1 - vmovdqa ROT8(%rip),%ymm2 - vmovdqa ROT16(%rip),%ymm3 - - # x12 += counter values 0-3 - vpaddd %ymm1,%ymm12,%ymm12 - - mov $10,%ecx - -.Ldoubleround8: - # x0 += x4, x12 = rotl32(x12 ^ x0, 16) - vpaddd 0x00(%rsp),%ymm4,%ymm0 - vmovdqa %ymm0,0x00(%rsp) - vpxor %ymm0,%ymm12,%ymm12 - vpshufb %ymm3,%ymm12,%ymm12 - # x1 += x5, x13 = rotl32(x13 ^ x1, 16) - vpaddd 0x20(%rsp),%ymm5,%ymm0 - vmovdqa %ymm0,0x20(%rsp) - vpxor %ymm0,%ymm13,%ymm13 - vpshufb %ymm3,%ymm13,%ymm13 - # x2 += x6, x14 = rotl32(x14 ^ x2, 16) - vpaddd 0x40(%rsp),%ymm6,%ymm0 - vmovdqa %ymm0,0x40(%rsp) - vpxor %ymm0,%ymm14,%ymm14 - vpshufb %ymm3,%ymm14,%ymm14 - # x3 += x7, x15 = rotl32(x15 ^ x3, 16) - vpaddd 0x60(%rsp),%ymm7,%ymm0 - vmovdqa %ymm0,0x60(%rsp) - vpxor %ymm0,%ymm15,%ymm15 - vpshufb %ymm3,%ymm15,%ymm15 - - # x8 += x12, x4 = rotl32(x4 ^ x8, 12) - vpaddd %ymm12,%ymm8,%ymm8 - vpxor %ymm8,%ymm4,%ymm4 - vpslld $12,%ymm4,%ymm0 - vpsrld $20,%ymm4,%ymm4 - vpor %ymm0,%ymm4,%ymm4 - # x9 += x13, x5 = rotl32(x5 ^ x9, 12) - vpaddd %ymm13,%ymm9,%ymm9 - vpxor %ymm9,%ymm5,%ymm5 - vpslld $12,%ymm5,%ymm0 - vpsrld $20,%ymm5,%ymm5 - vpor %ymm0,%ymm5,%ymm5 - # x10 += x14, x6 = rotl32(x6 ^ x10, 12) - vpaddd %ymm14,%ymm10,%ymm10 - vpxor %ymm10,%ymm6,%ymm6 - vpslld $12,%ymm6,%ymm0 - vpsrld $20,%ymm6,%ymm6 - vpor %ymm0,%ymm6,%ymm6 - # x11 += x15, x7 = rotl32(x7 ^ x11, 12) - vpaddd %ymm15,%ymm11,%ymm11 - vpxor %ymm11,%ymm7,%ymm7 - vpslld $12,%ymm7,%ymm0 - vpsrld $20,%ymm7,%ymm7 - vpor %ymm0,%ymm7,%ymm7 - - # x0 += x4, x12 = rotl32(x12 ^ x0, 8) - vpaddd 0x00(%rsp),%ymm4,%ymm0 - vmovdqa %ymm0,0x00(%rsp) - vpxor %ymm0,%ymm12,%ymm12 - vpshufb %ymm2,%ymm12,%ymm12 - # x1 += x5, x13 = rotl32(x13 ^ x1, 8) - vpaddd 0x20(%rsp),%ymm5,%ymm0 - vmovdqa %ymm0,0x20(%rsp) - vpxor %ymm0,%ymm13,%ymm13 - vpshufb %ymm2,%ymm13,%ymm13 - # x2 += x6, x14 = rotl32(x14 ^ x2, 8) - vpaddd 0x40(%rsp),%ymm6,%ymm0 - vmovdqa %ymm0,0x40(%rsp) - vpxor %ymm0,%ymm14,%ymm14 - vpshufb %ymm2,%ymm14,%ymm14 - # x3 += x7, x15 = rotl32(x15 ^ x3, 8) - vpaddd 0x60(%rsp),%ymm7,%ymm0 - vmovdqa %ymm0,0x60(%rsp) - vpxor %ymm0,%ymm15,%ymm15 - vpshufb %ymm2,%ymm15,%ymm15 - - # x8 += x12, x4 = rotl32(x4 ^ x8, 7) - vpaddd %ymm12,%ymm8,%ymm8 - vpxor %ymm8,%ymm4,%ymm4 - vpslld $7,%ymm4,%ymm0 - vpsrld $25,%ymm4,%ymm4 - vpor %ymm0,%ymm4,%ymm4 - # x9 += x13, x5 = rotl32(x5 ^ x9, 7) - vpaddd %ymm13,%ymm9,%ymm9 - vpxor %ymm9,%ymm5,%ymm5 - vpslld $7,%ymm5,%ymm0 - vpsrld $25,%ymm5,%ymm5 - vpor %ymm0,%ymm5,%ymm5 - # x10 += x14, x6 = rotl32(x6 ^ x10, 7) - vpaddd %ymm14,%ymm10,%ymm10 - vpxor %ymm10,%ymm6,%ymm6 - vpslld $7,%ymm6,%ymm0 - vpsrld $25,%ymm6,%ymm6 - vpor %ymm0,%ymm6,%ymm6 - # x11 += x15, x7 = rotl32(x7 ^ x11, 7) - vpaddd %ymm15,%ymm11,%ymm11 - vpxor %ymm11,%ymm7,%ymm7 - vpslld $7,%ymm7,%ymm0 - vpsrld 
$25,%ymm7,%ymm7 - vpor %ymm0,%ymm7,%ymm7 - - # x0 += x5, x15 = rotl32(x15 ^ x0, 16) - vpaddd 0x00(%rsp),%ymm5,%ymm0 - vmovdqa %ymm0,0x00(%rsp) - vpxor %ymm0,%ymm15,%ymm15 - vpshufb %ymm3,%ymm15,%ymm15 - # x1 += x6, x12 = rotl32(x12 ^ x1, 16)%ymm0 - vpaddd 0x20(%rsp),%ymm6,%ymm0 - vmovdqa %ymm0,0x20(%rsp) - vpxor %ymm0,%ymm12,%ymm12 - vpshufb %ymm3,%ymm12,%ymm12 - # x2 += x7, x13 = rotl32(x13 ^ x2, 16) - vpaddd 0x40(%rsp),%ymm7,%ymm0 - vmovdqa %ymm0,0x40(%rsp) - vpxor %ymm0,%ymm13,%ymm13 - vpshufb %ymm3,%ymm13,%ymm13 - # x3 += x4, x14 = rotl32(x14 ^ x3, 16) - vpaddd 0x60(%rsp),%ymm4,%ymm0 - vmovdqa %ymm0,0x60(%rsp) - vpxor %ymm0,%ymm14,%ymm14 - vpshufb %ymm3,%ymm14,%ymm14 - - # x10 += x15, x5 = rotl32(x5 ^ x10, 12) - vpaddd %ymm15,%ymm10,%ymm10 - vpxor %ymm10,%ymm5,%ymm5 - vpslld $12,%ymm5,%ymm0 - vpsrld $20,%ymm5,%ymm5 - vpor %ymm0,%ymm5,%ymm5 - # x11 += x12, x6 = rotl32(x6 ^ x11, 12) - vpaddd %ymm12,%ymm11,%ymm11 - vpxor %ymm11,%ymm6,%ymm6 - vpslld $12,%ymm6,%ymm0 - vpsrld $20,%ymm6,%ymm6 - vpor %ymm0,%ymm6,%ymm6 - # x8 += x13, x7 = rotl32(x7 ^ x8, 12) - vpaddd %ymm13,%ymm8,%ymm8 - vpxor %ymm8,%ymm7,%ymm7 - vpslld $12,%ymm7,%ymm0 - vpsrld $20,%ymm7,%ymm7 - vpor %ymm0,%ymm7,%ymm7 - # x9 += x14, x4 = rotl32(x4 ^ x9, 12) - vpaddd %ymm14,%ymm9,%ymm9 - vpxor %ymm9,%ymm4,%ymm4 - vpslld $12,%ymm4,%ymm0 - vpsrld $20,%ymm4,%ymm4 - vpor %ymm0,%ymm4,%ymm4 - - # x0 += x5, x15 = rotl32(x15 ^ x0, 8) - vpaddd 0x00(%rsp),%ymm5,%ymm0 - vmovdqa %ymm0,0x00(%rsp) - vpxor %ymm0,%ymm15,%ymm15 - vpshufb %ymm2,%ymm15,%ymm15 - # x1 += x6, x12 = rotl32(x12 ^ x1, 8) - vpaddd 0x20(%rsp),%ymm6,%ymm0 - vmovdqa %ymm0,0x20(%rsp) - vpxor %ymm0,%ymm12,%ymm12 - vpshufb %ymm2,%ymm12,%ymm12 - # x2 += x7, x13 = rotl32(x13 ^ x2, 8) - vpaddd 0x40(%rsp),%ymm7,%ymm0 - vmovdqa %ymm0,0x40(%rsp) - vpxor %ymm0,%ymm13,%ymm13 - vpshufb %ymm2,%ymm13,%ymm13 - # x3 += x4, x14 = rotl32(x14 ^ x3, 8) - vpaddd 0x60(%rsp),%ymm4,%ymm0 - vmovdqa %ymm0,0x60(%rsp) - vpxor %ymm0,%ymm14,%ymm14 - vpshufb %ymm2,%ymm14,%ymm14 - - # x10 += x15, x5 = rotl32(x5 ^ x10, 7) - vpaddd %ymm15,%ymm10,%ymm10 - vpxor %ymm10,%ymm5,%ymm5 - vpslld $7,%ymm5,%ymm0 - vpsrld $25,%ymm5,%ymm5 - vpor %ymm0,%ymm5,%ymm5 - # x11 += x12, x6 = rotl32(x6 ^ x11, 7) - vpaddd %ymm12,%ymm11,%ymm11 - vpxor %ymm11,%ymm6,%ymm6 - vpslld $7,%ymm6,%ymm0 - vpsrld $25,%ymm6,%ymm6 - vpor %ymm0,%ymm6,%ymm6 - # x8 += x13, x7 = rotl32(x7 ^ x8, 7) - vpaddd %ymm13,%ymm8,%ymm8 - vpxor %ymm8,%ymm7,%ymm7 - vpslld $7,%ymm7,%ymm0 - vpsrld $25,%ymm7,%ymm7 - vpor %ymm0,%ymm7,%ymm7 - # x9 += x14, x4 = rotl32(x4 ^ x9, 7) - vpaddd %ymm14,%ymm9,%ymm9 - vpxor %ymm9,%ymm4,%ymm4 - vpslld $7,%ymm4,%ymm0 - vpsrld $25,%ymm4,%ymm4 - vpor %ymm0,%ymm4,%ymm4 - - dec %ecx - jnz .Ldoubleround8 - - # x0..15[0-3] += s[0..15] - vpbroadcastd 0x00(%rdi),%ymm0 - vpaddd 0x00(%rsp),%ymm0,%ymm0 - vmovdqa %ymm0,0x00(%rsp) - vpbroadcastd 0x04(%rdi),%ymm0 - vpaddd 0x20(%rsp),%ymm0,%ymm0 - vmovdqa %ymm0,0x20(%rsp) - vpbroadcastd 0x08(%rdi),%ymm0 - vpaddd 0x40(%rsp),%ymm0,%ymm0 - vmovdqa %ymm0,0x40(%rsp) - vpbroadcastd 0x0c(%rdi),%ymm0 - vpaddd 0x60(%rsp),%ymm0,%ymm0 - vmovdqa %ymm0,0x60(%rsp) - vpbroadcastd 0x10(%rdi),%ymm0 - vpaddd %ymm0,%ymm4,%ymm4 - vpbroadcastd 0x14(%rdi),%ymm0 - vpaddd %ymm0,%ymm5,%ymm5 - vpbroadcastd 0x18(%rdi),%ymm0 - vpaddd %ymm0,%ymm6,%ymm6 - vpbroadcastd 0x1c(%rdi),%ymm0 - vpaddd %ymm0,%ymm7,%ymm7 - vpbroadcastd 0x20(%rdi),%ymm0 - vpaddd %ymm0,%ymm8,%ymm8 - vpbroadcastd 0x24(%rdi),%ymm0 - vpaddd %ymm0,%ymm9,%ymm9 - vpbroadcastd 0x28(%rdi),%ymm0 - vpaddd %ymm0,%ymm10,%ymm10 - vpbroadcastd 0x2c(%rdi),%ymm0 - 
vpaddd %ymm0,%ymm11,%ymm11 - vpbroadcastd 0x30(%rdi),%ymm0 - vpaddd %ymm0,%ymm12,%ymm12 - vpbroadcastd 0x34(%rdi),%ymm0 - vpaddd %ymm0,%ymm13,%ymm13 - vpbroadcastd 0x38(%rdi),%ymm0 - vpaddd %ymm0,%ymm14,%ymm14 - vpbroadcastd 0x3c(%rdi),%ymm0 - vpaddd %ymm0,%ymm15,%ymm15 - - # x12 += counter values 0-3 - vpaddd %ymm1,%ymm12,%ymm12 - - # interleave 32-bit words in state n, n+1 - vmovdqa 0x00(%rsp),%ymm0 - vmovdqa 0x20(%rsp),%ymm1 - vpunpckldq %ymm1,%ymm0,%ymm2 - vpunpckhdq %ymm1,%ymm0,%ymm1 - vmovdqa %ymm2,0x00(%rsp) - vmovdqa %ymm1,0x20(%rsp) - vmovdqa 0x40(%rsp),%ymm0 - vmovdqa 0x60(%rsp),%ymm1 - vpunpckldq %ymm1,%ymm0,%ymm2 - vpunpckhdq %ymm1,%ymm0,%ymm1 - vmovdqa %ymm2,0x40(%rsp) - vmovdqa %ymm1,0x60(%rsp) - vmovdqa %ymm4,%ymm0 - vpunpckldq %ymm5,%ymm0,%ymm4 - vpunpckhdq %ymm5,%ymm0,%ymm5 - vmovdqa %ymm6,%ymm0 - vpunpckldq %ymm7,%ymm0,%ymm6 - vpunpckhdq %ymm7,%ymm0,%ymm7 - vmovdqa %ymm8,%ymm0 - vpunpckldq %ymm9,%ymm0,%ymm8 - vpunpckhdq %ymm9,%ymm0,%ymm9 - vmovdqa %ymm10,%ymm0 - vpunpckldq %ymm11,%ymm0,%ymm10 - vpunpckhdq %ymm11,%ymm0,%ymm11 - vmovdqa %ymm12,%ymm0 - vpunpckldq %ymm13,%ymm0,%ymm12 - vpunpckhdq %ymm13,%ymm0,%ymm13 - vmovdqa %ymm14,%ymm0 - vpunpckldq %ymm15,%ymm0,%ymm14 - vpunpckhdq %ymm15,%ymm0,%ymm15 - - # interleave 64-bit words in state n, n+2 - vmovdqa 0x00(%rsp),%ymm0 - vmovdqa 0x40(%rsp),%ymm2 - vpunpcklqdq %ymm2,%ymm0,%ymm1 - vpunpckhqdq %ymm2,%ymm0,%ymm2 - vmovdqa %ymm1,0x00(%rsp) - vmovdqa %ymm2,0x40(%rsp) - vmovdqa 0x20(%rsp),%ymm0 - vmovdqa 0x60(%rsp),%ymm2 - vpunpcklqdq %ymm2,%ymm0,%ymm1 - vpunpckhqdq %ymm2,%ymm0,%ymm2 - vmovdqa %ymm1,0x20(%rsp) - vmovdqa %ymm2,0x60(%rsp) - vmovdqa %ymm4,%ymm0 - vpunpcklqdq %ymm6,%ymm0,%ymm4 - vpunpckhqdq %ymm6,%ymm0,%ymm6 - vmovdqa %ymm5,%ymm0 - vpunpcklqdq %ymm7,%ymm0,%ymm5 - vpunpckhqdq %ymm7,%ymm0,%ymm7 - vmovdqa %ymm8,%ymm0 - vpunpcklqdq %ymm10,%ymm0,%ymm8 - vpunpckhqdq %ymm10,%ymm0,%ymm10 - vmovdqa %ymm9,%ymm0 - vpunpcklqdq %ymm11,%ymm0,%ymm9 - vpunpckhqdq %ymm11,%ymm0,%ymm11 - vmovdqa %ymm12,%ymm0 - vpunpcklqdq %ymm14,%ymm0,%ymm12 - vpunpckhqdq %ymm14,%ymm0,%ymm14 - vmovdqa %ymm13,%ymm0 - vpunpcklqdq %ymm15,%ymm0,%ymm13 - vpunpckhqdq %ymm15,%ymm0,%ymm15 - - # interleave 128-bit words in state n, n+4 - vmovdqa 0x00(%rsp),%ymm0 - vperm2i128 $0x20,%ymm4,%ymm0,%ymm1 - vperm2i128 $0x31,%ymm4,%ymm0,%ymm4 - vmovdqa %ymm1,0x00(%rsp) - vmovdqa 0x20(%rsp),%ymm0 - vperm2i128 $0x20,%ymm5,%ymm0,%ymm1 - vperm2i128 $0x31,%ymm5,%ymm0,%ymm5 - vmovdqa %ymm1,0x20(%rsp) - vmovdqa 0x40(%rsp),%ymm0 - vperm2i128 $0x20,%ymm6,%ymm0,%ymm1 - vperm2i128 $0x31,%ymm6,%ymm0,%ymm6 - vmovdqa %ymm1,0x40(%rsp) - vmovdqa 0x60(%rsp),%ymm0 - vperm2i128 $0x20,%ymm7,%ymm0,%ymm1 - vperm2i128 $0x31,%ymm7,%ymm0,%ymm7 - vmovdqa %ymm1,0x60(%rsp) - vperm2i128 $0x20,%ymm12,%ymm8,%ymm0 - vperm2i128 $0x31,%ymm12,%ymm8,%ymm12 - vmovdqa %ymm0,%ymm8 - vperm2i128 $0x20,%ymm13,%ymm9,%ymm0 - vperm2i128 $0x31,%ymm13,%ymm9,%ymm13 - vmovdqa %ymm0,%ymm9 - vperm2i128 $0x20,%ymm14,%ymm10,%ymm0 - vperm2i128 $0x31,%ymm14,%ymm10,%ymm14 - vmovdqa %ymm0,%ymm10 - vperm2i128 $0x20,%ymm15,%ymm11,%ymm0 - vperm2i128 $0x31,%ymm15,%ymm11,%ymm15 - vmovdqa %ymm0,%ymm11 - - # xor with corresponding input, write to output - vmovdqa 0x00(%rsp),%ymm0 - vpxor 0x0000(%rdx),%ymm0,%ymm0 - vmovdqu %ymm0,0x0000(%rsi) - vmovdqa 0x20(%rsp),%ymm0 - vpxor 0x0080(%rdx),%ymm0,%ymm0 - vmovdqu %ymm0,0x0080(%rsi) - vmovdqa 0x40(%rsp),%ymm0 - vpxor 0x0040(%rdx),%ymm0,%ymm0 - vmovdqu %ymm0,0x0040(%rsi) - vmovdqa 0x60(%rsp),%ymm0 - vpxor 0x00c0(%rdx),%ymm0,%ymm0 - vmovdqu %ymm0,0x00c0(%rsi) - vpxor 
0x0100(%rdx),%ymm4,%ymm4 - vmovdqu %ymm4,0x0100(%rsi) - vpxor 0x0180(%rdx),%ymm5,%ymm5 - vmovdqu %ymm5,0x00180(%rsi) - vpxor 0x0140(%rdx),%ymm6,%ymm6 - vmovdqu %ymm6,0x0140(%rsi) - vpxor 0x01c0(%rdx),%ymm7,%ymm7 - vmovdqu %ymm7,0x01c0(%rsi) - vpxor 0x0020(%rdx),%ymm8,%ymm8 - vmovdqu %ymm8,0x0020(%rsi) - vpxor 0x00a0(%rdx),%ymm9,%ymm9 - vmovdqu %ymm9,0x00a0(%rsi) - vpxor 0x0060(%rdx),%ymm10,%ymm10 - vmovdqu %ymm10,0x0060(%rsi) - vpxor 0x00e0(%rdx),%ymm11,%ymm11 - vmovdqu %ymm11,0x00e0(%rsi) - vpxor 0x0120(%rdx),%ymm12,%ymm12 - vmovdqu %ymm12,0x0120(%rsi) - vpxor 0x01a0(%rdx),%ymm13,%ymm13 - vmovdqu %ymm13,0x01a0(%rsi) - vpxor 0x0160(%rdx),%ymm14,%ymm14 - vmovdqu %ymm14,0x0160(%rsi) - vpxor 0x01e0(%rdx),%ymm15,%ymm15 - vmovdqu %ymm15,0x01e0(%rsi) - - vzeroupper - lea -8(%r10),%rsp - ret -ENDPROC(chacha20_8block_xor_avx2) diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c deleted file mode 100644 index dce7c5d39c2f..000000000000 --- a/arch/x86/crypto/chacha20_glue.c +++ /dev/null @@ -1,146 +0,0 @@ -/* - * ChaCha20 256-bit cipher algorithm, RFC7539, SIMD glue code - * - * Copyright (C) 2015 Martin Willi - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - */ - -#include -#include -#include -#include -#include -#include -#include - -#define CHACHA20_STATE_ALIGN 16 - -asmlinkage void chacha20_block_xor_ssse3(u32 *state, u8 *dst, const u8 *src); -asmlinkage void chacha20_4block_xor_ssse3(u32 *state, u8 *dst, const u8 *src); -#ifdef CONFIG_AS_AVX2 -asmlinkage void chacha20_8block_xor_avx2(u32 *state, u8 *dst, const u8 *src); -static bool chacha20_use_avx2; -#endif - -static void chacha20_dosimd(u32 *state, u8 *dst, const u8 *src, - unsigned int bytes) -{ - u8 buf[CHACHA20_BLOCK_SIZE]; - -#ifdef CONFIG_AS_AVX2 - if (chacha20_use_avx2) { - while (bytes >= CHACHA20_BLOCK_SIZE * 8) { - chacha20_8block_xor_avx2(state, dst, src); - bytes -= CHACHA20_BLOCK_SIZE * 8; - src += CHACHA20_BLOCK_SIZE * 8; - dst += CHACHA20_BLOCK_SIZE * 8; - state[12] += 8; - } - } -#endif - while (bytes >= CHACHA20_BLOCK_SIZE * 4) { - chacha20_4block_xor_ssse3(state, dst, src); - bytes -= CHACHA20_BLOCK_SIZE * 4; - src += CHACHA20_BLOCK_SIZE * 4; - dst += CHACHA20_BLOCK_SIZE * 4; - state[12] += 4; - } - while (bytes >= CHACHA20_BLOCK_SIZE) { - chacha20_block_xor_ssse3(state, dst, src); - bytes -= CHACHA20_BLOCK_SIZE; - src += CHACHA20_BLOCK_SIZE; - dst += CHACHA20_BLOCK_SIZE; - state[12]++; - } - if (bytes) { - memcpy(buf, src, bytes); - chacha20_block_xor_ssse3(state, buf, buf); - memcpy(dst, buf, bytes); - } -} - -static int chacha20_simd(struct skcipher_request *req) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm); - u32 *state, state_buf[16 + 2] __aligned(8); - struct skcipher_walk walk; - int err; - - BUILD_BUG_ON(CHACHA20_STATE_ALIGN != 16); - state = PTR_ALIGN(state_buf + 0, CHACHA20_STATE_ALIGN); - - if (req->cryptlen <= CHACHA20_BLOCK_SIZE || !may_use_simd()) - return crypto_chacha20_crypt(req); - - err = skcipher_walk_virt(&walk, req, true); - - crypto_chacha20_init(state, ctx, walk.iv); - - kernel_fpu_begin(); - - while (walk.nbytes >= CHACHA20_BLOCK_SIZE) { - chacha20_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr, - rounddown(walk.nbytes, CHACHA20_BLOCK_SIZE)); - err = skcipher_walk_done(&walk, - 
walk.nbytes % CHACHA20_BLOCK_SIZE); - } - - if (walk.nbytes) { - chacha20_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr, - walk.nbytes); - err = skcipher_walk_done(&walk, 0); - } - - kernel_fpu_end(); - - return err; -} - -static struct skcipher_alg alg = { - .base.cra_name = "chacha20", - .base.cra_driver_name = "chacha20-simd", - .base.cra_priority = 300, - .base.cra_blocksize = 1, - .base.cra_ctxsize = sizeof(struct chacha20_ctx), - .base.cra_module = THIS_MODULE, - - .min_keysize = CHACHA20_KEY_SIZE, - .max_keysize = CHACHA20_KEY_SIZE, - .ivsize = CHACHA20_IV_SIZE, - .chunksize = CHACHA20_BLOCK_SIZE, - .setkey = crypto_chacha20_setkey, - .encrypt = chacha20_simd, - .decrypt = chacha20_simd, -}; - -static int __init chacha20_simd_mod_init(void) -{ - if (!boot_cpu_has(X86_FEATURE_SSSE3)) - return -ENODEV; - -#ifdef CONFIG_AS_AVX2 - chacha20_use_avx2 = boot_cpu_has(X86_FEATURE_AVX) && - boot_cpu_has(X86_FEATURE_AVX2) && - cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL); -#endif - return crypto_register_skcipher(&alg); -} - -static void __exit chacha20_simd_mod_fini(void) -{ - crypto_unregister_skcipher(&alg); -} - -module_init(chacha20_simd_mod_init); -module_exit(chacha20_simd_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Martin Willi "); -MODULE_DESCRIPTION("chacha20 cipher algorithm, SIMD accelerated"); -MODULE_ALIAS_CRYPTO("chacha20"); -MODULE_ALIAS_CRYPTO("chacha20-simd"); diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c new file mode 100644 index 000000000000..45c1c4143176 --- /dev/null +++ b/arch/x86/crypto/chacha_glue.c @@ -0,0 +1,304 @@ +/* + * x64 SIMD accelerated ChaCha and XChaCha stream ciphers, + * including ChaCha20 (RFC7539) + * + * Copyright (C) 2015 Martin Willi + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#define CHACHA_STATE_ALIGN 16 + +asmlinkage void chacha_block_xor_ssse3(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +asmlinkage void chacha_4block_xor_ssse3(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +asmlinkage void hchacha_block_ssse3(const u32 *state, u32 *out, int nrounds); +#ifdef CONFIG_AS_AVX2 +asmlinkage void chacha_2block_xor_avx2(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +asmlinkage void chacha_4block_xor_avx2(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +asmlinkage void chacha_8block_xor_avx2(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +static bool chacha_use_avx2; +#ifdef CONFIG_AS_AVX512 +asmlinkage void chacha_2block_xor_avx512vl(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +asmlinkage void chacha_4block_xor_avx512vl(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +asmlinkage void chacha_8block_xor_avx512vl(u32 *state, u8 *dst, const u8 *src, + unsigned int len, int nrounds); +static bool chacha_use_avx512vl; +#endif +#endif + +static unsigned int chacha_advance(unsigned int len, unsigned int maxblocks) +{ + len = min(len, maxblocks * CHACHA_BLOCK_SIZE); + return round_up(len, CHACHA_BLOCK_SIZE) / CHACHA_BLOCK_SIZE; +} + +static void chacha_dosimd(u32 *state, u8 *dst, const u8 *src, + unsigned int bytes, int nrounds) +{ +#ifdef CONFIG_AS_AVX2 +#ifdef CONFIG_AS_AVX512 + if (chacha_use_avx512vl) { + while (bytes >= CHACHA_BLOCK_SIZE * 8) { + chacha_8block_xor_avx512vl(state, dst, src, bytes, + nrounds); + bytes -= CHACHA_BLOCK_SIZE * 8; + src += CHACHA_BLOCK_SIZE * 8; + dst += CHACHA_BLOCK_SIZE * 8; + state[12] += 8; + } + if (bytes > CHACHA_BLOCK_SIZE * 4) { + chacha_8block_xor_avx512vl(state, dst, src, bytes, + nrounds); + state[12] += chacha_advance(bytes, 8); + return; + } + if (bytes > CHACHA_BLOCK_SIZE * 2) { + chacha_4block_xor_avx512vl(state, dst, src, bytes, + nrounds); + state[12] += chacha_advance(bytes, 4); + return; + } + if (bytes) { + chacha_2block_xor_avx512vl(state, dst, src, bytes, + nrounds); + state[12] += chacha_advance(bytes, 2); + return; + } + } +#endif + if (chacha_use_avx2) { + while (bytes >= CHACHA_BLOCK_SIZE * 8) { + chacha_8block_xor_avx2(state, dst, src, bytes, nrounds); + bytes -= CHACHA_BLOCK_SIZE * 8; + src += CHACHA_BLOCK_SIZE * 8; + dst += CHACHA_BLOCK_SIZE * 8; + state[12] += 8; + } + if (bytes > CHACHA_BLOCK_SIZE * 4) { + chacha_8block_xor_avx2(state, dst, src, bytes, nrounds); + state[12] += chacha_advance(bytes, 8); + return; + } + if (bytes > CHACHA_BLOCK_SIZE * 2) { + chacha_4block_xor_avx2(state, dst, src, bytes, nrounds); + state[12] += chacha_advance(bytes, 4); + return; + } + if (bytes > CHACHA_BLOCK_SIZE) { + chacha_2block_xor_avx2(state, dst, src, bytes, nrounds); + state[12] += chacha_advance(bytes, 2); + return; + } + } +#endif + while (bytes >= CHACHA_BLOCK_SIZE * 4) { + chacha_4block_xor_ssse3(state, dst, src, bytes, nrounds); + bytes -= CHACHA_BLOCK_SIZE * 4; + src += CHACHA_BLOCK_SIZE * 4; + dst += CHACHA_BLOCK_SIZE * 4; + state[12] += 4; + } + if (bytes > CHACHA_BLOCK_SIZE) { + chacha_4block_xor_ssse3(state, dst, src, bytes, nrounds); + state[12] += chacha_advance(bytes, 4); + return; + } + if (bytes) { + chacha_block_xor_ssse3(state, dst, src, bytes, nrounds); + state[12]++; + } +} + +static int chacha_simd_stream_xor(struct skcipher_walk *walk, + struct chacha_ctx 
*ctx, u8 *iv) +{ + u32 *state, state_buf[16 + 2] __aligned(8); + int next_yield = 4096; /* bytes until next FPU yield */ + int err = 0; + + BUILD_BUG_ON(CHACHA_STATE_ALIGN != 16); + state = PTR_ALIGN(state_buf + 0, CHACHA_STATE_ALIGN); + + crypto_chacha_init(state, ctx, iv); + + while (walk->nbytes > 0) { + unsigned int nbytes = walk->nbytes; + + if (nbytes < walk->total) { + nbytes = round_down(nbytes, walk->stride); + next_yield -= nbytes; + } + + chacha_dosimd(state, walk->dst.virt.addr, walk->src.virt.addr, + nbytes, ctx->nrounds); + + if (next_yield <= 0) { + /* temporarily allow preemption */ + kernel_fpu_end(); + kernel_fpu_begin(); + next_yield = 4096; + } + + err = skcipher_walk_done(walk, walk->nbytes - nbytes); + } + + return err; +} + +static int chacha_simd(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + int err; + + if (req->cryptlen <= CHACHA_BLOCK_SIZE || !irq_fpu_usable()) + return crypto_chacha_crypt(req); + + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + + kernel_fpu_begin(); + err = chacha_simd_stream_xor(&walk, ctx, req->iv); + kernel_fpu_end(); + return err; +} + +static int xchacha_simd(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + struct chacha_ctx subctx; + u32 *state, state_buf[16 + 2] __aligned(8); + u8 real_iv[16]; + int err; + + if (req->cryptlen <= CHACHA_BLOCK_SIZE || !irq_fpu_usable()) + return crypto_xchacha_crypt(req); + + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + + BUILD_BUG_ON(CHACHA_STATE_ALIGN != 16); + state = PTR_ALIGN(state_buf + 0, CHACHA_STATE_ALIGN); + crypto_chacha_init(state, ctx, req->iv); + + kernel_fpu_begin(); + + hchacha_block_ssse3(state, subctx.key, ctx->nrounds); + subctx.nrounds = ctx->nrounds; + + memcpy(&real_iv[0], req->iv + 24, 8); + memcpy(&real_iv[8], req->iv + 16, 8); + err = chacha_simd_stream_xor(&walk, &subctx, real_iv); + + kernel_fpu_end(); + + return err; +} + +static struct skcipher_alg algs[] = { + { + .base.cra_name = "chacha20", + .base.cra_driver_name = "chacha20-simd", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha20_setkey, + .encrypt = chacha_simd, + .decrypt = chacha_simd, + }, { + .base.cra_name = "xchacha20", + .base.cra_driver_name = "xchacha20-simd", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha20_setkey, + .encrypt = xchacha_simd, + .decrypt = xchacha_simd, + }, { + .base.cra_name = "xchacha12", + .base.cra_driver_name = "xchacha12-simd", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha12_setkey, + .encrypt = xchacha_simd, + .decrypt = xchacha_simd, + }, +}; + 
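The walk loop in chacha_simd_stream_xor() above keeps each FPU section bounded: every chunk is rounded down to the walk stride, and after roughly 4096 bytes of SIMD work the FPU context is briefly closed and reopened so preemption is not blocked for the whole request. A minimal sketch of that pattern, outside the skcipher machinery — process_chunk() and simd_xor_with_yield() are hypothetical stand-ins, not names from this patch:

	/*
	 * Sketch only: bounded FPU sections with a periodic yield.
	 * process_chunk() is a hypothetical stand-in for a real SIMD
	 * routine such as chacha_dosimd(); the 4096-byte budget mirrors
	 * next_yield in the code above.
	 */
	static void simd_xor_with_yield(u8 *dst, const u8 *src, size_t len)
	{
		const size_t budget = 4096;

		kernel_fpu_begin();
		while (len) {
			size_t n = min(len, budget);

			process_chunk(dst, src, n);	/* hypothetical helper */
			dst += n;
			src += n;
			len -= n;

			if (len) {
				/* briefly allow preemption between chunks */
				kernel_fpu_end();
				kernel_fpu_begin();
			}
		}
		kernel_fpu_end();
	}

The real loop additionally rounds each chunk down to the walk stride so the ChaCha block counter in state[12] stays consistent across chunk boundaries.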
+static int __init chacha_simd_mod_init(void) +{ + if (!boot_cpu_has(X86_FEATURE_SSSE3)) + return -ENODEV; + +#ifdef CONFIG_AS_AVX2 + chacha_use_avx2 = boot_cpu_has(X86_FEATURE_AVX) && + boot_cpu_has(X86_FEATURE_AVX2) && + cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL); +#ifdef CONFIG_AS_AVX512 + chacha_use_avx512vl = chacha_use_avx2 && + boot_cpu_has(X86_FEATURE_AVX512VL) && + boot_cpu_has(X86_FEATURE_AVX512BW); /* kmovq */ +#endif +#endif + return crypto_register_skciphers(algs, ARRAY_SIZE(algs)); +} + +static void __exit chacha_simd_mod_fini(void) +{ + crypto_unregister_skciphers(algs, ARRAY_SIZE(algs)); +} + +module_init(chacha_simd_mod_init); +module_exit(chacha_simd_mod_fini); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Martin Willi "); +MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers (x64 SIMD accelerated)"); +MODULE_ALIAS_CRYPTO("chacha20"); +MODULE_ALIAS_CRYPTO("chacha20-simd"); +MODULE_ALIAS_CRYPTO("xchacha20"); +MODULE_ALIAS_CRYPTO("xchacha20-simd"); +MODULE_ALIAS_CRYPTO("xchacha12"); +MODULE_ALIAS_CRYPTO("xchacha12-simd"); diff --git a/arch/x86/crypto/nh-avx2-x86_64.S b/arch/x86/crypto/nh-avx2-x86_64.S new file mode 100644 index 000000000000..f7946ea1b704 --- /dev/null +++ b/arch/x86/crypto/nh-avx2-x86_64.S @@ -0,0 +1,157 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * NH - ε-almost-universal hash function, x86_64 AVX2 accelerated + * + * Copyright 2018 Google LLC + * + * Author: Eric Biggers + */ + +#include + +#define PASS0_SUMS %ymm0 +#define PASS1_SUMS %ymm1 +#define PASS2_SUMS %ymm2 +#define PASS3_SUMS %ymm3 +#define K0 %ymm4 +#define K0_XMM %xmm4 +#define K1 %ymm5 +#define K1_XMM %xmm5 +#define K2 %ymm6 +#define K2_XMM %xmm6 +#define K3 %ymm7 +#define K3_XMM %xmm7 +#define T0 %ymm8 +#define T1 %ymm9 +#define T2 %ymm10 +#define T2_XMM %xmm10 +#define T3 %ymm11 +#define T3_XMM %xmm11 +#define T4 %ymm12 +#define T5 %ymm13 +#define T6 %ymm14 +#define T7 %ymm15 +#define KEY %rdi +#define MESSAGE %rsi +#define MESSAGE_LEN %rdx +#define HASH %rcx + +.macro _nh_2xstride k0, k1, k2, k3 + + // Add message words to key words + vpaddd \k0, T3, T0 + vpaddd \k1, T3, T1 + vpaddd \k2, T3, T2 + vpaddd \k3, T3, T3 + + // Multiply 32x32 => 64 and accumulate + vpshufd $0x10, T0, T4 + vpshufd $0x32, T0, T0 + vpshufd $0x10, T1, T5 + vpshufd $0x32, T1, T1 + vpshufd $0x10, T2, T6 + vpshufd $0x32, T2, T2 + vpshufd $0x10, T3, T7 + vpshufd $0x32, T3, T3 + vpmuludq T4, T0, T0 + vpmuludq T5, T1, T1 + vpmuludq T6, T2, T2 + vpmuludq T7, T3, T3 + vpaddq T0, PASS0_SUMS, PASS0_SUMS + vpaddq T1, PASS1_SUMS, PASS1_SUMS + vpaddq T2, PASS2_SUMS, PASS2_SUMS + vpaddq T3, PASS3_SUMS, PASS3_SUMS +.endm + +/* + * void nh_avx2(const u32 *key, const u8 *message, size_t message_len, + * u8 hash[NH_HASH_BYTES]) + * + * It's guaranteed that message_len % 16 == 0. + */ +ENTRY(nh_avx2) + + vmovdqu 0x00(KEY), K0 + vmovdqu 0x10(KEY), K1 + add $0x20, KEY + vpxor PASS0_SUMS, PASS0_SUMS, PASS0_SUMS + vpxor PASS1_SUMS, PASS1_SUMS, PASS1_SUMS + vpxor PASS2_SUMS, PASS2_SUMS, PASS2_SUMS + vpxor PASS3_SUMS, PASS3_SUMS, PASS3_SUMS + + sub $0x40, MESSAGE_LEN + jl .Lloop4_done +.Lloop4: + vmovdqu (MESSAGE), T3 + vmovdqu 0x00(KEY), K2 + vmovdqu 0x10(KEY), K3 + _nh_2xstride K0, K1, K2, K3 + + vmovdqu 0x20(MESSAGE), T3 + vmovdqu 0x20(KEY), K0 + vmovdqu 0x30(KEY), K1 + _nh_2xstride K2, K3, K0, K1 + + add $0x40, MESSAGE + add $0x40, KEY + sub $0x40, MESSAGE_LEN + jge .Lloop4 + +.Lloop4_done: + and $0x3f, MESSAGE_LEN + jz .Ldone + + cmp $0x20, MESSAGE_LEN + jl .Llast + + // 2 or 3 strides remain; do 2 more. 
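+	// (MESSAGE_LEN is 0x20 or 0x30 here: after the two strides below,
+	// either nothing is left and we jump to .Ldone, or exactly one
+	// 16-byte stride remains and execution falls through to .Llast.)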
+ vmovdqu (MESSAGE), T3 + vmovdqu 0x00(KEY), K2 + vmovdqu 0x10(KEY), K3 + _nh_2xstride K0, K1, K2, K3 + add $0x20, MESSAGE + add $0x20, KEY + sub $0x20, MESSAGE_LEN + jz .Ldone + vmovdqa K2, K0 + vmovdqa K3, K1 +.Llast: + // Last stride. Zero the high 128 bits of the message and keys so they + // don't affect the result when processing them like 2 strides. + vmovdqu (MESSAGE), T3_XMM + vmovdqa K0_XMM, K0_XMM + vmovdqa K1_XMM, K1_XMM + vmovdqu 0x00(KEY), K2_XMM + vmovdqu 0x10(KEY), K3_XMM + _nh_2xstride K0, K1, K2, K3 + +.Ldone: + // Sum the accumulators for each pass, then store the sums to 'hash' + + // PASS0_SUMS is (0A 0B 0C 0D) + // PASS1_SUMS is (1A 1B 1C 1D) + // PASS2_SUMS is (2A 2B 2C 2D) + // PASS3_SUMS is (3A 3B 3C 3D) + // We need the horizontal sums: + // (0A + 0B + 0C + 0D, + // 1A + 1B + 1C + 1D, + // 2A + 2B + 2C + 2D, + // 3A + 3B + 3C + 3D) + // + + vpunpcklqdq PASS1_SUMS, PASS0_SUMS, T0 // T0 = (0A 1A 0C 1C) + vpunpckhqdq PASS1_SUMS, PASS0_SUMS, T1 // T1 = (0B 1B 0D 1D) + vpunpcklqdq PASS3_SUMS, PASS2_SUMS, T2 // T2 = (2A 3A 2C 3C) + vpunpckhqdq PASS3_SUMS, PASS2_SUMS, T3 // T3 = (2B 3B 2D 3D) + + vinserti128 $0x1, T2_XMM, T0, T4 // T4 = (0A 1A 2A 3A) + vinserti128 $0x1, T3_XMM, T1, T5 // T5 = (0B 1B 2B 3B) + vperm2i128 $0x31, T2, T0, T0 // T0 = (0C 1C 2C 3C) + vperm2i128 $0x31, T3, T1, T1 // T1 = (0D 1D 2D 3D) + + vpaddq T5, T4, T4 + vpaddq T1, T0, T0 + vpaddq T4, T0, T0 + vmovdqu T0, (HASH) + ret +ENDPROC(nh_avx2) diff --git a/arch/x86/crypto/nh-sse2-x86_64.S b/arch/x86/crypto/nh-sse2-x86_64.S new file mode 100644 index 000000000000..51f52d4ab4bb --- /dev/null +++ b/arch/x86/crypto/nh-sse2-x86_64.S @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * NH - ε-almost-universal hash function, x86_64 SSE2 accelerated + * + * Copyright 2018 Google LLC + * + * Author: Eric Biggers + */ + +#include + +#define PASS0_SUMS %xmm0 +#define PASS1_SUMS %xmm1 +#define PASS2_SUMS %xmm2 +#define PASS3_SUMS %xmm3 +#define K0 %xmm4 +#define K1 %xmm5 +#define K2 %xmm6 +#define K3 %xmm7 +#define T0 %xmm8 +#define T1 %xmm9 +#define T2 %xmm10 +#define T3 %xmm11 +#define T4 %xmm12 +#define T5 %xmm13 +#define T6 %xmm14 +#define T7 %xmm15 +#define KEY %rdi +#define MESSAGE %rsi +#define MESSAGE_LEN %rdx +#define HASH %rcx + +.macro _nh_stride k0, k1, k2, k3, offset + + // Load next message stride + movdqu \offset(MESSAGE), T1 + + // Load next key stride + movdqu \offset(KEY), \k3 + + // Add message words to key words + movdqa T1, T2 + movdqa T1, T3 + paddd T1, \k0 // reuse k0 to avoid a move + paddd \k1, T1 + paddd \k2, T2 + paddd \k3, T3 + + // Multiply 32x32 => 64 and accumulate + pshufd $0x10, \k0, T4 + pshufd $0x32, \k0, \k0 + pshufd $0x10, T1, T5 + pshufd $0x32, T1, T1 + pshufd $0x10, T2, T6 + pshufd $0x32, T2, T2 + pshufd $0x10, T3, T7 + pshufd $0x32, T3, T3 + pmuludq T4, \k0 + pmuludq T5, T1 + pmuludq T6, T2 + pmuludq T7, T3 + paddq \k0, PASS0_SUMS + paddq T1, PASS1_SUMS + paddq T2, PASS2_SUMS + paddq T3, PASS3_SUMS +.endm + +/* + * void nh_sse2(const u32 *key, const u8 *message, size_t message_len, + * u8 hash[NH_HASH_BYTES]) + * + * It's guaranteed that message_len % 16 == 0. 
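+ *
+ * Four NH passes run interleaved: each 16-byte message stride is added
+ * to four consecutive 16-byte key strides (at offsets 0x00, 0x10, 0x20,
+ * 0x30 relative to that stride's key position), the 32x32 => 64-bit
+ * products are accumulated per pass, and the key window then slides
+ * forward 16 bytes per message stride.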
+ */ +ENTRY(nh_sse2) + + movdqu 0x00(KEY), K0 + movdqu 0x10(KEY), K1 + movdqu 0x20(KEY), K2 + add $0x30, KEY + pxor PASS0_SUMS, PASS0_SUMS + pxor PASS1_SUMS, PASS1_SUMS + pxor PASS2_SUMS, PASS2_SUMS + pxor PASS3_SUMS, PASS3_SUMS + + sub $0x40, MESSAGE_LEN + jl .Lloop4_done +.Lloop4: + _nh_stride K0, K1, K2, K3, 0x00 + _nh_stride K1, K2, K3, K0, 0x10 + _nh_stride K2, K3, K0, K1, 0x20 + _nh_stride K3, K0, K1, K2, 0x30 + add $0x40, KEY + add $0x40, MESSAGE + sub $0x40, MESSAGE_LEN + jge .Lloop4 + +.Lloop4_done: + and $0x3f, MESSAGE_LEN + jz .Ldone + _nh_stride K0, K1, K2, K3, 0x00 + + sub $0x10, MESSAGE_LEN + jz .Ldone + _nh_stride K1, K2, K3, K0, 0x10 + + sub $0x10, MESSAGE_LEN + jz .Ldone + _nh_stride K2, K3, K0, K1, 0x20 + +.Ldone: + // Sum the accumulators for each pass, then store the sums to 'hash' + movdqa PASS0_SUMS, T0 + movdqa PASS2_SUMS, T1 + punpcklqdq PASS1_SUMS, T0 // => (PASS0_SUM_A PASS1_SUM_A) + punpcklqdq PASS3_SUMS, T1 // => (PASS2_SUM_A PASS3_SUM_A) + punpckhqdq PASS1_SUMS, PASS0_SUMS // => (PASS0_SUM_B PASS1_SUM_B) + punpckhqdq PASS3_SUMS, PASS2_SUMS // => (PASS2_SUM_B PASS3_SUM_B) + paddq PASS0_SUMS, T0 + paddq PASS2_SUMS, T1 + movdqu T0, 0x00(HASH) + movdqu T1, 0x10(HASH) + ret +ENDPROC(nh_sse2) diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c new file mode 100644 index 000000000000..20d815ea4b6a --- /dev/null +++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c @@ -0,0 +1,77 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NHPoly1305 - ε-almost-∆-universal hash function for Adiantum + * (AVX2 accelerated version) + * + * Copyright 2018 Google LLC + */ + +#include +#include +#include +#include + +asmlinkage void nh_avx2(const u32 *key, const u8 *message, size_t message_len, + u8 hash[NH_HASH_BYTES]); + +/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */ +static void _nh_avx2(const u32 *key, const u8 *message, size_t message_len, + __le64 hash[NH_NUM_PASSES]) +{ + nh_avx2(key, message, message_len, (u8 *)hash); +} + +static int nhpoly1305_avx2_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen) +{ + if (srclen < 64 || !irq_fpu_usable()) + return crypto_nhpoly1305_update(desc, src, srclen); + + do { + unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); + + kernel_fpu_begin(); + crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2); + kernel_fpu_end(); + src += n; + srclen -= n; + } while (srclen); + return 0; +} + +static struct shash_alg nhpoly1305_alg = { + .base.cra_name = "nhpoly1305", + .base.cra_driver_name = "nhpoly1305-avx2", + .base.cra_priority = 300, + .base.cra_ctxsize = sizeof(struct nhpoly1305_key), + .base.cra_module = THIS_MODULE, + .digestsize = POLY1305_DIGEST_SIZE, + .init = crypto_nhpoly1305_init, + .update = nhpoly1305_avx2_update, + .final = crypto_nhpoly1305_final, + .setkey = crypto_nhpoly1305_setkey, + .descsize = sizeof(struct nhpoly1305_state), +}; + +static int __init nhpoly1305_mod_init(void) +{ + if (!boot_cpu_has(X86_FEATURE_AVX2) || + !boot_cpu_has(X86_FEATURE_OSXSAVE)) + return -ENODEV; + + return crypto_register_shash(&nhpoly1305_alg); +} + +static void __exit nhpoly1305_mod_exit(void) +{ + crypto_unregister_shash(&nhpoly1305_alg); +} + +module_init(nhpoly1305_mod_init); +module_exit(nhpoly1305_mod_exit); + +MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function (AVX2-accelerated)"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("nhpoly1305"); +MODULE_ALIAS_CRYPTO("nhpoly1305-avx2"); diff --git 
a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c new file mode 100644 index 000000000000..ed68d164ce14 --- /dev/null +++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c @@ -0,0 +1,76 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NHPoly1305 - ε-almost-∆-universal hash function for Adiantum + * (SSE2 accelerated version) + * + * Copyright 2018 Google LLC + */ + +#include +#include +#include +#include + +asmlinkage void nh_sse2(const u32 *key, const u8 *message, size_t message_len, + u8 hash[NH_HASH_BYTES]); + +/* wrapper to avoid indirect call to assembly, which doesn't work with CFI */ +static void _nh_sse2(const u32 *key, const u8 *message, size_t message_len, + __le64 hash[NH_NUM_PASSES]) +{ + nh_sse2(key, message, message_len, (u8 *)hash); +} + +static int nhpoly1305_sse2_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen) +{ + if (srclen < 64 || !irq_fpu_usable()) + return crypto_nhpoly1305_update(desc, src, srclen); + + do { + unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); + + kernel_fpu_begin(); + crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2); + kernel_fpu_end(); + src += n; + srclen -= n; + } while (srclen); + return 0; +} + +static struct shash_alg nhpoly1305_alg = { + .base.cra_name = "nhpoly1305", + .base.cra_driver_name = "nhpoly1305-sse2", + .base.cra_priority = 200, + .base.cra_ctxsize = sizeof(struct nhpoly1305_key), + .base.cra_module = THIS_MODULE, + .digestsize = POLY1305_DIGEST_SIZE, + .init = crypto_nhpoly1305_init, + .update = nhpoly1305_sse2_update, + .final = crypto_nhpoly1305_final, + .setkey = crypto_nhpoly1305_setkey, + .descsize = sizeof(struct nhpoly1305_state), +}; + +static int __init nhpoly1305_mod_init(void) +{ + if (!boot_cpu_has(X86_FEATURE_XMM2)) + return -ENODEV; + + return crypto_register_shash(&nhpoly1305_alg); +} + +static void __exit nhpoly1305_mod_exit(void) +{ + crypto_unregister_shash(&nhpoly1305_alg); +} + +module_init(nhpoly1305_mod_init); +module_exit(nhpoly1305_mod_exit); + +MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function (SSE2-accelerated)"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("nhpoly1305"); +MODULE_ALIAS_CRYPTO("nhpoly1305-sse2"); diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c index f012b7e28ad1..88cc01506c84 100644 --- a/arch/x86/crypto/poly1305_glue.c +++ b/arch/x86/crypto/poly1305_glue.c @@ -83,35 +83,37 @@ static unsigned int poly1305_simd_blocks(struct poly1305_desc_ctx *dctx, if (poly1305_use_avx2 && srclen >= POLY1305_BLOCK_SIZE * 4) { if (unlikely(!sctx->wset)) { if (!sctx->uset) { - memcpy(sctx->u, dctx->r, sizeof(sctx->u)); - poly1305_simd_mult(sctx->u, dctx->r); + memcpy(sctx->u, dctx->r.r, sizeof(sctx->u)); + poly1305_simd_mult(sctx->u, dctx->r.r); sctx->uset = true; } memcpy(sctx->u + 5, sctx->u, sizeof(sctx->u)); - poly1305_simd_mult(sctx->u + 5, dctx->r); + poly1305_simd_mult(sctx->u + 5, dctx->r.r); memcpy(sctx->u + 10, sctx->u + 5, sizeof(sctx->u)); - poly1305_simd_mult(sctx->u + 10, dctx->r); + poly1305_simd_mult(sctx->u + 10, dctx->r.r); sctx->wset = true; } blocks = srclen / (POLY1305_BLOCK_SIZE * 4); - poly1305_4block_avx2(dctx->h, src, dctx->r, blocks, sctx->u); + poly1305_4block_avx2(dctx->h.h, src, dctx->r.r, blocks, + sctx->u); src += POLY1305_BLOCK_SIZE * 4 * blocks; srclen -= POLY1305_BLOCK_SIZE * 4 * blocks; } #endif if (likely(srclen >= POLY1305_BLOCK_SIZE * 2)) { if (unlikely(!sctx->uset)) { - memcpy(sctx->u, dctx->r, sizeof(sctx->u)); - 
poly1305_simd_mult(sctx->u, dctx->r); + memcpy(sctx->u, dctx->r.r, sizeof(sctx->u)); + poly1305_simd_mult(sctx->u, dctx->r.r); sctx->uset = true; } blocks = srclen / (POLY1305_BLOCK_SIZE * 2); - poly1305_2block_sse2(dctx->h, src, dctx->r, blocks, sctx->u); + poly1305_2block_sse2(dctx->h.h, src, dctx->r.r, blocks, + sctx->u); src += POLY1305_BLOCK_SIZE * 2 * blocks; srclen -= POLY1305_BLOCK_SIZE * 2 * blocks; } if (srclen >= POLY1305_BLOCK_SIZE) { - poly1305_block_sse2(dctx->h, src, dctx->r, 1); + poly1305_block_sse2(dctx->h.h, src, dctx->r.r, 1); srclen -= POLY1305_BLOCK_SIZE; } return srclen; diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 071b2a6fff85..33051436c864 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -967,7 +967,7 @@ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves) } extern unsigned long arch_align_stack(unsigned long sp); -extern void free_init_pages(char *what, unsigned long begin, unsigned long end); +void free_init_pages(const char *what, unsigned long begin, unsigned long end); extern void free_kernel_image_pages(void *begin, void *end); void default_idle(void); diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c index 3f9d1b4019bb..e0ff3ac8c127 100644 --- a/arch/x86/kernel/amd_gart_64.c +++ b/arch/x86/kernel/amd_gart_64.c @@ -50,8 +50,6 @@ static unsigned long iommu_pages; /* .. and in pages */ static u32 *iommu_gatt_base; /* Remapping table */ -static dma_addr_t bad_dma_addr; - /* * If this is disabled the IOMMU will use an optimized flushing strategy * of only flushing when an mapping is reused. With it true the GART is @@ -74,8 +72,6 @@ static u32 gart_unmapped_entry; (((x) & 0xfffff000) | (((x) >> 32) << 4) | GPTE_VALID | GPTE_COHERENT) #define GPTE_DECODE(x) (((x) & 0xfffff000) | (((u64)(x) & 0xff0) << 28)) -#define EMERGENCY_PAGES 32 /* = 128KB */ - #ifdef CONFIG_AGP #define AGPEXTERN extern #else @@ -155,9 +151,6 @@ static void flush_gart(void) #ifdef CONFIG_IOMMU_LEAK /* Debugging aid for drivers that don't free their IOMMU tables */ -static int leak_trace; -static int iommu_leak_pages = 20; - static void dump_leak(void) { static int dump; @@ -184,14 +177,6 @@ static void iommu_full(struct device *dev, size_t size, int dir) */ dev_err(dev, "PCI-DMA: Out of IOMMU space for %lu bytes\n", size); - - if (size > PAGE_SIZE*EMERGENCY_PAGES) { - if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL) - panic("PCI-DMA: Memory would be corrupted\n"); - if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL) - panic(KERN_ERR - "PCI-DMA: Random memory would be DMAed\n"); - } #ifdef CONFIG_IOMMU_LEAK dump_leak(); #endif @@ -220,7 +205,7 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem, int i; if (unlikely(phys_mem + size > GART_MAX_PHYS_ADDR)) - return bad_dma_addr; + return DMA_MAPPING_ERROR; iommu_page = alloc_iommu(dev, npages, align_mask); if (iommu_page == -1) { @@ -229,7 +214,7 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem, if (panic_on_overflow) panic("dma_map_area overflow %lu bytes\n", size); iommu_full(dev, size, dir); - return bad_dma_addr; + return DMA_MAPPING_ERROR; } for (i = 0; i < npages; i++) { @@ -271,7 +256,7 @@ static void gart_unmap_page(struct device *dev, dma_addr_t dma_addr, int npages; int i; - if (dma_addr < iommu_bus_base + EMERGENCY_PAGES*PAGE_SIZE || + if (dma_addr == DMA_MAPPING_ERROR || dma_addr >= iommu_bus_base + iommu_size) return; @@ -315,7 +300,7 
@@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg, if (nonforced_iommu(dev, addr, s->length)) { addr = dma_map_area(dev, addr, s->length, dir, 0); - if (addr == bad_dma_addr) { + if (addr == DMA_MAPPING_ERROR) { if (i > 0) gart_unmap_sg(dev, sg, i, dir, 0); nents = 0; @@ -471,7 +456,7 @@ error: iommu_full(dev, pages << PAGE_SHIFT, dir); for_each_sg(sg, s, nents, i) - s->dma_address = bad_dma_addr; + s->dma_address = DMA_MAPPING_ERROR; return 0; } @@ -490,7 +475,7 @@ gart_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_addr, *dma_addr = dma_map_area(dev, virt_to_phys(vaddr), size, DMA_BIDIRECTIONAL, (1UL << get_order(size)) - 1); flush_gart(); - if (unlikely(*dma_addr == bad_dma_addr)) + if (unlikely(*dma_addr == DMA_MAPPING_ERROR)) goto out_free; return vaddr; out_free: @@ -507,11 +492,6 @@ gart_free_coherent(struct device *dev, size_t size, void *vaddr, dma_direct_free_pages(dev, size, vaddr, dma_addr, attrs); } -static int gart_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return (dma_addr == bad_dma_addr); -} - static int no_agp; static __init unsigned long check_iommu_size(unsigned long aper, u64 aper_size) @@ -695,7 +675,6 @@ static const struct dma_map_ops gart_dma_ops = { .unmap_page = gart_unmap_page, .alloc = gart_alloc_coherent, .free = gart_free_coherent, - .mapping_error = gart_mapping_error, .dma_supported = dma_direct_supported, }; @@ -730,7 +709,6 @@ int __init gart_iommu_init(void) unsigned long aper_base, aper_size; unsigned long start_pfn, end_pfn; unsigned long scratch; - long i; if (!amd_nb_has_feature(AMD_NB_GART)) return 0; @@ -774,29 +752,12 @@ int __init gart_iommu_init(void) if (!iommu_gart_bitmap) panic("Cannot allocate iommu bitmap\n"); -#ifdef CONFIG_IOMMU_LEAK - if (leak_trace) { - int ret; - - ret = dma_debug_resize_entries(iommu_pages); - if (ret) - pr_debug("PCI-DMA: Cannot trace all the entries\n"); - } -#endif - - /* - * Out of IOMMU space handling. - * Reserve some invalid pages at the beginning of the GART. 
- */ - bitmap_set(iommu_gart_bitmap, 0, EMERGENCY_PAGES); - pr_info("PCI-DMA: Reserving %luMB of IOMMU area in the AGP aperture\n", iommu_size >> 20); agp_memory_reserved = iommu_size; iommu_start = aper_size - iommu_size; iommu_bus_base = info.aper_base + iommu_start; - bad_dma_addr = iommu_bus_base; iommu_gatt_base = agp_gatt_table + (iommu_start>>PAGE_SHIFT); /* @@ -838,8 +799,6 @@ int __init gart_iommu_init(void) if (!scratch) panic("Cannot allocate iommu scratch page"); gart_unmapped_entry = GPTE_ENCODE(__pa(scratch)); - for (i = EMERGENCY_PAGES; i < iommu_pages; i++) - iommu_gatt_base[i] = gart_unmapped_entry; flush_gart(); dma_ops = &gart_dma_ops; @@ -853,16 +812,6 @@ void __init gart_parse_options(char *p) { int arg; -#ifdef CONFIG_IOMMU_LEAK - if (!strncmp(p, "leak", 4)) { - leak_trace = 1; - p += 4; - if (*p == '=') - ++p; - if (isdigit(*p) && get_option(&p, &arg)) - iommu_leak_pages = arg; - } -#endif if (isdigit(*p) && get_option(&p, &arg)) iommu_size = arg; if (!strncmp(p, "fullflush", 9)) diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c index 2637ff09d6a0..97f9ada9ceda 100644 --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -434,9 +434,10 @@ static ssize_t microcode_write(struct file *file, const char __user *buf, size_t len, loff_t *ppos) { ssize_t ret = -EINVAL; + unsigned long nr_pages = totalram_pages(); - if ((len >> PAGE_SHIFT) > totalram_pages) { - pr_err("too much data (max %ld pages)\n", totalram_pages); + if ((len >> PAGE_SHIFT) > nr_pages) { + pr_err("too much data (max %ld pages)\n", nr_pages); return ret; } diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c index bbfc8b1e9104..c70720f61a34 100644 --- a/arch/x86/kernel/pci-calgary_64.c +++ b/arch/x86/kernel/pci-calgary_64.c @@ -51,8 +51,6 @@ #include #include -#define CALGARY_MAPPING_ERROR 0 - #ifdef CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT int use_calgary __read_mostly = 1; #else @@ -157,8 +155,6 @@ static const unsigned long phb_debug_offsets[] = { #define PHB_DEBUG_STUFF_OFFSET 0x0020 -#define EMERGENCY_PAGES 32 /* = 128KB */ - unsigned int specified_table_size = TCE_TABLE_SIZE_UNSPECIFIED; static int translate_empty_slots __read_mostly = 0; static int calgary_detected __read_mostly = 0; @@ -255,7 +251,7 @@ static unsigned long iommu_range_alloc(struct device *dev, if (panic_on_overflow) panic("Calgary: fix the allocator.\n"); else - return CALGARY_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } } @@ -274,11 +270,10 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl, dma_addr_t ret; entry = iommu_range_alloc(dev, tbl, npages); - - if (unlikely(entry == CALGARY_MAPPING_ERROR)) { + if (unlikely(entry == DMA_MAPPING_ERROR)) { pr_warn("failed to allocate %u pages in iommu %p\n", npages, tbl); - return CALGARY_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } /* set the return dma address */ @@ -294,12 +289,10 @@ static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr, unsigned int npages) { unsigned long entry; - unsigned long badend; unsigned long flags; /* were we called with bad_dma_address? 
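	 * (With the common DMA_MAPPING_ERROR sentinel this is a single
	 * compare; the old reserved range of emergency pages is gone.)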
*/ - badend = CALGARY_MAPPING_ERROR + (EMERGENCY_PAGES * PAGE_SIZE); - if (unlikely(dma_addr < badend)) { + if (unlikely(dma_addr == DMA_MAPPING_ERROR)) { WARN(1, KERN_ERR "Calgary: driver tried unmapping bad DMA " "address 0x%Lx\n", dma_addr); return; @@ -383,7 +376,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg, npages = iommu_num_pages(vaddr, s->length, PAGE_SIZE); entry = iommu_range_alloc(dev, tbl, npages); - if (entry == CALGARY_MAPPING_ERROR) { + if (entry == DMA_MAPPING_ERROR) { /* makes sure unmap knows to stop */ s->dma_length = 0; goto error; @@ -401,7 +394,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg, error: calgary_unmap_sg(dev, sg, nelems, dir, 0); for_each_sg(sg, s, nelems, i) { - sg->dma_address = CALGARY_MAPPING_ERROR; + sg->dma_address = DMA_MAPPING_ERROR; sg->dma_length = 0; } return 0; @@ -454,7 +447,7 @@ static void* calgary_alloc_coherent(struct device *dev, size_t size, /* set up tces to cover the allocated range */ mapping = iommu_alloc(dev, tbl, ret, npages, DMA_BIDIRECTIONAL); - if (mapping == CALGARY_MAPPING_ERROR) + if (mapping == DMA_MAPPING_ERROR) goto free; *dma_handle = mapping; return ret; @@ -479,11 +472,6 @@ static void calgary_free_coherent(struct device *dev, size_t size, free_pages((unsigned long)vaddr, get_order(size)); } -static int calgary_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == CALGARY_MAPPING_ERROR; -} - static const struct dma_map_ops calgary_dma_ops = { .alloc = calgary_alloc_coherent, .free = calgary_free_coherent, @@ -491,7 +479,6 @@ static const struct dma_map_ops calgary_dma_ops = { .unmap_sg = calgary_unmap_sg, .map_page = calgary_map_page, .unmap_page = calgary_unmap_page, - .mapping_error = calgary_mapping_error, .dma_supported = dma_direct_supported, }; @@ -739,9 +726,6 @@ static void __init calgary_reserve_regions(struct pci_dev *dev) u64 start; struct iommu_table *tbl = pci_iommu(dev->bus); - /* reserve EMERGENCY_PAGES from bad_dma_address and up */ - iommu_range_reserve(tbl, CALGARY_MAPPING_ERROR, EMERGENCY_PAGES); - /* avoid the BIOS/VGA first 640KB-1MB region */ /* for CalIOC2 - avoid the entire first MB */ if (is_calgary(dev->device)) { diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c index f4562fcec681..d460998ae828 100644 --- a/arch/x86/kernel/pci-dma.c +++ b/arch/x86/kernel/pci-dma.c @@ -17,7 +17,7 @@ static bool disable_dac_quirk __read_mostly; -const struct dma_map_ops *dma_ops = &dma_direct_ops; +const struct dma_map_ops *dma_ops; EXPORT_SYMBOL(dma_ops); #ifdef CONFIG_IOMMU_DEBUG diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c index bd08b9e1c9e2..5f5302028a9a 100644 --- a/arch/x86/kernel/pci-swiotlb.c +++ b/arch/x86/kernel/pci-swiotlb.c @@ -62,10 +62,8 @@ IOMMU_INIT(pci_swiotlb_detect_4gb, void __init pci_swiotlb_init(void) { - if (swiotlb) { + if (swiotlb) swiotlb_init(0); - dma_ops = &swiotlb_dma_ops; - } } void __init pci_swiotlb_late_init(void) diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 1bbec387d289..72fa955f4a15 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -98,6 +98,6 @@ config KVM_MMU_AUDIT # OK, it's a little counter-intuitive to do this, but it puts it neatly under # the virtualization menu. 
-source drivers/vhost/Kconfig +source "drivers/vhost/Kconfig" endif # VIRTUALIZATION diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index abcb8d00b014..e3cdc85ce5b6 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -377,7 +377,7 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, /* * This is an optimization for KASAN=y case. Since all kasan page tables - * eventually point to the kasan_zero_page we could call note_page() + * eventually point to the kasan_early_shadow_page we could call note_page() * right away without walking through lower level page tables. This saves * us dozens of seconds (minutes for 5-level config) while checking for * W+X mapping or reading kernel_page_tables debugfs file. @@ -385,10 +385,11 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, void *pt) { - if (__pa(pt) == __pa(kasan_zero_pmd) || - (pgtable_l5_enabled() && __pa(pt) == __pa(kasan_zero_p4d)) || - __pa(pt) == __pa(kasan_zero_pud)) { - pgprotval_t prot = pte_flags(kasan_zero_pte[0]); + if (__pa(pt) == __pa(kasan_early_shadow_pmd) || + (pgtable_l5_enabled() && + __pa(pt) == __pa(kasan_early_shadow_p4d)) || + __pa(pt) == __pa(kasan_early_shadow_pud)) { + pgprotval_t prot = pte_flags(kasan_early_shadow_pte[0]); note_page(m, st, __pgprot(prot), 0, 5); return true; } diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 427a955a2cf2..f905a2371080 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -742,7 +742,7 @@ int devmem_is_allowed(unsigned long pagenr) return 1; } -void free_init_pages(char *what, unsigned long begin, unsigned long end) +void free_init_pages(const char *what, unsigned long begin, unsigned long end) { unsigned long begin_aligned, end_aligned; diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 49ecf5ecf6d3..85c94f9a87f8 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -860,7 +860,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, } #ifdef CONFIG_MEMORY_HOTREMOVE -int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) +int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 484c1b92f078..bccff68e3267 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1141,7 +1141,8 @@ kernel_physical_mapping_remove(unsigned long start, unsigned long end) remove_pagetable(start, end, true, NULL); } -int __ref arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) +int __ref arch_remove_memory(int nid, u64 start, u64 size, + struct vmem_altmap *altmap) { unsigned long start_pfn = start >> PAGE_SHIFT; unsigned long nr_pages = size >> PAGE_SHIFT; diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c index 04a9cf6b034f..462fde83b515 100644 --- a/arch/x86/mm/kasan_init_64.c +++ b/arch/x86/mm/kasan_init_64.c @@ -211,7 +211,8 @@ static void __init kasan_early_p4d_populate(pgd_t *pgd, unsigned long next; if (pgd_none(*pgd)) { - pgd_entry = __pgd(_KERNPG_TABLE | __pa_nodebug(kasan_zero_p4d)); + pgd_entry = __pgd(_KERNPG_TABLE | + __pa_nodebug(kasan_early_shadow_p4d)); set_pgd(pgd, pgd_entry); } @@ -222,7 +223,8 @@ static void __init kasan_early_p4d_populate(pgd_t *pgd, if (!p4d_none(*p4d)) continue; - 
p4d_entry = __p4d(_KERNPG_TABLE | __pa_nodebug(kasan_zero_pud)); + p4d_entry = __p4d(_KERNPG_TABLE | + __pa_nodebug(kasan_early_shadow_pud)); set_p4d(p4d, p4d_entry); } while (p4d++, addr = next, addr != end && p4d_none(*p4d)); } @@ -261,10 +263,11 @@ static struct notifier_block kasan_die_notifier = { void __init kasan_early_init(void) { int i; - pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC; - pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE; - pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE; - p4dval_t p4d_val = __pa_nodebug(kasan_zero_pud) | _KERNPG_TABLE; + pteval_t pte_val = __pa_nodebug(kasan_early_shadow_page) | + __PAGE_KERNEL | _PAGE_ENC; + pmdval_t pmd_val = __pa_nodebug(kasan_early_shadow_pte) | _KERNPG_TABLE; + pudval_t pud_val = __pa_nodebug(kasan_early_shadow_pmd) | _KERNPG_TABLE; + p4dval_t p4d_val = __pa_nodebug(kasan_early_shadow_pud) | _KERNPG_TABLE; /* Mask out unsupported __PAGE_KERNEL bits: */ pte_val &= __default_kernel_pte_mask; @@ -273,16 +276,16 @@ void __init kasan_early_init(void) p4d_val &= __default_kernel_pte_mask; for (i = 0; i < PTRS_PER_PTE; i++) - kasan_zero_pte[i] = __pte(pte_val); + kasan_early_shadow_pte[i] = __pte(pte_val); for (i = 0; i < PTRS_PER_PMD; i++) - kasan_zero_pmd[i] = __pmd(pmd_val); + kasan_early_shadow_pmd[i] = __pmd(pmd_val); for (i = 0; i < PTRS_PER_PUD; i++) - kasan_zero_pud[i] = __pud(pud_val); + kasan_early_shadow_pud[i] = __pud(pud_val); for (i = 0; pgtable_l5_enabled() && i < PTRS_PER_P4D; i++) - kasan_zero_p4d[i] = __p4d(p4d_val); + kasan_early_shadow_p4d[i] = __p4d(p4d_val); kasan_map_early_shadow(early_top_pgt); kasan_map_early_shadow(init_top_pgt); @@ -326,7 +329,7 @@ void __init kasan_init(void) clear_pgds(KASAN_SHADOW_START & PGDIR_MASK, KASAN_SHADOW_END); - kasan_populate_zero_shadow((void *)(KASAN_SHADOW_START & PGDIR_MASK), + kasan_populate_early_shadow((void *)(KASAN_SHADOW_START & PGDIR_MASK), kasan_mem_to_shadow((void *)PAGE_OFFSET)); for (i = 0; i < E820_MAX_ENTRIES; i++) { @@ -338,41 +341,41 @@ void __init kasan_init(void) shadow_cpu_entry_begin = (void *)CPU_ENTRY_AREA_BASE; shadow_cpu_entry_begin = kasan_mem_to_shadow(shadow_cpu_entry_begin); - shadow_cpu_entry_begin = (void *)round_down((unsigned long)shadow_cpu_entry_begin, - PAGE_SIZE); + shadow_cpu_entry_begin = (void *)round_down( + (unsigned long)shadow_cpu_entry_begin, PAGE_SIZE); shadow_cpu_entry_end = (void *)(CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE); shadow_cpu_entry_end = kasan_mem_to_shadow(shadow_cpu_entry_end); - shadow_cpu_entry_end = (void *)round_up((unsigned long)shadow_cpu_entry_end, - PAGE_SIZE); + shadow_cpu_entry_end = (void *)round_up( + (unsigned long)shadow_cpu_entry_end, PAGE_SIZE); - kasan_populate_zero_shadow( + kasan_populate_early_shadow( kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM), shadow_cpu_entry_begin); kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin, (unsigned long)shadow_cpu_entry_end, 0); - kasan_populate_zero_shadow(shadow_cpu_entry_end, - kasan_mem_to_shadow((void *)__START_KERNEL_map)); + kasan_populate_early_shadow(shadow_cpu_entry_end, + kasan_mem_to_shadow((void *)__START_KERNEL_map)); kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext), (unsigned long)kasan_mem_to_shadow(_end), early_pfn_to_nid(__pa(_stext))); - kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END), - (void *)KASAN_SHADOW_END); + kasan_populate_early_shadow(kasan_mem_to_shadow((void *)MODULES_END), + (void *)KASAN_SHADOW_END); 
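+	/*
+	 * The whole shadow range is now mapped: real shadow pages for
+	 * lowmem, the CPU entry area and the kernel image, and the shared
+	 * early-shadow page everywhere in between.
+	 */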
load_cr3(init_top_pgt); __flush_tlb_all(); /* - * kasan_zero_page has been used as early shadow memory, thus it may - * contain some garbage. Now we can clear and write protect it, since - * after the TLB flush no one should write to it. + * kasan_early_shadow_page has been used as early shadow memory, thus + * it may contain some garbage. Now we can clear and write protect it, + * since after the TLB flush no one should write to it. */ - memset(kasan_zero_page, 0, PAGE_SIZE); + memset(kasan_early_shadow_page, 0, PAGE_SIZE); for (i = 0; i < PTRS_PER_PTE; i++) { pte_t pte; pgprot_t prot; @@ -380,8 +383,8 @@ void __init kasan_init(void) prot = __pgprot(__PAGE_KERNEL_RO | _PAGE_ENC); pgprot_val(prot) &= __default_kernel_pte_mask; - pte = __pte(__pa(kasan_zero_page) | pgprot_val(prot)); - set_pte(&kasan_zero_pte[i], pte); + pte = __pte(__pa(kasan_early_shadow_page) | pgprot_val(prot)); + set_pte(&kasan_early_shadow_pte[i], pte); } /* Flush TLBs again to be sure that write protection applied. */ __flush_tlb_all(); diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index 006f373f54ab..385afa2b9e17 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -380,13 +380,6 @@ void __init mem_encrypt_init(void) /* Call into SWIOTLB to update the SWIOTLB DMA buffers */ swiotlb_update_mem_attributes(); - /* - * With SEV, DMA operations cannot use encryption, we need to use - * SWIOTLB to bounce buffer DMA operation. - */ - if (sev_active()) - dma_ops = &swiotlb_dma_ops; - /* * With SEV, we need to unroll the rep string I/O instructions. */ diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 59274e2c1ac4..b0284eab14dc 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -794,6 +794,14 @@ int pmd_clear_huge(pmd_t *pmd) return 0; } +/* + * Until we support 512GB pages, skip them in the vmap area. + */ +int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) +{ + return 0; +} + #ifdef CONFIG_X86_64 /** * pud_free_pmd_page - Clear pud entry and free pmd page. 
@@ -811,9 +819,6 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr) pte_t *pte; int i; - if (pud_none(*pud)) - return 1; - pmd = (pmd_t *)pud_page_vaddr(*pud); pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL); if (!pmd_sv) @@ -855,9 +860,6 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) { pte_t *pte; - if (pmd_none(*pmd)) - return 1; - pte = (pte_t *)pmd_page_vaddr(*pmd); pmd_clear(pmd); diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 2580cd2e98b1..5542303c43d9 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -1181,6 +1181,8 @@ out_image: } if (!image || !prog->is_func || extra_pass) { + if (image) + bpf_prog_fill_jited_linfo(prog, addrs); out_addrs: kfree(addrs); kfree(jit_data); diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c index 7a5bafb76d77..3cdafea55ab6 100644 --- a/arch/x86/pci/sta2x11-fixup.c +++ b/arch/x86/pci/sta2x11-fixup.c @@ -168,7 +168,6 @@ static void sta2x11_setup_pdev(struct pci_dev *pdev) return; pci_set_consistent_dma_mask(pdev, STA2X11_AMBA_SIZE - 1); pci_set_dma_mask(pdev, STA2X11_AMBA_SIZE - 1); - pdev->dev.dma_ops = &swiotlb_dma_ops; pdev->dev.archdata.is_sta2x11 = true; /* We must enable all devices as master, for audio DMA to work */ diff --git a/arch/x86/um/Makefile b/arch/x86/um/Makefile index c2d3d7c51e9e..2d686ae54681 100644 --- a/arch/x86/um/Makefile +++ b/arch/x86/um/Makefile @@ -36,9 +36,12 @@ subarch-$(CONFIG_MODULES) += ../kernel/module.o USER_OBJS := bugs_$(BITS).o ptrace_user.o fault.o -extra-y += user-offsets.s $(obj)/user-offsets.s: c_flags = -Wp,-MD,$(depfile) $(USER_CFLAGS) \ -Iarch/x86/include/generated +targets += user-offsets.s + +include/generated/user_constants.h: $(obj)/user-offsets.s + $(call filechk,offsets,__USER_CONSTANT_H__) UNPROFILE_OBJS := stub_segv.o CFLAGS_stub_segv.o := $(CFLAGS_NO_HARDENING) diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig index d29b7365da8d..20a0756f27ef 100644 --- a/arch/xtensa/Kconfig +++ b/arch/xtensa/Kconfig @@ -1,7 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 config XTENSA def_bool y - select ARCH_HAS_SG_CHAIN select ARCH_HAS_SYNC_DMA_FOR_CPU select ARCH_HAS_SYNC_DMA_FOR_DEVICE select ARCH_NO_COHERENT_DMA_MMAP if !MMU @@ -10,14 +9,16 @@ config XTENSA select BUILDTIME_EXTABLE_SORT select CLONE_BACKWARDS select COMMON_CLK - select DMA_DIRECT_OPS + select DMA_REMAP if MMU select GENERIC_ATOMIC64 select GENERIC_CLOCKEVENTS select GENERIC_IRQ_SHOW select GENERIC_PCI_IOMAP select GENERIC_SCHED_CLOCK select GENERIC_STRNCPY_FROM_USER if KASAN + select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_KASAN if MMU + select HAVE_ARCH_TRACEHOOK select HAVE_DEBUG_KMEMLEAK select HAVE_DMA_CONTIGUOUS select HAVE_EXIT_THREAD @@ -26,8 +27,10 @@ config XTENSA select HAVE_HW_BREAKPOINT if PERF_EVENTS select HAVE_IRQ_TIME_ACCOUNTING select HAVE_OPROFILE + select HAVE_PCI select HAVE_PERF_EVENTS select HAVE_STACKPROTECTOR + select HAVE_SYSCALL_TRACEPOINTS select IRQ_DOMAIN select MODULES_USE_ELF_RELA select PERF_USE_VMALLOC @@ -379,21 +382,6 @@ config XTENSA_CALIBRATE_CCOUNT config SERIAL_CONSOLE def_bool n -menu "Bus options" - -config PCI - bool "PCI support" - default y - help - Find out whether you have a PCI motherboard. PCI is the name of a - bus system, i.e. the way the CPU talks to the other stuff inside - your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or - VESA. If you have PCI, say Y, otherwise N. 
- -source "drivers/pci/Kconfig" - -endmenu - menu "Platform options" choice @@ -526,8 +514,6 @@ config FORCE_MAX_ZONEORDER This config option is actually maximum order plus one. For example, a value of 11 means that the largest free memory block is 2^10 pages. -source "drivers/pcmcia/Kconfig" - config PLATFORM_WANT_DEFAULT_MEM def_bool n diff --git a/arch/xtensa/Makefile b/arch/xtensa/Makefile index be060dfb1cc3..1542018c9e57 100644 --- a/arch/xtensa/Makefile +++ b/arch/xtensa/Makefile @@ -90,6 +90,9 @@ boot := arch/xtensa/boot all Image zImage uImage: vmlinux $(Q)$(MAKE) $(build)=$(boot) $@ +archheaders: + $(Q)$(MAKE) $(build)=arch/xtensa/kernel/syscalls all + define archhelp @echo '* Image - Kernel ELF image with reset vector' @echo '* zImage - Compressed kernel image (arch/xtensa/boot/images/zImage.*)' diff --git a/arch/xtensa/boot/boot-elf/bootstrap.S b/arch/xtensa/boot/boot-elf/bootstrap.S index 29c68426ab56..99e98c9bae41 100644 --- a/arch/xtensa/boot/boot-elf/bootstrap.S +++ b/arch/xtensa/boot/boot-elf/bootstrap.S @@ -29,17 +29,7 @@ _ResetVector: .begin no-absolute-literals .literal_position -#if defined(CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX) && \ - XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY - .literal RomInitAddr, CONFIG_KERNEL_LOAD_ADDRESS -#else - .literal RomInitAddr, KERNELOFFSET -#endif -#ifndef CONFIG_PARSE_BOOTPARAM - .literal RomBootParam, 0 -#else - .literal RomBootParam, _bootparam - +#ifdef CONFIG_PARSE_BOOTPARAM .align 4 _bootparam: .short BP_TAG_FIRST @@ -66,13 +56,22 @@ _SetupMMU: initialize_mmu #endif - .end no-absolute-literals - rsil a0, XCHAL_DEBUGLEVEL-1 rsync reset: - l32r a0, RomInitAddr - l32r a2, RomBootParam +#if defined(CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX) && \ + XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY + movi a0, CONFIG_KERNEL_LOAD_ADDRESS +#else + movi a0, KERNELOFFSET +#endif +#ifdef CONFIG_PARSE_BOOTPARAM + movi a2, _bootparam +#else + movi a2, 0 +#endif movi a3, 0 movi a4, 0 jx a0 + + .end no-absolute-literals diff --git a/arch/xtensa/boot/dts/xtfpga.dtsi b/arch/xtensa/boot/dts/xtfpga.dtsi index 1090528825ec..e46ae07bab05 100644 --- a/arch/xtensa/boot/dts/xtfpga.dtsi +++ b/arch/xtensa/boot/dts/xtfpga.dtsi @@ -103,7 +103,7 @@ }; }; - spi0: spi-master@0d0a0000 { + spi0: spi@0d0a0000 { compatible = "cdns,xtfpga-spi"; #address-cells = <1>; #size-cells = <0>; diff --git a/arch/xtensa/configs/common_defconfig b/arch/xtensa/configs/common_defconfig index 4bcc76b02109..fa9389869154 100644 --- a/arch/xtensa/configs/common_defconfig +++ b/arch/xtensa/configs/common_defconfig @@ -1,3 +1,4 @@ +CONFIG_PCI=y CONFIG_SYSVIPC=y CONFIG_BSD_PROCESS_ACCT=y CONFIG_LOG_BUF_SHIFT=14 diff --git a/arch/xtensa/include/asm/Kbuild b/arch/xtensa/include/asm/Kbuild index 3310adecafb0..e255683cd520 100644 --- a/arch/xtensa/include/asm/Kbuild +++ b/arch/xtensa/include/asm/Kbuild @@ -1,3 +1,4 @@ +generated-y += syscall_table.h generic-y += bug.h generic-y += compat.h generic-y += device.h diff --git a/arch/xtensa/include/asm/coprocessor.h b/arch/xtensa/include/asm/coprocessor.h index 677501b32dfc..6712929a27c9 100644 --- a/arch/xtensa/include/asm/coprocessor.h +++ b/arch/xtensa/include/asm/coprocessor.h @@ -12,7 +12,6 @@ #ifndef _XTENSA_COPROCESSOR_H #define _XTENSA_COPROCESSOR_H -#include #include #include #include @@ -90,19 +89,6 @@ #ifndef __ASSEMBLY__ - -#if XCHAL_HAVE_CP - -#define RSR_CPENABLE(x) do { \ - __asm__ __volatile__("rsr %0, cpenable" : "=a" (x)); \ - } while(0); -#define WSR_CPENABLE(x) do { \ - __asm__ __volatile__("wsr %0, cpenable; rsync" :: 
"a" (x)); \ - } while(0); - -#endif /* XCHAL_HAVE_CP */ - - /* * Additional registers. * We define three types of additional registers: @@ -157,20 +143,11 @@ typedef struct { XCHAL_CP7_SA_LIST(2) } xtregs_cp7_t __attribute__ ((aligned (XCHAL_CP7_SA_ALIGN))); extern struct thread_info* coprocessor_owner[XCHAL_CP_MAX]; -extern void coprocessor_save(void*, int); -extern void coprocessor_load(void*, int); extern void coprocessor_flush(struct thread_info*, int); -extern void coprocessor_restore(struct thread_info*, int); extern void coprocessor_release_all(struct thread_info*); extern void coprocessor_flush_all(struct thread_info*); -static inline void coprocessor_clear_cpenable(void) -{ - unsigned long i = 0; - WSR_CPENABLE(i); -} - #endif /* XTENSA_HAVE_COPROCESSORS */ #endif /* !__ASSEMBLY__ */ diff --git a/arch/xtensa/include/asm/elf.h b/arch/xtensa/include/asm/elf.h index eacb25a41718..909a6ab4f22b 100644 --- a/arch/xtensa/include/asm/elf.h +++ b/arch/xtensa/include/asm/elf.h @@ -15,10 +15,10 @@ #include #include +#include /* Xtensa processor ELF architecture-magic number */ -#define EM_XTENSA 94 #define EM_XTENSA_OLD 0xABC7 /* Xtensa relocations defined by the ABIs */ @@ -75,19 +75,7 @@ typedef unsigned long elf_greg_t; -typedef struct { - elf_greg_t pc; - elf_greg_t ps; - elf_greg_t lbeg; - elf_greg_t lend; - elf_greg_t lcount; - elf_greg_t sar; - elf_greg_t windowstart; - elf_greg_t windowbase; - elf_greg_t threadptr; - elf_greg_t reserved[7+48]; - elf_greg_t a[64]; -} xtensa_gregset_t; +typedef struct user_pt_regs xtensa_gregset_t; #define ELF_NGREG (sizeof(xtensa_gregset_t) / sizeof(elf_greg_t)) @@ -98,11 +86,6 @@ typedef elf_greg_t elf_gregset_t[ELF_NGREG]; typedef unsigned int elf_fpreg_t; typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG]; -#define ELF_CORE_COPY_REGS(_eregs, _pregs) \ - xtensa_elf_core_copy_regs ((xtensa_gregset_t*)&(_eregs), _pregs); - -extern void xtensa_elf_core_copy_regs (xtensa_gregset_t *, struct pt_regs *); - /* * This is used to ensure we don't load something for the wrong architecture. */ @@ -126,6 +109,7 @@ extern void xtensa_elf_core_copy_regs (xtensa_gregset_t *, struct pt_regs *); #define ELF_ARCH EM_XTENSA #define ELF_EXEC_PAGESIZE PAGE_SIZE +#define CORE_DUMP_USE_REGSET /* * This is the location that an ET_DYN program is loaded if exec'ed. 
Typical @@ -193,15 +177,4 @@ typedef struct { #define SET_PERSONALITY(ex) \ set_personality(PER_LINUX_32BIT | (current->personality & (~PER_MASK))) -struct task_struct; - -extern void do_copy_regs (xtensa_gregset_t*, struct pt_regs*, - struct task_struct*); -extern void do_restore_regs (xtensa_gregset_t*, struct pt_regs*, - struct task_struct*); -extern void do_save_fpregs (elf_fpregset_t*, struct pt_regs*, - struct task_struct*); -extern int do_restore_fpregs (elf_fpregset_t*, struct pt_regs*, - struct task_struct*); - #endif /* _XTENSA_ELF_H */ diff --git a/arch/xtensa/include/asm/futex.h b/arch/xtensa/include/asm/futex.h index 5bfbc1c401d4..fd0eef6b8e7c 100644 --- a/arch/xtensa/include/asm/futex.h +++ b/arch/xtensa/include/asm/futex.h @@ -32,8 +32,8 @@ "3:\n" \ " .section .fixup,\"ax\"\n" \ " .align 4\n" \ - "4: .long 3b\n" \ - "5: l32r %0, 4b\n" \ + " .literal_position\n" \ + "5: movi %0, 3b\n" \ " movi %1, %3\n" \ " jx %0\n" \ " .previous\n" \ @@ -108,8 +108,8 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, "2:\n" " .section .fixup,\"ax\"\n" " .align 4\n" - "3: .long 2b\n" - "4: l32r %1, 3b\n" + " .literal_position\n" + "4: movi %1, 2b\n" " movi %0, %7\n" " jx %1\n" " .previous\n" diff --git a/arch/xtensa/include/asm/irqflags.h b/arch/xtensa/include/asm/irqflags.h index 407606e576f8..9b5e8526afe5 100644 --- a/arch/xtensa/include/asm/irqflags.h +++ b/arch/xtensa/include/asm/irqflags.h @@ -12,6 +12,7 @@ #ifndef _XTENSA_IRQFLAGS_H #define _XTENSA_IRQFLAGS_H +#include #include #include diff --git a/arch/xtensa/include/asm/jump_label.h b/arch/xtensa/include/asm/jump_label.h new file mode 100644 index 000000000000..c812bf85021c --- /dev/null +++ b/arch/xtensa/include/asm/jump_label.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2018 Cadence Design Systems Inc. */ + +#ifndef _ASM_XTENSA_JUMP_LABEL_H +#define _ASM_XTENSA_JUMP_LABEL_H + +#ifndef __ASSEMBLY__ + +#include + +#define JUMP_LABEL_NOP_SIZE 3 + +static __always_inline bool arch_static_branch(struct static_key *key, + bool branch) +{ + asm_volatile_goto("1:\n\t" + "_nop\n\t" + ".pushsection __jump_table, \"aw\"\n\t" + ".word 1b, %l[l_yes], %c0\n\t" + ".popsection\n\t" + : : "i" (&((char *)key)[branch]) : : l_yes); + + return false; +l_yes: + return true; +} + +static __always_inline bool arch_static_branch_jump(struct static_key *key, + bool branch) +{ + /* + * Xtensa assembler will mark certain points in the code + * as unreachable, so that later assembler or linker relaxation + * passes could use them. A spot right after the J instruction + * is one such point. Assembler and/or linker may insert padding + * or literals here, breaking code flow in case the J instruction + * is later replaced with NOP. Put a label right after the J to + * make it reachable and wrap both into a no-transform block + * to avoid any assembler interference with this. 
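+	 *
+	 * Enabling or disabling the key then only ever patches the three
+	 * bytes at label 1 (J <-> NOP); the label at 2 and the
+	 * no-transform block keep the assembler from moving anything else.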
+ */ + asm_volatile_goto("1:\n\t" + ".begin no-transform\n\t" + "_j %l[l_yes]\n\t" + "2:\n\t" + ".end no-transform\n\t" + ".pushsection __jump_table, \"aw\"\n\t" + ".word 1b, %l[l_yes], %c0\n\t" + ".popsection\n\t" + : : "i" (&((char *)key)[branch]) : : l_yes); + + return false; +l_yes: + return true; +} + +typedef u32 jump_label_t; + +struct jump_entry { + jump_label_t code; + jump_label_t target; + jump_label_t key; +}; + +#endif /* __ASSEMBLY__ */ +#endif diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h index 34a23016dd14..f7dd895b2353 100644 --- a/arch/xtensa/include/asm/processor.h +++ b/arch/xtensa/include/asm/processor.h @@ -13,6 +13,7 @@ #include #include +#include #include #include #include @@ -212,11 +213,18 @@ extern unsigned long get_wchan(struct task_struct *p); /* Special register access. */ -#define WSR(v,sr) __asm__ __volatile__ ("wsr %0,"__stringify(sr) :: "a"(v)); -#define RSR(v,sr) __asm__ __volatile__ ("rsr %0,"__stringify(sr) : "=a"(v)); - -#define set_sr(x,sr) ({unsigned int v=(unsigned int)x; WSR(v,sr);}) -#define get_sr(sr) ({unsigned int v; RSR(v,sr); v; }) +#define xtensa_set_sr(x, sr) \ + ({ \ + unsigned int v = (unsigned int)(x); \ + __asm__ __volatile__ ("wsr %0, "__stringify(sr) :: "a"(v)); \ + }) + +#define xtensa_get_sr(sr) \ + ({ \ + unsigned int v; \ + __asm__ __volatile__ ("rsr %0, "__stringify(sr) : "=a"(v)); \ + v; \ + }) #ifndef XCHAL_HAVE_EXTERN_REGS #define XCHAL_HAVE_EXTERN_REGS 0 diff --git a/arch/xtensa/include/asm/ptrace.h b/arch/xtensa/include/asm/ptrace.h index 3a5c5918aea3..62a58d2567e9 100644 --- a/arch/xtensa/include/asm/ptrace.h +++ b/arch/xtensa/include/asm/ptrace.h @@ -39,6 +39,8 @@ * +-----------------------+ -------- */ +#define NO_SYSCALL (-1) + #ifndef __ASSEMBLY__ #include @@ -100,6 +102,11 @@ struct pt_regs { #define user_stack_pointer(regs) ((regs)->areg[1]) +static inline unsigned long regs_return_value(struct pt_regs *regs) +{ + return regs->areg[2]; +} + #else /* __ASSEMBLY__ */ # include diff --git a/arch/xtensa/include/asm/syscall.h b/arch/xtensa/include/asm/syscall.h index 3673ff1f1bc5..a168bf81c7f4 100644 --- a/arch/xtensa/include/asm/syscall.h +++ b/arch/xtensa/include/asm/syscall.h @@ -1,27 +1,108 @@ /* - * include/asm-xtensa/syscall.h - * * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * Copyright (C) 2001 - 2007 Tensilica Inc. + * Copyright (C) 2018 Cadence Design Systems Inc. */ -struct pt_regs; -asmlinkage long xtensa_ptrace(long, long, long, long); -asmlinkage long xtensa_sigreturn(struct pt_regs*); +#ifndef _ASM_SYSCALL_H +#define _ASM_SYSCALL_H + +#include +#include +#include + +static inline int syscall_get_arch(void) +{ + return AUDIT_ARCH_XTENSA; +} + +typedef void (*syscall_t)(void); +extern syscall_t sys_call_table[]; + +static inline long syscall_get_nr(struct task_struct *task, + struct pt_regs *regs) +{ + return regs->syscall; +} + +static inline void syscall_rollback(struct task_struct *task, + struct pt_regs *regs) +{ + /* Do nothing. */ +} + +static inline long syscall_get_error(struct task_struct *task, + struct pt_regs *regs) +{ + /* 0 if syscall succeeded, otherwise -Errorcode */ + return IS_ERR_VALUE(regs->areg[2]) ? 
regs->areg[2] : 0; +} + +static inline long syscall_get_return_value(struct task_struct *task, + struct pt_regs *regs) +{ + return regs->areg[2]; +} + +static inline void syscall_set_return_value(struct task_struct *task, + struct pt_regs *regs, + int error, long val) +{ + regs->areg[0] = (long) error ? error : val; +} + +#define SYSCALL_MAX_ARGS 6 +#define XTENSA_SYSCALL_ARGUMENT_REGS {6, 3, 4, 5, 8, 9} + +static inline void syscall_get_arguments(struct task_struct *task, + struct pt_regs *regs, + unsigned int i, unsigned int n, + unsigned long *args) +{ + static const unsigned int reg[] = XTENSA_SYSCALL_ARGUMENT_REGS; + unsigned int j; + + if (n == 0) + return; + + WARN_ON_ONCE(i + n > SYSCALL_MAX_ARGS); + + for (j = 0; j < n; ++j) { + if (i + j < SYSCALL_MAX_ARGS) + args[j] = regs->areg[reg[i + j]]; + else + args[j] = 0; + } +} + +static inline void syscall_set_arguments(struct task_struct *task, + struct pt_regs *regs, + unsigned int i, unsigned int n, + const unsigned long *args) +{ + static const unsigned int reg[] = XTENSA_SYSCALL_ARGUMENT_REGS; + unsigned int j; + + if (n == 0) + return; + + if (WARN_ON_ONCE(i + n > SYSCALL_MAX_ARGS)) { + if (i < SYSCALL_MAX_ARGS) + n = SYSCALL_MAX_ARGS - i; + else + return; + } + + for (j = 0; j < n; ++j) + regs->areg[reg[i + j]] = args[j]; +} + asmlinkage long xtensa_rt_sigreturn(struct pt_regs*); asmlinkage long xtensa_shmat(int, char __user *, int); asmlinkage long xtensa_fadvise64_64(int, int, unsigned long long, unsigned long long); -/* Should probably move to linux/syscalls.h */ -struct pollfd; -asmlinkage long sys_pselect6(int n, fd_set __user *inp, fd_set __user *outp, - fd_set __user *exp, struct timespec __user *tsp, - void __user *sig); -asmlinkage long sys_ppoll(struct pollfd __user *ufds, unsigned int nfds, - struct timespec __user *tsp, - const sigset_t __user *sigmask, - size_t sigsetsize); +#endif diff --git a/arch/xtensa/include/asm/thread_info.h b/arch/xtensa/include/asm/thread_info.h index 2bd19ae61e47..f333f10a7650 100644 --- a/arch/xtensa/include/asm/thread_info.h +++ b/arch/xtensa/include/asm/thread_info.h @@ -11,6 +11,7 @@ #ifndef _XTENSA_THREAD_INFO_H #define _XTENSA_THREAD_INFO_H +#include #include #define CURRENT_SHIFT KERNEL_STACK_SHIFT @@ -100,13 +101,12 @@ static inline struct thread_info *current_thread_info(void) /* * thread information flags * - these are process state flags that various assembly files may need to access - * - pending work-to-be-done flags are in LSW - * - other flags in MSW */ #define TIF_SYSCALL_TRACE 0 /* syscall trace active */ #define TIF_SIGPENDING 1 /* signal pending */ #define TIF_NEED_RESCHED 2 /* rescheduling necessary */ #define TIF_SINGLESTEP 3 /* restore singlestep on return to user mode */ +#define TIF_SYSCALL_TRACEPOINT 4 /* syscall tracepoint instrumentation */ #define TIF_MEMDIE 5 /* is terminating due to OOM killer */ #define TIF_RESTORE_SIGMASK 6 /* restore signal mask in do_signal() */ #define TIF_NOTIFY_RESUME 7 /* callback before returning to user */ @@ -116,9 +116,10 @@ static inline struct thread_info *current_thread_info(void) #define _TIF_SIGPENDING (1< -#include #if XCHAL_NUM_TIMERS > 0 && \ XTENSA_INT_LEVEL(XCHAL_TIMER0_INTERRUPT) <= XCHAL_EXCM_LEVEL @@ -40,33 +39,24 @@ void local_timer_setup(unsigned cpu); * Register access. 
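 *
 * ccount and the ccompare registers are read and written with the
 * generic xtensa_get_sr()/xtensa_set_sr() helpers; LINUX_TIMER selects
 * which ccompare register drives the kernel tick.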
*/ -#define WSR_CCOUNT(r) asm volatile ("wsr %0, ccount" :: "a" (r)) -#define RSR_CCOUNT(r) asm volatile ("rsr %0, ccount" : "=a" (r)) -#define WSR_CCOMPARE(x,r) asm volatile ("wsr %0,"__stringify(SREG_CCOMPARE)"+"__stringify(x) :: "a"(r)) -#define RSR_CCOMPARE(x,r) asm volatile ("rsr %0,"__stringify(SREG_CCOMPARE)"+"__stringify(x) : "=a"(r)) - static inline unsigned long get_ccount (void) { - unsigned long ccount; - RSR_CCOUNT(ccount); - return ccount; + return xtensa_get_sr(ccount); } static inline void set_ccount (unsigned long ccount) { - WSR_CCOUNT(ccount); + xtensa_set_sr(ccount, ccount); } static inline unsigned long get_linux_timer (void) { - unsigned ccompare; - RSR_CCOMPARE(LINUX_TIMER, ccompare); - return ccompare; + return xtensa_get_sr(SREG_CCOMPARE + LINUX_TIMER); } static inline void set_linux_timer (unsigned long ccompare) { - WSR_CCOMPARE(LINUX_TIMER, ccompare); + xtensa_set_sr(ccompare, SREG_CCOMPARE + LINUX_TIMER); } #endif /* _XTENSA_TIMEX_H */ diff --git a/arch/xtensa/include/asm/traps.h b/arch/xtensa/include/asm/traps.h index f5cd7a7e65e0..f720a57d0a5b 100644 --- a/arch/xtensa/include/asm/traps.h +++ b/arch/xtensa/include/asm/traps.h @@ -25,8 +25,6 @@ struct exc_table { void *fixup; /* For passing a parameter to fixup */ void *fixup_param; - /* For fast syscall handler */ - unsigned long syscall_save; /* Fast user exception handlers */ void *fast_user_handler[EXCCAUSE_N]; /* Fast kernel exception handlers */ diff --git a/arch/xtensa/include/asm/uaccess.h b/arch/xtensa/include/asm/uaccess.h index f1158b4c629c..d11ef2939652 100644 --- a/arch/xtensa/include/asm/uaccess.h +++ b/arch/xtensa/include/asm/uaccess.h @@ -159,10 +159,9 @@ __asm__ __volatile__( \ "2: \n" \ " .section .fixup,\"ax\" \n" \ " .align 4 \n" \ - "4: \n" \ - " .long 2b \n" \ + " .literal_position \n" \ "5: \n" \ - " l32r %1, 4b \n" \ + " movi %1, 2b \n" \ " movi %0, %4 \n" \ " jx %1 \n" \ " .previous \n" \ @@ -217,10 +216,9 @@ __asm__ __volatile__( \ "2: \n" \ " .section .fixup,\"ax\" \n" \ " .align 4 \n" \ - "4: \n" \ - " .long 2b \n" \ + " .literal_position \n" \ "5: \n" \ - " l32r %1, 4b \n" \ + " movi %1, 2b \n" \ " movi %2, 0 \n" \ " movi %0, %4 \n" \ " jx %1 \n" \ diff --git a/arch/xtensa/include/asm/unistd.h b/arch/xtensa/include/asm/unistd.h index 574e5520968c..0d34629dafc5 100644 --- a/arch/xtensa/include/asm/unistd.h +++ b/arch/xtensa/include/asm/unistd.h @@ -22,4 +22,6 @@ #define __IGNORE_vfork /* use clone */ #define __IGNORE_fadvise64 /* use fadvise64_64 */ +#define NR_syscalls __NR_syscalls + #endif /* _XTENSA_UNISTD_H */ diff --git a/arch/xtensa/include/uapi/asm/Kbuild b/arch/xtensa/include/uapi/asm/Kbuild index 837d4dd76785..f95cad300369 100644 --- a/arch/xtensa/include/uapi/asm/Kbuild +++ b/arch/xtensa/include/uapi/asm/Kbuild @@ -1,6 +1,7 @@ # UAPI Header export list include include/uapi/asm-generic/Kbuild.asm +generated-y += unistd_32.h generic-y += bitsperlong.h generic-y += bpf_perf_event.h generic-y += errno.h diff --git a/arch/xtensa/include/uapi/asm/ptrace.h b/arch/xtensa/include/uapi/asm/ptrace.h index a10b42963703..2ec0f9100a06 100644 --- a/arch/xtensa/include/uapi/asm/ptrace.h +++ b/arch/xtensa/include/uapi/asm/ptrace.h @@ -12,6 +12,8 @@ #ifndef _UAPI_XTENSA_PTRACE_H #define _UAPI_XTENSA_PTRACE_H +#include + /* Registers used by strace */ #define REG_A_BASE 0x0000 @@ -36,5 +38,21 @@ #define PTRACE_GETHBPREGS 20 #define PTRACE_SETHBPREGS 21 - +#ifndef __ASSEMBLY__ + +struct user_pt_regs { + __u32 pc; + __u32 ps; + __u32 lbeg; + __u32 lend; + __u32 lcount; + __u32 sar; + __u32 
windowstart; + __u32 windowbase; + __u32 threadptr; + __u32 reserved[7 + 48]; + __u32 a[64]; +}; + +#endif #endif /* _UAPI_XTENSA_PTRACE_H */ diff --git a/arch/xtensa/include/uapi/asm/unistd.h b/arch/xtensa/include/uapi/asm/unistd.h index bc3f62db5c5f..e67ccdd117d8 100644 --- a/arch/xtensa/include/uapi/asm/unistd.h +++ b/arch/xtensa/include/uapi/asm/unistd.h @@ -1,784 +1,10 @@ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ -#if !defined(_UAPI_XTENSA_UNISTD_H) || defined(__SYSCALL) +#ifndef _UAPI_XTENSA_UNISTD_H #define _UAPI_XTENSA_UNISTD_H -#ifndef __SYSCALL -# define __SYSCALL(nr,func,nargs) -#endif +#include -#define __NR_spill 0 -__SYSCALL( 0, sys_ni_syscall, 0) -#define __NR_xtensa 1 -__SYSCALL( 1, sys_ni_syscall, 0) -#define __NR_available4 2 -__SYSCALL( 2, sys_ni_syscall, 0) -#define __NR_available5 3 -__SYSCALL( 3, sys_ni_syscall, 0) -#define __NR_available6 4 -__SYSCALL( 4, sys_ni_syscall, 0) -#define __NR_available7 5 -__SYSCALL( 5, sys_ni_syscall, 0) -#define __NR_available8 6 -__SYSCALL( 6, sys_ni_syscall, 0) -#define __NR_available9 7 -__SYSCALL( 7, sys_ni_syscall, 0) - -/* File Operations */ - -#define __NR_open 8 -__SYSCALL( 8, sys_open, 3) -#define __NR_close 9 -__SYSCALL( 9, sys_close, 1) -#define __NR_dup 10 -__SYSCALL( 10, sys_dup, 1) -#define __NR_dup2 11 -__SYSCALL( 11, sys_dup2, 2) -#define __NR_read 12 -__SYSCALL( 12, sys_read, 3) -#define __NR_write 13 -__SYSCALL( 13, sys_write, 3) -#define __NR_select 14 -__SYSCALL( 14, sys_select, 5) -#define __NR_lseek 15 -__SYSCALL( 15, sys_lseek, 3) -#define __NR_poll 16 -__SYSCALL( 16, sys_poll, 3) -#define __NR__llseek 17 -__SYSCALL( 17, sys_llseek, 5) -#define __NR_epoll_wait 18 -__SYSCALL( 18, sys_epoll_wait, 4) -#define __NR_epoll_ctl 19 -__SYSCALL( 19, sys_epoll_ctl, 4) -#define __NR_epoll_create 20 -__SYSCALL( 20, sys_epoll_create, 1) -#define __NR_creat 21 -__SYSCALL( 21, sys_creat, 2) -#define __NR_truncate 22 -__SYSCALL( 22, sys_truncate, 2) -#define __NR_ftruncate 23 -__SYSCALL( 23, sys_ftruncate, 2) -#define __NR_readv 24 -__SYSCALL( 24, sys_readv, 3) -#define __NR_writev 25 -__SYSCALL( 25, sys_writev, 3) -#define __NR_fsync 26 -__SYSCALL( 26, sys_fsync, 1) -#define __NR_fdatasync 27 -__SYSCALL( 27, sys_fdatasync, 1) -#define __NR_truncate64 28 -__SYSCALL( 28, sys_truncate64, 2) -#define __NR_ftruncate64 29 -__SYSCALL( 29, sys_ftruncate64, 2) -#define __NR_pread64 30 -__SYSCALL( 30, sys_pread64, 6) -#define __NR_pwrite64 31 -__SYSCALL( 31, sys_pwrite64, 6) - -#define __NR_link 32 -__SYSCALL( 32, sys_link, 2) -#define __NR_rename 33 -__SYSCALL( 33, sys_rename, 2) -#define __NR_symlink 34 -__SYSCALL( 34, sys_symlink, 2) -#define __NR_readlink 35 -__SYSCALL( 35, sys_readlink, 3) -#define __NR_mknod 36 -__SYSCALL( 36, sys_mknod, 3) -#define __NR_pipe 37 -__SYSCALL( 37, sys_pipe, 1) -#define __NR_unlink 38 -__SYSCALL( 38, sys_unlink, 1) -#define __NR_rmdir 39 -__SYSCALL( 39, sys_rmdir, 1) - -#define __NR_mkdir 40 -__SYSCALL( 40, sys_mkdir, 2) -#define __NR_chdir 41 -__SYSCALL( 41, sys_chdir, 1) -#define __NR_fchdir 42 -__SYSCALL( 42, sys_fchdir, 1) -#define __NR_getcwd 43 -__SYSCALL( 43, sys_getcwd, 2) - -#define __NR_chmod 44 -__SYSCALL( 44, sys_chmod, 2) -#define __NR_chown 45 -__SYSCALL( 45, sys_chown, 3) -#define __NR_stat 46 -__SYSCALL( 46, sys_newstat, 2) -#define __NR_stat64 47 -__SYSCALL( 47, sys_stat64, 2) - -#define __NR_lchown 48 -__SYSCALL( 48, sys_lchown, 3) -#define __NR_lstat 49 -__SYSCALL( 49, sys_newlstat, 2) -#define __NR_lstat64 50 -__SYSCALL( 50, sys_lstat64, 2) -#define 
__NR_available51 51 -__SYSCALL( 51, sys_ni_syscall, 0) - -#define __NR_fchmod 52 -__SYSCALL( 52, sys_fchmod, 2) -#define __NR_fchown 53 -__SYSCALL( 53, sys_fchown, 3) -#define __NR_fstat 54 -__SYSCALL( 54, sys_newfstat, 2) -#define __NR_fstat64 55 -__SYSCALL( 55, sys_fstat64, 2) - -#define __NR_flock 56 -__SYSCALL( 56, sys_flock, 2) -#define __NR_access 57 -__SYSCALL( 57, sys_access, 2) -#define __NR_umask 58 -__SYSCALL( 58, sys_umask, 1) -#define __NR_getdents 59 -__SYSCALL( 59, sys_getdents, 3) -#define __NR_getdents64 60 -__SYSCALL( 60, sys_getdents64, 3) -#define __NR_fcntl64 61 -__SYSCALL( 61, sys_fcntl64, 3) -#define __NR_fallocate 62 -__SYSCALL( 62, sys_fallocate, 6) -#define __NR_fadvise64_64 63 -__SYSCALL( 63, xtensa_fadvise64_64, 6) -#define __NR_utime 64 /* glibc 2.3.3 ?? */ -__SYSCALL( 64, sys_utime, 2) -#define __NR_utimes 65 -__SYSCALL( 65, sys_utimes, 2) -#define __NR_ioctl 66 -__SYSCALL( 66, sys_ioctl, 3) -#define __NR_fcntl 67 -__SYSCALL( 67, sys_fcntl, 3) - -#define __NR_setxattr 68 -__SYSCALL( 68, sys_setxattr, 5) -#define __NR_getxattr 69 -__SYSCALL( 69, sys_getxattr, 4) -#define __NR_listxattr 70 -__SYSCALL( 70, sys_listxattr, 3) -#define __NR_removexattr 71 -__SYSCALL( 71, sys_removexattr, 2) -#define __NR_lsetxattr 72 -__SYSCALL( 72, sys_lsetxattr, 5) -#define __NR_lgetxattr 73 -__SYSCALL( 73, sys_lgetxattr, 4) -#define __NR_llistxattr 74 -__SYSCALL( 74, sys_llistxattr, 3) -#define __NR_lremovexattr 75 -__SYSCALL( 75, sys_lremovexattr, 2) -#define __NR_fsetxattr 76 -__SYSCALL( 76, sys_fsetxattr, 5) -#define __NR_fgetxattr 77 -__SYSCALL( 77, sys_fgetxattr, 4) -#define __NR_flistxattr 78 -__SYSCALL( 78, sys_flistxattr, 3) -#define __NR_fremovexattr 79 -__SYSCALL( 79, sys_fremovexattr, 2) - -/* File Map / Shared Memory Operations */ - -#define __NR_mmap2 80 -__SYSCALL( 80, sys_mmap_pgoff, 6) -#define __NR_munmap 81 -__SYSCALL( 81, sys_munmap, 2) -#define __NR_mprotect 82 -__SYSCALL( 82, sys_mprotect, 3) -#define __NR_brk 83 -__SYSCALL( 83, sys_brk, 1) -#define __NR_mlock 84 -__SYSCALL( 84, sys_mlock, 2) -#define __NR_munlock 85 -__SYSCALL( 85, sys_munlock, 2) -#define __NR_mlockall 86 -__SYSCALL( 86, sys_mlockall, 1) -#define __NR_munlockall 87 -__SYSCALL( 87, sys_munlockall, 0) -#define __NR_mremap 88 -__SYSCALL( 88, sys_mremap, 4) -#define __NR_msync 89 -__SYSCALL( 89, sys_msync, 3) -#define __NR_mincore 90 -__SYSCALL( 90, sys_mincore, 3) -#define __NR_madvise 91 -__SYSCALL( 91, sys_madvise, 3) -#define __NR_shmget 92 -__SYSCALL( 92, sys_shmget, 4) -#define __NR_shmat 93 -__SYSCALL( 93, xtensa_shmat, 4) -#define __NR_shmctl 94 -__SYSCALL( 94, sys_shmctl, 4) -#define __NR_shmdt 95 -__SYSCALL( 95, sys_shmdt, 4) - -/* Socket Operations */ - -#define __NR_socket 96 -__SYSCALL( 96, sys_socket, 3) -#define __NR_setsockopt 97 -__SYSCALL( 97, sys_setsockopt, 5) -#define __NR_getsockopt 98 -__SYSCALL( 98, sys_getsockopt, 5) -#define __NR_shutdown 99 -__SYSCALL( 99, sys_shutdown, 2) - -#define __NR_bind 100 -__SYSCALL(100, sys_bind, 3) -#define __NR_connect 101 -__SYSCALL(101, sys_connect, 3) -#define __NR_listen 102 -__SYSCALL(102, sys_listen, 2) -#define __NR_accept 103 -__SYSCALL(103, sys_accept, 3) - -#define __NR_getsockname 104 -__SYSCALL(104, sys_getsockname, 3) -#define __NR_getpeername 105 -__SYSCALL(105, sys_getpeername, 3) -#define __NR_sendmsg 106 -__SYSCALL(106, sys_sendmsg, 3) -#define __NR_recvmsg 107 -__SYSCALL(107, sys_recvmsg, 3) -#define __NR_send 108 -__SYSCALL(108, sys_send, 4) -#define __NR_recv 109 -__SYSCALL(109, sys_recv, 4) -#define __NR_sendto 110 
-__SYSCALL(110, sys_sendto, 6) -#define __NR_recvfrom 111 -__SYSCALL(111, sys_recvfrom, 6) - -#define __NR_socketpair 112 -__SYSCALL(112, sys_socketpair, 4) -#define __NR_sendfile 113 -__SYSCALL(113, sys_sendfile, 4) -#define __NR_sendfile64 114 -__SYSCALL(114, sys_sendfile64, 4) -#define __NR_sendmmsg 115 -__SYSCALL(115, sys_sendmmsg, 4) - -/* Process Operations */ - -#define __NR_clone 116 -__SYSCALL(116, sys_clone, 5) -#define __NR_execve 117 -__SYSCALL(117, sys_execve, 3) -#define __NR_exit 118 -__SYSCALL(118, sys_exit, 1) -#define __NR_exit_group 119 -__SYSCALL(119, sys_exit_group, 1) -#define __NR_getpid 120 -__SYSCALL(120, sys_getpid, 0) -#define __NR_wait4 121 -__SYSCALL(121, sys_wait4, 4) -#define __NR_waitid 122 -__SYSCALL(122, sys_waitid, 5) -#define __NR_kill 123 -__SYSCALL(123, sys_kill, 2) -#define __NR_tkill 124 -__SYSCALL(124, sys_tkill, 2) -#define __NR_tgkill 125 -__SYSCALL(125, sys_tgkill, 3) -#define __NR_set_tid_address 126 -__SYSCALL(126, sys_set_tid_address, 1) -#define __NR_gettid 127 -__SYSCALL(127, sys_gettid, 0) -#define __NR_setsid 128 -__SYSCALL(128, sys_setsid, 0) -#define __NR_getsid 129 -__SYSCALL(129, sys_getsid, 1) -#define __NR_prctl 130 -__SYSCALL(130, sys_prctl, 5) -#define __NR_personality 131 -__SYSCALL(131, sys_personality, 1) -#define __NR_getpriority 132 -__SYSCALL(132, sys_getpriority, 2) -#define __NR_setpriority 133 -__SYSCALL(133, sys_setpriority, 3) -#define __NR_setitimer 134 -__SYSCALL(134, sys_setitimer, 3) -#define __NR_getitimer 135 -__SYSCALL(135, sys_getitimer, 2) -#define __NR_setuid 136 -__SYSCALL(136, sys_setuid, 1) -#define __NR_getuid 137 -__SYSCALL(137, sys_getuid, 0) -#define __NR_setgid 138 -__SYSCALL(138, sys_setgid, 1) -#define __NR_getgid 139 -__SYSCALL(139, sys_getgid, 0) -#define __NR_geteuid 140 -__SYSCALL(140, sys_geteuid, 0) -#define __NR_getegid 141 -__SYSCALL(141, sys_getegid, 0) -#define __NR_setreuid 142 -__SYSCALL(142, sys_setreuid, 2) -#define __NR_setregid 143 -__SYSCALL(143, sys_setregid, 2) -#define __NR_setresuid 144 -__SYSCALL(144, sys_setresuid, 3) -#define __NR_getresuid 145 -__SYSCALL(145, sys_getresuid, 3) -#define __NR_setresgid 146 -__SYSCALL(146, sys_setresgid, 3) -#define __NR_getresgid 147 -__SYSCALL(147, sys_getresgid, 3) -#define __NR_setpgid 148 -__SYSCALL(148, sys_setpgid, 2) -#define __NR_getpgid 149 -__SYSCALL(149, sys_getpgid, 1) -#define __NR_getppid 150 -__SYSCALL(150, sys_getppid, 0) -#define __NR_getpgrp 151 -__SYSCALL(151, sys_getpgrp, 0) - -#define __NR_reserved152 152 /* set_thread_area */ -__SYSCALL(152, sys_ni_syscall, 0) -#define __NR_reserved153 153 /* get_thread_area */ -__SYSCALL(153, sys_ni_syscall, 0) -#define __NR_times 154 -__SYSCALL(154, sys_times, 1) -#define __NR_acct 155 -__SYSCALL(155, sys_acct, 1) -#define __NR_sched_setaffinity 156 -__SYSCALL(156, sys_sched_setaffinity, 3) -#define __NR_sched_getaffinity 157 -__SYSCALL(157, sys_sched_getaffinity, 3) -#define __NR_capget 158 -__SYSCALL(158, sys_capget, 2) -#define __NR_capset 159 -__SYSCALL(159, sys_capset, 2) -#define __NR_ptrace 160 -__SYSCALL(160, sys_ptrace, 4) -#define __NR_semtimedop 161 -__SYSCALL(161, sys_semtimedop, 5) -#define __NR_semget 162 -__SYSCALL(162, sys_semget, 4) -#define __NR_semop 163 -__SYSCALL(163, sys_semop, 4) -#define __NR_semctl 164 -__SYSCALL(164, sys_semctl, 4) -#define __NR_available165 165 -__SYSCALL(165, sys_ni_syscall, 0) -#define __NR_msgget 166 -__SYSCALL(166, sys_msgget, 4) -#define __NR_msgsnd 167 -__SYSCALL(167, sys_msgsnd, 4) -#define __NR_msgrcv 168 -__SYSCALL(168, sys_msgrcv, 4) 
-#define __NR_msgctl 169 -__SYSCALL(169, sys_msgctl, 4) -#define __NR_available170 170 -__SYSCALL(170, sys_ni_syscall, 0) - -/* File System */ - -#define __NR_umount2 171 -__SYSCALL(171, sys_umount, 2) -#define __NR_mount 172 -__SYSCALL(172, sys_mount, 5) -#define __NR_swapon 173 -__SYSCALL(173, sys_swapon, 2) -#define __NR_chroot 174 -__SYSCALL(174, sys_chroot, 1) -#define __NR_pivot_root 175 -__SYSCALL(175, sys_pivot_root, 2) -#define __NR_umount 176 -__SYSCALL(176, sys_oldumount, 1) #define __ARCH_WANT_SYS_OLDUMOUNT -#define __NR_swapoff 177 -__SYSCALL(177, sys_swapoff, 1) -#define __NR_sync 178 -__SYSCALL(178, sys_sync, 0) -#define __NR_syncfs 179 -__SYSCALL(179, sys_syncfs, 1) -#define __NR_setfsuid 180 -__SYSCALL(180, sys_setfsuid, 1) -#define __NR_setfsgid 181 -__SYSCALL(181, sys_setfsgid, 1) -#define __NR_sysfs 182 -__SYSCALL(182, sys_sysfs, 3) -#define __NR_ustat 183 -__SYSCALL(183, sys_ustat, 2) -#define __NR_statfs 184 -__SYSCALL(184, sys_statfs, 2) -#define __NR_fstatfs 185 -__SYSCALL(185, sys_fstatfs, 2) -#define __NR_statfs64 186 -__SYSCALL(186, sys_statfs64, 3) -#define __NR_fstatfs64 187 -__SYSCALL(187, sys_fstatfs64, 3) - -/* System */ - -#define __NR_setrlimit 188 -__SYSCALL(188, sys_setrlimit, 2) -#define __NR_getrlimit 189 -__SYSCALL(189, sys_getrlimit, 2) -#define __NR_getrusage 190 -__SYSCALL(190, sys_getrusage, 2) -#define __NR_futex 191 -__SYSCALL(191, sys_futex, 5) -#define __NR_gettimeofday 192 -__SYSCALL(192, sys_gettimeofday, 2) -#define __NR_settimeofday 193 -__SYSCALL(193, sys_settimeofday, 2) -#define __NR_adjtimex 194 -__SYSCALL(194, sys_adjtimex, 1) -#define __NR_nanosleep 195 -__SYSCALL(195, sys_nanosleep, 2) -#define __NR_getgroups 196 -__SYSCALL(196, sys_getgroups, 2) -#define __NR_setgroups 197 -__SYSCALL(197, sys_setgroups, 2) -#define __NR_sethostname 198 -__SYSCALL(198, sys_sethostname, 2) -#define __NR_setdomainname 199 -__SYSCALL(199, sys_setdomainname, 2) -#define __NR_syslog 200 -__SYSCALL(200, sys_syslog, 3) -#define __NR_vhangup 201 -__SYSCALL(201, sys_vhangup, 0) -#define __NR_uselib 202 -__SYSCALL(202, sys_uselib, 1) -#define __NR_reboot 203 -__SYSCALL(203, sys_reboot, 3) -#define __NR_quotactl 204 -__SYSCALL(204, sys_quotactl, 4) -#define __NR_nfsservctl 205 -__SYSCALL(205, sys_ni_syscall, 0) /* old nfsservctl */ -#define __NR__sysctl 206 -__SYSCALL(206, sys_sysctl, 1) -#define __NR_bdflush 207 -__SYSCALL(207, sys_bdflush, 2) -#define __NR_uname 208 -__SYSCALL(208, sys_newuname, 1) -#define __NR_sysinfo 209 -__SYSCALL(209, sys_sysinfo, 1) -#define __NR_init_module 210 -__SYSCALL(210, sys_init_module, 2) -#define __NR_delete_module 211 -__SYSCALL(211, sys_delete_module, 1) - -#define __NR_sched_setparam 212 -__SYSCALL(212, sys_sched_setparam, 2) -#define __NR_sched_getparam 213 -__SYSCALL(213, sys_sched_getparam, 2) -#define __NR_sched_setscheduler 214 -__SYSCALL(214, sys_sched_setscheduler, 3) -#define __NR_sched_getscheduler 215 -__SYSCALL(215, sys_sched_getscheduler, 1) -#define __NR_sched_get_priority_max 216 -__SYSCALL(216, sys_sched_get_priority_max, 1) -#define __NR_sched_get_priority_min 217 -__SYSCALL(217, sys_sched_get_priority_min, 1) -#define __NR_sched_rr_get_interval 218 -__SYSCALL(218, sys_sched_rr_get_interval, 2) -#define __NR_sched_yield 219 -__SYSCALL(219, sys_sched_yield, 0) -#define __NR_available222 222 -__SYSCALL(222, sys_ni_syscall, 0) - -/* Signal Handling */ - -#define __NR_restart_syscall 223 -__SYSCALL(223, sys_restart_syscall, 0) -#define __NR_sigaltstack 224 -__SYSCALL(224, sys_sigaltstack, 2) -#define 
__NR_rt_sigreturn 225 -__SYSCALL(225, xtensa_rt_sigreturn, 1) -#define __NR_rt_sigaction 226 -__SYSCALL(226, sys_rt_sigaction, 4) -#define __NR_rt_sigprocmask 227 -__SYSCALL(227, sys_rt_sigprocmask, 4) -#define __NR_rt_sigpending 228 -__SYSCALL(228, sys_rt_sigpending, 2) -#define __NR_rt_sigtimedwait 229 -__SYSCALL(229, sys_rt_sigtimedwait, 4) -#define __NR_rt_sigqueueinfo 230 -__SYSCALL(230, sys_rt_sigqueueinfo, 3) -#define __NR_rt_sigsuspend 231 -__SYSCALL(231, sys_rt_sigsuspend, 2) - -/* Message */ - -#define __NR_mq_open 232 -__SYSCALL(232, sys_mq_open, 4) -#define __NR_mq_unlink 233 -__SYSCALL(233, sys_mq_unlink, 1) -#define __NR_mq_timedsend 234 -__SYSCALL(234, sys_mq_timedsend, 5) -#define __NR_mq_timedreceive 235 -__SYSCALL(235, sys_mq_timedreceive, 5) -#define __NR_mq_notify 236 -__SYSCALL(236, sys_mq_notify, 2) -#define __NR_mq_getsetattr 237 -__SYSCALL(237, sys_mq_getsetattr, 3) -#define __NR_available238 238 -__SYSCALL(238, sys_ni_syscall, 0) - -/* IO */ - -#define __NR_io_setup 239 -__SYSCALL(239, sys_io_setup, 2) -#define __NR_io_destroy 240 -__SYSCALL(240, sys_io_destroy, 1) -#define __NR_io_submit 241 -__SYSCALL(241, sys_io_submit, 3) -#define __NR_io_getevents 242 -__SYSCALL(242, sys_io_getevents, 5) -#define __NR_io_cancel 243 -__SYSCALL(243, sys_io_cancel, 3) -#define __NR_clock_settime 244 -__SYSCALL(244, sys_clock_settime, 2) -#define __NR_clock_gettime 245 -__SYSCALL(245, sys_clock_gettime, 2) -#define __NR_clock_getres 246 -__SYSCALL(246, sys_clock_getres, 2) -#define __NR_clock_nanosleep 247 -__SYSCALL(247, sys_clock_nanosleep, 4) - -/* Timer */ - -#define __NR_timer_create 248 -__SYSCALL(248, sys_timer_create, 3) -#define __NR_timer_delete 249 -__SYSCALL(249, sys_timer_delete, 1) -#define __NR_timer_settime 250 -__SYSCALL(250, sys_timer_settime, 4) -#define __NR_timer_gettime 251 -__SYSCALL(251, sys_timer_gettime, 2) -#define __NR_timer_getoverrun 252 -__SYSCALL(252, sys_timer_getoverrun, 1) - -/* System */ - -#define __NR_reserved253 253 -__SYSCALL(253, sys_ni_syscall, 0) -#define __NR_lookup_dcookie 254 -__SYSCALL(254, sys_lookup_dcookie, 4) -#define __NR_available255 255 -__SYSCALL(255, sys_ni_syscall, 0) -#define __NR_add_key 256 -__SYSCALL(256, sys_add_key, 5) -#define __NR_request_key 257 -__SYSCALL(257, sys_request_key, 5) -#define __NR_keyctl 258 -__SYSCALL(258, sys_keyctl, 5) -#define __NR_available259 259 -__SYSCALL(259, sys_ni_syscall, 0) - - -#define __NR_readahead 260 -__SYSCALL(260, sys_readahead, 5) -#define __NR_remap_file_pages 261 -__SYSCALL(261, sys_remap_file_pages, 5) -#define __NR_migrate_pages 262 -__SYSCALL(262, sys_migrate_pages, 0) -#define __NR_mbind 263 -__SYSCALL(263, sys_mbind, 6) -#define __NR_get_mempolicy 264 -__SYSCALL(264, sys_get_mempolicy, 5) -#define __NR_set_mempolicy 265 -__SYSCALL(265, sys_set_mempolicy, 3) -#define __NR_unshare 266 -__SYSCALL(266, sys_unshare, 1) -#define __NR_move_pages 267 -__SYSCALL(267, sys_move_pages, 0) -#define __NR_splice 268 -__SYSCALL(268, sys_splice, 0) -#define __NR_tee 269 -__SYSCALL(269, sys_tee, 0) -#define __NR_vmsplice 270 -__SYSCALL(270, sys_vmsplice, 0) -#define __NR_available271 271 -__SYSCALL(271, sys_ni_syscall, 0) - -#define __NR_pselect6 272 -__SYSCALL(272, sys_pselect6, 0) -#define __NR_ppoll 273 -__SYSCALL(273, sys_ppoll, 0) -#define __NR_epoll_pwait 274 -__SYSCALL(274, sys_epoll_pwait, 0) -#define __NR_epoll_create1 275 -__SYSCALL(275, sys_epoll_create1, 1) - -#define __NR_inotify_init 276 -__SYSCALL(276, sys_inotify_init, 0) -#define __NR_inotify_add_watch 277 -__SYSCALL(277, 
sys_inotify_add_watch, 3) -#define __NR_inotify_rm_watch 278 -__SYSCALL(278, sys_inotify_rm_watch, 2) -#define __NR_inotify_init1 279 -__SYSCALL(279, sys_inotify_init1, 1) - -#define __NR_getcpu 280 -__SYSCALL(280, sys_getcpu, 0) -#define __NR_kexec_load 281 -__SYSCALL(281, sys_ni_syscall, 0) - -#define __NR_ioprio_set 282 -__SYSCALL(282, sys_ioprio_set, 2) -#define __NR_ioprio_get 283 -__SYSCALL(283, sys_ioprio_get, 3) - -#define __NR_set_robust_list 284 -__SYSCALL(284, sys_set_robust_list, 3) -#define __NR_get_robust_list 285 -__SYSCALL(285, sys_get_robust_list, 3) -#define __NR_available286 286 -__SYSCALL(286, sys_ni_syscall, 0) -#define __NR_available287 287 -__SYSCALL(287, sys_ni_syscall, 0) - -/* Relative File Operations */ - -#define __NR_openat 288 -__SYSCALL(288, sys_openat, 4) -#define __NR_mkdirat 289 -__SYSCALL(289, sys_mkdirat, 3) -#define __NR_mknodat 290 -__SYSCALL(290, sys_mknodat, 4) -#define __NR_unlinkat 291 -__SYSCALL(291, sys_unlinkat, 3) -#define __NR_renameat 292 -__SYSCALL(292, sys_renameat, 4) -#define __NR_linkat 293 -__SYSCALL(293, sys_linkat, 5) -#define __NR_symlinkat 294 -__SYSCALL(294, sys_symlinkat, 3) -#define __NR_readlinkat 295 -__SYSCALL(295, sys_readlinkat, 4) -#define __NR_utimensat 296 -__SYSCALL(296, sys_utimensat, 0) -#define __NR_fchownat 297 -__SYSCALL(297, sys_fchownat, 5) -#define __NR_futimesat 298 -__SYSCALL(298, sys_futimesat, 4) -#define __NR_fstatat64 299 -__SYSCALL(299, sys_fstatat64, 0) -#define __NR_fchmodat 300 -__SYSCALL(300, sys_fchmodat, 4) -#define __NR_faccessat 301 -__SYSCALL(301, sys_faccessat, 4) -#define __NR_available302 302 -__SYSCALL(302, sys_ni_syscall, 0) -#define __NR_available303 303 -__SYSCALL(303, sys_ni_syscall, 0) - -#define __NR_signalfd 304 -__SYSCALL(304, sys_signalfd, 3) -/* 305 was __NR_timerfd */ -__SYSCALL(305, sys_ni_syscall, 0) -#define __NR_eventfd 306 -__SYSCALL(306, sys_eventfd, 1) -#define __NR_recvmmsg 307 -__SYSCALL(307, sys_recvmmsg, 5) - -#define __NR_setns 308 -__SYSCALL(308, sys_setns, 2) -#define __NR_signalfd4 309 -__SYSCALL(309, sys_signalfd4, 4) -#define __NR_dup3 310 -__SYSCALL(310, sys_dup3, 3) -#define __NR_pipe2 311 -__SYSCALL(311, sys_pipe2, 2) - -#define __NR_timerfd_create 312 -__SYSCALL(312, sys_timerfd_create, 2) -#define __NR_timerfd_settime 313 -__SYSCALL(313, sys_timerfd_settime, 4) -#define __NR_timerfd_gettime 314 -__SYSCALL(314, sys_timerfd_gettime, 2) -#define __NR_available315 315 -__SYSCALL(315, sys_ni_syscall, 0) - -#define __NR_eventfd2 316 -__SYSCALL(316, sys_eventfd2, 2) -#define __NR_preadv 317 -__SYSCALL(317, sys_preadv, 5) -#define __NR_pwritev 318 -__SYSCALL(318, sys_pwritev, 5) -#define __NR_available319 319 -__SYSCALL(319, sys_ni_syscall, 0) - -#define __NR_fanotify_init 320 -__SYSCALL(320, sys_fanotify_init, 2) -#define __NR_fanotify_mark 321 -__SYSCALL(321, sys_fanotify_mark, 6) -#define __NR_process_vm_readv 322 -__SYSCALL(322, sys_process_vm_readv, 6) -#define __NR_process_vm_writev 323 -__SYSCALL(323, sys_process_vm_writev, 6) - -#define __NR_name_to_handle_at 324 -__SYSCALL(324, sys_name_to_handle_at, 5) -#define __NR_open_by_handle_at 325 -__SYSCALL(325, sys_open_by_handle_at, 3) -#define __NR_sync_file_range2 326 -__SYSCALL(326, sys_sync_file_range2, 6) -#define __NR_perf_event_open 327 -__SYSCALL(327, sys_perf_event_open, 5) - -#define __NR_rt_tgsigqueueinfo 328 -__SYSCALL(328, sys_rt_tgsigqueueinfo, 4) -#define __NR_clock_adjtime 329 -__SYSCALL(329, sys_clock_adjtime, 2) -#define __NR_prlimit64 330 -__SYSCALL(330, sys_prlimit64, 4) -#define __NR_kcmp 331 
-__SYSCALL(331, sys_kcmp, 5) - -#define __NR_finit_module 332 -__SYSCALL(332, sys_finit_module, 3) - -#define __NR_accept4 333 -__SYSCALL(333, sys_accept4, 4) - -#define __NR_sched_setattr 334 -__SYSCALL(334, sys_sched_setattr, 2) -#define __NR_sched_getattr 335 -__SYSCALL(335, sys_sched_getattr, 3) - -#define __NR_renameat2 336 -__SYSCALL(336, sys_renameat2, 5) - -#define __NR_seccomp 337 -__SYSCALL(337, sys_seccomp, 3) -#define __NR_getrandom 338 -__SYSCALL(338, sys_getrandom, 3) -#define __NR_memfd_create 339 -__SYSCALL(339, sys_memfd_create, 2) -#define __NR_bpf 340 -__SYSCALL(340, sys_bpf, 3) -#define __NR_execveat 341 -__SYSCALL(341, sys_execveat, 5) - -#define __NR_userfaultfd 342 -__SYSCALL(342, sys_userfaultfd, 1) -#define __NR_membarrier 343 -__SYSCALL(343, sys_membarrier, 2) -#define __NR_mlock2 344 -__SYSCALL(344, sys_mlock2, 3) -#define __NR_copy_file_range 345 -__SYSCALL(345, sys_copy_file_range, 6) -#define __NR_preadv2 346 -__SYSCALL(346, sys_preadv2, 6) -#define __NR_pwritev2 347 -__SYSCALL(347, sys_pwritev2, 6) - -#define __NR_pkey_mprotect 348 -__SYSCALL(348, sys_pkey_mprotect, 4) -#define __NR_pkey_alloc 349 -__SYSCALL(349, sys_pkey_alloc, 2) -#define __NR_pkey_free 350 -__SYSCALL(350, sys_pkey_free, 1) - -#define __NR_statx 351 -__SYSCALL(351, sys_statx, 5) - -#define __NR_syscall_count 352 /* * sysxtensa syscall handler @@ -795,9 +21,6 @@ __SYSCALL(351, sys_statx, 5) #define SYS_XTENSA_ATOMIC_EXG_ADD 2 /* exchange memory and add */ #define SYS_XTENSA_ATOMIC_ADD 3 /* add to memory */ #define SYS_XTENSA_ATOMIC_CMP_SWP 4 /* compare and swap */ - #define SYS_XTENSA_COUNT 5 /* count */ -#undef __SYSCALL - #endif /* _UAPI_XTENSA_UNISTD_H */ diff --git a/arch/xtensa/kernel/Makefile b/arch/xtensa/kernel/Makefile index 8dff506caf07..6f629027ac7d 100644 --- a/arch/xtensa/kernel/Makefile +++ b/arch/xtensa/kernel/Makefile @@ -16,6 +16,7 @@ obj-$(CONFIG_SMP) += smp.o mxhead.o obj-$(CONFIG_XTENSA_VARIANT_HAVE_PERF_EVENTS) += perf_event.o obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o obj-$(CONFIG_S32C1I_SELFTEST) += s32c1i_selftest.o +obj-$(CONFIG_JUMP_LABEL) += jump_label.o # In the Xtensa architecture, assembly generates literals which must always # precede the L32R instruction with a relative offset less than 256 kB. 
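The removal above of the hand-maintained __SYSCALL()/#define list pairs with the generated unistd_32.h and syscall_table.h consumed by arch/xtensa/kernel/syscall.c later in this patch. A minimal user-space sketch of that table-building mechanism follows; it is illustrative only, not the kernel's code, and every name in it (demo_call_table, sys_getpid_demo, the numbers) is invented:

	#include <stdio.h>

	typedef long (*syscall_t)(long);

	/* Stand-ins for sys_ni_syscall and one implemented entry. */
	static long sys_ni_demo(long arg)     { (void)arg; return -38; /* -ENOSYS */ }
	static long sys_getpid_demo(long arg) { (void)arg; return 1234; }

	#define __NR_syscalls_demo 4

	/* A generated header would expand to one such line per table row. */
	#define __SYSCALL(nr, entry) [nr] = (syscall_t)(entry),

	static syscall_t demo_call_table[__NR_syscalls_demo] = {
		/* GCC range designator: every slot defaults to ENOSYS... */
		[0 ... __NR_syscalls_demo - 1] = sys_ni_demo,
		/* ...and later designated initializers override filled rows. */
		__SYSCALL(2, sys_getpid_demo)
	};
	#undef __SYSCALL

	int main(void)
	{
		printf("%ld\n", demo_call_table[2](0)); /* 1234 */
		printf("%ld\n", demo_call_table[3](0)); /* -38, unassigned slot */
		return 0;
	}

The override trick in the initializer is what lets the real sys_call_table start with [0 ... __NR_syscalls - 1] = sys_ni_syscall and then layer the generated __SYSCALL() entries on top, so unimplemented numbers fall through to -ENOSYS without any explicit padding rows.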
diff --git a/arch/xtensa/kernel/asm-offsets.c b/arch/xtensa/kernel/asm-offsets.c index 120dd746a147..33a257b33723 100644 --- a/arch/xtensa/kernel/asm-offsets.c +++ b/arch/xtensa/kernel/asm-offsets.c @@ -137,8 +137,6 @@ int main(void) DEFINE(EXC_TABLE_DOUBLE_SAVE, offsetof(struct exc_table, double_save)); DEFINE(EXC_TABLE_FIXUP, offsetof(struct exc_table, fixup)); DEFINE(EXC_TABLE_PARAM, offsetof(struct exc_table, fixup_param)); - DEFINE(EXC_TABLE_SYSCALL_SAVE, - offsetof(struct exc_table, syscall_save)); DEFINE(EXC_TABLE_FAST_USER, offsetof(struct exc_table, fast_user_handler)); DEFINE(EXC_TABLE_FAST_KERNEL, diff --git a/arch/xtensa/kernel/coprocessor.S b/arch/xtensa/kernel/coprocessor.S index 4f8b52d575a2..92bf24a9da92 100644 --- a/arch/xtensa/kernel/coprocessor.S +++ b/arch/xtensa/kernel/coprocessor.S @@ -33,16 +33,16 @@ */ #define SAVE_CP_REGS(x) \ - .align 4; \ - .Lsave_cp_regs_cp##x: \ .if XTENSA_HAVE_COPROCESSOR(x); \ + .align 4; \ + .Lsave_cp_regs_cp##x: \ xchal_cp##x##_store a2 a4 a5 a6 a7; \ - .endif; \ - jx a0 + jx a0; \ + .endif #define SAVE_CP_REGS_TAB(x) \ .if XTENSA_HAVE_COPROCESSOR(x); \ - .long .Lsave_cp_regs_cp##x - .Lsave_cp_regs_jump_table; \ + .long .Lsave_cp_regs_cp##x; \ .else; \ .long 0; \ .endif; \ @@ -50,16 +50,16 @@ #define LOAD_CP_REGS(x) \ - .align 4; \ - .Lload_cp_regs_cp##x: \ .if XTENSA_HAVE_COPROCESSOR(x); \ + .align 4; \ + .Lload_cp_regs_cp##x: \ xchal_cp##x##_load a2 a4 a5 a6 a7; \ - .endif; \ - jx a0 + jx a0; \ + .endif #define LOAD_CP_REGS_TAB(x) \ .if XTENSA_HAVE_COPROCESSOR(x); \ - .long .Lload_cp_regs_cp##x - .Lload_cp_regs_jump_table; \ + .long .Lload_cp_regs_cp##x; \ .else; \ .long 0; \ .endif; \ @@ -83,6 +83,7 @@ LOAD_CP_REGS(6) LOAD_CP_REGS(7) + .section ".rodata", "a" .align 4 .Lsave_cp_regs_jump_table: SAVE_CP_REGS_TAB(0) @@ -104,64 +105,20 @@ LOAD_CP_REGS_TAB(6) LOAD_CP_REGS_TAB(7) -/* - * coprocessor_save(buffer, index) - * a2 a3 - * coprocessor_load(buffer, index) - * a2 a3 - * - * Save or load coprocessor registers for coprocessor 'index'. - * The register values are saved to or loaded from them 'buffer' address. - * - * Note that these functions don't update the coprocessor_owner information! - * - */ - -ENTRY(coprocessor_save) - - entry a1, 32 - s32i a0, a1, 0 - movi a0, .Lsave_cp_regs_jump_table - addx8 a3, a3, a0 - l32i a3, a3, 0 - beqz a3, 1f - add a0, a0, a3 - callx0 a0 -1: l32i a0, a1, 0 - retw - -ENDPROC(coprocessor_save) - -ENTRY(coprocessor_load) - - entry a1, 32 - s32i a0, a1, 0 - movi a0, .Lload_cp_regs_jump_table - addx4 a3, a3, a0 - l32i a3, a3, 0 - beqz a3, 1f - add a0, a0, a3 - callx0 a0 -1: l32i a0, a1, 0 - retw - -ENDPROC(coprocessor_load) + .previous /* - * coprocessor_flush(struct task_info*, index) + * coprocessor_flush(struct thread_info*, index) * a2 a3 - * coprocessor_restore(struct task_info*, index) - * a2 a3 * - * Save or load coprocessor registers for coprocessor 'index'. + * Save coprocessor registers for coprocessor 'index'. * The register values are saved to or loaded from the coprocessor area * inside the task_info structure. * - * Note that these functions don't update the coprocessor_owner information! + * Note that this function doesn't update the coprocessor_owner information! 
* */ - ENTRY(coprocessor_flush) entry a1, 32 @@ -172,29 +129,12 @@ ENTRY(coprocessor_flush) l32i a3, a3, 0 add a2, a2, a4 beqz a3, 1f - add a0, a0, a3 - callx0 a0 + callx0 a3 1: l32i a0, a1, 0 retw ENDPROC(coprocessor_flush) -ENTRY(coprocessor_restore) - entry a1, 32 - s32i a0, a1, 0 - movi a0, .Lload_cp_regs_jump_table - addx4 a3, a3, a0 - l32i a4, a3, 4 - l32i a3, a3, 0 - add a2, a2, a4 - beqz a3, 1f - add a0, a0, a3 - callx0 a0 -1: l32i a0, a1, 0 - retw - -ENDPROC(coprocessor_restore) - /* * Entry condition: * @@ -274,10 +214,9 @@ ENTRY(fast_coprocessor) movi a0, 2f # a0: 'return' address addx8 a3, a3, a5 # a3: coprocessor number l32i a2, a3, 4 # a2: xtregs offset - l32i a3, a3, 0 # a3: jump offset + l32i a3, a3, 0 # a3: jump address add a2, a2, a4 - add a4, a3, a5 # a4: address of save routine - jx a4 + jx a3 /* Note that only a0 and a1 were preserved. */ @@ -297,10 +236,9 @@ ENTRY(fast_coprocessor) movi a0, 1f addx8 a3, a3, a5 l32i a2, a3, 4 # a2: xtregs offset - l32i a3, a3, 0 # a3: jump offset + l32i a3, a3, 0 # a3: jump address add a2, a2, a4 - add a4, a3, a5 - jx a4 + jx a3 /* Restore all registers and return from exception handler. */ diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S index 9cbc380e9572..e50f5124dc6f 100644 --- a/arch/xtensa/kernel/entry.S +++ b/arch/xtensa/kernel/entry.S @@ -364,7 +364,7 @@ common_exception: s32i a2, a1, PT_DEBUGCAUSE s32i a3, a1, PT_PC - movi a2, -1 + movi a2, NO_SYSCALL rsr a3, excvaddr s32i a2, a1, PT_SYSCALL movi a2, 0 @@ -1022,25 +1022,6 @@ ENDPROC(fast_alloca) * excsave_1: dispatch table */ -ENTRY(fast_syscall_kernel) - - /* Skip syscall. */ - - rsr a0, epc1 - addi a0, a0, 3 - wsr a0, epc1 - - l32i a0, a2, PT_DEPC - bgeui a0, VALID_DOUBLE_EXCEPTION_ADDRESS, fast_syscall_unrecoverable - - rsr a0, depc # get syscall-nr - _beqz a0, fast_syscall_spill_registers - _beqi a0, __NR_xtensa, fast_syscall_xtensa - - j kernel_exception - -ENDPROC(fast_syscall_kernel) - ENTRY(fast_syscall_user) /* Skip syscall. */ @@ -1865,20 +1846,28 @@ ENTRY(system_call) /* regs->syscall = regs->areg[2] */ - l32i a3, a2, PT_AREG2 + l32i a7, a2, PT_AREG2 + s32i a7, a2, PT_SYSCALL + + GET_THREAD_INFO(a4, a1) + l32i a3, a4, TI_FLAGS + movi a4, _TIF_WORK_MASK + and a3, a3, a4 + beqz a3, 1f + mov a6, a2 - s32i a3, a2, PT_SYSCALL call4 do_syscall_trace_enter - mov a3, a6 + l32i a7, a2, PT_SYSCALL +1: /* syscall = sys_call_table[syscall_nr] */ movi a4, sys_call_table - movi a5, __NR_syscall_count + movi a5, __NR_syscalls movi a6, -ENOSYS - bgeu a3, a5, 1f + bgeu a7, a5, 1f - addx4 a4, a3, a4 + addx4 a4, a7, a4 l32i a4, a4, 0 movi a5, sys_ni_syscall; beq a4, a5, 1f @@ -1900,6 +1889,10 @@ ENTRY(system_call) 1: /* regs->areg[2] = return_value */ s32i a6, a2, PT_AREG2 + bnez a3, 1f + retw + +1: mov a6, a2 call4 do_syscall_trace_leave retw diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S index 9053a5622d2c..da08e75100ab 100644 --- a/arch/xtensa/kernel/head.S +++ b/arch/xtensa/kernel/head.S @@ -59,10 +59,6 @@ ENTRY(_start) .align 4 .literal_position -.Lstartup: - .word _startup - - .align 4 _SetupOCD: /* * Initialize WB, WS, and clear PS.EXCM (to allow loop instructions). 
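A few hunks up, the system_call path in arch/xtensa/kernel/entry.S was reworked into the usual trace-aware dispatch: take a fast path when no _TIF_WORK_MASK bits are set, let do_syscall_trace_enter() rewrite or cancel the syscall number, bounds-check against __NR_syscalls, and fall back to -ENOSYS. A hedged C sketch of that control flow follows; all _demo names are invented and the register bookkeeping of the real assembly is omitted:

	#include <stdio.h>

	#define NO_SYSCALL	(-1L)
	#define ENOSYS_DEMO	38
	#define NR_SYSCALLS_DEMO 2
	#define TIF_WORK_DEMO	0x1	/* stands in for _TIF_WORK_MASK */

	typedef long (*syscall_t)(long);
	static long sys_add_one_demo(long arg) { return arg + 1; }
	static syscall_t table_demo[NR_SYSCALLS_DEMO] = {
		sys_add_one_demo, sys_add_one_demo
	};

	static long dispatch_demo(unsigned long ti_flags, long nr, long arg)
	{
		long ret;

		if (ti_flags & TIF_WORK_DEMO) {
			/* do_syscall_trace_enter() runs here; a tracer may
			 * set nr = NO_SYSCALL to suppress the call. */
		}

		/* Bounds check, defaulting to -ENOSYS (covers NO_SYSCALL). */
		if (nr < 0 || nr >= NR_SYSCALLS_DEMO)
			ret = -ENOSYS_DEMO;
		else
			ret = table_demo[nr](arg);

		if (ti_flags & TIF_WORK_DEMO) {
			/* do_syscall_trace_leave() observes ret, which the
			 * real code has stored back into regs->areg[2]. */
		}
		return ret;
	}

	int main(void)
	{
		printf("%ld\n", dispatch_demo(0, 0, 41));		/* 42 */
		printf("%ld\n", dispatch_demo(0, NO_SYSCALL, 0));	/* -38 */
		return 0;
	}

Skipping both hook sites when no work bits are set is the point of the new beqz fast path: the untraced case returns with retw immediately after storing the return value.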
@@ -99,12 +95,12 @@ _SetupMMU: 1: #endif #endif - .end no-absolute-literals - l32r a0, .Lstartup + movi a0, _startup jx a0 ENDPROC(_start) + .end no-absolute-literals __REF .literal_position diff --git a/arch/xtensa/kernel/hw_breakpoint.c b/arch/xtensa/kernel/hw_breakpoint.c index c2e387c19cda..4f20416061fb 100644 --- a/arch/xtensa/kernel/hw_breakpoint.c +++ b/arch/xtensa/kernel/hw_breakpoint.c @@ -101,30 +101,30 @@ static void xtensa_wsr(unsigned long v, u8 sr) switch (sr) { #if XCHAL_NUM_IBREAK > 0 case SREG_IBREAKA + 0: - WSR(v, SREG_IBREAKA + 0); + xtensa_set_sr(v, SREG_IBREAKA + 0); break; #endif #if XCHAL_NUM_IBREAK > 1 case SREG_IBREAKA + 1: - WSR(v, SREG_IBREAKA + 1); + xtensa_set_sr(v, SREG_IBREAKA + 1); break; #endif #if XCHAL_NUM_DBREAK > 0 case SREG_DBREAKA + 0: - WSR(v, SREG_DBREAKA + 0); + xtensa_set_sr(v, SREG_DBREAKA + 0); break; case SREG_DBREAKC + 0: - WSR(v, SREG_DBREAKC + 0); + xtensa_set_sr(v, SREG_DBREAKC + 0); break; #endif #if XCHAL_NUM_DBREAK > 1 case SREG_DBREAKA + 1: - WSR(v, SREG_DBREAKA + 1); + xtensa_set_sr(v, SREG_DBREAKA + 1); break; case SREG_DBREAKC + 1: - WSR(v, SREG_DBREAKC + 1); + xtensa_set_sr(v, SREG_DBREAKC + 1); break; #endif } @@ -150,8 +150,8 @@ static void set_ibreak_regs(int reg, struct perf_event *bp) unsigned long ibreakenable; xtensa_wsr(info->address, SREG_IBREAKA + reg); - RSR(ibreakenable, SREG_IBREAKENABLE); - WSR(ibreakenable | (1 << reg), SREG_IBREAKENABLE); + ibreakenable = xtensa_get_sr(SREG_IBREAKENABLE); + xtensa_set_sr(ibreakenable | (1 << reg), SREG_IBREAKENABLE); } static void set_dbreak_regs(int reg, struct perf_event *bp) @@ -214,8 +214,9 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp) /* Breakpoint */ i = free_slot(this_cpu_ptr(bp_on_reg), XCHAL_NUM_IBREAK, bp); if (i >= 0) { - RSR(ibreakenable, SREG_IBREAKENABLE); - WSR(ibreakenable & ~(1 << i), SREG_IBREAKENABLE); + ibreakenable = xtensa_get_sr(SREG_IBREAKENABLE); + xtensa_set_sr(ibreakenable & ~(1 << i), + SREG_IBREAKENABLE); } } else { /* Watchpoint */ diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c new file mode 100644 index 000000000000..d108f721c116 --- /dev/null +++ b/arch/xtensa/kernel/jump_label.c @@ -0,0 +1,99 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2018 Cadence Design Systems Inc. + +#include +#include +#include +#include +#include +#include + +#include + +#ifdef HAVE_JUMP_LABEL + +#define J_OFFSET_MASK 0x0003ffff +#define J_SIGN_MASK (~(J_OFFSET_MASK >> 1)) + +#if defined(__XTENSA_EL__) +#define J_INSN 0x6 +#define NOP_INSN 0x0020f0 +#elif defined(__XTENSA_EB__) +#define J_INSN 0x60000000 +#define NOP_INSN 0x0f020000 +#else +#error Unsupported endianness. 
+#endif + +struct patch { + atomic_t cpu_count; + unsigned long addr; + size_t sz; + const void *data; +}; + +static void local_patch_text(unsigned long addr, const void *data, size_t sz) +{ + memcpy((void *)addr, data, sz); + local_flush_icache_range(addr, addr + sz); +} + +static int patch_text_stop_machine(void *data) +{ + struct patch *patch = data; + + if (atomic_inc_return(&patch->cpu_count) == 1) { + local_patch_text(patch->addr, patch->data, patch->sz); + atomic_inc(&patch->cpu_count); + } else { + while (atomic_read(&patch->cpu_count) <= num_online_cpus()) + cpu_relax(); + __invalidate_icache_range(patch->addr, patch->sz); + } + return 0; +} + +static void patch_text(unsigned long addr, const void *data, size_t sz) +{ + if (IS_ENABLED(CONFIG_SMP)) { + struct patch patch = { + .cpu_count = ATOMIC_INIT(0), + .addr = addr, + .sz = sz, + .data = data, + }; + stop_machine_cpuslocked(patch_text_stop_machine, + &patch, NULL); + } else { + unsigned long flags; + + local_irq_save(flags); + local_patch_text(addr, data, sz); + local_irq_restore(flags); + } +} + +void arch_jump_label_transform(struct jump_entry *e, + enum jump_label_type type) +{ + u32 d = (jump_entry_target(e) - (jump_entry_code(e) + 4)); + u32 insn; + + /* Jump only works within 128K of the J instruction. */ + BUG_ON(!((d & J_SIGN_MASK) == 0 || + (d & J_SIGN_MASK) == J_SIGN_MASK)); + + if (type == JUMP_LABEL_JMP) { +#if defined(__XTENSA_EL__) + insn = ((d & J_OFFSET_MASK) << 6) | J_INSN; +#elif defined(__XTENSA_EB__) + insn = ((d & J_OFFSET_MASK) << 8) | J_INSN; +#endif + } else { + insn = NOP_INSN; + } + + patch_text(jump_entry_code(e), &insn, JUMP_LABEL_NOP_SIZE); +} + +#endif /* HAVE_JUMP_LABEL */ diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c index 1fc138b6bc0a..9171bff76fc4 100644 --- a/arch/xtensa/kernel/pci-dma.c +++ b/arch/xtensa/kernel/pci-dma.c @@ -160,7 +160,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, flag & __GFP_NOWARN); if (!page) - page = alloc_pages(flag, get_order(size)); + page = alloc_pages(flag | __GFP_ZERO, get_order(size)); if (!page) return NULL; diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c index 4bb68133a72a..74969a437a37 100644 --- a/arch/xtensa/kernel/process.c +++ b/arch/xtensa/kernel/process.c @@ -87,7 +87,8 @@ void coprocessor_release_all(struct thread_info *ti) } ti->cpenable = cpenable; - coprocessor_clear_cpenable(); + if (ti == current_thread_info()) + xtensa_set_sr(0, cpenable); preempt_enable(); } @@ -99,16 +100,16 @@ void coprocessor_flush_all(struct thread_info *ti) preempt_disable(); - RSR_CPENABLE(old_cpenable); + old_cpenable = xtensa_get_sr(cpenable); cpenable = ti->cpenable; - WSR_CPENABLE(cpenable); + xtensa_set_sr(cpenable, cpenable); for (i = 0; i < XCHAL_CP_MAX; i++) { if ((cpenable & 1) != 0 && coprocessor_owner[i] == ti) coprocessor_flush(ti, i); cpenable >>= 1; } - WSR_CPENABLE(old_cpenable); + xtensa_set_sr(old_cpenable, cpenable); preempt_enable(); } @@ -325,49 +326,3 @@ unsigned long get_wchan(struct task_struct *p) } while (count++ < 16); return 0; } - -/* - * xtensa_gregset_t and 'struct pt_regs' are vastly different formats - * of processor registers. Besides different ordering, - * xtensa_gregset_t contains non-live register information that - * 'struct pt_regs' does not. Exception handling (primarily) uses - * 'struct pt_regs'. Core files and ptrace use xtensa_gregset_t. 
- * - */ - -void xtensa_elf_core_copy_regs (xtensa_gregset_t *elfregs, struct pt_regs *regs) -{ - unsigned long wb, ws, wm; - int live, last; - - wb = regs->windowbase; - ws = regs->windowstart; - wm = regs->wmask; - ws = ((ws >> wb) | (ws << (WSBITS - wb))) & ((1 << WSBITS) - 1); - - /* Don't leak any random bits. */ - - memset(elfregs, 0, sizeof(*elfregs)); - - /* Note: PS.EXCM is not set while user task is running; its - * being set in regs->ps is for exception handling convenience. - */ - - elfregs->pc = regs->pc; - elfregs->ps = (regs->ps & ~(1 << PS_EXCM_BIT)); - elfregs->lbeg = regs->lbeg; - elfregs->lend = regs->lend; - elfregs->lcount = regs->lcount; - elfregs->sar = regs->sar; - elfregs->windowstart = ws; - - live = (wm & 2) ? 4 : (wm & 4) ? 8 : (wm & 8) ? 12 : 16; - last = XCHAL_NUM_AREGS - (wm >> 4) * 4; - memcpy(elfregs->a, regs->areg, live * 4); - memcpy(elfregs->a + last, regs->areg + last, (wm >> 4) * 16); -} - -int dump_fpu(void) -{ - return 0; -} diff --git a/arch/xtensa/kernel/ptrace.c b/arch/xtensa/kernel/ptrace.c index d9541be0605a..b964f0b2d886 100644 --- a/arch/xtensa/kernel/ptrace.c +++ b/arch/xtensa/kernel/ptrace.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -26,189 +27,243 @@ #include #include +#define CREATE_TRACE_POINTS +#include + #include #include #include #include #include - -void user_enable_single_step(struct task_struct *child) +static int gpr_get(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + void *kbuf, void __user *ubuf) { - child->ptrace |= PT_SINGLESTEP; + struct pt_regs *regs = task_pt_regs(target); + struct user_pt_regs newregs = { + .pc = regs->pc, + .ps = regs->ps & ~(1 << PS_EXCM_BIT), + .lbeg = regs->lbeg, + .lend = regs->lend, + .lcount = regs->lcount, + .sar = regs->sar, + .threadptr = regs->threadptr, + .windowbase = regs->windowbase, + .windowstart = regs->windowstart, + }; + + memcpy(newregs.a, + regs->areg + XCHAL_NUM_AREGS - regs->windowbase * 4, + regs->windowbase * 16); + memcpy(newregs.a + regs->windowbase * 4, + regs->areg, + (WSBITS - regs->windowbase) * 16); + + return user_regset_copyout(&pos, &count, &kbuf, &ubuf, + &newregs, 0, -1); } -void user_disable_single_step(struct task_struct *child) -{ - child->ptrace &= ~PT_SINGLESTEP; -} - -/* - * Called by kernel/ptrace.c when detaching to disable single stepping. - */ - -void ptrace_disable(struct task_struct *child) +static int gpr_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) { - /* Nothing to do.. 
*/ -} + int ret; + struct user_pt_regs newregs = {0}; + struct pt_regs *regs; + const u32 ps_mask = PS_CALLINC_MASK | PS_OWB_MASK; -static int ptrace_getregs(struct task_struct *child, void __user *uregs) -{ - struct pt_regs *regs = task_pt_regs(child); - xtensa_gregset_t __user *gregset = uregs; - unsigned long wb = regs->windowbase; - int i; + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &newregs, 0, -1); + if (ret) + return ret; - if (!access_ok(VERIFY_WRITE, uregs, sizeof(xtensa_gregset_t))) - return -EIO; + if (newregs.windowbase >= XCHAL_NUM_AREGS / 4) + return -EINVAL; - __put_user(regs->pc, &gregset->pc); - __put_user(regs->ps & ~(1 << PS_EXCM_BIT), &gregset->ps); - __put_user(regs->lbeg, &gregset->lbeg); - __put_user(regs->lend, &gregset->lend); - __put_user(regs->lcount, &gregset->lcount); - __put_user(regs->windowstart, &gregset->windowstart); - __put_user(regs->windowbase, &gregset->windowbase); - __put_user(regs->threadptr, &gregset->threadptr); + regs = task_pt_regs(target); + regs->pc = newregs.pc; + regs->ps = (regs->ps & ~ps_mask) | (newregs.ps & ps_mask); + regs->lbeg = newregs.lbeg; + regs->lend = newregs.lend; + regs->lcount = newregs.lcount; + regs->sar = newregs.sar; + regs->threadptr = newregs.threadptr; + + if (newregs.windowbase != regs->windowbase || + newregs.windowstart != regs->windowstart) { + u32 rotws, wmask; + + rotws = (((newregs.windowstart | + (newregs.windowstart << WSBITS)) >> + newregs.windowbase) & + ((1 << WSBITS) - 1)) & ~1; + wmask = ((rotws ? WSBITS + 1 - ffs(rotws) : 0) << 4) | + (rotws & 0xF) | 1; + regs->windowbase = newregs.windowbase; + regs->windowstart = newregs.windowstart; + regs->wmask = wmask; + } - for (i = 0; i < XCHAL_NUM_AREGS; i++) - __put_user(regs->areg[i], - gregset->a + ((wb * 4 + i) % XCHAL_NUM_AREGS)); + memcpy(regs->areg + XCHAL_NUM_AREGS - newregs.windowbase * 4, + newregs.a, newregs.windowbase * 16); + memcpy(regs->areg, newregs.a + newregs.windowbase * 4, + (WSBITS - newregs.windowbase) * 16); return 0; } -static int ptrace_setregs(struct task_struct *child, void __user *uregs) +static int tie_get(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + void *kbuf, void __user *ubuf) { - struct pt_regs *regs = task_pt_regs(child); - xtensa_gregset_t *gregset = uregs; - const unsigned long ps_mask = PS_CALLINC_MASK | PS_OWB_MASK; - unsigned long ps; - unsigned long wb, ws; + int ret; + struct pt_regs *regs = task_pt_regs(target); + struct thread_info *ti = task_thread_info(target); + elf_xtregs_t *newregs = kzalloc(sizeof(elf_xtregs_t), GFP_KERNEL); - if (!access_ok(VERIFY_WRITE, uregs, sizeof(xtensa_gregset_t))) - return -EIO; + if (!newregs) + return -ENOMEM; - __get_user(regs->pc, &gregset->pc); - __get_user(ps, &gregset->ps); - __get_user(regs->lbeg, &gregset->lbeg); - __get_user(regs->lend, &gregset->lend); - __get_user(regs->lcount, &gregset->lcount); - __get_user(ws, &gregset->windowstart); - __get_user(wb, &gregset->windowbase); - __get_user(regs->threadptr, &gregset->threadptr); + newregs->opt = regs->xtregs_opt; + newregs->user = ti->xtregs_user; - regs->ps = (regs->ps & ~ps_mask) | (ps & ps_mask) | (1 << PS_EXCM_BIT); +#if XTENSA_HAVE_COPROCESSORS + /* Flush all coprocessor registers to memory. 
*/ + coprocessor_flush_all(ti); + newregs->cp0 = ti->xtregs_cp.cp0; + newregs->cp1 = ti->xtregs_cp.cp1; + newregs->cp2 = ti->xtregs_cp.cp2; + newregs->cp3 = ti->xtregs_cp.cp3; + newregs->cp4 = ti->xtregs_cp.cp4; + newregs->cp5 = ti->xtregs_cp.cp5; + newregs->cp6 = ti->xtregs_cp.cp6; + newregs->cp7 = ti->xtregs_cp.cp7; +#endif + ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, + newregs, 0, -1); + kfree(newregs); + return ret; +} - if (wb >= XCHAL_NUM_AREGS / 4) - return -EFAULT; +static int tie_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + int ret; + struct pt_regs *regs = task_pt_regs(target); + struct thread_info *ti = task_thread_info(target); + elf_xtregs_t *newregs = kzalloc(sizeof(elf_xtregs_t), GFP_KERNEL); - if (wb != regs->windowbase || ws != regs->windowstart) { - unsigned long rotws, wmask; + if (!newregs) + return -ENOMEM; - rotws = (((ws | (ws << WSBITS)) >> wb) & - ((1 << WSBITS) - 1)) & ~1; - wmask = ((rotws ? WSBITS + 1 - ffs(rotws) : 0) << 4) | - (rotws & 0xF) | 1; - regs->windowbase = wb; - regs->windowstart = ws; - regs->wmask = wmask; - } + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, + newregs, 0, -1); - if (wb != 0 && __copy_from_user(regs->areg + XCHAL_NUM_AREGS - wb * 4, - gregset->a, wb * 16)) - return -EFAULT; - - if (__copy_from_user(regs->areg, gregset->a + wb * 4, - (WSBITS - wb) * 16)) - return -EFAULT; + if (ret) + goto exit; + regs->xtregs_opt = newregs->opt; + ti->xtregs_user = newregs->user; - return 0; +#if XTENSA_HAVE_COPROCESSORS + /* Flush all coprocessors before we overwrite them. */ + coprocessor_flush_all(ti); + coprocessor_release_all(ti); + ti->xtregs_cp.cp0 = newregs->cp0; + ti->xtregs_cp.cp1 = newregs->cp1; + ti->xtregs_cp.cp2 = newregs->cp2; + ti->xtregs_cp.cp3 = newregs->cp3; + ti->xtregs_cp.cp4 = newregs->cp4; + ti->xtregs_cp.cp5 = newregs->cp5; + ti->xtregs_cp.cp6 = newregs->cp6; + ti->xtregs_cp.cp7 = newregs->cp7; +#endif +exit: + kfree(newregs); + return ret; } +enum xtensa_regset { + REGSET_GPR, + REGSET_TIE, +}; -#if XTENSA_HAVE_COPROCESSORS -#define CP_OFFSETS(cp) \ - { \ - .elf_xtregs_offset = offsetof(elf_xtregs_t, cp), \ - .ti_offset = offsetof(struct thread_info, xtregs_cp.cp), \ - .sz = sizeof(xtregs_ ## cp ## _t), \ - } +static const struct user_regset xtensa_regsets[] = { + [REGSET_GPR] = { + .core_note_type = NT_PRSTATUS, + .n = sizeof(struct user_pt_regs) / sizeof(u32), + .size = sizeof(u32), + .align = sizeof(u32), + .get = gpr_get, + .set = gpr_set, + }, + [REGSET_TIE] = { + .core_note_type = NT_PRFPREG, + .n = sizeof(elf_xtregs_t) / sizeof(u32), + .size = sizeof(u32), + .align = sizeof(u32), + .get = tie_get, + .set = tie_set, + }, +}; -static const struct { - size_t elf_xtregs_offset; - size_t ti_offset; - size_t sz; -} cp_offsets[] = { - CP_OFFSETS(cp0), - CP_OFFSETS(cp1), - CP_OFFSETS(cp2), - CP_OFFSETS(cp3), - CP_OFFSETS(cp4), - CP_OFFSETS(cp5), - CP_OFFSETS(cp6), - CP_OFFSETS(cp7), +static const struct user_regset_view user_xtensa_view = { + .name = "xtensa", + .e_machine = EM_XTENSA, + .regsets = xtensa_regsets, + .n = ARRAY_SIZE(xtensa_regsets) }; -#endif -static int ptrace_getxregs(struct task_struct *child, void __user *uregs) +const struct user_regset_view *task_user_regset_view(struct task_struct *task) { - struct pt_regs *regs = task_pt_regs(child); - struct thread_info *ti = task_thread_info(child); - elf_xtregs_t __user *xtregs = uregs; - int ret = 0; - int i __maybe_unused; + return 
&user_xtensa_view; +} - if (!access_ok(VERIFY_WRITE, uregs, sizeof(elf_xtregs_t))) - return -EIO; +void user_enable_single_step(struct task_struct *child) +{ + child->ptrace |= PT_SINGLESTEP; +} -#if XTENSA_HAVE_COPROCESSORS - /* Flush all coprocessor registers to memory. */ - coprocessor_flush_all(ti); +void user_disable_single_step(struct task_struct *child) +{ + child->ptrace &= ~PT_SINGLESTEP; +} - for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) - ret |= __copy_to_user((char __user *)xtregs + - cp_offsets[i].elf_xtregs_offset, - (const char *)ti + - cp_offsets[i].ti_offset, - cp_offsets[i].sz); -#endif - ret |= __copy_to_user(&xtregs->opt, ®s->xtregs_opt, - sizeof(xtregs->opt)); - ret |= __copy_to_user(&xtregs->user,&ti->xtregs_user, - sizeof(xtregs->user)); +/* + * Called by kernel/ptrace.c when detaching to disable single stepping. + */ - return ret ? -EFAULT : 0; +void ptrace_disable(struct task_struct *child) +{ + /* Nothing to do.. */ } -static int ptrace_setxregs(struct task_struct *child, void __user *uregs) +static int ptrace_getregs(struct task_struct *child, void __user *uregs) { - struct thread_info *ti = task_thread_info(child); - struct pt_regs *regs = task_pt_regs(child); - elf_xtregs_t *xtregs = uregs; - int ret = 0; - int i __maybe_unused; - - if (!access_ok(VERIFY_READ, uregs, sizeof(elf_xtregs_t))) - return -EFAULT; + return copy_regset_to_user(child, &user_xtensa_view, REGSET_GPR, + 0, sizeof(xtensa_gregset_t), uregs); +} -#if XTENSA_HAVE_COPROCESSORS - /* Flush all coprocessors before we overwrite them. */ - coprocessor_flush_all(ti); - coprocessor_release_all(ti); +static int ptrace_setregs(struct task_struct *child, void __user *uregs) +{ + return copy_regset_from_user(child, &user_xtensa_view, REGSET_GPR, + 0, sizeof(xtensa_gregset_t), uregs); +} - for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) - ret |= __copy_from_user((char *)ti + cp_offsets[i].ti_offset, - (const char __user *)xtregs + - cp_offsets[i].elf_xtregs_offset, - cp_offsets[i].sz); -#endif - ret |= __copy_from_user(®s->xtregs_opt, &xtregs->opt, - sizeof(xtregs->opt)); - ret |= __copy_from_user(&ti->xtregs_user, &xtregs->user, - sizeof(xtregs->user)); +static int ptrace_getxregs(struct task_struct *child, void __user *uregs) +{ + return copy_regset_to_user(child, &user_xtensa_view, REGSET_TIE, + 0, sizeof(elf_xtregs_t), uregs); +} - return ret ? -EFAULT : 0; +static int ptrace_setxregs(struct task_struct *child, void __user *uregs) +{ + return copy_regset_from_user(child, &user_xtensa_view, REGSET_TIE, + 0, sizeof(elf_xtregs_t), uregs); } static int ptrace_peekusr(struct task_struct *child, long regno, @@ -447,20 +502,10 @@ long arch_ptrace(struct task_struct *child, long request, void __user *datap = (void __user *) data; switch (request) { - case PTRACE_PEEKTEXT: /* read word at location addr. */ - case PTRACE_PEEKDATA: - ret = generic_ptrace_peekdata(child, addr, data); - break; - case PTRACE_PEEKUSR: /* read register specified by addr. */ ret = ptrace_peekusr(child, addr, datap); break; - case PTRACE_POKETEXT: /* write the word at location addr. */ - case PTRACE_POKEDATA: - ret = generic_ptrace_pokedata(child, addr, data); - break; - case PTRACE_POKEUSR: /* write register specified by addr. 
*/ ret = ptrace_pokeusr(child, addr, data); break; @@ -497,19 +542,23 @@ long arch_ptrace(struct task_struct *child, long request, return ret; } -unsigned long do_syscall_trace_enter(struct pt_regs *regs) +void do_syscall_trace_enter(struct pt_regs *regs) { if (test_thread_flag(TIF_SYSCALL_TRACE) && tracehook_report_syscall_entry(regs)) - return -1; + regs->syscall = NO_SYSCALL; - return regs->areg[2]; + if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) + trace_sys_enter(regs, syscall_get_nr(current, regs)); } void do_syscall_trace_leave(struct pt_regs *regs) { int step; + if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) + trace_sys_exit(regs, regs_return_value(regs)); + step = test_thread_flag(TIF_SINGLESTEP); if (step || test_thread_flag(TIF_SYSCALL_TRACE)) diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c index 351283b60df6..4ec6fbb696bf 100644 --- a/arch/xtensa/kernel/setup.c +++ b/arch/xtensa/kernel/setup.c @@ -318,9 +318,9 @@ static inline int mem_reserve(unsigned long start, unsigned long end) void __init setup_arch(char **cmdline_p) { pr_info("config ID: %08x:%08x\n", - get_sr(SREG_EPC), get_sr(SREG_EXCSAVE)); - if (get_sr(SREG_EPC) != XCHAL_HW_CONFIGID0 || - get_sr(SREG_EXCSAVE) != XCHAL_HW_CONFIGID1) + xtensa_get_sr(SREG_EPC), xtensa_get_sr(SREG_EXCSAVE)); + if (xtensa_get_sr(SREG_EPC) != XCHAL_HW_CONFIGID0 || + xtensa_get_sr(SREG_EXCSAVE) != XCHAL_HW_CONFIGID1) pr_info("built for config ID: %08x:%08x\n", XCHAL_HW_CONFIGID0, XCHAL_HW_CONFIGID1); @@ -596,7 +596,7 @@ c_show(struct seq_file *f, void *slot) num_online_cpus(), cpumask_pr_args(cpu_online_mask), XCHAL_BUILD_UNIQUE_ID, - get_sr(SREG_EPC), get_sr(SREG_EXCSAVE), + xtensa_get_sr(SREG_EPC), xtensa_get_sr(SREG_EXCSAVE), XCHAL_HAVE_BE ? "big" : "little", ccount_freq/1000000, (ccount_freq/10000) % 100, diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c index f88e7a0b232c..74e1682876ac 100644 --- a/arch/xtensa/kernel/signal.c +++ b/arch/xtensa/kernel/signal.c @@ -185,13 +185,13 @@ restore_sigcontext(struct pt_regs *regs, struct rt_sigframe __user *frame) COPY(sar); #undef COPY - /* All registers were flushed to stack. Start with a prestine frame. */ + /* All registers were flushed to stack. Start with a pristine frame. */ regs->wmask = 1; regs->windowbase = 0; regs->windowstart = 1; - regs->syscall = -1; /* disable syscall checks */ + regs->syscall = NO_SYSCALL; /* disable syscall checks */ /* For PS, restore only PS.CALLINC. * Assume that all other bits are either the same as for the signal @@ -423,7 +423,7 @@ static void do_signal(struct pt_regs *regs) /* Are we from a system call? */ - if ((signed)regs->syscall >= 0) { + if (regs->syscall != NO_SYSCALL) { /* If so, check system call restarting.. */ @@ -462,7 +462,7 @@ static void do_signal(struct pt_regs *regs) } /* Did we come from a system call? */ - if ((signed) regs->syscall >= 0) { + if (regs->syscall != NO_SYSCALL) { /* Restart the system call - no handlers present */ switch (regs->areg[2]) { case -ERESTARTNOHAND: diff --git a/arch/xtensa/kernel/syscall.c b/arch/xtensa/kernel/syscall.c index 8201748da05b..2c415fce6801 100644 --- a/arch/xtensa/kernel/syscall.c +++ b/arch/xtensa/kernel/syscall.c @@ -28,13 +28,12 @@ #include #include -typedef void (*syscall_t)(void); +syscall_t sys_call_table[__NR_syscalls] /* FIXME __cacheline_aligned */= { + [0 ... __NR_syscalls - 1] = (syscall_t)&sys_ni_syscall, -syscall_t sys_call_table[__NR_syscall_count] /* FIXME __cacheline_aligned */= { - [0 ... 
__NR_syscall_count - 1] = (syscall_t)&sys_ni_syscall, - -#define __SYSCALL(nr,symbol,nargs) [ nr ] = (syscall_t)symbol, -#include +#define __SYSCALL(nr, entry, nargs)[nr] = (syscall_t)entry, +#include +#undef __SYSCALL }; #define COLOUR_ALIGN(addr, pgoff) \ diff --git a/arch/xtensa/kernel/syscalls/Makefile b/arch/xtensa/kernel/syscalls/Makefile new file mode 100644 index 000000000000..659faefdcb1d --- /dev/null +++ b/arch/xtensa/kernel/syscalls/Makefile @@ -0,0 +1,38 @@ +# SPDX-License-Identifier: GPL-2.0 +kapi := arch/$(SRCARCH)/include/generated/asm +uapi := arch/$(SRCARCH)/include/generated/uapi/asm + +_dummy := $(shell [ -d '$(uapi)' ] || mkdir -p '$(uapi)') \ + $(shell [ -d '$(kapi)' ] || mkdir -p '$(kapi)') + +syscall := $(srctree)/$(src)/syscall.tbl +syshdr := $(srctree)/$(src)/syscallhdr.sh +systbl := $(srctree)/$(src)/syscalltbl.sh + +quiet_cmd_syshdr = SYSHDR $@ + cmd_syshdr = $(CONFIG_SHELL) '$(syshdr)' '$<' '$@' \ + '$(syshdr_abis_$(basetarget))' \ + '$(syshdr_pfx_$(basetarget))' \ + '$(syshdr_offset_$(basetarget))' + +quiet_cmd_systbl = SYSTBL $@ + cmd_systbl = $(CONFIG_SHELL) '$(systbl)' '$<' '$@' \ + '$(systbl_abis_$(basetarget))' \ + '$(systbl_abi_$(basetarget))' \ + '$(systbl_offset_$(basetarget))' + +$(uapi)/unistd_32.h: $(syscall) $(syshdr) + $(call if_changed,syshdr) + +$(kapi)/syscall_table.h: $(syscall) $(systbl) + $(call if_changed,systbl) + +uapisyshdr-y += unistd_32.h +kapisyshdr-y += syscall_table.h + +targets += $(uapisyshdr-y) $(kapisyshdr-y) + +PHONY += all +all: $(addprefix $(uapi)/,$(uapisyshdr-y)) +all: $(addprefix $(kapi)/,$(kapisyshdr-y)) + @: diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl new file mode 100644 index 000000000000..69cf91b03b26 --- /dev/null +++ b/arch/xtensa/kernel/syscalls/syscall.tbl @@ -0,0 +1,374 @@ +# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note +# +# system call numbers and entry vectors for xtensa +# +# The format is: +# <number> <abi> <name> <entry point> +# +# The <abi> is always "common" for this file +# +0 common spill sys_ni_syscall +1 common xtensa sys_ni_syscall +2 common available4 sys_ni_syscall +3 common available5 sys_ni_syscall +4 common available6 sys_ni_syscall +5 common available7 sys_ni_syscall +6 common available8 sys_ni_syscall +7 common available9 sys_ni_syscall +# File Operations +8 common open sys_open +9 common close sys_close +10 common dup sys_dup +11 common dup2 sys_dup2 +12 common read sys_read +13 common write sys_write +14 common select sys_select +15 common lseek sys_lseek +16 common poll sys_poll +17 common _llseek sys_llseek +18 common epoll_wait sys_epoll_wait +19 common epoll_ctl sys_epoll_ctl +20 common epoll_create sys_epoll_create +21 common creat sys_creat +22 common truncate sys_truncate +23 common ftruncate sys_ftruncate +24 common readv sys_readv +25 common writev sys_writev +26 common fsync sys_fsync +27 common fdatasync sys_fdatasync +28 common truncate64 sys_truncate64 +29 common ftruncate64 sys_ftruncate64 +30 common pread64 sys_pread64 +31 common pwrite64 sys_pwrite64 +32 common link sys_link +33 common rename sys_rename +34 common symlink sys_symlink +35 common readlink sys_readlink +36 common mknod sys_mknod +37 common pipe sys_pipe +38 common unlink sys_unlink +39 common rmdir sys_rmdir +40 common mkdir sys_mkdir +41 common chdir sys_chdir +42 common fchdir sys_fchdir +43 common getcwd sys_getcwd +44 common chmod sys_chmod +45 common chown sys_chown +46 common stat sys_newstat +47 common stat64 sys_stat64 +48 common lchown sys_lchown +49 common lstat sys_newlstat +50
common lstat64 sys_lstat64 +51 common available51 sys_ni_syscall +52 common fchmod sys_fchmod +53 common fchown sys_fchown +54 common fstat sys_newfstat +55 common fstat64 sys_fstat64 +56 common flock sys_flock +57 common access sys_access +58 common umask sys_umask +59 common getdents sys_getdents +60 common getdents64 sys_getdents64 +61 common fcntl64 sys_fcntl64 +62 common fallocate sys_fallocate +63 common fadvise64_64 xtensa_fadvise64_64 +64 common utime sys_utime +65 common utimes sys_utimes +66 common ioctl sys_ioctl +67 common fcntl sys_fcntl +68 common setxattr sys_setxattr +69 common getxattr sys_getxattr +70 common listxattr sys_listxattr +71 common removexattr sys_removexattr +72 common lsetxattr sys_lsetxattr +73 common lgetxattr sys_lgetxattr +74 common llistxattr sys_llistxattr +75 common lremovexattr sys_lremovexattr +76 common fsetxattr sys_fsetxattr +77 common fgetxattr sys_fgetxattr +78 common flistxattr sys_flistxattr +79 common fremovexattr sys_fremovexattr +# File Map / Shared Memory Operations +80 common mmap2 sys_mmap_pgoff +81 common munmap sys_munmap +82 common mprotect sys_mprotect +83 common brk sys_brk +84 common mlock sys_mlock +85 common munlock sys_munlock +86 common mlockall sys_mlockall +87 common munlockall sys_munlockall +88 common mremap sys_mremap +89 common msync sys_msync +90 common mincore sys_mincore +91 common madvise sys_madvise +92 common shmget sys_shmget +93 common shmat xtensa_shmat +94 common shmctl sys_shmctl +95 common shmdt sys_shmdt +# Socket Operations +96 common socket sys_socket +97 common setsockopt sys_setsockopt +98 common getsockopt sys_getsockopt +99 common shutdown sys_shutdown +100 common bind sys_bind +101 common connect sys_connect +102 common listen sys_listen +103 common accept sys_accept +104 common getsockname sys_getsockname +105 common getpeername sys_getpeername +106 common sendmsg sys_sendmsg +107 common recvmsg sys_recvmsg +108 common send sys_send +109 common recv sys_recv +110 common sendto sys_sendto +111 common recvfrom sys_recvfrom +112 common socketpair sys_socketpair +113 common sendfile sys_sendfile +114 common sendfile64 sys_sendfile64 +115 common sendmmsg sys_sendmmsg +# Process Operations +116 common clone sys_clone +117 common execve sys_execve +118 common exit sys_exit +119 common exit_group sys_exit_group +120 common getpid sys_getpid +121 common wait4 sys_wait4 +122 common waitid sys_waitid +123 common kill sys_kill +124 common tkill sys_tkill +125 common tgkill sys_tgkill +126 common set_tid_address sys_set_tid_address +127 common gettid sys_gettid +128 common setsid sys_setsid +129 common getsid sys_getsid +130 common prctl sys_prctl +131 common personality sys_personality +132 common getpriority sys_getpriority +133 common setpriority sys_setpriority +134 common setitimer sys_setitimer +135 common getitimer sys_getitimer +136 common setuid sys_setuid +137 common getuid sys_getuid +138 common setgid sys_setgid +139 common getgid sys_getgid +140 common geteuid sys_geteuid +141 common getegid sys_getegid +142 common setreuid sys_setreuid +143 common setregid sys_setregid +144 common setresuid sys_setresuid +145 common getresuid sys_getresuid +146 common setresgid sys_setresgid +147 common getresgid sys_getresgid +148 common setpgid sys_setpgid +149 common getpgid sys_getpgid +150 common getppid sys_getppid +151 common getpgrp sys_getpgrp +# 152 was set_thread_area +152 common reserved152 sys_ni_syscall +# 153 was get_thread_area +153 common reserved153 sys_ni_syscall +154 common times sys_times +155 
common acct sys_acct +156 common sched_setaffinity sys_sched_setaffinity +157 common sched_getaffinity sys_sched_getaffinity +158 common capget sys_capget +159 common capset sys_capset +160 common ptrace sys_ptrace +161 common semtimedop sys_semtimedop +162 common semget sys_semget +163 common semop sys_semop +164 common semctl sys_semctl +165 common available165 sys_ni_syscall +166 common msgget sys_msgget +167 common msgsnd sys_msgsnd +168 common msgrcv sys_msgrcv +169 common msgctl sys_msgctl +170 common available170 sys_ni_syscall +# File System +171 common umount2 sys_umount +172 common mount sys_mount +173 common swapon sys_swapon +174 common chroot sys_chroot +175 common pivot_root sys_pivot_root +176 common umount sys_oldumount +177 common swapoff sys_swapoff +178 common sync sys_sync +179 common syncfs sys_syncfs +180 common setfsuid sys_setfsuid +181 common setfsgid sys_setfsgid +182 common sysfs sys_sysfs +183 common ustat sys_ustat +184 common statfs sys_statfs +185 common fstatfs sys_fstatfs +186 common statfs64 sys_statfs64 +187 common fstatfs64 sys_fstatfs64 +# System +188 common setrlimit sys_setrlimit +189 common getrlimit sys_getrlimit +190 common getrusage sys_getrusage +191 common futex sys_futex +192 common gettimeofday sys_gettimeofday +193 common settimeofday sys_settimeofday +194 common adjtimex sys_adjtimex +195 common nanosleep sys_nanosleep +196 common getgroups sys_getgroups +197 common setgroups sys_setgroups +198 common sethostname sys_sethostname +199 common setdomainname sys_setdomainname +200 common syslog sys_syslog +201 common vhangup sys_vhangup +202 common uselib sys_uselib +203 common reboot sys_reboot +204 common quotactl sys_quotactl +# 205 was old nfsservctl +205 common nfsservctl sys_ni_syscall +206 common _sysctl sys_sysctl +207 common bdflush sys_bdflush +208 common uname sys_newuname +209 common sysinfo sys_sysinfo +210 common init_module sys_init_module +211 common delete_module sys_delete_module +212 common sched_setparam sys_sched_setparam +213 common sched_getparam sys_sched_getparam +214 common sched_setscheduler sys_sched_setscheduler +215 common sched_getscheduler sys_sched_getscheduler +216 common sched_get_priority_max sys_sched_get_priority_max +217 common sched_get_priority_min sys_sched_get_priority_min +218 common sched_rr_get_interval sys_sched_rr_get_interval +219 common sched_yield sys_sched_yield +222 common available222 sys_ni_syscall +# Signal Handling +223 common restart_syscall sys_restart_syscall +224 common sigaltstack sys_sigaltstack +225 common rt_sigreturn xtensa_rt_sigreturn +226 common rt_sigaction sys_rt_sigaction +227 common rt_sigprocmask sys_rt_sigprocmask +228 common rt_sigpending sys_rt_sigpending +229 common rt_sigtimedwait sys_rt_sigtimedwait +230 common rt_sigqueueinfo sys_rt_sigqueueinfo +231 common rt_sigsuspend sys_rt_sigsuspend +# Message +232 common mq_open sys_mq_open +233 common mq_unlink sys_mq_unlink +234 common mq_timedsend sys_mq_timedsend +235 common mq_timedreceive sys_mq_timedreceive +236 common mq_notify sys_mq_notify +237 common mq_getsetattr sys_mq_getsetattr +238 common available238 sys_ni_syscall +239 common io_setup sys_io_setup +# IO +240 common io_destroy sys_io_destroy +241 common io_submit sys_io_submit +242 common io_getevents sys_io_getevents +243 common io_cancel sys_io_cancel +244 common clock_settime sys_clock_settime +245 common clock_gettime sys_clock_gettime +246 common clock_getres sys_clock_getres +247 common clock_nanosleep sys_clock_nanosleep +# Timer +248 common 
timer_create sys_timer_create +249 common timer_delete sys_timer_delete +250 common timer_settime sys_timer_settime +251 common timer_gettime sys_timer_gettime +252 common timer_getoverrun sys_timer_getoverrun +# System +253 common reserved253 sys_ni_syscall +254 common lookup_dcookie sys_lookup_dcookie +255 common available255 sys_ni_syscall +256 common add_key sys_add_key +257 common request_key sys_request_key +258 common keyctl sys_keyctl +259 common available259 sys_ni_syscall +260 common readahead sys_readahead +261 common remap_file_pages sys_remap_file_pages +262 common migrate_pages sys_migrate_pages +263 common mbind sys_mbind +264 common get_mempolicy sys_get_mempolicy +265 common set_mempolicy sys_set_mempolicy +266 common unshare sys_unshare +267 common move_pages sys_move_pages +268 common splice sys_splice +269 common tee sys_tee +270 common vmsplice sys_vmsplice +271 common available271 sys_ni_syscall +272 common pselect6 sys_pselect6 +273 common ppoll sys_ppoll +274 common epoll_pwait sys_epoll_pwait +275 common epoll_create1 sys_epoll_create1 +276 common inotify_init sys_inotify_init +277 common inotify_add_watch sys_inotify_add_watch +278 common inotify_rm_watch sys_inotify_rm_watch +279 common inotify_init1 sys_inotify_init1 +280 common getcpu sys_getcpu +281 common kexec_load sys_ni_syscall +282 common ioprio_set sys_ioprio_set +283 common ioprio_get sys_ioprio_get +284 common set_robust_list sys_set_robust_list +285 common get_robust_list sys_get_robust_list +286 common available286 sys_ni_syscall +287 common available287 sys_ni_syscall +# Relative File Operations +288 common openat sys_openat +289 common mkdirat sys_mkdirat +290 common mknodat sys_mknodat +291 common unlinkat sys_unlinkat +292 common renameat sys_renameat +293 common linkat sys_linkat +294 common symlinkat sys_symlinkat +295 common readlinkat sys_readlinkat +296 common utimensat sys_utimensat +297 common fchownat sys_fchownat +298 common futimesat sys_futimesat +299 common fstatat64 sys_fstatat64 +300 common fchmodat sys_fchmodat +301 common faccessat sys_faccessat +302 common available302 sys_ni_syscall +303 common available303 sys_ni_syscall +304 common signalfd sys_signalfd +# 305 was timerfd +306 common eventfd sys_eventfd +307 common recvmmsg sys_recvmmsg +308 common setns sys_setns +309 common signalfd4 sys_signalfd4 +310 common dup3 sys_dup3 +311 common pipe2 sys_pipe2 +312 common timerfd_create sys_timerfd_create +313 common timerfd_settime sys_timerfd_settime +314 common timerfd_gettime sys_timerfd_gettime +315 common available315 sys_ni_syscall +316 common eventfd2 sys_eventfd2 +317 common preadv sys_preadv +318 common pwritev sys_pwritev +319 common available319 sys_ni_syscall +320 common fanotify_init sys_fanotify_init +321 common fanotify_mark sys_fanotify_mark +322 common process_vm_readv sys_process_vm_readv +323 common process_vm_writev sys_process_vm_writev +324 common name_to_handle_at sys_name_to_handle_at +325 common open_by_handle_at sys_open_by_handle_at +326 common sync_file_range2 sys_sync_file_range2 +327 common perf_event_open sys_perf_event_open +328 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo +329 common clock_adjtime sys_clock_adjtime +330 common prlimit64 sys_prlimit64 +331 common kcmp sys_kcmp +332 common finit_module sys_finit_module +333 common accept4 sys_accept4 +334 common sched_setattr sys_sched_setattr +335 common sched_getattr sys_sched_getattr +336 common renameat2 sys_renameat2 +337 common seccomp sys_seccomp +338 common getrandom sys_getrandom +339 common 
memfd_create sys_memfd_create +340 common bpf sys_bpf +341 common execveat sys_execveat +342 common userfaultfd sys_userfaultfd +343 common membarrier sys_membarrier +344 common mlock2 sys_mlock2 +345 common copy_file_range sys_copy_file_range +346 common preadv2 sys_preadv2 +347 common pwritev2 sys_pwritev2 +348 common pkey_mprotect sys_pkey_mprotect +349 common pkey_alloc sys_pkey_alloc +350 common pkey_free sys_pkey_free +351 common statx sys_statx diff --git a/arch/xtensa/kernel/syscalls/syscallhdr.sh b/arch/xtensa/kernel/syscalls/syscallhdr.sh new file mode 100644 index 000000000000..d37db641ca31 --- /dev/null +++ b/arch/xtensa/kernel/syscalls/syscallhdr.sh @@ -0,0 +1,36 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 + +in="$1" +out="$2" +my_abis=`echo "($3)" | tr ',' '|'` +prefix="$4" +offset="$5" + +fileguard=_UAPI_ASM_XTENSA_`basename "$out" | sed \ + -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/' \ + -e 's/[^A-Z0-9_]/_/g' -e 's/__/_/g'` +grep -E "^[0-9A-Fa-fXx]+[[:space:]]+${my_abis}" "$in" | sort -n | ( + printf "#ifndef %s\n" "${fileguard}" + printf "#define %s\n" "${fileguard}" + printf "\n" + + nxt=0 + while read nr abi name entry ; do + if [ -z "$offset" ]; then + printf "#define __NR_%s%s\t%s\n" \ + "${prefix}" "${name}" "${nr}" + else + printf "#define __NR_%s%s\t(%s + %s)\n" \ + "${prefix}" "${name}" "${offset}" "${nr}" + fi + nxt=$((nr+1)) + done + + printf "\n" + printf "#ifdef __KERNEL__\n" + printf "#define __NR_syscalls\t%s\n" "${nxt}" + printf "#endif\n" + printf "\n" + printf "#endif /* %s */" "${fileguard}" +) > "$out" diff --git a/arch/xtensa/kernel/syscalls/syscalltbl.sh b/arch/xtensa/kernel/syscalls/syscalltbl.sh new file mode 100644 index 000000000000..85d78d9309ad --- /dev/null +++ b/arch/xtensa/kernel/syscalls/syscalltbl.sh @@ -0,0 +1,32 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 + +in="$1" +out="$2" +my_abis=`echo "($3)" | tr ',' '|'` +my_abi="$4" +offset="$5" + +emit() { + t_nxt="$1" + t_nr="$2" + t_entry="$3" + + while [ $t_nxt -lt $t_nr ]; do + printf "__SYSCALL(%s, sys_ni_syscall, )\n" "${t_nxt}" + t_nxt=$((t_nxt+1)) + done + printf "__SYSCALL(%s, %s, )\n" "${t_nxt}" "${t_entry}" +} + +grep -E "^[0-9A-Fa-fXx]+[[:space:]]+${my_abis}" "$in" | sort -n | ( + nxt=0 + if [ -z "$offset" ]; then + offset=0 + fi + + while read nr abi name entry ; do + emit $((nxt+offset)) $((nr+offset)) $entry + nxt=$((nr+1)) + done +) > "$out" diff --git a/arch/xtensa/kernel/traps.c b/arch/xtensa/kernel/traps.c index 86507fa7c2d7..e6fa55aa1ccb 100644 --- a/arch/xtensa/kernel/traps.c +++ b/arch/xtensa/kernel/traps.c @@ -51,7 +51,6 @@ extern void kernel_exception(void); extern void user_exception(void); -extern void fast_syscall_kernel(void); extern void fast_syscall_user(void); extern void fast_alloca(void); extern void fast_unaligned(void); @@ -89,7 +88,6 @@ typedef struct { static dispatch_init_table_t __initdata dispatch_init_table[] = { { EXCCAUSE_ILLEGAL_INSTRUCTION, 0, do_illegal_instruction}, -{ EXCCAUSE_SYSTEM_CALL, KRNL, fast_syscall_kernel }, { EXCCAUSE_SYSTEM_CALL, USER, fast_syscall_user }, { EXCCAUSE_SYSTEM_CALL, 0, system_call }, /* EXCCAUSE_INSTRUCTION_FETCH unhandled */ @@ -215,8 +213,8 @@ extern void do_IRQ(int, struct pt_regs *); static inline void check_valid_nmi(void) { - unsigned intread = get_sr(interrupt); - unsigned intenable = get_sr(intenable); + unsigned intread = xtensa_get_sr(interrupt); + unsigned intenable = xtensa_get_sr(intenable); BUG_ON(intread & intenable & ~(XTENSA_INTLEVEL_ANDBELOW_MASK(PROFILING_INTLEVEL) ^ @@ -273,8 
+271,8 @@ void do_interrupt(struct pt_regs *regs) irq_enter(); for (;;) { - unsigned intread = get_sr(interrupt); - unsigned intenable = get_sr(intenable); + unsigned intread = xtensa_get_sr(interrupt); + unsigned intenable = xtensa_get_sr(intenable); unsigned int_at_level = intread & intenable; unsigned level; diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c index 30a48bba4a47..d49861099684 100644 --- a/arch/xtensa/mm/init.c +++ b/arch/xtensa/mm/init.c @@ -60,6 +60,9 @@ void __init bootmem_init(void) max_pfn = PFN_DOWN(memblock_end_of_DRAM()); max_low_pfn = min(max_pfn, MAX_LOW_PFN); + early_memtest((phys_addr_t)min_low_pfn << PAGE_SHIFT, + (phys_addr_t)max_low_pfn << PAGE_SHIFT); + memblock_set_current_limit(PFN_PHYS(max_low_pfn)); dma_contiguous_reserve(PFN_PHYS(max_low_pfn)); diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c index 6b95ca43aec0..1734cda6bc4a 100644 --- a/arch/xtensa/mm/kasan_init.c +++ b/arch/xtensa/mm/kasan_init.c @@ -24,12 +24,13 @@ void __init kasan_early_init(void) int i; for (i = 0; i < PTRS_PER_PTE; ++i) - set_pte(kasan_zero_pte + i, - mk_pte(virt_to_page(kasan_zero_page), PAGE_KERNEL)); + set_pte(kasan_early_shadow_pte + i, + mk_pte(virt_to_page(kasan_early_shadow_page), + PAGE_KERNEL)); for (vaddr = 0; vaddr < KASAN_SHADOW_SIZE; vaddr += PMD_SIZE, ++pmd) { BUG_ON(!pmd_none(*pmd)); - set_pmd(pmd, __pmd((unsigned long)kasan_zero_pte)); + set_pmd(pmd, __pmd((unsigned long)kasan_early_shadow_pte)); } early_trap_init(); } @@ -80,13 +81,16 @@ void __init kasan_init(void) populate(kasan_mem_to_shadow((void *)VMALLOC_START), kasan_mem_to_shadow((void *)XCHAL_KSEG_BYPASS_VADDR)); - /* Write protect kasan_zero_page and zero-initialize it again. */ + /* + * Write protect kasan_early_shadow_page and zero-initialize it again. + */ for (i = 0; i < PTRS_PER_PTE; ++i) - set_pte(kasan_zero_pte + i, - mk_pte(virt_to_page(kasan_zero_page), PAGE_KERNEL_RO)); + set_pte(kasan_early_shadow_pte + i, + mk_pte(virt_to_page(kasan_early_shadow_page), + PAGE_KERNEL_RO)); local_flush_tlb_all(); - memset(kasan_zero_page, 0, PAGE_SIZE); + memset(kasan_early_shadow_page, 0, PAGE_SIZE); /* At this point kasan is fully initialized. Enable error messages. */ current->kasan_depth = 0; diff --git a/block/Kconfig b/block/Kconfig index f7045aa47edb..028bc085dac8 100644 --- a/block/Kconfig +++ b/block/Kconfig @@ -155,12 +155,6 @@ config BLK_CGROUP_IOLATENCY Note, this is an experimental interface and could be changed someday. -config BLK_WBT_SQ - bool "Single queue writeback throttling" - depends on BLK_WBT - ---help--- - Enable writeback throttling by default on legacy single queue devices - config BLK_WBT_MQ bool "Multiqueue writeback throttling" default y @@ -224,4 +218,4 @@ config BLK_MQ_RDMA config BLK_PM def_bool BLOCK && PM -source block/Kconfig.iosched +source "block/Kconfig.iosched" diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched index f95a48b0d7b2..4626b88b2d5a 100644 --- a/block/Kconfig.iosched +++ b/block/Kconfig.iosched @@ -3,67 +3,6 @@ if BLOCK menu "IO Schedulers" -config IOSCHED_NOOP - bool - default y - ---help--- - The no-op I/O scheduler is a minimal scheduler that does basic merging - and sorting. Its main uses include non-disk based block devices like - memory devices, and specialised software or hardware environments - that do their own scheduling and require only minimal assistance from - the kernel. - -config IOSCHED_DEADLINE - tristate "Deadline I/O scheduler" - default y - ---help--- - The deadline I/O scheduler is simple and compact. 
It will provide - CSCAN service with FIFO expiration of requests, switching to - a new point in the service tree and doing a batch of IO from there - in case of expiry. - -config IOSCHED_CFQ - tristate "CFQ I/O scheduler" - default y - ---help--- - The CFQ I/O scheduler tries to distribute bandwidth equally - among all processes in the system. It should provide a fair - and low latency working environment, suitable for both desktop - and server systems. - - This is the default I/O scheduler. - -config CFQ_GROUP_IOSCHED - bool "CFQ Group Scheduling support" - depends on IOSCHED_CFQ && BLK_CGROUP - ---help--- - Enable group IO scheduling in CFQ. - -choice - - prompt "Default I/O scheduler" - default DEFAULT_CFQ - help - Select the I/O scheduler which will be used by default for all - block devices. - - config DEFAULT_DEADLINE - bool "Deadline" if IOSCHED_DEADLINE=y - - config DEFAULT_CFQ - bool "CFQ" if IOSCHED_CFQ=y - - config DEFAULT_NOOP - bool "No-op" - -endchoice - -config DEFAULT_IOSCHED - string - default "deadline" if DEFAULT_DEADLINE - default "cfq" if DEFAULT_CFQ - default "noop" if DEFAULT_NOOP - config MQ_IOSCHED_DEADLINE tristate "MQ deadline I/O scheduler" default y diff --git a/block/Makefile b/block/Makefile index 27eac600474f..eee1b4ceecf9 100644 --- a/block/Makefile +++ b/block/Makefile @@ -3,7 +3,7 @@ # Makefile for the kernel block layer # -obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \ +obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-sysfs.o \ blk-flush.o blk-settings.o blk-ioc.o blk-map.o \ blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \ blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \ @@ -18,9 +18,6 @@ obj-$(CONFIG_BLK_DEV_BSGLIB) += bsg-lib.o obj-$(CONFIG_BLK_CGROUP) += blk-cgroup.o obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o obj-$(CONFIG_BLK_CGROUP_IOLATENCY) += blk-iolatency.o -obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o -obj-$(CONFIG_IOSCHED_DEADLINE) += deadline-iosched.o -obj-$(CONFIG_IOSCHED_CFQ) += cfq-iosched.o obj-$(CONFIG_MQ_IOSCHED_DEADLINE) += mq-deadline.o obj-$(CONFIG_MQ_IOSCHED_KYBER) += kyber-iosched.o bfq-y := bfq-iosched.o bfq-wf2q.o bfq-cgroup.o diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 9fe5952d117d..c6113af31960 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -334,7 +334,7 @@ static void bfqg_stats_xfer_dead(struct bfq_group *bfqg) parent = bfqg_parent(bfqg); - lockdep_assert_held(bfqg_to_blkg(bfqg)->q->queue_lock); + lockdep_assert_held(&bfqg_to_blkg(bfqg)->q->queue_lock); if (unlikely(!parent)) return; @@ -642,7 +642,7 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) uint64_t serial_nr; rcu_read_lock(); - serial_nr = bio_blkcg(bio)->css.serial_nr; + serial_nr = __bio_blkcg(bio)->css.serial_nr; /* * Check whether blkcg has changed. The condition may trigger @@ -651,7 +651,7 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr)) goto out; - bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio)); + bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio)); /* * Update blkg_path for bfq_log_* functions. 
We cache this * path, and update it here, for the following diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 97337214bec4..cd307767a134 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -399,9 +399,9 @@ static struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd, unsigned long flags; struct bfq_io_cq *icq; - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&q->queue_lock, flags); icq = icq_to_bic(ioc_lookup_icq(ioc, q)); - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&q->queue_lock, flags); return icq; } @@ -4066,7 +4066,7 @@ static void bfq_update_dispatch_stats(struct request_queue *q, * In addition, the following queue lock guarantees that * bfqq_group(bfqq) exists as well. */ - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); if (idle_timer_disabled) /* * Since the idle timer has been disabled, @@ -4085,7 +4085,7 @@ static void bfq_update_dispatch_stats(struct request_queue *q, bfqg_stats_set_start_empty_time(bfqg); bfqg_stats_update_io_remove(bfqg, rq->cmd_flags); } - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); } #else static inline void bfq_update_dispatch_stats(struct request_queue *q, @@ -4416,7 +4416,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, rcu_read_lock(); - bfqg = bfq_find_set_group(bfqd, bio_blkcg(bio)); + bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio)); if (!bfqg) { bfqq = &bfqd->oom_bfqq; goto out; @@ -4669,11 +4669,11 @@ static void bfq_update_insert_stats(struct request_queue *q, * In addition, the following queue lock guarantees that * bfqq_group(bfqq) exists as well. */ - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags); if (idle_timer_disabled) bfqg_stats_update_idle_time(bfqq_group(bfqq)); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); } #else static inline void bfq_update_insert_stats(struct request_queue *q, @@ -5414,9 +5414,9 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e) } eq->elevator_data = bfqd; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); q->elevator = eq; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); /* * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues. 
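The q->queue_lock to &q->queue_lock churn in the bfq hunks above follows from struct request_queue now embedding its spinlock directly, where the legacy code carried a spinlock_t pointer that a driver could aim at its own lock. A minimal sketch of the type change and its effect on call sites (the rq_legacy/rq_mq struct names are illustrative only, not the kernel's):

#include <linux/spinlock.h>

struct rq_legacy { spinlock_t *queue_lock; };	/* pointer: may alias a driver's lock */
struct rq_mq     { spinlock_t  queue_lock; };	/* embedded: always the queue's own lock */

static void lock_both(struct rq_legacy *old, struct rq_mq *cur)
{
	spin_lock_irq(old->queue_lock);		/* old call sites pass the pointer */
	spin_unlock_irq(old->queue_lock);

	spin_lock_irq(&cur->queue_lock);	/* new call sites take the address */
	spin_unlock_irq(&cur->queue_lock);
}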
@@ -5756,7 +5756,7 @@ static struct elv_fs_entry bfq_attrs[] = { }; static struct elevator_type iosched_bfq_mq = { - .ops.mq = { + .ops = { .limit_depth = bfq_limit_depth, .prepare_request = bfq_prepare_request, .requeue_request = bfq_finish_requeue_request, @@ -5777,7 +5777,6 @@ static struct elevator_type iosched_bfq_mq = { .exit_sched = bfq_exit_queue, }, - .uses_mq = true, .icq_size = sizeof(struct bfq_io_cq), .icq_align = __alignof__(struct bfq_io_cq), .elevator_attrs = bfq_attrs, diff --git a/block/bio-integrity.c b/block/bio-integrity.c index 290af497997b..1b633a3526d4 100644 --- a/block/bio-integrity.c +++ b/block/bio-integrity.c @@ -390,7 +390,6 @@ void bio_integrity_advance(struct bio *bio, unsigned int bytes_done) bip->bip_iter.bi_sector += bytes_done >> 9; bvec_iter_advance(bip->bip_vec, &bip->bip_iter, bytes); } -EXPORT_SYMBOL(bio_integrity_advance); /** * bio_integrity_trim - Trim integrity vector @@ -460,7 +459,6 @@ void bioset_integrity_free(struct bio_set *bs) mempool_exit(&bs->bio_integrity_pool); mempool_exit(&bs->bvec_integrity_pool); } -EXPORT_SYMBOL(bioset_integrity_free); void __init bio_integrity_init(void) { diff --git a/block/bio.c b/block/bio.c index 4d86e90654b2..8281bfcbc265 100644 --- a/block/bio.c +++ b/block/bio.c @@ -244,7 +244,7 @@ fallback: void bio_uninit(struct bio *bio) { - bio_disassociate_task(bio); + bio_disassociate_blkg(bio); } EXPORT_SYMBOL(bio_uninit); @@ -571,14 +571,13 @@ void bio_put(struct bio *bio) } EXPORT_SYMBOL(bio_put); -inline int bio_phys_segments(struct request_queue *q, struct bio *bio) +int bio_phys_segments(struct request_queue *q, struct bio *bio) { if (unlikely(!bio_flagged(bio, BIO_SEG_VALID))) blk_recount_segments(q, bio); return bio->bi_phys_segments; } -EXPORT_SYMBOL(bio_phys_segments); /** * __bio_clone_fast - clone a bio that shares the original bio's biovec @@ -610,7 +609,8 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src) bio->bi_iter = bio_src->bi_iter; bio->bi_io_vec = bio_src->bi_io_vec; - bio_clone_blkcg_association(bio, bio_src); + bio_clone_blkg_association(bio, bio_src); + blkcg_bio_issue_init(bio); } EXPORT_SYMBOL(__bio_clone_fast); @@ -901,7 +901,6 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) return 0; } -EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages); static void submit_bio_wait_endio(struct bio *bio) { @@ -1592,7 +1591,6 @@ void bio_set_pages_dirty(struct bio *bio) set_page_dirty_lock(bvec->bv_page); } } -EXPORT_SYMBOL_GPL(bio_set_pages_dirty); static void bio_release_pages(struct bio *bio) { @@ -1662,17 +1660,33 @@ defer: spin_unlock_irqrestore(&bio_dirty_lock, flags); schedule_work(&bio_dirty_work); } -EXPORT_SYMBOL_GPL(bio_check_pages_dirty); + +void update_io_ticks(struct hd_struct *part, unsigned long now) +{ + unsigned long stamp; +again: + stamp = READ_ONCE(part->stamp); + if (unlikely(stamp != now)) { + if (likely(cmpxchg(&part->stamp, stamp, now) == stamp)) { + __part_stat_add(part, io_ticks, 1); + } + } + if (part->partno) { + part = &part_to_disk(part)->part0; + goto again; + } +} void generic_start_io_acct(struct request_queue *q, int op, unsigned long sectors, struct hd_struct *part) { const int sgrp = op_stat_group(op); - int cpu = part_stat_lock(); - part_round_stats(q, cpu, part); - part_stat_inc(cpu, part, ios[sgrp]); - part_stat_add(cpu, part, sectors[sgrp], sectors); + part_stat_lock(); + + update_io_ticks(part, jiffies); + part_stat_inc(part, ios[sgrp]); + part_stat_add(part, sectors[sgrp], sectors); part_inc_in_flight(q, part, op_is_write(op)); 
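/*
 * (Aside on the update_io_ticks() helper added above, sketched from its
 * body.) It replaces the old part_round_stats() sampling: each accounting
 * call publishes the current jiffy in part->stamp via cmpxchg(), and only
 * the CPU whose cmpxchg() wins adds one tick to io_ticks; losers simply
 * skip the increment rather than retrying. The "goto again" is not a retry
 * loop either: after handling a partition it walks up to the whole-disk
 * part0 so that both levels get stamped.
 */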
part_stat_unlock(); @@ -1682,12 +1696,15 @@ EXPORT_SYMBOL(generic_start_io_acct); void generic_end_io_acct(struct request_queue *q, int req_op, struct hd_struct *part, unsigned long start_time) { - unsigned long duration = jiffies - start_time; + unsigned long now = jiffies; + unsigned long duration = now - start_time; const int sgrp = op_stat_group(req_op); - int cpu = part_stat_lock(); - part_stat_add(cpu, part, nsecs[sgrp], jiffies_to_nsecs(duration)); - part_round_stats(q, cpu, part); + part_stat_lock(); + + update_io_ticks(part, now); + part_stat_add(part, nsecs[sgrp], jiffies_to_nsecs(duration)); + part_stat_add(part, time_in_queue, duration); part_dec_in_flight(q, part, op_is_write(req_op)); part_stat_unlock(); @@ -1957,102 +1974,133 @@ EXPORT_SYMBOL(bioset_init_from_src); #ifdef CONFIG_BLK_CGROUP -#ifdef CONFIG_MEMCG /** - * bio_associate_blkcg_from_page - associate a bio with the page's blkcg + * bio_disassociate_blkg - puts back the blkg reference if associated * @bio: target bio - * @page: the page to lookup the blkcg from * - * Associate @bio with the blkcg from @page's owning memcg. This works like - * every other associate function wrt references. + * Helper to disassociate the blkg from @bio if a blkg is associated. */ -int bio_associate_blkcg_from_page(struct bio *bio, struct page *page) +void bio_disassociate_blkg(struct bio *bio) { - struct cgroup_subsys_state *blkcg_css; - - if (unlikely(bio->bi_css)) - return -EBUSY; - if (!page->mem_cgroup) - return 0; - blkcg_css = cgroup_get_e_css(page->mem_cgroup->css.cgroup, - &io_cgrp_subsys); - bio->bi_css = blkcg_css; - return 0; + if (bio->bi_blkg) { + blkg_put(bio->bi_blkg); + bio->bi_blkg = NULL; + } } -#endif /* CONFIG_MEMCG */ +EXPORT_SYMBOL_GPL(bio_disassociate_blkg); /** - * bio_associate_blkcg - associate a bio with the specified blkcg + * __bio_associate_blkg - associate a bio with the a blkg * @bio: target bio - * @blkcg_css: css of the blkcg to associate + * @blkg: the blkg to associate * - * Associate @bio with the blkcg specified by @blkcg_css. Block layer will - * treat @bio as if it were issued by a task which belongs to the blkcg. + * This tries to associate @bio with the specified @blkg. Association failure + * is handled by walking up the blkg tree. Therefore, the blkg associated can + * be anything between @blkg and the root_blkg. This situation only happens + * when a cgroup is dying and then the remaining bios will spill to the closest + * alive blkg. * - * This function takes an extra reference of @blkcg_css which will be put - * when @bio is released. The caller must own @bio and is responsible for - * synchronizing calls to this function. + * A reference will be taken on the @blkg and will be released when @bio is + * freed. */ -int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css) +static void __bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg) { - if (unlikely(bio->bi_css)) - return -EBUSY; - css_get(blkcg_css); - bio->bi_css = blkcg_css; - return 0; + bio_disassociate_blkg(bio); + + bio->bi_blkg = blkg_tryget_closest(blkg); } -EXPORT_SYMBOL_GPL(bio_associate_blkcg); /** - * bio_associate_blkg - associate a bio with the specified blkg + * bio_associate_blkg_from_css - associate a bio with a specified css * @bio: target bio - * @blkg: the blkg to associate + * @css: target css * - * Associate @bio with the blkg specified by @blkg. 
This is the queue specific - * blkcg information associated with the @bio, a reference will be taken on the - * @blkg and will be freed when the bio is freed. + * Associate @bio with the blkg found by combining the css's blkg and the + * request_queue of the @bio. This falls back to the queue's root_blkg if + * the association fails with the css. */ -int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg) +void bio_associate_blkg_from_css(struct bio *bio, + struct cgroup_subsys_state *css) { - if (unlikely(bio->bi_blkg)) - return -EBUSY; - if (!blkg_try_get(blkg)) - return -ENODEV; - bio->bi_blkg = blkg; - return 0; + struct request_queue *q = bio->bi_disk->queue; + struct blkcg_gq *blkg; + + rcu_read_lock(); + + if (!css || !css->parent) + blkg = q->root_blkg; + else + blkg = blkg_lookup_create(css_to_blkcg(css), q); + + __bio_associate_blkg(bio, blkg); + + rcu_read_unlock(); } +EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css); +#ifdef CONFIG_MEMCG /** - * bio_disassociate_task - undo bio_associate_current() + * bio_associate_blkg_from_page - associate a bio with the page's blkg * @bio: target bio + * @page: the page to lookup the blkcg from + * + * Associate @bio with the blkg from @page's owning memcg and the respective + * request_queue. If cgroup_e_css returns %NULL, fall back to the queue's + * root_blkg. */ -void bio_disassociate_task(struct bio *bio) +void bio_associate_blkg_from_page(struct bio *bio, struct page *page) { - if (bio->bi_ioc) { - put_io_context(bio->bi_ioc); - bio->bi_ioc = NULL; - } - if (bio->bi_css) { - css_put(bio->bi_css); - bio->bi_css = NULL; - } - if (bio->bi_blkg) { - blkg_put(bio->bi_blkg); - bio->bi_blkg = NULL; - } + struct cgroup_subsys_state *css; + + if (!page->mem_cgroup) + return; + + rcu_read_lock(); + + css = cgroup_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys); + bio_associate_blkg_from_css(bio, css); + + rcu_read_unlock(); +} +#endif /* CONFIG_MEMCG */ + +/** + * bio_associate_blkg - associate a bio with a blkg + * @bio: target bio + * + * Associate @bio with the blkg found from the bio's css and request_queue. + * If one is not found, bio_lookup_blkg() creates the blkg. If a blkg is + * already associated, the css is reused and association redone as the + * request_queue may have changed. 
+ */ +void bio_associate_blkg(struct bio *bio) +{ + struct cgroup_subsys_state *css; + + rcu_read_lock(); + + if (bio->bi_blkg) + css = &bio_blkcg(bio)->css; + else + css = blkcg_css(); + + bio_associate_blkg_from_css(bio, css); + + rcu_read_unlock(); } +EXPORT_SYMBOL_GPL(bio_associate_blkg); /** - * bio_clone_blkcg_association - clone blkcg association from src to dst bio + * bio_clone_blkg_association - clone blkg association from src to dst bio * @dst: destination bio * @src: source bio */ -void bio_clone_blkcg_association(struct bio *dst, struct bio *src) +void bio_clone_blkg_association(struct bio *dst, struct bio *src) { - if (src->bi_css) - WARN_ON(bio_associate_blkcg(dst, src->bi_css)); + if (src->bi_blkg) + __bio_associate_blkg(dst, src->bi_blkg); } -EXPORT_SYMBOL_GPL(bio_clone_blkcg_association); +EXPORT_SYMBOL_GPL(bio_clone_blkg_association); #endif /* CONFIG_BLK_CGROUP */ static void __init biovec_init_slabs(void) diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index c630e02836a8..c8cc1cbb6370 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -76,14 +76,42 @@ static void blkg_free(struct blkcg_gq *blkg) if (blkg->pd[i]) blkcg_policy[i]->pd_free_fn(blkg->pd[i]); - if (blkg->blkcg != &blkcg_root) - blk_exit_rl(blkg->q, &blkg->rl); - blkg_rwstat_exit(&blkg->stat_ios); blkg_rwstat_exit(&blkg->stat_bytes); kfree(blkg); } +static void __blkg_release(struct rcu_head *rcu) +{ + struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head); + + percpu_ref_exit(&blkg->refcnt); + + /* release the blkcg and parent blkg refs this blkg has been holding */ + css_put(&blkg->blkcg->css); + if (blkg->parent) + blkg_put(blkg->parent); + + wb_congested_put(blkg->wb_congested); + + blkg_free(blkg); +} + +/* + * A group is RCU protected, but having an rcu lock does not mean that one + * can access all the fields of blkg and assume these are valid. For + * example, don't try to follow throtl_data and request queue links. + * + * Having a reference to blkg under an rcu allows accesses to only values + * local to groups like group stats and group rate limits. 
+ */ +static void blkg_release(struct percpu_ref *ref) +{ + struct blkcg_gq *blkg = container_of(ref, struct blkcg_gq, refcnt); + + call_rcu(&blkg->rcu_head, __blkg_release); +} + /** * blkg_alloc - allocate a blkg * @blkcg: block cgroup the new blkg is associated with @@ -110,14 +138,6 @@ static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct request_queue *q, blkg->q = q; INIT_LIST_HEAD(&blkg->q_node); blkg->blkcg = blkcg; - atomic_set(&blkg->refcnt, 1); - - /* root blkg uses @q->root_rl, init rl only for !root blkgs */ - if (blkcg != &blkcg_root) { - if (blk_init_rl(&blkg->rl, q, gfp_mask)) - goto err_free; - blkg->rl.blkg = blkg; - } for (i = 0; i < BLKCG_MAX_POLS; i++) { struct blkcg_policy *pol = blkcg_policy[i]; @@ -157,7 +177,7 @@ struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg, blkg = radix_tree_lookup(&blkcg->blkg_tree, q->id); if (blkg && blkg->q == q) { if (update_hint) { - lockdep_assert_held(q->queue_lock); + lockdep_assert_held(&q->queue_lock); rcu_assign_pointer(blkcg->blkg_hint, blkg); } return blkg; @@ -180,7 +200,13 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, int i, ret; WARN_ON_ONCE(!rcu_read_lock_held()); - lockdep_assert_held(q->queue_lock); + lockdep_assert_held(&q->queue_lock); + + /* request_queue is dying, do not create/recreate a blkg */ + if (blk_queue_dying(q)) { + ret = -ENODEV; + goto err_free_blkg; + } /* blkg holds a reference to blkcg */ if (!css_tryget_online(&blkcg->css)) { @@ -217,6 +243,11 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, blkg_get(blkg->parent); } + ret = percpu_ref_init(&blkg->refcnt, blkg_release, 0, + GFP_NOWAIT | __GFP_NOWARN); + if (ret) + goto err_cancel_ref; + /* invoke per-policy init */ for (i = 0; i < BLKCG_MAX_POLS; i++) { struct blkcg_policy *pol = blkcg_policy[i]; @@ -249,6 +280,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, blkg_put(blkg); return ERR_PTR(ret); +err_cancel_ref: + percpu_ref_exit(&blkg->refcnt); err_put_congested: wb_congested_put(wb_congested); err_put_css: @@ -259,7 +292,7 @@ err_free_blkg: } /** - * blkg_lookup_create - lookup blkg, try to create one if not there + * __blkg_lookup_create - lookup blkg, try to create one if not there * @blkcg: blkcg of interest * @q: request_queue of interest * @@ -268,24 +301,16 @@ err_free_blkg: * that all non-root blkg's have access to the parent blkg. This function * should be called under RCU read lock and @q->queue_lock. * - * Returns pointer to the looked up or created blkg on success, ERR_PTR() - * value on error. If @q is dead, returns ERR_PTR(-EINVAL). If @q is not - * dead and bypassing, returns ERR_PTR(-EBUSY). + * Returns the blkg or the closest blkg if blkg_create() fails as it walks + * down from root. */ -struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg, - struct request_queue *q) +struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg, + struct request_queue *q) { struct blkcg_gq *blkg; WARN_ON_ONCE(!rcu_read_lock_held()); - lockdep_assert_held(q->queue_lock); - - /* - * This could be the first entry point of blkcg implementation and - * we shouldn't allow anything to go through for a bypassing queue. - */ - if (unlikely(blk_queue_bypass(q))) - return ERR_PTR(blk_queue_dying(q) ? -ENODEV : -EBUSY); + lockdep_assert_held(&q->queue_lock); blkg = __blkg_lookup(blkcg, q, true); if (blkg) @@ -293,30 +318,64 @@ struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg, /* * Create blkgs walking down from blkcg_root to @blkcg, so that all - * non-root blkgs have access to their parents. 
+ * non-root blkgs have access to their parents. Returns the closest + * blkg to the intended blkg should blkg_create() fail. */ while (true) { struct blkcg *pos = blkcg; struct blkcg *parent = blkcg_parent(blkcg); - - while (parent && !__blkg_lookup(parent, q, false)) { + struct blkcg_gq *ret_blkg = q->root_blkg; + + while (parent) { + blkg = __blkg_lookup(parent, q, false); + if (blkg) { + /* remember closest blkg */ + ret_blkg = blkg; + break; + } pos = parent; parent = blkcg_parent(parent); } blkg = blkg_create(pos, q, NULL); - if (pos == blkcg || IS_ERR(blkg)) + if (IS_ERR(blkg)) + return ret_blkg; + if (pos == blkcg) return blkg; } } +/** + * blkg_lookup_create - find or create a blkg + * @blkcg: target block cgroup + * @q: target request_queue + * + * This looks up or creates the blkg representing the unique pair + * of the blkcg and the request_queue. + */ +struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg, + struct request_queue *q) +{ + struct blkcg_gq *blkg = blkg_lookup(blkcg, q); + + if (unlikely(!blkg)) { + unsigned long flags; + + spin_lock_irqsave(&q->queue_lock, flags); + blkg = __blkg_lookup_create(blkcg, q); + spin_unlock_irqrestore(&q->queue_lock, flags); + } + + return blkg; +} + static void blkg_destroy(struct blkcg_gq *blkg) { struct blkcg *blkcg = blkg->blkcg; struct blkcg_gq *parent = blkg->parent; int i; - lockdep_assert_held(blkg->q->queue_lock); + lockdep_assert_held(&blkg->q->queue_lock); lockdep_assert_held(&blkcg->lock); /* Something wrong if we are trying to remove same group twice */ @@ -353,7 +412,7 @@ static void blkg_destroy(struct blkcg_gq *blkg) * Put the reference taken at the time of creation so that when all * queues are gone, group can be destroyed. */ - blkg_put(blkg); + percpu_ref_kill(&blkg->refcnt); } /** @@ -366,8 +425,7 @@ static void blkg_destroy_all(struct request_queue *q) { struct blkcg_gq *blkg, *n; - lockdep_assert_held(q->queue_lock); - + spin_lock_irq(&q->queue_lock); list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) { struct blkcg *blkcg = blkg->blkcg; @@ -377,7 +435,7 @@ static void blkg_destroy_all(struct request_queue *q) } q->root_blkg = NULL; - q->root_rl.blkg = NULL; + spin_unlock_irq(&q->queue_lock); } /* @@ -403,41 +461,6 @@ void __blkg_release_rcu(struct rcu_head *rcu_head) } EXPORT_SYMBOL_GPL(__blkg_release_rcu); -/* - * The next function used by blk_queue_for_each_rl(). It's a bit tricky - * because the root blkg uses @q->root_rl instead of its own rl. - */ -struct request_list *__blk_queue_next_rl(struct request_list *rl, - struct request_queue *q) -{ - struct list_head *ent; - struct blkcg_gq *blkg; - - /* - * Determine the current blkg list_head. The first entry is - * root_rl which is off @q->blkg_list and mapped to the head. 
- */ - if (rl == &q->root_rl) { - ent = &q->blkg_list; - /* There are no more block groups, hence no request lists */ - if (list_empty(ent)) - return NULL; - } else { - blkg = container_of(rl, struct blkcg_gq, rl); - ent = &blkg->q_node; - } - - /* walk to the next list_head, skip root blkcg */ - ent = ent->next; - if (ent == &q->root_blkg->q_node) - ent = ent->next; - if (ent == &q->blkg_list) - return NULL; - - blkg = container_of(ent, struct blkcg_gq, q_node); - return &blkg->rl; -} - static int blkcg_reset_stats(struct cgroup_subsys_state *css, struct cftype *cftype, u64 val) { @@ -477,7 +500,6 @@ const char *blkg_dev_name(struct blkcg_gq *blkg) return dev_name(blkg->q->backing_dev_info->dev); return NULL; } -EXPORT_SYMBOL_GPL(blkg_dev_name); /** * blkcg_print_blkgs - helper for printing per-blkg data @@ -508,10 +530,10 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg, rcu_read_lock(); hlist_for_each_entry_rcu(blkg, &blkcg->blkg_list, blkcg_node) { - spin_lock_irq(blkg->q->queue_lock); + spin_lock_irq(&blkg->q->queue_lock); if (blkcg_policy_enabled(blkg->q, pol)) total += prfill(sf, blkg->pd[pol->plid], data); - spin_unlock_irq(blkg->q->queue_lock); + spin_unlock_irq(&blkg->q->queue_lock); } rcu_read_unlock(); @@ -709,7 +731,7 @@ u64 blkg_stat_recursive_sum(struct blkcg_gq *blkg, struct cgroup_subsys_state *pos_css; u64 sum = 0; - lockdep_assert_held(blkg->q->queue_lock); + lockdep_assert_held(&blkg->q->queue_lock); rcu_read_lock(); blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) { @@ -752,7 +774,7 @@ struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkg_rwstat sum = { }; int i; - lockdep_assert_held(blkg->q->queue_lock); + lockdep_assert_held(&blkg->q->queue_lock); rcu_read_lock(); blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) { @@ -783,18 +805,10 @@ static struct blkcg_gq *blkg_lookup_check(struct blkcg *blkcg, struct request_queue *q) { WARN_ON_ONCE(!rcu_read_lock_held()); - lockdep_assert_held(q->queue_lock); + lockdep_assert_held(&q->queue_lock); if (!blkcg_policy_enabled(q, pol)) return ERR_PTR(-EOPNOTSUPP); - - /* - * This could be the first entry point of blkcg implementation and - * we shouldn't allow anything to go through for a bypassing queue. - */ - if (unlikely(blk_queue_bypass(q))) - return ERR_PTR(blk_queue_dying(q) ? -ENODEV : -EBUSY); - return __blkg_lookup(blkcg, q, true /* update_hint */); } @@ -812,7 +826,7 @@ static struct blkcg_gq *blkg_lookup_check(struct blkcg *blkcg, */ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol, char *input, struct blkg_conf_ctx *ctx) - __acquires(rcu) __acquires(disk->queue->queue_lock) + __acquires(rcu) __acquires(&disk->queue->queue_lock) { struct gendisk *disk; struct request_queue *q; @@ -840,7 +854,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol, q = disk->queue; rcu_read_lock(); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); blkg = blkg_lookup_check(blkcg, pol, q); if (IS_ERR(blkg)) { @@ -867,7 +881,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol, } /* Drop locks to do new blkg allocation with GFP_KERNEL. 
*/ - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); rcu_read_unlock(); new_blkg = blkg_alloc(pos, q, GFP_KERNEL); @@ -877,7 +891,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol, } rcu_read_lock(); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); blkg = blkg_lookup_check(pos, pol, q); if (IS_ERR(blkg)) { @@ -905,7 +919,7 @@ success: return 0; fail_unlock: - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); rcu_read_unlock(); fail: put_disk_and_module(disk); @@ -921,7 +935,6 @@ fail: } return ret; } -EXPORT_SYMBOL_GPL(blkg_conf_prep); /** * blkg_conf_finish - finish up per-blkg config update @@ -931,13 +944,12 @@ EXPORT_SYMBOL_GPL(blkg_conf_prep); * with blkg_conf_prep(). */ void blkg_conf_finish(struct blkg_conf_ctx *ctx) - __releases(ctx->disk->queue->queue_lock) __releases(rcu) + __releases(&ctx->disk->queue->queue_lock) __releases(rcu) { - spin_unlock_irq(ctx->disk->queue->queue_lock); + spin_unlock_irq(&ctx->disk->queue->queue_lock); rcu_read_unlock(); put_disk_and_module(ctx->disk); } -EXPORT_SYMBOL_GPL(blkg_conf_finish); static int blkcg_print_stat(struct seq_file *sf, void *v) { @@ -967,7 +979,7 @@ static int blkcg_print_stat(struct seq_file *sf, void *v) */ off += scnprintf(buf+off, size-off, "%s ", dname); - spin_lock_irq(blkg->q->queue_lock); + spin_lock_irq(&blkg->q->queue_lock); rwstat = blkg_rwstat_recursive_sum(blkg, NULL, offsetof(struct blkcg_gq, stat_bytes)); @@ -981,7 +993,7 @@ static int blkcg_print_stat(struct seq_file *sf, void *v) wios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_WRITE]); dios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_DISCARD]); - spin_unlock_irq(blkg->q->queue_lock); + spin_unlock_irq(&blkg->q->queue_lock); if (rbytes || wbytes || rios || wios) { has_stats = true; @@ -1102,9 +1114,9 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg) struct blkcg_gq, blkcg_node); struct request_queue *q = blkg->q; - if (spin_trylock(q->queue_lock)) { + if (spin_trylock(&q->queue_lock)) { blkg_destroy(blkg); - spin_unlock(q->queue_lock); + spin_unlock(&q->queue_lock); } else { spin_unlock_irq(&blkcg->lock); cpu_relax(); @@ -1225,36 +1237,31 @@ int blkcg_init_queue(struct request_queue *q) /* Make sure the root blkg exists. 
*/ rcu_read_lock(); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); blkg = blkg_create(&blkcg_root, q, new_blkg); if (IS_ERR(blkg)) goto err_unlock; q->root_blkg = blkg; - q->root_rl.blkg = blkg; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); rcu_read_unlock(); if (preloaded) radix_tree_preload_end(); ret = blk_iolatency_init(q); - if (ret) { - spin_lock_irq(q->queue_lock); - blkg_destroy_all(q); - spin_unlock_irq(q->queue_lock); - return ret; - } + if (ret) + goto err_destroy_all; ret = blk_throtl_init(q); - if (ret) { - spin_lock_irq(q->queue_lock); - blkg_destroy_all(q); - spin_unlock_irq(q->queue_lock); - } - return ret; + if (ret) + goto err_destroy_all; + return 0; +err_destroy_all: + blkg_destroy_all(q); + return ret; err_unlock: - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); rcu_read_unlock(); if (preloaded) radix_tree_preload_end(); @@ -1269,7 +1276,7 @@ err_unlock: */ void blkcg_drain_queue(struct request_queue *q) { - lockdep_assert_held(q->queue_lock); + lockdep_assert_held(&q->queue_lock); /* * @q could be exiting and already have destroyed all blkgs as @@ -1289,10 +1296,7 @@ void blkcg_drain_queue(struct request_queue *q) */ void blkcg_exit_queue(struct request_queue *q) { - spin_lock_irq(q->queue_lock); blkg_destroy_all(q); - spin_unlock_irq(q->queue_lock); - blk_throtl_exit(q); } @@ -1396,10 +1400,8 @@ int blkcg_activate_policy(struct request_queue *q, if (blkcg_policy_enabled(q, pol)) return 0; - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_freeze_queue(q); - else - blk_queue_bypass_start(q); pd_prealloc: if (!pd_prealloc) { pd_prealloc = pol->pd_alloc_fn(GFP_KERNEL, q->node); @@ -1409,7 +1411,7 @@ pd_prealloc: } } - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); list_for_each_entry(blkg, &q->blkg_list, q_node) { struct blkg_policy_data *pd; @@ -1421,7 +1423,7 @@ pd_prealloc: if (!pd) swap(pd, pd_prealloc); if (!pd) { - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); goto pd_prealloc; } @@ -1435,12 +1437,10 @@ pd_prealloc: __set_bit(pol->plid, q->blkcg_pols); ret = 0; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); out_bypass_end: - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_unfreeze_queue(q); - else - blk_queue_bypass_end(q); if (pd_prealloc) pol->pd_free_fn(pd_prealloc); return ret; @@ -1463,12 +1463,10 @@ void blkcg_deactivate_policy(struct request_queue *q, if (!blkcg_policy_enabled(q, pol)) return; - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_freeze_queue(q); - else - blk_queue_bypass_start(q); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); __clear_bit(pol->plid, q->blkcg_pols); @@ -1481,12 +1479,10 @@ void blkcg_deactivate_policy(struct request_queue *q, } } - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_unfreeze_queue(q); - else - blk_queue_bypass_end(q); } EXPORT_SYMBOL_GPL(blkcg_deactivate_policy); @@ -1748,8 +1744,7 @@ void blkcg_maybe_throttle_current(void) blkg = blkg_lookup(blkcg, q); if (!blkg) goto out; - blkg = blkg_try_get(blkg); - if (!blkg) + if (!blkg_tryget(blkg)) goto out; rcu_read_unlock(); @@ -1761,7 +1756,6 @@ out: rcu_read_unlock(); blk_put_queue(q); } -EXPORT_SYMBOL_GPL(blkcg_maybe_throttle_current); /** * blkcg_schedule_throttle - this task needs to check for throttling @@ -1795,7 +1789,6 @@ void blkcg_schedule_throttle(struct request_queue *q, bool use_memdelay) current->use_memdelay = use_memdelay; set_notify_resume(current); } 
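The refcounting change running through this file swaps the blkg's atomic_t for a percpu_ref, so gets and puts stay per-cpu until teardown. A condensed sketch of the lifecycle, mirroring the blkg_create(), blkg_destroy() and blkg_release() hunks above (the example_* wrappers are illustrative only):

static int example_create(struct blkcg_gq *blkg)
{
	/* starts live, in fast percpu mode, holding one reference */
	return percpu_ref_init(&blkg->refcnt, blkg_release, 0,
			       GFP_NOWAIT | __GFP_NOWARN);
}

static void example_destroy(struct blkcg_gq *blkg)
{
	/*
	 * Switches the ref to atomic mode and drops the initial
	 * reference; blkg_release() fires once the last blkg_put()
	 * lands, and the final free is deferred through call_rcu().
	 */
	percpu_ref_kill(&blkg->refcnt);
}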
-EXPORT_SYMBOL_GPL(blkcg_schedule_throttle); /** * blkcg_add_delay - add delay to this blkg @@ -1810,7 +1803,6 @@ void blkcg_add_delay(struct blkcg_gq *blkg, u64 now, u64 delta) blkcg_scale_delay(blkg, now); atomic64_add(delta, &blkg->delay_nsec); } -EXPORT_SYMBOL_GPL(blkcg_add_delay); module_param(blkcg_debug_stats, bool, 0644); MODULE_PARM_DESC(blkcg_debug_stats, "True if you want debug stats, false if not"); diff --git a/block/blk-core.c b/block/blk-core.c index deb56932f8c4..c78042975737 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -57,11 +57,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(block_unplug); DEFINE_IDA(blk_queue_ida); -/* - * For the allocated request tables - */ -struct kmem_cache *request_cachep; - /* * For queue allocation */ @@ -79,11 +74,7 @@ static struct workqueue_struct *kblockd_workqueue; */ void blk_queue_flag_set(unsigned int flag, struct request_queue *q) { - unsigned long flags; - - spin_lock_irqsave(q->queue_lock, flags); - queue_flag_set(flag, q); - spin_unlock_irqrestore(q->queue_lock, flags); + set_bit(flag, &q->queue_flags); } EXPORT_SYMBOL(blk_queue_flag_set); @@ -94,11 +85,7 @@ EXPORT_SYMBOL(blk_queue_flag_set); */ void blk_queue_flag_clear(unsigned int flag, struct request_queue *q) { - unsigned long flags; - - spin_lock_irqsave(q->queue_lock, flags); - queue_flag_clear(flag, q); - spin_unlock_irqrestore(q->queue_lock, flags); + clear_bit(flag, &q->queue_flags); } EXPORT_SYMBOL(blk_queue_flag_clear); @@ -112,85 +99,15 @@ EXPORT_SYMBOL(blk_queue_flag_clear); */ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q) { - unsigned long flags; - bool res; - - spin_lock_irqsave(q->queue_lock, flags); - res = queue_flag_test_and_set(flag, q); - spin_unlock_irqrestore(q->queue_lock, flags); - - return res; + return test_and_set_bit(flag, &q->queue_flags); } EXPORT_SYMBOL_GPL(blk_queue_flag_test_and_set); -/** - * blk_queue_flag_test_and_clear - atomically test and clear a queue flag - * @flag: flag to be cleared - * @q: request queue - * - * Returns the previous value of @flag - 0 if the flag was not set and 1 if - * the flag was set. - */ -bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q) -{ - unsigned long flags; - bool res; - - spin_lock_irqsave(q->queue_lock, flags); - res = queue_flag_test_and_clear(flag, q); - spin_unlock_irqrestore(q->queue_lock, flags); - - return res; -} -EXPORT_SYMBOL_GPL(blk_queue_flag_test_and_clear); - -static void blk_clear_congested(struct request_list *rl, int sync) -{ -#ifdef CONFIG_CGROUP_WRITEBACK - clear_wb_congested(rl->blkg->wb_congested, sync); -#else - /* - * If !CGROUP_WRITEBACK, all blkg's map to bdi->wb and we shouldn't - * flip its congestion state for events on other blkcgs. 
- */ - if (rl == &rl->q->root_rl) - clear_wb_congested(rl->q->backing_dev_info->wb.congested, sync); -#endif -} - -static void blk_set_congested(struct request_list *rl, int sync) -{ -#ifdef CONFIG_CGROUP_WRITEBACK - set_wb_congested(rl->blkg->wb_congested, sync); -#else - /* see blk_clear_congested() */ - if (rl == &rl->q->root_rl) - set_wb_congested(rl->q->backing_dev_info->wb.congested, sync); -#endif -} - -void blk_queue_congestion_threshold(struct request_queue *q) -{ - int nr; - - nr = q->nr_requests - (q->nr_requests / 8) + 1; - if (nr > q->nr_requests) - nr = q->nr_requests; - q->nr_congestion_on = nr; - - nr = q->nr_requests - (q->nr_requests / 8) - (q->nr_requests / 16) - 1; - if (nr < 1) - nr = 1; - q->nr_congestion_off = nr; -} - void blk_rq_init(struct request_queue *q, struct request *rq) { memset(rq, 0, sizeof(*rq)); INIT_LIST_HEAD(&rq->queuelist); - INIT_LIST_HEAD(&rq->timeout_list); - rq->cpu = -1; rq->q = q; rq->__sector = (sector_t) -1; INIT_HLIST_NODE(&rq->hash); @@ -256,10 +173,11 @@ static void print_req_error(struct request *req, blk_status_t status) if (WARN_ON_ONCE(idx >= ARRAY_SIZE(blk_errors))) return; - printk_ratelimited(KERN_ERR "%s: %s error, dev %s, sector %llu\n", - __func__, blk_errors[idx].name, req->rq_disk ? - req->rq_disk->disk_name : "?", - (unsigned long long)blk_rq_pos(req)); + printk_ratelimited(KERN_ERR "%s: %s error, dev %s, sector %llu flags %x\n", + __func__, blk_errors[idx].name, + req->rq_disk ? req->rq_disk->disk_name : "?", + (unsigned long long)blk_rq_pos(req), + req->cmd_flags); } static void req_bio_endio(struct request *rq, struct bio *bio, @@ -292,99 +210,6 @@ void blk_dump_rq_flags(struct request *rq, char *msg) } EXPORT_SYMBOL(blk_dump_rq_flags); -static void blk_delay_work(struct work_struct *work) -{ - struct request_queue *q; - - q = container_of(work, struct request_queue, delay_work.work); - spin_lock_irq(q->queue_lock); - __blk_run_queue(q); - spin_unlock_irq(q->queue_lock); -} - -/** - * blk_delay_queue - restart queueing after defined interval - * @q: The &struct request_queue in question - * @msecs: Delay in msecs - * - * Description: - * Sometimes queueing needs to be postponed for a little while, to allow - * resources to come back. This function will make sure that queueing is - * restarted around the specified time. - */ -void blk_delay_queue(struct request_queue *q, unsigned long msecs) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - if (likely(!blk_queue_dead(q))) - queue_delayed_work(kblockd_workqueue, &q->delay_work, - msecs_to_jiffies(msecs)); -} -EXPORT_SYMBOL(blk_delay_queue); - -/** - * blk_start_queue_async - asynchronously restart a previously stopped queue - * @q: The &struct request_queue in question - * - * Description: - * blk_start_queue_async() will clear the stop flag on the queue, and - * ensure that the request_fn for the queue is run from an async - * context. - **/ -void blk_start_queue_async(struct request_queue *q) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - queue_flag_clear(QUEUE_FLAG_STOPPED, q); - blk_run_queue_async(q); -} -EXPORT_SYMBOL(blk_start_queue_async); - -/** - * blk_start_queue - restart a previously stopped queue - * @q: The &struct request_queue in question - * - * Description: - * blk_start_queue() will clear the stop flag on the queue, and call - * the request_fn for the queue if it was in a stopped state when - * entered. Also see blk_stop_queue(). 
- **/ -void blk_start_queue(struct request_queue *q) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - queue_flag_clear(QUEUE_FLAG_STOPPED, q); - __blk_run_queue(q); -} -EXPORT_SYMBOL(blk_start_queue); - -/** - * blk_stop_queue - stop a queue - * @q: The &struct request_queue in question - * - * Description: - * The Linux block layer assumes that a block driver will consume all - * entries on the request queue when the request_fn strategy is called. - * Often this will not happen, because of hardware limitations (queue - * depth settings). If a device driver gets a 'queue full' response, - * or if it simply chooses not to queue more I/O at one point, it can - * call this function to prevent the request_fn from being called until - * the driver has signalled it's ready to go again. This happens by calling - * blk_start_queue() to restart queue operations. - **/ -void blk_stop_queue(struct request_queue *q) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - cancel_delayed_work(&q->delay_work); - queue_flag_set(QUEUE_FLAG_STOPPED, q); -} -EXPORT_SYMBOL(blk_stop_queue); - /** * blk_sync_queue - cancel any pending callbacks on a queue * @q: the queue @@ -408,15 +233,13 @@ void blk_sync_queue(struct request_queue *q) del_timer_sync(&q->timeout); cancel_work_sync(&q->timeout_work); - if (q->mq_ops) { + if (queue_is_mq(q)) { struct blk_mq_hw_ctx *hctx; int i; cancel_delayed_work_sync(&q->requeue_work); queue_for_each_hw_ctx(q, hctx, i) cancel_delayed_work_sync(&hctx->run_work); - } else { - cancel_delayed_work_sync(&q->delay_work); } } EXPORT_SYMBOL(blk_sync_queue); @@ -442,250 +265,12 @@ void blk_clear_pm_only(struct request_queue *q) } EXPORT_SYMBOL_GPL(blk_clear_pm_only); -/** - * __blk_run_queue_uncond - run a queue whether or not it has been stopped - * @q: The queue to run - * - * Description: - * Invoke request handling on a queue if there are any pending requests. - * May be used to restart request handling after a request has completed. - * This variant runs the queue whether or not the queue has been - * stopped. Must be called with the queue lock held and interrupts - * disabled. See also @blk_run_queue. - */ -inline void __blk_run_queue_uncond(struct request_queue *q) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - if (unlikely(blk_queue_dead(q))) - return; - - /* - * Some request_fn implementations, e.g. scsi_request_fn(), unlock - * the queue lock internally. As a result multiple threads may be - * running such a request function concurrently. Keep track of the - * number of active request_fn invocations such that blk_drain_queue() - * can wait until all these request_fn calls have finished. - */ - q->request_fn_active++; - q->request_fn(q); - q->request_fn_active--; -} -EXPORT_SYMBOL_GPL(__blk_run_queue_uncond); - -/** - * __blk_run_queue - run a single device queue - * @q: The queue to run - * - * Description: - * See @blk_run_queue. - */ -void __blk_run_queue(struct request_queue *q) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - if (unlikely(blk_queue_stopped(q))) - return; - - __blk_run_queue_uncond(q); -} -EXPORT_SYMBOL(__blk_run_queue); - -/** - * blk_run_queue_async - run a single device queue in workqueue context - * @q: The queue to run - * - * Description: - * Tells kblockd to perform the equivalent of @blk_run_queue on behalf - * of us. 
- * - * Note: - * Since it is not allowed to run q->delay_work after blk_cleanup_queue() - * has canceled q->delay_work, callers must hold the queue lock to avoid - * race conditions between blk_cleanup_queue() and blk_run_queue_async(). - */ -void blk_run_queue_async(struct request_queue *q) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - if (likely(!blk_queue_stopped(q) && !blk_queue_dead(q))) - mod_delayed_work(kblockd_workqueue, &q->delay_work, 0); -} -EXPORT_SYMBOL(blk_run_queue_async); - -/** - * blk_run_queue - run a single device queue - * @q: The queue to run - * - * Description: - * Invoke request handling on this queue, if it has pending work to do. - * May be used to restart queueing when a request has completed. - */ -void blk_run_queue(struct request_queue *q) -{ - unsigned long flags; - - WARN_ON_ONCE(q->mq_ops); - - spin_lock_irqsave(q->queue_lock, flags); - __blk_run_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); -} -EXPORT_SYMBOL(blk_run_queue); - void blk_put_queue(struct request_queue *q) { kobject_put(&q->kobj); } EXPORT_SYMBOL(blk_put_queue); -/** - * __blk_drain_queue - drain requests from request_queue - * @q: queue to drain - * @drain_all: whether to drain all requests or only the ones w/ ELVPRIV - * - * Drain requests from @q. If @drain_all is set, all requests are drained. - * If not, only ELVPRIV requests are drained. The caller is responsible - * for ensuring that no new requests which need to be drained are queued. - */ -static void __blk_drain_queue(struct request_queue *q, bool drain_all) - __releases(q->queue_lock) - __acquires(q->queue_lock) -{ - int i; - - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - while (true) { - bool drain = false; - - /* - * The caller might be trying to drain @q before its - * elevator is initialized. - */ - if (q->elevator) - elv_drain_elevator(q); - - blkcg_drain_queue(q); - - /* - * This function might be called on a queue which failed - * driver init after queue creation or is not yet fully - * active yet. Some drivers (e.g. fd and loop) get unhappy - * in such cases. Kick queue iff dispatch queue has - * something on it and @q has request_fn set. - */ - if (!list_empty(&q->queue_head) && q->request_fn) - __blk_run_queue(q); - - drain |= q->nr_rqs_elvpriv; - drain |= q->request_fn_active; - - /* - * Unfortunately, requests are queued at and tracked from - * multiple places and there's no single counter which can - * be drained. Check all the queues and counters. - */ - if (drain_all) { - struct blk_flush_queue *fq = blk_get_flush_queue(q, NULL); - drain |= !list_empty(&q->queue_head); - for (i = 0; i < 2; i++) { - drain |= q->nr_rqs[i]; - drain |= q->in_flight[i]; - if (fq) - drain |= !list_empty(&fq->flush_queue[i]); - } - } - - if (!drain) - break; - - spin_unlock_irq(q->queue_lock); - - msleep(10); - - spin_lock_irq(q->queue_lock); - } - - /* - * With queue marked dead, any woken up waiter will fail the - * allocation path, so the wakeup chaining is lost and we're - * left with hung waiters. We need to wake up those waiters. 
- */
-	if (q->request_fn) {
-		struct request_list *rl;
-
-		blk_queue_for_each_rl(rl, q)
-			for (i = 0; i < ARRAY_SIZE(rl->wait); i++)
-				wake_up_all(&rl->wait[i]);
-	}
-}
-
-void blk_drain_queue(struct request_queue *q)
-{
-	spin_lock_irq(q->queue_lock);
-	__blk_drain_queue(q, true);
-	spin_unlock_irq(q->queue_lock);
-}
-
-/**
- * blk_queue_bypass_start - enter queue bypass mode
- * @q: queue of interest
- *
- * In bypass mode, only the dispatch FIFO queue of @q is used. This
- * function makes @q enter bypass mode and drains all requests which were
- * throttled or issued before. On return, it's guaranteed that no request
- * is being throttled or has ELVPRIV set and blk_queue_bypass() %true
- * inside queue or RCU read lock.
- */
-void blk_queue_bypass_start(struct request_queue *q)
-{
-	WARN_ON_ONCE(q->mq_ops);
-
-	spin_lock_irq(q->queue_lock);
-	q->bypass_depth++;
-	queue_flag_set(QUEUE_FLAG_BYPASS, q);
-	spin_unlock_irq(q->queue_lock);
-
-	/*
-	 * Queues start drained. Skip actual draining till init is
-	 * complete. This avoids lengthy delays during queue init which
-	 * can happen many times during boot.
-	 */
-	if (blk_queue_init_done(q)) {
-		spin_lock_irq(q->queue_lock);
-		__blk_drain_queue(q, false);
-		spin_unlock_irq(q->queue_lock);
-
-		/* ensure blk_queue_bypass() is %true inside RCU read lock */
-		synchronize_rcu();
-	}
-}
-EXPORT_SYMBOL_GPL(blk_queue_bypass_start);
-
-/**
- * blk_queue_bypass_end - leave queue bypass mode
- * @q: queue of interest
- *
- * Leave bypass mode and restore the normal queueing behavior.
- *
- * Note: although blk_queue_bypass_start() is only called for blk-sq queues,
- * this function is called for both blk-sq and blk-mq queues.
- */
-void blk_queue_bypass_end(struct request_queue *q)
-{
-	spin_lock_irq(q->queue_lock);
-	if (!--q->bypass_depth)
-		queue_flag_clear(QUEUE_FLAG_BYPASS, q);
-	WARN_ON_ONCE(q->bypass_depth < 0);
-	spin_unlock_irq(q->queue_lock);
-}
-EXPORT_SYMBOL_GPL(blk_queue_bypass_end);
-
 void blk_set_queue_dying(struct request_queue *q)
 {
 	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
@@ -697,20 +282,8 @@ void blk_set_queue_dying(struct request_queue *q)
 	 */
 	blk_freeze_queue_start(q);
 
-	if (q->mq_ops)
+	if (queue_is_mq(q))
 		blk_mq_wake_waiters(q);
-	else {
-		struct request_list *rl;
-
-		spin_lock_irq(q->queue_lock);
-		blk_queue_for_each_rl(rl, q) {
-			if (rl->rq_pool) {
-				wake_up_all(&rl->wait[BLK_RW_SYNC]);
-				wake_up_all(&rl->wait[BLK_RW_ASYNC]);
-			}
-		}
-		spin_unlock_irq(q->queue_lock);
-	}
 
 	/* Make blk_queue_enter() reexamine the DYING flag. */
 	wake_up_all(&q->mq_freeze_wq);
@@ -755,29 +328,13 @@ void blk_exit_queue(struct request_queue *q)
  */
 void blk_cleanup_queue(struct request_queue *q)
 {
-	spinlock_t *lock = q->queue_lock;
-
 	/* mark @q DYING, no new request or merges will be allowed afterwards */
 	mutex_lock(&q->sysfs_lock);
 	blk_set_queue_dying(q);
 
-	spin_lock_irq(lock);
-
-	/*
-	 * A dying queue is permanently in bypass mode till released. Note
-	 * that, unlike blk_queue_bypass_start(), we aren't performing
-	 * synchronize_rcu() after entering bypass mode to avoid the delay
-	 * as some drivers create and destroy a lot of queues while
-	 * probing. This is still safe because blk_release_queue() will be
-	 * called only after the queue refcnt drops to zero and nothing,
-	 * RCU or not, would be traversing the queue by then.
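The bypass helpers above keep a depth counter rather than a plain flag because bypass sections can nest; the flag may only clear when the outermost caller leaves. A standalone model of that bookkeeping (invented state, plain C, compiles and runs as an ordinary program):

/* Standalone model of the nesting semantics of the removed
 * blk_queue_bypass_start()/blk_queue_bypass_end().
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static int bypass_depth;
static bool bypass_flag;

static void bypass_start(void)
{
	bypass_depth++;
	bypass_flag = true;	/* set on every entry, like QUEUE_FLAG_BYPASS */
}

static void bypass_end(void)
{
	if (!--bypass_depth)
		bypass_flag = false;	/* clears only for the outermost caller */
	assert(bypass_depth >= 0);
}

int main(void)
{
	bypass_start();		/* outer section */
	bypass_start();		/* nested section */
	bypass_end();
	printf("inner end: depth=%d flag=%d\n", bypass_depth, bypass_flag);
	bypass_end();
	printf("outer end: depth=%d flag=%d\n", bypass_depth, bypass_flag);
	return 0;		/* prints depth=1 flag=1, then depth=0 flag=0 */
}

In the kernel the counter additionally started at 1, as seen further down in the queue allocation path, so a queue stayed bypassed until registration completed.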
- */ - q->bypass_depth++; - queue_flag_set(QUEUE_FLAG_BYPASS, q); - queue_flag_set(QUEUE_FLAG_NOMERGES, q); - queue_flag_set(QUEUE_FLAG_NOXMERGES, q); - queue_flag_set(QUEUE_FLAG_DYING, q); - spin_unlock_irq(lock); + blk_queue_flag_set(QUEUE_FLAG_NOMERGES, q); + blk_queue_flag_set(QUEUE_FLAG_NOXMERGES, q); + blk_queue_flag_set(QUEUE_FLAG_DYING, q); mutex_unlock(&q->sysfs_lock); /* @@ -788,9 +345,7 @@ void blk_cleanup_queue(struct request_queue *q) rq_qos_exit(q); - spin_lock_irq(lock); - queue_flag_set(QUEUE_FLAG_DEAD, q); - spin_unlock_irq(lock); + blk_queue_flag_set(QUEUE_FLAG_DEAD, q); /* * make sure all in-progress dispatch are completed because @@ -801,7 +356,7 @@ void blk_cleanup_queue(struct request_queue *q) * We rely on driver to deal with the race in case that queue * initialization isn't done. */ - if (q->mq_ops && blk_queue_init_done(q)) + if (queue_is_mq(q) && blk_queue_init_done(q)) blk_mq_quiesce_queue(q); /* for synchronous bio-based driver finish in-flight integrity i/o */ @@ -819,98 +374,19 @@ void blk_cleanup_queue(struct request_queue *q) blk_exit_queue(q); - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_free_queue(q); - percpu_ref_exit(&q->q_usage_counter); - spin_lock_irq(lock); - if (q->queue_lock != &q->__queue_lock) - q->queue_lock = &q->__queue_lock; - spin_unlock_irq(lock); + percpu_ref_exit(&q->q_usage_counter); /* @q is and will stay empty, shutdown and put */ blk_put_queue(q); } EXPORT_SYMBOL(blk_cleanup_queue); -/* Allocate memory local to the request queue */ -static void *alloc_request_simple(gfp_t gfp_mask, void *data) -{ - struct request_queue *q = data; - - return kmem_cache_alloc_node(request_cachep, gfp_mask, q->node); -} - -static void free_request_simple(void *element, void *data) -{ - kmem_cache_free(request_cachep, element); -} - -static void *alloc_request_size(gfp_t gfp_mask, void *data) -{ - struct request_queue *q = data; - struct request *rq; - - rq = kmalloc_node(sizeof(struct request) + q->cmd_size, gfp_mask, - q->node); - if (rq && q->init_rq_fn && q->init_rq_fn(q, rq, gfp_mask) < 0) { - kfree(rq); - rq = NULL; - } - return rq; -} - -static void free_request_size(void *element, void *data) -{ - struct request_queue *q = data; - - if (q->exit_rq_fn) - q->exit_rq_fn(q, element); - kfree(element); -} - -int blk_init_rl(struct request_list *rl, struct request_queue *q, - gfp_t gfp_mask) -{ - if (unlikely(rl->rq_pool) || q->mq_ops) - return 0; - - rl->q = q; - rl->count[BLK_RW_SYNC] = rl->count[BLK_RW_ASYNC] = 0; - rl->starved[BLK_RW_SYNC] = rl->starved[BLK_RW_ASYNC] = 0; - init_waitqueue_head(&rl->wait[BLK_RW_SYNC]); - init_waitqueue_head(&rl->wait[BLK_RW_ASYNC]); - - if (q->cmd_size) { - rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, - alloc_request_size, free_request_size, - q, gfp_mask, q->node); - } else { - rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, - alloc_request_simple, free_request_simple, - q, gfp_mask, q->node); - } - if (!rl->rq_pool) - return -ENOMEM; - - if (rl != &q->root_rl) - WARN_ON_ONCE(!blk_get_queue(q)); - - return 0; -} - -void blk_exit_rl(struct request_queue *q, struct request_list *rl) -{ - if (rl->rq_pool) { - mempool_destroy(rl->rq_pool); - if (rl != &q->root_rl) - blk_put_queue(q); - } -} - struct request_queue *blk_alloc_queue(gfp_t gfp_mask) { - return blk_alloc_queue_node(gfp_mask, NUMA_NO_NODE, NULL); + return blk_alloc_queue_node(gfp_mask, NUMA_NO_NODE); } EXPORT_SYMBOL(blk_alloc_queue); @@ -948,666 +424,143 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) return -EBUSY; /* - * 
read pair of barrier in blk_freeze_queue_start(), - * we need to order reading __PERCPU_REF_DEAD flag of - * .q_usage_counter and reading .mq_freeze_depth or - * queue dying flag, otherwise the following wait may - * never return if the two reads are reordered. - */ - smp_rmb(); - - wait_event(q->mq_freeze_wq, - (atomic_read(&q->mq_freeze_depth) == 0 && - (pm || (blk_pm_request_resume(q), - !blk_queue_pm_only(q)))) || - blk_queue_dying(q)); - if (blk_queue_dying(q)) - return -ENODEV; - } -} - -void blk_queue_exit(struct request_queue *q) -{ - percpu_ref_put(&q->q_usage_counter); -} - -static void blk_queue_usage_counter_release(struct percpu_ref *ref) -{ - struct request_queue *q = - container_of(ref, struct request_queue, q_usage_counter); - - wake_up_all(&q->mq_freeze_wq); -} - -static void blk_rq_timed_out_timer(struct timer_list *t) -{ - struct request_queue *q = from_timer(q, t, timeout); - - kblockd_schedule_work(&q->timeout_work); -} - -/** - * blk_alloc_queue_node - allocate a request queue - * @gfp_mask: memory allocation flags - * @node_id: NUMA node to allocate memory from - * @lock: For legacy queues, pointer to a spinlock that will be used to e.g. - * serialize calls to the legacy .request_fn() callback. Ignored for - * blk-mq request queues. - * - * Note: pass the queue lock as the third argument to this function instead of - * setting the queue lock pointer explicitly to avoid triggering a sporadic - * crash in the blkcg code. This function namely calls blkcg_init_queue() and - * the queue lock pointer must be set before blkcg_init_queue() is called. - */ -struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id, - spinlock_t *lock) -{ - struct request_queue *q; - int ret; - - q = kmem_cache_alloc_node(blk_requestq_cachep, - gfp_mask | __GFP_ZERO, node_id); - if (!q) - return NULL; - - INIT_LIST_HEAD(&q->queue_head); - q->last_merge = NULL; - q->end_sector = 0; - q->boundary_rq = NULL; - - q->id = ida_simple_get(&blk_queue_ida, 0, 0, gfp_mask); - if (q->id < 0) - goto fail_q; - - ret = bioset_init(&q->bio_split, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); - if (ret) - goto fail_id; - - q->backing_dev_info = bdi_alloc_node(gfp_mask, node_id); - if (!q->backing_dev_info) - goto fail_split; - - q->stats = blk_alloc_queue_stats(); - if (!q->stats) - goto fail_stats; - - q->backing_dev_info->ra_pages = - (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; - q->backing_dev_info->capabilities = BDI_CAP_CGROUP_WRITEBACK; - q->backing_dev_info->name = "block"; - q->node = node_id; - - timer_setup(&q->backing_dev_info->laptop_mode_wb_timer, - laptop_mode_timer_fn, 0); - timer_setup(&q->timeout, blk_rq_timed_out_timer, 0); - INIT_WORK(&q->timeout_work, NULL); - INIT_LIST_HEAD(&q->timeout_list); - INIT_LIST_HEAD(&q->icq_list); -#ifdef CONFIG_BLK_CGROUP - INIT_LIST_HEAD(&q->blkg_list); -#endif - INIT_DELAYED_WORK(&q->delay_work, blk_delay_work); - - kobject_init(&q->kobj, &blk_queue_ktype); - -#ifdef CONFIG_BLK_DEV_IO_TRACE - mutex_init(&q->blk_trace_mutex); -#endif - mutex_init(&q->sysfs_lock); - spin_lock_init(&q->__queue_lock); - - q->queue_lock = lock ? : &q->__queue_lock; - - /* - * A queue starts its life with bypass turned on to avoid - * unnecessary bypass on/off overhead and nasty surprises during - * init. The initial bypass will be finished when the queue is - * registered by blk_register_queue(). 
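The fragment above is the consumer side of q->q_usage_counter: every submitter holds a reference across the I/O, and freezing switches the percpu ref to atomic mode and waits for it to drain. A hedged sketch of the enter/exit pairing (locking, pm and retry details elided; this condenses the code shown here rather than introducing any new API):

	if (percpu_ref_tryget_live(&q->q_usage_counter)) {
		/* ... issue the request ... */
		percpu_ref_put(&q->q_usage_counter);	/* blk_queue_exit() */
	} else if (flags & BLK_MQ_REQ_NOWAIT) {
		return -EBUSY;	/* frozen, dying, or pm-only; fail fast */
	} else {
		/* sleep on q->mq_freeze_wq and retry, as shown above */
	}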
- */ - q->bypass_depth = 1; - queue_flag_set_unlocked(QUEUE_FLAG_BYPASS, q); - - init_waitqueue_head(&q->mq_freeze_wq); - - /* - * Init percpu_ref in atomic mode so that it's faster to shutdown. - * See blk_register_queue() for details. - */ - if (percpu_ref_init(&q->q_usage_counter, - blk_queue_usage_counter_release, - PERCPU_REF_INIT_ATOMIC, GFP_KERNEL)) - goto fail_bdi; - - if (blkcg_init_queue(q)) - goto fail_ref; - - return q; - -fail_ref: - percpu_ref_exit(&q->q_usage_counter); -fail_bdi: - blk_free_queue_stats(q->stats); -fail_stats: - bdi_put(q->backing_dev_info); -fail_split: - bioset_exit(&q->bio_split); -fail_id: - ida_simple_remove(&blk_queue_ida, q->id); -fail_q: - kmem_cache_free(blk_requestq_cachep, q); - return NULL; -} -EXPORT_SYMBOL(blk_alloc_queue_node); - -/** - * blk_init_queue - prepare a request queue for use with a block device - * @rfn: The function to be called to process requests that have been - * placed on the queue. - * @lock: Request queue spin lock - * - * Description: - * If a block device wishes to use the standard request handling procedures, - * which sorts requests and coalesces adjacent requests, then it must - * call blk_init_queue(). The function @rfn will be called when there - * are requests on the queue that need to be processed. If the device - * supports plugging, then @rfn may not be called immediately when requests - * are available on the queue, but may be called at some time later instead. - * Plugged queues are generally unplugged when a buffer belonging to one - * of the requests on the queue is needed, or due to memory pressure. - * - * @rfn is not required, or even expected, to remove all requests off the - * queue, but only as many as it can handle at a time. If it does leave - * requests on the queue, it is responsible for arranging that the requests - * get dealt with eventually. - * - * The queue spin lock must be held while manipulating the requests on the - * request queue; this lock will be taken also from interrupt context, so irq - * disabling is needed for it. - * - * Function returns a pointer to the initialized request queue, or %NULL if - * it didn't succeed. - * - * Note: - * blk_init_queue() must be paired with a blk_cleanup_queue() call - * when the block device is deactivated (such as at module unload). 
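For contrast with the blk-mq-only allocation path that replaces it, this is roughly how a legacy driver paired the calls described above; my_request_fn and my_lock are invented, and real drivers also set more limits and registered a gendisk in between:

static DEFINE_SPINLOCK(my_lock);	/* the @lock handed to blk_init_queue() */

static struct request_queue *my_create_queue(void)
{
	struct request_queue *q;

	q = blk_init_queue(my_request_fn, &my_lock);
	if (!q)
		return NULL;
	blk_queue_max_hw_sectors(q, 256);	/* example limit */
	return q;
}

static void my_destroy_queue(struct request_queue *q)
{
	blk_cleanup_queue(q);	/* the required counterpart at teardown */
}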
- **/ - -struct request_queue *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock) -{ - return blk_init_queue_node(rfn, lock, NUMA_NO_NODE); -} -EXPORT_SYMBOL(blk_init_queue); - -struct request_queue * -blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id) -{ - struct request_queue *q; - - q = blk_alloc_queue_node(GFP_KERNEL, node_id, lock); - if (!q) - return NULL; - - q->request_fn = rfn; - if (blk_init_allocated_queue(q) < 0) { - blk_cleanup_queue(q); - return NULL; - } - - return q; -} -EXPORT_SYMBOL(blk_init_queue_node); - -static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio); - - -int blk_init_allocated_queue(struct request_queue *q) -{ - WARN_ON_ONCE(q->mq_ops); - - q->fq = blk_alloc_flush_queue(q, NUMA_NO_NODE, q->cmd_size, GFP_KERNEL); - if (!q->fq) - return -ENOMEM; - - if (q->init_rq_fn && q->init_rq_fn(q, q->fq->flush_rq, GFP_KERNEL)) - goto out_free_flush_queue; - - if (blk_init_rl(&q->root_rl, q, GFP_KERNEL)) - goto out_exit_flush_rq; - - INIT_WORK(&q->timeout_work, blk_timeout_work); - q->queue_flags |= QUEUE_FLAG_DEFAULT; - - /* - * This also sets hw/phys segments, boundary and size - */ - blk_queue_make_request(q, blk_queue_bio); - - q->sg_reserved_size = INT_MAX; - - if (elevator_init(q)) - goto out_exit_flush_rq; - return 0; - -out_exit_flush_rq: - if (q->exit_rq_fn) - q->exit_rq_fn(q, q->fq->flush_rq); -out_free_flush_queue: - blk_free_flush_queue(q->fq); - q->fq = NULL; - return -ENOMEM; -} -EXPORT_SYMBOL(blk_init_allocated_queue); - -bool blk_get_queue(struct request_queue *q) -{ - if (likely(!blk_queue_dying(q))) { - __blk_get_queue(q); - return true; - } - - return false; -} -EXPORT_SYMBOL(blk_get_queue); - -static inline void blk_free_request(struct request_list *rl, struct request *rq) -{ - if (rq->rq_flags & RQF_ELVPRIV) { - elv_put_request(rl->q, rq); - if (rq->elv.icq) - put_io_context(rq->elv.icq->ioc); - } - - mempool_free(rq, rl->rq_pool); -} - -/* - * ioc_batching returns true if the ioc is a valid batching request and - * should be given priority access to a request. - */ -static inline int ioc_batching(struct request_queue *q, struct io_context *ioc) -{ - if (!ioc) - return 0; - - /* - * Make sure the process is able to allocate at least 1 request - * even if the batch times out, otherwise we could theoretically - * lose wakeups. - */ - return ioc->nr_batch_requests == q->nr_batching || - (ioc->nr_batch_requests > 0 - && time_before(jiffies, ioc->last_waited + BLK_BATCH_TIME)); -} - -/* - * ioc_set_batching sets ioc to be a new "batcher" if it is not one. This - * will cause the process to be a "batcher" on all queues in the system. This - * is the behaviour we want though - once it gets a wakeup it should be given - * a nice run. - */ -static void ioc_set_batching(struct request_queue *q, struct io_context *ioc) -{ - if (!ioc || ioc_batching(q, ioc)) - return; - - ioc->nr_batch_requests = q->nr_batching; - ioc->last_waited = jiffies; -} - -static void __freed_request(struct request_list *rl, int sync) -{ - struct request_queue *q = rl->q; - - if (rl->count[sync] < queue_congestion_off_threshold(q)) - blk_clear_congested(rl, sync); - - if (rl->count[sync] + 1 <= q->nr_requests) { - if (waitqueue_active(&rl->wait[sync])) - wake_up(&rl->wait[sync]); - - blk_clear_rl_full(rl, sync); - } -} - -/* - * A request has just been released. Account for it, update the full and - * congestion status, wake up any waiters. Called under q->queue_lock. 
- */ -static void freed_request(struct request_list *rl, bool sync, - req_flags_t rq_flags) -{ - struct request_queue *q = rl->q; - - q->nr_rqs[sync]--; - rl->count[sync]--; - if (rq_flags & RQF_ELVPRIV) - q->nr_rqs_elvpriv--; - - __freed_request(rl, sync); - - if (unlikely(rl->starved[sync ^ 1])) - __freed_request(rl, sync ^ 1); -} - -int blk_update_nr_requests(struct request_queue *q, unsigned int nr) -{ - struct request_list *rl; - int on_thresh, off_thresh; - - WARN_ON_ONCE(q->mq_ops); - - spin_lock_irq(q->queue_lock); - q->nr_requests = nr; - blk_queue_congestion_threshold(q); - on_thresh = queue_congestion_on_threshold(q); - off_thresh = queue_congestion_off_threshold(q); - - blk_queue_for_each_rl(rl, q) { - if (rl->count[BLK_RW_SYNC] >= on_thresh) - blk_set_congested(rl, BLK_RW_SYNC); - else if (rl->count[BLK_RW_SYNC] < off_thresh) - blk_clear_congested(rl, BLK_RW_SYNC); - - if (rl->count[BLK_RW_ASYNC] >= on_thresh) - blk_set_congested(rl, BLK_RW_ASYNC); - else if (rl->count[BLK_RW_ASYNC] < off_thresh) - blk_clear_congested(rl, BLK_RW_ASYNC); - - if (rl->count[BLK_RW_SYNC] >= q->nr_requests) { - blk_set_rl_full(rl, BLK_RW_SYNC); - } else { - blk_clear_rl_full(rl, BLK_RW_SYNC); - wake_up(&rl->wait[BLK_RW_SYNC]); - } - - if (rl->count[BLK_RW_ASYNC] >= q->nr_requests) { - blk_set_rl_full(rl, BLK_RW_ASYNC); - } else { - blk_clear_rl_full(rl, BLK_RW_ASYNC); - wake_up(&rl->wait[BLK_RW_ASYNC]); - } - } - - spin_unlock_irq(q->queue_lock); - return 0; -} - -/** - * __get_request - get a free request - * @rl: request list to allocate from - * @op: operation and flags - * @bio: bio to allocate request for (can be %NULL) - * @flags: BLQ_MQ_REQ_* flags - * @gfp_mask: allocator flags - * - * Get a free request from @q. This function may fail under memory - * pressure or if @q is dead. - * - * Must be called with @q->queue_lock held and, - * Returns ERR_PTR on failure, with @q->queue_lock held. - * Returns request pointer on success, with @q->queue_lock *not held*. - */ -static struct request *__get_request(struct request_list *rl, unsigned int op, - struct bio *bio, blk_mq_req_flags_t flags, gfp_t gfp_mask) -{ - struct request_queue *q = rl->q; - struct request *rq; - struct elevator_type *et = q->elevator->type; - struct io_context *ioc = rq_ioc(bio); - struct io_cq *icq = NULL; - const bool is_sync = op_is_sync(op); - int may_queue; - req_flags_t rq_flags = RQF_ALLOCED; - - lockdep_assert_held(q->queue_lock); - - if (unlikely(blk_queue_dying(q))) - return ERR_PTR(-ENODEV); - - may_queue = elv_may_queue(q, op); - if (may_queue == ELV_MQUEUE_NO) - goto rq_starved; - - if (rl->count[is_sync]+1 >= queue_congestion_on_threshold(q)) { - if (rl->count[is_sync]+1 >= q->nr_requests) { - /* - * The queue will fill after this allocation, so set - * it as full, and mark this process as "batching". - * This process will be allowed to complete a batch of - * requests, others will be blocked. 
- */ - if (!blk_rl_full(rl, is_sync)) { - ioc_set_batching(q, ioc); - blk_set_rl_full(rl, is_sync); - } else { - if (may_queue != ELV_MQUEUE_MUST - && !ioc_batching(q, ioc)) { - /* - * The queue is full and the allocating - * process is not a "batcher", and not - * exempted by the IO scheduler - */ - return ERR_PTR(-ENOMEM); - } - } - } - blk_set_congested(rl, is_sync); - } - - /* - * Only allow batching queuers to allocate up to 50% over the defined - * limit of requests, otherwise we could have thousands of requests - * allocated with any setting of ->nr_requests - */ - if (rl->count[is_sync] >= (3 * q->nr_requests / 2)) - return ERR_PTR(-ENOMEM); - - q->nr_rqs[is_sync]++; - rl->count[is_sync]++; - rl->starved[is_sync] = 0; - - /* - * Decide whether the new request will be managed by elevator. If - * so, mark @rq_flags and increment elvpriv. Non-zero elvpriv will - * prevent the current elevator from being destroyed until the new - * request is freed. This guarantees icq's won't be destroyed and - * makes creating new ones safe. - * - * Flush requests do not use the elevator so skip initialization. - * This allows a request to share the flush and elevator data. - * - * Also, lookup icq while holding queue_lock. If it doesn't exist, - * it will be created after releasing queue_lock. - */ - if (!op_is_flush(op) && !blk_queue_bypass(q)) { - rq_flags |= RQF_ELVPRIV; - q->nr_rqs_elvpriv++; - if (et->icq_cache && ioc) - icq = ioc_lookup_icq(ioc, q); - } - - if (blk_queue_io_stat(q)) - rq_flags |= RQF_IO_STAT; - spin_unlock_irq(q->queue_lock); - - /* allocate and init request */ - rq = mempool_alloc(rl->rq_pool, gfp_mask); - if (!rq) - goto fail_alloc; - - blk_rq_init(q, rq); - blk_rq_set_rl(rq, rl); - rq->cmd_flags = op; - rq->rq_flags = rq_flags; - if (flags & BLK_MQ_REQ_PREEMPT) - rq->rq_flags |= RQF_PREEMPT; - - /* init elvpriv */ - if (rq_flags & RQF_ELVPRIV) { - if (unlikely(et->icq_cache && !icq)) { - if (ioc) - icq = ioc_create_icq(ioc, q, gfp_mask); - if (!icq) - goto fail_elvpriv; - } - - rq->elv.icq = icq; - if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) - goto fail_elvpriv; + * read pair of barrier in blk_freeze_queue_start(), + * we need to order reading __PERCPU_REF_DEAD flag of + * .q_usage_counter and reading .mq_freeze_depth or + * queue dying flag, otherwise the following wait may + * never return if the two reads are reordered. + */ + smp_rmb(); - /* @rq->elv.icq holds io_context until @rq is freed */ - if (icq) - get_io_context(icq->ioc); + wait_event(q->mq_freeze_wq, + (atomic_read(&q->mq_freeze_depth) == 0 && + (pm || (blk_pm_request_resume(q), + !blk_queue_pm_only(q)))) || + blk_queue_dying(q)); + if (blk_queue_dying(q)) + return -ENODEV; } -out: - /* - * ioc may be NULL here, and ioc_batching will be false. That's - * OK, if the queue is under the request limit then requests need - * not count toward the nr_batch_requests limit. There will always - * be some limit enforced by BLK_BATCH_TIME. - */ - if (ioc_batching(q, ioc)) - ioc->nr_batch_requests--; - - trace_block_getrq(q, bio, op); - return rq; +} -fail_elvpriv: - /* - * elvpriv init failed. ioc, icq and elvpriv aren't mempool backed - * and may fail indefinitely under memory pressure and thus - * shouldn't stall IO. Treat this request as !elvpriv. This will - * disturb iosched and blkcg but weird is bettern than dead. 
- */ - printk_ratelimited(KERN_WARNING "%s: dev %s: request aux data allocation failed, iosched may be disturbed\n", - __func__, dev_name(q->backing_dev_info->dev)); +void blk_queue_exit(struct request_queue *q) +{ + percpu_ref_put(&q->q_usage_counter); +} - rq->rq_flags &= ~RQF_ELVPRIV; - rq->elv.icq = NULL; +static void blk_queue_usage_counter_release(struct percpu_ref *ref) +{ + struct request_queue *q = + container_of(ref, struct request_queue, q_usage_counter); - spin_lock_irq(q->queue_lock); - q->nr_rqs_elvpriv--; - spin_unlock_irq(q->queue_lock); - goto out; + wake_up_all(&q->mq_freeze_wq); +} -fail_alloc: - /* - * Allocation failed presumably due to memory. Undo anything we - * might have messed up. - * - * Allocating task should really be put onto the front of the wait - * queue, but this is pretty rare. - */ - spin_lock_irq(q->queue_lock); - freed_request(rl, is_sync, rq_flags); +static void blk_rq_timed_out_timer(struct timer_list *t) +{ + struct request_queue *q = from_timer(q, t, timeout); - /* - * in the very unlikely event that allocation failed and no - * requests for this direction was pending, mark us starved so that - * freeing of a request in the other direction will notice - * us. another possible fix would be to split the rq mempool into - * READ and WRITE - */ -rq_starved: - if (unlikely(rl->count[is_sync] == 0)) - rl->starved[is_sync] = 1; - return ERR_PTR(-ENOMEM); + kblockd_schedule_work(&q->timeout_work); } /** - * get_request - get a free request - * @q: request_queue to allocate request from - * @op: operation and flags - * @bio: bio to allocate request for (can be %NULL) - * @flags: BLK_MQ_REQ_* flags. - * @gfp: allocator flags - * - * Get a free request from @q. If %BLK_MQ_REQ_NOWAIT is set in @flags, - * this function keeps retrying under memory pressure and fails iff @q is dead. - * - * Must be called with @q->queue_lock held and, - * Returns ERR_PTR on failure, with @q->queue_lock held. - * Returns request pointer on success, with @q->queue_lock *not held*. 
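One detail of get_request() worth flagging before its body scrolls past below: waiters queue exclusively, so each request freed by freed_request() wakes exactly one sleeper rather than the whole herd. A condensed restatement of the removed wait loop, not a new API:

	DEFINE_WAIT(wait);

	prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
				  TASK_UNINTERRUPTIBLE);
	spin_unlock_irq(q->queue_lock);
	io_schedule();			/* woken when a request is freed */
	spin_lock_irq(q->queue_lock);
	finish_wait(&rl->wait[is_sync], &wait);
	/* then retry the allocation from the top */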
+ * blk_alloc_queue_node - allocate a request queue + * @gfp_mask: memory allocation flags + * @node_id: NUMA node to allocate memory from */ -static struct request *get_request(struct request_queue *q, unsigned int op, - struct bio *bio, blk_mq_req_flags_t flags, gfp_t gfp) +struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id) { - const bool is_sync = op_is_sync(op); - DEFINE_WAIT(wait); - struct request_list *rl; - struct request *rq; + struct request_queue *q; + int ret; - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); + q = kmem_cache_alloc_node(blk_requestq_cachep, + gfp_mask | __GFP_ZERO, node_id); + if (!q) + return NULL; - rl = blk_get_rl(q, bio); /* transferred to @rq on success */ -retry: - rq = __get_request(rl, op, bio, flags, gfp); - if (!IS_ERR(rq)) - return rq; + INIT_LIST_HEAD(&q->queue_head); + q->last_merge = NULL; - if (op & REQ_NOWAIT) { - blk_put_rl(rl); - return ERR_PTR(-EAGAIN); - } + q->id = ida_simple_get(&blk_queue_ida, 0, 0, gfp_mask); + if (q->id < 0) + goto fail_q; - if ((flags & BLK_MQ_REQ_NOWAIT) || unlikely(blk_queue_dying(q))) { - blk_put_rl(rl); - return rq; - } + ret = bioset_init(&q->bio_split, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); + if (ret) + goto fail_id; + + q->backing_dev_info = bdi_alloc_node(gfp_mask, node_id); + if (!q->backing_dev_info) + goto fail_split; + + q->stats = blk_alloc_queue_stats(); + if (!q->stats) + goto fail_stats; + + q->backing_dev_info->ra_pages = + (VM_MAX_READAHEAD * 1024) / PAGE_SIZE; + q->backing_dev_info->capabilities = BDI_CAP_CGROUP_WRITEBACK; + q->backing_dev_info->name = "block"; + q->node = node_id; + + timer_setup(&q->backing_dev_info->laptop_mode_wb_timer, + laptop_mode_timer_fn, 0); + timer_setup(&q->timeout, blk_rq_timed_out_timer, 0); + INIT_WORK(&q->timeout_work, NULL); + INIT_LIST_HEAD(&q->icq_list); +#ifdef CONFIG_BLK_CGROUP + INIT_LIST_HEAD(&q->blkg_list); +#endif - /* wait on @rl and retry */ - prepare_to_wait_exclusive(&rl->wait[is_sync], &wait, - TASK_UNINTERRUPTIBLE); + kobject_init(&q->kobj, &blk_queue_ktype); - trace_block_sleeprq(q, bio, op); +#ifdef CONFIG_BLK_DEV_IO_TRACE + mutex_init(&q->blk_trace_mutex); +#endif + mutex_init(&q->sysfs_lock); + spin_lock_init(&q->queue_lock); - spin_unlock_irq(q->queue_lock); - io_schedule(); + init_waitqueue_head(&q->mq_freeze_wq); /* - * After sleeping, we become a "batching" process and will be able - * to allocate at least one request, and up to a big batch of them - * for a small period time. See ioc_batching, ioc_set_batching + * Init percpu_ref in atomic mode so that it's faster to shutdown. + * See blk_register_queue() for details. */ - ioc_set_batching(q, current->io_context); + if (percpu_ref_init(&q->q_usage_counter, + blk_queue_usage_counter_release, + PERCPU_REF_INIT_ATOMIC, GFP_KERNEL)) + goto fail_bdi; + + if (blkcg_init_queue(q)) + goto fail_ref; - spin_lock_irq(q->queue_lock); - finish_wait(&rl->wait[is_sync], &wait); + return q; - goto retry; +fail_ref: + percpu_ref_exit(&q->q_usage_counter); +fail_bdi: + blk_free_queue_stats(q->stats); +fail_stats: + bdi_put(q->backing_dev_info); +fail_split: + bioset_exit(&q->bio_split); +fail_id: + ida_simple_remove(&blk_queue_ida, q->id); +fail_q: + kmem_cache_free(blk_requestq_cachep, q); + return NULL; } +EXPORT_SYMBOL(blk_alloc_queue_node); -/* flags: BLK_MQ_REQ_PREEMPT and/or BLK_MQ_REQ_NOWAIT. 
*/ -static struct request *blk_old_get_request(struct request_queue *q, - unsigned int op, blk_mq_req_flags_t flags) +bool blk_get_queue(struct request_queue *q) { - struct request *rq; - gfp_t gfp_mask = flags & BLK_MQ_REQ_NOWAIT ? GFP_ATOMIC : GFP_NOIO; - int ret = 0; - - WARN_ON_ONCE(q->mq_ops); - - /* create ioc upfront */ - create_io_context(gfp_mask, q->node); - - ret = blk_queue_enter(q, flags); - if (ret) - return ERR_PTR(ret); - spin_lock_irq(q->queue_lock); - rq = get_request(q, op, NULL, flags, gfp_mask); - if (IS_ERR(rq)) { - spin_unlock_irq(q->queue_lock); - blk_queue_exit(q); - return rq; + if (likely(!blk_queue_dying(q))) { + __blk_get_queue(q); + return true; } - /* q->queue_lock is unlocked at this point */ - rq->__data_len = 0; - rq->__sector = (sector_t) -1; - rq->bio = rq->biotail = NULL; - return rq; + return false; } +EXPORT_SYMBOL(blk_get_queue); /** * blk_get_request - allocate a request @@ -1623,170 +576,17 @@ struct request *blk_get_request(struct request_queue *q, unsigned int op, WARN_ON_ONCE(op & REQ_NOWAIT); WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT)); - if (q->mq_ops) { - req = blk_mq_alloc_request(q, op, flags); - if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn) - q->mq_ops->initialize_rq_fn(req); - } else { - req = blk_old_get_request(q, op, flags); - if (!IS_ERR(req) && q->initialize_rq_fn) - q->initialize_rq_fn(req); - } + req = blk_mq_alloc_request(q, op, flags); + if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn) + q->mq_ops->initialize_rq_fn(req); return req; } EXPORT_SYMBOL(blk_get_request); -/** - * blk_requeue_request - put a request back on queue - * @q: request queue where request should be inserted - * @rq: request to be inserted - * - * Description: - * Drivers often keep queueing requests until the hardware cannot accept - * more, when that condition happens we need to put the request back - * on the queue. Must be called with queue lock held. - */ -void blk_requeue_request(struct request_queue *q, struct request *rq) -{ - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - blk_delete_timer(rq); - blk_clear_rq_complete(rq); - trace_block_rq_requeue(q, rq); - rq_qos_requeue(q, rq); - - if (rq->rq_flags & RQF_QUEUED) - blk_queue_end_tag(q, rq); - - BUG_ON(blk_queued_rq(rq)); - - elv_requeue_request(q, rq); -} -EXPORT_SYMBOL(blk_requeue_request); - -static void add_acct_request(struct request_queue *q, struct request *rq, - int where) -{ - blk_account_io_start(rq, true); - __elv_add_request(q, rq, where); -} - -static void part_round_stats_single(struct request_queue *q, int cpu, - struct hd_struct *part, unsigned long now, - unsigned int inflight) -{ - if (inflight) { - __part_stat_add(cpu, part, time_in_queue, - inflight * (now - part->stamp)); - __part_stat_add(cpu, part, io_ticks, (now - part->stamp)); - } - part->stamp = now; -} - -/** - * part_round_stats() - Round off the performance stats on a struct disk_stats. - * @q: target block queue - * @cpu: cpu number for stats access - * @part: target partition - * - * The average IO queue length and utilisation statistics are maintained - * by observing the current state of the queue length and the amount of - * time it has been in this state for. - * - * Normally, that accounting is done on IO completion, but that can result - * in more than a second's worth of IO being accounted for within any one - * second, leading to >100% utilisation. 
To deal with that, we call this - * function to do a round-off before returning the results when reading - * /proc/diskstats. This accounts immediately for all queue usage up to - * the current jiffies and restarts the counters again. - */ -void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part) -{ - struct hd_struct *part2 = NULL; - unsigned long now = jiffies; - unsigned int inflight[2]; - int stats = 0; - - if (part->stamp != now) - stats |= 1; - - if (part->partno) { - part2 = &part_to_disk(part)->part0; - if (part2->stamp != now) - stats |= 2; - } - - if (!stats) - return; - - part_in_flight(q, part, inflight); - - if (stats & 2) - part_round_stats_single(q, cpu, part2, now, inflight[1]); - if (stats & 1) - part_round_stats_single(q, cpu, part, now, inflight[0]); -} -EXPORT_SYMBOL_GPL(part_round_stats); - -void __blk_put_request(struct request_queue *q, struct request *req) -{ - req_flags_t rq_flags = req->rq_flags; - - if (unlikely(!q)) - return; - - if (q->mq_ops) { - blk_mq_free_request(req); - return; - } - - lockdep_assert_held(q->queue_lock); - - blk_req_zone_write_unlock(req); - blk_pm_put_request(req); - blk_pm_mark_last_busy(req); - - elv_completed_request(q, req); - - /* this is a bio leak */ - WARN_ON(req->bio != NULL); - - rq_qos_done(q, req); - - /* - * Request may not have originated from ll_rw_blk. if not, - * it didn't come out of our reserved rq pools - */ - if (rq_flags & RQF_ALLOCED) { - struct request_list *rl = blk_rq_rl(req); - bool sync = op_is_sync(req->cmd_flags); - - BUG_ON(!list_empty(&req->queuelist)); - BUG_ON(ELV_ON_HASH(req)); - - blk_free_request(rl, req); - freed_request(rl, sync, rq_flags); - blk_put_rl(rl); - blk_queue_exit(q); - } -} -EXPORT_SYMBOL_GPL(__blk_put_request); - void blk_put_request(struct request *req) { - struct request_queue *q = req->q; - - if (q->mq_ops) - blk_mq_free_request(req); - else { - unsigned long flags; - - spin_lock_irqsave(q->queue_lock, flags); - __blk_put_request(q, req); - spin_unlock_irqrestore(q->queue_lock, flags); - } + blk_mq_free_request(req); } EXPORT_SYMBOL(blk_put_request); @@ -1806,7 +606,6 @@ bool bio_attempt_back_merge(struct request_queue *q, struct request *req, req->biotail->bi_next = bio; req->biotail = bio; req->__data_len += bio->bi_iter.bi_size; - req->ioprio = ioprio_best(req->ioprio, bio_prio(bio)); blk_account_io_start(req, false); return true; @@ -1830,7 +629,6 @@ bool bio_attempt_front_merge(struct request_queue *q, struct request *req, req->__sector = bio->bi_iter.bi_sector; req->__data_len += bio->bi_iter.bi_size; - req->ioprio = ioprio_best(req->ioprio, bio_prio(bio)); blk_account_io_start(req, false); return true; @@ -1850,7 +648,6 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req, req->biotail->bi_next = bio; req->biotail = bio; req->__data_len += bio->bi_iter.bi_size; - req->ioprio = ioprio_best(req->ioprio, bio_prio(bio)); req->nr_phys_segments = segments + 1; blk_account_io_start(req, false); @@ -1883,7 +680,6 @@ no_merge: * Caller must ensure !blk_queue_nomerges(q) beforehand. 
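blk_attempt_plug_merge() scans the calling task's plug list because submitters batch I/O like this; a brief sketch, where bio1 and bio2 stand for bios the caller has already built:

	struct blk_plug plug;

	blk_start_plug(&plug);
	submit_bio(bio1);
	submit_bio(bio2);	/* may merge into bio1's request in the plug */
	blk_finish_plug(&plug);	/* flushes the batched requests to the driver */

Merging against the task's own plug needs no queue lock, which is the point of checking it first.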
*/ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, - unsigned int *request_count, struct request **same_queue_rq) { struct blk_plug *plug; @@ -1893,25 +689,19 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, plug = current->plug; if (!plug) return false; - *request_count = 0; - if (q->mq_ops) - plug_list = &plug->mq_list; - else - plug_list = &plug->list; + plug_list = &plug->mq_list; list_for_each_entry_reverse(rq, plug_list, queuelist) { bool merged = false; - if (rq->q == q) { - (*request_count)++; + if (rq->q == q && same_queue_rq) { /* * Only blk-mq multiple hardware queues case checks the * rq in the same queue, there should be only one such * rq in a queue **/ - if (same_queue_rq) - *same_queue_rq = rq; + *same_queue_rq = rq; } if (rq->q != q || !blk_rq_merge_ok(rq, bio)) @@ -1938,176 +728,18 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, return false; } -unsigned int blk_plug_queued_count(struct request_queue *q) -{ - struct blk_plug *plug; - struct request *rq; - struct list_head *plug_list; - unsigned int ret = 0; - - plug = current->plug; - if (!plug) - goto out; - - if (q->mq_ops) - plug_list = &plug->mq_list; - else - plug_list = &plug->list; - - list_for_each_entry(rq, plug_list, queuelist) { - if (rq->q == q) - ret++; - } -out: - return ret; -} - void blk_init_request_from_bio(struct request *req, struct bio *bio) { - struct io_context *ioc = rq_ioc(bio); - if (bio->bi_opf & REQ_RAHEAD) req->cmd_flags |= REQ_FAILFAST_MASK; req->__sector = bio->bi_iter.bi_sector; - if (ioprio_valid(bio_prio(bio))) - req->ioprio = bio_prio(bio); - else if (ioc) - req->ioprio = ioc->ioprio; - else - req->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0); + req->ioprio = bio_prio(bio); req->write_hint = bio->bi_write_hint; blk_rq_bio_prep(req->q, req, bio); } EXPORT_SYMBOL_GPL(blk_init_request_from_bio); -static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio) -{ - struct blk_plug *plug; - int where = ELEVATOR_INSERT_SORT; - struct request *req, *free; - unsigned int request_count = 0; - - /* - * low level driver can indicate that it wants pages above a - * certain limit bounced to low memory (ie for highmem, or even - * ISA dma in theory) - */ - blk_queue_bounce(q, &bio); - - blk_queue_split(q, &bio); - - if (!bio_integrity_prep(bio)) - return BLK_QC_T_NONE; - - if (op_is_flush(bio->bi_opf)) { - spin_lock_irq(q->queue_lock); - where = ELEVATOR_INSERT_FLUSH; - goto get_rq; - } - - /* - * Check if we can merge with the plugged list before grabbing - * any locks. - */ - if (!blk_queue_nomerges(q)) { - if (blk_attempt_plug_merge(q, bio, &request_count, NULL)) - return BLK_QC_T_NONE; - } else - request_count = blk_plug_queued_count(q); - - spin_lock_irq(q->queue_lock); - - switch (elv_merge(q, &req, bio)) { - case ELEVATOR_BACK_MERGE: - if (!bio_attempt_back_merge(q, req, bio)) - break; - elv_bio_merged(q, req, bio); - free = attempt_back_merge(q, req); - if (free) - __blk_put_request(q, free); - else - elv_merged_request(q, req, ELEVATOR_BACK_MERGE); - goto out_unlock; - case ELEVATOR_FRONT_MERGE: - if (!bio_attempt_front_merge(q, req, bio)) - break; - elv_bio_merged(q, req, bio); - free = attempt_front_merge(q, req); - if (free) - __blk_put_request(q, free); - else - elv_merged_request(q, req, ELEVATOR_FRONT_MERGE); - goto out_unlock; - default: - break; - } - -get_rq: - rq_qos_throttle(q, bio, q->queue_lock); - - /* - * Grab a free request. This is might sleep but can not fail. 
- * Returns with the queue unlocked. - */ - blk_queue_enter_live(q); - req = get_request(q, bio->bi_opf, bio, 0, GFP_NOIO); - if (IS_ERR(req)) { - blk_queue_exit(q); - rq_qos_cleanup(q, bio); - if (PTR_ERR(req) == -ENOMEM) - bio->bi_status = BLK_STS_RESOURCE; - else - bio->bi_status = BLK_STS_IOERR; - bio_endio(bio); - goto out_unlock; - } - - rq_qos_track(q, req, bio); - - /* - * After dropping the lock and possibly sleeping here, our request - * may now be mergeable after it had proven unmergeable (above). - * We don't worry about that case for efficiency. It won't happen - * often, and the elevators are able to handle it. - */ - blk_init_request_from_bio(req, bio); - - if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags)) - req->cpu = raw_smp_processor_id(); - - plug = current->plug; - if (plug) { - /* - * If this is the first request added after a plug, fire - * of a plug trace. - * - * @request_count may become stale because of schedule - * out, so check plug list again. - */ - if (!request_count || list_empty(&plug->list)) - trace_block_plug(q); - else { - struct request *last = list_entry_rq(plug->list.prev); - if (request_count >= BLK_MAX_REQUEST_COUNT || - blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE) { - blk_flush_plug_list(plug, false); - trace_block_plug(q); - } - } - list_add_tail(&req->queuelist, &plug->list); - blk_account_io_start(req, true); - } else { - spin_lock_irq(q->queue_lock); - add_acct_request(q, req, where); - __blk_run_queue(q); -out_unlock: - spin_unlock_irq(q->queue_lock); - } - - return BLK_QC_T_NONE; -} - static void handle_bad_sector(struct bio *bio, sector_t maxsector) { char b[BDEVNAME_SIZE]; @@ -2259,7 +891,7 @@ generic_make_request_checks(struct bio *bio) * For a REQ_NOWAIT based request, return -EOPNOTSUPP * if queue is not a request based queue. */ - if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q)) + if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_mq(q)) goto not_supported; if (should_fail_bio(bio)) @@ -2289,6 +921,9 @@ generic_make_request_checks(struct bio *bio) } } + if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) + bio->bi_opf &= ~REQ_HIPRI; + switch (bio_op(bio)) { case REQ_OP_DISCARD: if (!blk_queue_discard(q)) @@ -2559,18 +1194,7 @@ blk_qc_t submit_bio(struct bio *bio) return generic_make_request(bio); } -EXPORT_SYMBOL(submit_bio); - -bool blk_poll(struct request_queue *q, blk_qc_t cookie) -{ - if (!q->poll_fn || !blk_qc_t_valid(cookie)) - return false; - - if (current->plug) - blk_flush_plug_list(current->plug, false); - return q->poll_fn(q, cookie); -} -EXPORT_SYMBOL_GPL(blk_poll); +EXPORT_SYMBOL(submit_bio); /** * blk_cloned_rq_check_limits - Helper function to check a cloned request @@ -2619,8 +1243,7 @@ static int blk_cloned_rq_check_limits(struct request_queue *q, */ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq) { - unsigned long flags; - int where = ELEVATOR_INSERT_BACK; + blk_qc_t unused; if (blk_cloned_rq_check_limits(q, rq)) return BLK_STS_IOERR; @@ -2629,38 +1252,15 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request * should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq))) return BLK_STS_IOERR; - if (q->mq_ops) { - if (blk_queue_io_stat(q)) - blk_account_io_start(rq, true); - /* - * Since we have a scheduler attached on the top device, - * bypass a potential scheduler on the bottom device for - * insert. 
- */ - return blk_mq_request_issue_directly(rq); - } - - spin_lock_irqsave(q->queue_lock, flags); - if (unlikely(blk_queue_dying(q))) { - spin_unlock_irqrestore(q->queue_lock, flags); - return BLK_STS_IOERR; - } + if (blk_queue_io_stat(q)) + blk_account_io_start(rq, true); /* - * Submitting request must be dequeued before calling this function - * because it will be linked to another request_queue + * Since we have a scheduler attached on the top device, + * bypass a potential scheduler on the bottom device for + * insert. */ - BUG_ON(blk_queued_rq(rq)); - - if (op_is_flush(rq->cmd_flags)) - where = ELEVATOR_INSERT_FLUSH; - - add_acct_request(q, rq, where); - if (where == ELEVATOR_INSERT_FLUSH) - __blk_run_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); - - return BLK_STS_OK; + return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, true); } EXPORT_SYMBOL_GPL(blk_insert_cloned_request); @@ -2710,11 +1310,10 @@ void blk_account_io_completion(struct request *req, unsigned int bytes) if (blk_do_io_stat(req)) { const int sgrp = op_stat_group(req_op(req)); struct hd_struct *part; - int cpu; - cpu = part_stat_lock(); + part_stat_lock(); part = req->part; - part_stat_add(cpu, part, sectors[sgrp], bytes >> 9); + part_stat_add(part, sectors[sgrp], bytes >> 9); part_stat_unlock(); } } @@ -2729,14 +1328,14 @@ void blk_account_io_done(struct request *req, u64 now) if (blk_do_io_stat(req) && !(req->rq_flags & RQF_FLUSH_SEQ)) { const int sgrp = op_stat_group(req_op(req)); struct hd_struct *part; - int cpu; - cpu = part_stat_lock(); + part_stat_lock(); part = req->part; - part_stat_inc(cpu, part, ios[sgrp]); - part_stat_add(cpu, part, nsecs[sgrp], now - req->start_time_ns); - part_round_stats(req->q, cpu, part); + update_io_ticks(part, jiffies); + part_stat_inc(part, ios[sgrp]); + part_stat_add(part, nsecs[sgrp], now - req->start_time_ns); + part_stat_add(part, time_in_queue, nsecs_to_jiffies64(now - req->start_time_ns)); part_dec_in_flight(req->q, part, rq_data_dir(req)); hd_struct_put(part); @@ -2748,16 +1347,15 @@ void blk_account_io_start(struct request *rq, bool new_io) { struct hd_struct *part; int rw = rq_data_dir(rq); - int cpu; if (!blk_do_io_stat(rq)) return; - cpu = part_stat_lock(); + part_stat_lock(); if (!new_io) { part = rq->part; - part_stat_inc(cpu, part, merges[rw]); + part_stat_inc(part, merges[rw]); } else { part = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq)); if (!hd_struct_try_get(part)) { @@ -2772,232 +1370,14 @@ void blk_account_io_start(struct request *rq, bool new_io) part = &rq->rq_disk->part0; hd_struct_get(part); } - part_round_stats(rq->q, cpu, part); part_inc_in_flight(rq->q, part, rw); rq->part = part; } - part_stat_unlock(); -} - -static struct request *elv_next_request(struct request_queue *q) -{ - struct request *rq; - struct blk_flush_queue *fq = blk_get_flush_queue(q, NULL); - - WARN_ON_ONCE(q->mq_ops); - - while (1) { - list_for_each_entry(rq, &q->queue_head, queuelist) { -#ifdef CONFIG_PM - /* - * If a request gets queued in state RPM_SUSPENDED - * then that's a kernel bug. - */ - WARN_ON_ONCE(q->rpm_status == RPM_SUSPENDED); -#endif - return rq; - } - - /* - * Flush request is running and flush request isn't queueable - * in the drive, we can hold the queue till flush request is - * finished. Even we don't do this, driver can't dispatch next - * requests and will requeue them. And this can improve - * throughput too. For example, we have request flush1, write1, - * flush 2. 
flush1 is dispatched, then queue is hold, write1 - * isn't inserted to queue. After flush1 is finished, flush2 - * will be dispatched. Since disk cache is already clean, - * flush2 will be finished very soon, so looks like flush2 is - * folded to flush1. - * Since the queue is hold, a flag is set to indicate the queue - * should be restarted later. Please see flush_end_io() for - * details. - */ - if (fq->flush_pending_idx != fq->flush_running_idx && - !queue_flush_queueable(q)) { - fq->flush_queue_delayed = 1; - return NULL; - } - if (unlikely(blk_queue_bypass(q)) || - !q->elevator->type->ops.sq.elevator_dispatch_fn(q, 0)) - return NULL; - } -} - -/** - * blk_peek_request - peek at the top of a request queue - * @q: request queue to peek at - * - * Description: - * Return the request at the top of @q. The returned request - * should be started using blk_start_request() before LLD starts - * processing it. - * - * Return: - * Pointer to the request at the top of @q if available. Null - * otherwise. - */ -struct request *blk_peek_request(struct request_queue *q) -{ - struct request *rq; - int ret; - - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - while ((rq = elv_next_request(q)) != NULL) { - if (!(rq->rq_flags & RQF_STARTED)) { - /* - * This is the first time the device driver - * sees this request (possibly after - * requeueing). Notify IO scheduler. - */ - if (rq->rq_flags & RQF_SORTED) - elv_activate_rq(q, rq); - - /* - * just mark as started even if we don't start - * it, a request that has been delayed should - * not be passed by new incoming requests - */ - rq->rq_flags |= RQF_STARTED; - trace_block_rq_issue(q, rq); - } - - if (!q->boundary_rq || q->boundary_rq == rq) { - q->end_sector = rq_end_sector(rq); - q->boundary_rq = NULL; - } - - if (rq->rq_flags & RQF_DONTPREP) - break; - - if (q->dma_drain_size && blk_rq_bytes(rq)) { - /* - * make sure space for the drain appears we - * know we can do this because max_hw_segments - * has been adjusted to be one fewer than the - * device can handle - */ - rq->nr_phys_segments++; - } - - if (!q->prep_rq_fn) - break; - - ret = q->prep_rq_fn(q, rq); - if (ret == BLKPREP_OK) { - break; - } else if (ret == BLKPREP_DEFER) { - /* - * the request may have been (partially) prepped. - * we need to keep this request in the front to - * avoid resource deadlock. RQF_STARTED will - * prevent other fs requests from passing this one. - */ - if (q->dma_drain_size && blk_rq_bytes(rq) && - !(rq->rq_flags & RQF_DONTPREP)) { - /* - * remove the space for the drain we added - * so that we don't add it again - */ - --rq->nr_phys_segments; - } - - rq = NULL; - break; - } else if (ret == BLKPREP_KILL || ret == BLKPREP_INVALID) { - rq->rq_flags |= RQF_QUIET; - /* - * Mark this request as started so we don't trigger - * any debug logic in the end I/O path. - */ - blk_start_request(rq); - __blk_end_request_all(rq, ret == BLKPREP_INVALID ? - BLK_STS_TARGET : BLK_STS_IOERR); - } else { - printk(KERN_ERR "%s: bad return=%d\n", __func__, ret); - break; - } - } - - return rq; -} -EXPORT_SYMBOL(blk_peek_request); - -static void blk_dequeue_request(struct request *rq) -{ - struct request_queue *q = rq->q; - - BUG_ON(list_empty(&rq->queuelist)); - BUG_ON(ELV_ON_HASH(rq)); - - list_del_init(&rq->queuelist); - - /* - * the time frame between a request being removed from the lists - * and to it is freed is accounted as io that is in progress at - * the driver side. 
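Returning to the accounting rework a little above: part_stat_lock() no longer hands back a cpu number to thread through every helper, and the old part_round_stats() rounding is folded into update_io_ticks(). A hedged sketch of the completion-side pattern for a read, condensed from the new blk_account_io_done() shown earlier:

	u64 now = ktime_get_ns();

	part_stat_lock();
	update_io_ticks(part, jiffies);
	part_stat_inc(part, ios[STAT_READ]);
	part_stat_add(part, nsecs[STAT_READ], now - req->start_time_ns);
	part_stat_unlock();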
- */ - if (blk_account_rq(rq)) - q->in_flight[rq_is_sync(rq)]++; -} - -/** - * blk_start_request - start request processing on the driver - * @req: request to dequeue - * - * Description: - * Dequeue @req and start timeout timer on it. This hands off the - * request to the driver. - */ -void blk_start_request(struct request *req) -{ - lockdep_assert_held(req->q->queue_lock); - WARN_ON_ONCE(req->q->mq_ops); - - blk_dequeue_request(req); - - if (test_bit(QUEUE_FLAG_STATS, &req->q->queue_flags)) { - req->io_start_time_ns = ktime_get_ns(); -#ifdef CONFIG_BLK_DEV_THROTTLING_LOW - req->throtl_size = blk_rq_sectors(req); -#endif - req->rq_flags |= RQF_STATS; - rq_qos_issue(req->q, req); - } - - BUG_ON(blk_rq_is_complete(req)); - blk_add_timer(req); -} -EXPORT_SYMBOL(blk_start_request); - -/** - * blk_fetch_request - fetch a request from a request queue - * @q: request queue to fetch a request from - * - * Description: - * Return the request at the top of @q. The request is started on - * return and LLD can start processing it immediately. - * - * Return: - * Pointer to the request at the top of @q if available. Null - * otherwise. - */ -struct request *blk_fetch_request(struct request_queue *q) -{ - struct request *rq; - - lockdep_assert_held(q->queue_lock); - WARN_ON_ONCE(q->mq_ops); + update_io_ticks(part, jiffies); - rq = blk_peek_request(q); - if (rq) - blk_start_request(rq); - return rq; + part_stat_unlock(); } -EXPORT_SYMBOL(blk_fetch_request); /* * Steal bios from a request and add them to a bio list. @@ -3124,255 +1504,6 @@ bool blk_update_request(struct request *req, blk_status_t error, } EXPORT_SYMBOL_GPL(blk_update_request); -static bool blk_update_bidi_request(struct request *rq, blk_status_t error, - unsigned int nr_bytes, - unsigned int bidi_bytes) -{ - if (blk_update_request(rq, error, nr_bytes)) - return true; - - /* Bidi request must be completed as a whole */ - if (unlikely(blk_bidi_rq(rq)) && - blk_update_request(rq->next_rq, error, bidi_bytes)) - return true; - - if (blk_queue_add_random(rq->q)) - add_disk_randomness(rq->rq_disk); - - return false; -} - -/** - * blk_unprep_request - unprepare a request - * @req: the request - * - * This function makes a request ready for complete resubmission (or - * completion). It happens only after all error handling is complete, - * so represents the appropriate moment to deallocate any resources - * that were allocated to the request in the prep_rq_fn. The queue - * lock is held when calling this. 
- */ -void blk_unprep_request(struct request *req) -{ - struct request_queue *q = req->q; - - req->rq_flags &= ~RQF_DONTPREP; - if (q->unprep_rq_fn) - q->unprep_rq_fn(q, req); -} -EXPORT_SYMBOL_GPL(blk_unprep_request); - -void blk_finish_request(struct request *req, blk_status_t error) -{ - struct request_queue *q = req->q; - u64 now = ktime_get_ns(); - - lockdep_assert_held(req->q->queue_lock); - WARN_ON_ONCE(q->mq_ops); - - if (req->rq_flags & RQF_STATS) - blk_stat_add(req, now); - - if (req->rq_flags & RQF_QUEUED) - blk_queue_end_tag(q, req); - - BUG_ON(blk_queued_rq(req)); - - if (unlikely(laptop_mode) && !blk_rq_is_passthrough(req)) - laptop_io_completion(req->q->backing_dev_info); - - blk_delete_timer(req); - - if (req->rq_flags & RQF_DONTPREP) - blk_unprep_request(req); - - blk_account_io_done(req, now); - - if (req->end_io) { - rq_qos_done(q, req); - req->end_io(req, error); - } else { - if (blk_bidi_rq(req)) - __blk_put_request(req->next_rq->q, req->next_rq); - - __blk_put_request(q, req); - } -} -EXPORT_SYMBOL(blk_finish_request); - -/** - * blk_end_bidi_request - Complete a bidi request - * @rq: the request to complete - * @error: block status code - * @nr_bytes: number of bytes to complete @rq - * @bidi_bytes: number of bytes to complete @rq->next_rq - * - * Description: - * Ends I/O on a number of bytes attached to @rq and @rq->next_rq. - * Drivers that supports bidi can safely call this member for any - * type of request, bidi or uni. In the later case @bidi_bytes is - * just ignored. - * - * Return: - * %false - we are done with this request - * %true - still buffers pending for this request - **/ -static bool blk_end_bidi_request(struct request *rq, blk_status_t error, - unsigned int nr_bytes, unsigned int bidi_bytes) -{ - struct request_queue *q = rq->q; - unsigned long flags; - - WARN_ON_ONCE(q->mq_ops); - - if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes)) - return true; - - spin_lock_irqsave(q->queue_lock, flags); - blk_finish_request(rq, error); - spin_unlock_irqrestore(q->queue_lock, flags); - - return false; -} - -/** - * __blk_end_bidi_request - Complete a bidi request with queue lock held - * @rq: the request to complete - * @error: block status code - * @nr_bytes: number of bytes to complete @rq - * @bidi_bytes: number of bytes to complete @rq->next_rq - * - * Description: - * Identical to blk_end_bidi_request() except that queue lock is - * assumed to be locked on entry and remains so on return. - * - * Return: - * %false - we are done with this request - * %true - still buffers pending for this request - **/ -static bool __blk_end_bidi_request(struct request *rq, blk_status_t error, - unsigned int nr_bytes, unsigned int bidi_bytes) -{ - lockdep_assert_held(rq->q->queue_lock); - WARN_ON_ONCE(rq->q->mq_ops); - - if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes)) - return true; - - blk_finish_request(rq, error); - - return false; -} - -/** - * blk_end_request - Helper function for drivers to complete the request. - * @rq: the request being processed - * @error: block status code - * @nr_bytes: number of bytes to complete - * - * Description: - * Ends I/O on a number of bytes attached to @rq. - * If @rq has leftover, sets it up for the next range of segments. 
- * - * Return: - * %false - we are done with this request - * %true - still buffers pending for this request - **/ -bool blk_end_request(struct request *rq, blk_status_t error, - unsigned int nr_bytes) -{ - WARN_ON_ONCE(rq->q->mq_ops); - return blk_end_bidi_request(rq, error, nr_bytes, 0); -} -EXPORT_SYMBOL(blk_end_request); - -/** - * blk_end_request_all - Helper function for drivers to finish the request. - * @rq: the request to finish - * @error: block status code - * - * Description: - * Completely finish @rq. - */ -void blk_end_request_all(struct request *rq, blk_status_t error) -{ - bool pending; - unsigned int bidi_bytes = 0; - - if (unlikely(blk_bidi_rq(rq))) - bidi_bytes = blk_rq_bytes(rq->next_rq); - - pending = blk_end_bidi_request(rq, error, blk_rq_bytes(rq), bidi_bytes); - BUG_ON(pending); -} -EXPORT_SYMBOL(blk_end_request_all); - -/** - * __blk_end_request - Helper function for drivers to complete the request. - * @rq: the request being processed - * @error: block status code - * @nr_bytes: number of bytes to complete - * - * Description: - * Must be called with queue lock held unlike blk_end_request(). - * - * Return: - * %false - we are done with this request - * %true - still buffers pending for this request - **/ -bool __blk_end_request(struct request *rq, blk_status_t error, - unsigned int nr_bytes) -{ - lockdep_assert_held(rq->q->queue_lock); - WARN_ON_ONCE(rq->q->mq_ops); - - return __blk_end_bidi_request(rq, error, nr_bytes, 0); -} -EXPORT_SYMBOL(__blk_end_request); - -/** - * __blk_end_request_all - Helper function for drivers to finish the request. - * @rq: the request to finish - * @error: block status code - * - * Description: - * Completely finish @rq. Must be called with queue lock held. - */ -void __blk_end_request_all(struct request *rq, blk_status_t error) -{ - bool pending; - unsigned int bidi_bytes = 0; - - lockdep_assert_held(rq->q->queue_lock); - WARN_ON_ONCE(rq->q->mq_ops); - - if (unlikely(blk_bidi_rq(rq))) - bidi_bytes = blk_rq_bytes(rq->next_rq); - - pending = __blk_end_bidi_request(rq, error, blk_rq_bytes(rq), bidi_bytes); - BUG_ON(pending); -} -EXPORT_SYMBOL(__blk_end_request_all); - -/** - * __blk_end_request_cur - Helper function to finish the current request chunk. - * @rq: the request to finish the current chunk for - * @error: block status code - * - * Description: - * Complete the current consecutively mapped chunk from @rq. Must - * be called with queue lock held.
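 *
 * For context on this family of helpers being removed: a legacy driver
 * typically completed an I/O from its interrupt handler with a single
 * call, as in the sketch below (illustrative only; "my_irq" and
 * "my_active_rq" are hypothetical names, not part of this patch):
 *
 *	static irqreturn_t my_irq(int irq, void *data)
 *	{
 *		struct request *rq = my_active_rq(data);
 *
 *		blk_end_request_all(rq, BLK_STS_OK);
 *		return IRQ_HANDLED;
 *	}
 *
 * blk_end_request_all() took q->queue_lock internally, while the
 * __-prefixed variants assumed the caller already held it; blk-mq
 * drivers call blk_mq_end_request() instead.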
- * - * Return: - * %false - we are done with this request - * %true - still buffers pending for this request - */ -bool __blk_end_request_cur(struct request *rq, blk_status_t error) -{ - return __blk_end_request(rq, error, blk_rq_cur_bytes(rq)); -} -EXPORT_SYMBOL(__blk_end_request_cur); - void blk_rq_bio_prep(struct request_queue *q, struct request *rq, struct bio *bio) { @@ -3428,8 +1559,8 @@ EXPORT_SYMBOL_GPL(rq_flush_dcache_pages); */ int blk_lld_busy(struct request_queue *q) { - if (q->lld_busy_fn) - return q->lld_busy_fn(q); + if (queue_is_mq(q) && q->mq_ops->busy) + return q->mq_ops->busy(q); return 0; } @@ -3460,7 +1591,6 @@ EXPORT_SYMBOL_GPL(blk_rq_unprep_clone); */ static void __blk_rq_prep_clone(struct request *dst, struct request *src) { - dst->cpu = src->cpu; dst->__sector = blk_rq_pos(src); dst->__data_len = blk_rq_bytes(src); if (src->rq_flags & RQF_SPECIAL_PAYLOAD) { @@ -3572,9 +1702,11 @@ void blk_start_plug(struct blk_plug *plug) if (tsk->plug) return; - INIT_LIST_HEAD(&plug->list); INIT_LIST_HEAD(&plug->mq_list); INIT_LIST_HEAD(&plug->cb_list); + plug->rq_count = 0; + plug->multiple_queues = false; + /* * Store ordering should not be needed here, since a potential * preempt will imply a full memory barrier @@ -3583,36 +1715,6 @@ void blk_start_plug(struct blk_plug *plug) } EXPORT_SYMBOL(blk_start_plug); -static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b) -{ - struct request *rqa = container_of(a, struct request, queuelist); - struct request *rqb = container_of(b, struct request, queuelist); - - return !(rqa->q < rqb->q || - (rqa->q == rqb->q && blk_rq_pos(rqa) < blk_rq_pos(rqb))); -} - -/* - * If 'from_schedule' is true, then postpone the dispatch of requests - * until a safe kblockd context. We do this to avoid accidental big - * additional stack usage in driver dispatch, in places where the original - * plugger did not intend it.
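 *
 * Note that the plug API is unchanged for submitters; only the legacy
 * per-queue ->list handling is removed and plug->mq_list remains. A
 * caller still batches as in this sketch ("my_submit_batch" is a
 * hypothetical name, not part of this patch):
 *
 *	static void my_submit_batch(struct bio **bios, int nr)
 *	{
 *		struct blk_plug plug;
 *		int i;
 *
 *		blk_start_plug(&plug);
 *		for (i = 0; i < nr; i++)
 *			submit_bio(bios[i]);
 *		blk_finish_plug(&plug);
 *	}
 *
 * blk_finish_plug() now drains the plug exclusively through
 * blk_mq_flush_plug_list(), as the rest of this hunk shows.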
- */ -static void queue_unplugged(struct request_queue *q, unsigned int depth, - bool from_schedule) - __releases(q->queue_lock) -{ - lockdep_assert_held(q->queue_lock); - - trace_block_unplug(q, depth, !from_schedule); - - if (from_schedule) - blk_run_queue_async(q); - else - __blk_run_queue(q); - spin_unlock_irq(q->queue_lock); -} - static void flush_plug_callbacks(struct blk_plug *plug, bool from_schedule) { LIST_HEAD(callbacks); @@ -3657,65 +1759,10 @@ EXPORT_SYMBOL(blk_check_plugged); void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule) { - struct request_queue *q; - struct request *rq; - LIST_HEAD(list); - unsigned int depth; - flush_plug_callbacks(plug, from_schedule); if (!list_empty(&plug->mq_list)) blk_mq_flush_plug_list(plug, from_schedule); - - if (list_empty(&plug->list)) - return; - - list_splice_init(&plug->list, &list); - - list_sort(NULL, &list, plug_rq_cmp); - - q = NULL; - depth = 0; - - while (!list_empty(&list)) { - rq = list_entry_rq(list.next); - list_del_init(&rq->queuelist); - BUG_ON(!rq->q); - if (rq->q != q) { - /* - * This drops the queue lock - */ - if (q) - queue_unplugged(q, depth, from_schedule); - q = rq->q; - depth = 0; - spin_lock_irq(q->queue_lock); - } - - /* - * Short-circuit if @q is dead - */ - if (unlikely(blk_queue_dying(q))) { - __blk_end_request_all(rq, BLK_STS_IOERR); - continue; - } - - /* - * rq is already accounted, so use raw insert - */ - if (op_is_flush(rq->cmd_flags)) - __elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH); - else - __elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE); - - depth++; - } - - /* - * This drops the queue lock - */ - if (q) - queue_unplugged(q, depth, from_schedule); } void blk_finish_plug(struct blk_plug *plug) @@ -3742,9 +1789,6 @@ int __init blk_dev_init(void) if (!kblockd_workqueue) panic("Failed to create kblockd\n"); - request_cachep = kmem_cache_create("blkdev_requests", - sizeof(struct request), 0, SLAB_PANIC, NULL); - blk_requestq_cachep = kmem_cache_create("request_queue", sizeof(struct request_queue), 0, SLAB_PANIC, NULL); diff --git a/block/blk-exec.c b/block/blk-exec.c index f7b292f12449..a34b7d918742 100644 --- a/block/blk-exec.c +++ b/block/blk-exec.c @@ -48,8 +48,6 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk, struct request *rq, int at_head, rq_end_io_fn *done) { - int where = at_head ? 
ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK; - WARN_ON(irqs_disabled()); WARN_ON(!blk_rq_is_passthrough(rq)); @@ -60,23 +58,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk, * don't check dying flag for MQ because the request won't * be reused after dying flag is set */ - if (q->mq_ops) { - blk_mq_sched_insert_request(rq, at_head, true, false); - return; - } - - spin_lock_irq(q->queue_lock); - - if (unlikely(blk_queue_dying(q))) { - rq->rq_flags |= RQF_QUIET; - __blk_end_request_all(rq, BLK_STS_IOERR); - spin_unlock_irq(q->queue_lock); - return; - } - - __elv_add_request(q, rq, where); - __blk_run_queue(q); - spin_unlock_irq(q->queue_lock); + blk_mq_sched_insert_request(rq, at_head, true, false); } EXPORT_SYMBOL_GPL(blk_execute_rq_nowait); diff --git a/block/blk-flush.c b/block/blk-flush.c index 8b44b86779da..a3fc7191c694 100644 --- a/block/blk-flush.c +++ b/block/blk-flush.c @@ -93,7 +93,7 @@ enum { FLUSH_PENDING_TIMEOUT = 5 * HZ, }; -static bool blk_kick_flush(struct request_queue *q, +static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, unsigned int flags); static unsigned int blk_flush_policy(unsigned long fflags, struct request *rq) @@ -132,18 +132,9 @@ static void blk_flush_restore_request(struct request *rq) rq->end_io = rq->flush.saved_end_io; } -static bool blk_flush_queue_rq(struct request *rq, bool add_front) +static void blk_flush_queue_rq(struct request *rq, bool add_front) { - if (rq->q->mq_ops) { - blk_mq_add_to_requeue_list(rq, add_front, true); - return false; - } else { - if (add_front) - list_add(&rq->queuelist, &rq->q->queue_head); - else - list_add_tail(&rq->queuelist, &rq->q->queue_head); - return true; - } + blk_mq_add_to_requeue_list(rq, add_front, true); } /** @@ -157,18 +148,17 @@ static bool blk_flush_queue_rq(struct request *rq, bool add_front) * completion and trigger the next step. * * CONTEXT: - * spin_lock_irq(q->queue_lock or fq->mq_flush_lock) + * spin_lock_irq(fq->mq_flush_lock) * * RETURNS: * %true if requests were added to the dispatch queue, %false otherwise. 
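 *
 * Worked example of the REQ_FSEQ_* sequencing this function walks,
 * derived from blk_flush_policy() above (orientation only, not new to
 * this patch): on a queue with a volatile write cache but no native
 * FUA support, a REQ_PREFLUSH|REQ_FUA write carrying data expands to
 *
 *	policy = REQ_FSEQ_PREFLUSH | REQ_FSEQ_DATA | REQ_FSEQ_POSTFLUSH;
 *
 * whereas on a queue without a write cache the same bio reduces to
 * REQ_FSEQ_DATA alone, and an empty flush to policy == 0.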
*/ -static bool blk_flush_complete_seq(struct request *rq, +static void blk_flush_complete_seq(struct request *rq, struct blk_flush_queue *fq, unsigned int seq, blk_status_t error) { struct request_queue *q = rq->q; struct list_head *pending = &fq->flush_queue[fq->flush_pending_idx]; - bool queued = false, kicked; unsigned int cmd_flags; BUG_ON(rq->flush.seq & seq); @@ -191,7 +181,7 @@ static bool blk_flush_complete_seq(struct request *rq, case REQ_FSEQ_DATA: list_move_tail(&rq->flush.list, &fq->flush_data_in_flight); - queued = blk_flush_queue_rq(rq, true); + blk_flush_queue_rq(rq, true); break; case REQ_FSEQ_DONE: @@ -204,42 +194,34 @@ static bool blk_flush_complete_seq(struct request *rq, BUG_ON(!list_empty(&rq->queuelist)); list_del_init(&rq->flush.list); blk_flush_restore_request(rq); - if (q->mq_ops) - blk_mq_end_request(rq, error); - else - __blk_end_request_all(rq, error); + blk_mq_end_request(rq, error); break; default: BUG(); } - kicked = blk_kick_flush(q, fq, cmd_flags); - return kicked | queued; + blk_kick_flush(q, fq, cmd_flags); } static void flush_end_io(struct request *flush_rq, blk_status_t error) { struct request_queue *q = flush_rq->q; struct list_head *running; - bool queued = false; struct request *rq, *n; unsigned long flags = 0; struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx); + struct blk_mq_hw_ctx *hctx; - if (q->mq_ops) { - struct blk_mq_hw_ctx *hctx; - - /* release the tag's ownership to the req cloned from */ - spin_lock_irqsave(&fq->mq_flush_lock, flags); - hctx = blk_mq_map_queue(q, flush_rq->mq_ctx->cpu); - if (!q->elevator) { - blk_mq_tag_set_rq(hctx, flush_rq->tag, fq->orig_rq); - flush_rq->tag = -1; - } else { - blk_mq_put_driver_tag_hctx(hctx, flush_rq); - flush_rq->internal_tag = -1; - } + /* release the tag's ownership to the req cloned from */ + spin_lock_irqsave(&fq->mq_flush_lock, flags); + hctx = flush_rq->mq_hctx; + if (!q->elevator) { + blk_mq_tag_set_rq(hctx, flush_rq->tag, fq->orig_rq); + flush_rq->tag = -1; + } else { + blk_mq_put_driver_tag_hctx(hctx, flush_rq); + flush_rq->internal_tag = -1; } running = &fq->flush_queue[fq->flush_running_idx]; @@ -248,35 +230,16 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error) /* account completion of the flush request */ fq->flush_running_idx ^= 1; - if (!q->mq_ops) - elv_completed_request(q, flush_rq); - /* and push the waiting requests to the next stage */ list_for_each_entry_safe(rq, n, running, flush.list) { unsigned int seq = blk_flush_cur_seq(rq); BUG_ON(seq != REQ_FSEQ_PREFLUSH && seq != REQ_FSEQ_POSTFLUSH); - queued |= blk_flush_complete_seq(rq, fq, seq, error); + blk_flush_complete_seq(rq, fq, seq, error); } - /* - * Kick the queue to avoid stall for two cases: - * 1. Moving a request silently to empty queue_head may stall the - * queue. - * 2. When flush request is running in non-queueable queue, the - * queue is hold. Restart the queue after flush request is finished - * to avoid stall. - * This function is called from request completion path and calling - * directly into request_fn may confuse the driver. Always use - * kblockd. - */ - if (queued || fq->flush_queue_delayed) { - WARN_ON(q->mq_ops); - blk_run_queue_async(q); - } fq->flush_queue_delayed = 0; - if (q->mq_ops) - spin_unlock_irqrestore(&fq->mq_flush_lock, flags); + spin_unlock_irqrestore(&fq->mq_flush_lock, flags); } /** @@ -289,12 +252,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error) * Please read the comment at the top of this file for more info. 
* * CONTEXT: - * spin_lock_irq(q->queue_lock or fq->mq_flush_lock) + * spin_lock_irq(fq->mq_flush_lock) * - * RETURNS: - * %true if flush was issued, %false otherwise. */ -static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, +static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, unsigned int flags) { struct list_head *pending = &fq->flush_queue[fq->flush_pending_idx]; @@ -304,7 +265,7 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, /* C1 described at the top of this file */ if (fq->flush_pending_idx != fq->flush_running_idx || list_empty(pending)) - return false; + return; /* C2 and C3 * @@ -312,11 +273,10 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, * assigned to empty flushes, and we deadlock if we are expecting * other requests to make progress. Don't defer for that case. */ - if (!list_empty(&fq->flush_data_in_flight) && - !(q->mq_ops && q->elevator) && + if (!list_empty(&fq->flush_data_in_flight) && q->elevator && time_before(jiffies, fq->flush_pending_since + FLUSH_PENDING_TIMEOUT)) - return false; + return; /* * Issue flush and toggle pending_idx. This makes pending_idx @@ -334,19 +294,15 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, * In case of IO scheduler, flush rq need to borrow scheduler tag * just for cheating put/get driver tag. */ - if (q->mq_ops) { - struct blk_mq_hw_ctx *hctx; - - flush_rq->mq_ctx = first_rq->mq_ctx; - - if (!q->elevator) { - fq->orig_rq = first_rq; - flush_rq->tag = first_rq->tag; - hctx = blk_mq_map_queue(q, first_rq->mq_ctx->cpu); - blk_mq_tag_set_rq(hctx, first_rq->tag, flush_rq); - } else { - flush_rq->internal_tag = first_rq->internal_tag; - } + flush_rq->mq_ctx = first_rq->mq_ctx; + flush_rq->mq_hctx = first_rq->mq_hctx; + + if (!q->elevator) { + fq->orig_rq = first_rq; + flush_rq->tag = first_rq->tag; + blk_mq_tag_set_rq(flush_rq->mq_hctx, first_rq->tag, flush_rq); + } else { + flush_rq->internal_tag = first_rq->internal_tag; } flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH; @@ -355,62 +311,17 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, flush_rq->rq_disk = first_rq->rq_disk; flush_rq->end_io = flush_end_io; - return blk_flush_queue_rq(flush_rq, false); -} - -static void flush_data_end_io(struct request *rq, blk_status_t error) -{ - struct request_queue *q = rq->q; - struct blk_flush_queue *fq = blk_get_flush_queue(q, NULL); - - lockdep_assert_held(q->queue_lock); - - /* - * Updating q->in_flight[] here for making this tag usable - * early. Because in blk_queue_start_tag(), - * q->in_flight[BLK_RW_ASYNC] is used to limit async I/O and - * reserve tags for sync I/O. 
- * - * More importantly this way can avoid the following I/O - * deadlock: - * - * - suppose there are 40 fua requests coming to flush queue - * and queue depth is 31 - * - 30 rqs are scheduled then blk_queue_start_tag() can't alloc - * tag for async I/O any more - * - all the 30 rqs are completed before FLUSH_PENDING_TIMEOUT - * and flush_data_end_io() is called - * - the other rqs still can't go ahead if not updating - * q->in_flight[BLK_RW_ASYNC] here, meantime these rqs - * are held in flush data queue and make no progress of - * handling post flush rq - * - only after the post flush rq is handled, all these rqs - * can be completed - */ - - elv_completed_request(q, rq); - - /* for avoiding double accounting */ - rq->rq_flags &= ~RQF_STARTED; - - /* - * After populating an empty queue, kick it to avoid stall. Read - * the comment in flush_end_io(). - */ - if (blk_flush_complete_seq(rq, fq, REQ_FSEQ_DATA, error)) - blk_run_queue_async(q); + blk_flush_queue_rq(flush_rq, false); } static void mq_flush_data_end_io(struct request *rq, blk_status_t error) { struct request_queue *q = rq->q; - struct blk_mq_hw_ctx *hctx; + struct blk_mq_hw_ctx *hctx = rq->mq_hctx; struct blk_mq_ctx *ctx = rq->mq_ctx; unsigned long flags; struct blk_flush_queue *fq = blk_get_flush_queue(q, ctx); - hctx = blk_mq_map_queue(q, ctx->cpu); - if (q->elevator) { WARN_ON(rq->tag < 0); blk_mq_put_driver_tag_hctx(hctx, rq); @@ -443,9 +354,6 @@ void blk_insert_flush(struct request *rq) unsigned int policy = blk_flush_policy(fflags, rq); struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx); - if (!q->mq_ops) - lockdep_assert_held(q->queue_lock); - /* * @policy now records what operations need to be done. Adjust * REQ_PREFLUSH and FUA for the driver. @@ -468,10 +376,7 @@ void blk_insert_flush(struct request *rq) * complete the request.
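 *
 * The !policy fast path just below is what an empty flush hits on a
 * device without a volatile write cache. Caller-side, such a request
 * is built roughly as in this sketch of what blkdev_issue_flush()
 * does ("my_flush_dev" is a hypothetical wrapper, not part of this
 * patch):
 *
 *	static int my_flush_dev(struct block_device *bdev)
 *	{
 *		struct bio *bio = bio_alloc(GFP_KERNEL, 0);
 *		int ret;
 *
 *		bio_set_dev(bio, bdev);
 *		bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 *		ret = submit_bio_wait(bio);
 *		bio_put(bio);
 *		return ret;
 *	}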
*/ if (!policy) { - if (q->mq_ops) - blk_mq_end_request(rq, 0); - else - __blk_end_request(rq, 0, 0); + blk_mq_end_request(rq, 0); return; } @@ -484,10 +389,7 @@ void blk_insert_flush(struct request *rq) */ if ((policy & REQ_FSEQ_DATA) && !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) { - if (q->mq_ops) - blk_mq_request_bypass_insert(rq, false); - else - list_add_tail(&rq->queuelist, &q->queue_head); + blk_mq_request_bypass_insert(rq, false); return; } @@ -499,17 +401,12 @@ void blk_insert_flush(struct request *rq) INIT_LIST_HEAD(&rq->flush.list); rq->rq_flags |= RQF_FLUSH_SEQ; rq->flush.saved_end_io = rq->end_io; /* Usually NULL */ - if (q->mq_ops) { - rq->end_io = mq_flush_data_end_io; - spin_lock_irq(&fq->mq_flush_lock); - blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0); - spin_unlock_irq(&fq->mq_flush_lock); - return; - } - rq->end_io = flush_data_end_io; + rq->end_io = mq_flush_data_end_io; + spin_lock_irq(&fq->mq_flush_lock); blk_flush_complete_seq(rq, fq, REQ_FSEQ_ACTIONS & ~policy, 0); + spin_unlock_irq(&fq->mq_flush_lock); } /** @@ -575,8 +472,7 @@ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q, if (!fq) goto fail; - if (q->mq_ops) - spin_lock_init(&fq->mq_flush_lock); + spin_lock_init(&fq->mq_flush_lock); rq_sz = round_up(rq_sz + cmd_size, cache_line_size()); fq->flush_rq = kzalloc_node(rq_sz, flags, node); diff --git a/block/blk-ioc.c b/block/blk-ioc.c index 01580f88fcb3..5ed59ac6ae58 100644 --- a/block/blk-ioc.c +++ b/block/blk-ioc.c @@ -28,7 +28,6 @@ void get_io_context(struct io_context *ioc) BUG_ON(atomic_long_read(&ioc->refcount) <= 0); atomic_long_inc(&ioc->refcount); } -EXPORT_SYMBOL(get_io_context); static void icq_free_icq_rcu(struct rcu_head *head) { @@ -48,10 +47,8 @@ static void ioc_exit_icq(struct io_cq *icq) if (icq->flags & ICQ_EXITED) return; - if (et->uses_mq && et->ops.mq.exit_icq) - et->ops.mq.exit_icq(icq); - else if (!et->uses_mq && et->ops.sq.elevator_exit_icq_fn) - et->ops.sq.elevator_exit_icq_fn(icq); + if (et->ops.exit_icq) + et->ops.exit_icq(icq); icq->flags |= ICQ_EXITED; } @@ -113,9 +110,9 @@ static void ioc_release_fn(struct work_struct *work) struct io_cq, ioc_node); struct request_queue *q = icq->q; - if (spin_trylock(q->queue_lock)) { + if (spin_trylock(&q->queue_lock)) { ioc_destroy_icq(icq); - spin_unlock(q->queue_lock); + spin_unlock(&q->queue_lock); } else { spin_unlock_irqrestore(&ioc->lock, flags); cpu_relax(); @@ -162,7 +159,6 @@ void put_io_context(struct io_context *ioc) if (free_ioc) kmem_cache_free(iocontext_cachep, ioc); } -EXPORT_SYMBOL(put_io_context); /** * put_io_context_active - put active reference on ioc @@ -173,7 +169,6 @@ EXPORT_SYMBOL(put_io_context); */ void put_io_context_active(struct io_context *ioc) { - struct elevator_type *et; unsigned long flags; struct io_cq *icq; @@ -187,25 +182,12 @@ void put_io_context_active(struct io_context *ioc) * reverse double locking. Read comment in ioc_release_fn() for * explanation on the nested locking annotation. 
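 *
 * Note the locking convention change running through this file:
 * q->queue_lock is now a spinlock embedded in the request_queue rather
 * than a pointer, so callers take &q->queue_lock directly. A lookup
 * then follows this pattern (sketch of a hypothetical caller of
 * ioc_lookup_icq(), shown for orientation):
 *
 *	spin_lock_irq(&q->queue_lock);
 *	icq = ioc_lookup_icq(ioc, q);
 *	spin_unlock_irq(&q->queue_lock);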
*/ -retry: spin_lock_irqsave_nested(&ioc->lock, flags, 1); hlist_for_each_entry(icq, &ioc->icq_list, ioc_node) { if (icq->flags & ICQ_EXITED) continue; - et = icq->q->elevator->type; - if (et->uses_mq) { - ioc_exit_icq(icq); - } else { - if (spin_trylock(icq->q->queue_lock)) { - ioc_exit_icq(icq); - spin_unlock(icq->q->queue_lock); - } else { - spin_unlock_irqrestore(&ioc->lock, flags); - cpu_relax(); - goto retry; - } - } + ioc_exit_icq(icq); } spin_unlock_irqrestore(&ioc->lock, flags); @@ -232,7 +214,7 @@ static void __ioc_clear_queue(struct list_head *icq_list) while (!list_empty(icq_list)) { struct io_cq *icq = list_entry(icq_list->next, - struct io_cq, q_node); + struct io_cq, q_node); struct io_context *ioc = icq->ioc; spin_lock_irqsave(&ioc->lock, flags); @@ -251,16 +233,11 @@ void ioc_clear_queue(struct request_queue *q) { LIST_HEAD(icq_list); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); list_splice_init(&q->icq_list, &icq_list); + spin_unlock_irq(&q->queue_lock); - if (q->mq_ops) { - spin_unlock_irq(q->queue_lock); - __ioc_clear_queue(&icq_list); - } else { - __ioc_clear_queue(&icq_list); - spin_unlock_irq(q->queue_lock); - } + __ioc_clear_queue(&icq_list); } int create_task_io_context(struct task_struct *task, gfp_t gfp_flags, int node) @@ -336,7 +313,6 @@ struct io_context *get_task_io_context(struct task_struct *task, return NULL; } -EXPORT_SYMBOL(get_task_io_context); /** * ioc_lookup_icq - lookup io_cq from ioc @@ -350,7 +326,7 @@ struct io_cq *ioc_lookup_icq(struct io_context *ioc, struct request_queue *q) { struct io_cq *icq; - lockdep_assert_held(q->queue_lock); + lockdep_assert_held(&q->queue_lock); /* * icq's are indexed from @ioc using radix tree and hint pointer, @@ -409,16 +385,14 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q, INIT_HLIST_NODE(&icq->ioc_node); /* lock both q and ioc and try to link @icq */ - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); spin_lock(&ioc->lock); if (likely(!radix_tree_insert(&ioc->icq_tree, q->id, icq))) { hlist_add_head(&icq->ioc_node, &ioc->icq_list); list_add(&icq->q_node, &q->icq_list); - if (et->uses_mq && et->ops.mq.init_icq) - et->ops.mq.init_icq(icq); - else if (!et->uses_mq && et->ops.sq.elevator_init_icq_fn) - et->ops.sq.elevator_init_icq_fn(icq); + if (et->ops.init_icq) + et->ops.init_icq(icq); } else { kmem_cache_free(et->icq_cache, icq); icq = ioc_lookup_icq(ioc, q); @@ -427,7 +401,7 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q, } spin_unlock(&ioc->lock); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); radix_tree_preload_end(); return icq; } diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c index 38c35c32aff2..fc714ef402a6 100644 --- a/block/blk-iolatency.c +++ b/block/blk-iolatency.c @@ -262,29 +262,25 @@ static inline void iolat_update_total_lat_avg(struct iolatency_grp *iolat, stat->rqs.mean); } -static inline bool iolatency_may_queue(struct iolatency_grp *iolat, - wait_queue_entry_t *wait, - bool first_block) +static void iolat_cleanup_cb(struct rq_wait *rqw, void *private_data) { - struct rq_wait *rqw = &iolat->rq_wait; + atomic_dec(&rqw->inflight); + wake_up(&rqw->wait); +} - if (first_block && waitqueue_active(&rqw->wait) && - rqw->wait.head.next != &wait->entry) - return false; +static bool iolat_acquire_inflight(struct rq_wait *rqw, void *private_data) +{ + struct iolatency_grp *iolat = private_data; return rq_wait_inc_below(rqw, iolat->rq_depth.max_depth); } static void 
__blkcg_iolatency_throttle(struct rq_qos *rqos, struct iolatency_grp *iolat, - spinlock_t *lock, bool issue_as_root, + bool issue_as_root, bool use_memdelay) - __releases(lock) - __acquires(lock) { struct rq_wait *rqw = &iolat->rq_wait; unsigned use_delay = atomic_read(&lat_to_blkg(iolat)->use_delay); - DEFINE_WAIT(wait); - bool first_block = true; if (use_delay) blkcg_schedule_throttle(rqos->q, use_memdelay); @@ -301,27 +297,7 @@ static void __blkcg_iolatency_throttle(struct rq_qos *rqos, return; } - if (iolatency_may_queue(iolat, &wait, first_block)) - return; - - do { - prepare_to_wait_exclusive(&rqw->wait, &wait, - TASK_UNINTERRUPTIBLE); - - if (iolatency_may_queue(iolat, &wait, first_block)) - break; - first_block = false; - - if (lock) { - spin_unlock_irq(lock); - io_schedule(); - spin_lock_irq(lock); - } else { - io_schedule(); - } - } while (1); - - finish_wait(&rqw->wait, &wait); + rq_qos_wait(rqw, iolat, iolat_acquire_inflight, iolat_cleanup_cb); } #define SCALE_DOWN_FACTOR 2 @@ -478,38 +454,15 @@ static void check_scale_change(struct iolatency_grp *iolat) scale_change(iolat, direction > 0); } -static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio, - spinlock_t *lock) +static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio) { struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos); - struct blkcg *blkcg; - struct blkcg_gq *blkg; - struct request_queue *q = rqos->q; + struct blkcg_gq *blkg = bio->bi_blkg; bool issue_as_root = bio_issue_as_root_blkg(bio); if (!blk_iolatency_enabled(blkiolat)) return; - rcu_read_lock(); - blkcg = bio_blkcg(bio); - bio_associate_blkcg(bio, &blkcg->css); - blkg = blkg_lookup(blkcg, q); - if (unlikely(!blkg)) { - if (!lock) - spin_lock_irq(q->queue_lock); - blkg = blkg_lookup_create(blkcg, q); - if (IS_ERR(blkg)) - blkg = NULL; - if (!lock) - spin_unlock_irq(q->queue_lock); - } - if (!blkg) - goto out; - - bio_issue_init(&bio->bi_issue, bio_sectors(bio)); - bio_associate_blkg(bio, blkg); -out: - rcu_read_unlock(); while (blkg && blkg->parent) { struct iolatency_grp *iolat = blkg_to_lat(blkg); if (!iolat) { @@ -518,7 +471,7 @@ out: } check_scale_change(iolat); - __blkcg_iolatency_throttle(rqos, iolat, lock, issue_as_root, + __blkcg_iolatency_throttle(rqos, iolat, issue_as_root, (bio->bi_opf & REQ_SWAP) == REQ_SWAP); blkg = blkg->parent; } @@ -640,7 +593,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio) bool enabled = false; blkg = bio->bi_blkg; - if (!blkg) + if (!blkg || !bio_flagged(bio, BIO_TRACKED)) return; iolat = blkg_to_lat(bio->bi_blkg); @@ -730,7 +683,7 @@ static void blkiolatency_timer_fn(struct timer_list *t) * We could be exiting, don't access the pd unless we have a * ref on the blkg. 
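 *
 * For context on the rq_qos_wait() conversion earlier in this file:
 * the open-coded prepare_to_wait_exclusive() loop is factored into a
 * generic helper driven by two callbacks. The contract, inferred from
 * this conversion (the iolat_* callbacks above are the real users; the
 * "my_" names below are hypothetical, and the depth of 64 is an
 * arbitrary illustration):
 *
 *	static bool my_acquire(struct rq_wait *rqw, void *data)
 *	{
 *		return rq_wait_inc_below(rqw, 64);
 *	}
 *
 *	static void my_cleanup(struct rq_wait *rqw, void *data)
 *	{
 *		atomic_dec(&rqw->inflight);
 *		wake_up(&rqw->wait);
 *	}
 *
 *	rq_qos_wait(rqw, data, my_acquire, my_cleanup);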
*/ - if (!blkg_try_get(blkg)) + if (!blkg_tryget(blkg)) continue; iolat = blkg_to_lat(blkg); diff --git a/block/blk-merge.c b/block/blk-merge.c index 7695034f4b87..71e9ac03f621 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -195,7 +195,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, goto split; } - if (bvprvp && blk_queue_cluster(q)) { + if (bvprvp) { if (seg_size + bv.bv_len > queue_max_segment_size(q)) goto new_segment; if (!biovec_phys_mergeable(q, bvprvp, &bv)) @@ -295,7 +295,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, bool no_sg_merge) { struct bio_vec bv, bvprv = { NULL }; - int cluster, prev = 0; + int prev = 0; unsigned int seg_size, nr_phys_segs; struct bio *fbio, *bbio; struct bvec_iter iter; @@ -313,7 +313,6 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, } fbio = bio; - cluster = blk_queue_cluster(q); seg_size = 0; nr_phys_segs = 0; for_each_bio(bio) { @@ -325,7 +324,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, if (no_sg_merge) goto new_segment; - if (prev && cluster) { + if (prev) { if (seg_size + bv.bv_len > queue_max_segment_size(q)) goto new_segment; @@ -389,16 +388,12 @@ void blk_recount_segments(struct request_queue *q, struct bio *bio) bio_set_flag(bio, BIO_SEG_VALID); } -EXPORT_SYMBOL(blk_recount_segments); static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio, struct bio *nxt) { struct bio_vec end_bv = { NULL }, nxt_bv; - if (!blk_queue_cluster(q)) - return 0; - if (bio->bi_seg_back_size + nxt->bi_seg_front_size > queue_max_segment_size(q)) return 0; @@ -415,12 +410,12 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio, static inline void __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, struct scatterlist *sglist, struct bio_vec *bvprv, - struct scatterlist **sg, int *nsegs, int *cluster) + struct scatterlist **sg, int *nsegs) { int nbytes = bvec->bv_len; - if (*sg && *cluster) { + if (*sg) { if ((*sg)->length + nbytes > queue_max_segment_size(q)) goto new_segment; if (!biovec_phys_mergeable(q, bvprv, bvec)) @@ -466,12 +461,12 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio, { struct bio_vec bvec, bvprv = { NULL }; struct bvec_iter iter; - int cluster = blk_queue_cluster(q), nsegs = 0; + int nsegs = 0; for_each_bio(bio) bio_for_each_segment(bvec, bio, iter) __blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg, - &nsegs, &cluster); + &nsegs); return nsegs; } @@ -596,17 +591,6 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req, return ll_new_hw_segment(q, req, bio); } -/* - * blk-mq uses req->special to carry normal driver per-request payload; it - * does not indicate a prepared command that we cannot merge with. - */ -static bool req_no_special_merge(struct request *req) -{ - struct request_queue *q = req->q; - - return !q->mq_ops && req->special; -} - static bool req_attempt_discard_merge(struct request_queue *q, struct request *req, struct request *next) { @@ -632,13 +616,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, unsigned int seg_size = req->biotail->bi_seg_back_size + next->bio->bi_seg_front_size; - /* - * First check if either of the requests are re-queued - * requests. Can't merge them if they are.
- */ - if (req_no_special_merge(req) || req_no_special_merge(next)) - return 0; - if (req_gap_back_merge(req, next->bio)) return 0; @@ -703,12 +680,10 @@ static void blk_account_io_merge(struct request *req) { if (blk_do_io_stat(req)) { struct hd_struct *part; - int cpu; - cpu = part_stat_lock(); + part_stat_lock(); part = req->part; - part_round_stats(req->q, cpu, part); part_dec_in_flight(req->q, part, rq_data_dir(req)); hd_struct_put(part); @@ -731,7 +706,8 @@ static inline bool blk_discard_mergable(struct request *req) return false; } -enum elv_merge blk_try_req_merge(struct request *req, struct request *next) +static enum elv_merge blk_try_req_merge(struct request *req, + struct request *next) { if (blk_discard_mergable(req)) return ELEVATOR_DISCARD_MERGE; @@ -748,9 +724,6 @@ enum elv_merge blk_try_req_merge(struct request *req, struct request *next) static struct request *attempt_merge(struct request_queue *q, struct request *req, struct request *next) { - if (!q->mq_ops) - lockdep_assert_held(q->queue_lock); - if (!rq_mergeable(req) || !rq_mergeable(next)) return NULL; @@ -758,8 +731,7 @@ static struct request *attempt_merge(struct request_queue *q, return NULL; if (rq_data_dir(req) != rq_data_dir(next) - || req->rq_disk != next->rq_disk - || req_no_special_merge(next)) + || req->rq_disk != next->rq_disk) return NULL; if (req_op(req) == REQ_OP_WRITE_SAME && @@ -773,6 +745,9 @@ static struct request *attempt_merge(struct request_queue *q, if (req->write_hint != next->write_hint) return NULL; + if (req->ioprio != next->ioprio) + return NULL; + /* * If we are allowed to merge, then append bio list * from next to rq and release next. merge_requests_fn @@ -828,10 +803,6 @@ static struct request *attempt_merge(struct request_queue *q, */ blk_account_io_merge(next); - req->ioprio = ioprio_best(req->ioprio, next->ioprio); - if (blk_rq_cpu_valid(next)) - req->cpu = next->cpu; - /* * ownership of bio passed from next to req, return 'next' for * the caller to free @@ -863,16 +834,11 @@ struct request *attempt_front_merge(struct request_queue *q, struct request *rq) int blk_attempt_req_merge(struct request_queue *q, struct request *rq, struct request *next) { - struct elevator_queue *e = q->elevator; struct request *free; - if (!e->uses_mq && e->type->ops.sq.elevator_allow_rq_merge_fn) - if (!e->type->ops.sq.elevator_allow_rq_merge_fn(q, rq, next)) - return 0; - free = attempt_merge(q, rq, next); if (free) { - __blk_put_request(q, free); + blk_put_request(free); return 1; } @@ -891,8 +857,8 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio) if (bio_data_dir(bio) != rq_data_dir(rq)) return false; - /* must be same device and not a special request */ - if (rq->rq_disk != bio->bi_disk || req_no_special_merge(rq)) + /* must be same device */ + if (rq->rq_disk != bio->bi_disk) return false; /* only merge integrity protected bio into ditto rq */ @@ -911,6 +877,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio) if (rq->write_hint != bio->bi_write_hint) return false; + if (rq->ioprio != bio_prio(bio)) + return false; + return true; } diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c index 3eb169f15842..03a534820271 100644 --- a/block/blk-mq-cpumap.c +++ b/block/blk-mq-cpumap.c @@ -14,9 +14,10 @@ #include "blk.h" #include "blk-mq.h" -static int cpu_to_queue_index(unsigned int nr_queues, const int cpu) +static int cpu_to_queue_index(struct blk_mq_queue_map *qmap, + unsigned int nr_queues, const int cpu) { - return cpu % nr_queues; + return qmap->queue_offset + (cpu % 
nr_queues); } static int get_first_sibling(unsigned int cpu) @@ -30,10 +31,10 @@ static int get_first_sibling(unsigned int cpu) return cpu; } -int blk_mq_map_queues(struct blk_mq_tag_set *set) +int blk_mq_map_queues(struct blk_mq_queue_map *qmap) { - unsigned int *map = set->mq_map; - unsigned int nr_queues = set->nr_hw_queues; + unsigned int *map = qmap->mq_map; + unsigned int nr_queues = qmap->nr_queues; unsigned int cpu, first_sibling; for_each_possible_cpu(cpu) { @@ -44,11 +45,11 @@ int blk_mq_map_queues(struct blk_mq_tag_set *set) * performance optimizations. */ if (cpu < nr_queues) { - map[cpu] = cpu_to_queue_index(nr_queues, cpu); + map[cpu] = cpu_to_queue_index(qmap, nr_queues, cpu); } else { first_sibling = get_first_sibling(cpu); if (first_sibling == cpu) - map[cpu] = cpu_to_queue_index(nr_queues, cpu); + map[cpu] = cpu_to_queue_index(qmap, nr_queues, cpu); else map[cpu] = map[first_sibling]; } @@ -62,12 +63,12 @@ EXPORT_SYMBOL_GPL(blk_mq_map_queues); * We have no quick way of doing reverse lookups. This is only used at * queue init time, so runtime isn't important. */ -int blk_mq_hw_queue_to_node(unsigned int *mq_map, unsigned int index) +int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index) { int i; for_each_possible_cpu(i) { - if (index == mq_map[i]) + if (index == qmap->mq_map[i]) return local_memory_node(cpu_to_node(i)); } diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index 10b284a1f18d..90d68760af08 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -23,6 +23,7 @@ #include "blk-mq.h" #include "blk-mq-debugfs.h" #include "blk-mq-tag.h" +#include "blk-rq-qos.h" static void print_stat(struct seq_file *m, struct blk_rq_stat *stat) { @@ -112,10 +113,8 @@ static int queue_pm_only_show(void *data, struct seq_file *m) #define QUEUE_FLAG_NAME(name) [QUEUE_FLAG_##name] = #name static const char *const blk_queue_flag_name[] = { - QUEUE_FLAG_NAME(QUEUED), QUEUE_FLAG_NAME(STOPPED), QUEUE_FLAG_NAME(DYING), - QUEUE_FLAG_NAME(BYPASS), QUEUE_FLAG_NAME(BIDI), QUEUE_FLAG_NAME(NOMERGES), QUEUE_FLAG_NAME(SAME_COMP), @@ -318,7 +317,6 @@ static const char *const cmd_flag_name[] = { static const char *const rqf_name[] = { RQF_NAME(SORTED), RQF_NAME(STARTED), - RQF_NAME(QUEUED), RQF_NAME(SOFTBARRIER), RQF_NAME(FLUSH_SEQ), RQF_NAME(MIXED_MERGE), @@ -424,15 +422,18 @@ struct show_busy_params { /* * Note: the state of a request may change while this function is in progress, - * e.g. due to a concurrent blk_mq_finish_request() call. + * e.g. due to a concurrent blk_mq_finish_request() call. Returns true to + * keep iterating requests.
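 *
 * The iterator callbacks in this series change from void to bool:
 * returning true continues the walk, returning false stops it early.
 * A minimal callback under the new shape might look like this sketch
 * ("my_count_started" is a hypothetical name, not part of this patch):
 *
 *	static bool my_count_started(struct request *rq, void *data,
 *				     bool reserved)
 *	{
 *		unsigned int *count = data;
 *
 *		(*count)++;
 *		return true;
 *	}
 *
 * used as blk_mq_tagset_busy_iter(set, my_count_started, &count);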
*/ -static void hctx_show_busy_rq(struct request *rq, void *data, bool reserved) +static bool hctx_show_busy_rq(struct request *rq, void *data, bool reserved) { const struct show_busy_params *params = data; - if (blk_mq_map_queue(rq->q, rq->mq_ctx->cpu) == params->hctx) + if (rq->mq_hctx == params->hctx) __blk_mq_debugfs_rq_show(params->m, list_entry_rq(&rq->queuelist)); + + return true; } static int hctx_busy_show(void *data, struct seq_file *m) @@ -446,6 +447,21 @@ static int hctx_busy_show(void *data, struct seq_file *m) return 0; } +static const char *const hctx_types[] = { + [HCTX_TYPE_DEFAULT] = "default", + [HCTX_TYPE_READ] = "read", + [HCTX_TYPE_POLL] = "poll", +}; + +static int hctx_type_show(void *data, struct seq_file *m) +{ + struct blk_mq_hw_ctx *hctx = data; + + BUILD_BUG_ON(ARRAY_SIZE(hctx_types) != HCTX_MAX_TYPES); + seq_printf(m, "%s\n", hctx_types[hctx->type]); + return 0; +} + static int hctx_ctx_map_show(void *data, struct seq_file *m) { struct blk_mq_hw_ctx *hctx = data; @@ -636,36 +652,43 @@ static int hctx_dispatch_busy_show(void *data, struct seq_file *m) return 0; } -static void *ctx_rq_list_start(struct seq_file *m, loff_t *pos) - __acquires(&ctx->lock) -{ - struct blk_mq_ctx *ctx = m->private; - - spin_lock(&ctx->lock); - return seq_list_start(&ctx->rq_list, *pos); -} - -static void *ctx_rq_list_next(struct seq_file *m, void *v, loff_t *pos) -{ - struct blk_mq_ctx *ctx = m->private; - - return seq_list_next(v, &ctx->rq_list, pos); +#define CTX_RQ_SEQ_OPS(name, type) \ +static void *ctx_##name##_rq_list_start(struct seq_file *m, loff_t *pos) \ + __acquires(&ctx->lock) \ +{ \ + struct blk_mq_ctx *ctx = m->private; \ + \ + spin_lock(&ctx->lock); \ + return seq_list_start(&ctx->rq_lists[type], *pos); \ +} \ + \ +static void *ctx_##name##_rq_list_next(struct seq_file *m, void *v, \ + loff_t *pos) \ +{ \ + struct blk_mq_ctx *ctx = m->private; \ + \ + return seq_list_next(v, &ctx->rq_lists[type], pos); \ +} \ + \ +static void ctx_##name##_rq_list_stop(struct seq_file *m, void *v) \ + __releases(&ctx->lock) \ +{ \ + struct blk_mq_ctx *ctx = m->private; \ + \ + spin_unlock(&ctx->lock); \ +} \ + \ +static const struct seq_operations ctx_##name##_rq_list_seq_ops = { \ + .start = ctx_##name##_rq_list_start, \ + .next = ctx_##name##_rq_list_next, \ + .stop = ctx_##name##_rq_list_stop, \ + .show = blk_mq_debugfs_rq_show, \ } -static void ctx_rq_list_stop(struct seq_file *m, void *v) - __releases(&ctx->lock) -{ - struct blk_mq_ctx *ctx = m->private; - - spin_unlock(&ctx->lock); -} +CTX_RQ_SEQ_OPS(default, HCTX_TYPE_DEFAULT); +CTX_RQ_SEQ_OPS(read, HCTX_TYPE_READ); +CTX_RQ_SEQ_OPS(poll, HCTX_TYPE_POLL); -static const struct seq_operations ctx_rq_list_seq_ops = { - .start = ctx_rq_list_start, - .next = ctx_rq_list_next, - .stop = ctx_rq_list_stop, - .show = blk_mq_debugfs_rq_show, -}; static int ctx_dispatched_show(void *data, struct seq_file *m) { struct blk_mq_ctx *ctx = data; @@ -798,11 +821,14 @@ static const struct blk_mq_debugfs_attr blk_mq_debugfs_hctx_attrs[] = { {"run", 0600, hctx_run_show, hctx_run_write}, {"active", 0400, hctx_active_show}, {"dispatch_busy", 0400, hctx_dispatch_busy_show}, + {"type", 0400, hctx_type_show}, {}, }; static const struct blk_mq_debugfs_attr blk_mq_debugfs_ctx_attrs[] = { - {"rq_list", 0400, .seq_ops = &ctx_rq_list_seq_ops}, + {"default_rq_list", 0400, .seq_ops = &ctx_default_rq_list_seq_ops}, + {"read_rq_list", 0400, .seq_ops = &ctx_read_rq_list_seq_ops}, + {"poll_rq_list", 0400, .seq_ops = &ctx_poll_rq_list_seq_ops}, {"dispatched", 0600, 
ctx_dispatched_show, ctx_dispatched_write}, {"merged", 0600, ctx_merged_show, ctx_merged_write}, {"completed", 0600, ctx_completed_show, ctx_completed_write}, @@ -856,6 +882,15 @@ int blk_mq_debugfs_register(struct request_queue *q) goto err; } + if (q->rq_qos) { + struct rq_qos *rqos = q->rq_qos; + + while (rqos) { + blk_mq_debugfs_register_rqos(rqos); + rqos = rqos->next; + } + } + return 0; err: @@ -978,6 +1013,50 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q) q->sched_debugfs_dir = NULL; } +void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos) +{ + debugfs_remove_recursive(rqos->debugfs_dir); + rqos->debugfs_dir = NULL; +} + +int blk_mq_debugfs_register_rqos(struct rq_qos *rqos) +{ + struct request_queue *q = rqos->q; + const char *dir_name = rq_qos_id_to_name(rqos->id); + + if (!q->debugfs_dir) + return -ENOENT; + + if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs) + return 0; + + if (!q->rqos_debugfs_dir) { + q->rqos_debugfs_dir = debugfs_create_dir("rqos", + q->debugfs_dir); + if (!q->rqos_debugfs_dir) + return -ENOMEM; + } + + rqos->debugfs_dir = debugfs_create_dir(dir_name, + rqos->q->rqos_debugfs_dir); + if (!rqos->debugfs_dir) + return -ENOMEM; + + if (!debugfs_create_files(rqos->debugfs_dir, rqos, + rqos->ops->debugfs_attrs)) + goto err; + return 0; + err: + blk_mq_debugfs_unregister_rqos(rqos); + return -ENOMEM; +} + +void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q) +{ + debugfs_remove_recursive(q->rqos_debugfs_dir); + q->rqos_debugfs_dir = NULL; +} + int blk_mq_debugfs_register_sched_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx) { diff --git a/block/blk-mq-debugfs.h b/block/blk-mq-debugfs.h index a9160be12be0..8c9012a578c1 100644 --- a/block/blk-mq-debugfs.h +++ b/block/blk-mq-debugfs.h @@ -31,6 +31,10 @@ void blk_mq_debugfs_unregister_sched(struct request_queue *q); int blk_mq_debugfs_register_sched_hctx(struct request_queue *q, struct blk_mq_hw_ctx *hctx); void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx); + +int blk_mq_debugfs_register_rqos(struct rq_qos *rqos); +void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos); +void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q); #else static inline int blk_mq_debugfs_register(struct request_queue *q) { @@ -78,6 +82,19 @@ static inline int blk_mq_debugfs_register_sched_hctx(struct request_queue *q, static inline void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx) { } + +static inline int blk_mq_debugfs_register_rqos(struct rq_qos *rqos) +{ + return 0; +} + +static inline void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos) +{ +} + +static inline void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q) +{ +} #endif #ifdef CONFIG_BLK_DEBUG_FS_ZONED diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c index db644ec624f5..1dce18553984 100644 --- a/block/blk-mq-pci.c +++ b/block/blk-mq-pci.c @@ -31,26 +31,26 @@ * that maps a queue to the CPUs that have irq affinity for the corresponding * vector. 
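 *
 * Worked example of the queue_offset handling added below (numbers are
 * illustrative only): for a map with qmap->nr_queues = 4 and
 * qmap->queue_offset = 2, a CPU whose IRQ affinity matches this map's
 * vector 1 gets
 *
 *	qmap->mq_map[cpu] = 2 + 1 = 3
 *
 * so mq_map holds hardware queue indexes global to the tag set while
 * the vector index stays local to the map.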
*/ -int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev, +int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev, int offset) { const struct cpumask *mask; unsigned int queue, cpu; - for (queue = 0; queue < set->nr_hw_queues; queue++) { + for (queue = 0; queue < qmap->nr_queues; queue++) { mask = pci_irq_get_affinity(pdev, queue + offset); if (!mask) goto fallback; for_each_cpu(cpu, mask) - set->mq_map[cpu] = queue; + qmap->mq_map[cpu] = qmap->queue_offset + queue; } return 0; fallback: - WARN_ON_ONCE(set->nr_hw_queues > 1); - blk_mq_clear_mq_map(set); + WARN_ON_ONCE(qmap->nr_queues > 1); + blk_mq_clear_mq_map(qmap); return 0; } EXPORT_SYMBOL_GPL(blk_mq_pci_map_queues); diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c index 996167f1de18..45030a81a1ed 100644 --- a/block/blk-mq-rdma.c +++ b/block/blk-mq-rdma.c @@ -29,24 +29,24 @@ * @set->nr_hw_queues, or @dev does not provide an affinity mask for a * vector, we fallback to the naive mapping. */ -int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set, +int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map, struct ib_device *dev, int first_vec) { const struct cpumask *mask; unsigned int queue, cpu; - for (queue = 0; queue < set->nr_hw_queues; queue++) { + for (queue = 0; queue < map->nr_queues; queue++) { mask = ib_get_vector_affinity(dev, first_vec + queue); if (!mask) goto fallback; for_each_cpu(cpu, mask) - set->mq_map[cpu] = queue; + map->mq_map[cpu] = map->queue_offset + queue; } return 0; fallback: - return blk_mq_map_queues(set); + return blk_mq_map_queues(map); } EXPORT_SYMBOL_GPL(blk_mq_rdma_map_queues); diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c index 29bfe8017a2d..140933e4a7d1 100644 --- a/block/blk-mq-sched.c +++ b/block/blk-mq-sched.c @@ -31,15 +31,22 @@ void blk_mq_sched_free_hctx_data(struct request_queue *q, } EXPORT_SYMBOL_GPL(blk_mq_sched_free_hctx_data); -void blk_mq_sched_assign_ioc(struct request *rq, struct bio *bio) +void blk_mq_sched_assign_ioc(struct request *rq) { struct request_queue *q = rq->q; - struct io_context *ioc = rq_ioc(bio); + struct io_context *ioc; struct io_cq *icq; - spin_lock_irq(q->queue_lock); + /* + * May not have an IO context if it's a passthrough request + */ + ioc = current->io_context; + if (!ioc) + return; + + spin_lock_irq(&q->queue_lock); icq = ioc_lookup_icq(ioc, q); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); if (!icq) { icq = ioc_create_icq(ioc, q, GFP_ATOMIC); @@ -54,13 +61,14 @@ void blk_mq_sched_assign_ioc(struct request *rq, struct bio *bio) * Mark a hardware queue as needing a restart. For shared queues, maintain * a count of how many hardware queues are marked for restart. 
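 *
 * One plausible use of the export added below (a sketch only; the
 * in-tree consumer is not shown in this hunk, and "my_dispatch" and
 * "my_pick_request" are hypothetical names): a scheduler whose
 * ->dispatch_request() holds work back can mark the hctx so that a
 * later completion re-runs dispatch via blk_mq_sched_restart():
 *
 *	static struct request *my_dispatch(struct blk_mq_hw_ctx *hctx)
 *	{
 *		struct request *rq = my_pick_request(hctx);
 *
 *		if (!rq)
 *			blk_mq_sched_mark_restart_hctx(hctx);
 *		return rq;
 *	}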
*/ -static void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx) +void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx) { if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) return; set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state); } +EXPORT_SYMBOL_GPL(blk_mq_sched_mark_restart_hctx); void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx) { @@ -85,14 +93,13 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx) do { struct request *rq; - if (e->type->ops.mq.has_work && - !e->type->ops.mq.has_work(hctx)) + if (e->type->ops.has_work && !e->type->ops.has_work(hctx)) break; if (!blk_mq_get_dispatch_budget(hctx)) break; - rq = e->type->ops.mq.dispatch_request(hctx); + rq = e->type->ops.dispatch_request(hctx); if (!rq) { blk_mq_put_dispatch_budget(hctx); break; @@ -110,7 +117,7 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx) static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx) { - unsigned idx = ctx->index_hw; + unsigned short idx = ctx->index_hw[hctx->type]; if (++idx == hctx->nr_ctx) idx = 0; @@ -163,7 +170,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx) { struct request_queue *q = hctx->queue; struct elevator_queue *e = q->elevator; - const bool has_sched_dispatch = e && e->type->ops.mq.dispatch_request; + const bool has_sched_dispatch = e && e->type->ops.dispatch_request; LIST_HEAD(rq_list); /* RCU or SRCU read lock is needed before checking quiesced flag */ @@ -295,11 +302,14 @@ EXPORT_SYMBOL_GPL(blk_mq_bio_list_merge); * too much time checking for merges. */ static bool blk_mq_attempt_merge(struct request_queue *q, + struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx, struct bio *bio) { + enum hctx_type type = hctx->type; + lockdep_assert_held(&ctx->lock); - if (blk_mq_bio_list_merge(q, &ctx->rq_list, bio)) { + if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio)) { ctx->rq_merged++; return true; } @@ -311,19 +321,21 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio) { struct elevator_queue *e = q->elevator; struct blk_mq_ctx *ctx = blk_mq_get_ctx(q); - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu); + struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx->cpu); bool ret = false; + enum hctx_type type; - if (e && e->type->ops.mq.bio_merge) { + if (e && e->type->ops.bio_merge) { blk_mq_put_ctx(ctx); - return e->type->ops.mq.bio_merge(hctx, bio); + return e->type->ops.bio_merge(hctx, bio); } + type = hctx->type; if ((hctx->flags & BLK_MQ_F_SHOULD_MERGE) && - !list_empty_careful(&ctx->rq_list)) { + !list_empty_careful(&ctx->rq_lists[type])) { /* default per sw-queue merge */ spin_lock(&ctx->lock); - ret = blk_mq_attempt_merge(q, ctx, bio); + ret = blk_mq_attempt_merge(q, hctx, ctx, bio); spin_unlock(&ctx->lock); } @@ -367,7 +379,7 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head, struct request_queue *q = rq->q; struct elevator_queue *e = q->elevator; struct blk_mq_ctx *ctx = rq->mq_ctx; - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu); + struct blk_mq_hw_ctx *hctx = rq->mq_hctx; /* flush rq in flush machinery need to be dispatched directly */ if (!(rq->rq_flags & RQF_FLUSH_SEQ) && op_is_flush(rq->cmd_flags)) { @@ -380,11 +392,11 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head, if (blk_mq_sched_bypass_insert(hctx, !!e, rq)) goto run; - if (e && e->type->ops.mq.insert_requests) { + if (e && e->type->ops.insert_requests) { LIST_HEAD(list); list_add(&rq->queuelist, &list); - 
e->type->ops.mq.insert_requests(hctx, &list, at_head); + e->type->ops.insert_requests(hctx, &list, at_head); } else { spin_lock(&ctx->lock); __blk_mq_insert_request(hctx, rq, at_head); @@ -396,27 +408,25 @@ run: blk_mq_run_hw_queue(hctx, async); } -void blk_mq_sched_insert_requests(struct request_queue *q, +void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx, struct list_head *list, bool run_queue_async) { - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu); - struct elevator_queue *e = hctx->queue->elevator; + struct elevator_queue *e; - if (e && e->type->ops.mq.insert_requests) - e->type->ops.mq.insert_requests(hctx, list, false); + e = hctx->queue->elevator; + if (e && e->type->ops.insert_requests) + e->type->ops.insert_requests(hctx, list, false); else { /* * try to issue requests directly if the hw queue isn't * busy in case of 'none' scheduler, and this way may save * us one extra enqueue & dequeue to sw queue. */ - if (!hctx->dispatch_busy && !e && !run_queue_async) { + if (!hctx->dispatch_busy && !e && !run_queue_async) blk_mq_try_issue_list_directly(hctx, list); - if (list_empty(list)) - return; - } - blk_mq_insert_requests(hctx, ctx, list); + else + blk_mq_insert_requests(hctx, ctx, list); } blk_mq_run_hw_queue(hctx, run_queue_async); @@ -489,15 +499,15 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e) goto err; } - ret = e->ops.mq.init_sched(q, e); + ret = e->ops.init_sched(q, e); if (ret) goto err; blk_mq_debugfs_register_sched(q); queue_for_each_hw_ctx(q, hctx, i) { - if (e->ops.mq.init_hctx) { - ret = e->ops.mq.init_hctx(hctx, i); + if (e->ops.init_hctx) { + ret = e->ops.init_hctx(hctx, i); if (ret) { eq = q->elevator; blk_mq_exit_sched(q, eq); @@ -523,14 +533,14 @@ void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e) queue_for_each_hw_ctx(q, hctx, i) { blk_mq_debugfs_unregister_sched_hctx(hctx); - if (e->type->ops.mq.exit_hctx && hctx->sched_data) { - e->type->ops.mq.exit_hctx(hctx, i); + if (e->type->ops.exit_hctx && hctx->sched_data) { + e->type->ops.exit_hctx(hctx, i); hctx->sched_data = NULL; } } blk_mq_debugfs_unregister_sched(q); - if (e->type->ops.mq.exit_sched) - e->type->ops.mq.exit_sched(e); + if (e->type->ops.exit_sched) + e->type->ops.exit_sched(e); blk_mq_sched_tags_teardown(q); q->elevator = NULL; } diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h index 8a9544203173..c7bdb52367ac 100644 --- a/block/blk-mq-sched.h +++ b/block/blk-mq-sched.h @@ -8,18 +8,19 @@ void blk_mq_sched_free_hctx_data(struct request_queue *q, void (*exit)(struct blk_mq_hw_ctx *)); -void blk_mq_sched_assign_ioc(struct request *rq, struct bio *bio); +void blk_mq_sched_assign_ioc(struct request *rq); void blk_mq_sched_request_inserted(struct request *rq); bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio, struct request **merged_request); bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio); bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq); +void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx); void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx); void blk_mq_sched_insert_request(struct request *rq, bool at_head, bool run_queue, bool async); -void blk_mq_sched_insert_requests(struct request_queue *q, +void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx, struct list_head *list, bool run_queue_async); @@ -43,8 +44,8 @@ blk_mq_sched_allow_merge(struct request_queue *q, struct request *rq, 
{ struct elevator_queue *e = q->elevator; - if (e && e->type->ops.mq.allow_merge) - return e->type->ops.mq.allow_merge(q, rq, bio); + if (e && e->type->ops.allow_merge) + return e->type->ops.allow_merge(q, rq, bio); return true; } @@ -53,8 +54,8 @@ static inline void blk_mq_sched_completed_request(struct request *rq, u64 now) { struct elevator_queue *e = rq->q->elevator; - if (e && e->type->ops.mq.completed_request) - e->type->ops.mq.completed_request(rq, now); + if (e && e->type->ops.completed_request) + e->type->ops.completed_request(rq, now); } static inline void blk_mq_sched_started_request(struct request *rq) @@ -62,8 +63,8 @@ static inline void blk_mq_sched_started_request(struct request *rq) struct request_queue *q = rq->q; struct elevator_queue *e = q->elevator; - if (e && e->type->ops.mq.started_request) - e->type->ops.mq.started_request(rq); + if (e && e->type->ops.started_request) + e->type->ops.started_request(rq); } static inline void blk_mq_sched_requeue_request(struct request *rq) @@ -71,16 +72,16 @@ static inline void blk_mq_sched_requeue_request(struct request *rq) struct request_queue *q = rq->q; struct elevator_queue *e = q->elevator; - if (e && e->type->ops.mq.requeue_request) - e->type->ops.mq.requeue_request(rq); + if (e && e->type->ops.requeue_request) + e->type->ops.requeue_request(rq); } static inline bool blk_mq_sched_has_work(struct blk_mq_hw_ctx *hctx) { struct elevator_queue *e = hctx->queue->elevator; - if (e && e->type->ops.mq.has_work) - return e->type->ops.mq.has_work(hctx); + if (e && e->type->ops.has_work) + return e->type->ops.has_work(hctx); return false; } diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c index aafb44224c89..3f9c3f4ac44c 100644 --- a/block/blk-mq-sysfs.c +++ b/block/blk-mq-sysfs.c @@ -15,6 +15,18 @@ static void blk_mq_sysfs_release(struct kobject *kobj) { + struct blk_mq_ctxs *ctxs = container_of(kobj, struct blk_mq_ctxs, kobj); + + free_percpu(ctxs->queue_ctx); + kfree(ctxs); +} + +static void blk_mq_ctx_sysfs_release(struct kobject *kobj) +{ + struct blk_mq_ctx *ctx = container_of(kobj, struct blk_mq_ctx, kobj); + + /* ctx->ctxs won't be released until all ctx are freed */ + kobject_put(&ctx->ctxs->kobj); } static void blk_mq_hw_sysfs_release(struct kobject *kobj) @@ -203,7 +215,7 @@ static struct kobj_type blk_mq_ktype = { static struct kobj_type blk_mq_ctx_ktype = { .sysfs_ops = &blk_mq_sysfs_ops, .default_attrs = default_ctx_attrs, - .release = blk_mq_sysfs_release, + .release = blk_mq_ctx_sysfs_release, }; static struct kobj_type blk_mq_hw_ktype = { @@ -235,7 +247,7 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx) if (!hctx->nr_ctx) return 0; - ret = kobject_add(&hctx->kobj, &q->mq_kobj, "%u", hctx->queue_num); + ret = kobject_add(&hctx->kobj, q->mq_kobj, "%u", hctx->queue_num); if (ret) return ret; @@ -258,8 +270,8 @@ void blk_mq_unregister_dev(struct device *dev, struct request_queue *q) queue_for_each_hw_ctx(q, hctx, i) blk_mq_unregister_hctx(hctx); - kobject_uevent(&q->mq_kobj, KOBJ_REMOVE); - kobject_del(&q->mq_kobj); + kobject_uevent(q->mq_kobj, KOBJ_REMOVE); + kobject_del(q->mq_kobj); kobject_put(&dev->kobj); q->mq_sysfs_init_done = false; @@ -279,7 +291,7 @@ void blk_mq_sysfs_deinit(struct request_queue *q) ctx = per_cpu_ptr(q->queue_ctx, cpu); kobject_put(&ctx->kobj); } - kobject_put(&q->mq_kobj); + kobject_put(q->mq_kobj); } void blk_mq_sysfs_init(struct request_queue *q) @@ -287,10 +299,12 @@ void blk_mq_sysfs_init(struct request_queue *q) struct blk_mq_ctx *ctx; int cpu; - 
kobject_init(&q->mq_kobj, &blk_mq_ktype); + kobject_init(q->mq_kobj, &blk_mq_ktype); for_each_possible_cpu(cpu) { ctx = per_cpu_ptr(q->queue_ctx, cpu); + + kobject_get(q->mq_kobj); kobject_init(&ctx->kobj, &blk_mq_ctx_ktype); } } @@ -303,11 +317,11 @@ int __blk_mq_register_dev(struct device *dev, struct request_queue *q) WARN_ON_ONCE(!q->kobj.parent); lockdep_assert_held(&q->sysfs_lock); - ret = kobject_add(&q->mq_kobj, kobject_get(&dev->kobj), "%s", "mq"); + ret = kobject_add(q->mq_kobj, kobject_get(&dev->kobj), "%s", "mq"); if (ret < 0) goto out; - kobject_uevent(&q->mq_kobj, KOBJ_ADD); + kobject_uevent(q->mq_kobj, KOBJ_ADD); queue_for_each_hw_ctx(q, hctx, i) { ret = blk_mq_register_hctx(hctx); @@ -324,8 +338,8 @@ unreg: while (--i >= 0) blk_mq_unregister_hctx(q->queue_hw_ctx[i]); - kobject_uevent(&q->mq_kobj, KOBJ_REMOVE); - kobject_del(&q->mq_kobj); + kobject_uevent(q->mq_kobj, KOBJ_REMOVE); + kobject_del(q->mq_kobj); kobject_put(&dev->kobj); return ret; } @@ -340,7 +354,6 @@ int blk_mq_register_dev(struct device *dev, struct request_queue *q) return ret; } -EXPORT_SYMBOL_GPL(blk_mq_register_dev); void blk_mq_sysfs_unregister(struct request_queue *q) { diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index cfda95b85d34..2089c6c62f44 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -110,7 +110,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) struct blk_mq_tags *tags = blk_mq_tags_from_data(data); struct sbitmap_queue *bt; struct sbq_wait_state *ws; - DEFINE_WAIT(wait); + DEFINE_SBQ_WAIT(wait); unsigned int tag_offset; bool drop_ctx; int tag; @@ -154,8 +154,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) if (tag != -1) break; - prepare_to_wait_exclusive(&ws->wait, &wait, - TASK_UNINTERRUPTIBLE); + sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE); tag = __blk_mq_get_tag(data, bt); if (tag != -1) @@ -167,16 +166,17 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) bt_prev = bt; io_schedule(); + sbitmap_finish_wait(bt, ws, &wait); + data->ctx = blk_mq_get_ctx(data->q); - data->hctx = blk_mq_map_queue(data->q, data->ctx->cpu); + data->hctx = blk_mq_map_queue(data->q, data->cmd_flags, + data->ctx->cpu); tags = blk_mq_tags_from_data(data); if (data->flags & BLK_MQ_REQ_RESERVED) bt = &tags->breserved_tags; else bt = &tags->bitmap_tags; - finish_wait(&ws->wait, &wait); - /* * If destination hw queue is changed, fake wake up on * previous queue for compensating the wake up miss, so @@ -191,7 +191,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) if (drop_ctx && data->ctx) blk_mq_put_ctx(data->ctx); - finish_wait(&ws->wait, &wait); + sbitmap_finish_wait(bt, ws, &wait); found_tag: return tag + tag_offset; @@ -235,7 +235,7 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data) * test and set the bit before assigning ->rqs[]. */ if (rq && rq->q == hctx->queue) - iter_data->fn(hctx, rq, iter_data->data, reserved); + return iter_data->fn(hctx, rq, iter_data->data, reserved); return true; } @@ -247,7 +247,8 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data) * @fn: Pointer to the function that will be called for each request * associated with @hctx that has been assigned a driver tag. * @fn will be called as follows: @fn(@hctx, rq, @data, @reserved) - * where rq is a pointer to a request. + * where rq is a pointer to a request. Return true to continue + * iterating tags, false to stop. * @data: Will be passed as third argument to @fn. 
* @reserved: Indicates whether @bt is the breserved_tags member or the * bitmap_tags member of struct blk_mq_tags. @@ -288,7 +289,7 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data) */ rq = tags->rqs[bitnr]; if (rq && blk_mq_request_started(rq)) - iter_data->fn(rq, iter_data->data, reserved); + return iter_data->fn(rq, iter_data->data, reserved); return true; } @@ -300,7 +301,8 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data) * or the bitmap_tags member of struct blk_mq_tags. * @fn: Pointer to the function that will be called for each started * request. @fn will be called as follows: @fn(rq, @data, - * @reserved) where rq is a pointer to a request. + * @reserved) where rq is a pointer to a request. Return true + * to continue iterating tags, false to stop. * @data: Will be passed as second argument to @fn. * @reserved: Indicates whether @bt is the breserved_tags member or the * bitmap_tags member of struct blk_mq_tags. @@ -325,7 +327,8 @@ static void bt_tags_for_each(struct blk_mq_tags *tags, struct sbitmap_queue *bt, * @fn: Pointer to the function that will be called for each started * request. @fn will be called as follows: @fn(rq, @priv, * reserved) where rq is a pointer to a request. 'reserved' - * indicates whether or not @rq is a reserved request. + * indicates whether or not @rq is a reserved request. Return + * true to continue iterating tags, false to stop. * @priv: Will be passed as second argument to @fn. */ static void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags, @@ -342,7 +345,8 @@ static void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags, * @fn: Pointer to the function that will be called for each started * request. @fn will be called as follows: @fn(rq, @priv, * reserved) where rq is a pointer to a request. 'reserved' - * indicates whether or not @rq is a reserved request. + * indicates whether or not @rq is a reserved request. Return + * true to continue iterating tags, false to stop. * @priv: Will be passed as second argument to @fn. */ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset, @@ -526,16 +530,7 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx, */ u32 blk_mq_unique_tag(struct request *rq) { - struct request_queue *q = rq->q; - struct blk_mq_hw_ctx *hctx; - int hwq = 0; - - if (q->mq_ops) { - hctx = blk_mq_map_queue(q, rq->mq_ctx->cpu); - hwq = hctx->queue_num; - } - - return (hwq << BLK_MQ_UNIQUE_TAG_BITS) | + return (rq->mq_hctx->queue_num << BLK_MQ_UNIQUE_TAG_BITS) | (rq->tag & BLK_MQ_UNIQUE_TAG_MASK); } EXPORT_SYMBOL(blk_mq_unique_tag); diff --git a/block/blk-mq-virtio.c b/block/blk-mq-virtio.c index c3afbca11299..370827163835 100644 --- a/block/blk-mq-virtio.c +++ b/block/blk-mq-virtio.c @@ -29,7 +29,7 @@ * that maps a queue to the CPUs that have irq affinity for the corresponding * vector. 
*/ -int blk_mq_virtio_map_queues(struct blk_mq_tag_set *set, +int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap, struct virtio_device *vdev, int first_vec) { const struct cpumask *mask; @@ -38,17 +38,17 @@ int blk_mq_virtio_map_queues(struct blk_mq_tag_set *set, if (!vdev->config->get_vq_affinity) goto fallback; - for (queue = 0; queue < set->nr_hw_queues; queue++) { + for (queue = 0; queue < qmap->nr_queues; queue++) { mask = vdev->config->get_vq_affinity(vdev, first_vec + queue); if (!mask) goto fallback; for_each_cpu(cpu, mask) - set->mq_map[cpu] = queue; + qmap->mq_map[cpu] = qmap->queue_offset + queue; } return 0; fallback: - return blk_mq_map_queues(set); + return blk_mq_map_queues(qmap); } EXPORT_SYMBOL_GPL(blk_mq_virtio_map_queues); diff --git a/block/blk-mq.c b/block/blk-mq.c index 6a7566244de3..3ba37b9e15e9 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -38,7 +38,6 @@ #include "blk-mq-sched.h" #include "blk-rq-qos.h" -static bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie); static void blk_mq_poll_stats_start(struct request_queue *q); static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb); @@ -75,14 +74,18 @@ static bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx) static void blk_mq_hctx_mark_pending(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx) { - if (!sbitmap_test_bit(&hctx->ctx_map, ctx->index_hw)) - sbitmap_set_bit(&hctx->ctx_map, ctx->index_hw); + const int bit = ctx->index_hw[hctx->type]; + + if (!sbitmap_test_bit(&hctx->ctx_map, bit)) + sbitmap_set_bit(&hctx->ctx_map, bit); } static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx) { - sbitmap_clear_bit(&hctx->ctx_map, ctx->index_hw); + const int bit = ctx->index_hw[hctx->type]; + + sbitmap_clear_bit(&hctx->ctx_map, bit); } struct mq_inflight { @@ -90,33 +93,33 @@ struct mq_inflight { unsigned int *inflight; }; -static void blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx, +static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx, struct request *rq, void *priv, bool reserved) { struct mq_inflight *mi = priv; /* - * index[0] counts the specific partition that was asked for. index[1] - * counts the ones that are active on the whole device, so increment - * that if mi->part is indeed a partition, and not a whole device. + * index[0] counts the specific partition that was asked for. 
*/ if (rq->part == mi->part) mi->inflight[0]++; - if (mi->part->partno) - mi->inflight[1]++; + + return true; } -void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part, - unsigned int inflight[2]) +unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part) { + unsigned inflight[2]; struct mq_inflight mi = { .part = part, .inflight = inflight, }; inflight[0] = inflight[1] = 0; blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi); + + return inflight[0]; } -static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx, +static bool blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx, struct request *rq, void *priv, bool reserved) { @@ -124,6 +127,8 @@ static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx, if (rq->part == mi->part) mi->inflight[rq_data_dir(rq)]++; + + return true; } void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part, @@ -142,7 +147,7 @@ void blk_freeze_queue_start(struct request_queue *q) freeze_depth = atomic_inc_return(&q->mq_freeze_depth); if (freeze_depth == 1) { percpu_ref_kill(&q->q_usage_counter); - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_run_hw_queues(q, false); } } @@ -177,8 +182,6 @@ void blk_freeze_queue(struct request_queue *q) * exported to drivers as the only user for unfreeze is blk_mq. */ blk_freeze_queue_start(q); - if (!q->mq_ops) - blk_drain_queue(q); blk_mq_freeze_queue_wait(q); } @@ -275,6 +278,15 @@ bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx) } EXPORT_SYMBOL(blk_mq_can_queue); +/* + * Only need start/end time stamping if we have stats enabled, or using + * an IO scheduler. + */ +static inline bool blk_mq_need_time_stamp(struct request *rq) +{ + return (rq->rq_flags & RQF_IO_STAT) || rq->q->elevator; +} + static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, unsigned int tag, unsigned int op) { @@ -298,8 +310,8 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, /* csd/requeue_work/fifo_time is initialized before use */ rq->q = data->q; rq->mq_ctx = data->ctx; + rq->mq_hctx = data->hctx; rq->rq_flags = rq_flags; - rq->cpu = -1; rq->cmd_flags = op; if (data->flags & BLK_MQ_REQ_PREEMPT) rq->rq_flags |= RQF_PREEMPT; @@ -310,7 +322,10 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, RB_CLEAR_NODE(&rq->rb_node); rq->rq_disk = NULL; rq->part = NULL; - rq->start_time_ns = ktime_get_ns(); + if (blk_mq_need_time_stamp(rq)) + rq->start_time_ns = ktime_get_ns(); + else + rq->start_time_ns = 0; rq->io_start_time_ns = 0; rq->nr_phys_segments = 0; #if defined(CONFIG_BLK_DEV_INTEGRITY) @@ -319,27 +334,22 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, rq->special = NULL; /* tag was already set */ rq->extra_len = 0; - rq->__deadline = 0; + WRITE_ONCE(rq->deadline, 0); - INIT_LIST_HEAD(&rq->timeout_list); rq->timeout = 0; rq->end_io = NULL; rq->end_io_data = NULL; rq->next_rq = NULL; -#ifdef CONFIG_BLK_CGROUP - rq->rl = NULL; -#endif - data->ctx->rq_dispatched[op_is_sync(op)]++; refcount_set(&rq->ref, 1); return rq; } static struct request *blk_mq_get_request(struct request_queue *q, - struct bio *bio, unsigned int op, - struct blk_mq_alloc_data *data) + struct bio *bio, + struct blk_mq_alloc_data *data) { struct elevator_queue *e = q->elevator; struct request *rq; @@ -353,8 +363,9 @@ static struct request *blk_mq_get_request(struct request_queue *q, put_ctx_on_error = true; } if (likely(!data->hctx)) - data->hctx = blk_mq_map_queue(q, data->ctx->cpu); - if (op & REQ_NOWAIT) + data->hctx = 
blk_mq_map_queue(q, data->cmd_flags, + data->ctx->cpu); + if (data->cmd_flags & REQ_NOWAIT) data->flags |= BLK_MQ_REQ_NOWAIT; if (e) { @@ -365,9 +376,10 @@ static struct request *blk_mq_get_request(struct request_queue *q, * dispatch list. Don't include reserved tags in the * limiting, as it isn't useful. */ - if (!op_is_flush(op) && e->type->ops.mq.limit_depth && + if (!op_is_flush(data->cmd_flags) && + e->type->ops.limit_depth && !(data->flags & BLK_MQ_REQ_RESERVED)) - e->type->ops.mq.limit_depth(op, data); + e->type->ops.limit_depth(data->cmd_flags, data); } else { blk_mq_tag_busy(data->hctx); } @@ -382,14 +394,14 @@ static struct request *blk_mq_get_request(struct request_queue *q, return NULL; } - rq = blk_mq_rq_ctx_init(data, tag, op); - if (!op_is_flush(op)) { + rq = blk_mq_rq_ctx_init(data, tag, data->cmd_flags); + if (!op_is_flush(data->cmd_flags)) { rq->elv.icq = NULL; - if (e && e->type->ops.mq.prepare_request) { - if (e->type->icq_cache && rq_ioc(bio)) - blk_mq_sched_assign_ioc(rq, bio); + if (e && e->type->ops.prepare_request) { + if (e->type->icq_cache) + blk_mq_sched_assign_ioc(rq); - e->type->ops.mq.prepare_request(rq, bio); + e->type->ops.prepare_request(rq, bio); rq->rq_flags |= RQF_ELVPRIV; } } @@ -400,7 +412,7 @@ static struct request *blk_mq_get_request(struct request_queue *q, struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op, blk_mq_req_flags_t flags) { - struct blk_mq_alloc_data alloc_data = { .flags = flags }; + struct blk_mq_alloc_data alloc_data = { .flags = flags, .cmd_flags = op }; struct request *rq; int ret; @@ -408,7 +420,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op, if (ret) return ERR_PTR(ret); - rq = blk_mq_get_request(q, NULL, op, &alloc_data); + rq = blk_mq_get_request(q, NULL, &alloc_data); blk_queue_exit(q); if (!rq) @@ -426,7 +438,7 @@ EXPORT_SYMBOL(blk_mq_alloc_request); struct request *blk_mq_alloc_request_hctx(struct request_queue *q, unsigned int op, blk_mq_req_flags_t flags, unsigned int hctx_idx) { - struct blk_mq_alloc_data alloc_data = { .flags = flags }; + struct blk_mq_alloc_data alloc_data = { .flags = flags, .cmd_flags = op }; struct request *rq; unsigned int cpu; int ret; @@ -459,7 +471,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask); alloc_data.ctx = __blk_mq_get_ctx(q, cpu); - rq = blk_mq_get_request(q, NULL, op, &alloc_data); + rq = blk_mq_get_request(q, NULL, &alloc_data); blk_queue_exit(q); if (!rq) @@ -473,10 +485,11 @@ static void __blk_mq_free_request(struct request *rq) { struct request_queue *q = rq->q; struct blk_mq_ctx *ctx = rq->mq_ctx; - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu); + struct blk_mq_hw_ctx *hctx = rq->mq_hctx; const int sched_tag = rq->internal_tag; blk_pm_mark_last_busy(rq); + rq->mq_hctx = NULL; if (rq->tag != -1) blk_mq_put_tag(hctx, hctx->tags, ctx, rq->tag); if (sched_tag != -1) @@ -490,11 +503,11 @@ void blk_mq_free_request(struct request *rq) struct request_queue *q = rq->q; struct elevator_queue *e = q->elevator; struct blk_mq_ctx *ctx = rq->mq_ctx; - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu); + struct blk_mq_hw_ctx *hctx = rq->mq_hctx; if (rq->rq_flags & RQF_ELVPRIV) { - if (e && e->type->ops.mq.finish_request) - e->type->ops.mq.finish_request(rq); + if (e && e->type->ops.finish_request) + e->type->ops.finish_request(rq); if (rq->elv.icq) { put_io_context(rq->elv.icq->ioc); rq->elv.icq = NULL; @@ -510,9 +523,6 @@ 
void blk_mq_free_request(struct request *rq) rq_qos_done(q, rq); - if (blk_rq_rl(rq)) - blk_put_rl(blk_rq_rl(rq)); - WRITE_ONCE(rq->state, MQ_RQ_IDLE); if (refcount_dec_and_test(&rq->ref)) __blk_mq_free_request(rq); @@ -521,7 +531,10 @@ EXPORT_SYMBOL_GPL(blk_mq_free_request); inline void __blk_mq_end_request(struct request *rq, blk_status_t error) { - u64 now = ktime_get_ns(); + u64 now = 0; + + if (blk_mq_need_time_stamp(rq)) + now = ktime_get_ns(); if (rq->rq_flags & RQF_STATS) { blk_mq_poll_stats_start(rq->q); @@ -555,19 +568,19 @@ EXPORT_SYMBOL(blk_mq_end_request); static void __blk_mq_complete_request_remote(void *data) { struct request *rq = data; + struct request_queue *q = rq->q; - rq->q->softirq_done_fn(rq); + q->mq_ops->complete(rq); } static void __blk_mq_complete_request(struct request *rq) { struct blk_mq_ctx *ctx = rq->mq_ctx; + struct request_queue *q = rq->q; bool shared = false; int cpu; - if (!blk_mq_mark_complete(rq)) - return; - + WRITE_ONCE(rq->state, MQ_RQ_COMPLETE); /* * Most of single queue controllers, there is only one irq vector * for handling IO completion, and the only irq's affinity is set * @@ -577,18 +590,23 @@ static void __blk_mq_complete_request(struct request *rq) * So complete IO request in softirq context in case of single queue * for not degrading IO performance by irqsoff latency. */ - if (rq->q->nr_hw_queues == 1) { + if (q->nr_hw_queues == 1) { __blk_complete_request(rq); return; } - if (!test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags)) { - rq->q->softirq_done_fn(rq); + /* + * For a polled request, always complete locally; it's pointless + * to redirect the completion. + */ + if ((rq->cmd_flags & REQ_HIPRI) || + !test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags)) { + q->mq_ops->complete(rq); return; } cpu = get_cpu(); - if (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags)) + if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags)) shared = cpus_share_cache(cpu, ctx->cpu); if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) { @@ -597,7 +615,7 @@ static void __blk_mq_complete_request(struct request *rq) rq->csd.flags = 0; smp_call_function_single_async(ctx->cpu, &rq->csd); } else { - rq->q->softirq_done_fn(rq); + q->mq_ops->complete(rq); } put_cpu(); } @@ -630,11 +648,12 @@ static void hctx_lock(struct blk_mq_hw_ctx *hctx, int *srcu_idx) * Ends all I/O on a request. It does not handle partial completions. * The actual completion happens out-of-order, through an IPI handler. **/ -void blk_mq_complete_request(struct request *rq) +bool blk_mq_complete_request(struct request *rq) { if (unlikely(blk_should_fake_timeout(rq->q))) - return; + return false; __blk_mq_complete_request(rq); + return true; } EXPORT_SYMBOL(blk_mq_complete_request); @@ -701,7 +720,7 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list) /* this request will be re-inserted to io scheduler queue */ blk_mq_sched_requeue_request(rq); - BUG_ON(blk_queued_rq(rq)); + BUG_ON(!list_empty(&rq->queuelist)); blk_mq_add_to_requeue_list(rq, true, kick_requeue_list); } EXPORT_SYMBOL(blk_mq_requeue_request); @@ -786,6 +805,32 @@ struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag) } EXPORT_SYMBOL(blk_mq_tag_to_rq); +static bool blk_mq_rq_inflight(struct blk_mq_hw_ctx *hctx, struct request *rq, + void *priv, bool reserved) +{ + /* + * If we find a request that is inflight and the queue matches, + * we know the queue is busy. Return false to stop the iteration. + */
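[Editorial illustration] With this series the tag busy-iteration callbacks return bool instead of void: true keeps the walk going, false terminates it early, exactly as blk_mq_rq_inflight() below does once it spots the first in-flight request. A minimal sketch of a driver-side callback under the new contract — the function and counter names are hypothetical, not part of this patch:

	static bool my_count_started(struct request *rq, void *data, bool reserved)
	{
		unsigned int *count = data;

		/* Count every started request; true keeps iterating. */
		(*count)++;
		return true;
	}

	/* caller side, assuming a driver-owned tag set: */
	unsigned int count = 0;

	blk_mq_tagset_busy_iter(&my_tag_set, my_count_started, &count);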
+ if (rq->state == MQ_RQ_IN_FLIGHT && rq->q == hctx->queue) { + bool *busy = priv; + + *busy = true; + return false; + } + + return true; +} + +bool blk_mq_queue_inflight(struct request_queue *q) +{ + bool busy = false; + + blk_mq_queue_tag_busy_iter(q, blk_mq_rq_inflight, &busy); + return busy; +} +EXPORT_SYMBOL_GPL(blk_mq_queue_inflight); + static void blk_mq_rq_timed_out(struct request *req, bool reserved) { req->rq_flags |= RQF_TIMED_OUT; @@ -810,7 +855,7 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next) if (rq->rq_flags & RQF_TIMED_OUT) return false; - deadline = blk_rq_deadline(rq); + deadline = READ_ONCE(rq->deadline); if (time_after_eq(jiffies, deadline)) return true; @@ -821,7 +866,7 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next) return false; } -static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx, +static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx, struct request *rq, void *priv, bool reserved) { unsigned long *next = priv; @@ -831,7 +876,7 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx, * so we're not unnecessarily synchronizing across CPUs. */ if (!blk_mq_req_expired(rq, next)) - return; + return true; /* * We have reason to believe the request may be expired. Take a @@ -843,7 +888,7 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx, * timeout handler to posting a natural completion. */ if (!refcount_inc_not_zero(&rq->ref)) - return; + return true; /* * The request is now locked and cannot be reallocated underneath the @@ -855,6 +900,8 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx, blk_mq_rq_timed_out(rq, reserved); if (refcount_dec_and_test(&rq->ref)) __blk_mq_free_request(rq); + + return true; } static void blk_mq_timeout_work(struct work_struct *work) @@ -911,9 +958,10 @@ static bool flush_busy_ctx(struct sbitmap *sb, unsigned int bitnr, void *data) struct flush_busy_ctx_data *flush_data = data; struct blk_mq_hw_ctx *hctx = flush_data->hctx; struct blk_mq_ctx *ctx = hctx->ctxs[bitnr]; + enum hctx_type type = hctx->type; spin_lock(&ctx->lock); - list_splice_tail_init(&ctx->rq_list, flush_data->list); + list_splice_tail_init(&ctx->rq_lists[type], flush_data->list); sbitmap_clear_bit(sb, bitnr); spin_unlock(&ctx->lock); return true; @@ -945,12 +993,13 @@ static bool dispatch_rq_from_ctx(struct sbitmap *sb, unsigned int bitnr, struct dispatch_rq_data *dispatch_data = data; struct blk_mq_hw_ctx *hctx = dispatch_data->hctx; struct blk_mq_ctx *ctx = hctx->ctxs[bitnr]; + enum hctx_type type = hctx->type; spin_lock(&ctx->lock); - if (!list_empty(&ctx->rq_list)) { - dispatch_data->rq = list_entry_rq(ctx->rq_list.next); + if (!list_empty(&ctx->rq_lists[type])) { + dispatch_data->rq = list_entry_rq(ctx->rq_lists[type].next); list_del_init(&dispatch_data->rq->queuelist); - if (list_empty(&ctx->rq_list)) + if (list_empty(&ctx->rq_lists[type])) sbitmap_clear_bit(sb, bitnr); } spin_unlock(&ctx->lock); @@ -961,7 +1010,7 @@ static bool dispatch_rq_from_ctx(struct sbitmap *sb, unsigned int bitnr, struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *start) { - unsigned off = start ?
start->index_hw[hctx->type] : 0; struct dispatch_rq_data data = { .hctx = hctx, .rq = NULL, @@ -985,8 +1034,9 @@ bool blk_mq_get_driver_tag(struct request *rq) { struct blk_mq_alloc_data data = { .q = rq->q, - .hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu), + .hctx = rq->mq_hctx, .flags = BLK_MQ_REQ_NOWAIT, + .cmd_flags = rq->cmd_flags, }; bool shared; @@ -1150,7 +1200,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, rq = list_first_entry(list, struct request, queuelist); - hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu); + hctx = rq->mq_hctx; if (!got_budget && !blk_mq_get_dispatch_budget(hctx)) break; @@ -1223,6 +1273,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, if (!list_empty(list)) { bool needs_restart; + /* + * If we didn't flush the entire list, we could have told + * the driver there was more coming, but that turned out to + * be a lie. + */ + if (q->mq_ops->commit_rqs) + q->mq_ops->commit_rqs(hctx); + spin_lock(&hctx->lock); list_splice_init(list, &hctx->dispatch); spin_unlock(&hctx->lock); @@ -1552,15 +1610,16 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx, bool at_head) { struct blk_mq_ctx *ctx = rq->mq_ctx; + enum hctx_type type = hctx->type; lockdep_assert_held(&ctx->lock); trace_block_rq_insert(hctx->queue, rq); if (at_head) - list_add(&rq->queuelist, &ctx->rq_list); + list_add(&rq->queuelist, &ctx->rq_lists[type]); else - list_add_tail(&rq->queuelist, &ctx->rq_list); + list_add_tail(&rq->queuelist, &ctx->rq_lists[type]); } void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, @@ -1580,8 +1639,7 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, */ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue) { - struct blk_mq_ctx *ctx = rq->mq_ctx; - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu); + struct blk_mq_hw_ctx *hctx = rq->mq_hctx; spin_lock(&hctx->lock); list_add_tail(&rq->queuelist, &hctx->dispatch); @@ -1596,6 +1654,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx, { struct request *rq; + enum hctx_type type = hctx->type; /* * preemption doesn't flush plug list, so it's possible ctx->cpu is @@ -1607,35 +1666,46 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx, } spin_lock(&ctx->lock); - list_splice_tail_init(list, &ctx->rq_list); + list_splice_tail_init(list, &ctx->rq_lists[type]); blk_mq_hctx_mark_pending(hctx, ctx); spin_unlock(&ctx->lock); } -static int plug_ctx_cmp(void *priv, struct list_head *a, struct list_head *b) +static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b) { struct request *rqa = container_of(a, struct request, queuelist); struct request *rqb = container_of(b, struct request, queuelist); - return !(rqa->mq_ctx < rqb->mq_ctx || - (rqa->mq_ctx == rqb->mq_ctx && - blk_rq_pos(rqa) < blk_rq_pos(rqb))); + if (rqa->mq_ctx < rqb->mq_ctx) + return -1; + else if (rqa->mq_ctx > rqb->mq_ctx) + return 1; + else if (rqa->mq_hctx < rqb->mq_hctx) + return -1; + else if (rqa->mq_hctx > rqb->mq_hctx) + return 1; + + return blk_rq_pos(rqa) > blk_rq_pos(rqb); } void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule) { + struct blk_mq_hw_ctx *this_hctx; struct blk_mq_ctx *this_ctx; struct request_queue *this_q; struct request *rq; LIST_HEAD(list); - LIST_HEAD(ctx_list); + LIST_HEAD(rq_list); unsigned int depth; list_splice_init(&plug->mq_list, &list); + plug->rq_count = 0; - 
list_sort(NULL, &list, plug_ctx_cmp); + if (plug->rq_count > 2 && plug->multiple_queues) + list_sort(NULL, &list, plug_rq_cmp); this_q = NULL; + this_hctx = NULL; this_ctx = NULL; depth = 0; @@ -1643,30 +1713,31 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule) rq = list_entry_rq(list.next); list_del_init(&rq->queuelist); BUG_ON(!rq->q); - if (rq->mq_ctx != this_ctx) { - if (this_ctx) { + if (rq->mq_hctx != this_hctx || rq->mq_ctx != this_ctx) { + if (this_hctx) { trace_block_unplug(this_q, depth, !from_schedule); - blk_mq_sched_insert_requests(this_q, this_ctx, - &ctx_list, + blk_mq_sched_insert_requests(this_hctx, this_ctx, + &rq_list, from_schedule); } - this_ctx = rq->mq_ctx; this_q = rq->q; + this_ctx = rq->mq_ctx; + this_hctx = rq->mq_hctx; depth = 0; } depth++; - list_add_tail(&rq->queuelist, &ctx_list); + list_add_tail(&rq->queuelist, &rq_list); } /* - * If 'this_ctx' is set, we know we have entries to complete - * on 'ctx_list'. Do those. + * If 'this_hctx' is set, we know we have entries to complete + * on 'rq_list'. Do those. */ - if (this_ctx) { + if (this_hctx) { trace_block_unplug(this_q, depth, !from_schedule); - blk_mq_sched_insert_requests(this_q, this_ctx, &ctx_list, + blk_mq_sched_insert_requests(this_hctx, this_ctx, &rq_list, from_schedule); } } @@ -1675,27 +1746,17 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio) { blk_init_request_from_bio(rq, bio); - blk_rq_set_rl(rq, blk_get_rl(rq->q, bio)); - blk_account_io_start(rq, true); } -static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq) -{ - if (rq->tag != -1) - return blk_tag_to_qc_t(rq->tag, hctx->queue_num, false); - - return blk_tag_to_qc_t(rq->internal_tag, hctx->queue_num, true); -} - static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx, struct request *rq, - blk_qc_t *cookie) + blk_qc_t *cookie, bool last) { struct request_queue *q = rq->q; struct blk_mq_queue_data bd = { .rq = rq, - .last = true, + .last = last, }; blk_qc_t new_cookie; blk_status_t ret; @@ -1727,77 +1788,74 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx, return ret; } -static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, +blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, struct request *rq, blk_qc_t *cookie, - bool bypass_insert) + bool bypass, bool last) { struct request_queue *q = rq->q; bool run_queue = true; + blk_status_t ret = BLK_STS_RESOURCE; + int srcu_idx; + bool force = false; + hctx_lock(hctx, &srcu_idx); /* - * RCU or SRCU read lock is needed before checking quiesced flag. + * hctx_lock is needed before checking quiesced flag. * - * When queue is stopped or quiesced, ignore 'bypass_insert' from - * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller, - * and avoid driver to try to dispatch again. + * When queue is stopped or quiesced, ignore 'bypass', insert + * and return BLK_STS_OK to the caller, and avoid having the + * driver try to dispatch again. */
- if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) { + if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) { run_queue = false; - bypass_insert = false; - goto insert; + bypass = false; + goto out_unlock; } - if (q->elevator && !bypass_insert) - goto insert; + if (unlikely(q->elevator && !bypass)) + goto out_unlock; if (!blk_mq_get_dispatch_budget(hctx)) - goto insert; + goto out_unlock; if (!blk_mq_get_driver_tag(rq)) { blk_mq_put_dispatch_budget(hctx); - goto insert; + goto out_unlock; } - return __blk_mq_issue_directly(hctx, rq, cookie); -insert: - if (bypass_insert) - return BLK_STS_RESOURCE; - - blk_mq_request_bypass_insert(rq, run_queue); - return BLK_STS_OK; -} - -static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, - struct request *rq, blk_qc_t *cookie) -{ - blk_status_t ret; - int srcu_idx; - - might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING); - - hctx_lock(hctx, &srcu_idx); - - ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false); - if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) - blk_mq_request_bypass_insert(rq, true); - else if (ret != BLK_STS_OK) - blk_mq_end_request(rq, ret); - - hctx_unlock(hctx, srcu_idx); -} - -blk_status_t blk_mq_request_issue_directly(struct request *rq) -{ - blk_status_t ret; - int srcu_idx; - blk_qc_t unused_cookie; - struct blk_mq_ctx *ctx = rq->mq_ctx; - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu); - - hctx_lock(hctx, &srcu_idx); - ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true); + /* + * Always add a request that has been through + * .queue_rq() to the hardware dispatch list. + */ + force = true; + ret = __blk_mq_issue_directly(hctx, rq, cookie, last); +out_unlock: hctx_unlock(hctx, srcu_idx); + switch (ret) { + case BLK_STS_OK: + break; + case BLK_STS_DEV_RESOURCE: + case BLK_STS_RESOURCE: + if (force) { + blk_mq_request_bypass_insert(rq, run_queue); + /* + * We have to return BLK_STS_OK for the DM + * to avoid livelock. Otherwise, we return + * the real result to indicate whether the + * request is direct-issued successfully. + */ + ret = bypass ? BLK_STS_OK : ret; + } else if (!bypass) { + blk_mq_sched_insert_request(rq, false, + run_queue, false); + } + break; + default: + if (!bypass) + blk_mq_end_request(rq, ret); + break; + } return ret; } @@ -1805,22 +1863,42 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq) void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx, struct list_head *list) { + blk_qc_t unused; + blk_status_t ret = BLK_STS_OK; + while (!list_empty(list)) { - blk_status_t ret; struct request *rq = list_first_entry(list, struct request, queuelist); list_del_init(&rq->queuelist); - ret = blk_mq_request_issue_directly(rq); - if (ret != BLK_STS_OK) { - if (ret == BLK_STS_RESOURCE || - ret == BLK_STS_DEV_RESOURCE) { - blk_mq_request_bypass_insert(rq, + if (ret == BLK_STS_OK) + ret = blk_mq_try_issue_directly(hctx, rq, &unused, + false, list_empty(list)); - break; - } - blk_mq_end_request(rq, ret); - } + else + blk_mq_sched_insert_request(rq, false, true, false); + } + + /* + * If we didn't flush the entire list, we could have told + * the driver there was more coming, but that turned out to + * be a lie. + */
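[Editorial illustration] The ->commit_rqs() hook used here is new in this series: a driver that only rings its doorbell when bd->last is set can be told to flush requests it has posted but not yet signalled. A hedged sketch of the driver side — the my_* driver, its queue structure, and the doorbell helpers are hypothetical, not taken from this patch:

	static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
					const struct blk_mq_queue_data *bd)
	{
		struct my_queue *mq = hctx->driver_data;

		my_post_descriptor(mq, bd->rq);
		/* Only kick the hardware once the batch is complete. */
		if (bd->last)
			my_ring_doorbell(mq);
		return BLK_STS_OK;
	}

	static void my_commit_rqs(struct blk_mq_hw_ctx *hctx)
	{
		/* "More coming" turned out to be a lie; kick the hardware now. */
		my_ring_doorbell(hctx->driver_data);
	}

	static const struct blk_mq_ops my_mq_ops = {
		.queue_rq	= my_queue_rq,
		.commit_rqs	= my_commit_rqs,
	};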
+ if (ret != BLK_STS_OK && hctx->queue->mq_ops->commit_rqs) + hctx->queue->mq_ops->commit_rqs(hctx); +} + +static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq) +{ + list_add_tail(&rq->queuelist, &plug->mq_list); + plug->rq_count++; + if (!plug->multiple_queues && !list_is_singular(&plug->mq_list)) { + struct request *tmp; + + tmp = list_first_entry(&plug->mq_list, struct request, + queuelist); + if (tmp->q != rq->q) + plug->multiple_queues = true; } } @@ -1828,9 +1906,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) { const int is_sync = op_is_sync(bio->bi_opf); const int is_flush_fua = op_is_flush(bio->bi_opf); - struct blk_mq_alloc_data data = { .flags = 0 }; + struct blk_mq_alloc_data data = { .flags = 0, .cmd_flags = bio->bi_opf }; struct request *rq; - unsigned int request_count = 0; struct blk_plug *plug; struct request *same_queue_rq = NULL; blk_qc_t cookie; @@ -1843,15 +1920,15 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) return BLK_QC_T_NONE; if (!is_flush_fua && !blk_queue_nomerges(q) && - blk_attempt_plug_merge(q, bio, &request_count, &same_queue_rq) + blk_attempt_plug_merge(q, bio, &same_queue_rq)) return BLK_QC_T_NONE; if (blk_mq_sched_bio_merge(q, bio)) return BLK_QC_T_NONE; - rq_qos_throttle(q, bio, NULL); + rq_qos_throttle(q, bio); - rq = blk_mq_get_request(q, bio, bio->bi_opf, &data); + rq = blk_mq_get_request(q, bio, &data); if (unlikely(!rq)) { rq_qos_cleanup(q, bio); if (bio->bi_opf & REQ_NOWAIT) @@ -1873,21 +1950,17 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) /* bypass scheduler for flush rq */ blk_insert_flush(rq); blk_mq_run_hw_queue(data.hctx, true); - } else if (plug && q->nr_hw_queues == 1) { + } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) { + /* + * Use plugging if we have a ->commit_rqs() hook as well, as + * we know the driver uses bd->last in a smart fashion. + */ + unsigned int request_count = plug->rq_count; struct request *last = NULL; blk_mq_put_ctx(data.ctx); blk_mq_bio_to_request(rq, bio); - /* - * @request_count may become stale because of schedule - * out, so check the list again. - */
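[Editorial illustration] The new plug bookkeeping above (plug->rq_count, plug->multiple_queues) replaces the old request_count re-counting. For context, the submitter-side pattern this bookkeeping serves is the long-standing plug API, unchanged by this patch; bio_a/bio_b are placeholders:

	struct blk_plug plug;

	blk_start_plug(&plug);
	submit_bio(bio_a);	/* both accumulate on plug.mq_list */
	submit_bio(bio_b);
	blk_finish_plug(&plug);	/* drains via blk_mq_flush_plug_list() */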
- if (list_empty(&plug->mq_list)) - request_count = 0; - else if (blk_queue_nomerges(q)) - request_count = blk_plug_queued_count(q); - if (!request_count) trace_block_plug(q); else @@ -1899,7 +1972,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) trace_block_plug(q); } - list_add_tail(&rq->queuelist, &plug->mq_list); + blk_add_rq_to_plug(plug, rq); } else if (plug && !blk_queue_nomerges(q)) { blk_mq_bio_to_request(rq, bio); @@ -1912,23 +1985,24 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio) */ if (list_empty(&plug->mq_list)) same_queue_rq = NULL; - if (same_queue_rq) + if (same_queue_rq) { list_del_init(&same_queue_rq->queuelist); - list_add_tail(&rq->queuelist, &plug->mq_list); + plug->rq_count--; + } + blk_add_rq_to_plug(plug, rq); blk_mq_put_ctx(data.ctx); if (same_queue_rq) { - data.hctx = blk_mq_map_queue(q, - same_queue_rq->mq_ctx->cpu); + data.hctx = same_queue_rq->mq_hctx; blk_mq_try_issue_directly(data.hctx, same_queue_rq, - &cookie); + &cookie, false, true); } } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator && !data.hctx->dispatch_busy)) { blk_mq_put_ctx(data.ctx); blk_mq_bio_to_request(rq, bio); - blk_mq_try_issue_directly(data.hctx, rq, &cookie); + blk_mq_try_issue_directly(data.hctx, rq, &cookie, false, true); } else { blk_mq_put_ctx(data.ctx); blk_mq_bio_to_request(rq, bio); @@ -1986,7 +2060,7 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, struct blk_mq_tags *tags; int node; - node = blk_mq_hw_queue_to_node(set->mq_map, hctx_idx); + node = blk_mq_hw_queue_to_node(&set->map[0], hctx_idx); if (node == NUMA_NO_NODE) node = set->numa_node; @@ -2042,7 +2116,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, size_t rq_size, left; int node; - node = blk_mq_hw_queue_to_node(set->mq_map, hctx_idx); + node = blk_mq_hw_queue_to_node(&set->map[0], hctx_idx); if (node == NUMA_NO_NODE) node = set->numa_node; @@ -2122,13 +2196,15 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node) struct blk_mq_hw_ctx *hctx; struct blk_mq_ctx *ctx; LIST_HEAD(tmp); + enum hctx_type type; hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead); ctx = __blk_mq_get_ctx(hctx->queue, cpu); + type = hctx->type; spin_lock(&ctx->lock); - if (!list_empty(&ctx->rq_list)) { - list_splice_init(&ctx->rq_list, &tmp); + if (!list_empty(&ctx->rq_lists[type])) { + list_splice_init(&ctx->rq_lists[type], &tmp); blk_mq_hctx_clear_pending(hctx, ctx); } spin_unlock(&ctx->lock); @@ -2259,24 +2335,30 @@ static int blk_mq_init_hctx(struct request_queue *q, static void blk_mq_init_cpu_queues(struct request_queue *q, unsigned int nr_hw_queues) { - unsigned int i; + struct blk_mq_tag_set *set = q->tag_set; + unsigned int i, j; for_each_possible_cpu(i) { struct blk_mq_ctx *__ctx = per_cpu_ptr(q->queue_ctx, i); struct blk_mq_hw_ctx *hctx; + int k; __ctx->cpu = i; spin_lock_init(&__ctx->lock); - INIT_LIST_HEAD(&__ctx->rq_list); + for (k = HCTX_TYPE_DEFAULT; k < HCTX_MAX_TYPES; k++) + INIT_LIST_HEAD(&__ctx->rq_lists[k]); + __ctx->queue = q; /* * Set local node, IFF we have more than one hw queue. 
If * not, we remain on the home node of the device */ - hctx = blk_mq_map_queue(q, i); - if (nr_hw_queues > 1 && hctx->numa_node == NUMA_NO_NODE) - hctx->numa_node = local_memory_node(cpu_to_node(i)); + for (j = 0; j < set->nr_maps; j++) { + hctx = blk_mq_map_queue_type(q, j, i); + if (nr_hw_queues > 1 && hctx->numa_node == NUMA_NO_NODE) + hctx->numa_node = local_memory_node(cpu_to_node(i)); + } } } @@ -2302,7 +2384,7 @@ static bool __blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, int hctx_idx) static void blk_mq_free_map_and_requests(struct blk_mq_tag_set *set, unsigned int hctx_idx) { - if (set->tags[hctx_idx]) { + if (set->tags && set->tags[hctx_idx]) { blk_mq_free_rqs(set, set->tags[hctx_idx], hctx_idx); blk_mq_free_rq_map(set->tags[hctx_idx]); set->tags[hctx_idx] = NULL; @@ -2311,7 +2393,7 @@ static void blk_mq_free_map_and_requests(struct blk_mq_tag_set *set, static void blk_mq_map_swqueue(struct request_queue *q) { - unsigned int i, hctx_idx; + unsigned int i, j, hctx_idx; struct blk_mq_hw_ctx *hctx; struct blk_mq_ctx *ctx; struct blk_mq_tag_set *set = q->tag_set; @@ -2333,7 +2415,7 @@ static void blk_mq_map_swqueue(struct request_queue *q) * If the cpu isn't present, the cpu is mapped to first hctx. */ for_each_possible_cpu(i) { - hctx_idx = q->mq_map[i]; + hctx_idx = set->map[0].mq_map[i]; /* unmapped hw queue can be remapped after CPU topo changed */ if (!set->tags[hctx_idx] && !__blk_mq_alloc_rq_map(set, hctx_idx)) { @@ -2343,15 +2425,35 @@ static void blk_mq_map_swqueue(struct request_queue *q) * case, remap the current ctx to hctx[0] which * is guaranteed to always have tags allocated */ - q->mq_map[i] = 0; + set->map[0].mq_map[i] = 0; } ctx = per_cpu_ptr(q->queue_ctx, i); - hctx = blk_mq_map_queue(q, i); + for (j = 0; j < set->nr_maps; j++) { + if (!set->map[j].nr_queues) + continue; + + hctx = blk_mq_map_queue_type(q, j, i); + + /* + * If the CPU is already set in the mask, then we've + * mapped this one already. This can happen if + * devices share queues across queue maps. + */ + if (cpumask_test_cpu(i, hctx->cpumask)) + continue; + + cpumask_set_cpu(i, hctx->cpumask); + hctx->type = j; + ctx->index_hw[hctx->type] = hctx->nr_ctx; + hctx->ctxs[hctx->nr_ctx++] = ctx; - cpumask_set_cpu(i, hctx->cpumask); - ctx->index_hw = hctx->nr_ctx; - hctx->ctxs[hctx->nr_ctx++] = ctx; + /* + * If the nr_ctx type overflows, we have exceeded the + * amount of sw queues we can support. 
+ */ + BUG_ON(!hctx->nr_ctx); + } } mutex_unlock(&q->sysfs_lock); @@ -2441,8 +2543,6 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q) static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set, struct request_queue *q) { - q->tag_set = set; - mutex_lock(&set->tag_list_lock); /* @@ -2461,6 +2561,34 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set, mutex_unlock(&set->tag_list_lock); } +/* All allocations will be freed in release handler of q->mq_kobj */ +static int blk_mq_alloc_ctxs(struct request_queue *q) +{ + struct blk_mq_ctxs *ctxs; + int cpu; + + ctxs = kzalloc(sizeof(*ctxs), GFP_KERNEL); + if (!ctxs) + return -ENOMEM; + + ctxs->queue_ctx = alloc_percpu(struct blk_mq_ctx); + if (!ctxs->queue_ctx) + goto fail; + + for_each_possible_cpu(cpu) { + struct blk_mq_ctx *ctx = per_cpu_ptr(ctxs->queue_ctx, cpu); + ctx->ctxs = ctxs; + } + + q->mq_kobj = &ctxs->kobj; + q->queue_ctx = ctxs->queue_ctx; + + return 0; + fail: + kfree(ctxs); + return -ENOMEM; +} + /* * It is the actual release handler for mq, but we do it from * request queue's release handler for avoiding use-after-free @@ -2479,8 +2607,6 @@ void blk_mq_release(struct request_queue *q) kobject_put(&hctx->kobj); } - q->mq_map = NULL; - kfree(q->queue_hw_ctx); /* @@ -2488,15 +2614,13 @@ void blk_mq_release(struct request_queue *q) * both share lifetime with request queue. */ blk_mq_sysfs_deinit(q); - - free_percpu(q->queue_ctx); } struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set) { struct request_queue *uninit_q, *q; - uninit_q = blk_alloc_queue_node(GFP_KERNEL, set->numa_node, NULL); + uninit_q = blk_alloc_queue_node(GFP_KERNEL, set->numa_node); if (!uninit_q) return ERR_PTR(-ENOMEM); @@ -2523,6 +2647,7 @@ struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set, memset(set, 0, sizeof(*set)); set->ops = ops; set->nr_hw_queues = 1; + set->nr_maps = 1; set->queue_depth = queue_depth; set->numa_node = NUMA_NO_NODE; set->flags = set_flags; @@ -2600,7 +2725,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set, int node; struct blk_mq_hw_ctx *hctx; - node = blk_mq_hw_queue_to_node(q->mq_map, i); + node = blk_mq_hw_queue_to_node(&set->map[0], i); /* * If the hw queue has been mapped to another numa node, * we need to realloc the hctx. If allocation fails, fallback @@ -2653,6 +2778,19 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set, mutex_unlock(&q->sysfs_lock); } +/* + * Maximum number of hardware queues we support. For single sets, we'll never + * have more than the CPUs (software queues). For multiple sets, the tag_set + * user may have set ->nr_hw_queues larger. 
+ */ +static unsigned int nr_hw_queues(struct blk_mq_tag_set *set) +{ + if (set->nr_maps == 1) + return nr_cpu_ids; + + return max(set->nr_hw_queues, nr_cpu_ids); +} + struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set, struct request_queue *q) { @@ -2665,19 +2803,17 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set, if (!q->poll_cb) goto err_exit; - q->queue_ctx = alloc_percpu(struct blk_mq_ctx); - if (!q->queue_ctx) + if (blk_mq_alloc_ctxs(q)) goto err_exit; /* init q->mq_kobj and sw queues' kobjects */ blk_mq_sysfs_init(q); - q->queue_hw_ctx = kcalloc_node(nr_cpu_ids, sizeof(*(q->queue_hw_ctx)), + q->nr_queues = nr_hw_queues(set); + q->queue_hw_ctx = kcalloc_node(q->nr_queues, sizeof(*(q->queue_hw_ctx)), GFP_KERNEL, set->numa_node); if (!q->queue_hw_ctx) - goto err_percpu; - - q->mq_map = set->mq_map; + goto err_sys_init; blk_mq_realloc_hw_ctxs(set, q); if (!q->nr_hw_queues) @@ -2686,12 +2822,15 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set, INIT_WORK(&q->timeout_work, blk_mq_timeout_work); blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ); - q->nr_queues = nr_cpu_ids; + q->tag_set = set; q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT; + if (set->nr_maps > HCTX_TYPE_POLL && + set->map[HCTX_TYPE_POLL].nr_queues) + blk_queue_flag_set(QUEUE_FLAG_POLL, q); if (!(set->flags & BLK_MQ_F_SG_MERGE)) - queue_flag_set_unlocked(QUEUE_FLAG_NO_SG_MERGE, q); + blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q); q->sg_reserved_size = INT_MAX; @@ -2700,8 +2839,6 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set, spin_lock_init(&q->requeue_lock); blk_queue_make_request(q, blk_mq_make_request); - if (q->mq_ops->poll) - q->poll_fn = blk_mq_poll; /* * Do this after blk_queue_make_request() overrides it... @@ -2713,9 +2850,6 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set, */ q->poll_nsec = -1; - if (set->ops->complete) - blk_queue_softirq_done(q, set->ops->complete); - blk_mq_init_cpu_queues(q, set->nr_hw_queues); blk_mq_add_queue_tag_set(set, q); blk_mq_map_swqueue(q); @@ -2732,8 +2866,8 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set, err_hctxs: kfree(q->queue_hw_ctx); -err_percpu: - free_percpu(q->queue_ctx); +err_sys_init: + blk_mq_sysfs_deinit(q); err_exit: q->mq_ops = NULL; return ERR_PTR(-ENOMEM); @@ -2802,7 +2936,9 @@ static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) static int blk_mq_update_queue_map(struct blk_mq_tag_set *set) { - if (set->ops->map_queues) { + if (set->ops->map_queues && !is_kdump_kernel()) { + int i; + /* * transport .map_queues is usually done in the following * way: * * for (queue = 0; queue < set->nr_hw_queues; queue++) { * mask = get_cpu_mask(queue) * for_each_cpu(cpu, mask) - * set->mq_map[cpu] = queue; + * set->map[x].mq_map[cpu] = queue; * } * * When we need to remap, the table has to be cleared for * killing stale mapping since one CPU may not be mapped * to any hw queue. */
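[Editorial illustration] A concrete version of the pseudocode in the comment above, under the new per-map API: a transport's .map_queues() fills set->map[...].mq_map per CPU and falls back to blk_mq_map_queues() when no affinity information is available. This is a hedged sketch; my_get_queue_affinity() is a hypothetical device callback, not part of this patch:

	static int my_map_queues(struct blk_mq_tag_set *set)
	{
		struct blk_mq_queue_map *qmap = &set->map[HCTX_TYPE_DEFAULT];
		unsigned int queue, cpu;

		for (queue = 0; queue < qmap->nr_queues; queue++) {
			const struct cpumask *mask = my_get_queue_affinity(queue);

			/* No affinity info: fall back to the default spread. */
			if (!mask)
				return blk_mq_map_queues(qmap);
			for_each_cpu(cpu, mask)
				qmap->mq_map[cpu] = qmap->queue_offset + queue;
		}
		return 0;
	}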
- blk_mq_clear_mq_map(set); + for (i = 0; i < set->nr_maps; i++) + blk_mq_clear_mq_map(&set->map[i]); return set->ops->map_queues(set); - } else - return blk_mq_map_queues(set); + } else { + BUG_ON(set->nr_maps > 1); + return blk_mq_map_queues(&set->map[0]); + } } /* @@ -2832,7 +2971,7 @@ static int blk_mq_update_queue_map(struct blk_mq_tag_set *set) */ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) { - int ret; + int i, ret; BUILD_BUG_ON(BLK_MQ_MAX_DEPTH > 1 << BLK_MQ_UNIQUE_TAG_BITS); @@ -2855,6 +2994,11 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) set->queue_depth = BLK_MQ_MAX_DEPTH; } + if (!set->nr_maps) + set->nr_maps = 1; + else if (set->nr_maps > HCTX_MAX_TYPES) + return -EINVAL; + /* * If a crashdump is active, then we are potentially in a very * memory constrained environment. Limit us to 1 queue and * @@ -2862,24 +3006,30 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) */ if (is_kdump_kernel()) { set->nr_hw_queues = 1; + set->nr_maps = 1; set->queue_depth = min(64U, set->queue_depth); } /* - * There is no use for more h/w queues than cpus. + * There is no use for more h/w queues than cpus if we just have + * a single map */ - if (set->nr_hw_queues > nr_cpu_ids) + if (set->nr_maps == 1 && set->nr_hw_queues > nr_cpu_ids) set->nr_hw_queues = nr_cpu_ids; - set->tags = kcalloc_node(nr_cpu_ids, sizeof(struct blk_mq_tags *), + set->tags = kcalloc_node(nr_hw_queues(set), sizeof(struct blk_mq_tags *), GFP_KERNEL, set->numa_node); if (!set->tags) return -ENOMEM; ret = -ENOMEM; - set->mq_map = kcalloc_node(nr_cpu_ids, sizeof(*set->mq_map), - GFP_KERNEL, set->numa_node); - if (!set->mq_map) - goto out_free_tags; + for (i = 0; i < set->nr_maps; i++) { + set->map[i].mq_map = kcalloc_node(nr_cpu_ids, + sizeof(set->map[i].mq_map[0]), + GFP_KERNEL, set->numa_node); + if (!set->map[i].mq_map) + goto out_free_mq_map; + set->map[i].nr_queues = is_kdump_kernel() ? 1 : set->nr_hw_queues; + } ret = blk_mq_update_queue_map(set); if (ret) @@ -2895,9 +3045,10 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) return 0; out_free_mq_map: - kfree(set->mq_map); - set->mq_map = NULL; -out_free_tags: + for (i = 0; i < set->nr_maps; i++) { + kfree(set->map[i].mq_map); + set->map[i].mq_map = NULL; + } kfree(set->tags); set->tags = NULL; return ret; @@ -2906,13 +3057,15 @@ EXPORT_SYMBOL(blk_mq_alloc_tag_set); void blk_mq_free_tag_set(struct blk_mq_tag_set *set) { - int i; + int i, j; - for (i = 0; i < nr_cpu_ids; i++) + for (i = 0; i < nr_hw_queues(set); i++) blk_mq_free_map_and_requests(set, i); - kfree(set->mq_map); - set->mq_map = NULL; + for (j = 0; j < set->nr_maps; j++) { + kfree(set->map[j].mq_map); + set->map[j].mq_map = NULL; + } kfree(set->tags); set->tags = NULL; @@ -3038,7 +3191,7 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, lockdep_assert_held(&set->tag_list_lock); - if (nr_hw_queues > nr_cpu_ids) + if (set->nr_maps == 1 && nr_hw_queues > nr_cpu_ids) nr_hw_queues = nr_cpu_ids; if (nr_hw_queues < 1 || nr_hw_queues == set->nr_hw_queues) return; @@ -3073,7 +3226,7 @@ fallback: pr_warn("Increasing nr_hw_queues to %d fails, fallback to %d\n", nr_hw_queues, prev_nr_hw_queues); set->nr_hw_queues = prev_nr_hw_queues; - blk_mq_map_queues(set); + blk_mq_map_queues(&set->map[0]); goto fallback; } blk_mq_map_swqueue(q); @@ -3180,15 +3333,12 @@ static bool blk_mq_poll_hybrid_sleep(struct request_queue *q, return false; /* - * poll_nsec can be: + * If we get here, hybrid polling is enabled. 
Hence poll_nsec can be: * * 0: use half of prev avg * >0: use this specific value */ - if (q->poll_nsec == -1) - return false; - else if (q->poll_nsec > 0) + if (q->poll_nsec > 0) nsecs = q->poll_nsec; else nsecs = blk_mq_poll_nsecs(q, hctx, rq); @@ -3225,11 +3375,57 @@ static bool blk_mq_poll_hybrid_sleep(struct request_queue *q, return true; } -static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq) +static bool blk_mq_poll_hybrid(struct request_queue *q, + struct blk_mq_hw_ctx *hctx, blk_qc_t cookie) { - struct request_queue *q = hctx->queue; + struct request *rq; + + if (q->poll_nsec == -1) + return false; + + if (!blk_qc_t_is_internal(cookie)) + rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie)); + else { + rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie)); + /* + * With scheduling, if the request has completed, we'll + * get a NULL return here, as we clear the sched tag when + * that happens. The request still remains valid, like always, + * so we should be safe with just the NULL check. + */ + if (!rq) + return false; + } + + return blk_mq_poll_hybrid_sleep(q, hctx, rq); +} + +/** + * blk_poll - poll for IO completions + * @q: the queue + * @cookie: cookie passed back at IO submission time + * @spin: whether to spin for completions + * + * Description: + * Poll for completions on the passed in queue. Returns number of + * completed entries found. If @spin is true, then blk_poll will continue + * looping until at least one completion is found, unless the task is + * otherwise marked running (or we need to reschedule). + */ +int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin) +{ + struct blk_mq_hw_ctx *hctx; long state; + if (!blk_qc_t_valid(cookie) || + !test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) + return 0; + + if (current->plug) + blk_flush_plug_list(current->plug, false); + + hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)]; + /* * If we sleep, have the caller restart the poll loop to reset * the state. Like for the other success return cases, the * caller is responsible for checking if the IO completed. If * the IO isn't complete, we'll get called again and will go * straight to the busy poll loop. */
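[Editorial illustration] blk_poll() is now exported and spin-aware. A hedged sketch of a synchronous caller driving it, modeled loosely on the direct-IO pattern; my_bio_done() is a hypothetical completion check, not part of this patch:

	blk_qc_t cookie;

	bio->bi_opf |= REQ_HIPRI;	/* route to a poll queue if one exists */
	cookie = submit_bio(bio);

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (my_bio_done(bio))
			break;
		/* Spin for a completion; sleep if polling made no progress. */
		if (!blk_poll(q, cookie, true))
			io_schedule();
	}
	__set_current_state(TASK_RUNNING);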
- if (blk_mq_poll_hybrid_sleep(q, hctx, rq)) - return true; + if (blk_mq_poll_hybrid(q, hctx, cookie)) + return 1; hctx->poll_considered++; state = current->state; - while (!need_resched()) { + do { int ret; hctx->poll_invoked++; - ret = q->mq_ops->poll(hctx, rq->tag); + ret = q->mq_ops->poll(hctx); if (ret > 0) { hctx->poll_success++; - set_current_state(TASK_RUNNING); - return true; + __set_current_state(TASK_RUNNING); + return ret; } if (signal_pending_state(state, current)) - set_current_state(TASK_RUNNING); + __set_current_state(TASK_RUNNING); if (current->state == TASK_RUNNING) - return true; - if (ret < 0) + return 1; + if (ret < 0 || !spin) break; cpu_relax(); - } + } while (!need_resched()); __set_current_state(TASK_RUNNING); - return false; + return 0; } +EXPORT_SYMBOL_GPL(blk_poll); -static bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie) +unsigned int blk_mq_rq_cpu(struct request *rq) { - struct blk_mq_hw_ctx *hctx; - struct request *rq; - - if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) - return false; - - hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)]; - if (!blk_qc_t_is_internal(cookie)) - rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie)); - else { - rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie)); - /* - * With scheduling, if the request has completed, we'll - * get a NULL return here, as we clear the sched tag when - * that happens. The request still remains valid, like always, - * so we should be safe with just the NULL check. - */ - if (!rq) - return false; - } - - return __blk_mq_poll(hctx, rq); + return rq->mq_ctx->cpu; } +EXPORT_SYMBOL(blk_mq_rq_cpu); static int __init blk_mq_init(void) { diff --git a/block/blk-mq.h b/block/blk-mq.h index 9497b47e2526..d943d46b0785 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -7,17 +7,22 @@ struct blk_mq_tag_set; +struct blk_mq_ctxs { + struct kobject kobj; + struct blk_mq_ctx __percpu *queue_ctx; +}; + /** * struct blk_mq_ctx - State for a software queue facing the submitting CPUs */ struct blk_mq_ctx { struct { spinlock_t lock; - struct list_head rq_list; - } ____cacheline_aligned_in_smp; + struct list_head rq_lists[HCTX_MAX_TYPES]; + } ____cacheline_aligned_in_smp; unsigned int cpu; - unsigned int index_hw; + unsigned short index_hw[HCTX_MAX_TYPES]; /* incremented at dispatch time */ unsigned long rq_dispatched[2]; @@ -27,6 +32,7 @@ struct blk_mq_ctx { unsigned long ____cacheline_aligned_in_smp rq_completed[2]; struct request_queue *queue; + struct blk_mq_ctxs *ctxs; struct kobject kobj; } ____cacheline_aligned_in_smp; @@ -62,20 +68,55 @@ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue); void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx, struct list_head *list); -/* Used by blk_insert_cloned_request() to issue request directly */ -blk_status_t blk_mq_request_issue_directly(struct request *rq); +blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, + struct request *rq, + blk_qc_t *cookie, + bool bypass, bool last); void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx, struct list_head *list); /* * CPU -> queue mappings */ -extern int blk_mq_hw_queue_to_node(unsigned int *map, unsigned int); +extern int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int); + +/* + * blk_mq_map_queue_type() - map (hctx_type,cpu) to hardware queue + * @q: request queue + * @type: the hctx type index + * @cpu: CPU + */ +static inline struct blk_mq_hw_ctx *blk_mq_map_queue_type(struct request_queue *q, + enum 
hctx_type type, + unsigned int cpu) +{ + return q->queue_hw_ctx[q->tag_set->map[type].mq_map[cpu]]; +} +/* + * blk_mq_map_queue() - map (cmd_flags,type) to hardware queue + * @q: request queue + * @flags: request command flags + * @cpu: CPU + */ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q, - int cpu) + unsigned int flags, + unsigned int cpu) { - return q->queue_hw_ctx[q->mq_map[cpu]]; + enum hctx_type type = HCTX_TYPE_DEFAULT; + + if ((flags & REQ_HIPRI) && + q->tag_set->nr_maps > HCTX_TYPE_POLL && + q->tag_set->map[HCTX_TYPE_POLL].nr_queues && + test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) + type = HCTX_TYPE_POLL; + + else if (((flags & REQ_OP_MASK) == REQ_OP_READ) && + q->tag_set->nr_maps > HCTX_TYPE_READ && + q->tag_set->map[HCTX_TYPE_READ].nr_queues) + type = HCTX_TYPE_READ; + + return blk_mq_map_queue_type(q, type, cpu); } /* @@ -126,6 +167,7 @@ struct blk_mq_alloc_data { struct request_queue *q; blk_mq_req_flags_t flags; unsigned int shallow_depth; + unsigned int cmd_flags; /* input & output parameter */ struct blk_mq_ctx *ctx; @@ -150,8 +192,7 @@ static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx) return hctx->nr_ctx && hctx->tags; } -void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part, - unsigned int inflight[2]); +unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part); void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part, unsigned int inflight[2]); @@ -195,21 +236,18 @@ static inline void blk_mq_put_driver_tag_hctx(struct blk_mq_hw_ctx *hctx, static inline void blk_mq_put_driver_tag(struct request *rq) { - struct blk_mq_hw_ctx *hctx; - if (rq->tag == -1 || rq->internal_tag == -1) return; - hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu); - __blk_mq_put_driver_tag(hctx, rq); + __blk_mq_put_driver_tag(rq->mq_hctx, rq); } -static inline void blk_mq_clear_mq_map(struct blk_mq_tag_set *set) +static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap) { int cpu; for_each_possible_cpu(cpu) - set->mq_map[cpu] = 0; + qmap->mq_map[cpu] = 0; } #endif diff --git a/block/blk-pm.c b/block/blk-pm.c index f8fdae01bea2..0a028c189897 100644 --- a/block/blk-pm.c +++ b/block/blk-pm.c @@ -89,12 +89,12 @@ int blk_pre_runtime_suspend(struct request_queue *q) /* Switch q_usage_counter back to per-cpu mode. 
*/ blk_mq_unfreeze_queue(q); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); if (ret < 0) pm_runtime_mark_last_busy(q->dev); else q->rpm_status = RPM_SUSPENDING; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); if (ret) blk_clear_pm_only(q); @@ -121,14 +121,14 @@ void blk_post_runtime_suspend(struct request_queue *q, int err) if (!q->dev) return; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); if (!err) { q->rpm_status = RPM_SUSPENDED; } else { q->rpm_status = RPM_ACTIVE; pm_runtime_mark_last_busy(q->dev); } - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); if (err) blk_clear_pm_only(q); @@ -151,9 +151,9 @@ void blk_pre_runtime_resume(struct request_queue *q) if (!q->dev) return; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); q->rpm_status = RPM_RESUMING; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); } EXPORT_SYMBOL(blk_pre_runtime_resume); @@ -176,7 +176,7 @@ void blk_post_runtime_resume(struct request_queue *q, int err) if (!q->dev) return; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); if (!err) { q->rpm_status = RPM_ACTIVE; pm_runtime_mark_last_busy(q->dev); @@ -184,7 +184,7 @@ void blk_post_runtime_resume(struct request_queue *q, int err) } else { q->rpm_status = RPM_SUSPENDED; } - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); if (!err) blk_clear_pm_only(q); @@ -207,10 +207,10 @@ EXPORT_SYMBOL(blk_post_runtime_resume); */ void blk_set_runtime_active(struct request_queue *q) { - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); q->rpm_status = RPM_ACTIVE; pm_runtime_mark_last_busy(q->dev); pm_request_autosuspend(q->dev); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); } EXPORT_SYMBOL(blk_set_runtime_active); diff --git a/block/blk-pm.h b/block/blk-pm.h index a8564ea72a41..ea5507d23e75 100644 --- a/block/blk-pm.h +++ b/block/blk-pm.h @@ -21,7 +21,7 @@ static inline void blk_pm_mark_last_busy(struct request *rq) static inline void blk_pm_requeue_request(struct request *rq) { - lockdep_assert_held(rq->q->queue_lock); + lockdep_assert_held(&rq->q->queue_lock); if (rq->q->dev && !(rq->rq_flags & RQF_PM)) rq->q->nr_pending--; @@ -30,7 +30,7 @@ static inline void blk_pm_requeue_request(struct request *rq) static inline void blk_pm_add_request(struct request_queue *q, struct request *rq) { - lockdep_assert_held(q->queue_lock); + lockdep_assert_held(&q->queue_lock); if (q->dev && !(rq->rq_flags & RQF_PM)) q->nr_pending++; @@ -38,7 +38,7 @@ static inline void blk_pm_add_request(struct request_queue *q, static inline void blk_pm_put_request(struct request *rq) { - lockdep_assert_held(rq->q->queue_lock); + lockdep_assert_held(&rq->q->queue_lock); if (rq->q->dev && !(rq->rq_flags & RQF_PM)) --rq->q->nr_pending; diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c index 0005dfd568dd..d169d7188fa6 100644 --- a/block/blk-rq-qos.c +++ b/block/blk-rq-qos.c @@ -27,75 +27,67 @@ bool rq_wait_inc_below(struct rq_wait *rq_wait, unsigned int limit) return atomic_inc_below(&rq_wait->inflight, limit); } -void rq_qos_cleanup(struct request_queue *q, struct bio *bio) +void __rq_qos_cleanup(struct rq_qos *rqos, struct bio *bio) { - struct rq_qos *rqos; - - for (rqos = q->rq_qos; rqos; rqos = rqos->next) { + do { if (rqos->ops->cleanup) rqos->ops->cleanup(rqos, bio); - } + rqos = rqos->next; + } while (rqos); } -void rq_qos_done(struct request_queue *q, struct request *rq) +void __rq_qos_done(struct rq_qos *rqos, struct request 
*rq) { - struct rq_qos *rqos; - - for (rqos = q->rq_qos; rqos; rqos = rqos->next) { + do { if (rqos->ops->done) rqos->ops->done(rqos, rq); - } + rqos = rqos->next; + } while (rqos); } -void rq_qos_issue(struct request_queue *q, struct request *rq) +void __rq_qos_issue(struct rq_qos *rqos, struct request *rq) { - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { + do { if (rqos->ops->issue) rqos->ops->issue(rqos, rq); - } + rqos = rqos->next; + } while (rqos); } -void rq_qos_requeue(struct request_queue *q, struct request *rq) +void __rq_qos_requeue(struct rq_qos *rqos, struct request *rq) { - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { + do { if (rqos->ops->requeue) rqos->ops->requeue(rqos, rq); - } + rqos = rqos->next; + } while (rqos); } -void rq_qos_throttle(struct request_queue *q, struct bio *bio, - spinlock_t *lock) +void __rq_qos_throttle(struct rq_qos *rqos, struct bio *bio) { - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { + do { if (rqos->ops->throttle) - rqos->ops->throttle(rqos, bio, lock); + rqos->ops->throttle(rqos, bio); + rqos = rqos->next; + } while (rqos); } -void rq_qos_track(struct request_queue *q, struct request *rq, struct bio *bio) +void __rq_qos_track(struct rq_qos *rqos, struct request *rq, struct bio *bio) { - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { + do { if (rqos->ops->track) rqos->ops->track(rqos, rq, bio); - } + rqos = rqos->next; + } while (rqos); } -void rq_qos_done_bio(struct request_queue *q, struct bio *bio) +void __rq_qos_done_bio(struct rq_qos *rqos, struct bio *bio) { - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { + do { if (rqos->ops->done_bio) rqos->ops->done_bio(rqos, bio); - } + rqos = rqos->next; + } while (rqos); } /* @@ -184,8 +176,96 @@ void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle) rq_depth_calc_max_depth(rqd); } +struct rq_qos_wait_data { + struct wait_queue_entry wq; + struct task_struct *task; + struct rq_wait *rqw; + acquire_inflight_cb_t *cb; + void *private_data; + bool got_token; +}; + +static int rq_qos_wake_function(struct wait_queue_entry *curr, + unsigned int mode, int wake_flags, void *key) +{ + struct rq_qos_wait_data *data = container_of(curr, + struct rq_qos_wait_data, + wq); + + /* + * If we fail to get a budget, return -1 to interrupt the wake up loop + * in __wake_up_common. + */ + if (!data->cb(data->rqw, data->private_data)) + return -1; + + data->got_token = true; + list_del_init(&curr->entry); + wake_up_process(data->task); + return 1; +} + +/** + * rq_qos_wait - throttle on a rqw if we need to + * @rqw: rqw to throttle on + * @private_data: caller provided specific data + * @acquire_inflight_cb: inc the rqw->inflight counter if we can + * @cleanup_cb: the callback to clean up in case we race with a waker + * + * This provides a uniform place for the rq_qos users to do their throttling. + * Since you can end up with a lot of things sleeping at once, this manages the + * waking up based on the resources available. The acquire_inflight_cb should + * increment rqw->inflight if it is able to do so, or return false if not, + * in which case we will sleep until room becomes available. + * + * cleanup_cb is there in case we race with a waker and need to clean up the + * inflight count accordingly.
+ */ +void rq_qos_wait(struct rq_wait *rqw, void *private_data, + acquire_inflight_cb_t *acquire_inflight_cb, + cleanup_cb_t *cleanup_cb) +{ + struct rq_qos_wait_data data = { + .wq = { + .func = rq_qos_wake_function, + .entry = LIST_HEAD_INIT(data.wq.entry), + }, + .task = current, + .rqw = rqw, + .cb = acquire_inflight_cb, + .private_data = private_data, + }; + bool has_sleeper; + + has_sleeper = wq_has_sleeper(&rqw->wait); + if (!has_sleeper && acquire_inflight_cb(rqw, private_data)) + return; + + prepare_to_wait_exclusive(&rqw->wait, &data.wq, TASK_UNINTERRUPTIBLE); + do { + if (data.got_token) + break; + if (!has_sleeper && acquire_inflight_cb(rqw, private_data)) { + finish_wait(&rqw->wait, &data.wq); + + /* + * We raced with rq_qos_wake_function() getting a token, + * which means we now have two. Put our local token + * and wake anyone else potentially waiting for one. + */ + if (data.got_token) + cleanup_cb(rqw, private_data); + break; + } + io_schedule(); + has_sleeper = false; + } while (1); + finish_wait(&rqw->wait, &data.wq); +} + void rq_qos_exit(struct request_queue *q) { + blk_mq_debugfs_unregister_queue_rqos(q); + while (q->rq_qos) { struct rq_qos *rqos = q->rq_qos; q->rq_qos = rqos->next; diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h index 32b02efbfa66..564851889550 100644 --- a/block/blk-rq-qos.h +++ b/block/blk-rq-qos.h @@ -7,6 +7,10 @@ #include #include +#include "blk-mq-debugfs.h" + +struct blk_mq_debugfs_attr; + enum rq_qos_id { RQ_QOS_WBT, RQ_QOS_CGROUP, @@ -22,10 +26,13 @@ struct rq_qos { struct request_queue *q; enum rq_qos_id id; struct rq_qos *next; +#ifdef CONFIG_BLK_DEBUG_FS + struct dentry *debugfs_dir; +#endif }; struct rq_qos_ops { - void (*throttle)(struct rq_qos *, struct bio *, spinlock_t *); + void (*throttle)(struct rq_qos *, struct bio *); void (*track)(struct rq_qos *, struct request *, struct bio *); void (*issue)(struct rq_qos *, struct request *); void (*requeue)(struct rq_qos *, struct request *); @@ -33,6 +40,7 @@ struct rq_qos_ops { void (*done_bio)(struct rq_qos *, struct bio *); void (*cleanup)(struct rq_qos *, struct bio *); void (*exit)(struct rq_qos *); + const struct blk_mq_debugfs_attr *debugfs_attrs; }; struct rq_depth { @@ -66,6 +74,17 @@ static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q) return rq_qos_id(q, RQ_QOS_CGROUP); } +static inline const char *rq_qos_id_to_name(enum rq_qos_id id) +{ + switch (id) { + case RQ_QOS_WBT: + return "wbt"; + case RQ_QOS_CGROUP: + return "cgroup"; + } + return "unknown"; +} + static inline void rq_wait_init(struct rq_wait *rq_wait) { atomic_set(&rq_wait->inflight, 0); @@ -76,6 +95,9 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos) { rqos->next = q->rq_qos; q->rq_qos = rqos; + + if (rqos->ops->debugfs_attrs) + blk_mq_debugfs_register_rqos(rqos); } static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos) @@ -91,19 +113,77 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos) } prev = cur; } + + blk_mq_debugfs_unregister_rqos(rqos); } +typedef bool (acquire_inflight_cb_t)(struct rq_wait *rqw, void *private_data); +typedef void (cleanup_cb_t)(struct rq_wait *rqw, void *private_data); + +void rq_qos_wait(struct rq_wait *rqw, void *private_data, + acquire_inflight_cb_t *acquire_inflight_cb, + cleanup_cb_t *cleanup_cb); bool rq_wait_inc_below(struct rq_wait *rq_wait, unsigned int limit); void rq_depth_scale_up(struct rq_depth *rqd); void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle); bool
rq_depth_calc_max_depth(struct rq_depth *rqd); -void rq_qos_cleanup(struct request_queue *, struct bio *); -void rq_qos_done(struct request_queue *, struct request *); -void rq_qos_issue(struct request_queue *, struct request *); -void rq_qos_requeue(struct request_queue *, struct request *); -void rq_qos_done_bio(struct request_queue *q, struct bio *bio); -void rq_qos_throttle(struct request_queue *, struct bio *, spinlock_t *); -void rq_qos_track(struct request_queue *q, struct request *, struct bio *); +void __rq_qos_cleanup(struct rq_qos *rqos, struct bio *bio); +void __rq_qos_done(struct rq_qos *rqos, struct request *rq); +void __rq_qos_issue(struct rq_qos *rqos, struct request *rq); +void __rq_qos_requeue(struct rq_qos *rqos, struct request *rq); +void __rq_qos_throttle(struct rq_qos *rqos, struct bio *bio); +void __rq_qos_track(struct rq_qos *rqos, struct request *rq, struct bio *bio); +void __rq_qos_done_bio(struct rq_qos *rqos, struct bio *bio); + +static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio) +{ + if (q->rq_qos) + __rq_qos_cleanup(q->rq_qos, bio); +} + +static inline void rq_qos_done(struct request_queue *q, struct request *rq) +{ + if (q->rq_qos) + __rq_qos_done(q->rq_qos, rq); +} + +static inline void rq_qos_issue(struct request_queue *q, struct request *rq) +{ + if (q->rq_qos) + __rq_qos_issue(q->rq_qos, rq); +} + +static inline void rq_qos_requeue(struct request_queue *q, struct request *rq) +{ + if (q->rq_qos) + __rq_qos_requeue(q->rq_qos, rq); +} + +static inline void rq_qos_done_bio(struct request_queue *q, struct bio *bio) +{ + if (q->rq_qos) + __rq_qos_done_bio(q->rq_qos, bio); +} + +static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio) +{ + /* + * BIO_TRACKED lets controllers know that a bio went through the + * normal rq_qos path. + */ + bio_set_flag(bio, BIO_TRACKED); + if (q->rq_qos) + __rq_qos_throttle(q->rq_qos, bio); +} + +static inline void rq_qos_track(struct request_queue *q, struct request *rq, + struct bio *bio) +{ + if (q->rq_qos) + __rq_qos_track(q->rq_qos, rq, bio); +} + void rq_qos_exit(struct request_queue *); + #endif diff --git a/block/blk-settings.c b/block/blk-settings.c index 696c04c1ab6c..3e7038e475ee 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -20,65 +20,12 @@ EXPORT_SYMBOL(blk_max_low_pfn); unsigned long blk_max_pfn; -/** - * blk_queue_prep_rq - set a prepare_request function for queue - * @q: queue - * @pfn: prepare_request function - * - * It's possible for a queue to register a prepare_request callback which - * is invoked before the request is handed to the request_fn. The goal of - * the function is to prepare a request for I/O, it can be used to build a - * cdb from the request data for instance. - * - */ -void blk_queue_prep_rq(struct request_queue *q, prep_rq_fn *pfn) -{ - q->prep_rq_fn = pfn; -} -EXPORT_SYMBOL(blk_queue_prep_rq); - -/** - * blk_queue_unprep_rq - set an unprepare_request function for queue - * @q: queue - * @ufn: unprepare_request function - * - * It's possible for a queue to register an unprepare_request callback - * which is invoked before the request is finally completed. The goal - * of the function is to deallocate any data that was allocated in the - * prepare_request callback. 
- * - */ -void blk_queue_unprep_rq(struct request_queue *q, unprep_rq_fn *ufn) -{ - q->unprep_rq_fn = ufn; -} -EXPORT_SYMBOL(blk_queue_unprep_rq); - -void blk_queue_softirq_done(struct request_queue *q, softirq_done_fn *fn) -{ - q->softirq_done_fn = fn; -} -EXPORT_SYMBOL(blk_queue_softirq_done); - void blk_queue_rq_timeout(struct request_queue *q, unsigned int timeout) { q->rq_timeout = timeout; } EXPORT_SYMBOL_GPL(blk_queue_rq_timeout); -void blk_queue_rq_timed_out(struct request_queue *q, rq_timed_out_fn *fn) -{ - WARN_ON_ONCE(q->mq_ops); - q->rq_timed_out_fn = fn; -} -EXPORT_SYMBOL_GPL(blk_queue_rq_timed_out); - -void blk_queue_lld_busy(struct request_queue *q, lld_busy_fn *fn) -{ - q->lld_busy_fn = fn; -} -EXPORT_SYMBOL_GPL(blk_queue_lld_busy); - /** * blk_set_default_limits - reset limits to default values * @lim: the queue_limits structure to reset @@ -109,7 +56,6 @@ void blk_set_default_limits(struct queue_limits *lim) lim->alignment_offset = 0; lim->io_opt = 0; lim->misaligned = 0; - lim->cluster = 1; lim->zoned = BLK_ZONED_NONE; } EXPORT_SYMBOL(blk_set_default_limits); @@ -169,8 +115,6 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn) q->make_request_fn = mfn; blk_queue_dma_alignment(q, 511); - blk_queue_congestion_threshold(q); - q->nr_batching = BLK_BATCH_REQ; blk_set_default_limits(&q->limits); } @@ -602,8 +546,6 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, t->io_min = max(t->io_min, b->io_min); t->io_opt = lcm_not_zero(t->io_opt, b->io_opt); - t->cluster &= b->cluster; - /* Physical block size a multiple of the logical block size? */ if (t->physical_block_size & (t->logical_block_size - 1)) { t->physical_block_size = t->logical_block_size; @@ -889,16 +831,14 @@ EXPORT_SYMBOL(blk_set_queue_depth); */ void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua) { - spin_lock_irq(q->queue_lock); if (wc) - queue_flag_set(QUEUE_FLAG_WC, q); + blk_queue_flag_set(QUEUE_FLAG_WC, q); else - queue_flag_clear(QUEUE_FLAG_WC, q); + blk_queue_flag_clear(QUEUE_FLAG_WC, q); if (fua) - queue_flag_set(QUEUE_FLAG_FUA, q); + blk_queue_flag_set(QUEUE_FLAG_FUA, q); else - queue_flag_clear(QUEUE_FLAG_FUA, q); - spin_unlock_irq(q->queue_lock); + blk_queue_flag_clear(QUEUE_FLAG_FUA, q); wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags)); } diff --git a/block/blk-softirq.c b/block/blk-softirq.c index e47a2f751884..457d9ba3eb20 100644 --- a/block/blk-softirq.c +++ b/block/blk-softirq.c @@ -34,7 +34,7 @@ static __latent_entropy void blk_done_softirq(struct softirq_action *h) rq = list_entry(local_list.next, struct request, ipi_list); list_del_init(&rq->ipi_list); - rq->q->softirq_done_fn(rq); + rq->q->mq_ops->complete(rq); } } @@ -98,11 +98,11 @@ static int blk_softirq_cpu_dead(unsigned int cpu) void __blk_complete_request(struct request *req) { struct request_queue *q = req->q; - int cpu, ccpu = q->mq_ops ? req->mq_ctx->cpu : req->cpu; + int cpu, ccpu = req->mq_ctx->cpu; unsigned long flags; bool shared = false; - BUG_ON(!q->softirq_done_fn); + BUG_ON(!q->mq_ops->complete); local_irq_save(flags); cpu = smp_processor_id(); @@ -143,27 +143,6 @@ do_local: local_irq_restore(flags); } -EXPORT_SYMBOL(__blk_complete_request); - -/** - * blk_complete_request - end I/O on a request - * @req: the request being processed - * - * Description: - * Ends all I/O on a request. It does not handle partial completions, - * unless the driver actually implements this in its completion callback - * through requeueing. 
The actual completion happens out-of-order, - * through a softirq handler. The user must have registered a completion - * callback through blk_queue_softirq_done(). - **/ -void blk_complete_request(struct request *req) -{ - if (unlikely(blk_should_fake_timeout(req->q))) - return; - if (!blk_mark_rq_complete(req)) - __blk_complete_request(req); -} -EXPORT_SYMBOL(blk_complete_request); static __init int blk_softirq_init(void) { diff --git a/block/blk-stat.c b/block/blk-stat.c index 90561af85a62..696a04176e4d 100644 --- a/block/blk-stat.c +++ b/block/blk-stat.c @@ -130,7 +130,6 @@ blk_stat_alloc_callback(void (*timer_fn)(struct blk_stat_callback *), return cb; } -EXPORT_SYMBOL_GPL(blk_stat_alloc_callback); void blk_stat_add_callback(struct request_queue *q, struct blk_stat_callback *cb) @@ -151,7 +150,6 @@ void blk_stat_add_callback(struct request_queue *q, blk_queue_flag_set(QUEUE_FLAG_STATS, q); spin_unlock(&q->stats->lock); } -EXPORT_SYMBOL_GPL(blk_stat_add_callback); void blk_stat_remove_callback(struct request_queue *q, struct blk_stat_callback *cb) @@ -164,7 +162,6 @@ void blk_stat_remove_callback(struct request_queue *q, del_timer_sync(&cb->timer); } -EXPORT_SYMBOL_GPL(blk_stat_remove_callback); static void blk_stat_free_callback_rcu(struct rcu_head *head) { @@ -181,7 +178,6 @@ void blk_stat_free_callback(struct blk_stat_callback *cb) if (cb) call_rcu(&cb->rcu, blk_stat_free_callback_rcu); } -EXPORT_SYMBOL_GPL(blk_stat_free_callback); void blk_stat_enable_accounting(struct request_queue *q) { diff --git a/block/blk-stat.h b/block/blk-stat.h index f4a1568e81a4..17b47a86eefb 100644 --- a/block/blk-stat.h +++ b/block/blk-stat.h @@ -145,6 +145,11 @@ static inline void blk_stat_activate_nsecs(struct blk_stat_callback *cb, mod_timer(&cb->timer, jiffies + nsecs_to_jiffies(nsecs)); } +static inline void blk_stat_deactivate(struct blk_stat_callback *cb) +{ + del_timer_sync(&cb->timer); +} + /** * blk_stat_activate_msecs() - Gather block statistics during a time window in * milliseconds. 
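[Note on the blk-stat changes above: with the EXPORT_SYMBOL_GPL()s dropped, the callback API becomes private to block/, and the new blk_stat_deactivate() helper lets a user stop a callback's window timer without freeing the callback; wbt_disable_default() further down picks this up. A minimal sketch of the resulting lifecycle, for illustration only: the my_* names are hypothetical, and only code living in block/ could use this after this series.]

	#include <linux/blkdev.h>

	#include "blk-stat.h"

	/* Runs from the callback's timer once the window expires. */
	static void my_timer_fn(struct blk_stat_callback *cb)
	{
		/* cb->stat[0] and cb->stat[1] hold the READ/WRITE buckets here. */
	}

	/* Sort each completed request into one of the two buckets. */
	static int my_bucket_fn(const struct request *rq)
	{
		return rq_data_dir(rq);
	}

	static struct blk_stat_callback *my_stats_start(struct request_queue *q)
	{
		struct blk_stat_callback *cb;

		cb = blk_stat_alloc_callback(my_timer_fn, my_bucket_fn, 2, q);
		if (!cb)
			return NULL;
		blk_stat_add_callback(q, cb);
		blk_stat_activate_msecs(cb, 10);	/* 10ms window */
		return cb;
	}

	static void my_stats_stop(struct request_queue *q,
				  struct blk_stat_callback *cb)
	{
		blk_stat_deactivate(cb);	/* new helper: del_timer_sync() */
		blk_stat_remove_callback(q, cb);
		blk_stat_free_callback(cb);
	}

[The deactivate step matters on teardown paths that can race with a pending timer: del_timer_sync() guarantees my_timer_fn is no longer running before the callback is removed and freed.]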
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index 844a454a7b3a..590d1ef2f961 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -68,7 +68,7 @@ queue_requests_store(struct request_queue *q, const char *page, size_t count) unsigned long nr; int ret, err; - if (!q->request_fn && !q->mq_ops) + if (!queue_is_mq(q)) return -EINVAL; ret = queue_var_store(&nr, page, count); @@ -78,11 +78,7 @@ queue_requests_store(struct request_queue *q, const char *page, size_t count) if (nr < BLKDEV_MIN_RQ) nr = BLKDEV_MIN_RQ; - if (q->request_fn) - err = blk_update_nr_requests(q, nr); - else - err = blk_mq_update_nr_requests(q, nr); - + err = blk_mq_update_nr_requests(q, nr); if (err) return err; @@ -136,10 +132,7 @@ static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char * static ssize_t queue_max_segment_size_show(struct request_queue *q, char *page) { - if (blk_queue_cluster(q)) - return queue_var_show(queue_max_segment_size(q), (page)); - - return queue_var_show(PAGE_SIZE, (page)); + return queue_var_show(queue_max_segment_size(q), (page)); } static ssize_t queue_logical_block_size_show(struct request_queue *q, char *page) @@ -242,10 +235,10 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count) if (max_sectors_kb > max_hw_sectors_kb || max_sectors_kb < page_kb) return -EINVAL; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); q->limits.max_sectors = max_sectors_kb << 1; q->backing_dev_info->io_pages = max_sectors_kb >> (PAGE_SHIFT - 10); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); return ret; } @@ -320,14 +313,12 @@ static ssize_t queue_nomerges_store(struct request_queue *q, const char *page, if (ret < 0) return ret; - spin_lock_irq(q->queue_lock); - queue_flag_clear(QUEUE_FLAG_NOMERGES, q); - queue_flag_clear(QUEUE_FLAG_NOXMERGES, q); + blk_queue_flag_clear(QUEUE_FLAG_NOMERGES, q); + blk_queue_flag_clear(QUEUE_FLAG_NOXMERGES, q); if (nm == 2) - queue_flag_set(QUEUE_FLAG_NOMERGES, q); + blk_queue_flag_set(QUEUE_FLAG_NOMERGES, q); else if (nm) - queue_flag_set(QUEUE_FLAG_NOXMERGES, q); - spin_unlock_irq(q->queue_lock); + blk_queue_flag_set(QUEUE_FLAG_NOXMERGES, q); return ret; } @@ -351,18 +342,16 @@ queue_rq_affinity_store(struct request_queue *q, const char *page, size_t count) if (ret < 0) return ret; - spin_lock_irq(q->queue_lock); if (val == 2) { - queue_flag_set(QUEUE_FLAG_SAME_COMP, q); - queue_flag_set(QUEUE_FLAG_SAME_FORCE, q); + blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q); + blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, q); } else if (val == 1) { - queue_flag_set(QUEUE_FLAG_SAME_COMP, q); - queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q); + blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q); + blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q); } else if (val == 0) { - queue_flag_clear(QUEUE_FLAG_SAME_COMP, q); - queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q); + blk_queue_flag_clear(QUEUE_FLAG_SAME_COMP, q); + blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q); } - spin_unlock_irq(q->queue_lock); #endif return ret; } @@ -410,7 +399,8 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page, unsigned long poll_on; ssize_t ret; - if (!q->mq_ops || !q->mq_ops->poll) + if (!q->tag_set || q->tag_set->nr_maps <= HCTX_TYPE_POLL || + !q->tag_set->map[HCTX_TYPE_POLL].nr_queues) return -EINVAL; ret = queue_var_store(&poll_on, page, count); @@ -425,6 +415,26 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page, return ret; } +static ssize_t queue_io_timeout_show(struct request_queue *q, 
char *page) +{ + return sprintf(page, "%u\n", jiffies_to_msecs(q->rq_timeout)); +} + +static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page, + size_t count) +{ + unsigned int val; + int err; + + err = kstrtou32(page, 10, &val); + if (err || val == 0) + return -EINVAL; + + blk_queue_rq_timeout(q, msecs_to_jiffies(val)); + + return count; +} + static ssize_t queue_wb_lat_show(struct request_queue *q, char *page) { if (!wbt_rq_qos(q)) @@ -463,20 +473,14 @@ static ssize_t queue_wb_lat_store(struct request_queue *q, const char *page, * ends up either enabling or disabling wbt completely. We can't * have IO inflight if that happens. */ - if (q->mq_ops) { - blk_mq_freeze_queue(q); - blk_mq_quiesce_queue(q); - } else - blk_queue_bypass_start(q); + blk_mq_freeze_queue(q); + blk_mq_quiesce_queue(q); wbt_set_min_lat(q, val); wbt_update_limits(q); - if (q->mq_ops) { - blk_mq_unquiesce_queue(q); - blk_mq_unfreeze_queue(q); - } else - blk_queue_bypass_end(q); + blk_mq_unquiesce_queue(q); + blk_mq_unfreeze_queue(q); return count; } @@ -699,6 +703,12 @@ static struct queue_sysfs_entry queue_dax_entry = { .show = queue_dax_show, }; +static struct queue_sysfs_entry queue_io_timeout_entry = { + .attr = {.name = "io_timeout", .mode = 0644 }, + .show = queue_io_timeout_show, + .store = queue_io_timeout_store, +}; + static struct queue_sysfs_entry queue_wb_lat_entry = { .attr = {.name = "wbt_lat_usec", .mode = 0644 }, .show = queue_wb_lat_show, @@ -748,6 +758,7 @@ static struct attribute *default_attrs[] = { &queue_dax_entry.attr, &queue_wb_lat_entry.attr, &queue_poll_delay_entry.attr, + &queue_io_timeout_entry.attr, #ifdef CONFIG_BLK_DEV_THROTTLING_LOW &throtl_sample_time_entry.attr, #endif @@ -847,24 +858,14 @@ static void __blk_release_queue(struct work_struct *work) blk_free_queue_stats(q->stats); - blk_exit_rl(q, &q->root_rl); - - if (q->queue_tags) - __blk_queue_free_tags(q); - blk_queue_free_zone_bitmaps(q); - if (!q->mq_ops) { - if (q->exit_rq_fn) - q->exit_rq_fn(q, q->fq->flush_rq); - blk_free_flush_queue(q->fq); - } else { + if (queue_is_mq(q)) blk_mq_release(q); - } blk_trace_shutdown(q); - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_debugfs_unregister(q); bioset_exit(&q->bio_split); @@ -909,7 +910,7 @@ int blk_register_queue(struct gendisk *disk) WARN_ONCE(test_bit(QUEUE_FLAG_REGISTERED, &q->queue_flags), "%s is registering an already registered queue\n", kobject_name(&dev->kobj)); - queue_flag_set_unlocked(QUEUE_FLAG_REGISTERED, q); + blk_queue_flag_set(QUEUE_FLAG_REGISTERED, q); /* * SCSI probing may synchronously create and destroy a lot of @@ -921,9 +922,8 @@ int blk_register_queue(struct gendisk *disk) * request_queues for non-existent devices never get registered. 
*/ if (!blk_queue_init_done(q)) { - queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q); + blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q); percpu_ref_switch_to_percpu(&q->q_usage_counter); - blk_queue_bypass_end(q); } ret = blk_trace_init_sysfs(dev); @@ -939,7 +939,7 @@ int blk_register_queue(struct gendisk *disk) goto unlock; } - if (q->mq_ops) { + if (queue_is_mq(q)) { __blk_mq_register_dev(dev, q); blk_mq_debugfs_register(q); } @@ -950,7 +950,7 @@ int blk_register_queue(struct gendisk *disk) blk_throtl_register_queue(q); - if (q->request_fn || (q->mq_ops && q->elevator)) { + if (q->elevator) { ret = elv_register_queue(q); if (ret) { mutex_unlock(&q->sysfs_lock); @@ -999,7 +999,7 @@ void blk_unregister_queue(struct gendisk *disk) * Remove the sysfs attributes before unregistering the queue data * structures that can be modified through sysfs. */ - if (q->mq_ops) + if (queue_is_mq(q)) blk_mq_unregister_dev(disk_to_dev(disk), q); mutex_unlock(&q->sysfs_lock); @@ -1008,7 +1008,7 @@ void blk_unregister_queue(struct gendisk *disk) blk_trace_remove_sysfs(disk_to_dev(disk)); mutex_lock(&q->sysfs_lock); - if (q->request_fn || (q->mq_ops && q->elevator)) + if (q->elevator) elv_unregister_queue(q); mutex_unlock(&q->sysfs_lock); diff --git a/block/blk-tag.c b/block/blk-tag.c deleted file mode 100644 index fbc153aef166..000000000000 --- a/block/blk-tag.c +++ /dev/null @@ -1,378 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Functions related to tagged command queuing - */ -#include -#include -#include -#include -#include - -#include "blk.h" - -/** - * blk_queue_find_tag - find a request by its tag and queue - * @q: The request queue for the device - * @tag: The tag of the request - * - * Notes: - * Should be used when a device returns a tag and you want to match - * it with a request. - * - * no locks need be held. - **/ -struct request *blk_queue_find_tag(struct request_queue *q, int tag) -{ - return blk_map_queue_find_tag(q->queue_tags, tag); -} -EXPORT_SYMBOL(blk_queue_find_tag); - -/** - * blk_free_tags - release a given set of tag maintenance info - * @bqt: the tag map to free - * - * Drop the reference count on @bqt and frees it when the last reference - * is dropped. - */ -void blk_free_tags(struct blk_queue_tag *bqt) -{ - if (atomic_dec_and_test(&bqt->refcnt)) { - BUG_ON(find_first_bit(bqt->tag_map, bqt->max_depth) < - bqt->max_depth); - - kfree(bqt->tag_index); - bqt->tag_index = NULL; - - kfree(bqt->tag_map); - bqt->tag_map = NULL; - - kfree(bqt); - } -} -EXPORT_SYMBOL(blk_free_tags); - -/** - * __blk_queue_free_tags - release tag maintenance info - * @q: the request queue for the device - * - * Notes: - * blk_cleanup_queue() will take care of calling this function, if tagging - * has been used. So there's no need to call this directly. - **/ -void __blk_queue_free_tags(struct request_queue *q) -{ - struct blk_queue_tag *bqt = q->queue_tags; - - if (!bqt) - return; - - blk_free_tags(bqt); - - q->queue_tags = NULL; - queue_flag_clear_unlocked(QUEUE_FLAG_QUEUED, q); -} - -/** - * blk_queue_free_tags - release tag maintenance info - * @q: the request queue for the device - * - * Notes: - * This is used to disable tagged queuing to a device, yet leave - * queue in function. 
- **/ -void blk_queue_free_tags(struct request_queue *q) -{ - queue_flag_clear_unlocked(QUEUE_FLAG_QUEUED, q); -} -EXPORT_SYMBOL(blk_queue_free_tags); - -static int -init_tag_map(struct request_queue *q, struct blk_queue_tag *tags, int depth) -{ - struct request **tag_index; - unsigned long *tag_map; - int nr_ulongs; - - if (q && depth > q->nr_requests * 2) { - depth = q->nr_requests * 2; - printk(KERN_ERR "%s: adjusted depth to %d\n", - __func__, depth); - } - - tag_index = kcalloc(depth, sizeof(struct request *), GFP_ATOMIC); - if (!tag_index) - goto fail; - - nr_ulongs = ALIGN(depth, BITS_PER_LONG) / BITS_PER_LONG; - tag_map = kcalloc(nr_ulongs, sizeof(unsigned long), GFP_ATOMIC); - if (!tag_map) - goto fail; - - tags->real_max_depth = depth; - tags->max_depth = depth; - tags->tag_index = tag_index; - tags->tag_map = tag_map; - - return 0; -fail: - kfree(tag_index); - return -ENOMEM; -} - -static struct blk_queue_tag *__blk_queue_init_tags(struct request_queue *q, - int depth, int alloc_policy) -{ - struct blk_queue_tag *tags; - - tags = kmalloc(sizeof(struct blk_queue_tag), GFP_ATOMIC); - if (!tags) - goto fail; - - if (init_tag_map(q, tags, depth)) - goto fail; - - atomic_set(&tags->refcnt, 1); - tags->alloc_policy = alloc_policy; - tags->next_tag = 0; - return tags; -fail: - kfree(tags); - return NULL; -} - -/** - * blk_init_tags - initialize the tag info for an external tag map - * @depth: the maximum queue depth supported - * @alloc_policy: tag allocation policy - **/ -struct blk_queue_tag *blk_init_tags(int depth, int alloc_policy) -{ - return __blk_queue_init_tags(NULL, depth, alloc_policy); -} -EXPORT_SYMBOL(blk_init_tags); - -/** - * blk_queue_init_tags - initialize the queue tag info - * @q: the request queue for the device - * @depth: the maximum queue depth supported - * @tags: the tag to use - * @alloc_policy: tag allocation policy - * - * Queue lock must be held here if the function is called to resize an - * existing map. - **/ -int blk_queue_init_tags(struct request_queue *q, int depth, - struct blk_queue_tag *tags, int alloc_policy) -{ - int rc; - - BUG_ON(tags && q->queue_tags && tags != q->queue_tags); - - if (!tags && !q->queue_tags) { - tags = __blk_queue_init_tags(q, depth, alloc_policy); - - if (!tags) - return -ENOMEM; - - } else if (q->queue_tags) { - rc = blk_queue_resize_tags(q, depth); - if (rc) - return rc; - queue_flag_set(QUEUE_FLAG_QUEUED, q); - return 0; - } else - atomic_inc(&tags->refcnt); - - /* - * assign it, all done - */ - q->queue_tags = tags; - queue_flag_set_unlocked(QUEUE_FLAG_QUEUED, q); - return 0; -} -EXPORT_SYMBOL(blk_queue_init_tags); - -/** - * blk_queue_resize_tags - change the queueing depth - * @q: the request queue for the device - * @new_depth: the new max command queueing depth - * - * Notes: - * Must be called with the queue lock held. - **/ -int blk_queue_resize_tags(struct request_queue *q, int new_depth) -{ - struct blk_queue_tag *bqt = q->queue_tags; - struct request **tag_index; - unsigned long *tag_map; - int max_depth, nr_ulongs; - - if (!bqt) - return -ENXIO; - - /* - * if we already have large enough real_max_depth. just - * adjust max_depth. *NOTE* as requests with tag value - * between new_depth and real_max_depth can be in-flight, tag - * map can not be shrunk blindly here. 
- */ - if (new_depth <= bqt->real_max_depth) { - bqt->max_depth = new_depth; - return 0; - } - - /* - * Currently cannot replace a shared tag map with a new - * one, so error out if this is the case - */ - if (atomic_read(&bqt->refcnt) != 1) - return -EBUSY; - - /* - * save the old state info, so we can copy it back - */ - tag_index = bqt->tag_index; - tag_map = bqt->tag_map; - max_depth = bqt->real_max_depth; - - if (init_tag_map(q, bqt, new_depth)) - return -ENOMEM; - - memcpy(bqt->tag_index, tag_index, max_depth * sizeof(struct request *)); - nr_ulongs = ALIGN(max_depth, BITS_PER_LONG) / BITS_PER_LONG; - memcpy(bqt->tag_map, tag_map, nr_ulongs * sizeof(unsigned long)); - - kfree(tag_index); - kfree(tag_map); - return 0; -} -EXPORT_SYMBOL(blk_queue_resize_tags); - -/** - * blk_queue_end_tag - end tag operations for a request - * @q: the request queue for the device - * @rq: the request that has completed - * - * Description: - * Typically called when end_that_request_first() returns %0, meaning - * all transfers have been done for a request. It's important to call - * this function before end_that_request_last(), as that will put the - * request back on the free list thus corrupting the internal tag list. - **/ -void blk_queue_end_tag(struct request_queue *q, struct request *rq) -{ - struct blk_queue_tag *bqt = q->queue_tags; - unsigned tag = rq->tag; /* negative tags invalid */ - - lockdep_assert_held(q->queue_lock); - - BUG_ON(tag >= bqt->real_max_depth); - - list_del_init(&rq->queuelist); - rq->rq_flags &= ~RQF_QUEUED; - rq->tag = -1; - rq->internal_tag = -1; - - if (unlikely(bqt->tag_index[tag] == NULL)) - printk(KERN_ERR "%s: tag %d is missing\n", - __func__, tag); - - bqt->tag_index[tag] = NULL; - - if (unlikely(!test_bit(tag, bqt->tag_map))) { - printk(KERN_ERR "%s: attempt to clear non-busy tag (%d)\n", - __func__, tag); - return; - } - /* - * The tag_map bit acts as a lock for tag_index[bit], so we need - * unlock memory barrier semantics. - */ - clear_bit_unlock(tag, bqt->tag_map); -} - -/** - * blk_queue_start_tag - find a free tag and assign it - * @q: the request queue for the device - * @rq: the block request that needs tagging - * - * Description: - * This can either be used as a stand-alone helper, or possibly be - * assigned as the queue &prep_rq_fn (in which case &struct request - * automagically gets a tag assigned). Note that this function - * assumes that any type of request can be queued! if this is not - * true for your device, you must check the request type before - * calling this function. The request will also be removed from - * the request queue, so it's the drivers responsibility to readd - * it if it should need to be restarted for some reason. - **/ -int blk_queue_start_tag(struct request_queue *q, struct request *rq) -{ - struct blk_queue_tag *bqt = q->queue_tags; - unsigned max_depth; - int tag; - - lockdep_assert_held(q->queue_lock); - - if (unlikely((rq->rq_flags & RQF_QUEUED))) { - printk(KERN_ERR - "%s: request %p for device [%s] already tagged %d", - __func__, rq, - rq->rq_disk ? rq->rq_disk->disk_name : "?", rq->tag); - BUG(); - } - - /* - * Protect against shared tag maps, as we may not have exclusive - * access to the tag map. - * - * We reserve a few tags just for sync IO, since we don't want - * to starve sync IO on behalf of flooding async IO. 
- */ - max_depth = bqt->max_depth; - if (!rq_is_sync(rq) && max_depth > 1) { - switch (max_depth) { - case 2: - max_depth = 1; - break; - case 3: - max_depth = 2; - break; - default: - max_depth -= 2; - } - if (q->in_flight[BLK_RW_ASYNC] > max_depth) - return 1; - } - - do { - if (bqt->alloc_policy == BLK_TAG_ALLOC_FIFO) { - tag = find_first_zero_bit(bqt->tag_map, max_depth); - if (tag >= max_depth) - return 1; - } else { - int start = bqt->next_tag; - int size = min_t(int, bqt->max_depth, max_depth + start); - tag = find_next_zero_bit(bqt->tag_map, size, start); - if (tag >= size && start + size > bqt->max_depth) { - size = start + size - bqt->max_depth; - tag = find_first_zero_bit(bqt->tag_map, size); - } - if (tag >= size) - return 1; - } - - } while (test_and_set_bit_lock(tag, bqt->tag_map)); - /* - * We need lock ordering semantics given by test_and_set_bit_lock. - * See blk_queue_end_tag for details. - */ - - bqt->next_tag = (tag + 1) % bqt->max_depth; - rq->rq_flags |= RQF_QUEUED; - rq->tag = tag; - bqt->tag_index[tag] = rq; - blk_start_request(rq); - return 0; -} -EXPORT_SYMBOL(blk_queue_start_tag); diff --git a/block/blk-throttle.c b/block/blk-throttle.c index db1a3a2ae006..1b97a73d2fb1 100644 --- a/block/blk-throttle.c +++ b/block/blk-throttle.c @@ -1243,7 +1243,7 @@ static void throtl_pending_timer_fn(struct timer_list *t) bool dispatched; int ret; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); if (throtl_can_upgrade(td, NULL)) throtl_upgrade_state(td); @@ -1266,9 +1266,9 @@ again: break; /* this dispatch windows is still open, relax and repeat */ - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); cpu_relax(); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); } if (!dispatched) @@ -1290,7 +1290,7 @@ again: queue_work(kthrotld_workqueue, &td->dispatch_work); } out_unlock: - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); } /** @@ -1314,11 +1314,11 @@ static void blk_throtl_dispatch_work_fn(struct work_struct *work) bio_list_init(&bio_list_on_stack); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); for (rw = READ; rw <= WRITE; rw++) while ((bio = throtl_pop_queued(&td_sq->queued[rw], NULL))) bio_list_add(&bio_list_on_stack, bio); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); if (!bio_list_empty(&bio_list_on_stack)) { blk_start_plug(&plug); @@ -2115,16 +2115,6 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td) } #endif -static void blk_throtl_assoc_bio(struct throtl_grp *tg, struct bio *bio) -{ -#ifdef CONFIG_BLK_DEV_THROTTLING_LOW - /* fallback to root_blkg if we fail to get a blkg ref */ - if (bio->bi_css && (bio_associate_blkg(bio, tg_to_blkg(tg)) == -ENODEV)) - bio_associate_blkg(bio, bio->bi_disk->queue->root_blkg); - bio_issue_init(&bio->bi_issue, bio_sectors(bio)); -#endif -} - bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg, struct bio *bio) { @@ -2141,14 +2131,10 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg, if (bio_flagged(bio, BIO_THROTTLED) || !tg->has_rules[rw]) goto out; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); throtl_update_latency_buckets(td); - if (unlikely(blk_queue_bypass(q))) - goto out_unlock; - - blk_throtl_assoc_bio(tg, bio); blk_throtl_update_idletime(tg); sq = &tg->service_queue; @@ -2227,7 +2213,7 @@ again: } out_unlock: - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); out: bio_set_flag(bio, BIO_THROTTLED); @@ -2348,7 +2334,7 @@ static 
void tg_drain_bios(struct throtl_service_queue *parent_sq) * Dispatch all currently throttled bios on @q through ->make_request_fn(). */ void blk_throtl_drain(struct request_queue *q) - __releases(q->queue_lock) __acquires(q->queue_lock) + __releases(&q->queue_lock) __acquires(&q->queue_lock) { struct throtl_data *td = q->td; struct blkcg_gq *blkg; @@ -2356,7 +2342,6 @@ void blk_throtl_drain(struct request_queue *q) struct bio *bio; int rw; - queue_lockdep_assert_held(q); rcu_read_lock(); /* @@ -2372,7 +2357,7 @@ void blk_throtl_drain(struct request_queue *q) tg_drain_bios(&td->service_queue); rcu_read_unlock(); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&q->queue_lock); /* all bios now should be in td->service_queue, issue them */ for (rw = READ; rw <= WRITE; rw++) @@ -2380,7 +2365,7 @@ void blk_throtl_drain(struct request_queue *q) NULL))) generic_make_request(bio); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&q->queue_lock); } int blk_throtl_init(struct request_queue *q) @@ -2460,7 +2445,7 @@ void blk_throtl_register_queue(struct request_queue *q) td->throtl_slice = DFL_THROTL_SLICE_HD; #endif - td->track_bio_latency = !queue_is_rq_based(q); + td->track_bio_latency = !queue_is_mq(q); if (!td->track_bio_latency) blk_stat_enable_accounting(q); } diff --git a/block/blk-timeout.c b/block/blk-timeout.c index f2cfd56e1606..124c26128bf6 100644 --- a/block/blk-timeout.c +++ b/block/blk-timeout.c @@ -68,80 +68,6 @@ ssize_t part_timeout_store(struct device *dev, struct device_attribute *attr, #endif /* CONFIG_FAIL_IO_TIMEOUT */ -/* - * blk_delete_timer - Delete/cancel timer for a given function. - * @req: request that we are canceling timer for - * - */ -void blk_delete_timer(struct request *req) -{ - list_del_init(&req->timeout_list); -} - -static void blk_rq_timed_out(struct request *req) -{ - struct request_queue *q = req->q; - enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER; - - if (q->rq_timed_out_fn) - ret = q->rq_timed_out_fn(req); - switch (ret) { - case BLK_EH_RESET_TIMER: - blk_add_timer(req); - blk_clear_rq_complete(req); - break; - case BLK_EH_DONE: - /* - * LLD handles this for now but in the future - * we can send a request msg to abort the command - * and we can move more of the generic scsi eh code to - * the blk layer. 
- */ - break; - default: - printk(KERN_ERR "block: bad eh return: %d\n", ret); - break; - } -} - -static void blk_rq_check_expired(struct request *rq, unsigned long *next_timeout, - unsigned int *next_set) -{ - const unsigned long deadline = blk_rq_deadline(rq); - - if (time_after_eq(jiffies, deadline)) { - list_del_init(&rq->timeout_list); - - /* - * Check if we raced with end io completion - */ - if (!blk_mark_rq_complete(rq)) - blk_rq_timed_out(rq); - } else if (!*next_set || time_after(*next_timeout, deadline)) { - *next_timeout = deadline; - *next_set = 1; - } -} - -void blk_timeout_work(struct work_struct *work) -{ - struct request_queue *q = - container_of(work, struct request_queue, timeout_work); - unsigned long flags, next = 0; - struct request *rq, *tmp; - int next_set = 0; - - spin_lock_irqsave(q->queue_lock, flags); - - list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list) - blk_rq_check_expired(rq, &next, &next_set); - - if (next_set) - mod_timer(&q->timeout, round_jiffies_up(next)); - - spin_unlock_irqrestore(q->queue_lock, flags); -} - /** * blk_abort_request -- Request request recovery for the specified command * @req: pointer to the request of interest @@ -149,24 +75,17 @@ void blk_timeout_work(struct work_struct *work) * This function requests that the block layer start recovery for the * request by deleting the timer and calling the q's timeout function. * LLDDs who implement their own error recovery MAY ignore the timeout - * event if they generated blk_abort_req. Must hold queue lock. + * event if they generated blk_abort_request. */ void blk_abort_request(struct request *req) { - if (req->q->mq_ops) { - /* - * All we need to ensure is that timeout scan takes place - * immediately and that scan sees the new timeout value. - * No need for fancy synchronizations. - */ - blk_rq_set_deadline(req, jiffies); - kblockd_schedule_work(&req->q->timeout_work); - } else { - if (blk_mark_rq_complete(req)) - return; - blk_delete_timer(req); - blk_rq_timed_out(req); - } + /* + * All we need to ensure is that timeout scan takes place + * immediately and that scan sees the new timeout value. + * No need for fancy synchronizations. + */ + WRITE_ONCE(req->deadline, jiffies); + kblockd_schedule_work(&req->q->timeout_work); } EXPORT_SYMBOL_GPL(blk_abort_request); @@ -194,15 +113,6 @@ void blk_add_timer(struct request *req) struct request_queue *q = req->q; unsigned long expiry; - if (!q->mq_ops) - lockdep_assert_held(q->queue_lock); - - /* blk-mq has its own handler, so we don't need ->rq_timed_out_fn */ - if (!q->mq_ops && !q->rq_timed_out_fn) - return; - - BUG_ON(!list_empty(&req->timeout_list)); - /* * Some LLDs, like scsi, peek at the timeout to prevent a * command from being retried forever. @@ -211,21 +121,16 @@ void blk_add_timer(struct request *req) req->timeout = q->rq_timeout; req->rq_flags &= ~RQF_TIMED_OUT; - blk_rq_set_deadline(req, jiffies + req->timeout); - /* - * Only the non-mq case needs to add the request to a protected list. - * For the mq case we simply scan the tag map. - */ - if (!q->mq_ops) - list_add_tail(&req->timeout_list, &req->q->timeout_list); + expiry = jiffies + req->timeout; + WRITE_ONCE(req->deadline, expiry); /* * If the timer isn't already pending or this timeout is earlier * than an existing one, modify the timer. Round up to next nearest * second. 
*/ - expiry = blk_rq_timeout(round_jiffies_up(blk_rq_deadline(req))); + expiry = blk_rq_timeout(round_jiffies_up(expiry)); if (!timer_pending(&q->timeout) || time_before(expiry, q->timeout.expires)) { diff --git a/block/blk-wbt.c b/block/blk-wbt.c index 8ac93fcbaa2e..f0c56649775f 100644 --- a/block/blk-wbt.c +++ b/block/blk-wbt.c @@ -489,31 +489,21 @@ static inline unsigned int get_limit(struct rq_wb *rwb, unsigned long rw) } struct wbt_wait_data { - struct wait_queue_entry wq; - struct task_struct *task; struct rq_wb *rwb; - struct rq_wait *rqw; + enum wbt_flags wb_acct; unsigned long rw; - bool got_token; }; -static int wbt_wake_function(struct wait_queue_entry *curr, unsigned int mode, - int wake_flags, void *key) +static bool wbt_inflight_cb(struct rq_wait *rqw, void *private_data) { - struct wbt_wait_data *data = container_of(curr, struct wbt_wait_data, - wq); - - /* - * If we fail to get a budget, return -1 to interrupt the wake up - * loop in __wake_up_common. - */ - if (!rq_wait_inc_below(data->rqw, get_limit(data->rwb, data->rw))) - return -1; + struct wbt_wait_data *data = private_data; + return rq_wait_inc_below(rqw, get_limit(data->rwb, data->rw)); +} - data->got_token = true; - list_del_init(&curr->entry); - wake_up_process(data->task); - return 1; +static void wbt_cleanup_cb(struct rq_wait *rqw, void *private_data) +{ + struct wbt_wait_data *data = private_data; + wbt_rqw_done(data->rwb, rqw, data->wb_acct); } /* @@ -521,57 +511,16 @@ static int wbt_wake_function(struct wait_queue_entry *curr, unsigned int mode, * the timer to kick off queuing again. */ static void __wbt_wait(struct rq_wb *rwb, enum wbt_flags wb_acct, - unsigned long rw, spinlock_t *lock) - __releases(lock) - __acquires(lock) + unsigned long rw) { struct rq_wait *rqw = get_rq_wait(rwb, wb_acct); struct wbt_wait_data data = { - .wq = { - .func = wbt_wake_function, - .entry = LIST_HEAD_INIT(data.wq.entry), - }, - .task = current, .rwb = rwb, - .rqw = rqw, + .wb_acct = wb_acct, .rw = rw, }; - bool has_sleeper; - - has_sleeper = wq_has_sleeper(&rqw->wait); - if (!has_sleeper && rq_wait_inc_below(rqw, get_limit(rwb, rw))) - return; - prepare_to_wait_exclusive(&rqw->wait, &data.wq, TASK_UNINTERRUPTIBLE); - do { - if (data.got_token) - break; - - if (!has_sleeper && - rq_wait_inc_below(rqw, get_limit(rwb, rw))) { - finish_wait(&rqw->wait, &data.wq); - - /* - * We raced with wbt_wake_function() getting a token, - * which means we now have two. Put our local token - * and wake anyone else potentially waiting for one. - */ - if (data.got_token) - wbt_rqw_done(rwb, rqw, wb_acct); - break; - } - - if (lock) { - spin_unlock_irq(lock); - io_schedule(); - spin_lock_irq(lock); - } else - io_schedule(); - - has_sleeper = false; - } while (1); - - finish_wait(&rqw->wait, &data.wq); + rq_qos_wait(rqw, &data, wbt_inflight_cb, wbt_cleanup_cb); } static inline bool wbt_should_throttle(struct rq_wb *rwb, struct bio *bio) @@ -624,7 +573,7 @@ static void wbt_cleanup(struct rq_qos *rqos, struct bio *bio) * in an irq held spinlock, if it holds one when calling this function. * If we do sleep, we'll release and re-grab it. 
*/ -static void wbt_wait(struct rq_qos *rqos, struct bio *bio, spinlock_t *lock) +static void wbt_wait(struct rq_qos *rqos, struct bio *bio) { struct rq_wb *rwb = RQWB(rqos); enum wbt_flags flags; @@ -636,7 +585,7 @@ static void wbt_wait(struct rq_qos *rqos, struct bio *bio, spinlock_t *lock) return; } - __wbt_wait(rwb, flags, bio->bi_opf, lock); + __wbt_wait(rwb, flags, bio->bi_opf); if (!blk_stat_is_active(rwb->cb)) rwb_arm_timer(rwb); @@ -709,8 +658,7 @@ void wbt_enable_default(struct request_queue *q) if (!test_bit(QUEUE_FLAG_REGISTERED, &q->queue_flags)) return; - if ((q->mq_ops && IS_ENABLED(CONFIG_BLK_WBT_MQ)) || - (q->request_fn && IS_ENABLED(CONFIG_BLK_WBT_SQ))) + if (queue_is_mq(q) && IS_ENABLED(CONFIG_BLK_WBT_MQ)) wbt_init(q); } EXPORT_SYMBOL_GPL(wbt_enable_default); @@ -760,11 +708,100 @@ void wbt_disable_default(struct request_queue *q) if (!rqos) return; rwb = RQWB(rqos); - if (rwb->enable_state == WBT_STATE_ON_DEFAULT) + if (rwb->enable_state == WBT_STATE_ON_DEFAULT) { + blk_stat_deactivate(rwb->cb); rwb->wb_normal = 0; + } } EXPORT_SYMBOL_GPL(wbt_disable_default); +#ifdef CONFIG_BLK_DEBUG_FS +static int wbt_curr_win_nsec_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + struct rq_wb *rwb = RQWB(rqos); + + seq_printf(m, "%llu\n", rwb->cur_win_nsec); + return 0; +} + +static int wbt_enabled_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + struct rq_wb *rwb = RQWB(rqos); + + seq_printf(m, "%d\n", rwb->enable_state); + return 0; +} + +static int wbt_id_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + + seq_printf(m, "%u\n", rqos->id); + return 0; +} + +static int wbt_inflight_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + struct rq_wb *rwb = RQWB(rqos); + int i; + + for (i = 0; i < WBT_NUM_RWQ; i++) + seq_printf(m, "%d: inflight %d\n", i, + atomic_read(&rwb->rq_wait[i].inflight)); + return 0; +} + +static int wbt_min_lat_nsec_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + struct rq_wb *rwb = RQWB(rqos); + + seq_printf(m, "%lu\n", rwb->min_lat_nsec); + return 0; +} + +static int wbt_unknown_cnt_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + struct rq_wb *rwb = RQWB(rqos); + + seq_printf(m, "%u\n", rwb->unknown_cnt); + return 0; +} + +static int wbt_normal_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + struct rq_wb *rwb = RQWB(rqos); + + seq_printf(m, "%u\n", rwb->wb_normal); + return 0; +} + +static int wbt_background_show(void *data, struct seq_file *m) +{ + struct rq_qos *rqos = data; + struct rq_wb *rwb = RQWB(rqos); + + seq_printf(m, "%u\n", rwb->wb_background); + return 0; +} + +static const struct blk_mq_debugfs_attr wbt_debugfs_attrs[] = { + {"curr_win_nsec", 0400, wbt_curr_win_nsec_show}, + {"enabled", 0400, wbt_enabled_show}, + {"id", 0400, wbt_id_show}, + {"inflight", 0400, wbt_inflight_show}, + {"min_lat_nsec", 0400, wbt_min_lat_nsec_show}, + {"unknown_cnt", 0400, wbt_unknown_cnt_show}, + {"wb_normal", 0400, wbt_normal_show}, + {"wb_background", 0400, wbt_background_show}, + {}, +}; +#endif static struct rq_qos_ops wbt_rqos_ops = { .throttle = wbt_wait, @@ -774,6 +811,9 @@ static struct rq_qos_ops wbt_rqos_ops = { .done = wbt_done, .cleanup = wbt_cleanup, .exit = wbt_exit, +#ifdef CONFIG_BLK_DEBUG_FS + .debugfs_attrs = wbt_debugfs_attrs, +#endif }; int wbt_init(struct request_queue *q) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index a327bef07642..2d98803faec2 100644 --- a/block/blk-zoned.c +++ 
b/block/blk-zoned.c @@ -421,7 +421,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk) * BIO based queues do not use a scheduler so only q->nr_zones * needs to be updated so that the sysfs exposed value is correct. */ - if (!queue_is_rq_based(q)) { + if (!queue_is_mq(q)) { q->nr_zones = nr_zones; return 0; } diff --git a/block/blk.h b/block/blk.h index 0089fefdf771..848278c52030 100644 --- a/block/blk.h +++ b/block/blk.h @@ -7,12 +7,6 @@ #include #include "blk-mq.h" -/* Amount of time in which a process may batch requests */ -#define BLK_BATCH_TIME (HZ/50UL) - -/* Number of requests a "batching" process may submit */ -#define BLK_BATCH_REQ 32 - /* Max future timer expiry for timeouts */ #define BLK_MAX_TIMEOUT (5 * HZ) @@ -38,85 +32,13 @@ struct blk_flush_queue { }; extern struct kmem_cache *blk_requestq_cachep; -extern struct kmem_cache *request_cachep; extern struct kobj_type blk_queue_ktype; extern struct ida blk_queue_ida; -/* - * @q->queue_lock is set while a queue is being initialized. Since we know - * that no other threads access the queue object before @q->queue_lock has - * been set, it is safe to manipulate queue flags without holding the - * queue_lock if @q->queue_lock == NULL. See also blk_alloc_queue_node() and - * blk_init_allocated_queue(). - */ -static inline void queue_lockdep_assert_held(struct request_queue *q) -{ - if (q->queue_lock) - lockdep_assert_held(q->queue_lock); -} - -static inline void queue_flag_set_unlocked(unsigned int flag, - struct request_queue *q) -{ - if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) && - kref_read(&q->kobj.kref)) - lockdep_assert_held(q->queue_lock); - __set_bit(flag, &q->queue_flags); -} - -static inline void queue_flag_clear_unlocked(unsigned int flag, - struct request_queue *q) -{ - if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) && - kref_read(&q->kobj.kref)) - lockdep_assert_held(q->queue_lock); - __clear_bit(flag, &q->queue_flags); -} - -static inline int queue_flag_test_and_clear(unsigned int flag, - struct request_queue *q) -{ - queue_lockdep_assert_held(q); - - if (test_bit(flag, &q->queue_flags)) { - __clear_bit(flag, &q->queue_flags); - return 1; - } - - return 0; -} - -static inline int queue_flag_test_and_set(unsigned int flag, - struct request_queue *q) -{ - queue_lockdep_assert_held(q); - - if (!test_bit(flag, &q->queue_flags)) { - __set_bit(flag, &q->queue_flags); - return 0; - } - - return 1; -} - -static inline void queue_flag_set(unsigned int flag, struct request_queue *q) -{ - queue_lockdep_assert_held(q); - __set_bit(flag, &q->queue_flags); -} - -static inline void queue_flag_clear(unsigned int flag, struct request_queue *q) -{ - queue_lockdep_assert_held(q); - __clear_bit(flag, &q->queue_flags); -} - -static inline struct blk_flush_queue *blk_get_flush_queue( - struct request_queue *q, struct blk_mq_ctx *ctx) +static inline struct blk_flush_queue * +blk_get_flush_queue(struct request_queue *q, struct blk_mq_ctx *ctx) { - if (q->mq_ops) - return blk_mq_map_queue(q, ctx->cpu)->fq; - return q->fq; + return blk_mq_map_queue(q, REQ_OP_FLUSH, ctx->cpu)->fq; } static inline void __blk_get_queue(struct request_queue *q) @@ -128,15 +50,9 @@ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q, int node, int cmd_size, gfp_t flags); void blk_free_flush_queue(struct blk_flush_queue *q); -int blk_init_rl(struct request_list *rl, struct request_queue *q, - gfp_t gfp_mask); -void blk_exit_rl(struct request_queue *q, struct request_list *rl); void blk_exit_queue(struct request_queue *q); void 
blk_rq_bio_prep(struct request_queue *q, struct request *rq, struct bio *bio); -void blk_queue_bypass_start(struct request_queue *q); -void blk_queue_bypass_end(struct request_queue *q); -void __blk_queue_free_tags(struct request_queue *q); void blk_freeze_queue(struct request_queue *q); static inline void blk_queue_enter_live(struct request_queue *q) @@ -235,11 +151,8 @@ static inline bool bio_integrity_endio(struct bio *bio) } #endif /* CONFIG_BLK_DEV_INTEGRITY */ -void blk_timeout_work(struct work_struct *work); unsigned long blk_rq_timeout(unsigned long timeout); void blk_add_timer(struct request *req); -void blk_delete_timer(struct request *); - bool bio_attempt_front_merge(struct request_queue *q, struct request *req, struct bio *bio); @@ -248,34 +161,12 @@ bool bio_attempt_back_merge(struct request_queue *q, struct request *req, bool bio_attempt_discard_merge(struct request_queue *q, struct request *req, struct bio *bio); bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, - unsigned int *request_count, struct request **same_queue_rq); -unsigned int blk_plug_queued_count(struct request_queue *q); void blk_account_io_start(struct request *req, bool new_io); void blk_account_io_completion(struct request *req, unsigned int bytes); void blk_account_io_done(struct request *req, u64 now); -/* - * EH timer and IO completion will both attempt to 'grab' the request, make - * sure that only one of them succeeds. Steal the bottom bit of the - * __deadline field for this. - */ -static inline int blk_mark_rq_complete(struct request *rq) -{ - return test_and_set_bit(0, &rq->__deadline); -} - -static inline void blk_clear_rq_complete(struct request *rq) -{ - clear_bit(0, &rq->__deadline); -} - -static inline bool blk_rq_is_complete(struct request *rq) -{ - return test_bit(0, &rq->__deadline); -} - /* * Internal elevator interface */ @@ -283,23 +174,6 @@ static inline bool blk_rq_is_complete(struct request *rq) void blk_insert_flush(struct request *rq); -static inline void elv_activate_rq(struct request_queue *q, struct request *rq) -{ - struct elevator_queue *e = q->elevator; - - if (e->type->ops.sq.elevator_activate_req_fn) - e->type->ops.sq.elevator_activate_req_fn(q, rq); -} - -static inline void elv_deactivate_rq(struct request_queue *q, struct request *rq) -{ - struct elevator_queue *e = q->elevator; - - if (e->type->ops.sq.elevator_deactivate_req_fn) - e->type->ops.sq.elevator_deactivate_req_fn(q, rq); -} - -int elevator_init(struct request_queue *); int elevator_init_mq(struct request_queue *q); int elevator_switch_mq(struct request_queue *q, struct elevator_type *new_e); @@ -334,31 +208,8 @@ void blk_rq_set_mixed_merge(struct request *rq); bool blk_rq_merge_ok(struct request *rq, struct bio *bio); enum elv_merge blk_try_merge(struct request *rq, struct bio *bio); -void blk_queue_congestion_threshold(struct request_queue *q); - int blk_dev_init(void); - -/* - * Return the threshold (number of used requests) at which the queue is - * considered to be congested. It include a little hysteresis to keep the - * context switch rate down. 
- */ -static inline int queue_congestion_on_threshold(struct request_queue *q) -{ - return q->nr_congestion_on; -} - -/* - * The threshold at which a queue is considered to be uncongested - */ -static inline int queue_congestion_off_threshold(struct request_queue *q) -{ - return q->nr_congestion_off; -} - -extern int blk_update_nr_requests(struct request_queue *, unsigned int); - /* * Contribute to IO statistics IFF: * @@ -380,21 +231,6 @@ static inline void req_set_nomerge(struct request_queue *q, struct request *req) q->last_merge = NULL; } -/* - * Steal a bit from this field for legacy IO path atomic IO marking. Note that - * setting the deadline clears the bottom bit, potentially clearing the - * completed bit. The user has to be OK with this (current ones are fine). - */ -static inline void blk_rq_set_deadline(struct request *rq, unsigned long time) -{ - rq->__deadline = time & ~0x1UL; -} - -static inline unsigned long blk_rq_deadline(struct request *rq) -{ - return rq->__deadline & ~0x1UL; -} - /* * The max size one bio can handle is UINT_MAX becasue bvec_iter.bi_size * is defined as 'unsigned int', meantime it has to aligned to with logical @@ -416,22 +252,6 @@ void ioc_clear_queue(struct request_queue *q); int create_task_io_context(struct task_struct *task, gfp_t gfp_mask, int node); -/** - * rq_ioc - determine io_context for request allocation - * @bio: request being allocated is for this bio (can be %NULL) - * - * Determine io_context to use for request allocation for @bio. May return - * %NULL if %current->io_context doesn't exist. - */ -static inline struct io_context *rq_ioc(struct bio *bio) -{ -#ifdef CONFIG_BLK_CGROUP - if (bio && bio->bi_ioc) - return bio->bi_ioc; -#endif - return current->io_context; -} - /** * create_io_context - try to create task->io_context * @gfp_mask: allocation mask @@ -490,8 +310,6 @@ static inline void blk_queue_bounce(struct request_queue *q, struct bio **bio) } #endif /* CONFIG_BOUNCE */ -extern void blk_drain_queue(struct request_queue *q); - #ifdef CONFIG_BLK_CGROUP_IOLATENCY extern int blk_iolatency_init(struct request_queue *q); #else diff --git a/block/bounce.c b/block/bounce.c index 559c55bda040..ffb9e9ecfa7e 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -277,7 +277,8 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask, } } - bio_clone_blkcg_association(bio, bio_src); + bio_clone_blkg_association(bio, bio_src); + blkcg_bio_issue_init(bio); return bio; } diff --git a/block/bsg-lib.c b/block/bsg-lib.c index f3501cdaf1a6..192129856342 100644 --- a/block/bsg-lib.c +++ b/block/bsg-lib.c @@ -21,7 +21,7 @@ * */ #include -#include +#include #include #include #include @@ -31,6 +31,12 @@ #define uptr64(val) ((void __user *)(uintptr_t)(val)) +struct bsg_set { + struct blk_mq_tag_set tag_set; + bsg_job_fn *job_fn; + bsg_timeout_fn *timeout_fn; +}; + static int bsg_transport_check_proto(struct sg_io_v4 *hdr) { if (hdr->protocol != BSG_PROTOCOL_SCSI || @@ -129,7 +135,7 @@ static void bsg_teardown_job(struct kref *kref) kfree(job->request_payload.sg_list); kfree(job->reply_payload.sg_list); - blk_end_request_all(rq, BLK_STS_OK); + blk_mq_end_request(rq, BLK_STS_OK); } void bsg_job_put(struct bsg_job *job) @@ -157,15 +163,15 @@ void bsg_job_done(struct bsg_job *job, int result, { job->result = result; job->reply_payload_rcv_len = reply_payload_rcv_len; - blk_complete_request(blk_mq_rq_from_pdu(job)); + blk_mq_complete_request(blk_mq_rq_from_pdu(job)); } EXPORT_SYMBOL_GPL(bsg_job_done); /** - * bsg_softirq_done - softirq done 
routine for destroying the bsg requests + * bsg_complete - softirq done routine for destroying the bsg requests * @rq: BSG request that holds the job to be destroyed */ -static void bsg_softirq_done(struct request *rq) +static void bsg_complete(struct request *rq) { struct bsg_job *job = blk_mq_rq_to_pdu(rq); @@ -224,54 +230,48 @@ failjob_rls_job: } /** - * bsg_request_fn - generic handler for bsg requests - * @q: request queue to manage + * bsg_queue_rq - generic handler for bsg requests + * @hctx: hardware queue + * @bd: queue data * * On error the create_bsg_job function should return a -Exyz error value * that will be set to ->result. * * Drivers/subsys should pass this to the queue init function. */ -static void bsg_request_fn(struct request_queue *q) - __releases(q->queue_lock) - __acquires(q->queue_lock) +static blk_status_t bsg_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) { + struct request_queue *q = hctx->queue; struct device *dev = q->queuedata; - struct request *req; + struct request *req = bd->rq; + struct bsg_set *bset = + container_of(q->tag_set, struct bsg_set, tag_set); int ret; + blk_mq_start_request(req); + if (!get_device(dev)) - return; - - while (1) { - req = blk_fetch_request(q); - if (!req) - break; - spin_unlock_irq(q->queue_lock); - - if (!bsg_prepare_job(dev, req)) { - blk_end_request_all(req, BLK_STS_OK); - spin_lock_irq(q->queue_lock); - continue; - } - - ret = q->bsg_job_fn(blk_mq_rq_to_pdu(req)); - spin_lock_irq(q->queue_lock); - if (ret) - break; - } + return BLK_STS_IOERR; + + if (!bsg_prepare_job(dev, req)) + return BLK_STS_IOERR; + + ret = bset->job_fn(blk_mq_rq_to_pdu(req)); + if (ret) + return BLK_STS_IOERR; - spin_unlock_irq(q->queue_lock); put_device(dev); - spin_lock_irq(q->queue_lock); + return BLK_STS_OK; } /* called right after the request is allocated for the request_queue */ -static int bsg_init_rq(struct request_queue *q, struct request *req, gfp_t gfp) +static int bsg_init_rq(struct blk_mq_tag_set *set, struct request *req, + unsigned int hctx_idx, unsigned int numa_node) { struct bsg_job *job = blk_mq_rq_to_pdu(req); - job->reply = kzalloc(SCSI_SENSE_BUFFERSIZE, gfp); + job->reply = kzalloc(SCSI_SENSE_BUFFERSIZE, GFP_KERNEL); if (!job->reply) return -ENOMEM; return 0; @@ -289,13 +289,47 @@ static void bsg_initialize_rq(struct request *req) job->dd_data = job + 1; } -static void bsg_exit_rq(struct request_queue *q, struct request *req) +static void bsg_exit_rq(struct blk_mq_tag_set *set, struct request *req, + unsigned int hctx_idx) { struct bsg_job *job = blk_mq_rq_to_pdu(req); kfree(job->reply); } +void bsg_remove_queue(struct request_queue *q) +{ + if (q) { + struct bsg_set *bset = + container_of(q->tag_set, struct bsg_set, tag_set); + + bsg_unregister_queue(q); + blk_cleanup_queue(q); + blk_mq_free_tag_set(&bset->tag_set); + kfree(bset); + } +} +EXPORT_SYMBOL_GPL(bsg_remove_queue); + +static enum blk_eh_timer_return bsg_timeout(struct request *rq, bool reserved) +{ + struct bsg_set *bset = + container_of(rq->q->tag_set, struct bsg_set, tag_set); + + if (!bset->timeout_fn) + return BLK_EH_DONE; + return bset->timeout_fn(rq); +} + +static const struct blk_mq_ops bsg_mq_ops = { + .queue_rq = bsg_queue_rq, + .init_request = bsg_init_rq, + .exit_request = bsg_exit_rq, + .initialize_rq_fn = bsg_initialize_rq, + .complete = bsg_complete, + .timeout = bsg_timeout, +}; + /** * bsg_setup_queue - Create and add the bsg hooks so we can receive requests * @dev: device to attach bsg device to @@ -304,28 +338,38 @@ static 
void bsg_exit_rq(struct request_queue *q, struct request *req) * @dd_job_size: size of LLD data needed for each job */ struct request_queue *bsg_setup_queue(struct device *dev, const char *name, - bsg_job_fn *job_fn, int dd_job_size) + bsg_job_fn *job_fn, bsg_timeout_fn *timeout, int dd_job_size) { + struct bsg_set *bset; + struct blk_mq_tag_set *set; struct request_queue *q; - int ret; + int ret = -ENOMEM; - q = blk_alloc_queue(GFP_KERNEL); - if (!q) + bset = kzalloc(sizeof(*bset), GFP_KERNEL); + if (!bset) return ERR_PTR(-ENOMEM); - q->cmd_size = sizeof(struct bsg_job) + dd_job_size; - q->init_rq_fn = bsg_init_rq; - q->exit_rq_fn = bsg_exit_rq; - q->initialize_rq_fn = bsg_initialize_rq; - q->request_fn = bsg_request_fn; - ret = blk_init_allocated_queue(q); - if (ret) - goto out_cleanup_queue; + bset->job_fn = job_fn; + bset->timeout_fn = timeout; + + set = &bset->tag_set; + set->ops = &bsg_mq_ops; + set->nr_hw_queues = 1; + set->queue_depth = 128; + set->numa_node = NUMA_NO_NODE; + set->cmd_size = sizeof(struct bsg_job) + dd_job_size; + set->flags = BLK_MQ_F_NO_SCHED | BLK_MQ_F_BLOCKING; + if (blk_mq_alloc_tag_set(set)) + goto out_tag_set; + + q = blk_mq_init_queue(set); + if (IS_ERR(q)) { + ret = PTR_ERR(q); + goto out_queue; + } q->queuedata = dev; - q->bsg_job_fn = job_fn; blk_queue_flag_set(QUEUE_FLAG_BIDI, q); - blk_queue_softirq_done(q, bsg_softirq_done); blk_queue_rq_timeout(q, BLK_DEFAULT_SG_TIMEOUT); ret = bsg_register_queue(q, dev, name, &bsg_transport_ops); @@ -338,6 +382,10 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name, return q; out_cleanup_queue: blk_cleanup_queue(q); +out_queue: + blk_mq_free_tag_set(set); +out_tag_set: + kfree(bset); return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(bsg_setup_queue); diff --git a/block/bsg.c b/block/bsg.c index 9a442c23a715..44f6028b9567 100644 --- a/block/bsg.c +++ b/block/bsg.c @@ -471,7 +471,7 @@ int bsg_register_queue(struct request_queue *q, struct device *parent, /* * we need a proper transport to send commands, not a stacked device */ - if (!queue_is_rq_based(q)) + if (!queue_is_mq(q)) return 0; bcd = &q->bsg_dev;
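(Editor's sketch, not part of the patch: a minimal caller adapting to the new bsg_setup_queue() signature. The example_* names and struct example_lld_data are hypothetical; a NULL timeout callback keeps the BLK_EH_DONE default implemented by bsg_timeout() above.)

static enum blk_eh_timer_return example_bsg_timeout(struct request *rq)
{
	/* abort the hardware command backing rq, then let the block core decide */
	return BLK_EH_DONE;
}

static int example_probe(struct device *dev)
{
	struct request_queue *q;

	q = bsg_setup_queue(dev, dev_name(dev), example_bsg_job_fn,
			    example_bsg_timeout, sizeof(struct example_lld_data));
	if (IS_ERR(q))
		return PTR_ERR(q);
	/* torn down later with the new bsg_remove_queue(q) */
	return 0;
}

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c deleted file mode 100644 index ed41aa978c4a..000000000000 --- a/block/cfq-iosched.c +++ /dev/null @@ -1,4916 +0,0 @@ -/* - * CFQ, or complete fairness queueing, disk scheduler. - * - * Based on ideas from a previously unfinished io - * scheduler (round robin per-process disk scheduling) and Andrea Arcangeli.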
- * - * Copyright (C) 2003 Jens Axboe - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include "blk.h" -#include "blk-wbt.h" - -/* - * tunables - */ -/* max queue in one round of service */ -static const int cfq_quantum = 8; -static const u64 cfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 }; -/* maximum backwards seek, in KiB */ -static const int cfq_back_max = 16 * 1024; -/* penalty of a backwards seek */ -static const int cfq_back_penalty = 2; -static const u64 cfq_slice_sync = NSEC_PER_SEC / 10; -static u64 cfq_slice_async = NSEC_PER_SEC / 25; -static const int cfq_slice_async_rq = 2; -static u64 cfq_slice_idle = NSEC_PER_SEC / 125; -static u64 cfq_group_idle = NSEC_PER_SEC / 125; -static const u64 cfq_target_latency = (u64)NSEC_PER_SEC * 3/10; /* 300 ms */ -static const int cfq_hist_divisor = 4; - -/* - * offset from end of queue service tree for idle class - */ -#define CFQ_IDLE_DELAY (NSEC_PER_SEC / 5) -/* offset from end of group service tree under time slice mode */ -#define CFQ_SLICE_MODE_GROUP_DELAY (NSEC_PER_SEC / 5) -/* offset from end of group service under IOPS mode */ -#define CFQ_IOPS_MODE_GROUP_DELAY (HZ / 5) - -/* - * below this threshold, we consider thinktime immediate - */ -#define CFQ_MIN_TT (2 * NSEC_PER_SEC / HZ) - -#define CFQ_SLICE_SCALE (5) -#define CFQ_HW_QUEUE_MIN (5) -#define CFQ_SERVICE_SHIFT 12 - -#define CFQQ_SEEK_THR (sector_t)(8 * 100) -#define CFQQ_CLOSE_THR (sector_t)(8 * 1024) -#define CFQQ_SECT_THR_NONROT (sector_t)(2 * 32) -#define CFQQ_SEEKY(cfqq) (hweight32(cfqq->seek_history) > 32/8) - -#define RQ_CIC(rq) icq_to_cic((rq)->elv.icq) -#define RQ_CFQQ(rq) (struct cfq_queue *) ((rq)->elv.priv[0]) -#define RQ_CFQG(rq) (struct cfq_group *) ((rq)->elv.priv[1]) - -static struct kmem_cache *cfq_pool; - -#define CFQ_PRIO_LISTS IOPRIO_BE_NR -#define cfq_class_idle(cfqq) ((cfqq)->ioprio_class == IOPRIO_CLASS_IDLE) -#define cfq_class_rt(cfqq) ((cfqq)->ioprio_class == IOPRIO_CLASS_RT) - -#define sample_valid(samples) ((samples) > 80) -#define rb_entry_cfqg(node) rb_entry((node), struct cfq_group, rb_node) - -/* blkio-related constants */ -#define CFQ_WEIGHT_LEGACY_MIN 10 -#define CFQ_WEIGHT_LEGACY_DFL 500 -#define CFQ_WEIGHT_LEGACY_MAX 1000 - -struct cfq_ttime { - u64 last_end_request; - - u64 ttime_total; - u64 ttime_mean; - unsigned long ttime_samples; -}; - -/* - * Most of our rbtree usage is for sorting with min extraction, so - * if we cache the leftmost node we don't have to walk down the tree - * to find it. Idea borrowed from Ingo Molnars CFS scheduler. We should - * move this into the elevator for the rq sorting as well. 
- */ -struct cfq_rb_root { - struct rb_root_cached rb; - struct rb_node *rb_rightmost; - unsigned count; - u64 min_vdisktime; - struct cfq_ttime ttime; -}; -#define CFQ_RB_ROOT (struct cfq_rb_root) { .rb = RB_ROOT_CACHED, \ - .rb_rightmost = NULL, \ - .ttime = {.last_end_request = ktime_get_ns(),},} - -/* - * Per process-grouping structure - */ -struct cfq_queue { - /* reference count */ - int ref; - /* various state flags, see below */ - unsigned int flags; - /* parent cfq_data */ - struct cfq_data *cfqd; - /* service_tree member */ - struct rb_node rb_node; - /* service_tree key */ - u64 rb_key; - /* prio tree member */ - struct rb_node p_node; - /* prio tree root we belong to, if any */ - struct rb_root *p_root; - /* sorted list of pending requests */ - struct rb_root sort_list; - /* if fifo isn't expired, next request to serve */ - struct request *next_rq; - /* requests queued in sort_list */ - int queued[2]; - /* currently allocated requests */ - int allocated[2]; - /* fifo list of requests in sort_list */ - struct list_head fifo; - - /* time when queue got scheduled in to dispatch first request. */ - u64 dispatch_start; - u64 allocated_slice; - u64 slice_dispatch; - /* time when first request from queue completed and slice started. */ - u64 slice_start; - u64 slice_end; - s64 slice_resid; - - /* pending priority requests */ - int prio_pending; - /* number of requests that are on the dispatch list or inside driver */ - int dispatched; - - /* io prio of this group */ - unsigned short ioprio, org_ioprio; - unsigned short ioprio_class, org_ioprio_class; - - pid_t pid; - - u32 seek_history; - sector_t last_request_pos; - - struct cfq_rb_root *service_tree; - struct cfq_queue *new_cfqq; - struct cfq_group *cfqg; - /* Number of sectors dispatched from queue in single dispatch round */ - unsigned long nr_sectors; -}; - -/* - * First index in the service_trees. - * IDLE is handled separately, so it has negative index - */ -enum wl_class_t { - BE_WORKLOAD = 0, - RT_WORKLOAD = 1, - IDLE_WORKLOAD = 2, - CFQ_PRIO_NR, -}; - -/* - * Second index in the service_trees. - */ -enum wl_type_t { - ASYNC_WORKLOAD = 0, - SYNC_NOIDLE_WORKLOAD = 1, - SYNC_WORKLOAD = 2 -}; - -struct cfqg_stats { -#ifdef CONFIG_CFQ_GROUP_IOSCHED - /* number of ios merged */ - struct blkg_rwstat merged; - /* total time spent on device in ns, may not be accurate w/ queueing */ - struct blkg_rwstat service_time; - /* total time spent waiting in scheduler queue in ns */ - struct blkg_rwstat wait_time; - /* number of IOs queued up */ - struct blkg_rwstat queued; - /* total disk time and nr sectors dispatched by this group */ - struct blkg_stat time; -#ifdef CONFIG_DEBUG_BLK_CGROUP - /* time not charged to this cgroup */ - struct blkg_stat unaccounted_time; - /* sum of number of ios queued across all samples */ - struct blkg_stat avg_queue_size_sum; - /* count of samples taken for average */ - struct blkg_stat avg_queue_size_samples; - /* how many times this group has been removed from service tree */ - struct blkg_stat dequeue; - /* total time spent waiting for it to be assigned a timeslice. 
*/ - struct blkg_stat group_wait_time; - /* time spent idling for this blkcg_gq */ - struct blkg_stat idle_time; - /* total time with empty current active q with other requests queued */ - struct blkg_stat empty_time; - /* fields after this shouldn't be cleared on stat reset */ - u64 start_group_wait_time; - u64 start_idle_time; - u64 start_empty_time; - uint16_t flags; -#endif /* CONFIG_DEBUG_BLK_CGROUP */ -#endif /* CONFIG_CFQ_GROUP_IOSCHED */ -}; - -/* Per-cgroup data */ -struct cfq_group_data { - /* must be the first member */ - struct blkcg_policy_data cpd; - - unsigned int weight; - unsigned int leaf_weight; -}; - -/* This is per cgroup per device grouping structure */ -struct cfq_group { - /* must be the first member */ - struct blkg_policy_data pd; - - /* group service_tree member */ - struct rb_node rb_node; - - /* group service_tree key */ - u64 vdisktime; - - /* - * The number of active cfqgs and sum of their weights under this - * cfqg. This covers this cfqg's leaf_weight and all children's - * weights, but does not cover weights of further descendants. - * - * If a cfqg is on the service tree, it's active. An active cfqg - * also activates its parent and contributes to the children_weight - * of the parent. - */ - int nr_active; - unsigned int children_weight; - - /* - * vfraction is the fraction of vdisktime that the tasks in this - * cfqg are entitled to. This is determined by compounding the - * ratios walking up from this cfqg to the root. - * - * It is in fixed point w/ CFQ_SERVICE_SHIFT and the sum of all - * vfractions on a service tree is approximately 1. The sum may - * deviate a bit due to rounding errors and fluctuations caused by - * cfqgs entering and leaving the service tree. - */ - unsigned int vfraction; - - /* - * There are two weights - (internal) weight is the weight of this - * cfqg against the sibling cfqgs. leaf_weight is the weight of - * this cfqg against the child cfqgs. For the root cfqg, both - * weights are kept in sync for backward compatibility. - */ - unsigned int weight; - unsigned int new_weight; - unsigned int dev_weight; - - unsigned int leaf_weight; - unsigned int new_leaf_weight; - unsigned int dev_leaf_weight; - - /* number of cfqq currently on this group */ - int nr_cfqq; - - /* - * Per group busy queues average. Useful for workload slice calc. We - * create the array for each prio class but at run time it is used - * only for RT and BE class and slot for IDLE class remains unused. - * This is primarily done to avoid confusion and a gcc warning. - */ - unsigned int busy_queues_avg[CFQ_PRIO_NR]; - /* - * rr lists of queues with requests. We maintain service trees for - * RT and BE classes. These trees are subdivided in subclasses - * of SYNC, SYNC_NOIDLE and ASYNC based on workload type. For IDLE - * class there is no subclassification and all the cfq queues go on - * a single tree service_tree_idle.
- * Counts are embedded in the cfq_rb_root - */ - struct cfq_rb_root service_trees[2][3]; - struct cfq_rb_root service_tree_idle; - - u64 saved_wl_slice; - enum wl_type_t saved_wl_type; - enum wl_class_t saved_wl_class; - - /* number of requests that are on the dispatch list or inside driver */ - int dispatched; - struct cfq_ttime ttime; - struct cfqg_stats stats; /* stats for this cfqg */ - - /* async queue for each priority case */ - struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR]; - struct cfq_queue *async_idle_cfqq; - -}; - -struct cfq_io_cq { - struct io_cq icq; /* must be the first member */ - struct cfq_queue *cfqq[2]; - struct cfq_ttime ttime; - int ioprio; /* the current ioprio */ -#ifdef CONFIG_CFQ_GROUP_IOSCHED - uint64_t blkcg_serial_nr; /* the current blkcg serial */ -#endif -}; - -/* - * Per block device queue structure - */ -struct cfq_data { - struct request_queue *queue; - /* Root service tree for cfq_groups */ - struct cfq_rb_root grp_service_tree; - struct cfq_group *root_group; - - /* - * The priority currently being served - */ - enum wl_class_t serving_wl_class; - enum wl_type_t serving_wl_type; - u64 workload_expires; - struct cfq_group *serving_group; - - /* - * Each priority tree is sorted by next_request position. These - * trees are used when determining if two or more queues are - * interleaving requests (see cfq_close_cooperator). - */ - struct rb_root prio_trees[CFQ_PRIO_LISTS]; - - unsigned int busy_queues; - unsigned int busy_sync_queues; - - int rq_in_driver; - int rq_in_flight[2]; - - /* - * queue-depth detection - */ - int rq_queued; - int hw_tag; - /* - * hw_tag can be - * -1 => indeterminate, (cfq will behave as if NCQ is present, to allow better detection) - * 1 => NCQ is present (hw_tag_est_depth is the estimated max depth) - * 0 => no NCQ - */ - int hw_tag_est_depth; - unsigned int hw_tag_samples; - - /* - * idle window management - */ - struct hrtimer idle_slice_timer; - struct work_struct unplug_work; - - struct cfq_queue *active_queue; - struct cfq_io_cq *active_cic; - - sector_t last_position; - - /* - * tunables, see top of file - */ - unsigned int cfq_quantum; - unsigned int cfq_back_penalty; - unsigned int cfq_back_max; - unsigned int cfq_slice_async_rq; - unsigned int cfq_latency; - u64 cfq_fifo_expire[2]; - u64 cfq_slice[2]; - u64 cfq_slice_idle; - u64 cfq_group_idle; - u64 cfq_target_latency; - - /* - * Fallback dummy cfqq for extreme OOM conditions - */ - struct cfq_queue oom_cfqq; - - u64 last_delayed_sync; -}; - -static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd); -static void cfq_put_queue(struct cfq_queue *cfqq); - -static struct cfq_rb_root *st_for(struct cfq_group *cfqg, - enum wl_class_t class, - enum wl_type_t type) -{ - if (!cfqg) - return NULL; - - if (class == IDLE_WORKLOAD) - return &cfqg->service_tree_idle; - - return &cfqg->service_trees[class][type]; -} - -enum cfqq_state_flags { - CFQ_CFQQ_FLAG_on_rr = 0, /* on round-robin busy list */ - CFQ_CFQQ_FLAG_wait_request, /* waiting for a request */ - CFQ_CFQQ_FLAG_must_dispatch, /* must be allowed a dispatch */ - CFQ_CFQQ_FLAG_must_alloc_slice, /* per-slice must_alloc flag */ - CFQ_CFQQ_FLAG_fifo_expire, /* FIFO checked in this slice */ - CFQ_CFQQ_FLAG_idle_window, /* slice idling enabled */ - CFQ_CFQQ_FLAG_prio_changed, /* task priority has changed */ - CFQ_CFQQ_FLAG_slice_new, /* no requests dispatched in slice */ - CFQ_CFQQ_FLAG_sync, /* synchronous queue */ - CFQ_CFQQ_FLAG_coop, /* cfqq is shared */ - CFQ_CFQQ_FLAG_split_coop, /* shared cfqq will be splitted */ - 
CFQ_CFQQ_FLAG_deep, /* sync cfqq experienced large depth */ - CFQ_CFQQ_FLAG_wait_busy, /* Waiting for next request */ -}; - -#define CFQ_CFQQ_FNS(name) \ -static inline void cfq_mark_cfqq_##name(struct cfq_queue *cfqq) \ -{ \ - (cfqq)->flags |= (1 << CFQ_CFQQ_FLAG_##name); \ -} \ -static inline void cfq_clear_cfqq_##name(struct cfq_queue *cfqq) \ -{ \ - (cfqq)->flags &= ~(1 << CFQ_CFQQ_FLAG_##name); \ -} \ -static inline int cfq_cfqq_##name(const struct cfq_queue *cfqq) \ -{ \ - return ((cfqq)->flags & (1 << CFQ_CFQQ_FLAG_##name)) != 0; \ -} - -CFQ_CFQQ_FNS(on_rr); -CFQ_CFQQ_FNS(wait_request); -CFQ_CFQQ_FNS(must_dispatch); -CFQ_CFQQ_FNS(must_alloc_slice); -CFQ_CFQQ_FNS(fifo_expire); -CFQ_CFQQ_FNS(idle_window); -CFQ_CFQQ_FNS(prio_changed); -CFQ_CFQQ_FNS(slice_new); -CFQ_CFQQ_FNS(sync); -CFQ_CFQQ_FNS(coop); -CFQ_CFQQ_FNS(split_coop); -CFQ_CFQQ_FNS(deep); -CFQ_CFQQ_FNS(wait_busy); -#undef CFQ_CFQQ_FNS - -#if defined(CONFIG_CFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) - -/* cfqg stats flags */ -enum cfqg_stats_flags { - CFQG_stats_waiting = 0, - CFQG_stats_idling, - CFQG_stats_empty, -}; - -#define CFQG_FLAG_FNS(name) \ -static inline void cfqg_stats_mark_##name(struct cfqg_stats *stats) \ -{ \ - stats->flags |= (1 << CFQG_stats_##name); \ -} \ -static inline void cfqg_stats_clear_##name(struct cfqg_stats *stats) \ -{ \ - stats->flags &= ~(1 << CFQG_stats_##name); \ -} \ -static inline int cfqg_stats_##name(struct cfqg_stats *stats) \ -{ \ - return (stats->flags & (1 << CFQG_stats_##name)) != 0; \ -} \ - -CFQG_FLAG_FNS(waiting) -CFQG_FLAG_FNS(idling) -CFQG_FLAG_FNS(empty) -#undef CFQG_FLAG_FNS - -/* This should be called with the queue_lock held. */ -static void cfqg_stats_update_group_wait_time(struct cfqg_stats *stats) -{ - u64 now; - - if (!cfqg_stats_waiting(stats)) - return; - - now = ktime_get_ns(); - if (now > stats->start_group_wait_time) - blkg_stat_add(&stats->group_wait_time, - now - stats->start_group_wait_time); - cfqg_stats_clear_waiting(stats); -} - -/* This should be called with the queue_lock held. */ -static void cfqg_stats_set_start_group_wait_time(struct cfq_group *cfqg, - struct cfq_group *curr_cfqg) -{ - struct cfqg_stats *stats = &cfqg->stats; - - if (cfqg_stats_waiting(stats)) - return; - if (cfqg == curr_cfqg) - return; - stats->start_group_wait_time = ktime_get_ns(); - cfqg_stats_mark_waiting(stats); -} - -/* This should be called with the queue_lock held. */ -static void cfqg_stats_end_empty_time(struct cfqg_stats *stats) -{ - u64 now; - - if (!cfqg_stats_empty(stats)) - return; - - now = ktime_get_ns(); - if (now > stats->start_empty_time) - blkg_stat_add(&stats->empty_time, - now - stats->start_empty_time); - cfqg_stats_clear_empty(stats); -} - -static void cfqg_stats_update_dequeue(struct cfq_group *cfqg) -{ - blkg_stat_add(&cfqg->stats.dequeue, 1); -} - -static void cfqg_stats_set_start_empty_time(struct cfq_group *cfqg) -{ - struct cfqg_stats *stats = &cfqg->stats; - - if (blkg_rwstat_total(&stats->queued)) - return; - - /* - * group is already marked empty. This can happen if cfqq got new - * request in parent group and moved to this group while being added - * to service tree. Just ignore the event and move on. 
- */ - if (cfqg_stats_empty(stats)) - return; - - stats->start_empty_time = ktime_get_ns(); - cfqg_stats_mark_empty(stats); -} - -static void cfqg_stats_update_idle_time(struct cfq_group *cfqg) -{ - struct cfqg_stats *stats = &cfqg->stats; - - if (cfqg_stats_idling(stats)) { - u64 now = ktime_get_ns(); - - if (now > stats->start_idle_time) - blkg_stat_add(&stats->idle_time, - now - stats->start_idle_time); - cfqg_stats_clear_idling(stats); - } -} - -static void cfqg_stats_set_start_idle_time(struct cfq_group *cfqg) -{ - struct cfqg_stats *stats = &cfqg->stats; - - BUG_ON(cfqg_stats_idling(stats)); - - stats->start_idle_time = ktime_get_ns(); - cfqg_stats_mark_idling(stats); -} - -static void cfqg_stats_update_avg_queue_size(struct cfq_group *cfqg) -{ - struct cfqg_stats *stats = &cfqg->stats; - - blkg_stat_add(&stats->avg_queue_size_sum, - blkg_rwstat_total(&stats->queued)); - blkg_stat_add(&stats->avg_queue_size_samples, 1); - cfqg_stats_update_group_wait_time(stats); -} - -#else /* CONFIG_CFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */ - -static inline void cfqg_stats_set_start_group_wait_time(struct cfq_group *cfqg, struct cfq_group *curr_cfqg) { } -static inline void cfqg_stats_end_empty_time(struct cfqg_stats *stats) { } -static inline void cfqg_stats_update_dequeue(struct cfq_group *cfqg) { } -static inline void cfqg_stats_set_start_empty_time(struct cfq_group *cfqg) { } -static inline void cfqg_stats_update_idle_time(struct cfq_group *cfqg) { } -static inline void cfqg_stats_set_start_idle_time(struct cfq_group *cfqg) { } -static inline void cfqg_stats_update_avg_queue_size(struct cfq_group *cfqg) { } - -#endif /* CONFIG_CFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */ - -#ifdef CONFIG_CFQ_GROUP_IOSCHED - -static inline struct cfq_group *pd_to_cfqg(struct blkg_policy_data *pd) -{ - return pd ? container_of(pd, struct cfq_group, pd) : NULL; -} - -static struct cfq_group_data -*cpd_to_cfqgd(struct blkcg_policy_data *cpd) -{ - return cpd ? container_of(cpd, struct cfq_group_data, cpd) : NULL; -} - -static inline struct blkcg_gq *cfqg_to_blkg(struct cfq_group *cfqg) -{ - return pd_to_blkg(&cfqg->pd); -} - -static struct blkcg_policy blkcg_policy_cfq; - -static inline struct cfq_group *blkg_to_cfqg(struct blkcg_gq *blkg) -{ - return pd_to_cfqg(blkg_to_pd(blkg, &blkcg_policy_cfq)); -} - -static struct cfq_group_data *blkcg_to_cfqgd(struct blkcg *blkcg) -{ - return cpd_to_cfqgd(blkcg_to_cpd(blkcg, &blkcg_policy_cfq)); -} - -static inline struct cfq_group *cfqg_parent(struct cfq_group *cfqg) -{ - struct blkcg_gq *pblkg = cfqg_to_blkg(cfqg)->parent; - - return pblkg ? blkg_to_cfqg(pblkg) : NULL; -} - -static inline bool cfqg_is_descendant(struct cfq_group *cfqg, - struct cfq_group *ancestor) -{ - return cgroup_is_descendant(cfqg_to_blkg(cfqg)->blkcg->css.cgroup, - cfqg_to_blkg(ancestor)->blkcg->css.cgroup); -} - -static inline void cfqg_get(struct cfq_group *cfqg) -{ - return blkg_get(cfqg_to_blkg(cfqg)); -} - -static inline void cfqg_put(struct cfq_group *cfqg) -{ - return blkg_put(cfqg_to_blkg(cfqg)); -} - -#define cfq_log_cfqq(cfqd, cfqq, fmt, args...) do { \ - blk_add_cgroup_trace_msg((cfqd)->queue, \ - cfqg_to_blkg((cfqq)->cfqg)->blkcg, \ - "cfq%d%c%c " fmt, (cfqq)->pid, \ - cfq_cfqq_sync((cfqq)) ? 'S' : 'A', \ - cfqq_type((cfqq)) == SYNC_NOIDLE_WORKLOAD ? 'N' : ' ',\ - ##args); \ -} while (0) - -#define cfq_log_cfqg(cfqd, cfqg, fmt, args...) 
do { \ - blk_add_cgroup_trace_msg((cfqd)->queue, \ - cfqg_to_blkg(cfqg)->blkcg, fmt, ##args); \ -} while (0) - -static inline void cfqg_stats_update_io_add(struct cfq_group *cfqg, - struct cfq_group *curr_cfqg, - unsigned int op) -{ - blkg_rwstat_add(&cfqg->stats.queued, op, 1); - cfqg_stats_end_empty_time(&cfqg->stats); - cfqg_stats_set_start_group_wait_time(cfqg, curr_cfqg); -} - -static inline void cfqg_stats_update_timeslice_used(struct cfq_group *cfqg, - uint64_t time, unsigned long unaccounted_time) -{ - blkg_stat_add(&cfqg->stats.time, time); -#ifdef CONFIG_DEBUG_BLK_CGROUP - blkg_stat_add(&cfqg->stats.unaccounted_time, unaccounted_time); -#endif -} - -static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, - unsigned int op) -{ - blkg_rwstat_add(&cfqg->stats.queued, op, -1); -} - -static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, - unsigned int op) -{ - blkg_rwstat_add(&cfqg->stats.merged, op, 1); -} - -static inline void cfqg_stats_update_completion(struct cfq_group *cfqg, - u64 start_time_ns, - u64 io_start_time_ns, - unsigned int op) -{ - struct cfqg_stats *stats = &cfqg->stats; - u64 now = ktime_get_ns(); - - if (now > io_start_time_ns) - blkg_rwstat_add(&stats->service_time, op, - now - io_start_time_ns); - if (io_start_time_ns > start_time_ns) - blkg_rwstat_add(&stats->wait_time, op, - io_start_time_ns - start_time_ns); -} - -/* @stats = 0 */ -static void cfqg_stats_reset(struct cfqg_stats *stats) -{ - /* queued stats shouldn't be cleared */ - blkg_rwstat_reset(&stats->merged); - blkg_rwstat_reset(&stats->service_time); - blkg_rwstat_reset(&stats->wait_time); - blkg_stat_reset(&stats->time); -#ifdef CONFIG_DEBUG_BLK_CGROUP - blkg_stat_reset(&stats->unaccounted_time); - blkg_stat_reset(&stats->avg_queue_size_sum); - blkg_stat_reset(&stats->avg_queue_size_samples); - blkg_stat_reset(&stats->dequeue); - blkg_stat_reset(&stats->group_wait_time); - blkg_stat_reset(&stats->idle_time); - blkg_stat_reset(&stats->empty_time); -#endif -} - -/* @to += @from */ -static void cfqg_stats_add_aux(struct cfqg_stats *to, struct cfqg_stats *from) -{ - /* queued stats shouldn't be cleared */ - blkg_rwstat_add_aux(&to->merged, &from->merged); - blkg_rwstat_add_aux(&to->service_time, &from->service_time); - blkg_rwstat_add_aux(&to->wait_time, &from->wait_time); - blkg_stat_add_aux(&from->time, &from->time); -#ifdef CONFIG_DEBUG_BLK_CGROUP - blkg_stat_add_aux(&to->unaccounted_time, &from->unaccounted_time); - blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum); - blkg_stat_add_aux(&to->avg_queue_size_samples, &from->avg_queue_size_samples); - blkg_stat_add_aux(&to->dequeue, &from->dequeue); - blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time); - blkg_stat_add_aux(&to->idle_time, &from->idle_time); - blkg_stat_add_aux(&to->empty_time, &from->empty_time); -#endif -} - -/* - * Transfer @cfqg's stats to its parent's aux counts so that the ancestors' - * recursive stats can still account for the amount used by this cfqg after - * it's gone. 
- */ -static void cfqg_stats_xfer_dead(struct cfq_group *cfqg) -{ - struct cfq_group *parent = cfqg_parent(cfqg); - - lockdep_assert_held(cfqg_to_blkg(cfqg)->q->queue_lock); - - if (unlikely(!parent)) - return; - - cfqg_stats_add_aux(&parent->stats, &cfqg->stats); - cfqg_stats_reset(&cfqg->stats); -} - -#else /* CONFIG_CFQ_GROUP_IOSCHED */ - -static inline struct cfq_group *cfqg_parent(struct cfq_group *cfqg) { return NULL; } -static inline bool cfqg_is_descendant(struct cfq_group *cfqg, - struct cfq_group *ancestor) -{ - return true; -} -static inline void cfqg_get(struct cfq_group *cfqg) { } -static inline void cfqg_put(struct cfq_group *cfqg) { } - -#define cfq_log_cfqq(cfqd, cfqq, fmt, args...) \ - blk_add_trace_msg((cfqd)->queue, "cfq%d%c%c " fmt, (cfqq)->pid, \ - cfq_cfqq_sync((cfqq)) ? 'S' : 'A', \ - cfqq_type((cfqq)) == SYNC_NOIDLE_WORKLOAD ? 'N' : ' ',\ - ##args) -#define cfq_log_cfqg(cfqd, cfqg, fmt, args...) do {} while (0) - -static inline void cfqg_stats_update_io_add(struct cfq_group *cfqg, - struct cfq_group *curr_cfqg, unsigned int op) { } -static inline void cfqg_stats_update_timeslice_used(struct cfq_group *cfqg, - uint64_t time, unsigned long unaccounted_time) { } -static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, - unsigned int op) { } -static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, - unsigned int op) { } -static inline void cfqg_stats_update_completion(struct cfq_group *cfqg, - u64 start_time_ns, - u64 io_start_time_ns, - unsigned int op) { } - -#endif /* CONFIG_CFQ_GROUP_IOSCHED */ - -#define cfq_log(cfqd, fmt, args...) \ - blk_add_trace_msg((cfqd)->queue, "cfq " fmt, ##args) - -/* Traverses through cfq group service trees */ -#define for_each_cfqg_st(cfqg, i, j, st) \ - for (i = 0; i <= IDLE_WORKLOAD; i++) \ - for (j = 0, st = i < IDLE_WORKLOAD ? &cfqg->service_trees[i][j]\ - : &cfqg->service_tree_idle; \ - (i < IDLE_WORKLOAD && j <= SYNC_WORKLOAD) || \ - (i == IDLE_WORKLOAD && j == 0); \ - j++, st = i < IDLE_WORKLOAD ? \ - &cfqg->service_trees[i][j]: NULL) \ - -static inline bool cfq_io_thinktime_big(struct cfq_data *cfqd, - struct cfq_ttime *ttime, bool group_idle) -{ - u64 slice; - if (!sample_valid(ttime->ttime_samples)) - return false; - if (group_idle) - slice = cfqd->cfq_group_idle; - else - slice = cfqd->cfq_slice_idle; - return ttime->ttime_mean > slice; -} - -static inline bool iops_mode(struct cfq_data *cfqd) -{ - /* - * If we are not idling on queues and it is a NCQ drive, parallel - * execution of requests is on and measuring time is not possible - * in most of the cases until and unless we drive shallower queue - * depths and that becomes a performance bottleneck. In such cases - * switch to start providing fairness in terms of number of IOs. 
- */ - if (!cfqd->cfq_slice_idle && cfqd->hw_tag) - return true; - else - return false; -} - -static inline enum wl_class_t cfqq_class(struct cfq_queue *cfqq) -{ - if (cfq_class_idle(cfqq)) - return IDLE_WORKLOAD; - if (cfq_class_rt(cfqq)) - return RT_WORKLOAD; - return BE_WORKLOAD; -} - - -static enum wl_type_t cfqq_type(struct cfq_queue *cfqq) -{ - if (!cfq_cfqq_sync(cfqq)) - return ASYNC_WORKLOAD; - if (!cfq_cfqq_idle_window(cfqq)) - return SYNC_NOIDLE_WORKLOAD; - return SYNC_WORKLOAD; -} - -static inline int cfq_group_busy_queues_wl(enum wl_class_t wl_class, - struct cfq_data *cfqd, - struct cfq_group *cfqg) -{ - if (wl_class == IDLE_WORKLOAD) - return cfqg->service_tree_idle.count; - - return cfqg->service_trees[wl_class][ASYNC_WORKLOAD].count + - cfqg->service_trees[wl_class][SYNC_NOIDLE_WORKLOAD].count + - cfqg->service_trees[wl_class][SYNC_WORKLOAD].count; -} - -static inline int cfqg_busy_async_queues(struct cfq_data *cfqd, - struct cfq_group *cfqg) -{ - return cfqg->service_trees[RT_WORKLOAD][ASYNC_WORKLOAD].count + - cfqg->service_trees[BE_WORKLOAD][ASYNC_WORKLOAD].count; -} - -static void cfq_dispatch_insert(struct request_queue *, struct request *); -static struct cfq_queue *cfq_get_queue(struct cfq_data *cfqd, bool is_sync, - struct cfq_io_cq *cic, struct bio *bio); - -static inline struct cfq_io_cq *icq_to_cic(struct io_cq *icq) -{ - /* cic->icq is the first member, %NULL will convert to %NULL */ - return container_of(icq, struct cfq_io_cq, icq); -} - -static inline struct cfq_io_cq *cfq_cic_lookup(struct cfq_data *cfqd, - struct io_context *ioc) -{ - if (ioc) - return icq_to_cic(ioc_lookup_icq(ioc, cfqd->queue)); - return NULL; -} - -static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_cq *cic, bool is_sync) -{ - return cic->cfqq[is_sync]; -} - -static inline void cic_set_cfqq(struct cfq_io_cq *cic, struct cfq_queue *cfqq, - bool is_sync) -{ - cic->cfqq[is_sync] = cfqq; -} - -static inline struct cfq_data *cic_to_cfqd(struct cfq_io_cq *cic) -{ - return cic->icq.q->elevator->elevator_data; -} - -/* - * scheduler run of queue, if there are requests pending and no one in the - * driver that will restart queueing - */ -static inline void cfq_schedule_dispatch(struct cfq_data *cfqd) -{ - if (cfqd->busy_queues) { - cfq_log(cfqd, "schedule dispatch"); - kblockd_schedule_work(&cfqd->unplug_work); - } -} - -/* - * Scale schedule slice based on io priority. Use the sync time slice only - * if a queue is marked sync and has sync io queued. A sync queue with async - * io only, should not get full sync slice length. - */ -static inline u64 cfq_prio_slice(struct cfq_data *cfqd, bool sync, - unsigned short prio) -{ - u64 base_slice = cfqd->cfq_slice[sync]; - u64 slice = div_u64(base_slice, CFQ_SLICE_SCALE); - - WARN_ON(prio >= IOPRIO_BE_NR); - - return base_slice + (slice * (4 - prio)); -} - -static inline u64 -cfq_prio_to_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - return cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio); -} - -/** - * cfqg_scale_charge - scale disk time charge according to cfqg weight - * @charge: disk time being charged - * @vfraction: vfraction of the cfqg, fixed point w/ CFQ_SERVICE_SHIFT - * - * Scale @charge according to @vfraction, which is in range (0, 1]. The - * scaling is inversely proportional. - * - * scaled = charge / vfraction - * - * The result is also in fixed point w/ CFQ_SERVICE_SHIFT. 
- */ -static inline u64 cfqg_scale_charge(u64 charge, - unsigned int vfraction) -{ - u64 c = charge << CFQ_SERVICE_SHIFT; /* make it fixed point */ - - /* charge / vfraction */ - c <<= CFQ_SERVICE_SHIFT; - return div_u64(c, vfraction); -} - -static inline u64 max_vdisktime(u64 min_vdisktime, u64 vdisktime) -{ - s64 delta = (s64)(vdisktime - min_vdisktime); - if (delta > 0) - min_vdisktime = vdisktime; - - return min_vdisktime; -} - -static void update_min_vdisktime(struct cfq_rb_root *st) -{ - if (!RB_EMPTY_ROOT(&st->rb.rb_root)) { - struct cfq_group *cfqg = rb_entry_cfqg(st->rb.rb_leftmost); - - st->min_vdisktime = max_vdisktime(st->min_vdisktime, - cfqg->vdisktime); - } -} - -/* - * get averaged number of queues of RT/BE priority. - * average is updated with a formula that gives more weight to higher numbers, - * so that it quickly follows sudden increases and decreases slowly - */ - -static inline unsigned cfq_group_get_avg_queues(struct cfq_data *cfqd, - struct cfq_group *cfqg, bool rt) -{ - unsigned min_q, max_q; - unsigned mult = cfq_hist_divisor - 1; - unsigned round = cfq_hist_divisor / 2; - unsigned busy = cfq_group_busy_queues_wl(rt, cfqd, cfqg); - - min_q = min(cfqg->busy_queues_avg[rt], busy); - max_q = max(cfqg->busy_queues_avg[rt], busy); - cfqg->busy_queues_avg[rt] = (mult * max_q + min_q + round) / - cfq_hist_divisor; - return cfqg->busy_queues_avg[rt]; -} - -static inline u64 -cfq_group_slice(struct cfq_data *cfqd, struct cfq_group *cfqg) -{ - return cfqd->cfq_target_latency * cfqg->vfraction >> CFQ_SERVICE_SHIFT; -} - -static inline u64 -cfq_scaled_cfqq_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - u64 slice = cfq_prio_to_slice(cfqd, cfqq); - if (cfqd->cfq_latency) { - /* - * interested queues (we consider only the ones with the same - * priority class in the cfq group) - */ - unsigned iq = cfq_group_get_avg_queues(cfqd, cfqq->cfqg, - cfq_class_rt(cfqq)); - u64 sync_slice = cfqd->cfq_slice[1]; - u64 expect_latency = sync_slice * iq; - u64 group_slice = cfq_group_slice(cfqd, cfqq->cfqg); - - if (expect_latency > group_slice) { - u64 base_low_slice = 2 * cfqd->cfq_slice_idle; - u64 low_slice; - - /* scale low_slice according to IO priority - * and sync vs async */ - low_slice = div64_u64(base_low_slice*slice, sync_slice); - low_slice = min(slice, low_slice); - /* the adapted slice value is scaled to fit all iqs - * into the target latency */ - slice = div64_u64(slice*group_slice, expect_latency); - slice = max(slice, low_slice); - } - } - return slice; -} - -static inline void -cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - u64 slice = cfq_scaled_cfqq_slice(cfqd, cfqq); - u64 now = ktime_get_ns(); - - cfqq->slice_start = now; - cfqq->slice_end = now + slice; - cfqq->allocated_slice = slice; - cfq_log_cfqq(cfqd, cfqq, "set_slice=%llu", cfqq->slice_end - now); -} - -/* - * We need to wrap this check in cfq_cfqq_slice_new(), since ->slice_end - * isn't valid until the first request from the dispatch is activated - * and the slice time set. - */ -static inline bool cfq_slice_used(struct cfq_queue *cfqq) -{ - if (cfq_cfqq_slice_new(cfqq)) - return false; - if (ktime_get_ns() < cfqq->slice_end) - return false; - - return true; -}
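(Editor's sketch, not kernel code: a standalone illustration of the ioprio scaling done by cfq_prio_slice() above, assuming a 100 ms sync base slice. With CFQ_SLICE_SCALE = 5 each priority step is worth base/5, so prio 0 gets 180 ms and prio 7 gets 40 ms.)

#include <stdio.h>
#include <stdint.h>

#define CFQ_SLICE_SCALE 5

static uint64_t prio_slice(uint64_t base, unsigned short prio)
{
	uint64_t step = base / CFQ_SLICE_SCALE;

	/* (4 - prio) can go negative; as in the kernel code, the unsigned
	 * wrap-around still yields the right (positive) result */
	return base + step * (4 - prio);
}

int main(void)
{
	for (unsigned short prio = 0; prio < 8; prio++)	/* the BE priority levels */
		printf("prio %hu -> %llu ms\n", prio,
		       (unsigned long long)prio_slice(100, prio));
	return 0;
}

-/* - * Lifted from AS - choose which of rq1 and rq2 is best served now. - * We choose the request that is closest to the head right now. Distance - * behind the head is penalized and only allowed to a certain extent.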
- */ -static struct request * -cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2, sector_t last) -{ - sector_t s1, s2, d1 = 0, d2 = 0; - unsigned long back_max; -#define CFQ_RQ1_WRAP 0x01 /* request 1 wraps */ -#define CFQ_RQ2_WRAP 0x02 /* request 2 wraps */ - unsigned wrap = 0; /* bit mask: requests behind the disk head? */ - - if (rq1 == NULL || rq1 == rq2) - return rq2; - if (rq2 == NULL) - return rq1; - - if (rq_is_sync(rq1) != rq_is_sync(rq2)) - return rq_is_sync(rq1) ? rq1 : rq2; - - if ((rq1->cmd_flags ^ rq2->cmd_flags) & REQ_PRIO) - return rq1->cmd_flags & REQ_PRIO ? rq1 : rq2; - - s1 = blk_rq_pos(rq1); - s2 = blk_rq_pos(rq2); - - /* - * by definition, 1KiB is 2 sectors - */ - back_max = cfqd->cfq_back_max * 2; - - /* - * Strict one way elevator _except_ in the case where we allow - * short backward seeks which are biased as twice the cost of a - * similar forward seek. - */ - if (s1 >= last) - d1 = s1 - last; - else if (s1 + back_max >= last) - d1 = (last - s1) * cfqd->cfq_back_penalty; - else - wrap |= CFQ_RQ1_WRAP; - - if (s2 >= last) - d2 = s2 - last; - else if (s2 + back_max >= last) - d2 = (last - s2) * cfqd->cfq_back_penalty; - else - wrap |= CFQ_RQ2_WRAP; - - /* Found required data */ - - /* - * By doing switch() on the bit mask "wrap" we avoid having to - * check two variables for all permutations: --> faster! - */ - switch (wrap) { - case 0: /* common case for CFQ: rq1 and rq2 not wrapped */ - if (d1 < d2) - return rq1; - else if (d2 < d1) - return rq2; - else { - if (s1 >= s2) - return rq1; - else - return rq2; - } - - case CFQ_RQ2_WRAP: - return rq1; - case CFQ_RQ1_WRAP: - return rq2; - case (CFQ_RQ1_WRAP|CFQ_RQ2_WRAP): /* both rqs wrapped */ - default: - /* - * Since both rqs are wrapped, - * start with the one that's further behind head - * (--> only *one* back seek required), - * since back seek takes more time than forward. - */ - if (s1 <= s2) - return rq1; - else - return rq2; - } -} - -static struct cfq_queue *cfq_rb_first(struct cfq_rb_root *root) -{ - /* Service tree is empty */ - if (!root->count) - return NULL; - - return rb_entry(rb_first_cached(&root->rb), struct cfq_queue, rb_node); -} - -static struct cfq_group *cfq_rb_first_group(struct cfq_rb_root *root) -{ - return rb_entry_cfqg(rb_first_cached(&root->rb)); -} - -static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root) -{ - if (root->rb_rightmost == n) - root->rb_rightmost = rb_prev(n); - - rb_erase_cached(n, &root->rb); - RB_CLEAR_NODE(n); - - --root->count; -} - -/* - * would be nice to take fifo expire time into account as well - */ -static struct request * -cfq_find_next_rq(struct cfq_data *cfqd, struct cfq_queue *cfqq, - struct request *last) -{ - struct rb_node *rbnext = rb_next(&last->rb_node); - struct rb_node *rbprev = rb_prev(&last->rb_node); - struct request *next = NULL, *prev = NULL; - - BUG_ON(RB_EMPTY_NODE(&last->rb_node)); - - if (rbprev) - prev = rb_entry_rq(rbprev); - - if (rbnext) - next = rb_entry_rq(rbnext); - else { - rbnext = rb_first(&cfqq->sort_list); - if (rbnext && rbnext != &last->rb_node) - next = rb_entry_rq(rbnext); - } - - return cfq_choose_req(cfqd, next, prev, blk_rq_pos(last)); -} - -static u64 cfq_slice_offset(struct cfq_data *cfqd, - struct cfq_queue *cfqq) -{ - /* - * just an approximation, should be ok. 
- */ - return (cfqq->cfqg->nr_cfqq - 1) * (cfq_prio_slice(cfqd, 1, 0) - - cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio)); -} - -static inline s64 -cfqg_key(struct cfq_rb_root *st, struct cfq_group *cfqg) -{ - return cfqg->vdisktime - st->min_vdisktime; -} - -static void -__cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg) -{ - struct rb_node **node = &st->rb.rb_root.rb_node; - struct rb_node *parent = NULL; - struct cfq_group *__cfqg; - s64 key = cfqg_key(st, cfqg); - bool leftmost = true, rightmost = true; - - while (*node != NULL) { - parent = *node; - __cfqg = rb_entry_cfqg(parent); - - if (key < cfqg_key(st, __cfqg)) { - node = &parent->rb_left; - rightmost = false; - } else { - node = &parent->rb_right; - leftmost = false; - } - } - - if (rightmost) - st->rb_rightmost = &cfqg->rb_node; - - rb_link_node(&cfqg->rb_node, parent, node); - rb_insert_color_cached(&cfqg->rb_node, &st->rb, leftmost); -} - -/* - * This has to be called only on activation of cfqg - */ -static void -cfq_update_group_weight(struct cfq_group *cfqg) -{ - if (cfqg->new_weight) { - cfqg->weight = cfqg->new_weight; - cfqg->new_weight = 0; - } -} - -static void -cfq_update_group_leaf_weight(struct cfq_group *cfqg) -{ - BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node)); - - if (cfqg->new_leaf_weight) { - cfqg->leaf_weight = cfqg->new_leaf_weight; - cfqg->new_leaf_weight = 0; - } -} - -static void -cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg) -{ - unsigned int vfr = 1 << CFQ_SERVICE_SHIFT; /* start with 1 */ - struct cfq_group *pos = cfqg; - struct cfq_group *parent; - bool propagate; - - /* add to the service tree */ - BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node)); - - /* - * Update leaf_weight. We cannot update weight at this point - * because cfqg might already have been activated and is - * contributing its current weight to the parent's child_weight. - */ - cfq_update_group_leaf_weight(cfqg); - __cfq_group_service_tree_add(st, cfqg); - - /* - * Activate @cfqg and calculate the portion of vfraction @cfqg is - * entitled to. vfraction is calculated by walking the tree - * towards the root calculating the fraction it has at each level. - * The compounded ratio is how much vfraction @cfqg owns. - * - * Start with the proportion tasks in this cfqg has against active - * children cfqgs - its leaf_weight against children_weight. - */ - propagate = !pos->nr_active++; - pos->children_weight += pos->leaf_weight; - vfr = vfr * pos->leaf_weight / pos->children_weight; - - /* - * Compound ->weight walking up the tree. Both activation and - * vfraction calculation are done in the same loop. Propagation - * stops once an already activated node is met. vfraction - * calculation should always continue to the root. 
- */ - while ((parent = cfqg_parent(pos))) { - if (propagate) { - cfq_update_group_weight(pos); - propagate = !parent->nr_active++; - parent->children_weight += pos->weight; - } - vfr = vfr * pos->weight / parent->children_weight; - pos = parent; - } - - cfqg->vfraction = max_t(unsigned, vfr, 1); -} - -static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd) -{ - if (!iops_mode(cfqd)) - return CFQ_SLICE_MODE_GROUP_DELAY; - else - return CFQ_IOPS_MODE_GROUP_DELAY; -} - -static void -cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg) -{ - struct cfq_rb_root *st = &cfqd->grp_service_tree; - struct cfq_group *__cfqg; - struct rb_node *n; - - cfqg->nr_cfqq++; - if (!RB_EMPTY_NODE(&cfqg->rb_node)) - return; - - /* - * Currently put the group at the end. Later, implement something - * so that groups get a lesser vtime based on their weights, so that - * a group does not lose out if it was not continuously backlogged. - */ - n = st->rb_rightmost; - if (n) { - __cfqg = rb_entry_cfqg(n); - cfqg->vdisktime = __cfqg->vdisktime + - cfq_get_cfqg_vdisktime_delay(cfqd); - } else - cfqg->vdisktime = st->min_vdisktime; - cfq_group_service_tree_add(st, cfqg); -} - -static void -cfq_group_service_tree_del(struct cfq_rb_root *st, struct cfq_group *cfqg) -{ - struct cfq_group *pos = cfqg; - bool propagate; - - /* - * Undo activation from cfq_group_service_tree_add(). Deactivate - * @cfqg and propagate deactivation upwards. - */ - propagate = !--pos->nr_active; - pos->children_weight -= pos->leaf_weight; - - while (propagate) { - struct cfq_group *parent = cfqg_parent(pos); - - /* @pos has 0 nr_active at this point */ - WARN_ON_ONCE(pos->children_weight); - pos->vfraction = 0; - - if (!parent) - break; - - propagate = !--parent->nr_active; - parent->children_weight -= pos->weight; - pos = parent; - } - - /* remove from the service tree */ - if (!RB_EMPTY_NODE(&cfqg->rb_node)) - cfq_rb_erase(&cfqg->rb_node, st); -} - -static void -cfq_group_notify_queue_del(struct cfq_data *cfqd, struct cfq_group *cfqg) -{ - struct cfq_rb_root *st = &cfqd->grp_service_tree; - - BUG_ON(cfqg->nr_cfqq < 1); - cfqg->nr_cfqq--; - - /* If there are other cfq queues under this group, don't delete it */ - if (cfqg->nr_cfqq) - return; - - cfq_log_cfqg(cfqd, cfqg, "del_from_rr group"); - cfq_group_service_tree_del(st, cfqg); - cfqg->saved_wl_slice = 0; - cfqg_stats_update_dequeue(cfqg); -}
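(Editor's sketch, not kernel code: a toy calculation of the vfraction compounding performed by cfq_group_service_tree_add() above, with made-up weights. A group holding 500 of its level's children_weight of 1000, under a parent holding 500 of the root's children_weight of 1000, is entitled to a quarter of the disk time.)

#include <stdio.h>

#define CFQ_SERVICE_SHIFT 12

int main(void)
{
	unsigned int vfr = 1 << CFQ_SERVICE_SHIFT;	/* 1.0 in fixed point */

	vfr = vfr * 500 / 1000;	/* this group's leaf_weight / its children_weight */
	vfr = vfr * 500 / 1000;	/* parent's weight / grandparent's children_weight */

	printf("vfraction = %u/%u\n", vfr, 1U << CFQ_SERVICE_SHIFT);	/* 1024/4096 */
	return 0;
}

-static inline u64 cfq_cfqq_slice_usage(struct cfq_queue *cfqq, - u64 *unaccounted_time) -{ - u64 slice_used; - u64 now = ktime_get_ns(); - - /* - * Queue got expired before even a single request completed or - * got expired immediately after first request completion. - */ - if (!cfqq->slice_start || cfqq->slice_start == now) { - /* - * Also charge the seek time incurred to the group, otherwise - * if there are multiple queues in the group, each can dispatch - * a single request on seeky media and cause lots of seek time - * and group will never know it.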
- */ - slice_used = max_t(u64, (now - cfqq->dispatch_start), - jiffies_to_nsecs(1)); - } else { - slice_used = now - cfqq->slice_start; - if (slice_used > cfqq->allocated_slice) { - *unaccounted_time = slice_used - cfqq->allocated_slice; - slice_used = cfqq->allocated_slice; - } - if (cfqq->slice_start > cfqq->dispatch_start) - *unaccounted_time += cfqq->slice_start - - cfqq->dispatch_start; - } - - return slice_used; -} - -static void cfq_group_served(struct cfq_data *cfqd, struct cfq_group *cfqg, - struct cfq_queue *cfqq) -{ - struct cfq_rb_root *st = &cfqd->grp_service_tree; - u64 used_sl, charge, unaccounted_sl = 0; - int nr_sync = cfqg->nr_cfqq - cfqg_busy_async_queues(cfqd, cfqg) - - cfqg->service_tree_idle.count; - unsigned int vfr; - u64 now = ktime_get_ns(); - - BUG_ON(nr_sync < 0); - used_sl = charge = cfq_cfqq_slice_usage(cfqq, &unaccounted_sl); - - if (iops_mode(cfqd)) - charge = cfqq->slice_dispatch; - else if (!cfq_cfqq_sync(cfqq) && !nr_sync) - charge = cfqq->allocated_slice; - - /* - * Can't update vdisktime while on service tree and cfqg->vfraction - * is valid only while on it. Cache vfr, leave the service tree, - * update vdisktime and go back on. The re-addition to the tree - * will also update the weights as necessary. - */ - vfr = cfqg->vfraction; - cfq_group_service_tree_del(st, cfqg); - cfqg->vdisktime += cfqg_scale_charge(charge, vfr); - cfq_group_service_tree_add(st, cfqg); - - /* This group is being expired. Save the context */ - if (cfqd->workload_expires > now) { - cfqg->saved_wl_slice = cfqd->workload_expires - now; - cfqg->saved_wl_type = cfqd->serving_wl_type; - cfqg->saved_wl_class = cfqd->serving_wl_class; - } else - cfqg->saved_wl_slice = 0; - - cfq_log_cfqg(cfqd, cfqg, "served: vt=%llu min_vt=%llu", cfqg->vdisktime, - st->min_vdisktime); - cfq_log_cfqq(cfqq->cfqd, cfqq, - "sl_used=%llu disp=%llu charge=%llu iops=%u sect=%lu", - used_sl, cfqq->slice_dispatch, charge, - iops_mode(cfqd), cfqq->nr_sectors); - cfqg_stats_update_timeslice_used(cfqg, used_sl, unaccounted_sl); - cfqg_stats_set_start_empty_time(cfqg); -} - -/** - * cfq_init_cfqg_base - initialize base part of a cfq_group - * @cfqg: cfq_group to initialize - * - * Initialize the base part which is used whether %CONFIG_CFQ_GROUP_IOSCHED - * is enabled or not. 
- */ -static void cfq_init_cfqg_base(struct cfq_group *cfqg) -{ - struct cfq_rb_root *st; - int i, j; - - for_each_cfqg_st(cfqg, i, j, st) - *st = CFQ_RB_ROOT; - RB_CLEAR_NODE(&cfqg->rb_node); - - cfqg->ttime.last_end_request = ktime_get_ns(); -} - -#ifdef CONFIG_CFQ_GROUP_IOSCHED -static int __cfq_set_weight(struct cgroup_subsys_state *css, u64 val, - bool on_dfl, bool reset_dev, bool is_leaf_weight); - -static void cfqg_stats_exit(struct cfqg_stats *stats) -{ - blkg_rwstat_exit(&stats->merged); - blkg_rwstat_exit(&stats->service_time); - blkg_rwstat_exit(&stats->wait_time); - blkg_rwstat_exit(&stats->queued); - blkg_stat_exit(&stats->time); -#ifdef CONFIG_DEBUG_BLK_CGROUP - blkg_stat_exit(&stats->unaccounted_time); - blkg_stat_exit(&stats->avg_queue_size_sum); - blkg_stat_exit(&stats->avg_queue_size_samples); - blkg_stat_exit(&stats->dequeue); - blkg_stat_exit(&stats->group_wait_time); - blkg_stat_exit(&stats->idle_time); - blkg_stat_exit(&stats->empty_time); -#endif -} - -static int cfqg_stats_init(struct cfqg_stats *stats, gfp_t gfp) -{ - if (blkg_rwstat_init(&stats->merged, gfp) || - blkg_rwstat_init(&stats->service_time, gfp) || - blkg_rwstat_init(&stats->wait_time, gfp) || - blkg_rwstat_init(&stats->queued, gfp) || - blkg_stat_init(&stats->time, gfp)) - goto err; - -#ifdef CONFIG_DEBUG_BLK_CGROUP - if (blkg_stat_init(&stats->unaccounted_time, gfp) || - blkg_stat_init(&stats->avg_queue_size_sum, gfp) || - blkg_stat_init(&stats->avg_queue_size_samples, gfp) || - blkg_stat_init(&stats->dequeue, gfp) || - blkg_stat_init(&stats->group_wait_time, gfp) || - blkg_stat_init(&stats->idle_time, gfp) || - blkg_stat_init(&stats->empty_time, gfp)) - goto err; -#endif - return 0; -err: - cfqg_stats_exit(stats); - return -ENOMEM; -} - -static struct blkcg_policy_data *cfq_cpd_alloc(gfp_t gfp) -{ - struct cfq_group_data *cgd; - - cgd = kzalloc(sizeof(*cgd), gfp); - if (!cgd) - return NULL; - return &cgd->cpd; -} - -static void cfq_cpd_init(struct blkcg_policy_data *cpd) -{ - struct cfq_group_data *cgd = cpd_to_cfqgd(cpd); - unsigned int weight = cgroup_subsys_on_dfl(io_cgrp_subsys) ? - CGROUP_WEIGHT_DFL : CFQ_WEIGHT_LEGACY_DFL; - - if (cpd_to_blkcg(cpd) == &blkcg_root) - weight *= 2; - - cgd->weight = weight; - cgd->leaf_weight = weight; -} - -static void cfq_cpd_free(struct blkcg_policy_data *cpd) -{ - kfree(cpd_to_cfqgd(cpd)); -} - -static void cfq_cpd_bind(struct blkcg_policy_data *cpd) -{ - struct blkcg *blkcg = cpd_to_blkcg(cpd); - bool on_dfl = cgroup_subsys_on_dfl(io_cgrp_subsys); - unsigned int weight = on_dfl ? 
CGROUP_WEIGHT_DFL : CFQ_WEIGHT_LEGACY_DFL; - - if (blkcg == &blkcg_root) - weight *= 2; - - WARN_ON_ONCE(__cfq_set_weight(&blkcg->css, weight, on_dfl, true, false)); - WARN_ON_ONCE(__cfq_set_weight(&blkcg->css, weight, on_dfl, true, true)); -} - -static struct blkg_policy_data *cfq_pd_alloc(gfp_t gfp, int node) -{ - struct cfq_group *cfqg; - - cfqg = kzalloc_node(sizeof(*cfqg), gfp, node); - if (!cfqg) - return NULL; - - cfq_init_cfqg_base(cfqg); - if (cfqg_stats_init(&cfqg->stats, gfp)) { - kfree(cfqg); - return NULL; - } - - return &cfqg->pd; -} - -static void cfq_pd_init(struct blkg_policy_data *pd) -{ - struct cfq_group *cfqg = pd_to_cfqg(pd); - struct cfq_group_data *cgd = blkcg_to_cfqgd(pd->blkg->blkcg); - - cfqg->weight = cgd->weight; - cfqg->leaf_weight = cgd->leaf_weight; -} - -static void cfq_pd_offline(struct blkg_policy_data *pd) -{ - struct cfq_group *cfqg = pd_to_cfqg(pd); - int i; - - for (i = 0; i < IOPRIO_BE_NR; i++) { - if (cfqg->async_cfqq[0][i]) { - cfq_put_queue(cfqg->async_cfqq[0][i]); - cfqg->async_cfqq[0][i] = NULL; - } - if (cfqg->async_cfqq[1][i]) { - cfq_put_queue(cfqg->async_cfqq[1][i]); - cfqg->async_cfqq[1][i] = NULL; - } - } - - if (cfqg->async_idle_cfqq) { - cfq_put_queue(cfqg->async_idle_cfqq); - cfqg->async_idle_cfqq = NULL; - } - - /* - * @blkg is going offline and will be ignored by - * blkg_[rw]stat_recursive_sum(). Transfer stats to the parent so - * that they don't get lost. If IOs complete after this point, the - * stats for them will be lost. Oh well... - */ - cfqg_stats_xfer_dead(cfqg); -} - -static void cfq_pd_free(struct blkg_policy_data *pd) -{ - struct cfq_group *cfqg = pd_to_cfqg(pd); - - cfqg_stats_exit(&cfqg->stats); - return kfree(cfqg); -} - -static void cfq_pd_reset_stats(struct blkg_policy_data *pd) -{ - struct cfq_group *cfqg = pd_to_cfqg(pd); - - cfqg_stats_reset(&cfqg->stats); -} - -static struct cfq_group *cfq_lookup_cfqg(struct cfq_data *cfqd, - struct blkcg *blkcg) -{ - struct blkcg_gq *blkg; - - blkg = blkg_lookup(blkcg, cfqd->queue); - if (likely(blkg)) - return blkg_to_cfqg(blkg); - return NULL; -} - -static void cfq_link_cfqq_cfqg(struct cfq_queue *cfqq, struct cfq_group *cfqg) -{ - cfqq->cfqg = cfqg; - /* cfqq reference on cfqg */ - cfqg_get(cfqg); -} - -static u64 cfqg_prfill_weight_device(struct seq_file *sf, - struct blkg_policy_data *pd, int off) -{ - struct cfq_group *cfqg = pd_to_cfqg(pd); - - if (!cfqg->dev_weight) - return 0; - return __blkg_prfill_u64(sf, pd, cfqg->dev_weight); -} - -static int cfqg_print_weight_device(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), - cfqg_prfill_weight_device, &blkcg_policy_cfq, - 0, false); - return 0; -} - -static u64 cfqg_prfill_leaf_weight_device(struct seq_file *sf, - struct blkg_policy_data *pd, int off) -{ - struct cfq_group *cfqg = pd_to_cfqg(pd); - - if (!cfqg->dev_leaf_weight) - return 0; - return __blkg_prfill_u64(sf, pd, cfqg->dev_leaf_weight); -} - -static int cfqg_print_leaf_weight_device(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), - cfqg_prfill_leaf_weight_device, &blkcg_policy_cfq, - 0, false); - return 0; -} - -static int cfq_print_weight(struct seq_file *sf, void *v) -{ - struct blkcg *blkcg = css_to_blkcg(seq_css(sf)); - struct cfq_group_data *cgd = blkcg_to_cfqgd(blkcg); - unsigned int val = 0; - - if (cgd) - val = cgd->weight; - - seq_printf(sf, "%u\n", val); - return 0; -} - -static int cfq_print_leaf_weight(struct seq_file *sf, void *v) -{ - struct blkcg *blkcg = 
css_to_blkcg(seq_css(sf)); - struct cfq_group_data *cgd = blkcg_to_cfqgd(blkcg); - unsigned int val = 0; - - if (cgd) - val = cgd->leaf_weight; - - seq_printf(sf, "%u\n", val); - return 0; -} - -static ssize_t __cfqg_set_weight_device(struct kernfs_open_file *of, - char *buf, size_t nbytes, loff_t off, - bool on_dfl, bool is_leaf_weight) -{ - unsigned int min = on_dfl ? CGROUP_WEIGHT_MIN : CFQ_WEIGHT_LEGACY_MIN; - unsigned int max = on_dfl ? CGROUP_WEIGHT_MAX : CFQ_WEIGHT_LEGACY_MAX; - struct blkcg *blkcg = css_to_blkcg(of_css(of)); - struct blkg_conf_ctx ctx; - struct cfq_group *cfqg; - struct cfq_group_data *cfqgd; - int ret; - u64 v; - - ret = blkg_conf_prep(blkcg, &blkcg_policy_cfq, buf, &ctx); - if (ret) - return ret; - - if (sscanf(ctx.body, "%llu", &v) == 1) { - /* require "default" on dfl */ - ret = -ERANGE; - if (!v && on_dfl) - goto out_finish; - } else if (!strcmp(strim(ctx.body), "default")) { - v = 0; - } else { - ret = -EINVAL; - goto out_finish; - } - - cfqg = blkg_to_cfqg(ctx.blkg); - cfqgd = blkcg_to_cfqgd(blkcg); - - ret = -ERANGE; - if (!v || (v >= min && v <= max)) { - if (!is_leaf_weight) { - cfqg->dev_weight = v; - cfqg->new_weight = v ?: cfqgd->weight; - } else { - cfqg->dev_leaf_weight = v; - cfqg->new_leaf_weight = v ?: cfqgd->leaf_weight; - } - ret = 0; - } -out_finish: - blkg_conf_finish(&ctx); - return ret ?: nbytes; -} - -static ssize_t cfqg_set_weight_device(struct kernfs_open_file *of, - char *buf, size_t nbytes, loff_t off) -{ - return __cfqg_set_weight_device(of, buf, nbytes, off, false, false); -} - -static ssize_t cfqg_set_leaf_weight_device(struct kernfs_open_file *of, - char *buf, size_t nbytes, loff_t off) -{ - return __cfqg_set_weight_device(of, buf, nbytes, off, false, true); -} - -static int __cfq_set_weight(struct cgroup_subsys_state *css, u64 val, - bool on_dfl, bool reset_dev, bool is_leaf_weight) -{ - unsigned int min = on_dfl ? CGROUP_WEIGHT_MIN : CFQ_WEIGHT_LEGACY_MIN; - unsigned int max = on_dfl ? 
CGROUP_WEIGHT_MAX : CFQ_WEIGHT_LEGACY_MAX; - struct blkcg *blkcg = css_to_blkcg(css); - struct blkcg_gq *blkg; - struct cfq_group_data *cfqgd; - int ret = 0; - - if (val < min || val > max) - return -ERANGE; - - spin_lock_irq(&blkcg->lock); - cfqgd = blkcg_to_cfqgd(blkcg); - if (!cfqgd) { - ret = -EINVAL; - goto out; - } - - if (!is_leaf_weight) - cfqgd->weight = val; - else - cfqgd->leaf_weight = val; - - hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) { - struct cfq_group *cfqg = blkg_to_cfqg(blkg); - - if (!cfqg) - continue; - - if (!is_leaf_weight) { - if (reset_dev) - cfqg->dev_weight = 0; - if (!cfqg->dev_weight) - cfqg->new_weight = cfqgd->weight; - } else { - if (reset_dev) - cfqg->dev_leaf_weight = 0; - if (!cfqg->dev_leaf_weight) - cfqg->new_leaf_weight = cfqgd->leaf_weight; - } - } - -out: - spin_unlock_irq(&blkcg->lock); - return ret; -} - -static int cfq_set_weight(struct cgroup_subsys_state *css, struct cftype *cft, - u64 val) -{ - return __cfq_set_weight(css, val, false, false, false); -} - -static int cfq_set_leaf_weight(struct cgroup_subsys_state *css, - struct cftype *cft, u64 val) -{ - return __cfq_set_weight(css, val, false, false, true); -} - -static int cfqg_print_stat(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat, - &blkcg_policy_cfq, seq_cft(sf)->private, false); - return 0; -} - -static int cfqg_print_rwstat(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_rwstat, - &blkcg_policy_cfq, seq_cft(sf)->private, true); - return 0; -} - -static u64 cfqg_prfill_stat_recursive(struct seq_file *sf, - struct blkg_policy_data *pd, int off) -{ - u64 sum = blkg_stat_recursive_sum(pd_to_blkg(pd), - &blkcg_policy_cfq, off); - return __blkg_prfill_u64(sf, pd, sum); -} - -static u64 cfqg_prfill_rwstat_recursive(struct seq_file *sf, - struct blkg_policy_data *pd, int off) -{ - struct blkg_rwstat sum = blkg_rwstat_recursive_sum(pd_to_blkg(pd), - &blkcg_policy_cfq, off); - return __blkg_prfill_rwstat(sf, pd, &sum); -} - -static int cfqg_print_stat_recursive(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), - cfqg_prfill_stat_recursive, &blkcg_policy_cfq, - seq_cft(sf)->private, false); - return 0; -} - -static int cfqg_print_rwstat_recursive(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), - cfqg_prfill_rwstat_recursive, &blkcg_policy_cfq, - seq_cft(sf)->private, true); - return 0; -} - -static u64 cfqg_prfill_sectors(struct seq_file *sf, struct blkg_policy_data *pd, - int off) -{ - u64 sum = blkg_rwstat_total(&pd->blkg->stat_bytes); - - return __blkg_prfill_u64(sf, pd, sum >> 9); -} - -static int cfqg_print_stat_sectors(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), - cfqg_prfill_sectors, &blkcg_policy_cfq, 0, false); - return 0; -} - -static u64 cfqg_prfill_sectors_recursive(struct seq_file *sf, - struct blkg_policy_data *pd, int off) -{ - struct blkg_rwstat tmp = blkg_rwstat_recursive_sum(pd->blkg, NULL, - offsetof(struct blkcg_gq, stat_bytes)); - u64 sum = atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) + - atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]); - - return __blkg_prfill_u64(sf, pd, sum >> 9); -} - -static int cfqg_print_stat_sectors_recursive(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), - cfqg_prfill_sectors_recursive, &blkcg_policy_cfq, 0, - false); - return 0; -} - -#ifdef CONFIG_DEBUG_BLK_CGROUP -static u64 
cfqg_prfill_avg_queue_size(struct seq_file *sf, - struct blkg_policy_data *pd, int off) -{ - struct cfq_group *cfqg = pd_to_cfqg(pd); - u64 samples = blkg_stat_read(&cfqg->stats.avg_queue_size_samples); - u64 v = 0; - - if (samples) { - v = blkg_stat_read(&cfqg->stats.avg_queue_size_sum); - v = div64_u64(v, samples); - } - __blkg_prfill_u64(sf, pd, v); - return 0; -} - -/* print avg_queue_size */ -static int cfqg_print_avg_queue_size(struct seq_file *sf, void *v) -{ - blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), - cfqg_prfill_avg_queue_size, &blkcg_policy_cfq, - 0, false); - return 0; -} -#endif /* CONFIG_DEBUG_BLK_CGROUP */ - -static struct cftype cfq_blkcg_legacy_files[] = { - /* on root, weight is mapped to leaf_weight */ - { - .name = "weight_device", - .flags = CFTYPE_ONLY_ON_ROOT, - .seq_show = cfqg_print_leaf_weight_device, - .write = cfqg_set_leaf_weight_device, - }, - { - .name = "weight", - .flags = CFTYPE_ONLY_ON_ROOT, - .seq_show = cfq_print_leaf_weight, - .write_u64 = cfq_set_leaf_weight, - }, - - /* no such mapping necessary for !roots */ - { - .name = "weight_device", - .flags = CFTYPE_NOT_ON_ROOT, - .seq_show = cfqg_print_weight_device, - .write = cfqg_set_weight_device, - }, - { - .name = "weight", - .flags = CFTYPE_NOT_ON_ROOT, - .seq_show = cfq_print_weight, - .write_u64 = cfq_set_weight, - }, - - { - .name = "leaf_weight_device", - .seq_show = cfqg_print_leaf_weight_device, - .write = cfqg_set_leaf_weight_device, - }, - { - .name = "leaf_weight", - .seq_show = cfq_print_leaf_weight, - .write_u64 = cfq_set_leaf_weight, - }, - - /* statistics, covers only the tasks in the cfqg */ - { - .name = "time", - .private = offsetof(struct cfq_group, stats.time), - .seq_show = cfqg_print_stat, - }, - { - .name = "sectors", - .seq_show = cfqg_print_stat_sectors, - }, - { - .name = "io_service_bytes", - .private = (unsigned long)&blkcg_policy_cfq, - .seq_show = blkg_print_stat_bytes, - }, - { - .name = "io_serviced", - .private = (unsigned long)&blkcg_policy_cfq, - .seq_show = blkg_print_stat_ios, - }, - { - .name = "io_service_time", - .private = offsetof(struct cfq_group, stats.service_time), - .seq_show = cfqg_print_rwstat, - }, - { - .name = "io_wait_time", - .private = offsetof(struct cfq_group, stats.wait_time), - .seq_show = cfqg_print_rwstat, - }, - { - .name = "io_merged", - .private = offsetof(struct cfq_group, stats.merged), - .seq_show = cfqg_print_rwstat, - }, - { - .name = "io_queued", - .private = offsetof(struct cfq_group, stats.queued), - .seq_show = cfqg_print_rwstat, - }, - - /* the same statistics which cover the cfqg and its descendants */ - { - .name = "time_recursive", - .private = offsetof(struct cfq_group, stats.time), - .seq_show = cfqg_print_stat_recursive, - }, - { - .name = "sectors_recursive", - .seq_show = cfqg_print_stat_sectors_recursive, - }, - { - .name = "io_service_bytes_recursive", - .private = (unsigned long)&blkcg_policy_cfq, - .seq_show = blkg_print_stat_bytes_recursive, - }, - { - .name = "io_serviced_recursive", - .private = (unsigned long)&blkcg_policy_cfq, - .seq_show = blkg_print_stat_ios_recursive, - }, - { - .name = "io_service_time_recursive", - .private = offsetof(struct cfq_group, stats.service_time), - .seq_show = cfqg_print_rwstat_recursive, - }, - { - .name = "io_wait_time_recursive", - .private = offsetof(struct cfq_group, stats.wait_time), - .seq_show = cfqg_print_rwstat_recursive, - }, - { - .name = "io_merged_recursive", - .private = offsetof(struct cfq_group, stats.merged), - .seq_show = cfqg_print_rwstat_recursive, - }, 
- { - .name = "io_queued_recursive", - .private = offsetof(struct cfq_group, stats.queued), - .seq_show = cfqg_print_rwstat_recursive, - }, -#ifdef CONFIG_DEBUG_BLK_CGROUP - { - .name = "avg_queue_size", - .seq_show = cfqg_print_avg_queue_size, - }, - { - .name = "group_wait_time", - .private = offsetof(struct cfq_group, stats.group_wait_time), - .seq_show = cfqg_print_stat, - }, - { - .name = "idle_time", - .private = offsetof(struct cfq_group, stats.idle_time), - .seq_show = cfqg_print_stat, - }, - { - .name = "empty_time", - .private = offsetof(struct cfq_group, stats.empty_time), - .seq_show = cfqg_print_stat, - }, - { - .name = "dequeue", - .private = offsetof(struct cfq_group, stats.dequeue), - .seq_show = cfqg_print_stat, - }, - { - .name = "unaccounted_time", - .private = offsetof(struct cfq_group, stats.unaccounted_time), - .seq_show = cfqg_print_stat, - }, -#endif /* CONFIG_DEBUG_BLK_CGROUP */ - { } /* terminate */ -}; - -static int cfq_print_weight_on_dfl(struct seq_file *sf, void *v) -{ - struct blkcg *blkcg = css_to_blkcg(seq_css(sf)); - struct cfq_group_data *cgd = blkcg_to_cfqgd(blkcg); - - seq_printf(sf, "default %u\n", cgd->weight); - blkcg_print_blkgs(sf, blkcg, cfqg_prfill_weight_device, - &blkcg_policy_cfq, 0, false); - return 0; -} - -static ssize_t cfq_set_weight_on_dfl(struct kernfs_open_file *of, - char *buf, size_t nbytes, loff_t off) -{ - char *endp; - int ret; - u64 v; - - buf = strim(buf); - - /* "WEIGHT" or "default WEIGHT" sets the default weight */ - v = simple_strtoull(buf, &endp, 0); - if (*endp == '\0' || sscanf(buf, "default %llu", &v) == 1) { - ret = __cfq_set_weight(of_css(of), v, true, false, false); - return ret ?: nbytes; - } - - /* "MAJ:MIN WEIGHT" */ - return __cfqg_set_weight_device(of, buf, nbytes, off, true, false); -} - -static struct cftype cfq_blkcg_files[] = { - { - .name = "weight", - .flags = CFTYPE_NOT_ON_ROOT, - .seq_show = cfq_print_weight_on_dfl, - .write = cfq_set_weight_on_dfl, - }, - { } /* terminate */ -}; - -#else /* GROUP_IOSCHED */ -static struct cfq_group *cfq_lookup_cfqg(struct cfq_data *cfqd, - struct blkcg *blkcg) -{ - return cfqd->root_group; -} - -static inline void -cfq_link_cfqq_cfqg(struct cfq_queue *cfqq, struct cfq_group *cfqg) { - cfqq->cfqg = cfqg; -} - -#endif /* GROUP_IOSCHED */ - -/* - * The cfqd->service_trees holds all pending cfq_queue's that have - * requests waiting to be processed. It is sorted in the order that - * we will service the queues. - */ -static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq, - bool add_front) -{ - struct rb_node **p, *parent; - struct cfq_queue *__cfqq; - u64 rb_key; - struct cfq_rb_root *st; - bool leftmost = true; - int new_cfqq = 1; - u64 now = ktime_get_ns(); - - st = st_for(cfqq->cfqg, cfqq_class(cfqq), cfqq_type(cfqq)); - if (cfq_class_idle(cfqq)) { - rb_key = CFQ_IDLE_DELAY; - parent = st->rb_rightmost; - if (parent && parent != &cfqq->rb_node) { - __cfqq = rb_entry(parent, struct cfq_queue, rb_node); - rb_key += __cfqq->rb_key; - } else - rb_key += now; - } else if (!add_front) { - /* - * Get our rb key offset. Subtract any residual slice - * value carried from last service. A negative resid - * count indicates slice overrun, and this should position - * the next service time further away in the tree. - */ - rb_key = cfq_slice_offset(cfqd, cfqq) + now; - rb_key -= cfqq->slice_resid; - cfqq->slice_resid = 0; - } else { - rb_key = -NSEC_PER_SEC; - __cfqq = cfq_rb_first(st); - rb_key += __cfqq ? 
__cfqq->rb_key : now; - } - - if (!RB_EMPTY_NODE(&cfqq->rb_node)) { - new_cfqq = 0; - /* - * same position, nothing more to do - */ - if (rb_key == cfqq->rb_key && cfqq->service_tree == st) - return; - - cfq_rb_erase(&cfqq->rb_node, cfqq->service_tree); - cfqq->service_tree = NULL; - } - - parent = NULL; - cfqq->service_tree = st; - p = &st->rb.rb_root.rb_node; - while (*p) { - parent = *p; - __cfqq = rb_entry(parent, struct cfq_queue, rb_node); - - /* - * sort by key, that represents service time. - */ - if (rb_key < __cfqq->rb_key) - p = &parent->rb_left; - else { - p = &parent->rb_right; - leftmost = false; - } - } - - cfqq->rb_key = rb_key; - rb_link_node(&cfqq->rb_node, parent, p); - rb_insert_color_cached(&cfqq->rb_node, &st->rb, leftmost); - st->count++; - if (add_front || !new_cfqq) - return; - cfq_group_notify_queue_add(cfqd, cfqq->cfqg); -} - -static struct cfq_queue * -cfq_prio_tree_lookup(struct cfq_data *cfqd, struct rb_root *root, - sector_t sector, struct rb_node **ret_parent, - struct rb_node ***rb_link) -{ - struct rb_node **p, *parent; - struct cfq_queue *cfqq = NULL; - - parent = NULL; - p = &root->rb_node; - while (*p) { - struct rb_node **n; - - parent = *p; - cfqq = rb_entry(parent, struct cfq_queue, p_node); - - /* - * Sort strictly based on sector. Smallest to the left, - * largest to the right. - */ - if (sector > blk_rq_pos(cfqq->next_rq)) - n = &(*p)->rb_right; - else if (sector < blk_rq_pos(cfqq->next_rq)) - n = &(*p)->rb_left; - else - break; - p = n; - cfqq = NULL; - } - - *ret_parent = parent; - if (rb_link) - *rb_link = p; - return cfqq; -} - -static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - struct rb_node **p, *parent; - struct cfq_queue *__cfqq; - - if (cfqq->p_root) { - rb_erase(&cfqq->p_node, cfqq->p_root); - cfqq->p_root = NULL; - } - - if (cfq_class_idle(cfqq)) - return; - if (!cfqq->next_rq) - return; - - cfqq->p_root = &cfqd->prio_trees[cfqq->org_ioprio]; - __cfqq = cfq_prio_tree_lookup(cfqd, cfqq->p_root, - blk_rq_pos(cfqq->next_rq), &parent, &p); - if (!__cfqq) { - rb_link_node(&cfqq->p_node, parent, p); - rb_insert_color(&cfqq->p_node, cfqq->p_root); - } else - cfqq->p_root = NULL; -} - -/* - * Update cfqq's position in the service tree. - */ -static void cfq_resort_rr_list(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - /* - * Resorting requires the cfqq to be on the RR list already. - */ - if (cfq_cfqq_on_rr(cfqq)) { - cfq_service_tree_add(cfqd, cfqq, 0); - cfq_prio_tree_add(cfqd, cfqq); - } -} - -/* - * add to busy list of queues for service, trying to be fair in ordering - * the pending list according to last request service - */ -static void cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - cfq_log_cfqq(cfqd, cfqq, "add_to_rr"); - BUG_ON(cfq_cfqq_on_rr(cfqq)); - cfq_mark_cfqq_on_rr(cfqq); - cfqd->busy_queues++; - if (cfq_cfqq_sync(cfqq)) - cfqd->busy_sync_queues++; - - cfq_resort_rr_list(cfqd, cfqq); -} - -/* - * Called when the cfqq no longer has requests pending, remove it from - * the service tree. 
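As an aside on the rb_key arithmetic in cfq_service_tree_add() above: the key is effectively a future service time, so a queue that overran its previous slice (a negative slice_resid) is pushed further right in the tree and serviced later. A minimal standalone sketch of that bias, with made-up nanosecond values (illustrative userspace C, not code from this file):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t now = 1000000000ULL;	/* stand-in for ktime_get_ns() */
		uint64_t offset = 4000000ULL;	/* stand-in for cfq_slice_offset() */
		int64_t overrun = -2000000;	/* queue overran its slice by 2ms */
		int64_t unused = 3000000;	/* queue left 3ms of slice unused */

		/* rb_key = offset + now - resid: the overrunner gets the
		 * larger key and therefore sorts later in the tree */
		printf("rb_key after overrun: %llu\n",
		       (unsigned long long)(now + offset - overrun));
		printf("rb_key after unused slice: %llu\n",
		       (unsigned long long)(now + offset - unused));
		return 0;
	}
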
- */ -static void cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - cfq_log_cfqq(cfqd, cfqq, "del_from_rr"); - BUG_ON(!cfq_cfqq_on_rr(cfqq)); - cfq_clear_cfqq_on_rr(cfqq); - - if (!RB_EMPTY_NODE(&cfqq->rb_node)) { - cfq_rb_erase(&cfqq->rb_node, cfqq->service_tree); - cfqq->service_tree = NULL; - } - if (cfqq->p_root) { - rb_erase(&cfqq->p_node, cfqq->p_root); - cfqq->p_root = NULL; - } - - cfq_group_notify_queue_del(cfqd, cfqq->cfqg); - BUG_ON(!cfqd->busy_queues); - cfqd->busy_queues--; - if (cfq_cfqq_sync(cfqq)) - cfqd->busy_sync_queues--; -} - -/* - * rb tree support functions - */ -static void cfq_del_rq_rb(struct request *rq) -{ - struct cfq_queue *cfqq = RQ_CFQQ(rq); - const int sync = rq_is_sync(rq); - - BUG_ON(!cfqq->queued[sync]); - cfqq->queued[sync]--; - - elv_rb_del(&cfqq->sort_list, rq); - - if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list)) { - /* - * Queue will be deleted from service tree when we actually - * expire it later. Right now just remove it from prio tree - * as it is empty. - */ - if (cfqq->p_root) { - rb_erase(&cfqq->p_node, cfqq->p_root); - cfqq->p_root = NULL; - } - } -} - -static void cfq_add_rq_rb(struct request *rq) -{ - struct cfq_queue *cfqq = RQ_CFQQ(rq); - struct cfq_data *cfqd = cfqq->cfqd; - struct request *prev; - - cfqq->queued[rq_is_sync(rq)]++; - - elv_rb_add(&cfqq->sort_list, rq); - - if (!cfq_cfqq_on_rr(cfqq)) - cfq_add_cfqq_rr(cfqd, cfqq); - - /* - * check if this request is a better next-serve candidate - */ - prev = cfqq->next_rq; - cfqq->next_rq = cfq_choose_req(cfqd, cfqq->next_rq, rq, cfqd->last_position); - - /* - * adjust priority tree position, if ->next_rq changes - */ - if (prev != cfqq->next_rq) - cfq_prio_tree_add(cfqd, cfqq); - - BUG_ON(!cfqq->next_rq); -} - -static void cfq_reposition_rq_rb(struct cfq_queue *cfqq, struct request *rq) -{ - elv_rb_del(&cfqq->sort_list, rq); - cfqq->queued[rq_is_sync(rq)]--; - cfqg_stats_update_io_remove(RQ_CFQG(rq), rq->cmd_flags); - cfq_add_rq_rb(rq); - cfqg_stats_update_io_add(RQ_CFQG(rq), cfqq->cfqd->serving_group, - rq->cmd_flags); -} - -static struct request * -cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio) -{ - struct task_struct *tsk = current; - struct cfq_io_cq *cic; - struct cfq_queue *cfqq; - - cic = cfq_cic_lookup(cfqd, tsk->io_context); - if (!cic) - return NULL; - - cfqq = cic_to_cfqq(cic, op_is_sync(bio->bi_opf)); - if (cfqq) - return elv_rb_find(&cfqq->sort_list, bio_end_sector(bio)); - - return NULL; -} - -static void cfq_activate_request(struct request_queue *q, struct request *rq) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - - cfqd->rq_in_driver++; - cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "activate rq, drv=%d", - cfqd->rq_in_driver); - - cfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq); -} - -static void cfq_deactivate_request(struct request_queue *q, struct request *rq) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - - WARN_ON(!cfqd->rq_in_driver); - cfqd->rq_in_driver--; - cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "deactivate rq, drv=%d", - cfqd->rq_in_driver); -} - -static void cfq_remove_request(struct request *rq) -{ - struct cfq_queue *cfqq = RQ_CFQQ(rq); - - if (cfqq->next_rq == rq) - cfqq->next_rq = cfq_find_next_rq(cfqq->cfqd, cfqq, rq); - - list_del_init(&rq->queuelist); - cfq_del_rq_rb(rq); - - cfqq->cfqd->rq_queued--; - cfqg_stats_update_io_remove(RQ_CFQG(rq), rq->cmd_flags); - if (rq->cmd_flags & REQ_PRIO) { - WARN_ON(!cfqq->prio_pending); - cfqq->prio_pending--; - } -} - -static enum elv_merge cfq_merge(struct 
request_queue *q, struct request **req, - struct bio *bio) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - struct request *__rq; - - __rq = cfq_find_rq_fmerge(cfqd, bio); - if (__rq && elv_bio_merge_ok(__rq, bio)) { - *req = __rq; - return ELEVATOR_FRONT_MERGE; - } - - return ELEVATOR_NO_MERGE; -} - -static void cfq_merged_request(struct request_queue *q, struct request *req, - enum elv_merge type) -{ - if (type == ELEVATOR_FRONT_MERGE) { - struct cfq_queue *cfqq = RQ_CFQQ(req); - - cfq_reposition_rq_rb(cfqq, req); - } -} - -static void cfq_bio_merged(struct request_queue *q, struct request *req, - struct bio *bio) -{ - cfqg_stats_update_io_merged(RQ_CFQG(req), bio->bi_opf); -} - -static void -cfq_merged_requests(struct request_queue *q, struct request *rq, - struct request *next) -{ - struct cfq_queue *cfqq = RQ_CFQQ(rq); - struct cfq_data *cfqd = q->elevator->elevator_data; - - /* - * reposition in fifo if next is older than rq - */ - if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) && - next->fifo_time < rq->fifo_time && - cfqq == RQ_CFQQ(next)) { - list_move(&rq->queuelist, &next->queuelist); - rq->fifo_time = next->fifo_time; - } - - if (cfqq->next_rq == next) - cfqq->next_rq = rq; - cfq_remove_request(next); - cfqg_stats_update_io_merged(RQ_CFQG(rq), next->cmd_flags); - - cfqq = RQ_CFQQ(next); - /* - * all requests of this queue are merged to other queues, delete it - * from the service tree. If it's the active_queue, - * cfq_dispatch_requests() will choose to expire it or do idle - */ - if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list) && - cfqq != cfqd->active_queue) - cfq_del_cfqq_rr(cfqd, cfqq); -} - -static int cfq_allow_bio_merge(struct request_queue *q, struct request *rq, - struct bio *bio) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - bool is_sync = op_is_sync(bio->bi_opf); - struct cfq_io_cq *cic; - struct cfq_queue *cfqq; - - /* - * Disallow merge of a sync bio into an async request. - */ - if (is_sync && !rq_is_sync(rq)) - return false; - - /* - * Lookup the cfqq that this bio will be queued with and allow - * merge only if rq is queued there. 
- */ - cic = cfq_cic_lookup(cfqd, current->io_context); - if (!cic) - return false; - - cfqq = cic_to_cfqq(cic, is_sync); - return cfqq == RQ_CFQQ(rq); -} - -static int cfq_allow_rq_merge(struct request_queue *q, struct request *rq, - struct request *next) -{ - return RQ_CFQQ(rq) == RQ_CFQQ(next); -} - -static inline void cfq_del_timer(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - hrtimer_try_to_cancel(&cfqd->idle_slice_timer); - cfqg_stats_update_idle_time(cfqq->cfqg); -} - -static void __cfq_set_active_queue(struct cfq_data *cfqd, - struct cfq_queue *cfqq) -{ - if (cfqq) { - cfq_log_cfqq(cfqd, cfqq, "set_active wl_class:%d wl_type:%d", - cfqd->serving_wl_class, cfqd->serving_wl_type); - cfqg_stats_update_avg_queue_size(cfqq->cfqg); - cfqq->slice_start = 0; - cfqq->dispatch_start = ktime_get_ns(); - cfqq->allocated_slice = 0; - cfqq->slice_end = 0; - cfqq->slice_dispatch = 0; - cfqq->nr_sectors = 0; - - cfq_clear_cfqq_wait_request(cfqq); - cfq_clear_cfqq_must_dispatch(cfqq); - cfq_clear_cfqq_must_alloc_slice(cfqq); - cfq_clear_cfqq_fifo_expire(cfqq); - cfq_mark_cfqq_slice_new(cfqq); - - cfq_del_timer(cfqd, cfqq); - } - - cfqd->active_queue = cfqq; -} - -/* - * current cfqq expired its slice (or was too idle), select new one - */ -static void -__cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq, - bool timed_out) -{ - cfq_log_cfqq(cfqd, cfqq, "slice expired t=%d", timed_out); - - if (cfq_cfqq_wait_request(cfqq)) - cfq_del_timer(cfqd, cfqq); - - cfq_clear_cfqq_wait_request(cfqq); - cfq_clear_cfqq_wait_busy(cfqq); - - /* - * If this cfqq is shared between multiple processes, check to - * make sure that those processes are still issuing I/Os within - * the mean seek distance. If not, it may be time to break the - * queues apart again. - */ - if (cfq_cfqq_coop(cfqq) && CFQQ_SEEKY(cfqq)) - cfq_mark_cfqq_split_coop(cfqq); - - /* - * store what was left of this slice, if the queue idled/timed out - */ - if (timed_out) { - if (cfq_cfqq_slice_new(cfqq)) - cfqq->slice_resid = cfq_scaled_cfqq_slice(cfqd, cfqq); - else - cfqq->slice_resid = cfqq->slice_end - ktime_get_ns(); - cfq_log_cfqq(cfqd, cfqq, "resid=%lld", cfqq->slice_resid); - } - - cfq_group_served(cfqd, cfqq->cfqg, cfqq); - - if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list)) - cfq_del_cfqq_rr(cfqd, cfqq); - - cfq_resort_rr_list(cfqd, cfqq); - - if (cfqq == cfqd->active_queue) - cfqd->active_queue = NULL; - - if (cfqd->active_cic) { - put_io_context(cfqd->active_cic->icq.ioc); - cfqd->active_cic = NULL; - } -} - -static inline void cfq_slice_expired(struct cfq_data *cfqd, bool timed_out) -{ - struct cfq_queue *cfqq = cfqd->active_queue; - - if (cfqq) - __cfq_slice_expired(cfqd, cfqq, timed_out); -} - -/* - * Get next queue for service. Unless we have a queue preemption, - * we'll simply select the first cfqq in the service tree. 
- */ -static struct cfq_queue *cfq_get_next_queue(struct cfq_data *cfqd) -{ - struct cfq_rb_root *st = st_for(cfqd->serving_group, - cfqd->serving_wl_class, cfqd->serving_wl_type); - - if (!cfqd->rq_queued) - return NULL; - - /* There is nothing to dispatch */ - if (!st) - return NULL; - if (RB_EMPTY_ROOT(&st->rb.rb_root)) - return NULL; - return cfq_rb_first(st); -} - -static struct cfq_queue *cfq_get_next_queue_forced(struct cfq_data *cfqd) -{ - struct cfq_group *cfqg; - struct cfq_queue *cfqq; - int i, j; - struct cfq_rb_root *st; - - if (!cfqd->rq_queued) - return NULL; - - cfqg = cfq_get_next_cfqg(cfqd); - if (!cfqg) - return NULL; - - for_each_cfqg_st(cfqg, i, j, st) { - cfqq = cfq_rb_first(st); - if (cfqq) - return cfqq; - } - return NULL; -} - -/* - * Get and set a new active queue for service. - */ -static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd, - struct cfq_queue *cfqq) -{ - if (!cfqq) - cfqq = cfq_get_next_queue(cfqd); - - __cfq_set_active_queue(cfqd, cfqq); - return cfqq; -} - -static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd, - struct request *rq) -{ - if (blk_rq_pos(rq) >= cfqd->last_position) - return blk_rq_pos(rq) - cfqd->last_position; - else - return cfqd->last_position - blk_rq_pos(rq); -} - -static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq, - struct request *rq) -{ - return cfq_dist_from_last(cfqd, rq) <= CFQQ_CLOSE_THR; -} - -static struct cfq_queue *cfqq_close(struct cfq_data *cfqd, - struct cfq_queue *cur_cfqq) -{ - struct rb_root *root = &cfqd->prio_trees[cur_cfqq->org_ioprio]; - struct rb_node *parent, *node; - struct cfq_queue *__cfqq; - sector_t sector = cfqd->last_position; - - if (RB_EMPTY_ROOT(root)) - return NULL; - - /* - * First, if we find a request starting at the end of the last - * request, choose it. - */ - __cfqq = cfq_prio_tree_lookup(cfqd, root, sector, &parent, NULL); - if (__cfqq) - return __cfqq; - - /* - * If the exact sector wasn't found, the parent of the NULL leaf - * will contain the closest sector. - */ - __cfqq = rb_entry(parent, struct cfq_queue, p_node); - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq)) - return __cfqq; - - if (blk_rq_pos(__cfqq->next_rq) < sector) - node = rb_next(&__cfqq->p_node); - else - node = rb_prev(&__cfqq->p_node); - if (!node) - return NULL; - - __cfqq = rb_entry(node, struct cfq_queue, p_node); - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq)) - return __cfqq; - - return NULL; -} - -/* - * cfqd - obvious - * cur_cfqq - passed in so that we don't decide that the current queue is - * closely cooperating with itself. - * - * So, basically we're assuming that cur_cfqq has dispatched at least - * one request, and that cfqd->last_position reflects a position on the disk - * associated with the I/O issued by cur_cfqq. I'm not sure this is a valid - * assumption. - */ -static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd, - struct cfq_queue *cur_cfqq) -{ - struct cfq_queue *cfqq; - - if (cfq_class_idle(cur_cfqq)) - return NULL; - if (!cfq_cfqq_sync(cur_cfqq)) - return NULL; - if (CFQQ_SEEKY(cur_cfqq)) - return NULL; - - /* - * Don't search priority tree if it's the only queue in the group. - */ - if (cur_cfqq->cfqg->nr_cfqq == 1) - return NULL; - - /* - * We should notice if some of the queues are cooperating, e.g. - * working closely on the same area of the disk. In that case, - * we can group them together and don't waste time idling. 
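One gloss on the proximity test driving cfqq_close() above: a candidate queue counts as close when its next request lands within CFQQ_CLOSE_THR sectors of the last dispatched position, in either direction. A self-contained restatement (the 8192-sector threshold here is an assumption standing in for CFQQ_CLOSE_THR, which is defined earlier in this file):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t sector_t;

	#define SKETCH_CLOSE_THR ((sector_t)8192)	/* assumed value */

	/* mirrors cfq_dist_from_last() + cfq_rq_close(): absolute sector
	 * distance compared against a fixed threshold */
	static bool sketch_rq_close(sector_t last_pos, sector_t rq_pos)
	{
		sector_t dist = rq_pos >= last_pos ? rq_pos - last_pos
						   : last_pos - rq_pos;
		return dist <= SKETCH_CLOSE_THR;
	}

	int main(void)
	{
		/* 4096 sectors apart -> close; 20000 apart -> not close */
		printf("%d %d\n", sketch_rq_close(100000, 104096),
		       sketch_rq_close(100000, 120000));
		return 0;
	}
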
- */ - cfqq = cfqq_close(cfqd, cur_cfqq); - if (!cfqq) - return NULL; - - /* If new queue belongs to different cfq_group, don't choose it */ - if (cur_cfqq->cfqg != cfqq->cfqg) - return NULL; - - /* - * It only makes sense to merge sync queues. - */ - if (!cfq_cfqq_sync(cfqq)) - return NULL; - if (CFQQ_SEEKY(cfqq)) - return NULL; - - /* - * Do not merge queues of different priority classes - */ - if (cfq_class_rt(cfqq) != cfq_class_rt(cur_cfqq)) - return NULL; - - return cfqq; -} - -/* - * Determine whether we should enforce idle window for this queue. - */ - -static bool cfq_should_idle(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - enum wl_class_t wl_class = cfqq_class(cfqq); - struct cfq_rb_root *st = cfqq->service_tree; - - BUG_ON(!st); - BUG_ON(!st->count); - - if (!cfqd->cfq_slice_idle) - return false; - - /* We never do for idle class queues. */ - if (wl_class == IDLE_WORKLOAD) - return false; - - /* We do for queues that were marked with idle window flag. */ - if (cfq_cfqq_idle_window(cfqq) && - !(blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag)) - return true; - - /* - * Otherwise, we do only if they are the last ones - * in their service tree. - */ - if (st->count == 1 && cfq_cfqq_sync(cfqq) && - !cfq_io_thinktime_big(cfqd, &st->ttime, false)) - return true; - cfq_log_cfqq(cfqd, cfqq, "Not idling. st->count:%d", st->count); - return false; -} - -static void cfq_arm_slice_timer(struct cfq_data *cfqd) -{ - struct cfq_queue *cfqq = cfqd->active_queue; - struct cfq_rb_root *st = cfqq->service_tree; - struct cfq_io_cq *cic; - u64 sl, group_idle = 0; - u64 now = ktime_get_ns(); - - /* - * SSD device without seek penalty, disable idling. But only do so - * for devices that support queuing, otherwise we still have a problem - * with sync vs async workloads. - */ - if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag && - !cfqd->cfq_group_idle) - return; - - WARN_ON(!RB_EMPTY_ROOT(&cfqq->sort_list)); - WARN_ON(cfq_cfqq_slice_new(cfqq)); - - /* - * idle is disabled, either manually or by past process history - */ - if (!cfq_should_idle(cfqd, cfqq)) { - /* no queue idling. Check for group idling */ - if (cfqd->cfq_group_idle) - group_idle = cfqd->cfq_group_idle; - else - return; - } - - /* - * still active requests from this queue, don't idle - */ - if (cfqq->dispatched) - return; - - /* - * task has exited, don't wait - */ - cic = cfqd->active_cic; - if (!cic || !atomic_read(&cic->icq.ioc->active_ref)) - return; - - /* - * If our average think time is larger than the remaining time - * slice, then don't idle. This avoids overrunning the allotted - * time slice. - */ - if (sample_valid(cic->ttime.ttime_samples) && - (cfqq->slice_end - now < cic->ttime.ttime_mean)) { - cfq_log_cfqq(cfqd, cfqq, "Not idling. think_time:%llu", - cic->ttime.ttime_mean); - return; - } - - /* - * There are other queues in the group or this is the only group and - * it has too big thinktime, don't do group idle. - */ - if (group_idle && - (cfqq->cfqg->nr_cfqq > 1 || - cfq_io_thinktime_big(cfqd, &st->ttime, true))) - return; - - cfq_mark_cfqq_wait_request(cfqq); - - if (group_idle) - sl = cfqd->cfq_group_idle; - else - sl = cfqd->cfq_slice_idle; - - hrtimer_start(&cfqd->idle_slice_timer, ns_to_ktime(sl), - HRTIMER_MODE_REL); - cfqg_stats_set_start_idle_time(cfqq->cfqg); - cfq_log_cfqq(cfqd, cfqq, "arm_idle: %llu group_idle: %d", sl, - group_idle ? 1 : 0); -} - -/* - * Move request from internal lists to the request queue dispatch list. 
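Before moving on, it is worth making the idling trade-off in cfq_arm_slice_timer() above concrete: idling only pays off when the task's mean think time fits inside what is left of the slice. A hedged sketch of just that comparison (the names echo the fields used above, but this is illustrative userspace C, not kernel code):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* worth idling only if the next request is expected to arrive
	 * before the slice runs out */
	static bool sketch_worth_idling(uint64_t slice_end, uint64_t now,
					uint64_t ttime_mean)
	{
		return slice_end - now >= ttime_mean;
	}

	int main(void)
	{
		/* 2ms of slice left vs a 5ms mean think time -> don't idle */
		printf("%d\n", sketch_worth_idling(10000000, 8000000, 5000000));
		return 0;
	}

This prints 0, which corresponds to the "Not idling. think_time" branch above.
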
- */ -static void cfq_dispatch_insert(struct request_queue *q, struct request *rq) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - struct cfq_queue *cfqq = RQ_CFQQ(rq); - - cfq_log_cfqq(cfqd, cfqq, "dispatch_insert"); - - cfqq->next_rq = cfq_find_next_rq(cfqd, cfqq, rq); - cfq_remove_request(rq); - cfqq->dispatched++; - (RQ_CFQG(rq))->dispatched++; - elv_dispatch_sort(q, rq); - - cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]++; - cfqq->nr_sectors += blk_rq_sectors(rq); -} - -/* - * return expired entry, or NULL to just start from scratch in rbtree - */ -static struct request *cfq_check_fifo(struct cfq_queue *cfqq) -{ - struct request *rq = NULL; - - if (cfq_cfqq_fifo_expire(cfqq)) - return NULL; - - cfq_mark_cfqq_fifo_expire(cfqq); - - if (list_empty(&cfqq->fifo)) - return NULL; - - rq = rq_entry_fifo(cfqq->fifo.next); - if (ktime_get_ns() < rq->fifo_time) - rq = NULL; - - return rq; -} - -static inline int -cfq_prio_to_maxrq(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - const int base_rq = cfqd->cfq_slice_async_rq; - - WARN_ON(cfqq->ioprio >= IOPRIO_BE_NR); - - return 2 * base_rq * (IOPRIO_BE_NR - cfqq->ioprio); -} - -/* - * Must be called with the queue_lock held. - */ -static int cfqq_process_refs(struct cfq_queue *cfqq) -{ - int process_refs, io_refs; - - io_refs = cfqq->allocated[READ] + cfqq->allocated[WRITE]; - process_refs = cfqq->ref - io_refs; - BUG_ON(process_refs < 0); - return process_refs; -} - -static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq) -{ - int process_refs, new_process_refs; - struct cfq_queue *__cfqq; - - /* - * If there are no process references on the new_cfqq, then it is - * unsafe to follow the ->new_cfqq chain as other cfqq's in the - * chain may have dropped their last reference (not just their - * last process reference). - */ - if (!cfqq_process_refs(new_cfqq)) - return; - - /* Avoid a circular list and skip interim queue merges */ - while ((__cfqq = new_cfqq->new_cfqq)) { - if (__cfqq == cfqq) - return; - new_cfqq = __cfqq; - } - - process_refs = cfqq_process_refs(cfqq); - new_process_refs = cfqq_process_refs(new_cfqq); - /* - * If the process for the cfqq has gone away, there is no - * sense in merging the queues. - */ - if (process_refs == 0 || new_process_refs == 0) - return; - - /* - * Merge in the direction of the lesser amount of work. - */ - if (new_process_refs >= process_refs) { - cfqq->new_cfqq = new_cfqq; - new_cfqq->ref += process_refs; - } else { - new_cfqq->new_cfqq = cfqq; - cfqq->ref += new_process_refs; - } -} - -static enum wl_type_t cfq_choose_wl_type(struct cfq_data *cfqd, - struct cfq_group *cfqg, enum wl_class_t wl_class) -{ - struct cfq_queue *queue; - int i; - bool key_valid = false; - u64 lowest_key = 0; - enum wl_type_t cur_best = SYNC_NOIDLE_WORKLOAD; - - for (i = 0; i <= SYNC_WORKLOAD; ++i) { - /* select the one with lowest rb_key */ - queue = cfq_rb_first(st_for(cfqg, wl_class, i)); - if (queue && - (!key_valid || queue->rb_key < lowest_key)) { - lowest_key = queue->rb_key; - cur_best = i; - key_valid = true; - } - } - - return cur_best; -} - -static void -choose_wl_class_and_type(struct cfq_data *cfqd, struct cfq_group *cfqg) -{ - u64 slice; - unsigned count; - struct cfq_rb_root *st; - u64 group_slice; - enum wl_class_t original_class = cfqd->serving_wl_class; - u64 now = ktime_get_ns(); - - /* Choose next priority. 
RT > BE > IDLE */ - if (cfq_group_busy_queues_wl(RT_WORKLOAD, cfqd, cfqg)) - cfqd->serving_wl_class = RT_WORKLOAD; - else if (cfq_group_busy_queues_wl(BE_WORKLOAD, cfqd, cfqg)) - cfqd->serving_wl_class = BE_WORKLOAD; - else { - cfqd->serving_wl_class = IDLE_WORKLOAD; - cfqd->workload_expires = now + jiffies_to_nsecs(1); - return; - } - - if (original_class != cfqd->serving_wl_class) - goto new_workload; - - /* - * For RT and BE, we have to choose also the type - * (SYNC, SYNC_NOIDLE, ASYNC), and to compute a workload - * expiration time - */ - st = st_for(cfqg, cfqd->serving_wl_class, cfqd->serving_wl_type); - count = st->count; - - /* - * check workload expiration, and that we still have other queues ready - */ - if (count && !(now > cfqd->workload_expires)) - return; - -new_workload: - /* otherwise select new workload type */ - cfqd->serving_wl_type = cfq_choose_wl_type(cfqd, cfqg, - cfqd->serving_wl_class); - st = st_for(cfqg, cfqd->serving_wl_class, cfqd->serving_wl_type); - count = st->count; - - /* - * the workload slice is computed as a fraction of target latency - * proportional to the number of queues in that workload, over - * all the queues in the same priority class - */ - group_slice = cfq_group_slice(cfqd, cfqg); - - slice = div_u64(group_slice * count, - max_t(unsigned, cfqg->busy_queues_avg[cfqd->serving_wl_class], - cfq_group_busy_queues_wl(cfqd->serving_wl_class, cfqd, - cfqg))); - - if (cfqd->serving_wl_type == ASYNC_WORKLOAD) { - u64 tmp; - - /* - * Async queues are currently system wide. Just taking the - * proportion of queues within the same group will lead to a higher - * async ratio system wide as generally the root group is going - * to have higher weight. A more accurate thing would be to - * calculate the system wide async/sync ratio. - */ - tmp = cfqd->cfq_target_latency * - cfqg_busy_async_queues(cfqd, cfqg); - tmp = div_u64(tmp, cfqd->busy_queues); - slice = min_t(u64, slice, tmp); - - /* async workload slice is scaled down according to - * the sync/async slice ratio. */ - slice = div64_u64(slice*cfqd->cfq_slice[0], cfqd->cfq_slice[1]); - } else - /* sync workload slice is at least 2 * cfq_slice_idle */ - slice = max(slice, 2 * cfqd->cfq_slice_idle); - - slice = max_t(u64, slice, CFQ_MIN_TT); - cfq_log(cfqd, "workload slice:%llu", slice); - cfqd->workload_expires = now + slice; -} - -static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd) -{ - struct cfq_rb_root *st = &cfqd->grp_service_tree; - struct cfq_group *cfqg; - - if (RB_EMPTY_ROOT(&st->rb.rb_root)) - return NULL; - cfqg = cfq_rb_first_group(st); - update_min_vdisktime(st); - return cfqg; -} - -static void cfq_choose_cfqg(struct cfq_data *cfqd) -{ - struct cfq_group *cfqg = cfq_get_next_cfqg(cfqd); - u64 now = ktime_get_ns(); - - cfqd->serving_group = cfqg; - - /* Restore the workload type data */ - if (cfqg->saved_wl_slice) { - cfqd->workload_expires = now + cfqg->saved_wl_slice; - cfqd->serving_wl_type = cfqg->saved_wl_type; - cfqd->serving_wl_class = cfqg->saved_wl_class; - } else - cfqd->workload_expires = now - 1; - - choose_wl_class_and_type(cfqd, cfqg); -} - -/* - * Select a queue for service. If we have a current active queue, - * check whether to continue servicing it, or retrieve and set a new one. - */ -static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd) -{ - struct cfq_queue *cfqq, *new_cfqq = NULL; - u64 now = ktime_get_ns(); - - cfqq = cfqd->active_queue; - if (!cfqq) - goto new_queue; - - if (!cfqd->rq_queued) - return NULL; - - /* - * We were waiting for group to get backlogged. 
Expire the queue - */ - if (cfq_cfqq_wait_busy(cfqq) && !RB_EMPTY_ROOT(&cfqq->sort_list)) - goto expire; - - /* - * The active queue has run out of time, expire it and select new. - */ - if (cfq_slice_used(cfqq) && !cfq_cfqq_must_dispatch(cfqq)) { - /* - * If slice had not expired at the completion of last request - * we might not have turned on wait_busy flag. Don't expire - * the queue yet. Allow the group to get backlogged. - * - * The very fact that we have used the slice, that means we - * have been idling all along on this queue and it should be - * ok to wait for this request to complete. - */ - if (cfqq->cfqg->nr_cfqq == 1 && RB_EMPTY_ROOT(&cfqq->sort_list) - && cfqq->dispatched && cfq_should_idle(cfqd, cfqq)) { - cfqq = NULL; - goto keep_queue; - } else - goto check_group_idle; - } - - /* - * The active queue has requests and isn't expired, allow it to - * dispatch. - */ - if (!RB_EMPTY_ROOT(&cfqq->sort_list)) - goto keep_queue; - - /* - * If another queue has a request waiting within our mean seek - * distance, let it run. The expire code will check for close - * cooperators and put the close queue at the front of the service - * tree. If possible, merge the expiring queue with the new cfqq. - */ - new_cfqq = cfq_close_cooperator(cfqd, cfqq); - if (new_cfqq) { - if (!cfqq->new_cfqq) - cfq_setup_merge(cfqq, new_cfqq); - goto expire; - } - - /* - * No requests pending. If the active queue still has requests in - * flight or is idling for a new request, allow either of these - * conditions to happen (or time out) before selecting a new queue. - */ - if (hrtimer_active(&cfqd->idle_slice_timer)) { - cfqq = NULL; - goto keep_queue; - } - - /* - * This is a deep seek queue, but the device is much faster than - * the queue can deliver, don't idle - **/ - if (CFQQ_SEEKY(cfqq) && cfq_cfqq_idle_window(cfqq) && - (cfq_cfqq_slice_new(cfqq) || - (cfqq->slice_end - now > now - cfqq->slice_start))) { - cfq_clear_cfqq_deep(cfqq); - cfq_clear_cfqq_idle_window(cfqq); - } - - if (cfqq->dispatched && cfq_should_idle(cfqd, cfqq)) { - cfqq = NULL; - goto keep_queue; - } - - /* - * If group idle is enabled and there are requests dispatched from - * this group, wait for requests to complete. - */ -check_group_idle: - if (cfqd->cfq_group_idle && cfqq->cfqg->nr_cfqq == 1 && - cfqq->cfqg->dispatched && - !cfq_io_thinktime_big(cfqd, &cfqq->cfqg->ttime, true)) { - cfqq = NULL; - goto keep_queue; - } - -expire: - cfq_slice_expired(cfqd, 0); -new_queue: - /* - * Current queue expired. Check if we have to switch to a new - * service tree - */ - if (!new_cfqq) - cfq_choose_cfqg(cfqd); - - cfqq = cfq_set_active_queue(cfqd, new_cfqq); -keep_queue: - return cfqq; -} - -static int __cfq_forced_dispatch_cfqq(struct cfq_queue *cfqq) -{ - int dispatched = 0; - - while (cfqq->next_rq) { - cfq_dispatch_insert(cfqq->cfqd->queue, cfqq->next_rq); - dispatched++; - } - - BUG_ON(!list_empty(&cfqq->fifo)); - - /* By default cfqq is not expired if it is empty. Do it explicitly */ - __cfq_slice_expired(cfqq->cfqd, cfqq, 0); - return dispatched; -} - -/* - * Drain our current requests. Used for barriers and when switching - * io schedulers on-the-fly. 
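Backing up to the workload-slice arithmetic in choose_wl_class_and_type() above: the chosen workload gets a share of the group slice proportional to its queue count relative to the busy queues of the class. A worked example with invented numbers (userspace C; the real code additionally scales async slices by the sync/async ratio and clamps the result to CFQ_MIN_TT):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t group_slice = 100000000ULL;	/* pretend 100ms group slice */
		unsigned int count = 3;		/* queues in the chosen workload */
		unsigned int busy = 5;		/* busy queues in the class */

		/* slice = group_slice * count / busy: 3 of 5 queues -> 60ms */
		printf("workload slice: %llu ns\n",
		       (unsigned long long)(group_slice * count / busy));
		return 0;
	}
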
- */ -static int cfq_forced_dispatch(struct cfq_data *cfqd) -{ - struct cfq_queue *cfqq; - int dispatched = 0; - - /* Expire the timeslice of the current active queue first */ - cfq_slice_expired(cfqd, 0); - while ((cfqq = cfq_get_next_queue_forced(cfqd)) != NULL) { - __cfq_set_active_queue(cfqd, cfqq); - dispatched += __cfq_forced_dispatch_cfqq(cfqq); - } - - BUG_ON(cfqd->busy_queues); - - cfq_log(cfqd, "forced_dispatch=%d", dispatched); - return dispatched; -} - -static inline bool cfq_slice_used_soon(struct cfq_data *cfqd, - struct cfq_queue *cfqq) -{ - u64 now = ktime_get_ns(); - - /* the queue hasn't finished any request, can't estimate */ - if (cfq_cfqq_slice_new(cfqq)) - return true; - if (now + cfqd->cfq_slice_idle * cfqq->dispatched > cfqq->slice_end) - return true; - - return false; -} - -static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - unsigned int max_dispatch; - - if (cfq_cfqq_must_dispatch(cfqq)) - return true; - - /* - * Drain async requests before we start sync IO - */ - if (cfq_should_idle(cfqd, cfqq) && cfqd->rq_in_flight[BLK_RW_ASYNC]) - return false; - - /* - * If this is an async queue and we have sync IO in flight, let it wait - */ - if (cfqd->rq_in_flight[BLK_RW_SYNC] && !cfq_cfqq_sync(cfqq)) - return false; - - max_dispatch = max_t(unsigned int, cfqd->cfq_quantum / 2, 1); - if (cfq_class_idle(cfqq)) - max_dispatch = 1; - - /* - * Does this cfqq already have too much IO in flight? - */ - if (cfqq->dispatched >= max_dispatch) { - bool promote_sync = false; - /* - * idle queue must always only have a single IO in flight - */ - if (cfq_class_idle(cfqq)) - return false; - - /* - * If there is only one sync queue - * we can ignore async queue here and give the sync - * queue no dispatch limit. The reason is a sync queue can - * preempt async queue, limiting the sync queue doesn't make - * sense. This is useful for aiostress test. - */ - if (cfq_cfqq_sync(cfqq) && cfqd->busy_sync_queues == 1) - promote_sync = true; - - /* - * We have other queues, don't allow more IO from this one - */ - if (cfqd->busy_queues > 1 && cfq_slice_used_soon(cfqd, cfqq) && - !promote_sync) - return false; - - /* - * Sole queue user, no limit - */ - if (cfqd->busy_queues == 1 || promote_sync) - max_dispatch = -1; - else - /* - * Normally we start throttling cfqq when cfq_quantum/2 - * requests have been dispatched. But we can drive - * deeper queue depths at the beginning of slice - * subject to the upper limit of cfq_quantum. - * */ - max_dispatch = cfqd->cfq_quantum; - } - - /* - * Async queues must wait a bit before being allowed dispatch. - * We also ramp up the dispatch depth gradually for async IO, - * based on the last sync IO we serviced - */ - if (!cfq_cfqq_sync(cfqq) && cfqd->cfq_latency) { - u64 last_sync = ktime_get_ns() - cfqd->last_delayed_sync; - unsigned int depth; - - depth = div64_u64(last_sync, cfqd->cfq_slice[1]); - if (!depth && !cfqq->dispatched) - depth = 1; - if (depth < max_dispatch) - max_dispatch = depth; - } - - /* - * If we're below the current max, allow a dispatch - */ - return cfqq->dispatched < max_dispatch; -} - -/* - * Dispatch a request from cfqq, moving it to the request queue - * dispatch list. 
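Similarly, the async ramp-up at the end of cfq_may_dispatch() above can be read as "one extra dispatch per elapsed sync slice": the allowed depth is the time since the last delayed sync I/O divided by the sync slice, capped at the current max_dispatch. A sketch with invented values (illustrative only):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t slice_sync = 100000000ULL;	/* pretend cfq_slice[1] = 100ms */
		uint64_t since_sync = 250000000ULL;	/* 250ms since last sync I/O */
		unsigned int max_dispatch = 8;

		/* depth = elapsed / sync slice, here 2, which then caps
		 * max_dispatch for this async queue */
		uint64_t depth = since_sync / slice_sync;
		if (depth < max_dispatch)
			max_dispatch = (unsigned int)depth;
		printf("async dispatch depth: %u\n", max_dispatch);
		return 0;
	}
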
- */ -static bool cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - struct request *rq; - - BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list)); - - rq = cfq_check_fifo(cfqq); - if (rq) - cfq_mark_cfqq_must_dispatch(cfqq); - - if (!cfq_may_dispatch(cfqd, cfqq)) - return false; - - /* - * follow expired path, else get first next available - */ - if (!rq) - rq = cfqq->next_rq; - else - cfq_log_cfqq(cfqq->cfqd, cfqq, "fifo=%p", rq); - - /* - * insert request into driver dispatch list - */ - cfq_dispatch_insert(cfqd->queue, rq); - - if (!cfqd->active_cic) { - struct cfq_io_cq *cic = RQ_CIC(rq); - - atomic_long_inc(&cic->icq.ioc->refcount); - cfqd->active_cic = cic; - } - - return true; -} - -/* - * Find the cfqq that we need to service and move a request from that to the - * dispatch list - */ -static int cfq_dispatch_requests(struct request_queue *q, int force) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - struct cfq_queue *cfqq; - - if (!cfqd->busy_queues) - return 0; - - if (unlikely(force)) - return cfq_forced_dispatch(cfqd); - - cfqq = cfq_select_queue(cfqd); - if (!cfqq) - return 0; - - /* - * Dispatch a request from this cfqq, if it is allowed - */ - if (!cfq_dispatch_request(cfqd, cfqq)) - return 0; - - cfqq->slice_dispatch++; - cfq_clear_cfqq_must_dispatch(cfqq); - - /* - * expire an async queue immediately if it has used up its slice. an idle - * queue always expires after 1 dispatch round. - */ - if (cfqd->busy_queues > 1 && ((!cfq_cfqq_sync(cfqq) && - cfqq->slice_dispatch >= cfq_prio_to_maxrq(cfqd, cfqq)) || - cfq_class_idle(cfqq))) { - cfqq->slice_end = ktime_get_ns() + 1; - cfq_slice_expired(cfqd, 0); - } - - cfq_log_cfqq(cfqd, cfqq, "dispatched a request"); - return 1; -} - -/* - * task holds one reference to the queue, dropped when task exits. each rq - * in-flight on this queue also holds a reference, dropped when rq is freed. - * - * Each cfq queue took a reference on the parent group. Drop it now. - * queue lock must be held here. - */ -static void cfq_put_queue(struct cfq_queue *cfqq) -{ - struct cfq_data *cfqd = cfqq->cfqd; - struct cfq_group *cfqg; - - BUG_ON(cfqq->ref <= 0); - - cfqq->ref--; - if (cfqq->ref) - return; - - cfq_log_cfqq(cfqd, cfqq, "put_queue"); - BUG_ON(rb_first(&cfqq->sort_list)); - BUG_ON(cfqq->allocated[READ] + cfqq->allocated[WRITE]); - cfqg = cfqq->cfqg; - - if (unlikely(cfqd->active_queue == cfqq)) { - __cfq_slice_expired(cfqd, cfqq, 0); - cfq_schedule_dispatch(cfqd); - } - - BUG_ON(cfq_cfqq_on_rr(cfqq)); - kmem_cache_free(cfq_pool, cfqq); - cfqg_put(cfqg); -} - -static void cfq_put_cooperator(struct cfq_queue *cfqq) -{ - struct cfq_queue *__cfqq, *next; - - /* - * If this queue was scheduled to merge with another queue, be - * sure to drop the reference taken on that queue (and others in - * the merge chain). See cfq_setup_merge and cfq_merge_cfqqs. 
- */ - __cfqq = cfqq->new_cfqq; - while (__cfqq) { - if (__cfqq == cfqq) { - WARN(1, "cfqq->new_cfqq loop detected\n"); - break; - } - next = __cfqq->new_cfqq; - cfq_put_queue(__cfqq); - __cfqq = next; - } -} - -static void cfq_exit_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - if (unlikely(cfqq == cfqd->active_queue)) { - __cfq_slice_expired(cfqd, cfqq, 0); - cfq_schedule_dispatch(cfqd); - } - - cfq_put_cooperator(cfqq); - - cfq_put_queue(cfqq); -} - -static void cfq_init_icq(struct io_cq *icq) -{ - struct cfq_io_cq *cic = icq_to_cic(icq); - - cic->ttime.last_end_request = ktime_get_ns(); -} - -static void cfq_exit_icq(struct io_cq *icq) -{ - struct cfq_io_cq *cic = icq_to_cic(icq); - struct cfq_data *cfqd = cic_to_cfqd(cic); - - if (cic_to_cfqq(cic, false)) { - cfq_exit_cfqq(cfqd, cic_to_cfqq(cic, false)); - cic_set_cfqq(cic, NULL, false); - } - - if (cic_to_cfqq(cic, true)) { - cfq_exit_cfqq(cfqd, cic_to_cfqq(cic, true)); - cic_set_cfqq(cic, NULL, true); - } -} - -static void cfq_init_prio_data(struct cfq_queue *cfqq, struct cfq_io_cq *cic) -{ - struct task_struct *tsk = current; - int ioprio_class; - - if (!cfq_cfqq_prio_changed(cfqq)) - return; - - ioprio_class = IOPRIO_PRIO_CLASS(cic->ioprio); - switch (ioprio_class) { - default: - printk(KERN_ERR "cfq: bad prio %x\n", ioprio_class); - /* fall through */ - case IOPRIO_CLASS_NONE: - /* - * no prio set, inherit CPU scheduling settings - */ - cfqq->ioprio = task_nice_ioprio(tsk); - cfqq->ioprio_class = task_nice_ioclass(tsk); - break; - case IOPRIO_CLASS_RT: - cfqq->ioprio = IOPRIO_PRIO_DATA(cic->ioprio); - cfqq->ioprio_class = IOPRIO_CLASS_RT; - break; - case IOPRIO_CLASS_BE: - cfqq->ioprio = IOPRIO_PRIO_DATA(cic->ioprio); - cfqq->ioprio_class = IOPRIO_CLASS_BE; - break; - case IOPRIO_CLASS_IDLE: - cfqq->ioprio_class = IOPRIO_CLASS_IDLE; - cfqq->ioprio = 7; - cfq_clear_cfqq_idle_window(cfqq); - break; - } - - /* - * keep track of original prio settings in case we have to temporarily - * elevate the priority of this queue - */ - cfqq->org_ioprio = cfqq->ioprio; - cfqq->org_ioprio_class = cfqq->ioprio_class; - cfq_clear_cfqq_prio_changed(cfqq); -} - -static void check_ioprio_changed(struct cfq_io_cq *cic, struct bio *bio) -{ - int ioprio = cic->icq.ioc->ioprio; - struct cfq_data *cfqd = cic_to_cfqd(cic); - struct cfq_queue *cfqq; - - /* - * Check whether ioprio has changed. The condition may trigger - * spuriously on a newly created cic but there's no harm. 
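For readers decoding the IOPRIO_PRIO_CLASS()/IOPRIO_PRIO_DATA() calls in cfq_init_prio_data() above: an io priority packs the scheduling class into the upper bits and the priority level into the lower ones. A userspace re-derivation (the 13-bit shift matches include/linux/ioprio.h of this vintage, but treat the constant as an assumption):

	#include <stdio.h>

	#define SK_CLASS_SHIFT 13	/* assumed IOPRIO_CLASS_SHIFT */
	#define SK_VALUE(class, data) (((class) << SK_CLASS_SHIFT) | (data))
	#define SK_CLASS(v) ((v) >> SK_CLASS_SHIFT)
	#define SK_DATA(v) ((v) & ((1 << SK_CLASS_SHIFT) - 1))

	int main(void)
	{
		int v = SK_VALUE(2, 4);	/* class 2 = best-effort, level 4 */
		printf("class=%d data=%d\n", SK_CLASS(v), SK_DATA(v));
		return 0;
	}
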
- */ - if (unlikely(!cfqd) || likely(cic->ioprio == ioprio)) - return; - - cfqq = cic_to_cfqq(cic, false); - if (cfqq) { - cfq_put_queue(cfqq); - cfqq = cfq_get_queue(cfqd, BLK_RW_ASYNC, cic, bio); - cic_set_cfqq(cic, cfqq, false); - } - - cfqq = cic_to_cfqq(cic, true); - if (cfqq) - cfq_mark_cfqq_prio_changed(cfqq); - - cic->ioprio = ioprio; -} - -static void cfq_init_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq, - pid_t pid, bool is_sync) -{ - RB_CLEAR_NODE(&cfqq->rb_node); - RB_CLEAR_NODE(&cfqq->p_node); - INIT_LIST_HEAD(&cfqq->fifo); - - cfqq->ref = 0; - cfqq->cfqd = cfqd; - - cfq_mark_cfqq_prio_changed(cfqq); - - if (is_sync) { - if (!cfq_class_idle(cfqq)) - cfq_mark_cfqq_idle_window(cfqq); - cfq_mark_cfqq_sync(cfqq); - } - cfqq->pid = pid; -} - -#ifdef CONFIG_CFQ_GROUP_IOSCHED -static void check_blkcg_changed(struct cfq_io_cq *cic, struct bio *bio) -{ - struct cfq_data *cfqd = cic_to_cfqd(cic); - struct cfq_queue *cfqq; - uint64_t serial_nr; - - rcu_read_lock(); - serial_nr = bio_blkcg(bio)->css.serial_nr; - rcu_read_unlock(); - - /* - * Check whether blkcg has changed. The condition may trigger - * spuriously on a newly created cic but there's no harm. - */ - if (unlikely(!cfqd) || likely(cic->blkcg_serial_nr == serial_nr)) - return; - - /* - * Drop reference to queues. New queues will be assigned in new - * group upon arrival of fresh requests. - */ - cfqq = cic_to_cfqq(cic, false); - if (cfqq) { - cfq_log_cfqq(cfqd, cfqq, "changed cgroup"); - cic_set_cfqq(cic, NULL, false); - cfq_put_queue(cfqq); - } - - cfqq = cic_to_cfqq(cic, true); - if (cfqq) { - cfq_log_cfqq(cfqd, cfqq, "changed cgroup"); - cic_set_cfqq(cic, NULL, true); - cfq_put_queue(cfqq); - } - - cic->blkcg_serial_nr = serial_nr; -} -#else -static inline void check_blkcg_changed(struct cfq_io_cq *cic, struct bio *bio) -{ -} -#endif /* CONFIG_CFQ_GROUP_IOSCHED */ - -static struct cfq_queue ** -cfq_async_queue_prio(struct cfq_group *cfqg, int ioprio_class, int ioprio) -{ - switch (ioprio_class) { - case IOPRIO_CLASS_RT: - return &cfqg->async_cfqq[0][ioprio]; - case IOPRIO_CLASS_NONE: - ioprio = IOPRIO_NORM; - /* fall through */ - case IOPRIO_CLASS_BE: - return &cfqg->async_cfqq[1][ioprio]; - case IOPRIO_CLASS_IDLE: - return &cfqg->async_idle_cfqq; - default: - BUG(); - } -} - -static struct cfq_queue * -cfq_get_queue(struct cfq_data *cfqd, bool is_sync, struct cfq_io_cq *cic, - struct bio *bio) -{ - int ioprio_class = IOPRIO_PRIO_CLASS(cic->ioprio); - int ioprio = IOPRIO_PRIO_DATA(cic->ioprio); - struct cfq_queue **async_cfqq = NULL; - struct cfq_queue *cfqq; - struct cfq_group *cfqg; - - rcu_read_lock(); - cfqg = cfq_lookup_cfqg(cfqd, bio_blkcg(bio)); - if (!cfqg) { - cfqq = &cfqd->oom_cfqq; - goto out; - } - - if (!is_sync) { - if (!ioprio_valid(cic->ioprio)) { - struct task_struct *tsk = current; - ioprio = task_nice_ioprio(tsk); - ioprio_class = task_nice_ioclass(tsk); - } - async_cfqq = cfq_async_queue_prio(cfqg, ioprio_class, ioprio); - cfqq = *async_cfqq; - if (cfqq) - goto out; - } - - cfqq = kmem_cache_alloc_node(cfq_pool, - GFP_NOWAIT | __GFP_ZERO | __GFP_NOWARN, - cfqd->queue->node); - if (!cfqq) { - cfqq = &cfqd->oom_cfqq; - goto out; - } - - /* cfq_init_cfqq() assumes cfqq->ioprio_class is initialized. 
*/ - cfqq->ioprio_class = IOPRIO_CLASS_NONE; - cfq_init_cfqq(cfqd, cfqq, current->pid, is_sync); - cfq_init_prio_data(cfqq, cic); - cfq_link_cfqq_cfqg(cfqq, cfqg); - cfq_log_cfqq(cfqd, cfqq, "alloced"); - - if (async_cfqq) { - /* a new async queue is created, pin and remember */ - cfqq->ref++; - *async_cfqq = cfqq; - } -out: - cfqq->ref++; - rcu_read_unlock(); - return cfqq; -} - -static void -__cfq_update_io_thinktime(struct cfq_ttime *ttime, u64 slice_idle) -{ - u64 elapsed = ktime_get_ns() - ttime->last_end_request; - elapsed = min(elapsed, 2UL * slice_idle); - - ttime->ttime_samples = (7*ttime->ttime_samples + 256) / 8; - ttime->ttime_total = div_u64(7*ttime->ttime_total + 256*elapsed, 8); - ttime->ttime_mean = div64_ul(ttime->ttime_total + 128, - ttime->ttime_samples); -} - -static void -cfq_update_io_thinktime(struct cfq_data *cfqd, struct cfq_queue *cfqq, - struct cfq_io_cq *cic) -{ - if (cfq_cfqq_sync(cfqq)) { - __cfq_update_io_thinktime(&cic->ttime, cfqd->cfq_slice_idle); - __cfq_update_io_thinktime(&cfqq->service_tree->ttime, - cfqd->cfq_slice_idle); - } -#ifdef CONFIG_CFQ_GROUP_IOSCHED - __cfq_update_io_thinktime(&cfqq->cfqg->ttime, cfqd->cfq_group_idle); -#endif -} - -static void -cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_queue *cfqq, - struct request *rq) -{ - sector_t sdist = 0; - sector_t n_sec = blk_rq_sectors(rq); - if (cfqq->last_request_pos) { - if (cfqq->last_request_pos < blk_rq_pos(rq)) - sdist = blk_rq_pos(rq) - cfqq->last_request_pos; - else - sdist = cfqq->last_request_pos - blk_rq_pos(rq); - } - - cfqq->seek_history <<= 1; - if (blk_queue_nonrot(cfqd->queue)) - cfqq->seek_history |= (n_sec < CFQQ_SECT_THR_NONROT); - else - cfqq->seek_history |= (sdist > CFQQ_SEEK_THR); -} - -static inline bool req_noidle(struct request *req) -{ - return req_op(req) == REQ_OP_WRITE && - (req->cmd_flags & (REQ_SYNC | REQ_IDLE)) == REQ_SYNC; -} - -/* - * Disable idle window if the process thinks too long or seeks so much that - * it doesn't matter - */ -static void -cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq, - struct cfq_io_cq *cic) -{ - int old_idle, enable_idle; - - /* - * Don't idle for async or idle io prio class - */ - if (!cfq_cfqq_sync(cfqq) || cfq_class_idle(cfqq)) - return; - - enable_idle = old_idle = cfq_cfqq_idle_window(cfqq); - - if (cfqq->queued[0] + cfqq->queued[1] >= 4) - cfq_mark_cfqq_deep(cfqq); - - if (cfqq->next_rq && req_noidle(cfqq->next_rq)) - enable_idle = 0; - else if (!atomic_read(&cic->icq.ioc->active_ref) || - !cfqd->cfq_slice_idle || - (!cfq_cfqq_deep(cfqq) && CFQQ_SEEKY(cfqq))) - enable_idle = 0; - else if (sample_valid(cic->ttime.ttime_samples)) { - if (cic->ttime.ttime_mean > cfqd->cfq_slice_idle) - enable_idle = 0; - else - enable_idle = 1; - } - - if (old_idle != enable_idle) { - cfq_log_cfqq(cfqd, cfqq, "idle=%d", enable_idle); - if (enable_idle) - cfq_mark_cfqq_idle_window(cfqq); - else - cfq_clear_cfqq_idle_window(cfqq); - } -} - -/* - * Check if new_cfqq should preempt the currently active queue. Return 0 for - * no or if we aren't sure, a 1 will cause a preempt. - */ -static bool -cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq, - struct request *rq) -{ - struct cfq_queue *cfqq; - - cfqq = cfqd->active_queue; - if (!cfqq) - return false; - - if (cfq_class_idle(new_cfqq)) - return false; - - if (cfq_class_idle(cfqq)) - return true; - - /* - * Don't allow a non-RT request to preempt an ongoing RT cfqq timeslice. 
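A closing aside on __cfq_update_io_thinktime() earlier in this hunk: it maintains a 7/8-decaying average in fixed point, where each new sample carries weight 256, so ttime_samples converges toward 256 and ttime_mean tracks total/samples with rounding. The same arithmetic re-run standalone on invented inputs (the kernel version also clamps each elapsed value to twice the idle slice):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t samples = 0, total = 0, mean = 0;
		const uint64_t elapsed[] = { 1000, 3000, 2000 };	/* pretend ns */

		for (int i = 0; i < 3; i++) {
			samples = (7 * samples + 256) / 8;
			total = (7 * total + 256 * elapsed[i]) / 8;
			mean = (total + 128) / samples;
		}
		printf("ttime_mean ~= %llu ns\n", (unsigned long long)mean);
		return 0;
	}

After three samples this prints a mean near 2055 ns, already close to the recent-sample average of 2000 ns.
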
- */ - if (cfq_class_rt(cfqq) && !cfq_class_rt(new_cfqq)) - return false; - - /* - * if the new request is sync, but the currently running queue is - * not, let the sync request have priority. - */ - if (rq_is_sync(rq) && !cfq_cfqq_sync(cfqq) && !cfq_cfqq_must_dispatch(cfqq)) - return true; - - /* - * Treat ancestors of current cgroup the same way as current cgroup. - * For anybody else we disallow preemption to guarantee service - * fairness among cgroups. - */ - if (!cfqg_is_descendant(cfqq->cfqg, new_cfqq->cfqg)) - return false; - - if (cfq_slice_used(cfqq)) - return true; - - /* - * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice. - */ - if (cfq_class_rt(new_cfqq) && !cfq_class_rt(cfqq)) - return true; - - WARN_ON_ONCE(cfqq->ioprio_class != new_cfqq->ioprio_class); - /* Allow preemption only if we are idling on sync-noidle tree */ - if (cfqd->serving_wl_type == SYNC_NOIDLE_WORKLOAD && - cfqq_type(new_cfqq) == SYNC_NOIDLE_WORKLOAD && - RB_EMPTY_ROOT(&cfqq->sort_list)) - return true; - - /* - * So both queues are sync. Let the new request get disk time if - * it's a metadata request and the current queue is doing regular IO. - */ - if ((rq->cmd_flags & REQ_PRIO) && !cfqq->prio_pending) - return true; - - /* An idle queue should not be idle now for some reason */ - if (RB_EMPTY_ROOT(&cfqq->sort_list) && !cfq_should_idle(cfqd, cfqq)) - return true; - - if (!cfqd->active_cic || !cfq_cfqq_wait_request(cfqq)) - return false; - - /* - * if this request is as-good as one we would expect from the - * current cfqq, let it preempt - */ - if (cfq_rq_close(cfqd, cfqq, rq)) - return true; - - return false; -} - -/* - * cfqq preempts the active queue. if we allowed preempt with no slice left, - * let it have half of its nominal slice. - */ -static void cfq_preempt_queue(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - enum wl_type_t old_type = cfqq_type(cfqd->active_queue); - - cfq_log_cfqq(cfqd, cfqq, "preempt"); - cfq_slice_expired(cfqd, 1); - - /* - * workload type is changed, don't save slice, otherwise preempt - * doesn't happen - */ - if (old_type != cfqq_type(cfqq)) - cfqq->cfqg->saved_wl_slice = 0; - - /* - * Put the new queue at the front of the current list, - * so we know that it will be selected next. - */ - BUG_ON(!cfq_cfqq_on_rr(cfqq)); - - cfq_service_tree_add(cfqd, cfqq, 1); - - cfqq->slice_end = 0; - cfq_mark_cfqq_slice_new(cfqq); -} - -/* - * Called when a new fs request (rq) is added (to cfqq). Check if there's - * something we should do about it - */ -static void -cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq, - struct request *rq) -{ - struct cfq_io_cq *cic = RQ_CIC(rq); - - cfqd->rq_queued++; - if (rq->cmd_flags & REQ_PRIO) - cfqq->prio_pending++; - - cfq_update_io_thinktime(cfqd, cfqq, cic); - cfq_update_io_seektime(cfqd, cfqq, rq); - cfq_update_idle_window(cfqd, cfqq, cic); - - cfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq); - - if (cfqq == cfqd->active_queue) { - /* - * Remember that we saw a request from this process, but - * don't start queuing just yet. Otherwise we risk seeing lots - * of tiny requests, because we disrupt the normal plugging - * and merging. If the request is already larger than a single - * page, let it rip immediately. For that case we assume that - * merging is already done. Ditto for a busy system that - * has other work pending, don't risk delaying until the - * idle timer unplug to continue working. 
- if (cfq_cfqq_wait_request(cfqq)) { - if (blk_rq_bytes(rq) > PAGE_SIZE || - cfqd->busy_queues > 1) { - cfq_del_timer(cfqd, cfqq); - cfq_clear_cfqq_wait_request(cfqq); - __blk_run_queue(cfqd->queue); - } else { - cfqg_stats_update_idle_time(cfqq->cfqg); - cfq_mark_cfqq_must_dispatch(cfqq); - } - } - } else if (cfq_should_preempt(cfqd, cfqq, rq)) { - /* - * not the active queue - expire current slice if it is - * idle and has expired its mean thinktime or this new queue - * has some old slice time left and is of higher priority or - * this new queue is RT and the current one is BE - */ - cfq_preempt_queue(cfqd, cfqq); - __blk_run_queue(cfqd->queue); - } -} - -static void cfq_insert_request(struct request_queue *q, struct request *rq) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - struct cfq_queue *cfqq = RQ_CFQQ(rq); - - cfq_log_cfqq(cfqd, cfqq, "insert_request"); - cfq_init_prio_data(cfqq, RQ_CIC(rq)); - - rq->fifo_time = ktime_get_ns() + cfqd->cfq_fifo_expire[rq_is_sync(rq)]; - list_add_tail(&rq->queuelist, &cfqq->fifo); - cfq_add_rq_rb(rq); - cfqg_stats_update_io_add(RQ_CFQG(rq), cfqd->serving_group, - rq->cmd_flags); - cfq_rq_enqueued(cfqd, cfqq, rq); -} - -/* - * Update hw_tag based on peak queue depth over 50 samples under - * sufficient load. - */ -static void cfq_update_hw_tag(struct cfq_data *cfqd) -{ - struct cfq_queue *cfqq = cfqd->active_queue; - - if (cfqd->rq_in_driver > cfqd->hw_tag_est_depth) - cfqd->hw_tag_est_depth = cfqd->rq_in_driver; - - if (cfqd->hw_tag == 1) - return; - - if (cfqd->rq_queued <= CFQ_HW_QUEUE_MIN && - cfqd->rq_in_driver <= CFQ_HW_QUEUE_MIN) - return; - - /* - * If the active queue doesn't have enough requests and can idle, cfq - * might not dispatch sufficient requests to hardware. Don't zero - * hw_tag in this case - */ - if (cfqq && cfq_cfqq_idle_window(cfqq) && - cfqq->dispatched + cfqq->queued[0] + cfqq->queued[1] < - CFQ_HW_QUEUE_MIN && cfqd->rq_in_driver < CFQ_HW_QUEUE_MIN) - return; - - if (cfqd->hw_tag_samples++ < 50) - return; - - if (cfqd->hw_tag_est_depth >= CFQ_HW_QUEUE_MIN) - cfqd->hw_tag = 1; - else - cfqd->hw_tag = 0; -} - -static bool cfq_should_wait_busy(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - struct cfq_io_cq *cic = cfqd->active_cic; - u64 now = ktime_get_ns(); - - /* If the queue already has requests, don't wait */ - if (!RB_EMPTY_ROOT(&cfqq->sort_list)) - return false; - - /* If there are other queues in the group, don't wait */ - if (cfqq->cfqg->nr_cfqq > 1) - return false; - - /* the only queue in the group, but think time is big */ - if (cfq_io_thinktime_big(cfqd, &cfqq->cfqg->ttime, true)) - return false; - - if (cfq_slice_used(cfqq)) - return true; - - /* if slice left is less than think time, wait busy */ - if (cic && sample_valid(cic->ttime.ttime_samples) - && (cfqq->slice_end - now < cic->ttime.ttime_mean)) - return true; - - /* - * If think time is less than a jiffy then ttime_mean=0 and the above - * will not be true. It might happen that the slice has not expired yet - * but will expire soon (4-5 ns) during select_queue(). To cover the - * case where think time is less than a jiffy, mark the queue wait - * busy if only 1 jiffy is left in the slice.
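cfq_update_hw_tag() above infers command queueing empirically: it tracks the peak driver-side depth over 50 completion samples taken under sufficient load, and decides hw_tag once the peak clears a small minimum. Reduced to its core (the constant stands in for CFQ_HW_QUEUE_MIN, and the guard against sampling while the active queue idles is left out):

#include <stdbool.h>

#define HW_QUEUE_MIN    5       /* depth suggesting real command queueing */

struct hw_tag_model {
        int samples;
        int peak_depth;
        int hw_tag;             /* -1 unknown, 0 no queueing, 1 queueing */
};

/* Call once per completion with the current in-driver request count. */
static void hw_tag_sample(struct hw_tag_model *m, int cur_depth)
{
        if (cur_depth > m->peak_depth)
                m->peak_depth = cur_depth;
        if (m->hw_tag == 1)
                return;         /* already decided: hardware queues */
        if (++m->samples < 50)
                return;         /* keep sampling */
        m->hw_tag = m->peak_depth >= HW_QUEUE_MIN;
}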
- */ - if (cfqq->slice_end - now <= jiffies_to_nsecs(1)) - return true; - - return false; -} - -static void cfq_completed_request(struct request_queue *q, struct request *rq) -{ - struct cfq_queue *cfqq = RQ_CFQQ(rq); - struct cfq_data *cfqd = cfqq->cfqd; - const int sync = rq_is_sync(rq); - u64 now = ktime_get_ns(); - - cfq_log_cfqq(cfqd, cfqq, "complete rqnoidle %d", req_noidle(rq)); - - cfq_update_hw_tag(cfqd); - - WARN_ON(!cfqd->rq_in_driver); - WARN_ON(!cfqq->dispatched); - cfqd->rq_in_driver--; - cfqq->dispatched--; - (RQ_CFQG(rq))->dispatched--; - cfqg_stats_update_completion(cfqq->cfqg, rq->start_time_ns, - rq->io_start_time_ns, rq->cmd_flags); - - cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]--; - - if (sync) { - struct cfq_rb_root *st; - - RQ_CIC(rq)->ttime.last_end_request = now; - - if (cfq_cfqq_on_rr(cfqq)) - st = cfqq->service_tree; - else - st = st_for(cfqq->cfqg, cfqq_class(cfqq), - cfqq_type(cfqq)); - - st->ttime.last_end_request = now; - if (rq->start_time_ns + cfqd->cfq_fifo_expire[1] <= now) - cfqd->last_delayed_sync = now; - } - -#ifdef CONFIG_CFQ_GROUP_IOSCHED - cfqq->cfqg->ttime.last_end_request = now; -#endif - - /* - * If this is the active queue, check if it needs to be expired, - * or if we want to idle in case it has no pending requests. - */ - if (cfqd->active_queue == cfqq) { - const bool cfqq_empty = RB_EMPTY_ROOT(&cfqq->sort_list); - - if (cfq_cfqq_slice_new(cfqq)) { - cfq_set_prio_slice(cfqd, cfqq); - cfq_clear_cfqq_slice_new(cfqq); - } - - /* - * Should we wait for next request to come in before we expire - * the queue. - */ - if (cfq_should_wait_busy(cfqd, cfqq)) { - u64 extend_sl = cfqd->cfq_slice_idle; - if (!cfqd->cfq_slice_idle) - extend_sl = cfqd->cfq_group_idle; - cfqq->slice_end = now + extend_sl; - cfq_mark_cfqq_wait_busy(cfqq); - cfq_log_cfqq(cfqd, cfqq, "will busy wait"); - } - - /* - * Idling is not enabled on: - * - expired queues - * - idle-priority queues - * - async queues - * - queues with still some requests queued - * - when there is a close cooperator - */ - if (cfq_slice_used(cfqq) || cfq_class_idle(cfqq)) - cfq_slice_expired(cfqd, 1); - else if (sync && cfqq_empty && - !cfq_close_cooperator(cfqd, cfqq)) { - cfq_arm_slice_timer(cfqd); - } - } - - if (!cfqd->rq_in_driver) - cfq_schedule_dispatch(cfqd); -} - -static void cfqq_boost_on_prio(struct cfq_queue *cfqq, unsigned int op) -{ - /* - * If REQ_PRIO is set, boost class and prio level, if it's below - * BE/NORM. If prio is not set, restore the potentially boosted - * class/prio level. - */ - if (!(op & REQ_PRIO)) { - cfqq->ioprio_class = cfqq->org_ioprio_class; - cfqq->ioprio = cfqq->org_ioprio; - } else { - if (cfq_class_idle(cfqq)) - cfqq->ioprio_class = IOPRIO_CLASS_BE; - if (cfqq->ioprio > IOPRIO_NORM) - cfqq->ioprio = IOPRIO_NORM; - } -} - -static inline int __cfq_may_queue(struct cfq_queue *cfqq) -{ - if (cfq_cfqq_wait_request(cfqq) && !cfq_cfqq_must_alloc_slice(cfqq)) { - cfq_mark_cfqq_must_alloc_slice(cfqq); - return ELV_MQUEUE_MUST; - } - - return ELV_MQUEUE_MAY; -} - -static int cfq_may_queue(struct request_queue *q, unsigned int op) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - struct task_struct *tsk = current; - struct cfq_io_cq *cic; - struct cfq_queue *cfqq; - - /* - * don't force setup of a queue from here, as a call to may_queue - * does not necessarily imply that a request actually will be queued. 
- * so just lookup a possibly existing queue, or return 'may queue' - * if that fails - */ - cic = cfq_cic_lookup(cfqd, tsk->io_context); - if (!cic) - return ELV_MQUEUE_MAY; - - cfqq = cic_to_cfqq(cic, op_is_sync(op)); - if (cfqq) { - cfq_init_prio_data(cfqq, cic); - cfqq_boost_on_prio(cfqq, op); - - return __cfq_may_queue(cfqq); - } - - return ELV_MQUEUE_MAY; -} - -/* - * queue lock held here - */ -static void cfq_put_request(struct request *rq) -{ - struct cfq_queue *cfqq = RQ_CFQQ(rq); - - if (cfqq) { - const int rw = rq_data_dir(rq); - - BUG_ON(!cfqq->allocated[rw]); - cfqq->allocated[rw]--; - - /* Put down rq reference on cfqg */ - cfqg_put(RQ_CFQG(rq)); - rq->elv.priv[0] = NULL; - rq->elv.priv[1] = NULL; - - cfq_put_queue(cfqq); - } -} - -static struct cfq_queue * -cfq_merge_cfqqs(struct cfq_data *cfqd, struct cfq_io_cq *cic, - struct cfq_queue *cfqq) -{ - cfq_log_cfqq(cfqd, cfqq, "merging with queue %p", cfqq->new_cfqq); - cic_set_cfqq(cic, cfqq->new_cfqq, 1); - cfq_mark_cfqq_coop(cfqq->new_cfqq); - cfq_put_queue(cfqq); - return cic_to_cfqq(cic, 1); -} - -/* - * Returns NULL if a new cfqq should be allocated, or the old cfqq if this - * was the last process referring to said cfqq. - */ -static struct cfq_queue * -split_cfqq(struct cfq_io_cq *cic, struct cfq_queue *cfqq) -{ - if (cfqq_process_refs(cfqq) == 1) { - cfqq->pid = current->pid; - cfq_clear_cfqq_coop(cfqq); - cfq_clear_cfqq_split_coop(cfqq); - return cfqq; - } - - cic_set_cfqq(cic, NULL, 1); - - cfq_put_cooperator(cfqq); - - cfq_put_queue(cfqq); - return NULL; -} -/* - * Allocate cfq data structures associated with this request. - */ -static int -cfq_set_request(struct request_queue *q, struct request *rq, struct bio *bio, - gfp_t gfp_mask) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - struct cfq_io_cq *cic = icq_to_cic(rq->elv.icq); - const int rw = rq_data_dir(rq); - const bool is_sync = rq_is_sync(rq); - struct cfq_queue *cfqq; - - spin_lock_irq(q->queue_lock); - - check_ioprio_changed(cic, bio); - check_blkcg_changed(cic, bio); -new_queue: - cfqq = cic_to_cfqq(cic, is_sync); - if (!cfqq || cfqq == &cfqd->oom_cfqq) { - if (cfqq) - cfq_put_queue(cfqq); - cfqq = cfq_get_queue(cfqd, is_sync, cic, bio); - cic_set_cfqq(cic, cfqq, is_sync); - } else { - /* - * If the queue was seeky for too long, break it apart. - */ - if (cfq_cfqq_coop(cfqq) && cfq_cfqq_split_coop(cfqq)) { - cfq_log_cfqq(cfqd, cfqq, "breaking apart cfqq"); - cfqq = split_cfqq(cic, cfqq); - if (!cfqq) - goto new_queue; - } - - /* - * Check to see if this queue is scheduled to merge with - * another, closely cooperating queue. The merging of - * queues happens here as it must be done in process context. - * The reference on new_cfqq was taken in merge_cfqqs. 
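The split/merge pair above is driven by process reference counts: split_cfqq() reclaims a shared queue for the caller only when it is the last process using it, otherwise it drops its reference and forces allocation of a fresh queue. The shape of that decision as a standalone model (illustrative names, not the kernel structures):

struct shared_queue {
        int process_refs;       /* processes currently using this queue */
        unsigned int coop : 1;  /* marked as shared with a cooperator */
};

/*
 * Returns the queue if the caller was its last user (safe to reuse as a
 * private queue again), or NULL after dropping our reference, telling
 * the caller to allocate a new one.
 */
static struct shared_queue *split_queue(struct shared_queue *q)
{
        if (q->process_refs == 1) {
                q->coop = 0;    /* sole owner again: clear cooperation state */
                return q;
        }
        q->process_refs--;      /* stand-in for the cfq_put_queue() unref */
        return NULL;
}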
- */ - if (cfqq->new_cfqq) - cfqq = cfq_merge_cfqqs(cfqd, cic, cfqq); - } - - cfqq->allocated[rw]++; - - cfqq->ref++; - cfqg_get(cfqq->cfqg); - rq->elv.priv[0] = cfqq; - rq->elv.priv[1] = cfqq->cfqg; - spin_unlock_irq(q->queue_lock); - - return 0; -} - -static void cfq_kick_queue(struct work_struct *work) -{ - struct cfq_data *cfqd = - container_of(work, struct cfq_data, unplug_work); - struct request_queue *q = cfqd->queue; - - spin_lock_irq(q->queue_lock); - __blk_run_queue(cfqd->queue); - spin_unlock_irq(q->queue_lock); -} - -/* - * Timer running if the active_queue is currently idling inside its time slice - */ -static enum hrtimer_restart cfq_idle_slice_timer(struct hrtimer *timer) -{ - struct cfq_data *cfqd = container_of(timer, struct cfq_data, - idle_slice_timer); - struct cfq_queue *cfqq; - unsigned long flags; - int timed_out = 1; - - cfq_log(cfqd, "idle timer fired"); - - spin_lock_irqsave(cfqd->queue->queue_lock, flags); - - cfqq = cfqd->active_queue; - if (cfqq) { - timed_out = 0; - - /* - * We saw a request before the queue expired, let it through - */ - if (cfq_cfqq_must_dispatch(cfqq)) - goto out_kick; - - /* - * expired - */ - if (cfq_slice_used(cfqq)) - goto expire; - - /* - * only expire and reinvoke request handler, if there are - * other queues with pending requests - */ - if (!cfqd->busy_queues) - goto out_cont; - - /* - * not expired and it has a request pending, let it dispatch - */ - if (!RB_EMPTY_ROOT(&cfqq->sort_list)) - goto out_kick; - - /* - * Queue depth flag is reset only when the idle didn't succeed - */ - cfq_clear_cfqq_deep(cfqq); - } -expire: - cfq_slice_expired(cfqd, timed_out); -out_kick: - cfq_schedule_dispatch(cfqd); -out_cont: - spin_unlock_irqrestore(cfqd->queue->queue_lock, flags); - return HRTIMER_NORESTART; -} - -static void cfq_shutdown_timer_wq(struct cfq_data *cfqd) -{ - hrtimer_cancel(&cfqd->idle_slice_timer); - cancel_work_sync(&cfqd->unplug_work); -} - -static void cfq_exit_queue(struct elevator_queue *e) -{ - struct cfq_data *cfqd = e->elevator_data; - struct request_queue *q = cfqd->queue; - - cfq_shutdown_timer_wq(cfqd); - - spin_lock_irq(q->queue_lock); - - if (cfqd->active_queue) - __cfq_slice_expired(cfqd, cfqd->active_queue, 0); - - spin_unlock_irq(q->queue_lock); - - cfq_shutdown_timer_wq(cfqd); - -#ifdef CONFIG_CFQ_GROUP_IOSCHED - blkcg_deactivate_policy(q, &blkcg_policy_cfq); -#else - kfree(cfqd->root_group); -#endif - kfree(cfqd); -} - -static int cfq_init_queue(struct request_queue *q, struct elevator_type *e) -{ - struct cfq_data *cfqd; - struct blkcg_gq *blkg __maybe_unused; - int i, ret; - struct elevator_queue *eq; - - eq = elevator_alloc(q, e); - if (!eq) - return -ENOMEM; - - cfqd = kzalloc_node(sizeof(*cfqd), GFP_KERNEL, q->node); - if (!cfqd) { - kobject_put(&eq->kobj); - return -ENOMEM; - } - eq->elevator_data = cfqd; - - cfqd->queue = q; - spin_lock_irq(q->queue_lock); - q->elevator = eq; - spin_unlock_irq(q->queue_lock); - - /* Init root service tree */ - cfqd->grp_service_tree = CFQ_RB_ROOT; - - /* Init root group and prefer root group over other groups by default */ -#ifdef CONFIG_CFQ_GROUP_IOSCHED - ret = blkcg_activate_policy(q, &blkcg_policy_cfq); - if (ret) - goto out_free; - - cfqd->root_group = blkg_to_cfqg(q->root_blkg); -#else - ret = -ENOMEM; - cfqd->root_group = kzalloc_node(sizeof(*cfqd->root_group), - GFP_KERNEL, cfqd->queue->node); - if (!cfqd->root_group) - goto out_free; - - cfq_init_cfqg_base(cfqd->root_group); - cfqd->root_group->weight = 2 * CFQ_WEIGHT_LEGACY_DFL; - cfqd->root_group->leaf_weight = 
2 * CFQ_WEIGHT_LEGACY_DFL; -#endif - - /* - * Not strictly needed (since RB_ROOT just clears the node and we - * zeroed cfqd on alloc), but better be safe in case someone decides - * to add magic to the rb code - */ - for (i = 0; i < CFQ_PRIO_LISTS; i++) - cfqd->prio_trees[i] = RB_ROOT; - - /* - * Our fallback cfqq if cfq_get_queue() runs into OOM issues. - * Grab a permanent reference to it, so that the normal code flow - * will not attempt to free it. oom_cfqq is linked to root_group - * but shouldn't hold a reference as it'll never be unlinked. Lose - * the reference from linking right away. - */ - cfq_init_cfqq(cfqd, &cfqd->oom_cfqq, 1, 0); - cfqd->oom_cfqq.ref++; - - spin_lock_irq(q->queue_lock); - cfq_link_cfqq_cfqg(&cfqd->oom_cfqq, cfqd->root_group); - cfqg_put(cfqd->root_group); - spin_unlock_irq(q->queue_lock); - - hrtimer_init(&cfqd->idle_slice_timer, CLOCK_MONOTONIC, - HRTIMER_MODE_REL); - cfqd->idle_slice_timer.function = cfq_idle_slice_timer; - - INIT_WORK(&cfqd->unplug_work, cfq_kick_queue); - - cfqd->cfq_quantum = cfq_quantum; - cfqd->cfq_fifo_expire[0] = cfq_fifo_expire[0]; - cfqd->cfq_fifo_expire[1] = cfq_fifo_expire[1]; - cfqd->cfq_back_max = cfq_back_max; - cfqd->cfq_back_penalty = cfq_back_penalty; - cfqd->cfq_slice[0] = cfq_slice_async; - cfqd->cfq_slice[1] = cfq_slice_sync; - cfqd->cfq_target_latency = cfq_target_latency; - cfqd->cfq_slice_async_rq = cfq_slice_async_rq; - cfqd->cfq_slice_idle = cfq_slice_idle; - cfqd->cfq_group_idle = cfq_group_idle; - cfqd->cfq_latency = 1; - cfqd->hw_tag = -1; - /* - * we optimistically start assuming sync ops weren't delayed in last - * second, in order to have larger depth for async operations. - */ - cfqd->last_delayed_sync = ktime_get_ns() - NSEC_PER_SEC; - return 0; - -out_free: - kfree(cfqd); - kobject_put(&eq->kobj); - return ret; -} - -static void cfq_registered_queue(struct request_queue *q) -{ - struct elevator_queue *e = q->elevator; - struct cfq_data *cfqd = e->elevator_data; - - /* - * Default to IOPS mode with no idling for SSDs - */ - if (blk_queue_nonrot(q)) - cfqd->cfq_slice_idle = 0; - wbt_disable_default(q); -} - -/* - * sysfs parts below --> - */ -static ssize_t -cfq_var_show(unsigned int var, char *page) -{ - return sprintf(page, "%u\n", var); -} - -static void -cfq_var_store(unsigned int *var, const char *page) -{ - char *p = (char *) page; - - *var = simple_strtoul(p, &p, 10); -} - -#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \ -static ssize_t __FUNC(struct elevator_queue *e, char *page) \ -{ \ - struct cfq_data *cfqd = e->elevator_data; \ - u64 __data = __VAR; \ - if (__CONV) \ - __data = div_u64(__data, NSEC_PER_MSEC); \ - return cfq_var_show(__data, (page)); \ -} -SHOW_FUNCTION(cfq_quantum_show, cfqd->cfq_quantum, 0); -SHOW_FUNCTION(cfq_fifo_expire_sync_show, cfqd->cfq_fifo_expire[1], 1); -SHOW_FUNCTION(cfq_fifo_expire_async_show, cfqd->cfq_fifo_expire[0], 1); -SHOW_FUNCTION(cfq_back_seek_max_show, cfqd->cfq_back_max, 0); -SHOW_FUNCTION(cfq_back_seek_penalty_show, cfqd->cfq_back_penalty, 0); -SHOW_FUNCTION(cfq_slice_idle_show, cfqd->cfq_slice_idle, 1); -SHOW_FUNCTION(cfq_group_idle_show, cfqd->cfq_group_idle, 1); -SHOW_FUNCTION(cfq_slice_sync_show, cfqd->cfq_slice[1], 1); -SHOW_FUNCTION(cfq_slice_async_show, cfqd->cfq_slice[0], 1); -SHOW_FUNCTION(cfq_slice_async_rq_show, cfqd->cfq_slice_async_rq, 0); -SHOW_FUNCTION(cfq_low_latency_show, cfqd->cfq_latency, 0); -SHOW_FUNCTION(cfq_target_latency_show, cfqd->cfq_target_latency, 1); -#undef SHOW_FUNCTION - -#define USEC_SHOW_FUNCTION(__FUNC, __VAR) \ -static 
ssize_t __FUNC(struct elevator_queue *e, char *page) \ -{ \ - struct cfq_data *cfqd = e->elevator_data; \ - u64 __data = __VAR; \ - __data = div_u64(__data, NSEC_PER_USEC); \ - return cfq_var_show(__data, (page)); \ -} -USEC_SHOW_FUNCTION(cfq_slice_idle_us_show, cfqd->cfq_slice_idle); -USEC_SHOW_FUNCTION(cfq_group_idle_us_show, cfqd->cfq_group_idle); -USEC_SHOW_FUNCTION(cfq_slice_sync_us_show, cfqd->cfq_slice[1]); -USEC_SHOW_FUNCTION(cfq_slice_async_us_show, cfqd->cfq_slice[0]); -USEC_SHOW_FUNCTION(cfq_target_latency_us_show, cfqd->cfq_target_latency); -#undef USEC_SHOW_FUNCTION - -#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \ -static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count) \ -{ \ - struct cfq_data *cfqd = e->elevator_data; \ - unsigned int __data, __min = (MIN), __max = (MAX); \ - \ - cfq_var_store(&__data, (page)); \ - if (__data < __min) \ - __data = __min; \ - else if (__data > __max) \ - __data = __max; \ - if (__CONV) \ - *(__PTR) = (u64)__data * NSEC_PER_MSEC; \ - else \ - *(__PTR) = __data; \ - return count; \ -} -STORE_FUNCTION(cfq_quantum_store, &cfqd->cfq_quantum, 1, UINT_MAX, 0); -STORE_FUNCTION(cfq_fifo_expire_sync_store, &cfqd->cfq_fifo_expire[1], 1, - UINT_MAX, 1); -STORE_FUNCTION(cfq_fifo_expire_async_store, &cfqd->cfq_fifo_expire[0], 1, - UINT_MAX, 1); -STORE_FUNCTION(cfq_back_seek_max_store, &cfqd->cfq_back_max, 0, UINT_MAX, 0); -STORE_FUNCTION(cfq_back_seek_penalty_store, &cfqd->cfq_back_penalty, 1, - UINT_MAX, 0); -STORE_FUNCTION(cfq_slice_idle_store, &cfqd->cfq_slice_idle, 0, UINT_MAX, 1); -STORE_FUNCTION(cfq_group_idle_store, &cfqd->cfq_group_idle, 0, UINT_MAX, 1); -STORE_FUNCTION(cfq_slice_sync_store, &cfqd->cfq_slice[1], 1, UINT_MAX, 1); -STORE_FUNCTION(cfq_slice_async_store, &cfqd->cfq_slice[0], 1, UINT_MAX, 1); -STORE_FUNCTION(cfq_slice_async_rq_store, &cfqd->cfq_slice_async_rq, 1, - UINT_MAX, 0); -STORE_FUNCTION(cfq_low_latency_store, &cfqd->cfq_latency, 0, 1, 0); -STORE_FUNCTION(cfq_target_latency_store, &cfqd->cfq_target_latency, 1, UINT_MAX, 1); -#undef STORE_FUNCTION - -#define USEC_STORE_FUNCTION(__FUNC, __PTR, MIN, MAX) \ -static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count) \ -{ \ - struct cfq_data *cfqd = e->elevator_data; \ - unsigned int __data, __min = (MIN), __max = (MAX); \ - \ - cfq_var_store(&__data, (page)); \ - if (__data < __min) \ - __data = __min; \ - else if (__data > __max) \ - __data = __max; \ - *(__PTR) = (u64)__data * NSEC_PER_USEC; \ - return count; \ -} -USEC_STORE_FUNCTION(cfq_slice_idle_us_store, &cfqd->cfq_slice_idle, 0, UINT_MAX); -USEC_STORE_FUNCTION(cfq_group_idle_us_store, &cfqd->cfq_group_idle, 0, UINT_MAX); -USEC_STORE_FUNCTION(cfq_slice_sync_us_store, &cfqd->cfq_slice[1], 1, UINT_MAX); -USEC_STORE_FUNCTION(cfq_slice_async_us_store, &cfqd->cfq_slice[0], 1, UINT_MAX); -USEC_STORE_FUNCTION(cfq_target_latency_us_store, &cfqd->cfq_target_latency, 1, UINT_MAX); -#undef USEC_STORE_FUNCTION - -#define CFQ_ATTR(name) \ - __ATTR(name, 0644, cfq_##name##_show, cfq_##name##_store) - -static struct elv_fs_entry cfq_attrs[] = { - CFQ_ATTR(quantum), - CFQ_ATTR(fifo_expire_sync), - CFQ_ATTR(fifo_expire_async), - CFQ_ATTR(back_seek_max), - CFQ_ATTR(back_seek_penalty), - CFQ_ATTR(slice_sync), - CFQ_ATTR(slice_sync_us), - CFQ_ATTR(slice_async), - CFQ_ATTR(slice_async_us), - CFQ_ATTR(slice_async_rq), - CFQ_ATTR(slice_idle), - CFQ_ATTR(slice_idle_us), - CFQ_ATTR(group_idle), - CFQ_ATTR(group_idle_us), - CFQ_ATTR(low_latency), - CFQ_ATTR(target_latency), - 
CFQ_ATTR(target_latency_us), - __ATTR_NULL -}; - -static struct elevator_type iosched_cfq = { - .ops.sq = { - .elevator_merge_fn = cfq_merge, - .elevator_merged_fn = cfq_merged_request, - .elevator_merge_req_fn = cfq_merged_requests, - .elevator_allow_bio_merge_fn = cfq_allow_bio_merge, - .elevator_allow_rq_merge_fn = cfq_allow_rq_merge, - .elevator_bio_merged_fn = cfq_bio_merged, - .elevator_dispatch_fn = cfq_dispatch_requests, - .elevator_add_req_fn = cfq_insert_request, - .elevator_activate_req_fn = cfq_activate_request, - .elevator_deactivate_req_fn = cfq_deactivate_request, - .elevator_completed_req_fn = cfq_completed_request, - .elevator_former_req_fn = elv_rb_former_request, - .elevator_latter_req_fn = elv_rb_latter_request, - .elevator_init_icq_fn = cfq_init_icq, - .elevator_exit_icq_fn = cfq_exit_icq, - .elevator_set_req_fn = cfq_set_request, - .elevator_put_req_fn = cfq_put_request, - .elevator_may_queue_fn = cfq_may_queue, - .elevator_init_fn = cfq_init_queue, - .elevator_exit_fn = cfq_exit_queue, - .elevator_registered_fn = cfq_registered_queue, - }, - .icq_size = sizeof(struct cfq_io_cq), - .icq_align = __alignof__(struct cfq_io_cq), - .elevator_attrs = cfq_attrs, - .elevator_name = "cfq", - .elevator_owner = THIS_MODULE, -}; - -#ifdef CONFIG_CFQ_GROUP_IOSCHED -static struct blkcg_policy blkcg_policy_cfq = { - .dfl_cftypes = cfq_blkcg_files, - .legacy_cftypes = cfq_blkcg_legacy_files, - - .cpd_alloc_fn = cfq_cpd_alloc, - .cpd_init_fn = cfq_cpd_init, - .cpd_free_fn = cfq_cpd_free, - .cpd_bind_fn = cfq_cpd_bind, - - .pd_alloc_fn = cfq_pd_alloc, - .pd_init_fn = cfq_pd_init, - .pd_offline_fn = cfq_pd_offline, - .pd_free_fn = cfq_pd_free, - .pd_reset_stats_fn = cfq_pd_reset_stats, -}; -#endif - -static int __init cfq_init(void) -{ - int ret; - -#ifdef CONFIG_CFQ_GROUP_IOSCHED - ret = blkcg_policy_register(&blkcg_policy_cfq); - if (ret) - return ret; -#else - cfq_group_idle = 0; -#endif - - ret = -ENOMEM; - cfq_pool = KMEM_CACHE(cfq_queue, 0); - if (!cfq_pool) - goto err_pol_unreg; - - ret = elv_register(&iosched_cfq); - if (ret) - goto err_free_pool; - - return 0; - -err_free_pool: - kmem_cache_destroy(cfq_pool); -err_pol_unreg: -#ifdef CONFIG_CFQ_GROUP_IOSCHED - blkcg_policy_unregister(&blkcg_policy_cfq); -#endif - return ret; -} - -static void __exit cfq_exit(void) -{ -#ifdef CONFIG_CFQ_GROUP_IOSCHED - blkcg_policy_unregister(&blkcg_policy_cfq); -#endif - elv_unregister(&iosched_cfq); - kmem_cache_destroy(cfq_pool); -} - -module_init(cfq_init); -module_exit(cfq_exit); - -MODULE_AUTHOR("Jens Axboe"); -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Completely Fair Queueing IO scheduler"); diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c deleted file mode 100644 index ef2f1f09e9b3..000000000000 --- a/block/deadline-iosched.c +++ /dev/null @@ -1,560 +0,0 @@ -/* - * Deadline i/o scheduler. - * - * Copyright (C) 2002 Jens Axboe - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -/* - * See Documentation/block/deadline-iosched.txt - */ -static const int read_expire = HZ / 2; /* max time before a read is submitted. */ -static const int write_expire = 5 * HZ; /* ditto for writes, these limits are SOFT! */ -static const int writes_starved = 2; /* max times reads can starve a write */ -static const int fifo_batch = 16; /* # of sequential requests treated as one - by the above parameters. For throughput. 
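Because these tunables are expressed in jiffies, the defaults are HZ-independent in wall-clock terms: reads expire after 500 ms, writes after 5 s, a batch covers up to 16 sequential requests, and reads may win the direction choice at most twice in a row while writes are waiting. A quick check of the conversion (the HZ value below is chosen only for illustration):

#include <stdio.h>

#define HZ 250                  /* illustrative tick rate */

int main(void)
{
        const int read_expire = HZ / 2;         /* jiffies */
        const int write_expire = 5 * HZ;        /* jiffies */

        /* Prints 500 and 5000 regardless of the HZ chosen above. */
        printf("read expire: %d ms\n", read_expire * 1000 / HZ);
        printf("write expire: %d ms\n", write_expire * 1000 / HZ);
        return 0;
}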
*/ - -struct deadline_data { - /* - * run time data - */ - - /* - * requests (deadline_rq s) are present on both sort_list and fifo_list - */ - struct rb_root sort_list[2]; - struct list_head fifo_list[2]; - - /* - * next in sort order. read, write or both are NULL - */ - struct request *next_rq[2]; - unsigned int batching; /* number of sequential requests made */ - unsigned int starved; /* times reads have starved writes */ - - /* - * settings that change how the i/o scheduler behaves - */ - int fifo_expire[2]; - int fifo_batch; - int writes_starved; - int front_merges; -}; - -static inline struct rb_root * -deadline_rb_root(struct deadline_data *dd, struct request *rq) -{ - return &dd->sort_list[rq_data_dir(rq)]; -} - -/* - * get the request after `rq' in sector-sorted order - */ -static inline struct request * -deadline_latter_request(struct request *rq) -{ - struct rb_node *node = rb_next(&rq->rb_node); - - if (node) - return rb_entry_rq(node); - - return NULL; -} - -static void -deadline_add_rq_rb(struct deadline_data *dd, struct request *rq) -{ - struct rb_root *root = deadline_rb_root(dd, rq); - - elv_rb_add(root, rq); -} - -static inline void -deadline_del_rq_rb(struct deadline_data *dd, struct request *rq) -{ - const int data_dir = rq_data_dir(rq); - - if (dd->next_rq[data_dir] == rq) - dd->next_rq[data_dir] = deadline_latter_request(rq); - - elv_rb_del(deadline_rb_root(dd, rq), rq); -} - -/* - * add rq to rbtree and fifo - */ -static void -deadline_add_request(struct request_queue *q, struct request *rq) -{ - struct deadline_data *dd = q->elevator->elevator_data; - const int data_dir = rq_data_dir(rq); - - /* - * This may be a requeue of a write request that has locked its - * target zone. If it is the case, this releases the zone lock. - */ - blk_req_zone_write_unlock(rq); - - deadline_add_rq_rb(dd, rq); - - /* - * set expire time and add to fifo list - */ - rq->fifo_time = jiffies + dd->fifo_expire[data_dir]; - list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]); -} - -/* - * remove rq from rbtree and fifo. 
- */ -static void deadline_remove_request(struct request_queue *q, struct request *rq) -{ - struct deadline_data *dd = q->elevator->elevator_data; - - rq_fifo_clear(rq); - deadline_del_rq_rb(dd, rq); -} - -static enum elv_merge -deadline_merge(struct request_queue *q, struct request **req, struct bio *bio) -{ - struct deadline_data *dd = q->elevator->elevator_data; - struct request *__rq; - - /* - * check for front merge - */ - if (dd->front_merges) { - sector_t sector = bio_end_sector(bio); - - __rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector); - if (__rq) { - BUG_ON(sector != blk_rq_pos(__rq)); - - if (elv_bio_merge_ok(__rq, bio)) { - *req = __rq; - return ELEVATOR_FRONT_MERGE; - } - } - } - - return ELEVATOR_NO_MERGE; -} - -static void deadline_merged_request(struct request_queue *q, - struct request *req, enum elv_merge type) -{ - struct deadline_data *dd = q->elevator->elevator_data; - - /* - * if the merge was a front merge, we need to reposition request - */ - if (type == ELEVATOR_FRONT_MERGE) { - elv_rb_del(deadline_rb_root(dd, req), req); - deadline_add_rq_rb(dd, req); - } -} - -static void -deadline_merged_requests(struct request_queue *q, struct request *req, - struct request *next) -{ - /* - * if next expires before rq, assign its expire time to rq - * and move into next position (next will be deleted) in fifo - */ - if (!list_empty(&req->queuelist) && !list_empty(&next->queuelist)) { - if (time_before((unsigned long)next->fifo_time, - (unsigned long)req->fifo_time)) { - list_move(&req->queuelist, &next->queuelist); - req->fifo_time = next->fifo_time; - } - } - - /* - * kill knowledge of next, this one is a goner - */ - deadline_remove_request(q, next); -} - -/* - * move request from sort list to dispatch queue. - */ -static inline void -deadline_move_to_dispatch(struct deadline_data *dd, struct request *rq) -{ - struct request_queue *q = rq->q; - - /* - * For a zoned block device, write requests must write lock their - * target zone. - */ - blk_req_zone_write_lock(rq); - - deadline_remove_request(q, rq); - elv_dispatch_add_tail(q, rq); -} - -/* - * move an entry to dispatch queue - */ -static void -deadline_move_request(struct deadline_data *dd, struct request *rq) -{ - const int data_dir = rq_data_dir(rq); - - dd->next_rq[READ] = NULL; - dd->next_rq[WRITE] = NULL; - dd->next_rq[data_dir] = deadline_latter_request(rq); - - /* - * take it off the sort and fifo list, move - * to dispatch queue - */ - deadline_move_to_dispatch(dd, rq); -} - -/* - * deadline_check_fifo returns 0 if there are no expired requests on the fifo, - * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir]) - */ -static inline int deadline_check_fifo(struct deadline_data *dd, int ddir) -{ - struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next); - - /* - * rq is expired! - */ - if (time_after_eq(jiffies, (unsigned long)rq->fifo_time)) - return 1; - - return 0; -} - -/* - * For the specified data direction, return the next request to dispatch using - * arrival ordered lists. - */ -static struct request * -deadline_fifo_request(struct deadline_data *dd, int data_dir) -{ - struct request *rq; - - if (WARN_ON_ONCE(data_dir != READ && data_dir != WRITE)) - return NULL; - - if (list_empty(&dd->fifo_list[data_dir])) - return NULL; - - rq = rq_entry_fifo(dd->fifo_list[data_dir].next); - if (data_dir == READ || !blk_queue_is_zoned(rq->q)) - return rq; - - /* - * Look for a write request that can be dispatched, that is one with - * an unlocked target zone. 
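deadline_merge() above only attempts front merges: a bio can be glued onto the front of a queued request exactly when it ends where that request begins, which the sector-keyed rbtree turns into a single elv_rb_find() lookup on bio_end_sector(). The invariant being tested, in isolation (illustrative types, not the kernel's):

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t sector_t;

struct rq_model {
        sector_t pos;           /* first sector of the queued request */
        unsigned int nr_sectors;
};

/* bio [bio_pos, bio_pos + bio_sectors) front-merges iff it ends at rq->pos. */
static bool can_front_merge(const struct rq_model *rq,
                            sector_t bio_pos, unsigned int bio_sectors)
{
        return bio_pos + bio_sectors == rq->pos;
}

Back merges don't need the tree at all: they are caught earlier by the generic elevator hash on each request's end position.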
- */ - list_for_each_entry(rq, &dd->fifo_list[WRITE], queuelist) { - if (blk_req_can_dispatch_to_zone(rq)) - return rq; - } - - return NULL; -} - -/* - * For the specified data direction, return the next request to dispatch using - * sector position sorted lists. - */ -static struct request * -deadline_next_request(struct deadline_data *dd, int data_dir) -{ - struct request *rq; - - if (WARN_ON_ONCE(data_dir != READ && data_dir != WRITE)) - return NULL; - - rq = dd->next_rq[data_dir]; - if (!rq) - return NULL; - - if (data_dir == READ || !blk_queue_is_zoned(rq->q)) - return rq; - - /* - * Look for a write request that can be dispatched, that is one with - * an unlocked target zone. - */ - while (rq) { - if (blk_req_can_dispatch_to_zone(rq)) - return rq; - rq = deadline_latter_request(rq); - } - - return NULL; -} - -/* - * deadline_dispatch_requests selects the best request according to - * read/write expire, fifo_batch, etc - */ -static int deadline_dispatch_requests(struct request_queue *q, int force) -{ - struct deadline_data *dd = q->elevator->elevator_data; - const int reads = !list_empty(&dd->fifo_list[READ]); - const int writes = !list_empty(&dd->fifo_list[WRITE]); - struct request *rq, *next_rq; - int data_dir; - - /* - * batches are currently reads XOR writes - */ - rq = deadline_next_request(dd, WRITE); - if (!rq) - rq = deadline_next_request(dd, READ); - - if (rq && dd->batching < dd->fifo_batch) - /* we have a next request and are still entitled to batch */ - goto dispatch_request; - - /* - * at this point we are not running a batch. select the appropriate - * data direction (read / write) - */ - - if (reads) { - BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ])); - - if (deadline_fifo_request(dd, WRITE) && - (dd->starved++ >= dd->writes_starved)) - goto dispatch_writes; - - data_dir = READ; - - goto dispatch_find_request; - } - - /* - * either there are no reads, or writes have been starved - */ - - if (writes) { -dispatch_writes: - BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE])); - - dd->starved = 0; - - data_dir = WRITE; - - goto dispatch_find_request; - } - - return 0; - -dispatch_find_request: - /* - * we are not running a batch, find best request for selected data_dir - */ - next_rq = deadline_next_request(dd, data_dir); - if (deadline_check_fifo(dd, data_dir) || !next_rq) { - /* - * A deadline has expired, the last request was in the other - * direction, or we have run out of higher-sectored requests. - * Start again from the request with the earliest expiry time. - */ - rq = deadline_fifo_request(dd, data_dir); - } else { - /* - * The last req was the same dir and we have a next request in - * sort order. No expired requests so continue on from here. - */ - rq = next_rq; - } - - /* - * For a zoned block device, if we only have writes queued and none of - * them can be dispatched, rq will be NULL. - */ - if (!rq) - return 0; - - dd->batching = 0; - -dispatch_request: - /* - * rq is the selected appropriate request. - */ - dd->batching++; - deadline_move_request(dd, rq); - - return 1; -} - -/* - * For zoned block devices, write unlock the target zone of completed - * write requests. - */ -static void -deadline_completed_request(struct request_queue *q, struct request *rq) -{ - blk_req_zone_write_unlock(rq); -} - -static void deadline_exit_queue(struct elevator_queue *e) -{ - struct deadline_data *dd = e->elevator_data; - - BUG_ON(!list_empty(&dd->fifo_list[READ])); - BUG_ON(!list_empty(&dd->fifo_list[WRITE])); - - kfree(dd); -} - -/* - * initialize elevator private data (deadline_data).
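The dispatch logic above boils down to a small state machine: continue the current batch while it stays sequential and under fifo_batch; otherwise pick a direction, preferring reads but forcing writes once they have been passed over writes_starved times; then start the new batch from the FIFO head if a deadline expired or the sort order ran out, else from the next sequential request. A condensed standalone model of the direction choice (zone locking and the rbtree walk are left out):

#include <stdbool.h>

enum pick { PICK_NONE, PICK_BATCH, PICK_READ, PICK_WRITE };

struct dd_model {
        bool reads, writes;     /* pending requests per direction */
        bool have_next_rq;      /* sequential successor to the last dispatch */
        int batching, fifo_batch;
        int starved, writes_starved;
};

static enum pick dd_pick_direction(struct dd_model *dd)
{
        /* Keep an in-flight batch going while it stays sequential. */
        if (dd->have_next_rq && dd->batching < dd->fifo_batch) {
                dd->batching++;
                return PICK_BATCH;
        }
        /* Prefer reads, unless writes have been starved long enough. */
        if (dd->reads) {
                if (dd->writes && dd->starved++ >= dd->writes_starved)
                        goto writes;
                dd->batching = 1;
                return PICK_READ;
        }
        if (dd->writes) {
writes:
                dd->starved = 0;
                dd->batching = 1;
                return PICK_WRITE;
        }
        return PICK_NONE;
}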
- */ -static int deadline_init_queue(struct request_queue *q, struct elevator_type *e) -{ - struct deadline_data *dd; - struct elevator_queue *eq; - - eq = elevator_alloc(q, e); - if (!eq) - return -ENOMEM; - - dd = kzalloc_node(sizeof(*dd), GFP_KERNEL, q->node); - if (!dd) { - kobject_put(&eq->kobj); - return -ENOMEM; - } - eq->elevator_data = dd; - - INIT_LIST_HEAD(&dd->fifo_list[READ]); - INIT_LIST_HEAD(&dd->fifo_list[WRITE]); - dd->sort_list[READ] = RB_ROOT; - dd->sort_list[WRITE] = RB_ROOT; - dd->fifo_expire[READ] = read_expire; - dd->fifo_expire[WRITE] = write_expire; - dd->writes_starved = writes_starved; - dd->front_merges = 1; - dd->fifo_batch = fifo_batch; - - spin_lock_irq(q->queue_lock); - q->elevator = eq; - spin_unlock_irq(q->queue_lock); - return 0; -} - -/* - * sysfs parts below - */ - -static ssize_t -deadline_var_show(int var, char *page) -{ - return sprintf(page, "%d\n", var); -} - -static void -deadline_var_store(int *var, const char *page) -{ - char *p = (char *) page; - - *var = simple_strtol(p, &p, 10); -} - -#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \ -static ssize_t __FUNC(struct elevator_queue *e, char *page) \ -{ \ - struct deadline_data *dd = e->elevator_data; \ - int __data = __VAR; \ - if (__CONV) \ - __data = jiffies_to_msecs(__data); \ - return deadline_var_show(__data, (page)); \ -} -SHOW_FUNCTION(deadline_read_expire_show, dd->fifo_expire[READ], 1); -SHOW_FUNCTION(deadline_write_expire_show, dd->fifo_expire[WRITE], 1); -SHOW_FUNCTION(deadline_writes_starved_show, dd->writes_starved, 0); -SHOW_FUNCTION(deadline_front_merges_show, dd->front_merges, 0); -SHOW_FUNCTION(deadline_fifo_batch_show, dd->fifo_batch, 0); -#undef SHOW_FUNCTION - -#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \ -static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count) \ -{ \ - struct deadline_data *dd = e->elevator_data; \ - int __data; \ - deadline_var_store(&__data, (page)); \ - if (__data < (MIN)) \ - __data = (MIN); \ - else if (__data > (MAX)) \ - __data = (MAX); \ - if (__CONV) \ - *(__PTR) = msecs_to_jiffies(__data); \ - else \ - *(__PTR) = __data; \ - return count; \ -} -STORE_FUNCTION(deadline_read_expire_store, &dd->fifo_expire[READ], 0, INT_MAX, 1); -STORE_FUNCTION(deadline_write_expire_store, &dd->fifo_expire[WRITE], 0, INT_MAX, 1); -STORE_FUNCTION(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX, 0); -STORE_FUNCTION(deadline_front_merges_store, &dd->front_merges, 0, 1, 0); -STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0); -#undef STORE_FUNCTION - -#define DD_ATTR(name) \ - __ATTR(name, 0644, deadline_##name##_show, deadline_##name##_store) - -static struct elv_fs_entry deadline_attrs[] = { - DD_ATTR(read_expire), - DD_ATTR(write_expire), - DD_ATTR(writes_starved), - DD_ATTR(front_merges), - DD_ATTR(fifo_batch), - __ATTR_NULL -}; - -static struct elevator_type iosched_deadline = { - .ops.sq = { - .elevator_merge_fn = deadline_merge, - .elevator_merged_fn = deadline_merged_request, - .elevator_merge_req_fn = deadline_merged_requests, - .elevator_dispatch_fn = deadline_dispatch_requests, - .elevator_completed_req_fn = deadline_completed_request, - .elevator_add_req_fn = deadline_add_request, - .elevator_former_req_fn = elv_rb_former_request, - .elevator_latter_req_fn = elv_rb_latter_request, - .elevator_init_fn = deadline_init_queue, - .elevator_exit_fn = deadline_exit_queue, - }, - - .elevator_attrs = deadline_attrs, - .elevator_name = "deadline", - .elevator_owner = THIS_MODULE, -}; - -static 
int __init deadline_init(void) -{ - return elv_register(&iosched_deadline); -} - -static void __exit deadline_exit(void) -{ - elv_unregister(&iosched_deadline); -} - -module_init(deadline_init); -module_exit(deadline_exit); - -MODULE_AUTHOR("Jens Axboe"); -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("deadline IO scheduler"); diff --git a/block/elevator.c b/block/elevator.c index 8fdcd64ae12e..f05e90d4e695 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -61,10 +61,8 @@ static int elv_iosched_allow_bio_merge(struct request *rq, struct bio *bio) struct request_queue *q = rq->q; struct elevator_queue *e = q->elevator; - if (e->uses_mq && e->type->ops.mq.allow_merge) - return e->type->ops.mq.allow_merge(q, rq, bio); - else if (!e->uses_mq && e->type->ops.sq.elevator_allow_bio_merge_fn) - return e->type->ops.sq.elevator_allow_bio_merge_fn(q, rq, bio); + if (e->type->ops.allow_merge) + return e->type->ops.allow_merge(q, rq, bio); return 1; } @@ -95,14 +93,14 @@ static bool elevator_match(const struct elevator_type *e, const char *name) } /* - * Return scheduler with name 'name' and with matching 'mq capability + * Return scheduler with name 'name' */ -static struct elevator_type *elevator_find(const char *name, bool mq) +static struct elevator_type *elevator_find(const char *name) { struct elevator_type *e; list_for_each_entry(e, &elv_list, list) { - if (elevator_match(e, name) && (mq == e->uses_mq)) + if (elevator_match(e, name)) return e; } @@ -121,12 +119,12 @@ static struct elevator_type *elevator_get(struct request_queue *q, spin_lock(&elv_list_lock); - e = elevator_find(name, q->mq_ops != NULL); + e = elevator_find(name); if (!e && try_loading) { spin_unlock(&elv_list_lock); request_module("%s-iosched", name); spin_lock(&elv_list_lock); - e = elevator_find(name, q->mq_ops != NULL); + e = elevator_find(name); } if (e && !try_module_get(e->elevator_owner)) @@ -150,26 +148,6 @@ static int __init elevator_setup(char *str) __setup("elevator=", elevator_setup); -/* called during boot to load the elevator chosen by the elevator param */ -void __init load_default_elevator_module(void) -{ - struct elevator_type *e; - - if (!chosen_elevator[0]) - return; - - /* - * Boot parameter is deprecated, we haven't supported that for MQ. - * Only look for non-mq schedulers from here. - */ - spin_lock(&elv_list_lock); - e = elevator_find(chosen_elevator, false); - spin_unlock(&elv_list_lock); - - if (!e) - request_module("%s-iosched", chosen_elevator); -} - static struct kobj_type elv_ktype; struct elevator_queue *elevator_alloc(struct request_queue *q, @@ -185,7 +163,6 @@ struct elevator_queue *elevator_alloc(struct request_queue *q, kobject_init(&eq->kobj, &elv_ktype); mutex_init(&eq->sysfs_lock); hash_init(eq->hash); - eq->uses_mq = e->uses_mq; return eq; } @@ -200,54 +177,11 @@ static void elevator_release(struct kobject *kobj) kfree(e); } -/* - * Use the default elevator specified by config boot param for non-mq devices, - * or by config option. Don't try to load modules as we could be running off - * async and request_module() isn't allowed from async. - */ -int elevator_init(struct request_queue *q) -{ - struct elevator_type *e = NULL; - int err = 0; - - /* - * q->sysfs_lock must be held to provide mutual exclusion between - * elevator_switch() and here. 
- */ - mutex_lock(&q->sysfs_lock); - if (unlikely(q->elevator)) - goto out_unlock; - - if (*chosen_elevator) { - e = elevator_get(q, chosen_elevator, false); - if (!e) - printk(KERN_ERR "I/O scheduler %s not found\n", - chosen_elevator); - } - - if (!e) - e = elevator_get(q, CONFIG_DEFAULT_IOSCHED, false); - if (!e) { - printk(KERN_ERR - "Default I/O scheduler not found. Using noop.\n"); - e = elevator_get(q, "noop", false); - } - - err = e->ops.sq.elevator_init_fn(q, e); - if (err) - elevator_put(e); -out_unlock: - mutex_unlock(&q->sysfs_lock); - return err; -} - void elevator_exit(struct request_queue *q, struct elevator_queue *e) { mutex_lock(&e->sysfs_lock); - if (e->uses_mq && e->type->ops.mq.exit_sched) + if (e->type->ops.exit_sched) blk_mq_exit_sched(q, e); - else if (!e->uses_mq && e->type->ops.sq.elevator_exit_fn) - e->type->ops.sq.elevator_exit_fn(e); mutex_unlock(&e->sysfs_lock); kobject_put(&e->kobj); @@ -356,68 +290,6 @@ struct request *elv_rb_find(struct rb_root *root, sector_t sector) } EXPORT_SYMBOL(elv_rb_find); -/* - * Insert rq into dispatch queue of q. Queue lock must be held on - * entry. rq is sorted into the dispatch queue. To be used by - * specific elevators. - */ -void elv_dispatch_sort(struct request_queue *q, struct request *rq) -{ - sector_t boundary; - struct list_head *entry; - - if (q->last_merge == rq) - q->last_merge = NULL; - - elv_rqhash_del(q, rq); - - q->nr_sorted--; - - boundary = q->end_sector; - list_for_each_prev(entry, &q->queue_head) { - struct request *pos = list_entry_rq(entry); - - if (req_op(rq) != req_op(pos)) - break; - if (rq_data_dir(rq) != rq_data_dir(pos)) - break; - if (pos->rq_flags & (RQF_STARTED | RQF_SOFTBARRIER)) - break; - if (blk_rq_pos(rq) >= boundary) { - if (blk_rq_pos(pos) < boundary) - continue; - } else { - if (blk_rq_pos(pos) >= boundary) - break; - } - if (blk_rq_pos(rq) >= blk_rq_pos(pos)) - break; - } - - list_add(&rq->queuelist, entry); -} -EXPORT_SYMBOL(elv_dispatch_sort); - -/* - * Insert rq into dispatch queue of q. Queue lock must be held on - * entry. rq is added to the back of the dispatch queue. To be used by - * specific elevators.
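The ordering that elv_dispatch_sort() maintains above is circular: requests at or past q->end_sector (the current head position) come first in ascending order, followed by the wrapped-around requests below it. Mapping each position to its forward distance from the boundary reduces that walk to an ordinary ascending comparison; the early-exit conditions (op mismatch, started/barrier requests) are omitted from this model:

#include <stdint.h>

typedef uint64_t sector_t;

/*
 * Sort key: forward distance from the boundary.  Positions at or past
 * the boundary sort first (ascending); wrapped positions sort after.
 */
static uint64_t dispatch_key(sector_t pos, sector_t boundary)
{
        if (pos >= boundary)
                return pos - boundary;
        return pos + (UINT64_MAX - boundary);
}

Comparing dispatch_key(a, boundary) < dispatch_key(b, boundary) reproduces the insertion order the backwards list walk above enforces.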
- */ -void elv_dispatch_add_tail(struct request_queue *q, struct request *rq) -{ - if (q->last_merge == rq) - q->last_merge = NULL; - - elv_rqhash_del(q, rq); - - q->nr_sorted--; - - q->end_sector = rq_end_sector(rq); - q->boundary_rq = rq; - list_add_tail(&rq->queuelist, &q->queue_head); -} -EXPORT_SYMBOL(elv_dispatch_add_tail); - enum elv_merge elv_merge(struct request_queue *q, struct request **req, struct bio *bio) { @@ -457,10 +329,8 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req, return ELEVATOR_BACK_MERGE; } - if (e->uses_mq && e->type->ops.mq.request_merge) - return e->type->ops.mq.request_merge(q, req, bio); - else if (!e->uses_mq && e->type->ops.sq.elevator_merge_fn) - return e->type->ops.sq.elevator_merge_fn(q, req, bio); + if (e->type->ops.request_merge) + return e->type->ops.request_merge(q, req, bio); return ELEVATOR_NO_MERGE; } @@ -511,10 +381,8 @@ void elv_merged_request(struct request_queue *q, struct request *rq, { struct elevator_queue *e = q->elevator; - if (e->uses_mq && e->type->ops.mq.request_merged) - e->type->ops.mq.request_merged(q, rq, type); - else if (!e->uses_mq && e->type->ops.sq.elevator_merged_fn) - e->type->ops.sq.elevator_merged_fn(q, rq, type); + if (e->type->ops.request_merged) + e->type->ops.request_merged(q, rq, type); if (type == ELEVATOR_BACK_MERGE) elv_rqhash_reposition(q, rq); @@ -526,176 +394,20 @@ void elv_merge_requests(struct request_queue *q, struct request *rq, struct request *next) { struct elevator_queue *e = q->elevator; - bool next_sorted = false; - - if (e->uses_mq && e->type->ops.mq.requests_merged) - e->type->ops.mq.requests_merged(q, rq, next); - else if (e->type->ops.sq.elevator_merge_req_fn) { - next_sorted = (__force bool)(next->rq_flags & RQF_SORTED); - if (next_sorted) - e->type->ops.sq.elevator_merge_req_fn(q, rq, next); - } - elv_rqhash_reposition(q, rq); - - if (next_sorted) { - elv_rqhash_del(q, next); - q->nr_sorted--; - } + if (e->type->ops.requests_merged) + e->type->ops.requests_merged(q, rq, next); + elv_rqhash_reposition(q, rq); q->last_merge = rq; } -void elv_bio_merged(struct request_queue *q, struct request *rq, - struct bio *bio) -{ - struct elevator_queue *e = q->elevator; - - if (WARN_ON_ONCE(e->uses_mq)) - return; - - if (e->type->ops.sq.elevator_bio_merged_fn) - e->type->ops.sq.elevator_bio_merged_fn(q, rq, bio); -} - -void elv_requeue_request(struct request_queue *q, struct request *rq) -{ - /* - * it already went through dequeue, we need to decrement the - * in_flight count again - */ - if (blk_account_rq(rq)) { - q->in_flight[rq_is_sync(rq)]--; - if (rq->rq_flags & RQF_SORTED) - elv_deactivate_rq(q, rq); - } - - rq->rq_flags &= ~RQF_STARTED; - - blk_pm_requeue_request(rq); - - __elv_add_request(q, rq, ELEVATOR_INSERT_REQUEUE); -} - -void elv_drain_elevator(struct request_queue *q) -{ - struct elevator_queue *e = q->elevator; - static int printed; - - if (WARN_ON_ONCE(e->uses_mq)) - return; - - lockdep_assert_held(q->queue_lock); - - while (e->type->ops.sq.elevator_dispatch_fn(q, 1)) - ; - if (q->nr_sorted && !blk_queue_is_zoned(q) && printed++ < 10 ) { - printk(KERN_ERR "%s: forced dispatching is broken " - "(nr_sorted=%u), please report this\n", - q->elevator->type->elevator_name, q->nr_sorted); - } -} - -void __elv_add_request(struct request_queue *q, struct request *rq, int where) -{ - trace_block_rq_insert(q, rq); - - blk_pm_add_request(q, rq); - - rq->q = q; - - if (rq->rq_flags & RQF_SOFTBARRIER) { - /* barriers are scheduling boundary, update end_sector */ - if 
(!blk_rq_is_passthrough(rq)) { - q->end_sector = rq_end_sector(rq); - q->boundary_rq = rq; - } - } else if (!(rq->rq_flags & RQF_ELVPRIV) && - (where == ELEVATOR_INSERT_SORT || - where == ELEVATOR_INSERT_SORT_MERGE)) - where = ELEVATOR_INSERT_BACK; - - switch (where) { - case ELEVATOR_INSERT_REQUEUE: - case ELEVATOR_INSERT_FRONT: - rq->rq_flags |= RQF_SOFTBARRIER; - list_add(&rq->queuelist, &q->queue_head); - break; - - case ELEVATOR_INSERT_BACK: - rq->rq_flags |= RQF_SOFTBARRIER; - elv_drain_elevator(q); - list_add_tail(&rq->queuelist, &q->queue_head); - /* - * We kick the queue here for the following reasons. - * - The elevator might have returned NULL previously - * to delay requests and returned them now. As the - * queue wasn't empty before this request, ll_rw_blk - * won't run the queue on return, resulting in hang. - * - Usually, back inserted requests won't be merged - * with anything. There's no point in delaying queue - * processing. - */ - __blk_run_queue(q); - break; - - case ELEVATOR_INSERT_SORT_MERGE: - /* - * If we succeed in merging this request with one in the - * queue already, we are done - rq has now been freed, - * so no need to do anything further. - */ - if (elv_attempt_insert_merge(q, rq)) - break; - /* fall through */ - case ELEVATOR_INSERT_SORT: - BUG_ON(blk_rq_is_passthrough(rq)); - rq->rq_flags |= RQF_SORTED; - q->nr_sorted++; - if (rq_mergeable(rq)) { - elv_rqhash_add(q, rq); - if (!q->last_merge) - q->last_merge = rq; - } - - /* - * Some ioscheds (cfq) run q->request_fn directly, so - * rq cannot be accessed after calling - * elevator_add_req_fn. - */ - q->elevator->type->ops.sq.elevator_add_req_fn(q, rq); - break; - - case ELEVATOR_INSERT_FLUSH: - rq->rq_flags |= RQF_SOFTBARRIER; - blk_insert_flush(rq); - break; - default: - printk(KERN_ERR "%s: bad insertion point %d\n", - __func__, where); - BUG(); - } -} -EXPORT_SYMBOL(__elv_add_request); - -void elv_add_request(struct request_queue *q, struct request *rq, int where) -{ - unsigned long flags; - - spin_lock_irqsave(q->queue_lock, flags); - __elv_add_request(q, rq, where); - spin_unlock_irqrestore(q->queue_lock, flags); -} -EXPORT_SYMBOL(elv_add_request); - struct request *elv_latter_request(struct request_queue *q, struct request *rq) { struct elevator_queue *e = q->elevator; - if (e->uses_mq && e->type->ops.mq.next_request) - return e->type->ops.mq.next_request(q, rq); - else if (!e->uses_mq && e->type->ops.sq.elevator_latter_req_fn) - return e->type->ops.sq.elevator_latter_req_fn(q, rq); + if (e->type->ops.next_request) + return e->type->ops.next_request(q, rq); return NULL; } @@ -704,66 +416,10 @@ struct request *elv_former_request(struct request_queue *q, struct request *rq) { struct elevator_queue *e = q->elevator; - if (e->uses_mq && e->type->ops.mq.former_request) - return e->type->ops.mq.former_request(q, rq); - if (!e->uses_mq && e->type->ops.sq.elevator_former_req_fn) - return e->type->ops.sq.elevator_former_req_fn(q, rq); - return NULL; -} - -int elv_set_request(struct request_queue *q, struct request *rq, - struct bio *bio, gfp_t gfp_mask) -{ - struct elevator_queue *e = q->elevator; - - if (WARN_ON_ONCE(e->uses_mq)) - return 0; - - if (e->type->ops.sq.elevator_set_req_fn) - return e->type->ops.sq.elevator_set_req_fn(q, rq, bio, gfp_mask); - return 0; -} - -void elv_put_request(struct request_queue *q, struct request *rq) -{ - struct elevator_queue *e = q->elevator; - - if (WARN_ON_ONCE(e->uses_mq)) - return; - - if (e->type->ops.sq.elevator_put_req_fn) - e->type->ops.sq.elevator_put_req_fn(rq); 
-} - -int elv_may_queue(struct request_queue *q, unsigned int op) -{ - struct elevator_queue *e = q->elevator; - - if (WARN_ON_ONCE(e->uses_mq)) - return 0; - - if (e->type->ops.sq.elevator_may_queue_fn) - return e->type->ops.sq.elevator_may_queue_fn(q, op); - - return ELV_MQUEUE_MAY; -} - -void elv_completed_request(struct request_queue *q, struct request *rq) -{ - struct elevator_queue *e = q->elevator; - - if (WARN_ON_ONCE(e->uses_mq)) - return; + if (e->type->ops.former_request) + return e->type->ops.former_request(q, rq); - /* - * request is released from the driver, io must be done - */ - if (blk_account_rq(rq)) { - q->in_flight[rq_is_sync(rq)]--; - if ((rq->rq_flags & RQF_SORTED) && - e->type->ops.sq.elevator_completed_req_fn) - e->type->ops.sq.elevator_completed_req_fn(q, rq); - } + return NULL; } #define to_elv(atr) container_of((atr), struct elv_fs_entry, attr) @@ -832,8 +488,6 @@ int elv_register_queue(struct request_queue *q) } kobject_uevent(&e->kobj, KOBJ_ADD); e->registered = 1; - if (!e->uses_mq && e->type->ops.sq.elevator_registered_fn) - e->type->ops.sq.elevator_registered_fn(q); } return error; } @@ -873,7 +527,7 @@ int elv_register(struct elevator_type *e) /* register, don't allow duplicate names */ spin_lock(&elv_list_lock); - if (elevator_find(e->elevator_name, e->uses_mq)) { + if (elevator_find(e->elevator_name)) { spin_unlock(&elv_list_lock); kmem_cache_destroy(e->icq_cache); return -EBUSY; @@ -881,12 +535,6 @@ int elv_register(struct elevator_type *e) list_add_tail(&e->list, &elv_list); spin_unlock(&elv_list_lock); - /* print pretty message */ - if (elevator_match(e, chosen_elevator) || - (!*chosen_elevator && - elevator_match(e, CONFIG_DEFAULT_IOSCHED))) - def = " (default)"; - printk(KERN_INFO "io scheduler %s registered%s\n", e->elevator_name, def); return 0; @@ -989,71 +637,17 @@ out_unlock: */ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e) { - struct elevator_queue *old = q->elevator; - bool old_registered = false; int err; lockdep_assert_held(&q->sysfs_lock); - if (q->mq_ops) { - blk_mq_freeze_queue(q); - blk_mq_quiesce_queue(q); - - err = elevator_switch_mq(q, new_e); - - blk_mq_unquiesce_queue(q); - blk_mq_unfreeze_queue(q); - - return err; - } - - /* - * Turn on BYPASS and drain all requests w/ elevator private data. - * Block layer doesn't call into a quiesced elevator - all requests - * are directly put on the dispatch list without elevator data - * using INSERT_BACK. All requests have SOFTBARRIER set and no - * merge happens either. 
- */ - if (old) { - old_registered = old->registered; - - blk_queue_bypass_start(q); - - /* unregister and clear all auxiliary data of the old elevator */ - if (old_registered) - elv_unregister_queue(q); - - ioc_clear_queue(q); - } + blk_mq_freeze_queue(q); + blk_mq_quiesce_queue(q); - /* allocate, init and register new elevator */ - err = new_e->ops.sq.elevator_init_fn(q, new_e); - if (err) - goto fail_init; - - err = elv_register_queue(q); - if (err) - goto fail_register; - - /* done, kill the old one and finish */ - if (old) { - elevator_exit(q, old); - blk_queue_bypass_end(q); - } + err = elevator_switch_mq(q, new_e); - blk_add_trace_msg(q, "elv switch: %s", new_e->elevator_name); - - return 0; - -fail_register: - elevator_exit(q, q->elevator); -fail_init: - /* switch failed, restore and re-register old elevator */ - if (old) { - q->elevator = old; - elv_register_queue(q); - blk_queue_bypass_end(q); - } + blk_mq_unquiesce_queue(q); + blk_mq_unfreeze_queue(q); return err; } @@ -1073,7 +667,7 @@ static int __elevator_change(struct request_queue *q, const char *name) /* * Special case for mq, turn off scheduling */ - if (q->mq_ops && !strncmp(name, "none", 4)) + if (!strncmp(name, "none", 4)) return elevator_switch(q, NULL); strlcpy(elevator_name, name, sizeof(elevator_name)); @@ -1091,8 +685,7 @@ static int __elevator_change(struct request_queue *q, const char *name) static inline bool elv_support_iosched(struct request_queue *q) { - if (q->mq_ops && q->tag_set && (q->tag_set->flags & - BLK_MQ_F_NO_SCHED)) + if (q->tag_set && (q->tag_set->flags & BLK_MQ_F_NO_SCHED)) return false; return true; } @@ -1102,7 +695,7 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name, { int ret; - if (!(q->mq_ops || q->request_fn) || !elv_support_iosched(q)) + if (!queue_is_mq(q) || !elv_support_iosched(q)) return count; ret = __elevator_change(q, name); @@ -1117,10 +710,9 @@ ssize_t elv_iosched_show(struct request_queue *q, char *name) struct elevator_queue *e = q->elevator; struct elevator_type *elv = NULL; struct elevator_type *__e; - bool uses_mq = q->mq_ops != NULL; int len = 0; - if (!queue_is_rq_based(q)) + if (!queue_is_mq(q)) return sprintf(name, "none\n"); if (!q->elevator) @@ -1130,19 +722,16 @@ ssize_t elv_iosched_show(struct request_queue *q, char *name) spin_lock(&elv_list_lock); list_for_each_entry(__e, &elv_list, list) { - if (elv && elevator_match(elv, __e->elevator_name) && - (__e->uses_mq == uses_mq)) { + if (elv && elevator_match(elv, __e->elevator_name)) { len += sprintf(name+len, "[%s] ", elv->elevator_name); continue; } - if (__e->uses_mq && q->mq_ops && elv_support_iosched(q)) - len += sprintf(name+len, "%s ", __e->elevator_name); - else if (!__e->uses_mq && !q->mq_ops) + if (elv_support_iosched(q)) len += sprintf(name+len, "%s ", __e->elevator_name); } spin_unlock(&elv_list_lock); - if (q->mq_ops && q->elevator) + if (q->elevator) len += sprintf(name+len, "none"); len += sprintf(len+name, "\n"); diff --git a/block/genhd.c b/block/genhd.c index cff6bdf27226..1dd8fd6613b8 100644 --- a/block/genhd.c +++ b/block/genhd.c @@ -47,51 +47,64 @@ static void disk_release_events(struct gendisk *disk); void part_inc_in_flight(struct request_queue *q, struct hd_struct *part, int rw) { - if (q->mq_ops) + if (queue_is_mq(q)) return; - atomic_inc(&part->in_flight[rw]); + part_stat_local_inc(part, in_flight[rw]); if (part->partno) - atomic_inc(&part_to_disk(part)->part0.in_flight[rw]); + part_stat_local_inc(&part_to_disk(part)->part0, in_flight[rw]); } void part_dec_in_flight(struct 
request_queue *q, struct hd_struct *part, int rw) { - if (q->mq_ops) + if (queue_is_mq(q)) return; - atomic_dec(&part->in_flight[rw]); + part_stat_local_dec(part, in_flight[rw]); if (part->partno) - atomic_dec(&part_to_disk(part)->part0.in_flight[rw]); + part_stat_local_dec(&part_to_disk(part)->part0, in_flight[rw]); } -void part_in_flight(struct request_queue *q, struct hd_struct *part, - unsigned int inflight[2]) +unsigned int part_in_flight(struct request_queue *q, struct hd_struct *part) { - if (q->mq_ops) { - blk_mq_in_flight(q, part, inflight); - return; + int cpu; + unsigned int inflight; + + if (queue_is_mq(q)) { + return blk_mq_in_flight(q, part); } - inflight[0] = atomic_read(&part->in_flight[0]) + - atomic_read(&part->in_flight[1]); - if (part->partno) { - part = &part_to_disk(part)->part0; - inflight[1] = atomic_read(&part->in_flight[0]) + - atomic_read(&part->in_flight[1]); + inflight = 0; + for_each_possible_cpu(cpu) { + inflight += part_stat_local_read_cpu(part, in_flight[0], cpu) + + part_stat_local_read_cpu(part, in_flight[1], cpu); } + if ((int)inflight < 0) + inflight = 0; + + return inflight; } void part_in_flight_rw(struct request_queue *q, struct hd_struct *part, unsigned int inflight[2]) { - if (q->mq_ops) { + int cpu; + + if (queue_is_mq(q)) { blk_mq_in_flight_rw(q, part, inflight); return; } - inflight[0] = atomic_read(&part->in_flight[0]); - inflight[1] = atomic_read(&part->in_flight[1]); + inflight[0] = 0; + inflight[1] = 0; + for_each_possible_cpu(cpu) { + inflight[0] += part_stat_local_read_cpu(part, in_flight[0], cpu); + inflight[1] += part_stat_local_read_cpu(part, in_flight[1], cpu); + } + if ((int)inflight[0] < 0) + inflight[0] = 0; + if ((int)inflight[1] < 0) + inflight[1] = 0; } struct hd_struct *__disk_get_part(struct gendisk *disk, int partno) @@ -1325,8 +1338,7 @@ static int diskstats_show(struct seq_file *seqf, void *v) struct disk_part_iter piter; struct hd_struct *hd; char buf[BDEVNAME_SIZE]; - unsigned int inflight[2]; - int cpu; + unsigned int inflight; /* if (&disk_to_dev(gp)->kobj.entry == block_class.devices.next) @@ -1338,10 +1350,7 @@ static int diskstats_show(struct seq_file *seqf, void *v) disk_part_iter_init(&piter, gp, DISK_PITER_INCL_EMPTY_PART0); while ((hd = disk_part_iter_next(&piter))) { - cpu = part_stat_lock(); - part_round_stats(gp->queue, cpu, hd); - part_stat_unlock(); - part_in_flight(gp->queue, hd, inflight); + inflight = part_in_flight(gp->queue, hd); seq_printf(seqf, "%4d %7d %s " "%lu %lu %lu %u " "%lu %lu %lu %u " @@ -1357,7 +1366,7 @@ static int diskstats_show(struct seq_file *seqf, void *v) part_stat_read(hd, merges[STAT_WRITE]), part_stat_read(hd, sectors[STAT_WRITE]), (unsigned int)part_stat_read_msecs(hd, STAT_WRITE), - inflight[0], + inflight, jiffies_to_msecs(part_stat_read(hd, io_ticks)), jiffies_to_msecs(part_stat_read(hd, time_in_queue)), part_stat_read(hd, ios[STAT_DISCARD]), diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c index eccac01a10b6..ec6a04e01bc1 100644 --- a/block/kyber-iosched.c +++ b/block/kyber-iosched.c @@ -195,7 +195,7 @@ struct kyber_hctx_data { unsigned int batching; struct kyber_ctx_queue *kcqs; struct sbitmap kcq_map[KYBER_NUM_DOMAINS]; - wait_queue_entry_t domain_wait[KYBER_NUM_DOMAINS]; + struct sbq_wait domain_wait[KYBER_NUM_DOMAINS]; struct sbq_wait_state *domain_ws[KYBER_NUM_DOMAINS]; atomic_t wait_index[KYBER_NUM_DOMAINS]; }; @@ -501,10 +501,11 @@ static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx) for (i = 0; i < KYBER_NUM_DOMAINS; i++) { 
INIT_LIST_HEAD(&khd->rqs[i]); - init_waitqueue_func_entry(&khd->domain_wait[i], + khd->domain_wait[i].sbq = NULL; + init_waitqueue_func_entry(&khd->domain_wait[i].wait, kyber_domain_wake); - khd->domain_wait[i].private = hctx; - INIT_LIST_HEAD(&khd->domain_wait[i].entry); + khd->domain_wait[i].wait.private = hctx; + INIT_LIST_HEAD(&khd->domain_wait[i].wait.entry); atomic_set(&khd->wait_index[i], 0); } @@ -576,7 +577,7 @@ static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio) { struct kyber_hctx_data *khd = hctx->sched_data; struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue); - struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw]; + struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]]; unsigned int sched_domain = kyber_sched_domain(bio->bi_opf); struct list_head *rq_list = &kcq->rq_list[sched_domain]; bool merged; @@ -602,7 +603,7 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx, list_for_each_entry_safe(rq, next, rq_list, queuelist) { unsigned int sched_domain = kyber_sched_domain(rq->cmd_flags); - struct kyber_ctx_queue *kcq = &khd->kcqs[rq->mq_ctx->index_hw]; + struct kyber_ctx_queue *kcq = &khd->kcqs[rq->mq_ctx->index_hw[hctx->type]]; struct list_head *head = &kcq->rq_list[sched_domain]; spin_lock(&kcq->lock); @@ -611,7 +612,7 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx, else list_move_tail(&rq->queuelist, head); sbitmap_set_bit(&khd->kcq_map[sched_domain], - rq->mq_ctx->index_hw); + rq->mq_ctx->index_hw[hctx->type]); blk_mq_sched_request_inserted(rq); spin_unlock(&kcq->lock); } @@ -698,12 +699,13 @@ static void kyber_flush_busy_kcqs(struct kyber_hctx_data *khd, flush_busy_kcq, &data); } -static int kyber_domain_wake(wait_queue_entry_t *wait, unsigned mode, int flags, +static int kyber_domain_wake(wait_queue_entry_t *wqe, unsigned mode, int flags, void *key) { - struct blk_mq_hw_ctx *hctx = READ_ONCE(wait->private); + struct blk_mq_hw_ctx *hctx = READ_ONCE(wqe->private); + struct sbq_wait *wait = container_of(wqe, struct sbq_wait, wait); - list_del_init(&wait->entry); + sbitmap_del_wait_queue(wait); blk_mq_run_hw_queue(hctx, true); return 1; } @@ -714,7 +716,7 @@ static int kyber_get_domain_token(struct kyber_queue_data *kqd, { unsigned int sched_domain = khd->cur_domain; struct sbitmap_queue *domain_tokens = &kqd->domain_tokens[sched_domain]; - wait_queue_entry_t *wait = &khd->domain_wait[sched_domain]; + struct sbq_wait *wait = &khd->domain_wait[sched_domain]; struct sbq_wait_state *ws; int nr; @@ -725,11 +727,11 @@ static int kyber_get_domain_token(struct kyber_queue_data *kqd, * run when one becomes available. Note that this is serialized on * khd->lock, but we still need to be careful about the waker. */ - if (nr < 0 && list_empty_careful(&wait->entry)) { + if (nr < 0 && list_empty_careful(&wait->wait.entry)) { ws = sbq_wait_ptr(domain_tokens, &khd->wait_index[sched_domain]); khd->domain_ws[sched_domain] = ws; - add_wait_queue(&ws->wait, wait); + sbitmap_add_wait_queue(domain_tokens, ws, wait); /* * Try again in case a token was freed before we got on the wait @@ -745,10 +747,10 @@ static int kyber_get_domain_token(struct kyber_queue_data *kqd, * between the !list_empty_careful() check and us grabbing the lock, but * list_del_init() is okay with that. 
*/ - if (nr >= 0 && !list_empty_careful(&wait->entry)) { + if (nr >= 0 && !list_empty_careful(&wait->wait.entry)) { ws = khd->domain_ws[sched_domain]; spin_lock_irq(&ws->wait.lock); - list_del_init(&wait->entry); + sbitmap_del_wait_queue(wait); spin_unlock_irq(&ws->wait.lock); } @@ -951,7 +953,7 @@ static int kyber_##name##_waiting_show(void *data, struct seq_file *m) \ { \ struct blk_mq_hw_ctx *hctx = data; \ struct kyber_hctx_data *khd = hctx->sched_data; \ - wait_queue_entry_t *wait = &khd->domain_wait[domain]; \ + wait_queue_entry_t *wait = &khd->domain_wait[domain].wait; \ \ seq_printf(m, "%d\n", !list_empty_careful(&wait->entry)); \ return 0; \ @@ -1017,7 +1019,7 @@ static const struct blk_mq_debugfs_attr kyber_hctx_debugfs_attrs[] = { #endif static struct elevator_type kyber_sched = { - .ops.mq = { + .ops = { .init_sched = kyber_init_sched, .exit_sched = kyber_exit_sched, .init_hctx = kyber_init_hctx, @@ -1032,7 +1034,6 @@ static struct elevator_type kyber_sched = { .dispatch_request = kyber_dispatch_request, .has_work = kyber_has_work, }, - .uses_mq = true, #ifdef CONFIG_BLK_DEBUG_FS .queue_debugfs_attrs = kyber_queue_debugfs_attrs, .hctx_debugfs_attrs = kyber_hctx_debugfs_attrs, diff --git a/block/mq-deadline.c b/block/mq-deadline.c index 099a9e05854c..14288f864e94 100644 --- a/block/mq-deadline.c +++ b/block/mq-deadline.c @@ -373,9 +373,16 @@ done: /* * One confusing aspect here is that we get called for a specific - * hardware queue, but we return a request that may not be for a + * hardware queue, but we may return a request that is for a * different hardware queue. This is because mq-deadline has shared * state for all hardware queues, in terms of sorting, FIFOs, etc. + * + * For a zoned block device, __dd_dispatch_request() may return NULL + * if all the queued write requests are directed at zones that are already + * locked due to on-going write requests. In this case, make sure to mark + * the queue as needing a restart to ensure that the queue is run again + * and the pending writes dispatched once the target zones for the ongoing + * write requests are unlocked in dd_finish_request(). 
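+ *
+ * A concrete sequence (illustrative, not taken from this patch): with two
+ * writes queued for the same zone, dispatching the first write locks the
+ * zone, so __dd_dispatch_request() returns NULL for the second even though
+ * dd->fifo_list[WRITE] is non-empty. The restart mark set below then
+ * guarantees that the zone unlock in dd_finish_request() reruns the queue
+ * and the second write is dispatched.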
*/ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx) { @@ -384,6 +391,9 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx) spin_lock(&dd->lock); rq = __dd_dispatch_request(dd); + if (!rq && blk_queue_is_zoned(hctx->queue) && + !list_empty(&dd->fifo_list[WRITE])) + blk_mq_sched_mark_restart_hctx(hctx); spin_unlock(&dd->lock); return rq; @@ -761,7 +771,7 @@ static const struct blk_mq_debugfs_attr deadline_queue_debugfs_attrs[] = { #endif static struct elevator_type mq_deadline = { - .ops.mq = { + .ops = { .insert_requests = dd_insert_requests, .dispatch_request = dd_dispatch_request, .prepare_request = dd_prepare_request, @@ -777,7 +787,6 @@ static struct elevator_type mq_deadline = { .exit_sched = dd_exit_queue, }, - .uses_mq = true, #ifdef CONFIG_BLK_DEBUG_FS .queue_debugfs_attrs = deadline_queue_debugfs_attrs, #endif diff --git a/block/noop-iosched.c b/block/noop-iosched.c deleted file mode 100644 index 2d1b15d89b45..000000000000 --- a/block/noop-iosched.c +++ /dev/null @@ -1,124 +0,0 @@ -/* - * elevator noop - */ -#include -#include -#include -#include -#include -#include - -struct noop_data { - struct list_head queue; -}; - -static void noop_merged_requests(struct request_queue *q, struct request *rq, - struct request *next) -{ - list_del_init(&next->queuelist); -} - -static int noop_dispatch(struct request_queue *q, int force) -{ - struct noop_data *nd = q->elevator->elevator_data; - struct request *rq; - - rq = list_first_entry_or_null(&nd->queue, struct request, queuelist); - if (rq) { - list_del_init(&rq->queuelist); - elv_dispatch_sort(q, rq); - return 1; - } - return 0; -} - -static void noop_add_request(struct request_queue *q, struct request *rq) -{ - struct noop_data *nd = q->elevator->elevator_data; - - list_add_tail(&rq->queuelist, &nd->queue); -} - -static struct request * -noop_former_request(struct request_queue *q, struct request *rq) -{ - struct noop_data *nd = q->elevator->elevator_data; - - if (rq->queuelist.prev == &nd->queue) - return NULL; - return list_prev_entry(rq, queuelist); -} - -static struct request * -noop_latter_request(struct request_queue *q, struct request *rq) -{ - struct noop_data *nd = q->elevator->elevator_data; - - if (rq->queuelist.next == &nd->queue) - return NULL; - return list_next_entry(rq, queuelist); -} - -static int noop_init_queue(struct request_queue *q, struct elevator_type *e) -{ - struct noop_data *nd; - struct elevator_queue *eq; - - eq = elevator_alloc(q, e); - if (!eq) - return -ENOMEM; - - nd = kmalloc_node(sizeof(*nd), GFP_KERNEL, q->node); - if (!nd) { - kobject_put(&eq->kobj); - return -ENOMEM; - } - eq->elevator_data = nd; - - INIT_LIST_HEAD(&nd->queue); - - spin_lock_irq(q->queue_lock); - q->elevator = eq; - spin_unlock_irq(q->queue_lock); - return 0; -} - -static void noop_exit_queue(struct elevator_queue *e) -{ - struct noop_data *nd = e->elevator_data; - - BUG_ON(!list_empty(&nd->queue)); - kfree(nd); -} - -static struct elevator_type elevator_noop = { - .ops.sq = { - .elevator_merge_req_fn = noop_merged_requests, - .elevator_dispatch_fn = noop_dispatch, - .elevator_add_req_fn = noop_add_request, - .elevator_former_req_fn = noop_former_request, - .elevator_latter_req_fn = noop_latter_request, - .elevator_init_fn = noop_init_queue, - .elevator_exit_fn = noop_exit_queue, - }, - .elevator_name = "noop", - .elevator_owner = THIS_MODULE, -}; - -static int __init noop_init(void) -{ - return elv_register(&elevator_noop); -} - -static void __exit noop_exit(void) -{ - 
elv_unregister(&elevator_noop); -} - -module_init(noop_init); -module_exit(noop_exit); - - -MODULE_AUTHOR("Jens Axboe"); -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("No-op IO scheduler"); diff --git a/block/partition-generic.c b/block/partition-generic.c index d3d14e81fb12..8e596a8dff32 100644 --- a/block/partition-generic.c +++ b/block/partition-generic.c @@ -120,13 +120,9 @@ ssize_t part_stat_show(struct device *dev, { struct hd_struct *p = dev_to_part(dev); struct request_queue *q = part_to_disk(p)->queue; - unsigned int inflight[2]; - int cpu; + unsigned int inflight; - cpu = part_stat_lock(); - part_round_stats(q, cpu, p); - part_stat_unlock(); - part_in_flight(q, p, inflight); + inflight = part_in_flight(q, p); return sprintf(buf, "%8lu %8lu %8llu %8u " "%8lu %8lu %8llu %8u " @@ -141,7 +137,7 @@ ssize_t part_stat_show(struct device *dev, part_stat_read(p, merges[STAT_WRITE]), (unsigned long long)part_stat_read(p, sectors[STAT_WRITE]), (unsigned int)part_stat_read_msecs(p, STAT_WRITE), - inflight[0], + inflight, jiffies_to_msecs(part_stat_read(p, io_ticks)), jiffies_to_msecs(part_stat_read(p, time_in_queue)), part_stat_read(p, ios[STAT_DISCARD]), @@ -249,9 +245,10 @@ struct device_type part_type = { .uevent = part_uevent, }; -static void delete_partition_rcu_cb(struct rcu_head *head) +static void delete_partition_work_fn(struct work_struct *work) { - struct hd_struct *part = container_of(head, struct hd_struct, rcu_head); + struct hd_struct *part = container_of(to_rcu_work(work), struct hd_struct, + rcu_work); part->start_sect = 0; part->nr_sects = 0; @@ -262,7 +259,8 @@ static void delete_partition_rcu_cb(struct rcu_head *head) void __delete_partition(struct percpu_ref *ref) { struct hd_struct *part = container_of(ref, struct hd_struct, ref); - call_rcu(&part->rcu_head, delete_partition_rcu_cb); + INIT_RCU_WORK(&part->rcu_work, delete_partition_work_fn); + queue_rcu_work(system_wq, &part->rcu_work); } /* diff --git a/crypto/Kconfig b/crypto/Kconfig index 05c91eb10ca1..9511144ac7b5 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -430,11 +430,14 @@ config CRYPTO_CTS help CTS: Cipher Text Stealing This is the Cipher Text Stealing mode as described by - Section 8 of rfc2040 and referenced by rfc3962. - (rfc3962 includes errata information in its Appendix A) + Section 8 of rfc2040 and referenced by rfc3962 + (rfc3962 includes errata information in its Appendix A) or + CBC-CS3 as defined by NIST in Sp800-38A addendum from Oct 2010. This mode is required for Kerberos gss mechanism support for AES encryption. + See: https://csrc.nist.gov/publications/detail/sp/800-38a/addendum/final + config CRYPTO_ECB tristate "ECB support" select CRYPTO_BLKCIPHER @@ -493,6 +496,50 @@ config CRYPTO_KEYWRAP Support for key wrapping (NIST SP800-38F / RFC3394) without padding. +config CRYPTO_NHPOLY1305 + tristate + select CRYPTO_HASH + select CRYPTO_POLY1305 + +config CRYPTO_NHPOLY1305_SSE2 + tristate "NHPoly1305 hash function (x86_64 SSE2 implementation)" + depends on X86 && 64BIT + select CRYPTO_NHPOLY1305 + help + SSE2 optimized implementation of the hash function used by the + Adiantum encryption mode. + +config CRYPTO_NHPOLY1305_AVX2 + tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)" + depends on X86 && 64BIT + select CRYPTO_NHPOLY1305 + help + AVX2 optimized implementation of the hash function used by the + Adiantum encryption mode. 
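The entry that follows introduces the Adiantum template itself. As a usage
sketch from kernel code (a hedged illustration: error handling is elided, and
the instance name "adiantum(xchacha12,aes)" follows the template comment in
crypto/adiantum.c later in this patch):

	#include <crypto/skcipher.h>

	static struct crypto_skcipher *adiantum_alloc_example(void)
	{
		/*
		 * Instantiates adiantum(xchacha12,aes) with the default
		 * "nhpoly1305" hash; returns an ERR_PTR() value if the
		 * underlying algorithms are unavailable.
		 */
		return crypto_alloc_skcipher("adiantum(xchacha12,aes)", 0, 0);
	}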
+
+config CRYPTO_ADIANTUM
+	tristate "Adiantum support"
+	select CRYPTO_CHACHA20
+	select CRYPTO_POLY1305
+	select CRYPTO_NHPOLY1305
+	help
+	  Adiantum is a tweakable, length-preserving encryption mode
+	  designed for fast and secure disk encryption, especially on
+	  CPUs without dedicated crypto instructions.  It encrypts
+	  each sector using the XChaCha12 stream cipher, two passes of
+	  an ε-almost-∆-universal hash function, and an invocation of
+	  the AES-256 block cipher on a single 16-byte block.  On CPUs
+	  without AES instructions, Adiantum is much faster than
+	  AES-XTS.
+
+	  Adiantum's security is provably reducible to that of its
+	  underlying stream and block ciphers, subject to a security
+	  bound.  Unlike XTS, Adiantum is a true wide-block encryption
+	  mode, so it actually provides an even stronger notion of
+	  security than XTS, again subject to that bound.
+
+	  If unsure, say N.
+
 comment "Hash modes"

 config CRYPTO_CMAC
@@ -936,6 +983,18 @@ config CRYPTO_SM3
	  http://www.oscca.gov.cn/UpFile/20101222141857786.pdf
	  https://datatracker.ietf.org/doc/html/draft-shen-sm3-hash

+config CRYPTO_STREEBOG
+	tristate "Streebog Hash Function"
+	select CRYPTO_HASH
+	help
+	  Streebog Hash Function (GOST R 34.11-2012, RFC 6986) is one of the
+	  Russian cryptographic standard algorithms (called GOST algorithms).
+	  This setting enables two hash algorithms, with 256-bit and 512-bit
+	  output.
+
+	  References:
+	  https://tc26.ru/upload/iblock/fed/feddbb4d26b685903faa2ba11aea43f6.pdf
+	  https://tools.ietf.org/html/rfc6986
+
 config CRYPTO_TGR192
	tristate "Tiger digest algorithms"
	select CRYPTO_HASH
@@ -1006,7 +1065,8 @@ config CRYPTO_AES_TI
	  8 for decryption), this implementation uses just two S-boxes of
	  256 bytes each, and attempts to eliminate data dependent latencies
	  by prefetching the entire table into the cache at the start of each
-	  block.
+	  block.  Interrupts are also disabled to avoid races where cachelines
+	  are evicted when the CPU is interrupted to do something else.

 config CRYPTO_AES_586
	tristate "AES cipher algorithms (i586)"
@@ -1387,32 +1447,34 @@ config CRYPTO_SALSA20
	  Bernstein .  See

 config CRYPTO_CHACHA20
-	tristate "ChaCha20 cipher algorithm"
+	tristate "ChaCha stream cipher algorithms"
	select CRYPTO_BLKCIPHER
	help
-	  ChaCha20 cipher algorithm, RFC7539.
+	  The ChaCha20, XChaCha20, and XChaCha12 stream cipher algorithms.

	  ChaCha20 is a 256-bit high-speed stream cipher designed by Daniel J.
	  Bernstein and further specified in RFC7539 for use in IETF protocols.
-	  This is the portable C implementation of ChaCha20.
-
-	  See also:
+	  This is the portable C implementation of ChaCha20.  See also:
+
+	  XChaCha20 is the application of the XSalsa20 construction to ChaCha20
+	  rather than to Salsa20.  XChaCha20 extends ChaCha20's nonce length
+	  from 64 bits (or 96 bits using the RFC7539 convention) to 192 bits,
+	  while provably retaining ChaCha20's security.  See also:
+
+
+	  XChaCha12 is XChaCha20 reduced to 12 rounds, with correspondingly
+	  reduced security margin but increased performance.  It can be useful
+	  in performance-sensitive scenarios.
+
 config CRYPTO_CHACHA20_X86_64
-	tristate "ChaCha20 cipher algorithm (x86_64/SSSE3/AVX2)"
+	tristate "ChaCha stream cipher algorithms (x86_64/SSSE3/AVX2/AVX-512VL)"
	depends on X86 && 64BIT
	select CRYPTO_BLKCIPHER
	select CRYPTO_CHACHA20
	help
-	  ChaCha20 cipher algorithm, RFC7539.
-
-	  ChaCha20 is a 256-bit high-speed stream cipher designed by Daniel J.
-	  Bernstein and further specified in RFC7539 for use in IETF protocols.
- This is the x86_64 assembler implementation using SIMD instructions. - - See also: - + SSSE3, AVX2, and AVX-512VL optimized implementations of the ChaCha20, + XChaCha20, and XChaCha12 stream ciphers. config CRYPTO_SEED tristate "SEED cipher algorithm" @@ -1812,7 +1874,8 @@ config CRYPTO_USER_API_AEAD cipher algorithms. config CRYPTO_STATS - bool + bool "Crypto usage statistics for User-space" + depends on CRYPTO_USER help This option enables the gathering of crypto stats. This will collect: @@ -1826,7 +1889,7 @@ config CRYPTO_HASH_INFO bool source "drivers/crypto/Kconfig" -source crypto/asymmetric_keys/Kconfig -source certs/Kconfig +source "crypto/asymmetric_keys/Kconfig" +source "certs/Kconfig" endif # if CRYPTO diff --git a/crypto/Makefile b/crypto/Makefile index 5c207c76abf7..799ed5e94606 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -54,7 +54,8 @@ cryptomgr-y := algboss.o testmgr.o obj-$(CONFIG_CRYPTO_MANAGER2) += cryptomgr.o obj-$(CONFIG_CRYPTO_USER) += crypto_user.o -crypto_user-y := crypto_user_base.o crypto_user_stat.o +crypto_user-y := crypto_user_base.o +crypto_user-$(CONFIG_CRYPTO_STATS) += crypto_user_stat.o obj-$(CONFIG_CRYPTO_CMAC) += cmac.o obj-$(CONFIG_CRYPTO_HMAC) += hmac.o obj-$(CONFIG_CRYPTO_VMAC) += vmac.o @@ -71,6 +72,7 @@ obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o obj-$(CONFIG_CRYPTO_SM3) += sm3_generic.o +obj-$(CONFIG_CRYPTO_STREEBOG) += streebog_generic.o obj-$(CONFIG_CRYPTO_WP512) += wp512.o CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149 obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o @@ -84,6 +86,8 @@ obj-$(CONFIG_CRYPTO_LRW) += lrw.o obj-$(CONFIG_CRYPTO_XTS) += xts.o obj-$(CONFIG_CRYPTO_CTR) += ctr.o obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o +obj-$(CONFIG_CRYPTO_ADIANTUM) += adiantum.o +obj-$(CONFIG_CRYPTO_NHPOLY1305) += nhpoly1305.o obj-$(CONFIG_CRYPTO_GCM) += gcm.o obj-$(CONFIG_CRYPTO_CCM) += ccm.o obj-$(CONFIG_CRYPTO_CHACHA20POLY1305) += chacha20poly1305.o @@ -116,7 +120,7 @@ obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o obj-$(CONFIG_CRYPTO_SEED) += seed.o obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o -obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o +obj-$(CONFIG_CRYPTO_CHACHA20) += chacha_generic.o obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c index 8882e90e868e..b339587073c3 100644 --- a/crypto/ablkcipher.c +++ b/crypto/ablkcipher.c @@ -365,23 +365,18 @@ static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_blkcipher rblkcipher; - strncpy(rblkcipher.type, "ablkcipher", sizeof(rblkcipher.type)); - strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "", - sizeof(rblkcipher.geniv)); - rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0'; + memset(&rblkcipher, 0, sizeof(rblkcipher)); + + strscpy(rblkcipher.type, "ablkcipher", sizeof(rblkcipher.type)); + strscpy(rblkcipher.geniv, "", sizeof(rblkcipher.geniv)); rblkcipher.blocksize = alg->cra_blocksize; rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize; rblkcipher.max_keysize = alg->cra_ablkcipher.max_keysize; rblkcipher.ivsize = alg->cra_ablkcipher.ivsize; - if (nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER, - sizeof(struct crypto_report_blkcipher), &rblkcipher)) - goto nla_put_failure; - return 0; - 
-nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER, + sizeof(rblkcipher), &rblkcipher); } #else static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) @@ -403,7 +398,7 @@ static void crypto_ablkcipher_show(struct seq_file *m, struct crypto_alg *alg) seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize); seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize); seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize); - seq_printf(m, "geniv : %s\n", ablkcipher->geniv ?: ""); + seq_printf(m, "geniv : \n"); } const struct crypto_type crypto_ablkcipher_type = { @@ -415,78 +410,3 @@ const struct crypto_type crypto_ablkcipher_type = { .report = crypto_ablkcipher_report, }; EXPORT_SYMBOL_GPL(crypto_ablkcipher_type); - -static int crypto_init_givcipher_ops(struct crypto_tfm *tfm, u32 type, - u32 mask) -{ - struct ablkcipher_alg *alg = &tfm->__crt_alg->cra_ablkcipher; - struct ablkcipher_tfm *crt = &tfm->crt_ablkcipher; - - if (alg->ivsize > PAGE_SIZE / 8) - return -EINVAL; - - crt->setkey = tfm->__crt_alg->cra_flags & CRYPTO_ALG_GENIV ? - alg->setkey : setkey; - crt->encrypt = alg->encrypt; - crt->decrypt = alg->decrypt; - crt->base = __crypto_ablkcipher_cast(tfm); - crt->ivsize = alg->ivsize; - - return 0; -} - -#ifdef CONFIG_NET -static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg) -{ - struct crypto_report_blkcipher rblkcipher; - - strncpy(rblkcipher.type, "givcipher", sizeof(rblkcipher.type)); - strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "", - sizeof(rblkcipher.geniv)); - rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0'; - - rblkcipher.blocksize = alg->cra_blocksize; - rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize; - rblkcipher.max_keysize = alg->cra_ablkcipher.max_keysize; - rblkcipher.ivsize = alg->cra_ablkcipher.ivsize; - - if (nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER, - sizeof(struct crypto_report_blkcipher), &rblkcipher)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; -} -#else -static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg) -{ - return -ENOSYS; -} -#endif - -static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg) - __maybe_unused; -static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg) -{ - struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher; - - seq_printf(m, "type : givcipher\n"); - seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ? 
- "yes" : "no"); - seq_printf(m, "blocksize : %u\n", alg->cra_blocksize); - seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize); - seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize); - seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize); - seq_printf(m, "geniv : %s\n", ablkcipher->geniv ?: ""); -} - -const struct crypto_type crypto_givcipher_type = { - .ctxsize = crypto_ablkcipher_ctxsize, - .init = crypto_init_givcipher_ops, -#ifdef CONFIG_PROC_FS - .show = crypto_givcipher_show, -#endif - .report = crypto_givcipher_report, -}; -EXPORT_SYMBOL_GPL(crypto_givcipher_type); diff --git a/crypto/acompress.c b/crypto/acompress.c index 1544b7c057fb..0c5bedd06e70 100644 --- a/crypto/acompress.c +++ b/crypto/acompress.c @@ -33,15 +33,11 @@ static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_acomp racomp; - strncpy(racomp.type, "acomp", sizeof(racomp.type)); + memset(&racomp, 0, sizeof(racomp)); - if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMP, - sizeof(struct crypto_report_acomp), &racomp)) - goto nla_put_failure; - return 0; + strscpy(racomp.type, "acomp", sizeof(racomp.type)); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_ACOMP, sizeof(racomp), &racomp); } #else static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/adiantum.c b/crypto/adiantum.c new file mode 100644 index 000000000000..6651e713c45d --- /dev/null +++ b/crypto/adiantum.c @@ -0,0 +1,664 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Adiantum length-preserving encryption mode + * + * Copyright 2018 Google LLC + */ + +/* + * Adiantum is a tweakable, length-preserving encryption mode designed for fast + * and secure disk encryption, especially on CPUs without dedicated crypto + * instructions. Adiantum encrypts each sector using the XChaCha12 stream + * cipher, two passes of an ε-almost-∆-universal (ε-∆U) hash function based on + * NH and Poly1305, and an invocation of the AES-256 block cipher on a single + * 16-byte block. See the paper for details: + * + * Adiantum: length-preserving encryption for entry-level processors + * (https://eprint.iacr.org/2018/720.pdf) + * + * For flexibility, this implementation also allows other ciphers: + * + * - Stream cipher: XChaCha12 or XChaCha20 + * - Block cipher: any with a 128-bit block size and 256-bit key + * + * This implementation doesn't currently allow other ε-∆U hash functions, i.e. + * HPolyC is not supported. This is because Adiantum is ~20% faster than HPolyC + * but still provably as secure, and also the ε-∆U hash function of HBSH is + * formally defined to take two inputs (tweak, message) which makes it difficult + * to wrap with the crypto_shash API. Rather, some details need to be handled + * here. Nevertheless, if needed in the future, support for other ε-∆U hash + * functions could be added here. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +/* + * Size of right-hand part of input data, in bytes; also the size of the block + * cipher's block size and the hash function's output. + */ +#define BLOCKCIPHER_BLOCK_SIZE 16 + +/* Size of the block cipher key (K_E) in bytes */ +#define BLOCKCIPHER_KEY_SIZE 32 + +/* Size of the hash key (K_H) in bytes */ +#define HASH_KEY_SIZE (POLY1305_BLOCK_SIZE + NHPOLY1305_KEY_SIZE) + +/* + * The specification allows variable-length tweaks, but Linux's crypto API + * currently only allows algorithms to support a single length. 
The "natural" + * tweak length for Adiantum is 16, since that fits into one Poly1305 block for + * the best performance. But longer tweaks are useful for fscrypt, to avoid + * needing to derive per-file keys. So instead we use two blocks, or 32 bytes. + */ +#define TWEAK_SIZE 32 + +struct adiantum_instance_ctx { + struct crypto_skcipher_spawn streamcipher_spawn; + struct crypto_spawn blockcipher_spawn; + struct crypto_shash_spawn hash_spawn; +}; + +struct adiantum_tfm_ctx { + struct crypto_skcipher *streamcipher; + struct crypto_cipher *blockcipher; + struct crypto_shash *hash; + struct poly1305_key header_hash_key; +}; + +struct adiantum_request_ctx { + + /* + * Buffer for right-hand part of data, i.e. + * + * P_L => P_M => C_M => C_R when encrypting, or + * C_R => C_M => P_M => P_L when decrypting. + * + * Also used to build the IV for the stream cipher. + */ + union { + u8 bytes[XCHACHA_IV_SIZE]; + __le32 words[XCHACHA_IV_SIZE / sizeof(__le32)]; + le128 bignum; /* interpret as element of Z/(2^{128}Z) */ + } rbuf; + + bool enc; /* true if encrypting, false if decrypting */ + + /* + * The result of the Poly1305 ε-∆U hash function applied to + * (bulk length, tweak) + */ + le128 header_hash; + + /* Sub-requests, must be last */ + union { + struct shash_desc hash_desc; + struct skcipher_request streamcipher_req; + } u; +}; + +/* + * Given the XChaCha stream key K_S, derive the block cipher key K_E and the + * hash key K_H as follows: + * + * K_E || K_H || ... = XChaCha(key=K_S, nonce=1||0^191) + * + * Note that this denotes using bits from the XChaCha keystream, which here we + * get indirectly by encrypting a buffer containing all 0's. + */ +static int adiantum_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keylen) +{ + struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct { + u8 iv[XCHACHA_IV_SIZE]; + u8 derived_keys[BLOCKCIPHER_KEY_SIZE + HASH_KEY_SIZE]; + struct scatterlist sg; + struct crypto_wait wait; + struct skcipher_request req; /* must be last */ + } *data; + u8 *keyp; + int err; + + /* Set the stream cipher key (K_S) */ + crypto_skcipher_clear_flags(tctx->streamcipher, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(tctx->streamcipher, + crypto_skcipher_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_skcipher_setkey(tctx->streamcipher, key, keylen); + crypto_skcipher_set_flags(tfm, + crypto_skcipher_get_flags(tctx->streamcipher) & + CRYPTO_TFM_RES_MASK); + if (err) + return err; + + /* Derive the subkeys */ + data = kzalloc(sizeof(*data) + + crypto_skcipher_reqsize(tctx->streamcipher), GFP_KERNEL); + if (!data) + return -ENOMEM; + data->iv[0] = 1; + sg_init_one(&data->sg, data->derived_keys, sizeof(data->derived_keys)); + crypto_init_wait(&data->wait); + skcipher_request_set_tfm(&data->req, tctx->streamcipher); + skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP | + CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &data->wait); + skcipher_request_set_crypt(&data->req, &data->sg, &data->sg, + sizeof(data->derived_keys), data->iv); + err = crypto_wait_req(crypto_skcipher_encrypt(&data->req), &data->wait); + if (err) + goto out; + keyp = data->derived_keys; + + /* Set the block cipher key (K_E) */ + crypto_cipher_clear_flags(tctx->blockcipher, CRYPTO_TFM_REQ_MASK); + crypto_cipher_set_flags(tctx->blockcipher, + crypto_skcipher_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_cipher_setkey(tctx->blockcipher, keyp, + BLOCKCIPHER_KEY_SIZE); + crypto_skcipher_set_flags(tfm, + crypto_cipher_get_flags(tctx->blockcipher) & + 
CRYPTO_TFM_RES_MASK); + if (err) + goto out; + keyp += BLOCKCIPHER_KEY_SIZE; + + /* Set the hash key (K_H) */ + poly1305_core_setkey(&tctx->header_hash_key, keyp); + keyp += POLY1305_BLOCK_SIZE; + + crypto_shash_clear_flags(tctx->hash, CRYPTO_TFM_REQ_MASK); + crypto_shash_set_flags(tctx->hash, crypto_skcipher_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_shash_setkey(tctx->hash, keyp, NHPOLY1305_KEY_SIZE); + crypto_skcipher_set_flags(tfm, crypto_shash_get_flags(tctx->hash) & + CRYPTO_TFM_RES_MASK); + keyp += NHPOLY1305_KEY_SIZE; + WARN_ON(keyp != &data->derived_keys[ARRAY_SIZE(data->derived_keys)]); +out: + kzfree(data); + return err; +} + +/* Addition in Z/(2^{128}Z) */ +static inline void le128_add(le128 *r, const le128 *v1, const le128 *v2) +{ + u64 x = le64_to_cpu(v1->b); + u64 y = le64_to_cpu(v2->b); + + r->b = cpu_to_le64(x + y); + r->a = cpu_to_le64(le64_to_cpu(v1->a) + le64_to_cpu(v2->a) + + (x + y < x)); +} + +/* Subtraction in Z/(2^{128}Z) */ +static inline void le128_sub(le128 *r, const le128 *v1, const le128 *v2) +{ + u64 x = le64_to_cpu(v1->b); + u64 y = le64_to_cpu(v2->b); + + r->b = cpu_to_le64(x - y); + r->a = cpu_to_le64(le64_to_cpu(v1->a) - le64_to_cpu(v2->a) - + (x - y > x)); +} + +/* + * Apply the Poly1305 ε-∆U hash function to (bulk length, tweak) and save the + * result to rctx->header_hash. This is the calculation + * + * H_T ← Poly1305_{K_T}(bin_{128}(|L|) || T) + * + * from the procedure in section 6.4 of the Adiantum paper. The resulting value + * is reused in both the first and second hash steps. Specifically, it's added + * to the result of an independently keyed ε-∆U hash function (for equal length + * inputs only) taken over the left-hand part (the "bulk") of the message, to + * give the overall Adiantum hash of the (tweak, left-hand part) pair. 
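+ *
+ * As a worked instance (illustrative numbers, not from the patch): for a
+ * 4096-byte request, bulk_len = 4096 - 16 = 4080 bytes, so the header
+ * block encodes bin_128(4080 * 8) = bin_128(32640); together with the
+ * 32-byte tweak, H_T is a Poly1305 evaluation over exactly three 16-byte
+ * blocks.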
+ */ +static void adiantum_hash_header(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct adiantum_request_ctx *rctx = skcipher_request_ctx(req); + const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE; + struct { + __le64 message_bits; + __le64 padding; + } header = { + .message_bits = cpu_to_le64((u64)bulk_len * 8) + }; + struct poly1305_state state; + + poly1305_core_init(&state); + + BUILD_BUG_ON(sizeof(header) % POLY1305_BLOCK_SIZE != 0); + poly1305_core_blocks(&state, &tctx->header_hash_key, + &header, sizeof(header) / POLY1305_BLOCK_SIZE); + + BUILD_BUG_ON(TWEAK_SIZE % POLY1305_BLOCK_SIZE != 0); + poly1305_core_blocks(&state, &tctx->header_hash_key, req->iv, + TWEAK_SIZE / POLY1305_BLOCK_SIZE); + + poly1305_core_emit(&state, &rctx->header_hash); +} + +/* Hash the left-hand part (the "bulk") of the message using NHPoly1305 */ +static int adiantum_hash_message(struct skcipher_request *req, + struct scatterlist *sgl, le128 *digest) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct adiantum_request_ctx *rctx = skcipher_request_ctx(req); + const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE; + struct shash_desc *hash_desc = &rctx->u.hash_desc; + struct sg_mapping_iter miter; + unsigned int i, n; + int err; + + hash_desc->tfm = tctx->hash; + hash_desc->flags = 0; + + err = crypto_shash_init(hash_desc); + if (err) + return err; + + sg_miter_start(&miter, sgl, sg_nents(sgl), + SG_MITER_FROM_SG | SG_MITER_ATOMIC); + for (i = 0; i < bulk_len; i += n) { + sg_miter_next(&miter); + n = min_t(unsigned int, miter.length, bulk_len - i); + err = crypto_shash_update(hash_desc, miter.addr, n); + if (err) + break; + } + sg_miter_stop(&miter); + if (err) + return err; + + return crypto_shash_final(hash_desc, (u8 *)digest); +} + +/* Continue Adiantum encryption/decryption after the stream cipher step */ +static int adiantum_finish(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct adiantum_request_ctx *rctx = skcipher_request_ctx(req); + const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE; + le128 digest; + int err; + + /* If decrypting, decrypt C_M with the block cipher to get P_M */ + if (!rctx->enc) + crypto_cipher_decrypt_one(tctx->blockcipher, rctx->rbuf.bytes, + rctx->rbuf.bytes); + + /* + * Second hash step + * enc: C_R = C_M - H_{K_H}(T, C_L) + * dec: P_R = P_M - H_{K_H}(T, P_L) + */ + err = adiantum_hash_message(req, req->dst, &digest); + if (err) + return err; + le128_add(&digest, &digest, &rctx->header_hash); + le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest); + scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->dst, + bulk_len, BLOCKCIPHER_BLOCK_SIZE, 1); + return 0; +} + +static void adiantum_streamcipher_done(struct crypto_async_request *areq, + int err) +{ + struct skcipher_request *req = areq->data; + + if (!err) + err = adiantum_finish(req); + + skcipher_request_complete(req, err); +} + +static int adiantum_crypt(struct skcipher_request *req, bool enc) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct adiantum_request_ctx *rctx = skcipher_request_ctx(req); + const unsigned int bulk_len = req->cryptlen - 
BLOCKCIPHER_BLOCK_SIZE; + unsigned int stream_len; + le128 digest; + int err; + + if (req->cryptlen < BLOCKCIPHER_BLOCK_SIZE) + return -EINVAL; + + rctx->enc = enc; + + /* + * First hash step + * enc: P_M = P_R + H_{K_H}(T, P_L) + * dec: C_M = C_R + H_{K_H}(T, C_L) + */ + adiantum_hash_header(req); + err = adiantum_hash_message(req, req->src, &digest); + if (err) + return err; + le128_add(&digest, &digest, &rctx->header_hash); + scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->src, + bulk_len, BLOCKCIPHER_BLOCK_SIZE, 0); + le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest); + + /* If encrypting, encrypt P_M with the block cipher to get C_M */ + if (enc) + crypto_cipher_encrypt_one(tctx->blockcipher, rctx->rbuf.bytes, + rctx->rbuf.bytes); + + /* Initialize the rest of the XChaCha IV (first part is C_M) */ + BUILD_BUG_ON(BLOCKCIPHER_BLOCK_SIZE != 16); + BUILD_BUG_ON(XCHACHA_IV_SIZE != 32); /* nonce || stream position */ + rctx->rbuf.words[4] = cpu_to_le32(1); + rctx->rbuf.words[5] = 0; + rctx->rbuf.words[6] = 0; + rctx->rbuf.words[7] = 0; + + /* + * XChaCha needs to be done on all the data except the last 16 bytes; + * for disk encryption that usually means 4080 or 496 bytes. But ChaCha + * implementations tend to be most efficient when passed a whole number + * of 64-byte ChaCha blocks, or sometimes even a multiple of 256 bytes. + * And here it doesn't matter whether the last 16 bytes are written to, + * as the second hash step will overwrite them. Thus, round the XChaCha + * length up to the next 64-byte boundary if possible. + */ + stream_len = bulk_len; + if (round_up(stream_len, CHACHA_BLOCK_SIZE) <= req->cryptlen) + stream_len = round_up(stream_len, CHACHA_BLOCK_SIZE); + + skcipher_request_set_tfm(&rctx->u.streamcipher_req, tctx->streamcipher); + skcipher_request_set_crypt(&rctx->u.streamcipher_req, req->src, + req->dst, stream_len, &rctx->rbuf); + skcipher_request_set_callback(&rctx->u.streamcipher_req, + req->base.flags, + adiantum_streamcipher_done, req); + return crypto_skcipher_encrypt(&rctx->u.streamcipher_req) ?: + adiantum_finish(req); +} + +static int adiantum_encrypt(struct skcipher_request *req) +{ + return adiantum_crypt(req, true); +} + +static int adiantum_decrypt(struct skcipher_request *req) +{ + return adiantum_crypt(req, false); +} + +static int adiantum_init_tfm(struct crypto_skcipher *tfm) +{ + struct skcipher_instance *inst = skcipher_alg_instance(tfm); + struct adiantum_instance_ctx *ictx = skcipher_instance_ctx(inst); + struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct crypto_skcipher *streamcipher; + struct crypto_cipher *blockcipher; + struct crypto_shash *hash; + unsigned int subreq_size; + int err; + + streamcipher = crypto_spawn_skcipher(&ictx->streamcipher_spawn); + if (IS_ERR(streamcipher)) + return PTR_ERR(streamcipher); + + blockcipher = crypto_spawn_cipher(&ictx->blockcipher_spawn); + if (IS_ERR(blockcipher)) { + err = PTR_ERR(blockcipher); + goto err_free_streamcipher; + } + + hash = crypto_spawn_shash(&ictx->hash_spawn); + if (IS_ERR(hash)) { + err = PTR_ERR(hash); + goto err_free_blockcipher; + } + + tctx->streamcipher = streamcipher; + tctx->blockcipher = blockcipher; + tctx->hash = hash; + + BUILD_BUG_ON(offsetofend(struct adiantum_request_ctx, u) != + sizeof(struct adiantum_request_ctx)); + subreq_size = max(FIELD_SIZEOF(struct adiantum_request_ctx, + u.hash_desc) + + crypto_shash_descsize(hash), + FIELD_SIZEOF(struct adiantum_request_ctx, + u.streamcipher_req) + + crypto_skcipher_reqsize(streamcipher)); + + 
crypto_skcipher_set_reqsize(tfm, + offsetof(struct adiantum_request_ctx, u) + + subreq_size); + return 0; + +err_free_blockcipher: + crypto_free_cipher(blockcipher); +err_free_streamcipher: + crypto_free_skcipher(streamcipher); + return err; +} + +static void adiantum_exit_tfm(struct crypto_skcipher *tfm) +{ + struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + + crypto_free_skcipher(tctx->streamcipher); + crypto_free_cipher(tctx->blockcipher); + crypto_free_shash(tctx->hash); +} + +static void adiantum_free_instance(struct skcipher_instance *inst) +{ + struct adiantum_instance_ctx *ictx = skcipher_instance_ctx(inst); + + crypto_drop_skcipher(&ictx->streamcipher_spawn); + crypto_drop_spawn(&ictx->blockcipher_spawn); + crypto_drop_shash(&ictx->hash_spawn); + kfree(inst); +} + +/* + * Check for a supported set of inner algorithms. + * See the comment at the beginning of this file. + */ +static bool adiantum_supported_algorithms(struct skcipher_alg *streamcipher_alg, + struct crypto_alg *blockcipher_alg, + struct shash_alg *hash_alg) +{ + if (strcmp(streamcipher_alg->base.cra_name, "xchacha12") != 0 && + strcmp(streamcipher_alg->base.cra_name, "xchacha20") != 0) + return false; + + if (blockcipher_alg->cra_cipher.cia_min_keysize > BLOCKCIPHER_KEY_SIZE || + blockcipher_alg->cra_cipher.cia_max_keysize < BLOCKCIPHER_KEY_SIZE) + return false; + if (blockcipher_alg->cra_blocksize != BLOCKCIPHER_BLOCK_SIZE) + return false; + + if (strcmp(hash_alg->base.cra_name, "nhpoly1305") != 0) + return false; + + return true; +} + +static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb) +{ + struct crypto_attr_type *algt; + const char *streamcipher_name; + const char *blockcipher_name; + const char *nhpoly1305_name; + struct skcipher_instance *inst; + struct adiantum_instance_ctx *ictx; + struct skcipher_alg *streamcipher_alg; + struct crypto_alg *blockcipher_alg; + struct crypto_alg *_hash_alg; + struct shash_alg *hash_alg; + int err; + + algt = crypto_get_attr_type(tb); + if (IS_ERR(algt)) + return PTR_ERR(algt); + + if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask) + return -EINVAL; + + streamcipher_name = crypto_attr_alg_name(tb[1]); + if (IS_ERR(streamcipher_name)) + return PTR_ERR(streamcipher_name); + + blockcipher_name = crypto_attr_alg_name(tb[2]); + if (IS_ERR(blockcipher_name)) + return PTR_ERR(blockcipher_name); + + nhpoly1305_name = crypto_attr_alg_name(tb[3]); + if (nhpoly1305_name == ERR_PTR(-ENOENT)) + nhpoly1305_name = "nhpoly1305"; + if (IS_ERR(nhpoly1305_name)) + return PTR_ERR(nhpoly1305_name); + + inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL); + if (!inst) + return -ENOMEM; + ictx = skcipher_instance_ctx(inst); + + /* Stream cipher, e.g. "xchacha12" */ + err = crypto_grab_skcipher(&ictx->streamcipher_spawn, streamcipher_name, + 0, crypto_requires_sync(algt->type, + algt->mask)); + if (err) + goto out_free_inst; + streamcipher_alg = crypto_spawn_skcipher_alg(&ictx->streamcipher_spawn); + + /* Block cipher, e.g. 
"aes" */ + err = crypto_grab_spawn(&ictx->blockcipher_spawn, blockcipher_name, + CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK); + if (err) + goto out_drop_streamcipher; + blockcipher_alg = ictx->blockcipher_spawn.alg; + + /* NHPoly1305 ε-∆U hash function */ + _hash_alg = crypto_alg_mod_lookup(nhpoly1305_name, + CRYPTO_ALG_TYPE_SHASH, + CRYPTO_ALG_TYPE_MASK); + if (IS_ERR(_hash_alg)) { + err = PTR_ERR(_hash_alg); + goto out_drop_blockcipher; + } + hash_alg = __crypto_shash_alg(_hash_alg); + err = crypto_init_shash_spawn(&ictx->hash_spawn, hash_alg, + skcipher_crypto_instance(inst)); + if (err) + goto out_put_hash; + + /* Check the set of algorithms */ + if (!adiantum_supported_algorithms(streamcipher_alg, blockcipher_alg, + hash_alg)) { + pr_warn("Unsupported Adiantum instantiation: (%s,%s,%s)\n", + streamcipher_alg->base.cra_name, + blockcipher_alg->cra_name, hash_alg->base.cra_name); + err = -EINVAL; + goto out_drop_hash; + } + + /* Instance fields */ + + err = -ENAMETOOLONG; + if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, + "adiantum(%s,%s)", streamcipher_alg->base.cra_name, + blockcipher_alg->cra_name) >= CRYPTO_MAX_ALG_NAME) + goto out_drop_hash; + if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, + "adiantum(%s,%s,%s)", + streamcipher_alg->base.cra_driver_name, + blockcipher_alg->cra_driver_name, + hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) + goto out_drop_hash; + + inst->alg.base.cra_flags = streamcipher_alg->base.cra_flags & + CRYPTO_ALG_ASYNC; + inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE; + inst->alg.base.cra_ctxsize = sizeof(struct adiantum_tfm_ctx); + inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask | + hash_alg->base.cra_alignmask; + /* + * The block cipher is only invoked once per message, so for long + * messages (e.g. sectors for disk encryption) its performance doesn't + * matter as much as that of the stream cipher and hash function. Thus, + * weigh the block cipher's ->cra_priority less. 
+ */ + inst->alg.base.cra_priority = (4 * streamcipher_alg->base.cra_priority + + 2 * hash_alg->base.cra_priority + + blockcipher_alg->cra_priority) / 7; + + inst->alg.setkey = adiantum_setkey; + inst->alg.encrypt = adiantum_encrypt; + inst->alg.decrypt = adiantum_decrypt; + inst->alg.init = adiantum_init_tfm; + inst->alg.exit = adiantum_exit_tfm; + inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(streamcipher_alg); + inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(streamcipher_alg); + inst->alg.ivsize = TWEAK_SIZE; + + inst->free = adiantum_free_instance; + + err = skcipher_register_instance(tmpl, inst); + if (err) + goto out_drop_hash; + + crypto_mod_put(_hash_alg); + return 0; + +out_drop_hash: + crypto_drop_shash(&ictx->hash_spawn); +out_put_hash: + crypto_mod_put(_hash_alg); +out_drop_blockcipher: + crypto_drop_spawn(&ictx->blockcipher_spawn); +out_drop_streamcipher: + crypto_drop_skcipher(&ictx->streamcipher_spawn); +out_free_inst: + kfree(inst); + return err; +} + +/* adiantum(streamcipher_name, blockcipher_name [, nhpoly1305_name]) */ +static struct crypto_template adiantum_tmpl = { + .name = "adiantum", + .create = adiantum_create, + .module = THIS_MODULE, +}; + +static int __init adiantum_module_init(void) +{ + return crypto_register_template(&adiantum_tmpl); +} + +static void __exit adiantum_module_exit(void) +{ + crypto_unregister_template(&adiantum_tmpl); +} + +module_init(adiantum_module_init); +module_exit(adiantum_module_exit); + +MODULE_DESCRIPTION("Adiantum length-preserving encryption mode"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("adiantum"); diff --git a/crypto/aead.c b/crypto/aead.c index 60b3bbe973e7..189c52d1f63a 100644 --- a/crypto/aead.c +++ b/crypto/aead.c @@ -119,20 +119,16 @@ static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg) struct crypto_report_aead raead; struct aead_alg *aead = container_of(alg, struct aead_alg, base); - strncpy(raead.type, "aead", sizeof(raead.type)); - strncpy(raead.geniv, "", sizeof(raead.geniv)); + memset(&raead, 0, sizeof(raead)); + + strscpy(raead.type, "aead", sizeof(raead.type)); + strscpy(raead.geniv, "", sizeof(raead.geniv)); raead.blocksize = alg->cra_blocksize; raead.maxauthsize = aead->maxauthsize; raead.ivsize = aead->ivsize; - if (nla_put(skb, CRYPTOCFGA_REPORT_AEAD, - sizeof(struct crypto_report_aead), &raead)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_AEAD, sizeof(raead), &raead); } #else static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c index ca554d57d01e..13df33aca463 100644 --- a/crypto/aes_generic.c +++ b/crypto/aes_generic.c @@ -63,7 +63,8 @@ static inline u8 byte(const u32 x, const unsigned n) static const u32 rco_tab[10] = { 1, 2, 4, 8, 16, 32, 64, 128, 27, 54 }; -__visible const u32 crypto_ft_tab[4][256] = { +/* cacheline-aligned to facilitate prefetching into cache */ +__visible const u32 crypto_ft_tab[4][256] __cacheline_aligned = { { 0xa56363c6, 0x847c7cf8, 0x997777ee, 0x8d7b7bf6, 0x0df2f2ff, 0xbd6b6bd6, 0xb16f6fde, 0x54c5c591, @@ -327,7 +328,7 @@ __visible const u32 crypto_ft_tab[4][256] = { } }; -__visible const u32 crypto_fl_tab[4][256] = { +__visible const u32 crypto_fl_tab[4][256] __cacheline_aligned = { { 0x00000063, 0x0000007c, 0x00000077, 0x0000007b, 0x000000f2, 0x0000006b, 0x0000006f, 0x000000c5, @@ -591,7 +592,7 @@ __visible const u32 crypto_fl_tab[4][256] = { } }; 
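The __cacheline_aligned annotations being added to these lookup tables exist
so that a fixed, minimal number of prefetches covers each table. A small
userspace sketch of the arithmetic (assuming 64-byte cachelines, which is an
assumption rather than a guarantee on every architecture):

	#include <stdio.h>

	int main(void)
	{
		/* One 256-byte S-box at an assumed unaligned base address. */
		unsigned long start = 0x1030, end = start + 256;
		unsigned long lines = (end + 63) / 64 - start / 64;

		/*
		 * Prints 5: unaligned, the table straddles five 64-byte
		 * cachelines. Aligned to a cacheline boundary it would span
		 * exactly 256 / 64 = 4, so a known set of prefetches covers
		 * the whole table.
		 */
		printf("cachelines touched: %lu\n", lines);
		return 0;
	}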
-__visible const u32 crypto_it_tab[4][256] = { +__visible const u32 crypto_it_tab[4][256] __cacheline_aligned = { { 0x50a7f451, 0x5365417e, 0xc3a4171a, 0x965e273a, 0xcb6bab3b, 0xf1459d1f, 0xab58faac, 0x9303e34b, @@ -855,7 +856,7 @@ __visible const u32 crypto_it_tab[4][256] = { } }; -__visible const u32 crypto_il_tab[4][256] = { +__visible const u32 crypto_il_tab[4][256] __cacheline_aligned = { { 0x00000052, 0x00000009, 0x0000006a, 0x000000d5, 0x00000030, 0x00000036, 0x000000a5, 0x00000038, diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c index 03023b2290e8..1ff9785b30f5 100644 --- a/crypto/aes_ti.c +++ b/crypto/aes_ti.c @@ -269,6 +269,7 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) const u32 *rkp = ctx->key_enc + 4; int rounds = 6 + ctx->key_length / 4; u32 st0[4], st1[4]; + unsigned long flags; int round; st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); @@ -276,6 +277,12 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); + /* + * Temporarily disable interrupts to avoid races where cachelines are + * evicted when the CPU is interrupted to do something else. + */ + local_irq_save(flags); + st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128]; st0[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160]; st0[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192]; @@ -300,6 +307,8 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4); put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8); put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12); + + local_irq_restore(flags); } static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) @@ -308,6 +317,7 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) const u32 *rkp = ctx->key_dec + 4; int rounds = 6 + ctx->key_length / 4; u32 st0[4], st1[4]; + unsigned long flags; int round; st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in); @@ -315,6 +325,12 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8); st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12); + /* + * Temporarily disable interrupts to avoid races where cachelines are + * evicted when the CPU is interrupted to do something else. 
+ */ + local_irq_save(flags); + st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128]; st0[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160]; st0[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192]; @@ -339,6 +355,8 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4); put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8); put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12); + + local_irq_restore(flags); } static struct crypto_alg aes_alg = { diff --git a/crypto/ahash.c b/crypto/ahash.c index e21667b4e10a..5d320a811f75 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -364,20 +364,28 @@ static int crypto_ahash_op(struct ahash_request *req, int crypto_ahash_final(struct ahash_request *req) { + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int nbytes = req->nbytes; int ret; + crypto_stats_get(alg); ret = crypto_ahash_op(req, crypto_ahash_reqtfm(req)->final); - crypto_stat_ahash_final(req, ret); + crypto_stats_ahash_final(nbytes, ret, alg); return ret; } EXPORT_SYMBOL_GPL(crypto_ahash_final); int crypto_ahash_finup(struct ahash_request *req) { + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int nbytes = req->nbytes; int ret; + crypto_stats_get(alg); ret = crypto_ahash_op(req, crypto_ahash_reqtfm(req)->finup); - crypto_stat_ahash_final(req, ret); + crypto_stats_ahash_final(nbytes, ret, alg); return ret; } EXPORT_SYMBOL_GPL(crypto_ahash_finup); @@ -385,13 +393,16 @@ EXPORT_SYMBOL_GPL(crypto_ahash_finup); int crypto_ahash_digest(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int nbytes = req->nbytes; int ret; + crypto_stats_get(alg); if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) ret = -ENOKEY; else ret = crypto_ahash_op(req, tfm->digest); - crypto_stat_ahash_final(req, ret); + crypto_stats_ahash_final(nbytes, ret, alg); return ret; } EXPORT_SYMBOL_GPL(crypto_ahash_digest); @@ -498,18 +509,14 @@ static int crypto_ahash_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_hash rhash; - strncpy(rhash.type, "ahash", sizeof(rhash.type)); + memset(&rhash, 0, sizeof(rhash)); + + strscpy(rhash.type, "ahash", sizeof(rhash.type)); rhash.blocksize = alg->cra_blocksize; rhash.digestsize = __crypto_hash_alg_common(alg)->digestsize; - if (nla_put(skb, CRYPTOCFGA_REPORT_HASH, - sizeof(struct crypto_report_hash), &rhash)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_HASH, sizeof(rhash), &rhash); } #else static int crypto_ahash_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/akcipher.c b/crypto/akcipher.c index cfbdb06d8ca8..0cbeae137e0a 100644 --- a/crypto/akcipher.c +++ b/crypto/akcipher.c @@ -30,15 +30,12 @@ static int crypto_akcipher_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_akcipher rakcipher; - strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); + memset(&rakcipher, 0, sizeof(rakcipher)); - if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER, - sizeof(struct crypto_report_akcipher), &rakcipher)) - goto nla_put_failure; - return 0; + strscpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER, + sizeof(rakcipher), &rakcipher); } #else static int 
crypto_akcipher_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/algapi.c b/crypto/algapi.c index 2545c5f89c4c..8b65ada33e5d 100644 --- a/crypto/algapi.c +++ b/crypto/algapi.c @@ -258,13 +258,7 @@ static struct crypto_larval *__crypto_register_alg(struct crypto_alg *alg) list_add(&alg->cra_list, &crypto_alg_list); list_add(&larval->alg.cra_list, &crypto_alg_list); - atomic_set(&alg->encrypt_cnt, 0); - atomic_set(&alg->decrypt_cnt, 0); - atomic64_set(&alg->encrypt_tlen, 0); - atomic64_set(&alg->decrypt_tlen, 0); - atomic_set(&alg->verify_cnt, 0); - atomic_set(&alg->cipher_err_cnt, 0); - atomic_set(&alg->sign_cnt, 0); + crypto_stats_init(alg); out: return larval; @@ -1076,6 +1070,245 @@ int crypto_type_has_alg(const char *name, const struct crypto_type *frontend, } EXPORT_SYMBOL_GPL(crypto_type_has_alg); +#ifdef CONFIG_CRYPTO_STATS +void crypto_stats_init(struct crypto_alg *alg) +{ + memset(&alg->stats, 0, sizeof(alg->stats)); +} +EXPORT_SYMBOL_GPL(crypto_stats_init); + +void crypto_stats_get(struct crypto_alg *alg) +{ + crypto_alg_get(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_get); + +void crypto_stats_ablkcipher_encrypt(unsigned int nbytes, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.cipher.err_cnt); + } else { + atomic64_inc(&alg->stats.cipher.encrypt_cnt); + atomic64_add(nbytes, &alg->stats.cipher.encrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_ablkcipher_encrypt); + +void crypto_stats_ablkcipher_decrypt(unsigned int nbytes, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.cipher.err_cnt); + } else { + atomic64_inc(&alg->stats.cipher.decrypt_cnt); + atomic64_add(nbytes, &alg->stats.cipher.decrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_ablkcipher_decrypt); + +void crypto_stats_aead_encrypt(unsigned int cryptlen, struct crypto_alg *alg, + int ret) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.aead.err_cnt); + } else { + atomic64_inc(&alg->stats.aead.encrypt_cnt); + atomic64_add(cryptlen, &alg->stats.aead.encrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_aead_encrypt); + +void crypto_stats_aead_decrypt(unsigned int cryptlen, struct crypto_alg *alg, + int ret) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.aead.err_cnt); + } else { + atomic64_inc(&alg->stats.aead.decrypt_cnt); + atomic64_add(cryptlen, &alg->stats.aead.decrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_aead_decrypt); + +void crypto_stats_akcipher_encrypt(unsigned int src_len, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.akcipher.err_cnt); + } else { + atomic64_inc(&alg->stats.akcipher.encrypt_cnt); + atomic64_add(src_len, &alg->stats.akcipher.encrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_akcipher_encrypt); + +void crypto_stats_akcipher_decrypt(unsigned int src_len, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.akcipher.err_cnt); + } else { + atomic64_inc(&alg->stats.akcipher.decrypt_cnt); + atomic64_add(src_len, &alg->stats.akcipher.decrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_akcipher_decrypt); + +void crypto_stats_akcipher_sign(int ret, struct crypto_alg *alg) +{ + if (ret && 
ret != -EINPROGRESS && ret != -EBUSY) + atomic64_inc(&alg->stats.akcipher.err_cnt); + else + atomic64_inc(&alg->stats.akcipher.sign_cnt); + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_akcipher_sign); + +void crypto_stats_akcipher_verify(int ret, struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) + atomic64_inc(&alg->stats.akcipher.err_cnt); + else + atomic64_inc(&alg->stats.akcipher.verify_cnt); + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_akcipher_verify); + +void crypto_stats_compress(unsigned int slen, int ret, struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.compress.err_cnt); + } else { + atomic64_inc(&alg->stats.compress.compress_cnt); + atomic64_add(slen, &alg->stats.compress.compress_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_compress); + +void crypto_stats_decompress(unsigned int slen, int ret, struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.compress.err_cnt); + } else { + atomic64_inc(&alg->stats.compress.decompress_cnt); + atomic64_add(slen, &alg->stats.compress.decompress_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_decompress); + +void crypto_stats_ahash_update(unsigned int nbytes, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) + atomic64_inc(&alg->stats.hash.err_cnt); + else + atomic64_add(nbytes, &alg->stats.hash.hash_tlen); + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_ahash_update); + +void crypto_stats_ahash_final(unsigned int nbytes, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.hash.err_cnt); + } else { + atomic64_inc(&alg->stats.hash.hash_cnt); + atomic64_add(nbytes, &alg->stats.hash.hash_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_ahash_final); + +void crypto_stats_kpp_set_secret(struct crypto_alg *alg, int ret) +{ + if (ret) + atomic64_inc(&alg->stats.kpp.err_cnt); + else + atomic64_inc(&alg->stats.kpp.setsecret_cnt); + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_kpp_set_secret); + +void crypto_stats_kpp_generate_public_key(struct crypto_alg *alg, int ret) +{ + if (ret) + atomic64_inc(&alg->stats.kpp.err_cnt); + else + atomic64_inc(&alg->stats.kpp.generate_public_key_cnt); + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_kpp_generate_public_key); + +void crypto_stats_kpp_compute_shared_secret(struct crypto_alg *alg, int ret) +{ + if (ret) + atomic64_inc(&alg->stats.kpp.err_cnt); + else + atomic64_inc(&alg->stats.kpp.compute_shared_secret_cnt); + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_kpp_compute_shared_secret); + +void crypto_stats_rng_seed(struct crypto_alg *alg, int ret) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) + atomic64_inc(&alg->stats.rng.err_cnt); + else + atomic64_inc(&alg->stats.rng.seed_cnt); + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_rng_seed); + +void crypto_stats_rng_generate(struct crypto_alg *alg, unsigned int dlen, + int ret) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.rng.err_cnt); + } else { + atomic64_inc(&alg->stats.rng.generate_cnt); + atomic64_add(dlen, &alg->stats.rng.generate_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_rng_generate); + +void crypto_stats_skcipher_encrypt(unsigned int cryptlen, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != 
-EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.cipher.err_cnt); + } else { + atomic64_inc(&alg->stats.cipher.encrypt_cnt); + atomic64_add(cryptlen, &alg->stats.cipher.encrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_skcipher_encrypt); + +void crypto_stats_skcipher_decrypt(unsigned int cryptlen, int ret, + struct crypto_alg *alg) +{ + if (ret && ret != -EINPROGRESS && ret != -EBUSY) { + atomic64_inc(&alg->stats.cipher.err_cnt); + } else { + atomic64_inc(&alg->stats.cipher.decrypt_cnt); + atomic64_add(cryptlen, &alg->stats.cipher.decrypt_tlen); + } + crypto_alg_put(alg); +} +EXPORT_SYMBOL_GPL(crypto_stats_skcipher_decrypt); +#endif + static int __init crypto_algapi_init(void) { crypto_init_proc(); diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c index f93abf13b5d4..c5398bd54942 100644 --- a/crypto/blkcipher.c +++ b/crypto/blkcipher.c @@ -507,23 +507,18 @@ static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_blkcipher rblkcipher; - strncpy(rblkcipher.type, "blkcipher", sizeof(rblkcipher.type)); - strncpy(rblkcipher.geniv, alg->cra_blkcipher.geniv ?: "<default>", sizeof(rblkcipher.geniv)); - rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0'; + memset(&rblkcipher, 0, sizeof(rblkcipher)); + + strscpy(rblkcipher.type, "blkcipher", sizeof(rblkcipher.type)); + strscpy(rblkcipher.geniv, "<default>", sizeof(rblkcipher.geniv)); rblkcipher.blocksize = alg->cra_blocksize; rblkcipher.min_keysize = alg->cra_blkcipher.min_keysize; rblkcipher.max_keysize = alg->cra_blkcipher.max_keysize; rblkcipher.ivsize = alg->cra_blkcipher.ivsize; - if (nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER, - sizeof(struct crypto_report_blkcipher), &rblkcipher)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER, + sizeof(rblkcipher), &rblkcipher); } #else static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg) @@ -541,8 +536,7 @@ static void crypto_blkcipher_show(struct seq_file *m, struct crypto_alg *alg) seq_printf(m, "min keysize : %u\n", alg->cra_blkcipher.min_keysize); seq_printf(m, "max keysize : %u\n", alg->cra_blkcipher.max_keysize); seq_printf(m, "ivsize : %u\n", alg->cra_blkcipher.ivsize); - seq_printf(m, "geniv : %s\n", alg->cra_blkcipher.geniv ?: - "<default>"); + seq_printf(m, "geniv : <default>\n"); } const struct crypto_type crypto_blkcipher_type = { diff --git a/crypto/cfb.c b/crypto/cfb.c index 20987d0e09d8..e81e45673498 100644 --- a/crypto/cfb.c +++ b/crypto/cfb.c @@ -144,7 +144,7 @@ static int crypto_cfb_decrypt_segment(struct skcipher_walk *walk, do { crypto_cfb_encrypt_one(tfm, iv, dst); - crypto_xor(dst, iv, bsize); + crypto_xor(dst, src, bsize); iv = src; src += bsize; diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c deleted file mode 100644 index 3ae96587caf9..000000000000 --- a/crypto/chacha20_generic.c +++ /dev/null @@ -1,137 +0,0 @@ -/* - * ChaCha20 256-bit cipher algorithm, RFC7539 - * - * Copyright (C) 2015 Martin Willi - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. 
- */ - -#include -#include -#include -#include -#include - -static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src, - unsigned int bytes) -{ - /* aligned to potentially speed up crypto_xor() */ - u8 stream[CHACHA20_BLOCK_SIZE] __aligned(sizeof(long)); - - if (dst != src) - memcpy(dst, src, bytes); - - while (bytes >= CHACHA20_BLOCK_SIZE) { - chacha20_block(state, stream); - crypto_xor(dst, stream, CHACHA20_BLOCK_SIZE); - bytes -= CHACHA20_BLOCK_SIZE; - dst += CHACHA20_BLOCK_SIZE; - } - if (bytes) { - chacha20_block(state, stream); - crypto_xor(dst, stream, bytes); - } -} - -void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv) -{ - state[0] = 0x61707865; /* "expa" */ - state[1] = 0x3320646e; /* "nd 3" */ - state[2] = 0x79622d32; /* "2-by" */ - state[3] = 0x6b206574; /* "te k" */ - state[4] = ctx->key[0]; - state[5] = ctx->key[1]; - state[6] = ctx->key[2]; - state[7] = ctx->key[3]; - state[8] = ctx->key[4]; - state[9] = ctx->key[5]; - state[10] = ctx->key[6]; - state[11] = ctx->key[7]; - state[12] = get_unaligned_le32(iv + 0); - state[13] = get_unaligned_le32(iv + 4); - state[14] = get_unaligned_le32(iv + 8); - state[15] = get_unaligned_le32(iv + 12); -} -EXPORT_SYMBOL_GPL(crypto_chacha20_init); - -int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key, - unsigned int keysize) -{ - struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm); - int i; - - if (keysize != CHACHA20_KEY_SIZE) - return -EINVAL; - - for (i = 0; i < ARRAY_SIZE(ctx->key); i++) - ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32)); - - return 0; -} -EXPORT_SYMBOL_GPL(crypto_chacha20_setkey); - -int crypto_chacha20_crypt(struct skcipher_request *req) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm); - struct skcipher_walk walk; - u32 state[16]; - int err; - - err = skcipher_walk_virt(&walk, req, true); - - crypto_chacha20_init(state, ctx, walk.iv); - - while (walk.nbytes > 0) { - unsigned int nbytes = walk.nbytes; - - if (nbytes < walk.total) - nbytes = round_down(nbytes, walk.stride); - - chacha20_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr, - nbytes); - err = skcipher_walk_done(&walk, walk.nbytes - nbytes); - } - - return err; -} -EXPORT_SYMBOL_GPL(crypto_chacha20_crypt); - -static struct skcipher_alg alg = { - .base.cra_name = "chacha20", - .base.cra_driver_name = "chacha20-generic", - .base.cra_priority = 100, - .base.cra_blocksize = 1, - .base.cra_ctxsize = sizeof(struct chacha20_ctx), - .base.cra_module = THIS_MODULE, - - .min_keysize = CHACHA20_KEY_SIZE, - .max_keysize = CHACHA20_KEY_SIZE, - .ivsize = CHACHA20_IV_SIZE, - .chunksize = CHACHA20_BLOCK_SIZE, - .setkey = crypto_chacha20_setkey, - .encrypt = crypto_chacha20_crypt, - .decrypt = crypto_chacha20_crypt, -}; - -static int __init chacha20_generic_mod_init(void) -{ - return crypto_register_skcipher(&alg); -} - -static void __exit chacha20_generic_mod_fini(void) -{ - crypto_unregister_skcipher(&alg); -} - -module_init(chacha20_generic_mod_init); -module_exit(chacha20_generic_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Martin Willi "); -MODULE_DESCRIPTION("chacha20 cipher algorithm"); -MODULE_ALIAS_CRYPTO("chacha20"); -MODULE_ALIAS_CRYPTO("chacha20-generic"); diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c index 600afa99941f..fef11446ab1b 100644 --- a/crypto/chacha20poly1305.c +++ b/crypto/chacha20poly1305.c @@ -13,7 +13,7 @@ #include #include #include -#include +#include #include #include #include @@ -22,8 +22,6 @@ 
#include "internal.h" -#define CHACHAPOLY_IV_SIZE 12 - struct chachapoly_instance_ctx { struct crypto_skcipher_spawn chacha; struct crypto_ahash_spawn poly; @@ -51,7 +49,7 @@ struct poly_req { }; struct chacha_req { - u8 iv[CHACHA20_IV_SIZE]; + u8 iv[CHACHA_IV_SIZE]; struct scatterlist src[1]; struct skcipher_request req; /* must be last member */ }; @@ -91,7 +89,7 @@ static void chacha_iv(u8 *iv, struct aead_request *req, u32 icb) memcpy(iv, &leicb, sizeof(leicb)); memcpy(iv + sizeof(leicb), ctx->salt, ctx->saltlen); memcpy(iv + sizeof(leicb) + ctx->saltlen, req->iv, - CHACHA20_IV_SIZE - sizeof(leicb) - ctx->saltlen); + CHACHA_IV_SIZE - sizeof(leicb) - ctx->saltlen); } static int poly_verify_tag(struct aead_request *req) @@ -494,7 +492,7 @@ static int chachapoly_setkey(struct crypto_aead *aead, const u8 *key, struct chachapoly_ctx *ctx = crypto_aead_ctx(aead); int err; - if (keylen != ctx->saltlen + CHACHA20_KEY_SIZE) + if (keylen != ctx->saltlen + CHACHA_KEY_SIZE) return -EINVAL; keylen -= ctx->saltlen; @@ -639,7 +637,7 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb, err = -EINVAL; /* Need 16-byte IV size, including Initial Block Counter value */ - if (crypto_skcipher_alg_ivsize(chacha) != CHACHA20_IV_SIZE) + if (crypto_skcipher_alg_ivsize(chacha) != CHACHA_IV_SIZE) goto out_drop_chacha; /* Not a stream cipher? */ if (chacha->base.cra_blocksize != 1) diff --git a/crypto/chacha_generic.c b/crypto/chacha_generic.c new file mode 100644 index 000000000000..35b583101f4f --- /dev/null +++ b/crypto/chacha_generic.c @@ -0,0 +1,217 @@ +/* + * ChaCha and XChaCha stream ciphers, including ChaCha20 (RFC7539) + * + * Copyright (C) 2015 Martin Willi + * Copyright (C) 2018 Google LLC + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
+ */ + +#include +#include +#include +#include +#include + +static void chacha_docrypt(u32 *state, u8 *dst, const u8 *src, + unsigned int bytes, int nrounds) +{ + /* aligned to potentially speed up crypto_xor() */ + u8 stream[CHACHA_BLOCK_SIZE] __aligned(sizeof(long)); + + if (dst != src) + memcpy(dst, src, bytes); + + while (bytes >= CHACHA_BLOCK_SIZE) { + chacha_block(state, stream, nrounds); + crypto_xor(dst, stream, CHACHA_BLOCK_SIZE); + bytes -= CHACHA_BLOCK_SIZE; + dst += CHACHA_BLOCK_SIZE; + } + if (bytes) { + chacha_block(state, stream, nrounds); + crypto_xor(dst, stream, bytes); + } +} + +static int chacha_stream_xor(struct skcipher_request *req, + struct chacha_ctx *ctx, u8 *iv) +{ + struct skcipher_walk walk; + u32 state[16]; + int err; + + err = skcipher_walk_virt(&walk, req, false); + + crypto_chacha_init(state, ctx, iv); + + while (walk.nbytes > 0) { + unsigned int nbytes = walk.nbytes; + + if (nbytes < walk.total) + nbytes = round_down(nbytes, walk.stride); + + chacha_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr, + nbytes, ctx->nrounds); + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + return err; +} + +void crypto_chacha_init(u32 *state, struct chacha_ctx *ctx, u8 *iv) +{ + state[0] = 0x61707865; /* "expa" */ + state[1] = 0x3320646e; /* "nd 3" */ + state[2] = 0x79622d32; /* "2-by" */ + state[3] = 0x6b206574; /* "te k" */ + state[4] = ctx->key[0]; + state[5] = ctx->key[1]; + state[6] = ctx->key[2]; + state[7] = ctx->key[3]; + state[8] = ctx->key[4]; + state[9] = ctx->key[5]; + state[10] = ctx->key[6]; + state[11] = ctx->key[7]; + state[12] = get_unaligned_le32(iv + 0); + state[13] = get_unaligned_le32(iv + 4); + state[14] = get_unaligned_le32(iv + 8); + state[15] = get_unaligned_le32(iv + 12); +} +EXPORT_SYMBOL_GPL(crypto_chacha_init); + +static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keysize, int nrounds) +{ + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + int i; + + if (keysize != CHACHA_KEY_SIZE) + return -EINVAL; + + for (i = 0; i < ARRAY_SIZE(ctx->key); i++) + ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32)); + + ctx->nrounds = nrounds; + return 0; +} + +int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keysize) +{ + return chacha_setkey(tfm, key, keysize, 20); +} +EXPORT_SYMBOL_GPL(crypto_chacha20_setkey); + +int crypto_chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keysize) +{ + return chacha_setkey(tfm, key, keysize, 12); +} +EXPORT_SYMBOL_GPL(crypto_chacha12_setkey); + +int crypto_chacha_crypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + + return chacha_stream_xor(req, ctx, req->iv); +} +EXPORT_SYMBOL_GPL(crypto_chacha_crypt); + +int crypto_xchacha_crypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + struct chacha_ctx subctx; + u32 state[16]; + u8 real_iv[16]; + + /* Compute the subkey given the original key and first 128 nonce bits */ + crypto_chacha_init(state, ctx, req->iv); + hchacha_block(state, subctx.key, ctx->nrounds); + subctx.nrounds = ctx->nrounds; + + /* Build the real IV */ + memcpy(&real_iv[0], req->iv + 24, 8); /* stream position */ + memcpy(&real_iv[8], req->iv + 16, 8); /* remaining 64 nonce bits */ + + /* Generate the stream and XOR it with the data */ + return chacha_stream_xor(req, &subctx, real_iv); +} 
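
[Editor's note] crypto_xchacha_crypt() above completes the new XChaCha request path. As a usage illustration only, the following minimal sketch shows how a kernel caller might drive the resulting "xchacha20" algorithm through the skcipher API. The function name, buffer assumptions (a kmalloc'd, scatterlist-addressable buffer), and the synchronous crypto_wait_req() style are assumptions for illustration, not part of this patch; the 32-byte IV is the 24-byte XChaCha nonce followed by the 64-bit stream position, matching the layout crypto_xchacha_crypt() decomposes via hchacha_block().

	#include <crypto/skcipher.h>
	#include <linux/crypto.h>
	#include <linux/err.h>
	#include <linux/scatterlist.h>
	#include <linux/string.h>

	/* Hypothetical caller; encrypts buf in place with xchacha20. */
	static int xchacha20_encrypt_example(u8 *buf, unsigned int len,
					     const u8 key[32], const u8 iv[32])
	{
		struct crypto_skcipher *tfm;
		struct skcipher_request *req;
		struct scatterlist sg;
		DECLARE_CRYPTO_WAIT(wait);
		u8 ivbuf[32];			/* XCHACHA_IV_SIZE */
		int err;

		tfm = crypto_alloc_skcipher("xchacha20", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_skcipher_setkey(tfm, key, 32); /* CHACHA_KEY_SIZE */
		if (err)
			goto out_free_tfm;

		req = skcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
			goto out_free_tfm;
		}

		/* The cipher may update the IV, so hand it a local copy. */
		memcpy(ivbuf, iv, sizeof(ivbuf));
		sg_init_one(&sg, buf, len);
		skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
					      CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, &wait);
		skcipher_request_set_crypt(req, &sg, &sg, len, ivbuf);
		err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

		skcipher_request_free(req);
	out_free_tfm:
		crypto_free_skcipher(tfm);
		return err;
	}
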
+EXPORT_SYMBOL_GPL(crypto_xchacha_crypt); + +static struct skcipher_alg algs[] = { + { + .base.cra_name = "chacha20", + .base.cra_driver_name = "chacha20-generic", + .base.cra_priority = 100, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha20_setkey, + .encrypt = crypto_chacha_crypt, + .decrypt = crypto_chacha_crypt, + }, { + .base.cra_name = "xchacha20", + .base.cra_driver_name = "xchacha20-generic", + .base.cra_priority = 100, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha20_setkey, + .encrypt = crypto_xchacha_crypt, + .decrypt = crypto_xchacha_crypt, + }, { + .base.cra_name = "xchacha12", + .base.cra_driver_name = "xchacha12-generic", + .base.cra_priority = 100, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct chacha_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = XCHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .setkey = crypto_chacha12_setkey, + .encrypt = crypto_xchacha_crypt, + .decrypt = crypto_xchacha_crypt, + } +}; + +static int __init chacha_generic_mod_init(void) +{ + return crypto_register_skciphers(algs, ARRAY_SIZE(algs)); +} + +static void __exit chacha_generic_mod_fini(void) +{ + crypto_unregister_skciphers(algs, ARRAY_SIZE(algs)); +} + +module_init(chacha_generic_mod_init); +module_exit(chacha_generic_mod_fini); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Martin Willi "); +MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers (generic)"); +MODULE_ALIAS_CRYPTO("chacha20"); +MODULE_ALIAS_CRYPTO("chacha20-generic"); +MODULE_ALIAS_CRYPTO("xchacha20"); +MODULE_ALIAS_CRYPTO("xchacha20-generic"); +MODULE_ALIAS_CRYPTO("xchacha12"); +MODULE_ALIAS_CRYPTO("xchacha12-generic"); diff --git a/crypto/cryptd.c b/crypto/cryptd.c index 7118fb5efbaa..5640e5db7bdb 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -422,8 +422,6 @@ static int cryptd_create_blkcipher(struct crypto_template *tmpl, inst->alg.cra_ablkcipher.min_keysize = alg->cra_blkcipher.min_keysize; inst->alg.cra_ablkcipher.max_keysize = alg->cra_blkcipher.max_keysize; - inst->alg.cra_ablkcipher.geniv = alg->cra_blkcipher.geniv; - inst->alg.cra_ctxsize = sizeof(struct cryptd_blkcipher_ctx); inst->alg.cra_init = cryptd_blkcipher_init_tfm; @@ -1174,7 +1172,7 @@ struct cryptd_ablkcipher *cryptd_alloc_ablkcipher(const char *alg_name, return ERR_PTR(-EINVAL); type = crypto_skcipher_type(type); mask &= ~CRYPTO_ALG_TYPE_MASK; - mask |= (CRYPTO_ALG_GENIV | CRYPTO_ALG_TYPE_BLKCIPHER_MASK); + mask |= CRYPTO_ALG_TYPE_BLKCIPHER_MASK; tfm = crypto_alloc_base(cryptd_alg_name, type, mask); if (IS_ERR(tfm)) return ERR_CAST(tfm); diff --git a/crypto/crypto_user_base.c b/crypto/crypto_user_base.c index 784748dbb19f..f25d3f32c9c2 100644 --- a/crypto/crypto_user_base.c +++ b/crypto/crypto_user_base.c @@ -84,87 +84,38 @@ static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_cipher rcipher; - strncpy(rcipher.type, "cipher", sizeof(rcipher.type)); + memset(&rcipher, 0, sizeof(rcipher)); + + strscpy(rcipher.type, "cipher", sizeof(rcipher.type)); rcipher.blocksize = 
alg->cra_blocksize; rcipher.min_keysize = alg->cra_cipher.cia_min_keysize; rcipher.max_keysize = alg->cra_cipher.cia_max_keysize; - if (nla_put(skb, CRYPTOCFGA_REPORT_CIPHER, - sizeof(struct crypto_report_cipher), &rcipher)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_CIPHER, + sizeof(rcipher), &rcipher); } static int crypto_report_comp(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_comp rcomp; - strncpy(rcomp.type, "compression", sizeof(rcomp.type)); - if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, - sizeof(struct crypto_report_comp), &rcomp)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; -} - -static int crypto_report_acomp(struct sk_buff *skb, struct crypto_alg *alg) -{ - struct crypto_report_acomp racomp; + memset(&rcomp, 0, sizeof(rcomp)); - strncpy(racomp.type, "acomp", sizeof(racomp.type)); + strscpy(rcomp.type, "compression", sizeof(rcomp.type)); - if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMP, - sizeof(struct crypto_report_acomp), &racomp)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; -} - -static int crypto_report_akcipher(struct sk_buff *skb, struct crypto_alg *alg) -{ - struct crypto_report_akcipher rakcipher; - - strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); - - if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER, - sizeof(struct crypto_report_akcipher), &rakcipher)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; -} - -static int crypto_report_kpp(struct sk_buff *skb, struct crypto_alg *alg) -{ - struct crypto_report_kpp rkpp; - - strncpy(rkpp.type, "kpp", sizeof(rkpp.type)); - - if (nla_put(skb, CRYPTOCFGA_REPORT_KPP, - sizeof(struct crypto_report_kpp), &rkpp)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, sizeof(rcomp), &rcomp); } static int crypto_report_one(struct crypto_alg *alg, struct crypto_user_alg *ualg, struct sk_buff *skb) { - strncpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); - strncpy(ualg->cru_driver_name, alg->cra_driver_name, + memset(ualg, 0, sizeof(*ualg)); + + strscpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); + strscpy(ualg->cru_driver_name, alg->cra_driver_name, sizeof(ualg->cru_driver_name)); - strncpy(ualg->cru_module_name, module_name(alg->cra_module), + strscpy(ualg->cru_module_name, module_name(alg->cra_module), sizeof(ualg->cru_module_name)); ualg->cru_type = 0; @@ -177,9 +128,9 @@ static int crypto_report_one(struct crypto_alg *alg, if (alg->cra_flags & CRYPTO_ALG_LARVAL) { struct crypto_report_larval rl; - strncpy(rl.type, "larval", sizeof(rl.type)); - if (nla_put(skb, CRYPTOCFGA_REPORT_LARVAL, - sizeof(struct crypto_report_larval), &rl)) + memset(&rl, 0, sizeof(rl)); + strscpy(rl.type, "larval", sizeof(rl.type)); + if (nla_put(skb, CRYPTOCFGA_REPORT_LARVAL, sizeof(rl), &rl)) goto nla_put_failure; goto out; } @@ -202,20 +153,6 @@ static int crypto_report_one(struct crypto_alg *alg, goto nla_put_failure; break; - case CRYPTO_ALG_TYPE_ACOMPRESS: - if (crypto_report_acomp(skb, alg)) - goto nla_put_failure; - - break; - case CRYPTO_ALG_TYPE_AKCIPHER: - if (crypto_report_akcipher(skb, alg)) - goto nla_put_failure; - - break; - case CRYPTO_ALG_TYPE_KPP: - if (crypto_report_kpp(skb, alg)) - goto nla_put_failure; - break; } out: @@ -294,30 +231,33 @@ drop_alg: static int crypto_dump_report(struct sk_buff *skb, struct netlink_callback *cb) { - struct crypto_alg *alg; + 
const size_t start_pos = cb->args[0]; + size_t pos = 0; struct crypto_dump_info info; - int err; - - if (cb->args[0]) - goto out; - - cb->args[0] = 1; + struct crypto_alg *alg; + int res; info.in_skb = cb->skb; info.out_skb = skb; info.nlmsg_seq = cb->nlh->nlmsg_seq; info.nlmsg_flags = NLM_F_MULTI; + down_read(&crypto_alg_sem); list_for_each_entry(alg, &crypto_alg_list, cra_list) { - err = crypto_report_alg(alg, &info); - if (err) - goto out_err; + if (pos >= start_pos) { + res = crypto_report_alg(alg, &info); + if (res == -EMSGSIZE) + break; + if (res) + goto out; + } + pos++; } - + cb->args[0] = pos; + res = skb->len; out: - return skb->len; -out_err: - return err; + up_read(&crypto_alg_sem); + return res; } static int crypto_dump_report_done(struct netlink_callback *cb) @@ -483,9 +423,7 @@ static const struct crypto_link { .dump = crypto_dump_report, .done = crypto_dump_report_done}, [CRYPTO_MSG_DELRNG - CRYPTO_MSG_BASE] = { .doit = crypto_del_rng }, - [CRYPTO_MSG_GETSTAT - CRYPTO_MSG_BASE] = { .doit = crypto_reportstat, - .dump = crypto_dump_reportstat, - .done = crypto_dump_reportstat_done}, + [CRYPTO_MSG_GETSTAT - CRYPTO_MSG_BASE] = { .doit = crypto_reportstat}, }; static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, @@ -505,7 +443,7 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, if ((type == (CRYPTO_MSG_GETALG - CRYPTO_MSG_BASE) && (nlh->nlmsg_flags & NLM_F_DUMP))) { struct crypto_alg *alg; - u16 dump_alloc = 0; + unsigned long dump_alloc = 0; if (link->dump == NULL) return -EINVAL; @@ -513,16 +451,16 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, down_read(&crypto_alg_sem); list_for_each_entry(alg, &crypto_alg_list, cra_list) dump_alloc += CRYPTO_REPORT_MAXSIZE; + up_read(&crypto_alg_sem); { struct netlink_dump_control c = { .dump = link->dump, .done = link->done, - .min_dump_alloc = dump_alloc, + .min_dump_alloc = min(dump_alloc, 65535UL), }; err = netlink_dump_start(crypto_nlsk, skb, nlh, &c); } - up_read(&crypto_alg_sem); return err; } diff --git a/crypto/crypto_user_stat.c b/crypto/crypto_user_stat.c index 1dfaa0ccd555..3e9a53233d80 100644 --- a/crypto/crypto_user_stat.c +++ b/crypto/crypto_user_stat.c @@ -33,260 +33,149 @@ struct crypto_dump_info { static int crypto_report_aead(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat raead; - u64 v64; - u32 v32; + struct crypto_stat_aead raead; memset(&raead, 0, sizeof(raead)); - strncpy(raead.type, "aead", sizeof(raead.type)); - - v32 = atomic_read(&alg->encrypt_cnt); - raead.stat_encrypt_cnt = v32; - v64 = atomic64_read(&alg->encrypt_tlen); - raead.stat_encrypt_tlen = v64; - v32 = atomic_read(&alg->decrypt_cnt); - raead.stat_decrypt_cnt = v32; - v64 = atomic64_read(&alg->decrypt_tlen); - raead.stat_decrypt_tlen = v64; - v32 = atomic_read(&alg->aead_err_cnt); - raead.stat_aead_err_cnt = v32; - - if (nla_put(skb, CRYPTOCFGA_STAT_AEAD, - sizeof(struct crypto_stat), &raead)) - goto nla_put_failure; - return 0; + strscpy(raead.type, "aead", sizeof(raead.type)); -nla_put_failure: - return -EMSGSIZE; + raead.stat_encrypt_cnt = atomic64_read(&alg->stats.aead.encrypt_cnt); + raead.stat_encrypt_tlen = atomic64_read(&alg->stats.aead.encrypt_tlen); + raead.stat_decrypt_cnt = atomic64_read(&alg->stats.aead.decrypt_cnt); + raead.stat_decrypt_tlen = atomic64_read(&alg->stats.aead.decrypt_tlen); + raead.stat_err_cnt = atomic64_read(&alg->stats.aead.err_cnt); + + return nla_put(skb, CRYPTOCFGA_STAT_AEAD, sizeof(raead), &raead); } static int 
crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat rcipher; - u64 v64; - u32 v32; + struct crypto_stat_cipher rcipher; memset(&rcipher, 0, sizeof(rcipher)); - strlcpy(rcipher.type, "cipher", sizeof(rcipher.type)); - - v32 = atomic_read(&alg->encrypt_cnt); - rcipher.stat_encrypt_cnt = v32; - v64 = atomic64_read(&alg->encrypt_tlen); - rcipher.stat_encrypt_tlen = v64; - v32 = atomic_read(&alg->decrypt_cnt); - rcipher.stat_decrypt_cnt = v32; - v64 = atomic64_read(&alg->decrypt_tlen); - rcipher.stat_decrypt_tlen = v64; - v32 = atomic_read(&alg->cipher_err_cnt); - rcipher.stat_cipher_err_cnt = v32; - - if (nla_put(skb, CRYPTOCFGA_STAT_CIPHER, - sizeof(struct crypto_stat), &rcipher)) - goto nla_put_failure; - return 0; + strscpy(rcipher.type, "cipher", sizeof(rcipher.type)); -nla_put_failure: - return -EMSGSIZE; + rcipher.stat_encrypt_cnt = atomic64_read(&alg->stats.cipher.encrypt_cnt); + rcipher.stat_encrypt_tlen = atomic64_read(&alg->stats.cipher.encrypt_tlen); + rcipher.stat_decrypt_cnt = atomic64_read(&alg->stats.cipher.decrypt_cnt); + rcipher.stat_decrypt_tlen = atomic64_read(&alg->stats.cipher.decrypt_tlen); + rcipher.stat_err_cnt = atomic64_read(&alg->stats.cipher.err_cnt); + + return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher); } static int crypto_report_comp(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat rcomp; - u64 v64; - u32 v32; + struct crypto_stat_compress rcomp; memset(&rcomp, 0, sizeof(rcomp)); - strlcpy(rcomp.type, "compression", sizeof(rcomp.type)); - v32 = atomic_read(&alg->compress_cnt); - rcomp.stat_compress_cnt = v32; - v64 = atomic64_read(&alg->compress_tlen); - rcomp.stat_compress_tlen = v64; - v32 = atomic_read(&alg->decompress_cnt); - rcomp.stat_decompress_cnt = v32; - v64 = atomic64_read(&alg->decompress_tlen); - rcomp.stat_decompress_tlen = v64; - v32 = atomic_read(&alg->cipher_err_cnt); - rcomp.stat_compress_err_cnt = v32; - - if (nla_put(skb, CRYPTOCFGA_STAT_COMPRESS, - sizeof(struct crypto_stat), &rcomp)) - goto nla_put_failure; - return 0; + strscpy(rcomp.type, "compression", sizeof(rcomp.type)); + rcomp.stat_compress_cnt = atomic64_read(&alg->stats.compress.compress_cnt); + rcomp.stat_compress_tlen = atomic64_read(&alg->stats.compress.compress_tlen); + rcomp.stat_decompress_cnt = atomic64_read(&alg->stats.compress.decompress_cnt); + rcomp.stat_decompress_tlen = atomic64_read(&alg->stats.compress.decompress_tlen); + rcomp.stat_err_cnt = atomic64_read(&alg->stats.compress.err_cnt); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_STAT_COMPRESS, sizeof(rcomp), &rcomp); } static int crypto_report_acomp(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat racomp; - u64 v64; - u32 v32; + struct crypto_stat_compress racomp; memset(&racomp, 0, sizeof(racomp)); - strlcpy(racomp.type, "acomp", sizeof(racomp.type)); - v32 = atomic_read(&alg->compress_cnt); - racomp.stat_compress_cnt = v32; - v64 = atomic64_read(&alg->compress_tlen); - racomp.stat_compress_tlen = v64; - v32 = atomic_read(&alg->decompress_cnt); - racomp.stat_decompress_cnt = v32; - v64 = atomic64_read(&alg->decompress_tlen); - racomp.stat_decompress_tlen = v64; - v32 = atomic_read(&alg->cipher_err_cnt); - racomp.stat_compress_err_cnt = v32; - - if (nla_put(skb, CRYPTOCFGA_STAT_ACOMP, - sizeof(struct crypto_stat), &racomp)) - goto nla_put_failure; - return 0; + strscpy(racomp.type, "acomp", sizeof(racomp.type)); + racomp.stat_compress_cnt = atomic64_read(&alg->stats.compress.compress_cnt); + 
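	/*
	 * [Editorial aside, not part of the original patch] Every reporting
	 * helper in this file now follows the same shape: zero the
	 * fixed-size netlink struct, strscpy() the type name, copy each
	 * atomic64 counter out of alg->stats, and emit the struct with a
	 * single nla_put() instead of the old goto-based error path.
	 */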
racomp.stat_compress_tlen = atomic64_read(&alg->stats.compress.compress_tlen); + racomp.stat_decompress_cnt = atomic64_read(&alg->stats.compress.decompress_cnt); + racomp.stat_decompress_tlen = atomic64_read(&alg->stats.compress.decompress_tlen); + racomp.stat_err_cnt = atomic64_read(&alg->stats.compress.err_cnt); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_STAT_ACOMP, sizeof(racomp), &racomp); } static int crypto_report_akcipher(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat rakcipher; - u64 v64; - u32 v32; + struct crypto_stat_akcipher rakcipher; memset(&rakcipher, 0, sizeof(rakcipher)); - strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); - v32 = atomic_read(&alg->encrypt_cnt); - rakcipher.stat_encrypt_cnt = v32; - v64 = atomic64_read(&alg->encrypt_tlen); - rakcipher.stat_encrypt_tlen = v64; - v32 = atomic_read(&alg->decrypt_cnt); - rakcipher.stat_decrypt_cnt = v32; - v64 = atomic64_read(&alg->decrypt_tlen); - rakcipher.stat_decrypt_tlen = v64; - v32 = atomic_read(&alg->sign_cnt); - rakcipher.stat_sign_cnt = v32; - v32 = atomic_read(&alg->verify_cnt); - rakcipher.stat_verify_cnt = v32; - v32 = atomic_read(&alg->akcipher_err_cnt); - rakcipher.stat_akcipher_err_cnt = v32; - - if (nla_put(skb, CRYPTOCFGA_STAT_AKCIPHER, - sizeof(struct crypto_stat), &rakcipher)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + strscpy(rakcipher.type, "akcipher", sizeof(rakcipher.type)); + rakcipher.stat_encrypt_cnt = atomic64_read(&alg->stats.akcipher.encrypt_cnt); + rakcipher.stat_encrypt_tlen = atomic64_read(&alg->stats.akcipher.encrypt_tlen); + rakcipher.stat_decrypt_cnt = atomic64_read(&alg->stats.akcipher.decrypt_cnt); + rakcipher.stat_decrypt_tlen = atomic64_read(&alg->stats.akcipher.decrypt_tlen); + rakcipher.stat_sign_cnt = atomic64_read(&alg->stats.akcipher.sign_cnt); + rakcipher.stat_verify_cnt = atomic64_read(&alg->stats.akcipher.verify_cnt); + rakcipher.stat_err_cnt = atomic64_read(&alg->stats.akcipher.err_cnt); + + return nla_put(skb, CRYPTOCFGA_STAT_AKCIPHER, + sizeof(rakcipher), &rakcipher); } static int crypto_report_kpp(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat rkpp; - u32 v; + struct crypto_stat_kpp rkpp; memset(&rkpp, 0, sizeof(rkpp)); - strlcpy(rkpp.type, "kpp", sizeof(rkpp.type)); - - v = atomic_read(&alg->setsecret_cnt); - rkpp.stat_setsecret_cnt = v; - v = atomic_read(&alg->generate_public_key_cnt); - rkpp.stat_generate_public_key_cnt = v; - v = atomic_read(&alg->compute_shared_secret_cnt); - rkpp.stat_compute_shared_secret_cnt = v; - v = atomic_read(&alg->kpp_err_cnt); - rkpp.stat_kpp_err_cnt = v; + strscpy(rkpp.type, "kpp", sizeof(rkpp.type)); - if (nla_put(skb, CRYPTOCFGA_STAT_KPP, - sizeof(struct crypto_stat), &rkpp)) - goto nla_put_failure; - return 0; + rkpp.stat_setsecret_cnt = atomic64_read(&alg->stats.kpp.setsecret_cnt); + rkpp.stat_generate_public_key_cnt = atomic64_read(&alg->stats.kpp.generate_public_key_cnt); + rkpp.stat_compute_shared_secret_cnt = atomic64_read(&alg->stats.kpp.compute_shared_secret_cnt); + rkpp.stat_err_cnt = atomic64_read(&alg->stats.kpp.err_cnt); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_STAT_KPP, sizeof(rkpp), &rkpp); } static int crypto_report_ahash(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat rhash; - u64 v64; - u32 v32; + struct crypto_stat_hash rhash; memset(&rhash, 0, sizeof(rhash)); - strncpy(rhash.type, "ahash", sizeof(rhash.type)); + strscpy(rhash.type, "ahash", 
sizeof(rhash.type)); - v32 = atomic_read(&alg->hash_cnt); - rhash.stat_hash_cnt = v32; - v64 = atomic64_read(&alg->hash_tlen); - rhash.stat_hash_tlen = v64; - v32 = atomic_read(&alg->hash_err_cnt); - rhash.stat_hash_err_cnt = v32; + rhash.stat_hash_cnt = atomic64_read(&alg->stats.hash.hash_cnt); + rhash.stat_hash_tlen = atomic64_read(&alg->stats.hash.hash_tlen); + rhash.stat_err_cnt = atomic64_read(&alg->stats.hash.err_cnt); - if (nla_put(skb, CRYPTOCFGA_STAT_HASH, - sizeof(struct crypto_stat), &rhash)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_STAT_HASH, sizeof(rhash), &rhash); } static int crypto_report_shash(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat rhash; - u64 v64; - u32 v32; + struct crypto_stat_hash rhash; memset(&rhash, 0, sizeof(rhash)); - strncpy(rhash.type, "shash", sizeof(rhash.type)); - - v32 = atomic_read(&alg->hash_cnt); - rhash.stat_hash_cnt = v32; - v64 = atomic64_read(&alg->hash_tlen); - rhash.stat_hash_tlen = v64; - v32 = atomic_read(&alg->hash_err_cnt); - rhash.stat_hash_err_cnt = v32; + strscpy(rhash.type, "shash", sizeof(rhash.type)); - if (nla_put(skb, CRYPTOCFGA_STAT_HASH, - sizeof(struct crypto_stat), &rhash)) - goto nla_put_failure; - return 0; + rhash.stat_hash_cnt = atomic64_read(&alg->stats.hash.hash_cnt); + rhash.stat_hash_tlen = atomic64_read(&alg->stats.hash.hash_tlen); + rhash.stat_err_cnt = atomic64_read(&alg->stats.hash.err_cnt); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_STAT_HASH, sizeof(rhash), &rhash); } static int crypto_report_rng(struct sk_buff *skb, struct crypto_alg *alg) { - struct crypto_stat rrng; - u64 v64; - u32 v32; + struct crypto_stat_rng rrng; memset(&rrng, 0, sizeof(rrng)); - strncpy(rrng.type, "rng", sizeof(rrng.type)); + strscpy(rrng.type, "rng", sizeof(rrng.type)); - v32 = atomic_read(&alg->generate_cnt); - rrng.stat_generate_cnt = v32; - v64 = atomic64_read(&alg->generate_tlen); - rrng.stat_generate_tlen = v64; - v32 = atomic_read(&alg->seed_cnt); - rrng.stat_seed_cnt = v32; - v32 = atomic_read(&alg->hash_err_cnt); - rrng.stat_rng_err_cnt = v32; - - if (nla_put(skb, CRYPTOCFGA_STAT_RNG, - sizeof(struct crypto_stat), &rrng)) - goto nla_put_failure; - return 0; + rrng.stat_generate_cnt = atomic64_read(&alg->stats.rng.generate_cnt); + rrng.stat_generate_tlen = atomic64_read(&alg->stats.rng.generate_tlen); + rrng.stat_seed_cnt = atomic64_read(&alg->stats.rng.seed_cnt); + rrng.stat_err_cnt = atomic64_read(&alg->stats.rng.err_cnt); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_STAT_RNG, sizeof(rrng), &rrng); } static int crypto_reportstat_one(struct crypto_alg *alg, @@ -295,10 +184,10 @@ static int crypto_reportstat_one(struct crypto_alg *alg, { memset(ualg, 0, sizeof(*ualg)); - strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); - strlcpy(ualg->cru_driver_name, alg->cra_driver_name, + strscpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name)); + strscpy(ualg->cru_driver_name, alg->cra_driver_name, sizeof(ualg->cru_driver_name)); - strlcpy(ualg->cru_module_name, module_name(alg->cra_module), + strscpy(ualg->cru_module_name, module_name(alg->cra_module), sizeof(ualg->cru_module_name)); ualg->cru_type = 0; @@ -309,12 +198,11 @@ static int crypto_reportstat_one(struct crypto_alg *alg, if (nla_put_u32(skb, CRYPTOCFGA_PRIORITY_VAL, alg->cra_priority)) goto nla_put_failure; if (alg->cra_flags & CRYPTO_ALG_LARVAL) { - struct crypto_stat rl; + struct crypto_stat_larval rl; memset(&rl, 0, 
sizeof(rl)); - strlcpy(rl.type, "larval", sizeof(rl.type)); - if (nla_put(skb, CRYPTOCFGA_STAT_LARVAL, - sizeof(struct crypto_stat), &rl)) + strscpy(rl.type, "larval", sizeof(rl.type)); + if (nla_put(skb, CRYPTOCFGA_STAT_LARVAL, sizeof(rl), &rl)) goto nla_put_failure; goto out; } @@ -448,37 +336,4 @@ drop_alg: return nlmsg_unicast(crypto_nlsk, skb, NETLINK_CB(in_skb).portid); } -int crypto_dump_reportstat(struct sk_buff *skb, struct netlink_callback *cb) -{ - struct crypto_alg *alg; - struct crypto_dump_info info; - int err; - - if (cb->args[0]) - goto out; - - cb->args[0] = 1; - - info.in_skb = cb->skb; - info.out_skb = skb; - info.nlmsg_seq = cb->nlh->nlmsg_seq; - info.nlmsg_flags = NLM_F_MULTI; - - list_for_each_entry(alg, &crypto_alg_list, cra_list) { - err = crypto_reportstat_alg(alg, &info); - if (err) - goto out_err; - } - -out: - return skb->len; -out_err: - return err; -} - -int crypto_dump_reportstat_done(struct netlink_callback *cb) -{ - return 0; -} - MODULE_LICENSE("GPL"); diff --git a/crypto/ctr.c b/crypto/ctr.c index 435b75bd619e..30f3946efc6d 100644 --- a/crypto/ctr.c +++ b/crypto/ctr.c @@ -233,8 +233,6 @@ static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb) inst->alg.cra_blkcipher.encrypt = crypto_ctr_crypt; inst->alg.cra_blkcipher.decrypt = crypto_ctr_crypt; - inst->alg.cra_blkcipher.geniv = "chainiv"; - out: crypto_mod_put(alg); return inst; diff --git a/crypto/ecc.c b/crypto/ecc.c index 8facafd67802..ed1237115066 100644 --- a/crypto/ecc.c +++ b/crypto/ecc.c @@ -842,15 +842,23 @@ static void xycz_add_c(u64 *x1, u64 *y1, u64 *x2, u64 *y2, u64 *curve_prime, static void ecc_point_mult(struct ecc_point *result, const struct ecc_point *point, const u64 *scalar, - u64 *initial_z, u64 *curve_prime, + u64 *initial_z, const struct ecc_curve *curve, unsigned int ndigits) { /* R0 and R1 */ u64 rx[2][ECC_MAX_DIGITS]; u64 ry[2][ECC_MAX_DIGITS]; u64 z[ECC_MAX_DIGITS]; + u64 sk[2][ECC_MAX_DIGITS]; + u64 *curve_prime = curve->p; int i, nb; - int num_bits = vli_num_bits(scalar, ndigits); + int num_bits; + int carry; + + carry = vli_add(sk[0], scalar, curve->n, ndigits); + vli_add(sk[1], sk[0], curve->n, ndigits); + scalar = sk[!carry]; + num_bits = sizeof(u64) * ndigits * 8 + 1; vli_set(rx[1], point->x, ndigits); vli_set(ry[1], point->y, ndigits); @@ -904,30 +912,43 @@ static inline void ecc_swap_digits(const u64 *in, u64 *out, out[i] = __swab64(in[ndigits - 1 - i]); } -int ecc_is_key_valid(unsigned int curve_id, unsigned int ndigits, - const u64 *private_key, unsigned int private_key_len) +static int __ecc_is_key_valid(const struct ecc_curve *curve, + const u64 *private_key, unsigned int ndigits) { - int nbytes; - const struct ecc_curve *curve = ecc_get_curve(curve_id); + u64 one[ECC_MAX_DIGITS] = { 1, }; + u64 res[ECC_MAX_DIGITS]; if (!private_key) return -EINVAL; - nbytes = ndigits << ECC_DIGITS_TO_BYTES_SHIFT; - - if (private_key_len != nbytes) + if (curve->g.ndigits != ndigits) return -EINVAL; - if (vli_is_zero(private_key, ndigits)) + /* Make sure the private key is in the range [2, n-3]. */ + if (vli_cmp(one, private_key, ndigits) != -1) return -EINVAL; - - /* Make sure the private key is in the range [1, n-1]. 
*/ - if (vli_cmp(curve->n, private_key, ndigits) != 1) + vli_sub(res, curve->n, one, ndigits); + vli_sub(res, res, one, ndigits); + if (vli_cmp(res, private_key, ndigits) != 1) return -EINVAL; return 0; } +int ecc_is_key_valid(unsigned int curve_id, unsigned int ndigits, + const u64 *private_key, unsigned int private_key_len) +{ + int nbytes; + const struct ecc_curve *curve = ecc_get_curve(curve_id); + + nbytes = ndigits << ECC_DIGITS_TO_BYTES_SHIFT; + + if (private_key_len != nbytes) + return -EINVAL; + + return __ecc_is_key_valid(curve, private_key, ndigits); +} + /* * ECC private keys are generated using the method of extra random bits, * equivalent to that described in FIPS 186-4, Appendix B.4.1. @@ -971,11 +992,8 @@ int ecc_gen_privkey(unsigned int curve_id, unsigned int ndigits, u64 *privkey) if (err) return err; - if (vli_is_zero(priv, ndigits)) - return -EINVAL; - - /* Make sure the private key is in the range [1, n-1]. */ - if (vli_cmp(curve->n, priv, ndigits) != 1) + /* Make sure the private key is in the valid range. */ + if (__ecc_is_key_valid(curve, priv, ndigits)) return -EINVAL; ecc_swap_digits(priv, privkey, ndigits); @@ -1004,7 +1022,7 @@ int ecc_make_pub_key(unsigned int curve_id, unsigned int ndigits, goto out; } - ecc_point_mult(pk, &curve->g, priv, NULL, curve->p, ndigits); + ecc_point_mult(pk, &curve->g, priv, NULL, curve, ndigits); if (ecc_point_is_zero(pk)) { ret = -EAGAIN; goto err_free_point; @@ -1090,7 +1108,7 @@ int crypto_ecdh_shared_secret(unsigned int curve_id, unsigned int ndigits, goto err_alloc_product; } - ecc_point_mult(product, pk, priv, rand_z, curve->p, ndigits); + ecc_point_mult(product, pk, priv, rand_z, curve, ndigits); ecc_swap_digits(product->x, secret, ndigits); diff --git a/crypto/hash_info.c b/crypto/hash_info.c index 7b1e0b188ce6..1dd095e4b451 100644 --- a/crypto/hash_info.c +++ b/crypto/hash_info.c @@ -32,6 +32,8 @@ const char *const hash_algo_name[HASH_ALGO__LAST] = { [HASH_ALGO_TGR_160] = "tgr160", [HASH_ALGO_TGR_192] = "tgr192", [HASH_ALGO_SM3_256] = "sm3-256", + [HASH_ALGO_STREEBOG_256] = "streebog256", + [HASH_ALGO_STREEBOG_512] = "streebog512", }; EXPORT_SYMBOL_GPL(hash_algo_name); @@ -54,5 +56,7 @@ const int hash_digest_size[HASH_ALGO__LAST] = { [HASH_ALGO_TGR_160] = TGR160_DIGEST_SIZE, [HASH_ALGO_TGR_192] = TGR192_DIGEST_SIZE, [HASH_ALGO_SM3_256] = SM3256_DIGEST_SIZE, + [HASH_ALGO_STREEBOG_256] = STREEBOG256_DIGEST_SIZE, + [HASH_ALGO_STREEBOG_512] = STREEBOG512_DIGEST_SIZE, }; EXPORT_SYMBOL_GPL(hash_digest_size); diff --git a/crypto/kpp.c b/crypto/kpp.c index a90edc27af77..bc2f1006a2f7 100644 --- a/crypto/kpp.c +++ b/crypto/kpp.c @@ -30,15 +30,11 @@ static int crypto_kpp_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_kpp rkpp; - strncpy(rkpp.type, "kpp", sizeof(rkpp.type)); + memset(&rkpp, 0, sizeof(rkpp)); - if (nla_put(skb, CRYPTOCFGA_REPORT_KPP, - sizeof(struct crypto_report_kpp), &rkpp)) - goto nla_put_failure; - return 0; + strscpy(rkpp.type, "kpp", sizeof(rkpp.type)); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_KPP, sizeof(rkpp), &rkpp); } #else static int crypto_kpp_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/lz4.c b/crypto/lz4.c index 2ce2660d3519..c160dfdbf2e0 100644 --- a/crypto/lz4.c +++ b/crypto/lz4.c @@ -122,7 +122,6 @@ static struct crypto_alg alg_lz4 = { .cra_flags = CRYPTO_ALG_TYPE_COMPRESS, .cra_ctxsize = sizeof(struct lz4_ctx), .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(alg_lz4.cra_list), .cra_init = lz4_init, 
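	/*
	 * [Editorial aside, not part of the original patch] Dropping the
	 * static .cra_list initializer here is safe: crypto_register_alg()
	 * links cra_list into crypto_alg_list at registration time (see the
	 * list_add() calls in __crypto_register_alg() earlier in this
	 * patch), so the field never needs a compile-time value.
	 */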
.cra_exit = lz4_exit, .cra_u = { .compress = { diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c index 2be14f054daf..583b5e013d7a 100644 --- a/crypto/lz4hc.c +++ b/crypto/lz4hc.c @@ -123,7 +123,6 @@ static struct crypto_alg alg_lz4hc = { .cra_flags = CRYPTO_ALG_TYPE_COMPRESS, .cra_ctxsize = sizeof(struct lz4hc_ctx), .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(alg_lz4hc.cra_list), .cra_init = lz4hc_init, .cra_exit = lz4hc_exit, .cra_u = { .compress = { diff --git a/crypto/nhpoly1305.c b/crypto/nhpoly1305.c new file mode 100644 index 000000000000..ec831a5594d8 --- /dev/null +++ b/crypto/nhpoly1305.c @@ -0,0 +1,254 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NHPoly1305 - ε-almost-∆-universal hash function for Adiantum + * + * Copyright 2018 Google LLC + */ + +/* + * "NHPoly1305" is the main component of Adiantum hashing. + * Specifically, it is the calculation + * + * H_L ← Poly1305_{K_L}(NH_{K_N}(pad_{128}(L))) + * + * from the procedure in section 6.4 of the Adiantum paper [1]. It is an + * ε-almost-∆-universal (ε-∆U) hash function for equal-length inputs over + * Z/(2^{128}Z), where the "∆" operation is addition. It hashes 1024-byte + * chunks of the input with the NH hash function [2], reducing the input length + * by 32x. The resulting NH digests are evaluated as a polynomial in + * GF(2^{130}-5), like in the Poly1305 MAC [3]. Note that the polynomial + * evaluation by itself would suffice to achieve the ε-∆U property; NH is used + * for performance since it's over twice as fast as Poly1305. + * + * This is *not* a cryptographic hash function; do not use it as such! + * + * [1] Adiantum: length-preserving encryption for entry-level processors + * (https://eprint.iacr.org/2018/720.pdf) + * [2] UMAC: Fast and Secure Message Authentication + * (https://fastcrypto.org/umac/umac_proc.pdf) + * [3] The Poly1305-AES message-authentication code + * (https://cr.yp.to/mac/poly1305-20050329.pdf) + */ + +#include +#include +#include +#include +#include +#include +#include + +static void nh_generic(const u32 *key, const u8 *message, size_t message_len, + __le64 hash[NH_NUM_PASSES]) +{ + u64 sums[4] = { 0, 0, 0, 0 }; + + BUILD_BUG_ON(NH_PAIR_STRIDE != 2); + BUILD_BUG_ON(NH_NUM_PASSES != 4); + + while (message_len) { + u32 m0 = get_unaligned_le32(message + 0); + u32 m1 = get_unaligned_le32(message + 4); + u32 m2 = get_unaligned_le32(message + 8); + u32 m3 = get_unaligned_le32(message + 12); + + sums[0] += (u64)(u32)(m0 + key[ 0]) * (u32)(m2 + key[ 2]); + sums[1] += (u64)(u32)(m0 + key[ 4]) * (u32)(m2 + key[ 6]); + sums[2] += (u64)(u32)(m0 + key[ 8]) * (u32)(m2 + key[10]); + sums[3] += (u64)(u32)(m0 + key[12]) * (u32)(m2 + key[14]); + sums[0] += (u64)(u32)(m1 + key[ 1]) * (u32)(m3 + key[ 3]); + sums[1] += (u64)(u32)(m1 + key[ 5]) * (u32)(m3 + key[ 7]); + sums[2] += (u64)(u32)(m1 + key[ 9]) * (u32)(m3 + key[11]); + sums[3] += (u64)(u32)(m1 + key[13]) * (u32)(m3 + key[15]); + key += NH_MESSAGE_UNIT / sizeof(key[0]); + message += NH_MESSAGE_UNIT; + message_len -= NH_MESSAGE_UNIT; + } + + hash[0] = cpu_to_le64(sums[0]); + hash[1] = cpu_to_le64(sums[1]); + hash[2] = cpu_to_le64(sums[2]); + hash[3] = cpu_to_le64(sums[3]); +} + +/* Pass the next NH hash value through Poly1305 */ +static void process_nh_hash_value(struct nhpoly1305_state *state, + const struct nhpoly1305_key *key) +{ + BUILD_BUG_ON(NH_HASH_BYTES % POLY1305_BLOCK_SIZE != 0); + + poly1305_core_blocks(&state->poly_state, &key->poly_key, state->nh_hash, + NH_HASH_BYTES / POLY1305_BLOCK_SIZE); +} + +/* + * Feed the next portion of the 
source data, as a whole number of 16-byte + * "NH message units", through NH and Poly1305. Each NH hash is taken over + * 1024 bytes, except possibly the final one which is taken over a multiple of + * 16 bytes up to 1024. Also, in the case where data is passed in misaligned + * chunks, we combine partial hashes; the end result is the same either way. + */ +static void nhpoly1305_units(struct nhpoly1305_state *state, + const struct nhpoly1305_key *key, + const u8 *src, unsigned int srclen, nh_t nh_fn) +{ + do { + unsigned int bytes; + + if (state->nh_remaining == 0) { + /* Starting a new NH message */ + bytes = min_t(unsigned int, srclen, NH_MESSAGE_BYTES); + nh_fn(key->nh_key, src, bytes, state->nh_hash); + state->nh_remaining = NH_MESSAGE_BYTES - bytes; + } else { + /* Continuing a previous NH message */ + __le64 tmp_hash[NH_NUM_PASSES]; + unsigned int pos; + int i; + + pos = NH_MESSAGE_BYTES - state->nh_remaining; + bytes = min(srclen, state->nh_remaining); + nh_fn(&key->nh_key[pos / 4], src, bytes, tmp_hash); + for (i = 0; i < NH_NUM_PASSES; i++) + le64_add_cpu(&state->nh_hash[i], + le64_to_cpu(tmp_hash[i])); + state->nh_remaining -= bytes; + } + if (state->nh_remaining == 0) + process_nh_hash_value(state, key); + src += bytes; + srclen -= bytes; + } while (srclen); +} + +int crypto_nhpoly1305_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen) +{ + struct nhpoly1305_key *ctx = crypto_shash_ctx(tfm); + int i; + + if (keylen != NHPOLY1305_KEY_SIZE) + return -EINVAL; + + poly1305_core_setkey(&ctx->poly_key, key); + key += POLY1305_BLOCK_SIZE; + + for (i = 0; i < NH_KEY_WORDS; i++) + ctx->nh_key[i] = get_unaligned_le32(key + i * sizeof(u32)); + + return 0; +} +EXPORT_SYMBOL(crypto_nhpoly1305_setkey); + +int crypto_nhpoly1305_init(struct shash_desc *desc) +{ + struct nhpoly1305_state *state = shash_desc_ctx(desc); + + poly1305_core_init(&state->poly_state); + state->buflen = 0; + state->nh_remaining = 0; + return 0; +} +EXPORT_SYMBOL(crypto_nhpoly1305_init); + +int crypto_nhpoly1305_update_helper(struct shash_desc *desc, + const u8 *src, unsigned int srclen, + nh_t nh_fn) +{ + struct nhpoly1305_state *state = shash_desc_ctx(desc); + const struct nhpoly1305_key *key = crypto_shash_ctx(desc->tfm); + unsigned int bytes; + + if (state->buflen) { + bytes = min(srclen, (int)NH_MESSAGE_UNIT - state->buflen); + memcpy(&state->buffer[state->buflen], src, bytes); + state->buflen += bytes; + if (state->buflen < NH_MESSAGE_UNIT) + return 0; + nhpoly1305_units(state, key, state->buffer, NH_MESSAGE_UNIT, + nh_fn); + state->buflen = 0; + src += bytes; + srclen -= bytes; + } + + if (srclen >= NH_MESSAGE_UNIT) { + bytes = round_down(srclen, NH_MESSAGE_UNIT); + nhpoly1305_units(state, key, src, bytes, nh_fn); + src += bytes; + srclen -= bytes; + } + + if (srclen) { + memcpy(state->buffer, src, srclen); + state->buflen = srclen; + } + return 0; +} +EXPORT_SYMBOL(crypto_nhpoly1305_update_helper); + +int crypto_nhpoly1305_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen) +{ + return crypto_nhpoly1305_update_helper(desc, src, srclen, nh_generic); +} +EXPORT_SYMBOL(crypto_nhpoly1305_update); + +int crypto_nhpoly1305_final_helper(struct shash_desc *desc, u8 *dst, nh_t nh_fn) +{ + struct nhpoly1305_state *state = shash_desc_ctx(desc); + const struct nhpoly1305_key *key = crypto_shash_ctx(desc->tfm); + + if (state->buflen) { + memset(&state->buffer[state->buflen], 0, + NH_MESSAGE_UNIT - state->buflen); + nhpoly1305_units(state, key, state->buffer, NH_MESSAGE_UNIT, + nh_fn); + 
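		/*
		 * [Editorial aside, not part of the original patch] The
		 * memset above zero-pads the trailing partial 16-byte unit
		 * so the final chunk is hashed as a whole NH message unit --
		 * the pad_128() step of Adiantum hashing described in the
		 * header comment.
		 */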
} + + if (state->nh_remaining) + process_nh_hash_value(state, key); + + poly1305_core_emit(&state->poly_state, dst); + return 0; +} +EXPORT_SYMBOL(crypto_nhpoly1305_final_helper); + +int crypto_nhpoly1305_final(struct shash_desc *desc, u8 *dst) +{ + return crypto_nhpoly1305_final_helper(desc, dst, nh_generic); +} +EXPORT_SYMBOL(crypto_nhpoly1305_final); + +static struct shash_alg nhpoly1305_alg = { + .base.cra_name = "nhpoly1305", + .base.cra_driver_name = "nhpoly1305-generic", + .base.cra_priority = 100, + .base.cra_ctxsize = sizeof(struct nhpoly1305_key), + .base.cra_module = THIS_MODULE, + .digestsize = POLY1305_DIGEST_SIZE, + .init = crypto_nhpoly1305_init, + .update = crypto_nhpoly1305_update, + .final = crypto_nhpoly1305_final, + .setkey = crypto_nhpoly1305_setkey, + .descsize = sizeof(struct nhpoly1305_state), +}; + +static int __init nhpoly1305_mod_init(void) +{ + return crypto_register_shash(&nhpoly1305_alg); +} + +static void __exit nhpoly1305_mod_exit(void) +{ + crypto_unregister_shash(&nhpoly1305_alg); +} + +module_init(nhpoly1305_mod_init); +module_exit(nhpoly1305_mod_exit); + +MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("nhpoly1305"); +MODULE_ALIAS_CRYPTO("nhpoly1305-generic"); diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c index 8eb3c4c9ff67..d47cfc47b1b1 100644 --- a/crypto/pcrypt.c +++ b/crypto/pcrypt.c @@ -394,7 +394,7 @@ static int pcrypt_sysfs_add(struct padata_instance *pinst, const char *name) int ret; pinst->kobj.kset = pcrypt_kset; - ret = kobject_add(&pinst->kobj, NULL, name); + ret = kobject_add(&pinst->kobj, NULL, "%s", name); if (!ret) kobject_uevent(&pinst->kobj, KOBJ_ADD); diff --git a/crypto/poly1305_generic.c b/crypto/poly1305_generic.c index 47d3a6b83931..2a06874204e8 100644 --- a/crypto/poly1305_generic.c +++ b/crypto/poly1305_generic.c @@ -38,7 +38,7 @@ int crypto_poly1305_init(struct shash_desc *desc) { struct poly1305_desc_ctx *dctx = shash_desc_ctx(desc); - memset(dctx->h, 0, sizeof(dctx->h)); + poly1305_core_init(&dctx->h); dctx->buflen = 0; dctx->rset = false; dctx->sset = false; @@ -47,23 +47,16 @@ int crypto_poly1305_init(struct shash_desc *desc) } EXPORT_SYMBOL_GPL(crypto_poly1305_init); -static void poly1305_setrkey(struct poly1305_desc_ctx *dctx, const u8 *key) +void poly1305_core_setkey(struct poly1305_key *key, const u8 *raw_key) { /* r &= 0xffffffc0ffffffc0ffffffc0fffffff */ - dctx->r[0] = (get_unaligned_le32(key + 0) >> 0) & 0x3ffffff; - dctx->r[1] = (get_unaligned_le32(key + 3) >> 2) & 0x3ffff03; - dctx->r[2] = (get_unaligned_le32(key + 6) >> 4) & 0x3ffc0ff; - dctx->r[3] = (get_unaligned_le32(key + 9) >> 6) & 0x3f03fff; - dctx->r[4] = (get_unaligned_le32(key + 12) >> 8) & 0x00fffff; -} - -static void poly1305_setskey(struct poly1305_desc_ctx *dctx, const u8 *key) -{ - dctx->s[0] = get_unaligned_le32(key + 0); - dctx->s[1] = get_unaligned_le32(key + 4); - dctx->s[2] = get_unaligned_le32(key + 8); - dctx->s[3] = get_unaligned_le32(key + 12); + key->r[0] = (get_unaligned_le32(raw_key + 0) >> 0) & 0x3ffffff; + key->r[1] = (get_unaligned_le32(raw_key + 3) >> 2) & 0x3ffff03; + key->r[2] = (get_unaligned_le32(raw_key + 6) >> 4) & 0x3ffc0ff; + key->r[3] = (get_unaligned_le32(raw_key + 9) >> 6) & 0x3f03fff; + key->r[4] = (get_unaligned_le32(raw_key + 12) >> 8) & 0x00fffff; } +EXPORT_SYMBOL_GPL(poly1305_core_setkey); /* * Poly1305 requires a unique key for each tag, which implies that we can't set @@ -75,13 +68,16 @@ unsigned int 
crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx, { if (!dctx->sset) { if (!dctx->rset && srclen >= POLY1305_BLOCK_SIZE) { - poly1305_setrkey(dctx, src); + poly1305_core_setkey(&dctx->r, src); src += POLY1305_BLOCK_SIZE; srclen -= POLY1305_BLOCK_SIZE; dctx->rset = true; } if (srclen >= POLY1305_BLOCK_SIZE) { - poly1305_setskey(dctx, src); + dctx->s[0] = get_unaligned_le32(src + 0); + dctx->s[1] = get_unaligned_le32(src + 4); + dctx->s[2] = get_unaligned_le32(src + 8); + dctx->s[3] = get_unaligned_le32(src + 12); src += POLY1305_BLOCK_SIZE; srclen -= POLY1305_BLOCK_SIZE; dctx->sset = true; @@ -91,41 +87,37 @@ unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx, } EXPORT_SYMBOL_GPL(crypto_poly1305_setdesckey); -static unsigned int poly1305_blocks(struct poly1305_desc_ctx *dctx, - const u8 *src, unsigned int srclen, - u32 hibit) +static void poly1305_blocks_internal(struct poly1305_state *state, + const struct poly1305_key *key, + const void *src, unsigned int nblocks, + u32 hibit) { u32 r0, r1, r2, r3, r4; u32 s1, s2, s3, s4; u32 h0, h1, h2, h3, h4; u64 d0, d1, d2, d3, d4; - unsigned int datalen; - if (unlikely(!dctx->sset)) { - datalen = crypto_poly1305_setdesckey(dctx, src, srclen); - src += srclen - datalen; - srclen = datalen; - } + if (!nblocks) + return; - r0 = dctx->r[0]; - r1 = dctx->r[1]; - r2 = dctx->r[2]; - r3 = dctx->r[3]; - r4 = dctx->r[4]; + r0 = key->r[0]; + r1 = key->r[1]; + r2 = key->r[2]; + r3 = key->r[3]; + r4 = key->r[4]; s1 = r1 * 5; s2 = r2 * 5; s3 = r3 * 5; s4 = r4 * 5; - h0 = dctx->h[0]; - h1 = dctx->h[1]; - h2 = dctx->h[2]; - h3 = dctx->h[3]; - h4 = dctx->h[4]; - - while (likely(srclen >= POLY1305_BLOCK_SIZE)) { + h0 = state->h[0]; + h1 = state->h[1]; + h2 = state->h[2]; + h3 = state->h[3]; + h4 = state->h[4]; + do { /* h += m[i] */ h0 += (get_unaligned_le32(src + 0) >> 0) & 0x3ffffff; h1 += (get_unaligned_le32(src + 3) >> 2) & 0x3ffffff; @@ -154,16 +146,36 @@ static unsigned int poly1305_blocks(struct poly1305_desc_ctx *dctx, h1 += h0 >> 26; h0 = h0 & 0x3ffffff; src += POLY1305_BLOCK_SIZE; - srclen -= POLY1305_BLOCK_SIZE; - } + } while (--nblocks); - dctx->h[0] = h0; - dctx->h[1] = h1; - dctx->h[2] = h2; - dctx->h[3] = h3; - dctx->h[4] = h4; + state->h[0] = h0; + state->h[1] = h1; + state->h[2] = h2; + state->h[3] = h3; + state->h[4] = h4; +} - return srclen; +void poly1305_core_blocks(struct poly1305_state *state, + const struct poly1305_key *key, + const void *src, unsigned int nblocks) +{ + poly1305_blocks_internal(state, key, src, nblocks, 1 << 24); +} +EXPORT_SYMBOL_GPL(poly1305_core_blocks); + +static void poly1305_blocks(struct poly1305_desc_ctx *dctx, + const u8 *src, unsigned int srclen, u32 hibit) +{ + unsigned int datalen; + + if (unlikely(!dctx->sset)) { + datalen = crypto_poly1305_setdesckey(dctx, src, srclen); + src += srclen - datalen; + srclen = datalen; + } + + poly1305_blocks_internal(&dctx->h, &dctx->r, + src, srclen / POLY1305_BLOCK_SIZE, hibit); } int crypto_poly1305_update(struct shash_desc *desc, @@ -187,9 +199,9 @@ int crypto_poly1305_update(struct shash_desc *desc, } if (likely(srclen >= POLY1305_BLOCK_SIZE)) { - bytes = poly1305_blocks(dctx, src, srclen, 1 << 24); - src += srclen - bytes; - srclen = bytes; + poly1305_blocks(dctx, src, srclen, 1 << 24); + src += srclen - (srclen % POLY1305_BLOCK_SIZE); + srclen %= POLY1305_BLOCK_SIZE; } if (unlikely(srclen)) { @@ -201,30 +213,18 @@ int crypto_poly1305_update(struct shash_desc *desc, } EXPORT_SYMBOL_GPL(crypto_poly1305_update); -int crypto_poly1305_final(struct 
shash_desc *desc, u8 *dst) +void poly1305_core_emit(const struct poly1305_state *state, void *dst) { - struct poly1305_desc_ctx *dctx = shash_desc_ctx(desc); u32 h0, h1, h2, h3, h4; u32 g0, g1, g2, g3, g4; u32 mask; - u64 f = 0; - - if (unlikely(!dctx->sset)) - return -ENOKEY; - - if (unlikely(dctx->buflen)) { - dctx->buf[dctx->buflen++] = 1; - memset(dctx->buf + dctx->buflen, 0, - POLY1305_BLOCK_SIZE - dctx->buflen); - poly1305_blocks(dctx, dctx->buf, POLY1305_BLOCK_SIZE, 0); - } /* fully carry h */ - h0 = dctx->h[0]; - h1 = dctx->h[1]; - h2 = dctx->h[2]; - h3 = dctx->h[3]; - h4 = dctx->h[4]; + h0 = state->h[0]; + h1 = state->h[1]; + h2 = state->h[2]; + h3 = state->h[3]; + h4 = state->h[4]; h2 += (h1 >> 26); h1 = h1 & 0x3ffffff; h3 += (h2 >> 26); h2 = h2 & 0x3ffffff; @@ -254,16 +254,40 @@ int crypto_poly1305_final(struct shash_desc *desc, u8 *dst) h4 = (h4 & mask) | g4; /* h = h % (2^128) */ - h0 = (h0 >> 0) | (h1 << 26); - h1 = (h1 >> 6) | (h2 << 20); - h2 = (h2 >> 12) | (h3 << 14); - h3 = (h3 >> 18) | (h4 << 8); + put_unaligned_le32((h0 >> 0) | (h1 << 26), dst + 0); + put_unaligned_le32((h1 >> 6) | (h2 << 20), dst + 4); + put_unaligned_le32((h2 >> 12) | (h3 << 14), dst + 8); + put_unaligned_le32((h3 >> 18) | (h4 << 8), dst + 12); +} +EXPORT_SYMBOL_GPL(poly1305_core_emit); + +int crypto_poly1305_final(struct shash_desc *desc, u8 *dst) +{ + struct poly1305_desc_ctx *dctx = shash_desc_ctx(desc); + __le32 digest[4]; + u64 f = 0; + + if (unlikely(!dctx->sset)) + return -ENOKEY; + + if (unlikely(dctx->buflen)) { + dctx->buf[dctx->buflen++] = 1; + memset(dctx->buf + dctx->buflen, 0, + POLY1305_BLOCK_SIZE - dctx->buflen); + poly1305_blocks(dctx, dctx->buf, POLY1305_BLOCK_SIZE, 0); + } + + poly1305_core_emit(&dctx->h, digest); /* mac = (h + s) % (2^128) */ - f = (f >> 32) + h0 + dctx->s[0]; put_unaligned_le32(f, dst + 0); - f = (f >> 32) + h1 + dctx->s[1]; put_unaligned_le32(f, dst + 4); - f = (f >> 32) + h2 + dctx->s[2]; put_unaligned_le32(f, dst + 8); - f = (f >> 32) + h3 + dctx->s[3]; put_unaligned_le32(f, dst + 12); + f = (f >> 32) + le32_to_cpu(digest[0]) + dctx->s[0]; + put_unaligned_le32(f, dst + 0); + f = (f >> 32) + le32_to_cpu(digest[1]) + dctx->s[1]; + put_unaligned_le32(f, dst + 4); + f = (f >> 32) + le32_to_cpu(digest[2]) + dctx->s[2]; + put_unaligned_le32(f, dst + 8); + f = (f >> 32) + le32_to_cpu(digest[3]) + dctx->s[3]; + put_unaligned_le32(f, dst + 12); return 0; } diff --git a/crypto/rng.c b/crypto/rng.c index 547f16ecbfb0..33c38a72bff5 100644 --- a/crypto/rng.c +++ b/crypto/rng.c @@ -35,9 +35,11 @@ static int crypto_default_rng_refcnt; int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen) { + struct crypto_alg *alg = tfm->base.__crt_alg; u8 *buf = NULL; int err; + crypto_stats_get(alg); if (!seed && slen) { buf = kmalloc(slen, GFP_KERNEL); if (!buf) @@ -50,7 +52,7 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen) } err = crypto_rng_alg(tfm)->seed(tfm, seed, slen); - crypto_stat_rng_seed(tfm, err); + crypto_stats_rng_seed(alg, err); out: kzfree(buf); return err; @@ -74,17 +76,13 @@ static int crypto_rng_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_rng rrng; - strncpy(rrng.type, "rng", sizeof(rrng.type)); + memset(&rrng, 0, sizeof(rrng)); - rrng.seedsize = seedsize(alg); + strscpy(rrng.type, "rng", sizeof(rrng.type)); - if (nla_put(skb, CRYPTOCFGA_REPORT_RNG, - sizeof(struct crypto_report_rng), &rrng)) - goto nla_put_failure; - return 0; + rrng.seedsize = seedsize(alg); -nla_put_failure: - 
return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_RNG, sizeof(rrng), &rrng); } #else static int crypto_rng_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c index 8c77bc78a09f..00fce32ae17a 100644 --- a/crypto/salsa20_generic.c +++ b/crypto/salsa20_generic.c @@ -159,7 +159,7 @@ static int salsa20_crypt(struct skcipher_request *req) u32 state[16]; int err; - err = skcipher_walk_virt(&walk, req, true); + err = skcipher_walk_virt(&walk, req, false); salsa20_init(state, ctx, walk.iv); diff --git a/crypto/scompress.c b/crypto/scompress.c index 968bbcf65c94..6f8305f8c300 100644 --- a/crypto/scompress.c +++ b/crypto/scompress.c @@ -40,15 +40,12 @@ static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg) { struct crypto_report_comp rscomp; - strncpy(rscomp.type, "scomp", sizeof(rscomp.type)); + memset(&rscomp, 0, sizeof(rscomp)); - if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, - sizeof(struct crypto_report_comp), &rscomp)) - goto nla_put_failure; - return 0; + strscpy(rscomp.type, "scomp", sizeof(rscomp.type)); -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, + sizeof(rscomp), &rscomp); } #else static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/shash.c b/crypto/shash.c index d21f04d70dce..44d297b82a8f 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -408,18 +408,14 @@ static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg) struct crypto_report_hash rhash; struct shash_alg *salg = __crypto_shash_alg(alg); - strncpy(rhash.type, "shash", sizeof(rhash.type)); + memset(&rhash, 0, sizeof(rhash)); + + strscpy(rhash.type, "shash", sizeof(rhash.type)); rhash.blocksize = alg->cra_blocksize; rhash.digestsize = salg->digestsize; - if (nla_put(skb, CRYPTOCFGA_REPORT_HASH, - sizeof(struct crypto_report_hash), &rhash)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_HASH, sizeof(rhash), &rhash); } #else static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/skcipher.c b/crypto/skcipher.c index 4caab81d2d02..2a969296bc24 100644 --- a/crypto/skcipher.c +++ b/crypto/skcipher.c @@ -474,6 +474,8 @@ int skcipher_walk_virt(struct skcipher_walk *walk, { int err; + might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP); + walk->flags &= ~SKCIPHER_WALK_PHYS; err = skcipher_walk_skcipher(walk, req); @@ -577,8 +579,7 @@ static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg) if (alg->cra_type == &crypto_blkcipher_type) return sizeof(struct crypto_blkcipher *); - if (alg->cra_type == &crypto_ablkcipher_type || - alg->cra_type == &crypto_givcipher_type) + if (alg->cra_type == &crypto_ablkcipher_type) return sizeof(struct crypto_ablkcipher *); return crypto_alg_extsize(alg); @@ -842,8 +843,7 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm) if (tfm->__crt_alg->cra_type == &crypto_blkcipher_type) return crypto_init_skcipher_ops_blkcipher(tfm); - if (tfm->__crt_alg->cra_type == &crypto_ablkcipher_type || - tfm->__crt_alg->cra_type == &crypto_givcipher_type) + if (tfm->__crt_alg->cra_type == &crypto_ablkcipher_type) return crypto_init_skcipher_ops_ablkcipher(tfm); skcipher->setkey = skcipher_setkey; @@ -897,21 +897,18 @@ static int crypto_skcipher_report(struct sk_buff *skb, struct crypto_alg *alg) struct skcipher_alg *skcipher = container_of(alg, struct skcipher_alg, base); - 
strncpy(rblkcipher.type, "skcipher", sizeof(rblkcipher.type)); - strncpy(rblkcipher.geniv, "", sizeof(rblkcipher.geniv)); + memset(&rblkcipher, 0, sizeof(rblkcipher)); + + strscpy(rblkcipher.type, "skcipher", sizeof(rblkcipher.type)); + strscpy(rblkcipher.geniv, "", sizeof(rblkcipher.geniv)); rblkcipher.blocksize = alg->cra_blocksize; rblkcipher.min_keysize = skcipher->min_keysize; rblkcipher.max_keysize = skcipher->max_keysize; rblkcipher.ivsize = skcipher->ivsize; - if (nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER, - sizeof(struct crypto_report_blkcipher), &rblkcipher)) - goto nla_put_failure; - return 0; - -nla_put_failure: - return -EMSGSIZE; + return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER, + sizeof(rblkcipher), &rblkcipher); } #else static int crypto_skcipher_report(struct sk_buff *skb, struct crypto_alg *alg) diff --git a/crypto/streebog_generic.c b/crypto/streebog_generic.c new file mode 100644 index 000000000000..03272a22afce --- /dev/null +++ b/crypto/streebog_generic.c @@ -0,0 +1,1140 @@ +// SPDX-License-Identifier: GPL-2.0+ OR BSD-2-Clause +/* + * Streebog hash function as specified by GOST R 34.11-2012 and + * described at https://tools.ietf.org/html/rfc6986 + * + * Copyright (c) 2013 Alexey Degtyarev + * Copyright (c) 2018 Vitaly Chikunov + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the Free + * Software Foundation; either version 2 of the License, or (at your option) + * any later version. + */ + +#include <crypto/internal/hash.h> +#include <linux/module.h> +#include <linux/crypto.h> +#include <crypto/streebog.h> + +static const struct streebog_uint512 buffer0 = { { + 0, 0, 0, 0, 0, 0, 0, 0 +} }; + +static const struct streebog_uint512 buffer512 = { { + cpu_to_le64(0x200), 0, 0, 0, 0, 0, 0, 0 +} }; + +static const struct streebog_uint512 C[12] = { + { { + cpu_to_le64(0xdd806559f2a64507ULL), + cpu_to_le64(0x05767436cc744d23ULL), + cpu_to_le64(0xa2422a08a460d315ULL), + cpu_to_le64(0x4b7ce09192676901ULL), + cpu_to_le64(0x714eb88d7585c4fcULL), + cpu_to_le64(0x2f6a76432e45d016ULL), + cpu_to_le64(0xebcb2f81c0657c1fULL), + cpu_to_le64(0xb1085bda1ecadae9ULL) + } }, + { { + cpu_to_le64(0xe679047021b19bb7ULL), + cpu_to_le64(0x55dda21bd7cbcd56ULL), + cpu_to_le64(0x5cb561c2db0aa7caULL), + cpu_to_le64(0x9ab5176b12d69958ULL), + cpu_to_le64(0x61d55e0f16b50131ULL), + cpu_to_le64(0xf3feea720a232b98ULL), + cpu_to_le64(0x4fe39d460f70b5d7ULL), + cpu_to_le64(0x6fa3b58aa99d2f1aULL) + } }, + { { + cpu_to_le64(0x991e96f50aba0ab2ULL), + cpu_to_le64(0xc2b6f443867adb31ULL), + cpu_to_le64(0xc1c93a376062db09ULL), + cpu_to_le64(0xd3e20fe490359eb1ULL), + cpu_to_le64(0xf2ea7514b1297b7bULL), + cpu_to_le64(0x06f15e5f529c1f8bULL), + cpu_to_le64(0x0a39fc286a3d8435ULL), + cpu_to_le64(0xf574dcac2bce2fc7ULL) + } }, + { { + cpu_to_le64(0x220cbebc84e3d12eULL), + cpu_to_le64(0x3453eaa193e837f1ULL), + cpu_to_le64(0xd8b71333935203beULL), + cpu_to_le64(0xa9d72c82ed03d675ULL), + cpu_to_le64(0x9d721cad685e353fULL), + cpu_to_le64(0x488e857e335c3c7dULL), + cpu_to_le64(0xf948e1a05d71e4ddULL), + cpu_to_le64(0xef1fdfb3e81566d2ULL) + } }, + { { + cpu_to_le64(0x601758fd7c6cfe57ULL), + cpu_to_le64(0x7a56a27ea9ea63f5ULL), + cpu_to_le64(0xdfff00b723271a16ULL), + cpu_to_le64(0xbfcd1747253af5a3ULL), + cpu_to_le64(0x359e35d7800fffbdULL), + cpu_to_le64(0x7f151c1f1686104aULL), + cpu_to_le64(0x9a3f410c6ca92363ULL), + cpu_to_le64(0x4bea6bacad474799ULL) + } }, + { { + cpu_to_le64(0xfa68407a46647d6eULL), + cpu_to_le64(0xbf71c57236904f35ULL), + cpu_to_le64(0x0af21f66c2bec6b6ULL), + cpu_to_le64(0xcffaa6b71c9ab7b4ULL), 
+ cpu_to_le64(0x187f9ab49af08ec6ULL), + cpu_to_le64(0x2d66c4f95142a46cULL), + cpu_to_le64(0x6fa4c33b7a3039c0ULL), + cpu_to_le64(0xae4faeae1d3ad3d9ULL) + } }, + { { + cpu_to_le64(0x8886564d3a14d493ULL), + cpu_to_le64(0x3517454ca23c4af3ULL), + cpu_to_le64(0x06476983284a0504ULL), + cpu_to_le64(0x0992abc52d822c37ULL), + cpu_to_le64(0xd3473e33197a93c9ULL), + cpu_to_le64(0x399ec6c7e6bf87c9ULL), + cpu_to_le64(0x51ac86febf240954ULL), + cpu_to_le64(0xf4c70e16eeaac5ecULL) + } }, + { { + cpu_to_le64(0xa47f0dd4bf02e71eULL), + cpu_to_le64(0x36acc2355951a8d9ULL), + cpu_to_le64(0x69d18d2bd1a5c42fULL), + cpu_to_le64(0xf4892bcb929b0690ULL), + cpu_to_le64(0x89b4443b4ddbc49aULL), + cpu_to_le64(0x4eb7f8719c36de1eULL), + cpu_to_le64(0x03e7aa020c6e4141ULL), + cpu_to_le64(0x9b1f5b424d93c9a7ULL) + } }, + { { + cpu_to_le64(0x7261445183235adbULL), + cpu_to_le64(0x0e38dc92cb1f2a60ULL), + cpu_to_le64(0x7b2b8a9aa6079c54ULL), + cpu_to_le64(0x800a440bdbb2ceb1ULL), + cpu_to_le64(0x3cd955b7e00d0984ULL), + cpu_to_le64(0x3a7d3a1b25894224ULL), + cpu_to_le64(0x944c9ad8ec165fdeULL), + cpu_to_le64(0x378f5a541631229bULL) + } }, + { { + cpu_to_le64(0x74b4c7fb98459cedULL), + cpu_to_le64(0x3698fad1153bb6c3ULL), + cpu_to_le64(0x7a1e6c303b7652f4ULL), + cpu_to_le64(0x9fe76702af69334bULL), + cpu_to_le64(0x1fffe18a1b336103ULL), + cpu_to_le64(0x8941e71cff8a78dbULL), + cpu_to_le64(0x382ae548b2e4f3f3ULL), + cpu_to_le64(0xabbedea680056f52ULL) + } }, + { { + cpu_to_le64(0x6bcaa4cd81f32d1bULL), + cpu_to_le64(0xdea2594ac06fd85dULL), + cpu_to_le64(0xefbacd1d7d476e98ULL), + cpu_to_le64(0x8a1d71efea48b9caULL), + cpu_to_le64(0x2001802114846679ULL), + cpu_to_le64(0xd8fa6bbbebab0761ULL), + cpu_to_le64(0x3002c6cd635afe94ULL), + cpu_to_le64(0x7bcd9ed0efc889fbULL) + } }, + { { + cpu_to_le64(0x48bc924af11bd720ULL), + cpu_to_le64(0xfaf417d5d9b21b99ULL), + cpu_to_le64(0xe71da4aa88e12852ULL), + cpu_to_le64(0x5d80ef9d1891cc86ULL), + cpu_to_le64(0xf82012d430219f9bULL), + cpu_to_le64(0xcda43c32bcdf1d77ULL), + cpu_to_le64(0xd21380b00449b17aULL), + cpu_to_le64(0x378ee767f11631baULL) + } } +}; + +static const u8 Tau[64] = { + 0, 8, 16, 24, 32, 40, 48, 56, + 1, 9, 17, 25, 33, 41, 49, 57, + 2, 10, 18, 26, 34, 42, 50, 58, + 3, 11, 19, 27, 35, 43, 51, 59, + 4, 12, 20, 28, 36, 44, 52, 60, + 5, 13, 21, 29, 37, 45, 53, 61, + 6, 14, 22, 30, 38, 46, 54, 62, + 7, 15, 23, 31, 39, 47, 55, 63 +}; + +static const u8 Pi[256] = { + 252, 238, 221, 17, 207, 110, 49, 22, + 251, 196, 250, 218, 35, 197, 4, 77, + 233, 119, 240, 219, 147, 46, 153, 186, + 23, 54, 241, 187, 20, 205, 95, 193, + 249, 24, 101, 90, 226, 92, 239, 33, + 129, 28, 60, 66, 139, 1, 142, 79, + 5, 132, 2, 174, 227, 106, 143, 160, + 6, 11, 237, 152, 127, 212, 211, 31, + 235, 52, 44, 81, 234, 200, 72, 171, + 242, 42, 104, 162, 253, 58, 206, 204, + 181, 112, 14, 86, 8, 12, 118, 18, + 191, 114, 19, 71, 156, 183, 93, 135, + 21, 161, 150, 41, 16, 123, 154, 199, + 243, 145, 120, 111, 157, 158, 178, 177, + 50, 117, 25, 61, 255, 53, 138, 126, + 109, 84, 198, 128, 195, 189, 13, 87, + 223, 245, 36, 169, 62, 168, 67, 201, + 215, 121, 214, 246, 124, 34, 185, 3, + 224, 15, 236, 222, 122, 148, 176, 188, + 220, 232, 40, 80, 78, 51, 10, 74, + 167, 151, 96, 115, 30, 0, 98, 68, + 26, 184, 56, 130, 100, 159, 38, 65, + 173, 69, 70, 146, 39, 94, 85, 47, + 140, 163, 165, 125, 105, 213, 149, 59, + 7, 88, 179, 64, 134, 172, 29, 247, + 48, 55, 107, 228, 136, 217, 231, 137, + 225, 27, 131, 73, 76, 63, 248, 254, + 141, 83, 170, 144, 202, 216, 133, 97, + 32, 113, 103, 164, 45, 43, 9, 91, + 203, 155, 37, 208, 190, 229, 108, 82, + 89, 166, 116, 
210, 230, 244, 180, 192, + 209, 102, 175, 194, 57, 75, 99, 182 +}; + +static const unsigned long long Ax[8][256] = { + { + 0xd01f715b5c7ef8e6ULL, 0x16fa240980778325ULL, 0xa8a42e857ee049c8ULL, + 0x6ac1068fa186465bULL, 0x6e417bd7a2e9320bULL, 0x665c8167a437daabULL, + 0x7666681aa89617f6ULL, 0x4b959163700bdcf5ULL, 0xf14be6b78df36248ULL, + 0xc585bd689a625cffULL, 0x9557d7fca67d82cbULL, 0x89f0b969af6dd366ULL, + 0xb0833d48749f6c35ULL, 0xa1998c23b1ecbc7cULL, 0x8d70c431ac02a736ULL, + 0xd6dfbc2fd0a8b69eULL, 0x37aeb3e551fa198bULL, 0x0b7d128a40b5cf9cULL, + 0x5a8f2008b5780cbcULL, 0xedec882284e333e5ULL, 0xd25fc177d3c7c2ceULL, + 0x5e0f5d50b61778ecULL, 0x1d873683c0c24cb9ULL, 0xad040bcbb45d208cULL, + 0x2f89a0285b853c76ULL, 0x5732fff6791b8d58ULL, 0x3e9311439ef6ec3fULL, + 0xc9183a809fd3c00fULL, 0x83adf3f5260a01eeULL, 0xa6791941f4e8ef10ULL, + 0x103ae97d0ca1cd5dULL, 0x2ce948121dee1b4aULL, 0x39738421dbf2bf53ULL, + 0x093da2a6cf0cf5b4ULL, 0xcd9847d89cbcb45fULL, 0xf9561c078b2d8ae8ULL, + 0x9c6a755a6971777fULL, 0xbc1ebaa0712ef0c5ULL, 0x72e61542abf963a6ULL, + 0x78bb5fde229eb12eULL, 0x14ba94250fceb90dULL, 0x844d6697630e5282ULL, + 0x98ea08026a1e032fULL, 0xf06bbea144217f5cULL, 0xdb6263d11ccb377aULL, + 0x641c314b2b8ee083ULL, 0x320e96ab9b4770cfULL, 0x1ee7deb986a96b85ULL, + 0xe96cf57a878c47b5ULL, 0xfdd6615f8842feb8ULL, 0xc83862965601dd1bULL, + 0x2ea9f83e92572162ULL, 0xf876441142ff97fcULL, 0xeb2c455608357d9dULL, + 0x5612a7e0b0c9904cULL, 0x6c01cbfb2d500823ULL, 0x4548a6a7fa037a2dULL, + 0xabc4c6bf388b6ef4ULL, 0xbade77d4fdf8bebdULL, 0x799b07c8eb4cac3aULL, + 0x0c9d87e805b19cf0ULL, 0xcb588aac106afa27ULL, 0xea0c1d40c1e76089ULL, + 0x2869354a1e816f1aULL, 0xff96d17307fbc490ULL, 0x9f0a9d602f1a5043ULL, + 0x96373fc6e016a5f7ULL, 0x5292dab8b3a6e41cULL, 0x9b8ae0382c752413ULL, + 0x4f15ec3b7364a8a5ULL, 0x3fb349555724f12bULL, 0xc7c50d4415db66d7ULL, + 0x92b7429ee379d1a7ULL, 0xd37f99611a15dfdaULL, 0x231427c05e34a086ULL, + 0xa439a96d7b51d538ULL, 0xb403401077f01865ULL, 0xdda2aea5901d7902ULL, + 0x0a5d4a9c8967d288ULL, 0xc265280adf660f93ULL, 0x8bb0094520d4e94eULL, + 0x2a29856691385532ULL, 0x42a833c5bf072941ULL, 0x73c64d54622b7eb2ULL, + 0x07e095624504536cULL, 0x8a905153e906f45aULL, 0x6f6123c16b3b2f1fULL, + 0xc6e55552dc097bc3ULL, 0x4468feb133d16739ULL, 0xe211e7f0c7398829ULL, + 0xa2f96419f7879b40ULL, 0x19074bdbc3ad38e9ULL, 0xf4ebc3f9474e0b0cULL, + 0x43886bd376d53455ULL, 0xd8028beb5aa01046ULL, 0x51f23282f5cdc320ULL, + 0xe7b1c2be0d84e16dULL, 0x081dfab006dee8a0ULL, 0x3b33340d544b857bULL, + 0x7f5bcabc679ae242ULL, 0x0edd37c48a08a6d8ULL, 0x81ed43d9a9b33bc6ULL, + 0xb1a3655ebd4d7121ULL, 0x69a1eeb5e7ed6167ULL, 0xf6ab73d5c8f73124ULL, + 0x1a67a3e185c61fd5ULL, 0x2dc91004d43c065eULL, 0x0240b02c8fb93a28ULL, + 0x90f7f2b26cc0eb8fULL, 0x3cd3a16f114fd617ULL, 0xaae49ea9f15973e0ULL, + 0x06c0cd748cd64e78ULL, 0xda423bc7d5192a6eULL, 0xc345701c16b41287ULL, + 0x6d2193ede4821537ULL, 0xfcf639494190e3acULL, 0x7c3b228621f1c57eULL, + 0xfb16ac2b0494b0c0ULL, 0xbf7e529a3745d7f9ULL, 0x6881b6a32e3f7c73ULL, + 0xca78d2bad9b8e733ULL, 0xbbfe2fc2342aa3a9ULL, 0x0dbddffecc6381e4ULL, + 0x70a6a56e2440598eULL, 0xe4d12a844befc651ULL, 0x8c509c2765d0ba22ULL, + 0xee8c6018c28814d9ULL, 0x17da7c1f49a59e31ULL, 0x609c4c1328e194d3ULL, + 0xb3e3d57232f44b09ULL, 0x91d7aaa4a512f69bULL, 0x0ffd6fd243dabbccULL, + 0x50d26a943c1fde34ULL, 0x6be15e9968545b4fULL, 0x94778fea6faf9fdfULL, + 0x2b09dd7058ea4826ULL, 0x677cd9716de5c7bfULL, 0x49d5214fffb2e6ddULL, + 0x0360e83a466b273cULL, 0x1fc786af4f7b7691ULL, 0xa0b9d435783ea168ULL, + 0xd49f0c035f118cb6ULL, 0x01205816c9d21d14ULL, 0xac2453dd7d8f3d98ULL, + 0x545217cc3f70aa64ULL, 
0x26b4028e9489c9c2ULL, 0xdec2469fd6765e3eULL, + 0x04807d58036f7450ULL, 0xe5f17292823ddb45ULL, 0xf30b569b024a5860ULL, + 0x62dcfc3fa758aefbULL, 0xe84cad6c4e5e5aa1ULL, 0xccb81fce556ea94bULL, + 0x53b282ae7a74f908ULL, 0x1b47fbf74c1402c1ULL, 0x368eebf39828049fULL, + 0x7afbeff2ad278b06ULL, 0xbe5e0a8cfe97caedULL, 0xcfd8f7f413058e77ULL, + 0xf78b2bc301252c30ULL, 0x4d555c17fcdd928dULL, 0x5f2f05467fc565f8ULL, + 0x24f4b2a21b30f3eaULL, 0x860dd6bbecb768aaULL, 0x4c750401350f8f99ULL, + 0x0000000000000000ULL, 0xecccd0344d312ef1ULL, 0xb5231806be220571ULL, + 0xc105c030990d28afULL, 0x653c695de25cfd97ULL, 0x159acc33c61ca419ULL, + 0xb89ec7f872418495ULL, 0xa9847693b73254dcULL, 0x58cf90243ac13694ULL, + 0x59efc832f3132b80ULL, 0x5c4fed7c39ae42c4ULL, 0x828dabe3efd81cfaULL, + 0xd13f294d95ace5f2ULL, 0x7d1b7a90e823d86aULL, 0xb643f03cf849224dULL, + 0x3df3f979d89dcb03ULL, 0x7426d836272f2ddeULL, 0xdfe21e891fa4432aULL, + 0x3a136c1b9d99986fULL, 0xfa36f43dcd46add4ULL, 0xc025982650df35bbULL, + 0x856d3e81aadc4f96ULL, 0xc4a5e57e53b041ebULL, 0x4708168b75ba4005ULL, + 0xaf44bbe73be41aa4ULL, 0x971767d029c4b8e3ULL, 0xb9be9feebb939981ULL, + 0x215497ecd18d9aaeULL, 0x316e7e91dd2c57f3ULL, 0xcef8afe2dad79363ULL, + 0x3853dc371220a247ULL, 0x35ee03c9de4323a3ULL, 0xe6919aa8c456fc79ULL, + 0xe05157dc4880b201ULL, 0x7bdbb7e464f59612ULL, 0x127a59518318f775ULL, + 0x332ecebd52956ddbULL, 0x8f30741d23bb9d1eULL, 0xd922d3fd93720d52ULL, + 0x7746300c61440ae2ULL, 0x25d4eab4d2e2eefeULL, 0x75068020eefd30caULL, + 0x135a01474acaea61ULL, 0x304e268714fe4ae7ULL, 0xa519f17bb283c82cULL, + 0xdc82f6b359cf6416ULL, 0x5baf781e7caa11a8ULL, 0xb2c38d64fb26561dULL, + 0x34ce5bdf17913eb7ULL, 0x5d6fb56af07c5fd0ULL, 0x182713cd0a7f25fdULL, + 0x9e2ac576e6c84d57ULL, 0x9aaab82ee5a73907ULL, 0xa3d93c0f3e558654ULL, + 0x7e7b92aaae48ff56ULL, 0x872d8ead256575beULL, 0x41c8dbfff96c0e7dULL, + 0x99ca5014a3cc1e3bULL, 0x40e883e930be1369ULL, 0x1ca76e95091051adULL, + 0x4e35b42dbab6b5b1ULL, 0x05a0254ecabd6944ULL, 0xe1710fca8152af15ULL, + 0xf22b0e8dcb984574ULL, 0xb763a82a319b3f59ULL, 0x63fca4296e8ab3efULL, + 0x9d4a2d4ca0a36a6bULL, 0xe331bfe60eeb953dULL, 0xd5bf541596c391a2ULL, + 0xf5cb9bef8e9c1618ULL, 0x46284e9dbc685d11ULL, 0x2074cffa185f87baULL, + 0xbd3ee2b6b8fcedd1ULL, 0xae64e3f1f23607b0ULL, 0xfeb68965ce29d984ULL, + 0x55724fdaf6a2b770ULL, 0x29496d5cd753720eULL, 0xa75941573d3af204ULL, + 0x8e102c0bea69800aULL, 0x111ab16bc573d049ULL, 0xd7ffe439197aab8aULL, + 0xefac380e0b5a09cdULL, 0x48f579593660fbc9ULL, 0x22347fd697e6bd92ULL, + 0x61bc1405e13389c7ULL, 0x4ab5c975b9d9c1e1ULL, 0x80cd1bcf606126d2ULL, + 0x7186fd78ed92449aULL, 0x93971a882aabccb3ULL, 0x88d0e17f66bfce72ULL, + 0x27945a985d5bd4d6ULL + }, { + 0xde553f8c05a811c8ULL, 0x1906b59631b4f565ULL, 0x436e70d6b1964ff7ULL, + 0x36d343cb8b1e9d85ULL, 0x843dfacc858aab5aULL, 0xfdfc95c299bfc7f9ULL, + 0x0f634bdea1d51fa2ULL, 0x6d458b3b76efb3cdULL, 0x85c3f77cf8593f80ULL, + 0x3c91315fbe737cb2ULL, 0x2148b03366ace398ULL, 0x18f8b8264c6761bfULL, + 0xc830c1c495c9fb0fULL, 0x981a76102086a0aaULL, 0xaa16012142f35760ULL, + 0x35cc54060c763cf6ULL, 0x42907d66cc45db2dULL, 0x8203d44b965af4bcULL, + 0x3d6f3cefc3a0e868ULL, 0xbc73ff69d292bda7ULL, 0x8722ed0102e20a29ULL, + 0x8f8185e8cd34deb7ULL, 0x9b0561dda7ee01d9ULL, 0x5335a0193227fad6ULL, + 0xc9cecc74e81a6fd5ULL, 0x54f5832e5c2431eaULL, 0x99e47ba05d553470ULL, + 0xf7bee756acd226ceULL, 0x384e05a5571816fdULL, 0xd1367452a47d0e6aULL, + 0xf29fde1c386ad85bULL, 0x320c77316275f7caULL, 0xd0c879e2d9ae9ab0ULL, + 0xdb7406c69110ef5dULL, 0x45505e51a2461011ULL, 0xfc029872e46c5323ULL, + 0xfa3cb6f5f7bc0cc5ULL, 0x031f17cd8768a173ULL, 
0xbd8df2d9af41297dULL, + 0x9d3b4f5ab43e5e3fULL, 0x4071671b36feee84ULL, 0x716207e7d3e3b83dULL, + 0x48d20ff2f9283a1aULL, 0x27769eb4757cbc7eULL, 0x5c56ebc793f2e574ULL, + 0xa48b474f9ef5dc18ULL, 0x52cbada94ff46e0cULL, 0x60c7da982d8199c6ULL, + 0x0e9d466edc068b78ULL, 0x4eec2175eaf865fcULL, 0x550b8e9e21f7a530ULL, + 0x6b7ba5bc653fec2bULL, 0x5eb7f1ba6949d0ddULL, 0x57ea94e3db4c9099ULL, + 0xf640eae6d101b214ULL, 0xdd4a284182c0b0bbULL, 0xff1d8fbf6304f250ULL, + 0xb8accb933bf9d7e8ULL, 0xe8867c478eb68c4dULL, 0x3f8e2692391bddc1ULL, + 0xcb2fd60912a15a7cULL, 0xaec935dbab983d2fULL, 0xf55ffd2b56691367ULL, + 0x80e2ce366ce1c115ULL, 0x179bf3f8edb27e1dULL, 0x01fe0db07dd394daULL, + 0xda8a0b76ecc37b87ULL, 0x44ae53e1df9584cbULL, 0xb310b4b77347a205ULL, + 0xdfab323c787b8512ULL, 0x3b511268d070b78eULL, 0x65e6e3d2b9396753ULL, + 0x6864b271e2574d58ULL, 0x259784c98fc789d7ULL, 0x02e11a7dfabb35a9ULL, + 0x8841a6dfa337158bULL, 0x7ade78c39b5dcdd0ULL, 0xb7cf804d9a2cc84aULL, + 0x20b6bd831b7f7742ULL, 0x75bd331d3a88d272ULL, 0x418f6aab4b2d7a5eULL, + 0xd9951cbb6babdaf4ULL, 0xb6318dfde7ff5c90ULL, 0x1f389b112264aa83ULL, + 0x492c024284fbaec0ULL, 0xe33a0363c608f9a0ULL, 0x2688930408af28a4ULL, + 0xc7538a1a341ce4adULL, 0x5da8e677ee2171aeULL, 0x8c9e92254a5c7fc4ULL, + 0x63d8cd55aae938b5ULL, 0x29ebd8daa97a3706ULL, 0x959827b37be88aa1ULL, + 0x1484e4356adadf6eULL, 0xa7945082199d7d6bULL, 0xbf6ce8a455fa1cd4ULL, + 0x9cc542eac9edcae5ULL, 0x79c16f0e1c356ca3ULL, 0x89bfab6fdee48151ULL, + 0xd4174d1830c5f0ffULL, 0x9258048415eb419dULL, 0x6139d72850520d1cULL, + 0x6a85a80c18ec78f1ULL, 0xcd11f88e0171059aULL, 0xcceff53e7ca29140ULL, + 0xd229639f2315af19ULL, 0x90b91ef9ef507434ULL, 0x5977d28d074a1be1ULL, + 0x311360fce51d56b9ULL, 0xc093a92d5a1f2f91ULL, 0x1a19a25bb6dc5416ULL, + 0xeb996b8a09de2d3eULL, 0xfee3820f1ed7668aULL, 0xd7085ad5b7ad518cULL, + 0x7fff41890fe53345ULL, 0xec5948bd67dde602ULL, 0x2fd5f65dbaaa68e0ULL, + 0xa5754affe32648c2ULL, 0xf8ddac880d07396cULL, 0x6fa491468c548664ULL, + 0x0c7c5c1326bdbed1ULL, 0x4a33158f03930fb3ULL, 0x699abfc19f84d982ULL, + 0xe4fa2054a80b329cULL, 0x6707f9af438252faULL, 0x08a368e9cfd6d49eULL, + 0x47b1442c58fd25b8ULL, 0xbbb3dc5ebc91769bULL, 0x1665fe489061eac7ULL, + 0x33f27a811fa66310ULL, 0x93a609346838d547ULL, 0x30ed6d4c98cec263ULL, + 0x1dd9816cd8df9f2aULL, 0x94662a03063b1e7bULL, 0x83fdd9fbeb896066ULL, + 0x7b207573e68e590aULL, 0x5f49fc0a149a4407ULL, 0x343259b671a5a82cULL, + 0xfbc2bb458a6f981fULL, 0xc272b350a0a41a38ULL, 0x3aaf1fd8ada32354ULL, + 0x6cbb868b0b3c2717ULL, 0xa2b569c88d2583feULL, 0xf180c9d1bf027928ULL, + 0xaf37386bd64ba9f5ULL, 0x12bacab2790a8088ULL, 0x4c0d3b0810435055ULL, + 0xb2eeb9070e9436dfULL, 0xc5b29067cea7d104ULL, 0xdcb425f1ff132461ULL, + 0x4f122cc5972bf126ULL, 0xac282fa651230886ULL, 0xe7e537992f6393efULL, + 0xe61b3a2952b00735ULL, 0x709c0a57ae302ce7ULL, 0xe02514ae416058d3ULL, + 0xc44c9dd7b37445deULL, 0x5a68c5408022ba92ULL, 0x1c278cdca50c0bf0ULL, + 0x6e5a9cf6f18712beULL, 0x86dce0b17f319ef3ULL, 0x2d34ec2040115d49ULL, + 0x4bcd183f7e409b69ULL, 0x2815d56ad4a9a3dcULL, 0x24698979f2141d0dULL, + 0x0000000000000000ULL, 0x1ec696a15fb73e59ULL, 0xd86b110b16784e2eULL, + 0x8e7f8858b0e74a6dULL, 0x063e2e8713d05fe6ULL, 0xe2c40ed3bbdb6d7aULL, + 0xb1f1aeca89fc97acULL, 0xe1db191e3cb3cc09ULL, 0x6418ee62c4eaf389ULL, + 0xc6ad87aa49cf7077ULL, 0xd6f65765ca7ec556ULL, 0x9afb6c6dda3d9503ULL, + 0x7ce05644888d9236ULL, 0x8d609f95378feb1eULL, 0x23a9aa4e9c17d631ULL, + 0x6226c0e5d73aac6fULL, 0x56149953a69f0443ULL, 0xeeb852c09d66d3abULL, + 0x2b0ac2a753c102afULL, 0x07c023376e03cb3cULL, 0x2ccae1903dc2c993ULL, + 0xd3d76e2f5ec63bc3ULL, 0x9e2458973356ff4cULL, 
0xa66a5d32644ee9b1ULL, + 0x0a427294356de137ULL, 0x783f62be61e6f879ULL, 0x1344c70204d91452ULL, + 0x5b96c8f0fdf12e48ULL, 0xa90916ecc59bf613ULL, 0xbe92e5142829880eULL, + 0x727d102a548b194eULL, 0x1be7afebcb0fc0ccULL, 0x3e702b2244c8491bULL, + 0xd5e940a84d166425ULL, 0x66f9f41f3e51c620ULL, 0xabe80c913f20c3baULL, + 0xf07ec461c2d1edf2ULL, 0xf361d3ac45b94c81ULL, 0x0521394a94b8fe95ULL, + 0xadd622162cf09c5cULL, 0xe97871f7f3651897ULL, 0xf4a1f09b2bba87bdULL, + 0x095d6559b2054044ULL, 0x0bbc7f2448be75edULL, 0x2af4cf172e129675ULL, + 0x157ae98517094bb4ULL, 0x9fda55274e856b96ULL, 0x914713499283e0eeULL, + 0xb952c623462a4332ULL, 0x74433ead475b46a8ULL, 0x8b5eb112245fb4f8ULL, + 0xa34b6478f0f61724ULL, 0x11a5dd7ffe6221fbULL, 0xc16da49d27ccbb4bULL, + 0x76a224d0bde07301ULL, 0x8aa0bca2598c2022ULL, 0x4df336b86d90c48fULL, + 0xea67663a740db9e4ULL, 0xef465f70e0b54771ULL, 0x39b008152acb8227ULL, + 0x7d1e5bf4f55e06ecULL, 0x105bd0cf83b1b521ULL, 0x775c2960c033e7dbULL, + 0x7e014c397236a79fULL, 0x811cc386113255cfULL, 0xeda7450d1a0e72d8ULL, + 0x5889df3d7a998f3bULL, 0x2e2bfbedc779fc3aULL, 0xce0eef438619a4e9ULL, + 0x372d4e7bf6cd095fULL, 0x04df34fae96b6a4fULL, 0xf923a13870d4adb6ULL, + 0xa1aa7e050a4d228dULL, 0xa8f71b5cb84862c9ULL, 0xb52e9a306097fde3ULL, + 0x0d8251a35b6e2a0bULL, 0x2257a7fee1c442ebULL, 0x73831d9a29588d94ULL, + 0x51d4ba64c89ccf7fULL, 0x502ab7d4b54f5ba5ULL, 0x97793dce8153bf08ULL, + 0xe5042de4d5d8a646ULL, 0x9687307efc802bd2ULL, 0xa05473b5779eb657ULL, + 0xb4d097801d446939ULL, 0xcff0e2f3fbca3033ULL, 0xc38cbee0dd778ee2ULL, + 0x464f499c252eb162ULL, 0xcad1dbb96f72cea6ULL, 0xba4dd1eec142e241ULL, + 0xb00fa37af42f0376ULL + }, { + 0xcce4cd3aa968b245ULL, 0x089d5484e80b7fafULL, 0x638246c1b3548304ULL, + 0xd2fe0ec8c2355492ULL, 0xa7fbdf7ff2374eeeULL, 0x4df1600c92337a16ULL, + 0x84e503ea523b12fbULL, 0x0790bbfd53ab0c4aULL, 0x198a780f38f6ea9dULL, + 0x2ab30c8f55ec48cbULL, 0xe0f7fed6b2c49db5ULL, 0xb6ecf3f422cadbdcULL, + 0x409c9a541358df11ULL, 0xd3ce8a56dfde3fe3ULL, 0xc3e9224312c8c1a0ULL, + 0x0d6dfa58816ba507ULL, 0xddf3e1b179952777ULL, 0x04c02a42748bb1d9ULL, + 0x94c2abff9f2decb8ULL, 0x4f91752da8f8acf4ULL, 0x78682befb169bf7bULL, + 0xe1c77a48af2ff6c4ULL, 0x0c5d7ec69c80ce76ULL, 0x4cc1e4928fd81167ULL, + 0xfeed3d24d9997b62ULL, 0x518bb6dfc3a54a23ULL, 0x6dbf2d26151f9b90ULL, + 0xb5bc624b05ea664fULL, 0xe86aaa525acfe21aULL, 0x4801ced0fb53a0beULL, + 0xc91463e6c00868edULL, 0x1027a815cd16fe43ULL, 0xf67069a0319204cdULL, + 0xb04ccc976c8abce7ULL, 0xc0b9b3fc35e87c33ULL, 0xf380c77c58f2de65ULL, + 0x50bb3241de4e2152ULL, 0xdf93f490435ef195ULL, 0xf1e0d25d62390887ULL, + 0xaf668bfb1a3c3141ULL, 0xbc11b251f00a7291ULL, 0x73a5eed47e427d47ULL, + 0x25bee3f6ee4c3b2eULL, 0x43cc0beb34786282ULL, 0xc824e778dde3039cULL, + 0xf97d86d98a327728ULL, 0xf2b043e24519b514ULL, 0xe297ebf7880f4b57ULL, + 0x3a94a49a98fab688ULL, 0x868516cb68f0c419ULL, 0xeffa11af0964ee50ULL, + 0xa4ab4ec0d517f37dULL, 0xa9c6b498547c567aULL, 0x8e18424f80fbbbb6ULL, + 0x0bcdc53bcf2bc23cULL, 0x137739aaea3643d0ULL, 0x2c1333ec1bac2ff0ULL, + 0x8d48d3f0a7db0625ULL, 0x1e1ac3f26b5de6d7ULL, 0xf520f81f16b2b95eULL, + 0x9f0f6ec450062e84ULL, 0x0130849e1deb6b71ULL, 0xd45e31ab8c7533a9ULL, + 0x652279a2fd14e43fULL, 0x3209f01e70f1c927ULL, 0xbe71a770cac1a473ULL, + 0x0e3d6be7a64b1894ULL, 0x7ec8148cff29d840ULL, 0xcb7476c7fac3be0fULL, + 0x72956a4a63a91636ULL, 0x37f95ec21991138fULL, 0x9e3fea5a4ded45f5ULL, + 0x7b38ba50964902e8ULL, 0x222e580bbde73764ULL, 0x61e253e0899f55e6ULL, + 0xfc8d2805e352ad80ULL, 0x35994be3235ac56dULL, 0x09add01af5e014deULL, + 0x5e8659a6780539c6ULL, 0xb17c48097161d796ULL, 0x026015213acbd6e2ULL, + 
0xd1ae9f77e515e901ULL, 0xb7dc776a3f21b0adULL, 0xaba6a1b96eb78098ULL, + 0x9bcf4486248d9f5dULL, 0x582666c536455efdULL, 0xfdbdac9bfeb9c6f1ULL, + 0xc47999be4163cdeaULL, 0x765540081722a7efULL, 0x3e548ed8ec710751ULL, + 0x3d041f67cb51bac2ULL, 0x7958af71ac82d40aULL, 0x36c9da5c047a78feULL, + 0xed9a048e33af38b2ULL, 0x26ee7249c96c86bdULL, 0x900281bdeba65d61ULL, + 0x11172c8bd0fd9532ULL, 0xea0abf73600434f8ULL, 0x42fc8f75299309f3ULL, + 0x34a9cf7d3eb1ae1cULL, 0x2b838811480723baULL, 0x5ce64c8742ceef24ULL, + 0x1adae9b01fd6570eULL, 0x3c349bf9d6bad1b3ULL, 0x82453c891c7b75c0ULL, + 0x97923a40b80d512bULL, 0x4a61dbf1c198765cULL, 0xb48ce6d518010d3eULL, + 0xcfb45c858e480fd6ULL, 0xd933cbf30d1e96aeULL, 0xd70ea014ab558e3aULL, + 0xc189376228031742ULL, 0x9262949cd16d8b83ULL, 0xeb3a3bed7def5f89ULL, + 0x49314a4ee6b8cbcfULL, 0xdcc3652f647e4c06ULL, 0xda635a4c2a3e2b3dULL, + 0x470c21a940f3d35bULL, 0x315961a157d174b4ULL, 0x6672e81dda3459acULL, + 0x5b76f77a1165e36eULL, 0x445cb01667d36ec8ULL, 0xc5491d205c88a69bULL, + 0x456c34887a3805b9ULL, 0xffddb9bac4721013ULL, 0x99af51a71e4649bfULL, + 0xa15be01cbc7729d5ULL, 0x52db2760e485f7b0ULL, 0x8c78576eba306d54ULL, + 0xae560f6507d75a30ULL, 0x95f22f6182c687c9ULL, 0x71c5fbf54489aba5ULL, + 0xca44f259e728d57eULL, 0x88b87d2ccebbdc8dULL, 0xbab18d32be4a15aaULL, + 0x8be8ec93e99b611eULL, 0x17b713e89ebdf209ULL, 0xb31c5d284baa0174ULL, + 0xeeca9531148f8521ULL, 0xb8d198138481c348ULL, 0x8988f9b2d350b7fcULL, + 0xb9e11c8d996aa839ULL, 0x5a4673e40c8e881fULL, 0x1687977683569978ULL, + 0xbf4123eed72acf02ULL, 0x4ea1f1b3b513c785ULL, 0xe767452be16f91ffULL, + 0x7505d1b730021a7cULL, 0xa59bca5ec8fc980cULL, 0xad069eda20f7e7a3ULL, + 0x38f4b1bba231606aULL, 0x60d2d77e94743e97ULL, 0x9affc0183966f42cULL, + 0x248e6768f3a7505fULL, 0xcdd449a4b483d934ULL, 0x87b59255751baf68ULL, + 0x1bea6d2e023d3c7fULL, 0x6b1f12455b5ffcabULL, 0x743555292de9710dULL, + 0xd8034f6d10f5fddfULL, 0xc6198c9f7ba81b08ULL, 0xbb8109aca3a17edbULL, + 0xfa2d1766ad12cabbULL, 0xc729080166437079ULL, 0x9c5fff7b77269317ULL, + 0x0000000000000000ULL, 0x15d706c9a47624ebULL, 0x6fdf38072fd44d72ULL, + 0x5fb6dd3865ee52b7ULL, 0xa33bf53d86bcff37ULL, 0xe657c1b5fc84fa8eULL, + 0xaa962527735cebe9ULL, 0x39c43525bfda0b1bULL, 0x204e4d2a872ce186ULL, + 0x7a083ece8ba26999ULL, 0x554b9c9db72efbfaULL, 0xb22cd9b656416a05ULL, + 0x96a2bedea5e63a5aULL, 0x802529a826b0a322ULL, 0x8115ad363b5bc853ULL, + 0x8375b81701901eb1ULL, 0x3069e53f4a3a1fc5ULL, 0xbd2136cfede119e0ULL, + 0x18bafc91251d81ecULL, 0x1d4a524d4c7d5b44ULL, 0x05f0aedc6960daa8ULL, + 0x29e39d3072ccf558ULL, 0x70f57f6b5962c0d4ULL, 0x989fd53903ad22ceULL, + 0xf84d024797d91c59ULL, 0x547b1803aac5908bULL, 0xf0d056c37fd263f6ULL, + 0xd56eb535919e58d8ULL, 0x1c7ad6d351963035ULL, 0x2e7326cd2167f912ULL, + 0xac361a443d1c8cd2ULL, 0x697f076461942a49ULL, 0x4b515f6fdc731d2dULL, + 0x8ad8680df4700a6fULL, 0x41ac1eca0eb3b460ULL, 0x7d988533d80965d3ULL, + 0xa8f6300649973d0bULL, 0x7765c4960ac9cc9eULL, 0x7ca801adc5e20ea2ULL, + 0xdea3700e5eb59ae4ULL, 0xa06b6482a19c42a4ULL, 0x6a2f96db46b497daULL, + 0x27def6d7d487edccULL, 0x463ca5375d18b82aULL, 0xa6cb5be1efdc259fULL, + 0x53eba3fef96e9cc1ULL, 0xce84d81b93a364a7ULL, 0xf4107c810b59d22fULL, + 0x333974806d1aa256ULL, 0x0f0def79bba073e5ULL, 0x231edc95a00c5c15ULL, + 0xe437d494c64f2c6cULL, 0x91320523f64d3610ULL, 0x67426c83c7df32ddULL, + 0x6eefbc99323f2603ULL, 0x9d6f7be56acdf866ULL, 0x5916e25b2bae358cULL, + 0x7ff89012e2c2b331ULL, 0x035091bf2720bd93ULL, 0x561b0d22900e4669ULL, + 0x28d319ae6f279e29ULL, 0x2f43a2533c8c9263ULL, 0xd09e1be9f8fe8270ULL, + 0xf740ed3e2c796fbcULL, 0xdb53ded237d5404cULL, 0x62b2c25faebfe875ULL, + 
0x0afd41a5d2c0a94dULL, 0x6412fd3ce0ff8f4eULL, 0xe3a76f6995e42026ULL, + 0x6c8fa9b808f4f0e1ULL, 0xc2d9a6dd0f23aad1ULL, 0x8f28c6d19d10d0c7ULL, + 0x85d587744fd0798aULL, 0xa20b71a39b579446ULL, 0x684f83fa7c7f4138ULL, + 0xe507500adba4471dULL, 0x3f640a46f19a6c20ULL, 0x1247bd34f7dd28a1ULL, + 0x2d23b77206474481ULL, 0x93521002cc86e0f2ULL, 0x572b89bc8de52d18ULL, + 0xfb1d93f8b0f9a1caULL, 0xe95a2ecc4724896bULL, 0x3ba420048511ddf9ULL, + 0xd63e248ab6bee54bULL, 0x5dd6c8195f258455ULL, 0x06a03f634e40673bULL, + 0x1f2a476c76b68da6ULL, 0x217ec9b49ac78af7ULL, 0xecaa80102e4453c3ULL, + 0x14e78257b99d4f9aULL + }, { + 0x20329b2cc87bba05ULL, 0x4f5eb6f86546a531ULL, 0xd4f44775f751b6b1ULL, + 0x8266a47b850dfa8bULL, 0xbb986aa15a6ca985ULL, 0xc979eb08f9ae0f99ULL, + 0x2da6f447a2375ea1ULL, 0x1e74275dcd7d8576ULL, 0xbc20180a800bc5f8ULL, + 0xb4a2f701b2dc65beULL, 0xe726946f981b6d66ULL, 0x48e6c453bf21c94cULL, + 0x42cad9930f0a4195ULL, 0xefa47b64aacccd20ULL, 0x71180a8960409a42ULL, + 0x8bb3329bf6a44e0cULL, 0xd34c35de2d36daccULL, 0xa92f5b7cbc23dc96ULL, + 0xb31a85aa68bb09c3ULL, 0x13e04836a73161d2ULL, 0xb24dfc4129c51d02ULL, + 0x8ae44b70b7da5acdULL, 0xe671ed84d96579a7ULL, 0xa4bb3417d66f3832ULL, + 0x4572ab38d56d2de8ULL, 0xb1b47761ea47215cULL, 0xe81c09cf70aba15dULL, + 0xffbdb872ce7f90acULL, 0xa8782297fd5dc857ULL, 0x0d946f6b6a4ce4a4ULL, + 0xe4df1f4f5b995138ULL, 0x9ebc71edca8c5762ULL, 0x0a2c1dc0b02b88d9ULL, + 0x3b503c115d9d7b91ULL, 0xc64376a8111ec3a2ULL, 0xcec199a323c963e4ULL, + 0xdc76a87ec58616f7ULL, 0x09d596e073a9b487ULL, 0x14583a9d7d560dafULL, + 0xf4c6dc593f2a0cb4ULL, 0xdd21d19584f80236ULL, 0x4a4836983ddde1d3ULL, + 0xe58866a41ae745f9ULL, 0xf591a5b27e541875ULL, 0x891dc05074586693ULL, + 0x5b068c651810a89eULL, 0xa30346bc0c08544fULL, 0x3dbf3751c684032dULL, + 0x2a1e86ec785032dcULL, 0xf73f5779fca830eaULL, 0xb60c05ca30204d21ULL, + 0x0cc316802b32f065ULL, 0x8770241bdd96be69ULL, 0xb861e18199ee95dbULL, + 0xf805cad91418fcd1ULL, 0x29e70dccbbd20e82ULL, 0xc7140f435060d763ULL, + 0x0f3a9da0e8b0cc3bULL, 0xa2543f574d76408eULL, 0xbd7761e1c175d139ULL, + 0x4b1f4f737ca3f512ULL, 0x6dc2df1f2fc137abULL, 0xf1d05c3967b14856ULL, + 0xa742bf3715ed046cULL, 0x654030141d1697edULL, 0x07b872abda676c7dULL, + 0x3ce84eba87fa17ecULL, 0xc1fb0403cb79afdfULL, 0x3e46bc7105063f73ULL, + 0x278ae987121cd678ULL, 0xa1adb4778ef47cd0ULL, 0x26dd906c5362c2b9ULL, + 0x05168060589b44e2ULL, 0xfbfc41f9d79ac08fULL, 0x0e6de44ba9ced8faULL, + 0x9feb08068bf243a3ULL, 0x7b341749d06b129bULL, 0x229c69e74a87929aULL, + 0xe09ee6c4427c011bULL, 0x5692e30e725c4c3aULL, 0xda99a33e5e9f6e4bULL, + 0x353dd85af453a36bULL, 0x25241b4c90e0fee7ULL, 0x5de987258309d022ULL, + 0xe230140fc0802984ULL, 0x93281e86a0c0b3c6ULL, 0xf229d719a4337408ULL, + 0x6f6c2dd4ad3d1f34ULL, 0x8ea5b2fbae3f0aeeULL, 0x8331dd90c473ee4aULL, + 0x346aa1b1b52db7aaULL, 0xdf8f235e06042aa9ULL, 0xcc6f6b68a1354b7bULL, + 0x6c95a6f46ebf236aULL, 0x52d31a856bb91c19ULL, 0x1a35ded6d498d555ULL, + 0xf37eaef2e54d60c9ULL, 0x72e181a9a3c2a61cULL, 0x98537aad51952fdeULL, + 0x16f6c856ffaa2530ULL, 0xd960281e9d1d5215ULL, 0x3a0745fa1ce36f50ULL, + 0x0b7b642bf1559c18ULL, 0x59a87eae9aec8001ULL, 0x5e100c05408bec7cULL, + 0x0441f98b19e55023ULL, 0xd70dcc5534d38aefULL, 0x927f676de1bea707ULL, + 0x9769e70db925e3e5ULL, 0x7a636ea29115065aULL, 0x468b201816ef11b6ULL, + 0xab81a9b73edff409ULL, 0xc0ac7de88a07bb1eULL, 0x1f235eb68c0391b7ULL, + 0x6056b074458dd30fULL, 0xbe8eeac102f7ed67ULL, 0xcd381283e04b5fbaULL, + 0x5cbefecec277c4e3ULL, 0xd21b4c356c48ce0dULL, 0x1019c31664b35d8cULL, + 0x247362a7d19eea26ULL, 0xebe582efb3299d03ULL, 0x02aef2cb82fc289fULL, + 0x86275df09ce8aaa8ULL, 
0x28b07427faac1a43ULL, 0x38a9b7319e1f47cfULL, + 0xc82e92e3b8d01b58ULL, 0x06ef0b409b1978bcULL, 0x62f842bfc771fb90ULL, + 0x9904034610eb3b1fULL, 0xded85ab5477a3e68ULL, 0x90d195a663428f98ULL, + 0x5384636e2ac708d8ULL, 0xcbd719c37b522706ULL, 0xae9729d76644b0ebULL, + 0x7c8c65e20a0c7ee6ULL, 0x80c856b007f1d214ULL, 0x8c0b40302cc32271ULL, + 0xdbcedad51fe17a8aULL, 0x740e8ae938dbdea0ULL, 0xa615c6dc549310adULL, + 0x19cc55f6171ae90bULL, 0x49b1bdb8fe5fdd8dULL, 0xed0a89af2830e5bfULL, + 0x6a7aadb4f5a65bd6ULL, 0x7e22972988f05679ULL, 0xf952b3325566e810ULL, + 0x39fecedadf61530eULL, 0x6101c99f04f3c7ceULL, 0x2e5f7f6761b562ffULL, + 0xf08725d226cf5c97ULL, 0x63af3b54860fef51ULL, 0x8ff2cb10ef411e2fULL, + 0x884ab9bb35267252ULL, 0x4df04433e7ba8daeULL, 0x9afd8866d3690741ULL, + 0x66b9bb34de94abb3ULL, 0x9baaf18d92171380ULL, 0x543c11c5f0a064a5ULL, + 0x17a1b1bdbed431f1ULL, 0xb5f58eeaf3a2717fULL, 0xc355f6c849858740ULL, + 0xec5df044694ef17eULL, 0xd83751f5dc6346d4ULL, 0xfc4433520dfdacf2ULL, + 0x0000000000000000ULL, 0x5a51f58e596ebc5fULL, 0x3285aaf12e34cf16ULL, + 0x8d5c39db6dbd36b0ULL, 0x12b731dde64f7513ULL, 0x94906c2d7aa7dfbbULL, + 0x302b583aacc8e789ULL, 0x9d45facd090e6b3cULL, 0x2165e2c78905aec4ULL, + 0x68d45f7f775a7349ULL, 0x189b2c1d5664fdcaULL, 0xe1c99f2f030215daULL, + 0x6983269436246788ULL, 0x8489af3b1e148237ULL, 0xe94b702431d5b59cULL, + 0x33d2d31a6f4adbd7ULL, 0xbfd9932a4389f9a6ULL, 0xb0e30e8aab39359dULL, + 0xd1e2c715afcaf253ULL, 0x150f43763c28196eULL, 0xc4ed846393e2eb3dULL, + 0x03f98b20c3823c5eULL, 0xfd134ab94c83b833ULL, 0x556b682eb1de7064ULL, + 0x36c4537a37d19f35ULL, 0x7559f30279a5ca61ULL, 0x799ae58252973a04ULL, + 0x9c12832648707ffdULL, 0x78cd9c6913e92ec5ULL, 0x1d8dac7d0effb928ULL, + 0x439da0784e745554ULL, 0x413352b3cc887dcbULL, 0xbacf134a1b12bd44ULL, + 0x114ebafd25cd494dULL, 0x2f08068c20cb763eULL, 0x76a07822ba27f63fULL, + 0xeab2fb04f25789c2ULL, 0xe3676de481fe3d45ULL, 0x1b62a73d95e6c194ULL, + 0x641749ff5c68832cULL, 0xa5ec4dfc97112cf3ULL, 0xf6682e92bdd6242bULL, + 0x3f11c59a44782bb2ULL, 0x317c21d1edb6f348ULL, 0xd65ab5be75ad9e2eULL, + 0x6b2dd45fb4d84f17ULL, 0xfaab381296e4d44eULL, 0xd0b5befeeeb4e692ULL, + 0x0882ef0b32d7a046ULL, 0x512a91a5a83b2047ULL, 0x963e9ee6f85bf724ULL, + 0x4e09cf132438b1f0ULL, 0x77f701c9fb59e2feULL, 0x7ddb1c094b726a27ULL, + 0x5f4775ee01f5f8bdULL, 0x9186ec4d223c9b59ULL, 0xfeeac1998f01846dULL, + 0xac39db1ce4b89874ULL, 0xb75b7c21715e59e0ULL, 0xafc0503c273aa42aULL, + 0x6e3b543fec430bf5ULL, 0x704f7362213e8e83ULL, 0x58ff0745db9294c0ULL, + 0x67eec2df9feabf72ULL, 0xa0facd9ccf8a6811ULL, 0xb936986ad890811aULL, + 0x95c715c63bd9cb7aULL, 0xca8060283a2c33c7ULL, 0x507de84ee9453486ULL, + 0x85ded6d05f6a96f6ULL, 0x1cdad5964f81ade9ULL, 0xd5a33e9eb62fa270ULL, + 0x40642b588df6690aULL, 0x7f75eec2c98e42b8ULL, 0x2cf18dace3494a60ULL, + 0x23cb100c0bf9865bULL, 0xeef3028febb2d9e1ULL, 0x4425d2d394133929ULL, + 0xaad6d05c7fa1e0c8ULL, 0xad6ea2f7a5c68cb5ULL, 0xc2028f2308fb9381ULL, + 0x819f2f5b468fc6d5ULL, 0xc5bafd88d29cfffcULL, 0x47dc59f357910577ULL, + 0x2b49ff07392e261dULL, 0x57c59ae5332258fbULL, 0x73b6f842e2bcb2ddULL, + 0xcf96e04862b77725ULL, 0x4ca73dd8a6c4996fULL, 0x015779eb417e14c1ULL, + 0x37932a9176af8bf4ULL + }, { + 0x190a2c9b249df23eULL, 0x2f62f8b62263e1e9ULL, 0x7a7f754740993655ULL, + 0x330b7ba4d5564d9fULL, 0x4c17a16a46672582ULL, 0xb22f08eb7d05f5b8ULL, + 0x535f47f40bc148ccULL, 0x3aec5d27d4883037ULL, 0x10ed0a1825438f96ULL, + 0x516101f72c233d17ULL, 0x13cc6f949fd04eaeULL, 0x739853c441474bfdULL, + 0x653793d90d3f5b1bULL, 0x5240647b96b0fc2fULL, 0x0c84890ad27623e0ULL, + 0xd7189b32703aaea3ULL, 0x2685de3523bd9c41ULL, 
0x99317c5b11bffefaULL, + 0x0d9baa854f079703ULL, 0x70b93648fbd48ac5ULL, 0xa80441fce30bc6beULL, + 0x7287704bdc36ff1eULL, 0xb65384ed33dc1f13ULL, 0xd36417343ee34408ULL, + 0x39cd38ab6e1bf10fULL, 0x5ab861770a1f3564ULL, 0x0ebacf09f594563bULL, + 0xd04572b884708530ULL, 0x3cae9722bdb3af47ULL, 0x4a556b6f2f5cbaf2ULL, + 0xe1704f1f76c4bd74ULL, 0x5ec4ed7144c6dfcfULL, 0x16afc01d4c7810e6ULL, + 0x283f113cd629ca7aULL, 0xaf59a8761741ed2dULL, 0xeed5a3991e215facULL, + 0x3bf37ea849f984d4ULL, 0xe413e096a56ce33cULL, 0x2c439d3a98f020d1ULL, + 0x637559dc6404c46bULL, 0x9e6c95d1e5f5d569ULL, 0x24bb9836045fe99aULL, + 0x44efa466dac8ecc9ULL, 0xc6eab2a5c80895d6ULL, 0x803b50c035220cc4ULL, + 0x0321658cba93c138ULL, 0x8f9ebc465dc7ee1cULL, 0xd15a5137190131d3ULL, + 0x0fa5ec8668e5e2d8ULL, 0x91c979578d1037b1ULL, 0x0642ca05693b9f70ULL, + 0xefca80168350eb4fULL, 0x38d21b24f36a45ecULL, 0xbeab81e1af73d658ULL, + 0x8cbfd9cae7542f24ULL, 0xfd19cc0d81f11102ULL, 0x0ac6430fbb4dbc90ULL, + 0x1d76a09d6a441895ULL, 0x2a01573ff1cbbfa1ULL, 0xb572e161894fde2bULL, + 0x8124734fa853b827ULL, 0x614b1fdf43e6b1b0ULL, 0x68ac395c4238cc18ULL, + 0x21d837bfd7f7b7d2ULL, 0x20c714304a860331ULL, 0x5cfaab726324aa14ULL, + 0x74c5ba4eb50d606eULL, 0xf3a3030474654739ULL, 0x23e671bcf015c209ULL, + 0x45f087e947b9582aULL, 0xd8bd77b418df4c7bULL, 0xe06f6c90ebb50997ULL, + 0x0bd96080263c0873ULL, 0x7e03f9410e40dcfeULL, 0xb8e94be4c6484928ULL, + 0xfb5b0608e8ca8e72ULL, 0x1a2b49179e0e3306ULL, 0x4e29e76961855059ULL, + 0x4f36c4e6fcf4e4baULL, 0x49740ee395cf7bcaULL, 0xc2963ea386d17f7dULL, + 0x90d65ad810618352ULL, 0x12d34c1b02a1fa4dULL, 0xfa44258775bb3a91ULL, + 0x18150f14b9ec46ddULL, 0x1491861e6b9a653dULL, 0x9a1019d7ab2c3fc2ULL, + 0x3668d42d06fe13d7ULL, 0xdcc1fbb25606a6d0ULL, 0x969490dd795a1c22ULL, + 0x3549b1a1bc6dd2efULL, 0xc94f5e23a0ed770eULL, 0xb9f6686b5b39fdcbULL, + 0xc4d4f4a6efeae00dULL, 0xe732851a1fff2204ULL, 0x94aad6de5eb869f9ULL, + 0x3f8ff2ae07206e7fULL, 0xfe38a9813b62d03aULL, 0xa7a1ad7a8bee2466ULL, + 0x7b6056c8dde882b6ULL, 0x302a1e286fc58ca7ULL, 0x8da0fa457a259bc7ULL, + 0xb3302b64e074415bULL, 0x5402ae7eff8b635fULL, 0x08f8050c9cafc94bULL, + 0xae468bf98a3059ceULL, 0x88c355cca98dc58fULL, 0xb10e6d67c7963480ULL, + 0xbad70de7e1aa3cf3ULL, 0xbfb4a26e320262bbULL, 0xcb711820870f02d5ULL, + 0xce12b7a954a75c9dULL, 0x563ce87dd8691684ULL, 0x9f73b65e7884618aULL, + 0x2b1e74b06cba0b42ULL, 0x47cec1ea605b2df1ULL, 0x1c698312f735ac76ULL, + 0x5fdbcefed9b76b2cULL, 0x831a354c8fb1cdfcULL, 0x820516c312c0791fULL, + 0xb74ca762aeadabf0ULL, 0xfc06ef821c80a5e1ULL, 0x5723cbf24518a267ULL, + 0x9d4df05d5f661451ULL, 0x588627742dfd40bfULL, 0xda8331b73f3d39a0ULL, + 0x17b0e392d109a405ULL, 0xf965400bcf28fba9ULL, 0x7c3dbf4229a2a925ULL, + 0x023e460327e275dbULL, 0x6cd0b55a0ce126b3ULL, 0xe62da695828e96e7ULL, + 0x42ad6e63b3f373b9ULL, 0xe50cc319381d57dfULL, 0xc5cbd729729b54eeULL, + 0x46d1e265fd2a9912ULL, 0x6428b056904eeff8ULL, 0x8be23040131e04b7ULL, + 0x6709d5da2add2ec0ULL, 0x075de98af44a2b93ULL, 0x8447dcc67bfbe66fULL, + 0x6616f655b7ac9a23ULL, 0xd607b8bded4b1a40ULL, 0x0563af89d3a85e48ULL, + 0x3db1b4ad20c21ba4ULL, 0x11f22997b8323b75ULL, 0x292032b34b587e99ULL, + 0x7f1cdace9331681dULL, 0x8e819fc9c0b65affULL, 0xa1e3677fe2d5bb16ULL, + 0xcd33d225ee349da5ULL, 0xd9a2543b85aef898ULL, 0x795e10cbfa0af76dULL, + 0x25a4bbb9992e5d79ULL, 0x78413344677b438eULL, 0xf0826688cef68601ULL, + 0xd27b34bba392f0ebULL, 0x551d8df162fad7bcULL, 0x1e57c511d0d7d9adULL, + 0xdeffbdb171e4d30bULL, 0xf4feea8e802f6caaULL, 0xa480c8f6317de55eULL, + 0xa0fc44f07fa40ff5ULL, 0x95b5f551c3c9dd1aULL, 0x22f952336d6476eaULL, + 0x0000000000000000ULL, 0xa6be8ef5169f9085ULL, 
0xcc2cf1aa73452946ULL, + 0x2e7ddb39bf12550aULL, 0xd526dd3157d8db78ULL, 0x486b2d6c08becf29ULL, + 0x9b0f3a58365d8b21ULL, 0xac78cdfaadd22c15ULL, 0xbc95c7e28891a383ULL, + 0x6a927f5f65dab9c3ULL, 0xc3891d2c1ba0cb9eULL, 0xeaa92f9f50f8b507ULL, + 0xcf0d9426c9d6e87eULL, 0xca6e3baf1a7eb636ULL, 0xab25247059980786ULL, + 0x69b31ad3df4978fbULL, 0xe2512a93cc577c4cULL, 0xff278a0ea61364d9ULL, + 0x71a615c766a53e26ULL, 0x89dc764334fc716cULL, 0xf87a638452594f4aULL, + 0xf2bc208be914f3daULL, 0x8766b94ac1682757ULL, 0xbbc82e687cdb8810ULL, + 0x626a7a53f9757088ULL, 0xa2c202f358467a2eULL, 0x4d0882e5db169161ULL, + 0x09e7268301de7da8ULL, 0xe897699c771ac0dcULL, 0xc8507dac3d9cc3edULL, + 0xc0a878a0a1330aa6ULL, 0x978bb352e42ba8c1ULL, 0xe9884a13ea6b743fULL, + 0x279afdbabecc28a2ULL, 0x047c8c064ed9eaabULL, 0x507e2278b15289f4ULL, + 0x599904fbb08cf45cULL, 0xbd8ae46d15e01760ULL, 0x31353da7f2b43844ULL, + 0x8558ff49e68a528cULL, 0x76fbfc4d92ef15b5ULL, 0x3456922e211c660cULL, + 0x86799ac55c1993b4ULL, 0x3e90d1219a51da9cULL, 0x2d5cbeb505819432ULL, + 0x982e5fd48cce4a19ULL, 0xdb9c1238a24c8d43ULL, 0xd439febecaa96f9bULL, + 0x418c0bef0960b281ULL, 0x158ea591f6ebd1deULL, 0x1f48e69e4da66d4eULL, + 0x8afd13cf8e6fb054ULL, 0xf5e1c9011d5ed849ULL, 0xe34e091c5126c8afULL, + 0xad67ee7530a398f6ULL, 0x43b24dec2e82c75aULL, 0x75da99c1287cd48dULL, + 0x92e81cdb3783f689ULL, 0xa3dd217cc537cecdULL, 0x60543c50de970553ULL, + 0x93f73f54aaf2426aULL, 0xa91b62737e7a725dULL, 0xf19d4507538732e2ULL, + 0x77e4dfc20f9ea156ULL, 0x7d229ccdb4d31dc6ULL, 0x1b346a98037f87e5ULL, + 0xedf4c615a4b29e94ULL, 0x4093286094110662ULL, 0xb0114ee85ae78063ULL, + 0x6ff1d0d6b672e78bULL, 0x6dcf96d591909250ULL, 0xdfe09e3eec9567e8ULL, + 0x3214582b4827f97cULL, 0xb46dc2ee143e6ac8ULL, 0xf6c0ac8da7cd1971ULL, + 0xebb60c10cd8901e4ULL, 0xf7df8f023abcad92ULL, 0x9c52d3d2c217a0b2ULL, + 0x6b8d5cd0f8ab0d20ULL, 0x3777f7a29b8fa734ULL, 0x011f238f9d71b4e3ULL, + 0xc1b75b2f3c42be45ULL, 0x5de588fdfe551ef7ULL, 0x6eeef3592b035368ULL, + 0xaa3a07ffc4e9b365ULL, 0xecebe59a39c32a77ULL, 0x5ba742f8976e8187ULL, + 0x4b4a48e0b22d0e11ULL, 0xddded83dcb771233ULL, 0xa59feb79ac0c51bdULL, + 0xc7f5912a55792135ULL + }, { + 0x6d6ae04668a9b08aULL, 0x3ab3f04b0be8c743ULL, 0xe51e166b54b3c908ULL, + 0xbe90a9eb35c2f139ULL, 0xb2c7066637f2bec1ULL, 0xaa6945613392202cULL, + 0x9a28c36f3b5201ebULL, 0xddce5a93ab536994ULL, 0x0e34133ef6382827ULL, + 0x52a02ba1ec55048bULL, 0xa2f88f97c4b2a177ULL, 0x8640e513ca2251a5ULL, + 0xcdf1d36258137622ULL, 0xfe6cb708dedf8ddbULL, 0x8a174a9ec8121e5dULL, + 0x679896036b81560eULL, 0x59ed033395795feeULL, 0x1dd778ab8b74edafULL, + 0xee533ef92d9f926dULL, 0x2a8c79baf8a8d8f5ULL, 0x6bcf398e69b119f6ULL, + 0xe20491742fafdd95ULL, 0x276488e0809c2aecULL, 0xea955b82d88f5cceULL, + 0x7102c63a99d9e0c4ULL, 0xf9763017a5c39946ULL, 0x429fa2501f151b3dULL, + 0x4659c72bea05d59eULL, 0x984b7fdccf5a6634ULL, 0xf742232953fbb161ULL, + 0x3041860e08c021c7ULL, 0x747bfd9616cd9386ULL, 0x4bb1367192312787ULL, + 0x1b72a1638a6c44d3ULL, 0x4a0e68a6e8359a66ULL, 0x169a5039f258b6caULL, + 0xb98a2ef44edee5a4ULL, 0xd9083fe85e43a737ULL, 0x967f6ce239624e13ULL, + 0x8874f62d3c1a7982ULL, 0x3c1629830af06e3fULL, 0x9165ebfd427e5a8eULL, + 0xb5dd81794ceeaa5cULL, 0x0de8f15a7834f219ULL, 0x70bd98ede3dd5d25ULL, + 0xaccc9ca9328a8950ULL, 0x56664eda1945ca28ULL, 0x221db34c0f8859aeULL, + 0x26dbd637fa98970dULL, 0x1acdffb4f068f932ULL, 0x4585254f64090fa0ULL, + 0x72de245e17d53afaULL, 0x1546b25d7c546cf4ULL, 0x207e0ffffb803e71ULL, + 0xfaaad2732bcf4378ULL, 0xb462dfae36ea17bdULL, 0xcf926fd1ac1b11fdULL, + 0xe0672dc7dba7ba4aULL, 0xd3fa49ad5d6b41b3ULL, 0x8ba81449b216a3bcULL, + 
0x14f9ec8a0650d115ULL, 0x40fc1ee3eb1d7ce2ULL, 0x23a2ed9b758ce44fULL, + 0x782c521b14fddc7eULL, 0x1c68267cf170504eULL, 0xbcf31558c1ca96e6ULL, + 0xa781b43b4ba6d235ULL, 0xf6fd7dfe29ff0c80ULL, 0xb0a4bad5c3fad91eULL, + 0xd199f51ea963266cULL, 0x414340349119c103ULL, 0x5405f269ed4dadf7ULL, + 0xabd61bb649969dcdULL, 0x6813dbeae7bdc3c8ULL, 0x65fb2ab09f8931d1ULL, + 0xf1e7fae152e3181dULL, 0xc1a67cef5a2339daULL, 0x7a4feea8e0f5bba1ULL, + 0x1e0b9acf05783791ULL, 0x5b8ebf8061713831ULL, 0x80e53cdbcb3af8d9ULL, + 0x7e898bd315e57502ULL, 0xc6bcfbf0213f2d47ULL, 0x95a38e86b76e942dULL, + 0x092e94218d243cbaULL, 0x8339debf453622e7ULL, 0xb11be402b9fe64ffULL, + 0x57d9100d634177c9ULL, 0xcc4e8db52217cbc3ULL, 0x3b0cae9c71ec7aa2ULL, + 0xfb158ca451cbfe99ULL, 0x2b33276d82ac6514ULL, 0x01bf5ed77a04bde1ULL, + 0xc5601994af33f779ULL, 0x75c4a3416cc92e67ULL, 0xf3844652a6eb7fc2ULL, + 0x3487e375fdd0ef64ULL, 0x18ae430704609eedULL, 0x4d14efb993298efbULL, + 0x815a620cb13e4538ULL, 0x125c354207487869ULL, 0x9eeea614ce42cf48ULL, + 0xce2d3106d61fac1cULL, 0xbbe99247bad6827bULL, 0x071a871f7b1c149dULL, + 0x2e4a1cc10db81656ULL, 0x77a71ff298c149b8ULL, 0x06a5d9c80118a97cULL, + 0xad73c27e488e34b1ULL, 0x443a7b981e0db241ULL, 0xe3bbcfa355ab6074ULL, + 0x0af276450328e684ULL, 0x73617a896dd1871bULL, 0x58525de4ef7de20fULL, + 0xb7be3dcab8e6cd83ULL, 0x19111dd07e64230cULL, 0x842359a03e2a367aULL, + 0x103f89f1f3401fb6ULL, 0xdc710444d157d475ULL, 0xb835702334da5845ULL, + 0x4320fc876511a6dcULL, 0xd026abc9d3679b8dULL, 0x17250eee885c0b2bULL, + 0x90dab52a387ae76fULL, 0x31fed8d972c49c26ULL, 0x89cba8fa461ec463ULL, + 0x2ff5421677bcabb7ULL, 0x396f122f85e41d7dULL, 0xa09b332430bac6a8ULL, + 0xc888e8ced7070560ULL, 0xaeaf201ac682ee8fULL, 0x1180d7268944a257ULL, + 0xf058a43628e7a5fcULL, 0xbd4c4b8fbbce2b07ULL, 0xa1246df34abe7b49ULL, + 0x7d5569b79be9af3cULL, 0xa9b5a705bd9efa12ULL, 0xdb6b835baa4bc0e8ULL, + 0x05793bac8f147342ULL, 0x21c1512881848390ULL, 0xfdb0556c50d357e5ULL, + 0x613d4fcb6a99ff72ULL, 0x03dce2648e0cda3eULL, 0xe949b9e6568386f0ULL, + 0xfc0f0bbb2ad7ea04ULL, 0x6a70675913b5a417ULL, 0x7f36d5046fe1c8e3ULL, + 0x0c57af8d02304ff8ULL, 0x32223abdfcc84618ULL, 0x0891caf6f720815bULL, + 0xa63eeaec31a26fd4ULL, 0x2507345374944d33ULL, 0x49d28ac266394058ULL, + 0xf5219f9aa7f3d6beULL, 0x2d96fea583b4cc68ULL, 0x5a31e1571b7585d0ULL, + 0x8ed12fe53d02d0feULL, 0xdfade6205f5b0e4bULL, 0x4cabb16ee92d331aULL, + 0x04c6657bf510cea3ULL, 0xd73c2cd6a87b8f10ULL, 0xe1d87310a1a307abULL, + 0x6cd5be9112ad0d6bULL, 0x97c032354366f3f2ULL, 0xd4e0ceb22677552eULL, + 0x0000000000000000ULL, 0x29509bde76a402cbULL, 0xc27a9e8bd42fe3e4ULL, + 0x5ef7842cee654b73ULL, 0xaf107ecdbc86536eULL, 0x3fcacbe784fcb401ULL, + 0xd55f90655c73e8cfULL, 0xe6c2f40fdabf1336ULL, 0xe8f6e7312c873b11ULL, + 0xeb2a0555a28be12fULL, 0xe4a148bc2eb774e9ULL, 0x9b979db84156bc0aULL, + 0x6eb60222e6a56ab4ULL, 0x87ffbbc4b026ec44ULL, 0xc703a5275b3b90a6ULL, + 0x47e699fc9001687fULL, 0x9c8d1aa73a4aa897ULL, 0x7cea3760e1ed12ddULL, + 0x4ec80ddd1d2554c5ULL, 0x13e36b957d4cc588ULL, 0x5d2b66486069914dULL, + 0x92b90999cc7280b0ULL, 0x517cc9c56259deb5ULL, 0xc937b619ad03b881ULL, + 0xec30824ad997f5b2ULL, 0xa45d565fc5aa080bULL, 0xd6837201d27f32f1ULL, + 0x635ef3789e9198adULL, 0x531f75769651b96aULL, 0x4f77530a6721e924ULL, + 0x486dd4151c3dfdb9ULL, 0x5f48dafb9461f692ULL, 0x375b011173dc355aULL, + 0x3da9775470f4d3deULL, 0x8d0dcd81b30e0ac0ULL, 0x36e45fc609d888bbULL, + 0x55baacbe97491016ULL, 0x8cb29356c90ab721ULL, 0x76184125e2c5f459ULL, + 0x99f4210bb55edbd5ULL, 0x6f095cf59ca1d755ULL, 0x9f51f8c3b44672a9ULL, + 0x3538bda287d45285ULL, 0x50c39712185d6354ULL, 0xf23b1885dcefc223ULL, + 
0x79930ccc6ef9619fULL, 0xed8fdc9da3934853ULL, 0xcb540aaa590bdf5eULL, + 0x5c94389f1a6d2cacULL, 0xe77daad8a0bbaed7ULL, 0x28efc5090ca0bf2aULL, + 0xbf2ff73c4fc64cd8ULL, 0xb37858b14df60320ULL, 0xf8c96ec0dfc724a7ULL, + 0x828680683f329f06ULL, 0x941cd051cd6a29ccULL, 0xc3c5c05cae2b5e05ULL, + 0xb601631dc2e27062ULL, 0xc01922382027843bULL, 0x24b86a840e90f0d2ULL, + 0xd245177a276ffc52ULL, 0x0f8b4de98c3c95c6ULL, 0x3e759530fef809e0ULL, + 0x0b4d2892792c5b65ULL, 0xc4df4743d5374a98ULL, 0xa5e20888bfaeb5eaULL, + 0xba56cc90c0d23f9aULL, 0x38d04cf8ffe0a09cULL, 0x62e1adafe495254cULL, + 0x0263bcb3f40867dfULL, 0xcaeb547d230f62bfULL, 0x6082111c109d4293ULL, + 0xdad4dd8cd04f7d09ULL, 0xefec602e579b2f8cULL, 0x1fb4c4187f7c8a70ULL, + 0xffd3e9dfa4db303aULL, 0x7bf0b07f9af10640ULL, 0xf49ec14dddf76b5fULL, + 0x8f6e713247066d1fULL, 0x339d646a86ccfbf9ULL, 0x64447467e58d8c30ULL, + 0x2c29a072f9b07189ULL, 0xd8b7613f24471ad6ULL, 0x6627c8d41185ebefULL, + 0xa347d140beb61c96ULL, 0xde12b8f7255fb3aaULL, 0x9d324470404e1576ULL, + 0x9306574eb6763d51ULL, 0xa80af9d2c79a47f3ULL, 0x859c0777442e8b9bULL, + 0x69ac853d9db97e29ULL + }, { + 0xc3407dfc2de6377eULL, 0x5b9e93eea4256f77ULL, 0xadb58fdd50c845e0ULL, + 0x5219ff11a75bed86ULL, 0x356b61cfd90b1de9ULL, 0xfb8f406e25abe037ULL, + 0x7a5a0231c0f60796ULL, 0x9d3cd216e1f5020bULL, 0x0c6550fb6b48d8f3ULL, + 0xf57508c427ff1c62ULL, 0x4ad35ffa71cb407dULL, 0x6290a2da1666aa6dULL, + 0xe284ec2349355f9fULL, 0xb3c307c53d7c84ecULL, 0x05e23c0468365a02ULL, + 0x190bac4d6c9ebfa8ULL, 0x94bbbee9e28b80faULL, 0xa34fc777529cb9b5ULL, + 0xcc7b39f095bcd978ULL, 0x2426addb0ce532e3ULL, 0x7e79329312ce4fc7ULL, + 0xab09a72eebec2917ULL, 0xf8d15499f6b9d6c2ULL, 0x1a55b8babf8c895dULL, + 0xdb8add17fb769a85ULL, 0xb57f2f368658e81bULL, 0x8acd36f18f3f41f6ULL, + 0x5ce3b7bba50f11d3ULL, 0x114dcc14d5ee2f0aULL, 0xb91a7fcded1030e8ULL, + 0x81d5425fe55de7a1ULL, 0xb6213bc1554adeeeULL, 0x80144ef95f53f5f2ULL, + 0x1e7688186db4c10cULL, 0x3b912965db5fe1bcULL, 0xc281715a97e8252dULL, + 0x54a5d7e21c7f8171ULL, 0x4b12535ccbc5522eULL, 0x1d289cefbea6f7f9ULL, + 0x6ef5f2217d2e729eULL, 0xe6a7dc819b0d17ceULL, 0x1b94b41c05829b0eULL, + 0x33d7493c622f711eULL, 0xdcf7f942fa5ce421ULL, 0x600fba8b7f7a8ecbULL, + 0x46b60f011a83988eULL, 0x235b898e0dcf4c47ULL, 0x957ab24f588592a9ULL, + 0x4354330572b5c28cULL, 0xa5f3ef84e9b8d542ULL, 0x8c711e02341b2d01ULL, + 0x0b1874ae6a62a657ULL, 0x1213d8e306fc19ffULL, 0xfe6d7c6a4d9dba35ULL, + 0x65ed868f174cd4c9ULL, 0x88522ea0e6236550ULL, 0x899322065c2d7703ULL, + 0xc01e690bfef4018bULL, 0x915982ed8abddaf8ULL, 0xbe675b98ec3a4e4cULL, + 0xa996bf7f82f00db1ULL, 0xe1daf8d49a27696aULL, 0x2effd5d3dc8986e7ULL, + 0xd153a51f2b1a2e81ULL, 0x18caa0ebd690adfbULL, 0x390e3134b243c51aULL, + 0x2778b92cdff70416ULL, 0x029f1851691c24a6ULL, 0x5e7cafeacc133575ULL, + 0xfa4e4cc89fa5f264ULL, 0x5a5f9f481e2b7d24ULL, 0x484c47ab18d764dbULL, + 0x400a27f2a1a7f479ULL, 0xaeeb9b2a83da7315ULL, 0x721c626879869734ULL, + 0x042330a2d2384851ULL, 0x85f672fd3765aff0ULL, 0xba446b3a3e02061dULL, + 0x73dd6ecec3888567ULL, 0xffac70ccf793a866ULL, 0xdfa9edb5294ed2d4ULL, + 0x6c6aea7014325638ULL, 0x834a5a0e8c41c307ULL, 0xcdba35562fb2cb2bULL, + 0x0ad97808d06cb404ULL, 0x0f3b440cb85aee06ULL, 0xe5f9c876481f213bULL, + 0x98deee1289c35809ULL, 0x59018bbfcd394bd1ULL, 0xe01bf47220297b39ULL, + 0xde68e1139340c087ULL, 0x9fa3ca4788e926adULL, 0xbb85679c840c144eULL, + 0x53d8f3b71d55ffd5ULL, 0x0da45c5dd146caa0ULL, 0x6f34fe87c72060cdULL, + 0x57fbc315cf6db784ULL, 0xcee421a1fca0fddeULL, 0x3d2d0196607b8d4bULL, + 0x642c8a29ad42c69aULL, 0x14aff010bdd87508ULL, 0xac74837beac657b3ULL, + 0x3216459ad821634dULL, 
0x3fb219c70967a9edULL, 0x06bc28f3bb246cf7ULL, + 0xf2082c9126d562c6ULL, 0x66b39278c45ee23cULL, 0xbd394f6f3f2878b9ULL, + 0xfd33689d9e8f8cc0ULL, 0x37f4799eb017394fULL, 0x108cc0b26fe03d59ULL, + 0xda4bd1b1417888d6ULL, 0xb09d1332ee6eb219ULL, 0x2f3ed975668794b4ULL, + 0x58c0871977375982ULL, 0x7561463d78ace990ULL, 0x09876cff037e82f1ULL, + 0x7fb83e35a8c05d94ULL, 0x26b9b58a65f91645ULL, 0xef20b07e9873953fULL, + 0x3148516d0b3355b8ULL, 0x41cb2b541ba9e62aULL, 0x790416c613e43163ULL, + 0xa011d380818e8f40ULL, 0x3a5025c36151f3efULL, 0xd57095bdf92266d0ULL, + 0x498d4b0da2d97688ULL, 0x8b0c3a57353153a5ULL, 0x21c491df64d368e1ULL, + 0x8f2f0af5e7091bf4ULL, 0x2da1c1240f9bb012ULL, 0xc43d59a92ccc49daULL, + 0xbfa6573e56345c1fULL, 0x828b56a8364fd154ULL, 0x9a41f643e0df7cafULL, + 0xbcf843c985266aeaULL, 0x2b1de9d7b4bfdce5ULL, 0x20059d79dedd7ab2ULL, + 0x6dabe6d6ae3c446bULL, 0x45e81bf6c991ae7bULL, 0x6351ae7cac68b83eULL, + 0xa432e32253b6c711ULL, 0xd092a9b991143cd2ULL, 0xcac711032e98b58fULL, + 0xd8d4c9e02864ac70ULL, 0xc5fc550f96c25b89ULL, 0xd7ef8dec903e4276ULL, + 0x67729ede7e50f06fULL, 0xeac28c7af045cf3dULL, 0xb15c1f945460a04aULL, + 0x9cfddeb05bfb1058ULL, 0x93c69abce3a1fe5eULL, 0xeb0380dc4a4bdd6eULL, + 0xd20db1e8f8081874ULL, 0x229a8528b7c15e14ULL, 0x44291750739fbc28ULL, + 0xd3ccbd4e42060a27ULL, 0xf62b1c33f4ed2a97ULL, 0x86a8660ae4779905ULL, + 0xd62e814a2a305025ULL, 0x477703a7a08d8addULL, 0x7b9b0e977af815c5ULL, + 0x78c51a60a9ea2330ULL, 0xa6adfb733aaae3b7ULL, 0x97e5aa1e3199b60fULL, + 0x0000000000000000ULL, 0xf4b404629df10e31ULL, 0x5564db44a6719322ULL, + 0x9207961a59afec0dULL, 0x9624a6b88b97a45cULL, 0x363575380a192b1cULL, + 0x2c60cd82b595a241ULL, 0x7d272664c1dc7932ULL, 0x7142769faa94a1c1ULL, + 0xa1d0df263b809d13ULL, 0x1630e841d4c451aeULL, 0xc1df65ad44fa13d8ULL, + 0x13d2d445bcf20bacULL, 0xd915c546926abe23ULL, 0x38cf3d92084dd749ULL, + 0xe766d0272103059dULL, 0xc7634d5effde7f2fULL, 0x077d2455012a7ea4ULL, + 0xedbfa82ff16fb199ULL, 0xaf2a978c39d46146ULL, 0x42953fa3c8bbd0dfULL, + 0xcb061da59496a7dcULL, 0x25e7a17db6eb20b0ULL, 0x34aa6d6963050fbaULL, + 0xa76cf7d580a4f1e4ULL, 0xf7ea10954ee338c4ULL, 0xfcf2643b24819e93ULL, + 0xcf252d0746aeef8dULL, 0x4ef06f58a3f3082cULL, 0x563acfb37563a5d7ULL, + 0x5086e740ce47c920ULL, 0x2982f186dda3f843ULL, 0x87696aac5e798b56ULL, + 0x5d22bb1d1f010380ULL, 0x035e14f7d31236f5ULL, 0x3cec0d30da759f18ULL, + 0xf3c920379cdb7095ULL, 0xb8db736b571e22bbULL, 0xdd36f5e44052f672ULL, + 0xaac8ab8851e23b44ULL, 0xa857b3d938fe1fe2ULL, 0x17f1e4e76eca43fdULL, + 0xec7ea4894b61a3caULL, 0x9e62c6e132e734feULL, 0xd4b1991b432c7483ULL, + 0x6ad6c283af163acfULL, 0x1ce9904904a8e5aaULL, 0x5fbda34c761d2726ULL, + 0xf910583f4cb7c491ULL, 0xc6a241f845d06d7cULL, 0x4f3163fe19fd1a7fULL, + 0xe99c988d2357f9c8ULL, 0x8eee06535d0709a7ULL, 0x0efa48aa0254fc55ULL, + 0xb4be23903c56fa48ULL, 0x763f52caabbedf65ULL, 0xeee1bcd8227d876cULL, + 0xe345e085f33b4dccULL, 0x3e731561b369bbbeULL, 0x2843fd2067adea10ULL, + 0x2adce5710eb1ceb6ULL, 0xb7e03767ef44ccbdULL, 0x8db012a48e153f52ULL, + 0x61ceb62dc5749c98ULL, 0xe85d942b9959eb9bULL, 0x4c6f7709caef2c8aULL, + 0x84377e5b8d6bbda3ULL, 0x30895dcbb13d47ebULL, 0x74a04a9bc2a2fbc3ULL, + 0x6b17ce251518289cULL, 0xe438c4d0f2113368ULL, 0x1fb784bed7bad35fULL, + 0x9b80fae55ad16efcULL, 0x77fe5e6c11b0cd36ULL, 0xc858095247849129ULL, + 0x08466059b97090a2ULL, 0x01c10ca6ba0e1253ULL, 0x6988d6747c040c3aULL, + 0x6849dad2c60a1e69ULL, 0x5147ebe67449db73ULL, 0xc99905f4fd8a837aULL, + 0x991fe2b433cd4a5aULL, 0xf09734c04fc94660ULL, 0xa28ecbd1e892abe6ULL, + 0xf1563866f5c75433ULL, 0x4dae7baf70e13ed9ULL, 0x7ce62ac27bd26b61ULL, + 0x70837a39109ab392ULL, 
0x90988e4b30b3c8abULL, 0xb2020b63877296bfULL, + 0x156efcb607d6675bULL + }, { + 0xe63f55ce97c331d0ULL, 0x25b506b0015bba16ULL, 0xc8706e29e6ad9ba8ULL, + 0x5b43d3775d521f6aULL, 0x0bfa3d577035106eULL, 0xab95fc172afb0e66ULL, + 0xf64b63979e7a3276ULL, 0xf58b4562649dad4bULL, 0x48f7c3dbae0c83f1ULL, + 0xff31916642f5c8c5ULL, 0xcbb048dc1c4a0495ULL, 0x66b8f83cdf622989ULL, + 0x35c130e908e2b9b0ULL, 0x7c761a61f0b34fa1ULL, 0x3601161cf205268dULL, + 0x9e54ccfe2219b7d6ULL, 0x8b7d90a538940837ULL, 0x9cd403588ea35d0bULL, + 0xbc3c6fea9ccc5b5aULL, 0xe5ff733b6d24aeedULL, 0xceed22de0f7eb8d2ULL, + 0xec8581cab1ab545eULL, 0xb96105e88ff8e71dULL, 0x8ca03501871a5eadULL, + 0x76ccce65d6db2a2fULL, 0x5883f582a7b58057ULL, 0x3f7be4ed2e8adc3eULL, + 0x0fe7be06355cd9c9ULL, 0xee054e6c1d11be83ULL, 0x1074365909b903a6ULL, + 0x5dde9f80b4813c10ULL, 0x4a770c7d02b6692cULL, 0x5379c8d5d7809039ULL, + 0xb4067448161ed409ULL, 0x5f5e5026183bd6cdULL, 0xe898029bf4c29df9ULL, + 0x7fb63c940a54d09cULL, 0xc5171f897f4ba8bcULL, 0xa6f28db7b31d3d72ULL, + 0x2e4f3be7716eaa78ULL, 0x0d6771a099e63314ULL, 0x82076254e41bf284ULL, + 0x2f0fd2b42733df98ULL, 0x5c9e76d3e2dc49f0ULL, 0x7aeb569619606cdbULL, + 0x83478b07b2468764ULL, 0xcfadcb8d5923cd32ULL, 0x85dac7f05b95a41eULL, + 0xb5469d1b4043a1e9ULL, 0xb821ecbbd9a592fdULL, 0x1b8e0b0e798c13c8ULL, + 0x62a57b6d9a0be02eULL, 0xfcf1b793b81257f8ULL, 0x9d94ea0bd8fe28ebULL, + 0x4cea408aeb654a56ULL, 0x23284a47e888996cULL, 0x2d8f1d128b893545ULL, + 0xf4cbac3132c0d8abULL, 0xbd7c86b9ca912ebaULL, 0x3a268eef3dbe6079ULL, + 0xf0d62f6077a9110cULL, 0x2735c916ade150cbULL, 0x89fd5f03942ee2eaULL, + 0x1acee25d2fd16628ULL, 0x90f39bab41181bffULL, 0x430dfe8cde39939fULL, + 0xf70b8ac4c8274796ULL, 0x1c53aeaac6024552ULL, 0x13b410acf35e9c9bULL, + 0xa532ab4249faa24fULL, 0x2b1251e5625a163fULL, 0xd7e3e676da4841c7ULL, + 0xa7b264e4e5404892ULL, 0xda8497d643ae72d3ULL, 0x861ae105a1723b23ULL, + 0x38a6414991048aa4ULL, 0x6578dec92585b6b4ULL, 0x0280cfa6acbaeaddULL, + 0x88bdb650c273970aULL, 0x9333bd5ebbff84c2ULL, 0x4e6a8f2c47dfa08bULL, + 0x321c954db76cef2aULL, 0x418d312a72837942ULL, 0xb29b38bfffcdf773ULL, + 0x6c022c38f90a4c07ULL, 0x5a033a240b0f6a8aULL, 0x1f93885f3ce5da6fULL, + 0xc38a537e96988bc6ULL, 0x39e6a81ac759ff44ULL, 0x29929e43cee0fce2ULL, + 0x40cdd87924de0ca2ULL, 0xe9d8ebc8a29fe819ULL, 0x0c2798f3cfbb46f4ULL, + 0x55e484223e53b343ULL, 0x4650948ecd0d2fd8ULL, 0x20e86cb2126f0651ULL, + 0x6d42c56baf5739e7ULL, 0xa06fc1405ace1e08ULL, 0x7babbfc54f3d193bULL, + 0x424d17df8864e67fULL, 0xd8045870ef14980eULL, 0xc6d7397c85ac3781ULL, + 0x21a885e1443273b1ULL, 0x67f8116f893f5c69ULL, 0x24f5efe35706cff6ULL, + 0xd56329d076f2ab1aULL, 0x5e1eb9754e66a32dULL, 0x28d2771098bd8902ULL, + 0x8f6013f47dfdc190ULL, 0x17a993fdb637553cULL, 0xe0a219397e1012aaULL, + 0x786b9930b5da8606ULL, 0x6e82e39e55b0a6daULL, 0x875a0856f72f4ec3ULL, + 0x3741ff4fa458536dULL, 0xac4859b3957558fcULL, 0x7ef6d5c75c09a57cULL, + 0xc04a758b6c7f14fbULL, 0xf9acdd91ab26ebbfULL, 0x7391a467c5ef9668ULL, + 0x335c7c1ee1319acaULL, 0xa91533b18641e4bbULL, 0xe4bf9a683b79db0dULL, + 0x8e20faa72ba0b470ULL, 0x51f907737b3a7ae4ULL, 0x2268a314bed5ec8cULL, + 0xd944b123b949edeeULL, 0x31dcb3b84d8b7017ULL, 0xd3fe65279f218860ULL, + 0x097af2f1dc8ffab3ULL, 0x9b09a6fc312d0b91ULL, 0xcc6ded78a3c4520fULL, + 0x3481d9ba5ebfcc50ULL, 0x4f2a667f1182d56bULL, 0xdfd9fdd4509ace94ULL, + 0x26752045fbbc252bULL, 0xbffc491f662bc467ULL, 0xdd593272fc202449ULL, + 0x3cbbc218d46d4303ULL, 0x91b372f817456e1fULL, 0x681faf69bc6385a0ULL, + 0xb686bbeebaa43ed4ULL, 0x1469b5084cd0ca01ULL, 0x98c98009cbca94acULL, + 0x6438379a73d8c354ULL, 0xc2caba2dc0c5fe26ULL, 
0x3e3b0dbe78d7a9deULL, + 0x50b9ee202d670f04ULL, 0x4590b27b37eab0e5ULL, 0x6025b4cb36b10af3ULL, + 0xfb2c1237079c0162ULL, 0xa12f28130c936be8ULL, 0x4b37e52e54eb1cccULL, + 0x083a1ba28ad28f53ULL, 0xc10a9cd83a22611bULL, 0x9f1425ad7444c236ULL, + 0x069d4cf7e9d3237aULL, 0xedc56899e7f621beULL, 0x778c273680865fcfULL, + 0x309c5aeb1bd605f7ULL, 0x8de0dc52d1472b4dULL, 0xf8ec34c2fd7b9e5fULL, + 0xea18cd3d58787724ULL, 0xaad515447ca67b86ULL, 0x9989695a9d97e14cULL, + 0x0000000000000000ULL, 0xf196c63321f464ecULL, 0x71116bc169557cb5ULL, + 0xaf887f466f92c7c1ULL, 0x972e3e0ffe964d65ULL, 0x190ec4a8d536f915ULL, + 0x95aef1a9522ca7b8ULL, 0xdc19db21aa7d51a9ULL, 0x94ee18fa0471d258ULL, + 0x8087adf248a11859ULL, 0xc457f6da2916dd5cULL, 0xfa6cfb6451c17482ULL, + 0xf256e0c6db13fbd1ULL, 0x6a9f60cf10d96f7dULL, 0x4daaa9d9bd383fb6ULL, + 0x03c026f5fae79f3dULL, 0xde99148706c7bb74ULL, 0x2a52b8b6340763dfULL, + 0x6fc20acd03edd33aULL, 0xd423c08320afdefaULL, 0xbbe1ca4e23420dc0ULL, + 0x966ed75ca8cb3885ULL, 0xeb58246e0e2502c4ULL, 0x055d6a021334bc47ULL, + 0xa47242111fa7d7afULL, 0xe3623fcc84f78d97ULL, 0x81c744a11efc6db9ULL, + 0xaec8961539cfb221ULL, 0xf31609958d4e8e31ULL, 0x63e5923ecc5695ceULL, + 0x47107ddd9b505a38ULL, 0xa3afe7b5a0298135ULL, 0x792b7063e387f3e6ULL, + 0x0140e953565d75e0ULL, 0x12f4f9ffa503e97bULL, 0x750ce8902c3cb512ULL, + 0xdbc47e8515f30733ULL, 0x1ed3610c6ab8af8fULL, 0x5239218681dde5d9ULL, + 0xe222d69fd2aaf877ULL, 0xfe71783514a8bd25ULL, 0xcaf0a18f4a177175ULL, + 0x61655d9860ec7f13ULL, 0xe77fbc9dc19e4430ULL, 0x2ccff441ddd440a5ULL, + 0x16e97aaee06a20dcULL, 0xa855dae2d01c915bULL, 0x1d1347f9905f30b2ULL, + 0xb7c652bdecf94b34ULL, 0xd03e43d265c6175dULL, 0xfdb15ec0ee4f2218ULL, + 0x57644b8492e9599eULL, 0x07dda5a4bf8e569aULL, 0x54a46d71680ec6a3ULL, + 0x5624a2d7c4b42c7eULL, 0xbebca04c3076b187ULL, 0x7d36f332a6ee3a41ULL, + 0x3b6667bc6be31599ULL, 0x695f463aea3ef040ULL, 0xad08b0e0c3282d1cULL, + 0xb15b1e4a052a684eULL, 0x44d05b2861b7c505ULL, 0x15295c5b1a8dbfe1ULL, + 0x744c01c37a61c0f2ULL, 0x59c31cd1f1e8f5b7ULL, 0xef45a73f4b4ccb63ULL, + 0x6bdf899c46841a9dULL, 0x3dfb2b4b823036e3ULL, 0xa2ef0ee6f674f4d5ULL, + 0x184e2dfb836b8cf5ULL, 0x1134df0a5fe47646ULL, 0xbaa1231d751f7820ULL, + 0xd17eaa81339b62bdULL, 0xb01bf71953771daeULL, 0x849a2ea30dc8d1feULL, + 0x705182923f080955ULL, 0x0ea757556301ac29ULL, 0x041d83514569c9a7ULL, + 0x0abad4042668658eULL, 0x49b72a88f851f611ULL, 0x8a3d79f66ec97dd7ULL, + 0xcd2d042bf59927efULL, 0xc930877ab0f0ee48ULL, 0x9273540deda2f122ULL, + 0xc797d02fd3f14261ULL, 0xe1e2f06a284d674aULL, 0xd2be8c74c97cfd80ULL, + 0x9a494faf67707e71ULL, 0xb3dbd1eca9908293ULL, 0x72d14d3493b2e388ULL, + 0xd6a30f258c153427ULL + } +}; /* Ax */ + +static void streebog_xor(const struct streebog_uint512 *x, + const struct streebog_uint512 *y, + struct streebog_uint512 *z) +{ + z->qword[0] = x->qword[0] ^ y->qword[0]; + z->qword[1] = x->qword[1] ^ y->qword[1]; + z->qword[2] = x->qword[2] ^ y->qword[2]; + z->qword[3] = x->qword[3] ^ y->qword[3]; + z->qword[4] = x->qword[4] ^ y->qword[4]; + z->qword[5] = x->qword[5] ^ y->qword[5]; + z->qword[6] = x->qword[6] ^ y->qword[6]; + z->qword[7] = x->qword[7] ^ y->qword[7]; +} + +static void streebog_xlps(const struct streebog_uint512 *x, + const struct streebog_uint512 *y, + struct streebog_uint512 *data) +{ + u64 r0, r1, r2, r3, r4, r5, r6, r7; + int i; + + r0 = le64_to_cpu(x->qword[0] ^ y->qword[0]); + r1 = le64_to_cpu(x->qword[1] ^ y->qword[1]); + r2 = le64_to_cpu(x->qword[2] ^ y->qword[2]); + r3 = le64_to_cpu(x->qword[3] ^ y->qword[3]); + r4 = le64_to_cpu(x->qword[4] ^ y->qword[4]); + r5 = le64_to_cpu(x->qword[5] ^ 
y->qword[5]);
+	r6 = le64_to_cpu(x->qword[6] ^ y->qword[6]);
+	r7 = le64_to_cpu(x->qword[7] ^ y->qword[7]);
+
+	/*
+	 * The X (xor) step was folded into the r0..r7 loads above; each
+	 * pass below applies the combined S, P and L transforms of the
+	 * standard through the precomputed Ax tables, consuming one byte
+	 * of every lane per output qword.
+	 */
+	for (i = 0; i <= 7; i++) {
+		data->qword[i] = cpu_to_le64(Ax[0][r0 & 0xFF]);
+		data->qword[i] ^= cpu_to_le64(Ax[1][r1 & 0xFF]);
+		data->qword[i] ^= cpu_to_le64(Ax[2][r2 & 0xFF]);
+		data->qword[i] ^= cpu_to_le64(Ax[3][r3 & 0xFF]);
+		data->qword[i] ^= cpu_to_le64(Ax[4][r4 & 0xFF]);
+		data->qword[i] ^= cpu_to_le64(Ax[5][r5 & 0xFF]);
+		data->qword[i] ^= cpu_to_le64(Ax[6][r6 & 0xFF]);
+		data->qword[i] ^= cpu_to_le64(Ax[7][r7 & 0xFF]);
+		r0 >>= 8;
+		r1 >>= 8;
+		r2 >>= 8;
+		r3 >>= 8;
+		r4 >>= 8;
+		r5 >>= 8;
+		r6 >>= 8;
+		r7 >>= 8;
+	}
+}
+
+/* One round of E(): advance the round key Ki with C[i], then apply it */
+static void streebog_round(int i, struct streebog_uint512 *Ki,
+			   struct streebog_uint512 *data)
+{
+	streebog_xlps(Ki, &C[i], Ki);
+	streebog_xlps(Ki, data, data);
+}
+
+static int streebog_init(struct shash_desc *desc)
+{
+	struct streebog_state *ctx = shash_desc_ctx(desc);
+	unsigned int digest_size = crypto_shash_digestsize(desc->tfm);
+	unsigned int i;
+
+	memset(ctx, 0, sizeof(struct streebog_state));
+	/*
+	 * Per GOST R 34.11-2012, the 512-bit variant starts from an
+	 * all-zero IV (the memset above); the 256-bit variant sets every
+	 * byte of the state to 0x01.
+	 */
+	for (i = 0; i < 8; i++) {
+		if (digest_size == STREEBOG256_DIGEST_SIZE)
+			ctx->h.qword[i] = 0x0101010101010101ULL;
+	}
+	return 0;
+}
+
+/* Pad the last block: a single 0x01 byte after the message, then zeroes */
+static void streebog_pad(struct streebog_state *ctx)
+{
+	if (ctx->fillsize >= STREEBOG_BLOCK_SIZE)
+		return;
+
+	memset(ctx->buffer + ctx->fillsize, 0,
+	       sizeof(ctx->buffer) - ctx->fillsize);
+
+	ctx->buffer[ctx->fillsize] = 1;
+}
+
+/* Addition modulo 2^512 over eight little-endian 64-bit limbs */
+static void streebog_add512(const struct streebog_uint512 *x,
+			    const struct streebog_uint512 *y,
+			    struct streebog_uint512 *r)
+{
+	u64 carry = 0;
+	int i;
+
+	for (i = 0; i < 8; i++) {
+		const u64 left = le64_to_cpu(x->qword[i]);
+		u64 sum;
+
+		sum = left + le64_to_cpu(y->qword[i]) + carry;
+		/*
+		 * If sum == left, then y[i] + carry was 0 mod 2^64: either
+		 * both were zero (carry stays 0) or the addition wrapped a
+		 * full 2^64 (carry stays 1), so the old carry is still
+		 * correct and must not be overwritten.
+		 */
+		if (sum != left)
+			carry = (sum < left);
+		r->qword[i] = cpu_to_le64(sum);
+	}
+}
+
+/* The compression function: h = E(LPS(h ^ N), m) ^ h ^ m */
+static void streebog_g(struct streebog_uint512 *h,
+		       const struct streebog_uint512 *N,
+		       const u8 *m)
+{
+	struct streebog_uint512 Ki, data;
+	unsigned int i;
+
+	streebog_xlps(h, N, &data);
+
+	/* Starting E() */
+	Ki = data;
+	streebog_xlps(&Ki, (const struct streebog_uint512 *)&m[0], &data);
+
+	for (i = 0; i < 11; i++)
+		streebog_round(i, &Ki, &data);
+
+	streebog_xlps(&Ki, &C[11], &Ki);
+	streebog_xor(&Ki, &data, &data);
+	/* E() done */
+
+	streebog_xor(&data, h, &data);
+	streebog_xor(&data, (const struct streebog_uint512 *)&m[0], h);
+}
+
+/*
+ * Absorb one full 64-byte block: compress it, advance the message
+ * length counter N by 512 bits (the buffer512 constant) and add the
+ * block into the running checksum Sigma.
+ */
+static void streebog_stage2(struct streebog_state *ctx, const u8 *data)
+{
+	streebog_g(&ctx->h, &ctx->N, data);
+
+	streebog_add512(&ctx->N, &buffer512, &ctx->N);
+	streebog_add512(&ctx->Sigma, (const struct streebog_uint512 *)data,
+			&ctx->Sigma);
+}
+
+/*
+ * Finalization: pad the last partial block, fold its length in bits
+ * into N, then chain two more compressions over N and Sigma using a
+ * zero counter (buffer0).
+ */
+static void streebog_stage3(struct streebog_state *ctx)
+{
+	struct streebog_uint512 buf = { { 0 } };
+
+	buf.qword[0] = cpu_to_le64(ctx->fillsize << 3);
+	streebog_pad(ctx);
+
+	streebog_g(&ctx->h, &ctx->N, (const u8 *)&ctx->buffer);
+	streebog_add512(&ctx->N, &buf, &ctx->N);
+	streebog_add512(&ctx->Sigma,
+			(const struct streebog_uint512 *)&ctx->buffer[0],
+			&ctx->Sigma);
+	streebog_g(&ctx->h, &buffer0, (const u8 *)&ctx->N);
+	streebog_g(&ctx->h, &buffer0, (const u8 *)&ctx->Sigma);
+	memcpy(&ctx->hash, &ctx->h, sizeof(struct streebog_uint512));
+}
+
+static int streebog_update(struct shash_desc *desc, const u8 *data,
+			   unsigned int len)
+{
+	struct streebog_state *ctx = shash_desc_ctx(desc);
+	size_t chunksize;
+
+	if (ctx->fillsize) {
+		chunksize = STREEBOG_BLOCK_SIZE - ctx->fillsize;
+		if (chunksize > len)
+			chunksize = len;
+		memcpy(&ctx->buffer[ctx->fillsize], data, chunksize);
+		ctx->fillsize += chunksize;
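+		/*
+		 * The partial block carried over from the previous
+		 * update() call has just been topped up; once it holds a
+		 * full STREEBOG_BLOCK_SIZE (64 bytes) it is compressed and
+		 * the fill counter is reset, so the loop below can consume
+		 * whole blocks directly from the caller's buffer.
+		 */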
+		len -= chunksize;
+		data += chunksize;
+
+		if (ctx->fillsize == STREEBOG_BLOCK_SIZE) {
+			streebog_stage2(ctx, ctx->buffer);
+			ctx->fillsize = 0;
+		}
+	}
+
+	while (len >= STREEBOG_BLOCK_SIZE) {
+		streebog_stage2(ctx, data);
+		data += STREEBOG_BLOCK_SIZE;
+		len -= STREEBOG_BLOCK_SIZE;
+	}
+
+	if (len) {
+		memcpy(&ctx->buffer, data, len);
+		ctx->fillsize = len;
+	}
+	return 0;
+}
+
+static int streebog_final(struct shash_desc *desc, u8 *digest)
+{
+	struct streebog_state *ctx = shash_desc_ctx(desc);
+
+	streebog_stage3(ctx);
+	ctx->fillsize = 0;
+	/* STREEBOG-256 is the upper 256 bits of the final 512-bit state */
+	if (crypto_shash_digestsize(desc->tfm) == STREEBOG256_DIGEST_SIZE)
+		memcpy(digest, &ctx->hash.qword[4], STREEBOG256_DIGEST_SIZE);
+	else
+		memcpy(digest, &ctx->hash.qword[0], STREEBOG512_DIGEST_SIZE);
+	return 0;
+}
+
+static struct shash_alg algs[2] = { {
+	.digestsize	= STREEBOG256_DIGEST_SIZE,
+	.init		= streebog_init,
+	.update		= streebog_update,
+	.final		= streebog_final,
+	.descsize	= sizeof(struct streebog_state),
+	.base		= {
+		.cra_name	 = "streebog256",
+		.cra_driver_name = "streebog256-generic",
+		.cra_blocksize	 = STREEBOG_BLOCK_SIZE,
+		.cra_module	 = THIS_MODULE,
+	},
+}, {
+	.digestsize	= STREEBOG512_DIGEST_SIZE,
+	.init		= streebog_init,
+	.update		= streebog_update,
+	.final		= streebog_final,
+	.descsize	= sizeof(struct streebog_state),
+	.base		= {
+		.cra_name	 = "streebog512",
+		.cra_driver_name = "streebog512-generic",
+		.cra_blocksize	 = STREEBOG_BLOCK_SIZE,
+		.cra_module	 = THIS_MODULE,
+	}
+} };
+
+static int __init streebog_mod_init(void)
+{
+	return crypto_register_shashes(algs, ARRAY_SIZE(algs));
+}
+
+static void __exit streebog_mod_fini(void)
+{
+	crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
+}
+
+module_init(streebog_mod_init);
+module_exit(streebog_mod_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Vitaly Chikunov ");
+MODULE_DESCRIPTION("Streebog Hash Function");
+
+MODULE_ALIAS_CRYPTO("streebog256");
+MODULE_ALIAS_CRYPTO("streebog256-generic");
+MODULE_ALIAS_CRYPTO("streebog512");
+MODULE_ALIAS_CRYPTO("streebog512-generic");
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index c20c9f5c18f2..e7fb87e114a5 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -76,10 +76,12 @@ static char *check[] = {
 	"cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
 	"khazad", "wp512", "wp384", "wp256", "tnepres", "xeta", "fcrypt",
 	"camellia", "seed", "salsa20", "rmd128", "rmd160", "rmd256", "rmd320",
-	"lzo", "cts", "sha3-224", "sha3-256", "sha3-384", "sha3-512", NULL
+	"lzo", "cts", "sha3-224", "sha3-256", "sha3-384", "sha3-512",
+	"streebog256", "streebog512",
+	NULL
 };

-static u32 block_sizes[] = { 16, 64, 256, 1024, 8192, 0 };
+static u32 block_sizes[] = { 16, 64, 256, 1024, 1472, 8192, 0 };
 static u32 aead_sizes[] = { 16, 64, 256, 512, 1024, 2048, 4096, 8192, 0 };

 #define XBUFSIZE	8
@@ -1736,6 +1738,7 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("ctr(aes)");
 		ret += tcrypt_test("rfc3686(ctr(aes))");
 		ret += tcrypt_test("ofb(aes)");
+		ret += tcrypt_test("cfb(aes)");
 		break;

 	case 11:
@@ -1913,6 +1916,14 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("sm3");
 		break;

+	case 53:
+		ret += tcrypt_test("streebog256");
+		break;
+
+	case 54:
+		ret += tcrypt_test("streebog512");
+		break;
+
 	case 100:
 		ret += tcrypt_test("hmac(md5)");
 		break;
@@ -1969,6 +1980,14 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("hmac(sha3-512)");
 		break;

+	case 115:
+		ret +=
tcrypt_test("hmac(streebog256)"); + break; + + case 116: + ret += tcrypt_test("hmac(streebog512)"); + break; + case 150: ret += tcrypt_test("ansi_cprng"); break; @@ -2060,6 +2079,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) speed_template_16_24_32); test_cipher_speed("ctr(aes)", DECRYPT, sec, NULL, 0, speed_template_16_24_32); + test_cipher_speed("cfb(aes)", ENCRYPT, sec, NULL, 0, + speed_template_16_24_32); + test_cipher_speed("cfb(aes)", DECRYPT, sec, NULL, 0, + speed_template_16_24_32); break; case 201: @@ -2297,6 +2320,18 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) test_cipher_speed("ctr(sm4)", DECRYPT, sec, NULL, 0, speed_template_16); break; + + case 219: + test_cipher_speed("adiantum(xchacha12,aes)", ENCRYPT, sec, NULL, + 0, speed_template_32); + test_cipher_speed("adiantum(xchacha12,aes)", DECRYPT, sec, NULL, + 0, speed_template_32); + test_cipher_speed("adiantum(xchacha20,aes)", ENCRYPT, sec, NULL, + 0, speed_template_32); + test_cipher_speed("adiantum(xchacha20,aes)", DECRYPT, sec, NULL, + 0, speed_template_32); + break; + case 300: if (alg) { test_hash_speed(alg, sec, generic_hash_speed_template); @@ -2407,6 +2442,16 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) test_hash_speed("sm3", sec, generic_hash_speed_template); if (mode > 300 && mode < 400) break; /* fall through */ + case 327: + test_hash_speed("streebog256", sec, + generic_hash_speed_template); + if (mode > 300 && mode < 400) break; + /* fall through */ + case 328: + test_hash_speed("streebog512", sec, + generic_hash_speed_template); + if (mode > 300 && mode < 400) break; + /* fall through */ case 399: break; @@ -2520,6 +2565,16 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) num_mb); if (mode > 400 && mode < 500) break; /* fall through */ + case 426: + test_mb_ahash_speed("streebog256", sec, + generic_hash_speed_template, num_mb); + if (mode > 400 && mode < 500) break; + /* fall through */ + case 427: + test_mb_ahash_speed("streebog512", sec, + generic_hash_speed_template, num_mb); + if (mode > 400 && mode < 500) break; + /* fall through */ case 499: break; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index b1f79c6bf409..0f684a414acb 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -2404,6 +2404,18 @@ static int alg_test_null(const struct alg_test_desc *desc, /* Please keep this list sorted by algorithm name. 
*/ static const struct alg_test_desc alg_test_descs[] = { { + .alg = "adiantum(xchacha12,aes)", + .test = alg_test_skcipher, + .suite = { + .cipher = __VECS(adiantum_xchacha12_aes_tv_template) + }, + }, { + .alg = "adiantum(xchacha20,aes)", + .test = alg_test_skcipher, + .suite = { + .cipher = __VECS(adiantum_xchacha20_aes_tv_template) + }, + }, { .alg = "aegis128", .test = alg_test_aead, .suite = { @@ -2690,6 +2702,13 @@ static const struct alg_test_desc alg_test_descs[] = { .dec = __VECS(aes_ccm_dec_tv_template) } } + }, { + .alg = "cfb(aes)", + .test = alg_test_skcipher, + .fips_allowed = 1, + .suite = { + .cipher = __VECS(aes_cfb_tv_template) + }, }, { .alg = "chacha20", .test = alg_test_skcipher, @@ -2805,6 +2824,7 @@ static const struct alg_test_desc alg_test_descs[] = { }, { .alg = "cts(cbc(aes))", .test = alg_test_skcipher, + .fips_allowed = 1, .suite = { .cipher = __VECS(cts_mode_tv_template) } @@ -3184,6 +3204,18 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .hash = __VECS(hmac_sha512_tv_template) } + }, { + .alg = "hmac(streebog256)", + .test = alg_test_hash, + .suite = { + .hash = __VECS(hmac_streebog256_tv_template) + } + }, { + .alg = "hmac(streebog512)", + .test = alg_test_hash, + .suite = { + .hash = __VECS(hmac_streebog512_tv_template) + } }, { .alg = "jitterentropy_rng", .fips_allowed = 1, @@ -3291,6 +3323,12 @@ static const struct alg_test_desc alg_test_descs[] = { .dec = __VECS(morus640_dec_tv_template), } } + }, { + .alg = "nhpoly1305", + .test = alg_test_hash, + .suite = { + .hash = __VECS(nhpoly1305_tv_template) + } }, { .alg = "ofb(aes)", .test = alg_test_skcipher, @@ -3496,6 +3534,18 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .hash = __VECS(sm3_tv_template) } + }, { + .alg = "streebog256", + .test = alg_test_hash, + .suite = { + .hash = __VECS(streebog256_tv_template) + } + }, { + .alg = "streebog512", + .test = alg_test_hash, + .suite = { + .hash = __VECS(streebog512_tv_template) + } }, { .alg = "tgr128", .test = alg_test_hash, @@ -3544,6 +3594,18 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .hash = __VECS(aes_xcbc128_tv_template) } + }, { + .alg = "xchacha12", + .test = alg_test_skcipher, + .suite = { + .cipher = __VECS(xchacha12_tv_template) + }, + }, { + .alg = "xchacha20", + .test = alg_test_skcipher, + .suite = { + .cipher = __VECS(xchacha20_tv_template) + }, }, { .alg = "xts(aes)", .test = alg_test_skcipher, diff --git a/crypto/testmgr.h b/crypto/testmgr.h index 1fe7b97ba03f..e8f47d7b92cd 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -27,7 +27,7 @@ #define MAX_DIGEST_SIZE 64 #define MAX_TAP 8 -#define MAX_KEYLEN 160 +#define MAX_KEYLEN 1088 #define MAX_IVLEN 32 struct hash_testvec { @@ -35,10 +35,10 @@ struct hash_testvec { const char *key; const char *plaintext; const char *digest; - unsigned char tap[MAX_TAP]; + unsigned short tap[MAX_TAP]; + unsigned short np; unsigned short psize; - unsigned char np; - unsigned char ksize; + unsigned short ksize; }; /* @@ -2307,6 +2307,122 @@ static const struct hash_testvec crct10dif_tv_template[] = { } }; +/* + * Streebog test vectors from RFC 6986 and GOST R 34.11-2012 + */ +static const struct hash_testvec streebog256_tv_template[] = { + { /* M1 */ + .plaintext = "012345678901234567890123456789012345678901234567890123456789012", + .psize = 63, + .digest = + "\x9d\x15\x1e\xef\xd8\x59\x0b\x89" + "\xda\xa6\xba\x6c\xb7\x4a\xf9\x27" + "\x5d\xd0\x51\x02\x6b\xb1\x49\xa4" + "\x52\xfd\x84\xe5\xe5\x7b\x55\x00", + }, + { /* M2 */ + .plaintext = + 
"\xd1\xe5\x20\xe2\xe5\xf2\xf0\xe8" + "\x2c\x20\xd1\xf2\xf0\xe8\xe1\xee" + "\xe6\xe8\x20\xe2\xed\xf3\xf6\xe8" + "\x2c\x20\xe2\xe5\xfe\xf2\xfa\x20" + "\xf1\x20\xec\xee\xf0\xff\x20\xf1" + "\xf2\xf0\xe5\xeb\xe0\xec\xe8\x20" + "\xed\xe0\x20\xf5\xf0\xe0\xe1\xf0" + "\xfb\xff\x20\xef\xeb\xfa\xea\xfb" + "\x20\xc8\xe3\xee\xf0\xe5\xe2\xfb", + .psize = 72, + .digest = + "\x9d\xd2\xfe\x4e\x90\x40\x9e\x5d" + "\xa8\x7f\x53\x97\x6d\x74\x05\xb0" + "\xc0\xca\xc6\x28\xfc\x66\x9a\x74" + "\x1d\x50\x06\x3c\x55\x7e\x8f\x50", + }, +}; + +static const struct hash_testvec streebog512_tv_template[] = { + { /* M1 */ + .plaintext = "012345678901234567890123456789012345678901234567890123456789012", + .psize = 63, + .digest = + "\x1b\x54\xd0\x1a\x4a\xf5\xb9\xd5" + "\xcc\x3d\x86\xd6\x8d\x28\x54\x62" + "\xb1\x9a\xbc\x24\x75\x22\x2f\x35" + "\xc0\x85\x12\x2b\xe4\xba\x1f\xfa" + "\x00\xad\x30\xf8\x76\x7b\x3a\x82" + "\x38\x4c\x65\x74\xf0\x24\xc3\x11" + "\xe2\xa4\x81\x33\x2b\x08\xef\x7f" + "\x41\x79\x78\x91\xc1\x64\x6f\x48", + }, + { /* M2 */ + .plaintext = + "\xd1\xe5\x20\xe2\xe5\xf2\xf0\xe8" + "\x2c\x20\xd1\xf2\xf0\xe8\xe1\xee" + "\xe6\xe8\x20\xe2\xed\xf3\xf6\xe8" + "\x2c\x20\xe2\xe5\xfe\xf2\xfa\x20" + "\xf1\x20\xec\xee\xf0\xff\x20\xf1" + "\xf2\xf0\xe5\xeb\xe0\xec\xe8\x20" + "\xed\xe0\x20\xf5\xf0\xe0\xe1\xf0" + "\xfb\xff\x20\xef\xeb\xfa\xea\xfb" + "\x20\xc8\xe3\xee\xf0\xe5\xe2\xfb", + .psize = 72, + .digest = + "\x1e\x88\xe6\x22\x26\xbf\xca\x6f" + "\x99\x94\xf1\xf2\xd5\x15\x69\xe0" + "\xda\xf8\x47\x5a\x3b\x0f\xe6\x1a" + "\x53\x00\xee\xe4\x6d\x96\x13\x76" + "\x03\x5f\xe8\x35\x49\xad\xa2\xb8" + "\x62\x0f\xcd\x7c\x49\x6c\xe5\xb3" + "\x3f\x0c\xb9\xdd\xdc\x2b\x64\x60" + "\x14\x3b\x03\xda\xba\xc9\xfb\x28", + }, +}; + +/* + * Two HMAC-Streebog test vectors from RFC 7836 and R 50.1.113-2016 A + */ +static const struct hash_testvec hmac_streebog256_tv_template[] = { + { + .key = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f", + .ksize = 32, + .plaintext = + "\x01\x26\xbd\xb8\x78\x00\xaf\x21" + "\x43\x41\x45\x65\x63\x78\x01\x00", + .psize = 16, + .digest = + "\xa1\xaa\x5f\x7d\xe4\x02\xd7\xb3" + "\xd3\x23\xf2\x99\x1c\x8d\x45\x34" + "\x01\x31\x37\x01\x0a\x83\x75\x4f" + "\xd0\xaf\x6d\x7c\xd4\x92\x2e\xd9", + }, +}; + +static const struct hash_testvec hmac_streebog512_tv_template[] = { + { + .key = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f", + .ksize = 32, + .plaintext = + "\x01\x26\xbd\xb8\x78\x00\xaf\x21" + "\x43\x41\x45\x65\x63\x78\x01\x00", + .psize = 16, + .digest = + "\xa5\x9b\xab\x22\xec\xae\x19\xc6" + "\x5f\xbd\xe6\xe5\xf4\xe9\xf5\xd8" + "\x54\x9d\x31\xf0\x37\xf9\xdf\x9b" + "\x90\x55\x00\xe1\x71\x92\x3a\x77" + "\x3d\x5f\x15\x30\xf2\xed\x7e\x96" + "\x4c\xb2\xee\xdc\x29\xe9\xad\x2f" + "\x3a\xfe\x93\xb2\x81\x4f\x79\xf5" + "\x00\x0f\xfc\x03\x66\xc2\x51\xe6", + }, +}; + /* Example vectors below taken from * http://www.oscca.gov.cn/UpFile/20101222141857786.pdf * @@ -5593,6 +5709,1238 @@ static const struct hash_testvec poly1305_tv_template[] = { }, }; +/* NHPoly1305 test vectors from https://github.com/google/adiantum */ +static const struct hash_testvec nhpoly1305_tv_template[] = { + { + .key = "\xd2\x5d\x4c\xdd\x8d\x2b\x7f\x7a" + "\xd9\xbe\x71\xec\xd1\x83\x52\xe3" + "\xe1\xad\xd7\x5c\x0a\x75\x9d\xec" + "\x1d\x13\x7e\x5d\x71\x07\xc9\xe4" + "\x57\x2d\x44\x68\xcf\xd8\xd6\xc5" + "\x39\x69\x7d\x32\x75\x51\x4f\x7e" + 
"\xb2\x4c\xc6\x90\x51\x6e\xd9\xd6" + "\xa5\x8b\x2d\xf1\x94\xf9\xf7\x5e" + "\x2c\x84\x7b\x41\x0f\x88\x50\x89" + "\x30\xd9\xa1\x38\x46\x6c\xc0\x4f" + "\xe8\xdf\xdc\x66\xab\x24\x43\x41" + "\x91\x55\x29\x65\x86\x28\x5e\x45" + "\xd5\x2d\xb7\x80\x08\x9a\xc3\xd4" + "\x9a\x77\x0a\xd4\xef\x3e\xe6\x3f" + "\x6f\x2f\x9b\x3a\x7d\x12\x1e\x80" + "\x6c\x44\xa2\x25\xe1\xf6\x60\xe9" + "\x0d\xaf\xc5\x3c\xa5\x79\xae\x64" + "\xbc\xa0\x39\xa3\x4d\x10\xe5\x4d" + "\xd5\xe7\x89\x7a\x13\xee\x06\x78" + "\xdc\xa4\xdc\x14\x27\xe6\x49\x38" + "\xd0\xe0\x45\x25\x36\xc5\xf4\x79" + "\x2e\x9a\x98\x04\xe4\x2b\x46\x52" + "\x7c\x33\xca\xe2\x56\x51\x50\xe2" + "\xa5\x9a\xae\x18\x6a\x13\xf8\xd2" + "\x21\x31\x66\x02\xe2\xda\x8d\x7e" + "\x41\x19\xb2\x61\xee\x48\x8f\xf1" + "\x65\x24\x2e\x1e\x68\xce\x05\xd9" + "\x2a\xcf\xa5\x3a\x57\xdd\x35\x91" + "\x93\x01\xca\x95\xfc\x2b\x36\x04" + "\xe6\x96\x97\x28\xf6\x31\xfe\xa3" + "\x9d\xf6\x6a\x1e\x80\x8d\xdc\xec" + "\xaf\x66\x11\x13\x02\x88\xd5\x27" + "\x33\xb4\x1a\xcd\xa3\xf6\xde\x31" + "\x8e\xc0\x0e\x6c\xd8\x5a\x97\x5e" + "\xdd\xfd\x60\x69\x38\x46\x3f\x90" + "\x5e\x97\xd3\x32\x76\xc7\x82\x49" + "\xfe\xba\x06\x5f\x2f\xa2\xfd\xff" + "\x80\x05\x40\xe4\x33\x03\xfb\x10" + "\xc0\xde\x65\x8c\xc9\x8d\x3a\x9d" + "\xb5\x7b\x36\x4b\xb5\x0c\xcf\x00" + "\x9c\x87\xe4\x49\xad\x90\xda\x4a" + "\xdd\xbd\xff\xe2\x32\x57\xd6\x78" + "\x36\x39\x6c\xd3\x5b\x9b\x88\x59" + "\x2d\xf0\x46\xe4\x13\x0e\x2b\x35" + "\x0d\x0f\x73\x8a\x4f\x26\x84\x75" + "\x88\x3c\xc5\x58\x66\x18\x1a\xb4" + "\x64\x51\x34\x27\x1b\xa4\x11\xc9" + "\x6d\x91\x8a\xfa\x32\x60\x9d\xd7" + "\x87\xe5\xaa\x43\x72\xf8\xda\xd1" + "\x48\x44\x13\x61\xdc\x8c\x76\x17" + "\x0c\x85\x4e\xf3\xdd\xa2\x42\xd2" + "\x74\xc1\x30\x1b\xeb\x35\x31\x29" + "\x5b\xd7\x4c\x94\x46\x35\xa1\x23" + "\x50\xf2\xa2\x8e\x7e\x4f\x23\x4f" + "\x51\xff\xe2\xc9\xa3\x7d\x56\x8b" + "\x41\xf2\xd0\xc5\x57\x7e\x59\xac" + "\xbb\x65\xf3\xfe\xf7\x17\xef\x63" + "\x7c\x6f\x23\xdd\x22\x8e\xed\x84" + "\x0e\x3b\x09\xb3\xf3\xf4\x8f\xcd" + "\x37\xa8\xe1\xa7\x30\xdb\xb1\xa2" + "\x9c\xa2\xdf\x34\x17\x3e\x68\x44" + "\xd0\xde\x03\x50\xd1\x48\x6b\x20" + "\xe2\x63\x45\xa5\xea\x87\xc2\x42" + "\x95\x03\x49\x05\xed\xe0\x90\x29" + "\x1a\xb8\xcf\x9b\x43\xcf\x29\x7a" + "\x63\x17\x41\x9f\xe0\xc9\x10\xfd" + "\x2c\x56\x8c\x08\x55\xb4\xa9\x27" + "\x0f\x23\xb1\x05\x6a\x12\x46\xc7" + "\xe1\xfe\x28\x93\x93\xd7\x2f\xdc" + "\x98\x30\xdb\x75\x8a\xbe\x97\x7a" + "\x02\xfb\x8c\xba\xbe\x25\x09\xbe" + "\xce\xcb\xa2\xef\x79\x4d\x0e\x9d" + "\x1b\x9d\xb6\x39\x34\x38\xfa\x07" + "\xec\xe8\xfc\x32\x85\x1d\xf7\x85" + "\x63\xc3\x3c\xc0\x02\x75\xd7\x3f" + "\xb2\x68\x60\x66\x65\x81\xc6\xb1" + "\x42\x65\x4b\x4b\x28\xd7\xc7\xaa" + "\x9b\xd2\xdc\x1b\x01\xe0\x26\x39" + "\x01\xc1\x52\x14\xd1\x3f\xb7\xe6" + "\x61\x41\xc7\x93\xd2\xa2\x67\xc6" + "\xf7\x11\xb5\xf5\xea\xdd\x19\xfb" + "\x4d\x21\x12\xd6\x7d\xf1\x10\xb0" + "\x89\x07\xc7\x5a\x52\x73\x70\x2f" + "\x32\xef\x65\x2b\x12\xb2\xf0\xf5" + "\x20\xe0\x90\x59\x7e\x64\xf1\x4c" + "\x41\xb3\xa5\x91\x08\xe6\x5e\x5f" + "\x05\x56\x76\xb4\xb0\xcd\x70\x53" + "\x10\x48\x9c\xff\xc2\x69\x55\x24" + "\x87\xef\x84\xea\xfb\xa7\xbf\xa0" + "\x91\x04\xad\x4f\x8b\x57\x54\x4b" + "\xb6\xe9\xd1\xac\x37\x2f\x1d\x2e" + "\xab\xa5\xa4\xe8\xff\xfb\xd9\x39" + "\x2f\xb7\xac\xd1\xfe\x0b\x9a\x80" + "\x0f\xb6\xf4\x36\x39\x90\x51\xe3" + "\x0a\x2f\xb6\x45\x76\x89\xcd\x61" + "\xfe\x48\x5f\x75\x1d\x13\x00\x62" + "\x80\x24\x47\xe7\xbc\x37\xd7\xe3" + "\x15\xe8\x68\x22\xaf\x80\x6f\x4b" + "\xa8\x9f\x01\x10\x48\x14\xc3\x02" + "\x52\xd2\xc7\x75\x9b\x52\x6d\x30" + "\xac\x13\x85\xc8\xf7\xa3\x58\x4b" + "\x49\xf7\x1c\x45\x55\x8c\x39\x9a" + 
"\x99\x6d\x97\x27\x27\xe6\xab\xdd" + "\x2c\x42\x1b\x35\xdd\x9d\x73\xbb" + "\x6c\xf3\x64\xf1\xfb\xb9\xf7\xe6" + "\x4a\x3c\xc0\x92\xc0\x2e\xb7\x1a" + "\xbe\xab\xb3\x5a\xe5\xea\xb1\x48" + "\x58\x13\x53\x90\xfd\xc3\x8e\x54" + "\xf9\x18\x16\x73\xe8\xcb\x6d\x39" + "\x0e\xd7\xe0\xfe\xb6\x9f\x43\x97" + "\xe8\xd0\x85\x56\x83\x3e\x98\x68" + "\x7f\xbd\x95\xa8\x9a\x61\x21\x8f" + "\x06\x98\x34\xa6\xc8\xd6\x1d\xf3" + "\x3d\x43\xa4\x9a\x8c\xe5\xd3\x5a" + "\x32\xa2\x04\x22\xa4\x19\x1a\x46" + "\x42\x7e\x4d\xe5\xe0\xe6\x0e\xca" + "\xd5\x58\x9d\x2c\xaf\xda\x33\x5c" + "\xb0\x79\x9e\xc9\xfc\xca\xf0\x2f" + "\xa8\xb2\x77\xeb\x7a\xa2\xdd\x37" + "\x35\x83\x07\xd6\x02\x1a\xb6\x6c" + "\x24\xe2\x59\x08\x0e\xfd\x3e\x46" + "\xec\x40\x93\xf4\x00\x26\x4f\x2a" + "\xff\x47\x2f\xeb\x02\x92\x26\x5b" + "\x53\x17\xc2\x8d\x2a\xc7\xa3\x1b" + "\xcd\xbc\xa7\xe8\xd1\x76\xe3\x80" + "\x21\xca\x5d\x3b\xe4\x9c\x8f\xa9" + "\x5b\x7f\x29\x7f\x7c\xd8\xed\x6d" + "\x8c\xb2\x86\x85\xe7\x77\xf2\x85" + "\xab\x38\xa9\x9d\xc1\x4e\xc5\x64" + "\x33\x73\x8b\x59\x03\xad\x05\xdf" + "\x25\x98\x31\xde\xef\x13\xf1\x9b" + "\x3c\x91\x9d\x7b\xb1\xfa\xe6\xbf" + "\x5b\xed\xa5\x55\xe6\xea\x6c\x74" + "\xf4\xb9\xe4\x45\x64\x72\x81\xc2" + "\x4c\x28\xd4\xcd\xac\xe2\xde\xf9" + "\xeb\x5c\xeb\x61\x60\x5a\xe5\x28", + .ksize = 1088, + .plaintext = "", + .psize = 0, + .digest = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + }, { + .key = "\x29\x21\x43\xcb\xcb\x13\x07\xde" + "\xbf\x48\xdf\x8a\x7f\xa2\x84\xde" + "\x72\x23\x9d\xf5\xf0\x07\xf2\x4c" + "\x20\x3a\x93\xb9\xcd\x5d\xfe\xcb" + "\x99\x2c\x2b\x58\xc6\x50\x5f\x94" + "\x56\xc3\x7c\x0d\x02\x3f\xb8\x5e" + "\x7b\xc0\x6c\x51\x34\x76\xc0\x0e" + "\xc6\x22\xc8\x9e\x92\xa0\x21\xc9" + "\x85\x5c\x7c\xf8\xe2\x64\x47\xc9" + "\xe4\xa2\x57\x93\xf8\xa2\x69\xcd" + "\x62\x98\x99\xf4\xd7\x7b\x14\xb1" + "\xd8\x05\xff\x04\x15\xc9\xe1\x6e" + "\x9b\xe6\x50\x6b\x0b\x3f\x22\x1f" + "\x08\xde\x0c\x5b\x08\x7e\xc6\x2f" + "\x6c\xed\xd6\xb2\x15\xa4\xb3\xf9" + "\xa7\x46\x38\x2a\xea\x69\xa5\xde" + "\x02\xc3\x96\x89\x4d\x55\x3b\xed" + "\x3d\x3a\x85\x77\xbf\x97\x45\x5c" + "\x9e\x02\x69\xe2\x1b\x68\xbe\x96" + "\xfb\x64\x6f\x0f\xf6\x06\x40\x67" + "\xfa\x04\xe3\x55\xfa\xbe\xa4\x60" + "\xef\x21\x66\x97\xe6\x9d\x5c\x1f" + "\x62\x37\xaa\x31\xde\xe4\x9c\x28" + "\x95\xe0\x22\x86\xf4\x4d\xf3\x07" + "\xfd\x5f\x3a\x54\x2c\x51\x80\x71" + "\xba\x78\x69\x5b\x65\xab\x1f\x81" + "\xed\x3b\xff\x34\xa3\xfb\xbc\x73" + "\x66\x7d\x13\x7f\xdf\x6e\xe2\xe2" + "\xeb\x4f\x6c\xda\x7d\x33\x57\xd0" + "\xd3\x7c\x95\x4f\x33\x58\x21\xc7" + "\xc0\xe5\x6f\x42\x26\xc6\x1f\x5e" + "\x85\x1b\x98\x9a\xa2\x1e\x55\x77" + "\x23\xdf\x81\x5e\x79\x55\x05\xfc" + "\xfb\xda\xee\xba\x5a\xba\xf7\x77" + "\x7f\x0e\xd3\xe1\x37\xfe\x8d\x2b" + "\xd5\x3f\xfb\xd0\xc0\x3c\x0b\x3f" + "\xcf\x3c\x14\xcf\xfb\x46\x72\x4c" + "\x1f\x39\xe2\xda\x03\x71\x6d\x23" + "\xef\x93\xcd\x39\xd9\x37\x80\x4d" + "\x65\x61\xd1\x2c\x03\xa9\x47\x72" + "\x4d\x1e\x0e\x16\x33\x0f\x21\x17" + "\xec\x92\xea\x6f\x37\x22\xa4\xd8" + "\x03\x33\x9e\xd8\x03\x69\x9a\xe8" + "\xb2\x57\xaf\x78\x99\x05\x12\xab" + "\x48\x90\x80\xf0\x12\x9b\x20\x64" + "\x7a\x1d\x47\x5f\xba\x3c\xf9\xc3" + "\x0a\x0d\x8d\xa1\xf9\x1b\x82\x13" + "\x3e\x0d\xec\x0a\x83\xc0\x65\xe1" + "\xe9\x95\xff\x97\xd6\xf2\xe4\xd5" + "\x86\xc0\x1f\x29\x27\x63\xd7\xde" + "\xb7\x0a\x07\x99\x04\x2d\xa3\x89" + "\xa2\x43\xcf\xf3\xe1\x43\xac\x4a" + "\x06\x97\xd0\x05\x4f\x87\xfa\xf9" + "\x9b\xbf\x52\x70\xbd\xbc\x6c\xf3" + "\x03\x13\x60\x41\x28\x09\xec\xcc" + "\xb1\x1a\xec\xd6\xfb\x6f\x2a\x89" + "\x5d\x0b\x53\x9c\x59\xc1\x84\x21" + "\x33\x51\x47\x19\x31\x9c\xd4\x0a" + 
"\x4d\x04\xec\x50\x90\x61\xbd\xbc" + "\x7e\xc8\xd9\x6c\x98\x1d\x45\x41" + "\x17\x5e\x97\x1c\xc5\xa8\xe8\xea" + "\x46\x58\x53\xf7\x17\xd5\xad\x11" + "\xc8\x54\xf5\x7a\x33\x90\xf5\x19" + "\xba\x36\xb4\xfc\x52\xa5\x72\x3d" + "\x14\xbb\x55\xa7\xe9\xe3\x12\xf7" + "\x1c\x30\xa2\x82\x03\xbf\x53\x91" + "\x2e\x60\x41\x9f\x5b\x69\x39\xf6" + "\x4d\xc8\xf8\x46\x7a\x7f\xa4\x98" + "\x36\xff\x06\xcb\xca\xe7\x33\xf2" + "\xc0\x4a\xf4\x3c\x14\x44\x5f\x6b" + "\x75\xef\x02\x36\x75\x08\x14\xfd" + "\x10\x8e\xa5\x58\xd0\x30\x46\x49" + "\xaf\x3a\xf8\x40\x3d\x35\xdb\x84" + "\x11\x2e\x97\x6a\xb7\x87\x7f\xad" + "\xf1\xfa\xa5\x63\x60\xd8\x5e\xbf" + "\x41\x78\x49\xcf\x77\xbb\x56\xbb" + "\x7d\x01\x67\x05\x22\xc8\x8f\x41" + "\xba\x81\xd2\xca\x2c\x38\xac\x76" + "\x06\xc1\x1a\xc2\xce\xac\x90\x67" + "\x57\x3e\x20\x12\x5b\xd9\x97\x58" + "\x65\x05\xb7\x04\x61\x7e\xd8\x3a" + "\xbf\x55\x3b\x13\xe9\x34\x5a\x37" + "\x36\xcb\x94\x45\xc5\x32\xb3\xa0" + "\x0c\x3e\x49\xc5\xd3\xed\xa7\xf0" + "\x1c\x69\xcc\xea\xcc\x83\xc9\x16" + "\x95\x72\x4b\xf4\x89\xd5\xb9\x10" + "\xf6\x2d\x60\x15\xea\x3c\x06\x66" + "\x9f\x82\xad\x17\xce\xd2\xa4\x48" + "\x7c\x65\xd9\xf8\x02\x4d\x9b\x4c" + "\x89\x06\x3a\x34\x85\x48\x89\x86" + "\xf9\x24\xa9\x54\x72\xdb\x44\x95" + "\xc7\x44\x1c\x19\x11\x4c\x04\xdc" + "\x13\xb9\x67\xc8\xc3\x3a\x6a\x50" + "\xfa\xd1\xfb\xe1\x88\xb6\xf1\xa3" + "\xc5\x3b\xdc\x38\x45\x16\x26\x02" + "\x3b\xb8\x8f\x8b\x58\x7d\x23\x04" + "\x50\x6b\x81\x9f\xae\x66\xac\x6f" + "\xcf\x2a\x9d\xf1\xfd\x1d\x57\x07" + "\xbe\x58\xeb\x77\x0c\xe3\xc2\x19" + "\x14\x74\x1b\x51\x1c\x4f\x41\xf3" + "\x32\x89\xb3\xe7\xde\x62\xf6\x5f" + "\xc7\x6a\x4a\x2a\x5b\x0f\x5f\x87" + "\x9c\x08\xb9\x02\x88\xc8\x29\xb7" + "\x94\x52\xfa\x52\xfe\xaa\x50\x10" + "\xba\x48\x75\x5e\x11\x1b\xe6\x39" + "\xd7\x82\x2c\x87\xf1\x1e\xa4\x38" + "\x72\x3e\x51\xe7\xd8\x3e\x5b\x7b" + "\x31\x16\x89\xba\xd6\xad\x18\x5e" + "\xba\xf8\x12\xb3\xf4\x6c\x47\x30" + "\xc0\x38\x58\xb3\x10\x8d\x58\x5d" + "\xb4\xfb\x19\x7e\x41\xc3\x66\xb8" + "\xd6\x72\x84\xe1\x1a\xc2\x71\x4c" + "\x0d\x4a\x21\x7a\xab\xa2\xc0\x36" + "\x15\xc5\xe9\x46\xd7\x29\x17\x76" + "\x5e\x47\x36\x7f\x72\x05\xa7\xcc" + "\x36\x63\xf9\x47\x7d\xe6\x07\x3c" + "\x8b\x79\x1d\x96\x61\x8d\x90\x65" + "\x7c\xf5\xeb\x4e\x6e\x09\x59\x6d" + "\x62\x50\x1b\x0f\xe0\xdc\x78\xf2" + "\x5b\x83\x1a\xa1\x11\x75\xfd\x18" + "\xd7\xe2\x8d\x65\x14\x21\xce\xbe" + "\xb5\x87\xe3\x0a\xda\x24\x0a\x64" + "\xa9\x9f\x03\x8d\x46\x5d\x24\x1a" + "\x8a\x0c\x42\x01\xca\xb1\x5f\x7c" + "\xa5\xac\x32\x4a\xb8\x07\x91\x18" + "\x6f\xb0\x71\x3c\xc9\xb1\xa8\xf8" + "\x5f\x69\xa5\xa1\xca\x9e\x7a\xaa" + "\xac\xe9\xc7\x47\x41\x75\x25\xc3" + "\x73\xe2\x0b\xdd\x6d\x52\x71\xbe" + "\xc5\xdc\xb4\xe7\x01\x26\x53\x77" + "\x86\x90\x85\x68\x6b\x7b\x03\x53" + "\xda\x52\x52\x51\x68\xc8\xf3\xec" + "\x6c\xd5\x03\x7a\xa3\x0e\xb4\x02" + "\x5f\x1a\xab\xee\xca\x67\x29\x7b" + "\xbd\x96\x59\xb3\x8b\x32\x7a\x92" + "\x9f\xd8\x25\x2b\xdf\xc0\x4c\xda", + .ksize = 1088, + .plaintext = "\xbc\xda\x81\xa8\x78\x79\x1c\xbf" + "\x77\x53\xba\x4c\x30\x5b\xb8\x33", + .psize = 16, + .digest = "\x04\xbf\x7f\x6a\xce\x72\xea\x6a" + "\x79\xdb\xb0\xc9\x60\xf6\x12\xcc", + .np = 6, + .tap = { 4, 4, 1, 1, 1, 5 }, + }, { + .key = "\x65\x4d\xe3\xf8\xd2\x4c\xac\x28" + "\x68\xf5\xb3\x81\x71\x4b\xa1\xfa" + "\x04\x0e\xd3\x81\x36\xbe\x0c\x81" + "\x5e\xaf\xbc\x3a\xa4\xc0\x8e\x8b" + "\x55\x63\xd3\x52\x97\x88\xd6\x19" + "\xbc\x96\xdf\x49\xff\x04\x63\xf5" + "\x0c\x11\x13\xaa\x9e\x1f\x5a\xf7" + "\xdd\xbd\x37\x80\xc3\xd0\xbe\xa7" + "\x05\xc8\x3c\x98\x1e\x05\x3c\x84" + "\x39\x61\xc4\xed\xed\x71\x1b\xc4" + "\x74\x45\x2c\xa1\x56\x70\x97\xfd" + 
"\x44\x18\x07\x7d\xca\x60\x1f\x73" + "\x3b\x6d\x21\xcb\x61\x87\x70\x25" + "\x46\x21\xf1\x1f\x21\x91\x31\x2d" + "\x5d\xcc\xb7\xd1\x84\x3e\x3d\xdb" + "\x03\x53\x2a\x82\xa6\x9a\x95\xbc" + "\x1a\x1e\x0a\x5e\x07\x43\xab\x43" + "\xaf\x92\x82\x06\x91\x04\x09\xf4" + "\x17\x0a\x9a\x2c\x54\xdb\xb8\xf4" + "\xd0\xf0\x10\x66\x24\x8d\xcd\xda" + "\xfe\x0e\x45\x9d\x6f\xc4\x4e\xf4" + "\x96\xaf\x13\xdc\xa9\xd4\x8c\xc4" + "\xc8\x57\x39\x3c\xc2\xd3\x0a\x76" + "\x4a\x1f\x75\x83\x44\xc7\xd1\x39" + "\xd8\xb5\x41\xba\x73\x87\xfa\x96" + "\xc7\x18\x53\xfb\x9b\xda\xa0\x97" + "\x1d\xee\x60\x85\x9e\x14\xc3\xce" + "\xc4\x05\x29\x3b\x95\x30\xa3\xd1" + "\x9f\x82\x6a\x04\xf5\xa7\x75\x57" + "\x82\x04\xfe\x71\x51\x71\xb1\x49" + "\x50\xf8\xe0\x96\xf1\xfa\xa8\x88" + "\x3f\xa0\x86\x20\xd4\x60\x79\x59" + "\x17\x2d\xd1\x09\xf4\xec\x05\x57" + "\xcf\x62\x7e\x0e\x7e\x60\x78\xe6" + "\x08\x60\x29\xd8\xd5\x08\x1a\x24" + "\xc4\x6c\x24\xe7\x92\x08\x3d\x8a" + "\x98\x7a\xcf\x99\x0a\x65\x0e\xdc" + "\x8c\x8a\xbe\x92\x82\x91\xcc\x62" + "\x30\xb6\xf4\x3f\xc6\x8a\x7f\x12" + "\x4a\x8a\x49\xfa\x3f\x5c\xd4\x5a" + "\xa6\x82\xa3\xe6\xaa\x34\x76\xb2" + "\xab\x0a\x30\xef\x6c\x77\x58\x3f" + "\x05\x6b\xcc\x5c\xae\xdc\xd7\xb9" + "\x51\x7e\x8d\x32\x5b\x24\x25\xbe" + "\x2b\x24\x01\xcf\x80\xda\x16\xd8" + "\x90\x72\x2c\xad\x34\x8d\x0c\x74" + "\x02\xcb\xfd\xcf\x6e\xef\x97\xb5" + "\x4c\xf2\x68\xca\xde\x43\x9e\x8a" + "\xc5\x5f\x31\x7f\x14\x71\x38\xec" + "\xbd\x98\xe5\x71\xc4\xb5\xdb\xef" + "\x59\xd2\xca\xc0\xc1\x86\x75\x01" + "\xd4\x15\x0d\x6f\xa4\xf7\x7b\x37" + "\x47\xda\x18\x93\x63\xda\xbe\x9e" + "\x07\xfb\xb2\x83\xd5\xc4\x34\x55" + "\xee\x73\xa1\x42\x96\xf9\x66\x41" + "\xa4\xcc\xd2\x93\x6e\xe1\x0a\xbb" + "\xd2\xdd\x18\x23\xe6\x6b\x98\x0b" + "\x8a\x83\x59\x2c\xc3\xa6\x59\x5b" + "\x01\x22\x59\xf7\xdc\xb0\x87\x7e" + "\xdb\x7d\xf4\x71\x41\xab\xbd\xee" + "\x79\xbe\x3c\x01\x76\x0b\x2d\x0a" + "\x42\xc9\x77\x8c\xbb\x54\x95\x60" + "\x43\x2e\xe0\x17\x52\xbd\x90\xc9" + "\xc2\x2c\xdd\x90\x24\x22\x76\x40" + "\x5c\xb9\x41\xc9\xa1\xd5\xbd\xe3" + "\x44\xe0\xa4\xab\xcc\xb8\xe2\x32" + "\x02\x15\x04\x1f\x8c\xec\x5d\x14" + "\xac\x18\xaa\xef\x6e\x33\x19\x6e" + "\xde\xfe\x19\xdb\xeb\x61\xca\x18" + "\xad\xd8\x3d\xbf\x09\x11\xc7\xa5" + "\x86\x0b\x0f\xe5\x3e\xde\xe8\xd9" + "\x0a\x69\x9e\x4c\x20\xff\xf9\xc5" + "\xfa\xf8\xf3\x7f\xa5\x01\x4b\x5e" + "\x0f\xf0\x3b\x68\xf0\x46\x8c\x2a" + "\x7a\xc1\x8f\xa0\xfe\x6a\x5b\x44" + "\x70\x5c\xcc\x92\x2c\x6f\x0f\xbd" + "\x25\x3e\xb7\x8e\x73\x58\xda\xc9" + "\xa5\xaa\x9e\xf3\x9b\xfd\x37\x3e" + "\xe2\x88\xa4\x7b\xc8\x5c\xa8\x93" + "\x0e\xe7\x9a\x9c\x2e\x95\x18\x9f" + "\xc8\x45\x0c\x88\x9e\x53\x4f\x3a" + "\x76\xc1\x35\xfa\x17\xd8\xac\xa0" + "\x0c\x2d\x47\x2e\x4f\x69\x9b\xf7" + "\xd0\xb6\x96\x0c\x19\xb3\x08\x01" + "\x65\x7a\x1f\xc7\x31\x86\xdb\xc8" + "\xc1\x99\x8f\xf8\x08\x4a\x9d\x23" + "\x22\xa8\xcf\x27\x01\x01\x88\x93" + "\x9c\x86\x45\xbd\xe0\x51\xca\x52" + "\x84\xba\xfe\x03\xf7\xda\xc5\xce" + "\x3e\x77\x75\x86\xaf\x84\xc8\x05" + "\x44\x01\x0f\x02\xf3\x58\xb0\x06" + "\x5a\xd7\x12\x30\x8d\xdf\x1f\x1f" + "\x0a\xe6\xd2\xea\xf6\x3a\x7a\x99" + "\x63\xe8\xd2\xc1\x4a\x45\x8b\x40" + "\x4d\x0a\xa9\x76\x92\xb3\xda\x87" + "\x36\x33\xf0\x78\xc3\x2f\x5f\x02" + "\x1a\x6a\x2c\x32\xcd\x76\xbf\xbd" + "\x5a\x26\x20\x28\x8c\x8c\xbc\x52" + "\x3d\x0a\xc9\xcb\xab\xa4\x21\xb0" + "\x54\x40\x81\x44\xc7\xd6\x1c\x11" + "\x44\xc6\x02\x92\x14\x5a\xbf\x1a" + "\x09\x8a\x18\xad\xcd\x64\x3d\x53" + "\x4a\xb6\xa5\x1b\x57\x0e\xef\xe0" + "\x8c\x44\x5f\x7d\xbd\x6c\xfd\x60" + "\xae\x02\x24\xb6\x99\xdd\x8c\xaf" + "\x59\x39\x75\x3c\xd1\x54\x7b\x86" + "\xcc\x99\xd9\x28\x0c\xb0\x94\x62" + 
"\xf9\x51\xd1\x19\x96\x2d\x66\xf5" + "\x55\xcf\x9e\x59\xe2\x6b\x2c\x08" + "\xc0\x54\x48\x24\x45\xc3\x8c\x73" + "\xea\x27\x6e\x66\x7d\x1d\x0e\x6e" + "\x13\xe8\x56\x65\x3a\xb0\x81\x5c" + "\xf0\xe8\xd8\x00\x6b\xcd\x8f\xad" + "\xdd\x53\xf3\xa4\x6c\x43\xd6\x31" + "\xaf\xd2\x76\x1e\x91\x12\xdb\x3c" + "\x8c\xc2\x81\xf0\x49\xdb\xe2\x6b" + "\x76\x62\x0a\x04\xe4\xaa\x8a\x7c" + "\x08\x0b\x5d\xd0\xee\x1d\xfb\xc4" + "\x02\x75\x42\xd6\xba\xa7\x22\xa8" + "\x47\x29\xb7\x85\x6d\x93\x3a\xdb" + "\x00\x53\x0b\xa2\xeb\xf8\xfe\x01" + "\x6f\x8a\x31\xd6\x17\x05\x6f\x67" + "\x88\x95\x32\xfe\x4f\xa6\x4b\xf8" + "\x03\xe4\xcd\x9a\x18\xe8\x4e\x2d" + "\xf7\x97\x9a\x0c\x7d\x9f\x7e\x44" + "\x69\x51\xe0\x32\x6b\x62\x86\x8f" + "\xa6\x8e\x0b\x21\x96\xe5\xaf\x77" + "\xc0\x83\xdf\xa5\x0e\xd0\xa1\x04" + "\xaf\xc1\x10\xcb\x5a\x40\xe4\xe3" + "\x38\x7e\x07\xe8\x4d\xfa\xed\xc5" + "\xf0\x37\xdf\xbb\x8a\xcf\x3d\xdc" + "\x61\xd2\xc6\x2b\xff\x07\xc9\x2f" + "\x0c\x2d\x5c\x07\xa8\x35\x6a\xfc" + "\xae\x09\x03\x45\x74\x51\x4d\xc4" + "\xb8\x23\x87\x4a\x99\x27\x20\x87" + "\x62\x44\x0a\x4a\xce\x78\x47\x22", + .ksize = 1088, + .plaintext = "\x8e\xb0\x4c\xde\x9c\x4a\x04\x5a" + "\xf6\xa9\x7f\x45\x25\xa5\x7b\x3a" + "\xbc\x4d\x73\x39\x81\xb5\xbd\x3d" + "\x21\x6f\xd7\x37\x50\x3c\x7b\x28" + "\xd1\x03\x3a\x17\xed\x7b\x7c\x2a" + "\x16\xbc\xdf\x19\x89\x52\x71\x31" + "\xb6\xc0\xfd\xb5\xd3\xba\x96\x99" + "\xb6\x34\x0b\xd0\x99\x93\xfc\x1a" + "\x01\x3c\x85\xc6\x9b\x78\x5c\x8b" + "\xfe\xae\xd2\xbf\xb2\x6f\xf9\xed" + "\xc8\x25\x17\xfe\x10\x3b\x7d\xda" + "\xf4\x8d\x35\x4b\x7c\x7b\x82\xe7" + "\xc2\xb3\xee\x60\x4a\x03\x86\xc9" + "\x4e\xb5\xc4\xbe\xd2\xbd\x66\xf1" + "\x13\xf1\x09\xab\x5d\xca\x63\x1f" + "\xfc\xfb\x57\x2a\xfc\xca\x66\xd8" + "\x77\x84\x38\x23\x1d\xac\xd3\xb3" + "\x7a\xad\x4c\x70\xfa\x9c\xc9\x61" + "\xa6\x1b\xba\x33\x4b\x4e\x33\xec" + "\xa0\xa1\x64\x39\x40\x05\x1c\xc2" + "\x3f\x49\x9d\xae\xf2\xc5\xf2\xc5" + "\xfe\xe8\xf4\xc2\xf9\x96\x2d\x28" + "\x92\x30\x44\xbc\xd2\x7f\xe1\x6e" + "\x62\x02\x8f\x3d\x1c\x80\xda\x0e" + "\x6a\x90\x7e\x75\xff\xec\x3e\xc4" + "\xcd\x16\x34\x3b\x05\x6d\x4d\x20" + "\x1c\x7b\xf5\x57\x4f\xfa\x3d\xac" + "\xd0\x13\x55\xe8\xb3\xe1\x1b\x78" + "\x30\xe6\x9f\x84\xd4\x69\xd1\x08" + "\x12\x77\xa7\x4a\xbd\xc0\xf2\xd2" + "\x78\xdd\xa3\x81\x12\xcb\x6c\x14" + "\x90\x61\xe2\x84\xc6\x2b\x16\xcc" + "\x40\x99\x50\x88\x01\x09\x64\x4f" + "\x0a\x80\xbe\x61\xae\x46\xc9\x0a" + "\x5d\xe0\xfb\x72\x7a\x1a\xdd\x61" + "\x63\x20\x05\xa0\x4a\xf0\x60\x69" + "\x7f\x92\xbc\xbf\x4e\x39\x4d\xdd" + "\x74\xd1\xb7\xc0\x5a\x34\xb7\xae" + "\x76\x65\x2e\xbc\x36\xb9\x04\x95" + "\x42\xe9\x6f\xca\x78\xb3\x72\x07" + "\xa3\xba\x02\x94\x67\x4c\xb1\xd7" + "\xe9\x30\x0d\xf0\x3b\xb8\x10\x6d" + "\xea\x2b\x21\xbf\x74\x59\x82\x97" + "\x85\xaa\xf1\xd7\x54\x39\xeb\x05" + "\xbd\xf3\x40\xa0\x97\xe6\x74\xfe" + "\xb4\x82\x5b\xb1\x36\xcb\xe8\x0d" + "\xce\x14\xd9\xdf\xf1\x94\x22\xcd" + "\xd6\x00\xba\x04\x4c\x05\x0c\xc0" + "\xd1\x5a\xeb\x52\xd5\xa8\x8e\xc8" + "\x97\xa1\xaa\xc1\xea\xc1\xbe\x7c" + "\x36\xb3\x36\xa0\xc6\x76\x66\xc5" + "\xe2\xaf\xd6\x5c\xe2\xdb\x2c\xb3" + "\x6c\xb9\x99\x7f\xff\x9f\x03\x24" + "\xe1\x51\x44\x66\xd8\x0c\x5d\x7f" + "\x5c\x85\x22\x2a\xcf\x6d\x79\x28" + "\xab\x98\x01\x72\xfe\x80\x87\x5f" + "\x46\xba\xef\x81\x24\xee\xbf\xb0" + "\x24\x74\xa3\x65\x97\x12\xc4\xaf" + "\x8b\xa0\x39\xda\x8a\x7e\x74\x6e" + "\x1b\x42\xb4\x44\x37\xfc\x59\xfd" + "\x86\xed\xfb\x8c\x66\x33\xda\x63" + "\x75\xeb\xe1\xa4\x85\x4f\x50\x8f" + "\x83\x66\x0d\xd3\x37\xfa\xe6\x9c" + "\x4f\x30\x87\x35\x18\xe3\x0b\xb7" + "\x6e\x64\x54\xcd\x70\xb3\xde\x54" + "\xb7\x1d\xe6\x4c\x4d\x55\x12\x12" + 
"\xaf\x5f\x7f\x5e\xee\x9d\xe8\x8e" + "\x32\x9d\x4e\x75\xeb\xc6\xdd\xaa" + "\x48\x82\xa4\x3f\x3c\xd7\xd3\xa8" + "\x63\x9e\x64\xfe\xe3\x97\x00\x62" + "\xe5\x40\x5d\xc3\xad\x72\xe1\x28" + "\x18\x50\xb7\x75\xef\xcd\x23\xbf" + "\x3f\xc0\x51\x36\xf8\x41\xc3\x08" + "\xcb\xf1\x8d\x38\x34\xbd\x48\x45" + "\x75\xed\xbc\x65\x7b\xb5\x0c\x9b" + "\xd7\x67\x7d\x27\xb4\xc4\x80\xd7" + "\xa9\xb9\xc7\x4a\x97\xaa\xda\xc8" + "\x3c\x74\xcf\x36\x8f\xe4\x41\xe3" + "\xd4\xd3\x26\xa7\xf3\x23\x9d\x8f" + "\x6c\x20\x05\x32\x3e\xe0\xc3\xc8" + "\x56\x3f\xa7\x09\xb7\xfb\xc7\xf7" + "\xbe\x2a\xdd\x0f\x06\x7b\x0d\xdd" + "\xb0\xb4\x86\x17\xfd\xb9\x04\xe5" + "\xc0\x64\x5d\xad\x2a\x36\x38\xdb" + "\x24\xaf\x5b\xff\xca\xf9\x41\xe8" + "\xf9\x2f\x1e\x5e\xf9\xf5\xd5\xf2" + "\xb2\x88\xca\xc9\xa1\x31\xe2\xe8" + "\x10\x95\x65\xbf\xf1\x11\x61\x7a" + "\x30\x1a\x54\x90\xea\xd2\x30\xf6" + "\xa5\xad\x60\xf9\x4d\x84\x21\x1b" + "\xe4\x42\x22\xc8\x12\x4b\xb0\x58" + "\x3e\x9c\x2d\x32\x95\x0a\x8e\xb0" + "\x0a\x7e\x77\x2f\xe8\x97\x31\x6a" + "\xf5\x59\xb4\x26\xe6\x37\x12\xc9" + "\xcb\xa0\x58\x33\x6f\xd5\x55\x55" + "\x3c\xa1\x33\xb1\x0b\x7e\x2e\xb4" + "\x43\x2a\x84\x39\xf0\x9c\xf4\x69" + "\x4f\x1e\x79\xa6\x15\x1b\x87\xbb" + "\xdb\x9b\xe0\xf1\x0b\xba\xe3\x6e" + "\xcc\x2f\x49\x19\x22\x29\xfc\x71" + "\xbb\x77\x38\x18\x61\xaf\x85\x76" + "\xeb\xd1\x09\xcc\x86\x04\x20\x9a" + "\x66\x53\x2f\x44\x8b\xc6\xa3\xd2" + "\x5f\xc7\x79\x82\x66\xa8\x6e\x75" + "\x7d\x94\xd1\x86\x75\x0f\xa5\x4f" + "\x3c\x7a\x33\xce\xd1\x6e\x9d\x7b" + "\x1f\x91\x37\xb8\x37\x80\xfb\xe0" + "\x52\x26\xd0\x9a\xd4\x48\x02\x41" + "\x05\xe3\x5a\x94\xf1\x65\x61\x19" + "\xb8\x88\x4e\x2b\xea\xba\x8b\x58" + "\x8b\x42\x01\x00\xa8\xfe\x00\x5c" + "\xfe\x1c\xee\x31\x15\x69\xfa\xb3" + "\x9b\x5f\x22\x8e\x0d\x2c\xe3\xa5" + "\x21\xb9\x99\x8a\x8e\x94\x5a\xef" + "\x13\x3e\x99\x96\x79\x6e\xd5\x42" + "\x36\x03\xa9\xe2\xca\x65\x4e\x8a" + "\x8a\x30\xd2\x7d\x74\xe7\xf0\xaa" + "\x23\x26\xdd\xcb\x82\x39\xfc\x9d" + "\x51\x76\x21\x80\xa2\xbe\x93\x03" + "\x47\xb0\xc1\xb6\xdc\x63\xfd\x9f" + "\xca\x9d\xa5\xca\x27\x85\xe2\xd8" + "\x15\x5b\x7e\x14\x7a\xc4\x89\xcc" + "\x74\x14\x4b\x46\xd2\xce\xac\x39" + "\x6b\x6a\x5a\xa4\x0e\xe3\x7b\x15" + "\x94\x4b\x0f\x74\xcb\x0c\x7f\xa9" + "\xbe\x09\x39\xa3\xdd\x56\x5c\xc7" + "\x99\x56\x65\x39\xf4\x0b\x7d\x87" + "\xec\xaa\xe3\x4d\x22\x65\x39\x4e", + .psize = 1024, + .digest = "\x64\x3a\xbc\xc3\x3f\x74\x40\x51" + "\x6e\x56\x01\x1a\x51\xec\x36\xde", + .np = 8, + .tap = { 64, 203, 267, 28, 263, 62, 54, 83 }, + }, { + .key = "\x1b\x82\x2e\x1b\x17\x23\xb9\x6d" + "\xdc\x9c\xda\x99\x07\xe3\x5f\xd8" + "\xd2\xf8\x43\x80\x8d\x86\x7d\x80" + "\x1a\xd0\xcc\x13\xb9\x11\x05\x3f" + "\x7e\xcf\x7e\x80\x0e\xd8\x25\x48" + "\x8b\xaa\x63\x83\x92\xd0\x72\xf5" + "\x4f\x67\x7e\x50\x18\x25\xa4\xd1" + "\xe0\x7e\x1e\xba\xd8\xa7\x6e\xdb" + "\x1a\xcc\x0d\xfe\x9f\x6d\x22\x35" + "\xe1\xe6\xe0\xa8\x7b\x9c\xb1\x66" + "\xa3\xf8\xff\x4d\x90\x84\x28\xbc" + "\xdc\x19\xc7\x91\x49\xfc\xf6\x33" + "\xc9\x6e\x65\x7f\x28\x6f\x68\x2e" + "\xdf\x1a\x75\xe9\xc2\x0c\x96\xb9" + "\x31\x22\xc4\x07\xc6\x0a\x2f\xfd" + "\x36\x06\x5f\x5c\xc5\xb1\x3a\xf4" + "\x5e\x48\xa4\x45\x2b\x88\xa7\xee" + "\xa9\x8b\x52\xcc\x99\xd9\x2f\xb8" + "\xa4\x58\x0a\x13\xeb\x71\x5a\xfa" + "\xe5\x5e\xbe\xf2\x64\xad\x75\xbc" + "\x0b\x5b\x34\x13\x3b\x23\x13\x9a" + "\x69\x30\x1e\x9a\xb8\x03\xb8\x8b" + "\x3e\x46\x18\x6d\x38\xd9\xb3\xd8" + "\xbf\xf1\xd0\x28\xe6\x51\x57\x80" + "\x5e\x99\xfb\xd0\xce\x1e\x83\xf7" + "\xe9\x07\x5a\x63\xa9\xef\xce\xa5" + "\xfb\x3f\x37\x17\xfc\x0b\x37\x0e" + "\xbb\x4b\x21\x62\xb7\x83\x0e\xa9" + "\x9e\xb0\xc4\xad\x47\xbe\x35\xe7" + 
"\x51\xb2\xf2\xac\x2b\x65\x7b\x48" + "\xe3\x3f\x5f\xb6\x09\x04\x0c\x58" + "\xce\x99\xa9\x15\x2f\x4e\xc1\xf2" + "\x24\x48\xc0\xd8\x6c\xd3\x76\x17" + "\x83\x5d\xe6\xe3\xfd\x01\x8e\xf7" + "\x42\xa5\x04\x29\x30\xdf\xf9\x00" + "\x4a\xdc\x71\x22\x1a\x33\x15\xb6" + "\xd7\x72\xfb\x9a\xb8\xeb\x2b\x38" + "\xea\xa8\x61\xa8\x90\x11\x9d\x73" + "\x2e\x6c\xce\x81\x54\x5a\x9f\xcd" + "\xcf\xd5\xbd\x26\x5d\x66\xdb\xfb" + "\xdc\x1e\x7c\x10\xfe\x58\x82\x10" + "\x16\x24\x01\xce\x67\x55\x51\xd1" + "\xdd\x6b\x44\xa3\x20\x8e\xa9\xa6" + "\x06\xa8\x29\x77\x6e\x00\x38\x5b" + "\xde\x4d\x58\xd8\x1f\x34\xdf\xf9" + "\x2c\xac\x3e\xad\xfb\x92\x0d\x72" + "\x39\xa4\xac\x44\x10\xc0\x43\xc4" + "\xa4\x77\x3b\xfc\xc4\x0d\x37\xd3" + "\x05\x84\xda\x53\x71\xf8\x80\xd3" + "\x34\x44\xdb\x09\xb4\x2b\x8e\xe3" + "\x00\x75\x50\x9e\x43\x22\x00\x0b" + "\x7c\x70\xab\xd4\x41\xf1\x93\xcd" + "\x25\x2d\x84\x74\xb5\xf2\x92\xcd" + "\x0a\x28\xea\x9a\x49\x02\x96\xcb" + "\x85\x9e\x2f\x33\x03\x86\x1d\xdc" + "\x1d\x31\xd5\xfc\x9d\xaa\xc5\xe9" + "\x9a\xc4\x57\xf5\x35\xed\xf4\x4b" + "\x3d\x34\xc2\x29\x13\x86\x36\x42" + "\x5d\xbf\x90\x86\x13\x77\xe5\xc3" + "\x62\xb4\xfe\x0b\x70\x39\x35\x65" + "\x02\xea\xf6\xce\x57\x0c\xbb\x74" + "\x29\xe3\xfd\x60\x90\xfd\x10\x38" + "\xd5\x4e\x86\xbd\x37\x70\xf0\x97" + "\xa6\xab\x3b\x83\x64\x52\xca\x66" + "\x2f\xf9\xa4\xca\x3a\x55\x6b\xb0" + "\xe8\x3a\x34\xdb\x9e\x48\x50\x2f" + "\x3b\xef\xfd\x08\x2d\x5f\xc1\x37" + "\x5d\xbe\x73\xe4\xd8\xe9\xac\xca" + "\x8a\xaa\x48\x7c\x5c\xf4\xa6\x96" + "\x5f\xfa\x70\xa6\xb7\x8b\x50\xcb" + "\xa6\xf5\xa9\xbd\x7b\x75\x4c\x22" + "\x0b\x19\x40\x2e\xc9\x39\x39\x32" + "\x83\x03\xa8\xa4\x98\xe6\x8e\x16" + "\xb9\xde\x08\xc5\xfc\xbf\xad\x39" + "\xa8\xc7\x93\x6c\x6f\x23\xaf\xc1" + "\xab\xe1\xdf\xbb\x39\xae\x93\x29" + "\x0e\x7d\x80\x8d\x3e\x65\xf3\xfd" + "\x96\x06\x65\x90\xa1\x28\x64\x4b" + "\x69\xf9\xa8\x84\x27\x50\xfc\x87" + "\xf7\xbf\x55\x8e\x56\x13\x58\x7b" + "\x85\xb4\x6a\x72\x0f\x40\xf1\x4f" + "\x83\x81\x1f\x76\xde\x15\x64\x7a" + "\x7a\x80\xe4\xc7\x5e\x63\x01\x91" + "\xd7\x6b\xea\x0b\x9b\xa2\x99\x3b" + "\x6c\x88\xd8\xfd\x59\x3c\x8d\x22" + "\x86\x56\xbe\xab\xa1\x37\x08\x01" + "\x50\x85\x69\x29\xee\x9f\xdf\x21" + "\x3e\x20\x20\xf5\xb0\xbb\x6b\xd0" + "\x9c\x41\x38\xec\x54\x6f\x2d\xbd" + "\x0f\xe1\xbd\xf1\x2b\x6e\x60\x56" + "\x29\xe5\x7a\x70\x1c\xe2\xfc\x97" + "\x82\x68\x67\xd9\x3d\x1f\xfb\xd8" + "\x07\x9f\xbf\x96\x74\xba\x6a\x0e" + "\x10\x48\x20\xd8\x13\x1e\xb5\x44" + "\xf2\xcc\xb1\x8b\xfb\xbb\xec\xd7" + "\x37\x70\x1f\x7c\x55\xd2\x4b\xb9" + "\xfd\x70\x5e\xa3\x91\x73\x63\x52" + "\x13\x47\x5a\x06\xfb\x01\x67\xa5" + "\xc0\xd0\x49\x19\x56\x66\x9a\x77" + "\x64\xaf\x8c\x25\x91\x52\x87\x0e" + "\x18\xf3\x5f\x97\xfd\x71\x13\xf8" + "\x05\xa5\x39\xcc\x65\xd3\xcc\x63" + "\x5b\xdb\x5f\x7e\x5f\x6e\xad\xc4" + "\xf4\xa0\xc5\xc2\x2b\x4d\x97\x38" + "\x4f\xbc\xfa\x33\x17\xb4\x47\xb9" + "\x43\x24\x15\x8d\xd2\xed\x80\x68" + "\x84\xdb\x04\x80\xca\x5e\x6a\x35" + "\x2c\x2c\xe7\xc5\x03\x5f\x54\xb0" + "\x5e\x4f\x1d\x40\x54\x3d\x78\x9a" + "\xac\xda\x80\x27\x4d\x15\x4c\x1a" + "\x6e\x80\xc9\xc4\x3b\x84\x0e\xd9" + "\x2e\x93\x01\x8c\xc3\xc8\x91\x4b" + "\xb3\xaa\x07\x04\x68\x5b\x93\xa5" + "\xe7\xc4\x9d\xe7\x07\xee\xf5\x3b" + "\x40\x89\xcc\x60\x34\x9d\xb4\x06" + "\x1b\xef\x92\xe6\xc1\x2a\x7d\x0f" + "\x81\xaa\x56\xe3\xd7\xed\xa7\xd4" + "\xa7\x3a\x49\xc4\xad\x81\x5c\x83" + "\x55\x8e\x91\x54\xb7\x7d\x65\xa5" + "\x06\x16\xd5\x9a\x16\xc1\xb0\xa2" + "\x06\xd8\x98\x47\x73\x7e\x73\xa0" + "\xb8\x23\xb1\x52\xbf\x68\x74\x5d" + "\x0b\xcb\xfa\x8c\x46\xe3\x24\xe6" + "\xab\xd4\x69\x8d\x8c\xf2\x8a\x59" + "\xbe\x48\x46\x50\x8c\x9a\xe8\xe3" + 
"\x31\x55\x0a\x06\xed\x4f\xf8\xb7" + "\x4f\xe3\x85\x17\x30\xbd\xd5\x20" + "\xe7\x5b\xb2\x32\xcf\x6b\x16\x44" + "\xd2\xf5\x7e\xd7\xd1\x2f\xee\x64" + "\x3e\x9d\x10\xef\x27\x35\x43\x64" + "\x67\xfb\x7a\x7b\xe0\x62\x31\x9a" + "\x4d\xdf\xa5\xab\xc0\x20\xbb\x01" + "\xe9\x7b\x54\xf1\xde\xb2\x79\x50" + "\x6c\x4b\x91\xdb\x7f\xbb\x50\xc1" + "\x55\x44\x38\x9a\xe0\x9f\xe8\x29" + "\x6f\x15\xf8\x4e\xa6\xec\xa0\x60", + .ksize = 1088, + .plaintext = "\x15\x68\x9e\x2f\xad\x15\x52\xdf" + "\xf0\x42\x62\x24\x2a\x2d\xea\xbf" + "\xc7\xf3\xb4\x1a\xf5\xed\xb2\x08" + "\x15\x60\x1c\x00\x77\xbf\x0b\x0e" + "\xb7\x2c\xcf\x32\x3a\xc7\x01\x77" + "\xef\xa6\x75\xd0\x29\xc7\x68\x20" + "\xb2\x92\x25\xbf\x12\x34\xe9\xa4" + "\xfd\x32\x7b\x3f\x7c\xbd\xa5\x02" + "\x38\x41\xde\xc9\xc1\x09\xd9\xfc" + "\x6e\x78\x22\x83\x18\xf7\x50\x8d" + "\x8f\x9c\x2d\x02\xa5\x30\xac\xff" + "\xea\x63\x2e\x80\x37\x83\xb0\x58" + "\xda\x2f\xef\x21\x55\xba\x7b\xb1" + "\xb6\xed\xf5\xd2\x4d\xaa\x8c\xa9" + "\xdd\xdb\x0f\xb4\xce\xc1\x9a\xb1" + "\xc1\xdc\xbd\xab\x86\xc2\xdf\x0b" + "\xe1\x2c\xf9\xbe\xf6\xd8\xda\x62" + "\x72\xdd\x98\x09\x52\xc0\xc4\xb6" + "\x7b\x17\x5c\xf5\xd8\x4b\x88\xd6" + "\x6b\xbf\x84\x4a\x3f\xf5\x4d\xd2" + "\x94\xe2\x9c\xff\xc7\x3c\xd9\xc8" + "\x37\x38\xbc\x8c\xf3\xe7\xb7\xd0" + "\x1d\x78\xc4\x39\x07\xc8\x5e\x79" + "\xb6\x5a\x90\x5b\x6e\x97\xc9\xd4" + "\x82\x9c\xf3\x83\x7a\xe7\x97\xfc" + "\x1d\xbb\xef\xdb\xce\xe0\x82\xad" + "\xca\x07\x6c\x54\x62\x6f\x81\xe6" + "\x7a\x5a\x96\x6e\x80\x3a\xa2\x37" + "\x6f\xc6\xa4\x29\xc3\x9e\x19\x94" + "\x9f\xb0\x3e\x38\xfb\x3c\x2b\x7d" + "\xaa\xb8\x74\xda\x54\x23\x51\x12" + "\x4b\x96\x36\x8f\x91\x4f\x19\x37" + "\x83\xc9\xdd\xc7\x1a\x32\x2d\xab" + "\xc7\x89\xe2\x07\x47\x6c\xe8\xa6" + "\x70\x6b\x8e\x0c\xda\x5c\x6a\x59" + "\x27\x33\x0e\xe1\xe1\x20\xe8\xc8" + "\xae\xdc\xd0\xe3\x6d\xa8\xa6\x06" + "\x41\xb4\xd4\xd4\xcf\x91\x3e\x06" + "\xb0\x9a\xf7\xf1\xaa\xa6\x23\x92" + "\x10\x86\xf0\x94\xd1\x7c\x2e\x07" + "\x30\xfb\xc5\xd8\xf3\x12\xa9\xe8" + "\x22\x1c\x97\x1a\xad\x96\xb0\xa1" + "\x72\x6a\x6b\xb4\xfd\xf7\xe8\xfa" + "\xe2\x74\xd8\x65\x8d\x35\x17\x4b" + "\x00\x23\x5c\x8c\x70\xad\x71\xa2" + "\xca\xc5\x6c\x59\xbf\xb4\xc0\x6d" + "\x86\x98\x3e\x19\x5a\x90\x92\xb1" + "\x66\x57\x6a\x91\x68\x7c\xbc\xf3" + "\xf1\xdb\x94\xf8\x48\xf1\x36\xd8" + "\x78\xac\x1c\xa9\xcc\xd6\x27\xba" + "\x91\x54\x22\xf5\xe6\x05\x3f\xcc" + "\xc2\x8f\x2c\x3b\x2b\xc3\x2b\x2b" + "\x3b\xb8\xb6\x29\xb7\x2f\x94\xb6" + "\x7b\xfc\x94\x3e\xd0\x7a\x41\x59" + "\x7b\x1f\x9a\x09\xa6\xed\x4a\x82" + "\x9d\x34\x1c\xbd\x4e\x1c\x3a\x66" + "\x80\x74\x0e\x9a\x4f\x55\x54\x47" + "\x16\xba\x2a\x0a\x03\x35\x99\xa3" + "\x5c\x63\x8d\xa2\x72\x8b\x17\x15" + "\x68\x39\x73\xeb\xec\xf2\xe8\xf5" + "\x95\x32\x27\xd6\xc4\xfe\xb0\x51" + "\xd5\x0c\x50\xc5\xcd\x6d\x16\xb3" + "\xa3\x1e\x95\x69\xad\x78\x95\x06" + "\xb9\x46\xf2\x6d\x24\x5a\x99\x76" + "\x73\x6a\x91\xa6\xac\x12\xe1\x28" + "\x79\xbc\x08\x4e\x97\x00\x98\x63" + "\x07\x1c\x4e\xd1\x68\xf3\xb3\x81" + "\xa8\xa6\x5f\xf1\x01\xc9\xc1\xaf" + "\x3a\x96\xf9\x9d\xb5\x5a\x5f\x8f" + "\x7e\xc1\x7e\x77\x0a\x40\xc8\x8e" + "\xfc\x0e\xed\xe1\x0d\xb0\xe5\x5e" + "\x5e\x6f\xf5\x7f\xab\x33\x7d\xcd" + "\xf0\x09\x4b\xb2\x11\x37\xdc\x65" + "\x97\x32\x62\x71\x3a\x29\x54\xb9" + "\xc7\xa4\xbf\x75\x0f\xf9\x40\xa9" + "\x8d\xd7\x8b\xa7\xe0\x9a\xbe\x15" + "\xc6\xda\xd8\x00\x14\x69\x1a\xaf" + "\x5f\x79\xc3\xf5\xbb\x6c\x2a\x9d" + "\xdd\x3c\x5f\x97\x21\xe1\x3a\x03" + "\x84\x6a\xe9\x76\x11\x1f\xd3\xd5" + "\xf0\x54\x20\x4d\xc2\x91\xc3\xa4" + "\x36\x25\xbe\x1b\x2a\x06\xb7\xf3" + "\xd1\xd0\x55\x29\x81\x4c\x83\xa3" + "\xa6\x84\x1e\x5c\xd1\xd0\x6c\x90" + 
"\xa4\x11\xf0\xd7\x63\x6a\x48\x05" + "\xbc\x48\x18\x53\xcd\xb0\x8d\xdb" + "\xdc\xfe\x55\x11\x5c\x51\xb3\xab" + "\xab\x63\x3e\x31\x5a\x8b\x93\x63" + "\x34\xa9\xba\x2b\x69\x1a\xc0\xe3" + "\xcb\x41\xbc\xd7\xf5\x7f\x82\x3e" + "\x01\xa3\x3c\x72\xf4\xfe\xdf\xbe" + "\xb1\x67\x17\x2b\x37\x60\x0d\xca" + "\x6f\xc3\x94\x2c\xd2\x92\x6d\x9d" + "\x75\x18\x77\xaa\x29\x38\x96\xed" + "\x0e\x20\x70\x92\xd5\xd0\xb4\x00" + "\xc0\x31\xf2\xc9\x43\x0e\x75\x1d" + "\x4b\x64\xf2\x1f\xf2\x29\x6c\x7b" + "\x7f\xec\x59\x7d\x8c\x0d\xd4\xd3" + "\xac\x53\x4c\xa3\xde\x42\x92\x95" + "\x6d\xa3\x4f\xd0\xe6\x3d\xe7\xec" + "\x7a\x4d\x68\xf1\xfe\x67\x66\x09" + "\x83\x22\xb1\x98\x43\x8c\xab\xb8" + "\x45\xe6\x6d\xdf\x5e\x50\x71\xce" + "\xf5\x4e\x40\x93\x2b\xfa\x86\x0e" + "\xe8\x30\xbd\x82\xcc\x1c\x9c\x5f" + "\xad\xfd\x08\x31\xbe\x52\xe7\xe6" + "\xf2\x06\x01\x62\x25\x15\x99\x74" + "\x33\x51\x52\x57\x3f\x57\x87\x61" + "\xb9\x7f\x29\x3d\xcd\x92\x5e\xa6" + "\x5c\x3b\xf1\xed\x5f\xeb\x82\xed" + "\x56\x7b\x61\xe7\xfd\x02\x47\x0e" + "\x2a\x15\xa4\xce\x43\x86\x9b\xe1" + "\x2b\x4c\x2a\xd9\x42\x97\xf7\x9a" + "\xe5\x47\x46\x48\xd3\x55\x6f\x4d" + "\xd9\xeb\x4b\xdd\x7b\x21\x2f\xb3" + "\xa8\x36\x28\xdf\xca\xf1\xf6\xd9" + "\x10\xf6\x1c\xfd\x2e\x0c\x27\xe0" + "\x01\xb3\xff\x6d\x47\x08\x4d\xd4" + "\x00\x25\xee\x55\x4a\xe9\xe8\x5b" + "\xd8\xf7\x56\x12\xd4\x50\xb2\xe5" + "\x51\x6f\x34\x63\x69\xd2\x4e\x96" + "\x4e\xbc\x79\xbf\x18\xae\xc6\x13" + "\x80\x92\x77\xb0\xb4\x0f\x29\x94" + "\x6f\x4c\xbb\x53\x11\x36\xc3\x9f" + "\x42\x8e\x96\x8a\x91\xc8\xe9\xfc" + "\xfe\xbf\x7c\x2d\x6f\xf9\xb8\x44" + "\x89\x1b\x09\x53\x0a\x2a\x92\xc3" + "\x54\x7a\x3a\xf9\xe2\xe4\x75\x87" + "\xa0\x5e\x4b\x03\x7a\x0d\x8a\xf4" + "\x55\x59\x94\x2b\x63\x96\x0e\xf5", + .psize = 1040, + .digest = "\xb5\xb9\x08\xb3\x24\x3e\x03\xf0" + "\xd6\x0b\x57\xbc\x0a\x6d\x89\x59", + }, { + .key = "\xf6\x34\x42\x71\x35\x52\x8b\x58" + "\x02\x3a\x8e\x4a\x8d\x41\x13\xe9" + "\x7f\xba\xb9\x55\x9d\x73\x4d\xf8" + "\x3f\x5d\x73\x15\xff\xd3\x9e\x7f" + "\x20\x2a\x6a\xa8\xd1\xf0\x8f\x12" + "\x6b\x02\xd8\x6c\xde\xba\x80\x22" + "\x19\x37\xc8\xd0\x4e\x89\x17\x7c" + "\x7c\xdd\x88\xfd\x41\xc0\x04\xb7" + "\x1d\xac\x19\xe3\x20\xc7\x16\xcf" + "\x58\xee\x1d\x7a\x61\x69\xa9\x12" + "\x4b\xef\x4f\xb6\x38\xdd\x78\xf8" + "\x28\xee\x70\x08\xc7\x7c\xcc\xc8" + "\x1e\x41\xf5\x80\x86\x70\xd0\xf0" + "\xa3\x87\x6b\x0a\x00\xd2\x41\x28" + "\x74\x26\xf1\x24\xf3\xd0\x28\x77" + "\xd7\xcd\xf6\x2d\x61\xf4\xa2\x13" + "\x77\xb4\x6f\xa0\xf4\xfb\xd6\xb5" + "\x38\x9d\x5a\x0c\x51\xaf\xad\x63" + "\x27\x67\x8c\x01\xea\x42\x1a\x66" + "\xda\x16\x7c\x3c\x30\x0c\x66\x53" + "\x1c\x88\xa4\x5c\xb2\xe3\x78\x0a" + "\x13\x05\x6d\xe2\xaf\xb3\xe4\x75" + "\x00\x99\x58\xee\x76\x09\x64\xaa" + "\xbb\x2e\xb1\x81\xec\xd8\x0e\xd3" + "\x0c\x33\x5d\xb7\x98\xef\x36\xb6" + "\xd2\x65\x69\x41\x70\x12\xdc\x25" + "\x41\x03\x99\x81\x41\x19\x62\x13" + "\xd1\x0a\x29\xc5\x8c\xe0\x4c\xf3" + "\xd6\xef\x4c\xf4\x1d\x83\x2e\x6d" + "\x8e\x14\x87\xed\x80\xe0\xaa\xd3" + "\x08\x04\x73\x1a\x84\x40\xf5\x64" + "\xbd\x61\x32\x65\x40\x42\xfb\xb0" + "\x40\xf6\x40\x8d\xc7\x7f\x14\xd0" + "\x83\x99\xaa\x36\x7e\x60\xc6\xbf" + "\x13\x8a\xf9\x21\xe4\x7e\x68\x87" + "\xf3\x33\x86\xb4\xe0\x23\x7e\x0a" + "\x21\xb1\xf5\xad\x67\x3c\x9c\x9d" + "\x09\xab\xaf\x5f\xba\xe0\xd0\x82" + "\x48\x22\x70\xb5\x6d\x53\xd6\x0e" + "\xde\x64\x92\x41\xb0\xd3\xfb\xda" + "\x21\xfe\xab\xea\x20\xc4\x03\x58" + "\x18\x2e\x7d\x2f\x03\xa9\x47\x66" + "\xdf\x7b\xa4\x6b\x34\x6b\x55\x9c" + "\x4f\xd7\x9c\x47\xfb\xa9\x42\xec" + "\x5a\x12\xfd\xfe\x76\xa0\x92\x9d" + "\xfe\x1e\x16\xdd\x24\x2a\xe4\x27" + 
"\xd5\xa9\xf2\x05\x4f\x83\xa2\xaf" + "\xfe\xee\x83\x7a\xad\xde\xdf\x9a" + "\x80\xd5\x81\x14\x93\x16\x7e\x46" + "\x47\xc2\x14\xef\x49\x6e\xb9\xdb" + "\x40\xe8\x06\x6f\x9c\x2a\xfd\x62" + "\x06\x46\xfd\x15\x1d\x36\x61\x6f" + "\x77\x77\x5e\x64\xce\x78\x1b\x85" + "\xbf\x50\x9a\xfd\x67\xa6\x1a\x65" + "\xad\x5b\x33\x30\xf1\x71\xaa\xd9" + "\x23\x0d\x92\x24\x5f\xae\x57\xb0" + "\x24\x37\x0a\x94\x12\xfb\xb5\xb1" + "\xd3\xb8\x1d\x12\x29\xb0\x80\x24" + "\x2d\x47\x9f\x96\x1f\x95\xf1\xb1" + "\xda\x35\xf6\x29\xe0\xe1\x23\x96" + "\xc7\xe8\x22\x9b\x7c\xac\xf9\x41" + "\x39\x01\xe5\x73\x15\x5e\x99\xec" + "\xb4\xc1\xf4\xe7\xa7\x97\x6a\xd5" + "\x90\x9a\xa0\x1d\xf3\x5a\x8b\x5f" + "\xdf\x01\x52\xa4\x93\x31\x97\xb0" + "\x93\x24\xb5\xbc\xb2\x14\x24\x98" + "\x4a\x8f\x19\x85\xc3\x2d\x0f\x74" + "\x9d\x16\x13\x80\x5e\x59\x62\x62" + "\x25\xe0\xd1\x2f\x64\xef\xba\xac" + "\xcd\x09\x07\x15\x8a\xcf\x73\xb5" + "\x8b\xc9\xd8\x24\xb0\x53\xd5\x6f" + "\xe1\x2b\x77\xb1\xc5\xe4\xa7\x0e" + "\x18\x45\xab\x36\x03\x59\xa8\xbd" + "\x43\xf0\xd8\x2c\x1a\x69\x96\xbb" + "\x13\xdf\x6c\x33\x77\xdf\x25\x34" + "\x5b\xa5\x5b\x8c\xf9\x51\x05\xd4" + "\x8b\x8b\x44\x87\x49\xfc\xa0\x8f" + "\x45\x15\x5b\x40\x42\xc4\x09\x92" + "\x98\x0c\x4d\xf4\x26\x37\x1b\x13" + "\x76\x01\x93\x8d\x4f\xe6\xed\x18" + "\xd0\x79\x7b\x3f\x44\x50\xcb\xee" + "\xf7\x4a\xc9\x9e\xe0\x96\x74\xa7" + "\xe6\x93\xb2\x53\xca\x55\xa8\xdc" + "\x1e\x68\x07\x87\xb7\x2e\xc1\x08" + "\xb2\xa4\x5b\xaf\xc6\xdb\x5c\x66" + "\x41\x1c\x51\xd9\xb0\x07\x00\x0d" + "\xf0\x4c\xdc\x93\xde\xa9\x1e\x8e" + "\xd3\x22\x62\xd8\x8b\x88\x2c\xea" + "\x5e\xf1\x6e\x14\x40\xc7\xbe\xaa" + "\x42\x28\xd0\x26\x30\x78\x01\x9b" + "\x83\x07\xbc\x94\xc7\x57\xa2\x9f" + "\x03\x07\xff\x16\xff\x3c\x6e\x48" + "\x0a\xd0\xdd\x4c\xf6\x64\x9a\xf1" + "\xcd\x30\x12\x82\x2c\x38\xd3\x26" + "\x83\xdb\xab\x3e\xc6\xf8\xe6\xfa" + "\x77\x0a\x78\x82\x75\xf8\x63\x51" + "\x59\xd0\x8d\x24\x9f\x25\xe6\xa3" + "\x4c\xbc\x34\xfc\xe3\x10\xc7\x62" + "\xd4\x23\xc8\x3d\xa7\xc6\xa6\x0a" + "\x4f\x7e\x29\x9d\x6d\xbe\xb5\xf1" + "\xdf\xa4\x53\xfa\xc0\x23\x0f\x37" + "\x84\x68\xd0\xb5\xc8\xc6\xae\xf8" + "\xb7\x8d\xb3\x16\xfe\x8f\x87\xad" + "\xd0\xc1\x08\xee\x12\x1c\x9b\x1d" + "\x90\xf8\xd1\x63\xa4\x92\x3c\xf0" + "\xc7\x34\xd8\xf1\x14\xed\xa3\xbc" + "\x17\x7e\xd4\x62\x42\x54\x57\x2c" + "\x3e\x7a\x35\x35\x17\x0f\x0b\x7f" + "\x81\xa1\x3f\xd0\xcd\xc8\x3b\x96" + "\xe9\xe0\x4a\x04\xe1\xb6\x3c\xa1" + "\xd6\xca\xc4\xbd\xb6\xb5\x95\x34" + "\x12\x9d\xc5\x96\xf2\xdf\xba\x54" + "\x76\xd1\xb2\x6b\x3b\x39\xe0\xb9" + "\x18\x62\xfb\xf7\xfc\x12\xf1\x5f" + "\x7e\xc7\xe3\x59\x4c\xa6\xc2\x3d" + "\x40\x15\xf9\xa3\x95\x64\x4c\x74" + "\x8b\x73\x77\x33\x07\xa7\x04\x1d" + "\x33\x5a\x7e\x8f\xbd\x86\x01\x4f" + "\x3e\xb9\x27\x6f\xe2\x41\xf7\x09" + "\x67\xfd\x29\x28\xc5\xe4\xf6\x18" + "\x4c\x1b\x49\xb2\x9c\x5b\xf6\x81" + "\x4f\xbb\x5c\xcc\x0b\xdf\x84\x23" + "\x58\xd6\x28\x34\x93\x3a\x25\x97" + "\xdf\xb2\xc3\x9e\x97\x38\x0b\x7d" + "\x10\xb3\x54\x35\x23\x8c\x64\xee" + "\xf0\xd8\x66\xff\x8b\x22\xd2\x5b" + "\x05\x16\x3c\x89\xf7\xb1\x75\xaf" + "\xc0\xae\x6a\x4f\x3f\xaf\x9a\xf4" + "\xf4\x9a\x24\xd9\x80\x82\xc0\x12" + "\xde\x96\xd1\xbe\x15\x0b\x8d\x6a" + "\xd7\x12\xe4\x85\x9f\x83\xc9\xc3" + "\xff\x0b\xb5\xaf\x3b\xd8\x6d\x67" + "\x81\x45\xe6\xac\xec\xc1\x7b\x16" + "\x18\x0a\xce\x4b\xc0\x2e\x76\xbc" + "\x1b\xfa\xb4\x34\xb8\xfc\x3e\xc8" + "\x5d\x90\x71\x6d\x7a\x79\xef\x06", + .ksize = 1088, + .plaintext = "\xaa\x5d\x54\xcb\xea\x1e\x46\x0f" + "\x45\x87\x70\x51\x8a\x66\x7a\x33" + "\xb4\x18\xff\xa9\x82\xf9\x45\x4b" + "\x93\xae\x2e\x7f\xab\x98\xfe\xbf" + "\x01\xee\xe5\xa0\x37\x8f\x57\xa6" + 
"\xb0\x76\x0d\xa4\xd6\x28\x2b\x5d" + "\xe1\x03\xd6\x1c\x6f\x34\x0d\xe7" + "\x61\x2d\x2e\xe5\xae\x5d\x47\xc7" + "\x80\x4b\x18\x8f\xa8\x99\xbc\x28" + "\xed\x1d\x9d\x86\x7d\xd7\x41\xd1" + "\xe0\x2b\xe1\x8c\x93\x2a\xa7\x80" + "\xe1\x07\xa0\xa9\x9f\x8c\x8d\x1a" + "\x55\xfc\x6b\x24\x7a\xbd\x3e\x51" + "\x68\x4b\x26\x59\xc8\xa7\x16\xd9" + "\xb9\x61\x13\xde\x8b\x63\x1c\xf6" + "\x60\x01\xfb\x08\xb3\x5b\x0a\xbf" + "\x34\x73\xda\x87\x87\x3d\x6f\x97" + "\x4a\x0c\xa3\x58\x20\xa2\xc0\x81" + "\x5b\x8c\xef\xa9\xc2\x01\x1e\x64" + "\x83\x8c\xbc\x03\xb6\xd0\x29\x9f" + "\x54\xe2\xce\x8b\xc2\x07\x85\x78" + "\x25\x38\x96\x4c\xb4\xbe\x17\x4a" + "\x65\xa6\xfa\x52\x9d\x66\x9d\x65" + "\x4a\xd1\x01\x01\xf0\xcb\x13\xcc" + "\xa5\x82\xf3\xf2\x66\xcd\x3f\x9d" + "\xd1\xaa\xe4\x67\xea\xf2\xad\x88" + "\x56\x76\xa7\x9b\x59\x3c\xb1\x5d" + "\x78\xfd\x69\x79\x74\x78\x43\x26" + "\x7b\xde\x3f\xf1\xf5\x4e\x14\xd9" + "\x15\xf5\x75\xb5\x2e\x19\xf3\x0c" + "\x48\x72\xd6\x71\x6d\x03\x6e\xaa" + "\xa7\x08\xf9\xaa\x70\xa3\x0f\x4d" + "\x12\x8a\xdd\xe3\x39\x73\x7e\xa7" + "\xea\x1f\x6d\x06\x26\x2a\xf2\xc5" + "\x52\xb4\xbf\xfd\x52\x0c\x06\x60" + "\x90\xd1\xb2\x7b\x56\xae\xac\x58" + "\x5a\x6b\x50\x2a\xf5\xe0\x30\x3c" + "\x2a\x98\x0f\x1b\x5b\x0a\x84\x6c" + "\x31\xae\x92\xe2\xd4\xbb\x7f\x59" + "\x26\x10\xb9\x89\x37\x68\x26\xbf" + "\x41\xc8\x49\xc4\x70\x35\x7d\xff" + "\x2d\x7f\xf6\x8a\x93\x68\x8c\x78" + "\x0d\x53\xce\x7d\xff\x7d\xfb\xae" + "\x13\x1b\x75\xc4\x78\xd7\x71\xd8" + "\xea\xd3\xf4\x9d\x95\x64\x8e\xb4" + "\xde\xb8\xe4\xa6\x68\xc8\xae\x73" + "\x58\xaf\xa8\xb0\x5a\x20\xde\x87" + "\x43\xb9\x0f\xe3\xad\x41\x4b\xd5" + "\xb7\xad\x16\x00\xa6\xff\xf6\x74" + "\xbf\x8c\x9f\xb3\x58\x1b\xb6\x55" + "\xa9\x90\x56\x28\xf0\xb5\x13\x4e" + "\x9e\xf7\x25\x86\xe0\x07\x7b\x98" + "\xd8\x60\x5d\x38\x95\x3c\xe4\x22" + "\x16\x2f\xb2\xa2\xaf\xe8\x90\x17" + "\xec\x11\x83\x1a\xf4\xa9\x26\xda" + "\x39\x72\xf5\x94\x61\x05\x51\xec" + "\xa8\x30\x8b\x2c\x13\xd0\x72\xac" + "\xb9\xd2\xa0\x4c\x4b\x78\xe8\x6e" + "\x04\x85\xe9\x04\x49\x82\x91\xff" + "\x89\xe5\xab\x4c\xaa\x37\x03\x12" + "\xca\x8b\x74\x10\xfd\x9e\xd9\x7b" + "\xcb\xdb\x82\x6e\xce\x2e\x33\x39" + "\xce\xd2\x84\x6e\x34\x71\x51\x6e" + "\x0d\xd6\x01\x87\xc7\xfa\x0a\xd3" + "\xad\x36\xf3\x4c\x9f\x96\x5e\x62" + "\x62\x54\xc3\x03\x78\xd6\xab\xdd" + "\x89\x73\x55\x25\x30\xf8\xa7\xe6" + "\x4f\x11\x0c\x7c\x0a\xa1\x2b\x7b" + "\x3d\x0d\xde\x81\xd4\x9d\x0b\xae" + "\xdf\x00\xf9\x4c\xb6\x90\x8e\x16" + "\xcb\x11\xc8\xd1\x2e\x73\x13\x75" + "\x75\x3e\xaa\xf5\xee\x02\xb3\x18" + "\xa6\x2d\xf5\x3b\x51\xd1\x1f\x47" + "\x6b\x2c\xdb\xc4\x10\xe0\xc8\xba" + "\x9d\xac\xb1\x9d\x75\xd5\x41\x0e" + "\x7e\xbe\x18\x5b\xa4\x1f\xf8\x22" + "\x4c\xc1\x68\xda\x6d\x51\x34\x6c" + "\x19\x59\xec\xb5\xb1\xec\xa7\x03" + "\xca\x54\x99\x63\x05\x6c\xb1\xac" + "\x9c\x31\xd6\xdb\xba\x7b\x14\x12" + "\x7a\xc3\x2f\xbf\x8d\xdc\x37\x46" + "\xdb\xd2\xbc\xd4\x2f\xab\x30\xd5" + "\xed\x34\x99\x8e\x83\x3e\xbe\x4c" + "\x86\x79\x58\xe0\x33\x8d\x9a\xb8" + "\xa9\xa6\x90\x46\xa2\x02\xb8\xdd" + "\xf5\xf9\x1a\x5c\x8c\x01\xaa\x6e" + "\xb4\x22\x12\xf5\x0c\x1b\x9b\x7a" + "\xc3\x80\xf3\x06\x00\x5f\x30\xd5" + "\x06\xdb\x7d\x82\xc2\xd4\x0b\x4c" + "\x5f\xe9\xc5\xf5\xdf\x97\x12\xbf" + "\x56\xaf\x9b\x69\xcd\xee\x30\xb4" + "\xa8\x71\xff\x3e\x7d\x73\x7a\xb4" + "\x0d\xa5\x46\x7a\xf3\xf4\x15\x87" + "\x5d\x93\x2b\x8c\x37\x64\xb5\xdd" + "\x48\xd1\xe5\x8c\xae\xd4\xf1\x76" + "\xda\xf4\xba\x9e\x25\x0e\xad\xa3" + "\x0d\x08\x7c\xa8\x82\x16\x8d\x90" + "\x56\x40\x16\x84\xe7\x22\x53\x3a" + "\x58\xbc\xb9\x8f\x33\xc8\xc2\x84" + "\x22\xe6\x0d\xe7\xb3\xdc\x5d\xdf" + "\xd7\x2a\x36\xe4\x16\x06\x07\xd2" + 
"\x97\x60\xb2\xf5\x5e\x14\xc9\xfd" + "\x8b\x05\xd1\xce\xee\x9a\x65\x99" + "\xb7\xae\x19\xb7\xc8\xbc\xd5\xa2" + "\x7b\x95\xe1\xcc\xba\x0d\xdc\x8a" + "\x1d\x59\x52\x50\xaa\x16\x02\x82" + "\xdf\x61\x33\x2e\x44\xce\x49\xc7" + "\xe5\xc6\x2e\x76\xcf\x80\x52\xf0" + "\x3d\x17\x34\x47\x3f\xd3\x80\x48" + "\xa2\xba\xd5\xc7\x7b\x02\x28\xdb" + "\xac\x44\xc7\x6e\x05\x5c\xc2\x79" + "\xb3\x7d\x6a\x47\x77\x66\xf1\x38" + "\xf0\xf5\x4f\x27\x1a\x31\xca\x6c" + "\x72\x95\x92\x8e\x3f\xb0\xec\x1d" + "\xc7\x2a\xff\x73\xee\xdf\x55\x80" + "\x93\xd2\xbd\x34\xd3\x9f\x00\x51" + "\xfb\x2e\x41\xba\x6c\x5a\x7c\x17" + "\x7f\xe6\x70\xac\x8d\x39\x3f\x77" + "\xe2\x23\xac\x8f\x72\x4e\xe4\x53" + "\xcc\xf1\x1b\xf1\x35\xfe\x52\xa4" + "\xd6\xb8\x40\x6b\xc1\xfd\xa0\xa1" + "\xf5\x46\x65\xc2\x50\xbb\x43\xe2" + "\xd1\x43\x28\x34\x74\xf5\x87\xa0" + "\xf2\x5e\x27\x3b\x59\x2b\x3e\x49" + "\xdf\x46\xee\xaf\x71\xd7\x32\x36" + "\xc7\x14\x0b\x58\x6e\x3e\x2d\x41" + "\xfa\x75\x66\x3a\x54\xe0\xb2\xb9" + "\xaf\xdd\x04\x80\x15\x19\x3f\x6f" + "\xce\x12\xb4\xd8\xe8\x89\x3c\x05" + "\x30\xeb\xf3\x3d\xcd\x27\xec\xdc" + "\x56\x70\x12\xcf\x78\x2b\x77\xbf" + "\x22\xf0\x1b\x17\x9c\xcc\xd6\x1b" + "\x2d\x3d\xa0\x3b\xd8\xc9\x70\xa4" + "\x7a\x3e\x07\xb9\x06\xc3\xfa\xb0" + "\x33\xee\xc1\xd8\xf6\xe0\xf0\xb2" + "\x61\x12\x69\xb0\x5f\x28\x99\xda" + "\xc3\x61\x48\xfa\x07\x16\x03\xc4" + "\xa8\xe1\x3c\xe8\x0e\x64\x15\x30" + "\xc1\x9d\x84\x2f\x73\x98\x0e\x3a" + "\xf2\x86\x21\xa4\x9e\x1d\xb5\x86" + "\x16\xdb\x2b\x9a\x06\x64\x8e\x79" + "\x8d\x76\x3e\xc3\xc2\x64\x44\xe3" + "\xda\xbc\x1a\x52\xd7\x61\x03\x65" + "\x54\x32\x77\x01\xed\x9d\x8a\x43" + "\x25\x24\xe3\xc1\xbe\xb8\x2f\xcb" + "\x89\x14\x64\xab\xf6\xa0\x6e\x02" + "\x57\xe4\x7d\xa9\x4e\x9a\x03\x36" + "\xad\xf1\xb1\xfc\x0b\xe6\x79\x51" + "\x9f\x81\x77\xc4\x14\x78\x9d\xbf" + "\xb6\xd6\xa3\x8c\xba\x0b\x26\xe7" + "\xc8\xb9\x5c\xcc\xe1\x5f\xd5\xc6" + "\xc4\xca\xc2\xa3\x45\xba\x94\x13" + "\xb2\x8f\xc3\x54\x01\x09\xe7\x8b" + "\xda\x2a\x0a\x11\x02\x43\xcb\x57" + "\xc9\xcc\xb5\x5c\xab\xc4\xec\x54" + "\x00\x06\x34\xe1\x6e\x03\x89\x7c" + "\xc6\xfb\x6a\xc7\x60\x43\xd6\xc5" + "\xb5\x68\x72\x89\x8f\x42\xc3\x74" + "\xbd\x25\xaa\x9f\x67\xb5\xdf\x26" + "\x20\xe8\xb7\x01\x3c\xe4\x77\xce" + "\xc4\x65\xa7\x23\x79\xea\x33\xc7" + "\x82\x14\x5c\x82\xf2\x4e\x3d\xf6" + "\xc6\x4a\x0e\x29\xbb\xec\x44\xcd" + "\x2f\xd1\x4f\x21\x71\xa9\xce\x0f" + "\x5c\xf2\x72\x5c\x08\x2e\x21\xd2" + "\xc3\x29\x13\xd8\xac\xc3\xda\x13" + "\x1a\x9d\xa7\x71\x1d\x27\x1d\x27" + "\x1d\xea\xab\x44\x79\xad\xe5\xeb" + "\xef\x1f\x22\x0a\x44\x4f\xcb\x87" + "\xa7\x58\x71\x0e\x66\xf8\x60\xbf" + "\x60\x74\x4a\xb4\xec\x2e\xfe\xd3" + "\xf5\xb8\xfe\x46\x08\x50\x99\x6c" + "\x66\xa5\xa8\x34\x44\xb5\xe5\xf0" + "\xdd\x2c\x67\x4e\x35\x96\x8e\x67" + "\x48\x3f\x5f\x37\x44\x60\x51\x2e" + "\x14\x91\x5e\x57\xc3\x0e\x79\x77" + "\x2f\x03\xf4\xe2\x1c\x72\xbf\x85" + "\x5d\xd3\x17\xdf\x6c\xc5\x70\x24" + "\x42\xdf\x51\x4e\x2a\xb2\xd2\x5b" + "\x9e\x69\x83\x41\x11\xfe\x73\x22" + "\xde\x8a\x9e\xd8\x8a\xfb\x20\x38" + "\xd8\x47\x6f\xd5\xed\x8f\x41\xfd" + "\x13\x7a\x18\x03\x7d\x0f\xcd\x7d" + "\xa6\x7d\x31\x9e\xf1\x8f\x30\xa3" + "\x8b\x4c\x24\xb7\xf5\x48\xd7\xd9" + "\x12\xe7\x84\x97\x5c\x31\x6d\xfb" + "\xdf\xf3\xd3\xd1\xd5\x0c\x30\x06" + "\x01\x6a\xbc\x6c\x78\x7b\xa6\x50" + "\xfa\x0f\x3c\x42\x2d\xa5\xa3\x3b" + "\xcf\x62\x50\xff\x71\x6d\xe7\xda" + "\x27\xab\xc6\x67\x16\x65\x68\x64" + "\xc7\xd5\x5f\x81\xa9\xf6\x65\xb3" + "\x5e\x43\x91\x16\xcd\x3d\x55\x37" + "\x55\xb3\xf0\x28\xc5\x54\x19\xc0" + "\xe0\xd6\x2a\x61\xd4\xc8\x72\x51" + "\xe9\xa1\x7b\x48\x21\xad\x44\x09" + "\xe4\x01\x61\x3c\x8a\x5b\xf9\xa1" + 
"\x6e\x1b\xdf\xc0\x04\xa8\x8b\xf2" + "\x21\xbe\x34\x7b\xfc\xa1\xcd\xc9" + "\xa9\x96\xf4\xa4\x4c\xf7\x4e\x8f" + "\x84\xcc\xd3\xa8\x92\x77\x8f\x36" + "\xe2\x2e\x8c\x33\xe8\x84\xa6\x0c" + "\x6c\x8a\xda\x14\x32\xc2\x96\xff" + "\xc6\x4a\xc2\x9b\x30\x7f\xd1\x29" + "\xc0\xd5\x78\x41\x00\x80\x80\x03" + "\x2a\xb1\xde\x26\x03\x48\x49\xee" + "\x57\x14\x76\x51\x3c\x36\x5d\x0a" + "\x5c\x9f\xe8\xd8\x53\xdb\x4f\xd4" + "\x38\xbf\x66\xc9\x75\x12\x18\x75" + "\x34\x2d\x93\x22\x96\x51\x24\x6e" + "\x4e\xd9\x30\xea\x67\xff\x92\x1c" + "\x16\x26\xe9\xb5\x33\xab\x8c\x22" + "\x47\xdb\xa0\x2c\x08\xf0\x12\x69" + "\x7e\x93\x52\xda\xa5\xe5\xca\xc1" + "\x0f\x55\x2a\xbd\x09\x30\x88\x1b" + "\x9c\xc6\x9f\xe6\xdb\xa6\x92\xeb" + "\xf4\xbd\x5c\xc4\xdb\xc6\x71\x09" + "\xab\x5e\x48\x0c\xed\x6f\xda\x8e" + "\x8d\x0c\x98\x71\x7d\x10\xd0\x9c" + "\x20\x9b\x79\x53\x26\x5d\xb9\x85" + "\x8a\x31\xb8\xc5\x1c\x97\xde\x88" + "\x61\x55\x7f\x7c\x21\x06\xea\xc4" + "\x5f\xaf\xf2\xf0\xd5\x5e\x7d\xb4" + "\x6e\xcf\xe9\xae\x1b\x0e\x11\x80" + "\xc1\x9a\x74\x7e\x52\x6f\xa0\xb7" + "\x24\xcd\x8d\x0a\x11\x40\x63\x72" + "\xfa\xe2\xc5\xb3\x94\xef\x29\xa2" + "\x1a\x23\x43\x04\x37\x55\x0d\xe9" + "\x83\xb2\x29\x51\x49\x64\xa0\xbd" + "\xde\x73\xfd\xa5\x7c\x95\x70\x62" + "\x58\xdc\xe2\xd0\xbf\x98\xf5\x8a" + "\x6a\xfd\xce\xa8\x0e\x42\x2a\xeb" + "\xd2\xff\x83\x27\x53\x5c\xa0\x6e" + "\x93\xef\xe2\xb9\x5d\x35\xd6\x98" + "\xf6\x71\x19\x7a\x54\xa1\xa7\xe8" + "\x09\xfe\xf6\x9e\xc7\xbd\x3e\x29" + "\xbd\x6b\x17\xf4\xe7\x3e\x10\x5c" + "\xc1\xd2\x59\x4f\x4b\x12\x1a\x5b" + "\x50\x80\x59\xb9\xec\x13\x66\xa8" + "\xd2\x31\x7b\x6a\x61\x22\xdd\x7d" + "\x61\xee\x87\x16\x46\x9f\xf9\xc7" + "\x41\xee\x74\xf8\xd0\x96\x2c\x76" + "\x2a\xac\x7d\x6e\x9f\x0e\x7f\x95" + "\xfe\x50\x16\xb2\x23\xca\x62\xd5" + "\x68\xcf\x07\x3f\x3f\x97\x85\x2a" + "\x0c\x25\x45\xba\xdb\x32\xcb\x83" + "\x8c\x4f\xe0\x6d\x9a\x99\xf9\xc9" + "\xda\xd4\x19\x31\xc1\x7c\x6d\xd9" + "\x9c\x56\xd3\xec\xc1\x81\x4c\xed" + "\x28\x9d\x87\xeb\x19\xd7\x1a\x4f" + "\x04\x6a\xcb\x1f\xcf\x1f\xa2\x16" + "\xfc\x2a\x0d\xa1\x14\x2d\xfa\xc5" + "\x5a\xd2\xc5\xf9\x19\x7c\x20\x1f" + "\x2d\x10\xc0\x66\x7c\xd9\x2d\xe5" + "\x88\x70\x59\xa7\x85\xd5\x2e\x7c" + "\x5c\xe3\xb7\x12\xd6\x97\x3f\x29", + .psize = 2048, + .digest = "\x37\x90\x92\xc2\xeb\x01\x87\xd9" + "\x95\xc7\x91\xc3\x17\x8b\x38\x52", + } +}; + + /* * DES test vectors. 
*/ @@ -11449,6 +12797,82 @@ static const struct cipher_testvec aes_cbc_tv_template[] = { }, }; +static const struct cipher_testvec aes_cfb_tv_template[] = { + { /* From NIST SP800-38A */ + .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6" + "\xab\xf7\x15\x88\x09\xcf\x4f\x3c", + .klen = 16, + .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20" + "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a" + "\xc8\xa6\x45\x37\xa0\xb3\xa9\x3f" + "\xcd\xe3\xcd\xad\x9f\x1c\xe5\x8b" + "\x26\x75\x1f\x67\xa3\xcb\xb1\x40" + "\xb1\x80\x8c\xf1\x87\xa4\xf4\xdf" + "\xc0\x4b\x05\x35\x7c\x5d\x1c\x0e" + "\xea\xc4\xc6\x6f\x9f\xf7\xf2\xe6", + .len = 64, + }, { + .key = "\x8e\x73\xb0\xf7\xda\x0e\x64\x52" + "\xc8\x10\xf3\x2b\x80\x90\x79\xe5" + "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b", + .klen = 24, + .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .ctext = "\xcd\xc8\x0d\x6f\xdd\xf1\x8c\xab" + "\x34\xc2\x59\x09\xc9\x9a\x41\x74" + "\x67\xce\x7f\x7f\x81\x17\x36\x21" + "\x96\x1a\x2b\x70\x17\x1d\x3d\x7a" + "\x2e\x1e\x8a\x1d\xd5\x9b\x88\xb1" + "\xc8\xe6\x0f\xed\x1e\xfa\xc4\xc9" + "\xc0\x5f\x9f\x9c\xa9\x83\x4f\xa0" + "\x42\xae\x8f\xba\x58\x4b\x09\xff", + .len = 64, + }, { + .key = "\x60\x3d\xeb\x10\x15\xca\x71\xbe" + "\x2b\x73\xae\xf0\x85\x7d\x77\x81" + "\x1f\x35\x2c\x07\x3b\x61\x08\xd7" + "\x2d\x98\x10\xa3\x09\x14\xdf\xf4", + .klen = 32, + .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .ctext = "\xdc\x7e\x84\xbf\xda\x79\x16\x4b" + "\x7e\xcd\x84\x86\x98\x5d\x38\x60" + "\x39\xff\xed\x14\x3b\x28\xb1\xc8" + "\x32\x11\x3c\x63\x31\xe5\x40\x7b" + "\xdf\x10\x13\x24\x15\xe5\x4b\x92" + "\xa1\x3e\xd0\xa8\x26\x7a\xe2\xf9" + "\x75\xa3\x85\x74\x1a\xb9\xce\xf8" + "\x20\x31\x62\x3d\x55\xb1\xe4\x71", + .len = 64, + }, +}; + static const struct aead_testvec hmac_md5_ecb_cipher_null_enc_tv_template[] = { { /* Input data from RFC 2410 Case 1 */ #ifdef __LITTLE_ENDIAN @@ -30802,6 +32226,1794 @@ static const struct cipher_testvec chacha20_tv_template[] = { }, }; +static const struct cipher_testvec xchacha20_tv_template[] = { + { /* from libsodium test/default/xchacha20.c */ + .key = "\x79\xc9\x97\x98\xac\x67\x30\x0b" + "\xbb\x27\x04\xc9\x5c\x34\x1e\x32" + "\x45\xf3\xdc\xb2\x17\x61\xb9\x8e" + "\x52\xff\x45\xb2\x4f\x30\x4f\xc4", + .klen = 32, + .iv = "\xb3\x3f\xfd\x30\x96\x47\x9b\xcf" + "\xbc\x9a\xee\x49\x41\x76\x88\xa0" + "\xa2\x55\x4f\x8d\x95\x38\x94\x19" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00", + 
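/* 29 bytes: not a multiple of the 64-byte ChaCha block, so partial-block handling is covered */ + 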
.ctext = "\xc6\xe9\x75\x81\x60\x08\x3a\xc6" + "\x04\xef\x90\xe7\x12\xce\x6e\x75" + "\xd7\x79\x75\x90\x74\x4e\x0c\xf0" + "\x60\xf0\x13\x73\x9c", + .len = 29, + }, { /* from libsodium test/default/xchacha20.c */ + .key = "\x9d\x23\xbd\x41\x49\xcb\x97\x9c" + "\xcf\x3c\x5c\x94\xdd\x21\x7e\x98" + "\x08\xcb\x0e\x50\xcd\x0f\x67\x81" + "\x22\x35\xea\xaf\x60\x1d\x62\x32", + .klen = 32, + .iv = "\xc0\x47\x54\x82\x66\xb7\xc3\x70" + "\xd3\x35\x66\xa2\x42\x5c\xbf\x30" + "\xd8\x2d\x1e\xaf\x52\x94\x10\x9e" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00", + .ctext = "\xa2\x12\x09\x09\x65\x94\xde\x8c" + "\x56\x67\xb1\xd1\x3a\xd9\x3f\x74" + "\x41\x06\xd0\x54\xdf\x21\x0e\x47" + "\x82\xcd\x39\x6f\xec\x69\x2d\x35" + "\x15\xa2\x0b\xf3\x51\xee\xc0\x11" + "\xa9\x2c\x36\x78\x88\xbc\x46\x4c" + "\x32\xf0\x80\x7a\xcd\x6c\x20\x3a" + "\x24\x7e\x0d\xb8\x54\x14\x84\x68" + "\xe9\xf9\x6b\xee\x4c\xf7\x18\xd6" + "\x8d\x5f\x63\x7c\xbd\x5a\x37\x64" + "\x57\x78\x8e\x6f\xae\x90\xfc\x31" + "\x09\x7c\xfc", + .len = 91, + }, { /* Taken from the ChaCha20 test vectors, appended 12 random bytes + to the nonce, zero-padded the stream position from 4 to 8 bytes, + and recomputed the ciphertext using libsodium's XChaCha20 */ + .key = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x67\xc6\x69\x73" + "\x51\xff\x4a\xec\x29\xcd\xba\xab" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ctext = "\x9c\x49\x2a\xe7\x8a\x2f\x93\xc7" + "\xb3\x33\x6f\x82\x17\xd8\xc4\x1e" + "\xad\x80\x11\x11\x1d\x4c\x16\x18" + "\x07\x73\x9b\x4f\xdb\x7c\xcb\x47" + "\xfd\xef\x59\x74\xfa\x3f\xe5\x4c" + "\x9b\xd0\xea\xbc\xba\x56\xad\x32" + "\x03\xdc\xf8\x2b\xc1\xe1\x75\x67" + "\x23\x7b\xe6\xfc\xd4\x03\x86\x54", + .len = 64, + }, { /* Derived from a ChaCha20 test vector, via the process above */ + .key = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x01", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x02\xf2\xfb\xe3\x46" + "\x7c\xc2\x54\xf8\x1b\xe8\xe7\x8d" + "\x01\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x41\x6e\x79\x20\x73\x75\x62\x6d" + "\x69\x73\x73\x69\x6f\x6e\x20\x74" + "\x6f\x20\x74\x68\x65\x20\x49\x45" + "\x54\x46\x20\x69\x6e\x74\x65\x6e" + "\x64\x65\x64\x20\x62\x79\x20\x74" + "\x68\x65\x20\x43\x6f\x6e\x74\x72" + "\x69\x62\x75\x74\x6f\x72\x20\x66" + "\x6f\x72\x20\x70\x75\x62\x6c\x69" + "\x63\x61\x74\x69\x6f\x6e\x20\x61" + "\x73\x20\x61\x6c\x6c\x20\x6f\x72" + "\x20\x70\x61\x72\x74\x20\x6f\x66" + "\x20\x61\x6e\x20\x49\x45\x54\x46" + "\x20\x49\x6e\x74\x65\x72\x6e\x65" + "\x74\x2d\x44\x72\x61\x66\x74\x20" + "\x6f\x72\x20\x52\x46\x43\x20\x61" + 
"\x6e\x64\x20\x61\x6e\x79\x20\x73" + "\x74\x61\x74\x65\x6d\x65\x6e\x74" + "\x20\x6d\x61\x64\x65\x20\x77\x69" + "\x74\x68\x69\x6e\x20\x74\x68\x65" + "\x20\x63\x6f\x6e\x74\x65\x78\x74" + "\x20\x6f\x66\x20\x61\x6e\x20\x49" + "\x45\x54\x46\x20\x61\x63\x74\x69" + "\x76\x69\x74\x79\x20\x69\x73\x20" + "\x63\x6f\x6e\x73\x69\x64\x65\x72" + "\x65\x64\x20\x61\x6e\x20\x22\x49" + "\x45\x54\x46\x20\x43\x6f\x6e\x74" + "\x72\x69\x62\x75\x74\x69\x6f\x6e" + "\x22\x2e\x20\x53\x75\x63\x68\x20" + "\x73\x74\x61\x74\x65\x6d\x65\x6e" + "\x74\x73\x20\x69\x6e\x63\x6c\x75" + "\x64\x65\x20\x6f\x72\x61\x6c\x20" + "\x73\x74\x61\x74\x65\x6d\x65\x6e" + "\x74\x73\x20\x69\x6e\x20\x49\x45" + "\x54\x46\x20\x73\x65\x73\x73\x69" + "\x6f\x6e\x73\x2c\x20\x61\x73\x20" + "\x77\x65\x6c\x6c\x20\x61\x73\x20" + "\x77\x72\x69\x74\x74\x65\x6e\x20" + "\x61\x6e\x64\x20\x65\x6c\x65\x63" + "\x74\x72\x6f\x6e\x69\x63\x20\x63" + "\x6f\x6d\x6d\x75\x6e\x69\x63\x61" + "\x74\x69\x6f\x6e\x73\x20\x6d\x61" + "\x64\x65\x20\x61\x74\x20\x61\x6e" + "\x79\x20\x74\x69\x6d\x65\x20\x6f" + "\x72\x20\x70\x6c\x61\x63\x65\x2c" + "\x20\x77\x68\x69\x63\x68\x20\x61" + "\x72\x65\x20\x61\x64\x64\x72\x65" + "\x73\x73\x65\x64\x20\x74\x6f", + .ctext = "\xf9\xab\x7a\x4a\x60\xb8\x5f\xa0" + "\x50\xbb\x57\xce\xef\x8c\xc1\xd9" + "\x24\x15\xb3\x67\x5e\x7f\x01\xf6" + "\x1c\x22\xf6\xe5\x71\xb1\x43\x64" + "\x63\x05\xd5\xfc\x5c\x3d\xc0\x0e" + "\x23\xef\xd3\x3b\xd9\xdc\x7f\xa8" + "\x58\x26\xb3\xd0\xc2\xd5\x04\x3f" + "\x0a\x0e\x8f\x17\xe4\xcd\xf7\x2a" + "\xb4\x2c\x09\xe4\x47\xec\x8b\xfb" + "\x59\x37\x7a\xa1\xd0\x04\x7e\xaa" + "\xf1\x98\x5f\x24\x3d\x72\x9a\x43" + "\xa4\x36\x51\x92\x22\x87\xff\x26" + "\xce\x9d\xeb\x59\x78\x84\x5e\x74" + "\x97\x2e\x63\xc0\xef\x29\xf7\x8a" + "\xb9\xee\x35\x08\x77\x6a\x35\x9a" + "\x3e\xe6\x4f\x06\x03\x74\x1b\xc1" + "\x5b\xb3\x0b\x89\x11\x07\xd3\xb7" + "\x53\xd6\x25\x04\xd9\x35\xb4\x5d" + "\x4c\x33\x5a\xc2\x42\x4c\xe6\xa4" + "\x97\x6e\x0e\xd2\xb2\x8b\x2f\x7f" + "\x28\xe5\x9f\xac\x4b\x2e\x02\xab" + "\x85\xfa\xa9\x0d\x7c\x2d\x10\xe6" + "\x91\xab\x55\x63\xf0\xde\x3a\x94" + "\x25\x08\x10\x03\xc2\x68\xd1\xf4" + "\xaf\x7d\x9c\x99\xf7\x86\x96\x30" + "\x60\xfc\x0b\xe6\xa8\x80\x15\xb0" + "\x81\xb1\x0c\xbe\xb9\x12\x18\x25" + "\xe9\x0e\xb1\xe7\x23\xb2\xef\x4a" + "\x22\x8f\xc5\x61\x89\xd4\xe7\x0c" + "\x64\x36\x35\x61\xb6\x34\x60\xf7" + "\x7b\x61\x37\x37\x12\x10\xa2\xf6" + "\x7e\xdb\x7f\x39\x3f\xb6\x8e\x89" + "\x9e\xf3\xfe\x13\x98\xbb\x66\x5a" + "\xec\xea\xab\x3f\x9c\x87\xc4\x8c" + "\x8a\x04\x18\x49\xfc\x77\x11\x50" + "\x16\xe6\x71\x2b\xee\xc0\x9c\xb6" + "\x87\xfd\x80\xff\x0b\x1d\x73\x38" + "\xa4\x1d\x6f\xae\xe4\x12\xd7\x93" + "\x9d\xcd\x38\x26\x09\x40\x52\xcd" + "\x67\x01\x67\x26\xe0\x3e\x98\xa8" + "\xe8\x1a\x13\x41\xbb\x90\x4d\x87" + "\xbb\x42\x82\x39\xce\x3a\xd0\x18" + "\x6d\x7b\x71\x8f\xbb\x2c\x6a\xd1" + "\xbd\xf5\xc7\x8a\x7e\xe1\x1e\x0f" + "\x0d\x0d\x13\x7c\xd9\xd8\x3c\x91" + "\xab\xff\x1f\x12\xc3\xee\xe5\x65" + "\x12\x8d\x7b\x61\xe5\x1f\x98", + .len = 375, + .also_non_np = 1, + .np = 3, + .tap = { 375 - 20, 4, 16 }, + + }, { /* Derived from a ChaCha20 test vector, via the process above */ + .key = "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a" + "\xf3\x33\x88\x86\x04\xf6\xb5\xf0" + "\x47\x39\x17\xc1\x40\x2b\x80\x09" + "\x9d\xca\x5c\xbc\x20\x70\x75\xc0", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x02\x76\x5a\x2e\x63" + "\x33\x9f\xc9\x9a\x66\x32\x0d\xb7" + "\x2a\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x27\x54\x77\x61\x73\x20\x62\x72" + "\x69\x6c\x6c\x69\x67\x2c\x20\x61" + "\x6e\x64\x20\x74\x68\x65\x20\x73" + "\x6c\x69\x74\x68\x79\x20\x74\x6f" 
+ "\x76\x65\x73\x0a\x44\x69\x64\x20" + "\x67\x79\x72\x65\x20\x61\x6e\x64" + "\x20\x67\x69\x6d\x62\x6c\x65\x20" + "\x69\x6e\x20\x74\x68\x65\x20\x77" + "\x61\x62\x65\x3a\x0a\x41\x6c\x6c" + "\x20\x6d\x69\x6d\x73\x79\x20\x77" + "\x65\x72\x65\x20\x74\x68\x65\x20" + "\x62\x6f\x72\x6f\x67\x6f\x76\x65" + "\x73\x2c\x0a\x41\x6e\x64\x20\x74" + "\x68\x65\x20\x6d\x6f\x6d\x65\x20" + "\x72\x61\x74\x68\x73\x20\x6f\x75" + "\x74\x67\x72\x61\x62\x65\x2e", + .ctext = "\x95\xb9\x51\xe7\x8f\xb4\xa4\x03" + "\xca\x37\xcc\xde\x60\x1d\x8c\xe2" + "\xf1\xbb\x8a\x13\x7f\x61\x85\xcc" + "\xad\xf4\xf0\xdc\x86\xa6\x1e\x10" + "\xbc\x8e\xcb\x38\x2b\xa5\xc8\x8f" + "\xaa\x03\x3d\x53\x4a\x42\xb1\x33" + "\xfc\xd3\xef\xf0\x8e\x7e\x10\x9c" + "\x6f\x12\x5e\xd4\x96\xfe\x5b\x08" + "\xb6\x48\xf0\x14\x74\x51\x18\x7c" + "\x07\x92\xfc\xac\x9d\xf1\x94\xc0" + "\xc1\x9d\xc5\x19\x43\x1f\x1d\xbb" + "\x07\xf0\x1b\x14\x25\x45\xbb\xcb" + "\x5c\xe2\x8b\x28\xf3\xcf\x47\x29" + "\x27\x79\x67\x24\xa6\x87\xc2\x11" + "\x65\x03\xfa\x45\xf7\x9e\x53\x7a" + "\x99\xf1\x82\x25\x4f\x8d\x07", + .len = 127, + }, { /* Derived from a ChaCha20 test vector, via the process above */ + .key = "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a" + "\xf3\x33\x88\x86\x04\xf6\xb5\xf0" + "\x47\x39\x17\xc1\x40\x2b\x80\x09" + "\x9d\xca\x5c\xbc\x20\x70\x75\xc0", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x01\x31\x58\xa3\x5a" + "\x25\x5d\x05\x17\x58\xe9\x5e\xd4" + "\x1c\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x49\xee\xe0\xdc\x24\x90\x40\xcd" + "\xc5\x40\x8f\x47\x05\xbc\xdd\x81" + "\x47\xc6\x8d\xe6\xb1\x8f\xd7\xcb" + "\x09\x0e\x6e\x22\x48\x1f\xbf\xb8" + "\x5c\xf7\x1e\x8a\xc1\x23\xf2\xd4" + "\x19\x4b\x01\x0f\x4e\xa4\x43\xce" + "\x01\xc6\x67\xda\x03\x91\x18\x90" + "\xa5\xa4\x8e\x45\x03\xb3\x2d\xac" + "\x74\x92\xd3\x53\x47\xc8\xdd\x25" + "\x53\x6c\x02\x03\x87\x0d\x11\x0c" + "\x58\xe3\x12\x18\xfd\x2a\x5b\x40" + "\x0c\x30\xf0\xb8\x3f\x43\xce\xae" + "\x65\x3a\x7d\x7c\xf4\x54\xaa\xcc" + "\x33\x97\xc3\x77\xba\xc5\x70\xde" + "\xd7\xd5\x13\xa5\x65\xc4\x5f\x0f" + "\x46\x1a\x0d\x97\xb5\xf3\xbb\x3c" + "\x84\x0f\x2b\xc5\xaa\xea\xf2\x6c" + "\xc9\xb5\x0c\xee\x15\xf3\x7d\xbe" + "\x9f\x7b\x5a\xa6\xae\x4f\x83\xb6" + "\x79\x49\x41\xf4\x58\x18\xcb\x86" + "\x7f\x30\x0e\xf8\x7d\x44\x36\xea" + "\x75\xeb\x88\x84\x40\x3c\xad\x4f" + "\x6f\x31\x6b\xaa\x5d\xe5\xa5\xc5" + "\x21\x66\xe9\xa7\xe3\xb2\x15\x88" + "\x78\xf6\x79\xa1\x59\x47\x12\x4e" + "\x9f\x9f\x64\x1a\xa0\x22\x5b\x08" + "\xbe\x7c\x36\xc2\x2b\x66\x33\x1b" + "\xdd\x60\x71\xf7\x47\x8c\x61\xc3" + "\xda\x8a\x78\x1e\x16\xfa\x1e\x86" + "\x81\xa6\x17\x2a\xa7\xb5\xc2\xe7" + "\xa4\xc7\x42\xf1\xcf\x6a\xca\xb4" + "\x45\xcf\xf3\x93\xf0\xe7\xea\xf6" + "\xf4\xe6\x33\x43\x84\x93\xa5\x67" + "\x9b\x16\x58\x58\x80\x0f\x2b\x5c" + "\x24\x74\x75\x7f\x95\x81\xb7\x30" + "\x7a\x33\xa7\xf7\x94\x87\x32\x27" + "\x10\x5d\x14\x4c\x43\x29\xdd\x26" + "\xbd\x3e\x3c\x0e\xfe\x0e\xa5\x10" + "\xea\x6b\x64\xfd\x73\xc6\xed\xec" + "\xa8\xc9\xbf\xb3\xba\x0b\x4d\x07" + "\x70\xfc\x16\xfd\x79\x1e\xd7\xc5" + "\x49\x4e\x1c\x8b\x8d\x79\x1b\xb1" + "\xec\xca\x60\x09\x4c\x6a\xd5\x09" + "\x49\x46\x00\x88\x22\x8d\xce\xea" + "\xb1\x17\x11\xde\x42\xd2\x23\xc1" + "\x72\x11\xf5\x50\x73\x04\x40\x47" + "\xf9\x5d\xe7\xa7\x26\xb1\x7e\xb0" + "\x3f\x58\xc1\x52\xab\x12\x67\x9d" + "\x3f\x43\x4b\x68\xd4\x9c\x68\x38" + "\x07\x8a\x2d\x3e\xf3\xaf\x6a\x4b" + "\xf9\xe5\x31\x69\x22\xf9\xa6\x69" + "\xc6\x9c\x96\x9a\x12\x35\x95\x1d" + "\x95\xd5\xdd\xbe\xbf\x93\x53\x24" + "\xfd\xeb\xc2\x0a\x64\xb0\x77\x00" + "\x6f\x88\xc4\x37\x18\x69\x7c\xd7" + "\x41\x92\x55\x4c\x03\xa1\x9a\x4b" + 
"\x15\xe5\xdf\x7f\x37\x33\x72\xc1" + "\x8b\x10\x67\xa3\x01\x57\x94\x25" + "\x7b\x38\x71\x7e\xdd\x1e\xcc\x73" + "\x55\xd2\x8e\xeb\x07\xdd\xf1\xda" + "\x58\xb1\x47\x90\xfe\x42\x21\x72" + "\xa3\x54\x7a\xa0\x40\xec\x9f\xdd" + "\xc6\x84\x6e\xca\xae\xe3\x68\xb4" + "\x9d\xe4\x78\xff\x57\xf2\xf8\x1b" + "\x03\xa1\x31\xd9\xde\x8d\xf5\x22" + "\x9c\xdd\x20\xa4\x1e\x27\xb1\x76" + "\x4f\x44\x55\xe2\x9b\xa1\x9c\xfe" + "\x54\xf7\x27\x1b\xf4\xde\x02\xf5" + "\x1b\x55\x48\x5c\xdc\x21\x4b\x9e" + "\x4b\x6e\xed\x46\x23\xdc\x65\xb2" + "\xcf\x79\x5f\x28\xe0\x9e\x8b\xe7" + "\x4c\x9d\x8a\xff\xc1\xa6\x28\xb8" + "\x65\x69\x8a\x45\x29\xef\x74\x85" + "\xde\x79\xc7\x08\xae\x30\xb0\xf4" + "\xa3\x1d\x51\x41\xab\xce\xcb\xf6" + "\xb5\xd8\x6d\xe0\x85\xe1\x98\xb3" + "\x43\xbb\x86\x83\x0a\xa0\xf5\xb7" + "\x04\x0b\xfa\x71\x1f\xb0\xf6\xd9" + "\x13\x00\x15\xf0\xc7\xeb\x0d\x5a" + "\x9f\xd7\xb9\x6c\x65\x14\x22\x45" + "\x6e\x45\x32\x3e\x7e\x60\x1a\x12" + "\x97\x82\x14\xfb\xaa\x04\x22\xfa" + "\xa0\xe5\x7e\x8c\x78\x02\x48\x5d" + "\x78\x33\x5a\x7c\xad\xdb\x29\xce" + "\xbb\x8b\x61\xa4\xb7\x42\xe2\xac" + "\x8b\x1a\xd9\x2f\x0b\x8b\x62\x21" + "\x83\x35\x7e\xad\x73\xc2\xb5\x6c" + "\x10\x26\x38\x07\xe5\xc7\x36\x80" + "\xe2\x23\x12\x61\xf5\x48\x4b\x2b" + "\xc5\xdf\x15\xd9\x87\x01\xaa\xac" + "\x1e\x7c\xad\x73\x78\x18\x63\xe0" + "\x8b\x9f\x81\xd8\x12\x6a\x28\x10" + "\xbe\x04\x68\x8a\x09\x7c\x1b\x1c" + "\x83\x66\x80\x47\x80\xe8\xfd\x35" + "\x1c\x97\x6f\xae\x49\x10\x66\xcc" + "\xc6\xd8\xcc\x3a\x84\x91\x20\x77" + "\x72\xe4\x24\xd2\x37\x9f\xc5\xc9" + "\x25\x94\x10\x5f\x40\x00\x64\x99" + "\xdc\xae\xd7\x21\x09\x78\x50\x15" + "\xac\x5f\xc6\x2c\xa2\x0b\xa9\x39" + "\x87\x6e\x6d\xab\xde\x08\x51\x16" + "\xc7\x13\xe9\xea\xed\x06\x8e\x2c" + "\xf8\x37\x8c\xf0\xa6\x96\x8d\x43" + "\xb6\x98\x37\xb2\x43\xed\xde\xdf" + "\x89\x1a\xe7\xeb\x9d\xa1\x7b\x0b" + "\x77\xb0\xe2\x75\xc0\xf1\x98\xd9" + "\x80\x55\xc9\x34\x91\xd1\x59\xe8" + "\x4b\x0f\xc1\xa9\x4b\x7a\x84\x06" + "\x20\xa8\x5d\xfa\xd1\xde\x70\x56" + "\x2f\x9e\x91\x9c\x20\xb3\x24\xd8" + "\x84\x3d\xe1\x8c\x7e\x62\x52\xe5" + "\x44\x4b\x9f\xc2\x93\x03\xea\x2b" + "\x59\xc5\xfa\x3f\x91\x2b\xbb\x23" + "\xf5\xb2\x7b\xf5\x38\xaf\xb3\xee" + "\x63\xdc\x7b\xd1\xff\xaa\x8b\xab" + "\x82\x6b\x37\x04\xeb\x74\xbe\x79" + "\xb9\x83\x90\xef\x20\x59\x46\xff" + "\xe9\x97\x3e\x2f\xee\xb6\x64\x18" + "\x38\x4c\x7a\x4a\xf9\x61\xe8\x9a" + "\xa1\xb5\x01\xa6\x47\xd3\x11\xd4" + "\xce\xd3\x91\x49\x88\xc7\xb8\x4d" + "\xb1\xb9\x07\x6d\x16\x72\xae\x46" + "\x5e\x03\xa1\x4b\xb6\x02\x30\xa8" + "\x3d\xa9\x07\x2a\x7c\x19\xe7\x62" + "\x87\xe3\x82\x2f\x6f\xe1\x09\xd9" + "\x94\x97\xea\xdd\x58\x9e\xae\x76" + "\x7e\x35\xe5\xb4\xda\x7e\xf4\xde" + "\xf7\x32\x87\xcd\x93\xbf\x11\x56" + "\x11\xbe\x08\x74\xe1\x69\xad\xe2" + "\xd7\xf8\x86\x75\x8a\x3c\xa4\xbe" + "\x70\xa7\x1b\xfc\x0b\x44\x2a\x76" + "\x35\xea\x5d\x85\x81\xaf\x85\xeb" + "\xa0\x1c\x61\xc2\xf7\x4f\xa5\xdc" + "\x02\x7f\xf6\x95\x40\x6e\x8a\x9a" + "\xf3\x5d\x25\x6e\x14\x3a\x22\xc9" + "\x37\x1c\xeb\x46\x54\x3f\xa5\x91" + "\xc2\xb5\x8c\xfe\x53\x08\x97\x32" + "\x1b\xb2\x30\x27\xfe\x25\x5d\xdc" + "\x08\x87\xd0\xe5\x94\x1a\xd4\xf1" + "\xfe\xd6\xb4\xa3\xe6\x74\x81\x3c" + "\x1b\xb7\x31\xa7\x22\xfd\xd4\xdd" + "\x20\x4e\x7c\x51\xb0\x60\x73\xb8" + "\x9c\xac\x91\x90\x7e\x01\xb0\xe1" + "\x8a\x2f\x75\x1c\x53\x2a\x98\x2a" + "\x06\x52\x95\x52\xb2\xe9\x25\x2e" + "\x4c\xe2\x5a\x00\xb2\x13\x81\x03" + "\x77\x66\x0d\xa5\x99\xda\x4e\x8c" + "\xac\xf3\x13\x53\x27\x45\xaf\x64" + "\x46\xdc\xea\x23\xda\x97\xd1\xab" + "\x7d\x6c\x30\x96\x1f\xbc\x06\x34" + "\x18\x0b\x5e\x21\x35\x11\x8d\x4c" + "\xe0\x2d\xe9\x50\x16\x74\x81\xa8" + 
"\xb4\x34\xb9\x72\x42\xa6\xcc\xbc" + "\xca\x34\x83\x27\x10\x5b\x68\x45" + "\x8f\x52\x22\x0c\x55\x3d\x29\x7c" + "\xe3\xc0\x66\x05\x42\x91\x5f\x58" + "\xfe\x4a\x62\xd9\x8c\xa9\x04\x19" + "\x04\xa9\x08\x4b\x57\xfc\x67\x53" + "\x08\x7c\xbc\x66\x8a\xb0\xb6\x9f" + "\x92\xd6\x41\x7c\x5b\x2a\x00\x79" + "\x72", + .ctext = "\x3a\x92\xee\x53\x31\xaf\x2b\x60" + "\x5f\x55\x8d\x00\x5d\xfc\x74\x97" + "\x28\x54\xf4\xa5\x75\xf1\x9b\x25" + "\x62\x1c\xc0\xe0\x13\xc8\x87\x53" + "\xd0\xf3\xa7\x97\x1f\x3b\x1e\xea" + "\xe0\xe5\x2a\xd1\xdd\xa4\x3b\x50" + "\x45\xa3\x0d\x7e\x1b\xc9\xa0\xad" + "\xb9\x2c\x54\xa6\xc7\x55\x16\xd0" + "\xc5\x2e\x02\x44\x35\xd0\x7e\x67" + "\xf2\xc4\x9b\xcd\x95\x10\xcc\x29" + "\x4b\xfa\x86\x87\xbe\x40\x36\xbe" + "\xe1\xa3\x52\x89\x55\x20\x9b\xc2" + "\xab\xf2\x31\x34\x16\xad\xc8\x17" + "\x65\x24\xc0\xff\x12\x37\xfe\x5a" + "\x62\x3b\x59\x47\x6c\x5f\x3a\x8e" + "\x3b\xd9\x30\xc8\x7f\x2f\x88\xda" + "\x80\xfd\x02\xda\x7f\x9a\x7a\x73" + "\x59\xc5\x34\x09\x9a\x11\xcb\xa7" + "\xfc\xf6\xa1\xa0\x60\xfb\x43\xbb" + "\xf1\xe9\xd7\xc6\x79\x27\x4e\xff" + "\x22\xb4\x24\xbf\x76\xee\x47\xb9" + "\x6d\x3f\x8b\xb0\x9c\x3c\x43\xdd" + "\xff\x25\x2e\x6d\xa4\x2b\xfb\x5d" + "\x1b\x97\x6c\x55\x0a\x82\x7a\x7b" + "\x94\x34\xc2\xdb\x2f\x1f\xc1\xea" + "\xd4\x4d\x17\x46\x3b\x51\x69\x09" + "\xe4\x99\x32\x25\xfd\x94\xaf\xfb" + "\x10\xf7\x4f\xdd\x0b\x3c\x8b\x41" + "\xb3\x6a\xb7\xd1\x33\xa8\x0c\x2f" + "\x62\x4c\x72\x11\xd7\x74\xe1\x3b" + "\x38\x43\x66\x7b\x6c\x36\x48\xe7" + "\xe3\xe7\x9d\xb9\x42\x73\x7a\x2a" + "\x89\x20\x1a\x41\x80\x03\xf7\x8f" + "\x61\x78\x13\xbf\xfe\x50\xf5\x04" + "\x52\xf9\xac\x47\xf8\x62\x4b\xb2" + "\x24\xa9\xbf\x64\xb0\x18\x69\xd2" + "\xf5\xe4\xce\xc8\xb1\x87\x75\xd6" + "\x2c\x24\x79\x00\x7d\x26\xfb\x44" + "\xe7\x45\x7a\xee\x58\xa5\x83\xc1" + "\xb4\x24\xab\x23\x2f\x4d\xd7\x4f" + "\x1c\xc7\xaa\xa9\x50\xf4\xa3\x07" + "\x12\x13\x89\x74\xdc\x31\x6a\xb2" + "\xf5\x0f\x13\x8b\xb9\xdb\x85\x1f" + "\xf5\xbc\x88\xd9\x95\xea\x31\x6c" + "\x36\x60\xb6\x49\xdc\xc4\xf7\x55" + "\x3f\x21\xc1\xb5\x92\x18\x5e\xbc" + "\x9f\x87\x7f\xe7\x79\x25\x40\x33" + "\xd6\xb9\x33\xd5\x50\xb3\xc7\x89" + "\x1b\x12\xa0\x46\xdd\xa7\xd8\x3e" + "\x71\xeb\x6f\x66\xa1\x26\x0c\x67" + "\xab\xb2\x38\x58\x17\xd8\x44\x3b" + "\x16\xf0\x8e\x62\x8d\x16\x10\x00" + "\x32\x8b\xef\xb9\x28\xd3\xc5\xad" + "\x0a\x19\xa2\xe4\x03\x27\x7d\x94" + "\x06\x18\xcd\xd6\x27\x00\xf9\x1f" + "\xb6\xb3\xfe\x96\x35\x5f\xc4\x1c" + "\x07\x62\x10\x79\x68\x50\xf1\x7e" + "\x29\xe7\xc4\xc4\xe7\xee\x54\xd6" + "\x58\x76\x84\x6d\x8d\xe4\x59\x31" + "\xe9\xf4\xdc\xa1\x1f\xe5\x1a\xd6" + "\xe6\x64\x46\xf5\x77\x9c\x60\x7a" + "\x5e\x62\xe3\x0a\xd4\x9f\x7a\x2d" + "\x7a\xa5\x0a\x7b\x29\x86\x7a\x74" + "\x74\x71\x6b\xca\x7d\x1d\xaa\xba" + "\x39\x84\x43\x76\x35\xfe\x4f\x9b" + "\xbb\xbb\xb5\x6a\x32\xb5\x5d\x41" + "\x51\xf0\x5b\x68\x03\x47\x4b\x8a" + "\xca\x88\xf6\x37\xbd\x73\x51\x70" + "\x66\xfe\x9e\x5f\x21\x9c\xf3\xdd" + "\xc3\xea\x27\xf9\x64\x94\xe1\x19" + "\xa0\xa9\xab\x60\xe0\x0e\xf7\x78" + "\x70\x86\xeb\xe0\xd1\x5c\x05\xd3" + "\xd7\xca\xe0\xc0\x47\x47\x34\xee" + "\x11\xa3\xa3\x54\x98\xb7\x49\x8e" + "\x84\x28\x70\x2c\x9e\xfb\x55\x54" + "\x4d\xf8\x86\xf7\x85\x7c\xbd\xf3" + "\x17\xd8\x47\xcb\xac\xf4\x20\x85" + "\x34\x66\xad\x37\x2d\x5e\x52\xda" + "\x8a\xfe\x98\x55\x30\xe7\x2d\x2b" + "\x19\x10\x8e\x7b\x66\x5e\xdc\xe0" + "\x45\x1f\x7b\xb4\x08\xfb\x8f\xf6" + "\x8c\x89\x21\x34\x55\x27\xb2\x76" + "\xb2\x07\xd9\xd6\x68\x9b\xea\x6b" + "\x2d\xb4\xc4\x35\xdd\xd2\x79\xae" + "\xc7\xd6\x26\x7f\x12\x01\x8c\xa7" + "\xe3\xdb\xa8\xf4\xf7\x2b\xec\x99" + "\x11\x00\xf1\x35\x8c\xcf\xd5\xc9" + 
"\xbd\x91\x36\x39\x70\xcf\x7d\x70" + "\x47\x1a\xfc\x6b\x56\xe0\x3f\x9c" + "\x60\x49\x01\x72\xa9\xaf\x2c\x9c" + "\xe8\xab\xda\x8c\x14\x19\xf3\x75" + "\x07\x17\x9d\x44\x67\x7a\x2e\xef" + "\xb7\x83\x35\x4a\xd1\x3d\x1c\x84" + "\x32\xdd\xaa\xea\xca\x1d\xdc\x72" + "\x2c\xcc\x43\xcd\x5d\xe3\x21\xa4" + "\xd0\x8a\x4b\x20\x12\xa3\xd5\x86" + "\x76\x96\xff\x5f\x04\x57\x0f\xe6" + "\xba\xe8\x76\x50\x0c\x64\x1d\x83" + "\x9c\x9b\x9a\x9a\x58\x97\x9c\x5c" + "\xb4\xa4\xa6\x3e\x19\xeb\x8f\x5a" + "\x61\xb2\x03\x7b\x35\x19\xbe\xa7" + "\x63\x0c\xfd\xdd\xf9\x90\x6c\x08" + "\x19\x11\xd3\x65\x4a\xf5\x96\x92" + "\x59\xaa\x9c\x61\x0c\x29\xa7\xf8" + "\x14\x39\x37\xbf\x3c\xf2\x16\x72" + "\x02\xfa\xa2\xf3\x18\x67\x5d\xcb" + "\xdc\x4d\xbb\x96\xff\x70\x08\x2d" + "\xc2\xa8\x52\xe1\x34\x5f\x72\xfe" + "\x64\xbf\xca\xa7\x74\x38\xfb\x74" + "\x55\x9c\xfa\x8a\xed\xfb\x98\xeb" + "\x58\x2e\x6c\xe1\x52\x76\x86\xd7" + "\xcf\xa1\xa4\xfc\xb2\x47\x41\x28" + "\xa3\xc1\xe5\xfd\x53\x19\x28\x2b" + "\x37\x04\x65\x96\x99\x7a\x28\x0f" + "\x07\x68\x4b\xc7\x52\x0a\x55\x35" + "\x40\x19\x95\x61\xe8\x59\x40\x1f" + "\x9d\xbf\x78\x7d\x8f\x84\xff\x6f" + "\xd0\xd5\x63\xd2\x22\xbd\xc8\x4e" + "\xfb\xe7\x9f\x06\xe6\xe7\x39\x6d" + "\x6a\x96\x9f\xf0\x74\x7e\xc9\x35" + "\xb7\x26\xb8\x1c\x0a\xa6\x27\x2c" + "\xa2\x2b\xfe\xbe\x0f\x07\x73\xae" + "\x7f\x7f\x54\xf5\x7c\x6a\x0a\x56" + "\x49\xd4\x81\xe5\x85\x53\x99\x1f" + "\x95\x05\x13\x58\x8d\x0e\x1b\x90" + "\xc3\x75\x48\x64\x58\x98\x67\x84" + "\xae\xe2\x21\xa2\x8a\x04\x0a\x0b" + "\x61\xaa\xb0\xd4\x28\x60\x7a\xf8" + "\xbc\x52\xfb\x24\x7f\xed\x0d\x2a" + "\x0a\xb2\xf9\xc6\x95\xb5\x11\xc9" + "\xf4\x0f\x26\x11\xcf\x2a\x57\x87" + "\x7a\xf3\xe7\x94\x65\xc2\xb5\xb3" + "\xab\x98\xe3\xc1\x2b\x59\x19\x7c" + "\xd6\xf3\xf9\xbf\xff\x6d\xc6\x82" + "\x13\x2f\x4a\x2e\xcd\x26\xfe\x2d" + "\x01\x70\xf4\xc2\x7f\x1f\x4c\xcb" + "\x47\x77\x0c\xa0\xa3\x03\xec\xda" + "\xa9\xbf\x0d\x2d\xae\xe4\xb8\x7b" + "\xa9\xbc\x08\xb4\x68\x2e\xc5\x60" + "\x8d\x87\x41\x2b\x0f\x69\xf0\xaf" + "\x5f\xba\x72\x20\x0f\x33\xcd\x6d" + "\x36\x7d\x7b\xd5\x05\xf1\x4b\x05" + "\xc4\xfc\x7f\x80\xb9\x4d\xbd\xf7" + "\x7c\x84\x07\x01\xc2\x40\x66\x5b" + "\x98\xc7\x2c\xe3\x97\xfa\xdf\x87" + "\xa0\x1f\xe9\x21\x42\x0f\x3b\xeb" + "\x89\x1c\x3b\xca\x83\x61\x77\x68" + "\x84\xbb\x60\x87\x38\x2e\x25\xd5" + "\x9e\x04\x41\x70\xac\xda\xc0\x9c" + "\x9c\x69\xea\x8d\x4e\x55\x2a\x29" + "\xed\x05\x4b\x7b\x73\x71\x90\x59" + "\x4d\xc8\xd8\x44\xf0\x4c\xe1\x5e" + "\x84\x47\x55\xcc\x32\x3f\xe7\x97" + "\x42\xc6\x32\xac\x40\xe5\xa5\xc7" + "\x8b\xed\xdb\xf7\x83\xd6\xb1\xc2" + "\x52\x5e\x34\xb7\xeb\x6e\xd9\xfc" + "\xe5\x93\x9a\x97\x3e\xb0\xdc\xd9" + "\xd7\x06\x10\xb6\x1d\x80\x59\xdd" + "\x0d\xfe\x64\x35\xcd\x5d\xec\xf0" + "\xba\xd0\x34\xc9\x2d\x91\xc5\x17" + "\x11", + .len = 1281, + .also_non_np = 1, + .np = 3, + .tap = { 1200, 1, 80 }, + }, { /* test vector from https://tools.ietf.org/html/draft-arciszewski-xchacha-02#appendix-A.3.2 */ + .key = "\x80\x81\x82\x83\x84\x85\x86\x87" + "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f" + "\x90\x91\x92\x93\x94\x95\x96\x97" + "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f", + .klen = 32, + .iv = "\x40\x41\x42\x43\x44\x45\x46\x47" + "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f" + "\x50\x51\x52\x53\x54\x55\x56\x58" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x54\x68\x65\x20\x64\x68\x6f\x6c" + "\x65\x20\x28\x70\x72\x6f\x6e\x6f" + "\x75\x6e\x63\x65\x64\x20\x22\x64" + "\x6f\x6c\x65\x22\x29\x20\x69\x73" + "\x20\x61\x6c\x73\x6f\x20\x6b\x6e" + "\x6f\x77\x6e\x20\x61\x73\x20\x74" + "\x68\x65\x20\x41\x73\x69\x61\x74" + "\x69\x63\x20\x77\x69\x6c\x64\x20" + "\x64\x6f\x67\x2c\x20\x72\x65\x64" + 
"\x20\x64\x6f\x67\x2c\x20\x61\x6e" + "\x64\x20\x77\x68\x69\x73\x74\x6c" + "\x69\x6e\x67\x20\x64\x6f\x67\x2e" + "\x20\x49\x74\x20\x69\x73\x20\x61" + "\x62\x6f\x75\x74\x20\x74\x68\x65" + "\x20\x73\x69\x7a\x65\x20\x6f\x66" + "\x20\x61\x20\x47\x65\x72\x6d\x61" + "\x6e\x20\x73\x68\x65\x70\x68\x65" + "\x72\x64\x20\x62\x75\x74\x20\x6c" + "\x6f\x6f\x6b\x73\x20\x6d\x6f\x72" + "\x65\x20\x6c\x69\x6b\x65\x20\x61" + "\x20\x6c\x6f\x6e\x67\x2d\x6c\x65" + "\x67\x67\x65\x64\x20\x66\x6f\x78" + "\x2e\x20\x54\x68\x69\x73\x20\x68" + "\x69\x67\x68\x6c\x79\x20\x65\x6c" + "\x75\x73\x69\x76\x65\x20\x61\x6e" + "\x64\x20\x73\x6b\x69\x6c\x6c\x65" + "\x64\x20\x6a\x75\x6d\x70\x65\x72" + "\x20\x69\x73\x20\x63\x6c\x61\x73" + "\x73\x69\x66\x69\x65\x64\x20\x77" + "\x69\x74\x68\x20\x77\x6f\x6c\x76" + "\x65\x73\x2c\x20\x63\x6f\x79\x6f" + "\x74\x65\x73\x2c\x20\x6a\x61\x63" + "\x6b\x61\x6c\x73\x2c\x20\x61\x6e" + "\x64\x20\x66\x6f\x78\x65\x73\x20" + "\x69\x6e\x20\x74\x68\x65\x20\x74" + "\x61\x78\x6f\x6e\x6f\x6d\x69\x63" + "\x20\x66\x61\x6d\x69\x6c\x79\x20" + "\x43\x61\x6e\x69\x64\x61\x65\x2e", + .ctext = "\x45\x59\xab\xba\x4e\x48\xc1\x61" + "\x02\xe8\xbb\x2c\x05\xe6\x94\x7f" + "\x50\xa7\x86\xde\x16\x2f\x9b\x0b" + "\x7e\x59\x2a\x9b\x53\xd0\xd4\xe9" + "\x8d\x8d\x64\x10\xd5\x40\xa1\xa6" + "\x37\x5b\x26\xd8\x0d\xac\xe4\xfa" + "\xb5\x23\x84\xc7\x31\xac\xbf\x16" + "\xa5\x92\x3c\x0c\x48\xd3\x57\x5d" + "\x4d\x0d\x2c\x67\x3b\x66\x6f\xaa" + "\x73\x10\x61\x27\x77\x01\x09\x3a" + "\x6b\xf7\xa1\x58\xa8\x86\x42\x92" + "\xa4\x1c\x48\xe3\xa9\xb4\xc0\xda" + "\xec\xe0\xf8\xd9\x8d\x0d\x7e\x05" + "\xb3\x7a\x30\x7b\xbb\x66\x33\x31" + "\x64\xec\x9e\x1b\x24\xea\x0d\x6c" + "\x3f\xfd\xdc\xec\x4f\x68\xe7\x44" + "\x30\x56\x19\x3a\x03\xc8\x10\xe1" + "\x13\x44\xca\x06\xd8\xed\x8a\x2b" + "\xfb\x1e\x8d\x48\xcf\xa6\xbc\x0e" + "\xb4\xe2\x46\x4b\x74\x81\x42\x40" + "\x7c\x9f\x43\x1a\xee\x76\x99\x60" + "\xe1\x5b\xa8\xb9\x68\x90\x46\x6e" + "\xf2\x45\x75\x99\x85\x23\x85\xc6" + "\x61\xf7\x52\xce\x20\xf9\xda\x0c" + "\x09\xab\x6b\x19\xdf\x74\xe7\x6a" + "\x95\x96\x74\x46\xf8\xd0\xfd\x41" + "\x5e\x7b\xee\x2a\x12\xa1\x14\xc2" + "\x0e\xb5\x29\x2a\xe7\xa3\x49\xae" + "\x57\x78\x20\xd5\x52\x0a\x1f\x3f" + "\xb6\x2a\x17\xce\x6a\x7e\x68\xfa" + "\x7c\x79\x11\x1d\x88\x60\x92\x0b" + "\xc0\x48\xef\x43\xfe\x84\x48\x6c" + "\xcb\x87\xc2\x5f\x0a\xe0\x45\xf0" + "\xcc\xe1\xe7\x98\x9a\x9a\xa2\x20" + "\xa2\x8b\xdd\x48\x27\xe7\x51\xa2" + "\x4a\x6d\x5c\x62\xd7\x90\xa6\x63" + "\x93\xb9\x31\x11\xc1\xa5\x5d\xd7" + "\x42\x1a\x10\x18\x49\x74\xc7\xc5", + .len = 304, + } +}; + +/* + * Same as XChaCha20 test vectors above, but recomputed the ciphertext with + * XChaCha12, using a modified libsodium. 
+ */ +static const struct cipher_testvec xchacha12_tv_template[] = { + { + .key = "\x79\xc9\x97\x98\xac\x67\x30\x0b" + "\xbb\x27\x04\xc9\x5c\x34\x1e\x32" + "\x45\xf3\xdc\xb2\x17\x61\xb9\x8e" + "\x52\xff\x45\xb2\x4f\x30\x4f\xc4", + .klen = 32, + .iv = "\xb3\x3f\xfd\x30\x96\x47\x9b\xcf" + "\xbc\x9a\xee\x49\x41\x76\x88\xa0" + "\xa2\x55\x4f\x8d\x95\x38\x94\x19" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00", + .ctext = "\x1b\x78\x7f\xd7\xa1\x41\x68\xab" + "\x3d\x3f\xd1\x7b\x69\x56\xb2\xd5" + "\x43\xce\xeb\xaf\x36\xf0\x29\x9d" + "\x3a\xfb\x18\xae\x1b", + .len = 29, + }, { + .key = "\x9d\x23\xbd\x41\x49\xcb\x97\x9c" + "\xcf\x3c\x5c\x94\xdd\x21\x7e\x98" + "\x08\xcb\x0e\x50\xcd\x0f\x67\x81" + "\x22\x35\xea\xaf\x60\x1d\x62\x32", + .klen = 32, + .iv = "\xc0\x47\x54\x82\x66\xb7\xc3\x70" + "\xd3\x35\x66\xa2\x42\x5c\xbf\x30" + "\xd8\x2d\x1e\xaf\x52\x94\x10\x9e" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00", + .ctext = "\xfb\x32\x09\x1d\x83\x05\xae\x4c" + "\x13\x1f\x12\x71\xf2\xca\xb2\xeb" + "\x5b\x83\x14\x7d\x83\xf6\x57\x77" + "\x2e\x40\x1f\x92\x2c\xf9\xec\x35" + "\x34\x1f\x93\xdf\xfb\x30\xd7\x35" + "\x03\x05\x78\xc1\x20\x3b\x7a\xe3" + "\x62\xa3\x89\xdc\x11\x11\x45\xa8" + "\x82\x89\xa0\xf1\x4e\xc7\x0f\x11" + "\x69\xdd\x0c\x84\x2b\x89\x5c\xdc" + "\xf0\xde\x01\xef\xc5\x65\x79\x23" + "\x87\x67\xd6\x50\xd9\x8d\xd9\x92" + "\x54\x5b\x0e", + .len = 91, + }, { + .key = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x67\xc6\x69\x73" + "\x51\xff\x4a\xec\x29\xcd\xba\xab" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ctext = "\xdf\x2d\xc6\x21\x2a\x9d\xa1\xbb" + "\xc2\x77\x66\x0c\x5c\x46\xef\xa7" + "\x79\x1b\xb9\xdf\x55\xe2\xf9\x61" + "\x4c\x7b\xa4\x52\x24\xaf\xa2\xda" + "\xd1\x8f\x8f\xa2\x9e\x53\x4d\xc4" + "\xb8\x55\x98\x08\x7c\x08\xd4\x18" + "\x67\x8f\xef\x50\xb1\x5f\xa5\x77" + "\x4c\x25\xe7\x86\x26\x42\xca\x44", + .len = 64, + }, { + .key = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x01", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x02\xf2\xfb\xe3\x46" + "\x7c\xc2\x54\xf8\x1b\xe8\xe7\x8d" + "\x01\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x41\x6e\x79\x20\x73\x75\x62\x6d" + "\x69\x73\x73\x69\x6f\x6e\x20\x74" + "\x6f\x20\x74\x68\x65\x20\x49\x45" + "\x54\x46\x20\x69\x6e\x74\x65\x6e" + "\x64\x65\x64\x20\x62\x79\x20\x74" + "\x68\x65\x20\x43\x6f\x6e\x74\x72" + "\x69\x62\x75\x74\x6f\x72\x20\x66" + "\x6f\x72\x20\x70\x75\x62\x6c\x69" + "\x63\x61\x74\x69\x6f\x6e\x20\x61" + 
"\x73\x20\x61\x6c\x6c\x20\x6f\x72" + "\x20\x70\x61\x72\x74\x20\x6f\x66" + "\x20\x61\x6e\x20\x49\x45\x54\x46" + "\x20\x49\x6e\x74\x65\x72\x6e\x65" + "\x74\x2d\x44\x72\x61\x66\x74\x20" + "\x6f\x72\x20\x52\x46\x43\x20\x61" + "\x6e\x64\x20\x61\x6e\x79\x20\x73" + "\x74\x61\x74\x65\x6d\x65\x6e\x74" + "\x20\x6d\x61\x64\x65\x20\x77\x69" + "\x74\x68\x69\x6e\x20\x74\x68\x65" + "\x20\x63\x6f\x6e\x74\x65\x78\x74" + "\x20\x6f\x66\x20\x61\x6e\x20\x49" + "\x45\x54\x46\x20\x61\x63\x74\x69" + "\x76\x69\x74\x79\x20\x69\x73\x20" + "\x63\x6f\x6e\x73\x69\x64\x65\x72" + "\x65\x64\x20\x61\x6e\x20\x22\x49" + "\x45\x54\x46\x20\x43\x6f\x6e\x74" + "\x72\x69\x62\x75\x74\x69\x6f\x6e" + "\x22\x2e\x20\x53\x75\x63\x68\x20" + "\x73\x74\x61\x74\x65\x6d\x65\x6e" + "\x74\x73\x20\x69\x6e\x63\x6c\x75" + "\x64\x65\x20\x6f\x72\x61\x6c\x20" + "\x73\x74\x61\x74\x65\x6d\x65\x6e" + "\x74\x73\x20\x69\x6e\x20\x49\x45" + "\x54\x46\x20\x73\x65\x73\x73\x69" + "\x6f\x6e\x73\x2c\x20\x61\x73\x20" + "\x77\x65\x6c\x6c\x20\x61\x73\x20" + "\x77\x72\x69\x74\x74\x65\x6e\x20" + "\x61\x6e\x64\x20\x65\x6c\x65\x63" + "\x74\x72\x6f\x6e\x69\x63\x20\x63" + "\x6f\x6d\x6d\x75\x6e\x69\x63\x61" + "\x74\x69\x6f\x6e\x73\x20\x6d\x61" + "\x64\x65\x20\x61\x74\x20\x61\x6e" + "\x79\x20\x74\x69\x6d\x65\x20\x6f" + "\x72\x20\x70\x6c\x61\x63\x65\x2c" + "\x20\x77\x68\x69\x63\x68\x20\x61" + "\x72\x65\x20\x61\x64\x64\x72\x65" + "\x73\x73\x65\x64\x20\x74\x6f", + .ctext = "\xe4\xa6\xc8\x30\xc4\x23\x13\xd6" + "\x08\x4d\xc9\xb7\xa5\x64\x7c\xb9" + "\x71\xe2\xab\x3e\xa8\x30\x8a\x1c" + "\x4a\x94\x6d\x9b\xe0\xb3\x6f\xf1" + "\xdc\xe3\x1b\xb3\xa9\x6d\x0d\xd6" + "\xd0\xca\x12\xef\xe7\x5f\xd8\x61" + "\x3c\x82\xd3\x99\x86\x3c\x6f\x66" + "\x02\x06\xdc\x55\xf9\xed\xdf\x38" + "\xb4\xa6\x17\x00\x7f\xef\xbf\x4f" + "\xf8\x36\xf1\x60\x7e\x47\xaf\xdb" + "\x55\x9b\x12\xcb\x56\x44\xa7\x1f" + "\xd3\x1a\x07\x3b\x00\xec\xe6\x4c" + "\xa2\x43\x27\xdf\x86\x19\x4f\x16" + "\xed\xf9\x4a\xf3\x63\x6f\xfa\x7f" + "\x78\x11\xf6\x7d\x97\x6f\xec\x6f" + "\x85\x0f\x5c\x36\x13\x8d\x87\xe0" + "\x80\xb1\x69\x0b\x98\x89\x9c\x4e" + "\xf8\xdd\xee\x5c\x0a\x85\xce\xd4" + "\xea\x1b\x48\xbe\x08\xf8\xe2\xa8" + "\xa5\xb0\x3c\x79\xb1\x15\xb4\xb9" + "\x75\x10\x95\x35\x81\x7e\x26\xe6" + "\x78\xa4\x88\xcf\xdb\x91\x34\x18" + "\xad\xd7\x8e\x07\x7d\xab\x39\xf9" + "\xa3\x9e\xa5\x1d\xbb\xed\x61\xfd" + "\xdc\xb7\x5a\x27\xfc\xb5\xc9\x10" + "\xa8\xcc\x52\x7f\x14\x76\x90\xe7" + "\x1b\x29\x60\x74\xc0\x98\x77\xbb" + "\xe0\x54\xbb\x27\x49\x59\x1e\x62" + "\x3d\xaf\x74\x06\xa4\x42\x6f\xc6" + "\x52\x97\xc4\x1d\xc4\x9f\xe2\xe5" + "\x38\x57\x91\xd1\xa2\x28\xcc\x40" + "\xcc\x70\x59\x37\xfc\x9f\x4b\xda" + "\xa0\xeb\x97\x9a\x7d\xed\x14\x5c" + "\x9c\xb7\x93\x26\x41\xa8\x66\xdd" + "\x87\x6a\xc0\xd3\xc2\xa9\x3e\xae" + "\xe9\x72\xfe\xd1\xb3\xac\x38\xea" + "\x4d\x15\xa9\xd5\x36\x61\xe9\x96" + "\x6c\x23\xf8\x43\xe4\x92\x29\xd9" + "\x8b\x78\xf7\x0a\x52\xe0\x19\x5b" + "\x59\x69\x5b\x5d\xa1\x53\xc4\x68" + "\xe1\xbb\xac\x89\x14\xe2\xe2\x85" + "\x41\x18\xf5\xb3\xd1\xfa\x68\x19" + "\x44\x78\xdc\xcf\xe7\x88\x2d\x52" + "\x5f\x40\xb5\x7e\xf8\x88\xa2\xae" + "\x4a\xb2\x07\x35\x9d\x9b\x07\x88" + "\xb7\x00\xd0\x0c\xb6\xa0\x47\x59" + "\xda\x4e\xc9\xab\x9b\x8a\x7b", + + .len = 375, + .also_non_np = 1, + .np = 3, + .tap = { 375 - 20, 4, 16 }, + + }, { + .key = "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a" + "\xf3\x33\x88\x86\x04\xf6\xb5\xf0" + "\x47\x39\x17\xc1\x40\x2b\x80\x09" + "\x9d\xca\x5c\xbc\x20\x70\x75\xc0", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x02\x76\x5a\x2e\x63" + "\x33\x9f\xc9\x9a\x66\x32\x0d\xb7" + 
"\x2a\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x27\x54\x77\x61\x73\x20\x62\x72" + "\x69\x6c\x6c\x69\x67\x2c\x20\x61" + "\x6e\x64\x20\x74\x68\x65\x20\x73" + "\x6c\x69\x74\x68\x79\x20\x74\x6f" + "\x76\x65\x73\x0a\x44\x69\x64\x20" + "\x67\x79\x72\x65\x20\x61\x6e\x64" + "\x20\x67\x69\x6d\x62\x6c\x65\x20" + "\x69\x6e\x20\x74\x68\x65\x20\x77" + "\x61\x62\x65\x3a\x0a\x41\x6c\x6c" + "\x20\x6d\x69\x6d\x73\x79\x20\x77" + "\x65\x72\x65\x20\x74\x68\x65\x20" + "\x62\x6f\x72\x6f\x67\x6f\x76\x65" + "\x73\x2c\x0a\x41\x6e\x64\x20\x74" + "\x68\x65\x20\x6d\x6f\x6d\x65\x20" + "\x72\x61\x74\x68\x73\x20\x6f\x75" + "\x74\x67\x72\x61\x62\x65\x2e", + .ctext = "\xb9\x68\xbc\x6a\x24\xbc\xcc\xd8" + "\x9b\x2a\x8d\x5b\x96\xaf\x56\xe3" + "\x11\x61\xe7\xa7\x9b\xce\x4e\x7d" + "\x60\x02\x48\xac\xeb\xd5\x3a\x26" + "\x9d\x77\x3b\xb5\x32\x13\x86\x8e" + "\x20\x82\x26\x72\xae\x64\x1b\x7e" + "\x2e\x01\x68\xb4\x87\x45\xa1\x24" + "\xe4\x48\x40\xf0\xaa\xac\xee\xa9" + "\xfc\x31\xad\x9d\x89\xa3\xbb\xd2" + "\xe4\x25\x13\xad\x0f\x5e\xdf\x3c" + "\x27\xab\xb8\x62\x46\x22\x30\x48" + "\x55\x2c\x4e\x84\x78\x1d\x0d\x34" + "\x8d\x3c\x91\x0a\x7f\x5b\x19\x9f" + "\x97\x05\x4c\xa7\x62\x47\x8b\xc5" + "\x44\x2e\x20\x33\xdd\xa0\x82\xa9" + "\x25\x76\x37\xe6\x3c\x67\x5b", + .len = 127, + }, { + .key = "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a" + "\xf3\x33\x88\x86\x04\xf6\xb5\xf0" + "\x47\x39\x17\xc1\x40\x2b\x80\x09" + "\x9d\xca\x5c\xbc\x20\x70\x75\xc0", + .klen = 32, + .iv = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x01\x31\x58\xa3\x5a" + "\x25\x5d\x05\x17\x58\xe9\x5e\xd4" + "\x1c\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x49\xee\xe0\xdc\x24\x90\x40\xcd" + "\xc5\x40\x8f\x47\x05\xbc\xdd\x81" + "\x47\xc6\x8d\xe6\xb1\x8f\xd7\xcb" + "\x09\x0e\x6e\x22\x48\x1f\xbf\xb8" + "\x5c\xf7\x1e\x8a\xc1\x23\xf2\xd4" + "\x19\x4b\x01\x0f\x4e\xa4\x43\xce" + "\x01\xc6\x67\xda\x03\x91\x18\x90" + "\xa5\xa4\x8e\x45\x03\xb3\x2d\xac" + "\x74\x92\xd3\x53\x47\xc8\xdd\x25" + "\x53\x6c\x02\x03\x87\x0d\x11\x0c" + "\x58\xe3\x12\x18\xfd\x2a\x5b\x40" + "\x0c\x30\xf0\xb8\x3f\x43\xce\xae" + "\x65\x3a\x7d\x7c\xf4\x54\xaa\xcc" + "\x33\x97\xc3\x77\xba\xc5\x70\xde" + "\xd7\xd5\x13\xa5\x65\xc4\x5f\x0f" + "\x46\x1a\x0d\x97\xb5\xf3\xbb\x3c" + "\x84\x0f\x2b\xc5\xaa\xea\xf2\x6c" + "\xc9\xb5\x0c\xee\x15\xf3\x7d\xbe" + "\x9f\x7b\x5a\xa6\xae\x4f\x83\xb6" + "\x79\x49\x41\xf4\x58\x18\xcb\x86" + "\x7f\x30\x0e\xf8\x7d\x44\x36\xea" + "\x75\xeb\x88\x84\x40\x3c\xad\x4f" + "\x6f\x31\x6b\xaa\x5d\xe5\xa5\xc5" + "\x21\x66\xe9\xa7\xe3\xb2\x15\x88" + "\x78\xf6\x79\xa1\x59\x47\x12\x4e" + "\x9f\x9f\x64\x1a\xa0\x22\x5b\x08" + "\xbe\x7c\x36\xc2\x2b\x66\x33\x1b" + "\xdd\x60\x71\xf7\x47\x8c\x61\xc3" + "\xda\x8a\x78\x1e\x16\xfa\x1e\x86" + "\x81\xa6\x17\x2a\xa7\xb5\xc2\xe7" + "\xa4\xc7\x42\xf1\xcf\x6a\xca\xb4" + "\x45\xcf\xf3\x93\xf0\xe7\xea\xf6" + "\xf4\xe6\x33\x43\x84\x93\xa5\x67" + "\x9b\x16\x58\x58\x80\x0f\x2b\x5c" + "\x24\x74\x75\x7f\x95\x81\xb7\x30" + "\x7a\x33\xa7\xf7\x94\x87\x32\x27" + "\x10\x5d\x14\x4c\x43\x29\xdd\x26" + "\xbd\x3e\x3c\x0e\xfe\x0e\xa5\x10" + "\xea\x6b\x64\xfd\x73\xc6\xed\xec" + "\xa8\xc9\xbf\xb3\xba\x0b\x4d\x07" + "\x70\xfc\x16\xfd\x79\x1e\xd7\xc5" + "\x49\x4e\x1c\x8b\x8d\x79\x1b\xb1" + "\xec\xca\x60\x09\x4c\x6a\xd5\x09" + "\x49\x46\x00\x88\x22\x8d\xce\xea" + "\xb1\x17\x11\xde\x42\xd2\x23\xc1" + "\x72\x11\xf5\x50\x73\x04\x40\x47" + "\xf9\x5d\xe7\xa7\x26\xb1\x7e\xb0" + "\x3f\x58\xc1\x52\xab\x12\x67\x9d" + "\x3f\x43\x4b\x68\xd4\x9c\x68\x38" + "\x07\x8a\x2d\x3e\xf3\xaf\x6a\x4b" + "\xf9\xe5\x31\x69\x22\xf9\xa6\x69" + "\xc6\x9c\x96\x9a\x12\x35\x95\x1d" + "\x95\xd5\xdd\xbe\xbf\x93\x53\x24" + 
"\xfd\xeb\xc2\x0a\x64\xb0\x77\x00" + "\x6f\x88\xc4\x37\x18\x69\x7c\xd7" + "\x41\x92\x55\x4c\x03\xa1\x9a\x4b" + "\x15\xe5\xdf\x7f\x37\x33\x72\xc1" + "\x8b\x10\x67\xa3\x01\x57\x94\x25" + "\x7b\x38\x71\x7e\xdd\x1e\xcc\x73" + "\x55\xd2\x8e\xeb\x07\xdd\xf1\xda" + "\x58\xb1\x47\x90\xfe\x42\x21\x72" + "\xa3\x54\x7a\xa0\x40\xec\x9f\xdd" + "\xc6\x84\x6e\xca\xae\xe3\x68\xb4" + "\x9d\xe4\x78\xff\x57\xf2\xf8\x1b" + "\x03\xa1\x31\xd9\xde\x8d\xf5\x22" + "\x9c\xdd\x20\xa4\x1e\x27\xb1\x76" + "\x4f\x44\x55\xe2\x9b\xa1\x9c\xfe" + "\x54\xf7\x27\x1b\xf4\xde\x02\xf5" + "\x1b\x55\x48\x5c\xdc\x21\x4b\x9e" + "\x4b\x6e\xed\x46\x23\xdc\x65\xb2" + "\xcf\x79\x5f\x28\xe0\x9e\x8b\xe7" + "\x4c\x9d\x8a\xff\xc1\xa6\x28\xb8" + "\x65\x69\x8a\x45\x29\xef\x74\x85" + "\xde\x79\xc7\x08\xae\x30\xb0\xf4" + "\xa3\x1d\x51\x41\xab\xce\xcb\xf6" + "\xb5\xd8\x6d\xe0\x85\xe1\x98\xb3" + "\x43\xbb\x86\x83\x0a\xa0\xf5\xb7" + "\x04\x0b\xfa\x71\x1f\xb0\xf6\xd9" + "\x13\x00\x15\xf0\xc7\xeb\x0d\x5a" + "\x9f\xd7\xb9\x6c\x65\x14\x22\x45" + "\x6e\x45\x32\x3e\x7e\x60\x1a\x12" + "\x97\x82\x14\xfb\xaa\x04\x22\xfa" + "\xa0\xe5\x7e\x8c\x78\x02\x48\x5d" + "\x78\x33\x5a\x7c\xad\xdb\x29\xce" + "\xbb\x8b\x61\xa4\xb7\x42\xe2\xac" + "\x8b\x1a\xd9\x2f\x0b\x8b\x62\x21" + "\x83\x35\x7e\xad\x73\xc2\xb5\x6c" + "\x10\x26\x38\x07\xe5\xc7\x36\x80" + "\xe2\x23\x12\x61\xf5\x48\x4b\x2b" + "\xc5\xdf\x15\xd9\x87\x01\xaa\xac" + "\x1e\x7c\xad\x73\x78\x18\x63\xe0" + "\x8b\x9f\x81\xd8\x12\x6a\x28\x10" + "\xbe\x04\x68\x8a\x09\x7c\x1b\x1c" + "\x83\x66\x80\x47\x80\xe8\xfd\x35" + "\x1c\x97\x6f\xae\x49\x10\x66\xcc" + "\xc6\xd8\xcc\x3a\x84\x91\x20\x77" + "\x72\xe4\x24\xd2\x37\x9f\xc5\xc9" + "\x25\x94\x10\x5f\x40\x00\x64\x99" + "\xdc\xae\xd7\x21\x09\x78\x50\x15" + "\xac\x5f\xc6\x2c\xa2\x0b\xa9\x39" + "\x87\x6e\x6d\xab\xde\x08\x51\x16" + "\xc7\x13\xe9\xea\xed\x06\x8e\x2c" + "\xf8\x37\x8c\xf0\xa6\x96\x8d\x43" + "\xb6\x98\x37\xb2\x43\xed\xde\xdf" + "\x89\x1a\xe7\xeb\x9d\xa1\x7b\x0b" + "\x77\xb0\xe2\x75\xc0\xf1\x98\xd9" + "\x80\x55\xc9\x34\x91\xd1\x59\xe8" + "\x4b\x0f\xc1\xa9\x4b\x7a\x84\x06" + "\x20\xa8\x5d\xfa\xd1\xde\x70\x56" + "\x2f\x9e\x91\x9c\x20\xb3\x24\xd8" + "\x84\x3d\xe1\x8c\x7e\x62\x52\xe5" + "\x44\x4b\x9f\xc2\x93\x03\xea\x2b" + "\x59\xc5\xfa\x3f\x91\x2b\xbb\x23" + "\xf5\xb2\x7b\xf5\x38\xaf\xb3\xee" + "\x63\xdc\x7b\xd1\xff\xaa\x8b\xab" + "\x82\x6b\x37\x04\xeb\x74\xbe\x79" + "\xb9\x83\x90\xef\x20\x59\x46\xff" + "\xe9\x97\x3e\x2f\xee\xb6\x64\x18" + "\x38\x4c\x7a\x4a\xf9\x61\xe8\x9a" + "\xa1\xb5\x01\xa6\x47\xd3\x11\xd4" + "\xce\xd3\x91\x49\x88\xc7\xb8\x4d" + "\xb1\xb9\x07\x6d\x16\x72\xae\x46" + "\x5e\x03\xa1\x4b\xb6\x02\x30\xa8" + "\x3d\xa9\x07\x2a\x7c\x19\xe7\x62" + "\x87\xe3\x82\x2f\x6f\xe1\x09\xd9" + "\x94\x97\xea\xdd\x58\x9e\xae\x76" + "\x7e\x35\xe5\xb4\xda\x7e\xf4\xde" + "\xf7\x32\x87\xcd\x93\xbf\x11\x56" + "\x11\xbe\x08\x74\xe1\x69\xad\xe2" + "\xd7\xf8\x86\x75\x8a\x3c\xa4\xbe" + "\x70\xa7\x1b\xfc\x0b\x44\x2a\x76" + "\x35\xea\x5d\x85\x81\xaf\x85\xeb" + "\xa0\x1c\x61\xc2\xf7\x4f\xa5\xdc" + "\x02\x7f\xf6\x95\x40\x6e\x8a\x9a" + "\xf3\x5d\x25\x6e\x14\x3a\x22\xc9" + "\x37\x1c\xeb\x46\x54\x3f\xa5\x91" + "\xc2\xb5\x8c\xfe\x53\x08\x97\x32" + "\x1b\xb2\x30\x27\xfe\x25\x5d\xdc" + "\x08\x87\xd0\xe5\x94\x1a\xd4\xf1" + "\xfe\xd6\xb4\xa3\xe6\x74\x81\x3c" + "\x1b\xb7\x31\xa7\x22\xfd\xd4\xdd" + "\x20\x4e\x7c\x51\xb0\x60\x73\xb8" + "\x9c\xac\x91\x90\x7e\x01\xb0\xe1" + "\x8a\x2f\x75\x1c\x53\x2a\x98\x2a" + "\x06\x52\x95\x52\xb2\xe9\x25\x2e" + "\x4c\xe2\x5a\x00\xb2\x13\x81\x03" + "\x77\x66\x0d\xa5\x99\xda\x4e\x8c" + "\xac\xf3\x13\x53\x27\x45\xaf\x64" + "\x46\xdc\xea\x23\xda\x97\xd1\xab" + 
"\x7d\x6c\x30\x96\x1f\xbc\x06\x34" + "\x18\x0b\x5e\x21\x35\x11\x8d\x4c" + "\xe0\x2d\xe9\x50\x16\x74\x81\xa8" + "\xb4\x34\xb9\x72\x42\xa6\xcc\xbc" + "\xca\x34\x83\x27\x10\x5b\x68\x45" + "\x8f\x52\x22\x0c\x55\x3d\x29\x7c" + "\xe3\xc0\x66\x05\x42\x91\x5f\x58" + "\xfe\x4a\x62\xd9\x8c\xa9\x04\x19" + "\x04\xa9\x08\x4b\x57\xfc\x67\x53" + "\x08\x7c\xbc\x66\x8a\xb0\xb6\x9f" + "\x92\xd6\x41\x7c\x5b\x2a\x00\x79" + "\x72", + .ctext = "\xe1\xb6\x8b\x5c\x80\xb8\xcc\x08" + "\x1b\x84\xb2\xd1\xad\xa4\x70\xac" + "\x67\xa9\x39\x27\xac\xb4\x5b\xb7" + "\x4c\x26\x77\x23\x1d\xce\x0a\xbe" + "\x18\x9e\x42\x8b\xbd\x7f\xd6\xf1" + "\xf1\x6b\xe2\x6d\x7f\x92\x0e\xcb" + "\xb8\x79\xba\xb4\xac\x7e\x2d\xc0" + "\x9e\x83\x81\x91\xd5\xea\xc3\x12" + "\x8d\xa4\x26\x70\xa4\xf9\x71\x0b" + "\xbd\x2e\xe1\xb3\x80\x42\x25\xb3" + "\x0b\x31\x99\xe1\x0d\xde\xa6\x90" + "\xf2\xa3\x10\xf7\xe5\xf3\x83\x1e" + "\x2c\xfb\x4d\xf0\x45\x3d\x28\x3c" + "\xb8\xf1\xcb\xbf\x67\xd8\x43\x5a" + "\x9d\x7b\x73\x29\x88\x0f\x13\x06" + "\x37\x50\x0d\x7c\xe6\x9b\x07\xdd" + "\x7e\x01\x1f\x81\x90\x10\x69\xdb" + "\xa4\xad\x8a\x5e\xac\x30\x72\xf2" + "\x36\xcd\xe3\x23\x49\x02\x93\xfa" + "\x3d\xbb\xe2\x98\x83\xeb\xe9\x8d" + "\xb3\x8f\x11\xaa\x53\xdb\xaf\x2e" + "\x95\x13\x99\x3d\x71\xbd\x32\x92" + "\xdd\xfc\x9d\x5e\x6f\x63\x2c\xee" + "\x91\x1f\x4c\x64\x3d\x87\x55\x0f" + "\xcc\x3d\x89\x61\x53\x02\x57\x8f" + "\xe4\x77\x29\x32\xaf\xa6\x2f\x0a" + "\xae\x3c\x3f\x3f\xf4\xfb\x65\x52" + "\xc5\xc1\x78\x78\x53\x28\xad\xed" + "\xd1\x67\x37\xc7\x59\x70\xcd\x0a" + "\xb8\x0f\x80\x51\x9f\xc0\x12\x5e" + "\x06\x0a\x7e\xec\x24\x5f\x73\x00" + "\xb1\x0b\x31\x47\x4f\x73\x8d\xb4" + "\xce\xf3\x55\x45\x6c\x84\x27\xba" + "\xb9\x6f\x03\x4a\xeb\x98\x88\x6e" + "\x53\xed\x25\x19\x0d\x8f\xfe\xca" + "\x60\xe5\x00\x93\x6e\x3c\xff\x19" + "\xae\x08\x3b\x8a\xa6\x84\x05\xfe" + "\x9b\x59\xa0\x8c\xc8\x05\x45\xf5" + "\x05\x37\xdc\x45\x6f\x8b\x95\x8c" + "\x4e\x11\x45\x7a\xce\x21\xa5\xf7" + "\x71\x67\xb9\xce\xd7\xf9\xe9\x5e" + "\x60\xf5\x53\x7a\xa8\x85\x14\x03" + "\xa0\x92\xec\xf3\x51\x80\x84\xc4" + "\xdc\x11\x9e\x57\xce\x4b\x45\xcf" + "\x90\x95\x85\x0b\x96\xe9\xee\x35" + "\x10\xb8\x9b\xf2\x59\x4a\xc6\x7e" + "\x85\xe5\x6f\x38\x51\x93\x40\x0c" + "\x99\xd7\x7f\x32\xa8\x06\x27\xd1" + "\x2b\xd5\xb5\x3a\x1a\xe1\x5e\xda" + "\xcd\x5a\x50\x30\x3c\xc7\xe7\x65" + "\xa6\x07\x0b\x98\x91\xc6\x20\x27" + "\x2a\x03\x63\x1b\x1e\x3d\xaf\xc8" + "\x71\x48\x46\x6a\x64\x28\xf9\x3d" + "\xd1\x1d\xab\xc8\x40\x76\xc2\x39" + "\x4e\x00\x75\xd2\x0e\x82\x58\x8c" + "\xd3\x73\x5a\xea\x46\x89\xbe\xfd" + "\x4e\x2c\x0d\x94\xaa\x9b\x68\xac" + "\x86\x87\x30\x7e\xa9\x16\xcd\x59" + "\xd2\xa6\xbe\x0a\xd8\xf5\xfd\x2d" + "\x49\x69\xd2\x1a\x90\xd2\x1b\xed" + "\xff\x71\x04\x87\x87\x21\xc4\xb8" + "\x1f\x5b\x51\x33\xd0\xd6\x59\x9a" + "\x03\x0e\xd3\x8b\xfb\x57\x73\xfd" + "\x5a\x52\x63\x82\xc8\x85\x2f\xcb" + "\x74\x6d\x4e\xd9\x68\x37\x85\x6a" + "\xd4\xfb\x94\xed\x8d\xd1\x1a\xaf" + "\x76\xa7\xb7\x88\xd0\x2b\x4e\xda" + "\xec\x99\x94\x27\x6f\x87\x8c\xdf" + "\x4b\x5e\xa6\x66\xdd\xcb\x33\x7b" + "\x64\x94\x31\xa8\x37\xa6\x1d\xdb" + "\x0d\x5c\x93\xa4\x40\xf9\x30\x53" + "\x4b\x74\x8d\xdd\xf6\xde\x3c\xac" + "\x5c\x80\x01\x3a\xef\xb1\x9a\x02" + "\x0c\x22\x8e\xe7\x44\x09\x74\x4c" + "\xf2\x9a\x27\x69\x7f\x12\x32\x36" + "\xde\x92\xdf\xde\x8f\x5b\x31\xab" + "\x4a\x01\x26\xe0\xb1\xda\xe8\x37" + "\x21\x64\xe8\xff\x69\xfc\x9e\x41" + "\xd2\x96\x2d\x18\x64\x98\x33\x78" + "\x24\x61\x73\x9b\x47\x29\xf1\xa7" + "\xcb\x27\x0f\xf0\x85\x6d\x8c\x9d" + "\x2c\x95\x9e\xe5\xb2\x8e\x30\x29" + "\x78\x8a\x9d\x65\xb4\x8e\xde\x7b" + "\xd9\x00\x50\xf5\x7f\x81\xc3\x1b" + 
"\x25\x85\xeb\xc2\x8c\x33\x22\x1e" + "\x68\x38\x22\x30\xd8\x2e\x00\x98" + "\x85\x16\x06\x56\xb4\x81\x74\x20" + "\x95\xdb\x1c\x05\x19\xe8\x23\x4d" + "\x65\x5d\xcc\xd8\x7f\xc4\x2d\x0f" + "\x57\x26\x71\x07\xad\xaa\x71\x9f" + "\x19\x76\x2f\x25\x51\x88\xe4\xc0" + "\x82\x6e\x08\x05\x37\x04\xee\x25" + "\x23\x90\xe9\x4e\xce\x9b\x16\xc1" + "\x31\xe7\x6e\x2c\x1b\xe1\x85\x9a" + "\x0c\x8c\xbb\x12\x1e\x68\x7b\x93" + "\xa9\x3c\x39\x56\x23\x3e\x6e\xc7" + "\x77\x84\xd3\xe0\x86\x59\xaa\xb9" + "\xd5\x53\x58\xc9\x0a\x83\x5f\x85" + "\xd8\x47\x14\x67\x8a\x3c\x17\xe0" + "\xab\x02\x51\xea\xf1\xf0\x4f\x30" + "\x7d\xe0\x92\xc2\x5f\xfb\x19\x5a" + "\x3f\xbd\xf4\x39\xa4\x31\x0c\x39" + "\xd1\xae\x4e\xf7\x65\x7f\x1f\xce" + "\xc2\x39\xd1\x84\xd4\xe5\x02\xe0" + "\x58\xaa\xf1\x5e\x81\xaf\x7f\x72" + "\x0f\x08\x99\x43\xb9\xd8\xac\x41" + "\x35\x55\xf2\xb2\xd4\x98\xb8\x3b" + "\x2b\x3c\x3e\x16\x06\x31\xfc\x79" + "\x47\x38\x63\x51\xc5\xd0\x26\xd7" + "\x43\xb4\x2b\xd9\xc5\x05\xf2\x9d" + "\x18\xc9\x26\x82\x56\xd2\x11\x05" + "\xb6\x89\xb4\x43\x9c\xb5\x9d\x11" + "\x6c\x83\x37\x71\x27\x1c\xae\xbf" + "\xcd\x57\xd2\xee\x0d\x5a\x15\x26" + "\x67\x88\x80\x80\x1b\xdc\xc1\x62" + "\xdd\x4c\xff\x92\x5c\x6c\xe1\xa0" + "\xe3\x79\xa9\x65\x8c\x8c\x14\x42" + "\xe5\x11\xd2\x1a\xad\xa9\x56\x6f" + "\x98\xfc\x8a\x7b\x56\x1f\xc6\xc1" + "\x52\x12\x92\x9b\x41\x0f\x4b\xae" + "\x1b\x4a\xbc\xfe\x23\xb6\x94\x70" + "\x04\x30\x9e\x69\x47\xbe\xb8\x8f" + "\xca\x45\xd7\x8a\xf4\x78\x3e\xaa" + "\x71\x17\xd8\x1e\xb8\x11\x8f\xbc" + "\xc8\x1a\x65\x7b\x41\x89\x72\xc7" + "\x5f\xbe\xc5\x2a\xdb\x5c\x54\xf9" + "\x25\xa3\x7a\x80\x56\x9c\x8c\xab" + "\x26\x19\x10\x36\xa6\xf3\x14\x79" + "\x40\x98\x70\x68\xb7\x35\xd9\xb9" + "\x27\xd4\xe7\x74\x5b\x3d\x97\xb4" + "\xd9\xaa\xd9\xf2\xb5\x14\x84\x1f" + "\xa9\xde\x12\x44\x5b\x00\xc0\xbc" + "\xc8\x11\x25\x1b\x67\x7a\x15\x72" + "\xa6\x31\x6f\xf4\x68\x7a\x86\x9d" + "\x43\x1c\x5f\x16\xd3\xad\x2e\x52" + "\xf3\xb4\xc3\xfa\x27\x2e\x68\x6c" + "\x06\xe7\x4c\x4f\xa2\xe0\xe4\x21" + "\x5d\x9e\x33\x58\x8d\xbf\xd5\x70" + "\xf8\x80\xa5\xdd\xe7\x18\x79\xfa" + "\x7b\xfd\x09\x69\x2c\x37\x32\xa8" + "\x65\xfa\x8d\x8b\x5c\xcc\xe8\xf3" + "\x37\xf6\xa6\xc6\x5c\xa2\x66\x79" + "\xfa\x8a\xa7\xd1\x0b\x2e\x1b\x5e" + "\x95\x35\x00\x76\xae\x42\xf7\x50" + "\x51\x78\xfb\xb4\x28\x24\xde\x1a" + "\x70\x8b\xed\xca\x3c\x5e\xe4\xbd" + "\x28\xb5\xf3\x76\x4f\x67\x5d\x81" + "\xb2\x60\x87\xd9\x7b\x19\x1a\xa7" + "\x79\xa2\xfa\x3f\x9e\xa9\xd7\x25" + "\x61\xe1\x74\x31\xa2\x77\xa0\x1b" + "\xf6\xf7\xcb\xc5\xaa\x9e\xce\xf9" + "\x9b\x96\xef\x51\xc3\x1a\x44\x96" + "\xae\x17\x50\xab\x29\x08\xda\xcc" + "\x1a\xb3\x12\xd0\x24\xe4\xe2\xe0" + "\xc6\xe3\xcc\x82\xd0\xba\x47\x4c" + "\x3f\x49\xd7\xe8\xb6\x61\xaa\x65" + "\x25\x18\x40\x2d\x62\x25\x02\x71" + "\x61\xa2\xc1\xb2\x13\xd2\x71\x3f" + "\x43\x1a\xc9\x09\x92\xff\xd5\x57" + "\xf0\xfc\x5e\x1c\xf1\xf5\xf9\xf3" + "\x5b", + .len = 1281, + .also_non_np = 1, + .np = 3, + .tap = { 1200, 1, 80 }, + }, { + .key = "\x80\x81\x82\x83\x84\x85\x86\x87" + "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f" + "\x90\x91\x92\x93\x94\x95\x96\x97" + "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f", + .klen = 32, + .iv = "\x40\x41\x42\x43\x44\x45\x46\x47" + "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f" + "\x50\x51\x52\x53\x54\x55\x56\x58" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x54\x68\x65\x20\x64\x68\x6f\x6c" + "\x65\x20\x28\x70\x72\x6f\x6e\x6f" + "\x75\x6e\x63\x65\x64\x20\x22\x64" + "\x6f\x6c\x65\x22\x29\x20\x69\x73" + "\x20\x61\x6c\x73\x6f\x20\x6b\x6e" + "\x6f\x77\x6e\x20\x61\x73\x20\x74" + "\x68\x65\x20\x41\x73\x69\x61\x74" + "\x69\x63\x20\x77\x69\x6c\x64\x20" + 
"\x64\x6f\x67\x2c\x20\x72\x65\x64" + "\x20\x64\x6f\x67\x2c\x20\x61\x6e" + "\x64\x20\x77\x68\x69\x73\x74\x6c" + "\x69\x6e\x67\x20\x64\x6f\x67\x2e" + "\x20\x49\x74\x20\x69\x73\x20\x61" + "\x62\x6f\x75\x74\x20\x74\x68\x65" + "\x20\x73\x69\x7a\x65\x20\x6f\x66" + "\x20\x61\x20\x47\x65\x72\x6d\x61" + "\x6e\x20\x73\x68\x65\x70\x68\x65" + "\x72\x64\x20\x62\x75\x74\x20\x6c" + "\x6f\x6f\x6b\x73\x20\x6d\x6f\x72" + "\x65\x20\x6c\x69\x6b\x65\x20\x61" + "\x20\x6c\x6f\x6e\x67\x2d\x6c\x65" + "\x67\x67\x65\x64\x20\x66\x6f\x78" + "\x2e\x20\x54\x68\x69\x73\x20\x68" + "\x69\x67\x68\x6c\x79\x20\x65\x6c" + "\x75\x73\x69\x76\x65\x20\x61\x6e" + "\x64\x20\x73\x6b\x69\x6c\x6c\x65" + "\x64\x20\x6a\x75\x6d\x70\x65\x72" + "\x20\x69\x73\x20\x63\x6c\x61\x73" + "\x73\x69\x66\x69\x65\x64\x20\x77" + "\x69\x74\x68\x20\x77\x6f\x6c\x76" + "\x65\x73\x2c\x20\x63\x6f\x79\x6f" + "\x74\x65\x73\x2c\x20\x6a\x61\x63" + "\x6b\x61\x6c\x73\x2c\x20\x61\x6e" + "\x64\x20\x66\x6f\x78\x65\x73\x20" + "\x69\x6e\x20\x74\x68\x65\x20\x74" + "\x61\x78\x6f\x6e\x6f\x6d\x69\x63" + "\x20\x66\x61\x6d\x69\x6c\x79\x20" + "\x43\x61\x6e\x69\x64\x61\x65\x2e", + .ctext = "\x9f\x1a\xab\x8a\x95\xf4\x7e\xcd" + "\xee\x34\xc0\x39\xd6\x23\x43\x94" + "\xf6\x01\xc1\x7f\x60\x91\xa5\x23" + "\x4a\x8a\xe6\xb1\x14\x8b\xd7\x58" + "\xee\x02\xad\xab\xce\x1e\x7d\xdf" + "\xf9\x49\x27\x69\xd0\x8d\x0c\x20" + "\x6e\x17\xc4\xae\x87\x7a\xc6\x61" + "\x91\xe2\x8e\x0a\x1d\x61\xcc\x38" + "\x02\x64\x43\x49\xc6\xb2\x59\x59" + "\x42\xe7\x9d\x83\x00\x60\x90\xd2" + "\xb9\xcd\x97\x6e\xc7\x95\x71\xbc" + "\x23\x31\x58\x07\xb3\xb4\xac\x0b" + "\x87\x64\x56\xe5\xe3\xec\x63\xa1" + "\x71\x8c\x08\x48\x33\x20\x29\x81" + "\xea\x01\x25\x20\xc3\xda\xe6\xee" + "\x6a\x03\xf6\x68\x4d\x26\xa0\x91" + "\x9e\x44\xb8\xc1\xc0\x8f\x5a\x6a" + "\xc0\xcd\xbf\x24\x5e\x40\x66\xd2" + "\x42\x24\xb5\xbf\xc1\xeb\x12\x60" + "\x56\xbe\xb1\xa6\xc4\x0f\xfc\x49" + "\x69\x9f\xcc\x06\x5c\xe3\x26\xd7" + "\x52\xc0\x42\xe8\xb4\x76\xc3\xee" + "\xb2\x97\xe3\x37\x61\x29\x5a\xb5" + "\x8e\xe8\x8c\xc5\x38\xcc\xcb\xec" + "\x64\x1a\xa9\x12\x5f\xf7\x79\xdf" + "\x64\xca\x77\x4e\xbd\xf9\x83\xa0" + "\x13\x27\x3f\x31\x03\x63\x30\x26" + "\x27\x0b\x3e\xb3\x23\x13\x61\x0b" + "\x70\x1d\xd4\xad\x85\x1e\xbf\xdf" + "\xc6\x8e\x4d\x08\xcc\x7e\x77\xbd" + "\x1e\x18\x77\x38\x3a\xfe\xc0\x5d" + "\x16\xfc\xf0\xa9\x2f\xe9\x17\xc7" + "\xd3\x23\x17\x18\xa3\xe6\x54\x77" + "\x6f\x1b\xbe\x8a\x6e\x7e\xca\x97" + "\x08\x05\x36\x76\xaf\x12\x7a\x42" + "\xf7\x7a\xc2\x35\xc3\xb4\x93\x40" + "\x54\x14\x90\xa0\x4d\x65\x1c\x37" + "\x50\x70\x44\x29\x6d\x6e\x62\x68", + .len = 304, + } +}; + +/* Adiantum test vectors from https://github.com/google/adiantum */ +static const struct cipher_testvec adiantum_xchacha12_aes_tv_template[] = { + { + .key = "\x9e\xeb\xb2\x49\x3c\x1c\xf5\xf4" + "\x6a\x99\xc2\xc4\xdf\xb1\xf4\xdd" + "\x75\x20\x57\xea\x2c\x4f\xcd\xb2" + "\xa5\x3d\x7b\x49\x1e\xab\xfd\x0f", + .klen = 32, + .iv = "\xdf\x63\xd4\xab\xd2\x49\xf3\xd8" + "\x33\x81\x37\x60\x7d\xfa\x73\x08" + "\xd8\x49\x6d\x80\xe8\x2f\x62\x54" + "\xeb\x0e\xa9\x39\x5b\x45\x7f\x8a", + .ptext = "\x67\xc9\xf2\x30\x84\x41\x8e\x43" + "\xfb\xf3\xb3\x3e\x79\x36\x7f\xe8", + .ctext = "\x6d\x32\x86\x18\x67\x86\x0f\x3f" + "\x96\x7c\x9d\x28\x0d\x53\xec\x9f", + .len = 16, + .also_non_np = 1, + .np = 2, + .tap = { 14, 2 }, + }, { + .key = "\x36\x2b\x57\x97\xf8\x5d\xcd\x99" + "\x5f\x1a\x5a\x44\x1d\x92\x0f\x27" + "\xcc\x16\xd7\x2b\x85\x63\x99\xd3" + "\xba\x96\xa1\xdb\xd2\x60\x68\xda", + .klen = 32, + .iv = "\xef\x58\x69\xb1\x2c\x5e\x9a\x47" + "\x24\xc1\xb1\x69\xe1\x12\x93\x8f" + "\x43\x3d\x6d\x00\xdb\x5e\xd8\xd9" + 
"\x12\x9a\xfe\xd9\xff\x2d\xaa\xc4", + .ptext = "\x5e\xa8\x68\x19\x85\x98\x12\x23" + "\x26\x0a\xcc\xdb\x0a\x04\xb9\xdf" + "\x4d\xb3\x48\x7b\xb0\xe3\xc8\x19" + "\x43\x5a\x46\x06\x94\x2d\xf2", + .ctext = "\xc7\xc6\xf1\x73\x8f\xc4\xff\x4a" + "\x39\xbe\x78\xbe\x8d\x28\xc8\x89" + "\x46\x63\xe7\x0c\x7d\x87\xe8\x4e" + "\xc9\x18\x7b\xbe\x18\x60\x50", + .len = 31, + }, { + .key = "\xa5\x28\x24\x34\x1a\x3c\xd8\xf7" + "\x05\x91\x8f\xee\x85\x1f\x35\x7f" + "\x80\x3d\xfc\x9b\x94\xf6\xfc\x9e" + "\x19\x09\x00\xa9\x04\x31\x4f\x11", + .klen = 32, + .iv = "\xa1\xba\x49\x95\xff\x34\x6d\xb8" + "\xcd\x87\x5d\x5e\xfd\xea\x85\xdb" + "\x8a\x7b\x5e\xb2\x5d\x57\xdd\x62" + "\xac\xa9\x8c\x41\x42\x94\x75\xb7", + .ptext = "\x69\xb4\xe8\x8c\x37\xe8\x67\x82" + "\xf1\xec\x5d\x04\xe5\x14\x91\x13" + "\xdf\xf2\x87\x1b\x69\x81\x1d\x71" + "\x70\x9e\x9c\x3b\xde\x49\x70\x11" + "\xa0\xa3\xdb\x0d\x54\x4f\x66\x69" + "\xd7\xdb\x80\xa7\x70\x92\x68\xce" + "\x81\x04\x2c\xc6\xab\xae\xe5\x60" + "\x15\xe9\x6f\xef\xaa\x8f\xa7\xa7" + "\x63\x8f\xf2\xf0\x77\xf1\xa8\xea" + "\xe1\xb7\x1f\x9e\xab\x9e\x4b\x3f" + "\x07\x87\x5b\x6f\xcd\xa8\xaf\xb9" + "\xfa\x70\x0b\x52\xb8\xa8\xa7\x9e" + "\x07\x5f\xa6\x0e\xb3\x9b\x79\x13" + "\x79\xc3\x3e\x8d\x1c\x2c\x68\xc8" + "\x51\x1d\x3c\x7b\x7d\x79\x77\x2a" + "\x56\x65\xc5\x54\x23\x28\xb0\x03", + .ctext = "\x9e\x16\xab\xed\x4b\xa7\x42\x5a" + "\xc6\xfb\x4e\x76\xff\xbe\x03\xa0" + "\x0f\xe3\xad\xba\xe4\x98\x2b\x0e" + "\x21\x48\xa0\xb8\x65\x48\x27\x48" + "\x84\x54\x54\xb2\x9a\x94\x7b\xe6" + "\x4b\x29\xe9\xcf\x05\x91\x80\x1a" + "\x3a\xf3\x41\x96\x85\x1d\x9f\x74" + "\x51\x56\x63\xfa\x7c\x28\x85\x49" + "\xf7\x2f\xf9\xf2\x18\x46\xf5\x33" + "\x80\xa3\x3c\xce\xb2\x57\x93\xf5" + "\xae\xbd\xa9\xf5\x7b\x30\xc4\x93" + "\x66\xe0\x30\x77\x16\xe4\xa0\x31" + "\xba\x70\xbc\x68\x13\xf5\xb0\x9a" + "\xc1\xfc\x7e\xfe\x55\x80\x5c\x48" + "\x74\xa6\xaa\xa3\xac\xdc\xc2\xf5" + "\x8d\xde\x34\x86\x78\x60\x75\x8d", + .len = 128, + .also_non_np = 1, + .np = 4, + .tap = { 104, 16, 4, 4 }, + }, { + .key = "\xd3\x81\x72\x18\x23\xff\x6f\x4a" + "\x25\x74\x29\x0d\x51\x8a\x0e\x13" + "\xc1\x53\x5d\x30\x8d\xee\x75\x0d" + "\x14\xd6\x69\xc9\x15\xa9\x0c\x60", + .klen = 32, + .iv = "\x65\x9b\xd4\xa8\x7d\x29\x1d\xf4" + "\xc4\xd6\x9b\x6a\x28\xab\x64\xe2" + "\x62\x81\x97\xc5\x81\xaa\xf9\x44" + "\xc1\x72\x59\x82\xaf\x16\xc8\x2c", + .ptext = "\xc7\x6b\x52\x6a\x10\xf0\xcc\x09" + "\xc1\x12\x1d\x6d\x21\xa6\x78\xf5" + "\x05\xa3\x69\x60\x91\x36\x98\x57" + "\xba\x0c\x14\xcc\xf3\x2d\x73\x03" + "\xc6\xb2\x5f\xc8\x16\x27\x37\x5d" + "\xd0\x0b\x87\xb2\x50\x94\x7b\x58" + "\x04\xf4\xe0\x7f\x6e\x57\x8e\xc9" + "\x41\x84\xc1\xb1\x7e\x4b\x91\x12" + "\x3a\x8b\x5d\x50\x82\x7b\xcb\xd9" + "\x9a\xd9\x4e\x18\x06\x23\x9e\xd4" + "\xa5\x20\x98\xef\xb5\xda\xe5\xc0" + "\x8a\x6a\x83\x77\x15\x84\x1e\xae" + "\x78\x94\x9d\xdf\xb7\xd1\xea\x67" + "\xaa\xb0\x14\x15\xfa\x67\x21\x84" + "\xd3\x41\x2a\xce\xba\x4b\x4a\xe8" + "\x95\x62\xa9\x55\xf0\x80\xad\xbd" + "\xab\xaf\xdd\x4f\xa5\x7c\x13\x36" + "\xed\x5e\x4f\x72\xad\x4b\xf1\xd0" + "\x88\x4e\xec\x2c\x88\x10\x5e\xea" + "\x12\xc0\x16\x01\x29\xa3\xa0\x55" + "\xaa\x68\xf3\xe9\x9d\x3b\x0d\x3b" + "\x6d\xec\xf8\xa0\x2d\xf0\x90\x8d" + "\x1c\xe2\x88\xd4\x24\x71\xf9\xb3" + "\xc1\x9f\xc5\xd6\x76\x70\xc5\x2e" + "\x9c\xac\xdb\x90\xbd\x83\x72\xba" + "\x6e\xb5\xa5\x53\x83\xa9\xa5\xbf" + "\x7d\x06\x0e\x3c\x2a\xd2\x04\xb5" + "\x1e\x19\x38\x09\x16\xd2\x82\x1f" + "\x75\x18\x56\xb8\x96\x0b\xa6\xf9" + "\xcf\x62\xd9\x32\x5d\xa9\xd7\x1d" + "\xec\xe4\xdf\x1b\xbe\xf1\x36\xee" + "\xe3\x7b\xb5\x2f\xee\xf8\x53\x3d" + "\x6a\xb7\x70\xa9\xfc\x9c\x57\x25" + 
"\xf2\x89\x10\xd3\xb8\xa8\x8c\x30" + "\xae\x23\x4f\x0e\x13\x66\x4f\xe1" + "\xb6\xc0\xe4\xf8\xef\x93\xbd\x6e" + "\x15\x85\x6b\xe3\x60\x81\x1d\x68" + "\xd7\x31\x87\x89\x09\xab\xd5\x96" + "\x1d\xf3\x6d\x67\x80\xca\x07\x31" + "\x5d\xa7\xe4\xfb\x3e\xf2\x9b\x33" + "\x52\x18\xc8\x30\xfe\x2d\xca\x1e" + "\x79\x92\x7a\x60\x5c\xb6\x58\x87" + "\xa4\x36\xa2\x67\x92\x8b\xa4\xb7" + "\xf1\x86\xdf\xdc\xc0\x7e\x8f\x63" + "\xd2\xa2\xdc\x78\xeb\x4f\xd8\x96" + "\x47\xca\xb8\x91\xf9\xf7\x94\x21" + "\x5f\x9a\x9f\x5b\xb8\x40\x41\x4b" + "\x66\x69\x6a\x72\xd0\xcb\x70\xb7" + "\x93\xb5\x37\x96\x05\x37\x4f\xe5" + "\x8c\xa7\x5a\x4e\x8b\xb7\x84\xea" + "\xc7\xfc\x19\x6e\x1f\x5a\xa1\xac" + "\x18\x7d\x52\x3b\xb3\x34\x62\x99" + "\xe4\x9e\x31\x04\x3f\xc0\x8d\x84" + "\x17\x7c\x25\x48\x52\x67\x11\x27" + "\x67\xbb\x5a\x85\xca\x56\xb2\x5c" + "\xe6\xec\xd5\x96\x3d\x15\xfc\xfb" + "\x22\x25\xf4\x13\xe5\x93\x4b\x9a" + "\x77\xf1\x52\x18\xfa\x16\x5e\x49" + "\x03\x45\xa8\x08\xfa\xb3\x41\x92" + "\x79\x50\x33\xca\xd0\xd7\x42\x55" + "\xc3\x9a\x0c\x4e\xd9\xa4\x3c\x86" + "\x80\x9f\x53\xd1\xa4\x2e\xd1\xbc" + "\xf1\x54\x6e\x93\xa4\x65\x99\x8e" + "\xdf\x29\xc0\x64\x63\x07\xbb\xea", + .ctext = "\x15\x97\xd0\x86\x18\x03\x9c\x51" + "\xc5\x11\x36\x62\x13\x92\xe6\x73" + "\x29\x79\xde\xa1\x00\x3e\x08\x64" + "\x17\x1a\xbc\xd5\xfe\x33\x0e\x0c" + "\x7c\x94\xa7\xc6\x3c\xbe\xac\xa2" + "\x89\xe6\xbc\xdf\x0c\x33\x27\x42" + "\x46\x73\x2f\xba\x4e\xa6\x46\x8f" + "\xe4\xee\x39\x63\x42\x65\xa3\x88" + "\x7a\xad\x33\x23\xa9\xa7\x20\x7f" + "\x0b\xe6\x6a\xc3\x60\xda\x9e\xb4" + "\xd6\x07\x8a\x77\x26\xd1\xab\x44" + "\x99\x55\x03\x5e\xed\x8d\x7b\xbd" + "\xc8\x21\xb7\x21\x30\x3f\xc0\xb5" + "\xc8\xec\x6c\x23\xa6\xa3\x6d\xf1" + "\x30\x0a\xd0\xa6\xa9\x28\x69\xae" + "\x2a\xe6\x54\xac\x82\x9d\x6a\x95" + "\x6f\x06\x44\xc5\x5a\x77\x6e\xec" + "\xf8\xf8\x63\xb2\xe6\xaa\xbd\x8e" + "\x0e\x8a\x62\x00\x03\xc8\x84\xdd" + "\x47\x4a\xc3\x55\xba\xb7\xe7\xdf" + "\x08\xbf\x62\xf5\xe8\xbc\xb6\x11" + "\xe4\xcb\xd0\x66\x74\x32\xcf\xd4" + "\xf8\x51\x80\x39\x14\x05\x12\xdb" + "\x87\x93\xe2\x26\x30\x9c\x3a\x21" + "\xe5\xd0\x38\x57\x80\x15\xe4\x08" + "\x58\x05\x49\x7d\xe6\x92\x77\x70" + "\xfb\x1e\x2d\x6a\x84\x00\xc8\x68" + "\xf7\x1a\xdd\xf0\x7b\x38\x1e\xd8" + "\x2c\x78\x78\x61\xcf\xe3\xde\x69" + "\x1f\xd5\x03\xd5\x1a\xb4\xcf\x03" + "\xc8\x7a\x70\x68\x35\xb4\xf6\xbe" + "\x90\x62\xb2\x28\x99\x86\xf5\x44" + "\x99\xeb\x31\xcf\xca\xdf\xd0\x21" + "\xd6\x60\xf7\x0f\x40\xb4\x80\xb7" + "\xab\xe1\x9b\x45\xba\x66\xda\xee" + "\xdd\x04\x12\x40\x98\xe1\x69\xe5" + "\x2b\x9c\x59\x80\xe7\x7b\xcc\x63" + "\xa6\xc0\x3a\xa9\xfe\x8a\xf9\x62" + "\x11\x34\x61\x94\x35\xfe\xf2\x99" + "\xfd\xee\x19\xea\x95\xb6\x12\xbf" + "\x1b\xdf\x02\x1a\xcc\x3e\x7e\x65" + "\x78\x74\x10\x50\x29\x63\x28\xea" + "\x6b\xab\xd4\x06\x4d\x15\x24\x31" + "\xc7\x0a\xc9\x16\xb6\x48\xf0\xbf" + "\x49\xdb\x68\x71\x31\x8f\x87\xe2" + "\x13\x05\x64\xd6\x22\x0c\xf8\x36" + "\x84\x24\x3e\x69\x5e\xb8\x9e\x16" + "\x73\x6c\x83\x1e\xe0\x9f\x9e\xba" + "\xe5\x59\x21\x33\x1b\xa9\x26\xc2" + "\xc7\xd9\x30\x73\xb6\xa6\x73\x82" + "\x19\xfa\x44\x4d\x40\x8b\x69\x04" + "\x94\x74\xea\x6e\xb3\x09\x47\x01" + "\x2a\xb9\x78\x34\x43\x11\xed\xd6" + "\x8c\x95\x65\x1b\x85\x67\xa5\x40" + "\xac\x9c\x05\x4b\x57\x4a\xa9\x96" + "\x0f\xdd\x4f\xa1\xe0\xcf\x6e\xc7" + "\x1b\xed\xa2\xb4\x56\x8c\x09\x6e" + "\xa6\x65\xd7\x55\x81\xb7\xed\x11" + "\x9b\x40\x75\xa8\x6b\x56\xaf\x16" + "\x8b\x3d\xf4\xcb\xfe\xd5\x1d\x3d" + "\x85\xc2\xc0\xde\x43\x39\x4a\x96" + "\xba\x88\x97\xc0\xd6\x00\x0e\x27" + "\x21\xb0\x21\x52\xba\xa7\x37\xaa" + "\xcc\xbf\x95\xa8\xf4\xd0\x91\xf6", + .len = 512, + .also_non_np = 
1, + .np = 2, + .tap = { 144, 368 }, + } +}; + +/* Adiantum with XChaCha20 instead of XChaCha12 */ +/* Test vectors from https://github.com/google/adiantum */ +static const struct cipher_testvec adiantum_xchacha20_aes_tv_template[] = { + { + .key = "\x9e\xeb\xb2\x49\x3c\x1c\xf5\xf4" + "\x6a\x99\xc2\xc4\xdf\xb1\xf4\xdd" + "\x75\x20\x57\xea\x2c\x4f\xcd\xb2" + "\xa5\x3d\x7b\x49\x1e\xab\xfd\x0f", + .klen = 32, + .iv = "\xdf\x63\xd4\xab\xd2\x49\xf3\xd8" + "\x33\x81\x37\x60\x7d\xfa\x73\x08" + "\xd8\x49\x6d\x80\xe8\x2f\x62\x54" + "\xeb\x0e\xa9\x39\x5b\x45\x7f\x8a", + .ptext = "\x67\xc9\xf2\x30\x84\x41\x8e\x43" + "\xfb\xf3\xb3\x3e\x79\x36\x7f\xe8", + .ctext = "\xf6\x78\x97\xd6\xaa\x94\x01\x27" + "\x2e\x4d\x83\xe0\x6e\x64\x9a\xdf", + .len = 16, + .also_non_np = 1, + .np = 3, + .tap = { 5, 2, 9 }, + }, { + .key = "\x36\x2b\x57\x97\xf8\x5d\xcd\x99" + "\x5f\x1a\x5a\x44\x1d\x92\x0f\x27" + "\xcc\x16\xd7\x2b\x85\x63\x99\xd3" + "\xba\x96\xa1\xdb\xd2\x60\x68\xda", + .klen = 32, + .iv = "\xef\x58\x69\xb1\x2c\x5e\x9a\x47" + "\x24\xc1\xb1\x69\xe1\x12\x93\x8f" + "\x43\x3d\x6d\x00\xdb\x5e\xd8\xd9" + "\x12\x9a\xfe\xd9\xff\x2d\xaa\xc4", + .ptext = "\x5e\xa8\x68\x19\x85\x98\x12\x23" + "\x26\x0a\xcc\xdb\x0a\x04\xb9\xdf" + "\x4d\xb3\x48\x7b\xb0\xe3\xc8\x19" + "\x43\x5a\x46\x06\x94\x2d\xf2", + .ctext = "\x4b\xb8\x90\x10\xdf\x7f\x64\x08" + "\x0e\x14\x42\x5f\x00\x74\x09\x36" + "\x57\x72\xb5\xfd\xb5\x5d\xb8\x28" + "\x0c\x04\x91\x14\x91\xe9\x37", + .len = 31, + .also_non_np = 1, + .np = 2, + .tap = { 16, 15 }, + }, { + .key = "\xa5\x28\x24\x34\x1a\x3c\xd8\xf7" + "\x05\x91\x8f\xee\x85\x1f\x35\x7f" + "\x80\x3d\xfc\x9b\x94\xf6\xfc\x9e" + "\x19\x09\x00\xa9\x04\x31\x4f\x11", + .klen = 32, + .iv = "\xa1\xba\x49\x95\xff\x34\x6d\xb8" + "\xcd\x87\x5d\x5e\xfd\xea\x85\xdb" + "\x8a\x7b\x5e\xb2\x5d\x57\xdd\x62" + "\xac\xa9\x8c\x41\x42\x94\x75\xb7", + .ptext = "\x69\xb4\xe8\x8c\x37\xe8\x67\x82" + "\xf1\xec\x5d\x04\xe5\x14\x91\x13" + "\xdf\xf2\x87\x1b\x69\x81\x1d\x71" + "\x70\x9e\x9c\x3b\xde\x49\x70\x11" + "\xa0\xa3\xdb\x0d\x54\x4f\x66\x69" + "\xd7\xdb\x80\xa7\x70\x92\x68\xce" + "\x81\x04\x2c\xc6\xab\xae\xe5\x60" + "\x15\xe9\x6f\xef\xaa\x8f\xa7\xa7" + "\x63\x8f\xf2\xf0\x77\xf1\xa8\xea" + "\xe1\xb7\x1f\x9e\xab\x9e\x4b\x3f" + "\x07\x87\x5b\x6f\xcd\xa8\xaf\xb9" + "\xfa\x70\x0b\x52\xb8\xa8\xa7\x9e" + "\x07\x5f\xa6\x0e\xb3\x9b\x79\x13" + "\x79\xc3\x3e\x8d\x1c\x2c\x68\xc8" + "\x51\x1d\x3c\x7b\x7d\x79\x77\x2a" + "\x56\x65\xc5\x54\x23\x28\xb0\x03", + .ctext = "\xb1\x8b\xa0\x05\x77\xa8\x4d\x59" + "\x1b\x8e\x21\xfc\x3a\x49\xfa\xd4" + "\xeb\x36\xf3\xc4\xdf\xdc\xae\x67" + "\x07\x3f\x70\x0e\xe9\x66\xf5\x0c" + "\x30\x4d\x66\xc9\xa4\x2f\x73\x9c" + "\x13\xc8\x49\x44\xcc\x0a\x90\x9d" + "\x7c\xdd\x19\x3f\xea\x72\x8d\x58" + "\xab\xe7\x09\x2c\xec\xb5\x44\xd2" + "\xca\xa6\x2d\x7a\x5c\x9c\x2b\x15" + "\xec\x2a\xa6\x69\x91\xf9\xf3\x13" + "\xf7\x72\xc1\xc1\x40\xd5\xe1\x94" + "\xf4\x29\xa1\x3e\x25\x02\xa8\x3e" + "\x94\xc1\x91\x14\xa1\x14\xcb\xbe" + "\x67\x4c\xb9\x38\xfe\xa7\xaa\x32" + "\x29\x62\x0d\xb2\xf6\x3c\x58\x57" + "\xc1\xd5\x5a\xbb\xd6\xa6\x2a\xe5", + .len = 128, + .also_non_np = 1, + .np = 4, + .tap = { 112, 7, 8, 1 }, + }, { + .key = "\xd3\x81\x72\x18\x23\xff\x6f\x4a" + "\x25\x74\x29\x0d\x51\x8a\x0e\x13" + "\xc1\x53\x5d\x30\x8d\xee\x75\x0d" + "\x14\xd6\x69\xc9\x15\xa9\x0c\x60", + .klen = 32, + .iv = "\x65\x9b\xd4\xa8\x7d\x29\x1d\xf4" + "\xc4\xd6\x9b\x6a\x28\xab\x64\xe2" + "\x62\x81\x97\xc5\x81\xaa\xf9\x44" + "\xc1\x72\x59\x82\xaf\x16\xc8\x2c", + .ptext = "\xc7\x6b\x52\x6a\x10\xf0\xcc\x09" + "\xc1\x12\x1d\x6d\x21\xa6\x78\xf5" + 
"\x05\xa3\x69\x60\x91\x36\x98\x57" + "\xba\x0c\x14\xcc\xf3\x2d\x73\x03" + "\xc6\xb2\x5f\xc8\x16\x27\x37\x5d" + "\xd0\x0b\x87\xb2\x50\x94\x7b\x58" + "\x04\xf4\xe0\x7f\x6e\x57\x8e\xc9" + "\x41\x84\xc1\xb1\x7e\x4b\x91\x12" + "\x3a\x8b\x5d\x50\x82\x7b\xcb\xd9" + "\x9a\xd9\x4e\x18\x06\x23\x9e\xd4" + "\xa5\x20\x98\xef\xb5\xda\xe5\xc0" + "\x8a\x6a\x83\x77\x15\x84\x1e\xae" + "\x78\x94\x9d\xdf\xb7\xd1\xea\x67" + "\xaa\xb0\x14\x15\xfa\x67\x21\x84" + "\xd3\x41\x2a\xce\xba\x4b\x4a\xe8" + "\x95\x62\xa9\x55\xf0\x80\xad\xbd" + "\xab\xaf\xdd\x4f\xa5\x7c\x13\x36" + "\xed\x5e\x4f\x72\xad\x4b\xf1\xd0" + "\x88\x4e\xec\x2c\x88\x10\x5e\xea" + "\x12\xc0\x16\x01\x29\xa3\xa0\x55" + "\xaa\x68\xf3\xe9\x9d\x3b\x0d\x3b" + "\x6d\xec\xf8\xa0\x2d\xf0\x90\x8d" + "\x1c\xe2\x88\xd4\x24\x71\xf9\xb3" + "\xc1\x9f\xc5\xd6\x76\x70\xc5\x2e" + "\x9c\xac\xdb\x90\xbd\x83\x72\xba" + "\x6e\xb5\xa5\x53\x83\xa9\xa5\xbf" + "\x7d\x06\x0e\x3c\x2a\xd2\x04\xb5" + "\x1e\x19\x38\x09\x16\xd2\x82\x1f" + "\x75\x18\x56\xb8\x96\x0b\xa6\xf9" + "\xcf\x62\xd9\x32\x5d\xa9\xd7\x1d" + "\xec\xe4\xdf\x1b\xbe\xf1\x36\xee" + "\xe3\x7b\xb5\x2f\xee\xf8\x53\x3d" + "\x6a\xb7\x70\xa9\xfc\x9c\x57\x25" + "\xf2\x89\x10\xd3\xb8\xa8\x8c\x30" + "\xae\x23\x4f\x0e\x13\x66\x4f\xe1" + "\xb6\xc0\xe4\xf8\xef\x93\xbd\x6e" + "\x15\x85\x6b\xe3\x60\x81\x1d\x68" + "\xd7\x31\x87\x89\x09\xab\xd5\x96" + "\x1d\xf3\x6d\x67\x80\xca\x07\x31" + "\x5d\xa7\xe4\xfb\x3e\xf2\x9b\x33" + "\x52\x18\xc8\x30\xfe\x2d\xca\x1e" + "\x79\x92\x7a\x60\x5c\xb6\x58\x87" + "\xa4\x36\xa2\x67\x92\x8b\xa4\xb7" + "\xf1\x86\xdf\xdc\xc0\x7e\x8f\x63" + "\xd2\xa2\xdc\x78\xeb\x4f\xd8\x96" + "\x47\xca\xb8\x91\xf9\xf7\x94\x21" + "\x5f\x9a\x9f\x5b\xb8\x40\x41\x4b" + "\x66\x69\x6a\x72\xd0\xcb\x70\xb7" + "\x93\xb5\x37\x96\x05\x37\x4f\xe5" + "\x8c\xa7\x5a\x4e\x8b\xb7\x84\xea" + "\xc7\xfc\x19\x6e\x1f\x5a\xa1\xac" + "\x18\x7d\x52\x3b\xb3\x34\x62\x99" + "\xe4\x9e\x31\x04\x3f\xc0\x8d\x84" + "\x17\x7c\x25\x48\x52\x67\x11\x27" + "\x67\xbb\x5a\x85\xca\x56\xb2\x5c" + "\xe6\xec\xd5\x96\x3d\x15\xfc\xfb" + "\x22\x25\xf4\x13\xe5\x93\x4b\x9a" + "\x77\xf1\x52\x18\xfa\x16\x5e\x49" + "\x03\x45\xa8\x08\xfa\xb3\x41\x92" + "\x79\x50\x33\xca\xd0\xd7\x42\x55" + "\xc3\x9a\x0c\x4e\xd9\xa4\x3c\x86" + "\x80\x9f\x53\xd1\xa4\x2e\xd1\xbc" + "\xf1\x54\x6e\x93\xa4\x65\x99\x8e" + "\xdf\x29\xc0\x64\x63\x07\xbb\xea", + .ctext = "\xe0\x33\xf6\xe0\xb4\xa5\xdd\x2b" + "\xdd\xce\xfc\x12\x1e\xfc\x2d\xf2" + "\x8b\xc7\xeb\xc1\xc4\x2a\xe8\x44" + "\x0f\x3d\x97\x19\x2e\x6d\xa2\x38" + "\x9d\xa6\xaa\xe1\x96\xb9\x08\xe8" + "\x0b\x70\x48\x5c\xed\xb5\x9b\xcb" + "\x8b\x40\x88\x7e\x69\x73\xf7\x16" + "\x71\xbb\x5b\xfc\xa3\x47\x5d\xa6" + "\xae\x3a\x64\xc4\xe7\xb8\xa8\xe7" + "\xb1\x32\x19\xdb\xe3\x01\xb8\xf0" + "\xa4\x86\xb4\x4c\xc2\xde\x5c\xd2" + "\x6c\x77\xd2\xe8\x18\xb7\x0a\xc9" + "\x3d\x53\xb5\xc4\x5c\xf0\x8c\x06" + "\xdc\x90\xe0\x74\x47\x1b\x0b\xf6" + "\xd2\x71\x6b\xc4\xf1\x97\x00\x2d" + "\x63\x57\x44\x1f\x8c\xf4\xe6\x9b" + "\xe0\x7a\xdd\xec\x32\x73\x42\x32" + "\x7f\x35\x67\x60\x0d\xcf\x10\x52" + "\x61\x22\x53\x8d\x8e\xbb\x33\x76" + "\x59\xd9\x10\xce\xdf\xef\xc0\x41" + "\xd5\x33\x29\x6a\xda\x46\xa4\x51" + "\xf0\x99\x3d\x96\x31\xdd\xb5\xcb" + "\x3e\x2a\x1f\xc7\x5c\x79\xd3\xc5" + "\x20\xa1\xb1\x39\x1b\xc6\x0a\x70" + "\x26\x39\x95\x07\xad\x7a\xc9\x69" + "\xfe\x81\xc7\x88\x08\x38\xaf\xad" + "\x9e\x8d\xfb\xe8\x24\x0d\x22\xb8" + "\x0e\xed\xbe\x37\x53\x7c\xa6\xc6" + "\x78\x62\xec\xa3\x59\xd9\xc6\x9d" + "\xb8\x0e\x69\x77\x84\x2d\x6a\x4c" + "\xc5\xd9\xb2\xa0\x2b\xa8\x80\xcc" + "\xe9\x1e\x9c\x5a\xc4\xa1\xb2\x37" + "\x06\x9b\x30\x32\x67\xf7\xe7\xd2" + 
"\x42\xc7\xdf\x4e\xd4\xcb\xa0\x12" + "\x94\xa1\x34\x85\x93\x50\x4b\x0a" + "\x3c\x7d\x49\x25\x01\x41\x6b\x96" + "\xa9\x12\xbb\x0b\xc0\xd7\xd0\x93" + "\x1f\x70\x38\xb8\x21\xee\xf6\xa7" + "\xee\xeb\xe7\x81\xa4\x13\xb4\x87" + "\xfa\xc1\xb0\xb5\x37\x8b\x74\xa2" + "\x4e\xc7\xc2\xad\x3d\x62\x3f\xf8" + "\x34\x42\xe5\xae\x45\x13\x63\xfe" + "\xfc\x2a\x17\x46\x61\xa9\xd3\x1c" + "\x4c\xaf\xf0\x09\x62\x26\x66\x1e" + "\x74\xcf\xd6\x68\x3d\x7d\xd8\xb7" + "\xe7\xe6\xf8\xf0\x08\x20\xf7\x47" + "\x1c\x52\xaa\x0f\x3e\x21\xa3\xf2" + "\xbf\x2f\x95\x16\xa8\xc8\xc8\x8c" + "\x99\x0f\x5d\xfb\xfa\x2b\x58\x8a" + "\x7e\xd6\x74\x02\x60\xf0\xd0\x5b" + "\x65\xa8\xac\xea\x8d\x68\x46\x34" + "\x26\x9d\x4f\xb1\x9a\x8e\xc0\x1a" + "\xf1\xed\xc6\x7a\x83\xfd\x8a\x57" + "\xf2\xe6\xe4\xba\xfc\xc6\x3c\xad" + "\x5b\x19\x50\x2f\x3a\xcc\x06\x46" + "\x04\x51\x3f\x91\x97\xf0\xd2\x07" + "\xe7\x93\x89\x7e\xb5\x32\x0f\x03" + "\xe5\x58\x9e\x74\x72\xeb\xc2\x38" + "\x00\x0c\x91\x72\x69\xed\x7d\x6d" + "\xc8\x71\xf0\xec\xff\x80\xd9\x1c" + "\x9e\xd2\xfa\x15\xfc\x6c\x4e\xbc" + "\xb1\xa6\xbd\xbd\x70\x40\xca\x20" + "\xb8\x78\xd2\xa3\xc6\xf3\x79\x9c" + "\xc7\x27\xe1\x6a\x29\xad\xa4\x03", + .len = 512, + } +}; + /* * CTS (Cipher Text Stealing) mode tests */ diff --git a/drivers/Kconfig b/drivers/Kconfig index 8395bc515996..4f9f99057ff8 100644 --- a/drivers/Kconfig +++ b/drivers/Kconfig @@ -1,7 +1,14 @@ # SPDX-License-Identifier: GPL-2.0 menu "Device Drivers" +# Keep I/O buses first + source "drivers/amba/Kconfig" +source "drivers/eisa/Kconfig" +source "drivers/pci/Kconfig" +source "drivers/pcmcia/Kconfig" +source "drivers/rapidio/Kconfig" + source "drivers/base/Kconfig" diff --git a/drivers/acpi/apei/erst.c b/drivers/acpi/apei/erst.c index 3c5ea7cb693e..9953e50667ec 100644 --- a/drivers/acpi/apei/erst.c +++ b/drivers/acpi/apei/erst.c @@ -1035,7 +1035,7 @@ skip: CPER_SECTION_TYPE_MCE) == 0) record->type = PSTORE_TYPE_MCE; else - record->type = PSTORE_TYPE_UNKNOWN; + record->type = PSTORE_TYPE_MAX; if (rcd->hdr.validation_bits & CPER_VALID_TIMESTAMP) record->time.tv_sec = rcd->hdr.timestamp; @@ -1176,7 +1176,6 @@ static int __init erst_init(void) "Error Record Serialization Table (ERST) support is initialized.\n"); buf = kmalloc(erst_erange.size, GFP_KERNEL); - spin_lock_init(&erst_info.buf_lock); if (buf) { erst_info.buf = buf + sizeof(struct cper_pstore_record); erst_info.bufsize = erst_erange.size - diff --git a/drivers/acpi/nfit/Kconfig b/drivers/acpi/nfit/Kconfig index f7c57e33499e..52eefd732cf2 100644 --- a/drivers/acpi/nfit/Kconfig +++ b/drivers/acpi/nfit/Kconfig @@ -13,3 +13,14 @@ config ACPI_NFIT To compile this driver as a module, choose M here: the module will be called nfit. + +config NFIT_SECURITY_DEBUG + bool "Enable debug for NVDIMM security commands" + depends on ACPI_NFIT + help + Some NVDIMM devices and controllers support encryption and + other security features. The payloads for the commands that + enable those features may contain sensitive clear-text + security material. Disable debug of those command payloads + by default. If you are a kernel developer actively working + on NVDIMM security enabling say Y, otherwise say N. 
diff --git a/drivers/acpi/nfit/Makefile b/drivers/acpi/nfit/Makefile index a407e769f103..751081c47886 100644 --- a/drivers/acpi/nfit/Makefile +++ b/drivers/acpi/nfit/Makefile @@ -1,3 +1,4 @@ obj-$(CONFIG_ACPI_NFIT) := nfit.o nfit-y := core.o +nfit-y += intel.o nfit-$(CONFIG_X86_MCE) += mce.o diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c index 5912d30020c7..011d3db19c80 100644 --- a/drivers/acpi/nfit/core.c +++ b/drivers/acpi/nfit/core.c @@ -24,6 +24,7 @@ #include #include #include +#include "intel.h" #include "nfit.h" #include "intel.h" @@ -380,6 +381,16 @@ static u8 nfit_dsm_revid(unsigned family, unsigned func) [NVDIMM_INTEL_QUERY_FWUPDATE] = 2, [NVDIMM_INTEL_SET_THRESHOLD] = 2, [NVDIMM_INTEL_INJECT_ERROR] = 2, + [NVDIMM_INTEL_GET_SECURITY_STATE] = 2, + [NVDIMM_INTEL_SET_PASSPHRASE] = 2, + [NVDIMM_INTEL_DISABLE_PASSPHRASE] = 2, + [NVDIMM_INTEL_UNLOCK_UNIT] = 2, + [NVDIMM_INTEL_FREEZE_LOCK] = 2, + [NVDIMM_INTEL_SECURE_ERASE] = 2, + [NVDIMM_INTEL_OVERWRITE] = 2, + [NVDIMM_INTEL_QUERY_OVERWRITE] = 2, + [NVDIMM_INTEL_SET_MASTER_PASSPHRASE] = 2, + [NVDIMM_INTEL_MASTER_SECURE_ERASE] = 2, }, }; u8 id; @@ -394,6 +405,17 @@ static u8 nfit_dsm_revid(unsigned family, unsigned func) return id; } +static bool payload_dumpable(struct nvdimm *nvdimm, unsigned int func) +{ + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + + if (nfit_mem && nfit_mem->family == NVDIMM_FAMILY_INTEL + && func >= NVDIMM_INTEL_GET_SECURITY_STATE + && func <= NVDIMM_INTEL_MASTER_SECURE_ERASE) + return IS_ENABLED(CONFIG_NFIT_SECURITY_DEBUG); + return true; +} + int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, unsigned int cmd, void *buf, unsigned int buf_len, int *cmd_rc) { @@ -478,9 +500,10 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, dev_dbg(dev, "%s cmd: %d: func: %d input length: %d\n", dimm_name, cmd, func, in_buf.buffer.length); - print_hex_dump_debug("nvdimm in ", DUMP_PREFIX_OFFSET, 4, 4, - in_buf.buffer.pointer, - min_t(u32, 256, in_buf.buffer.length), true); + if (payload_dumpable(nvdimm, func)) + print_hex_dump_debug("nvdimm in ", DUMP_PREFIX_OFFSET, 4, 4, + in_buf.buffer.pointer, + min_t(u32, 256, in_buf.buffer.length), true); /* call the BIOS, prefer the named methods over _DSM if available */ if (nvdimm && cmd == ND_CMD_GET_CONFIG_SIZE @@ -1573,18 +1596,10 @@ static DEVICE_ATTR_RO(flags); static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev); + struct nvdimm *nvdimm = to_nvdimm(dev); + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); - if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID) - return sprintf(buf, "%04x-%02x-%04x-%08x\n", - be16_to_cpu(dcr->vendor_id), - dcr->manufacturing_location, - be16_to_cpu(dcr->manufacturing_date), - be32_to_cpu(dcr->serial_number)); - else - return sprintf(buf, "%04x-%08x\n", - be16_to_cpu(dcr->vendor_id), - be32_to_cpu(dcr->serial_number)); + return sprintf(buf, "%s\n", nfit_mem->id); } static DEVICE_ATTR_RO(id); @@ -1780,10 +1795,23 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc, const guid_t *guid; int i; int family = -1; + struct acpi_nfit_control_region *dcr = nfit_mem->dcr; /* nfit test assumes 1:1 relationship between commands and dsms */ nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en; nfit_mem->family = NVDIMM_FAMILY_INTEL; + + if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID) + sprintf(nfit_mem->id, "%04x-%02x-%04x-%08x", + 
be16_to_cpu(dcr->vendor_id), + dcr->manufacturing_location, + be16_to_cpu(dcr->manufacturing_date), + be32_to_cpu(dcr->serial_number)); + else + sprintf(nfit_mem->id, "%04x-%08x", + be16_to_cpu(dcr->vendor_id), + be32_to_cpu(dcr->serial_number)); + adev = to_acpi_dev(acpi_desc); if (!adev) { /* unit test case */ @@ -1904,6 +1932,16 @@ static void shutdown_dimm_notify(void *data) mutex_unlock(&acpi_desc->init_mutex); } +static const struct nvdimm_security_ops *acpi_nfit_get_security_ops(int family) +{ + switch (family) { + case NVDIMM_FAMILY_INTEL: + return intel_security_ops; + default: + return NULL; + } +} + static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc) { struct nfit_mem *nfit_mem; @@ -1970,10 +2008,11 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc) flush = nfit_mem->nfit_flush ? nfit_mem->nfit_flush->flush : NULL; - nvdimm = nvdimm_create(acpi_desc->nvdimm_bus, nfit_mem, + nvdimm = __nvdimm_create(acpi_desc->nvdimm_bus, nfit_mem, acpi_nfit_dimm_attribute_groups, flags, cmd_mask, flush ? flush->hint_count : 0, - nfit_mem->flush_wpq); + nfit_mem->flush_wpq, &nfit_mem->id[0], + acpi_nfit_get_security_ops(nfit_mem->family)); if (!nvdimm) return -ENOMEM; @@ -2008,6 +2047,11 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc) if (!nvdimm) continue; + rc = nvdimm_security_setup_events(nvdimm); + if (rc < 0) + dev_warn(acpi_desc->dev, + "security event setup failed: %d\n", rc); + nfit_kernfs = sysfs_get_dirent(nvdimm_kobj(nvdimm)->sd, "nfit"); if (nfit_kernfs) nfit_mem->flags_attr = sysfs_get_dirent(nfit_kernfs, @@ -3337,7 +3381,7 @@ static int acpi_nfit_flush_probe(struct nvdimm_bus_descriptor *nd_desc) return 0; } -static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc, +static int __acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, unsigned int cmd) { struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc); @@ -3359,6 +3403,23 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc, return 0; } +/* prevent security commands from being issued via ioctl */ +static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc, + struct nvdimm *nvdimm, unsigned int cmd, void *buf) +{ + struct nd_cmd_pkg *call_pkg = buf; + unsigned int func; + + if (nvdimm && cmd == ND_CMD_CALL && + call_pkg->nd_family == NVDIMM_FAMILY_INTEL) { + func = call_pkg->nd_command; + if ((1 << func) & NVDIMM_INTEL_SECURITY_CMDMASK) + return -EOPNOTSUPP; + } + + return __acpi_nfit_clear_to_send(nd_desc, nvdimm, cmd); +} + int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, enum nfit_ars_state req_type) { @@ -3474,7 +3535,13 @@ static int acpi_nfit_add(struct acpi_device *adev) status = acpi_get_table(ACPI_SIG_NFIT, 0, &tbl); if (ACPI_FAILURE(status)) { - /* This is ok, we could have an nvdimm hotplugged later */ + /* The NVDIMM root device allows OS to trigger enumeration of + * NVDIMMs through NFIT at boot time and re-enumeration at + * root level via the _FIT method during runtime. + * This is ok to return 0 here, we could have an nvdimm + * hotplugged later and evaluate _FIT method which returns + * data in the format of a series of NFIT Structures. 
+ */ dev_dbg(dev, "failed to find NFIT at startup\n"); return 0; } diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c new file mode 100644 index 000000000000..850b2927b4e7 --- /dev/null +++ b/drivers/acpi/nfit/intel.c @@ -0,0 +1,388 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright(c) 2018 Intel Corporation. All rights reserved. */ +#include <linux/libnvdimm.h> +#include <linux/ndctl.h> +#include <linux/acpi.h> +#include <asm/smp.h> +#include "intel.h" +#include "nfit.h" + +static enum nvdimm_security_state intel_security_state(struct nvdimm *nvdimm, + enum nvdimm_passphrase_type ptype) +{ + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_get_security_state cmd; + } nd_cmd = { + .pkg = { + .nd_command = NVDIMM_INTEL_GET_SECURITY_STATE, + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_out = + sizeof(struct nd_intel_get_security_state), + .nd_fw_size = + sizeof(struct nd_intel_get_security_state), + }, + }; + int rc; + + if (!test_bit(NVDIMM_INTEL_GET_SECURITY_STATE, &nfit_mem->dsm_mask)) + return -ENXIO; + + /* + * Short circuit the state retrieval while we are doing overwrite. + * The DSM spec states that the security state is indeterminate + * until the overwrite DSM completes. + */ + if (nvdimm_in_overwrite(nvdimm) && ptype == NVDIMM_USER) + return NVDIMM_SECURITY_OVERWRITE; + + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + if (nd_cmd.cmd.status) + return -EIO; + + /* check and see if security is enabled and locked */ + if (ptype == NVDIMM_MASTER) { + if (nd_cmd.cmd.extended_state & ND_INTEL_SEC_ESTATE_ENABLED) + return NVDIMM_SECURITY_UNLOCKED; + else if (nd_cmd.cmd.extended_state & + ND_INTEL_SEC_ESTATE_PLIMIT) + return NVDIMM_SECURITY_FROZEN; + } else { + if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_UNSUPPORTED) + return -ENXIO; + else if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_ENABLED) { + if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_LOCKED) + return NVDIMM_SECURITY_LOCKED; + else if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_FROZEN + || nd_cmd.cmd.state & + ND_INTEL_SEC_STATE_PLIMIT) + return NVDIMM_SECURITY_FROZEN; + else + return NVDIMM_SECURITY_UNLOCKED; + } + } + + /* this should cover master security disabled as well */ + return NVDIMM_SECURITY_DISABLED; +} + +static int intel_security_freeze(struct nvdimm *nvdimm) +{ + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_freeze_lock cmd; + } nd_cmd = { + .pkg = { + .nd_command = NVDIMM_INTEL_FREEZE_LOCK, + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_out = ND_INTEL_STATUS_SIZE, + .nd_fw_size = ND_INTEL_STATUS_SIZE, + }, + }; + int rc; + + if (!test_bit(NVDIMM_INTEL_FREEZE_LOCK, &nfit_mem->dsm_mask)) + return -ENOTTY; + + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + if (nd_cmd.cmd.status) + return -EIO; + return 0; +} + +static int intel_security_change_key(struct nvdimm *nvdimm, + const struct nvdimm_key_data *old_data, + const struct nvdimm_key_data *new_data, + enum nvdimm_passphrase_type ptype) +{ + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + unsigned int cmd = ptype == NVDIMM_MASTER ?
+ NVDIMM_INTEL_SET_MASTER_PASSPHRASE : + NVDIMM_INTEL_SET_PASSPHRASE; + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_set_passphrase cmd; + } nd_cmd = { + .pkg = { + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_in = ND_INTEL_PASSPHRASE_SIZE * 2, + .nd_size_out = ND_INTEL_STATUS_SIZE, + .nd_fw_size = ND_INTEL_STATUS_SIZE, + .nd_command = cmd, + }, + }; + int rc; + + if (!test_bit(cmd, &nfit_mem->dsm_mask)) + return -ENOTTY; + + if (old_data) + memcpy(nd_cmd.cmd.old_pass, old_data->data, + sizeof(nd_cmd.cmd.old_pass)); + memcpy(nd_cmd.cmd.new_pass, new_data->data, + sizeof(nd_cmd.cmd.new_pass)); + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + + switch (nd_cmd.cmd.status) { + case 0: + return 0; + case ND_INTEL_STATUS_INVALID_PASS: + return -EINVAL; + case ND_INTEL_STATUS_NOT_SUPPORTED: + return -EOPNOTSUPP; + case ND_INTEL_STATUS_INVALID_STATE: + default: + return -EIO; + } +} + +static void nvdimm_invalidate_cache(void); + +static int intel_security_unlock(struct nvdimm *nvdimm, + const struct nvdimm_key_data *key_data) +{ + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_unlock_unit cmd; + } nd_cmd = { + .pkg = { + .nd_command = NVDIMM_INTEL_UNLOCK_UNIT, + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_in = ND_INTEL_PASSPHRASE_SIZE, + .nd_size_out = ND_INTEL_STATUS_SIZE, + .nd_fw_size = ND_INTEL_STATUS_SIZE, + }, + }; + int rc; + + if (!test_bit(NVDIMM_INTEL_UNLOCK_UNIT, &nfit_mem->dsm_mask)) + return -ENOTTY; + + memcpy(nd_cmd.cmd.passphrase, key_data->data, + sizeof(nd_cmd.cmd.passphrase)); + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + switch (nd_cmd.cmd.status) { + case 0: + break; + case ND_INTEL_STATUS_INVALID_PASS: + return -EINVAL; + default: + return -EIO; + } + + /* DIMM unlocked, invalidate all CPU caches before we read it */ + nvdimm_invalidate_cache(); + + return 0; +} + +static int intel_security_disable(struct nvdimm *nvdimm, + const struct nvdimm_key_data *key_data) +{ + int rc; + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_disable_passphrase cmd; + } nd_cmd = { + .pkg = { + .nd_command = NVDIMM_INTEL_DISABLE_PASSPHRASE, + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_in = ND_INTEL_PASSPHRASE_SIZE, + .nd_size_out = ND_INTEL_STATUS_SIZE, + .nd_fw_size = ND_INTEL_STATUS_SIZE, + }, + }; + + if (!test_bit(NVDIMM_INTEL_DISABLE_PASSPHRASE, &nfit_mem->dsm_mask)) + return -ENOTTY; + + memcpy(nd_cmd.cmd.passphrase, key_data->data, + sizeof(nd_cmd.cmd.passphrase)); + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + + switch (nd_cmd.cmd.status) { + case 0: + break; + case ND_INTEL_STATUS_INVALID_PASS: + return -EINVAL; + case ND_INTEL_STATUS_INVALID_STATE: + default: + return -ENXIO; + } + + return 0; +} + +static int intel_security_erase(struct nvdimm *nvdimm, + const struct nvdimm_key_data *key, + enum nvdimm_passphrase_type ptype) +{ + int rc; + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + unsigned int cmd = ptype == NVDIMM_MASTER ? 
+ NVDIMM_INTEL_MASTER_SECURE_ERASE : NVDIMM_INTEL_SECURE_ERASE; + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_secure_erase cmd; + } nd_cmd = { + .pkg = { + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_in = ND_INTEL_PASSPHRASE_SIZE, + .nd_size_out = ND_INTEL_STATUS_SIZE, + .nd_fw_size = ND_INTEL_STATUS_SIZE, + .nd_command = cmd, + }, + }; + + if (!test_bit(cmd, &nfit_mem->dsm_mask)) + return -ENOTTY; + + /* flush all cache before we erase DIMM */ + nvdimm_invalidate_cache(); + memcpy(nd_cmd.cmd.passphrase, key->data, + sizeof(nd_cmd.cmd.passphrase)); + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + + switch (nd_cmd.cmd.status) { + case 0: + break; + case ND_INTEL_STATUS_NOT_SUPPORTED: + return -EOPNOTSUPP; + case ND_INTEL_STATUS_INVALID_PASS: + return -EINVAL; + case ND_INTEL_STATUS_INVALID_STATE: + default: + return -ENXIO; + } + + /* DIMM erased, invalidate all CPU caches before we read it */ + nvdimm_invalidate_cache(); + return 0; +} + +static int intel_security_query_overwrite(struct nvdimm *nvdimm) +{ + int rc; + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_query_overwrite cmd; + } nd_cmd = { + .pkg = { + .nd_command = NVDIMM_INTEL_QUERY_OVERWRITE, + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_out = ND_INTEL_STATUS_SIZE, + .nd_fw_size = ND_INTEL_STATUS_SIZE, + }, + }; + + if (!test_bit(NVDIMM_INTEL_QUERY_OVERWRITE, &nfit_mem->dsm_mask)) + return -ENOTTY; + + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + + switch (nd_cmd.cmd.status) { + case 0: + break; + case ND_INTEL_STATUS_OQUERY_INPROGRESS: + return -EBUSY; + default: + return -ENXIO; + } + + /* flush all cache before we make the nvdimms available */ + nvdimm_invalidate_cache(); + return 0; +} + +static int intel_security_overwrite(struct nvdimm *nvdimm, + const struct nvdimm_key_data *nkey) +{ + int rc; + struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm); + struct { + struct nd_cmd_pkg pkg; + struct nd_intel_overwrite cmd; + } nd_cmd = { + .pkg = { + .nd_command = NVDIMM_INTEL_OVERWRITE, + .nd_family = NVDIMM_FAMILY_INTEL, + .nd_size_in = ND_INTEL_PASSPHRASE_SIZE, + .nd_size_out = ND_INTEL_STATUS_SIZE, + .nd_fw_size = ND_INTEL_STATUS_SIZE, + }, + }; + + if (!test_bit(NVDIMM_INTEL_OVERWRITE, &nfit_mem->dsm_mask)) + return -ENOTTY; + + /* flush all cache before we erase DIMM */ + nvdimm_invalidate_cache(); + if (nkey) + memcpy(nd_cmd.cmd.passphrase, nkey->data, + sizeof(nd_cmd.cmd.passphrase)); + rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL); + if (rc < 0) + return rc; + + switch (nd_cmd.cmd.status) { + case 0: + return 0; + case ND_INTEL_STATUS_OVERWRITE_UNSUPPORTED: + return -ENOTSUPP; + case ND_INTEL_STATUS_INVALID_PASS: + return -EINVAL; + case ND_INTEL_STATUS_INVALID_STATE: + default: + return -ENXIO; + } +} + +/* + * TODO: define a cross arch wbinvd equivalent when/if + * NVDIMM_FAMILY_INTEL command support arrives on another arch. 
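+ * Until then, the !CONFIG_X86 stub below only warns; note that the + * unlock/erase/overwrite/query_overwrite ops are likewise compiled out + * of __intel_security_ops on non-x86 builds.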
+ */ +#ifdef CONFIG_X86 +static void nvdimm_invalidate_cache(void) +{ + wbinvd_on_all_cpus(); +} +#else +static void nvdimm_invalidate_cache(void) +{ + WARN_ON_ONCE("cache invalidation required after unlock\n"); +} +#endif + +static const struct nvdimm_security_ops __intel_security_ops = { + .state = intel_security_state, + .freeze = intel_security_freeze, + .change_key = intel_security_change_key, + .disable = intel_security_disable, +#ifdef CONFIG_X86 + .unlock = intel_security_unlock, + .erase = intel_security_erase, + .overwrite = intel_security_overwrite, + .query_overwrite = intel_security_query_overwrite, +#endif +}; + +const struct nvdimm_security_ops *intel_security_ops = &__intel_security_ops; diff --git a/drivers/acpi/nfit/intel.h b/drivers/acpi/nfit/intel.h index 86746312381f..0aca682ab9d7 100644 --- a/drivers/acpi/nfit/intel.h +++ b/drivers/acpi/nfit/intel.h @@ -35,4 +35,80 @@ struct nd_intel_smart { }; } __packed; +extern const struct nvdimm_security_ops *intel_security_ops; + +#define ND_INTEL_STATUS_SIZE 4 +#define ND_INTEL_PASSPHRASE_SIZE 32 + +#define ND_INTEL_STATUS_NOT_SUPPORTED 1 +#define ND_INTEL_STATUS_RETRY 5 +#define ND_INTEL_STATUS_NOT_READY 9 +#define ND_INTEL_STATUS_INVALID_STATE 10 +#define ND_INTEL_STATUS_INVALID_PASS 11 +#define ND_INTEL_STATUS_OVERWRITE_UNSUPPORTED 0x10007 +#define ND_INTEL_STATUS_OQUERY_INPROGRESS 0x10007 +#define ND_INTEL_STATUS_OQUERY_SEQUENCE_ERR 0x20007 + +#define ND_INTEL_SEC_STATE_ENABLED 0x02 +#define ND_INTEL_SEC_STATE_LOCKED 0x04 +#define ND_INTEL_SEC_STATE_FROZEN 0x08 +#define ND_INTEL_SEC_STATE_PLIMIT 0x10 +#define ND_INTEL_SEC_STATE_UNSUPPORTED 0x20 +#define ND_INTEL_SEC_STATE_OVERWRITE 0x40 + +#define ND_INTEL_SEC_ESTATE_ENABLED 0x01 +#define ND_INTEL_SEC_ESTATE_PLIMIT 0x02 + +struct nd_intel_get_security_state { + u32 status; + u8 extended_state; + u8 reserved[3]; + u8 state; + u8 reserved1[3]; +} __packed; + +struct nd_intel_set_passphrase { + u8 old_pass[ND_INTEL_PASSPHRASE_SIZE]; + u8 new_pass[ND_INTEL_PASSPHRASE_SIZE]; + u32 status; +} __packed; + +struct nd_intel_unlock_unit { + u8 passphrase[ND_INTEL_PASSPHRASE_SIZE]; + u32 status; +} __packed; + +struct nd_intel_disable_passphrase { + u8 passphrase[ND_INTEL_PASSPHRASE_SIZE]; + u32 status; +} __packed; + +struct nd_intel_freeze_lock { + u32 status; +} __packed; + +struct nd_intel_secure_erase { + u8 passphrase[ND_INTEL_PASSPHRASE_SIZE]; + u32 status; +} __packed; + +struct nd_intel_overwrite { + u8 passphrase[ND_INTEL_PASSPHRASE_SIZE]; + u32 status; +} __packed; + +struct nd_intel_query_overwrite { + u32 status; +} __packed; + +struct nd_intel_set_master_passphrase { + u8 old_pass[ND_INTEL_PASSPHRASE_SIZE]; + u8 new_pass[ND_INTEL_PASSPHRASE_SIZE]; + u32 status; +} __packed; + +struct nd_intel_master_secure_erase { + u8 passphrase[ND_INTEL_PASSPHRASE_SIZE]; + u32 status; +} __packed; #endif diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h index df0f6b8407e7..33691aecfcee 100644 --- a/drivers/acpi/nfit/nfit.h +++ b/drivers/acpi/nfit/nfit.h @@ -60,14 +60,33 @@ enum nvdimm_family_cmds { NVDIMM_INTEL_QUERY_FWUPDATE = 16, NVDIMM_INTEL_SET_THRESHOLD = 17, NVDIMM_INTEL_INJECT_ERROR = 18, + NVDIMM_INTEL_GET_SECURITY_STATE = 19, + NVDIMM_INTEL_SET_PASSPHRASE = 20, + NVDIMM_INTEL_DISABLE_PASSPHRASE = 21, + NVDIMM_INTEL_UNLOCK_UNIT = 22, + NVDIMM_INTEL_FREEZE_LOCK = 23, + NVDIMM_INTEL_SECURE_ERASE = 24, + NVDIMM_INTEL_OVERWRITE = 25, + NVDIMM_INTEL_QUERY_OVERWRITE = 26, + NVDIMM_INTEL_SET_MASTER_PASSPHRASE = 27, + NVDIMM_INTEL_MASTER_SECURE_ERASE = 28, }; +#define 
NVDIMM_INTEL_SECURITY_CMDMASK \ +(1 << NVDIMM_INTEL_GET_SECURITY_STATE | 1 << NVDIMM_INTEL_SET_PASSPHRASE \ +| 1 << NVDIMM_INTEL_DISABLE_PASSPHRASE | 1 << NVDIMM_INTEL_UNLOCK_UNIT \ +| 1 << NVDIMM_INTEL_FREEZE_LOCK | 1 << NVDIMM_INTEL_SECURE_ERASE \ +| 1 << NVDIMM_INTEL_OVERWRITE | 1 << NVDIMM_INTEL_QUERY_OVERWRITE \ +| 1 << NVDIMM_INTEL_SET_MASTER_PASSPHRASE \ +| 1 << NVDIMM_INTEL_MASTER_SECURE_ERASE) + #define NVDIMM_INTEL_CMDMASK \ (NVDIMM_STANDARD_CMDMASK | 1 << NVDIMM_INTEL_GET_MODES \ | 1 << NVDIMM_INTEL_GET_FWINFO | 1 << NVDIMM_INTEL_START_FWUPDATE \ | 1 << NVDIMM_INTEL_SEND_FWUPDATE | 1 << NVDIMM_INTEL_FINISH_FWUPDATE \ | 1 << NVDIMM_INTEL_QUERY_FWUPDATE | 1 << NVDIMM_INTEL_SET_THRESHOLD \ - | 1 << NVDIMM_INTEL_INJECT_ERROR | 1 << NVDIMM_INTEL_LATCH_SHUTDOWN) + | 1 << NVDIMM_INTEL_INJECT_ERROR | 1 << NVDIMM_INTEL_LATCH_SHUTDOWN \ + | NVDIMM_INTEL_SECURITY_CMDMASK) enum nfit_uuids { /* for simplicity alias the uuid index with the family id */ @@ -164,6 +183,8 @@ enum nfit_mem_flags { NFIT_MEM_DIRTY_COUNT, }; +#define NFIT_DIMM_ID_LEN 22 + /* assembled tables for a given dimm/memory-device */ struct nfit_mem { struct nvdimm *nvdimm; @@ -181,6 +202,7 @@ struct nfit_mem { struct list_head list; struct acpi_device *adev; struct acpi_nfit_desc *acpi_desc; + char id[NFIT_DIMM_ID_LEN+1]; struct resource *flush_wpq; unsigned long dsm_mask; unsigned long flags; diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c index 8c7c4583b52d..77abe0ec4043 100644 --- a/drivers/acpi/property.c +++ b/drivers/acpi/property.c @@ -24,6 +24,14 @@ static int acpi_data_get_property_array(const struct acpi_device_data *data, acpi_object_type type, const union acpi_object **obj); +/* + * The GUIDs here are made equivalent to each other in order to avoid extra + * complexity in the properties handling code, with the caveat that the + * kernel will accept certain combinations of GUID and properties that are + * not defined without a warning. For instance if any of the properties + * from different GUID appear in a property list of another, it will be + * accepted by the kernel. Firmware validation tools should catch these. + */ static const guid_t prp_guids[] = { /* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */ GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c, @@ -31,6 +39,9 @@ static const guid_t prp_guids[] = { /* Hotplug in D3 GUID: 6211e2c0-58a3-4af3-90e1-927a4e0c55a4 */ GUID_INIT(0x6211e2c0, 0x58a3, 0x4af3, 0x90, 0xe1, 0x92, 0x7a, 0x4e, 0x0c, 0x55, 0xa4), + /* External facing port GUID: efcc06cc-73ac-4bc3-bff0-76143807c389 */ + GUID_INIT(0xefcc06cc, 0x73ac, 0x4bc3, + 0xbf, 0xf0, 0x76, 0x14, 0x38, 0x07, 0xc3, 0x89), }; static const guid_t ads_guid = diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c index e9eda5558c1f..5efd4219f112 100644 --- a/drivers/acpi/scan.c +++ b/drivers/acpi/scan.c @@ -1456,6 +1456,11 @@ int acpi_dma_configure(struct device *dev, enum dev_dma_attr attr) const struct iommu_ops *iommu; u64 dma_addr = 0, size = 0; + if (attr == DEV_DMA_NOT_SUPPORTED) { + set_dma_ops(dev, &dma_dummy_ops); + return 0; + } + iort_dma_setup(dev, &dma_addr, &size); iommu = iort_iommu_configure(dev); diff --git a/drivers/android/Kconfig b/drivers/android/Kconfig index 51e8250d113f..4c190f8d1f4c 100644 --- a/drivers/android/Kconfig +++ b/drivers/android/Kconfig @@ -20,6 +20,18 @@ config ANDROID_BINDER_IPC Android process, using Binder to identify, invoke and pass arguments between said processes. 
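+# Typical binderfs usage (illustrative; assumes the filesystem registers as +# type "binder"): mount -t binder binder /dev/binderfs, then issue the +# BINDER_CTL_ADD ioctl on the resulting binder-control device to allocate +# named binder devices.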
+config ANDROID_BINDERFS + bool "Android Binderfs filesystem" + depends on ANDROID_BINDER_IPC + default n + ---help--- + Binderfs is a pseudo-filesystem for the Android Binder IPC driver + which can be mounted per IPC namespace, allowing multiple + instances of Android to run. + Each binderfs mount initially only contains a binder-control device. + It can be used to dynamically allocate new binder IPC devices via + ioctls. + config ANDROID_BINDER_DEVICES string "Android Binder devices" depends on ANDROID_BINDER_IPC diff --git a/drivers/android/Makefile b/drivers/android/Makefile index a01254c43ee3..c7856e3200da 100644 --- a/drivers/android/Makefile +++ b/drivers/android/Makefile @@ -1,4 +1,5 @@ ccflags-y += -I$(src) # needed for trace events +obj-$(CONFIG_ANDROID_BINDERFS) += binderfs.o obj-$(CONFIG_ANDROID_BINDER_IPC) += binder.o binder_alloc.o obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o diff --git a/drivers/android/binder.c b/drivers/android/binder.c index 9f1000d2a40c..cdfc87629efb 100644 --- a/drivers/android/binder.c +++ b/drivers/android/binder.c @@ -72,12 +72,14 @@ #include <linux/spinlock.h> #include <linux/ratelimit.h> #include <linux/syscalls.h> +#include <linux/task_work.h> #include <uapi/linux/android/binder.h> #include <asm/cacheflush.h> #include "binder_alloc.h" +#include "binder_internal.h" #include "binder_trace.h" static HLIST_HEAD(binder_deferred_list); @@ -94,22 +96,8 @@ static struct dentry *binder_debugfs_dir_entry_root; static struct dentry *binder_debugfs_dir_entry_proc; static atomic_t binder_last_id; -#define BINDER_DEBUG_ENTRY(name) \ -static int binder_##name##_open(struct inode *inode, struct file *file) \ -{ \ - return single_open(file, binder_##name##_show, inode->i_private); \ -} \ -\ -static const struct file_operations binder_##name##_fops = { \ - .owner = THIS_MODULE, \ - .open = binder_##name##_open, \ - .read = seq_read, \ - .llseek = seq_lseek, \ - .release = single_release, \ -} - -static int binder_proc_show(struct seq_file *m, void *unused); -BINDER_DEBUG_ENTRY(proc); +static int proc_show(struct seq_file *m, void *unused); +DEFINE_SHOW_ATTRIBUTE(proc); /* This is only defined in include/asm-arm/sizes.h */ #ifndef SZ_1K @@ -262,20 +250,6 @@ static struct binder_transaction_log_entry *binder_transaction_log_add( return e; } -struct binder_context { - struct binder_node *binder_context_mgr_node; - struct mutex context_mgr_node_lock; - - kuid_t binder_context_mgr_uid; - const char *name; -}; - -struct binder_device { - struct hlist_node hlist; - struct miscdevice miscdev; - struct binder_context context; -}; - /** * struct binder_work - work enqueued on a worklist * @entry: node enqueued on list @@ -660,6 +634,7 @@ struct binder_transaction { #define binder_proc_lock(proc) _binder_proc_lock(proc, __LINE__) static void _binder_proc_lock(struct binder_proc *proc, int line) + __acquires(&proc->outer_lock) { binder_debug(BINDER_DEBUG_SPINLOCKS, "%s: line=%d\n", __func__, line); @@ -675,6 +650,7 @@ _binder_proc_lock(struct binder_proc *proc, int line) #define binder_proc_unlock(_proc) _binder_proc_unlock(_proc, __LINE__) static void _binder_proc_unlock(struct binder_proc *proc, int line) + __releases(&proc->outer_lock) { binder_debug(BINDER_DEBUG_SPINLOCKS, "%s: line=%d\n", __func__, line); @@ -690,6 +666,7 @@ _binder_proc_unlock(struct binder_proc *proc, int line) #define binder_inner_proc_lock(proc) _binder_inner_proc_lock(proc, __LINE__) static void _binder_inner_proc_lock(struct binder_proc *proc, int line) + __acquires(&proc->inner_lock) { binder_debug(BINDER_DEBUG_SPINLOCKS, "%s: line=%d\n", __func__, line); @@ -705,6 +682,7 @@
_binder_inner_proc_lock(struct binder_proc *proc, int line) #define binder_inner_proc_unlock(proc) _binder_inner_proc_unlock(proc, __LINE__) static void _binder_inner_proc_unlock(struct binder_proc *proc, int line) + __releases(&proc->inner_lock) { binder_debug(BINDER_DEBUG_SPINLOCKS, "%s: line=%d\n", __func__, line); @@ -720,6 +698,7 @@ _binder_inner_proc_unlock(struct binder_proc *proc, int line) #define binder_node_lock(node) _binder_node_lock(node, __LINE__) static void _binder_node_lock(struct binder_node *node, int line) + __acquires(&node->lock) { binder_debug(BINDER_DEBUG_SPINLOCKS, "%s: line=%d\n", __func__, line); @@ -735,6 +714,7 @@ _binder_node_lock(struct binder_node *node, int line) #define binder_node_unlock(node) _binder_node_unlock(node, __LINE__) static void _binder_node_unlock(struct binder_node *node, int line) + __releases(&node->lock) { binder_debug(BINDER_DEBUG_SPINLOCKS, "%s: line=%d\n", __func__, line); @@ -751,12 +731,16 @@ _binder_node_unlock(struct binder_node *node, int line) #define binder_node_inner_lock(node) _binder_node_inner_lock(node, __LINE__) static void _binder_node_inner_lock(struct binder_node *node, int line) + __acquires(&node->lock) __acquires(&node->proc->inner_lock) { binder_debug(BINDER_DEBUG_SPINLOCKS, "%s: line=%d\n", __func__, line); spin_lock(&node->lock); if (node->proc) binder_inner_proc_lock(node->proc); + else + /* annotation for sparse */ + __acquire(&node->proc->inner_lock); } /** @@ -768,6 +752,7 @@ _binder_node_inner_lock(struct binder_node *node, int line) #define binder_node_inner_unlock(node) _binder_node_inner_unlock(node, __LINE__) static void _binder_node_inner_unlock(struct binder_node *node, int line) + __releases(&node->lock) __releases(&node->proc->inner_lock) { struct binder_proc *proc = node->proc; @@ -775,6 +760,9 @@ _binder_node_inner_unlock(struct binder_node *node, int line) "%s: line=%d\n", __func__, line); if (proc) binder_inner_proc_unlock(proc); + else + /* annotation for sparse */ + __release(&node->proc->inner_lock); spin_unlock(&node->lock); } @@ -1384,10 +1372,14 @@ static void binder_dec_node_tmpref(struct binder_node *node) binder_node_inner_lock(node); if (!node->proc) spin_lock(&binder_dead_nodes_lock); + else + __acquire(&binder_dead_nodes_lock); node->tmp_refs--; BUG_ON(node->tmp_refs < 0); if (!node->proc) spin_unlock(&binder_dead_nodes_lock); + else + __release(&binder_dead_nodes_lock); /* * Call binder_dec_node() to check if all refcounts are 0 * and cleanup is needed. 
Calling with strong=0 and internal=1 @@ -1890,18 +1882,22 @@ static struct binder_thread *binder_get_txn_from( */ static struct binder_thread *binder_get_txn_from_and_acq_inner( struct binder_transaction *t) + __acquires(&t->from->proc->inner_lock) { struct binder_thread *from; from = binder_get_txn_from(t); - if (!from) + if (!from) { + __acquire(&from->proc->inner_lock); return NULL; + } binder_inner_proc_lock(from->proc); if (t->from) { BUG_ON(from != t->from); return from; } binder_inner_proc_unlock(from->proc); + __acquire(&from->proc->inner_lock); binder_thread_dec_tmpref(from); return NULL; } @@ -1973,6 +1969,8 @@ static void binder_send_failed_reply(struct binder_transaction *t, binder_thread_dec_tmpref(target_thread); binder_free_transaction(t); return; + } else { + __release(&target_thread->proc->inner_lock); } next = t->from_parent; @@ -2160,6 +2158,64 @@ static bool binder_validate_fixup(struct binder_buffer *b, return (fixup_offset >= last_min_offset); } +/** + * struct binder_task_work_cb - for deferred close + * + * @twork: callback_head for task work + * @fd: fd to close + * + * Structure to pass task work to be handled after + * returning from binder_ioctl() via task_work_add(). + */ +struct binder_task_work_cb { + struct callback_head twork; + struct file *file; +}; + +/** + * binder_do_fd_close() - close list of file descriptors + * @twork: callback head for task work + * + * It is not safe to call ksys_close() during the binder_ioctl() + * function if there is a chance that binder's own file descriptor + * might be closed. This is to meet the requirements for using + * fdget() (see comments for __fget_light()). Therefore use + * task_work_add() to schedule the close operation once we have + * returned from binder_ioctl(). This function is a callback + * for that mechanism and does the actual ksys_close() on the + * given file descriptor. + */ +static void binder_do_fd_close(struct callback_head *twork) +{ + struct binder_task_work_cb *twcb = container_of(twork, + struct binder_task_work_cb, twork); + + fput(twcb->file); + kfree(twcb); +} + +/** + * binder_deferred_fd_close() - schedule a close for the given file-descriptor + * @fd: file-descriptor to close + * + * See comments in binder_do_fd_close(). This function is used to schedule + * a file-descriptor to be closed after returning from binder_ioctl(). 
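+ * If the callback structure cannot be allocated, the close is silently + * skipped; if @fd does not resolve to an open file, the callback is freed + * again without queueing any work.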
+ */ +static void binder_deferred_fd_close(int fd) +{ + struct binder_task_work_cb *twcb; + + twcb = kzalloc(sizeof(*twcb), GFP_KERNEL); + if (!twcb) + return; + init_task_work(&twcb->twork, binder_do_fd_close); + __close_fd_get_file(fd, &twcb->file); + if (twcb->file) + task_work_add(current, &twcb->twork, true); + else + kfree(twcb); +} + static void binder_transaction_buffer_release(struct binder_proc *proc, struct binder_buffer *buffer, binder_size_t *failed_at) @@ -2299,7 +2355,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, } fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset); for (fd_index = 0; fd_index < fda->num_fds; fd_index++) - ksys_close(fd_array[fd_index]); + binder_deferred_fd_close(fd_array[fd_index]); } break; default: pr_err("transaction release %d bad object type %x\n", @@ -2394,11 +2450,15 @@ static int binder_translate_handle(struct flat_binder_object *fp, fp->cookie = node->cookie; if (node->proc) binder_inner_proc_lock(node->proc); + else + __acquire(&node->proc->inner_lock); binder_inc_node_nilocked(node, fp->hdr.type == BINDER_TYPE_BINDER, 0, NULL); if (node->proc) binder_inner_proc_unlock(node->proc); + else + __release(&node->proc->inner_lock); trace_binder_transaction_ref_to_node(t, node, &src_rdata); binder_debug(BINDER_DEBUG_TRANSACTION, " ref %d desc %d -> node %d u%016llx\n", @@ -2762,6 +2822,8 @@ static void binder_transaction(struct binder_proc *proc, binder_set_nice(in_reply_to->saved_priority); target_thread = binder_get_txn_from_and_acq_inner(in_reply_to); if (target_thread == NULL) { + /* annotation for sparse */ + __release(&target_thread->proc->inner_lock); return_error = BR_DEAD_REPLY; return_error_line = __LINE__; goto err_dead_binder; @@ -3912,7 +3974,7 @@ static int binder_apply_fd_fixups(struct binder_transaction *t) } else if (ret) { u32 *fdp = (u32 *)(t->buffer->data + fixup->offset); - ksys_close(*fdp); + binder_deferred_fd_close(*fdp); } list_del(&fixup->fixup_entry); kfree(fixup); @@ -4164,6 +4226,11 @@ retry: if (cmd == BR_DEAD_BINDER) goto done; /* DEAD_BINDER notifications can cause transactions */ } break; + default: + binder_inner_proc_unlock(proc); + pr_err("%d:%d: bad work type %d\n", + proc->pid, thread->pid, w->type); + break; } if (!t) @@ -4467,6 +4534,8 @@ static int binder_thread_release(struct binder_proc *proc, spin_lock(&t->lock); if (t->to_thread == thread) send_reply = t; + } else { + __acquire(&t->lock); } thread->is_dead = true; @@ -4495,7 +4564,11 @@ static int binder_thread_release(struct binder_proc *proc, spin_unlock(&last_t->lock); if (t) spin_lock(&t->lock); + else + __acquire(&t->lock); } + /* annotation for sparse, lock not acquired in last iteration above */ + __release(&t->lock); /* * If this thread used poll, make sure we remove the waitqueue @@ -4938,8 +5011,12 @@ static int binder_open(struct inode *nodp, struct file *filp) proc->tsk = current->group_leader; INIT_LIST_HEAD(&proc->todo); proc->default_priority = task_nice(current); - binder_dev = container_of(filp->private_data, struct binder_device, - miscdev); + /* binderfs stashes devices in i_private */ + if (is_binderfs_device(nodp)) + binder_dev = nodp->i_private; + else + binder_dev = container_of(filp->private_data, + struct binder_device, miscdev); proc->context = &binder_dev->context; binder_alloc_init(&proc->alloc); @@ -4967,7 +5044,7 @@ static int binder_open(struct inode *nodp, struct file *filp) proc->debugfs_entry = debugfs_create_file(strbuf, 0444, binder_debugfs_dir_entry_proc, (void *)(unsigned 
long)proc->pid, - &binder_proc_fops); + &proc_fops); } return 0; @@ -5391,6 +5468,9 @@ static void print_binder_proc(struct seq_file *m, for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) { struct binder_node *node = rb_entry(n, struct binder_node, rb_node); + if (!print_all && !node->has_async_transaction) + continue; + /* * take a temporary reference on the node so it * survives and isn't removed from the tree @@ -5595,7 +5675,7 @@ static void print_binder_proc_stats(struct seq_file *m, } -static int binder_state_show(struct seq_file *m, void *unused) +static int state_show(struct seq_file *m, void *unused) { struct binder_proc *proc; struct binder_node *node; @@ -5634,7 +5714,7 @@ static int binder_state_show(struct seq_file *m, void *unused) return 0; } -static int binder_stats_show(struct seq_file *m, void *unused) +static int stats_show(struct seq_file *m, void *unused) { struct binder_proc *proc; @@ -5650,7 +5730,7 @@ static int binder_stats_show(struct seq_file *m, void *unused) return 0; } -static int binder_transactions_show(struct seq_file *m, void *unused) +static int transactions_show(struct seq_file *m, void *unused) { struct binder_proc *proc; @@ -5663,7 +5743,7 @@ static int binder_transactions_show(struct seq_file *m, void *unused) return 0; } -static int binder_proc_show(struct seq_file *m, void *unused) +static int proc_show(struct seq_file *m, void *unused) { struct binder_proc *itr; int pid = (unsigned long)m->private; @@ -5706,7 +5786,7 @@ static void print_binder_transaction_log_entry(struct seq_file *m, "\n" : " (incomplete)\n"); } -static int binder_transaction_log_show(struct seq_file *m, void *unused) +static int transaction_log_show(struct seq_file *m, void *unused) { struct binder_transaction_log *log = m->private; unsigned int log_cur = atomic_read(&log->cur); @@ -5727,7 +5807,7 @@ static int binder_transaction_log_show(struct seq_file *m, void *unused) return 0; } -static const struct file_operations binder_fops = { +const struct file_operations binder_fops = { .owner = THIS_MODULE, .poll = binder_poll, .unlocked_ioctl = binder_ioctl, @@ -5738,10 +5818,10 @@ static const struct file_operations binder_fops = { .release = binder_release, }; -BINDER_DEBUG_ENTRY(state); -BINDER_DEBUG_ENTRY(stats); -BINDER_DEBUG_ENTRY(transactions); -BINDER_DEBUG_ENTRY(transaction_log); +DEFINE_SHOW_ATTRIBUTE(state); +DEFINE_SHOW_ATTRIBUTE(stats); +DEFINE_SHOW_ATTRIBUTE(transactions); +DEFINE_SHOW_ATTRIBUTE(transaction_log); static int __init init_binder_device(const char *name) { @@ -5795,27 +5875,27 @@ static int __init binder_init(void) 0444, binder_debugfs_dir_entry_root, NULL, - &binder_state_fops); + &state_fops); debugfs_create_file("stats", 0444, binder_debugfs_dir_entry_root, NULL, - &binder_stats_fops); + &stats_fops); debugfs_create_file("transactions", 0444, binder_debugfs_dir_entry_root, NULL, - &binder_transactions_fops); + &transactions_fops); debugfs_create_file("transaction_log", 0444, binder_debugfs_dir_entry_root, &binder_transaction_log, - &binder_transaction_log_fops); + &transaction_log_fops); debugfs_create_file("failed_transaction_log", 0444, binder_debugfs_dir_entry_root, &binder_transaction_log_failed, - &binder_transaction_log_fops); + &transaction_log_fops); } /* diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index 030c98f35cca..022cd80e80cc 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -939,6 +939,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item, struct 
list_lru_one *lru, spinlock_t *lock, void *cb_arg) + __must_hold(lock) { struct mm_struct *mm = NULL; struct binder_lru_page *page = container_of(item, diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index fb3238c74c8a..c0aadbbf7f19 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -30,16 +30,16 @@ struct binder_transaction; * struct binder_buffer - buffer used for binder transactions * @entry: entry alloc->buffers * @rb_node: node for allocated_buffers/free_buffers rb trees - * @free: true if buffer is free - * @allow_user_free: describe the second member of struct blah, - * @async_transaction: describe the second member of struct blah, - * @debug_id: describe the second member of struct blah, - * @transaction: describe the second member of struct blah, - * @target_node: describe the second member of struct blah, - * @data_size: describe the second member of struct blah, - * @offsets_size: describe the second member of struct blah, - * @extra_buffers_size: describe the second member of struct blah, - * @data:i describe the second member of struct blah, + * @free: %true if buffer is free + * @allow_user_free: %true if user is allowed to free buffer + * @async_transaction: %true if buffer is in use for an async txn + * @debug_id: unique ID for debugging + * @transaction: pointer to associated struct binder_transaction + * @target_node: struct binder_node associated with this buffer + * @data_size: size of @transaction data + * @offsets_size: size of array of offsets + * @extra_buffers_size: size of space for other objects (like sg lists) + * @data: pointer to base of buffer space * * Bookkeeping structure for binder transaction buffers */ diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h new file mode 100644 index 000000000000..7fb97f503ef2 --- /dev/null +++ b/drivers/android/binder_internal.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_BINDER_INTERNAL_H +#define _LINUX_BINDER_INTERNAL_H + +#include <linux/export.h> +#include <linux/fs.h> +#include <linux/list.h> +#include <linux/miscdevice.h> +#include <linux/mutex.h> +#include <linux/stddef.h> +#include <linux/types.h> +#include <linux/uidgid.h> + +struct binder_context { + struct binder_node *binder_context_mgr_node; + struct mutex context_mgr_node_lock; + kuid_t binder_context_mgr_uid; + const char *name; +}; + +/** + * struct binder_device - information about a binder device node + * @hlist: list of binder devices (only used for devices requested via + * CONFIG_ANDROID_BINDER_DEVICES) + * @miscdev: information about a binder character device node + * @context: binder context information + * @binderfs_inode: inode of the binder device node itself, allocated + * from the super block of the binderfs mount it belongs to.
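+ * (only set for binderfs devices; devices registered statically via + * CONFIG_ANDROID_BINDER_DEVICES leave this NULL)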
+ */ +struct binder_device { + struct hlist_node hlist; + struct miscdevice miscdev; + struct binder_context context; + struct inode *binderfs_inode; +}; + +extern const struct file_operations binder_fops; + +#ifdef CONFIG_ANDROID_BINDERFS +extern bool is_binderfs_device(const struct inode *inode); +#else +static inline bool is_binderfs_device(const struct inode *inode) +{ + return false; +} +#endif + +#endif /* _LINUX_BINDER_INTERNAL_H */ diff --git a/drivers/android/binderfs.c b/drivers/android/binderfs.c new file mode 100644 index 000000000000..7496b10532aa --- /dev/null +++ b/drivers/android/binderfs.c @@ -0,0 +1,544 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "binder_internal.h" + +#define FIRST_INODE 1 +#define SECOND_INODE 2 +#define INODE_OFFSET 3 +#define INTSTRLEN 21 +#define BINDERFS_MAX_MINOR (1U << MINORBITS) + +static struct vfsmount *binderfs_mnt; + +static dev_t binderfs_dev; +static DEFINE_MUTEX(binderfs_minors_mutex); +static DEFINE_IDA(binderfs_minors); + +/** + * binderfs_info - information about a binderfs mount + * @ipc_ns: The ipc namespace the binderfs mount belongs to. + * @control_dentry: This records the dentry of this binderfs mount's + * binder-control device. + * @root_uid: uid that needs to be used when a new binder device is + * created. + * @root_gid: gid that needs to be used when a new binder device is + * created. + */ +struct binderfs_info { + struct ipc_namespace *ipc_ns; + struct dentry *control_dentry; + kuid_t root_uid; + kgid_t root_gid; +}; + +static inline struct binderfs_info *BINDERFS_I(const struct inode *inode) +{ + return inode->i_sb->s_fs_info; +} + +bool is_binderfs_device(const struct inode *inode) +{ + if (inode->i_sb->s_magic == BINDERFS_SUPER_MAGIC) + return true; + + return false; +} + +/** + * binderfs_binder_device_create - allocate inode from super block of a + * binderfs mount + * @ref_inode: inode from which the super block will be taken + * @userp: buffer to copy information about new device for userspace to + * @req: struct binderfs_device as copied from userspace + * + * This function allocates a new binder_device and reserves a new minor + * number for it. + * Minor numbers are limited and tracked globally in binderfs_minors. The + * function then allocates a new inode from the super block of the + * filesystem mount, stashes the struct binder_device for the new binder + * device in the inode's i_private field and attaches a dentry to that + * inode. + * + * Return: 0 on success, negative errno on failure + */ +static int binderfs_binder_device_create(struct inode *ref_inode, + struct binderfs_device __user *userp, + struct binderfs_device *req) +{ + int minor, ret; + struct dentry *dentry, *dup, *root; + struct binder_device *device; + size_t name_len = BINDERFS_MAX_NAME + 1; + char *name = NULL; + struct inode *inode = NULL; + struct super_block *sb = ref_inode->i_sb; + struct binderfs_info *info = sb->s_fs_info; + + /* Reserve new minor number for the new device.
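Minors are tracked in the global binderfs_minors ida, so device numbers stay unique across all binderfs mounts.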
*/ + mutex_lock(&binderfs_minors_mutex); + minor = ida_alloc_max(&binderfs_minors, BINDERFS_MAX_MINOR, GFP_KERNEL); + mutex_unlock(&binderfs_minors_mutex); + if (minor < 0) + return minor; + + ret = -ENOMEM; + device = kzalloc(sizeof(*device), GFP_KERNEL); + if (!device) + goto err; + + inode = new_inode(sb); + if (!inode) + goto err; + + inode->i_ino = minor + INODE_OFFSET; + inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode); + init_special_inode(inode, S_IFCHR | 0600, + MKDEV(MAJOR(binderfs_dev), minor)); + inode->i_fop = &binder_fops; + inode->i_uid = info->root_uid; + inode->i_gid = info->root_gid; + + name = kmalloc(name_len, GFP_KERNEL); + if (!name) + goto err; + + strscpy(name, req->name, name_len); + + device->binderfs_inode = inode; + device->context.binder_context_mgr_uid = INVALID_UID; + device->context.name = name; + device->miscdev.name = name; + device->miscdev.minor = minor; + mutex_init(&device->context.context_mgr_node_lock); + + req->major = MAJOR(binderfs_dev); + req->minor = minor; + + ret = copy_to_user(userp, req, sizeof(*req)); + if (ret) { + ret = -EFAULT; + goto err; + } + + root = sb->s_root; + inode_lock(d_inode(root)); + dentry = d_alloc_name(root, name); + if (!dentry) { + inode_unlock(d_inode(root)); + ret = -ENOMEM; + goto err; + } + + /* Verify that the name userspace gave us is not already in use. */ + dup = d_lookup(root, &dentry->d_name); + if (dup) { + if (d_really_is_positive(dup)) { + dput(dup); + dput(dentry); + inode_unlock(d_inode(root)); + ret = -EEXIST; + goto err; + } + dput(dup); + } + + inode->i_private = device; + d_add(dentry, inode); + fsnotify_create(root->d_inode, dentry); + inode_unlock(d_inode(root)); + + return 0; + +err: + kfree(name); + kfree(device); + mutex_lock(&binderfs_minors_mutex); + ida_free(&binderfs_minors, minor); + mutex_unlock(&binderfs_minors_mutex); + iput(inode); + + return ret; +} + +/** + * binderfs_ctl_ioctl - handle binder device node allocation requests + * + * The request handler for the binder-control device. All requests operate on + * the binderfs mount the binder-control device resides in: + * - BINDER_CTL_ADD + * Allocate a new binder device. + * + * Return: 0 on success, negative errno on failure + */ +static long binder_ctl_ioctl(struct file *file, unsigned int cmd, + unsigned long arg) +{ + int ret = -EINVAL; + struct inode *inode = file_inode(file); + struct binderfs_device __user *device = (struct binderfs_device __user *)arg; + struct binderfs_device device_req; + + switch (cmd) { + case BINDER_CTL_ADD: + ret = copy_from_user(&device_req, device, sizeof(device_req)); + if (ret) { + ret = -EFAULT; + break; + } + + ret = binderfs_binder_device_create(inode, device, &device_req); + break; + default: + break; + } + + return ret; +} + +static void binderfs_evict_inode(struct inode *inode) +{ + struct binder_device *device = inode->i_private; + + clear_inode(inode); + + if (!device) + return; + + mutex_lock(&binderfs_minors_mutex); + ida_free(&binderfs_minors, device->miscdev.minor); + mutex_unlock(&binderfs_minors_mutex); + + kfree(device->context.name); + kfree(device); +} + +static const struct super_operations binderfs_super_ops = { + .statfs = simple_statfs, + .evict_inode = binderfs_evict_inode, +}; + +static int binderfs_rename(struct inode *old_dir, struct dentry *old_dentry, + struct inode *new_dir, struct dentry *new_dentry, + unsigned int flags) +{ + struct inode *inode = d_inode(old_dentry); + + /* binderfs doesn't support directories. 
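From userspace, binder_ctl_ioctl() above is driven through the binder-control node. A minimal sketch, assuming binderfs is already mounted at /dev/binderfs and that the uapi header exporting struct binderfs_device and BINDER_CTL_ADD is installed (its name and location have varied across trees; adjust the include to yours):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/android/binderfs.h>   /* header name is an assumption */

int main(void)
{
        struct binderfs_device req;
        int fd = open("/dev/binderfs/binder-control", O_RDONLY | O_CLOEXEC);

        if (fd < 0)
                return 1;

        memset(&req, 0, sizeof(req));
        strncpy(req.name, "my-binder", sizeof(req.name) - 1);
        if (ioctl(fd, BINDER_CTL_ADD, &req) < 0) {
                close(fd);
                return 1;
        }

        /* The kernel filled in the numbers it reserved for us. */
        printf("created my-binder as %u:%u\n", req.major, req.minor);
        close(fd);
        return 0;
}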
*/ + if (d_is_dir(old_dentry)) + return -EPERM; + + if (flags & ~RENAME_NOREPLACE) + return -EINVAL; + + if (!simple_empty(new_dentry)) + return -ENOTEMPTY; + + if (d_really_is_positive(new_dentry)) + simple_unlink(new_dir, new_dentry); + + old_dir->i_ctime = old_dir->i_mtime = new_dir->i_ctime = + new_dir->i_mtime = inode->i_ctime = current_time(old_dir); + + return 0; +} + +static int binderfs_unlink(struct inode *dir, struct dentry *dentry) +{ + /* + * The control dentry is only ever touched during mount so checking it + * here should not require us to take the lock. + */ + if (BINDERFS_I(dir)->control_dentry == dentry) + return -EPERM; + + return simple_unlink(dir, dentry); +} + +static const struct file_operations binder_ctl_fops = { + .owner = THIS_MODULE, + .open = nonseekable_open, + .unlocked_ioctl = binder_ctl_ioctl, + .compat_ioctl = binder_ctl_ioctl, + .llseek = noop_llseek, +}; + +/** + * binderfs_binder_ctl_create - create a new binder-control device + * @sb: super block of the binderfs mount + * + * This function creates a new binder-control device node in the binderfs mount + * referred to by @sb. + * + * Return: 0 on success, negative errno on failure + */ +static int binderfs_binder_ctl_create(struct super_block *sb) +{ + int minor, ret; + struct dentry *dentry; + struct binder_device *device; + struct inode *inode = NULL; + struct dentry *root = sb->s_root; + struct binderfs_info *info = sb->s_fs_info; + + device = kzalloc(sizeof(*device), GFP_KERNEL); + if (!device) + return -ENOMEM; + + inode_lock(d_inode(root)); + + /* If we have already created a binder-control node, return. */ + if (info->control_dentry) { + ret = 0; + goto out; + } + + ret = -ENOMEM; + inode = new_inode(sb); + if (!inode) + goto out; + + /* Reserve a new minor number for the new device. */ + mutex_lock(&binderfs_minors_mutex); + minor = ida_alloc_max(&binderfs_minors, BINDERFS_MAX_MINOR, GFP_KERNEL); + mutex_unlock(&binderfs_minors_mutex); + if (minor < 0) { + ret = minor; + goto out; + } + + inode->i_ino = SECOND_INODE; + inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode); + init_special_inode(inode, S_IFCHR | 0600, + MKDEV(MAJOR(binderfs_dev), minor)); + inode->i_fop = &binder_ctl_fops; + inode->i_uid = info->root_uid; + inode->i_gid = info->root_gid; + + device->binderfs_inode = inode; + device->miscdev.minor = minor; + + dentry = d_alloc_name(root, "binder-control"); + if (!dentry) + goto out; + + inode->i_private = device; + info->control_dentry = dentry; + d_add(dentry, inode); + inode_unlock(d_inode(root)); + + return 0; + +out: + inode_unlock(d_inode(root)); + kfree(device); + iput(inode); + + return ret; +} + +static const struct inode_operations binderfs_dir_inode_operations = { + .lookup = simple_lookup, + .rename = binderfs_rename, + .unlink = binderfs_unlink, +}; + +static int binderfs_fill_super(struct super_block *sb, void *data, int silent) +{ + struct binderfs_info *info; + int ret = -ENOMEM; + struct inode *inode = NULL; + struct ipc_namespace *ipc_ns = sb->s_fs_info; + + get_ipc_ns(ipc_ns); + + sb->s_blocksize = PAGE_SIZE; + sb->s_blocksize_bits = PAGE_SHIFT; + + /* + * The binderfs filesystem can be mounted by userns root in a + * non-initial userns. By default such mounts have the SB_I_NODEV flag + * set in s_iflags to prevent security issues where userns root can + * just create random device nodes via mknod() since it owns the + * filesystem mount. But binderfs does not allow creating any files, + * including device nodes.
The only way to create binder device nodes + * is through the binder-control device, which userns root is explicitly + * allowed to use. So removing the SB_I_NODEV flag from s_iflags is both + * necessary and safe. + */ + sb->s_iflags &= ~SB_I_NODEV; + sb->s_iflags |= SB_I_NOEXEC; + sb->s_magic = BINDERFS_SUPER_MAGIC; + sb->s_op = &binderfs_super_ops; + sb->s_time_gran = 1; + + info = kzalloc(sizeof(struct binderfs_info), GFP_KERNEL); + if (!info) + goto err_without_dentry; + + info->ipc_ns = ipc_ns; + info->root_gid = make_kgid(sb->s_user_ns, 0); + if (!gid_valid(info->root_gid)) + info->root_gid = GLOBAL_ROOT_GID; + info->root_uid = make_kuid(sb->s_user_ns, 0); + if (!uid_valid(info->root_uid)) + info->root_uid = GLOBAL_ROOT_UID; + + sb->s_fs_info = info; + + inode = new_inode(sb); + if (!inode) + goto err_without_dentry; + + inode->i_ino = FIRST_INODE; + inode->i_fop = &simple_dir_operations; + inode->i_mode = S_IFDIR | 0755; + inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode); + inode->i_op = &binderfs_dir_inode_operations; + set_nlink(inode, 2); + + sb->s_root = d_make_root(inode); + if (!sb->s_root) + goto err_without_dentry; + + ret = binderfs_binder_ctl_create(sb); + if (ret) + goto err_with_dentry; + + return 0; + +err_with_dentry: + dput(sb->s_root); + sb->s_root = NULL; + +err_without_dentry: + put_ipc_ns(ipc_ns); + iput(inode); + kfree(info); + + return ret; +} + +static int binderfs_test_super(struct super_block *sb, void *data) +{ + struct binderfs_info *info = sb->s_fs_info; + + if (info) + return info->ipc_ns == data; + + return 0; +} + +static int binderfs_set_super(struct super_block *sb, void *data) +{ + sb->s_fs_info = data; + return set_anon_super(sb, NULL); +} + +static struct dentry *binderfs_mount(struct file_system_type *fs_type, + int flags, const char *dev_name, + void *data) +{ + struct super_block *sb; + struct ipc_namespace *ipc_ns = current->nsproxy->ipc_ns; + + if (!ns_capable(ipc_ns->user_ns, CAP_SYS_ADMIN)) + return ERR_PTR(-EPERM); + + sb = sget_userns(fs_type, binderfs_test_super, binderfs_set_super, + flags, ipc_ns->user_ns, ipc_ns); + if (IS_ERR(sb)) + return ERR_CAST(sb); + + if (!sb->s_root) { + int ret = binderfs_fill_super(sb, data, flags & SB_SILENT ? 1 : 0); + if (ret) { + deactivate_locked_super(sb); + return ERR_PTR(ret); + } + + sb->s_flags |= SB_ACTIVE; + } + + return dget(sb->s_root); +} + +static void binderfs_kill_super(struct super_block *sb) +{ + struct binderfs_info *info = sb->s_fs_info; + + if (info && info->ipc_ns) + put_ipc_ns(info->ipc_ns); + + kfree(info); + kill_litter_super(sb); +} + +static struct file_system_type binder_fs_type = { + .name = "binder", + .mount = binderfs_mount, + .kill_sb = binderfs_kill_super, + .fs_flags = FS_USERNS_MOUNT, +}; + +static int __init init_binderfs(void) +{ + int ret; + + /* Allocate new major number for binderfs.
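Because binder_fs_type is registered with FS_USERNS_MOUNT and the superblock is keyed to the ipc namespace through sget_userns() above, each ipc namespace gets its own binderfs instance. A minimal userspace sketch of mounting one (the mountpoint is an assumption; any existing directory works):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* "binder" is the fs name registered above; the source string
         * is not interpreted by this filesystem type. */
        if (mount("binder", "/dev/binderfs", "binder", 0, NULL) < 0) {
                perror("mount binderfs");
                return 1;
        }

        return 0;
}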
*/ + ret = alloc_chrdev_region(&binderfs_dev, 0, BINDERFS_MAX_MINOR, + "binder"); + if (ret) + return ret; + + ret = register_filesystem(&binder_fs_type); + if (ret) { + unregister_chrdev_region(binderfs_dev, BINDERFS_MAX_MINOR); + return ret; + } + + binderfs_mnt = kern_mount(&binder_fs_type); + if (IS_ERR(binderfs_mnt)) { + ret = PTR_ERR(binderfs_mnt); + binderfs_mnt = NULL; + unregister_filesystem(&binder_fs_type); + unregister_chrdev_region(binderfs_dev, BINDERFS_MAX_MINOR); + } + + return ret; +} + +device_initcall(init_binderfs); diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c index 01306c018398..938ed513b070 100644 --- a/drivers/ata/libata-eh.c +++ b/drivers/ata/libata-eh.c @@ -919,8 +919,6 @@ static void ata_eh_set_pending(struct ata_port *ap, int fastdrain) void ata_qc_schedule_eh(struct ata_queued_cmd *qc) { struct ata_port *ap = qc->ap; - struct request_queue *q = qc->scsicmd->device->request_queue; - unsigned long flags; WARN_ON(!ap->ops->error_handler); @@ -932,9 +930,7 @@ void ata_qc_schedule_eh(struct ata_queued_cmd *qc) * Note that ATA_QCFLAG_FAILED is unconditionally set after * this function completes. */ - spin_lock_irqsave(q->queue_lock, flags); blk_abort_request(qc->scsicmd->request); - spin_unlock_irqrestore(q->queue_lock, flags); } /** diff --git a/drivers/ata/pata_palmld.c b/drivers/ata/pata_palmld.c index d071ab6864a8..26817fd91700 100644 --- a/drivers/ata/pata_palmld.c +++ b/drivers/ata/pata_palmld.c @@ -26,16 +26,17 @@ #include #include #include -#include +#include #include #include #define DRV_NAME "pata_palmld" -static struct gpio palmld_hdd_gpios[] = { - { GPIO_NR_PALMLD_IDE_PWEN, GPIOF_INIT_HIGH, "HDD Power" }, - { GPIO_NR_PALMLD_IDE_RESET, GPIOF_INIT_LOW, "HDD Reset" }, +struct palmld_pata { + struct ata_host *host; + struct gpio_desc *power; + struct gpio_desc *reset; }; static struct scsi_host_template palmld_sht = { @@ -50,39 +51,44 @@ static struct ata_port_operations palmld_port_ops = { static int palmld_pata_probe(struct platform_device *pdev) { - struct ata_host *host; + struct palmld_pata *lda; struct ata_port *ap; void __iomem *mem; + struct device *dev = &pdev->dev; int ret; + lda = devm_kzalloc(dev, sizeof(*lda), GFP_KERNEL); + if (!lda) + return -ENOMEM; + /* allocate host */ - host = ata_host_alloc(&pdev->dev, 1); - if (!host) { - ret = -ENOMEM; - goto err1; - } + lda->host = ata_host_alloc(dev, 1); + if (!lda->host) + return -ENOMEM; /* remap drive's physical memory address */ - mem = devm_ioremap(&pdev->dev, PALMLD_IDE_PHYS, 0x1000); - if (!mem) { - ret = -ENOMEM; - goto err1; + mem = devm_ioremap(dev, PALMLD_IDE_PHYS, 0x1000); + if (!mem) + return -ENOMEM; + + /* request and activate power and reset GPIOs */ + lda->power = devm_gpiod_get(dev, "power", GPIOD_OUT_HIGH); + if (IS_ERR(lda->power)) + return PTR_ERR(lda->power); + lda->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); + if (IS_ERR(lda->reset)) { + gpiod_set_value(lda->power, 0); + return PTR_ERR(lda->reset); } - /* request and activate power GPIO, IRQ GPIO */ - ret = gpio_request_array(palmld_hdd_gpios, - ARRAY_SIZE(palmld_hdd_gpios)); - if (ret) - goto err1; - - /* reset the drive */ - gpio_set_value(GPIO_NR_PALMLD_IDE_RESET, 0); + /* Assert reset to reset the drive */ + gpiod_set_value(lda->reset, 1); msleep(30); - gpio_set_value(GPIO_NR_PALMLD_IDE_RESET, 1); + gpiod_set_value(lda->reset, 0); msleep(30); /* setup the ata port */ - ap = host->ports[0]; + ap = lda->host->ports[0]; ap->ops = &palmld_port_ops; ap->pio_mask = ATA_PIO4; ap->flags |= 
ATA_FLAG_PIO_POLLING; @@ -96,27 +102,26 @@ static int palmld_pata_probe(struct platform_device *pdev) ata_sff_std_ports(&ap->ioaddr); /* activate host */ - ret = ata_host_activate(host, 0, NULL, IRQF_TRIGGER_RISING, - &palmld_sht); - if (ret) - goto err2; - - return ret; + ret = ata_host_activate(lda->host, 0, NULL, IRQF_TRIGGER_RISING, + &palmld_sht); + /* power down on failure */ + if (ret) { + gpiod_set_value(lda->power, 0); + return ret; + } -err2: - gpio_free_array(palmld_hdd_gpios, ARRAY_SIZE(palmld_hdd_gpios)); -err1: - return ret; + platform_set_drvdata(pdev, lda); + return 0; } -static int palmld_pata_remove(struct platform_device *dev) +static int palmld_pata_remove(struct platform_device *pdev) { - ata_platform_remove_one(dev); + struct palmld_pata *lda = platform_get_drvdata(pdev); - /* power down the HDD */ - gpio_set_value(GPIO_NR_PALMLD_IDE_PWEN, 0); + ata_platform_remove_one(pdev); - gpio_free_array(palmld_hdd_gpios, ARRAY_SIZE(palmld_hdd_gpios)); + /* power down the HDD */ + gpiod_set_value(lda->power, 0); return 0; } diff --git a/drivers/ata/pata_pxa.c b/drivers/ata/pata_pxa.c index e8b6a2e464c9..4b9b9e120188 100644 --- a/drivers/ata/pata_pxa.c +++ b/drivers/ata/pata_pxa.c @@ -25,7 +25,6 @@ #include #include #include -#include #include #include diff --git a/drivers/ata/pata_rb532_cf.c b/drivers/ata/pata_rb532_cf.c index 653b9a0bf727..66bb5bff126b 100644 --- a/drivers/ata/pata_rb532_cf.c +++ b/drivers/ata/pata_rb532_cf.c @@ -27,7 +27,7 @@ #include #include #include -#include +#include #include #include @@ -49,7 +49,7 @@ struct rb532_cf_info { void __iomem *iobase; - unsigned int gpio_line; + struct gpio_desc *gpio_line; unsigned int irq; }; @@ -60,7 +60,7 @@ static irqreturn_t rb532_pata_irq_handler(int irq, void *dev_instance) struct ata_host *ah = dev_instance; struct rb532_cf_info *info = ah->private_data; - if (gpio_get_value(info->gpio_line)) { + if (gpiod_get_value(info->gpio_line)) { irq_set_irq_type(info->irq, IRQ_TYPE_LEVEL_LOW); ata_sff_interrupt(info->irq, dev_instance); } else { @@ -106,10 +106,9 @@ static void rb532_pata_setup_ports(struct ata_host *ah) static int rb532_pata_driver_probe(struct platform_device *pdev) { int irq; - int gpio; + struct gpio_desc *gpiod; struct resource *res; struct ata_host *ah; - struct cf_device *pdata; struct rb532_cf_info *info; int ret; @@ -125,23 +124,12 @@ static int rb532_pata_driver_probe(struct platform_device *pdev) return -ENOENT; } - pdata = dev_get_platdata(&pdev->dev); - if (!pdata) { - dev_err(&pdev->dev, "no platform data specified\n"); - return -EINVAL; - } - - gpio = pdata->gpio_pin; - if (gpio < 0) { + gpiod = devm_gpiod_get(&pdev->dev, NULL, GPIOD_IN); + if (IS_ERR(gpiod)) { dev_err(&pdev->dev, "no GPIO found for irq%d\n", irq); - return -ENOENT; - } - - ret = gpio_request(gpio, DRV_NAME); - if (ret) { - dev_err(&pdev->dev, "GPIO request failed\n"); - return ret; + return PTR_ERR(gpiod); } + gpiod_set_consumer_name(gpiod, DRV_NAME); /* allocate host */ ah = ata_host_alloc(&pdev->dev, RB500_CF_MAXPORTS); @@ -153,7 +141,7 @@ static int rb532_pata_driver_probe(struct platform_device *pdev) return -ENOMEM; ah->private_data = info; - info->gpio_line = gpio; + info->gpio_line = gpiod; info->irq = irq; info->iobase = devm_ioremap_nocache(&pdev->dev, res->start, @@ -161,26 +149,14 @@ static int rb532_pata_driver_probe(struct platform_device *pdev) if (!info->iobase) return -ENOMEM; - ret = gpio_direction_input(gpio); - if (ret) { - dev_err(&pdev->dev, "unable to set GPIO direction, err=%d\n", - ret); - goto 
err_free_gpio; - } - rb532_pata_setup_ports(ah); ret = ata_host_activate(ah, irq, rb532_pata_irq_handler, IRQF_TRIGGER_LOW, &rb532_pata_sht); if (ret) - goto err_free_gpio; + return ret; return 0; - -err_free_gpio: - gpio_free(gpio); - - return ret; } static int rb532_pata_driver_remove(struct platform_device *pdev) @@ -189,7 +165,6 @@ static int rb532_pata_driver_remove(struct platform_device *pdev) struct rb532_cf_info *info = ah->private_data; ata_host_detach(ah); - gpio_free(info->gpio_line); return 0; } diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c index e67815b896fc..c8fc9280d6e4 100644 --- a/drivers/ata/sata_highbank.c +++ b/drivers/ata/sata_highbank.c @@ -31,8 +31,7 @@ #include #include #include -#include -#include +#include #include "ahci.h" @@ -85,7 +84,7 @@ struct ecx_plat_data { /* number of extra clocks that the SGPIO PIC controller expects */ u32 pre_clocks; u32 post_clocks; - unsigned sgpio_gpio[SGPIO_PINS]; + struct gpio_desc *sgpio_gpiod[SGPIO_PINS]; u32 sgpio_pattern; u32 port_to_sgpio[SGPIO_PORTS]; }; @@ -131,9 +130,9 @@ static void ecx_parse_sgpio(struct ecx_plat_data *pdata, u32 port, u32 state) */ static void ecx_led_cycle_clock(struct ecx_plat_data *pdata) { - gpio_set_value(pdata->sgpio_gpio[SCLOCK], 1); + gpiod_set_value(pdata->sgpio_gpiod[SCLOCK], 1); udelay(50); - gpio_set_value(pdata->sgpio_gpio[SCLOCK], 0); + gpiod_set_value(pdata->sgpio_gpiod[SCLOCK], 0); udelay(50); } @@ -164,15 +163,15 @@ static ssize_t ecx_transmit_led_message(struct ata_port *ap, u32 state, for (i = 0; i < pdata->pre_clocks; i++) ecx_led_cycle_clock(pdata); - gpio_set_value(pdata->sgpio_gpio[SLOAD], 1); + gpiod_set_value(pdata->sgpio_gpiod[SLOAD], 1); ecx_led_cycle_clock(pdata); - gpio_set_value(pdata->sgpio_gpio[SLOAD], 0); + gpiod_set_value(pdata->sgpio_gpiod[SLOAD], 0); /* * bit-bang out the SGPIO pattern, by consuming a bit and then * clocking it out. 
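The pata_palmld, pata_rb532_cf and sata_highbank changes above all follow the same conversion: global GPIO numbers plus gpio_request()/gpio_set_value() become gpio_desc handles from the consumer API. A minimal probe sketch of that pattern, for a hypothetical driver; "reset" is the consumer name looked up in the board's GPIO mapping, and values are logical, so ACTIVE_LOW polarity is handled by gpiolib:

#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/platform_device.h>

static int sample_probe(struct platform_device *pdev)
{
        struct gpio_desc *reset;

        /* Request the line, configured as output and driven to its
         * logical "asserted" level; devm releases it on unbind. */
        reset = devm_gpiod_get(&pdev->dev, "reset", GPIOD_OUT_HIGH);
        if (IS_ERR(reset))
                return PTR_ERR(reset);

        msleep(30);
        gpiod_set_value(reset, 0);      /* deassert: device leaves reset */

        return 0;
}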
*/ for (i = 0; i < (SGPIO_SIGNALS * pdata->n_ports); i++) { - gpio_set_value(pdata->sgpio_gpio[SDATA], sgpio_out & 1); + gpiod_set_value(pdata->sgpio_gpiod[SDATA], sgpio_out & 1); sgpio_out >>= 1; ecx_led_cycle_clock(pdata); } @@ -193,21 +192,19 @@ static void highbank_set_em_messages(struct device *dev, struct device_node *np = dev->of_node; struct ecx_plat_data *pdata = hpriv->plat_data; int i; - int err; for (i = 0; i < SGPIO_PINS; i++) { - err = of_get_named_gpio(np, "calxeda,sgpio-gpio", i); - if (err < 0) - return; - - pdata->sgpio_gpio[i] = err; - err = gpio_request(pdata->sgpio_gpio[i], "CX SGPIO"); - if (err) { - pr_err("sata_highbank gpio_request %d failed: %d\n", - i, err); - return; + struct gpio_desc *gpiod; + + gpiod = devm_gpiod_get_index(dev, "calxeda,sgpio", i, + GPIOD_OUT_HIGH); + if (IS_ERR(gpiod)) { + dev_err(dev, "failed to get GPIO %d\n", i); + continue; } - gpio_direction_output(pdata->sgpio_gpio[i], 1); + gpiod_set_consumer_name(gpiod, "CX SGPIO"); + + pdata->sgpio_gpiod[i] = gpiod; } of_property_read_u32_array(np, "calxeda,led-order", pdata->port_to_sgpio, diff --git a/drivers/ata/sata_rcar.c b/drivers/ata/sata_rcar.c index 4b1ff5bc256a..59b2317acea9 100644 --- a/drivers/ata/sata_rcar.c +++ b/drivers/ata/sata_rcar.c @@ -891,7 +891,9 @@ static int sata_rcar_probe(struct platform_device *pdev) int ret = 0; irq = platform_get_irq(pdev, 0); - if (irq <= 0) + if (irq < 0) + return irq; + if (!irq) return -EINVAL; priv = devm_kzalloc(dev, sizeof(struct sata_rcar_priv), GFP_KERNEL); diff --git a/drivers/atm/fore200e.c b/drivers/atm/fore200e.c index f55ffde877b5..14053e01a2cc 100644 --- a/drivers/atm/fore200e.c +++ b/drivers/atm/fore200e.c @@ -754,8 +754,8 @@ static int fore200e_sba_proc_read(struct fore200e *fore200e, char *page) regs = of_get_property(op->dev.of_node, "reg", NULL); - return sprintf(page, " SBUS slot/device:\t\t%d/'%s'\n", - (regs ? regs->which_io : 0), op->dev.of_node->name); + return sprintf(page, " SBUS slot/device:\t\t%d/'%pOFn'\n", + (regs ? regs->which_io : 0), op->dev.of_node); } static const struct fore200e_bus fore200e_sbus_ops = { diff --git a/drivers/base/bus.c b/drivers/base/bus.c index 8bfd27ec73d6..e06a57936cc9 100644 --- a/drivers/base/bus.c +++ b/drivers/base/bus.c @@ -31,6 +31,9 @@ static struct kset *system_kset; #define to_drv_attr(_attr) container_of(_attr, struct driver_attribute, attr) +#define DRIVER_ATTR_IGNORE_LOCKDEP(_name, _mode, _show, _store) \ + struct driver_attribute driver_attr_##_name = \ + __ATTR_IGNORE_LOCKDEP(_name, _mode, _show, _store) static int __must_check bus_rescan_devices_helper(struct device *dev, void *data); @@ -195,7 +198,7 @@ static ssize_t unbind_store(struct device_driver *drv, const char *buf, bus_put(bus); return err; } -static DRIVER_ATTR_WO(unbind); +static DRIVER_ATTR_IGNORE_LOCKDEP(unbind, S_IWUSR, NULL, unbind_store); /* * Manually attach a device to a driver. @@ -231,7 +234,7 @@ static ssize_t bind_store(struct device_driver *drv, const char *buf, bus_put(bus); return err; } -static DRIVER_ATTR_WO(bind); +static DRIVER_ATTR_IGNORE_LOCKDEP(bind, S_IWUSR, NULL, bind_store); static ssize_t show_drivers_autoprobe(struct bus_type *bus, char *buf) { @@ -611,8 +614,10 @@ static void remove_probe_files(struct bus_type *bus) static ssize_t uevent_store(struct device_driver *drv, const char *buf, size_t count) { - kobject_synth_uevent(&drv->p->kobj, buf, count); - return count; + int rc; + + rc = kobject_synth_uevent(&drv->p->kobj, buf, count); + return rc ? 
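The sata_rcar hunk above splits the old irq <= 0 check in two so that real errors (including -EPROBE_DEFER) propagate instead of being flattened to -EINVAL. The convention, as a sketch:

#include <linux/platform_device.h>

static int sample_get_irq(struct platform_device *pdev)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;     /* real error, e.g. -EPROBE_DEFER */
        if (!irq)
                return -EINVAL; /* 0 is not a usable linux irq number */

        return irq;
}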
rc : count; } static DRIVER_ATTR_WO(uevent); @@ -828,8 +833,10 @@ static void klist_devices_put(struct klist_node *n) static ssize_t bus_uevent_store(struct bus_type *bus, const char *buf, size_t count) { - kobject_synth_uevent(&bus->p->subsys.kobj, buf, count); - return count; + int rc; + + rc = kobject_synth_uevent(&bus->p->subsys.kobj, buf, count); + return rc ? rc : count; } static BUS_ATTR(uevent, S_IWUSR, NULL, bus_uevent_store); diff --git a/drivers/base/component.c b/drivers/base/component.c index e8d676fad0c9..ddcea8739c12 100644 --- a/drivers/base/component.c +++ b/drivers/base/component.c @@ -85,17 +85,7 @@ static int component_devices_show(struct seq_file *s, void *data) return 0; } -static int component_devices_open(struct inode *inode, struct file *file) -{ - return single_open(file, component_devices_show, inode->i_private); -} - -static const struct file_operations component_devices_fops = { - .open = component_devices_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(component_devices); static int __init component_debug_init(void) { diff --git a/drivers/base/core.c b/drivers/base/core.c index a2f14098663f..0073b09bb99f 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -815,10 +815,12 @@ ssize_t device_store_ulong(struct device *dev, const char *buf, size_t size) { struct dev_ext_attribute *ea = to_ext_attr(attr); - char *end; - unsigned long new = simple_strtoul(buf, &end, 0); - if (end == buf) - return -EINVAL; + int ret; + unsigned long new; + + ret = kstrtoul(buf, 0, &new); + if (ret) + return ret; *(unsigned long *)(ea->var) = new; /* Always return full write size even if we didn't consume all */ return size; @@ -839,9 +841,14 @@ ssize_t device_store_int(struct device *dev, const char *buf, size_t size) { struct dev_ext_attribute *ea = to_ext_attr(attr); - char *end; - long new = simple_strtol(buf, &end, 0); - if (end == buf || new > INT_MAX || new < INT_MIN) + int ret; + long new; + + ret = kstrtol(buf, 0, &new); + if (ret) + return ret; + + if (new > INT_MAX || new < INT_MIN) return -EINVAL; *(int *)(ea->var) = new; /* Always return full write size even if we didn't consume all */ @@ -911,8 +918,7 @@ static void device_release(struct kobject *kobj) else if (dev->class && dev->class->dev_release) dev->class->dev_release(dev); else - WARN(1, KERN_ERR "Device '%s' does not have a release() " - "function, it is broken and must be fixed.\n", + WARN(1, KERN_ERR "Device '%s' does not have a release() function, it is broken and must be fixed. See Documentation/kobject.txt.\n", dev_name(dev)); kfree(p); } @@ -1088,8 +1094,14 @@ out: static ssize_t uevent_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { - if (kobject_synth_uevent(&dev->kobj, buf, count)) + int rc; + + rc = kobject_synth_uevent(&dev->kobj, buf, count); + + if (rc) { dev_err(dev, "uevent: failed to send synthetic uevent\n"); + return rc; + } return count; } diff --git a/drivers/base/dd.c b/drivers/base/dd.c index 169412ee4ae8..8ac10af17c00 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -179,7 +179,7 @@ static void driver_deferred_probe_trigger(void) } /** - * device_block_probing() - Block/defere device's probes + * device_block_probing() - Block/defer device's probes * * It will disable probing of devices and defer their probes instead. 
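The component.c hunk above drops a hand-rolled single_open() wrapper in favor of DEFINE_SHOW_ATTRIBUTE(). The macro only needs a <name>_show(struct seq_file *, void *) function and generates <name>_open plus <name>_fops; a sketch with a hypothetical attribute:

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int sample_devices_show(struct seq_file *s, void *data)
{
        seq_puts(s, "no devices yet\n");
        return 0;
}
DEFINE_SHOW_ATTRIBUTE(sample_devices);  /* emits sample_devices_fops */

static int __init sample_debugfs_init(void)
{
        debugfs_create_file("sample_devices", 0444, NULL, NULL,
                            &sample_devices_fops);
        return 0;
}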
*/ @@ -223,7 +223,10 @@ DEFINE_SHOW_ATTRIBUTE(deferred_devs); static int deferred_probe_timeout = -1; static int __init deferred_probe_timeout_setup(char *str) { - deferred_probe_timeout = simple_strtol(str, NULL, 10); + int timeout; + + if (!kstrtoint(str, 10, &timeout)) + deferred_probe_timeout = timeout; return 1; } __setup("deferred_probe_timeout=", deferred_probe_timeout_setup); @@ -453,7 +456,7 @@ static int really_probe(struct device *dev, struct device_driver *drv) if (defer_all_probes) { /* * Value of defer_all_probes can be set only by - * device_defer_all_probes_enable() which, in turn, will call + * device_block_probing() which, in turn, will call * wait_for_device_probe() right after that to avoid any races. */ dev_dbg(dev, "Driver %s force probe deferral\n", drv->name); @@ -928,16 +931,13 @@ static void __device_release_driver(struct device *dev, struct device *parent) drv = dev->driver; if (drv) { - if (driver_allows_async_probing(drv)) - async_synchronize_full(); - while (device_links_busy(dev)) { device_unlock(dev); - if (parent) + if (parent && dev->bus->need_parent_lock) device_unlock(parent); device_links_unbind_consumers(dev); - if (parent) + if (parent && dev->bus->need_parent_lock) device_lock(parent); device_lock(dev); @@ -1036,6 +1036,9 @@ void driver_detach(struct device_driver *drv) struct device_private *dev_prv; struct device *dev; + if (driver_allows_async_probing(drv)) + async_synchronize_full(); + for (;;) { spin_lock(&drv->p->klist_devices.k_lock); if (list_empty(&drv->p->klist_devices.k_list)) { diff --git a/drivers/base/memory.c b/drivers/base/memory.c index 0e5985682642..048cbf7d5233 100644 --- a/drivers/base/memory.c +++ b/drivers/base/memory.c @@ -109,8 +109,8 @@ static unsigned long get_memory_block_size(void) * uses. */ -static ssize_t show_mem_start_phys_index(struct device *dev, - struct device_attribute *attr, char *buf) +static ssize_t phys_index_show(struct device *dev, + struct device_attribute *attr, char *buf) { struct memory_block *mem = to_memory_block(dev); unsigned long phys_index; @@ -122,8 +122,8 @@ static ssize_t show_mem_start_phys_index(struct device *dev, /* * Show whether the section of memory is likely to be hot-removable */ -static ssize_t show_mem_removable(struct device *dev, - struct device_attribute *attr, char *buf) +static ssize_t removable_show(struct device *dev, struct device_attribute *attr, + char *buf) { unsigned long i, pfn; int ret = 1; @@ -146,8 +146,8 @@ out: /* * online, offline, going offline, etc. 
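Both the base/core.c and base/dd.c hunks above move from simple_strto*() to the kstrto*() helpers: the latter return a negative errno on malformed or overflowing input instead of silently parsing a prefix. A sketch of the sysfs-store shape this enables (hypothetical attribute):

#include <linux/device.h>
#include <linux/kernel.h>       /* kstrtoul() */

static ssize_t sample_store(struct device *dev, struct device_attribute *attr,
                            const char *buf, size_t size)
{
        unsigned long new;
        int ret;

        ret = kstrtoul(buf, 0, &new);   /* base 0: accepts 0x.. and 0.. */
        if (ret)
                return ret;     /* -EINVAL / -ERANGE from bad input */

        /* ... apply "new" ... */
        return size;
}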
*/ -static ssize_t show_mem_state(struct device *dev, - struct device_attribute *attr, char *buf) +static ssize_t state_show(struct device *dev, struct device_attribute *attr, + char *buf) { struct memory_block *mem = to_memory_block(dev); ssize_t len = 0; @@ -207,15 +207,15 @@ static bool pages_correctly_probed(unsigned long start_pfn) return false; if (!present_section_nr(section_nr)) { - pr_warn("section %ld pfn[%lx, %lx) not present", + pr_warn("section %ld pfn[%lx, %lx) not present\n", section_nr, pfn, pfn + PAGES_PER_SECTION); return false; } else if (!valid_section_nr(section_nr)) { - pr_warn("section %ld pfn[%lx, %lx) no valid memmap", + pr_warn("section %ld pfn[%lx, %lx) no valid memmap\n", section_nr, pfn, pfn + PAGES_PER_SECTION); return false; } else if (online_section_nr(section_nr)) { - pr_warn("section %ld pfn[%lx, %lx) is already online", + pr_warn("section %ld pfn[%lx, %lx) is already online\n", section_nr, pfn, pfn + PAGES_PER_SECTION); return false; } @@ -286,7 +286,7 @@ static int memory_subsys_online(struct device *dev) return 0; /* - * If we are called from store_mem_state(), online_type will be + * If we are called from state_store(), online_type will be * set >= 0 Otherwise we were called from the device online * attribute and need to set the online_type. */ @@ -315,9 +315,8 @@ static int memory_subsys_offline(struct device *dev) return memory_block_change_state(mem, MEM_OFFLINE, MEM_ONLINE); } -static ssize_t -store_mem_state(struct device *dev, - struct device_attribute *attr, const char *buf, size_t count) +static ssize_t state_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) { struct memory_block *mem = to_memory_block(dev); int ret, online_type; @@ -374,7 +373,7 @@ err: * s.t. if I offline all of these sections I can then * remove the physical device? */ -static ssize_t show_phys_device(struct device *dev, +static ssize_t phys_device_show(struct device *dev, struct device_attribute *attr, char *buf) { struct memory_block *mem = to_memory_block(dev); @@ -395,7 +394,7 @@ static void print_allowed_zone(char *buf, int nid, unsigned long start_pfn, } } -static ssize_t show_valid_zones(struct device *dev, +static ssize_t valid_zones_show(struct device *dev, struct device_attribute *attr, char *buf) { struct memory_block *mem = to_memory_block(dev); @@ -435,33 +434,31 @@ out: return strlen(buf); } -static DEVICE_ATTR(valid_zones, 0444, show_valid_zones, NULL); +static DEVICE_ATTR_RO(valid_zones); #endif -static DEVICE_ATTR(phys_index, 0444, show_mem_start_phys_index, NULL); -static DEVICE_ATTR(state, 0644, show_mem_state, store_mem_state); -static DEVICE_ATTR(phys_device, 0444, show_phys_device, NULL); -static DEVICE_ATTR(removable, 0444, show_mem_removable, NULL); +static DEVICE_ATTR_RO(phys_index); +static DEVICE_ATTR_RW(state); +static DEVICE_ATTR_RO(phys_device); +static DEVICE_ATTR_RO(removable); /* * Block size attribute stuff */ -static ssize_t -print_block_size(struct device *dev, struct device_attribute *attr, - char *buf) +static ssize_t block_size_bytes_show(struct device *dev, + struct device_attribute *attr, char *buf) { return sprintf(buf, "%lx\n", get_memory_block_size()); } -static DEVICE_ATTR(block_size_bytes, 0444, print_block_size, NULL); +static DEVICE_ATTR_RO(block_size_bytes); /* * Memory auto online policy. 
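The memory.c renames above are not cosmetic: the DEVICE_ATTR_RO/_RW/_WO shorthands require the callbacks to be named <attr>_show and <attr>_store. A minimal sketch of the convention (hypothetical attribute):

#include <linux/device.h>

static ssize_t state_show(struct device *dev, struct device_attribute *attr,
                          char *buf)
{
        return sprintf(buf, "online\n");
}

static ssize_t state_store(struct device *dev, struct device_attribute *attr,
                           const char *buf, size_t count)
{
        /* parse and apply "buf" here */
        return count;
}
/* Expands to dev_attr_state, wired to state_show()/state_store(). */
static DEVICE_ATTR_RW(state);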
*/ -static ssize_t -show_auto_online_blocks(struct device *dev, struct device_attribute *attr, - char *buf) +static ssize_t auto_online_blocks_show(struct device *dev, + struct device_attribute *attr, char *buf) { if (memhp_auto_online) return sprintf(buf, "online\n"); @@ -469,9 +466,9 @@ show_auto_online_blocks(struct device *dev, struct device_attribute *attr, return sprintf(buf, "offline\n"); } -static ssize_t -store_auto_online_blocks(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t auto_online_blocks_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { if (sysfs_streq(buf, "online")) memhp_auto_online = true; @@ -483,8 +480,7 @@ store_auto_online_blocks(struct device *dev, struct device_attribute *attr, return count; } -static DEVICE_ATTR(auto_online_blocks, 0644, show_auto_online_blocks, - store_auto_online_blocks); +static DEVICE_ATTR_RW(auto_online_blocks); /* * Some architectures will have custom drivers to do this, and @@ -493,9 +489,8 @@ static DEVICE_ATTR(auto_online_blocks, 0644, show_auto_online_blocks, * and will require this interface. */ #ifdef CONFIG_ARCH_MEMORY_PROBE -static ssize_t -memory_probe_store(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t probe_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) { u64 phys_addr; int nid, ret; @@ -525,7 +520,7 @@ out: return ret; } -static DEVICE_ATTR(probe, S_IWUSR, NULL, memory_probe_store); +static DEVICE_ATTR_WO(probe); #endif #ifdef CONFIG_MEMORY_FAILURE @@ -534,10 +529,9 @@ static DEVICE_ATTR(probe, S_IWUSR, NULL, memory_probe_store); */ /* Soft offline a page */ -static ssize_t -store_soft_offline_page(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t soft_offline_page_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { int ret; u64 pfn; @@ -553,10 +547,9 @@ store_soft_offline_page(struct device *dev, } /* Forcibly offline a page, including killing processes. */ -static ssize_t -store_hard_offline_page(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t hard_offline_page_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { int ret; u64 pfn; @@ -569,8 +562,8 @@ store_hard_offline_page(struct device *dev, return ret ? ret : count; } -static DEVICE_ATTR(soft_offline_page, S_IWUSR, NULL, store_soft_offline_page); -static DEVICE_ATTR(hard_offline_page, S_IWUSR, NULL, store_hard_offline_page); +static DEVICE_ATTR_WO(soft_offline_page); +static DEVICE_ATTR_WO(hard_offline_page); #endif /* @@ -688,7 +681,7 @@ static int add_memory_block(int base_section_nr) int i, ret, section_count = 0, section_nr; for (i = base_section_nr; - (i < base_section_nr + sections_per_block) && i < NR_MEM_SECTIONS; + i < base_section_nr + sections_per_block; i++) { if (!present_section_nr(i)) continue; @@ -739,7 +732,7 @@ unregister_memory(struct memory_block *memory) { BUG_ON(memory->dev.bus != &memory_subsys); - /* drop the ref. we got in remove_memory_block() */ + /* drop the ref. 
we got in remove_memory_section() */ put_device(&memory->dev); device_unregister(&memory->dev); } diff --git a/drivers/base/platform.c b/drivers/base/platform.c index 0fb5f140f1b0..be6c1eb3cbe2 100644 --- a/drivers/base/platform.c +++ b/drivers/base/platform.c @@ -234,7 +234,7 @@ struct platform_object { */ void platform_device_put(struct platform_device *pdev) { - if (pdev) + if (!IS_ERR_OR_NULL(pdev)) put_device(&pdev->dev); } EXPORT_SYMBOL_GPL(platform_device_put); @@ -447,7 +447,7 @@ void platform_device_del(struct platform_device *pdev) { int i; - if (pdev) { + if (!IS_ERR_OR_NULL(pdev)) { device_del(&pdev->dev); if (pdev->id_auto) { @@ -1137,8 +1137,7 @@ int platform_dma_configure(struct device *dev) ret = of_dma_configure(dev, dev->of_node, true); } else if (has_acpi_companion(dev)) { attr = acpi_get_dma_attr(to_acpi_device_node(dev->fwnode)); - if (attr != DEV_DMA_NOT_SUPPORTED) - ret = acpi_dma_configure(dev, attr); + ret = acpi_dma_configure(dev, attr); } return ret; @@ -1178,37 +1177,6 @@ int __init platform_bus_init(void) return error; } -#ifndef ARCH_HAS_DMA_GET_REQUIRED_MASK -static u64 dma_default_get_required_mask(struct device *dev) -{ - u32 low_totalram = ((max_pfn - 1) << PAGE_SHIFT); - u32 high_totalram = ((max_pfn - 1) >> (32 - PAGE_SHIFT)); - u64 mask; - - if (!high_totalram) { - /* convert to mask just covering totalram */ - low_totalram = (1 << (fls(low_totalram) - 1)); - low_totalram += low_totalram - 1; - mask = low_totalram; - } else { - high_totalram = (1 << (fls(high_totalram) - 1)); - high_totalram += high_totalram - 1; - mask = (((u64)high_totalram) << 32) + 0xffffffff; - } - return mask; -} - -u64 dma_get_required_mask(struct device *dev) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - - if (ops->get_required_mask) - return ops->get_required_mask(dev); - return dma_default_get_required_mask(dev); -} -EXPORT_SYMBOL_GPL(dma_get_required_mask); -#endif - static __initdata LIST_HEAD(early_platform_driver_list); static __initdata LIST_HEAD(early_platform_device_list); diff --git a/drivers/block/aoe/aoe.h b/drivers/block/aoe/aoe.h index 7ca76ed2e71a..84d0fcebd6af 100644 --- a/drivers/block/aoe/aoe.h +++ b/drivers/block/aoe/aoe.h @@ -100,6 +100,10 @@ enum { MAX_TAINT = 1000, /* cap on aoetgt taint */ }; +struct aoe_req { + unsigned long nr_bios; +}; + struct buf { ulong nframesout; struct bio *bio; diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c index ed26b7287256..e2c6aae2d636 100644 --- a/drivers/block/aoe/aoeblk.c +++ b/drivers/block/aoe/aoeblk.c @@ -387,6 +387,7 @@ aoeblk_gdalloc(void *vp) set = &d->tag_set; set->ops = &aoeblk_mq_ops; + set->cmd_size = sizeof(struct aoe_req); set->nr_hw_queues = 1; set->queue_depth = 128; set->numa_node = NUMA_NO_NODE; diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c index bb2fba651bd2..3cf9bc5d8d95 100644 --- a/drivers/block/aoe/aoecmd.c +++ b/drivers/block/aoe/aoecmd.c @@ -822,17 +822,6 @@ out: spin_unlock_irqrestore(&d->lock, flags); } -static unsigned long -rqbiocnt(struct request *r) -{ - struct bio *bio; - unsigned long n = 0; - - __rq_for_each_bio(bio, r) - n++; - return n; -} - static void bufinit(struct buf *buf, struct request *rq, struct bio *bio) { @@ -847,6 +836,7 @@ nextbuf(struct aoedev *d) { struct request *rq; struct request_queue *q; + struct aoe_req *req; struct buf *buf; struct bio *bio; @@ -865,7 +855,11 @@ nextbuf(struct aoedev *d) blk_mq_start_request(rq); d->ip.rq = rq; d->ip.nxbio = rq->bio; - rq->special = (void *) rqbiocnt(rq); + + req = 
blk_mq_rq_to_pdu(rq); + req->nr_bios = 0; + __rq_for_each_bio(bio, rq) + req->nr_bios++; } buf = mempool_alloc(d->bufpool, GFP_ATOMIC); if (buf == NULL) { @@ -1069,16 +1063,13 @@ aoe_end_request(struct aoedev *d, struct request *rq, int fastfail) static void aoe_end_buf(struct aoedev *d, struct buf *buf) { - struct request *rq; - unsigned long n; + struct request *rq = buf->rq; + struct aoe_req *req = blk_mq_rq_to_pdu(rq); if (buf == d->ip.buf) d->ip.buf = NULL; - rq = buf->rq; mempool_free(buf, d->bufpool); - n = (unsigned long) rq->special; - rq->special = (void *) --n; - if (n == 0) + if (--req->nr_bios == 0) aoe_end_request(d, rq, 0); } diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c index 9063f8efbd3b..5b49f1b33ebe 100644 --- a/drivers/block/aoe/aoedev.c +++ b/drivers/block/aoe/aoedev.c @@ -160,21 +160,22 @@ static void aoe_failip(struct aoedev *d) { struct request *rq; + struct aoe_req *req; struct bio *bio; - unsigned long n; aoe_failbuf(d, d->ip.buf); - rq = d->ip.rq; if (rq == NULL) return; + + req = blk_mq_rq_to_pdu(rq); while ((bio = d->ip.nxbio)) { bio->bi_status = BLK_STS_IOERR; d->ip.nxbio = bio->bi_next; - n = (unsigned long) rq->special; - rq->special = (void *) --n; + req->nr_bios--; } - if ((unsigned long) rq->special == 0) + + if (!req->nr_bios) aoe_end_request(d, rq, 0); } diff --git a/drivers/block/aoe/aoemain.c b/drivers/block/aoe/aoemain.c index 251482066977..1e4e2971171c 100644 --- a/drivers/block/aoe/aoemain.c +++ b/drivers/block/aoe/aoemain.c @@ -24,7 +24,7 @@ static void discover_timer(struct timer_list *t) aoecmd_cfg(0xffff, 0xff); } -static void +static void __exit aoe_exit(void) { del_timer_sync(&timer); diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c index f88b4c26d422..b0dbbdfeb33e 100644 --- a/drivers/block/ataflop.c +++ b/drivers/block/ataflop.c @@ -1471,6 +1471,15 @@ static void setup_req_params( int drive ) ReqTrack, ReqSector, (unsigned long)ReqData )); } +static void ataflop_commit_rqs(struct blk_mq_hw_ctx *hctx) +{ + spin_lock_irq(&ataflop_lock); + atari_disable_irq(IRQ_MFP_FDC); + finish_fdc(); + atari_enable_irq(IRQ_MFP_FDC); + spin_unlock_irq(&ataflop_lock); +} + static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd) { @@ -1947,6 +1956,7 @@ static const struct block_device_operations floppy_fops = { static const struct blk_mq_ops ataflop_mq_ops = { .queue_rq = ataflop_queue_rq, + .commit_rqs = ataflop_commit_rqs, }; static struct kobject *floppy_find(dev_t dev, int *part, void *data) @@ -1982,6 +1992,7 @@ static int __init atari_floppy_init (void) &ataflop_mq_ops, 2, BLK_MQ_F_SHOULD_MERGE); if (IS_ERR(unit[i].disk->queue)) { + put_disk(unit[i].disk); ret = PTR_ERR(unit[i].disk->queue); unit[i].disk->queue = NULL; goto err; @@ -2033,18 +2044,13 @@ static int __init atari_floppy_init (void) return 0; err: - do { + while (--i >= 0) { struct gendisk *disk = unit[i].disk; - if (disk) { - if (disk->queue) { - blk_cleanup_queue(disk->queue); - disk->queue = NULL; - } - blk_mq_free_tag_set(&unit[i].tag_set); - put_disk(unit[i].disk); - } - } while (i--); + blk_cleanup_queue(disk->queue); + blk_mq_free_tag_set(&unit[i].tag_set); + put_disk(unit[i].disk); + } unregister_blkdev(FLOPPY_MAJOR, "fd"); return ret; diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index fa8204214ac0..f973a2a845c8 100644 --- a/drivers/block/drbd/drbd_main.c +++ b/drivers/block/drbd/drbd_main.c @@ -2792,7 +2792,7 @@ enum drbd_ret_code drbd_create_device(struct 
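The aoe changes above replace the abuse of rq->special as a counter with a real per-request payload: set cmd_size in the tag set and blk-mq allocates that many extra bytes behind every struct request, reachable via blk_mq_rq_to_pdu(). A condensed sketch of the pattern (hypothetical driver):

#include <linux/blk-mq.h>
#include <linux/blkdev.h>

struct sample_req {
        unsigned long nr_bios;  /* what used to hide in rq->special */
};

/* At init time: tag_set.cmd_size = sizeof(struct sample_req); */

static blk_status_t sample_queue_rq(struct blk_mq_hw_ctx *hctx,
                                    const struct blk_mq_queue_data *bd)
{
        struct request *rq = bd->rq;
        struct sample_req *req = blk_mq_rq_to_pdu(rq);
        struct bio *bio;

        blk_mq_start_request(rq);

        req->nr_bios = 0;
        __rq_for_each_bio(bio, rq)
                req->nr_bios++;

        /* ... issue the request; completion decrements nr_bios ... */
        return BLK_STS_OK;
}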
drbd_config_context *adm_ctx, unsig drbd_init_set_defaults(device); - q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE, &resource->req_lock); + q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE); if (!q) goto out_no_q; device->rq_queue = q; diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index 61c392752fe4..ccfcf00f2798 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -3623,7 +3623,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in * change. */ - peer_integrity_tfm = crypto_alloc_shash(integrity_alg, 0, CRYPTO_ALG_ASYNC); + peer_integrity_tfm = crypto_alloc_shash(integrity_alg, 0, 0); if (IS_ERR(peer_integrity_tfm)) { peer_integrity_tfm = NULL; drbd_err(connection, "peer data-integrity-alg %s not supported\n", diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c index fb23578e9a41..6f2856c6d0f2 100644 --- a/drivers/block/floppy.c +++ b/drivers/block/floppy.c @@ -2231,7 +2231,6 @@ static void request_done(int uptodate) { struct request *req = current_req; struct request_queue *q; - unsigned long flags; int block; char msg[sizeof("request done ") + sizeof(int) * 3]; @@ -2254,10 +2253,7 @@ static void request_done(int uptodate) if (block > _floppy->sect) DRS->maxtrack = 1; - /* unlock chained buffers */ - spin_lock_irqsave(q->queue_lock, flags); floppy_end_request(req, 0); - spin_unlock_irqrestore(q->queue_lock, flags); } else { if (rq_data_dir(req) == WRITE) { /* record write error information */ @@ -2269,9 +2265,7 @@ static void request_done(int uptodate) DRWE->last_error_sector = blk_rq_pos(req); DRWE->last_error_generation = DRS->generation; } - spin_lock_irqsave(q->queue_lock, flags); floppy_end_request(req, BLK_STS_IOERR); - spin_unlock_irqrestore(q->queue_lock, flags); } } diff --git a/drivers/block/loop.c b/drivers/block/loop.c index cb0cc8685076..0939f36548c9 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -77,13 +77,14 @@ #include #include #include +#include #include "loop.h" #include static DEFINE_IDR(loop_index_idr); -static DEFINE_MUTEX(loop_index_mutex); +static DEFINE_MUTEX(loop_ctl_mutex); static int max_part; static int part_shift; @@ -630,18 +631,7 @@ static void loop_reread_partitions(struct loop_device *lo, { int rc; - /* - * bd_mutex has been held already in release path, so don't - * acquire it if this function is called in such case. - * - * If the reread partition isn't from release path, lo_refcnt - * must be at least one and it can only become zero when the - * current holder is released. 
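The loop.c rework that begins above replaces the per-device lo_ctl_mutex with the global loop_ctl_mutex and consistently takes it with mutex_lock_killable(), so a task stuck behind a wedged loop device can still be killed. The shape of that pattern, as a sketch:

#include <linux/mutex.h>

static DEFINE_MUTEX(sample_ctl_mutex);

static int sample_ctl_op(void)
{
        int err;

        err = mutex_lock_killable(&sample_ctl_mutex);
        if (err)
                return err;     /* -EINTR: a fatal signal arrived */

        /* ... check state, mutate, note any follow-up work ... */

        mutex_unlock(&sample_ctl_mutex);
        /* Heavyweight follow-ups (fput, partition rescan) run here,
         * outside the lock, as the patch does. */
        return 0;
}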
- */ - if (!atomic_read(&lo->lo_refcnt)) - rc = __blkdev_reread_part(bdev); - else - rc = blkdev_reread_part(bdev); + rc = blkdev_reread_part(bdev); if (rc) pr_warn("%s: partition scan of loop%d (%s) failed (rc=%d)\n", __func__, lo->lo_number, lo->lo_file_name, rc); @@ -688,26 +678,30 @@ static int loop_validate_file(struct file *file, struct block_device *bdev) static int loop_change_fd(struct loop_device *lo, struct block_device *bdev, unsigned int arg) { - struct file *file, *old_file; + struct file *file = NULL, *old_file; int error; + bool partscan; + error = mutex_lock_killable(&loop_ctl_mutex); + if (error) + return error; error = -ENXIO; if (lo->lo_state != Lo_bound) - goto out; + goto out_err; /* the loop device has to be read-only */ error = -EINVAL; if (!(lo->lo_flags & LO_FLAGS_READ_ONLY)) - goto out; + goto out_err; error = -EBADF; file = fget(arg); if (!file) - goto out; + goto out_err; error = loop_validate_file(file, bdev); if (error) - goto out_putf; + goto out_err; old_file = lo->lo_backing_file; @@ -715,7 +709,7 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev, /* size of the new backing store needs to be the same */ if (get_loop_size(lo, file) != get_loop_size(lo, old_file)) - goto out_putf; + goto out_err; /* and ... switch */ blk_mq_freeze_queue(lo->lo_queue); @@ -726,15 +720,22 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS)); loop_update_dio(lo); blk_mq_unfreeze_queue(lo->lo_queue); - + partscan = lo->lo_flags & LO_FLAGS_PARTSCAN; + mutex_unlock(&loop_ctl_mutex); + /* + * We must drop file reference outside of loop_ctl_mutex as dropping + * the file ref can take bd_mutex which creates circular locking + * dependency. + */ fput(old_file); - if (lo->lo_flags & LO_FLAGS_PARTSCAN) + if (partscan) loop_reread_partitions(lo, bdev); return 0; - out_putf: - fput(file); - out: +out_err: + mutex_unlock(&loop_ctl_mutex); + if (file) + fput(file); return error; } @@ -909,6 +910,7 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode, int lo_flags = 0; int error; loff_t size; + bool partscan; /* This is safe, since we have a reference from open(). */ __module_get(THIS_MODULE); @@ -918,13 +920,17 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode, if (!file) goto out; + error = mutex_lock_killable(&loop_ctl_mutex); + if (error) + goto out_putf; + error = -EBUSY; if (lo->lo_state != Lo_unbound) - goto out_putf; + goto out_unlock; error = loop_validate_file(file, bdev); if (error) - goto out_putf; + goto out_unlock; mapping = file->f_mapping; inode = mapping->host; @@ -936,10 +942,10 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode, error = -EFBIG; size = get_loop_size(lo, file); if ((loff_t)(sector_t)size != size) - goto out_putf; + goto out_unlock; error = loop_prepare_queue(lo); if (error) - goto out_putf; + goto out_unlock; error = 0; @@ -971,18 +977,22 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode, lo->lo_state = Lo_bound; if (part_shift) lo->lo_flags |= LO_FLAGS_PARTSCAN; - if (lo->lo_flags & LO_FLAGS_PARTSCAN) - loop_reread_partitions(lo, bdev); + partscan = lo->lo_flags & LO_FLAGS_PARTSCAN; /* Grab the block_device to prevent its destruction after we - * put /dev/loopXX inode. Later in loop_clr_fd() we bdput(bdev). + * put /dev/loopXX inode. Later in __loop_clr_fd() we bdput(bdev). 
*/ bdgrab(bdev); + mutex_unlock(&loop_ctl_mutex); + if (partscan) + loop_reread_partitions(lo, bdev); return 0; - out_putf: +out_unlock: + mutex_unlock(&loop_ctl_mutex); +out_putf: fput(file); - out: +out: /* This is safe: open() is still holding a reference. */ module_put(THIS_MODULE); return error; @@ -1025,39 +1035,31 @@ loop_init_xfer(struct loop_device *lo, struct loop_func_table *xfer, return err; } -static int loop_clr_fd(struct loop_device *lo) +static int __loop_clr_fd(struct loop_device *lo, bool release) { - struct file *filp = lo->lo_backing_file; + struct file *filp = NULL; gfp_t gfp = lo->old_gfp_mask; struct block_device *bdev = lo->lo_device; + int err = 0; + bool partscan = false; + int lo_number; - if (lo->lo_state != Lo_bound) - return -ENXIO; - - /* - * If we've explicitly asked to tear down the loop device, - * and it has an elevated reference count, set it for auto-teardown when - * the last reference goes away. This stops $!~#$@ udev from - * preventing teardown because it decided that it needs to run blkid on - * the loopback device whenever they appear. xfstests is notorious for - * failing tests because blkid via udev races with a losetup - * /do something like mkfs/losetup -d causing the losetup -d - * command to fail with EBUSY. - */ - if (atomic_read(&lo->lo_refcnt) > 1) { - lo->lo_flags |= LO_FLAGS_AUTOCLEAR; - mutex_unlock(&lo->lo_ctl_mutex); - return 0; + mutex_lock(&loop_ctl_mutex); + if (WARN_ON_ONCE(lo->lo_state != Lo_rundown)) { + err = -ENXIO; + goto out_unlock; } - if (filp == NULL) - return -EINVAL; + filp = lo->lo_backing_file; + if (filp == NULL) { + err = -EINVAL; + goto out_unlock; + } /* freeze request queue during the transition */ blk_mq_freeze_queue(lo->lo_queue); spin_lock_irq(&lo->lo_lock); - lo->lo_state = Lo_rundown; lo->lo_backing_file = NULL; spin_unlock_irq(&lo->lo_lock); @@ -1093,21 +1095,73 @@ static int loop_clr_fd(struct loop_device *lo) module_put(THIS_MODULE); blk_mq_unfreeze_queue(lo->lo_queue); - if (lo->lo_flags & LO_FLAGS_PARTSCAN && bdev) - loop_reread_partitions(lo, bdev); + partscan = lo->lo_flags & LO_FLAGS_PARTSCAN && bdev; + lo_number = lo->lo_number; lo->lo_flags = 0; if (!part_shift) lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN; loop_unprepare_queue(lo); - mutex_unlock(&lo->lo_ctl_mutex); +out_unlock: + mutex_unlock(&loop_ctl_mutex); + if (partscan) { + /* + * bd_mutex has been held already in release path, so don't + * acquire it if this function is called in such case. + * + * If the reread partition isn't from release path, lo_refcnt + * must be at least one and it can only become zero when the + * current holder is released. + */ + if (release) + err = __blkdev_reread_part(bdev); + else + err = blkdev_reread_part(bdev); + pr_warn("%s: partition scan of loop%d failed (rc=%d)\n", + __func__, lo_number, err); + /* Device is gone, no point in returning error */ + err = 0; + } /* - * Need not hold lo_ctl_mutex to fput backing file. - * Calling fput holding lo_ctl_mutex triggers a circular + * Need not hold loop_ctl_mutex to fput backing file. + * Calling fput holding loop_ctl_mutex triggers a circular * lock dependency possibility warning as fput can take - * bd_mutex which is usually taken before lo_ctl_mutex. + * bd_mutex which is usually taken before loop_ctl_mutex. 
*/ - fput(filp); - return 0; + if (filp) + fput(filp); + return err; +} + +static int loop_clr_fd(struct loop_device *lo) +{ + int err; + + err = mutex_lock_killable(&loop_ctl_mutex); + if (err) + return err; + if (lo->lo_state != Lo_bound) { + mutex_unlock(&loop_ctl_mutex); + return -ENXIO; + } + /* + * If we've explicitly asked to tear down the loop device, + * and it has an elevated reference count, set it for auto-teardown when + * the last reference goes away. This stops $!~#$@ udev from + * preventing teardown because it decided that it needs to run blkid on + * the loopback device whenever they appear. xfstests is notorious for + * failing tests because blkid via udev races with a losetup + * /do something like mkfs/losetup -d causing the losetup -d + * command to fail with EBUSY. + */ + if (atomic_read(&lo->lo_refcnt) > 1) { + lo->lo_flags |= LO_FLAGS_AUTOCLEAR; + mutex_unlock(&loop_ctl_mutex); + return 0; + } + lo->lo_state = Lo_rundown; + mutex_unlock(&loop_ctl_mutex); + + return __loop_clr_fd(lo, false); } static int @@ -1116,47 +1170,58 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info) int err; struct loop_func_table *xfer; kuid_t uid = current_uid(); + struct block_device *bdev; + bool partscan = false; + err = mutex_lock_killable(&loop_ctl_mutex); + if (err) + return err; if (lo->lo_encrypt_key_size && !uid_eq(lo->lo_key_owner, uid) && - !capable(CAP_SYS_ADMIN)) - return -EPERM; - if (lo->lo_state != Lo_bound) - return -ENXIO; - if ((unsigned int) info->lo_encrypt_key_size > LO_KEY_SIZE) - return -EINVAL; + !capable(CAP_SYS_ADMIN)) { + err = -EPERM; + goto out_unlock; + } + if (lo->lo_state != Lo_bound) { + err = -ENXIO; + goto out_unlock; + } + if ((unsigned int) info->lo_encrypt_key_size > LO_KEY_SIZE) { + err = -EINVAL; + goto out_unlock; + } /* I/O need to be drained during transfer transition */ blk_mq_freeze_queue(lo->lo_queue); err = loop_release_xfer(lo); if (err) - goto exit; + goto out_unfreeze; if (info->lo_encrypt_type) { unsigned int type = info->lo_encrypt_type; if (type >= MAX_LO_CRYPT) { err = -EINVAL; - goto exit; + goto out_unfreeze; } xfer = xfer_funcs[type]; if (xfer == NULL) { err = -EINVAL; - goto exit; + goto out_unfreeze; } } else xfer = NULL; err = loop_init_xfer(lo, xfer, info); if (err) - goto exit; + goto out_unfreeze; if (lo->lo_offset != info->lo_offset || lo->lo_sizelimit != info->lo_sizelimit) { if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) { err = -EFBIG; - goto exit; + goto out_unfreeze; } } @@ -1188,15 +1253,20 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info) /* update dio if lo_offset or transfer is changed */ __loop_update_dio(lo, lo->use_dio); - exit: +out_unfreeze: blk_mq_unfreeze_queue(lo->lo_queue); if (!err && (info->lo_flags & LO_FLAGS_PARTSCAN) && !(lo->lo_flags & LO_FLAGS_PARTSCAN)) { lo->lo_flags |= LO_FLAGS_PARTSCAN; lo->lo_disk->flags &= ~GENHD_FL_NO_PART_SCAN; - loop_reread_partitions(lo, lo->lo_device); + bdev = lo->lo_device; + partscan = true; } +out_unlock: + mutex_unlock(&loop_ctl_mutex); + if (partscan) + loop_reread_partitions(lo, bdev); return err; } @@ -1204,12 +1274,15 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info) static int loop_get_status(struct loop_device *lo, struct loop_info64 *info) { - struct file *file; + struct path path; struct kstat stat; int ret; + ret = mutex_lock_killable(&loop_ctl_mutex); + if (ret) + return ret; if (lo->lo_state != Lo_bound) { - mutex_unlock(&lo->lo_ctl_mutex); + 
mutex_unlock(&loop_ctl_mutex); return -ENXIO; } @@ -1228,17 +1301,17 @@ loop_get_status(struct loop_device *lo, struct loop_info64 *info) lo->lo_encrypt_key_size); } - /* Drop lo_ctl_mutex while we call into the filesystem. */ - file = get_file(lo->lo_backing_file); - mutex_unlock(&lo->lo_ctl_mutex); - ret = vfs_getattr(&file->f_path, &stat, STATX_INO, - AT_STATX_SYNC_AS_STAT); + /* Drop loop_ctl_mutex while we call into the filesystem. */ + path = lo->lo_backing_file->f_path; + path_get(&path); + mutex_unlock(&loop_ctl_mutex); + ret = vfs_getattr(&path, &stat, STATX_INO, AT_STATX_SYNC_AS_STAT); if (!ret) { info->lo_device = huge_encode_dev(stat.dev); info->lo_inode = stat.ino; info->lo_rdevice = huge_encode_dev(stat.rdev); } - fput(file); + path_put(&path); return ret; } @@ -1322,10 +1395,8 @@ loop_get_status_old(struct loop_device *lo, struct loop_info __user *arg) { struct loop_info64 info64; int err; - if (!arg) { - mutex_unlock(&lo->lo_ctl_mutex); + if (!arg) return -EINVAL; - } err = loop_get_status(lo, &info64); if (!err) err = loop_info64_to_old(&info64, &info); @@ -1340,10 +1411,8 @@ loop_get_status64(struct loop_device *lo, struct loop_info64 __user *arg) { struct loop_info64 info64; int err; - if (!arg) { - mutex_unlock(&lo->lo_ctl_mutex); + if (!arg) return -EINVAL; - } err = loop_get_status(lo, &info64); if (!err && copy_to_user(arg, &info64, sizeof(info64))) err = -EFAULT; @@ -1393,70 +1462,73 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg) return 0; } +static int lo_simple_ioctl(struct loop_device *lo, unsigned int cmd, + unsigned long arg) +{ + int err; + + err = mutex_lock_killable(&loop_ctl_mutex); + if (err) + return err; + switch (cmd) { + case LOOP_SET_CAPACITY: + err = loop_set_capacity(lo); + break; + case LOOP_SET_DIRECT_IO: + err = loop_set_dio(lo, arg); + break; + case LOOP_SET_BLOCK_SIZE: + err = loop_set_block_size(lo, arg); + break; + default: + err = lo->ioctl ? 
lo->ioctl(lo, cmd, arg) : -EINVAL; + } + mutex_unlock(&loop_ctl_mutex); + return err; +} + static int lo_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, unsigned long arg) { struct loop_device *lo = bdev->bd_disk->private_data; int err; - err = mutex_lock_killable_nested(&lo->lo_ctl_mutex, 1); - if (err) - goto out_unlocked; - switch (cmd) { case LOOP_SET_FD: - err = loop_set_fd(lo, mode, bdev, arg); - break; + return loop_set_fd(lo, mode, bdev, arg); case LOOP_CHANGE_FD: - err = loop_change_fd(lo, bdev, arg); - break; + return loop_change_fd(lo, bdev, arg); case LOOP_CLR_FD: - /* loop_clr_fd would have unlocked lo_ctl_mutex on success */ - err = loop_clr_fd(lo); - if (!err) - goto out_unlocked; - break; + return loop_clr_fd(lo); case LOOP_SET_STATUS: err = -EPERM; - if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) + if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) { err = loop_set_status_old(lo, (struct loop_info __user *)arg); + } break; case LOOP_GET_STATUS: - err = loop_get_status_old(lo, (struct loop_info __user *) arg); - /* loop_get_status() unlocks lo_ctl_mutex */ - goto out_unlocked; + return loop_get_status_old(lo, (struct loop_info __user *) arg); case LOOP_SET_STATUS64: err = -EPERM; - if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) + if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) { err = loop_set_status64(lo, (struct loop_info64 __user *) arg); + } break; case LOOP_GET_STATUS64: - err = loop_get_status64(lo, (struct loop_info64 __user *) arg); - /* loop_get_status() unlocks lo_ctl_mutex */ - goto out_unlocked; + return loop_get_status64(lo, (struct loop_info64 __user *) arg); case LOOP_SET_CAPACITY: - err = -EPERM; - if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) - err = loop_set_capacity(lo); - break; case LOOP_SET_DIRECT_IO: - err = -EPERM; - if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) - err = loop_set_dio(lo, arg); - break; case LOOP_SET_BLOCK_SIZE: - err = -EPERM; - if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) - err = loop_set_block_size(lo, arg); - break; + if (!(mode & FMODE_WRITE) && !capable(CAP_SYS_ADMIN)) + return -EPERM; + /* Fall through */ default: - err = lo->ioctl ? 
lo->ioctl(lo, cmd, arg) : -EINVAL; + err = lo_simple_ioctl(lo, cmd, arg); + break; } - mutex_unlock(&lo->lo_ctl_mutex); -out_unlocked: return err; } @@ -1570,10 +1642,8 @@ loop_get_status_compat(struct loop_device *lo, struct loop_info64 info64; int err; - if (!arg) { - mutex_unlock(&lo->lo_ctl_mutex); + if (!arg) return -EINVAL; - } err = loop_get_status(lo, &info64); if (!err) err = loop_info64_to_compat(&info64, arg); @@ -1588,20 +1658,12 @@ static int lo_compat_ioctl(struct block_device *bdev, fmode_t mode, switch(cmd) { case LOOP_SET_STATUS: - err = mutex_lock_killable(&lo->lo_ctl_mutex); - if (!err) { - err = loop_set_status_compat(lo, - (const struct compat_loop_info __user *)arg); - mutex_unlock(&lo->lo_ctl_mutex); - } + err = loop_set_status_compat(lo, + (const struct compat_loop_info __user *)arg); break; case LOOP_GET_STATUS: - err = mutex_lock_killable(&lo->lo_ctl_mutex); - if (!err) { - err = loop_get_status_compat(lo, - (struct compat_loop_info __user *)arg); - /* loop_get_status() unlocks lo_ctl_mutex */ - } + err = loop_get_status_compat(lo, + (struct compat_loop_info __user *)arg); break; case LOOP_SET_CAPACITY: case LOOP_CLR_FD: @@ -1625,9 +1687,11 @@ static int lo_compat_ioctl(struct block_device *bdev, fmode_t mode, static int lo_open(struct block_device *bdev, fmode_t mode) { struct loop_device *lo; - int err = 0; + int err; - mutex_lock(&loop_index_mutex); + err = mutex_lock_killable(&loop_ctl_mutex); + if (err) + return err; lo = bdev->bd_disk->private_data; if (!lo) { err = -ENXIO; @@ -1636,26 +1700,30 @@ static int lo_open(struct block_device *bdev, fmode_t mode) atomic_inc(&lo->lo_refcnt); out: - mutex_unlock(&loop_index_mutex); + mutex_unlock(&loop_ctl_mutex); return err; } -static void __lo_release(struct loop_device *lo) +static void lo_release(struct gendisk *disk, fmode_t mode) { - int err; + struct loop_device *lo; + mutex_lock(&loop_ctl_mutex); + lo = disk->private_data; if (atomic_dec_return(&lo->lo_refcnt)) - return; + goto out_unlock; - mutex_lock(&lo->lo_ctl_mutex); if (lo->lo_flags & LO_FLAGS_AUTOCLEAR) { + if (lo->lo_state != Lo_bound) + goto out_unlock; + lo->lo_state = Lo_rundown; + mutex_unlock(&loop_ctl_mutex); /* * In autoclear mode, stop the loop thread * and remove configuration after last close. 
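		 * (The rework above leans on lo_state rather than the mutex
		 * here: lo_release() marks the device Lo_rundown and drops
		 * loop_ctl_mutex before __loop_clr_fd() runs, so it is the
		 * rundown state, not the lock, that keeps concurrent open()
		 * and ioctl() callers away during teardown.)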
*/ - err = loop_clr_fd(lo); - if (!err) - return; + __loop_clr_fd(lo, true); + return; } else if (lo->lo_state == Lo_bound) { /* * Otherwise keep thread (if running) and config, @@ -1665,14 +1733,8 @@ static void __lo_release(struct loop_device *lo) blk_mq_unfreeze_queue(lo->lo_queue); } - mutex_unlock(&lo->lo_ctl_mutex); -} - -static void lo_release(struct gendisk *disk, fmode_t mode) -{ - mutex_lock(&loop_index_mutex); - __lo_release(disk->private_data); - mutex_unlock(&loop_index_mutex); +out_unlock: + mutex_unlock(&loop_ctl_mutex); } static const struct block_device_operations lo_fops = { @@ -1711,10 +1773,10 @@ static int unregister_transfer_cb(int id, void *ptr, void *data) struct loop_device *lo = ptr; struct loop_func_table *xfer = data; - mutex_lock(&lo->lo_ctl_mutex); + mutex_lock(&loop_ctl_mutex); if (lo->lo_encryption == xfer) loop_release_xfer(lo); - mutex_unlock(&lo->lo_ctl_mutex); + mutex_unlock(&loop_ctl_mutex); return 0; } @@ -1759,8 +1821,8 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx, /* always use the first bio's css */ #ifdef CONFIG_BLK_CGROUP - if (cmd->use_aio && rq->bio && rq->bio->bi_css) { - cmd->css = rq->bio->bi_css; + if (cmd->use_aio && rq->bio && rq->bio->bi_blkg) { + cmd->css = &bio_blkcg(rq->bio)->css; css_get(cmd->css); } else #endif @@ -1853,7 +1915,7 @@ static int loop_add(struct loop_device **l, int i) goto out_free_idr; lo->lo_queue = blk_mq_init_queue(&lo->tag_set); - if (IS_ERR_OR_NULL(lo->lo_queue)) { + if (IS_ERR(lo->lo_queue)) { err = PTR_ERR(lo->lo_queue); goto out_cleanup_tags; } @@ -1895,7 +1957,6 @@ static int loop_add(struct loop_device **l, int i) if (!part_shift) disk->flags |= GENHD_FL_NO_PART_SCAN; disk->flags |= GENHD_FL_EXT_DEVT; - mutex_init(&lo->lo_ctl_mutex); atomic_set(&lo->lo_refcnt, 0); lo->lo_number = i; spin_lock_init(&lo->lo_lock); @@ -1974,7 +2035,7 @@ static struct kobject *loop_probe(dev_t dev, int *part, void *data) struct kobject *kobj; int err; - mutex_lock(&loop_index_mutex); + mutex_lock(&loop_ctl_mutex); err = loop_lookup(&lo, MINOR(dev) >> part_shift); if (err < 0) err = loop_add(&lo, MINOR(dev) >> part_shift); @@ -1982,7 +2043,7 @@ static struct kobject *loop_probe(dev_t dev, int *part, void *data) kobj = NULL; else kobj = get_disk_and_module(lo->lo_disk); - mutex_unlock(&loop_index_mutex); + mutex_unlock(&loop_ctl_mutex); *part = 0; return kobj; @@ -1992,9 +2053,13 @@ static long loop_control_ioctl(struct file *file, unsigned int cmd, unsigned long parm) { struct loop_device *lo; - int ret = -ENOSYS; + int ret; - mutex_lock(&loop_index_mutex); + ret = mutex_lock_killable(&loop_ctl_mutex); + if (ret) + return ret; + + ret = -ENOSYS; switch (cmd) { case LOOP_CTL_ADD: ret = loop_lookup(&lo, parm); @@ -2008,21 +2073,15 @@ static long loop_control_ioctl(struct file *file, unsigned int cmd, ret = loop_lookup(&lo, parm); if (ret < 0) break; - ret = mutex_lock_killable(&lo->lo_ctl_mutex); - if (ret) - break; if (lo->lo_state != Lo_unbound) { ret = -EBUSY; - mutex_unlock(&lo->lo_ctl_mutex); break; } if (atomic_read(&lo->lo_refcnt) > 0) { ret = -EBUSY; - mutex_unlock(&lo->lo_ctl_mutex); break; } lo->lo_disk->private_data = NULL; - mutex_unlock(&lo->lo_ctl_mutex); idr_remove(&loop_index_idr, lo->lo_number); loop_remove(lo); break; @@ -2032,7 +2091,7 @@ static long loop_control_ioctl(struct file *file, unsigned int cmd, break; ret = loop_add(&lo, -1); } - mutex_unlock(&loop_index_mutex); + mutex_unlock(&loop_ctl_mutex); return ret; } @@ -2116,10 +2175,10 @@ static int __init loop_init(void) THIS_MODULE, 
loop_probe, NULL, NULL); /* pre-create number of devices given by config or max_loop */ - mutex_lock(&loop_index_mutex); + mutex_lock(&loop_ctl_mutex); for (i = 0; i < nr; i++) loop_add(&lo, i); - mutex_unlock(&loop_index_mutex); + mutex_unlock(&loop_ctl_mutex); printk(KERN_INFO "loop: module loaded\n"); return 0; diff --git a/drivers/block/loop.h b/drivers/block/loop.h index 4d42c7af7de7..af75a5ee4094 100644 --- a/drivers/block/loop.h +++ b/drivers/block/loop.h @@ -54,7 +54,6 @@ struct loop_device { spinlock_t lo_lock; int lo_state; - struct mutex lo_ctl_mutex; struct kthread_worker worker; struct task_struct *worker_task; bool use_dio; diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c index a7daa8acbab3..88e8440e75c3 100644 --- a/drivers/block/mtip32xx/mtip32xx.c +++ b/drivers/block/mtip32xx/mtip32xx.c @@ -168,41 +168,6 @@ static bool mtip_check_surprise_removal(struct pci_dev *pdev) return false; /* device present */ } -/* we have to use runtime tag to setup command header */ -static void mtip_init_cmd_header(struct request *rq) -{ - struct driver_data *dd = rq->q->queuedata; - struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq); - - /* Point the command headers at the command tables. */ - cmd->command_header = dd->port->command_list + - (sizeof(struct mtip_cmd_hdr) * rq->tag); - cmd->command_header_dma = dd->port->command_list_dma + - (sizeof(struct mtip_cmd_hdr) * rq->tag); - - if (test_bit(MTIP_PF_HOST_CAP_64, &dd->port->flags)) - cmd->command_header->ctbau = __force_bit2int cpu_to_le32((cmd->command_dma >> 16) >> 16); - - cmd->command_header->ctba = __force_bit2int cpu_to_le32(cmd->command_dma & 0xFFFFFFFF); -} - -static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd) -{ - struct request *rq; - - if (mtip_check_surprise_removal(dd->pdev)) - return NULL; - - rq = blk_mq_alloc_request(dd->queue, REQ_OP_DRV_IN, BLK_MQ_REQ_RESERVED); - if (IS_ERR(rq)) - return NULL; - - /* Internal cmd isn't submitted via .queue_rq */ - mtip_init_cmd_header(rq); - - return blk_mq_rq_to_pdu(rq); -} - static struct mtip_cmd *mtip_cmd_from_tag(struct driver_data *dd, unsigned int tag) { @@ -1023,13 +988,14 @@ static int mtip_exec_internal_command(struct mtip_port *port, return -EFAULT; } - int_cmd = mtip_get_int_command(dd); - if (!int_cmd) { + if (mtip_check_surprise_removal(dd->pdev)) + return -EFAULT; + + rq = blk_mq_alloc_request(dd->queue, REQ_OP_DRV_IN, BLK_MQ_REQ_RESERVED); + if (IS_ERR(rq)) { dbg_printk(MTIP_DRV_NAME "Unable to allocate tag for PIO cmd\n"); return -EFAULT; } - rq = blk_mq_rq_from_pdu(int_cmd); - rq->special = &icmd; set_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags); @@ -1050,6 +1016,8 @@ static int mtip_exec_internal_command(struct mtip_port *port, } /* Copy the command to the command table */ + int_cmd = blk_mq_rq_to_pdu(rq); + int_cmd->icmd = &icmd; memcpy(int_cmd->command, fis, fis_len*4); rq->timeout = timeout; @@ -1423,23 +1391,19 @@ static int mtip_get_smart_attr(struct mtip_port *port, unsigned int id, * @dd pointer to driver_data structure * @lba starting lba * @len # of 512b sectors to trim - * - * return value - * -ENOMEM Out of dma memory - * -EINVAL Invalid parameters passed in, trim not supported - * -EIO Error submitting trim request to hw */ -static int mtip_send_trim(struct driver_data *dd, unsigned int lba, - unsigned int len) +static blk_status_t mtip_send_trim(struct driver_data *dd, unsigned int lba, + unsigned int len) { - int i, rv = 0; u64 tlba, tlen, sect_left; struct mtip_trim_entry *buf; dma_addr_t dma_addr; struct 
host_to_dev_fis fis; + blk_status_t ret = BLK_STS_OK; + int i; if (!len || dd->trim_supp == false) - return -EINVAL; + return BLK_STS_IOERR; /* Trim request too big */ WARN_ON(len > (MTIP_MAX_TRIM_ENTRY_LEN * MTIP_MAX_TRIM_ENTRIES)); @@ -1454,7 +1418,7 @@ static int mtip_send_trim(struct driver_data *dd, unsigned int lba, buf = dmam_alloc_coherent(&dd->pdev->dev, ATA_SECT_SIZE, &dma_addr, GFP_KERNEL); if (!buf) - return -ENOMEM; + return BLK_STS_RESOURCE; memset(buf, 0, ATA_SECT_SIZE); for (i = 0, sect_left = len, tlba = lba; @@ -1463,8 +1427,8 @@ static int mtip_send_trim(struct driver_data *dd, unsigned int lba, tlen = (sect_left >= MTIP_MAX_TRIM_ENTRY_LEN ? MTIP_MAX_TRIM_ENTRY_LEN : sect_left); - buf[i].lba = __force_bit2int cpu_to_le32(tlba); - buf[i].range = __force_bit2int cpu_to_le16(tlen); + buf[i].lba = cpu_to_le32(tlba); + buf[i].range = cpu_to_le16(tlen); tlba += tlen; sect_left -= tlen; } @@ -1486,10 +1450,10 @@ static int mtip_send_trim(struct driver_data *dd, unsigned int lba, ATA_SECT_SIZE, 0, MTIP_TRIM_TIMEOUT_MS) < 0) - rv = -EIO; + ret = BLK_STS_IOERR; dmam_free_coherent(&dd->pdev->dev, ATA_SECT_SIZE, buf, dma_addr); - return rv; + return ret; } /* @@ -1585,23 +1549,20 @@ static inline void fill_command_sg(struct driver_data *dd, int n; unsigned int dma_len; struct mtip_cmd_sg *command_sg; - struct scatterlist *sg = command->sg; + struct scatterlist *sg; command_sg = command->command + AHCI_CMD_TBL_HDR_SZ; - for (n = 0; n < nents; n++) { + for_each_sg(command->sg, sg, nents, n) { dma_len = sg_dma_len(sg); if (dma_len > 0x400000) dev_err(&dd->pdev->dev, "DMA segment length truncated\n"); - command_sg->info = __force_bit2int - cpu_to_le32((dma_len-1) & 0x3FFFFF); - command_sg->dba = __force_bit2int - cpu_to_le32(sg_dma_address(sg)); - command_sg->dba_upper = __force_bit2int + command_sg->info = cpu_to_le32((dma_len-1) & 0x3FFFFF); + command_sg->dba = cpu_to_le32(sg_dma_address(sg)); + command_sg->dba_upper = cpu_to_le32((sg_dma_address(sg) >> 16) >> 16); command_sg++; - sg++; } } @@ -2171,7 +2132,6 @@ static int mtip_hw_ioctl(struct driver_data *dd, unsigned int cmd, * @dd Pointer to the driver data structure. * @start First sector to read. * @nsect Number of sectors to read. - * @nents Number of entries in scatter list for the read command. * @tag The tag of this read command. * @callback Pointer to the function that should be called * when the read completes. @@ -2183,16 +2143,20 @@ static int mtip_hw_ioctl(struct driver_data *dd, unsigned int cmd, * None */ static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq, - struct mtip_cmd *command, int nents, + struct mtip_cmd *command, struct blk_mq_hw_ctx *hctx) { + struct mtip_cmd_hdr *hdr = + dd->port->command_list + sizeof(struct mtip_cmd_hdr) * rq->tag; struct host_to_dev_fis *fis; struct mtip_port *port = dd->port; int dma_dir = rq_data_dir(rq) == READ ? 
DMA_FROM_DEVICE : DMA_TO_DEVICE; u64 start = blk_rq_pos(rq); unsigned int nsect = blk_rq_sectors(rq); + unsigned int nents; /* Map the scatter list for DMA access */ + nents = blk_rq_map_sg(hctx->queue, rq, command->sg); nents = dma_map_sg(&dd->pdev->dev, command->sg, nents, dma_dir); prefetch(&port->flags); @@ -2233,10 +2197,11 @@ static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq, fis->device |= 1 << 7; /* Populate the command header */ - command->command_header->opts = - __force_bit2int cpu_to_le32( - (nents << 16) | 5 | AHCI_CMD_PREFETCH); - command->command_header->byte_count = 0; + hdr->ctba = cpu_to_le32(command->command_dma & 0xFFFFFFFF); + if (test_bit(MTIP_PF_HOST_CAP_64, &dd->port->flags)) + hdr->ctbau = cpu_to_le32((command->command_dma >> 16) >> 16); + hdr->opts = cpu_to_le32((nents << 16) | 5 | AHCI_CMD_PREFETCH); + hdr->byte_count = 0; command->direction = dma_dir; @@ -2715,12 +2680,12 @@ static void mtip_softirq_done_fn(struct request *rq) cmd->direction); if (unlikely(cmd->unaligned)) - up(&dd->port->cmd_slot_unal); + atomic_inc(&dd->port->cmd_slot_unal); blk_mq_end_request(rq, cmd->status); } -static void mtip_abort_cmd(struct request *req, void *data, bool reserved) +static bool mtip_abort_cmd(struct request *req, void *data, bool reserved) { struct mtip_cmd *cmd = blk_mq_rq_to_pdu(req); struct driver_data *dd = data; @@ -2730,14 +2695,16 @@ static void mtip_abort_cmd(struct request *req, void *data, bool reserved) clear_bit(req->tag, dd->port->cmds_to_issue); cmd->status = BLK_STS_IOERR; mtip_softirq_done_fn(req); + return true; } -static void mtip_queue_cmd(struct request *req, void *data, bool reserved) +static bool mtip_queue_cmd(struct request *req, void *data, bool reserved) { struct driver_data *dd = data; set_bit(req->tag, dd->port->cmds_to_issue); blk_abort_request(req); + return true; } /* @@ -2803,10 +2770,7 @@ restart_eh: blk_mq_quiesce_queue(dd->queue); - spin_lock(dd->queue->queue_lock); - blk_mq_tagset_busy_iter(&dd->tags, - mtip_queue_cmd, dd); - spin_unlock(dd->queue->queue_lock); + blk_mq_tagset_busy_iter(&dd->tags, mtip_queue_cmd, dd); set_bit(MTIP_PF_ISSUE_CMDS_BIT, &dd->port->flags); @@ -3026,7 +2990,7 @@ static int mtip_hw_init(struct driver_data *dd) else dd->unal_qdepth = 0; - sema_init(&dd->port->cmd_slot_unal, dd->unal_qdepth); + atomic_set(&dd->port->cmd_slot_unal, dd->unal_qdepth); /* Spinlock to prevent concurrent issue */ for (i = 0; i < MTIP_MAX_SLOT_GROUPS; i++) @@ -3531,58 +3495,24 @@ static inline bool is_se_active(struct driver_data *dd) return false; } -/* - * Block layer make request function. - * - * This function is called by the kernel to process a BIO for - * the P320 device. - * - * @queue Pointer to the request queue. Unused other than to obtain - * the driver data structure. - * @rq Pointer to the request. 
- * - */ -static int mtip_submit_request(struct blk_mq_hw_ctx *hctx, struct request *rq) +static inline bool is_stopped(struct driver_data *dd, struct request *rq) { - struct driver_data *dd = hctx->queue->queuedata; - struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq); - unsigned int nents; - - if (is_se_active(dd)) - return -ENODATA; - - if (unlikely(dd->dd_flag & MTIP_DDF_STOP_IO)) { - if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, - &dd->dd_flag))) { - return -ENXIO; - } - if (unlikely(test_bit(MTIP_DDF_OVER_TEMP_BIT, &dd->dd_flag))) { - return -ENODATA; - } - if (unlikely(test_bit(MTIP_DDF_WRITE_PROTECT_BIT, - &dd->dd_flag) && - rq_data_dir(rq))) { - return -ENODATA; - } - if (unlikely(test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag) || - test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag))) - return -ENODATA; - } - - if (req_op(rq) == REQ_OP_DISCARD) { - int err; - - err = mtip_send_trim(dd, blk_rq_pos(rq), blk_rq_sectors(rq)); - blk_mq_end_request(rq, err ? BLK_STS_IOERR : BLK_STS_OK); - return 0; - } + if (likely(!(dd->dd_flag & MTIP_DDF_STOP_IO))) + return false; - /* Create the scatter list for this request. */ - nents = blk_rq_map_sg(hctx->queue, rq, cmd->sg); + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag)) + return true; + if (test_bit(MTIP_DDF_OVER_TEMP_BIT, &dd->dd_flag)) + return true; + if (test_bit(MTIP_DDF_WRITE_PROTECT_BIT, &dd->dd_flag) && + rq_data_dir(rq)) + return true; + if (test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag)) + return true; + if (test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag)) + return true; - /* Issue the read/write. */ - mtip_hw_submit_io(dd, rq, cmd, nents, hctx); - return 0; + return false; } static bool mtip_check_unal_depth(struct blk_mq_hw_ctx *hctx, @@ -3603,7 +3533,7 @@ static bool mtip_check_unal_depth(struct blk_mq_hw_ctx *hctx, cmd->unaligned = 1; } - if (cmd->unaligned && down_trylock(&dd->port->cmd_slot_unal)) + if (cmd->unaligned && atomic_dec_if_positive(&dd->port->cmd_slot_unal) >= 0) return true; return false; @@ -3613,32 +3543,33 @@ static blk_status_t mtip_issue_reserved_cmd(struct blk_mq_hw_ctx *hctx, struct request *rq) { struct driver_data *dd = hctx->queue->queuedata; - struct mtip_int_cmd *icmd = rq->special; struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq); + struct mtip_int_cmd *icmd = cmd->icmd; + struct mtip_cmd_hdr *hdr = + dd->port->command_list + sizeof(struct mtip_cmd_hdr) * rq->tag; struct mtip_cmd_sg *command_sg; if (mtip_commands_active(dd->port)) - return BLK_STS_RESOURCE; + return BLK_STS_DEV_RESOURCE; + hdr->ctba = cpu_to_le32(cmd->command_dma & 0xFFFFFFFF); + if (test_bit(MTIP_PF_HOST_CAP_64, &dd->port->flags)) + hdr->ctbau = cpu_to_le32((cmd->command_dma >> 16) >> 16); /* Populate the SG list */ - cmd->command_header->opts = - __force_bit2int cpu_to_le32(icmd->opts | icmd->fis_len); + hdr->opts = cpu_to_le32(icmd->opts | icmd->fis_len); if (icmd->buf_len) { command_sg = cmd->command + AHCI_CMD_TBL_HDR_SZ; - command_sg->info = - __force_bit2int cpu_to_le32((icmd->buf_len-1) & 0x3FFFFF); - command_sg->dba = - __force_bit2int cpu_to_le32(icmd->buffer & 0xFFFFFFFF); + command_sg->info = cpu_to_le32((icmd->buf_len-1) & 0x3FFFFF); + command_sg->dba = cpu_to_le32(icmd->buffer & 0xFFFFFFFF); command_sg->dba_upper = - __force_bit2int cpu_to_le32((icmd->buffer >> 16) >> 16); + cpu_to_le32((icmd->buffer >> 16) >> 16); - cmd->command_header->opts |= - __force_bit2int cpu_to_le32((1 << 16)); + hdr->opts |= cpu_to_le32((1 << 16)); } /* Populate the command header */ - cmd->command_header->byte_count = 0; + hdr->byte_count 
= 0; blk_mq_start_request(rq); mtip_issue_non_ncq_command(dd->port, rq->tag); @@ -3648,23 +3579,25 @@ static blk_status_t mtip_issue_reserved_cmd(struct blk_mq_hw_ctx *hctx, static blk_status_t mtip_queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd) { + struct driver_data *dd = hctx->queue->queuedata; struct request *rq = bd->rq; - int ret; - - mtip_init_cmd_header(rq); + struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq); if (blk_rq_is_passthrough(rq)) return mtip_issue_reserved_cmd(hctx, rq); if (unlikely(mtip_check_unal_depth(hctx, rq))) - return BLK_STS_RESOURCE; + return BLK_STS_DEV_RESOURCE; + + if (is_se_active(dd) || is_stopped(dd, rq)) + return BLK_STS_IOERR; blk_mq_start_request(rq); - ret = mtip_submit_request(hctx, rq); - if (likely(!ret)) - return BLK_STS_OK; - return BLK_STS_IOERR; + if (req_op(rq) == REQ_OP_DISCARD) + return mtip_send_trim(dd, blk_rq_pos(rq), blk_rq_sectors(rq)); + mtip_hw_submit_io(dd, rq, cmd, hctx); + return BLK_STS_OK; } static void mtip_free_cmd(struct blk_mq_tag_set *set, struct request *rq, @@ -3920,12 +3853,13 @@ protocol_init_error: return rv; } -static void mtip_no_dev_cleanup(struct request *rq, void *data, bool reserv) +static bool mtip_no_dev_cleanup(struct request *rq, void *data, bool reserv) { struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq); cmd->status = BLK_STS_IOERR; blk_mq_complete_request(rq); + return true; } /* diff --git a/drivers/block/mtip32xx/mtip32xx.h b/drivers/block/mtip32xx/mtip32xx.h index e20e55dab443..abce25f27f57 100644 --- a/drivers/block/mtip32xx/mtip32xx.h +++ b/drivers/block/mtip32xx/mtip32xx.h @@ -126,8 +126,6 @@ #define MTIP_DFS_MAX_BUF_SIZE 1024 -#define __force_bit2int (unsigned int __force) - enum { /* below are bit numbers in 'flags' defined in mtip_port */ MTIP_PF_IC_ACTIVE_BIT = 0, /* pio/ioctl */ @@ -174,10 +172,10 @@ enum { struct smart_attr { u8 attr_id; - u16 flags; + __le16 flags; u8 cur; u8 worst; - u32 data; + __le32 data; u8 res[3]; } __packed; @@ -200,9 +198,9 @@ struct mtip_work { #define MTIP_MAX_TRIM_ENTRY_LEN 0xfff8 struct mtip_trim_entry { - u32 lba; /* starting lba of region */ - u16 rsvd; /* unused */ - u16 range; /* # of 512b blocks to trim */ + __le32 lba; /* starting lba of region */ + __le16 rsvd; /* unused */ + __le16 range; /* # of 512b blocks to trim */ } __packed; struct mtip_trim { @@ -278,24 +276,24 @@ struct mtip_cmd_hdr { * - Bit 5 Unused in this implementation. * - Bits 4:0 Length of the command FIS in DWords (DWord = 4 bytes). */ - unsigned int opts; + __le32 opts; /* This field is unsed when using NCQ. */ union { - unsigned int byte_count; - unsigned int status; + __le32 byte_count; + __le32 status; }; /* * Lower 32 bits of the command table address associated with this * header. The command table addresses must be 128 byte aligned. */ - unsigned int ctba; + __le32 ctba; /* * If 64 bit addressing is used this field is the upper 32 bits * of the command table address associated with this command. */ - unsigned int ctbau; + __le32 ctbau; /* Reserved and unused. */ - unsigned int res[4]; + u32 res[4]; }; /* Command scatter gather structure (PRD). */ @@ -305,31 +303,28 @@ struct mtip_cmd_sg { * address must be 8 byte aligned signified by bits 2:0 being * set to 0. */ - unsigned int dba; + __le32 dba; /* * When 64 bit addressing is used this field is the upper * 32 bits of the data buffer address. */ - unsigned int dba_upper; + __le32 dba_upper; /* Unused. */ - unsigned int reserved; + __le32 reserved; /* * Bit 31: interrupt when this data block has been transferred. 
* Bits 30..22: reserved * Bits 21..0: byte count (minus 1). For P320 the byte count must be * 8 byte aligned signified by bits 2:0 being set to 1. */ - unsigned int info; + __le32 info; }; struct mtip_port; +struct mtip_int_cmd; + /* Structure used to describe a command. */ struct mtip_cmd { - - struct mtip_cmd_hdr *command_header; /* ptr to command header entry */ - - dma_addr_t command_header_dma; /* corresponding physical address */ - void *command; /* ptr to command table entry */ dma_addr_t command_dma; /* corresponding physical address */ @@ -338,7 +333,10 @@ struct mtip_cmd { int unaligned; /* command is unaligned on 4k boundary */ - struct scatterlist sg[MTIP_MAX_SG]; /* Scatter list entries */ + union { + struct scatterlist sg[MTIP_MAX_SG]; /* Scatter list entries */ + struct mtip_int_cmd *icmd; + }; int retries; /* The number of retries left for this command. */ @@ -435,8 +433,8 @@ struct mtip_port { */ unsigned long ic_pause_timer; - /* Semaphore to control queue depth of unaligned IOs */ - struct semaphore cmd_slot_unal; + /* Counter to control queue depth of unaligned IOs */ + atomic_t cmd_slot_unal; /* Spinlock for working around command-issue bug. */ spinlock_t cmd_issue_lock[MTIP_MAX_SLOT_GROUPS]; diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 4d4d6129ff66..08696f5f00bb 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -734,12 +734,13 @@ static void recv_work(struct work_struct *work) kfree(args); } -static void nbd_clear_req(struct request *req, void *data, bool reserved) +static bool nbd_clear_req(struct request *req, void *data, bool reserved) { struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req); cmd->status = BLK_STS_IOERR; blk_mq_complete_request(req); + return true; } static void nbd_clear_que(struct nbd_device *nbd) diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h index 7685df43f1ef..b3df2793e7cd 100644 --- a/drivers/block/null_blk.h +++ b/drivers/block/null_blk.h @@ -49,6 +49,7 @@ struct nullb_device { unsigned long completion_nsec; /* time in ns to complete a request */ unsigned long cache_size; /* disk cache size in MB */ unsigned long zone_size; /* zone size in MB if device is zoned */ + unsigned int zone_nr_conv; /* number of conventional zones */ unsigned int submit_queues; /* number of submission queues */ unsigned int home_node; /* home node for the device */ unsigned int queue_mode; /* block interface */ diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index 09339203dfba..62c9654b9ce8 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -188,6 +188,10 @@ static unsigned long g_zone_size = 256; module_param_named(zone_size, g_zone_size, ulong, S_IRUGO); MODULE_PARM_DESC(zone_size, "Zone size in MB when block device is zoned. Must be power-of-two: Default: 256"); +static unsigned int g_zone_nr_conv; +module_param_named(zone_nr_conv, g_zone_nr_conv, uint, 0444); +MODULE_PARM_DESC(zone_nr_conv, "Number of conventional zones when block device is zoned. 
Default: 0"); + static struct nullb_device *null_alloc_dev(void); static void null_free_dev(struct nullb_device *dev); static void null_del_dev(struct nullb *nullb); @@ -293,6 +297,7 @@ NULLB_DEVICE_ATTR(mbps, uint); NULLB_DEVICE_ATTR(cache_size, ulong); NULLB_DEVICE_ATTR(zoned, bool); NULLB_DEVICE_ATTR(zone_size, ulong); +NULLB_DEVICE_ATTR(zone_nr_conv, uint); static ssize_t nullb_device_power_show(struct config_item *item, char *page) { @@ -407,6 +412,7 @@ static struct configfs_attribute *nullb_device_attrs[] = { &nullb_device_attr_badblocks, &nullb_device_attr_zoned, &nullb_device_attr_zone_size, + &nullb_device_attr_zone_nr_conv, NULL, }; @@ -520,6 +526,7 @@ static struct nullb_device *null_alloc_dev(void) dev->use_per_node_hctx = g_use_per_node_hctx; dev->zoned = g_zoned; dev->zone_size = g_zone_size; + dev->zone_nr_conv = g_zone_nr_conv; return dev; } @@ -635,14 +642,9 @@ static void null_cmd_end_timer(struct nullb_cmd *cmd) hrtimer_start(&cmd->timer, kt, HRTIMER_MODE_REL); } -static void null_softirq_done_fn(struct request *rq) +static void null_complete_rq(struct request *rq) { - struct nullb *nullb = rq->q->queuedata; - - if (nullb->dev->queue_mode == NULL_Q_MQ) - end_cmd(blk_mq_rq_to_pdu(rq)); - else - end_cmd(rq->special); + end_cmd(blk_mq_rq_to_pdu(rq)); } static struct nullb_page *null_alloc_page(gfp_t gfp_flags) @@ -1350,7 +1352,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx, static const struct blk_mq_ops null_mq_ops = { .queue_rq = null_queue_rq, - .complete = null_softirq_done_fn, + .complete = null_complete_rq, .timeout = null_timeout_rq, }; @@ -1657,8 +1659,7 @@ static int null_add_dev(struct nullb_device *dev) } null_init_queues(nullb); } else if (dev->queue_mode == NULL_Q_BIO) { - nullb->q = blk_alloc_queue_node(GFP_KERNEL, dev->home_node, - NULL); + nullb->q = blk_alloc_queue_node(GFP_KERNEL, dev->home_node); if (!nullb->q) { rv = -ENOMEM; goto out_cleanup_queues; diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c index c0b0e4a3fa8f..5d1c261a2cfd 100644 --- a/drivers/block/null_blk_zoned.c +++ b/drivers/block/null_blk_zoned.c @@ -29,7 +29,25 @@ int null_zone_init(struct nullb_device *dev) if (!dev->zones) return -ENOMEM; - for (i = 0; i < dev->nr_zones; i++) { + if (dev->zone_nr_conv >= dev->nr_zones) { + dev->zone_nr_conv = dev->nr_zones - 1; + pr_info("null_blk: changed the number of conventional zones to %u", + dev->zone_nr_conv); + } + + for (i = 0; i < dev->zone_nr_conv; i++) { + struct blk_zone *zone = &dev->zones[i]; + + zone->start = sector; + zone->len = dev->zone_size_sects; + zone->wp = zone->start + zone->len; + zone->type = BLK_ZONE_TYPE_CONVENTIONAL; + zone->cond = BLK_ZONE_COND_NOT_WP; + + sector += dev->zone_size_sects; + } + + for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) { struct blk_zone *zone = &dev->zones[i]; zone->start = zone->wp = sector; @@ -98,6 +116,8 @@ void null_zone_write(struct nullb_cmd *cmd, sector_t sector, if (zone->wp == zone->start + zone->len) zone->cond = BLK_ZONE_COND_FULL; break; + case BLK_ZONE_COND_NOT_WP: + break; default: /* Invalid zone condition */ cmd->error = BLK_STS_IOERR; @@ -111,6 +131,11 @@ void null_zone_reset(struct nullb_cmd *cmd, sector_t sector) unsigned int zno = null_zone_no(dev, sector); struct blk_zone *zone = &dev->zones[zno]; + if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) { + cmd->error = BLK_STS_IOERR; + return; + } + zone->cond = BLK_ZONE_COND_EMPTY; zone->wp = zone->start; } diff --git a/drivers/block/paride/pd.c b/drivers/block/paride/pd.c index 
ae4971e5d9a8..0ff9b12d0e35 100644 --- a/drivers/block/paride/pd.c +++ b/drivers/block/paride/pd.c @@ -242,6 +242,11 @@ struct pd_unit { static struct pd_unit pd[PD_UNITS]; +struct pd_req { + /* for REQ_OP_DRV_IN: */ + enum action (*func)(struct pd_unit *disk); +}; + static char pd_scratch[512]; /* scratch block buffer */ static char *pd_errs[17] = { "ERR", "INDEX", "ECC", "DRQ", "SEEK", "WRERR", @@ -502,8 +507,9 @@ static enum action do_pd_io_start(void) static enum action pd_special(void) { - enum action (*func)(struct pd_unit *) = pd_req->special; - return func(pd_current); + struct pd_req *req = blk_mq_rq_to_pdu(pd_req); + + return req->func(pd_current); } static int pd_next_buf(void) @@ -767,12 +773,14 @@ static int pd_special_command(struct pd_unit *disk, enum action (*func)(struct pd_unit *disk)) { struct request *rq; + struct pd_req *req; rq = blk_get_request(disk->gd->queue, REQ_OP_DRV_IN, 0); if (IS_ERR(rq)) return PTR_ERR(rq); + req = blk_mq_rq_to_pdu(rq); - rq->special = func; + req->func = func; blk_execute_rq(disk->gd->queue, disk->gd, rq, 0); blk_put_request(rq); return 0; @@ -892,9 +900,21 @@ static void pd_probe_drive(struct pd_unit *disk) disk->gd = p; p->private_data = disk; - p->queue = blk_mq_init_sq_queue(&disk->tag_set, &pd_mq_ops, 2, - BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING); + memset(&disk->tag_set, 0, sizeof(disk->tag_set)); + disk->tag_set.ops = &pd_mq_ops; + disk->tag_set.cmd_size = sizeof(struct pd_req); + disk->tag_set.nr_hw_queues = 1; + disk->tag_set.nr_maps = 1; + disk->tag_set.queue_depth = 2; + disk->tag_set.numa_node = NUMA_NO_NODE; + disk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING; + + if (blk_mq_alloc_tag_set(&disk->tag_set)) + return; + + p->queue = blk_mq_init_queue(&disk->tag_set); if (IS_ERR(p->queue)) { + blk_mq_free_tag_set(&disk->tag_set); p->queue = NULL; return; } diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c index 9381f4e3b221..f5a71023f76c 100644 --- a/drivers/block/pktcdvd.c +++ b/drivers/block/pktcdvd.c @@ -2203,9 +2203,7 @@ static int pkt_open_dev(struct pktcdvd_device *pd, fmode_t write) * Some CDRW drives can not handle writes larger than one packet, * even if the size is a multiple of the packet size. */ - spin_lock_irq(q->queue_lock); blk_queue_max_hw_sectors(q, pd->settings.size); - spin_unlock_irq(q->queue_lock); set_bit(PACKET_WRITABLE, &pd->flags); } else { pkt_set_speed(pd, MAX_SPEED, MAX_SPEED); diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c index 2459dcc04b1c..a10d5736d8f7 100644 --- a/drivers/block/skd_main.c +++ b/drivers/block/skd_main.c @@ -181,6 +181,7 @@ struct skd_request_context { struct fit_completion_entry_v1 completion; struct fit_comp_error_info err_info; + int retries; blk_status_t status; }; @@ -382,11 +383,12 @@ static void skd_log_skreq(struct skd_device *skdev, * READ/WRITE REQUESTS ***************************************************************************** */ -static void skd_inc_in_flight(struct request *rq, void *data, bool reserved) +static bool skd_inc_in_flight(struct request *rq, void *data, bool reserved) { int *count = data; count++; + return true; } static int skd_in_flight(struct skd_device *skdev) @@ -494,6 +496,11 @@ static blk_status_t skd_mq_queue_rq(struct blk_mq_hw_ctx *hctx, if (unlikely(skdev->state != SKD_DRVR_STATE_ONLINE)) return skd_fail_all(q) ? 
BLK_STS_IOERR : BLK_STS_RESOURCE; + if (!(req->rq_flags & RQF_DONTPREP)) { + skreq->retries = 0; + req->rq_flags |= RQF_DONTPREP; + } + blk_mq_start_request(req); WARN_ONCE(tag >= skd_max_queue_depth, "%#x > %#x (nr_requests = %lu)\n", @@ -1425,7 +1432,7 @@ static void skd_resolve_req_exception(struct skd_device *skdev, break; case SKD_CHECK_STATUS_REQUEUE_REQUEST: - if ((unsigned long) ++req->special < SKD_MAX_RETRIES) { + if (++skreq->retries < SKD_MAX_RETRIES) { skd_log_skreq(skdev, skreq, "retry"); blk_mq_requeue_request(req, true); break; @@ -1887,13 +1894,13 @@ static void skd_isr_fwstate(struct skd_device *skdev) skd_skdev_state_to_str(skdev->state), skdev->state); } -static void skd_recover_request(struct request *req, void *data, bool reserved) +static bool skd_recover_request(struct request *req, void *data, bool reserved) { struct skd_device *const skdev = data; struct skd_request_context *skreq = blk_mq_rq_to_pdu(req); if (skreq->state != SKD_REQ_STATE_BUSY) - return; + return true; skd_log_skreq(skdev, skreq, "recover"); @@ -1904,6 +1911,7 @@ static void skd_recover_request(struct request *req, void *data, bool reserved) skreq->state = SKD_REQ_STATE_IDLE; skreq->status = BLK_STS_IOERR; blk_mq_complete_request(req); + return true; } static void skd_recover_requests(struct skd_device *skdev) diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c index b54fa6726303..9c0553dd13e7 100644 --- a/drivers/block/sunvdc.c +++ b/drivers/block/sunvdc.c @@ -6,7 +6,7 @@ #include #include #include -#include +#include #include #include #include @@ -45,6 +45,8 @@ MODULE_VERSION(DRV_MODULE_VERSION); #define WAITING_FOR_GEN_CMD 0x04 #define WAITING_FOR_ANY -1 +#define VDC_MAX_RETRIES 10 + static struct workqueue_struct *sunvdc_wq; struct vdc_req_entry { @@ -66,9 +68,10 @@ struct vdc_port { u64 max_xfer_size; u32 vdisk_block_size; + u32 drain; u64 ldc_timeout; - struct timer_list ldc_reset_timer; + struct delayed_work ldc_reset_timer_work; struct work_struct ldc_reset_work; /* The server fills these in for us in the disk attribute @@ -80,12 +83,14 @@ struct vdc_port { u8 vdisk_mtype; u32 vdisk_phys_blksz; + struct blk_mq_tag_set tag_set; + char disk_name[32]; }; static void vdc_ldc_reset(struct vdc_port *port); static void vdc_ldc_reset_work(struct work_struct *work); -static void vdc_ldc_reset_timer(struct timer_list *t); +static void vdc_ldc_reset_timer_work(struct work_struct *work); static inline struct vdc_port *to_vdc_port(struct vio_driver_state *vio) { @@ -175,11 +180,8 @@ static void vdc_blk_queue_start(struct vdc_port *port) * handshake completes, so check for initial handshake before we've * allocated a disk. */ - if (port->disk && blk_queue_stopped(port->disk->queue) && - vdc_tx_dring_avail(dr) * 100 / VDC_TX_RING_SIZE >= 50) { - blk_start_queue(port->disk->queue); - } - + if (port->disk && vdc_tx_dring_avail(dr) * 100 / VDC_TX_RING_SIZE >= 50) + blk_mq_start_hw_queues(port->disk->queue); } static void vdc_finish(struct vio_driver_state *vio, int err, int waiting_for) @@ -197,7 +199,7 @@ static void vdc_handshake_complete(struct vio_driver_state *vio) { struct vdc_port *port = to_vdc_port(vio); - del_timer(&port->ldc_reset_timer); + cancel_delayed_work(&port->ldc_reset_timer_work); vdc_finish(vio, 0, WAITING_FOR_LINK_UP); vdc_blk_queue_start(port); } @@ -320,7 +322,7 @@ static void vdc_end_one(struct vdc_port *port, struct vio_dring_state *dr, rqe->req = NULL; - __blk_end_request(req, (desc->status ? BLK_STS_IOERR : 0), desc->size); + blk_mq_end_request(req, desc->status ? 
BLK_STS_IOERR : 0); vdc_blk_queue_start(port); } @@ -431,6 +433,7 @@ static int __vdc_tx_trigger(struct vdc_port *port) .end_idx = dr->prod, }; int err, delay; + int retries = 0; hdr.seq = dr->snd_nxt; delay = 1; @@ -443,6 +446,8 @@ static int __vdc_tx_trigger(struct vdc_port *port) udelay(delay); if ((delay <<= 1) > 128) delay = 128; + if (retries++ > VDC_MAX_RETRIES) + break; } while (err == -EAGAIN); if (err == -ENOTCONN) @@ -525,29 +530,40 @@ static int __send_request(struct request *req) return err; } -static void do_vdc_request(struct request_queue *rq) +static blk_status_t vdc_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) { - struct request *req; + struct vdc_port *port = hctx->queue->queuedata; + struct vio_dring_state *dr; + unsigned long flags; - while ((req = blk_peek_request(rq)) != NULL) { - struct vdc_port *port; - struct vio_dring_state *dr; + dr = &port->vio.drings[VIO_DRIVER_TX_RING]; - port = req->rq_disk->private_data; - dr = &port->vio.drings[VIO_DRIVER_TX_RING]; - if (unlikely(vdc_tx_dring_avail(dr) < 1)) - goto wait; + blk_mq_start_request(bd->rq); - blk_start_request(req); + spin_lock_irqsave(&port->vio.lock, flags); - if (__send_request(req) < 0) { - blk_requeue_request(rq, req); -wait: - /* Avoid pointless unplugs. */ - blk_stop_queue(rq); - break; - } + /* + * Doing drain, just end the request in error + */ + if (unlikely(port->drain)) { + spin_unlock_irqrestore(&port->vio.lock, flags); + return BLK_STS_IOERR; + } + + if (unlikely(vdc_tx_dring_avail(dr) < 1)) { + spin_unlock_irqrestore(&port->vio.lock, flags); + blk_mq_stop_hw_queue(hctx); + return BLK_STS_DEV_RESOURCE; + } + + if (__send_request(bd->rq) < 0) { + spin_unlock_irqrestore(&port->vio.lock, flags); + return BLK_STS_IOERR; } + + spin_unlock_irqrestore(&port->vio.lock, flags); + return BLK_STS_OK; } static int generic_request(struct vdc_port *port, u8 op, void *buf, int len) @@ -759,6 +775,31 @@ static void vdc_port_down(struct vdc_port *port) vio_ldc_free(&port->vio); } +static const struct blk_mq_ops vdc_mq_ops = { + .queue_rq = vdc_queue_rq, +}; + +static void cleanup_queue(struct request_queue *q) +{ + struct vdc_port *port = q->queuedata; + + blk_cleanup_queue(q); + blk_mq_free_tag_set(&port->tag_set); +} + +static struct request_queue *init_queue(struct vdc_port *port) +{ + struct request_queue *q; + + q = blk_mq_init_sq_queue(&port->tag_set, &vdc_mq_ops, VDC_TX_RING_SIZE, + BLK_MQ_F_SHOULD_MERGE); + if (IS_ERR(q)) + return q; + + q->queuedata = port; + return q; +} + static int probe_disk(struct vdc_port *port) { struct request_queue *q; @@ -796,17 +837,17 @@ static int probe_disk(struct vdc_port *port) (u64)geom.num_sec); } - q = blk_init_queue(do_vdc_request, &port->vio.lock); - if (!q) { + q = init_queue(port); + if (IS_ERR(q)) { printk(KERN_ERR PFX "%s: Could not allocate queue.\n", port->vio.name); - return -ENOMEM; + return PTR_ERR(q); } g = alloc_disk(1 << PARTITION_SHIFT); if (!g) { printk(KERN_ERR PFX "%s: Could not allocate gendisk.\n", port->vio.name); - blk_cleanup_queue(q); + cleanup_queue(q); return -ENOMEM; } @@ -981,7 +1022,7 @@ static int vdc_port_probe(struct vio_dev *vdev, const struct vio_device_id *id) */ ldc_timeout = mdesc_get_property(hp, vdev->mp, "vdc-timeout", NULL); port->ldc_timeout = ldc_timeout ? 
*ldc_timeout : 0; - timer_setup(&port->ldc_reset_timer, vdc_ldc_reset_timer, 0); + INIT_DELAYED_WORK(&port->ldc_reset_timer_work, vdc_ldc_reset_timer_work); INIT_WORK(&port->ldc_reset_work, vdc_ldc_reset_work); err = vio_driver_init(&port->vio, vdev, VDEV_DISK, @@ -1034,18 +1075,14 @@ static int vdc_port_remove(struct vio_dev *vdev) struct vdc_port *port = dev_get_drvdata(&vdev->dev); if (port) { - unsigned long flags; - - spin_lock_irqsave(&port->vio.lock, flags); - blk_stop_queue(port->disk->queue); - spin_unlock_irqrestore(&port->vio.lock, flags); + blk_mq_stop_hw_queues(port->disk->queue); flush_work(&port->ldc_reset_work); - del_timer_sync(&port->ldc_reset_timer); + cancel_delayed_work_sync(&port->ldc_reset_timer_work); del_timer_sync(&port->vio.timer); del_gendisk(port->disk); - blk_cleanup_queue(port->disk->queue); + cleanup_queue(port->disk->queue); put_disk(port->disk); port->disk = NULL; @@ -1080,32 +1117,46 @@ static void vdc_requeue_inflight(struct vdc_port *port) } rqe->req = NULL; - blk_requeue_request(port->disk->queue, req); + blk_mq_requeue_request(req, false); } } static void vdc_queue_drain(struct vdc_port *port) { - struct request *req; + struct request_queue *q = port->disk->queue; + + /* + * Mark the queue as draining, then freeze/quiesce to ensure + * that all existing requests are seen in ->queue_rq() and killed + */ + port->drain = 1; + spin_unlock_irq(&port->vio.lock); - while ((req = blk_fetch_request(port->disk->queue)) != NULL) - __blk_end_request_all(req, BLK_STS_IOERR); + blk_mq_freeze_queue(q); + blk_mq_quiesce_queue(q); + + spin_lock_irq(&port->vio.lock); + port->drain = 0; + blk_mq_unquiesce_queue(q); + blk_mq_unfreeze_queue(q); } -static void vdc_ldc_reset_timer(struct timer_list *t) +static void vdc_ldc_reset_timer_work(struct work_struct *work) { - struct vdc_port *port = from_timer(port, t, ldc_reset_timer); - struct vio_driver_state *vio = &port->vio; - unsigned long flags; + struct vdc_port *port; + struct vio_driver_state *vio; - spin_lock_irqsave(&vio->lock, flags); + port = container_of(work, struct vdc_port, ldc_reset_timer_work.work); + vio = &port->vio; + + spin_lock_irq(&vio->lock); if (!(port->vio.hs_state & VIO_HS_COMPLETE)) { pr_warn(PFX "%s ldc down %llu seconds, draining queue\n", port->disk_name, port->ldc_timeout); vdc_queue_drain(port); vdc_blk_queue_start(port); } - spin_unlock_irqrestore(&vio->lock, flags); + spin_unlock_irq(&vio->lock); } static void vdc_ldc_reset_work(struct work_struct *work) @@ -1129,7 +1180,7 @@ static void vdc_ldc_reset(struct vdc_port *port) assert_spin_locked(&port->vio.lock); pr_warn(PFX "%s ldc link reset\n", port->disk_name); - blk_stop_queue(port->disk->queue); + blk_mq_stop_hw_queues(port->disk->queue); vdc_requeue_inflight(port); vdc_port_down(port); @@ -1146,7 +1197,7 @@ static void vdc_ldc_reset(struct vdc_port *port) } if (port->ldc_timeout) - mod_timer(&port->ldc_reset_timer, + mod_delayed_work(system_wq, &port->ldc_reset_timer_work, round_jiffies(jiffies + HZ * port->ldc_timeout)); mod_timer(&port->vio.timer, round_jiffies(jiffies + HZ)); return; diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c index 064b8c5c7a32..4478eb7efee0 100644 --- a/drivers/block/sx8.c +++ b/drivers/block/sx8.c @@ -243,7 +243,6 @@ struct carm_port { unsigned int port_no; struct gendisk *disk; struct carm_host *host; - struct blk_mq_tag_set tag_set; /* attached device characteristics */ u64 capacity; @@ -254,13 +253,10 @@ struct carm_port { }; struct carm_request { - unsigned int tag; int n_elem; unsigned int msg_type; 
unsigned int msg_subtype; unsigned int msg_bucket; - struct request *rq; - struct carm_port *port; struct scatterlist sg[CARM_MAX_REQ_SG]; }; @@ -291,9 +287,6 @@ struct carm_host { unsigned int wait_q_cons; struct request_queue *wait_q[CARM_MAX_WAIT_Q]; - unsigned int n_msgs; - u64 msg_alloc; - struct carm_request req[CARM_MAX_REQ]; void *msg_base; dma_addr_t msg_dma; @@ -478,10 +471,10 @@ static inline dma_addr_t carm_ref_msg_dma(struct carm_host *host, } static int carm_send_msg(struct carm_host *host, - struct carm_request *crq) + struct carm_request *crq, unsigned tag) { void __iomem *mmio = host->mmio; - u32 msg = (u32) carm_ref_msg_dma(host, crq->tag); + u32 msg = (u32) carm_ref_msg_dma(host, tag); u32 cm_bucket = crq->msg_bucket; u32 tmp; int rc = 0; @@ -506,99 +499,24 @@ static int carm_send_msg(struct carm_host *host, return rc; } -static struct carm_request *carm_get_request(struct carm_host *host) -{ - unsigned int i; - - /* obey global hardware limit on S/G entries */ - if (host->hw_sg_used >= (CARM_MAX_HOST_SG - CARM_MAX_REQ_SG)) - return NULL; - - for (i = 0; i < max_queue; i++) - if ((host->msg_alloc & (1ULL << i)) == 0) { - struct carm_request *crq = &host->req[i]; - crq->port = NULL; - crq->n_elem = 0; - - host->msg_alloc |= (1ULL << i); - host->n_msgs++; - - assert(host->n_msgs <= CARM_MAX_REQ); - sg_init_table(crq->sg, CARM_MAX_REQ_SG); - return crq; - } - - DPRINTK("no request available, returning NULL\n"); - return NULL; -} - -static int carm_put_request(struct carm_host *host, struct carm_request *crq) -{ - assert(crq->tag < max_queue); - - if (unlikely((host->msg_alloc & (1ULL << crq->tag)) == 0)) - return -EINVAL; /* tried to clear a tag that was not active */ - - assert(host->hw_sg_used >= crq->n_elem); - - host->msg_alloc &= ~(1ULL << crq->tag); - host->hw_sg_used -= crq->n_elem; - host->n_msgs--; - - return 0; -} - -static struct carm_request *carm_get_special(struct carm_host *host) -{ - unsigned long flags; - struct carm_request *crq = NULL; - struct request *rq; - int tries = 5000; - - while (tries-- > 0) { - spin_lock_irqsave(&host->lock, flags); - crq = carm_get_request(host); - spin_unlock_irqrestore(&host->lock, flags); - - if (crq) - break; - msleep(10); - } - - if (!crq) - return NULL; - - rq = blk_get_request(host->oob_q, REQ_OP_DRV_OUT, 0); - if (IS_ERR(rq)) { - spin_lock_irqsave(&host->lock, flags); - carm_put_request(host, crq); - spin_unlock_irqrestore(&host->lock, flags); - return NULL; - } - - crq->rq = rq; - return crq; -} - static int carm_array_info (struct carm_host *host, unsigned int array_idx) { struct carm_msg_ioctl *ioc; - unsigned int idx; u32 msg_data; dma_addr_t msg_dma; struct carm_request *crq; + struct request *rq; int rc; - crq = carm_get_special(host); - if (!crq) { + rq = blk_mq_alloc_request(host->oob_q, REQ_OP_DRV_OUT, 0); + if (IS_ERR(rq)) { rc = -ENOMEM; goto err_out; } + crq = blk_mq_rq_to_pdu(rq); - idx = crq->tag; - - ioc = carm_ref_msg(host, idx); - msg_dma = carm_ref_msg_dma(host, idx); + ioc = carm_ref_msg(host, rq->tag); + msg_dma = carm_ref_msg_dma(host, rq->tag); msg_data = (u32) (msg_dma + sizeof(struct carm_array_info)); crq->msg_type = CARM_MSG_ARRAY; @@ -612,7 +530,7 @@ static int carm_array_info (struct carm_host *host, unsigned int array_idx) ioc->type = CARM_MSG_ARRAY; ioc->subtype = CARM_ARRAY_INFO; ioc->array_id = (u8) array_idx; - ioc->handle = cpu_to_le32(TAG_ENCODE(idx)); + ioc->handle = cpu_to_le32(TAG_ENCODE(rq->tag)); ioc->data_addr = cpu_to_le32(msg_data); spin_lock_irq(&host->lock); @@ -620,9 +538,8 
@@ static int carm_array_info (struct carm_host *host, unsigned int array_idx) host->state == HST_DEV_SCAN); spin_unlock_irq(&host->lock); - DPRINTK("blk_execute_rq_nowait, tag == %u\n", idx); - crq->rq->special = crq; - blk_execute_rq_nowait(host->oob_q, NULL, crq->rq, true, NULL); + DPRINTK("blk_execute_rq_nowait, tag == %u\n", rq->tag); + blk_execute_rq_nowait(host->oob_q, NULL, rq, true, NULL); return 0; @@ -637,21 +554,21 @@ typedef unsigned int (*carm_sspc_t)(struct carm_host *, unsigned int, void *); static int carm_send_special (struct carm_host *host, carm_sspc_t func) { + struct request *rq; struct carm_request *crq; struct carm_msg_ioctl *ioc; void *mem; - unsigned int idx, msg_size; + unsigned int msg_size; int rc; - crq = carm_get_special(host); - if (!crq) + rq = blk_mq_alloc_request(host->oob_q, REQ_OP_DRV_OUT, 0); + if (IS_ERR(rq)) return -ENOMEM; + crq = blk_mq_rq_to_pdu(rq); - idx = crq->tag; + mem = carm_ref_msg(host, rq->tag); - mem = carm_ref_msg(host, idx); - - msg_size = func(host, idx, mem); + msg_size = func(host, rq->tag, mem); ioc = mem; crq->msg_type = ioc->type; @@ -660,9 +577,8 @@ static int carm_send_special (struct carm_host *host, carm_sspc_t func) BUG_ON(rc < 0); crq->msg_bucket = (u32) rc; - DPRINTK("blk_execute_rq_nowait, tag == %u\n", idx); - crq->rq->special = crq; - blk_execute_rq_nowait(host->oob_q, NULL, crq->rq, true, NULL); + DPRINTK("blk_execute_rq_nowait, tag == %u\n", rq->tag); + blk_execute_rq_nowait(host->oob_q, NULL, rq, true, NULL); return 0; } @@ -744,19 +660,6 @@ static unsigned int carm_fill_get_fw_ver(struct carm_host *host, sizeof(struct carm_fw_ver); } -static inline void carm_end_request_queued(struct carm_host *host, - struct carm_request *crq, - blk_status_t error) -{ - struct request *req = crq->rq; - int rc; - - blk_mq_end_request(req, error); - - rc = carm_put_request(host, crq); - assert(rc == 0); -} - static inline void carm_push_q (struct carm_host *host, struct request_queue *q) { unsigned int idx = host->wait_q_prod % CARM_MAX_WAIT_Q; @@ -791,101 +694,50 @@ static inline void carm_round_robin(struct carm_host *host) } } -static inline void carm_end_rq(struct carm_host *host, struct carm_request *crq, - blk_status_t error) -{ - carm_end_request_queued(host, crq, error); - if (max_queue == 1) - carm_round_robin(host); - else if ((host->n_msgs <= CARM_MSG_LOW_WATER) && - (host->hw_sg_used <= CARM_SG_LOW_WATER)) { - carm_round_robin(host); - } -} - -static blk_status_t carm_oob_queue_rq(struct blk_mq_hw_ctx *hctx, - const struct blk_mq_queue_data *bd) +static inline enum dma_data_direction carm_rq_dir(struct request *rq) { - struct request_queue *q = hctx->queue; - struct carm_host *host = q->queuedata; - struct carm_request *crq; - int rc; - - blk_mq_start_request(bd->rq); - - spin_lock_irq(&host->lock); - - crq = bd->rq->special; - assert(crq != NULL); - assert(crq->rq == bd->rq); - - crq->n_elem = 0; - - DPRINTK("send req\n"); - rc = carm_send_msg(host, crq); - if (rc) { - carm_push_q(host, q); - spin_unlock_irq(&host->lock); - return BLK_STS_DEV_RESOURCE; - } - - spin_unlock_irq(&host->lock); - return BLK_STS_OK; + return op_is_write(req_op(rq)) ? 
DMA_TO_DEVICE : DMA_FROM_DEVICE; } static blk_status_t carm_queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd) { struct request_queue *q = hctx->queue; + struct request *rq = bd->rq; struct carm_port *port = q->queuedata; struct carm_host *host = port->host; + struct carm_request *crq = blk_mq_rq_to_pdu(rq); struct carm_msg_rw *msg; - struct carm_request *crq; - struct request *rq = bd->rq; struct scatterlist *sg; - int writing = 0, pci_dir, i, n_elem, rc; - u32 tmp; + int i, n_elem = 0, rc; unsigned int msg_size; + u32 tmp; + + crq->n_elem = 0; + sg_init_table(crq->sg, CARM_MAX_REQ_SG); blk_mq_start_request(rq); spin_lock_irq(&host->lock); - - crq = carm_get_request(host); - if (!crq) { - carm_push_q(host, q); - spin_unlock_irq(&host->lock); - return BLK_STS_DEV_RESOURCE; - } - crq->rq = rq; - - if (rq_data_dir(rq) == WRITE) { - writing = 1; - pci_dir = DMA_TO_DEVICE; - } else { - pci_dir = DMA_FROM_DEVICE; - } + if (req_op(rq) == REQ_OP_DRV_OUT) + goto send_msg; /* get scatterlist from block layer */ sg = &crq->sg[0]; n_elem = blk_rq_map_sg(q, rq, sg); - if (n_elem <= 0) { - /* request with no s/g entries? */ - carm_end_rq(host, crq, BLK_STS_IOERR); - spin_unlock_irq(&host->lock); - return BLK_STS_IOERR; - } + if (n_elem <= 0) + goto out_ioerr; /* map scatterlist to PCI bus addresses */ - n_elem = dma_map_sg(&host->pdev->dev, sg, n_elem, pci_dir); - if (n_elem <= 0) { - /* request with no s/g entries? */ - carm_end_rq(host, crq, BLK_STS_IOERR); - spin_unlock_irq(&host->lock); - return BLK_STS_IOERR; - } + n_elem = dma_map_sg(&host->pdev->dev, sg, n_elem, carm_rq_dir(rq)); + if (n_elem <= 0) + goto out_ioerr; + + /* obey global hardware limit on S/G entries */ + if (host->hw_sg_used >= CARM_MAX_HOST_SG - n_elem) + goto out_resource; + crq->n_elem = n_elem; - crq->port = port; host->hw_sg_used += n_elem; /* @@ -893,9 +745,9 @@ static blk_status_t carm_queue_rq(struct blk_mq_hw_ctx *hctx, */ VPRINTK("build msg\n"); - msg = (struct carm_msg_rw *) carm_ref_msg(host, crq->tag); + msg = (struct carm_msg_rw *) carm_ref_msg(host, rq->tag); - if (writing) { + if (rq_data_dir(rq) == WRITE) { msg->type = CARM_MSG_WRITE; crq->msg_type = CARM_MSG_WRITE; } else { @@ -906,7 +758,7 @@ static blk_status_t carm_queue_rq(struct blk_mq_hw_ctx *hctx, msg->id = port->port_no; msg->sg_count = n_elem; msg->sg_type = SGT_32BIT; - msg->handle = cpu_to_le32(TAG_ENCODE(crq->tag)); + msg->handle = cpu_to_le32(TAG_ENCODE(rq->tag)); msg->lba = cpu_to_le32(blk_rq_pos(rq) & 0xffffffff); tmp = (blk_rq_pos(rq) >> 16) >> 16; msg->lba_high = cpu_to_le16( (u16) tmp ); @@ -923,22 +775,28 @@ static blk_status_t carm_queue_rq(struct blk_mq_hw_ctx *hctx, rc = carm_lookup_bucket(msg_size); BUG_ON(rc < 0); crq->msg_bucket = (u32) rc; - +send_msg: /* * queue read/write message to hardware */ - - VPRINTK("send msg, tag == %u\n", crq->tag); - rc = carm_send_msg(host, crq); + VPRINTK("send msg, tag == %u\n", rq->tag); + rc = carm_send_msg(host, crq, rq->tag); if (rc) { - carm_put_request(host, crq); - carm_push_q(host, q); - spin_unlock_irq(&host->lock); - return BLK_STS_DEV_RESOURCE; + host->hw_sg_used -= n_elem; + goto out_resource; } spin_unlock_irq(&host->lock); return BLK_STS_OK; +out_resource: + dma_unmap_sg(&host->pdev->dev, &crq->sg[0], n_elem, carm_rq_dir(rq)); + carm_push_q(host, q); + spin_unlock_irq(&host->lock); + return BLK_STS_DEV_RESOURCE; +out_ioerr: + carm_round_robin(host); + spin_unlock_irq(&host->lock); + return BLK_STS_IOERR; } static void carm_handle_array_info(struct carm_host *host, @@ 
-954,8 +812,6 @@ static void carm_handle_array_info(struct carm_host *host, DPRINTK("ENTER\n"); - carm_end_rq(host, crq, error); - if (error) goto out; if (le32_to_cpu(desc->array_status) & ARRAY_NO_EXIST) @@ -1011,8 +867,6 @@ static void carm_handle_scan_chan(struct carm_host *host, DPRINTK("ENTER\n"); - carm_end_rq(host, crq, error); - if (error) { new_state = HST_ERROR; goto out; @@ -1040,8 +894,6 @@ static void carm_handle_generic(struct carm_host *host, { DPRINTK("ENTER\n"); - carm_end_rq(host, crq, error); - assert(host->state == cur_state); if (error) host->state = HST_ERROR; @@ -1050,28 +902,12 @@ static void carm_handle_generic(struct carm_host *host, schedule_work(&host->fsm_task); } -static inline void carm_handle_rw(struct carm_host *host, - struct carm_request *crq, blk_status_t error) -{ - int pci_dir; - - VPRINTK("ENTER\n"); - - if (rq_data_dir(crq->rq) == WRITE) - pci_dir = DMA_TO_DEVICE; - else - pci_dir = DMA_FROM_DEVICE; - - dma_unmap_sg(&host->pdev->dev, &crq->sg[0], crq->n_elem, pci_dir); - - carm_end_rq(host, crq, error); -} - static inline void carm_handle_resp(struct carm_host *host, __le32 ret_handle_le, u32 status) { u32 handle = le32_to_cpu(ret_handle_le); unsigned int msg_idx; + struct request *rq; struct carm_request *crq; blk_status_t error = (status == RMSG_OK) ? 0 : BLK_STS_IOERR; u8 *mem; @@ -1087,13 +923,15 @@ static inline void carm_handle_resp(struct carm_host *host, msg_idx = TAG_DECODE(handle); VPRINTK("tag == %u\n", msg_idx); - crq = &host->req[msg_idx]; + rq = blk_mq_tag_to_rq(host->tag_set.tags[0], msg_idx); + crq = blk_mq_rq_to_pdu(rq); /* fast path */ if (likely(crq->msg_type == CARM_MSG_READ || crq->msg_type == CARM_MSG_WRITE)) { - carm_handle_rw(host, crq, error); - return; + dma_unmap_sg(&host->pdev->dev, &crq->sg[0], crq->n_elem, + carm_rq_dir(rq)); + goto done; } mem = carm_ref_msg(host, msg_idx); @@ -1103,7 +941,7 @@ static inline void carm_handle_resp(struct carm_host *host, switch (crq->msg_subtype) { case CARM_IOC_SCAN_CHAN: carm_handle_scan_chan(host, crq, mem, error); - break; + goto done; default: /* unknown / invalid response */ goto err_out; @@ -1116,11 +954,11 @@ static inline void carm_handle_resp(struct carm_host *host, case MISC_ALLOC_MEM: carm_handle_generic(host, crq, error, HST_ALLOC_BUF, HST_SYNC_TIME); - break; + goto done; case MISC_SET_TIME: carm_handle_generic(host, crq, error, HST_SYNC_TIME, HST_GET_FW_VER); - break; + goto done; case MISC_GET_FW_VER: { struct carm_fw_ver *ver = (struct carm_fw_ver *) (mem + sizeof(struct carm_msg_get_fw_ver)); @@ -1130,7 +968,7 @@ static inline void carm_handle_resp(struct carm_host *host, } carm_handle_generic(host, crq, error, HST_GET_FW_VER, HST_PORT_SCAN); - break; + goto done; } default: /* unknown / invalid response */ @@ -1161,7 +999,13 @@ static inline void carm_handle_resp(struct carm_host *host, err_out: printk(KERN_WARNING DRV_NAME "(%s): BUG: unhandled message type %d/%d\n", pci_name(host->pdev), crq->msg_type, crq->msg_subtype); - carm_end_rq(host, crq, BLK_STS_IOERR); + error = BLK_STS_IOERR; +done: + host->hw_sg_used -= crq->n_elem; + blk_mq_end_request(blk_mq_rq_from_pdu(crq), error); + + if (host->hw_sg_used <= CARM_SG_LOW_WATER) + carm_round_robin(host); } static inline void carm_handle_responses(struct carm_host *host) @@ -1491,78 +1335,56 @@ static int carm_init_host(struct carm_host *host) return 0; } -static const struct blk_mq_ops carm_oob_mq_ops = { - .queue_rq = carm_oob_queue_rq, -}; - static const struct blk_mq_ops carm_mq_ops = { .queue_rq = carm_queue_rq, }; 
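
The sx8 rework above has the same shape as the other conversions in this series: per-command state moves into the blk-mq request PDU (sized by tag_set.cmd_size), the driver's hand-rolled tag allocator and msg_alloc bitmap go away, and the completion interrupt maps the hardware handle straight back to a request. A minimal sketch of that shape, using hypothetical mydrv_* names rather than the actual sx8 code:

#include <linux/blk-mq.h>
#include <linux/scatterlist.h>

#define MYDRV_MAX_REQ	30
#define MYDRV_MAX_SG	32

struct mydrv_cmd {				/* lives in the request PDU */
	struct scatterlist sg[MYDRV_MAX_SG];
	int n_elem;
};

struct mydrv_host {
	struct blk_mq_tag_set tag_set;
	/* mmio base, locks, ... */
};

/* hypothetical hardware submit; returns BLK_STS_OK or a BLK_STS_* error */
static blk_status_t mydrv_send(void *host, struct mydrv_cmd *cmd,
			       unsigned int tag);

static blk_status_t mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
				   const struct blk_mq_queue_data *bd)
{
	struct mydrv_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);

	blk_mq_start_request(bd->rq);
	/* bd->rq->tag doubles as the hardware message slot index */
	return mydrv_send(hctx->queue->queuedata, cmd, bd->rq->tag);
}

static const struct blk_mq_ops mydrv_mq_ops = {
	.queue_rq	= mydrv_queue_rq,
};

static int mydrv_init_tags(struct mydrv_host *host)
{
	memset(&host->tag_set, 0, sizeof(host->tag_set));
	host->tag_set.ops = &mydrv_mq_ops;
	host->tag_set.cmd_size = sizeof(struct mydrv_cmd);	/* PDU size */
	host->tag_set.nr_hw_queues = 1;
	host->tag_set.queue_depth = MYDRV_MAX_REQ;
	host->tag_set.numa_node = NUMA_NO_NODE;
	host->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
	return blk_mq_alloc_tag_set(&host->tag_set);
}

/* IRQ path: a hardware completion handle maps straight back to a request */
static void mydrv_complete_one(struct mydrv_host *host, u32 tag)
{
	struct request *rq = blk_mq_tag_to_rq(host->tag_set.tags[0], tag);

	blk_mq_end_request(rq, BLK_STS_IOERR ? 0 == 0 ? BLK_STS_OK : BLK_STS_OK : BLK_STS_OK);
}

With a command's lifetime tied exactly to its request's lifetime, there is nothing like carm_get_request()/carm_put_request() or the msg_alloc bookkeeping left to race with, which is what lets the patch delete them wholesale.
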
-static int carm_init_disks(struct carm_host *host) +static int carm_init_disk(struct carm_host *host, unsigned int port_no) { - unsigned int i; - int rc = 0; + struct carm_port *port = &host->port[port_no]; + struct gendisk *disk; + struct request_queue *q; - for (i = 0; i < CARM_MAX_PORTS; i++) { - struct gendisk *disk; - struct request_queue *q; - struct carm_port *port; + port->host = host; + port->port_no = port_no; - port = &host->port[i]; - port->host = host; - port->port_no = i; + disk = alloc_disk(CARM_MINORS_PER_MAJOR); + if (!disk) + return -ENOMEM; - disk = alloc_disk(CARM_MINORS_PER_MAJOR); - if (!disk) { - rc = -ENOMEM; - break; - } + port->disk = disk; + sprintf(disk->disk_name, DRV_NAME "/%u", + (unsigned int)host->id * CARM_MAX_PORTS + port_no); + disk->major = host->major; + disk->first_minor = port_no * CARM_MINORS_PER_MAJOR; + disk->fops = &carm_bd_ops; + disk->private_data = port; - port->disk = disk; - sprintf(disk->disk_name, DRV_NAME "/%u", - (unsigned int) (host->id * CARM_MAX_PORTS) + i); - disk->major = host->major; - disk->first_minor = i * CARM_MINORS_PER_MAJOR; - disk->fops = &carm_bd_ops; - disk->private_data = port; - - q = blk_mq_init_sq_queue(&port->tag_set, &carm_mq_ops, - max_queue, BLK_MQ_F_SHOULD_MERGE); - if (IS_ERR(q)) { - rc = PTR_ERR(q); - break; - } - disk->queue = q; - blk_queue_max_segments(q, CARM_MAX_REQ_SG); - blk_queue_segment_boundary(q, CARM_SG_BOUNDARY); + q = blk_mq_init_queue(&host->tag_set); + if (IS_ERR(q)) + return PTR_ERR(q); - q->queuedata = port; - } + blk_queue_max_segments(q, CARM_MAX_REQ_SG); + blk_queue_segment_boundary(q, CARM_SG_BOUNDARY); - return rc; + q->queuedata = port; + disk->queue = q; + return 0; } -static void carm_free_disks(struct carm_host *host) +static void carm_free_disk(struct carm_host *host, unsigned int port_no) { - unsigned int i; - - for (i = 0; i < CARM_MAX_PORTS; i++) { - struct carm_port *port = &host->port[i]; - struct gendisk *disk = port->disk; + struct carm_port *port = &host->port[port_no]; + struct gendisk *disk = port->disk; - if (disk) { - struct request_queue *q = disk->queue; + if (!disk) + return; - if (disk->flags & GENHD_FL_UP) - del_gendisk(disk); - if (q) { - blk_mq_free_tag_set(&port->tag_set); - blk_cleanup_queue(q); - } - put_disk(disk); - } - } + if (disk->flags & GENHD_FL_UP) + del_gendisk(disk); + if (disk->queue) + blk_cleanup_queue(disk->queue); + put_disk(disk); } static int carm_init_shm(struct carm_host *host) @@ -1618,9 +1440,6 @@ static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) INIT_WORK(&host->fsm_task, carm_fsm_task); init_completion(&host->probe_comp); - for (i = 0; i < ARRAY_SIZE(host->req); i++) - host->req[i].tag = i; - host->mmio = ioremap(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0)); if (!host->mmio) { @@ -1637,14 +1456,26 @@ static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) goto err_out_iounmap; } - q = blk_mq_init_sq_queue(&host->tag_set, &carm_oob_mq_ops, 1, - BLK_MQ_F_NO_SCHED); + memset(&host->tag_set, 0, sizeof(host->tag_set)); + host->tag_set.ops = &carm_mq_ops; + host->tag_set.cmd_size = sizeof(struct carm_request); + host->tag_set.nr_hw_queues = 1; + host->tag_set.nr_maps = 1; + host->tag_set.queue_depth = max_queue; + host->tag_set.numa_node = NUMA_NO_NODE; + host->tag_set.flags = BLK_MQ_F_SHOULD_MERGE; + + rc = blk_mq_alloc_tag_set(&host->tag_set); + if (rc) + goto err_out_dma_free; + + q = blk_mq_init_queue(&host->tag_set); if (IS_ERR(q)) { - printk(KERN_ERR DRV_NAME "(%s): OOB 
queue alloc failure\n", - pci_name(pdev)); rc = PTR_ERR(q); + blk_mq_free_tag_set(&host->tag_set); goto err_out_dma_free; } + host->oob_q = q; q->queuedata = host; @@ -1667,9 +1498,11 @@ static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) if (host->flags & FL_DYN_MAJOR) host->major = rc; - rc = carm_init_disks(host); - if (rc) - goto err_out_blkdev_disks; + for (i = 0; i < CARM_MAX_PORTS; i++) { + rc = carm_init_disk(host, i); + if (rc) + goto err_out_blkdev_disks; + } pci_set_master(pdev); @@ -1699,7 +1532,8 @@ static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) err_out_free_irq: free_irq(pdev->irq, host); err_out_blkdev_disks: - carm_free_disks(host); + for (i = 0; i < CARM_MAX_PORTS; i++) + carm_free_disk(host, i); unregister_blkdev(host->major, host->name); err_out_free_majors: if (host->major == 160) @@ -1724,6 +1558,7 @@ err_out: static void carm_remove_one (struct pci_dev *pdev) { struct carm_host *host = pci_get_drvdata(pdev); + unsigned int i; if (!host) { printk(KERN_ERR PFX "BUG: no host data for PCI(%s)\n", @@ -1732,7 +1567,8 @@ static void carm_remove_one (struct pci_dev *pdev) } free_irq(pdev->irq, host); - carm_free_disks(host); + for (i = 0; i < CARM_MAX_PORTS; i++) + carm_free_disk(host, i); unregister_blkdev(host->major, host->name); if (host->major == 160) clear_bit(0, &carm_major_alloc); diff --git a/drivers/block/umem.c b/drivers/block/umem.c index be3e3ab79950..aa035cf8a51d 100644 --- a/drivers/block/umem.c +++ b/drivers/block/umem.c @@ -888,8 +888,7 @@ static int mm_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) card->biotail = &card->bio; spin_lock_init(&card->lock); - card->queue = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE, - &card->lock); + card->queue = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE); if (!card->queue) goto failed_alloc; diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index 086c6bb12baa..912c4265e592 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -214,6 +214,20 @@ static void virtblk_done(struct virtqueue *vq) spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags); } +static void virtio_commit_rqs(struct blk_mq_hw_ctx *hctx) +{ + struct virtio_blk *vblk = hctx->queue->queuedata; + struct virtio_blk_vq *vq = &vblk->vqs[hctx->queue_num]; + bool kick; + + spin_lock_irq(&vq->lock); + kick = virtqueue_kick_prepare(vq->vq); + spin_unlock_irq(&vq->lock); + + if (kick) + virtqueue_notify(vq->vq); +} + static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd) { @@ -624,7 +638,7 @@ static int virtblk_map_queues(struct blk_mq_tag_set *set) { struct virtio_blk *vblk = set->driver_data; - return blk_mq_virtio_map_queues(set, vblk->vdev, 0); + return blk_mq_virtio_map_queues(&set->map[0], vblk->vdev, 0); } #ifdef CONFIG_VIRTIO_BLK_SCSI @@ -638,6 +652,7 @@ static void virtblk_initialize_rq(struct request *req) static const struct blk_mq_ops virtio_mq_ops = { .queue_rq = virtio_queue_rq, + .commit_rqs = virtio_commit_rqs, .complete = virtblk_request_done, .init_request = virtblk_init_request, #ifdef CONFIG_VIRTIO_BLK_SCSI diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig index fcd055457364..1ffc64770643 100644 --- a/drivers/block/zram/Kconfig +++ b/drivers/block/zram/Kconfig @@ -15,7 +15,7 @@ config ZRAM See Documentation/blockdev/zram.txt for more information. 
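The new ->commit_rqs hook wired up in the virtio-blk hunk above exists for batched dispatch: blk-mq sets bd->last only on the final request of a batch, so ->queue_rq may legitimately skip the doorbell for earlier requests, and if dispatch stops before that last request, blk-mq calls ->commit_rqs to flush the deferred kick. A sketch of the queue_rq side under those assumptions (my_vq and the lock are stand-ins, not virtio-blk's names):

#include <linux/blk-mq.h>
#include <linux/virtio.h>

static DEFINE_SPINLOCK(my_ring_lock);
static struct virtqueue *my_vq;		/* assumed set up at probe time */

static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
				const struct blk_mq_queue_data *bd)
{
	bool kick = false;

	spin_lock_irq(&my_ring_lock);
	/* ... add bd->rq to the ring here ... */
	if (bd->last && virtqueue_kick_prepare(my_vq))
		kick = true;	/* earlier requests of the batch stay quiet */
	spin_unlock_irq(&my_ring_lock);

	if (kick)
		virtqueue_notify(my_vq);	/* otherwise ->commit_rqs will */
	return BLK_STS_OK;
}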
config ZRAM_WRITEBACK - bool "Write back incompressible page to backing device" + bool "Write back incompressible or idle page to backing device" depends on ZRAM help With incompressible page, there is no memory saving to keep it @@ -23,6 +23,9 @@ config ZRAM_WRITEBACK For this feature, admin should set up backing device via /sys/block/zramX/backing_dev. + With /sys/block/zramX/{idle,writeback}, application could ask + idle page's writeback to the backing device to save in memory. + See Documentation/blockdev/zram.txt for more information. config ZRAM_MEMORY_TRACKING diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 4879595200e1..33c5cc879f24 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -52,15 +52,23 @@ static unsigned int num_devices = 1; static size_t huge_class_size; static void zram_free_page(struct zram *zram, size_t index); +static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec, + u32 index, int offset, struct bio *bio); + + +static int zram_slot_trylock(struct zram *zram, u32 index) +{ + return bit_spin_trylock(ZRAM_LOCK, &zram->table[index].flags); +} static void zram_slot_lock(struct zram *zram, u32 index) { - bit_spin_lock(ZRAM_LOCK, &zram->table[index].value); + bit_spin_lock(ZRAM_LOCK, &zram->table[index].flags); } static void zram_slot_unlock(struct zram *zram, u32 index) { - bit_spin_unlock(ZRAM_LOCK, &zram->table[index].value); + bit_spin_unlock(ZRAM_LOCK, &zram->table[index].flags); } static inline bool init_done(struct zram *zram) @@ -68,13 +76,6 @@ static inline bool init_done(struct zram *zram) return zram->disksize; } -static inline bool zram_allocated(struct zram *zram, u32 index) -{ - - return (zram->table[index].value >> (ZRAM_FLAG_SHIFT + 1)) || - zram->table[index].handle; -} - static inline struct zram *dev_to_zram(struct device *dev) { return (struct zram *)dev_to_disk(dev)->private_data; @@ -94,19 +95,19 @@ static void zram_set_handle(struct zram *zram, u32 index, unsigned long handle) static bool zram_test_flag(struct zram *zram, u32 index, enum zram_pageflags flag) { - return zram->table[index].value & BIT(flag); + return zram->table[index].flags & BIT(flag); } static void zram_set_flag(struct zram *zram, u32 index, enum zram_pageflags flag) { - zram->table[index].value |= BIT(flag); + zram->table[index].flags |= BIT(flag); } static void zram_clear_flag(struct zram *zram, u32 index, enum zram_pageflags flag) { - zram->table[index].value &= ~BIT(flag); + zram->table[index].flags &= ~BIT(flag); } static inline void zram_set_element(struct zram *zram, u32 index, @@ -122,15 +123,22 @@ static unsigned long zram_get_element(struct zram *zram, u32 index) static size_t zram_get_obj_size(struct zram *zram, u32 index) { - return zram->table[index].value & (BIT(ZRAM_FLAG_SHIFT) - 1); + return zram->table[index].flags & (BIT(ZRAM_FLAG_SHIFT) - 1); } static void zram_set_obj_size(struct zram *zram, u32 index, size_t size) { - unsigned long flags = zram->table[index].value >> ZRAM_FLAG_SHIFT; + unsigned long flags = zram->table[index].flags >> ZRAM_FLAG_SHIFT; - zram->table[index].value = (flags << ZRAM_FLAG_SHIFT) | size; + zram->table[index].flags = (flags << ZRAM_FLAG_SHIFT) | size; +} + +static inline bool zram_allocated(struct zram *zram, u32 index) +{ + return zram_get_obj_size(zram, index) || + zram_test_flag(zram, index, ZRAM_SAME) || + zram_test_flag(zram, index, ZRAM_WB); } #if PAGE_SIZE != 4096 @@ -276,17 +284,90 @@ static ssize_t mem_used_max_store(struct device *dev, return len; } 
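The rename from table[].value to table[].flags above changes no layout: the word still packs the compressed object size into its low ZRAM_FLAG_SHIFT bits and keeps the zram_pageflags bits above them, which is why zram_allocated can now be expressed purely through the accessors. A worked sketch of the packing (helper names are ours; the constant mirrors ZRAM_FLAG_SHIFT):

#include <linux/types.h>

#define FLAG_SHIFT 24	/* low 24 bits: object size; bits 24+: page flags */

static void set_obj_size(unsigned long *word, size_t size)
{
	unsigned long high = *word >> FLAG_SHIFT;	/* preserve flag bits */

	*word = (high << FLAG_SHIFT) | size;	/* size < BIT(FLAG_SHIFT) */
}

static bool test_page_flag(unsigned long word, unsigned int flag)
{
	/* zram_pageflags enumerators start at FLAG_SHIFT, so no overlap */
	return word & (1UL << flag);
}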
+static ssize_t idle_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t len) +{ + struct zram *zram = dev_to_zram(dev); + unsigned long nr_pages = zram->disksize >> PAGE_SHIFT; + int index; + char mode_buf[8]; + ssize_t sz; + + sz = strscpy(mode_buf, buf, sizeof(mode_buf)); + if (sz <= 0) + return -EINVAL; + + /* ignore trailing new line */ + if (mode_buf[sz - 1] == '\n') + mode_buf[sz - 1] = 0x00; + + if (strcmp(mode_buf, "all")) + return -EINVAL; + + down_read(&zram->init_lock); + if (!init_done(zram)) { + up_read(&zram->init_lock); + return -EINVAL; + } + + for (index = 0; index < nr_pages; index++) { + /* + * Do not mark ZRAM_UNDER_WB slot as ZRAM_IDLE to close race. + * See the comment in writeback_store. + */ + zram_slot_lock(zram, index); + if (!zram_allocated(zram, index) || + zram_test_flag(zram, index, ZRAM_UNDER_WB)) + goto next; + zram_set_flag(zram, index, ZRAM_IDLE); +next: + zram_slot_unlock(zram, index); + } + + up_read(&zram->init_lock); + + return len; +} + #ifdef CONFIG_ZRAM_WRITEBACK -static bool zram_wb_enabled(struct zram *zram) +static ssize_t writeback_limit_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t len) +{ + struct zram *zram = dev_to_zram(dev); + u64 val; + ssize_t ret = -EINVAL; + + if (kstrtoull(buf, 10, &val)) + return ret; + + down_read(&zram->init_lock); + atomic64_set(&zram->stats.bd_wb_limit, val); + if (val == 0) + zram->stop_writeback = false; + up_read(&zram->init_lock); + ret = len; + + return ret; +} + +static ssize_t writeback_limit_show(struct device *dev, + struct device_attribute *attr, char *buf) { - return zram->backing_dev; + u64 val; + struct zram *zram = dev_to_zram(dev); + + down_read(&zram->init_lock); + val = atomic64_read(&zram->stats.bd_wb_limit); + up_read(&zram->init_lock); + + return scnprintf(buf, PAGE_SIZE, "%llu\n", val); } static void reset_bdev(struct zram *zram) { struct block_device *bdev; - if (!zram_wb_enabled(zram)) + if (!zram->backing_dev) return; bdev = zram->bdev; @@ -313,7 +394,7 @@ static ssize_t backing_dev_show(struct device *dev, ssize_t ret; down_read(&zram->init_lock); - if (!zram_wb_enabled(zram)) { + if (!zram->backing_dev) { memcpy(buf, "none\n", 5); up_read(&zram->init_lock); return 5; @@ -382,8 +463,10 @@ static ssize_t backing_dev_store(struct device *dev, bdev = bdgrab(I_BDEV(inode)); err = blkdev_get(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL, zram); - if (err < 0) + if (err < 0) { + bdev = NULL; goto out; + } nr_pages = i_size_read(inode) >> PAGE_SHIFT; bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long); @@ -399,7 +482,6 @@ static ssize_t backing_dev_store(struct device *dev, goto out; reset_bdev(zram); - spin_lock_init(&zram->bitmap_lock); zram->old_block_size = old_block_size; zram->bdev = bdev; @@ -441,32 +523,29 @@ out: return err; } -static unsigned long get_entry_bdev(struct zram *zram) +static unsigned long alloc_block_bdev(struct zram *zram) { - unsigned long entry; - - spin_lock(&zram->bitmap_lock); + unsigned long blk_idx = 1; +retry: /* skip 0 bit to confuse zram.handle = 0 */ - entry = find_next_zero_bit(zram->bitmap, zram->nr_pages, 1); - if (entry == zram->nr_pages) { - spin_unlock(&zram->bitmap_lock); + blk_idx = find_next_zero_bit(zram->bitmap, zram->nr_pages, blk_idx); + if (blk_idx == zram->nr_pages) return 0; - } - set_bit(entry, zram->bitmap); - spin_unlock(&zram->bitmap_lock); + if (test_and_set_bit(blk_idx, zram->bitmap)) + goto retry; - return entry; + atomic64_inc(&zram->stats.bd_count); + return blk_idx; } 
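Note how alloc_block_bdev above gets away without the old bitmap_lock: a clear bit found by the scan is only a candidate, test_and_set_bit() is what actually claims it, and a loser of the race simply rescans. A standalone sketch of the idiom with assumed names:

#include <linux/bitops.h>

static unsigned long claim_block(unsigned long *bitmap, unsigned long nbits)
{
	unsigned long idx = 1;		/* index 0 is reserved for "no block" */

	for (;;) {
		idx = find_next_zero_bit(bitmap, nbits, idx);
		if (idx == nbits)
			return 0;	/* backing device is full */
		if (!test_and_set_bit(idx, bitmap))
			return idx;	/* claimed atomically, block is ours */
		/* another CPU set the bit first; rescan from this index */
	}
}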
-static void put_entry_bdev(struct zram *zram, unsigned long entry) +static void free_block_bdev(struct zram *zram, unsigned long blk_idx) { int was_set; - spin_lock(&zram->bitmap_lock); - was_set = test_and_clear_bit(entry, zram->bitmap); - spin_unlock(&zram->bitmap_lock); + was_set = test_and_clear_bit(blk_idx, zram->bitmap); WARN_ON_ONCE(!was_set); + atomic64_dec(&zram->stats.bd_count); } static void zram_page_end_io(struct bio *bio) @@ -509,6 +588,169 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec, return 1; } +#define HUGE_WRITEBACK 0x1 +#define IDLE_WRITEBACK 0x2 + +static ssize_t writeback_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t len) +{ + struct zram *zram = dev_to_zram(dev); + unsigned long nr_pages = zram->disksize >> PAGE_SHIFT; + unsigned long index; + struct bio bio; + struct bio_vec bio_vec; + struct page *page; + ssize_t ret, sz; + char mode_buf[8]; + unsigned long mode = -1UL; + unsigned long blk_idx = 0; + + sz = strscpy(mode_buf, buf, sizeof(mode_buf)); + if (sz <= 0) + return -EINVAL; + + /* ignore trailing newline */ + if (mode_buf[sz - 1] == '\n') + mode_buf[sz - 1] = 0x00; + + if (!strcmp(mode_buf, "idle")) + mode = IDLE_WRITEBACK; + else if (!strcmp(mode_buf, "huge")) + mode = HUGE_WRITEBACK; + + if (mode == -1UL) + return -EINVAL; + + down_read(&zram->init_lock); + if (!init_done(zram)) { + ret = -EINVAL; + goto release_init_lock; + } + + if (!zram->backing_dev) { + ret = -ENODEV; + goto release_init_lock; + } + + page = alloc_page(GFP_KERNEL); + if (!page) { + ret = -ENOMEM; + goto release_init_lock; + } + + for (index = 0; index < nr_pages; index++) { + struct bio_vec bvec; + + bvec.bv_page = page; + bvec.bv_len = PAGE_SIZE; + bvec.bv_offset = 0; + + if (zram->stop_writeback) { + ret = -EIO; + break; + } + + if (!blk_idx) { + blk_idx = alloc_block_bdev(zram); + if (!blk_idx) { + ret = -ENOSPC; + break; + } + } + + zram_slot_lock(zram, index); + if (!zram_allocated(zram, index)) + goto next; + + if (zram_test_flag(zram, index, ZRAM_WB) || + zram_test_flag(zram, index, ZRAM_SAME) || + zram_test_flag(zram, index, ZRAM_UNDER_WB)) + goto next; + + if ((mode & IDLE_WRITEBACK && + !zram_test_flag(zram, index, ZRAM_IDLE)) && + (mode & HUGE_WRITEBACK && + !zram_test_flag(zram, index, ZRAM_HUGE))) + goto next; + /* + * Clearing ZRAM_UNDER_WB is duty of caller. + * IOW, zram_free_page never clear it. + */ + zram_set_flag(zram, index, ZRAM_UNDER_WB); + /* Need for hugepage writeback racing */ + zram_set_flag(zram, index, ZRAM_IDLE); + zram_slot_unlock(zram, index); + if (zram_bvec_read(zram, &bvec, index, 0, NULL)) { + zram_slot_lock(zram, index); + zram_clear_flag(zram, index, ZRAM_UNDER_WB); + zram_clear_flag(zram, index, ZRAM_IDLE); + zram_slot_unlock(zram, index); + continue; + } + + bio_init(&bio, &bio_vec, 1); + bio_set_dev(&bio, zram->bdev); + bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9); + bio.bi_opf = REQ_OP_WRITE | REQ_SYNC; + + bio_add_page(&bio, bvec.bv_page, bvec.bv_len, + bvec.bv_offset); + /* + * XXX: A single page IO would be inefficient for write + * but it would be not bad as starter. + */ + ret = submit_bio_wait(&bio); + if (ret) { + zram_slot_lock(zram, index); + zram_clear_flag(zram, index, ZRAM_UNDER_WB); + zram_clear_flag(zram, index, ZRAM_IDLE); + zram_slot_unlock(zram, index); + continue; + } + + atomic64_inc(&zram->stats.bd_writes); + /* + * We released zram_slot_lock so need to check if the slot was + * changed. 
If there is freeing for the slot, we can catch it + * easily by zram_allocated. + * A subtle case is the slot is freed/reallocated/marked as + * ZRAM_IDLE again. To close the race, idle_store doesn't + * mark ZRAM_IDLE once it found the slot was ZRAM_UNDER_WB. + * Thus, we could close the race by checking ZRAM_IDLE bit. + */ + zram_slot_lock(zram, index); + if (!zram_allocated(zram, index) || + !zram_test_flag(zram, index, ZRAM_IDLE)) { + zram_clear_flag(zram, index, ZRAM_UNDER_WB); + zram_clear_flag(zram, index, ZRAM_IDLE); + goto next; + } + + zram_free_page(zram, index); + zram_clear_flag(zram, index, ZRAM_UNDER_WB); + zram_set_flag(zram, index, ZRAM_WB); + zram_set_element(zram, index, blk_idx); + blk_idx = 0; + atomic64_inc(&zram->stats.pages_stored); + if (atomic64_add_unless(&zram->stats.bd_wb_limit, + -1 << (PAGE_SHIFT - 12), 0)) { + if (atomic64_read(&zram->stats.bd_wb_limit) == 0) + zram->stop_writeback = true; + } +next: + zram_slot_unlock(zram, index); + } + + if (blk_idx) + free_block_bdev(zram, blk_idx); + ret = len; + __free_page(page); +release_init_lock: + up_read(&zram->init_lock); + + return ret; +} + struct zram_work { struct work_struct work; struct zram *zram; @@ -561,79 +803,21 @@ static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec, static int read_from_bdev(struct zram *zram, struct bio_vec *bvec, unsigned long entry, struct bio *parent, bool sync) { + atomic64_inc(&zram->stats.bd_reads); if (sync) return read_from_bdev_sync(zram, bvec, entry, parent); else return read_from_bdev_async(zram, bvec, entry, parent); } - -static int write_to_bdev(struct zram *zram, struct bio_vec *bvec, - u32 index, struct bio *parent, - unsigned long *pentry) -{ - struct bio *bio; - unsigned long entry; - - bio = bio_alloc(GFP_ATOMIC, 1); - if (!bio) - return -ENOMEM; - - entry = get_entry_bdev(zram); - if (!entry) { - bio_put(bio); - return -ENOSPC; - } - - bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> 9); - bio_set_dev(bio, zram->bdev); - if (!bio_add_page(bio, bvec->bv_page, bvec->bv_len, - bvec->bv_offset)) { - bio_put(bio); - put_entry_bdev(zram, entry); - return -EIO; - } - - if (!parent) { - bio->bi_opf = REQ_OP_WRITE | REQ_SYNC; - bio->bi_end_io = zram_page_end_io; - } else { - bio->bi_opf = parent->bi_opf; - bio_chain(bio, parent); - } - - submit_bio(bio); - *pentry = entry; - - return 0; -} - -static void zram_wb_clear(struct zram *zram, u32 index) -{ - unsigned long entry; - - zram_clear_flag(zram, index, ZRAM_WB); - entry = zram_get_element(zram, index); - zram_set_element(zram, index, 0); - put_entry_bdev(zram, entry); -} - #else -static bool zram_wb_enabled(struct zram *zram) { return false; } static inline void reset_bdev(struct zram *zram) {}; -static int write_to_bdev(struct zram *zram, struct bio_vec *bvec, - u32 index, struct bio *parent, - unsigned long *pentry) - -{ - return -EIO; -} - static int read_from_bdev(struct zram *zram, struct bio_vec *bvec, unsigned long entry, struct bio *parent, bool sync) { return -EIO; } -static void zram_wb_clear(struct zram *zram, u32 index) {} + +static void free_block_bdev(struct zram *zram, unsigned long blk_idx) {}; #endif #ifdef CONFIG_ZRAM_MEMORY_TRACKING @@ -652,14 +836,10 @@ static void zram_debugfs_destroy(void) static void zram_accessed(struct zram *zram, u32 index) { + zram_clear_flag(zram, index, ZRAM_IDLE); zram->table[index].ac_time = ktime_get_boottime(); } -static void zram_reset_access(struct zram *zram, u32 index) -{ - zram->table[index].ac_time = 0; -} - static ssize_t read_block_state(struct 
file *file, char __user *buf, size_t count, loff_t *ppos) { @@ -689,12 +869,13 @@ static ssize_t read_block_state(struct file *file, char __user *buf, ts = ktime_to_timespec64(zram->table[index].ac_time); copied = snprintf(kbuf + written, count, - "%12zd %12lld.%06lu %c%c%c\n", + "%12zd %12lld.%06lu %c%c%c%c\n", index, (s64)ts.tv_sec, ts.tv_nsec / NSEC_PER_USEC, zram_test_flag(zram, index, ZRAM_SAME) ? 's' : '.', zram_test_flag(zram, index, ZRAM_WB) ? 'w' : '.', - zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.'); + zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.', + zram_test_flag(zram, index, ZRAM_IDLE) ? 'i' : '.'); if (count < copied) { zram_slot_unlock(zram, index); @@ -739,8 +920,10 @@ static void zram_debugfs_unregister(struct zram *zram) #else static void zram_debugfs_create(void) {}; static void zram_debugfs_destroy(void) {}; -static void zram_accessed(struct zram *zram, u32 index) {}; -static void zram_reset_access(struct zram *zram, u32 index) {}; +static void zram_accessed(struct zram *zram, u32 index) +{ + zram_clear_flag(zram, index, ZRAM_IDLE); +}; static void zram_debugfs_register(struct zram *zram) {}; static void zram_debugfs_unregister(struct zram *zram) {}; #endif @@ -877,6 +1060,26 @@ static ssize_t mm_stat_show(struct device *dev, return ret; } +#ifdef CONFIG_ZRAM_WRITEBACK +#define FOUR_K(x) ((x) * (1 << (PAGE_SHIFT - 12))) +static ssize_t bd_stat_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct zram *zram = dev_to_zram(dev); + ssize_t ret; + + down_read(&zram->init_lock); + ret = scnprintf(buf, PAGE_SIZE, + "%8llu %8llu %8llu\n", + FOUR_K((u64)atomic64_read(&zram->stats.bd_count)), + FOUR_K((u64)atomic64_read(&zram->stats.bd_reads)), + FOUR_K((u64)atomic64_read(&zram->stats.bd_writes))); + up_read(&zram->init_lock); + + return ret; +} +#endif + static ssize_t debug_stat_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -886,9 +1089,10 @@ static ssize_t debug_stat_show(struct device *dev, down_read(&zram->init_lock); ret = scnprintf(buf, PAGE_SIZE, - "version: %d\n%8llu\n", + "version: %d\n%8llu %8llu\n", version, - (u64)atomic64_read(&zram->stats.writestall)); + (u64)atomic64_read(&zram->stats.writestall), + (u64)atomic64_read(&zram->stats.miss_free)); up_read(&zram->init_lock); return ret; @@ -896,6 +1100,9 @@ static ssize_t debug_stat_show(struct device *dev, static DEVICE_ATTR_RO(io_stat); static DEVICE_ATTR_RO(mm_stat); +#ifdef CONFIG_ZRAM_WRITEBACK +static DEVICE_ATTR_RO(bd_stat); +#endif static DEVICE_ATTR_RO(debug_stat); static void zram_meta_free(struct zram *zram, u64 disksize) @@ -940,17 +1147,21 @@ static void zram_free_page(struct zram *zram, size_t index) { unsigned long handle; - zram_reset_access(zram, index); +#ifdef CONFIG_ZRAM_MEMORY_TRACKING + zram->table[index].ac_time = 0; +#endif + if (zram_test_flag(zram, index, ZRAM_IDLE)) + zram_clear_flag(zram, index, ZRAM_IDLE); if (zram_test_flag(zram, index, ZRAM_HUGE)) { zram_clear_flag(zram, index, ZRAM_HUGE); atomic64_dec(&zram->stats.huge_pages); } - if (zram_wb_enabled(zram) && zram_test_flag(zram, index, ZRAM_WB)) { - zram_wb_clear(zram, index); - atomic64_dec(&zram->stats.pages_stored); - return; + if (zram_test_flag(zram, index, ZRAM_WB)) { + zram_clear_flag(zram, index, ZRAM_WB); + free_block_bdev(zram, zram_get_element(zram, index)); + goto out; } /* @@ -959,10 +1170,8 @@ static void zram_free_page(struct zram *zram, size_t index) */ if (zram_test_flag(zram, index, ZRAM_SAME)) { zram_clear_flag(zram, index, ZRAM_SAME); - 
zram_set_element(zram, index, 0); atomic64_dec(&zram->stats.same_pages); - atomic64_dec(&zram->stats.pages_stored); - return; + goto out; } handle = zram_get_handle(zram, index); @@ -973,10 +1182,12 @@ static void zram_free_page(struct zram *zram, size_t index) atomic64_sub(zram_get_obj_size(zram, index), &zram->stats.compr_data_size); +out: atomic64_dec(&zram->stats.pages_stored); - zram_set_handle(zram, index, 0); zram_set_obj_size(zram, index, 0); + WARN_ON_ONCE(zram->table[index].flags & + ~(1UL << ZRAM_LOCK | 1UL << ZRAM_UNDER_WB)); } static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index, @@ -987,24 +1198,20 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index, unsigned int size; void *src, *dst; - if (zram_wb_enabled(zram)) { - zram_slot_lock(zram, index); - if (zram_test_flag(zram, index, ZRAM_WB)) { - struct bio_vec bvec; - - zram_slot_unlock(zram, index); + zram_slot_lock(zram, index); + if (zram_test_flag(zram, index, ZRAM_WB)) { + struct bio_vec bvec; - bvec.bv_page = page; - bvec.bv_len = PAGE_SIZE; - bvec.bv_offset = 0; - return read_from_bdev(zram, &bvec, - zram_get_element(zram, index), - bio, partial_io); - } zram_slot_unlock(zram, index); + + bvec.bv_page = page; + bvec.bv_len = PAGE_SIZE; + bvec.bv_offset = 0; + return read_from_bdev(zram, &bvec, + zram_get_element(zram, index), + bio, partial_io); } - zram_slot_lock(zram, index); handle = zram_get_handle(zram, index); if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) { unsigned long value; @@ -1089,7 +1296,6 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec, struct page *page = bvec->bv_page; unsigned long element = 0; enum zram_pageflags flags = 0; - bool allow_wb = true; mem = kmap_atomic(page); if (page_same_filled(mem, &element)) { @@ -1114,21 +1320,8 @@ compress_again: return ret; } - if (unlikely(comp_len >= huge_class_size)) { + if (comp_len >= huge_class_size) comp_len = PAGE_SIZE; - if (zram_wb_enabled(zram) && allow_wb) { - zcomp_stream_put(zram->comp); - ret = write_to_bdev(zram, bvec, index, bio, &element); - if (!ret) { - flags = ZRAM_WB; - ret = 1; - goto out; - } - allow_wb = false; - goto compress_again; - } - } - /* * handle allocation has 2 paths: * a) fast path is executed with preemption disabled (for @@ -1400,10 +1593,14 @@ static void zram_slot_free_notify(struct block_device *bdev, zram = bdev->bd_disk->private_data; - zram_slot_lock(zram, index); + atomic64_inc(&zram->stats.notify_free); + if (!zram_slot_trylock(zram, index)) { + atomic64_inc(&zram->stats.miss_free); + return; + } + zram_free_page(zram, index); zram_slot_unlock(zram, index); - atomic64_inc(&zram->stats.notify_free); } static int zram_rw_page(struct block_device *bdev, sector_t sector, @@ -1608,10 +1805,13 @@ static DEVICE_ATTR_RO(initstate); static DEVICE_ATTR_WO(reset); static DEVICE_ATTR_WO(mem_limit); static DEVICE_ATTR_WO(mem_used_max); +static DEVICE_ATTR_WO(idle); static DEVICE_ATTR_RW(max_comp_streams); static DEVICE_ATTR_RW(comp_algorithm); #ifdef CONFIG_ZRAM_WRITEBACK static DEVICE_ATTR_RW(backing_dev); +static DEVICE_ATTR_WO(writeback); +static DEVICE_ATTR_RW(writeback_limit); #endif static struct attribute *zram_disk_attrs[] = { @@ -1621,13 +1821,19 @@ static struct attribute *zram_disk_attrs[] = { &dev_attr_compact.attr, &dev_attr_mem_limit.attr, &dev_attr_mem_used_max.attr, + &dev_attr_idle.attr, &dev_attr_max_comp_streams.attr, &dev_attr_comp_algorithm.attr, #ifdef CONFIG_ZRAM_WRITEBACK &dev_attr_backing_dev.attr, + &dev_attr_writeback.attr, + 
&dev_attr_writeback_limit.attr, #endif &dev_attr_io_stat.attr, &dev_attr_mm_stat.attr, +#ifdef CONFIG_ZRAM_WRITEBACK + &dev_attr_bd_stat.attr, +#endif &dev_attr_debug_stat.attr, NULL, }; diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h index 72c8584b6dff..4bd3afd15e83 100644 --- a/drivers/block/zram/zram_drv.h +++ b/drivers/block/zram/zram_drv.h @@ -30,7 +30,7 @@ /* - * The lower ZRAM_FLAG_SHIFT bits of table.value is for + * The lower ZRAM_FLAG_SHIFT bits of table.flags is for * object size (excluding header), the higher bits is for * zram_pageflags. * @@ -41,13 +41,15 @@ */ #define ZRAM_FLAG_SHIFT 24 -/* Flags for zram pages (table[page_no].value) */ +/* Flags for zram pages (table[page_no].flags) */ enum zram_pageflags { /* zram slot is locked */ ZRAM_LOCK = ZRAM_FLAG_SHIFT, ZRAM_SAME, /* Page consists the same element */ ZRAM_WB, /* page is stored on backing_device */ + ZRAM_UNDER_WB, /* page is under writeback */ ZRAM_HUGE, /* Incompressible page */ + ZRAM_IDLE, /* not accessed page since last idle marking */ __NR_ZRAM_PAGEFLAGS, }; @@ -60,7 +62,7 @@ struct zram_table_entry { unsigned long handle; unsigned long element; }; - unsigned long value; + unsigned long flags; #ifdef CONFIG_ZRAM_MEMORY_TRACKING ktime_t ac_time; #endif @@ -79,6 +81,13 @@ struct zram_stats { atomic64_t pages_stored; /* no. of pages currently stored */ atomic_long_t max_used_pages; /* no. of maximum pages stored */ atomic64_t writestall; /* no. of write slow paths */ + atomic64_t miss_free; /* no. of missed free */ +#ifdef CONFIG_ZRAM_WRITEBACK + atomic64_t bd_count; /* no. of pages in backing device */ + atomic64_t bd_reads; /* no. of reads from backing device */ + atomic64_t bd_writes; /* no. of writes from backing device */ + atomic64_t bd_wb_limit; /* writeback limit of backing device */ +#endif }; struct zram { @@ -104,13 +113,13 @@ struct zram { * zram is claimed so open request will be failed */ bool claim; /* Protected by bdev->bd_mutex */ -#ifdef CONFIG_ZRAM_WRITEBACK struct file *backing_dev; + bool stop_writeback; +#ifdef CONFIG_ZRAM_WRITEBACK struct block_device *bdev; unsigned int old_block_size; unsigned long *bitmap; unsigned long nr_pages; - spinlock_t bitmap_lock; #endif #ifdef CONFIG_ZRAM_MEMORY_TRACKING struct dentry *debugfs_dir; diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c index e3e4d929e74f..d5d6e6e5da3b 100644 --- a/drivers/bluetooth/btbcm.c +++ b/drivers/bluetooth/btbcm.c @@ -33,6 +33,8 @@ #define VERSION "0.1" #define BDADDR_BCM20702A0 (&(bdaddr_t) {{0x00, 0xa0, 0x02, 0x70, 0x20, 0x00}}) +#define BDADDR_BCM20702A1 (&(bdaddr_t) {{0x00, 0x00, 0xa0, 0x02, 0x70, 0x20}}) +#define BDADDR_BCM43430A0 (&(bdaddr_t) {{0xac, 0x1f, 0x12, 0xa0, 0x43, 0x43}}) #define BDADDR_BCM4324B3 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb3, 0x24, 0x43}}) #define BDADDR_BCM4330B1 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb1, 0x30, 0x43}}) @@ -64,15 +66,23 @@ int btbcm_check_bdaddr(struct hci_dev *hdev) * The address 00:20:70:02:A0:00 indicates a BCM20702A0 controller * with no configured address. * + * The address 20:70:02:A0:00:00 indicates a BCM20702A1 controller + * with no configured address. + * * The address 43:24:B3:00:00:00 indicates a BCM4324B3 controller * with waiting for configuration state. * * The address 43:30:B1:00:00:00 indicates a BCM4330B1 controller * with waiting for configuration state. + * + * The address 43:43:A0:12:1F:AC indicates a BCM43430A0 controller + * with no configured address. 
*/ if (!bacmp(&bda->bdaddr, BDADDR_BCM20702A0) || + !bacmp(&bda->bdaddr, BDADDR_BCM20702A1) || !bacmp(&bda->bdaddr, BDADDR_BCM4324B3) || - !bacmp(&bda->bdaddr, BDADDR_BCM4330B1)) { + !bacmp(&bda->bdaddr, BDADDR_BCM4330B1) || + !bacmp(&bda->bdaddr, BDADDR_BCM43430A0)) { bt_dev_info(hdev, "BCM: Using default device address (%pMR)", &bda->bdaddr); set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks); @@ -330,6 +340,8 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = { { 0x2209, "BCM43430A1" }, /* 001.002.009 */ { 0x6119, "BCM4345C0" }, /* 003.001.025 */ { 0x230f, "BCM4356A2" }, /* 001.003.015 */ + { 0x220e, "BCM20702A1" }, /* 001.002.014 */ + { 0x4217, "BCM4329B1" }, /* 002.002.023 */ { } }; diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 7439a7eb50ac..4761499db9ee 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -344,6 +344,7 @@ static const struct usb_device_id blacklist_table[] = { /* Intel Bluetooth devices */ { USB_DEVICE(0x8087, 0x0025), .driver_info = BTUSB_INTEL_NEW }, { USB_DEVICE(0x8087, 0x0026), .driver_info = BTUSB_INTEL_NEW }, + { USB_DEVICE(0x8087, 0x0029), .driver_info = BTUSB_INTEL_NEW }, { USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR }, { USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL }, { USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL }, @@ -1935,10 +1936,8 @@ static void btusb_intel_bootup(struct btusb_data *data, const void *ptr, if (len != sizeof(*evt)) return; - if (test_and_clear_bit(BTUSB_BOOTING, &data->flags)) { - smp_mb__after_atomic(); + if (test_and_clear_bit(BTUSB_BOOTING, &data->flags)) wake_up_bit(&data->flags, BTUSB_BOOTING); - } } static void btusb_intel_secure_send_result(struct btusb_data *data, @@ -1953,10 +1952,8 @@ static void btusb_intel_secure_send_result(struct btusb_data *data, set_bit(BTUSB_FIRMWARE_FAILED, &data->flags); if (test_and_clear_bit(BTUSB_DOWNLOADING, &data->flags) && - test_bit(BTUSB_FIRMWARE_LOADED, &data->flags)) { - smp_mb__after_atomic(); + test_bit(BTUSB_FIRMWARE_LOADED, &data->flags)) wake_up_bit(&data->flags, BTUSB_DOWNLOADING); - } } static int btusb_recv_event_intel(struct hci_dev *hdev, struct sk_buff *skb) @@ -2055,6 +2052,35 @@ static int btusb_send_frame_intel(struct hci_dev *hdev, struct sk_buff *skb) return -EILSEQ; } +static bool btusb_setup_intel_new_get_fw_name(struct intel_version *ver, + struct intel_boot_params *params, + char *fw_name, size_t len, + const char *suffix) +{ + switch (ver->hw_variant) { + case 0x0b: /* SfP */ + case 0x0c: /* WsP */ + snprintf(fw_name, len, "intel/ibt-%u-%u.%s", + le16_to_cpu(ver->hw_variant), + le16_to_cpu(params->dev_revid), + suffix); + break; + case 0x11: /* JfP */ + case 0x12: /* ThP */ + case 0x13: /* HrP */ + case 0x14: /* CcP */ + snprintf(fw_name, len, "intel/ibt-%u-%u-%u.%s", + le16_to_cpu(ver->hw_variant), + le16_to_cpu(ver->hw_revision), + le16_to_cpu(ver->fw_revision), + suffix); + break; + default: + return false; + } + return true; +} + static int btusb_setup_intel_new(struct hci_dev *hdev) { struct btusb_data *data = hci_get_drvdata(hdev); @@ -2106,7 +2132,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev) case 0x11: /* JfP */ case 0x12: /* ThP */ case 0x13: /* HrP */ - case 0x14: /* QnJ, IcP */ + case 0x14: /* CcP */ break; default: bt_dev_err(hdev, "Unsupported Intel hardware variant (%u)", @@ -2190,23 +2216,9 @@ static int btusb_setup_intel_new(struct hci_dev *hdev) * ibt-<hw_variant>-<hw_revision>-<fw_revision>.sfi.
* */ - switch (ver.hw_variant) { - case 0x0b: /* SfP */ - case 0x0c: /* WsP */ - snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u.sfi", - le16_to_cpu(ver.hw_variant), - le16_to_cpu(params.dev_revid)); - break; - case 0x11: /* JfP */ - case 0x12: /* ThP */ - case 0x13: /* HrP */ - case 0x14: /* QnJ, IcP */ - snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u-%u.sfi", - le16_to_cpu(ver.hw_variant), - le16_to_cpu(ver.hw_revision), - le16_to_cpu(ver.fw_revision)); - break; - default: + err = btusb_setup_intel_new_get_fw_name(&ver, ¶ms, fwname, + sizeof(fwname), "sfi"); + if (!err) { bt_dev_err(hdev, "Unsupported Intel firmware naming"); return -EINVAL; } @@ -2222,23 +2234,9 @@ static int btusb_setup_intel_new(struct hci_dev *hdev) /* Save the DDC file name for later use to apply once the firmware * downloading is done. */ - switch (ver.hw_variant) { - case 0x0b: /* SfP */ - case 0x0c: /* WsP */ - snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u.ddc", - le16_to_cpu(ver.hw_variant), - le16_to_cpu(params.dev_revid)); - break; - case 0x11: /* JfP */ - case 0x12: /* ThP */ - case 0x13: /* HrP */ - case 0x14: /* QnJ, IcP */ - snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u-%u.ddc", - le16_to_cpu(ver.hw_variant), - le16_to_cpu(ver.hw_revision), - le16_to_cpu(ver.fw_revision)); - break; - default: + err = btusb_setup_intel_new_get_fw_name(&ver, ¶ms, fwname, + sizeof(fwname), "ddc"); + if (!err) { bt_dev_err(hdev, "Unsupported Intel firmware naming"); return -EINVAL; } diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c index ddbd8c6a0ceb..ddbe518c3e5b 100644 --- a/drivers/bluetooth/hci_bcm.c +++ b/drivers/bluetooth/hci_bcm.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include #include @@ -51,8 +52,16 @@ #define BCM_LM_DIAG_PKT 0x07 #define BCM_LM_DIAG_SIZE 63 +#define BCM_TYPE49_PKT 0x31 +#define BCM_TYPE49_SIZE 0 + +#define BCM_TYPE52_PKT 0x34 +#define BCM_TYPE52_SIZE 0 + #define BCM_AUTOSUSPEND_DELAY 5000 /* default autosleep delay */ +#define BCM_NUM_SUPPLIES 2 + /** * struct bcm_device - device driver resources * @serdev_hu: HCI UART controller struct @@ -71,8 +80,10 @@ * @btlp: Apple ACPI method to toggle BT_WAKE pin ("Bluetooth Low Power") * @btpu: Apple ACPI method to drive BT_REG_ON pin high ("Bluetooth Power Up") * @btpd: Apple ACPI method to drive BT_REG_ON pin low ("Bluetooth Power Down") - * @clk: clock used by Bluetooth device - * @clk_enabled: whether @clk is prepared and enabled + * @txco_clk: external reference frequency clock used by Bluetooth device + * @lpo_clk: external LPO clock used by Bluetooth device + * @supplies: VBAT and VDDIO supplies used by Bluetooth device + * @res_enabled: whether clocks and supplies are prepared and enabled * @init_speed: default baudrate of Bluetooth device; * the host UART is initially set to this baudrate so that * it can configure the Bluetooth device for @oper_speed @@ -102,8 +113,10 @@ struct bcm_device { int gpio_int_idx; #endif - struct clk *clk; - bool clk_enabled; + struct clk *txco_clk; + struct clk *lpo_clk; + struct regulator_bulk_data supplies[BCM_NUM_SUPPLIES]; + bool res_enabled; u32 init_speed; u32 oper_speed; @@ -214,32 +227,59 @@ static int bcm_gpio_set_power(struct bcm_device *dev, bool powered) { int err; - if (powered && !IS_ERR(dev->clk) && !dev->clk_enabled) { - err = clk_prepare_enable(dev->clk); + if (powered && !dev->res_enabled) { + err = regulator_bulk_enable(BCM_NUM_SUPPLIES, dev->supplies); if (err) return err; + + /* LPO clock needs to be 32.768 kHz */ + err = 
clk_set_rate(dev->lpo_clk, 32768); + if (err) { + dev_err(dev->dev, "Could not set LPO clock rate\n"); + goto err_regulator_disable; + } + + err = clk_prepare_enable(dev->lpo_clk); + if (err) + goto err_regulator_disable; + + err = clk_prepare_enable(dev->txco_clk); + if (err) + goto err_lpo_clk_disable; } err = dev->set_shutdown(dev, powered); if (err) - goto err_clk_disable; + goto err_txco_clk_disable; err = dev->set_device_wakeup(dev, powered); if (err) goto err_revert_shutdown; - if (!powered && !IS_ERR(dev->clk) && dev->clk_enabled) - clk_disable_unprepare(dev->clk); + if (!powered && dev->res_enabled) { + clk_disable_unprepare(dev->txco_clk); + clk_disable_unprepare(dev->lpo_clk); + regulator_bulk_disable(BCM_NUM_SUPPLIES, dev->supplies); + } + + /* wait for device to power on and come out of reset */ + usleep_range(10000, 20000); - dev->clk_enabled = powered; + dev->res_enabled = powered; return 0; err_revert_shutdown: dev->set_shutdown(dev, !powered); -err_clk_disable: - if (powered && !IS_ERR(dev->clk) && !dev->clk_enabled) - clk_disable_unprepare(dev->clk); +err_txco_clk_disable: + if (powered && !dev->res_enabled) + clk_disable_unprepare(dev->txco_clk); +err_lpo_clk_disable: + if (powered && !dev->res_enabled) + clk_disable_unprepare(dev->lpo_clk); +err_regulator_disable: + if (powered && !dev->res_enabled) + regulator_bulk_disable(BCM_NUM_SUPPLIES, dev->supplies); return err; } @@ -561,12 +601,28 @@ finalize: .lsize = 0, \ .maxlen = BCM_NULL_SIZE +#define BCM_RECV_TYPE49 \ + .type = BCM_TYPE49_PKT, \ + .hlen = BCM_TYPE49_SIZE, \ + .loff = 0, \ + .lsize = 0, \ + .maxlen = BCM_TYPE49_SIZE + +#define BCM_RECV_TYPE52 \ + .type = BCM_TYPE52_PKT, \ + .hlen = BCM_TYPE52_SIZE, \ + .loff = 0, \ + .lsize = 0, \ + .maxlen = BCM_TYPE52_SIZE + static const struct h4_recv_pkt bcm_recv_pkts[] = { { H4_RECV_ACL, .recv = hci_recv_frame }, { H4_RECV_SCO, .recv = hci_recv_frame }, { H4_RECV_EVENT, .recv = hci_recv_frame }, { BCM_RECV_LM_DIAG, .recv = hci_recv_diag }, { BCM_RECV_NULL, .recv = hci_recv_diag }, + { BCM_RECV_TYPE49, .recv = hci_recv_diag }, + { BCM_RECV_TYPE52, .recv = hci_recv_diag }, }; static int bcm_recv(struct hci_uart *hu, const void *data, int count) @@ -896,16 +952,57 @@ static int bcm_gpio_set_shutdown(struct bcm_device *dev, bool powered) return 0; } +/* Try a bunch of names for TXCO */ +static struct clk *bcm_get_txco(struct device *dev) +{ + struct clk *clk; + + /* New explicit name */ + clk = devm_clk_get(dev, "txco"); + if (!IS_ERR(clk) || PTR_ERR(clk) == -EPROBE_DEFER) + return clk; + + /* Deprecated name */ + clk = devm_clk_get(dev, "extclk"); + if (!IS_ERR(clk) || PTR_ERR(clk) == -EPROBE_DEFER) + return clk; + + /* Original code used no name at all */ + return devm_clk_get(dev, NULL); +} + static int bcm_get_resources(struct bcm_device *dev) { const struct dmi_system_id *dmi_id; + int err; dev->name = dev_name(dev->dev); if (x86_apple_machine && !bcm_apple_get_resources(dev)) return 0; - dev->clk = devm_clk_get(dev->dev, NULL); + dev->txco_clk = bcm_get_txco(dev->dev); + + /* Handle deferred probing */ + if (dev->txco_clk == ERR_PTR(-EPROBE_DEFER)) + return PTR_ERR(dev->txco_clk); + + /* Ignore all other errors as before */ + if (IS_ERR(dev->txco_clk)) + dev->txco_clk = NULL; + + dev->lpo_clk = devm_clk_get(dev->dev, "lpo"); + if (dev->lpo_clk == ERR_PTR(-EPROBE_DEFER)) + return PTR_ERR(dev->lpo_clk); + + if (IS_ERR(dev->lpo_clk)) + dev->lpo_clk = NULL; + + /* Check if we accidentally fetched the lpo clock twice */ + if (dev->lpo_clk && clk_is_match(dev->lpo_clk, 
dev->txco_clk)) { + devm_clk_put(dev->dev, dev->txco_clk); + dev->txco_clk = NULL; + } dev->device_wakeup = devm_gpiod_get_optional(dev->dev, "device-wakeup", GPIOD_OUT_LOW); @@ -920,6 +1017,13 @@ static int bcm_get_resources(struct bcm_device *dev) dev->set_device_wakeup = bcm_gpio_set_device_wakeup; dev->set_shutdown = bcm_gpio_set_shutdown; + dev->supplies[0].supply = "vbat"; + dev->supplies[1].supply = "vddio"; + err = devm_regulator_bulk_get(dev->dev, BCM_NUM_SUPPLIES, + dev->supplies); + if (err) + return err; + /* IRQ can be declared in ACPI table as Interrupt or GpioInt */ if (dev->irq <= 0) { struct gpio_desc *gpio; @@ -1314,6 +1418,8 @@ static void bcm_serdev_remove(struct serdev_device *serdev) #ifdef CONFIG_OF static const struct of_device_id bcm_bluetooth_of_match[] = { + { .compatible = "brcm,bcm20702a1" }, + { .compatible = "brcm,bcm4330-bt" }, { .compatible = "brcm,bcm43438-bt" }, { }, }; diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c index 8eede1197cd2..069d1c8fde73 100644 --- a/drivers/bluetooth/hci_h5.c +++ b/drivers/bluetooth/hci_h5.c @@ -115,6 +115,8 @@ struct h5_vnd { int (*setup)(struct h5 *h5); void (*open)(struct h5 *h5); void (*close)(struct h5 *h5); + int (*suspend)(struct h5 *h5); + int (*resume)(struct h5 *h5); const struct acpi_gpio_mapping *acpi_gpio_map; }; @@ -841,6 +843,28 @@ static void h5_serdev_remove(struct serdev_device *serdev) hci_uart_unregister_device(&h5->serdev_hu); } +static int __maybe_unused h5_serdev_suspend(struct device *dev) +{ + struct h5 *h5 = dev_get_drvdata(dev); + int ret = 0; + + if (h5->vnd && h5->vnd->suspend) + ret = h5->vnd->suspend(h5); + + return ret; +} + +static int __maybe_unused h5_serdev_resume(struct device *dev) +{ + struct h5 *h5 = dev_get_drvdata(dev); + int ret = 0; + + if (h5->vnd && h5->vnd->resume) + ret = h5->vnd->resume(h5); + + return ret; +} + #ifdef CONFIG_BT_HCIUART_RTL static int h5_btrtl_setup(struct h5 *h5) { @@ -907,6 +931,56 @@ static void h5_btrtl_close(struct h5 *h5) gpiod_set_value_cansleep(h5->enable_gpio, 0); } +/* Suspend/resume support. On many devices the RTL BT device loses power during + * suspend/resume, causing it to lose its firmware and all state. So we simply + * turn it off on suspend and reprobe on resume. This mirrors how RTL devices + * are handled in the USB driver, where the USB_QUIRK_RESET_RESUME is used which + * also causes a reprobe on resume. 
+ */ +static int h5_btrtl_suspend(struct h5 *h5) +{ + serdev_device_set_flow_control(h5->hu->serdev, false); + gpiod_set_value_cansleep(h5->device_wake_gpio, 0); + gpiod_set_value_cansleep(h5->enable_gpio, 0); + return 0; +} + +struct h5_btrtl_reprobe { + struct device *dev; + struct work_struct work; +}; + +static void h5_btrtl_reprobe_worker(struct work_struct *work) +{ + struct h5_btrtl_reprobe *reprobe = + container_of(work, struct h5_btrtl_reprobe, work); + int ret; + + ret = device_reprobe(reprobe->dev); + if (ret && ret != -EPROBE_DEFER) + dev_err(reprobe->dev, "Reprobe error %d\n", ret); + + put_device(reprobe->dev); + kfree(reprobe); + module_put(THIS_MODULE); +} + +static int h5_btrtl_resume(struct h5 *h5) +{ + struct h5_btrtl_reprobe *reprobe; + + reprobe = kzalloc(sizeof(*reprobe), GFP_KERNEL); + if (!reprobe) + return -ENOMEM; + + __module_get(THIS_MODULE); + + INIT_WORK(&reprobe->work, h5_btrtl_reprobe_worker); + reprobe->dev = get_device(&h5->hu->serdev->dev); + queue_work(system_long_wq, &reprobe->work); + return 0; +} + static const struct acpi_gpio_params btrtl_device_wake_gpios = { 0, 0, false }; static const struct acpi_gpio_params btrtl_enable_gpios = { 1, 0, false }; static const struct acpi_gpio_params btrtl_host_wake_gpios = { 2, 0, false }; @@ -921,6 +995,8 @@ static struct h5_vnd rtl_vnd = { .setup = h5_btrtl_setup, .open = h5_btrtl_open, .close = h5_btrtl_close, + .suspend = h5_btrtl_suspend, + .resume = h5_btrtl_resume, .acpi_gpio_map = acpi_btrtl_gpios, }; #endif @@ -935,12 +1011,17 @@ static const struct acpi_device_id h5_acpi_match[] = { MODULE_DEVICE_TABLE(acpi, h5_acpi_match); #endif +static const struct dev_pm_ops h5_serdev_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(h5_serdev_suspend, h5_serdev_resume) +}; + static struct serdev_device_driver h5_serdev_driver = { .probe = h5_serdev_probe, .remove = h5_serdev_remove, .driver = { .name = "hci_uart_h5", .acpi_match_table = ACPI_PTR(h5_acpi_match), + .pm = &h5_serdev_pm_ops, }, }; diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c index 46ace321bf60..f31410526c57 100644 --- a/drivers/bluetooth/hci_intel.c +++ b/drivers/bluetooth/hci_intel.c @@ -596,8 +596,8 @@ static int intel_setup(struct hci_uart *hu) * is in bootloader mode or if it already has operational firmware * loaded. 
*/ - err = btintel_read_version(hdev, &ver); - if (err) + err = btintel_read_version(hdev, &ver); + if (err) return err; /* The hardware platform number has a fixed value of 0x37 and @@ -909,10 +909,8 @@ static int intel_recv_event(struct hci_dev *hdev, struct sk_buff *skb) set_bit(STATE_FIRMWARE_FAILED, &intel->flags); if (test_and_clear_bit(STATE_DOWNLOADING, &intel->flags) && - test_bit(STATE_FIRMWARE_LOADED, &intel->flags)) { - smp_mb__after_atomic(); + test_bit(STATE_FIRMWARE_LOADED, &intel->flags)) wake_up_bit(&intel->flags, STATE_DOWNLOADING); - } /* When switching to the operational firmware the device * sends a vendor specific event indicating that the bootup @@ -920,10 +918,8 @@ static int intel_recv_event(struct hci_dev *hdev, struct sk_buff *skb) */ } else if (skb->len == 9 && hdr->evt == 0xff && hdr->plen == 0x07 && skb->data[2] == 0x02) { - if (test_and_clear_bit(STATE_BOOTING, &intel->flags)) { - smp_mb__after_atomic(); + if (test_and_clear_bit(STATE_BOOTING, &intel->flags)) wake_up_bit(&intel->flags, STATE_BOOTING); - } } recv: return hci_recv_frame(hdev, skb); @@ -960,17 +956,13 @@ static int intel_recv_lpm(struct hci_dev *hdev, struct sk_buff *skb) break; case LPM_OP_SUSPEND_ACK: set_bit(STATE_SUSPENDED, &intel->flags); - if (test_and_clear_bit(STATE_LPM_TRANSACTION, &intel->flags)) { - smp_mb__after_atomic(); + if (test_and_clear_bit(STATE_LPM_TRANSACTION, &intel->flags)) wake_up_bit(&intel->flags, STATE_LPM_TRANSACTION); - } break; case LPM_OP_RESUME_ACK: clear_bit(STATE_SUSPENDED, &intel->flags); - if (test_and_clear_bit(STATE_LPM_TRANSACTION, &intel->flags)) { - smp_mb__after_atomic(); + if (test_and_clear_bit(STATE_LPM_TRANSACTION, &intel->flags)) wake_up_bit(&intel->flags, STATE_LPM_TRANSACTION); - } break; default: bt_dev_err(hdev, "Unknown LPM opcode (%02x)", lpm->opcode); diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c index c445aa9ac511..490abba94363 100644 --- a/drivers/bluetooth/hci_serdev.c +++ b/drivers/bluetooth/hci_serdev.c @@ -333,9 +333,6 @@ int hci_uart_register_device(struct hci_uart *hu, if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags)) set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks); - if (!test_bit(HCI_UART_RESET_ON_INIT, &hu->hdev_flags)) - set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks); - if (test_bit(HCI_UART_CREATE_AMP, &hu->hdev_flags)) hdev->dev_type = HCI_AMP; else diff --git a/drivers/bus/fsl-mc/dpbp.c b/drivers/bus/fsl-mc/dpbp.c index 17e3c5d2f22e..9003cd3698a5 100644 --- a/drivers/bus/fsl-mc/dpbp.c +++ b/drivers/bus/fsl-mc/dpbp.c @@ -5,7 +5,6 @@ */ #include #include -#include #include "fsl-mc-private.h" diff --git a/drivers/bus/fsl-mc/dpcon.c b/drivers/bus/fsl-mc/dpcon.c index 760555d7946e..97b6fa605e62 100644 --- a/drivers/bus/fsl-mc/dpcon.c +++ b/drivers/bus/fsl-mc/dpcon.c @@ -5,7 +5,6 @@ */ #include #include -#include #include "fsl-mc-private.h" diff --git a/drivers/bus/qcom-ebi2.c b/drivers/bus/qcom-ebi2.c index a6444244c411..56b01e4344d3 100644 --- a/drivers/bus/qcom-ebi2.c +++ b/drivers/bus/qcom-ebi2.c @@ -21,7 +21,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig index 9d03b2ff5df6..2e2ffe7010aa 100644 --- a/drivers/char/Kconfig +++ b/drivers/char/Kconfig @@ -66,6 +66,14 @@ config TTY_PRINTK If unsure, say N. +config TTY_PRINTK_LEVEL + depends on TTY_PRINTK + int "ttyprintk log level (1-7)" + range 1 7 + default "6" + help + Printk log level to use for ttyprintk messages. 
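The btusb and hci_intel hunks above drop smp_mb__after_atomic() before wake_up_bit(). That is safe by the kernel's documented atomic ordering rules, not by luck: value-returning RMW operations such as test_and_clear_bit() are fully ordered on their own, so the standalone barrier was redundant. A minimal sketch with an assumed flag word:

#include <linux/bitops.h>
#include <linux/wait_bit.h>

static void signal_done(unsigned long *flags)
{
	if (test_and_clear_bit(0, flags))	/* fully ordered RMW */
		wake_up_bit(flags, 0);	/* no smp_mb__after_atomic() needed */
}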
+ config PRINTER tristate "Parallel printer support" depends on PARPORT diff --git a/drivers/char/agp/backend.c b/drivers/char/agp/backend.c index 38ffb281df97..004a3ce8ba72 100644 --- a/drivers/char/agp/backend.c +++ b/drivers/char/agp/backend.c @@ -115,9 +115,9 @@ static int agp_find_max(void) long memory, index, result; #if PAGE_SHIFT < 20 - memory = totalram_pages >> (20 - PAGE_SHIFT); + memory = totalram_pages() >> (20 - PAGE_SHIFT); #else - memory = totalram_pages << (PAGE_SHIFT - 20); + memory = totalram_pages() << (PAGE_SHIFT - 20); #endif index = 1; diff --git a/drivers/char/hw_random/bcm2835-rng.c b/drivers/char/hw_random/bcm2835-rng.c index 6767d965c36c..256b0b1d0f26 100644 --- a/drivers/char/hw_random/bcm2835-rng.c +++ b/drivers/char/hw_random/bcm2835-rng.c @@ -1,10 +1,7 @@ -/** +// SPDX-License-Identifier: GPL-2.0 +/* * Copyright (c) 2010-2012 Broadcom. All rights reserved. * Copyright (c) 2013 Lubomir Rintel - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License ("GPL") - * version 2, as published by the Free Software Foundation. */ #include diff --git a/drivers/char/lp.c b/drivers/char/lp.c index 8c4dd1a3bb6a..5c8d780637bd 100644 --- a/drivers/char/lp.c +++ b/drivers/char/lp.c @@ -46,8 +46,8 @@ * lp=auto (assign lp devices to all ports that * have printers attached, as determined * by the IEEE-1284 autoprobe) - * - * lp=reset (reset the printer during + * + * lp=reset (reset the printer during * initialisation) * * lp=off (disable the printer driver entirely) @@ -141,6 +141,7 @@ static DEFINE_MUTEX(lp_mutex); static struct lp_struct lp_table[LP_NO]; +static int port_num[LP_NO]; static unsigned int lp_count = 0; static struct class *lp_class; @@ -166,7 +167,7 @@ static struct parport *console_registered; static void lp_claim_parport_or_block(struct lp_struct *this_lp) { if (!test_and_set_bit(LP_PARPORT_CLAIMED, &this_lp->bits)) { - parport_claim_or_block (this_lp->dev); + parport_claim_or_block(this_lp->dev); } } @@ -174,7 +175,7 @@ static void lp_claim_parport_or_block(struct lp_struct *this_lp) static void lp_release_parport(struct lp_struct *this_lp) { if (test_and_clear_bit(LP_PARPORT_CLAIMED, &this_lp->bits)) { - parport_release (this_lp->dev); + parport_release(this_lp->dev); } } @@ -184,37 +185,37 @@ static int lp_preempt(void *handle) { struct lp_struct *this_lp = (struct lp_struct *)handle; set_bit(LP_PREEMPT_REQUEST, &this_lp->bits); - return (1); + return 1; } -/* +/* * Try to negotiate to a new mode; if unsuccessful negotiate to * compatibility mode. Return the mode we ended up in. 
*/ -static int lp_negotiate(struct parport * port, int mode) +static int lp_negotiate(struct parport *port, int mode) { - if (parport_negotiate (port, mode) != 0) { + if (parport_negotiate(port, mode) != 0) { mode = IEEE1284_MODE_COMPAT; - parport_negotiate (port, mode); + parport_negotiate(port, mode); } - return (mode); + return mode; } static int lp_reset(int minor) { int retval; - lp_claim_parport_or_block (&lp_table[minor]); + lp_claim_parport_or_block(&lp_table[minor]); w_ctr(minor, LP_PSELECP); - udelay (LP_DELAY); + udelay(LP_DELAY); w_ctr(minor, LP_PSELECP | LP_PINITP); retval = r_str(minor); - lp_release_parport (&lp_table[minor]); + lp_release_parport(&lp_table[minor]); return retval; } -static void lp_error (int minor) +static void lp_error(int minor) { DEFINE_WAIT(wait); int polling; @@ -223,12 +224,15 @@ static void lp_error (int minor) return; polling = lp_table[minor].dev->port->irq == PARPORT_IRQ_NONE; - if (polling) lp_release_parport (&lp_table[minor]); + if (polling) + lp_release_parport(&lp_table[minor]); prepare_to_wait(&lp_table[minor].waitq, &wait, TASK_INTERRUPTIBLE); schedule_timeout(LP_TIMEOUT_POLLED); finish_wait(&lp_table[minor].waitq, &wait); - if (polling) lp_claim_parport_or_block (&lp_table[minor]); - else parport_yield_blocking (lp_table[minor].dev); + if (polling) + lp_claim_parport_or_block(&lp_table[minor]); + else + parport_yield_blocking(lp_table[minor].dev); } static int lp_check_status(int minor) @@ -259,7 +263,7 @@ static int lp_check_status(int minor) error = -EIO; } else { last = 0; /* Come here if LP_CAREFUL is set and no - errors are reported. */ + errors are reported. */ } lp_table[minor].last_error = last; @@ -276,14 +280,14 @@ static int lp_wait_ready(int minor, int nonblock) /* If we're not in compatibility mode, we're ready now! */ if (lp_table[minor].current_mode != IEEE1284_MODE_COMPAT) { - return (0); + return 0; } do { - error = lp_check_status (minor); + error = lp_check_status(minor); if (error && (nonblock || (LP_F(minor) & LP_ABORT))) break; - if (signal_pending (current)) { + if (signal_pending(current)) { error = -EINTR; break; } @@ -291,8 +295,8 @@ static int lp_wait_ready(int minor, int nonblock) return error; } -static ssize_t lp_write(struct file * file, const char __user * buf, - size_t count, loff_t *ppos) +static ssize_t lp_write(struct file *file, const char __user *buf, + size_t count, loff_t *ppos) { unsigned int minor = iminor(file_inode(file)); struct parport *port = lp_table[minor].dev->port; @@ -317,26 +321,26 @@ static ssize_t lp_write(struct file * file, const char __user * buf, if (mutex_lock_interruptible(&lp_table[minor].port_mutex)) return -EINTR; - if (copy_from_user (kbuf, buf, copy_size)) { + if (copy_from_user(kbuf, buf, copy_size)) { retv = -EFAULT; goto out_unlock; } - /* Claim Parport or sleep until it becomes available - */ - lp_claim_parport_or_block (&lp_table[minor]); + /* Claim Parport or sleep until it becomes available + */ + lp_claim_parport_or_block(&lp_table[minor]); /* Go to the proper mode. */ - lp_table[minor].current_mode = lp_negotiate (port, - lp_table[minor].best_mode); + lp_table[minor].current_mode = lp_negotiate(port, + lp_table[minor].best_mode); - parport_set_timeout (lp_table[minor].dev, - (nonblock ? PARPORT_INACTIVITY_O_NONBLOCK - : lp_table[minor].timeout)); + parport_set_timeout(lp_table[minor].dev, + (nonblock ? 
PARPORT_INACTIVITY_O_NONBLOCK + : lp_table[minor].timeout)); - if ((retv = lp_wait_ready (minor, nonblock)) == 0) + if ((retv = lp_wait_ready(minor, nonblock)) == 0) do { /* Write the data. */ - written = parport_write (port, kbuf, copy_size); + written = parport_write(port, kbuf, copy_size); if (written > 0) { copy_size -= written; count -= written; @@ -344,7 +348,7 @@ static ssize_t lp_write(struct file * file, const char __user * buf, retv += written; } - if (signal_pending (current)) { + if (signal_pending(current)) { if (retv == 0) retv = -EINTR; @@ -355,11 +359,11 @@ static ssize_t lp_write(struct file * file, const char __user * buf, /* incomplete write -> check error ! */ int error; - parport_negotiate (lp_table[minor].dev->port, - IEEE1284_MODE_COMPAT); + parport_negotiate(lp_table[minor].dev->port, + IEEE1284_MODE_COMPAT); lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; - error = lp_wait_ready (minor, nonblock); + error = lp_wait_ready(minor, nonblock); if (error) { if (retv == 0) @@ -371,13 +375,13 @@ static ssize_t lp_write(struct file * file, const char __user * buf, break; } - parport_yield_blocking (lp_table[minor].dev); - lp_table[minor].current_mode - = lp_negotiate (port, - lp_table[minor].best_mode); + parport_yield_blocking(lp_table[minor].dev); + lp_table[minor].current_mode + = lp_negotiate(port, + lp_table[minor].best_mode); } else if (need_resched()) - schedule (); + schedule(); if (count) { copy_size = count; @@ -389,27 +393,27 @@ static ssize_t lp_write(struct file * file, const char __user * buf, retv = -EFAULT; break; } - } + } } while (count > 0); - if (test_and_clear_bit(LP_PREEMPT_REQUEST, + if (test_and_clear_bit(LP_PREEMPT_REQUEST, &lp_table[minor].bits)) { printk(KERN_INFO "lp%d releasing parport\n", minor); - parport_negotiate (lp_table[minor].dev->port, - IEEE1284_MODE_COMPAT); + parport_negotiate(lp_table[minor].dev->port, + IEEE1284_MODE_COMPAT); lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; - lp_release_parport (&lp_table[minor]); + lp_release_parport(&lp_table[minor]); } out_unlock: mutex_unlock(&lp_table[minor].port_mutex); - return retv; + return retv; } #ifdef CONFIG_PARPORT_1284 /* Status readback conforming to ieee1284 */ -static ssize_t lp_read(struct file * file, char __user * buf, +static ssize_t lp_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { DEFINE_WAIT(wait); @@ -426,21 +430,21 @@ static ssize_t lp_read(struct file * file, char __user * buf, if (mutex_lock_interruptible(&lp_table[minor].port_mutex)) return -EINTR; - lp_claim_parport_or_block (&lp_table[minor]); + lp_claim_parport_or_block(&lp_table[minor]); - parport_set_timeout (lp_table[minor].dev, - (nonblock ? PARPORT_INACTIVITY_O_NONBLOCK - : lp_table[minor].timeout)); + parport_set_timeout(lp_table[minor].dev, + (nonblock ? PARPORT_INACTIVITY_O_NONBLOCK + : lp_table[minor].timeout)); - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); - if (parport_negotiate (lp_table[minor].dev->port, - IEEE1284_MODE_NIBBLE)) { + parport_negotiate(lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); + if (parport_negotiate(lp_table[minor].dev->port, + IEEE1284_MODE_NIBBLE)) { retval = -EIO; goto out; } while (retval == 0) { - retval = parport_read (port, kbuf, count); + retval = parport_read(port, kbuf, count); if (retval > 0) break; @@ -453,11 +457,11 @@ static ssize_t lp_read(struct file * file, char __user * buf, /* Wait for data. 
*/ if (lp_table[minor].dev->port->irq == PARPORT_IRQ_NONE) { - parport_negotiate (lp_table[minor].dev->port, - IEEE1284_MODE_COMPAT); - lp_error (minor); - if (parport_negotiate (lp_table[minor].dev->port, - IEEE1284_MODE_NIBBLE)) { + parport_negotiate(lp_table[minor].dev->port, + IEEE1284_MODE_COMPAT); + lp_error(minor); + if (parport_negotiate(lp_table[minor].dev->port, + IEEE1284_MODE_NIBBLE)) { retval = -EIO; goto out; } @@ -467,18 +471,18 @@ static ssize_t lp_read(struct file * file, char __user * buf, finish_wait(&lp_table[minor].waitq, &wait); } - if (signal_pending (current)) { + if (signal_pending(current)) { retval = -ERESTARTSYS; break; } - cond_resched (); + cond_resched(); } - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); + parport_negotiate(lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); out: - lp_release_parport (&lp_table[minor]); + lp_release_parport(&lp_table[minor]); - if (retval > 0 && copy_to_user (buf, kbuf, retval)) + if (retval > 0 && copy_to_user(buf, kbuf, retval)) retval = -EFAULT; mutex_unlock(&lp_table[minor].port_mutex); @@ -488,7 +492,7 @@ static ssize_t lp_read(struct file * file, char __user * buf, #endif /* IEEE 1284 support */ -static int lp_open(struct inode * inode, struct file * file) +static int lp_open(struct inode *inode, struct file *file) { unsigned int minor = iminor(inode); int ret = 0; @@ -513,9 +517,9 @@ static int lp_open(struct inode * inode, struct file * file) should most likely only ever be used by the tunelp application. */ if ((LP_F(minor) & LP_ABORTOPEN) && !(file->f_flags & O_NONBLOCK)) { int status; - lp_claim_parport_or_block (&lp_table[minor]); + lp_claim_parport_or_block(&lp_table[minor]); status = r_str(minor); - lp_release_parport (&lp_table[minor]); + lp_release_parport(&lp_table[minor]); if (status & LP_POUTPA) { printk(KERN_INFO "lp%d out of paper\n", minor); LP_F(minor) &= ~LP_BUSY; @@ -540,32 +544,32 @@ static int lp_open(struct inode * inode, struct file * file) goto out; } /* Determine if the peripheral supports ECP mode */ - lp_claim_parport_or_block (&lp_table[minor]); + lp_claim_parport_or_block(&lp_table[minor]); if ( (lp_table[minor].dev->port->modes & PARPORT_MODE_ECP) && - !parport_negotiate (lp_table[minor].dev->port, - IEEE1284_MODE_ECP)) { - printk (KERN_INFO "lp%d: ECP mode\n", minor); + !parport_negotiate(lp_table[minor].dev->port, + IEEE1284_MODE_ECP)) { + printk(KERN_INFO "lp%d: ECP mode\n", minor); lp_table[minor].best_mode = IEEE1284_MODE_ECP; } else { lp_table[minor].best_mode = IEEE1284_MODE_COMPAT; } /* Leave peripheral in compatibility mode */ - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); - lp_release_parport (&lp_table[minor]); + parport_negotiate(lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); + lp_release_parport(&lp_table[minor]); lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; out: mutex_unlock(&lp_mutex); return ret; } -static int lp_release(struct inode * inode, struct file * file) +static int lp_release(struct inode *inode, struct file *file) { unsigned int minor = iminor(inode); - lp_claim_parport_or_block (&lp_table[minor]); - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); + lp_claim_parport_or_block(&lp_table[minor]); + parport_negotiate(lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; - lp_release_parport (&lp_table[minor]); + lp_release_parport(&lp_table[minor]); kfree(lp_table[minor].lp_buffer); lp_table[minor].lp_buffer = NULL; LP_F(minor) &= ~LP_BUSY; @@ 
-615,7 +619,7 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd, case LPWAIT: LP_WAIT(minor) = arg; break; - case LPSETIRQ: + case LPSETIRQ: return -EINVAL; break; case LPGETIRQ: @@ -626,9 +630,9 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd, case LPGETSTATUS: if (mutex_lock_interruptible(&lp_table[minor].port_mutex)) return -EINTR; - lp_claim_parport_or_block (&lp_table[minor]); + lp_claim_parport_or_block(&lp_table[minor]); status = r_str(minor); - lp_release_parport (&lp_table[minor]); + lp_release_parport(&lp_table[minor]); mutex_unlock(&lp_table[minor].port_mutex); if (copy_to_user(argp, &status, sizeof(int))) @@ -647,8 +651,8 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd, sizeof(struct lp_stats)); break; #endif - case LPGETFLAGS: - status = LP_F(minor); + case LPGETFLAGS: + status = LP_F(minor); if (copy_to_user(argp, &status, sizeof(int))) return -EFAULT; break; @@ -801,31 +805,31 @@ static const struct file_operations lp_fops = { /* The console must be locked when we get here. */ -static void lp_console_write (struct console *co, const char *s, - unsigned count) +static void lp_console_write(struct console *co, const char *s, + unsigned count) { struct pardevice *dev = lp_table[CONSOLE_LP].dev; struct parport *port = dev->port; ssize_t written; - if (parport_claim (dev)) + if (parport_claim(dev)) /* Nothing we can do. */ return; - parport_set_timeout (dev, 0); + parport_set_timeout(dev, 0); /* Go to compatibility mode. */ - parport_negotiate (port, IEEE1284_MODE_COMPAT); + parport_negotiate(port, IEEE1284_MODE_COMPAT); do { /* Write the data, converting LF->CRLF as we go. */ ssize_t canwrite = count; - char *lf = memchr (s, '\n', count); + char *lf = memchr(s, '\n', count); if (lf) canwrite = lf - s; if (canwrite > 0) { - written = parport_write (port, s, canwrite); + written = parport_write(port, s, canwrite); if (written <= 0) continue; @@ -843,14 +847,14 @@ static void lp_console_write (struct console *co, const char *s, s++; count--; do { - written = parport_write (port, crlf, i); + written = parport_write(port, crlf, i); if (written > 0) i -= written, crlf += written; } while (i > 0 && (CONSOLE_LP_STRICT || written > 0)); } } while (count > 0 && (CONSOLE_LP_STRICT || written > 0)); - parport_release (dev); + parport_release(dev); } static struct console lpcons = { @@ -871,7 +875,7 @@ module_param_array(parport, charp, NULL, 0); module_param(reset, bool, 0); #ifndef MODULE -static int __init lp_setup (char *str) +static int __init lp_setup(char *str) { static int parport_ptr; int x; @@ -908,9 +912,13 @@ static int __init lp_setup (char *str) static int lp_register(int nr, struct parport *port) { - lp_table[nr].dev = parport_register_device(port, "lp", - lp_preempt, NULL, NULL, 0, - (void *) &lp_table[nr]); + struct pardev_cb ppdev_cb; + + memset(&ppdev_cb, 0, sizeof(ppdev_cb)); + ppdev_cb.preempt = lp_preempt; + ppdev_cb.private = &lp_table[nr]; + lp_table[nr].dev = parport_register_dev_model(port, "lp", + &ppdev_cb, nr); if (lp_table[nr].dev == NULL) return 1; lp_table[nr].flags |= LP_EXIST; @@ -921,7 +929,7 @@ static int lp_register(int nr, struct parport *port) device_create(lp_class, port->dev, MKDEV(LP_MAJOR, nr), NULL, "lp%d", nr); - printk(KERN_INFO "lp%d: using %s (%s).\n", nr, port->name, + printk(KERN_INFO "lp%d: using %s (%s).\n", nr, port->name, (port->irq == PARPORT_IRQ_NONE)?"polling":"interrupt-driven"); #ifdef CONFIG_LP_CONSOLE @@ -929,17 +937,18 @@ static int lp_register(int nr, struct parport *port) if 
(port->modes & PARPORT_MODE_SAFEININT) { register_console(&lpcons); console_registered = port; - printk (KERN_INFO "lp%d: console ready\n", CONSOLE_LP); + printk(KERN_INFO "lp%d: console ready\n", CONSOLE_LP); } else - printk (KERN_ERR "lp%d: cannot run console on %s\n", - CONSOLE_LP, port->name); + printk(KERN_ERR "lp%d: cannot run console on %s\n", + CONSOLE_LP, port->name); } #endif + port_num[nr] = port->number; return 0; } -static void lp_attach (struct parport *port) +static void lp_attach(struct parport *port) { unsigned int i; @@ -953,7 +962,11 @@ static void lp_attach (struct parport *port) printk(KERN_INFO "lp: ignoring parallel port (max. %d)\n",LP_NO); return; } - if (!lp_register(lp_count, port)) + for (i = 0; i < LP_NO; i++) + if (port_num[i] == -1) + break; + + if (!lp_register(i, port)) lp_count++; break; @@ -969,8 +982,10 @@ static void lp_attach (struct parport *port) } } -static void lp_detach (struct parport *port) +static void lp_detach(struct parport *port) { + int n; + /* Write this some day. */ #ifdef CONFIG_LP_CONSOLE if (console_registered == port) { @@ -978,15 +993,25 @@ static void lp_detach (struct parport *port) console_registered = NULL; } #endif /* CONFIG_LP_CONSOLE */ + + for (n = 0; n < LP_NO; n++) { + if (port_num[n] == port->number) { + port_num[n] = -1; + lp_count--; + device_destroy(lp_class, MKDEV(LP_MAJOR, n)); + parport_unregister_device(lp_table[n].dev); + } + } } static struct parport_driver lp_driver = { .name = "lp", - .attach = lp_attach, + .match_port = lp_attach, .detach = lp_detach, + .devmodel = true, }; -static int __init lp_init (void) +static int __init lp_init(void) { int i, err = 0; @@ -1003,17 +1028,18 @@ static int __init lp_init (void) #ifdef LP_STATS lp_table[i].lastcall = 0; lp_table[i].runchars = 0; - memset (&lp_table[i].stats, 0, sizeof (struct lp_stats)); + memset(&lp_table[i].stats, 0, sizeof(struct lp_stats)); #endif lp_table[i].last_error = 0; - init_waitqueue_head (&lp_table[i].waitq); - init_waitqueue_head (&lp_table[i].dataq); + init_waitqueue_head(&lp_table[i].waitq); + init_waitqueue_head(&lp_table[i].dataq); mutex_init(&lp_table[i].port_mutex); lp_table[i].timeout = 10 * HZ; + port_num[i] = -1; } - if (register_chrdev (LP_MAJOR, "lp", &lp_fops)) { - printk (KERN_ERR "lp: unable to get major %d\n", LP_MAJOR); + if (register_chrdev(LP_MAJOR, "lp", &lp_fops)) { + printk(KERN_ERR "lp: unable to get major %d\n", LP_MAJOR); return -EIO; } @@ -1023,17 +1049,17 @@ static int __init lp_init (void) goto out_reg; } - if (parport_register_driver (&lp_driver)) { - printk (KERN_ERR "lp: unable to register with parport\n"); + if (parport_register_driver(&lp_driver)) { + printk(KERN_ERR "lp: unable to register with parport\n"); err = -EIO; goto out_class; } if (!lp_count) { - printk (KERN_INFO "lp: driver loaded but no devices found\n"); + printk(KERN_INFO "lp: driver loaded but no devices found\n"); #ifndef CONFIG_PARPORT_1284 if (parport_nr[0] == LP_PARPORT_AUTO) - printk (KERN_INFO "lp: (is IEEE 1284 support enabled?)\n"); + printk(KERN_INFO "lp: (is IEEE 1284 support enabled?)\n"); #endif } @@ -1046,7 +1072,7 @@ out_reg: return err; } -static int __init lp_init_module (void) +static int __init lp_init_module(void) { if (parport[0]) { /* The user gave some parameters. Let's see what they were. 
*/ @@ -1060,7 +1086,7 @@ static int __init lp_init_module (void) else { char *ep; unsigned long r = simple_strtoul(parport[n], &ep, 0); - if (ep != parport[n]) + if (ep != parport[n]) parport_nr[n] = r; else { printk(KERN_ERR "lp: bad port specifier `%s'\n", parport[n]); @@ -1074,23 +1100,15 @@ static int __init lp_init_module (void) return lp_init(); } -static void lp_cleanup_module (void) +static void lp_cleanup_module(void) { - unsigned int offset; - - parport_unregister_driver (&lp_driver); + parport_unregister_driver(&lp_driver); #ifdef CONFIG_LP_CONSOLE - unregister_console (&lpcons); + unregister_console(&lpcons); #endif unregister_chrdev(LP_MAJOR, "lp"); - for (offset = 0; offset < LP_NO; offset++) { - if (lp_table[offset].dev == NULL) - continue; - parport_unregister_device(lp_table[offset].dev); - device_destroy(lp_class, MKDEV(LP_MAJOR, offset)); - } class_destroy(lp_class); } diff --git a/drivers/char/random.c b/drivers/char/random.c index 2eb70e76ed35..38c6d1af6d1c 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -265,7 +265,7 @@ #include #include #include -#include +#include #include #include @@ -431,11 +431,10 @@ static int crng_init = 0; #define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt = 0; static unsigned long crng_global_init_time = 0; -#define CRNG_INIT_CNT_THRESH (2*CHACHA20_KEY_SIZE) -static void _extract_crng(struct crng_state *crng, - __u8 out[CHACHA20_BLOCK_SIZE]); +#define CRNG_INIT_CNT_THRESH (2*CHACHA_KEY_SIZE) +static void _extract_crng(struct crng_state *crng, __u8 out[CHACHA_BLOCK_SIZE]); static void _crng_backtrack_protect(struct crng_state *crng, - __u8 tmp[CHACHA20_BLOCK_SIZE], int used); + __u8 tmp[CHACHA_BLOCK_SIZE], int used); static void process_random_ready_list(void); static void _get_random_bytes(void *buf, int nbytes); @@ -863,7 +862,7 @@ static int crng_fast_load(const char *cp, size_t len) } p = (unsigned char *) &primary_crng.state[4]; while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - p[crng_init_cnt % CHACHA20_KEY_SIZE] ^= *cp; + p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp; cp++; crng_init_cnt++; len--; } spin_unlock_irqrestore(&primary_crng.lock, flags); @@ -895,7 +894,7 @@ static int crng_slow_load(const char *cp, size_t len) unsigned long flags; static unsigned char lfsr = 1; unsigned char tmp; - unsigned i, max = CHACHA20_KEY_SIZE; + unsigned i, max = CHACHA_KEY_SIZE; const char * src_buf = cp; char * dest_buf = (char *) &primary_crng.state[4]; @@ -913,8 +912,8 @@ static int crng_slow_load(const char *cp, size_t len) lfsr >>= 1; if (tmp & 1) lfsr ^= 0xE1; - tmp = dest_buf[i % CHACHA20_KEY_SIZE]; - dest_buf[i % CHACHA20_KEY_SIZE] ^= src_buf[i % len] ^ lfsr; + tmp = dest_buf[i % CHACHA_KEY_SIZE]; + dest_buf[i % CHACHA_KEY_SIZE] ^= src_buf[i % len] ^ lfsr; lfsr += (tmp << 3) | (tmp >> 5); } spin_unlock_irqrestore(&primary_crng.lock, flags); @@ -926,7 +925,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) unsigned long flags; int i, num; union { - __u8 block[CHACHA20_BLOCK_SIZE]; + __u8 block[CHACHA_BLOCK_SIZE]; __u32 key[8]; } buf; @@ -937,7 +936,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) } else { _extract_crng(&primary_crng, buf.block); _crng_backtrack_protect(&primary_crng, buf.block, - CHACHA20_KEY_SIZE); + CHACHA_KEY_SIZE); } spin_lock_irqsave(&crng->lock, flags); for (i = 0; i < 8; i++) { @@ -973,7 +972,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) } static void _extract_crng(struct crng_state 
*crng, - __u8 out[CHACHA20_BLOCK_SIZE]) + __u8 out[CHACHA_BLOCK_SIZE]) { unsigned long v, flags; @@ -990,7 +989,7 @@ static void _extract_crng(struct crng_state *crng, spin_unlock_irqrestore(&crng->lock, flags); } -static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]) +static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE]) { struct crng_state *crng = NULL; @@ -1008,14 +1007,14 @@ static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]) * enough) to mutate the CRNG key to provide backtracking protection. */ static void _crng_backtrack_protect(struct crng_state *crng, - __u8 tmp[CHACHA20_BLOCK_SIZE], int used) + __u8 tmp[CHACHA_BLOCK_SIZE], int used) { unsigned long flags; __u32 *s, *d; int i; used = round_up(used, sizeof(__u32)); - if (used + CHACHA20_KEY_SIZE > CHACHA20_BLOCK_SIZE) { + if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) { extract_crng(tmp); used = 0; } @@ -1027,7 +1026,7 @@ static void _crng_backtrack_protect(struct crng_state *crng, spin_unlock_irqrestore(&crng->lock, flags); } -static void crng_backtrack_protect(__u8 tmp[CHACHA20_BLOCK_SIZE], int used) +static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used) { struct crng_state *crng = NULL; @@ -1042,8 +1041,8 @@ static void crng_backtrack_protect(__u8 tmp[CHACHA20_BLOCK_SIZE], int used) static ssize_t extract_crng_user(void __user *buf, size_t nbytes) { - ssize_t ret = 0, i = CHACHA20_BLOCK_SIZE; - __u8 tmp[CHACHA20_BLOCK_SIZE] __aligned(4); + ssize_t ret = 0, i = CHACHA_BLOCK_SIZE; + __u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); int large_request = (nbytes > 256); while (nbytes) { @@ -1057,7 +1056,7 @@ static ssize_t extract_crng_user(void __user *buf, size_t nbytes) } extract_crng(tmp); - i = min_t(int, nbytes, CHACHA20_BLOCK_SIZE); + i = min_t(int, nbytes, CHACHA_BLOCK_SIZE); if (copy_to_user(buf, tmp, i)) { ret = -EFAULT; break; @@ -1622,14 +1621,14 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, */ static void _get_random_bytes(void *buf, int nbytes) { - __u8 tmp[CHACHA20_BLOCK_SIZE] __aligned(4); + __u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); trace_get_random_bytes(nbytes, _RET_IP_); - while (nbytes >= CHACHA20_BLOCK_SIZE) { + while (nbytes >= CHACHA_BLOCK_SIZE) { extract_crng(buf); - buf += CHACHA20_BLOCK_SIZE; - nbytes -= CHACHA20_BLOCK_SIZE; + buf += CHACHA_BLOCK_SIZE; + nbytes -= CHACHA_BLOCK_SIZE; } if (nbytes > 0) { @@ -1637,7 +1636,7 @@ static void _get_random_bytes(void *buf, int nbytes) memcpy(buf, tmp, nbytes); crng_backtrack_protect(tmp, nbytes); } else - crng_backtrack_protect(tmp, CHACHA20_BLOCK_SIZE); + crng_backtrack_protect(tmp, CHACHA_BLOCK_SIZE); memzero_explicit(tmp, sizeof(tmp)); } @@ -2208,8 +2207,8 @@ struct ctl_table random_table[] = { struct batched_entropy { union { - u64 entropy_u64[CHACHA20_BLOCK_SIZE / sizeof(u64)]; - u32 entropy_u32[CHACHA20_BLOCK_SIZE / sizeof(u32)]; + u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)]; + u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)]; }; unsigned int position; }; diff --git a/drivers/char/rtc.c b/drivers/char/rtc.c index 4948c8bda6b1..62b7c721c732 100644 --- a/drivers/char/rtc.c +++ b/drivers/char/rtc.c @@ -866,8 +866,8 @@ static int __init rtc_init(void) #ifdef CONFIG_SPARC32 for_each_node_by_name(ebus_dp, "ebus") { struct device_node *dp; - for (dp = ebus_dp; dp; dp = dp->sibling) { - if (!strcmp(dp->name, "rtc")) { + for_each_child_of_node(ebus_dp, dp) { + if (of_node_name_eq(dp, "rtc")) { op = of_find_device_by_node(dp); if (op) { rtc_port = op->resource[0].start; diff --git a/drivers/char/tlclk.c 
b/drivers/char/tlclk.c index 8eeb4190207d..6d81bb3bb503 100644 --- a/drivers/char/tlclk.c +++ b/drivers/char/tlclk.c @@ -506,28 +506,28 @@ static ssize_t store_select_amcb2_transmit_clock(struct device *d, val = (unsigned char)tmp; spin_lock_irqsave(&event_lock, flags); - if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { - SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x28); - SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); - } else if (val >= CLK_8_592MHz) { - SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x38); - switch (val) { - case CLK_8_592MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); - break; - case CLK_11_184MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); - break; - case CLK_34_368MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); - break; - case CLK_44_736MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); - break; - } - } else - SET_PORT_BITS(TLCLK_REG3, 0xc7, val << 3); - + if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { + SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x28); + SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); + } else if (val >= CLK_8_592MHz) { + SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x38); + switch (val) { + case CLK_8_592MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); + break; + case CLK_11_184MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); + break; + case CLK_34_368MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); + break; + case CLK_44_736MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); + break; + } + } else { + SET_PORT_BITS(TLCLK_REG3, 0xc7, val << 3); + } spin_unlock_irqrestore(&event_lock, flags); return strnlen(buf, count); @@ -548,27 +548,28 @@ static ssize_t store_select_amcb1_transmit_clock(struct device *d, val = (unsigned char)tmp; spin_lock_irqsave(&event_lock, flags); - if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { - SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x5); - SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); - } else if (val >= CLK_8_592MHz) { - SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7); - switch (val) { - case CLK_8_592MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); - break; - case CLK_11_184MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); - break; - case CLK_34_368MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); - break; - case CLK_44_736MHz: - SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); - break; - } - } else - SET_PORT_BITS(TLCLK_REG3, 0xf8, val); + if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { + SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x5); + SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); + } else if (val >= CLK_8_592MHz) { + SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7); + switch (val) { + case CLK_8_592MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); + break; + case CLK_11_184MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); + break; + case CLK_34_368MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); + break; + case CLK_44_736MHz: + SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); + break; + } + } else { + SET_PORT_BITS(TLCLK_REG3, 0xf8, val); + } spin_unlock_irqrestore(&event_lock, flags); return strnlen(buf, count); diff --git a/drivers/char/ttyprintk.c b/drivers/char/ttyprintk.c index 67549ce88cc9..88808dbba486 100644 --- a/drivers/char/ttyprintk.c +++ b/drivers/char/ttyprintk.c @@ -37,6 +37,8 @@ static struct ttyprintk_port tpk_port; */ #define TPK_STR_SIZE 508 /* should be bigger than max expected line length */ #define TPK_MAX_ROOM 4096 /* we could assume 4K for instance */ +#define TPK_PREFIX KERN_SOH __stringify(CONFIG_TTY_PRINTK_LEVEL) + static int tpk_curr; static char tpk_buffer[TPK_STR_SIZE + 4]; @@ -45,7 +47,7 @@ static void tpk_flush(void) { if (tpk_curr > 0) { tpk_buffer[tpk_curr] = '\0'; - pr_info("[U] %s\n", tpk_buffer); + printk(TPK_PREFIX "[U] %s\n", tpk_buffer); tpk_curr = 0; } } diff --git a/drivers/char/virtio_console.c
b/drivers/char/virtio_console.c index 5b5b5d72eab7..fbeb71953526 100644 --- a/drivers/char/virtio_console.c +++ b/drivers/char/virtio_console.c @@ -1309,7 +1309,7 @@ static const struct attribute_group port_attribute_group = { .attrs = port_sysfs_entries, }; -static int debugfs_show(struct seq_file *s, void *data) +static int port_debugfs_show(struct seq_file *s, void *data) { struct port *port = s->private; @@ -1327,18 +1327,7 @@ static int debugfs_show(struct seq_file *s, void *data) return 0; } -static int debugfs_open(struct inode *inode, struct file *file) -{ - return single_open(file, debugfs_show, inode->i_private); -} - -static const struct file_operations port_debugfs_ops = { - .owner = THIS_MODULE, - .open = debugfs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(port_debugfs); static void set_console_size(struct port *port, u16 rows, u16 cols) { @@ -1490,7 +1479,7 @@ static int add_port(struct ports_device *portdev, u32 id) port->debugfs_file = debugfs_create_file(debugfs_name, 0444, pdrvdata.debugfs_dir, port, - &port_debugfs_ops); + &port_debugfs_fops); } return 0; diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c index 61ae06ca008e..52f0d91d30c1 100644 --- a/drivers/cpufreq/pmac32-cpufreq.c +++ b/drivers/cpufreq/pmac32-cpufreq.c @@ -128,7 +128,7 @@ static int cpu_750fx_cpu_speed(int low_speed) mtspr(SPRN_HID2, hid2); } } -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 low_choose_750fx_pll(low_speed); #endif if (low_speed == 1) { @@ -166,7 +166,7 @@ static int dfs_set_cpu_speed(int low_speed) } /* set frequency */ -#ifdef CONFIG_6xx +#ifdef CONFIG_PPC_BOOK3S_32 low_choose_7447a_dfs(low_speed); #endif udelay(100); diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c index 9e56bc411061..74c247972bb3 100644 --- a/drivers/cpuidle/cpuidle-pseries.c +++ b/drivers/cpuidle/cpuidle-pseries.c @@ -247,7 +247,13 @@ static int pseries_idle_probe(void) return -ENODEV; if (firmware_has_feature(FW_FEATURE_SPLPAR)) { - if (lppaca_shared_proc(get_lppaca())) { + /* + * Use local_paca instead of get_lppaca() since + * preemption is not disabled, and it is not required in + * fact, since lppaca_ptr does not need to be the value + * associated to the current CPU, it can be from any CPU. + */ + if (lppaca_shared_proc(local_paca->lppaca_ptr)) { cpuidle_state_table = shared_states; max_idle_state = ARRAY_SIZE(shared_states); } else { diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index caa98a7fe392..5a90075f719d 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -258,7 +258,7 @@ config CRYPTO_DEV_HIFN_795X_RNG Select this option if you want to enable the random number generator on the HIFN 795x crypto adapters. -source drivers/crypto/caam/Kconfig +source "drivers/crypto/caam/Kconfig" config CRYPTO_DEV_TALITOS tristate "Talitos Freescale Security Engine (SEC)" @@ -762,10 +762,12 @@ config CRYPTO_DEV_CCREE select CRYPTO_ECB select CRYPTO_CTR select CRYPTO_XTS + select CRYPTO_SM4 + select CRYPTO_SM3 help Say 'Y' to enable a driver for the REE interface of the Arm TrustZone CryptoCell family of processors. Currently the - CryptoCell 712, 710 and 630 are supported. + CryptoCell 713, 703, 712, 710 and 630 are supported. Choose this if you wish to use hardware acceleration of cryptographic operations on the system REE. If unsure say Y. 
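[Editor's note: the virtio_console hunk above is one of the tree-wide conversions to DEFINE_SHOW_ATTRIBUTE(). For readers unfamiliar with the helper, the macro (from <linux/seq_file.h>) generates exactly the boilerplate the hunk deletes; as a sketch, assuming the 4.20-era definition of the macro, DEFINE_SHOW_ATTRIBUTE(port_debugfs) expands to roughly:

static int port_debugfs_open(struct inode *inode, struct file *file)
{
	/* single_open() wires the seq_file up to port_debugfs_show() */
	return single_open(file, port_debugfs_show, inode->i_private);
}

static const struct file_operations port_debugfs_fops = {
	.owner		= THIS_MODULE,
	.open		= port_debugfs_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

The generated fops symbol is always named <name>_fops, which is why the debugfs_create_file() call in add_port() now passes &port_debugfs_fops in place of the old hand-rolled &port_debugfs_ops.]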
diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c index f5c07498ea4f..4092c2aad8e2 100644 --- a/drivers/crypto/amcc/crypto4xx_alg.c +++ b/drivers/crypto/amcc/crypto4xx_alg.c @@ -520,8 +520,7 @@ static int crypto4xx_compute_gcm_hash_key_sw(__le32 *hash_start, const u8 *key, uint8_t src[16] = { 0 }; int rc = 0; - aes_tfm = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC | - CRYPTO_ALG_NEED_FALLBACK); + aes_tfm = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(aes_tfm)) { rc = PTR_ERR(aes_tfm); pr_warn("could not load aes cipher driver: %d\n", rc); diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c index 6eaec9ba0f68..63cb6956c948 100644 --- a/drivers/crypto/amcc/crypto4xx_core.c +++ b/drivers/crypto/amcc/crypto4xx_core.c @@ -596,7 +596,7 @@ static void crypto4xx_aead_done(struct crypto4xx_device *dev, pd->pd_ctl_len.bf.pkt_len, dst); } else { - __dma_sync_page(sg_page(dst), dst->offset, dst->length, + dma_unmap_page(dev->core_dev->device, pd->dest, dst->length, DMA_FROM_DEVICE); } diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c index 2d1f1db9f807..c9393ffb70ed 100644 --- a/drivers/crypto/bcm/cipher.c +++ b/drivers/crypto/bcm/cipher.c @@ -3868,7 +3868,6 @@ static struct iproc_alg_s driver_algs[] = { .cra_driver_name = "ctr-aes-iproc", .cra_blocksize = AES_BLOCK_SIZE, .cra_ablkcipher = { - /* .geniv = "chainiv", */ .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, @@ -4605,7 +4604,6 @@ static int spu_register_ablkcipher(struct iproc_alg_s *driver_alg) crypto->cra_priority = cipher_pri; crypto->cra_alignmask = 0; crypto->cra_ctxsize = sizeof(struct iproc_ctx_s); - INIT_LIST_HEAD(&crypto->cra_list); crypto->cra_init = ablkcipher_cra_init; crypto->cra_exit = generic_cra_exit; @@ -4652,12 +4650,16 @@ static int spu_register_ahash(struct iproc_alg_s *driver_alg) hash->halg.statesize = sizeof(struct spu_hash_export_s); if (driver_alg->auth_info.mode != HASH_MODE_HMAC) { - hash->setkey = ahash_setkey; hash->init = ahash_init; hash->update = ahash_update; hash->final = ahash_final; hash->finup = ahash_finup; hash->digest = ahash_digest; + if ((driver_alg->auth_info.alg == HASH_ALG_AES) && + ((driver_alg->auth_info.mode == HASH_MODE_XCBC) || + (driver_alg->auth_info.mode == HASH_MODE_CMAC))) { + hash->setkey = ahash_setkey; + } } else { hash->setkey = ahash_hmac_setkey; hash->init = ahash_hmac_init; @@ -4687,7 +4689,6 @@ static int spu_register_aead(struct iproc_alg_s *driver_alg) aead->base.cra_priority = aead_pri; aead->base.cra_alignmask = 0; aead->base.cra_ctxsize = sizeof(struct iproc_ctx_s); - INIT_LIST_HEAD(&aead->base.cra_list); aead->base.cra_flags |= CRYPTO_ALG_ASYNC; /* setkey set in alg initialization */ diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 869f092432de..92e593e2069a 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -72,6 +72,8 @@ #define AUTHENC_DESC_JOB_IO_LEN (AEAD_DESC_JOB_IO_LEN + \ CAAM_CMD_SZ * 5) +#define CHACHAPOLY_DESC_JOB_IO_LEN (AEAD_DESC_JOB_IO_LEN + CAAM_CMD_SZ * 6) + #define DESC_MAX_USED_BYTES (CAAM_DESC_BYTES_MAX - DESC_JOB_IO_LEN) #define DESC_MAX_USED_LEN (DESC_MAX_USED_BYTES / CAAM_CMD_SZ) @@ -513,6 +515,61 @@ static int rfc4543_setauthsize(struct crypto_aead *authenc, return 0; } +static int chachapoly_set_sh_desc(struct crypto_aead *aead) +{ + struct caam_ctx *ctx = crypto_aead_ctx(aead); + struct device *jrdev = ctx->jrdev; + unsigned 
int ivsize = crypto_aead_ivsize(aead); + u32 *desc; + + if (!ctx->cdata.keylen || !ctx->authsize) + return 0; + + desc = ctx->sh_desc_enc; + cnstr_shdsc_chachapoly(desc, &ctx->cdata, &ctx->adata, ivsize, + ctx->authsize, true, false); + dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma, + desc_bytes(desc), ctx->dir); + + desc = ctx->sh_desc_dec; + cnstr_shdsc_chachapoly(desc, &ctx->cdata, &ctx->adata, ivsize, + ctx->authsize, false, false); + dma_sync_single_for_device(jrdev, ctx->sh_desc_dec_dma, + desc_bytes(desc), ctx->dir); + + return 0; +} + +static int chachapoly_setauthsize(struct crypto_aead *aead, + unsigned int authsize) +{ + struct caam_ctx *ctx = crypto_aead_ctx(aead); + + if (authsize != POLY1305_DIGEST_SIZE) + return -EINVAL; + + ctx->authsize = authsize; + return chachapoly_set_sh_desc(aead); +} + +static int chachapoly_setkey(struct crypto_aead *aead, const u8 *key, + unsigned int keylen) +{ + struct caam_ctx *ctx = crypto_aead_ctx(aead); + unsigned int ivsize = crypto_aead_ivsize(aead); + unsigned int saltlen = CHACHAPOLY_IV_SIZE - ivsize; + + if (keylen != CHACHA_KEY_SIZE + saltlen) { + crypto_aead_set_flags(aead, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + ctx->cdata.key_virt = key; + ctx->cdata.keylen = keylen - saltlen; + + return chachapoly_set_sh_desc(aead); +} + static int aead_setkey(struct crypto_aead *aead, const u8 *key, unsigned int keylen) { @@ -1031,6 +1088,40 @@ static void init_gcm_job(struct aead_request *req, /* End of blank commands */ } +static void init_chachapoly_job(struct aead_request *req, + struct aead_edesc *edesc, bool all_contig, + bool encrypt) +{ + struct crypto_aead *aead = crypto_aead_reqtfm(req); + unsigned int ivsize = crypto_aead_ivsize(aead); + unsigned int assoclen = req->assoclen; + u32 *desc = edesc->hw_desc; + u32 ctx_iv_off = 4; + + init_aead_job(req, edesc, all_contig, encrypt); + + if (ivsize != CHACHAPOLY_IV_SIZE) { + /* IPsec specific: CONTEXT1[223:128] = {NONCE, IV} */ + ctx_iv_off += 4; + + /* + * The associated data comes already with the IV but we need + * to skip it when we authenticate or encrypt... + */ + assoclen -= ivsize; + } + + append_math_add_imm_u32(desc, REG3, ZERO, IMM, assoclen); + + /* + * For IPsec load the IV further in the same register. 
+ * For RFC7539 simply load the 12 bytes nonce in a single operation + */ + append_load_as_imm(desc, req->iv, ivsize, LDST_CLASS_1_CCB | + LDST_SRCDST_BYTE_CONTEXT | + ctx_iv_off << LDST_OFFSET_SHIFT); +} + static void init_authenc_job(struct aead_request *req, struct aead_edesc *edesc, bool all_contig, bool encrypt) @@ -1289,6 +1380,72 @@ static int gcm_encrypt(struct aead_request *req) return ret; } +static int chachapoly_encrypt(struct aead_request *req) +{ + struct aead_edesc *edesc; + struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct caam_ctx *ctx = crypto_aead_ctx(aead); + struct device *jrdev = ctx->jrdev; + bool all_contig; + u32 *desc; + int ret; + + edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig, + true); + if (IS_ERR(edesc)) + return PTR_ERR(edesc); + + desc = edesc->hw_desc; + + init_chachapoly_job(req, edesc, all_contig, true); + print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ", + DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), + 1); + + ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req); + if (!ret) { + ret = -EINPROGRESS; + } else { + aead_unmap(jrdev, edesc, req); + kfree(edesc); + } + + return ret; +} + +static int chachapoly_decrypt(struct aead_request *req) +{ + struct aead_edesc *edesc; + struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct caam_ctx *ctx = crypto_aead_ctx(aead); + struct device *jrdev = ctx->jrdev; + bool all_contig; + u32 *desc; + int ret; + + edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig, + false); + if (IS_ERR(edesc)) + return PTR_ERR(edesc); + + desc = edesc->hw_desc; + + init_chachapoly_job(req, edesc, all_contig, false); + print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ", + DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), + 1); + + ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req); + if (!ret) { + ret = -EINPROGRESS; + } else { + aead_unmap(jrdev, edesc, req); + kfree(edesc); + } + + return ret; +} + static int ipsec_gcm_encrypt(struct aead_request *req) { if (req->assoclen < 8) @@ -3002,6 +3159,50 @@ static struct caam_aead_alg driver_aeads[] = { .geniv = true, }, }, + { + .aead = { + .base = { + .cra_name = "rfc7539(chacha20,poly1305)", + .cra_driver_name = "rfc7539-chacha20-poly1305-" + "caam", + .cra_blocksize = 1, + }, + .setkey = chachapoly_setkey, + .setauthsize = chachapoly_setauthsize, + .encrypt = chachapoly_encrypt, + .decrypt = chachapoly_decrypt, + .ivsize = CHACHAPOLY_IV_SIZE, + .maxauthsize = POLY1305_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_CHACHA20 | + OP_ALG_AAI_AEAD, + .class2_alg_type = OP_ALG_ALGSEL_POLY1305 | + OP_ALG_AAI_AEAD, + }, + }, + { + .aead = { + .base = { + .cra_name = "rfc7539esp(chacha20,poly1305)", + .cra_driver_name = "rfc7539esp-chacha20-" + "poly1305-caam", + .cra_blocksize = 1, + }, + .setkey = chachapoly_setkey, + .setauthsize = chachapoly_setauthsize, + .encrypt = chachapoly_encrypt, + .decrypt = chachapoly_decrypt, + .ivsize = 8, + .maxauthsize = POLY1305_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_CHACHA20 | + OP_ALG_AAI_AEAD, + .class2_alg_type = OP_ALG_ALGSEL_POLY1305 | + OP_ALG_AAI_AEAD, + }, + }, }; static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam, @@ -3135,7 +3336,7 @@ static int __init caam_algapi_init(void) struct device *ctrldev; struct caam_drv_private *priv; int i = 0, err = 0; - u32 cha_vid, cha_inst, des_inst, aes_inst, md_inst; + u32 aes_vid, aes_inst, des_inst, md_vid, md_inst, ccha_inst, 
ptha_inst; unsigned int md_limit = SHA512_DIGEST_SIZE; bool registered = false; @@ -3168,14 +3369,38 @@ static int __init caam_algapi_init(void) * Register crypto algorithms the device supports. * First, detect presence and attributes of DES, AES, and MD blocks. */ - cha_vid = rd_reg32(&priv->ctrl->perfmon.cha_id_ls); - cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls); - des_inst = (cha_inst & CHA_ID_LS_DES_MASK) >> CHA_ID_LS_DES_SHIFT; - aes_inst = (cha_inst & CHA_ID_LS_AES_MASK) >> CHA_ID_LS_AES_SHIFT; - md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + if (priv->era < 10) { + u32 cha_vid, cha_inst; + + cha_vid = rd_reg32(&priv->ctrl->perfmon.cha_id_ls); + aes_vid = cha_vid & CHA_ID_LS_AES_MASK; + md_vid = (cha_vid & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + + cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls); + des_inst = (cha_inst & CHA_ID_LS_DES_MASK) >> + CHA_ID_LS_DES_SHIFT; + aes_inst = cha_inst & CHA_ID_LS_AES_MASK; + md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + ccha_inst = 0; + ptha_inst = 0; + } else { + u32 aesa, mdha; + + aesa = rd_reg32(&priv->ctrl->vreg.aesa); + mdha = rd_reg32(&priv->ctrl->vreg.mdha); + + aes_vid = (aesa & CHA_VER_VID_MASK) >> CHA_VER_VID_SHIFT; + md_vid = (mdha & CHA_VER_VID_MASK) >> CHA_VER_VID_SHIFT; + + des_inst = rd_reg32(&priv->ctrl->vreg.desa) & CHA_VER_NUM_MASK; + aes_inst = aesa & CHA_VER_NUM_MASK; + md_inst = mdha & CHA_VER_NUM_MASK; + ccha_inst = rd_reg32(&priv->ctrl->vreg.ccha) & CHA_VER_NUM_MASK; + ptha_inst = rd_reg32(&priv->ctrl->vreg.ptha) & CHA_VER_NUM_MASK; + } /* If MD is present, limit digest size based on LP256 */ - if (md_inst && ((cha_vid & CHA_ID_LS_MD_MASK) == CHA_ID_LS_MD_LP256)) + if (md_inst && md_vid == CHA_VER_VID_MD_LP256) md_limit = SHA256_DIGEST_SIZE; for (i = 0; i < ARRAY_SIZE(driver_algs); i++) { @@ -3196,10 +3421,10 @@ static int __init caam_algapi_init(void) * Check support for AES modes not available * on LP devices. */ - if ((cha_vid & CHA_ID_LS_AES_MASK) == CHA_ID_LS_AES_LP) - if ((t_alg->caam.class1_alg_type & OP_ALG_AAI_MASK) == - OP_ALG_AAI_XTS) - continue; + if (aes_vid == CHA_VER_VID_AES_LP && + (t_alg->caam.class1_alg_type & OP_ALG_AAI_MASK) == + OP_ALG_AAI_XTS) + continue; caam_skcipher_alg_init(t_alg); @@ -3232,21 +3457,28 @@ static int __init caam_algapi_init(void) if (!aes_inst && (c1_alg_sel == OP_ALG_ALGSEL_AES)) continue; + /* Skip CHACHA20 algorithms if not supported by device */ + if (c1_alg_sel == OP_ALG_ALGSEL_CHACHA20 && !ccha_inst) + continue; + + /* Skip POLY1305 algorithms if not supported by device */ + if (c2_alg_sel == OP_ALG_ALGSEL_POLY1305 && !ptha_inst) + continue; + /* * Check support for AES algorithms not available * on LP devices. */ - if ((cha_vid & CHA_ID_LS_AES_MASK) == CHA_ID_LS_AES_LP) - if (alg_aai == OP_ALG_AAI_GCM) - continue; + if (aes_vid == CHA_VER_VID_AES_LP && alg_aai == OP_ALG_AAI_GCM) + continue; /* * Skip algorithms requiring message digests * if MD or MD size is not supported by device. 
*/ - if (c2_alg_sel && - (!md_inst || (t_alg->aead.maxauthsize > md_limit))) - continue; + if ((c2_alg_sel & ~OP_ALG_ALGSEL_SUBMASK) == 0x40 && + (!md_inst || t_alg->aead.maxauthsize > md_limit)) + continue; caam_aead_alg_init(t_alg); diff --git a/drivers/crypto/caam/caamalg_desc.c b/drivers/crypto/caam/caamalg_desc.c index 1a6f0da14106..7db1640d3577 100644 --- a/drivers/crypto/caam/caamalg_desc.c +++ b/drivers/crypto/caam/caamalg_desc.c @@ -1213,6 +1213,139 @@ void cnstr_shdsc_rfc4543_decap(u32 * const desc, struct alginfo *cdata, } EXPORT_SYMBOL(cnstr_shdsc_rfc4543_decap); +/** + * cnstr_shdsc_chachapoly - Chacha20 + Poly1305 generic AEAD (rfc7539) and + * IPsec ESP (rfc7634, a.k.a. rfc7539esp) shared + * descriptor (non-protocol). + * @desc: pointer to buffer used for descriptor construction + * @cdata: pointer to block cipher transform definitions + * Valid algorithm values - OP_ALG_ALGSEL_CHACHA20 ANDed with + * OP_ALG_AAI_AEAD. + * @adata: pointer to authentication transform definitions + * Valid algorithm values - OP_ALG_ALGSEL_POLY1305 ANDed with + * OP_ALG_AAI_AEAD. + * @ivsize: initialization vector size + * @icvsize: integrity check value (ICV) size (truncated or full) + * @encap: true if encapsulation, false if decapsulation + * @is_qi: true when called from caam/qi + */ +void cnstr_shdsc_chachapoly(u32 * const desc, struct alginfo *cdata, + struct alginfo *adata, unsigned int ivsize, + unsigned int icvsize, const bool encap, + const bool is_qi) +{ + u32 *key_jump_cmd, *wait_cmd; + u32 nfifo; + const bool is_ipsec = (ivsize != CHACHAPOLY_IV_SIZE); + + /* Note: Context registers are saved. */ + init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX); + + /* skip key loading if they are loaded due to sharing */ + key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL | + JUMP_COND_SHRD); + + append_key_as_imm(desc, cdata->key_virt, cdata->keylen, cdata->keylen, + CLASS_1 | KEY_DEST_CLASS_REG); + + /* For IPsec load the salt from keymat in the context register */ + if (is_ipsec) + append_load_as_imm(desc, cdata->key_virt + cdata->keylen, 4, + LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT | + 4 << LDST_OFFSET_SHIFT); + + set_jump_tgt_here(desc, key_jump_cmd); + + /* Class 2 and 1 operations: Poly & ChaCha */ + if (encap) { + append_operation(desc, adata->algtype | OP_ALG_AS_INITFINAL | + OP_ALG_ENCRYPT); + append_operation(desc, cdata->algtype | OP_ALG_AS_INITFINAL | + OP_ALG_ENCRYPT); + } else { + append_operation(desc, adata->algtype | OP_ALG_AS_INITFINAL | + OP_ALG_DECRYPT | OP_ALG_ICV_ON); + append_operation(desc, cdata->algtype | OP_ALG_AS_INITFINAL | + OP_ALG_DECRYPT); + } + + if (is_qi) { + u32 *wait_load_cmd; + u32 ctx1_iv_off = is_ipsec ? 8 : 4; + + /* REG3 = assoclen */ + append_seq_load(desc, 4, LDST_CLASS_DECO | + LDST_SRCDST_WORD_DECO_MATH3 | + 4 << LDST_OFFSET_SHIFT); + + wait_load_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL | + JUMP_COND_CALM | JUMP_COND_NCP | + JUMP_COND_NOP | JUMP_COND_NIP | + JUMP_COND_NIFP); + set_jump_tgt_here(desc, wait_load_cmd); + + append_seq_load(desc, ivsize, LDST_CLASS_1_CCB | + LDST_SRCDST_BYTE_CONTEXT | + ctx1_iv_off << LDST_OFFSET_SHIFT); + } + + /* + * MAGIC with NFIFO + * Read associated data from the input and send them to class1 and + * class2 alignment blocks. From class1 send data to output fifo and + * then write it to memory since we don't need to encrypt AD. 
+ */ + nfifo = NFIFOENTRY_DEST_BOTH | NFIFOENTRY_FC1 | NFIFOENTRY_FC2 | + NFIFOENTRY_DTYPE_POLY | NFIFOENTRY_BND; + append_load_imm_u32(desc, nfifo, LDST_CLASS_IND_CCB | + LDST_SRCDST_WORD_INFO_FIFO_SM | LDLEN_MATH3); + + append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ); + append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ); + append_seq_fifo_load(desc, 0, FIFOLD_TYPE_NOINFOFIFO | + FIFOLD_CLASS_CLASS1 | LDST_VLF); + append_move_len(desc, MOVE_AUX_LS | MOVE_SRC_AUX_ABLK | + MOVE_DEST_OUTFIFO | MOVELEN_MRSEL_MATH3); + append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | LDST_VLF); + + /* IPsec - copy IV at the output */ + if (is_ipsec) + append_seq_fifo_store(desc, ivsize, FIFOST_TYPE_METADATA | + 0x2 << 25); + + wait_cmd = append_jump(desc, JUMP_JSL | JUMP_TYPE_LOCAL | + JUMP_COND_NOP | JUMP_TEST_ALL); + set_jump_tgt_here(desc, wait_cmd); + + if (encap) { + /* Read and write cryptlen bytes */ + append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ); + append_math_add(desc, VARSEQOUTLEN, SEQINLEN, REG0, + CAAM_CMD_SZ); + aead_append_src_dst(desc, FIFOLD_TYPE_MSG1OUT2); + + /* Write ICV */ + append_seq_store(desc, icvsize, LDST_CLASS_2_CCB | + LDST_SRCDST_BYTE_CONTEXT); + } else { + /* Read and write cryptlen bytes */ + append_math_add(desc, VARSEQINLEN, SEQOUTLEN, REG0, + CAAM_CMD_SZ); + append_math_add(desc, VARSEQOUTLEN, SEQOUTLEN, REG0, + CAAM_CMD_SZ); + aead_append_src_dst(desc, FIFOLD_TYPE_MSG); + + /* Load ICV for verification */ + append_seq_fifo_load(desc, icvsize, FIFOLD_CLASS_CLASS2 | + FIFOLD_TYPE_LAST2 | FIFOLD_TYPE_ICV); + } + + print_hex_dump_debug("chachapoly shdesc@" __stringify(__LINE__)": ", + DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), + 1); +} +EXPORT_SYMBOL(cnstr_shdsc_chachapoly); + /* For skcipher encrypt and decrypt, read from req->src and write to req->dst */ static inline void skcipher_append_src_dst(u32 *desc) { @@ -1228,7 +1361,8 @@ static inline void skcipher_append_src_dst(u32 *desc) * @desc: pointer to buffer used for descriptor construction * @cdata: pointer to block cipher transform definitions * Valid algorithm values - one of OP_ALG_ALGSEL_{AES, DES, 3DES} ANDed - * with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128. + * with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128 + * - OP_ALG_ALGSEL_CHACHA20 * @ivsize: initialization vector size * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template * @ctx1_iv_off: IV offset in CONTEXT1 register @@ -1293,7 +1427,8 @@ EXPORT_SYMBOL(cnstr_shdsc_skcipher_encap); * @desc: pointer to buffer used for descriptor construction * @cdata: pointer to block cipher transform definitions * Valid algorithm values - one of OP_ALG_ALGSEL_{AES, DES, 3DES} ANDed - * with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128. 
+ * with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128 + * - OP_ALG_ALGSEL_CHACHA20 * @ivsize: initialization vector size * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template * @ctx1_iv_off: IV offset in CONTEXT1 register diff --git a/drivers/crypto/caam/caamalg_desc.h b/drivers/crypto/caam/caamalg_desc.h index 1315c8f6f951..d5ca42ff961a 100644 --- a/drivers/crypto/caam/caamalg_desc.h +++ b/drivers/crypto/caam/caamalg_desc.h @@ -96,6 +96,11 @@ void cnstr_shdsc_rfc4543_decap(u32 * const desc, struct alginfo *cdata, unsigned int ivsize, unsigned int icvsize, const bool is_qi); +void cnstr_shdsc_chachapoly(u32 * const desc, struct alginfo *cdata, + struct alginfo *adata, unsigned int ivsize, + unsigned int icvsize, const bool encap, + const bool is_qi); + void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata, unsigned int ivsize, const bool is_rfc3686, const u32 ctx1_iv_off); diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c index 23c9fc4975f8..c0d55310aade 100644 --- a/drivers/crypto/caam/caamalg_qi.c +++ b/drivers/crypto/caam/caamalg_qi.c @@ -2462,7 +2462,7 @@ static int __init caam_qi_algapi_init(void) struct device *ctrldev; struct caam_drv_private *priv; int i = 0, err = 0; - u32 cha_vid, cha_inst, des_inst, aes_inst, md_inst; + u32 aes_vid, aes_inst, des_inst, md_vid, md_inst; unsigned int md_limit = SHA512_DIGEST_SIZE; bool registered = false; @@ -2497,14 +2497,34 @@ static int __init caam_qi_algapi_init(void) * Register crypto algorithms the device supports. * First, detect presence and attributes of DES, AES, and MD blocks. */ - cha_vid = rd_reg32(&priv->ctrl->perfmon.cha_id_ls); - cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls); - des_inst = (cha_inst & CHA_ID_LS_DES_MASK) >> CHA_ID_LS_DES_SHIFT; - aes_inst = (cha_inst & CHA_ID_LS_AES_MASK) >> CHA_ID_LS_AES_SHIFT; - md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + if (priv->era < 10) { + u32 cha_vid, cha_inst; + + cha_vid = rd_reg32(&priv->ctrl->perfmon.cha_id_ls); + aes_vid = cha_vid & CHA_ID_LS_AES_MASK; + md_vid = (cha_vid & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + + cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls); + des_inst = (cha_inst & CHA_ID_LS_DES_MASK) >> + CHA_ID_LS_DES_SHIFT; + aes_inst = cha_inst & CHA_ID_LS_AES_MASK; + md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + } else { + u32 aesa, mdha; + + aesa = rd_reg32(&priv->ctrl->vreg.aesa); + mdha = rd_reg32(&priv->ctrl->vreg.mdha); + + aes_vid = (aesa & CHA_VER_VID_MASK) >> CHA_VER_VID_SHIFT; + md_vid = (mdha & CHA_VER_VID_MASK) >> CHA_VER_VID_SHIFT; + + des_inst = rd_reg32(&priv->ctrl->vreg.desa) & CHA_VER_NUM_MASK; + aes_inst = aesa & CHA_VER_NUM_MASK; + md_inst = mdha & CHA_VER_NUM_MASK; + } /* If MD is present, limit digest size based on LP256 */ - if (md_inst && ((cha_vid & CHA_ID_LS_MD_MASK) == CHA_ID_LS_MD_LP256)) + if (md_inst && md_vid == CHA_VER_VID_MD_LP256) md_limit = SHA256_DIGEST_SIZE; for (i = 0; i < ARRAY_SIZE(driver_algs); i++) { @@ -2556,8 +2576,7 @@ static int __init caam_qi_algapi_init(void) * Check support for AES algorithms not available * on LP devices. 
*/ - if (((cha_vid & CHA_ID_LS_AES_MASK) == CHA_ID_LS_AES_LP) && - (alg_aai == OP_ALG_AAI_GCM)) + if (aes_vid == CHA_VER_VID_AES_LP && alg_aai == OP_ALG_AAI_GCM) continue; /* diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c index 7d8ac0222fa3..425d5d974613 100644 --- a/drivers/crypto/caam/caamalg_qi2.c +++ b/drivers/crypto/caam/caamalg_qi2.c @@ -462,7 +462,15 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, edesc->dst_nents = dst_nents; edesc->iv_dma = iv_dma; - edesc->assoclen = cpu_to_caam32(req->assoclen); + if ((alg->caam.class1_alg_type & OP_ALG_ALGSEL_MASK) == + OP_ALG_ALGSEL_CHACHA20 && ivsize != CHACHAPOLY_IV_SIZE) + /* + * The associated data comes already with the IV but we need + * to skip it when we authenticate or encrypt... + */ + edesc->assoclen = cpu_to_caam32(req->assoclen - ivsize); + else + edesc->assoclen = cpu_to_caam32(req->assoclen); edesc->assoclen_dma = dma_map_single(dev, &edesc->assoclen, 4, DMA_TO_DEVICE); if (dma_mapping_error(dev, edesc->assoclen_dma)) { @@ -532,6 +540,68 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, return edesc; } +static int chachapoly_set_sh_desc(struct crypto_aead *aead) +{ + struct caam_ctx *ctx = crypto_aead_ctx(aead); + unsigned int ivsize = crypto_aead_ivsize(aead); + struct device *dev = ctx->dev; + struct caam_flc *flc; + u32 *desc; + + if (!ctx->cdata.keylen || !ctx->authsize) + return 0; + + flc = &ctx->flc[ENCRYPT]; + desc = flc->sh_desc; + cnstr_shdsc_chachapoly(desc, &ctx->cdata, &ctx->adata, ivsize, + ctx->authsize, true, true); + flc->flc[1] = cpu_to_caam32(desc_len(desc)); /* SDL */ + dma_sync_single_for_device(dev, ctx->flc_dma[ENCRYPT], + sizeof(flc->flc) + desc_bytes(desc), + ctx->dir); + + flc = &ctx->flc[DECRYPT]; + desc = flc->sh_desc; + cnstr_shdsc_chachapoly(desc, &ctx->cdata, &ctx->adata, ivsize, + ctx->authsize, false, true); + flc->flc[1] = cpu_to_caam32(desc_len(desc)); /* SDL */ + dma_sync_single_for_device(dev, ctx->flc_dma[DECRYPT], + sizeof(flc->flc) + desc_bytes(desc), + ctx->dir); + + return 0; +} + +static int chachapoly_setauthsize(struct crypto_aead *aead, + unsigned int authsize) +{ + struct caam_ctx *ctx = crypto_aead_ctx(aead); + + if (authsize != POLY1305_DIGEST_SIZE) + return -EINVAL; + + ctx->authsize = authsize; + return chachapoly_set_sh_desc(aead); +} + +static int chachapoly_setkey(struct crypto_aead *aead, const u8 *key, + unsigned int keylen) +{ + struct caam_ctx *ctx = crypto_aead_ctx(aead); + unsigned int ivsize = crypto_aead_ivsize(aead); + unsigned int saltlen = CHACHAPOLY_IV_SIZE - ivsize; + + if (keylen != CHACHA_KEY_SIZE + saltlen) { + crypto_aead_set_flags(aead, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + ctx->cdata.key_virt = key; + ctx->cdata.keylen = keylen - saltlen; + + return chachapoly_set_sh_desc(aead); +} + static int gcm_set_sh_desc(struct crypto_aead *aead) { struct caam_ctx *ctx = crypto_aead_ctx(aead); @@ -816,7 +886,9 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, u32 *desc; u32 ctx1_iv_off = 0; const bool ctr_mode = ((ctx->cdata.algtype & OP_ALG_AAI_MASK) == - OP_ALG_AAI_CTR_MOD128); + OP_ALG_AAI_CTR_MOD128) && + ((ctx->cdata.algtype & OP_ALG_ALGSEL_MASK) != + OP_ALG_ALGSEL_CHACHA20); const bool is_rfc3686 = alg->caam.rfc3686; print_hex_dump_debug("key in @" __stringify(__LINE__)": ", @@ -1494,7 +1566,23 @@ static struct caam_skcipher_alg driver_algs[] = { .ivsize = AES_BLOCK_SIZE, }, .caam.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_XTS, - 
} + }, + { + .skcipher = { + .base = { + .cra_name = "chacha20", + .cra_driver_name = "chacha20-caam-qi2", + .cra_blocksize = 1, + }, + .setkey = skcipher_setkey, + .encrypt = skcipher_encrypt, + .decrypt = skcipher_decrypt, + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = CHACHA_IV_SIZE, + }, + .caam.class1_alg_type = OP_ALG_ALGSEL_CHACHA20, + }, }; static struct caam_aead_alg driver_aeads[] = { @@ -2608,6 +2696,50 @@ static struct caam_aead_alg driver_aeads[] = { .geniv = true, }, }, + { + .aead = { + .base = { + .cra_name = "rfc7539(chacha20,poly1305)", + .cra_driver_name = "rfc7539-chacha20-poly1305-" + "caam-qi2", + .cra_blocksize = 1, + }, + .setkey = chachapoly_setkey, + .setauthsize = chachapoly_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = CHACHAPOLY_IV_SIZE, + .maxauthsize = POLY1305_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_CHACHA20 | + OP_ALG_AAI_AEAD, + .class2_alg_type = OP_ALG_ALGSEL_POLY1305 | + OP_ALG_AAI_AEAD, + }, + }, + { + .aead = { + .base = { + .cra_name = "rfc7539esp(chacha20,poly1305)", + .cra_driver_name = "rfc7539esp-chacha20-" + "poly1305-caam-qi2", + .cra_blocksize = 1, + }, + .setkey = chachapoly_setkey, + .setauthsize = chachapoly_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = 8, + .maxauthsize = POLY1305_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_CHACHA20 | + OP_ALG_AAI_AEAD, + .class2_alg_type = OP_ALG_ALGSEL_POLY1305 | + OP_ALG_AAI_AEAD, + }, + }, { .aead = { .base = { @@ -4908,6 +5040,11 @@ static int dpaa2_caam_probe(struct fsl_mc_device *dpseci_dev) alg_sel == OP_ALG_ALGSEL_AES) continue; + /* Skip CHACHA20 algorithms if not supported by device */ + if (alg_sel == OP_ALG_ALGSEL_CHACHA20 && + !priv->sec_attr.ccha_acc_num) + continue; + t_alg->caam.dev = dev; caam_skcipher_alg_init(t_alg); @@ -4940,11 +5077,22 @@ static int dpaa2_caam_probe(struct fsl_mc_device *dpseci_dev) c1_alg_sel == OP_ALG_ALGSEL_AES) continue; + /* Skip CHACHA20 algorithms if not supported by device */ + if (c1_alg_sel == OP_ALG_ALGSEL_CHACHA20 && + !priv->sec_attr.ccha_acc_num) + continue; + + /* Skip POLY1305 algorithms if not supported by device */ + if (c2_alg_sel == OP_ALG_ALGSEL_POLY1305 && + !priv->sec_attr.ptha_acc_num) + continue; + /* * Skip algorithms requiring message digests * if MD not supported by device. */ - if (!priv->sec_attr.md_acc_num && c2_alg_sel) + if ((c2_alg_sel & ~OP_ALG_ALGSEL_SUBMASK) == 0x40 && + !priv->sec_attr.md_acc_num) continue; t_alg->caam.dev = dev; diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c index 46924affa0bd..81712aa5d0f2 100644 --- a/drivers/crypto/caam/caamhash.c +++ b/drivers/crypto/caam/caamhash.c @@ -3,6 +3,7 @@ * caam - Freescale FSL CAAM support for ahash functions of crypto API * * Copyright 2011 Freescale Semiconductor, Inc. + * Copyright 2018 NXP * * Based on caamalg.c crypto API driver. * @@ -1801,7 +1802,7 @@ static int __init caam_algapi_hash_init(void) int i = 0, err = 0; struct caam_drv_private *priv; unsigned int md_limit = SHA512_DIGEST_SIZE; - u32 cha_inst, cha_vid; + u32 md_inst, md_vid; dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); if (!dev_node) { @@ -1831,18 +1832,27 @@ static int __init caam_algapi_hash_init(void) * Register crypto algorithms the device supports. First, identify * presence and attributes of MD block. 
*/ - cha_vid = rd_reg32(&priv->ctrl->perfmon.cha_id_ls); - cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls); + if (priv->era < 10) { + md_vid = (rd_reg32(&priv->ctrl->perfmon.cha_id_ls) & + CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + md_inst = (rd_reg32(&priv->ctrl->perfmon.cha_num_ls) & + CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; + } else { + u32 mdha = rd_reg32(&priv->ctrl->vreg.mdha); + + md_vid = (mdha & CHA_VER_VID_MASK) >> CHA_VER_VID_SHIFT; + md_inst = mdha & CHA_VER_NUM_MASK; + } /* * Skip registration of any hashing algorithms if MD block * is not present. */ - if (!((cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT)) + if (!md_inst) return -ENODEV; /* Limit digest size based on LP256 */ - if ((cha_vid & CHA_ID_LS_MD_MASK) == CHA_ID_LS_MD_LP256) + if (md_vid == CHA_VER_VID_MD_LP256) md_limit = SHA256_DIGEST_SIZE; INIT_LIST_HEAD(&hash_list); diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c index 4fc209cbbeab..77ab28a2811a 100644 --- a/drivers/crypto/caam/caampkc.c +++ b/drivers/crypto/caam/caampkc.c @@ -3,6 +3,7 @@ * caam - Freescale FSL CAAM support for Public Key Cryptography * * Copyright 2016 Freescale Semiconductor, Inc. + * Copyright 2018 NXP * * There is no Shared Descriptor for PKC so that the Job Descriptor must carry * all the desired key parameters, input and output pointers. @@ -1017,7 +1018,7 @@ static int __init caam_pkc_init(void) struct platform_device *pdev; struct device *ctrldev; struct caam_drv_private *priv; - u32 cha_inst, pk_inst; + u32 pk_inst; int err; dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); @@ -1045,8 +1046,11 @@ static int __init caam_pkc_init(void) return -ENODEV; /* Determine public key hardware accelerator presence. */ - cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls); - pk_inst = (cha_inst & CHA_ID_LS_PK_MASK) >> CHA_ID_LS_PK_SHIFT; + if (priv->era < 10) + pk_inst = (rd_reg32(&priv->ctrl->perfmon.cha_num_ls) & + CHA_ID_LS_PK_MASK) >> CHA_ID_LS_PK_SHIFT; + else + pk_inst = rd_reg32(&priv->ctrl->vreg.pkha) & CHA_VER_NUM_MASK; /* Do not register algorithms if PKHA is not present. */ if (!pk_inst) diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c index 4318b0aa6fb9..a387c8d49a62 100644 --- a/drivers/crypto/caam/caamrng.c +++ b/drivers/crypto/caam/caamrng.c @@ -3,6 +3,7 @@ * caam - Freescale FSL CAAM support for hw_random * * Copyright 2011 Freescale Semiconductor, Inc. + * Copyright 2018 NXP * * Based on caamalg.c crypto API driver. 
* @@ -309,6 +310,7 @@ static int __init caam_rng_init(void) struct platform_device *pdev; struct device *ctrldev; struct caam_drv_private *priv; + u32 rng_inst; int err; dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); @@ -336,7 +338,13 @@ return -ENODEV; /* Check for an instantiated RNG before registration */ - if (!(rd_reg32(&priv->ctrl->perfmon.cha_num_ls) & CHA_ID_LS_RNG_MASK)) + if (priv->era < 10) + rng_inst = (rd_reg32(&priv->ctrl->perfmon.cha_num_ls) & + CHA_ID_LS_RNG_MASK) >> CHA_ID_LS_RNG_SHIFT; + else + rng_inst = rd_reg32(&priv->ctrl->vreg.rng) & CHA_VER_NUM_MASK; + + if (!rng_inst) return -ENODEV; dev = caam_jr_alloc(); diff --git a/drivers/crypto/caam/compat.h b/drivers/crypto/caam/compat.h index 9604ff7a335e..87d9efe4c7aa 100644 --- a/drivers/crypto/caam/compat.h +++ b/drivers/crypto/caam/compat.h @@ -36,6 +36,8 @@ #include #include #include +#include +#include #include #include #include diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c index 3fc793193821..16bbc72f041a 100644 --- a/drivers/crypto/caam/ctrl.c +++ b/drivers/crypto/caam/ctrl.c @@ -3,6 +3,7 @@ * Controller-level driver, kernel property detection, initialization * * Copyright 2008-2012 Freescale Semiconductor, Inc. + * Copyright 2018 NXP */ #include @@ -106,7 +107,7 @@ static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc, struct caam_ctrl __iomem *ctrl = ctrlpriv->ctrl; struct caam_deco __iomem *deco = ctrlpriv->deco; unsigned int timeout = 100000; - u32 deco_dbg_reg, flags; + u32 deco_dbg_reg, deco_state, flags; int i; @@ -149,13 +150,22 @@ static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc, timeout = 10000000; do { deco_dbg_reg = rd_reg32(&deco->desc_dbg); + + if (ctrlpriv->era < 10) + deco_state = (deco_dbg_reg & DESC_DBG_DECO_STAT_MASK) >> + DESC_DBG_DECO_STAT_SHIFT; + else + deco_state = (rd_reg32(&deco->dbg_exec) & + DESC_DER_DECO_STAT_MASK) >> + DESC_DER_DECO_STAT_SHIFT; + /* * If an error occurred in the descriptor, then * the DECO status field will be set to 0x0D */ - if ((deco_dbg_reg & DESC_DBG_DECO_STAT_MASK) == - DESC_DBG_DECO_STAT_HOST_ERR) + if (deco_state == DECO_STAT_HOST_ERR) break; + cpu_relax(); } while ((deco_dbg_reg & DESC_DBG_DECO_STAT_VALID) && --timeout); @@ -491,7 +501,7 @@ static int caam_probe(struct platform_device *pdev) struct caam_perfmon *perfmon; #endif u32 scfgr, comp_params; - u32 cha_vid_ls; + u8 rng_vid; int pg_size; int BLOCK_OFFSET = 0; @@ -733,15 +743,19 @@ static int caam_probe(struct platform_device *pdev) goto caam_remove; } - cha_vid_ls = rd_reg32(&ctrl->perfmon.cha_id_ls); + if (ctrlpriv->era < 10) + rng_vid = (rd_reg32(&ctrl->perfmon.cha_id_ls) & + CHA_ID_LS_RNG_MASK) >> CHA_ID_LS_RNG_SHIFT; + else + rng_vid = (rd_reg32(&ctrl->vreg.rng) & CHA_VER_VID_MASK) >> + CHA_VER_VID_SHIFT; /* * If SEC has RNG version >= 4 and RNG state handle has not been * already instantiated, do RNG instantiation * In case of SoCs with Management Complex, RNG is managed by MC f/w. */ - if (!ctrlpriv->mc_en && - (cha_vid_ls & CHA_ID_LS_RNG_MASK) >> CHA_ID_LS_RNG_SHIFT >= 4) { + if (!ctrlpriv->mc_en && rng_vid >= 4) { ctrlpriv->rng4_sh_init = rd_reg32(&ctrl->r4tst[0].rdsta); /* diff --git a/drivers/crypto/caam/desc.h b/drivers/crypto/caam/desc.h index f76ff160a02c..ec10230178c5 100644 --- a/drivers/crypto/caam/desc.h +++ b/drivers/crypto/caam/desc.h @@ -4,6 +4,7 @@ * Definitions to support CAAM descriptor instruction generation * * Copyright 2008-2011 Freescale Semiconductor, Inc.
+ * Copyright 2018 NXP */ #ifndef DESC_H @@ -242,6 +243,7 @@ #define LDST_SRCDST_WORD_DESCBUF_SHARED (0x42 << LDST_SRCDST_SHIFT) #define LDST_SRCDST_WORD_DESCBUF_JOB_WE (0x45 << LDST_SRCDST_SHIFT) #define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT) +#define LDST_SRCDST_WORD_INFO_FIFO_SM (0x71 << LDST_SRCDST_SHIFT) #define LDST_SRCDST_WORD_INFO_FIFO (0x7a << LDST_SRCDST_SHIFT) /* Offset in source/destination */ @@ -284,6 +286,12 @@ #define LDLEN_SET_OFIFO_OFFSET_SHIFT 0 #define LDLEN_SET_OFIFO_OFFSET_MASK (3 << LDLEN_SET_OFIFO_OFFSET_SHIFT) +/* Special Length definitions when dst=sm, nfifo-{sm,m} */ +#define LDLEN_MATH0 0 +#define LDLEN_MATH1 1 +#define LDLEN_MATH2 2 +#define LDLEN_MATH3 3 + /* * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE * Command Constructs @@ -408,6 +416,7 @@ #define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT) #define FIFOST_TYPE_RNGSTORE (0x34 << FIFOST_TYPE_SHIFT) #define FIFOST_TYPE_RNGFIFO (0x35 << FIFOST_TYPE_SHIFT) +#define FIFOST_TYPE_METADATA (0x3e << FIFOST_TYPE_SHIFT) #define FIFOST_TYPE_SKIP (0x3f << FIFOST_TYPE_SHIFT) /* @@ -1133,6 +1142,12 @@ #define OP_ALG_TYPE_CLASS1 (2 << OP_ALG_TYPE_SHIFT) #define OP_ALG_TYPE_CLASS2 (4 << OP_ALG_TYPE_SHIFT) +/* version register fields */ +#define OP_VER_CCHA_NUM 0x000000ff /* Number CCHAs instantiated */ +#define OP_VER_CCHA_MISC 0x0000ff00 /* CCHA Miscellaneous Information */ +#define OP_VER_CCHA_REV 0x00ff0000 /* CCHA Revision Number */ +#define OP_VER_CCHA_VID 0xff000000 /* CCHA Version ID */ + #define OP_ALG_ALGSEL_SHIFT 16 #define OP_ALG_ALGSEL_MASK (0xff << OP_ALG_ALGSEL_SHIFT) #define OP_ALG_ALGSEL_SUBMASK (0x0f << OP_ALG_ALGSEL_SHIFT) @@ -1152,6 +1167,8 @@ #define OP_ALG_ALGSEL_KASUMI (0x70 << OP_ALG_ALGSEL_SHIFT) #define OP_ALG_ALGSEL_CRC (0x90 << OP_ALG_ALGSEL_SHIFT) #define OP_ALG_ALGSEL_SNOW_F9 (0xA0 << OP_ALG_ALGSEL_SHIFT) +#define OP_ALG_ALGSEL_CHACHA20 (0xD0 << OP_ALG_ALGSEL_SHIFT) +#define OP_ALG_ALGSEL_POLY1305 (0xE0 << OP_ALG_ALGSEL_SHIFT) #define OP_ALG_AAI_SHIFT 4 #define OP_ALG_AAI_MASK (0x1ff << OP_ALG_AAI_SHIFT) @@ -1199,6 +1216,11 @@ #define OP_ALG_AAI_RNG4_AI (0x80 << OP_ALG_AAI_SHIFT) #define OP_ALG_AAI_RNG4_SK (0x100 << OP_ALG_AAI_SHIFT) +/* Chacha20 AAI set */ +#define OP_ALG_AAI_AEAD (0x002 << OP_ALG_AAI_SHIFT) +#define OP_ALG_AAI_KEYSTREAM (0x001 << OP_ALG_AAI_SHIFT) +#define OP_ALG_AAI_BC8 (0x008 << OP_ALG_AAI_SHIFT) + /* hmac/smac AAI set */ #define OP_ALG_AAI_HASH (0x00 << OP_ALG_AAI_SHIFT) #define OP_ALG_AAI_HMAC (0x01 << OP_ALG_AAI_SHIFT) @@ -1387,6 +1409,7 @@ #define MOVE_SRC_MATH3 (0x07 << MOVE_SRC_SHIFT) #define MOVE_SRC_INFIFO (0x08 << MOVE_SRC_SHIFT) #define MOVE_SRC_INFIFO_CL (0x09 << MOVE_SRC_SHIFT) +#define MOVE_SRC_AUX_ABLK (0x0a << MOVE_SRC_SHIFT) #define MOVE_DEST_SHIFT 16 #define MOVE_DEST_MASK (0x0f << MOVE_DEST_SHIFT) @@ -1413,6 +1436,10 @@ #define MOVELEN_MRSEL_SHIFT 0 #define MOVELEN_MRSEL_MASK (0x3 << MOVE_LEN_SHIFT) +#define MOVELEN_MRSEL_MATH0 (0 << MOVELEN_MRSEL_SHIFT) +#define MOVELEN_MRSEL_MATH1 (1 << MOVELEN_MRSEL_SHIFT) +#define MOVELEN_MRSEL_MATH2 (2 << MOVELEN_MRSEL_SHIFT) +#define MOVELEN_MRSEL_MATH3 (3 << MOVELEN_MRSEL_SHIFT) /* * MATH Command Constructs @@ -1589,6 +1616,7 @@ #define NFIFOENTRY_DTYPE_IV (0x2 << NFIFOENTRY_DTYPE_SHIFT) #define NFIFOENTRY_DTYPE_SAD (0x3 << NFIFOENTRY_DTYPE_SHIFT) #define NFIFOENTRY_DTYPE_ICV (0xA << NFIFOENTRY_DTYPE_SHIFT) +#define NFIFOENTRY_DTYPE_POLY (0xB << NFIFOENTRY_DTYPE_SHIFT) #define NFIFOENTRY_DTYPE_SKIP (0xE << NFIFOENTRY_DTYPE_SHIFT) #define NFIFOENTRY_DTYPE_MSG (0xF << 
NFIFOENTRY_DTYPE_SHIFT) diff --git a/drivers/crypto/caam/desc_constr.h b/drivers/crypto/caam/desc_constr.h index d4256fa4a1d6..2980b8ef1fb1 100644 --- a/drivers/crypto/caam/desc_constr.h +++ b/drivers/crypto/caam/desc_constr.h @@ -189,6 +189,7 @@ static inline u32 *append_##cmd(u32 * const desc, u32 options) \ } APPEND_CMD_RET(jump, JUMP) APPEND_CMD_RET(move, MOVE) +APPEND_CMD_RET(move_len, MOVE_LEN) static inline void set_jump_tgt_here(u32 * const desc, u32 *jump_cmd) { @@ -327,7 +328,11 @@ static inline void append_##cmd##_imm_##type(u32 * const desc, type immediate, \ u32 options) \ { \ PRINT_POS; \ - append_cmd(desc, CMD_##op | IMMEDIATE | options | sizeof(type)); \ + if (options & LDST_LEN_MASK) \ + append_cmd(desc, CMD_##op | IMMEDIATE | options); \ + else \ + append_cmd(desc, CMD_##op | IMMEDIATE | options | \ + sizeof(type)); \ append_cmd(desc, immediate); \ } APPEND_CMD_RAW_IMM(load, LOAD, u32); diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h index 457815f965c0..3cd0822ea819 100644 --- a/drivers/crypto/caam/regs.h +++ b/drivers/crypto/caam/regs.h @@ -3,6 +3,7 @@ * CAAM hardware register-level view * * Copyright 2008-2011 Freescale Semiconductor, Inc. + * Copyright 2018 NXP */ #ifndef REGS_H @@ -211,6 +212,47 @@ struct jr_outentry { u32 jrstatus; /* Status for completed descriptor */ } __packed; +/* Version registers (Era 10+) e80-eff */ +struct version_regs { + u32 crca; /* CRCA_VERSION */ + u32 afha; /* AFHA_VERSION */ + u32 kfha; /* KFHA_VERSION */ + u32 pkha; /* PKHA_VERSION */ + u32 aesa; /* AESA_VERSION */ + u32 mdha; /* MDHA_VERSION */ + u32 desa; /* DESA_VERSION */ + u32 snw8a; /* SNW8A_VERSION */ + u32 snw9a; /* SNW9A_VERSION */ + u32 zuce; /* ZUCE_VERSION */ + u32 zuca; /* ZUCA_VERSION */ + u32 ccha; /* CCHA_VERSION */ + u32 ptha; /* PTHA_VERSION */ + u32 rng; /* RNG_VERSION */ + u32 trng; /* TRNG_VERSION */ + u32 aaha; /* AAHA_VERSION */ + u32 rsvd[10]; + u32 sr; /* SR_VERSION */ + u32 dma; /* DMA_VERSION */ + u32 ai; /* AI_VERSION */ + u32 qi; /* QI_VERSION */ + u32 jr; /* JR_VERSION */ + u32 deco; /* DECO_VERSION */ +}; + +/* Version registers bitfields */ + +/* Number of CHAs instantiated */ +#define CHA_VER_NUM_MASK 0xffull +/* CHA Miscellaneous Information */ +#define CHA_VER_MISC_SHIFT 8 +#define CHA_VER_MISC_MASK (0xffull << CHA_VER_MISC_SHIFT) +/* CHA Revision Number */ +#define CHA_VER_REV_SHIFT 16 +#define CHA_VER_REV_MASK (0xffull << CHA_VER_REV_SHIFT) +/* CHA Version ID */ +#define CHA_VER_VID_SHIFT 24 +#define CHA_VER_VID_MASK (0xffull << CHA_VER_VID_SHIFT) + /* * caam_perfmon - Performance Monitor/Secure Memory Status/ * CAAM Global Status/Component Version IDs @@ -223,15 +265,13 @@ struct jr_outentry { #define CHA_NUM_MS_DECONUM_MASK (0xfull << CHA_NUM_MS_DECONUM_SHIFT) /* - * CHA version IDs / instantiation bitfields + * CHA version IDs / instantiation bitfields (< Era 10) * Defined for use with the cha_id fields in perfmon, but the same shift/mask * selectors can be used to pull out the number of instantiated blocks within * cha_num fields in perfmon because the locations are the same. 
*/ #define CHA_ID_LS_AES_SHIFT 0 #define CHA_ID_LS_AES_MASK (0xfull << CHA_ID_LS_AES_SHIFT) -#define CHA_ID_LS_AES_LP (0x3ull << CHA_ID_LS_AES_SHIFT) -#define CHA_ID_LS_AES_HP (0x4ull << CHA_ID_LS_AES_SHIFT) #define CHA_ID_LS_DES_SHIFT 4 #define CHA_ID_LS_DES_MASK (0xfull << CHA_ID_LS_DES_SHIFT) @@ -241,9 +281,6 @@ struct jr_outentry { #define CHA_ID_LS_MD_SHIFT 12 #define CHA_ID_LS_MD_MASK (0xfull << CHA_ID_LS_MD_SHIFT) -#define CHA_ID_LS_MD_LP256 (0x0ull << CHA_ID_LS_MD_SHIFT) -#define CHA_ID_LS_MD_LP512 (0x1ull << CHA_ID_LS_MD_SHIFT) -#define CHA_ID_LS_MD_HP (0x2ull << CHA_ID_LS_MD_SHIFT) #define CHA_ID_LS_RNG_SHIFT 16 #define CHA_ID_LS_RNG_MASK (0xfull << CHA_ID_LS_RNG_SHIFT) @@ -269,6 +306,13 @@ struct jr_outentry { #define CHA_ID_MS_JR_SHIFT 28 #define CHA_ID_MS_JR_MASK (0xfull << CHA_ID_MS_JR_SHIFT) +/* Specific CHA version IDs */ +#define CHA_VER_VID_AES_LP 0x3ull +#define CHA_VER_VID_AES_HP 0x4ull +#define CHA_VER_VID_MD_LP256 0x0ull +#define CHA_VER_VID_MD_LP512 0x1ull +#define CHA_VER_VID_MD_HP 0x2ull + struct sec_vid { u16 ip_id; u8 maj_rev; @@ -479,8 +523,10 @@ struct caam_ctrl { struct rng4tst r4tst[2]; }; - u32 rsvd9[448]; + u32 rsvd9[416]; + /* Version registers - introduced with era 10 e80-eff */ + struct version_regs vreg; /* Performance Monitor f00-fff */ struct caam_perfmon perfmon; }; @@ -570,8 +616,10 @@ struct caam_job_ring { u32 rsvd11; u32 jrcommand; /* JRCRx - JobR command */ - u32 rsvd12[932]; + u32 rsvd12[900]; + /* Version registers - introduced with era 10 e80-eff */ + struct version_regs vreg; /* Performance Monitor f00-fff */ struct caam_perfmon perfmon; }; @@ -878,13 +926,19 @@ struct caam_deco { u32 rsvd29[48]; u32 descbuf[64]; /* DxDESB - Descriptor buffer */ u32 rscvd30[193]; -#define DESC_DBG_DECO_STAT_HOST_ERR 0x00D00000 #define DESC_DBG_DECO_STAT_VALID 0x80000000 #define DESC_DBG_DECO_STAT_MASK 0x00F00000 +#define DESC_DBG_DECO_STAT_SHIFT 20 u32 desc_dbg; /* DxDDR - DECO Debug Register */ - u32 rsvd31[126]; + u32 rsvd31[13]; +#define DESC_DER_DECO_STAT_MASK 0x000F0000 +#define DESC_DER_DECO_STAT_SHIFT 16 + u32 dbg_exec; /* DxDER - DECO Debug Exec Register */ + u32 rsvd32[112]; }; +#define DECO_STAT_HOST_ERR 0xD + #define DECO_JQCR_WHL 0x20000000 #define DECO_JQCR_FOUR 0x10000000 diff --git a/drivers/crypto/cavium/nitrox/Makefile b/drivers/crypto/cavium/nitrox/Makefile index e12954791673..f83991aaf820 100644 --- a/drivers/crypto/cavium/nitrox/Makefile +++ b/drivers/crypto/cavium/nitrox/Makefile @@ -6,7 +6,10 @@ n5pf-objs := nitrox_main.o \ nitrox_lib.o \ nitrox_hal.o \ nitrox_reqmgr.o \ - nitrox_algs.o + nitrox_algs.o \ + nitrox_mbx.o \ + nitrox_skcipher.o \ + nitrox_aead.o n5pf-$(CONFIG_PCI_IOV) += nitrox_sriov.o n5pf-$(CONFIG_DEBUG_FS) += nitrox_debugfs.o diff --git a/drivers/crypto/cavium/nitrox/nitrox_aead.c b/drivers/crypto/cavium/nitrox/nitrox_aead.c new file mode 100644 index 000000000000..4f43eacd2557 --- /dev/null +++ b/drivers/crypto/cavium/nitrox/nitrox_aead.c @@ -0,0 +1,364 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include "nitrox_dev.h" +#include "nitrox_common.h" +#include "nitrox_req.h" + +#define GCM_AES_SALT_SIZE 4 + +/** + * struct nitrox_crypt_params - Params to set nitrox crypto request. 
+ * @cryptlen: Encryption/Decryption data length + * @authlen: Assoc data length + Cryptlen + * @srclen: Input buffer length + * @dstlen: Output buffer length + * @iv: IV data + * @ivsize: IV data length + * @ctrl_arg: Identifies the request type (ENCRYPT/DECRYPT) + */ +struct nitrox_crypt_params { + unsigned int cryptlen; + unsigned int authlen; + unsigned int srclen; + unsigned int dstlen; + u8 *iv; + int ivsize; + u8 ctrl_arg; +}; + +union gph_p3 { + struct { +#ifdef __BIG_ENDIAN_BITFIELD + u16 iv_offset : 8; + u16 auth_offset : 8; +#else + u16 auth_offset : 8; + u16 iv_offset : 8; +#endif + }; + u16 param; +}; + +static int nitrox_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key, + unsigned int keylen) +{ + int aes_keylen; + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + struct flexi_crypto_context *fctx; + union fc_ctx_flags flags; + + aes_keylen = flexi_aes_keylen(keylen); + if (aes_keylen < 0) { + crypto_aead_set_flags(aead, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + /* fill crypto context */ + fctx = nctx->u.fctx; + flags.f = be64_to_cpu(fctx->flags.f); + flags.w0.aes_keylen = aes_keylen; + fctx->flags.f = cpu_to_be64(flags.f); + + /* copy enc key to context */ + memset(&fctx->crypto, 0, sizeof(fctx->crypto)); + memcpy(fctx->crypto.u.key, key, keylen); + + return 0; +} + +static int nitrox_aead_setauthsize(struct crypto_aead *aead, + unsigned int authsize) +{ + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + struct flexi_crypto_context *fctx = nctx->u.fctx; + union fc_ctx_flags flags; + + flags.f = be64_to_cpu(fctx->flags.f); + flags.w0.mac_len = authsize; + fctx->flags.f = cpu_to_be64(flags.f); + + aead->authsize = authsize; + + return 0; +} + +static int alloc_src_sglist(struct aead_request *areq, char *iv, int ivsize, + int buflen) +{ + struct nitrox_kcrypt_request *nkreq = aead_request_ctx(areq); + int nents = sg_nents_for_len(areq->src, buflen) + 1; + int ret; + + if (nents < 0) + return nents; + + /* Allocate buffer to hold IV and input scatterlist array */ + ret = alloc_src_req_buf(nkreq, nents, ivsize); + if (ret) + return ret; + + nitrox_creq_copy_iv(nkreq->src, iv, ivsize); + nitrox_creq_set_src_sg(nkreq, nents, ivsize, areq->src, buflen); + + return 0; +} + +static int alloc_dst_sglist(struct aead_request *areq, int ivsize, int buflen) +{ + struct nitrox_kcrypt_request *nkreq = aead_request_ctx(areq); + int nents = sg_nents_for_len(areq->dst, buflen) + 3; + int ret; + + if (nents < 0) + return nents; + + /* Allocate buffer to hold ORH, COMPLETION and output scatterlist + * array + */ + ret = alloc_dst_req_buf(nkreq, nents); + if (ret) + return ret; + + nitrox_creq_set_orh(nkreq); + nitrox_creq_set_comp(nkreq); + nitrox_creq_set_dst_sg(nkreq, nents, ivsize, areq->dst, buflen); + + return 0; +} + +static void free_src_sglist(struct aead_request *areq) +{ + struct nitrox_kcrypt_request *nkreq = aead_request_ctx(areq); + + kfree(nkreq->src); +} + +static void free_dst_sglist(struct aead_request *areq) +{ + struct nitrox_kcrypt_request *nkreq = aead_request_ctx(areq); + + kfree(nkreq->dst); +} + +static int nitrox_set_creq(struct aead_request *areq, + struct nitrox_crypt_params *params) +{ + struct nitrox_kcrypt_request *nkreq = aead_request_ctx(areq); + struct se_crypto_request *creq = &nkreq->creq; + struct crypto_aead *aead = crypto_aead_reqtfm(areq); + union gph_p3 param3; + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + int ret; + + creq->flags = areq->base.flags; + creq->gfp = (areq->base.flags & 
CRYPTO_TFM_REQ_MAY_SLEEP) ? + GFP_KERNEL : GFP_ATOMIC; + + creq->ctrl.value = 0; + creq->opcode = FLEXI_CRYPTO_ENCRYPT_HMAC; + creq->ctrl.s.arg = params->ctrl_arg; + + creq->gph.param0 = cpu_to_be16(params->cryptlen); + creq->gph.param1 = cpu_to_be16(params->authlen); + creq->gph.param2 = cpu_to_be16(params->ivsize + areq->assoclen); + param3.iv_offset = 0; + param3.auth_offset = params->ivsize; + creq->gph.param3 = cpu_to_be16(param3.param); + + creq->ctx_handle = nctx->u.ctx_handle; + creq->ctrl.s.ctxl = sizeof(struct flexi_crypto_context); + + ret = alloc_src_sglist(areq, params->iv, params->ivsize, + params->srclen); + if (ret) + return ret; + + ret = alloc_dst_sglist(areq, params->ivsize, params->dstlen); + if (ret) { + free_src_sglist(areq); + return ret; + } + + return 0; +} + +static void nitrox_aead_callback(void *arg, int err) +{ + struct aead_request *areq = arg; + + free_src_sglist(areq); + free_dst_sglist(areq); + if (err) { + pr_err_ratelimited("request failed status 0x%0x\n", err); + err = -EINVAL; + } + + areq->base.complete(&areq->base, err); +} + +static int nitrox_aes_gcm_enc(struct aead_request *areq) +{ + struct crypto_aead *aead = crypto_aead_reqtfm(areq); + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + struct nitrox_kcrypt_request *nkreq = aead_request_ctx(areq); + struct se_crypto_request *creq = &nkreq->creq; + struct flexi_crypto_context *fctx = nctx->u.fctx; + struct nitrox_crypt_params params; + int ret; + + memcpy(fctx->crypto.iv, areq->iv, GCM_AES_SALT_SIZE); + + memset(¶ms, 0, sizeof(params)); + params.cryptlen = areq->cryptlen; + params.authlen = areq->assoclen + params.cryptlen; + params.srclen = params.authlen; + params.dstlen = params.srclen + aead->authsize; + params.iv = &areq->iv[GCM_AES_SALT_SIZE]; + params.ivsize = GCM_AES_IV_SIZE - GCM_AES_SALT_SIZE; + params.ctrl_arg = ENCRYPT; + ret = nitrox_set_creq(areq, ¶ms); + if (ret) + return ret; + + /* send the crypto request */ + return nitrox_process_se_request(nctx->ndev, creq, nitrox_aead_callback, + areq); +} + +static int nitrox_aes_gcm_dec(struct aead_request *areq) +{ + struct crypto_aead *aead = crypto_aead_reqtfm(areq); + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + struct nitrox_kcrypt_request *nkreq = aead_request_ctx(areq); + struct se_crypto_request *creq = &nkreq->creq; + struct flexi_crypto_context *fctx = nctx->u.fctx; + struct nitrox_crypt_params params; + int ret; + + memcpy(fctx->crypto.iv, areq->iv, GCM_AES_SALT_SIZE); + + memset(¶ms, 0, sizeof(params)); + params.cryptlen = areq->cryptlen - aead->authsize; + params.authlen = areq->assoclen + params.cryptlen; + params.srclen = areq->cryptlen + areq->assoclen; + params.dstlen = params.srclen - aead->authsize; + params.iv = &areq->iv[GCM_AES_SALT_SIZE]; + params.ivsize = GCM_AES_IV_SIZE - GCM_AES_SALT_SIZE; + params.ctrl_arg = DECRYPT; + ret = nitrox_set_creq(areq, ¶ms); + if (ret) + return ret; + + /* send the crypto request */ + return nitrox_process_se_request(nctx->ndev, creq, nitrox_aead_callback, + areq); +} + +static int nitrox_aead_init(struct crypto_aead *aead) +{ + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + struct crypto_ctx_hdr *chdr; + + /* get the first device */ + nctx->ndev = nitrox_get_first_device(); + if (!nctx->ndev) + return -ENODEV; + + /* allocate nitrox crypto context */ + chdr = crypto_alloc_context(nctx->ndev); + if (!chdr) { + nitrox_put_device(nctx->ndev); + return -ENOMEM; + } + nctx->chdr = chdr; + nctx->u.ctx_handle = (uintptr_t)((u8 *)chdr->vaddr + + sizeof(struct 
ctx_hdr)); + nctx->u.fctx->flags.f = 0; + + return 0; +} + +static int nitrox_aes_gcm_init(struct crypto_aead *aead) +{ + int ret; + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + union fc_ctx_flags *flags; + + ret = nitrox_aead_init(aead); + if (ret) + return ret; + + flags = &nctx->u.fctx->flags; + flags->w0.cipher_type = CIPHER_AES_GCM; + flags->w0.hash_type = AUTH_NULL; + flags->w0.iv_source = IV_FROM_DPTR; + /* ask microcode to calculate ipad/opad */ + flags->w0.auth_input_type = 1; + flags->f = be64_to_cpu(flags->f); + + crypto_aead_set_reqsize(aead, sizeof(struct aead_request) + + sizeof(struct nitrox_kcrypt_request)); + + return 0; +} + +static void nitrox_aead_exit(struct crypto_aead *aead) +{ + struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead); + + /* free the nitrox crypto context */ + if (nctx->u.ctx_handle) { + struct flexi_crypto_context *fctx = nctx->u.fctx; + + memzero_explicit(&fctx->crypto, sizeof(struct crypto_keys)); + memzero_explicit(&fctx->auth, sizeof(struct auth_keys)); + crypto_free_context((void *)nctx->chdr); + } + nitrox_put_device(nctx->ndev); + + nctx->u.ctx_handle = 0; + nctx->ndev = NULL; +} + +static struct aead_alg nitrox_aeads[] = { { + .base = { + .cra_name = "gcm(aes)", + .cra_driver_name = "n5_aes_gcm", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .setkey = nitrox_aes_gcm_setkey, + .setauthsize = nitrox_aead_setauthsize, + .encrypt = nitrox_aes_gcm_enc, + .decrypt = nitrox_aes_gcm_dec, + .init = nitrox_aes_gcm_init, + .exit = nitrox_aead_exit, + .ivsize = GCM_AES_IV_SIZE, + .maxauthsize = AES_BLOCK_SIZE, +} }; + +int nitrox_register_aeads(void) +{ + return crypto_register_aeads(nitrox_aeads, ARRAY_SIZE(nitrox_aeads)); +} + +void nitrox_unregister_aeads(void) +{ + crypto_unregister_aeads(nitrox_aeads, ARRAY_SIZE(nitrox_aeads)); +} diff --git a/drivers/crypto/cavium/nitrox/nitrox_algs.c b/drivers/crypto/cavium/nitrox/nitrox_algs.c index 2ae6124e5da6..d646ae5f29b0 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_algs.c +++ b/drivers/crypto/cavium/nitrox/nitrox_algs.c @@ -1,458 +1,24 @@ -// SPDX-License-Identifier: GPL-2.0 -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -#include "nitrox_dev.h" #include "nitrox_common.h" -#include "nitrox_req.h" - -#define PRIO 4001 - -struct nitrox_cipher { - const char *name; - enum flexi_cipher value; -}; - -/** - * supported cipher list - */ -static const struct nitrox_cipher flexi_cipher_table[] = { - { "null", CIPHER_NULL }, - { "cbc(des3_ede)", CIPHER_3DES_CBC }, - { "ecb(des3_ede)", CIPHER_3DES_ECB }, - { "cbc(aes)", CIPHER_AES_CBC }, - { "ecb(aes)", CIPHER_AES_ECB }, - { "cfb(aes)", CIPHER_AES_CFB }, - { "rfc3686(ctr(aes))", CIPHER_AES_CTR }, - { "xts(aes)", CIPHER_AES_XTS }, - { "cts(cbc(aes))", CIPHER_AES_CBC_CTS }, - { NULL, CIPHER_INVALID } -}; - -static enum flexi_cipher flexi_cipher_type(const char *name) -{ - const struct nitrox_cipher *cipher = flexi_cipher_table; - - while (cipher->name) { - if (!strcmp(cipher->name, name)) - break; - cipher++; - } - return cipher->value; -} - -static int flexi_aes_keylen(int keylen) -{ - int aes_keylen; - - switch (keylen) { - case AES_KEYSIZE_128: - aes_keylen = 1; - break; - case AES_KEYSIZE_192: - aes_keylen = 2; - break; - case AES_KEYSIZE_256: - aes_keylen = 3; - break; - default: - aes_keylen = -EINVAL; - break; - } - return aes_keylen; -} - -static 
int nitrox_skcipher_init(struct crypto_skcipher *tfm) -{ - struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(tfm); - void *fctx; - - /* get the first device */ - nctx->ndev = nitrox_get_first_device(); - if (!nctx->ndev) - return -ENODEV; - - /* allocate nitrox crypto context */ - fctx = crypto_alloc_context(nctx->ndev); - if (!fctx) { - nitrox_put_device(nctx->ndev); - return -ENOMEM; - } - nctx->u.ctx_handle = (uintptr_t)fctx; - crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(tfm) + - sizeof(struct nitrox_kcrypt_request)); - return 0; -} - -static void nitrox_skcipher_exit(struct crypto_skcipher *tfm) -{ - struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(tfm); - - /* free the nitrox crypto context */ - if (nctx->u.ctx_handle) { - struct flexi_crypto_context *fctx = nctx->u.fctx; - - memset(&fctx->crypto, 0, sizeof(struct crypto_keys)); - memset(&fctx->auth, 0, sizeof(struct auth_keys)); - crypto_free_context((void *)fctx); - } - nitrox_put_device(nctx->ndev); - - nctx->u.ctx_handle = 0; - nctx->ndev = NULL; -} -static inline int nitrox_skcipher_setkey(struct crypto_skcipher *cipher, - int aes_keylen, const u8 *key, - unsigned int keylen) -{ - struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); - struct nitrox_crypto_ctx *nctx = crypto_tfm_ctx(tfm); - struct flexi_crypto_context *fctx; - enum flexi_cipher cipher_type; - const char *name; - - name = crypto_tfm_alg_name(tfm); - cipher_type = flexi_cipher_type(name); - if (unlikely(cipher_type == CIPHER_INVALID)) { - pr_err("unsupported cipher: %s\n", name); - return -EINVAL; - } - - /* fill crypto context */ - fctx = nctx->u.fctx; - fctx->flags = 0; - fctx->w0.cipher_type = cipher_type; - fctx->w0.aes_keylen = aes_keylen; - fctx->w0.iv_source = IV_FROM_DPTR; - fctx->flags = cpu_to_be64(*(u64 *)&fctx->w0); - /* copy the key to context */ - memcpy(fctx->crypto.u.key, key, keylen); - - return 0; -} - -static int nitrox_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, - unsigned int keylen) +int nitrox_crypto_register(void) { - int aes_keylen; + int err; - aes_keylen = flexi_aes_keylen(keylen); - if (aes_keylen < 0) { - crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); - return -EINVAL; - } - return nitrox_skcipher_setkey(cipher, aes_keylen, key, keylen); -} + err = nitrox_register_skciphers(); + if (err) + return err; -static void nitrox_skcipher_callback(struct skcipher_request *skreq, - int err) -{ + err = nitrox_register_aeads(); if (err) { - pr_err_ratelimited("request failed status 0x%0x\n", err); - err = -EINVAL; - } - skcipher_request_complete(skreq, err); -} - -static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc) -{ - struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq); - struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher); - struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq); - int ivsize = crypto_skcipher_ivsize(cipher); - struct se_crypto_request *creq; - - creq = &nkreq->creq; - creq->flags = skreq->base.flags; - creq->gfp = (skreq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; - - /* fill the request */ - creq->ctrl.value = 0; - creq->opcode = FLEXI_CRYPTO_ENCRYPT_HMAC; - creq->ctrl.s.arg = (enc ? 
ENCRYPT : DECRYPT); - /* param0: length of the data to be encrypted */ - creq->gph.param0 = cpu_to_be16(skreq->cryptlen); - creq->gph.param1 = 0; - /* param2: encryption data offset */ - creq->gph.param2 = cpu_to_be16(ivsize); - creq->gph.param3 = 0; - - creq->ctx_handle = nctx->u.ctx_handle; - creq->ctrl.s.ctxl = sizeof(struct flexi_crypto_context); - - /* copy the iv */ - memcpy(creq->iv, skreq->iv, ivsize); - creq->ivsize = ivsize; - creq->src = skreq->src; - creq->dst = skreq->dst; - - nkreq->nctx = nctx; - nkreq->skreq = skreq; - - /* send the crypto request */ - return nitrox_process_se_request(nctx->ndev, creq, - nitrox_skcipher_callback, skreq); -} - -static int nitrox_aes_encrypt(struct skcipher_request *skreq) -{ - return nitrox_skcipher_crypt(skreq, true); -} - -static int nitrox_aes_decrypt(struct skcipher_request *skreq) -{ - return nitrox_skcipher_crypt(skreq, false); -} - -static int nitrox_3des_setkey(struct crypto_skcipher *cipher, - const u8 *key, unsigned int keylen) -{ - if (keylen != DES3_EDE_KEY_SIZE) { - crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); - return -EINVAL; - } - - return nitrox_skcipher_setkey(cipher, 0, key, keylen); -} - -static int nitrox_3des_encrypt(struct skcipher_request *skreq) -{ - return nitrox_skcipher_crypt(skreq, true); -} - -static int nitrox_3des_decrypt(struct skcipher_request *skreq) -{ - return nitrox_skcipher_crypt(skreq, false); -} - -static int nitrox_aes_xts_setkey(struct crypto_skcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); - struct nitrox_crypto_ctx *nctx = crypto_tfm_ctx(tfm); - struct flexi_crypto_context *fctx; - int aes_keylen, ret; - - ret = xts_check_key(tfm, key, keylen); - if (ret) - return ret; - - keylen /= 2; - - aes_keylen = flexi_aes_keylen(keylen); - if (aes_keylen < 0) { - crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); - return -EINVAL; + nitrox_unregister_skciphers(); + return err; } - fctx = nctx->u.fctx; - /* copy KEY2 */ - memcpy(fctx->auth.u.key2, (key + keylen), keylen); - - return nitrox_skcipher_setkey(cipher, aes_keylen, key, keylen); -} - -static int nitrox_aes_ctr_rfc3686_setkey(struct crypto_skcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); - struct nitrox_crypto_ctx *nctx = crypto_tfm_ctx(tfm); - struct flexi_crypto_context *fctx; - int aes_keylen; - - if (keylen < CTR_RFC3686_NONCE_SIZE) - return -EINVAL; - - fctx = nctx->u.fctx; - - memcpy(fctx->crypto.iv, key + (keylen - CTR_RFC3686_NONCE_SIZE), - CTR_RFC3686_NONCE_SIZE); - - keylen -= CTR_RFC3686_NONCE_SIZE; - - aes_keylen = flexi_aes_keylen(keylen); - if (aes_keylen < 0) { - crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); - return -EINVAL; - } - return nitrox_skcipher_setkey(cipher, aes_keylen, key, keylen); -} - -static struct skcipher_alg nitrox_skciphers[] = { { - .base = { - .cra_name = "cbc(aes)", - .cra_driver_name = "n5_cbc(aes)", - .cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = nitrox_aes_setkey, - .encrypt = nitrox_aes_encrypt, - .decrypt = nitrox_aes_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, -}, { - .base = { - .cra_name = "ecb(aes)", - .cra_driver_name = "n5_ecb(aes)", - 
.cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = nitrox_aes_setkey, - .encrypt = nitrox_aes_encrypt, - .decrypt = nitrox_aes_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, -}, { - .base = { - .cra_name = "cfb(aes)", - .cra_driver_name = "n5_cfb(aes)", - .cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = nitrox_aes_setkey, - .encrypt = nitrox_aes_encrypt, - .decrypt = nitrox_aes_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, -}, { - .base = { - .cra_name = "xts(aes)", - .cra_driver_name = "n5_xts(aes)", - .cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - }, - .min_keysize = 2 * AES_MIN_KEY_SIZE, - .max_keysize = 2 * AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = nitrox_aes_xts_setkey, - .encrypt = nitrox_aes_encrypt, - .decrypt = nitrox_aes_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, -}, { - .base = { - .cra_name = "rfc3686(ctr(aes))", - .cra_driver_name = "n5_rfc3686(ctr(aes))", - .cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .ivsize = CTR_RFC3686_IV_SIZE, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, - .setkey = nitrox_aes_ctr_rfc3686_setkey, - .encrypt = nitrox_aes_encrypt, - .decrypt = nitrox_aes_decrypt, -}, { - .base = { - .cra_name = "cts(cbc(aes))", - .cra_driver_name = "n5_cts(cbc(aes))", - .cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = nitrox_aes_setkey, - .encrypt = nitrox_aes_encrypt, - .decrypt = nitrox_aes_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, -}, { - .base = { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "n5_cbc(des3_ede)", - .cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - }, - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES3_EDE_BLOCK_SIZE, - .setkey = nitrox_3des_setkey, - .encrypt = nitrox_3des_encrypt, - .decrypt = nitrox_3des_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, -}, { - .base = { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "n5_ecb(des3_ede)", - .cra_priority = PRIO, - .cra_flags = CRYPTO_ALG_ASYNC, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), - 
.cra_alignmask = 0, - .cra_module = THIS_MODULE, - }, - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES3_EDE_BLOCK_SIZE, - .setkey = nitrox_3des_setkey, - .encrypt = nitrox_3des_encrypt, - .decrypt = nitrox_3des_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, -} - -}; - -int nitrox_crypto_register(void) -{ - return crypto_register_skciphers(nitrox_skciphers, - ARRAY_SIZE(nitrox_skciphers)); + return 0; } void nitrox_crypto_unregister(void) { - crypto_unregister_skciphers(nitrox_skciphers, - ARRAY_SIZE(nitrox_skciphers)); + nitrox_unregister_aeads(); + nitrox_unregister_skciphers(); } diff --git a/drivers/crypto/cavium/nitrox/nitrox_common.h b/drivers/crypto/cavium/nitrox/nitrox_common.h index 863143a8336b..e4be69d7e6e5 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_common.h +++ b/drivers/crypto/cavium/nitrox/nitrox_common.h @@ -7,6 +7,10 @@ int nitrox_crypto_register(void); void nitrox_crypto_unregister(void); +int nitrox_register_aeads(void); +void nitrox_unregister_aeads(void); +int nitrox_register_skciphers(void); +void nitrox_unregister_skciphers(void); void *crypto_alloc_context(struct nitrox_device *ndev); void crypto_free_context(void *ctx); struct nitrox_device *nitrox_get_first_device(void); @@ -19,7 +23,7 @@ void pkt_slc_resp_tasklet(unsigned long data); int nitrox_process_se_request(struct nitrox_device *ndev, struct se_crypto_request *req, completion_t cb, - struct skcipher_request *skreq); + void *cb_arg); void backlog_qflush_work(struct work_struct *work); diff --git a/drivers/crypto/cavium/nitrox/nitrox_csr.h b/drivers/crypto/cavium/nitrox/nitrox_csr.h index 1ad27b1a87c5..a2a452642b38 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_csr.h +++ b/drivers/crypto/cavium/nitrox/nitrox_csr.h @@ -54,7 +54,13 @@ #define NPS_STATS_PKT_DMA_WR_CNT 0x1000190 /* NPS packet registers */ -#define NPS_PKT_INT 0x1040018 +#define NPS_PKT_INT 0x1040018 +#define NPS_PKT_MBOX_INT_LO 0x1040020 +#define NPS_PKT_MBOX_INT_LO_ENA_W1C 0x1040030 +#define NPS_PKT_MBOX_INT_LO_ENA_W1S 0x1040038 +#define NPS_PKT_MBOX_INT_HI 0x1040040 +#define NPS_PKT_MBOX_INT_HI_ENA_W1C 0x1040050 +#define NPS_PKT_MBOX_INT_HI_ENA_W1S 0x1040058 #define NPS_PKT_IN_RERR_HI 0x1040108 #define NPS_PKT_IN_RERR_HI_ENA_W1S 0x1040120 #define NPS_PKT_IN_RERR_LO 0x1040128 @@ -74,6 +80,10 @@ #define NPS_PKT_SLC_RERR_LO_ENA_W1S 0x1040240 #define NPS_PKT_SLC_ERR_TYPE 0x1040248 #define NPS_PKT_SLC_ERR_TYPE_ENA_W1S 0x1040260 +/* Mailbox PF->VF PF Accessible Data registers */ +#define NPS_PKT_MBOX_PF_VF_PFDATAX(_i) (0x1040800 + ((_i) * 0x8)) +#define NPS_PKT_MBOX_VF_PF_PFDATAX(_i) (0x1040C00 + ((_i) * 0x8)) + #define NPS_PKT_SLC_CTLX(_i) (0x10000 + ((_i) * 0x40000)) #define NPS_PKT_SLC_CNTSX(_i) (0x10008 + ((_i) * 0x40000)) #define NPS_PKT_SLC_INT_LEVELSX(_i) (0x10010 + ((_i) * 0x40000)) diff --git a/drivers/crypto/cavium/nitrox/nitrox_debugfs.c b/drivers/crypto/cavium/nitrox/nitrox_debugfs.c index 5f3cd5fafe04..0196b992280f 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_debugfs.c +++ b/drivers/crypto/cavium/nitrox/nitrox_debugfs.c @@ -13,18 +13,7 @@ static int firmware_show(struct seq_file *s, void *v) return 0; } -static int firmware_open(struct inode *inode, struct file *file) -{ - return single_open(file, firmware_show, inode->i_private); -} - -static const struct file_operations firmware_fops = { - .owner = THIS_MODULE, - .open = firmware_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(firmware); static int 
device_show(struct seq_file *s, void *v) { @@ -41,18 +30,7 @@ static int device_show(struct seq_file *s, void *v) return 0; } -static int nitrox_open(struct inode *inode, struct file *file) -{ - return single_open(file, device_show, inode->i_private); -} - -static const struct file_operations nitrox_fops = { - .owner = THIS_MODULE, - .open = nitrox_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(device); static int stats_show(struct seq_file *s, void *v) { @@ -69,18 +47,7 @@ static int stats_show(struct seq_file *s, void *v) return 0; } -static int nitrox_stats_open(struct inode *inode, struct file *file) -{ - return single_open(file, stats_show, inode->i_private); -} - -static const struct file_operations nitrox_stats_fops = { - .owner = THIS_MODULE, - .open = nitrox_stats_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(stats); void nitrox_debugfs_exit(struct nitrox_device *ndev) { @@ -97,13 +64,16 @@ int nitrox_debugfs_init(struct nitrox_device *ndev) return -ENOMEM; ndev->debugfs_dir = dir; - f = debugfs_create_file("firmware", 0400, dir, ndev, &firmware_fops); + f = debugfs_create_file("firmware", 0400, dir, ndev, + &firmware_fops); if (!f) goto err; - f = debugfs_create_file("device", 0400, dir, ndev, &nitrox_fops); + f = debugfs_create_file("device", 0400, dir, ndev, + &device_fops); if (!f) goto err; - f = debugfs_create_file("stats", 0400, dir, ndev, &nitrox_stats_fops); + f = debugfs_create_file("stats", 0400, dir, ndev, + &stats_fops); if (!f) goto err; diff --git a/drivers/crypto/cavium/nitrox/nitrox_debugfs.h b/drivers/crypto/cavium/nitrox/nitrox_debugfs.h new file mode 100644 index 000000000000..a8d85ffa619c --- /dev/null +++ b/drivers/crypto/cavium/nitrox/nitrox_debugfs.h @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __NITROX_DEBUGFS_H +#define __NITROX_DEBUGFS_H + +#include "nitrox_dev.h" + +#ifdef CONFIG_DEBUG_FS +int nitrox_debugfs_init(struct nitrox_device *ndev); +void nitrox_debugfs_exit(struct nitrox_device *ndev); +#else +static inline int nitrox_debugfs_init(struct nitrox_device *ndev) +{ + return 0; +} + +static inline void nitrox_debugfs_exit(struct nitrox_device *ndev) +{ +} +#endif /* !CONFIG_DEBUG_FS */ + +#endif /* __NITROX_DEBUGFS_H */ diff --git a/drivers/crypto/cavium/nitrox/nitrox_dev.h b/drivers/crypto/cavium/nitrox/nitrox_dev.h index 283e252385fb..0338877b828f 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_dev.h +++ b/drivers/crypto/cavium/nitrox/nitrox_dev.h @@ -8,6 +8,8 @@ #include #define VERSION_LEN 32 +/* Maximum queues in PF mode */ +#define MAX_PF_QUEUES 64 /** * struct nitrox_cmdq - NITROX command queue @@ -103,6 +105,61 @@ struct nitrox_q_vector { }; }; +/** + * mbox_msg - Mailbox message data + * @type: message type + * @opcode: message opcode + * @data: message data + */ +union mbox_msg { + u64 value; + struct { + u64 type: 2; + u64 opcode: 6; + u64 data: 58; + }; + struct { + u64 type: 2; + u64 opcode: 6; + u64 chipid: 8; + u64 vfid: 8; + } id; +}; + +/** + * nitrox_vfdev - NITROX VF device instance in PF + * @state: VF device state + * @vfno: VF number + * @nr_queues: number of queues enabled in VF + * @ring: ring to communicate with VF + * @msg: Mailbox message data from VF + * @mbx_resp: Mailbox counters + */ +struct nitrox_vfdev { + atomic_t state; + int vfno; + int nr_queues; + int ring; + union mbox_msg msg; + atomic64_t mbx_resp; +}; + +/** + * struct nitrox_iov - SR-IOV information + * @num_vfs: number of 
VF(s) enabled + * @max_vf_queues: Maximum number of queues allowed for VF + * @vfdev: VF(s) devices + * @pf2vf_wq: workqueue for PF2VF communication + * @msix: MSI-X entry for PF in SR-IOV case + */ +struct nitrox_iov { + int num_vfs; + int max_vf_queues; + struct nitrox_vfdev *vfdev; + struct workqueue_struct *pf2vf_wq; + struct msix_entry msix; +}; + /* * NITROX Device states */ @@ -150,6 +207,9 @@ enum vf_mode { * @ctx_pool: DMA pool for crypto context * @pkt_inq: Packet input rings * @qvec: MSI-X queue vectors information + * @iov: SR-IOV information + * @num_vecs: number of MSI-X vectors + * @stats: request statistics * @hw: hardware information * @debugfs_dir: debugfs directory */ @@ -168,13 +228,13 @@ struct nitrox_device { int node; u16 qlen; u16 nr_queues; - int num_vfs; enum vf_mode mode; struct dma_pool *ctx_pool; struct nitrox_cmdq *pkt_inq; struct nitrox_q_vector *qvec; + struct nitrox_iov iov; int num_vecs; struct nitrox_stats stats; @@ -213,17 +273,9 @@ static inline bool nitrox_ready(struct nitrox_device *ndev) return atomic_read(&ndev->state) == __NDEV_READY; } -#ifdef CONFIG_DEBUG_FS -int nitrox_debugfs_init(struct nitrox_device *ndev); -void nitrox_debugfs_exit(struct nitrox_device *ndev); -#else -static inline int nitrox_debugfs_init(struct nitrox_device *ndev) +static inline bool nitrox_vfdev_ready(struct nitrox_vfdev *vfdev) { - return 0; + return atomic_read(&vfdev->state) == __NDEV_READY; } -static inline void nitrox_debugfs_exit(struct nitrox_device *ndev) -{ } -#endif - #endif /* __NITROX_DEV_H */ diff --git a/drivers/crypto/cavium/nitrox/nitrox_hal.c b/drivers/crypto/cavium/nitrox/nitrox_hal.c index a9b82387cf53..c08d9f33a3b1 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_hal.c +++ b/drivers/crypto/cavium/nitrox/nitrox_hal.c @@ -5,10 +5,11 @@ #include "nitrox_csr.h" #define PLL_REF_CLK 50 +#define MAX_CSR_RETRIES 10 /** * emu_enable_cores - Enable EMU cluster cores. - * @ndev: N5 device + * @ndev: NITROX device */ static void emu_enable_cores(struct nitrox_device *ndev) { @@ -33,7 +34,7 @@ static void emu_enable_cores(struct nitrox_device *ndev) /** * nitrox_config_emu_unit - configure EMU unit.
- * @ndev: N5 device + * @ndev: NITROX device */ void nitrox_config_emu_unit(struct nitrox_device *ndev) { @@ -63,29 +64,26 @@ void nitrox_config_emu_unit(struct nitrox_device *ndev) static void reset_pkt_input_ring(struct nitrox_device *ndev, int ring) { union nps_pkt_in_instr_ctl pkt_in_ctl; - union nps_pkt_in_instr_baoff_dbell pkt_in_dbell; union nps_pkt_in_done_cnts pkt_in_cnts; + int max_retries = MAX_CSR_RETRIES; u64 offset; + /* step 1: disable the ring, clear enable bit */ offset = NPS_PKT_IN_INSTR_CTLX(ring); - /* disable the ring */ pkt_in_ctl.value = nitrox_read_csr(ndev, offset); pkt_in_ctl.s.enb = 0; nitrox_write_csr(ndev, offset, pkt_in_ctl.value); - usleep_range(100, 150); - /* wait to clear [ENB] */ + /* step 2: wait to clear [ENB] */ + usleep_range(100, 150); do { pkt_in_ctl.value = nitrox_read_csr(ndev, offset); - } while (pkt_in_ctl.s.enb); - - /* clear off door bell counts */ - offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(ring); - pkt_in_dbell.value = 0; - pkt_in_dbell.s.dbell = 0xffffffff; - nitrox_write_csr(ndev, offset, pkt_in_dbell.value); + if (!pkt_in_ctl.s.enb) + break; + udelay(50); + } while (max_retries--); - /* clear done counts */ + /* step 3: clear done counts */ offset = NPS_PKT_IN_DONE_CNTSX(ring); pkt_in_cnts.value = nitrox_read_csr(ndev, offset); nitrox_write_csr(ndev, offset, pkt_in_cnts.value); @@ -95,6 +93,7 @@ static void reset_pkt_input_ring(struct nitrox_device *ndev, int ring) void enable_pkt_input_ring(struct nitrox_device *ndev, int ring) { union nps_pkt_in_instr_ctl pkt_in_ctl; + int max_retries = MAX_CSR_RETRIES; u64 offset; /* 64-byte instruction size */ @@ -107,12 +106,15 @@ void enable_pkt_input_ring(struct nitrox_device *ndev, int ring) /* wait for set [ENB] */ do { pkt_in_ctl.value = nitrox_read_csr(ndev, offset); - } while (!pkt_in_ctl.s.enb); + if (pkt_in_ctl.s.enb) + break; + udelay(50); + } while (max_retries--); } /** * nitrox_config_pkt_input_rings - configure Packet Input Rings - * @ndev: N5 device + * @ndev: NITROX device */ void nitrox_config_pkt_input_rings(struct nitrox_device *ndev) { @@ -121,11 +123,14 @@ void nitrox_config_pkt_input_rings(struct nitrox_device *ndev) for (i = 0; i < ndev->nr_queues; i++) { struct nitrox_cmdq *cmdq = &ndev->pkt_inq[i]; union nps_pkt_in_instr_rsize pkt_in_rsize; + union nps_pkt_in_instr_baoff_dbell pkt_in_dbell; u64 offset; reset_pkt_input_ring(ndev, i); - /* configure ring base address 16-byte aligned, + /** + * step 4: + * configure ring base address 16-byte aligned, * size and interrupt threshold. 
*/ offset = NPS_PKT_IN_INSTR_BADDRX(i); @@ -141,6 +146,13 @@ void nitrox_config_pkt_input_rings(struct nitrox_device *ndev) offset = NPS_PKT_IN_INT_LEVELSX(i); nitrox_write_csr(ndev, offset, 0xffffffff); + /* step 5: clear off door bell counts */ + offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(i); + pkt_in_dbell.value = 0; + pkt_in_dbell.s.dbell = 0xffffffff; + nitrox_write_csr(ndev, offset, pkt_in_dbell.value); + + /* enable the ring */ enable_pkt_input_ring(ndev, i); } } @@ -149,21 +161,26 @@ static void reset_pkt_solicit_port(struct nitrox_device *ndev, int port) { union nps_pkt_slc_ctl pkt_slc_ctl; union nps_pkt_slc_cnts pkt_slc_cnts; + int max_retries = MAX_CSR_RETRIES; u64 offset; - /* disable slc port */ + /* step 1: disable slc port */ offset = NPS_PKT_SLC_CTLX(port); pkt_slc_ctl.value = nitrox_read_csr(ndev, offset); pkt_slc_ctl.s.enb = 0; nitrox_write_csr(ndev, offset, pkt_slc_ctl.value); - usleep_range(100, 150); + /* step 2 */ + usleep_range(100, 150); /* wait to clear [ENB] */ do { pkt_slc_ctl.value = nitrox_read_csr(ndev, offset); - } while (pkt_slc_ctl.s.enb); + if (!pkt_slc_ctl.s.enb) + break; + udelay(50); + } while (max_retries--); - /* clear slc counters */ + /* step 3: clear slc counters */ offset = NPS_PKT_SLC_CNTSX(port); pkt_slc_cnts.value = nitrox_read_csr(ndev, offset); nitrox_write_csr(ndev, offset, pkt_slc_cnts.value); @@ -173,12 +190,12 @@ static void reset_pkt_solicit_port(struct nitrox_device *ndev, int port) void enable_pkt_solicit_port(struct nitrox_device *ndev, int port) { union nps_pkt_slc_ctl pkt_slc_ctl; + int max_retries = MAX_CSR_RETRIES; u64 offset; offset = NPS_PKT_SLC_CTLX(port); pkt_slc_ctl.value = 0; pkt_slc_ctl.s.enb = 1; - /* * 8 trailing 0x00 bytes will be added * to the end of the outgoing packet. */ @@ -191,23 +208,27 @@ void enable_pkt_solicit_port(struct nitrox_device *ndev, int port) /* wait to set [ENB] */ do { pkt_slc_ctl.value = nitrox_read_csr(ndev, offset); - } while (!pkt_slc_ctl.s.enb); + if (pkt_slc_ctl.s.enb) + break; + udelay(50); + } while (max_retries--); } -static void config_single_pkt_solicit_port(struct nitrox_device *ndev, - int port) +static void config_pkt_solicit_port(struct nitrox_device *ndev, int port) { union nps_pkt_slc_int_levels pkt_slc_int; u64 offset; reset_pkt_solicit_port(ndev, port); + /* step 4: configure interrupt levels */ offset = NPS_PKT_SLC_INT_LEVELSX(port); pkt_slc_int.value = 0; /* time interrupt threshold */ pkt_slc_int.s.timet = 0x3fffff; nitrox_write_csr(ndev, offset, pkt_slc_int.value); + /* enable the solicit port */ enable_pkt_solicit_port(ndev, port); } @@ -216,12 +237,12 @@ void nitrox_config_pkt_solicit_ports(struct nitrox_device *ndev) int i; for (i = 0; i < ndev->nr_queues; i++) - config_single_pkt_solicit_port(ndev, i); + config_pkt_solicit_port(ndev, i); } /** * enable_nps_interrupts - enable NPS interrupts - * @ndev: N5 device. + * @ndev: NITROX device. * * This includes NPS core, packet in and slc interrupts.
*/ @@ -284,8 +305,8 @@ void nitrox_config_pom_unit(struct nitrox_device *ndev) } /** - * nitrox_config_rand_unit - enable N5 random number unit - * @ndev: N5 device + * nitrox_config_rand_unit - enable NITROX random number unit + * @ndev: NITROX device */ void nitrox_config_rand_unit(struct nitrox_device *ndev) { @@ -361,6 +382,7 @@ void invalidate_lbc(struct nitrox_device *ndev) { union lbc_inval_ctl lbc_ctl; union lbc_inval_status lbc_stat; + int max_retries = MAX_CSR_RETRIES; u64 offset; /* invalidate LBC */ @@ -370,10 +392,12 @@ void invalidate_lbc(struct nitrox_device *ndev) nitrox_write_csr(ndev, offset, lbc_ctl.value); offset = LBC_INVAL_STATUS; - do { lbc_stat.value = nitrox_read_csr(ndev, offset); - } while (!lbc_stat.s.done); + if (lbc_stat.s.done) + break; + udelay(50); + } while (max_retries--); } void nitrox_config_lbc_unit(struct nitrox_device *ndev) @@ -467,3 +491,31 @@ void nitrox_get_hwinfo(struct nitrox_device *ndev) /* copy partname */ strncpy(ndev->hw.partname, name, sizeof(ndev->hw.partname)); } + +void enable_pf2vf_mbox_interrupts(struct nitrox_device *ndev) +{ + u64 value = ~0ULL; + u64 reg_addr; + + /* Mailbox interrupt low enable set register */ + reg_addr = NPS_PKT_MBOX_INT_LO_ENA_W1S; + nitrox_write_csr(ndev, reg_addr, value); + + /* Mailbox interrupt high enable set register */ + reg_addr = NPS_PKT_MBOX_INT_HI_ENA_W1S; + nitrox_write_csr(ndev, reg_addr, value); +} + +void disable_pf2vf_mbox_interrupts(struct nitrox_device *ndev) +{ + u64 value = ~0ULL; + u64 reg_addr; + + /* Mailbox interrupt low enable clear register */ + reg_addr = NPS_PKT_MBOX_INT_LO_ENA_W1C; + nitrox_write_csr(ndev, reg_addr, value); + + /* Mailbox interrupt high enable clear register */ + reg_addr = NPS_PKT_MBOX_INT_HI_ENA_W1C; + nitrox_write_csr(ndev, reg_addr, value); +} diff --git a/drivers/crypto/cavium/nitrox/nitrox_hal.h b/drivers/crypto/cavium/nitrox/nitrox_hal.h index 489ee64c119e..d6606418ba38 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_hal.h +++ b/drivers/crypto/cavium/nitrox/nitrox_hal.h @@ -19,5 +19,7 @@ void enable_pkt_input_ring(struct nitrox_device *ndev, int ring); void enable_pkt_solicit_port(struct nitrox_device *ndev, int port); void config_nps_core_vfcfg_mode(struct nitrox_device *ndev, enum vf_mode mode); void nitrox_get_hwinfo(struct nitrox_device *ndev); +void enable_pf2vf_mbox_interrupts(struct nitrox_device *ndev); +void disable_pf2vf_mbox_interrupts(struct nitrox_device *ndev); #endif /* __NITROX_HAL_H */ diff --git a/drivers/crypto/cavium/nitrox/nitrox_isr.c b/drivers/crypto/cavium/nitrox/nitrox_isr.c index 88a77b8fb3fb..3dec570a190a 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_isr.c +++ b/drivers/crypto/cavium/nitrox/nitrox_isr.c @@ -7,12 +7,14 @@ #include "nitrox_csr.h" #include "nitrox_common.h" #include "nitrox_hal.h" +#include "nitrox_mbx.h" /** * One vector for each type of ring * - NPS packet ring, AQMQ ring and ZQMQ ring */ #define NR_RING_VECTORS 3 +#define NR_NON_RING_VECTORS 1 /* base entry for packet ring/port */ #define PKT_RING_MSIX_BASE 0 #define NON_RING_MSIX_BASE 192 @@ -219,7 +221,8 @@ static void nps_core_int_tasklet(unsigned long data) */ static irqreturn_t nps_core_int_isr(int irq, void *data) { - struct nitrox_device *ndev = data; + struct nitrox_q_vector *qvec = data; + struct nitrox_device *ndev = qvec->ndev; union nps_core_int_active core_int; core_int.value = nitrox_read_csr(ndev, NPS_CORE_INT_ACTIVE); @@ -245,6 +248,10 @@ static irqreturn_t nps_core_int_isr(int irq, void *data) if (core_int.s.bmi) clear_bmi_err_intr(ndev); + /* 
Mailbox interrupt */ + if (core_int.s.mbox) + nitrox_pf2vf_mbox_handler(ndev); + /* If more work callback the ISR, set resend */ core_int.s.resend = 1; nitrox_write_csr(ndev, NPS_CORE_INT_ACTIVE, core_int.value); @@ -275,6 +282,7 @@ void nitrox_unregister_interrupts(struct nitrox_device *ndev) qvec->valid = false; } kfree(ndev->qvec); + ndev->qvec = NULL; pci_free_irq_vectors(pdev); } @@ -321,6 +329,7 @@ int nitrox_register_interrupts(struct nitrox_device *ndev) if (qvec->ring >= ndev->nr_queues) break; + qvec->cmdq = &ndev->pkt_inq[qvec->ring]; snprintf(qvec->name, IRQ_NAMESZ, "nitrox-pkt%d", qvec->ring); /* get the vector number */ vec = pci_irq_vector(pdev, i); @@ -335,13 +344,13 @@ int nitrox_register_interrupts(struct nitrox_device *ndev) tasklet_init(&qvec->resp_tasklet, pkt_slc_resp_tasklet, (unsigned long)qvec); - qvec->cmdq = &ndev->pkt_inq[qvec->ring]; qvec->valid = true; } /* request irqs for non ring vectors */ i = NON_RING_MSIX_BASE; qvec = &ndev->qvec[i]; + qvec->ndev = ndev; snprintf(qvec->name, IRQ_NAMESZ, "nitrox-core-int%d", i); /* get the vector number */ @@ -356,7 +365,6 @@ int nitrox_register_interrupts(struct nitrox_device *ndev) tasklet_init(&qvec->resp_tasklet, nps_core_int_tasklet, (unsigned long)qvec); - qvec->ndev = ndev; qvec->valid = true; return 0; @@ -365,3 +373,81 @@ irq_fail: nitrox_unregister_interrupts(ndev); return ret; } + +void nitrox_sriov_unregister_interrupts(struct nitrox_device *ndev) +{ + struct pci_dev *pdev = ndev->pdev; + int i; + + for (i = 0; i < ndev->num_vecs; i++) { + struct nitrox_q_vector *qvec; + int vec; + + qvec = ndev->qvec + i; + if (!qvec->valid) + continue; + + vec = ndev->iov.msix.vector; + irq_set_affinity_hint(vec, NULL); + free_irq(vec, qvec); + + tasklet_disable(&qvec->resp_tasklet); + tasklet_kill(&qvec->resp_tasklet); + qvec->valid = false; + } + kfree(ndev->qvec); + ndev->qvec = NULL; + pci_disable_msix(pdev); +} + +int nitrox_sriov_register_interupts(struct nitrox_device *ndev) +{ + struct pci_dev *pdev = ndev->pdev; + struct nitrox_q_vector *qvec; + int vec, cpu; + int ret; + + /** + * only non ring vectors i.e Entry 192 is available + * for PF in SR-IOV mode. 
+ */ + ndev->iov.msix.entry = NON_RING_MSIX_BASE; + ret = pci_enable_msix_exact(pdev, &ndev->iov.msix, NR_NON_RING_VECTORS); + if (ret) { + dev_err(DEV(ndev), "failed to allocate nps-core-int%d\n", + NON_RING_MSIX_BASE); + return ret; + } + + qvec = kcalloc(NR_NON_RING_VECTORS, sizeof(*qvec), GFP_KERNEL); + if (!qvec) { + pci_disable_msix(pdev); + return -ENOMEM; + } + qvec->ndev = ndev; + + ndev->qvec = qvec; + ndev->num_vecs = NR_NON_RING_VECTORS; + snprintf(qvec->name, IRQ_NAMESZ, "nitrox-core-int%d", + NON_RING_MSIX_BASE); + + vec = ndev->iov.msix.vector; + ret = request_irq(vec, nps_core_int_isr, 0, qvec->name, qvec); + if (ret) { + dev_err(DEV(ndev), "irq failed for nitrox-core-int%d\n", + NON_RING_MSIX_BASE); + goto iov_irq_fail; + } + cpu = num_online_cpus(); + irq_set_affinity_hint(vec, get_cpu_mask(cpu)); + + tasklet_init(&qvec->resp_tasklet, nps_core_int_tasklet, + (unsigned long)qvec); + qvec->valid = true; + + return 0; + +iov_irq_fail: + nitrox_sriov_unregister_interrupts(ndev); + return ret; +} diff --git a/drivers/crypto/cavium/nitrox/nitrox_isr.h b/drivers/crypto/cavium/nitrox/nitrox_isr.h index 63418a6cc52c..1062c9336c1f 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_isr.h +++ b/drivers/crypto/cavium/nitrox/nitrox_isr.h @@ -6,5 +6,7 @@ int nitrox_register_interrupts(struct nitrox_device *ndev); void nitrox_unregister_interrupts(struct nitrox_device *ndev); +int nitrox_sriov_register_interupts(struct nitrox_device *ndev); +void nitrox_sriov_unregister_interrupts(struct nitrox_device *ndev); #endif /* __NITROX_ISR_H */ diff --git a/drivers/crypto/cavium/nitrox/nitrox_lib.c b/drivers/crypto/cavium/nitrox/nitrox_lib.c index 2260efa42308..9138bae12521 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_lib.c +++ b/drivers/crypto/cavium/nitrox/nitrox_lib.c @@ -158,12 +158,19 @@ static void destroy_crypto_dma_pool(struct nitrox_device *ndev) void *crypto_alloc_context(struct nitrox_device *ndev) { struct ctx_hdr *ctx; + struct crypto_ctx_hdr *chdr; void *vaddr; dma_addr_t dma; + chdr = kmalloc(sizeof(*chdr), GFP_KERNEL); + if (!chdr) + return NULL; + vaddr = dma_pool_zalloc(ndev->ctx_pool, GFP_KERNEL, &dma); - if (!vaddr) + if (!vaddr) { + kfree(chdr); return NULL; + } /* fill meta data */ ctx = vaddr; @@ -171,7 +178,11 @@ void *crypto_alloc_context(struct nitrox_device *ndev) ctx->dma = dma; ctx->ctx_dma = dma + sizeof(struct ctx_hdr); - return ((u8 *)vaddr + sizeof(struct ctx_hdr)); + chdr->pool = ndev->ctx_pool; + chdr->dma = dma; + chdr->vaddr = vaddr; + + return chdr; } /** @@ -180,13 +191,14 @@ void *crypto_alloc_context(struct nitrox_device *ndev) */ void crypto_free_context(void *ctx) { - struct ctx_hdr *ctxp; + struct crypto_ctx_hdr *ctxp; if (!ctx) return; - ctxp = (struct ctx_hdr *)((u8 *)ctx - sizeof(struct ctx_hdr)); - dma_pool_free(ctxp->pool, ctxp, ctxp->dma); + ctxp = ctx; + dma_pool_free(ctxp->pool, ctxp->vaddr, ctxp->dma); + kfree(ctxp); } /** diff --git a/drivers/crypto/cavium/nitrox/nitrox_main.c b/drivers/crypto/cavium/nitrox/nitrox_main.c index 6595c95af9f1..014e9863c20e 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_main.c +++ b/drivers/crypto/cavium/nitrox/nitrox_main.c @@ -1,6 +1,5 @@ #include #include -#include #include #include #include @@ -13,9 +12,9 @@ #include "nitrox_csr.h" #include "nitrox_hal.h" #include "nitrox_isr.h" +#include "nitrox_debugfs.h" #define CNN55XX_DEV_ID 0x12 -#define MAX_PF_QUEUES 64 #define UCODE_HLEN 48 #define SE_GROUP 0 diff --git a/drivers/crypto/cavium/nitrox/nitrox_mbx.c b/drivers/crypto/cavium/nitrox/nitrox_mbx.c new 
file mode 100644 index 000000000000..02ee95064841 --- /dev/null +++ b/drivers/crypto/cavium/nitrox/nitrox_mbx.c @@ -0,0 +1,204 @@ +// SPDX-License-Identifier: GPL-2.0 +#include + +#include "nitrox_csr.h" +#include "nitrox_hal.h" +#include "nitrox_dev.h" + +#define RING_TO_VFNO(_x, _y) ((_x) / (_y)) + +/** + * mbx_msg_type - Mailbox message types + */ +enum mbx_msg_type { + MBX_MSG_TYPE_NOP, + MBX_MSG_TYPE_REQ, + MBX_MSG_TYPE_ACK, + MBX_MSG_TYPE_NACK, +}; + +/** + * mbx_msg_opcode - Mailbox message opcodes + */ +enum mbx_msg_opcode { + MSG_OP_VF_MODE = 1, + MSG_OP_VF_UP, + MSG_OP_VF_DOWN, + MSG_OP_CHIPID_VFID, +}; + +struct pf2vf_work { + struct nitrox_vfdev *vfdev; + struct nitrox_device *ndev; + struct work_struct pf2vf_resp; +}; + +static inline u64 pf2vf_read_mbox(struct nitrox_device *ndev, int ring) +{ + u64 reg_addr; + + reg_addr = NPS_PKT_MBOX_VF_PF_PFDATAX(ring); + return nitrox_read_csr(ndev, reg_addr); +} + +static inline void pf2vf_write_mbox(struct nitrox_device *ndev, u64 value, + int ring) +{ + u64 reg_addr; + + reg_addr = NPS_PKT_MBOX_PF_VF_PFDATAX(ring); + nitrox_write_csr(ndev, reg_addr, value); +} + +static void pf2vf_send_response(struct nitrox_device *ndev, + struct nitrox_vfdev *vfdev) +{ + union mbox_msg msg; + + msg.value = vfdev->msg.value; + + switch (vfdev->msg.opcode) { + case MSG_OP_VF_MODE: + msg.data = ndev->mode; + break; + case MSG_OP_VF_UP: + vfdev->nr_queues = vfdev->msg.data; + atomic_set(&vfdev->state, __NDEV_READY); + break; + case MSG_OP_CHIPID_VFID: + msg.id.chipid = ndev->idx; + msg.id.vfid = vfdev->vfno; + break; + case MSG_OP_VF_DOWN: + vfdev->nr_queues = 0; + atomic_set(&vfdev->state, __NDEV_NOT_READY); + break; + default: + msg.type = MBX_MSG_TYPE_NOP; + break; + } + + if (msg.type == MBX_MSG_TYPE_NOP) + return; + + /* send ACK to VF */ + msg.type = MBX_MSG_TYPE_ACK; + pf2vf_write_mbox(ndev, msg.value, vfdev->ring); + + vfdev->msg.value = 0; + atomic64_inc(&vfdev->mbx_resp); +} + +static void pf2vf_resp_handler(struct work_struct *work) +{ + struct pf2vf_work *pf2vf_resp = container_of(work, struct pf2vf_work, + pf2vf_resp); + struct nitrox_vfdev *vfdev = pf2vf_resp->vfdev; + struct nitrox_device *ndev = pf2vf_resp->ndev; + + switch (vfdev->msg.type) { + case MBX_MSG_TYPE_REQ: + /* process the request from VF */ + pf2vf_send_response(ndev, vfdev); + break; + case MBX_MSG_TYPE_ACK: + case MBX_MSG_TYPE_NACK: + break; + }; + + kfree(pf2vf_resp); +} + +void nitrox_pf2vf_mbox_handler(struct nitrox_device *ndev) +{ + struct nitrox_vfdev *vfdev; + struct pf2vf_work *pfwork; + u64 value, reg_addr; + u32 i; + int vfno; + + /* loop for VF(0..63) */ + reg_addr = NPS_PKT_MBOX_INT_LO; + value = nitrox_read_csr(ndev, reg_addr); + for_each_set_bit(i, (const unsigned long *)&value, BITS_PER_LONG) { + /* get the vfno from ring */ + vfno = RING_TO_VFNO(i, ndev->iov.max_vf_queues); + vfdev = ndev->iov.vfdev + vfno; + vfdev->ring = i; + /* fill the vf mailbox data */ + vfdev->msg.value = pf2vf_read_mbox(ndev, vfdev->ring); + pfwork = kzalloc(sizeof(*pfwork), GFP_ATOMIC); + if (!pfwork) + continue; + + pfwork->vfdev = vfdev; + pfwork->ndev = ndev; + INIT_WORK(&pfwork->pf2vf_resp, pf2vf_resp_handler); + queue_work(ndev->iov.pf2vf_wq, &pfwork->pf2vf_resp); + /* clear the corresponding vf bit */ + nitrox_write_csr(ndev, reg_addr, BIT_ULL(i)); + } + + /* loop for VF(64..127) */ + reg_addr = NPS_PKT_MBOX_INT_HI; + value = nitrox_read_csr(ndev, reg_addr); + for_each_set_bit(i, (const unsigned long *)&value, BITS_PER_LONG) { + /* get the vfno from ring */ + vfno = 
RING_TO_VFNO(i + 64, ndev->iov.max_vf_queues); + vfdev = ndev->iov.vfdev + vfno; + vfdev->ring = (i + 64); + /* fill the vf mailbox data */ + vfdev->msg.value = pf2vf_read_mbox(ndev, vfdev->ring); + + pfwork = kzalloc(sizeof(*pfwork), GFP_ATOMIC); + if (!pfwork) + continue; + + pfwork->vfdev = vfdev; + pfwork->ndev = ndev; + INIT_WORK(&pfwork->pf2vf_resp, pf2vf_resp_handler); + queue_work(ndev->iov.pf2vf_wq, &pfwork->pf2vf_resp); + /* clear the corresponding vf bit */ + nitrox_write_csr(ndev, reg_addr, BIT_ULL(i)); + } +} + +int nitrox_mbox_init(struct nitrox_device *ndev) +{ + struct nitrox_vfdev *vfdev; + int i; + + ndev->iov.vfdev = kcalloc(ndev->iov.num_vfs, + sizeof(struct nitrox_vfdev), GFP_KERNEL); + if (!ndev->iov.vfdev) + return -ENOMEM; + + for (i = 0; i < ndev->iov.num_vfs; i++) { + vfdev = ndev->iov.vfdev + i; + vfdev->vfno = i; + } + + /* allocate pf2vf response workqueue */ + ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", 0, 0); + if (!ndev->iov.pf2vf_wq) { + kfree(ndev->iov.vfdev); + return -ENOMEM; + } + /* enable pf2vf mailbox interrupts */ + enable_pf2vf_mbox_interrupts(ndev); + + return 0; +} + +void nitrox_mbox_cleanup(struct nitrox_device *ndev) +{ + /* disable pf2vf mailbox interrupts */ + disable_pf2vf_mbox_interrupts(ndev); + /* destroy workqueue */ + if (ndev->iov.pf2vf_wq) + destroy_workqueue(ndev->iov.pf2vf_wq); + + kfree(ndev->iov.vfdev); + ndev->iov.pf2vf_wq = NULL; + ndev->iov.vfdev = NULL; +} diff --git a/drivers/crypto/cavium/nitrox/nitrox_mbx.h b/drivers/crypto/cavium/nitrox/nitrox_mbx.h new file mode 100644 index 000000000000..5008399775a9 --- /dev/null +++ b/drivers/crypto/cavium/nitrox/nitrox_mbx.h @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __NITROX_MBX_H +#define __NITROX_MBX_H + +int nitrox_mbox_init(struct nitrox_device *ndev); +void nitrox_mbox_cleanup(struct nitrox_device *ndev); +void nitrox_pf2vf_mbox_handler(struct nitrox_device *ndev); + +#endif /* __NITROX_MBX_H */ diff --git a/drivers/crypto/cavium/nitrox/nitrox_req.h b/drivers/crypto/cavium/nitrox/nitrox_req.h index d091b6f5f5dd..76c0f0be7233 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_req.h +++ b/drivers/crypto/cavium/nitrox/nitrox_req.h @@ -7,6 +7,9 @@ #include "nitrox_dev.h" +#define PENDING_SIG 0xFFFFFFFFFFFFFFFFUL +#define PRIO 4001 + /** * struct gphdr - General purpose Header * @param0: first parameter. @@ -46,13 +49,6 @@ union se_req_ctrl { } s; }; -struct nitrox_sglist { - u16 len; - u16 raz0; - u32 raz1; - dma_addr_t dma; -}; - #define MAX_IV_LEN 16 /** @@ -62,8 +58,10 @@ struct nitrox_sglist { * @ctx_handle: Crypto context handle. * @gph: GP Header * @ctrl: Request Information. 
- * @in: Input sglist - * @out: Output sglist + * @orh: ORH address + * @comp: completion address + * @src: Input sglist + * @dst: Output sglist */ struct se_crypto_request { u8 opcode; @@ -73,9 +71,8 @@ struct se_crypto_request { struct gphdr gph; union se_req_ctrl ctrl; - - u8 iv[MAX_IV_LEN]; - u16 ivsize; + u64 *orh; + u64 *comp; struct scatterlist *src; struct scatterlist *dst; @@ -110,6 +107,18 @@ enum flexi_cipher { CIPHER_INVALID }; +enum flexi_auth { + AUTH_NULL = 0, + AUTH_MD5, + AUTH_SHA1, + AUTH_SHA2_SHA224, + AUTH_SHA2_SHA256, + AUTH_SHA2_SHA384, + AUTH_SHA2_SHA512, + AUTH_GMAC, + AUTH_INVALID +}; + /** * struct crypto_keys - Crypto keys * @key: Encryption key or KEY1 for AES-XTS @@ -136,6 +145,32 @@ struct auth_keys { u8 opad[64]; }; +union fc_ctx_flags { + __be64 f; + struct { +#if defined(__BIG_ENDIAN_BITFIELD) + u64 cipher_type : 4; + u64 reserved_59 : 1; + u64 aes_keylen : 2; + u64 iv_source : 1; + u64 hash_type : 4; + u64 reserved_49_51 : 3; + u64 auth_input_type: 1; + u64 mac_len : 8; + u64 reserved_0_39 : 40; +#else + u64 reserved_0_39 : 40; + u64 mac_len : 8; + u64 auth_input_type: 1; + u64 reserved_49_51 : 3; + u64 hash_type : 4; + u64 iv_source : 1; + u64 aes_keylen : 2; + u64 reserved_59 : 1; + u64 cipher_type : 4; +#endif + } w0; +}; /** * struct flexi_crypto_context - Crypto context * @cipher_type: Encryption cipher type @@ -150,49 +185,30 @@ struct auth_keys { * @auth: Authentication keys */ struct flexi_crypto_context { - union { - __be64 flags; - struct { -#if defined(__BIG_ENDIAN_BITFIELD) - u64 cipher_type : 4; - u64 reserved_59 : 1; - u64 aes_keylen : 2; - u64 iv_source : 1; - u64 hash_type : 4; - u64 reserved_49_51 : 3; - u64 auth_input_type: 1; - u64 mac_len : 8; - u64 reserved_0_39 : 40; -#else - u64 reserved_0_39 : 40; - u64 mac_len : 8; - u64 auth_input_type: 1; - u64 reserved_49_51 : 3; - u64 hash_type : 4; - u64 iv_source : 1; - u64 aes_keylen : 2; - u64 reserved_59 : 1; - u64 cipher_type : 4; -#endif - } w0; - }; - + union fc_ctx_flags flags; struct crypto_keys crypto; struct auth_keys auth; }; +struct crypto_ctx_hdr { + struct dma_pool *pool; + dma_addr_t dma; + void *vaddr; +}; + struct nitrox_crypto_ctx { struct nitrox_device *ndev; union { u64 ctx_handle; struct flexi_crypto_context *fctx; } u; + struct crypto_ctx_hdr *chdr; }; struct nitrox_kcrypt_request { struct se_crypto_request creq; - struct nitrox_crypto_ctx *nctx; - struct skcipher_request *skreq; + u8 *src; + u8 *dst; }; /** @@ -369,26 +385,19 @@ struct nitrox_sgcomp { /* * strutct nitrox_sgtable - SG list information - * @map_cnt: Number of buffers mapped - * @nr_comp: Number of sglist components + * @sgmap_cnt: Number of buffers mapped * @total_bytes: Total bytes in sglist. - * @len: Total sglist components length. - * @dma: DMA address of sglist component. - * @dir: DMA direction. - * @buf: crypto request buffer. - * @sglist: SG list of input/output buffers. + * @sgcomp_len: Total sglist components length. + * @sgcomp_dma: DMA address of sglist component. + * @sg: crypto request buffer. * @sgcomp: sglist component for NITROX. 
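The flags word is kept as a big-endian image of this bitfield. As a sketch of how the setkey path later in this patch fills it (the AES-256-CBC field values here are illustrative, not taken from a specific hunk):

	union fc_ctx_flags flags;

	flags.f = 0;
	flags.w0.cipher_type = CIPHER_AES_CBC;	/* looked up in flexi_cipher_table */
	flags.w0.aes_keylen = 3;		/* flexi_aes_keylen(AES_KEYSIZE_256) */
	flags.w0.iv_source = IV_FROM_DPTR;	/* IV travels in the gather list */
	flags.f = cpu_to_be64(*(u64 *)&flags.w0);	/* commit the BE image */
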
*/ struct nitrox_sgtable { - u8 map_bufs_cnt; - u8 nr_sgcomp; + u8 sgmap_cnt; u16 total_bytes; - u32 len; - dma_addr_t dma; - enum dma_data_direction dir; - - struct scatterlist *buf; - struct nitrox_sglist *sglist; + u32 sgcomp_len; + dma_addr_t sgcomp_dma; + struct scatterlist *sg; struct nitrox_sgcomp *sgcomp; }; @@ -398,13 +407,11 @@ struct nitrox_sgtable { #define COMP_HLEN 8 struct resp_hdr { - u64 orh; - dma_addr_t orh_dma; - u64 completion; - dma_addr_t completion_dma; + u64 *orh; + u64 *completion; }; -typedef void (*completion_t)(struct skcipher_request *skreq, int err); +typedef void (*completion_t)(void *arg, int err); /** * struct nitrox_softreq - Represents the NIROX Request. @@ -427,7 +434,6 @@ struct nitrox_softreq { u32 flags; gfp_t gfp; atomic_t status; - bool inplace; struct nitrox_device *ndev; struct nitrox_cmdq *cmdq; @@ -440,7 +446,201 @@ struct nitrox_softreq { unsigned long tstamp; completion_t callback; - struct skcipher_request *skreq; + void *cb_arg; }; +static inline int flexi_aes_keylen(int keylen) +{ + int aes_keylen; + + switch (keylen) { + case AES_KEYSIZE_128: + aes_keylen = 1; + break; + case AES_KEYSIZE_192: + aes_keylen = 2; + break; + case AES_KEYSIZE_256: + aes_keylen = 3; + break; + default: + aes_keylen = -EINVAL; + break; + } + return aes_keylen; +} + +static inline void *alloc_req_buf(int nents, int extralen, gfp_t gfp) +{ + size_t size; + + size = sizeof(struct scatterlist) * nents; + size += extralen; + + return kzalloc(size, gfp); +} + +/** + * create_single_sg - Point SG entry to the data + * @sg: Destination SG list + * @buf: Data + * @buflen: Data length + * + * Returns next free entry in the destination SG list + **/ +static inline struct scatterlist *create_single_sg(struct scatterlist *sg, + void *buf, int buflen) +{ + sg_set_buf(sg, buf, buflen); + sg++; + return sg; +} + +/** + * create_multi_sg - Create multiple sg entries with buflen data length from + * source sglist + * @to_sg: Destination SG list + * @from_sg: Source SG list + * @buflen: Data length + * + * Returns next free entry in the destination SG list + **/ +static inline struct scatterlist *create_multi_sg(struct scatterlist *to_sg, + struct scatterlist *from_sg, + int buflen) +{ + struct scatterlist *sg = to_sg; + unsigned int sglen; + + for (; buflen; buflen -= sglen) { + sglen = from_sg->length; + if (sglen > buflen) + sglen = buflen; + + sg_set_buf(sg, sg_virt(from_sg), sglen); + from_sg = sg_next(from_sg); + sg++; + } + + return sg; +} + +static inline void set_orh_value(u64 *orh) +{ + WRITE_ONCE(*orh, PENDING_SIG); +} + +static inline void set_comp_value(u64 *comp) +{ + WRITE_ONCE(*comp, PENDING_SIG); +} + +static inline int alloc_src_req_buf(struct nitrox_kcrypt_request *nkreq, + int nents, int ivsize) +{ + struct se_crypto_request *creq = &nkreq->creq; + + nkreq->src = alloc_req_buf(nents, ivsize, creq->gfp); + if (!nkreq->src) + return -ENOMEM; + + return 0; +} + +static inline void nitrox_creq_copy_iv(char *dst, char *src, int size) +{ + memcpy(dst, src, size); +} + +static inline struct scatterlist *nitrox_creq_src_sg(char *iv, int ivsize) +{ + return (struct scatterlist *)(iv + ivsize); +} + +static inline void nitrox_creq_set_src_sg(struct nitrox_kcrypt_request *nkreq, + int nents, int ivsize, + struct scatterlist *src, int buflen) +{ + char *iv = nkreq->src; + struct scatterlist *sg; + struct se_crypto_request *creq = &nkreq->creq; + + creq->src = nitrox_creq_src_sg(iv, ivsize); + sg = creq->src; + sg_init_table(sg, nents); + + /* Input format: + * 
+----+----------------+ + * | IV | SRC sg entries | + * +----+----------------+ + */ + + /* IV */ + sg = create_single_sg(sg, iv, ivsize); + /* SRC entries */ + create_multi_sg(sg, src, buflen); +} + +static inline int alloc_dst_req_buf(struct nitrox_kcrypt_request *nkreq, + int nents) +{ + int extralen = ORH_HLEN + COMP_HLEN; + struct se_crypto_request *creq = &nkreq->creq; + + nkreq->dst = alloc_req_buf(nents, extralen, creq->gfp); + if (!nkreq->dst) + return -ENOMEM; + + return 0; +} + +static inline void nitrox_creq_set_orh(struct nitrox_kcrypt_request *nkreq) +{ + struct se_crypto_request *creq = &nkreq->creq; + + creq->orh = (u64 *)(nkreq->dst); + set_orh_value(creq->orh); +} + +static inline void nitrox_creq_set_comp(struct nitrox_kcrypt_request *nkreq) +{ + struct se_crypto_request *creq = &nkreq->creq; + + creq->comp = (u64 *)(nkreq->dst + ORH_HLEN); + set_comp_value(creq->comp); +} + +static inline struct scatterlist *nitrox_creq_dst_sg(char *dst) +{ + return (struct scatterlist *)(dst + ORH_HLEN + COMP_HLEN); +} + +static inline void nitrox_creq_set_dst_sg(struct nitrox_kcrypt_request *nkreq, + int nents, int ivsize, + struct scatterlist *dst, int buflen) +{ + struct se_crypto_request *creq = &nkreq->creq; + struct scatterlist *sg; + char *iv = nkreq->src; + + creq->dst = nitrox_creq_dst_sg(nkreq->dst); + sg = creq->dst; + sg_init_table(sg, nents); + + /* Output format: + * +-----+----+----------------+-----------------+ + * | ORH | IV | DST sg entries | COMPLETION Bytes| + * +-----+----+----------------+-----------------+ + */ + + /* ORH */ + sg = create_single_sg(sg, creq->orh, ORH_HLEN); + /* IV */ + sg = create_single_sg(sg, iv, ivsize); + /* DST entries */ + sg = create_multi_sg(sg, dst, buflen); + /* COMPLETION Bytes */ + create_single_sg(sg, creq->comp, COMP_HLEN); +} + #endif /* __NITROX_REQ_H */ diff --git a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c index 3987cd84c033..e34e4df8fd24 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c +++ b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c @@ -13,7 +13,6 @@ #define FDATA_SIZE 32 /* Base destination port for the solicited requests */ #define SOLICIT_BASE_DPORT 256 -#define PENDING_SIG 0xFFFFFFFFFFFFFFFFUL #define REQ_NOT_POSTED 1 #define REQ_BACKLOG 2 @@ -52,58 +51,26 @@ static inline int incr_index(int index, int count, int max) return index; } -/** - * dma_free_sglist - unmap and free the sg lists. 
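For context on the framing the nitrox_req.h helpers above produce, and which the request manager below no longer has to track entry by entry, a small sketch of the destination sizing mirroring alloc_dst_req_buf() plus the three fixed scatterlist entries wrapped around the caller's data:

	static size_t example_dst_buf_size(int dst_nents)
	{
		int nents = dst_nents + 3;	/* + ORH, IV and COMPLETION entries */

		/* ORH_HLEN and COMP_HLEN are the two 8-byte response words */
		return sizeof(struct scatterlist) * nents + ORH_HLEN + COMP_HLEN;
	}
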
- * @ndev: N5 device - * @sgtbl: SG table - */ static void softreq_unmap_sgbufs(struct nitrox_softreq *sr) { struct nitrox_device *ndev = sr->ndev; struct device *dev = DEV(ndev); - struct nitrox_sglist *sglist; - - /* unmap in sgbuf */ - sglist = sr->in.sglist; - if (!sglist) - goto out_unmap; - - /* unmap iv */ - dma_unmap_single(dev, sglist->dma, sglist->len, DMA_BIDIRECTIONAL); - /* unmpa src sglist */ - dma_unmap_sg(dev, sr->in.buf, (sr->in.map_bufs_cnt - 1), sr->in.dir); - /* unamp gather component */ - dma_unmap_single(dev, sr->in.dma, sr->in.len, DMA_TO_DEVICE); - kfree(sr->in.sglist); + + + dma_unmap_sg(dev, sr->in.sg, sr->in.sgmap_cnt, DMA_BIDIRECTIONAL); + dma_unmap_single(dev, sr->in.sgcomp_dma, sr->in.sgcomp_len, + DMA_TO_DEVICE); kfree(sr->in.sgcomp); - sr->in.sglist = NULL; - sr->in.buf = NULL; - sr->in.map_bufs_cnt = 0; - -out_unmap: - /* unmap out sgbuf */ - sglist = sr->out.sglist; - if (!sglist) - return; - - /* unmap orh */ - dma_unmap_single(dev, sr->resp.orh_dma, ORH_HLEN, sr->out.dir); - - /* unmap dst sglist */ - if (!sr->inplace) { - dma_unmap_sg(dev, sr->out.buf, (sr->out.map_bufs_cnt - 3), - sr->out.dir); - } - /* unmap completion */ - dma_unmap_single(dev, sr->resp.completion_dma, COMP_HLEN, sr->out.dir); + sr->in.sg = NULL; + sr->in.sgmap_cnt = 0; - /* unmap scatter component */ - dma_unmap_single(dev, sr->out.dma, sr->out.len, DMA_TO_DEVICE); - kfree(sr->out.sglist); + dma_unmap_sg(dev, sr->out.sg, sr->out.sgmap_cnt, + DMA_BIDIRECTIONAL); + dma_unmap_single(dev, sr->out.sgcomp_dma, sr->out.sgcomp_len, + DMA_TO_DEVICE); kfree(sr->out.sgcomp); - sr->out.sglist = NULL; - sr->out.buf = NULL; - sr->out.map_bufs_cnt = 0; + sr->out.sg = NULL; + sr->out.sgmap_cnt = 0; } static void softreq_destroy(struct nitrox_softreq *sr) @@ -116,7 +83,7 @@ static void softreq_destroy(struct nitrox_softreq *sr) * create_sg_component - create SG components for N5 device. 
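A rough sketch of the component count implied by the four-entries-per-component packing documented here; the formula is an inference from the populate loop below, e.g. 10 mapped entries need 3 components, the last one padded with zeroed slots:

	static int example_nr_sgcomp(int map_nents)
	{
		/* four (len, dma) pairs fit in each nitrox_sgcomp */
		return (map_nents + 3) / 4;
	}
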
* @sr: Request structure * @sgtbl: SG table - * @nr_comp: total number of components required + * @map_nents: number of dma mapped entries * * Component structure * @@ -140,7 +107,7 @@ static int create_sg_component(struct nitrox_softreq *sr, { struct nitrox_device *ndev = sr->ndev; struct nitrox_sgcomp *sgcomp; - struct nitrox_sglist *sglist; + struct scatterlist *sg; dma_addr_t dma; size_t sz_comp; int i, j, nr_sgcomp; @@ -154,17 +121,15 @@ static int create_sg_component(struct nitrox_softreq *sr, return -ENOMEM; sgtbl->sgcomp = sgcomp; - sgtbl->nr_sgcomp = nr_sgcomp; - sglist = sgtbl->sglist; + sg = sgtbl->sg; /* populate device sg component */ for (i = 0; i < nr_sgcomp; i++) { - for (j = 0; j < 4; j++) { - sgcomp->len[j] = cpu_to_be16(sglist->len); - sgcomp->dma[j] = cpu_to_be64(sglist->dma); - sglist++; + for (j = 0; j < 4 && sg; j++) { + sgcomp[i].len[j] = cpu_to_be16(sg_dma_len(sg)); + sgcomp[i].dma[j] = cpu_to_be64(sg_dma_address(sg)); + sg = sg_next(sg); } - sgcomp++; } /* map the device sg component */ dma = dma_map_single(DEV(ndev), sgtbl->sgcomp, sz_comp, DMA_TO_DEVICE); @@ -174,8 +139,8 @@ static int create_sg_component(struct nitrox_softreq *sr, return -ENOMEM; } - sgtbl->dma = dma; - sgtbl->len = sz_comp; + sgtbl->sgcomp_dma = dma; + sgtbl->sgcomp_len = sz_comp; return 0; } @@ -193,66 +158,27 @@ static int dma_map_inbufs(struct nitrox_softreq *sr, { struct device *dev = DEV(sr->ndev); struct scatterlist *sg = req->src; - struct nitrox_sglist *glist; int i, nents, ret = 0; - dma_addr_t dma; - size_t sz; - nents = sg_nents(req->src); + nents = dma_map_sg(dev, req->src, sg_nents(req->src), + DMA_BIDIRECTIONAL); + if (!nents) + return -EINVAL; - /* creater gather list IV and src entries */ - sz = roundup((1 + nents), 4) * sizeof(*glist); - glist = kzalloc(sz, sr->gfp); - if (!glist) - return -ENOMEM; + for_each_sg(req->src, sg, nents, i) + sr->in.total_bytes += sg_dma_len(sg); - sr->in.sglist = glist; - /* map IV */ - dma = dma_map_single(dev, &req->iv, req->ivsize, DMA_BIDIRECTIONAL); - if (dma_mapping_error(dev, dma)) { - ret = -EINVAL; - goto iv_map_err; - } - - sr->in.dir = (req->src == req->dst) ? 
DMA_BIDIRECTIONAL : DMA_TO_DEVICE; - /* map src entries */ - nents = dma_map_sg(dev, req->src, nents, sr->in.dir); - if (!nents) { - ret = -EINVAL; - goto src_map_err; - } - sr->in.buf = req->src; - - /* store the mappings */ - glist->len = req->ivsize; - glist->dma = dma; - glist++; - sr->in.total_bytes += req->ivsize; - - for_each_sg(req->src, sg, nents, i) { - glist->len = sg_dma_len(sg); - glist->dma = sg_dma_address(sg); - sr->in.total_bytes += glist->len; - glist++; - } - /* roundup map count to align with entires in sg component */ - sr->in.map_bufs_cnt = (1 + nents); - - /* create NITROX gather component */ - ret = create_sg_component(sr, &sr->in, sr->in.map_bufs_cnt); + sr->in.sg = req->src; + sr->in.sgmap_cnt = nents; + ret = create_sg_component(sr, &sr->in, sr->in.sgmap_cnt); if (ret) goto incomp_err; return 0; incomp_err: - dma_unmap_sg(dev, req->src, nents, sr->in.dir); - sr->in.map_bufs_cnt = 0; -src_map_err: - dma_unmap_single(dev, dma, req->ivsize, DMA_BIDIRECTIONAL); -iv_map_err: - kfree(sr->in.sglist); - sr->in.sglist = NULL; + dma_unmap_sg(dev, req->src, nents, DMA_BIDIRECTIONAL); + sr->in.sgmap_cnt = 0; return ret; } @@ -260,104 +186,25 @@ static int dma_map_outbufs(struct nitrox_softreq *sr, struct se_crypto_request *req) { struct device *dev = DEV(sr->ndev); - struct nitrox_sglist *glist = sr->in.sglist; - struct nitrox_sglist *slist; - struct scatterlist *sg; - int i, nents, map_bufs_cnt, ret = 0; - size_t sz; - - nents = sg_nents(req->dst); - - /* create scatter list ORH, IV, dst entries and Completion header */ - sz = roundup((3 + nents), 4) * sizeof(*slist); - slist = kzalloc(sz, sr->gfp); - if (!slist) - return -ENOMEM; - - sr->out.sglist = slist; - sr->out.dir = DMA_BIDIRECTIONAL; - /* map ORH */ - sr->resp.orh_dma = dma_map_single(dev, &sr->resp.orh, ORH_HLEN, - sr->out.dir); - if (dma_mapping_error(dev, sr->resp.orh_dma)) { - ret = -EINVAL; - goto orh_map_err; - } + int nents, ret = 0; - /* map completion */ - sr->resp.completion_dma = dma_map_single(dev, &sr->resp.completion, - COMP_HLEN, sr->out.dir); - if (dma_mapping_error(dev, sr->resp.completion_dma)) { - ret = -EINVAL; - goto compl_map_err; - } + nents = dma_map_sg(dev, req->dst, sg_nents(req->dst), + DMA_BIDIRECTIONAL); + if (!nents) + return -EINVAL; - sr->inplace = (req->src == req->dst) ? 
true : false; - /* out place */ - if (!sr->inplace) { - nents = dma_map_sg(dev, req->dst, nents, sr->out.dir); - if (!nents) { - ret = -EINVAL; - goto dst_map_err; - } - } - sr->out.buf = req->dst; - - /* store the mappings */ - /* orh */ - slist->len = ORH_HLEN; - slist->dma = sr->resp.orh_dma; - slist++; - - /* copy the glist mappings */ - if (sr->inplace) { - nents = sr->in.map_bufs_cnt - 1; - map_bufs_cnt = sr->in.map_bufs_cnt; - while (map_bufs_cnt--) { - slist->len = glist->len; - slist->dma = glist->dma; - slist++; - glist++; - } - } else { - /* copy iv mapping */ - slist->len = glist->len; - slist->dma = glist->dma; - slist++; - /* copy remaining maps */ - for_each_sg(req->dst, sg, nents, i) { - slist->len = sg_dma_len(sg); - slist->dma = sg_dma_address(sg); - slist++; - } - } - - /* completion */ - slist->len = COMP_HLEN; - slist->dma = sr->resp.completion_dma; - - sr->out.map_bufs_cnt = (3 + nents); - - ret = create_sg_component(sr, &sr->out, sr->out.map_bufs_cnt); + sr->out.sg = req->dst; + sr->out.sgmap_cnt = nents; + ret = create_sg_component(sr, &sr->out, sr->out.sgmap_cnt); if (ret) goto outcomp_map_err; return 0; outcomp_map_err: - if (!sr->inplace) - dma_unmap_sg(dev, req->dst, nents, sr->out.dir); - sr->out.map_bufs_cnt = 0; - sr->out.buf = NULL; -dst_map_err: - dma_unmap_single(dev, sr->resp.completion_dma, COMP_HLEN, sr->out.dir); - sr->resp.completion_dma = 0; -compl_map_err: - dma_unmap_single(dev, sr->resp.orh_dma, ORH_HLEN, sr->out.dir); - sr->resp.orh_dma = 0; -orh_map_err: - kfree(sr->out.sglist); - sr->out.sglist = NULL; + dma_unmap_sg(dev, req->dst, nents, DMA_BIDIRECTIONAL); + sr->out.sgmap_cnt = 0; + sr->out.sg = NULL; return ret; } @@ -422,6 +269,8 @@ static inline bool cmdq_full(struct nitrox_cmdq *cmdq, int qlen) smp_mb__after_atomic(); return true; } + /* sync with other cpus */ + smp_mb__after_atomic(); return false; } @@ -477,8 +326,6 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq) spin_lock_bh(&cmdq->backlog_qlock); list_for_each_entry_safe(sr, tmp, &cmdq->backlog_head, backlog) { - struct skcipher_request *skreq; - /* submit until space available */ if (unlikely(cmdq_full(cmdq, ndev->qlen))) { ret = -ENOSPC; @@ -490,12 +337,8 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq) /* sync with other cpus */ smp_mb__after_atomic(); - skreq = sr->skreq; /* post the command */ post_se_instr(sr, cmdq); - - /* backlog requests are posted, wakeup with -EINPROGRESS */ - skcipher_request_complete(skreq, -EINPROGRESS); } spin_unlock_bh(&cmdq->backlog_qlock); @@ -518,7 +361,7 @@ static int nitrox_enqueue_request(struct nitrox_softreq *sr) } /* add to backlog list */ backlog_list_add(sr, cmdq); - return -EBUSY; + return -EINPROGRESS; } post_se_instr(sr, cmdq); @@ -535,7 +378,7 @@ static int nitrox_enqueue_request(struct nitrox_softreq *sr) int nitrox_process_se_request(struct nitrox_device *ndev, struct se_crypto_request *req, completion_t callback, - struct skcipher_request *skreq) + void *cb_arg) { struct nitrox_softreq *sr; dma_addr_t ctx_handle = 0; @@ -552,12 +395,12 @@ int nitrox_process_se_request(struct nitrox_device *ndev, sr->flags = req->flags; sr->gfp = req->gfp; sr->callback = callback; - sr->skreq = skreq; + sr->cb_arg = cb_arg; atomic_set(&sr->status, REQ_NOT_POSTED); - WRITE_ONCE(sr->resp.orh, PENDING_SIG); - WRITE_ONCE(sr->resp.completion, PENDING_SIG); + sr->resp.orh = req->orh; + sr->resp.completion = req->comp; ret = softreq_map_iobuf(sr, req); if (ret) { @@ -598,13 +441,13 @@ int nitrox_process_se_request(struct nitrox_device 
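With backlogged submissions now completing through the callback rather than via skcipher_request_complete(), a caller-side sketch of the new contract (my_done_cb, my_arg and free_request_buffers are placeholder names; this mirrors how nitrox_skcipher_crypt() later in the patch consumes the API):

	int ret = nitrox_process_se_request(ndev, creq, my_done_cb, my_arg);

	if (ret == -EINPROGRESS)
		return ret;	/* posted or backlogged; my_done_cb() reports status */
	if (ret)
		free_request_buffers();	/* synchronous failure: caller cleans up */
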
*ndev, /* fill the packet instruction */ /* word 0 */ - sr->instr.dptr0 = cpu_to_be64(sr->in.dma); + sr->instr.dptr0 = cpu_to_be64(sr->in.sgcomp_dma); /* word 1 */ sr->instr.ih.value = 0; sr->instr.ih.s.g = 1; - sr->instr.ih.s.gsz = sr->in.map_bufs_cnt; - sr->instr.ih.s.ssz = sr->out.map_bufs_cnt; + sr->instr.ih.s.gsz = sr->in.sgmap_cnt; + sr->instr.ih.s.ssz = sr->out.sgmap_cnt; sr->instr.ih.s.fsz = FDATA_SIZE + sizeof(struct gphdr); sr->instr.ih.s.tlen = sr->instr.ih.s.fsz + sr->in.total_bytes; sr->instr.ih.value = cpu_to_be64(sr->instr.ih.value); @@ -626,11 +469,11 @@ int nitrox_process_se_request(struct nitrox_device *ndev, /* word 4 */ sr->instr.slc.value[0] = 0; - sr->instr.slc.s.ssz = sr->out.map_bufs_cnt; + sr->instr.slc.s.ssz = sr->out.sgmap_cnt; sr->instr.slc.value[0] = cpu_to_be64(sr->instr.slc.value[0]); /* word 5 */ - sr->instr.slc.s.rptr = cpu_to_be64(sr->out.dma); + sr->instr.slc.s.rptr = cpu_to_be64(sr->out.sgcomp_dma); /* * No conversion for front data, @@ -664,6 +507,24 @@ void backlog_qflush_work(struct work_struct *work) post_backlog_cmds(cmdq); } +static bool sr_completed(struct nitrox_softreq *sr) +{ + u64 orh = READ_ONCE(*sr->resp.orh); + unsigned long timeout = jiffies + msecs_to_jiffies(1); + + if ((orh != PENDING_SIG) && (orh & 0xff)) + return true; + + while (READ_ONCE(*sr->resp.completion) == PENDING_SIG) { + if (time_after(jiffies, timeout)) { + pr_err("comp not done\n"); + return false; + } + } + + return true; +} + /** * process_request_list - process completed requests * @ndev: N5 device @@ -675,8 +536,6 @@ static void process_response_list(struct nitrox_cmdq *cmdq) { struct nitrox_device *ndev = cmdq->ndev; struct nitrox_softreq *sr; - struct skcipher_request *skreq; - completion_t callback; int req_completed = 0, err = 0, budget; /* check all pending requests */ @@ -691,13 +550,13 @@ static void process_response_list(struct nitrox_cmdq *cmdq) break; /* check orh and completion bytes updates */ - if (READ_ONCE(sr->resp.orh) == READ_ONCE(sr->resp.completion)) { + if (!sr_completed(sr)) { /* request not completed, check for timeout */ if (!cmd_timeout(sr->tstamp, ndev->timeout)) break; dev_err_ratelimited(DEV(ndev), "Request timeout, orh 0x%016llx\n", - READ_ONCE(sr->resp.orh)); + READ_ONCE(*sr->resp.orh)); } atomic_dec(&cmdq->pending_count); atomic64_inc(&ndev->stats.completed); @@ -706,15 +565,12 @@ static void process_response_list(struct nitrox_cmdq *cmdq) /* remove from response list */ response_list_del(sr, cmdq); - callback = sr->callback; - skreq = sr->skreq; - /* ORH error code */ - err = READ_ONCE(sr->resp.orh) & 0xff; + err = READ_ONCE(*sr->resp.orh) & 0xff; softreq_destroy(sr); - if (callback) - callback(skreq, err); + if (sr->callback) + sr->callback(sr->cb_arg, err); req_completed++; } diff --git a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c new file mode 100644 index 000000000000..d4935d6cefdd --- /dev/null +++ b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c @@ -0,0 +1,498 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "nitrox_dev.h" +#include "nitrox_common.h" +#include "nitrox_req.h" + +struct nitrox_cipher { + const char *name; + enum flexi_cipher value; +}; + +/** + * supported cipher list + */ +static const struct nitrox_cipher flexi_cipher_table[] = { + { "null", CIPHER_NULL }, + { "cbc(des3_ede)", CIPHER_3DES_CBC }, + { "ecb(des3_ede)", CIPHER_3DES_ECB }, + { "cbc(aes)", CIPHER_AES_CBC }, + 
{ "ecb(aes)", CIPHER_AES_ECB }, + { "cfb(aes)", CIPHER_AES_CFB }, + { "rfc3686(ctr(aes))", CIPHER_AES_CTR }, + { "xts(aes)", CIPHER_AES_XTS }, + { "cts(cbc(aes))", CIPHER_AES_CBC_CTS }, + { NULL, CIPHER_INVALID } +}; + +static enum flexi_cipher flexi_cipher_type(const char *name) +{ + const struct nitrox_cipher *cipher = flexi_cipher_table; + + while (cipher->name) { + if (!strcmp(cipher->name, name)) + break; + cipher++; + } + return cipher->value; +} + +static int nitrox_skcipher_init(struct crypto_skcipher *tfm) +{ + struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(tfm); + struct crypto_ctx_hdr *chdr; + + /* get the first device */ + nctx->ndev = nitrox_get_first_device(); + if (!nctx->ndev) + return -ENODEV; + + /* allocate nitrox crypto context */ + chdr = crypto_alloc_context(nctx->ndev); + if (!chdr) { + nitrox_put_device(nctx->ndev); + return -ENOMEM; + } + nctx->chdr = chdr; + nctx->u.ctx_handle = (uintptr_t)((u8 *)chdr->vaddr + + sizeof(struct ctx_hdr)); + crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(tfm) + + sizeof(struct nitrox_kcrypt_request)); + return 0; +} + +static void nitrox_skcipher_exit(struct crypto_skcipher *tfm) +{ + struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(tfm); + + /* free the nitrox crypto context */ + if (nctx->u.ctx_handle) { + struct flexi_crypto_context *fctx = nctx->u.fctx; + + memzero_explicit(&fctx->crypto, sizeof(struct crypto_keys)); + memzero_explicit(&fctx->auth, sizeof(struct auth_keys)); + crypto_free_context((void *)nctx->chdr); + } + nitrox_put_device(nctx->ndev); + + nctx->u.ctx_handle = 0; + nctx->ndev = NULL; +} + +static inline int nitrox_skcipher_setkey(struct crypto_skcipher *cipher, + int aes_keylen, const u8 *key, + unsigned int keylen) +{ + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct nitrox_crypto_ctx *nctx = crypto_tfm_ctx(tfm); + struct flexi_crypto_context *fctx; + union fc_ctx_flags *flags; + enum flexi_cipher cipher_type; + const char *name; + + name = crypto_tfm_alg_name(tfm); + cipher_type = flexi_cipher_type(name); + if (unlikely(cipher_type == CIPHER_INVALID)) { + pr_err("unsupported cipher: %s\n", name); + return -EINVAL; + } + + /* fill crypto context */ + fctx = nctx->u.fctx; + flags = &fctx->flags; + flags->f = 0; + flags->w0.cipher_type = cipher_type; + flags->w0.aes_keylen = aes_keylen; + flags->w0.iv_source = IV_FROM_DPTR; + flags->f = cpu_to_be64(*(u64 *)&flags->w0); + /* copy the key to context */ + memcpy(fctx->crypto.u.key, key, keylen); + + return 0; +} + +static int nitrox_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, + unsigned int keylen) +{ + int aes_keylen; + + aes_keylen = flexi_aes_keylen(keylen); + if (aes_keylen < 0) { + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + return nitrox_skcipher_setkey(cipher, aes_keylen, key, keylen); +} + +static int alloc_src_sglist(struct skcipher_request *skreq, int ivsize) +{ + struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq); + int nents = sg_nents(skreq->src) + 1; + int ret; + + /* Allocate buffer to hold IV and input scatterlist array */ + ret = alloc_src_req_buf(nkreq, nents, ivsize); + if (ret) + return ret; + + nitrox_creq_copy_iv(nkreq->src, skreq->iv, ivsize); + nitrox_creq_set_src_sg(nkreq, nents, ivsize, skreq->src, + skreq->cryptlen); + + return 0; +} + +static int alloc_dst_sglist(struct skcipher_request *skreq, int ivsize) +{ + struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq); + int nents = sg_nents(skreq->dst) + 3; + int ret; + + /* 
Allocate buffer to hold ORH, COMPLETION and output scatterlist + * array + */ + ret = alloc_dst_req_buf(nkreq, nents); + if (ret) + return ret; + + nitrox_creq_set_orh(nkreq); + nitrox_creq_set_comp(nkreq); + nitrox_creq_set_dst_sg(nkreq, nents, ivsize, skreq->dst, + skreq->cryptlen); + + return 0; +} + +static void free_src_sglist(struct skcipher_request *skreq) +{ + struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq); + + kfree(nkreq->src); +} + +static void free_dst_sglist(struct skcipher_request *skreq) +{ + struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq); + + kfree(nkreq->dst); +} + +static void nitrox_skcipher_callback(void *arg, int err) +{ + struct skcipher_request *skreq = arg; + + free_src_sglist(skreq); + free_dst_sglist(skreq); + if (err) { + pr_err_ratelimited("request failed status 0x%0x\n", err); + err = -EINVAL; + } + + skcipher_request_complete(skreq, err); +} + +static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc) +{ + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq); + struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher); + struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq); + int ivsize = crypto_skcipher_ivsize(cipher); + struct se_crypto_request *creq; + int ret; + + creq = &nkreq->creq; + creq->flags = skreq->base.flags; + creq->gfp = (skreq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? + GFP_KERNEL : GFP_ATOMIC; + + /* fill the request */ + creq->ctrl.value = 0; + creq->opcode = FLEXI_CRYPTO_ENCRYPT_HMAC; + creq->ctrl.s.arg = (enc ? ENCRYPT : DECRYPT); + /* param0: length of the data to be encrypted */ + creq->gph.param0 = cpu_to_be16(skreq->cryptlen); + creq->gph.param1 = 0; + /* param2: encryption data offset */ + creq->gph.param2 = cpu_to_be16(ivsize); + creq->gph.param3 = 0; + + creq->ctx_handle = nctx->u.ctx_handle; + creq->ctrl.s.ctxl = sizeof(struct flexi_crypto_context); + + ret = alloc_src_sglist(skreq, ivsize); + if (ret) + return ret; + + ret = alloc_dst_sglist(skreq, ivsize); + if (ret) { + free_src_sglist(skreq); + return ret; + } + + /* send the crypto request */ + return nitrox_process_se_request(nctx->ndev, creq, + nitrox_skcipher_callback, skreq); +} + +static int nitrox_aes_encrypt(struct skcipher_request *skreq) +{ + return nitrox_skcipher_crypt(skreq, true); +} + +static int nitrox_aes_decrypt(struct skcipher_request *skreq) +{ + return nitrox_skcipher_crypt(skreq, false); +} + +static int nitrox_3des_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + if (keylen != DES3_EDE_KEY_SIZE) { + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + return nitrox_skcipher_setkey(cipher, 0, key, keylen); +} + +static int nitrox_3des_encrypt(struct skcipher_request *skreq) +{ + return nitrox_skcipher_crypt(skreq, true); +} + +static int nitrox_3des_decrypt(struct skcipher_request *skreq) +{ + return nitrox_skcipher_crypt(skreq, false); +} + +static int nitrox_aes_xts_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct nitrox_crypto_ctx *nctx = crypto_tfm_ctx(tfm); + struct flexi_crypto_context *fctx; + int aes_keylen, ret; + + ret = xts_check_key(tfm, key, keylen); + if (ret) + return ret; + + keylen /= 2; + + aes_keylen = flexi_aes_keylen(keylen); + if (aes_keylen < 0) { + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + fctx = nctx->u.fctx; + /* copy KEY2 */ + 
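The XTS key arrives as KEY1 || KEY2, and this handler splits it in half after xts_check_key(). A self-contained sketch of that split (the helper and destination names are illustrative, not part of the driver):

	static void example_xts_key_split(u8 *key1, u8 *key2,
					  const u8 *key, unsigned int keylen)
	{
		keylen /= 2;			/* combined length -> half per key */
		memcpy(key1, key, keylen);		/* KEY1, via setkey below */
		memcpy(key2, key + keylen, keylen);	/* KEY2, fctx->auth.u.key2 */
	}
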
memcpy(fctx->auth.u.key2, (key + keylen), keylen); + + return nitrox_skcipher_setkey(cipher, aes_keylen, key, keylen); +} + +static int nitrox_aes_ctr_rfc3686_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct nitrox_crypto_ctx *nctx = crypto_tfm_ctx(tfm); + struct flexi_crypto_context *fctx; + int aes_keylen; + + if (keylen < CTR_RFC3686_NONCE_SIZE) + return -EINVAL; + + fctx = nctx->u.fctx; + + memcpy(fctx->crypto.iv, key + (keylen - CTR_RFC3686_NONCE_SIZE), + CTR_RFC3686_NONCE_SIZE); + + keylen -= CTR_RFC3686_NONCE_SIZE; + + aes_keylen = flexi_aes_keylen(keylen); + if (aes_keylen < 0) { + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + return nitrox_skcipher_setkey(cipher, aes_keylen, key, keylen); +} + +static struct skcipher_alg nitrox_skciphers[] = { { + .base = { + .cra_name = "cbc(aes)", + .cra_driver_name = "n5_cbc(aes)", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = nitrox_aes_setkey, + .encrypt = nitrox_aes_encrypt, + .decrypt = nitrox_aes_decrypt, + .init = nitrox_skcipher_init, + .exit = nitrox_skcipher_exit, +}, { + .base = { + .cra_name = "ecb(aes)", + .cra_driver_name = "n5_ecb(aes)", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = nitrox_aes_setkey, + .encrypt = nitrox_aes_encrypt, + .decrypt = nitrox_aes_decrypt, + .init = nitrox_skcipher_init, + .exit = nitrox_skcipher_exit, +}, { + .base = { + .cra_name = "cfb(aes)", + .cra_driver_name = "n5_cfb(aes)", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = nitrox_aes_setkey, + .encrypt = nitrox_aes_encrypt, + .decrypt = nitrox_aes_decrypt, + .init = nitrox_skcipher_init, + .exit = nitrox_skcipher_exit, +}, { + .base = { + .cra_name = "xts(aes)", + .cra_driver_name = "n5_xts(aes)", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = nitrox_aes_xts_setkey, + .encrypt = nitrox_aes_encrypt, + .decrypt = nitrox_aes_decrypt, + .init = nitrox_skcipher_init, + .exit = nitrox_skcipher_exit, +}, { + .base = { + .cra_name = "rfc3686(ctr(aes))", + .cra_driver_name = "n5_rfc3686(ctr(aes))", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, + .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, + .ivsize = CTR_RFC3686_IV_SIZE, + .init = nitrox_skcipher_init, + .exit = 
nitrox_skcipher_exit, + .setkey = nitrox_aes_ctr_rfc3686_setkey, + .encrypt = nitrox_aes_encrypt, + .decrypt = nitrox_aes_decrypt, +}, { + .base = { + .cra_name = "cts(cbc(aes))", + .cra_driver_name = "n5_cts(cbc(aes))", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_type = &crypto_ablkcipher_type, + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = nitrox_aes_setkey, + .encrypt = nitrox_aes_encrypt, + .decrypt = nitrox_aes_decrypt, + .init = nitrox_skcipher_init, + .exit = nitrox_skcipher_exit, +}, { + .base = { + .cra_name = "cbc(des3_ede)", + .cra_driver_name = "n5_cbc(des3_ede)", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .setkey = nitrox_3des_setkey, + .encrypt = nitrox_3des_encrypt, + .decrypt = nitrox_3des_decrypt, + .init = nitrox_skcipher_init, + .exit = nitrox_skcipher_exit, +}, { + .base = { + .cra_name = "ecb(des3_ede)", + .cra_driver_name = "n5_ecb(des3_ede)", + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_alignmask = 0, + .cra_module = THIS_MODULE, + }, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .setkey = nitrox_3des_setkey, + .encrypt = nitrox_3des_encrypt, + .decrypt = nitrox_3des_decrypt, + .init = nitrox_skcipher_init, + .exit = nitrox_skcipher_exit, +} + +}; + +int nitrox_register_skciphers(void) +{ + return crypto_register_skciphers(nitrox_skciphers, + ARRAY_SIZE(nitrox_skciphers)); +} + +void nitrox_unregister_skciphers(void) +{ + crypto_unregister_skciphers(nitrox_skciphers, + ARRAY_SIZE(nitrox_skciphers)); +} diff --git a/drivers/crypto/cavium/nitrox/nitrox_sriov.c b/drivers/crypto/cavium/nitrox/nitrox_sriov.c index 30c0aa874583..bf439d8256ba 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_sriov.c +++ b/drivers/crypto/cavium/nitrox/nitrox_sriov.c @@ -6,7 +6,12 @@ #include "nitrox_hal.h" #include "nitrox_common.h" #include "nitrox_isr.h" +#include "nitrox_mbx.h" +/** + * num_vfs_valid - validate VF count + * @num_vfs: number of VF(s) + */ static inline bool num_vfs_valid(int num_vfs) { bool valid = false; @@ -48,7 +53,32 @@ static inline enum vf_mode num_vfs_to_mode(int num_vfs) return mode; } -static void pf_sriov_cleanup(struct nitrox_device *ndev) +static inline int vf_mode_to_nr_queues(enum vf_mode mode) +{ + int nr_queues = 0; + + switch (mode) { + case __NDEV_MODE_PF: + nr_queues = MAX_PF_QUEUES; + break; + case __NDEV_MODE_VF16: + nr_queues = 8; + break; + case __NDEV_MODE_VF32: + nr_queues = 4; + break; + case __NDEV_MODE_VF64: + nr_queues = 2; + break; + case __NDEV_MODE_VF128: + nr_queues = 1; + break; + } + + return nr_queues; +} + +static void nitrox_pf_cleanup(struct nitrox_device *ndev) { /* PF has no queues in SR-IOV mode */ atomic_set(&ndev->state, __NDEV_NOT_READY); @@ -60,7 +90,11 @@ static void pf_sriov_cleanup(struct nitrox_device *ndev) nitrox_common_sw_cleanup(ndev); } -static int pf_sriov_init(struct nitrox_device *ndev) +/** + * nitrox_pf_reinit - re-initialize PF resources once SR-IOV is disabled + 
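A worked example of the bookkeeping nitrox_sriov_enable() performs with these helpers, assuming a request for 64 VFs:

	ndev->mode = num_vfs_to_mode(64);	/* __NDEV_MODE_VF64 */
	ndev->iov.num_vfs = 64;
	ndev->iov.max_vf_queues = vf_mode_to_nr_queues(ndev->mode);	/* 2 */
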
* @ndev: NITROX device + */ +static int nitrox_pf_reinit(struct nitrox_device *ndev) { int err; @@ -86,6 +120,33 @@ static int pf_sriov_init(struct nitrox_device *ndev) return nitrox_crypto_register(); } +static void nitrox_sriov_cleanup(struct nitrox_device *ndev) +{ + /* unregister interrupts for PF in SR-IOV */ + nitrox_sriov_unregister_interrupts(ndev); + nitrox_mbox_cleanup(ndev); +} + +static int nitrox_sriov_init(struct nitrox_device *ndev) +{ + int ret; + + /* register interrupts for PF in SR-IOV */ + ret = nitrox_sriov_register_interupts(ndev); + if (ret) + return ret; + + ret = nitrox_mbox_init(ndev); + if (ret) + goto sriov_init_fail; + + return 0; + +sriov_init_fail: + nitrox_sriov_cleanup(ndev); + return ret; +} + static int nitrox_sriov_enable(struct pci_dev *pdev, int num_vfs) { struct nitrox_device *ndev = pci_get_drvdata(pdev); @@ -106,17 +167,32 @@ static int nitrox_sriov_enable(struct pci_dev *pdev, int num_vfs) } dev_info(DEV(ndev), "Enabled VF(s) %d\n", num_vfs); - ndev->num_vfs = num_vfs; ndev->mode = num_vfs_to_mode(num_vfs); + ndev->iov.num_vfs = num_vfs; + ndev->iov.max_vf_queues = vf_mode_to_nr_queues(ndev->mode); /* set bit in flags */ set_bit(__NDEV_SRIOV_BIT, &ndev->flags); /* cleanup PF resources */ - pf_sriov_cleanup(ndev); + nitrox_pf_cleanup(ndev); - config_nps_core_vfcfg_mode(ndev, ndev->mode); + /* PF SR-IOV mode initialization */ + err = nitrox_sriov_init(ndev); + if (err) + goto iov_fail; + config_nps_core_vfcfg_mode(ndev, ndev->mode); return num_vfs; + +iov_fail: + pci_disable_sriov(pdev); + /* clear bit in flags */ + clear_bit(__NDEV_SRIOV_BIT, &ndev->flags); + ndev->iov.num_vfs = 0; + ndev->mode = __NDEV_MODE_PF; + /* reset back to working mode in PF */ + nitrox_pf_reinit(ndev); + return err; } static int nitrox_sriov_disable(struct pci_dev *pdev) @@ -134,12 +210,16 @@ static int nitrox_sriov_disable(struct pci_dev *pdev) /* clear bit in flags */ clear_bit(__NDEV_SRIOV_BIT, &ndev->flags); - ndev->num_vfs = 0; + ndev->iov.num_vfs = 0; + ndev->iov.max_vf_queues = 0; ndev->mode = __NDEV_MODE_PF; + /* cleanup PF SR-IOV resources */ + nitrox_sriov_cleanup(ndev); + config_nps_core_vfcfg_mode(ndev, ndev->mode); - return pf_sriov_init(ndev); + return nitrox_pf_reinit(ndev); } int nitrox_sriov_configure(struct pci_dev *pdev, int num_vfs) diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c index 3c6fe57f91f8..9108015e56cc 100644 --- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c +++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c @@ -346,9 +346,7 @@ static int ccp_aes_cmac_cra_init(struct crypto_tfm *tfm) crypto_ahash_set_reqsize(ahash, sizeof(struct ccp_aes_cmac_req_ctx)); - cipher_tfm = crypto_alloc_cipher("aes", 0, - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_NEED_FALLBACK); + cipher_tfm = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(cipher_tfm)) { pr_warn("could not load aes cipher driver\n"); return PTR_ERR(cipher_tfm); diff --git a/drivers/crypto/ccree/cc_aead.c b/drivers/crypto/ccree/cc_aead.c index 01b82b82f8b8..f2643cda45db 100644 --- a/drivers/crypto/ccree/cc_aead.c +++ b/drivers/crypto/ccree/cc_aead.c @@ -58,6 +58,7 @@ struct cc_aead_ctx { unsigned int enc_keylen; unsigned int auth_keylen; unsigned int authsize; /* Actual (reduced?) 
size of the MAC/ICv */ + unsigned int hash_len; enum drv_cipher_mode cipher_mode; enum cc_flow_mode flow_mode; enum drv_hash_mode auth_mode; @@ -122,6 +123,13 @@ static void cc_aead_exit(struct crypto_aead *tfm) } } +static unsigned int cc_get_aead_hash_len(struct crypto_aead *tfm) +{ + struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm); + + return cc_get_default_hash_len(ctx->drvdata); +} + static int cc_aead_init(struct crypto_aead *tfm) { struct aead_alg *alg = crypto_aead_alg(tfm); @@ -196,6 +204,7 @@ static int cc_aead_init(struct crypto_aead *tfm) ctx->auth_state.hmac.ipad_opad = NULL; ctx->auth_state.hmac.padded_authkey = NULL; } + ctx->hash_len = cc_get_aead_hash_len(tfm); return 0; @@ -327,7 +336,7 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct cc_aead_ctx *ctx) /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hash_mode); - set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); + set_din_const(&desc[idx], 0, ctx->hash_len); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -465,7 +474,7 @@ static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key, /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hashmode); - set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); + set_din_const(&desc[idx], 0, ctx->hash_len); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -1001,7 +1010,7 @@ static void cc_set_hmac_desc(struct aead_request *req, struct cc_hw_desc desc[], hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hash_mode); set_din_sram(&desc[idx], cc_digest_len_addr(ctx->drvdata, hash_mode), - ctx->drvdata->hash_len_sz); + ctx->hash_len); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -1098,7 +1107,7 @@ static void cc_proc_scheme_desc(struct aead_request *req, hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hash_mode); set_dout_sram(&desc[idx], aead_handle->sram_workspace_addr, - ctx->drvdata->hash_len_sz); + ctx->hash_len); set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); set_cipher_do(&desc[idx], DO_PAD); @@ -1128,7 +1137,7 @@ static void cc_proc_scheme_desc(struct aead_request *req, hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hash_mode); set_din_sram(&desc[idx], cc_digest_len_addr(ctx->drvdata, hash_mode), - ctx->drvdata->hash_len_sz); + ctx->hash_len); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -2358,6 +2367,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA1, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "authenc(hmac(sha1),cbc(des3_ede))", @@ -2377,6 +2387,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_DES, .auth_mode = DRV_HASH_SHA1, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "authenc(hmac(sha256),cbc(aes))", @@ -2396,6 +2407,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA256, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "authenc(hmac(sha256),cbc(des3_ede))", @@ -2415,6 +2427,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_DES, .auth_mode = DRV_HASH_SHA256, .min_hw_rev = CC_HW_REV_630, + .std_body = 
CC_STD_NIST, }, { .name = "authenc(xcbc(aes),cbc(aes))", @@ -2434,6 +2447,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_XCBC_MAC, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "authenc(hmac(sha1),rfc3686(ctr(aes)))", @@ -2453,6 +2467,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA1, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "authenc(hmac(sha256),rfc3686(ctr(aes)))", @@ -2472,6 +2487,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA256, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "authenc(xcbc(aes),rfc3686(ctr(aes)))", @@ -2491,6 +2507,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_XCBC_MAC, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "ccm(aes)", @@ -2510,6 +2527,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "rfc4309(ccm(aes))", @@ -2529,6 +2547,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "gcm(aes)", @@ -2548,6 +2567,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "rfc4106(gcm(aes))", @@ -2567,6 +2587,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "rfc4543(gcm(aes))", @@ -2586,6 +2607,7 @@ static struct cc_alg_template aead_algs[] = { .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, }; @@ -2670,7 +2692,8 @@ int cc_aead_alloc(struct cc_drvdata *drvdata) /* Linux crypto */ for (alg = 0; alg < ARRAY_SIZE(aead_algs); alg++) { - if (aead_algs[alg].min_hw_rev > drvdata->hw_rev) + if ((aead_algs[alg].min_hw_rev > drvdata->hw_rev) || + !(drvdata->std_bodies & aead_algs[alg].std_body)) continue; t_alg = cc_create_aead_alg(&aead_algs[alg], dev); diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c index 7623b29911af..cc92b031fad1 100644 --- a/drivers/crypto/ccree/cc_cipher.c +++ b/drivers/crypto/ccree/cc_cipher.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include "cc_driver.h" @@ -83,6 +84,9 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size) if (size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE) return 0; break; + case S_DIN_to_SM4: + if (size == SM4_KEY_SIZE) + return 0; default: break; } @@ -122,6 +126,17 @@ static int validate_data_size(struct cc_cipher_ctx *ctx_p, if (IS_ALIGNED(size, DES_BLOCK_SIZE)) return 0; break; + case S_DIN_to_SM4: + switch (ctx_p->cipher_mode) { + case DRV_CIPHER_CTR: + return 0; + case DRV_CIPHER_ECB: + case DRV_CIPHER_CBC: + if (IS_ALIGNED(size, SM4_BLOCK_SIZE)) + return 0; + default: + break; + } default: break; } @@ -522,6 +537,9 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm, case S_DIN_to_DES: flow_mode = DIN_DES_DOUT; break; + case S_DIN_to_SM4: + flow_mode = DIN_SM4_DOUT; + break; default: dev_err(dev, "invalid flow mode, flow_mode = %d\n", flow_mode); return; @@ -815,6 +833,7 @@ static const struct 
cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "xts512(paes)", @@ -832,6 +851,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 512, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "xts4096(paes)", @@ -849,6 +869,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 4096, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "essiv(paes)", @@ -865,6 +886,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "essiv512(paes)", @@ -882,6 +904,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 512, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "essiv4096(paes)", @@ -899,6 +922,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 4096, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "bitlocker(paes)", @@ -915,6 +939,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "bitlocker512(paes)", @@ -932,6 +957,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 512, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "bitlocker4096(paes)", @@ -949,6 +975,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 4096, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "ecb(paes)", @@ -965,6 +992,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "cbc(paes)", @@ -981,6 +1009,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "ofb(paes)", @@ -997,6 +1026,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_OFB, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "cts(cbc(paes))", @@ -1013,6 +1043,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CBC_CTS, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "ctr(paes)", @@ -1029,6 +1060,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CTR, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "xts(aes)", @@ -1045,6 +1077,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "xts512(aes)", @@ -1062,6 +1095,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 512, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "xts4096(aes)", @@ -1079,6 +1113,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 4096, .min_hw_rev = 
CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "essiv(aes)", @@ -1095,6 +1130,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "essiv512(aes)", @@ -1112,6 +1148,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 512, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "essiv4096(aes)", @@ -1129,6 +1166,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 4096, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "bitlocker(aes)", @@ -1145,6 +1183,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "bitlocker512(aes)", @@ -1162,6 +1201,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 512, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "bitlocker4096(aes)", @@ -1179,6 +1219,7 @@ static const struct cc_alg_template skcipher_algs[] = { .flow_mode = S_DIN_to_AES, .data_unit = 4096, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "ecb(aes)", @@ -1195,6 +1236,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "cbc(aes)", @@ -1211,6 +1253,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "ofb(aes)", @@ -1227,6 +1270,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_OFB, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "cts(cbc(aes))", @@ -1243,6 +1287,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CBC_CTS, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "ctr(aes)", @@ -1259,6 +1304,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CTR, .flow_mode = S_DIN_to_AES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "cbc(des3_ede)", @@ -1275,6 +1321,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "ecb(des3_ede)", @@ -1291,6 +1338,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_DES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "cbc(des)", @@ -1307,6 +1355,7 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "ecb(des)", @@ -1323,6 +1372,58 @@ static const struct cc_alg_template skcipher_algs[] = { .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_DES, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, + }, + { + .name = "cbc(sm4)", + .driver_name = "cbc-sm4-ccree", + .blocksize = SM4_BLOCK_SIZE, + .template_skcipher = { + .setkey = cc_cipher_setkey, + .encrypt = cc_cipher_encrypt, + .decrypt = cc_cipher_decrypt, + .min_keysize = 
SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = SM4_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_SM4, + .min_hw_rev = CC_HW_REV_713, + .std_body = CC_STD_OSCCA, + }, + { + .name = "ecb(sm4)", + .driver_name = "ecb-sm4-ccree", + .blocksize = SM4_BLOCK_SIZE, + .template_skcipher = { + .setkey = cc_cipher_setkey, + .encrypt = cc_cipher_encrypt, + .decrypt = cc_cipher_decrypt, + .min_keysize = SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = 0, + }, + .cipher_mode = DRV_CIPHER_ECB, + .flow_mode = S_DIN_to_SM4, + .min_hw_rev = CC_HW_REV_713, + .std_body = CC_STD_OSCCA, + }, + { + .name = "ctr(sm4)", + .driver_name = "ctr-sm4-ccree", + .blocksize = SM4_BLOCK_SIZE, + .template_skcipher = { + .setkey = cc_cipher_setkey, + .encrypt = cc_cipher_encrypt, + .decrypt = cc_cipher_decrypt, + .min_keysize = SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = SM4_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CTR, + .flow_mode = S_DIN_to_SM4, + .min_hw_rev = CC_HW_REV_713, + .std_body = CC_STD_OSCCA, }, }; @@ -1398,7 +1499,8 @@ int cc_cipher_alloc(struct cc_drvdata *drvdata) dev_dbg(dev, "Number of algorithms = %zu\n", ARRAY_SIZE(skcipher_algs)); for (alg = 0; alg < ARRAY_SIZE(skcipher_algs); alg++) { - if (skcipher_algs[alg].min_hw_rev > drvdata->hw_rev) + if ((skcipher_algs[alg].min_hw_rev > drvdata->hw_rev) || + !(drvdata->std_bodies & skcipher_algs[alg].std_body)) continue; dev_dbg(dev, "creating %s\n", skcipher_algs[alg].driver_name); diff --git a/drivers/crypto/ccree/cc_crypto_ctx.h b/drivers/crypto/ccree/cc_crypto_ctx.h index e032544f4e31..c8dac273c563 100644 --- a/drivers/crypto/ccree/cc_crypto_ctx.h +++ b/drivers/crypto/ccree/cc_crypto_ctx.h @@ -115,7 +115,8 @@ enum drv_hash_mode { DRV_HASH_CBC_MAC = 6, DRV_HASH_XCBC_MAC = 7, DRV_HASH_CMAC = 8, - DRV_HASH_MODE_NUM = 9, + DRV_HASH_SM3 = 9, + DRV_HASH_MODE_NUM = 10, DRV_HASH_RESERVE32B = S32_MAX }; @@ -127,6 +128,7 @@ enum drv_hash_hw_mode { DRV_HASH_HW_SHA512 = 4, DRV_HASH_HW_SHA384 = 12, DRV_HASH_HW_GHASH = 6, + DRV_HASH_HW_SM3 = 14, DRV_HASH_HW_RESERVE32B = S32_MAX }; diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c index 1ff229c2aeab..8ada308d72ee 100644 --- a/drivers/crypto/ccree/cc_driver.c +++ b/drivers/crypto/ccree/cc_driver.c @@ -39,23 +39,38 @@ struct cc_hw_data { char *name; enum cc_hw_rev rev; u32 sig; + int std_bodies; }; /* Hardware revisions defs. 
*/ +/* The 703 is a OSCCA only variant of the 713 */ +static const struct cc_hw_data cc703_hw = { + .name = "703", .rev = CC_HW_REV_713, .std_bodies = CC_STD_OSCCA +}; + +static const struct cc_hw_data cc713_hw = { + .name = "713", .rev = CC_HW_REV_713, .std_bodies = CC_STD_ALL +}; + static const struct cc_hw_data cc712_hw = { - .name = "712", .rev = CC_HW_REV_712, .sig = 0xDCC71200U + .name = "712", .rev = CC_HW_REV_712, .sig = 0xDCC71200U, + .std_bodies = CC_STD_ALL }; static const struct cc_hw_data cc710_hw = { - .name = "710", .rev = CC_HW_REV_710, .sig = 0xDCC63200U + .name = "710", .rev = CC_HW_REV_710, .sig = 0xDCC63200U, + .std_bodies = CC_STD_ALL }; static const struct cc_hw_data cc630p_hw = { - .name = "630P", .rev = CC_HW_REV_630, .sig = 0xDCC63000U + .name = "630P", .rev = CC_HW_REV_630, .sig = 0xDCC63000U, + .std_bodies = CC_STD_ALL }; static const struct of_device_id arm_ccree_dev_of_match[] = { + { .compatible = "arm,cryptocell-703-ree", .data = &cc703_hw }, + { .compatible = "arm,cryptocell-713-ree", .data = &cc713_hw }, { .compatible = "arm,cryptocell-712-ree", .data = &cc712_hw }, { .compatible = "arm,cryptocell-710-ree", .data = &cc710_hw }, { .compatible = "arm,cryptocell-630p-ree", .data = &cc630p_hw }, @@ -204,14 +219,13 @@ static int init_cc_resources(struct platform_device *plat_dev) hw_rev = (struct cc_hw_data *)dev_id->data; new_drvdata->hw_rev_name = hw_rev->name; new_drvdata->hw_rev = hw_rev->rev; + new_drvdata->std_bodies = hw_rev->std_bodies; if (hw_rev->rev >= CC_HW_REV_712) { - new_drvdata->hash_len_sz = HASH_LEN_SIZE_712; new_drvdata->axim_mon_offset = CC_REG(AXIM_MON_COMP); new_drvdata->sig_offset = CC_REG(HOST_SIGNATURE_712); new_drvdata->ver_offset = CC_REG(HOST_VERSION_712); } else { - new_drvdata->hash_len_sz = HASH_LEN_SIZE_630; new_drvdata->axim_mon_offset = CC_REG(AXIM_MON_COMP8); new_drvdata->sig_offset = CC_REG(HOST_SIGNATURE_630); new_drvdata->ver_offset = CC_REG(HOST_VERSION_630); @@ -297,15 +311,17 @@ static int init_cc_resources(struct platform_device *plat_dev) return rc; } - /* Verify correct mapping */ - signature_val = cc_ioread(new_drvdata, new_drvdata->sig_offset); - if (signature_val != hw_rev->sig) { - dev_err(dev, "Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n", - signature_val, hw_rev->sig); - rc = -EINVAL; - goto post_clk_err; + if (hw_rev->rev <= CC_HW_REV_712) { + /* Verify correct mapping */ + signature_val = cc_ioread(new_drvdata, new_drvdata->sig_offset); + if (signature_val != hw_rev->sig) { + dev_err(dev, "Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n", + signature_val, hw_rev->sig); + rc = -EINVAL; + goto post_clk_err; + } + dev_dbg(dev, "CC SIGNATURE=0x%08X\n", signature_val); } - dev_dbg(dev, "CC SIGNATURE=0x%08X\n", signature_val); /* Display HW versions */ dev_info(dev, "ARM CryptoCell %s Driver: HW version 0x%08X, Driver version %s\n", @@ -461,6 +477,14 @@ int cc_clk_on(struct cc_drvdata *drvdata) return 0; } +unsigned int cc_get_default_hash_len(struct cc_drvdata *drvdata) +{ + if (drvdata->hw_rev >= CC_HW_REV_712) + return HASH_LEN_SIZE_712; + else + return HASH_LEN_SIZE_630; +} + void cc_clk_off(struct cc_drvdata *drvdata) { struct clk *clk = drvdata->clk; diff --git a/drivers/crypto/ccree/cc_driver.h b/drivers/crypto/ccree/cc_driver.h index d608a4faf662..5be7fd431b05 100644 --- a/drivers/crypto/ccree/cc_driver.h +++ b/drivers/crypto/ccree/cc_driver.h @@ -36,12 +36,19 @@ extern bool cc_dump_desc; extern bool cc_dump_bytes; -#define DRV_MODULE_VERSION "4.0" +#define DRV_MODULE_VERSION "5.0" 
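The practical effect of std_bodies: algorithm registration now filters on both hardware revision and standards body, so the SM4 ciphers above (CC_STD_OSCCA) never register on a NIST-only part, while an OSCCA-only CryptoCell 703 exposes only the SM4 set. A condensed sketch of the check that both cc_cipher_alloc() and cc_aead_alloc() apply:

	for (alg = 0; alg < ARRAY_SIZE(skcipher_algs); alg++) {
		if (skcipher_algs[alg].min_hw_rev > drvdata->hw_rev ||
		    !(drvdata->std_bodies & skcipher_algs[alg].std_body))
			continue;	/* unsupported on this part; skip */
		/* ... create and register the tfm ... */
	}
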
enum cc_hw_rev { CC_HW_REV_630 = 630, CC_HW_REV_710 = 710, - CC_HW_REV_712 = 712 + CC_HW_REV_712 = 712, + CC_HW_REV_713 = 713 +}; + +enum cc_std_body { + CC_STD_NIST = 0x1, + CC_STD_OSCCA = 0x2, + CC_STD_ALL = 0x3 }; #define CC_COHERENT_CACHE_PARAMS 0xEEE @@ -127,10 +134,10 @@ struct cc_drvdata { bool coherent; char *hw_rev_name; enum cc_hw_rev hw_rev; - u32 hash_len_sz; u32 axim_mon_offset; u32 sig_offset; u32 ver_offset; + int std_bodies; }; struct cc_crypto_alg { @@ -156,6 +163,7 @@ struct cc_alg_template { int flow_mode; /* Note: currently, refers to the cipher mode only. */ int auth_mode; u32 min_hw_rev; + enum cc_std_body std_body; unsigned int data_unit; struct cc_drvdata *drvdata; }; @@ -182,6 +190,7 @@ int init_cc_regs(struct cc_drvdata *drvdata, bool is_probe); void fini_cc_regs(struct cc_drvdata *drvdata); int cc_clk_on(struct cc_drvdata *drvdata); void cc_clk_off(struct cc_drvdata *drvdata); +unsigned int cc_get_default_hash_len(struct cc_drvdata *drvdata); static inline void cc_iowrite(struct cc_drvdata *drvdata, u32 reg, u32 val) { diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c index b9313306c36f..2c4ddc8fb76b 100644 --- a/drivers/crypto/ccree/cc_hash.c +++ b/drivers/crypto/ccree/cc_hash.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include "cc_driver.h" @@ -16,6 +17,7 @@ #define CC_MAX_HASH_SEQ_LEN 12 #define CC_MAX_OPAD_KEYS_SIZE CC_MAX_HASH_BLCK_SIZE +#define CC_SM3_HASH_LEN_SIZE 8 struct cc_hash_handle { cc_sram_addr_t digest_len_sram_addr; /* const value in SRAM*/ @@ -43,6 +45,9 @@ static u64 sha384_init[] = { static u64 sha512_init[] = { SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4, SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 }; +static const u32 sm3_init[] = { + SM3_IVH, SM3_IVG, SM3_IVF, SM3_IVE, + SM3_IVD, SM3_IVC, SM3_IVB, SM3_IVA }; static void cc_setup_xcbc(struct ahash_request *areq, struct cc_hw_desc desc[], unsigned int *seq_size); @@ -82,6 +87,7 @@ struct cc_hash_ctx { int hash_mode; int hw_mode; int inter_digestsize; + unsigned int hash_len; struct completion setkey_comp; bool is_hmac; }; @@ -138,10 +144,10 @@ static void cc_init_req(struct device *dev, struct ahash_req_ctx *state, ctx->hash_mode == DRV_HASH_SHA384) memcpy(state->digest_bytes_len, digest_len_sha512_init, - ctx->drvdata->hash_len_sz); + ctx->hash_len); else memcpy(state->digest_bytes_len, digest_len_init, - ctx->drvdata->hash_len_sz); + ctx->hash_len); } if (ctx->hash_mode != DRV_HASH_NULL) { @@ -321,7 +327,7 @@ static int cc_fin_result(struct cc_hw_desc *desc, struct ahash_request *req, /* Get final MAC result */ hw_desc_init(&desc[idx]); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); /* TODO */ set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); @@ -367,7 +373,7 @@ static int cc_fin_hmac(struct cc_hw_desc *desc, struct ahash_request *req, set_cipher_mode(&desc[idx], ctx->hw_mode); set_din_sram(&desc[idx], cc_digest_len_addr(ctx->drvdata, ctx->hash_mode), - ctx->drvdata->hash_len_sz); + ctx->hash_len); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -440,7 +446,7 @@ static int cc_hash_digest(struct ahash_request *req) * digest */ hw_desc_init(&desc[idx]); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); if (is_hmac) { set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, 
ctx->inter_digestsize, NS_BIT); @@ -454,14 +460,14 @@ static int cc_hash_digest(struct ahash_request *req) /* Load the hash current length */ hw_desc_init(&desc[idx]); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); if (is_hmac) { set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, - ctx->drvdata->hash_len_sz, NS_BIT); + ctx->hash_len, NS_BIT); } else { - set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); + set_din_const(&desc[idx], 0, ctx->hash_len); if (nbytes) set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); else @@ -478,7 +484,7 @@ static int cc_hash_digest(struct ahash_request *req) hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], ctx->hw_mode); set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, - ctx->drvdata->hash_len_sz, NS_BIT, 0); + ctx->hash_len, NS_BIT, 0); set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); set_cipher_do(&desc[idx], DO_PAD); @@ -504,7 +510,7 @@ static int cc_restore_hash(struct cc_hw_desc *desc, struct cc_hash_ctx *ctx, { /* Restore hash digest */ hw_desc_init(&desc[idx]); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); set_flow_mode(&desc[idx], S_DIN_to_HASH); @@ -513,10 +519,10 @@ static int cc_restore_hash(struct cc_hw_desc *desc, struct cc_hash_ctx *ctx, /* Restore hash current length */ hw_desc_init(&desc[idx]); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, - ctx->drvdata->hash_len_sz, NS_BIT); + ctx->hash_len, NS_BIT); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -576,7 +582,7 @@ static int cc_hash_update(struct ahash_request *req) /* store the hash digest result in context */ hw_desc_init(&desc[idx]); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0); set_flow_mode(&desc[idx], S_HASH_to_DOUT); @@ -585,9 +591,9 @@ static int cc_hash_update(struct ahash_request *req) /* store current hash length in context */ hw_desc_init(&desc[idx]); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr, - ctx->drvdata->hash_len_sz, NS_BIT, 1); + ctx->hash_len, NS_BIT, 1); set_queue_last_ind(ctx->drvdata, &desc[idx]); set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); @@ -649,9 +655,9 @@ static int cc_do_finup(struct ahash_request *req, bool update) /* Pad the hash */ hw_desc_init(&desc[idx]); set_cipher_do(&desc[idx], DO_PAD); - set_cipher_mode(&desc[idx], ctx->hw_mode); + set_hash_cipher_mode(&desc[idx], ctx->hw_mode, ctx->hash_mode); set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr, - ctx->drvdata->hash_len_sz, NS_BIT, 0); + ctx->hash_len, NS_BIT, 0); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); set_flow_mode(&desc[idx], S_HASH_to_DOUT); idx++; @@ -749,7 +755,7 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key, /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], 
ctx->hw_mode); - set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); + set_din_const(&desc[idx], 0, ctx->hash_len); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -831,7 +837,7 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key, /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], ctx->hw_mode); - set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); + set_din_const(&desc[idx], 0, ctx->hash_len); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -1069,6 +1075,16 @@ fail: return -ENOMEM; } +static int cc_get_hash_len(struct crypto_tfm *tfm) +{ + struct cc_hash_ctx *ctx = crypto_tfm_ctx(tfm); + + if (ctx->hash_mode == DRV_HASH_SM3) + return CC_SM3_HASH_LEN_SIZE; + else + return cc_get_default_hash_len(ctx->drvdata); +} + static int cc_cra_init(struct crypto_tfm *tfm) { struct cc_hash_ctx *ctx = crypto_tfm_ctx(tfm); @@ -1086,7 +1102,7 @@ static int cc_cra_init(struct crypto_tfm *tfm) ctx->hw_mode = cc_alg->hw_mode; ctx->inter_digestsize = cc_alg->inter_digestsize; ctx->drvdata = cc_alg->drvdata; - + ctx->hash_len = cc_get_hash_len(tfm); return cc_alloc_ctx(ctx); } @@ -1465,8 +1481,8 @@ static int cc_hash_export(struct ahash_request *req, void *out) memcpy(out, state->digest_buff, ctx->inter_digestsize); out += ctx->inter_digestsize; - memcpy(out, state->digest_bytes_len, ctx->drvdata->hash_len_sz); - out += ctx->drvdata->hash_len_sz; + memcpy(out, state->digest_bytes_len, ctx->hash_len); + out += ctx->hash_len; memcpy(out, &curr_buff_cnt, sizeof(u32)); out += sizeof(u32); @@ -1494,8 +1510,8 @@ static int cc_hash_import(struct ahash_request *req, const void *in) memcpy(state->digest_buff, in, ctx->inter_digestsize); in += ctx->inter_digestsize; - memcpy(state->digest_bytes_len, in, ctx->drvdata->hash_len_sz); - in += ctx->drvdata->hash_len_sz; + memcpy(state->digest_bytes_len, in, ctx->hash_len); + in += ctx->hash_len; /* Sanity check the data as much as possible */ memcpy(&tmp, in, sizeof(u32)); @@ -1515,6 +1531,7 @@ struct cc_hash_template { char mac_name[CRYPTO_MAX_ALG_NAME]; char mac_driver_name[CRYPTO_MAX_ALG_NAME]; unsigned int blocksize; + bool is_mac; bool synchronize; struct ahash_alg template_ahash; int hash_mode; @@ -1522,6 +1539,7 @@ struct cc_hash_template { int inter_digestsize; struct cc_drvdata *drvdata; u32 min_hw_rev; + enum cc_std_body std_body; }; #define CC_STATE_SIZE(_x) \ @@ -1536,6 +1554,7 @@ static struct cc_hash_template driver_hash[] = { .mac_name = "hmac(sha1)", .mac_driver_name = "hmac-sha1-ccree", .blocksize = SHA1_BLOCK_SIZE, + .is_mac = true, .synchronize = false, .template_ahash = { .init = cc_hash_init, @@ -1555,6 +1574,7 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_HASH_HW_SHA1, .inter_digestsize = SHA1_DIGEST_SIZE, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "sha256", @@ -1562,6 +1582,7 @@ static struct cc_hash_template driver_hash[] = { .mac_name = "hmac(sha256)", .mac_driver_name = "hmac-sha256-ccree", .blocksize = SHA256_BLOCK_SIZE, + .is_mac = true, .template_ahash = { .init = cc_hash_init, .update = cc_hash_update, @@ -1580,6 +1601,7 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_HASH_HW_SHA256, .inter_digestsize = SHA256_DIGEST_SIZE, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "sha224", @@ -1587,6 +1609,7 @@ static struct cc_hash_template driver_hash[] = { 
.mac_name = "hmac(sha224)", .mac_driver_name = "hmac-sha224-ccree", .blocksize = SHA224_BLOCK_SIZE, + .is_mac = true, .template_ahash = { .init = cc_hash_init, .update = cc_hash_update, @@ -1605,6 +1628,7 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_HASH_HW_SHA256, .inter_digestsize = SHA256_DIGEST_SIZE, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .name = "sha384", @@ -1612,6 +1636,7 @@ static struct cc_hash_template driver_hash[] = { .mac_name = "hmac(sha384)", .mac_driver_name = "hmac-sha384-ccree", .blocksize = SHA384_BLOCK_SIZE, + .is_mac = true, .template_ahash = { .init = cc_hash_init, .update = cc_hash_update, @@ -1630,6 +1655,7 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_HASH_HW_SHA512, .inter_digestsize = SHA512_DIGEST_SIZE, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "sha512", @@ -1637,6 +1663,7 @@ static struct cc_hash_template driver_hash[] = { .mac_name = "hmac(sha512)", .mac_driver_name = "hmac-sha512-ccree", .blocksize = SHA512_BLOCK_SIZE, + .is_mac = true, .template_ahash = { .init = cc_hash_init, .update = cc_hash_update, @@ -1655,6 +1682,7 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_HASH_HW_SHA512, .inter_digestsize = SHA512_DIGEST_SIZE, .min_hw_rev = CC_HW_REV_712, + .std_body = CC_STD_NIST, }, { .name = "md5", @@ -1662,6 +1690,7 @@ static struct cc_hash_template driver_hash[] = { .mac_name = "hmac(md5)", .mac_driver_name = "hmac-md5-ccree", .blocksize = MD5_HMAC_BLOCK_SIZE, + .is_mac = true, .template_ahash = { .init = cc_hash_init, .update = cc_hash_update, @@ -1680,11 +1709,38 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_HASH_HW_MD5, .inter_digestsize = MD5_DIGEST_SIZE, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, + }, + { + .name = "sm3", + .driver_name = "sm3-ccree", + .blocksize = SM3_BLOCK_SIZE, + .is_mac = false, + .template_ahash = { + .init = cc_hash_init, + .update = cc_hash_update, + .final = cc_hash_final, + .finup = cc_hash_finup, + .digest = cc_hash_digest, + .export = cc_hash_export, + .import = cc_hash_import, + .setkey = cc_hash_setkey, + .halg = { + .digestsize = SM3_DIGEST_SIZE, + .statesize = CC_STATE_SIZE(SM3_DIGEST_SIZE), + }, + }, + .hash_mode = DRV_HASH_SM3, + .hw_mode = DRV_HASH_HW_SM3, + .inter_digestsize = SM3_DIGEST_SIZE, + .min_hw_rev = CC_HW_REV_713, + .std_body = CC_STD_OSCCA, }, { .mac_name = "xcbc(aes)", .mac_driver_name = "xcbc-aes-ccree", .blocksize = AES_BLOCK_SIZE, + .is_mac = true, .template_ahash = { .init = cc_hash_init, .update = cc_mac_update, @@ -1703,11 +1759,13 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_CIPHER_XCBC_MAC, .inter_digestsize = AES_BLOCK_SIZE, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, { .mac_name = "cmac(aes)", .mac_driver_name = "cmac-aes-ccree", .blocksize = AES_BLOCK_SIZE, + .is_mac = true, .template_ahash = { .init = cc_hash_init, .update = cc_mac_update, @@ -1726,6 +1784,7 @@ static struct cc_hash_template driver_hash[] = { .hw_mode = DRV_CIPHER_CMAC, .inter_digestsize = AES_BLOCK_SIZE, .min_hw_rev = CC_HW_REV_630, + .std_body = CC_STD_NIST, }, }; @@ -1780,6 +1839,7 @@ int cc_init_hash_sram(struct cc_drvdata *drvdata) unsigned int larval_seq_len = 0; struct cc_hw_desc larval_seq[CC_DIGEST_SIZE_MAX / sizeof(u32)]; bool large_sha_supported = (drvdata->hw_rev >= CC_HW_REV_712); + bool sm3_supported = (drvdata->hw_rev >= CC_HW_REV_713); int rc = 0; /* Copy-to-sram digest-len */ @@ -1845,6 +1905,17 @@ int cc_init_hash_sram(struct 
cc_drvdata *drvdata) sram_buff_ofs += sizeof(sha256_init); larval_seq_len = 0; + if (sm3_supported) { + cc_set_sram_desc(sm3_init, sram_buff_ofs, + ARRAY_SIZE(sm3_init), larval_seq, + &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (rc) + goto init_digest_const_err; + sram_buff_ofs += sizeof(sm3_init); + larval_seq_len = 0; + } + if (large_sha_supported) { cc_set_sram_desc((u32 *)sha384_init, sram_buff_ofs, (ARRAY_SIZE(sha384_init) * 2), larval_seq, @@ -1911,6 +1982,9 @@ int cc_hash_alloc(struct cc_drvdata *drvdata) sizeof(sha224_init) + sizeof(sha256_init); + if (drvdata->hw_rev >= CC_HW_REV_713) + sram_size_to_alloc += sizeof(sm3_init); + if (drvdata->hw_rev >= CC_HW_REV_712) sram_size_to_alloc += sizeof(digest_len_sha512_init) + sizeof(sha384_init) + sizeof(sha512_init); @@ -1937,30 +2011,33 @@ int cc_hash_alloc(struct cc_drvdata *drvdata) struct cc_hash_alg *t_alg; int hw_mode = driver_hash[alg].hw_mode; - /* We either support both HASH and MAC or none */ - if (driver_hash[alg].min_hw_rev > drvdata->hw_rev) + /* Check that the HW revision and variants are suitable */ + if ((driver_hash[alg].min_hw_rev > drvdata->hw_rev) || + !(drvdata->std_bodies & driver_hash[alg].std_body)) continue; - /* register hmac version */ - t_alg = cc_alloc_hash_alg(&driver_hash[alg], dev, true); - if (IS_ERR(t_alg)) { - rc = PTR_ERR(t_alg); - dev_err(dev, "%s alg allocation failed\n", - driver_hash[alg].driver_name); - goto fail; - } - t_alg->drvdata = drvdata; - - rc = crypto_register_ahash(&t_alg->ahash_alg); - if (rc) { - dev_err(dev, "%s alg registration failed\n", - driver_hash[alg].driver_name); - kfree(t_alg); - goto fail; - } else { - list_add_tail(&t_alg->entry, &hash_handle->hash_list); + if (driver_hash[alg].is_mac) { + /* register hmac version */ + t_alg = cc_alloc_hash_alg(&driver_hash[alg], dev, true); + if (IS_ERR(t_alg)) { + rc = PTR_ERR(t_alg); + dev_err(dev, "%s alg allocation failed\n", + driver_hash[alg].driver_name); + goto fail; + } + t_alg->drvdata = drvdata; + + rc = crypto_register_ahash(&t_alg->ahash_alg); + if (rc) { + dev_err(dev, "%s alg registration failed\n", + driver_hash[alg].driver_name); + kfree(t_alg); + goto fail; + } else { + list_add_tail(&t_alg->entry, + &hash_handle->hash_list); + } } - if (hw_mode == DRV_CIPHER_XCBC_MAC || hw_mode == DRV_CIPHER_CMAC) continue; @@ -2027,7 +2104,7 @@ static void cc_setup_xcbc(struct ahash_request *areq, struct cc_hw_desc desc[], XCBC_MAC_K1_OFFSET), CC_AES_128_BIT_KEY_SIZE, NS_BIT); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); - set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_hash_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC, ctx->hash_mode); set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); set_flow_mode(&desc[idx], S_DIN_to_AES); @@ -2162,6 +2239,8 @@ static const void *cc_larval_digest(struct device *dev, u32 mode) return sha384_init; case DRV_HASH_SHA512: return sha512_init; + case DRV_HASH_SM3: + return sm3_init; default: dev_err(dev, "Invalid hash mode (%d)\n", mode); return md5_init; @@ -2182,6 +2261,8 @@ cc_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode) struct cc_drvdata *_drvdata = (struct cc_drvdata *)drvdata; struct cc_hash_handle *hash_handle = _drvdata->hash_handle; struct device *dev = drvdata_to_dev(_drvdata); + bool sm3_supported = (_drvdata->hw_rev >= CC_HW_REV_713); + cc_sram_addr_t addr; switch (mode) { case DRV_HASH_NULL: @@ -2200,19 +2281,31 @@ cc_sram_addr_t cc_larval_digest_addr(void 
*drvdata, u32 mode) sizeof(md5_init) + sizeof(sha1_init) + sizeof(sha224_init)); - case DRV_HASH_SHA384: + case DRV_HASH_SM3: return (hash_handle->larval_digest_sram_addr + sizeof(md5_init) + sizeof(sha1_init) + sizeof(sha224_init) + sizeof(sha256_init)); + case DRV_HASH_SHA384: + addr = (hash_handle->larval_digest_sram_addr + + sizeof(md5_init) + + sizeof(sha1_init) + + sizeof(sha224_init) + + sizeof(sha256_init)); + if (sm3_supported) + addr += sizeof(sm3_init); + return addr; case DRV_HASH_SHA512: - return (hash_handle->larval_digest_sram_addr + + addr = (hash_handle->larval_digest_sram_addr + sizeof(md5_init) + sizeof(sha1_init) + sizeof(sha224_init) + sizeof(sha256_init) + sizeof(sha384_init)); + if (sm3_supported) + addr += sizeof(sm3_init); + return addr; default: dev_err(dev, "Invalid hash mode (%d)\n", mode); } diff --git a/drivers/crypto/ccree/cc_hw_queue_defs.h b/drivers/crypto/ccree/cc_hw_queue_defs.h index 45985b955d2c..7a9b90db7db7 100644 --- a/drivers/crypto/ccree/cc_hw_queue_defs.h +++ b/drivers/crypto/ccree/cc_hw_queue_defs.h @@ -42,6 +42,7 @@ #define WORD3_QUEUE_LAST_IND CC_GENMASK(3, QUEUE_LAST_IND) #define WORD4_ACK_NEEDED CC_GENMASK(4, ACK_NEEDED) #define WORD4_AES_SEL_N_HASH CC_GENMASK(4, AES_SEL_N_HASH) +#define WORD4_AES_XOR_CRYPTO_KEY CC_GENMASK(4, AES_XOR_CRYPTO_KEY) #define WORD4_BYTES_SWAP CC_GENMASK(4, BYTES_SWAP) #define WORD4_CIPHER_CONF0 CC_GENMASK(4, CIPHER_CONF0) #define WORD4_CIPHER_CONF1 CC_GENMASK(4, CIPHER_CONF1) @@ -107,6 +108,7 @@ enum cc_flow_mode { AES_to_AES_to_HASH_and_DOUT = 13, AES_to_AES_to_HASH = 14, AES_to_HASH_and_AES = 15, + DIN_SM4_DOUT = 16, DIN_AES_AESMAC = 17, HASH_to_DOUT = 18, /* setup flows */ @@ -114,9 +116,11 @@ S_DIN_to_AES2 = 33, S_DIN_to_DES = 34, S_DIN_to_RC4 = 35, + S_DIN_to_SM4 = 36, S_DIN_to_HASH = 37, S_AES_to_DOUT = 38, S_AES2_to_DOUT = 39, + S_SM4_to_DOUT = 40, S_RC4_to_DOUT = 41, S_DES_to_DOUT = 42, S_HASH_to_DOUT = 43, @@ -393,6 +397,16 @@ static inline void set_aes_not_hash_mode(struct cc_hw_desc *pdesc) pdesc->word[4] |= FIELD_PREP(WORD4_AES_SEL_N_HASH, 1); } +/* + * Set the AES XOR crypto key; in some scenarios this selects the SM3 engine + * + * @pdesc: pointer to HW descriptor struct + */ +static inline void set_aes_xor_crypto_key(struct cc_hw_desc *pdesc) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_AES_XOR_CRYPTO_KEY, 1); +} + /* * Set the DOUT field of a HW descriptor to SRAM mode * Note: No need to check SRAM alignment since host requests do not use SRAM and @@ -454,6 +468,22 @@ static inline void set_cipher_mode(struct cc_hw_desc *pdesc, int mode) pdesc->word[4] |= FIELD_PREP(WORD4_CIPHER_MODE, mode); } +/* + * Set the cipher mode for hash algorithms. + * + * @pdesc: pointer to HW descriptor struct + * @cipher_mode: Any one of the modes defined in [CC7x-DESC] + * @hash_mode: specifies which hash is being handled + */ +static inline void set_hash_cipher_mode(struct cc_hw_desc *pdesc, + enum drv_cipher_mode cipher_mode, + enum drv_hash_mode hash_mode) +{ + set_cipher_mode(pdesc, cipher_mode); + if (hash_mode == DRV_HASH_SM3) + set_aes_xor_crypto_key(pdesc); +} + /* * Set the cipher configuration fields.
* diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index db203f8be429..bcef76508dfa 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -123,7 +123,7 @@ static inline struct chcr_authenc_ctx *AUTHENC_CTX(struct chcr_aead_ctx *gctx) static inline struct uld_ctx *ULD_CTX(struct chcr_context *ctx) { - return ctx->dev->u_ctx; + return container_of(ctx->dev, struct uld_ctx, dev); } static inline int is_ofld_imm(const struct sk_buff *skb) @@ -198,18 +198,43 @@ void chcr_verify_tag(struct aead_request *req, u8 *input, int *err) *err = 0; } -static inline void chcr_handle_aead_resp(struct aead_request *req, +static int chcr_inc_wrcount(struct chcr_dev *dev) +{ + int err = 0; + + spin_lock_bh(&dev->lock_chcr_dev); + if (dev->state == CHCR_DETACH) + err = 1; + else + atomic_inc(&dev->inflight); + + spin_unlock_bh(&dev->lock_chcr_dev); + + return err; +} + +static inline void chcr_dec_wrcount(struct chcr_dev *dev) +{ + atomic_dec(&dev->inflight); +} + +static inline int chcr_handle_aead_resp(struct aead_request *req, unsigned char *input, int err) { struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct chcr_dev *dev = a_ctx(tfm)->dev; chcr_aead_common_exit(req); if (reqctx->verify == VERIFY_SW) { chcr_verify_tag(req, input, &err); reqctx->verify = VERIFY_HW; } + chcr_dec_wrcount(dev); req->base.complete(&req->base, err); + + return err; } static void get_aes_decrypt_key(unsigned char *dec_key, @@ -391,7 +416,7 @@ static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid, static inline void dsgl_walk_add_page(struct dsgl_walk *walk, size_t size, - dma_addr_t *addr) + dma_addr_t addr) { int j; @@ -399,7 +424,7 @@ static inline void dsgl_walk_add_page(struct dsgl_walk *walk, return; j = walk->nents; walk->to->len[j % 8] = htons(size); - walk->to->addr[j % 8] = cpu_to_be64(*addr); + walk->to->addr[j % 8] = cpu_to_be64(addr); j++; if ((j % 8) == 0) walk->to++; @@ -473,16 +498,16 @@ static inline void ulptx_walk_end(struct ulptx_walk *walk) static inline void ulptx_walk_add_page(struct ulptx_walk *walk, size_t size, - dma_addr_t *addr) + dma_addr_t addr) { if (!size) return; if (walk->nents == 0) { walk->sgl->len0 = cpu_to_be32(size); - walk->sgl->addr0 = cpu_to_be64(*addr); + walk->sgl->addr0 = cpu_to_be64(addr); } else { - walk->pair->addr[walk->pair_idx] = cpu_to_be64(*addr); + walk->pair->addr[walk->pair_idx] = cpu_to_be64(addr); walk->pair->len[walk->pair_idx] = cpu_to_be32(size); walk->pair_idx = !walk->pair_idx; if (!walk->pair_idx) @@ -717,7 +742,7 @@ static inline void create_wreq(struct chcr_context *ctx, htonl(FW_CRYPTO_LOOKASIDE_WR_LEN16_V(DIV_ROUND_UP(len16, 16))); chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req); chcr_req->wreq.rx_chid_to_rx_q_id = - FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid, + FILL_WR_RX_Q_ID(ctx->tx_chan_id, qid, !!lcb, ctx->tx_qidx); chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->tx_chan_id, @@ -773,7 +798,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam) } chcr_req = __skb_put_zero(skb, transhdr_len); chcr_req->sec_cpl.op_ivinsrtofst = - FILL_SEC_CPL_OP_IVINSR(c_ctx(tfm)->dev->rx_channel_id, 2, 1); + FILL_SEC_CPL_OP_IVINSR(c_ctx(tfm)->tx_chan_id, 2, 1); chcr_req->sec_cpl.pldlen = htonl(IV + wrparam->bytes); chcr_req->sec_cpl.aadstart_cipherstop_hi = @@ -1100,6 +1125,7 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req, struct cpl_fw6_pld *fw6_pld = (struct 
cpl_fw6_pld *)input; struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); struct cipher_wr_param wrparam; + struct chcr_dev *dev = c_ctx(tfm)->dev; int bytes; if (err) @@ -1161,6 +1187,7 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req, unmap: chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); complete: + chcr_dec_wrcount(dev); req->base.complete(&req->base, err); return err; } @@ -1187,7 +1214,10 @@ static int process_cipher(struct ablkcipher_request *req, ablkctx->enckey_len, req->nbytes, ivsize); goto error; } - chcr_cipher_dma_map(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); + + err = chcr_cipher_dma_map(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); + if (err) + goto error; if (req->nbytes < (SGE_MAX_WR_LEN - (sizeof(struct chcr_wr) + AES_MIN_KEY_SIZE + sizeof(struct cpl_rx_phys_dsgl) + @@ -1276,15 +1306,21 @@ error: static int chcr_aes_encrypt(struct ablkcipher_request *req) { struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); + struct chcr_dev *dev = c_ctx(tfm)->dev; struct sk_buff *skb = NULL; int err, isfull = 0; struct uld_ctx *u_ctx = ULD_CTX(c_ctx(tfm)); + err = chcr_inc_wrcount(dev); + if (err) + return -ENXIO; if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], c_ctx(tfm)->tx_qidx))) { isfull = 1; - if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) - return -ENOSPC; + if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { + err = -ENOSPC; + goto error; + } } err = process_cipher(req, u_ctx->lldi.rxq_ids[c_ctx(tfm)->rx_qidx], @@ -1295,15 +1331,23 @@ static int chcr_aes_encrypt(struct ablkcipher_request *req) set_wr_txq(skb, CPL_PRIORITY_DATA, c_ctx(tfm)->tx_qidx); chcr_send_wr(skb); return isfull ? -EBUSY : -EINPROGRESS; +error: + chcr_dec_wrcount(dev); + return err; } static int chcr_aes_decrypt(struct ablkcipher_request *req) { struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); struct uld_ctx *u_ctx = ULD_CTX(c_ctx(tfm)); + struct chcr_dev *dev = c_ctx(tfm)->dev; struct sk_buff *skb = NULL; int err, isfull = 0; + err = chcr_inc_wrcount(dev); + if (err) + return -ENXIO; + if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], c_ctx(tfm)->tx_qidx))) { isfull = 1; @@ -1311,8 +1355,8 @@ static int chcr_aes_decrypt(struct ablkcipher_request *req) return -ENOSPC; } - err = process_cipher(req, u_ctx->lldi.rxq_ids[c_ctx(tfm)->rx_qidx], - &skb, CHCR_DECRYPT_OP); + err = process_cipher(req, u_ctx->lldi.rxq_ids[c_ctx(tfm)->rx_qidx], + &skb, CHCR_DECRYPT_OP); if (err || !skb) return err; skb->dev = u_ctx->lldi.ports[0]; @@ -1333,10 +1377,11 @@ static int chcr_device_init(struct chcr_context *ctx) if (!ctx->dev) { u_ctx = assign_chcr_device(); if (!u_ctx) { + err = -ENXIO; pr_err("chcr device assignment fails\n"); goto out; } - ctx->dev = u_ctx->dev; + ctx->dev = &u_ctx->dev; adap = padap(ctx->dev); ntxq = u_ctx->lldi.ntxq; rxq_perchan = u_ctx->lldi.nrxq / u_ctx->lldi.nchan; @@ -1344,7 +1389,6 @@ static int chcr_device_init(struct chcr_context *ctx) spin_lock(&ctx->dev->lock_chcr_dev); ctx->tx_chan_id = ctx->dev->tx_channel_id; ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id; - ctx->dev->rx_channel_id = 0; spin_unlock(&ctx->dev->lock_chcr_dev); rxq_idx = ctx->tx_chan_id * rxq_perchan; rxq_idx += id % rxq_perchan; @@ -1498,7 +1542,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request *req, chcr_req = __skb_put_zero(skb, transhdr_len); chcr_req->sec_cpl.op_ivinsrtofst = - FILL_SEC_CPL_OP_IVINSR(h_ctx(tfm)->dev->rx_channel_id, 2, 0); + FILL_SEC_CPL_OP_IVINSR(h_ctx(tfm)->tx_chan_id, 2, 0); 
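The chelsio hunks above introduce reference counting of work requests: chcr_inc_wrcount() refuses new work once the device state is CHCR_DETACH, and every completion or error path now calls chcr_dec_wrcount(), so detach can wait for the count to drain. A minimal userspace sketch of that gate follows, using C11 atomics where the driver uses the lock_chcr_dev spinlock; all names are illustrative, not the driver's.

#include <stdatomic.h>
#include <stdbool.h>

enum dev_state { DEV_ATTACHED, DEV_DETACHED };

struct dev {
	_Atomic enum dev_state state;
	atomic_int inflight;
};

/* Returns false when the device is detaching; on success the caller holds
 * a reference that keeps device resources valid until dec_wrcount(). */
static bool inc_wrcount(struct dev *d)
{
	if (atomic_load(&d->state) == DEV_DETACHED)
		return false;
	atomic_fetch_add(&d->inflight, 1);
	/* The driver closes this race with a spinlock; lock-free code must
	 * re-check the state after publishing the increment. */
	if (atomic_load(&d->state) == DEV_DETACHED) {
		atomic_fetch_sub(&d->inflight, 1);
		return false;
	}
	return true;
}

static void dec_wrcount(struct dev *d)
{
	atomic_fetch_sub(&d->inflight, 1);
}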
chcr_req->sec_cpl.pldlen = htonl(param->bfr_len + param->sg_len); chcr_req->sec_cpl.aadstart_cipherstop_hi = @@ -1562,6 +1606,7 @@ static int chcr_ahash_update(struct ahash_request *req) struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req); struct crypto_ahash *rtfm = crypto_ahash_reqtfm(req); struct uld_ctx *u_ctx = NULL; + struct chcr_dev *dev = h_ctx(rtfm)->dev; struct sk_buff *skb; u8 remainder = 0, bs; unsigned int nbytes = req->nbytes; @@ -1570,12 +1615,6 @@ static int chcr_ahash_update(struct ahash_request *req) bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm)); u_ctx = ULD_CTX(h_ctx(rtfm)); - if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], - h_ctx(rtfm)->tx_qidx))) { - isfull = 1; - if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) - return -ENOSPC; - } if (nbytes + req_ctx->reqlen >= bs) { remainder = (nbytes + req_ctx->reqlen) % bs; @@ -1586,10 +1625,27 @@ static int chcr_ahash_update(struct ahash_request *req) req_ctx->reqlen += nbytes; return 0; } + error = chcr_inc_wrcount(dev); + if (error) + return -ENXIO; + /* Detach state for CHCR means lldi or padap is freed. Increasing + * inflight count for dev guarantees that lldi and padap are valid + */ + if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], + h_ctx(rtfm)->tx_qidx))) { + isfull = 1; + if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { + error = -ENOSPC; + goto err; + } + } + chcr_init_hctx_per_wr(req_ctx); error = chcr_hash_dma_map(&u_ctx->lldi.pdev->dev, req); - if (error) - return -ENOMEM; + if (error) { + error = -ENOMEM; + goto err; + } get_alg_config(&params.alg_prm, crypto_ahash_digestsize(rtfm)); params.kctx_len = roundup(params.alg_prm.result_size, 16); params.sg_len = chcr_hash_ent_in_wr(req->src, !!req_ctx->reqlen, @@ -1629,6 +1685,8 @@ static int chcr_ahash_update(struct ahash_request *req) return isfull ?
-EBUSY : -EINPROGRESS; unmap: chcr_hash_dma_unmap(&u_ctx->lldi.pdev->dev, req); +err: + chcr_dec_wrcount(dev); return error; } @@ -1646,10 +1704,16 @@ static int chcr_ahash_final(struct ahash_request *req) { struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req); struct crypto_ahash *rtfm = crypto_ahash_reqtfm(req); + struct chcr_dev *dev = h_ctx(rtfm)->dev; struct hash_wr_param params; struct sk_buff *skb; struct uld_ctx *u_ctx = NULL; u8 bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm)); + int error = -EINVAL; + + error = chcr_inc_wrcount(dev); + if (error) + return -ENXIO; chcr_init_hctx_per_wr(req_ctx); u_ctx = ULD_CTX(h_ctx(rtfm)); @@ -1686,19 +1750,25 @@ static int chcr_ahash_final(struct ahash_request *req) } params.hash_size = crypto_ahash_digestsize(rtfm); skb = create_hash_wr(req, &params); - if (IS_ERR(skb)) - return PTR_ERR(skb); + if (IS_ERR(skb)) { + error = PTR_ERR(skb); + goto err; + } req_ctx->reqlen = 0; skb->dev = u_ctx->lldi.ports[0]; set_wr_txq(skb, CPL_PRIORITY_DATA, h_ctx(rtfm)->tx_qidx); chcr_send_wr(skb); return -EINPROGRESS; +err: + chcr_dec_wrcount(dev); + return error; } static int chcr_ahash_finup(struct ahash_request *req) { struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req); struct crypto_ahash *rtfm = crypto_ahash_reqtfm(req); + struct chcr_dev *dev = h_ctx(rtfm)->dev; struct uld_ctx *u_ctx = NULL; struct sk_buff *skb; struct hash_wr_param params; @@ -1707,17 +1777,24 @@ static int chcr_ahash_finup(struct ahash_request *req) bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm)); u_ctx = ULD_CTX(h_ctx(rtfm)); + error = chcr_inc_wrcount(dev); + if (error) + return -ENXIO; if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], h_ctx(rtfm)->tx_qidx))) { isfull = 1; - if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) - return -ENOSPC; + if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { + error = -ENOSPC; + goto err; + } } chcr_init_hctx_per_wr(req_ctx); error = chcr_hash_dma_map(&u_ctx->lldi.pdev->dev, req); - if (error) - return -ENOMEM; + if (error) { + error = -ENOMEM; + goto err; + } get_alg_config(&params.alg_prm, crypto_ahash_digestsize(rtfm)); params.kctx_len = roundup(params.alg_prm.result_size, 16); @@ -1774,6 +1851,8 @@ static int chcr_ahash_finup(struct ahash_request *req) return isfull ?
-EBUSY : -EINPROGRESS; unmap: chcr_hash_dma_unmap(&u_ctx->lldi.pdev->dev, req); +err: + chcr_dec_wrcount(dev); return error; } @@ -1781,6 +1860,7 @@ static int chcr_ahash_digest(struct ahash_request *req) { struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req); struct crypto_ahash *rtfm = crypto_ahash_reqtfm(req); + struct chcr_dev *dev = h_ctx(rtfm)->dev; struct uld_ctx *u_ctx = NULL; struct sk_buff *skb; struct hash_wr_param params; @@ -1789,19 +1869,26 @@ static int chcr_ahash_digest(struct ahash_request *req) rtfm->init(req); bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm)); + error = chcr_inc_wrcount(dev); + if (error) + return -ENXIO; u_ctx = ULD_CTX(h_ctx(rtfm)); if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], h_ctx(rtfm)->tx_qidx))) { isfull = 1; - if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) - return -ENOSPC; + if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { + error = -ENOSPC; + goto err; + } } chcr_init_hctx_per_wr(req_ctx); error = chcr_hash_dma_map(&u_ctx->lldi.pdev->dev, req); - if (error) - return -ENOMEM; + if (error) { + error = -ENOMEM; + goto err; + } get_alg_config(&params.alg_prm, crypto_ahash_digestsize(rtfm)); params.kctx_len = roundup(params.alg_prm.result_size, 16); @@ -1854,6 +1941,8 @@ static int chcr_ahash_digest(struct ahash_request *req) return isfull ? -EBUSY : -EINPROGRESS; unmap: chcr_hash_dma_unmap(&u_ctx->lldi.pdev->dev, req); +err: + chcr_dec_wrcount(dev); return error; } @@ -1925,6 +2014,7 @@ static inline void chcr_handle_ahash_resp(struct ahash_request *req, int digestsize, updated_digestsize; struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct uld_ctx *u_ctx = ULD_CTX(h_ctx(tfm)); + struct chcr_dev *dev = h_ctx(tfm)->dev; if (input == NULL) goto out; @@ -1967,6 +2057,7 @@ unmap: out: + chcr_dec_wrcount(dev); req->base.complete(&req->base, err); } @@ -1983,14 +2074,13 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input, switch (tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK) { case CRYPTO_ALG_TYPE_AEAD: - chcr_handle_aead_resp(aead_request_cast(req), input, err); + err = chcr_handle_aead_resp(aead_request_cast(req), input, err); break; case CRYPTO_ALG_TYPE_ABLKCIPHER: - err = chcr_handle_cipher_resp(ablkcipher_request_cast(req), + chcr_handle_cipher_resp(ablkcipher_request_cast(req), input, err); break; - case CRYPTO_ALG_TYPE_AHASH: chcr_handle_ahash_resp(ahash_request_cast(req), input, err); } @@ -2008,7 +2098,7 @@ static int chcr_ahash_export(struct ahash_request *areq, void *out) memcpy(state->partial_hash, req_ctx->partial_hash, CHCR_HASH_MAX_DIGEST_SIZE); chcr_init_hctx_per_wr(state); - return 0; + return 0; } static int chcr_ahash_import(struct ahash_request *areq, const void *in) @@ -2215,10 +2305,7 @@ static int chcr_aead_common_init(struct aead_request *req) error = -ENOMEM; goto err; } - reqctx->aad_nents = sg_nents_xlen(req->src, req->assoclen, - CHCR_SRC_SG_SIZE, 0); - reqctx->src_nents = sg_nents_xlen(req->src, req->cryptlen, - CHCR_SRC_SG_SIZE, req->assoclen); + return 0; err: return error; @@ -2249,7 +2336,7 @@ static int chcr_aead_fallback(struct aead_request *req, unsigned short op_type) req->base.complete, req->base.data); aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req->iv); - aead_request_set_ad(subreq, req->assoclen); + aead_request_set_ad(subreq, req->assoclen); return op_type ?
crypto_aead_decrypt(subreq) : crypto_aead_encrypt(subreq); } @@ -2268,10 +2355,10 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req, struct ulptx_sgl *ulptx; unsigned int transhdr_len; unsigned int dst_size = 0, temp, subtype = get_aead_subtype(tfm); - unsigned int kctx_len = 0, dnents; - unsigned int assoclen = req->assoclen; + unsigned int kctx_len = 0, dnents, snents; unsigned int authsize = crypto_aead_authsize(tfm); int error = -EINVAL; + u8 *ivptr; int null = 0; gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC; @@ -2288,24 +2375,20 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req, if (subtype == CRYPTO_ALG_SUB_TYPE_CBC_NULL || subtype == CRYPTO_ALG_SUB_TYPE_CTR_NULL) { null = 1; - assoclen = 0; - reqctx->aad_nents = 0; } - dnents = sg_nents_xlen(req->dst, assoclen, CHCR_DST_SG_SIZE, 0); - dnents += sg_nents_xlen(req->dst, req->cryptlen + - (reqctx->op ? -authsize : authsize), CHCR_DST_SG_SIZE, - req->assoclen); + dnents = sg_nents_xlen(req->dst, req->assoclen + req->cryptlen + + (reqctx->op ? -authsize : authsize), CHCR_DST_SG_SIZE, 0); dnents += MIN_AUTH_SG; // For IV - + snents = sg_nents_xlen(req->src, req->assoclen + req->cryptlen, + CHCR_SRC_SG_SIZE, 0); dst_size = get_space_for_phys_dsgl(dnents); kctx_len = (ntohl(KEY_CONTEXT_CTX_LEN_V(aeadctx->key_ctx_hdr)) << 4) - sizeof(chcr_req->key_ctx); transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, dst_size); - reqctx->imm = (transhdr_len + assoclen + IV + req->cryptlen) < + reqctx->imm = (transhdr_len + req->assoclen + req->cryptlen) < SGE_MAX_WR_LEN; - temp = reqctx->imm ? roundup(assoclen + IV + req->cryptlen, 16) - : (sgl_len(reqctx->src_nents + reqctx->aad_nents - + MIN_GCM_SG) * 8); + temp = reqctx->imm ? roundup(req->assoclen + req->cryptlen, 16) + : (sgl_len(snents) * 8); transhdr_len += temp; transhdr_len = roundup(transhdr_len, 16); @@ -2315,7 +2398,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req, chcr_aead_common_exit(req); return ERR_PTR(chcr_aead_fallback(req, reqctx->op)); } - skb = alloc_skb(SGE_MAX_WR_LEN, flags); + skb = alloc_skb(transhdr_len, flags); if (!skb) { error = -ENOMEM; goto err; @@ -2331,16 +2414,16 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req, * to the hardware spec */ chcr_req->sec_cpl.op_ivinsrtofst = - FILL_SEC_CPL_OP_IVINSR(a_ctx(tfm)->dev->rx_channel_id, 2, - assoclen + 1); - chcr_req->sec_cpl.pldlen = htonl(assoclen + IV + req->cryptlen); + FILL_SEC_CPL_OP_IVINSR(a_ctx(tfm)->tx_chan_id, 2, 1); + chcr_req->sec_cpl.pldlen = htonl(req->assoclen + IV + req->cryptlen); chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI( - assoclen ? 1 : 0, assoclen, - assoclen + IV + 1, + null ? 0 : 1 + IV, + null ? 0 : IV + req->assoclen, + req->assoclen + IV + 1, (temp & 0x1F0) >> 4); chcr_req->sec_cpl.cipherstop_lo_authinsert = FILL_SEC_CPL_AUTHINSERT( temp & 0xF, - null ? 0 : assoclen + IV + 1, + null ? 
0 : req->assoclen + IV + 1, temp, temp); if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_NULL || subtype == CRYPTO_ALG_SUB_TYPE_CTR_SHA) @@ -2367,23 +2450,24 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req, memcpy(chcr_req->key_ctx.key + roundup(aeadctx->enckey_len, 16), actx->h_iopad, kctx_len - roundup(aeadctx->enckey_len, 16)); + phys_cpl = (struct cpl_rx_phys_dsgl *)((u8 *)(chcr_req + 1) + kctx_len); + ivptr = (u8 *)(phys_cpl + 1) + dst_size; + ulptx = (struct ulptx_sgl *)(ivptr + IV); if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_SHA || subtype == CRYPTO_ALG_SUB_TYPE_CTR_NULL) { - memcpy(reqctx->iv, aeadctx->nonce, CTR_RFC3686_NONCE_SIZE); - memcpy(reqctx->iv + CTR_RFC3686_NONCE_SIZE, req->iv, + memcpy(ivptr, aeadctx->nonce, CTR_RFC3686_NONCE_SIZE); + memcpy(ivptr + CTR_RFC3686_NONCE_SIZE, req->iv, CTR_RFC3686_IV_SIZE); - *(__be32 *)(reqctx->iv + CTR_RFC3686_NONCE_SIZE + + *(__be32 *)(ivptr + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) = cpu_to_be32(1); } else { - memcpy(reqctx->iv, req->iv, IV); + memcpy(ivptr, req->iv, IV); } - phys_cpl = (struct cpl_rx_phys_dsgl *)((u8 *)(chcr_req + 1) + kctx_len); - ulptx = (struct ulptx_sgl *)((u8 *)(phys_cpl + 1) + dst_size); - chcr_add_aead_dst_ent(req, phys_cpl, assoclen, qid); - chcr_add_aead_src_ent(req, ulptx, assoclen); + chcr_add_aead_dst_ent(req, phys_cpl, qid); + chcr_add_aead_src_ent(req, ulptx); atomic_inc(&adap->chcr_stats.cipher_rqst); - temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + - kctx_len + (reqctx->imm ? (assoclen + IV + req->cryptlen) : 0); + temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + IV + + kctx_len + (reqctx->imm ? (req->assoclen + req->cryptlen) : 0); create_wreq(a_ctx(tfm), chcr_req, &req->base, reqctx->imm, size, transhdr_len, temp, 0); reqctx->skb = skb; @@ -2470,8 +2554,7 @@ void chcr_aead_dma_unmap(struct device *dev, } void chcr_add_aead_src_ent(struct aead_request *req, - struct ulptx_sgl *ulptx, - unsigned int assoclen) + struct ulptx_sgl *ulptx) { struct ulptx_walk ulp_walk; struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); @@ -2484,28 +2567,20 @@ void chcr_add_aead_src_ent(struct aead_request *req, buf += reqctx->b0_len; } sg_pcopy_to_buffer(req->src, sg_nents(req->src), - buf, assoclen, 0); - buf += assoclen; - memcpy(buf, reqctx->iv, IV); - buf += IV; - sg_pcopy_to_buffer(req->src, sg_nents(req->src), - buf, req->cryptlen, req->assoclen); + buf, req->cryptlen + req->assoclen, 0); } else { ulptx_walk_init(&ulp_walk, ulptx); if (reqctx->b0_len) ulptx_walk_add_page(&ulp_walk, reqctx->b0_len, - &reqctx->b0_dma); - ulptx_walk_add_sg(&ulp_walk, req->src, assoclen, 0); - ulptx_walk_add_page(&ulp_walk, IV, &reqctx->iv_dma); - ulptx_walk_add_sg(&ulp_walk, req->src, req->cryptlen, - req->assoclen); + reqctx->b0_dma); + ulptx_walk_add_sg(&ulp_walk, req->src, req->cryptlen + + req->assoclen, 0); ulptx_walk_end(&ulp_walk); } } void chcr_add_aead_dst_ent(struct aead_request *req, struct cpl_rx_phys_dsgl *phys_cpl, - unsigned int assoclen, unsigned short qid) { struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); @@ -2516,12 +2591,10 @@ void chcr_add_aead_dst_ent(struct aead_request *req, u32 temp; dsgl_walk_init(&dsgl_walk, phys_cpl); - if (reqctx->b0_len) - dsgl_walk_add_page(&dsgl_walk, reqctx->b0_len, &reqctx->b0_dma); - dsgl_walk_add_sg(&dsgl_walk, req->dst, assoclen, 0); - dsgl_walk_add_page(&dsgl_walk, IV, &reqctx->iv_dma); - temp = req->cryptlen + (reqctx->op ? 
-authsize : authsize); - dsgl_walk_add_sg(&dsgl_walk, req->dst, temp, req->assoclen); + dsgl_walk_add_page(&dsgl_walk, IV + reqctx->b0_len, reqctx->iv_dma); + temp = req->assoclen + req->cryptlen + + (reqctx->op ? -authsize : authsize); + dsgl_walk_add_sg(&dsgl_walk, req->dst, temp, 0); dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id); } @@ -2589,7 +2662,7 @@ void chcr_add_hash_src_ent(struct ahash_request *req, ulptx_walk_init(&ulp_walk, ulptx); if (param->bfr_len) ulptx_walk_add_page(&ulp_walk, param->bfr_len, - &reqctx->hctx_wr.dma_addr); + reqctx->hctx_wr.dma_addr); ulptx_walk_add_sg(&ulp_walk, reqctx->hctx_wr.srcsg, param->sg_len, reqctx->hctx_wr.src_ofst); reqctx->hctx_wr.srcsg = ulp_walk.last_sg; @@ -2689,8 +2762,7 @@ static int set_msg_len(u8 *block, unsigned int msglen, int csize) return 0; } -static void generate_b0(struct aead_request *req, - struct chcr_aead_ctx *aeadctx, +static void generate_b0(struct aead_request *req, u8 *ivptr, unsigned short op_type) { unsigned int l, lp, m; @@ -2701,7 +2773,7 @@ static void generate_b0(struct aead_request *req, m = crypto_aead_authsize(aead); - memcpy(b0, reqctx->iv, 16); + memcpy(b0, ivptr, 16); lp = b0[0]; l = lp + 1; @@ -2727,29 +2799,31 @@ static inline int crypto_ccm_check_iv(const u8 *iv) } static int ccm_format_packet(struct aead_request *req, - struct chcr_aead_ctx *aeadctx, + u8 *ivptr, unsigned int sub_type, unsigned short op_type, unsigned int assoclen) { struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct chcr_aead_ctx *aeadctx = AEAD_CTX(a_ctx(tfm)); int rc = 0; if (sub_type == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4309) { - reqctx->iv[0] = 3; - memcpy(reqctx->iv + 1, &aeadctx->salt[0], 3); - memcpy(reqctx->iv + 4, req->iv, 8); - memset(reqctx->iv + 12, 0, 4); + ivptr[0] = 3; + memcpy(ivptr + 1, &aeadctx->salt[0], 3); + memcpy(ivptr + 4, req->iv, 8); + memset(ivptr + 12, 0, 4); } else { - memcpy(reqctx->iv, req->iv, 16); + memcpy(ivptr, req->iv, 16); } if (assoclen) *((unsigned short *)(reqctx->scratch_pad + 16)) = htons(assoclen); - generate_b0(req, aeadctx, op_type); + generate_b0(req, ivptr, op_type); /* zero the ctr value */ - memset(reqctx->iv + 15 - reqctx->iv[0], 0, reqctx->iv[0] + 1); + memset(ivptr + 15 - ivptr[0], 0, ivptr[0] + 1); return rc; } @@ -2762,7 +2836,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl, struct chcr_aead_ctx *aeadctx = AEAD_CTX(a_ctx(tfm)); unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM; unsigned int mac_mode = CHCR_SCMD_AUTH_MODE_CBCMAC; - unsigned int c_id = a_ctx(tfm)->dev->rx_channel_id; + unsigned int c_id = a_ctx(tfm)->tx_chan_id; unsigned int ccm_xtra; unsigned char tag_offset = 0, auth_offset = 0; unsigned int assoclen; @@ -2775,7 +2849,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl, ((assoclen) ? CCM_AAD_FIELD_SIZE : 0); auth_offset = req->cryptlen ? - (assoclen + IV + 1 + ccm_xtra) : 0; + (req->assoclen + IV + 1 + ccm_xtra) : 0; if (op_type == CHCR_DECRYPT_OP) { if (crypto_aead_authsize(tfm) != req->cryptlen) tag_offset = crypto_aead_authsize(tfm); @@ -2785,13 +2859,13 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl, sec_cpl->op_ivinsrtofst = FILL_SEC_CPL_OP_IVINSR(c_id, - 2, assoclen + 1 + ccm_xtra); + 2, 1); sec_cpl->pldlen = - htonl(assoclen + IV + req->cryptlen + ccm_xtra); + htonl(req->assoclen + IV + req->cryptlen + ccm_xtra); /* For CCM there will be b0 always.
So AAD start will be 1 always */ sec_cpl->aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI( - 1, assoclen + ccm_xtra, assoclen - + IV + 1 + ccm_xtra, 0); + 1 + IV, IV + assoclen + ccm_xtra, + req->assoclen + IV + 1 + ccm_xtra, 0); sec_cpl->cipherstop_lo_authinsert = FILL_SEC_CPL_AUTHINSERT(0, auth_offset, tag_offset, @@ -2838,10 +2912,11 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req, struct cpl_rx_phys_dsgl *phys_cpl; struct ulptx_sgl *ulptx; unsigned int transhdr_len; - unsigned int dst_size = 0, kctx_len, dnents, temp; + unsigned int dst_size = 0, kctx_len, dnents, temp, snents; unsigned int sub_type, assoclen = req->assoclen; unsigned int authsize = crypto_aead_authsize(tfm); int error = -EINVAL; + u8 *ivptr; gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC; struct adapter *adap = padap(a_ctx(tfm)->dev); @@ -2857,37 +2932,38 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req, error = aead_ccm_validate_input(reqctx->op, req, aeadctx, sub_type); if (error) goto err; - dnents = sg_nents_xlen(req->dst, assoclen, CHCR_DST_SG_SIZE, 0); - dnents += sg_nents_xlen(req->dst, req->cryptlen + dnents = sg_nents_xlen(req->dst, req->assoclen + req->cryptlen + (reqctx->op ? -authsize : authsize), - CHCR_DST_SG_SIZE, req->assoclen); + CHCR_DST_SG_SIZE, 0); dnents += MIN_CCM_SG; // For IV and B0 dst_size = get_space_for_phys_dsgl(dnents); + snents = sg_nents_xlen(req->src, req->assoclen + req->cryptlen, + CHCR_SRC_SG_SIZE, 0); + snents += MIN_CCM_SG; //For B0 kctx_len = roundup(aeadctx->enckey_len, 16) * 2; transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, dst_size); - reqctx->imm = (transhdr_len + assoclen + IV + req->cryptlen + + reqctx->imm = (transhdr_len + req->assoclen + req->cryptlen + reqctx->b0_len) <= SGE_MAX_WR_LEN; - temp = reqctx->imm ? roundup(assoclen + IV + req->cryptlen + + temp = reqctx->imm ? roundup(req->assoclen + req->cryptlen + reqctx->b0_len, 16) : - (sgl_len(reqctx->src_nents + reqctx->aad_nents + - MIN_CCM_SG) * 8); + (sgl_len(snents) * 8); transhdr_len += temp; transhdr_len = roundup(transhdr_len, 16); if (chcr_aead_need_fallback(req, dnents, T6_MAX_AAD_SIZE - - reqctx->b0_len, transhdr_len, reqctx->op)) { + reqctx->b0_len, transhdr_len, reqctx->op)) { atomic_inc(&adap->chcr_stats.fallback); chcr_aead_common_exit(req); return ERR_PTR(chcr_aead_fallback(req, reqctx->op)); } - skb = alloc_skb(SGE_MAX_WR_LEN, flags); + skb = alloc_skb(transhdr_len, flags); if (!skb) { error = -ENOMEM; goto err; } - chcr_req = (struct chcr_wr *) __skb_put_zero(skb, transhdr_len); + chcr_req = __skb_put_zero(skb, transhdr_len); fill_sec_cpl_for_aead(&chcr_req->sec_cpl, dst_size, req, reqctx->op); @@ -2897,16 +2973,17 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req, aeadctx->key, aeadctx->enckey_len); phys_cpl = (struct cpl_rx_phys_dsgl *)((u8 *)(chcr_req + 1) + kctx_len); - ulptx = (struct ulptx_sgl *)((u8 *)(phys_cpl + 1) + dst_size); - error = ccm_format_packet(req, aeadctx, sub_type, reqctx->op, assoclen); + ivptr = (u8 *)(phys_cpl + 1) + dst_size; + ulptx = (struct ulptx_sgl *)(ivptr + IV); + error = ccm_format_packet(req, ivptr, sub_type, reqctx->op, assoclen); if (error) goto dstmap_fail; - chcr_add_aead_dst_ent(req, phys_cpl, assoclen, qid); - chcr_add_aead_src_ent(req, ulptx, assoclen); + chcr_add_aead_dst_ent(req, phys_cpl, qid); + chcr_add_aead_src_ent(req, ulptx); atomic_inc(&adap->chcr_stats.aead_rqst); - temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + - kctx_len + (reqctx->imm ? 
(assoclen + IV + req->cryptlen + + temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + IV + + kctx_len + (reqctx->imm ? (req->assoclen + req->cryptlen + reqctx->b0_len) : 0); create_wreq(a_ctx(tfm), chcr_req, &req->base, reqctx->imm, 0, transhdr_len, temp, 0); @@ -2931,10 +3008,11 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req, struct chcr_wr *chcr_req; struct cpl_rx_phys_dsgl *phys_cpl; struct ulptx_sgl *ulptx; - unsigned int transhdr_len, dnents = 0; + unsigned int transhdr_len, dnents = 0, snents; unsigned int dst_size = 0, temp = 0, kctx_len, assoclen = req->assoclen; unsigned int authsize = crypto_aead_authsize(tfm); int error = -EINVAL; + u8 *ivptr; gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC; struct adapter *adap = padap(a_ctx(tfm)->dev); @@ -2946,19 +3024,19 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req, error = chcr_aead_common_init(req); if (error) return ERR_PTR(error); - dnents = sg_nents_xlen(req->dst, assoclen, CHCR_DST_SG_SIZE, 0); - dnents += sg_nents_xlen(req->dst, req->cryptlen + + dnents = sg_nents_xlen(req->dst, req->assoclen + req->cryptlen + (reqctx->op ? -authsize : authsize), - CHCR_DST_SG_SIZE, req->assoclen); + CHCR_DST_SG_SIZE, 0); + snents = sg_nents_xlen(req->src, req->assoclen + req->cryptlen, + CHCR_SRC_SG_SIZE, 0); dnents += MIN_GCM_SG; // For IV dst_size = get_space_for_phys_dsgl(dnents); kctx_len = roundup(aeadctx->enckey_len, 16) + AEAD_H_SIZE; transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, dst_size); - reqctx->imm = (transhdr_len + assoclen + IV + req->cryptlen) <= + reqctx->imm = (transhdr_len + req->assoclen + req->cryptlen) <= SGE_MAX_WR_LEN; - temp = reqctx->imm ? roundup(assoclen + IV + req->cryptlen, 16) : - (sgl_len(reqctx->src_nents + - reqctx->aad_nents + MIN_GCM_SG) * 8); + temp = reqctx->imm ? roundup(req->assoclen + req->cryptlen, 16) : + (sgl_len(snents) * 8); transhdr_len += temp; transhdr_len = roundup(transhdr_len, 16); if (chcr_aead_need_fallback(req, dnents, T6_MAX_AAD_SIZE, @@ -2968,7 +3046,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req, chcr_aead_common_exit(req); return ERR_PTR(chcr_aead_fallback(req, reqctx->op)); } - skb = alloc_skb(SGE_MAX_WR_LEN, flags); + skb = alloc_skb(transhdr_len, flags); if (!skb) { error = -ENOMEM; goto err; @@ -2979,15 +3057,15 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req, //Offset of tag from end temp = (reqctx->op == CHCR_ENCRYPT_OP) ? 0 : authsize; chcr_req->sec_cpl.op_ivinsrtofst = FILL_SEC_CPL_OP_IVINSR( - a_ctx(tfm)->dev->rx_channel_id, 2, - (assoclen + 1)); + a_ctx(tfm)->tx_chan_id, 2, 1); chcr_req->sec_cpl.pldlen = - htonl(assoclen + IV + req->cryptlen); + htonl(req->assoclen + IV + req->cryptlen); chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI( - assoclen ? 1 : 0, assoclen, - assoclen + IV + 1, 0); + assoclen ? 1 + IV : 0, + assoclen ? 
IV + assoclen : 0, + req->assoclen + IV + 1, 0); chcr_req->sec_cpl.cipherstop_lo_authinsert = - FILL_SEC_CPL_AUTHINSERT(0, assoclen + IV + 1, + FILL_SEC_CPL_AUTHINSERT(0, req->assoclen + IV + 1, temp, temp); chcr_req->sec_cpl.seqno_numivs = FILL_SEC_CPL_SCMD0_SEQNO(reqctx->op, (reqctx->op == @@ -3002,25 +3080,26 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req, memcpy(chcr_req->key_ctx.key + roundup(aeadctx->enckey_len, 16), GCM_CTX(aeadctx)->ghash_h, AEAD_H_SIZE); + phys_cpl = (struct cpl_rx_phys_dsgl *)((u8 *)(chcr_req + 1) + kctx_len); + ivptr = (u8 *)(phys_cpl + 1) + dst_size; /* prepare a 16 byte iv */ /* S A L T | IV | 0x00000001 */ if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106) { - memcpy(reqctx->iv, aeadctx->salt, 4); - memcpy(reqctx->iv + 4, req->iv, GCM_RFC4106_IV_SIZE); + memcpy(ivptr, aeadctx->salt, 4); + memcpy(ivptr + 4, req->iv, GCM_RFC4106_IV_SIZE); } else { - memcpy(reqctx->iv, req->iv, GCM_AES_IV_SIZE); + memcpy(ivptr, req->iv, GCM_AES_IV_SIZE); } - *((unsigned int *)(reqctx->iv + 12)) = htonl(0x01); + *((unsigned int *)(ivptr + 12)) = htonl(0x01); - phys_cpl = (struct cpl_rx_phys_dsgl *)((u8 *)(chcr_req + 1) + kctx_len); - ulptx = (struct ulptx_sgl *)((u8 *)(phys_cpl + 1) + dst_size); + ulptx = (struct ulptx_sgl *)(ivptr + 16); - chcr_add_aead_dst_ent(req, phys_cpl, assoclen, qid); - chcr_add_aead_src_ent(req, ulptx, assoclen); + chcr_add_aead_dst_ent(req, phys_cpl, qid); + chcr_add_aead_src_ent(req, ulptx); atomic_inc(&adap->chcr_stats.aead_rqst); - temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + - kctx_len + (reqctx->imm ? (assoclen + IV + req->cryptlen) : 0); + temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + IV + + kctx_len + (reqctx->imm ? (req->assoclen + req->cryptlen) : 0); create_wreq(a_ctx(tfm), chcr_req, &req->base, reqctx->imm, size, transhdr_len, temp, reqctx->verify); reqctx->skb = skb; @@ -3118,12 +3197,12 @@ static int chcr_gcm_setauthsize(struct crypto_aead *tfm, unsigned int authsize) aeadctx->mayverify = VERIFY_HW; break; case ICV_12: - aeadctx->hmac_ctrl = CHCR_SCMD_HMAC_CTRL_IPSEC_96BIT; - aeadctx->mayverify = VERIFY_HW; + aeadctx->hmac_ctrl = CHCR_SCMD_HMAC_CTRL_IPSEC_96BIT; + aeadctx->mayverify = VERIFY_HW; break; case ICV_14: - aeadctx->hmac_ctrl = CHCR_SCMD_HMAC_CTRL_PL3; - aeadctx->mayverify = VERIFY_HW; + aeadctx->hmac_ctrl = CHCR_SCMD_HMAC_CTRL_PL3; + aeadctx->mayverify = VERIFY_HW; break; case ICV_16: aeadctx->hmac_ctrl = CHCR_SCMD_HMAC_CTRL_NO_TRUNC; @@ -3565,27 +3644,42 @@ static int chcr_aead_op(struct aead_request *req, create_wr_t create_wr_fn) { struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); struct uld_ctx *u_ctx; struct sk_buff *skb; int isfull = 0; + struct chcr_dev *cdev; - if (!a_ctx(tfm)->dev) { + cdev = a_ctx(tfm)->dev; + if (!cdev) { pr_err("chcr : %s : No crypto device.\n", __func__); return -ENXIO; } + + if (chcr_inc_wrcount(cdev)) { + /* Detach state for CHCR means lldi or padap is freed. + * We cannot increment fallback here. 
+ */ + return chcr_aead_fallback(req, reqctx->op); + } + u_ctx = ULD_CTX(a_ctx(tfm)); if (cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], a_ctx(tfm)->tx_qidx)) { isfull = 1; - if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) + if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { + chcr_dec_wrcount(cdev); return -ENOSPC; + } } /* Form a WR from req */ skb = create_wr_fn(req, u_ctx->lldi.rxq_ids[a_ctx(tfm)->rx_qidx], size); - if (IS_ERR(skb) || !skb) + if (IS_ERR(skb) || !skb) { + chcr_dec_wrcount(cdev); return PTR_ERR(skb); + } skb->dev = u_ctx->lldi.ports[0]; set_wr_txq(skb, CPL_PRIORITY_DATA, a_ctx(tfm)->tx_qidx); @@ -3722,7 +3816,6 @@ static struct chcr_alg_template driver_algs[] = { .setkey = chcr_aes_rfc3686_setkey, .encrypt = chcr_aes_encrypt, .decrypt = chcr_aes_decrypt, - .geniv = "seqiv", } } }, @@ -4178,7 +4271,6 @@ static struct chcr_alg_template driver_algs[] = { .setauthsize = chcr_authenc_null_setauthsize, } }, - }; /* diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h index 1871500309e2..ee20dd899e83 100644 --- a/drivers/crypto/chelsio/chcr_algo.h +++ b/drivers/crypto/chelsio/chcr_algo.h @@ -262,7 +262,7 @@ #define MIN_AUTH_SG 1 /* IV */ #define MIN_GCM_SG 1 /* IV */ #define MIN_DIGEST_SG 1 /*Partial Buffer*/ -#define MIN_CCM_SG 2 /*IV+B0*/ +#define MIN_CCM_SG 1 /*IV+B0*/ #define CIP_SPACE_LEFT(len) \ ((SGE_MAX_WR_LEN - CIP_WR_MIN_LEN - (len))) #define HASH_SPACE_LEFT(len) \ diff --git a/drivers/crypto/chelsio/chcr_core.c b/drivers/crypto/chelsio/chcr_core.c index 2c472e3c6aeb..239b933d6df6 100644 --- a/drivers/crypto/chelsio/chcr_core.c +++ b/drivers/crypto/chelsio/chcr_core.c @@ -26,10 +26,7 @@ #include "chcr_core.h" #include "cxgb4_uld.h" -static LIST_HEAD(uld_ctx_list); -static DEFINE_MUTEX(dev_mutex); -static atomic_t dev_count; -static struct uld_ctx *ctx_rr; +static struct chcr_driver_data drv_data; typedef int (*chcr_handler_func)(struct chcr_dev *dev, unsigned char *input); static int cpl_fw6_pld_handler(struct chcr_dev *dev, unsigned char *input); @@ -53,6 +50,29 @@ static struct cxgb4_uld_info chcr_uld_info = { #endif /* CONFIG_CHELSIO_IPSEC_INLINE */ }; +static void detach_work_fn(struct work_struct *work) +{ + struct chcr_dev *dev; + + dev = container_of(work, struct chcr_dev, detach_work.work); + + if (atomic_read(&dev->inflight)) { + dev->wqretry--; + if (dev->wqretry) { + pr_debug("Request Inflight Count %d\n", + atomic_read(&dev->inflight)); + + schedule_delayed_work(&dev->detach_work, WQ_DETACH_TM); + } else { + WARN(1, "CHCR: %d requests still pending\n", + atomic_read(&dev->inflight)); + complete(&dev->detach_comp); + } + } else { + complete(&dev->detach_comp); + } +} + struct uld_ctx *assign_chcr_device(void) { struct uld_ctx *u_ctx = NULL; @@ -63,56 +83,74 @@ struct uld_ctx *assign_chcr_device(void) * Although one session must use the same device to * maintain request-response ordering.
*/ - mutex_lock(&dev_mutex); - if (!list_empty(&uld_ctx_list)) { - u_ctx = ctx_rr; - if (list_is_last(&ctx_rr->entry, &uld_ctx_list)) - ctx_rr = list_first_entry(&uld_ctx_list, - struct uld_ctx, - entry); + mutex_lock(&drv_data.drv_mutex); + if (!list_empty(&drv_data.act_dev)) { + u_ctx = drv_data.last_dev; + if (list_is_last(&drv_data.last_dev->entry, &drv_data.act_dev)) + drv_data.last_dev = list_first_entry(&drv_data.act_dev, + struct uld_ctx, entry); else - ctx_rr = list_next_entry(ctx_rr, entry); + drv_data.last_dev = + list_next_entry(drv_data.last_dev, entry); } - mutex_unlock(&dev_mutex); + mutex_unlock(&drv_data.drv_mutex); return u_ctx; } -static int chcr_dev_add(struct uld_ctx *u_ctx) +static void chcr_dev_add(struct uld_ctx *u_ctx) { struct chcr_dev *dev; - dev = kzalloc(sizeof(*dev), GFP_KERNEL); - if (!dev) - return -ENXIO; + dev = &u_ctx->dev; + dev->state = CHCR_ATTACH; + atomic_set(&dev->inflight, 0); + mutex_lock(&drv_data.drv_mutex); + list_move(&u_ctx->entry, &drv_data.act_dev); + if (!drv_data.last_dev) + drv_data.last_dev = u_ctx; + mutex_unlock(&drv_data.drv_mutex); +} + +static void chcr_dev_init(struct uld_ctx *u_ctx) +{ + struct chcr_dev *dev; + dev = &u_ctx->dev; spin_lock_init(&dev->lock_chcr_dev); - u_ctx->dev = dev; - dev->u_ctx = u_ctx; - atomic_inc(&dev_count); - mutex_lock(&dev_mutex); - list_add_tail(&u_ctx->entry, &uld_ctx_list); - if (!ctx_rr) - ctx_rr = u_ctx; - mutex_unlock(&dev_mutex); - return 0; + INIT_DELAYED_WORK(&dev->detach_work, detach_work_fn); + init_completion(&dev->detach_comp); + dev->state = CHCR_INIT; + dev->wqretry = WQ_RETRY; + atomic_inc(&drv_data.dev_count); + atomic_set(&dev->inflight, 0); + mutex_lock(&drv_data.drv_mutex); + list_add_tail(&u_ctx->entry, &drv_data.inact_dev); + if (!drv_data.last_dev) + drv_data.last_dev = u_ctx; + mutex_unlock(&drv_data.drv_mutex); } -static int chcr_dev_remove(struct uld_ctx *u_ctx) +static int chcr_dev_move(struct uld_ctx *u_ctx) { - if (ctx_rr == u_ctx) { - if (list_is_last(&ctx_rr->entry, &uld_ctx_list)) - ctx_rr = list_first_entry(&uld_ctx_list, - struct uld_ctx, - entry); + struct adapter *adap; + + mutex_lock(&drv_data.drv_mutex); + if (drv_data.last_dev == u_ctx) { + if (list_is_last(&drv_data.last_dev->entry, &drv_data.act_dev)) + drv_data.last_dev = list_first_entry(&drv_data.act_dev, + struct uld_ctx, entry); else - ctx_rr = list_next_entry(ctx_rr, entry); + drv_data.last_dev = + list_next_entry(drv_data.last_dev, entry); } - list_del(&u_ctx->entry); - if (list_empty(&uld_ctx_list)) - ctx_rr = NULL; - kfree(u_ctx->dev); - u_ctx->dev = NULL; - atomic_dec(&dev_count); + list_move(&u_ctx->entry, &drv_data.inact_dev); + if (list_empty(&drv_data.act_dev)) + drv_data.last_dev = NULL; + adap = padap(&u_ctx->dev); + memset(&adap->chcr_stats, 0, sizeof(adap->chcr_stats)); + atomic_dec(&drv_data.dev_count); + mutex_unlock(&drv_data.drv_mutex); + return 0; } @@ -131,12 +169,8 @@ static int cpl_fw6_pld_handler(struct chcr_dev *dev, ack_err_status = ntohl(*(__be32 *)((unsigned char *)&fw6_pld->data[0] + 4)); - if (ack_err_status) { - if (CHK_MAC_ERR_BIT(ack_err_status) || - CHK_PAD_ERR_BIT(ack_err_status)) - error_status = -EBADMSG; - atomic_inc(&adap->chcr_stats.error); - } + if (CHK_MAC_ERR_BIT(ack_err_status) || CHK_PAD_ERR_BIT(ack_err_status)) + error_status = -EBADMSG; /* call completion callback with failure status */ if (req) { error_status = chcr_handle_resp(req, input, error_status); @@ -144,6 +178,9 @@ static int cpl_fw6_pld_handler(struct chcr_dev *dev, pr_err("Incorrect request address 
from the firmware\n"); return -EFAULT; } + if (error_status) + atomic_inc(&adap->chcr_stats.error); + return 0; } @@ -167,6 +204,7 @@ static void *chcr_uld_add(const struct cxgb4_lld_info *lld) goto out; } u_ctx->lldi = *lld; + chcr_dev_init(u_ctx); #ifdef CONFIG_CHELSIO_IPSEC_INLINE if (lld->crypto & ULP_CRYPTO_IPSEC_INLINE) chcr_add_xfrmops(lld); @@ -179,7 +217,7 @@ int chcr_uld_rx_handler(void *handle, const __be64 *rsp, const struct pkt_gl *pgl) { struct uld_ctx *u_ctx = (struct uld_ctx *)handle; - struct chcr_dev *dev = u_ctx->dev; + struct chcr_dev *dev = &u_ctx->dev; const struct cpl_fw6_pld *rpl = (struct cpl_fw6_pld *)rsp; if (rpl->opcode != CPL_FW6_PLD) { @@ -201,6 +239,28 @@ int chcr_uld_tx_handler(struct sk_buff *skb, struct net_device *dev) } #endif /* CONFIG_CHELSIO_IPSEC_INLINE */ +static void chcr_detach_device(struct uld_ctx *u_ctx) +{ + struct chcr_dev *dev = &u_ctx->dev; + + spin_lock_bh(&dev->lock_chcr_dev); + if (dev->state == CHCR_DETACH) { + spin_unlock_bh(&dev->lock_chcr_dev); + pr_debug("Detach event received for an already detached device\n"); + return; + } + dev->state = CHCR_DETACH; + spin_unlock_bh(&dev->lock_chcr_dev); + + if (atomic_read(&dev->inflight) != 0) { + schedule_delayed_work(&dev->detach_work, WQ_DETACH_TM); + wait_for_completion(&dev->detach_comp); + } + + // Move u_ctx to inactive_dev list + chcr_dev_move(u_ctx); +} + static int chcr_uld_state_change(void *handle, enum cxgb4_state state) { struct uld_ctx *u_ctx = handle; @@ -208,23 +268,16 @@ static int chcr_uld_state_change(void *handle, enum cxgb4_state state) switch (state) { case CXGB4_STATE_UP: - if (!u_ctx->dev) { - ret = chcr_dev_add(u_ctx); - if (ret != 0) - return ret; + if (u_ctx->dev.state != CHCR_INIT) { + // Already initialised. + return 0; } - if (atomic_read(&dev_count) == 1) - ret = start_crypto(); + chcr_dev_add(u_ctx); + ret = start_crypto(); break; case CXGB4_STATE_DETACH: - if (u_ctx->dev) { - mutex_lock(&dev_mutex); - chcr_dev_remove(u_ctx); - mutex_unlock(&dev_mutex); - } - if (!atomic_read(&dev_count)) - stop_crypto(); + chcr_detach_device(u_ctx); break; case CXGB4_STATE_START_RECOVERY: @@ -237,7 +290,13 @@ static int chcr_uld_state_change(void *handle, enum cxgb4_state state) static int __init chcr_crypto_init(void) { + INIT_LIST_HEAD(&drv_data.act_dev); + INIT_LIST_HEAD(&drv_data.inact_dev); + atomic_set(&drv_data.dev_count, 0); + mutex_init(&drv_data.drv_mutex); + drv_data.last_dev = NULL; cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info); + return 0; } @@ -245,18 +304,20 @@ static void __exit chcr_crypto_exit(void) { struct uld_ctx *u_ctx, *tmp; - if (atomic_read(&dev_count)) - stop_crypto(); + stop_crypto(); + cxgb4_unregister_uld(CXGB4_ULD_CRYPTO); /* Remove all devices from list */ - mutex_lock(&dev_mutex); - list_for_each_entry_safe(u_ctx, tmp, &uld_ctx_list, entry) { - if (u_ctx->dev) - chcr_dev_remove(u_ctx); + mutex_lock(&drv_data.drv_mutex); + list_for_each_entry_safe(u_ctx, tmp, &drv_data.act_dev, entry) { + list_del(&u_ctx->entry); kfree(u_ctx); } - mutex_unlock(&dev_mutex); - cxgb4_unregister_uld(CXGB4_ULD_CRYPTO); + list_for_each_entry_safe(u_ctx, tmp, &drv_data.inact_dev, entry) { + list_del(&u_ctx->entry); + kfree(u_ctx); + } + mutex_unlock(&drv_data.drv_mutex); } module_init(chcr_crypto_init); diff --git a/drivers/crypto/chelsio/chcr_core.h b/drivers/crypto/chelsio/chcr_core.h index de3a9c085daf..1159dee964ed 100644 --- a/drivers/crypto/chelsio/chcr_core.h +++ b/drivers/crypto/chelsio/chcr_core.h @@ -47,7 +47,7 @@ #define MAX_PENDING_REQ_TO_HW 20 #define
CHCR_TEST_RESPONSE_TIMEOUT 1000 - +#define WQ_DETACH_TM (msecs_to_jiffies(50)) #define PAD_ERROR_BIT 1 #define CHK_PAD_ERR_BIT(x) (((x) >> PAD_ERROR_BIT) & 1) @@ -61,9 +61,6 @@ #define HASH_WR_MIN_LEN (sizeof(struct chcr_wr) + \ DUMMY_BYTES + \ sizeof(struct ulptx_sgl)) - -#define padap(dev) pci_get_drvdata(dev->u_ctx->lldi.pdev) - struct uld_ctx; struct _key_ctx { @@ -121,6 +118,20 @@ struct _key_ctx { #define KEYCTX_TX_WR_AUTHIN_G(x) \ (((x) >> KEYCTX_TX_WR_AUTHIN_S) & KEYCTX_TX_WR_AUTHIN_M) +#define WQ_RETRY 5 +struct chcr_driver_data { + struct list_head act_dev; + struct list_head inact_dev; + atomic_t dev_count; + struct mutex drv_mutex; + struct uld_ctx *last_dev; +}; + +enum chcr_state { + CHCR_INIT = 0, + CHCR_ATTACH, + CHCR_DETACH, +}; struct chcr_wr { struct fw_crypto_lookaside_wr wreq; struct ulp_txpkt ulptx; @@ -131,15 +142,18 @@ struct chcr_wr { struct chcr_dev { spinlock_t lock_chcr_dev; - struct uld_ctx *u_ctx; + enum chcr_state state; + atomic_t inflight; + int wqretry; + struct delayed_work detach_work; + struct completion detach_comp; unsigned char tx_channel_id; - unsigned char rx_channel_id; }; struct uld_ctx { struct list_head entry; struct cxgb4_lld_info lldi; - struct chcr_dev *dev; + struct chcr_dev dev; }; struct sge_opaque_hdr { @@ -159,8 +173,17 @@ struct chcr_ipsec_wr { struct chcr_ipsec_req req; }; +#define ESN_IV_INSERT_OFFSET 12 +struct chcr_ipsec_aadiv { + __be32 spi; + u8 seq_no[8]; + u8 iv[8]; +}; + struct ipsec_sa_entry { int hmac_ctrl; + u16 esn; + u16 imm; unsigned int enckey_len; unsigned int kctx_len; unsigned int authsize; @@ -181,6 +204,13 @@ static inline unsigned int sgl_len(unsigned int n) return (3 * n) / 2 + (n & 1) + 2; } +static inline void *padap(struct chcr_dev *dev) +{ + struct uld_ctx *u_ctx = container_of(dev, struct uld_ctx, dev); + + return pci_get_drvdata(u_ctx->lldi.pdev); +} + struct uld_ctx *assign_chcr_device(void); int chcr_send_wr(struct sk_buff *skb); int start_crypto(void); diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h index d37ef41f9ebe..655606f2e4d0 100644 --- a/drivers/crypto/chelsio/chcr_crypto.h +++ b/drivers/crypto/chelsio/chcr_crypto.h @@ -41,7 +41,8 @@ #define CCM_B0_SIZE 16 #define CCM_AAD_FIELD_SIZE 2 -#define T6_MAX_AAD_SIZE 511 +// 511 - 16(For IV) +#define T6_MAX_AAD_SIZE 495 /* Define following if h/w is not dropping the AAD and IV data before @@ -185,9 +186,6 @@ struct chcr_aead_reqctx { dma_addr_t b0_dma; unsigned int b0_len; unsigned int op; - short int aad_nents; - short int src_nents; - short int dst_nents; u16 imm; u16 verify; u8 iv[CHCR_MAX_CRYPTO_IV_LEN + MAX_SCRATCH_PAD_SIZE]; @@ -322,10 +320,8 @@ void chcr_aead_dma_unmap(struct device *dev, struct aead_request *req, unsigned short op_type); void chcr_add_aead_dst_ent(struct aead_request *req, struct cpl_rx_phys_dsgl *phys_cpl, - unsigned int assoclen, unsigned short qid); -void chcr_add_aead_src_ent(struct aead_request *req, struct ulptx_sgl *ulptx, - unsigned int assoclen); +void chcr_add_aead_src_ent(struct aead_request *req, struct ulptx_sgl *ulptx); void chcr_add_cipher_src_ent(struct ablkcipher_request *req, void *ulptx, struct cipher_wr_param *wrparam); diff --git a/drivers/crypto/chelsio/chcr_ipsec.c b/drivers/crypto/chelsio/chcr_ipsec.c index 461b97e2f1fd..2fb48cce4462 100644 --- a/drivers/crypto/chelsio/chcr_ipsec.c +++ b/drivers/crypto/chelsio/chcr_ipsec.c @@ -76,12 +76,14 @@ static int chcr_xfrm_add_state(struct xfrm_state *x); static void chcr_xfrm_del_state(struct xfrm_state *x); static void 
chcr_xfrm_free_state(struct xfrm_state *x); static bool chcr_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *x); +static void chcr_advance_esn_state(struct xfrm_state *x); static const struct xfrmdev_ops chcr_xfrmdev_ops = { .xdo_dev_state_add = chcr_xfrm_add_state, .xdo_dev_state_delete = chcr_xfrm_del_state, .xdo_dev_state_free = chcr_xfrm_free_state, .xdo_dev_offload_ok = chcr_ipsec_offload_ok, + .xdo_dev_state_advance_esn = chcr_advance_esn_state, }; /* Add offload xfrms to Chelsio Interface */ @@ -210,10 +212,6 @@ static int chcr_xfrm_add_state(struct xfrm_state *x) pr_debug("CHCR: Cannot offload compressed xfrm states\n"); return -EINVAL; } - if (x->props.flags & XFRM_STATE_ESN) { - pr_debug("CHCR: Cannot offload ESN xfrm states\n"); - return -EINVAL; - } if (x->props.family != AF_INET && x->props.family != AF_INET6) { pr_debug("CHCR: Only IPv4/6 xfrm state offloaded\n"); @@ -266,6 +264,8 @@ static int chcr_xfrm_add_state(struct xfrm_state *x) } sa_entry->hmac_ctrl = chcr_ipsec_setauthsize(x, sa_entry); + if (x->props.flags & XFRM_STATE_ESN) + sa_entry->esn = 1; chcr_ipsec_setkey(x, sa_entry); x->xso.offload_handle = (unsigned long)sa_entry; try_module_get(THIS_MODULE); @@ -294,28 +294,57 @@ static void chcr_xfrm_free_state(struct xfrm_state *x) static bool chcr_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *x) { - /* Offload with IP options is not supported yet */ - if (ip_hdr(skb)->ihl > 5) - return false; - + if (x->props.family == AF_INET) { + /* Offload with IP options is not supported yet */ + if (ip_hdr(skb)->ihl > 5) + return false; + } else { + /* Offload with IPv6 extension headers is not supported yet */ + if (ipv6_ext_hdr(ipv6_hdr(skb)->nexthdr)) + return false; + } return true; } -static inline int is_eth_imm(const struct sk_buff *skb, unsigned int kctx_len) +static void chcr_advance_esn_state(struct xfrm_state *x) { - int hdrlen = sizeof(struct chcr_ipsec_req) + kctx_len; + /* do nothing */ + if (!x->xso.offload_handle) + return; +} + +static inline int is_eth_imm(const struct sk_buff *skb, + struct ipsec_sa_entry *sa_entry) +{ + unsigned int kctx_len; + int hdrlen; + + kctx_len = sa_entry->kctx_len; + hdrlen = sizeof(struct fw_ulptx_wr) + + sizeof(struct chcr_ipsec_req) + kctx_len; hdrlen += sizeof(struct cpl_tx_pkt); + if (sa_entry->esn) + hdrlen += (DIV_ROUND_UP(sizeof(struct chcr_ipsec_aadiv), 16) + << 4); if (skb->len <= MAX_IMM_TX_PKT_LEN - hdrlen) return hdrlen; return 0; } static inline unsigned int calc_tx_sec_flits(const struct sk_buff *skb, - unsigned int kctx_len) + struct ipsec_sa_entry *sa_entry) { + unsigned int kctx_len; unsigned int flits; - int hdrlen = is_eth_imm(skb, kctx_len); + int aadivlen; + int hdrlen; + + kctx_len = sa_entry->kctx_len; + hdrlen = is_eth_imm(skb, sa_entry); + aadivlen = sa_entry->esn ? DIV_ROUND_UP(sizeof(struct chcr_ipsec_aadiv), + 16) : 0; + aadivlen <<= 4; /* If the skb is small enough, we can pump it out as a work request * with only immediate data.
In that case we just have to have the @@ -338,13 +367,69 @@ static inline unsigned int calc_tx_sec_flits(const struct sk_buff *skb, flits += (sizeof(struct fw_ulptx_wr) + sizeof(struct chcr_ipsec_req) + kctx_len + - sizeof(struct cpl_tx_pkt_core)) / sizeof(__be64); + sizeof(struct cpl_tx_pkt_core) + + aadivlen) / sizeof(__be64); return flits; } +inline void *copy_esn_pktxt(struct sk_buff *skb, + struct net_device *dev, + void *pos, + struct ipsec_sa_entry *sa_entry) +{ + struct chcr_ipsec_aadiv *aadiv; + struct ulptx_idata *sc_imm; + struct ip_esp_hdr *esphdr; + struct xfrm_offload *xo; + struct sge_eth_txq *q; + struct adapter *adap; + struct port_info *pi; + __be64 seqno; + u32 qidx; + u32 seqlo; + u8 *iv; + int eoq; + int len; + + pi = netdev_priv(dev); + adap = pi->adapter; + qidx = skb->queue_mapping; + q = &adap->sge.ethtxq[qidx + pi->first_qset]; + + /* end of queue, reset pos to start of queue */ + eoq = (void *)q->q.stat - pos; + if (!eoq) + pos = q->q.desc; + + len = DIV_ROUND_UP(sizeof(struct chcr_ipsec_aadiv), 16) << 4; + memset(pos, 0, len); + aadiv = (struct chcr_ipsec_aadiv *)pos; + esphdr = (struct ip_esp_hdr *)skb_transport_header(skb); + iv = skb_transport_header(skb) + sizeof(struct ip_esp_hdr); + xo = xfrm_offload(skb); + + aadiv->spi = (esphdr->spi); + seqlo = htonl(esphdr->seq_no); + seqno = cpu_to_be64(seqlo + ((u64)xo->seq.hi << 32)); + memcpy(aadiv->seq_no, &seqno, 8); + iv = skb_transport_header(skb) + sizeof(struct ip_esp_hdr); + memcpy(aadiv->iv, iv, 8); + + if (sa_entry->imm) { + sc_imm = (struct ulptx_idata *)(pos + + (DIV_ROUND_UP(sizeof(struct chcr_ipsec_aadiv), + sizeof(__be64)) << 3)); + sc_imm->cmd_more = FILL_CMD_MORE(!sa_entry->imm); + sc_imm->len = cpu_to_be32(sa_entry->imm); + } + pos += len; + return pos; +} + inline void *copy_cpltx_pktxt(struct sk_buff *skb, - struct net_device *dev, - void *pos) + struct net_device *dev, + void *pos, + struct ipsec_sa_entry *sa_entry) { struct cpl_tx_pkt_core *cpl; struct sge_eth_txq *q; @@ -379,6 +464,9 @@ inline void *copy_cpltx_pktxt(struct sk_buff *skb, cpl->ctrl1 = cpu_to_be64(cntrl); pos += sizeof(struct cpl_tx_pkt_core); + /* Copy ESN info for HW */ + if (sa_entry->esn) + pos = copy_esn_pktxt(skb, dev, pos, sa_entry); return pos; } @@ -425,7 +513,7 @@ inline void *copy_key_cpltx_pktxt(struct sk_buff *skb, pos = (u8 *)q->q.desc + (key_len - left); } /* Copy CPL TX PKT XT */ - pos = copy_cpltx_pktxt(skb, dev, pos); + pos = copy_cpltx_pktxt(skb, dev, pos, sa_entry); return pos; } @@ -438,10 +526,16 @@ inline void *chcr_crypto_wreq(struct sk_buff *skb, { struct port_info *pi = netdev_priv(dev); struct adapter *adap = pi->adapter; - unsigned int immdatalen = 0; unsigned int ivsize = GCM_ESP_IV_SIZE; struct chcr_ipsec_wr *wr; + u16 immdatalen = 0; unsigned int flits; + u32 ivinoffset; + u32 aadstart; + u32 aadstop; + u32 ciphstart; + u32 ivdrop = 0; + u32 esnlen = 0; u32 wr_mid; int qidx = skb_get_queue_mapping(skb); struct sge_eth_txq *q = &adap->sge.ethtxq[qidx + pi->first_qset]; @@ -450,10 +544,17 @@ inline void *chcr_crypto_wreq(struct sk_buff *skb, atomic_inc(&adap->chcr_stats.ipsec_cnt); - flits = calc_tx_sec_flits(skb, kctx_len); + flits = calc_tx_sec_flits(skb, sa_entry); + if (sa_entry->esn) + ivdrop = 1; - if (is_eth_imm(skb, kctx_len)) + if (is_eth_imm(skb, sa_entry)) { immdatalen = skb->len; + sa_entry->imm = immdatalen; + } + + if (sa_entry->esn) + esnlen = sizeof(struct chcr_ipsec_aadiv); /* WR Header */ wr = (struct chcr_ipsec_wr *)pos; @@ -478,33 +579,38 @@ inline void *chcr_crypto_wreq(struct 
sk_buff *skb, sizeof(wr->req.key_ctx) + kctx_len + sizeof(struct cpl_tx_pkt_core) + - immdatalen); + esnlen + + (esnlen ? 0 : immdatalen)); /* CPL_SEC_PDU */ + ivinoffset = sa_entry->esn ? (ESN_IV_INSERT_OFFSET + 1) : + (skb_transport_offset(skb) + + sizeof(struct ip_esp_hdr) + 1); wr->req.sec_cpl.op_ivinsrtofst = htonl( CPL_TX_SEC_PDU_OPCODE_V(CPL_TX_SEC_PDU) | CPL_TX_SEC_PDU_CPLLEN_V(2) | CPL_TX_SEC_PDU_PLACEHOLDER_V(1) | CPL_TX_SEC_PDU_IVINSRTOFST_V( - (skb_transport_offset(skb) + - sizeof(struct ip_esp_hdr) + 1))); + ivinoffset)); - wr->req.sec_cpl.pldlen = htonl(skb->len); + wr->req.sec_cpl.pldlen = htonl(skb->len + esnlen); + aadstart = sa_entry->esn ? 1 : (skb_transport_offset(skb) + 1); + aadstop = sa_entry->esn ? ESN_IV_INSERT_OFFSET : + (skb_transport_offset(skb) + + sizeof(struct ip_esp_hdr)); + ciphstart = skb_transport_offset(skb) + sizeof(struct ip_esp_hdr) + + GCM_ESP_IV_SIZE + 1; + ciphstart += sa_entry->esn ? esnlen : 0; wr->req.sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI( - (skb_transport_offset(skb) + 1), - (skb_transport_offset(skb) + - sizeof(struct ip_esp_hdr)), - (skb_transport_offset(skb) + - sizeof(struct ip_esp_hdr) + - GCM_ESP_IV_SIZE + 1), 0); + aadstart, + aadstop, + ciphstart, 0); wr->req.sec_cpl.cipherstop_lo_authinsert = - FILL_SEC_CPL_AUTHINSERT(0, skb_transport_offset(skb) + - sizeof(struct ip_esp_hdr) + - GCM_ESP_IV_SIZE + 1, - sa_entry->authsize, - sa_entry->authsize); + FILL_SEC_CPL_AUTHINSERT(0, ciphstart, + sa_entry->authsize, + sa_entry->authsize); wr->req.sec_cpl.seqno_numivs = FILL_SEC_CPL_SCMD0_SEQNO(CHCR_ENCRYPT_OP, 1, CHCR_SCMD_CIPHER_MODE_AES_GCM, @@ -512,7 +618,7 @@ inline void *chcr_crypto_wreq(struct sk_buff *skb, sa_entry->hmac_ctrl, ivsize >> 1); wr->req.sec_cpl.ivgen_hdrlen = FILL_SEC_CPL_IVGEN_HDRLEN(0, 0, 1, - 0, 0, 0); + 0, ivdrop, 0); pos += sizeof(struct fw_ulptx_wr) + sizeof(struct ulp_txpkt) + @@ -565,20 +671,21 @@ int chcr_ipsec_xmit(struct sk_buff *skb, struct net_device *dev) struct ipsec_sa_entry *sa_entry; u64 *pos, *end, *before, *sgl; int qidx, left, credits; - unsigned int flits = 0, ndesc, kctx_len; + unsigned int flits = 0, ndesc; struct adapter *adap; struct sge_eth_txq *q; struct port_info *pi; dma_addr_t addr[MAX_SKB_FRAGS + 1]; + struct sec_path *sp; bool immediate = false; if (!x->xso.offload_handle) return NETDEV_TX_BUSY; sa_entry = (struct ipsec_sa_entry *)x->xso.offload_handle; - kctx_len = sa_entry->kctx_len; - if (skb->sp->len != 1) { + sp = skb_sec_path(skb); + if (sp->len != 1) { out_free: dev_kfree_skb_any(skb); return NETDEV_TX_OK; } @@ -590,7 +697,7 @@ out_free: dev_kfree_skb_any(skb); cxgb4_reclaim_completed_tx(adap, &q->q, true); - flits = calc_tx_sec_flits(skb, sa_entry->kctx_len); + flits = calc_tx_sec_flits(skb, sa_entry); ndesc = flits_to_desc(flits); credits = txq_avail(&q->q) - ndesc; @@ -603,7 +710,7 @@ out_free: dev_kfree_skb_any(skb); return NETDEV_TX_BUSY; } - if (is_eth_imm(skb, kctx_len)) + if (is_eth_imm(skb, sa_entry)) immediate = true; if (!immediate && diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c index 931b96c220af..59b75299fcbc 100644 --- a/drivers/crypto/chelsio/chtls/chtls_cm.c +++ b/drivers/crypto/chelsio/chtls/chtls_cm.c @@ -1079,8 +1079,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk, csk->txq_idx = (rxq_idx < cdev->lldi->ntxq) ? 
rxq_idx : port_id * step; csk->sndbuf = newsk->sk_sndbuf; - csk->smac_idx = cxgb4_tp_smt_idx(cdev->lldi->adapter_type, - cxgb4_port_viid(ndev)); + csk->smac_idx = ((struct port_info *)netdev_priv(ndev))->smt_idx; RCV_WSCALE(tp) = select_rcv_wscale(tcp_full_space(newsk), sock_net(newsk)-> ipv4.sysctl_tcp_window_scaling, diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c index eb2a0a73cbed..b4c24a35b3d0 100644 --- a/drivers/crypto/geode-aes.c +++ b/drivers/crypto/geode-aes.c @@ -261,7 +261,7 @@ static int fallback_init_cip(struct crypto_tfm *tfm) struct geode_aes_op *op = crypto_tfm_ctx(tfm); op->fallback.cip = crypto_alloc_cipher(name, 0, - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(op->fallback.cip)) { printk(KERN_ERR "Error allocating fallback algo %s\n", name); diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c index 3aef1d43e435..d531c14020dc 100644 --- a/drivers/crypto/inside-secure/safexcel_cipher.c +++ b/drivers/crypto/inside-secure/safexcel_cipher.c @@ -970,7 +970,7 @@ struct safexcel_alg_template safexcel_alg_cbc_des = { .cra_name = "cbc(des)", .cra_driver_name = "safexcel-cbc-des", .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC | + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = DES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct safexcel_cipher_ctx), @@ -1010,7 +1010,7 @@ struct safexcel_alg_template safexcel_alg_ecb_des = { .cra_name = "ecb(des)", .cra_driver_name = "safexcel-ecb-des", .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC | + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = DES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct safexcel_cipher_ctx), @@ -1074,7 +1074,7 @@ struct safexcel_alg_template safexcel_alg_cbc_des3_ede = { .cra_name = "cbc(des3_ede)", .cra_driver_name = "safexcel-cbc-des3_ede", .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC | + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = DES3_EDE_BLOCK_SIZE, .cra_ctxsize = sizeof(struct safexcel_cipher_ctx), @@ -1114,7 +1114,7 @@ struct safexcel_alg_template safexcel_alg_ecb_des3_ede = { .cra_name = "ecb(des3_ede)", .cra_driver_name = "safexcel-ecb-des3_ede", .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC | + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = DES3_EDE_BLOCK_SIZE, .cra_ctxsize = sizeof(struct safexcel_cipher_ctx), diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c index 27f7dad2d45d..19fba998b86b 100644 --- a/drivers/crypto/ixp4xx_crypto.c +++ b/drivers/crypto/ixp4xx_crypto.c @@ -1194,7 +1194,6 @@ static struct ixp_alg ixp4xx_algos[] = { .min_keysize = DES_KEY_SIZE, .max_keysize = DES_KEY_SIZE, .ivsize = DES_BLOCK_SIZE, - .geniv = "eseqiv", } } }, @@ -1221,7 +1220,6 @@ static struct ixp_alg ixp4xx_algos[] = { .min_keysize = DES3_EDE_KEY_SIZE, .max_keysize = DES3_EDE_KEY_SIZE, .ivsize = DES3_EDE_BLOCK_SIZE, - .geniv = "eseqiv", } } }, @@ -1247,7 +1245,6 @@ static struct ixp_alg ixp4xx_algos[] = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, - .geniv = "eseqiv", } } }, @@ -1273,7 +1270,6 @@ static struct ixp_alg ixp4xx_algos[] = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, - .geniv = "eseqiv", } } }, @@ -1287,7 +1283,6 @@ static 
struct ixp_alg ixp4xx_algos[] = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, - .geniv = "eseqiv", .setkey = ablk_rfc3686_setkey, .encrypt = ablk_rfc3686_crypt, .decrypt = ablk_rfc3686_crypt } diff --git a/drivers/crypto/mxc-scc.c b/drivers/crypto/mxc-scc.c index e01c46387df8..519086730791 100644 --- a/drivers/crypto/mxc-scc.c +++ b/drivers/crypto/mxc-scc.c @@ -178,12 +178,12 @@ static int mxc_scc_get_data(struct mxc_scc_ctx *ctx, else from = scc->black_memory; - dev_dbg(scc->dev, "pcopy: from 0x%p %d bytes\n", from, + dev_dbg(scc->dev, "pcopy: from 0x%p %zu bytes\n", from, ctx->dst_nents * 8); len = sg_pcopy_from_buffer(ablkreq->dst, ctx->dst_nents, from, ctx->size, ctx->offset); if (!len) { - dev_err(scc->dev, "pcopy err from 0x%p (len=%d)\n", from, len); + dev_err(scc->dev, "pcopy err from 0x%p (len=%zu)\n", from, len); return -EINVAL; } @@ -274,7 +274,7 @@ static int mxc_scc_put_data(struct mxc_scc_ctx *ctx, len = sg_pcopy_to_buffer(req->src, ctx->src_nents, to, len, ctx->offset); if (!len) { - dev_err(scc->dev, "pcopy err to 0x%p (len=%d)\n", to, len); + dev_err(scc->dev, "pcopy err to 0x%p (len=%zu)\n", to, len); return -EINVAL; } @@ -335,9 +335,9 @@ static void mxc_scc_ablkcipher_next(struct mxc_scc_ctx *ctx, return; } - dev_dbg(scc->dev, "Start encryption (0x%p/0x%p)\n", - (void *)readl(scc->base + SCC_SCM_RED_START), - (void *)readl(scc->base + SCC_SCM_BLACK_START)); + dev_dbg(scc->dev, "Start encryption (0x%x/0x%x)\n", + readl(scc->base + SCC_SCM_RED_START), + readl(scc->base + SCC_SCM_BLACK_START)); /* clear interrupt control registers */ writel(SCC_SCM_INTR_CTRL_CLR_INTR, diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index 4e6ff32f8a7e..a2105cf33abb 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include @@ -82,6 +83,7 @@ struct dcp { spinlock_t lock[DCP_MAX_CHANS]; struct task_struct *thread[DCP_MAX_CHANS]; struct crypto_queue queue[DCP_MAX_CHANS]; + struct clk *dcp_clk; }; enum dcp_chan { @@ -1053,11 +1055,24 @@ static int mxs_dcp_probe(struct platform_device *pdev) /* Re-align the structure so it fits the DCP constraints. */ sdcp->coh = PTR_ALIGN(sdcp->coh, DCP_ALIGNMENT); - /* Restart the DCP block. */ - ret = stmp_reset_block(sdcp->base); + /* DCP clock is optional, only used on some SOCs */ + sdcp->dcp_clk = devm_clk_get(dev, "dcp"); + if (IS_ERR(sdcp->dcp_clk)) { + if (sdcp->dcp_clk != ERR_PTR(-ENOENT)) + return PTR_ERR(sdcp->dcp_clk); + sdcp->dcp_clk = NULL; + } + ret = clk_prepare_enable(sdcp->dcp_clk); if (ret) return ret; + /* Restart the DCP block. */ + ret = stmp_reset_block(sdcp->base); + if (ret) { + dev_err(dev, "Failed reset\n"); + goto err_disable_unprepare_clk; + } + /* Initialize control register. 
*/ writel(MXS_DCP_CTRL_GATHER_RESIDUAL_WRITES | MXS_DCP_CTRL_ENABLE_CONTEXT_CACHING | 0xf, @@ -1094,7 +1109,8 @@ static int mxs_dcp_probe(struct platform_device *pdev) NULL, "mxs_dcp_chan/sha"); if (IS_ERR(sdcp->thread[DCP_CHAN_HASH_SHA])) { dev_err(dev, "Error starting SHA thread!\n"); - return PTR_ERR(sdcp->thread[DCP_CHAN_HASH_SHA]); + ret = PTR_ERR(sdcp->thread[DCP_CHAN_HASH_SHA]); + goto err_disable_unprepare_clk; } sdcp->thread[DCP_CHAN_CRYPTO] = kthread_run(dcp_chan_thread_aes, @@ -1151,6 +1167,10 @@ err_destroy_aes_thread: err_destroy_sha_thread: kthread_stop(sdcp->thread[DCP_CHAN_HASH_SHA]); + +err_disable_unprepare_clk: + clk_disable_unprepare(sdcp->dcp_clk); + return ret; } @@ -1170,6 +1190,8 @@ static int mxs_dcp_remove(struct platform_device *pdev) kthread_stop(sdcp->thread[DCP_CHAN_HASH_SHA]); kthread_stop(sdcp->thread[DCP_CHAN_CRYPTO]); + clk_disable_unprepare(sdcp->dcp_clk); + platform_set_drvdata(pdev, NULL); global_sdcp = NULL; diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c index 898c0a280511..5a26fcd75d2d 100644 --- a/drivers/crypto/nx/nx-aes-ctr.c +++ b/drivers/crypto/nx/nx-aes-ctr.c @@ -159,7 +159,6 @@ struct crypto_alg nx_ctr3686_aes_alg = { .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .ivsize = CTR_RFC3686_IV_SIZE, - .geniv = "seqiv", .setkey = ctr3686_aes_nx_set_key, .encrypt = ctr3686_aes_nx_crypt, .decrypt = ctr3686_aes_nx_crypt, diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c index a553ffddb11b..0120feb2d746 100644 --- a/drivers/crypto/omap-aes.c +++ b/drivers/crypto/omap-aes.c @@ -749,7 +749,6 @@ static struct crypto_alg algs_ctr[] = { .cra_u.ablkcipher = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, - .geniv = "eseqiv", .ivsize = AES_BLOCK_SIZE, .setkey = omap_aes_setkey, .encrypt = omap_aes_ctr_encrypt, @@ -1222,7 +1221,6 @@ static int omap_aes_probe(struct platform_device *pdev) algp = &dd->pdata->algs_info[i].algs_list[j]; pr_debug("reg alg: %s\n", algp->cra_name); - INIT_LIST_HEAD(&algp->cra_list); err = crypto_register_alg(algp); if (err) @@ -1240,7 +1238,6 @@ static int omap_aes_probe(struct platform_device *pdev) algp = &aalg->base; pr_debug("reg alg: %s\n", algp->cra_name); - INIT_LIST_HEAD(&algp->cra_list); err = crypto_register_aead(aalg); if (err) diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c index eb95b0d7f184..6369019219d4 100644 --- a/drivers/crypto/omap-des.c +++ b/drivers/crypto/omap-des.c @@ -1069,7 +1069,6 @@ static int omap_des_probe(struct platform_device *pdev) algp = &dd->pdata->algs_info[i].algs_list[j]; pr_debug("reg alg: %s\n", algp->cra_name); - INIT_LIST_HEAD(&algp->cra_list); err = crypto_register_alg(algp); if (err) diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c index a28f1d18fe01..17068b55fea5 100644 --- a/drivers/crypto/picoxcell_crypto.c +++ b/drivers/crypto/picoxcell_crypto.c @@ -1585,8 +1585,7 @@ static struct spacc_alg l2_engine_algs[] = { .cra_name = "f8(kasumi)", .cra_driver_name = "f8-kasumi-picoxcell", .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_GIVCIPHER | - CRYPTO_ALG_ASYNC | + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = 8, .cra_ctxsize = sizeof(struct spacc_ablk_ctx), diff --git a/drivers/crypto/qce/ablkcipher.c b/drivers/crypto/qce/ablkcipher.c index 585e1cab9ae3..25c13e26d012 100644 --- a/drivers/crypto/qce/ablkcipher.c +++ b/drivers/crypto/qce/ablkcipher.c @@ 
-376,7 +376,6 @@ static int qce_ablkcipher_register_one(const struct qce_ablkcipher_def *def, alg->cra_module = THIS_MODULE; alg->cra_init = qce_ablkcipher_init; alg->cra_exit = qce_ablkcipher_exit; - INIT_LIST_HEAD(&alg->cra_list); INIT_LIST_HEAD(&tmpl->entry); tmpl->crypto_alg_type = CRYPTO_ALG_TYPE_ABLKCIPHER; diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c index d8a5db11b7ea..fc45f5ea6fdd 100644 --- a/drivers/crypto/qce/sha.c +++ b/drivers/crypto/qce/sha.c @@ -508,7 +508,6 @@ static int qce_ahash_register_one(const struct qce_ahash_def *def, base->cra_alignmask = 0; base->cra_module = THIS_MODULE; base->cra_init = qce_ahash_cra_init; - INIT_LIST_HEAD(&base->cra_list); snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c index bbf166a97ad3..8c32a3059b4a 100644 --- a/drivers/crypto/sahara.c +++ b/drivers/crypto/sahara.c @@ -1321,7 +1321,6 @@ static int sahara_register_algs(struct sahara_dev *dev) unsigned int i, j, k, l; for (i = 0; i < ARRAY_SIZE(aes_algs); i++) { - INIT_LIST_HEAD(&aes_algs[i].cra_list); err = crypto_register_alg(&aes_algs[i]); if (err) goto err_aes_algs; diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c index 6988012deca4..45e20707cef8 100644 --- a/drivers/crypto/talitos.c +++ b/drivers/crypto/talitos.c @@ -3155,7 +3155,6 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev, alg->cra_ablkcipher.setkey = ablkcipher_setkey; alg->cra_ablkcipher.encrypt = ablkcipher_encrypt; alg->cra_ablkcipher.decrypt = ablkcipher_decrypt; - alg->cra_ablkcipher.geniv = "eseqiv"; break; case CRYPTO_ALG_TYPE_AEAD: alg = &t_alg->algt.alg.aead.base; diff --git a/drivers/crypto/ux500/cryp/cryp_core.c b/drivers/crypto/ux500/cryp/cryp_core.c index d2663a4e1f5e..a92a66b1ff46 100644 --- a/drivers/crypto/ux500/cryp/cryp_core.c +++ b/drivers/crypto/ux500/cryp/cryp_core.c @@ -556,7 +556,7 @@ static int cryp_set_dma_transfer(struct cryp_ctx *ctx, desc = dmaengine_prep_slave_sg(channel, ctx->device->dma.sg_src, ctx->device->dma.sg_src_len, - direction, DMA_CTRL_ACK); + DMA_MEM_TO_DEV, DMA_CTRL_ACK); break; case DMA_FROM_DEVICE: @@ -580,7 +580,7 @@ static int cryp_set_dma_transfer(struct cryp_ctx *ctx, desc = dmaengine_prep_slave_sg(channel, ctx->device->dma.sg_dst, ctx->device->dma.sg_dst_len, - direction, + DMA_DEV_TO_MEM, DMA_CTRL_ACK | DMA_PREP_INTERRUPT); diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c index 633321a8dd03..a0bb8a6eec3f 100644 --- a/drivers/crypto/ux500/hash/hash_core.c +++ b/drivers/crypto/ux500/hash/hash_core.c @@ -166,7 +166,7 @@ static int hash_set_dma_transfer(struct hash_ctx *ctx, struct scatterlist *sg, __func__); desc = dmaengine_prep_slave_sg(channel, ctx->device->dma.sg, ctx->device->dma.sg_len, - direction, DMA_CTRL_ACK | DMA_PREP_INTERRUPT); + DMA_MEM_TO_DEV, DMA_CTRL_ACK | DMA_PREP_INTERRUPT); if (!desc) { dev_err(ctx->device->dev, "%s: dmaengine_prep_slave_sg() failed!\n", __func__); diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c index 99e2aace8078..2c1f459c0c63 100644 --- a/drivers/dax/pmem.c +++ b/drivers/dax/pmem.c @@ -48,9 +48,8 @@ static void dax_pmem_percpu_exit(void *data) percpu_ref_exit(ref); } -static void dax_pmem_percpu_kill(void *data) +static void dax_pmem_percpu_kill(struct percpu_ref *ref) { - struct percpu_ref *ref = data; struct dax_pmem *dax_pmem = to_dax_pmem(ref); dev_dbg(dax_pmem->dev, "trace\n"); @@ -112,17 +111,10 
@@ static int dax_pmem_probe(struct device *dev) } dax_pmem->pgmap.ref = &dax_pmem->ref; + dax_pmem->pgmap.kill = dax_pmem_percpu_kill; addr = devm_memremap_pages(dev, &dax_pmem->pgmap); - if (IS_ERR(addr)) { - devm_remove_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref); - percpu_ref_exit(&dax_pmem->ref); + if (IS_ERR(addr)) return PTR_ERR(addr); - } - - rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill, - &dax_pmem->ref); - if (rc) - return rc; /* adjust the dax_region resource to the start of data */ memcpy(&res, &dax_pmem->pgmap.res, sizeof(res)); diff --git a/drivers/eisa/Kconfig b/drivers/eisa/Kconfig index 2705284f6223..4570e3bca42c 100644 --- a/drivers/eisa/Kconfig +++ b/drivers/eisa/Kconfig @@ -1,6 +1,26 @@ # # EISA configuration # + +config HAVE_EISA + bool + +menuconfig EISA + bool "EISA support" + depends on HAVE_EISA + ---help--- + The Extended Industry Standard Architecture (EISA) bus was + developed as an open alternative to the IBM MicroChannel bus. + + The EISA bus provided some of the features of the IBM MicroChannel + bus while maintaining backward compatibility with cards made for + the older ISA bus. The EISA bus saw limited use between 1988 and + 1995 when it was made obsolete by the PCI bus. + + Say Y here if you are building a kernel for an EISA-based machine. + + Otherwise, say N. + config EISA_VLB_PRIMING bool "Vesa Local Bus priming" depends on X86 && EISA @@ -53,4 +73,3 @@ config EISA_NAMES names. When in doubt, say Y. - diff --git a/drivers/extcon/extcon-max14577.c b/drivers/extcon/extcon-max14577.c index 22d2feb1f8bc..32f663436e6e 100644 --- a/drivers/extcon/extcon-max14577.c +++ b/drivers/extcon/extcon-max14577.c @@ -657,6 +657,8 @@ static int max14577_muic_probe(struct platform_device *pdev) struct max14577 *max14577 = dev_get_drvdata(pdev->dev.parent); struct max14577_muic_info *info; int delay_jiffies; + int cable_type; + bool attached; int ret; int i; u8 id; @@ -725,8 +727,17 @@ static int max14577_muic_probe(struct platform_device *pdev) info->path_uart = CTRL1_SW_UART; delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT); - /* Set initial path for UART */ - max14577_muic_set_path(info, info->path_uart, true); + /* Set initial path for UART when JIG is connected to get serial logs */ + ret = max14577_bulk_read(info->max14577->regmap, + MAX14577_MUIC_REG_STATUS1, info->status, 2); + if (ret) { + dev_err(info->dev, "Cannot read STATUS registers\n"); + return ret; + } + cable_type = max14577_muic_get_cable_type(info, MAX14577_CABLE_GROUP_ADC, + &attached); + if (attached && cable_type == MAX14577_MUIC_ADC_FACTORY_MODE_UART_OFF) + max14577_muic_set_path(info, info->path_uart, true); /* Check revision number of MUIC device*/ ret = max14577_read_reg(info->max14577->regmap, diff --git a/drivers/extcon/extcon-max77693.c b/drivers/extcon/extcon-max77693.c index a79537ebb671..32fc5a66ffa9 100644 --- a/drivers/extcon/extcon-max77693.c +++ b/drivers/extcon/extcon-max77693.c @@ -1072,6 +1072,8 @@ static int max77693_muic_probe(struct platform_device *pdev) struct max77693_reg_data *init_data; int num_init_data; int delay_jiffies; + int cable_type; + bool attached; int ret; int i; unsigned int id; @@ -1212,8 +1214,18 @@ static int max77693_muic_probe(struct platform_device *pdev) delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT); } - /* Set initial path for UART */ - max77693_muic_set_path(info, info->path_uart, true); + /* Set initial path for UART when JIG is connected to get serial logs */ + ret = regmap_bulk_read(info->max77693->regmap_muic, + 
MAX77693_MUIC_REG_STATUS1, info->status, 2); + if (ret) { + dev_err(info->dev, "failed to read MUIC register\n"); + return ret; + } + cable_type = max77693_muic_get_cable_type(info, + MAX77693_CABLE_GROUP_ADC, &attached); + if (attached && (cable_type == MAX77693_MUIC_ADC_FACTORY_MODE_UART_ON || + cable_type == MAX77693_MUIC_ADC_FACTORY_MODE_UART_OFF)) + max77693_muic_set_path(info, info->path_uart, true); /* Check revision number of MUIC device*/ ret = regmap_read(info->max77693->regmap_muic, diff --git a/drivers/extcon/extcon-max77843.c b/drivers/extcon/extcon-max77843.c index b98cbd0362f5..a343a6ef3506 100644 --- a/drivers/extcon/extcon-max77843.c +++ b/drivers/extcon/extcon-max77843.c @@ -812,6 +812,8 @@ static int max77843_muic_probe(struct platform_device *pdev) struct max77693_dev *max77843 = dev_get_drvdata(pdev->dev.parent); struct max77843_muic_info *info; unsigned int id; + int cable_type; + bool attached; int i, ret; info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL); @@ -856,9 +858,19 @@ static int max77843_muic_probe(struct platform_device *pdev) /* Set ADC debounce time */ max77843_muic_set_debounce_time(info, MAX77843_DEBOUNCE_TIME_25MS); - /* Set initial path for UART */ - max77843_muic_set_path(info, MAX77843_MUIC_CONTROL1_SW_UART, true, - false); + /* Set initial path for UART when JIG is connected to get serial logs */ + ret = regmap_bulk_read(max77843->regmap_muic, + MAX77843_MUIC_REG_STATUS1, info->status, + MAX77843_MUIC_STATUS_NUM); + if (ret) { + dev_err(info->dev, "Cannot read STATUS registers\n"); + goto err_muic_irq; + } + cable_type = max77843_muic_get_cable_type(info, MAX77843_CABLE_GROUP_ADC, + &attached); + if (attached && cable_type == MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF) + max77843_muic_set_path(info, MAX77843_MUIC_CONTROL1_SW_UART, + true, false); /* Check revision number of MUIC device */ ret = regmap_read(max77843->regmap_muic, MAX77843_MUIC_REG_ID, &id); diff --git a/drivers/extcon/extcon-max8997.c b/drivers/extcon/extcon-max8997.c index bdabb2479e0d..172e116ac1ce 100644 --- a/drivers/extcon/extcon-max8997.c +++ b/drivers/extcon/extcon-max8997.c @@ -311,12 +311,10 @@ static int max8997_muic_handle_usb(struct max8997_muic_info *info, { int ret = 0; - if (usb_type == MAX8997_USB_HOST) { - ret = max8997_muic_set_path(info, info->path_usb, attached); - if (ret < 0) { - dev_err(info->dev, "failed to update muic register\n"); - return ret; - } + ret = max8997_muic_set_path(info, info->path_usb, attached); + if (ret < 0) { + dev_err(info->dev, "failed to update muic register\n"); + return ret; } switch (usb_type) { @@ -632,6 +630,8 @@ static int max8997_muic_probe(struct platform_device *pdev) struct max8997_platform_data *pdata = dev_get_platdata(max8997->dev); struct max8997_muic_info *info; int delay_jiffies; + int cable_type; + bool attached; int ret, i; info = devm_kzalloc(&pdev->dev, sizeof(struct max8997_muic_info), @@ -724,8 +724,17 @@ static int max8997_muic_probe(struct platform_device *pdev) delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT); } - /* Set initial path for UART */ - max8997_muic_set_path(info, info->path_uart, true); + /* Set initial path for UART when JIG is connected to get serial logs */ + ret = max8997_bulk_read(info->muic, MAX8997_MUIC_REG_STATUS1, + 2, info->status); + if (ret) { + dev_err(info->dev, "failed to read MUIC register\n"); + return ret; + } + cable_type = max8997_muic_get_cable_type(info, + MAX8997_CABLE_GROUP_ADC, &attached); + if (attached && cable_type == MAX8997_MUIC_ADC_FACTORY_MODE_UART_OFF) + 
max8997_muic_set_path(info, info->path_uart, true); /* Set ADC debounce time */ max8997_muic_set_debounce_time(info, ADC_DEBOUNCE_TIME_25MS); diff --git a/drivers/firewire/sbp2.c b/drivers/firewire/sbp2.c index 6bac03999fd4..09b845e90114 100644 --- a/drivers/firewire/sbp2.c +++ b/drivers/firewire/sbp2.c @@ -1610,7 +1610,6 @@ static struct scsi_host_template scsi_driver_template = { .eh_abort_handler = sbp2_scsi_abort, .this_id = -1, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, .can_queue = 1, .sdev_attrs = sbp2_scsi_sysfs_attrs, }; diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig index 7273e5082b41..f754578414f0 100644 --- a/drivers/firmware/Kconfig +++ b/drivers/firmware/Kconfig @@ -216,6 +216,18 @@ config FW_CFG_SYSFS_CMDLINE WARNING: Using incorrect parameters (base address in particular) may crash your system. +config INTEL_STRATIX10_SERVICE + tristate "Intel Stratix10 Service Layer" + depends on HAVE_ARM_SMCCC + default n + help + Intel Stratix10 service layer runs at privileged exception level, + interfaces with the service providers (FPGA manager is one of them) + and manages secure monitor call to communicate with secure monitor + software at secure monitor exception level. + + Say Y here if you want Stratix10 service layer support. + config QCOM_SCM bool depends on ARM || ARM64 diff --git a/drivers/firmware/Makefile b/drivers/firmware/Makefile index 3158dffd9914..80feb635120f 100644 --- a/drivers/firmware/Makefile +++ b/drivers/firmware/Makefile @@ -12,6 +12,7 @@ obj-$(CONFIG_DMI_SYSFS) += dmi-sysfs.o obj-$(CONFIG_EDD) += edd.o obj-$(CONFIG_EFI_PCDP) += pcdp.o obj-$(CONFIG_DMIID) += dmi-id.o +obj-$(CONFIG_INTEL_STRATIX10_SERVICE) += stratix10-svc.o obj-$(CONFIG_ISCSI_IBFT_FIND) += iscsi_ibft_find.o obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o diff --git a/drivers/firmware/efi/efi-pstore.c b/drivers/firmware/efi/efi-pstore.c index cfe87b465819..0f7d97917197 100644 --- a/drivers/firmware/efi/efi-pstore.c +++ b/drivers/firmware/efi/efi-pstore.c @@ -259,8 +259,7 @@ static int efi_pstore_write(struct pstore_record *record) efi_name[i] = name[i]; ret = efivar_entry_set_safe(efi_name, vendor, PSTORE_EFI_ATTRIBUTES, - !pstore_cannot_block_path(record->reason), - record->size, record->psi->buf); + preemptible(), record->size, record->psi->buf); if (record->reason == KMSG_DUMP_OOPS) efivar_run_worker(); @@ -369,7 +368,6 @@ static __init int efivars_pstore_init(void) return -ENOMEM; efi_pstore_info.bufsize = 1024; - spin_lock_init(&efi_pstore_info.buf_lock); if (pstore_register(&efi_pstore_info)) { kfree(efi_pstore_info.buf); diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c new file mode 100644 index 000000000000..6e6514825ad0 --- /dev/null +++ b/drivers/firmware/stratix10-svc.c @@ -0,0 +1,1041 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2017-2018, Intel Corporation + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/** + * SVC_NUM_DATA_IN_FIFO - number of struct stratix10_svc_data in the FIFO + * + * SVC_NUM_CHANNEL - number of channels supported by the service layer driver + * + * FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS - claim back the submitted buffer(s) + * from the secure world for FPGA manager to reuse, or to free the buffer(s) + * when all bit-stream data has been sent.
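The four extcon changes above share one pattern: instead of unconditionally switching the MUIC to the UART path at probe, the driver first reads the cable-status registers and selects the UART path only when a factory-mode JIG is actually attached. A hedged standalone sketch of that gate (the register read, cable codes and helper names are invented stand-ins):

#include <stdbool.h>
#include <stdio.h>

enum cable { CABLE_NONE, CABLE_USB, CABLE_JIG_UART_OFF };

/* Stand-in for a regmap_bulk_read() of the MUIC status registers. */
static int read_cable_type(enum cable *type, bool *attached)
{
	*type = CABLE_JIG_UART_OFF;	/* pretend a JIG is plugged in */
	*attached = true;
	return 0;
}

static void set_uart_path(void)
{
	printf("MUIC switched to UART path\n");
}

int main(void)
{
	enum cable type;
	bool attached;

	if (read_cable_type(&type, &attached))
		return 1;	/* probe bails out on a read failure */
	/* Only route UART when a JIG is present, so serial logs still
	 * work but a normal boot no longer forces the UART path. */
	if (attached && type == CABLE_JIG_UART_OFF)
		set_uart_path();
	return 0;
}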
+ * + * FPGA_CONFIG_STATUS_TIMEOUT_SEC - poll the FPGA configuration status, + * service layer will return error to FPGA manager when timeout occurs, + * timeout is set to 30 seconds (30 * 1000) at Intel Stratix10 SoC. + */ +#define SVC_NUM_DATA_IN_FIFO 32 +#define SVC_NUM_CHANNEL 2 +#define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS 200 +#define FPGA_CONFIG_STATUS_TIMEOUT_SEC 30 + +typedef void (svc_invoke_fn)(unsigned long, unsigned long, unsigned long, + unsigned long, unsigned long, unsigned long, + unsigned long, unsigned long, + struct arm_smccc_res *); +struct stratix10_svc_chan; + +/** + * struct stratix10_svc_sh_memory - service shared memory structure + * @sync_complete: state for a completion + * @addr: physical address of shared memory block + * @size: size of shared memory block + * @invoke_fn: function to issue secure monitor or hypervisor call + * + * This struct is used to save physical address and size of shared memory + * block. The shared memory block is allocated by secure monitor software + * in the secure world. + * + * Service layer driver uses the physical address and size to create a memory + * pool, then allocates data buffer from that memory pool for service client. + */ +struct stratix10_svc_sh_memory { + struct completion sync_complete; + unsigned long addr; + unsigned long size; + svc_invoke_fn *invoke_fn; +}; + +/** + * struct stratix10_svc_data_mem - service memory structure + * @vaddr: virtual address + * @paddr: physical address + * @size: size of memory + * @node: linked list node + * + * This struct is used in a list that keeps track of buffers which have + * been allocated or freed from the memory pool. Service layer driver also + * uses this struct to translate a physical address to a virtual address. + */ +struct stratix10_svc_data_mem { + void *vaddr; + phys_addr_t paddr; + size_t size; + struct list_head node; +}; + +/** + * struct stratix10_svc_data - service data structure + * @chan: service channel + * @paddr: payload physical address + * @size: payload size + * @command: service command requested by client + * @flag: configuration type (full or partial) + * @arg: args to be passed via registers and not physically mapped buffers + * + * This struct is used in service FIFO for inter-process communication. + */ +struct stratix10_svc_data { + struct stratix10_svc_chan *chan; + phys_addr_t paddr; + size_t size; + u32 command; + u32 flag; + u64 arg[3]; +}; + +/** + * struct stratix10_svc_controller - service controller + * @dev: device + * @chans: array of service channels + * @num_chans: number of channels in 'chans' array + * @num_active_client: number of active service clients + * @node: list management + * @genpool: memory pool pointing to the memory region + * @task: pointer to the thread task which handles SMC or HVC call + * @svc_fifo: a queue for storing service message data + * @complete_status: state for completion + * @svc_fifo_lock: protect access to service message data queue + * @invoke_fn: function to issue secure monitor call or hypervisor call + * + * This struct is used to create communication channels for service clients + * and to handle secure monitor or hypervisor calls.
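struct stratix10_svc_data above is the fixed-size record that client requests travel in: clients push records into a kfifo and the worker kthread pops them. A simplified stand-in for that handoff, using a plain ring buffer in place of kfifo_in_spinlocked()/kfifo_out_spinlocked() (all names here are invented):

#include <stdio.h>

struct svc_msg { unsigned cmd; unsigned long paddr, size; };

#define FIFO_DEPTH 32
static struct svc_msg fifo[FIFO_DEPTH];
static unsigned head, tail;		/* head: insert side, tail: remove side */

static int fifo_in(const struct svc_msg *m)
{
	if (head - tail == FIFO_DEPTH)
		return -1;		/* full: caller sees an -ENOMEM-style error */
	fifo[head++ % FIFO_DEPTH] = *m;
	return 0;
}

static int fifo_out(struct svc_msg *m)
{
	if (head == tail)
		return 0;		/* empty, nothing dequeued */
	*m = fifo[tail++ % FIFO_DEPTH];
	return 1;
}

int main(void)
{
	struct svc_msg in = { .cmd = 1, .paddr = 0x1000, .size = 4096 }, out;

	fifo_in(&in);			/* client side of the channel */
	while (fifo_out(&out))		/* worker-thread side */
		printf("cmd=%u paddr=0x%lx size=%lu\n", out.cmd, out.paddr, out.size);
	return 0;
}

In the driver the enqueue happens in client context and the dequeue in the SMC kthread, which is why the real FIFO is protected by svc_fifo_lock.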
+ */ +struct stratix10_svc_controller { + struct device *dev; + struct stratix10_svc_chan *chans; + int num_chans; + int num_active_client; + struct list_head node; + struct gen_pool *genpool; + struct task_struct *task; + struct kfifo svc_fifo; + struct completion complete_status; + spinlock_t svc_fifo_lock; + svc_invoke_fn *invoke_fn; +}; + +/** + * struct stratix10_svc_chan - service communication channel + * @ctrl: pointer to service controller which is the provider of this channel + * @scl: pointer to service client which owns the channel + * @name: service client name associated with the channel + * @lock: protect access to the channel + * + * This struct is used by a service client to communicate with the service + * layer; each service client has its own channel, created by the service + * controller. + */ +struct stratix10_svc_chan { + struct stratix10_svc_controller *ctrl; + struct stratix10_svc_client *scl; + char *name; + spinlock_t lock; +}; + +static LIST_HEAD(svc_ctrl); +static LIST_HEAD(svc_data_mem); + +/** + * svc_pa_to_va() - translate physical address to virtual address + * @addr: physical address to be translated + * + * Return: valid virtual address or NULL if the provided physical + * address doesn't exist. + */ +static void *svc_pa_to_va(unsigned long addr) +{ + struct stratix10_svc_data_mem *pmem; + + pr_debug("claim back P-addr=0x%016x\n", (unsigned int)addr); + list_for_each_entry(pmem, &svc_data_mem, node) + if (pmem->paddr == addr) + return pmem->vaddr; + + /* physical address is not found */ + return NULL; +} + +/** + * svc_thread_cmd_data_claim() - claim back buffer from the secure world + * @ctrl: pointer to service layer controller + * @p_data: pointer to service data structure + * @cb_data: pointer to callback data structure to service client + * + * Claim back the submitted buffers from the secure world and pass them + * back to the service client (FPGA manager, etc) for reuse. + */ +static void svc_thread_cmd_data_claim(struct stratix10_svc_controller *ctrl, + struct stratix10_svc_data *p_data, + struct stratix10_svc_cb_data *cb_data) +{ + struct arm_smccc_res res; + unsigned long timeout; + + reinit_completion(&ctrl->complete_status); + timeout = msecs_to_jiffies(FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS); + + pr_debug("%s: claim back the submitted buffer\n", __func__); + do { + ctrl->invoke_fn(INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE, + 0, 0, 0, 0, 0, 0, 0, &res); + + if (res.a0 == INTEL_SIP_SMC_STATUS_OK) { + if (!res.a1) { + complete(&ctrl->complete_status); + break; + } + cb_data->status = BIT(SVC_STATUS_RECONFIG_BUFFER_DONE); + cb_data->kaddr1 = svc_pa_to_va(res.a1); + cb_data->kaddr2 = (res.a2) ? + svc_pa_to_va(res.a2) : NULL; + cb_data->kaddr3 = (res.a3) ? + svc_pa_to_va(res.a3) : NULL; + p_data->chan->scl->receive_cb(p_data->chan->scl, + cb_data); + } else { + pr_debug("%s: secure world busy, polling again\n", + __func__); + } + } while (res.a0 == INTEL_SIP_SMC_STATUS_OK || + res.a0 == INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY || + wait_for_completion_timeout(&ctrl->complete_status, timeout)); +} + +/** + * svc_thread_cmd_config_status() - check configuration status + * @ctrl: pointer to service layer controller + * @p_data: pointer to service data structure + * @cb_data: pointer to callback data structure to service client + * + * Check whether the secure firmware in the secure world has finished the FPGA + * configuration, and then inform the FPGA manager of the configuration status.
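svc_pa_to_va() above is a straight list lookup: every pool allocation is remembered on svc_data_mem, so physical addresses handed back by the secure world can be mapped to kernel pointers. A hedged standalone model of that translation (mem_rec and pa_to_va are invented names):

#include <stddef.h>
#include <stdio.h>

struct mem_rec {
	void *vaddr;
	unsigned long paddr;
	struct mem_rec *next;
};

static struct mem_rec *mem_list;	/* one record per live allocation */

static void *pa_to_va(unsigned long paddr)
{
	for (struct mem_rec *r = mem_list; r; r = r->next)
		if (r->paddr == paddr)
			return r->vaddr;
	return NULL;	/* unknown physical address */
}

int main(void)
{
	static char buf[64];
	struct mem_rec rec = { buf, 0xdead000, NULL };

	mem_list = &rec;
	printf("0x%lx -> %p (expect %p)\n", 0xdead000UL, pa_to_va(0xdead000), (void *)buf);
	printf("0x%lx -> %p (expect nil)\n", 0x1UL, pa_to_va(0x1));
	return 0;
}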
+ */ +static void svc_thread_cmd_config_status(struct stratix10_svc_controller *ctrl, + struct stratix10_svc_data *p_data, + struct stratix10_svc_cb_data *cb_data) +{ + struct arm_smccc_res res; + int count_in_sec; + + cb_data->kaddr1 = NULL; + cb_data->kaddr2 = NULL; + cb_data->kaddr3 = NULL; + cb_data->status = BIT(SVC_STATUS_RECONFIG_ERROR); + + pr_debug("%s: polling config status\n", __func__); + + count_in_sec = FPGA_CONFIG_STATUS_TIMEOUT_SEC; + while (count_in_sec) { + ctrl->invoke_fn(INTEL_SIP_SMC_FPGA_CONFIG_ISDONE, + 0, 0, 0, 0, 0, 0, 0, &res); + if ((res.a0 == INTEL_SIP_SMC_STATUS_OK) || + (res.a0 == INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR)) + break; + + /* + * configuration is still in progress, wait one second then + * poll again + */ + msleep(1000); + count_in_sec--; + }; + + if (res.a0 == INTEL_SIP_SMC_STATUS_OK && count_in_sec) + cb_data->status = BIT(SVC_STATUS_RECONFIG_COMPLETED); + + p_data->chan->scl->receive_cb(p_data->chan->scl, cb_data); +} + +/** + * svc_thread_recv_status_ok() - handle the successful status + * @p_data: pointer to service data structure + * @cb_data: pointer to callback data structure to service client + * @res: result from SMC or HVC call + * + * Send back the corresponding status to the service clients. + */ +static void svc_thread_recv_status_ok(struct stratix10_svc_data *p_data, + struct stratix10_svc_cb_data *cb_data, + struct arm_smccc_res res) +{ + cb_data->kaddr1 = NULL; + cb_data->kaddr2 = NULL; + cb_data->kaddr3 = NULL; + + switch (p_data->command) { + case COMMAND_RECONFIG: + cb_data->status = BIT(SVC_STATUS_RECONFIG_REQUEST_OK); + break; + case COMMAND_RECONFIG_DATA_SUBMIT: + cb_data->status = BIT(SVC_STATUS_RECONFIG_BUFFER_SUBMITTED); + break; + case COMMAND_NOOP: + cb_data->status = BIT(SVC_STATUS_RECONFIG_BUFFER_SUBMITTED); + cb_data->kaddr1 = svc_pa_to_va(res.a1); + break; + case COMMAND_RECONFIG_STATUS: + cb_data->status = BIT(SVC_STATUS_RECONFIG_COMPLETED); + break; + case COMMAND_RSU_UPDATE: + cb_data->status = BIT(SVC_STATUS_RSU_OK); + break; + default: + pr_warn("it shouldn't happen\n"); + break; + } + + pr_debug("%s: call receive_cb\n", __func__); + p_data->chan->scl->receive_cb(p_data->chan->scl, cb_data); +} + +/** + * svc_normal_to_secure_thread() - the function to run in the kthread + * @data: data pointer for kthread function + * + * Service layer driver creates the stratix10_svc_smc_hvc_call kthread on CPU + * node 0; its function is used to handle SMC or HVC calls between the kernel + * driver and the secure monitor software. + * + * Return: 0 for success or -ENOMEM on error.
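The status poll in svc_thread_cmd_config_status() above is a bounded retry: query once per second until the firmware reports done or error, or the 30-second budget runs out. A compilable sketch of the same control flow (query_status() is a stand-in for the SMC call):

#include <stdio.h>
#include <unistd.h>

enum fw_status { FW_BUSY, FW_OK, FW_ERROR };

static enum fw_status query_status(int attempt)
{
	return attempt < 2 ? FW_BUSY : FW_OK;	/* pretend: done on the 3rd poll */
}

int main(void)
{
	int budget = 30;			/* seconds, as in the driver */
	enum fw_status st = FW_ERROR;

	for (int attempt = 0; budget; attempt++, budget--) {
		st = query_status(attempt);
		if (st != FW_BUSY)
			break;
		sleep(1);			/* msleep(1000) in the driver */
	}
	/* done only if the firmware said OK before the budget drained */
	printf(st == FW_OK && budget ? "config completed\n" : "config error\n");
	return 0;
}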
+ */ +static int svc_normal_to_secure_thread(void *data) +{ + struct stratix10_svc_controller + *ctrl = (struct stratix10_svc_controller *)data; + struct stratix10_svc_data *pdata; + struct stratix10_svc_cb_data *cbdata; + struct arm_smccc_res res; + unsigned long a0, a1, a2; + int ret_fifo = 0; + + pdata = kmalloc(sizeof(*pdata), GFP_KERNEL); + if (!pdata) + return -ENOMEM; + + cbdata = kmalloc(sizeof(*cbdata), GFP_KERNEL); + if (!cbdata) { + kfree(pdata); + return -ENOMEM; + } + + /* default set, to remove build warning */ + a0 = INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK; + a1 = 0; + a2 = 0; + + pr_debug("smc_hvc_shm_thread is running\n"); + + while (!kthread_should_stop()) { + ret_fifo = kfifo_out_spinlocked(&ctrl->svc_fifo, + pdata, sizeof(*pdata), + &ctrl->svc_fifo_lock); + + if (!ret_fifo) + continue; + + pr_debug("get from FIFO pa=0x%016x, command=%u, size=%u\n", + (unsigned int)pdata->paddr, pdata->command, + (unsigned int)pdata->size); + + switch (pdata->command) { + case COMMAND_RECONFIG_DATA_CLAIM: + svc_thread_cmd_data_claim(ctrl, pdata, cbdata); + continue; + case COMMAND_RECONFIG: + a0 = INTEL_SIP_SMC_FPGA_CONFIG_START; + pr_debug("conf_type=%u\n", (unsigned int)pdata->flag); + a1 = pdata->flag; + a2 = 0; + break; + case COMMAND_RECONFIG_DATA_SUBMIT: + a0 = INTEL_SIP_SMC_FPGA_CONFIG_WRITE; + a1 = (unsigned long)pdata->paddr; + a2 = (unsigned long)pdata->size; + break; + case COMMAND_RECONFIG_STATUS: + a0 = INTEL_SIP_SMC_FPGA_CONFIG_ISDONE; + a1 = 0; + a2 = 0; + break; + case COMMAND_RSU_STATUS: + a0 = INTEL_SIP_SMC_RSU_STATUS; + a1 = 0; + a2 = 0; + break; + case COMMAND_RSU_UPDATE: + a0 = INTEL_SIP_SMC_RSU_UPDATE; + a1 = pdata->arg[0]; + a2 = 0; + break; + default: + pr_warn("it shouldn't happen\n"); + break; + } + pr_debug("%s: before SMC call -- a0=0x%016x a1=0x%016x", + __func__, (unsigned int)a0, (unsigned int)a1); + pr_debug(" a2=0x%016x\n", (unsigned int)a2); + + ctrl->invoke_fn(a0, a1, a2, 0, 0, 0, 0, 0, &res); + + pr_debug("%s: after SMC call -- res.a0=0x%016x", + __func__, (unsigned int)res.a0); + pr_debug(" res.a1=0x%016x, res.a2=0x%016x", + (unsigned int)res.a1, (unsigned int)res.a2); + pr_debug(" res.a3=0x%016x\n", (unsigned int)res.a3); + + if (pdata->command == COMMAND_RSU_STATUS) { + if (res.a0 == INTEL_SIP_SMC_RSU_ERROR) + cbdata->status = BIT(SVC_STATUS_RSU_ERROR); + else + cbdata->status = BIT(SVC_STATUS_RSU_OK); + + cbdata->kaddr1 = &res; + cbdata->kaddr2 = NULL; + cbdata->kaddr3 = NULL; + pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata); + continue; + } + + switch (res.a0) { + case INTEL_SIP_SMC_STATUS_OK: + svc_thread_recv_status_ok(pdata, cbdata, res); + break; + case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY: + switch (pdata->command) { + case COMMAND_RECONFIG_DATA_SUBMIT: + svc_thread_cmd_data_claim(ctrl, + pdata, cbdata); + break; + case COMMAND_RECONFIG_STATUS: + svc_thread_cmd_config_status(ctrl, + pdata, cbdata); + break; + default: + pr_warn("it shouldn't happen\n"); + break; + } + break; + case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_REJECTED: + pr_debug("%s: STATUS_REJECTED\n", __func__); + break; + case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR: + pr_err("%s: STATUS_ERROR\n", __func__); + cbdata->status = BIT(SVC_STATUS_RECONFIG_ERROR); + cbdata->kaddr1 = NULL; + cbdata->kaddr2 = NULL; + cbdata->kaddr3 = NULL; + pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata); + break; + default: + pr_warn("it shouldn't happen\n"); + break; + } + }; + + kfree(cbdata); + kfree(pdata); + + return 0; +} + +/** + * svc_normal_to_secure_shm_thread() - the function to run in 
the kthread
+ * @data: data pointer for kthread function
+ *
+ * Service layer driver creates stratix10_svc_smc_hvc_shm kthread on CPU
+ * node 0. Its thread function, svc_normal_to_secure_shm_thread(), is used
+ * to query the physical address of the memory block reserved by the secure
+ * monitor software at secure world.
+ *
+ * svc_normal_to_secure_shm_thread() calls do_exit() directly since it is a
+ * standalone thread for which no one will call kthread_stop() or return when
+ * 'kthread_should_stop()' is true.
+ */
+static int svc_normal_to_secure_shm_thread(void *data)
+{
+	struct stratix10_svc_sh_memory
+			*sh_mem = (struct stratix10_svc_sh_memory *)data;
+	struct arm_smccc_res res;
+
+	/* SMC or HVC call to get shared memory info from secure world */
+	sh_mem->invoke_fn(INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM,
+			  0, 0, 0, 0, 0, 0, 0, &res);
+	if (res.a0 == INTEL_SIP_SMC_STATUS_OK) {
+		sh_mem->addr = res.a1;
+		sh_mem->size = res.a2;
+	} else {
+		pr_err("%s: after SMC call -- res.a0=0x%016x", __func__,
+		       (unsigned int)res.a0);
+		sh_mem->addr = 0;
+		sh_mem->size = 0;
+	}
+
+	complete(&sh_mem->sync_complete);
+	do_exit(0);
+}
+
+/**
+ * svc_get_sh_memory() - get memory block reserved by secure monitor SW
+ * @pdev: pointer to service layer device
+ * @sh_memory: pointer to service shared memory structure
+ *
+ * Return: zero for successfully getting the physical address of memory block
+ * reserved by secure monitor software, or negative value on error.
+ */
+static int svc_get_sh_memory(struct platform_device *pdev,
+			     struct stratix10_svc_sh_memory *sh_memory)
+{
+	struct device *dev = &pdev->dev;
+	struct task_struct *sh_memory_task;
+	unsigned int cpu = 0;
+
+	init_completion(&sh_memory->sync_complete);
+
+	/* smc or hvc call happens on cpu 0 bound kthread */
+	sh_memory_task = kthread_create_on_node(svc_normal_to_secure_shm_thread,
+						(void *)sh_memory,
+						cpu_to_node(cpu),
+						"svc_smc_hvc_shm_thread");
+	if (IS_ERR(sh_memory_task)) {
+		dev_err(dev, "failed to create svc_smc_hvc_shm_thread\n");
+		return -EINVAL;
+	}
+
+	wake_up_process(sh_memory_task);
+
+	if (!wait_for_completion_timeout(&sh_memory->sync_complete, 10 * HZ)) {
+		dev_err(dev,
+			"timeout getting shared-memory parameters from secure world\n");
+		return -ETIMEDOUT;
+	}
+
+	if (!sh_memory->addr || !sh_memory->size) {
+		dev_err(dev,
+			"failed to get shared memory info from secure world\n");
+		return -ENOMEM;
+	}
+
+	dev_dbg(dev, "SM software provides paddr: 0x%016x, size: 0x%08x\n",
+		(unsigned int)sh_memory->addr,
+		(unsigned int)sh_memory->size);
+
+	return 0;
+}
+
+/**
+ * svc_create_memory_pool() - create a memory pool from reserved memory block
+ * @pdev: pointer to service layer device
+ * @sh_memory: pointer to service shared memory structure
+ *
+ * Return: pool allocated from reserved memory block or ERR_PTR() on error.
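+ *
+ * Illustrative only (an editorial sketch, not part of this patch): the pool
+ * returned here is the one later consumed by stratix10_svc_allocate_memory()
+ * and stratix10_svc_free_memory(), roughly along these lines, where
+ * "genpool" is the pool created by this function:
+ *
+ *   unsigned long va = gen_pool_alloc(genpool, size);
+ *   phys_addr_t pa = gen_pool_virt_to_phys(genpool, va);
+ *   ...
+ *   gen_pool_free(genpool, va, size);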
+ */ +static struct gen_pool * +svc_create_memory_pool(struct platform_device *pdev, + struct stratix10_svc_sh_memory *sh_memory) +{ + struct device *dev = &pdev->dev; + struct gen_pool *genpool; + unsigned long vaddr; + phys_addr_t paddr; + size_t size; + phys_addr_t begin; + phys_addr_t end; + void *va; + size_t page_mask = PAGE_SIZE - 1; + int min_alloc_order = 3; + int ret; + + begin = roundup(sh_memory->addr, PAGE_SIZE); + end = rounddown(sh_memory->addr + sh_memory->size, PAGE_SIZE); + paddr = begin; + size = end - begin; + va = memremap(paddr, size, MEMREMAP_WC); + if (!va) { + dev_err(dev, "fail to remap shared memory\n"); + return ERR_PTR(-EINVAL); + } + vaddr = (unsigned long)va; + dev_dbg(dev, + "reserved memory vaddr: %p, paddr: 0x%16x size: 0x%8x\n", + va, (unsigned int)paddr, (unsigned int)size); + if ((vaddr & page_mask) || (paddr & page_mask) || + (size & page_mask)) { + dev_err(dev, "page is not aligned\n"); + return ERR_PTR(-EINVAL); + } + genpool = gen_pool_create(min_alloc_order, -1); + if (!genpool) { + dev_err(dev, "fail to create genpool\n"); + return ERR_PTR(-ENOMEM); + } + gen_pool_set_algo(genpool, gen_pool_best_fit, NULL); + ret = gen_pool_add_virt(genpool, vaddr, paddr, size, -1); + if (ret) { + dev_err(dev, "fail to add memory chunk to the pool\n"); + gen_pool_destroy(genpool); + return ERR_PTR(ret); + } + + return genpool; +} + +/** + * svc_smccc_smc() - secure monitor call between normal and secure world + * @a0: argument passed in registers 0 + * @a1: argument passed in registers 1 + * @a2: argument passed in registers 2 + * @a3: argument passed in registers 3 + * @a4: argument passed in registers 4 + * @a5: argument passed in registers 5 + * @a6: argument passed in registers 6 + * @a7: argument passed in registers 7 + * @res: result values from register 0 to 3 + */ +static void svc_smccc_smc(unsigned long a0, unsigned long a1, + unsigned long a2, unsigned long a3, + unsigned long a4, unsigned long a5, + unsigned long a6, unsigned long a7, + struct arm_smccc_res *res) +{ + arm_smccc_smc(a0, a1, a2, a3, a4, a5, a6, a7, res); +} + +/** + * svc_smccc_hvc() - hypervisor call between normal and secure world + * @a0: argument passed in registers 0 + * @a1: argument passed in registers 1 + * @a2: argument passed in registers 2 + * @a3: argument passed in registers 3 + * @a4: argument passed in registers 4 + * @a5: argument passed in registers 5 + * @a6: argument passed in registers 6 + * @a7: argument passed in registers 7 + * @res: result values from register 0 to 3 + */ +static void svc_smccc_hvc(unsigned long a0, unsigned long a1, + unsigned long a2, unsigned long a3, + unsigned long a4, unsigned long a5, + unsigned long a6, unsigned long a7, + struct arm_smccc_res *res) +{ + arm_smccc_hvc(a0, a1, a2, a3, a4, a5, a6, a7, res); +} + +/** + * get_invoke_func() - invoke SMC or HVC call + * @dev: pointer to device + * + * Return: function pointer to svc_smccc_smc or svc_smccc_hvc. 
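+ *
+ * For illustration (an editorial sketch, not part of this patch), the
+ * "method" property parsed here would typically come from a devicetree
+ * firmware node such as:
+ *
+ *   firmware {
+ *           svc {
+ *                   compatible = "intel,stratix10-svc";
+ *                   method = "smc";
+ *           };
+ *   };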
+ */ +static svc_invoke_fn *get_invoke_func(struct device *dev) +{ + const char *method; + + if (of_property_read_string(dev->of_node, "method", &method)) { + dev_warn(dev, "missing \"method\" property\n"); + return ERR_PTR(-ENXIO); + } + + if (!strcmp(method, "smc")) + return svc_smccc_smc; + if (!strcmp(method, "hvc")) + return svc_smccc_hvc; + + dev_warn(dev, "invalid \"method\" property: %s\n", method); + + return ERR_PTR(-EINVAL); +} + +/** + * stratix10_svc_request_channel_byname() - request a service channel + * @client: pointer to service client + * @name: service client name + * + * This function is used by service client to request a service channel. + * + * Return: a pointer to channel assigned to the client on success, + * or ERR_PTR() on error. + */ +struct stratix10_svc_chan *stratix10_svc_request_channel_byname( + struct stratix10_svc_client *client, const char *name) +{ + struct device *dev = client->dev; + struct stratix10_svc_controller *controller; + struct stratix10_svc_chan *chan = NULL; + unsigned long flag; + int i; + + /* if probe was called after client's, or error on probe */ + if (list_empty(&svc_ctrl)) + return ERR_PTR(-EPROBE_DEFER); + + controller = list_first_entry(&svc_ctrl, + struct stratix10_svc_controller, node); + for (i = 0; i < SVC_NUM_CHANNEL; i++) { + if (!strcmp(controller->chans[i].name, name)) { + chan = &controller->chans[i]; + break; + } + } + + /* if there was no channel match */ + if (i == SVC_NUM_CHANNEL) { + dev_err(dev, "%s: channel not allocated\n", __func__); + return ERR_PTR(-EINVAL); + } + + if (chan->scl || !try_module_get(controller->dev->driver->owner)) { + dev_dbg(dev, "%s: svc not free\n", __func__); + return ERR_PTR(-EBUSY); + } + + spin_lock_irqsave(&chan->lock, flag); + chan->scl = client; + chan->ctrl->num_active_client++; + spin_unlock_irqrestore(&chan->lock, flag); + + return chan; +} +EXPORT_SYMBOL_GPL(stratix10_svc_request_channel_byname); + +/** + * stratix10_svc_free_channel() - free service channel + * @chan: service channel to be freed + * + * This function is used by service client to free a service channel. + */ +void stratix10_svc_free_channel(struct stratix10_svc_chan *chan) +{ + unsigned long flag; + + spin_lock_irqsave(&chan->lock, flag); + chan->scl = NULL; + chan->ctrl->num_active_client--; + module_put(chan->ctrl->dev->driver->owner); + spin_unlock_irqrestore(&chan->lock, flag); +} +EXPORT_SYMBOL_GPL(stratix10_svc_free_channel); + +/** + * stratix10_svc_send() - send a message data to the remote + * @chan: service channel assigned to the client + * @msg: message data to be sent, in the format of + * "struct stratix10_svc_client_msg" + * + * This function is used by service client to add a message to the service + * layer driver's queue for being sent to the secure world. + * + * Return: 0 for success, -ENOMEM or -ENOBUFS on error. 
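+ *
+ * A minimal, illustrative client-side sketch (editorial, not part of this
+ * patch), mirroring how the Stratix10 FPGA manager below drives this API:
+ *
+ *   struct stratix10_svc_client_msg msg = {
+ *           .command = COMMAND_RECONFIG_STATUS,
+ *           .payload = NULL,
+ *           .payload_length = 0,
+ *   };
+ *   int ret = stratix10_svc_send(chan, &msg);
+ *
+ * The result is delivered asynchronously through the client's receive_cb().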
+ */
+int stratix10_svc_send(struct stratix10_svc_chan *chan, void *msg)
+{
+	struct stratix10_svc_client_msg
+		*p_msg = (struct stratix10_svc_client_msg *)msg;
+	struct stratix10_svc_data_mem *p_mem;
+	struct stratix10_svc_data *p_data;
+	int ret = 0;
+	unsigned int cpu = 0;
+
+	p_data = kzalloc(sizeof(*p_data), GFP_KERNEL);
+	if (!p_data)
+		return -ENOMEM;
+
+	/* first client will create kernel thread */
+	if (!chan->ctrl->task) {
+		chan->ctrl->task =
+			kthread_create_on_node(svc_normal_to_secure_thread,
+					       (void *)chan->ctrl,
+					       cpu_to_node(cpu),
+					       "svc_smc_hvc_thread");
+		if (IS_ERR(chan->ctrl->task)) {
+			dev_err(chan->ctrl->dev,
+				"failed to create svc_smc_hvc_thread\n");
+			kfree(p_data);
+			return -EINVAL;
+		}
+		kthread_bind(chan->ctrl->task, cpu);
+		wake_up_process(chan->ctrl->task);
+	}
+
+	pr_debug("%s: sent P-va=%p, P-com=%x, P-size=%u\n", __func__,
+		 p_msg->payload, p_msg->command,
+		 (unsigned int)p_msg->payload_length);
+
+	if (list_empty(&svc_data_mem)) {
+		if (p_msg->command == COMMAND_RECONFIG) {
+			struct stratix10_svc_command_config_type *ct =
+				(struct stratix10_svc_command_config_type *)
+				p_msg->payload;
+			p_data->flag = ct->flags;
+		}
+	} else {
+		list_for_each_entry(p_mem, &svc_data_mem, node)
+			if (p_mem->vaddr == p_msg->payload) {
+				p_data->paddr = p_mem->paddr;
+				break;
+			}
+	}
+
+	p_data->command = p_msg->command;
+	p_data->arg[0] = p_msg->arg[0];
+	p_data->arg[1] = p_msg->arg[1];
+	p_data->arg[2] = p_msg->arg[2];
+	p_data->size = p_msg->payload_length;
+	p_data->chan = chan;
+	pr_debug("%s: put to FIFO pa=0x%016x, cmd=%x, size=%u\n", __func__,
+		 (unsigned int)p_data->paddr, p_data->command,
+		 (unsigned int)p_data->size);
+	ret = kfifo_in_spinlocked(&chan->ctrl->svc_fifo, p_data,
+				  sizeof(*p_data),
+				  &chan->ctrl->svc_fifo_lock);
+
+	kfree(p_data);
+
+	if (!ret)
+		return -ENOBUFS;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(stratix10_svc_send);
+
+/**
+ * stratix10_svc_done() - complete service request transactions
+ * @chan: service channel assigned to the client
+ *
+ * This function should be called when a client has finished its request
+ * or when there is an error in the request process. It allows the service
+ * layer to stop the running thread, maximizing savings in kernel resources.
+ */
+void stratix10_svc_done(struct stratix10_svc_chan *chan)
+{
+	/* stop thread when thread is running AND only one active client */
+	if (chan->ctrl->task && chan->ctrl->num_active_client <= 1) {
+		pr_debug("svc_smc_hvc_shm_thread is stopped\n");
+		kthread_stop(chan->ctrl->task);
+		chan->ctrl->task = NULL;
+	}
+}
+EXPORT_SYMBOL_GPL(stratix10_svc_done);
+
+/**
+ * stratix10_svc_allocate_memory() - allocate memory
+ * @chan: service channel assigned to the client
+ * @size: memory size requested by a specific service client
+ *
+ * Service layer allocates the requested number of bytes buffer from the
+ * memory pool, service client uses this function to get allocated buffers.
+ *
+ * Return: address of allocated memory on success, or ERR_PTR() on error.
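+ *
+ * Illustrative pairing (an editorial sketch, not part of this patch), as
+ * done by the Stratix10 FPGA manager in its write_init/write paths:
+ *
+ *   char *kbuf = stratix10_svc_allocate_memory(chan, SVC_BUF_SIZE);
+ *   if (IS_ERR(kbuf))
+ *           return PTR_ERR(kbuf);
+ *   ...
+ *   stratix10_svc_free_memory(chan, kbuf);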
+ */ +void *stratix10_svc_allocate_memory(struct stratix10_svc_chan *chan, + size_t size) +{ + struct stratix10_svc_data_mem *pmem; + unsigned long va; + phys_addr_t pa; + struct gen_pool *genpool = chan->ctrl->genpool; + size_t s = roundup(size, 1 << genpool->min_alloc_order); + + pmem = devm_kzalloc(chan->ctrl->dev, sizeof(*pmem), GFP_KERNEL); + if (!pmem) + return ERR_PTR(-ENOMEM); + + va = gen_pool_alloc(genpool, s); + if (!va) + return ERR_PTR(-ENOMEM); + + memset((void *)va, 0, s); + pa = gen_pool_virt_to_phys(genpool, va); + + pmem->vaddr = (void *)va; + pmem->paddr = pa; + pmem->size = s; + list_add_tail(&pmem->node, &svc_data_mem); + pr_debug("%s: va=%p, pa=0x%016x\n", __func__, + pmem->vaddr, (unsigned int)pmem->paddr); + + return (void *)va; +} +EXPORT_SYMBOL_GPL(stratix10_svc_allocate_memory); + +/** + * stratix10_svc_free_memory() - free allocated memory + * @chan: service channel assigned to the client + * @kaddr: memory to be freed + * + * This function is used by service client to free allocated buffers. + */ +void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr) +{ + struct stratix10_svc_data_mem *pmem; + size_t size = 0; + + list_for_each_entry(pmem, &svc_data_mem, node) + if (pmem->vaddr == kaddr) { + size = pmem->size; + break; + } + + gen_pool_free(chan->ctrl->genpool, (unsigned long)kaddr, size); + pmem->vaddr = NULL; + list_del(&pmem->node); +} +EXPORT_SYMBOL_GPL(stratix10_svc_free_memory); + +static const struct of_device_id stratix10_svc_drv_match[] = { + {.compatible = "intel,stratix10-svc"}, + {}, +}; + +static int stratix10_svc_drv_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct stratix10_svc_controller *controller; + struct stratix10_svc_chan *chans; + struct gen_pool *genpool; + struct stratix10_svc_sh_memory *sh_memory; + svc_invoke_fn *invoke_fn; + size_t fifo_size; + int ret; + + /* get SMC or HVC function */ + invoke_fn = get_invoke_func(dev); + if (IS_ERR(invoke_fn)) + return -EINVAL; + + sh_memory = devm_kzalloc(dev, sizeof(*sh_memory), GFP_KERNEL); + if (!sh_memory) + return -ENOMEM; + + sh_memory->invoke_fn = invoke_fn; + ret = svc_get_sh_memory(pdev, sh_memory); + if (ret) + return ret; + + genpool = svc_create_memory_pool(pdev, sh_memory); + if (!genpool) + return -ENOMEM; + + /* allocate service controller and supporting channel */ + controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL); + if (!controller) + return -ENOMEM; + + chans = devm_kmalloc_array(dev, SVC_NUM_CHANNEL, + sizeof(*chans), GFP_KERNEL | __GFP_ZERO); + if (!chans) + return -ENOMEM; + + controller->dev = dev; + controller->num_chans = SVC_NUM_CHANNEL; + controller->num_active_client = 0; + controller->chans = chans; + controller->genpool = genpool; + controller->task = NULL; + controller->invoke_fn = invoke_fn; + init_completion(&controller->complete_status); + + fifo_size = sizeof(struct stratix10_svc_data) * SVC_NUM_DATA_IN_FIFO; + ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL); + if (ret) { + dev_err(dev, "fails to allocate FIFO\n"); + return ret; + } + spin_lock_init(&controller->svc_fifo_lock); + + chans[0].scl = NULL; + chans[0].ctrl = controller; + chans[0].name = SVC_CLIENT_FPGA; + spin_lock_init(&chans[0].lock); + + chans[1].scl = NULL; + chans[1].ctrl = controller; + chans[1].name = SVC_CLIENT_RSU; + spin_lock_init(&chans[1].lock); + + list_add_tail(&controller->node, &svc_ctrl); + platform_set_drvdata(pdev, controller); + + pr_info("Intel Service Layer Driver Initialized\n"); + + return 
ret; +} + +static int stratix10_svc_drv_remove(struct platform_device *pdev) +{ + struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev); + + kfifo_free(&ctrl->svc_fifo); + if (ctrl->task) { + kthread_stop(ctrl->task); + ctrl->task = NULL; + } + if (ctrl->genpool) + gen_pool_destroy(ctrl->genpool); + list_del(&ctrl->node); + + return 0; +} + +static struct platform_driver stratix10_svc_driver = { + .probe = stratix10_svc_drv_probe, + .remove = stratix10_svc_drv_remove, + .driver = { + .name = "stratix10-svc", + .of_match_table = stratix10_svc_drv_match, + }, +}; + +static int __init stratix10_svc_init(void) +{ + struct device_node *fw_np; + struct device_node *np; + int ret; + + fw_np = of_find_node_by_name(NULL, "firmware"); + if (!fw_np) + return -ENODEV; + + np = of_find_matching_node(fw_np, stratix10_svc_drv_match); + if (!np) + return -ENODEV; + + of_node_put(np); + ret = of_platform_populate(fw_np, stratix10_svc_drv_match, NULL, NULL); + if (ret) + return ret; + + return platform_driver_register(&stratix10_svc_driver); +} + +static void __exit stratix10_svc_exit(void) +{ + return platform_driver_unregister(&stratix10_svc_driver); +} + +subsys_initcall(stratix10_svc_init); +module_exit(stratix10_svc_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Intel Stratix10 Service Layer Driver"); +MODULE_AUTHOR("Richard Gong "); +MODULE_ALIAS("platform:stratix10-svc"); diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig index 1ebcef4bab5b..0bb7b5cd6cdc 100644 --- a/drivers/fpga/Kconfig +++ b/drivers/fpga/Kconfig @@ -56,6 +56,12 @@ config FPGA_MGR_ZYNQ_FPGA help FPGA manager driver support for Xilinx Zynq FPGAs. +config FPGA_MGR_STRATIX10_SOC + tristate "Intel Stratix10 SoC FPGA Manager" + depends on (ARCH_STRATIX10 && INTEL_STRATIX10_SERVICE) + help + FPGA manager driver support for the Intel Stratix10 SoC. 
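For orientation (an illustrative fragment added editorially, not part of this patch), enabling this manager together with the service layer it depends on would look something like the following in a kernel configuration, assuming the usual FPGA subsystem symbol:

    CONFIG_ARCH_STRATIX10=y
    CONFIG_INTEL_STRATIX10_SERVICE=y
    CONFIG_FPGA=m
    CONFIG_FPGA_MGR_STRATIX10_SOC=m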
+ config FPGA_MGR_XILINX_SPI tristate "Xilinx Configuration over Slave Serial (SPI)" depends on SPI diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile index 7a2d73ba7122..c0dd4c82fbdb 100644 --- a/drivers/fpga/Makefile +++ b/drivers/fpga/Makefile @@ -13,6 +13,7 @@ obj-$(CONFIG_FPGA_MGR_ICE40_SPI) += ice40-spi.o obj-$(CONFIG_FPGA_MGR_MACHXO2_SPI) += machxo2-spi.o obj-$(CONFIG_FPGA_MGR_SOCFPGA) += socfpga.o obj-$(CONFIG_FPGA_MGR_SOCFPGA_A10) += socfpga-a10.o +obj-$(CONFIG_FPGA_MGR_STRATIX10_SOC) += stratix10-soc.o obj-$(CONFIG_FPGA_MGR_TS73XX) += ts73xx-fpga.o obj-$(CONFIG_FPGA_MGR_XILINX_SPI) += xilinx-spi.o obj-$(CONFIG_FPGA_MGR_ZYNQ_FPGA) += zynq-fpga.o diff --git a/drivers/fpga/altera-cvp.c b/drivers/fpga/altera-cvp.c index 610a1558e0ed..35c3aa5792e2 100644 --- a/drivers/fpga/altera-cvp.c +++ b/drivers/fpga/altera-cvp.c @@ -403,6 +403,7 @@ static int altera_cvp_probe(struct pci_dev *pdev, struct altera_cvp_conf *conf; struct fpga_manager *mgr; u16 cmd, val; + u32 regval; int ret; /* @@ -416,6 +417,14 @@ static int altera_cvp_probe(struct pci_dev *pdev, return -ENODEV; } + pci_read_config_dword(pdev, VSE_CVP_STATUS, ®val); + if (!(regval & VSE_CVP_STATUS_CVP_EN)) { + dev_err(&pdev->dev, + "CVP is disabled for this device: CVP_STATUS Reg 0x%x\n", + regval); + return -ENODEV; + } + conf = devm_kzalloc(&pdev->dev, sizeof(*conf), GFP_KERNEL); if (!conf) return -ENOMEM; @@ -466,18 +475,11 @@ static int altera_cvp_probe(struct pci_dev *pdev, if (ret) goto err_unmap; - ret = driver_create_file(&altera_cvp_driver.driver, - &driver_attr_chkcfg); - if (ret) { - dev_err(&pdev->dev, "Can't create sysfs chkcfg file\n"); - fpga_mgr_unregister(mgr); - goto err_unmap; - } - return 0; err_unmap: - pci_iounmap(pdev, conf->map); + if (conf->map) + pci_iounmap(pdev, conf->map); pci_release_region(pdev, CVP_BAR); err_disable: cmd &= ~PCI_COMMAND_MEMORY; @@ -491,16 +493,39 @@ static void altera_cvp_remove(struct pci_dev *pdev) struct altera_cvp_conf *conf = mgr->priv; u16 cmd; - driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg); fpga_mgr_unregister(mgr); - pci_iounmap(pdev, conf->map); + if (conf->map) + pci_iounmap(pdev, conf->map); pci_release_region(pdev, CVP_BAR); pci_read_config_word(pdev, PCI_COMMAND, &cmd); cmd &= ~PCI_COMMAND_MEMORY; pci_write_config_word(pdev, PCI_COMMAND, cmd); } -module_pci_driver(altera_cvp_driver); +static int __init altera_cvp_init(void) +{ + int ret; + + ret = pci_register_driver(&altera_cvp_driver); + if (ret) + return ret; + + ret = driver_create_file(&altera_cvp_driver.driver, + &driver_attr_chkcfg); + if (ret) + pr_warn("Can't create sysfs chkcfg file\n"); + + return 0; +} + +static void __exit altera_cvp_exit(void) +{ + driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg); + pci_unregister_driver(&altera_cvp_driver); +} + +module_init(altera_cvp_init); +module_exit(altera_cvp_exit); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Anatolij Gustschin "); diff --git a/drivers/fpga/altera-ps-spi.c b/drivers/fpga/altera-ps-spi.c index 33aafda50af5..8c18beec6b57 100644 --- a/drivers/fpga/altera-ps-spi.c +++ b/drivers/fpga/altera-ps-spi.c @@ -75,6 +75,12 @@ static struct altera_ps_data a10_data = { .t_st2ck_us = 10, /* min(t_ST2CK) */ }; +/* Array index is enum altera_ps_devtype */ +static const struct altera_ps_data *altera_ps_data_map[] = { + &c5_data, + &a10_data, +}; + static const struct of_device_id of_ef_match[] = { { .compatible = "altr,fpga-passive-serial", .data = &c5_data }, { .compatible = "altr,fpga-arria10-passive-serial", .data = 
&a10_data }, @@ -234,6 +240,22 @@ static const struct fpga_manager_ops altera_ps_ops = { .write_complete = altera_ps_write_complete, }; +static const struct altera_ps_data *id_to_data(const struct spi_device_id *id) +{ + kernel_ulong_t devtype = id->driver_data; + const struct altera_ps_data *data; + + /* someone added a altera_ps_devtype without adding to the map array */ + if (devtype >= ARRAY_SIZE(altera_ps_data_map)) + return NULL; + + data = altera_ps_data_map[devtype]; + if (!data || data->devtype != devtype) + return NULL; + + return data; +} + static int altera_ps_probe(struct spi_device *spi) { struct altera_ps_conf *conf; @@ -244,11 +266,17 @@ static int altera_ps_probe(struct spi_device *spi) if (!conf) return -ENOMEM; - of_id = of_match_device(of_ef_match, &spi->dev); - if (!of_id) - return -ENODEV; + if (spi->dev.of_node) { + of_id = of_match_device(of_ef_match, &spi->dev); + if (!of_id) + return -ENODEV; + conf->data = of_id->data; + } else { + conf->data = id_to_data(spi_get_device_id(spi)); + if (!conf->data) + return -ENODEV; + } - conf->data = of_id->data; conf->spi = spi; conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_LOW); if (IS_ERR(conf->config)) { @@ -294,7 +322,9 @@ static int altera_ps_remove(struct spi_device *spi) } static const struct spi_device_id altera_ps_spi_ids[] = { - {"cyclone-ps-spi", 0}, + { "cyclone-ps-spi", CYCLONE5 }, + { "fpga-passive-serial", CYCLONE5 }, + { "fpga-arria10-passive-serial", ARRIA10 }, {} }; MODULE_DEVICE_TABLE(spi, altera_ps_spi_ids); diff --git a/drivers/fpga/dfl-fme-pr.c b/drivers/fpga/dfl-fme-pr.c index 0b840531ef33..fe5a5578fbf7 100644 --- a/drivers/fpga/dfl-fme-pr.c +++ b/drivers/fpga/dfl-fme-pr.c @@ -444,10 +444,8 @@ static void pr_mgmt_uinit(struct platform_device *pdev, struct dfl_feature *feature) { struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); - struct dfl_fme *priv; mutex_lock(&pdata->lock); - priv = dfl_fpga_pdata_get_private(pdata); dfl_fme_destroy_regions(pdata); dfl_fme_destroy_bridges(pdata); diff --git a/drivers/fpga/dfl-fme-region.c b/drivers/fpga/dfl-fme-region.c index ec134ec93f08..1eeb42af1012 100644 --- a/drivers/fpga/dfl-fme-region.c +++ b/drivers/fpga/dfl-fme-region.c @@ -64,7 +64,7 @@ eprobe_mgr_put: static int fme_region_remove(struct platform_device *pdev) { - struct fpga_region *region = dev_get_drvdata(&pdev->dev); + struct fpga_region *region = platform_get_drvdata(pdev); struct fpga_manager *mgr = region->mgr; fpga_region_unregister(region); diff --git a/drivers/fpga/of-fpga-region.c b/drivers/fpga/of-fpga-region.c index 122286fd255a..75f64abf9c81 100644 --- a/drivers/fpga/of-fpga-region.c +++ b/drivers/fpga/of-fpga-region.c @@ -421,7 +421,7 @@ static int of_fpga_region_probe(struct platform_device *pdev) goto eprobe_mgr_put; of_platform_populate(np, fpga_region_of_match, NULL, ®ion->dev); - dev_set_drvdata(dev, region); + platform_set_drvdata(pdev, region); dev_info(dev, "FPGA Region probed\n"); diff --git a/drivers/fpga/stratix10-soc.c b/drivers/fpga/stratix10-soc.c new file mode 100644 index 000000000000..a1a09e04fab8 --- /dev/null +++ b/drivers/fpga/stratix10-soc.c @@ -0,0 +1,535 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * FPGA Manager Driver for Intel Stratix10 SoC + * + * Copyright (C) 2018 Intel Corporation + */ +#include +#include +#include +#include +#include +#include + +/* + * FPGA programming requires a higher level of privilege (EL3), per the SoC + * design. 
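+ *
+ * (Clarifying note: the kernel itself runs at a lower exception level, so
+ * this driver never touches the FPGA configuration hardware directly; every
+ * request below is funneled through the Stratix10 service layer, which
+ * issues the corresponding SMC/HVC calls to the secure monitor.)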
+ */ +#define NUM_SVC_BUFS 4 +#define SVC_BUF_SIZE SZ_512K + +/* Indicates buffer is in use if set */ +#define SVC_BUF_LOCK 0 + +#define S10_BUFFER_TIMEOUT (msecs_to_jiffies(SVC_RECONFIG_BUFFER_TIMEOUT_MS)) +#define S10_RECONFIG_TIMEOUT (msecs_to_jiffies(SVC_RECONFIG_REQUEST_TIMEOUT_MS)) + +/* + * struct s10_svc_buf + * buf: virtual address of buf provided by service layer + * lock: locked if buffer is in use + */ +struct s10_svc_buf { + char *buf; + unsigned long lock; +}; + +struct s10_priv { + struct stratix10_svc_chan *chan; + struct stratix10_svc_client client; + struct completion status_return_completion; + struct s10_svc_buf svc_bufs[NUM_SVC_BUFS]; + unsigned long status; +}; + +static int s10_svc_send_msg(struct s10_priv *priv, + enum stratix10_svc_command_code command, + void *payload, u32 payload_length) +{ + struct stratix10_svc_chan *chan = priv->chan; + struct device *dev = priv->client.dev; + struct stratix10_svc_client_msg msg; + int ret; + + dev_dbg(dev, "%s cmd=%d payload=%p length=%d\n", + __func__, command, payload, payload_length); + + msg.command = command; + msg.payload = payload; + msg.payload_length = payload_length; + + ret = stratix10_svc_send(chan, &msg); + dev_dbg(dev, "stratix10_svc_send returned status %d\n", ret); + + return ret; +} + +/* + * Free buffers allocated from the service layer's pool that are not in use. + * Return true when all buffers are freed. + */ +static bool s10_free_buffers(struct fpga_manager *mgr) +{ + struct s10_priv *priv = mgr->priv; + uint num_free = 0; + uint i; + + for (i = 0; i < NUM_SVC_BUFS; i++) { + if (!priv->svc_bufs[i].buf) { + num_free++; + continue; + } + + if (!test_and_set_bit_lock(SVC_BUF_LOCK, + &priv->svc_bufs[i].lock)) { + stratix10_svc_free_memory(priv->chan, + priv->svc_bufs[i].buf); + priv->svc_bufs[i].buf = NULL; + num_free++; + } + } + + return num_free == NUM_SVC_BUFS; +} + +/* + * Returns count of how many buffers are not in use. + */ +static uint s10_free_buffer_count(struct fpga_manager *mgr) +{ + struct s10_priv *priv = mgr->priv; + uint num_free = 0; + uint i; + + for (i = 0; i < NUM_SVC_BUFS; i++) + if (!priv->svc_bufs[i].buf) + num_free++; + + return num_free; +} + +/* + * s10_unlock_bufs + * Given the returned buffer address, match that address to our buffer struct + * and unlock that buffer. This marks it as available to be refilled and sent + * (or freed). + * priv: private data + * kaddr: kernel address of buffer that was returned from service layer + */ +static void s10_unlock_bufs(struct s10_priv *priv, void *kaddr) +{ + uint i; + + if (!kaddr) + return; + + for (i = 0; i < NUM_SVC_BUFS; i++) + if (priv->svc_bufs[i].buf == kaddr) { + clear_bit_unlock(SVC_BUF_LOCK, + &priv->svc_bufs[i].lock); + return; + } + + WARN(1, "Unknown buffer returned from service layer %p\n", kaddr); +} + +/* + * s10_receive_callback - callback for service layer to use to provide client + * (this driver) messages received through the mailbox. + * client: service layer client struct + * data: message from service layer + */ +static void s10_receive_callback(struct stratix10_svc_client *client, + struct stratix10_svc_cb_data *data) +{ + struct s10_priv *priv = client->priv; + u32 status; + int i; + + WARN_ONCE(!data, "%s: stratix10_svc_rc_data = NULL", __func__); + + status = data->status; + + /* + * Here we set status bits as we receive them. 
Elsewhere, we always use
+	 * test_and_clear_bit() to check status in priv->status
+	 */
+	for (i = 0; i <= SVC_STATUS_RECONFIG_ERROR; i++)
+		if (status & (1 << i))
+			set_bit(i, &priv->status);
+
+	if (status & BIT(SVC_STATUS_RECONFIG_BUFFER_DONE)) {
+		s10_unlock_bufs(priv, data->kaddr1);
+		s10_unlock_bufs(priv, data->kaddr2);
+		s10_unlock_bufs(priv, data->kaddr3);
+	}
+
+	complete(&priv->status_return_completion);
+}
+
+/*
+ * s10_ops_write_init - prepare for FPGA reconfiguration by requesting
+ * partial reconfig and allocating buffers from the service layer.
+ */
+static int s10_ops_write_init(struct fpga_manager *mgr,
+			      struct fpga_image_info *info,
+			      const char *buf, size_t count)
+{
+	struct s10_priv *priv = mgr->priv;
+	struct device *dev = priv->client.dev;
+	struct stratix10_svc_command_config_type ctype;
+	char *kbuf;
+	uint i;
+	int ret;
+
+	ctype.flags = 0;
+	if (info->flags & FPGA_MGR_PARTIAL_RECONFIG) {
+		dev_dbg(dev, "Requesting partial reconfiguration.\n");
+		ctype.flags |= BIT(COMMAND_RECONFIG_FLAG_PARTIAL);
+	} else {
+		dev_dbg(dev, "Requesting full reconfiguration.\n");
+	}
+
+	reinit_completion(&priv->status_return_completion);
+	ret = s10_svc_send_msg(priv, COMMAND_RECONFIG,
+			       &ctype, sizeof(ctype));
+	if (ret < 0)
+		goto init_done;
+
+	ret = wait_for_completion_interruptible_timeout(
+		&priv->status_return_completion, S10_RECONFIG_TIMEOUT);
+	if (!ret) {
+		dev_err(dev, "timeout waiting for RECONFIG_REQUEST\n");
+		ret = -ETIMEDOUT;
+		goto init_done;
+	}
+	if (ret < 0) {
+		dev_err(dev, "error (%d) waiting for RECONFIG_REQUEST\n", ret);
+		goto init_done;
+	}
+
+	ret = 0;
+	if (!test_and_clear_bit(SVC_STATUS_RECONFIG_REQUEST_OK,
+				&priv->status)) {
+		ret = -ETIMEDOUT;
+		goto init_done;
+	}
+
+	/* Allocate buffers from the service layer's pool. */
+	for (i = 0; i < NUM_SVC_BUFS; i++) {
+		kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE);
+		/* the allocator returns ERR_PTR(), not NULL, on failure */
+		if (IS_ERR(kbuf)) {
+			s10_free_buffers(mgr);
+			ret = PTR_ERR(kbuf);
+			goto init_done;
+		}
+
+		priv->svc_bufs[i].buf = kbuf;
+		priv->svc_bufs[i].lock = 0;
+	}
+
+init_done:
+	stratix10_svc_done(priv->chan);
+	return ret;
+}
+
+/*
+ * s10_send_buf - send a buffer to the service layer queue
+ * mgr: fpga manager struct
+ * buf: fpga image buffer
+ * count: size of buf in bytes
+ * Returns # of bytes transferred or -ENOBUFS if all the buffers are in use
+ * or if the service queue is full. Never returns 0.
+ */
+static int s10_send_buf(struct fpga_manager *mgr, const char *buf, size_t count)
+{
+	struct s10_priv *priv = mgr->priv;
+	struct device *dev = priv->client.dev;
+	void *svc_buf;
+	size_t xfer_sz;
+	int ret;
+	uint i;
+
+	/* get/lock a buffer that's not being used */
+	for (i = 0; i < NUM_SVC_BUFS; i++)
+		if (!test_and_set_bit_lock(SVC_BUF_LOCK,
+					   &priv->svc_bufs[i].lock))
+			break;
+
+	if (i == NUM_SVC_BUFS)
+		return -ENOBUFS;
+
+	xfer_sz = count < SVC_BUF_SIZE ? count : SVC_BUF_SIZE;
+
+	svc_buf = priv->svc_bufs[i].buf;
+	memcpy(svc_buf, buf, xfer_sz);
+	ret = s10_svc_send_msg(priv, COMMAND_RECONFIG_DATA_SUBMIT,
+			       svc_buf, xfer_sz);
+	if (ret < 0) {
+		dev_err(dev,
+			"Error while sending data to service layer (%d)", ret);
+		clear_bit_unlock(SVC_BUF_LOCK, &priv->svc_bufs[i].lock);
+		return ret;
+	}
+
+	return xfer_sz;
+}
+
+/*
+ * Send a FPGA image to privileged layers to write to the FPGA. When done
+ * sending, free all service layer buffers we allocated in write_init.
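+ *
+ * In outline (a description of the loop below, added for clarity): each
+ * iteration either copies the next chunk of the image into a free
+ * service-layer buffer and submits it, or, once all data is queued, asks
+ * the service layer to claim back completed buffers; it then waits for the
+ * callback to return buffers until everything has been sent and freed.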
+ */
+static int s10_ops_write(struct fpga_manager *mgr, const char *buf,
+			 size_t count)
+{
+	struct s10_priv *priv = mgr->priv;
+	struct device *dev = priv->client.dev;
+	long wait_status;
+	int sent = 0;
+	int ret = 0;
+
+	/*
+	 * Loop waiting for buffers to be returned. When a buffer is returned,
+	 * reuse it to send more data or free it if all data has been sent.
+	 */
+	while (count > 0 || s10_free_buffer_count(mgr) != NUM_SVC_BUFS) {
+		reinit_completion(&priv->status_return_completion);
+
+		if (count > 0) {
+			sent = s10_send_buf(mgr, buf, count);
+			if (sent < 0)
+				continue;
+
+			count -= sent;
+			buf += sent;
+		} else {
+			if (s10_free_buffers(mgr))
+				return 0;
+
+			ret = s10_svc_send_msg(
+				priv, COMMAND_RECONFIG_DATA_CLAIM,
+				NULL, 0);
+			if (ret < 0)
+				break;
+		}
+
+		/*
+		 * If callback hasn't already happened, wait for buffers to be
+		 * returned from service layer
+		 */
+		wait_status = 1; /* not timed out */
+		if (!priv->status)
+			wait_status = wait_for_completion_interruptible_timeout(
+				&priv->status_return_completion,
+				S10_BUFFER_TIMEOUT);
+
+		if (test_and_clear_bit(SVC_STATUS_RECONFIG_BUFFER_DONE,
+				       &priv->status) ||
+		    test_and_clear_bit(SVC_STATUS_RECONFIG_BUFFER_SUBMITTED,
+				       &priv->status)) {
+			ret = 0;
+			continue;
+		}
+
+		if (test_and_clear_bit(SVC_STATUS_RECONFIG_ERROR,
+				       &priv->status)) {
+			dev_err(dev, "ERROR - giving up - SVC_STATUS_RECONFIG_ERROR\n");
+			ret = -EFAULT;
+			break;
+		}
+
+		if (!wait_status) {
+			dev_err(dev, "timeout waiting for svc layer buffers\n");
+			ret = -ETIMEDOUT;
+			break;
+		}
+		if (wait_status < 0) {
+			ret = wait_status;
+			dev_err(dev,
+				"error (%d) waiting for svc layer buffers\n",
+				ret);
+			break;
+		}
+	}
+
+	if (!s10_free_buffers(mgr))
+		dev_err(dev, "%s: not all buffers were freed\n", __func__);
+
+	return ret;
+}
+
+static int s10_ops_write_complete(struct fpga_manager *mgr,
+				  struct fpga_image_info *info)
+{
+	struct s10_priv *priv = mgr->priv;
+	struct device *dev = priv->client.dev;
+	unsigned long timeout;
+	int ret;
+
+	timeout = usecs_to_jiffies(info->config_complete_timeout_us);
+
+	do {
+		reinit_completion(&priv->status_return_completion);
+
+		ret = s10_svc_send_msg(priv, COMMAND_RECONFIG_STATUS, NULL, 0);
+		if (ret < 0)
+			break;
+
+		ret = wait_for_completion_interruptible_timeout(
+			&priv->status_return_completion, timeout);
+		if (!ret) {
+			dev_err(dev,
+				"timeout waiting for RECONFIG_COMPLETED\n");
+			ret = -ETIMEDOUT;
+			break;
+		}
+		if (ret < 0) {
+			dev_err(dev,
+				"error (%d) waiting for RECONFIG_COMPLETED\n",
+				ret);
+			break;
+		}
+		/* Not error or timeout, so ret is # of jiffies until timeout */
+		timeout = ret;
+		ret = 0;
+
+		if (test_and_clear_bit(SVC_STATUS_RECONFIG_COMPLETED,
+				       &priv->status))
+			break;
+
+		if (test_and_clear_bit(SVC_STATUS_RECONFIG_ERROR,
+				       &priv->status)) {
+			dev_err(dev, "ERROR - giving up - SVC_STATUS_RECONFIG_ERROR\n");
+			ret = -EFAULT;
+			break;
+		}
+	} while (1);
+
+	stratix10_svc_done(priv->chan);
+
+	return ret;
+}
+
+static enum fpga_mgr_states s10_ops_state(struct fpga_manager *mgr)
+{
+	return FPGA_MGR_STATE_UNKNOWN;
+}
+
+static const struct fpga_manager_ops s10_ops = {
+	.state = s10_ops_state,
+	.write_init = s10_ops_write_init,
+	.write = s10_ops_write,
+	.write_complete = s10_ops_write_complete,
+};
+
+static int s10_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct s10_priv *priv;
+	struct fpga_manager *mgr;
+	int ret;
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->client.dev = dev;
+	priv->client.receive_cb =
s10_receive_callback; + priv->client.priv = priv; + + priv->chan = stratix10_svc_request_channel_byname(&priv->client, + SVC_CLIENT_FPGA); + if (IS_ERR(priv->chan)) { + dev_err(dev, "couldn't get service channel (%s)\n", + SVC_CLIENT_FPGA); + return PTR_ERR(priv->chan); + } + + init_completion(&priv->status_return_completion); + + mgr = fpga_mgr_create(dev, "Stratix10 SOC FPGA Manager", + &s10_ops, priv); + if (!mgr) { + dev_err(dev, "unable to create FPGA manager\n"); + ret = -ENOMEM; + goto probe_err; + } + + ret = fpga_mgr_register(mgr); + if (ret) { + dev_err(dev, "unable to register FPGA manager\n"); + fpga_mgr_free(mgr); + goto probe_err; + } + + platform_set_drvdata(pdev, mgr); + return ret; + +probe_err: + stratix10_svc_free_channel(priv->chan); + return ret; +} + +static int s10_remove(struct platform_device *pdev) +{ + struct fpga_manager *mgr = platform_get_drvdata(pdev); + struct s10_priv *priv = mgr->priv; + + fpga_mgr_unregister(mgr); + stratix10_svc_free_channel(priv->chan); + + return 0; +} + +static const struct of_device_id s10_of_match[] = { + { .compatible = "intel,stratix10-soc-fpga-mgr", }, + {}, +}; + +MODULE_DEVICE_TABLE(of, s10_of_match); + +static struct platform_driver s10_driver = { + .probe = s10_probe, + .remove = s10_remove, + .driver = { + .name = "Stratix10 SoC FPGA manager", + .of_match_table = of_match_ptr(s10_of_match), + }, +}; + +static int __init s10_init(void) +{ + struct device_node *fw_np; + struct device_node *np; + int ret; + + fw_np = of_find_node_by_name(NULL, "svc"); + if (!fw_np) + return -ENODEV; + + np = of_find_matching_node(fw_np, s10_of_match); + if (!np) { + of_node_put(fw_np); + return -ENODEV; + } + + of_node_put(np); + ret = of_platform_populate(fw_np, s10_of_match, NULL, NULL); + of_node_put(fw_np); + if (ret) + return ret; + + return platform_driver_register(&s10_driver); +} + +static void __exit s10_exit(void) +{ + return platform_driver_unregister(&s10_driver); +} + +module_init(s10_init); +module_exit(s10_exit); + +MODULE_AUTHOR("Alan Tull "); +MODULE_DESCRIPTION("Intel Stratix 10 SOC FPGA Manager"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/fpga/zynq-fpga.c b/drivers/fpga/zynq-fpga.c index bb82efeebb9d..57b0e6775958 100644 --- a/drivers/fpga/zynq-fpga.c +++ b/drivers/fpga/zynq-fpga.c @@ -501,6 +501,10 @@ static int zynq_fpga_ops_write_complete(struct fpga_manager *mgr, if (err) return err; + /* Release 'PR' control back to the ICAP */ + zynq_fpga_write(priv, CTRL_OFFSET, + zynq_fpga_read(priv, CTRL_OFFSET) & ~CTRL_PCAP_PR_MASK); + err = zynq_fpga_poll_timeout(priv, INT_STS_OFFSET, intr_status, intr_status & IXR_PCFG_DONE_MASK, INIT_POLL_DELAY, diff --git a/drivers/fsi/Kconfig b/drivers/fsi/Kconfig index 99c99a5d57fe..5cc20f3c3fd6 100644 --- a/drivers/fsi/Kconfig +++ b/drivers/fsi/Kconfig @@ -65,4 +65,14 @@ config FSI_SBEFIFO a pipe-like FSI device for communicating with the self boot engine (SBE) on POWER processors. +config FSI_OCC + tristate "OCC SBEFIFO client device driver" + depends on FSI_SBEFIFO + ---help--- + This option enables an SBEFIFO based On-Chip Controller (OCC) device + driver. The OCC is a device embedded on a POWER processor that collects + and aggregates sensor data from the processor and system. The OCC can + provide the raw sensor data as well as perform thermal and power + management on the system. 
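For orientation only (an editorial sketch, not code from this patch): the driver below registers a misc device named occ<idx>, and a userspace client would exercise it roughly as follows. The 0x00 poll command code is an assumption about the OCC protocol, not something this patch defines; the write format (command byte, then big-endian length, then data) matches the driver's documented expectations.

    /* Illustrative userspace client for the OCC misc device (sketch). */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* "occ1" assumes index 1 was assigned at probe time */
            int fd = open("/dev/occ1", O_RDWR);
            /* byte 0: command type (0x00 poll, assumed);
             * bytes 1-2: data length, MSB first (no data here) */
            uint8_t cmd[3] = { 0x00, 0x00, 0x00 };
            uint8_t resp[4096];
            ssize_t n;

            if (fd < 0)
                    return 1;
            if (write(fd, cmd, sizeof(cmd)) != (ssize_t)sizeof(cmd))
                    return 1;
            n = read(fd, resp, sizeof(resp)); /* full response incl. header */
            if (n > 2)
                    printf("read %zd bytes, status=0x%02x\n", n, resp[2]);
            close(fd);
            return 0;
    }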
+ endif diff --git a/drivers/fsi/Makefile b/drivers/fsi/Makefile index a50d6ce22fb3..62687ec86d2e 100644 --- a/drivers/fsi/Makefile +++ b/drivers/fsi/Makefile @@ -5,3 +5,4 @@ obj-$(CONFIG_FSI_MASTER_GPIO) += fsi-master-gpio.o obj-$(CONFIG_FSI_MASTER_AST_CF) += fsi-master-ast-cf.o obj-$(CONFIG_FSI_SCOM) += fsi-scom.o obj-$(CONFIG_FSI_SBEFIFO) += fsi-sbefifo.o +obj-$(CONFIG_FSI_OCC) += fsi-occ.o diff --git a/drivers/fsi/fsi-occ.c b/drivers/fsi/fsi-occ.c new file mode 100644 index 000000000000..a2301cea1cbb --- /dev/null +++ b/drivers/fsi/fsi-occ.c @@ -0,0 +1,599 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define OCC_SRAM_BYTES 4096 +#define OCC_CMD_DATA_BYTES 4090 +#define OCC_RESP_DATA_BYTES 4089 + +#define OCC_SRAM_CMD_ADDR 0xFFFBE000 +#define OCC_SRAM_RSP_ADDR 0xFFFBF000 + +/* + * Assume we don't have much FFDC, if we do we'll overflow and + * fail the command. This needs to be big enough for simple + * commands as well. + */ +#define OCC_SBE_STATUS_WORDS 32 + +#define OCC_TIMEOUT_MS 1000 +#define OCC_CMD_IN_PRG_WAIT_MS 50 + +struct occ { + struct device *dev; + struct device *sbefifo; + char name[32]; + int idx; + struct miscdevice mdev; + struct mutex occ_lock; +}; + +#define to_occ(x) container_of((x), struct occ, mdev) + +struct occ_response { + u8 seq_no; + u8 cmd_type; + u8 return_status; + __be16 data_length; + u8 data[OCC_RESP_DATA_BYTES + 2]; /* two bytes checksum */ +} __packed; + +struct occ_client { + struct occ *occ; + struct mutex lock; + size_t data_size; + size_t read_offset; + u8 *buffer; +}; + +#define to_client(x) container_of((x), struct occ_client, xfr) + +static DEFINE_IDA(occ_ida); + +static int occ_open(struct inode *inode, struct file *file) +{ + struct occ_client *client = kzalloc(sizeof(*client), GFP_KERNEL); + struct miscdevice *mdev = file->private_data; + struct occ *occ = to_occ(mdev); + + if (!client) + return -ENOMEM; + + client->buffer = (u8 *)__get_free_page(GFP_KERNEL); + if (!client->buffer) { + kfree(client); + return -ENOMEM; + } + + client->occ = occ; + mutex_init(&client->lock); + file->private_data = client; + + /* We allocate a 1-page buffer, make sure it all fits */ + BUILD_BUG_ON((OCC_CMD_DATA_BYTES + 3) > PAGE_SIZE); + BUILD_BUG_ON((OCC_RESP_DATA_BYTES + 7) > PAGE_SIZE); + + return 0; +} + +static ssize_t occ_read(struct file *file, char __user *buf, size_t len, + loff_t *offset) +{ + struct occ_client *client = file->private_data; + ssize_t rc = 0; + + if (!client) + return -ENODEV; + + if (len > OCC_SRAM_BYTES) + return -EINVAL; + + mutex_lock(&client->lock); + + /* This should not be possible ... 
*/ + if (WARN_ON_ONCE(client->read_offset > client->data_size)) { + rc = -EIO; + goto done; + } + + /* Grab how much data we have to read */ + rc = min(len, client->data_size - client->read_offset); + if (copy_to_user(buf, client->buffer + client->read_offset, rc)) + rc = -EFAULT; + else + client->read_offset += rc; + + done: + mutex_unlock(&client->lock); + + return rc; +} + +static ssize_t occ_write(struct file *file, const char __user *buf, + size_t len, loff_t *offset) +{ + struct occ_client *client = file->private_data; + size_t rlen, data_length; + u16 checksum = 0; + ssize_t rc, i; + u8 *cmd; + + if (!client) + return -ENODEV; + + if (len > (OCC_CMD_DATA_BYTES + 3) || len < 3) + return -EINVAL; + + mutex_lock(&client->lock); + + /* Construct the command */ + cmd = client->buffer; + + /* Sequence number (we could increment and compare with response) */ + cmd[0] = 1; + + /* + * Copy the user command (assume user data follows the occ command + * format) + * byte 0: command type + * bytes 1-2: data length (msb first) + * bytes 3-n: data + */ + if (copy_from_user(&cmd[1], buf, len)) { + rc = -EFAULT; + goto done; + } + + /* Extract data length */ + data_length = (cmd[2] << 8) + cmd[3]; + if (data_length > OCC_CMD_DATA_BYTES) { + rc = -EINVAL; + goto done; + } + + /* Calculate checksum */ + for (i = 0; i < data_length + 4; ++i) + checksum += cmd[i]; + + cmd[data_length + 4] = checksum >> 8; + cmd[data_length + 5] = checksum & 0xFF; + + /* Submit command */ + rlen = PAGE_SIZE; + rc = fsi_occ_submit(client->occ->dev, cmd, data_length + 6, cmd, + &rlen); + if (rc) + goto done; + + /* Set read tracking data */ + client->data_size = rlen; + client->read_offset = 0; + + /* Done */ + rc = len; + + done: + mutex_unlock(&client->lock); + + return rc; +} + +static int occ_release(struct inode *inode, struct file *file) +{ + struct occ_client *client = file->private_data; + + free_page((unsigned long)client->buffer); + kfree(client); + + return 0; +} + +static const struct file_operations occ_fops = { + .owner = THIS_MODULE, + .open = occ_open, + .read = occ_read, + .write = occ_write, + .release = occ_release, +}; + +static int occ_verify_checksum(struct occ_response *resp, u16 data_length) +{ + /* Fetch the two bytes after the data for the checksum. */ + u16 checksum_resp = get_unaligned_be16(&resp->data[data_length]); + u16 checksum; + u16 i; + + checksum = resp->seq_no; + checksum += resp->cmd_type; + checksum += resp->return_status; + checksum += (data_length >> 8) + (data_length & 0xFF); + + for (i = 0; i < data_length; ++i) + checksum += resp->data[i]; + + if (checksum != checksum_resp) + return -EBADMSG; + + return 0; +} + +static int occ_getsram(struct occ *occ, u32 address, void *data, ssize_t len) +{ + u32 data_len = ((len + 7) / 8) * 8; /* must be multiples of 8 B */ + size_t resp_len, resp_data_len; + __be32 *resp, cmd[5]; + int rc; + + /* + * Magic sequence to do SBE getsram command. SBE will fetch data from + * specified SRAM address. 
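+	 *
+	 * For reference, the five words assembled below are:
+	 *   cmd[0]: 0x5, the chip-op length in words
+	 *   cmd[1]: SBEFIFO_CMD_GET_OCC_SRAM, the opcode
+	 *   cmd[2]: 1, the addressing mode
+	 *   cmd[3]: the SRAM address to read from
+	 *   cmd[4]: the data length in bytes (a multiple of 8)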
+ */ + cmd[0] = cpu_to_be32(0x5); + cmd[1] = cpu_to_be32(SBEFIFO_CMD_GET_OCC_SRAM); + cmd[2] = cpu_to_be32(1); + cmd[3] = cpu_to_be32(address); + cmd[4] = cpu_to_be32(data_len); + + resp_len = (data_len >> 2) + OCC_SBE_STATUS_WORDS; + resp = kzalloc(resp_len << 2, GFP_KERNEL); + if (!resp) + return -ENOMEM; + + rc = sbefifo_submit(occ->sbefifo, cmd, 5, resp, &resp_len); + if (rc) + goto free; + + rc = sbefifo_parse_status(occ->sbefifo, SBEFIFO_CMD_GET_OCC_SRAM, + resp, resp_len, &resp_len); + if (rc) + goto free; + + resp_data_len = be32_to_cpu(resp[resp_len - 1]); + if (resp_data_len != data_len) { + dev_err(occ->dev, "SRAM read expected %d bytes got %zd\n", + data_len, resp_data_len); + rc = -EBADMSG; + } else { + memcpy(data, resp, len); + } + +free: + /* Convert positive SBEI status */ + if (rc > 0) { + dev_err(occ->dev, "SRAM read returned failure status: %08x\n", + rc); + rc = -EBADMSG; + } + + kfree(resp); + return rc; +} + +static int occ_putsram(struct occ *occ, u32 address, const void *data, + ssize_t len) +{ + size_t cmd_len, buf_len, resp_len, resp_data_len; + u32 data_len = ((len + 7) / 8) * 8; /* must be multiples of 8 B */ + __be32 *buf; + int rc; + + /* + * We use the same buffer for command and response, make + * sure it's big enough + */ + resp_len = OCC_SBE_STATUS_WORDS; + cmd_len = (data_len >> 2) + 5; + buf_len = max(cmd_len, resp_len); + buf = kzalloc(buf_len << 2, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + /* + * Magic sequence to do SBE putsram command. SBE will transfer + * data to specified SRAM address. + */ + buf[0] = cpu_to_be32(cmd_len); + buf[1] = cpu_to_be32(SBEFIFO_CMD_PUT_OCC_SRAM); + buf[2] = cpu_to_be32(1); + buf[3] = cpu_to_be32(address); + buf[4] = cpu_to_be32(data_len); + + memcpy(&buf[5], data, len); + + rc = sbefifo_submit(occ->sbefifo, buf, cmd_len, buf, &resp_len); + if (rc) + goto free; + + rc = sbefifo_parse_status(occ->sbefifo, SBEFIFO_CMD_PUT_OCC_SRAM, + buf, resp_len, &resp_len); + if (rc) + goto free; + + if (resp_len != 1) { + dev_err(occ->dev, "SRAM write response length invalid: %zd\n", + resp_len); + rc = -EBADMSG; + } else { + resp_data_len = be32_to_cpu(buf[0]); + if (resp_data_len != data_len) { + dev_err(occ->dev, + "SRAM write expected %d bytes got %zd\n", + data_len, resp_data_len); + rc = -EBADMSG; + } + } + +free: + /* Convert positive SBEI status */ + if (rc > 0) { + dev_err(occ->dev, "SRAM write returned failure status: %08x\n", + rc); + rc = -EBADMSG; + } + + kfree(buf); + return rc; +} + +static int occ_trigger_attn(struct occ *occ) +{ + __be32 buf[OCC_SBE_STATUS_WORDS]; + size_t resp_len, resp_data_len; + int rc; + + BUILD_BUG_ON(OCC_SBE_STATUS_WORDS < 7); + resp_len = OCC_SBE_STATUS_WORDS; + + buf[0] = cpu_to_be32(0x5 + 0x2); /* Chip-op length in words */ + buf[1] = cpu_to_be32(SBEFIFO_CMD_PUT_OCC_SRAM); + buf[2] = cpu_to_be32(0x3); /* Mode: Circular */ + buf[3] = cpu_to_be32(0x0); /* Address: ignore in mode 3 */ + buf[4] = cpu_to_be32(0x8); /* Data length in bytes */ + buf[5] = cpu_to_be32(0x20010000); /* Trigger OCC attention */ + buf[6] = 0; + + rc = sbefifo_submit(occ->sbefifo, buf, 7, buf, &resp_len); + if (rc) + goto error; + + rc = sbefifo_parse_status(occ->sbefifo, SBEFIFO_CMD_PUT_OCC_SRAM, + buf, resp_len, &resp_len); + if (rc) + goto error; + + if (resp_len != 1) { + dev_err(occ->dev, "SRAM attn response length invalid: %zd\n", + resp_len); + rc = -EBADMSG; + } else { + resp_data_len = be32_to_cpu(buf[0]); + if (resp_data_len != 8) { + dev_err(occ->dev, + "SRAM attn expected 8 bytes got %zd\n", + 
resp_data_len); + rc = -EBADMSG; + } + } + + error: + /* Convert positive SBEI status */ + if (rc > 0) { + dev_err(occ->dev, "SRAM attn returned failure status: %08x\n", + rc); + rc = -EBADMSG; + } + + return rc; +} + +int fsi_occ_submit(struct device *dev, const void *request, size_t req_len, + void *response, size_t *resp_len) +{ + const unsigned long timeout = msecs_to_jiffies(OCC_TIMEOUT_MS); + const unsigned long wait_time = + msecs_to_jiffies(OCC_CMD_IN_PRG_WAIT_MS); + struct occ *occ = dev_get_drvdata(dev); + struct occ_response *resp = response; + u16 resp_data_length; + unsigned long start; + int rc; + + if (!occ) + return -ENODEV; + + if (*resp_len < 7) { + dev_dbg(dev, "Bad resplen %zd\n", *resp_len); + return -EINVAL; + } + + mutex_lock(&occ->occ_lock); + + rc = occ_putsram(occ, OCC_SRAM_CMD_ADDR, request, req_len); + if (rc) + goto done; + + rc = occ_trigger_attn(occ); + if (rc) + goto done; + + /* Read occ response header */ + start = jiffies; + do { + rc = occ_getsram(occ, OCC_SRAM_RSP_ADDR, resp, 8); + if (rc) + goto done; + + if (resp->return_status == OCC_RESP_CMD_IN_PRG) { + rc = -ETIMEDOUT; + + if (time_after(jiffies, start + timeout)) + break; + + set_current_state(TASK_UNINTERRUPTIBLE); + schedule_timeout(wait_time); + } + } while (rc); + + /* Extract size of response data */ + resp_data_length = get_unaligned_be16(&resp->data_length); + + /* Message size is data length + 5 bytes header + 2 bytes checksum */ + if ((resp_data_length + 7) > *resp_len) { + rc = -EMSGSIZE; + goto done; + } + + dev_dbg(dev, "resp_status=%02x resp_data_len=%d\n", + resp->return_status, resp_data_length); + + /* Grab the rest */ + if (resp_data_length > 1) { + /* already got 3 bytes resp, also need 2 bytes checksum */ + rc = occ_getsram(occ, OCC_SRAM_RSP_ADDR + 8, + &resp->data[3], resp_data_length - 1); + if (rc) + goto done; + } + + *resp_len = resp_data_length + 7; + rc = occ_verify_checksum(resp, resp_data_length); + + done: + mutex_unlock(&occ->occ_lock); + + return rc; +} +EXPORT_SYMBOL_GPL(fsi_occ_submit); + +static int occ_unregister_child(struct device *dev, void *data) +{ + struct platform_device *hwmon_dev = to_platform_device(dev); + + platform_device_unregister(hwmon_dev); + + return 0; +} + +static int occ_probe(struct platform_device *pdev) +{ + int rc; + u32 reg; + struct occ *occ; + struct platform_device *hwmon_dev; + struct device *dev = &pdev->dev; + struct platform_device_info hwmon_dev_info = { + .parent = dev, + .name = "occ-hwmon", + }; + + occ = devm_kzalloc(dev, sizeof(*occ), GFP_KERNEL); + if (!occ) + return -ENOMEM; + + occ->dev = dev; + occ->sbefifo = dev->parent; + mutex_init(&occ->occ_lock); + + if (dev->of_node) { + rc = of_property_read_u32(dev->of_node, "reg", ®); + if (!rc) { + /* make sure we don't have a duplicate from dts */ + occ->idx = ida_simple_get(&occ_ida, reg, reg + 1, + GFP_KERNEL); + if (occ->idx < 0) + occ->idx = ida_simple_get(&occ_ida, 1, INT_MAX, + GFP_KERNEL); + } else { + occ->idx = ida_simple_get(&occ_ida, 1, INT_MAX, + GFP_KERNEL); + } + } else { + occ->idx = ida_simple_get(&occ_ida, 1, INT_MAX, GFP_KERNEL); + } + + platform_set_drvdata(pdev, occ); + + snprintf(occ->name, sizeof(occ->name), "occ%d", occ->idx); + occ->mdev.fops = &occ_fops; + occ->mdev.minor = MISC_DYNAMIC_MINOR; + occ->mdev.name = occ->name; + occ->mdev.parent = dev; + + rc = misc_register(&occ->mdev); + if (rc) { + dev_err(dev, "failed to register miscdevice: %d\n", rc); + ida_simple_remove(&occ_ida, occ->idx); + return rc; + } + + hwmon_dev_info.id = occ->idx; + 
hwmon_dev = platform_device_register_full(&hwmon_dev_info); + if (!hwmon_dev) + dev_warn(dev, "failed to create hwmon device\n"); + + return 0; +} + +static int occ_remove(struct platform_device *pdev) +{ + struct occ *occ = platform_get_drvdata(pdev); + + misc_deregister(&occ->mdev); + + device_for_each_child(&pdev->dev, NULL, occ_unregister_child); + + ida_simple_remove(&occ_ida, occ->idx); + + return 0; +} + +static const struct of_device_id occ_match[] = { + { .compatible = "ibm,p9-occ" }, + { }, +}; + +static struct platform_driver occ_driver = { + .driver = { + .name = "occ", + .of_match_table = occ_match, + }, + .probe = occ_probe, + .remove = occ_remove, +}; + +static int occ_init(void) +{ + return platform_driver_register(&occ_driver); +} + +static void occ_exit(void) +{ + platform_driver_unregister(&occ_driver); + + ida_destroy(&occ_ida); +} + +module_init(occ_init); +module_exit(occ_exit); + +MODULE_AUTHOR("Eddie James "); +MODULE_DESCRIPTION("BMC P9 OCC driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/gnss/serial.c b/drivers/gnss/serial.c index 31e891f00175..def64b36d994 100644 --- a/drivers/gnss/serial.c +++ b/drivers/gnss/serial.c @@ -65,7 +65,7 @@ static int gnss_serial_write_raw(struct gnss_device *gdev, /* write is only buffered synchronously */ ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT); - if (ret < 0) + if (ret < 0 || ret < count) return ret; /* FIXME: determine if interrupted? */ diff --git a/drivers/gnss/sirf.c b/drivers/gnss/sirf.c index 2c22836d3ffd..226f6e6fe01b 100644 --- a/drivers/gnss/sirf.c +++ b/drivers/gnss/sirf.c @@ -85,7 +85,7 @@ static int sirf_write_raw(struct gnss_device *gdev, const unsigned char *buf, /* write is only buffered synchronously */ ret = serdev_device_write(serdev, buf, count, MAX_SCHEDULE_TIMEOUT); - if (ret < 0) + if (ret < 0 || ret < count) return ret; /* FIXME: determine if interrupted? */ diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig index 833a1b51c948..b5a2845347ec 100644 --- a/drivers/gpio/Kconfig +++ b/drivers/gpio/Kconfig @@ -160,6 +160,14 @@ config GPIO_BRCMSTB help Say yes here to enable GPIO support for Broadcom STB (BCM7XXX) SoCs. +config GPIO_CADENCE + tristate "Cadence GPIO support" + depends on OF_GPIO + select GPIO_GENERIC + select GPIOLIB_IRQCHIP + help + Say yes here to enable support for Cadence GPIO controller. + config GPIO_CLPS711X tristate "CLPS711X GPIO support" depends on ARCH_CLPS711X || COMPILE_TEST @@ -288,6 +296,7 @@ config GPIO_LPC18XX tristate "NXP LPC18XX/43XX GPIO support" default y if ARCH_LPC18XX depends on OF_GPIO && (ARCH_LPC18XX || COMPILE_TEST) + select IRQ_DOMAIN_HIERARCHY help Select this option to enable GPIO driver for NXP LPC18XX/43XX devices. @@ -429,6 +438,18 @@ config GPIO_REG A 32-bit single register GPIO fixed in/out implementation. This can be used to represent any register as a set of GPIO signals. +config GPIO_SAMA5D2_PIOBU + tristate "SAMA5D2 PIOBU GPIO support" + depends on MFD_SYSCON + depends on OF_GPIO + select GPIO_SYSCON + help + Say yes here to use the PIOBU pins as GPIOs. + + PIOBU pins on the SAMA5D2 can be used as GPIOs. + The difference from regular GPIOs is that they + maintain their value during backup/self-refresh. + config GPIO_SIOX tristate "SIOX GPIO support" depends on SIOX @@ -849,6 +870,7 @@ config GPIO_MC9S08DZ60 config GPIO_PCA953X tristate "PCA95[357]x, PCA9698, TCA64xx, and MAX7310 I/O ports" + select REGMAP_I2C help Say yes here to provide access to several register-oriented SMBus I/O expanders, made mostly by NXP or TI. 
Compatible
diff --git a/drivers/gpio/Makefile b/drivers/gpio/Makefile
index 671c4477c951..37628f8dbf70 100644
--- a/drivers/gpio/Makefile
+++ b/drivers/gpio/Makefile
@@ -37,6 +37,7 @@ obj-$(CONFIG_GPIO_BCM_KONA)	+= gpio-bcm-kona.o
 obj-$(CONFIG_GPIO_BD9571MWV)	+= gpio-bd9571mwv.o
 obj-$(CONFIG_GPIO_BRCMSTB)	+= gpio-brcmstb.o
 obj-$(CONFIG_GPIO_BT8XX)	+= gpio-bt8xx.o
+obj-$(CONFIG_GPIO_CADENCE)	+= gpio-cadence.o
 obj-$(CONFIG_GPIO_CLPS711X)	+= gpio-clps711x.o
 obj-$(CONFIG_GPIO_CS5535)	+= gpio-cs5535.o
 obj-$(CONFIG_GPIO_CRYSTAL_COVE)	+= gpio-crystalcove.o
@@ -108,6 +109,7 @@ obj-$(CONFIG_GPIO_RDC321X)	+= gpio-rdc321x.o
 obj-$(CONFIG_GPIO_RCAR)	+= gpio-rcar.o
 obj-$(CONFIG_GPIO_REG)	+= gpio-reg.o
 obj-$(CONFIG_ARCH_SA1100)	+= gpio-sa1100.o
+obj-$(CONFIG_GPIO_SAMA5D2_PIOBU)	+= gpio-sama5d2-piobu.o
 obj-$(CONFIG_GPIO_SCH)	+= gpio-sch.o
 obj-$(CONFIG_GPIO_SCH311X)	+= gpio-sch311x.o
 obj-$(CONFIG_GPIO_SNPS_CREG)	+= gpio-creg-snps.o
diff --git a/drivers/gpio/TODO b/drivers/gpio/TODO
new file mode 100644
index 000000000000..19d27c904916
--- /dev/null
+++ b/drivers/gpio/TODO
@@ -0,0 +1,109 @@
+This is a place for planning the ongoing long-term work in the GPIO
+subsystem.
+
+
+GPIO descriptors
+
+Starting with commit 79a9becda894 the GPIO subsystem embarked on a journey
+to move away from the global GPIO numberspace and toward a descriptor-based
+approach. This means that GPIO consumers, drivers and machine descriptions
+ideally have no use or idea of the global GPIO numberspace that was used
+in the inception of the GPIO subsystem.
+
+Work items:
+
+- Convert all GPIO device drivers to only #include <linux/gpio/driver.h>
+
+- Convert all consumer drivers to only #include <linux/gpio/consumer.h>
+
+- Convert all machine descriptors in "boardfiles" to only
+  #include <linux/gpio/machine.h>, the other option being to convert it
+  to a machine description such as device tree, ACPI or fwnode that
+  implicitly does not use global GPIO numbers.
+
+- When this work is complete (will require some of the items in the
+  following ongoing work as well) we can delete the old global
+  numberspace accessors from <linux/gpio.h> and eventually delete
+  <linux/gpio.h> altogether.
+
+
+Get rid of <linux/of_gpio.h>
+
+This header and helpers appeared at one point when there was no proper
+driver infrastructure for doing simpler MMIO GPIO devices and there was
+no core support for parsing device tree GPIOs from the core library with
+the [devm_]gpiod_get() calls we have today that will implicitly go into
+the device tree back-end.
+
+Work items:
+
+- Get rid of struct of_mm_gpio_chip altogether: use the generic MMIO
+  GPIO for all current users (see below). Delete struct of_mm_gpio_chip,
+  to_of_mm_gpio_chip(), of_mm_gpiochip_add_data(), of_mm_gpiochip_add()
+  of_mm_gpiochip_remove() from the kernel.
+
+- Change all consumer drivers that #include <linux/of_gpio.h> to
+  #include <linux/gpio/consumer.h> and stop doing custom parsing of the
+  GPIO lines from the device tree. This can be tricky and often involves
+  changing boardfiles, etc.
+
+- Pull semantics for legacy device tree (OF) GPIO lookups into
+  gpiolib-of.c: in some cases subsystems are doing custom flags and
+  lookups for polarity inversion, open drain and what not. As we now
+  handle this with generic OF bindings, pull all legacy handling into
+  gpiolib so the library API becomes narrow and deep and handle all
+  legacy bindings internally. (See e.g. commits 6953c57ab172,
+  6a537d48461d etc)
+
+- Delete <linux/of_gpio.h> when all the above is complete and everything
+  uses <linux/gpio/consumer.h> or <linux/gpio/driver.h> instead.
+
+
+Collect drivers
+
+Collect GPIO drivers from arch/* and other places that should be placed
+in drivers/gpio/gpio-* (a sketch of the target driver shape follows below).
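As an editorial illustration of that target shape (a hedged sketch under assumed names, not code from this patch), a minimal gpiolib platform driver looks roughly like this; all foo_* names are placeholders, only the gpiolib calls are real:

    /* Hypothetical skeleton for a driver relocated into drivers/gpio/. */
    #include <linux/gpio/driver.h>
    #include <linux/module.h>
    #include <linux/platform_device.h>

    struct foo_gpio {
            struct gpio_chip gc;
            void __iomem *regs;
    };

    static int foo_gpio_get(struct gpio_chip *gc, unsigned int offset)
    {
            /* read the line state from hardware here */
            return 0;
    }

    static void foo_gpio_set(struct gpio_chip *gc, unsigned int offset,
                             int value)
    {
            /* drive the line here */
    }

    static int foo_gpio_probe(struct platform_device *pdev)
    {
            struct foo_gpio *priv;

            priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
            if (!priv)
                    return -ENOMEM;

            priv->gc.label = "foo-gpio";
            priv->gc.parent = &pdev->dev;
            priv->gc.owner = THIS_MODULE;
            priv->gc.base = -1;     /* no global numberspace, per the above */
            priv->gc.ngpio = 8;
            priv->gc.get = foo_gpio_get;
            priv->gc.set = foo_gpio_set;

            return devm_gpiochip_add_data(&pdev->dev, &priv->gc, priv);
    }

    static struct platform_driver foo_gpio_driver = {
            .driver = { .name = "foo-gpio" },
            .probe = foo_gpio_probe,
    };
    module_platform_driver(foo_gpio_driver);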
Augment platforms to create platform devices or +similar and probe a proper driver in the gpiolib subsystem. + +In some cases it makes sense to create a GPIO chip from the local driver +for a few GPIOs. Those should stay where they are. + + +Generic MMIO GPIO + +The GPIO drivers can utilize the generic MMIO helper library in many +cases, and the helper library should be as helpful as possible for MMIO +drivers. (drivers/gpio/gpio-mmio.c) + +Work items: + +- Look over and identify any remaining easily converted drivers and + dry-code conversions to MMIO GPIO for maintainers to test + +- Expand the MMIO GPIO or write a new library for port-mapped I/O + helpers (x86 inb()/outb()) and convert port-mapped I/O drivers to use + this, with dry-coding and sending to maintainers to test + + +GPIOLIB irqchip + +The GPIOLIB irqchip is a helper irqchip for "simple cases" that should +try to cover any generic kind of irqchip cascaded from a GPIO. + +- Look over and identify any remaining easily converted drivers and + dry-code conversions to gpiolib irqchip for maintainers to test + +- Support generic hierarchical GPIO interrupts: these are for the + non-cascading case where there is one IRQ per GPIO line; there is + currently no common infrastructure for this. + + +Increase integration with pin control + +There are already ways to use pin control as a back-end for GPIO and +it may make sense to bring these subsystems closer. One reason for +creating pin control as its own subsystem was that we could avoid any +use of the global GPIO numbers. Once the above is complete, it may +make sense to simply join the subsystems into one and make pin +multiplexing, pin configuration, GPIO, etc. selectable options in one +and the same pin control and GPIO subsystem. diff --git a/drivers/gpio/gpio-104-dio-48e.c b/drivers/gpio/gpio-104-dio-48e.c index 9c4e07fcb74b..92c8f944bf64 100644 --- a/drivers/gpio/gpio-104-dio-48e.c +++ b/drivers/gpio/gpio-104-dio-48e.c @@ -222,7 +222,7 @@ static int dio48e_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask, port_state = inb(dio48egpio->base + ports[i]); /* store acquired bits at respective bits array offset */ - bits[word_index] |= port_state << word_offset; + bits[word_index] |= (port_state << word_offset) & word_mask; } return 0; diff --git a/drivers/gpio/gpio-104-idi-48.c b/drivers/gpio/gpio-104-idi-48.c index 2c9738adb3a6..88dc6f2449f6 100644 --- a/drivers/gpio/gpio-104-idi-48.c +++ b/drivers/gpio/gpio-104-idi-48.c @@ -128,7 +128,7 @@ static int idi_48_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask, port_state = inb(idi48gpio->base + ports[i]); /* store acquired bits at respective bits array offset */ - bits[word_index] |= port_state << word_offset; + bits[word_index] |= (port_state << word_offset) & word_mask; } return 0; diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c index 2342e154029b..854bce4fb9e7 100644 --- a/drivers/gpio/gpio-aspeed.c +++ b/drivers/gpio/gpio-aspeed.c @@ -1185,7 +1185,6 @@ static int __init aspeed_gpio_probe(struct platform_device *pdev) gpio->chip.parent = &pdev->dev; gpio->chip.ngpio = gpio->config->nr_gpios; - gpio->chip.parent = &pdev->dev; gpio->chip.direction_input = aspeed_gpio_dir_in; gpio->chip.direction_output = aspeed_gpio_dir_out; gpio->chip.get_direction = aspeed_gpio_get_direction; diff --git a/drivers/gpio/gpio-cadence.c b/drivers/gpio/gpio-cadence.c new file mode 100644 index 000000000000..aec8d5df9f30 --- /dev/null +++ b/drivers/gpio/gpio-cadence.c @@ -0,0 +1,291 @@ +//
SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright 2017-2018 Cadence + * + * Authors: + * Jan Kotas + * Boris Brezillon + */ + +#include +#include +#include +#include +#include +#include +#include + +#define CDNS_GPIO_BYPASS_MODE 0x00 +#define CDNS_GPIO_DIRECTION_MODE 0x04 +#define CDNS_GPIO_OUTPUT_EN 0x08 +#define CDNS_GPIO_OUTPUT_VALUE 0x0c +#define CDNS_GPIO_INPUT_VALUE 0x10 +#define CDNS_GPIO_IRQ_MASK 0x14 +#define CDNS_GPIO_IRQ_EN 0x18 +#define CDNS_GPIO_IRQ_DIS 0x1c +#define CDNS_GPIO_IRQ_STATUS 0x20 +#define CDNS_GPIO_IRQ_TYPE 0x24 +#define CDNS_GPIO_IRQ_VALUE 0x28 +#define CDNS_GPIO_IRQ_ANY_EDGE 0x2c + +struct cdns_gpio_chip { + struct gpio_chip gc; + struct clk *pclk; + void __iomem *regs; + u32 bypass_orig; +}; + +static int cdns_gpio_request(struct gpio_chip *chip, unsigned int offset) +{ + struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip); + unsigned long flags; + + spin_lock_irqsave(&chip->bgpio_lock, flags); + + iowrite32(ioread32(cgpio->regs + CDNS_GPIO_BYPASS_MODE) & ~BIT(offset), + cgpio->regs + CDNS_GPIO_BYPASS_MODE); + + spin_unlock_irqrestore(&chip->bgpio_lock, flags); + return 0; +} + +static void cdns_gpio_free(struct gpio_chip *chip, unsigned int offset) +{ + struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip); + unsigned long flags; + + spin_lock_irqsave(&chip->bgpio_lock, flags); + + iowrite32(ioread32(cgpio->regs + CDNS_GPIO_BYPASS_MODE) | + (BIT(offset) & cgpio->bypass_orig), + cgpio->regs + CDNS_GPIO_BYPASS_MODE); + + spin_unlock_irqrestore(&chip->bgpio_lock, flags); +} + +static void cdns_gpio_irq_mask(struct irq_data *d) +{ + struct gpio_chip *chip = irq_data_get_irq_chip_data(d); + struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip); + + iowrite32(BIT(d->hwirq), cgpio->regs + CDNS_GPIO_IRQ_DIS); +} + +static void cdns_gpio_irq_unmask(struct irq_data *d) +{ + struct gpio_chip *chip = irq_data_get_irq_chip_data(d); + struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip); + + iowrite32(BIT(d->hwirq), cgpio->regs + CDNS_GPIO_IRQ_EN); +} + +static int cdns_gpio_irq_set_type(struct irq_data *d, unsigned int type) +{ + struct gpio_chip *chip = irq_data_get_irq_chip_data(d); + struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip); + unsigned long flags; + u32 int_value; + u32 int_type; + u32 mask = BIT(d->hwirq); + int ret = 0; + + spin_lock_irqsave(&chip->bgpio_lock, flags); + + int_value = ioread32(cgpio->regs + CDNS_GPIO_IRQ_VALUE) & ~mask; + int_type = ioread32(cgpio->regs + CDNS_GPIO_IRQ_TYPE) & ~mask; + + /* + * The GPIO controller doesn't have an ACK register. + * All interrupt statuses are cleared on a status register read. + * Don't support edge interrupts for now. 
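+ * Level triggers stay safe despite the clear-on-read behaviour: a + * level source keeps the line asserted until it is serviced, so a + * status bit consumed by the read is raised again, whereas a one-shot + * edge event could be lost.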
+ */ + + if (type == IRQ_TYPE_LEVEL_HIGH) { + int_type |= mask; + int_value |= mask; + } else if (type == IRQ_TYPE_LEVEL_LOW) { + int_type |= mask; + } else { + ret = -EINVAL; + goto err_irq_type; + } + + iowrite32(int_value, cgpio->regs + CDNS_GPIO_IRQ_VALUE); + iowrite32(int_type, cgpio->regs + CDNS_GPIO_IRQ_TYPE); + +err_irq_type: + spin_unlock_irqrestore(&chip->bgpio_lock, flags); + return ret; +} + +static void cdns_gpio_irq_handler(struct irq_desc *desc) +{ + struct gpio_chip *chip = irq_desc_get_handler_data(desc); + struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip); + struct irq_chip *irqchip = irq_desc_get_chip(desc); + unsigned long status; + int hwirq; + + chained_irq_enter(irqchip, desc); + + status = ioread32(cgpio->regs + CDNS_GPIO_IRQ_STATUS) & + ~ioread32(cgpio->regs + CDNS_GPIO_IRQ_MASK); + + for_each_set_bit(hwirq, &status, chip->ngpio) + generic_handle_irq(irq_find_mapping(chip->irq.domain, hwirq)); + + chained_irq_exit(irqchip, desc); +} + +static struct irq_chip cdns_gpio_irqchip = { + .name = "cdns-gpio", + .irq_mask = cdns_gpio_irq_mask, + .irq_unmask = cdns_gpio_irq_unmask, + .irq_set_type = cdns_gpio_irq_set_type +}; + +static int cdns_gpio_probe(struct platform_device *pdev) +{ + struct cdns_gpio_chip *cgpio; + struct resource *res; + int ret, irq; + u32 dir_prev; + u32 num_gpios = 32; + + cgpio = devm_kzalloc(&pdev->dev, sizeof(*cgpio), GFP_KERNEL); + if (!cgpio) + return -ENOMEM; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + cgpio->regs = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(cgpio->regs)) + return PTR_ERR(cgpio->regs); + + of_property_read_u32(pdev->dev.of_node, "ngpios", &num_gpios); + + if (num_gpios > 32) { + dev_err(&pdev->dev, "ngpios must be less than or equal to 32\n"); + return -EINVAL; + } + + /* + * Set all pins as inputs by default, otherwise: + * gpiochip_lock_as_irq: + * tried to flag a GPIO set as output for IRQ + * Generic GPIO driver stores the direction value internally, + * so it needs to be changed before bgpio_init() is called.
+ */ + dir_prev = ioread32(cgpio->regs + CDNS_GPIO_DIRECTION_MODE); + iowrite32(GENMASK(num_gpios - 1, 0), + cgpio->regs + CDNS_GPIO_DIRECTION_MODE); + + ret = bgpio_init(&cgpio->gc, &pdev->dev, 4, + cgpio->regs + CDNS_GPIO_INPUT_VALUE, + cgpio->regs + CDNS_GPIO_OUTPUT_VALUE, + NULL, + NULL, + cgpio->regs + CDNS_GPIO_DIRECTION_MODE, + BGPIOF_READ_OUTPUT_REG_SET); + if (ret) { + dev_err(&pdev->dev, "Failed to register generic gpio, %d\n", + ret); + goto err_revert_dir; + } + + cgpio->gc.label = dev_name(&pdev->dev); + cgpio->gc.ngpio = num_gpios; + cgpio->gc.parent = &pdev->dev; + cgpio->gc.base = -1; + cgpio->gc.owner = THIS_MODULE; + cgpio->gc.request = cdns_gpio_request; + cgpio->gc.free = cdns_gpio_free; + + cgpio->pclk = devm_clk_get(&pdev->dev, NULL); + if (IS_ERR(cgpio->pclk)) { + ret = PTR_ERR(cgpio->pclk); + dev_err(&pdev->dev, + "Failed to retrieve peripheral clock, %d\n", ret); + goto err_revert_dir; + } + + ret = clk_prepare_enable(cgpio->pclk); + if (ret) { + dev_err(&pdev->dev, + "Failed to enable the peripheral clock, %d\n", ret); + goto err_revert_dir; + } + + ret = devm_gpiochip_add_data(&pdev->dev, &cgpio->gc, cgpio); + if (ret < 0) { + dev_err(&pdev->dev, "Could not register gpiochip, %d\n", ret); + goto err_disable_clk; + } + + /* + * irq_chip support + */ + irq = platform_get_irq(pdev, 0); + if (irq >= 0) { + ret = gpiochip_irqchip_add(&cgpio->gc, &cdns_gpio_irqchip, + 0, handle_level_irq, + IRQ_TYPE_NONE); + if (ret) { + dev_err(&pdev->dev, "Could not add irqchip, %d\n", + ret); + goto err_disable_clk; + } + gpiochip_set_chained_irqchip(&cgpio->gc, &cdns_gpio_irqchip, + irq, cdns_gpio_irq_handler); + } + + cgpio->bypass_orig = ioread32(cgpio->regs + CDNS_GPIO_BYPASS_MODE); + + /* + * Enable gpio outputs, ignored for input direction + */ + iowrite32(GENMASK(num_gpios - 1, 0), + cgpio->regs + CDNS_GPIO_OUTPUT_EN); + iowrite32(0, cgpio->regs + CDNS_GPIO_BYPASS_MODE); + + platform_set_drvdata(pdev, cgpio); + return 0; + +err_disable_clk: + clk_disable_unprepare(cgpio->pclk); + +err_revert_dir: + iowrite32(dir_prev, cgpio->regs + CDNS_GPIO_DIRECTION_MODE); + + return ret; +} + +static int cdns_gpio_remove(struct platform_device *pdev) +{ + struct cdns_gpio_chip *cgpio = platform_get_drvdata(pdev); + + iowrite32(cgpio->bypass_orig, cgpio->regs + CDNS_GPIO_BYPASS_MODE); + clk_disable_unprepare(cgpio->pclk); + + return 0; +} + +static const struct of_device_id cdns_of_ids[] = { + { .compatible = "cdns,gpio-r1p02" }, + { /* sentinel */ }, +}; + +static struct platform_driver cdns_gpio_driver = { + .driver = { + .name = "cdns-gpio", + .of_match_table = cdns_of_ids, + }, + .probe = cdns_gpio_probe, + .remove = cdns_gpio_remove, +}; +module_platform_driver(cdns_gpio_driver); + +MODULE_AUTHOR("Jan Kotas "); +MODULE_DESCRIPTION("Cadence GPIO driver"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS("platform:cdns-gpio"); diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c index 044888fd96a1..84ae04402f70 100644 --- a/drivers/gpio/gpio-dwapb.c +++ b/drivers/gpio/gpio-dwapb.c @@ -748,8 +748,7 @@ static int dwapb_gpio_remove(struct platform_device *pdev) #ifdef CONFIG_PM_SLEEP static int dwapb_gpio_suspend(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct dwapb_gpio *gpio = platform_get_drvdata(pdev); + struct dwapb_gpio *gpio = dev_get_drvdata(dev); struct gpio_chip *gc = &gpio->ports[0].gc; unsigned long flags; int i; @@ -793,8 +792,7 @@ static int dwapb_gpio_suspend(struct device *dev) static int dwapb_gpio_resume(struct device *dev) { 
- struct platform_device *pdev = to_platform_device(dev); - struct dwapb_gpio *gpio = platform_get_drvdata(pdev); + struct dwapb_gpio *gpio = dev_get_drvdata(dev); struct gpio_chip *gc = &gpio->ports[0].gc; unsigned long flags; int i; diff --git a/drivers/gpio/gpio-gpio-mm.c b/drivers/gpio/gpio-gpio-mm.c index b56ff2efbf36..8c150fd68d9d 100644 --- a/drivers/gpio/gpio-gpio-mm.c +++ b/drivers/gpio/gpio-gpio-mm.c @@ -211,7 +211,7 @@ static int gpiomm_gpio_get_multiple(struct gpio_chip *chip, unsigned long *mask, port_state = inb(gpiommgpio->base + ports[i]); /* store acquired bits at respective bits array offset */ - bits[word_index] |= port_state << word_offset; + bits[word_index] |= (port_state << word_offset) & word_mask; } return 0; diff --git a/drivers/gpio/gpio-grgpio.c b/drivers/gpio/gpio-grgpio.c index 60a1556c570a..45b8d6a02b87 100644 --- a/drivers/gpio/gpio-grgpio.c +++ b/drivers/gpio/gpio-grgpio.c @@ -30,7 +30,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/gpio/gpio-ich.c b/drivers/gpio/gpio-ich.c index dba392221042..90bf7742f9b0 100644 --- a/drivers/gpio/gpio-ich.c +++ b/drivers/gpio/gpio-ich.c @@ -1,32 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0+ /* * Intel ICH6-10, Series 5 and 6, Atom C2000 (Avoton/Rangeley) GPIO driver * * Copyright (C) 2010 Extreme Engineering Solutions. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include +#include #include +#include #include #include -#include #include -#include -#include #define DRV_NAME "gpio_ich" @@ -100,7 +86,7 @@ struct ichx_desc { static struct { spinlock_t lock; - struct platform_device *dev; + struct device *dev; struct gpio_chip chip; struct resource *gpio_base; /* GPIO IO base */ struct resource *pm_base; /* Power Management IO base */ @@ -112,8 +98,7 @@ static int modparam_gpiobase = -1; /* dynamic */ module_param_named(gpiobase, modparam_gpiobase, int, 0444); -MODULE_PARM_DESC(gpiobase, "The GPIO number base. -1 means dynamic, " - "which is the default."); +MODULE_PARM_DESC(gpiobase, "The GPIO number base. -1 means dynamic, which is the default."); static int ichx_write_bit(int reg, unsigned nr, int val, int verify) { @@ -121,7 +106,6 @@ static int ichx_write_bit(int reg, unsigned nr, int val, int verify) u32 data, tmp; int reg_nr = nr / 32; int bit = nr & 0x1f; - int ret = 0; spin_lock_irqsave(&ichx_priv.lock, flags); @@ -142,12 +126,10 @@ static int ichx_write_bit(int reg, unsigned nr, int val, int verify) tmp = ICHX_READ(ichx_priv.desc->regs[reg][reg_nr], ichx_priv.gpio_base); - if (verify && data != tmp) - ret = -EPERM; spin_unlock_irqrestore(&ichx_priv.lock, flags); - return ret; + return (verify && data != tmp) ?
-EPERM : 0; } static int ichx_read_bit(int reg, unsigned nr) @@ -186,10 +168,7 @@ static int ichx_gpio_direction_input(struct gpio_chip *gpio, unsigned nr) * Try setting pin as an input and verify it worked since many pins * are output-only. */ - if (ichx_write_bit(GPIO_IO_SEL, nr, 1, 1)) - return -EINVAL; - - return 0; + return ichx_write_bit(GPIO_IO_SEL, nr, 1, 1); } static int ichx_gpio_direction_output(struct gpio_chip *gpio, unsigned nr, @@ -206,10 +185,7 @@ static int ichx_gpio_direction_output(struct gpio_chip *gpio, unsigned nr, * Try setting pin as an output and verify it worked since many pins * are input-only. */ - if (ichx_write_bit(GPIO_IO_SEL, nr, 0, 1)) - return -EINVAL; - - return 0; + return ichx_write_bit(GPIO_IO_SEL, nr, 0, 1); } static int ichx_gpio_get(struct gpio_chip *chip, unsigned nr) @@ -284,7 +260,7 @@ static void ichx_gpiolib_setup(struct gpio_chip *chip) { chip->owner = THIS_MODULE; chip->label = DRV_NAME; - chip->parent = &ichx_priv.dev->dev; + chip->parent = ichx_priv.dev; /* Allow chip-specific overrides of request()/get() */ chip->request = ichx_priv.desc->request ? @@ -407,15 +383,14 @@ static int ichx_gpio_request_regions(struct device *dev, static int ichx_gpio_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; + struct lpc_ich_info *ich_info = dev_get_platdata(dev); struct resource *res_base, *res_pm; int err; - struct lpc_ich_info *ich_info = dev_get_platdata(&pdev->dev); if (!ich_info) return -ENODEV; - ichx_priv.dev = pdev; - switch (ich_info->gpio_version) { case ICH_I3100_GPIO: ichx_priv.desc = &i3100_desc; @@ -445,19 +420,21 @@ static int ichx_gpio_probe(struct platform_device *pdev) return -ENODEV; } + ichx_priv.dev = dev; spin_lock_init(&ichx_priv.lock); + res_base = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_GPIO); - ichx_priv.use_gpio = ich_info->use_gpio; - err = ichx_gpio_request_regions(&pdev->dev, res_base, pdev->name, - ichx_priv.use_gpio); + err = ichx_gpio_request_regions(dev, res_base, pdev->name, + ich_info->use_gpio); if (err) return err; ichx_priv.gpio_base = res_base; + ichx_priv.use_gpio = ich_info->use_gpio; /* * If necessary, determine the I/O address of ACPI/power management - * registers which are needed to read the the GPE0 register for GPI pins + * registers which are needed to read the GPE0 register for GPI pins * 0 - 15 on some chipsets. 
*/ if (!ichx_priv.desc->uses_gpe0) @@ -465,13 +442,13 @@ static int ichx_gpio_probe(struct platform_device *pdev) res_pm = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_GPE0); if (!res_pm) { - pr_warn("ACPI BAR is unavailable, GPI 0 - 15 unavailable\n"); + dev_warn(dev, "ACPI BAR is unavailable, GPI 0 - 15 unavailable\n"); goto init; } - if (!devm_request_region(&pdev->dev, res_pm->start, - resource_size(res_pm), pdev->name)) { - pr_warn("ACPI BAR is busy, GPI 0 - 15 unavailable\n"); + if (!devm_request_region(dev, res_pm->start, resource_size(res_pm), + pdev->name)) { + dev_warn(dev, "ACPI BAR is busy, GPI 0 - 15 unavailable\n"); goto init; } @@ -481,12 +458,12 @@ init: ichx_gpiolib_setup(&ichx_priv.chip); err = gpiochip_add_data(&ichx_priv.chip, NULL); if (err) { - pr_err("Failed to register GPIOs\n"); + dev_err(dev, "Failed to register GPIOs\n"); return err; } - pr_info("GPIO from %d to %d on %s\n", ichx_priv.chip.base, - ichx_priv.chip.base + ichx_priv.chip.ngpio - 1, DRV_NAME); + dev_info(dev, "GPIO from %d to %d\n", ichx_priv.chip.base, + ichx_priv.chip.base + ichx_priv.chip.ngpio - 1); return 0; } diff --git a/drivers/gpio/gpio-intel-mid.c b/drivers/gpio/gpio-intel-mid.c index 028d64c2cb1e..4e803baf980e 100644 --- a/drivers/gpio/gpio-intel-mid.c +++ b/drivers/gpio/gpio-intel-mid.c @@ -1,16 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Intel MID GPIO driver * * Copyright (c) 2008-2014,2016 Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ /* Supports: @@ -20,12 +12,11 @@ */ #include +#include #include #include #include -#include #include -#include #include #include #include @@ -273,9 +264,8 @@ static const struct pci_device_id intel_gpio_ids[] = { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x08f7), .driver_data = (kernel_ulong_t)&gpio_cloverview_core, }, - { 0 } + { } }; -MODULE_DEVICE_TABLE(pci, intel_gpio_ids); static void intel_mid_irq_handler(struct irq_desc *desc) { diff --git a/drivers/gpio/gpio-ks8695.c b/drivers/gpio/gpio-ks8695.c index 55d562e1278e..d6d6140ffc40 100644 --- a/drivers/gpio/gpio-ks8695.c +++ b/drivers/gpio/gpio-ks8695.c @@ -282,22 +282,13 @@ static int ks8695_gpio_show(struct seq_file *s, void *unused) return 0; } -static int ks8695_gpio_open(struct inode *inode, struct file *file) -{ - return single_open(file, ks8695_gpio_show, NULL); -} - -static const struct file_operations ks8695_gpio_operations = { - .open = ks8695_gpio_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(ks8695_gpio); static int __init ks8695_gpio_debugfs_init(void) { /* /sys/kernel/debug/ks8695_gpio */ - (void) debugfs_create_file("ks8695_gpio", S_IFREG | S_IRUGO, NULL, NULL, &ks8695_gpio_operations); + debugfs_create_file("ks8695_gpio", S_IFREG | S_IRUGO, NULL, NULL, + &ks8695_gpio_fops); return 0; } postcore_initcall(ks8695_gpio_debugfs_init); diff --git a/drivers/gpio/gpio-lpc18xx.c b/drivers/gpio/gpio-lpc18xx.c index f12e02e1016d..d441dbaed7a3 100644 --- a/drivers/gpio/gpio-lpc18xx.c +++ b/drivers/gpio/gpio-lpc18xx.c @@ -1,20 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 /* * GPIO driver for NXP LPC18xx/43xx. 
* + * Copyright (C) 2018 Vladimir Zapolskiy * Copyright (C) 2015 Joachim Eastwood * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * */ #include #include #include +#include #include #include +#include #include +#include #include #include @@ -24,13 +25,246 @@ #define LPC18XX_MAX_PORTS 8 #define LPC18XX_PINS_PER_PORT 32 +/* LPC18xx GPIO pin interrupt controller register offsets */ +#define LPC18XX_GPIO_PIN_IC_ISEL 0x00 +#define LPC18XX_GPIO_PIN_IC_IENR 0x04 +#define LPC18XX_GPIO_PIN_IC_SIENR 0x08 +#define LPC18XX_GPIO_PIN_IC_CIENR 0x0c +#define LPC18XX_GPIO_PIN_IC_IENF 0x10 +#define LPC18XX_GPIO_PIN_IC_SIENF 0x14 +#define LPC18XX_GPIO_PIN_IC_CIENF 0x18 +#define LPC18XX_GPIO_PIN_IC_RISE 0x1c +#define LPC18XX_GPIO_PIN_IC_FALL 0x20 +#define LPC18XX_GPIO_PIN_IC_IST 0x24 + +#define NR_LPC18XX_GPIO_PIN_IC_IRQS 8 + +struct lpc18xx_gpio_pin_ic { + void __iomem *base; + struct irq_domain *domain; + struct raw_spinlock lock; +}; + struct lpc18xx_gpio_chip { struct gpio_chip gpio; void __iomem *base; struct clk *clk; + struct lpc18xx_gpio_pin_ic *pin_ic; spinlock_t lock; }; +static inline void lpc18xx_gpio_pin_ic_isel(struct lpc18xx_gpio_pin_ic *ic, + u32 pin, bool set) +{ + u32 val = readl_relaxed(ic->base + LPC18XX_GPIO_PIN_IC_ISEL); + + if (set) + val &= ~BIT(pin); + else + val |= BIT(pin); + + writel_relaxed(val, ic->base + LPC18XX_GPIO_PIN_IC_ISEL); +} + +static inline void lpc18xx_gpio_pin_ic_set(struct lpc18xx_gpio_pin_ic *ic, + u32 pin, u32 reg) +{ + writel_relaxed(BIT(pin), ic->base + reg); +} + +static void lpc18xx_gpio_pin_ic_mask(struct irq_data *d) +{ + struct lpc18xx_gpio_pin_ic *ic = d->chip_data; + u32 type = irqd_get_trigger_type(d); + + raw_spin_lock(&ic->lock); + + if (type & IRQ_TYPE_LEVEL_MASK || type & IRQ_TYPE_EDGE_RISING) + lpc18xx_gpio_pin_ic_set(ic, d->hwirq, + LPC18XX_GPIO_PIN_IC_CIENR); + + if (type & IRQ_TYPE_EDGE_FALLING) + lpc18xx_gpio_pin_ic_set(ic, d->hwirq, + LPC18XX_GPIO_PIN_IC_CIENF); + + raw_spin_unlock(&ic->lock); + + irq_chip_mask_parent(d); +} + +static void lpc18xx_gpio_pin_ic_unmask(struct irq_data *d) +{ + struct lpc18xx_gpio_pin_ic *ic = d->chip_data; + u32 type = irqd_get_trigger_type(d); + + raw_spin_lock(&ic->lock); + + if (type & IRQ_TYPE_LEVEL_MASK || type & IRQ_TYPE_EDGE_RISING) + lpc18xx_gpio_pin_ic_set(ic, d->hwirq, + LPC18XX_GPIO_PIN_IC_SIENR); + + if (type & IRQ_TYPE_EDGE_FALLING) + lpc18xx_gpio_pin_ic_set(ic, d->hwirq, + LPC18XX_GPIO_PIN_IC_SIENF); + + raw_spin_unlock(&ic->lock); + + irq_chip_unmask_parent(d); +} + +static void lpc18xx_gpio_pin_ic_eoi(struct irq_data *d) +{ + struct lpc18xx_gpio_pin_ic *ic = d->chip_data; + u32 type = irqd_get_trigger_type(d); + + raw_spin_lock(&ic->lock); + + if (type & IRQ_TYPE_EDGE_BOTH) + lpc18xx_gpio_pin_ic_set(ic, d->hwirq, + LPC18XX_GPIO_PIN_IC_IST); + + raw_spin_unlock(&ic->lock); + + irq_chip_eoi_parent(d); +} + +static int lpc18xx_gpio_pin_ic_set_type(struct irq_data *d, unsigned int type) +{ + struct lpc18xx_gpio_pin_ic *ic = d->chip_data; + + raw_spin_lock(&ic->lock); + + if (type & IRQ_TYPE_LEVEL_HIGH) { + lpc18xx_gpio_pin_ic_isel(ic, d->hwirq, true); + lpc18xx_gpio_pin_ic_set(ic, d->hwirq, + LPC18XX_GPIO_PIN_IC_SIENF); + } else if (type & IRQ_TYPE_LEVEL_LOW) { + lpc18xx_gpio_pin_ic_isel(ic, d->hwirq, true); + lpc18xx_gpio_pin_ic_set(ic, d->hwirq, + LPC18XX_GPIO_PIN_IC_CIENF); + } else { + lpc18xx_gpio_pin_ic_isel(ic, d->hwirq, false); + } + + 
raw_spin_unlock(&ic->lock); + + return 0; +} + +static struct irq_chip lpc18xx_gpio_pin_ic = { + .name = "LPC18xx GPIO pin", + .irq_mask = lpc18xx_gpio_pin_ic_mask, + .irq_unmask = lpc18xx_gpio_pin_ic_unmask, + .irq_eoi = lpc18xx_gpio_pin_ic_eoi, + .irq_set_type = lpc18xx_gpio_pin_ic_set_type, + .flags = IRQCHIP_SET_TYPE_MASKED, +}; + +static int lpc18xx_gpio_pin_ic_domain_alloc(struct irq_domain *domain, + unsigned int virq, + unsigned int nr_irqs, void *data) +{ + struct irq_fwspec parent_fwspec, *fwspec = data; + struct lpc18xx_gpio_pin_ic *ic = domain->host_data; + irq_hw_number_t hwirq; + int ret; + + if (nr_irqs != 1) + return -EINVAL; + + hwirq = fwspec->param[0]; + if (hwirq >= NR_LPC18XX_GPIO_PIN_IC_IRQS) + return -EINVAL; + + /* + * All LPC18xx/LPC43xx GPIO pin hardware interrupts are translated + * into edge interrupts 32...39 on parent Cortex-M3/M4 NVIC + */ + parent_fwspec.fwnode = domain->parent->fwnode; + parent_fwspec.param_count = 1; + parent_fwspec.param[0] = hwirq + 32; + + ret = irq_domain_alloc_irqs_parent(domain, virq, 1, &parent_fwspec); + if (ret < 0) { + pr_err("failed to allocate parent irq %u: %d\n", + parent_fwspec.param[0], ret); + return ret; + } + + return irq_domain_set_hwirq_and_chip(domain, virq, hwirq, + &lpc18xx_gpio_pin_ic, ic); +} + +static const struct irq_domain_ops lpc18xx_gpio_pin_ic_domain_ops = { + .alloc = lpc18xx_gpio_pin_ic_domain_alloc, + .xlate = irq_domain_xlate_twocell, + .free = irq_domain_free_irqs_common, +}; + +static int lpc18xx_gpio_pin_ic_probe(struct lpc18xx_gpio_chip *gc) +{ + struct device *dev = gc->gpio.parent; + struct irq_domain *parent_domain; + struct device_node *parent_node; + struct lpc18xx_gpio_pin_ic *ic; + struct resource res; + int ret, index; + + parent_node = of_irq_find_parent(dev->of_node); + if (!parent_node) + return -ENXIO; + + parent_domain = irq_find_host(parent_node); + of_node_put(parent_node); + if (!parent_domain) + return -ENXIO; + + ic = devm_kzalloc(dev, sizeof(*ic), GFP_KERNEL); + if (!ic) + return -ENOMEM; + + index = of_property_match_string(dev->of_node, "reg-names", + "gpio-pin-ic"); + if (index < 0) { + ret = -ENODEV; + goto free_ic; + } + + ret = of_address_to_resource(dev->of_node, index, &res); + if (ret < 0) + goto free_ic; + + ic->base = devm_ioremap_resource(dev, &res); + if (IS_ERR(ic->base)) { + ret = PTR_ERR(ic->base); + goto free_ic; + } + + raw_spin_lock_init(&ic->lock); + + ic->domain = irq_domain_add_hierarchy(parent_domain, 0, + NR_LPC18XX_GPIO_PIN_IC_IRQS, + dev->of_node, + &lpc18xx_gpio_pin_ic_domain_ops, + ic); + if (!ic->domain) { + pr_err("unable to add irq domain\n"); + ret = -ENODEV; + goto free_iomap; + } + + gc->pin_ic = ic; + + return 0; + +free_iomap: + devm_iounmap(dev, ic->base); +free_ic: + devm_kfree(dev, ic); + + return ret; +} + static void lpc18xx_gpio_set(struct gpio_chip *chip, unsigned offset, int value) { struct lpc18xx_gpio_chip *gc = gpiochip_get_data(chip); @@ -92,45 +326,62 @@ static const struct gpio_chip lpc18xx_chip = { static int lpc18xx_gpio_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; struct lpc18xx_gpio_chip *gc; - struct resource *res; - int ret; + int index, ret; - gc = devm_kzalloc(&pdev->dev, sizeof(*gc), GFP_KERNEL); + gc = devm_kzalloc(dev, sizeof(*gc), GFP_KERNEL); if (!gc) return -ENOMEM; gc->gpio = lpc18xx_chip; platform_set_drvdata(pdev, gc); - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - gc->base = devm_ioremap_resource(&pdev->dev, res); + index = of_property_match_string(dev->of_node, "reg-names", 
"gpio"); + if (index < 0) { + /* To support backward compatibility take the first resource */ + struct resource *res; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + gc->base = devm_ioremap_resource(dev, res); + } else { + struct resource res; + + ret = of_address_to_resource(dev->of_node, index, &res); + if (ret < 0) + return ret; + + gc->base = devm_ioremap_resource(dev, &res); + } if (IS_ERR(gc->base)) return PTR_ERR(gc->base); - gc->clk = devm_clk_get(&pdev->dev, NULL); + gc->clk = devm_clk_get(dev, NULL); if (IS_ERR(gc->clk)) { - dev_err(&pdev->dev, "input clock not found\n"); + dev_err(dev, "input clock not found\n"); return PTR_ERR(gc->clk); } ret = clk_prepare_enable(gc->clk); if (ret) { - dev_err(&pdev->dev, "unable to enable clock\n"); + dev_err(dev, "unable to enable clock\n"); return ret; } spin_lock_init(&gc->lock); - gc->gpio.parent = &pdev->dev; + gc->gpio.parent = dev; - ret = gpiochip_add_data(&gc->gpio, gc); + ret = devm_gpiochip_add_data(dev, &gc->gpio, gc); if (ret) { - dev_err(&pdev->dev, "failed to add gpio chip\n"); + dev_err(dev, "failed to add gpio chip\n"); clk_disable_unprepare(gc->clk); return ret; } + /* On error GPIO pin interrupt controller just won't be registered */ + lpc18xx_gpio_pin_ic_probe(gc); + return 0; } @@ -138,7 +389,9 @@ static int lpc18xx_gpio_remove(struct platform_device *pdev) { struct lpc18xx_gpio_chip *gc = platform_get_drvdata(pdev); - gpiochip_remove(&gc->gpio); + if (gc->pin_ic) + irq_domain_remove(gc->pin_ic->domain); + clk_disable_unprepare(gc->clk); return 0; @@ -161,5 +414,6 @@ static struct platform_driver lpc18xx_gpio_driver = { module_platform_driver(lpc18xx_gpio_driver); MODULE_AUTHOR("Joachim Eastwood "); +MODULE_AUTHOR("Vladimir Zapolskiy "); MODULE_DESCRIPTION("GPIO driver for LPC18xx/43xx"); MODULE_LICENSE("GPL v2"); diff --git a/drivers/gpio/gpio-lynxpoint.c b/drivers/gpio/gpio-lynxpoint.c index b5b5e500e72c..31b4a091ab60 100644 --- a/drivers/gpio/gpio-lynxpoint.c +++ b/drivers/gpio/gpio-lynxpoint.c @@ -1,36 +1,22 @@ +// SPDX-License-Identifier: GPL-2.0 /* * GPIO controller driver for Intel Lynxpoint PCH chipset> * Copyright (c) 2012, Intel Corporation. * * Author: Mathias Nyman - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * - * You should have received a copy of the GNU General Public License along with - * this program; if not, write to the Free Software Foundation, Inc., - * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 
- * */ -#include -#include -#include -#include +#include #include -#include #include -#include -#include +#include +#include +#include +#include #include #include -#include +#include +#include /* LynxPoint chipset has support for 94 gpio pins */ @@ -240,21 +226,23 @@ static void lp_gpio_irq_handler(struct irq_desc *desc) struct gpio_chip *gc = irq_desc_get_handler_data(desc); struct lp_gpio *lg = gpiochip_get_data(gc); struct irq_chip *chip = irq_data_get_irq_chip(data); - u32 base, pin, mask; unsigned long reg, ena, pending; + u32 base, pin; /* check from GPIO controller which pin triggered the interrupt */ for (base = 0; base < lg->chip.ngpio; base += 32) { reg = lp_gpio_reg(&lg->chip, base, LP_INT_STAT); ena = lp_gpio_reg(&lg->chip, base, LP_INT_ENABLE); - while ((pending = (inl(reg) & inl(ena)))) { + /* Only interrupts that are enabled */ + pending = inl(reg) & inl(ena); + + for_each_set_bit(pin, &pending, 32) { unsigned irq; - pin = __ffs(pending); - mask = BIT(pin); /* Clear before handling so we don't lose an edge */ - outl(mask, reg); + outl(BIT(pin), reg); + irq = irq_find_mapping(lg->chip.irq.domain, base + pin); generic_handle_irq(irq); } @@ -408,8 +396,7 @@ static int lp_gpio_runtime_resume(struct device *dev) static int lp_gpio_resume(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct lp_gpio *lg = platform_get_drvdata(pdev); + struct lp_gpio *lg = dev_get_drvdata(dev); unsigned long reg; int i; @@ -467,5 +454,5 @@ module_exit(lp_gpio_exit); MODULE_AUTHOR("Mathias Nyman (Intel)"); MODULE_DESCRIPTION("GPIO interface for Intel Lynxpoint"); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); MODULE_ALIAS("platform:lp_gpio"); diff --git a/drivers/gpio/gpio-merrifield.c b/drivers/gpio/gpio-merrifield.c index 97421bd4a60f..7c659fdaa6d5 100644 --- a/drivers/gpio/gpio-merrifield.c +++ b/drivers/gpio/gpio-merrifield.c @@ -1,18 +1,14 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Intel Merrifield SoC GPIO driver * * Copyright (c) 2016 Intel Corporation. * Author: Andy Shevchenko - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
*/ #include #include #include -#include #include #include #include diff --git a/drivers/gpio/gpio-mt7621.c b/drivers/gpio/gpio-mt7621.c index d72af6f6cdbd..00e954f22bc9 100644 --- a/drivers/gpio/gpio-mt7621.c +++ b/drivers/gpio/gpio-mt7621.c @@ -244,6 +244,8 @@ mediatek_gpio_bank_probe(struct device *dev, rg->chip.of_xlate = mediatek_gpio_xlate; rg->chip.label = devm_kasprintf(dev, GFP_KERNEL, "%s-bank%d", dev_name(dev), bank); + if (!rg->chip.label) + return -ENOMEM; ret = devm_gpiochip_add_data(dev, &rg->chip, mtk); if (ret < 0) { @@ -295,6 +297,7 @@ mediatek_gpio_probe(struct platform_device *pdev) struct device_node *np = dev->of_node; struct mtk *mtk; int i; + int ret; mtk = devm_kzalloc(dev, sizeof(*mtk), GFP_KERNEL); if (!mtk) @@ -309,8 +312,11 @@ mediatek_gpio_probe(struct platform_device *pdev) platform_set_drvdata(pdev, mtk); mediatek_gpio_irq_chip.name = dev_name(dev); - for (i = 0; i < MTK_BANK_CNT; i++) - mediatek_gpio_bank_probe(dev, np, i); + for (i = 0; i < MTK_BANK_CNT; i++) { + ret = mediatek_gpio_bank_probe(dev, np, i); + if (ret) + return ret; + } return 0; } diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c index adc768f908f1..7d5c55494ccd 100644 --- a/drivers/gpio/gpio-mvebu.c +++ b/drivers/gpio/gpio-mvebu.c @@ -608,7 +608,7 @@ static int mvebu_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm) ret = -EBUSY; } else { desc = gpiochip_request_own_desc(&mvchip->chip, - pwm->hwpwm, "mvebu-pwm"); + pwm->hwpwm, "mvebu-pwm", 0); if (IS_ERR(desc)) { ret = PTR_ERR(desc); goto out; diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c index 995cf0b9e0b1..2d1dfa1e0745 100644 --- a/drivers/gpio/gpio-mxc.c +++ b/drivers/gpio/gpio-mxc.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -550,33 +551,38 @@ static void mxc_gpio_restore_regs(struct mxc_gpio_port *port) writel(port->gpio_saved_reg.dr, port->base + GPIO_DR); } -static int __maybe_unused mxc_gpio_noirq_suspend(struct device *dev) +static int mxc_gpio_syscore_suspend(void) { - struct platform_device *pdev = to_platform_device(dev); - struct mxc_gpio_port *port = platform_get_drvdata(pdev); + struct mxc_gpio_port *port; - mxc_gpio_save_regs(port); - clk_disable_unprepare(port->clk); + /* walk through all ports */ + list_for_each_entry(port, &mxc_gpio_ports, node) { + mxc_gpio_save_regs(port); + clk_disable_unprepare(port->clk); + } return 0; } -static int __maybe_unused mxc_gpio_noirq_resume(struct device *dev) +static void mxc_gpio_syscore_resume(void) { - struct platform_device *pdev = to_platform_device(dev); - struct mxc_gpio_port *port = platform_get_drvdata(pdev); + struct mxc_gpio_port *port; int ret; - ret = clk_prepare_enable(port->clk); - if (ret) - return ret; - mxc_gpio_restore_regs(port); - - return 0; + /* walk through all ports */ + list_for_each_entry(port, &mxc_gpio_ports, node) { + ret = clk_prepare_enable(port->clk); + if (ret) { + pr_err("mxc: failed to enable gpio clock %d\n", ret); + return; + } + mxc_gpio_restore_regs(port); + } } -static const struct dev_pm_ops mxc_gpio_dev_pm_ops = { - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mxc_gpio_noirq_suspend, mxc_gpio_noirq_resume) +static struct syscore_ops mxc_gpio_syscore_ops = { + .suspend = mxc_gpio_syscore_suspend, + .resume = mxc_gpio_syscore_resume, }; static struct platform_driver mxc_gpio_driver = { @@ -584,7 +590,6 @@ static struct platform_driver mxc_gpio_driver = { .name = "gpio-mxc", .of_match_table = mxc_gpio_dt_ids, .suppress_bind_attrs = true, - .pm = &mxc_gpio_dev_pm_ops, }, .probe = 
mxc_gpio_probe, .id_table = mxc_gpio_devtype, @@ -592,6 +597,8 @@ static struct platform_driver mxc_gpio_driver = { static int __init gpio_mxc_init(void) { + register_syscore_ops(&mxc_gpio_syscore_ops); + return platform_driver_register(&mxc_gpio_driver); } subsys_initcall(gpio_mxc_init); diff --git a/drivers/gpio/gpio-mxs.c b/drivers/gpio/gpio-mxs.c index ea874fd033a5..5e5437a2c607 100644 --- a/drivers/gpio/gpio-mxs.c +++ b/drivers/gpio/gpio-mxs.c @@ -84,7 +84,7 @@ static int mxs_gpio_set_irq_type(struct irq_data *d, unsigned int type) port->both_edges &= ~pin_mask; switch (type) { case IRQ_TYPE_EDGE_BOTH: - val = port->gc.get(&port->gc, d->hwirq); + val = readl(port->base + PINCTRL_DIN(port)) & pin_mask; if (val) edge = GPIO_INT_FALL_EDGE; else diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c index 5b3e83cd7137..f4e9921fa966 100644 --- a/drivers/gpio/gpio-omap.c +++ b/drivers/gpio/gpio-omap.c @@ -936,8 +936,7 @@ omap2_gpio_disable_level_quirk(struct gpio_bank *bank) static int omap_mpuio_suspend_noirq(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct gpio_bank *bank = platform_get_drvdata(pdev); + struct gpio_bank *bank = dev_get_drvdata(dev); void __iomem *mask_reg = bank->base + OMAP_MPUIO_GPIO_MASKIT / bank->stride; unsigned long flags; @@ -951,8 +950,7 @@ static int omap_mpuio_suspend_noirq(struct device *dev) static int omap_mpuio_resume_noirq(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct gpio_bank *bank = platform_get_drvdata(pdev); + struct gpio_bank *bank = dev_get_drvdata(dev); void __iomem *mask_reg = bank->base + OMAP_MPUIO_GPIO_MASKIT / bank->stride; unsigned long flags; @@ -1635,8 +1633,7 @@ static void omap_gpio_restore_context(struct gpio_bank *bank) static int __maybe_unused omap_gpio_runtime_suspend(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct gpio_bank *bank = platform_get_drvdata(pdev); + struct gpio_bank *bank = dev_get_drvdata(dev); unsigned long flags; int error = 0; @@ -1656,8 +1653,7 @@ unlock: static int __maybe_unused omap_gpio_runtime_resume(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct gpio_bank *bank = platform_get_drvdata(pdev); + struct gpio_bank *bank = dev_get_drvdata(dev); unsigned long flags; int error = 0; diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c index 023a32cfac42..83617fdc661d 100644 --- a/drivers/gpio/gpio-pca953x.c +++ b/drivers/gpio/gpio-pca953x.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include @@ -30,6 +31,8 @@ #define PCA953X_INVERT 0x02 #define PCA953X_DIRECTION 0x03 +#define REG_ADDR_MASK 0x3f +#define REG_ADDR_EXT 0x40 #define REG_ADDR_AI 0x80 #define PCA957X_IN 0x00 @@ -58,7 +61,7 @@ #define PCA_GPIO_MASK 0x00FF #define PCAL_GPIO_MASK 0x1f -#define PCAL_PINCTRL_MASK 0xe0 +#define PCAL_PINCTRL_MASK 0x60 #define PCA_INT 0x0100 #define PCA_PCAL 0x0200 @@ -119,25 +122,27 @@ struct pca953x_reg_config { int direction; int output; int input; + int invert; }; static const struct pca953x_reg_config pca953x_regs = { .direction = PCA953X_DIRECTION, .output = PCA953X_OUTPUT, .input = PCA953X_INPUT, + .invert = PCA953X_INVERT, }; static const struct pca953x_reg_config pca957x_regs = { .direction = PCA957X_CFG, .output = PCA957X_OUT, .input = PCA957X_IN, + .invert = PCA957X_INVRT, }; struct pca953x_chip { unsigned gpio_start; - u8 reg_output[MAX_BANK]; - u8 reg_direction[MAX_BANK]; struct mutex i2c_lock; + struct regmap 
*regmap; #ifdef CONFIG_GPIO_PCA953X_IRQ struct mutex irq_lock; @@ -154,87 +159,177 @@ struct pca953x_chip { struct regulator *regulator; const struct pca953x_reg_config *regs; - - int (*write_regs)(struct pca953x_chip *, int, u8 *); - int (*read_regs)(struct pca953x_chip *, int, u8 *); }; -static int pca953x_read_single(struct pca953x_chip *chip, int reg, u32 *val, - int off) +static int pca953x_bank_shift(struct pca953x_chip *chip) { - int ret; - int bank_shift = fls((chip->gpio_chip.ngpio - 1) / BANK_SZ); - int offset = off / BANK_SZ; + return fls((chip->gpio_chip.ngpio - 1) / BANK_SZ); +} - ret = i2c_smbus_read_byte_data(chip->client, - (reg << bank_shift) + offset); - *val = ret; +#define PCA953x_BANK_INPUT BIT(0) +#define PCA953x_BANK_OUTPUT BIT(1) +#define PCA953x_BANK_POLARITY BIT(2) +#define PCA953x_BANK_CONFIG BIT(3) - if (ret < 0) { - dev_err(&chip->client->dev, "failed reading register\n"); - return ret; +#define PCA957x_BANK_INPUT BIT(0) +#define PCA957x_BANK_POLARITY BIT(1) +#define PCA957x_BANK_BUSHOLD BIT(2) +#define PCA957x_BANK_CONFIG BIT(4) +#define PCA957x_BANK_OUTPUT BIT(5) + +#define PCAL9xxx_BANK_IN_LATCH BIT(8 + 2) +#define PCAL9xxx_BANK_IRQ_MASK BIT(8 + 5) +#define PCAL9xxx_BANK_IRQ_STAT BIT(8 + 6) + +/* + * We care about the following registers: + * - Standard set, below 0x40, each port can be replicated up to 8 times + * - PCA953x standard + * Input port 0x00 + 0 * bank_size R + * Output port 0x00 + 1 * bank_size RW + * Polarity Inversion port 0x00 + 2 * bank_size RW + * Configuration port 0x00 + 3 * bank_size RW + * - PCA957x with mixed up registers + * Input port 0x00 + 0 * bank_size R + * Polarity Inversion port 0x00 + 1 * bank_size RW + * Bus hold port 0x00 + 2 * bank_size RW + * Configuration port 0x00 + 4 * bank_size RW + * Output port 0x00 + 5 * bank_size RW + * + * - Extended set, above 0x40, often chip specific. + * - PCAL6524/PCAL9555A with custom PCAL IRQ handling: + * Input latch register 0x40 + 2 * bank_size RW + * Interrupt mask register 0x40 + 5 * bank_size RW + * Interrupt status register 0x40 + 6 * bank_size R + * + * - Registers with bit 0x80 set, the AI bit + * The bit is cleared and the registers fall into one of the + * categories above. + */ + +static bool pca953x_check_register(struct pca953x_chip *chip, unsigned int reg, + u32 checkbank) +{ + int bank_shift = pca953x_bank_shift(chip); + int bank = (reg & REG_ADDR_MASK) >> bank_shift; + int offset = reg & (BIT(bank_shift) - 1); + + /* Special PCAL extended register check. */ + if (reg & REG_ADDR_EXT) { + if (!(chip->driver_data & PCA_PCAL)) + return false; + bank += 8; } - return 0; + /* Register is not in the matching bank. */ + if (!(BIT(bank) & checkbank)) + return false; + + /* Register is not within allowed range of bank. 
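+ * For example, a 24-line chip has NBANK() == 3 but bank_shift == 2, + * so the Output port spans registers 0x04..0x06, while register 0x07 + * (bank 1, offset 3) is rejected by the check below.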
*/ + if (offset >= NBANK(chip)) + return false; + + return true; } -static int pca953x_write_single(struct pca953x_chip *chip, int reg, u32 val, - int off) +static bool pca953x_readable_register(struct device *dev, unsigned int reg) { - int ret; - int bank_shift = fls((chip->gpio_chip.ngpio - 1) / BANK_SZ); - int offset = off / BANK_SZ; + struct pca953x_chip *chip = dev_get_drvdata(dev); + u32 bank; - ret = i2c_smbus_write_byte_data(chip->client, - (reg << bank_shift) + offset, val); + if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) { + bank = PCA953x_BANK_INPUT | PCA953x_BANK_OUTPUT | + PCA953x_BANK_POLARITY | PCA953x_BANK_CONFIG; + } else { + bank = PCA957x_BANK_INPUT | PCA957x_BANK_OUTPUT | + PCA957x_BANK_POLARITY | PCA957x_BANK_CONFIG | + PCA957x_BANK_BUSHOLD; + } - if (ret < 0) { - dev_err(&chip->client->dev, "failed writing register\n"); - return ret; + if (chip->driver_data & PCA_PCAL) { + bank |= PCAL9xxx_BANK_IN_LATCH | PCAL9xxx_BANK_IRQ_MASK | + PCAL9xxx_BANK_IRQ_STAT; } - return 0; + return pca953x_check_register(chip, reg, bank); } -static int pca953x_write_regs_8(struct pca953x_chip *chip, int reg, u8 *val) +static bool pca953x_writeable_register(struct device *dev, unsigned int reg) { - return i2c_smbus_write_byte_data(chip->client, reg, *val); -} + struct pca953x_chip *chip = dev_get_drvdata(dev); + u32 bank; -static int pca953x_write_regs_16(struct pca953x_chip *chip, int reg, u8 *val) -{ - u16 word = get_unaligned((u16 *)val); + if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) { + bank = PCA953x_BANK_OUTPUT | PCA953x_BANK_POLARITY | + PCA953x_BANK_CONFIG; + } else { + bank = PCA957x_BANK_OUTPUT | PCA957x_BANK_POLARITY | + PCA957x_BANK_CONFIG | PCA957x_BANK_BUSHOLD; + } - return i2c_smbus_write_word_data(chip->client, reg << 1, word); + if (chip->driver_data & PCA_PCAL) + bank |= PCAL9xxx_BANK_IN_LATCH | PCAL9xxx_BANK_IRQ_MASK; + + return pca953x_check_register(chip, reg, bank); } -static int pca957x_write_regs_16(struct pca953x_chip *chip, int reg, u8 *val) +static bool pca953x_volatile_register(struct device *dev, unsigned int reg) { - int ret; + struct pca953x_chip *chip = dev_get_drvdata(dev); + u32 bank; - ret = i2c_smbus_write_byte_data(chip->client, reg << 1, val[0]); - if (ret < 0) - return ret; + if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) + bank = PCA953x_BANK_INPUT; + else + bank = PCA957x_BANK_INPUT; + + if (chip->driver_data & PCA_PCAL) + bank |= PCAL9xxx_BANK_IRQ_STAT; - return i2c_smbus_write_byte_data(chip->client, (reg << 1) + 1, val[1]); + return pca953x_check_register(chip, reg, bank); } -static int pca953x_write_regs_24(struct pca953x_chip *chip, int reg, u8 *val) +const struct regmap_config pca953x_i2c_regmap = { + .reg_bits = 8, + .val_bits = 8, + + .readable_reg = pca953x_readable_register, + .writeable_reg = pca953x_writeable_register, + .volatile_reg = pca953x_volatile_register, + + .cache_type = REGCACHE_RBTREE, + .max_register = 0x7f, +}; + +static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off, + bool write, bool addrinc) { - int bank_shift = fls((chip->gpio_chip.ngpio - 1) / BANK_SZ); + int bank_shift = pca953x_bank_shift(chip); int addr = (reg & PCAL_GPIO_MASK) << bank_shift; int pinctrl = (reg & PCAL_PINCTRL_MASK) << 1; + u8 regaddr = pinctrl | addr | (off / BANK_SZ); + + /* Single byte read doesn't need AI bit set. 
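+ * (The AI bit asks the expander to auto-increment the register address + * after each byte transferred, so it only matters for the multi-byte + * accesses built on top of this helper.)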
*/ + if (!addrinc) + return regaddr; + + /* Chips with 24 and more GPIOs always support Auto Increment */ + if (write && NBANK(chip) > 2) + regaddr |= REG_ADDR_AI; - return i2c_smbus_write_i2c_block_data(chip->client, - pinctrl | addr | REG_ADDR_AI, - NBANK(chip), val); + /* PCA9575 needs address-increment on multi-byte writes */ + if (PCA_CHIP_TYPE(chip->driver_data) == PCA957X_TYPE) + regaddr |= REG_ADDR_AI; + + return regaddr; } static int pca953x_write_regs(struct pca953x_chip *chip, int reg, u8 *val) { - int ret = 0; + u8 regaddr = pca953x_recalc_addr(chip, reg, 0, true, true); + int ret; - ret = chip->write_regs(chip, reg, val); + ret = regmap_bulk_write(chip->regmap, regaddr, val, NBANK(chip)); if (ret < 0) { dev_err(&chip->client->dev, "failed writing register\n"); return ret; @@ -243,42 +338,12 @@ static int pca953x_write_regs(struct pca953x_chip *chip, int reg, u8 *val) return 0; } -static int pca953x_read_regs_8(struct pca953x_chip *chip, int reg, u8 *val) -{ - int ret; - - ret = i2c_smbus_read_byte_data(chip->client, reg); - *val = ret; - - return ret; -} - -static int pca953x_read_regs_16(struct pca953x_chip *chip, int reg, u8 *val) -{ - int ret; - - ret = i2c_smbus_read_word_data(chip->client, reg << 1); - put_unaligned(ret, (u16 *)val); - - return ret; -} - -static int pca953x_read_regs_24(struct pca953x_chip *chip, int reg, u8 *val) -{ - int bank_shift = fls((chip->gpio_chip.ngpio - 1) / BANK_SZ); - int addr = (reg & PCAL_GPIO_MASK) << bank_shift; - int pinctrl = (reg & PCAL_PINCTRL_MASK) << 1; - - return i2c_smbus_read_i2c_block_data(chip->client, - pinctrl | addr | REG_ADDR_AI, - NBANK(chip), val); -} - static int pca953x_read_regs(struct pca953x_chip *chip, int reg, u8 *val) { + u8 regaddr = pca953x_recalc_addr(chip, reg, 0, false, true); int ret; - ret = chip->read_regs(chip, reg, val); + ret = regmap_bulk_read(chip->regmap, regaddr, val, NBANK(chip)); if (ret < 0) { dev_err(&chip->client->dev, "failed reading register\n"); return ret; @@ -290,18 +355,13 @@ static int pca953x_read_regs(struct pca953x_chip *chip, int reg, u8 *val) static int pca953x_gpio_direction_input(struct gpio_chip *gc, unsigned off) { struct pca953x_chip *chip = gpiochip_get_data(gc); - u8 reg_val; + u8 dirreg = pca953x_recalc_addr(chip, chip->regs->direction, off, + true, false); + u8 bit = BIT(off % BANK_SZ); int ret; mutex_lock(&chip->i2c_lock); - reg_val = chip->reg_direction[off / BANK_SZ] | (1u << (off % BANK_SZ)); - - ret = pca953x_write_single(chip, chip->regs->direction, reg_val, off); - if (ret) - goto exit; - - chip->reg_direction[off / BANK_SZ] = reg_val; -exit: + ret = regmap_write_bits(chip->regmap, dirreg, bit, bit); mutex_unlock(&chip->i2c_lock); return ret; } @@ -310,31 +370,21 @@ static int pca953x_gpio_direction_output(struct gpio_chip *gc, unsigned off, int val) { struct pca953x_chip *chip = gpiochip_get_data(gc); - u8 reg_val; + u8 dirreg = pca953x_recalc_addr(chip, chip->regs->direction, off, + true, false); + u8 outreg = pca953x_recalc_addr(chip, chip->regs->output, off, + true, false); + u8 bit = BIT(off % BANK_SZ); int ret; mutex_lock(&chip->i2c_lock); /* set output level */ - if (val) - reg_val = chip->reg_output[off / BANK_SZ] - | (1u << (off % BANK_SZ)); - else - reg_val = chip->reg_output[off / BANK_SZ] - & ~(1u << (off % BANK_SZ)); - - ret = pca953x_write_single(chip, chip->regs->output, reg_val, off); + ret = regmap_write_bits(chip->regmap, outreg, bit, val ? 
bit : 0); if (ret) goto exit; - chip->reg_output[off / BANK_SZ] = reg_val; - /* then direction */ - reg_val = chip->reg_direction[off / BANK_SZ] & ~(1u << (off % BANK_SZ)); - ret = pca953x_write_single(chip, chip->regs->direction, reg_val, off); - if (ret) - goto exit; - - chip->reg_direction[off / BANK_SZ] = reg_val; + ret = regmap_write_bits(chip->regmap, dirreg, bit, 0); exit: mutex_unlock(&chip->i2c_lock); return ret; @@ -343,11 +393,14 @@ exit: static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off) { struct pca953x_chip *chip = gpiochip_get_data(gc); + u8 inreg = pca953x_recalc_addr(chip, chip->regs->input, off, + true, false); + u8 bit = BIT(off % BANK_SZ); u32 reg_val; int ret; mutex_lock(&chip->i2c_lock); - ret = pca953x_read_single(chip, chip->regs->input, ®_val, off); + ret = regmap_read(chip->regmap, inreg, ®_val); mutex_unlock(&chip->i2c_lock); if (ret < 0) { /* NOTE: diagnostic already emitted; that's all we should @@ -357,45 +410,37 @@ static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off) return 0; } - return (reg_val & (1u << (off % BANK_SZ))) ? 1 : 0; + return !!(reg_val & bit); } static void pca953x_gpio_set_value(struct gpio_chip *gc, unsigned off, int val) { struct pca953x_chip *chip = gpiochip_get_data(gc); - u8 reg_val; - int ret; + u8 outreg = pca953x_recalc_addr(chip, chip->regs->output, off, + true, false); + u8 bit = BIT(off % BANK_SZ); mutex_lock(&chip->i2c_lock); - if (val) - reg_val = chip->reg_output[off / BANK_SZ] - | (1u << (off % BANK_SZ)); - else - reg_val = chip->reg_output[off / BANK_SZ] - & ~(1u << (off % BANK_SZ)); - - ret = pca953x_write_single(chip, chip->regs->output, reg_val, off); - if (ret) - goto exit; - - chip->reg_output[off / BANK_SZ] = reg_val; -exit: + regmap_write_bits(chip->regmap, outreg, bit, val ? 
bit : 0); mutex_unlock(&chip->i2c_lock); } static int pca953x_gpio_get_direction(struct gpio_chip *gc, unsigned off) { struct pca953x_chip *chip = gpiochip_get_data(gc); + u8 dirreg = pca953x_recalc_addr(chip, chip->regs->direction, off, + true, false); + u8 bit = BIT(off % BANK_SZ); u32 reg_val; int ret; mutex_lock(&chip->i2c_lock); - ret = pca953x_read_single(chip, chip->regs->direction, ®_val, off); + ret = regmap_read(chip->regmap, dirreg, ®_val); mutex_unlock(&chip->i2c_lock); if (ret < 0) return ret; - return !!(reg_val & (1u << (off % BANK_SZ))); + return !!(reg_val & bit); } static void pca953x_gpio_set_multiple(struct gpio_chip *gc, @@ -403,14 +448,15 @@ static void pca953x_gpio_set_multiple(struct gpio_chip *gc, { struct pca953x_chip *chip = gpiochip_get_data(gc); unsigned int bank_mask, bank_val; - int bank_shift, bank; + int bank; u8 reg_val[MAX_BANK]; int ret; - bank_shift = fls((chip->gpio_chip.ngpio - 1) / BANK_SZ); - mutex_lock(&chip->i2c_lock); - memcpy(reg_val, chip->reg_output, NBANK(chip)); + ret = pca953x_read_regs(chip, chip->regs->output, reg_val); + if (ret) + goto exit; + for (bank = 0; bank < NBANK(chip); bank++) { bank_mask = mask[bank / sizeof(*mask)] >> ((bank % sizeof(*mask)) * 8); @@ -422,13 +468,7 @@ static void pca953x_gpio_set_multiple(struct gpio_chip *gc, } } - ret = i2c_smbus_write_i2c_block_data(chip->client, - chip->regs->output << bank_shift, - NBANK(chip), reg_val); - if (ret) - goto exit; - - memcpy(chip->reg_output, reg_val, NBANK(chip)); + pca953x_write_regs(chip, chip->regs->output, reg_val); exit: mutex_unlock(&chip->i2c_lock); } @@ -449,7 +489,7 @@ static void pca953x_setup_gpio(struct pca953x_chip *chip, int gpios) gc->base = chip->gpio_start; gc->ngpio = gpios; - gc->label = chip->client->name; + gc->label = dev_name(&chip->client->dev); gc->parent = &chip->client->dev; gc->owner = THIS_MODULE; gc->names = chip->names; @@ -487,6 +527,10 @@ static void pca953x_irq_bus_sync_unlock(struct irq_data *d) u8 new_irqs; int level, i; u8 invert_irq_mask[MAX_BANK]; + int reg_direction[MAX_BANK]; + + regmap_bulk_read(chip->regmap, chip->regs->direction, reg_direction, + NBANK(chip)); if (chip->driver_data & PCA_PCAL) { /* Enable latch on interrupt-enabled inputs */ @@ -502,7 +546,7 @@ static void pca953x_irq_bus_sync_unlock(struct irq_data *d) /* Look for any newly setup interrupt */ for (i = 0; i < NBANK(chip); i++) { new_irqs = chip->irq_trig_fall[i] | chip->irq_trig_raise[i]; - new_irqs &= ~chip->reg_direction[i]; + new_irqs &= reg_direction[i]; while (new_irqs) { level = __ffs(new_irqs); @@ -567,6 +611,7 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, u8 *pending) bool pending_seen = false; bool trigger_seen = false; u8 trigger[MAX_BANK]; + int reg_direction[MAX_BANK]; int ret, i; if (chip->driver_data & PCA_PCAL) { @@ -597,8 +642,10 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, u8 *pending) return false; /* Remove output pins from the equation */ + regmap_bulk_read(chip->regmap, chip->regs->direction, reg_direction, + NBANK(chip)); for (i = 0; i < NBANK(chip); i++) - cur_stat[i] &= chip->reg_direction[i]; + cur_stat[i] &= reg_direction[i]; memcpy(old_stat, chip->irq_stat, NBANK(chip)); @@ -652,6 +699,7 @@ static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base) { struct i2c_client *client = chip->client; + int reg_direction[MAX_BANK]; int ret, i; if (client->irq && irq_base != -1 @@ -666,8 +714,10 @@ static int pca953x_irq_setup(struct pca953x_chip *chip, * interrupt. 
We have to rely on the previous read for * this purpose. */ + regmap_bulk_read(chip->regmap, chip->regs->direction, + reg_direction, NBANK(chip)); for (i = 0; i < NBANK(chip); i++) - chip->irq_stat[i] &= chip->reg_direction[i]; + chip->irq_stat[i] &= reg_direction[i]; mutex_init(&chip->irq_lock); ret = devm_request_threaded_irq(&client->dev, @@ -715,20 +765,19 @@ static int pca953x_irq_setup(struct pca953x_chip *chip, } #endif -static int device_pca953x_init(struct pca953x_chip *chip, u32 invert) +static int device_pca95xx_init(struct pca953x_chip *chip, u32 invert) { int ret; u8 val[MAX_BANK]; - chip->regs = &pca953x_regs; - - ret = pca953x_read_regs(chip, chip->regs->output, chip->reg_output); - if (ret) + ret = regcache_sync_region(chip->regmap, chip->regs->output, + chip->regs->output + NBANK(chip)); + if (ret != 0) goto out; - ret = pca953x_read_regs(chip, chip->regs->direction, - chip->reg_direction); - if (ret) + ret = regcache_sync_region(chip->regmap, chip->regs->direction, + chip->regs->direction + NBANK(chip)); + if (ret != 0) goto out; /* set platform specific polarity inversion */ @@ -737,7 +786,7 @@ static int device_pca953x_init(struct pca953x_chip *chip, u32 invert) else memset(val, 0, NBANK(chip)); - ret = pca953x_write_regs(chip, PCA953X_INVERT, val); + ret = pca953x_write_regs(chip, chip->regs->invert, val); out: return ret; } @@ -747,22 +796,7 @@ static int device_pca957x_init(struct pca953x_chip *chip, u32 invert) int ret; u8 val[MAX_BANK]; - chip->regs = &pca957x_regs; - - ret = pca953x_read_regs(chip, chip->regs->output, chip->reg_output); - if (ret) - goto out; - ret = pca953x_read_regs(chip, chip->regs->direction, - chip->reg_direction); - if (ret) - goto out; - - /* set platform specific polarity inversion */ - if (invert) - memset(val, 0xFF, NBANK(chip)); - else - memset(val, 0, NBANK(chip)); - ret = pca953x_write_regs(chip, PCA957X_INVRT, val); + ret = device_pca95xx_init(chip, invert); if (ret) goto out; @@ -853,6 +887,16 @@ static int pca953x_probe(struct i2c_client *client, } } + i2c_set_clientdata(client, chip); + + chip->regmap = devm_regmap_init_i2c(client, &pca953x_i2c_regmap); + if (IS_ERR(chip->regmap)) { + ret = PTR_ERR(chip->regmap); + goto err_exit; + } + + regcache_mark_dirty(chip->regmap); + mutex_init(&chip->i2c_lock); /* * In case we have an i2c-mux controlled by a GPIO provided by an @@ -878,24 +922,13 @@ static int pca953x_probe(struct i2c_client *client, */ pca953x_setup_gpio(chip, chip->driver_data & PCA_GPIO_MASK); - if (chip->gpio_chip.ngpio <= 8) { - chip->write_regs = pca953x_write_regs_8; - chip->read_regs = pca953x_read_regs_8; - } else if (chip->gpio_chip.ngpio >= 24) { - chip->write_regs = pca953x_write_regs_24; - chip->read_regs = pca953x_read_regs_24; + if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) { + chip->regs = &pca953x_regs; + ret = device_pca95xx_init(chip, invert); } else { - if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) - chip->write_regs = pca953x_write_regs_16; - else - chip->write_regs = pca957x_write_regs_16; - chip->read_regs = pca953x_read_regs_16; - } - - if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) - ret = device_pca953x_init(chip, invert); - else + chip->regs = &pca957x_regs; ret = device_pca957x_init(chip, invert); + } if (ret) goto err_exit; @@ -914,7 +947,6 @@ static int pca953x_probe(struct i2c_client *client, dev_warn(&client->dev, "setup failed, %d\n", ret); } - i2c_set_clientdata(client, chip); return 0; err_exit: @@ -943,6 +975,91 @@ static int pca953x_remove(struct i2c_client 
*client) return ret; } +#ifdef CONFIG_PM_SLEEP +static int pca953x_regcache_sync(struct device *dev) +{ + struct pca953x_chip *chip = dev_get_drvdata(dev); + int ret; + + /* + * The ordering between direction and output is important, + * sync these registers first and only then sync the rest. + */ + ret = regcache_sync_region(chip->regmap, chip->regs->direction, + chip->regs->direction + NBANK(chip)); + if (ret != 0) { + dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret); + return ret; + } + + ret = regcache_sync_region(chip->regmap, chip->regs->output, + chip->regs->output + NBANK(chip)); + if (ret != 0) { + dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret); + return ret; + } + +#ifdef CONFIG_GPIO_PCA953X_IRQ + if (chip->driver_data & PCA_PCAL) { + ret = regcache_sync_region(chip->regmap, PCAL953X_IN_LATCH, + PCAL953X_IN_LATCH + NBANK(chip)); + if (ret != 0) { + dev_err(dev, "Failed to sync INT latch registers: %d\n", + ret); + return ret; + } + + ret = regcache_sync_region(chip->regmap, PCAL953X_INT_MASK, + PCAL953X_INT_MASK + NBANK(chip)); + if (ret != 0) { + dev_err(dev, "Failed to sync INT mask registers: %d\n", + ret); + return ret; + } + } +#endif + + return 0; +} + +static int pca953x_suspend(struct device *dev) +{ + struct pca953x_chip *chip = dev_get_drvdata(dev); + + regcache_cache_only(chip->regmap, true); + + regulator_disable(chip->regulator); + + return 0; +} + +static int pca953x_resume(struct device *dev) +{ + struct pca953x_chip *chip = dev_get_drvdata(dev); + int ret; + + ret = regulator_enable(chip->regulator); + if (ret != 0) { + dev_err(dev, "Failed to enable regulator: %d\n", ret); + return 0; + } + + regcache_cache_only(chip->regmap, false); + regcache_mark_dirty(chip->regmap); + ret = pca953x_regcache_sync(dev); + if (ret) + return ret; + + ret = regcache_sync(chip->regmap); + if (ret != 0) { + dev_err(dev, "Failed to restore register map: %d\n", ret); + return ret; + } + + return 0; +} +#endif + /* convenience to stop overlong match-table lines */ #define OF_953X(__nrgpio, __int) (void *)(__nrgpio | PCA953X_TYPE | __int) #define OF_957X(__nrgpio, __int) (void *)(__nrgpio | PCA957X_TYPE | __int) @@ -986,9 +1103,12 @@ static const struct of_device_id pca953x_dt_ids[] = { MODULE_DEVICE_TABLE(of, pca953x_dt_ids); +static SIMPLE_DEV_PM_OPS(pca953x_pm_ops, pca953x_suspend, pca953x_resume); + static struct i2c_driver pca953x_driver = { .driver = { .name = "pca953x", + .pm = &pca953x_pm_ops, .of_match_table = pca953x_dt_ids, .acpi_match_table = ACPI_PTR(pca953x_acpi_ids), }, diff --git a/drivers/gpio/gpio-pch.c b/drivers/gpio/gpio-pch.c index ffce0ab912ed..ee79e5f88b5a 100644 --- a/drivers/gpio/gpio-pch.c +++ b/drivers/gpio/gpio-pch.c @@ -1,25 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2011 LAPIS Semiconductor Co., Ltd. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; version 2 of the License. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 
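
[Returning to the pca953x PM hooks above: the suspend/resume pair follows the standard regcache pattern. A condensed sketch under the same assumptions (illustrative function names, region syncing and error handling trimmed):

	static int example_suspend(struct device *dev)
	{
		struct pca953x_chip *chip = dev_get_drvdata(dev);

		/* From here on, writes only land in the register cache. */
		regcache_cache_only(chip->regmap, true);
		return 0;
	}

	static int example_resume(struct device *dev)
	{
		struct pca953x_chip *chip = dev_get_drvdata(dev);

		regcache_cache_only(chip->regmap, false);
		/* The hardware lost state, so every cached value is stale. */
		regcache_mark_dirty(chip->regmap);
		/* Replay the cache back into the device. */
		return regcache_sync(chip->regmap);
	}

The real resume above additionally syncs the direction and output registers first, since their ordering matters on this hardware.]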
*/ -#include -#include -#include #include #include #include +#include +#include +#include #include #define PCH_EDGE_FALLING 0 @@ -171,11 +159,10 @@ static int pch_gpio_direction_input(struct gpio_chip *gpio, unsigned nr) return 0; } -#ifdef CONFIG_PM /* * Save register configuration and disable interrupts. */ -static void pch_gpio_save_reg_conf(struct pch_gpio *chip) +static void __maybe_unused pch_gpio_save_reg_conf(struct pch_gpio *chip) { chip->pch_gpio_reg.ien_reg = ioread32(&chip->reg->ien); chip->pch_gpio_reg.imask_reg = ioread32(&chip->reg->imask); @@ -185,14 +172,13 @@ static void pch_gpio_save_reg_conf(struct pch_gpio *chip) if (chip->ioh == INTEL_EG20T_PCH) chip->pch_gpio_reg.im1_reg = ioread32(&chip->reg->im1); if (chip->ioh == OKISEMI_ML7223n_IOH) - chip->pch_gpio_reg.gpio_use_sel_reg =\ - ioread32(&chip->reg->gpio_use_sel); + chip->pch_gpio_reg.gpio_use_sel_reg = ioread32(&chip->reg->gpio_use_sel); } /* * This function restores the register configuration of the GPIO device. */ -static void pch_gpio_restore_reg_conf(struct pch_gpio *chip) +static void __maybe_unused pch_gpio_restore_reg_conf(struct pch_gpio *chip) { iowrite32(chip->pch_gpio_reg.ien_reg, &chip->reg->ien); iowrite32(chip->pch_gpio_reg.imask_reg, &chip->reg->imask); @@ -204,10 +190,8 @@ static void pch_gpio_restore_reg_conf(struct pch_gpio *chip) if (chip->ioh == INTEL_EG20T_PCH) iowrite32(chip->pch_gpio_reg.im1_reg, &chip->reg->im1); if (chip->ioh == OKISEMI_ML7223n_IOH) - iowrite32(chip->pch_gpio_reg.gpio_use_sel_reg, - &chip->reg->gpio_use_sel); + iowrite32(chip->pch_gpio_reg.gpio_use_sel_reg, &chip->reg->gpio_use_sel); } -#endif static int pch_gpio_to_irq(struct gpio_chip *gpio, unsigned offset) { @@ -226,7 +210,6 @@ static void pch_gpio_setup(struct pch_gpio *chip) gpio->get = pch_gpio_get; gpio->direction_output = pch_gpio_direction_output; gpio->set = pch_gpio_set; - gpio->dbg_show = NULL; gpio->base = -1; gpio->ngpio = gpio_pins[chip->ioh]; gpio->can_sleep = false; @@ -250,8 +233,7 @@ static int pch_irq_type(struct irq_data *d, unsigned int type) im_reg = &chip->reg->im1; im_pos = ch - 8; } - dev_dbg(chip->dev, "%s:irq=%d type=%d ch=%d pos=%d\n", - __func__, irq, type, ch, im_pos); + dev_dbg(chip->dev, "irq=%d type=%d ch=%d pos=%d\n", irq, type, ch, im_pos); spin_lock_irqsave(&chip->spinlock, flags); @@ -317,16 +299,13 @@ static void pch_irq_ack(struct irq_data *d) static irqreturn_t pch_gpio_handler(int irq, void *dev_id) { struct pch_gpio *chip = dev_id; - u32 reg_val = ioread32(&chip->reg->istatus); + unsigned long reg_val = ioread32(&chip->reg->istatus); int i, ret = IRQ_NONE; - for (i = 0; i < gpio_pins[chip->ioh]; i++) { - if (reg_val & BIT(i)) { - dev_dbg(chip->dev, "%s:[%d]:irq=%d status=0x%x\n", - __func__, i, irq, reg_val); - generic_handle_irq(chip->irq_base + i); - ret = IRQ_HANDLED; - } + for_each_set_bit(i, ®_val, gpio_pins[chip->ioh]) { + dev_dbg(chip->dev, "[%d]:irq=%d status=0x%lx\n", i, irq, reg_val); + generic_handle_irq(chip->irq_base + i); + ret = IRQ_HANDLED; } return ret; } @@ -367,29 +346,24 @@ static int pch_gpio_probe(struct pci_dev *pdev, int irq_base; u32 msk; - chip = kzalloc(sizeof(*chip), GFP_KERNEL); + chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL); if (chip == NULL) return -ENOMEM; chip->dev = &pdev->dev; - ret = pci_enable_device(pdev); + ret = pcim_enable_device(pdev); if (ret) { - dev_err(&pdev->dev, "%s : pci_enable_device FAILED", __func__); - goto err_pci_enable; + dev_err(&pdev->dev, "pci_enable_device FAILED"); + return ret; } - ret = 
pci_request_regions(pdev, KBUILD_MODNAME); + ret = pcim_iomap_regions(pdev, 1 << 1, KBUILD_MODNAME); if (ret) { dev_err(&pdev->dev, "pci_request_regions FAILED-%d", ret); - goto err_request_regions; + return ret; } - chip->base = pci_iomap(pdev, 1, 0); - if (!chip->base) { - dev_err(&pdev->dev, "%s : pci_iomap FAILED", __func__); - ret = -ENOMEM; - goto err_iomap; - } + chip->base = pcim_iomap_table(pdev)[1]; if (pdev->device == 0x8803) chip->ioh = INTEL_EG20T_PCH; @@ -402,13 +376,11 @@ static int pch_gpio_probe(struct pci_dev *pdev, pci_set_drvdata(pdev, chip); spin_lock_init(&chip->spinlock); pch_gpio_setup(chip); -#ifdef CONFIG_OF_GPIO - chip->gpio.of_node = pdev->dev.of_node; -#endif - ret = gpiochip_add_data(&chip->gpio, chip); + + ret = devm_gpiochip_add_data(&pdev->dev, &chip->gpio, chip); if (ret) { dev_err(&pdev->dev, "PCH gpio: Failed to register GPIO\n"); - goto err_gpiochip_add; + return ret; } irq_base = devm_irq_alloc_descs(&pdev->dev, -1, 0, @@ -416,7 +388,7 @@ static int pch_gpio_probe(struct pci_dev *pdev, if (irq_base < 0) { dev_warn(&pdev->dev, "PCH gpio: Failed to get IRQ base num\n"); chip->irq_base = -1; - goto end; + return 0; } chip->irq_base = irq_base; @@ -427,53 +399,17 @@ static int pch_gpio_probe(struct pci_dev *pdev, ret = devm_request_irq(&pdev->dev, pdev->irq, pch_gpio_handler, IRQF_SHARED, KBUILD_MODNAME, chip); - if (ret != 0) { - dev_err(&pdev->dev, - "%s request_irq failed\n", __func__); - goto err_request_irq; + if (ret) { + dev_err(&pdev->dev, "request_irq failed\n"); + return ret; } - ret = pch_gpio_alloc_generic_chip(chip, irq_base, - gpio_pins[chip->ioh]); - if (ret) - goto err_request_irq; - -end: - return 0; - -err_request_irq: - gpiochip_remove(&chip->gpio); - -err_gpiochip_add: - pci_iounmap(pdev, chip->base); - -err_iomap: - pci_release_regions(pdev); - -err_request_regions: - pci_disable_device(pdev); - -err_pci_enable: - kfree(chip); - dev_err(&pdev->dev, "%s Failed returns %d\n", __func__, ret); - return ret; -} - -static void pch_gpio_remove(struct pci_dev *pdev) -{ - struct pch_gpio *chip = pci_get_drvdata(pdev); - - gpiochip_remove(&chip->gpio); - pci_iounmap(pdev, chip->base); - pci_release_regions(pdev); - pci_disable_device(pdev); - kfree(chip); + return pch_gpio_alloc_generic_chip(chip, irq_base, gpio_pins[chip->ioh]); } -#ifdef CONFIG_PM -static int pch_gpio_suspend(struct pci_dev *pdev, pm_message_t state) +static int __maybe_unused pch_gpio_suspend(struct device *dev) { - s32 ret; + struct pci_dev *pdev = to_pci_dev(dev); struct pch_gpio *chip = pci_get_drvdata(pdev); unsigned long flags; @@ -481,36 +417,15 @@ static int pch_gpio_suspend(struct pci_dev *pdev, pm_message_t state) pch_gpio_save_reg_conf(chip); spin_unlock_irqrestore(&chip->spinlock, flags); - ret = pci_save_state(pdev); - if (ret) { - dev_err(&pdev->dev, "pci_save_state Failed-%d\n", ret); - return ret; - } - pci_disable_device(pdev); - pci_set_power_state(pdev, PCI_D0); - ret = pci_enable_wake(pdev, PCI_D0, 1); - if (ret) - dev_err(&pdev->dev, "pci_enable_wake Failed -%d\n", ret); - return 0; } -static int pch_gpio_resume(struct pci_dev *pdev) +static int __maybe_unused pch_gpio_resume(struct device *dev) { - s32 ret; + struct pci_dev *pdev = to_pci_dev(dev); struct pch_gpio *chip = pci_get_drvdata(pdev); unsigned long flags; - ret = pci_enable_wake(pdev, PCI_D0, 0); - - pci_set_power_state(pdev, PCI_D0); - ret = pci_enable_device(pdev); - if (ret) { - dev_err(&pdev->dev, "pci_enable_device Failed-%d ", ret); - return ret; - } - pci_restore_state(pdev); - 
spin_lock_irqsave(&chip->spinlock, flags); iowrite32(0x01, &chip->reg->reset); iowrite32(0x00, &chip->reg->reset); @@ -519,10 +434,8 @@ static int pch_gpio_resume(struct pci_dev *pdev) return 0; } -#else -#define pch_gpio_suspend NULL -#define pch_gpio_resume NULL -#endif + +static SIMPLE_DEV_PM_OPS(pch_gpio_pm_ops, pch_gpio_suspend, pch_gpio_resume); #define PCI_VENDOR_ID_ROHM 0x10DB static const struct pci_device_id pch_gpio_pcidev_id[] = { @@ -538,12 +451,12 @@ static struct pci_driver pch_gpio_driver = { .name = "pch_gpio", .id_table = pch_gpio_pcidev_id, .probe = pch_gpio_probe, - .remove = pch_gpio_remove, - .suspend = pch_gpio_suspend, - .resume = pch_gpio_resume + .driver = { + .pm = &pch_gpio_pm_ops, + }, }; module_pci_driver(pch_gpio_driver); MODULE_DESCRIPTION("PCH GPIO PCI Driver"); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/gpio/gpio-pci-idio-16.c b/drivers/gpio/gpio-pci-idio-16.c index 25d16b2af1c3..6b7349783223 100644 --- a/drivers/gpio/gpio-pci-idio-16.c +++ b/drivers/gpio/gpio-pci-idio-16.c @@ -146,7 +146,7 @@ static int idio_16_gpio_get_multiple(struct gpio_chip *chip, port_state = ioread8(ports[i]); /* store acquired bits at respective bits array offset */ - bits[word_index] |= port_state << word_offset; + bits[word_index] |= (port_state << word_offset) & word_mask; } return 0; diff --git a/drivers/gpio/gpio-pcie-idio-24.c b/drivers/gpio/gpio-pcie-idio-24.c index f953541e7890..52f1647a46fd 100644 --- a/drivers/gpio/gpio-pcie-idio-24.c +++ b/drivers/gpio/gpio-pcie-idio-24.c @@ -243,7 +243,7 @@ static int idio_24_gpio_get_multiple(struct gpio_chip *chip, port_state = ioread8(&idio24gpio->reg->ttl_in0_7); /* store acquired bits at respective bits array offset */ - bits[word_index] |= port_state << word_offset; + bits[word_index] |= (port_state << word_offset) & word_mask; } return 0; diff --git a/drivers/gpio/gpio-pl061.c b/drivers/gpio/gpio-pl061.c index 2afd9de84a0d..dc42571e6fdc 100644 --- a/drivers/gpio/gpio-pl061.c +++ b/drivers/gpio/gpio-pl061.c @@ -54,6 +54,7 @@ struct pl061 { void __iomem *base; struct gpio_chip gc; + struct irq_chip irq_chip; int parent_irq; #ifdef CONFIG_PM @@ -281,15 +282,6 @@ static int pl061_irq_set_wake(struct irq_data *d, unsigned int state) return irq_set_irq_wake(pl061->parent_irq, state); } -static struct irq_chip pl061_irqchip = { - .name = "pl061", - .irq_ack = pl061_irq_ack, - .irq_mask = pl061_irq_mask, - .irq_unmask = pl061_irq_unmask, - .irq_set_type = pl061_irq_type, - .irq_set_wake = pl061_irq_set_wake, -}; - static int pl061_probe(struct amba_device *adev, const struct amba_id *id) { struct device *dev = &adev->dev; @@ -328,6 +320,13 @@ static int pl061_probe(struct amba_device *adev, const struct amba_id *id) /* * irq_chip support */ + pl061->irq_chip.name = dev_name(dev); + pl061->irq_chip.irq_ack = pl061_irq_ack; + pl061->irq_chip.irq_mask = pl061_irq_mask; + pl061->irq_chip.irq_unmask = pl061_irq_unmask; + pl061->irq_chip.irq_set_type = pl061_irq_type; + pl061->irq_chip.irq_set_wake = pl061_irq_set_wake; + writeb(0, pl061->base + GPIOIE); /* disable irqs */ irq = adev->irq[0]; if (irq < 0) { @@ -336,14 +335,14 @@ static int pl061_probe(struct amba_device *adev, const struct amba_id *id) } pl061->parent_irq = irq; - ret = gpiochip_irqchip_add(&pl061->gc, &pl061_irqchip, + ret = gpiochip_irqchip_add(&pl061->gc, &pl061->irq_chip, 0, handle_bad_irq, IRQ_TYPE_NONE); if (ret) { dev_info(&adev->dev, "could not add irqchip\n"); return ret; } - gpiochip_set_chained_irqchip(&pl061->gc, &pl061_irqchip, + 
gpiochip_set_chained_irqchip(&pl061->gc, &pl061->irq_chip, irq, pl061_irq_handler); amba_set_drvdata(adev, pl061); diff --git a/drivers/gpio/gpio-raspberrypi-exp.c b/drivers/gpio/gpio-raspberrypi-exp.c index d6d36d537e37..b77ea16ffa03 100644 --- a/drivers/gpio/gpio-raspberrypi-exp.c +++ b/drivers/gpio/gpio-raspberrypi-exp.c @@ -206,6 +206,7 @@ static int rpi_exp_gpio_probe(struct platform_device *pdev) } fw = rpi_firmware_get(fw_node); + of_node_put(fw_node); if (!fw) return -EPROBE_DEFER; diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c index 3c82bb3c2030..068ce25ffd28 100644 --- a/drivers/gpio/gpio-rcar.c +++ b/drivers/gpio/gpio-rcar.c @@ -1,17 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Renesas R-Car GPIO Support * * Copyright (C) 2014 Renesas Electronics Corporation * Copyright (C) 2013 Magnus Damm - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #include @@ -43,7 +35,7 @@ struct gpio_rcar_bank_info { struct gpio_rcar_priv { void __iomem *base; spinlock_t lock; - struct platform_device *pdev; + struct device *dev; struct gpio_chip gpio_chip; struct irq_chip irq_chip; unsigned int irq_parent; @@ -148,7 +140,7 @@ static int gpio_rcar_irq_set_type(struct irq_data *d, unsigned int type) struct gpio_rcar_priv *p = gpiochip_get_data(gc); unsigned int hwirq = irqd_to_hwirq(d); - dev_dbg(&p->pdev->dev, "sense irq = %d, type = %d\n", hwirq, type); + dev_dbg(p->dev, "sense irq = %d, type = %d\n", hwirq, type); switch (type & IRQ_TYPE_SENSE_MASK) { case IRQ_TYPE_LEVEL_HIGH: @@ -188,8 +180,7 @@ static int gpio_rcar_irq_set_wake(struct irq_data *d, unsigned int on) if (p->irq_parent) { error = irq_set_irq_wake(p->irq_parent, on); if (error) { - dev_dbg(&p->pdev->dev, - "irq %u doesn't support irq_set_wake\n", + dev_dbg(p->dev, "irq %u doesn't support irq_set_wake\n", p->irq_parent); p->irq_parent = 0; } @@ -252,13 +243,13 @@ static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset) struct gpio_rcar_priv *p = gpiochip_get_data(chip); int error; - error = pm_runtime_get_sync(&p->pdev->dev); + error = pm_runtime_get_sync(p->dev); if (error < 0) return error; error = pinctrl_gpio_request(chip->base + offset); if (error) - pm_runtime_put(&p->pdev->dev); + pm_runtime_put(p->dev); return error; } @@ -275,7 +266,7 @@ static void gpio_rcar_free(struct gpio_chip *chip, unsigned offset) */ gpio_rcar_config_general_input_output_mode(chip, offset, false); - pm_runtime_put(&p->pdev->dev); + pm_runtime_put(p->dev); } static int gpio_rcar_get_direction(struct gpio_chip *chip, unsigned int offset) @@ -406,21 +397,20 @@ MODULE_DEVICE_TABLE(of, gpio_rcar_of_table); static int gpio_rcar_parse_dt(struct gpio_rcar_priv *p, unsigned int *npins) { - struct device_node *np = p->pdev->dev.of_node; + struct device_node *np = p->dev->of_node; const struct gpio_rcar_info *info; struct of_phandle_args args; int ret; - info = of_device_get_match_data(&p->pdev->dev); + info = of_device_get_match_data(p->dev); ret = of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0, &args); *npins = ret == 0 ? 
args.args[2] : RCAR_MAX_GPIO_PER_BANK; p->has_both_edge_trigger = info->has_both_edge_trigger; if (*npins == 0 || *npins > RCAR_MAX_GPIO_PER_BANK) { - dev_warn(&p->pdev->dev, - "Invalid number of gpio lines %u, using %u\n", *npins, - RCAR_MAX_GPIO_PER_BANK); + dev_warn(p->dev, "Invalid number of gpio lines %u, using %u\n", + *npins, RCAR_MAX_GPIO_PER_BANK); *npins = RCAR_MAX_GPIO_PER_BANK; } @@ -442,7 +432,7 @@ static int gpio_rcar_probe(struct platform_device *pdev) if (!p) return -ENOMEM; - p->pdev = pdev; + p->dev = dev; spin_lock_init(&p->lock); /* Get device configuration from DT node */ diff --git a/drivers/gpio/gpio-sama5d2-piobu.c b/drivers/gpio/gpio-sama5d2-piobu.c new file mode 100644 index 000000000000..03a000659fa1 --- /dev/null +++ b/drivers/gpio/gpio-sama5d2-piobu.c @@ -0,0 +1,253 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * SAMA5D2 PIOBU GPIO controller + * + * Copyright (C) 2018 Microchip Technology Inc. and its subsidiaries + * + * Author: Andrei Stefanescu + * + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define PIOBU_NUM 8 +#define PIOBU_REG_SIZE 4 + +/* + * backup mode protection register for tamper detection + * normal mode protection register for tamper detection + * wakeup signal generation + */ +#define PIOBU_BMPR 0x7C +#define PIOBU_NMPR 0x80 +#define PIOBU_WKPR 0x90 + +#define PIOBU_BASE 0x18 /* PIOBU offset from SECUMOD base register address. */ + +#define PIOBU_DET_OFFSET 16 + +/* In the datasheet this bit is called OUTPUT */ +#define PIOBU_DIRECTION BIT(8) +#define PIOBU_OUT BIT(8) +#define PIOBU_IN 0 + +#define PIOBU_SOD BIT(9) +#define PIOBU_PDS BIT(10) + +#define PIOBU_HIGH BIT(9) +#define PIOBU_LOW 0 + +struct sama5d2_piobu { + struct gpio_chip chip; + struct regmap *regmap; +}; + +/** + * sama5d2_piobu_setup_pin() - prepares a pin for set_direction call + * + * Do not consider pin for tamper detection (normal and backup modes) + * Do not consider pin as tamper wakeup interrupt source + */ +static int sama5d2_piobu_setup_pin(struct gpio_chip *chip, unsigned int pin) +{ + int ret; + struct sama5d2_piobu *piobu = container_of(chip, struct sama5d2_piobu, + chip); + unsigned int mask = BIT(PIOBU_DET_OFFSET + pin); + + ret = regmap_update_bits(piobu->regmap, PIOBU_BMPR, mask, 0); + if (ret) + return ret; + + ret = regmap_update_bits(piobu->regmap, PIOBU_NMPR, mask, 0); + if (ret) + return ret; + + return regmap_update_bits(piobu->regmap, PIOBU_WKPR, mask, 0); +} + +/** + * sama5d2_piobu_write_value() - writes value & mask at the pin's PIOBU register + */ +static int sama5d2_piobu_write_value(struct gpio_chip *chip, unsigned int pin, + unsigned int mask, unsigned int value) +{ + int reg; + struct sama5d2_piobu *piobu = container_of(chip, struct sama5d2_piobu, + chip); + + reg = PIOBU_BASE + pin * PIOBU_REG_SIZE; + + return regmap_update_bits(piobu->regmap, reg, mask, value); +} + +/** + * sama5d2_piobu_read_value() - read the value with masking from the pin's PIOBU + * register + */ +static int sama5d2_piobu_read_value(struct gpio_chip *chip, unsigned int pin, + unsigned int mask) +{ + struct sama5d2_piobu *piobu = container_of(chip, struct sama5d2_piobu, + chip); + unsigned int val, reg; + int ret; + + reg = PIOBU_BASE + pin * PIOBU_REG_SIZE; + ret = regmap_read(piobu->regmap, reg, &val); + if (ret < 0) + return ret; + + return val & mask; +} + +/** + * sama5d2_piobu_set_direction() - mark pin as input or output + */ +static int sama5d2_piobu_set_direction(struct gpio_chip *chip, + unsigned int direction, + 
unsigned int pin) +{ + return sama5d2_piobu_write_value(chip, pin, PIOBU_DIRECTION, direction); +} + +/** + * sama5d2_piobu_get_direction() - gpiochip get_direction + */ +static int sama5d2_piobu_get_direction(struct gpio_chip *chip, + unsigned int pin) +{ + int ret = sama5d2_piobu_read_value(chip, pin, PIOBU_DIRECTION); + + if (ret < 0) + return ret; + + return (ret == PIOBU_IN) ? 1 : 0; +} + +/** + * sama5d2_piobu_direction_input() - gpiochip direction_input + */ +static int sama5d2_piobu_direction_input(struct gpio_chip *chip, + unsigned int pin) +{ + return sama5d2_piobu_set_direction(chip, PIOBU_IN, pin); +} + +/** + * sama5d2_piobu_direction_output() - gpiochip direction_output + */ +static int sama5d2_piobu_direction_output(struct gpio_chip *chip, + unsigned int pin, int value) +{ + return sama5d2_piobu_set_direction(chip, PIOBU_OUT, pin); +} + +/** + * sama5d2_piobu_get() - gpiochip get + */ +static int sama5d2_piobu_get(struct gpio_chip *chip, unsigned int pin) +{ + /* if pin is input, read value from PDS else read from SOD */ + int ret = sama5d2_piobu_get_direction(chip, pin); + + if (ret == 1) + ret = sama5d2_piobu_read_value(chip, pin, PIOBU_PDS); + else if (!ret) + ret = sama5d2_piobu_read_value(chip, pin, PIOBU_SOD); + + if (ret < 0) + return ret; + + return !!ret; +} + +/** + * sama5d2_piobu_set() - gpiochip set + */ +static void sama5d2_piobu_set(struct gpio_chip *chip, unsigned int pin, + int value) +{ + if (!value) + value = PIOBU_LOW; + else + value = PIOBU_HIGH; + + sama5d2_piobu_write_value(chip, pin, PIOBU_SOD, value); +} + +static int sama5d2_piobu_probe(struct platform_device *pdev) +{ + struct sama5d2_piobu *piobu; + int ret, i; + + piobu = devm_kzalloc(&pdev->dev, sizeof(*piobu), GFP_KERNEL); + if (!piobu) + return -ENOMEM; + + platform_set_drvdata(pdev, piobu); + piobu->chip.label = pdev->name; + piobu->chip.parent = &pdev->dev; + piobu->chip.of_node = pdev->dev.of_node; + piobu->chip.owner = THIS_MODULE, + piobu->chip.get_direction = sama5d2_piobu_get_direction, + piobu->chip.direction_input = sama5d2_piobu_direction_input, + piobu->chip.direction_output = sama5d2_piobu_direction_output, + piobu->chip.get = sama5d2_piobu_get, + piobu->chip.set = sama5d2_piobu_set, + piobu->chip.base = -1, + piobu->chip.ngpio = PIOBU_NUM, + piobu->chip.can_sleep = 0, + + piobu->regmap = syscon_node_to_regmap(pdev->dev.of_node); + if (IS_ERR(piobu->regmap)) { + dev_err(&pdev->dev, "Failed to get syscon regmap %ld\n", + PTR_ERR(piobu->regmap)); + return PTR_ERR(piobu->regmap); + } + + ret = devm_gpiochip_add_data(&pdev->dev, &piobu->chip, piobu); + if (ret) { + dev_err(&pdev->dev, "Failed to add gpiochip %d\n", ret); + return ret; + } + + for (i = 0; i < PIOBU_NUM; ++i) { + ret = sama5d2_piobu_setup_pin(&piobu->chip, i); + if (ret) { + dev_err(&pdev->dev, "Failed to setup pin: %d %d\n", + i, ret); + return ret; + } + } + + return 0; +} + +static const struct of_device_id sama5d2_piobu_ids[] = { + { .compatible = "atmel,sama5d2-secumod" }, + {}, +}; +MODULE_DEVICE_TABLE(of, sama5d2_piobu_ids); + +static struct platform_driver sama5d2_piobu_driver = { + .driver = { + .name = "sama5d2-piobu", + .of_match_table = of_match_ptr(sama5d2_piobu_ids) + }, + .probe = sama5d2_piobu_probe, +}; + +module_platform_driver(sama5d2_piobu_driver); + +MODULE_VERSION("1.0"); +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("SAMA5D2 PIOBU controller driver"); +MODULE_AUTHOR("Andrei Stefanescu "); diff --git a/drivers/gpio/gpio-sch.c b/drivers/gpio/gpio-sch.c index e9878f6ede67..c333046d02b8 100644 --- 
a/drivers/gpio/gpio-sch.c +++ b/drivers/gpio/gpio-sch.c @@ -1,32 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0 /* * GPIO interface for Intel Poulsbo SCH * * Copyright (c) 2010 CompuLab Ltd * Author: Denis Turischev - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License 2 as published - * by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; see the file COPYING. If not, write to - * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. */ -#include +#include +#include +#include +#include #include #include -#include -#include -#include -#include #include -#include +#include #define GEN 0x00 #define GIO 0x04 @@ -235,5 +222,5 @@ module_platform_driver(sch_gpio_driver); MODULE_AUTHOR("Denis Turischev "); MODULE_DESCRIPTION("GPIO interface for Intel Poulsbo SCH"); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); MODULE_ALIAS("platform:sch_gpio"); diff --git a/drivers/gpio/gpio-sch311x.c b/drivers/gpio/gpio-sch311x.c index 5497f0a88cf0..4df5335469fd 100644 --- a/drivers/gpio/gpio-sch311x.c +++ b/drivers/gpio/gpio-sch311x.c @@ -188,7 +188,7 @@ static void sch311x_gpio_set(struct gpio_chip *chip, unsigned offset, struct sch311x_gpio_block *block = gpiochip_get_data(chip); spin_lock(&block->lock); - __sch311x_gpio_set(block, offset, value); + __sch311x_gpio_set(block, offset, value); spin_unlock(&block->lock); } diff --git a/drivers/gpio/gpio-sodaville.c b/drivers/gpio/gpio-sodaville.c index f60da83349ef..aed988e78251 100644 --- a/drivers/gpio/gpio-sodaville.c +++ b/drivers/gpio/gpio-sodaville.c @@ -1,26 +1,22 @@ +// SPDX-License-Identifier: GPL-2.0 /* * GPIO interface for Intel Sodaville SoCs. * * Copyright (c) 2010, 2011 Intel Corporation * * Author: Hans J. Koch - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License 2 as published - * by the Free Software Foundation. - * */ #include +#include #include +#include #include #include -#include #include +#include #include #include -#include -#include #define DRV_NAME "sdv_gpio" #define SDV_NUM_PUB_GPIOS 12 @@ -80,18 +76,15 @@ static int sdv_gpio_pub_set_type(struct irq_data *d, unsigned int type) static irqreturn_t sdv_gpio_pub_irq_handler(int irq, void *data) { struct sdv_gpio_chip_data *sd = data; - u32 irq_stat = readl(sd->gpio_pub_base + GPSTR); + unsigned long irq_stat = readl(sd->gpio_pub_base + GPSTR); + int irq_bit; irq_stat &= readl(sd->gpio_pub_base + GPIO_INT); if (!irq_stat) return IRQ_NONE; - while (irq_stat) { - u32 irq_bit = __fls(irq_stat); - - irq_stat &= ~BIT(irq_bit); + for_each_set_bit(irq_bit, &irq_stat, 32) generic_handle_irq(irq_find_mapping(sd->id, irq_bit)); - } return IRQ_HANDLED; } @@ -155,8 +148,10 @@ static int sdv_register_irqsupport(struct sdv_gpio_chip_data *sd, * we unmask & ACK the IRQ before the source of the interrupt is gone * then the interrupt is active again. 
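
[Earlier in the gpio-sodaville diff, the interrupt handler switches to for_each_set_bit(). A sketch of that demux pattern, with names taken from the surrounding hunk; note the status variable must be unsigned long because the bitmap helpers operate on unsigned long words:

	static irqreturn_t example_demux_handler(int irq, void *data)
	{
		struct sdv_gpio_chip_data *sd = data;
		/* unsigned long: required by the bitmap iteration helpers */
		unsigned long stat = readl(sd->gpio_pub_base + GPSTR);
		int bit;

		stat &= readl(sd->gpio_pub_base + GPIO_INT);
		if (!stat)
			return IRQ_NONE;

		/* visit only the set bits instead of peeling them manually */
		for_each_set_bit(bit, &stat, 32)
			generic_handle_irq(irq_find_mapping(sd->id, bit));

		return IRQ_HANDLED;
	}
]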
*/ - sd->gc = irq_alloc_generic_chip("sdv-gpio", 1, sd->irq_base, - sd->gpio_pub_base, handle_fasteoi_irq); + sd->gc = devm_irq_alloc_generic_chip(&pdev->dev, "sdv-gpio", 1, + sd->irq_base, + sd->gpio_pub_base, + handle_fasteoi_irq); if (!sd->gc) return -ENOMEM; @@ -186,70 +181,52 @@ static int sdv_gpio_probe(struct pci_dev *pdev, const struct pci_device_id *pci_id) { struct sdv_gpio_chip_data *sd; - unsigned long addr; - const void *prop; - int len; int ret; u32 mux_val; - sd = kzalloc(sizeof(struct sdv_gpio_chip_data), GFP_KERNEL); + sd = devm_kzalloc(&pdev->dev, sizeof(*sd), GFP_KERNEL); if (!sd) return -ENOMEM; - ret = pci_enable_device(pdev); + + ret = pcim_enable_device(pdev); if (ret) { dev_err(&pdev->dev, "can't enable device.\n"); - goto done; + return ret; } - ret = pci_request_region(pdev, GPIO_BAR, DRV_NAME); + ret = pcim_iomap_regions(pdev, 1 << GPIO_BAR, DRV_NAME); if (ret) { dev_err(&pdev->dev, "can't alloc PCI BAR #%d\n", GPIO_BAR); - goto disable_pci; + return ret; } - addr = pci_resource_start(pdev, GPIO_BAR); - if (!addr) { - ret = -ENODEV; - goto release_reg; - } - sd->gpio_pub_base = ioremap(addr, pci_resource_len(pdev, GPIO_BAR)); + sd->gpio_pub_base = pcim_iomap_table(pdev)[GPIO_BAR]; - prop = of_get_property(pdev->dev.of_node, "intel,muxctl", &len); - if (prop && len == 4) { - mux_val = of_read_number(prop, 1); + ret = of_property_read_u32(pdev->dev.of_node, "intel,muxctl", &mux_val); + if (!ret) writel(mux_val, sd->gpio_pub_base + GPMUXCTL); - } ret = bgpio_init(&sd->chip, &pdev->dev, 4, sd->gpio_pub_base + GPINR, sd->gpio_pub_base + GPOUTR, NULL, sd->gpio_pub_base + GPOER, NULL, 0); if (ret) - goto unmap; + return ret; + sd->chip.ngpio = SDV_NUM_PUB_GPIOS; - ret = gpiochip_add_data(&sd->chip, sd); + ret = devm_gpiochip_add_data(&pdev->dev, &sd->chip, sd); if (ret < 0) { dev_err(&pdev->dev, "gpiochip_add() failed.\n"); - goto unmap; + return ret; } ret = sdv_register_irqsupport(sd, pdev); if (ret) - goto unmap; + return ret; pci_set_drvdata(pdev, sd); dev_info(&pdev->dev, "Sodaville GPIO driver registered.\n"); return 0; - -unmap: - iounmap(sd->gpio_pub_base); -release_reg: - pci_release_region(pdev, GPIO_BAR); -disable_pci: - pci_disable_device(pdev); -done: - kfree(sd); - return ret; } static const struct pci_device_id sdv_gpio_pci_ids[] = { diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c index 47dbd19751d0..02f6db925fd5 100644 --- a/drivers/gpio/gpio-tegra.c +++ b/drivers/gpio/gpio-tegra.c @@ -404,8 +404,7 @@ static void tegra_gpio_irq_handler(struct irq_desc *desc) #ifdef CONFIG_PM_SLEEP static int tegra_gpio_resume(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct tegra_gpio_info *tgi = platform_get_drvdata(pdev); + struct tegra_gpio_info *tgi = dev_get_drvdata(dev); unsigned long flags; unsigned int b, p; @@ -444,8 +443,7 @@ static int tegra_gpio_resume(struct device *dev) static int tegra_gpio_suspend(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct tegra_gpio_info *tgi = platform_get_drvdata(pdev); + struct tegra_gpio_info *tgi = dev_get_drvdata(dev); unsigned long flags; unsigned int b, p; diff --git a/drivers/gpio/gpio-tegra186.c b/drivers/gpio/gpio-tegra186.c index 9d0292c8a199..66ec38bb7954 100644 --- a/drivers/gpio/gpio-tegra186.c +++ b/drivers/gpio/gpio-tegra186.c @@ -279,7 +279,7 @@ static void tegra186_irq_unmask(struct irq_data *data) writel(value, base + TEGRA186_GPIO_ENABLE_CONFIG); } -static int tegra186_irq_set_type(struct irq_data *data, unsigned 
int flow) +static int tegra186_irq_set_type(struct irq_data *data, unsigned int type) { struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data); void __iomem *base; @@ -293,7 +293,7 @@ static int tegra186_irq_set_type(struct irq_data *data, unsigned int flow) value &= ~TEGRA186_GPIO_ENABLE_CONFIG_TRIGGER_TYPE_MASK; value &= ~TEGRA186_GPIO_ENABLE_CONFIG_TRIGGER_LEVEL; - switch (flow & IRQ_TYPE_SENSE_MASK) { + switch (type & IRQ_TYPE_SENSE_MASK) { case IRQ_TYPE_NONE: break; @@ -325,7 +325,7 @@ static int tegra186_irq_set_type(struct irq_data *data, unsigned int flow) writel(value, base + TEGRA186_GPIO_ENABLE_CONFIG); - if ((flow & IRQ_TYPE_EDGE_BOTH) == 0) + if ((type & IRQ_TYPE_EDGE_BOTH) == 0) irq_set_handler_locked(data, handle_level_irq); else irq_set_handler_locked(data, handle_edge_irq); diff --git a/drivers/gpio/gpio-uniphier.c b/drivers/gpio/gpio-uniphier.c index 74551cbdb2e8..0f662b297a95 100644 --- a/drivers/gpio/gpio-uniphier.c +++ b/drivers/gpio/gpio-uniphier.c @@ -1,16 +1,7 @@ -/* - * Copyright (C) 2017 Socionext Inc. - * Author: Masahiro Yamada - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - */ +// SPDX-License-Identifier: GPL-2.0 +// +// Copyright (C) 2017 Socionext Inc. +// Author: Masahiro Yamada #include #include diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c index 5960396c8d9a..1b79ebcfce3e 100644 --- a/drivers/gpio/gpio-vf610.c +++ b/drivers/gpio/gpio-vf610.c @@ -7,6 +7,7 @@ * Author: Stefan Agner . */ #include +#include #include #include #include @@ -32,6 +33,8 @@ struct vf610_gpio_port { void __iomem *gpio_base; const struct fsl_gpio_soc_data *sdata; u8 irqc[VF610_GPIO_PER_PORT]; + struct clk *clk_port; + struct clk *clk_gpio; int irq; }; @@ -271,6 +274,33 @@ static int vf610_gpio_probe(struct platform_device *pdev) if (port->irq < 0) return port->irq; + port->clk_port = devm_clk_get(&pdev->dev, "port"); + if (!IS_ERR(port->clk_port)) { + ret = clk_prepare_enable(port->clk_port); + if (ret) + return ret; + } else if (port->clk_port == ERR_PTR(-EPROBE_DEFER)) { + /* + * Percolate deferrals, for anything else, + * just live without the clocking. 
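
[A condensed sketch of the optional-clock idiom the vf610 probe adopts here; the helper is illustrative, not in the patch. Only -EPROBE_DEFER is fatal, any other failure just means "run without the clock", and callers later guard cleanup with !IS_ERR(), as the remove path below does:

	static int vf610_enable_optional_clk(struct device *dev, const char *id,
					     struct clk **out)
	{
		struct clk *clk = devm_clk_get(dev, id);

		*out = clk;
		if (IS_ERR(clk)) {
			/* the clock provider may simply not have probed yet */
			if (clk == ERR_PTR(-EPROBE_DEFER))
				return -EPROBE_DEFER;
			return 0;	/* no clock: carry on without it */
		}
		return clk_prepare_enable(clk);
	}
]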
+ */ + return PTR_ERR(port->clk_port); + } + + port->clk_gpio = devm_clk_get(&pdev->dev, "gpio"); + if (!IS_ERR(port->clk_gpio)) { + ret = clk_prepare_enable(port->clk_gpio); + if (ret) { + clk_disable_unprepare(port->clk_port); + return ret; + } + } else if (port->clk_gpio == ERR_PTR(-EPROBE_DEFER)) { + clk_disable_unprepare(port->clk_port); + return PTR_ERR(port->clk_gpio); + } + + platform_set_drvdata(pdev, port); + gc = &port->gc; gc->of_node = np; gc->parent = dev; @@ -305,12 +335,26 @@ static int vf610_gpio_probe(struct platform_device *pdev) return 0; } +static int vf610_gpio_remove(struct platform_device *pdev) +{ + struct vf610_gpio_port *port = platform_get_drvdata(pdev); + + gpiochip_remove(&port->gc); + if (!IS_ERR(port->clk_port)) + clk_disable_unprepare(port->clk_port); + if (!IS_ERR(port->clk_gpio)) + clk_disable_unprepare(port->clk_gpio); + + return 0; +} + static struct platform_driver vf610_gpio_driver = { .driver = { .name = "gpio-vf610", .of_match_table = vf610_gpio_dt_ids, }, .probe = vf610_gpio_probe, + .remove = vf610_gpio_remove, }; builtin_platform_driver(vf610_gpio_driver); diff --git a/drivers/gpio/gpio-ws16c48.c b/drivers/gpio/gpio-ws16c48.c index c7028eb0b8e1..5cf3697bfb15 100644 --- a/drivers/gpio/gpio-ws16c48.c +++ b/drivers/gpio/gpio-ws16c48.c @@ -169,7 +169,7 @@ static int ws16c48_gpio_get_multiple(struct gpio_chip *chip, port_state = inb(ws16c48gpio->base + i); /* store acquired bits at respective bits array offset */ - bits[word_index] |= port_state << word_offset; + bits[word_index] |= (port_state << word_offset) & word_mask; } return 0; diff --git a/drivers/gpio/gpio-zynq.c b/drivers/gpio/gpio-zynq.c index 3f5fcdd5a429..b3b4edcdffe0 100644 --- a/drivers/gpio/gpio-zynq.c +++ b/drivers/gpio/gpio-zynq.c @@ -357,6 +357,28 @@ static int zynq_gpio_dir_out(struct gpio_chip *chip, unsigned int pin, return 0; } +/** + * zynq_gpio_get_direction - Read the direction of the specified GPIO pin + * @chip: gpio_chip instance to be worked on + * @pin: gpio pin number within the device + * + * This function returns the direction of the specified GPIO. 
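
[A few hunks up, the one-line ws16c48 get_multiple fix (the same change appears in the idio-16 and idio-24 drivers earlier) masks the shifted port state before OR-ing it into the caller's bitmap. A sketch of the intent, under the assumption that word_mask carries only the lines the caller actually requested:

	static void example_report_port(unsigned long *word,
					unsigned long word_mask,
					unsigned int word_offset, u8 port_state)
	{
		/*
		 * Mask after shifting so that port bits the caller did not
		 * ask for never leak into the returned bitmap.
		 */
		*word |= ((unsigned long)port_state << word_offset) & word_mask;
	}
]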
+ * + * Return: 0 for output, 1 for input + */ +static int zynq_gpio_get_direction(struct gpio_chip *chip, unsigned int pin) +{ + u32 reg; + unsigned int bank_num, bank_pin_num; + struct zynq_gpio *gpio = gpiochip_get_data(chip); + + zynq_gpio_get_bank_pin(pin, &bank_num, &bank_pin_num, gpio); + + reg = readl_relaxed(gpio->base_addr + ZYNQ_GPIO_DIRM_OFFSET(bank_num)); + + return !(reg & BIT(bank_pin_num)); +} + /** * zynq_gpio_irq_mask - Disable the interrupts for a gpio pin * @irq_data: per irq and chip data passed down to chip functions @@ -693,8 +715,7 @@ static int __maybe_unused zynq_gpio_resume(struct device *dev) static int __maybe_unused zynq_gpio_runtime_suspend(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct zynq_gpio *gpio = platform_get_drvdata(pdev); + struct zynq_gpio *gpio = dev_get_drvdata(dev); clk_disable_unprepare(gpio->clk); @@ -703,8 +724,7 @@ static int __maybe_unused zynq_gpio_runtime_suspend(struct device *dev) static int __maybe_unused zynq_gpio_runtime_resume(struct device *dev) { - struct platform_device *pdev = to_platform_device(dev); - struct zynq_gpio *gpio = platform_get_drvdata(pdev); + struct zynq_gpio *gpio = dev_get_drvdata(dev); return clk_prepare_enable(gpio->clk); } @@ -827,6 +847,7 @@ static int zynq_gpio_probe(struct platform_device *pdev) chip->free = zynq_gpio_free; chip->direction_input = zynq_gpio_dir_in; chip->direction_output = zynq_gpio_dir_out; + chip->get_direction = zynq_gpio_get_direction; chip->base = of_alias_get_id(pdev->dev.of_node, "gpio"); chip->ngpio = gpio->p_data->ngpio; diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c index 7f93954c58ea..48534bda73d3 100644 --- a/drivers/gpio/gpiolib-acpi.c +++ b/drivers/gpio/gpiolib-acpi.c @@ -217,7 +217,7 @@ static acpi_status acpi_gpiochip_alloc_event(struct acpi_resource *ares, if (!handler) return AE_OK; - desc = gpiochip_request_own_desc(chip, pin, "ACPI:Event"); + desc = gpiochip_request_own_desc(chip, pin, "ACPI:Event", 0); if (IS_ERR(desc)) { dev_err(chip->parent, "Failed to request GPIO\n"); return AE_ERROR; @@ -913,23 +913,15 @@ acpi_gpio_adr_space_handler(u32 function, acpi_physical_address address, if (!found) { enum gpiod_flags flags = acpi_gpio_to_gpiod_flags(agpio); const char *label = "ACPI:OpRegion"; - int err; - desc = gpiochip_request_own_desc(chip, pin, label); + desc = gpiochip_request_own_desc(chip, pin, label, + flags); if (IS_ERR(desc)) { status = AE_ERROR; mutex_unlock(&achip->conn_lock); goto out; } - err = gpiod_configure_flags(desc, label, 0, flags); - if (err < 0) { - status = AE_NOT_CONFIGURED; - gpiochip_free_own_desc(desc); - mutex_unlock(&achip->conn_lock); - goto out; - } - conn = kzalloc(sizeof(*conn), GFP_KERNEL); if (!conn) { status = AE_NO_MEMORY; diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c index 7f1260c78270..a6e1891217e2 100644 --- a/drivers/gpio/gpiolib-of.c +++ b/drivers/gpio/gpiolib-of.c @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +// SPDX-License-Identifier: GPL-2.0+ /* * OF helpers for the GPIO API * @@ -54,9 +54,31 @@ static struct gpio_desc *of_xlate_and_get_gpiod_flags(struct gpio_chip *chip, } static void of_gpio_flags_quirks(struct device_node *np, + const char *propname, enum of_gpio_flags *flags, int index) { + /* + * Handle MMC "cd-inverted" and "wp-inverted" semantics. + */ + if (IS_ENABLED(CONFIG_MMC)) { + /* + * Active low is the default according to the + * SDHCI specification and the device tree + * bindings. 
However the code in the current + * kernel was written such that the phandle + * flags were always respected, and "cd-inverted" + * would invert the flag from the device phandle. + */ + if (!strcmp(propname, "cd-gpios")) { + if (of_property_read_bool(np, "cd-inverted")) + *flags ^= OF_GPIO_ACTIVE_LOW; + } + if (!strcmp(propname, "wp-gpios")) { + if (of_property_read_bool(np, "wp-inverted")) + *flags ^= OF_GPIO_ACTIVE_LOW; + } + } /* * Some GPIO fixed regulator quirks. * Note that active low is the default. @@ -174,7 +196,7 @@ struct gpio_desc *of_get_named_gpiod_flags(struct device_node *np, goto out; if (flags) - of_gpio_flags_quirks(np, flags, index); + of_gpio_flags_quirks(np, propname, flags, index); pr_debug("%s: parsed '%s' property of node '%pOF[%d]' - status (%d)\n", __func__, propname, np, index, diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c index 985c09ce80fb..1651d7f0a303 100644 --- a/drivers/gpio/gpiolib.c +++ b/drivers/gpio/gpiolib.c @@ -1512,19 +1512,6 @@ static void devm_gpio_chip_release(struct device *dev, void *res) gpiochip_remove(chip); } -static int devm_gpio_chip_match(struct device *dev, void *res, void *data) - -{ - struct gpio_chip **r = res; - - if (!r || !*r) { - WARN_ON(!r || !*r); - return 0; - } - - return *r == data; -} - /** * devm_gpiochip_add_data() - Resource manager gpiochip_add_data() * @dev: pointer to the device that gpio_chip belongs to. @@ -1564,23 +1551,6 @@ int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *chip, } EXPORT_SYMBOL_GPL(devm_gpiochip_add_data); -/** - * devm_gpiochip_remove() - Resource manager of gpiochip_remove() - * @dev: device for which which resource was allocated - * @chip: the chip to remove - * - * A gpio_chip with any GPIOs still requested may not be removed. - */ -void devm_gpiochip_remove(struct device *dev, struct gpio_chip *chip) -{ - int ret; - - ret = devres_release(dev, devm_gpio_chip_release, - devm_gpio_chip_match, chip); - WARN_ON(ret); -} -EXPORT_SYMBOL_GPL(devm_gpiochip_remove); - /** * gpiochip_find() - iterator for locating a specific gpio_chip * @data: data to pass to match function @@ -2299,6 +2269,12 @@ static int gpiod_request_commit(struct gpio_desc *desc, const char *label) unsigned long flags; unsigned offset; + if (label) { + label = kstrdup_const(label, GFP_KERNEL); + if (!label) + return -ENOMEM; + } + spin_lock_irqsave(&gpio_lock, flags); /* NOTE: gpio_request() can be called in early boot, @@ -2309,6 +2285,7 @@ static int gpiod_request_commit(struct gpio_desc *desc, const char *label) desc_set_label(desc, label ? : "?"); status = 0; } else { + kfree_const(label); status = -EBUSY; goto done; } @@ -2325,6 +2302,7 @@ static int gpiod_request_commit(struct gpio_desc *desc, const char *label) if (status < 0) { desc_set_label(desc, NULL); + kfree_const(label); clear_bit(FLAG_REQUESTED, &desc->flags); goto done; } @@ -2420,6 +2398,7 @@ static bool gpiod_free_commit(struct gpio_desc *desc) chip->free(chip, gpio_chip_hwgpio(desc)); spin_lock_irqsave(&gpio_lock, flags); } + kfree_const(desc->label); desc_set_label(desc, NULL); clear_bit(FLAG_ACTIVE_LOW, &desc->flags); clear_bit(FLAG_REQUESTED, &desc->flags); @@ -2476,6 +2455,7 @@ EXPORT_SYMBOL_GPL(gpiochip_is_requested); * @chip: GPIO chip * @hwnum: hardware number of the GPIO for which to request the descriptor * @label: label for the GPIO + * @flags: flags for this GPIO or 0 if default * * Function allows GPIO chip drivers to request and use their own GPIO * descriptors via gpiolib API. 
Difference to gpiod_request() is that this @@ -2488,7 +2468,8 @@ EXPORT_SYMBOL_GPL(gpiochip_is_requested); * code on failure. */ struct gpio_desc *gpiochip_request_own_desc(struct gpio_chip *chip, u16 hwnum, - const char *label) + const char *label, + enum gpiod_flags flags) { struct gpio_desc *desc = gpiochip_get_desc(chip, hwnum); int err; @@ -2502,6 +2483,13 @@ struct gpio_desc *gpiochip_request_own_desc(struct gpio_chip *chip, u16 hwnum, if (err < 0) return ERR_PTR(err); + err = gpiod_configure_flags(desc, label, 0, flags); + if (err) { + chip_err(chip, "setup of own GPIO %s failed\n", label); + gpiod_free_commit(desc); + return ERR_PTR(err); + } + return desc; } EXPORT_SYMBOL_GPL(gpiochip_request_own_desc); @@ -3375,11 +3363,19 @@ EXPORT_SYMBOL_GPL(gpiod_cansleep); * @desc: gpio to set the consumer name on * @name: the new consumer name */ -void gpiod_set_consumer_name(struct gpio_desc *desc, const char *name) +int gpiod_set_consumer_name(struct gpio_desc *desc, const char *name) { - VALIDATE_DESC_VOID(desc); - /* Just overwrite whatever the previous name was */ - desc->label = name; + VALIDATE_DESC(desc); + if (name) { + name = kstrdup_const(name, GFP_KERNEL); + if (!name) + return -ENOMEM; + } + + kfree_const(desc->label); + desc_set_label(desc, name); + + return 0; } EXPORT_SYMBOL_GPL(gpiod_set_consumer_name); @@ -4348,7 +4344,15 @@ int gpiod_hog(struct gpio_desc *desc, const char *name, chip = gpiod_to_chip(desc); hwnum = gpio_chip_hwgpio(desc); - local_desc = gpiochip_request_own_desc(chip, hwnum, name); + /* + * FIXME: not very elegant that we call gpiod_configure_flags() + * twice here (once inside gpiochip_request_own_desc() and + * again here), but the gpiochip_request_own_desc() is external + * and cannot really pass the lflags so this is the lesser evil + * at the moment. Pass zero as dflags on this first call so we + * don't screw anything up. + */ + local_desc = gpiochip_request_own_desc(chip, hwnum, name, 0); if (IS_ERR(local_desc)) { status = PTR_ERR(local_desc); pr_err("requesting hog GPIO %s (chip %s, offset %d) failed, %d\n", diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c index e55508b39496..3e6823fdd939 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c @@ -238,44 +238,40 @@ static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node, * amdgpu_mn_invalidate_range_start_gfx - callback to notify about mm change * * @mn: our notifier - * @mm: the mm this callback is about - * @start: start of updated range - * @end: end of updated range + * @range: mmu notifier context * * Block for operations on BOs to finish and mark pages as accessed and * potentially dirty. 
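
[The conversion that follows replaces four separate callback arguments (mm, start, end, blockable) with a single struct mmu_notifier_range. A condensed sketch of the resulting shape, shared by both range_start callbacks below; locking elided, names from the surrounding diff:

	static int example_range_start(struct amdgpu_mn *amn,
				       const struct mmu_notifier_range *range)
	{
		struct interval_tree_node *it;
		/* the notifier range is [start, end), the tree is inclusive */
		unsigned long last = range->end - 1;

		it = interval_tree_iter_first(&amn->objects, range->start, last);
		while (it) {
			struct amdgpu_mn_node *node;

			if (!range->blockable)
				return -EAGAIN;	/* atomic caller: back off */

			node = container_of(it, struct amdgpu_mn_node, it);
			/* step forward before the node may be invalidated */
			it = interval_tree_iter_next(it, range->start, last);
			amdgpu_mn_invalidate_node(node, range->start, last);
		}

		return 0;
	}
]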
*/ static int amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn); struct interval_tree_node *it; + unsigned long end; /* notification is exclusive, but interval is inclusive */ - end -= 1; + end = range->end - 1; /* TODO we should be able to split locking for interval tree and * amdgpu_mn_invalidate_node */ - if (amdgpu_mn_read_lock(amn, blockable)) + if (amdgpu_mn_read_lock(amn, range->blockable)) return -EAGAIN; - it = interval_tree_iter_first(&amn->objects, start, end); + it = interval_tree_iter_first(&amn->objects, range->start, end); while (it) { struct amdgpu_mn_node *node; - if (!blockable) { + if (!range->blockable) { amdgpu_mn_read_unlock(amn); return -EAGAIN; } node = container_of(it, struct amdgpu_mn_node, it); - it = interval_tree_iter_next(it, start, end); + it = interval_tree_iter_next(it, range->start, end); - amdgpu_mn_invalidate_node(node, start, end); + amdgpu_mn_invalidate_node(node, range->start, end); } return 0; @@ -294,39 +290,38 @@ static int amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, * are restored in amdgpu_mn_invalidate_range_end_hsa. */ static int amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn); struct interval_tree_node *it; + unsigned long end; /* notification is exclusive, but interval is inclusive */ - end -= 1; + end = range->end - 1; - if (amdgpu_mn_read_lock(amn, blockable)) + if (amdgpu_mn_read_lock(amn, range->blockable)) return -EAGAIN; - it = interval_tree_iter_first(&amn->objects, start, end); + it = interval_tree_iter_first(&amn->objects, range->start, end); while (it) { struct amdgpu_mn_node *node; struct amdgpu_bo *bo; - if (!blockable) { + if (!range->blockable) { amdgpu_mn_read_unlock(amn); return -EAGAIN; } node = container_of(it, struct amdgpu_mn_node, it); - it = interval_tree_iter_next(it, start, end); + it = interval_tree_iter_next(it, range->start, end); list_for_each_entry(bo, &node->bos, mn_list) { struct kgd_mem *mem = bo->kfd_bo; if (amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, - start, end)) - amdgpu_amdkfd_evict_userptr(mem, mm); + range->start, + end)) + amdgpu_amdkfd_evict_userptr(mem, range->mm); } } @@ -344,9 +339,7 @@ static int amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, * Release the lock again to allow new command submissions.
*/ static void amdgpu_mn_invalidate_range_end(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end) + const struct mmu_notifier_range *range) { struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn); diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c index c02adbbeef2a..b7bc7d7d048f 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c @@ -853,7 +853,7 @@ static int kfd_fill_mem_info_for_cpu(int numa_node_id, int *avail_size, */ pgdat = NODE_DATA(numa_node_id); for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) - mem_in_bytes += pgdat->node_zones[zone_type].managed_pages; + mem_in_bytes += zone_managed_pages(&pgdat->node_zones[zone_type]); mem_in_bytes <<= PAGE_SHIFT; sub_type_hdr->length_low = lower_32_bits(mem_in_bytes); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c index 7fea74861a87..160ce3c060a5 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c @@ -439,6 +439,4 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, if (drm_debug & DRM_UT_DRIVER) etnaviv_buffer_dump(gpu, buffer, 0, 0x50); - - gpu->lastctx = cmdbuf->ctx; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c index 52802e6049e0..96efc84396bf 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c @@ -72,14 +72,8 @@ static void etnaviv_postclose(struct drm_device *dev, struct drm_file *file) for (i = 0; i < ETNA_MAX_PIPES; i++) { struct etnaviv_gpu *gpu = priv->gpu[i]; - if (gpu) { - mutex_lock(&gpu->lock); - if (gpu->lastctx == ctx) - gpu->lastctx = NULL; - mutex_unlock(&gpu->lock); - + if (gpu) drm_sched_entity_destroy(&ctx->sched_entity[i]); - } } kfree(ctx); @@ -523,7 +517,7 @@ static int etnaviv_bind(struct device *dev) if (!priv) { dev_err(dev, "failed to allocate private data\n"); ret = -ENOMEM; - goto out_unref; + goto out_put; } drm->dev_private = priv; @@ -549,7 +543,7 @@ out_register: component_unbind_all(dev, drm); out_bind: kfree(priv); -out_unref: +out_put: drm_dev_put(drm); return ret; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 8d02d1b7dcf5..4bf698de5996 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -107,17 +107,6 @@ static inline size_t size_vstruct(size_t nelem, size_t elem_size, size_t base) return base + nelem * elem_size; } -/* returns true if fence a comes after fence b */ -static inline bool fence_after(u32 a, u32 b) -{ - return (s32)(a - b) > 0; -} - -static inline bool fence_after_eq(u32 a, u32 b) -{ - return (s32)(a - b) >= 0; -} - /* * Etnaviv timeouts are specified wrt CLOCK_MONOTONIC, not jiffies. 
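
[The fence_after() helpers removed from etnaviv_drv.h here reappear next to their only user in etnaviv_gpu.c below. What the signed cast buys, as a worked example: comparing u32 sequence numbers through a signed difference stays correct across counter wraparound.

	/* returns true if fence a comes after fence b, wrap-safe */
	static inline bool fence_after(u32 a, u32 b)
	{
		return (s32)(a - b) > 0;
	}

	/*
	 * e.g. a = 0x00000002, b = 0xfffffffe: a - b == 4 modulo 2^32,
	 * so 'a' is correctly seen as newer even though it is
	 * numerically smaller.
	 */
]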
* We need to calculate the timeout in terms of number of jiffies diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index f225fbc6edd2..6904535475de 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -3,10 +3,12 @@ * Copyright (C) 2015-2018 Etnaviv Project */ +#include #include #include #include #include +#include #include #include "etnaviv_cmdbuf.h" @@ -976,7 +978,6 @@ int etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m) void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu) { - unsigned long flags; unsigned int i = 0; dev_err(gpu->dev, "recover hung GPU!\n"); @@ -989,15 +990,13 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu) etnaviv_hw_reset(gpu); /* complete all events, the GPU won't do it after the reset */ - spin_lock_irqsave(&gpu->event_spinlock, flags); + spin_lock(&gpu->event_spinlock); for_each_set_bit_from(i, gpu->event_bitmap, ETNA_NR_EVENTS) complete(&gpu->event_free); bitmap_zero(gpu->event_bitmap, ETNA_NR_EVENTS); - spin_unlock_irqrestore(&gpu->event_spinlock, flags); - gpu->completed_fence = gpu->active_fence; + spin_unlock(&gpu->event_spinlock); etnaviv_gpu_hw_init(gpu); - gpu->lastctx = NULL; gpu->exec_state = -1; mutex_unlock(&gpu->lock); @@ -1032,7 +1031,7 @@ static bool etnaviv_fence_signaled(struct dma_fence *fence) { struct etnaviv_fence *f = to_etnaviv_fence(fence); - return fence_completed(f->gpu, f->base.seqno); + return (s32)(f->gpu->completed_fence - f->base.seqno) >= 0; } static void etnaviv_fence_release(struct dma_fence *fence) @@ -1071,6 +1070,12 @@ static struct dma_fence *etnaviv_gpu_fence_alloc(struct etnaviv_gpu *gpu) return &f->base; } +/* returns true if fence a comes after fence b */ +static inline bool fence_after(u32 a, u32 b) +{ + return (s32)(a - b) > 0; +} + /* * event management: */ @@ -1078,7 +1083,7 @@ static struct dma_fence *etnaviv_gpu_fence_alloc(struct etnaviv_gpu *gpu) static int event_alloc(struct etnaviv_gpu *gpu, unsigned nr_events, unsigned int *events) { - unsigned long flags, timeout = msecs_to_jiffies(10 * 10000); + unsigned long timeout = msecs_to_jiffies(10 * 10000); unsigned i, acquired = 0; for (i = 0; i < nr_events; i++) { @@ -1095,7 +1100,7 @@ static int event_alloc(struct etnaviv_gpu *gpu, unsigned nr_events, timeout = ret; } - spin_lock_irqsave(&gpu->event_spinlock, flags); + spin_lock(&gpu->event_spinlock); for (i = 0; i < nr_events; i++) { int event = find_first_zero_bit(gpu->event_bitmap, ETNA_NR_EVENTS); @@ -1105,7 +1110,7 @@ static int event_alloc(struct etnaviv_gpu *gpu, unsigned nr_events, set_bit(event, gpu->event_bitmap); } - spin_unlock_irqrestore(&gpu->event_spinlock, flags); + spin_unlock(&gpu->event_spinlock); return 0; @@ -1118,18 +1123,11 @@ out: static void event_free(struct etnaviv_gpu *gpu, unsigned int event) { - unsigned long flags; - - spin_lock_irqsave(&gpu->event_spinlock, flags); - if (!test_bit(event, gpu->event_bitmap)) { dev_warn(gpu->dev, "event %u is already marked as free", event); - spin_unlock_irqrestore(&gpu->event_spinlock, flags); } else { clear_bit(event, gpu->event_bitmap); - spin_unlock_irqrestore(&gpu->event_spinlock, flags); - complete(&gpu->event_free); } } @@ -1306,8 +1304,6 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit) goto out_unlock; } - gpu->active_fence = gpu_fence->seqno; - if (submit->nr_pmrs) { gpu->event[event[1]].sync_point = &sync_point_perfmon_sample_pre; kref_get(&submit->refcount); @@ -1549,7 +1545,6 @@ static int 
etnaviv_gpu_hw_resume(struct etnaviv_gpu *gpu) etnaviv_gpu_update_clock(gpu); etnaviv_gpu_hw_init(gpu); - gpu->lastctx = NULL; gpu->exec_state = -1; mutex_unlock(&gpu->lock); @@ -1806,8 +1801,8 @@ static int etnaviv_gpu_rpm_suspend(struct device *dev) struct etnaviv_gpu *gpu = dev_get_drvdata(dev); u32 idle, mask; - /* If we have outstanding fences, we're not idle */ - if (gpu->completed_fence != gpu->active_fence) + /* If there are any jobs in the HW queue, we're not idle */ + if (atomic_read(&gpu->sched.hw_rq_count)) return -EBUSY; /* Check whether the hardware (except FE) is idle */ diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index 9a75a6937268..9bcf151f706b 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -6,9 +6,6 @@ #ifndef __ETNAVIV_GPU_H__ #define __ETNAVIV_GPU_H__ -#include -#include - #include "etnaviv_cmdbuf.h" #include "etnaviv_drv.h" @@ -88,6 +85,8 @@ struct etnaviv_event { struct etnaviv_cmdbuf_suballoc; struct etnaviv_cmdbuf; +struct regulator; +struct clk; #define ETNA_NR_EVENTS 30 @@ -98,7 +97,6 @@ struct etnaviv_gpu { struct mutex lock; struct etnaviv_chip_identity identity; enum etnaviv_sec_mode sec_mode; - struct etnaviv_file_private *lastctx; struct workqueue_struct *wq; struct drm_gpu_scheduler sched; @@ -121,7 +119,6 @@ struct etnaviv_gpu { struct mutex fence_lock; struct idr fence_idr; u32 next_fence; - u32 active_fence; u32 completed_fence; wait_queue_head_t fence_event; u64 fence_context; @@ -161,11 +158,6 @@ static inline u32 gpu_read(struct etnaviv_gpu *gpu, u32 reg) return readl(gpu->mmio + reg); } -static inline bool fence_completed(struct etnaviv_gpu *gpu, u32 fence) -{ - return fence_after_eq(gpu->completed_fence, fence); -} - int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, u32 param, u64 *value); int etnaviv_gpu_init(struct etnaviv_gpu *gpu); diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c index e3d6a8584715..786a8ee6f10f 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c +++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c @@ -228,6 +228,21 @@ static const uint32_t fimd_formats[] = { DRM_FORMAT_ARGB8888, }; +static const unsigned int capabilities[WINDOWS_NR] = { + 0, + EXYNOS_DRM_PLANE_CAP_WIN_BLEND | EXYNOS_DRM_PLANE_CAP_PIX_BLEND, + EXYNOS_DRM_PLANE_CAP_WIN_BLEND | EXYNOS_DRM_PLANE_CAP_PIX_BLEND, + EXYNOS_DRM_PLANE_CAP_WIN_BLEND | EXYNOS_DRM_PLANE_CAP_PIX_BLEND, + EXYNOS_DRM_PLANE_CAP_WIN_BLEND | EXYNOS_DRM_PLANE_CAP_PIX_BLEND, +}; + +static inline void fimd_set_bits(struct fimd_context *ctx, u32 reg, u32 mask, + u32 val) +{ + val = (val & mask) | (readl(ctx->regs + reg) & ~mask); + writel(val, ctx->regs + reg); +} + static int fimd_enable_vblank(struct exynos_drm_crtc *crtc) { struct fimd_context *ctx = crtc->ctx; @@ -551,13 +566,88 @@ static void fimd_commit(struct exynos_drm_crtc *crtc) writel(val, ctx->regs + VIDCON0); } +static void fimd_win_set_bldeq(struct fimd_context *ctx, unsigned int win, + unsigned int alpha, unsigned int pixel_alpha) +{ + u32 mask = BLENDEQ_A_FUNC_F(0xf) | BLENDEQ_B_FUNC_F(0xf); + u32 val = 0; + + switch (pixel_alpha) { + case DRM_MODE_BLEND_PIXEL_NONE: + case DRM_MODE_BLEND_COVERAGE: + val |= BLENDEQ_A_FUNC_F(BLENDEQ_ALPHA_A); + val |= BLENDEQ_B_FUNC_F(BLENDEQ_ONE_MINUS_ALPHA_A); + break; + case DRM_MODE_BLEND_PREMULTI: + default: + if (alpha != DRM_BLEND_ALPHA_OPAQUE) { + val |= BLENDEQ_A_FUNC_F(BLENDEQ_ALPHA0); + val |= BLENDEQ_B_FUNC_F(BLENDEQ_ONE_MINUS_ALPHA_A); + } else { + 
val |= BLENDEQ_A_FUNC_F(BLENDEQ_ONE); + val |= BLENDEQ_B_FUNC_F(BLENDEQ_ONE_MINUS_ALPHA_A); + } + break; + } + fimd_set_bits(ctx, BLENDEQx(win), mask, val); +} -static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win, - uint32_t pixel_format, int width) +static void fimd_win_set_bldmod(struct fimd_context *ctx, unsigned int win, + unsigned int alpha, unsigned int pixel_alpha) { - unsigned long val; + u32 win_alpha_l = (alpha >> 8) & 0xf; + u32 win_alpha_h = alpha >> 12; + u32 val = 0; - val = WINCONx_ENWIN; + switch (pixel_alpha) { + case DRM_MODE_BLEND_PIXEL_NONE: + break; + case DRM_MODE_BLEND_COVERAGE: + case DRM_MODE_BLEND_PREMULTI: + default: + val |= WINCON1_ALPHA_SEL; + val |= WINCON1_BLD_PIX; + val |= WINCON1_ALPHA_MUL; + break; + } + fimd_set_bits(ctx, WINCON(win), WINCONx_BLEND_MODE_MASK, val); + + /* OSD alpha */ + val = VIDISD14C_ALPHA0_R(win_alpha_h) | + VIDISD14C_ALPHA0_G(win_alpha_h) | + VIDISD14C_ALPHA0_B(win_alpha_h) | + VIDISD14C_ALPHA1_R(0x0) | + VIDISD14C_ALPHA1_G(0x0) | + VIDISD14C_ALPHA1_B(0x0); + writel(val, ctx->regs + VIDOSD_C(win)); + + val = VIDW_ALPHA_R(win_alpha_l) | VIDW_ALPHA_G(win_alpha_l) | + VIDW_ALPHA_B(win_alpha_l); + writel(val, ctx->regs + VIDWnALPHA0(win)); + + val = VIDW_ALPHA_R(0x0) | VIDW_ALPHA_G(0x0) | + VIDW_ALPHA_B(0x0); + writel(val, ctx->regs + VIDWnALPHA1(win)); + + fimd_set_bits(ctx, BLENDCON, BLENDCON_NEW_MASK, + BLENDCON_NEW_8BIT_ALPHA_VALUE); +} + +static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win, + struct drm_framebuffer *fb, int width) +{ + struct exynos_drm_plane plane = ctx->planes[win]; + struct exynos_drm_plane_state *state = + to_exynos_plane_state(plane.base.state); + uint32_t pixel_format = fb->format->format; + unsigned int alpha = state->base.alpha; + u32 val = WINCONx_ENWIN; + unsigned int pixel_alpha; + + if (fb->format->has_alpha) + pixel_alpha = state->base.pixel_blend_mode; + else + pixel_alpha = DRM_MODE_BLEND_PIXEL_NONE; /* * In case of s3c64xx, window 0 doesn't support alpha channel. @@ -591,8 +681,7 @@ static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win, break; case DRM_FORMAT_ARGB8888: default: - val |= WINCON1_BPPMODE_25BPP_A1888 - | WINCON1_BLD_PIX | WINCON1_ALPHA_SEL; + val |= WINCON1_BPPMODE_25BPP_A1888; val |= WINCONx_WSWP; val |= WINCONx_BURSTLEN_16WORD; break; @@ -610,25 +699,12 @@ static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win, val &= ~WINCONx_BURSTLEN_MASK; val |= WINCONx_BURSTLEN_4WORD; } - - writel(val, ctx->regs + WINCON(win)); + fimd_set_bits(ctx, WINCON(win), ~WINCONx_BLEND_MODE_MASK, val); /* hardware window 0 doesn't support alpha channel. */ if (win != 0) { - /* OSD alpha */ - val = VIDISD14C_ALPHA0_R(0xf) | - VIDISD14C_ALPHA0_G(0xf) | - VIDISD14C_ALPHA0_B(0xf) | - VIDISD14C_ALPHA1_R(0xf) | - VIDISD14C_ALPHA1_G(0xf) | - VIDISD14C_ALPHA1_B(0xf); - - writel(val, ctx->regs + VIDOSD_C(win)); - - val = VIDW_ALPHA_R(0xf) | VIDW_ALPHA_G(0xf) | - VIDW_ALPHA_G(0xf); - writel(val, ctx->regs + VIDWnALPHA0(win)); - writel(val, ctx->regs + VIDWnALPHA1(win)); + fimd_win_set_bldmod(ctx, win, alpha, pixel_alpha); + fimd_win_set_bldeq(ctx, win, alpha, pixel_alpha); } } @@ -785,7 +861,7 @@ static void fimd_update_plane(struct exynos_drm_crtc *crtc, DRM_DEBUG_KMS("osd size = 0x%x\n", (unsigned int)val); } - fimd_win_set_pixfmt(ctx, win, fb->format->format, state->src.w); + fimd_win_set_pixfmt(ctx, win, fb, state->src.w); /* hardware window 0 doesn't support color key. 
*/ if (win != 0) @@ -987,6 +1063,7 @@ static int fimd_bind(struct device *dev, struct device *master, void *data) ctx->configs[i].num_pixel_formats = ARRAY_SIZE(fimd_formats); ctx->configs[i].zpos = i; ctx->configs[i].type = fimd_win_types[i]; + ctx->configs[i].capabilities = capabilities[i]; ret = exynos_plane_init(drm_dev, &ctx->planes[i], i, &ctx->configs[i]); if (ret) diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig index 33a458b7f1fc..148be8e1a090 100644 --- a/drivers/gpu/drm/i915/Kconfig +++ b/drivers/gpu/drm/i915/Kconfig @@ -131,5 +131,5 @@ config DRM_I915_GVT_KVMGT menu "drm/i915 Debugging" depends on DRM_I915 depends on EXPERT -source drivers/gpu/drm/i915/Kconfig.debug +source "drivers/gpu/drm/i915/Kconfig.debug" endmenu diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index d36a9755ad91..a9de07bb72c8 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -2559,7 +2559,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj) * If there's no chance of allocating enough pages for the whole * object, bail early. */ - if (page_count > totalram_pages) + if (page_count > totalram_pages()) return -ENOMEM; st = kmalloc(sizeof(*st), GFP_KERNEL); diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c index 2c9b284036d1..3df77020aada 100644 --- a/drivers/gpu/drm/i915/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c @@ -113,27 +113,25 @@ static void del_object(struct i915_mmu_object *mo) } static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct i915_mmu_notifier *mn = container_of(_mn, struct i915_mmu_notifier, mn); struct i915_mmu_object *mo; struct interval_tree_node *it; LIST_HEAD(cancelled); + unsigned long end; if (RB_EMPTY_ROOT(&mn->objects.rb_root)) return 0; /* interval ranges are inclusive, but invalidate range is exclusive */ - end--; + end = range->end - 1; spin_lock(&mn->lock); - it = interval_tree_iter_first(&mn->objects, start, end); + it = interval_tree_iter_first(&mn->objects, range->start, end); while (it) { - if (!blockable) { + if (!range->blockable) { spin_unlock(&mn->lock); return -EAGAIN; } @@ -151,7 +149,7 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, queue_work(mn->wq, &mo->work); list_add(&mo->link, &cancelled); - it = interval_tree_iter_next(it, start, end); + it = interval_tree_iter_next(it, range->start, end); } list_for_each_entry(mo, &cancelled, link) del_object(mo); diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index 69fe86b30fbb..a9ed0ecc94e2 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -170,7 +170,7 @@ static int igt_ppgtt_alloc(void *arg) * This should ensure that we do not run into the oomkiller during * the test and take down the machine wilfully. 
*/ - limit = totalram_pages << PAGE_SHIFT; + limit = totalram_pages() << PAGE_SHIFT; limit = min(ppgtt->vm.total, limit); /* Check we can allocate the entire range */ @@ -1244,7 +1244,7 @@ static int exercise_mock(struct drm_i915_private *i915, u64 hole_start, u64 hole_end, unsigned long end_time)) { - const u64 limit = totalram_pages << PAGE_SHIFT; + const u64 limit = totalram_pages() << PAGE_SHIFT; struct i915_gem_context *ctx; struct i915_hw_ppgtt *ppgtt; IGT_TIMEOUT(end_time); diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c index f8b35df44c60..b3019505065a 100644 --- a/drivers/gpu/drm/radeon/radeon_mn.c +++ b/drivers/gpu/drm/radeon/radeon_mn.c @@ -119,40 +119,38 @@ static void radeon_mn_release(struct mmu_notifier *mn, * unmap them by move them into system domain again. */ static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn); struct ttm_operation_ctx ctx = { false, false }; struct interval_tree_node *it; + unsigned long end; int ret = 0; /* notification is exclusive, but interval is inclusive */ - end -= 1; + end = range->end - 1; /* TODO we should be able to split locking for interval tree and * the tear down. */ - if (blockable) + if (range->blockable) mutex_lock(&rmn->lock); else if (!mutex_trylock(&rmn->lock)) return -EAGAIN; - it = interval_tree_iter_first(&rmn->objects, start, end); + it = interval_tree_iter_first(&rmn->objects, range->start, end); while (it) { struct radeon_mn_node *node; struct radeon_bo *bo; long r; - if (!blockable) { + if (!range->blockable) { ret = -EAGAIN; goto out_unlock; } node = container_of(it, struct radeon_mn_node, it); - it = interval_tree_iter_next(it, start, end); + it = interval_tree_iter_next(it, range->start, end); list_for_each_entry(bo, &node->bos, mn_list) { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index f05a29ff586e..9fd5fbe8bebf 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -583,7 +583,7 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv) dev_priv->map_mode = vmw_dma_map_populate; - if (dma_ops->sync_single_for_cpu) + if (dma_ops && dma_ops->sync_single_for_cpu) dev_priv->map_mode = vmw_dma_alloc_coherent; #ifdef CONFIG_SWIOTLB if (swiotlb_nr_tbl() == 0) diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c index 271f31461da4..47f65857408d 100644 --- a/drivers/hid/hid-cp2112.c +++ b/drivers/hid/hid-cp2112.c @@ -1203,7 +1203,7 @@ static int __maybe_unused cp2112_allocate_irq(struct cp2112_device *dev, return -EINVAL; dev->desc[pin] = gpiochip_request_own_desc(&dev->gc, pin, - "HID/I2C:Event"); + "HID/I2C:Event", 0); if (IS_ERR(dev->desc[pin])) { dev_err(dev->gc.parent, "Failed to request GPIO\n"); return PTR_ERR(dev->desc[pin]); diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c index 41a09f506803..a2a35319f7a5 100644 --- a/drivers/hsi/controllers/omap_ssi_core.c +++ b/drivers/hsi/controllers/omap_ssi_core.c @@ -48,7 +48,7 @@ static DEFINE_IDA(platform_omap_ssi_ida); #ifdef CONFIG_DEBUG_FS -static int ssi_debug_show(struct seq_file *m, void *p __maybe_unused) +static int ssi_regs_show(struct seq_file *m, void *p __maybe_unused) { struct hsi_controller *ssi = m->private; struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi); @@ 
-63,7 +63,7 @@ static int ssi_debug_show(struct seq_file *m, void *p __maybe_unused) return 0; } -static int ssi_debug_gdd_show(struct seq_file *m, void *p __maybe_unused) +static int ssi_gdd_regs_show(struct seq_file *m, void *p __maybe_unused) { struct hsi_controller *ssi = m->private; struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi); @@ -117,29 +117,8 @@ static int ssi_debug_gdd_show(struct seq_file *m, void *p __maybe_unused) return 0; } -static int ssi_regs_open(struct inode *inode, struct file *file) -{ - return single_open(file, ssi_debug_show, inode->i_private); -} - -static int ssi_gdd_regs_open(struct inode *inode, struct file *file) -{ - return single_open(file, ssi_debug_gdd_show, inode->i_private); -} - -static const struct file_operations ssi_regs_fops = { - .open = ssi_regs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; - -static const struct file_operations ssi_gdd_regs_fops = { - .open = ssi_gdd_regs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(ssi_regs); +DEFINE_SHOW_ATTRIBUTE(ssi_gdd_regs); static int ssi_debug_add_ctrl(struct hsi_controller *ssi) { diff --git a/drivers/hsi/controllers/omap_ssi_port.c b/drivers/hsi/controllers/omap_ssi_port.c index 2ada82d2ec8c..b2b3989ccfd2 100644 --- a/drivers/hsi/controllers/omap_ssi_port.c +++ b/drivers/hsi/controllers/omap_ssi_port.c @@ -57,7 +57,7 @@ static void ssi_debug_remove_port(struct hsi_port *port) debugfs_remove_recursive(omap_port->dir); } -static int ssi_debug_port_show(struct seq_file *m, void *p __maybe_unused) +static int ssi_port_regs_show(struct seq_file *m, void *p __maybe_unused) { struct hsi_port *port = m->private; struct omap_ssi_port *omap_port = hsi_port_drvdata(port); @@ -132,17 +132,7 @@ static int ssi_debug_port_show(struct seq_file *m, void *p __maybe_unused) return 0; } -static int ssi_port_regs_open(struct inode *inode, struct file *file) -{ - return single_open(file, ssi_debug_port_show, inode->i_private); -} - -static const struct file_operations ssi_port_regs_fops = { - .open = ssi_port_regs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(ssi_port_regs); static int ssi_div_get(void *data, u64 *val) { diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index fe00b12e4417..ce0ba2062723 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -711,7 +711,6 @@ int vmbus_disconnect_ring(struct vmbus_channel *channel) /* Snapshot the list of subchannels */ spin_lock_irqsave(&channel->lock, flags); list_splice_init(&channel->sc_list, &list); - channel->num_sc = 0; spin_unlock_irqrestore(&channel->lock, flags); list_for_each_entry_safe(cur_channel, tmp, &list, sc_list) { diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c index edd34c167a9b..d01689079e9b 100644 --- a/drivers/hv/channel_mgmt.c +++ b/drivers/hv/channel_mgmt.c @@ -405,7 +405,6 @@ void hv_process_channel_removal(struct vmbus_channel *channel) primary_channel = channel->primary_channel; spin_lock_irqsave(&primary_channel->lock, flags); list_del(&channel->sc_list); - primary_channel->num_sc--; spin_unlock_irqrestore(&primary_channel->lock, flags); } @@ -1302,49 +1301,6 @@ cleanup: return ret; } -/* - * Retrieve the (sub) channel on which to send an outgoing request. - * When a primary channel has multiple sub-channels, we try to - * distribute the load equally amongst all available channels. 
- */ -struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary) -{ - struct list_head *cur, *tmp; - int cur_cpu; - struct vmbus_channel *cur_channel; - struct vmbus_channel *outgoing_channel = primary; - int next_channel; - int i = 1; - - if (list_empty(&primary->sc_list)) - return outgoing_channel; - - next_channel = primary->next_oc++; - - if (next_channel > (primary->num_sc)) { - primary->next_oc = 0; - return outgoing_channel; - } - - cur_cpu = hv_cpu_number_to_vp_number(smp_processor_id()); - list_for_each_safe(cur, tmp, &primary->sc_list) { - cur_channel = list_entry(cur, struct vmbus_channel, sc_list); - if (cur_channel->state != CHANNEL_OPENED_STATE) - continue; - - if (cur_channel->target_vp == cur_cpu) - return cur_channel; - - if (i == next_channel) - return cur_channel; - - i++; - } - - return outgoing_channel; -} -EXPORT_SYMBOL_GPL(vmbus_get_outgoing_channel); - static void invoke_sc_cb(struct vmbus_channel *primary_channel) { struct list_head *cur, *tmp; diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index 11273cd384d6..632d25674e7f 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -33,9 +33,7 @@ #include "hyperv_vmbus.h" /* The one and only */ -struct hv_context hv_context = { - .synic_initialized = false, -}; +struct hv_context hv_context; /* * If false, we're using the old mechanism for stimer0 interrupts @@ -326,8 +324,6 @@ int hv_synic_init(unsigned int cpu) hv_set_synic_state(sctrl.as_uint64); - hv_context.synic_initialized = true; - /* * Register the per-cpu clockevent source. */ @@ -373,7 +369,8 @@ int hv_synic_cleanup(unsigned int cpu) bool channel_found = false; unsigned long flags; - if (!hv_context.synic_initialized) + hv_get_synic_state(sctrl.as_uint64); + if (sctrl.enable != 1) return -EFAULT; /* @@ -435,7 +432,6 @@ int hv_synic_cleanup(unsigned int cpu) hv_set_siefp(siefp.as_uint64); /* Disable the global synic bit */ - hv_get_synic_state(sctrl.as_uint64); sctrl.enable = 0; hv_set_synic_state(sctrl.as_uint64); diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c index 41631512ae97..5301fef16c31 100644 --- a/drivers/hv/hv_balloon.c +++ b/drivers/hv/hv_balloon.c @@ -1090,6 +1090,7 @@ static void process_info(struct hv_dynmem_device *dm, struct dm_info_msg *msg) static unsigned long compute_balloon_floor(void) { unsigned long min_pages; + unsigned long nr_pages = totalram_pages(); #define MB2PAGES(mb) ((mb) << (20 - PAGE_SHIFT)) /* Simple continuous piecewiese linear function: * max MiB -> min MiB gradient @@ -1102,16 +1103,16 @@ static unsigned long compute_balloon_floor(void) * 8192 744 (1/16) * 32768 1512 (1/32) */ - if (totalram_pages < MB2PAGES(128)) - min_pages = MB2PAGES(8) + (totalram_pages >> 1); - else if (totalram_pages < MB2PAGES(512)) - min_pages = MB2PAGES(40) + (totalram_pages >> 2); - else if (totalram_pages < MB2PAGES(2048)) - min_pages = MB2PAGES(104) + (totalram_pages >> 3); - else if (totalram_pages < MB2PAGES(8192)) - min_pages = MB2PAGES(232) + (totalram_pages >> 4); + if (nr_pages < MB2PAGES(128)) + min_pages = MB2PAGES(8) + (nr_pages >> 1); + else if (nr_pages < MB2PAGES(512)) + min_pages = MB2PAGES(40) + (nr_pages >> 2); + else if (nr_pages < MB2PAGES(2048)) + min_pages = MB2PAGES(104) + (nr_pages >> 3); + else if (nr_pages < MB2PAGES(8192)) + min_pages = MB2PAGES(232) + (nr_pages >> 4); else - min_pages = MB2PAGES(488) + (totalram_pages >> 5); + min_pages = MB2PAGES(488) + (nr_pages >> 5); #undef MB2PAGES return min_pages; } diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c index 
d6106e1a0d4a..5054d1105236 100644 --- a/drivers/hv/hv_kvp.c +++ b/drivers/hv/hv_kvp.c @@ -437,7 +437,7 @@ kvp_send_key(struct work_struct *dummy) val32 = in_msg->body.kvp_set.data.value_u32; message->body.kvp_set.data.value_size = sprintf(message->body.kvp_set.data.value, - "%d", val32) + 1; + "%u", val32) + 1; break; case REG_U64: diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c index 423205077bf6..f10eeb120c8b 100644 --- a/drivers/hv/hv_util.c +++ b/drivers/hv/hv_util.c @@ -483,7 +483,7 @@ MODULE_DEVICE_TABLE(vmbus, id_table); /* The one and only one */ static struct hv_driver util_drv = { - .name = "hv_util", + .name = "hv_utils", .id_table = id_table, .probe = util_probe, .remove = util_remove, diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h index ea201034b248..a1f6ce6e5974 100644 --- a/drivers/hv/hyperv_vmbus.h +++ b/drivers/hv/hyperv_vmbus.h @@ -162,8 +162,6 @@ struct hv_context { void *tsc_page; - bool synic_initialized; - struct hv_per_cpu_context __percpu *cpu_context; /* diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig index 81da17a42dc9..6f929bfa9fcd 100644 --- a/drivers/hwmon/Kconfig +++ b/drivers/hwmon/Kconfig @@ -11,7 +11,7 @@ menuconfig HWMON of a system. Most modern motherboards include such a device. It can include temperature sensors, voltage sensors, fan speed sensors and various additional features such as the ability to - control the speed of the fans. If you want this support you + control the speed of the fans. If you want this support you should say Y here and also to the specific driver(s) for your sensors chip(s) below. @@ -19,7 +19,7 @@ menuconfig HWMON sensors-detect script from the lm_sensors package. Read <file:Documentation/hwmon/userspace-tools> for details. - This support can also be built as a module. If so, the module + This support can also be built as a module. If so, the module will be called hwmon. if HWMON @@ -46,7 +46,7 @@ config SENSORS_AB8500 AB8500 die and two GPADC channels. The GPADC channel are preferably used to access sensors outside the AB8500 chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called abx500-temp. config SENSORS_ABITUGURU @@ -61,7 +61,7 @@ config SENSORS_ABITUGURU of which motherboards have which revision see Documentation/hwmon/abituguru - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called abituguru. config SENSORS_ABITUGURU3 @@ -75,7 +75,7 @@ config SENSORS_ABITUGURU3 2005). For more info and a list of which motherboards have which revision see Documentation/hwmon/abituguru3 - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called abituguru3. config SENSORS_AD7314 @@ -116,7 +116,7 @@ config SENSORS_ADM1021 and ADM1023 sensor chips and clones: Maxim MAX1617 and MAX1617A, Genesys Logic GL523SM, National Semiconductor LM84 and TI THMC10. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adm1021. config SENSORS_ADM1025 @@ -127,7 +127,7 @@ config SENSORS_ADM1025 If you say yes here you get support for Analog Devices ADM1025 and Philips NE1619 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adm1025.
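Editor's aside (not part of the patch): a worked example of the piecewise-linear floor computed by compute_balloon_floor() in the hv_balloon hunk above, rebuilt as standalone userspace C under the assumption of 4 KiB pages (PAGE_SHIFT = 12):

    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define MB2PAGES(mb) ((unsigned long)(mb) << (20 - PAGE_SHIFT))

    /* Mirrors the branch structure of compute_balloon_floor(). */
    static unsigned long balloon_floor(unsigned long nr_pages)
    {
            if (nr_pages < MB2PAGES(128))
                    return MB2PAGES(8) + (nr_pages >> 1);
            if (nr_pages < MB2PAGES(512))
                    return MB2PAGES(40) + (nr_pages >> 2);
            if (nr_pages < MB2PAGES(2048))
                    return MB2PAGES(104) + (nr_pages >> 3);
            if (nr_pages < MB2PAGES(8192))
                    return MB2PAGES(232) + (nr_pages >> 4);
            return MB2PAGES(488) + (nr_pages >> 5);
    }

    int main(void)
    {
            /* A 1 GiB guest: 104 MiB + 1024 MiB / 8 = 232 MiB floor. */
            assert(balloon_floor(MB2PAGES(1024)) == MB2PAGES(232));
            /* Rows from the table in the comment: 8192 -> 744, 32768 -> 1512. */
            assert(balloon_floor(MB2PAGES(8192)) == MB2PAGES(744));
            assert(balloon_floor(MB2PAGES(32768)) == MB2PAGES(1512));
            printf("1 GiB guest -> floor of %lu pages\n",
                   balloon_floor(MB2PAGES(1024)));
            return 0;
    }

The gradients and offsets make the floor continuous at every breakpoint; for example, at 2048 MiB both neighbouring branches give 104 + 2048/8 = 232 + 2048/16 = 360 MiB.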
config SENSORS_ADM1026 @@ -138,7 +138,7 @@ config SENSORS_ADM1026 If you say yes here you get support for Analog Devices ADM1026 sensor chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adm1026. config SENSORS_ADM1029 @@ -149,7 +149,7 @@ config SENSORS_ADM1029 sensor chip. Very rare chip, please let us know you use it. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adm1029. config SENSORS_ADM1031 @@ -159,7 +159,7 @@ config SENSORS_ADM1031 If you say yes here you get support for Analog Devices ADM1031 and ADM1030 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adm1031. config SENSORS_ADM9240 @@ -170,7 +170,7 @@ config SENSORS_ADM9240 If you say yes here you get support for Analog Devices ADM9240, Dallas DS1780, National Semiconductor LM81 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adm9240. config SENSORS_ADT7X10 @@ -179,7 +179,7 @@ config SENSORS_ADT7X10 This module contains common code shared by the ADT7310/ADT7320 and ADT7410/ADT7420 temperature monitoring chip drivers. - If build as a module, the module will be called adt7x10. + If built as a module, the module will be called adt7x10. config SENSORS_ADT7310 tristate "Analog Devices ADT7310/ADT7320" @@ -242,7 +242,7 @@ config SENSORS_ADT7475 ADT7473, ADT7475, ADT7476 and ADT7490 hardware monitoring chips. - This driver can also be build as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adt7475. config SENSORS_ASC7621 @@ -255,7 +255,7 @@ config SENSORS_ASC7621 aSC7621 aSC7621a - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called asc7621. config SENSORS_K8TEMP @@ -267,7 +267,7 @@ config SENSORS_K8TEMP microarchitecture. Please note that you will need at least lm-sensors 2.10.1 for proper userspace support. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called k8temp. config SENSORS_K10TEMP @@ -280,7 +280,7 @@ config SENSORS_K10TEMP 12h (Llano), 14h (Brazos), 15h (Bulldozer/Trinity/Kaveri/Carrizo) and 16h (Kabini/Mullins) microarchitectures. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called k10temp. config SENSORS_FAM15H_POWER @@ -290,7 +290,7 @@ config SENSORS_FAM15H_POWER If you say yes here you get support for processor power information of your AMD family 15h CPU. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called fam15h_power. config SENSORS_APPLESMC @@ -326,7 +326,7 @@ config SENSORS_ARM_SCMI and power sensors available on SCMI based platforms. The actual number and type of sensors exported depend on the platform. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called scmi-hwmon. config SENSORS_ARM_SCPI @@ -346,7 +346,7 @@ config SENSORS_ASB100 If you say yes here you get support for the ASB100 Bach sensor chip found on some Asus mainboards. 
- This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called asb100. config SENSORS_ASPEED @@ -371,7 +371,7 @@ config SENSORS_ATXP1 If your board have such a chip, you are able to control your CPU core and other voltages. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called atxp1. config SENSORS_DS620 @@ -381,7 +381,7 @@ config SENSORS_DS620 If you say yes here you get support for Dallas Semiconductor DS620 sensor chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ds620. config SENSORS_DS1621 @@ -396,7 +396,7 @@ config SENSORS_DS1621 - Maxim Integrated DS1721 - Maxim Integrated DS1731 - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ds1621. config SENSORS_DELL_SMM @@ -427,7 +427,7 @@ config SENSORS_DA9055 If you say yes here you get support for ADC on the Dialog Semiconductor DA9055 PMIC. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called da9055-hwmon. config SENSORS_I5K_AMB @@ -448,7 +448,7 @@ config SENSORS_F71805F features of the Fintek F71805F/FG, F71806F/FG and F71872F/FG Super-I/O chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called f71805f. config SENSORS_F71882FG @@ -470,7 +470,7 @@ config SENSORS_F71882FG F81801U F81865F - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called f71882fg. config SENSORS_F75375S @@ -480,7 +480,7 @@ config SENSORS_F75375S If you say yes here you get support for hardware monitoring features of the Fintek F75375S/SP, F75373 and F75387 - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called f75375s. config SENSORS_MC13783_ADC @@ -502,7 +502,7 @@ config SENSORS_FSCHMD fscscy and fscher drivers and adding support for several other FSC sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called fschmd. config SENSORS_FTSTEUTATES @@ -524,7 +524,7 @@ config SENSORS_GL518SM If you say yes here you get support for Genesys Logic GL518SM sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called gl518sm. config SENSORS_GL520SM @@ -535,7 +535,7 @@ config SENSORS_GL520SM If you say yes here you get support for Genesys Logic GL520SM sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called gl520sm. config SENSORS_G760A @@ -545,7 +545,7 @@ config SENSORS_G760A If you say yes here you get support for Global Mixed-mode Technology Inc G760A fan speed PWM controller chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called g760a. 
config SENSORS_G762 @@ -555,7 +555,7 @@ config SENSORS_G762 If you say yes here you get support for Global Mixed-mode Technology Inc G762 and G763 fan speed PWM controller chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called g762. config SENSORS_GPIO_FAN @@ -566,7 +566,7 @@ config SENSORS_GPIO_FAN help If you say yes here you get support for fans connected to GPIO lines. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called gpio-fan. config SENSORS_HIH6130 @@ -576,7 +576,7 @@ config SENSORS_HIH6130 If you say yes here you get support for Honeywell Humidicon HIH-6130 and HIH-6131 Humidicon humidity sensors. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called hih6130. config SENSORS_IBMAEM @@ -590,7 +590,7 @@ config SENSORS_IBMAEM the x3350, x3550, x3650, x3655, x3755, x3850 M2, x3950 M2, and certain HC10/HS2x/LS2x/QS2x blades. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ibmaem. config SENSORS_IBMPEX @@ -604,7 +604,7 @@ config SENSORS_IBMPEX x3655, and x3755; the x3800, x3850, and x3950 models that have PCI Express; and some of the HS2x, LS2x, and QS2x blades. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ibmpex. config SENSORS_IBMPOWERNV @@ -656,7 +656,7 @@ config SENSORS_IT87 IT8603E, IT8620E, IT8623E, and IT8628E sensor chips, and the SiS950 clone. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called it87. config SENSORS_JZ4740 @@ -666,7 +666,7 @@ config SENSORS_JZ4740 If you say yes here you get support for reading adc values from the ADCIN pin on Ingenic JZ4740 SoC based boards. - This driver can also be build as a module. If so, the module will be + This driver can also be built as a module. If so, the module will be called jz4740-hwmon. config SENSORS_JC42 @@ -680,7 +680,7 @@ config SENSORS_JC42 MCP9808, MCP98242, MCP98243, MCP98244, MCP9843, SE97, SE98, STTS424(E), STTS2002, STTS3000, TSE2002, TSE2004, TS3000, and TS3001. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called jc42. config SENSORS_POWR1220 @@ -691,7 +691,7 @@ config SENSORS_POWR1220 functions of the Lattice POWR1220 isp Power Supply Monitoring, Sequencing and Margining Controller. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called powr1220. config SENSORS_LINEAGE @@ -702,7 +702,7 @@ config SENSORS_LINEAGE series of DC/DC and AC/DC converters such as CP1800, CP2000AC, CP2000DC, CP2725, and others. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lineage-pem. config SENSORS_LTC2945 @@ -803,7 +803,7 @@ config SENSORS_MAX1111 Say y here to support Maxim's MAX1110, MAX1111, MAX1112, and MAX1113 ADC chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max1111. 
config SENSORS_MAX16065 @@ -819,7 +819,7 @@ config SENSORS_MAX16065 MAX16070 MAX16071 - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max16065. config SENSORS_MAX1619 @@ -828,7 +828,7 @@ config SENSORS_MAX1619 help If you say yes here you get support for MAX1619 sensor chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max1619. config SENSORS_MAX1668 @@ -838,7 +838,7 @@ config SENSORS_MAX1668 If you say yes here you get support for MAX1668, MAX1989 and MAX1805 chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max1668. config SENSORS_MAX197 @@ -881,7 +881,7 @@ config SENSORS_MAX6639 If you say yes here you get support for the MAX6639 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max6639. config SENSORS_MAX6642 @@ -892,7 +892,7 @@ config SENSORS_MAX6642 MAX6642 is a SMBus-Compatible Remote/Local Temperature Sensor with Overtemperature Alarm from Maxim. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max6642. config SENSORS_MAX6650 @@ -902,7 +902,7 @@ config SENSORS_MAX6650 If you say yes here you get support for the MAX6650 / MAX6651 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max6650. config SENSORS_MAX6697 @@ -913,7 +913,7 @@ config SENSORS_MAX6697 MAX6636, MAX6689, MAX6693, MAX6694, MAX6697, MAX6698, and MAX6699 temperature sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max6697. config SENSORS_MAX31790 @@ -923,7 +923,7 @@ config SENSORS_MAX31790 If you say yes here you get support for 6-Channel PWM-Output Fan RPM Controller. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called max31790. config SENSORS_MCP3021 @@ -934,7 +934,7 @@ config SENSORS_MCP3021 The MCP3021 is a A/D converter (ADC) with 10-bit and the MCP3221 with 12-bit resolution. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called mcp3021. config SENSORS_MLXREG_FAN @@ -957,7 +957,7 @@ config SENSORS_TC654 The TC654 and TC655 are PWM mode fan speed controllers with FanSense technology for use with brushless DC fans. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called tc654. config SENSORS_MENF21BMC_HWMON @@ -983,7 +983,7 @@ config SENSORS_ADCXX Examples : ADC081S101, ADC124S501, ... - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called adcxx. config SENSORS_LM63 @@ -996,7 +996,7 @@ config SENSORS_LM63 on the Tyan S4882 (Thunder K8QS Pro) motherboard, among others. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm63. 
config SENSORS_LM70 @@ -1007,7 +1007,7 @@ config SENSORS_LM70 LM70, LM71, LM74 and Texas Instruments TMP121/TMP123 digital tempera- ture sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm70. config SENSORS_LM73 @@ -1016,7 +1016,7 @@ config SENSORS_LM73 help If you say yes here you get support for National Semiconductor LM73 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm73. config SENSORS_LM75 @@ -1035,6 +1035,7 @@ config SENSORS_LM75 - National Semiconductor LM75, LM75A - NXP's LM75A - ST Microelectronics STDS75 + - ST Microelectronics STLM75 - TelCom (now Microchip) TCN75 - Texas Instruments TMP100, TMP101, TMP105, TMP112, TMP75, TMP175, TMP275 @@ -1046,7 +1047,7 @@ config SENSORS_LM75 that with some chips which don't replicate LM75 quirks exactly, you may need the "force" module parameter. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm75. config SENSORS_LM77 @@ -1056,7 +1057,7 @@ config SENSORS_LM77 If you say yes here you get support for National Semiconductor LM77 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm77. config SENSORS_LM78 @@ -1067,7 +1068,7 @@ config SENSORS_LM78 If you say yes here you get support for National Semiconductor LM78, LM78-J and LM79. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm78. config SENSORS_LM80 @@ -1077,7 +1078,7 @@ config SENSORS_LM80 If you say yes here you get support for National Semiconductor LM80 and LM96080 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm80. config SENSORS_LM83 @@ -1087,7 +1088,7 @@ config SENSORS_LM83 If you say yes here you get support for National Semiconductor LM82 and LM83 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm83. config SENSORS_LM85 @@ -1099,7 +1100,7 @@ config SENSORS_LM85 sensor chips and clones: ADM1027, ADT7463, ADT7468, EMC6D100, EMC6D101, EMC6D102, and EMC6D103. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm85. config SENSORS_LM87 @@ -1110,7 +1111,7 @@ config SENSORS_LM87 If you say yes here you get support for National Semiconductor LM87 and Analog Devices ADM1024 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm87. config SENSORS_LM90 @@ -1124,7 +1125,7 @@ config SENSORS_LM90 Winbond/Nuvoton W83L771W/G/AWG/ASG, Philips SA56004, GMT G781, and Texas Instruments TMP451 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm90. config SENSORS_LM92 @@ -1134,7 +1135,7 @@ config SENSORS_LM92 If you say yes here you get support for National Semiconductor LM92 and Maxim MAX6635 sensor chips. - This driver can also be built as a module. 
If so, the module + This driver can also be built as a module. If so, the module will be called lm92. config SENSORS_LM93 @@ -1145,7 +1146,7 @@ config SENSORS_LM93 If you say yes here you get support for National Semiconductor LM93, LM94, and compatible sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm93. config SENSORS_LM95234 @@ -1155,7 +1156,7 @@ config SENSORS_LM95234 If you say yes here you get support for the LM95233 and LM95234 temperature sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm95234. config SENSORS_LM95241 @@ -1165,7 +1166,7 @@ config SENSORS_LM95241 If you say yes here you get support for LM95231 and LM95241 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm95241. config SENSORS_LM95245 @@ -1176,7 +1177,7 @@ config SENSORS_LM95245 If you say yes here you get support for LM95235 and LM95245 temperature sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called lm95245. config SENSORS_PC87360 @@ -1190,7 +1191,7 @@ config SENSORS_PC87360 control. The PC87365 and PC87366 additionally have voltage and temperature monitoring. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called pc87360. config SENSORS_PC87427 @@ -1204,7 +1205,7 @@ config SENSORS_PC87427 monitoring. Fan speed monitoring and control are supported, as well as temperature monitoring. Voltages aren't supported yet. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called pc87427. config SENSORS_NTC_THERMISTOR @@ -1218,9 +1219,10 @@ config SENSORS_NTC_THERMISTOR Currently, this driver supports NCP15WB473, NCP18WB473, NCP21WB473, NCP03WB473, NCP15WL333, - NCP03WF104 and NCP15XH103 from Murata and B57330V2103 from EPCOS. + NCP03WF104 and NCP15XH103 from Murata and B57330V2103 and + B57891S0103 from EPCOS. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ntc-thermistor. config SENSORS_NCT6683 @@ -1230,7 +1232,7 @@ config SENSORS_NCT6683 If you say yes here you get support for the hardware monitoring functionality of the Nuvoton NCT6683D eSIO chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called nct6683. config SENSORS_NCT6775 @@ -1244,7 +1246,7 @@ config SENSORS_NCT6775 Super-I/O chips. This driver replaces the w83627ehf driver for NCT6775F and NCT6776F. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called nct6775. config SENSORS_NCT7802 @@ -1255,7 +1257,7 @@ config SENSORS_NCT7802 If you say yes here you get support for the Nuvoton NCT7802Y hardware monitoring chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called nct7802. 
config SENSORS_NCT7904 @@ -1265,7 +1267,7 @@ config SENSORS_NCT7904 If you say yes here you get support for the Nuvoton NCT7904 hardware monitoring chip, including manual fan speed control. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called nct7904. config SENSORS_NPCM7XX @@ -1293,6 +1295,8 @@ config SENSORS_NSA320 This driver can also be built as a module. If so, the module will be called nsa320-hwmon. +source "drivers/hwmon/occ/Kconfig" + config SENSORS_PCF8591 tristate "Philips PCF8591 ADC/DAC" depends on I2C @@ -1300,13 +1304,13 @@ config SENSORS_PCF8591 If you say yes here you get support for Philips PCF8591 4-channel ADC, 1-channel DAC chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called pcf8591. These devices are hard to detect and rarely found on mainstream - hardware. If unsure, say N. + hardware. If unsure, say N. -source drivers/hwmon/pmbus/Kconfig +source "drivers/hwmon/pmbus/Kconfig" config SENSORS_PWM_FAN tristate "PWM fan" @@ -1317,7 +1321,7 @@ config SENSORS_PWM_FAN The driver uses the generic PWM interface, thus it will work on a variety of SoCs. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called pwm-fan. config SENSORS_RASPBERRYPI_HWMON @@ -1338,7 +1342,7 @@ config SENSORS_SHT15 If you say yes here you get support for the Sensiron SHT10, SHT11, SHT15, SHT71, SHT75 humidity and temperature sensors. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called sht15. config SENSORS_SHT21 @@ -1348,7 +1352,7 @@ config SENSORS_SHT21 If you say yes here you get support for the Sensiron SHT21, SHT25 humidity and temperature sensors. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called sht21. config SENSORS_SHT3x @@ -1359,7 +1363,7 @@ config SENSORS_SHT3x If you say yes here you get support for the Sensiron SHT30 and SHT31 humidity and temperature sensors. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called sht3x. config SENSORS_SHTC1 @@ -1369,7 +1373,7 @@ config SENSORS_SHTC1 If you say yes here you get support for the Sensiron SHTC1 and SHTW1 humidity and temperature sensors. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called shtc1. config SENSORS_S3C @@ -1396,7 +1400,7 @@ config SENSORS_SIS5595 If you say yes here you get support for the integrated sensors in SiS5595 South Bridges. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called sis5595. config SENSORS_DME1737 @@ -1408,7 +1412,7 @@ config SENSORS_DME1737 and fan control features of the SMSC DME1737, SCH311x, SCH5027, and Asus A8000 Super-I/O chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called dme1737. config SENSORS_EMC1403 @@ -1429,7 +1433,7 @@ config SENSORS_EMC2103 If you say yes here you get support for the temperature and fan sensors of the SMSC EMC2103 chips. - This driver can also be built as a module. 
If so, the module + This driver can also be built as a module. If so, the module will be called emc2103. config SENSORS_EMC6W201 @@ -1439,7 +1443,7 @@ config SENSORS_EMC6W201 If you say yes here you get support for the SMSC EMC6W201 hardware monitoring chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called emc6w201. config SENSORS_SMSC47M1 @@ -1456,7 +1460,7 @@ config SENSORS_SMSC47M1 driver, select also "SMSC LPC47M192 and compatibles" below for those. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called smsc47m1. config SENSORS_SMSC47M192 @@ -1473,7 +1477,7 @@ config SENSORS_SMSC47M192 "SMSC LPC47M10x and compatibles" above. You need both drivers if you want fan control and voltage/temperature sensor support. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called smsc47m192. config SENSORS_SMSC47B397 @@ -1483,7 +1487,7 @@ config SENSORS_SMSC47B397 If you say yes here you get support for the SMSC LPC47B397-NC sensor chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called smsc47b397. config SENSORS_SCH56XX_COMMON @@ -1499,7 +1503,7 @@ config SENSORS_SCH5627 features of the SMSC SCH5627 Super-I/O chip including support for the integrated watchdog. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called sch5627. config SENSORS_SCH5636 @@ -1517,7 +1521,7 @@ config SENSORS_SCH5636 Theseus' hardware monitoring features including support for the integrated watchdog. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called sch5636. config SENSORS_STTS751 @@ -1527,7 +1531,7 @@ config SENSORS_STTS751 If you say yes here you get support for STTS751 temperature sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called stts751. config SENSORS_SMM665 @@ -1561,7 +1565,7 @@ config SENSORS_ADS1015 If you say yes here you get support for Texas Instruments ADS1015/ADS1115 12/16-bit 4-input ADC device. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ads1015. config SENSORS_ADS7828 @@ -1573,7 +1577,7 @@ config SENSORS_ADS7828 ADS7830 8-channel A/D converters. ADS7828 resolution is 12-bit, while it is 8-bit on ADS7830. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ads7828. config SENSORS_ADS7871 @@ -1582,7 +1586,7 @@ config SENSORS_ADS7871 help If you say yes here you get support for TI ADS7871 & ADS7870 - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ads7871. config SENSORS_AMC6821 @@ -1592,7 +1596,7 @@ config SENSORS_AMC6821 If you say yes here you get support for the Texas Instruments AMC6821 hardware monitoring chips. - This driver can also be build as a module. If so, the module + This driver can also be built as a module. If so, the module will be called amc6821. 
config SENSORS_INA209 @@ -1616,7 +1620,7 @@ config SENSORS_INA2XX The INA2xx driver is configured for the default configuration of the part as described in the datasheet. Default value for Rshunt is 10 mOhms. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ina2xx. config SENSORS_INA3221 @@ -1627,7 +1631,7 @@ config SENSORS_INA3221 If you say yes here you get support for the TI INA3221 Triple Power Monitor. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called ina3221. config SENSORS_TC74 @@ -1637,7 +1641,7 @@ config SENSORS_TC74 If you say yes here you get support for Microchip TC74 single input temperature sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called tc74. config SENSORS_THMC50 @@ -1647,7 +1651,7 @@ config SENSORS_THMC50 If you say yes here you get support for Texas Instruments THMC50 sensor chips and clones: the Analog Devices ADM1022. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called thmc50. config SENSORS_TMP102 @@ -1658,7 +1662,7 @@ config SENSORS_TMP102 If you say yes here you get support for Texas Instruments TMP102 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called tmp102. config SENSORS_TMP103 @@ -1669,7 +1673,7 @@ config SENSORS_TMP103 If you say yes here you get support for Texas Instruments TMP103 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called tmp103. config SENSORS_TMP108 @@ -1680,7 +1684,7 @@ config SENSORS_TMP108 If you say yes here you get support for Texas Instruments TMP108 sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called tmp108. config SENSORS_TMP401 @@ -1690,7 +1694,7 @@ config SENSORS_TMP401 If you say yes here you get support for Texas Instruments TMP401, TMP411, TMP431, TMP432, TMP435, and TMP461 temperature sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called tmp401. config SENSORS_TMP421 @@ -1700,7 +1704,7 @@ config SENSORS_TMP421 If you say yes here you get support for Texas Instruments TMP421, TMP422, TMP423, TMP441, and TMP442 temperature sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called tmp421. config SENSORS_VEXPRESS @@ -1727,7 +1731,7 @@ config SENSORS_VIA686A If you say yes here you get support for the integrated sensors in Via 686A/B South Bridges. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called via686a. config SENSORS_VT1211 @@ -1738,7 +1742,7 @@ config SENSORS_VT1211 If you say yes here then you get support for hardware monitoring features of the VIA VT1211 Super-I/O chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called vt1211. 
config SENSORS_VT8231 @@ -1749,7 +1753,7 @@ config SENSORS_VT8231 If you say yes here then you get support for the integrated sensors in the VIA VT8231 device. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called vt8231. config SENSORS_W83773G @@ -1759,7 +1763,7 @@ config SENSORS_W83773G If you say yes here you get support for the Nuvoton W83773G hardware monitoring chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83773g. config SENSORS_W83781D @@ -1771,7 +1775,7 @@ config SENSORS_W83781D of sensor chips: the W83781D, W83782D and W83783S, and the similar Asus AS99127F. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83781d. config SENSORS_W83791D @@ -1781,7 +1785,7 @@ config SENSORS_W83791D help If you say yes here you get support for the Winbond W83791D chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83791d. config SENSORS_W83792D @@ -1790,7 +1794,7 @@ config SENSORS_W83792D help If you say yes here you get support for the Winbond W83792D chip. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83792d. config SENSORS_W83793 @@ -1802,7 +1806,7 @@ config SENSORS_W83793 hardware monitoring chip, including support for the integrated watchdog. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83793. config SENSORS_W83795 @@ -1813,7 +1817,7 @@ config SENSORS_W83795 W83795ADG hardware monitoring chip, including manual fan speed control. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83795. config SENSORS_W83795_FANCTRL @@ -1840,7 +1844,7 @@ config SENSORS_W83L785TS sensor chip, which is used on the Asus A7N8X, among other motherboards. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83l785ts. config SENSORS_W83L786NG @@ -1850,7 +1854,7 @@ config SENSORS_W83L786NG If you say yes here you get support for the Winbond W83L786NG and W83L786NR sensor chips. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83l786ng. config SENSORS_W83627HF @@ -1862,7 +1866,7 @@ config SENSORS_W83627HF of sensor chips: the W83627HF, W83627THF, W83637HF, W83687THF and W83697HF. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83627hf. config SENSORS_W83627EHF @@ -1882,7 +1886,7 @@ config SENSORS_W83627EHF This driver also supports Nuvoton W83667HG, W83667HG-B, NCT6775F (also known as W83667HG-I), and NCT6776F. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called w83627ehf. config SENSORS_WM831X @@ -1893,7 +1897,7 @@ config SENSORS_WM831X monitoring functionality of the Wolfson Microelectronics WM831x series of PMICs. - This driver can also be built as a module. 
If so, the module + This driver can also be built as a module. If so, the module will be called wm831x-hwmon. config SENSORS_WM8350 @@ -1903,7 +1907,7 @@ config SENSORS_WM8350 If you say yes here you get support for the hardware monitoring features of the WM835x series of PMICs. - This driver can also be built as a module. If so, the module + This driver can also be built as a module. If so, the module will be called wm8350-hwmon. config SENSORS_ULTRA45 diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile index 93f7f41ea4ad..f5c7b442e69e 100644 --- a/drivers/hwmon/Makefile +++ b/drivers/hwmon/Makefile @@ -178,6 +178,7 @@ obj-$(CONFIG_SENSORS_WM831X) += wm831x-hwmon.o obj-$(CONFIG_SENSORS_WM8350) += wm8350-hwmon.o obj-$(CONFIG_SENSORS_XGENE) += xgene-hwmon.o +obj-$(CONFIG_SENSORS_OCC) += occ/ obj-$(CONFIG_PMBUS) += pmbus/ ccflags-$(CONFIG_HWMON_DEBUG_CHIP) := -DDEBUG diff --git a/drivers/hwmon/abx500.c b/drivers/hwmon/abx500.c index d87cae8c635f..d4ad91d9f200 100644 --- a/drivers/hwmon/abx500.c +++ b/drivers/hwmon/abx500.c @@ -121,7 +121,7 @@ static void gpadc_monitor(struct work_struct *work) } /* HWMON sysfs interfaces */ -static ssize_t show_name(struct device *dev, struct device_attribute *devattr, +static ssize_t name_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct abx500_temp *data = dev_get_drvdata(dev); @@ -129,7 +129,7 @@ static ssize_t show_name(struct device *dev, struct device_attribute *devattr, return data->ops.show_name(dev, devattr, buf); } -static ssize_t show_label(struct device *dev, +static ssize_t label_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct abx500_temp *data = dev_get_drvdata(dev); @@ -137,7 +137,7 @@ static ssize_t show_label(struct device *dev, return data->ops.show_label(dev, devattr, buf); } -static ssize_t show_input(struct device *dev, +static ssize_t input_show(struct device *dev, struct device_attribute *devattr, char *buf) { int ret, temp; @@ -153,8 +153,8 @@ static ssize_t show_input(struct device *dev, } /* Set functions (RW nodes) */ -static ssize_t set_min(struct device *dev, struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t min_store(struct device *dev, struct device_attribute *devattr, + const char *buf, size_t count) { unsigned long val; struct abx500_temp *data = dev_get_drvdata(dev); @@ -173,8 +173,8 @@ static ssize_t set_min(struct device *dev, struct device_attribute *devattr, return count; } -static ssize_t set_max(struct device *dev, struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t max_store(struct device *dev, struct device_attribute *devattr, + const char *buf, size_t count) { unsigned long val; struct abx500_temp *data = dev_get_drvdata(dev); @@ -193,9 +193,9 @@ static ssize_t set_max(struct device *dev, struct device_attribute *devattr, return count; } -static ssize_t set_max_hyst(struct device *dev, - struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t max_hyst_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { unsigned long val; struct abx500_temp *data = dev_get_drvdata(dev); @@ -215,8 +215,8 @@ static ssize_t set_max_hyst(struct device *dev, } /* Show functions (RO nodes) */ -static ssize_t show_min(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t min_show(struct device *dev, struct device_attribute *devattr, + char *buf) { struct abx500_temp *data = dev_get_drvdata(dev); struct 
sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -224,8 +224,8 @@ static ssize_t show_min(struct device *dev, return sprintf(buf, "%lu\n", data->min[attr->index]); } -static ssize_t show_max(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t max_show(struct device *dev, struct device_attribute *devattr, + char *buf) { struct abx500_temp *data = dev_get_drvdata(dev); struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -233,7 +233,7 @@ static ssize_t show_max(struct device *dev, return sprintf(buf, "%lu\n", data->max[attr->index]); } -static ssize_t show_max_hyst(struct device *dev, +static ssize_t max_hyst_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct abx500_temp *data = dev_get_drvdata(dev); @@ -242,7 +242,7 @@ static ssize_t show_max_hyst(struct device *dev, return sprintf(buf, "%lu\n", data->max_hyst[attr->index]); } -static ssize_t show_min_alarm(struct device *dev, +static ssize_t min_alarm_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct abx500_temp *data = dev_get_drvdata(dev); @@ -251,7 +251,7 @@ static ssize_t show_min_alarm(struct device *dev, return sprintf(buf, "%d\n", data->min_alarm[attr->index]); } -static ssize_t show_max_alarm(struct device *dev, +static ssize_t max_alarm_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct abx500_temp *data = dev_get_drvdata(dev); @@ -273,47 +273,43 @@ static umode_t abx500_attrs_visible(struct kobject *kobj, } /* Chip name, required by hwmon */ -static SENSOR_DEVICE_ATTR(name, S_IRUGO, show_name, NULL, 0); +static SENSOR_DEVICE_ATTR_RO(name, name, 0); /* GPADC - SENSOR1 */ -static SENSOR_DEVICE_ATTR(temp1_label, S_IRUGO, show_label, NULL, 0); -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_input, NULL, 0); -static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, show_min, set_min, 0); -static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_max, set_max, 0); -static SENSOR_DEVICE_ATTR(temp1_max_hyst, S_IWUSR | S_IRUGO, - show_max_hyst, set_max_hyst, 0); -static SENSOR_DEVICE_ATTR(temp1_min_alarm, S_IRUGO, show_min_alarm, NULL, 0); -static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_max_alarm, NULL, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_label, label, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_input, input, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_min, min, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_max, max, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_max_hyst, max_hyst, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_min_alarm, min_alarm, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, max_alarm, 0); /* GPADC - SENSOR2 */ -static SENSOR_DEVICE_ATTR(temp2_label, S_IRUGO, show_label, NULL, 1); -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_input, NULL, 1); -static SENSOR_DEVICE_ATTR(temp2_min, S_IWUSR | S_IRUGO, show_min, set_min, 1); -static SENSOR_DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_max, set_max, 1); -static SENSOR_DEVICE_ATTR(temp2_max_hyst, S_IWUSR | S_IRUGO, - show_max_hyst, set_max_hyst, 1); -static SENSOR_DEVICE_ATTR(temp2_min_alarm, S_IRUGO, show_min_alarm, NULL, 1); -static SENSOR_DEVICE_ATTR(temp2_max_alarm, S_IRUGO, show_max_alarm, NULL, 1); +static SENSOR_DEVICE_ATTR_RO(temp2_label, label, 1); +static SENSOR_DEVICE_ATTR_RO(temp2_input, input, 1); +static SENSOR_DEVICE_ATTR_RW(temp2_min, min, 1); +static SENSOR_DEVICE_ATTR_RW(temp2_max, max, 1); +static SENSOR_DEVICE_ATTR_RW(temp2_max_hyst, max_hyst, 1); +static SENSOR_DEVICE_ATTR_RO(temp2_min_alarm, 
min_alarm, 1); +static SENSOR_DEVICE_ATTR_RO(temp2_max_alarm, max_alarm, 1); /* GPADC - SENSOR3 */ -static SENSOR_DEVICE_ATTR(temp3_label, S_IRUGO, show_label, NULL, 2); -static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_input, NULL, 2); -static SENSOR_DEVICE_ATTR(temp3_min, S_IWUSR | S_IRUGO, show_min, set_min, 2); -static SENSOR_DEVICE_ATTR(temp3_max, S_IWUSR | S_IRUGO, show_max, set_max, 2); -static SENSOR_DEVICE_ATTR(temp3_max_hyst, S_IWUSR | S_IRUGO, - show_max_hyst, set_max_hyst, 2); -static SENSOR_DEVICE_ATTR(temp3_min_alarm, S_IRUGO, show_min_alarm, NULL, 2); -static SENSOR_DEVICE_ATTR(temp3_max_alarm, S_IRUGO, show_max_alarm, NULL, 2); +static SENSOR_DEVICE_ATTR_RO(temp3_label, label, 2); +static SENSOR_DEVICE_ATTR_RO(temp3_input, input, 2); +static SENSOR_DEVICE_ATTR_RW(temp3_min, min, 2); +static SENSOR_DEVICE_ATTR_RW(temp3_max, max, 2); +static SENSOR_DEVICE_ATTR_RW(temp3_max_hyst, max_hyst, 2); +static SENSOR_DEVICE_ATTR_RO(temp3_min_alarm, min_alarm, 2); +static SENSOR_DEVICE_ATTR_RO(temp3_max_alarm, max_alarm, 2); /* GPADC - SENSOR4 */ -static SENSOR_DEVICE_ATTR(temp4_label, S_IRUGO, show_label, NULL, 3); -static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_input, NULL, 3); -static SENSOR_DEVICE_ATTR(temp4_min, S_IWUSR | S_IRUGO, show_min, set_min, 3); -static SENSOR_DEVICE_ATTR(temp4_max, S_IWUSR | S_IRUGO, show_max, set_max, 3); -static SENSOR_DEVICE_ATTR(temp4_max_hyst, S_IWUSR | S_IRUGO, - show_max_hyst, set_max_hyst, 3); -static SENSOR_DEVICE_ATTR(temp4_min_alarm, S_IRUGO, show_min_alarm, NULL, 3); -static SENSOR_DEVICE_ATTR(temp4_max_alarm, S_IRUGO, show_max_alarm, NULL, 3); +static SENSOR_DEVICE_ATTR_RO(temp4_label, label, 3); +static SENSOR_DEVICE_ATTR_RO(temp4_input, input, 3); +static SENSOR_DEVICE_ATTR_RW(temp4_min, min, 3); +static SENSOR_DEVICE_ATTR_RW(temp4_max, max, 3); +static SENSOR_DEVICE_ATTR_RW(temp4_max_hyst, max_hyst, 3); +static SENSOR_DEVICE_ATTR_RO(temp4_min_alarm, min_alarm, 3); +static SENSOR_DEVICE_ATTR_RO(temp4_max_alarm, max_alarm, 3); static struct attribute *abx500_temp_attributes[] = { &sensor_dev_attr_name.dev_attr.attr, diff --git a/drivers/hwmon/acpi_power_meter.c b/drivers/hwmon/acpi_power_meter.c index 34e45b97629e..e98591fa2528 100644 --- a/drivers/hwmon/acpi_power_meter.c +++ b/drivers/hwmon/acpi_power_meter.c @@ -638,12 +638,12 @@ static int register_attrs(struct acpi_power_meter_resource *resource, while (attrs->label) { sensors->dev_attr.attr.name = attrs->label; - sensors->dev_attr.attr.mode = S_IRUGO; + sensors->dev_attr.attr.mode = 0444; sensors->dev_attr.show = attrs->show; sensors->index = attrs->index; if (attrs->set) { - sensors->dev_attr.attr.mode |= S_IWUSR; + sensors->dev_attr.attr.mode |= 0200; sensors->dev_attr.store = attrs->set; } diff --git a/drivers/hwmon/ad7314.c b/drivers/hwmon/ad7314.c index 8ea35932fbaa..be521bdae114 100644 --- a/drivers/hwmon/ad7314.c +++ b/drivers/hwmon/ad7314.c @@ -53,9 +53,9 @@ static int ad7314_spi_read(struct ad7314_data *chip) return be16_to_cpu(chip->rx); } -static ssize_t ad7314_show_temperature(struct device *dev, - struct device_attribute *attr, - char *buf) +static ssize_t ad7314_temperature_show(struct device *dev, + struct device_attribute *attr, + char *buf) { struct ad7314_data *chip = dev_get_drvdata(dev); s16 data; @@ -87,8 +87,7 @@ static ssize_t ad7314_show_temperature(struct device *dev, } } -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, - ad7314_show_temperature, NULL, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_input, ad7314_temperature, 0); static struct attribute 
*ad7314_attrs[] = { &sensor_dev_attr_temp1_input.dev_attr.attr, diff --git a/drivers/hwmon/ad7414.c b/drivers/hwmon/ad7414.c index cec227f13874..f13806d731fa 100644 --- a/drivers/hwmon/ad7414.c +++ b/drivers/hwmon/ad7414.c @@ -107,25 +107,25 @@ static struct ad7414_data *ad7414_update_device(struct device *dev) return data; } -static ssize_t show_temp_input(struct device *dev, +static ssize_t temp_input_show(struct device *dev, struct device_attribute *attr, char *buf) { struct ad7414_data *data = ad7414_update_device(dev); return sprintf(buf, "%d\n", ad7414_temp_from_reg(data->temp_input)); } -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp_input, NULL, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_input, temp_input, 0); -static ssize_t show_max_min(struct device *dev, struct device_attribute *attr, - char *buf) +static ssize_t max_min_show(struct device *dev, struct device_attribute *attr, + char *buf) { int index = to_sensor_dev_attr(attr)->index; struct ad7414_data *data = ad7414_update_device(dev); return sprintf(buf, "%d\n", data->temps[index] * 1000); } -static ssize_t set_max_min(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t max_min_store(struct device *dev, + struct device_attribute *attr, const char *buf, + size_t count) { struct ad7414_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; @@ -147,12 +147,10 @@ static ssize_t set_max_min(struct device *dev, return count; } -static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, - show_max_min, set_max_min, 0); -static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, - show_max_min, set_max_min, 1); +static SENSOR_DEVICE_ATTR_RW(temp1_max, max_min, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_min, max_min, 1); -static ssize_t show_alarm(struct device *dev, struct device_attribute *attr, +static ssize_t alarm_show(struct device *dev, struct device_attribute *attr, char *buf) { int bitnr = to_sensor_dev_attr(attr)->index; @@ -161,8 +159,8 @@ static ssize_t show_alarm(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d\n", value); } -static SENSOR_DEVICE_ATTR(temp1_min_alarm, S_IRUGO, show_alarm, NULL, 3); -static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL, 4); +static SENSOR_DEVICE_ATTR_RO(temp1_min_alarm, alarm, 3); +static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, alarm, 4); static struct attribute *ad7414_attrs[] = { &sensor_dev_attr_temp1_input.dev_attr.attr, diff --git a/drivers/hwmon/ad7418.c b/drivers/hwmon/ad7418.c index a01b731ba5d7..76f0a5c01e8a 100644 --- a/drivers/hwmon/ad7418.c +++ b/drivers/hwmon/ad7418.c @@ -103,8 +103,8 @@ static struct ad7418_data *ad7418_update_device(struct device *dev) return data; } -static ssize_t show_temp(struct device *dev, struct device_attribute *devattr, - char *buf) +static ssize_t temp_show(struct device *dev, struct device_attribute *devattr, + char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct ad7418_data *data = ad7418_update_device(dev); @@ -112,7 +112,7 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *devattr, LM75_TEMP_FROM_REG(data->temp[attr->index])); } -static ssize_t show_adc(struct device *dev, struct device_attribute *devattr, +static ssize_t adc_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -122,8 +122,9 @@ static ssize_t show_adc(struct device *dev, struct device_attribute *devattr, 
((data->in[attr->index] >> 6) * 2500 + 512) / 1024); } -static ssize_t set_temp(struct device *dev, struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t temp_store(struct device *dev, + struct device_attribute *devattr, const char *buf, + size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct ad7418_data *data = dev_get_drvdata(dev); @@ -143,16 +144,14 @@ static ssize_t set_temp(struct device *dev, struct device_attribute *devattr, return count; } -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0); -static SENSOR_DEVICE_ATTR(temp1_max_hyst, S_IWUSR | S_IRUGO, - show_temp, set_temp, 1); -static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, - show_temp, set_temp, 2); +static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_max_hyst, temp, 1); +static SENSOR_DEVICE_ATTR_RW(temp1_max, temp, 2); -static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, show_adc, NULL, 0); -static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, show_adc, NULL, 1); -static SENSOR_DEVICE_ATTR(in3_input, S_IRUGO, show_adc, NULL, 2); -static SENSOR_DEVICE_ATTR(in4_input, S_IRUGO, show_adc, NULL, 3); +static SENSOR_DEVICE_ATTR_RO(in1_input, adc, 0); +static SENSOR_DEVICE_ATTR_RO(in2_input, adc, 1); +static SENSOR_DEVICE_ATTR_RO(in3_input, adc, 2); +static SENSOR_DEVICE_ATTR_RO(in4_input, adc, 3); static struct attribute *ad7416_attrs[] = { &sensor_dev_attr_temp1_max.dev_attr.attr, diff --git a/drivers/hwmon/adc128d818.c b/drivers/hwmon/adc128d818.c index bd2ca315c9d8..ca794bf904de 100644 --- a/drivers/hwmon/adc128d818.c +++ b/drivers/hwmon/adc128d818.c @@ -153,8 +153,8 @@ done: return ret; } -static ssize_t adc128_show_in(struct device *dev, struct device_attribute *attr, - char *buf) +static ssize_t adc128_in_show(struct device *dev, + struct device_attribute *attr, char *buf) { struct adc128_data *data = adc128_update_device(dev); int index = to_sensor_dev_attr_2(attr)->index; @@ -168,8 +168,9 @@ static ssize_t adc128_show_in(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d\n", val); } -static ssize_t adc128_set_in(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t adc128_in_store(struct device *dev, + struct device_attribute *attr, const char *buf, + size_t count) { struct adc128_data *data = dev_get_drvdata(dev); int index = to_sensor_dev_attr_2(attr)->index; @@ -193,7 +194,7 @@ static ssize_t adc128_set_in(struct device *dev, struct device_attribute *attr, return count; } -static ssize_t adc128_show_temp(struct device *dev, +static ssize_t adc128_temp_show(struct device *dev, struct device_attribute *attr, char *buf) { struct adc128_data *data = adc128_update_device(dev); @@ -207,9 +208,9 @@ static ssize_t adc128_show_temp(struct device *dev, return sprintf(buf, "%d\n", temp * 500);/* 0.5 degrees C resolution */ } -static ssize_t adc128_set_temp(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t adc128_temp_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { struct adc128_data *data = dev_get_drvdata(dev); int index = to_sensor_dev_attr(attr)->index; @@ -233,7 +234,7 @@ static ssize_t adc128_set_temp(struct device *dev, return count; } -static ssize_t adc128_show_alarm(struct device *dev, +static ssize_t adc128_alarm_show(struct device *dev, struct device_attribute *attr, char *buf) { struct adc128_data *data = adc128_update_device(dev); @@ -272,77 
+273,51 @@ static umode_t adc128_is_visible(struct kobject *kobj, return attr->mode; } -static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, - adc128_show_in, NULL, 0, 0); -static SENSOR_DEVICE_ATTR_2(in0_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 0, 1); -static SENSOR_DEVICE_ATTR_2(in0_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 0, 2); - -static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, - adc128_show_in, NULL, 1, 0); -static SENSOR_DEVICE_ATTR_2(in1_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 1, 1); -static SENSOR_DEVICE_ATTR_2(in1_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 1, 2); - -static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, - adc128_show_in, NULL, 2, 0); -static SENSOR_DEVICE_ATTR_2(in2_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 2, 1); -static SENSOR_DEVICE_ATTR_2(in2_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 2, 2); - -static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, - adc128_show_in, NULL, 3, 0); -static SENSOR_DEVICE_ATTR_2(in3_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 3, 1); -static SENSOR_DEVICE_ATTR_2(in3_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 3, 2); - -static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, - adc128_show_in, NULL, 4, 0); -static SENSOR_DEVICE_ATTR_2(in4_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 4, 1); -static SENSOR_DEVICE_ATTR_2(in4_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 4, 2); - -static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, - adc128_show_in, NULL, 5, 0); -static SENSOR_DEVICE_ATTR_2(in5_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 5, 1); -static SENSOR_DEVICE_ATTR_2(in5_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 5, 2); - -static SENSOR_DEVICE_ATTR_2(in6_input, S_IRUGO, - adc128_show_in, NULL, 6, 0); -static SENSOR_DEVICE_ATTR_2(in6_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 6, 1); -static SENSOR_DEVICE_ATTR_2(in6_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 6, 2); - -static SENSOR_DEVICE_ATTR_2(in7_input, S_IRUGO, - adc128_show_in, NULL, 7, 0); -static SENSOR_DEVICE_ATTR_2(in7_min, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 7, 1); -static SENSOR_DEVICE_ATTR_2(in7_max, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 7, 2); - -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, adc128_show_temp, NULL, 0); -static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, - adc128_show_temp, adc128_set_temp, 1); -static SENSOR_DEVICE_ATTR(temp1_max_hyst, S_IWUSR | S_IRUGO, - adc128_show_temp, adc128_set_temp, 2); - -static SENSOR_DEVICE_ATTR(in0_alarm, S_IRUGO, adc128_show_alarm, NULL, 0); -static SENSOR_DEVICE_ATTR(in1_alarm, S_IRUGO, adc128_show_alarm, NULL, 1); -static SENSOR_DEVICE_ATTR(in2_alarm, S_IRUGO, adc128_show_alarm, NULL, 2); -static SENSOR_DEVICE_ATTR(in3_alarm, S_IRUGO, adc128_show_alarm, NULL, 3); -static SENSOR_DEVICE_ATTR(in4_alarm, S_IRUGO, adc128_show_alarm, NULL, 4); -static SENSOR_DEVICE_ATTR(in5_alarm, S_IRUGO, adc128_show_alarm, NULL, 5); -static SENSOR_DEVICE_ATTR(in6_alarm, S_IRUGO, adc128_show_alarm, NULL, 6); -static SENSOR_DEVICE_ATTR(in7_alarm, S_IRUGO, adc128_show_alarm, NULL, 7); -static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, adc128_show_alarm, NULL, 7); +static SENSOR_DEVICE_ATTR_2_RO(in0_input, adc128_in, 0, 0); +static SENSOR_DEVICE_ATTR_2_RW(in0_min, adc128_in, 0, 1); +static SENSOR_DEVICE_ATTR_2_RW(in0_max, adc128_in, 0, 2); + +static SENSOR_DEVICE_ATTR_2_RO(in1_input, adc128_in, 1, 0); +static SENSOR_DEVICE_ATTR_2_RW(in1_min, 
adc128_in, 1, 1); +static SENSOR_DEVICE_ATTR_2_RW(in1_max, adc128_in, 1, 2); + +static SENSOR_DEVICE_ATTR_2_RO(in2_input, adc128_in, 2, 0); +static SENSOR_DEVICE_ATTR_2_RW(in2_min, adc128_in, 2, 1); +static SENSOR_DEVICE_ATTR_2_RW(in2_max, adc128_in, 2, 2); + +static SENSOR_DEVICE_ATTR_2_RO(in3_input, adc128_in, 3, 0); +static SENSOR_DEVICE_ATTR_2_RW(in3_min, adc128_in, 3, 1); +static SENSOR_DEVICE_ATTR_2_RW(in3_max, adc128_in, 3, 2); + +static SENSOR_DEVICE_ATTR_2_RO(in4_input, adc128_in, 4, 0); +static SENSOR_DEVICE_ATTR_2_RW(in4_min, adc128_in, 4, 1); +static SENSOR_DEVICE_ATTR_2_RW(in4_max, adc128_in, 4, 2); + +static SENSOR_DEVICE_ATTR_2_RO(in5_input, adc128_in, 5, 0); +static SENSOR_DEVICE_ATTR_2_RW(in5_min, adc128_in, 5, 1); +static SENSOR_DEVICE_ATTR_2_RW(in5_max, adc128_in, 5, 2); + +static SENSOR_DEVICE_ATTR_2_RO(in6_input, adc128_in, 6, 0); +static SENSOR_DEVICE_ATTR_2_RW(in6_min, adc128_in, 6, 1); +static SENSOR_DEVICE_ATTR_2_RW(in6_max, adc128_in, 6, 2); + +static SENSOR_DEVICE_ATTR_2_RO(in7_input, adc128_in, 7, 0); +static SENSOR_DEVICE_ATTR_2_RW(in7_min, adc128_in, 7, 1); +static SENSOR_DEVICE_ATTR_2_RW(in7_max, adc128_in, 7, 2); + +static SENSOR_DEVICE_ATTR_RO(temp1_input, adc128_temp, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_max, adc128_temp, 1); +static SENSOR_DEVICE_ATTR_RW(temp1_max_hyst, adc128_temp, 2); + +static SENSOR_DEVICE_ATTR_RO(in0_alarm, adc128_alarm, 0); +static SENSOR_DEVICE_ATTR_RO(in1_alarm, adc128_alarm, 1); +static SENSOR_DEVICE_ATTR_RO(in2_alarm, adc128_alarm, 2); +static SENSOR_DEVICE_ATTR_RO(in3_alarm, adc128_alarm, 3); +static SENSOR_DEVICE_ATTR_RO(in4_alarm, adc128_alarm, 4); +static SENSOR_DEVICE_ATTR_RO(in5_alarm, adc128_alarm, 5); +static SENSOR_DEVICE_ATTR_RO(in6_alarm, adc128_alarm, 6); +static SENSOR_DEVICE_ATTR_RO(in7_alarm, adc128_alarm, 7); +static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, adc128_alarm, 7); static struct attribute *adc128_attrs[] = { &sensor_dev_attr_in0_alarm.dev_attr.attr, diff --git a/drivers/hwmon/adcxx.c b/drivers/hwmon/adcxx.c index 69e0bb97e597..c8feb2295fe2 100644 --- a/drivers/hwmon/adcxx.c +++ b/drivers/hwmon/adcxx.c @@ -57,8 +57,8 @@ struct adcxx { }; /* sysfs hook function */ -static ssize_t adcxx_read(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t adcxx_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct spi_device *spi = to_spi_device(dev); struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -94,15 +94,15 @@ out: return status; } -static ssize_t adcxx_show_min(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t adcxx_min_show(struct device *dev, + struct device_attribute *devattr, char *buf) { /* The minimum reference is 0 for this chip family */ return sprintf(buf, "0\n"); } -static ssize_t adcxx_show_max(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t adcxx_max_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct spi_device *spi = to_spi_device(dev); struct adcxx *adc = spi_get_drvdata(spi); @@ -118,8 +118,9 @@ static ssize_t adcxx_show_max(struct device *dev, return sprintf(buf, "%d\n", reference); } -static ssize_t adcxx_set_max(struct device *dev, - struct device_attribute *devattr, const char *buf, size_t count) +static ssize_t adcxx_max_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct spi_device *spi = to_spi_device(dev); struct adcxx *adc = spi_get_drvdata(spi); @@ -138,25 
+139,24 @@ static ssize_t adcxx_set_max(struct device *dev, return count; } -static ssize_t adcxx_show_name(struct device *dev, struct device_attribute - *devattr, char *buf) +static ssize_t adcxx_name_show(struct device *dev, + struct device_attribute *devattr, char *buf) { return sprintf(buf, "%s\n", to_spi_device(dev)->modalias); } static struct sensor_device_attribute ad_input[] = { - SENSOR_ATTR(name, S_IRUGO, adcxx_show_name, NULL, 0), - SENSOR_ATTR(in_min, S_IRUGO, adcxx_show_min, NULL, 0), - SENSOR_ATTR(in_max, S_IWUSR | S_IRUGO, adcxx_show_max, - adcxx_set_max, 0), - SENSOR_ATTR(in0_input, S_IRUGO, adcxx_read, NULL, 0), - SENSOR_ATTR(in1_input, S_IRUGO, adcxx_read, NULL, 1), - SENSOR_ATTR(in2_input, S_IRUGO, adcxx_read, NULL, 2), - SENSOR_ATTR(in3_input, S_IRUGO, adcxx_read, NULL, 3), - SENSOR_ATTR(in4_input, S_IRUGO, adcxx_read, NULL, 4), - SENSOR_ATTR(in5_input, S_IRUGO, adcxx_read, NULL, 5), - SENSOR_ATTR(in6_input, S_IRUGO, adcxx_read, NULL, 6), - SENSOR_ATTR(in7_input, S_IRUGO, adcxx_read, NULL, 7), + SENSOR_ATTR_RO(name, adcxx_name, 0), + SENSOR_ATTR_RO(in_min, adcxx_min, 0), + SENSOR_ATTR_RW(in_max, adcxx_max, 0), + SENSOR_ATTR_RO(in0_input, adcxx, 0), + SENSOR_ATTR_RO(in1_input, adcxx, 1), + SENSOR_ATTR_RO(in2_input, adcxx, 2), + SENSOR_ATTR_RO(in3_input, adcxx, 3), + SENSOR_ATTR_RO(in4_input, adcxx, 4), + SENSOR_ATTR_RO(in5_input, adcxx, 5), + SENSOR_ATTR_RO(in6_input, adcxx, 6), + SENSOR_ATTR_RO(in7_input, adcxx, 7), }; /*----------------------------------------------------------------------*/ diff --git a/drivers/hwmon/adm1021.c b/drivers/hwmon/adm1021.c index eacf10fadbc6..49461b60e296 100644 --- a/drivers/hwmon/adm1021.c +++ b/drivers/hwmon/adm1021.c @@ -156,8 +156,8 @@ static struct adm1021_data *adm1021_update_device(struct device *dev) return data; } -static ssize_t show_temp(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t temp_show(struct device *dev, struct device_attribute *devattr, + char *buf) { int index = to_sensor_dev_attr(devattr)->index; struct adm1021_data *data = adm1021_update_device(dev); @@ -165,7 +165,7 @@ static ssize_t show_temp(struct device *dev, return sprintf(buf, "%d\n", data->temp[index]); } -static ssize_t show_temp_max(struct device *dev, +static ssize_t temp_max_show(struct device *dev, struct device_attribute *devattr, char *buf) { int index = to_sensor_dev_attr(devattr)->index; @@ -174,7 +174,7 @@ static ssize_t show_temp_max(struct device *dev, return sprintf(buf, "%d\n", data->temp_max[index]); } -static ssize_t show_temp_min(struct device *dev, +static ssize_t temp_min_show(struct device *dev, struct device_attribute *devattr, char *buf) { int index = to_sensor_dev_attr(devattr)->index; @@ -183,7 +183,7 @@ static ssize_t show_temp_min(struct device *dev, return sprintf(buf, "%d\n", data->temp_min[index]); } -static ssize_t show_alarm(struct device *dev, struct device_attribute *attr, +static ssize_t alarm_show(struct device *dev, struct device_attribute *attr, char *buf) { int index = to_sensor_dev_attr(attr)->index; @@ -199,9 +199,9 @@ static ssize_t alarms_show(struct device *dev, return sprintf(buf, "%u\n", data->alarms); } -static ssize_t set_temp_max(struct device *dev, - struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t temp_max_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { int index = to_sensor_dev_attr(devattr)->index; struct adm1021_data *data = dev_get_drvdata(dev); @@ -225,9 +225,9 @@ static ssize_t 
set_temp_max(struct device *dev, return count; } -static ssize_t set_temp_min(struct device *dev, - struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t temp_min_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { int index = to_sensor_dev_attr(devattr)->index; struct adm1021_data *data = dev_get_drvdata(dev); @@ -287,21 +287,17 @@ static ssize_t low_power_store(struct device *dev, } -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0); -static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_temp_max, - set_temp_max, 0); -static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, show_temp_min, - set_temp_min, 0); -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp, NULL, 1); -static SENSOR_DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_temp_max, - set_temp_max, 1); -static SENSOR_DEVICE_ATTR(temp2_min, S_IWUSR | S_IRUGO, show_temp_min, - set_temp_min, 1); -static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL, 6); -static SENSOR_DEVICE_ATTR(temp1_min_alarm, S_IRUGO, show_alarm, NULL, 5); -static SENSOR_DEVICE_ATTR(temp2_max_alarm, S_IRUGO, show_alarm, NULL, 4); -static SENSOR_DEVICE_ATTR(temp2_min_alarm, S_IRUGO, show_alarm, NULL, 3); -static SENSOR_DEVICE_ATTR(temp2_fault, S_IRUGO, show_alarm, NULL, 2); +static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_max, temp_max, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_min, temp_min, 0); +static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, 1); +static SENSOR_DEVICE_ATTR_RW(temp2_max, temp_max, 1); +static SENSOR_DEVICE_ATTR_RW(temp2_min, temp_min, 1); +static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, alarm, 6); +static SENSOR_DEVICE_ATTR_RO(temp1_min_alarm, alarm, 5); +static SENSOR_DEVICE_ATTR_RO(temp2_max_alarm, alarm, 4); +static SENSOR_DEVICE_ATTR_RO(temp2_min_alarm, alarm, 3); +static SENSOR_DEVICE_ATTR_RO(temp2_fault, alarm, 2); static DEVICE_ATTR_RO(alarms); static DEVICE_ATTR_RW(low_power); diff --git a/drivers/hwmon/ads1015.c b/drivers/hwmon/ads1015.c index 98c704d3366a..c21b0529adb2 100644 --- a/drivers/hwmon/ads1015.c +++ b/drivers/hwmon/ads1015.c @@ -133,8 +133,8 @@ static int ads1015_reg_to_mv(struct i2c_client *client, unsigned int channel, } /* sysfs callback function */ -static ssize_t show_in(struct device *dev, struct device_attribute *da, - char *buf) +static ssize_t in_show(struct device *dev, struct device_attribute *da, + char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); struct i2c_client *client = to_i2c_client(dev); @@ -149,14 +149,14 @@ static ssize_t show_in(struct device *dev, struct device_attribute *da, } static const struct sensor_device_attribute ads1015_in[] = { - SENSOR_ATTR(in0_input, S_IRUGO, show_in, NULL, 0), - SENSOR_ATTR(in1_input, S_IRUGO, show_in, NULL, 1), - SENSOR_ATTR(in2_input, S_IRUGO, show_in, NULL, 2), - SENSOR_ATTR(in3_input, S_IRUGO, show_in, NULL, 3), - SENSOR_ATTR(in4_input, S_IRUGO, show_in, NULL, 4), - SENSOR_ATTR(in5_input, S_IRUGO, show_in, NULL, 5), - SENSOR_ATTR(in6_input, S_IRUGO, show_in, NULL, 6), - SENSOR_ATTR(in7_input, S_IRUGO, show_in, NULL, 7), + SENSOR_ATTR_RO(in0_input, in, 0), + SENSOR_ATTR_RO(in1_input, in, 1), + SENSOR_ATTR_RO(in2_input, in, 2), + SENSOR_ATTR_RO(in3_input, in, 3), + SENSOR_ATTR_RO(in4_input, in, 4), + SENSOR_ATTR_RO(in5_input, in, 5), + SENSOR_ATTR_RO(in6_input, in, 6), + SENSOR_ATTR_RO(in7_input, in, 7), }; /* diff --git a/drivers/hwmon/ads7828.c b/drivers/hwmon/ads7828.c index 
898607bf682b..12c56d3783ed 100644 --- a/drivers/hwmon/ads7828.c +++ b/drivers/hwmon/ads7828.c @@ -62,8 +62,8 @@ static inline u8 ads7828_cmd_byte(u8 cmd, int ch) } /* sysfs callback function */ -static ssize_t ads7828_show_in(struct device *dev, struct device_attribute *da, - char *buf) +static ssize_t ads7828_in_show(struct device *dev, + struct device_attribute *da, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); struct ads7828_data *data = dev_get_drvdata(dev); @@ -79,14 +79,14 @@ static ssize_t ads7828_show_in(struct device *dev, struct device_attribute *da, DIV_ROUND_CLOSEST(regval * data->lsb_resol, 1000)); } -static SENSOR_DEVICE_ATTR(in0_input, S_IRUGO, ads7828_show_in, NULL, 0); -static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, ads7828_show_in, NULL, 1); -static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, ads7828_show_in, NULL, 2); -static SENSOR_DEVICE_ATTR(in3_input, S_IRUGO, ads7828_show_in, NULL, 3); -static SENSOR_DEVICE_ATTR(in4_input, S_IRUGO, ads7828_show_in, NULL, 4); -static SENSOR_DEVICE_ATTR(in5_input, S_IRUGO, ads7828_show_in, NULL, 5); -static SENSOR_DEVICE_ATTR(in6_input, S_IRUGO, ads7828_show_in, NULL, 6); -static SENSOR_DEVICE_ATTR(in7_input, S_IRUGO, ads7828_show_in, NULL, 7); +static SENSOR_DEVICE_ATTR_RO(in0_input, ads7828_in, 0); +static SENSOR_DEVICE_ATTR_RO(in1_input, ads7828_in, 1); +static SENSOR_DEVICE_ATTR_RO(in2_input, ads7828_in, 2); +static SENSOR_DEVICE_ATTR_RO(in3_input, ads7828_in, 3); +static SENSOR_DEVICE_ATTR_RO(in4_input, ads7828_in, 4); +static SENSOR_DEVICE_ATTR_RO(in5_input, ads7828_in, 5); +static SENSOR_DEVICE_ATTR_RO(in6_input, ads7828_in, 6); +static SENSOR_DEVICE_ATTR_RO(in7_input, ads7828_in, 7); static struct attribute *ads7828_attrs[] = { &sensor_dev_attr_in0_input.dev_attr.attr, diff --git a/drivers/hwmon/ads7871.c b/drivers/hwmon/ads7871.c index 59bd7b9e1772..cd14c1501508 100644 --- a/drivers/hwmon/ads7871.c +++ b/drivers/hwmon/ads7871.c @@ -96,8 +96,8 @@ static int ads7871_write_reg8(struct spi_device *spi, int reg, u8 val) return spi_write(spi, tmp, sizeof(tmp)); } -static ssize_t show_voltage(struct device *dev, - struct device_attribute *da, char *buf) +static ssize_t voltage_show(struct device *dev, struct device_attribute *da, + char *buf) { struct ads7871_data *pdata = dev_get_drvdata(dev); struct spi_device *spi = pdata->spi; @@ -138,14 +138,14 @@ static ssize_t show_voltage(struct device *dev, } } -static SENSOR_DEVICE_ATTR(in0_input, S_IRUGO, show_voltage, NULL, 0); -static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, show_voltage, NULL, 1); -static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, show_voltage, NULL, 2); -static SENSOR_DEVICE_ATTR(in3_input, S_IRUGO, show_voltage, NULL, 3); -static SENSOR_DEVICE_ATTR(in4_input, S_IRUGO, show_voltage, NULL, 4); -static SENSOR_DEVICE_ATTR(in5_input, S_IRUGO, show_voltage, NULL, 5); -static SENSOR_DEVICE_ATTR(in6_input, S_IRUGO, show_voltage, NULL, 6); -static SENSOR_DEVICE_ATTR(in7_input, S_IRUGO, show_voltage, NULL, 7); +static SENSOR_DEVICE_ATTR_RO(in0_input, voltage, 0); +static SENSOR_DEVICE_ATTR_RO(in1_input, voltage, 1); +static SENSOR_DEVICE_ATTR_RO(in2_input, voltage, 2); +static SENSOR_DEVICE_ATTR_RO(in3_input, voltage, 3); +static SENSOR_DEVICE_ATTR_RO(in4_input, voltage, 4); +static SENSOR_DEVICE_ATTR_RO(in5_input, voltage, 5); +static SENSOR_DEVICE_ATTR_RO(in6_input, voltage, 6); +static SENSOR_DEVICE_ATTR_RO(in7_input, voltage, 7); static struct attribute *ads7871_attrs[] = { &sensor_dev_attr_in0_input.dev_attr.attr, diff --git a/drivers/hwmon/adt7462.c 
b/drivers/hwmon/adt7462.c index 19f2a6d48bac..b0211f731251 100644 --- a/drivers/hwmon/adt7462.c +++ b/drivers/hwmon/adt7462.c @@ -784,9 +784,8 @@ out: return data; } -static ssize_t show_temp_min(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t temp_min_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -797,10 +796,9 @@ static ssize_t show_temp_min(struct device *dev, return sprintf(buf, "%d\n", 1000 * (data->temp_min[attr->index] - 64)); } -static ssize_t set_temp_min(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t temp_min_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -822,9 +820,8 @@ static ssize_t set_temp_min(struct device *dev, return count; } -static ssize_t show_temp_max(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t temp_max_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -835,10 +832,9 @@ static ssize_t show_temp_max(struct device *dev, return sprintf(buf, "%d\n", 1000 * (data->temp_max[attr->index] - 64)); } -static ssize_t set_temp_max(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t temp_max_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -860,7 +856,7 @@ static ssize_t set_temp_max(struct device *dev, return count; } -static ssize_t show_temp(struct device *dev, struct device_attribute *devattr, +static ssize_t temp_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -874,9 +870,8 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *devattr, 250 * frac); } -static ssize_t show_temp_label(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t temp_label_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -884,9 +879,8 @@ static ssize_t show_temp_label(struct device *dev, return sprintf(buf, "%s\n", temp_label(data, attr->index)); } -static ssize_t show_volt_max(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t volt_max_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -898,10 +892,9 @@ static ssize_t show_volt_max(struct device *dev, return sprintf(buf, "%d\n", x); } -static ssize_t set_volt_max(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t volt_max_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct 
adt7462_data *data = dev_get_drvdata(dev); @@ -926,9 +919,8 @@ static ssize_t set_volt_max(struct device *dev, return count; } -static ssize_t show_volt_min(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t volt_min_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -940,10 +932,9 @@ static ssize_t show_volt_min(struct device *dev, return sprintf(buf, "%d\n", x); } -static ssize_t set_volt_min(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t volt_min_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -968,9 +959,8 @@ static ssize_t set_volt_min(struct device *dev, return count; } -static ssize_t show_voltage(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t voltage_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -982,9 +972,8 @@ static ssize_t show_voltage(struct device *dev, return sprintf(buf, "%d\n", x); } -static ssize_t show_voltage_label(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t voltage_label_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -992,9 +981,8 @@ static ssize_t show_voltage_label(struct device *dev, return sprintf(buf, "%s\n", voltage_label(data, attr->index)); } -static ssize_t show_alarm(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t alarm_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -1012,9 +1000,8 @@ static int fan_enabled(struct adt7462_data *data, int fan) return data->fan_enabled & (1 << fan); } -static ssize_t show_fan_min(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t fan_min_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -1031,9 +1018,9 @@ static ssize_t show_fan_min(struct device *dev, return sprintf(buf, "%d\n", FAN_PERIOD_TO_RPM(temp)); } -static ssize_t set_fan_min(struct device *dev, - struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t fan_min_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -1057,7 +1044,7 @@ static ssize_t set_fan_min(struct device *dev, return count; } -static ssize_t show_fan(struct device *dev, struct device_attribute *devattr, +static ssize_t fan_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -1071,18 +1058,16 @@ static ssize_t show_fan(struct device *dev, struct device_attribute 
*devattr, FAN_PERIOD_TO_RPM(data->fan[attr->index])); } -static ssize_t show_force_pwm_max(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t force_pwm_max_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct adt7462_data *data = adt7462_update_device(dev); return sprintf(buf, "%d\n", (data->cfg2 & ADT7462_FSPD_MASK ? 1 : 0)); } -static ssize_t set_force_pwm_max(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t force_pwm_max_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct adt7462_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; @@ -1105,7 +1090,7 @@ static ssize_t set_force_pwm_max(struct device *dev, return count; } -static ssize_t show_pwm(struct device *dev, struct device_attribute *devattr, +static ssize_t pwm_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -1113,8 +1098,8 @@ static ssize_t show_pwm(struct device *dev, struct device_attribute *devattr, return sprintf(buf, "%d\n", data->pwm[attr->index]); } -static ssize_t set_pwm(struct device *dev, struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t pwm_store(struct device *dev, struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -1134,18 +1119,16 @@ static ssize_t set_pwm(struct device *dev, struct device_attribute *devattr, return count; } -static ssize_t show_pwm_max(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t pwm_max_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct adt7462_data *data = adt7462_update_device(dev); return sprintf(buf, "%d\n", data->pwm_max); } -static ssize_t set_pwm_max(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t pwm_max_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct adt7462_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; @@ -1164,19 +1147,17 @@ static ssize_t set_pwm_max(struct device *dev, return count; } -static ssize_t show_pwm_min(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t pwm_min_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); return sprintf(buf, "%d\n", data->pwm_min[attr->index]); } -static ssize_t set_pwm_min(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t pwm_min_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -1197,9 +1178,8 @@ static ssize_t set_pwm_min(struct device *dev, return count; } -static ssize_t show_pwm_hyst(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t pwm_hyst_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ 
-1207,10 +1187,9 @@ static ssize_t show_pwm_hyst(struct device *dev, (data->pwm_trange[attr->index] & ADT7462_PWM_HYST_MASK)); } -static ssize_t set_pwm_hyst(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t pwm_hyst_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -1236,9 +1215,8 @@ static ssize_t set_pwm_hyst(struct device *dev, return count; } -static ssize_t show_pwm_tmax(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t pwm_tmax_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -1251,10 +1229,9 @@ static ssize_t show_pwm_tmax(struct device *dev, return sprintf(buf, "%d\n", tmin + trange); } -static ssize_t set_pwm_tmax(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t pwm_tmax_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { int temp; struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -1284,19 +1261,17 @@ static ssize_t set_pwm_tmax(struct device *dev, return count; } -static ssize_t show_pwm_tmin(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t pwm_tmin_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); return sprintf(buf, "%d\n", 1000 * (data->pwm_tmin[attr->index] - 64)); } -static ssize_t set_pwm_tmin(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t pwm_tmin_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -1318,9 +1293,8 @@ static ssize_t set_pwm_tmin(struct device *dev, return count; } -static ssize_t show_pwm_auto(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t pwm_auto_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -1350,10 +1324,9 @@ static void set_pwm_channel(struct i2c_client *client, mutex_unlock(&data->lock); } -static ssize_t set_pwm_auto(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t pwm_auto_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -1375,9 +1348,8 @@ static ssize_t set_pwm_auto(struct device *dev, } } -static ssize_t show_pwm_auto_temp(struct device *dev, - struct device_attribute *devattr, - char *buf) +static ssize_t pwm_auto_temp_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = adt7462_update_device(dev); @@ -1409,10 +1381,9 @@ static int cvt_auto_temp(int input) return ilog2(input); } -static 
ssize_t set_pwm_auto_temp(struct device *dev, - struct device_attribute *devattr, - const char *buf, - size_t count) +static ssize_t pwm_auto_temp_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct adt7462_data *data = dev_get_drvdata(dev); @@ -1431,274 +1402,199 @@ static ssize_t set_pwm_auto_temp(struct device *dev, return count; } -static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_temp_max, - set_temp_max, 0); -static SENSOR_DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_temp_max, - set_temp_max, 1); -static SENSOR_DEVICE_ATTR(temp3_max, S_IWUSR | S_IRUGO, show_temp_max, - set_temp_max, 2); -static SENSOR_DEVICE_ATTR(temp4_max, S_IWUSR | S_IRUGO, show_temp_max, - set_temp_max, 3); - -static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, show_temp_min, - set_temp_min, 0); -static SENSOR_DEVICE_ATTR(temp2_min, S_IWUSR | S_IRUGO, show_temp_min, - set_temp_min, 1); -static SENSOR_DEVICE_ATTR(temp3_min, S_IWUSR | S_IRUGO, show_temp_min, - set_temp_min, 2); -static SENSOR_DEVICE_ATTR(temp4_min, S_IWUSR | S_IRUGO, show_temp_min, - set_temp_min, 3); - -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0); -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp, NULL, 1); -static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_temp, NULL, 2); -static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_temp, NULL, 3); - -static SENSOR_DEVICE_ATTR(temp1_label, S_IRUGO, show_temp_label, NULL, 0); -static SENSOR_DEVICE_ATTR(temp2_label, S_IRUGO, show_temp_label, NULL, 1); -static SENSOR_DEVICE_ATTR(temp3_label, S_IRUGO, show_temp_label, NULL, 2); -static SENSOR_DEVICE_ATTR(temp4_label, S_IRUGO, show_temp_label, NULL, 3); - -static SENSOR_DEVICE_ATTR(temp1_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM1 | ADT7462_LT_ALARM); -static SENSOR_DEVICE_ATTR(temp2_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM1 | ADT7462_R1T_ALARM); -static SENSOR_DEVICE_ATTR(temp3_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM1 | ADT7462_R2T_ALARM); -static SENSOR_DEVICE_ATTR(temp4_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM1 | ADT7462_R3T_ALARM); - -static SENSOR_DEVICE_ATTR(in1_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 0); -static SENSOR_DEVICE_ATTR(in2_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 1); -static SENSOR_DEVICE_ATTR(in3_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 2); -static SENSOR_DEVICE_ATTR(in4_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 3); -static SENSOR_DEVICE_ATTR(in5_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 4); -static SENSOR_DEVICE_ATTR(in6_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 5); -static SENSOR_DEVICE_ATTR(in7_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 6); -static SENSOR_DEVICE_ATTR(in8_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 7); -static SENSOR_DEVICE_ATTR(in9_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 8); -static SENSOR_DEVICE_ATTR(in10_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 9); -static SENSOR_DEVICE_ATTR(in11_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 10); -static SENSOR_DEVICE_ATTR(in12_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 11); -static SENSOR_DEVICE_ATTR(in13_max, S_IWUSR | S_IRUGO, show_volt_max, - set_volt_max, 12); - -static SENSOR_DEVICE_ATTR(in1_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 0); -static SENSOR_DEVICE_ATTR(in2_min, S_IWUSR | S_IRUGO, show_volt_min, - 
set_volt_min, 1); -static SENSOR_DEVICE_ATTR(in3_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 2); -static SENSOR_DEVICE_ATTR(in4_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 3); -static SENSOR_DEVICE_ATTR(in5_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 4); -static SENSOR_DEVICE_ATTR(in6_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 5); -static SENSOR_DEVICE_ATTR(in7_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 6); -static SENSOR_DEVICE_ATTR(in8_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 7); -static SENSOR_DEVICE_ATTR(in9_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 8); -static SENSOR_DEVICE_ATTR(in10_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 9); -static SENSOR_DEVICE_ATTR(in11_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 10); -static SENSOR_DEVICE_ATTR(in12_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 11); -static SENSOR_DEVICE_ATTR(in13_min, S_IWUSR | S_IRUGO, show_volt_min, - set_volt_min, 12); - -static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, show_voltage, NULL, 0); -static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, show_voltage, NULL, 1); -static SENSOR_DEVICE_ATTR(in3_input, S_IRUGO, show_voltage, NULL, 2); -static SENSOR_DEVICE_ATTR(in4_input, S_IRUGO, show_voltage, NULL, 3); -static SENSOR_DEVICE_ATTR(in5_input, S_IRUGO, show_voltage, NULL, 4); -static SENSOR_DEVICE_ATTR(in6_input, S_IRUGO, show_voltage, NULL, 5); -static SENSOR_DEVICE_ATTR(in7_input, S_IRUGO, show_voltage, NULL, 6); -static SENSOR_DEVICE_ATTR(in8_input, S_IRUGO, show_voltage, NULL, 7); -static SENSOR_DEVICE_ATTR(in9_input, S_IRUGO, show_voltage, NULL, 8); -static SENSOR_DEVICE_ATTR(in10_input, S_IRUGO, show_voltage, NULL, 9); -static SENSOR_DEVICE_ATTR(in11_input, S_IRUGO, show_voltage, NULL, 10); -static SENSOR_DEVICE_ATTR(in12_input, S_IRUGO, show_voltage, NULL, 11); -static SENSOR_DEVICE_ATTR(in13_input, S_IRUGO, show_voltage, NULL, 12); - -static SENSOR_DEVICE_ATTR(in1_label, S_IRUGO, show_voltage_label, NULL, 0); -static SENSOR_DEVICE_ATTR(in2_label, S_IRUGO, show_voltage_label, NULL, 1); -static SENSOR_DEVICE_ATTR(in3_label, S_IRUGO, show_voltage_label, NULL, 2); -static SENSOR_DEVICE_ATTR(in4_label, S_IRUGO, show_voltage_label, NULL, 3); -static SENSOR_DEVICE_ATTR(in5_label, S_IRUGO, show_voltage_label, NULL, 4); -static SENSOR_DEVICE_ATTR(in6_label, S_IRUGO, show_voltage_label, NULL, 5); -static SENSOR_DEVICE_ATTR(in7_label, S_IRUGO, show_voltage_label, NULL, 6); -static SENSOR_DEVICE_ATTR(in8_label, S_IRUGO, show_voltage_label, NULL, 7); -static SENSOR_DEVICE_ATTR(in9_label, S_IRUGO, show_voltage_label, NULL, 8); -static SENSOR_DEVICE_ATTR(in10_label, S_IRUGO, show_voltage_label, NULL, 9); -static SENSOR_DEVICE_ATTR(in11_label, S_IRUGO, show_voltage_label, NULL, 10); -static SENSOR_DEVICE_ATTR(in12_label, S_IRUGO, show_voltage_label, NULL, 11); -static SENSOR_DEVICE_ATTR(in13_label, S_IRUGO, show_voltage_label, NULL, 12); - -static SENSOR_DEVICE_ATTR(in1_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM2 | ADT7462_V0_ALARM); -static SENSOR_DEVICE_ATTR(in2_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM2 | ADT7462_V7_ALARM); -static SENSOR_DEVICE_ATTR(in3_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM2 | ADT7462_V2_ALARM); -static SENSOR_DEVICE_ATTR(in4_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM2 | ADT7462_V6_ALARM); -static SENSOR_DEVICE_ATTR(in5_alarm, S_IRUGO, show_alarm, NULL, - ADT7462_ALARM2 | ADT7462_V5_ALARM); -static SENSOR_DEVICE_ATTR(in6_alarm, S_IRUGO, show_alarm, NULL, - 
ADT7462_ALARM2 | ADT7462_V4_ALARM);
-static SENSOR_DEVICE_ATTR(in7_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM2 | ADT7462_V3_ALARM);
-static SENSOR_DEVICE_ATTR(in8_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM2 | ADT7462_V1_ALARM);
-static SENSOR_DEVICE_ATTR(in9_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM3 | ADT7462_V10_ALARM);
-static SENSOR_DEVICE_ATTR(in10_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM3 | ADT7462_V9_ALARM);
-static SENSOR_DEVICE_ATTR(in11_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM3 | ADT7462_V8_ALARM);
-static SENSOR_DEVICE_ATTR(in12_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM3 | ADT7462_V11_ALARM);
-static SENSOR_DEVICE_ATTR(in13_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM3 | ADT7462_V12_ALARM);
-
-static SENSOR_DEVICE_ATTR(fan1_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 0);
-static SENSOR_DEVICE_ATTR(fan2_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 1);
-static SENSOR_DEVICE_ATTR(fan3_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 2);
-static SENSOR_DEVICE_ATTR(fan4_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 3);
-static SENSOR_DEVICE_ATTR(fan5_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 4);
-static SENSOR_DEVICE_ATTR(fan6_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 5);
-static SENSOR_DEVICE_ATTR(fan7_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 6);
-static SENSOR_DEVICE_ATTR(fan8_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 7);
-
-static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, show_fan, NULL, 0);
-static SENSOR_DEVICE_ATTR(fan2_input, S_IRUGO, show_fan, NULL, 1);
-static SENSOR_DEVICE_ATTR(fan3_input, S_IRUGO, show_fan, NULL, 2);
-static SENSOR_DEVICE_ATTR(fan4_input, S_IRUGO, show_fan, NULL, 3);
-static SENSOR_DEVICE_ATTR(fan5_input, S_IRUGO, show_fan, NULL, 4);
-static SENSOR_DEVICE_ATTR(fan6_input, S_IRUGO, show_fan, NULL, 5);
-static SENSOR_DEVICE_ATTR(fan7_input, S_IRUGO, show_fan, NULL, 6);
-static SENSOR_DEVICE_ATTR(fan8_input, S_IRUGO, show_fan, NULL, 7);
-
-static SENSOR_DEVICE_ATTR(fan1_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F0_ALARM);
-static SENSOR_DEVICE_ATTR(fan2_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F1_ALARM);
-static SENSOR_DEVICE_ATTR(fan3_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F2_ALARM);
-static SENSOR_DEVICE_ATTR(fan4_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F3_ALARM);
-static SENSOR_DEVICE_ATTR(fan5_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F4_ALARM);
-static SENSOR_DEVICE_ATTR(fan6_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F5_ALARM);
-static SENSOR_DEVICE_ATTR(fan7_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F6_ALARM);
-static SENSOR_DEVICE_ATTR(fan8_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7462_ALARM4 | ADT7462_F7_ALARM);
-
-static SENSOR_DEVICE_ATTR(force_pwm_max, S_IWUSR | S_IRUGO,
-	show_force_pwm_max, set_force_pwm_max, 0);
-
-static SENSOR_DEVICE_ATTR(pwm1, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 0);
-static SENSOR_DEVICE_ATTR(pwm2, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 1);
-static SENSOR_DEVICE_ATTR(pwm3, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 2);
-static SENSOR_DEVICE_ATTR(pwm4, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 3);
-
-static SENSOR_DEVICE_ATTR(temp1_auto_point1_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 0);
-static SENSOR_DEVICE_ATTR(temp2_auto_point1_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 1);
-static SENSOR_DEVICE_ATTR(temp3_auto_point1_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 2);
-static SENSOR_DEVICE_ATTR(temp4_auto_point1_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 3);
-
-static SENSOR_DEVICE_ATTR(temp1_auto_point2_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 0);
-static SENSOR_DEVICE_ATTR(temp2_auto_point2_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 1);
-static SENSOR_DEVICE_ATTR(temp3_auto_point2_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 2);
-static SENSOR_DEVICE_ATTR(temp4_auto_point2_hyst, S_IWUSR | S_IRUGO,
-	show_pwm_hyst, set_pwm_hyst, 3);
-
-static SENSOR_DEVICE_ATTR(temp1_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 0);
-static SENSOR_DEVICE_ATTR(temp2_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 1);
-static SENSOR_DEVICE_ATTR(temp3_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 2);
-static SENSOR_DEVICE_ATTR(temp4_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 3);
-
-static SENSOR_DEVICE_ATTR(temp1_auto_point2_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmax, set_pwm_tmax, 0);
-static SENSOR_DEVICE_ATTR(temp2_auto_point2_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmax, set_pwm_tmax, 1);
-static SENSOR_DEVICE_ATTR(temp3_auto_point2_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmax, set_pwm_tmax, 2);
-static SENSOR_DEVICE_ATTR(temp4_auto_point2_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmax, set_pwm_tmax, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 0);
-static SENSOR_DEVICE_ATTR(pwm2_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 1);
-static SENSOR_DEVICE_ATTR(pwm3_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 2);
-static SENSOR_DEVICE_ATTR(pwm4_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 3);
+static SENSOR_DEVICE_ATTR_RW(temp1_max, temp_max, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_max, temp_max, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_max, temp_max, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_max, temp_max, 3);
+
+static SENSOR_DEVICE_ATTR_RW(temp1_min, temp_min, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_min, temp_min, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_min, temp_min, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_min, temp_min, 3);
+
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0);
+static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, 1);
+static SENSOR_DEVICE_ATTR_RO(temp3_input, temp, 2);
+static SENSOR_DEVICE_ATTR_RO(temp4_input, temp, 3);
+
+static SENSOR_DEVICE_ATTR_RO(temp1_label, temp_label, 0);
+static SENSOR_DEVICE_ATTR_RO(temp2_label, temp_label, 1);
+static SENSOR_DEVICE_ATTR_RO(temp3_label, temp_label, 2);
+static SENSOR_DEVICE_ATTR_RO(temp4_label, temp_label, 3);
+
+static SENSOR_DEVICE_ATTR_RO(temp1_alarm, alarm,
+	ADT7462_ALARM1 | ADT7462_LT_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp2_alarm, alarm,
+	ADT7462_ALARM1 | ADT7462_R1T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp3_alarm, alarm,
+	ADT7462_ALARM1 | ADT7462_R2T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp4_alarm, alarm,
+	ADT7462_ALARM1 | ADT7462_R3T_ALARM);
+
+static SENSOR_DEVICE_ATTR_RW(in1_max, volt_max, 0);
+static SENSOR_DEVICE_ATTR_RW(in2_max, volt_max, 1);
+static SENSOR_DEVICE_ATTR_RW(in3_max, volt_max, 2);
+static SENSOR_DEVICE_ATTR_RW(in4_max, volt_max, 3);
+static SENSOR_DEVICE_ATTR_RW(in5_max, volt_max, 4);
+static SENSOR_DEVICE_ATTR_RW(in6_max, volt_max, 5);
+static SENSOR_DEVICE_ATTR_RW(in7_max, volt_max, 6);
+static SENSOR_DEVICE_ATTR_RW(in8_max, volt_max, 7);
+static SENSOR_DEVICE_ATTR_RW(in9_max, volt_max, 8);
+static SENSOR_DEVICE_ATTR_RW(in10_max, volt_max, 9);
+static SENSOR_DEVICE_ATTR_RW(in11_max, volt_max, 10);
+static SENSOR_DEVICE_ATTR_RW(in12_max, volt_max, 11);
+static SENSOR_DEVICE_ATTR_RW(in13_max, volt_max, 12);
+
+static SENSOR_DEVICE_ATTR_RW(in1_min, volt_min, 0);
+static SENSOR_DEVICE_ATTR_RW(in2_min, volt_min, 1);
+static SENSOR_DEVICE_ATTR_RW(in3_min, volt_min, 2);
+static SENSOR_DEVICE_ATTR_RW(in4_min, volt_min, 3);
+static SENSOR_DEVICE_ATTR_RW(in5_min, volt_min, 4);
+static SENSOR_DEVICE_ATTR_RW(in6_min, volt_min, 5);
+static SENSOR_DEVICE_ATTR_RW(in7_min, volt_min, 6);
+static SENSOR_DEVICE_ATTR_RW(in8_min, volt_min, 7);
+static SENSOR_DEVICE_ATTR_RW(in9_min, volt_min, 8);
+static SENSOR_DEVICE_ATTR_RW(in10_min, volt_min, 9);
+static SENSOR_DEVICE_ATTR_RW(in11_min, volt_min, 10);
+static SENSOR_DEVICE_ATTR_RW(in12_min, volt_min, 11);
+static SENSOR_DEVICE_ATTR_RW(in13_min, volt_min, 12);
+
+static SENSOR_DEVICE_ATTR_RO(in1_input, voltage, 0);
+static SENSOR_DEVICE_ATTR_RO(in2_input, voltage, 1);
+static SENSOR_DEVICE_ATTR_RO(in3_input, voltage, 2);
+static SENSOR_DEVICE_ATTR_RO(in4_input, voltage, 3);
+static SENSOR_DEVICE_ATTR_RO(in5_input, voltage, 4);
+static SENSOR_DEVICE_ATTR_RO(in6_input, voltage, 5);
+static SENSOR_DEVICE_ATTR_RO(in7_input, voltage, 6);
+static SENSOR_DEVICE_ATTR_RO(in8_input, voltage, 7);
+static SENSOR_DEVICE_ATTR_RO(in9_input, voltage, 8);
+static SENSOR_DEVICE_ATTR_RO(in10_input, voltage, 9);
+static SENSOR_DEVICE_ATTR_RO(in11_input, voltage, 10);
+static SENSOR_DEVICE_ATTR_RO(in12_input, voltage, 11);
+static SENSOR_DEVICE_ATTR_RO(in13_input, voltage, 12);
+
+static SENSOR_DEVICE_ATTR_RO(in1_label, voltage_label, 0);
+static SENSOR_DEVICE_ATTR_RO(in2_label, voltage_label, 1);
+static SENSOR_DEVICE_ATTR_RO(in3_label, voltage_label, 2);
+static SENSOR_DEVICE_ATTR_RO(in4_label, voltage_label, 3);
+static SENSOR_DEVICE_ATTR_RO(in5_label, voltage_label, 4);
+static SENSOR_DEVICE_ATTR_RO(in6_label, voltage_label, 5);
+static SENSOR_DEVICE_ATTR_RO(in7_label, voltage_label, 6);
+static SENSOR_DEVICE_ATTR_RO(in8_label, voltage_label, 7);
+static SENSOR_DEVICE_ATTR_RO(in9_label, voltage_label, 8);
+static SENSOR_DEVICE_ATTR_RO(in10_label, voltage_label, 9);
+static SENSOR_DEVICE_ATTR_RO(in11_label, voltage_label, 10);
+static SENSOR_DEVICE_ATTR_RO(in12_label, voltage_label, 11);
+static SENSOR_DEVICE_ATTR_RO(in13_label, voltage_label, 12);
+
+static SENSOR_DEVICE_ATTR_RO(in1_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V0_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in2_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V7_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in3_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V2_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in4_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V6_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in5_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V5_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in6_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V4_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in7_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V3_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in8_alarm, alarm,
+	ADT7462_ALARM2 | ADT7462_V1_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in9_alarm, alarm,
+	ADT7462_ALARM3 | ADT7462_V10_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in10_alarm, alarm,
+	ADT7462_ALARM3 | ADT7462_V9_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in11_alarm, alarm,
+	ADT7462_ALARM3 | ADT7462_V8_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in12_alarm, alarm,
+	ADT7462_ALARM3 | ADT7462_V11_ALARM);
+static SENSOR_DEVICE_ATTR_RO(in13_alarm, alarm,
+	ADT7462_ALARM3 | ADT7462_V12_ALARM);
+
+static SENSOR_DEVICE_ATTR_RW(fan1_min, fan_min, 0);
+static SENSOR_DEVICE_ATTR_RW(fan2_min, fan_min, 1);
+static SENSOR_DEVICE_ATTR_RW(fan3_min, fan_min, 2);
+static SENSOR_DEVICE_ATTR_RW(fan4_min, fan_min, 3);
+static SENSOR_DEVICE_ATTR_RW(fan5_min, fan_min, 4);
+static SENSOR_DEVICE_ATTR_RW(fan6_min, fan_min, 5);
+static SENSOR_DEVICE_ATTR_RW(fan7_min, fan_min, 6);
+static SENSOR_DEVICE_ATTR_RW(fan8_min, fan_min, 7);
+
+static SENSOR_DEVICE_ATTR_RO(fan1_input, fan, 0);
+static SENSOR_DEVICE_ATTR_RO(fan2_input, fan, 1);
+static SENSOR_DEVICE_ATTR_RO(fan3_input, fan, 2);
+static SENSOR_DEVICE_ATTR_RO(fan4_input, fan, 3);
+static SENSOR_DEVICE_ATTR_RO(fan5_input, fan, 4);
+static SENSOR_DEVICE_ATTR_RO(fan6_input, fan, 5);
+static SENSOR_DEVICE_ATTR_RO(fan7_input, fan, 6);
+static SENSOR_DEVICE_ATTR_RO(fan8_input, fan, 7);
+
+static SENSOR_DEVICE_ATTR_RO(fan1_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F0_ALARM);
+static SENSOR_DEVICE_ATTR_RO(fan2_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F1_ALARM);
+static SENSOR_DEVICE_ATTR_RO(fan3_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F2_ALARM);
+static SENSOR_DEVICE_ATTR_RO(fan4_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F3_ALARM);
+static SENSOR_DEVICE_ATTR_RO(fan5_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F4_ALARM);
+static SENSOR_DEVICE_ATTR_RO(fan6_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F5_ALARM);
+static SENSOR_DEVICE_ATTR_RO(fan7_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F6_ALARM);
+static SENSOR_DEVICE_ATTR_RO(fan8_alarm, alarm,
+	ADT7462_ALARM4 | ADT7462_F7_ALARM);
+
+static SENSOR_DEVICE_ATTR_RW(force_pwm_max, force_pwm_max, 0);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1, pwm, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2, pwm, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3, pwm, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4, pwm, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point1_pwm, pwm_min, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point1_pwm, pwm_min, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point1_pwm, pwm_min, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_auto_point1_pwm, pwm_min, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point2_pwm, pwm_max, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point2_pwm, pwm_max, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point2_pwm, pwm_max, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_auto_point2_pwm, pwm_max, 3);
+
+static SENSOR_DEVICE_ATTR_RW(temp1_auto_point1_hyst, pwm_hyst, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_auto_point1_hyst, pwm_hyst, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_auto_point1_hyst, pwm_hyst, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_auto_point1_hyst, pwm_hyst, 3);
+
+static SENSOR_DEVICE_ATTR_RW(temp1_auto_point2_hyst, pwm_hyst, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_auto_point2_hyst, pwm_hyst, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_auto_point2_hyst, pwm_hyst, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_auto_point2_hyst, pwm_hyst, 3);
+
+static SENSOR_DEVICE_ATTR_RW(temp1_auto_point1_temp, pwm_tmin, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_auto_point1_temp, pwm_tmin, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_auto_point1_temp, pwm_tmin, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_auto_point1_temp, pwm_tmin, 3);
+
+static SENSOR_DEVICE_ATTR_RW(temp1_auto_point2_temp, pwm_tmax, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_auto_point2_temp, pwm_tmax, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_auto_point2_temp, pwm_tmax, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_auto_point2_temp, pwm_tmax, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_enable, pwm_auto, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_enable, pwm_auto, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_enable, pwm_auto, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_enable, pwm_auto, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_channels_temp, pwm_auto_temp, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_auto_channels_temp, pwm_auto_temp, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_auto_channels_temp, pwm_auto_temp, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_auto_channels_temp, pwm_auto_temp, 3);
 
 static struct attribute *adt7462_attrs[] = {
 	&sensor_dev_attr_temp1_max.dev_attr.attr,
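The adt7462.c hunk above sets the pattern for every driver in this series: read-only attributes declared as SENSOR_DEVICE_ATTR(name, S_IRUGO, show_x, NULL, index) become SENSOR_DEVICE_ATTR_RO(name, x, index), and read-write ones declared with S_IWUSR | S_IRUGO become SENSOR_DEVICE_ATTR_RW(name, x, index). The new macros supply the permissions and paste the callback names together, which is why each conversion is paired with a rename of the callbacks to the <func>_show/<func>_store convention. A minimal sketch of the helpers this relies on, assuming the definitions that include/linux/hwmon-sysfs.h carried around the time of this merge:

	/* Sketch, not copied from this patch: the _RO/_RW helpers derive
	 * both the mode and the callback names from _func, so
	 * SENSOR_DEVICE_ATTR_RW(temp1_max, temp_max, 0) requires
	 * temp_max_show()/temp_max_store() to exist.
	 */
	#define SENSOR_DEVICE_ATTR_RO(_name, _func, _index) \
		SENSOR_DEVICE_ATTR(_name, 0444, _func##_show, NULL, _index)
	#define SENSOR_DEVICE_ATTR_RW(_name, _func, _index) \
		SENSOR_DEVICE_ATTR(_name, 0644, _func##_show, \
				   _func##_store, _index)

Expanding SENSOR_DEVICE_ATTR_RW(temp1_max, temp_max, 0) therefore yields the same attribute as the removed SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, temp_max_show, temp_max_store, 0), with octal 0644 standing in for the mode bits.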
diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c
index 2cd920751441..6d87daf18809 100644
--- a/drivers/hwmon/adt7470.c
+++ b/drivers/hwmon/adt7470.c
@@ -459,19 +459,17 @@ static ssize_t num_temp_sensors_store(struct device *dev,
 	return count;
 }
 
-static ssize_t show_temp_min(struct device *dev,
-			     struct device_attribute *devattr,
-			     char *buf)
+static ssize_t temp_min_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
 
 	return sprintf(buf, "%d\n", 1000 * data->temp_min[attr->index]);
 }
 
-static ssize_t set_temp_min(struct device *dev,
-			    struct device_attribute *devattr,
-			    const char *buf,
-			    size_t count)
+static ssize_t temp_min_store(struct device *dev,
+			      struct device_attribute *devattr,
+			      const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -493,19 +491,17 @@ static ssize_t set_temp_min(struct device *dev,
 	return count;
 }
 
-static ssize_t show_temp_max(struct device *dev,
-			     struct device_attribute *devattr,
-			     char *buf)
+static ssize_t temp_max_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
 
 	return sprintf(buf, "%d\n", 1000 * data->temp_max[attr->index]);
 }
 
-static ssize_t set_temp_max(struct device *dev,
-			    struct device_attribute *devattr,
-			    const char *buf,
-			    size_t count)
+static ssize_t temp_max_store(struct device *dev,
+			      struct device_attribute *devattr,
+			      const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -527,7 +523,7 @@ static ssize_t set_temp_max(struct device *dev,
 	return count;
 }
 
-static ssize_t show_temp(struct device *dev, struct device_attribute *devattr,
+static ssize_t temp_show(struct device *dev, struct device_attribute *devattr,
 			 char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
@@ -565,9 +561,8 @@ static ssize_t alarm_mask_store(struct device *dev,
 	return count;
 }
 
-static ssize_t show_fan_max(struct device *dev,
-			    struct device_attribute *devattr,
-			    char *buf)
+static ssize_t fan_max_show(struct device *dev,
+			    struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
@@ -579,9 +574,9 @@ static ssize_t show_fan_max(struct device *dev,
 	return sprintf(buf, "0\n");
 }
 
-static ssize_t set_fan_max(struct device *dev,
-			   struct device_attribute *devattr,
-			   const char *buf, size_t count)
+static ssize_t fan_max_store(struct device *dev,
+			     struct device_attribute *devattr,
+			     const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -602,9 +597,8 @@ static ssize_t set_fan_max(struct device *dev,
 	return count;
 }
 
-static ssize_t show_fan_min(struct device *dev,
-			    struct device_attribute *devattr,
-			    char *buf)
+static ssize_t fan_min_show(struct device *dev,
+			    struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
@@ -616,9 +610,9 @@ static ssize_t show_fan_min(struct device *dev,
 	return sprintf(buf, "0\n");
 }
 
-static ssize_t set_fan_min(struct device *dev,
-			   struct device_attribute *devattr,
-			   const char *buf, size_t count)
+static ssize_t fan_min_store(struct device *dev,
+			     struct device_attribute *devattr,
+			     const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -639,7 +633,7 @@ static ssize_t set_fan_min(struct device *dev,
 	return count;
 }
 
-static ssize_t show_fan(struct device *dev, struct device_attribute *devattr,
+static ssize_t fan_show(struct device *dev, struct device_attribute *devattr,
 			char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
@@ -652,18 +646,16 @@ static ssize_t show_fan(struct device *dev, struct device_attribute *devattr,
 	return sprintf(buf, "0\n");
 }
 
-static ssize_t show_force_pwm_max(struct device *dev,
-				  struct device_attribute *devattr,
-				  char *buf)
+static ssize_t force_pwm_max_show(struct device *dev,
+				  struct device_attribute *devattr, char *buf)
 {
 	struct adt7470_data *data = adt7470_update_device(dev);
 
 	return sprintf(buf, "%d\n", data->force_pwm_max);
 }
 
-static ssize_t set_force_pwm_max(struct device *dev,
-				 struct device_attribute *devattr,
-				 const char *buf,
-				 size_t count)
+static ssize_t force_pwm_max_store(struct device *dev,
+				   struct device_attribute *devattr,
+				   const char *buf, size_t count)
 {
 	struct adt7470_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -686,7 +678,7 @@ static ssize_t set_force_pwm_max(struct device *dev,
 	return count;
 }
 
-static ssize_t show_pwm(struct device *dev, struct device_attribute *devattr,
+static ssize_t pwm_show(struct device *dev, struct device_attribute *devattr,
 			char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
@@ -694,8 +686,8 @@ static ssize_t show_pwm(struct device *dev, struct device_attribute *devattr,
 	return sprintf(buf, "%d\n", data->pwm[attr->index]);
 }
 
-static ssize_t set_pwm(struct device *dev, struct device_attribute *devattr,
-		       const char *buf, size_t count)
+static ssize_t pwm_store(struct device *dev, struct device_attribute *devattr,
+			 const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -779,19 +771,17 @@ static ssize_t pwm1_freq_store(struct device *dev,
 	return count;
 }
 
-static ssize_t show_pwm_max(struct device *dev,
-			    struct device_attribute *devattr,
-			    char *buf)
+static ssize_t pwm_max_show(struct device *dev,
+			    struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
 
 	return sprintf(buf, "%d\n", data->pwm_max[attr->index]);
 }
 
-static ssize_t set_pwm_max(struct device *dev,
-			   struct device_attribute *devattr,
-			   const char *buf,
-			   size_t count)
+static ssize_t pwm_max_store(struct device *dev,
+			     struct device_attribute *devattr,
+			     const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -812,19 +802,17 @@ static ssize_t set_pwm_max(struct device *dev,
 	return count;
 }
 
-static ssize_t show_pwm_min(struct device *dev,
-			    struct device_attribute *devattr,
-			    char *buf)
+static ssize_t pwm_min_show(struct device *dev,
+			    struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
 
 	return sprintf(buf, "%d\n", data->pwm_min[attr->index]);
 }
 
-static ssize_t set_pwm_min(struct device *dev,
-			   struct device_attribute *devattr,
-			   const char *buf,
-			   size_t count)
+static ssize_t pwm_min_store(struct device *dev,
+			     struct device_attribute *devattr,
+			     const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -845,9 +833,8 @@ static ssize_t set_pwm_min(struct device *dev,
 	return count;
 }
 
-static ssize_t show_pwm_tmax(struct device *dev,
-			     struct device_attribute *devattr,
-			     char *buf)
+static ssize_t pwm_tmax_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
@@ -855,19 +842,17 @@ static ssize_t show_pwm_tmax(struct device *dev,
 	return sprintf(buf, "%d\n", 1000 * (20 + data->pwm_tmin[attr->index]));
 }
 
-static ssize_t show_pwm_tmin(struct device *dev,
-			     struct device_attribute *devattr,
-			     char *buf)
+static ssize_t pwm_tmin_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
 
 	return sprintf(buf, "%d\n", 1000 * data->pwm_tmin[attr->index]);
 }
 
-static ssize_t set_pwm_tmin(struct device *dev,
-			    struct device_attribute *devattr,
-			    const char *buf,
-			    size_t count)
+static ssize_t pwm_tmin_store(struct device *dev,
+			      struct device_attribute *devattr,
+			      const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -889,19 +874,17 @@ static ssize_t set_pwm_tmin(struct device *dev,
 	return count;
 }
 
-static ssize_t show_pwm_auto(struct device *dev,
-			     struct device_attribute *devattr,
-			     char *buf)
+static ssize_t pwm_auto_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
 
 	return sprintf(buf, "%d\n", 1 + data->pwm_automatic[attr->index]);
 }
 
-static ssize_t set_pwm_auto(struct device *dev,
-			    struct device_attribute *devattr,
-			    const char *buf,
-			    size_t count)
+static ssize_t pwm_auto_store(struct device *dev,
+			      struct device_attribute *devattr,
+			      const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -936,9 +919,8 @@ static ssize_t set_pwm_auto(struct device *dev,
 	return count;
 }
 
-static ssize_t show_pwm_auto_temp(struct device *dev,
-				  struct device_attribute *devattr,
-				  char *buf)
+static ssize_t pwm_auto_temp_show(struct device *dev,
+				  struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
@@ -959,10 +941,9 @@ static int cvt_auto_temp(int input)
 	return ilog2(input) + 1;
 }
 
-static ssize_t set_pwm_auto_temp(struct device *dev,
-				 struct device_attribute *devattr,
-				 const char *buf,
-				 size_t count)
+static ssize_t pwm_auto_temp_store(struct device *dev,
+				   struct device_attribute *devattr,
+				   const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = dev_get_drvdata(dev);
@@ -996,9 +977,8 @@ static ssize_t set_pwm_auto_temp(struct device *dev,
 	return count;
 }
 
-static ssize_t show_alarm(struct device *dev,
-			  struct device_attribute *devattr,
-			  char *buf)
+static ssize_t alarm_show(struct device *dev,
+			  struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
 	struct adt7470_data *data = adt7470_update_device(dev);
@@ -1013,175 +993,108 @@ static DEVICE_ATTR_RW(alarm_mask);
 static DEVICE_ATTR_RW(num_temp_sensors);
 static DEVICE_ATTR_RW(auto_update_interval);
 
-static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 0);
-static SENSOR_DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 1);
-static SENSOR_DEVICE_ATTR(temp3_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 2);
-static SENSOR_DEVICE_ATTR(temp4_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 3);
-static SENSOR_DEVICE_ATTR(temp5_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 4);
-static SENSOR_DEVICE_ATTR(temp6_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 5);
-static SENSOR_DEVICE_ATTR(temp7_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 6);
-static SENSOR_DEVICE_ATTR(temp8_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 7);
-static SENSOR_DEVICE_ATTR(temp9_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 8);
-static SENSOR_DEVICE_ATTR(temp10_max, S_IWUSR | S_IRUGO, show_temp_max,
-	set_temp_max, 9);
-
-static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 0);
-static SENSOR_DEVICE_ATTR(temp2_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 1);
-static SENSOR_DEVICE_ATTR(temp3_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 2);
-static SENSOR_DEVICE_ATTR(temp4_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 3);
-static SENSOR_DEVICE_ATTR(temp5_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 4);
-static SENSOR_DEVICE_ATTR(temp6_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 5);
-static SENSOR_DEVICE_ATTR(temp7_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 6);
-static SENSOR_DEVICE_ATTR(temp8_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 7);
-static SENSOR_DEVICE_ATTR(temp9_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 8);
-static SENSOR_DEVICE_ATTR(temp10_min, S_IWUSR | S_IRUGO, show_temp_min,
-	set_temp_min, 9);
-
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_temp, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_temp, NULL, 3);
-static SENSOR_DEVICE_ATTR(temp5_input, S_IRUGO, show_temp, NULL, 4);
-static SENSOR_DEVICE_ATTR(temp6_input, S_IRUGO, show_temp, NULL, 5);
-static SENSOR_DEVICE_ATTR(temp7_input, S_IRUGO, show_temp, NULL, 6);
-static SENSOR_DEVICE_ATTR(temp8_input, S_IRUGO, show_temp, NULL, 7);
-static SENSOR_DEVICE_ATTR(temp9_input, S_IRUGO, show_temp, NULL, 8);
-static SENSOR_DEVICE_ATTR(temp10_input, S_IRUGO, show_temp, NULL, 9);
-
-static SENSOR_DEVICE_ATTR(temp1_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7470_R1T_ALARM);
-static SENSOR_DEVICE_ATTR(temp2_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7470_R2T_ALARM);
-static SENSOR_DEVICE_ATTR(temp3_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7470_R3T_ALARM);
-static SENSOR_DEVICE_ATTR(temp4_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7470_R4T_ALARM);
-static SENSOR_DEVICE_ATTR(temp5_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7470_R5T_ALARM);
-static SENSOR_DEVICE_ATTR(temp6_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7470_R6T_ALARM);
-static SENSOR_DEVICE_ATTR(temp7_alarm, S_IRUGO, show_alarm, NULL,
-	ADT7470_R7T_ALARM);
-static SENSOR_DEVICE_ATTR(temp8_alarm, S_IRUGO, show_alarm, NULL,
-	ALARM2(ADT7470_R8T_ALARM));
-static SENSOR_DEVICE_ATTR(temp9_alarm, S_IRUGO, show_alarm, NULL,
-	ALARM2(ADT7470_R9T_ALARM));
-static SENSOR_DEVICE_ATTR(temp10_alarm, S_IRUGO, show_alarm, NULL,
-	ALARM2(ADT7470_R10T_ALARM));
-
-static SENSOR_DEVICE_ATTR(fan1_max, S_IWUSR | S_IRUGO, show_fan_max,
-	set_fan_max, 0);
-static SENSOR_DEVICE_ATTR(fan2_max, S_IWUSR | S_IRUGO, show_fan_max,
-	set_fan_max, 1);
-static SENSOR_DEVICE_ATTR(fan3_max, S_IWUSR | S_IRUGO, show_fan_max,
-	set_fan_max, 2);
-static SENSOR_DEVICE_ATTR(fan4_max, S_IWUSR | S_IRUGO, show_fan_max,
-	set_fan_max, 3);
-
-static SENSOR_DEVICE_ATTR(fan1_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 0);
-static SENSOR_DEVICE_ATTR(fan2_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 1);
-static SENSOR_DEVICE_ATTR(fan3_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 2);
-static SENSOR_DEVICE_ATTR(fan4_min, S_IWUSR | S_IRUGO, show_fan_min,
-	set_fan_min, 3);
-
-static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, show_fan, NULL, 0);
-static SENSOR_DEVICE_ATTR(fan2_input, S_IRUGO, show_fan, NULL, 1);
-static SENSOR_DEVICE_ATTR(fan3_input, S_IRUGO, show_fan, NULL, 2);
-static SENSOR_DEVICE_ATTR(fan4_input, S_IRUGO, show_fan, NULL, 3);
-
-static SENSOR_DEVICE_ATTR(fan1_alarm, S_IRUGO, show_alarm, NULL,
-	ALARM2(ADT7470_FAN1_ALARM));
-static SENSOR_DEVICE_ATTR(fan2_alarm, S_IRUGO, show_alarm, NULL,
-	ALARM2(ADT7470_FAN2_ALARM));
-static SENSOR_DEVICE_ATTR(fan3_alarm, S_IRUGO, show_alarm, NULL,
-	ALARM2(ADT7470_FAN3_ALARM));
-static SENSOR_DEVICE_ATTR(fan4_alarm, S_IRUGO, show_alarm, NULL,
-	ALARM2(ADT7470_FAN4_ALARM));
-
-static SENSOR_DEVICE_ATTR(force_pwm_max, S_IWUSR | S_IRUGO,
-	show_force_pwm_max, set_force_pwm_max, 0);
-
-static SENSOR_DEVICE_ATTR(pwm1, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 0);
-static SENSOR_DEVICE_ATTR(pwm2, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 1);
-static SENSOR_DEVICE_ATTR(pwm3, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 2);
-static SENSOR_DEVICE_ATTR(pwm4, S_IWUSR | S_IRUGO, show_pwm, set_pwm, 3);
+static SENSOR_DEVICE_ATTR_RW(temp1_max, temp_max, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_max, temp_max, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_max, temp_max, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_max, temp_max, 3);
+static SENSOR_DEVICE_ATTR_RW(temp5_max, temp_max, 4);
+static SENSOR_DEVICE_ATTR_RW(temp6_max, temp_max, 5);
+static SENSOR_DEVICE_ATTR_RW(temp7_max, temp_max, 6);
+static SENSOR_DEVICE_ATTR_RW(temp8_max, temp_max, 7);
+static SENSOR_DEVICE_ATTR_RW(temp9_max, temp_max, 8);
+static SENSOR_DEVICE_ATTR_RW(temp10_max, temp_max, 9);
+
+static SENSOR_DEVICE_ATTR_RW(temp1_min, temp_min, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_min, temp_min, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_min, temp_min, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_min, temp_min, 3);
+static SENSOR_DEVICE_ATTR_RW(temp5_min, temp_min, 4);
+static SENSOR_DEVICE_ATTR_RW(temp6_min, temp_min, 5);
+static SENSOR_DEVICE_ATTR_RW(temp7_min, temp_min, 6);
+static SENSOR_DEVICE_ATTR_RW(temp8_min, temp_min, 7);
+static SENSOR_DEVICE_ATTR_RW(temp9_min, temp_min, 8);
+static SENSOR_DEVICE_ATTR_RW(temp10_min, temp_min, 9);
+
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0);
+static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, 1);
+static SENSOR_DEVICE_ATTR_RO(temp3_input, temp, 2);
+static SENSOR_DEVICE_ATTR_RO(temp4_input, temp, 3);
+static SENSOR_DEVICE_ATTR_RO(temp5_input, temp, 4);
+static SENSOR_DEVICE_ATTR_RO(temp6_input, temp, 5);
+static SENSOR_DEVICE_ATTR_RO(temp7_input, temp, 6);
+static SENSOR_DEVICE_ATTR_RO(temp8_input, temp, 7);
+static SENSOR_DEVICE_ATTR_RO(temp9_input, temp, 8);
+static SENSOR_DEVICE_ATTR_RO(temp10_input, temp, 9);
+
+static SENSOR_DEVICE_ATTR_RO(temp1_alarm, alarm, ADT7470_R1T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp2_alarm, alarm, ADT7470_R2T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp3_alarm, alarm, ADT7470_R3T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp4_alarm, alarm, ADT7470_R4T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp5_alarm, alarm, ADT7470_R5T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp6_alarm, alarm, ADT7470_R6T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp7_alarm, alarm, ADT7470_R7T_ALARM);
+static SENSOR_DEVICE_ATTR_RO(temp8_alarm, alarm, ALARM2(ADT7470_R8T_ALARM));
+static SENSOR_DEVICE_ATTR_RO(temp9_alarm, alarm, ALARM2(ADT7470_R9T_ALARM));
+static SENSOR_DEVICE_ATTR_RO(temp10_alarm, alarm, ALARM2(ADT7470_R10T_ALARM));
+
+static SENSOR_DEVICE_ATTR_RW(fan1_max, fan_max, 0);
+static SENSOR_DEVICE_ATTR_RW(fan2_max, fan_max, 1);
+static SENSOR_DEVICE_ATTR_RW(fan3_max, fan_max, 2);
+static SENSOR_DEVICE_ATTR_RW(fan4_max, fan_max, 3);
+
+static SENSOR_DEVICE_ATTR_RW(fan1_min, fan_min, 0);
+static SENSOR_DEVICE_ATTR_RW(fan2_min, fan_min, 1);
+static SENSOR_DEVICE_ATTR_RW(fan3_min, fan_min, 2);
+static SENSOR_DEVICE_ATTR_RW(fan4_min, fan_min, 3);
+
+static SENSOR_DEVICE_ATTR_RO(fan1_input, fan, 0);
+static SENSOR_DEVICE_ATTR_RO(fan2_input, fan, 1);
+static SENSOR_DEVICE_ATTR_RO(fan3_input, fan, 2);
+static SENSOR_DEVICE_ATTR_RO(fan4_input, fan, 3);
+
+static SENSOR_DEVICE_ATTR_RO(fan1_alarm, alarm, ALARM2(ADT7470_FAN1_ALARM));
+static SENSOR_DEVICE_ATTR_RO(fan2_alarm, alarm, ALARM2(ADT7470_FAN2_ALARM));
+static SENSOR_DEVICE_ATTR_RO(fan3_alarm, alarm, ALARM2(ADT7470_FAN3_ALARM));
+static SENSOR_DEVICE_ATTR_RO(fan4_alarm, alarm, ALARM2(ADT7470_FAN4_ALARM));
+
+static SENSOR_DEVICE_ATTR_RW(force_pwm_max, force_pwm_max, 0);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1, pwm, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2, pwm, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3, pwm, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4, pwm, 3);
 
 static DEVICE_ATTR_RW(pwm1_freq);
 
-static SENSOR_DEVICE_ATTR(pwm1_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_point1_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_min, set_pwm_min, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	show_pwm_max, set_pwm_max, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_point1_temp, S_IWUSR | S_IRUGO,
-	show_pwm_tmin, set_pwm_tmin, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_auto_point2_temp, S_IRUGO, show_pwm_tmax,
-	NULL, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_point2_temp, S_IRUGO, show_pwm_tmax,
-	NULL, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_point2_temp, S_IRUGO, show_pwm_tmax,
-	NULL, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_point2_temp, S_IRUGO, show_pwm_tmax,
-	NULL, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 0);
-static SENSOR_DEVICE_ATTR(pwm2_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 1);
-static SENSOR_DEVICE_ATTR(pwm3_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 2);
-static SENSOR_DEVICE_ATTR(pwm4_enable, S_IWUSR | S_IRUGO, show_pwm_auto,
-	set_pwm_auto, 3);
-
-static SENSOR_DEVICE_ATTR(pwm1_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 0);
-static SENSOR_DEVICE_ATTR(pwm2_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 1);
-static SENSOR_DEVICE_ATTR(pwm3_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 2);
-static SENSOR_DEVICE_ATTR(pwm4_auto_channels_temp, S_IWUSR | S_IRUGO,
-	show_pwm_auto_temp, set_pwm_auto_temp, 3);
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point1_pwm, pwm_min, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point1_pwm, pwm_min, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point1_pwm, pwm_min, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_auto_point1_pwm, pwm_min, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point2_pwm, pwm_max, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point2_pwm, pwm_max, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point2_pwm, pwm_max, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_auto_point2_pwm, pwm_max, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point1_temp, pwm_tmin, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point1_temp, pwm_tmin, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point1_temp, pwm_tmin, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_auto_point1_temp, pwm_tmin, 3);
+
+static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point2_temp, pwm_tmax, 0);
+static SENSOR_DEVICE_ATTR_RO(pwm2_auto_point2_temp, pwm_tmax, 1);
+static SENSOR_DEVICE_ATTR_RO(pwm3_auto_point2_temp, pwm_tmax, 2);
+static SENSOR_DEVICE_ATTR_RO(pwm4_auto_point2_temp, pwm_tmax, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_enable, pwm_auto, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_enable, pwm_auto, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_enable, pwm_auto, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_enable, pwm_auto, 3);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_channels_temp, pwm_auto_temp, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2_auto_channels_temp, pwm_auto_temp, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3_auto_channels_temp, pwm_auto_temp, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4_auto_channels_temp, pwm_auto_temp, 3);
 
 static struct attribute *adt7470_attrs[] = {
 	&dev_attr_alarm_mask.attr,
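The adt7470.c conversion just above shows the other half of the mechanical change: each handler is renamed from show_x/set_x to x_show/x_store and its parameter list reflowed, with the body left untouched. Condensed from the hunks above into a before/after sketch:

	/* before: callback names chosen freely, wired up by hand */
	static ssize_t show_fan(struct device *dev,
				struct device_attribute *devattr, char *buf);
	static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, show_fan, NULL, 0);

	/* after: the _show suffix lets SENSOR_DEVICE_ATTR_RO() paste the
	 * callback name from the "fan" stem
	 */
	static ssize_t fan_show(struct device *dev,
				struct device_attribute *devattr, char *buf);
	static SENSOR_DEVICE_ATTR_RO(fan1_input, fan, 0);

Note that pwm1_auto_point2_temp and friends stay read-only (_RO) here because the old declarations passed NULL as the store callback, so the rename touches only the show side.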
diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
index f4c7516eb989..0dbb8df74e44 100644
--- a/drivers/hwmon/adt7475.c
+++ b/drivers/hwmon/adt7475.c
@@ -322,7 +322,7 @@ static void adt7475_write_word(struct i2c_client *client, int reg, u16 val)
 	i2c_smbus_write_byte_data(client, reg, val & 0xFF);
 }
 
-static ssize_t show_voltage(struct device *dev, struct device_attribute *attr,
+static ssize_t voltage_show(struct device *dev, struct device_attribute *attr,
 			    char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -343,8 +343,9 @@ static ssize_t show_voltage(struct device *dev, struct device_attribute *attr,
 	}
 }
 
-static ssize_t set_voltage(struct device *dev, struct device_attribute *attr,
-			   const char *buf, size_t count)
+static ssize_t voltage_store(struct device *dev,
+			     struct device_attribute *attr, const char *buf,
+			     size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
@@ -380,7 +381,7 @@ static ssize_t set_voltage(struct device *dev, struct device_attribute *attr,
 	return count;
 }
 
-static ssize_t show_temp(struct device *dev, struct device_attribute *attr,
+static ssize_t temp_show(struct device *dev, struct device_attribute *attr,
 			 char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -438,8 +439,8 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%d\n", out);
 }
 
-static ssize_t set_temp(struct device *dev, struct device_attribute *attr,
-			const char *buf, size_t count)
+static ssize_t temp_store(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
 	struct i2c_client *client = to_i2c_client(dev);
@@ -540,8 +541,8 @@ static const int ad7475_st_map[] = {
 	37500, 18800, 12500, 7500, 4700, 3100, 1600, 800,
 };
 
-static ssize_t show_temp_st(struct device *dev, struct device_attribute *attr,
-			    char *buf)
+static ssize_t temp_st_show(struct device *dev, struct device_attribute *attr,
+			    char *buf)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
 	struct i2c_client *client = to_i2c_client(dev);
@@ -567,8 +568,9 @@ static ssize_t show_temp_st(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "0\n");
 }
 
-static ssize_t set_temp_st(struct device *dev, struct device_attribute *attr,
-			   const char *buf, size_t count)
+static ssize_t temp_st_store(struct device *dev,
+			     struct device_attribute *attr, const char *buf,
+			     size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
 	struct i2c_client *client = to_i2c_client(dev);
@@ -627,7 +629,7 @@ static const int autorange_table[] = {
 	53330, 80000
 };
 
-static ssize_t show_point2(struct device *dev, struct device_attribute *attr,
+static ssize_t point2_show(struct device *dev, struct device_attribute *attr,
 			   char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -645,8 +647,8 @@ static ssize_t show_point2(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%d\n", val + autorange_table[out]);
 }
 
-static ssize_t set_point2(struct device *dev, struct device_attribute *attr,
-			  const char *buf, size_t count)
+static ssize_t point2_store(struct device *dev, struct device_attribute *attr,
+			    const char *buf, size_t count)
 {
 	struct i2c_client *client = to_i2c_client(dev);
 	struct adt7475_data *data = i2c_get_clientdata(client);
@@ -688,7 +690,7 @@ static ssize_t set_point2(struct device *dev, struct device_attribute *attr,
 	return count;
 }
 
-static ssize_t show_tach(struct device *dev, struct device_attribute *attr,
+static ssize_t tach_show(struct device *dev, struct device_attribute *attr,
 			 char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -706,8 +708,8 @@ static ssize_t show_tach(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%d\n", out);
 }
 
-static ssize_t set_tach(struct device *dev, struct device_attribute *attr,
-			const char *buf, size_t count)
+static ssize_t tach_store(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
@@ -729,7 +731,7 @@ static ssize_t set_tach(struct device *dev, struct device_attribute *attr,
 	return count;
 }
 
-static ssize_t show_pwm(struct device *dev, struct device_attribute *attr,
+static ssize_t pwm_show(struct device *dev, struct device_attribute *attr,
 			char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -741,7 +743,7 @@ static ssize_t show_pwm(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%d\n", data->pwm[sattr->nr][sattr->index]);
 }
 
-static ssize_t show_pwmchan(struct device *dev, struct device_attribute *attr,
+static ssize_t pwmchan_show(struct device *dev, struct device_attribute *attr,
 			    char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -753,7 +755,7 @@ static ssize_t show_pwmchan(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%d\n", data->pwmchan[sattr->index]);
 }
 
-static ssize_t show_pwmctrl(struct device *dev, struct device_attribute *attr,
+static ssize_t pwmctrl_show(struct device *dev, struct device_attribute *attr,
 			    char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -765,8 +767,8 @@ static ssize_t show_pwmctrl(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%d\n", data->pwmctl[sattr->index]);
 }
 
-static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
-		       const char *buf, size_t count)
+static ssize_t pwm_store(struct device *dev, struct device_attribute *attr,
+			 const char *buf, size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
@@ -815,7 +817,7 @@ static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
 	return count;
 }
 
-static ssize_t show_stall_disable(struct device *dev,
+static ssize_t stall_disable_show(struct device *dev,
 				  struct device_attribute *attr, char *buf)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
@@ -826,9 +828,9 @@ static ssize_t show_stall_disable(struct device *dev,
 	return sprintf(buf, "%d\n", !!(data->enh_acoustics[0] & mask));
 }
 
-static ssize_t set_stall_disable(struct device *dev,
-				 struct device_attribute *attr, const char *buf,
-				 size_t count)
+static ssize_t stall_disable_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
 	struct i2c_client *client = to_i2c_client(dev);
@@ -910,8 +912,9 @@ static int hw_set_pwm(struct i2c_client *client, int index,
 	return 0;
 }
 
-static ssize_t set_pwmchan(struct device *dev, struct device_attribute *attr,
-			   const char *buf, size_t count)
+static ssize_t pwmchan_store(struct device *dev,
+			     struct device_attribute *attr, const char *buf,
+			     size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
 	struct i2c_client *client = to_i2c_client(dev);
@@ -933,8 +936,9 @@ static ssize_t set_pwmchan(struct device *dev, struct device_attribute *attr,
 	return count;
 }
 
-static ssize_t set_pwmctrl(struct device *dev, struct device_attribute *attr,
-			   const char *buf, size_t count)
+static ssize_t pwmctrl_store(struct device *dev,
+			     struct device_attribute *attr, const char *buf,
+			     size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
 	struct i2c_client *client = to_i2c_client(dev);
@@ -961,7 +965,7 @@ static const int pwmfreq_table[] = {
 	11, 14, 22, 29, 35, 44, 58, 88, 22500
 };
 
-static ssize_t show_pwmfreq(struct device *dev, struct device_attribute *attr,
+static ssize_t pwmfreq_show(struct device *dev, struct device_attribute *attr,
 			    char *buf)
 {
 	struct adt7475_data *data = adt7475_update_device(dev);
@@ -976,8 +980,9 @@ static ssize_t show_pwmfreq(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%d\n", pwmfreq_table[idx]);
 }
 
-static ssize_t set_pwmfreq(struct device *dev, struct device_attribute *attr,
-			   const char *buf, size_t count)
+static ssize_t pwmfreq_store(struct device *dev,
+			     struct device_attribute *attr, const char *buf,
+			     size_t count)
 {
 	struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
 	struct i2c_client *client = to_i2c_client(dev);
@@ -1074,156 +1079,95 @@ static ssize_t cpu0_vid_show(struct device *dev,
 	return sprintf(buf, "%d\n", vid_from_reg(data->vid, data->vrm));
 }
 
-static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, show_voltage, NULL, INPUT, 0);
-static SENSOR_DEVICE_ATTR_2(in0_max, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MAX, 0);
-static SENSOR_DEVICE_ATTR_2(in0_min, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MIN, 0);
-static SENSOR_DEVICE_ATTR_2(in0_alarm, S_IRUGO, show_voltage, NULL, ALARM, 0);
-static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, show_voltage, NULL, INPUT, 1);
-static SENSOR_DEVICE_ATTR_2(in1_max, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MAX, 1);
-static SENSOR_DEVICE_ATTR_2(in1_min, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MIN, 1);
-static SENSOR_DEVICE_ATTR_2(in1_alarm, S_IRUGO, show_voltage, NULL, ALARM, 1);
-static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, show_voltage, NULL, INPUT, 2);
-static SENSOR_DEVICE_ATTR_2(in2_max, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MAX, 2);
-static SENSOR_DEVICE_ATTR_2(in2_min, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MIN, 2);
-static SENSOR_DEVICE_ATTR_2(in2_alarm, S_IRUGO, show_voltage, NULL, ALARM, 2);
-static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, show_voltage, NULL, INPUT, 3);
-static SENSOR_DEVICE_ATTR_2(in3_max, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MAX, 3);
-static SENSOR_DEVICE_ATTR_2(in3_min, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MIN, 3);
-static SENSOR_DEVICE_ATTR_2(in3_alarm, S_IRUGO, show_voltage, NULL, ALARM, 3);
-static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, show_voltage, NULL, INPUT, 4);
-static SENSOR_DEVICE_ATTR_2(in4_max, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MAX, 4);
-static SENSOR_DEVICE_ATTR_2(in4_min, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MIN, 4);
-static SENSOR_DEVICE_ATTR_2(in4_alarm, S_IRUGO, show_voltage, NULL, ALARM, 8);
-static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, show_voltage, NULL, INPUT, 5);
-static SENSOR_DEVICE_ATTR_2(in5_max, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MAX, 5);
-static SENSOR_DEVICE_ATTR_2(in5_min, S_IRUGO | S_IWUSR, show_voltage,
-	set_voltage, MIN, 5);
-static SENSOR_DEVICE_ATTR_2(in5_alarm, S_IRUGO, show_voltage, NULL, ALARM, 31);
-static SENSOR_DEVICE_ATTR_2(temp1_input, S_IRUGO, show_temp, NULL, INPUT, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_alarm, S_IRUGO, show_temp, NULL, ALARM, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_fault, S_IRUGO, show_temp, NULL, FAULT, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	MAX, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	MIN, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_offset, S_IRUGO | S_IWUSR, show_temp,
-	set_temp, OFFSET, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_auto_point1_temp, S_IRUGO | S_IWUSR,
-	show_temp, set_temp, AUTOMIN, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_auto_point2_temp, S_IRUGO | S_IWUSR,
-	show_point2, set_point2, 0, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_crit, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	THERM, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_crit_hyst, S_IRUGO | S_IWUSR, show_temp,
-	set_temp, HYSTERSIS, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_smoothing, S_IRUGO | S_IWUSR, show_temp_st,
-	set_temp_st, 0, 0);
-static SENSOR_DEVICE_ATTR_2(temp2_input, S_IRUGO, show_temp, NULL, INPUT, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_alarm, S_IRUGO, show_temp, NULL, ALARM, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	MAX, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	MIN, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_offset, S_IRUGO | S_IWUSR, show_temp,
-	set_temp, OFFSET, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_auto_point1_temp, S_IRUGO | S_IWUSR,
-	show_temp, set_temp, AUTOMIN, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_auto_point2_temp, S_IRUGO | S_IWUSR,
-	show_point2, set_point2, 0, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_crit, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	THERM, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_crit_hyst, S_IRUGO | S_IWUSR, show_temp,
-	set_temp, HYSTERSIS, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_smoothing, S_IRUGO | S_IWUSR, show_temp_st,
-	set_temp_st, 0, 1);
-static SENSOR_DEVICE_ATTR_2(temp3_input, S_IRUGO, show_temp, NULL, INPUT, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_alarm, S_IRUGO, show_temp, NULL, ALARM, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_fault, S_IRUGO, show_temp, NULL, FAULT, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	MAX, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	MIN, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_offset, S_IRUGO | S_IWUSR, show_temp,
-	set_temp, OFFSET, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_auto_point1_temp, S_IRUGO | S_IWUSR,
-	show_temp, set_temp, AUTOMIN, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_auto_point2_temp, S_IRUGO | S_IWUSR,
-	show_point2, set_point2, 0, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_crit, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	THERM, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_crit_hyst, S_IRUGO | S_IWUSR, show_temp,
-	set_temp, HYSTERSIS, 2);
-static SENSOR_DEVICE_ATTR_2(temp3_smoothing, S_IRUGO | S_IWUSR, show_temp_st,
-	set_temp_st, 0, 2);
-static SENSOR_DEVICE_ATTR_2(fan1_input, S_IRUGO, show_tach, NULL, INPUT, 0);
-static SENSOR_DEVICE_ATTR_2(fan1_min, S_IRUGO | S_IWUSR, show_tach, set_tach,
-	MIN, 0);
-static SENSOR_DEVICE_ATTR_2(fan1_alarm, S_IRUGO, show_tach, NULL, ALARM, 0);
-static SENSOR_DEVICE_ATTR_2(fan2_input, S_IRUGO, show_tach, NULL, INPUT, 1);
-static SENSOR_DEVICE_ATTR_2(fan2_min, S_IRUGO | S_IWUSR, show_tach, set_tach,
-	MIN, 1);
-static SENSOR_DEVICE_ATTR_2(fan2_alarm, S_IRUGO, show_tach, NULL, ALARM, 1);
-static SENSOR_DEVICE_ATTR_2(fan3_input, S_IRUGO, show_tach, NULL, INPUT, 2);
-static SENSOR_DEVICE_ATTR_2(fan3_min, S_IRUGO | S_IWUSR, show_tach, set_tach,
-	MIN, 2);
-static SENSOR_DEVICE_ATTR_2(fan3_alarm, S_IRUGO, show_tach, NULL, ALARM, 2);
-static SENSOR_DEVICE_ATTR_2(fan4_input, S_IRUGO, show_tach, NULL, INPUT, 3);
-static SENSOR_DEVICE_ATTR_2(fan4_min, S_IRUGO | S_IWUSR, show_tach, set_tach,
-	MIN, 3);
-static SENSOR_DEVICE_ATTR_2(fan4_alarm, S_IRUGO, show_tach, NULL, ALARM, 3);
-static SENSOR_DEVICE_ATTR_2(pwm1, S_IRUGO | S_IWUSR, show_pwm, set_pwm, INPUT,
-	0);
-static SENSOR_DEVICE_ATTR_2(pwm1_freq, S_IRUGO | S_IWUSR, show_pwmfreq,
-	set_pwmfreq, INPUT, 0);
-static SENSOR_DEVICE_ATTR_2(pwm1_enable, S_IRUGO | S_IWUSR, show_pwmctrl,
-	set_pwmctrl, INPUT, 0);
-static SENSOR_DEVICE_ATTR_2(pwm1_auto_channels_temp, S_IRUGO | S_IWUSR,
-	show_pwmchan, set_pwmchan, INPUT, 0);
-static SENSOR_DEVICE_ATTR_2(pwm1_auto_point1_pwm, S_IRUGO | S_IWUSR, show_pwm,
-	set_pwm, MIN, 0);
-static SENSOR_DEVICE_ATTR_2(pwm1_auto_point2_pwm, S_IRUGO | S_IWUSR, show_pwm,
-	set_pwm, MAX, 0);
-static SENSOR_DEVICE_ATTR_2(pwm1_stall_disable, S_IRUGO | S_IWUSR,
-	show_stall_disable, set_stall_disable, 0, 0);
-static SENSOR_DEVICE_ATTR_2(pwm2, S_IRUGO | S_IWUSR, show_pwm, set_pwm, INPUT,
-	1);
-static SENSOR_DEVICE_ATTR_2(pwm2_freq, S_IRUGO | S_IWUSR, show_pwmfreq,
-	set_pwmfreq, INPUT, 1);
-static SENSOR_DEVICE_ATTR_2(pwm2_enable, S_IRUGO | S_IWUSR, show_pwmctrl,
-	set_pwmctrl, INPUT, 1);
-static SENSOR_DEVICE_ATTR_2(pwm2_auto_channels_temp, S_IRUGO | S_IWUSR,
-	show_pwmchan, set_pwmchan, INPUT, 1);
-static SENSOR_DEVICE_ATTR_2(pwm2_auto_point1_pwm, S_IRUGO | S_IWUSR, show_pwm,
-	set_pwm, MIN, 1);
-static SENSOR_DEVICE_ATTR_2(pwm2_auto_point2_pwm, S_IRUGO | S_IWUSR, show_pwm,
-	set_pwm, MAX, 1);
-static SENSOR_DEVICE_ATTR_2(pwm2_stall_disable, S_IRUGO | S_IWUSR,
-	show_stall_disable, set_stall_disable, 0, 1);
-static SENSOR_DEVICE_ATTR_2(pwm3, S_IRUGO | S_IWUSR, show_pwm, set_pwm, INPUT,
-	2);
-static SENSOR_DEVICE_ATTR_2(pwm3_freq, S_IRUGO | S_IWUSR, show_pwmfreq,
-	set_pwmfreq, INPUT, 2);
-static SENSOR_DEVICE_ATTR_2(pwm3_enable, S_IRUGO | S_IWUSR, show_pwmctrl,
-	set_pwmctrl, INPUT, 2);
-static SENSOR_DEVICE_ATTR_2(pwm3_auto_channels_temp, S_IRUGO | S_IWUSR,
-	show_pwmchan, set_pwmchan, INPUT, 2);
-static SENSOR_DEVICE_ATTR_2(pwm3_auto_point1_pwm, S_IRUGO | S_IWUSR, show_pwm,
-	set_pwm, MIN, 2);
-static SENSOR_DEVICE_ATTR_2(pwm3_auto_point2_pwm, S_IRUGO | S_IWUSR, show_pwm,
-	set_pwm, MAX, 2);
-static SENSOR_DEVICE_ATTR_2(pwm3_stall_disable, S_IRUGO | S_IWUSR,
-	show_stall_disable, set_stall_disable, 0, 2);
+static SENSOR_DEVICE_ATTR_2_RO(in0_input, voltage, INPUT, 0);
+static SENSOR_DEVICE_ATTR_2_RW(in0_max, voltage, MAX, 0);
+static SENSOR_DEVICE_ATTR_2_RW(in0_min, voltage, MIN, 0);
+static SENSOR_DEVICE_ATTR_2_RO(in0_alarm, voltage, ALARM, 0);
+static SENSOR_DEVICE_ATTR_2_RO(in1_input, voltage, INPUT, 1);
+static SENSOR_DEVICE_ATTR_2_RW(in1_max, voltage, MAX, 1);
+static SENSOR_DEVICE_ATTR_2_RW(in1_min, voltage, MIN, 1);
+static SENSOR_DEVICE_ATTR_2_RO(in1_alarm, voltage, ALARM, 1);
+static SENSOR_DEVICE_ATTR_2_RO(in2_input, voltage, INPUT, 2);
+static SENSOR_DEVICE_ATTR_2_RW(in2_max, voltage, MAX, 2);
+static SENSOR_DEVICE_ATTR_2_RW(in2_min, voltage, MIN, 2);
+static SENSOR_DEVICE_ATTR_2_RO(in2_alarm, voltage, ALARM, 2);
+static SENSOR_DEVICE_ATTR_2_RO(in3_input, voltage, INPUT, 3);
+static SENSOR_DEVICE_ATTR_2_RW(in3_max, voltage, MAX, 3);
+static SENSOR_DEVICE_ATTR_2_RW(in3_min, voltage, MIN, 3);
+static SENSOR_DEVICE_ATTR_2_RO(in3_alarm, voltage, ALARM, 3);
+static SENSOR_DEVICE_ATTR_2_RO(in4_input, voltage, INPUT, 4);
+static SENSOR_DEVICE_ATTR_2_RW(in4_max, voltage, MAX, 4);
+static SENSOR_DEVICE_ATTR_2_RW(in4_min, voltage, MIN, 4);
+static SENSOR_DEVICE_ATTR_2_RO(in4_alarm, voltage, ALARM, 8);
+static SENSOR_DEVICE_ATTR_2_RO(in5_input, voltage, INPUT, 5);
+static SENSOR_DEVICE_ATTR_2_RW(in5_max, voltage, MAX, 5);
+static SENSOR_DEVICE_ATTR_2_RW(in5_min, voltage, MIN, 5);
+static SENSOR_DEVICE_ATTR_2_RO(in5_alarm, voltage, ALARM, 31);
+static SENSOR_DEVICE_ATTR_2_RO(temp1_input, temp, INPUT, 0);
+static SENSOR_DEVICE_ATTR_2_RO(temp1_alarm, temp, ALARM, 0);
+static SENSOR_DEVICE_ATTR_2_RO(temp1_fault, temp, FAULT, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_max, temp, MAX, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_min, temp, MIN, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_offset, temp, OFFSET, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_auto_point1_temp, temp, AUTOMIN, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_auto_point2_temp, point2, 0, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_crit, temp, THERM, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_crit_hyst, temp, HYSTERSIS, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_smoothing, temp_st, 0, 0);
+static SENSOR_DEVICE_ATTR_2_RO(temp2_input, temp, INPUT, 1);
+static SENSOR_DEVICE_ATTR_2_RO(temp2_alarm, temp, ALARM, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_max, temp, MAX, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_min, temp, MIN, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_offset, temp, OFFSET, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_auto_point1_temp, temp, AUTOMIN, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_auto_point2_temp, point2, 0, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_crit, temp, THERM, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_crit_hyst, temp, HYSTERSIS, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_smoothing, temp_st, 0, 1);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_input, temp, INPUT, 2);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_alarm, temp, ALARM, 2);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_fault, temp, FAULT, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_max, temp, MAX, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_min, temp, MIN, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_offset, temp, OFFSET, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_auto_point1_temp, temp, AUTOMIN, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_auto_point2_temp, point2, 0, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_crit, temp, THERM, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_crit_hyst, temp, HYSTERSIS, 2);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_smoothing, temp_st, 0, 2);
+static SENSOR_DEVICE_ATTR_2_RO(fan1_input, tach, INPUT, 0);
+static SENSOR_DEVICE_ATTR_2_RW(fan1_min, tach, MIN, 0);
+static SENSOR_DEVICE_ATTR_2_RO(fan1_alarm, tach, ALARM, 0);
+static SENSOR_DEVICE_ATTR_2_RO(fan2_input, tach, INPUT, 1);
+static SENSOR_DEVICE_ATTR_2_RW(fan2_min, tach, MIN, 1);
+static SENSOR_DEVICE_ATTR_2_RO(fan2_alarm, tach, ALARM, 1);
+static SENSOR_DEVICE_ATTR_2_RO(fan3_input, tach, INPUT, 2);
+static SENSOR_DEVICE_ATTR_2_RW(fan3_min, tach, MIN, 2);
+static SENSOR_DEVICE_ATTR_2_RO(fan3_alarm, tach, ALARM, 2);
+static SENSOR_DEVICE_ATTR_2_RO(fan4_input, tach, INPUT, 3);
+static SENSOR_DEVICE_ATTR_2_RW(fan4_min, tach, MIN, 3);
+static SENSOR_DEVICE_ATTR_2_RO(fan4_alarm, tach, ALARM, 3);
+static SENSOR_DEVICE_ATTR_2_RW(pwm1, pwm, INPUT, 0);
+static SENSOR_DEVICE_ATTR_2_RW(pwm1_freq, pwmfreq, INPUT, 0);
+static SENSOR_DEVICE_ATTR_2_RW(pwm1_enable, pwmctrl, INPUT, 0);
+static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_channels_temp, pwmchan, INPUT, 0);
+static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_point1_pwm, pwm, MIN, 0);
+static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_point2_pwm, pwm, MAX, 0);
+static SENSOR_DEVICE_ATTR_2_RW(pwm1_stall_disable, stall_disable, 0, 0);
+static SENSOR_DEVICE_ATTR_2_RW(pwm2, pwm, INPUT, 1);
+static SENSOR_DEVICE_ATTR_2_RW(pwm2_freq, pwmfreq, INPUT, 1);
+static SENSOR_DEVICE_ATTR_2_RW(pwm2_enable, pwmctrl, INPUT, 1);
+static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_channels_temp, pwmchan, INPUT, 1);
+static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_point1_pwm, pwm, MIN, 1);
+static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_point2_pwm, pwm, MAX, 1);
+static SENSOR_DEVICE_ATTR_2_RW(pwm2_stall_disable, stall_disable, 0, 1);
+static SENSOR_DEVICE_ATTR_2_RW(pwm3, pwm, INPUT, 2);
+static SENSOR_DEVICE_ATTR_2_RW(pwm3_freq, pwmfreq, INPUT, 2);
+static SENSOR_DEVICE_ATTR_2_RW(pwm3_enable, pwmctrl, INPUT, 2);
+static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_channels_temp, pwmchan, INPUT, 2);
+static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_point1_pwm, pwm, MIN, 2);
+static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_point2_pwm, pwm, MAX, 2);
+static SENSOR_DEVICE_ATTR_2_RW(pwm3_stall_disable, stall_disable, 0, 2);
 
 /* Non-standard name, might need revisiting */
 static DEVICE_ATTR_RW(pwm_use_point2_pwm_at_crit);
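adt7475.c uses the two-parameter attribute type, so its declarations move to the _2_RO/_2_RW variants, which carry a (nr, index) pair instead of a single index. Again a sketch, under the assumption that hwmon-sysfs.h of this era defines them as:

	/* Assumed definitions; check include/linux/hwmon-sysfs.h for the
	 * authoritative versions.
	 */
	#define SENSOR_DEVICE_ATTR_2_RO(_name, _func, _nr, _index) \
		SENSOR_DEVICE_ATTR_2(_name, 0444, _func##_show, NULL, \
				     _nr, _index)
	#define SENSOR_DEVICE_ATTR_2_RW(_name, _func, _nr, _index) \
		SENSOR_DEVICE_ATTR_2(_name, 0644, _func##_show, \
				     _func##_store, _nr, _index)

The handlers recover both values through to_sensor_dev_attr_2(), with sattr->nr selecting the register class (INPUT/MAX/MIN/ALARM) and sattr->index the channel, as in the voltage_show()/voltage_store() pair above. The adt7x10.c and amc6821.c conversions that follow return to the single-index form.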
diff --git a/drivers/hwmon/amc6821.c b/drivers/hwmon/amc6821.c
index 46b4e35fd555..2cc5d3c63a4d 100644
--- a/drivers/hwmon/amc6821.c
+++ b/drivers/hwmon/amc6821.c
@@ -44,10 +44,10 @@ static const unsigned short normal_i2c[] = {0x18, 0x19, 0x1a, 0x2c, 0x2d, 0x2e,
  */
 
 static int pwminv;	/*Inverted PWM output. */
-module_param(pwminv, int, S_IRUGO);
+module_param(pwminv, int, 0444);
 
 static int init = 1; /*Power-on initialization.*/
-module_param(init, int, S_IRUGO);
+module_param(init, int, 0444);
 
 enum chips { amc6821 };
 
@@ -277,10 +277,8 @@ static struct amc6821_data *amc6821_update_device(struct device *dev)
 	return data;
 }
 
-static ssize_t get_temp(
-		struct device *dev,
-		struct device_attribute *devattr,
-		char *buf)
+static ssize_t temp_show(struct device *dev, struct device_attribute *devattr,
+			 char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	int ix = to_sensor_dev_attr(devattr)->index;
@@ -288,11 +286,8 @@ static ssize_t get_temp(
 	return sprintf(buf, "%d\n", data->temp[ix] * 1000);
 }
 
-static ssize_t set_temp(
-		struct device *dev,
-		struct device_attribute *attr,
-		const char *buf,
-		size_t count)
+static ssize_t temp_store(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
 {
 	struct amc6821_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -314,10 +309,8 @@ static ssize_t set_temp(
 	return count;
 }
 
-static ssize_t get_temp_alarm(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t temp_alarm_show(struct device *dev,
+			       struct device_attribute *devattr, char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	int ix = to_sensor_dev_attr(devattr)->index;
@@ -352,10 +345,8 @@ static ssize_t get_temp_alarm(
 	return sprintf(buf, "0");
 }
 
-static ssize_t get_temp2_fault(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t temp2_fault_show(struct device *dev,
+				struct device_attribute *devattr, char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	if (data->stat1 & AMC6821_STAT1_RTF)
@@ -364,20 +355,16 @@ static ssize_t get_temp2_fault(
 	return sprintf(buf, "0");
 }
 
-static ssize_t get_pwm1(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t pwm1_show(struct device *dev, struct device_attribute *devattr,
+			 char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	return sprintf(buf, "%d\n", data->pwm1);
 }
 
-static ssize_t set_pwm1(
-	struct device *dev,
-	struct device_attribute *devattr,
-	const char *buf,
-	size_t count)
+static ssize_t pwm1_store(struct device *dev,
+			  struct device_attribute *devattr, const char *buf,
+			  size_t count)
 {
 	struct amc6821_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -393,20 +380,16 @@ static ssize_t set_pwm1(
 	return count;
 }
 
-static ssize_t get_pwm1_enable(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t pwm1_enable_show(struct device *dev,
+				struct device_attribute *devattr, char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	return sprintf(buf, "%d\n", data->pwm1_enable);
 }
 
-static ssize_t set_pwm1_enable(
-	struct device *dev,
-	struct device_attribute *attr,
-	const char *buf,
-	size_t count)
+static ssize_t pwm1_enable_store(struct device *dev,
+				 struct device_attribute *attr,
+				 const char *buf, size_t count)
 {
 	struct amc6821_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -451,19 +434,17 @@ unlock:
 	return count;
 }
 
-static ssize_t get_pwm1_auto_channels_temp(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t pwm1_auto_channels_temp_show(struct device *dev,
+					    struct device_attribute *devattr,
+					    char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	return sprintf(buf, "%d\n", data->pwm1_auto_channels_temp);
 }
 
-static ssize_t get_temp_auto_point_temp(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t temp_auto_point_temp_show(struct device *dev,
+					 struct device_attribute *devattr,
+					 char *buf)
 {
 	int ix = to_sensor_dev_attr_2(devattr)->index;
 	int nr = to_sensor_dev_attr_2(devattr)->nr;
@@ -481,10 +462,9 @@ static ssize_t get_temp_auto_point_temp(
 	}
 }
 
-static ssize_t get_pwm1_auto_point_pwm(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t pwm1_auto_point_pwm_show(struct device *dev,
+					struct device_attribute *devattr,
+					char *buf)
 {
 	int ix = to_sensor_dev_attr(devattr)->index;
 	struct amc6821_data *data = amc6821_update_device(dev);
@@ -513,11 +493,9 @@ static inline ssize_t set_slope_register(struct i2c_client *client,
 	return 0;
 }
 
-static ssize_t set_temp_auto_point_temp(
-	struct device *dev,
-	struct device_attribute *attr,
-	const char *buf,
-	size_t count)
+static ssize_t temp_auto_point_temp_store(struct device *dev,
+					  struct device_attribute *attr,
+					  const char *buf, size_t count)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	struct i2c_client *client = data->client;
@@ -586,11 +564,9 @@ EXIT:
 	return count;
 }
 
-static ssize_t set_pwm1_auto_point_pwm(
-	struct device *dev,
-	struct device_attribute *attr,
-	const char *buf,
-	size_t count)
+static ssize_t pwm1_auto_point_pwm_store(struct device *dev,
+					 struct device_attribute *attr,
+					 const char *buf, size_t count)
 {
 	struct amc6821_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -626,10 +602,8 @@ EXIT:
 	return count;
 }
 
-static ssize_t get_fan(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t fan_show(struct device *dev, struct device_attribute *devattr,
+			char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	int ix = to_sensor_dev_attr(devattr)->index;
@@ -638,10 +612,8 @@ static ssize_t get_fan(
 	return sprintf(buf, "%d\n", (int)(6000000 / data->fan[ix]));
 }
 
-static ssize_t get_fan1_fault(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t fan1_fault_show(struct device *dev,
+			       struct device_attribute *devattr, char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	if (data->stat1 & AMC6821_STAT1_FANS)
@@ -650,10 +622,8 @@ static ssize_t get_fan1_fault(
 	return sprintf(buf, "0");
 }
 
-static ssize_t set_fan(
-	struct device *dev,
-	struct device_attribute *attr,
-	const char *buf, size_t count)
+static ssize_t fan_store(struct device *dev, struct device_attribute *attr,
+			 const char *buf, size_t count)
 {
 	struct amc6821_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -682,19 +652,16 @@ EXIT:
 	return count;
 }
 
-static ssize_t get_fan1_div(
-	struct device *dev,
-	struct device_attribute *devattr,
-	char *buf)
+static ssize_t fan1_div_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	struct amc6821_data *data = amc6821_update_device(dev);
 	return sprintf(buf, "%d\n", data->fan1_div);
 }
 
-static ssize_t set_fan1_div(
-	struct device *dev,
-	struct device_attribute *attr,
-	const char *buf, size_t count)
+static ssize_t fan1_div_store(struct device *dev,
+			      struct device_attribute *attr, const char *buf,
+			      size_t count)
 {
 	struct amc6821_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -734,69 +701,47 @@ EXIT:
 	return count;
 }
 
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO,
-	get_temp, NULL, IDX_TEMP1_INPUT);
-static SENSOR_DEVICE_ATTR(temp1_min, S_IRUGO | S_IWUSR, get_temp,
-	set_temp, IDX_TEMP1_MIN);
-static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO | S_IWUSR, get_temp,
-	set_temp, IDX_TEMP1_MAX);
-static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO | S_IWUSR, get_temp,
-	set_temp, IDX_TEMP1_CRIT);
-static SENSOR_DEVICE_ATTR(temp1_min_alarm, S_IRUGO,
-	get_temp_alarm, NULL, IDX_TEMP1_MIN);
-static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO,
-	get_temp_alarm, NULL, IDX_TEMP1_MAX);
-static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO,
-	get_temp_alarm, NULL, IDX_TEMP1_CRIT);
-static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO,
-	get_temp, NULL, IDX_TEMP2_INPUT);
-static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp,
-	set_temp, IDX_TEMP2_MIN);
-static SENSOR_DEVICE_ATTR(temp2_max, S_IRUGO | S_IWUSR, get_temp,
-	set_temp, IDX_TEMP2_MAX);
-static SENSOR_DEVICE_ATTR(temp2_crit, S_IRUGO | S_IWUSR, get_temp,
-	set_temp, IDX_TEMP2_CRIT);
-static SENSOR_DEVICE_ATTR(temp2_fault, S_IRUGO,
-	get_temp2_fault, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp2_min_alarm, S_IRUGO,
-	get_temp_alarm, NULL, IDX_TEMP2_MIN);
-static SENSOR_DEVICE_ATTR(temp2_max_alarm, S_IRUGO,
-	get_temp_alarm, NULL, IDX_TEMP2_MAX);
-static SENSOR_DEVICE_ATTR(temp2_crit_alarm, S_IRUGO,
-	get_temp_alarm, NULL, IDX_TEMP2_CRIT);
-static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, get_fan, NULL, IDX_FAN1_INPUT);
-static SENSOR_DEVICE_ATTR(fan1_min, S_IRUGO | S_IWUSR,
-	get_fan, set_fan, IDX_FAN1_MIN);
-static SENSOR_DEVICE_ATTR(fan1_max, S_IRUGO | S_IWUSR,
-	get_fan, set_fan, IDX_FAN1_MAX);
-static SENSOR_DEVICE_ATTR(fan1_fault, S_IRUGO, get_fan1_fault, NULL, 0);
-static SENSOR_DEVICE_ATTR(fan1_div, S_IRUGO | S_IWUSR,
-	get_fan1_div, set_fan1_div, 0);
-
-static SENSOR_DEVICE_ATTR(pwm1, S_IWUSR | S_IRUGO, get_pwm1, set_pwm1, 0);
-static SENSOR_DEVICE_ATTR(pwm1_enable, S_IWUSR | S_IRUGO,
-	get_pwm1_enable, set_pwm1_enable, 0);
-static SENSOR_DEVICE_ATTR(pwm1_auto_point1_pwm, S_IRUGO,
-	get_pwm1_auto_point_pwm, NULL, 0);
-static SENSOR_DEVICE_ATTR(pwm1_auto_point2_pwm, S_IWUSR | S_IRUGO,
-	get_pwm1_auto_point_pwm, set_pwm1_auto_point_pwm, 1);
-static SENSOR_DEVICE_ATTR(pwm1_auto_point3_pwm, S_IRUGO,
-	get_pwm1_auto_point_pwm, NULL, 2);
-static SENSOR_DEVICE_ATTR(pwm1_auto_channels_temp, S_IRUGO,
-	get_pwm1_auto_channels_temp, NULL, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_auto_point1_temp, S_IRUGO,
-	get_temp_auto_point_temp, NULL, 1, 0);
-static SENSOR_DEVICE_ATTR_2(temp1_auto_point2_temp, S_IWUSR | S_IRUGO,
-	get_temp_auto_point_temp, set_temp_auto_point_temp, 1, 1);
-static SENSOR_DEVICE_ATTR_2(temp1_auto_point3_temp, S_IWUSR | S_IRUGO,
-	get_temp_auto_point_temp, set_temp_auto_point_temp, 1, 2);
-
-static SENSOR_DEVICE_ATTR_2(temp2_auto_point1_temp, S_IWUSR | S_IRUGO,
-	get_temp_auto_point_temp, set_temp_auto_point_temp, 2, 0);
-static SENSOR_DEVICE_ATTR_2(temp2_auto_point2_temp, S_IWUSR | S_IRUGO,
-	get_temp_auto_point_temp, set_temp_auto_point_temp, 2, 1);
-static SENSOR_DEVICE_ATTR_2(temp2_auto_point3_temp, S_IWUSR | S_IRUGO,
-	get_temp_auto_point_temp, set_temp_auto_point_temp, 2, 2);
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, IDX_TEMP1_INPUT);
+static SENSOR_DEVICE_ATTR_RW(temp1_min, temp, IDX_TEMP1_MIN);
+static SENSOR_DEVICE_ATTR_RW(temp1_max, temp, IDX_TEMP1_MAX);
+static SENSOR_DEVICE_ATTR_RW(temp1_crit, temp, IDX_TEMP1_CRIT);
+static SENSOR_DEVICE_ATTR_RO(temp1_min_alarm, temp_alarm, IDX_TEMP1_MIN);
+static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, temp_alarm, IDX_TEMP1_MAX);
+static SENSOR_DEVICE_ATTR_RO(temp1_crit_alarm, temp_alarm, IDX_TEMP1_CRIT);
+static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, IDX_TEMP2_INPUT);
+static SENSOR_DEVICE_ATTR_RW(temp2_min, temp, IDX_TEMP2_MIN);
+static SENSOR_DEVICE_ATTR_RW(temp2_max, temp, IDX_TEMP2_MAX);
+static SENSOR_DEVICE_ATTR_RW(temp2_crit, temp, IDX_TEMP2_CRIT);
+static SENSOR_DEVICE_ATTR_RO(temp2_fault, temp2_fault, 0);
+static SENSOR_DEVICE_ATTR_RO(temp2_min_alarm, temp_alarm, IDX_TEMP2_MIN);
+static SENSOR_DEVICE_ATTR_RO(temp2_max_alarm, temp_alarm, IDX_TEMP2_MAX);
+static SENSOR_DEVICE_ATTR_RO(temp2_crit_alarm, temp_alarm, IDX_TEMP2_CRIT);
+static SENSOR_DEVICE_ATTR_RO(fan1_input, fan, IDX_FAN1_INPUT);
+static SENSOR_DEVICE_ATTR_RW(fan1_min, fan, IDX_FAN1_MIN);
+static SENSOR_DEVICE_ATTR_RW(fan1_max, fan, IDX_FAN1_MAX);
+static SENSOR_DEVICE_ATTR_RO(fan1_fault, fan1_fault, 0);
+static SENSOR_DEVICE_ATTR_RW(fan1_div, fan1_div, 0);
+
+static SENSOR_DEVICE_ATTR_RW(pwm1, pwm1, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm1_enable, pwm1_enable, 0);
+static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point1_pwm, pwm1_auto_point_pwm, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point2_pwm, pwm1_auto_point_pwm, 1);
+static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point3_pwm, pwm1_auto_point_pwm, 2);
+static SENSOR_DEVICE_ATTR_RO(pwm1_auto_channels_temp, pwm1_auto_channels_temp,
+			     0);
+static SENSOR_DEVICE_ATTR_2_RO(temp1_auto_point1_temp, temp_auto_point_temp,
+			       1, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_auto_point2_temp, temp_auto_point_temp,
+			       1, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_auto_point3_temp, temp_auto_point_temp,
+			       1, 2);
+
+static SENSOR_DEVICE_ATTR_2_RW(temp2_auto_point1_temp, temp_auto_point_temp,
+			       2, 0);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_auto_point2_temp, temp_auto_point_temp,
+			       2, 1);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_auto_point3_temp, temp_auto_point_temp,
+			       2, 2);
 
 static struct attribute *amc6821_attrs[] = {
 	&sensor_dev_attr_temp1_input.dev_attr.attr,
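amc6821 also uses the two-parameter flavour: SENSOR_DEVICE_ATTR_2_RO/_RW(name, func, nr, index) carries a (nr, index) pair that the shared callback recovers through to_sensor_dev_attr_2(), exactly as temp_auto_point_temp_show() above does. A sketch with a hypothetical demo2 prefix:

	static ssize_t demo2_show(struct device *dev,
				  struct device_attribute *da, char *buf)
	{
		int nr = to_sensor_dev_attr_2(da)->nr;		/* e.g. channel */
		int ix = to_sensor_dev_attr_2(da)->index;	/* e.g. trip point */

		return sprintf(buf, "%d:%d\n", nr, ix);
	}
	static SENSOR_DEVICE_ATTR_2_RO(temp1_auto_point1_temp, demo2, 1, 0);

One callback pair can therefore serve every (channel, point) combination.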
diff --git a/drivers/hwmon/applesmc.c b/drivers/hwmon/applesmc.c
index 5c677ba44014..a24e8fa7fba8 100644
--- a/drivers/hwmon/applesmc.c
+++ b/drivers/hwmon/applesmc.c
@@ -1128,7 +1128,7 @@ static int applesmc_create_nodes(struct applesmc_node_group *groups, int num)
 		attr = &node->sda.dev_attr.attr;
 		sysfs_attr_init(attr);
 		attr->name = node->name;
-		attr->mode = S_IRUGO | (grp->store ? S_IWUSR : 0);
+		attr->mode = 0444 | (grp->store ? 0200 : 0);
 		ret = sysfs_create_file(&pdev->dev.kobj, attr);
 		if (ret) {
 			attr->name = NULL;
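The applesmc hunk shows the same symbolic-to-octal conversion applied to a mode computed at runtime. The equivalences this series depends on are fixed by definition, so the rewrite is behaviour-preserving:

	/* 0444 == S_IRUGO             (r--r--r--)
	 * 0200 == S_IWUSR             (-w-------)
	 * 0644 == S_IRUGO | S_IWUSR   (rw-r--r--)
	 * 0600 == S_IRUSR | S_IWUSR   (rw-------)
	 * 0400 == S_IRUSR             (r--------)
	 */

checkpatch.pl warns about symbolic permissions precisely because the octal form is easier to audit at a glance.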
diff --git a/drivers/hwmon/aspeed-pwm-tacho.c b/drivers/hwmon/aspeed-pwm-tacho.c
index 92de8139d398..c4dd6301e7c8 100644
--- a/drivers/hwmon/aspeed-pwm-tacho.c
+++ b/drivers/hwmon/aspeed-pwm-tacho.c
@@ -570,8 +570,8 @@ static int aspeed_get_fan_tach_ch_rpm(struct aspeed_pwm_tacho_data *priv,
 	return (clk_source * 60) / (2 * raw_data * tach_div);
 }
 
-static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
-		       const char *buf, size_t count)
+static ssize_t pwm_store(struct device *dev, struct device_attribute *attr,
+			 const char *buf, size_t count)
 {
 	struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
 	int index = sensor_attr->index;
@@ -595,7 +595,7 @@ static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
 	return count;
 }
 
-static ssize_t show_pwm(struct device *dev, struct device_attribute *attr,
+static ssize_t pwm_show(struct device *dev, struct device_attribute *attr,
 			char *buf)
 {
 	struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
@@ -605,7 +605,7 @@ static ssize_t show_pwm(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "%u\n", priv->pwm_port_fan_ctrl[index]);
 }
 
-static ssize_t show_rpm(struct device *dev, struct device_attribute *attr,
+static ssize_t rpm_show(struct device *dev, struct device_attribute *attr,
 			char *buf)
 {
 	struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
@@ -642,22 +642,14 @@ static umode_t fan_dev_is_visible(struct kobject *kobj,
 	return a->mode;
 }
 
-static SENSOR_DEVICE_ATTR(pwm1, 0644,
-			show_pwm, set_pwm, 0);
-static SENSOR_DEVICE_ATTR(pwm2, 0644,
-			show_pwm, set_pwm, 1);
-static SENSOR_DEVICE_ATTR(pwm3, 0644,
-			show_pwm, set_pwm, 2);
-static SENSOR_DEVICE_ATTR(pwm4, 0644,
-			show_pwm, set_pwm, 3);
-static SENSOR_DEVICE_ATTR(pwm5, 0644,
-			show_pwm, set_pwm, 4);
-static SENSOR_DEVICE_ATTR(pwm6, 0644,
-			show_pwm, set_pwm, 5);
-static SENSOR_DEVICE_ATTR(pwm7, 0644,
-			show_pwm, set_pwm, 6);
-static SENSOR_DEVICE_ATTR(pwm8, 0644,
-			show_pwm, set_pwm, 7);
+static SENSOR_DEVICE_ATTR_RW(pwm1, pwm, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm2, pwm, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm3, pwm, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm4, pwm, 3);
+static SENSOR_DEVICE_ATTR_RW(pwm5, pwm, 4);
+static SENSOR_DEVICE_ATTR_RW(pwm6, pwm, 5);
+static SENSOR_DEVICE_ATTR_RW(pwm7, pwm, 6);
+static SENSOR_DEVICE_ATTR_RW(pwm8, pwm, 7);
 static struct attribute *pwm_dev_attrs[] = {
 	&sensor_dev_attr_pwm1.dev_attr.attr,
 	&sensor_dev_attr_pwm2.dev_attr.attr,
@@ -675,38 +667,22 @@ static const struct attribute_group pwm_dev_group = {
 	.is_visible = pwm_is_visible,
 };
 
-static SENSOR_DEVICE_ATTR(fan1_input, 0444,
-		show_rpm, NULL, 0);
-static SENSOR_DEVICE_ATTR(fan2_input, 0444,
-		show_rpm, NULL, 1);
-static SENSOR_DEVICE_ATTR(fan3_input, 0444,
-		show_rpm, NULL, 2);
-static SENSOR_DEVICE_ATTR(fan4_input, 0444,
-		show_rpm, NULL, 3);
-static SENSOR_DEVICE_ATTR(fan5_input, 0444,
-		show_rpm, NULL, 4);
-static SENSOR_DEVICE_ATTR(fan6_input, 0444,
-		show_rpm, NULL, 5);
-static SENSOR_DEVICE_ATTR(fan7_input, 0444,
-		show_rpm, NULL, 6);
-static SENSOR_DEVICE_ATTR(fan8_input, 0444,
-		show_rpm, NULL, 7);
-static SENSOR_DEVICE_ATTR(fan9_input, 0444,
-		show_rpm, NULL, 8);
-static SENSOR_DEVICE_ATTR(fan10_input, 0444,
-		show_rpm, NULL, 9);
-static SENSOR_DEVICE_ATTR(fan11_input, 0444,
-		show_rpm, NULL, 10);
-static SENSOR_DEVICE_ATTR(fan12_input, 0444,
-		show_rpm, NULL, 11);
-static SENSOR_DEVICE_ATTR(fan13_input, 0444,
-		show_rpm, NULL, 12);
-static SENSOR_DEVICE_ATTR(fan14_input, 0444,
-		show_rpm, NULL, 13);
-static SENSOR_DEVICE_ATTR(fan15_input, 0444,
-		show_rpm, NULL, 14);
-static SENSOR_DEVICE_ATTR(fan16_input, 0444,
-		show_rpm, NULL, 15);
+static SENSOR_DEVICE_ATTR_RO(fan1_input, rpm, 0);
+static SENSOR_DEVICE_ATTR_RO(fan2_input, rpm, 1);
+static SENSOR_DEVICE_ATTR_RO(fan3_input, rpm, 2);
+static SENSOR_DEVICE_ATTR_RO(fan4_input, rpm, 3);
+static SENSOR_DEVICE_ATTR_RO(fan5_input, rpm, 4);
+static SENSOR_DEVICE_ATTR_RO(fan6_input, rpm, 5);
+static SENSOR_DEVICE_ATTR_RO(fan7_input, rpm, 6);
+static SENSOR_DEVICE_ATTR_RO(fan8_input, rpm, 7);
+static SENSOR_DEVICE_ATTR_RO(fan9_input, rpm, 8);
+static SENSOR_DEVICE_ATTR_RO(fan10_input, rpm, 9);
+static SENSOR_DEVICE_ATTR_RO(fan11_input, rpm, 10);
+static SENSOR_DEVICE_ATTR_RO(fan12_input, rpm, 11);
+static SENSOR_DEVICE_ATTR_RO(fan13_input, rpm, 12);
+static SENSOR_DEVICE_ATTR_RO(fan14_input, rpm, 13);
+static SENSOR_DEVICE_ATTR_RO(fan15_input, rpm, 14);
+static SENSOR_DEVICE_ATTR_RO(fan16_input, rpm, 15);
 static struct attribute *fan_dev_attrs[] = {
 	&sensor_dev_attr_fan1_input.dev_attr.attr,
 	&sensor_dev_attr_fan2_input.dev_attr.attr,
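Note what the conversion does not touch: the attribute tables. SENSOR_DEVICE_ATTR_RW(pwm1, pwm, 0) still defines a variable named sensor_dev_attr_pwm1, so arrays like pwm_dev_attrs[] keep working unchanged. Schematically, following the group layout visible in this driver:

	static struct attribute *pwm_dev_attrs[] = {
		&sensor_dev_attr_pwm1.dev_attr.attr,
		/* ... */
		NULL				/* array must be NULL-terminated */
	};
	static const struct attribute_group pwm_dev_group = {
		.attrs = pwm_dev_attrs,
		.is_visible = pwm_is_visible,	/* can hide or re-mode entries */
	};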
diff --git a/drivers/hwmon/asus_atk0110.c b/drivers/hwmon/asus_atk0110.c
index a7cf00885c5d..22be78cc5a4c 100644
--- a/drivers/hwmon/asus_atk0110.c
+++ b/drivers/hwmon/asus_atk0110.c
@@ -681,10 +681,8 @@ static int atk_debugfs_gitm_get(void *p, u64 *val)
 	return err;
 }
 
-DEFINE_SIMPLE_ATTRIBUTE(atk_debugfs_gitm,
-			atk_debugfs_gitm_get,
-			NULL,
-			"0x%08llx\n");
+DEFINE_DEBUGFS_ATTRIBUTE(atk_debugfs_gitm, atk_debugfs_gitm_get, NULL,
+			 "0x%08llx\n");
 
 static int atk_acpi_print(char *buf, size_t sz, union acpi_object *obj)
 {
@@ -799,17 +797,17 @@ static void atk_debugfs_init(struct atk_data *data)
 	if (!d || IS_ERR(d))
 		return;
 
-	f = debugfs_create_x32("id", S_IRUSR | S_IWUSR, d, &data->debugfs.id);
+	f = debugfs_create_x32("id", 0600, d, &data->debugfs.id);
 	if (!f || IS_ERR(f))
 		goto cleanup;
 
-	f = debugfs_create_file("gitm", S_IRUSR, d, data,
-				&atk_debugfs_gitm);
+	f = debugfs_create_file_unsafe("gitm", 0400, d, data,
+				       &atk_debugfs_gitm);
 	if (!f || IS_ERR(f))
 		goto cleanup;
 
-	f = debugfs_create_file("ggrp", S_IRUSR, d, data,
-				&atk_debugfs_ggrp_fops);
+	f = debugfs_create_file("ggrp", 0400, d, data,
+				&atk_debugfs_ggrp_fops);
 	if (!f || IS_ERR(f))
 		goto cleanup;
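The asus_atk0110 change is the one non-rename in this batch. Fops built with DEFINE_SIMPLE_ATTRIBUTE() and registered through debugfs_create_file() get wrapped in a proxy that protects against removal races; DEFINE_DEBUGFS_ATTRIBUTE() generates fops that take that protection themselves (via debugfs_file_get/put) and is meant to be paired with debugfs_create_file_unsafe(), which skips the proxy. A sketch, assuming a trivial getter:

	static int demo_get(void *data, u64 *val)
	{
		*val = 0;		/* read the hardware here */
		return 0;
	}
	DEFINE_DEBUGFS_ATTRIBUTE(demo_fops, demo_get, NULL, "0x%08llx\n");

	/* at init time, in place of debugfs_create_file(): */
	debugfs_create_file_unsafe("gitm", 0400, parent, data, &demo_fops);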
diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
index 10645c9bb7be..5d34f7271e67 100644
--- a/drivers/hwmon/coretemp.c
+++ b/drivers/hwmon/coretemp.c
@@ -407,7 +407,7 @@ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
 			 "temp%d_%s", attr_no, suffixes[i]);
 		sysfs_attr_init(&tdata->sd_attrs[i].dev_attr.attr);
 		tdata->sd_attrs[i].dev_attr.attr.name = tdata->attr_name[i];
-		tdata->sd_attrs[i].dev_attr.attr.mode = S_IRUGO;
+		tdata->sd_attrs[i].dev_attr.attr.mode = 0444;
 		tdata->sd_attrs[i].dev_attr.show = rd_ptr[i];
 		tdata->sd_attrs[i].index = attr_no;
 		tdata->attrs[i] = &tdata->sd_attrs[i].dev_attr.attr;
diff --git a/drivers/hwmon/da9052-hwmon.c b/drivers/hwmon/da9052-hwmon.c
index a973eb6a2890..8ec5bf4ce392 100644
--- a/drivers/hwmon/da9052-hwmon.c
+++ b/drivers/hwmon/da9052-hwmon.c
@@ -87,7 +87,7 @@ static inline int da9052_disable_vddout_channel(struct da9052 *da9052)
 				  DA9052_ADCCONT_AUTOVDDEN, 0);
 }
 
-static ssize_t da9052_read_vddout(struct device *dev,
+static ssize_t da9052_vddout_show(struct device *dev,
 				  struct device_attribute *devattr, char *buf)
 {
 	struct da9052_hwmon *hwmon = dev_get_drvdata(dev);
@@ -119,7 +119,7 @@ hwmon_err:
 	return ret;
 }
 
-static ssize_t da9052_read_ich(struct device *dev,
+static ssize_t da9052_ich_show(struct device *dev,
 			       struct device_attribute *devattr, char *buf)
 {
 	struct da9052_hwmon *hwmon = dev_get_drvdata(dev);
@@ -133,7 +133,7 @@ static ssize_t da9052_read_ich(struct device *dev,
 	return sprintf(buf, "%d\n", DIV_ROUND_CLOSEST(ret * 39, 10));
 }
 
-static ssize_t da9052_read_tbat(struct device *dev,
+static ssize_t da9052_tbat_show(struct device *dev,
 				struct device_attribute *devattr, char *buf)
 {
 	struct da9052_hwmon *hwmon = dev_get_drvdata(dev);
@@ -141,7 +141,7 @@ static ssize_t da9052_read_tbat(struct device *dev,
 	return sprintf(buf, "%d\n", da9052_adc_read_temp(hwmon->da9052));
 }
 
-static ssize_t da9052_read_vbat(struct device *dev,
+static ssize_t da9052_vbat_show(struct device *dev,
 				struct device_attribute *devattr, char *buf)
 {
 	struct da9052_hwmon *hwmon = dev_get_drvdata(dev);
@@ -154,7 +154,7 @@ static ssize_t da9052_read_vbat(struct device *dev,
 	return sprintf(buf, "%d\n", volt_reg_to_mv(ret));
 }
 
-static ssize_t da9052_read_misc_channel(struct device *dev,
+static ssize_t da9052_misc_channel_show(struct device *dev,
 					struct device_attribute *devattr,
 					char *buf)
 {
@@ -242,9 +242,8 @@ static ssize_t __da9052_read_tsi(struct device *dev, int channel)
 	return da9052_get_tsi_result(hwmon, channel);
 }
 
-static ssize_t da9052_read_tsi(struct device *dev,
-			       struct device_attribute *devattr,
-			       char *buf)
+static ssize_t da9052_tsi_show(struct device *dev,
+			       struct device_attribute *devattr, char *buf)
 {
 	struct da9052_hwmon *hwmon = dev_get_drvdata(dev);
 	int channel = to_sensor_dev_attr(devattr)->index;
@@ -260,7 +259,7 @@ static ssize_t da9052_read_tsi(struct device *dev,
 	return sprintf(buf, "%d\n", input_tsireg_to_mv(hwmon, ret));
 }
 
-static ssize_t da9052_read_tjunc(struct device *dev,
+static ssize_t da9052_tjunc_show(struct device *dev,
 				 struct device_attribute *devattr, char *buf)
 {
 	struct da9052_hwmon *hwmon = dev_get_drvdata(dev);
@@ -282,7 +281,7 @@ static ssize_t da9052_read_tjunc(struct device *dev,
 	return sprintf(buf, "%d\n", 1708 * (tjunc - toffset) - 108800);
 }
 
-static ssize_t da9052_read_vbbat(struct device *dev,
+static ssize_t da9052_vbbat_show(struct device *dev,
 				 struct device_attribute *devattr, char *buf)
 {
 	struct da9052_hwmon *hwmon = dev_get_drvdata(dev);
@@ -295,7 +294,7 @@ static ssize_t da9052_read_vbbat(struct device *dev,
 	return sprintf(buf, "%d\n", vbbat_reg_to_mv(ret));
 }
 
-static ssize_t show_label(struct device *dev,
+static ssize_t label_show(struct device *dev,
 			  struct device_attribute *devattr, char *buf)
 {
 	return sprintf(buf, "%s\n",
@@ -324,61 +323,35 @@ static umode_t da9052_channel_is_visible(struct kobject *kobj,
 	return attr->mode;
 }
 
-static SENSOR_DEVICE_ATTR(in0_input, 0444, da9052_read_vddout, NULL,
-			  DA9052_ADC_VDDOUT);
-static SENSOR_DEVICE_ATTR(in0_label, 0444, show_label, NULL,
-			  DA9052_ADC_VDDOUT);
-static SENSOR_DEVICE_ATTR(in3_input, 0444, da9052_read_vbat, NULL,
-			  DA9052_ADC_VBAT);
-static SENSOR_DEVICE_ATTR(in3_label, 0444, show_label, NULL,
-			  DA9052_ADC_VBAT);
-static SENSOR_DEVICE_ATTR(in4_input, 0444, da9052_read_misc_channel, NULL,
-			  DA9052_ADC_IN4);
-static SENSOR_DEVICE_ATTR(in4_label, 0444, show_label, NULL,
-			  DA9052_ADC_IN4);
-static SENSOR_DEVICE_ATTR(in5_input, 0444, da9052_read_misc_channel, NULL,
-			  DA9052_ADC_IN5);
-static SENSOR_DEVICE_ATTR(in5_label, 0444, show_label, NULL,
-			  DA9052_ADC_IN5);
-static SENSOR_DEVICE_ATTR(in6_input, 0444, da9052_read_misc_channel, NULL,
-			  DA9052_ADC_IN6);
-static SENSOR_DEVICE_ATTR(in6_label, 0444, show_label, NULL,
-			  DA9052_ADC_IN6);
-static SENSOR_DEVICE_ATTR(in9_input, 0444, da9052_read_vbbat, NULL,
-			  DA9052_ADC_VBBAT);
-static SENSOR_DEVICE_ATTR(in9_label, 0444, show_label, NULL,
-			  DA9052_ADC_VBBAT);
-
-static SENSOR_DEVICE_ATTR(in70_input, 0444, da9052_read_tsi, NULL,
-			  DA9052_ADC_TSI_XP);
-static SENSOR_DEVICE_ATTR(in70_label, 0444, show_label, NULL,
-			  DA9052_ADC_TSI_XP);
-static SENSOR_DEVICE_ATTR(in71_input, 0444, da9052_read_tsi, NULL,
-			  DA9052_ADC_TSI_XN);
-static SENSOR_DEVICE_ATTR(in71_label, 0444, show_label, NULL,
-			  DA9052_ADC_TSI_XN);
-static SENSOR_DEVICE_ATTR(in72_input, 0444, da9052_read_tsi, NULL,
-			  DA9052_ADC_TSI_YP);
-static SENSOR_DEVICE_ATTR(in72_label, 0444, show_label, NULL,
-			  DA9052_ADC_TSI_YP);
-static SENSOR_DEVICE_ATTR(in73_input, 0444, da9052_read_tsi, NULL,
-			  DA9052_ADC_TSI_YN);
-static SENSOR_DEVICE_ATTR(in73_label, 0444, show_label, NULL,
-			  DA9052_ADC_TSI_YN);
-
-static SENSOR_DEVICE_ATTR(curr1_input, 0444, da9052_read_ich, NULL,
-			  DA9052_ADC_ICH);
-static SENSOR_DEVICE_ATTR(curr1_label, 0444, show_label, NULL,
-			  DA9052_ADC_ICH);
-
-static SENSOR_DEVICE_ATTR(temp2_input, 0444, da9052_read_tbat, NULL,
-			  DA9052_ADC_TBAT);
-static SENSOR_DEVICE_ATTR(temp2_label, 0444, show_label, NULL,
-			  DA9052_ADC_TBAT);
-static SENSOR_DEVICE_ATTR(temp8_input, 0444, da9052_read_tjunc, NULL,
-			  DA9052_ADC_TJUNC);
-static SENSOR_DEVICE_ATTR(temp8_label, 0444, show_label, NULL,
-			  DA9052_ADC_TJUNC);
+static SENSOR_DEVICE_ATTR_RO(in0_input, da9052_vddout, DA9052_ADC_VDDOUT);
+static SENSOR_DEVICE_ATTR_RO(in0_label, label, DA9052_ADC_VDDOUT);
+static SENSOR_DEVICE_ATTR_RO(in3_input, da9052_vbat, DA9052_ADC_VBAT);
+static SENSOR_DEVICE_ATTR_RO(in3_label, label, DA9052_ADC_VBAT);
+static SENSOR_DEVICE_ATTR_RO(in4_input, da9052_misc_channel, DA9052_ADC_IN4);
+static SENSOR_DEVICE_ATTR_RO(in4_label, label, DA9052_ADC_IN4);
+static SENSOR_DEVICE_ATTR_RO(in5_input, da9052_misc_channel, DA9052_ADC_IN5);
+static SENSOR_DEVICE_ATTR_RO(in5_label, label, DA9052_ADC_IN5);
+static SENSOR_DEVICE_ATTR_RO(in6_input, da9052_misc_channel, DA9052_ADC_IN6);
+static SENSOR_DEVICE_ATTR_RO(in6_label, label, DA9052_ADC_IN6);
+static SENSOR_DEVICE_ATTR_RO(in9_input, da9052_vbbat, DA9052_ADC_VBBAT);
+static SENSOR_DEVICE_ATTR_RO(in9_label, label, DA9052_ADC_VBBAT);
+
+static SENSOR_DEVICE_ATTR_RO(in70_input, da9052_tsi, DA9052_ADC_TSI_XP);
+static SENSOR_DEVICE_ATTR_RO(in70_label, label, DA9052_ADC_TSI_XP);
+static SENSOR_DEVICE_ATTR_RO(in71_input, da9052_tsi, DA9052_ADC_TSI_XN);
+static SENSOR_DEVICE_ATTR_RO(in71_label, label, DA9052_ADC_TSI_XN);
+static SENSOR_DEVICE_ATTR_RO(in72_input, da9052_tsi, DA9052_ADC_TSI_YP);
+static SENSOR_DEVICE_ATTR_RO(in72_label, label, DA9052_ADC_TSI_YP);
+static SENSOR_DEVICE_ATTR_RO(in73_input, da9052_tsi, DA9052_ADC_TSI_YN);
+static SENSOR_DEVICE_ATTR_RO(in73_label, label, DA9052_ADC_TSI_YN);
+
+static SENSOR_DEVICE_ATTR_RO(curr1_input, da9052_ich, DA9052_ADC_ICH);
+static SENSOR_DEVICE_ATTR_RO(curr1_label, label, DA9052_ADC_ICH);
+
+static SENSOR_DEVICE_ATTR_RO(temp2_input, da9052_tbat, DA9052_ADC_TBAT);
+static SENSOR_DEVICE_ATTR_RO(temp2_label, label, DA9052_ADC_TBAT);
+static SENSOR_DEVICE_ATTR_RO(temp8_input, da9052_tjunc, DA9052_ADC_TJUNC);
+static SENSOR_DEVICE_ATTR_RO(temp8_label, label, DA9052_ADC_TJUNC);
 
 static struct attribute *da9052_attrs[] = {
 	&sensor_dev_attr_in0_input.dev_attr.attr,
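A pattern worth noting in the da9052 conversion (and the da9055 one that follows): all *_label files share a single label_show(), with the ADC channel constant doing double duty as an index into the driver's name table. Reduced to its core, as it appears in these drivers:

	static ssize_t label_show(struct device *dev,
				  struct device_attribute *devattr, char *buf)
	{
		return sprintf(buf, "%s\n",
			       input_names[to_sensor_dev_attr(devattr)->index]);
	}
	static SENSOR_DEVICE_ATTR_RO(in0_label, label, DA9052_ADC_VDDOUT);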
diff --git a/drivers/hwmon/da9055-hwmon.c b/drivers/hwmon/da9055-hwmon.c
index f6e159cabe23..4de6de683908 100644
--- a/drivers/hwmon/da9055-hwmon.c
+++ b/drivers/hwmon/da9055-hwmon.c
@@ -140,8 +140,9 @@ static int da9055_disable_auto_mode(struct da9055 *da9055, int channel)
 	return da9055_reg_update(da9055, DA9055_REG_ADC_CONT, 1 << channel, 0);
 }
 
-static ssize_t da9055_read_auto_ch(struct device *dev,
-				   struct device_attribute *devattr, char *buf)
+static ssize_t da9055_auto_ch_show(struct device *dev,
+				   struct device_attribute *devattr,
+				   char *buf)
 {
 	struct da9055_hwmon *hwmon = dev_get_drvdata(dev);
 	int ret, adc;
@@ -176,7 +177,7 @@ hwmon_err:
 	return ret;
 }
 
-static ssize_t da9055_read_tjunc(struct device *dev,
+static ssize_t da9055_tjunc_show(struct device *dev,
 				 struct device_attribute *devattr, char *buf)
 {
 	struct da9055_hwmon *hwmon = dev_get_drvdata(dev);
@@ -199,34 +200,24 @@ static ssize_t da9055_read_tjunc(struct device *dev,
 							+ 3076332, 10000));
 }
 
-static ssize_t show_label(struct device *dev,
+static ssize_t label_show(struct device *dev,
 			  struct device_attribute *devattr, char *buf)
 {
 	return sprintf(buf, "%s\n",
 		       input_names[to_sensor_dev_attr(devattr)->index]);
 }
 
-static SENSOR_DEVICE_ATTR(in0_input, S_IRUGO, da9055_read_auto_ch, NULL,
-			  DA9055_ADC_VSYS);
-static SENSOR_DEVICE_ATTR(in0_label, S_IRUGO, show_label, NULL,
-			  DA9055_ADC_VSYS);
-static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, da9055_read_auto_ch, NULL,
-			  DA9055_ADC_ADCIN1);
-static SENSOR_DEVICE_ATTR(in1_label, S_IRUGO, show_label, NULL,
-			  DA9055_ADC_ADCIN1);
-static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, da9055_read_auto_ch, NULL,
-			  DA9055_ADC_ADCIN2);
-static SENSOR_DEVICE_ATTR(in2_label, S_IRUGO, show_label, NULL,
-			  DA9055_ADC_ADCIN2);
-static SENSOR_DEVICE_ATTR(in3_input, S_IRUGO, da9055_read_auto_ch, NULL,
-			  DA9055_ADC_ADCIN3);
-static SENSOR_DEVICE_ATTR(in3_label, S_IRUGO, show_label, NULL,
-			  DA9055_ADC_ADCIN3);
-
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, da9055_read_tjunc, NULL,
-			  DA9055_ADC_TJUNC);
-static SENSOR_DEVICE_ATTR(temp1_label, S_IRUGO, show_label, NULL,
-			  DA9055_ADC_TJUNC);
+static SENSOR_DEVICE_ATTR_RO(in0_input, da9055_auto_ch, DA9055_ADC_VSYS);
+static SENSOR_DEVICE_ATTR_RO(in0_label, label, DA9055_ADC_VSYS);
+static SENSOR_DEVICE_ATTR_RO(in1_input, da9055_auto_ch, DA9055_ADC_ADCIN1);
+static SENSOR_DEVICE_ATTR_RO(in1_label, label, DA9055_ADC_ADCIN1);
+static SENSOR_DEVICE_ATTR_RO(in2_input, da9055_auto_ch, DA9055_ADC_ADCIN2);
+static SENSOR_DEVICE_ATTR_RO(in2_label, label, DA9055_ADC_ADCIN2);
+static SENSOR_DEVICE_ATTR_RO(in3_input, da9055_auto_ch, DA9055_ADC_ADCIN3);
+static SENSOR_DEVICE_ATTR_RO(in3_label, label, DA9055_ADC_ADCIN3);
+
+static SENSOR_DEVICE_ATTR_RO(temp1_input, da9055_tjunc, DA9055_ADC_TJUNC);
+static SENSOR_DEVICE_ATTR_RO(temp1_label, label, DA9055_ADC_TJUNC);
 
 static struct attribute *da9055_attrs[] = {
 	&sensor_dev_attr_in0_input.dev_attr.attr,
diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
index 9d3ef879dc51..68c9a6664557 100644
--- a/drivers/hwmon/dell-smm-hwmon.c
+++ b/drivers/hwmon/dell-smm-hwmon.c
@@ -618,7 +618,7 @@ static inline void __exit i8k_exit_procfs(void)
  * Hwmon interface
  */
 
-static ssize_t i8k_hwmon_show_temp_label(struct device *dev,
+static ssize_t i8k_hwmon_temp_label_show(struct device *dev,
 					 struct device_attribute *devattr,
 					 char *buf)
 {
@@ -641,7 +641,7 @@ static ssize_t i8k_hwmon_show_temp_label(struct device *dev,
 	return sprintf(buf, "%s\n", labels[type]);
 }
 
-static ssize_t i8k_hwmon_show_temp(struct device *dev,
+static ssize_t i8k_hwmon_temp_show(struct device *dev,
 				   struct device_attribute *devattr,
 				   char *buf)
 {
@@ -654,7 +654,7 @@ static ssize_t i8k_hwmon_show_temp(struct device *dev,
 	return sprintf(buf, "%d\n", temp * 1000);
 }
 
-static ssize_t i8k_hwmon_show_fan_label(struct device *dev,
+static ssize_t i8k_hwmon_fan_label_show(struct device *dev,
 					struct device_attribute *devattr,
 					char *buf)
 {
@@ -685,9 +685,8 @@ static ssize_t i8k_hwmon_show_fan_label(struct device *dev,
 	return sprintf(buf, "%s%s\n", (dock ? "Docking " : ""), labels[type]);
 }
 
-static ssize_t i8k_hwmon_show_fan(struct device *dev,
-				  struct device_attribute *devattr,
-				  char *buf)
+static ssize_t i8k_hwmon_fan_show(struct device *dev,
+				  struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	int fan_speed;
@@ -698,9 +697,8 @@ static ssize_t i8k_hwmon_show_fan(struct device *dev,
 	return sprintf(buf, "%d\n", fan_speed);
 }
 
-static ssize_t i8k_hwmon_show_pwm(struct device *dev,
-				  struct device_attribute *devattr,
-				  char *buf)
+static ssize_t i8k_hwmon_pwm_show(struct device *dev,
+				  struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	int status;
@@ -711,9 +709,9 @@ static ssize_t i8k_hwmon_show_pwm(struct device *dev,
 	return sprintf(buf, "%d\n", clamp_val(status * i8k_pwm_mult, 0, 255));
 }
 
-static ssize_t i8k_hwmon_set_pwm(struct device *dev,
-				 struct device_attribute *attr,
-				 const char *buf, size_t count)
+static ssize_t i8k_hwmon_pwm_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
 {
 	int index = to_sensor_dev_attr(attr)->index;
 	unsigned long val;
@@ -731,35 +729,23 @@ static ssize_t i8k_hwmon_set_pwm(struct device *dev,
 	return err < 0 ? -EIO : count;
 }
 
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, i8k_hwmon_show_temp, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_label, S_IRUGO, i8k_hwmon_show_temp_label, NULL,
-			  0);
-static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, i8k_hwmon_show_temp, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp2_label, S_IRUGO, i8k_hwmon_show_temp_label, NULL,
-			  1);
-static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, i8k_hwmon_show_temp, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp3_label, S_IRUGO, i8k_hwmon_show_temp_label, NULL,
-			  2);
-static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, i8k_hwmon_show_temp, NULL, 3);
-static SENSOR_DEVICE_ATTR(temp4_label, S_IRUGO, i8k_hwmon_show_temp_label, NULL,
-			  3);
-static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, i8k_hwmon_show_fan, NULL, 0);
-static SENSOR_DEVICE_ATTR(fan1_label, S_IRUGO, i8k_hwmon_show_fan_label, NULL,
-			  0);
-static SENSOR_DEVICE_ATTR(pwm1, S_IRUGO | S_IWUSR, i8k_hwmon_show_pwm,
-			  i8k_hwmon_set_pwm, 0);
-static SENSOR_DEVICE_ATTR(fan2_input, S_IRUGO, i8k_hwmon_show_fan, NULL,
-			  1);
-static SENSOR_DEVICE_ATTR(fan2_label, S_IRUGO, i8k_hwmon_show_fan_label, NULL,
-			  1);
-static SENSOR_DEVICE_ATTR(pwm2, S_IRUGO | S_IWUSR, i8k_hwmon_show_pwm,
-			  i8k_hwmon_set_pwm, 1);
-static SENSOR_DEVICE_ATTR(fan3_input, S_IRUGO, i8k_hwmon_show_fan, NULL,
-			  2);
-static SENSOR_DEVICE_ATTR(fan3_label, S_IRUGO, i8k_hwmon_show_fan_label, NULL,
-			  2);
-static SENSOR_DEVICE_ATTR(pwm3, S_IRUGO | S_IWUSR, i8k_hwmon_show_pwm,
-			  i8k_hwmon_set_pwm, 2);
+static SENSOR_DEVICE_ATTR_RO(temp1_input, i8k_hwmon_temp, 0);
+static SENSOR_DEVICE_ATTR_RO(temp1_label, i8k_hwmon_temp_label, 0);
+static SENSOR_DEVICE_ATTR_RO(temp2_input, i8k_hwmon_temp, 1);
+static SENSOR_DEVICE_ATTR_RO(temp2_label, i8k_hwmon_temp_label, 1);
+static SENSOR_DEVICE_ATTR_RO(temp3_input, i8k_hwmon_temp, 2);
+static SENSOR_DEVICE_ATTR_RO(temp3_label, i8k_hwmon_temp_label, 2);
+static SENSOR_DEVICE_ATTR_RO(temp4_input, i8k_hwmon_temp, 3);
+static SENSOR_DEVICE_ATTR_RO(temp4_label, i8k_hwmon_temp_label, 3);
+static SENSOR_DEVICE_ATTR_RO(fan1_input, i8k_hwmon_fan, 0);
+static SENSOR_DEVICE_ATTR_RO(fan1_label, i8k_hwmon_fan_label, 0);
+static SENSOR_DEVICE_ATTR_RW(pwm1, i8k_hwmon_pwm, 0);
+static SENSOR_DEVICE_ATTR_RO(fan2_input, i8k_hwmon_fan, 1);
+static SENSOR_DEVICE_ATTR_RO(fan2_label, i8k_hwmon_fan_label, 1);
+static SENSOR_DEVICE_ATTR_RW(pwm2, i8k_hwmon_pwm, 1);
+static SENSOR_DEVICE_ATTR_RO(fan3_input, i8k_hwmon_fan, 2);
+static SENSOR_DEVICE_ATTR_RO(fan3_label, i8k_hwmon_fan_label, 2);
+static SENSOR_DEVICE_ATTR_RW(pwm3, i8k_hwmon_pwm, 2);
 
 static struct attribute *i8k_attrs[] = {
 	&sensor_dev_attr_temp1_input.dev_attr.attr,	/* 0 */
@@ -1017,6 +1003,13 @@ static const struct dmi_system_id i8k_dmi_table[] __initconst = {
 			DMI_MATCH(DMI_PRODUCT_NAME, "XPS 15 9560"),
 		},
 	},
+	{
+		.ident = "Dell XPS 15 9570",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "XPS 15 9570"),
+		},
+	},
 	{ }
 };
 
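Besides the renames, dell-smm-hwmon gains a new DMI whitelist entry for the XPS 15 9570. For reference, all DMI_MATCH() slots within one entry must match for that entry to fire (they are AND-ed), and the empty { } initializer terminates the table:

	{
		.ident = "Dell XPS 15 9570",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
			DMI_MATCH(DMI_PRODUCT_NAME, "XPS 15 9570"),
		},
	},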
diff --git a/drivers/hwmon/ds1621.c b/drivers/hwmon/ds1621.c
index 5c317fc32a4a..6fcdac068a27 100644
--- a/drivers/hwmon/ds1621.c
+++ b/drivers/hwmon/ds1621.c
@@ -234,7 +234,7 @@ static struct ds1621_data *ds1621_update_client(struct device *dev)
 	return data;
 }
 
-static ssize_t show_temp(struct device *dev, struct device_attribute *da,
+static ssize_t temp_show(struct device *dev, struct device_attribute *da,
 			 char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
@@ -243,8 +243,8 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *da,
 		       DS1621_TEMP_FROM_REG(data->temp[attr->index]));
 }
 
-static ssize_t set_temp(struct device *dev, struct device_attribute *da,
-			const char *buf, size_t count)
+static ssize_t temp_store(struct device *dev, struct device_attribute *da,
+			  const char *buf, size_t count)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
 	struct ds1621_data *data = dev_get_drvdata(dev);
@@ -270,7 +270,7 @@ static ssize_t alarms_show(struct device *dev, struct device_attribute *da,
 	return sprintf(buf, "%d\n", ALARMS_FROM_REG(data->conf));
 }
 
-static ssize_t show_alarm(struct device *dev, struct device_attribute *da,
+static ssize_t alarm_show(struct device *dev, struct device_attribute *da,
 			  char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
@@ -319,13 +319,11 @@ static ssize_t update_interval_store(struct device *dev,
 static DEVICE_ATTR_RO(alarms);
 static DEVICE_ATTR_RW(update_interval);
 
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, show_temp, set_temp, 1);
-static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_temp, set_temp, 2);
-static SENSOR_DEVICE_ATTR(temp1_min_alarm, S_IRUGO, show_alarm, NULL,
-			  DS1621_ALARM_TEMP_LOW);
-static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL,
-			  DS1621_ALARM_TEMP_HIGH);
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0);
+static SENSOR_DEVICE_ATTR_RW(temp1_min, temp, 1);
+static SENSOR_DEVICE_ATTR_RW(temp1_max, temp, 2);
+static SENSOR_DEVICE_ATTR_RO(temp1_min_alarm, alarm, DS1621_ALARM_TEMP_LOW);
+static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, alarm, DS1621_ALARM_TEMP_HIGH);
 
 static struct attribute *ds1621_attributes[] = {
 	&sensor_dev_attr_temp1_input.dev_attr.attr,
diff --git a/drivers/hwmon/ds620.c b/drivers/hwmon/ds620.c
index 57d6958c74b8..7f8f8869c93c 100644
--- a/drivers/hwmon/ds620.c
+++ b/drivers/hwmon/ds620.c
@@ -139,7 +139,7 @@ abort:
 	return ret;
 }
 
-static ssize_t show_temp(struct device *dev, struct device_attribute *da,
+static ssize_t temp_show(struct device *dev, struct device_attribute *da,
 			 char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
@@ -151,8 +151,8 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *da,
 	return sprintf(buf, "%d\n", ((data->temp[attr->index] / 8) * 625) / 10);
 }
 
-static ssize_t set_temp(struct device *dev, struct device_attribute *da,
-			const char *buf, size_t count)
+static ssize_t temp_store(struct device *dev, struct device_attribute *da,
+			  const char *buf, size_t count)
 {
 	int res;
 	long val;
@@ -176,7 +176,7 @@ static ssize_t set_temp(struct device *dev, struct device_attribute *da,
 	return count;
 }
 
-static ssize_t show_alarm(struct device *dev, struct device_attribute *da,
+static ssize_t alarm_show(struct device *dev, struct device_attribute *da,
 			  char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
@@ -207,13 +207,11 @@ static ssize_t show_alarm(struct device *dev, struct device_attribute *da,
 	return sprintf(buf, "%d\n", !!(conf & attr->index));
 }
 
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, show_temp, set_temp, 1);
-static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_temp, set_temp, 2);
-static SENSOR_DEVICE_ATTR(temp1_min_alarm, S_IRUGO, show_alarm, NULL,
-			  DS620_REG_CONFIG_TLF);
-static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL,
-			  DS620_REG_CONFIG_THF);
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0);
+static SENSOR_DEVICE_ATTR_RW(temp1_min, temp, 1);
+static SENSOR_DEVICE_ATTR_RW(temp1_max, temp, 2);
+static SENSOR_DEVICE_ATTR_RO(temp1_min_alarm, alarm, DS620_REG_CONFIG_TLF);
+static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, alarm, DS620_REG_CONFIG_THF);
 
 static struct attribute *ds620_attrs[] = {
 	&sensor_dev_attr_temp1_input.dev_attr.attr,
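ds1621 and ds620 show a third use of the index: for the alarm files it is not a channel number but a status-bit mask, tested directly in alarm_show(). Condensed, with read_status() standing in as a hypothetical placeholder for the driver's actual register read:

	static ssize_t alarm_show(struct device *dev,
				  struct device_attribute *da, char *buf)
	{
		struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
		int conf = read_status(dev);	/* hypothetical helper */

		/* ->index carries the bit to test, e.g. DS620_REG_CONFIG_TLF */
		return sprintf(buf, "%d\n", !!(conf & attr->index));
	}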
diff --git a/drivers/hwmon/emc1403.c b/drivers/hwmon/emc1403.c
index aaebeb726d6a..bdab47ac9e9a 100644
--- a/drivers/hwmon/emc1403.c
+++ b/drivers/hwmon/emc1403.c
@@ -43,8 +43,8 @@ struct thermal_data {
 	const struct attribute_group *groups[4];
 };
 
-static ssize_t show_temp(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t temp_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
 {
 	struct sensor_device_attribute *sda = to_sensor_dev_attr(attr);
 	struct thermal_data *data = dev_get_drvdata(dev);
@@ -57,8 +57,8 @@ static ssize_t show_temp(struct device *dev,
 	return sprintf(buf, "%d000\n", val);
 }
 
-static ssize_t show_bit(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t bit_show(struct device *dev, struct device_attribute *attr,
+			char *buf)
 {
 	struct sensor_device_attribute_2 *sda = to_sensor_dev_attr_2(attr);
 	struct thermal_data *data = dev_get_drvdata(dev);
@@ -71,8 +71,8 @@ static ssize_t show_bit(struct device *dev,
 	return sprintf(buf, "%d\n", !!(val & sda->index));
 }
 
-static ssize_t store_temp(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t count)
+static ssize_t temp_store(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
 {
 	struct sensor_device_attribute *sda = to_sensor_dev_attr(attr);
 	struct thermal_data *data = dev_get_drvdata(dev);
@@ -88,8 +88,8 @@ static ssize_t store_temp(struct device *dev,
 	return count;
 }
 
-static ssize_t store_bit(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t count)
+static ssize_t bit_store(struct device *dev, struct device_attribute *attr,
+			 const char *buf, size_t count)
 {
 	struct sensor_device_attribute_2 *sda = to_sensor_dev_attr_2(attr);
 	struct thermal_data *data = dev_get_drvdata(dev);
@@ -128,20 +128,20 @@ static ssize_t show_hyst_common(struct device *dev,
 	return sprintf(buf, "%d000\n", is_min ? limit + hyst : limit - hyst);
 }
 
-static ssize_t show_hyst(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t hyst_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
 {
 	return show_hyst_common(dev, attr, buf, false);
 }
 
-static ssize_t show_min_hyst(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t min_hyst_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
 {
 	return show_hyst_common(dev, attr, buf, true);
 }
 
-static ssize_t store_hyst(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t count)
+static ssize_t hyst_store(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
 {
 	struct sensor_device_attribute *sda = to_sensor_dev_attr(attr);
 	struct thermal_data *data = dev_get_drvdata(dev);
@@ -173,80 +173,54 @@ fail:
 * Sensors. We pass the actual i2c register to the methods.
 */
 
-static SENSOR_DEVICE_ATTR(temp1_min, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x06);
-static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x05);
-static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x20);
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0x00);
-static SENSOR_DEVICE_ATTR_2(temp1_min_alarm, S_IRUGO,
-	show_bit, NULL, 0x36, 0x01);
-static SENSOR_DEVICE_ATTR_2(temp1_max_alarm, S_IRUGO,
-	show_bit, NULL, 0x35, 0x01);
-static SENSOR_DEVICE_ATTR_2(temp1_crit_alarm, S_IRUGO,
-	show_bit, NULL, 0x37, 0x01);
-static SENSOR_DEVICE_ATTR(temp1_min_hyst, S_IRUGO, show_min_hyst, NULL, 0x06);
-static SENSOR_DEVICE_ATTR(temp1_max_hyst, S_IRUGO, show_hyst, NULL, 0x05);
-static SENSOR_DEVICE_ATTR(temp1_crit_hyst, S_IRUGO | S_IWUSR,
-	show_hyst, store_hyst, 0x20);
-
-static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x08);
-static SENSOR_DEVICE_ATTR(temp2_max, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x07);
-static SENSOR_DEVICE_ATTR(temp2_crit, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x19);
-static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp, NULL, 0x01);
-static SENSOR_DEVICE_ATTR_2(temp2_fault, S_IRUGO, show_bit, NULL, 0x1b, 0x02);
-static SENSOR_DEVICE_ATTR_2(temp2_min_alarm, S_IRUGO,
-	show_bit, NULL, 0x36, 0x02);
-static SENSOR_DEVICE_ATTR_2(temp2_max_alarm, S_IRUGO,
-	show_bit, NULL, 0x35, 0x02);
-static SENSOR_DEVICE_ATTR_2(temp2_crit_alarm, S_IRUGO,
-	show_bit, NULL, 0x37, 0x02);
-static SENSOR_DEVICE_ATTR(temp2_min_hyst, S_IRUGO, show_min_hyst, NULL, 0x08);
-static SENSOR_DEVICE_ATTR(temp2_max_hyst, S_IRUGO, show_hyst, NULL, 0x07);
-static SENSOR_DEVICE_ATTR(temp2_crit_hyst, S_IRUGO, show_hyst, NULL, 0x19);
-
-static SENSOR_DEVICE_ATTR(temp3_min, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x16);
-static SENSOR_DEVICE_ATTR(temp3_max, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x15);
-static SENSOR_DEVICE_ATTR(temp3_crit, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x1A);
-static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_temp, NULL, 0x23);
-static SENSOR_DEVICE_ATTR_2(temp3_fault, S_IRUGO, show_bit, NULL, 0x1b, 0x04);
-static SENSOR_DEVICE_ATTR_2(temp3_min_alarm, S_IRUGO,
-	show_bit, NULL, 0x36, 0x04);
-static SENSOR_DEVICE_ATTR_2(temp3_max_alarm, S_IRUGO,
-	show_bit, NULL, 0x35, 0x04);
-static SENSOR_DEVICE_ATTR_2(temp3_crit_alarm, S_IRUGO,
-	show_bit, NULL, 0x37, 0x04);
-static SENSOR_DEVICE_ATTR(temp3_min_hyst, S_IRUGO, show_min_hyst, NULL, 0x16);
-static SENSOR_DEVICE_ATTR(temp3_max_hyst, S_IRUGO, show_hyst, NULL, 0x15);
-static SENSOR_DEVICE_ATTR(temp3_crit_hyst, S_IRUGO, show_hyst, NULL, 0x1A);
-
-static SENSOR_DEVICE_ATTR(temp4_min, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x2D);
-static SENSOR_DEVICE_ATTR(temp4_max, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x2C);
-static SENSOR_DEVICE_ATTR(temp4_crit, S_IRUGO | S_IWUSR,
-	show_temp, store_temp, 0x30);
-static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_temp, NULL, 0x2A);
-static SENSOR_DEVICE_ATTR_2(temp4_fault, S_IRUGO, show_bit, NULL, 0x1b, 0x08);
-static SENSOR_DEVICE_ATTR_2(temp4_min_alarm, S_IRUGO,
-	show_bit, NULL, 0x36, 0x08);
-static SENSOR_DEVICE_ATTR_2(temp4_max_alarm, S_IRUGO,
-	show_bit, NULL, 0x35, 0x08);
-static SENSOR_DEVICE_ATTR_2(temp4_crit_alarm, S_IRUGO,
-	show_bit, NULL, 0x37, 0x08);
-static SENSOR_DEVICE_ATTR(temp4_min_hyst, S_IRUGO, show_min_hyst, NULL, 0x2D);
-static SENSOR_DEVICE_ATTR(temp4_max_hyst, S_IRUGO, show_hyst, NULL, 0x2C);
-static SENSOR_DEVICE_ATTR(temp4_crit_hyst, S_IRUGO, show_hyst, NULL, 0x30);
-
-static SENSOR_DEVICE_ATTR_2(power_state, S_IRUGO | S_IWUSR,
-	show_bit, store_bit, 0x03, 0x40);
+static SENSOR_DEVICE_ATTR_RW(temp1_min, temp, 0x06);
+static SENSOR_DEVICE_ATTR_RW(temp1_max, temp, 0x05);
+static SENSOR_DEVICE_ATTR_RW(temp1_crit, temp, 0x20);
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0x00);
+static SENSOR_DEVICE_ATTR_2_RO(temp1_min_alarm, bit, 0x36, 0x01);
+static SENSOR_DEVICE_ATTR_2_RO(temp1_max_alarm, bit, 0x35, 0x01);
+static SENSOR_DEVICE_ATTR_2_RO(temp1_crit_alarm, bit, 0x37, 0x01);
+static SENSOR_DEVICE_ATTR_RO(temp1_min_hyst, min_hyst, 0x06);
+static SENSOR_DEVICE_ATTR_RO(temp1_max_hyst, hyst, 0x05);
+static SENSOR_DEVICE_ATTR_RW(temp1_crit_hyst, hyst, 0x20);
+
+static SENSOR_DEVICE_ATTR_RW(temp2_min, temp, 0x08);
+static SENSOR_DEVICE_ATTR_RW(temp2_max, temp, 0x07);
+static SENSOR_DEVICE_ATTR_RW(temp2_crit, temp, 0x19);
+static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, 0x01);
+static SENSOR_DEVICE_ATTR_2_RO(temp2_fault, bit, 0x1b, 0x02);
+static SENSOR_DEVICE_ATTR_2_RO(temp2_min_alarm, bit, 0x36, 0x02);
+static SENSOR_DEVICE_ATTR_2_RO(temp2_max_alarm, bit, 0x35, 0x02);
+static SENSOR_DEVICE_ATTR_2_RO(temp2_crit_alarm, bit, 0x37, 0x02);
+static SENSOR_DEVICE_ATTR_RO(temp2_min_hyst, min_hyst, 0x08);
+static SENSOR_DEVICE_ATTR_RO(temp2_max_hyst, hyst, 0x07);
+static SENSOR_DEVICE_ATTR_RO(temp2_crit_hyst, hyst, 0x19);
+
+static SENSOR_DEVICE_ATTR_RW(temp3_min, temp, 0x16);
+static SENSOR_DEVICE_ATTR_RW(temp3_max, temp, 0x15);
+static SENSOR_DEVICE_ATTR_RW(temp3_crit, temp, 0x1A);
+static SENSOR_DEVICE_ATTR_RO(temp3_input, temp, 0x23);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_fault, bit, 0x1b, 0x04);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_min_alarm, bit, 0x36, 0x04);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_max_alarm, bit, 0x35, 0x04);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_crit_alarm, bit, 0x37, 0x04);
+static SENSOR_DEVICE_ATTR_RO(temp3_min_hyst, min_hyst, 0x16);
+static SENSOR_DEVICE_ATTR_RO(temp3_max_hyst, hyst, 0x15);
+static SENSOR_DEVICE_ATTR_RO(temp3_crit_hyst, hyst, 0x1A);
+
+static SENSOR_DEVICE_ATTR_RW(temp4_min, temp, 0x2D);
+static SENSOR_DEVICE_ATTR_RW(temp4_max, temp, 0x2C);
+static SENSOR_DEVICE_ATTR_RW(temp4_crit, temp, 0x30);
+static SENSOR_DEVICE_ATTR_RO(temp4_input, temp, 0x2A);
+static SENSOR_DEVICE_ATTR_2_RO(temp4_fault, bit, 0x1b, 0x08);
+static SENSOR_DEVICE_ATTR_2_RO(temp4_min_alarm, bit, 0x36, 0x08);
+static SENSOR_DEVICE_ATTR_2_RO(temp4_max_alarm, bit, 0x35, 0x08);
+static SENSOR_DEVICE_ATTR_2_RO(temp4_crit_alarm, bit, 0x37, 0x08);
+static SENSOR_DEVICE_ATTR_RO(temp4_min_hyst, min_hyst, 0x2D);
+static SENSOR_DEVICE_ATTR_RO(temp4_max_hyst, hyst, 0x2C);
+static SENSOR_DEVICE_ATTR_RO(temp4_crit_hyst, hyst, 0x30);
+
+static SENSOR_DEVICE_ATTR_2_RW(power_state, bit, 0x03, 0x40);
 
 static struct attribute *emc1402_attrs[] = {
 	&sensor_dev_attr_temp1_min.dev_attr.attr,
@@ -328,14 +302,14 @@ static const struct attribute_group emc1404_group = {
 * array.
 */
 static struct sensor_device_attribute_2 emc1402_alarms[] = {
-	SENSOR_ATTR_2(temp1_min_alarm, S_IRUGO, show_bit, NULL, 0x02, 0x20),
-	SENSOR_ATTR_2(temp1_max_alarm, S_IRUGO, show_bit, NULL, 0x02, 0x40),
-	SENSOR_ATTR_2(temp1_crit_alarm, S_IRUGO, show_bit, NULL, 0x02, 0x01),
-
-	SENSOR_ATTR_2(temp2_fault, S_IRUGO, show_bit, NULL, 0x02, 0x04),
-	SENSOR_ATTR_2(temp2_min_alarm, S_IRUGO, show_bit, NULL, 0x02, 0x08),
-	SENSOR_ATTR_2(temp2_max_alarm, S_IRUGO, show_bit, NULL, 0x02, 0x10),
-	SENSOR_ATTR_2(temp2_crit_alarm, S_IRUGO, show_bit, NULL, 0x02, 0x02),
+	SENSOR_ATTR_2_RO(temp1_min_alarm, bit, 0x02, 0x20),
+	SENSOR_ATTR_2_RO(temp1_max_alarm, bit, 0x02, 0x40),
+	SENSOR_ATTR_2_RO(temp1_crit_alarm, bit, 0x02, 0x01),
+
+	SENSOR_ATTR_2_RO(temp2_fault, bit, 0x02, 0x04),
+	SENSOR_ATTR_2_RO(temp2_min_alarm, bit, 0x02, 0x08),
+	SENSOR_ATTR_2_RO(temp2_max_alarm, bit, 0x02, 0x10),
+	SENSOR_ATTR_2_RO(temp2_crit_alarm, bit, 0x02, 0x02),
 };
 
 static struct attribute *emc1402_alarm_attrs[] = {
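emc1403 pushes the index idea furthest: as the comment kept above says ("We pass the actual i2c register to the methods"), the index is the raw register address, so one temp_show()/temp_store() pair serves every limit register. For instance, after conversion:

	static SENSOR_DEVICE_ATTR_RW(temp1_max, temp, 0x05);	/* register behind temp1_max */
	static SENSOR_DEVICE_ATTR_RW(temp1_min, temp, 0x06);	/* register behind temp1_min */

and the callback simply reads or writes whatever register ->index names.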
diff --git a/drivers/hwmon/emc2103.c b/drivers/hwmon/emc2103.c
index 1ed9a7aa953d..4b7748a0a833 100644
--- a/drivers/hwmon/emc2103.c
+++ b/drivers/hwmon/emc2103.c
@@ -185,7 +185,7 @@ static struct emc2103_data *emc2103_update_device(struct device *dev)
 }
 
 static ssize_t
-show_temp(struct device *dev, struct device_attribute *da, char *buf)
+temp_show(struct device *dev, struct device_attribute *da, char *buf)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = emc2103_update_device(dev);
@@ -195,7 +195,7 @@ show_temp(struct device *dev, struct device_attribute *da, char *buf)
 }
 
 static ssize_t
-show_temp_min(struct device *dev, struct device_attribute *da, char *buf)
+temp_min_show(struct device *dev, struct device_attribute *da, char *buf)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = emc2103_update_device(dev);
@@ -204,7 +204,7 @@ show_temp_min(struct device *dev, struct device_attribute *da, char *buf)
 }
 
 static ssize_t
-show_temp_max(struct device *dev, struct device_attribute *da, char *buf)
+temp_max_show(struct device *dev, struct device_attribute *da, char *buf)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = emc2103_update_device(dev);
@@ -213,7 +213,7 @@ show_temp_max(struct device *dev, struct device_attribute *da, char *buf)
 }
 
 static ssize_t
-show_temp_fault(struct device *dev, struct device_attribute *da, char *buf)
+temp_fault_show(struct device *dev, struct device_attribute *da, char *buf)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = emc2103_update_device(dev);
@@ -222,7 +222,8 @@ show_temp_fault(struct device *dev, struct device_attribute *da, char *buf)
 }
 
 static ssize_t
-show_temp_min_alarm(struct device *dev, struct device_attribute *da, char *buf)
+temp_min_alarm_show(struct device *dev, struct device_attribute *da,
+		    char *buf)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = emc2103_update_device(dev);
@@ -231,7 +232,8 @@ show_temp_min_alarm(struct device *dev, struct device_attribute *da, char *buf)
 }
 
 static ssize_t
-show_temp_max_alarm(struct device *dev, struct device_attribute *da, char *buf)
+temp_max_alarm_show(struct device *dev, struct device_attribute *da,
+		    char *buf)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = emc2103_update_device(dev);
@@ -239,8 +241,8 @@ show_temp_max_alarm(struct device *dev, struct device_attribute *da, char *buf)
 	return sprintf(buf, "%d\n", alarm ? 1 : 0);
 }
 
-static ssize_t set_temp_min(struct device *dev, struct device_attribute *da,
-			    const char *buf, size_t count)
+static ssize_t temp_min_store(struct device *dev, struct device_attribute *da,
+			      const char *buf, size_t count)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = dev_get_drvdata(dev);
@@ -261,8 +263,8 @@ static ssize_t set_temp_min(struct device *dev, struct device_attribute *da,
 	return count;
 }
 
-static ssize_t set_temp_max(struct device *dev, struct device_attribute *da,
-			    const char *buf, size_t count)
+static ssize_t temp_max_store(struct device *dev, struct device_attribute *da,
+			      const char *buf, size_t count)
 {
 	int nr = to_sensor_dev_attr(da)->index;
 	struct emc2103_data *data = dev_get_drvdata(dev);
@@ -470,49 +472,33 @@ err:
 	return count;
 }
 
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_min, S_IRUGO | S_IWUSR, show_temp_min,
-	set_temp_min, 0);
-static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO | S_IWUSR, show_temp_max,
-	set_temp_max, 0);
-static SENSOR_DEVICE_ATTR(temp1_fault, S_IRUGO, show_temp_fault, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_min_alarm, S_IRUGO, show_temp_min_alarm,
-	NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_temp_max_alarm,
-	NULL, 0);
-
-static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, show_temp_min,
-	set_temp_min, 1);
-static SENSOR_DEVICE_ATTR(temp2_max, S_IRUGO | S_IWUSR, show_temp_max,
-	set_temp_max, 1);
-static SENSOR_DEVICE_ATTR(temp2_fault, S_IRUGO, show_temp_fault, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp2_min_alarm, S_IRUGO, show_temp_min_alarm,
-	NULL, 1);
-static SENSOR_DEVICE_ATTR(temp2_max_alarm, S_IRUGO, show_temp_max_alarm,
-	NULL, 1);
-
-static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_temp, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp3_min, S_IRUGO | S_IWUSR, show_temp_min,
-	set_temp_min, 2);
-static SENSOR_DEVICE_ATTR(temp3_max, S_IRUGO | S_IWUSR, show_temp_max,
-	set_temp_max, 2);
-static SENSOR_DEVICE_ATTR(temp3_fault, S_IRUGO, show_temp_fault, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp3_min_alarm, S_IRUGO, show_temp_min_alarm,
-	NULL, 2);
-static SENSOR_DEVICE_ATTR(temp3_max_alarm, S_IRUGO, show_temp_max_alarm,
-	NULL, 2);
-
-static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_temp, NULL, 3);
-static SENSOR_DEVICE_ATTR(temp4_min, S_IRUGO | S_IWUSR, show_temp_min,
-	set_temp_min, 3);
-static SENSOR_DEVICE_ATTR(temp4_max, S_IRUGO | S_IWUSR, show_temp_max,
-	set_temp_max, 3);
-static SENSOR_DEVICE_ATTR(temp4_fault, S_IRUGO, show_temp_fault, NULL, 3);
-static SENSOR_DEVICE_ATTR(temp4_min_alarm, S_IRUGO, show_temp_min_alarm,
-	NULL, 3);
-static SENSOR_DEVICE_ATTR(temp4_max_alarm, S_IRUGO, show_temp_max_alarm,
-	NULL, 3);
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0);
+static SENSOR_DEVICE_ATTR_RW(temp1_min, temp_min, 0);
+static SENSOR_DEVICE_ATTR_RW(temp1_max, temp_max, 0);
+static SENSOR_DEVICE_ATTR_RO(temp1_fault, temp_fault, 0);
+static SENSOR_DEVICE_ATTR_RO(temp1_min_alarm, temp_min_alarm, 0);
+static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, temp_max_alarm, 0);
+
+static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, 1);
+static SENSOR_DEVICE_ATTR_RW(temp2_min, temp_min, 1);
+static SENSOR_DEVICE_ATTR_RW(temp2_max, temp_max, 1);
+static SENSOR_DEVICE_ATTR_RO(temp2_fault, temp_fault, 1);
+static SENSOR_DEVICE_ATTR_RO(temp2_min_alarm, temp_min_alarm, 1);
+static SENSOR_DEVICE_ATTR_RO(temp2_max_alarm, temp_max_alarm, 1);
+
+static SENSOR_DEVICE_ATTR_RO(temp3_input, temp, 2);
+static SENSOR_DEVICE_ATTR_RW(temp3_min, temp_min, 2);
+static SENSOR_DEVICE_ATTR_RW(temp3_max, temp_max, 2);
+static SENSOR_DEVICE_ATTR_RO(temp3_fault, temp_fault, 2);
+static SENSOR_DEVICE_ATTR_RO(temp3_min_alarm, temp_min_alarm, 2);
+static SENSOR_DEVICE_ATTR_RO(temp3_max_alarm, temp_max_alarm, 2);
+
+static SENSOR_DEVICE_ATTR_RO(temp4_input, temp, 3);
+static SENSOR_DEVICE_ATTR_RW(temp4_min, temp_min, 3);
+static SENSOR_DEVICE_ATTR_RW(temp4_max, temp_max, 3);
+static SENSOR_DEVICE_ATTR_RO(temp4_fault, temp_fault, 3);
+static SENSOR_DEVICE_ATTR_RO(temp4_min_alarm, temp_min_alarm, 3);
+static SENSOR_DEVICE_ATTR_RO(temp4_max_alarm, temp_max_alarm, 3);
 
 static DEVICE_ATTR_RO(fan1_input);
 static DEVICE_ATTR_RW(fan1_div);
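emc2103 also marks the boundary of the conversion: single-instance files keep the plain DEVICE_ATTR_* helpers, which derive the callback names from the attribute name itself rather than from a shared prefix. A sketch of the distinction (fan1_input_show is the name DEVICE_ATTR_RO expects; the body is illustrative only):

	static ssize_t fan1_input_show(struct device *dev,
				       struct device_attribute *da, char *buf)
	{
		return sprintf(buf, "%d\n", 0);	/* exactly one fan, no index */
	}
	static DEVICE_ATTR_RO(fan1_input);

Indexed families use SENSOR_DEVICE_ATTR_* precisely so that many files can share one callback.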
diff --git a/drivers/hwmon/emc6w201.c b/drivers/hwmon/emc6w201.c
index 4aee5adf9ef2..b4735e7e18f5 100644
--- a/drivers/hwmon/emc6w201.c
+++ b/drivers/hwmon/emc6w201.c
@@ -189,8 +189,8 @@ static struct emc6w201_data *emc6w201_update_device(struct device *dev)
 
 static const s16 nominal_mv[6] = { 2500, 1500, 3300, 5000, 1500, 1500 };
 
-static ssize_t show_in(struct device *dev, struct device_attribute *devattr,
-	char *buf)
+static ssize_t in_show(struct device *dev, struct device_attribute *devattr,
+		       char *buf)
 {
 	struct emc6w201_data *data = emc6w201_update_device(dev);
 	int sf = to_sensor_dev_attr_2(devattr)->index;
@@ -200,8 +200,8 @@ static ssize_t show_in(struct device *dev, struct device_attribute *devattr,
 		       (unsigned)data->in[sf][nr] * nominal_mv[nr] / 0xC0);
 }
 
-static ssize_t set_in(struct device *dev, struct device_attribute *devattr,
-	const char *buf, size_t count)
+static ssize_t in_store(struct device *dev, struct device_attribute *devattr,
+			const char *buf, size_t count)
 {
 	struct emc6w201_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -228,8 +228,8 @@ static ssize_t set_in(struct device *dev, struct device_attribute *devattr,
 	return err < 0 ? err : count;
 }
 
-static ssize_t show_temp(struct device *dev, struct device_attribute *devattr,
-	char *buf)
+static ssize_t temp_show(struct device *dev, struct device_attribute *devattr,
+			 char *buf)
 {
 	struct emc6w201_data *data = emc6w201_update_device(dev);
 	int sf = to_sensor_dev_attr_2(devattr)->index;
@@ -238,8 +238,9 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *devattr,
 	return sprintf(buf, "%d\n", (int)data->temp[sf][nr] * 1000);
 }
 
-static ssize_t set_temp(struct device *dev, struct device_attribute *devattr,
-	const char *buf, size_t count)
+static ssize_t temp_store(struct device *dev,
+			  struct device_attribute *devattr, const char *buf,
+			  size_t count)
 {
 	struct emc6w201_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -266,8 +267,8 @@ static ssize_t set_temp(struct device *dev, struct device_attribute *devattr,
 	return err < 0 ? err : count;
 }
 
-static ssize_t show_fan(struct device *dev, struct device_attribute *devattr,
-	char *buf)
+static ssize_t fan_show(struct device *dev, struct device_attribute *devattr,
+			char *buf)
 {
 	struct emc6w201_data *data = emc6w201_update_device(dev);
 	int sf = to_sensor_dev_attr_2(devattr)->index;
@@ -282,8 +283,8 @@ static ssize_t show_fan(struct device *dev, struct device_attribute *devattr,
 	return sprintf(buf, "%u\n", rpm);
 }
 
-static ssize_t set_fan(struct device *dev, struct device_attribute *devattr,
-	const char *buf, size_t count)
+static ssize_t fan_store(struct device *dev, struct device_attribute *devattr,
+			 const char *buf, size_t count)
 {
 	struct emc6w201_data *data = dev_get_drvdata(dev);
 	struct i2c_client *client = data->client;
@@ -312,83 +313,54 @@ static ssize_t set_fan(struct device *dev, struct device_attribute *devattr,
 	return err < 0 ? err : count;
 }
 
-static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, show_in, NULL, 0, input);
-static SENSOR_DEVICE_ATTR_2(in0_min, S_IRUGO | S_IWUSR, show_in, set_in,
-	0, min);
-static SENSOR_DEVICE_ATTR_2(in0_max, S_IRUGO | S_IWUSR, show_in, set_in,
-	0, max);
-static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, show_in, NULL, 1, input);
-static SENSOR_DEVICE_ATTR_2(in1_min, S_IRUGO | S_IWUSR, show_in, set_in,
-	1, min);
-static SENSOR_DEVICE_ATTR_2(in1_max, S_IRUGO | S_IWUSR, show_in, set_in,
-	1, max);
-static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, show_in, NULL, 2, input);
-static SENSOR_DEVICE_ATTR_2(in2_min, S_IRUGO | S_IWUSR, show_in, set_in,
-	2, min);
-static SENSOR_DEVICE_ATTR_2(in2_max, S_IRUGO | S_IWUSR, show_in, set_in,
-	2, max);
-static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, show_in, NULL, 3, input);
-static SENSOR_DEVICE_ATTR_2(in3_min, S_IRUGO | S_IWUSR, show_in, set_in,
-	3, min);
-static SENSOR_DEVICE_ATTR_2(in3_max, S_IRUGO | S_IWUSR, show_in, set_in,
-	3, max);
-static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, show_in, NULL, 4, input);
-static SENSOR_DEVICE_ATTR_2(in4_min, S_IRUGO | S_IWUSR, show_in, set_in,
-	4, min);
-static SENSOR_DEVICE_ATTR_2(in4_max, S_IRUGO | S_IWUSR, show_in, set_in,
-	4, max);
-static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, show_in, NULL, 5, input);
-static SENSOR_DEVICE_ATTR_2(in5_min, S_IRUGO | S_IWUSR, show_in, set_in,
-	5, min);
-static SENSOR_DEVICE_ATTR_2(in5_max, S_IRUGO | S_IWUSR, show_in, set_in,
-	5, max);
-
-static SENSOR_DEVICE_ATTR_2(temp1_input, S_IRUGO, show_temp, NULL, 0, input);
-static SENSOR_DEVICE_ATTR_2(temp1_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	0, min);
-static SENSOR_DEVICE_ATTR_2(temp1_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	0, max);
-static SENSOR_DEVICE_ATTR_2(temp2_input, S_IRUGO, show_temp, NULL, 1, input);
-static SENSOR_DEVICE_ATTR_2(temp2_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	1, min);
-static SENSOR_DEVICE_ATTR_2(temp2_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	1, max);
-static SENSOR_DEVICE_ATTR_2(temp3_input, S_IRUGO, show_temp, NULL, 2, input);
-static SENSOR_DEVICE_ATTR_2(temp3_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	2, min);
-static SENSOR_DEVICE_ATTR_2(temp3_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	2, max);
-static SENSOR_DEVICE_ATTR_2(temp4_input, S_IRUGO, show_temp, NULL, 3, input);
-static SENSOR_DEVICE_ATTR_2(temp4_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	3, min);
-static SENSOR_DEVICE_ATTR_2(temp4_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	3, max);
-static SENSOR_DEVICE_ATTR_2(temp5_input, S_IRUGO, show_temp, NULL, 4, input);
-static SENSOR_DEVICE_ATTR_2(temp5_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	4, min);
-static SENSOR_DEVICE_ATTR_2(temp5_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	4, max);
-static SENSOR_DEVICE_ATTR_2(temp6_input, S_IRUGO, show_temp, NULL, 5, input);
-static SENSOR_DEVICE_ATTR_2(temp6_min, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	5, min);
-static SENSOR_DEVICE_ATTR_2(temp6_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
-	5, max);
-
-static SENSOR_DEVICE_ATTR_2(fan1_input, S_IRUGO, show_fan, NULL, 0, input);
-static SENSOR_DEVICE_ATTR_2(fan1_min, S_IRUGO | S_IWUSR, show_fan, set_fan,
-	0, min);
-static SENSOR_DEVICE_ATTR_2(fan2_input, S_IRUGO, show_fan, NULL, 1, input);
-static SENSOR_DEVICE_ATTR_2(fan2_min, S_IRUGO | S_IWUSR, show_fan, set_fan,
-	1, min);
-static SENSOR_DEVICE_ATTR_2(fan3_input, S_IRUGO, show_fan, NULL, 2, input);
-static SENSOR_DEVICE_ATTR_2(fan3_min, S_IRUGO | S_IWUSR, show_fan, set_fan,
-	2, min);
-static SENSOR_DEVICE_ATTR_2(fan4_input, S_IRUGO, show_fan, NULL, 3, input);
-static SENSOR_DEVICE_ATTR_2(fan4_min, S_IRUGO | S_IWUSR, show_fan, set_fan,
-	3, min);
-static SENSOR_DEVICE_ATTR_2(fan5_input, S_IRUGO, show_fan, NULL, 4, input);
-static SENSOR_DEVICE_ATTR_2(fan5_min, S_IRUGO | S_IWUSR, show_fan, set_fan,
-	4, min);
+static SENSOR_DEVICE_ATTR_2_RO(in0_input, in, 0, input);
+static SENSOR_DEVICE_ATTR_2_RW(in0_min, in, 0, min);
+static SENSOR_DEVICE_ATTR_2_RW(in0_max, in, 0, max);
+static SENSOR_DEVICE_ATTR_2_RO(in1_input, in, 1, input);
+static SENSOR_DEVICE_ATTR_2_RW(in1_min, in, 1, min);
+static SENSOR_DEVICE_ATTR_2_RW(in1_max, in, 1, max);
+static SENSOR_DEVICE_ATTR_2_RO(in2_input, in, 2, input);
+static SENSOR_DEVICE_ATTR_2_RW(in2_min, in, 2, min);
+static SENSOR_DEVICE_ATTR_2_RW(in2_max, in, 2, max);
+static SENSOR_DEVICE_ATTR_2_RO(in3_input, in, 3, input);
+static SENSOR_DEVICE_ATTR_2_RW(in3_min, in, 3, min);
+static SENSOR_DEVICE_ATTR_2_RW(in3_max, in, 3, max);
+static SENSOR_DEVICE_ATTR_2_RO(in4_input, in, 4, input);
+static SENSOR_DEVICE_ATTR_2_RW(in4_min, in, 4, min);
+static SENSOR_DEVICE_ATTR_2_RW(in4_max, in, 4, max);
+static SENSOR_DEVICE_ATTR_2_RO(in5_input, in, 5, input);
+static SENSOR_DEVICE_ATTR_2_RW(in5_min, in, 5, min);
+static SENSOR_DEVICE_ATTR_2_RW(in5_max, in, 5, max);
+
+static SENSOR_DEVICE_ATTR_2_RO(temp1_input, temp, 0, input);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_min, temp, 0, min);
+static SENSOR_DEVICE_ATTR_2_RW(temp1_max, temp, 0, max);
+static SENSOR_DEVICE_ATTR_2_RO(temp2_input, temp, 1, input);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_min, temp, 1, min);
+static SENSOR_DEVICE_ATTR_2_RW(temp2_max, temp, 1, max);
+static SENSOR_DEVICE_ATTR_2_RO(temp3_input, temp, 2, input);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_min, temp, 2, min);
+static SENSOR_DEVICE_ATTR_2_RW(temp3_max, temp, 2, max);
+static SENSOR_DEVICE_ATTR_2_RO(temp4_input, temp, 3, input);
+static SENSOR_DEVICE_ATTR_2_RW(temp4_min, temp, 3, min);
+static SENSOR_DEVICE_ATTR_2_RW(temp4_max, temp, 3, max);
+static SENSOR_DEVICE_ATTR_2_RO(temp5_input, temp, 4, input);
+static SENSOR_DEVICE_ATTR_2_RW(temp5_min, temp, 4, min);
+static SENSOR_DEVICE_ATTR_2_RW(temp5_max, temp, 4, max);
+static SENSOR_DEVICE_ATTR_2_RO(temp6_input, temp, 5, input);
+static SENSOR_DEVICE_ATTR_2_RW(temp6_min, temp, 5, min);
+static SENSOR_DEVICE_ATTR_2_RW(temp6_max, temp, 5, max);
+
+static SENSOR_DEVICE_ATTR_2_RO(fan1_input, fan, 0, input);
+static SENSOR_DEVICE_ATTR_2_RW(fan1_min, fan, 0, min);
+static SENSOR_DEVICE_ATTR_2_RO(fan2_input, fan, 1, input);
+static SENSOR_DEVICE_ATTR_2_RW(fan2_min, fan, 1, min);
+static SENSOR_DEVICE_ATTR_2_RO(fan3_input, fan, 2, input);
+static SENSOR_DEVICE_ATTR_2_RW(fan3_min, fan, 2, min);
+static SENSOR_DEVICE_ATTR_2_RO(fan4_input, fan, 3, input);
+static SENSOR_DEVICE_ATTR_2_RW(fan4_min, fan, 3, min);
+static SENSOR_DEVICE_ATTR_2_RO(fan5_input, fan, 4, input);
+static SENSOR_DEVICE_ATTR_2_RW(fan5_min, fan, 4, min);
 
 static struct attribute *emc6w201_attrs[] = {
 	&sensor_dev_attr_in0_input.dev_attr.attr,
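emc6w201 uses the two-index variants, where one callback pair serves a whole matrix of files selected by the (nr, index) pair stored in the attribute. A reduced, hypothetical handler in the same style (placeholder struct and field names, not the driver's actual code):

	/* Hypothetical sketch: the attribute carries both a channel number
	 * (nr) and a reading selector (index), recovered via
	 * to_sensor_dev_attr_2(), so in0_min/in0_max/... can all share one
	 * pair of callbacks.
	 */
	static ssize_t limit_show(struct device *dev,
				  struct device_attribute *devattr, char *buf)
	{
		struct sensor_device_attribute_2 *attr = to_sensor_dev_attr_2(devattr);
		struct chip_data *data = dev_get_drvdata(dev);	/* placeholder type */

		return sprintf(buf, "%d\n", data->limit[attr->index][attr->nr]);
	}

	static SENSOR_DEVICE_ATTR_2_RO(in0_input, limit, 0, 0);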
diff --git a/drivers/hwmon/fschmd.c b/drivers/hwmon/fschmd.c
index 22d3a84f13ef..042a166e1858 100644
--- a/drivers/hwmon/fschmd.c
+++ b/drivers/hwmon/fschmd.c
@@ -331,8 +331,8 @@ static void fschmd_release_resources(struct kref *ref)
  * Sysfs attr show / store functions
  */
 
-static ssize_t show_in_value(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t in_value_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	const int max_reading[3] = { 14200, 6600, 3300 };
 	int index = to_sensor_dev_attr(devattr)->index;
@@ -349,8 +349,8 @@ static ssize_t show_in_value(struct device *dev,
 
 #define TEMP_FROM_REG(val)	(((val) - 128) * 1000)
 
-static ssize_t show_temp_value(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t temp_value_show(struct device *dev,
+			       struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -358,8 +358,8 @@ static ssize_t show_temp_value(struct device *dev,
 	return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_act[index]));
 }
 
-static ssize_t show_temp_max(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t temp_max_show(struct device *dev,
+			     struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -367,8 +367,9 @@ static ssize_t show_temp_max(struct device *dev,
 	return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_max[index]));
 }
 
-static ssize_t store_temp_max(struct device *dev, struct device_attribute
-	*devattr, const char *buf, size_t count)
+static ssize_t temp_max_store(struct device *dev,
+			      struct device_attribute *devattr,
+			      const char *buf, size_t count)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = dev_get_drvdata(dev);
@@ -390,8 +391,8 @@ static ssize_t store_temp_max(struct device *dev, struct device_attribute
 	return count;
 }
 
-static ssize_t show_temp_fault(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t temp_fault_show(struct device *dev,
+			       struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -403,8 +404,8 @@ static ssize_t show_temp_fault(struct device *dev,
 	return sprintf(buf, "1\n");
 }
 
-static ssize_t show_temp_alarm(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t temp_alarm_show(struct device *dev,
+			       struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -419,8 +420,8 @@ static ssize_t show_temp_alarm(struct device *dev,
 
 #define RPM_FROM_REG(val)	((val) * 60)
 
-static ssize_t show_fan_value(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t fan_value_show(struct device *dev,
+			      struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -428,8 +429,8 @@ static ssize_t show_fan_value(struct device *dev,
 	return sprintf(buf, "%u\n", RPM_FROM_REG(data->fan_act[index]));
 }
 
-static ssize_t show_fan_div(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t fan_div_show(struct device *dev,
+			    struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -438,8 +439,9 @@ static ssize_t show_fan_div(struct device *dev,
 	return sprintf(buf, "%d\n", 1 << (data->fan_ripple[index] & 3));
 }
 
-static ssize_t store_fan_div(struct device *dev, struct device_attribute
-	*devattr, const char *buf, size_t count)
+static ssize_t fan_div_store(struct device *dev,
+			     struct device_attribute *devattr,
+			     const char *buf, size_t count)
 {
 	u8 reg;
 	int index = to_sensor_dev_attr(devattr)->index;
@@ -488,8 +490,8 @@ static ssize_t store_fan_div(struct device *dev, struct device_attribute
 	return count;
 }
 
-static ssize_t show_fan_alarm(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t fan_alarm_show(struct device *dev,
+			      struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -500,8 +502,8 @@ static ssize_t show_fan_alarm(struct device *dev,
 	return sprintf(buf, "0\n");
 }
 
-static ssize_t show_fan_fault(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t fan_fault_show(struct device *dev,
+			      struct device_attribute *devattr, char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -513,8 +515,9 @@ static ssize_t show_fan_fault(struct device *dev,
 }
 
 
-static ssize_t show_pwm_auto_point1_pwm(struct device *dev,
-	struct device_attribute *devattr, char *buf)
+static ssize_t pwm_auto_point1_pwm_show(struct device *dev,
+					struct device_attribute *devattr,
+					char *buf)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = fschmd_update_device(dev);
@@ -527,8 +530,9 @@ static ssize_t show_pwm_auto_point1_pwm(struct device *dev,
 	return sprintf(buf, "%d\n", val);
 }
 
-static ssize_t store_pwm_auto_point1_pwm(struct device *dev,
-	struct device_attribute *devattr, const char *buf, size_t count)
+static ssize_t pwm_auto_point1_pwm_store(struct device *dev,
+					 struct device_attribute *devattr,
+					 const char *buf, size_t count)
 {
 	int index = to_sensor_dev_attr(devattr)->index;
 	struct fschmd_data *data = dev_get_drvdata(dev);
@@ -605,104 +609,97 @@ static ssize_t alert_led_store(struct device *dev,
 static DEVICE_ATTR_RW(alert_led);
 
 static struct sensor_device_attribute fschmd_attr[] = {
-	SENSOR_ATTR(in0_input, 0444, show_in_value, NULL, 0),
-	SENSOR_ATTR(in1_input, 0444, show_in_value, NULL, 1),
-	SENSOR_ATTR(in2_input, 0444, show_in_value, NULL, 2),
-	SENSOR_ATTR(in3_input, 0444, show_in_value, NULL, 3),
-	SENSOR_ATTR(in4_input, 0444, show_in_value, NULL, 4),
-	SENSOR_ATTR(in5_input, 0444, show_in_value, NULL, 5),
+	SENSOR_ATTR_RO(in0_input, in_value, 0),
+	SENSOR_ATTR_RO(in1_input, in_value, 1),
+	SENSOR_ATTR_RO(in2_input, in_value, 2),
+	SENSOR_ATTR_RO(in3_input, in_value, 3),
+	SENSOR_ATTR_RO(in4_input, in_value, 4),
+	SENSOR_ATTR_RO(in5_input, in_value, 5),
 };
 
 static struct sensor_device_attribute fschmd_temp_attr[] = {
-	SENSOR_ATTR(temp1_input, 0444, show_temp_value, NULL, 0),
-	SENSOR_ATTR(temp1_max, 0644, show_temp_max, store_temp_max, 0),
-	SENSOR_ATTR(temp1_fault, 0444, show_temp_fault, NULL, 0),
-	SENSOR_ATTR(temp1_alarm, 0444, show_temp_alarm, NULL, 0),
-	SENSOR_ATTR(temp2_input, 0444, show_temp_value, NULL, 1),
-	SENSOR_ATTR(temp2_max, 0644, show_temp_max, store_temp_max, 1),
-	SENSOR_ATTR(temp2_fault, 0444, show_temp_fault, NULL, 1),
-	SENSOR_ATTR(temp2_alarm, 0444, show_temp_alarm, NULL, 1),
-	SENSOR_ATTR(temp3_input, 0444, show_temp_value, NULL, 2),
-	SENSOR_ATTR(temp3_max, 0644, show_temp_max, store_temp_max, 2),
-	SENSOR_ATTR(temp3_fault, 0444, show_temp_fault, NULL, 2),
-	SENSOR_ATTR(temp3_alarm, 0444, show_temp_alarm, NULL, 2),
-	SENSOR_ATTR(temp4_input, 0444, show_temp_value, NULL, 3),
-	SENSOR_ATTR(temp4_max, 0644, show_temp_max, store_temp_max, 3),
-	SENSOR_ATTR(temp4_fault, 0444, show_temp_fault, NULL, 3),
-	SENSOR_ATTR(temp4_alarm, 0444, show_temp_alarm, NULL, 3),
-	SENSOR_ATTR(temp5_input, 0444, show_temp_value, NULL, 4),
-	SENSOR_ATTR(temp5_max, 0644, show_temp_max, store_temp_max, 4),
-	SENSOR_ATTR(temp5_fault, 0444, show_temp_fault, NULL, 4),
-	SENSOR_ATTR(temp5_alarm, 0444, show_temp_alarm, NULL, 4),
-	SENSOR_ATTR(temp6_input, 0444, show_temp_value, NULL, 5),
-	SENSOR_ATTR(temp6_max, 0644, show_temp_max, store_temp_max, 5),
-	SENSOR_ATTR(temp6_fault, 0444, show_temp_fault, NULL, 5),
-	SENSOR_ATTR(temp6_alarm, 0444, show_temp_alarm, NULL, 5),
-	SENSOR_ATTR(temp7_input, 0444, show_temp_value, NULL, 6),
-	SENSOR_ATTR(temp7_max, 0644, show_temp_max, store_temp_max, 6),
-	SENSOR_ATTR(temp7_fault, 0444, show_temp_fault, NULL, 6),
-	SENSOR_ATTR(temp7_alarm, 0444, show_temp_alarm, NULL, 6),
-	SENSOR_ATTR(temp8_input, 0444, show_temp_value, NULL, 7),
-	SENSOR_ATTR(temp8_max, 0644, show_temp_max, store_temp_max, 7),
-	SENSOR_ATTR(temp8_fault, 0444, show_temp_fault, NULL, 7),
-	SENSOR_ATTR(temp8_alarm, 0444, show_temp_alarm, NULL, 7),
-	SENSOR_ATTR(temp9_input, 0444, show_temp_value, NULL, 8),
-	SENSOR_ATTR(temp9_max, 0644, show_temp_max, store_temp_max, 8),
-	SENSOR_ATTR(temp9_fault, 0444, show_temp_fault, NULL, 8),
-	SENSOR_ATTR(temp9_alarm, 0444, show_temp_alarm, NULL, 8),
-	SENSOR_ATTR(temp10_input, 0444, show_temp_value, NULL, 9),
-	SENSOR_ATTR(temp10_max, 0644, show_temp_max, store_temp_max, 9),
-	SENSOR_ATTR(temp10_fault, 0444, show_temp_fault, NULL, 9),
-	SENSOR_ATTR(temp10_alarm, 0444, show_temp_alarm, NULL, 9),
-	SENSOR_ATTR(temp11_input, 0444, show_temp_value, NULL, 10),
-	SENSOR_ATTR(temp11_max, 0644, show_temp_max, store_temp_max, 10),
-	SENSOR_ATTR(temp11_fault, 0444, show_temp_fault, NULL, 10),
-	SENSOR_ATTR(temp11_alarm, 0444, show_temp_alarm, NULL, 10),
+	SENSOR_ATTR_RO(temp1_input, temp_value, 0),
+	SENSOR_ATTR_RW(temp1_max, temp_max, 0),
+	SENSOR_ATTR_RO(temp1_fault, temp_fault, 0),
+	SENSOR_ATTR_RO(temp1_alarm, temp_alarm, 0),
+	SENSOR_ATTR_RO(temp2_input, temp_value, 1),
+	SENSOR_ATTR_RW(temp2_max, temp_max, 1),
+	SENSOR_ATTR_RO(temp2_fault, temp_fault, 1),
+	SENSOR_ATTR_RO(temp2_alarm, temp_alarm, 1),
+	SENSOR_ATTR_RO(temp3_input, temp_value, 2),
+	SENSOR_ATTR_RW(temp3_max, temp_max, 2),
+	SENSOR_ATTR_RO(temp3_fault, temp_fault, 2),
+	SENSOR_ATTR_RO(temp3_alarm, temp_alarm, 2),
+	SENSOR_ATTR_RO(temp4_input, temp_value, 3),
+	SENSOR_ATTR_RW(temp4_max, temp_max, 3),
+	SENSOR_ATTR_RO(temp4_fault, temp_fault, 3),
+	SENSOR_ATTR_RO(temp4_alarm, temp_alarm, 3),
+	SENSOR_ATTR_RO(temp5_input, temp_value, 4),
+	SENSOR_ATTR_RW(temp5_max, temp_max, 4),
+	SENSOR_ATTR_RO(temp5_fault, temp_fault, 4),
+	SENSOR_ATTR_RO(temp5_alarm, temp_alarm, 4),
+	SENSOR_ATTR_RO(temp6_input, temp_value, 5),
+	SENSOR_ATTR_RW(temp6_max, temp_max, 5),
+	SENSOR_ATTR_RO(temp6_fault, temp_fault, 5),
+	SENSOR_ATTR_RO(temp6_alarm, temp_alarm, 5),
+	SENSOR_ATTR_RO(temp7_input, temp_value, 6),
+	SENSOR_ATTR_RW(temp7_max, temp_max, 6),
+	SENSOR_ATTR_RO(temp7_fault, temp_fault, 6),
+	SENSOR_ATTR_RO(temp7_alarm, temp_alarm, 6),
+	SENSOR_ATTR_RO(temp8_input, temp_value, 7),
+	SENSOR_ATTR_RW(temp8_max, temp_max, 7),
+	SENSOR_ATTR_RO(temp8_fault, temp_fault, 7),
+	SENSOR_ATTR_RO(temp8_alarm, temp_alarm, 7),
+	SENSOR_ATTR_RO(temp9_input, temp_value, 8),
+	SENSOR_ATTR_RW(temp9_max, temp_max, 8),
+	SENSOR_ATTR_RO(temp9_fault, temp_fault, 8),
+	SENSOR_ATTR_RO(temp9_alarm, temp_alarm, 8),
+	SENSOR_ATTR_RO(temp10_input, temp_value, 9),
+	SENSOR_ATTR_RW(temp10_max, temp_max, 9),
+	SENSOR_ATTR_RO(temp10_fault, temp_fault, 9),
+	SENSOR_ATTR_RO(temp10_alarm, temp_alarm, 9),
+	SENSOR_ATTR_RO(temp11_input, temp_value, 10),
+	SENSOR_ATTR_RW(temp11_max, temp_max, 10),
+	SENSOR_ATTR_RO(temp11_fault, temp_fault, 10),
+	SENSOR_ATTR_RO(temp11_alarm, temp_alarm, 10),
 };
 
 static struct sensor_device_attribute fschmd_fan_attr[] = {
-	SENSOR_ATTR(fan1_input, 0444, show_fan_value, NULL, 0),
-	SENSOR_ATTR(fan1_div, 0644, show_fan_div, store_fan_div, 0),
-	SENSOR_ATTR(fan1_alarm, 0444, show_fan_alarm, NULL, 0),
-	SENSOR_ATTR(fan1_fault, 0444, show_fan_fault, NULL, 0),
-	SENSOR_ATTR(pwm1_auto_point1_pwm, 0644, show_pwm_auto_point1_pwm,
-		store_pwm_auto_point1_pwm, 0),
-	SENSOR_ATTR(fan2_input, 0444, show_fan_value, NULL, 1),
-	SENSOR_ATTR(fan2_div, 0644, show_fan_div, store_fan_div, 1),
-	SENSOR_ATTR(fan2_alarm, 0444, show_fan_alarm, NULL, 1),
-	SENSOR_ATTR(fan2_fault, 0444, show_fan_fault, NULL, 1),
-	SENSOR_ATTR(pwm2_auto_point1_pwm, 0644, show_pwm_auto_point1_pwm,
-		store_pwm_auto_point1_pwm, 1),
-	SENSOR_ATTR(fan3_input, 0444, show_fan_value, NULL, 2),
-	SENSOR_ATTR(fan3_div, 0644, show_fan_div, store_fan_div, 2),
-	SENSOR_ATTR(fan3_alarm, 0444, show_fan_alarm, NULL, 2),
-	SENSOR_ATTR(fan3_fault, 0444, show_fan_fault, NULL, 2),
-	SENSOR_ATTR(pwm3_auto_point1_pwm, 0644, show_pwm_auto_point1_pwm,
-		store_pwm_auto_point1_pwm, 2),
-	SENSOR_ATTR(fan4_input, 0444, show_fan_value, NULL, 3),
-	SENSOR_ATTR(fan4_div, 0644, show_fan_div, store_fan_div, 3),
-	SENSOR_ATTR(fan4_alarm, 0444, show_fan_alarm, NULL, 3),
-	SENSOR_ATTR(fan4_fault, 0444, show_fan_fault, NULL, 3),
-	SENSOR_ATTR(pwm4_auto_point1_pwm, 0644, show_pwm_auto_point1_pwm,
-		store_pwm_auto_point1_pwm, 3),
-	SENSOR_ATTR(fan5_input, 0444, show_fan_value, NULL, 4),
-	SENSOR_ATTR(fan5_div, 0644, show_fan_div, store_fan_div, 4),
-	SENSOR_ATTR(fan5_alarm, 0444, show_fan_alarm, NULL, 4),
-	SENSOR_ATTR(fan5_fault, 0444, show_fan_fault, NULL, 4),
-	SENSOR_ATTR(pwm5_auto_point1_pwm, 0644, show_pwm_auto_point1_pwm,
-		store_pwm_auto_point1_pwm, 4),
-	SENSOR_ATTR(fan6_input, 0444, show_fan_value, NULL, 5),
-	SENSOR_ATTR(fan6_div, 0644, show_fan_div, store_fan_div, 5),
-	SENSOR_ATTR(fan6_alarm, 0444, show_fan_alarm, NULL, 5),
-	SENSOR_ATTR(fan6_fault, 0444, show_fan_fault, NULL, 5),
-	SENSOR_ATTR(pwm6_auto_point1_pwm, 0644, show_pwm_auto_point1_pwm,
-		store_pwm_auto_point1_pwm, 5),
-	SENSOR_ATTR(fan7_input, 0444, show_fan_value, NULL, 6),
-	SENSOR_ATTR(fan7_div, 0644, show_fan_div, store_fan_div, 6),
-	SENSOR_ATTR(fan7_alarm, 0444, show_fan_alarm, NULL, 6),
-	SENSOR_ATTR(fan7_fault, 0444, show_fan_fault, NULL, 6),
-	SENSOR_ATTR(pwm7_auto_point1_pwm, 0644, show_pwm_auto_point1_pwm,
-		store_pwm_auto_point1_pwm, 6),
+	SENSOR_ATTR_RO(fan1_input, fan_value, 0),
+	SENSOR_ATTR_RW(fan1_div, fan_div, 0),
+	SENSOR_ATTR_RO(fan1_alarm, fan_alarm, 0),
+	SENSOR_ATTR_RO(fan1_fault, fan_fault, 0),
+	SENSOR_ATTR_RW(pwm1_auto_point1_pwm, pwm_auto_point1_pwm, 0),
+	SENSOR_ATTR_RO(fan2_input, fan_value, 1),
+	SENSOR_ATTR_RW(fan2_div, fan_div, 1),
+	SENSOR_ATTR_RO(fan2_alarm, fan_alarm, 1),
+	SENSOR_ATTR_RO(fan2_fault, fan_fault, 1),
+	SENSOR_ATTR_RW(pwm2_auto_point1_pwm, pwm_auto_point1_pwm, 1),
+	SENSOR_ATTR_RO(fan3_input, fan_value, 2),
+	SENSOR_ATTR_RW(fan3_div, fan_div, 2),
+	SENSOR_ATTR_RO(fan3_alarm, fan_alarm, 2),
+	SENSOR_ATTR_RO(fan3_fault, fan_fault, 2),
+	SENSOR_ATTR_RW(pwm3_auto_point1_pwm, pwm_auto_point1_pwm, 2),
+	SENSOR_ATTR_RO(fan4_input, fan_value, 3),
+	SENSOR_ATTR_RW(fan4_div, fan_div, 3),
+	SENSOR_ATTR_RO(fan4_alarm, fan_alarm, 3),
+	SENSOR_ATTR_RO(fan4_fault, fan_fault, 3),
+	SENSOR_ATTR_RW(pwm4_auto_point1_pwm, pwm_auto_point1_pwm, 3),
+	SENSOR_ATTR_RO(fan5_input, fan_value, 4),
+	SENSOR_ATTR_RW(fan5_div, fan_div, 4),
+	SENSOR_ATTR_RO(fan5_alarm, fan_alarm, 4),
+	SENSOR_ATTR_RO(fan5_fault, fan_fault, 4),
+	SENSOR_ATTR_RW(pwm5_auto_point1_pwm, pwm_auto_point1_pwm, 4),
+	SENSOR_ATTR_RO(fan6_input, fan_value, 5),
+	SENSOR_ATTR_RW(fan6_div, fan_div, 5),
+	SENSOR_ATTR_RO(fan6_alarm, fan_alarm, 5),
+	SENSOR_ATTR_RO(fan6_fault, fan_fault, 5),
+	SENSOR_ATTR_RW(pwm6_auto_point1_pwm, pwm_auto_point1_pwm, 5),
+	SENSOR_ATTR_RO(fan7_input, fan_value, 6),
+	SENSOR_ATTR_RW(fan7_div, fan_div, 6),
+	SENSOR_ATTR_RO(fan7_alarm, fan_alarm, 6),
+	SENSOR_ATTR_RO(fan7_fault, fan_fault, 6),
+	SENSOR_ATTR_RW(pwm7_auto_point1_pwm, pwm_auto_point1_pwm, 6),
 };
 
 
@@ -1169,7 +1166,7 @@ static int fschmd_probe(struct i2c_client *client,
 	for (i = 0; i < (FSCHMD_NO_TEMP_SENSORS[data->kind] * 4); i++) {
 		/* Poseidon doesn't have TEMP_LIMIT registers */
 		if (kind == fscpos && fschmd_temp_attr[i].dev_attr.show ==
-			show_temp_max)
+			temp_max_show)
 			continue;
 
 		if (kind == fscsyl) {
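A side effect of renaming callbacks is visible in the fschmd probe hunk just above: code that identifies attributes by comparing their ->show pointer must be updated in lockstep, or the comparison would never match again. Reduced to a sketch (my_attrs and unsupported_show are placeholders):

	/* Sketch of the idiom: skip creating sysfs files whose ->show
	 * callback matches a known handler. After a rename the old symbol
	 * no longer exists, so the compiler forces this site to be fixed --
	 * one reason such mechanical renames are reasonably safe.
	 */
	for (i = 0; i < ARRAY_SIZE(my_attrs); i++) {
		if (my_attrs[i].dev_attr.show == unsupported_show)
			continue;	/* not present on this chip variant */
		err = device_create_file(dev, &my_attrs[i].dev_attr);
		if (err)
			break;
	}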
diff --git a/drivers/hwmon/ftsteutates.c b/drivers/hwmon/ftsteutates.c
index 0801f48a41f7..ca8f4481264b 100644
--- a/drivers/hwmon/ftsteutates.c
+++ b/drivers/hwmon/ftsteutates.c
@@ -352,7 +352,7 @@ static int fts_watchdog_init(struct fts_data *data)
 /*****************************************************************************/
 /* SysFS handler functions */
 /*****************************************************************************/
-static ssize_t show_in_value(struct device *dev,
+static ssize_t in_value_show(struct device *dev,
 			     struct device_attribute *devattr, char *buf)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -366,7 +366,7 @@ static ssize_t show_in_value(struct device *dev,
 	return sprintf(buf, "%u\n", data->volt[index]);
 }
 
-static ssize_t show_temp_value(struct device *dev,
+static ssize_t temp_value_show(struct device *dev,
 			       struct device_attribute *devattr, char *buf)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -380,7 +380,7 @@ static ssize_t show_temp_value(struct device *dev,
 	return sprintf(buf, "%u\n", data->temp_input[index]);
 }
 
-static ssize_t show_temp_fault(struct device *dev,
+static ssize_t temp_fault_show(struct device *dev,
 			       struct device_attribute *devattr, char *buf)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -395,7 +395,7 @@ static ssize_t show_temp_fault(struct device *dev,
 	return sprintf(buf, "%d\n", data->temp_input[index] == 0);
 }
 
-static ssize_t show_temp_alarm(struct device *dev,
+static ssize_t temp_alarm_show(struct device *dev,
 			       struct device_attribute *devattr, char *buf)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -410,7 +410,7 @@ static ssize_t show_temp_alarm(struct device *dev,
 }
 
 static ssize_t
-clear_temp_alarm(struct device *dev, struct device_attribute *devattr,
+temp_alarm_store(struct device *dev, struct device_attribute *devattr,
 		 const char *buf, size_t count)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -441,7 +441,7 @@ error:
 	return ret;
 }
 
-static ssize_t show_fan_value(struct device *dev,
+static ssize_t fan_value_show(struct device *dev,
 			      struct device_attribute *devattr, char *buf)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -455,7 +455,7 @@ static ssize_t show_fan_value(struct device *dev,
 	return sprintf(buf, "%u\n", data->fan_input[index]);
 }
 
-static ssize_t show_fan_source(struct device *dev,
+static ssize_t fan_source_show(struct device *dev,
 			       struct device_attribute *devattr, char *buf)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -469,7 +469,7 @@ static ssize_t show_fan_source(struct device *dev,
 	return sprintf(buf, "%u\n", data->fan_source[index]);
 }
 
-static ssize_t show_fan_alarm(struct device *dev,
+static ssize_t fan_alarm_show(struct device *dev,
 			      struct device_attribute *devattr, char *buf)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -484,7 +484,7 @@ static ssize_t show_fan_alarm(struct device *dev,
 }
 
 static ssize_t
-clear_fan_alarm(struct device *dev, struct device_attribute *devattr,
+fan_alarm_store(struct device *dev, struct device_attribute *devattr,
 		const char *buf, size_t count)
 {
 	struct fts_data *data = dev_get_drvdata(dev);
@@ -520,72 +520,56 @@ error:
 
 /*****************************************************************************/
 /* Temprature sensors */
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp_value, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp_value, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_temp_value, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_temp_value, NULL, 3);
-static SENSOR_DEVICE_ATTR(temp5_input, S_IRUGO, show_temp_value, NULL, 4);
-static SENSOR_DEVICE_ATTR(temp6_input, S_IRUGO, show_temp_value, NULL, 5);
-static SENSOR_DEVICE_ATTR(temp7_input, S_IRUGO, show_temp_value, NULL, 6);
-static SENSOR_DEVICE_ATTR(temp8_input, S_IRUGO, show_temp_value, NULL, 7);
-static SENSOR_DEVICE_ATTR(temp9_input, S_IRUGO, show_temp_value, NULL, 8);
-static SENSOR_DEVICE_ATTR(temp10_input, S_IRUGO, show_temp_value, NULL, 9);
-static SENSOR_DEVICE_ATTR(temp11_input, S_IRUGO, show_temp_value, NULL, 10);
-static SENSOR_DEVICE_ATTR(temp12_input, S_IRUGO, show_temp_value, NULL, 11);
-static SENSOR_DEVICE_ATTR(temp13_input, S_IRUGO, show_temp_value, NULL, 12);
-static SENSOR_DEVICE_ATTR(temp14_input, S_IRUGO, show_temp_value, NULL, 13);
-static SENSOR_DEVICE_ATTR(temp15_input, S_IRUGO, show_temp_value, NULL, 14);
-static SENSOR_DEVICE_ATTR(temp16_input, S_IRUGO, show_temp_value, NULL, 15);
-
-static SENSOR_DEVICE_ATTR(temp1_fault, S_IRUGO, show_temp_fault, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp2_fault, S_IRUGO, show_temp_fault, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp3_fault, S_IRUGO, show_temp_fault, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp4_fault, S_IRUGO, show_temp_fault, NULL, 3);
-static SENSOR_DEVICE_ATTR(temp5_fault, S_IRUGO, show_temp_fault, NULL, 4);
-static SENSOR_DEVICE_ATTR(temp6_fault, S_IRUGO, show_temp_fault, NULL, 5);
-static SENSOR_DEVICE_ATTR(temp7_fault, S_IRUGO, show_temp_fault, NULL, 6);
-static SENSOR_DEVICE_ATTR(temp8_fault, S_IRUGO, show_temp_fault, NULL, 7);
-static SENSOR_DEVICE_ATTR(temp9_fault, S_IRUGO, show_temp_fault, NULL, 8);
-static SENSOR_DEVICE_ATTR(temp10_fault, S_IRUGO, show_temp_fault, NULL, 9);
-static SENSOR_DEVICE_ATTR(temp11_fault, S_IRUGO, show_temp_fault, NULL, 10);
-static SENSOR_DEVICE_ATTR(temp12_fault, S_IRUGO, show_temp_fault, NULL, 11);
-static SENSOR_DEVICE_ATTR(temp13_fault, S_IRUGO, show_temp_fault, NULL, 12);
-static SENSOR_DEVICE_ATTR(temp14_fault, S_IRUGO, show_temp_fault, NULL, 13);
-static SENSOR_DEVICE_ATTR(temp15_fault, S_IRUGO, show_temp_fault, NULL, 14);
-static SENSOR_DEVICE_ATTR(temp16_fault, S_IRUGO, show_temp_fault, NULL, 15);
-
-static SENSOR_DEVICE_ATTR(temp1_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 0);
-static SENSOR_DEVICE_ATTR(temp2_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 1);
-static SENSOR_DEVICE_ATTR(temp3_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 2);
-static SENSOR_DEVICE_ATTR(temp4_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 3);
-static SENSOR_DEVICE_ATTR(temp5_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 4);
-static SENSOR_DEVICE_ATTR(temp6_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 5);
-static SENSOR_DEVICE_ATTR(temp7_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 6);
-static SENSOR_DEVICE_ATTR(temp8_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 7);
-static SENSOR_DEVICE_ATTR(temp9_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 8);
-static SENSOR_DEVICE_ATTR(temp10_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 9);
-static SENSOR_DEVICE_ATTR(temp11_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 10);
-static SENSOR_DEVICE_ATTR(temp12_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 11);
-static SENSOR_DEVICE_ATTR(temp13_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 12);
-static SENSOR_DEVICE_ATTR(temp14_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 13);
-static SENSOR_DEVICE_ATTR(temp15_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 14);
-static SENSOR_DEVICE_ATTR(temp16_alarm, S_IRUGO | S_IWUSR, show_temp_alarm,
-	clear_temp_alarm, 15);
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp_value, 0);
+static SENSOR_DEVICE_ATTR_RO(temp2_input, temp_value, 1);
+static SENSOR_DEVICE_ATTR_RO(temp3_input, temp_value, 2);
+static SENSOR_DEVICE_ATTR_RO(temp4_input, temp_value, 3);
+static SENSOR_DEVICE_ATTR_RO(temp5_input, temp_value, 4);
+static SENSOR_DEVICE_ATTR_RO(temp6_input, temp_value, 5);
+static SENSOR_DEVICE_ATTR_RO(temp7_input, temp_value, 6);
+static SENSOR_DEVICE_ATTR_RO(temp8_input, temp_value, 7);
+static SENSOR_DEVICE_ATTR_RO(temp9_input, temp_value, 8);
+static SENSOR_DEVICE_ATTR_RO(temp10_input, temp_value, 9);
+static SENSOR_DEVICE_ATTR_RO(temp11_input, temp_value, 10);
+static SENSOR_DEVICE_ATTR_RO(temp12_input, temp_value, 11);
+static SENSOR_DEVICE_ATTR_RO(temp13_input, temp_value, 12);
+static SENSOR_DEVICE_ATTR_RO(temp14_input, temp_value, 13);
+static SENSOR_DEVICE_ATTR_RO(temp15_input, temp_value, 14);
+static SENSOR_DEVICE_ATTR_RO(temp16_input, temp_value, 15);
+
+static SENSOR_DEVICE_ATTR_RO(temp1_fault, temp_fault, 0);
+static SENSOR_DEVICE_ATTR_RO(temp2_fault, temp_fault, 1);
+static SENSOR_DEVICE_ATTR_RO(temp3_fault, temp_fault, 2);
+static SENSOR_DEVICE_ATTR_RO(temp4_fault, temp_fault, 3);
+static SENSOR_DEVICE_ATTR_RO(temp5_fault, temp_fault, 4);
+static SENSOR_DEVICE_ATTR_RO(temp6_fault, temp_fault, 5);
+static SENSOR_DEVICE_ATTR_RO(temp7_fault, temp_fault, 6);
+static SENSOR_DEVICE_ATTR_RO(temp8_fault, temp_fault, 7);
+static SENSOR_DEVICE_ATTR_RO(temp9_fault, temp_fault, 8);
+static SENSOR_DEVICE_ATTR_RO(temp10_fault, temp_fault, 9);
+static SENSOR_DEVICE_ATTR_RO(temp11_fault, temp_fault, 10);
+static SENSOR_DEVICE_ATTR_RO(temp12_fault, temp_fault, 11);
+static SENSOR_DEVICE_ATTR_RO(temp13_fault, temp_fault, 12);
+static SENSOR_DEVICE_ATTR_RO(temp14_fault, temp_fault, 13);
+static SENSOR_DEVICE_ATTR_RO(temp15_fault, temp_fault, 14);
+static SENSOR_DEVICE_ATTR_RO(temp16_fault, temp_fault, 15);
+
+static SENSOR_DEVICE_ATTR_RW(temp1_alarm, temp_alarm, 0);
+static SENSOR_DEVICE_ATTR_RW(temp2_alarm, temp_alarm, 1);
+static SENSOR_DEVICE_ATTR_RW(temp3_alarm, temp_alarm, 2);
+static SENSOR_DEVICE_ATTR_RW(temp4_alarm, temp_alarm, 3);
+static SENSOR_DEVICE_ATTR_RW(temp5_alarm, temp_alarm, 4);
+static SENSOR_DEVICE_ATTR_RW(temp6_alarm, temp_alarm, 5);
+static SENSOR_DEVICE_ATTR_RW(temp7_alarm, temp_alarm, 6);
+static SENSOR_DEVICE_ATTR_RW(temp8_alarm, temp_alarm, 7);
+static SENSOR_DEVICE_ATTR_RW(temp9_alarm, temp_alarm, 8);
+static SENSOR_DEVICE_ATTR_RW(temp10_alarm, temp_alarm, 9);
+static SENSOR_DEVICE_ATTR_RW(temp11_alarm, temp_alarm, 10);
+static SENSOR_DEVICE_ATTR_RW(temp12_alarm, temp_alarm, 11);
+static SENSOR_DEVICE_ATTR_RW(temp13_alarm, temp_alarm, 12);
+static SENSOR_DEVICE_ATTR_RW(temp14_alarm, temp_alarm, 13);
+static SENSOR_DEVICE_ATTR_RW(temp15_alarm, temp_alarm, 14);
+static SENSOR_DEVICE_ATTR_RW(temp16_alarm, temp_alarm, 15);
 
 static struct attribute *fts_temp_attrs[] = {
 	&sensor_dev_attr_temp1_input.dev_attr.attr,
@@ -642,40 +626,32 @@ static struct attribute *fts_temp_attrs[] = {
 };
 
 /* Fans */
-static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, show_fan_value, NULL, 0);
-static SENSOR_DEVICE_ATTR(fan2_input, S_IRUGO, show_fan_value, NULL, 1);
-static SENSOR_DEVICE_ATTR(fan3_input, S_IRUGO, show_fan_value, NULL, 2);
-static SENSOR_DEVICE_ATTR(fan4_input, S_IRUGO, show_fan_value, NULL, 3);
-static SENSOR_DEVICE_ATTR(fan5_input, S_IRUGO, show_fan_value, NULL, 4);
-static SENSOR_DEVICE_ATTR(fan6_input, S_IRUGO, show_fan_value, NULL, 5);
-static SENSOR_DEVICE_ATTR(fan7_input, S_IRUGO, show_fan_value, NULL, 6);
-static SENSOR_DEVICE_ATTR(fan8_input, S_IRUGO, show_fan_value, NULL, 7);
-
-static SENSOR_DEVICE_ATTR(fan1_source, S_IRUGO, show_fan_source, NULL, 0);
-static SENSOR_DEVICE_ATTR(fan2_source, S_IRUGO, show_fan_source, NULL, 1);
-static SENSOR_DEVICE_ATTR(fan3_source, S_IRUGO, show_fan_source, NULL, 2);
-static SENSOR_DEVICE_ATTR(fan4_source, S_IRUGO, show_fan_source, NULL, 3);
-static SENSOR_DEVICE_ATTR(fan5_source, S_IRUGO, show_fan_source, NULL, 4);
-static SENSOR_DEVICE_ATTR(fan6_source, S_IRUGO, show_fan_source, NULL, 5);
-static SENSOR_DEVICE_ATTR(fan7_source, S_IRUGO, show_fan_source, NULL, 6);
-static SENSOR_DEVICE_ATTR(fan8_source, S_IRUGO, show_fan_source, NULL, 7);
-
-static SENSOR_DEVICE_ATTR(fan1_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 0);
-static SENSOR_DEVICE_ATTR(fan2_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 1);
-static SENSOR_DEVICE_ATTR(fan3_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 2);
-static SENSOR_DEVICE_ATTR(fan4_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 3);
-static SENSOR_DEVICE_ATTR(fan5_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 4);
-static SENSOR_DEVICE_ATTR(fan6_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 5);
-static SENSOR_DEVICE_ATTR(fan7_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 6);
-static SENSOR_DEVICE_ATTR(fan8_alarm, S_IRUGO | S_IWUSR,
-	show_fan_alarm, clear_fan_alarm, 7);
+static SENSOR_DEVICE_ATTR_RO(fan1_input, fan_value, 0);
+static SENSOR_DEVICE_ATTR_RO(fan2_input, fan_value, 1);
+static SENSOR_DEVICE_ATTR_RO(fan3_input, fan_value, 2);
+static SENSOR_DEVICE_ATTR_RO(fan4_input, fan_value, 3);
+static SENSOR_DEVICE_ATTR_RO(fan5_input, fan_value, 4);
+static SENSOR_DEVICE_ATTR_RO(fan6_input, fan_value, 5);
+static SENSOR_DEVICE_ATTR_RO(fan7_input, fan_value, 6);
+static SENSOR_DEVICE_ATTR_RO(fan8_input, fan_value, 7);
+
+static SENSOR_DEVICE_ATTR_RO(fan1_source, fan_source, 0);
+static SENSOR_DEVICE_ATTR_RO(fan2_source, fan_source, 1);
+static SENSOR_DEVICE_ATTR_RO(fan3_source, fan_source, 2);
+static SENSOR_DEVICE_ATTR_RO(fan4_source, fan_source, 3);
+static SENSOR_DEVICE_ATTR_RO(fan5_source, fan_source, 4);
+static SENSOR_DEVICE_ATTR_RO(fan6_source, fan_source, 5);
+static SENSOR_DEVICE_ATTR_RO(fan7_source, fan_source, 6);
+static SENSOR_DEVICE_ATTR_RO(fan8_source, fan_source, 7);
+
+static SENSOR_DEVICE_ATTR_RW(fan1_alarm, fan_alarm, 0);
+static SENSOR_DEVICE_ATTR_RW(fan2_alarm, fan_alarm, 1);
+static SENSOR_DEVICE_ATTR_RW(fan3_alarm, fan_alarm, 2);
+static SENSOR_DEVICE_ATTR_RW(fan4_alarm, fan_alarm, 3);
+static SENSOR_DEVICE_ATTR_RW(fan5_alarm, fan_alarm, 4);
+static SENSOR_DEVICE_ATTR_RW(fan6_alarm, fan_alarm, 5);
+static SENSOR_DEVICE_ATTR_RW(fan7_alarm, fan_alarm, 6);
+static SENSOR_DEVICE_ATTR_RW(fan8_alarm, fan_alarm, 7);
 
 static struct attribute *fts_fan_attrs[] = {
 	&sensor_dev_attr_fan1_input.dev_attr.attr,
@@ -708,10 +684,10 @@ static struct attribute *fts_fan_attrs[] = {
 };
 
 /* Voltages */
-static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, show_in_value, NULL, 0);
-static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, show_in_value, NULL, 1);
-static SENSOR_DEVICE_ATTR(in3_input, S_IRUGO, show_in_value, NULL, 2);
-static SENSOR_DEVICE_ATTR(in4_input, S_IRUGO, show_in_value, NULL, 3);
+static SENSOR_DEVICE_ATTR_RO(in1_input, in_value, 0);
+static SENSOR_DEVICE_ATTR_RO(in2_input, in_value, 1);
+static SENSOR_DEVICE_ATTR_RO(in3_input, in_value, 2);
+static SENSOR_DEVICE_ATTR_RO(in4_input, in_value, 3);
 static struct attribute *fts_voltage_attrs[] = {
 	&sensor_dev_attr_in1_input.dev_attr.attr,
 	&sensor_dev_attr_in2_input.dev_attr.attr,
diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
index 84f61cec6319..36ed50d4b276 100644
--- a/drivers/hwmon/hwmon.c
+++ b/drivers/hwmon/hwmon.c
@@ -267,7 +267,7 @@ static struct attribute *hwmon_genattr(struct device *dev,
 	struct device_attribute *dattr;
 	struct attribute *a;
 	umode_t mode;
-	char *name;
+	const char *name;
 	bool is_string = is_string_attr(type, attr);
 
 	/* The attribute is invisible if there is no template string */
@@ -289,7 +289,7 @@ static struct attribute *hwmon_genattr(struct device *dev,
 		return ERR_PTR(-ENOMEM);
 
 	if (type == hwmon_chip) {
-		name = (char *)template;
+		name = template;
 	} else {
 		scnprintf(hattr->name, sizeof(hattr->name), template,
 			  index + hwmon_attr_base(type));
diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c
index 07ee19573b3f..290379c49be9 100644
--- a/drivers/hwmon/ina2xx.c
+++ b/drivers/hwmon/ina2xx.c
@@ -290,7 +290,7 @@ static int ina2xx_get_value(struct ina2xx_data *data, u8 reg,
 	return val;
 }
 
-static ssize_t ina2xx_show_value(struct device *dev,
+static ssize_t ina2xx_value_show(struct device *dev,
 				 struct device_attribute *da, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(da);
@@ -329,16 +329,15 @@ static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
 	return 0;
 }
 
-static ssize_t ina2xx_show_shunt(struct device *dev,
-				 struct device_attribute *da,
-				 char *buf)
+static ssize_t ina2xx_shunt_show(struct device *dev,
+				 struct device_attribute *da, char *buf)
 {
 	struct ina2xx_data *data = dev_get_drvdata(dev);
 
 	return snprintf(buf, PAGE_SIZE, "%li\n", data->rshunt);
 }
 
-static ssize_t ina2xx_store_shunt(struct device *dev,
+static ssize_t ina2xx_shunt_store(struct device *dev,
 				  struct device_attribute *da,
 				  const char *buf, size_t count)
 {
@@ -356,9 +355,9 @@ static ssize_t ina2xx_store_shunt(struct device *dev,
 	return count;
 }
 
-static ssize_t ina226_set_interval(struct device *dev,
-				   struct device_attribute *da,
-				   const char *buf, size_t count)
+static ssize_t ina226_interval_store(struct device *dev,
+				     struct device_attribute *da,
+				     const char *buf, size_t count)
 {
 	struct ina2xx_data *data = dev_get_drvdata(dev);
 	unsigned long val;
@@ -380,7 +379,7 @@ static ssize_t ina226_set_interval(struct device *dev,
 	return count;
 }
 
-static ssize_t ina226_show_interval(struct device *dev,
+static ssize_t ina226_interval_show(struct device *dev,
 				    struct device_attribute *da, char *buf)
 {
 	struct ina2xx_data *data = dev_get_drvdata(dev);
@@ -395,29 +394,22 @@ static ssize_t ina226_show_interval(struct device *dev,
 }
 
 /* shunt voltage */
-static SENSOR_DEVICE_ATTR(in0_input, S_IRUGO, ina2xx_show_value, NULL,
-			  INA2XX_SHUNT_VOLTAGE);
+static SENSOR_DEVICE_ATTR_RO(in0_input, ina2xx_value, INA2XX_SHUNT_VOLTAGE);
 
 /* bus voltage */
-static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, ina2xx_show_value, NULL,
-			  INA2XX_BUS_VOLTAGE);
+static SENSOR_DEVICE_ATTR_RO(in1_input, ina2xx_value, INA2XX_BUS_VOLTAGE);
 
 /* calculated current */
-static SENSOR_DEVICE_ATTR(curr1_input, S_IRUGO, ina2xx_show_value, NULL,
-			  INA2XX_CURRENT);
+static SENSOR_DEVICE_ATTR_RO(curr1_input, ina2xx_value, INA2XX_CURRENT);
 
 /* calculated power */
-static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
-			  INA2XX_POWER);
+static SENSOR_DEVICE_ATTR_RO(power1_input, ina2xx_value, INA2XX_POWER);
 
 /* shunt resistance */
-static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
-			  ina2xx_show_shunt, ina2xx_store_shunt,
-			  INA2XX_CALIBRATION);
+static SENSOR_DEVICE_ATTR_RW(shunt_resistor, ina2xx_shunt, INA2XX_CALIBRATION);
 
 /* update interval (ina226 only) */
-static SENSOR_DEVICE_ATTR(update_interval, S_IRUGO | S_IWUSR,
-			  ina226_show_interval, ina226_set_interval, 0);
+static SENSOR_DEVICE_ATTR_RW(update_interval, ina226_interval, 0);
 
 /* pointers to created device attributes */
 static struct attribute *ina2xx_attrs[] = {
diff --git a/drivers/hwmon/ina3221.c b/drivers/hwmon/ina3221.c
index d61688f04594..e90ccac8bebb 100644
--- a/drivers/hwmon/ina3221.c
+++ b/drivers/hwmon/ina3221.c
@@ -18,7 +18,9 @@
 #include <linux/hwmon-sysfs.h>
 #include <linux/i2c.h>
 #include <linux/module.h>
+#include <linux/mutex.h>
 #include <linux/of.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 
 #define INA3221_DRIVER_NAME		"ina3221"
@@ -43,14 +45,25 @@
 #define INA3221_CONFIG_MODE_SHUNT	BIT(0)
 #define INA3221_CONFIG_MODE_BUS		BIT(1)
 #define INA3221_CONFIG_MODE_CONTINUOUS	BIT(2)
+#define INA3221_CONFIG_VSH_CT_SHIFT	3
+#define INA3221_CONFIG_VSH_CT_MASK	GENMASK(5, 3)
+#define INA3221_CONFIG_VSH_CT(x)	(((x) & GENMASK(5, 3)) >> 3)
+#define INA3221_CONFIG_VBUS_CT_SHIFT	6
+#define INA3221_CONFIG_VBUS_CT_MASK	GENMASK(8, 6)
+#define INA3221_CONFIG_VBUS_CT(x)	(((x) & GENMASK(8, 6)) >> 6)
+#define INA3221_CONFIG_CHs_EN_MASK	GENMASK(14, 12)
 #define INA3221_CONFIG_CHx_EN(x)	BIT(14 - (x))
+#define INA3221_CONFIG_DEFAULT		0x7127
 
 #define INA3221_RSHUNT_DEFAULT		10000
 
 enum ina3221_fields {
 	/* Configuration */
 	F_RST,
 
+	/* Status Flags */
+	F_CVRF,
+
 	/* Alert Flags */
 	F_WF3, F_WF2, F_WF1,
 	F_CF3, F_CF2, F_CF1,
@@ -62,6 +75,7 @@ enum ina3221_fields {
 
 static const struct reg_field ina3221_reg_fields[] = {
 	[F_RST] = REG_FIELD(INA3221_CONFIG, 15, 15),
+	[F_CVRF] = REG_FIELD(INA3221_MASK_ENABLE, 0, 0),
 	[F_WF3] = REG_FIELD(INA3221_MASK_ENABLE, 3, 3),
 	[F_WF2] = REG_FIELD(INA3221_MASK_ENABLE, 4, 4),
 	[F_WF1] = REG_FIELD(INA3221_MASK_ENABLE, 5, 5),
@@ -91,21 +105,48 @@ struct ina3221_input {
 
 /**
  * struct ina3221_data - device specific information
+ * @pm_dev: Device pointer for pm runtime
  * @regmap: Register map of the device
 * @fields: Register fields of the device
 * @inputs: Array of channel input source specific structures
+ * @lock: mutex lock to serialize sysfs attribute accesses
 * @reg_config: Register value of INA3221_CONFIG
 */
 struct ina3221_data {
+	struct device *pm_dev;
 	struct regmap *regmap;
 	struct regmap_field *fields[F_MAX_FIELDS];
 	struct ina3221_input inputs[INA3221_NUM_CHANNELS];
+	struct mutex lock;
 	u32 reg_config;
 };
 
 static inline bool ina3221_is_enabled(struct ina3221_data *ina, int channel)
 {
-	return ina->reg_config & INA3221_CONFIG_CHx_EN(channel);
+	return pm_runtime_active(ina->pm_dev) &&
+	       (ina->reg_config & INA3221_CONFIG_CHx_EN(channel));
+}
+
+/* Lookup table for Bus and Shunt conversion times in usec */
+static const u16 ina3221_conv_time[] = {
+	140, 204, 332, 588, 1100, 2116, 4156, 8244,
+};
+
+static inline int ina3221_wait_for_data(struct ina3221_data *ina)
+{
+	u32 channels = hweight16(ina->reg_config & INA3221_CONFIG_CHs_EN_MASK);
+	u32 vbus_ct_idx = INA3221_CONFIG_VBUS_CT(ina->reg_config);
+	u32 vsh_ct_idx = INA3221_CONFIG_VSH_CT(ina->reg_config);
+	u32 vbus_ct = ina3221_conv_time[vbus_ct_idx];
+	u32 vsh_ct = ina3221_conv_time[vsh_ct_idx];
+	u32 wait, cvrf;
+
+	/* Calculate total conversion time */
+	wait = channels * (vbus_ct + vsh_ct);
+
+	/* Polling the CVRF bit to make sure read data is ready */
+	return regmap_field_read_poll_timeout(ina->fields[F_CVRF],
+					      cvrf, cvrf, wait, 100000);
 }
 
 static int ina3221_read_value(struct ina3221_data *ina, unsigned int reg,
@@ -147,6 +188,10 @@ static int ina3221_read_in(struct device *dev, u32 attr, int channel, long *val)
 	if (!ina3221_is_enabled(ina, channel))
 		return -ENODATA;
 
+	ret = ina3221_wait_for_data(ina);
+	if (ret)
+		return ret;
+
 	ret = ina3221_read_value(ina, reg, &regval);
 	if (ret)
 		return ret;
@@ -186,6 +231,11 @@ static int ina3221_read_curr(struct device *dev, u32 attr,
 	case hwmon_curr_input:
 		if (!ina3221_is_enabled(ina, channel))
 			return -ENODATA;
+
+		ret = ina3221_wait_for_data(ina);
+		if (ret)
+			return ret;
+
 		/* fall through */
 	case hwmon_curr_crit:
 	case hwmon_curr_max:
@@ -200,6 +250,12 @@ static int ina3221_read_curr(struct device *dev, u32 attr,
 		return 0;
 	case hwmon_curr_crit_alarm:
 	case hwmon_curr_max_alarm:
+		/* No actual register read if channel is disabled */
+		if (!ina3221_is_enabled(ina, channel)) {
+			/* Return 0 for alert flags */
+			*val = 0;
+			return 0;
+		}
 		ret = regmap_field_read(ina->fields[reg], &regval);
 		if (ret)
 			return ret;
@@ -239,49 +295,100 @@ static int ina3221_write_enable(struct device *dev, int channel, bool enable)
 {
 	struct ina3221_data *ina = dev_get_drvdata(dev);
 	u16 config, mask = INA3221_CONFIG_CHx_EN(channel);
+	u16 config_old = ina->reg_config & mask;
 	int ret;
 
 	config = enable ? mask : 0;
 
+	/* Bypass if enable status is not being changed */
+	if (config_old == config)
+		return 0;
+
+	/* For enabling routine, increase refcount and resume() at first */
+	if (enable) {
+		ret = pm_runtime_get_sync(ina->pm_dev);
+		if (ret < 0) {
+			dev_err(dev, "Failed to get PM runtime\n");
+			return ret;
+		}
+	}
+
 	/* Enable or disable the channel */
 	ret = regmap_update_bits(ina->regmap, INA3221_CONFIG, mask, config);
 	if (ret)
-		return ret;
+		goto fail;
 
 	/* Cache the latest config register value */
 	ret = regmap_read(ina->regmap, INA3221_CONFIG, &ina->reg_config);
 	if (ret)
-		return ret;
+		goto fail;
+
+	/* For disabling routine, decrease refcount or suspend() at last */
+	if (!enable)
+		pm_runtime_put_sync(ina->pm_dev);
 
 	return 0;
+
+fail:
+	if (enable) {
+		dev_err(dev, "Failed to enable channel %d: error %d\n",
+			channel, ret);
+		pm_runtime_put_sync(ina->pm_dev);
+	}
+
+	return ret;
 }
 
 static int ina3221_read(struct device *dev, enum hwmon_sensor_types type,
			u32 attr, int channel, long *val)
 {
+	struct ina3221_data *ina = dev_get_drvdata(dev);
+	int ret;
+
+	mutex_lock(&ina->lock);
+
 	switch (type) {
 	case hwmon_in:
 		/* 0-align channel ID */
-		return ina3221_read_in(dev, attr, channel - 1, val);
+		ret = ina3221_read_in(dev, attr, channel - 1, val);
+		break;
 	case hwmon_curr:
-		return ina3221_read_curr(dev, attr, channel, val);
+		ret = ina3221_read_curr(dev, attr, channel, val);
+		break;
 	default:
-		return -EOPNOTSUPP;
+		ret = -EOPNOTSUPP;
+		break;
 	}
+
+	mutex_unlock(&ina->lock);
+
+	return ret;
 }
 
 static int ina3221_write(struct device *dev, enum hwmon_sensor_types type,
			 u32 attr, int channel, long val)
 {
+	struct ina3221_data *ina = dev_get_drvdata(dev);
+	int ret;
+
+	mutex_lock(&ina->lock);
+
 	switch (type) {
 	case hwmon_in:
 		/* 0-align channel ID */
-		return ina3221_write_enable(dev, channel - 1, val);
+		ret = ina3221_write_enable(dev, channel - 1, val);
+		break;
 	case hwmon_curr:
-		return ina3221_write_curr(dev, attr, channel, val);
+		ret = ina3221_write_curr(dev, attr, channel, val);
+		break;
 	default:
-		return -EOPNOTSUPP;
+		ret = -EOPNOTSUPP;
+		break;
 	}
+
+	mutex_unlock(&ina->lock);
+
+	return ret;
 }
 
 static int ina3221_read_string(struct device *dev, enum hwmon_sensor_types type,
@@ -469,10 +576,10 @@ static int ina3221_probe_child_from_dt(struct device *dev,
 
 	ret = of_property_read_u32(child, "reg", &val);
 	if (ret) {
-		dev_err(dev, "missing reg property of %s\n", child->name);
+		dev_err(dev, "missing reg property of %pOFn\n", child);
 		return ret;
 	} else if (val > INA3221_CHANNEL3) {
-		dev_err(dev, "invalid reg %d of %s\n", val, child->name);
+		dev_err(dev, "invalid reg %d of %pOFn\n", val, child);
 		return ret;
 	}
 
@@ -490,8 +597,8 @@ static int ina3221_probe_child_from_dt(struct device *dev,
 	/* Overwrite default shunt resistor value optionally */
 	if (!of_property_read_u32(child, "shunt-resistor-micro-ohms", &val)) {
 		if (val < 1 || val > INT_MAX) {
-			dev_err(dev, "invalid shunt resistor value %u of %s\n",
-				val, child->name);
+			dev_err(dev, "invalid shunt resistor value %u of %pOFn\n",
+				val, child);
 			return -EINVAL;
 		}
 		input->shunt_resistor = val;
@@ -556,36 +663,68 @@ static int ina3221_probe(struct i2c_client *client,
 		return ret;
 	}
 
-	ret = regmap_field_write(ina->fields[F_RST], true);
-	if (ret) {
-		dev_err(dev, "Unable to reset device\n");
-		return ret;
-	}
-
-	/* Sync config register after reset */
-	ret = regmap_read(ina->regmap, INA3221_CONFIG, &ina->reg_config);
-	if (ret)
-		return ret;
+	/* The driver will be reset, so use reset value */
+	ina->reg_config = INA3221_CONFIG_DEFAULT;
 
 	/* Disable channels if their inputs are disconnected */
 	for (i = 0; i < INA3221_NUM_CHANNELS; i++) {
 		if (ina->inputs[i].disconnected)
 			ina->reg_config &= ~INA3221_CONFIG_CHx_EN(i);
 	}
 
-	ret = regmap_write(ina->regmap, INA3221_CONFIG, ina->reg_config);
-	if (ret)
-		return ret;
+	ina->pm_dev = dev;
+	mutex_init(&ina->lock);
 	dev_set_drvdata(dev, ina);
 
+	/* Enable PM runtime -- status is suspended by default */
+	pm_runtime_enable(ina->pm_dev);
+
+	/* Initialize (resume) the device */
+	for (i = 0; i < INA3221_NUM_CHANNELS; i++) {
+		if (ina->inputs[i].disconnected)
+			continue;
+		/* Match the refcount with number of enabled channels */
+		ret = pm_runtime_get_sync(ina->pm_dev);
+		if (ret < 0)
+			goto fail;
+	}
+
 	hwmon_dev = devm_hwmon_device_register_with_info(dev, client->name, ina,
							 &ina3221_chip_info,
							 ina3221_groups);
 	if (IS_ERR(hwmon_dev)) {
 		dev_err(dev, "Unable to register hwmon device\n");
-		return PTR_ERR(hwmon_dev);
+		ret = PTR_ERR(hwmon_dev);
+		goto fail;
 	}
 
 	return 0;
+
+fail:
+	pm_runtime_disable(ina->pm_dev);
+	pm_runtime_set_suspended(ina->pm_dev);
+	/* pm_runtime_put_noidle() will decrease the PM refcount until 0 */
+	for (i = 0; i < INA3221_NUM_CHANNELS; i++)
+		pm_runtime_put_noidle(ina->pm_dev);
+	mutex_destroy(&ina->lock);
+
+	return ret;
+}
+
+static int ina3221_remove(struct i2c_client *client)
+{
+	struct ina3221_data *ina = dev_get_drvdata(&client->dev);
+	int i;
+
+	pm_runtime_disable(ina->pm_dev);
+	pm_runtime_set_suspended(ina->pm_dev);
+
+	/* pm_runtime_put_noidle() will decrease the PM refcount until 0 */
+	for (i = 0; i < INA3221_NUM_CHANNELS; i++)
+		pm_runtime_put_noidle(ina->pm_dev);
+
+	mutex_destroy(&ina->lock);
 
 	return 0;
 }
@@ -640,7 +779,9 @@ static int __maybe_unused ina3221_resume(struct device *dev)
 }
 
 static const struct dev_pm_ops ina3221_pm = {
-	SET_SYSTEM_SLEEP_PM_OPS(ina3221_suspend, ina3221_resume)
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				pm_runtime_force_resume)
+	SET_RUNTIME_PM_OPS(ina3221_suspend, ina3221_resume, NULL)
 };
 
 static const struct of_device_id ina3221_of_match_table[] = {
@@ -657,6 +798,7 @@ MODULE_DEVICE_TABLE(i2c, ina3221_ids);
 
 static struct i2c_driver ina3221_i2c_driver = {
 	.probe = ina3221_probe,
+	.remove = ina3221_remove,
 	.driver = {
 		.name = INA3221_DRIVER_NAME,
 		.of_match_table = ina3221_of_match_table,
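Two ideas carry the ina3221 rework above. First, regmap_field_read_poll_timeout(field, val, cond, sleep_us, timeout_us) re-reads the field until the condition holds or the timeout expires, returning 0 on success and -ETIMEDOUT otherwise, so ina3221_wait_for_data() blocks for roughly one conversion round (per enabled channel, bus plus shunt conversion time) before trusting a sample. Second, the driver keeps one runtime-PM reference per enabled channel, so the chip suspends exactly when the last channel is disabled. A stripped-down sketch of that invariant, with the channel bookkeeping elided and illustrative names:

	/* Sketch: hold one pm_runtime reference per enabled channel. The
	 * 0 -> 1 transition resumes the device; the last put suspends it.
	 */
	static int channel_set_enabled(struct device *dev, bool enable)
	{
		int ret;

		if (enable) {
			ret = pm_runtime_get_sync(dev);	/* resumes on first user */
			if (ret < 0) {
				pm_runtime_put_noidle(dev);	/* undo the failed get */
				return ret;
			}
		} else {
			pm_runtime_put_sync(dev);	/* suspends with the last user */
		}
		return 0;
	}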
diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
index 9790f1f5eb98..50158b9298bb 100644
--- a/drivers/hwmon/k10temp.c
+++ b/drivers/hwmon/k10temp.c
@@ -184,7 +184,7 @@ static ssize_t temp1_max_show(struct device *dev,
 	return sprintf(buf, "%d\n", 70 * 1000);
 }
 
-static ssize_t show_temp_crit(struct device *dev,
+static ssize_t temp_crit_show(struct device *dev,
			      struct device_attribute *devattr, char *buf)
 {
 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
@@ -202,12 +202,12 @@ static ssize_t show_temp_crit(struct device *dev,
 
 static DEVICE_ATTR_RO(temp1_input);
 static DEVICE_ATTR_RO(temp1_max);
-static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, show_temp_crit, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_crit_hyst, S_IRUGO, show_temp_crit, NULL, 1);
+static SENSOR_DEVICE_ATTR_RO(temp1_crit, temp_crit, 0);
+static SENSOR_DEVICE_ATTR_RO(temp1_crit_hyst, temp_crit, 1);
 
-static SENSOR_DEVICE_ATTR(temp1_label, 0444, temp_label_show, NULL, 0);
+static SENSOR_DEVICE_ATTR_RO(temp1_label, temp_label, 0);
 static DEVICE_ATTR_RO(temp2_input);
-static SENSOR_DEVICE_ATTR(temp2_label, 0444, temp_label_show, NULL, 1);
+static SENSOR_DEVICE_ATTR_RO(temp2_label, temp_label, 1);
 
 static umode_t k10temp_is_visible(struct kobject *kobj,
				  struct attribute *attr, int index)
@@ -323,7 +323,7 @@ static int k10temp_probe(struct pci_dev *pdev,
		  (boot_cpu_data.x86_model & 0xf0) == 0x70)) {
 		data->read_htcreg = read_htcreg_nb_f15;
 		data->read_tempreg = read_tempreg_nb_f15;
-	} else if (boot_cpu_data.x86 == 0x17) {
+	} else if (boot_cpu_data.x86 == 0x17 || boot_cpu_data.x86 == 0x18) {
 		data->temp_adjust_mask = 0x80000;
 		data->read_tempreg = read_tempreg_nb_f17;
 		data->show_tdie = true;
@@ -361,6 +361,7 @@ static const struct pci_device_id k10temp_id_table[] = {
 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) },
 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) },
 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) },
+	{ PCI_VDEVICE(HYGON, PCI_DEVICE_ID_AMD_17H_DF_F3) },
 	{}
 };
 MODULE_DEVICE_TABLE(pci, k10temp_id_table);
diff --git a/drivers/hwmon/lm63.c b/drivers/hwmon/lm63.c
index 4c1770920d29..eac54b9cdeec 100644
--- a/drivers/hwmon/lm63.c
+++ b/drivers/hwmon/lm63.c
@@ -1120,7 +1120,6 @@ static int lm63_probe(struct i2c_client *client,
 		data->kind = (enum chips)of_device_get_match_data(&client->dev);
 	else
 		data->kind = id->driver_data;
-	data->kind = id->driver_data;
 	if (data->kind == lm64)
 		data->temp2_offset = 16000;
diff --git a/drivers/hwmon/lm75.c b/drivers/hwmon/lm75.c
index c7f20543b2bf..62acb9f16ec5 100644
--- a/drivers/hwmon/lm75.c
+++ b/drivers/hwmon/lm75.c
@@ -50,6 +50,7 @@ enum lm75_type {	/* keep sorted in alphabetical order */
 	max31725,
 	mcp980x,
 	stds75,
+	stlm75,
 	tcn75,
 	tmp100,
 	tmp101,
@@ -316,6 +317,10 @@ lm75_probe(struct i2c_client *client, const struct i2c_device_id *id)
 		data->resolution = 11;
 		data->sample_time = MSEC_PER_SEC;
 		break;
+	case stlm75:
+		data->resolution = 9;
+		data->sample_time = MSEC_PER_SEC / 5;
+		break;
 	case ds7505:
 		set_mask |= 3 << 5;	/* 12-bit mode */
 		data->resolution = 12;
@@ -424,6 +429,7 @@ static const struct i2c_device_id lm75_ids[] = {
 	{ "max31726", max31725, },
 	{ "mcp980x", mcp980x, },
 	{ "stds75", stds75, },
+	{ "stlm75", stlm75, },
 	{ "tcn75", tcn75, },
 	{ "tmp100", tmp100, },
 	{ "tmp101", tmp101, },
@@ -494,6 +500,10 @@ static const struct of_device_id lm75_of_match[] = {
 		.compatible = "st,stds75",
 		.data = (void *)stds75
 	},
+	{
+		.compatible = "st,stlm75",
+		.data = (void *)stlm75
+	},
 	{
 		.compatible = "microchip,tcn75",
 		.data = (void *)tcn75
diff --git a/drivers/hwmon/lm80.c b/drivers/hwmon/lm80.c
index 08e3945a6fbf..0e30fa00204c 100644
--- a/drivers/hwmon/lm80.c
+++ b/drivers/hwmon/lm80.c
@@ -360,9 +360,11 @@ static ssize_t set_fan_div(struct device *dev, struct device_attribute *attr,
 	struct i2c_client *client = data->client;
 	unsigned long min, val;
 	u8 reg;
-	int err = kstrtoul(buf, 10, &val);
-	if (err < 0)
-		return err;
+	int rv;
+
+	rv = kstrtoul(buf, 10, &val);
+	if (rv < 0)
+		return rv;
 
 	/* Save fan_min */
 	mutex_lock(&data->update_lock);
@@ -390,8 +392,11 @@ static ssize_t set_fan_div(struct device *dev, struct device_attribute *attr,
 		return -EINVAL;
 	}
 
-	reg = (lm80_read_value(client, LM80_REG_FANDIV) &
-	       ~(3 << (2 * (nr + 1)))) | (data->fan_div[nr] << (2 * (nr + 1)));
+	rv = lm80_read_value(client, LM80_REG_FANDIV);
+	if (rv < 0)
+		return rv;
+	reg = (rv & ~(3 << (2 * (nr + 1))))
+	    | (data->fan_div[nr] << (2 * (nr + 1)));
 	lm80_write_value(client, LM80_REG_FANDIV, reg);
 
 	/* Restore fan_min */
@@ -623,6 +628,7 @@ static int lm80_probe(struct i2c_client *client,
 	struct device *dev = &client->dev;
 	struct device *hwmon_dev;
 	struct lm80_data *data;
+	int rv;
 
 	data = devm_kzalloc(dev, sizeof(struct lm80_data), GFP_KERNEL);
 	if (!data)
@@ -635,8 +641,14 @@ static int lm80_probe(struct i2c_client *client,
 	lm80_init_client(client);
 
 	/* A few vars need to be filled upon startup */
-	data->fan[f_min][0] = lm80_read_value(client, LM80_REG_FAN_MIN(1));
-	data->fan[f_min][1] = lm80_read_value(client, LM80_REG_FAN_MIN(2));
+	rv = lm80_read_value(client, LM80_REG_FAN_MIN(1));
+	if (rv < 0)
+		return rv;
+	data->fan[f_min][0] = rv;
+	rv = lm80_read_value(client, LM80_REG_FAN_MIN(2));
+	if (rv < 0)
+		return rv;
+	data->fan[f_min][1] = rv;
 
 	hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
							   data, lm80_groups);
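The lm80 hunks all enforce one rule: SMBus accessors return negative errnos in-band, so a read result must be checked before it is masked, shifted, or cached, otherwise an error code ends up stored as register state. The shape of the fix, as a sketch (chip_read and REG_FAN_MIN stand in for the driver's real accessor and register):

	/* Sketch: propagate a failed register read instead of folding the
	 * negative errno into cached state.
	 */
	static int cache_fan_min(struct lm80_like_data *data,	/* placeholder */
				 struct i2c_client *client)
	{
		int rv = chip_read(client, REG_FAN_MIN);	/* may be -EIO etc. */

		if (rv < 0)
			return rv;
		data->fan_min = rv;
		return 0;
	}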
"1\n" : "2\n"); } -static ssize_t set_type(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t type_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) { struct lm95234_data *data = dev_get_drvdata(dev); unsigned long val; @@ -282,7 +282,7 @@ static ssize_t set_type(struct device *dev, struct device_attribute *attr, return count; } -static ssize_t show_tcrit2(struct device *dev, struct device_attribute *attr, +static ssize_t tcrit2_show(struct device *dev, struct device_attribute *attr, char *buf) { struct lm95234_data *data = dev_get_drvdata(dev); @@ -295,8 +295,8 @@ static ssize_t show_tcrit2(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%u", data->tcrit2[index] * 1000); } -static ssize_t set_tcrit2(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t tcrit2_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) { struct lm95234_data *data = dev_get_drvdata(dev); int index = to_sensor_dev_attr(attr)->index; @@ -320,7 +320,7 @@ static ssize_t set_tcrit2(struct device *dev, struct device_attribute *attr, return count; } -static ssize_t show_tcrit2_hyst(struct device *dev, +static ssize_t tcrit2_hyst_show(struct device *dev, struct device_attribute *attr, char *buf) { struct lm95234_data *data = dev_get_drvdata(dev); @@ -335,7 +335,7 @@ static ssize_t show_tcrit2_hyst(struct device *dev, ((int)data->tcrit2[index] - (int)data->thyst) * 1000); } -static ssize_t show_tcrit1(struct device *dev, struct device_attribute *attr, +static ssize_t tcrit1_show(struct device *dev, struct device_attribute *attr, char *buf) { struct lm95234_data *data = dev_get_drvdata(dev); @@ -344,8 +344,8 @@ static ssize_t show_tcrit1(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%u", data->tcrit1[index] * 1000); } -static ssize_t set_tcrit1(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t tcrit1_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) { struct lm95234_data *data = dev_get_drvdata(dev); int index = to_sensor_dev_attr(attr)->index; @@ -369,7 +369,7 @@ static ssize_t set_tcrit1(struct device *dev, struct device_attribute *attr, return count; } -static ssize_t show_tcrit1_hyst(struct device *dev, +static ssize_t tcrit1_hyst_show(struct device *dev, struct device_attribute *attr, char *buf) { struct lm95234_data *data = dev_get_drvdata(dev); @@ -384,9 +384,9 @@ static ssize_t show_tcrit1_hyst(struct device *dev, ((int)data->tcrit1[index] - (int)data->thyst) * 1000); } -static ssize_t set_tcrit1_hyst(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t tcrit1_hyst_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) { struct lm95234_data *data = dev_get_drvdata(dev); int index = to_sensor_dev_attr(attr)->index; @@ -411,7 +411,7 @@ static ssize_t set_tcrit1_hyst(struct device *dev, return count; } -static ssize_t show_offset(struct device *dev, struct device_attribute *attr, +static ssize_t offset_show(struct device *dev, struct device_attribute *attr, char *buf) { struct lm95234_data *data = dev_get_drvdata(dev); @@ -424,8 +424,8 @@ static ssize_t show_offset(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d", data->toffset[index] * 500); } -static ssize_t set_offset(struct device *dev, struct 
device_attribute *attr, - const char *buf, size_t count) +static ssize_t offset_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) { struct lm95234_data *data = dev_get_drvdata(dev); int index = to_sensor_dev_attr(attr)->index; @@ -492,80 +492,53 @@ static ssize_t update_interval_store(struct device *dev, return count; } -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL, 0); -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp, NULL, 1); -static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_temp, NULL, 2); -static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_temp, NULL, 3); -static SENSOR_DEVICE_ATTR(temp5_input, S_IRUGO, show_temp, NULL, 4); - -static SENSOR_DEVICE_ATTR(temp2_fault, S_IRUGO, show_alarm, NULL, - BIT(0) | BIT(1)); -static SENSOR_DEVICE_ATTR(temp3_fault, S_IRUGO, show_alarm, NULL, - BIT(2) | BIT(3)); -static SENSOR_DEVICE_ATTR(temp4_fault, S_IRUGO, show_alarm, NULL, - BIT(4) | BIT(5)); -static SENSOR_DEVICE_ATTR(temp5_fault, S_IRUGO, show_alarm, NULL, - BIT(6) | BIT(7)); - -static SENSOR_DEVICE_ATTR(temp2_type, S_IWUSR | S_IRUGO, show_type, set_type, - BIT(1)); -static SENSOR_DEVICE_ATTR(temp3_type, S_IWUSR | S_IRUGO, show_type, set_type, - BIT(2)); -static SENSOR_DEVICE_ATTR(temp4_type, S_IWUSR | S_IRUGO, show_type, set_type, - BIT(3)); -static SENSOR_DEVICE_ATTR(temp5_type, S_IWUSR | S_IRUGO, show_type, set_type, - BIT(4)); - -static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_tcrit1, - set_tcrit1, 0); -static SENSOR_DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_tcrit2, - set_tcrit2, 0); -static SENSOR_DEVICE_ATTR(temp3_max, S_IWUSR | S_IRUGO, show_tcrit2, - set_tcrit2, 1); -static SENSOR_DEVICE_ATTR(temp4_max, S_IWUSR | S_IRUGO, show_tcrit1, - set_tcrit1, 3); -static SENSOR_DEVICE_ATTR(temp5_max, S_IWUSR | S_IRUGO, show_tcrit1, - set_tcrit1, 4); - -static SENSOR_DEVICE_ATTR(temp1_max_hyst, S_IWUSR | S_IRUGO, show_tcrit1_hyst, - set_tcrit1_hyst, 0); -static SENSOR_DEVICE_ATTR(temp2_max_hyst, S_IRUGO, show_tcrit2_hyst, NULL, 0); -static SENSOR_DEVICE_ATTR(temp3_max_hyst, S_IRUGO, show_tcrit2_hyst, NULL, 1); -static SENSOR_DEVICE_ATTR(temp4_max_hyst, S_IRUGO, show_tcrit1_hyst, NULL, 3); -static SENSOR_DEVICE_ATTR(temp5_max_hyst, S_IRUGO, show_tcrit1_hyst, NULL, 4); - -static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL, - BIT(0 + 8)); -static SENSOR_DEVICE_ATTR(temp2_max_alarm, S_IRUGO, show_alarm, NULL, - BIT(1 + 16)); -static SENSOR_DEVICE_ATTR(temp3_max_alarm, S_IRUGO, show_alarm, NULL, - BIT(2 + 16)); -static SENSOR_DEVICE_ATTR(temp4_max_alarm, S_IRUGO, show_alarm, NULL, - BIT(3 + 8)); -static SENSOR_DEVICE_ATTR(temp5_max_alarm, S_IRUGO, show_alarm, NULL, - BIT(4 + 8)); - -static SENSOR_DEVICE_ATTR(temp2_crit, S_IWUSR | S_IRUGO, show_tcrit1, - set_tcrit1, 1); -static SENSOR_DEVICE_ATTR(temp3_crit, S_IWUSR | S_IRUGO, show_tcrit1, - set_tcrit1, 2); - -static SENSOR_DEVICE_ATTR(temp2_crit_hyst, S_IRUGO, show_tcrit1_hyst, NULL, 1); -static SENSOR_DEVICE_ATTR(temp3_crit_hyst, S_IRUGO, show_tcrit1_hyst, NULL, 2); - -static SENSOR_DEVICE_ATTR(temp2_crit_alarm, S_IRUGO, show_alarm, NULL, - BIT(1 + 8)); -static SENSOR_DEVICE_ATTR(temp3_crit_alarm, S_IRUGO, show_alarm, NULL, - BIT(2 + 8)); - -static SENSOR_DEVICE_ATTR(temp2_offset, S_IWUSR | S_IRUGO, show_offset, - set_offset, 0); -static SENSOR_DEVICE_ATTR(temp3_offset, S_IWUSR | S_IRUGO, show_offset, - set_offset, 1); -static SENSOR_DEVICE_ATTR(temp4_offset, S_IWUSR | S_IRUGO, show_offset, - set_offset, 2); -static 
SENSOR_DEVICE_ATTR(temp5_offset, S_IWUSR | S_IRUGO, show_offset, - set_offset, 3); +static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, 0); +static SENSOR_DEVICE_ATTR_RO(temp2_input, temp, 1); +static SENSOR_DEVICE_ATTR_RO(temp3_input, temp, 2); +static SENSOR_DEVICE_ATTR_RO(temp4_input, temp, 3); +static SENSOR_DEVICE_ATTR_RO(temp5_input, temp, 4); + +static SENSOR_DEVICE_ATTR_RO(temp2_fault, alarm, BIT(0) | BIT(1)); +static SENSOR_DEVICE_ATTR_RO(temp3_fault, alarm, BIT(2) | BIT(3)); +static SENSOR_DEVICE_ATTR_RO(temp4_fault, alarm, BIT(4) | BIT(5)); +static SENSOR_DEVICE_ATTR_RO(temp5_fault, alarm, BIT(6) | BIT(7)); + +static SENSOR_DEVICE_ATTR_RW(temp2_type, type, BIT(1)); +static SENSOR_DEVICE_ATTR_RW(temp3_type, type, BIT(2)); +static SENSOR_DEVICE_ATTR_RW(temp4_type, type, BIT(3)); +static SENSOR_DEVICE_ATTR_RW(temp5_type, type, BIT(4)); + +static SENSOR_DEVICE_ATTR_RW(temp1_max, tcrit1, 0); +static SENSOR_DEVICE_ATTR_RW(temp2_max, tcrit2, 0); +static SENSOR_DEVICE_ATTR_RW(temp3_max, tcrit2, 1); +static SENSOR_DEVICE_ATTR_RW(temp4_max, tcrit1, 3); +static SENSOR_DEVICE_ATTR_RW(temp5_max, tcrit1, 4); + +static SENSOR_DEVICE_ATTR_RW(temp1_max_hyst, tcrit1_hyst, 0); +static SENSOR_DEVICE_ATTR_RO(temp2_max_hyst, tcrit2_hyst, 0); +static SENSOR_DEVICE_ATTR_RO(temp3_max_hyst, tcrit2_hyst, 1); +static SENSOR_DEVICE_ATTR_RO(temp4_max_hyst, tcrit1_hyst, 3); +static SENSOR_DEVICE_ATTR_RO(temp5_max_hyst, tcrit1_hyst, 4); + +static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, alarm, BIT(0 + 8)); +static SENSOR_DEVICE_ATTR_RO(temp2_max_alarm, alarm, BIT(1 + 16)); +static SENSOR_DEVICE_ATTR_RO(temp3_max_alarm, alarm, BIT(2 + 16)); +static SENSOR_DEVICE_ATTR_RO(temp4_max_alarm, alarm, BIT(3 + 8)); +static SENSOR_DEVICE_ATTR_RO(temp5_max_alarm, alarm, BIT(4 + 8)); + +static SENSOR_DEVICE_ATTR_RW(temp2_crit, tcrit1, 1); +static SENSOR_DEVICE_ATTR_RW(temp3_crit, tcrit1, 2); + +static SENSOR_DEVICE_ATTR_RO(temp2_crit_hyst, tcrit1_hyst, 1); +static SENSOR_DEVICE_ATTR_RO(temp3_crit_hyst, tcrit1_hyst, 2); + +static SENSOR_DEVICE_ATTR_RO(temp2_crit_alarm, alarm, BIT(1 + 8)); +static SENSOR_DEVICE_ATTR_RO(temp3_crit_alarm, alarm, BIT(2 + 8)); + +static SENSOR_DEVICE_ATTR_RW(temp2_offset, offset, 0); +static SENSOR_DEVICE_ATTR_RW(temp3_offset, offset, 1); +static SENSOR_DEVICE_ATTR_RW(temp4_offset, offset, 2); +static SENSOR_DEVICE_ATTR_RW(temp5_offset, offset, 3); static DEVICE_ATTR_RW(update_interval); diff --git a/drivers/hwmon/ltc2945.c b/drivers/hwmon/ltc2945.c index 1b92e4f6e234..f16716a1fead 100644 --- a/drivers/hwmon/ltc2945.c +++ b/drivers/hwmon/ltc2945.c @@ -226,7 +226,7 @@ static int ltc2945_val_to_reg(struct device *dev, u8 reg, return val; } -static ssize_t ltc2945_show_value(struct device *dev, +static ssize_t ltc2945_value_show(struct device *dev, struct device_attribute *da, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); @@ -238,9 +238,9 @@ static ssize_t ltc2945_show_value(struct device *dev, return snprintf(buf, PAGE_SIZE, "%lld\n", value); } -static ssize_t ltc2945_set_value(struct device *dev, - struct device_attribute *da, - const char *buf, size_t count) +static ssize_t ltc2945_value_store(struct device *dev, + struct device_attribute *da, + const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); struct regmap *regmap = dev_get_drvdata(dev); @@ -273,7 +273,7 @@ static ssize_t ltc2945_set_value(struct device *dev, return ret < 0 ? 
ret : count; } -static ssize_t ltc2945_reset_history(struct device *dev, +static ssize_t ltc2945_history_store(struct device *dev, struct device_attribute *da, const char *buf, size_t count) { @@ -326,7 +326,7 @@ static ssize_t ltc2945_reset_history(struct device *dev, return ret ? : count; } -static ssize_t ltc2945_show_bool(struct device *dev, +static ssize_t ltc2945_bool_show(struct device *dev, struct device_attribute *da, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); @@ -347,86 +347,65 @@ static ssize_t ltc2945_show_bool(struct device *dev, /* Input voltages */ -static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_VIN_H); -static SENSOR_DEVICE_ATTR(in1_min, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MIN_VIN_THRES_H); -static SENSOR_DEVICE_ATTR(in1_max, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MAX_VIN_THRES_H); -static SENSOR_DEVICE_ATTR(in1_lowest, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_MIN_VIN_H); -static SENSOR_DEVICE_ATTR(in1_highest, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_MAX_VIN_H); -static SENSOR_DEVICE_ATTR(in1_reset_history, S_IWUSR, NULL, - ltc2945_reset_history, LTC2945_MIN_VIN_H); - -static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_ADIN_H); -static SENSOR_DEVICE_ATTR(in2_min, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MIN_ADIN_THRES_H); -static SENSOR_DEVICE_ATTR(in2_max, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MAX_ADIN_THRES_H); -static SENSOR_DEVICE_ATTR(in2_lowest, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_MIN_ADIN_H); -static SENSOR_DEVICE_ATTR(in2_highest, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_MAX_ADIN_H); -static SENSOR_DEVICE_ATTR(in2_reset_history, S_IWUSR, NULL, - ltc2945_reset_history, LTC2945_MIN_ADIN_H); +static SENSOR_DEVICE_ATTR_RO(in1_input, ltc2945_value, LTC2945_VIN_H); +static SENSOR_DEVICE_ATTR_RW(in1_min, ltc2945_value, LTC2945_MIN_VIN_THRES_H); +static SENSOR_DEVICE_ATTR_RW(in1_max, ltc2945_value, LTC2945_MAX_VIN_THRES_H); +static SENSOR_DEVICE_ATTR_RO(in1_lowest, ltc2945_value, LTC2945_MIN_VIN_H); +static SENSOR_DEVICE_ATTR_RO(in1_highest, ltc2945_value, LTC2945_MAX_VIN_H); +static SENSOR_DEVICE_ATTR_WO(in1_reset_history, ltc2945_history, + LTC2945_MIN_VIN_H); + +static SENSOR_DEVICE_ATTR_RO(in2_input, ltc2945_value, LTC2945_ADIN_H); +static SENSOR_DEVICE_ATTR_RW(in2_min, ltc2945_value, LTC2945_MIN_ADIN_THRES_H); +static SENSOR_DEVICE_ATTR_RW(in2_max, ltc2945_value, LTC2945_MAX_ADIN_THRES_H); +static SENSOR_DEVICE_ATTR_RO(in2_lowest, ltc2945_value, LTC2945_MIN_ADIN_H); +static SENSOR_DEVICE_ATTR_RO(in2_highest, ltc2945_value, LTC2945_MAX_ADIN_H); +static SENSOR_DEVICE_ATTR_WO(in2_reset_history, ltc2945_history, + LTC2945_MIN_ADIN_H); /* Voltage alarms */ -static SENSOR_DEVICE_ATTR(in1_min_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_VIN_UV); -static SENSOR_DEVICE_ATTR(in1_max_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_VIN_OV); -static SENSOR_DEVICE_ATTR(in2_min_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_ADIN_UV); -static SENSOR_DEVICE_ATTR(in2_max_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_ADIN_OV); +static SENSOR_DEVICE_ATTR_RO(in1_min_alarm, ltc2945_bool, FAULT_VIN_UV); +static SENSOR_DEVICE_ATTR_RO(in1_max_alarm, ltc2945_bool, FAULT_VIN_OV); +static SENSOR_DEVICE_ATTR_RO(in2_min_alarm, ltc2945_bool, FAULT_ADIN_UV); +static SENSOR_DEVICE_ATTR_RO(in2_max_alarm, ltc2945_bool, FAULT_ADIN_OV); /* Currents (via sense 
resistor) */ -static SENSOR_DEVICE_ATTR(curr1_input, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_SENSE_H); -static SENSOR_DEVICE_ATTR(curr1_min, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MIN_SENSE_THRES_H); -static SENSOR_DEVICE_ATTR(curr1_max, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MAX_SENSE_THRES_H); -static SENSOR_DEVICE_ATTR(curr1_lowest, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_MIN_SENSE_H); -static SENSOR_DEVICE_ATTR(curr1_highest, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_MAX_SENSE_H); -static SENSOR_DEVICE_ATTR(curr1_reset_history, S_IWUSR, NULL, - ltc2945_reset_history, LTC2945_MIN_SENSE_H); +static SENSOR_DEVICE_ATTR_RO(curr1_input, ltc2945_value, LTC2945_SENSE_H); +static SENSOR_DEVICE_ATTR_RW(curr1_min, ltc2945_value, + LTC2945_MIN_SENSE_THRES_H); +static SENSOR_DEVICE_ATTR_RW(curr1_max, ltc2945_value, + LTC2945_MAX_SENSE_THRES_H); +static SENSOR_DEVICE_ATTR_RO(curr1_lowest, ltc2945_value, LTC2945_MIN_SENSE_H); +static SENSOR_DEVICE_ATTR_RO(curr1_highest, ltc2945_value, + LTC2945_MAX_SENSE_H); +static SENSOR_DEVICE_ATTR_WO(curr1_reset_history, ltc2945_history, + LTC2945_MIN_SENSE_H); /* Current alarms */ -static SENSOR_DEVICE_ATTR(curr1_min_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_SENSE_UV); -static SENSOR_DEVICE_ATTR(curr1_max_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_SENSE_OV); +static SENSOR_DEVICE_ATTR_RO(curr1_min_alarm, ltc2945_bool, FAULT_SENSE_UV); +static SENSOR_DEVICE_ATTR_RO(curr1_max_alarm, ltc2945_bool, FAULT_SENSE_OV); /* Power */ -static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ltc2945_show_value, NULL, - LTC2945_POWER_H); -static SENSOR_DEVICE_ATTR(power1_min, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MIN_POWER_THRES_H); -static SENSOR_DEVICE_ATTR(power1_max, S_IRUGO | S_IWUSR, ltc2945_show_value, - ltc2945_set_value, LTC2945_MAX_POWER_THRES_H); -static SENSOR_DEVICE_ATTR(power1_input_lowest, S_IRUGO, ltc2945_show_value, - NULL, LTC2945_MIN_POWER_H); -static SENSOR_DEVICE_ATTR(power1_input_highest, S_IRUGO, ltc2945_show_value, - NULL, LTC2945_MAX_POWER_H); -static SENSOR_DEVICE_ATTR(power1_reset_history, S_IWUSR, NULL, - ltc2945_reset_history, LTC2945_MIN_POWER_H); +static SENSOR_DEVICE_ATTR_RO(power1_input, ltc2945_value, LTC2945_POWER_H); +static SENSOR_DEVICE_ATTR_RW(power1_min, ltc2945_value, + LTC2945_MIN_POWER_THRES_H); +static SENSOR_DEVICE_ATTR_RW(power1_max, ltc2945_value, + LTC2945_MAX_POWER_THRES_H); +static SENSOR_DEVICE_ATTR_RO(power1_input_lowest, ltc2945_value, + LTC2945_MIN_POWER_H); +static SENSOR_DEVICE_ATTR_RO(power1_input_highest, ltc2945_value, + LTC2945_MAX_POWER_H); +static SENSOR_DEVICE_ATTR_WO(power1_reset_history, ltc2945_history, + LTC2945_MIN_POWER_H); /* Power alarms */ -static SENSOR_DEVICE_ATTR(power1_min_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_POWER_UV); -static SENSOR_DEVICE_ATTR(power1_max_alarm, S_IRUGO, ltc2945_show_bool, NULL, - FAULT_POWER_OV); +static SENSOR_DEVICE_ATTR_RO(power1_min_alarm, ltc2945_bool, FAULT_POWER_UV); +static SENSOR_DEVICE_ATTR_RO(power1_max_alarm, ltc2945_bool, FAULT_POWER_OV); static struct attribute *ltc2945_attrs[] = { &sensor_dev_attr_in1_input.dev_attr.attr, diff --git a/drivers/hwmon/ltc4215.c b/drivers/hwmon/ltc4215.c index c8a9bd9b050f..d4a1d033d3e8 100644 --- a/drivers/hwmon/ltc4215.c +++ b/drivers/hwmon/ltc4215.c @@ -136,9 +136,8 @@ static unsigned int ltc4215_get_current(struct device *dev) return curr; } -static ssize_t ltc4215_show_voltage(struct device *dev, - struct 
device_attribute *da, - char *buf) +static ssize_t ltc4215_voltage_show(struct device *dev, + struct device_attribute *da, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); const int voltage = ltc4215_get_voltage(dev, attr->index); @@ -146,18 +145,16 @@ static ssize_t ltc4215_show_voltage(struct device *dev, return snprintf(buf, PAGE_SIZE, "%d\n", voltage); } -static ssize_t ltc4215_show_current(struct device *dev, - struct device_attribute *da, - char *buf) +static ssize_t ltc4215_current_show(struct device *dev, + struct device_attribute *da, char *buf) { const unsigned int curr = ltc4215_get_current(dev); return snprintf(buf, PAGE_SIZE, "%u\n", curr); } -static ssize_t ltc4215_show_power(struct device *dev, - struct device_attribute *da, - char *buf) +static ssize_t ltc4215_power_show(struct device *dev, + struct device_attribute *da, char *buf) { const unsigned int curr = ltc4215_get_current(dev); const int output_voltage = ltc4215_get_voltage(dev, LTC4215_ADIN); @@ -168,9 +165,8 @@ static ssize_t ltc4215_show_power(struct device *dev, return snprintf(buf, PAGE_SIZE, "%u\n", power); } -static ssize_t ltc4215_show_alarm(struct device *dev, - struct device_attribute *da, - char *buf) +static ssize_t ltc4215_alarm_show(struct device *dev, + struct device_attribute *da, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); struct ltc4215_data *data = ltc4215_update_device(dev); @@ -189,26 +185,20 @@ static ssize_t ltc4215_show_alarm(struct device *dev, /* Construct a sensor_device_attribute structure for each register */ /* Current */ -static SENSOR_DEVICE_ATTR(curr1_input, S_IRUGO, ltc4215_show_current, NULL, 0); -static SENSOR_DEVICE_ATTR(curr1_max_alarm, S_IRUGO, ltc4215_show_alarm, NULL, - 1 << 2); +static SENSOR_DEVICE_ATTR_RO(curr1_input, ltc4215_current, 0); +static SENSOR_DEVICE_ATTR_RO(curr1_max_alarm, ltc4215_alarm, 1 << 2); /* Power (virtual) */ -static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ltc4215_show_power, NULL, 0); +static SENSOR_DEVICE_ATTR_RO(power1_input, ltc4215_power, 0); /* Input Voltage */ -static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, ltc4215_show_voltage, NULL, - LTC4215_ADIN); -static SENSOR_DEVICE_ATTR(in1_max_alarm, S_IRUGO, ltc4215_show_alarm, NULL, - 1 << 0); -static SENSOR_DEVICE_ATTR(in1_min_alarm, S_IRUGO, ltc4215_show_alarm, NULL, - 1 << 1); +static SENSOR_DEVICE_ATTR_RO(in1_input, ltc4215_voltage, LTC4215_ADIN); +static SENSOR_DEVICE_ATTR_RO(in1_max_alarm, ltc4215_alarm, 1 << 0); +static SENSOR_DEVICE_ATTR_RO(in1_min_alarm, ltc4215_alarm, 1 << 1); /* Output Voltage */ -static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, ltc4215_show_voltage, NULL, - LTC4215_SOURCE); -static SENSOR_DEVICE_ATTR(in2_min_alarm, S_IRUGO, ltc4215_show_alarm, NULL, - 1 << 3); +static SENSOR_DEVICE_ATTR_RO(in2_input, ltc4215_voltage, LTC4215_SOURCE); +static SENSOR_DEVICE_ATTR_RO(in2_min_alarm, ltc4215_alarm, 1 << 3); /* * Finally, construct an array of pointers to members of the above objects, diff --git a/drivers/hwmon/ltc4260.c b/drivers/hwmon/ltc4260.c index afb09574b12c..011b1ae98ff1 100644 --- a/drivers/hwmon/ltc4260.c +++ b/drivers/hwmon/ltc4260.c @@ -79,7 +79,7 @@ static int ltc4260_get_value(struct device *dev, u8 reg) return val; } -static ssize_t ltc4260_show_value(struct device *dev, +static ssize_t ltc4260_value_show(struct device *dev, struct device_attribute *da, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); @@ -91,7 +91,7 @@ static ssize_t ltc4260_show_value(struct device *dev, return 
snprintf(buf, PAGE_SIZE, "%d\n", value); } -static ssize_t ltc4260_show_bool(struct device *dev, +static ssize_t ltc4260_bool_show(struct device *dev, struct device_attribute *da, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(da); @@ -111,30 +111,24 @@ static ssize_t ltc4260_show_bool(struct device *dev, } /* Voltages */ -static SENSOR_DEVICE_ATTR(in1_input, S_IRUGO, ltc4260_show_value, NULL, - LTC4260_SOURCE); -static SENSOR_DEVICE_ATTR(in2_input, S_IRUGO, ltc4260_show_value, NULL, - LTC4260_ADIN); +static SENSOR_DEVICE_ATTR_RO(in1_input, ltc4260_value, LTC4260_SOURCE); +static SENSOR_DEVICE_ATTR_RO(in2_input, ltc4260_value, LTC4260_ADIN); /* * Voltage alarms * UV/OV faults are associated with the input voltage, and the POWER BAD and * FET SHORT faults are associated with the output voltage. */ -static SENSOR_DEVICE_ATTR(in1_min_alarm, S_IRUGO, ltc4260_show_bool, NULL, - FAULT_UV); -static SENSOR_DEVICE_ATTR(in1_max_alarm, S_IRUGO, ltc4260_show_bool, NULL, - FAULT_OV); -static SENSOR_DEVICE_ATTR(in2_alarm, S_IRUGO, ltc4260_show_bool, NULL, - FAULT_POWER_BAD | FAULT_FET_SHORT); +static SENSOR_DEVICE_ATTR_RO(in1_min_alarm, ltc4260_bool, FAULT_UV); +static SENSOR_DEVICE_ATTR_RO(in1_max_alarm, ltc4260_bool, FAULT_OV); +static SENSOR_DEVICE_ATTR_RO(in2_alarm, ltc4260_bool, + FAULT_POWER_BAD | FAULT_FET_SHORT); /* Current (via sense resistor) */ -static SENSOR_DEVICE_ATTR(curr1_input, S_IRUGO, ltc4260_show_value, NULL, - LTC4260_SENSE); +static SENSOR_DEVICE_ATTR_RO(curr1_input, ltc4260_value, LTC4260_SENSE); /* Overcurrent alarm */ -static SENSOR_DEVICE_ATTR(curr1_max_alarm, S_IRUGO, ltc4260_show_bool, NULL, - FAULT_OC); +static SENSOR_DEVICE_ATTR_RO(curr1_max_alarm, ltc4260_bool, FAULT_OC); static struct attribute *ltc4260_attrs[] = { &sensor_dev_attr_in1_input.dev_attr.attr, diff --git a/drivers/hwmon/max6650.c b/drivers/hwmon/max6650.c index 65be4b19fe47..4752a9ee9645 100644 --- a/drivers/hwmon/max6650.c +++ b/drivers/hwmon/max6650.c @@ -209,8 +209,8 @@ static int max6650_set_operating_mode(struct max6650_data *data, u8 mode) return 0; } -static ssize_t get_fan(struct device *dev, struct device_attribute *devattr, - char *buf) +static ssize_t fan_show(struct device *dev, struct device_attribute *devattr, + char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct max6650_data *data = max6650_update_device(dev); @@ -514,8 +514,8 @@ static ssize_t fan1_div_store(struct device *dev, * 1 = alarm */ -static ssize_t get_alarm(struct device *dev, struct device_attribute *devattr, - char *buf) +static ssize_t alarm_show(struct device *dev, + struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); struct max6650_data *data = max6650_update_device(dev); @@ -534,24 +534,19 @@ static ssize_t get_alarm(struct device *dev, struct device_attribute *devattr, return sprintf(buf, "%d\n", alarm); } -static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, get_fan, NULL, 0); -static SENSOR_DEVICE_ATTR(fan2_input, S_IRUGO, get_fan, NULL, 1); -static SENSOR_DEVICE_ATTR(fan3_input, S_IRUGO, get_fan, NULL, 2); -static SENSOR_DEVICE_ATTR(fan4_input, S_IRUGO, get_fan, NULL, 3); +static SENSOR_DEVICE_ATTR_RO(fan1_input, fan, 0); +static SENSOR_DEVICE_ATTR_RO(fan2_input, fan, 1); +static SENSOR_DEVICE_ATTR_RO(fan3_input, fan, 2); +static SENSOR_DEVICE_ATTR_RO(fan4_input, fan, 3); static DEVICE_ATTR_RW(fan1_target); static DEVICE_ATTR_RW(fan1_div); static DEVICE_ATTR_RW(pwm1_enable); static DEVICE_ATTR_RW(pwm1); 
-static SENSOR_DEVICE_ATTR(fan1_max_alarm, S_IRUGO, get_alarm, NULL, - MAX6650_ALRM_MAX); -static SENSOR_DEVICE_ATTR(fan1_min_alarm, S_IRUGO, get_alarm, NULL, - MAX6650_ALRM_MIN); -static SENSOR_DEVICE_ATTR(fan1_fault, S_IRUGO, get_alarm, NULL, - MAX6650_ALRM_TACH); -static SENSOR_DEVICE_ATTR(gpio1_alarm, S_IRUGO, get_alarm, NULL, - MAX6650_ALRM_GPIO1); -static SENSOR_DEVICE_ATTR(gpio2_alarm, S_IRUGO, get_alarm, NULL, - MAX6650_ALRM_GPIO2); +static SENSOR_DEVICE_ATTR_RO(fan1_max_alarm, alarm, MAX6650_ALRM_MAX); +static SENSOR_DEVICE_ATTR_RO(fan1_min_alarm, alarm, MAX6650_ALRM_MIN); +static SENSOR_DEVICE_ATTR_RO(fan1_fault, alarm, MAX6650_ALRM_TACH); +static SENSOR_DEVICE_ATTR_RO(gpio1_alarm, alarm, MAX6650_ALRM_GPIO1); +static SENSOR_DEVICE_ATTR_RO(gpio2_alarm, alarm, MAX6650_ALRM_GPIO2); static umode_t max6650_attrs_visible(struct kobject *kobj, struct attribute *a, int n) diff --git a/drivers/hwmon/max6697.c b/drivers/hwmon/max6697.c index 221fd1492057..da43f7ae3de1 100644 --- a/drivers/hwmon/max6697.c +++ b/drivers/hwmon/max6697.c @@ -250,7 +250,7 @@ abort: return ret; } -static ssize_t show_temp_input(struct device *dev, +static ssize_t temp_input_show(struct device *dev, struct device_attribute *devattr, char *buf) { int index = to_sensor_dev_attr(devattr)->index; @@ -266,8 +266,8 @@ static ssize_t show_temp_input(struct device *dev, return sprintf(buf, "%d\n", temp * 125); } -static ssize_t show_temp(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t temp_show(struct device *dev, struct device_attribute *devattr, + char *buf) { int nr = to_sensor_dev_attr_2(devattr)->nr; int index = to_sensor_dev_attr_2(devattr)->index; @@ -283,7 +283,7 @@ static ssize_t show_temp(struct device *dev, return sprintf(buf, "%d\n", temp * 1000); } -static ssize_t show_alarm(struct device *dev, struct device_attribute *attr, +static ssize_t alarm_show(struct device *dev, struct device_attribute *attr, char *buf) { int index = to_sensor_dev_attr(attr)->index; @@ -298,9 +298,9 @@ static ssize_t show_alarm(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%u\n", (data->alarms >> index) & 0x1); } -static ssize_t set_temp(struct device *dev, - struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t temp_store(struct device *dev, + struct device_attribute *devattr, const char *buf, + size_t count) { int nr = to_sensor_dev_attr_2(devattr)->nr; int index = to_sensor_dev_attr_2(devattr)->index; @@ -325,79 +325,63 @@ static ssize_t set_temp(struct device *dev, return ret < 0 ? 
ret : count; } -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp_input, NULL, 0); -static SENSOR_DEVICE_ATTR_2(temp1_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 0, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp1_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 0, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp_input, NULL, 1); -static SENSOR_DEVICE_ATTR_2(temp2_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 1, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp2_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 1, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp3_input, S_IRUGO, show_temp_input, NULL, 2); -static SENSOR_DEVICE_ATTR_2(temp3_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 2, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp3_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 2, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp4_input, S_IRUGO, show_temp_input, NULL, 3); -static SENSOR_DEVICE_ATTR_2(temp4_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 3, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp4_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 3, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp5_input, S_IRUGO, show_temp_input, NULL, 4); -static SENSOR_DEVICE_ATTR_2(temp5_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 4, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp5_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 4, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp6_input, S_IRUGO, show_temp_input, NULL, 5); -static SENSOR_DEVICE_ATTR_2(temp6_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 5, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp6_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 5, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp7_input, S_IRUGO, show_temp_input, NULL, 6); -static SENSOR_DEVICE_ATTR_2(temp7_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 6, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp7_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 6, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp8_input, S_IRUGO, show_temp_input, NULL, 7); -static SENSOR_DEVICE_ATTR_2(temp8_max, S_IRUGO | S_IWUSR, show_temp, set_temp, - 7, MAX6697_TEMP_MAX); -static SENSOR_DEVICE_ATTR_2(temp8_crit, S_IRUGO | S_IWUSR, show_temp, set_temp, - 7, MAX6697_TEMP_CRIT); - -static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL, 22); -static SENSOR_DEVICE_ATTR(temp2_max_alarm, S_IRUGO, show_alarm, NULL, 16); -static SENSOR_DEVICE_ATTR(temp3_max_alarm, S_IRUGO, show_alarm, NULL, 17); -static SENSOR_DEVICE_ATTR(temp4_max_alarm, S_IRUGO, show_alarm, NULL, 18); -static SENSOR_DEVICE_ATTR(temp5_max_alarm, S_IRUGO, show_alarm, NULL, 19); -static SENSOR_DEVICE_ATTR(temp6_max_alarm, S_IRUGO, show_alarm, NULL, 20); -static SENSOR_DEVICE_ATTR(temp7_max_alarm, S_IRUGO, show_alarm, NULL, 21); -static SENSOR_DEVICE_ATTR(temp8_max_alarm, S_IRUGO, show_alarm, NULL, 23); - -static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, show_alarm, NULL, 14); -static SENSOR_DEVICE_ATTR(temp2_crit_alarm, S_IRUGO, show_alarm, NULL, 8); -static SENSOR_DEVICE_ATTR(temp3_crit_alarm, S_IRUGO, show_alarm, NULL, 9); -static SENSOR_DEVICE_ATTR(temp4_crit_alarm, S_IRUGO, show_alarm, NULL, 10); -static SENSOR_DEVICE_ATTR(temp5_crit_alarm, S_IRUGO, show_alarm, NULL, 11); -static SENSOR_DEVICE_ATTR(temp6_crit_alarm, S_IRUGO, show_alarm, NULL, 12); -static SENSOR_DEVICE_ATTR(temp7_crit_alarm, S_IRUGO, show_alarm, NULL, 13); -static SENSOR_DEVICE_ATTR(temp8_crit_alarm, S_IRUGO, show_alarm, NULL, 15); - -static 
SENSOR_DEVICE_ATTR(temp2_fault, S_IRUGO, show_alarm, NULL, 1); -static SENSOR_DEVICE_ATTR(temp3_fault, S_IRUGO, show_alarm, NULL, 2); -static SENSOR_DEVICE_ATTR(temp4_fault, S_IRUGO, show_alarm, NULL, 3); -static SENSOR_DEVICE_ATTR(temp5_fault, S_IRUGO, show_alarm, NULL, 4); -static SENSOR_DEVICE_ATTR(temp6_fault, S_IRUGO, show_alarm, NULL, 5); -static SENSOR_DEVICE_ATTR(temp7_fault, S_IRUGO, show_alarm, NULL, 6); -static SENSOR_DEVICE_ATTR(temp8_fault, S_IRUGO, show_alarm, NULL, 7); +static SENSOR_DEVICE_ATTR_RO(temp1_input, temp_input, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp1_max, temp, 0, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp1_crit, temp, 0, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp2_input, temp_input, 1); +static SENSOR_DEVICE_ATTR_2_RW(temp2_max, temp, 1, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp2_crit, temp, 1, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp3_input, temp_input, 2); +static SENSOR_DEVICE_ATTR_2_RW(temp3_max, temp, 2, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp3_crit, temp, 2, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp4_input, temp_input, 3); +static SENSOR_DEVICE_ATTR_2_RW(temp4_max, temp, 3, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp4_crit, temp, 3, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp5_input, temp_input, 4); +static SENSOR_DEVICE_ATTR_2_RW(temp5_max, temp, 4, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp5_crit, temp, 4, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp6_input, temp_input, 5); +static SENSOR_DEVICE_ATTR_2_RW(temp6_max, temp, 5, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp6_crit, temp, 5, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp7_input, temp_input, 6); +static SENSOR_DEVICE_ATTR_2_RW(temp7_max, temp, 6, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp7_crit, temp, 6, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp8_input, temp_input, 7); +static SENSOR_DEVICE_ATTR_2_RW(temp8_max, temp, 7, MAX6697_TEMP_MAX); +static SENSOR_DEVICE_ATTR_2_RW(temp8_crit, temp, 7, MAX6697_TEMP_CRIT); + +static SENSOR_DEVICE_ATTR_RO(temp1_max_alarm, alarm, 22); +static SENSOR_DEVICE_ATTR_RO(temp2_max_alarm, alarm, 16); +static SENSOR_DEVICE_ATTR_RO(temp3_max_alarm, alarm, 17); +static SENSOR_DEVICE_ATTR_RO(temp4_max_alarm, alarm, 18); +static SENSOR_DEVICE_ATTR_RO(temp5_max_alarm, alarm, 19); +static SENSOR_DEVICE_ATTR_RO(temp6_max_alarm, alarm, 20); +static SENSOR_DEVICE_ATTR_RO(temp7_max_alarm, alarm, 21); +static SENSOR_DEVICE_ATTR_RO(temp8_max_alarm, alarm, 23); + +static SENSOR_DEVICE_ATTR_RO(temp1_crit_alarm, alarm, 14); +static SENSOR_DEVICE_ATTR_RO(temp2_crit_alarm, alarm, 8); +static SENSOR_DEVICE_ATTR_RO(temp3_crit_alarm, alarm, 9); +static SENSOR_DEVICE_ATTR_RO(temp4_crit_alarm, alarm, 10); +static SENSOR_DEVICE_ATTR_RO(temp5_crit_alarm, alarm, 11); +static SENSOR_DEVICE_ATTR_RO(temp6_crit_alarm, alarm, 12); +static SENSOR_DEVICE_ATTR_RO(temp7_crit_alarm, alarm, 13); +static SENSOR_DEVICE_ATTR_RO(temp8_crit_alarm, alarm, 15); + +static SENSOR_DEVICE_ATTR_RO(temp2_fault, alarm, 1); +static SENSOR_DEVICE_ATTR_RO(temp3_fault, alarm, 2); +static SENSOR_DEVICE_ATTR_RO(temp4_fault, alarm, 3); +static SENSOR_DEVICE_ATTR_RO(temp5_fault, alarm, 4); +static SENSOR_DEVICE_ATTR_RO(temp6_fault, alarm, 5); +static SENSOR_DEVICE_ATTR_RO(temp7_fault, alarm, 6); +static SENSOR_DEVICE_ATTR_RO(temp8_fault, alarm, 7); static DEVICE_ATTR(dummy, 0, NULL, NULL); diff --git a/drivers/hwmon/mlxreg-fan.c 
b/drivers/hwmon/mlxreg-fan.c index d8fa4bea4bc8..db8c6de0b6a0 100644 --- a/drivers/hwmon/mlxreg-fan.c +++ b/drivers/hwmon/mlxreg-fan.c @@ -51,7 +51,7 @@ */ #define MLXREG_FAN_GET_RPM(rval, d, s) (DIV_ROUND_CLOSEST(15000000 * 100, \ ((rval) + (s)) * (d))) -#define MLXREG_FAN_GET_FAULT(val, mask) (!((val) ^ (mask))) +#define MLXREG_FAN_GET_FAULT(val, mask) ((val) == (mask)) #define MLXREG_FAN_PWM_DUTY2STATE(duty) (DIV_ROUND_CLOSEST((duty) * \ MLXREG_FAN_MAX_STATE, \ MLXREG_FAN_MAX_DUTY)) diff --git a/drivers/hwmon/nct7802.c b/drivers/hwmon/nct7802.c index 2876c18ed841..6aa44492ae30 100644 --- a/drivers/hwmon/nct7802.c +++ b/drivers/hwmon/nct7802.c @@ -69,8 +69,8 @@ struct nct7802_data { struct mutex access_lock; /* for multi-byte read and write operations */ }; -static ssize_t show_temp_type(struct device *dev, struct device_attribute *attr, - char *buf) +static ssize_t temp_type_show(struct device *dev, + struct device_attribute *attr, char *buf) { struct nct7802_data *data = dev_get_drvdata(dev); struct sensor_device_attribute *sattr = to_sensor_dev_attr(attr); @@ -84,9 +84,9 @@ static ssize_t show_temp_type(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%u\n", (mode >> (2 * sattr->index) & 3) + 2); } -static ssize_t store_temp_type(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t temp_type_store(struct device *dev, + struct device_attribute *attr, const char *buf, + size_t count) { struct nct7802_data *data = dev_get_drvdata(dev); struct sensor_device_attribute *sattr = to_sensor_dev_attr(attr); @@ -105,8 +105,8 @@ static ssize_t store_temp_type(struct device *dev, return err ? : count; } -static ssize_t show_pwm_mode(struct device *dev, struct device_attribute *attr, - char *buf) +static ssize_t pwm_mode_show(struct device *dev, + struct device_attribute *attr, char *buf) { struct sensor_device_attribute *sattr = to_sensor_dev_attr(attr); struct nct7802_data *data = dev_get_drvdata(dev); @@ -123,7 +123,7 @@ static ssize_t show_pwm_mode(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%u\n", !(regval & (1 << sattr->index))); } -static ssize_t show_pwm(struct device *dev, struct device_attribute *devattr, +static ssize_t pwm_show(struct device *dev, struct device_attribute *devattr, char *buf) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -141,7 +141,7 @@ static ssize_t show_pwm(struct device *dev, struct device_attribute *devattr, return sprintf(buf, "%d\n", val); } -static ssize_t store_pwm(struct device *dev, struct device_attribute *devattr, +static ssize_t pwm_store(struct device *dev, struct device_attribute *devattr, const char *buf, size_t count) { struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); @@ -157,7 +157,7 @@ static ssize_t store_pwm(struct device *dev, struct device_attribute *devattr, return err ? 
: count; } -static ssize_t show_pwm_enable(struct device *dev, +static ssize_t pwm_enable_show(struct device *dev, struct device_attribute *attr, char *buf) { struct nct7802_data *data = dev_get_drvdata(dev); @@ -172,7 +172,7 @@ static ssize_t show_pwm_enable(struct device *dev, return sprintf(buf, "%u\n", enabled + 1); } -static ssize_t store_pwm_enable(struct device *dev, +static ssize_t pwm_enable_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { @@ -345,7 +345,7 @@ abort: return err; } -static ssize_t show_in(struct device *dev, struct device_attribute *attr, +static ssize_t in_show(struct device *dev, struct device_attribute *attr, char *buf) { struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); @@ -359,7 +359,7 @@ static ssize_t show_in(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d\n", voltage); } -static ssize_t store_in(struct device *dev, struct device_attribute *attr, +static ssize_t in_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); @@ -377,7 +377,7 @@ static ssize_t store_in(struct device *dev, struct device_attribute *attr, return err ? : count; } -static ssize_t show_temp(struct device *dev, struct device_attribute *attr, +static ssize_t temp_show(struct device *dev, struct device_attribute *attr, char *buf) { struct nct7802_data *data = dev_get_drvdata(dev); @@ -391,7 +391,7 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d\n", temp); } -static ssize_t store_temp(struct device *dev, struct device_attribute *attr, +static ssize_t temp_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); @@ -410,7 +410,7 @@ static ssize_t store_temp(struct device *dev, struct device_attribute *attr, return err ? : count; } -static ssize_t show_fan(struct device *dev, struct device_attribute *attr, +static ssize_t fan_show(struct device *dev, struct device_attribute *attr, char *buf) { struct sensor_device_attribute *sattr = to_sensor_dev_attr(attr); @@ -424,7 +424,7 @@ static ssize_t show_fan(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d\n", speed); } -static ssize_t show_fan_min(struct device *dev, struct device_attribute *attr, +static ssize_t fan_min_show(struct device *dev, struct device_attribute *attr, char *buf) { struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); @@ -438,8 +438,9 @@ static ssize_t show_fan_min(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d\n", speed); } -static ssize_t store_fan_min(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t fan_min_store(struct device *dev, + struct device_attribute *attr, const char *buf, + size_t count) { struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); struct nct7802_data *data = dev_get_drvdata(dev); @@ -454,7 +455,7 @@ static ssize_t store_fan_min(struct device *dev, struct device_attribute *attr, return err ? 
: count; } -static ssize_t show_alarm(struct device *dev, struct device_attribute *attr, +static ssize_t alarm_show(struct device *dev, struct device_attribute *attr, char *buf) { struct nct7802_data *data = dev_get_drvdata(dev); @@ -471,7 +472,7 @@ static ssize_t show_alarm(struct device *dev, struct device_attribute *attr, } static ssize_t -show_beep(struct device *dev, struct device_attribute *attr, char *buf) +beep_show(struct device *dev, struct device_attribute *attr, char *buf) { struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); struct nct7802_data *data = dev_get_drvdata(dev); @@ -486,7 +487,7 @@ show_beep(struct device *dev, struct device_attribute *attr, char *buf) } static ssize_t -store_beep(struct device *dev, struct device_attribute *attr, const char *buf, +beep_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); @@ -505,108 +506,64 @@ store_beep(struct device *dev, struct device_attribute *attr, const char *buf, return err ? : count; } -static SENSOR_DEVICE_ATTR(temp1_type, S_IRUGO | S_IWUSR, - show_temp_type, store_temp_type, 0); -static SENSOR_DEVICE_ATTR_2(temp1_input, S_IRUGO, show_temp, NULL, 0x01, - REG_TEMP_LSB); -static SENSOR_DEVICE_ATTR_2(temp1_min, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x31, 0); -static SENSOR_DEVICE_ATTR_2(temp1_max, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x30, 0); -static SENSOR_DEVICE_ATTR_2(temp1_crit, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x3a, 0); - -static SENSOR_DEVICE_ATTR(temp2_type, S_IRUGO | S_IWUSR, - show_temp_type, store_temp_type, 1); -static SENSOR_DEVICE_ATTR_2(temp2_input, S_IRUGO, show_temp, NULL, 0x02, - REG_TEMP_LSB); -static SENSOR_DEVICE_ATTR_2(temp2_min, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x33, 0); -static SENSOR_DEVICE_ATTR_2(temp2_max, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x32, 0); -static SENSOR_DEVICE_ATTR_2(temp2_crit, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x3b, 0); - -static SENSOR_DEVICE_ATTR(temp3_type, S_IRUGO | S_IWUSR, - show_temp_type, store_temp_type, 2); -static SENSOR_DEVICE_ATTR_2(temp3_input, S_IRUGO, show_temp, NULL, 0x03, - REG_TEMP_LSB); -static SENSOR_DEVICE_ATTR_2(temp3_min, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x35, 0); -static SENSOR_DEVICE_ATTR_2(temp3_max, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x34, 0); -static SENSOR_DEVICE_ATTR_2(temp3_crit, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x3c, 0); - -static SENSOR_DEVICE_ATTR_2(temp4_input, S_IRUGO, show_temp, NULL, 0x04, 0); -static SENSOR_DEVICE_ATTR_2(temp4_min, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x37, 0); -static SENSOR_DEVICE_ATTR_2(temp4_max, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x36, 0); -static SENSOR_DEVICE_ATTR_2(temp4_crit, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x3d, 0); - -static SENSOR_DEVICE_ATTR_2(temp5_input, S_IRUGO, show_temp, NULL, 0x06, - REG_TEMP_PECI_LSB); -static SENSOR_DEVICE_ATTR_2(temp5_min, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x39, 0); -static SENSOR_DEVICE_ATTR_2(temp5_max, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x38, 0); -static SENSOR_DEVICE_ATTR_2(temp5_crit, S_IRUGO | S_IWUSR, show_temp, - store_temp, 0x3e, 0); - -static SENSOR_DEVICE_ATTR_2(temp6_input, S_IRUGO, show_temp, NULL, 0x07, - REG_TEMP_PECI_LSB); - -static SENSOR_DEVICE_ATTR_2(temp1_min_alarm, S_IRUGO, show_alarm, NULL, - 0x18, 0); -static SENSOR_DEVICE_ATTR_2(temp2_min_alarm, S_IRUGO, show_alarm, NULL, - 0x18, 1); -static 
SENSOR_DEVICE_ATTR_2(temp3_min_alarm, S_IRUGO, show_alarm, NULL, - 0x18, 2); -static SENSOR_DEVICE_ATTR_2(temp4_min_alarm, S_IRUGO, show_alarm, NULL, - 0x18, 3); -static SENSOR_DEVICE_ATTR_2(temp5_min_alarm, S_IRUGO, show_alarm, NULL, - 0x18, 4); - -static SENSOR_DEVICE_ATTR_2(temp1_max_alarm, S_IRUGO, show_alarm, NULL, - 0x19, 0); -static SENSOR_DEVICE_ATTR_2(temp2_max_alarm, S_IRUGO, show_alarm, NULL, - 0x19, 1); -static SENSOR_DEVICE_ATTR_2(temp3_max_alarm, S_IRUGO, show_alarm, NULL, - 0x19, 2); -static SENSOR_DEVICE_ATTR_2(temp4_max_alarm, S_IRUGO, show_alarm, NULL, - 0x19, 3); -static SENSOR_DEVICE_ATTR_2(temp5_max_alarm, S_IRUGO, show_alarm, NULL, - 0x19, 4); - -static SENSOR_DEVICE_ATTR_2(temp1_crit_alarm, S_IRUGO, show_alarm, NULL, - 0x1b, 0); -static SENSOR_DEVICE_ATTR_2(temp2_crit_alarm, S_IRUGO, show_alarm, NULL, - 0x1b, 1); -static SENSOR_DEVICE_ATTR_2(temp3_crit_alarm, S_IRUGO, show_alarm, NULL, - 0x1b, 2); -static SENSOR_DEVICE_ATTR_2(temp4_crit_alarm, S_IRUGO, show_alarm, NULL, - 0x1b, 3); -static SENSOR_DEVICE_ATTR_2(temp5_crit_alarm, S_IRUGO, show_alarm, NULL, - 0x1b, 4); - -static SENSOR_DEVICE_ATTR_2(temp1_fault, S_IRUGO, show_alarm, NULL, 0x17, 0); -static SENSOR_DEVICE_ATTR_2(temp2_fault, S_IRUGO, show_alarm, NULL, 0x17, 1); -static SENSOR_DEVICE_ATTR_2(temp3_fault, S_IRUGO, show_alarm, NULL, 0x17, 2); - -static SENSOR_DEVICE_ATTR_2(temp1_beep, S_IRUGO | S_IWUSR, show_beep, - store_beep, 0x5c, 0); -static SENSOR_DEVICE_ATTR_2(temp2_beep, S_IRUGO | S_IWUSR, show_beep, - store_beep, 0x5c, 1); -static SENSOR_DEVICE_ATTR_2(temp3_beep, S_IRUGO | S_IWUSR, show_beep, - store_beep, 0x5c, 2); -static SENSOR_DEVICE_ATTR_2(temp4_beep, S_IRUGO | S_IWUSR, show_beep, - store_beep, 0x5c, 3); -static SENSOR_DEVICE_ATTR_2(temp5_beep, S_IRUGO | S_IWUSR, show_beep, - store_beep, 0x5c, 4); -static SENSOR_DEVICE_ATTR_2(temp6_beep, S_IRUGO | S_IWUSR, show_beep, - store_beep, 0x5c, 5); +static SENSOR_DEVICE_ATTR_RW(temp1_type, temp_type, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp1_input, temp, 0x01, REG_TEMP_LSB); +static SENSOR_DEVICE_ATTR_2_RW(temp1_min, temp, 0x31, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp1_max, temp, 0x30, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp1_crit, temp, 0x3a, 0); + +static SENSOR_DEVICE_ATTR_RW(temp2_type, temp_type, 1); +static SENSOR_DEVICE_ATTR_2_RO(temp2_input, temp, 0x02, REG_TEMP_LSB); +static SENSOR_DEVICE_ATTR_2_RW(temp2_min, temp, 0x33, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp2_max, temp, 0x32, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp2_crit, temp, 0x3b, 0); + +static SENSOR_DEVICE_ATTR_RW(temp3_type, temp_type, 2); +static SENSOR_DEVICE_ATTR_2_RO(temp3_input, temp, 0x03, REG_TEMP_LSB); +static SENSOR_DEVICE_ATTR_2_RW(temp3_min, temp, 0x35, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp3_max, temp, 0x34, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp3_crit, temp, 0x3c, 0); + +static SENSOR_DEVICE_ATTR_2_RO(temp4_input, temp, 0x04, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp4_min, temp, 0x37, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp4_max, temp, 0x36, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp4_crit, temp, 0x3d, 0); + +static SENSOR_DEVICE_ATTR_2_RO(temp5_input, temp, 0x06, REG_TEMP_PECI_LSB); +static SENSOR_DEVICE_ATTR_2_RW(temp5_min, temp, 0x39, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp5_max, temp, 0x38, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp5_crit, temp, 0x3e, 0); + +static SENSOR_DEVICE_ATTR_2_RO(temp6_input, temp, 0x07, REG_TEMP_PECI_LSB); + +static SENSOR_DEVICE_ATTR_2_RO(temp1_min_alarm, alarm, 0x18, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp2_min_alarm, alarm, 0x18, 1); 
+static SENSOR_DEVICE_ATTR_2_RO(temp3_min_alarm, alarm, 0x18, 2); +static SENSOR_DEVICE_ATTR_2_RO(temp4_min_alarm, alarm, 0x18, 3); +static SENSOR_DEVICE_ATTR_2_RO(temp5_min_alarm, alarm, 0x18, 4); + +static SENSOR_DEVICE_ATTR_2_RO(temp1_max_alarm, alarm, 0x19, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp2_max_alarm, alarm, 0x19, 1); +static SENSOR_DEVICE_ATTR_2_RO(temp3_max_alarm, alarm, 0x19, 2); +static SENSOR_DEVICE_ATTR_2_RO(temp4_max_alarm, alarm, 0x19, 3); +static SENSOR_DEVICE_ATTR_2_RO(temp5_max_alarm, alarm, 0x19, 4); + +static SENSOR_DEVICE_ATTR_2_RO(temp1_crit_alarm, alarm, 0x1b, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp2_crit_alarm, alarm, 0x1b, 1); +static SENSOR_DEVICE_ATTR_2_RO(temp3_crit_alarm, alarm, 0x1b, 2); +static SENSOR_DEVICE_ATTR_2_RO(temp4_crit_alarm, alarm, 0x1b, 3); +static SENSOR_DEVICE_ATTR_2_RO(temp5_crit_alarm, alarm, 0x1b, 4); + +static SENSOR_DEVICE_ATTR_2_RO(temp1_fault, alarm, 0x17, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp2_fault, alarm, 0x17, 1); +static SENSOR_DEVICE_ATTR_2_RO(temp3_fault, alarm, 0x17, 2); + +static SENSOR_DEVICE_ATTR_2_RW(temp1_beep, beep, 0x5c, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp2_beep, beep, 0x5c, 1); +static SENSOR_DEVICE_ATTR_2_RW(temp3_beep, beep, 0x5c, 2); +static SENSOR_DEVICE_ATTR_2_RW(temp4_beep, beep, 0x5c, 3); +static SENSOR_DEVICE_ATTR_2_RW(temp5_beep, beep, 0x5c, 4); +static SENSOR_DEVICE_ATTR_2_RW(temp6_beep, beep, 0x5c, 5); static struct attribute *nct7802_temp_attrs[] = { &sensor_dev_attr_temp1_type.dev_attr.attr, @@ -709,43 +666,31 @@ static const struct attribute_group nct7802_temp_group = { .is_visible = nct7802_temp_is_visible, }; -static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, show_in, NULL, 0, 0); -static SENSOR_DEVICE_ATTR_2(in0_min, S_IRUGO | S_IWUSR, show_in, store_in, - 0, 1); -static SENSOR_DEVICE_ATTR_2(in0_max, S_IRUGO | S_IWUSR, show_in, store_in, - 0, 2); -static SENSOR_DEVICE_ATTR_2(in0_alarm, S_IRUGO, show_alarm, NULL, 0x1e, 3); -static SENSOR_DEVICE_ATTR_2(in0_beep, S_IRUGO | S_IWUSR, show_beep, store_beep, - 0x5a, 3); - -static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, show_in, NULL, 1, 0); - -static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, show_in, NULL, 2, 0); -static SENSOR_DEVICE_ATTR_2(in2_min, S_IRUGO | S_IWUSR, show_in, store_in, - 2, 1); -static SENSOR_DEVICE_ATTR_2(in2_max, S_IRUGO | S_IWUSR, show_in, store_in, - 2, 2); -static SENSOR_DEVICE_ATTR_2(in2_alarm, S_IRUGO, show_alarm, NULL, 0x1e, 0); -static SENSOR_DEVICE_ATTR_2(in2_beep, S_IRUGO | S_IWUSR, show_beep, store_beep, - 0x5a, 0); - -static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, show_in, NULL, 3, 0); -static SENSOR_DEVICE_ATTR_2(in3_min, S_IRUGO | S_IWUSR, show_in, store_in, - 3, 1); -static SENSOR_DEVICE_ATTR_2(in3_max, S_IRUGO | S_IWUSR, show_in, store_in, - 3, 2); -static SENSOR_DEVICE_ATTR_2(in3_alarm, S_IRUGO, show_alarm, NULL, 0x1e, 1); -static SENSOR_DEVICE_ATTR_2(in3_beep, S_IRUGO | S_IWUSR, show_beep, store_beep, - 0x5a, 1); - -static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, show_in, NULL, 4, 0); -static SENSOR_DEVICE_ATTR_2(in4_min, S_IRUGO | S_IWUSR, show_in, store_in, - 4, 1); -static SENSOR_DEVICE_ATTR_2(in4_max, S_IRUGO | S_IWUSR, show_in, store_in, - 4, 2); -static SENSOR_DEVICE_ATTR_2(in4_alarm, S_IRUGO, show_alarm, NULL, 0x1e, 2); -static SENSOR_DEVICE_ATTR_2(in4_beep, S_IRUGO | S_IWUSR, show_beep, store_beep, - 0x5a, 2); +static SENSOR_DEVICE_ATTR_2_RO(in0_input, in, 0, 0); +static SENSOR_DEVICE_ATTR_2_RW(in0_min, in, 0, 1); +static SENSOR_DEVICE_ATTR_2_RW(in0_max, in, 0, 2); +static 
SENSOR_DEVICE_ATTR_2_RO(in0_alarm, alarm, 0x1e, 3); +static SENSOR_DEVICE_ATTR_2_RW(in0_beep, beep, 0x5a, 3); + +static SENSOR_DEVICE_ATTR_2_RO(in1_input, in, 1, 0); + +static SENSOR_DEVICE_ATTR_2_RO(in2_input, in, 2, 0); +static SENSOR_DEVICE_ATTR_2_RW(in2_min, in, 2, 1); +static SENSOR_DEVICE_ATTR_2_RW(in2_max, in, 2, 2); +static SENSOR_DEVICE_ATTR_2_RO(in2_alarm, alarm, 0x1e, 0); +static SENSOR_DEVICE_ATTR_2_RW(in2_beep, beep, 0x5a, 0); + +static SENSOR_DEVICE_ATTR_2_RO(in3_input, in, 3, 0); +static SENSOR_DEVICE_ATTR_2_RW(in3_min, in, 3, 1); +static SENSOR_DEVICE_ATTR_2_RW(in3_max, in, 3, 2); +static SENSOR_DEVICE_ATTR_2_RO(in3_alarm, alarm, 0x1e, 1); +static SENSOR_DEVICE_ATTR_2_RW(in3_beep, beep, 0x5a, 1); + +static SENSOR_DEVICE_ATTR_2_RO(in4_input, in, 4, 0); +static SENSOR_DEVICE_ATTR_2_RW(in4_min, in, 4, 1); +static SENSOR_DEVICE_ATTR_2_RW(in4_max, in, 4, 2); +static SENSOR_DEVICE_ATTR_2_RO(in4_alarm, alarm, 0x1e, 2); +static SENSOR_DEVICE_ATTR_2_RW(in4_beep, beep, 0x5a, 2); static struct attribute *nct7802_in_attrs[] = { &sensor_dev_attr_in0_input.dev_attr.attr, @@ -807,45 +752,33 @@ static const struct attribute_group nct7802_in_group = { .is_visible = nct7802_in_is_visible, }; -static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, show_fan, NULL, 0x10); -static SENSOR_DEVICE_ATTR_2(fan1_min, S_IRUGO | S_IWUSR, show_fan_min, - store_fan_min, 0x49, 0x4c); -static SENSOR_DEVICE_ATTR_2(fan1_alarm, S_IRUGO, show_alarm, NULL, 0x1a, 0); -static SENSOR_DEVICE_ATTR_2(fan1_beep, S_IRUGO | S_IWUSR, show_beep, store_beep, - 0x5b, 0); -static SENSOR_DEVICE_ATTR(fan2_input, S_IRUGO, show_fan, NULL, 0x11); -static SENSOR_DEVICE_ATTR_2(fan2_min, S_IRUGO | S_IWUSR, show_fan_min, - store_fan_min, 0x4a, 0x4d); -static SENSOR_DEVICE_ATTR_2(fan2_alarm, S_IRUGO, show_alarm, NULL, 0x1a, 1); -static SENSOR_DEVICE_ATTR_2(fan2_beep, S_IRUGO | S_IWUSR, show_beep, store_beep, - 0x5b, 1); -static SENSOR_DEVICE_ATTR(fan3_input, S_IRUGO, show_fan, NULL, 0x12); -static SENSOR_DEVICE_ATTR_2(fan3_min, S_IRUGO | S_IWUSR, show_fan_min, - store_fan_min, 0x4b, 0x4e); -static SENSOR_DEVICE_ATTR_2(fan3_alarm, S_IRUGO, show_alarm, NULL, 0x1a, 2); -static SENSOR_DEVICE_ATTR_2(fan3_beep, S_IRUGO | S_IWUSR, show_beep, store_beep, - 0x5b, 2); +static SENSOR_DEVICE_ATTR_RO(fan1_input, fan, 0x10); +static SENSOR_DEVICE_ATTR_2_RW(fan1_min, fan_min, 0x49, 0x4c); +static SENSOR_DEVICE_ATTR_2_RO(fan1_alarm, alarm, 0x1a, 0); +static SENSOR_DEVICE_ATTR_2_RW(fan1_beep, beep, 0x5b, 0); +static SENSOR_DEVICE_ATTR_RO(fan2_input, fan, 0x11); +static SENSOR_DEVICE_ATTR_2_RW(fan2_min, fan_min, 0x4a, 0x4d); +static SENSOR_DEVICE_ATTR_2_RO(fan2_alarm, alarm, 0x1a, 1); +static SENSOR_DEVICE_ATTR_2_RW(fan2_beep, beep, 0x5b, 1); +static SENSOR_DEVICE_ATTR_RO(fan3_input, fan, 0x12); +static SENSOR_DEVICE_ATTR_2_RW(fan3_min, fan_min, 0x4b, 0x4e); +static SENSOR_DEVICE_ATTR_2_RO(fan3_alarm, alarm, 0x1a, 2); +static SENSOR_DEVICE_ATTR_2_RW(fan3_beep, beep, 0x5b, 2); /* 7.2.89 Fan Control Output Type */ -static SENSOR_DEVICE_ATTR(pwm1_mode, S_IRUGO, show_pwm_mode, NULL, 0); -static SENSOR_DEVICE_ATTR(pwm2_mode, S_IRUGO, show_pwm_mode, NULL, 1); -static SENSOR_DEVICE_ATTR(pwm3_mode, S_IRUGO, show_pwm_mode, NULL, 2); +static SENSOR_DEVICE_ATTR_RO(pwm1_mode, pwm_mode, 0); +static SENSOR_DEVICE_ATTR_RO(pwm2_mode, pwm_mode, 1); +static SENSOR_DEVICE_ATTR_RO(pwm3_mode, pwm_mode, 2); /* 7.2.91... 
Fan Control Output Value */ -static SENSOR_DEVICE_ATTR(pwm1, S_IRUGO | S_IWUSR, show_pwm, store_pwm, - REG_PWM(0)); -static SENSOR_DEVICE_ATTR(pwm2, S_IRUGO | S_IWUSR, show_pwm, store_pwm, - REG_PWM(1)); -static SENSOR_DEVICE_ATTR(pwm3, S_IRUGO | S_IWUSR, show_pwm, store_pwm, - REG_PWM(2)); +static SENSOR_DEVICE_ATTR_RW(pwm1, pwm, REG_PWM(0)); +static SENSOR_DEVICE_ATTR_RW(pwm2, pwm, REG_PWM(1)); +static SENSOR_DEVICE_ATTR_RW(pwm3, pwm, REG_PWM(2)); /* 7.2.95... Temperature to Fan mapping Relationships Register */ -static SENSOR_DEVICE_ATTR(pwm1_enable, S_IRUGO | S_IWUSR, show_pwm_enable, - store_pwm_enable, 0); -static SENSOR_DEVICE_ATTR(pwm2_enable, S_IRUGO | S_IWUSR, show_pwm_enable, - store_pwm_enable, 1); -static SENSOR_DEVICE_ATTR(pwm3_enable, S_IRUGO | S_IWUSR, show_pwm_enable, - store_pwm_enable, 2); +static SENSOR_DEVICE_ATTR_RW(pwm1_enable, pwm_enable, 0); +static SENSOR_DEVICE_ATTR_RW(pwm2_enable, pwm_enable, 1); +static SENSOR_DEVICE_ATTR_RW(pwm3_enable, pwm_enable, 2); static struct attribute *nct7802_fan_attrs[] = { &sensor_dev_attr_fan1_input.dev_attr.attr, @@ -903,73 +836,46 @@ static const struct attribute_group nct7802_pwm_group = { }; /* 7.2.115... 0x80-0x83, 0x84 Temperature (X-axis) transition */ -static SENSOR_DEVICE_ATTR_2(pwm1_auto_point1_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x80, 0); -static SENSOR_DEVICE_ATTR_2(pwm1_auto_point2_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x81, 0); -static SENSOR_DEVICE_ATTR_2(pwm1_auto_point3_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x82, 0); -static SENSOR_DEVICE_ATTR_2(pwm1_auto_point4_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x83, 0); -static SENSOR_DEVICE_ATTR_2(pwm1_auto_point5_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x84, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_point1_temp, temp, 0x80, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_point2_temp, temp, 0x81, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_point3_temp, temp, 0x82, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_point4_temp, temp, 0x83, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm1_auto_point5_temp, temp, 0x84, 0); /* 7.2.120... 
0x85-0x88 PWM (Y-axis) transition */ -static SENSOR_DEVICE_ATTR(pwm1_auto_point1_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x85); -static SENSOR_DEVICE_ATTR(pwm1_auto_point2_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x86); -static SENSOR_DEVICE_ATTR(pwm1_auto_point3_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x87); -static SENSOR_DEVICE_ATTR(pwm1_auto_point4_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x88); -static SENSOR_DEVICE_ATTR(pwm1_auto_point5_pwm, S_IRUGO, show_pwm, NULL, 0); +static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point1_pwm, pwm, 0x85); +static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point2_pwm, pwm, 0x86); +static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point3_pwm, pwm, 0x87); +static SENSOR_DEVICE_ATTR_RW(pwm1_auto_point4_pwm, pwm, 0x88); +static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point5_pwm, pwm, 0); /* 7.2.124 Table 2 X-axis Transition Point 1 Register */ -static SENSOR_DEVICE_ATTR_2(pwm2_auto_point1_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x90, 0); -static SENSOR_DEVICE_ATTR_2(pwm2_auto_point2_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x91, 0); -static SENSOR_DEVICE_ATTR_2(pwm2_auto_point3_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x92, 0); -static SENSOR_DEVICE_ATTR_2(pwm2_auto_point4_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x93, 0); -static SENSOR_DEVICE_ATTR_2(pwm2_auto_point5_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0x94, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_point1_temp, temp, 0x90, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_point2_temp, temp, 0x91, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_point3_temp, temp, 0x92, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_point4_temp, temp, 0x93, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm2_auto_point5_temp, temp, 0x94, 0); /* 7.2.129 Table 2 Y-axis Transition Point 1 Register */ -static SENSOR_DEVICE_ATTR(pwm2_auto_point1_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x95); -static SENSOR_DEVICE_ATTR(pwm2_auto_point2_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x96); -static SENSOR_DEVICE_ATTR(pwm2_auto_point3_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x97); -static SENSOR_DEVICE_ATTR(pwm2_auto_point4_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0x98); -static SENSOR_DEVICE_ATTR(pwm2_auto_point5_pwm, S_IRUGO, show_pwm, NULL, 0); +static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point1_pwm, pwm, 0x95); +static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point2_pwm, pwm, 0x96); +static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point3_pwm, pwm, 0x97); +static SENSOR_DEVICE_ATTR_RW(pwm2_auto_point4_pwm, pwm, 0x98); +static SENSOR_DEVICE_ATTR_RO(pwm2_auto_point5_pwm, pwm, 0); /* 7.2.133 Table 3 X-axis Transition Point 1 Register */ -static SENSOR_DEVICE_ATTR_2(pwm3_auto_point1_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0xA0, 0); -static SENSOR_DEVICE_ATTR_2(pwm3_auto_point2_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0xA1, 0); -static SENSOR_DEVICE_ATTR_2(pwm3_auto_point3_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0xA2, 0); -static SENSOR_DEVICE_ATTR_2(pwm3_auto_point4_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0xA3, 0); -static SENSOR_DEVICE_ATTR_2(pwm3_auto_point5_temp, S_IRUGO | S_IWUSR, - show_temp, store_temp, 0xA4, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_point1_temp, temp, 0xA0, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_point2_temp, temp, 0xA1, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_point3_temp, temp, 0xA2, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_point4_temp, temp, 0xA3, 0); +static SENSOR_DEVICE_ATTR_2_RW(pwm3_auto_point5_temp, temp, 0xA4, 0); /* 
7.2.138 Table 3 Y-axis Transition Point 1 Register */ -static SENSOR_DEVICE_ATTR(pwm3_auto_point1_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0xA5); -static SENSOR_DEVICE_ATTR(pwm3_auto_point2_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0xA6); -static SENSOR_DEVICE_ATTR(pwm3_auto_point3_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0xA7); -static SENSOR_DEVICE_ATTR(pwm3_auto_point4_pwm, S_IRUGO | S_IWUSR, - show_pwm, store_pwm, 0xA8); -static SENSOR_DEVICE_ATTR(pwm3_auto_point5_pwm, S_IRUGO, show_pwm, NULL, 0); +static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point1_pwm, pwm, 0xA5); +static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point2_pwm, pwm, 0xA6); +static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point3_pwm, pwm, 0xA7); +static SENSOR_DEVICE_ATTR_RW(pwm3_auto_point4_pwm, pwm, 0xA8); +static SENSOR_DEVICE_ATTR_RO(pwm3_auto_point5_pwm, pwm, 0); static struct attribute *nct7802_auto_point_attrs[] = { &sensor_dev_attr_pwm1_auto_point1_temp.dev_attr.attr, diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c index c52d07c6b49f..2823aff82c82 100644 --- a/drivers/hwmon/ntc_thermistor.c +++ b/drivers/hwmon/ntc_thermistor.c @@ -45,17 +45,34 @@ struct ntc_compensation { unsigned int ohm; }; -/* Order matters, ntc_match references the entries by index */ +/* + * Used as index in a zero-terminated array, holes not allowed so + * that NTC_LAST is the first empty array entry. + */ +enum { + NTC_B57330V2103, + NTC_B57891S0103, + NTC_NCP03WB473, + NTC_NCP03WF104, + NTC_NCP15WB473, + NTC_NCP15WL333, + NTC_NCP15XH103, + NTC_NCP18WB473, + NTC_NCP21WB473, + NTC_LAST, +}; + static const struct platform_device_id ntc_thermistor_id[] = { - { "ncp15wb473", TYPE_NCPXXWB473 }, - { "ncp18wb473", TYPE_NCPXXWB473 }, - { "ncp21wb473", TYPE_NCPXXWB473 }, - { "ncp03wb473", TYPE_NCPXXWB473 }, - { "ncp15wl333", TYPE_NCPXXWL333 }, - { "b57330v2103", TYPE_B57330V2103}, - { "ncp03wf104", TYPE_NCPXXWF104 }, - { "ncp15xh103", TYPE_NCPXXXH103 }, - { }, + [NTC_B57330V2103] = { "b57330v2103", TYPE_B57330V2103 }, + [NTC_B57891S0103] = { "b57891s0103", TYPE_B57891S0103 }, + [NTC_NCP03WB473] = { "ncp03wb473", TYPE_NCPXXWB473 }, + [NTC_NCP03WF104] = { "ncp03wf104", TYPE_NCPXXWF104 }, + [NTC_NCP15WB473] = { "ncp15wb473", TYPE_NCPXXWB473 }, + [NTC_NCP15WL333] = { "ncp15wl333", TYPE_NCPXXWL333 }, + [NTC_NCP15XH103] = { "ncp15xh103", TYPE_NCPXXXH103 }, + [NTC_NCP18WB473] = { "ncp18wb473", TYPE_NCPXXWB473 }, + [NTC_NCP21WB473] = { "ncp21wb473", TYPE_NCPXXWB473 }, + [NTC_LAST] = { }, }; /* @@ -212,8 +229,8 @@ static const struct ntc_compensation ncpXXxh103[] = { }; /* - * The following compensation table is from the specification of EPCOS NTC - * Thermistors Datasheet + * The following compensation tables are from the specifications in EPCOS NTC + * Thermistors Datasheets */ static const struct ntc_compensation b57330v2103[] = { { .temp_c = -40, .ohm = 190030 }, @@ -252,6 +269,69 @@ static const struct ntc_compensation b57330v2103[] = { { .temp_c = 125, .ohm = 531 }, }; +static const struct ntc_compensation b57891s0103[] = { + { .temp_c = -55.0, .ohm = 878900 }, + { .temp_c = -50.0, .ohm = 617590 }, + { .temp_c = -45.0, .ohm = 439340 }, + { .temp_c = -40.0, .ohm = 316180 }, + { .temp_c = -35.0, .ohm = 230060 }, + { .temp_c = -30.0, .ohm = 169150 }, + { .temp_c = -25.0, .ohm = 125550 }, + { .temp_c = -20.0, .ohm = 94143 }, + { .temp_c = -15.0, .ohm = 71172 }, + { .temp_c = -10.0, .ohm = 54308 }, + { .temp_c = -5.0, .ohm = 41505 }, + { .temp_c = 0.0, .ohm = 32014 }, + { .temp_c = 5.0, .ohm = 25011 }, + { .temp_c = 10.0, .ohm = 19691 
}, + { .temp_c = 15.0, .ohm = 15618 }, + { .temp_c = 20.0, .ohm = 12474 }, + { .temp_c = 25.0, .ohm = 10000 }, + { .temp_c = 30.0, .ohm = 8080 }, + { .temp_c = 35.0, .ohm = 6569 }, + { .temp_c = 40.0, .ohm = 5372 }, + { .temp_c = 45.0, .ohm = 4424 }, + { .temp_c = 50.0, .ohm = 3661 }, + { .temp_c = 55.0, .ohm = 3039 }, + { .temp_c = 60.0, .ohm = 2536 }, + { .temp_c = 65.0, .ohm = 2128 }, + { .temp_c = 70.0, .ohm = 1794 }, + { .temp_c = 75.0, .ohm = 1518 }, + { .temp_c = 80.0, .ohm = 1290 }, + { .temp_c = 85.0, .ohm = 1100 }, + { .temp_c = 90.0, .ohm = 942 }, + { .temp_c = 95.0, .ohm = 809 }, + { .temp_c = 100.0, .ohm = 697 }, + { .temp_c = 105.0, .ohm = 604 }, + { .temp_c = 110.0, .ohm = 525 }, + { .temp_c = 115.0, .ohm = 457 }, + { .temp_c = 120.0, .ohm = 400 }, + { .temp_c = 125.0, .ohm = 351 }, + { .temp_c = 130.0, .ohm = 308 }, + { .temp_c = 135.0, .ohm = 272 }, + { .temp_c = 140.0, .ohm = 240 }, + { .temp_c = 145.0, .ohm = 213 }, + { .temp_c = 150.0, .ohm = 189 }, + { .temp_c = 155.0, .ohm = 168 }, +}; + +struct ntc_type { + const struct ntc_compensation *comp; + int n_comp; +}; + +#define NTC_TYPE(ntc, compensation) \ +[(ntc)] = { .comp = (compensation), .n_comp = ARRAY_SIZE(compensation) } + +static const struct ntc_type ntc_type[] = { + NTC_TYPE(TYPE_B57330V2103, b57330v2103), + NTC_TYPE(TYPE_B57891S0103, b57891s0103), + NTC_TYPE(TYPE_NCPXXWB473, ncpXXwb473), + NTC_TYPE(TYPE_NCPXXWF104, ncpXXwf104), + NTC_TYPE(TYPE_NCPXXWL333, ncpXXwl333), + NTC_TYPE(TYPE_NCPXXXH103, ncpXXxh103), +}; + struct ntc_data { struct ntc_thermistor_platform_data *pdata; const struct ntc_compensation *comp; @@ -280,34 +360,36 @@ static int ntc_adc_iio_read(struct ntc_thermistor_platform_data *pdata) } static const struct of_device_id ntc_match[] = { - { .compatible = "murata,ncp15wb473", - .data = &ntc_thermistor_id[0] }, - { .compatible = "murata,ncp18wb473", - .data = &ntc_thermistor_id[1] }, - { .compatible = "murata,ncp21wb473", - .data = &ntc_thermistor_id[2] }, - { .compatible = "murata,ncp03wb473", - .data = &ntc_thermistor_id[3] }, - { .compatible = "murata,ncp15wl333", - .data = &ntc_thermistor_id[4] }, { .compatible = "epcos,b57330v2103", - .data = &ntc_thermistor_id[5]}, + .data = &ntc_thermistor_id[NTC_B57330V2103]}, + { .compatible = "epcos,b57891s0103", + .data = &ntc_thermistor_id[NTC_B57891S0103] }, + { .compatible = "murata,ncp03wb473", + .data = &ntc_thermistor_id[NTC_NCP03WB473] }, { .compatible = "murata,ncp03wf104", - .data = &ntc_thermistor_id[6] }, + .data = &ntc_thermistor_id[NTC_NCP03WF104] }, + { .compatible = "murata,ncp15wb473", + .data = &ntc_thermistor_id[NTC_NCP15WB473] }, + { .compatible = "murata,ncp15wl333", + .data = &ntc_thermistor_id[NTC_NCP15WL333] }, { .compatible = "murata,ncp15xh103", - .data = &ntc_thermistor_id[7] }, + .data = &ntc_thermistor_id[NTC_NCP15XH103] }, + { .compatible = "murata,ncp18wb473", + .data = &ntc_thermistor_id[NTC_NCP18WB473] }, + { .compatible = "murata,ncp21wb473", + .data = &ntc_thermistor_id[NTC_NCP21WB473] }, /* Usage of vendor name "ntc" is deprecated */ + { .compatible = "ntc,ncp03wb473", + .data = &ntc_thermistor_id[NTC_NCP03WB473] }, { .compatible = "ntc,ncp15wb473", - .data = &ntc_thermistor_id[0] }, + .data = &ntc_thermistor_id[NTC_NCP15WB473] }, + { .compatible = "ntc,ncp15wl333", + .data = &ntc_thermistor_id[NTC_NCP15WL333] }, { .compatible = "ntc,ncp18wb473", - .data = &ntc_thermistor_id[1] }, + .data = &ntc_thermistor_id[NTC_NCP18WB473] }, { .compatible = "ntc,ncp21wb473", - .data = &ntc_thermistor_id[2] }, - { .compatible = 
"ntc,ncp03wb473", - .data = &ntc_thermistor_id[3] }, - { .compatible = "ntc,ncp15wl333", - .data = &ntc_thermistor_id[4] }, + .data = &ntc_thermistor_id[NTC_NCP21WB473] }, { }, }; MODULE_DEVICE_TABLE(of, ntc_match); @@ -519,14 +601,14 @@ static int ntc_read_temp(void *data, int *temp) return 0; } -static ssize_t ntc_show_type(struct device *dev, - struct device_attribute *attr, char *buf) +static ssize_t ntc_type_show(struct device *dev, + struct device_attribute *attr, char *buf) { return sprintf(buf, "4\n"); } -static ssize_t ntc_show_temp(struct device *dev, - struct device_attribute *attr, char *buf) +static ssize_t ntc_temp_show(struct device *dev, + struct device_attribute *attr, char *buf) { struct ntc_data *data = dev_get_drvdata(dev); int ohm; @@ -538,8 +620,8 @@ static ssize_t ntc_show_temp(struct device *dev, return sprintf(buf, "%d\n", get_temp_mc(data, ohm)); } -static SENSOR_DEVICE_ATTR(temp1_type, S_IRUGO, ntc_show_type, NULL, 0); -static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, ntc_show_temp, NULL, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_type, ntc_type, 0); +static SENSOR_DEVICE_ATTR_RO(temp1_input, ntc_temp, 0); static struct attribute *ntc_attrs[] = { &sensor_dev_attr_temp1_type.dev_attr.attr, @@ -606,33 +688,15 @@ static int ntc_thermistor_probe(struct platform_device *pdev) data->pdata = pdata; - switch (pdev_id->driver_data) { - case TYPE_NCPXXWB473: - data->comp = ncpXXwb473; - data->n_comp = ARRAY_SIZE(ncpXXwb473); - break; - case TYPE_NCPXXWL333: - data->comp = ncpXXwl333; - data->n_comp = ARRAY_SIZE(ncpXXwl333); - break; - case TYPE_B57330V2103: - data->comp = b57330v2103; - data->n_comp = ARRAY_SIZE(b57330v2103); - break; - case TYPE_NCPXXWF104: - data->comp = ncpXXwf104; - data->n_comp = ARRAY_SIZE(ncpXXwf104); - break; - case TYPE_NCPXXXH103: - data->comp = ncpXXxh103; - data->n_comp = ARRAY_SIZE(ncpXXxh103); - break; - default: + if (pdev_id->driver_data >= ARRAY_SIZE(ntc_type)) { dev_err(dev, "Unknown device type: %lu(%s)\n", pdev_id->driver_data, pdev_id->name); return -EINVAL; } + data->comp = ntc_type[pdev_id->driver_data].comp; + data->n_comp = ntc_type[pdev_id->driver_data].n_comp; + hwmon_dev = devm_hwmon_device_register_with_groups(dev, pdev_id->name, data, ntc_groups); if (IS_ERR(hwmon_dev)) { diff --git a/drivers/hwmon/occ/Kconfig b/drivers/hwmon/occ/Kconfig new file mode 100644 index 000000000000..66686628fb53 --- /dev/null +++ b/drivers/hwmon/occ/Kconfig @@ -0,0 +1,31 @@ +# +# On-Chip Controller configuration +# + +config SENSORS_OCC_P8_I2C + tristate "POWER8 OCC through I2C" + depends on I2C + select SENSORS_OCC + help + This option enables support for monitoring sensors provided by the + On-Chip Controller (OCC) on a POWER8 processor. Communications with + the OCC are established through I2C bus. + + This driver can also be built as a module. If so, the module will be + called occ-p8-hwmon. + +config SENSORS_OCC_P9_SBE + tristate "POWER9 OCC through SBE" + depends on FSI_OCC + select SENSORS_OCC + help + This option enables support for monitoring sensors provided by the + On-Chip Controller (OCC) on a POWER9 processor. Communications with + the OCC are established through SBE fifo on an FSI bus. + + This driver can also be built as a module. If so, the module will be + called occ-p9-hwmon. 
+ +config SENSORS_OCC + bool "POWER On-Chip Controller" + depends on SENSORS_OCC_P8_I2C || SENSORS_OCC_P9_SBE diff --git a/drivers/hwmon/occ/Makefile b/drivers/hwmon/occ/Makefile new file mode 100644 index 000000000000..3fec12ddc7e7 --- /dev/null +++ b/drivers/hwmon/occ/Makefile @@ -0,0 +1,5 @@ +occ-p8-hwmon-objs := common.o sysfs.o p8_i2c.o +occ-p9-hwmon-objs := common.o sysfs.o p9_sbe.o + +obj-$(CONFIG_SENSORS_OCC_P8_I2C) += occ-p8-hwmon.o +obj-$(CONFIG_SENSORS_OCC_P9_SBE) += occ-p9-hwmon.o diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c new file mode 100644 index 000000000000..423903f87955 --- /dev/null +++ b/drivers/hwmon/occ/common.c @@ -0,0 +1,1098 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "common.h" + +#define EXTN_FLAG_SENSOR_ID BIT(7) + +#define OCC_ERROR_COUNT_THRESHOLD 2 /* required by OCC spec */ + +#define OCC_STATE_SAFE 4 +#define OCC_SAFE_TIMEOUT msecs_to_jiffies(60000) /* 1 min */ + +#define OCC_UPDATE_FREQUENCY msecs_to_jiffies(1000) + +#define OCC_TEMP_SENSOR_FAULT 0xFF + +#define OCC_FRU_TYPE_VRM 3 + +/* OCC sensor type and version definitions */ + +struct temp_sensor_1 { + u16 sensor_id; + u16 value; +} __packed; + +struct temp_sensor_2 { + u32 sensor_id; + u8 fru_type; + u8 value; +} __packed; + +struct freq_sensor_1 { + u16 sensor_id; + u16 value; +} __packed; + +struct freq_sensor_2 { + u32 sensor_id; + u16 value; +} __packed; + +struct power_sensor_1 { + u16 sensor_id; + u32 update_tag; + u32 accumulator; + u16 value; +} __packed; + +struct power_sensor_2 { + u32 sensor_id; + u8 function_id; + u8 apss_channel; + u16 reserved; + u32 update_tag; + u64 accumulator; + u16 value; +} __packed; + +struct power_sensor_data { + u16 value; + u32 update_tag; + u64 accumulator; +} __packed; + +struct power_sensor_data_and_time { + u16 update_time; + u16 value; + u32 update_tag; + u64 accumulator; +} __packed; + +struct power_sensor_a0 { + u32 sensor_id; + struct power_sensor_data_and_time system; + u32 reserved; + struct power_sensor_data_and_time proc; + struct power_sensor_data vdd; + struct power_sensor_data vdn; +} __packed; + +struct caps_sensor_2 { + u16 cap; + u16 system_power; + u16 n_cap; + u16 max; + u16 min; + u16 user; + u8 user_source; +} __packed; + +struct caps_sensor_3 { + u16 cap; + u16 system_power; + u16 n_cap; + u16 max; + u16 hard_min; + u16 soft_min; + u16 user; + u8 user_source; +} __packed; + +struct extended_sensor { + union { + u8 name[4]; + u32 sensor_id; + }; + u8 flags; + u8 reserved; + u8 data[6]; +} __packed; + +static int occ_poll(struct occ *occ) +{ + int rc; + u16 checksum = occ->poll_cmd_data + 1; + u8 cmd[8]; + struct occ_poll_response_header *header; + + /* big endian */ + cmd[0] = 0; /* sequence number */ + cmd[1] = 0; /* cmd type */ + cmd[2] = 0; /* data length msb */ + cmd[3] = 1; /* data length lsb */ + cmd[4] = occ->poll_cmd_data; /* data */ + cmd[5] = checksum >> 8; /* checksum msb */ + cmd[6] = checksum & 0xFF; /* checksum lsb */ + cmd[7] = 0; + + /* mutex should already be locked if necessary */ + rc = occ->send_cmd(occ, cmd); + if (rc) { + if (occ->error_count++ > OCC_ERROR_COUNT_THRESHOLD) + occ->error = rc; + + goto done; + } + + /* clear error since communication was successful */ + occ->error_count = 0; + occ->error = 0; + + /* check for safe state */ + header = (struct occ_poll_response_header *)occ->resp.data; + if (header->occ_state == OCC_STATE_SAFE) { + if (occ->last_safe) { + if (time_after(jiffies, + 
occ->last_safe + OCC_SAFE_TIMEOUT)) + occ->error = -EHOSTDOWN; + } else { + occ->last_safe = jiffies; + } + } else { + occ->last_safe = 0; + } + +done: + occ_sysfs_poll_done(occ); + return rc; +} + +static int occ_set_user_power_cap(struct occ *occ, u16 user_power_cap) +{ + int rc; + u8 cmd[8]; + u16 checksum = 0x24; + __be16 user_power_cap_be = cpu_to_be16(user_power_cap); + + cmd[0] = 0; + cmd[1] = 0x22; + cmd[2] = 0; + cmd[3] = 2; + + memcpy(&cmd[4], &user_power_cap_be, 2); + + checksum += cmd[4] + cmd[5]; + cmd[6] = checksum >> 8; + cmd[7] = checksum & 0xFF; + + rc = mutex_lock_interruptible(&occ->lock); + if (rc) + return rc; + + rc = occ->send_cmd(occ, cmd); + + mutex_unlock(&occ->lock); + + return rc; +} + +int occ_update_response(struct occ *occ) +{ + int rc = mutex_lock_interruptible(&occ->lock); + + if (rc) + return rc; + + /* limit the maximum rate of polling the OCC */ + if (time_after(jiffies, occ->last_update + OCC_UPDATE_FREQUENCY)) { + rc = occ_poll(occ); + occ->last_update = jiffies; + } + + mutex_unlock(&occ->lock); + return rc; +} + +static ssize_t occ_show_temp_1(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u32 val = 0; + struct temp_sensor_1 *temp; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + temp = ((struct temp_sensor_1 *)sensors->temp.data) + sattr->index; + + switch (sattr->nr) { + case 0: + val = get_unaligned_be16(&temp->sensor_id); + break; + case 1: + val = get_unaligned_be16(&temp->value) * 1000; + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%u\n", val); +} + +static ssize_t occ_show_temp_2(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u32 val = 0; + struct temp_sensor_2 *temp; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + temp = ((struct temp_sensor_2 *)sensors->temp.data) + sattr->index; + + switch (sattr->nr) { + case 0: + val = get_unaligned_be32(&temp->sensor_id); + break; + case 1: + val = temp->value; + if (val == OCC_TEMP_SENSOR_FAULT) + return -EREMOTEIO; + + /* + * VRM doesn't return temperature, only alarm bit. 
This + * attribute maps to tempX_alarm instead of tempX_input for + * VRM + */ + if (temp->fru_type != OCC_FRU_TYPE_VRM) { + /* sensor not ready */ + if (val == 0) + return -EAGAIN; + + val *= 1000; + } + break; + case 2: + val = temp->fru_type; + break; + case 3: + val = temp->value == OCC_TEMP_SENSOR_FAULT; + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%u\n", val); +} + +static ssize_t occ_show_freq_1(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u16 val = 0; + struct freq_sensor_1 *freq; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + freq = ((struct freq_sensor_1 *)sensors->freq.data) + sattr->index; + + switch (sattr->nr) { + case 0: + val = get_unaligned_be16(&freq->sensor_id); + break; + case 1: + val = get_unaligned_be16(&freq->value); + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%u\n", val); +} + +static ssize_t occ_show_freq_2(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u32 val = 0; + struct freq_sensor_2 *freq; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + freq = ((struct freq_sensor_2 *)sensors->freq.data) + sattr->index; + + switch (sattr->nr) { + case 0: + val = get_unaligned_be32(&freq->sensor_id); + break; + case 1: + val = get_unaligned_be16(&freq->value); + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%u\n", val); +} + +static ssize_t occ_show_power_1(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u64 val = 0; + struct power_sensor_1 *power; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + power = ((struct power_sensor_1 *)sensors->power.data) + sattr->index; + + switch (sattr->nr) { + case 0: + val = get_unaligned_be16(&power->sensor_id); + break; + case 1: + val = get_unaligned_be32(&power->accumulator) / + get_unaligned_be32(&power->update_tag); + val *= 1000000ULL; + break; + case 2: + val = get_unaligned_be32(&power->update_tag) * + occ->powr_sample_time_us; + break; + case 3: + val = get_unaligned_be16(&power->value) * 1000000ULL; + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%llu\n", val); +} + +static u64 occ_get_powr_avg(u64 *accum, u32 *samples) +{ + return div64_u64(get_unaligned_be64(accum) * 1000000ULL, + get_unaligned_be32(samples)); +} + +static ssize_t occ_show_power_2(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u64 val = 0; + struct power_sensor_2 *power; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + power = ((struct power_sensor_2 *)sensors->power.data) + sattr->index; + + switch (sattr->nr) { + case 0: + return snprintf(buf, PAGE_SIZE - 1, "%u_%u_%u\n", + get_unaligned_be32(&power->sensor_id), + power->function_id, power->apss_channel); + case 1: + val = 
occ_get_powr_avg(&power->accumulator, + &power->update_tag); + break; + case 2: + val = get_unaligned_be32(&power->update_tag) * + occ->powr_sample_time_us; + break; + case 3: + val = get_unaligned_be16(&power->value) * 1000000ULL; + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%llu\n", val); +} + +static ssize_t occ_show_power_a0(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u64 val = 0; + struct power_sensor_a0 *power; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + power = ((struct power_sensor_a0 *)sensors->power.data) + sattr->index; + + switch (sattr->nr) { + case 0: + return snprintf(buf, PAGE_SIZE - 1, "%u_system\n", + get_unaligned_be32(&power->sensor_id)); + case 1: + val = occ_get_powr_avg(&power->system.accumulator, + &power->system.update_tag); + break; + case 2: + val = get_unaligned_be32(&power->system.update_tag) * + occ->powr_sample_time_us; + break; + case 3: + val = get_unaligned_be16(&power->system.value) * 1000000ULL; + break; + case 4: + return snprintf(buf, PAGE_SIZE - 1, "%u_proc\n", + get_unaligned_be32(&power->sensor_id)); + case 5: + val = occ_get_powr_avg(&power->proc.accumulator, + &power->proc.update_tag); + break; + case 6: + val = get_unaligned_be32(&power->proc.update_tag) * + occ->powr_sample_time_us; + break; + case 7: + val = get_unaligned_be16(&power->proc.value) * 1000000ULL; + break; + case 8: + return snprintf(buf, PAGE_SIZE - 1, "%u_vdd\n", + get_unaligned_be32(&power->sensor_id)); + case 9: + val = occ_get_powr_avg(&power->vdd.accumulator, + &power->vdd.update_tag); + break; + case 10: + val = get_unaligned_be32(&power->vdd.update_tag) * + occ->powr_sample_time_us; + break; + case 11: + val = get_unaligned_be16(&power->vdd.value) * 1000000ULL; + break; + case 12: + return snprintf(buf, PAGE_SIZE - 1, "%u_vdn\n", + get_unaligned_be32(&power->sensor_id)); + case 13: + val = occ_get_powr_avg(&power->vdn.accumulator, + &power->vdn.update_tag); + break; + case 14: + val = get_unaligned_be32(&power->vdn.update_tag) * + occ->powr_sample_time_us; + break; + case 15: + val = get_unaligned_be16(&power->vdn.value) * 1000000ULL; + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%llu\n", val); +} + +static ssize_t occ_show_caps_1_2(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u64 val = 0; + struct caps_sensor_2 *caps; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + caps = ((struct caps_sensor_2 *)sensors->caps.data) + sattr->index; + + switch (sattr->nr) { + case 0: + return snprintf(buf, PAGE_SIZE - 1, "system\n"); + case 1: + val = get_unaligned_be16(&caps->cap) * 1000000ULL; + break; + case 2: + val = get_unaligned_be16(&caps->system_power) * 1000000ULL; + break; + case 3: + val = get_unaligned_be16(&caps->n_cap) * 1000000ULL; + break; + case 4: + val = get_unaligned_be16(&caps->max) * 1000000ULL; + break; + case 5: + val = get_unaligned_be16(&caps->min) * 1000000ULL; + break; + case 6: + val = get_unaligned_be16(&caps->user) * 1000000ULL; + break; + case 7: + if (occ->sensors.caps.version == 1) + return -EINVAL; + + val = caps->user_source; + break; + default: + return -EINVAL; + 
} + + return snprintf(buf, PAGE_SIZE - 1, "%llu\n", val); +} + +static ssize_t occ_show_caps_3(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + u64 val = 0; + struct caps_sensor_3 *caps; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + caps = ((struct caps_sensor_3 *)sensors->caps.data) + sattr->index; + + switch (sattr->nr) { + case 0: + return snprintf(buf, PAGE_SIZE - 1, "system\n"); + case 1: + val = get_unaligned_be16(&caps->cap) * 1000000ULL; + break; + case 2: + val = get_unaligned_be16(&caps->system_power) * 1000000ULL; + break; + case 3: + val = get_unaligned_be16(&caps->n_cap) * 1000000ULL; + break; + case 4: + val = get_unaligned_be16(&caps->max) * 1000000ULL; + break; + case 5: + val = get_unaligned_be16(&caps->hard_min) * 1000000ULL; + break; + case 6: + val = get_unaligned_be16(&caps->user) * 1000000ULL; + break; + case 7: + val = caps->user_source; + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%llu\n", val); +} + +static ssize_t occ_store_caps_user(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + int rc; + u16 user_power_cap; + unsigned long long value; + struct occ *occ = dev_get_drvdata(dev); + + rc = kstrtoull(buf, 0, &value); + if (rc) + return rc; + + user_power_cap = div64_u64(value, 1000000ULL); /* microwatt to watt */ + + rc = occ_set_user_power_cap(occ, user_power_cap); + if (rc) + return rc; + + return count; +} + +static ssize_t occ_show_extended(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + struct extended_sensor *extn; + struct occ *occ = dev_get_drvdata(dev); + struct occ_sensors *sensors = &occ->sensors; + struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + extn = ((struct extended_sensor *)sensors->extended.data) + + sattr->index; + + switch (sattr->nr) { + case 0: + if (extn->flags & EXTN_FLAG_SENSOR_ID) + rc = snprintf(buf, PAGE_SIZE - 1, "%u\n", + get_unaligned_be32(&extn->sensor_id)); + else + rc = snprintf(buf, PAGE_SIZE - 1, "%02x%02x%02x%02x\n", + extn->name[0], extn->name[1], + extn->name[2], extn->name[3]); + break; + case 1: + rc = snprintf(buf, PAGE_SIZE - 1, "%02x\n", extn->flags); + break; + case 2: + rc = snprintf(buf, PAGE_SIZE - 1, "%02x%02x%02x%02x%02x%02x\n", + extn->data[0], extn->data[1], extn->data[2], + extn->data[3], extn->data[4], extn->data[5]); + break; + default: + return -EINVAL; + } + + return rc; +} + +/* + * Some helper macros to make it easier to define an occ_attribute. Since these + * are dynamically allocated, we shouldn't use the existing kernel macros which + * stringify the name argument. + */ +#define ATTR_OCC(_name, _mode, _show, _store) { \ + .attr = { \ + .name = _name, \ + .mode = VERIFY_OCTAL_PERMISSIONS(_mode), \ + }, \ + .show = _show, \ + .store = _store, \ +} + +#define SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index) { \ + .dev_attr = ATTR_OCC(_name, _mode, _show, _store), \ + .index = _index, \ + .nr = _nr, \ +} + +#define OCC_INIT_ATTR(_name, _mode, _show, _store, _nr, _index) \ + ((struct sensor_device_attribute_2) \ + SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index)) + +/* + * Allocate and instantiate sensor_device_attribute_2s.
It's most efficient to + * use our own instead of the built-in hwmon attribute types. + */ +static int occ_setup_sensor_attrs(struct occ *occ) +{ + unsigned int i, s, num_attrs = 0; + struct device *dev = occ->bus_dev; + struct occ_sensors *sensors = &occ->sensors; + struct occ_attribute *attr; + struct temp_sensor_2 *temp; + ssize_t (*show_temp)(struct device *, struct device_attribute *, + char *) = occ_show_temp_1; + ssize_t (*show_freq)(struct device *, struct device_attribute *, + char *) = occ_show_freq_1; + ssize_t (*show_power)(struct device *, struct device_attribute *, + char *) = occ_show_power_1; + ssize_t (*show_caps)(struct device *, struct device_attribute *, + char *) = occ_show_caps_1_2; + + switch (sensors->temp.version) { + case 1: + num_attrs += (sensors->temp.num_sensors * 2); + break; + case 2: + num_attrs += (sensors->temp.num_sensors * 4); + show_temp = occ_show_temp_2; + break; + default: + sensors->temp.num_sensors = 0; + } + + switch (sensors->freq.version) { + case 2: + show_freq = occ_show_freq_2; + /* fall through */ + case 1: + num_attrs += (sensors->freq.num_sensors * 2); + break; + default: + sensors->freq.num_sensors = 0; + } + + switch (sensors->power.version) { + case 2: + show_power = occ_show_power_2; + /* fall through */ + case 1: + num_attrs += (sensors->power.num_sensors * 4); + break; + case 0xA0: + num_attrs += (sensors->power.num_sensors * 16); + show_power = occ_show_power_a0; + break; + default: + sensors->power.num_sensors = 0; + } + + switch (sensors->caps.version) { + case 1: + num_attrs += (sensors->caps.num_sensors * 7); + break; + case 3: + show_caps = occ_show_caps_3; + /* fall through */ + case 2: + num_attrs += (sensors->caps.num_sensors * 8); + break; + default: + sensors->caps.num_sensors = 0; + } + + switch (sensors->extended.version) { + case 1: + num_attrs += (sensors->extended.num_sensors * 3); + break; + default: + sensors->extended.num_sensors = 0; + } + + occ->attrs = devm_kzalloc(dev, sizeof(*occ->attrs) * num_attrs, + GFP_KERNEL); + if (!occ->attrs) + return -ENOMEM; + + /* null-terminated list */ + occ->group.attrs = devm_kzalloc(dev, sizeof(*occ->group.attrs) * + (num_attrs + 1), GFP_KERNEL); + if (!occ->group.attrs) + return -ENOMEM; + + attr = occ->attrs; + + for (i = 0; i < sensors->temp.num_sensors; ++i) { + s = i + 1; + temp = ((struct temp_sensor_2 *)sensors->temp.data) + i; + + snprintf(attr->name, sizeof(attr->name), "temp%d_label", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL, + 0, i); + attr++; + + if (sensors->temp.version > 1 && + temp->fru_type == OCC_FRU_TYPE_VRM) { + snprintf(attr->name, sizeof(attr->name), + "temp%d_alarm", s); + } else { + snprintf(attr->name, sizeof(attr->name), + "temp%d_input", s); + } + + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL, + 1, i); + attr++; + + if (sensors->temp.version > 1) { + snprintf(attr->name, sizeof(attr->name), + "temp%d_fru_type", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_temp, NULL, 2, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "temp%d_fault", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_temp, NULL, 3, i); + attr++; + } + } + + for (i = 0; i < sensors->freq.num_sensors; ++i) { + s = i + 1; + + snprintf(attr->name, sizeof(attr->name), "freq%d_label", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL, + 0, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), "freq%d_input", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL, + 1, i);
+ attr++; + } + + if (sensors->power.version == 0xA0) { + /* + * Special case for many-attribute power sensor. Split it into + * a sensor number per power type, emulating several sensors. + */ + for (i = 0; i < sensors->power.num_sensors; ++i) { + unsigned int j; + unsigned int nr = 0; + + s = (i * 4) + 1; + + for (j = 0; j < 4; ++j) { + snprintf(attr->name, sizeof(attr->name), + "power%d_label", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, + nr++, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "power%d_average", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, + nr++, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "power%d_average_interval", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, + nr++, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "power%d_input", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, + nr++, i); + attr++; + + s++; + } + } + } else { + for (i = 0; i < sensors->power.num_sensors; ++i) { + s = i + 1; + + snprintf(attr->name, sizeof(attr->name), + "power%d_label", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, 0, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "power%d_average", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, 1, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "power%d_average_interval", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, 2, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "power%d_input", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_power, NULL, 3, i); + attr++; + } + } + + if (sensors->caps.num_sensors >= 1) { + s = sensors->power.num_sensors + 1; + + snprintf(attr->name, sizeof(attr->name), "power%d_label", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, + 0, 0); + attr++; + + snprintf(attr->name, sizeof(attr->name), "power%d_cap", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, + 1, 0); + attr++; + + snprintf(attr->name, sizeof(attr->name), "power%d_input", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, + 2, 0); + attr++; + + snprintf(attr->name, sizeof(attr->name), + "power%d_cap_not_redundant", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, + 3, 0); + attr++; + + snprintf(attr->name, sizeof(attr->name), "power%d_cap_max", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, + 4, 0); + attr++; + + snprintf(attr->name, sizeof(attr->name), "power%d_cap_min", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, + 5, 0); + attr++; + + snprintf(attr->name, sizeof(attr->name), "power%d_cap_user", + s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0644, show_caps, + occ_store_caps_user, 6, 0); + attr++; + + if (sensors->caps.version > 1) { + snprintf(attr->name, sizeof(attr->name), + "power%d_cap_user_source", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + show_caps, NULL, 7, 0); + attr++; + } + } + + for (i = 0; i < sensors->extended.num_sensors; ++i) { + s = i + 1; + + snprintf(attr->name, sizeof(attr->name), "extn%d_label", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + occ_show_extended, NULL, 0, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), "extn%d_flags", s); + attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + occ_show_extended, NULL, 1, i); + attr++; + + snprintf(attr->name, sizeof(attr->name), "extn%d_input", s); + 
attr->sensor = OCC_INIT_ATTR(attr->name, 0444, + occ_show_extended, NULL, 2, i); + attr++; + } + + /* put the sensors in the group */ + for (i = 0; i < num_attrs; ++i) { + sysfs_attr_init(&occ->attrs[i].sensor.dev_attr.attr); + occ->group.attrs[i] = &occ->attrs[i].sensor.dev_attr.attr; + } + + return 0; +} + +/* only need to do this once at startup, as OCC won't change sensors on us */ +static void occ_parse_poll_response(struct occ *occ) +{ + unsigned int i, old_offset, offset = 0, size = 0; + struct occ_sensor *sensor; + struct occ_sensors *sensors = &occ->sensors; + struct occ_response *resp = &occ->resp; + struct occ_poll_response *poll = + (struct occ_poll_response *)&resp->data[0]; + struct occ_poll_response_header *header = &poll->header; + struct occ_sensor_data_block *block = &poll->block; + + dev_info(occ->bus_dev, "OCC found, code level: %.16s\n", + header->occ_code_level); + + for (i = 0; i < header->num_sensor_data_blocks; ++i) { + block = (struct occ_sensor_data_block *)((u8 *)block + offset); + old_offset = offset; + offset = (block->header.num_sensors * + block->header.sensor_length) + sizeof(block->header); + size += offset; + + /* validate all the length/size fields */ + if ((size + sizeof(*header)) >= OCC_RESP_DATA_BYTES) { + dev_warn(occ->bus_dev, "exceeded response buffer\n"); + return; + } + + dev_dbg(occ->bus_dev, " %04x..%04x: %.4s (%d sensors)\n", + old_offset, offset - 1, block->header.eye_catcher, + block->header.num_sensors); + + /* match sensor block type */ + if (strncmp(block->header.eye_catcher, "TEMP", 4) == 0) + sensor = &sensors->temp; + else if (strncmp(block->header.eye_catcher, "FREQ", 4) == 0) + sensor = &sensors->freq; + else if (strncmp(block->header.eye_catcher, "POWR", 4) == 0) + sensor = &sensors->power; + else if (strncmp(block->header.eye_catcher, "CAPS", 4) == 0) + sensor = &sensors->caps; + else if (strncmp(block->header.eye_catcher, "EXTN", 4) == 0) + sensor = &sensors->extended; + else { + dev_warn(occ->bus_dev, "sensor not supported %.4s\n", + block->header.eye_catcher); + continue; + } + + sensor->num_sensors = block->header.num_sensors; + sensor->version = block->header.sensor_format; + sensor->data = &block->data; + } + + dev_dbg(occ->bus_dev, "Max resp size: %u+%zd=%zd\n", size, + sizeof(*header), size + sizeof(*header)); +} + +int occ_setup(struct occ *occ, const char *name) +{ + int rc; + + mutex_init(&occ->lock); + occ->groups[0] = &occ->group; + + /* no need to lock */ + rc = occ_poll(occ); + if (rc == -ESHUTDOWN) { + dev_info(occ->bus_dev, "host is not ready\n"); + return rc; + } else if (rc < 0) { + dev_err(occ->bus_dev, "failed to get OCC poll response: %d\n", + rc); + return rc; + } + + occ_parse_poll_response(occ); + + rc = occ_setup_sensor_attrs(occ); + if (rc) { + dev_err(occ->bus_dev, "failed to setup sensor attrs: %d\n", + rc); + return rc; + } + + occ->hwmon = devm_hwmon_device_register_with_groups(occ->bus_dev, name, + occ, occ->groups); + if (IS_ERR(occ->hwmon)) { + rc = PTR_ERR(occ->hwmon); + dev_err(occ->bus_dev, "failed to register hwmon device: %d\n", + rc); + return rc; + } + + rc = occ_setup_sysfs(occ); + if (rc) + dev_err(occ->bus_dev, "failed to setup sysfs: %d\n", rc); + + return rc; +} diff --git a/drivers/hwmon/occ/common.h b/drivers/hwmon/occ/common.h new file mode 100644 index 000000000000..7c44df3f5631 --- /dev/null +++ b/drivers/hwmon/occ/common.h @@ -0,0 +1,128 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef OCC_COMMON_H +#define OCC_COMMON_H + +#include +#include +#include + +struct device; + 
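
	/*
	 * Sizing note: the 4089 data bytes plus the 1-byte seq_no, cmd_type
	 * and return_status fields, the 2-byte data_length and the 2-byte
	 * checksum of struct occ_response below add up to 4096, so the
	 * packed response struct spans exactly 4 KiB.
	 */
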
+#define OCC_RESP_DATA_BYTES 4089 + +/* + * Same response format for all OCC versions. + * Allocate the largest possible response. + */ +struct occ_response { + u8 seq_no; + u8 cmd_type; + u8 return_status; + __be16 data_length; + u8 data[OCC_RESP_DATA_BYTES]; + __be16 checksum; +} __packed; + +struct occ_sensor_data_block_header { + u8 eye_catcher[4]; + u8 reserved; + u8 sensor_format; + u8 sensor_length; + u8 num_sensors; +} __packed; + +struct occ_sensor_data_block { + struct occ_sensor_data_block_header header; + u32 data; +} __packed; + +struct occ_poll_response_header { + u8 status; + u8 ext_status; + u8 occs_present; + u8 config_data; + u8 occ_state; + u8 mode; + u8 ips_status; + u8 error_log_id; + __be32 error_log_start_address; + __be16 error_log_length; + u16 reserved; + u8 occ_code_level[16]; + u8 eye_catcher[6]; + u8 num_sensor_data_blocks; + u8 sensor_data_block_header_version; +} __packed; + +struct occ_poll_response { + struct occ_poll_response_header header; + struct occ_sensor_data_block block; +} __packed; + +struct occ_sensor { + u8 num_sensors; + u8 version; + void *data; /* pointer to sensor data start within response */ +}; + +/* + * OCC only provides one sensor data block of each type, but any number of + * sensors within that block. + */ +struct occ_sensors { + struct occ_sensor temp; + struct occ_sensor freq; + struct occ_sensor power; + struct occ_sensor caps; + struct occ_sensor extended; +}; + +/* + * Use our own attribute struct so we can dynamically allocate space for the + * name. + */ +struct occ_attribute { + char name[32]; + struct sensor_device_attribute_2 sensor; +}; + +struct occ { + struct device *bus_dev; + + struct occ_response resp; + struct occ_sensors sensors; + + int powr_sample_time_us; /* average power sample time */ + u8 poll_cmd_data; /* to perform OCC poll command */ + int (*send_cmd)(struct occ *occ, u8 *cmd); + + unsigned long last_update; + struct mutex lock; /* lock OCC access */ + + struct device *hwmon; + struct occ_attribute *attrs; + struct attribute_group group; + const struct attribute_group *groups[2]; + + int error; /* latest transfer error */ + unsigned int error_count; /* number of xfr errors observed */ + unsigned long last_safe; /* time OCC entered "safe" state */ + + /* + * Store the previous state data for comparison in order to notify + * sysfs readers of state changes. 
+ */ + int prev_error; + u8 prev_stat; + u8 prev_ext_stat; + u8 prev_occs_present; +}; + +int occ_setup(struct occ *occ, const char *name); +int occ_setup_sysfs(struct occ *occ); +void occ_shutdown(struct occ *occ); +void occ_sysfs_poll_done(struct occ *occ); +int occ_update_response(struct occ *occ); + +#endif /* OCC_COMMON_H */ diff --git a/drivers/hwmon/occ/p8_i2c.c b/drivers/hwmon/occ/p8_i2c.c new file mode 100644 index 000000000000..b59efc945e54 --- /dev/null +++ b/drivers/hwmon/occ/p8_i2c.c @@ -0,0 +1,255 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "common.h" + +#define OCC_TIMEOUT_MS 1000 +#define OCC_CMD_IN_PRG_WAIT_MS 50 + +/* OCB (on-chip control bridge - interface to OCC) registers */ +#define OCB_DATA1 0x6B035 +#define OCB_ADDR 0x6B070 +#define OCB_DATA3 0x6B075 + +/* OCC SRAM address space */ +#define OCC_SRAM_ADDR_CMD 0xFFFF6000 +#define OCC_SRAM_ADDR_RESP 0xFFFF7000 + +#define OCC_DATA_ATTN 0x20010000 + +struct p8_i2c_occ { + struct occ occ; + struct i2c_client *client; +}; + +#define to_p8_i2c_occ(x) container_of((x), struct p8_i2c_occ, occ) + +static int p8_i2c_occ_getscom(struct i2c_client *client, u32 address, u8 *data) +{ + ssize_t rc; + __be64 buf; + struct i2c_msg msgs[2]; + + /* p8 i2c slave requires shift */ + address <<= 1; + + msgs[0].addr = client->addr; + msgs[0].flags = client->flags & I2C_M_TEN; + msgs[0].len = sizeof(u32); + /* address is a scom address; bus-endian */ + msgs[0].buf = (char *)&address; + + /* data from OCC is big-endian */ + msgs[1].addr = client->addr; + msgs[1].flags = (client->flags & I2C_M_TEN) | I2C_M_RD; + msgs[1].len = sizeof(u64); + msgs[1].buf = (char *)&buf; + + rc = i2c_transfer(client->adapter, msgs, 2); + if (rc < 0) + return rc; + + *(u64 *)data = be64_to_cpu(buf); + + return 0; +} + +static int p8_i2c_occ_putscom(struct i2c_client *client, u32 address, u8 *data) +{ + u32 buf[3]; + ssize_t rc; + + /* p8 i2c slave requires shift */ + address <<= 1; + + /* address is bus-endian; data passed through from user as-is */ + buf[0] = address; + memcpy(&buf[1], &data[4], sizeof(u32)); + memcpy(&buf[2], data, sizeof(u32)); + + rc = i2c_master_send(client, (const char *)buf, sizeof(buf)); + if (rc < 0) + return rc; + else if (rc != sizeof(buf)) + return -EIO; + + return 0; +} + +static int p8_i2c_occ_putscom_u32(struct i2c_client *client, u32 address, + u32 data0, u32 data1) +{ + u8 buf[8]; + + memcpy(buf, &data0, 4); + memcpy(buf + 4, &data1, 4); + + return p8_i2c_occ_putscom(client, address, buf); +} + +static int p8_i2c_occ_putscom_be(struct i2c_client *client, u32 address, + u8 *data) +{ + __be32 data0, data1; + + memcpy(&data0, data, 4); + memcpy(&data1, data + 4, 4); + + return p8_i2c_occ_putscom_u32(client, address, be32_to_cpu(data0), + be32_to_cpu(data1)); +} + +static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd) +{ + int i, rc; + unsigned long start; + u16 data_length; + const unsigned long timeout = msecs_to_jiffies(OCC_TIMEOUT_MS); + const long wait_time = msecs_to_jiffies(OCC_CMD_IN_PRG_WAIT_MS); + struct p8_i2c_occ *ctx = to_p8_i2c_occ(occ); + struct i2c_client *client = ctx->client; + struct occ_response *resp = &occ->resp; + + start = jiffies; + + /* set sram address for command */ + rc = p8_i2c_occ_putscom_u32(client, OCB_ADDR, OCC_SRAM_ADDR_CMD, 0); + if (rc) + return rc; + + /* write command (expected to already be BE), we need bus-endian... 
*/ + rc = p8_i2c_occ_putscom_be(client, OCB_DATA3, cmd); + if (rc) + return rc; + + /* trigger OCC attention */ + rc = p8_i2c_occ_putscom_u32(client, OCB_DATA1, OCC_DATA_ATTN, 0); + if (rc) + return rc; + + do { + /* set sram address for response */ + rc = p8_i2c_occ_putscom_u32(client, OCB_ADDR, + OCC_SRAM_ADDR_RESP, 0); + if (rc) + return rc; + + rc = p8_i2c_occ_getscom(client, OCB_DATA3, (u8 *)resp); + if (rc) + return rc; + + /* wait for OCC */ + if (resp->return_status == OCC_RESP_CMD_IN_PRG) { + rc = -EALREADY; + + if (time_after(jiffies, start + timeout)) + break; + + set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(wait_time); + } + } while (rc); + + /* check the OCC response */ + switch (resp->return_status) { + case OCC_RESP_CMD_IN_PRG: + rc = -ETIMEDOUT; + break; + case OCC_RESP_SUCCESS: + rc = 0; + break; + case OCC_RESP_CMD_INVAL: + case OCC_RESP_CMD_LEN_INVAL: + case OCC_RESP_DATA_INVAL: + case OCC_RESP_CHKSUM_ERR: + rc = -EINVAL; + break; + case OCC_RESP_INT_ERR: + case OCC_RESP_BAD_STATE: + case OCC_RESP_CRIT_EXCEPT: + case OCC_RESP_CRIT_INIT: + case OCC_RESP_CRIT_WATCHDOG: + case OCC_RESP_CRIT_OCB: + case OCC_RESP_CRIT_HW: + rc = -EREMOTEIO; + break; + default: + rc = -EPROTO; + } + + if (rc < 0) + return rc; + + data_length = get_unaligned_be16(&resp->data_length); + if (data_length > OCC_RESP_DATA_BYTES) + return -EMSGSIZE; + + /* fetch the rest of the response data */ + for (i = 8; i < data_length + 7; i += 8) { + rc = p8_i2c_occ_getscom(client, OCB_DATA3, ((u8 *)resp) + i); + if (rc) + return rc; + } + + return 0; +} + +static int p8_i2c_occ_probe(struct i2c_client *client, + const struct i2c_device_id *id) +{ + struct occ *occ; + struct p8_i2c_occ *ctx = devm_kzalloc(&client->dev, sizeof(*ctx), + GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->client = client; + occ = &ctx->occ; + occ->bus_dev = &client->dev; + dev_set_drvdata(&client->dev, occ); + + occ->powr_sample_time_us = 250; + occ->poll_cmd_data = 0x10; /* P8 OCC poll data */ + occ->send_cmd = p8_i2c_occ_send_cmd; + + return occ_setup(occ, "p8_occ"); +} + +static int p8_i2c_occ_remove(struct i2c_client *client) +{ + struct occ *occ = dev_get_drvdata(&client->dev); + + occ_shutdown(occ); + + return 0; +} + +static const struct of_device_id p8_i2c_occ_of_match[] = { + { .compatible = "ibm,p8-occ-hwmon" }, + {} +}; +MODULE_DEVICE_TABLE(of, p8_i2c_occ_of_match); + +static struct i2c_driver p8_i2c_occ_driver = { + .class = I2C_CLASS_HWMON, + .driver = { + .name = "occ-hwmon", + .of_match_table = p8_i2c_occ_of_match, + }, + .probe = p8_i2c_occ_probe, + .remove = p8_i2c_occ_remove, +}; + +module_i2c_driver(p8_i2c_occ_driver); + +MODULE_AUTHOR("Eddie James "); +MODULE_DESCRIPTION("BMC P8 OCC hwmon driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/hwmon/occ/p9_sbe.c b/drivers/hwmon/occ/p9_sbe.c new file mode 100644 index 000000000000..b65c1d1dfb54 --- /dev/null +++ b/drivers/hwmon/occ/p9_sbe.c @@ -0,0 +1,106 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include + +#include "common.h" + +struct p9_sbe_occ { + struct occ occ; + struct device *sbe; +}; + +#define to_p9_sbe_occ(x) container_of((x), struct p9_sbe_occ, occ) + +static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd) +{ + struct occ_response *resp = &occ->resp; + struct p9_sbe_occ *ctx = to_p9_sbe_occ(occ); + size_t resp_len = sizeof(*resp); + int rc; + + rc = fsi_occ_submit(ctx->sbe, cmd, 8, resp, &resp_len); + if (rc < 0) + return rc; + + switch (resp->return_status) { + case OCC_RESP_CMD_IN_PRG: + rc = 
-ETIMEDOUT; + break; + case OCC_RESP_SUCCESS: + rc = 0; + break; + case OCC_RESP_CMD_INVAL: + case OCC_RESP_CMD_LEN_INVAL: + case OCC_RESP_DATA_INVAL: + case OCC_RESP_CHKSUM_ERR: + rc = -EINVAL; + break; + case OCC_RESP_INT_ERR: + case OCC_RESP_BAD_STATE: + case OCC_RESP_CRIT_EXCEPT: + case OCC_RESP_CRIT_INIT: + case OCC_RESP_CRIT_WATCHDOG: + case OCC_RESP_CRIT_OCB: + case OCC_RESP_CRIT_HW: + rc = -EREMOTEIO; + break; + default: + rc = -EPROTO; + } + + return rc; +} + +static int p9_sbe_occ_probe(struct platform_device *pdev) +{ + int rc; + struct occ *occ; + struct p9_sbe_occ *ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), + GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->sbe = pdev->dev.parent; + occ = &ctx->occ; + occ->bus_dev = &pdev->dev; + platform_set_drvdata(pdev, occ); + + occ->powr_sample_time_us = 500; + occ->poll_cmd_data = 0x20; /* P9 OCC poll data */ + occ->send_cmd = p9_sbe_occ_send_cmd; + + rc = occ_setup(occ, "p9_occ"); + if (rc == -ESHUTDOWN) + rc = -ENODEV; /* Host is shutdown, don't spew errors */ + + return rc; +} + +static int p9_sbe_occ_remove(struct platform_device *pdev) +{ + struct occ *occ = platform_get_drvdata(pdev); + struct p9_sbe_occ *ctx = to_p9_sbe_occ(occ); + + ctx->sbe = NULL; + occ_shutdown(occ); + + return 0; +} + +static struct platform_driver p9_sbe_occ_driver = { + .driver = { + .name = "occ-hwmon", + }, + .probe = p9_sbe_occ_probe, + .remove = p9_sbe_occ_remove, +}; + +module_platform_driver(p9_sbe_occ_driver); + +MODULE_AUTHOR("Eddie James "); +MODULE_DESCRIPTION("BMC P9 OCC hwmon driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/hwmon/occ/sysfs.c b/drivers/hwmon/occ/sysfs.c new file mode 100644 index 000000000000..743b26ec8e54 --- /dev/null +++ b/drivers/hwmon/occ/sysfs.c @@ -0,0 +1,188 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * OCC hwmon driver sysfs interface + * + * Copyright (C) IBM Corporation 2018 + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
+ */ + +#include +#include +#include +#include +#include + +#include "common.h" + +/* OCC status register */ +#define OCC_STAT_MASTER BIT(7) +#define OCC_STAT_ACTIVE BIT(0) + +/* OCC extended status register */ +#define OCC_EXT_STAT_DVFS_OT BIT(7) +#define OCC_EXT_STAT_DVFS_POWER BIT(6) +#define OCC_EXT_STAT_MEM_THROTTLE BIT(5) +#define OCC_EXT_STAT_QUICK_DROP BIT(4) + +static ssize_t occ_sysfs_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int rc; + int val = 0; + struct occ *occ = dev_get_drvdata(dev); + struct occ_poll_response_header *header; + struct sensor_device_attribute *sattr = to_sensor_dev_attr(attr); + + rc = occ_update_response(occ); + if (rc) + return rc; + + header = (struct occ_poll_response_header *)occ->resp.data; + + switch (sattr->index) { + case 0: + val = !!(header->status & OCC_STAT_MASTER); + break; + case 1: + val = !!(header->status & OCC_STAT_ACTIVE); + break; + case 2: + val = !!(header->ext_status & OCC_EXT_STAT_DVFS_OT); + break; + case 3: + val = !!(header->ext_status & OCC_EXT_STAT_DVFS_POWER); + break; + case 4: + val = !!(header->ext_status & OCC_EXT_STAT_MEM_THROTTLE); + break; + case 5: + val = !!(header->ext_status & OCC_EXT_STAT_QUICK_DROP); + break; + case 6: + val = header->occ_state; + break; + case 7: + if (header->status & OCC_STAT_MASTER) + val = hweight8(header->occs_present); + else + val = 1; + break; + case 8: + val = occ->error; + break; + default: + return -EINVAL; + } + + return snprintf(buf, PAGE_SIZE - 1, "%d\n", val); +} + +static SENSOR_DEVICE_ATTR(occ_master, 0444, occ_sysfs_show, NULL, 0); +static SENSOR_DEVICE_ATTR(occ_active, 0444, occ_sysfs_show, NULL, 1); +static SENSOR_DEVICE_ATTR(occ_dvfs_overtemp, 0444, occ_sysfs_show, NULL, 2); +static SENSOR_DEVICE_ATTR(occ_dvfs_power, 0444, occ_sysfs_show, NULL, 3); +static SENSOR_DEVICE_ATTR(occ_mem_throttle, 0444, occ_sysfs_show, NULL, 4); +static SENSOR_DEVICE_ATTR(occ_quick_pwr_drop, 0444, occ_sysfs_show, NULL, 5); +static SENSOR_DEVICE_ATTR(occ_state, 0444, occ_sysfs_show, NULL, 6); +static SENSOR_DEVICE_ATTR(occs_present, 0444, occ_sysfs_show, NULL, 7); +static SENSOR_DEVICE_ATTR(occ_error, 0444, occ_sysfs_show, NULL, 8); + +static struct attribute *occ_attributes[] = { + &sensor_dev_attr_occ_master.dev_attr.attr, + &sensor_dev_attr_occ_active.dev_attr.attr, + &sensor_dev_attr_occ_dvfs_overtemp.dev_attr.attr, + &sensor_dev_attr_occ_dvfs_power.dev_attr.attr, + &sensor_dev_attr_occ_mem_throttle.dev_attr.attr, + &sensor_dev_attr_occ_quick_pwr_drop.dev_attr.attr, + &sensor_dev_attr_occ_state.dev_attr.attr, + &sensor_dev_attr_occs_present.dev_attr.attr, + &sensor_dev_attr_occ_error.dev_attr.attr, + NULL +}; + +static const struct attribute_group occ_sysfs = { + .attrs = occ_attributes, +}; + +void occ_sysfs_poll_done(struct occ *occ) +{ + const char *name; + struct occ_poll_response_header *header = + (struct occ_poll_response_header *)occ->resp.data; + + /* + * On the first poll response, we haven't yet created the sysfs + * attributes, so don't make any notify calls.
+ */ + if (!occ->hwmon) + goto done; + + if ((header->status & OCC_STAT_MASTER) != + (occ->prev_stat & OCC_STAT_MASTER)) { + name = sensor_dev_attr_occ_master.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + if ((header->status & OCC_STAT_ACTIVE) != + (occ->prev_stat & OCC_STAT_ACTIVE)) { + name = sensor_dev_attr_occ_active.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + if ((header->ext_status & OCC_EXT_STAT_DVFS_OT) != + (occ->prev_ext_stat & OCC_EXT_STAT_DVFS_OT)) { + name = sensor_dev_attr_occ_dvfs_overtemp.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + if ((header->ext_status & OCC_EXT_STAT_DVFS_POWER) != + (occ->prev_ext_stat & OCC_EXT_STAT_DVFS_POWER)) { + name = sensor_dev_attr_occ_dvfs_power.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + if ((header->ext_status & OCC_EXT_STAT_MEM_THROTTLE) != + (occ->prev_ext_stat & OCC_EXT_STAT_MEM_THROTTLE)) { + name = sensor_dev_attr_occ_mem_throttle.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + if ((header->ext_status & OCC_EXT_STAT_QUICK_DROP) != + (occ->prev_ext_stat & OCC_EXT_STAT_QUICK_DROP)) { + name = sensor_dev_attr_occ_quick_pwr_drop.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + if ((header->status & OCC_STAT_MASTER) && + header->occs_present != occ->prev_occs_present) { + name = sensor_dev_attr_occs_present.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + if (occ->error && occ->error != occ->prev_error) { + name = sensor_dev_attr_occ_error.dev_attr.attr.name; + sysfs_notify(&occ->bus_dev->kobj, NULL, name); + } + + /* no notifications for OCC state; doesn't indicate error condition */ + +done: + occ->prev_error = occ->error; + occ->prev_stat = header->status; + occ->prev_ext_stat = header->ext_status; + occ->prev_occs_present = header->occs_present; +} + +int occ_setup_sysfs(struct occ *occ) +{ + return sysfs_create_group(&occ->bus_dev->kobj, &occ_sysfs); +} + +void occ_shutdown(struct occ *occ) +{ + sysfs_remove_group(&occ->bus_dev->kobj, &occ_sysfs); +} diff --git a/drivers/hwmon/pmbus/adm1275.c b/drivers/hwmon/pmbus/adm1275.c index 13600fa79e7f..f569372c9204 100644 --- a/drivers/hwmon/pmbus/adm1275.c +++ b/drivers/hwmon/pmbus/adm1275.c @@ -373,6 +373,7 @@ static int adm1275_probe(struct i2c_client *client, const struct coefficients *coefficients; int vindex = -1, voindex = -1, cindex = -1, pindex = -1; int tindex = -1; + u32 shunt; if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_READ_BYTE_DATA @@ -421,6 +422,13 @@ static int adm1275_probe(struct i2c_client *client, if (!data) return -ENOMEM; + if (of_property_read_u32(client->dev.of_node, + "shunt-resistor-micro-ohms", &shunt)) + shunt = 1000; /* 1 mOhm if not set via DT */ + + if (shunt == 0) + return -EINVAL; + data->id = mid->driver_data; info = &data->info; @@ -654,12 +662,15 @@ static int adm1275_probe(struct i2c_client *client, info->R[PSC_VOLTAGE_OUT] = coefficients[voindex].R; } if (cindex >= 0) { - info->m[PSC_CURRENT_OUT] = coefficients[cindex].m; + /* Scale current with sense resistor value */ + info->m[PSC_CURRENT_OUT] = + coefficients[cindex].m * shunt / 1000; info->b[PSC_CURRENT_OUT] = coefficients[cindex].b; info->R[PSC_CURRENT_OUT] = coefficients[cindex].R; } if (pindex >= 0) { - info->m[PSC_POWER] = coefficients[pindex].m; + info->m[PSC_POWER] = + coefficients[pindex].m * shunt / 1000; info->b[PSC_POWER] = coefficients[pindex].b; info->R[PSC_POWER] 
= coefficients[pindex].R; } diff --git a/drivers/hwmon/pmbus/ltc2978.c b/drivers/hwmon/pmbus/ltc2978.c index 07afb92bb36b..29c0b7219aaa 100644 --- a/drivers/hwmon/pmbus/ltc2978.c +++ b/drivers/hwmon/pmbus/ltc2978.c @@ -795,5 +795,5 @@ static struct i2c_driver ltc2978_driver = { module_i2c_driver(ltc2978_driver); MODULE_AUTHOR("Guenter Roeck"); -MODULE_DESCRIPTION("PMBus driver for LTC2978 and comppatible chips"); +MODULE_DESCRIPTION("PMBus driver for LTC2978 and compatible chips"); MODULE_LICENSE("GPL"); diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c index 7da6a160d45a..2c944825026f 100644 --- a/drivers/hwmon/pwm-fan.c +++ b/drivers/hwmon/pwm-fan.c @@ -72,8 +72,8 @@ static void pwm_fan_update_state(struct pwm_fan_ctx *ctx, unsigned long pwm) ctx->pwm_fan_state = i; } -static ssize_t set_pwm(struct device *dev, struct device_attribute *attr, - const char *buf, size_t count) +static ssize_t pwm_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) { struct pwm_fan_ctx *ctx = dev_get_drvdata(dev); unsigned long pwm; @@ -90,8 +90,8 @@ static ssize_t set_pwm(struct device *dev, struct device_attribute *attr, return count; } -static ssize_t show_pwm(struct device *dev, - struct device_attribute *attr, char *buf) +static ssize_t pwm_show(struct device *dev, struct device_attribute *attr, + char *buf) { struct pwm_fan_ctx *ctx = dev_get_drvdata(dev); @@ -99,7 +99,7 @@ static ssize_t show_pwm(struct device *dev, } -static SENSOR_DEVICE_ATTR(pwm1, S_IRUGO | S_IWUSR, show_pwm, set_pwm, 0); +static SENSOR_DEVICE_ATTR_RW(pwm1, pwm, 0); static struct attribute *pwm_fan_attrs[] = { &sensor_dev_attr_pwm1.dev_attr.attr, diff --git a/drivers/hwmon/tmp401.c b/drivers/hwmon/tmp401.c index 1f2d13dc9439..ff66cf1bfb2e 100644 --- a/drivers/hwmon/tmp401.c +++ b/drivers/hwmon/tmp401.c @@ -288,8 +288,8 @@ abort: return ret; } -static ssize_t show_temp(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t temp_show(struct device *dev, struct device_attribute *devattr, + char *buf) { int nr = to_sensor_dev_attr_2(devattr)->nr; int index = to_sensor_dev_attr_2(devattr)->index; @@ -302,8 +302,9 @@ static ssize_t show_temp(struct device *dev, tmp401_register_to_temp(data->temp[nr][index], data->config)); } -static ssize_t show_temp_crit_hyst(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t temp_crit_hyst_show(struct device *dev, + struct device_attribute *devattr, + char *buf) { int temp, index = to_sensor_dev_attr(devattr)->index; struct tmp401_data *data = tmp401_update_device(dev); @@ -319,8 +320,8 @@ static ssize_t show_temp_crit_hyst(struct device *dev, return sprintf(buf, "%d\n", temp); } -static ssize_t show_status(struct device *dev, - struct device_attribute *devattr, char *buf) +static ssize_t status_show(struct device *dev, + struct device_attribute *devattr, char *buf) { int nr = to_sensor_dev_attr_2(devattr)->nr; int mask = to_sensor_dev_attr_2(devattr)->index; @@ -332,8 +333,9 @@ static ssize_t show_status(struct device *dev, return sprintf(buf, "%d\n", !!(data->status[nr] & mask)); } -static ssize_t store_temp(struct device *dev, struct device_attribute *devattr, - const char *buf, size_t count) +static ssize_t temp_store(struct device *dev, + struct device_attribute *devattr, const char *buf, + size_t count) { int nr = to_sensor_dev_attr_2(devattr)->nr; int index = to_sensor_dev_attr_2(devattr)->index; @@ -365,8 +367,9 @@ static ssize_t store_temp(struct device *dev, struct device_attribute 
*devattr, return count; } -static ssize_t store_temp_crit_hyst(struct device *dev, struct device_attribute - *devattr, const char *buf, size_t count) +static ssize_t temp_crit_hyst_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { int temp, index = to_sensor_dev_attr(devattr)->index; struct tmp401_data *data = tmp401_update_device(dev); @@ -404,8 +407,9 @@ static ssize_t store_temp_crit_hyst(struct device *dev, struct device_attribute * This is done by writing any value to any of the minimum/maximum registers * (0x30-0x37). */ -static ssize_t reset_temp_history(struct device *dev, - struct device_attribute *devattr, const char *buf, size_t count) +static ssize_t reset_temp_history_store(struct device *dev, + struct device_attribute *devattr, + const char *buf, size_t count) { struct tmp401_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; @@ -467,38 +471,29 @@ static ssize_t update_interval_store(struct device *dev, return count; } -static SENSOR_DEVICE_ATTR_2(temp1_input, S_IRUGO, show_temp, NULL, 0, 0); -static SENSOR_DEVICE_ATTR_2(temp1_min, S_IWUSR | S_IRUGO, show_temp, - store_temp, 1, 0); -static SENSOR_DEVICE_ATTR_2(temp1_max, S_IWUSR | S_IRUGO, show_temp, - store_temp, 2, 0); -static SENSOR_DEVICE_ATTR_2(temp1_crit, S_IWUSR | S_IRUGO, show_temp, - store_temp, 3, 0); -static SENSOR_DEVICE_ATTR(temp1_crit_hyst, S_IWUSR | S_IRUGO, - show_temp_crit_hyst, store_temp_crit_hyst, 0); -static SENSOR_DEVICE_ATTR_2(temp1_min_alarm, S_IRUGO, show_status, NULL, - 1, TMP432_STATUS_LOCAL); -static SENSOR_DEVICE_ATTR_2(temp1_max_alarm, S_IRUGO, show_status, NULL, - 2, TMP432_STATUS_LOCAL); -static SENSOR_DEVICE_ATTR_2(temp1_crit_alarm, S_IRUGO, show_status, NULL, - 3, TMP432_STATUS_LOCAL); -static SENSOR_DEVICE_ATTR_2(temp2_input, S_IRUGO, show_temp, NULL, 0, 1); -static SENSOR_DEVICE_ATTR_2(temp2_min, S_IWUSR | S_IRUGO, show_temp, - store_temp, 1, 1); -static SENSOR_DEVICE_ATTR_2(temp2_max, S_IWUSR | S_IRUGO, show_temp, - store_temp, 2, 1); -static SENSOR_DEVICE_ATTR_2(temp2_crit, S_IWUSR | S_IRUGO, show_temp, - store_temp, 3, 1); -static SENSOR_DEVICE_ATTR(temp2_crit_hyst, S_IRUGO, show_temp_crit_hyst, - NULL, 1); -static SENSOR_DEVICE_ATTR_2(temp2_fault, S_IRUGO, show_status, NULL, - 0, TMP432_STATUS_REMOTE1); -static SENSOR_DEVICE_ATTR_2(temp2_min_alarm, S_IRUGO, show_status, NULL, - 1, TMP432_STATUS_REMOTE1); -static SENSOR_DEVICE_ATTR_2(temp2_max_alarm, S_IRUGO, show_status, NULL, - 2, TMP432_STATUS_REMOTE1); -static SENSOR_DEVICE_ATTR_2(temp2_crit_alarm, S_IRUGO, show_status, NULL, - 3, TMP432_STATUS_REMOTE1); +static SENSOR_DEVICE_ATTR_2_RO(temp1_input, temp, 0, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp1_min, temp, 1, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp1_max, temp, 2, 0); +static SENSOR_DEVICE_ATTR_2_RW(temp1_crit, temp, 3, 0); +static SENSOR_DEVICE_ATTR_RW(temp1_crit_hyst, temp_crit_hyst, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp1_min_alarm, status, 1, + TMP432_STATUS_LOCAL); +static SENSOR_DEVICE_ATTR_2_RO(temp1_max_alarm, status, 2, + TMP432_STATUS_LOCAL); +static SENSOR_DEVICE_ATTR_2_RO(temp1_crit_alarm, status, 3, + TMP432_STATUS_LOCAL); +static SENSOR_DEVICE_ATTR_2_RO(temp2_input, temp, 0, 1); +static SENSOR_DEVICE_ATTR_2_RW(temp2_min, temp, 1, 1); +static SENSOR_DEVICE_ATTR_2_RW(temp2_max, temp, 2, 1); +static SENSOR_DEVICE_ATTR_2_RW(temp2_crit, temp, 3, 1); +static SENSOR_DEVICE_ATTR_RO(temp2_crit_hyst, temp_crit_hyst, 1); +static SENSOR_DEVICE_ATTR_2_RO(temp2_fault, status, 0, TMP432_STATUS_REMOTE1); 
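+/*
+ * The SENSOR_DEVICE_ATTR_*_RO/_RW/_WO helpers bind their callbacks by
+ * name as <func>_show()/<func>_store(), which is what the renames above
+ * (show_temp -> temp_show, store_temp -> temp_store, ...) are for.
+ */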
+static SENSOR_DEVICE_ATTR_2_RO(temp2_min_alarm, status, 1, + TMP432_STATUS_REMOTE1); +static SENSOR_DEVICE_ATTR_2_RO(temp2_max_alarm, status, 2, + TMP432_STATUS_REMOTE1); +static SENSOR_DEVICE_ATTR_2_RO(temp2_crit_alarm, status, 3, + TMP432_STATUS_REMOTE1); static DEVICE_ATTR_RW(update_interval); @@ -538,12 +533,11 @@ static const struct attribute_group tmp401_group = { * minimum and maximum register reset for both the local * and remote channels. */ -static SENSOR_DEVICE_ATTR_2(temp1_lowest, S_IRUGO, show_temp, NULL, 4, 0); -static SENSOR_DEVICE_ATTR_2(temp1_highest, S_IRUGO, show_temp, NULL, 5, 0); -static SENSOR_DEVICE_ATTR_2(temp2_lowest, S_IRUGO, show_temp, NULL, 4, 1); -static SENSOR_DEVICE_ATTR_2(temp2_highest, S_IRUGO, show_temp, NULL, 5, 1); -static SENSOR_DEVICE_ATTR(temp_reset_history, S_IWUSR, NULL, reset_temp_history, - 0); +static SENSOR_DEVICE_ATTR_2_RO(temp1_lowest, temp, 4, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp1_highest, temp, 5, 0); +static SENSOR_DEVICE_ATTR_2_RO(temp2_lowest, temp, 4, 1); +static SENSOR_DEVICE_ATTR_2_RO(temp2_highest, temp, 5, 1); +static SENSOR_DEVICE_ATTR_WO(temp_reset_history, reset_temp_history, 0); static struct attribute *tmp411_attributes[] = { &sensor_dev_attr_temp1_highest.dev_attr.attr, @@ -558,23 +552,18 @@ static const struct attribute_group tmp411_group = { .attrs = tmp411_attributes, }; -static SENSOR_DEVICE_ATTR_2(temp3_input, S_IRUGO, show_temp, NULL, 0, 2); -static SENSOR_DEVICE_ATTR_2(temp3_min, S_IWUSR | S_IRUGO, show_temp, - store_temp, 1, 2); -static SENSOR_DEVICE_ATTR_2(temp3_max, S_IWUSR | S_IRUGO, show_temp, - store_temp, 2, 2); -static SENSOR_DEVICE_ATTR_2(temp3_crit, S_IWUSR | S_IRUGO, show_temp, - store_temp, 3, 2); -static SENSOR_DEVICE_ATTR(temp3_crit_hyst, S_IRUGO, show_temp_crit_hyst, - NULL, 2); -static SENSOR_DEVICE_ATTR_2(temp3_fault, S_IRUGO, show_status, NULL, - 0, TMP432_STATUS_REMOTE2); -static SENSOR_DEVICE_ATTR_2(temp3_min_alarm, S_IRUGO, show_status, NULL, - 1, TMP432_STATUS_REMOTE2); -static SENSOR_DEVICE_ATTR_2(temp3_max_alarm, S_IRUGO, show_status, NULL, - 2, TMP432_STATUS_REMOTE2); -static SENSOR_DEVICE_ATTR_2(temp3_crit_alarm, S_IRUGO, show_status, NULL, - 3, TMP432_STATUS_REMOTE2); +static SENSOR_DEVICE_ATTR_2_RO(temp3_input, temp, 0, 2); +static SENSOR_DEVICE_ATTR_2_RW(temp3_min, temp, 1, 2); +static SENSOR_DEVICE_ATTR_2_RW(temp3_max, temp, 2, 2); +static SENSOR_DEVICE_ATTR_2_RW(temp3_crit, temp, 3, 2); +static SENSOR_DEVICE_ATTR_RO(temp3_crit_hyst, temp_crit_hyst, 2); +static SENSOR_DEVICE_ATTR_2_RO(temp3_fault, status, 0, TMP432_STATUS_REMOTE2); +static SENSOR_DEVICE_ATTR_2_RO(temp3_min_alarm, status, 1, + TMP432_STATUS_REMOTE2); +static SENSOR_DEVICE_ATTR_2_RO(temp3_max_alarm, status, 2, + TMP432_STATUS_REMOTE2); +static SENSOR_DEVICE_ATTR_2_RO(temp3_crit_alarm, status, 3, + TMP432_STATUS_REMOTE2); static struct attribute *tmp432_attributes[] = { &sensor_dev_attr_temp3_input.dev_attr.attr, @@ -598,8 +587,7 @@ static const struct attribute_group tmp432_group = { * Additional features of the TMP461 chip. * The TMP461 temperature offset for the remote channel. 
*/ -static SENSOR_DEVICE_ATTR_2(temp2_offset, S_IWUSR | S_IRUGO, show_temp, - store_temp, 6, 1); +static SENSOR_DEVICE_ATTR_2_RW(temp2_offset, temp, 6, 1); static struct attribute *tmp461_attributes[] = { &sensor_dev_attr_temp2_offset.dev_attr.attr, diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c index 824be0c5f592..105782ea64c7 100644 --- a/drivers/hwtracing/coresight/coresight-etb10.c +++ b/drivers/hwtracing/coresight/coresight-etb10.c @@ -136,6 +136,11 @@ static void __etb_enable_hw(struct etb_drvdata *drvdata) static int etb_enable_hw(struct etb_drvdata *drvdata) { + int rc = coresight_claim_device(drvdata->base); + + if (rc) + return rc; + __etb_enable_hw(drvdata); return 0; } @@ -223,7 +228,7 @@ static int etb_enable(struct coresight_device *csdev, u32 mode, void *data) return 0; } -static void etb_disable_hw(struct etb_drvdata *drvdata) +static void __etb_disable_hw(struct etb_drvdata *drvdata) { u32 ffcr; @@ -313,6 +318,13 @@ static void etb_dump_hw(struct etb_drvdata *drvdata) CS_LOCK(drvdata->base); } +static void etb_disable_hw(struct etb_drvdata *drvdata) +{ + __etb_disable_hw(drvdata); + etb_dump_hw(drvdata); + coresight_disclaim_device(drvdata->base); +} + static void etb_disable(struct coresight_device *csdev) { struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); @@ -323,7 +335,6 @@ static void etb_disable(struct coresight_device *csdev) /* Disable the ETB only if it needs to */ if (drvdata->mode != CS_MODE_DISABLED) { etb_disable_hw(drvdata); - etb_dump_hw(drvdata); drvdata->mode = CS_MODE_DISABLED; } spin_unlock_irqrestore(&drvdata->spinlock, flags); @@ -402,7 +413,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev, capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS; - etb_disable_hw(drvdata); + __etb_disable_hw(drvdata); CS_UNLOCK(drvdata->base); /* unit is in words, not bytes */ @@ -510,7 +521,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev, handle->head = (cur * PAGE_SIZE) + offset; to_read = buf->nr_pages << PAGE_SHIFT; } - etb_enable_hw(drvdata); + __etb_enable_hw(drvdata); CS_LOCK(drvdata->base); return to_read; @@ -534,9 +545,9 @@ static void etb_dump(struct etb_drvdata *drvdata) spin_lock_irqsave(&drvdata->spinlock, flags); if (drvdata->mode == CS_MODE_SYSFS) { - etb_disable_hw(drvdata); + __etb_disable_hw(drvdata); etb_dump_hw(drvdata); - etb_enable_hw(drvdata); + __etb_enable_hw(drvdata); } spin_unlock_irqrestore(&drvdata->spinlock, flags); diff --git a/drivers/hwtracing/coresight/coresight-etm3x.c b/drivers/hwtracing/coresight/coresight-etm3x.c index fd5c4cca7db5..9a63e87ea5f3 100644 --- a/drivers/hwtracing/coresight/coresight-etm3x.c +++ b/drivers/hwtracing/coresight/coresight-etm3x.c @@ -363,15 +363,16 @@ static int etm_enable_hw(struct etm_drvdata *drvdata) CS_UNLOCK(drvdata->base); + rc = coresight_claim_device_unlocked(drvdata->base); + if (rc) + goto done; + /* Turn engine on */ etm_clr_pwrdwn(drvdata); /* Apply power to trace registers */ etm_set_pwrup(drvdata); /* Make sure all registers are accessible */ etm_os_unlock(drvdata); - rc = coresight_claim_device_unlocked(drvdata->base); - if (rc) - goto done; etm_set_prog(drvdata); @@ -422,8 +423,6 @@ static int etm_enable_hw(struct etm_drvdata *drvdata) etm_clr_prog(drvdata); done: - if (rc) - etm_set_pwrdwn(drvdata); CS_LOCK(drvdata->base); dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n", @@ -577,9 +576,9 @@ static void etm_disable_hw(void *info) for (i = 0; i < 
drvdata->nr_cntr; i++) config->cntr_val[i] = etm_readl(drvdata, ETMCNTVRn(i)); + etm_set_pwrdwn(drvdata); coresight_disclaim_device_unlocked(drvdata->base); - etm_set_pwrdwn(drvdata); CS_LOCK(drvdata->base); dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu); @@ -602,6 +601,7 @@ static void etm_disable_perf(struct coresight_device *csdev) * power down the tracer. */ etm_set_pwrdwn(drvdata); + coresight_disclaim_device_unlocked(drvdata->base); CS_LOCK(drvdata->base); } diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c index 35d6f9709274..ef339ff22090 100644 --- a/drivers/hwtracing/coresight/coresight-stm.c +++ b/drivers/hwtracing/coresight/coresight-stm.c @@ -856,7 +856,7 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id) if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) { dev_info(dev, - "stm_register_device failed, probing deffered\n"); + "stm_register_device failed, probing deferred\n"); return -EPROBE_DEFER; } diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c index 53fc83b72a49..a5f053f2db2c 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etf.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c @@ -86,8 +86,8 @@ static void __tmc_etb_disable_hw(struct tmc_drvdata *drvdata) static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata) { - coresight_disclaim_device(drvdata); __tmc_etb_disable_hw(drvdata); + coresight_disclaim_device(drvdata->base); } static void __tmc_etf_enable_hw(struct tmc_drvdata *drvdata) diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c index d293e55553bd..ba7aaf421f36 100644 --- a/drivers/hwtracing/intel_th/msu.c +++ b/drivers/hwtracing/intel_th/msu.c @@ -1423,7 +1423,8 @@ nr_pages_store(struct device *dev, struct device_attribute *attr, if (!end) break; - len -= end - p; + /* consume the number and the following comma, hence +1 */ + len -= end - p + 1; p = end + 1; } while (len); diff --git a/drivers/hwtracing/stm/policy.c b/drivers/hwtracing/stm/policy.c index 0910ec807187..4b9e44b227d8 100644 --- a/drivers/hwtracing/stm/policy.c +++ b/drivers/hwtracing/stm/policy.c @@ -440,10 +440,8 @@ stp_policy_make(struct config_group *group, const char *name) stm->policy = kzalloc(sizeof(*stm->policy), GFP_KERNEL); if (!stm->policy) { - mutex_unlock(&stm->policy_mutex); - stm_put_protocol(pdrv); - stm_put_device(stm); - return ERR_PTR(-ENOMEM); + ret = ERR_PTR(-ENOMEM); + goto unlock_policy; } config_group_init_type_name(&stm->policy->group, name, @@ -458,7 +456,11 @@ unlock_policy: mutex_unlock(&stm->policy_mutex); if (IS_ERR(ret)) { - stm_put_protocol(stm->pdrv); + /* + * pdrv and stm->pdrv at this point can be quite different, + * and only one of them needs to be 'put' + */ + stm_put_protocol(pdrv); stm_put_device(stm); } diff --git a/drivers/i2c/Kconfig b/drivers/i2c/Kconfig index efc3354d60ae..c6b7fc7b67d6 100644 --- a/drivers/i2c/Kconfig +++ b/drivers/i2c/Kconfig @@ -68,7 +68,7 @@ config I2C_MUX This support is also available as a module. If so, the module will be called i2c-mux. -source drivers/i2c/muxes/Kconfig +source "drivers/i2c/muxes/Kconfig" config I2C_HELPER_AUTO bool "Autoselect pertinent helper modules" @@ -94,8 +94,8 @@ config I2C_SMBUS This support is also available as a module. If so, the module will be called i2c-smbus. 
-source drivers/i2c/algos/Kconfig -source drivers/i2c/busses/Kconfig +source "drivers/i2c/algos/Kconfig" +source "drivers/i2c/busses/Kconfig" config I2C_STUB tristate "I2C/SMBus Test Stub" diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c index 8b2b72b93885..da58020a144e 100644 --- a/drivers/ide/ide-atapi.c +++ b/drivers/ide/ide-atapi.c @@ -94,7 +94,7 @@ int ide_queue_pc_tail(ide_drive_t *drive, struct gendisk *disk, rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, 0); ide_req(rq)->type = ATA_PRIV_MISC; - rq->special = (char *)pc; + ide_req(rq)->special = pc; if (buf && bufflen) { error = blk_rq_map_kern(drive->queue, rq, buf, bufflen, @@ -172,8 +172,8 @@ EXPORT_SYMBOL_GPL(ide_create_request_sense_cmd); void ide_prep_sense(ide_drive_t *drive, struct request *rq) { struct request_sense *sense = &drive->sense_data; - struct request *sense_rq = drive->sense_rq; - struct scsi_request *req = scsi_req(sense_rq); + struct request *sense_rq; + struct scsi_request *req; unsigned int cmd_len, sense_len; int err; @@ -196,9 +196,16 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq) if (ata_sense_request(rq) || drive->sense_rq_armed) return; + sense_rq = drive->sense_rq; + if (!sense_rq) { + sense_rq = blk_mq_alloc_request(drive->queue, REQ_OP_DRV_IN, + BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT); + drive->sense_rq = sense_rq; + } + req = scsi_req(sense_rq); + memset(sense, 0, sizeof(*sense)); - blk_rq_init(rq->q, sense_rq); scsi_req_init(req); err = blk_rq_map_kern(drive->queue, sense_rq, sense, sense_len, @@ -207,6 +214,8 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq) if (printk_ratelimit()) printk(KERN_WARNING PFX "%s: failed to map sense " "buffer\n", drive->name); + blk_mq_free_request(sense_rq); + drive->sense_rq = NULL; return; } @@ -226,6 +235,8 @@ EXPORT_SYMBOL_GPL(ide_prep_sense); int ide_queue_sense_rq(ide_drive_t *drive, void *special) { + struct request *sense_rq = drive->sense_rq; + /* deferred failure from ide_prep_sense() */ if (!drive->sense_rq_armed) { printk(KERN_WARNING PFX "%s: error queuing a sense request\n", @@ -233,12 +244,12 @@ int ide_queue_sense_rq(ide_drive_t *drive, void *special) return -ENOMEM; } - drive->sense_rq->special = special; + ide_req(sense_rq)->special = special; drive->sense_rq_armed = false; drive->hwif->rq = NULL; - elv_add_request(drive->queue, drive->sense_rq, ELEVATOR_INSERT_FRONT); + ide_insert_request_head(drive, sense_rq); return 0; } EXPORT_SYMBOL_GPL(ide_queue_sense_rq); @@ -270,10 +281,8 @@ void ide_retry_pc(ide_drive_t *drive) */ drive->hwif->rq = NULL; ide_requeue_and_plug(drive, failed_rq); - if (ide_queue_sense_rq(drive, pc)) { - blk_start_request(failed_rq); + if (ide_queue_sense_rq(drive, pc)) ide_complete_rq(drive, BLK_STS_IOERR, blk_rq_bytes(failed_rq)); - } } EXPORT_SYMBOL_GPL(ide_retry_pc); diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c index f9b59d41813f..1f03884a6808 100644 --- a/drivers/ide/ide-cd.c +++ b/drivers/ide/ide-cd.c @@ -211,12 +211,12 @@ static void cdrom_analyze_sense_data(ide_drive_t *drive, static void ide_cd_complete_failed_rq(ide_drive_t *drive, struct request *rq) { /* - * For ATA_PRIV_SENSE, "rq->special" points to the original + * For ATA_PRIV_SENSE, "ide_req(rq)->special" points to the original * failed request. Also, the sense data should be read * directly from rq which might be different from the original * sense buffer if it got copied during mapping. 
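+ * (ide_prep_sense() maps drive->sense_data into the sense request with
+ * blk_rq_map_kern(), which may bounce the buffer; hence the
+ * bio_data(rq->bio) read below rather than drive->sense_data.)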
*/ - struct request *failed = (struct request *)rq->special; + struct request *failed = ide_req(rq)->special; void *sense = bio_data(rq->bio); if (failed) { @@ -258,11 +258,22 @@ static int ide_cd_breathe(ide_drive_t *drive, struct request *rq) /* * take a breather */ - blk_delay_queue(drive->queue, 1); + blk_mq_requeue_request(rq, false); + blk_mq_delay_kick_requeue_list(drive->queue, 1); return 1; } } +static void ide_cd_free_sense(ide_drive_t *drive) +{ + if (!drive->sense_rq) + return; + + blk_mq_free_request(drive->sense_rq); + drive->sense_rq = NULL; + drive->sense_rq_armed = false; +} + /** * Returns: * 0: if the request should be continued. @@ -516,6 +527,82 @@ static bool ide_cd_error_cmd(ide_drive_t *drive, struct ide_cmd *cmd) return false; } +/* standard prep_rq that builds 10 byte cmds */ +static bool ide_cdrom_prep_fs(struct request_queue *q, struct request *rq) +{ + int hard_sect = queue_logical_block_size(q); + long block = (long)blk_rq_pos(rq) / (hard_sect >> 9); + unsigned long blocks = blk_rq_sectors(rq) / (hard_sect >> 9); + struct scsi_request *req = scsi_req(rq); + + if (rq_data_dir(rq) == READ) + req->cmd[0] = GPCMD_READ_10; + else + req->cmd[0] = GPCMD_WRITE_10; + + /* + * fill in lba + */ + req->cmd[2] = (block >> 24) & 0xff; + req->cmd[3] = (block >> 16) & 0xff; + req->cmd[4] = (block >> 8) & 0xff; + req->cmd[5] = block & 0xff; + + /* + * and transfer length + */ + req->cmd[7] = (blocks >> 8) & 0xff; + req->cmd[8] = blocks & 0xff; + req->cmd_len = 10; + return true; +} + +/* + * Most of the SCSI commands are supported directly by ATAPI devices. + * This transform handles the few exceptions. + */ +static bool ide_cdrom_prep_pc(struct request *rq) +{ + u8 *c = scsi_req(rq)->cmd; + + /* transform 6-byte read/write commands to the 10-byte version */ + if (c[0] == READ_6 || c[0] == WRITE_6) { + c[8] = c[4]; + c[5] = c[3]; + c[4] = c[2]; + c[3] = c[1] & 0x1f; + c[2] = 0; + c[1] &= 0xe0; + c[0] += (READ_10 - READ_6); + scsi_req(rq)->cmd_len = 10; + return true; + } + + /* + * it's silly to pretend we understand 6-byte sense commands, just + * reject with ILLEGAL_REQUEST and the caller should take the + * appropriate action + */ + if (c[0] == MODE_SENSE || c[0] == MODE_SELECT) { + scsi_req(rq)->result = ILLEGAL_REQUEST; + return false; + } + + return true; +} + +static bool ide_cdrom_prep_rq(ide_drive_t *drive, struct request *rq) +{ + if (!blk_rq_is_passthrough(rq)) { + scsi_req_init(scsi_req(rq)); + + return ide_cdrom_prep_fs(drive->queue, rq); + } else if (blk_rq_is_scsi(rq)) + return ide_cdrom_prep_pc(rq); + + return true; +} + static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive) { ide_hwif_t *hwif = drive->hwif; @@ -675,7 +762,7 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive) out_end: if (blk_rq_is_scsi(rq) && rc == 0) { scsi_req(rq)->resid_len = 0; - blk_end_request_all(rq, BLK_STS_OK); + blk_mq_end_request(rq, BLK_STS_OK); hwif->rq = NULL; } else { if (sense && uptodate) @@ -705,6 +792,8 @@ out_end: if (sense && rc == 2) ide_error(drive, "request sense failure", stat); } + + ide_cd_free_sense(drive); return ide_stopped; } @@ -729,7 +818,7 @@ static ide_startstop_t cdrom_start_rw(ide_drive_t *drive, struct request *rq) * We may be retrying this request after an error. Fix up any * weirdness which might be present in the request packet. 
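+ * (For a filesystem request, ide_cdrom_prep_rq() rebuilds the 10-byte
+ * READ/WRITE CDB from blk_rq_pos()/blk_rq_sectors(), so a retry always
+ * starts from a freshly initialized command; see ide_cdrom_prep_fs().)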
*/ - q->prep_rq_fn(q, rq); + ide_cdrom_prep_rq(drive, rq); } /* fs requests *must* be hardware frame aligned */ @@ -1323,82 +1412,6 @@ static int ide_cdrom_probe_capabilities(ide_drive_t *drive) return nslots; } -/* standard prep_rq_fn that builds 10 byte cmds */ -static int ide_cdrom_prep_fs(struct request_queue *q, struct request *rq) -{ - int hard_sect = queue_logical_block_size(q); - long block = (long)blk_rq_pos(rq) / (hard_sect >> 9); - unsigned long blocks = blk_rq_sectors(rq) / (hard_sect >> 9); - struct scsi_request *req = scsi_req(rq); - - q->initialize_rq_fn(rq); - - if (rq_data_dir(rq) == READ) - req->cmd[0] = GPCMD_READ_10; - else - req->cmd[0] = GPCMD_WRITE_10; - - /* - * fill in lba - */ - req->cmd[2] = (block >> 24) & 0xff; - req->cmd[3] = (block >> 16) & 0xff; - req->cmd[4] = (block >> 8) & 0xff; - req->cmd[5] = block & 0xff; - - /* - * and transfer length - */ - req->cmd[7] = (blocks >> 8) & 0xff; - req->cmd[8] = blocks & 0xff; - req->cmd_len = 10; - return BLKPREP_OK; -} - -/* - * Most of the SCSI commands are supported directly by ATAPI devices. - * This transform handles the few exceptions. - */ -static int ide_cdrom_prep_pc(struct request *rq) -{ - u8 *c = scsi_req(rq)->cmd; - - /* transform 6-byte read/write commands to the 10-byte version */ - if (c[0] == READ_6 || c[0] == WRITE_6) { - c[8] = c[4]; - c[5] = c[3]; - c[4] = c[2]; - c[3] = c[1] & 0x1f; - c[2] = 0; - c[1] &= 0xe0; - c[0] += (READ_10 - READ_6); - scsi_req(rq)->cmd_len = 10; - return BLKPREP_OK; - } - - /* - * it's silly to pretend we understand 6-byte sense commands, just - * reject with ILLEGAL_REQUEST and the caller should take the - * appropriate action - */ - if (c[0] == MODE_SENSE || c[0] == MODE_SELECT) { - scsi_req(rq)->result = ILLEGAL_REQUEST; - return BLKPREP_KILL; - } - - return BLKPREP_OK; -} - -static int ide_cdrom_prep_fn(struct request_queue *q, struct request *rq) -{ - if (!blk_rq_is_passthrough(rq)) - return ide_cdrom_prep_fs(q, rq); - else if (blk_rq_is_scsi(rq)) - return ide_cdrom_prep_pc(rq); - - return 0; -} - struct cd_list_entry { const char *id_model; const char *id_firmware; @@ -1508,7 +1521,7 @@ static int ide_cdrom_setup(ide_drive_t *drive) ide_debug_log(IDE_DBG_PROBE, "enter"); - blk_queue_prep_rq(q, ide_cdrom_prep_fn); + drive->prep_rq = ide_cdrom_prep_rq; blk_queue_dma_alignment(q, 31); blk_queue_update_dma_pad(q, 15); @@ -1569,7 +1582,7 @@ static void ide_cd_release(struct device *dev) if (devinfo->handle == drive) unregister_cdrom(devinfo); drive->driver_data = NULL; - blk_queue_prep_rq(drive->queue, NULL); + drive->prep_rq = NULL; g->private_data = NULL; put_disk(g); kfree(info); diff --git a/drivers/ide/ide-devsets.c b/drivers/ide/ide-devsets.c index f4f8afdf8bbe..f2f93ed40356 100644 --- a/drivers/ide/ide-devsets.c +++ b/drivers/ide/ide-devsets.c @@ -171,7 +171,7 @@ int ide_devset_execute(ide_drive_t *drive, const struct ide_devset *setting, scsi_req(rq)->cmd_len = 5; scsi_req(rq)->cmd[0] = REQ_DEVSET_EXEC; *(int *)&scsi_req(rq)->cmd[1] = arg; - rq->special = setting->set; + ide_req(rq)->special = setting->set; blk_execute_rq(q, NULL, rq, 0); ret = scsi_req(rq)->result; @@ -182,7 +182,7 @@ int ide_devset_execute(ide_drive_t *drive, const struct ide_devset *setting, ide_startstop_t ide_do_devset(ide_drive_t *drive, struct request *rq) { - int err, (*setfunc)(ide_drive_t *, int) = rq->special; + int err, (*setfunc)(ide_drive_t *, int) = ide_req(rq)->special; err = setfunc(drive, *(int *)&scsi_req(rq)->cmd[1]); if (err) diff --git a/drivers/ide/ide-disk.c 
b/drivers/ide/ide-disk.c index e3b4e659082d..197912af5c2f 100644 --- a/drivers/ide/ide-disk.c +++ b/drivers/ide/ide-disk.c @@ -427,16 +427,15 @@ static void ide_disk_unlock_native_capacity(ide_drive_t *drive) drive->dev_flags |= IDE_DFLAG_NOHPA; /* disable HPA on resume */ } -static int idedisk_prep_fn(struct request_queue *q, struct request *rq) +static bool idedisk_prep_rq(ide_drive_t *drive, struct request *rq) { - ide_drive_t *drive = q->queuedata; struct ide_cmd *cmd; if (req_op(rq) != REQ_OP_FLUSH) - return BLKPREP_OK; + return true; - if (rq->special) { - cmd = rq->special; + if (ide_req(rq)->special) { + cmd = ide_req(rq)->special; memset(cmd, 0, sizeof(*cmd)); } else { cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC); @@ -456,10 +455,10 @@ static int idedisk_prep_fn(struct request_queue *q, struct request *rq) rq->cmd_flags &= ~REQ_OP_MASK; rq->cmd_flags |= REQ_OP_DRV_OUT; ide_req(rq)->type = ATA_PRIV_TASKFILE; - rq->special = cmd; + ide_req(rq)->special = cmd; cmd->rq = rq; - return BLKPREP_OK; + return true; } ide_devset_get(multcount, mult_count); @@ -548,7 +547,7 @@ static void update_flush(ide_drive_t *drive) if (barrier) { wc = true; - blk_queue_prep_rq(drive->queue, idedisk_prep_fn); + drive->prep_rq = idedisk_prep_rq; } } diff --git a/drivers/ide/ide-eh.c b/drivers/ide/ide-eh.c index 47d5f3379748..e1323e058454 100644 --- a/drivers/ide/ide-eh.c +++ b/drivers/ide/ide-eh.c @@ -125,7 +125,7 @@ ide_startstop_t ide_error(ide_drive_t *drive, const char *msg, u8 stat) /* retry only "normal" I/O: */ if (blk_rq_is_passthrough(rq)) { if (ata_taskfile_request(rq)) { - struct ide_cmd *cmd = rq->special; + struct ide_cmd *cmd = ide_req(rq)->special; if (cmd) ide_complete_cmd(drive, cmd, stat, err); diff --git a/drivers/ide/ide-floppy.c b/drivers/ide/ide-floppy.c index a8df300f949c..780d33ccc5d8 100644 --- a/drivers/ide/ide-floppy.c +++ b/drivers/ide/ide-floppy.c @@ -276,7 +276,7 @@ static ide_startstop_t ide_floppy_do_request(ide_drive_t *drive, switch (ide_req(rq)->type) { case ATA_PRIV_MISC: case ATA_PRIV_SENSE: - pc = (struct ide_atapi_pc *)rq->special; + pc = (struct ide_atapi_pc *)ide_req(rq)->special; break; default: BUG(); diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c index 0d93e0cfbeaf..8445b484ae69 100644 --- a/drivers/ide/ide-io.c +++ b/drivers/ide/ide-io.c @@ -67,7 +67,15 @@ int ide_end_rq(ide_drive_t *drive, struct request *rq, blk_status_t error, ide_dma_on(drive); } - return blk_end_request(rq, error, nr_bytes); + if (!blk_update_request(rq, error, nr_bytes)) { + if (rq == drive->sense_rq) + drive->sense_rq = NULL; + + __blk_mq_end_request(rq, error); + return 0; + } + + return 1; } EXPORT_SYMBOL_GPL(ide_end_rq); @@ -103,7 +111,7 @@ void ide_complete_cmd(ide_drive_t *drive, struct ide_cmd *cmd, u8 stat, u8 err) } if (rq && ata_taskfile_request(rq)) { - struct ide_cmd *orig_cmd = rq->special; + struct ide_cmd *orig_cmd = ide_req(rq)->special; if (cmd->tf_flags & IDE_TFLAG_DYN) kfree(orig_cmd); @@ -253,7 +261,7 @@ EXPORT_SYMBOL_GPL(ide_init_sg_cmd); static ide_startstop_t execute_drive_cmd (ide_drive_t *drive, struct request *rq) { - struct ide_cmd *cmd = rq->special; + struct ide_cmd *cmd = ide_req(rq)->special; if (cmd) { if (cmd->protocol == ATA_PROT_PIO) { @@ -307,8 +315,6 @@ static ide_startstop_t start_request (ide_drive_t *drive, struct request *rq) { ide_startstop_t startstop; - BUG_ON(!(rq->rq_flags & RQF_STARTED)); - #ifdef DEBUG printk("%s: start_request: current=0x%08lx\n", drive->hwif->name, (unsigned long) rq); @@ -320,6 +326,9 @@ static ide_startstop_t 
start_request (ide_drive_t *drive, struct request *rq) goto kill_rq; } + if (drive->prep_rq && !drive->prep_rq(drive, rq)) + return ide_stopped; + if (ata_pm_request(rq)) ide_check_pm_state(drive, rq); @@ -343,7 +352,7 @@ static ide_startstop_t start_request (ide_drive_t *drive, struct request *rq) if (ata_taskfile_request(rq)) return execute_drive_cmd(drive, rq); else if (ata_pm_request(rq)) { - struct ide_pm_state *pm = rq->special; + struct ide_pm_state *pm = ide_req(rq)->special; #ifdef DEBUG_PM printk("%s: start_power_step(step: %d)\n", drive->name, pm->pm_step); @@ -430,44 +439,42 @@ static inline void ide_unlock_host(struct ide_host *host) } } -static void __ide_requeue_and_plug(struct request_queue *q, struct request *rq) -{ - if (rq) - blk_requeue_request(q, rq); - if (rq || blk_peek_request(q)) { - /* Use 3ms as that was the old plug delay */ - blk_delay_queue(q, 3); - } -} - void ide_requeue_and_plug(ide_drive_t *drive, struct request *rq) { struct request_queue *q = drive->queue; - unsigned long flags; - spin_lock_irqsave(q->queue_lock, flags); - __ide_requeue_and_plug(q, rq); - spin_unlock_irqrestore(q->queue_lock, flags); + /* Use 3ms as that was the old plug delay */ + if (rq) { + blk_mq_requeue_request(rq, false); + blk_mq_delay_kick_requeue_list(q, 3); + } else + blk_mq_delay_run_hw_queue(q->queue_hw_ctx[0], 3); } /* * Issue a new request to a device. */ -void do_ide_request(struct request_queue *q) +blk_status_t ide_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) { - ide_drive_t *drive = q->queuedata; + ide_drive_t *drive = hctx->queue->queuedata; ide_hwif_t *hwif = drive->hwif; struct ide_host *host = hwif->host; - struct request *rq = NULL; + struct request *rq = bd->rq; ide_startstop_t startstop; - spin_unlock_irq(q->queue_lock); + if (!blk_rq_is_passthrough(rq) && !(rq->rq_flags & RQF_DONTPREP)) { + rq->rq_flags |= RQF_DONTPREP; + ide_req(rq)->special = NULL; + } /* HLD do_request() callback might sleep, make sure it's okay */ might_sleep(); if (ide_lock_host(host, hwif)) - goto plug_device_2; + return BLK_STS_DEV_RESOURCE; + + blk_mq_start_request(rq); spin_lock_irq(&hwif->lock); @@ -503,21 +510,16 @@ repeat: hwif->cur_dev = drive; drive->dev_flags &= ~(IDE_DFLAG_SLEEPING | IDE_DFLAG_PARKED); - spin_unlock_irq(&hwif->lock); - spin_lock_irq(q->queue_lock); /* * we know that the queue isn't empty, but this can happen - * if the q->prep_rq_fn() decides to kill a request + * if ->prep_rq() decides to kill a request */ - if (!rq) - rq = blk_fetch_request(drive->queue); - - spin_unlock_irq(q->queue_lock); - spin_lock_irq(&hwif->lock); - if (!rq) { - ide_unlock_port(hwif); - goto out; + rq = bd->rq; + if (!rq) { + ide_unlock_port(hwif); + goto out; + } } /* @@ -551,23 +553,24 @@ repeat: if (startstop == ide_stopped) { rq = hwif->rq; hwif->rq = NULL; - goto repeat; + if (rq) + goto repeat; + ide_unlock_port(hwif); + goto out; } - } else - goto plug_device; + } else { +plug_device: + spin_unlock_irq(&hwif->lock); + ide_unlock_host(host); + ide_requeue_and_plug(drive, rq); + return BLK_STS_OK; + } + out: spin_unlock_irq(&hwif->lock); if (rq == NULL) ide_unlock_host(host); - spin_lock_irq(q->queue_lock); - return; - -plug_device: - spin_unlock_irq(&hwif->lock); - ide_unlock_host(host); -plug_device_2: - spin_lock_irq(q->queue_lock); - __ide_requeue_and_plug(q, rq); + return BLK_STS_OK; } static int drive_is_ready(ide_drive_t *drive) @@ -887,3 +890,16 @@ void ide_pad_transfer(ide_drive_t *drive, int write, int len) } } 
EXPORT_SYMBOL_GPL(ide_pad_transfer); + +void ide_insert_request_head(ide_drive_t *drive, struct request *rq) +{ + ide_hwif_t *hwif = drive->hwif; + unsigned long flags; + + spin_lock_irqsave(&hwif->lock, flags); + list_add_tail(&rq->queuelist, &drive->rq_list); + spin_unlock_irqrestore(&hwif->lock, flags); + + kblockd_schedule_work(&drive->rq_work); +} +EXPORT_SYMBOL_GPL(ide_insert_request_head); diff --git a/drivers/ide/ide-park.c b/drivers/ide/ide-park.c index 622f0edb3945..102aa3bc3e7f 100644 --- a/drivers/ide/ide-park.c +++ b/drivers/ide/ide-park.c @@ -27,7 +27,7 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout) spin_unlock_irq(&hwif->lock); if (start_queue) - blk_run_queue(q); + blk_mq_run_hw_queues(q, true); return; } spin_unlock_irq(&hwif->lock); @@ -36,7 +36,7 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout) scsi_req(rq)->cmd[0] = REQ_PARK_HEADS; scsi_req(rq)->cmd_len = 1; ide_req(rq)->type = ATA_PRIV_MISC; - rq->special = &timeout; + ide_req(rq)->special = &timeout; blk_execute_rq(q, NULL, rq, 1); rc = scsi_req(rq)->result ? -EIO : 0; blk_put_request(rq); @@ -54,7 +54,7 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout) scsi_req(rq)->cmd[0] = REQ_UNPARK_HEADS; scsi_req(rq)->cmd_len = 1; ide_req(rq)->type = ATA_PRIV_MISC; - elv_add_request(q, rq, ELEVATOR_INSERT_FRONT); + ide_insert_request_head(drive, rq); out: return; @@ -67,7 +67,7 @@ ide_startstop_t ide_do_park_unpark(ide_drive_t *drive, struct request *rq) memset(&cmd, 0, sizeof(cmd)); if (scsi_req(rq)->cmd[0] == REQ_PARK_HEADS) { - drive->sleep = *(unsigned long *)rq->special; + drive->sleep = *(unsigned long *)ide_req(rq)->special; drive->dev_flags |= IDE_DFLAG_SLEEPING; tf->command = ATA_CMD_IDLEIMMEDIATE; tf->feature = 0x44; diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c index 59217aa1d1fb..192e6c65d34e 100644 --- a/drivers/ide/ide-pm.c +++ b/drivers/ide/ide-pm.c @@ -21,7 +21,7 @@ int generic_ide_suspend(struct device *dev, pm_message_t mesg) memset(&rqpm, 0, sizeof(rqpm)); rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, 0); ide_req(rq)->type = ATA_PRIV_PM_SUSPEND; - rq->special = &rqpm; + ide_req(rq)->special = &rqpm; rqpm.pm_step = IDE_PM_START_SUSPEND; if (mesg.event == PM_EVENT_PRETHAW) mesg.event = PM_EVENT_FREEZE; @@ -40,32 +40,17 @@ int generic_ide_suspend(struct device *dev, pm_message_t mesg) return ret; } -static void ide_end_sync_rq(struct request *rq, blk_status_t error) -{ - complete(rq->end_io_data); -} - static int ide_pm_execute_rq(struct request *rq) { struct request_queue *q = rq->q; - DECLARE_COMPLETION_ONSTACK(wait); - rq->end_io_data = &wait; - rq->end_io = ide_end_sync_rq; - - spin_lock_irq(q->queue_lock); if (unlikely(blk_queue_dying(q))) { rq->rq_flags |= RQF_QUIET; scsi_req(rq)->result = -ENXIO; - __blk_end_request_all(rq, BLK_STS_OK); - spin_unlock_irq(q->queue_lock); + blk_mq_end_request(rq, BLK_STS_OK); return -ENXIO; } - __elv_add_request(q, rq, ELEVATOR_INSERT_FRONT); - __blk_run_queue_uncond(q); - spin_unlock_irq(q->queue_lock); - - wait_for_completion_io(&wait); + blk_execute_rq(q, NULL, rq, true); return scsi_req(rq)->result ? 
-EIO : 0; } @@ -79,6 +64,8 @@ int generic_ide_resume(struct device *dev) struct ide_pm_state rqpm; int err; + blk_mq_start_stopped_hw_queues(drive->queue, true); + if (ide_port_acpi(hwif)) { /* call ACPI _PS0 / _STM only once */ if ((drive->dn & 1) == 0 || pair == NULL) { @@ -92,7 +79,7 @@ int generic_ide_resume(struct device *dev) memset(&rqpm, 0, sizeof(rqpm)); rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, BLK_MQ_REQ_PREEMPT); ide_req(rq)->type = ATA_PRIV_PM_RESUME; - rq->special = &rqpm; + ide_req(rq)->special = &rqpm; rqpm.pm_step = IDE_PM_START_RESUME; rqpm.pm_state = PM_EVENT_ON; @@ -111,7 +98,7 @@ int generic_ide_resume(struct device *dev) void ide_complete_power_step(ide_drive_t *drive, struct request *rq) { - struct ide_pm_state *pm = rq->special; + struct ide_pm_state *pm = ide_req(rq)->special; #ifdef DEBUG_PM printk(KERN_INFO "%s: complete_power_step(step: %d)\n", @@ -141,7 +128,7 @@ void ide_complete_power_step(ide_drive_t *drive, struct request *rq) ide_startstop_t ide_start_power_step(ide_drive_t *drive, struct request *rq) { - struct ide_pm_state *pm = rq->special; + struct ide_pm_state *pm = ide_req(rq)->special; struct ide_cmd cmd = { }; switch (pm->pm_step) { @@ -213,8 +200,7 @@ out_do_tf: void ide_complete_pm_rq(ide_drive_t *drive, struct request *rq) { struct request_queue *q = drive->queue; - struct ide_pm_state *pm = rq->special; - unsigned long flags; + struct ide_pm_state *pm = ide_req(rq)->special; ide_complete_power_step(drive, rq); if (pm->pm_step != IDE_PM_COMPLETED) @@ -224,22 +210,19 @@ void ide_complete_pm_rq(ide_drive_t *drive, struct request *rq) printk("%s: completing PM request, %s\n", drive->name, (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND) ? "suspend" : "resume"); #endif - spin_lock_irqsave(q->queue_lock, flags); if (ide_req(rq)->type == ATA_PRIV_PM_SUSPEND) - blk_stop_queue(q); + blk_mq_stop_hw_queues(q); else drive->dev_flags &= ~IDE_DFLAG_BLOCKED; - spin_unlock_irqrestore(q->queue_lock, flags); drive->hwif->rq = NULL; - if (blk_end_request(rq, BLK_STS_OK, 0)) - BUG(); + blk_mq_end_request(rq, BLK_STS_OK); } void ide_check_pm_state(ide_drive_t *drive, struct request *rq) { - struct ide_pm_state *pm = rq->special; + struct ide_pm_state *pm = ide_req(rq)->special; if (blk_rq_is_private(rq) && ide_req(rq)->type == ATA_PRIV_PM_SUSPEND && @@ -260,7 +243,6 @@ void ide_check_pm_state(ide_drive_t *drive, struct request *rq) ide_hwif_t *hwif = drive->hwif; const struct ide_tp_ops *tp_ops = hwif->tp_ops; struct request_queue *q = drive->queue; - unsigned long flags; int rc; #ifdef DEBUG_PM printk("%s: Wakeup request inited, waiting for !BSY...\n", drive->name); @@ -274,8 +256,6 @@ void ide_check_pm_state(ide_drive_t *drive, struct request *rq) if (rc) printk(KERN_WARNING "%s: drive not ready on wakeup\n", drive->name); - spin_lock_irqsave(q->queue_lock, flags); - blk_start_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); + blk_mq_start_hw_queues(q); } } diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c index 3b75a7b7a284..63627be0811a 100644 --- a/drivers/ide/ide-probe.c +++ b/drivers/ide/ide-probe.c @@ -746,10 +746,16 @@ static void ide_initialize_rq(struct request *rq) { struct ide_request *req = blk_mq_rq_to_pdu(rq); + req->special = NULL; scsi_req_init(&req->sreq); req->sreq.sense = req->sense; } +static const struct blk_mq_ops ide_mq_ops = { + .queue_rq = ide_queue_rq, + .initialize_rq_fn = ide_initialize_rq, +}; + /* * init request queue */ @@ -759,6 +765,7 @@ static int ide_init_queue(ide_drive_t *drive) ide_hwif_t *hwif = 
drive->hwif; int max_sectors = 256; int max_sg_entries = PRD_ENTRIES; + struct blk_mq_tag_set *set; /* * Our default set up assumes the normal IDE case, @@ -767,19 +774,26 @@ static int ide_init_queue(ide_drive_t *drive) * limits and LBA48 we could raise it but as yet * do not. */ - q = blk_alloc_queue_node(GFP_KERNEL, hwif_to_node(hwif), NULL); - if (!q) + + set = &drive->tag_set; + set->ops = &ide_mq_ops; + set->nr_hw_queues = 1; + set->queue_depth = 32; + set->reserved_tags = 1; + set->cmd_size = sizeof(struct ide_request); + set->numa_node = hwif_to_node(hwif); + set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING; + if (blk_mq_alloc_tag_set(set)) return 1; - q->request_fn = do_ide_request; - q->initialize_rq_fn = ide_initialize_rq; - q->cmd_size = sizeof(struct ide_request); - blk_queue_flag_set(QUEUE_FLAG_SCSI_PASSTHROUGH, q); - if (blk_init_allocated_queue(q) < 0) { - blk_cleanup_queue(q); + q = blk_mq_init_queue(set); + if (IS_ERR(q)) { + blk_mq_free_tag_set(set); return 1; } + blk_queue_flag_set(QUEUE_FLAG_SCSI_PASSTHROUGH, q); + q->queuedata = drive; blk_queue_segment_boundary(q, 0xffff); @@ -965,8 +979,12 @@ static void drive_release_dev (struct device *dev) ide_proc_unregister_device(drive); + if (drive->sense_rq) + blk_mq_free_request(drive->sense_rq); + blk_cleanup_queue(drive->queue); drive->queue = NULL; + blk_mq_free_tag_set(&drive->tag_set); drive->dev_flags &= ~IDE_DFLAG_PRESENT; @@ -1133,6 +1151,28 @@ static void ide_port_cable_detect(ide_hwif_t *hwif) } } +/* + * Deferred request list insertion handler + */ +static void drive_rq_insert_work(struct work_struct *work) +{ + ide_drive_t *drive = container_of(work, ide_drive_t, rq_work); + ide_hwif_t *hwif = drive->hwif; + struct request *rq; + LIST_HEAD(list); + + spin_lock_irq(&hwif->lock); + if (!list_empty(&drive->rq_list)) + list_splice_init(&drive->rq_list, &list); + spin_unlock_irq(&hwif->lock); + + while (!list_empty(&list)) { + rq = list_first_entry(&list, struct request, queuelist); + list_del_init(&rq->queuelist); + blk_execute_rq_nowait(drive->queue, rq->rq_disk, rq, true, NULL); + } +} + static const u8 ide_hwif_to_major[] = { IDE0_MAJOR, IDE1_MAJOR, IDE2_MAJOR, IDE3_MAJOR, IDE4_MAJOR, IDE5_MAJOR, IDE6_MAJOR, IDE7_MAJOR, IDE8_MAJOR, IDE9_MAJOR }; @@ -1145,12 +1185,10 @@ static void ide_port_init_devices_data(ide_hwif_t *hwif) ide_port_for_each_dev(i, drive, hwif) { u8 j = (hwif->index * MAX_DRIVES) + i; u16 *saved_id = drive->id; - struct request *saved_sense_rq = drive->sense_rq; memset(drive, 0, sizeof(*drive)); memset(saved_id, 0, SECTOR_SIZE); drive->id = saved_id; - drive->sense_rq = saved_sense_rq; drive->media = ide_disk; drive->select = (i << 4) | ATA_DEVICE_OBS; @@ -1166,6 +1204,9 @@ static void ide_port_init_devices_data(ide_hwif_t *hwif) INIT_LIST_HEAD(&drive->list); init_completion(&drive->gendev_rel_comp); + + INIT_WORK(&drive->rq_work, drive_rq_insert_work); + INIT_LIST_HEAD(&drive->rq_list); } } @@ -1255,7 +1296,6 @@ static void ide_port_free_devices(ide_hwif_t *hwif) int i; ide_port_for_each_dev(i, drive, hwif) { - kfree(drive->sense_rq); kfree(drive->id); kfree(drive); } @@ -1283,17 +1323,10 @@ static int ide_port_alloc_devices(ide_hwif_t *hwif, int node) if (drive->id == NULL) goto out_free_drive; - drive->sense_rq = kmalloc(sizeof(struct request) + - sizeof(struct ide_request), GFP_KERNEL); - if (!drive->sense_rq) - goto out_free_id; - hwif->devices[i] = drive; } return 0; -out_free_id: - kfree(drive->id); out_free_drive: kfree(drive); out_nomem: diff --git a/drivers/ide/ide-tape.c 
b/drivers/ide/ide-tape.c index 34c1165226a4..db1a65f4b490 100644 --- a/drivers/ide/ide-tape.c +++ b/drivers/ide/ide-tape.c @@ -639,7 +639,7 @@ static ide_startstop_t idetape_do_request(ide_drive_t *drive, goto out; } if (req->cmd[13] & REQ_IDETAPE_PC1) { - pc = (struct ide_atapi_pc *)rq->special; + pc = (struct ide_atapi_pc *)ide_req(rq)->special; req->cmd[13] &= ~(REQ_IDETAPE_PC1); req->cmd[13] |= REQ_IDETAPE_PC2; goto out; diff --git a/drivers/ide/ide-taskfile.c b/drivers/ide/ide-taskfile.c index c21d5c50ae3a..17b2e379e872 100644 --- a/drivers/ide/ide-taskfile.c +++ b/drivers/ide/ide-taskfile.c @@ -440,7 +440,7 @@ int ide_raw_taskfile(ide_drive_t *drive, struct ide_cmd *cmd, u8 *buf, goto put_req; } - rq->special = cmd; + ide_req(rq)->special = cmd; cmd->rq = rq; blk_execute_rq(drive->queue, NULL, rq, 0); diff --git a/drivers/ide/pmac.c b/drivers/ide/pmac.c index 203ed4adc04a..92f840365718 100644 --- a/drivers/ide/pmac.c +++ b/drivers/ide/pmac.c @@ -1046,7 +1046,7 @@ static int pmac_ide_setup_device(pmac_ide_hwif_t *pmif, struct ide_hw *hw) d.port_ops = &pmac_ide_ata4_port_ops; d.udma_mask = ATA_UDMA5; } else if (of_device_is_compatible(np, "keylargo-ata")) { - if (strcmp(np->name, "ata-4") == 0) { + if (of_node_name_eq(np, "ata-4")) { pmif->kind = controller_kl_ata4; d.port_ops = &pmac_ide_ata4_port_ops; d.udma_mask = ATA_UDMA4; diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig index 7993a67bd351..898839ca164a 100644 --- a/drivers/iio/accel/Kconfig +++ b/drivers/iio/accel/Kconfig @@ -223,7 +223,7 @@ config IIO_ST_ACCEL_3AXIS Say yes here to build support for STMicroelectronics accelerometers: LSM303DLH, LSM303DLHC, LIS3DH, LSM330D, LSM330DL, LSM330DLC, LIS331DLH, LSM303DL, LSM303DLM, LSM330, LIS2DH12, H3LIS331DL, - LNG2DM + LNG2DM, LIS3DE This driver can also be built as a module. 
If so, these modules will be created: diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c index af53a1084ee5..7096e577b23f 100644 --- a/drivers/iio/accel/kxcjk-1013.c +++ b/drivers/iio/accel/kxcjk-1013.c @@ -1489,8 +1489,11 @@ static const struct acpi_device_id kx_acpi_match[] = { {"KXCJ1013", KXCJK1013}, {"KXCJ1008", KXCJ91008}, {"KXCJ9000", KXCJ91008}, + {"KIOX0009", KXTJ21009}, {"KIOX000A", KXCJ91008}, + {"KIOX010A", KXCJ91008}, /* KXCJ91008 inside the display of a 2-in-1 */ {"KXTJ1009", KXTJ21009}, + {"KXJ2109", KXTJ21009}, {"SMO8500", KXCJ91008}, { }, }; diff --git a/drivers/iio/accel/st_accel.h b/drivers/iio/accel/st_accel.h index 2f931e4837e5..fd53258656ca 100644 --- a/drivers/iio/accel/st_accel.h +++ b/drivers/iio/accel/st_accel.h @@ -56,6 +56,7 @@ enum st_accel_type { #define LNG2DM_ACCEL_DEV_NAME "lng2dm" #define LIS2DW12_ACCEL_DEV_NAME "lis2dw12" #define LIS3DHH_ACCEL_DEV_NAME "lis3dhh" +#define LIS3DE_ACCEL_DEV_NAME "lis3de" /** * struct st_sensors_platform_data - default accel platform data diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c index 3e6fd5a8ac5b..f7b471121508 100644 --- a/drivers/iio/accel/st_accel_core.c +++ b/drivers/iio/accel/st_accel_core.c @@ -103,6 +103,7 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = { [4] = LSM330DLC_ACCEL_DEV_NAME, [5] = LSM303AGR_ACCEL_DEV_NAME, [6] = LIS2DH12_ACCEL_DEV_NAME, + [7] = LIS3DE_ACCEL_DEV_NAME, }, .ch = (struct iio_chan_spec *)st_accel_12bit_channels, .odr = { diff --git a/drivers/iio/accel/st_accel_i2c.c b/drivers/iio/accel/st_accel_i2c.c index 2ca5d1f6ade0..de8ae4327094 100644 --- a/drivers/iio/accel/st_accel_i2c.c +++ b/drivers/iio/accel/st_accel_i2c.c @@ -98,6 +98,10 @@ static const struct of_device_id st_accel_of_match[] = { .compatible = "st,lis2dw12", .data = LIS2DW12_ACCEL_DEV_NAME, }, + { + .compatible = "st,lis3de", + .data = LIS3DE_ACCEL_DEV_NAME, + }, {}, }; MODULE_DEVICE_TABLE(of, st_accel_of_match); @@ -135,6 +139,7 @@ static const struct i2c_device_id st_accel_id_table[] = { { LIS331DL_ACCEL_DEV_NAME }, { LIS3LV02DL_ACCEL_DEV_NAME }, { LIS2DW12_ACCEL_DEV_NAME }, + { LIS3DE_ACCEL_DEV_NAME }, {}, }; MODULE_DEVICE_TABLE(i2c, st_accel_id_table); diff --git a/drivers/iio/accel/st_accel_spi.c b/drivers/iio/accel/st_accel_spi.c index dcc9bd243a52..73bfb5d04e2b 100644 --- a/drivers/iio/accel/st_accel_spi.c +++ b/drivers/iio/accel/st_accel_spi.c @@ -90,6 +90,10 @@ static const struct of_device_id st_accel_of_match[] = { .compatible = "st,lis3dhh", .data = LIS3DHH_ACCEL_DEV_NAME, }, + { + .compatible = "st,lis3de", + .data = LIS3DE_ACCEL_DEV_NAME, + }, {} }; MODULE_DEVICE_TABLE(of, st_accel_of_match); @@ -143,6 +147,7 @@ static const struct spi_device_id st_accel_id_table[] = { { LIS3LV02DL_ACCEL_DEV_NAME }, { LIS2DW12_ACCEL_DEV_NAME }, { LIS3DHH_ACCEL_DEV_NAME }, + { LIS3DE_ACCEL_DEV_NAME }, {}, }; MODULE_DEVICE_TABLE(spi, st_accel_id_table); diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig index a52fea8749a9..7a3ca4ec0cb7 100644 --- a/drivers/iio/adc/Kconfig +++ b/drivers/iio/adc/Kconfig @@ -10,6 +10,17 @@ config AD_SIGMA_DELTA select IIO_BUFFER select IIO_TRIGGERED_BUFFER +config AD7124 + tristate "Analog Devices AD7124 and similar sigma-delta ADCs driver" + depends on SPI_MASTER + select AD_SIGMA_DELTA + help + Say yes here to build support for Analog Devices AD7124-4 and AD7124-8 + SPI analog to digital converters (ADC). + + To compile this driver as a module, choose M here: the module will be + called ad7124. 
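The scale list the new driver advertises further down (in_voltage_scale_available) follows directly from its 2.5 V internal reference, 24-bit resolution and binary PGA gains of 1..128, all visible in the code below. A small host-side C sketch to cross-check that arithmetic (illustrative only, not part of the patch):

    #include <stdio.h>

    int main(void)
    {
            /* 2500 mV internal reference, 24-bit code, PGA gain = 2^n */
            for (int n = 0; n < 8; n++)
                    printf("gain %3d -> %.10f mV/LSB\n",
                           1 << n, 2500.0 / (double)(1ULL << (24 + n)));
            return 0;
    }

This prints 0.0001490116 mV/LSB at gain 1 down to about 0.0000011642 at gain 128; the advertised strings are these values truncated to nine decimals, and the ninth entry, 0.000298023, is the bipolar gain-1 case (2500 / 2^23), where one code bit carries the sign.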
+ config AD7266 tristate "Analog Devices AD7265/AD7266 ADC driver" depends on SPI_MASTER @@ -116,6 +127,16 @@ config AD7923 To compile this driver as a module, choose M here: the module will be called ad7923. +config AD7949 + tristate "Analog Devices AD7949 and similar ADCs driver" + depends on SPI + help + Say yes here to build support for Analog Devices + AD7949, AD7682, AD7689 8 Channel ADCs. + + To compile this driver as a module, choose M here: the + module will be called ad7949. + config AD799X tristate "Analog Devices AD799x ADC driver" depends on I2C @@ -274,7 +295,7 @@ config EP93XX_ADC config EXYNOS_ADC tristate "Exynos ADC driver support" - depends on ARCH_EXYNOS || ARCH_S3C24XX || ARCH_S3C64XX || (OF && COMPILE_TEST) + depends on ARCH_EXYNOS || ARCH_S3C24XX || ARCH_S3C64XX || ARCH_S5PV210 || (OF && COMPILE_TEST) depends on HAS_IOMEM help Core support for the ADC block found in the Samsung EXYNOS series diff --git a/drivers/iio/adc/Makefile b/drivers/iio/adc/Makefile index a6e6a0b659e2..07df37f621bd 100644 --- a/drivers/iio/adc/Makefile +++ b/drivers/iio/adc/Makefile @@ -5,6 +5,7 @@ # When adding new entries keep the list in alphabetical order obj-$(CONFIG_AD_SIGMA_DELTA) += ad_sigma_delta.o +obj-$(CONFIG_AD7124) += ad7124.o obj-$(CONFIG_AD7266) += ad7266.o obj-$(CONFIG_AD7291) += ad7291.o obj-$(CONFIG_AD7298) += ad7298.o @@ -14,6 +15,7 @@ obj-$(CONFIG_AD7766) += ad7766.o obj-$(CONFIG_AD7791) += ad7791.o obj-$(CONFIG_AD7793) += ad7793.o obj-$(CONFIG_AD7887) += ad7887.o +obj-$(CONFIG_AD7949) += ad7949.o obj-$(CONFIG_AD799X) += ad799x.o obj-$(CONFIG_ASPEED_ADC) += aspeed_adc.o obj-$(CONFIG_AT91_ADC) += at91_adc.o diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c new file mode 100644 index 000000000000..7d5e5311d8de --- /dev/null +++ b/drivers/iio/adc/ad7124.c @@ -0,0 +1,684 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * AD7124 SPI ADC driver + * + * Copyright 2018 Analog Devices Inc. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +/* AD7124 registers */ +#define AD7124_COMMS 0x00 +#define AD7124_STATUS 0x00 +#define AD7124_ADC_CONTROL 0x01 +#define AD7124_DATA 0x02 +#define AD7124_IO_CONTROL_1 0x03 +#define AD7124_IO_CONTROL_2 0x04 +#define AD7124_ID 0x05 +#define AD7124_ERROR 0x06 +#define AD7124_ERROR_EN 0x07 +#define AD7124_MCLK_COUNT 0x08 +#define AD7124_CHANNEL(x) (0x09 + (x)) +#define AD7124_CONFIG(x) (0x19 + (x)) +#define AD7124_FILTER(x) (0x21 + (x)) +#define AD7124_OFFSET(x) (0x29 + (x)) +#define AD7124_GAIN(x) (0x31 + (x)) + +/* AD7124_STATUS */ +#define AD7124_STATUS_POR_FLAG_MSK BIT(4) + +/* AD7124_ADC_CONTROL */ +#define AD7124_ADC_CTRL_PWR_MSK GENMASK(7, 6) +#define AD7124_ADC_CTRL_PWR(x) FIELD_PREP(AD7124_ADC_CTRL_PWR_MSK, x) +#define AD7124_ADC_CTRL_MODE_MSK GENMASK(5, 2) +#define AD7124_ADC_CTRL_MODE(x) FIELD_PREP(AD7124_ADC_CTRL_MODE_MSK, x) + +/* AD7124_CHANNEL_X */ +#define AD7124_CHANNEL_EN_MSK BIT(15) +#define AD7124_CHANNEL_EN(x) FIELD_PREP(AD7124_CHANNEL_EN_MSK, x) +#define AD7124_CHANNEL_SETUP_MSK GENMASK(14, 12) +#define AD7124_CHANNEL_SETUP(x) FIELD_PREP(AD7124_CHANNEL_SETUP_MSK, x) +#define AD7124_CHANNEL_AINP_MSK GENMASK(9, 5) +#define AD7124_CHANNEL_AINP(x) FIELD_PREP(AD7124_CHANNEL_AINP_MSK, x) +#define AD7124_CHANNEL_AINM_MSK GENMASK(4, 0) +#define AD7124_CHANNEL_AINM(x) FIELD_PREP(AD7124_CHANNEL_AINM_MSK, x) + +/* AD7124_CONFIG_X */ +#define AD7124_CONFIG_BIPOLAR_MSK BIT(11) +#define AD7124_CONFIG_BIPOLAR(x) FIELD_PREP(AD7124_CONFIG_BIPOLAR_MSK, x) +#define AD7124_CONFIG_REF_SEL_MSK GENMASK(4, 3) +#define AD7124_CONFIG_REF_SEL(x) FIELD_PREP(AD7124_CONFIG_REF_SEL_MSK, x) +#define AD7124_CONFIG_PGA_MSK GENMASK(2, 0) +#define AD7124_CONFIG_PGA(x) FIELD_PREP(AD7124_CONFIG_PGA_MSK, x) + +/* AD7124_FILTER_X */ +#define AD7124_FILTER_FS_MSK GENMASK(10, 0) +#define AD7124_FILTER_FS(x) FIELD_PREP(AD7124_FILTER_FS_MSK, x) + +enum ad7124_ids { + ID_AD7124_4, + ID_AD7124_8, +}; + +enum ad7124_ref_sel { + AD7124_REFIN1, + AD7124_REFIN2, + AD7124_INT_REF, + AD7124_AVDD_REF, +}; + +enum ad7124_power_mode { + AD7124_LOW_POWER, + AD7124_MID_POWER, + AD7124_FULL_POWER, +}; + +static const unsigned int ad7124_gain[8] = { + 1, 2, 4, 8, 16, 32, 64, 128 +}; + +static const int ad7124_master_clk_freq_hz[3] = { + [AD7124_LOW_POWER] = 76800, + [AD7124_MID_POWER] = 153600, + [AD7124_FULL_POWER] = 614400, +}; + +static const char * const ad7124_ref_names[] = { + [AD7124_REFIN1] = "refin1", + [AD7124_REFIN2] = "refin2", + [AD7124_INT_REF] = "int", + [AD7124_AVDD_REF] = "avdd", +}; + +struct ad7124_chip_info { + unsigned int num_inputs; +}; + +struct ad7124_channel_config { + enum ad7124_ref_sel refsel; + bool bipolar; + unsigned int ain; + unsigned int vref_mv; + unsigned int pga_bits; + unsigned int odr; +}; + +struct ad7124_state { + const struct ad7124_chip_info *chip_info; + struct ad_sigma_delta sd; + struct ad7124_channel_config channel_config[4]; + struct regulator *vref[4]; + struct clk *mclk; + unsigned int adc_control; + unsigned int num_channels; +}; + +static const struct iio_chan_spec ad7124_channel_template = { + .type = IIO_VOLTAGE, + .indexed = 1, + .differential = 1, + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | + BIT(IIO_CHAN_INFO_SCALE) | + BIT(IIO_CHAN_INFO_OFFSET) | + BIT(IIO_CHAN_INFO_SAMP_FREQ), + .scan_type = { + .sign = 'u', + .realbits = 24, + .storagebits = 32, + .shift = 8, + .endianness = IIO_BE, + }, +}; + +static struct ad7124_chip_info 
ad7124_chip_info_tbl[] = { + [ID_AD7124_4] = { + .num_inputs = 8, + }, + [ID_AD7124_8] = { + .num_inputs = 16, + }, +}; + +static int ad7124_find_closest_match(const int *array, + unsigned int size, int val) +{ + int i, idx; + unsigned int diff_new, diff_old; + + diff_old = U32_MAX; + idx = 0; + + for (i = 0; i < size; i++) { + diff_new = abs(val - array[i]); + if (diff_new < diff_old) { + diff_old = diff_new; + idx = i; + } + } + + return idx; +} + +static int ad7124_spi_write_mask(struct ad7124_state *st, + unsigned int addr, + unsigned long mask, + unsigned int val, + unsigned int bytes) +{ + unsigned int readval; + int ret; + + ret = ad_sd_read_reg(&st->sd, addr, bytes, &readval); + if (ret < 0) + return ret; + + readval &= ~mask; + readval |= val; + + return ad_sd_write_reg(&st->sd, addr, bytes, readval); +} + +static int ad7124_set_mode(struct ad_sigma_delta *sd, + enum ad_sigma_delta_mode mode) +{ + struct ad7124_state *st = container_of(sd, struct ad7124_state, sd); + + st->adc_control &= ~AD7124_ADC_CTRL_MODE_MSK; + st->adc_control |= AD7124_ADC_CTRL_MODE(mode); + + return ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control); +} + +static int ad7124_set_channel(struct ad_sigma_delta *sd, unsigned int channel) +{ + struct ad7124_state *st = container_of(sd, struct ad7124_state, sd); + unsigned int val; + + val = st->channel_config[channel].ain | AD7124_CHANNEL_EN(1) | + AD7124_CHANNEL_SETUP(channel); + + return ad_sd_write_reg(&st->sd, AD7124_CHANNEL(channel), 2, val); +} + +static const struct ad_sigma_delta_info ad7124_sigma_delta_info = { + .set_channel = ad7124_set_channel, + .set_mode = ad7124_set_mode, + .has_registers = true, + .addr_shift = 0, + .read_mask = BIT(6), + .data_reg = AD7124_DATA, +}; + +static int ad7124_set_channel_odr(struct ad7124_state *st, + unsigned int channel, + unsigned int odr) +{ + unsigned int fclk, odr_sel_bits; + int ret; + + fclk = clk_get_rate(st->mclk); + /* + * FS[10:0] = fCLK / (fADC x 32) where: + * fADC is the output data rate + * fCLK is the master clock frequency + * FS[10:0] are the bits in the filter register + * FS[10:0] can have a value from 1 to 2047 + */ + odr_sel_bits = DIV_ROUND_CLOSEST(fclk, odr * 32); + if (odr_sel_bits < 1) + odr_sel_bits = 1; + else if (odr_sel_bits > 2047) + odr_sel_bits = 2047; + + ret = ad7124_spi_write_mask(st, AD7124_FILTER(channel), + AD7124_FILTER_FS_MSK, + AD7124_FILTER_FS(odr_sel_bits), 3); + if (ret < 0) + return ret; + /* fADC = fCLK / (FS[10:0] x 32) */ + st->channel_config[channel].odr = + DIV_ROUND_CLOSEST(fclk, odr_sel_bits * 32); + + return 0; +} + +static int ad7124_set_channel_gain(struct ad7124_state *st, + unsigned int channel, + unsigned int gain) +{ + unsigned int res; + int ret; + + res = ad7124_find_closest_match(ad7124_gain, + ARRAY_SIZE(ad7124_gain), gain); + ret = ad7124_spi_write_mask(st, AD7124_CONFIG(channel), + AD7124_CONFIG_PGA_MSK, + AD7124_CONFIG_PGA(res), 2); + if (ret < 0) + return ret; + + st->channel_config[channel].pga_bits = res; + + return 0; +} + +static int ad7124_read_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int *val, int *val2, long info) +{ + struct ad7124_state *st = iio_priv(indio_dev); + int idx, ret; + + switch (info) { + case IIO_CHAN_INFO_RAW: + ret = ad_sigma_delta_single_conversion(indio_dev, chan, val); + if (ret < 0) + return ret; + + /* After the conversion is performed, disable the channel */ + ret = ad_sd_write_reg(&st->sd, + AD7124_CHANNEL(chan->address), 2, + st->channel_config[chan->address].ain | + 
AD7124_CHANNEL_EN(0)); + if (ret < 0) + return ret; + + return IIO_VAL_INT; + case IIO_CHAN_INFO_SCALE: + idx = st->channel_config[chan->address].pga_bits; + *val = st->channel_config[chan->address].vref_mv; + if (st->channel_config[chan->address].bipolar) + *val2 = chan->scan_type.realbits - 1 + idx; + else + *val2 = chan->scan_type.realbits + idx; + + return IIO_VAL_FRACTIONAL_LOG2; + case IIO_CHAN_INFO_OFFSET: + if (st->channel_config[chan->address].bipolar) + *val = -(1 << (chan->scan_type.realbits - 1)); + else + *val = 0; + + return IIO_VAL_INT; + case IIO_CHAN_INFO_SAMP_FREQ: + *val = st->channel_config[chan->address].odr; + + return IIO_VAL_INT; + default: + return -EINVAL; + } +} + +static int ad7124_write_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int val, int val2, long info) +{ + struct ad7124_state *st = iio_priv(indio_dev); + unsigned int res, gain, full_scale, vref; + + switch (info) { + case IIO_CHAN_INFO_SAMP_FREQ: + if (val2 != 0) + return -EINVAL; + + return ad7124_set_channel_odr(st, chan->address, val); + case IIO_CHAN_INFO_SCALE: + if (val != 0) + return -EINVAL; + + if (st->channel_config[chan->address].bipolar) + full_scale = 1 << (chan->scan_type.realbits - 1); + else + full_scale = 1 << chan->scan_type.realbits; + + vref = st->channel_config[chan->address].vref_mv * 1000000LL; + res = DIV_ROUND_CLOSEST(vref, full_scale); + gain = DIV_ROUND_CLOSEST(res, val2); + + return ad7124_set_channel_gain(st, chan->address, gain); + default: + return -EINVAL; + } +} + +static IIO_CONST_ATTR(in_voltage_scale_available, + "0.000001164 0.000002328 0.000004656 0.000009313 0.000018626 0.000037252 0.000074505 0.000149011 0.000298023"); + +static struct attribute *ad7124_attributes[] = { + &iio_const_attr_in_voltage_scale_available.dev_attr.attr, + NULL, +}; + +static const struct attribute_group ad7124_attrs_group = { + .attrs = ad7124_attributes, +}; + +static const struct iio_info ad7124_info = { + .read_raw = ad7124_read_raw, + .write_raw = ad7124_write_raw, + .validate_trigger = ad_sd_validate_trigger, + .attrs = &ad7124_attrs_group, +}; + +static int ad7124_soft_reset(struct ad7124_state *st) +{ + unsigned int readval, timeout; + int ret; + + ret = ad_sd_reset(&st->sd, 64); + if (ret < 0) + return ret; + + timeout = 100; + do { + ret = ad_sd_read_reg(&st->sd, AD7124_STATUS, 1, &readval); + if (ret < 0) + return ret; + + if (!(readval & AD7124_STATUS_POR_FLAG_MSK)) + return 0; + + /* The AD7124 requires typically 2ms to power up and settle */ + usleep_range(100, 2000); + } while (--timeout); + + dev_err(&st->sd.spi->dev, "Soft reset failed\n"); + + return -EIO; +} + +static int ad7124_init_channel_vref(struct ad7124_state *st, + unsigned int channel_number) +{ + unsigned int refsel = st->channel_config[channel_number].refsel; + + switch (refsel) { + case AD7124_REFIN1: + case AD7124_REFIN2: + case AD7124_AVDD_REF: + if (IS_ERR(st->vref[refsel])) { + dev_err(&st->sd.spi->dev, + "Error, trying to use external voltage reference without a %s regulator.\n", + ad7124_ref_names[refsel]); + return PTR_ERR(st->vref[refsel]); + } + st->channel_config[channel_number].vref_mv = + regulator_get_voltage(st->vref[refsel]); + /* Conversion from uV to mV */ + st->channel_config[channel_number].vref_mv /= 1000; + break; + case AD7124_INT_REF: + st->channel_config[channel_number].vref_mv = 2500; + break; + default: + dev_err(&st->sd.spi->dev, "Invalid reference %d\n", refsel); + return -EINVAL; + } + + return 0; +} + +static int ad7124_of_parse_channel_config(struct 
iio_dev *indio_dev, + struct device_node *np) +{ + struct ad7124_state *st = iio_priv(indio_dev); + struct device_node *child; + struct iio_chan_spec *chan; + unsigned int ain[2], channel = 0, tmp; + int ret; + + st->num_channels = of_get_available_child_count(np); + if (!st->num_channels) { + dev_err(indio_dev->dev.parent, "no channel children\n"); + return -ENODEV; + } + + chan = devm_kcalloc(indio_dev->dev.parent, st->num_channels, + sizeof(*chan), GFP_KERNEL); + if (!chan) + return -ENOMEM; + + indio_dev->channels = chan; + indio_dev->num_channels = st->num_channels; + + for_each_available_child_of_node(np, child) { + ret = of_property_read_u32(child, "reg", &channel); + if (ret) + goto err; + + ret = of_property_read_u32_array(child, "diff-channels", + ain, 2); + if (ret) + goto err; + + if (ain[0] >= st->chip_info->num_inputs || + ain[1] >= st->chip_info->num_inputs) { + dev_err(indio_dev->dev.parent, + "Input pin number out of range.\n"); + ret = -EINVAL; + goto err; + } + st->channel_config[channel].ain = AD7124_CHANNEL_AINP(ain[0]) | + AD7124_CHANNEL_AINM(ain[1]); + st->channel_config[channel].bipolar = + of_property_read_bool(child, "bipolar"); + + ret = of_property_read_u32(child, "adi,reference-select", &tmp); + if (ret) + st->channel_config[channel].refsel = AD7124_INT_REF; + else + st->channel_config[channel].refsel = tmp; + + *chan = ad7124_channel_template; + chan->address = channel; + chan->scan_index = channel; + chan->channel = ain[0]; + chan->channel2 = ain[1]; + + chan++; + } + + return 0; +err: + of_node_put(child); + + return ret; +} + +static int ad7124_setup(struct ad7124_state *st) +{ + unsigned int val, fclk, power_mode; + int i, ret; + + fclk = clk_get_rate(st->mclk); + if (!fclk) + return -EINVAL; + + /* The power mode changes the master clock frequency */ + power_mode = ad7124_find_closest_match(ad7124_master_clk_freq_hz, + ARRAY_SIZE(ad7124_master_clk_freq_hz), + fclk); + if (fclk != ad7124_master_clk_freq_hz[power_mode]) { + ret = clk_set_rate(st->mclk, fclk); + if (ret) + return ret; + } + + /* Set the power mode */ + st->adc_control &= ~AD7124_ADC_CTRL_PWR_MSK; + st->adc_control |= AD7124_ADC_CTRL_PWR(power_mode); + ret = ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control); + if (ret < 0) + return ret; + + for (i = 0; i < st->num_channels; i++) { + val = st->channel_config[i].ain | AD7124_CHANNEL_SETUP(i); + ret = ad_sd_write_reg(&st->sd, AD7124_CHANNEL(i), 2, val); + if (ret < 0) + return ret; + + ret = ad7124_init_channel_vref(st, i); + if (ret < 0) + return ret; + + val = AD7124_CONFIG_BIPOLAR(st->channel_config[i].bipolar) | + AD7124_CONFIG_REF_SEL(st->channel_config[i].refsel); + ret = ad_sd_write_reg(&st->sd, AD7124_CONFIG(i), 2, val); + if (ret < 0) + return ret; + /* + * 9.38 SPS is the minimum output data rate supported + * regardless of the selected power mode. Round it up to 10 and + * set all the enabled channels to this default value. 
+		 */
+		ret = ad7124_set_channel_odr(st, i, 10);
+	}
+
+	return ret;
+}
+
+static int ad7124_probe(struct spi_device *spi)
+{
+	const struct spi_device_id *id;
+	struct ad7124_state *st;
+	struct iio_dev *indio_dev;
+	int i, ret;
+
+	indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st));
+	if (!indio_dev)
+		return -ENOMEM;
+
+	st = iio_priv(indio_dev);
+
+	id = spi_get_device_id(spi);
+	st->chip_info = &ad7124_chip_info_tbl[id->driver_data];
+
+	ad_sd_init(&st->sd, indio_dev, spi, &ad7124_sigma_delta_info);
+
+	spi_set_drvdata(spi, indio_dev);
+
+	indio_dev->dev.parent = &spi->dev;
+	indio_dev->name = spi_get_device_id(spi)->name;
+	indio_dev->modes = INDIO_DIRECT_MODE;
+	indio_dev->info = &ad7124_info;
+
+	ret = ad7124_of_parse_channel_config(indio_dev, spi->dev.of_node);
+	if (ret < 0)
+		return ret;
+
+	for (i = 0; i < ARRAY_SIZE(st->vref); i++) {
+		if (i == AD7124_INT_REF)
+			continue;
+
+		st->vref[i] = devm_regulator_get_optional(&spi->dev,
+						ad7124_ref_names[i]);
+		if (PTR_ERR(st->vref[i]) == -ENODEV)
+			continue;
+		else if (IS_ERR(st->vref[i]))
+			return PTR_ERR(st->vref[i]);
+
+		ret = regulator_enable(st->vref[i]);
+		if (ret)
+			return ret;
+	}
+
+	st->mclk = devm_clk_get(&spi->dev, "mclk");
+	if (IS_ERR(st->mclk)) {
+		ret = PTR_ERR(st->mclk);
+		goto error_regulator_disable;
+	}
+
+	ret = clk_prepare_enable(st->mclk);
+	if (ret < 0)
+		goto error_regulator_disable;
+
+	ret = ad7124_soft_reset(st);
+	if (ret < 0)
+		goto error_clk_disable_unprepare;
+
+	ret = ad7124_setup(st);
+	if (ret < 0)
+		goto error_clk_disable_unprepare;
+
+	ret = ad_sd_setup_buffer_and_trigger(indio_dev);
+	if (ret < 0)
+		goto error_clk_disable_unprepare;
+
+	ret = iio_device_register(indio_dev);
+	if (ret < 0) {
+		dev_err(&spi->dev, "Failed to register iio device\n");
+		goto error_remove_trigger;
+	}
+
+	return 0;
+
+error_remove_trigger:
+	ad_sd_cleanup_buffer_and_trigger(indio_dev);
+error_clk_disable_unprepare:
+	clk_disable_unprepare(st->mclk);
+error_regulator_disable:
+	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
+		if (!IS_ERR_OR_NULL(st->vref[i]))
+			regulator_disable(st->vref[i]);
+	}
+
+	return ret;
+}
+
+static int ad7124_remove(struct spi_device *spi)
+{
+	struct iio_dev *indio_dev = spi_get_drvdata(spi);
+	struct ad7124_state *st = iio_priv(indio_dev);
+	int i;
+
+	iio_device_unregister(indio_dev);
+	ad_sd_cleanup_buffer_and_trigger(indio_dev);
+	clk_disable_unprepare(st->mclk);
+
+	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
+		if (!IS_ERR_OR_NULL(st->vref[i]))
+			regulator_disable(st->vref[i]);
+	}
+
+	return 0;
+}
+
+static const struct spi_device_id ad7124_id_table[] = {
+	{ "ad7124-4", ID_AD7124_4 },
+	{ "ad7124-8", ID_AD7124_8 },
+	{}
+};
+MODULE_DEVICE_TABLE(spi, ad7124_id_table);
+
+static const struct of_device_id ad7124_of_match[] = {
+	{ .compatible = "adi,ad7124-4" },
+	{ .compatible = "adi,ad7124-8" },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, ad7124_of_match);
+
+static struct spi_driver ad7124_driver = {
+	.driver = {
+		.name = "ad7124",
+		.of_match_table = ad7124_of_match,
+	},
+	.probe = ad7124_probe,
+	.remove = ad7124_remove,
+	.id_table = ad7124_id_table,
+};
+module_spi_driver(ad7124_driver);
+
+MODULE_AUTHOR("Stefan Popa ");
+MODULE_DESCRIPTION("Analog Devices AD7124 SPI driver");
+MODULE_LICENSE("GPL");
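Worked example (editorial sketch, not part of the patch): the ODR and scale maths used by ad7124_set_channel_odr() and ad7124_read_raw() above, as standalone user-space C. The 614400 Hz full-power master clock and the 2.5 V internal reference are assumptions taken from the datasheet.

#include <stdio.h>

int main(void)
{
	unsigned int fclk = 614400;	/* assumed full-power master clock, Hz */
	unsigned int odr = 50;		/* requested output data rate, SPS */

	/* FS[10:0] = fCLK / (fADC * 32), rounded, clamped to 1..2047 */
	unsigned int fs = (fclk + odr * 16) / (odr * 32);
	if (fs < 1)
		fs = 1;
	if (fs > 2047)
		fs = 2047;
	printf("FS word %u -> actual ODR %u SPS\n", fs, fclk / (fs * 32));

	/* Bipolar scale: vref_mv / 2^(realbits - 1 + pga_bits), in mV/LSB */
	unsigned int vref_mv = 2500, realbits = 24, pga_bits = 0;
	printf("scale: %.9f mV/LSB\n",
	       (double)vref_mv / (double)(1UL << (realbits - 1 + pga_bits)));
	return 0;
}

With these inputs the sketch prints FS word 384 and exactly 50 SPS, and a scale of 0.000298023, matching the largest entry in the in_voltage_scale_available list above.

diff --git a/drivers/iio/adc/ad7949.c b/drivers/iio/adc/ad7949.c
new file mode 100644
index 000000000000..ac0ffff6c5ae
--- /dev/null
+++ b/drivers/iio/adc/ad7949.c
@@ -0,0 +1,347 @@
+// SPDX-License-Identifier: GPL-2.0
+/* ad7949.c - Analog Devices ADC driver 14/16 bits 4/8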
channels
+ *
+ * Copyright (C) 2018 CMC NV
+ *
+ * http://www.analog.com/media/en/technical-documentation/data-sheets/AD7949.pdf
+ */
+
+#include <linux/delay.h>
+#include <linux/iio/iio.h>
+#include <linux/module.h>
+#include <linux/regulator/consumer.h>
+#include <linux/spi/spi.h>
+
+#define AD7949_MASK_CHANNEL_SEL		GENMASK(9, 7)
+#define AD7949_MASK_TOTAL		GENMASK(13, 0)
+#define AD7949_OFFSET_CHANNEL_SEL	7
+#define AD7949_CFG_READ_BACK		0x1
+#define AD7949_CFG_REG_SIZE_BITS	14
+
+enum {
+	ID_AD7949 = 0,
+	ID_AD7682,
+	ID_AD7689,
+};
+
+struct ad7949_adc_spec {
+	u8 num_channels;
+	u8 resolution;
+};
+
+static const struct ad7949_adc_spec ad7949_adc_spec[] = {
+	[ID_AD7949] = { .num_channels = 8, .resolution = 14 },
+	[ID_AD7682] = { .num_channels = 4, .resolution = 16 },
+	[ID_AD7689] = { .num_channels = 8, .resolution = 16 },
+};
+
+/**
+ * struct ad7949_adc_chip - AD ADC chip
+ * @lock: protects write sequences
+ * @vref: regulator generating Vref
+ * @indio_dev: reference to iio structure
+ * @spi: reference to spi structure
+ * @resolution: resolution of the chip
+ * @cfg: copy of the configuration register
+ * @current_channel: current channel in use
+ * @buffer: buffer to send / receive data to / from device
+ */
+struct ad7949_adc_chip {
+	struct mutex lock;
+	struct regulator *vref;
+	struct iio_dev *indio_dev;
+	struct spi_device *spi;
+	u8 resolution;
+	u16 cfg;
+	unsigned int current_channel;
+	u32 buffer ____cacheline_aligned;
+};
+
+static bool ad7949_spi_cfg_is_read_back(struct ad7949_adc_chip *ad7949_adc)
+{
+	if (!(ad7949_adc->cfg & AD7949_CFG_READ_BACK))
+		return true;
+
+	return false;
+}
+
+static int ad7949_spi_bits_per_word(struct ad7949_adc_chip *ad7949_adc)
+{
+	int ret = ad7949_adc->resolution;
+
+	if (ad7949_spi_cfg_is_read_back(ad7949_adc))
+		ret += AD7949_CFG_REG_SIZE_BITS;
+
+	return ret;
+}
+
+static int ad7949_spi_write_cfg(struct ad7949_adc_chip *ad7949_adc, u16 val,
+				u16 mask)
+{
+	int ret;
+	int bits_per_word = ad7949_spi_bits_per_word(ad7949_adc);
+	int shift = bits_per_word - AD7949_CFG_REG_SIZE_BITS;
+	struct spi_message msg;
+	struct spi_transfer tx[] = {
+		{
+			.tx_buf = &ad7949_adc->buffer,
+			.len = 4,
+			.bits_per_word = bits_per_word,
+		},
+	};
+
+	ad7949_adc->cfg = (val & mask) | (ad7949_adc->cfg & ~mask);
+	ad7949_adc->buffer = ad7949_adc->cfg << shift;
+	spi_message_init_with_transfers(&msg, tx, 1);
+	ret = spi_sync(ad7949_adc->spi, &msg);
+
+	/*
+	 * This delay is to avoid a new request before the required time to
+	 * send a new command to the device
+	 */
+	udelay(2);
+	return ret;
+}
+
+static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val,
+				   unsigned int channel)
+{
+	int ret;
+	int bits_per_word = ad7949_spi_bits_per_word(ad7949_adc);
+	int mask = GENMASK(ad7949_adc->resolution, 0);
+	struct spi_message msg;
+	struct spi_transfer tx[] = {
+		{
+			.rx_buf = &ad7949_adc->buffer,
+			.len = 4,
+			.bits_per_word = bits_per_word,
+		},
+	};
+
+	ret = ad7949_spi_write_cfg(ad7949_adc,
+				   channel << AD7949_OFFSET_CHANNEL_SEL,
+				   AD7949_MASK_CHANNEL_SEL);
+	if (ret)
+		return ret;
+
+	ad7949_adc->buffer = 0;
+	spi_message_init_with_transfers(&msg, tx, 1);
+	ret = spi_sync(ad7949_adc->spi, &msg);
+	if (ret)
+		return ret;
+
+	/*
+	 * This delay is to avoid a new request before the required time to
+	 * send a new command to the device
+	 */
+	udelay(2);
+
+	ad7949_adc->current_channel = channel;
+
+	if (ad7949_spi_cfg_is_read_back(ad7949_adc))
+		*val = (ad7949_adc->buffer >> AD7949_CFG_REG_SIZE_BITS) & mask;
+	else
+		*val = ad7949_adc->buffer & mask;
+
+	return 0;
+}
+
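For orientation, a standalone sketch (not part of the patch) of the SPI framing the two helpers above implement: with configuration read-back enabled, the word carries the conversion result in its top bits and the echoed 14-bit configuration below it. Values are illustrative.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int resolution = 14;	/* AD7949; 16 for AD7682/AD7689 */
	unsigned int cfg_bits = 14;	/* AD7949_CFG_REG_SIZE_BITS */
	uint32_t buffer = 0x2AAA955;	/* example raw word from spi_sync() */

	/* bits_per_word = resolution + cfg_bits when read-back is enabled */
	printf("bits_per_word = %u\n", resolution + cfg_bits);

	/* sample in the top bits, echoed configuration in the bottom */
	printf("sample = 0x%x, cfg = 0x%x\n",
	       (buffer >> cfg_bits) & ((1u << resolution) - 1),
	       buffer & ((1u << cfg_bits) - 1));
	return 0;
}

+#define AD7949_ADC_CHANNEL(chan) { \
+	.type = IIO_VOLTAGE, \
+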
.indexed = 1, \ + .channel = (chan), \ + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \ +} + +static const struct iio_chan_spec ad7949_adc_channels[] = { + AD7949_ADC_CHANNEL(0), + AD7949_ADC_CHANNEL(1), + AD7949_ADC_CHANNEL(2), + AD7949_ADC_CHANNEL(3), + AD7949_ADC_CHANNEL(4), + AD7949_ADC_CHANNEL(5), + AD7949_ADC_CHANNEL(6), + AD7949_ADC_CHANNEL(7), +}; + +static int ad7949_spi_read_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int *val, int *val2, long mask) +{ + struct ad7949_adc_chip *ad7949_adc = iio_priv(indio_dev); + int ret; + + if (!val) + return -EINVAL; + + switch (mask) { + case IIO_CHAN_INFO_RAW: + mutex_lock(&ad7949_adc->lock); + ret = ad7949_spi_read_channel(ad7949_adc, val, chan->channel); + mutex_unlock(&ad7949_adc->lock); + + if (ret < 0) + return ret; + + return IIO_VAL_INT; + + case IIO_CHAN_INFO_SCALE: + ret = regulator_get_voltage(ad7949_adc->vref); + if (ret < 0) + return ret; + + *val = ret / 5000; + return IIO_VAL_INT; + } + + return -EINVAL; +} + +static int ad7949_spi_reg_access(struct iio_dev *indio_dev, + unsigned int reg, unsigned int writeval, + unsigned int *readval) +{ + struct ad7949_adc_chip *ad7949_adc = iio_priv(indio_dev); + int ret = 0; + + if (readval) + *readval = ad7949_adc->cfg; + else + ret = ad7949_spi_write_cfg(ad7949_adc, + writeval & AD7949_MASK_TOTAL, AD7949_MASK_TOTAL); + + return ret; +} + +static const struct iio_info ad7949_spi_info = { + .read_raw = ad7949_spi_read_raw, + .debugfs_reg_access = ad7949_spi_reg_access, +}; + +static int ad7949_spi_init(struct ad7949_adc_chip *ad7949_adc) +{ + int ret; + int val; + + /* Sequencer disabled, CFG readback disabled, IN0 as default channel */ + ad7949_adc->current_channel = 0; + ret = ad7949_spi_write_cfg(ad7949_adc, 0x3C79, AD7949_MASK_TOTAL); + + /* + * Do two dummy conversions to apply the first configuration setting. + * Required only after the start up of the device. 
+	 */
+	ad7949_spi_read_channel(ad7949_adc, &val, ad7949_adc->current_channel);
+	ad7949_spi_read_channel(ad7949_adc, &val, ad7949_adc->current_channel);
+
+	return ret;
+}
+
+static int ad7949_spi_probe(struct spi_device *spi)
+{
+	struct device *dev = &spi->dev;
+	const struct ad7949_adc_spec *spec;
+	struct ad7949_adc_chip *ad7949_adc;
+	struct iio_dev *indio_dev;
+	int ret;
+
+	indio_dev = devm_iio_device_alloc(dev, sizeof(*ad7949_adc));
+	if (!indio_dev) {
+		dev_err(dev, "cannot allocate iio device\n");
+		return -ENOMEM;
+	}
+
+	indio_dev->dev.parent = dev;
+	indio_dev->dev.of_node = dev->of_node;
+	indio_dev->info = &ad7949_spi_info;
+	indio_dev->name = spi_get_device_id(spi)->name;
+	indio_dev->modes = INDIO_DIRECT_MODE;
+	indio_dev->channels = ad7949_adc_channels;
+	spi_set_drvdata(spi, indio_dev);
+
+	ad7949_adc = iio_priv(indio_dev);
+	ad7949_adc->indio_dev = indio_dev;
+	ad7949_adc->spi = spi;
+
+	spec = &ad7949_adc_spec[spi_get_device_id(spi)->driver_data];
+	indio_dev->num_channels = spec->num_channels;
+	ad7949_adc->resolution = spec->resolution;
+
+	ad7949_adc->vref = devm_regulator_get(dev, "vref");
+	if (IS_ERR(ad7949_adc->vref)) {
+		dev_err(dev, "failed to request regulator\n");
+		return PTR_ERR(ad7949_adc->vref);
+	}
+
+	ret = regulator_enable(ad7949_adc->vref);
+	if (ret < 0) {
+		dev_err(dev, "failed to enable regulator\n");
+		return ret;
+	}
+
+	mutex_init(&ad7949_adc->lock);
+
+	ret = ad7949_spi_init(ad7949_adc);
+	if (ret) {
+		dev_err(dev, "unable to init this device: %d\n", ret);
+		goto err;
+	}
+
+	ret = iio_device_register(indio_dev);
+	if (ret) {
+		dev_err(dev, "failed to register iio device: %d\n", ret);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	mutex_destroy(&ad7949_adc->lock);
+	regulator_disable(ad7949_adc->vref);
+
+	return ret;
+}
+
+static int ad7949_spi_remove(struct spi_device *spi)
+{
+	struct iio_dev *indio_dev = spi_get_drvdata(spi);
+	struct ad7949_adc_chip *ad7949_adc = iio_priv(indio_dev);
+
+	iio_device_unregister(indio_dev);
+	mutex_destroy(&ad7949_adc->lock);
+	regulator_disable(ad7949_adc->vref);
+
+	return 0;
+}
+
+static const struct of_device_id ad7949_spi_of_id[] = {
+	{ .compatible = "adi,ad7949" },
+	{ .compatible = "adi,ad7682" },
+	{ .compatible = "adi,ad7689" },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, ad7949_spi_of_id);
+
+static const struct spi_device_id ad7949_spi_id[] = {
+	{ "ad7949", ID_AD7949 },
+	{ "ad7682", ID_AD7682 },
+	{ "ad7689", ID_AD7689 },
+	{ }
+};
+MODULE_DEVICE_TABLE(spi, ad7949_spi_id);
+
+static struct spi_driver ad7949_spi_driver = {
+	.driver = {
+		.name = "ad7949",
+		.of_match_table = ad7949_spi_of_id,
+	},
+	.probe = ad7949_spi_probe,
+	.remove = ad7949_spi_remove,
+	.id_table = ad7949_spi_id,
+};
+module_spi_driver(ad7949_spi_driver);
+
+MODULE_AUTHOR("Charles-Antoine Couret ");
+MODULE_DESCRIPTION("Analog Devices 14/16-bit 8-channel ADC driver");
+MODULE_LICENSE("GPL v2");
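The ad_sigma_delta.c change that follows lets a chip such as the AD7124 publish a non-default data register address; the patch open-codes the fallback at both call sites, which reduces to this one-liner (sketch only; the helper name is hypothetical and not part of the patch):

/* Sketch only: the fallback the hunks below implement at each call site */
static inline unsigned int ad_sd_data_reg(struct ad_sigma_delta *sd)
{
	/* prefer the chip-specific data register, else the legacy default */
	return sd->info->data_reg ? sd->info->data_reg : AD_SD_REG_DATA;
}

diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
index fc9510716ac7..ff5f2da2e1b1 100644
--- a/drivers/iio/adc/ad_sigma_delta.c
+++ b/drivers/iio/adc/ad_sigma_delta.c
@@ -278,6 +278,7 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
 {
 	struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev);
 	unsigned int sample, raw_sample;
+	unsigned int data_reg;
 	int ret = 0;
 
 	if (iio_buffer_enabled(indio_dev))
@@ -305,7 +306,12 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
 	if (ret < 0)
 		goto out;
 
-	ret = ad_sd_read_reg(sigma_delta, AD_SD_REG_DATA,
+	if (sigma_delta->info->data_reg !=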
0) + data_reg = sigma_delta->info->data_reg; + else + data_reg = AD_SD_REG_DATA; + + ret = ad_sd_read_reg(sigma_delta, data_reg, DIV_ROUND_UP(chan->scan_type.realbits + chan->scan_type.shift, 8), &raw_sample); @@ -392,6 +398,7 @@ static irqreturn_t ad_sd_trigger_handler(int irq, void *p) struct iio_dev *indio_dev = pf->indio_dev; struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev); unsigned int reg_size; + unsigned int data_reg; uint8_t data[16]; int ret; @@ -401,18 +408,23 @@ static irqreturn_t ad_sd_trigger_handler(int irq, void *p) indio_dev->channels[0].scan_type.shift; reg_size = DIV_ROUND_UP(reg_size, 8); + if (sigma_delta->info->data_reg != 0) + data_reg = sigma_delta->info->data_reg; + else + data_reg = AD_SD_REG_DATA; + switch (reg_size) { case 4: case 2: case 1: - ret = ad_sd_read_reg_raw(sigma_delta, AD_SD_REG_DATA, - reg_size, &data[0]); + ret = ad_sd_read_reg_raw(sigma_delta, data_reg, reg_size, + &data[0]); break; case 3: /* We store 24 bit samples in a 32 bit word. Keep the upper * byte set to zero. */ - ret = ad_sd_read_reg_raw(sigma_delta, AD_SD_REG_DATA, - reg_size, &data[1]); + ret = ad_sd_read_reg_raw(sigma_delta, data_reg, reg_size, + &data[1]); break; } diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c index f10443f92e4c..fa2d2b5767f3 100644 --- a/drivers/iio/adc/exynos_adc.c +++ b/drivers/iio/adc/exynos_adc.c @@ -115,6 +115,7 @@ #define MAX_ADC_V2_CHANNELS 10 #define MAX_ADC_V1_CHANNELS 8 #define MAX_EXYNOS3250_ADC_CHANNELS 2 +#define MAX_S5PV210_ADC_CHANNELS 10 /* Bit definitions common for ADC_V1 and ADC_V2 */ #define ADC_CON_EN_START (1u << 0) @@ -282,6 +283,16 @@ static const struct exynos_adc_data exynos_adc_v1_data = { .start_conv = exynos_adc_v1_start_conv, }; +static const struct exynos_adc_data exynos_adc_s5pv210_data = { + .num_channels = MAX_S5PV210_ADC_CHANNELS, + .mask = ADC_DATX_MASK, /* 12 bit ADC resolution */ + + .init_hw = exynos_adc_v1_init_hw, + .exit_hw = exynos_adc_v1_exit_hw, + .clear_irq = exynos_adc_v1_clear_irq, + .start_conv = exynos_adc_v1_start_conv, +}; + static void exynos_adc_s3c2416_start_conv(struct exynos_adc *info, unsigned long addr) { @@ -478,6 +489,9 @@ static const struct of_device_id exynos_adc_match[] = { }, { .compatible = "samsung,s3c6410-adc", .data = &exynos_adc_s3c64xx_data, + }, { + .compatible = "samsung,s5pv210-adc", + .data = &exynos_adc_s5pv210_data, }, { .compatible = "samsung,exynos-adc-v1", .data = &exynos_adc_v1_data, diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c index d1239624187d..bdd7cba6f6b0 100644 --- a/drivers/iio/adc/ina2xx-adc.c +++ b/drivers/iio/adc/ina2xx-adc.c @@ -250,6 +250,7 @@ static int ina2xx_read_raw(struct iio_dev *indio_dev, *val2 = chip->shunt_resistor_uohm; return IIO_VAL_FRACTIONAL; } + return -EINVAL; case IIO_CHAN_INFO_HARDWAREGAIN: switch (chan->address) { @@ -262,6 +263,7 @@ static int ina2xx_read_raw(struct iio_dev *indio_dev, *val = chip->range_vbus == 32 ? 
1 : 2; return IIO_VAL_INT; } + return -EINVAL; } return -EINVAL; diff --git a/drivers/iio/adc/max11100.c b/drivers/iio/adc/max11100.c index af59ab2e650c..3440539cfdba 100644 --- a/drivers/iio/adc/max11100.c +++ b/drivers/iio/adc/max11100.c @@ -1,13 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 /* * iio/adc/max11100.c * Maxim max11100 ADC Driver with IIO interface * * Copyright (C) 2016-17 Renesas Electronics Corporation * Copyright (C) 2016-17 Jacopo Mondi - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ #include #include diff --git a/drivers/iio/adc/max9611.c b/drivers/iio/adc/max9611.c index 643a4e66eb80..917223d5ff5b 100644 --- a/drivers/iio/adc/max9611.c +++ b/drivers/iio/adc/max9611.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * iio/adc/max9611.c * @@ -5,10 +6,6 @@ * 12-bit ADC interface. * * Copyright (C) 2017 Jacopo Mondi - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. */ /* diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c index 028ccd218f82..729becb2d3d9 100644 --- a/drivers/iio/adc/meson_saradc.c +++ b/drivers/iio/adc/meson_saradc.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -165,6 +166,14 @@ #define MESON_SAR_ADC_MAX_FIFO_SIZE 32 #define MESON_SAR_ADC_TIMEOUT 100 /* ms */ +#define MESON_SAR_ADC_VOLTAGE_AND_TEMP_CHANNEL 6 +#define MESON_SAR_ADC_TEMP_OFFSET 27 + +/* temperature sensor calibration information in eFuse */ +#define MESON_SAR_ADC_EFUSE_BYTES 4 +#define MESON_SAR_ADC_EFUSE_BYTE3_UPPER_ADC_VAL GENMASK(6, 0) +#define MESON_SAR_ADC_EFUSE_BYTE3_IS_CALIBRATED BIT(7) + /* for use with IIO_VAL_INT_PLUS_MICRO */ #define MILLION 1000000 @@ -175,16 +184,25 @@ .address = _chan, \ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \ BIT(IIO_CHAN_INFO_AVERAGE_RAW), \ - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \ - BIT(IIO_CHAN_INFO_CALIBBIAS) | \ + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \ + .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_CALIBBIAS) | \ BIT(IIO_CHAN_INFO_CALIBSCALE), \ .datasheet_name = "SAR_ADC_CH"#_chan, \ } -/* - * TODO: the hardware supports IIO_TEMP for channel 6 as well which is - * currently not supported by this driver. 
- */ +#define MESON_SAR_ADC_TEMP_CHAN(_chan) { \ + .type = IIO_TEMP, \ + .channel = _chan, \ + .address = MESON_SAR_ADC_VOLTAGE_AND_TEMP_CHANNEL, \ + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \ + BIT(IIO_CHAN_INFO_AVERAGE_RAW), \ + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) | \ + BIT(IIO_CHAN_INFO_SCALE), \ + .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_CALIBBIAS) | \ + BIT(IIO_CHAN_INFO_CALIBSCALE), \ + .datasheet_name = "TEMP_SENSOR", \ +} + static const struct iio_chan_spec meson_sar_adc_iio_channels[] = { MESON_SAR_ADC_CHAN(0), MESON_SAR_ADC_CHAN(1), @@ -197,6 +215,19 @@ static const struct iio_chan_spec meson_sar_adc_iio_channels[] = { IIO_CHAN_SOFT_TIMESTAMP(8), }; +static const struct iio_chan_spec meson_sar_adc_and_temp_iio_channels[] = { + MESON_SAR_ADC_CHAN(0), + MESON_SAR_ADC_CHAN(1), + MESON_SAR_ADC_CHAN(2), + MESON_SAR_ADC_CHAN(3), + MESON_SAR_ADC_CHAN(4), + MESON_SAR_ADC_CHAN(5), + MESON_SAR_ADC_CHAN(6), + MESON_SAR_ADC_CHAN(7), + MESON_SAR_ADC_TEMP_CHAN(8), + IIO_CHAN_SOFT_TIMESTAMP(9), +}; + enum meson_sar_adc_avg_mode { NO_AVERAGING = 0x0, MEAN_AVERAGING = 0x1, @@ -225,6 +256,9 @@ struct meson_sar_adc_param { u32 bandgap_reg; unsigned int resolution; const struct regmap_config *regmap_config; + u8 temperature_trimming_bits; + unsigned int temperature_multiplier; + unsigned int temperature_divider; }; struct meson_sar_adc_data { @@ -246,6 +280,9 @@ struct meson_sar_adc_priv { struct completion done; int calibbias; int calibscale; + bool temperature_sensor_calibrated; + u8 temperature_sensor_coefficient; + u16 temperature_sensor_adc_val; }; static const struct regmap_config meson_sar_adc_regmap_config_gxbb = { @@ -389,9 +426,16 @@ static void meson_sar_adc_enable_channel(struct iio_dev *indio_dev, MESON_SAR_ADC_DETECT_IDLE_SW_IDLE_MUX_SEL_MASK, regval); - if (chan->address == 6) - regmap_update_bits(priv->regmap, MESON_SAR_ADC_DELTA_10, - MESON_SAR_ADC_DELTA_10_TEMP_SEL, 0); + if (chan->address == MESON_SAR_ADC_VOLTAGE_AND_TEMP_CHANNEL) { + if (chan->type == IIO_TEMP) + regval = MESON_SAR_ADC_DELTA_10_TEMP_SEL; + else + regval = 0; + + regmap_update_bits(priv->regmap, + MESON_SAR_ADC_DELTA_10, + MESON_SAR_ADC_DELTA_10_TEMP_SEL, regval); + } } static void meson_sar_adc_set_chan7_mux(struct iio_dev *indio_dev, @@ -506,8 +550,12 @@ static int meson_sar_adc_get_sample(struct iio_dev *indio_dev, enum meson_sar_adc_num_samples avg_samples, int *val) { + struct meson_sar_adc_priv *priv = iio_priv(indio_dev); int ret; + if (chan->type == IIO_TEMP && !priv->temperature_sensor_calibrated) + return -ENOTSUPP; + ret = meson_sar_adc_lock(indio_dev); if (ret) return ret; @@ -555,17 +603,31 @@ static int meson_sar_adc_iio_info_read_raw(struct iio_dev *indio_dev, break; case IIO_CHAN_INFO_SCALE: - ret = regulator_get_voltage(priv->vref); - if (ret < 0) { - dev_err(indio_dev->dev.parent, - "failed to get vref voltage: %d\n", ret); - return ret; + if (chan->type == IIO_VOLTAGE) { + ret = regulator_get_voltage(priv->vref); + if (ret < 0) { + dev_err(indio_dev->dev.parent, + "failed to get vref voltage: %d\n", + ret); + return ret; + } + + *val = ret / 1000; + *val2 = priv->param->resolution; + return IIO_VAL_FRACTIONAL_LOG2; + } else if (chan->type == IIO_TEMP) { + /* SoC specific multiplier and divider */ + *val = priv->param->temperature_multiplier; + *val2 = priv->param->temperature_divider; + + /* celsius to millicelsius */ + *val *= 1000; + + return IIO_VAL_FRACTIONAL; + } else { + return -EINVAL; } - *val = ret / 1000; - *val2 = priv->param->resolution; - return 
IIO_VAL_FRACTIONAL_LOG2; - case IIO_CHAN_INFO_CALIBBIAS: *val = priv->calibbias; return IIO_VAL_INT; @@ -575,6 +637,13 @@ static int meson_sar_adc_iio_info_read_raw(struct iio_dev *indio_dev, *val2 = priv->calibscale % MILLION; return IIO_VAL_INT_PLUS_MICRO; + case IIO_CHAN_INFO_OFFSET: + *val = DIV_ROUND_CLOSEST(MESON_SAR_ADC_TEMP_OFFSET * + priv->param->temperature_divider, + priv->param->temperature_multiplier); + *val -= priv->temperature_sensor_adc_val; + return IIO_VAL_INT; + default: return -EINVAL; } @@ -587,8 +656,11 @@ static int meson_sar_adc_clk_init(struct iio_dev *indio_dev, struct clk_init_data init; const char *clk_parents[1]; - init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%pOF#adc_div", - indio_dev->dev.of_node); + init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%s#adc_div", + dev_name(indio_dev->dev.parent)); + if (!init.name) + return -ENOMEM; + init.flags = 0; init.ops = &clk_divider_ops; clk_parents[0] = __clk_get_name(priv->clkin); @@ -606,8 +678,11 @@ static int meson_sar_adc_clk_init(struct iio_dev *indio_dev, if (WARN_ON(IS_ERR(priv->adc_div_clk))) return PTR_ERR(priv->adc_div_clk); - init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%pOF#adc_en", - indio_dev->dev.of_node); + init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%s#adc_en", + dev_name(indio_dev->dev.parent)); + if (!init.name) + return -ENOMEM; + init.flags = CLK_SET_RATE_PARENT; init.ops = &clk_gate_ops; clk_parents[0] = __clk_get_name(priv->adc_div_clk); @@ -625,6 +700,65 @@ static int meson_sar_adc_clk_init(struct iio_dev *indio_dev, return 0; } +static int meson_sar_adc_temp_sensor_init(struct iio_dev *indio_dev) +{ + struct meson_sar_adc_priv *priv = iio_priv(indio_dev); + u8 *buf, trimming_bits, trimming_mask, upper_adc_val; + struct nvmem_cell *temperature_calib; + size_t read_len; + int ret; + + temperature_calib = devm_nvmem_cell_get(&indio_dev->dev, + "temperature_calib"); + if (IS_ERR(temperature_calib)) { + ret = PTR_ERR(temperature_calib); + + /* + * leave the temperature sensor disabled if no calibration data + * was passed via nvmem-cells. 
+		 */
+		if (ret == -ENODEV)
+			return 0;
+
+		if (ret != -EPROBE_DEFER)
+			dev_err(indio_dev->dev.parent,
+				"failed to get temperature_calib cell\n");
+
+		return ret;
+	}
+
+	read_len = MESON_SAR_ADC_EFUSE_BYTES;
+	buf = nvmem_cell_read(temperature_calib, &read_len);
+	if (IS_ERR(buf)) {
+		dev_err(indio_dev->dev.parent,
+			"failed to read temperature_calib cell\n");
+		return PTR_ERR(buf);
+	} else if (read_len != MESON_SAR_ADC_EFUSE_BYTES) {
+		kfree(buf);
+		dev_err(indio_dev->dev.parent,
+			"invalid read size of temperature_calib cell\n");
+		return -EINVAL;
+	}
+
+	trimming_bits = priv->param->temperature_trimming_bits;
+	trimming_mask = BIT(trimming_bits) - 1;
+
+	priv->temperature_sensor_calibrated =
+		buf[3] & MESON_SAR_ADC_EFUSE_BYTE3_IS_CALIBRATED;
+	priv->temperature_sensor_coefficient = buf[2] & trimming_mask;
+
+	upper_adc_val = FIELD_GET(MESON_SAR_ADC_EFUSE_BYTE3_UPPER_ADC_VAL,
+				  buf[3]);
+
+	priv->temperature_sensor_adc_val = buf[2];
+	priv->temperature_sensor_adc_val |= upper_adc_val << BITS_PER_BYTE;
+	priv->temperature_sensor_adc_val >>= trimming_bits;
+
+	kfree(buf);
+
+	return 0;
+}
+
 static int meson_sar_adc_init(struct iio_dev *indio_dev)
 {
 	struct meson_sar_adc_priv *priv = iio_priv(indio_dev);
@@ -649,10 +783,12 @@ static int meson_sar_adc_init(struct iio_dev *indio_dev)
 
 	meson_sar_adc_stop_sample_engine(indio_dev);
 
-	/* update the channel 6 MUX to select the temperature sensor */
+	/*
+	 * disable this bit as it seems to be relevant only for Meson6 (based
+	 * on the vendor driver), which we don't support at the moment.
+	 */
 	regmap_update_bits(priv->regmap, MESON_SAR_ADC_REG0,
-			   MESON_SAR_ADC_REG0_ADC_TEMP_SEN_SEL,
-			   MESON_SAR_ADC_REG0_ADC_TEMP_SEN_SEL);
+			   MESON_SAR_ADC_REG0_ADC_TEMP_SEN_SEL, 0);
 
 	/* disable all channels by default */
 	regmap_write(priv->regmap, MESON_SAR_ADC_CHAN_LIST, 0x0);
@@ -709,6 +845,29 @@ static int meson_sar_adc_init(struct iio_dev *indio_dev)
 	regval |= MESON_SAR_ADC_AUX_SW_XP_DRIVE_SW;
 	regmap_write(priv->regmap, MESON_SAR_ADC_AUX_SW, regval);
 
+	if (priv->temperature_sensor_calibrated) {
+		regmap_update_bits(priv->regmap, MESON_SAR_ADC_DELTA_10,
+				   MESON_SAR_ADC_DELTA_10_TS_REVE1,
+				   MESON_SAR_ADC_DELTA_10_TS_REVE1);
+		regmap_update_bits(priv->regmap, MESON_SAR_ADC_DELTA_10,
+				   MESON_SAR_ADC_DELTA_10_TS_REVE0,
+				   MESON_SAR_ADC_DELTA_10_TS_REVE0);
+
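/*
 * (Editorial aside, not part of the patch: with the Meson8 parameters
 * added below, user space converts a raw temperature sample per the
 * usual IIO convention, temp_mC = (raw + offset) * scale:
 *
 *   scale  = 1000 * temperature_multiplier / temperature_divider
 *          = 1000 * (18 * 10000) / (1024 * 10 * 85) ~= 206.8 m degC/LSB
 *   offset = DIV_ROUND_CLOSEST(27 * divider, multiplier) - trim
 *          = 131 - temperature_sensor_adc_val
 *
 * so a sample equal to the eFuse trim point reads back as roughly 27 degC,
 * consistent with MESON_SAR_ADC_TEMP_OFFSET.)
 */
+		/*
+		 * set bits [3:0] of the TSC (temperature sensor coefficient)
+		 * to get the correct values when reading the temperature.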
+ */ + regval = FIELD_PREP(MESON_SAR_ADC_DELTA_10_TS_C_MASK, + priv->temperature_sensor_coefficient); + regmap_update_bits(priv->regmap, MESON_SAR_ADC_DELTA_10, + MESON_SAR_ADC_DELTA_10_TS_C_MASK, regval); + } else { + regmap_update_bits(priv->regmap, MESON_SAR_ADC_DELTA_10, + MESON_SAR_ADC_DELTA_10_TS_REVE1, 0); + regmap_update_bits(priv->regmap, MESON_SAR_ADC_DELTA_10, + MESON_SAR_ADC_DELTA_10_TS_REVE0, 0); + } + ret = clk_set_parent(priv->adc_sel_clk, priv->clkin); if (ret) { dev_err(indio_dev->dev.parent, @@ -894,6 +1053,17 @@ static const struct meson_sar_adc_param meson_sar_adc_meson8_param = { .bandgap_reg = MESON_SAR_ADC_DELTA_10, .regmap_config = &meson_sar_adc_regmap_config_meson8, .resolution = 10, + .temperature_trimming_bits = 4, + .temperature_multiplier = 18 * 10000, + .temperature_divider = 1024 * 10 * 85, +}; + +static const struct meson_sar_adc_param meson_sar_adc_meson8b_param = { + .has_bl30_integration = false, + .clock_rate = 1150000, + .bandgap_reg = MESON_SAR_ADC_DELTA_10, + .regmap_config = &meson_sar_adc_regmap_config_meson8, + .resolution = 10, }; static const struct meson_sar_adc_param meson_sar_adc_gxbb_param = { @@ -918,12 +1088,12 @@ static const struct meson_sar_adc_data meson_sar_adc_meson8_data = { }; static const struct meson_sar_adc_data meson_sar_adc_meson8b_data = { - .param = &meson_sar_adc_meson8_param, + .param = &meson_sar_adc_meson8b_param, .name = "meson-meson8b-saradc", }; static const struct meson_sar_adc_data meson_sar_adc_meson8m2_data = { - .param = &meson_sar_adc_meson8_param, + .param = &meson_sar_adc_meson8b_param, .name = "meson-meson8m2-saradc", }; @@ -1009,9 +1179,6 @@ static int meson_sar_adc_probe(struct platform_device *pdev) indio_dev->modes = INDIO_DIRECT_MODE; indio_dev->info = &meson_sar_adc_iio_info; - indio_dev->channels = meson_sar_adc_iio_channels; - indio_dev->num_channels = ARRAY_SIZE(meson_sar_adc_iio_channels); - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); base = devm_ioremap_resource(&pdev->dev, res); if (IS_ERR(base)) @@ -1078,6 +1245,22 @@ static int meson_sar_adc_probe(struct platform_device *pdev) priv->calibscale = MILLION; + if (priv->param->temperature_trimming_bits) { + ret = meson_sar_adc_temp_sensor_init(indio_dev); + if (ret) + return ret; + } + + if (priv->temperature_sensor_calibrated) { + indio_dev->channels = meson_sar_adc_and_temp_iio_channels; + indio_dev->num_channels = + ARRAY_SIZE(meson_sar_adc_and_temp_iio_channels); + } else { + indio_dev->channels = meson_sar_adc_iio_channels; + indio_dev->num_channels = + ARRAY_SIZE(meson_sar_adc_iio_channels); + } + ret = meson_sar_adc_init(indio_dev); if (ret) goto err; diff --git a/drivers/iio/adc/qcom-spmi-adc5.c b/drivers/iio/adc/qcom-spmi-adc5.c index f9af6b082916..6a866cc187f7 100644 --- a/drivers/iio/adc/qcom-spmi-adc5.c +++ b/drivers/iio/adc/qcom-spmi-adc5.c @@ -423,6 +423,7 @@ struct adc5_channels { enum vadc_scale_fn_type scale_fn_type; }; +/* In these definitions, _pre refers to an index into adc5_prescale_ratios. 
*/ #define ADC5_CHAN(_dname, _type, _mask, _pre, _scale) \ { \ .datasheet_name = _dname, \ @@ -443,63 +444,63 @@ struct adc5_channels { _pre, _scale) \ static const struct adc5_channels adc5_chans_pmic[ADC5_MAX_CHANNEL] = { - [ADC5_REF_GND] = ADC5_CHAN_VOLT("ref_gnd", 1, + [ADC5_REF_GND] = ADC5_CHAN_VOLT("ref_gnd", 0, SCALE_HW_CALIB_DEFAULT) - [ADC5_1P25VREF] = ADC5_CHAN_VOLT("vref_1p25", 1, + [ADC5_1P25VREF] = ADC5_CHAN_VOLT("vref_1p25", 0, SCALE_HW_CALIB_DEFAULT) - [ADC5_VPH_PWR] = ADC5_CHAN_VOLT("vph_pwr", 3, + [ADC5_VPH_PWR] = ADC5_CHAN_VOLT("vph_pwr", 1, SCALE_HW_CALIB_DEFAULT) - [ADC5_VBAT_SNS] = ADC5_CHAN_VOLT("vbat_sns", 3, + [ADC5_VBAT_SNS] = ADC5_CHAN_VOLT("vbat_sns", 1, SCALE_HW_CALIB_DEFAULT) - [ADC5_DIE_TEMP] = ADC5_CHAN_TEMP("die_temp", 1, + [ADC5_DIE_TEMP] = ADC5_CHAN_TEMP("die_temp", 0, SCALE_HW_CALIB_PMIC_THERM) - [ADC5_USB_IN_I] = ADC5_CHAN_VOLT("usb_in_i_uv", 1, + [ADC5_USB_IN_I] = ADC5_CHAN_VOLT("usb_in_i_uv", 0, SCALE_HW_CALIB_DEFAULT) - [ADC5_USB_IN_V_16] = ADC5_CHAN_VOLT("usb_in_v_div_16", 16, + [ADC5_USB_IN_V_16] = ADC5_CHAN_VOLT("usb_in_v_div_16", 8, SCALE_HW_CALIB_DEFAULT) - [ADC5_CHG_TEMP] = ADC5_CHAN_TEMP("chg_temp", 1, + [ADC5_CHG_TEMP] = ADC5_CHAN_TEMP("chg_temp", 0, SCALE_HW_CALIB_PM5_CHG_TEMP) /* Charger prescales SBUx and MID_CHG to fit within 1.8V upper unit */ - [ADC5_SBUx] = ADC5_CHAN_VOLT("chg_sbux", 3, + [ADC5_SBUx] = ADC5_CHAN_VOLT("chg_sbux", 1, SCALE_HW_CALIB_DEFAULT) - [ADC5_MID_CHG_DIV6] = ADC5_CHAN_VOLT("chg_mid_chg", 6, + [ADC5_MID_CHG_DIV6] = ADC5_CHAN_VOLT("chg_mid_chg", 3, SCALE_HW_CALIB_DEFAULT) - [ADC5_XO_THERM_100K_PU] = ADC5_CHAN_TEMP("xo_therm", 1, + [ADC5_XO_THERM_100K_PU] = ADC5_CHAN_TEMP("xo_therm", 0, SCALE_HW_CALIB_XOTHERM) - [ADC5_AMUX_THM1_100K_PU] = ADC5_CHAN_TEMP("amux_thm1_100k_pu", 1, + [ADC5_AMUX_THM1_100K_PU] = ADC5_CHAN_TEMP("amux_thm1_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_AMUX_THM2_100K_PU] = ADC5_CHAN_TEMP("amux_thm2_100k_pu", 1, + [ADC5_AMUX_THM2_100K_PU] = ADC5_CHAN_TEMP("amux_thm2_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_AMUX_THM3_100K_PU] = ADC5_CHAN_TEMP("amux_thm3_100k_pu", 1, + [ADC5_AMUX_THM3_100K_PU] = ADC5_CHAN_TEMP("amux_thm3_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_AMUX_THM2] = ADC5_CHAN_TEMP("amux_thm2", 1, + [ADC5_AMUX_THM2] = ADC5_CHAN_TEMP("amux_thm2", 0, SCALE_HW_CALIB_PM5_SMB_TEMP) }; static const struct adc5_channels adc5_chans_rev2[ADC5_MAX_CHANNEL] = { - [ADC5_REF_GND] = ADC5_CHAN_VOLT("ref_gnd", 1, + [ADC5_REF_GND] = ADC5_CHAN_VOLT("ref_gnd", 0, SCALE_HW_CALIB_DEFAULT) - [ADC5_1P25VREF] = ADC5_CHAN_VOLT("vref_1p25", 1, + [ADC5_1P25VREF] = ADC5_CHAN_VOLT("vref_1p25", 0, SCALE_HW_CALIB_DEFAULT) - [ADC5_VPH_PWR] = ADC5_CHAN_VOLT("vph_pwr", 3, + [ADC5_VPH_PWR] = ADC5_CHAN_VOLT("vph_pwr", 1, SCALE_HW_CALIB_DEFAULT) - [ADC5_VBAT_SNS] = ADC5_CHAN_VOLT("vbat_sns", 3, + [ADC5_VBAT_SNS] = ADC5_CHAN_VOLT("vbat_sns", 1, SCALE_HW_CALIB_DEFAULT) - [ADC5_VCOIN] = ADC5_CHAN_VOLT("vcoin", 3, + [ADC5_VCOIN] = ADC5_CHAN_VOLT("vcoin", 1, SCALE_HW_CALIB_DEFAULT) - [ADC5_DIE_TEMP] = ADC5_CHAN_TEMP("die_temp", 1, + [ADC5_DIE_TEMP] = ADC5_CHAN_TEMP("die_temp", 0, SCALE_HW_CALIB_PMIC_THERM) - [ADC5_AMUX_THM1_100K_PU] = ADC5_CHAN_TEMP("amux_thm1_100k_pu", 1, + [ADC5_AMUX_THM1_100K_PU] = ADC5_CHAN_TEMP("amux_thm1_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_AMUX_THM2_100K_PU] = ADC5_CHAN_TEMP("amux_thm2_100k_pu", 1, + [ADC5_AMUX_THM2_100K_PU] = ADC5_CHAN_TEMP("amux_thm2_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_AMUX_THM3_100K_PU] = 
ADC5_CHAN_TEMP("amux_thm3_100k_pu", 1, + [ADC5_AMUX_THM3_100K_PU] = ADC5_CHAN_TEMP("amux_thm3_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_AMUX_THM4_100K_PU] = ADC5_CHAN_TEMP("amux_thm4_100k_pu", 1, + [ADC5_AMUX_THM4_100K_PU] = ADC5_CHAN_TEMP("amux_thm4_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_AMUX_THM5_100K_PU] = ADC5_CHAN_TEMP("amux_thm5_100k_pu", 1, + [ADC5_AMUX_THM5_100K_PU] = ADC5_CHAN_TEMP("amux_thm5_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) - [ADC5_XO_THERM_100K_PU] = ADC5_CHAN_TEMP("xo_therm_100k_pu", 1, + [ADC5_XO_THERM_100K_PU] = ADC5_CHAN_TEMP("xo_therm_100k_pu", 0, SCALE_HW_CALIB_THERM_100K_PULLUP) }; @@ -558,6 +559,9 @@ static int adc5_get_dt_channel_data(struct adc5_chip *adc, return ret; } prop->prescale = ret; + } else { + prop->prescale = + adc->data->adc_chans[prop->channel].prescale_index; } ret = of_property_read_u32(node, "qcom,hw-settle-time", &value); diff --git a/drivers/iio/adc/rcar-gyroadc.c b/drivers/iio/adc/rcar-gyroadc.c index 4e982b51bcda..2c0d0316d149 100644 --- a/drivers/iio/adc/rcar-gyroadc.c +++ b/drivers/iio/adc/rcar-gyroadc.c @@ -1,17 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0+ /* * Renesas R-Car GyroADC driver * * Copyright 2016 Marek Vasut - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #include diff --git a/drivers/iio/adc/sc27xx_adc.c b/drivers/iio/adc/sc27xx_adc.c index 7940b23dcad9..f7f7a18904b4 100644 --- a/drivers/iio/adc/sc27xx_adc.c +++ b/drivers/iio/adc/sc27xx_adc.c @@ -52,6 +52,9 @@ /* Timeout (ms) for the trylock of hardware spinlocks */ #define SC27XX_ADC_HWLOCK_TIMEOUT 5000 +/* Timeout (ms) for ADC data conversion according to ADC datasheet */ +#define SC27XX_ADC_RDY_TIMEOUT 100 + /* Maximum ADC channel number */ #define SC27XX_ADC_CHANNEL_MAX 32 @@ -223,7 +226,14 @@ static int sc27xx_adc_read(struct sc27xx_adc_data *data, int channel, if (ret) goto disable_adc; - wait_for_completion(&data->completion); + ret = wait_for_completion_timeout(&data->completion, + msecs_to_jiffies(SC27XX_ADC_RDY_TIMEOUT)); + if (!ret) { + dev_err(data->dev, "read ADC data timeout\n"); + ret = -ETIMEDOUT; + } else { + ret = 0; + } disable_adc: regmap_update_bits(data->regmap, data->base + SC27XX_ADC_CTL, diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c index ca432e7b6ff1..2327ec18b40c 100644 --- a/drivers/iio/adc/stm32-adc-core.c +++ b/drivers/iio/adc/stm32-adc-core.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -48,15 +49,19 @@ #define STM32H7_CKMODE_SHIFT 16 #define STM32H7_CKMODE_MASK GENMASK(17, 16) +#define STM32_ADC_CORE_SLEEP_DELAY_MS 2000 + /** * stm32_adc_common_regs - stm32 common registers, compatible dependent data * @csr: common status register offset + * @ccr: common control register offset * @eoc1: adc1 end of conversion flag in @csr * @eoc2: adc2 end of conversion flag in @csr * @eoc3: adc3 end of conversion flag in @csr */ struct stm32_adc_common_regs { u32 csr; + u32 ccr; u32 eoc1_msk; u32 eoc2_msk; u32 eoc3_msk; @@ -85,6 +90,7 @@ struct stm32_adc_priv_cfg { * @vref: regulator reference * 
@cfg: compatible configuration data * @common: common data for all ADC instances + * @ccr_bak: backup CCR in low power mode */ struct stm32_adc_priv { int irq[STM32_ADC_MAX_ADCS]; @@ -94,6 +100,7 @@ struct stm32_adc_priv { struct regulator *vref; const struct stm32_adc_priv_cfg *cfg; struct stm32_adc_common common; + u32 ccr_bak; }; static struct stm32_adc_priv *to_stm32_adc_priv(struct stm32_adc_common *com) @@ -265,6 +272,7 @@ out: /* STM32F4 common registers definitions */ static const struct stm32_adc_common_regs stm32f4_adc_common_regs = { .csr = STM32F4_ADC_CSR, + .ccr = STM32F4_ADC_CCR, .eoc1_msk = STM32F4_EOC1, .eoc2_msk = STM32F4_EOC2, .eoc3_msk = STM32F4_EOC3, @@ -273,6 +281,7 @@ static const struct stm32_adc_common_regs stm32f4_adc_common_regs = { /* STM32H7 common registers definitions */ static const struct stm32_adc_common_regs stm32h7_adc_common_regs = { .csr = STM32H7_ADC_CSR, + .ccr = STM32H7_ADC_CCR, .eoc1_msk = STM32H7_EOC_MST, .eoc2_msk = STM32H7_EOC_SLV, }; @@ -379,6 +388,61 @@ static void stm32_adc_irq_remove(struct platform_device *pdev, } } +static int stm32_adc_core_hw_start(struct device *dev) +{ + struct stm32_adc_common *common = dev_get_drvdata(dev); + struct stm32_adc_priv *priv = to_stm32_adc_priv(common); + int ret; + + ret = regulator_enable(priv->vref); + if (ret < 0) { + dev_err(dev, "vref enable failed\n"); + return ret; + } + + if (priv->bclk) { + ret = clk_prepare_enable(priv->bclk); + if (ret < 0) { + dev_err(dev, "bus clk enable failed\n"); + goto err_regulator_disable; + } + } + + if (priv->aclk) { + ret = clk_prepare_enable(priv->aclk); + if (ret < 0) { + dev_err(dev, "adc clk enable failed\n"); + goto err_bclk_disable; + } + } + + writel_relaxed(priv->ccr_bak, priv->common.base + priv->cfg->regs->ccr); + + return 0; + +err_bclk_disable: + if (priv->bclk) + clk_disable_unprepare(priv->bclk); +err_regulator_disable: + regulator_disable(priv->vref); + + return ret; +} + +static void stm32_adc_core_hw_stop(struct device *dev) +{ + struct stm32_adc_common *common = dev_get_drvdata(dev); + struct stm32_adc_priv *priv = to_stm32_adc_priv(common); + + /* Backup CCR that may be lost (depends on power state to achieve) */ + priv->ccr_bak = readl_relaxed(priv->common.base + priv->cfg->regs->ccr); + if (priv->aclk) + clk_disable_unprepare(priv->aclk); + if (priv->bclk) + clk_disable_unprepare(priv->bclk); + regulator_disable(priv->vref); +} + static int stm32_adc_probe(struct platform_device *pdev) { struct stm32_adc_priv *priv; @@ -393,6 +457,7 @@ static int stm32_adc_probe(struct platform_device *pdev) priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); if (!priv) return -ENOMEM; + platform_set_drvdata(pdev, &priv->common); priv->cfg = (const struct stm32_adc_priv_cfg *) of_match_device(dev->driver->of_match_table, dev)->data; @@ -410,67 +475,51 @@ static int stm32_adc_probe(struct platform_device *pdev) return ret; } - ret = regulator_enable(priv->vref); - if (ret < 0) { - dev_err(&pdev->dev, "vref enable failed\n"); - return ret; - } - - ret = regulator_get_voltage(priv->vref); - if (ret < 0) { - dev_err(&pdev->dev, "vref get voltage failed, %d\n", ret); - goto err_regulator_disable; - } - priv->common.vref_mv = ret / 1000; - dev_dbg(&pdev->dev, "vref+=%dmV\n", priv->common.vref_mv); - priv->aclk = devm_clk_get(&pdev->dev, "adc"); if (IS_ERR(priv->aclk)) { ret = PTR_ERR(priv->aclk); - if (ret == -ENOENT) { - priv->aclk = NULL; - } else { + if (ret != -ENOENT) { dev_err(&pdev->dev, "Can't get 'adc' clock\n"); - goto err_regulator_disable; - } - } - 
- if (priv->aclk) { - ret = clk_prepare_enable(priv->aclk); - if (ret < 0) { - dev_err(&pdev->dev, "adc clk enable failed\n"); - goto err_regulator_disable; + return ret; } + priv->aclk = NULL; } priv->bclk = devm_clk_get(&pdev->dev, "bus"); if (IS_ERR(priv->bclk)) { ret = PTR_ERR(priv->bclk); - if (ret == -ENOENT) { - priv->bclk = NULL; - } else { + if (ret != -ENOENT) { dev_err(&pdev->dev, "Can't get 'bus' clock\n"); - goto err_aclk_disable; + return ret; } + priv->bclk = NULL; } - if (priv->bclk) { - ret = clk_prepare_enable(priv->bclk); - if (ret < 0) { - dev_err(&pdev->dev, "adc clk enable failed\n"); - goto err_aclk_disable; - } + pm_runtime_get_noresume(dev); + pm_runtime_set_active(dev); + pm_runtime_set_autosuspend_delay(dev, STM32_ADC_CORE_SLEEP_DELAY_MS); + pm_runtime_use_autosuspend(dev); + pm_runtime_enable(dev); + + ret = stm32_adc_core_hw_start(dev); + if (ret) + goto err_pm_stop; + + ret = regulator_get_voltage(priv->vref); + if (ret < 0) { + dev_err(&pdev->dev, "vref get voltage failed, %d\n", ret); + goto err_hw_stop; } + priv->common.vref_mv = ret / 1000; + dev_dbg(&pdev->dev, "vref+=%dmV\n", priv->common.vref_mv); ret = priv->cfg->clk_sel(pdev, priv); if (ret < 0) - goto err_bclk_disable; + goto err_hw_stop; ret = stm32_adc_irq_probe(pdev, priv); if (ret < 0) - goto err_bclk_disable; - - platform_set_drvdata(pdev, &priv->common); + goto err_hw_stop; ret = of_platform_populate(np, NULL, NULL, &pdev->dev); if (ret < 0) { @@ -478,21 +527,19 @@ static int stm32_adc_probe(struct platform_device *pdev) goto err_irq_remove; } + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); + return 0; err_irq_remove: stm32_adc_irq_remove(pdev, priv); - -err_bclk_disable: - if (priv->bclk) - clk_disable_unprepare(priv->bclk); - -err_aclk_disable: - if (priv->aclk) - clk_disable_unprepare(priv->aclk); - -err_regulator_disable: - regulator_disable(priv->vref); +err_hw_stop: + stm32_adc_core_hw_stop(dev); +err_pm_stop: + pm_runtime_disable(dev); + pm_runtime_set_suspended(dev); + pm_runtime_put_noidle(dev); return ret; } @@ -502,17 +549,39 @@ static int stm32_adc_remove(struct platform_device *pdev) struct stm32_adc_common *common = platform_get_drvdata(pdev); struct stm32_adc_priv *priv = to_stm32_adc_priv(common); + pm_runtime_get_sync(&pdev->dev); of_platform_depopulate(&pdev->dev); stm32_adc_irq_remove(pdev, priv); - if (priv->bclk) - clk_disable_unprepare(priv->bclk); - if (priv->aclk) - clk_disable_unprepare(priv->aclk); - regulator_disable(priv->vref); + stm32_adc_core_hw_stop(&pdev->dev); + pm_runtime_disable(&pdev->dev); + pm_runtime_set_suspended(&pdev->dev); + pm_runtime_put_noidle(&pdev->dev); return 0; } +#if defined(CONFIG_PM) +static int stm32_adc_core_runtime_suspend(struct device *dev) +{ + stm32_adc_core_hw_stop(dev); + + return 0; +} + +static int stm32_adc_core_runtime_resume(struct device *dev) +{ + return stm32_adc_core_hw_start(dev); +} +#endif + +static const struct dev_pm_ops stm32_adc_core_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, + pm_runtime_force_resume) + SET_RUNTIME_PM_OPS(stm32_adc_core_runtime_suspend, + stm32_adc_core_runtime_resume, + NULL) +}; + static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = { .regs = &stm32f4_adc_common_regs, .clk_sel = stm32f4_adc_clk_sel, @@ -552,6 +621,7 @@ static struct platform_driver stm32_adc_driver = { .driver = { .name = "stm32-adc-core", .of_match_table = stm32_adc_of_match, + .pm = &stm32_adc_core_pm_ops, }, }; module_platform_driver(stm32_adc_driver); diff --git 
a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c index 378411853d75..205e1699f954 100644 --- a/drivers/iio/adc/stm32-adc.c +++ b/drivers/iio/adc/stm32-adc.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include @@ -148,6 +149,7 @@ enum stm32h7_adc_dmngt { #define STM32_ADC_MAX_SMP 7 /* SMPx range is [0..7] */ #define STM32_ADC_TIMEOUT_US 100000 #define STM32_ADC_TIMEOUT (msecs_to_jiffies(STM32_ADC_TIMEOUT_US / 1000)) +#define STM32_ADC_HW_STOP_DELAY_MS 100 #define STM32_DMA_BUFFER_SIZE PAGE_SIZE @@ -199,11 +201,13 @@ struct stm32_adc_trig_info { * @calfact_s: Calibration offset for single ended channels * @calfact_d: Calibration offset in differential * @lincalfact: Linearity calibration factor + * @calibrated: Indicates calibration status */ struct stm32_adc_calib { u32 calfact_s; u32 calfact_d; u32 lincalfact[STM32H7_LINCALFACT_NUM]; + bool calibrated; }; /** @@ -251,7 +255,6 @@ struct stm32_adc; * @trigs: external trigger sources * @clk_required: clock is required * @has_vregready: vregready status flag presence - * @selfcalib: optional routine for self-calibration * @prepare: optional prepare routine (power-up, enable) * @start_conv: routine to start conversions * @stop_conv: routine to stop conversions @@ -264,7 +267,6 @@ struct stm32_adc_cfg { struct stm32_adc_trig_info *trigs; bool clk_required; bool has_vregready; - int (*selfcalib)(struct stm32_adc *); int (*prepare)(struct stm32_adc *); void (*start_conv)(struct stm32_adc *, bool dma); void (*stop_conv)(struct stm32_adc *); @@ -623,6 +625,47 @@ static void stm32_adc_set_res(struct stm32_adc *adc) stm32_adc_writel(adc, res->reg, val); } +static int stm32_adc_hw_stop(struct device *dev) +{ + struct stm32_adc *adc = dev_get_drvdata(dev); + + if (adc->cfg->unprepare) + adc->cfg->unprepare(adc); + + if (adc->clk) + clk_disable_unprepare(adc->clk); + + return 0; +} + +static int stm32_adc_hw_start(struct device *dev) +{ + struct stm32_adc *adc = dev_get_drvdata(dev); + int ret; + + if (adc->clk) { + ret = clk_prepare_enable(adc->clk); + if (ret) + return ret; + } + + stm32_adc_set_res(adc); + + if (adc->cfg->prepare) { + ret = adc->cfg->prepare(adc); + if (ret) + goto err_clk_dis; + } + + return 0; + +err_clk_dis: + if (adc->clk) + clk_disable_unprepare(adc->clk); + + return ret; +} + /** * stm32f4_adc_start_conv() - Start conversions for regular channels. 
* @adc: stm32 adc instance @@ -777,6 +820,7 @@ static void stm32h7_adc_disable(struct stm32_adc *adc) /** * stm32h7_adc_read_selfcalib() - read calibration shadow regs, save result * @adc: stm32 adc instance + * Note: Must be called once ADC is enabled, so LINCALRDYW[1..6] are writable */ static int stm32h7_adc_read_selfcalib(struct stm32_adc *adc) { @@ -784,11 +828,6 @@ static int stm32h7_adc_read_selfcalib(struct stm32_adc *adc) int i, ret; u32 lincalrdyw_mask, val; - /* Enable adc so LINCALRDYW1..6 bits are writable */ - ret = stm32h7_adc_enable(adc); - if (ret) - return ret; - /* Read linearity calibration */ lincalrdyw_mask = STM32H7_LINCALRDYW6; for (i = STM32H7_LINCALFACT_NUM - 1; i >= 0; i--) { @@ -801,7 +840,7 @@ static int stm32h7_adc_read_selfcalib(struct stm32_adc *adc) 100, STM32_ADC_TIMEOUT_US); if (ret) { dev_err(&indio_dev->dev, "Failed to read calfact\n"); - goto disable; + return ret; } val = stm32_adc_readl(adc, STM32H7_ADC_CALFACT2); @@ -817,11 +856,9 @@ static int stm32h7_adc_read_selfcalib(struct stm32_adc *adc) adc->cal.calfact_s >>= STM32H7_CALFACT_S_SHIFT; adc->cal.calfact_d = (val & STM32H7_CALFACT_D_MASK); adc->cal.calfact_d >>= STM32H7_CALFACT_D_SHIFT; + adc->cal.calibrated = true; -disable: - stm32h7_adc_disable(adc); - - return ret; + return 0; } /** @@ -898,9 +935,9 @@ static int stm32h7_adc_restore_selfcalib(struct stm32_adc *adc) #define STM32H7_ADC_CALIB_TIMEOUT_US 100000 /** - * stm32h7_adc_selfcalib() - Procedure to calibrate ADC (from power down) + * stm32h7_adc_selfcalib() - Procedure to calibrate ADC * @adc: stm32 adc instance - * Exit from power down, calibrate ADC, then return to power down. + * Note: Must be called once ADC is out of power down. */ static int stm32h7_adc_selfcalib(struct stm32_adc *adc) { @@ -908,9 +945,8 @@ static int stm32h7_adc_selfcalib(struct stm32_adc *adc) int ret; u32 val; - ret = stm32h7_adc_exit_pwr_down(adc); - if (ret) - return ret; + if (adc->cal.calibrated) + return true; /* * Select calibration mode: @@ -927,7 +963,7 @@ static int stm32h7_adc_selfcalib(struct stm32_adc *adc) STM32H7_ADC_CALIB_TIMEOUT_US); if (ret) { dev_err(&indio_dev->dev, "calibration failed\n"); - goto pwr_dwn; + goto out; } /* @@ -944,18 +980,13 @@ static int stm32h7_adc_selfcalib(struct stm32_adc *adc) STM32H7_ADC_CALIB_TIMEOUT_US); if (ret) { dev_err(&indio_dev->dev, "calibration failed\n"); - goto pwr_dwn; + goto out; } +out: stm32_adc_clr_bits(adc, STM32H7_ADC_CR, STM32H7_ADCALDIF | STM32H7_ADCALLIN); - /* Read calibration result for future reference */ - ret = stm32h7_adc_read_selfcalib(adc); - -pwr_dwn: - stm32h7_adc_enter_pwr_down(adc); - return ret; } @@ -972,19 +1003,28 @@ pwr_dwn: */ static int stm32h7_adc_prepare(struct stm32_adc *adc) { - int ret; + int calib, ret; ret = stm32h7_adc_exit_pwr_down(adc); if (ret) return ret; + ret = stm32h7_adc_selfcalib(adc); + if (ret < 0) + goto pwr_dwn; + calib = ret; + stm32_adc_writel(adc, STM32H7_ADC_DIFSEL, adc->difsel); ret = stm32h7_adc_enable(adc); if (ret) goto pwr_dwn; - ret = stm32h7_adc_restore_selfcalib(adc); + /* Either restore or read calibration result for future reference */ + if (calib) + ret = stm32h7_adc_restore_selfcalib(adc); + else + ret = stm32h7_adc_read_selfcalib(adc); if (ret) goto disable; @@ -1174,6 +1214,7 @@ static int stm32_adc_single_conv(struct iio_dev *indio_dev, int *res) { struct stm32_adc *adc = iio_priv(indio_dev); + struct device *dev = indio_dev->dev.parent; const struct stm32_adc_regspec *regs = adc->cfg->regs; long timeout; u32 val; @@ -1183,10 +1224,10 @@ 
static int stm32_adc_single_conv(struct iio_dev *indio_dev, adc->bufi = 0; - if (adc->cfg->prepare) { - ret = adc->cfg->prepare(adc); - if (ret) - return ret; + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + pm_runtime_put_noidle(dev); + return ret; } /* Apply sampling time settings */ @@ -1224,8 +1265,8 @@ static int stm32_adc_single_conv(struct iio_dev *indio_dev, stm32_adc_conv_irq_disable(adc); - if (adc->cfg->unprepare) - adc->cfg->unprepare(adc); + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); return ret; } @@ -1333,15 +1374,22 @@ static int stm32_adc_update_scan_mode(struct iio_dev *indio_dev, const unsigned long *scan_mask) { struct stm32_adc *adc = iio_priv(indio_dev); + struct device *dev = indio_dev->dev.parent; int ret; + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + pm_runtime_put_noidle(dev); + return ret; + } + adc->num_conv = bitmap_weight(scan_mask, indio_dev->masklength); ret = stm32_adc_conf_scan_seq(indio_dev, scan_mask); - if (ret) - return ret; + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); - return 0; + return ret; } static int stm32_adc_of_xlate(struct iio_dev *indio_dev, @@ -1371,12 +1419,23 @@ static int stm32_adc_debugfs_reg_access(struct iio_dev *indio_dev, unsigned *readval) { struct stm32_adc *adc = iio_priv(indio_dev); + struct device *dev = indio_dev->dev.parent; + int ret; + + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + pm_runtime_put_noidle(dev); + return ret; + } if (!readval) stm32_adc_writel(adc, reg, writeval); else *readval = stm32_adc_readl(adc, reg); + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); + return 0; } @@ -1459,21 +1518,22 @@ static int stm32_adc_dma_start(struct iio_dev *indio_dev) return 0; } -static int stm32_adc_buffer_postenable(struct iio_dev *indio_dev) +static int __stm32_adc_buffer_postenable(struct iio_dev *indio_dev) { struct stm32_adc *adc = iio_priv(indio_dev); + struct device *dev = indio_dev->dev.parent; int ret; - if (adc->cfg->prepare) { - ret = adc->cfg->prepare(adc); - if (ret) - return ret; + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + pm_runtime_put_noidle(dev); + return ret; } ret = stm32_adc_set_trig(indio_dev, indio_dev->trig); if (ret) { dev_err(&indio_dev->dev, "Can't set trigger\n"); - goto err_unprepare; + goto err_pm_put; } ret = stm32_adc_dma_start(indio_dev); @@ -1482,10 +1542,6 @@ static int stm32_adc_buffer_postenable(struct iio_dev *indio_dev) goto err_clr_trig; } - ret = iio_triggered_buffer_postenable(indio_dev); - if (ret < 0) - goto err_stop_dma; - /* Reset adc buffer index */ adc->bufi = 0; @@ -1496,39 +1552,58 @@ static int stm32_adc_buffer_postenable(struct iio_dev *indio_dev) return 0; -err_stop_dma: - if (adc->dma_chan) - dmaengine_terminate_all(adc->dma_chan); err_clr_trig: stm32_adc_set_trig(indio_dev, NULL); -err_unprepare: - if (adc->cfg->unprepare) - adc->cfg->unprepare(adc); +err_pm_put: + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); return ret; } -static int stm32_adc_buffer_predisable(struct iio_dev *indio_dev) +static int stm32_adc_buffer_postenable(struct iio_dev *indio_dev) { - struct stm32_adc *adc = iio_priv(indio_dev); int ret; + ret = iio_triggered_buffer_postenable(indio_dev); + if (ret < 0) + return ret; + + ret = __stm32_adc_buffer_postenable(indio_dev); + if (ret < 0) + iio_triggered_buffer_predisable(indio_dev); + + return ret; +} + +static void __stm32_adc_buffer_predisable(struct iio_dev *indio_dev) +{ + struct stm32_adc *adc = iio_priv(indio_dev); + struct device *dev 
= indio_dev->dev.parent; + adc->cfg->stop_conv(adc); if (!adc->dma_chan) stm32_adc_conv_irq_disable(adc); - ret = iio_triggered_buffer_predisable(indio_dev); - if (ret < 0) - dev_err(&indio_dev->dev, "predisable failed\n"); - if (adc->dma_chan) dmaengine_terminate_all(adc->dma_chan); if (stm32_adc_set_trig(indio_dev, NULL)) dev_err(&indio_dev->dev, "Can't clear trigger\n"); - if (adc->cfg->unprepare) - adc->cfg->unprepare(adc); + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); +} + +static int stm32_adc_buffer_predisable(struct iio_dev *indio_dev) +{ + int ret; + + __stm32_adc_buffer_predisable(indio_dev); + + ret = iio_triggered_buffer_predisable(indio_dev); + if (ret < 0) + dev_err(&indio_dev->dev, "predisable failed\n"); return ret; } @@ -1867,32 +1942,17 @@ static int stm32_adc_probe(struct platform_device *pdev) } } - if (adc->clk) { - ret = clk_prepare_enable(adc->clk); - if (ret < 0) { - dev_err(&pdev->dev, "clk enable failed\n"); - return ret; - } - } - ret = stm32_adc_of_get_resolution(indio_dev); if (ret < 0) - goto err_clk_disable; - stm32_adc_set_res(adc); - - if (adc->cfg->selfcalib) { - ret = adc->cfg->selfcalib(adc); - if (ret) - goto err_clk_disable; - } + return ret; ret = stm32_adc_chan_of_init(indio_dev); if (ret < 0) - goto err_clk_disable; + return ret; ret = stm32_adc_dma_request(indio_dev); if (ret < 0) - goto err_clk_disable; + return ret; ret = iio_triggered_buffer_setup(indio_dev, &iio_pollfunc_store_time, @@ -1903,15 +1963,35 @@ static int stm32_adc_probe(struct platform_device *pdev) goto err_dma_disable; } + /* Get stm32-adc-core PM online */ + pm_runtime_get_noresume(dev); + pm_runtime_set_active(dev); + pm_runtime_set_autosuspend_delay(dev, STM32_ADC_HW_STOP_DELAY_MS); + pm_runtime_use_autosuspend(dev); + pm_runtime_enable(dev); + + ret = stm32_adc_hw_start(dev); + if (ret) + goto err_buffer_cleanup; + ret = iio_device_register(indio_dev); if (ret) { dev_err(&pdev->dev, "iio dev register failed\n"); - goto err_buffer_cleanup; + goto err_hw_stop; } + pm_runtime_mark_last_busy(dev); + pm_runtime_put_autosuspend(dev); + return 0; +err_hw_stop: + stm32_adc_hw_stop(dev); + err_buffer_cleanup: + pm_runtime_disable(dev); + pm_runtime_set_suspended(dev); + pm_runtime_put_noidle(dev); iio_triggered_buffer_cleanup(indio_dev); err_dma_disable: @@ -1921,9 +2001,6 @@ err_dma_disable: adc->rx_buf, adc->rx_dma_buf); dma_release_channel(adc->dma_chan); } -err_clk_disable: - if (adc->clk) - clk_disable_unprepare(adc->clk); return ret; } @@ -1933,7 +2010,12 @@ static int stm32_adc_remove(struct platform_device *pdev) struct stm32_adc *adc = platform_get_drvdata(pdev); struct iio_dev *indio_dev = iio_priv_to_dev(adc); + pm_runtime_get_sync(&pdev->dev); iio_device_unregister(indio_dev); + stm32_adc_hw_stop(&pdev->dev); + pm_runtime_disable(&pdev->dev); + pm_runtime_set_suspended(&pdev->dev); + pm_runtime_put_noidle(&pdev->dev); iio_triggered_buffer_cleanup(indio_dev); if (adc->dma_chan) { dma_free_coherent(adc->dma_chan->device->dev, @@ -1941,12 +2023,62 @@ static int stm32_adc_remove(struct platform_device *pdev) adc->rx_buf, adc->rx_dma_buf); dma_release_channel(adc->dma_chan); } - if (adc->clk) - clk_disable_unprepare(adc->clk); return 0; } +#if defined(CONFIG_PM_SLEEP) +static int stm32_adc_suspend(struct device *dev) +{ + struct stm32_adc *adc = dev_get_drvdata(dev); + struct iio_dev *indio_dev = iio_priv_to_dev(adc); + + if (iio_buffer_enabled(indio_dev)) + __stm32_adc_buffer_predisable(indio_dev); + + return pm_runtime_force_suspend(dev); +} + +static 
int stm32_adc_resume(struct device *dev) +{ + struct stm32_adc *adc = dev_get_drvdata(dev); + struct iio_dev *indio_dev = iio_priv_to_dev(adc); + int ret; + + ret = pm_runtime_force_resume(dev); + if (ret < 0) + return ret; + + if (!iio_buffer_enabled(indio_dev)) + return 0; + + ret = stm32_adc_update_scan_mode(indio_dev, + indio_dev->active_scan_mask); + if (ret < 0) + return ret; + + return __stm32_adc_buffer_postenable(indio_dev); +} +#endif + +#if defined(CONFIG_PM) +static int stm32_adc_runtime_suspend(struct device *dev) +{ + return stm32_adc_hw_stop(dev); +} + +static int stm32_adc_runtime_resume(struct device *dev) +{ + return stm32_adc_hw_start(dev); +} +#endif + +static const struct dev_pm_ops stm32_adc_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(stm32_adc_suspend, stm32_adc_resume) + SET_RUNTIME_PM_OPS(stm32_adc_runtime_suspend, stm32_adc_runtime_resume, + NULL) +}; + static const struct stm32_adc_cfg stm32f4_adc_cfg = { .regs = &stm32f4_adc_regspec, .adc_info = &stm32f4_adc_info, @@ -1961,7 +2093,6 @@ static const struct stm32_adc_cfg stm32h7_adc_cfg = { .regs = &stm32h7_adc_regspec, .adc_info = &stm32h7_adc_info, .trigs = stm32h7_adc_trigs, - .selfcalib = stm32h7_adc_selfcalib, .start_conv = stm32h7_adc_start_conv, .stop_conv = stm32h7_adc_stop_conv, .prepare = stm32h7_adc_prepare, @@ -1974,7 +2105,6 @@ static const struct stm32_adc_cfg stm32mp1_adc_cfg = { .adc_info = &stm32h7_adc_info, .trigs = stm32h7_adc_trigs, .has_vregready = true, - .selfcalib = stm32h7_adc_selfcalib, .start_conv = stm32h7_adc_start_conv, .stop_conv = stm32h7_adc_stop_conv, .prepare = stm32h7_adc_prepare, @@ -1996,6 +2126,7 @@ static struct platform_driver stm32_adc_driver = { .driver = { .name = "stm32-adc", .of_match_table = stm32_adc_of_match, + .pm = &stm32_adc_pm_ops, }, }; module_platform_driver(stm32_adc_driver); diff --git a/drivers/iio/adc/ti-adc128s052.c b/drivers/iio/adc/ti-adc128s052.c index 7cf39b3e2416..1e5a936b5b6a 100644 --- a/drivers/iio/adc/ti-adc128s052.c +++ b/drivers/iio/adc/ti-adc128s052.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2014 Angelo Compagnucci * @@ -6,16 +7,14 @@ * http://www.ti.com/lit/ds/symlink/adc128s052.pdf * http://www.ti.com/lit/ds/symlink/adc122s021.pdf * http://www.ti.com/lit/ds/symlink/adc124s021.pdf - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
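Illustrative sketch, not part of the patch: the stm32-adc hunks above replace the driver's prepare()/unprepare() hooks with runtime PM references taken around every hardware access. A minimal version of that recurring pattern, assuming only the standard pm_runtime API and a hypothetical foo_do_conversion():

	#include <linux/pm_runtime.h>

	static int foo_do_conversion(struct device *dev)
	{
		int ret;

		ret = pm_runtime_get_sync(dev);	/* resume hw, usage count++ */
		if (ret < 0) {
			/* get_sync bumps the count even on failure */
			pm_runtime_put_noidle(dev);
			return ret;
		}

		/* ... access the hardware here ... */

		pm_runtime_mark_last_busy(dev);		/* re-arm autosuspend */
		pm_runtime_put_autosuspend(dev);	/* idle after the delay */
		return 0;
	}

stm32_adc_probe() pairs this with pm_runtime_get_noresume()/pm_runtime_set_active()/pm_runtime_enable(), so the device starts out powered and only autosuspends once probe drops its reference.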
*/ +#include #include #include #include #include +#include #include struct adc128_configuration { @@ -135,10 +134,15 @@ static const struct iio_info adc128_info = { static int adc128_probe(struct spi_device *spi) { struct iio_dev *indio_dev; + unsigned int config; struct adc128 *adc; - int config = spi_get_device_id(spi)->driver_data; int ret; + if (dev_fwnode(&spi->dev)) + config = (unsigned long) device_get_match_data(&spi->dev); + else + config = spi_get_device_id(spi)->driver_data; + indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*adc)); if (!indio_dev) return -ENOMEM; @@ -186,23 +190,40 @@ static int adc128_remove(struct spi_device *spi) static const struct of_device_id adc128_of_match[] = { { .compatible = "ti,adc128s052", }, { .compatible = "ti,adc122s021", }, + { .compatible = "ti,adc122s051", }, + { .compatible = "ti,adc122s101", }, { .compatible = "ti,adc124s021", }, + { .compatible = "ti,adc124s051", }, + { .compatible = "ti,adc124s101", }, { /* sentinel */ }, }; MODULE_DEVICE_TABLE(of, adc128_of_match); static const struct spi_device_id adc128_id[] = { - { "adc128s052", 0}, /* index into adc128_config */ - { "adc122s021", 1}, - { "adc124s021", 2}, + { "adc128s052", 0 }, /* index into adc128_config */ + { "adc122s021", 1 }, + { "adc122s051", 1 }, + { "adc122s101", 1 }, + { "adc124s021", 2 }, + { "adc124s051", 2 }, + { "adc124s101", 2 }, { } }; MODULE_DEVICE_TABLE(spi, adc128_id); +#ifdef CONFIG_ACPI +static const struct acpi_device_id adc128_acpi_match[] = { + { "AANT1280", 2 }, /* ADC124S021 compatible ACPI ID */ + { } +}; +MODULE_DEVICE_TABLE(acpi, adc128_acpi_match); +#endif + static struct spi_driver adc128_driver = { .driver = { .name = "adc128s052", .of_match_table = of_match_ptr(adc128_of_match), + .acpi_match_table = ACPI_PTR(adc128_acpi_match), }, .probe = adc128_probe, .remove = adc128_remove, diff --git a/drivers/iio/common/hid-sensors/hid-sensor-attributes.c b/drivers/iio/common/hid-sensors/hid-sensor-attributes.c index ed3849d6fc6a..b2143d7b4ccb 100644 --- a/drivers/iio/common/hid-sensors/hid-sensor-attributes.c +++ b/drivers/iio/common/hid-sensors/hid-sensor-attributes.c @@ -336,7 +336,7 @@ static void adjust_exponent_nano(int *val0, int *val1, int scale0, scale1 = scale1 % pow_10(8 - i); } *val0 += res; - *val1 = scale1 * pow_10(exp); + *val1 = scale1 * pow_10(exp); } else if (exp < 0) { exp = abs(exp); if (exp > 9) { diff --git a/drivers/iio/common/ssp_sensors/ssp_dev.c b/drivers/iio/common/ssp_sensors/ssp_dev.c index af3aa38f67cd..9e13be2c0cb9 100644 --- a/drivers/iio/common/ssp_sensors/ssp_dev.c +++ b/drivers/iio/common/ssp_sensors/ssp_dev.c @@ -462,43 +462,35 @@ static struct ssp_data *ssp_parse_dt(struct device *dev) data->mcu_ap_gpio = of_get_named_gpio(node, "mcu-ap-gpios", 0); if (data->mcu_ap_gpio < 0) - goto err_free_pd; + return NULL; data->ap_mcu_gpio = of_get_named_gpio(node, "ap-mcu-gpios", 0); if (data->ap_mcu_gpio < 0) - goto err_free_pd; + return NULL; data->mcu_reset_gpio = of_get_named_gpio(node, "mcu-reset-gpios", 0); if (data->mcu_reset_gpio < 0) - goto err_free_pd; + return NULL; ret = devm_gpio_request_one(dev, data->ap_mcu_gpio, GPIOF_OUT_INIT_HIGH, "ap-mcu-gpios"); if (ret) - goto err_free_pd; + return NULL; ret = devm_gpio_request_one(dev, data->mcu_reset_gpio, GPIOF_OUT_INIT_HIGH, "mcu-reset-gpios"); if (ret) - goto err_ap_mcu; + return NULL; match = of_match_node(ssp_of_match, node); if (!match) - goto err_mcu_reset_gpio; + return NULL; data->sensorhub_info = match->data; dev_set_drvdata(dev, data); return data; - 
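Illustrative sketch, not part of the patch: the error labels deleted just below were redundant, because devm-managed allocations are released automatically by the driver core when probe fails or the device is unbound. The idiom, with a hypothetical foo_probe():

	#include <linux/gpio.h>

	static int foo_probe(struct device *dev, unsigned int gpio)
	{
		int ret;

		/* lifetime tied to dev: freed on probe failure or unbind */
		ret = devm_gpio_request_one(dev, gpio, GPIOF_OUT_INIT_HIGH,
					    "foo");
		if (ret)
			return ret;	/* no devm_gpio_free() needed */

		return 0;
	}

The same reasoning drives the removal of bmi160_core_remove() further below: devm_add_action_or_reset() plus devm_iio_device_register() leave nothing to undo by hand.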
-err_mcu_reset_gpio: - devm_gpio_free(dev, data->mcu_reset_gpio); -err_ap_mcu: - devm_gpio_free(dev, data->ap_mcu_gpio); -err_free_pd: - devm_kfree(dev, data); - return NULL; } #else static struct ssp_data *ssp_parse_dt(struct device *pdev) diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c index 26fbd1bd9413..e50c975250e9 100644 --- a/drivers/iio/common/st_sensors/st_sensors_core.c +++ b/drivers/iio/common/st_sensors/st_sensors_core.c @@ -133,7 +133,7 @@ static int st_sensors_match_fs(struct st_sensor_settings *sensor_settings, for (i = 0; i < ST_SENSORS_FULLSCALE_AVL_MAX; i++) { if (sensor_settings->fs.fs_avl[i].num == 0) - goto st_sensors_match_odr_error; + return ret; if (sensor_settings->fs.fs_avl[i].num == fs) { *index_fs_avl = i; @@ -142,7 +142,6 @@ static int st_sensors_match_fs(struct st_sensor_settings *sensor_settings, } } -st_sensors_match_odr_error: return ret; } diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c index fdcc5a891958..224596b0e189 100644 --- a/drivers/iio/common/st_sensors/st_sensors_trigger.c +++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c @@ -104,7 +104,7 @@ static irqreturn_t st_sensors_irq_thread(int irq, void *p) return IRQ_HANDLED; /* - * If we are using egde IRQs, new samples arrived while processing + * If we are using edge IRQs, new samples arrived while processing * the IRQ and those may be missed unless we pick them here, so poll * again. If the sensor delivery frequency is very high, this thread * turns into a polled loop handler. @@ -148,7 +148,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev, if (!sdata->sensor_settings->drdy_irq.addr_ihl) { dev_err(&indio_dev->dev, "falling/low specified for IRQ " - "but hardware only support rising/high: " + "but hardware supports only rising/high: " "will request rising/high\n"); if (irq_trig == IRQF_TRIGGER_FALLING) irq_trig = IRQF_TRIGGER_RISING; diff --git a/drivers/iio/dac/Kconfig b/drivers/iio/dac/Kconfig index bb2057fd1b6f..851b61eaf3da 100644 --- a/drivers/iio/dac/Kconfig +++ b/drivers/iio/dac/Kconfig @@ -366,6 +366,15 @@ config TI_DAC5571 If compiled as a module, it will be called ti-dac5571. +config TI_DAC7311 + tristate "Texas Instruments 8/10/12-bit 1-channel DAC driver" + depends on SPI + help + Driver for the Texas Instruments + DAC7311, DAC6311, DAC5311. + + If compiled as a module, it will be called ti-dac7311. 
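Illustrative sketch, not part of the patch: the ti-dac7311 driver selected here (added in full below) packs the DAC power state into a 2-bit field where 0 means normal operation and the three power-down modes map onto 1..3, mirroring its ti_dac_get_power() helper:

	#include <linux/types.h>

	enum {
		POWER_1KOHM_TO_GND = 0,	/* output pulled low via 1 kOhm */
		POWER_100KOHM_TO_GND,	/* via 100 kOhm */
		POWER_TRI_STATE,	/* output left high-impedance */
	};

	/* 0 = powered up; power-down modes are offset by one */
	static u8 power_field(u8 powerdown_mode, bool powerdown)
	{
		return powerdown ? powerdown_mode + 1 : 0;
	}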
+ config VF610_DAC tristate "Vybrid vf610 DAC driver" depends on OF diff --git a/drivers/iio/dac/Makefile b/drivers/iio/dac/Makefile index 2ac93cc4a389..f0a37c93de8e 100644 --- a/drivers/iio/dac/Makefile +++ b/drivers/iio/dac/Makefile @@ -40,4 +40,5 @@ obj-$(CONFIG_STM32_DAC_CORE) += stm32-dac-core.o obj-$(CONFIG_STM32_DAC) += stm32-dac.o obj-$(CONFIG_TI_DAC082S085) += ti-dac082s085.o obj-$(CONFIG_TI_DAC5571) += ti-dac5571.o +obj-$(CONFIG_TI_DAC7311) += ti-dac7311.o obj-$(CONFIG_VF610_DAC) += vf610_dac.o diff --git a/drivers/iio/dac/ad5686-spi.c b/drivers/iio/dac/ad5686-spi.c index 1df9143f55e9..665fa6bd9ced 100644 --- a/drivers/iio/dac/ad5686-spi.c +++ b/drivers/iio/dac/ad5686-spi.c @@ -19,6 +19,12 @@ static int ad5686_spi_write(struct ad5686_state *st, u8 tx_len, *buf; switch (st->chip_info->regmap_type) { + case AD5310_REGMAP: + st->data[0].d16 = cpu_to_be16(AD5310_CMD(cmd) | + val); + buf = &st->data[0].d8[0]; + tx_len = 2; + break; case AD5683_REGMAP: st->data[0].d32 = cpu_to_be32(AD5686_CMD(cmd) | AD5683_DATA(val)); @@ -56,10 +62,18 @@ static int ad5686_spi_read(struct ad5686_state *st, u8 addr) u8 cmd = 0; int ret; - if (st->chip_info->regmap_type == AD5686_REGMAP) - cmd = AD5686_CMD_READBACK_ENABLE; - else if (st->chip_info->regmap_type == AD5683_REGMAP) + switch (st->chip_info->regmap_type) { + case AD5310_REGMAP: + return -ENOTSUPP; + case AD5683_REGMAP: cmd = AD5686_CMD_READBACK_ENABLE_V2; + break; + case AD5686_REGMAP: + cmd = AD5686_CMD_READBACK_ENABLE; + break; + default: + return -EINVAL; + } st->data[0].d32 = cpu_to_be32(AD5686_CMD(cmd) | AD5686_ADDR(addr)); @@ -86,6 +100,7 @@ static int ad5686_spi_remove(struct spi_device *spi) } static const struct spi_device_id ad5686_spi_id[] = { + {"ad5310r", ID_AD5310R}, {"ad5672r", ID_AD5672R}, {"ad5676", ID_AD5676}, {"ad5676r", ID_AD5676R}, diff --git a/drivers/iio/dac/ad5686.c b/drivers/iio/dac/ad5686.c index 0e134b13967a..a332b93ca2c4 100644 --- a/drivers/iio/dac/ad5686.c +++ b/drivers/iio/dac/ad5686.c @@ -83,6 +83,10 @@ static ssize_t ad5686_write_dac_powerdown(struct iio_dev *indio_dev, st->pwr_down_mask &= ~(0x3 << (chan->channel * 2)); switch (st->chip_info->regmap_type) { + case AD5310_REGMAP: + shift = 9; + ref_bit_msk = AD5310_REF_BIT_MSK; + break; case AD5683_REGMAP: shift = 13; ref_bit_msk = AD5683_REF_BIT_MSK; @@ -124,7 +128,8 @@ static int ad5686_read_raw(struct iio_dev *indio_dev, mutex_unlock(&indio_dev->mlock); if (ret < 0) return ret; - *val = ret; + *val = (ret >> chan->scan_type.shift) & + GENMASK(chan->scan_type.realbits - 1, 0); return IIO_VAL_INT; case IIO_CHAN_INFO_SCALE: *val = st->vref_mv; @@ -221,6 +226,7 @@ static struct iio_chan_spec name[] = { \ AD5868_CHANNEL(7, 7, bits, _shift), \ } +DECLARE_AD5693_CHANNELS(ad5310r_channels, 10, 2); DECLARE_AD5693_CHANNELS(ad5311r_channels, 10, 6); DECLARE_AD5676_CHANNELS(ad5672_channels, 12, 4); DECLARE_AD5676_CHANNELS(ad5676_channels, 16, 0); @@ -232,6 +238,12 @@ DECLARE_AD5693_CHANNELS(ad5692r_channels, 14, 2); DECLARE_AD5693_CHANNELS(ad5691r_channels, 12, 4); static const struct ad5686_chip_info ad5686_chip_info_tbl[] = { + [ID_AD5310R] = { + .channels = ad5310r_channels, + .int_vref_mv = 2500, + .num_channels = 1, + .regmap_type = AD5310_REGMAP, + }, [ID_AD5311R] = { .channels = ad5311r_channels, .int_vref_mv = 2500, @@ -419,6 +431,11 @@ int ad5686_probe(struct device *dev, indio_dev->num_channels = st->chip_info->num_channels; switch (st->chip_info->regmap_type) { + case AD5310_REGMAP: + cmd = AD5686_CMD_CONTROL_REG; + ref_bit_msk = AD5310_REF_BIT_MSK; + 
st->use_internal_vref = !voltage_uv; + break; case AD5683_REGMAP: cmd = AD5686_CMD_CONTROL_REG; ref_bit_msk = AD5683_REF_BIT_MSK; diff --git a/drivers/iio/dac/ad5686.h b/drivers/iio/dac/ad5686.h index 57b3c61bfb91..19f6917d4738 100644 --- a/drivers/iio/dac/ad5686.h +++ b/drivers/iio/dac/ad5686.h @@ -13,7 +13,10 @@ #include #include +#define AD5310_CMD(x) ((x) << 12) + #define AD5683_DATA(x) ((x) << 4) + #define AD5686_ADDR(x) ((x) << 16) #define AD5686_CMD(x) ((x) << 20) @@ -38,6 +41,8 @@ #define AD5686_CMD_CONTROL_REG 0x4 #define AD5686_CMD_READBACK_ENABLE_V2 0x5 + +#define AD5310_REF_BIT_MSK BIT(8) #define AD5683_REF_BIT_MSK BIT(12) #define AD5693_REF_BIT_MSK BIT(12) @@ -45,6 +50,7 @@ * ad5686_supported_device_ids: */ enum ad5686_supported_device_ids { + ID_AD5310R, ID_AD5311R, ID_AD5671R, ID_AD5672R, @@ -72,6 +78,7 @@ enum ad5686_supported_device_ids { }; enum ad5686_regmap_type { + AD5310_REGMAP, AD5683_REGMAP, AD5686_REGMAP, AD5693_REGMAP diff --git a/drivers/iio/dac/dpot-dac.c b/drivers/iio/dac/dpot-dac.c index a791d0a09d3b..4a6111b7e86c 100644 --- a/drivers/iio/dac/dpot-dac.c +++ b/drivers/iio/dac/dpot-dac.c @@ -74,11 +74,11 @@ static int dpot_dac_read_raw(struct iio_dev *indio_dev, case IIO_VAL_INT: /* * Convert integer scale to fractional scale by - * setting the denominator (val2) to one... + * setting the denominator (val2) to one, and... */ *val2 = 1; ret = IIO_VAL_FRACTIONAL; - /* ...and fall through. */ + /* fall through */ case IIO_VAL_FRACTIONAL: *val *= regulator_get_voltage(dac->vref) / 1000; *val2 *= dac->max_ohms; diff --git a/drivers/iio/dac/ti-dac7311.c b/drivers/iio/dac/ti-dac7311.c new file mode 100644 index 000000000000..6f5df1a30a1c --- /dev/null +++ b/drivers/iio/dac/ti-dac7311.c @@ -0,0 +1,338 @@ +// SPDX-License-Identifier: GPL-2.0 +/* ti-dac7311.c - Texas Instruments 8/10/12-bit 1-channel DAC driver + * + * Copyright (C) 2018 CMC NV + * + * http://www.ti.com/lit/ds/symlink/dac7311.pdf + */ + +#include +#include +#include +#include + +enum { + ID_DAC5311 = 0, + ID_DAC6311, + ID_DAC7311, +}; + +enum { + POWER_1KOHM_TO_GND = 0, + POWER_100KOHM_TO_GND, + POWER_TRI_STATE, +}; + +struct ti_dac_spec { + u8 resolution; +}; + +static const struct ti_dac_spec ti_dac_spec[] = { + [ID_DAC5311] = { .resolution = 8 }, + [ID_DAC6311] = { .resolution = 10 }, + [ID_DAC7311] = { .resolution = 12 }, +}; + +/** + * struct ti_dac_chip - TI DAC chip + * @lock: protects write sequences + * @vref: regulator generating Vref + * @spi: SPI device to send data to the device + * @val: cached value + * @powerdown: whether the chip is powered down + * @powerdown_mode: selected by the user + * @resolution: resolution of the chip + * @buf: buffer for transfer data + */ +struct ti_dac_chip { + struct mutex lock; + struct regulator *vref; + struct spi_device *spi; + u16 val; + bool powerdown; + u8 powerdown_mode; + u8 resolution; + u8 buf[2] ____cacheline_aligned; +}; + +static u8 ti_dac_get_power(struct ti_dac_chip *ti_dac, bool powerdown) +{ + if (powerdown) + return ti_dac->powerdown_mode + 1; + + return 0; +} + +static int ti_dac_cmd(struct ti_dac_chip *ti_dac, u8 power, u16 val) +{ + u8 shift = 14 - ti_dac->resolution; + + ti_dac->buf[0] = (val << shift) & 0xFF; + ti_dac->buf[1] = (power << 6) | (val >> (8 - shift)); + return spi_write(ti_dac->spi, ti_dac->buf, 2); +} + +static const char * const ti_dac_powerdown_modes[] = { + "1kohm_to_gnd", + "100kohm_to_gnd", + "three_state", +}; + +static int ti_dac_get_powerdown_mode(struct iio_dev *indio_dev, + const struct iio_chan_spec *chan) +{ + 
struct ti_dac_chip *ti_dac = iio_priv(indio_dev); + + return ti_dac->powerdown_mode; +} + +static int ti_dac_set_powerdown_mode(struct iio_dev *indio_dev, + const struct iio_chan_spec *chan, + unsigned int mode) +{ + struct ti_dac_chip *ti_dac = iio_priv(indio_dev); + + ti_dac->powerdown_mode = mode; + return 0; +} + +static const struct iio_enum ti_dac_powerdown_mode = { + .items = ti_dac_powerdown_modes, + .num_items = ARRAY_SIZE(ti_dac_powerdown_modes), + .get = ti_dac_get_powerdown_mode, + .set = ti_dac_set_powerdown_mode, +}; + +static ssize_t ti_dac_read_powerdown(struct iio_dev *indio_dev, + uintptr_t private, + const struct iio_chan_spec *chan, + char *buf) +{ + struct ti_dac_chip *ti_dac = iio_priv(indio_dev); + + return sprintf(buf, "%d\n", ti_dac->powerdown); +} + +static ssize_t ti_dac_write_powerdown(struct iio_dev *indio_dev, + uintptr_t private, + const struct iio_chan_spec *chan, + const char *buf, size_t len) +{ + struct ti_dac_chip *ti_dac = iio_priv(indio_dev); + bool powerdown; + u8 power; + int ret; + + ret = strtobool(buf, &powerdown); + if (ret) + return ret; + + power = ti_dac_get_power(ti_dac, powerdown); + + mutex_lock(&ti_dac->lock); + ret = ti_dac_cmd(ti_dac, power, 0); + if (!ret) + ti_dac->powerdown = powerdown; + mutex_unlock(&ti_dac->lock); + + return ret ? ret : len; +} + +static const struct iio_chan_spec_ext_info ti_dac_ext_info[] = { + { + .name = "powerdown", + .read = ti_dac_read_powerdown, + .write = ti_dac_write_powerdown, + .shared = IIO_SHARED_BY_TYPE, + }, + IIO_ENUM("powerdown_mode", IIO_SHARED_BY_TYPE, &ti_dac_powerdown_mode), + IIO_ENUM_AVAILABLE("powerdown_mode", &ti_dac_powerdown_mode), + { }, +}; + +#define TI_DAC_CHANNEL(chan) { \ + .type = IIO_VOLTAGE, \ + .channel = (chan), \ + .output = true, \ + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \ + .ext_info = ti_dac_ext_info, \ +} + +static const struct iio_chan_spec ti_dac_channels[] = { + TI_DAC_CHANNEL(0), +}; + +static int ti_dac_read_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int *val, int *val2, long mask) +{ + struct ti_dac_chip *ti_dac = iio_priv(indio_dev); + int ret; + + switch (mask) { + case IIO_CHAN_INFO_RAW: + *val = ti_dac->val; + return IIO_VAL_INT; + + case IIO_CHAN_INFO_SCALE: + ret = regulator_get_voltage(ti_dac->vref); + if (ret < 0) + return ret; + + *val = ret / 1000; + *val2 = ti_dac->resolution; + return IIO_VAL_FRACTIONAL_LOG2; + } + + return -EINVAL; +} + +static int ti_dac_write_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int val, int val2, long mask) +{ + struct ti_dac_chip *ti_dac = iio_priv(indio_dev); + u8 power = ti_dac_get_power(ti_dac, ti_dac->powerdown); + int ret; + + switch (mask) { + case IIO_CHAN_INFO_RAW: + if (ti_dac->val == val) + return 0; + + if (val >= (1 << ti_dac->resolution) || val < 0) + return -EINVAL; + + if (ti_dac->powerdown) + return -EBUSY; + + mutex_lock(&ti_dac->lock); + ret = ti_dac_cmd(ti_dac, power, val); + if (!ret) + ti_dac->val = val; + mutex_unlock(&ti_dac->lock); + break; + + default: + ret = -EINVAL; + } + + return ret; +} + +static int ti_dac_write_raw_get_fmt(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, long mask) +{ + return IIO_VAL_INT; +} + +static const struct iio_info ti_dac_info = { + .read_raw = ti_dac_read_raw, + .write_raw = ti_dac_write_raw, + .write_raw_get_fmt = ti_dac_write_raw_get_fmt, +}; + +static int ti_dac_probe(struct spi_device *spi) +{ + struct device *dev = &spi->dev; + const 
struct ti_dac_spec *spec; + struct ti_dac_chip *ti_dac; + struct iio_dev *indio_dev; + int ret; + + indio_dev = devm_iio_device_alloc(dev, sizeof(*ti_dac)); + if (!indio_dev) { + dev_err(dev, "can not allocate iio device\n"); + return -ENOMEM; + } + + spi->mode = SPI_MODE_1; + spi->bits_per_word = 16; + spi_setup(spi); + + indio_dev->dev.parent = dev; + indio_dev->dev.of_node = spi->dev.of_node; + indio_dev->info = &ti_dac_info; + indio_dev->name = spi_get_device_id(spi)->name; + indio_dev->modes = INDIO_DIRECT_MODE; + indio_dev->channels = ti_dac_channels; + spi_set_drvdata(spi, indio_dev); + + ti_dac = iio_priv(indio_dev); + ti_dac->powerdown = false; + ti_dac->spi = spi; + + spec = &ti_dac_spec[spi_get_device_id(spi)->driver_data]; + indio_dev->num_channels = 1; + ti_dac->resolution = spec->resolution; + + ti_dac->vref = devm_regulator_get(dev, "vref"); + if (IS_ERR(ti_dac->vref)) { + dev_err(dev, "error to get regulator\n"); + return PTR_ERR(ti_dac->vref); + } + + ret = regulator_enable(ti_dac->vref); + if (ret < 0) { + dev_err(dev, "can not enable regulator\n"); + return ret; + } + + mutex_init(&ti_dac->lock); + + ret = iio_device_register(indio_dev); + if (ret) { + dev_err(dev, "fail to register iio device: %d\n", ret); + goto err; + } + + return 0; + +err: + mutex_destroy(&ti_dac->lock); + regulator_disable(ti_dac->vref); + return ret; +} + +static int ti_dac_remove(struct spi_device *spi) +{ + struct iio_dev *indio_dev = spi_get_drvdata(spi); + struct ti_dac_chip *ti_dac = iio_priv(indio_dev); + + iio_device_unregister(indio_dev); + mutex_destroy(&ti_dac->lock); + regulator_disable(ti_dac->vref); + return 0; +} + +static const struct of_device_id ti_dac_of_id[] = { + { .compatible = "ti,dac5311" }, + { .compatible = "ti,dac6311" }, + { .compatible = "ti,dac7311" }, + { } +}; +MODULE_DEVICE_TABLE(of, ti_dac_of_id); + +static const struct spi_device_id ti_dac_spi_id[] = { + { "dac5311", ID_DAC5311 }, + { "dac6311", ID_DAC6311 }, + { "dac7311", ID_DAC7311 }, + { } +}; +MODULE_DEVICE_TABLE(spi, ti_dac_spi_id); + +static struct spi_driver ti_dac_driver = { + .driver = { + .name = "ti-dac7311", + .of_match_table = ti_dac_of_id, + }, + .probe = ti_dac_probe, + .remove = ti_dac_remove, + .id_table = ti_dac_spi_id, +}; +module_spi_driver(ti_dac_driver); + +MODULE_AUTHOR("Charles-Antoine Couret "); +MODULE_DESCRIPTION("Texas Instruments 8/10/12-bit 1-channel DAC driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/iio/imu/bmi160/bmi160.h b/drivers/iio/imu/bmi160/bmi160.h index e7b11e74fd1d..2351049d930b 100644 --- a/drivers/iio/imu/bmi160/bmi160.h +++ b/drivers/iio/imu/bmi160/bmi160.h @@ -6,6 +6,5 @@ extern const struct regmap_config bmi160_regmap_config; int bmi160_core_probe(struct device *dev, struct regmap *regmap, const char *name, bool use_spi); -void bmi160_core_remove(struct device *dev); #endif /* BMI160_H_ */ diff --git a/drivers/iio/imu/bmi160/bmi160_core.c b/drivers/iio/imu/bmi160/bmi160_core.c index c85659ca9507..b10330b0f93f 100644 --- a/drivers/iio/imu/bmi160/bmi160_core.c +++ b/drivers/iio/imu/bmi160/bmi160_core.c @@ -542,10 +542,12 @@ static int bmi160_chip_init(struct bmi160_data *data, bool use_spi) return 0; } -static void bmi160_chip_uninit(struct bmi160_data *data) +static void bmi160_chip_uninit(void *data) { - bmi160_set_mode(data, BMI160_GYRO, false); - bmi160_set_mode(data, BMI160_ACCEL, false); + struct bmi160_data *bmi_data = data; + + bmi160_set_mode(bmi_data, BMI160_GYRO, false); + bmi160_set_mode(bmi_data, BMI160_ACCEL, false); } int 
bmi160_core_probe(struct device *dev, struct regmap *regmap, @@ -567,6 +569,10 @@ int bmi160_core_probe(struct device *dev, struct regmap *regmap, if (ret < 0) return ret; + ret = devm_add_action_or_reset(dev, bmi160_chip_uninit, data); + if (ret < 0) + return ret; + if (!name && ACPI_HANDLE(dev)) name = bmi160_match_acpi_device(dev); @@ -577,35 +583,19 @@ int bmi160_core_probe(struct device *dev, struct regmap *regmap, indio_dev->modes = INDIO_DIRECT_MODE; indio_dev->info = &bmi160_info; - ret = iio_triggered_buffer_setup(indio_dev, NULL, - bmi160_trigger_handler, NULL); + ret = devm_iio_triggered_buffer_setup(dev, indio_dev, NULL, + bmi160_trigger_handler, NULL); if (ret < 0) - goto uninit; + return ret; - ret = iio_device_register(indio_dev); + ret = devm_iio_device_register(dev, indio_dev); if (ret < 0) - goto buffer_cleanup; + return ret; return 0; -buffer_cleanup: - iio_triggered_buffer_cleanup(indio_dev); -uninit: - bmi160_chip_uninit(data); - return ret; } EXPORT_SYMBOL_GPL(bmi160_core_probe); -void bmi160_core_remove(struct device *dev) -{ - struct iio_dev *indio_dev = dev_get_drvdata(dev); - struct bmi160_data *data = iio_priv(indio_dev); - - iio_device_unregister(indio_dev); - iio_triggered_buffer_cleanup(indio_dev); - bmi160_chip_uninit(data); -} -EXPORT_SYMBOL_GPL(bmi160_core_remove); - MODULE_AUTHOR("Daniel Baluta dev, regmap, name, false); } -static int bmi160_i2c_remove(struct i2c_client *client) -{ - bmi160_core_remove(&client->dev); - - return 0; -} - static const struct i2c_device_id bmi160_i2c_id[] = { {"bmi160", 0}, {} @@ -72,7 +65,6 @@ static struct i2c_driver bmi160_i2c_driver = { .of_match_table = of_match_ptr(bmi160_of_match), }, .probe = bmi160_i2c_probe, - .remove = bmi160_i2c_remove, .id_table = bmi160_i2c_id, }; module_i2c_driver(bmi160_i2c_driver); diff --git a/drivers/iio/imu/bmi160/bmi160_spi.c b/drivers/iio/imu/bmi160/bmi160_spi.c index d34dfdfd1a7d..e521ad14eeac 100644 --- a/drivers/iio/imu/bmi160/bmi160_spi.c +++ b/drivers/iio/imu/bmi160/bmi160_spi.c @@ -29,13 +29,6 @@ static int bmi160_spi_probe(struct spi_device *spi) return bmi160_core_probe(&spi->dev, regmap, id->name, true); } -static int bmi160_spi_remove(struct spi_device *spi) -{ - bmi160_core_remove(&spi->dev); - - return 0; -} - static const struct spi_device_id bmi160_spi_id[] = { {"bmi160", 0}, {} @@ -58,7 +51,6 @@ MODULE_DEVICE_TABLE(of, bmi160_of_match); static struct spi_driver bmi160_spi_driver = { .probe = bmi160_spi_probe, - .remove = bmi160_spi_remove, .id_table = bmi160_spi_id, .driver = { .acpi_match_table = ACPI_PTR(bmi160_acpi_match), diff --git a/drivers/iio/imu/st_lsm6dsx/Makefile b/drivers/iio/imu/st_lsm6dsx/Makefile index 35919febea2a..e5f733ce6e11 100644 --- a/drivers/iio/imu/st_lsm6dsx/Makefile +++ b/drivers/iio/imu/st_lsm6dsx/Makefile @@ -1,4 +1,5 @@ -st_lsm6dsx-y := st_lsm6dsx_core.o st_lsm6dsx_buffer.o +st_lsm6dsx-y := st_lsm6dsx_core.o st_lsm6dsx_buffer.o \ + st_lsm6dsx_shub.o obj-$(CONFIG_IIO_ST_LSM6DSX) += st_lsm6dsx.o obj-$(CONFIG_IIO_ST_LSM6DSX_I2C) += st_lsm6dsx_i2c.o diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h index ef73519a0fb6..d1d8d07a0714 100644 --- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h @@ -43,6 +43,24 @@ enum st_lsm6dsx_hw_id { * ST_LSM6DSX_TAGGED_SAMPLE_SIZE) #define ST_LSM6DSX_SHIFT_VAL(val, mask) (((val) << __ffs(mask)) & (mask)) +#define ST_LSM6DSX_CHANNEL(chan_type, addr, mod, scan_idx) \ +{ \ + .type = chan_type, \ + .address = addr, \ + .modified = 1, \ + 
.channel2 = mod, \ + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \ + BIT(IIO_CHAN_INFO_SCALE), \ + .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ), \ + .scan_index = scan_idx, \ + .scan_type = { \ + .sign = 's', \ + .realbits = 16, \ + .storagebits = 16, \ + .endianness = IIO_LE, \ + }, \ +} + struct st_lsm6dsx_reg { u8 addr; u8 mask; @@ -50,6 +68,28 @@ struct st_lsm6dsx_reg { struct st_lsm6dsx_hw; +struct st_lsm6dsx_odr { + u16 hz; + u8 val; +}; + +#define ST_LSM6DSX_ODR_LIST_SIZE 6 +struct st_lsm6dsx_odr_table_entry { + struct st_lsm6dsx_reg reg; + struct st_lsm6dsx_odr odr_avl[ST_LSM6DSX_ODR_LIST_SIZE]; +}; + +struct st_lsm6dsx_fs { + u32 gain; + u8 val; +}; + +#define ST_LSM6DSX_FS_LIST_SIZE 4 +struct st_lsm6dsx_fs_table_entry { + struct st_lsm6dsx_reg reg; + struct st_lsm6dsx_fs fs_avl[ST_LSM6DSX_FS_LIST_SIZE]; +}; + /** * struct st_lsm6dsx_fifo_ops - ST IMU FIFO settings * @read_fifo: Read FIFO callback. @@ -84,6 +124,70 @@ struct st_lsm6dsx_hw_ts_settings { struct st_lsm6dsx_reg decimator; }; +/** + * struct st_lsm6dsx_shub_settings - ST IMU hw i2c controller settings + * @page_mux: register page mux info (addr + mask). + * @master_en: master config register info (addr + mask). + * @pullup_en: i2c controller pull-up register info (addr + mask). + * @aux_sens: aux sensor register info (addr + mask). + * @wr_once: write_once register info (addr + mask). + * @shub_out: sensor hub first output register info. + * @slv0_addr: slave0 address in secondary page. + * @dw_slv0_addr: slave0 write register address in secondary page. + * @batch_en: Enable/disable FIFO batching. + */ +struct st_lsm6dsx_shub_settings { + struct st_lsm6dsx_reg page_mux; + struct st_lsm6dsx_reg master_en; + struct st_lsm6dsx_reg pullup_en; + struct st_lsm6dsx_reg aux_sens; + struct st_lsm6dsx_reg wr_once; + u8 shub_out; + u8 slv0_addr; + u8 dw_slv0_addr; + u8 batch_en; +}; + +enum st_lsm6dsx_ext_sensor_id { + ST_LSM6DSX_ID_MAGN, +}; + +/** + * struct st_lsm6dsx_ext_dev_settings - i2c controller slave settings + * @i2c_addr: I2c slave address list. + * @wai: Wai address info. + * @id: external sensor id. + * @odr: Output data rate of the sensor [Hz]. + * @gain: Configured sensor sensitivity. + * @temp_comp: Temperature compensation register info (addr + mask). + * @pwr_table: Power on register info (addr + mask). + * @off_canc: Offset cancellation register info (addr + mask). + * @bdu: Block data update register info (addr + mask). + * @out: Output register info. + */ +struct st_lsm6dsx_ext_dev_settings { + u8 i2c_addr[2]; + struct { + u8 addr; + u8 val; + } wai; + enum st_lsm6dsx_ext_sensor_id id; + struct st_lsm6dsx_odr_table_entry odr_table; + struct st_lsm6dsx_fs_table_entry fs_table; + struct st_lsm6dsx_reg temp_comp; + struct { + struct st_lsm6dsx_reg reg; + u8 off_val; + u8 on_val; + } pwr_table; + struct st_lsm6dsx_reg off_canc; + struct st_lsm6dsx_reg bdu; + struct { + u8 addr; + u8 len; + } out; +}; + /** * struct st_lsm6dsx_settings - ST IMU sensor settings * @wai: Sensor WhoAmI default value. @@ -93,6 +197,7 @@ struct st_lsm6dsx_hw_ts_settings { * @batch: List of FIFO batching register info (addr + mask). * @fifo_ops: Sensor hw FIFO parameters. * @ts_settings: Hw timer related settings. + * @shub_settings: i2c controller related settings. 
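 *
 * As a concrete example, the sensor-settings entry extended below fills
 * this member in as (values copied verbatim):
 *
 *	.shub_settings = {
 *		.page_mux     = { .addr = 0x01, .mask = BIT(6) },
 *		.master_en    = { .addr = 0x14, .mask = BIT(2) },
 *		.pullup_en    = { .addr = 0x14, .mask = BIT(3) },
 *		.aux_sens     = { .addr = 0x14, .mask = GENMASK(1, 0) },
 *		.wr_once      = { .addr = 0x14, .mask = BIT(6) },
 *		.shub_out     = 0x02,
 *		.slv0_addr    = 0x15,
 *		.dw_slv0_addr = 0x21,
 *		.batch_en     = BIT(3),
 *	},
 *
 * i.e. each hw i2c controller register is described as an (addr, mask)
 * pair, matching struct st_lsm6dsx_reg.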
*/ struct st_lsm6dsx_settings { u8 wai; @@ -102,11 +207,15 @@ struct st_lsm6dsx_settings { struct st_lsm6dsx_reg batch[ST_LSM6DSX_MAX_ID]; struct st_lsm6dsx_fifo_ops fifo_ops; struct st_lsm6dsx_hw_ts_settings ts_settings; + struct st_lsm6dsx_shub_settings shub_settings; }; enum st_lsm6dsx_sensor_id { - ST_LSM6DSX_ID_ACC, ST_LSM6DSX_ID_GYRO, + ST_LSM6DSX_ID_ACC, + ST_LSM6DSX_ID_EXT0, + ST_LSM6DSX_ID_EXT1, + ST_LSM6DSX_ID_EXT2, ST_LSM6DSX_ID_MAX, }; @@ -126,6 +235,7 @@ enum st_lsm6dsx_fifo_mode { * @sip: Number of samples in a given pattern. * @decimator: FIFO decimation factor. * @ts_ref: Sensor timestamp reference for hw one. + * @ext_info: Sensor settings if it is connected to i2c controller */ struct st_lsm6dsx_sensor { char name[32]; @@ -139,6 +249,11 @@ struct st_lsm6dsx_sensor { u8 sip; u8 decimator; s64 ts_ref; + + struct { + const struct st_lsm6dsx_ext_dev_settings *settings; + u8 addr; + } ext_info; }; /** @@ -148,6 +263,7 @@ struct st_lsm6dsx_sensor { * @irq: Device interrupt line (I2C or SPI). * @fifo_lock: Mutex to prevent concurrent access to the hw FIFO. * @conf_lock: Mutex to prevent concurrent FIFO configuration update. + * @page_lock: Mutex to prevent concurrent memory page configuration. * @fifo_mode: FIFO operating mode supported by the device. * @enable_mask: Enabled sensor bitmask. * @ts_sip: Total number of timestamp samples in a given pattern. @@ -163,6 +279,7 @@ struct st_lsm6dsx_hw { struct mutex fifo_lock; struct mutex conf_lock; + struct mutex page_lock; enum st_lsm6dsx_fifo_mode fifo_mode; u8 enable_mask; @@ -176,13 +293,15 @@ struct st_lsm6dsx_hw { const struct st_lsm6dsx_settings *settings; }; +static const unsigned long st_lsm6dsx_available_scan_masks[] = {0x7, 0x0}; extern const struct dev_pm_ops st_lsm6dsx_pm_ops; int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id, const char *name, struct regmap *regmap); -int st_lsm6dsx_sensor_enable(struct st_lsm6dsx_sensor *sensor); -int st_lsm6dsx_sensor_disable(struct st_lsm6dsx_sensor *sensor); +int st_lsm6dsx_sensor_set_enable(struct st_lsm6dsx_sensor *sensor, + bool enable); int st_lsm6dsx_fifo_setup(struct st_lsm6dsx_hw *hw); +int st_lsm6dsx_set_watermark(struct iio_dev *iio_dev, unsigned int val); int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark); int st_lsm6dsx_flush_fifo(struct st_lsm6dsx_hw *hw); @@ -191,5 +310,47 @@ int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw, int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw); int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw); int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u16 odr, u8 *val); +int st_lsm6dsx_shub_probe(struct st_lsm6dsx_hw *hw, const char *name); +int st_lsm6dsx_shub_set_enable(struct st_lsm6dsx_sensor *sensor, bool enable); +int st_lsm6dsx_set_page(struct st_lsm6dsx_hw *hw, bool enable); + +static inline int +st_lsm6dsx_update_bits_locked(struct st_lsm6dsx_hw *hw, unsigned int addr, + unsigned int mask, unsigned int val) +{ + int err; + + mutex_lock(&hw->page_lock); + err = regmap_update_bits(hw->regmap, addr, mask, val); + mutex_unlock(&hw->page_lock); + + return err; +} + +static inline int +st_lsm6dsx_read_locked(struct st_lsm6dsx_hw *hw, unsigned int addr, + void *val, unsigned int len) +{ + int err; + + mutex_lock(&hw->page_lock); + err = regmap_bulk_read(hw->regmap, addr, val, len); + mutex_unlock(&hw->page_lock); + + return err; +} + +static inline int +st_lsm6dsx_write_locked(struct st_lsm6dsx_hw *hw, unsigned int addr, + unsigned int val) +{ + int err; + + 
mutex_lock(&hw->page_lock); + err = regmap_write(hw->regmap, addr, val); + mutex_unlock(&hw->page_lock); + + return err; +} #endif /* ST_LSM6DSX_H */ diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c index b5263fc522ca..2c0d3763405a 100644 --- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c @@ -68,6 +68,9 @@ enum st_lsm6dsx_fifo_tag { ST_LSM6DSX_GYRO_TAG = 0x01, ST_LSM6DSX_ACC_TAG = 0x02, ST_LSM6DSX_TS_TAG = 0x04, + ST_LSM6DSX_EXT0_TAG = 0x0f, + ST_LSM6DSX_EXT1_TAG = 0x10, + ST_LSM6DSX_EXT2_TAG = 0x11, }; static const @@ -102,6 +105,9 @@ static void st_lsm6dsx_get_max_min_odr(struct st_lsm6dsx_hw *hw, *max_odr = 0, *min_odr = ~0; for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i]) + continue; + sensor = iio_priv(hw->iio_devs[i]); if (!(hw->enable_mask & BIT(sensor->id))) @@ -125,6 +131,9 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw) for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { const struct st_lsm6dsx_reg *dec_reg; + if (!hw->iio_devs[i]) + continue; + sensor = iio_priv(hw->iio_devs[i]); /* update fifo decimators and sample in pattern */ if (hw->enable_mask & BIT(sensor->id)) { @@ -142,8 +151,9 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw) if (dec_reg->addr) { int val = ST_LSM6DSX_SHIFT_VAL(data, dec_reg->mask); - err = regmap_update_bits(hw->regmap, dec_reg->addr, - dec_reg->mask, val); + err = st_lsm6dsx_update_bits_locked(hw, dec_reg->addr, + dec_reg->mask, + val); if (err < 0) return err; } @@ -162,8 +172,8 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw) int val, ts_dec = !!hw->ts_sip; val = ST_LSM6DSX_SHIFT_VAL(ts_dec, ts_dec_reg->mask); - err = regmap_update_bits(hw->regmap, ts_dec_reg->addr, - ts_dec_reg->mask, val); + err = st_lsm6dsx_update_bits_locked(hw, ts_dec_reg->addr, + ts_dec_reg->mask, val); } return err; } @@ -171,12 +181,12 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw) int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw, enum st_lsm6dsx_fifo_mode fifo_mode) { + unsigned int data; int err; - err = regmap_update_bits(hw->regmap, ST_LSM6DSX_REG_FIFO_MODE_ADDR, - ST_LSM6DSX_FIFO_MODE_MASK, - FIELD_PREP(ST_LSM6DSX_FIFO_MODE_MASK, - fifo_mode)); + data = FIELD_PREP(ST_LSM6DSX_FIFO_MODE_MASK, fifo_mode); + err = st_lsm6dsx_update_bits_locked(hw, ST_LSM6DSX_REG_FIFO_MODE_ADDR, + ST_LSM6DSX_FIFO_MODE_MASK, data); if (err < 0) return err; @@ -207,15 +217,15 @@ static int st_lsm6dsx_set_fifo_odr(struct st_lsm6dsx_sensor *sensor, data = 0; } val = ST_LSM6DSX_SHIFT_VAL(data, batch_reg->mask); - return regmap_update_bits(hw->regmap, batch_reg->addr, - batch_reg->mask, val); + return st_lsm6dsx_update_bits_locked(hw, batch_reg->addr, + batch_reg->mask, val); } else { data = hw->enable_mask ? 
ST_LSM6DSX_MAX_FIFO_ODR_VAL : 0; - return regmap_update_bits(hw->regmap, - ST_LSM6DSX_REG_FIFO_MODE_ADDR, - ST_LSM6DSX_FIFO_ODR_MASK, - FIELD_PREP(ST_LSM6DSX_FIFO_ODR_MASK, - data)); + return st_lsm6dsx_update_bits_locked(hw, + ST_LSM6DSX_REG_FIFO_MODE_ADDR, + ST_LSM6DSX_FIFO_ODR_MASK, + FIELD_PREP(ST_LSM6DSX_FIFO_ODR_MASK, + data)); } } @@ -231,6 +241,9 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark) return 0; for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i]) + continue; + cur_sensor = iio_priv(hw->iio_devs[i]); if (!(hw->enable_mask & BIT(cur_sensor->id))) @@ -246,19 +259,23 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark) fifo_watermark = (fifo_watermark / hw->sip) * hw->sip; fifo_watermark = fifo_watermark * hw->settings->fifo_ops.th_wl; + mutex_lock(&hw->page_lock); err = regmap_read(hw->regmap, hw->settings->fifo_ops.fifo_th.addr + 1, &data); if (err < 0) - return err; + goto out; fifo_th_mask = hw->settings->fifo_ops.fifo_th.mask; fifo_watermark = ((data << 8) & ~fifo_th_mask) | (fifo_watermark & fifo_th_mask); wdata = cpu_to_le16(fifo_watermark); - return regmap_bulk_write(hw->regmap, - hw->settings->fifo_ops.fifo_th.addr, - &wdata, sizeof(wdata)); + err = regmap_bulk_write(hw->regmap, + hw->settings->fifo_ops.fifo_th.addr, + &wdata, sizeof(wdata)); +out: + mutex_unlock(&hw->page_lock); + return err; } static int st_lsm6dsx_reset_hw_ts(struct st_lsm6dsx_hw *hw) @@ -267,12 +284,15 @@ static int st_lsm6dsx_reset_hw_ts(struct st_lsm6dsx_hw *hw) int i, err; /* reset hw ts counter */ - err = regmap_write(hw->regmap, ST_LSM6DSX_REG_TS_RESET_ADDR, - ST_LSM6DSX_TS_RESET_VAL); + err = st_lsm6dsx_write_locked(hw, ST_LSM6DSX_REG_TS_RESET_ADDR, + ST_LSM6DSX_TS_RESET_VAL); if (err < 0) return err; for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i]) + continue; + sensor = iio_priv(hw->iio_devs[i]); /* * store enable buffer timestamp as reference for @@ -297,8 +317,8 @@ static inline int st_lsm6dsx_read_block(struct st_lsm6dsx_hw *hw, u8 addr, while (read_len < data_len) { word_len = min_t(unsigned int, data_len - read_len, max_word_len); - err = regmap_bulk_read(hw->regmap, addr, data + read_len, - word_len); + err = st_lsm6dsx_read_locked(hw, addr, data + read_len, + word_len); if (err < 0) return err; read_len += word_len; @@ -328,9 +348,9 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw) __le16 fifo_status; s64 ts = 0; - err = regmap_bulk_read(hw->regmap, - hw->settings->fifo_ops.fifo_diff.addr, - &fifo_status, sizeof(fifo_status)); + err = st_lsm6dsx_read_locked(hw, + hw->settings->fifo_ops.fifo_diff.addr, + &fifo_status, sizeof(fifo_status)); if (err < 0) { dev_err(hw->dev, "failed to read fifo status (err=%d)\n", err); @@ -436,6 +456,55 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw) return read_len; } +static int +st_lsm6dsx_push_tagged_data(struct st_lsm6dsx_hw *hw, u8 tag, + u8 *data, s64 ts) +{ + struct st_lsm6dsx_sensor *sensor; + struct iio_dev *iio_dev; + + /* + * EXT_TAG are managed in FIFO fashion so ST_LSM6DSX_EXT0_TAG + * corresponds to the first enabled channel, ST_LSM6DSX_EXT1_TAG + * to the second one and ST_LSM6DSX_EXT2_TAG to the last enabled + * channel + */ + switch (tag) { + case ST_LSM6DSX_GYRO_TAG: + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_GYRO]; + break; + case ST_LSM6DSX_ACC_TAG: + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_ACC]; + break; + case ST_LSM6DSX_EXT0_TAG: + if (hw->enable_mask & BIT(ST_LSM6DSX_ID_EXT0)) + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_EXT0]; + else 
if (hw->enable_mask & BIT(ST_LSM6DSX_ID_EXT1)) + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_EXT1]; + else + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_EXT2]; + break; + case ST_LSM6DSX_EXT1_TAG: + if ((hw->enable_mask & BIT(ST_LSM6DSX_ID_EXT0)) && + (hw->enable_mask & BIT(ST_LSM6DSX_ID_EXT1))) + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_EXT1]; + else + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_EXT2]; + break; + case ST_LSM6DSX_EXT2_TAG: + iio_dev = hw->iio_devs[ST_LSM6DSX_ID_EXT2]; + break; + default: + return -EINVAL; + } + + sensor = iio_priv(iio_dev); + iio_push_to_buffers_with_timestamp(iio_dev, data, + ts + sensor->ts_ref); + + return 0; +} + /** * st_lsm6dsx_read_tagged_fifo() - LSM6DSO read FIFO routine * @hw: Pointer to instance of struct st_lsm6dsx_hw. @@ -455,9 +524,9 @@ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw) __le16 fifo_status; s64 ts = 0; - err = regmap_bulk_read(hw->regmap, - hw->settings->fifo_ops.fifo_diff.addr, - &fifo_status, sizeof(fifo_status)); + err = st_lsm6dsx_read_locked(hw, + hw->settings->fifo_ops.fifo_diff.addr, + &fifo_status, sizeof(fifo_status)); if (err < 0) { dev_err(hw->dev, "failed to read fifo status (err=%d)\n", err); @@ -491,8 +560,7 @@ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw) ST_LSM6DSX_SAMPLE_SIZE); tag = hw->buff[i] >> 3; - switch (tag) { - case ST_LSM6DSX_TS_TAG: + if (tag == ST_LSM6DSX_TS_TAG) { /* * hw timestamp is 4B long and it is stored * in FIFO according to this schema: @@ -509,19 +577,9 @@ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw) if (!reset_ts && ts >= 0xffff0000) reset_ts = true; ts *= ST_LSM6DSX_TS_SENSITIVITY; - break; - case ST_LSM6DSX_GYRO_TAG: - iio_push_to_buffers_with_timestamp( - hw->iio_devs[ST_LSM6DSX_ID_GYRO], - iio_buff, gyro_sensor->ts_ref + ts); - break; - case ST_LSM6DSX_ACC_TAG: - iio_push_to_buffers_with_timestamp( - hw->iio_devs[ST_LSM6DSX_ID_ACC], - iio_buff, acc_sensor->ts_ref + ts); - break; - default: - break; + } else { + st_lsm6dsx_push_tagged_data(hw, tag, iio_buff, + ts); } } } @@ -562,19 +620,21 @@ static int st_lsm6dsx_update_fifo(struct iio_dev *iio_dev, bool enable) goto out; } - if (enable) { - err = st_lsm6dsx_sensor_enable(sensor); + if (sensor->id == ST_LSM6DSX_ID_EXT0 || + sensor->id == ST_LSM6DSX_ID_EXT1 || + sensor->id == ST_LSM6DSX_ID_EXT2) { + err = st_lsm6dsx_shub_set_enable(sensor, enable); if (err < 0) goto out; } else { - err = st_lsm6dsx_sensor_disable(sensor); + err = st_lsm6dsx_sensor_set_enable(sensor, enable); if (err < 0) goto out; - } - err = st_lsm6dsx_set_fifo_odr(sensor, enable); - if (err < 0) - goto out; + err = st_lsm6dsx_set_fifo_odr(sensor, enable); + if (err < 0) + goto out; + } err = st_lsm6dsx_update_decimators(hw); if (err < 0) @@ -690,6 +750,9 @@ int st_lsm6dsx_fifo_setup(struct st_lsm6dsx_hw *hw) } for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i]) + continue; + buffer = devm_iio_kfifo_allocate(hw->dev); if (!buffer) return -ENOMEM; diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c index 2ad3c610e4b6..12e29dda9b98 100644 --- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c @@ -56,6 +56,7 @@ #define ST_LSM6DSX_REG_WHOAMI_ADDR 0x0f #define ST_LSM6DSX_REG_RESET_ADDR 0x12 #define ST_LSM6DSX_REG_RESET_MASK BIT(0) +#define ST_LSM6DSX_REG_BOOT_MASK BIT(7) #define ST_LSM6DSX_REG_BDU_ADDR 0x12 #define ST_LSM6DSX_REG_BDU_MASK BIT(6) #define ST_LSM6DSX_REG_INT2_ON_INT1_ADDR 0x13 @@ -87,17 +88,6 @@ #define ST_LSM6DSX_GYRO_FS_1000_GAIN 
IIO_DEGREE_TO_RAD(35000) #define ST_LSM6DSX_GYRO_FS_2000_GAIN IIO_DEGREE_TO_RAD(70000) -struct st_lsm6dsx_odr { - u16 hz; - u8 val; -}; - -#define ST_LSM6DSX_ODR_LIST_SIZE 6 -struct st_lsm6dsx_odr_table_entry { - struct st_lsm6dsx_reg reg; - struct st_lsm6dsx_odr odr_avl[ST_LSM6DSX_ODR_LIST_SIZE]; -}; - static const struct st_lsm6dsx_odr_table_entry st_lsm6dsx_odr_table[] = { [ST_LSM6DSX_ID_ACC] = { .reg = { @@ -125,17 +115,6 @@ static const struct st_lsm6dsx_odr_table_entry st_lsm6dsx_odr_table[] = { } }; -struct st_lsm6dsx_fs { - u32 gain; - u8 val; -}; - -#define ST_LSM6DSX_FS_LIST_SIZE 4 -struct st_lsm6dsx_fs_table_entry { - struct st_lsm6dsx_reg reg; - struct st_lsm6dsx_fs fs_avl[ST_LSM6DSX_FS_LIST_SIZE]; -}; - static const struct st_lsm6dsx_fs_table_entry st_lsm6dsx_fs_table[] = { [ST_LSM6DSX_ID_ACC] = { .reg = { @@ -341,27 +320,35 @@ static const struct st_lsm6dsx_settings st_lsm6dsx_sensor_settings[] = { .mask = GENMASK(7, 6), }, }, + .shub_settings = { + .page_mux = { + .addr = 0x01, + .mask = BIT(6), + }, + .master_en = { + .addr = 0x14, + .mask = BIT(2), + }, + .pullup_en = { + .addr = 0x14, + .mask = BIT(3), + }, + .aux_sens = { + .addr = 0x14, + .mask = GENMASK(1, 0), + }, + .wr_once = { + .addr = 0x14, + .mask = BIT(6), + }, + .shub_out = 0x02, + .slv0_addr = 0x15, + .dw_slv0_addr = 0x21, + .batch_en = BIT(3), + } }, }; -#define ST_LSM6DSX_CHANNEL(chan_type, addr, mod, scan_idx) \ -{ \ - .type = chan_type, \ - .address = addr, \ - .modified = 1, \ - .channel2 = mod, \ - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \ - BIT(IIO_CHAN_INFO_SCALE), \ - .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ), \ - .scan_index = scan_idx, \ - .scan_type = { \ - .sign = 's', \ - .realbits = 16, \ - .storagebits = 16, \ - .endianness = IIO_LE, \ - }, \ -} - static const struct iio_chan_spec st_lsm6dsx_acc_channels[] = { ST_LSM6DSX_CHANNEL(IIO_ACCEL, ST_LSM6DSX_REG_ACC_OUT_X_L_ADDR, IIO_MOD_X, 0), @@ -382,6 +369,21 @@ static const struct iio_chan_spec st_lsm6dsx_gyro_channels[] = { IIO_CHAN_SOFT_TIMESTAMP(3), }; +int st_lsm6dsx_set_page(struct st_lsm6dsx_hw *hw, bool enable) +{ + const struct st_lsm6dsx_shub_settings *hub_settings; + unsigned int data; + int err; + + hub_settings = &hw->settings->shub_settings; + data = ST_LSM6DSX_SHIFT_VAL(enable, hub_settings->page_mux.mask); + err = regmap_update_bits(hw->regmap, hub_settings->page_mux.addr, + hub_settings->page_mux.mask, data); + usleep_range(100, 150); + + return err; +} + static int st_lsm6dsx_check_whoami(struct st_lsm6dsx_hw *hw, int id) { int err, i, j, data; @@ -421,6 +423,7 @@ static int st_lsm6dsx_set_full_scale(struct st_lsm6dsx_sensor *sensor, { struct st_lsm6dsx_hw *hw = sensor->hw; const struct st_lsm6dsx_reg *reg; + unsigned int data; int i, err; u8 val; @@ -433,8 +436,8 @@ static int st_lsm6dsx_set_full_scale(struct st_lsm6dsx_sensor *sensor, val = st_lsm6dsx_fs_table[sensor->id].fs_avl[i].val; reg = &st_lsm6dsx_fs_table[sensor->id].reg; - err = regmap_update_bits(hw->regmap, reg->addr, reg->mask, - ST_LSM6DSX_SHIFT_VAL(val, reg->mask)); + data = ST_LSM6DSX_SHIFT_VAL(val, reg->mask); + err = st_lsm6dsx_update_bits_locked(hw, reg->addr, reg->mask, data); if (err < 0) return err; @@ -448,7 +451,11 @@ int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u16 odr, u8 *val) int i; for (i = 0; i < ST_LSM6DSX_ODR_LIST_SIZE; i++) - if (st_lsm6dsx_odr_table[sensor->id].odr_avl[i].hz == odr) + /* + * ext devices can run at different odr respect to + * accel sensor + */ + if (st_lsm6dsx_odr_table[sensor->id].odr_avl[i].hz 
>= odr) break; if (i == ST_LSM6DSX_ODR_LIST_SIZE) @@ -459,48 +466,86 @@ int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u16 odr, u8 *val) return 0; } -static int st_lsm6dsx_set_odr(struct st_lsm6dsx_sensor *sensor, u16 odr) +static u16 st_lsm6dsx_check_odr_dependency(struct st_lsm6dsx_hw *hw, u16 odr, + enum st_lsm6dsx_sensor_id id) { - struct st_lsm6dsx_hw *hw = sensor->hw; - const struct st_lsm6dsx_reg *reg; - int err; - u8 val; - - err = st_lsm6dsx_check_odr(sensor, odr, &val); - if (err < 0) - return err; - - reg = &st_lsm6dsx_odr_table[sensor->id].reg; - return regmap_update_bits(hw->regmap, reg->addr, reg->mask, - ST_LSM6DSX_SHIFT_VAL(val, reg->mask)); + struct st_lsm6dsx_sensor *ref = iio_priv(hw->iio_devs[id]); + + if (odr > 0) { + if (hw->enable_mask & BIT(id)) + return max_t(u16, ref->odr, odr); + else + return odr; + } else { + return (hw->enable_mask & BIT(id)) ? ref->odr : 0; + } } -int st_lsm6dsx_sensor_enable(struct st_lsm6dsx_sensor *sensor) +static int st_lsm6dsx_set_odr(struct st_lsm6dsx_sensor *sensor, u16 req_odr) { + struct st_lsm6dsx_sensor *ref_sensor = sensor; + struct st_lsm6dsx_hw *hw = sensor->hw; + const struct st_lsm6dsx_reg *reg; + unsigned int data; + u8 val = 0; int err; - err = st_lsm6dsx_set_odr(sensor, sensor->odr); - if (err < 0) - return err; + switch (sensor->id) { + case ST_LSM6DSX_ID_EXT0: + case ST_LSM6DSX_ID_EXT1: + case ST_LSM6DSX_ID_EXT2: + case ST_LSM6DSX_ID_ACC: { + u16 odr; + int i; + + /* + * i2c embedded controller relies on the accelerometer sensor as + * bus read/write trigger so we need to enable accel device + * at odr = max(accel_odr, ext_odr) in order to properly + * communicate with i2c slave devices + */ + ref_sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]); + for (i = ST_LSM6DSX_ID_ACC; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i] || i == sensor->id) + continue; + + odr = st_lsm6dsx_check_odr_dependency(hw, req_odr, i); + if (odr != req_odr) + /* device already configured */ + return 0; + } + break; + } + default: + break; + } - sensor->hw->enable_mask |= BIT(sensor->id); + if (req_odr > 0) { + err = st_lsm6dsx_check_odr(ref_sensor, req_odr, &val); + if (err < 0) + return err; + } - return 0; + reg = &st_lsm6dsx_odr_table[ref_sensor->id].reg; + data = ST_LSM6DSX_SHIFT_VAL(val, reg->mask); + return st_lsm6dsx_update_bits_locked(hw, reg->addr, reg->mask, data); } -int st_lsm6dsx_sensor_disable(struct st_lsm6dsx_sensor *sensor) +int st_lsm6dsx_sensor_set_enable(struct st_lsm6dsx_sensor *sensor, + bool enable) { struct st_lsm6dsx_hw *hw = sensor->hw; - const struct st_lsm6dsx_reg *reg; + u16 odr = enable ? 
sensor->odr : 0; int err; - reg = &st_lsm6dsx_odr_table[sensor->id].reg; - err = regmap_update_bits(hw->regmap, reg->addr, reg->mask, - ST_LSM6DSX_SHIFT_VAL(0, reg->mask)); + err = st_lsm6dsx_set_odr(sensor, odr); if (err < 0) return err; - sensor->hw->enable_mask &= ~BIT(sensor->id); + if (enable) + hw->enable_mask |= BIT(sensor->id); + else + hw->enable_mask &= ~BIT(sensor->id); return 0; } @@ -512,18 +557,18 @@ static int st_lsm6dsx_read_oneshot(struct st_lsm6dsx_sensor *sensor, int err, delay; __le16 data; - err = st_lsm6dsx_sensor_enable(sensor); + err = st_lsm6dsx_sensor_set_enable(sensor, true); if (err < 0) return err; delay = 1000000 / sensor->odr; usleep_range(delay, 2 * delay); - err = regmap_bulk_read(hw->regmap, addr, &data, sizeof(data)); + err = st_lsm6dsx_read_locked(hw, addr, &data, sizeof(data)); if (err < 0) return err; - st_lsm6dsx_sensor_disable(sensor); + st_lsm6dsx_sensor_set_enable(sensor, false); *val = (s16)le16_to_cpu(data); @@ -596,7 +641,7 @@ static int st_lsm6dsx_write_raw(struct iio_dev *iio_dev, return err; } -static int st_lsm6dsx_set_watermark(struct iio_dev *iio_dev, unsigned int val) +int st_lsm6dsx_set_watermark(struct iio_dev *iio_dev, unsigned int val) { struct st_lsm6dsx_sensor *sensor = iio_priv(iio_dev); struct st_lsm6dsx_hw *hw = sensor->hw; @@ -692,8 +737,6 @@ static const struct iio_info st_lsm6dsx_gyro_info = { .hwfifo_set_watermark = st_lsm6dsx_set_watermark, }; -static const unsigned long st_lsm6dsx_available_scan_masks[] = {0x7, 0x0}; - static int st_lsm6dsx_of_get_drdy_pin(struct st_lsm6dsx_hw *hw, int *drdy_pin) { struct device_node *np = hw->dev->of_node; @@ -732,6 +775,51 @@ static int st_lsm6dsx_get_drdy_reg(struct st_lsm6dsx_hw *hw, u8 *drdy_reg) return err; } +static int st_lsm6dsx_init_shub(struct st_lsm6dsx_hw *hw) +{ + const struct st_lsm6dsx_shub_settings *hub_settings; + struct device_node *np = hw->dev->of_node; + struct st_sensors_platform_data *pdata; + unsigned int data; + int err = 0; + + hub_settings = &hw->settings->shub_settings; + + pdata = (struct st_sensors_platform_data *)hw->dev->platform_data; + if ((np && of_property_read_bool(np, "st,pullups")) || + (pdata && pdata->pullups)) { + err = st_lsm6dsx_set_page(hw, true); + if (err < 0) + return err; + + data = ST_LSM6DSX_SHIFT_VAL(1, hub_settings->pullup_en.mask); + err = regmap_update_bits(hw->regmap, + hub_settings->pullup_en.addr, + hub_settings->pullup_en.mask, data); + + st_lsm6dsx_set_page(hw, false); + + if (err < 0) + return err; + } + + if (hub_settings->aux_sens.addr) { + /* configure aux sensors */ + err = st_lsm6dsx_set_page(hw, true); + if (err < 0) + return err; + + data = ST_LSM6DSX_SHIFT_VAL(3, hub_settings->aux_sens.mask); + err = regmap_update_bits(hw->regmap, + hub_settings->aux_sens.addr, + hub_settings->aux_sens.mask, data); + + st_lsm6dsx_set_page(hw, false); + } + + return err; +} + static int st_lsm6dsx_init_hw_timer(struct st_lsm6dsx_hw *hw) { const struct st_lsm6dsx_hw_ts_settings *ts_settings; @@ -775,12 +863,23 @@ static int st_lsm6dsx_init_device(struct st_lsm6dsx_hw *hw) u8 drdy_int_reg; int err; - err = regmap_write(hw->regmap, ST_LSM6DSX_REG_RESET_ADDR, - ST_LSM6DSX_REG_RESET_MASK); + /* device sw reset */ + err = regmap_update_bits(hw->regmap, ST_LSM6DSX_REG_RESET_ADDR, + ST_LSM6DSX_REG_RESET_MASK, + FIELD_PREP(ST_LSM6DSX_REG_RESET_MASK, 1)); + if (err < 0) + return err; + + msleep(50); + + /* reload trimming parameter */ + err = regmap_update_bits(hw->regmap, ST_LSM6DSX_REG_RESET_ADDR, + ST_LSM6DSX_REG_BOOT_MASK, + 
FIELD_PREP(ST_LSM6DSX_REG_BOOT_MASK, 1)); if (err < 0) return err; - msleep(200); + msleep(50); /* enable Block Data Update */ err = regmap_update_bits(hw->regmap, ST_LSM6DSX_REG_BDU_ADDR, @@ -801,6 +900,10 @@ static int st_lsm6dsx_init_device(struct st_lsm6dsx_hw *hw) if (err < 0) return err; + err = st_lsm6dsx_init_shub(hw); + if (err < 0) + return err; + return st_lsm6dsx_init_hw_timer(hw); } @@ -854,6 +957,7 @@ static struct iio_dev *st_lsm6dsx_alloc_iiodev(struct st_lsm6dsx_hw *hw, int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id, const char *name, struct regmap *regmap) { + const struct st_lsm6dsx_shub_settings *hub_settings; struct st_lsm6dsx_hw *hw; int i, err; @@ -865,6 +969,7 @@ int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id, const char *name, mutex_init(&hw->fifo_lock); mutex_init(&hw->conf_lock); + mutex_init(&hw->page_lock); hw->buff = devm_kzalloc(dev, ST_LSM6DSX_BUFF_SIZE, GFP_KERNEL); if (!hw->buff) @@ -878,7 +983,7 @@ int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id, const char *name, if (err < 0) return err; - for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + for (i = 0; i < ST_LSM6DSX_ID_EXT0; i++) { hw->iio_devs[i] = st_lsm6dsx_alloc_iiodev(hw, i, name); if (!hw->iio_devs[i]) return -ENOMEM; @@ -888,6 +993,13 @@ int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id, const char *name, if (err < 0) return err; + hub_settings = &hw->settings->shub_settings; + if (hub_settings->master_en.addr) { + err = st_lsm6dsx_shub_probe(hw, name); + if (err < 0) + return err; + } + if (hw->irq > 0) { err = st_lsm6dsx_fifo_setup(hw); if (err < 0) @@ -895,6 +1007,9 @@ int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id, const char *name, } for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i]) + continue; + err = devm_iio_device_register(hw->dev, hw->iio_devs[i]); if (err) return err; @@ -909,16 +1024,21 @@ static int __maybe_unused st_lsm6dsx_suspend(struct device *dev) struct st_lsm6dsx_hw *hw = dev_get_drvdata(dev); struct st_lsm6dsx_sensor *sensor; const struct st_lsm6dsx_reg *reg; + unsigned int data; int i, err = 0; for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i]) + continue; + sensor = iio_priv(hw->iio_devs[i]); if (!(hw->enable_mask & BIT(sensor->id))) continue; reg = &st_lsm6dsx_odr_table[sensor->id].reg; - err = regmap_update_bits(hw->regmap, reg->addr, reg->mask, - ST_LSM6DSX_SHIFT_VAL(0, reg->mask)); + data = ST_LSM6DSX_SHIFT_VAL(0, reg->mask); + err = st_lsm6dsx_update_bits_locked(hw, reg->addr, reg->mask, + data); if (err < 0) return err; } @@ -936,6 +1056,9 @@ static int __maybe_unused st_lsm6dsx_resume(struct device *dev) int i, err = 0; for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + if (!hw->iio_devs[i]) + continue; + sensor = iio_priv(hw->iio_devs[i]); if (!(hw->enable_mask & BIT(sensor->id))) continue; diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c new file mode 100644 index 000000000000..8e47dccdd40f --- /dev/null +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c @@ -0,0 +1,779 @@ +/* + * STMicroelectronics st_lsm6dsx i2c controller driver + * + * i2c controller embedded in lsm6dx series can connect up to four + * slave devices using accelerometer sensor as trigger for i2c + * read/write operations. 
Current implementation relies on SLV0 channel + * for slave configuration and SLV{1,2,3} to read data and push them into + * the hw FIFO + * + * Copyright (C) 2018 Lorenzo Bianconi + * + * Permission to use, copy, modify, and/or distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + * + */ +#include +#include +#include +#include +#include + +#include "st_lsm6dsx.h" + +#define ST_LSM6DSX_MAX_SLV_NUM 3 +#define ST_LSM6DSX_SLV_ADDR(n, base) ((base) + (n) * 3) +#define ST_LSM6DSX_SLV_SUB_ADDR(n, base) ((base) + 1 + (n) * 3) +#define ST_LSM6DSX_SLV_CONFIG(n, base) ((base) + 2 + (n) * 3) + +#define ST_LS6DSX_READ_OP_MASK GENMASK(2, 0) + +static const struct st_lsm6dsx_ext_dev_settings st_lsm6dsx_ext_dev_table[] = { + /* LIS2MDL */ + { + .i2c_addr = { 0x1e }, + .wai = { + .addr = 0x4f, + .val = 0x40, + }, + .id = ST_LSM6DSX_ID_MAGN, + .odr_table = { + .reg = { + .addr = 0x60, + .mask = GENMASK(3, 2), + }, + .odr_avl[0] = { 10, 0x0 }, + .odr_avl[1] = { 20, 0x1 }, + .odr_avl[2] = { 50, 0x2 }, + .odr_avl[3] = { 100, 0x3 }, + }, + .fs_table = { + .fs_avl[0] = { + .gain = 1500, + .val = 0x0, + }, /* 1500 uG/LSB */ + }, + .temp_comp = { + .addr = 0x60, + .mask = BIT(7), + }, + .pwr_table = { + .reg = { + .addr = 0x60, + .mask = GENMASK(1, 0), + }, + .off_val = 0x2, + .on_val = 0x0, + }, + .off_canc = { + .addr = 0x61, + .mask = BIT(1), + }, + .bdu = { + .addr = 0x62, + .mask = BIT(4), + }, + .out = { + .addr = 0x68, + .len = 6, + }, + }, +}; + +static void st_lsm6dsx_shub_wait_complete(struct st_lsm6dsx_hw *hw) +{ + struct st_lsm6dsx_sensor *sensor; + + sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]); + msleep((2000U / sensor->odr) + 1); +} + +/** + * st_lsm6dsx_shub_read_reg - read i2c controller register + * + * Read st_lsm6dsx i2c controller register + */ +static int st_lsm6dsx_shub_read_reg(struct st_lsm6dsx_hw *hw, u8 addr, + u8 *data, int len) +{ + const struct st_lsm6dsx_shub_settings *hub_settings; + int err; + + mutex_lock(&hw->page_lock); + + hub_settings = &hw->settings->shub_settings; + err = st_lsm6dsx_set_page(hw, true); + if (err < 0) + goto out; + + err = regmap_bulk_read(hw->regmap, addr, data, len); + + st_lsm6dsx_set_page(hw, false); +out: + mutex_unlock(&hw->page_lock); + + return err; +} + +/** + * st_lsm6dsx_shub_write_reg - write i2c controller register + * + * Write st_lsm6dsx i2c controller register + */ +static int st_lsm6dsx_shub_write_reg(struct st_lsm6dsx_hw *hw, u8 addr, + u8 *data, int len) +{ + int err; + + mutex_lock(&hw->page_lock); + err = st_lsm6dsx_set_page(hw, true); + if (err < 0) + goto out; + + err = regmap_bulk_write(hw->regmap, addr, data, len); + + st_lsm6dsx_set_page(hw, false); +out: + mutex_unlock(&hw->page_lock); + + return err; +} + +static int +st_lsm6dsx_shub_write_reg_with_mask(struct st_lsm6dsx_hw *hw, u8 addr, + u8 mask, u8 val) +{ + int err; + + mutex_lock(&hw->page_lock); + err = st_lsm6dsx_set_page(hw, true); + 
if (err < 0) + goto out; + + err = regmap_update_bits(hw->regmap, addr, mask, val); + + st_lsm6dsx_set_page(hw, false); +out: + mutex_unlock(&hw->page_lock); + + return err; +} + +static int st_lsm6dsx_shub_master_enable(struct st_lsm6dsx_sensor *sensor, + bool enable) +{ + const struct st_lsm6dsx_shub_settings *hub_settings; + struct st_lsm6dsx_hw *hw = sensor->hw; + unsigned int data; + int err; + + /* enable acc sensor as trigger */ + err = st_lsm6dsx_sensor_set_enable(sensor, enable); + if (err < 0) + return err; + + mutex_lock(&hw->page_lock); + + hub_settings = &hw->settings->shub_settings; + err = st_lsm6dsx_set_page(hw, true); + if (err < 0) + goto out; + + data = ST_LSM6DSX_SHIFT_VAL(enable, hub_settings->master_en.mask); + err = regmap_update_bits(hw->regmap, hub_settings->master_en.addr, + hub_settings->master_en.mask, data); + + st_lsm6dsx_set_page(hw, false); +out: + mutex_unlock(&hw->page_lock); + + return err; +} + +/** + * st_lsm6dsx_shub_read - read data from slave device register + * + * Read data from slave device register. SLV0 is used for + * one-shot read operation + */ +static int +st_lsm6dsx_shub_read(struct st_lsm6dsx_sensor *sensor, u8 addr, + u8 *data, int len) +{ + const struct st_lsm6dsx_shub_settings *hub_settings; + struct st_lsm6dsx_hw *hw = sensor->hw; + u8 config[3], slv_addr; + int err; + + hub_settings = &hw->settings->shub_settings; + slv_addr = ST_LSM6DSX_SLV_ADDR(0, hub_settings->slv0_addr); + + config[0] = (sensor->ext_info.addr << 1) | 1; + config[1] = addr; + config[2] = len & ST_LS6DSX_READ_OP_MASK; + + err = st_lsm6dsx_shub_write_reg(hw, slv_addr, config, + sizeof(config)); + if (err < 0) + return err; + + err = st_lsm6dsx_shub_master_enable(sensor, true); + if (err < 0) + return err; + + st_lsm6dsx_shub_wait_complete(hw); + + err = st_lsm6dsx_shub_read_reg(hw, hub_settings->shub_out, data, + len & ST_LS6DSX_READ_OP_MASK); + + st_lsm6dsx_shub_master_enable(sensor, false); + + memset(config, 0, sizeof(config)); + return st_lsm6dsx_shub_write_reg(hw, slv_addr, config, + sizeof(config)); +} + +/** + * st_lsm6dsx_shub_write - write data to slave device register + * + * Write data from slave device register. 
SLV0 is used for + * one-shot write operation + */ +static int +st_lsm6dsx_shub_write(struct st_lsm6dsx_sensor *sensor, u8 addr, + u8 *data, int len) +{ + const struct st_lsm6dsx_shub_settings *hub_settings; + struct st_lsm6dsx_hw *hw = sensor->hw; + u8 config[2], slv_addr; + int err, i; + + hub_settings = &hw->settings->shub_settings; + if (hub_settings->wr_once.addr) { + unsigned int data; + + data = ST_LSM6DSX_SHIFT_VAL(1, hub_settings->wr_once.mask); + err = st_lsm6dsx_shub_write_reg_with_mask(hw, + hub_settings->wr_once.addr, + hub_settings->wr_once.mask, + data); + if (err < 0) + return err; + } + + slv_addr = ST_LSM6DSX_SLV_ADDR(0, hub_settings->slv0_addr); + config[0] = sensor->ext_info.addr << 1; + for (i = 0 ; i < len; i++) { + config[1] = addr + i; + + err = st_lsm6dsx_shub_write_reg(hw, slv_addr, config, + sizeof(config)); + if (err < 0) + return err; + + err = st_lsm6dsx_shub_write_reg(hw, hub_settings->dw_slv0_addr, + &data[i], 1); + if (err < 0) + return err; + + err = st_lsm6dsx_shub_master_enable(sensor, true); + if (err < 0) + return err; + + st_lsm6dsx_shub_wait_complete(hw); + + st_lsm6dsx_shub_master_enable(sensor, false); + } + + memset(config, 0, sizeof(config)); + return st_lsm6dsx_shub_write_reg(hw, slv_addr, config, sizeof(config)); +} + +static int +st_lsm6dsx_shub_write_with_mask(struct st_lsm6dsx_sensor *sensor, + u8 addr, u8 mask, u8 val) +{ + int err; + u8 data; + + err = st_lsm6dsx_shub_read(sensor, addr, &data, sizeof(data)); + if (err < 0) + return err; + + data = ((data & ~mask) | (val << __ffs(mask) & mask)); + + return st_lsm6dsx_shub_write(sensor, addr, &data, sizeof(data)); +} + +static int +st_lsm6dsx_shub_get_odr_val(struct st_lsm6dsx_sensor *sensor, + u16 odr, u16 *val) +{ + const struct st_lsm6dsx_ext_dev_settings *settings; + int i; + + settings = sensor->ext_info.settings; + for (i = 0; i < ST_LSM6DSX_ODR_LIST_SIZE; i++) + if (settings->odr_table.odr_avl[i].hz == odr) + break; + + if (i == ST_LSM6DSX_ODR_LIST_SIZE) + return -EINVAL; + + *val = settings->odr_table.odr_avl[i].val; + return 0; +} + +static int +st_lsm6dsx_shub_set_odr(struct st_lsm6dsx_sensor *sensor, u16 odr) +{ + const struct st_lsm6dsx_ext_dev_settings *settings; + u16 val; + int err; + + err = st_lsm6dsx_shub_get_odr_val(sensor, odr, &val); + if (err < 0) + return err; + + settings = sensor->ext_info.settings; + return st_lsm6dsx_shub_write_with_mask(sensor, + settings->odr_table.reg.addr, + settings->odr_table.reg.mask, + val); +} + +/* use SLV{1,2,3} for FIFO read operations */ +static int +st_lsm6dsx_shub_config_channels(struct st_lsm6dsx_sensor *sensor, + bool enable) +{ + const struct st_lsm6dsx_shub_settings *hub_settings; + const struct st_lsm6dsx_ext_dev_settings *settings; + u8 config[9] = {}, enable_mask, slv_addr; + struct st_lsm6dsx_hw *hw = sensor->hw; + struct st_lsm6dsx_sensor *cur_sensor; + int i, j = 0; + + hub_settings = &hw->settings->shub_settings; + if (enable) + enable_mask = hw->enable_mask | BIT(sensor->id); + else + enable_mask = hw->enable_mask & ~BIT(sensor->id); + + for (i = ST_LSM6DSX_ID_EXT0; i <= ST_LSM6DSX_ID_EXT2; i++) { + if (!hw->iio_devs[i]) + continue; + + cur_sensor = iio_priv(hw->iio_devs[i]); + if (!(enable_mask & BIT(cur_sensor->id))) + continue; + + settings = cur_sensor->ext_info.settings; + config[j] = (sensor->ext_info.addr << 1) | 1; + config[j + 1] = settings->out.addr; + config[j + 2] = (settings->out.len & ST_LS6DSX_READ_OP_MASK) | + hub_settings->batch_en; + j += 3; + } + + slv_addr = ST_LSM6DSX_SLV_ADDR(1, 
hub_settings->slv0_addr); + return st_lsm6dsx_shub_write_reg(hw, slv_addr, config, + sizeof(config)); +} + +int st_lsm6dsx_shub_set_enable(struct st_lsm6dsx_sensor *sensor, bool enable) +{ + const struct st_lsm6dsx_ext_dev_settings *settings; + int err; + + err = st_lsm6dsx_shub_config_channels(sensor, enable); + if (err < 0) + return err; + + settings = sensor->ext_info.settings; + if (enable) { + err = st_lsm6dsx_shub_set_odr(sensor, sensor->odr); + if (err < 0) + return err; + } else { + err = st_lsm6dsx_shub_write_with_mask(sensor, + settings->odr_table.reg.addr, + settings->odr_table.reg.mask, 0); + if (err < 0) + return err; + } + + if (settings->pwr_table.reg.addr) { + u8 val; + + val = enable ? settings->pwr_table.on_val + : settings->pwr_table.off_val; + err = st_lsm6dsx_shub_write_with_mask(sensor, + settings->pwr_table.reg.addr, + settings->pwr_table.reg.mask, val); + if (err < 0) + return err; + } + + return st_lsm6dsx_shub_master_enable(sensor, enable); +} + +static int +st_lsm6dsx_shub_read_oneshot(struct st_lsm6dsx_sensor *sensor, + struct iio_chan_spec const *ch, + int *val) +{ + int err, delay, len; + u8 data[4]; + + err = st_lsm6dsx_shub_set_enable(sensor, true); + if (err < 0) + return err; + + delay = 1000000 / sensor->odr; + usleep_range(delay, 2 * delay); + + len = min_t(int, sizeof(data), ch->scan_type.realbits >> 3); + err = st_lsm6dsx_shub_read(sensor, ch->address, data, len); + + st_lsm6dsx_shub_set_enable(sensor, false); + + if (err < 0) + return err; + + switch (len) { + case 2: + *val = (s16)le16_to_cpu(*((__le16 *)data)); + break; + default: + return -EINVAL; + } + + return IIO_VAL_INT; +} + +static int +st_lsm6dsx_shub_read_raw(struct iio_dev *iio_dev, + struct iio_chan_spec const *ch, + int *val, int *val2, long mask) +{ + struct st_lsm6dsx_sensor *sensor = iio_priv(iio_dev); + int ret; + + switch (mask) { + case IIO_CHAN_INFO_RAW: + ret = iio_device_claim_direct_mode(iio_dev); + if (ret) + break; + + ret = st_lsm6dsx_shub_read_oneshot(sensor, ch, val); + iio_device_release_direct_mode(iio_dev); + break; + case IIO_CHAN_INFO_SAMP_FREQ: + *val = sensor->odr; + ret = IIO_VAL_INT; + break; + case IIO_CHAN_INFO_SCALE: + *val = 0; + *val2 = sensor->gain; + ret = IIO_VAL_INT_PLUS_MICRO; + break; + default: + ret = -EINVAL; + break; + } + + return ret; +} + +static int +st_lsm6dsx_shub_write_raw(struct iio_dev *iio_dev, + struct iio_chan_spec const *chan, + int val, int val2, long mask) +{ + struct st_lsm6dsx_sensor *sensor = iio_priv(iio_dev); + int err; + + err = iio_device_claim_direct_mode(iio_dev); + if (err) + return err; + + switch (mask) { + case IIO_CHAN_INFO_SAMP_FREQ: { + u16 data; + + err = st_lsm6dsx_shub_get_odr_val(sensor, val, &data); + if (!err) + sensor->odr = val; + break; + } + default: + err = -EINVAL; + break; + } + + iio_device_release_direct_mode(iio_dev); + + return err; +} + +static ssize_t +st_lsm6dsx_shub_sampling_freq_avail(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct st_lsm6dsx_sensor *sensor = iio_priv(dev_get_drvdata(dev)); + const struct st_lsm6dsx_ext_dev_settings *settings; + int i, len = 0; + + settings = sensor->ext_info.settings; + for (i = 0; i < ST_LSM6DSX_ODR_LIST_SIZE; i++) { + u16 val = settings->odr_table.odr_avl[i].hz; + + if (val > 0) + len += scnprintf(buf + len, PAGE_SIZE - len, "%d ", + val); + } + buf[len - 1] = '\n'; + + return len; +} + +static ssize_t st_lsm6dsx_shub_scale_avail(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct st_lsm6dsx_sensor *sensor 
= iio_priv(dev_get_drvdata(dev)); + const struct st_lsm6dsx_ext_dev_settings *settings; + int i, len = 0; + + settings = sensor->ext_info.settings; + for (i = 0; i < ST_LSM6DSX_FS_LIST_SIZE; i++) { + u16 val = settings->fs_table.fs_avl[i].gain; + + if (val > 0) + len += scnprintf(buf + len, PAGE_SIZE - len, "0.%06u ", + val); + } + buf[len - 1] = '\n'; + + return len; +} + +static IIO_DEV_ATTR_SAMP_FREQ_AVAIL(st_lsm6dsx_shub_sampling_freq_avail); +static IIO_DEVICE_ATTR(in_scale_available, 0444, + st_lsm6dsx_shub_scale_avail, NULL, 0); +static struct attribute *st_lsm6dsx_ext_attributes[] = { + &iio_dev_attr_sampling_frequency_available.dev_attr.attr, + &iio_dev_attr_in_scale_available.dev_attr.attr, + NULL, +}; + +static const struct attribute_group st_lsm6dsx_ext_attribute_group = { + .attrs = st_lsm6dsx_ext_attributes, +}; + +static const struct iio_info st_lsm6dsx_ext_info = { + .attrs = &st_lsm6dsx_ext_attribute_group, + .read_raw = st_lsm6dsx_shub_read_raw, + .write_raw = st_lsm6dsx_shub_write_raw, + .hwfifo_set_watermark = st_lsm6dsx_set_watermark, +}; + +static struct iio_dev * +st_lsm6dsx_shub_alloc_iiodev(struct st_lsm6dsx_hw *hw, + enum st_lsm6dsx_sensor_id id, + const struct st_lsm6dsx_ext_dev_settings *info, + u8 i2c_addr, const char *name) +{ + struct iio_chan_spec *ext_channels; + struct st_lsm6dsx_sensor *sensor; + struct iio_dev *iio_dev; + + iio_dev = devm_iio_device_alloc(hw->dev, sizeof(*sensor)); + if (!iio_dev) + return NULL; + + iio_dev->modes = INDIO_DIRECT_MODE; + iio_dev->dev.parent = hw->dev; + iio_dev->info = &st_lsm6dsx_ext_info; + + sensor = iio_priv(iio_dev); + sensor->id = id; + sensor->hw = hw; + sensor->odr = info->odr_table.odr_avl[0].hz; + sensor->gain = info->fs_table.fs_avl[0].gain; + sensor->ext_info.settings = info; + sensor->ext_info.addr = i2c_addr; + sensor->watermark = 1; + + switch (info->id) { + case ST_LSM6DSX_ID_MAGN: { + const struct iio_chan_spec magn_channels[] = { + ST_LSM6DSX_CHANNEL(IIO_MAGN, info->out.addr, + IIO_MOD_X, 0), + ST_LSM6DSX_CHANNEL(IIO_MAGN, info->out.addr + 2, + IIO_MOD_Y, 1), + ST_LSM6DSX_CHANNEL(IIO_MAGN, info->out.addr + 4, + IIO_MOD_Z, 2), + IIO_CHAN_SOFT_TIMESTAMP(3), + }; + + ext_channels = devm_kzalloc(hw->dev, sizeof(magn_channels), + GFP_KERNEL); + if (!ext_channels) + return NULL; + + memcpy(ext_channels, magn_channels, sizeof(magn_channels)); + iio_dev->available_scan_masks = st_lsm6dsx_available_scan_masks; + iio_dev->channels = ext_channels; + iio_dev->num_channels = ARRAY_SIZE(magn_channels); + + scnprintf(sensor->name, sizeof(sensor->name), "%s_magn", + name); + break; + } + default: + return NULL; + } + iio_dev->name = sensor->name; + + return iio_dev; +} + +static int st_lsm6dsx_shub_init_device(struct st_lsm6dsx_sensor *sensor) +{ + const struct st_lsm6dsx_ext_dev_settings *settings; + int err; + + settings = sensor->ext_info.settings; + if (settings->bdu.addr) { + err = st_lsm6dsx_shub_write_with_mask(sensor, + settings->bdu.addr, + settings->bdu.mask, 1); + if (err < 0) + return err; + } + + if (settings->temp_comp.addr) { + err = st_lsm6dsx_shub_write_with_mask(sensor, + settings->temp_comp.addr, + settings->temp_comp.mask, 1); + if (err < 0) + return err; + } + + if (settings->off_canc.addr) { + err = st_lsm6dsx_shub_write_with_mask(sensor, + settings->off_canc.addr, + settings->off_canc.mask, 1); + if (err < 0) + return err; + } + + return 0; +} + +static int +st_lsm6dsx_shub_check_wai(struct st_lsm6dsx_hw *hw, u8 *i2c_addr, + const struct st_lsm6dsx_ext_dev_settings *settings) +{ + const struct 
st_lsm6dsx_shub_settings *hub_settings; + struct st_lsm6dsx_sensor *sensor; + u8 config[3], data, slv_addr; + bool found = false; + int i, err; + + hub_settings = &hw->settings->shub_settings; + slv_addr = ST_LSM6DSX_SLV_ADDR(0, hub_settings->slv0_addr); + sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]); + + for (i = 0; i < ARRAY_SIZE(settings->i2c_addr); i++) { + if (!settings->i2c_addr[i]) + continue; + + /* read wai slave register */ + config[0] = (settings->i2c_addr[i] << 1) | 0x1; + config[1] = settings->wai.addr; + config[2] = 0x1; + + err = st_lsm6dsx_shub_write_reg(hw, slv_addr, config, + sizeof(config)); + if (err < 0) + return err; + + err = st_lsm6dsx_shub_master_enable(sensor, true); + if (err < 0) + return err; + + st_lsm6dsx_shub_wait_complete(hw); + + err = st_lsm6dsx_shub_read_reg(hw, + hub_settings->shub_out, + &data, sizeof(data)); + + st_lsm6dsx_shub_master_enable(sensor, false); + + if (err < 0) + return err; + + if (data != settings->wai.val) + continue; + + *i2c_addr = settings->i2c_addr[i]; + found = true; + break; + } + + /* reset SLV0 channel */ + memset(config, 0, sizeof(config)); + err = st_lsm6dsx_shub_write_reg(hw, slv_addr, config, + sizeof(config)); + if (err < 0) + return err; + + return found ? 0 : -ENODEV; +} + +int st_lsm6dsx_shub_probe(struct st_lsm6dsx_hw *hw, const char *name) +{ + enum st_lsm6dsx_sensor_id id = ST_LSM6DSX_ID_EXT0; + struct st_lsm6dsx_sensor *sensor; + int err, i, num_ext_dev = 0; + u8 i2c_addr = 0; + + for (i = 0; i < ARRAY_SIZE(st_lsm6dsx_ext_dev_table); i++) { + err = st_lsm6dsx_shub_check_wai(hw, &i2c_addr, + &st_lsm6dsx_ext_dev_table[i]); + if (err == -ENODEV) + continue; + else if (err < 0) + return err; + + hw->iio_devs[id] = st_lsm6dsx_shub_alloc_iiodev(hw, id, + &st_lsm6dsx_ext_dev_table[i], + i2c_addr, name); + if (!hw->iio_devs[id]) + return -ENOMEM; + + sensor = iio_priv(hw->iio_devs[id]); + err = st_lsm6dsx_shub_init_device(sensor); + if (err < 0) + return err; + + if (++num_ext_dev >= ST_LSM6DSX_MAX_SLV_NUM) + break; + id++; + } + + return 0; +} diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c index a062cfddc5af..4f5cd9f60870 100644 --- a/drivers/iio/industrialio-core.c +++ b/drivers/iio/industrialio-core.c @@ -1671,6 +1671,9 @@ int __iio_device_register(struct iio_dev *indio_dev, struct module *this_mod) if (ret < 0) return ret; + if (!indio_dev->info) + return -EINVAL; + /* configure elements for the chrdev */ indio_dev->dev.devt = MKDEV(MAJOR(iio_devt), indio_dev->id); diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig index d66ea754ffff..36f458433480 100644 --- a/drivers/iio/light/Kconfig +++ b/drivers/iio/light/Kconfig @@ -460,6 +460,19 @@ config VCNL4000 To compile this driver as a module, choose M here: the module will be called vcnl4000. +config VCNL4035 + tristate "VCNL4035 combined ALS and proximity sensor" + select IIO_TRIGGERED_BUFFER + select REGMAP_I2C + depends on I2C + help + Say Y here if you want to build a driver for the Vishay VCNL4035, + combined ambient light (ALS) and proximity sensor. Currently only ALS + function is available. + + To compile this driver as a module, choose M here: the + module will be called vcnl4035. 
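The vcnl4035.c driver added below ties the ALS lux scale to the integration-time field: 0.064 lux/step at the 50 ms setting, halving each time the integration time doubles (see the IT/scale table in the driver comment). As a minimal illustration of that arithmetic, here is a user-space sketch, not kernel code; the helper name vcnl4035_scale() is hypothetical. It evaluates the fraction the driver reports for IIO_CHAN_INFO_SCALE (val = 64, val2 = 1000 for ALS_IT 0, otherwise als_it_val * 2 * 1000):

#include <stdio.h>

/*
 * Sketch of the IIO_CHAN_INFO_SCALE fraction returned by
 * vcnl4035_read_raw(): 64 / 1000 when ALS_IT is 0 (50 ms),
 * 64 / (als_it_val * 2 * 1000) otherwise, in lux per raw step.
 */
static double vcnl4035_scale(unsigned int als_it_val)
{
	unsigned int denom = als_it_val ? als_it_val * 2 * 1000 : 1000;

	return 64.0 / denom;
}

int main(void)
{
	unsigned int it[] = { 0, 1, 2, 4 };	/* 50, 100, 200, 400 ms */
	int i;

	for (i = 0; i < 4; i++)
		printf("ALS_IT=%u -> %.3f lux/step\n", it[i],
		       vcnl4035_scale(it[i]));
	return 0;
}

This reproduces the 0.064 / 0.032 / 0.016 / 0.008 column of the driver's own scale table.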
+ config VEML6070 tristate "VEML6070 UV A light sensor" depends on I2C diff --git a/drivers/iio/light/Makefile b/drivers/iio/light/Makefile index 86337b114bc4..286bf3975372 100644 --- a/drivers/iio/light/Makefile +++ b/drivers/iio/light/Makefile @@ -45,6 +45,7 @@ obj-$(CONFIG_TSL2772) += tsl2772.o obj-$(CONFIG_TSL4531) += tsl4531.o obj-$(CONFIG_US5182D) += us5182d.o obj-$(CONFIG_VCNL4000) += vcnl4000.o +obj-$(CONFIG_VCNL4035) += vcnl4035.o obj-$(CONFIG_VEML6070) += veml6070.o obj-$(CONFIG_VL6180) += vl6180.o obj-$(CONFIG_ZOPT2201) += zopt2201.o diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c new file mode 100644 index 000000000000..cca4db312bd3 --- /dev/null +++ b/drivers/iio/light/vcnl4035.c @@ -0,0 +1,676 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * VCNL4035 Ambient Light and Proximity Sensor - 7-bit I2C slave address 0x60 + * + * Copyright (c) 2018, DENX Software Engineering GmbH + * Author: Parthiban Nallathambi + * + * TODO: Proximity + */ +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#define VCNL4035_DRV_NAME "vcnl4035" +#define VCNL4035_IRQ_NAME "vcnl4035_event" +#define VCNL4035_REGMAP_NAME "vcnl4035_regmap" + +/* Device registers */ +#define VCNL4035_ALS_CONF 0x00 +#define VCNL4035_ALS_THDH 0x01 +#define VCNL4035_ALS_THDL 0x02 +#define VCNL4035_ALS_DATA 0x0B +#define VCNL4035_WHITE_DATA 0x0C +#define VCNL4035_INT_FLAG 0x0D +#define VCNL4035_DEV_ID 0x0E + +/* Register masks */ +#define VCNL4035_MODE_ALS_MASK BIT(0) +#define VCNL4035_MODE_ALS_WHITE_CHAN BIT(8) +#define VCNL4035_MODE_ALS_INT_MASK BIT(1) +#define VCNL4035_ALS_IT_MASK GENMASK(7, 5) +#define VCNL4035_ALS_PERS_MASK GENMASK(3, 2) +#define VCNL4035_INT_ALS_IF_H_MASK BIT(12) +#define VCNL4035_INT_ALS_IF_L_MASK BIT(13) + +/* Default values */ +#define VCNL4035_MODE_ALS_ENABLE BIT(0) +#define VCNL4035_MODE_ALS_DISABLE 0x00 +#define VCNL4035_MODE_ALS_INT_ENABLE BIT(1) +#define VCNL4035_MODE_ALS_INT_DISABLE 0 +#define VCNL4035_DEV_ID_VAL 0x80 +#define VCNL4035_ALS_IT_DEFAULT 0x01 +#define VCNL4035_ALS_PERS_DEFAULT 0x00 +#define VCNL4035_ALS_THDH_DEFAULT 5000 +#define VCNL4035_ALS_THDL_DEFAULT 100 +#define VCNL4035_SLEEP_DELAY_MS 2000 + +struct vcnl4035_data { + struct i2c_client *client; + struct regmap *regmap; + unsigned int als_it_val; + unsigned int als_persistence; + unsigned int als_thresh_low; + unsigned int als_thresh_high; + struct iio_trigger *drdy_trigger0; +}; + +static inline bool vcnl4035_is_triggered(struct vcnl4035_data *data) +{ + int ret; + int reg; + + ret = regmap_read(data->regmap, VCNL4035_INT_FLAG, ®); + if (ret < 0) + return false; + + return !!(reg & + (VCNL4035_INT_ALS_IF_H_MASK | VCNL4035_INT_ALS_IF_L_MASK)); +} + +static irqreturn_t vcnl4035_drdy_irq_thread(int irq, void *private) +{ + struct iio_dev *indio_dev = private; + struct vcnl4035_data *data = iio_priv(indio_dev); + + if (vcnl4035_is_triggered(data)) { + iio_push_event(indio_dev, IIO_UNMOD_EVENT_CODE(IIO_LIGHT, + 0, + IIO_EV_TYPE_THRESH, + IIO_EV_DIR_EITHER), + iio_get_time_ns(indio_dev)); + iio_trigger_poll_chained(data->drdy_trigger0); + return IRQ_HANDLED; + } + + return IRQ_NONE; +} + +/* Triggered buffer */ +static irqreturn_t vcnl4035_trigger_consumer_handler(int irq, void *p) +{ + struct iio_poll_func *pf = p; + struct iio_dev *indio_dev = pf->indio_dev; + struct vcnl4035_data *data = iio_priv(indio_dev); + u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)]; + int ret; + + ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, 
(int *)buffer); + if (ret < 0) { + dev_err(&data->client->dev, + "Trigger consumer can't read from sensor.\n"); + goto fail_read; + } + iio_push_to_buffers_with_timestamp(indio_dev, buffer, + iio_get_time_ns(indio_dev)); + +fail_read: + iio_trigger_notify_done(indio_dev->trig); + + return IRQ_HANDLED; +} + +static int vcnl4035_als_drdy_set_state(struct iio_trigger *trigger, + bool enable_drdy) +{ + struct iio_dev *indio_dev = iio_trigger_get_drvdata(trigger); + struct vcnl4035_data *data = iio_priv(indio_dev); + int val = enable_drdy ? VCNL4035_MODE_ALS_INT_ENABLE : + VCNL4035_MODE_ALS_INT_DISABLE; + + return regmap_update_bits(data->regmap, VCNL4035_ALS_CONF, + VCNL4035_MODE_ALS_INT_MASK, + val); +} + +static const struct iio_trigger_ops vcnl4035_trigger_ops = { + .validate_device = iio_trigger_validate_own_device, + .set_trigger_state = vcnl4035_als_drdy_set_state, +}; + +static int vcnl4035_set_pm_runtime_state(struct vcnl4035_data *data, bool on) +{ + int ret; + struct device *dev = &data->client->dev; + + if (on) { + ret = pm_runtime_get_sync(dev); + if (ret < 0) + pm_runtime_put_noidle(dev); + } else { + pm_runtime_mark_last_busy(dev); + ret = pm_runtime_put_autosuspend(dev); + } + + return ret; +} + +/* + * Device IT INT Time (ms) Scale (lux/step) + * 000 50 0.064 + * 001 100 0.032 + * 010 200 0.016 + * 100 400 0.008 + * 101 - 111 800 0.004 + * Values are proportional, so ALS INT is selected for input due to + * simplicity reason. Integration time value and scaling is + * calculated based on device INT value + * + * Raw value needs to be scaled using ALS steps + */ +static int vcnl4035_read_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, int *val, + int *val2, long mask) +{ + struct vcnl4035_data *data = iio_priv(indio_dev); + int ret; + int raw_data; + unsigned int reg; + + switch (mask) { + case IIO_CHAN_INFO_RAW: + ret = vcnl4035_set_pm_runtime_state(data, true); + if (ret < 0) + return ret; + + ret = iio_device_claim_direct_mode(indio_dev); + if (!ret) { + if (chan->channel) + reg = VCNL4035_ALS_DATA; + else + reg = VCNL4035_WHITE_DATA; + ret = regmap_read(data->regmap, reg, &raw_data); + iio_device_release_direct_mode(indio_dev); + if (!ret) { + *val = raw_data; + ret = IIO_VAL_INT; + } + } + vcnl4035_set_pm_runtime_state(data, false); + return ret; + case IIO_CHAN_INFO_INT_TIME: + *val = 50; + if (data->als_it_val) + *val = data->als_it_val * 100; + return IIO_VAL_INT; + case IIO_CHAN_INFO_SCALE: + *val = 64; + if (!data->als_it_val) + *val2 = 1000; + else + *val2 = data->als_it_val * 2 * 1000; + return IIO_VAL_FRACTIONAL; + default: + return -EINVAL; + } +} + +static int vcnl4035_write_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int val, int val2, long mask) +{ + int ret; + struct vcnl4035_data *data = iio_priv(indio_dev); + + switch (mask) { + case IIO_CHAN_INFO_INT_TIME: + if (val <= 0 || val > 800) + return -EINVAL; + + ret = vcnl4035_set_pm_runtime_state(data, true); + if (ret < 0) + return ret; + + ret = regmap_update_bits(data->regmap, VCNL4035_ALS_CONF, + VCNL4035_ALS_IT_MASK, + val / 100); + if (!ret) + data->als_it_val = val / 100; + + vcnl4035_set_pm_runtime_state(data, false); + return ret; + default: + return -EINVAL; + } +} + +/* No direct ABI for persistence and threshold, so eventing */ +static int vcnl4035_read_thresh(struct iio_dev *indio_dev, + const struct iio_chan_spec *chan, enum iio_event_type type, + enum iio_event_direction dir, enum iio_event_info info, + int *val, int *val2) +{ + struct vcnl4035_data *data 
= iio_priv(indio_dev); + + switch (info) { + case IIO_EV_INFO_VALUE: + switch (dir) { + case IIO_EV_DIR_RISING: + *val = data->als_thresh_high; + return IIO_VAL_INT; + case IIO_EV_DIR_FALLING: + *val = data->als_thresh_low; + return IIO_VAL_INT; + default: + return -EINVAL; + } + break; + case IIO_EV_INFO_PERIOD: + *val = data->als_persistence; + return IIO_VAL_INT; + default: + return -EINVAL; + } + +} + +static int vcnl4035_write_thresh(struct iio_dev *indio_dev, + const struct iio_chan_spec *chan, enum iio_event_type type, + enum iio_event_direction dir, enum iio_event_info info, int val, + int val2) +{ + struct vcnl4035_data *data = iio_priv(indio_dev); + int ret; + + switch (info) { + case IIO_EV_INFO_VALUE: + /* 16 bit threshold range 0 - 65535 */ + if (val < 0 || val > 65535) + return -EINVAL; + if (dir == IIO_EV_DIR_RISING) { + if (val < data->als_thresh_low) + return -EINVAL; + ret = regmap_write(data->regmap, VCNL4035_ALS_THDH, + val); + if (ret) + return ret; + data->als_thresh_high = val; + } else { + if (val > data->als_thresh_high) + return -EINVAL; + ret = regmap_write(data->regmap, VCNL4035_ALS_THDL, + val); + if (ret) + return ret; + data->als_thresh_low = val; + } + return ret; + case IIO_EV_INFO_PERIOD: + /* allow only 1 2 4 8 as persistence value */ + if (val < 0 || val > 8 || hweight8(val) != 1) + return -EINVAL; + ret = regmap_update_bits(data->regmap, VCNL4035_ALS_CONF, + VCNL4035_ALS_PERS_MASK, val); + if (!ret) + data->als_persistence = val; + return ret; + default: + return -EINVAL; + } +} + +static IIO_CONST_ATTR_INT_TIME_AVAIL("50 100 200 400 800"); + +static struct attribute *vcnl4035_attributes[] = { + &iio_const_attr_integration_time_available.dev_attr.attr, + NULL, +}; + +static const struct attribute_group vcnl4035_attribute_group = { + .attrs = vcnl4035_attributes, +}; + +static const struct iio_info vcnl4035_info = { + .read_raw = vcnl4035_read_raw, + .write_raw = vcnl4035_write_raw, + .read_event_value = vcnl4035_read_thresh, + .write_event_value = vcnl4035_write_thresh, + .attrs = &vcnl4035_attribute_group, +}; + +static const struct iio_event_spec vcnl4035_event_spec[] = { + { + .type = IIO_EV_TYPE_THRESH, + .dir = IIO_EV_DIR_RISING, + .mask_separate = BIT(IIO_EV_INFO_VALUE), + }, { + .type = IIO_EV_TYPE_THRESH, + .dir = IIO_EV_DIR_FALLING, + .mask_separate = BIT(IIO_EV_INFO_VALUE), + }, { + .type = IIO_EV_TYPE_THRESH, + .dir = IIO_EV_DIR_EITHER, + .mask_separate = BIT(IIO_EV_INFO_PERIOD), + }, +}; + +enum vcnl4035_scan_index_order { + VCNL4035_CHAN_INDEX_LIGHT, + VCNL4035_CHAN_INDEX_WHITE_LED, +}; + +static const struct iio_buffer_setup_ops iio_triggered_buffer_setup_ops = { + .validate_scan_mask = &iio_validate_scan_mask_onehot, +}; + +static const struct iio_chan_spec vcnl4035_channels[] = { + { + .type = IIO_LIGHT, + .channel = 0, + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | + BIT(IIO_CHAN_INFO_INT_TIME) | + BIT(IIO_CHAN_INFO_SCALE), + .event_spec = vcnl4035_event_spec, + .num_event_specs = ARRAY_SIZE(vcnl4035_event_spec), + .scan_index = VCNL4035_CHAN_INDEX_LIGHT, + .scan_type = { + .sign = 'u', + .realbits = 16, + .storagebits = 16, + .endianness = IIO_LE, + }, + }, + { + .type = IIO_INTENSITY, + .channel = 1, + .modified = 1, + .channel2 = IIO_MOD_LIGHT_BOTH, + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .scan_index = VCNL4035_CHAN_INDEX_WHITE_LED, + .scan_type = { + .sign = 'u', + .realbits = 16, + .storagebits = 16, + .endianness = IIO_LE, + }, + }, +}; + +static int vcnl4035_set_als_power_state(struct vcnl4035_data *data, u8 status) 
+{ + return regmap_update_bits(data->regmap, VCNL4035_ALS_CONF, + VCNL4035_MODE_ALS_MASK, + status); +} + +static int vcnl4035_init(struct vcnl4035_data *data) +{ + int ret; + int id; + + ret = regmap_read(data->regmap, VCNL4035_DEV_ID, &id); + if (ret < 0) { + dev_err(&data->client->dev, "Failed to read DEV_ID register\n"); + return ret; + } + + if (id != VCNL4035_DEV_ID_VAL) { + dev_err(&data->client->dev, "Wrong id, got %x, expected %x\n", + id, VCNL4035_DEV_ID_VAL); + return -ENODEV; + } + + ret = vcnl4035_set_als_power_state(data, VCNL4035_MODE_ALS_ENABLE); + if (ret < 0) + return ret; + + /* ALS white channel enable */ + ret = regmap_update_bits(data->regmap, VCNL4035_ALS_CONF, + VCNL4035_MODE_ALS_WHITE_CHAN, + 1); + if (ret) { + dev_err(&data->client->dev, "set white channel enable %d\n", + ret); + return ret; + } + + /* set default integration time - 100 ms for ALS */ + ret = regmap_update_bits(data->regmap, VCNL4035_ALS_CONF, + VCNL4035_ALS_IT_MASK, + VCNL4035_ALS_IT_DEFAULT); + if (ret) { + dev_err(&data->client->dev, "set default ALS IT returned %d\n", + ret); + return ret; + } + data->als_it_val = VCNL4035_ALS_IT_DEFAULT; + + /* set default persistence time - 1 for ALS */ + ret = regmap_update_bits(data->regmap, VCNL4035_ALS_CONF, + VCNL4035_ALS_PERS_MASK, + VCNL4035_ALS_PERS_DEFAULT); + if (ret) { + dev_err(&data->client->dev, "set default PERS returned %d\n", + ret); + return ret; + } + data->als_persistence = VCNL4035_ALS_PERS_DEFAULT; + + /* set default HIGH threshold for ALS */ + ret = regmap_write(data->regmap, VCNL4035_ALS_THDH, + VCNL4035_ALS_THDH_DEFAULT); + if (ret) { + dev_err(&data->client->dev, "set default THDH returned %d\n", + ret); + return ret; + } + data->als_thresh_high = VCNL4035_ALS_THDH_DEFAULT; + + /* set default LOW threshold for ALS */ + ret = regmap_write(data->regmap, VCNL4035_ALS_THDL, + VCNL4035_ALS_THDL_DEFAULT); + if (ret) { + dev_err(&data->client->dev, "set default THDL returned %d\n", + ret); + return ret; + } + data->als_thresh_low = VCNL4035_ALS_THDL_DEFAULT; + + return 0; +} + +static bool vcnl4035_is_volatile_reg(struct device *dev, unsigned int reg) +{ + switch (reg) { + case VCNL4035_ALS_CONF: + case VCNL4035_DEV_ID: + return false; + default: + return true; + } +} + +static const struct regmap_config vcnl4035_regmap_config = { + .name = VCNL4035_REGMAP_NAME, + .reg_bits = 8, + .val_bits = 16, + .max_register = VCNL4035_DEV_ID, + .cache_type = REGCACHE_RBTREE, + .volatile_reg = vcnl4035_is_volatile_reg, + .val_format_endian = REGMAP_ENDIAN_LITTLE, +}; + +static int vcnl4035_probe_trigger(struct iio_dev *indio_dev) +{ + int ret; + struct vcnl4035_data *data = iio_priv(indio_dev); + + data->drdy_trigger0 = devm_iio_trigger_alloc( + indio_dev->dev.parent, + "%s-dev%d", indio_dev->name, indio_dev->id); + if (!data->drdy_trigger0) + return -ENOMEM; + + data->drdy_trigger0->dev.parent = indio_dev->dev.parent; + data->drdy_trigger0->ops = &vcnl4035_trigger_ops; + iio_trigger_set_drvdata(data->drdy_trigger0, indio_dev); + ret = devm_iio_trigger_register(indio_dev->dev.parent, + data->drdy_trigger0); + if (ret) { + dev_err(&data->client->dev, "iio trigger register failed\n"); + return ret; + } + + /* Trigger setup */ + ret = devm_iio_triggered_buffer_setup(indio_dev->dev.parent, indio_dev, + NULL, vcnl4035_trigger_consumer_handler, + &iio_triggered_buffer_setup_ops); + if (ret < 0) { + dev_err(&data->client->dev, "iio triggered buffer setup failed\n"); + return ret; + } + + /* IRQ to trigger mapping */ + ret = 
devm_request_threaded_irq(&data->client->dev, data->client->irq, + NULL, vcnl4035_drdy_irq_thread, + IRQF_TRIGGER_LOW | IRQF_ONESHOT, + VCNL4035_IRQ_NAME, indio_dev); + if (ret < 0) + dev_err(&data->client->dev, "request irq %d for trigger0 failed\n", + data->client->irq); + return ret; +} + +static int vcnl4035_probe(struct i2c_client *client, + const struct i2c_device_id *id) +{ + struct vcnl4035_data *data; + struct iio_dev *indio_dev; + struct regmap *regmap; + int ret; + + indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data)); + if (!indio_dev) + return -ENOMEM; + + regmap = devm_regmap_init_i2c(client, &vcnl4035_regmap_config); + if (IS_ERR(regmap)) { + dev_err(&client->dev, "regmap_init failed!\n"); + return PTR_ERR(regmap); + } + + data = iio_priv(indio_dev); + i2c_set_clientdata(client, indio_dev); + data->client = client; + data->regmap = regmap; + + indio_dev->dev.parent = &client->dev; + indio_dev->info = &vcnl4035_info; + indio_dev->name = VCNL4035_DRV_NAME; + indio_dev->channels = vcnl4035_channels; + indio_dev->num_channels = ARRAY_SIZE(vcnl4035_channels); + indio_dev->modes = INDIO_DIRECT_MODE; + + ret = vcnl4035_init(data); + if (ret < 0) { + dev_err(&client->dev, "vcnl4035 chip init failed\n"); + return ret; + } + + if (client->irq > 0) { + ret = vcnl4035_probe_trigger(indio_dev); + if (ret < 0) { + dev_err(&client->dev, "vcnl4035 unable init trigger\n"); + goto fail_poweroff; + } + } + + ret = pm_runtime_set_active(&client->dev); + if (ret < 0) + goto fail_poweroff; + + ret = iio_device_register(indio_dev); + if (ret < 0) + goto fail_poweroff; + + pm_runtime_enable(&client->dev); + pm_runtime_set_autosuspend_delay(&client->dev, VCNL4035_SLEEP_DELAY_MS); + pm_runtime_use_autosuspend(&client->dev); + + return 0; + +fail_poweroff: + vcnl4035_set_als_power_state(data, VCNL4035_MODE_ALS_DISABLE); + return ret; +} + +static int vcnl4035_remove(struct i2c_client *client) +{ + struct iio_dev *indio_dev = i2c_get_clientdata(client); + + pm_runtime_dont_use_autosuspend(&client->dev); + pm_runtime_disable(&client->dev); + iio_device_unregister(indio_dev); + pm_runtime_set_suspended(&client->dev); + + return vcnl4035_set_als_power_state(iio_priv(indio_dev), + VCNL4035_MODE_ALS_DISABLE); +} + +static int __maybe_unused vcnl4035_runtime_suspend(struct device *dev) +{ + struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev)); + struct vcnl4035_data *data = iio_priv(indio_dev); + int ret; + + ret = vcnl4035_set_als_power_state(data, VCNL4035_MODE_ALS_DISABLE); + regcache_mark_dirty(data->regmap); + + return ret; +} + +static int __maybe_unused vcnl4035_runtime_resume(struct device *dev) +{ + struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev)); + struct vcnl4035_data *data = iio_priv(indio_dev); + int ret; + + regcache_sync(data->regmap); + ret = vcnl4035_set_als_power_state(data, VCNL4035_MODE_ALS_ENABLE); + if (ret < 0) + return ret; + + /* wait for 1 ALS integration cycle */ + msleep(data->als_it_val * 100); + + return 0; +} + +static const struct dev_pm_ops vcnl4035_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, + pm_runtime_force_resume) + SET_RUNTIME_PM_OPS(vcnl4035_runtime_suspend, + vcnl4035_runtime_resume, NULL) +}; + +static const struct of_device_id vcnl4035_of_match[] = { + { .compatible = "vishay,vcnl4035", }, + { } +}; +MODULE_DEVICE_TABLE(of, vcnl4035_of_match); + +static struct i2c_driver vcnl4035_driver = { + .driver = { + .name = VCNL4035_DRV_NAME, + .pm = &vcnl4035_pm_ops, + .of_match_table = vcnl4035_of_match, + }, + 
.probe = vcnl4035_probe, + .remove = vcnl4035_remove, +}; + +module_i2c_driver(vcnl4035_driver); + +MODULE_AUTHOR("Parthiban Nallathambi "); +MODULE_DESCRIPTION("VCNL4035 Ambient Light Sensor driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/iio/magnetometer/Kconfig b/drivers/iio/magnetometer/Kconfig index ed9d776d01af..8a63cbbca4b7 100644 --- a/drivers/iio/magnetometer/Kconfig +++ b/drivers/iio/magnetometer/Kconfig @@ -175,4 +175,33 @@ config SENSORS_HMC5843_SPI - hmc5843_core (core functions) - hmc5843_spi (support for HMC5983) +config SENSORS_RM3100 + tristate + select IIO_BUFFER + select IIO_TRIGGERED_BUFFER + +config SENSORS_RM3100_I2C + tristate "PNI RM3100 3-Axis Magnetometer (I2C)" + depends on I2C + select SENSORS_RM3100 + select REGMAP_I2C + help + Say Y here to add support for the PNI RM3100 3-Axis Magnetometer. + + This driver can also be compiled as a module. + To compile this driver as a module, choose M here: the module + will be called rm3100-i2c. + +config SENSORS_RM3100_SPI + tristate "PNI RM3100 3-Axis Magnetometer (SPI)" + depends on SPI_MASTER + select SENSORS_RM3100 + select REGMAP_SPI + help + Say Y here to add support for the PNI RM3100 3-Axis Magnetometer. + + This driver can also be compiled as a module. + To compile this driver as a module, choose M here: the module + will be called rm3100-spi. + endmenu diff --git a/drivers/iio/magnetometer/Makefile b/drivers/iio/magnetometer/Makefile index 664b2f866472..ba1bc34b82fa 100644 --- a/drivers/iio/magnetometer/Makefile +++ b/drivers/iio/magnetometer/Makefile @@ -24,3 +24,7 @@ obj-$(CONFIG_IIO_ST_MAGN_SPI_3AXIS) += st_magn_spi.o obj-$(CONFIG_SENSORS_HMC5843) += hmc5843_core.o obj-$(CONFIG_SENSORS_HMC5843_I2C) += hmc5843_i2c.o obj-$(CONFIG_SENSORS_HMC5843_SPI) += hmc5843_spi.o + +obj-$(CONFIG_SENSORS_RM3100) += rm3100-core.o +obj-$(CONFIG_SENSORS_RM3100_I2C) += rm3100-i2c.o +obj-$(CONFIG_SENSORS_RM3100_SPI) += rm3100-spi.o diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c index 42a827a66512..d430b80808ef 100644 --- a/drivers/iio/magnetometer/ak8975.c +++ b/drivers/iio/magnetometer/ak8975.c @@ -790,6 +790,7 @@ static const struct acpi_device_id ak_acpi_match[] = { {"INVN6500", AK8963}, {"AK009911", AK09911}, {"AK09911", AK09911}, + {"AKM9911", AK09911}, {"AK09912", AK09912}, { }, }; diff --git a/drivers/iio/magnetometer/rm3100-core.c b/drivers/iio/magnetometer/rm3100-core.c new file mode 100644 index 000000000000..7c20918d8108 --- /dev/null +++ b/drivers/iio/magnetometer/rm3100-core.c @@ -0,0 +1,616 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PNI RM3100 3-axis geomagnetic sensor driver core. + * + * Copyright (C) 2018 Song Qiang + * + * User Manual available at + * + * + * TODO: event generation, pm. + */ + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "rm3100.h" + +/* Cycle Count Registers. */ +#define RM3100_REG_CC_X 0x05 +#define RM3100_REG_CC_Y 0x07 +#define RM3100_REG_CC_Z 0x09 + +/* Poll Measurement Mode register. */ +#define RM3100_REG_POLL 0x00 +#define RM3100_POLL_X BIT(4) +#define RM3100_POLL_Y BIT(5) +#define RM3100_POLL_Z BIT(6) + +/* Continuous Measurement Mode register. */ +#define RM3100_REG_CMM 0x01 +#define RM3100_CMM_START BIT(0) +#define RM3100_CMM_X BIT(4) +#define RM3100_CMM_Y BIT(5) +#define RM3100_CMM_Z BIT(6) + +/* TiMe Rate Configuration register. */ +#define RM3100_REG_TMRC 0x0B +#define RM3100_TMRC_OFFSET 0x92 + +/* Result Status register. 
*/ +#define RM3100_REG_STATUS 0x34 +#define RM3100_STATUS_DRDY BIT(7) + +/* Measurement result registers. */ +#define RM3100_REG_MX2 0x24 +#define RM3100_REG_MY2 0x27 +#define RM3100_REG_MZ2 0x2a + +#define RM3100_W_REG_START RM3100_REG_POLL +#define RM3100_W_REG_END RM3100_REG_TMRC +#define RM3100_R_REG_START RM3100_REG_POLL +#define RM3100_R_REG_END RM3100_REG_STATUS +#define RM3100_V_REG_START RM3100_REG_POLL +#define RM3100_V_REG_END RM3100_REG_STATUS + +/* + * This is computed by hand, is the sum of channel storage bits and padding + * bits, which is 4+4+4+12=24 in here. + */ +#define RM3100_SCAN_BYTES 24 + +#define RM3100_CMM_AXIS_SHIFT 4 + +struct rm3100_data { + struct regmap *regmap; + struct completion measuring_done; + bool use_interrupt; + int conversion_time; + int scale; + u8 buffer[RM3100_SCAN_BYTES]; + struct iio_trigger *drdy_trig; + + /* + * This lock is for protecting the consistency of series of i2c + * operations, that is, to make sure a measurement process will + * not be interrupted by a set frequency operation, which should + * be taken where a series of i2c operation starts, released where + * the operation ends. + */ + struct mutex lock; +}; + +static const struct regmap_range rm3100_readable_ranges[] = { + regmap_reg_range(RM3100_R_REG_START, RM3100_R_REG_END), +}; + +const struct regmap_access_table rm3100_readable_table = { + .yes_ranges = rm3100_readable_ranges, + .n_yes_ranges = ARRAY_SIZE(rm3100_readable_ranges), +}; +EXPORT_SYMBOL_GPL(rm3100_readable_table); + +static const struct regmap_range rm3100_writable_ranges[] = { + regmap_reg_range(RM3100_W_REG_START, RM3100_W_REG_END), +}; + +const struct regmap_access_table rm3100_writable_table = { + .yes_ranges = rm3100_writable_ranges, + .n_yes_ranges = ARRAY_SIZE(rm3100_writable_ranges), +}; +EXPORT_SYMBOL_GPL(rm3100_writable_table); + +static const struct regmap_range rm3100_volatile_ranges[] = { + regmap_reg_range(RM3100_V_REG_START, RM3100_V_REG_END), +}; + +const struct regmap_access_table rm3100_volatile_table = { + .yes_ranges = rm3100_volatile_ranges, + .n_yes_ranges = ARRAY_SIZE(rm3100_volatile_ranges), +}; +EXPORT_SYMBOL_GPL(rm3100_volatile_table); + +static irqreturn_t rm3100_thread_fn(int irq, void *d) +{ + struct iio_dev *indio_dev = d; + struct rm3100_data *data = iio_priv(indio_dev); + + /* + * Write operation to any register or read operation + * to first byte of results will clear the interrupt. + */ + regmap_write(data->regmap, RM3100_REG_POLL, 0); + + return IRQ_HANDLED; +} + +static irqreturn_t rm3100_irq_handler(int irq, void *d) +{ + struct iio_dev *indio_dev = d; + struct rm3100_data *data = iio_priv(indio_dev); + + switch (indio_dev->currentmode) { + case INDIO_DIRECT_MODE: + complete(&data->measuring_done); + break; + case INDIO_BUFFER_TRIGGERED: + iio_trigger_poll(data->drdy_trig); + break; + default: + dev_err(indio_dev->dev.parent, + "device mode out of control, current mode: %d", + indio_dev->currentmode); + } + + return IRQ_WAKE_THREAD; +} + +static int rm3100_wait_measurement(struct rm3100_data *data) +{ + struct regmap *regmap = data->regmap; + unsigned int val; + int tries = 20; + int ret; + + /* + * A read cycle of 400kbits i2c bus is about 20us, plus the time + * used for scheduling, a read cycle of fast mode of this device + * can reach 1.7ms, it may be possible for data to arrive just + * after we check the RM3100_REG_STATUS. In this case, irq_handler is + * called before measuring_done is reinitialized, it will wait + * forever for data that has already been ready. 
+ * Reinitialize measuring_done before looking up makes sure we + * will always capture interrupt no matter when it happens. + */ + if (data->use_interrupt) + reinit_completion(&data->measuring_done); + + ret = regmap_read(regmap, RM3100_REG_STATUS, &val); + if (ret < 0) + return ret; + + if ((val & RM3100_STATUS_DRDY) != RM3100_STATUS_DRDY) { + if (data->use_interrupt) { + ret = wait_for_completion_timeout(&data->measuring_done, + msecs_to_jiffies(data->conversion_time)); + if (!ret) + return -ETIMEDOUT; + } else { + do { + usleep_range(1000, 5000); + + ret = regmap_read(regmap, RM3100_REG_STATUS, + &val); + if (ret < 0) + return ret; + + if (val & RM3100_STATUS_DRDY) + break; + } while (--tries); + if (!tries) + return -ETIMEDOUT; + } + } + return 0; +} + +static int rm3100_read_mag(struct rm3100_data *data, int idx, int *val) +{ + struct regmap *regmap = data->regmap; + u8 buffer[3]; + int ret; + + mutex_lock(&data->lock); + ret = regmap_write(regmap, RM3100_REG_POLL, BIT(4 + idx)); + if (ret < 0) + goto unlock_return; + + ret = rm3100_wait_measurement(data); + if (ret < 0) + goto unlock_return; + + ret = regmap_bulk_read(regmap, RM3100_REG_MX2 + 3 * idx, buffer, 3); + if (ret < 0) + goto unlock_return; + mutex_unlock(&data->lock); + + *val = sign_extend32((buffer[0] << 16) | (buffer[1] << 8) | buffer[2], + 23); + + return IIO_VAL_INT; + +unlock_return: + mutex_unlock(&data->lock); + return ret; +} + +#define RM3100_CHANNEL(axis, idx) \ + { \ + .type = IIO_MAGN, \ + .modified = 1, \ + .channel2 = IIO_MOD_##axis, \ + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \ + BIT(IIO_CHAN_INFO_SAMP_FREQ), \ + .scan_index = idx, \ + .scan_type = { \ + .sign = 's', \ + .realbits = 24, \ + .storagebits = 32, \ + .shift = 8, \ + .endianness = IIO_BE, \ + }, \ + } + +static const struct iio_chan_spec rm3100_channels[] = { + RM3100_CHANNEL(X, 0), + RM3100_CHANNEL(Y, 1), + RM3100_CHANNEL(Z, 2), + IIO_CHAN_SOFT_TIMESTAMP(3), +}; + +static IIO_CONST_ATTR_SAMP_FREQ_AVAIL( + "600 300 150 75 37 18 9 4.5 2.3 1.2 0.6 0.3 0.015 0.075" +); + +static struct attribute *rm3100_attributes[] = { + &iio_const_attr_sampling_frequency_available.dev_attr.attr, + NULL, +}; + +static const struct attribute_group rm3100_attribute_group = { + .attrs = rm3100_attributes, +}; + +#define RM3100_SAMP_NUM 14 + +/* + * Frequency : rm3100_samp_rates[][0].rm3100_samp_rates[][1]Hz. + * Time between reading: rm3100_sam_rates[][2]ms. + * The first one is actually 1.7ms. 
+ */ +static const int rm3100_samp_rates[RM3100_SAMP_NUM][3] = { + {600, 0, 2}, {300, 0, 3}, {150, 0, 7}, {75, 0, 13}, {37, 0, 27}, + {18, 0, 55}, {9, 0, 110}, {4, 500000, 220}, {2, 300000, 440}, + {1, 200000, 800}, {0, 600000, 1600}, {0, 300000, 3300}, + {0, 15000, 6700}, {0, 75000, 13000} +}; + +static int rm3100_get_samp_freq(struct rm3100_data *data, int *val, int *val2) +{ + unsigned int tmp; + int ret; + + mutex_lock(&data->lock); + ret = regmap_read(data->regmap, RM3100_REG_TMRC, &tmp); + mutex_unlock(&data->lock); + if (ret < 0) + return ret; + *val = rm3100_samp_rates[tmp - RM3100_TMRC_OFFSET][0]; + *val2 = rm3100_samp_rates[tmp - RM3100_TMRC_OFFSET][1]; + + return IIO_VAL_INT_PLUS_MICRO; +} + +static int rm3100_set_cycle_count(struct rm3100_data *data, int val) +{ + int ret; + u8 i; + + for (i = 0; i < 3; i++) { + ret = regmap_write(data->regmap, RM3100_REG_CC_X + 2 * i, val); + if (ret < 0) + return ret; + } + + /* + * The scale of this sensor depends on the cycle count value, these + * three values are corresponding to the cycle count value 50, 100, + * 200. scale = output / gain * 10^4. + */ + switch (val) { + case 50: + data->scale = 500; + break; + case 100: + data->scale = 263; + break; + /* + * case 200: + * This function will never be called by users' code, so here we + * assume that it will never get a wrong parameter. + */ + default: + data->scale = 133; + } + + return 0; +} + +static int rm3100_set_samp_freq(struct iio_dev *indio_dev, int val, int val2) +{ + struct rm3100_data *data = iio_priv(indio_dev); + struct regmap *regmap = data->regmap; + unsigned int cycle_count; + int ret; + int i; + + mutex_lock(&data->lock); + /* All cycle count registers use the same value. */ + ret = regmap_read(regmap, RM3100_REG_CC_X, &cycle_count); + if (ret < 0) + goto unlock_return; + + for (i = 0; i < RM3100_SAMP_NUM; i++) { + if (val == rm3100_samp_rates[i][0] && + val2 == rm3100_samp_rates[i][1]) + break; + } + if (i == RM3100_SAMP_NUM) { + ret = -EINVAL; + goto unlock_return; + } + + ret = regmap_write(regmap, RM3100_REG_TMRC, i + RM3100_TMRC_OFFSET); + if (ret < 0) + goto unlock_return; + + /* Checking if cycle count registers need changing. */ + if (val == 600 && cycle_count == 200) { + ret = rm3100_set_cycle_count(data, 100); + if (ret < 0) + goto unlock_return; + } else if (val != 600 && cycle_count == 100) { + ret = rm3100_set_cycle_count(data, 200); + if (ret < 0) + goto unlock_return; + } + + if (indio_dev->currentmode == INDIO_BUFFER_TRIGGERED) { + /* Writing TMRC registers requires CMM reset. 
*/ + ret = regmap_write(regmap, RM3100_REG_CMM, 0); + if (ret < 0) + goto unlock_return; + ret = regmap_write(data->regmap, RM3100_REG_CMM, + (*indio_dev->active_scan_mask & 0x7) << + RM3100_CMM_AXIS_SHIFT | RM3100_CMM_START); + if (ret < 0) + goto unlock_return; + } + mutex_unlock(&data->lock); + + data->conversion_time = rm3100_samp_rates[i][2] * 2; + return 0; + +unlock_return: + mutex_unlock(&data->lock); + return ret; +} + +static int rm3100_read_raw(struct iio_dev *indio_dev, + const struct iio_chan_spec *chan, + int *val, int *val2, long mask) +{ + struct rm3100_data *data = iio_priv(indio_dev); + int ret; + + switch (mask) { + case IIO_CHAN_INFO_RAW: + ret = iio_device_claim_direct_mode(indio_dev); + if (ret < 0) + return ret; + + ret = rm3100_read_mag(data, chan->scan_index, val); + iio_device_release_direct_mode(indio_dev); + + return ret; + case IIO_CHAN_INFO_SCALE: + *val = 0; + *val2 = data->scale; + + return IIO_VAL_INT_PLUS_MICRO; + case IIO_CHAN_INFO_SAMP_FREQ: + return rm3100_get_samp_freq(data, val, val2); + default: + return -EINVAL; + } +} + +static int rm3100_write_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int val, int val2, long mask) +{ + switch (mask) { + case IIO_CHAN_INFO_SAMP_FREQ: + return rm3100_set_samp_freq(indio_dev, val, val2); + default: + return -EINVAL; + } +} + +static const struct iio_info rm3100_info = { + .attrs = &rm3100_attribute_group, + .read_raw = rm3100_read_raw, + .write_raw = rm3100_write_raw, +}; + +static int rm3100_buffer_preenable(struct iio_dev *indio_dev) +{ + struct rm3100_data *data = iio_priv(indio_dev); + + /* Starting channels enabled. */ + return regmap_write(data->regmap, RM3100_REG_CMM, + (*indio_dev->active_scan_mask & 0x7) << RM3100_CMM_AXIS_SHIFT | + RM3100_CMM_START); +} + +static int rm3100_buffer_postdisable(struct iio_dev *indio_dev) +{ + struct rm3100_data *data = iio_priv(indio_dev); + + return regmap_write(data->regmap, RM3100_REG_CMM, 0); +} + +static const struct iio_buffer_setup_ops rm3100_buffer_ops = { + .preenable = rm3100_buffer_preenable, + .postenable = iio_triggered_buffer_postenable, + .predisable = iio_triggered_buffer_predisable, + .postdisable = rm3100_buffer_postdisable, +}; + +static irqreturn_t rm3100_trigger_handler(int irq, void *p) +{ + struct iio_poll_func *pf = p; + struct iio_dev *indio_dev = pf->indio_dev; + unsigned long scan_mask = *indio_dev->active_scan_mask; + unsigned int mask_len = indio_dev->masklength; + struct rm3100_data *data = iio_priv(indio_dev); + struct regmap *regmap = data->regmap; + int ret, i, bit; + + mutex_lock(&data->lock); + switch (scan_mask) { + case BIT(0) | BIT(1) | BIT(2): + ret = regmap_bulk_read(regmap, RM3100_REG_MX2, data->buffer, 9); + mutex_unlock(&data->lock); + if (ret < 0) + goto done; + /* Convert XXXYYYZZZxxx to XXXxYYYxZZZx. x for paddings. 
*/ + for (i = 2; i > 0; i--) + memmove(data->buffer + i * 4, data->buffer + i * 3, 3); + break; + case BIT(0) | BIT(1): + ret = regmap_bulk_read(regmap, RM3100_REG_MX2, data->buffer, 6); + mutex_unlock(&data->lock); + if (ret < 0) + goto done; + memmove(data->buffer + 4, data->buffer + 3, 3); + break; + case BIT(1) | BIT(2): + ret = regmap_bulk_read(regmap, RM3100_REG_MY2, data->buffer, 6); + mutex_unlock(&data->lock); + if (ret < 0) + goto done; + memmove(data->buffer + 4, data->buffer + 3, 3); + break; + case BIT(0) | BIT(2): + ret = regmap_bulk_read(regmap, RM3100_REG_MX2, data->buffer, 9); + mutex_unlock(&data->lock); + if (ret < 0) + goto done; + memmove(data->buffer + 4, data->buffer + 6, 3); + break; + default: + for_each_set_bit(bit, &scan_mask, mask_len) { + ret = regmap_bulk_read(regmap, RM3100_REG_MX2 + 3 * bit, + data->buffer, 3); + if (ret < 0) { + mutex_unlock(&data->lock); + goto done; + } + } + mutex_unlock(&data->lock); + } + /* + * Always using the same buffer so that we wouldn't need to set the + * paddings to 0 in case of leaking any data. + */ + iio_push_to_buffers_with_timestamp(indio_dev, data->buffer, + pf->timestamp); +done: + iio_trigger_notify_done(indio_dev->trig); + + return IRQ_HANDLED; +} + +int rm3100_common_probe(struct device *dev, struct regmap *regmap, int irq) +{ + struct iio_dev *indio_dev; + struct rm3100_data *data; + unsigned int tmp; + int ret; + + indio_dev = devm_iio_device_alloc(dev, sizeof(*data)); + if (!indio_dev) + return -ENOMEM; + + data = iio_priv(indio_dev); + data->regmap = regmap; + + mutex_init(&data->lock); + + indio_dev->dev.parent = dev; + indio_dev->name = "rm3100"; + indio_dev->info = &rm3100_info; + indio_dev->channels = rm3100_channels; + indio_dev->num_channels = ARRAY_SIZE(rm3100_channels); + indio_dev->modes = INDIO_DIRECT_MODE | INDIO_BUFFER_TRIGGERED; + indio_dev->currentmode = INDIO_DIRECT_MODE; + + if (!irq) + data->use_interrupt = false; + else { + data->use_interrupt = true; + + init_completion(&data->measuring_done); + ret = devm_request_threaded_irq(dev, + irq, + rm3100_irq_handler, + rm3100_thread_fn, + IRQF_TRIGGER_HIGH | + IRQF_ONESHOT, + indio_dev->name, + indio_dev); + if (ret < 0) { + dev_err(dev, "request irq line failed.\n"); + return ret; + } + + data->drdy_trig = devm_iio_trigger_alloc(dev, "%s-drdy%d", + indio_dev->name, + indio_dev->id); + if (!data->drdy_trig) + return -ENOMEM; + + data->drdy_trig->dev.parent = dev; + ret = devm_iio_trigger_register(dev, data->drdy_trig); + if (ret < 0) + return ret; + } + + ret = devm_iio_triggered_buffer_setup(dev, indio_dev, + &iio_pollfunc_store_time, + rm3100_trigger_handler, + &rm3100_buffer_ops); + if (ret < 0) + return ret; + + ret = regmap_read(regmap, RM3100_REG_TMRC, &tmp); + if (ret < 0) + return ret; + /* Initializing max wait time, which is double conversion time. */ + data->conversion_time = rm3100_samp_rates[tmp - RM3100_TMRC_OFFSET][2] + * 2; + + /* Cycle count values may not be what we want. 
*/ + if ((tmp - RM3100_TMRC_OFFSET) == 0) + rm3100_set_cycle_count(data, 100); + else + rm3100_set_cycle_count(data, 200); + + return devm_iio_device_register(dev, indio_dev); +} +EXPORT_SYMBOL_GPL(rm3100_common_probe); + +MODULE_AUTHOR("Song Qiang "); +MODULE_DESCRIPTION("PNI RM3100 3-axis magnetometer i2c driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/iio/magnetometer/rm3100-i2c.c b/drivers/iio/magnetometer/rm3100-i2c.c new file mode 100644 index 000000000000..1ac622c6d6c9 --- /dev/null +++ b/drivers/iio/magnetometer/rm3100-i2c.c @@ -0,0 +1,54 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Support for PNI RM3100 3-axis geomagnetic sensor on a i2c bus. + * + * Copyright (C) 2018 Song Qiang + * + * i2c slave address: 0x20 + SA1 << 1 + SA0. + */ + +#include +#include + +#include "rm3100.h" + +static const struct regmap_config rm3100_regmap_config = { + .reg_bits = 8, + .val_bits = 8, + + .rd_table = &rm3100_readable_table, + .wr_table = &rm3100_writable_table, + .volatile_table = &rm3100_volatile_table, + + .cache_type = REGCACHE_RBTREE, +}; + +static int rm3100_probe(struct i2c_client *client) +{ + struct regmap *regmap; + + regmap = devm_regmap_init_i2c(client, &rm3100_regmap_config); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + return rm3100_common_probe(&client->dev, regmap, client->irq); +} + +static const struct of_device_id rm3100_dt_match[] = { + { .compatible = "pni,rm3100", }, + { } +}; +MODULE_DEVICE_TABLE(of, rm3100_dt_match); + +static struct i2c_driver rm3100_driver = { + .driver = { + .name = "rm3100-i2c", + .of_match_table = rm3100_dt_match, + }, + .probe_new = rm3100_probe, +}; +module_i2c_driver(rm3100_driver); + +MODULE_AUTHOR("Song Qiang "); +MODULE_DESCRIPTION("PNI RM3100 3-axis magnetometer i2c driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/iio/magnetometer/rm3100-spi.c b/drivers/iio/magnetometer/rm3100-spi.c new file mode 100644 index 000000000000..65d5eb9e4f5e --- /dev/null +++ b/drivers/iio/magnetometer/rm3100-spi.c @@ -0,0 +1,64 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Support for PNI RM3100 3-axis geomagnetic sensor on a spi bus. + * + * Copyright (C) 2018 Song Qiang + */ + +#include +#include + +#include "rm3100.h" + +static const struct regmap_config rm3100_regmap_config = { + .reg_bits = 8, + .val_bits = 8, + + .rd_table = &rm3100_readable_table, + .wr_table = &rm3100_writable_table, + .volatile_table = &rm3100_volatile_table, + + .read_flag_mask = 0x80, + + .cache_type = REGCACHE_RBTREE, +}; + +static int rm3100_probe(struct spi_device *spi) +{ + struct regmap *regmap; + int ret; + + /* Actually this device supports both mode 0 and mode 3. */ + spi->mode = SPI_MODE_0; + /* Data rates cannot exceed 1Mbits. 
*/ + spi->max_speed_hz = 1000000; + spi->bits_per_word = 8; + ret = spi_setup(spi); + if (ret) + return ret; + + regmap = devm_regmap_init_spi(spi, &rm3100_regmap_config); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + return rm3100_common_probe(&spi->dev, regmap, spi->irq); +} + +static const struct of_device_id rm3100_dt_match[] = { + { .compatible = "pni,rm3100", }, + { } +}; +MODULE_DEVICE_TABLE(of, rm3100_dt_match); + +static struct spi_driver rm3100_driver = { + .driver = { + .name = "rm3100-spi", + .of_match_table = rm3100_dt_match, + }, + .probe = rm3100_probe, +}; +module_spi_driver(rm3100_driver); + +MODULE_AUTHOR("Song Qiang "); +MODULE_DESCRIPTION("PNI RM3100 3-axis magnetometer spi driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/iio/magnetometer/rm3100.h b/drivers/iio/magnetometer/rm3100.h new file mode 100644 index 000000000000..c3508218bc77 --- /dev/null +++ b/drivers/iio/magnetometer/rm3100.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2018 Song Qiang + */ + +#ifndef RM3100_CORE_H +#define RM3100_CORE_H + +#include + +extern const struct regmap_access_table rm3100_readable_table; +extern const struct regmap_access_table rm3100_writable_table; +extern const struct regmap_access_table rm3100_volatile_table; + +int rm3100_common_probe(struct device *dev, struct regmap *regmap, int irq); + +#endif /* RM3100_CORE_H */ diff --git a/drivers/iio/magnetometer/st_magn.h b/drivers/iio/magnetometer/st_magn.h index 8fe51ce427bd..bc14ad4f1b26 100644 --- a/drivers/iio/magnetometer/st_magn.h +++ b/drivers/iio/magnetometer/st_magn.h @@ -20,6 +20,7 @@ #define LIS3MDL_MAGN_DEV_NAME "lis3mdl" #define LSM303AGR_MAGN_DEV_NAME "lsm303agr_magn" #define LIS2MDL_MAGN_DEV_NAME "lis2mdl" +#define LSM9DS1_MAGN_DEV_NAME "lsm9ds1_magn" int st_magn_common_probe(struct iio_dev *indio_dev); void st_magn_common_remove(struct iio_dev *indio_dev); diff --git a/drivers/iio/magnetometer/st_magn_core.c b/drivers/iio/magnetometer/st_magn_core.c index 72f6d1335a04..5d056bdb3b37 100644 --- a/drivers/iio/magnetometer/st_magn_core.c +++ b/drivers/iio/magnetometer/st_magn_core.c @@ -29,9 +29,9 @@ #define ST_MAGN_NUMBER_DATA_CHANNELS 3 /* DEFAULT VALUE FOR SENSORS */ -#define ST_MAGN_DEFAULT_OUT_X_H_ADDR 0X03 -#define ST_MAGN_DEFAULT_OUT_Y_H_ADDR 0X07 -#define ST_MAGN_DEFAULT_OUT_Z_H_ADDR 0X05 +#define ST_MAGN_DEFAULT_OUT_X_H_ADDR 0x03 +#define ST_MAGN_DEFAULT_OUT_Y_H_ADDR 0x07 +#define ST_MAGN_DEFAULT_OUT_Z_H_ADDR 0x05 /* FULLSCALE */ #define ST_MAGN_FS_AVL_1300MG 1300 @@ -267,6 +267,7 @@ static const struct st_sensor_settings st_magn_sensors_settings[] = { .wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS, .sensors_supported = { [0] = LIS3MDL_MAGN_DEV_NAME, + [1] = LSM9DS1_MAGN_DEV_NAME, }, .ch = (struct iio_chan_spec *)st_magn_2_16bit_channels, .odr = { @@ -315,6 +316,10 @@ static const struct st_sensor_settings st_magn_sensors_settings[] = { }, }, }, + .bdu = { + .addr = 0x24, + .mask = 0x40, + }, .drdy_irq = { /* drdy line is routed drdy pin */ .stat_drdy = { diff --git a/drivers/iio/magnetometer/st_magn_i2c.c b/drivers/iio/magnetometer/st_magn_i2c.c index feaa28cf6a77..68650f5f5c19 100644 --- a/drivers/iio/magnetometer/st_magn_i2c.c +++ b/drivers/iio/magnetometer/st_magn_i2c.c @@ -44,6 +44,10 @@ static const struct of_device_id st_magn_of_match[] = { .compatible = "st,lis2mdl", .data = LIS2MDL_MAGN_DEV_NAME, }, + { + .compatible = "st,lsm9ds1-magn", + .data = LSM9DS1_MAGN_DEV_NAME, + }, {}, }; MODULE_DEVICE_TABLE(of, st_magn_of_match); @@ -90,6 +94,7 @@ static const 
struct i2c_device_id st_magn_id_table[] = { { LIS3MDL_MAGN_DEV_NAME }, { LSM303AGR_MAGN_DEV_NAME }, { LIS2MDL_MAGN_DEV_NAME }, + { LSM9DS1_MAGN_DEV_NAME }, {}, }; MODULE_DEVICE_TABLE(i2c, st_magn_id_table); diff --git a/drivers/iio/magnetometer/st_magn_spi.c b/drivers/iio/magnetometer/st_magn_spi.c index 7b7cd08fcc32..cb05fcd9ddfe 100644 --- a/drivers/iio/magnetometer/st_magn_spi.c +++ b/drivers/iio/magnetometer/st_magn_spi.c @@ -23,6 +23,8 @@ * For new single-chip sensors use as compatible string. * For old single-chip devices keep -magn to maintain * compatibility + * For multi-chip devices, use -magn to distinguish which + * capability is being used */ static const struct of_device_id st_magn_of_match[] = { { @@ -37,6 +39,10 @@ static const struct of_device_id st_magn_of_match[] = { .compatible = "st,lis2mdl", .data = LIS2MDL_MAGN_DEV_NAME, }, + { + .compatible = "st,lsm9ds1-magn", + .data = LSM9DS1_MAGN_DEV_NAME, + }, {} }; MODULE_DEVICE_TABLE(of, st_magn_of_match); @@ -79,6 +85,7 @@ static const struct spi_device_id st_magn_id_table[] = { { LIS3MDL_MAGN_DEV_NAME }, { LSM303AGR_MAGN_DEV_NAME }, { LIS2MDL_MAGN_DEV_NAME }, + { LSM9DS1_MAGN_DEV_NAME }, {}, }; MODULE_DEVICE_TABLE(spi, st_magn_id_table); diff --git a/drivers/iio/potentiometer/Kconfig b/drivers/iio/potentiometer/Kconfig index 79ec2eba4969..6303cbe79903 100644 --- a/drivers/iio/potentiometer/Kconfig +++ b/drivers/iio/potentiometer/Kconfig @@ -90,6 +90,18 @@ config MCP4531 To compile this driver as a module, choose M here: the module will be called mcp4531. +config MCP41010 + tristate "Microchip MCP41xxx/MCP42xxx Digital Potentiometer driver" + depends on SPI + help + Say yes here to build support for the Microchip + MCP41010, MCP41050, MCP41100, + MCP42010, MCP42050, MCP42100 + digital potentiometer chips. + + To compile this driver as a module, choose M here: the + module will be called mcp41010. 
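
Once built, the new mcp41010 module exposes its wipers through the standard IIO sysfs interface rather than a custom one. A minimal user-space sketch of setting a wiper position follows; the device index and the value 128 are examples only, while the attribute name follows from the indexed IIO_RESISTANCE output channels the driver registers:

#include <stdio.h>

/* Set wiper 0 of the first IIO device to mid-scale (128 of 0..255). */
int main(void)
{
	FILE *f = fopen("/sys/bus/iio/devices/iio:device0/out_resistance0_raw", "w");

	if (!f)
		return 1;
	fprintf(f, "%d\n", 128);
	fclose(f);
	return 0;
}

Reading the same attribute back returns the value cached in the driver's value[] array, since the chip itself offers no register readback over SPI.
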
+ config TPL0102 tristate "Texas Instruments digital potentiometer driver" depends on I2C diff --git a/drivers/iio/potentiometer/Makefile b/drivers/iio/potentiometer/Makefile index 4af657883c3f..8ff55138cf12 100644 --- a/drivers/iio/potentiometer/Makefile +++ b/drivers/iio/potentiometer/Makefile @@ -11,4 +11,5 @@ obj-$(CONFIG_MAX5487) += max5487.o obj-$(CONFIG_MCP4018) += mcp4018.o obj-$(CONFIG_MCP4131) += mcp4131.o obj-$(CONFIG_MCP4531) += mcp4531.o +obj-$(CONFIG_MCP41010) += mcp41010.o obj-$(CONFIG_TPL0102) += tpl0102.o diff --git a/drivers/iio/potentiometer/mcp41010.c b/drivers/iio/potentiometer/mcp41010.c new file mode 100644 index 000000000000..2368b39debf5 --- /dev/null +++ b/drivers/iio/potentiometer/mcp41010.c @@ -0,0 +1,203 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Industrial I/O driver for Microchip digital potentiometers + * + * Copyright (c) 2018 Chris Coffey + * Based on: Slawomir Stepien's code from mcp4131.c + * + * Datasheet: http://ww1.microchip.com/downloads/en/devicedoc/11195c.pdf + * + * DEVID #Wipers #Positions Resistance (kOhm) + * mcp41010 1 256 10 + * mcp41050 1 256 50 + * mcp41100 1 256 100 + * mcp42010 2 256 10 + * mcp42050 2 256 50 + * mcp42100 2 256 100 + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define MCP41010_MAX_WIPERS 2 +#define MCP41010_WRITE BIT(4) +#define MCP41010_WIPER_MAX 255 +#define MCP41010_WIPER_CHANNEL BIT(0) + +struct mcp41010_cfg { + char name[16]; + int wipers; + int kohms; +}; + +enum mcp41010_type { + MCP41010, + MCP41050, + MCP41100, + MCP42010, + MCP42050, + MCP42100, +}; + +static const struct mcp41010_cfg mcp41010_cfg[] = { + [MCP41010] = { .name = "mcp41010", .wipers = 1, .kohms = 10, }, + [MCP41050] = { .name = "mcp41050", .wipers = 1, .kohms = 50, }, + [MCP41100] = { .name = "mcp41100", .wipers = 1, .kohms = 100, }, + [MCP42010] = { .name = "mcp42010", .wipers = 2, .kohms = 10, }, + [MCP42050] = { .name = "mcp42050", .wipers = 2, .kohms = 50, }, + [MCP42100] = { .name = "mcp42100", .wipers = 2, .kohms = 100, }, +}; + +struct mcp41010_data { + struct spi_device *spi; + const struct mcp41010_cfg *cfg; + struct mutex lock; /* Protect write sequences */ + unsigned int value[MCP41010_MAX_WIPERS]; /* Cache wiper values */ + u8 buf[2] ____cacheline_aligned; +}; + +#define MCP41010_CHANNEL(ch) { \ + .type = IIO_RESISTANCE, \ + .indexed = 1, \ + .output = 1, \ + .channel = (ch), \ + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \ +} + +static const struct iio_chan_spec mcp41010_channels[] = { + MCP41010_CHANNEL(0), + MCP41010_CHANNEL(1), +}; + +static int mcp41010_read_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int *val, int *val2, long mask) +{ + struct mcp41010_data *data = iio_priv(indio_dev); + int channel = chan->channel; + + switch (mask) { + case IIO_CHAN_INFO_RAW: + *val = data->value[channel]; + return IIO_VAL_INT; + + case IIO_CHAN_INFO_SCALE: + *val = 1000 * data->cfg->kohms; + *val2 = MCP41010_WIPER_MAX; + return IIO_VAL_FRACTIONAL; + } + + return -EINVAL; +} + +static int mcp41010_write_raw(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + int val, int val2, long mask) +{ + int err; + struct mcp41010_data *data = iio_priv(indio_dev); + int channel = chan->channel; + + if (mask != IIO_CHAN_INFO_RAW) + return -EINVAL; + + if (val > MCP41010_WIPER_MAX || val < 0) + return -EINVAL; + + mutex_lock(&data->lock); + + data->buf[0] = MCP41010_WIPER_CHANNEL << channel; + 
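	/*
+	 * The channel-select bit placed in buf[0] above is OR'ed with
+	 * MCP41010_WRITE (bit 4) to form the write-data command byte;
+	 * buf[1] then carries the 8-bit wiper position.
+	 */
+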
data->buf[0] |= MCP41010_WRITE; + data->buf[1] = val & 0xff; + + err = spi_write(data->spi, data->buf, sizeof(data->buf)); + if (!err) + data->value[channel] = val; + + mutex_unlock(&data->lock); + + return err; +} + +static const struct iio_info mcp41010_info = { + .read_raw = mcp41010_read_raw, + .write_raw = mcp41010_write_raw, +}; + +static int mcp41010_probe(struct spi_device *spi) +{ + int err; + struct device *dev = &spi->dev; + struct mcp41010_data *data; + struct iio_dev *indio_dev; + + indio_dev = devm_iio_device_alloc(dev, sizeof(*data)); + if (!indio_dev) + return -ENOMEM; + + data = iio_priv(indio_dev); + spi_set_drvdata(spi, indio_dev); + data->spi = spi; + data->cfg = of_device_get_match_data(&spi->dev); + if (!data->cfg) + data->cfg = &mcp41010_cfg[spi_get_device_id(spi)->driver_data]; + + mutex_init(&data->lock); + + indio_dev->dev.parent = dev; + indio_dev->info = &mcp41010_info; + indio_dev->channels = mcp41010_channels; + indio_dev->num_channels = data->cfg->wipers; + indio_dev->name = data->cfg->name; + + err = devm_iio_device_register(dev, indio_dev); + if (err) + dev_info(&spi->dev, "Unable to register %s\n", indio_dev->name); + + return err; +} + +static const struct of_device_id mcp41010_match[] = { + { .compatible = "microchip,mcp41010", .data = &mcp41010_cfg[MCP41010] }, + { .compatible = "microchip,mcp41050", .data = &mcp41010_cfg[MCP41050] }, + { .compatible = "microchip,mcp41100", .data = &mcp41010_cfg[MCP41100] }, + { .compatible = "microchip,mcp42010", .data = &mcp41010_cfg[MCP42010] }, + { .compatible = "microchip,mcp42050", .data = &mcp41010_cfg[MCP42050] }, + { .compatible = "microchip,mcp42100", .data = &mcp41010_cfg[MCP42100] }, + {} +}; +MODULE_DEVICE_TABLE(of, mcp41010_match); + +static const struct spi_device_id mcp41010_id[] = { + { "mcp41010", MCP41010 }, + { "mcp41050", MCP41050 }, + { "mcp41100", MCP41100 }, + { "mcp42010", MCP42010 }, + { "mcp42050", MCP42050 }, + { "mcp42100", MCP42100 }, + {} +}; +MODULE_DEVICE_TABLE(spi, mcp41010_id); + +static struct spi_driver mcp41010_driver = { + .driver = { + .name = "mcp41010", + .of_match_table = mcp41010_match, + }, + .probe = mcp41010_probe, + .id_table = mcp41010_id, +}; + +module_spi_driver(mcp41010_driver); + +MODULE_AUTHOR("Chris Coffey "); +MODULE_DESCRIPTION("MCP41010 digital potentiometer"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/iio/potentiometer/mcp4131.c b/drivers/iio/potentiometer/mcp4131.c index b3e30db246cc..efe035ce010d 100644 --- a/drivers/iio/potentiometer/mcp4131.c +++ b/drivers/iio/potentiometer/mcp4131.c @@ -42,6 +42,7 @@ #include #include #include +#include #include #define MCP4131_WRITE (0x00 << 2) @@ -243,7 +244,7 @@ static int mcp4131_probe(struct spi_device *spi) { int err; struct device *dev = &spi->dev; - unsigned long devid = spi_get_device_id(spi)->driver_data; + unsigned long devid; struct mcp4131_data *data; struct iio_dev *indio_dev; @@ -254,7 +255,11 @@ static int mcp4131_probe(struct spi_device *spi) data = iio_priv(indio_dev); spi_set_drvdata(spi, indio_dev); data->spi = spi; - data->cfg = &mcp4131_cfg[devid]; + data->cfg = of_device_get_match_data(&spi->dev); + if (!data->cfg) { + devid = spi_get_device_id(spi)->driver_data; + data->cfg = &mcp4131_cfg[devid]; + } mutex_init(&data->lock); @@ -273,7 +278,6 @@ static int mcp4131_probe(struct spi_device *spi) return 0; } -#if defined(CONFIG_OF) static const struct of_device_id mcp4131_dt_ids[] = { { .compatible = "microchip,mcp4131-502", .data = &mcp4131_cfg[MCP413x_502] }, @@ -406,7 +410,6 @@ static const 
struct of_device_id mcp4131_dt_ids[] = { {} }; MODULE_DEVICE_TABLE(of, mcp4131_dt_ids); -#endif /* CONFIG_OF */ static const struct spi_device_id mcp4131_id[] = { { "mcp4131-502", MCP413x_502 }, diff --git a/drivers/iio/potentiometer/tpl0102.c b/drivers/iio/potentiometer/tpl0102.c index ca1cce58fe20..a0a07e47f13f 100644 --- a/drivers/iio/potentiometer/tpl0102.c +++ b/drivers/iio/potentiometer/tpl0102.c @@ -15,7 +15,7 @@ struct tpl0102_cfg { int wipers; - int max_pos; + int avail[3]; int kohms; }; @@ -28,16 +28,16 @@ enum tpl0102_type { static const struct tpl0102_cfg tpl0102_cfg[] = { /* on-semiconductor parts */ - [CAT5140_503] = { .wipers = 1, .max_pos = 256, .kohms = 50, }, - [CAT5140_104] = { .wipers = 1, .max_pos = 256, .kohms = 100, }, + [CAT5140_503] = { .wipers = 1, .avail = { 0, 1, 255 }, .kohms = 50, }, + [CAT5140_104] = { .wipers = 1, .avail = { 0, 1, 255 }, .kohms = 100, }, /* ti parts */ - [TPL0102_104] = { .wipers = 2, .max_pos = 256, .kohms = 100 }, - [TPL0401_103] = { .wipers = 1, .max_pos = 128, .kohms = 10, }, + [TPL0102_104] = { .wipers = 2, .avail = { 0, 1, 255 }, .kohms = 100 }, + [TPL0401_103] = { .wipers = 1, .avail = { 0, 1, 127 }, .kohms = 10, }, }; struct tpl0102_data { struct regmap *regmap; - unsigned long devid; + const struct tpl0102_cfg *cfg; }; static const struct regmap_config tpl0102_regmap_config = { @@ -52,6 +52,7 @@ static const struct regmap_config tpl0102_regmap_config = { .channel = (ch), \ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \ + .info_mask_separate_available = BIT(IIO_CHAN_INFO_RAW), \ } static const struct iio_chan_spec tpl0102_channels[] = { @@ -72,14 +73,32 @@ static int tpl0102_read_raw(struct iio_dev *indio_dev, return ret ? ret : IIO_VAL_INT; } case IIO_CHAN_INFO_SCALE: - *val = 1000 * tpl0102_cfg[data->devid].kohms; - *val2 = tpl0102_cfg[data->devid].max_pos; + *val = 1000 * data->cfg->kohms; + *val2 = data->cfg->avail[2] + 1; return IIO_VAL_FRACTIONAL; } return -EINVAL; } +static int tpl0102_read_avail(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, + const int **vals, int *type, int *length, + long mask) +{ + struct tpl0102_data *data = iio_priv(indio_dev); + + switch (mask) { + case IIO_CHAN_INFO_RAW: + *length = ARRAY_SIZE(data->cfg->avail); + *vals = data->cfg->avail; + *type = IIO_VAL_INT; + return IIO_AVAIL_RANGE; + } + + return -EINVAL; +} + static int tpl0102_write_raw(struct iio_dev *indio_dev, struct iio_chan_spec const *chan, int val, int val2, long mask) @@ -89,7 +108,7 @@ static int tpl0102_write_raw(struct iio_dev *indio_dev, if (mask != IIO_CHAN_INFO_RAW) return -EINVAL; - if (val >= tpl0102_cfg[data->devid].max_pos || val < 0) + if (val > data->cfg->avail[2] || val < 0) return -EINVAL; return regmap_write(data->regmap, chan->channel, val); @@ -97,6 +116,7 @@ static int tpl0102_write_raw(struct iio_dev *indio_dev, static const struct iio_info tpl0102_info = { .read_raw = tpl0102_read_raw, + .read_avail = tpl0102_read_avail, .write_raw = tpl0102_write_raw, }; @@ -113,7 +133,7 @@ static int tpl0102_probe(struct i2c_client *client, data = iio_priv(indio_dev); i2c_set_clientdata(client, indio_dev); - data->devid = id->driver_data; + data->cfg = &tpl0102_cfg[id->driver_data]; data->regmap = devm_regmap_init_i2c(client, &tpl0102_regmap_config); if (IS_ERR(data->regmap)) { dev_err(dev, "regmap initialization failed\n"); @@ -123,7 +143,7 @@ static int tpl0102_probe(struct i2c_client *client, indio_dev->dev.parent = dev; indio_dev->info = &tpl0102_info; 
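	/* One IIO channel per wiper; the count now comes from the matched cfg. */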
indio_dev->channels = tpl0102_channels; - indio_dev->num_channels = tpl0102_cfg[data->devid].wipers; + indio_dev->num_channels = data->cfg->wipers; indio_dev->name = client->name; return devm_iio_device_register(dev, indio_dev); diff --git a/drivers/iio/resolver/Kconfig b/drivers/iio/resolver/Kconfig index 2ced9f22aa70..786801be54f6 100644 --- a/drivers/iio/resolver/Kconfig +++ b/drivers/iio/resolver/Kconfig @@ -3,6 +3,16 @@ # menu "Resolver to digital converters" +config AD2S90 + tristate "Analog Devices ad2s90 driver" + depends on SPI + help + Say yes here to build support for Analog Devices spi resolver + to digital converters, ad2s90, provides direct access via sysfs. + + To compile this driver as a module, choose M here: the + module will be called ad2s90. + config AD2S1200 tristate "Analog Devices ad2s1200/ad2s1205 driver" depends on SPI diff --git a/drivers/iio/resolver/Makefile b/drivers/iio/resolver/Makefile index 4e1dccae07e7..398d82d50028 100644 --- a/drivers/iio/resolver/Makefile +++ b/drivers/iio/resolver/Makefile @@ -2,4 +2,5 @@ # Makefile for Resolver/Synchro drivers # +obj-$(CONFIG_AD2S90) += ad2s90.o obj-$(CONFIG_AD2S1200) += ad2s1200.o diff --git a/drivers/staging/iio/resolver/ad2s90.c b/drivers/iio/resolver/ad2s90.c similarity index 58% rename from drivers/staging/iio/resolver/ad2s90.c rename to drivers/iio/resolver/ad2s90.c index 59586947a936..a41f5cb10da5 100644 --- a/drivers/staging/iio/resolver/ad2s90.c +++ b/drivers/iio/resolver/ad2s90.c @@ -1,12 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* * ad2s90.c simple support for the ADI Resolver to Digital Converters: AD2S90 * * Copyright (c) 2010-2010 Analog Devices Inc. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- * */ #include #include @@ -19,8 +15,14 @@ #include #include +/* + * Although chip's max frequency is 2Mhz, it needs 600ns between CS and the + * first falling edge of SCLK, so frequency should be at most 1 / (2 * 6e-7) + */ +#define AD2S90_MAX_SPI_FREQ_HZ 830000 + struct ad2s90_state { - struct mutex lock; + struct mutex lock; /* lock to protect rx buffer */ struct spi_device *sdev; u8 rx[2] ____cacheline_aligned; }; @@ -34,16 +36,32 @@ static int ad2s90_read_raw(struct iio_dev *indio_dev, int ret; struct ad2s90_state *st = iio_priv(indio_dev); - mutex_lock(&st->lock); - ret = spi_read(st->sdev, st->rx, 2); - if (ret) - goto error_ret; - *val = (((u16)(st->rx[0])) << 4) | ((st->rx[1] & 0xF0) >> 4); - -error_ret: - mutex_unlock(&st->lock); - - return IIO_VAL_INT; + if (chan->type != IIO_ANGL) + return -EINVAL; + + switch (m) { + case IIO_CHAN_INFO_SCALE: + /* 2 * Pi / 2^12 */ + *val = 6283; /* mV */ + *val2 = 12; + return IIO_VAL_FRACTIONAL_LOG2; + case IIO_CHAN_INFO_RAW: + mutex_lock(&st->lock); + ret = spi_read(st->sdev, st->rx, 2); + if (ret < 0) { + mutex_unlock(&st->lock); + return ret; + } + *val = (((u16)(st->rx[0])) << 4) | ((st->rx[1] & 0xF0) >> 4); + + mutex_unlock(&st->lock); + + return IIO_VAL_INT; + default: + break; + } + + return -EINVAL; } static const struct iio_info ad2s90_info = { @@ -54,14 +72,19 @@ static const struct iio_chan_spec ad2s90_chan = { .type = IIO_ANGL, .indexed = 1, .channel = 0, - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE), }; static int ad2s90_probe(struct spi_device *spi) { struct iio_dev *indio_dev; struct ad2s90_state *st; - int ret = 0; + + if (spi->max_speed_hz > AD2S90_MAX_SPI_FREQ_HZ) { + dev_err(&spi->dev, "SPI CLK, %d Hz exceeds %d Hz\n", + spi->max_speed_hz, AD2S90_MAX_SPI_FREQ_HZ); + return -EINVAL; + } indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st)); if (!indio_dev) @@ -78,18 +101,15 @@ static int ad2s90_probe(struct spi_device *spi) indio_dev->num_channels = 1; indio_dev->name = spi_get_device_id(spi)->name; - ret = devm_iio_device_register(indio_dev->dev.parent, indio_dev); - if (ret) - return ret; - - /* need 600ns between CS and the first falling edge of SCLK */ - spi->max_speed_hz = 830000; - spi->mode = SPI_MODE_3; - spi_setup(spi); - - return 0; + return devm_iio_device_register(indio_dev->dev.parent, indio_dev); } +static const struct of_device_id ad2s90_of_match[] = { + { .compatible = "adi,ad2s90", }, + {} +}; +MODULE_DEVICE_TABLE(of, ad2s90_of_match); + static const struct spi_device_id ad2s90_id[] = { { "ad2s90" }, {} @@ -99,6 +119,7 @@ MODULE_DEVICE_TABLE(spi, ad2s90_id); static struct spi_driver ad2s90_driver = { .driver = { .name = "ad2s90", + .of_match_table = ad2s90_of_match, }, .probe = ad2s90_probe, .id_table = ad2s90_id, diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile index 867cee5e27b2..69dee36e0e89 100644 --- a/drivers/infiniband/core/Makefile +++ b/drivers/infiniband/core/Makefile @@ -38,4 +38,4 @@ ib_uverbs-y := uverbs_main.o uverbs_cmd.o uverbs_marshall.o \ uverbs_std_types_cq.o \ uverbs_std_types_flow_action.o uverbs_std_types_dm.o \ uverbs_std_types_mr.o uverbs_std_types_counters.o \ - uverbs_uapi.o + uverbs_uapi.o uverbs_std_types_device.o diff --git a/drivers/infiniband/core/agent.c b/drivers/infiniband/core/agent.c index 324ef85a13b6..f82b4260de42 100644 --- a/drivers/infiniband/core/agent.c +++ b/drivers/infiniband/core/agent.c @@ -137,13 +137,13 @@ void agent_send_response(const struct 
ib_mad_hdr *mad_hdr, const struct ib_grh * err2: ib_free_send_mad(send_buf); err1: - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, RDMA_DESTROY_AH_SLEEPABLE); } static void agent_send_handler(struct ib_mad_agent *mad_agent, struct ib_mad_send_wc *mad_send_wc) { - rdma_destroy_ah(mad_send_wc->send_buf->ah); + rdma_destroy_ah(mad_send_wc->send_buf->ah, RDMA_DESTROY_AH_SLEEPABLE); ib_free_send_mad(mad_send_wc->send_buf); } diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c index 5b2fce4a7091..7b04590f307f 100644 --- a/drivers/infiniband/core/cache.c +++ b/drivers/infiniband/core/cache.c @@ -215,10 +215,6 @@ static void free_gid_entry_locked(struct ib_gid_table_entry *entry) dev_dbg(&device->dev, "%s port=%d index=%d gid %pI6\n", __func__, port_num, entry->attr.index, entry->attr.gid.raw); - if (rdma_cap_roce_gid_table(device, port_num) && - entry->state != GID_TABLE_ENTRY_INVALID) - device->del_gid(&entry->attr, &entry->context); - write_lock_irq(&table->rwlock); /* @@ -324,7 +320,7 @@ static int add_roce_gid(struct ib_gid_table_entry *entry) return -EINVAL; } if (rdma_cap_roce_gid_table(attr->device, attr->port_num)) { - ret = attr->device->add_gid(attr, &entry->context); + ret = attr->device->ops.add_gid(attr, &entry->context); if (ret) { dev_err(&attr->device->dev, "%s GID add failed port=%d index=%d\n", @@ -364,6 +360,9 @@ static void del_gid(struct ib_device *ib_dev, u8 port, table->data_vec[ix] = NULL; write_unlock_irq(&table->rwlock); + if (rdma_cap_roce_gid_table(ib_dev, port)) + ib_dev->ops.del_gid(&entry->attr, &entry->context); + put_gid_entry_locked(entry); } @@ -548,8 +547,8 @@ int ib_cache_gid_add(struct ib_device *ib_dev, u8 port, unsigned long mask; int ret; - if (ib_dev->get_netdev) { - idev = ib_dev->get_netdev(ib_dev, port); + if (ib_dev->ops.get_netdev) { + idev = ib_dev->ops.get_netdev(ib_dev, port); if (idev && attr->ndev != idev) { union ib_gid default_gid; @@ -1296,9 +1295,9 @@ static int config_non_roce_gid_cache(struct ib_device *device, mutex_lock(&table->lock); for (i = 0; i < gid_tbl_len; ++i) { - if (!device->query_gid) + if (!device->ops.query_gid) continue; - ret = device->query_gid(device, port, i, &gid_attr.gid); + ret = device->ops.query_gid(device, port, i, &gid_attr.gid); if (ret) { dev_warn(&device->dev, "query_gid failed (%d) for index %d\n", ret, diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c index edb2cb758be7..37980c7564c0 100644 --- a/drivers/infiniband/core/cm.c +++ b/drivers/infiniband/core/cm.c @@ -343,7 +343,7 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv, ret = -ENODEV; goto out; } - ah = rdma_create_ah(mad_agent->qp->pd, &av->ah_attr); + ah = rdma_create_ah(mad_agent->qp->pd, &av->ah_attr, 0); if (IS_ERR(ah)) { ret = PTR_ERR(ah); goto out; @@ -355,7 +355,7 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv, GFP_ATOMIC, IB_MGMT_BASE_VERSION); if (IS_ERR(m)) { - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, 0); ret = PTR_ERR(m); goto out; } @@ -400,7 +400,7 @@ static int cm_create_response_msg_ah(struct cm_port *port, static void cm_free_msg(struct ib_mad_send_buf *msg) { if (msg->ah) - rdma_destroy_ah(msg->ah); + rdma_destroy_ah(msg->ah, 0); if (msg->context[0]) cm_deref_id(msg->context[0]); ib_free_send_mad(msg); diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c index 15d5bb7bf6bb..63a7cc00bae0 100644 --- a/drivers/infiniband/core/cma.c +++ b/drivers/infiniband/core/cma.c @@ -494,7 +494,7 @@ static void _cma_attach_to_dev(struct rdma_id_private 
*id_priv, id_priv->id.route.addr.dev_addr.transport = rdma_node_get_transport(cma_dev->device->node_type); list_add_tail(&id_priv->list, &cma_dev->id_list); - rdma_restrack_add(&id_priv->res); + rdma_restrack_kadd(&id_priv->res); } static void cma_attach_to_dev(struct rdma_id_private *id_priv, diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c index 8c2dfb3e294e..3ec2c415bb70 100644 --- a/drivers/infiniband/core/cma_configfs.c +++ b/drivers/infiniband/core/cma_configfs.c @@ -33,7 +33,10 @@ #include #include #include +#include + #include "core_priv.h" +#include "cma_priv.h" struct cma_device; diff --git a/drivers/infiniband/core/cma_priv.h b/drivers/infiniband/core/cma_priv.h index 194cfe78c447..cf47c69436a7 100644 --- a/drivers/infiniband/core/cma_priv.h +++ b/drivers/infiniband/core/cma_priv.h @@ -94,4 +94,32 @@ struct rdma_id_private { */ struct rdma_restrack_entry res; }; + +#if IS_ENABLED(CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS) +int cma_configfs_init(void); +void cma_configfs_exit(void); +#else +static inline int cma_configfs_init(void) +{ + return 0; +} + +static inline void cma_configfs_exit(void) +{ +} +#endif + +void cma_ref_dev(struct cma_device *dev); +void cma_deref_dev(struct cma_device *dev); +typedef bool (*cma_device_filter)(struct ib_device *, void *); +struct cma_device *cma_enum_devices_by_ibdev(cma_device_filter filter, + void *cookie); +int cma_get_default_gid_type(struct cma_device *dev, unsigned int port); +int cma_set_default_gid_type(struct cma_device *dev, unsigned int port, + enum ib_gid_type default_gid_type); +int cma_get_default_roce_tos(struct cma_device *dev, unsigned int port); +int cma_set_default_roce_tos(struct cma_device *dev, unsigned int port, + u8 default_roce_tos); +struct ib_device *cma_get_ib_dev(struct cma_device *dev); + #endif /* _CMA_PRIV_H */ diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h index bb9007a0cca7..3cd830d52967 100644 --- a/drivers/infiniband/core/core_priv.h +++ b/drivers/infiniband/core/core_priv.h @@ -54,35 +54,6 @@ struct pkey_index_qp_list { struct list_head qp_list; }; -#if IS_ENABLED(CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS) -int cma_configfs_init(void); -void cma_configfs_exit(void); -#else -static inline int cma_configfs_init(void) -{ - return 0; -} - -static inline void cma_configfs_exit(void) -{ -} -#endif -struct cma_device; -void cma_ref_dev(struct cma_device *cma_dev); -void cma_deref_dev(struct cma_device *cma_dev); -typedef bool (*cma_device_filter)(struct ib_device *, void *); -struct cma_device *cma_enum_devices_by_ibdev(cma_device_filter filter, - void *cookie); -int cma_get_default_gid_type(struct cma_device *cma_dev, - unsigned int port); -int cma_set_default_gid_type(struct cma_device *cma_dev, - unsigned int port, - enum ib_gid_type default_gid_type); -int cma_get_default_roce_tos(struct cma_device *cma_dev, unsigned int port); -int cma_set_default_roce_tos(struct cma_device *a_dev, unsigned int port, - u8 default_roce_tos); -struct ib_device *cma_get_ib_dev(struct cma_device *cma_dev); - int ib_device_register_sysfs(struct ib_device *device, int (*port_callback)(struct ib_device *, u8, struct kobject *)); @@ -244,10 +215,10 @@ static inline int ib_security_modify_qp(struct ib_qp *qp, int qp_attr_mask, struct ib_udata *udata) { - return qp->device->modify_qp(qp->real_qp, - qp_attr, - qp_attr_mask, - udata); + return qp->device->ops.modify_qp(qp->real_qp, + qp_attr, + qp_attr_mask, + udata); } static inline int 
ib_create_qp_security(struct ib_qp *qp, @@ -296,6 +267,7 @@ static inline int ib_mad_enforce_security(struct ib_mad_agent_private *map, #endif struct ib_device *ib_device_get_by_index(u32 ifindex); +void ib_device_put(struct ib_device *device); /* RDMA device netlink */ void nldev_init(void); void nldev_exit(void); @@ -308,10 +280,10 @@ static inline struct ib_qp *_ib_create_qp(struct ib_device *dev, { struct ib_qp *qp; - if (!dev->create_qp) + if (!dev->ops.create_qp) return ERR_PTR(-EOPNOTSUPP); - qp = dev->create_qp(pd, attr, udata); + qp = dev->ops.create_qp(pd, attr, udata); if (IS_ERR(qp)) return qp; @@ -325,7 +297,10 @@ static inline struct ib_qp *_ib_create_qp(struct ib_device *dev, */ if (attr->qp_type < IB_QPT_XRC_INI) { qp->res.type = RDMA_RESTRACK_QP; - rdma_restrack_add(&qp->res); + if (uobj) + rdma_restrack_uadd(&qp->res); + else + rdma_restrack_kadd(&qp->res); } else qp->res.valid = false; diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c index b1e5365ddafa..d61e5e1427c2 100644 --- a/drivers/infiniband/core/cq.c +++ b/drivers/infiniband/core/cq.c @@ -145,7 +145,7 @@ struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, struct ib_cq *cq; int ret = -ENOMEM; - cq = dev->create_cq(dev, &cq_attr, NULL, NULL); + cq = dev->ops.create_cq(dev, &cq_attr, NULL, NULL); if (IS_ERR(cq)) return cq; @@ -162,7 +162,7 @@ struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, cq->res.type = RDMA_RESTRACK_CQ; rdma_restrack_set_task(&cq->res, caller); - rdma_restrack_add(&cq->res); + rdma_restrack_kadd(&cq->res); switch (cq->poll_ctx) { case IB_POLL_DIRECT: @@ -193,7 +193,7 @@ out_free_wc: kfree(cq->wc); rdma_restrack_del(&cq->res); out_destroy_cq: - cq->device->destroy_cq(cq); + cq->device->ops.destroy_cq(cq); return ERR_PTR(ret); } EXPORT_SYMBOL(__ib_alloc_cq); @@ -225,7 +225,7 @@ void ib_free_cq(struct ib_cq *cq) kfree(cq->wc); rdma_restrack_del(&cq->res); - ret = cq->device->destroy_cq(cq); + ret = cq->device->ops.destroy_cq(cq); WARN_ON_ONCE(ret); } EXPORT_SYMBOL(ib_free_cq); diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index 87eb4f2cdd7d..47ab34ee1a9d 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -96,7 +96,7 @@ static struct notifier_block ibdev_lsm_nb = { static int ib_device_check_mandatory(struct ib_device *device) { -#define IB_MANDATORY_FUNC(x) { offsetof(struct ib_device, x), #x } +#define IB_MANDATORY_FUNC(x) { offsetof(struct ib_device_ops, x), #x } static const struct { size_t offset; char *name; @@ -122,7 +122,8 @@ static int ib_device_check_mandatory(struct ib_device *device) int i; for (i = 0; i < ARRAY_SIZE(mandatory_table); ++i) { - if (!*(void **) ((void *) device + mandatory_table[i].offset)) { + if (!*(void **) ((void *) &device->ops + + mandatory_table[i].offset)) { dev_warn(&device->dev, "Device is missing mandatory function %s\n", mandatory_table[i].name); @@ -145,7 +146,8 @@ static struct ib_device *__ib_device_get_by_index(u32 index) } /* - * Caller is responsible to return refrerence count by calling put_device() + * Caller must perform ib_device_put() to return the device reference count + * when ib_device_get_by_index() returns valid device pointer. 
*/ struct ib_device *ib_device_get_by_index(u32 index) { @@ -153,13 +155,21 @@ struct ib_device *ib_device_get_by_index(u32 index) down_read(&lists_rwsem); device = __ib_device_get_by_index(index); - if (device) - get_device(&device->dev); - + if (device) { + /* Do not return a device if unregistration has started. */ + if (!refcount_inc_not_zero(&device->refcount)) + device = NULL; + } up_read(&lists_rwsem); return device; } +void ib_device_put(struct ib_device *device) +{ + if (refcount_dec_and_test(&device->refcount)) + complete(&device->unreg_completion); +} + static struct ib_device *__ib_device_get_by_name(const char *name) { struct ib_device *device; @@ -293,6 +303,8 @@ struct ib_device *ib_alloc_device(size_t size) rwlock_init(&device->client_data_lock); INIT_LIST_HEAD(&device->client_data_list); INIT_LIST_HEAD(&device->port_list); + refcount_set(&device->refcount, 1); + init_completion(&device->unreg_completion); return device; } @@ -362,8 +374,8 @@ static int read_port_immutable(struct ib_device *device) return -ENOMEM; for (port = start_port; port <= end_port; ++port) { - ret = device->get_port_immutable(device, port, - &device->port_immutable[port]); + ret = device->ops.get_port_immutable( + device, port, &device->port_immutable[port]); if (ret) return ret; @@ -375,8 +387,8 @@ static int read_port_immutable(struct ib_device *device) void ib_get_device_fw_str(struct ib_device *dev, char *str) { - if (dev->get_dev_fw_str) - dev->get_dev_fw_str(dev, str); + if (dev->ops.get_dev_fw_str) + dev->ops.get_dev_fw_str(dev, str); else str[0] = '\0'; } @@ -525,7 +537,7 @@ static int setup_device(struct ib_device *device) } memset(&device->attrs, 0, sizeof(device->attrs)); - ret = device->query_device(device, &device->attrs, &uhw); + ret = device->ops.query_device(device, &device->attrs, &uhw); if (ret) { dev_warn(&device->dev, "Couldn't query the device attributes\n"); @@ -641,6 +653,13 @@ void ib_unregister_device(struct ib_device *device) struct ib_client_data *context, *tmp; unsigned long flags; + /* + * Wait for all netlink command callers to finish working on the + * device. 
+ */ + ib_device_put(device); + wait_for_completion(&device->unreg_completion); + mutex_lock(&device_mutex); down_write(&lists_rwsem); @@ -905,14 +924,14 @@ int ib_query_port(struct ib_device *device, return -EINVAL; memset(port_attr, 0, sizeof(*port_attr)); - err = device->query_port(device, port_num, port_attr); + err = device->ops.query_port(device, port_num, port_attr); if (err || port_attr->subnet_prefix) return err; if (rdma_port_get_link_layer(device, port_num) != IB_LINK_LAYER_INFINIBAND) return 0; - err = device->query_gid(device, port_num, 0, &gid); + err = device->ops.query_gid(device, port_num, 0, &gid); if (err) return err; @@ -946,8 +965,8 @@ void ib_enum_roce_netdev(struct ib_device *ib_dev, if (rdma_protocol_roce(ib_dev, port)) { struct net_device *idev = NULL; - if (ib_dev->get_netdev) - idev = ib_dev->get_netdev(ib_dev, port); + if (ib_dev->ops.get_netdev) + idev = ib_dev->ops.get_netdev(ib_dev, port); if (idev && idev->reg_state >= NETREG_UNREGISTERED) { @@ -1024,7 +1043,10 @@ int ib_enum_all_devs(nldev_callback nldev_cb, struct sk_buff *skb, int ib_query_pkey(struct ib_device *device, u8 port_num, u16 index, u16 *pkey) { - return device->query_pkey(device, port_num, index, pkey); + if (!rdma_is_port_valid(device, port_num)) + return -EINVAL; + + return device->ops.query_pkey(device, port_num, index, pkey); } EXPORT_SYMBOL(ib_query_pkey); @@ -1041,11 +1063,11 @@ int ib_modify_device(struct ib_device *device, int device_modify_mask, struct ib_device_modify *device_modify) { - if (!device->modify_device) + if (!device->ops.modify_device) return -ENOSYS; - return device->modify_device(device, device_modify_mask, - device_modify); + return device->ops.modify_device(device, device_modify_mask, + device_modify); } EXPORT_SYMBOL(ib_modify_device); @@ -1069,9 +1091,10 @@ int ib_modify_port(struct ib_device *device, if (!rdma_is_port_valid(device, port_num)) return -EINVAL; - if (device->modify_port) - rc = device->modify_port(device, port_num, port_modify_mask, - port_modify); + if (device->ops.modify_port) + rc = device->ops.modify_port(device, port_num, + port_modify_mask, + port_modify); else rc = rdma_protocol_roce(device, port_num) ? 
0 : -ENOSYS; return rc; @@ -1198,6 +1221,105 @@ struct net_device *ib_get_net_dev_by_params(struct ib_device *dev, } EXPORT_SYMBOL(ib_get_net_dev_by_params); +void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) +{ + struct ib_device_ops *dev_ops = &dev->ops; +#define SET_DEVICE_OP(ptr, name) \ + do { \ + if (ops->name) \ + if (!((ptr)->name)) \ + (ptr)->name = ops->name; \ + } while (0) + + SET_DEVICE_OP(dev_ops, add_gid); + SET_DEVICE_OP(dev_ops, alloc_dm); + SET_DEVICE_OP(dev_ops, alloc_fmr); + SET_DEVICE_OP(dev_ops, alloc_hw_stats); + SET_DEVICE_OP(dev_ops, alloc_mr); + SET_DEVICE_OP(dev_ops, alloc_mw); + SET_DEVICE_OP(dev_ops, alloc_pd); + SET_DEVICE_OP(dev_ops, alloc_rdma_netdev); + SET_DEVICE_OP(dev_ops, alloc_ucontext); + SET_DEVICE_OP(dev_ops, alloc_xrcd); + SET_DEVICE_OP(dev_ops, attach_mcast); + SET_DEVICE_OP(dev_ops, check_mr_status); + SET_DEVICE_OP(dev_ops, create_ah); + SET_DEVICE_OP(dev_ops, create_counters); + SET_DEVICE_OP(dev_ops, create_cq); + SET_DEVICE_OP(dev_ops, create_flow); + SET_DEVICE_OP(dev_ops, create_flow_action_esp); + SET_DEVICE_OP(dev_ops, create_qp); + SET_DEVICE_OP(dev_ops, create_rwq_ind_table); + SET_DEVICE_OP(dev_ops, create_srq); + SET_DEVICE_OP(dev_ops, create_wq); + SET_DEVICE_OP(dev_ops, dealloc_dm); + SET_DEVICE_OP(dev_ops, dealloc_fmr); + SET_DEVICE_OP(dev_ops, dealloc_mw); + SET_DEVICE_OP(dev_ops, dealloc_pd); + SET_DEVICE_OP(dev_ops, dealloc_ucontext); + SET_DEVICE_OP(dev_ops, dealloc_xrcd); + SET_DEVICE_OP(dev_ops, del_gid); + SET_DEVICE_OP(dev_ops, dereg_mr); + SET_DEVICE_OP(dev_ops, destroy_ah); + SET_DEVICE_OP(dev_ops, destroy_counters); + SET_DEVICE_OP(dev_ops, destroy_cq); + SET_DEVICE_OP(dev_ops, destroy_flow); + SET_DEVICE_OP(dev_ops, destroy_flow_action); + SET_DEVICE_OP(dev_ops, destroy_qp); + SET_DEVICE_OP(dev_ops, destroy_rwq_ind_table); + SET_DEVICE_OP(dev_ops, destroy_srq); + SET_DEVICE_OP(dev_ops, destroy_wq); + SET_DEVICE_OP(dev_ops, detach_mcast); + SET_DEVICE_OP(dev_ops, disassociate_ucontext); + SET_DEVICE_OP(dev_ops, drain_rq); + SET_DEVICE_OP(dev_ops, drain_sq); + SET_DEVICE_OP(dev_ops, get_dev_fw_str); + SET_DEVICE_OP(dev_ops, get_dma_mr); + SET_DEVICE_OP(dev_ops, get_hw_stats); + SET_DEVICE_OP(dev_ops, get_link_layer); + SET_DEVICE_OP(dev_ops, get_netdev); + SET_DEVICE_OP(dev_ops, get_port_immutable); + SET_DEVICE_OP(dev_ops, get_vector_affinity); + SET_DEVICE_OP(dev_ops, get_vf_config); + SET_DEVICE_OP(dev_ops, get_vf_stats); + SET_DEVICE_OP(dev_ops, map_mr_sg); + SET_DEVICE_OP(dev_ops, map_phys_fmr); + SET_DEVICE_OP(dev_ops, mmap); + SET_DEVICE_OP(dev_ops, modify_ah); + SET_DEVICE_OP(dev_ops, modify_cq); + SET_DEVICE_OP(dev_ops, modify_device); + SET_DEVICE_OP(dev_ops, modify_flow_action_esp); + SET_DEVICE_OP(dev_ops, modify_port); + SET_DEVICE_OP(dev_ops, modify_qp); + SET_DEVICE_OP(dev_ops, modify_srq); + SET_DEVICE_OP(dev_ops, modify_wq); + SET_DEVICE_OP(dev_ops, peek_cq); + SET_DEVICE_OP(dev_ops, poll_cq); + SET_DEVICE_OP(dev_ops, post_recv); + SET_DEVICE_OP(dev_ops, post_send); + SET_DEVICE_OP(dev_ops, post_srq_recv); + SET_DEVICE_OP(dev_ops, process_mad); + SET_DEVICE_OP(dev_ops, query_ah); + SET_DEVICE_OP(dev_ops, query_device); + SET_DEVICE_OP(dev_ops, query_gid); + SET_DEVICE_OP(dev_ops, query_pkey); + SET_DEVICE_OP(dev_ops, query_port); + SET_DEVICE_OP(dev_ops, query_qp); + SET_DEVICE_OP(dev_ops, query_srq); + SET_DEVICE_OP(dev_ops, rdma_netdev_get_params); + SET_DEVICE_OP(dev_ops, read_counters); + SET_DEVICE_OP(dev_ops, reg_dm_mr); + SET_DEVICE_OP(dev_ops, reg_user_mr); + 
SET_DEVICE_OP(dev_ops, req_ncomp_notif); + SET_DEVICE_OP(dev_ops, req_notify_cq); + SET_DEVICE_OP(dev_ops, rereg_user_mr); + SET_DEVICE_OP(dev_ops, resize_cq); + SET_DEVICE_OP(dev_ops, set_vf_guid); + SET_DEVICE_OP(dev_ops, set_vf_link_state); + SET_DEVICE_OP(dev_ops, unmap_fmr); +} +EXPORT_SYMBOL(ib_set_device_ops); + static const struct rdma_nl_cbs ibnl_ls_cb_table[RDMA_NL_LS_NUM_OPS] = { [RDMA_NL_LS_OP_RESOLVE] = { .doit = ib_nl_handle_resolve_resp, diff --git a/drivers/infiniband/core/fmr_pool.c b/drivers/infiniband/core/fmr_pool.c index 83ba0068e8bb..7d841b689a1e 100644 --- a/drivers/infiniband/core/fmr_pool.c +++ b/drivers/infiniband/core/fmr_pool.c @@ -211,8 +211,8 @@ struct ib_fmr_pool *ib_create_fmr_pool(struct ib_pd *pd, return ERR_PTR(-EINVAL); device = pd->device; - if (!device->alloc_fmr || !device->dealloc_fmr || - !device->map_phys_fmr || !device->unmap_fmr) { + if (!device->ops.alloc_fmr || !device->ops.dealloc_fmr || + !device->ops.map_phys_fmr || !device->ops.unmap_fmr) { dev_info(&device->dev, "Device does not support FMRs\n"); return ERR_PTR(-ENOSYS); } @@ -474,7 +474,7 @@ EXPORT_SYMBOL(ib_fmr_pool_map_phys); * Unmap an FMR. The FMR mapping may remain valid until the FMR is * reused (or until ib_flush_fmr_pool() is called). */ -int ib_fmr_pool_unmap(struct ib_pool_fmr *fmr) +void ib_fmr_pool_unmap(struct ib_pool_fmr *fmr) { struct ib_fmr_pool *pool; unsigned long flags; @@ -503,7 +503,5 @@ int ib_fmr_pool_unmap(struct ib_pool_fmr *fmr) #endif spin_unlock_irqrestore(&pool->pool_lock, flags); - - return 0; } EXPORT_SYMBOL(ib_fmr_pool_unmap); diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c index ba668d49c751..476abc74178e 100644 --- a/drivers/infiniband/core/iwcm.c +++ b/drivers/infiniband/core/iwcm.c @@ -502,17 +502,21 @@ static void iw_cm_check_wildcard(struct sockaddr_storage *pm_addr, */ static int iw_cm_map(struct iw_cm_id *cm_id, bool active) { + const char *devname = dev_name(&cm_id->device->dev); + const char *ifname = cm_id->device->iwcm->ifname; struct iwpm_dev_data pm_reg_msg; struct iwpm_sa_data pm_msg; int status; + if (strlen(devname) >= sizeof(pm_reg_msg.dev_name) || + strlen(ifname) >= sizeof(pm_reg_msg.if_name)) + return -EINVAL; + cm_id->m_local_addr = cm_id->local_addr; cm_id->m_remote_addr = cm_id->remote_addr; - memcpy(pm_reg_msg.dev_name, dev_name(&cm_id->device->dev), - sizeof(pm_reg_msg.dev_name)); - memcpy(pm_reg_msg.if_name, cm_id->device->iwcm->ifname, - sizeof(pm_reg_msg.if_name)); + strncpy(pm_reg_msg.dev_name, devname, sizeof(pm_reg_msg.dev_name)); + strncpy(pm_reg_msg.if_name, ifname, sizeof(pm_reg_msg.if_name)); if (iwpm_register_pid(&pm_reg_msg, RDMA_NL_IWCM) || !iwpm_valid_pid()) diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c index d7025cd5be28..7870823bac47 100644 --- a/drivers/infiniband/core/mad.c +++ b/drivers/infiniband/core/mad.c @@ -888,10 +888,10 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv, } /* No GRH for DR SMP */ - ret = device->process_mad(device, 0, port_num, &mad_wc, NULL, - (const struct ib_mad_hdr *)smp, mad_size, - (struct ib_mad_hdr *)mad_priv->mad, - &mad_size, &out_mad_pkey_index); + ret = device->ops.process_mad(device, 0, port_num, &mad_wc, NULL, + (const struct ib_mad_hdr *)smp, mad_size, + (struct ib_mad_hdr *)mad_priv->mad, + &mad_size, &out_mad_pkey_index); switch (ret) { case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY: @@ -2305,14 +2305,12 @@ static void ib_mad_recv_done(struct ib_cq *cq, struct ib_wc *wc) } /* Give 
driver "right of first refusal" on incoming MAD */ - if (port_priv->device->process_mad) { - ret = port_priv->device->process_mad(port_priv->device, 0, - port_priv->port_num, - wc, &recv->grh, - (const struct ib_mad_hdr *)recv->mad, - recv->mad_size, - (struct ib_mad_hdr *)response->mad, - &mad_size, &resp_mad_pkey_index); + if (port_priv->device->ops.process_mad) { + ret = port_priv->device->ops.process_mad( + port_priv->device, 0, port_priv->port_num, wc, + &recv->grh, (const struct ib_mad_hdr *)recv->mad, + recv->mad_size, (struct ib_mad_hdr *)response->mad, + &mad_size, &resp_mad_pkey_index); if (opa) wc->pkey_index = resp_mad_pkey_index; diff --git a/drivers/infiniband/core/mad_rmpp.c b/drivers/infiniband/core/mad_rmpp.c index e5cf09c66fe6..5ec57abc0849 100644 --- a/drivers/infiniband/core/mad_rmpp.c +++ b/drivers/infiniband/core/mad_rmpp.c @@ -81,7 +81,7 @@ static void destroy_rmpp_recv(struct mad_rmpp_recv *rmpp_recv) { deref_rmpp_recv(rmpp_recv); wait_for_completion(&rmpp_recv->comp); - rdma_destroy_ah(rmpp_recv->ah); + rdma_destroy_ah(rmpp_recv->ah, RDMA_DESTROY_AH_SLEEPABLE); kfree(rmpp_recv); } @@ -171,7 +171,7 @@ static struct ib_mad_send_buf *alloc_response_msg(struct ib_mad_agent *agent, hdr_len, 0, GFP_KERNEL, IB_MGMT_BASE_VERSION); if (IS_ERR(msg)) - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, RDMA_DESTROY_AH_SLEEPABLE); else { msg->ah = ah; msg->context[0] = ah; @@ -201,7 +201,7 @@ static void ack_ds_ack(struct ib_mad_agent_private *agent, ret = ib_post_send_mad(msg, NULL); if (ret) { - rdma_destroy_ah(msg->ah); + rdma_destroy_ah(msg->ah, RDMA_DESTROY_AH_SLEEPABLE); ib_free_send_mad(msg); } } @@ -209,7 +209,8 @@ static void ack_ds_ack(struct ib_mad_agent_private *agent, void ib_rmpp_send_handler(struct ib_mad_send_wc *mad_send_wc) { if (mad_send_wc->send_buf->context[0] == mad_send_wc->send_buf->ah) - rdma_destroy_ah(mad_send_wc->send_buf->ah); + rdma_destroy_ah(mad_send_wc->send_buf->ah, + RDMA_DESTROY_AH_SLEEPABLE); ib_free_send_mad(mad_send_wc->send_buf); } @@ -237,7 +238,7 @@ static void nack_recv(struct ib_mad_agent_private *agent, ret = ib_post_send_mad(msg, NULL); if (ret) { - rdma_destroy_ah(msg->ah); + rdma_destroy_ah(msg->ah, RDMA_DESTROY_AH_SLEEPABLE); ib_free_send_mad(msg); } } diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c index 573399e3ccc1..e600fc23ae62 100644 --- a/drivers/infiniband/core/nldev.c +++ b/drivers/infiniband/core/nldev.c @@ -227,6 +227,7 @@ static int fill_port_info(struct sk_buff *msg, struct net_device *netdev = NULL; struct ib_port_attr attr; int ret; + u64 cap_flags = 0; if (fill_nldev_handle(msg, device)) return -EMSGSIZE; @@ -239,10 +240,12 @@ static int fill_port_info(struct sk_buff *msg, return ret; if (rdma_protocol_ib(device, port)) { - BUILD_BUG_ON(sizeof(attr.port_cap_flags) > sizeof(u64)); + BUILD_BUG_ON((sizeof(attr.port_cap_flags) + + sizeof(attr.port_cap_flags2)) > sizeof(u64)); + cap_flags = attr.port_cap_flags | + ((u64)attr.port_cap_flags2 << 32); if (nla_put_u64_64bit(msg, RDMA_NLDEV_ATTR_CAP_FLAGS, - (u64)attr.port_cap_flags, - RDMA_NLDEV_ATTR_PAD)) + cap_flags, RDMA_NLDEV_ATTR_PAD)) return -EMSGSIZE; if (nla_put_u64_64bit(msg, RDMA_NLDEV_ATTR_SUBNET_PREFIX, attr.subnet_prefix, RDMA_NLDEV_ATTR_PAD)) @@ -259,8 +262,8 @@ static int fill_port_info(struct sk_buff *msg, if (nla_put_u8(msg, RDMA_NLDEV_ATTR_PORT_PHYS_STATE, attr.phys_state)) return -EMSGSIZE; - if (device->get_netdev) - netdev = device->get_netdev(device, port); + if (device->ops.get_netdev) + netdev = device->ops.get_netdev(device, 
port); if (netdev && net_eq(dev_net(netdev), net)) { ret = nla_put_u32(msg, @@ -308,6 +311,7 @@ static int fill_res_info(struct sk_buff *msg, struct ib_device *device) [RDMA_RESTRACK_QP] = "qp", [RDMA_RESTRACK_CM_ID] = "cm_id", [RDMA_RESTRACK_MR] = "mr", + [RDMA_RESTRACK_CTX] = "ctx", }; struct rdma_restrack_root *res = &device->res; @@ -636,13 +640,13 @@ static int nldev_get_doit(struct sk_buff *skb, struct nlmsghdr *nlh, nlmsg_end(msg, nlh); - put_device(&device->dev); + ib_device_put(device); return rdma_nl_unicast(msg, NETLINK_CB(skb).portid); err_free: nlmsg_free(msg); err: - put_device(&device->dev); + ib_device_put(device); return err; } @@ -672,7 +676,7 @@ static int nldev_set_doit(struct sk_buff *skb, struct nlmsghdr *nlh, err = ib_device_rename(device, name); } - put_device(&device->dev); + ib_device_put(device); return err; } @@ -756,14 +760,14 @@ static int nldev_port_get_doit(struct sk_buff *skb, struct nlmsghdr *nlh, goto err_free; nlmsg_end(msg, nlh); - put_device(&device->dev); + ib_device_put(device); return rdma_nl_unicast(msg, NETLINK_CB(skb).portid); err_free: nlmsg_free(msg); err: - put_device(&device->dev); + ib_device_put(device); return err; } @@ -820,7 +824,7 @@ static int nldev_port_get_dumpit(struct sk_buff *skb, } out: - put_device(&device->dev); + ib_device_put(device); cb->args[0] = idx; return skb->len; } @@ -859,13 +863,13 @@ static int nldev_res_get_doit(struct sk_buff *skb, struct nlmsghdr *nlh, goto err_free; nlmsg_end(msg, nlh); - put_device(&device->dev); + ib_device_put(device); return rdma_nl_unicast(msg, NETLINK_CB(skb).portid); err_free: nlmsg_free(msg); err: - put_device(&device->dev); + ib_device_put(device); return ret; } @@ -1058,7 +1062,7 @@ next: idx++; if (!filled) goto err; - put_device(&device->dev); + ib_device_put(device); return skb->len; res_err: @@ -1069,7 +1073,7 @@ err: nlmsg_cancel(skb, nlh); err_index: - put_device(&device->dev); + ib_device_put(device); return ret; } diff --git a/drivers/infiniband/core/opa_smi.h b/drivers/infiniband/core/opa_smi.h index 3bfab3505a29..af4879bdf3d6 100644 --- a/drivers/infiniband/core/opa_smi.h +++ b/drivers/infiniband/core/opa_smi.h @@ -55,7 +55,7 @@ static inline enum smi_action opa_smi_check_local_smp(struct opa_smp *smp, { /* C14-9:3 -- We're at the end of the DR segment of path */ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */ - return (device->process_mad && + return (device->ops.process_mad && !opa_get_smp_direction(smp) && (smp->hop_ptr == smp->hop_cnt + 1)) ? IB_SMI_HANDLE : IB_SMI_DISCARD; @@ -70,7 +70,7 @@ static inline enum smi_action opa_smi_check_local_returning_smp(struct opa_smp * { /* C14-13:3 -- We're at the end of the DR segment of path */ /* C14-13:4 -- Hop Pointer == 0 -> give to SM */ - return (device->process_mad && + return (device->ops.process_mad && opa_get_smp_direction(smp) && !smp->hop_ptr) ? IB_SMI_HANDLE : IB_SMI_DISCARD; } diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 752a55c6bdce..6c4747e61d2b 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -224,12 +224,14 @@ out_unlock: * uverbs_put_destroy. 
*/ struct ib_uobject *__uobj_get_destroy(const struct uverbs_api_object *obj, - u32 id, struct ib_uverbs_file *ufile) + u32 id, + const struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj; int ret; - uobj = rdma_lookup_get_uobject(obj, ufile, id, UVERBS_LOOKUP_DESTROY); + uobj = rdma_lookup_get_uobject(obj, attrs->ufile, id, + UVERBS_LOOKUP_DESTROY); if (IS_ERR(uobj)) return uobj; @@ -243,21 +245,20 @@ struct ib_uobject *__uobj_get_destroy(const struct uverbs_api_object *obj, } /* - * Does both uobj_get_destroy() and uobj_put_destroy(). Returns success_res - * on success (negative errno on failure). For use by callers that do not need - * the uobj. + * Does both uobj_get_destroy() and uobj_put_destroy(). Returns 0 on success + * (negative errno on failure). For use by callers that do not need the uobj. */ int __uobj_perform_destroy(const struct uverbs_api_object *obj, u32 id, - struct ib_uverbs_file *ufile, int success_res) + const struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj; - uobj = __uobj_get_destroy(obj, id, ufile); + uobj = __uobj_get_destroy(obj, id, attrs); if (IS_ERR(uobj)) return PTR_ERR(uobj); rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); - return success_res; + return 0; } /* alloc_uobj must be undone by uverbs_destroy_uobject() */ @@ -267,7 +268,7 @@ static struct ib_uobject *alloc_uobj(struct ib_uverbs_file *ufile, struct ib_uobject *uobj; struct ib_ucontext *ucontext; - ucontext = ib_uverbs_get_ucontext(ufile); + ucontext = ib_uverbs_get_ucontext_file(ufile); if (IS_ERR(ucontext)) return ERR_CAST(ucontext); @@ -397,16 +398,23 @@ struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_api_object *obj, struct ib_uobject *uobj; int ret; - if (!obj) - return ERR_PTR(-EINVAL); + if (IS_ERR(obj) && PTR_ERR(obj) == -ENOMSG) { + /* must be UVERBS_IDR_ANY_OBJECT, see uapi_get_object() */ + uobj = lookup_get_idr_uobject(NULL, ufile, id, mode); + if (IS_ERR(uobj)) + return uobj; + } else { + if (IS_ERR(obj)) + return ERR_PTR(-EINVAL); - uobj = obj->type_class->lookup_get(obj, ufile, id, mode); - if (IS_ERR(uobj)) - return uobj; + uobj = obj->type_class->lookup_get(obj, ufile, id, mode); + if (IS_ERR(uobj)) + return uobj; - if (uobj->uapi_object != obj) { - ret = -EINVAL; - goto free; + if (uobj->uapi_object != obj) { + ret = -EINVAL; + goto free; + } } /* @@ -426,7 +434,7 @@ struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_api_object *obj, return uobj; free: - obj->type_class->lookup_put(uobj, mode); + uobj->uapi_object->type_class->lookup_put(uobj, mode); uverbs_uobject_put(uobj); return ERR_PTR(ret); } @@ -490,7 +498,7 @@ struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_api_object *obj, { struct ib_uobject *ret; - if (!obj) + if (IS_ERR(obj)) return ERR_PTR(-EINVAL); /* @@ -812,18 +820,20 @@ static void ufile_destroy_ucontext(struct ib_uverbs_file *ufile, */ if (reason == RDMA_REMOVE_DRIVER_REMOVE) { uverbs_user_mmap_disassociate(ufile); - if (ib_dev->disassociate_ucontext) - ib_dev->disassociate_ucontext(ucontext); + if (ib_dev->ops.disassociate_ucontext) + ib_dev->ops.disassociate_ucontext(ucontext); } ib_rdmacg_uncharge(&ucontext->cg_obj, ib_dev, RDMACG_RESOURCE_HCA_HANDLE); + rdma_restrack_del(&ucontext->res); + /* * FIXME: Drivers are not permitted to fail dealloc_ucontext, remove * the error return. 
*/ - ret = ib_dev->dealloc_ucontext(ucontext); + ret = ib_dev->ops.dealloc_ucontext(ucontext); WARN_ON(ret); ufile->ucontext = NULL; diff --git a/drivers/infiniband/core/rdma_core.h b/drivers/infiniband/core/rdma_core.h index 4886d2bba7c7..be6b8e1257d0 100644 --- a/drivers/infiniband/core/rdma_core.h +++ b/drivers/infiniband/core/rdma_core.h @@ -118,43 +118,67 @@ void release_ufile_idr_uobject(struct ib_uverbs_file *ufile); * Depending on ID the slot pointer in the radix tree points at one of these * structs. */ -struct uverbs_api_object { - const struct uverbs_obj_type *type_attrs; - const struct uverbs_obj_type_class *type_class; -}; struct uverbs_api_ioctl_method { - int (__rcu *handler)(struct ib_uverbs_file *ufile, - struct uverbs_attr_bundle *ctx); + int(__rcu *handler)(struct uverbs_attr_bundle *attrs); DECLARE_BITMAP(attr_mandatory, UVERBS_API_ATTR_BKEY_LEN); u16 bundle_size; u8 use_stack:1; u8 driver_method:1; + u8 disabled:1; + u8 has_udata:1; u8 key_bitmap_len; u8 destroy_bkey; }; +struct uverbs_api_write_method { + int (*handler)(struct uverbs_attr_bundle *attrs); + u8 disabled:1; + u8 is_ex:1; + u8 has_udata:1; + u8 has_resp:1; + u8 req_size; + u8 resp_size; +}; + struct uverbs_api_attr { struct uverbs_attr_spec spec; }; -struct uverbs_api_object; struct uverbs_api { /* radix tree contains struct uverbs_api_* pointers */ struct radix_tree_root radix; enum rdma_driver_id driver_id; + + unsigned int num_write; + unsigned int num_write_ex; + struct uverbs_api_write_method notsupp_method; + const struct uverbs_api_write_method **write_methods; + const struct uverbs_api_write_method **write_ex_methods; }; +/* + * Get an uverbs_api_object that corresponds to the given object_id. + * Note: + * -ENOMSG means that any object is allowed to match during lookup. 
+ */ static inline const struct uverbs_api_object * uapi_get_object(struct uverbs_api *uapi, u16 object_id) { - return radix_tree_lookup(&uapi->radix, uapi_key_obj(object_id)); + const struct uverbs_api_object *res; + + if (object_id == UVERBS_IDR_ANY_OBJECT) + return ERR_PTR(-ENOMSG); + + res = radix_tree_lookup(&uapi->radix, uapi_key_obj(object_id)); + if (!res) + return ERR_PTR(-ENOENT); + + return res; } char *uapi_key_format(char *S, unsigned int key); -struct uverbs_api *uverbs_alloc_api( - const struct uverbs_object_tree_def *const *driver_specs, - enum rdma_driver_id driver_id); +struct uverbs_api *uverbs_alloc_api(struct ib_device *ibdev); void uverbs_disassociate_api_pre(struct ib_uverbs_device *uverbs_dev); void uverbs_disassociate_api(struct uverbs_api *uapi); void uverbs_destroy_api(struct uverbs_api *uapi); @@ -162,4 +186,37 @@ void uapi_compute_bundle_size(struct uverbs_api_ioctl_method *method_elm, unsigned int num_attrs); void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile); +extern const struct uapi_definition uverbs_def_obj_counters[]; +extern const struct uapi_definition uverbs_def_obj_cq[]; +extern const struct uapi_definition uverbs_def_obj_device[]; +extern const struct uapi_definition uverbs_def_obj_dm[]; +extern const struct uapi_definition uverbs_def_obj_flow_action[]; +extern const struct uapi_definition uverbs_def_obj_intf[]; +extern const struct uapi_definition uverbs_def_obj_mr[]; +extern const struct uapi_definition uverbs_def_write_intf[]; + +static inline const struct uverbs_api_write_method * +uapi_get_method(const struct uverbs_api *uapi, u32 command) +{ + u32 cmd_idx = command & IB_USER_VERBS_CMD_COMMAND_MASK; + + if (command & ~(u32)(IB_USER_VERBS_CMD_FLAG_EXTENDED | + IB_USER_VERBS_CMD_COMMAND_MASK)) + return ERR_PTR(-EINVAL); + + if (command & IB_USER_VERBS_CMD_FLAG_EXTENDED) { + if (cmd_idx >= uapi->num_write_ex) + return ERR_PTR(-EOPNOTSUPP); + return uapi->write_ex_methods[cmd_idx]; + } + + if (cmd_idx >= uapi->num_write) + return ERR_PTR(-EOPNOTSUPP); + return uapi->write_methods[cmd_idx]; +} + +void uverbs_fill_udata(struct uverbs_attr_bundle *bundle, + struct ib_udata *udata, unsigned int attr_in, + unsigned int attr_out); + #endif /* RDMA_CORE_H */ diff --git a/drivers/infiniband/core/restrack.c b/drivers/infiniband/core/restrack.c index 06d8657ce583..46a5c553c624 100644 --- a/drivers/infiniband/core/restrack.c +++ b/drivers/infiniband/core/restrack.c @@ -32,6 +32,7 @@ static const char *type2str(enum rdma_restrack_type type) [RDMA_RESTRACK_QP] = "QP", [RDMA_RESTRACK_CM_ID] = "CM_ID", [RDMA_RESTRACK_MR] = "MR", + [RDMA_RESTRACK_CTX] = "CTX", }; return names[type]; @@ -130,31 +131,14 @@ static struct ib_device *res_to_dev(struct rdma_restrack_entry *res) res)->id.device; case RDMA_RESTRACK_MR: return container_of(res, struct ib_mr, res)->device; + case RDMA_RESTRACK_CTX: + return container_of(res, struct ib_ucontext, res)->device; default: WARN_ONCE(true, "Wrong resource tracking type %u\n", res->type); return NULL; } } -static bool res_is_user(struct rdma_restrack_entry *res) -{ - switch (res->type) { - case RDMA_RESTRACK_PD: - return container_of(res, struct ib_pd, res)->uobject; - case RDMA_RESTRACK_CQ: - return container_of(res, struct ib_cq, res)->uobject; - case RDMA_RESTRACK_QP: - return container_of(res, struct ib_qp, res)->uobject; - case RDMA_RESTRACK_CM_ID: - return !res->kern_name; - case RDMA_RESTRACK_MR: - return container_of(res, struct ib_mr, res)->pd->uobject; - default: - WARN_ONCE(true, "Wrong resource tracking 
type %u\n", res->type); - return false; - } -} - void rdma_restrack_set_task(struct rdma_restrack_entry *res, const char *caller) { @@ -170,17 +154,17 @@ void rdma_restrack_set_task(struct rdma_restrack_entry *res, } EXPORT_SYMBOL(rdma_restrack_set_task); -void rdma_restrack_add(struct rdma_restrack_entry *res) +static void rdma_restrack_add(struct rdma_restrack_entry *res) { struct ib_device *dev = res_to_dev(res); if (!dev) return; - if (res->type != RDMA_RESTRACK_CM_ID || !res_is_user(res)) + if (res->type != RDMA_RESTRACK_CM_ID || rdma_is_kernel_res(res)) res->task = NULL; - if (res_is_user(res)) { + if (!rdma_is_kernel_res(res)) { if (!res->task) rdma_restrack_set_task(res, NULL); res->kern_name = NULL; @@ -196,7 +180,28 @@ void rdma_restrack_add(struct rdma_restrack_entry *res) hash_add(dev->res.hash, &res->node, res->type); up_write(&dev->res.rwsem); } -EXPORT_SYMBOL(rdma_restrack_add); + +/** + * rdma_restrack_kadd() - add kernel object to the reource tracking database + * @res: resource entry + */ +void rdma_restrack_kadd(struct rdma_restrack_entry *res) +{ + res->user = false; + rdma_restrack_add(res); +} +EXPORT_SYMBOL(rdma_restrack_kadd); + +/** + * rdma_restrack_uadd() - add user object to the reource tracking database + * @res: resource entry + */ +void rdma_restrack_uadd(struct rdma_restrack_entry *res) +{ + res->user = true; + rdma_restrack_add(res); +} +EXPORT_SYMBOL(rdma_restrack_uadd); int __must_check rdma_restrack_get(struct rdma_restrack_entry *res) { diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c index be5ba5e15496..97e6d7b69abf 100644 --- a/drivers/infiniband/core/sa_query.c +++ b/drivers/infiniband/core/sa_query.c @@ -1147,7 +1147,7 @@ static void free_sm_ah(struct kref *kref) { struct ib_sa_sm_ah *sm_ah = container_of(kref, struct ib_sa_sm_ah, ref); - rdma_destroy_ah(sm_ah->ah); + rdma_destroy_ah(sm_ah->ah, 0); kfree(sm_ah); } @@ -2276,7 +2276,8 @@ static void update_sm_ah(struct work_struct *work) cpu_to_be64(IB_SA_WELL_KNOWN_GUID)); } - new_ah->ah = rdma_create_ah(port->agent->qp->pd, &ah_attr); + new_ah->ah = rdma_create_ah(port->agent->qp->pd, &ah_attr, + RDMA_CREATE_AH_SLEEPABLE); if (IS_ERR(new_ah->ah)) { pr_warn("Couldn't create new SM AH\n"); kfree(new_ah); diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c index 1143c0448666..1efadbccf394 100644 --- a/drivers/infiniband/core/security.c +++ b/drivers/infiniband/core/security.c @@ -626,10 +626,10 @@ int ib_security_modify_qp(struct ib_qp *qp, } if (!ret) - ret = real_qp->device->modify_qp(real_qp, - qp_attr, - qp_attr_mask, - udata); + ret = real_qp->device->ops.modify_qp(real_qp, + qp_attr, + qp_attr_mask, + udata); if (new_pps) { /* Clean up the lists and free the appropriate diff --git a/drivers/infiniband/core/smi.h b/drivers/infiniband/core/smi.h index 33c91c8a16e9..91d9b353ab85 100644 --- a/drivers/infiniband/core/smi.h +++ b/drivers/infiniband/core/smi.h @@ -67,7 +67,7 @@ static inline enum smi_action smi_check_local_smp(struct ib_smp *smp, { /* C14-9:3 -- We're at the end of the DR segment of path */ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */ - return ((device->process_mad && + return ((device->ops.process_mad && !ib_get_smp_direction(smp) && (smp->hop_ptr == smp->hop_cnt + 1)) ? 
@@ -82,7 +82,7 @@ static inline enum smi_action smi_check_local_returning_smp(struct ib_smp *smp, { /* C14-13:3 -- We're at the end of the DR segment of path */ /* C14-13:4 -- Hop Pointer == 0 -> give to SM */ - return ((device->process_mad && + return ((device->ops.process_mad && ib_get_smp_direction(smp) && !smp->hop_ptr) ? IB_SMI_HANDLE : IB_SMI_DISCARD); } diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c index 6fcce2c206c6..80f68eb0ba5c 100644 --- a/drivers/infiniband/core/sysfs.c +++ b/drivers/infiniband/core/sysfs.c @@ -462,7 +462,7 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr, u16 out_mad_pkey_index = 0; ssize_t ret; - if (!dev->process_mad) + if (!dev->ops.process_mad) return -ENOSYS; in_mad = kzalloc(sizeof *in_mad, GFP_KERNEL); @@ -481,11 +481,11 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr, if (attr != IB_PMA_CLASS_PORT_INFO) in_mad->data[41] = port_num; /* PortSelect field */ - if ((dev->process_mad(dev, IB_MAD_IGNORE_MKEY, - port_num, NULL, NULL, - (const struct ib_mad_hdr *)in_mad, mad_size, - (struct ib_mad_hdr *)out_mad, &mad_size, - &out_mad_pkey_index) & + if ((dev->ops.process_mad(dev, IB_MAD_IGNORE_MKEY, + port_num, NULL, NULL, + (const struct ib_mad_hdr *)in_mad, mad_size, + (struct ib_mad_hdr *)out_mad, &mad_size, + &out_mad_pkey_index) & (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) != (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) { ret = -EINVAL; @@ -786,7 +786,7 @@ static int update_hw_stats(struct ib_device *dev, struct rdma_hw_stats *stats, if (time_is_after_eq_jiffies(stats->timestamp + stats->lifespan)) return 0; - ret = dev->get_hw_stats(dev, stats, port_num, index); + ret = dev->ops.get_hw_stats(dev, stats, port_num, index); if (ret < 0) return ret; if (ret == stats->num_counters) @@ -946,7 +946,7 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port, struct rdma_hw_stats *stats; int i, ret; - stats = device->alloc_hw_stats(device, port_num); + stats = device->ops.alloc_hw_stats(device, port_num); if (!stats) return; @@ -964,8 +964,8 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port, if (!hsag) goto err_free_stats; - ret = device->get_hw_stats(device, stats, port_num, - stats->num_counters); + ret = device->ops.get_hw_stats(device, stats, port_num, + stats->num_counters); if (ret != stats->num_counters) goto err_free_hsag; @@ -1057,7 +1057,7 @@ static int add_port(struct ib_device *device, int port_num, goto err_put; } - if (device->process_mad) { + if (device->ops.process_mad) { p->pma_table = get_counter_table(device, port_num); ret = sysfs_create_group(&p->kobj, p->pma_table); if (ret) @@ -1124,7 +1124,7 @@ static int add_port(struct ib_device *device, int port_num, * port, so holder should be device. Therefore skip per port counter * initialization.
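* (The device-wide stats group is still created; ib_device_register_sysfs() * below calls setup_hw_stats(device, NULL, 0) whenever ops.alloc_hw_stats is set.)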
*/ - if (device->alloc_hw_stats && port_num) + if (device->ops.alloc_hw_stats && port_num) setup_hw_stats(device, p, port_num); list_add_tail(&p->kobj.entry, &device->port_list); @@ -1245,7 +1245,7 @@ static ssize_t node_desc_store(struct device *device, struct ib_device_modify desc = {}; int ret; - if (!dev->modify_device) + if (!dev->ops.modify_device) return -EIO; memcpy(desc.node_desc, buf, min_t(int, count, IB_DEVICE_NODE_DESC_MAX)); @@ -1341,7 +1341,7 @@ int ib_device_register_sysfs(struct ib_device *device, } } - if (device->alloc_hw_stats) + if (device->ops.alloc_hw_stats) setup_hw_stats(device, NULL, 0); return 0; diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c index 73332b9a25b5..7541fbaf58a3 100644 --- a/drivers/infiniband/core/ucm.c +++ b/drivers/infiniband/core/ucm.c @@ -1242,7 +1242,7 @@ static void ib_ucm_add_one(struct ib_device *device) dev_t base; struct ib_ucm_device *ucm_dev; - if (!device->alloc_ucontext || !rdma_cap_ib_cm(device, 1)) + if (!device->ops.alloc_ucontext || !rdma_cap_ib_cm(device, 1)) return; ucm_dev = kzalloc(sizeof *ucm_dev, GFP_KERNEL); diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 676c1fd1119d..a4ec43093cb3 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -146,15 +146,12 @@ static int invalidate_range_start_trampoline(struct ib_umem_odp *item, } static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct ib_ucontext_per_mm *per_mm = container_of(mn, struct ib_ucontext_per_mm, mn); - if (blockable) + if (range->blockable) down_read(&per_mm->umem_rwsem); else if (!down_read_trylock(&per_mm->umem_rwsem)) return -EAGAIN; @@ -169,9 +166,10 @@ static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } - return rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, start, end, + return rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start, + range->end, invalidate_range_start_trampoline, - blockable, NULL); + range->blockable, NULL); } static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 start, @@ -182,9 +180,7 @@ static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 start, } static void ib_umem_notifier_invalidate_range_end(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end) + const struct mmu_notifier_range *range) { struct ib_ucontext_per_mm *per_mm = container_of(mn, struct ib_ucontext_per_mm, mn); @@ -192,8 +188,8 @@ static void ib_umem_notifier_invalidate_range_end(struct mmu_notifier *mn, if (unlikely(!per_mm->active)) return; - rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, start, - end, + rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start, + range->end, invalidate_range_end_trampoline, true, NULL); up_read(&per_mm->umem_rwsem); } @@ -647,8 +643,13 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt, flags, local_page_list, NULL, NULL); up_read(&owning_mm->mmap_sem); - if (npages < 0) + if (npages < 0) { + if (npages != -EAGAIN) + pr_warn("fail to get %zu user pages with error %d\n", gup_num_pages, npages); + else + pr_debug("fail to get %zu user pages with error %d\n", gup_num_pages, npages); break; + } bcnt -= min_t(size_t, npages << PAGE_SHIFT, bcnt); mutex_lock(&umem_odp->umem_mutex); @@ -666,8 +667,13 @@ int 
ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt, ret = ib_umem_odp_map_dma_single_page( umem_odp, k, local_page_list[j], access_mask, current_seq); - if (ret < 0) + if (ret < 0) { + if (ret != -EAGAIN) + pr_warn("ib_umem_odp_map_dma_single_page failed with error %d\n", ret); + else + pr_debug("ib_umem_odp_map_dma_single_page failed with error %d\n", ret); break; + } p = page_to_phys(local_page_list[j]); k++; diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c index f55f48f6b272..de8d31ab8945 100644 --- a/drivers/infiniband/core/user_mad.c +++ b/drivers/infiniband/core/user_mad.c @@ -88,10 +88,9 @@ enum { struct ib_umad_port { struct cdev cdev; - struct device *dev; - + struct device dev; struct cdev sm_cdev; - struct device *sm_dev; + struct device sm_dev; struct semaphore sm_sem; struct mutex file_mutex; @@ -104,8 +103,8 @@ struct ib_umad_port { }; struct ib_umad_device { - struct kobject kobj; - struct ib_umad_port port[0]; + struct kref kref; + struct ib_umad_port ports[]; }; struct ib_umad_file { @@ -130,8 +129,6 @@ struct ib_umad_packet { struct ib_user_mad mad; }; -static struct class *umad_class; - static const dev_t base_umad_dev = MKDEV(IB_UMAD_MAJOR, IB_UMAD_MINOR_BASE); static const dev_t base_issm_dev = MKDEV(IB_UMAD_MAJOR, IB_UMAD_MINOR_BASE) + IB_UMAD_NUM_FIXED_MINOR; @@ -143,17 +140,23 @@ static DEFINE_IDA(umad_ida); static void ib_umad_add_one(struct ib_device *device); static void ib_umad_remove_one(struct ib_device *device, void *client_data); -static void ib_umad_release_dev(struct kobject *kobj) +static void ib_umad_dev_free(struct kref *kref) { struct ib_umad_device *dev = - container_of(kobj, struct ib_umad_device, kobj); + container_of(kref, struct ib_umad_device, kref); kfree(dev); } -static struct kobj_type ib_umad_dev_ktype = { - .release = ib_umad_release_dev, -}; +static void ib_umad_dev_get(struct ib_umad_device *dev) +{ + kref_get(&dev->kref); +} + +static void ib_umad_dev_put(struct ib_umad_device *dev) +{ + kref_put(&dev->kref, ib_umad_dev_free); +} static int hdr_size(struct ib_umad_file *file) { @@ -205,7 +208,7 @@ static void send_handler(struct ib_mad_agent *agent, struct ib_umad_packet *packet = send_wc->send_buf->context[0]; dequeue_send(file, packet); - rdma_destroy_ah(packet->msg->ah); + rdma_destroy_ah(packet->msg->ah, RDMA_DESTROY_AH_SLEEPABLE); ib_free_send_mad(packet->msg); if (send_wc->status == IB_WC_RESP_TIMEOUT_ERR) { @@ -621,7 +624,7 @@ err_send: err_msg: ib_free_send_mad(packet->msg); err_ah: - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, RDMA_DESTROY_AH_SLEEPABLE); err_up: mutex_unlock(&file->mutex); err: @@ -657,7 +660,7 @@ static int ib_umad_reg_agent(struct ib_umad_file *file, void __user *arg, mutex_lock(&file->mutex); if (!file->port->ib_dev) { - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent: invalid device\n"); ret = -EPIPE; goto out; @@ -669,7 +672,7 @@ static int ib_umad_reg_agent(struct ib_umad_file *file, void __user *arg, } if (ureq.qpn != 0 && ureq.qpn != 1) { - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent: invalid QPN %d specified\n", ureq.qpn); ret = -EINVAL; @@ -680,7 +683,7 @@ static int ib_umad_reg_agent(struct ib_umad_file *file, void __user *arg, if (!__get_agent(file, agent_id)) goto found; - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent: Max Agents (%u) reached\n", IB_UMAD_MAX_AGENTS); ret = -ENOMEM; @@ -725,10 +728,10 @@ found: if (!file->already_used) { 
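/* First agent registered on this fd; warn once about processes that still use the old ABI without P_Key index support. */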
file->already_used = 1; if (!file->use_pkey_index) { - dev_warn(file->port->dev, + dev_warn(&file->port->dev, "process %s did not enable P_Key index support.\n", current->comm); - dev_warn(file->port->dev, + dev_warn(&file->port->dev, " Documentation/infiniband/user_mad.txt has info on the new ABI.\n"); } } @@ -759,7 +762,7 @@ static int ib_umad_reg_agent2(struct ib_umad_file *file, void __user *arg) mutex_lock(&file->mutex); if (!file->port->ib_dev) { - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent2: invalid device\n"); ret = -EPIPE; goto out; @@ -771,7 +774,7 @@ static int ib_umad_reg_agent2(struct ib_umad_file *file, void __user *arg) } if (ureq.qpn != 0 && ureq.qpn != 1) { - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent2: invalid QPN %d specified\n", ureq.qpn); ret = -EINVAL; @@ -779,7 +782,7 @@ static int ib_umad_reg_agent2(struct ib_umad_file *file, void __user *arg) } if (ureq.flags & ~IB_USER_MAD_REG_FLAGS_CAP) { - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent2 failed: invalid registration flags specified 0x%x; supported 0x%x\n", ureq.flags, IB_USER_MAD_REG_FLAGS_CAP); ret = -EINVAL; @@ -796,7 +799,7 @@ static int ib_umad_reg_agent2(struct ib_umad_file *file, void __user *arg) if (!__get_agent(file, agent_id)) goto found; - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent2: Max Agents (%u) reached\n", IB_UMAD_MAX_AGENTS); ret = -ENOMEM; @@ -808,7 +811,7 @@ found: req.mgmt_class = ureq.mgmt_class; req.mgmt_class_version = ureq.mgmt_class_version; if (ureq.oui & 0xff000000) { - dev_notice(file->port->dev, + dev_notice(&file->port->dev, "ib_umad_reg_agent2 failed: oui invalid 0x%08x\n", ureq.oui); ret = -EINVAL; @@ -986,8 +989,7 @@ static int ib_umad_open(struct inode *inode, struct file *filp) goto out; } - kobject_get(&port->umad_dev->kobj); - + ib_umad_dev_get(port->umad_dev); out: mutex_unlock(&port->file_mutex); return ret; @@ -1025,8 +1027,7 @@ static int ib_umad_close(struct inode *inode, struct file *filp) mutex_unlock(&file->port->file_mutex); kfree(file); - kobject_put(&dev->kobj); - + ib_umad_dev_put(dev); return 0; } @@ -1076,8 +1077,7 @@ static int ib_umad_sm_open(struct inode *inode, struct file *filp) if (ret) goto err_clr_sm_cap; - kobject_get(&port->umad_dev->kobj); - + ib_umad_dev_get(port->umad_dev); return 0; err_clr_sm_cap: @@ -1106,8 +1106,7 @@ static int ib_umad_sm_close(struct inode *inode, struct file *filp) up(&port->sm_sem); - kobject_put(&port->umad_dev->kobj); - + ib_umad_dev_put(port->umad_dev); return ret; } @@ -1124,7 +1123,7 @@ static struct ib_client umad_client = { .remove = ib_umad_remove_one }; -static ssize_t show_ibdev(struct device *dev, struct device_attribute *attr, +static ssize_t ibdev_show(struct device *dev, struct device_attribute *attr, char *buf) { struct ib_umad_port *port = dev_get_drvdata(dev); @@ -1134,9 +1133,9 @@ static ssize_t show_ibdev(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%s\n", dev_name(&port->ib_dev->dev)); } -static DEVICE_ATTR(ibdev, S_IRUGO, show_ibdev, NULL); +static DEVICE_ATTR_RO(ibdev); -static ssize_t show_port(struct device *dev, struct device_attribute *attr, +static ssize_t port_show(struct device *dev, struct device_attribute *attr, char *buf) { struct ib_umad_port *port = dev_get_drvdata(dev); @@ -1146,10 +1145,59 @@ static ssize_t show_port(struct device *dev, struct device_attribute *attr, return sprintf(buf, "%d\n", port->port_num); } -static 
DEVICE_ATTR(port, S_IRUGO, show_port, NULL); +static DEVICE_ATTR_RO(port); -static CLASS_ATTR_STRING(abi_version, S_IRUGO, - __stringify(IB_USER_MAD_ABI_VERSION)); +static struct attribute *umad_class_dev_attrs[] = { + &dev_attr_ibdev.attr, + &dev_attr_port.attr, + NULL, +}; +ATTRIBUTE_GROUPS(umad_class_dev); + +static char *umad_devnode(struct device *dev, umode_t *mode) +{ + return kasprintf(GFP_KERNEL, "infiniband/%s", dev_name(dev)); +} + +static ssize_t abi_version_show(struct class *class, + struct class_attribute *attr, char *buf) +{ + return sprintf(buf, "%d\n", IB_USER_MAD_ABI_VERSION); +} +static CLASS_ATTR_RO(abi_version); + +static struct attribute *umad_class_attrs[] = { + &class_attr_abi_version.attr, + NULL, +}; +ATTRIBUTE_GROUPS(umad_class); + +static struct class umad_class = { + .name = "infiniband_mad", + .devnode = umad_devnode, + .class_groups = umad_class_groups, + .dev_groups = umad_class_dev_groups, +}; + +static void ib_umad_release_port(struct device *device) +{ + struct ib_umad_port *port = dev_get_drvdata(device); + struct ib_umad_device *umad_dev = port->umad_dev; + + ib_umad_dev_put(umad_dev); +} + +static void ib_umad_init_port_dev(struct device *dev, + struct ib_umad_port *port, + const struct ib_device *device) +{ + device_initialize(dev); + ib_umad_dev_get(port->umad_dev); + dev->class = &umad_class; + dev->parent = device->dev.parent; + dev_set_drvdata(dev, port); + dev->release = ib_umad_release_port; +} static int ib_umad_init_port(struct ib_device *device, int port_num, struct ib_umad_device *umad_dev, @@ -1158,6 +1206,7 @@ static int ib_umad_init_port(struct ib_device *device, int port_num, int devnum; dev_t base_umad; dev_t base_issm; + int ret; devnum = ida_alloc_max(&umad_ida, IB_UMAD_MAX_PORTS - 1, GFP_KERNEL); if (devnum < 0) @@ -1172,63 +1221,41 @@ static int ib_umad_init_port(struct ib_device *device, int port_num, } port->ib_dev = device; + port->umad_dev = umad_dev; port->port_num = port_num; sema_init(&port->sm_sem, 1); mutex_init(&port->file_mutex); INIT_LIST_HEAD(&port->file_list); + ib_umad_init_port_dev(&port->dev, port, device); + port->dev.devt = base_umad; + dev_set_name(&port->dev, "umad%d", port->dev_num); cdev_init(&port->cdev, &umad_fops); port->cdev.owner = THIS_MODULE; - cdev_set_parent(&port->cdev, &umad_dev->kobj); - kobject_set_name(&port->cdev.kobj, "umad%d", port->dev_num); - if (cdev_add(&port->cdev, base_umad, 1)) - goto err_cdev; - port->dev = device_create(umad_class, device->dev.parent, - port->cdev.dev, port, - "umad%d", port->dev_num); - if (IS_ERR(port->dev)) + ret = cdev_device_add(&port->cdev, &port->dev); + if (ret) goto err_cdev; - if (device_create_file(port->dev, &dev_attr_ibdev)) - goto err_dev; - if (device_create_file(port->dev, &dev_attr_port)) - goto err_dev; - + ib_umad_init_port_dev(&port->sm_dev, port, device); + port->sm_dev.devt = base_issm; + dev_set_name(&port->sm_dev, "issm%d", port->dev_num); cdev_init(&port->sm_cdev, &umad_sm_fops); port->sm_cdev.owner = THIS_MODULE; - cdev_set_parent(&port->sm_cdev, &umad_dev->kobj); - kobject_set_name(&port->sm_cdev.kobj, "issm%d", port->dev_num); - if (cdev_add(&port->sm_cdev, base_issm, 1)) - goto err_sm_cdev; - - port->sm_dev = device_create(umad_class, device->dev.parent, - port->sm_cdev.dev, port, - "issm%d", port->dev_num); - if (IS_ERR(port->sm_dev)) - goto err_sm_cdev; - - if (device_create_file(port->sm_dev, &dev_attr_ibdev)) - goto err_sm_dev; - if (device_create_file(port->sm_dev, &dev_attr_port)) - goto err_sm_dev; - - return 0; -err_sm_dev: - 
device_destroy(umad_class, port->sm_cdev.dev); + ret = cdev_device_add(&port->sm_cdev, &port->sm_dev); + if (ret) + goto err_dev; -err_sm_cdev: - cdev_del(&port->sm_cdev); + return 0; err_dev: - device_destroy(umad_class, port->cdev.dev); - + put_device(&port->sm_dev); + cdev_device_del(&port->cdev, &port->dev); err_cdev: - cdev_del(&port->cdev); + put_device(&port->dev); ida_free(&umad_ida, devnum); - - return -1; + return ret; } static void ib_umad_kill_port(struct ib_umad_port *port) @@ -1236,17 +1263,11 @@ static void ib_umad_kill_port(struct ib_umad_port *port) struct ib_umad_file *file; int id; - dev_set_drvdata(port->dev, NULL); - dev_set_drvdata(port->sm_dev, NULL); - - device_destroy(umad_class, port->cdev.dev); - device_destroy(umad_class, port->sm_cdev.dev); - - cdev_del(&port->cdev); - cdev_del(&port->sm_cdev); - mutex_lock(&port->file_mutex); + /* Mark ib_dev NULL and prevent ioctl and other file ops from + * progressing further. + */ port->ib_dev = NULL; list_for_each_entry(file, &port->file_list, port_list) { @@ -1260,6 +1281,11 @@ static void ib_umad_kill_port(struct ib_umad_port *port) mutex_unlock(&port->file_mutex); + + cdev_device_del(&port->sm_cdev, &port->sm_dev); + put_device(&port->sm_dev); + cdev_device_del(&port->cdev, &port->dev); + put_device(&port->dev); ida_free(&umad_ida, port->dev_num); } @@ -1272,22 +1298,17 @@ static void ib_umad_add_one(struct ib_device *device) s = rdma_start_port(device); e = rdma_end_port(device); - umad_dev = kzalloc(sizeof *umad_dev + - (e - s + 1) * sizeof (struct ib_umad_port), - GFP_KERNEL); + umad_dev = kzalloc(struct_size(umad_dev, ports, e - s + 1), GFP_KERNEL); if (!umad_dev) return; - kobject_init(&umad_dev->kobj, &ib_umad_dev_ktype); - + kref_init(&umad_dev->kref); for (i = s; i <= e; ++i) { if (!rdma_cap_ib_mad(device, i)) continue; - umad_dev->port[i - s].umad_dev = umad_dev; - if (ib_umad_init_port(device, i, umad_dev, - &umad_dev->port[i - s])) + &umad_dev->ports[i - s])) goto err; count++; @@ -1305,10 +1326,10 @@ err: if (!rdma_cap_ib_mad(device, i)) continue; - ib_umad_kill_port(&umad_dev->port[i - s]); + ib_umad_kill_port(&umad_dev->ports[i - s]); } free: - kobject_put(&umad_dev->kobj); + ib_umad_dev_put(umad_dev); } static void ib_umad_remove_one(struct ib_device *device, void *client_data) @@ -1321,15 +1342,9 @@ static void ib_umad_remove_one(struct ib_device *device, void *client_data) for (i = 0; i <= rdma_end_port(device) - rdma_start_port(device); ++i) { if (rdma_cap_ib_mad(device, i + rdma_start_port(device))) - ib_umad_kill_port(&umad_dev->port[i]); + ib_umad_kill_port(&umad_dev->ports[i]); } - - kobject_put(&umad_dev->kobj); -} - -static char *umad_devnode(struct device *dev, umode_t *mode) -{ - return kasprintf(GFP_KERNEL, "infiniband/%s", dev_name(dev)); + ib_umad_dev_put(umad_dev); } static int __init ib_umad_init(void) @@ -1338,7 +1353,7 @@ static int __init ib_umad_init(void) ret = register_chrdev_region(base_umad_dev, IB_UMAD_NUM_FIXED_MINOR * 2, - "infiniband_mad"); + umad_class.name); if (ret) { pr_err("couldn't register device number\n"); goto out; @@ -1346,28 +1361,19 @@ static int __init ib_umad_init(void) ret = alloc_chrdev_region(&dynamic_umad_dev, 0, IB_UMAD_NUM_DYNAMIC_MINOR * 2, - "infiniband_mad"); + umad_class.name); if (ret) { pr_err("couldn't register dynamic device number\n"); goto out_alloc; } dynamic_issm_dev = dynamic_umad_dev + IB_UMAD_NUM_DYNAMIC_MINOR; - umad_class = class_create(THIS_MODULE, "infiniband_mad"); - if (IS_ERR(umad_class)) { - ret = PTR_ERR(umad_class); + ret = 
class_register(&umad_class); + if (ret) { pr_err("couldn't create class infiniband_mad\n"); goto out_chrdev; } - umad_class->devnode = umad_devnode; - - ret = class_create_file(umad_class, &class_attr_abi_version.attr); - if (ret) { - pr_err("couldn't create abi_version attribute\n"); - goto out_class; - } - ret = ib_register_client(&umad_client); if (ret) { pr_err("couldn't register ib_umad client\n"); @@ -1377,7 +1383,7 @@ static int __init ib_umad_init(void) return 0; out_class: - class_destroy(umad_class); + class_unregister(&umad_class); out_chrdev: unregister_chrdev_region(dynamic_umad_dev, @@ -1394,7 +1400,7 @@ out: static void __exit ib_umad_cleanup(void) { ib_unregister_client(&umad_client); - class_destroy(umad_class); + class_unregister(&umad_class); unregister_chrdev_region(base_umad_dev, IB_UMAD_NUM_FIXED_MINOR * 2); unregister_chrdev_region(dynamic_umad_dev, diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h index c97935a0c7c6..ea0bc6885517 100644 --- a/drivers/infiniband/core/uverbs.h +++ b/drivers/infiniband/core/uverbs.h @@ -161,9 +161,6 @@ struct ib_uverbs_file { struct mutex umap_lock; struct list_head umaps; - u64 uverbs_cmd_mask; - u64 uverbs_ex_cmd_mask; - struct idr idr; /* spinlock protects write access to idr */ spinlock_t idr_lock; @@ -249,7 +246,6 @@ int uverbs_dealloc_mw(struct ib_mw *mw); void ib_uverbs_detach_umcast(struct ib_qp *qp, struct ib_uqp_object *uobj); -void create_udata(struct uverbs_attr_bundle *ctx, struct ib_udata *udata); long ib_uverbs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); struct ib_uverbs_flow_spec { @@ -297,63 +293,29 @@ extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_FLOW_ACTION); extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_DM); extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_COUNTERS); -#define IB_UVERBS_DECLARE_CMD(name) \ - ssize_t ib_uverbs_##name(struct ib_uverbs_file *file, \ - const char __user *buf, int in_len, \ - int out_len) - -IB_UVERBS_DECLARE_CMD(get_context); -IB_UVERBS_DECLARE_CMD(query_device); -IB_UVERBS_DECLARE_CMD(query_port); -IB_UVERBS_DECLARE_CMD(alloc_pd); -IB_UVERBS_DECLARE_CMD(dealloc_pd); -IB_UVERBS_DECLARE_CMD(reg_mr); -IB_UVERBS_DECLARE_CMD(rereg_mr); -IB_UVERBS_DECLARE_CMD(dereg_mr); -IB_UVERBS_DECLARE_CMD(alloc_mw); -IB_UVERBS_DECLARE_CMD(dealloc_mw); -IB_UVERBS_DECLARE_CMD(create_comp_channel); -IB_UVERBS_DECLARE_CMD(create_cq); -IB_UVERBS_DECLARE_CMD(resize_cq); -IB_UVERBS_DECLARE_CMD(poll_cq); -IB_UVERBS_DECLARE_CMD(req_notify_cq); -IB_UVERBS_DECLARE_CMD(destroy_cq); -IB_UVERBS_DECLARE_CMD(create_qp); -IB_UVERBS_DECLARE_CMD(open_qp); -IB_UVERBS_DECLARE_CMD(query_qp); -IB_UVERBS_DECLARE_CMD(modify_qp); -IB_UVERBS_DECLARE_CMD(destroy_qp); -IB_UVERBS_DECLARE_CMD(post_send); -IB_UVERBS_DECLARE_CMD(post_recv); -IB_UVERBS_DECLARE_CMD(post_srq_recv); -IB_UVERBS_DECLARE_CMD(create_ah); -IB_UVERBS_DECLARE_CMD(destroy_ah); -IB_UVERBS_DECLARE_CMD(attach_mcast); -IB_UVERBS_DECLARE_CMD(detach_mcast); -IB_UVERBS_DECLARE_CMD(create_srq); -IB_UVERBS_DECLARE_CMD(modify_srq); -IB_UVERBS_DECLARE_CMD(query_srq); -IB_UVERBS_DECLARE_CMD(destroy_srq); -IB_UVERBS_DECLARE_CMD(create_xsrq); -IB_UVERBS_DECLARE_CMD(open_xrcd); -IB_UVERBS_DECLARE_CMD(close_xrcd); - -#define IB_UVERBS_DECLARE_EX_CMD(name) \ - int ib_uverbs_ex_##name(struct ib_uverbs_file *file, \ - struct ib_udata *ucore, \ - struct ib_udata *uhw) - -IB_UVERBS_DECLARE_EX_CMD(create_flow); -IB_UVERBS_DECLARE_EX_CMD(destroy_flow); 
-IB_UVERBS_DECLARE_EX_CMD(query_device); -IB_UVERBS_DECLARE_EX_CMD(create_cq); -IB_UVERBS_DECLARE_EX_CMD(create_qp); -IB_UVERBS_DECLARE_EX_CMD(create_wq); -IB_UVERBS_DECLARE_EX_CMD(modify_wq); -IB_UVERBS_DECLARE_EX_CMD(destroy_wq); -IB_UVERBS_DECLARE_EX_CMD(create_rwq_ind_table); -IB_UVERBS_DECLARE_EX_CMD(destroy_rwq_ind_table); -IB_UVERBS_DECLARE_EX_CMD(modify_qp); -IB_UVERBS_DECLARE_EX_CMD(modify_cq); +/* + * ib_uverbs_query_port_resp.port_cap_flags started out as just a copy of the + * PortInfo CapabilityMask, but was extended with unique bits. + */ +static inline u32 make_port_cap_flags(const struct ib_port_attr *attr) +{ + u32 res; + + /* All IBA CapabilityMask bits are passed through here, except bit 26, + * which is overridden with IP_BASED_GIDS. This is due to a historical + * mistake in the implementation of IP_BASED_GIDS. Otherwise all other + * bits match the IBA definition across all kernel versions. + */ + res = attr->port_cap_flags & ~(u32)IB_UVERBS_PCF_IP_BASED_GIDS; + + if (attr->ip_gids) + res |= IB_UVERBS_PCF_IP_BASED_GIDS; + + return res; +} + +void copy_port_attr_to_resp(struct ib_port_attr *attr, + struct ib_uverbs_query_port_resp *resp, + struct ib_device *ib_dev, u8 port_num); #endif /* UVERBS_H */ diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index a93853770e3c..6b12cc5f97b2 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -47,11 +47,134 @@ #include "uverbs.h" #include "core_priv.h" +/* + * Copy a response to userspace. If the provided 'resp' is larger than the + * user buffer it is silently truncated. If the user provided a larger buffer + * then the trailing portion is zero filled. + * + * These semantics are intended to support future extension of the output + * structures. + */ +static int uverbs_response(struct uverbs_attr_bundle *attrs, const void *resp, + size_t resp_len) +{ + int ret; + + if (copy_to_user(attrs->ucore.outbuf, resp, + min(attrs->ucore.outlen, resp_len))) + return -EFAULT; + + if (resp_len < attrs->ucore.outlen) { + /* + * Zero fill any extra memory that user + * space might have provided. + */ + ret = clear_user(attrs->ucore.outbuf + resp_len, + attrs->ucore.outlen - resp_len); + if (ret) + return -EFAULT; + } + + return 0; +} + +/* + * Copy a request from userspace. If the provided 'req' is larger than the + * user buffer then the user buffer is zero extended into the 'req'. If 'req' + * is smaller than the user buffer then the uncopied bytes in the user buffer + * must be zero. + */ +static int uverbs_request(struct uverbs_attr_bundle *attrs, void *req, + size_t req_len) +{ + if (copy_from_user(req, attrs->ucore.inbuf, + min(attrs->ucore.inlen, req_len))) + return -EFAULT; + + if (attrs->ucore.inlen < req_len) { + memset(req + attrs->ucore.inlen, 0, + req_len - attrs->ucore.inlen); + } else if (attrs->ucore.inlen > req_len) { + if (!ib_is_buffer_cleared(attrs->ucore.inbuf + req_len, + attrs->ucore.inlen - req_len)) + return -EOPNOTSUPP; + } + return 0; +} + +/* + * Generate the value for the 'response_length' protocol used by write_ex. + * This is the number of bytes the kernel actually wrote. Userspace can use + * this to detect what structure members in the response the kernel + * understood. 
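+ * Handlers store this value in the response_length member of their + * extended response struct before handing it to uverbs_response().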
+ */ +static u32 uverbs_response_length(struct uverbs_attr_bundle *attrs, + size_t resp_len) +{ + return min_t(size_t, attrs->ucore.outlen, resp_len); +} + +/* + * The iterator version of the request interface is for handlers that need to + * step over a flex array at the end of a command header. + */ +struct uverbs_req_iter { + const void __user *cur; + const void __user *end; +}; + +static int uverbs_request_start(struct uverbs_attr_bundle *attrs, + struct uverbs_req_iter *iter, + void *req, + size_t req_len) +{ + if (attrs->ucore.inlen < req_len) + return -ENOSPC; + + if (copy_from_user(req, attrs->ucore.inbuf, req_len)) + return -EFAULT; + + iter->cur = attrs->ucore.inbuf + req_len; + iter->end = attrs->ucore.inbuf + attrs->ucore.inlen; + return 0; +} + +static int uverbs_request_next(struct uverbs_req_iter *iter, void *val, + size_t len) +{ + if (iter->cur + len > iter->end) + return -ENOSPC; + + if (copy_from_user(val, iter->cur, len)) + return -EFAULT; + + iter->cur += len; + return 0; +} + +static const void __user *uverbs_request_next_ptr(struct uverbs_req_iter *iter, + size_t len) +{ + const void __user *res = iter->cur; + + if (iter->cur + len > iter->end) + return ERR_PTR(-ENOSPC); + iter->cur += len; + return res; +} + +static int uverbs_request_finish(struct uverbs_req_iter *iter) +{ + if (!ib_is_buffer_cleared(iter->cur, iter->end - iter->cur)) + return -EOPNOTSUPP; + return 0; +} + static struct ib_uverbs_completion_event_file * -_ib_uverbs_lookup_comp_file(s32 fd, struct ib_uverbs_file *ufile) +_ib_uverbs_lookup_comp_file(s32 fd, const struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = ufd_get_read(UVERBS_OBJECT_COMP_CHANNEL, - fd, ufile); + fd, attrs); if (IS_ERR(uobj)) return (void *)uobj; @@ -65,24 +188,20 @@ _ib_uverbs_lookup_comp_file(s32 fd, struct ib_uverbs_file *ufile) #define ib_uverbs_lookup_comp_file(_fd, _ufile) \ _ib_uverbs_lookup_comp_file((_fd)*typecheck(s32, _fd), _ufile) -ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, - const char __user *buf, - int in_len, int out_len) +static int ib_uverbs_get_context(struct uverbs_attr_bundle *attrs) { + struct ib_uverbs_file *file = attrs->ufile; struct ib_uverbs_get_context cmd; struct ib_uverbs_get_context_resp resp; - struct ib_udata udata; struct ib_ucontext *ucontext; struct file *filp; struct ib_rdmacg_object cg_obj; struct ib_device *ib_dev; int ret; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; mutex_lock(&file->ucontext_lock); ib_dev = srcu_dereference(file->device->ib_dev, @@ -97,16 +216,11 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, goto err; } - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); - ret = ib_rdmacg_try_charge(&cg_obj, ib_dev, RDMACG_RESOURCE_HCA_HANDLE); if (ret) goto err; - ucontext = ib_dev->alloc_ucontext(ib_dev, &udata); + ucontext = ib_dev->ops.alloc_ucontext(ib_dev, &attrs->driver_udata); if (IS_ERR(ucontext)) { ret = PTR_ERR(ucontext); goto err_alloc; @@ -141,13 +255,15 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, goto err_fd; } - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_file; - } fd_install(resp.async_fd, filp); + ucontext->res.type = 
RDMA_RESTRACK_CTX; + rdma_restrack_uadd(&ucontext->res); + /* * Make sure that ib_uverbs_get_ucontext() sees the pointer update * only after all writes to setup the ucontext have completed @@ -156,7 +272,7 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, mutex_unlock(&file->ucontext_lock); - return in_len; + return 0; err_file: ib_uverbs_free_async_event_file(file); @@ -166,7 +282,7 @@ err_fd: put_unused_fd(resp.async_fd); err_free: - ib_dev->dealloc_ucontext(ucontext); + ib_dev->ops.dealloc_ucontext(ucontext); err_alloc: ib_rdmacg_uncharge(&cg_obj, ib_dev, RDMACG_RESOURCE_HCA_HANDLE); @@ -224,57 +340,28 @@ static void copy_query_dev_fields(struct ib_ucontext *ucontext, resp->phys_port_cnt = ib_dev->phys_port_cnt; } -ssize_t ib_uverbs_query_device(struct ib_uverbs_file *file, - const char __user *buf, - int in_len, int out_len) +static int ib_uverbs_query_device(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_query_device cmd; struct ib_uverbs_query_device_resp resp; struct ib_ucontext *ucontext; + int ret; - ucontext = ib_uverbs_get_ucontext(file); + ucontext = ib_uverbs_get_ucontext(attrs); if (IS_ERR(ucontext)) return PTR_ERR(ucontext); - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; memset(&resp, 0, sizeof resp); copy_query_dev_fields(ucontext, &resp, &ucontext->device->attrs); - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - return -EFAULT; - - return in_len; + return uverbs_response(attrs, &resp, sizeof(resp)); } -/* - * ib_uverbs_query_port_resp.port_cap_flags started out as just a copy of the - * PortInfo CapabilityMask, but was extended with unique bits. - */ -static u32 make_port_cap_flags(const struct ib_port_attr *attr) -{ - u32 res; - - /* All IBA CapabilityMask bits are passed through here, except bit 26, - * which is overridden with IP_BASED_GIDS. This is due to a historical - * mistake in the implementation of IP_BASED_GIDS. Otherwise all other - * bits match the IBA definition across all kernel versions. 
- */ - res = attr->port_cap_flags & ~(u32)IB_UVERBS_PCF_IP_BASED_GIDS; - - if (attr->ip_gids) - res |= IB_UVERBS_PCF_IP_BASED_GIDS; - - return res; -} - -ssize_t ib_uverbs_query_port(struct ib_uverbs_file *file, - const char __user *buf, - int in_len, int out_len) +static int ib_uverbs_query_port(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_query_port cmd; struct ib_uverbs_query_port_resp resp; @@ -283,88 +370,43 @@ ssize_t ib_uverbs_query_port(struct ib_uverbs_file *file, struct ib_ucontext *ucontext; struct ib_device *ib_dev; - ucontext = ib_uverbs_get_ucontext(file); + ucontext = ib_uverbs_get_ucontext(attrs); if (IS_ERR(ucontext)) return PTR_ERR(ucontext); ib_dev = ucontext->device; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; ret = ib_query_port(ib_dev, cmd.port_num, &attr); if (ret) return ret; memset(&resp, 0, sizeof resp); + copy_port_attr_to_resp(&attr, &resp, ib_dev, cmd.port_num); - resp.state = attr.state; - resp.max_mtu = attr.max_mtu; - resp.active_mtu = attr.active_mtu; - resp.gid_tbl_len = attr.gid_tbl_len; - resp.port_cap_flags = make_port_cap_flags(&attr); - resp.max_msg_sz = attr.max_msg_sz; - resp.bad_pkey_cntr = attr.bad_pkey_cntr; - resp.qkey_viol_cntr = attr.qkey_viol_cntr; - resp.pkey_tbl_len = attr.pkey_tbl_len; - - if (rdma_is_grh_required(ib_dev, cmd.port_num)) - resp.flags |= IB_UVERBS_QPF_GRH_REQUIRED; - - if (rdma_cap_opa_ah(ib_dev, cmd.port_num)) { - resp.lid = OPA_TO_IB_UCAST_LID(attr.lid); - resp.sm_lid = OPA_TO_IB_UCAST_LID(attr.sm_lid); - } else { - resp.lid = ib_lid_cpu16(attr.lid); - resp.sm_lid = ib_lid_cpu16(attr.sm_lid); - } - resp.lmc = attr.lmc; - resp.max_vl_num = attr.max_vl_num; - resp.sm_sl = attr.sm_sl; - resp.subnet_timeout = attr.subnet_timeout; - resp.init_type_reply = attr.init_type_reply; - resp.active_width = attr.active_width; - resp.active_speed = attr.active_speed; - resp.phys_state = attr.phys_state; - resp.link_layer = rdma_port_get_link_layer(ib_dev, - cmd.port_num); - - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - return -EFAULT; - - return in_len; + return uverbs_response(attrs, &resp, sizeof(resp)); } -ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file, - const char __user *buf, - int in_len, int out_len) +static int ib_uverbs_alloc_pd(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_alloc_pd cmd; struct ib_uverbs_alloc_pd_resp resp; - struct ib_udata udata; struct ib_uobject *uobj; struct ib_pd *pd; int ret; struct ib_device *ib_dev; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - uobj = uobj_alloc(UVERBS_OBJECT_PD, file, &ib_dev); + uobj = uobj_alloc(UVERBS_OBJECT_PD, attrs, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); - pd = ib_dev->alloc_pd(ib_dev, uobj->context, &udata); + pd = ib_dev->ops.alloc_pd(ib_dev, uobj->context, &attrs->driver_udata); if (IS_ERR(pd)) { ret = PTR_ERR(pd); goto err; @@ -379,14 +421,13 @@ ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file, memset(&resp, 0, sizeof resp); resp.pd_handle = uobj->id; pd->res.type = RDMA_RESTRACK_PD; - rdma_restrack_add(&pd->res); + rdma_restrack_uadd(&pd->res); 
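With uverbs_request()/uverbs_response() in place, every converted write handler reduces to the same shape as the alloc_pd conversion in this hunk. A condensed sketch of the pattern; example_cmd and example_resp are hypothetical placeholder structs, not uAPI types from this patch:

	static int ib_uverbs_example(struct uverbs_attr_bundle *attrs)
	{
		struct example_cmd cmd;		/* hypothetical request struct */
		struct example_resp resp = {};	/* hypothetical response struct */
		int ret;

		/* short user buffers are zero-extended; oversized ones must
		 * be zero past sizeof(cmd), else -EOPNOTSUPP */
		ret = uverbs_request(attrs, &cmd, sizeof(cmd));
		if (ret)
			return ret;

		/* ... create or look up the object and fill resp ... */

		/* report how much of resp this kernel actually wrote */
		resp.response_length = uverbs_response_length(attrs, sizeof(resp));

		/* oversized resp is silently truncated; a larger user buffer
		 * is zero-filled past resp_len */
		return uverbs_response(attrs, &resp, sizeof(resp));
	}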
- if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_copy; - } - return uobj_alloc_commit(uobj, in_len); + return uobj_alloc_commit(uobj); err_copy: ib_dealloc_pd(pd); @@ -396,17 +437,16 @@ err: return ret; } -ssize_t ib_uverbs_dealloc_pd(struct ib_uverbs_file *file, - const char __user *buf, - int in_len, int out_len) +static int ib_uverbs_dealloc_pd(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_dealloc_pd cmd; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - return uobj_perform_destroy(UVERBS_OBJECT_PD, cmd.pd_handle, file, - in_len); + return uobj_perform_destroy(UVERBS_OBJECT_PD, cmd.pd_handle, attrs); } struct xrcd_table_entry { @@ -494,13 +534,11 @@ static void xrcd_table_delete(struct ib_uverbs_device *dev, } } -ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs) { + struct ib_uverbs_device *ibudev = attrs->ufile->device; struct ib_uverbs_open_xrcd cmd; struct ib_uverbs_open_xrcd_resp resp; - struct ib_udata udata; struct ib_uxrcd_object *obj; struct ib_xrcd *xrcd = NULL; struct fd f = {NULL, 0}; @@ -509,18 +547,11 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, int new_xrcd = 0; struct ib_device *ib_dev; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - mutex_lock(&file->device->xrcd_tree_mutex); + mutex_lock(&ibudev->xrcd_tree_mutex); if (cmd.fd != -1) { /* search for file descriptor */ @@ -531,7 +562,7 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, } inode = file_inode(f.file); - xrcd = find_xrcd(file->device, inode); + xrcd = find_xrcd(ibudev, inode); if (!xrcd && !(cmd.oflags & O_CREAT)) { /* no file descriptor. 
Need CREATE flag */ ret = -EAGAIN; @@ -544,7 +575,7 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, } } - obj = (struct ib_uxrcd_object *)uobj_alloc(UVERBS_OBJECT_XRCD, file, + obj = (struct ib_uxrcd_object *)uobj_alloc(UVERBS_OBJECT_XRCD, attrs, &ib_dev); if (IS_ERR(obj)) { ret = PTR_ERR(obj); @@ -552,7 +583,8 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, } if (!xrcd) { - xrcd = ib_dev->alloc_xrcd(ib_dev, obj->uobject.context, &udata); + xrcd = ib_dev->ops.alloc_xrcd(ib_dev, obj->uobject.context, + &attrs->driver_udata); if (IS_ERR(xrcd)) { ret = PTR_ERR(xrcd); goto err; @@ -574,29 +606,28 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, if (inode) { if (new_xrcd) { /* create new inode/xrcd table entry */ - ret = xrcd_table_insert(file->device, inode, xrcd); + ret = xrcd_table_insert(ibudev, inode, xrcd); if (ret) goto err_dealloc_xrcd; } atomic_inc(&xrcd->usecnt); } - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_copy; - } if (f.file) fdput(f); - mutex_unlock(&file->device->xrcd_tree_mutex); + mutex_unlock(&ibudev->xrcd_tree_mutex); - return uobj_alloc_commit(&obj->uobject, in_len); + return uobj_alloc_commit(&obj->uobject); err_copy: if (inode) { if (new_xrcd) - xrcd_table_delete(file->device, inode); + xrcd_table_delete(ibudev, inode); atomic_dec(&xrcd->usecnt); } @@ -610,22 +641,21 @@ err_tree_mutex_unlock: if (f.file) fdput(f); - mutex_unlock(&file->device->xrcd_tree_mutex); + mutex_unlock(&ibudev->xrcd_tree_mutex); return ret; } -ssize_t ib_uverbs_close_xrcd(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_close_xrcd(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_close_xrcd cmd; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - return uobj_perform_destroy(UVERBS_OBJECT_XRCD, cmd.xrcd_handle, file, - in_len); + return uobj_perform_destroy(UVERBS_OBJECT_XRCD, cmd.xrcd_handle, attrs); } int ib_uverbs_dealloc_xrcd(struct ib_uobject *uobject, @@ -653,29 +683,19 @@ int ib_uverbs_dealloc_xrcd(struct ib_uobject *uobject, return ret; } -ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_reg_mr cmd; struct ib_uverbs_reg_mr_resp resp; - struct ib_udata udata; struct ib_uobject *uobj; struct ib_pd *pd; struct ib_mr *mr; int ret; struct ib_device *ib_dev; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; if ((cmd.start & ~PAGE_MASK) != (cmd.hca_va & ~PAGE_MASK)) return -EINVAL; @@ -684,11 +704,11 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, if (ret) return ret; - uobj = uobj_alloc(UVERBS_OBJECT_MR, file, &ib_dev); + uobj = uobj_alloc(UVERBS_OBJECT_MR, attrs, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); if (!pd) { ret = -EINVAL; goto err_free; @@ -703,8 +723,9 @@ ssize_t 
ib_uverbs_reg_mr(struct ib_uverbs_file *file, } } - mr = pd->device->reg_user_mr(pd, cmd.start, cmd.length, cmd.hca_va, - cmd.access_flags, &udata); + mr = pd->device->ops.reg_user_mr(pd, cmd.start, cmd.length, cmd.hca_va, + cmd.access_flags, + &attrs->driver_udata); if (IS_ERR(mr)) { ret = PTR_ERR(mr); goto err_put; @@ -716,7 +737,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, mr->uobject = uobj; atomic_inc(&pd->usecnt); mr->res.type = RDMA_RESTRACK_MR; - rdma_restrack_add(&mr->res); + rdma_restrack_uadd(&mr->res); uobj->object = mr; @@ -725,14 +746,13 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, resp.rkey = mr->rkey; resp.mr_handle = uobj->id; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_copy; - } uobj_put_obj_read(pd); - return uobj_alloc_commit(uobj, in_len); + return uobj_alloc_commit(uobj); err_copy: ib_dereg_mr(mr); @@ -745,29 +765,19 @@ err_free: return ret; } -ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_rereg_mr cmd; struct ib_uverbs_rereg_mr_resp resp; - struct ib_udata udata; struct ib_pd *pd = NULL; struct ib_mr *mr; struct ib_pd *old_pd; int ret; struct ib_uobject *uobj; - if (out_len < sizeof(resp)) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof(cmd))) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; if (cmd.flags & ~IB_MR_REREG_SUPPORTED || !cmd.flags) return -EINVAL; @@ -777,7 +787,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, (cmd.start & ~PAGE_MASK) != (cmd.hca_va & ~PAGE_MASK))) return -EINVAL; - uobj = uobj_get_write(UVERBS_OBJECT_MR, cmd.mr_handle, file); + uobj = uobj_get_write(UVERBS_OBJECT_MR, cmd.mr_handle, attrs); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -796,7 +806,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, if (cmd.flags & IB_MR_REREG_PD) { pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, - file); + attrs); if (!pd) { ret = -EINVAL; goto put_uobjs; @@ -804,9 +814,10 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, } old_pd = mr->pd; - ret = mr->device->rereg_user_mr(mr, cmd.flags, cmd.start, - cmd.length, cmd.hca_va, - cmd.access_flags, pd, &udata); + ret = mr->device->ops.rereg_user_mr(mr, cmd.flags, cmd.start, + cmd.length, cmd.hca_va, + cmd.access_flags, pd, + &attrs->driver_udata); if (!ret) { if (cmd.flags & IB_MR_REREG_PD) { atomic_inc(&pd->usecnt); @@ -821,10 +832,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, resp.lkey = mr->lkey; resp.rkey = mr->rkey; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof(resp))) - ret = -EFAULT; - else - ret = in_len; + ret = uverbs_response(attrs, &resp, sizeof(resp)); put_uobj_pd: if (cmd.flags & IB_MR_REREG_PD) @@ -836,54 +844,43 @@ put_uobjs: return ret; } -ssize_t ib_uverbs_dereg_mr(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_dereg_mr(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_dereg_mr cmd; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - return 
uobj_perform_destroy(UVERBS_OBJECT_MR, cmd.mr_handle, file, - in_len); + return uobj_perform_destroy(UVERBS_OBJECT_MR, cmd.mr_handle, attrs); } -ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_alloc_mw(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_alloc_mw cmd; struct ib_uverbs_alloc_mw_resp resp; struct ib_uobject *uobj; struct ib_pd *pd; struct ib_mw *mw; - struct ib_udata udata; int ret; struct ib_device *ib_dev; - if (out_len < sizeof(resp)) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof(cmd))) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - uobj = uobj_alloc(UVERBS_OBJECT_MW, file, &ib_dev); + uobj = uobj_alloc(UVERBS_OBJECT_MW, attrs, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); if (!pd) { ret = -EINVAL; goto err_free; } - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); - - mw = pd->device->alloc_mw(pd, cmd.mw_type, &udata); + mw = pd->device->ops.alloc_mw(pd, cmd.mw_type, &attrs->driver_udata); if (IS_ERR(mw)) { ret = PTR_ERR(mw); goto err_put; @@ -900,13 +897,12 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, resp.rkey = mw->rkey; resp.mw_handle = uobj->id; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof(resp))) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_copy; - } uobj_put_obj_read(pd); - return uobj_alloc_commit(uobj, in_len); + return uobj_alloc_commit(uobj); err_copy: uverbs_dealloc_mw(mw); @@ -917,36 +913,32 @@ err_free: return ret; } -ssize_t ib_uverbs_dealloc_mw(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_dealloc_mw(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_dealloc_mw cmd; + int ret; - if (copy_from_user(&cmd, buf, sizeof(cmd))) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - return uobj_perform_destroy(UVERBS_OBJECT_MW, cmd.mw_handle, file, - in_len); + return uobj_perform_destroy(UVERBS_OBJECT_MW, cmd.mw_handle, attrs); } -ssize_t ib_uverbs_create_comp_channel(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_create_comp_channel(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_create_comp_channel cmd; struct ib_uverbs_create_comp_channel_resp resp; struct ib_uobject *uobj; struct ib_uverbs_completion_event_file *ev_file; struct ib_device *ib_dev; + int ret; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - uobj = uobj_alloc(UVERBS_OBJECT_COMP_CHANNEL, file, &ib_dev); + uobj = uobj_alloc(UVERBS_OBJECT_COMP_CHANNEL, attrs, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -956,25 +948,17 @@ ssize_t ib_uverbs_create_comp_channel(struct ib_uverbs_file *file, uobj); ib_uverbs_init_event_queue(&ev_file->ev_queue); - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) { + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) { uobj_alloc_abort(uobj); - return -EFAULT; + return ret; } - return uobj_alloc_commit(uobj, in_len); + return 
uobj_alloc_commit(uobj); } -static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw, - struct ib_uverbs_ex_create_cq *cmd, - size_t cmd_sz, - int (*cb)(struct ib_uverbs_file *file, - struct ib_ucq_object *obj, - struct ib_uverbs_ex_create_cq_resp *resp, - struct ib_udata *udata, - void *context), - void *context) +static struct ib_ucq_object *create_cq(struct uverbs_attr_bundle *attrs, + struct ib_uverbs_ex_create_cq *cmd) { struct ib_ucq_object *obj; struct ib_uverbs_completion_event_file *ev_file = NULL; @@ -984,21 +968,16 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, struct ib_cq_init_attr attr = {}; struct ib_device *ib_dev; - if (cmd->comp_vector >= file->device->num_comp_vectors) + if (cmd->comp_vector >= attrs->ufile->device->num_comp_vectors) return ERR_PTR(-EINVAL); - obj = (struct ib_ucq_object *)uobj_alloc(UVERBS_OBJECT_CQ, file, + obj = (struct ib_ucq_object *)uobj_alloc(UVERBS_OBJECT_CQ, attrs, &ib_dev); if (IS_ERR(obj)) return obj; - if (!ib_dev->create_cq) { - ret = -EOPNOTSUPP; - goto err; - } - if (cmd->comp_channel >= 0) { - ev_file = ib_uverbs_lookup_comp_file(cmd->comp_channel, file); + ev_file = ib_uverbs_lookup_comp_file(cmd->comp_channel, attrs); if (IS_ERR(ev_file)) { ret = PTR_ERR(ev_file); goto err; @@ -1013,11 +992,10 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, attr.cqe = cmd->cqe; attr.comp_vector = cmd->comp_vector; + attr.flags = cmd->flags; - if (cmd_sz > offsetof(typeof(*cmd), flags) + sizeof(cmd->flags)) - attr.flags = cmd->flags; - - cq = ib_dev->create_cq(ib_dev, &attr, obj->uobject.context, uhw); + cq = ib_dev->ops.create_cq(ib_dev, &attr, obj->uobject.context, + &attrs->driver_udata); if (IS_ERR(cq)) { ret = PTR_ERR(cq); goto err_file; @@ -1034,18 +1012,16 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, memset(&resp, 0, sizeof resp); resp.base.cq_handle = obj->uobject.id; resp.base.cqe = cq->cqe; - - resp.response_length = offsetof(typeof(resp), response_length) + - sizeof(resp.response_length); + resp.response_length = uverbs_response_length(attrs, sizeof(resp)); cq->res.type = RDMA_RESTRACK_CQ; - rdma_restrack_add(&cq->res); + rdma_restrack_uadd(&cq->res); - ret = cb(file, obj, &resp, ucore, context); + ret = uverbs_response(attrs, &resp, sizeof(resp)); if (ret) goto err_cb; - ret = uobj_alloc_commit(&obj->uobject, 0); + ret = uobj_alloc_commit(&obj->uobject); if (ret) return ERR_PTR(ret); return obj; @@ -1055,7 +1031,7 @@ err_cb: err_file: if (ev_file) - ib_uverbs_release_ucq(file, ev_file, obj); + ib_uverbs_release_ucq(attrs->ufile, ev_file, obj); err: uobj_alloc_abort(&obj->uobject); @@ -1063,41 +1039,16 @@ err: return ERR_PTR(ret); } -static int ib_uverbs_create_cq_cb(struct ib_uverbs_file *file, - struct ib_ucq_object *obj, - struct ib_uverbs_ex_create_cq_resp *resp, - struct ib_udata *ucore, void *context) -{ - if (ib_copy_to_udata(ucore, &resp->base, sizeof(resp->base))) - return -EFAULT; - - return 0; -} - -ssize_t ib_uverbs_create_cq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_create_cq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_create_cq cmd; struct ib_uverbs_ex_create_cq cmd_ex; - struct ib_uverbs_create_cq_resp resp; - struct ib_udata ucore; - struct ib_udata uhw; struct ib_ucq_object *obj; + int ret; - if (out_len < sizeof(resp)) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof(cmd))) - return -EFAULT; - - 
ib_uverbs_init_udata(&ucore, buf, u64_to_user_ptr(cmd.response), - sizeof(cmd), sizeof(resp)); - - ib_uverbs_init_udata(&uhw, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; memset(&cmd_ex, 0, sizeof(cmd_ex)); cmd_ex.user_handle = cmd.user_handle; @@ -1105,43 +1056,19 @@ ssize_t ib_uverbs_create_cq(struct ib_uverbs_file *file, cmd_ex.comp_vector = cmd.comp_vector; cmd_ex.comp_channel = cmd.comp_channel; - obj = create_cq(file, &ucore, &uhw, &cmd_ex, - offsetof(typeof(cmd_ex), comp_channel) + - sizeof(cmd.comp_channel), ib_uverbs_create_cq_cb, - NULL); - - if (IS_ERR(obj)) - return PTR_ERR(obj); - - return in_len; -} - -static int ib_uverbs_ex_create_cq_cb(struct ib_uverbs_file *file, - struct ib_ucq_object *obj, - struct ib_uverbs_ex_create_cq_resp *resp, - struct ib_udata *ucore, void *context) -{ - if (ib_copy_to_udata(ucore, resp, resp->response_length)) - return -EFAULT; - - return 0; + obj = create_cq(attrs, &cmd_ex); + return PTR_ERR_OR_ZERO(obj); } -int ib_uverbs_ex_create_cq(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_create_cq(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_create_cq_resp resp; struct ib_uverbs_ex_create_cq cmd; struct ib_ucq_object *obj; - int err; - - if (ucore->inlen < sizeof(cmd)) - return -EINVAL; + int ret; - err = ib_copy_from_udata(&cmd, ucore, sizeof(cmd)); - if (err) - return err; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; if (cmd.comp_mask) return -EINVAL; @@ -1149,52 +1076,36 @@ int ib_uverbs_ex_create_cq(struct ib_uverbs_file *file, if (cmd.reserved) return -EINVAL; - if (ucore->outlen < (offsetof(typeof(resp), response_length) + - sizeof(resp.response_length))) - return -ENOSPC; - - obj = create_cq(file, ucore, uhw, &cmd, - min(ucore->inlen, sizeof(cmd)), - ib_uverbs_ex_create_cq_cb, NULL); - + obj = create_cq(attrs, &cmd); return PTR_ERR_OR_ZERO(obj); } -ssize_t ib_uverbs_resize_cq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_resize_cq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_resize_cq cmd; struct ib_uverbs_resize_cq_resp resp = {}; - struct ib_udata udata; struct ib_cq *cq; int ret = -EINVAL; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); if (!cq) return -EINVAL; - ret = cq->device->resize_cq(cq, cmd.cqe, &udata); + ret = cq->device->ops.resize_cq(cq, cmd.cqe, &attrs->driver_udata); if (ret) goto out; resp.cqe = cq->cqe; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp.cqe)) - ret = -EFAULT; - + ret = uverbs_response(attrs, &resp, sizeof(resp)); out: uobj_put_obj_read(cq); - return ret ? 
ret : in_len; + return ret; } static int copy_wc_to_user(struct ib_device *ib_dev, void __user *dest, @@ -1227,9 +1138,7 @@ static int copy_wc_to_user(struct ib_device *ib_dev, void __user *dest, return 0; } -ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_poll_cq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_poll_cq cmd; struct ib_uverbs_poll_cq_resp resp; @@ -1239,15 +1148,16 @@ ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file, struct ib_wc wc; int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); if (!cq) return -EINVAL; /* we copy a struct ib_uverbs_poll_cq_resp to user space */ - header_ptr = u64_to_user_ptr(cmd.response); + header_ptr = attrs->ucore.outbuf; data_ptr = header_ptr + sizeof resp; memset(&resp, 0, sizeof resp); @@ -1271,24 +1181,24 @@ ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file, goto out_put; } - ret = in_len; + ret = 0; out_put: uobj_put_obj_read(cq); return ret; } -ssize_t ib_uverbs_req_notify_cq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_req_notify_cq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_req_notify_cq cmd; struct ib_cq *cq; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); if (!cq) return -EINVAL; @@ -1297,22 +1207,22 @@ ssize_t ib_uverbs_req_notify_cq(struct ib_uverbs_file *file, uobj_put_obj_read(cq); - return in_len; + return 0; } -ssize_t ib_uverbs_destroy_cq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_destroy_cq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_destroy_cq cmd; struct ib_uverbs_destroy_cq_resp resp; struct ib_uobject *uobj; struct ib_ucq_object *obj; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - uobj = uobj_get_destroy(UVERBS_OBJECT_CQ, cmd.cq_handle, file); + uobj = uobj_get_destroy(UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -1323,21 +1233,11 @@ ssize_t ib_uverbs_destroy_cq(struct ib_uverbs_file *file, uobj_put_destroy(uobj); - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - return -EFAULT; - - return in_len; + return uverbs_response(attrs, &resp, sizeof(resp)); } -static int create_qp(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw, - struct ib_uverbs_ex_create_qp *cmd, - size_t cmd_sz, - int (*cb)(struct ib_uverbs_file *file, - struct ib_uverbs_ex_create_qp_resp *resp, - struct ib_udata *udata), - void *context) +static int create_qp(struct uverbs_attr_bundle *attrs, + struct ib_uverbs_ex_create_qp *cmd) { struct ib_uqp_object *obj; struct ib_device *device; @@ -1347,7 +1247,6 @@ static int create_qp(struct ib_uverbs_file *file, struct ib_cq *scq = NULL, *rcq = NULL; struct ib_srq *srq = NULL; struct ib_qp *qp; - char *buf; struct ib_qp_init_attr attr = {}; struct ib_uverbs_ex_create_qp_resp resp; int ret; @@ -1358,7 +1257,7 @@ static int 
create_qp(struct ib_uverbs_file *file, if (cmd->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW)) return -EPERM; - obj = (struct ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, file, + obj = (struct ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, attrs, &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); @@ -1366,12 +1265,10 @@ static int create_qp(struct ib_uverbs_file *file, obj->uevent.uobject.user_handle = cmd->user_handle; mutex_init(&obj->mcast_lock); - if (cmd_sz >= offsetof(typeof(*cmd), rwq_ind_tbl_handle) + - sizeof(cmd->rwq_ind_tbl_handle) && - (cmd->comp_mask & IB_UVERBS_CREATE_QP_MASK_IND_TABLE)) { + if (cmd->comp_mask & IB_UVERBS_CREATE_QP_MASK_IND_TABLE) { ind_tbl = uobj_get_obj_read(rwq_ind_table, UVERBS_OBJECT_RWQ_IND_TBL, - cmd->rwq_ind_tbl_handle, file); + cmd->rwq_ind_tbl_handle, attrs); if (!ind_tbl) { ret = -EINVAL; goto err_put; @@ -1380,13 +1277,6 @@ static int create_qp(struct ib_uverbs_file *file, attr.rwq_ind_tbl = ind_tbl; } - if (cmd_sz > sizeof(*cmd) && - !ib_is_udata_cleared(ucore, sizeof(*cmd), - cmd_sz - sizeof(*cmd))) { - ret = -EOPNOTSUPP; - goto err_put; - } - if (ind_tbl && (cmd->max_recv_wr || cmd->max_recv_sge || cmd->is_srq)) { ret = -EINVAL; goto err_put; @@ -1397,7 +1287,7 @@ static int create_qp(struct ib_uverbs_file *file, if (cmd->qp_type == IB_QPT_XRC_TGT) { xrcd_uobj = uobj_get_read(UVERBS_OBJECT_XRCD, cmd->pd_handle, - file); + attrs); if (IS_ERR(xrcd_uobj)) { ret = -EINVAL; @@ -1417,7 +1307,7 @@ static int create_qp(struct ib_uverbs_file *file, } else { if (cmd->is_srq) { srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, - cmd->srq_handle, file); + cmd->srq_handle, attrs); if (!srq || srq->srq_type == IB_SRQT_XRC) { ret = -EINVAL; goto err_put; @@ -1428,7 +1318,7 @@ static int create_qp(struct ib_uverbs_file *file, if (cmd->recv_cq_handle != cmd->send_cq_handle) { rcq = uobj_get_obj_read( cq, UVERBS_OBJECT_CQ, - cmd->recv_cq_handle, file); + cmd->recv_cq_handle, attrs); if (!rcq) { ret = -EINVAL; goto err_put; @@ -1439,11 +1329,11 @@ static int create_qp(struct ib_uverbs_file *file, if (has_sq) scq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, - cmd->send_cq_handle, file); + cmd->send_cq_handle, attrs); if (!ind_tbl) rcq = rcq ?: scq; pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, - file); + attrs); if (!pd || (!scq && has_sq)) { ret = -EINVAL; goto err_put; @@ -1453,7 +1343,7 @@ static int create_qp(struct ib_uverbs_file *file, } attr.event_handler = ib_uverbs_qp_event_handler; - attr.qp_context = file; + attr.qp_context = attrs->ufile; attr.send_cq = scq; attr.recv_cq = rcq; attr.srq = srq; @@ -1473,10 +1363,7 @@ static int create_qp(struct ib_uverbs_file *file, INIT_LIST_HEAD(&obj->uevent.event_list); INIT_LIST_HEAD(&obj->mcast_list); - if (cmd_sz >= offsetof(typeof(*cmd), create_flags) + - sizeof(cmd->create_flags)) - attr.create_flags = cmd->create_flags; - + attr.create_flags = cmd->create_flags; if (attr.create_flags & ~(IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK | IB_QP_CREATE_CROSS_CHANNEL | IB_QP_CREATE_MANAGED_SEND | @@ -1498,18 +1385,10 @@ static int create_qp(struct ib_uverbs_file *file, attr.source_qpn = cmd->source_qpn; } - buf = (void *)cmd + sizeof(*cmd); - if (cmd_sz > sizeof(*cmd)) - if (!(buf[0] == 0 && !memcmp(buf, buf + 1, - cmd_sz - sizeof(*cmd) - 1))) { - ret = -EINVAL; - goto err_put; - } - if (cmd->qp_type == IB_QPT_XRC_TGT) qp = ib_create_qp(pd, &attr); else - qp = _ib_create_qp(device, pd, &attr, uhw, + qp = _ib_create_qp(device, pd, &attr, &attrs->driver_udata, &obj->uevent.uobject); if (IS_ERR(qp)) { @@ -1557,11 
+1436,9 @@ static int create_qp(struct ib_uverbs_file *file, resp.base.max_recv_wr = attr.cap.max_recv_wr; resp.base.max_send_wr = attr.cap.max_send_wr; resp.base.max_inline_data = attr.cap.max_inline_data; + resp.response_length = uverbs_response_length(attrs, sizeof(resp)); - resp.response_length = offsetof(typeof(resp), response_length) + - sizeof(resp.response_length); - - ret = cb(file, &resp, ucore); + ret = uverbs_response(attrs, &resp, sizeof(resp)); if (ret) goto err_cb; @@ -1583,7 +1460,7 @@ static int create_qp(struct ib_uverbs_file *file, if (ind_tbl) uobj_put_obj_read(ind_tbl); - return uobj_alloc_commit(&obj->uevent.uobject, 0); + return uobj_alloc_commit(&obj->uevent.uobject); err_cb: ib_destroy_qp(qp); @@ -1605,39 +1482,15 @@ err_put: return ret; } -static int ib_uverbs_create_qp_cb(struct ib_uverbs_file *file, - struct ib_uverbs_ex_create_qp_resp *resp, - struct ib_udata *ucore) -{ - if (ib_copy_to_udata(ucore, &resp->base, sizeof(resp->base))) - return -EFAULT; - - return 0; -} - -ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_create_qp(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_create_qp cmd; struct ib_uverbs_ex_create_qp cmd_ex; - struct ib_udata ucore; - struct ib_udata uhw; - ssize_t resp_size = sizeof(struct ib_uverbs_create_qp_resp); - int err; - - if (out_len < resp_size) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof(cmd))) - return -EFAULT; + int ret; - ib_uverbs_init_udata(&ucore, buf, u64_to_user_ptr(cmd.response), - sizeof(cmd), resp_size); - ib_uverbs_init_udata(&uhw, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + resp_size, - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - resp_size); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; memset(&cmd_ex, 0, sizeof(cmd_ex)); cmd_ex.user_handle = cmd.user_handle; @@ -1649,47 +1502,22 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file, cmd_ex.max_recv_wr = cmd.max_recv_wr; cmd_ex.max_send_sge = cmd.max_send_sge; cmd_ex.max_recv_sge = cmd.max_recv_sge; - cmd_ex.max_inline_data = cmd.max_inline_data; - cmd_ex.sq_sig_all = cmd.sq_sig_all; - cmd_ex.qp_type = cmd.qp_type; - cmd_ex.is_srq = cmd.is_srq; - - err = create_qp(file, &ucore, &uhw, &cmd_ex, - offsetof(typeof(cmd_ex), is_srq) + - sizeof(cmd.is_srq), ib_uverbs_create_qp_cb, - NULL); - - if (err) - return err; - - return in_len; -} - -static int ib_uverbs_ex_create_qp_cb(struct ib_uverbs_file *file, - struct ib_uverbs_ex_create_qp_resp *resp, - struct ib_udata *ucore) -{ - if (ib_copy_to_udata(ucore, resp, resp->response_length)) - return -EFAULT; + cmd_ex.max_inline_data = cmd.max_inline_data; + cmd_ex.sq_sig_all = cmd.sq_sig_all; + cmd_ex.qp_type = cmd.qp_type; + cmd_ex.is_srq = cmd.is_srq; - return 0; + return create_qp(attrs, &cmd_ex); } -int ib_uverbs_ex_create_qp(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_create_qp(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_create_qp_resp resp; - struct ib_uverbs_ex_create_qp cmd = {0}; - int err; - - if (ucore->inlen < (offsetof(typeof(cmd), comp_mask) + - sizeof(cmd.comp_mask))) - return -EINVAL; + struct ib_uverbs_ex_create_qp cmd; + int ret; - err = ib_copy_from_udata(&cmd, ucore, min(sizeof(cmd), ucore->inlen)); - if (err) - return err; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; if (cmd.comp_mask & ~IB_UVERBS_CREATE_QP_SUP_COMP_MASK) return -EINVAL; 
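/*
 * [Illustrative aside, not part of this patch:] every hunk in this file
 * repeats the same conversion. The old handlers open-coded
 * copy_from_user() on the command struct and built ib_udata pairs with
 * ib_uverbs_init_udata(); the new ones call uverbs_request(), which
 * copies and bounds-checks the fixed-size command in one place, while
 * the dispatcher pre-builds the driver's trailing udata in
 * attrs->driver_udata. A minimal sketch of the resulting handler shape;
 * the "foo" command and its structs are hypothetical:
 */
static int ib_uverbs_foo(struct uverbs_attr_bundle *attrs)
{
	struct ib_uverbs_foo cmd;		/* hypothetical request */
	struct ib_uverbs_foo_resp resp = {};	/* hypothetical response */
	int ret;

	/* Copy the fixed-size command; the zero-fill checks for extended
	 * commands now live behind this helper instead of being repeated
	 * in every handler. */
	ret = uverbs_request(attrs, &cmd, sizeof(cmd));
	if (ret)
		return ret;

	/* ... method body; drivers read their trailing bytes through
	 * &attrs->driver_udata rather than a hand-built ib_udata ... */

	/* Success is 0, not in_len: the dispatcher now owns the write()
	 * return value. */
	return uverbs_response(attrs, &resp, sizeof(resp));
}
/*
 * Handlers with variable-length trailers (post_send, post_recv,
 * create_flow, create_rwq_ind_table) use the uverbs_req_iter variants
 * seen later in this patch instead.
 */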
@@ -1697,26 +1525,13 @@ int ib_uverbs_ex_create_qp(struct ib_uverbs_file *file, if (cmd.reserved) return -EINVAL; - if (ucore->outlen < (offsetof(typeof(resp), response_length) + - sizeof(resp.response_length))) - return -ENOSPC; - - err = create_qp(file, ucore, uhw, &cmd, - min(ucore->inlen, sizeof(cmd)), - ib_uverbs_ex_create_qp_cb, NULL); - - if (err) - return err; - - return 0; + return create_qp(attrs, &cmd); } -ssize_t ib_uverbs_open_qp(struct ib_uverbs_file *file, - const char __user *buf, int in_len, int out_len) +static int ib_uverbs_open_qp(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_open_qp cmd; struct ib_uverbs_create_qp_resp resp; - struct ib_udata udata; struct ib_uqp_object *obj; struct ib_xrcd *xrcd; struct ib_uobject *uninitialized_var(xrcd_uobj); @@ -1725,23 +1540,16 @@ ssize_t ib_uverbs_open_qp(struct ib_uverbs_file *file, int ret; struct ib_device *ib_dev; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - obj = (struct ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, file, + obj = (struct ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, attrs, &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); - xrcd_uobj = uobj_get_read(UVERBS_OBJECT_XRCD, cmd.pd_handle, file); + xrcd_uobj = uobj_get_read(UVERBS_OBJECT_XRCD, cmd.pd_handle, attrs); if (IS_ERR(xrcd_uobj)) { ret = -EINVAL; goto err_put; @@ -1754,7 +1562,7 @@ ssize_t ib_uverbs_open_qp(struct ib_uverbs_file *file, } attr.event_handler = ib_uverbs_qp_event_handler; - attr.qp_context = file; + attr.qp_context = attrs->ufile; attr.qp_num = cmd.qpn; attr.qp_type = cmd.qp_type; @@ -1775,17 +1583,16 @@ ssize_t ib_uverbs_open_qp(struct ib_uverbs_file *file, resp.qpn = qp->qp_num; resp.qp_handle = obj->uevent.uobject.id; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_destroy; - } obj->uxrcd = container_of(xrcd_uobj, struct ib_uxrcd_object, uobject); atomic_inc(&obj->uxrcd->refcnt); qp->uobject = &obj->uevent.uobject; uobj_put_read(xrcd_uobj); - return uobj_alloc_commit(&obj->uevent.uobject, in_len); + return uobj_alloc_commit(&obj->uevent.uobject); err_destroy: ib_destroy_qp(qp); @@ -1818,9 +1625,7 @@ static void copy_ah_attr_to_uverbs(struct ib_uverbs_qp_dest *uverb_attr, uverb_attr->port_num = rdma_ah_get_port_num(rdma_attr); } -ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_query_qp(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_query_qp cmd; struct ib_uverbs_query_qp_resp resp; @@ -1829,8 +1634,9 @@ ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, struct ib_qp_init_attr *init_attr; int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; attr = kmalloc(sizeof *attr, GFP_KERNEL); init_attr = kmalloc(sizeof *init_attr, GFP_KERNEL); @@ -1839,7 +1645,7 @@ ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, goto out; } - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); if (!qp) { ret = -EINVAL; goto out; @@ -1886,14 +1692,13 @@ 
ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, resp.max_inline_data = init_attr->cap.max_inline_data; resp.sq_sig_all = init_attr->sq_sig_type == IB_SIGNAL_ALL_WR; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); out: kfree(attr); kfree(init_attr); - return ret ? ret : in_len; + return ret; } /* Remove ignored fields set in the attribute mask */ @@ -1933,8 +1738,8 @@ static void copy_ah_attr_from_uverbs(struct ib_device *dev, rdma_ah_set_make_grd(rdma_attr, false); } -static int modify_qp(struct ib_uverbs_file *file, - struct ib_uverbs_ex_modify_qp *cmd, struct ib_udata *udata) +static int modify_qp(struct uverbs_attr_bundle *attrs, + struct ib_uverbs_ex_modify_qp *cmd) { struct ib_qp_attr *attr; struct ib_qp *qp; @@ -1944,7 +1749,8 @@ static int modify_qp(struct ib_uverbs_file *file, if (!attr) return -ENOMEM; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd->base.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd->base.qp_handle, + attrs); if (!qp) { ret = -EINVAL; goto out; @@ -2081,7 +1887,7 @@ static int modify_qp(struct ib_uverbs_file *file, ret = ib_modify_qp_with_udata(qp, attr, modify_qp_mask(qp->qp_type, cmd->base.attr_mask), - udata); + &attrs->driver_udata); release_qp: uobj_put_obj_read(qp); @@ -2091,80 +1897,64 @@ out: return ret; } -ssize_t ib_uverbs_modify_qp(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_modify_qp(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_modify_qp cmd = {}; - struct ib_udata udata; + struct ib_uverbs_ex_modify_qp cmd; int ret; - if (copy_from_user(&cmd.base, buf, sizeof(cmd.base))) - return -EFAULT; + ret = uverbs_request(attrs, &cmd.base, sizeof(cmd.base)); + if (ret) + return ret; if (cmd.base.attr_mask & ~((IB_USER_LEGACY_LAST_QP_ATTR_MASK << 1) - 1)) return -EOPNOTSUPP; - ib_uverbs_init_udata(&udata, buf + sizeof(cmd.base), NULL, - in_len - sizeof(cmd.base) - sizeof(struct ib_uverbs_cmd_hdr), - out_len); - - ret = modify_qp(file, &cmd, &udata); - if (ret) - return ret; - - return in_len; + return modify_qp(attrs, &cmd); } -int ib_uverbs_ex_modify_qp(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_modify_qp(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_modify_qp cmd = {}; + struct ib_uverbs_ex_modify_qp cmd; + struct ib_uverbs_ex_modify_qp_resp resp = { + .response_length = uverbs_response_length(attrs, sizeof(resp)) + }; int ret; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; + /* * Last bit is reserved for extending the attr_mask by * using another field. 
*/ BUILD_BUG_ON(IB_USER_LAST_QP_ATTR_MASK == (1 << 31)); - if (ucore->inlen < sizeof(cmd.base)) - return -EINVAL; - - ret = ib_copy_from_udata(&cmd, ucore, min(sizeof(cmd), ucore->inlen)); - if (ret) - return ret; - if (cmd.base.attr_mask & ~((IB_USER_LAST_QP_ATTR_MASK << 1) - 1)) return -EOPNOTSUPP; - if (ucore->inlen > sizeof(cmd)) { - if (!ib_is_udata_cleared(ucore, sizeof(cmd), - ucore->inlen - sizeof(cmd))) - return -EOPNOTSUPP; - } - - ret = modify_qp(file, &cmd, uhw); + ret = modify_qp(attrs, &cmd); + if (ret) + return ret; - return ret; + return uverbs_response(attrs, &resp, sizeof(resp)); } -ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_destroy_qp(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_destroy_qp cmd; struct ib_uverbs_destroy_qp_resp resp; struct ib_uobject *uobj; struct ib_uqp_object *obj; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - uobj = uobj_get_destroy(UVERBS_OBJECT_QP, cmd.qp_handle, file); + uobj = uobj_get_destroy(UVERBS_OBJECT_QP, cmd.qp_handle, attrs); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -2174,10 +1964,7 @@ ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file, uobj_put_destroy(uobj); - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - return -EFAULT; - - return in_len; + return uverbs_response(attrs, &resp, sizeof(resp)); } static void *alloc_wr(size_t wr_size, __u32 num_sge) @@ -2190,9 +1977,7 @@ static void *alloc_wr(size_t wr_size, __u32 num_sge) num_sge * sizeof (struct ib_sge), GFP_KERNEL); } -ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_post_send cmd; struct ib_uverbs_post_send_resp resp; @@ -2202,24 +1987,31 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, struct ib_qp *qp; int i, sg_ind; int is_ud; - ssize_t ret = -EINVAL; + int ret, ret2; size_t next_size; + const struct ib_sge __user *sgls; + const void __user *wqes; + struct uverbs_req_iter iter; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - if (in_len < sizeof cmd + cmd.wqe_size * cmd.wr_count + - cmd.sge_count * sizeof (struct ib_uverbs_sge)) - return -EINVAL; - - if (cmd.wqe_size < sizeof (struct ib_uverbs_send_wr)) - return -EINVAL; + ret = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd)); + if (ret) + return ret; + wqes = uverbs_request_next_ptr(&iter, cmd.wqe_size * cmd.wr_count); + if (IS_ERR(wqes)) + return PTR_ERR(wqes); + sgls = uverbs_request_next_ptr( + &iter, cmd.sge_count * sizeof(struct ib_uverbs_sge)); + if (IS_ERR(sgls)) + return PTR_ERR(sgls); + ret = uverbs_request_finish(&iter); + if (ret) + return ret; user_wr = kmalloc(cmd.wqe_size, GFP_KERNEL); if (!user_wr) return -ENOMEM; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); if (!qp) goto out; @@ -2227,8 +2019,7 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, sg_ind = 0; last = NULL; for (i = 0; i < cmd.wr_count; ++i) { - if (copy_from_user(user_wr, - buf + sizeof cmd + i * cmd.wqe_size, + if (copy_from_user(user_wr, wqes + i * cmd.wqe_size, cmd.wqe_size)) { ret = -EFAULT; goto out_put; @@ -2256,7 +2047,7 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, } ud->ah = uobj_get_obj_read(ah, UVERBS_OBJECT_AH, - 
user_wr->wr.ud.ah, file); + user_wr->wr.ud.ah, attrs); if (!ud->ah) { kfree(ud); ret = -EINVAL; @@ -2336,11 +2127,9 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, if (next->num_sge) { next->sg_list = (void *) next + ALIGN(next_size, sizeof(struct ib_sge)); - if (copy_from_user(next->sg_list, - buf + sizeof cmd + - cmd.wr_count * cmd.wqe_size + - sg_ind * sizeof (struct ib_sge), - next->num_sge * sizeof (struct ib_sge))) { + if (copy_from_user(next->sg_list, sgls + sg_ind, + next->num_sge * + sizeof(struct ib_sge))) { ret = -EFAULT; goto out_put; } @@ -2350,7 +2139,7 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, } resp.bad_wr = 0; - ret = qp->device->post_send(qp->real_qp, wr, &bad_wr); + ret = qp->device->ops.post_send(qp->real_qp, wr, &bad_wr); if (ret) for (next = wr; next; next = next->next) { ++resp.bad_wr; @@ -2358,8 +2147,9 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, break; } - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - ret = -EFAULT; + ret2 = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret2) + ret = ret2; out_put: uobj_put_obj_read(qp); @@ -2375,28 +2165,35 @@ out_put: out: kfree(user_wr); - return ret ? ret : in_len; + return ret; } -static struct ib_recv_wr *ib_uverbs_unmarshall_recv(const char __user *buf, - int in_len, - u32 wr_count, - u32 sge_count, - u32 wqe_size) +static struct ib_recv_wr * +ib_uverbs_unmarshall_recv(struct uverbs_req_iter *iter, u32 wr_count, + u32 wqe_size, u32 sge_count) { struct ib_uverbs_recv_wr *user_wr; struct ib_recv_wr *wr = NULL, *last, *next; int sg_ind; int i; int ret; - - if (in_len < wqe_size * wr_count + - sge_count * sizeof (struct ib_uverbs_sge)) - return ERR_PTR(-EINVAL); + const struct ib_sge __user *sgls; + const void __user *wqes; if (wqe_size < sizeof (struct ib_uverbs_recv_wr)) return ERR_PTR(-EINVAL); + wqes = uverbs_request_next_ptr(iter, wqe_size * wr_count); + if (IS_ERR(wqes)) + return ERR_CAST(wqes); + sgls = uverbs_request_next_ptr( + iter, sge_count * sizeof(struct ib_uverbs_sge)); + if (IS_ERR(sgls)) + return ERR_CAST(sgls); + ret = uverbs_request_finish(iter); + if (ret) + return ERR_PTR(ret); + user_wr = kmalloc(wqe_size, GFP_KERNEL); if (!user_wr) return ERR_PTR(-ENOMEM); @@ -2404,7 +2201,7 @@ static struct ib_recv_wr *ib_uverbs_unmarshall_recv(const char __user *buf, sg_ind = 0; last = NULL; for (i = 0; i < wr_count; ++i) { - if (copy_from_user(user_wr, buf + i * wqe_size, + if (copy_from_user(user_wr, wqes + i * wqe_size, wqe_size)) { ret = -EFAULT; goto err; @@ -2443,10 +2240,9 @@ static struct ib_recv_wr *ib_uverbs_unmarshall_recv(const char __user *buf, if (next->num_sge) { next->sg_list = (void *) next + ALIGN(sizeof *next, sizeof (struct ib_sge)); - if (copy_from_user(next->sg_list, - buf + wr_count * wqe_size + - sg_ind * sizeof (struct ib_sge), - next->num_sge * sizeof (struct ib_sge))) { + if (copy_from_user(next->sg_list, sgls + sg_ind, + next->num_sge * + sizeof(struct ib_sge))) { ret = -EFAULT; goto err; } @@ -2470,32 +2266,33 @@ err: return ERR_PTR(ret); } -ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_post_recv(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_post_recv cmd; struct ib_uverbs_post_recv_resp resp; struct ib_recv_wr *wr, *next; const struct ib_recv_wr *bad_wr; struct ib_qp *qp; - ssize_t ret = -EINVAL; + int ret, ret2; + struct uverbs_req_iter iter; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = 
uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd)); + if (ret) + return ret; - wr = ib_uverbs_unmarshall_recv(buf + sizeof cmd, - in_len - sizeof cmd, cmd.wr_count, - cmd.sge_count, cmd.wqe_size); + wr = ib_uverbs_unmarshall_recv(&iter, cmd.wr_count, cmd.wqe_size, + cmd.sge_count); if (IS_ERR(wr)) return PTR_ERR(wr); - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); - if (!qp) + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); + if (!qp) { + ret = -EINVAL; goto out; + } resp.bad_wr = 0; - ret = qp->device->post_recv(qp->real_qp, wr, &bad_wr); + ret = qp->device->ops.post_recv(qp->real_qp, wr, &bad_wr); uobj_put_obj_read(qp); if (ret) { @@ -2506,9 +2303,9 @@ ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, } } - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - ret = -EFAULT; - + ret2 = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret2) + ret = ret2; out: while (wr) { next = wr->next; @@ -2516,36 +2313,36 @@ out: wr = next; } - return ret ? ret : in_len; + return ret; } -ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_post_srq_recv(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_post_srq_recv cmd; struct ib_uverbs_post_srq_recv_resp resp; struct ib_recv_wr *wr, *next; const struct ib_recv_wr *bad_wr; struct ib_srq *srq; - ssize_t ret = -EINVAL; + int ret, ret2; + struct uverbs_req_iter iter; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd)); + if (ret) + return ret; - wr = ib_uverbs_unmarshall_recv(buf + sizeof cmd, - in_len - sizeof cmd, cmd.wr_count, - cmd.sge_count, cmd.wqe_size); + wr = ib_uverbs_unmarshall_recv(&iter, cmd.wr_count, cmd.wqe_size, + cmd.sge_count); if (IS_ERR(wr)) return PTR_ERR(wr); - srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file); - if (!srq) + srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs); + if (!srq) { + ret = -EINVAL; goto out; + } resp.bad_wr = 0; - ret = srq->device->post_srq_recv ? - srq->device->post_srq_recv(srq, wr, &bad_wr) : -EOPNOTSUPP; + ret = srq->device->ops.post_srq_recv(srq, wr, &bad_wr); uobj_put_obj_read(srq); @@ -2556,8 +2353,9 @@ ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, break; } - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - ret = -EFAULT; + ret2 = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret2) + ret = ret2; out: while (wr) { @@ -2566,12 +2364,10 @@ out: wr = next; } - return ret ? 
ret : in_len; + return ret; } -ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_create_ah(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_create_ah cmd; struct ib_uverbs_create_ah_resp resp; @@ -2580,21 +2376,13 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, struct ib_ah *ah; struct rdma_ah_attr attr = {}; int ret; - struct ib_udata udata; struct ib_device *ib_dev; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - uobj = uobj_alloc(UVERBS_OBJECT_AH, file, &ib_dev); + uobj = uobj_alloc(UVERBS_OBJECT_AH, attrs, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -2603,7 +2391,7 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, goto err; } - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); if (!pd) { ret = -EINVAL; goto err; @@ -2627,7 +2415,7 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, rdma_ah_set_ah_flags(&attr, 0); } - ah = rdma_create_user_ah(pd, &attr, &udata); + ah = rdma_create_user_ah(pd, &attr, &attrs->driver_udata); if (IS_ERR(ah)) { ret = PTR_ERR(ah); goto err_put; @@ -2639,16 +2427,15 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, resp.ah_handle = uobj->id; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_copy; - } uobj_put_obj_read(pd); - return uobj_alloc_commit(uobj, in_len); + return uobj_alloc_commit(uobj); err_copy: - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, RDMA_DESTROY_AH_SLEEPABLE); err_put: uobj_put_obj_read(pd); @@ -2658,21 +2445,19 @@ err: return ret; } -ssize_t ib_uverbs_destroy_ah(struct ib_uverbs_file *file, - const char __user *buf, int in_len, int out_len) +static int ib_uverbs_destroy_ah(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_destroy_ah cmd; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - return uobj_perform_destroy(UVERBS_OBJECT_AH, cmd.ah_handle, file, - in_len); + return uobj_perform_destroy(UVERBS_OBJECT_AH, cmd.ah_handle, attrs); } -ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_attach_mcast(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_attach_mcast cmd; struct ib_qp *qp; @@ -2680,10 +2465,11 @@ ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file, struct ib_uverbs_mcast_entry *mcast; int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); if (!qp) return -EINVAL; @@ -2716,12 +2502,10 @@ out_put: mutex_unlock(&obj->mcast_lock); uobj_put_obj_read(qp); - return ret ? 
ret : in_len; + return ret; } -ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_detach_mcast(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_detach_mcast cmd; struct ib_uqp_object *obj; @@ -2730,10 +2514,11 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file, int ret = -EINVAL; bool found = false; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); if (!qp) return -EINVAL; @@ -2759,7 +2544,7 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file, out_put: mutex_unlock(&obj->mcast_lock); uobj_put_obj_read(qp); - return ret ? ret : in_len; + return ret; } struct ib_uflow_resources *flow_resources_alloc(size_t num_specs) @@ -2838,7 +2623,7 @@ void flow_resources_add(struct ib_uflow_resources *uflow_res, } EXPORT_SYMBOL(flow_resources_add); -static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, +static int kern_spec_to_ib_spec_action(const struct uverbs_attr_bundle *attrs, struct ib_uverbs_flow_spec *kern_spec, union ib_flow_spec *ib_spec, struct ib_uflow_resources *uflow_res) @@ -2867,7 +2652,7 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, ib_spec->action.act = uobj_get_obj_read(flow_action, UVERBS_OBJECT_FLOW_ACTION, kern_spec->action.handle, - ufile); + attrs); if (!ib_spec->action.act) return -EINVAL; ib_spec->action.size = @@ -2885,7 +2670,7 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, uobj_get_obj_read(counters, UVERBS_OBJECT_COUNTERS, kern_spec->flow_count.handle, - ufile); + attrs); if (!ib_spec->flow_count.counters) return -EINVAL; ib_spec->flow_count.size = @@ -3066,7 +2851,7 @@ static int kern_spec_to_ib_spec_filter(struct ib_uverbs_flow_spec *kern_spec, kern_filter_sz, ib_spec); } -static int kern_spec_to_ib_spec(struct ib_uverbs_file *ufile, +static int kern_spec_to_ib_spec(struct uverbs_attr_bundle *attrs, struct ib_uverbs_flow_spec *kern_spec, union ib_flow_spec *ib_spec, struct ib_uflow_resources *uflow_res) @@ -3075,17 +2860,15 @@ static int kern_spec_to_ib_spec(struct ib_uverbs_file *ufile, return -EINVAL; if (kern_spec->type >= IB_FLOW_SPEC_ACTION_TAG) - return kern_spec_to_ib_spec_action(ufile, kern_spec, ib_spec, + return kern_spec_to_ib_spec_action(attrs, kern_spec, ib_spec, uflow_res); else return kern_spec_to_ib_spec_filter(kern_spec, ib_spec); } -int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_create_wq cmd = {}; + struct ib_uverbs_ex_create_wq cmd; struct ib_uverbs_ex_create_wq_resp resp = {}; struct ib_uwq_object *obj; int err = 0; @@ -3093,43 +2876,27 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, struct ib_pd *pd; struct ib_wq *wq; struct ib_wq_init_attr wq_init_attr = {}; - size_t required_cmd_sz; - size_t required_resp_len; struct ib_device *ib_dev; - required_cmd_sz = offsetof(typeof(cmd), max_sge) + sizeof(cmd.max_sge); - required_resp_len = offsetof(typeof(resp), wqn) + sizeof(resp.wqn); - - if (ucore->inlen < required_cmd_sz) - return -EINVAL; - - if (ucore->outlen < required_resp_len) - return -ENOSPC; - - if (ucore->inlen > sizeof(cmd) && - !ib_is_udata_cleared(ucore, sizeof(cmd), - ucore->inlen - 
sizeof(cmd))) - return -EOPNOTSUPP; - - err = ib_copy_from_udata(&cmd, ucore, min(sizeof(cmd), ucore->inlen)); + err = uverbs_request(attrs, &cmd, sizeof(cmd)); if (err) return err; if (cmd.comp_mask) return -EOPNOTSUPP; - obj = (struct ib_uwq_object *)uobj_alloc(UVERBS_OBJECT_WQ, file, + obj = (struct ib_uwq_object *)uobj_alloc(UVERBS_OBJECT_WQ, attrs, &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs); if (!pd) { err = -EINVAL; goto err_uobj; } - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); if (!cq) { err = -EINVAL; goto err_put_pd; @@ -3138,20 +2905,14 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, wq_init_attr.cq = cq; wq_init_attr.max_sge = cmd.max_sge; wq_init_attr.max_wr = cmd.max_wr; - wq_init_attr.wq_context = file; + wq_init_attr.wq_context = attrs->ufile; wq_init_attr.wq_type = cmd.wq_type; wq_init_attr.event_handler = ib_uverbs_wq_event_handler; - if (ucore->inlen >= (offsetof(typeof(cmd), create_flags) + - sizeof(cmd.create_flags))) - wq_init_attr.create_flags = cmd.create_flags; + wq_init_attr.create_flags = cmd.create_flags; obj->uevent.events_reported = 0; INIT_LIST_HEAD(&obj->uevent.event_list); - if (!pd->device->create_wq) { - err = -EOPNOTSUPP; - goto err_put_cq; - } - wq = pd->device->create_wq(pd, &wq_init_attr, uhw); + wq = pd->device->ops.create_wq(pd, &wq_init_attr, &attrs->driver_udata); if (IS_ERR(wq)) { err = PTR_ERR(wq); goto err_put_cq; @@ -3175,15 +2936,14 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, resp.max_sge = wq_init_attr.max_sge; resp.max_wr = wq_init_attr.max_wr; resp.wqn = wq->wq_num; - resp.response_length = required_resp_len; - err = ib_copy_to_udata(ucore, - &resp, resp.response_length); + resp.response_length = uverbs_response_length(attrs, sizeof(resp)); + err = uverbs_response(attrs, &resp, sizeof(resp)); if (err) goto err_copy; uobj_put_obj_read(pd); uobj_put_obj_read(cq); - return uobj_alloc_commit(&obj->uevent.uobject, 0); + return uobj_alloc_commit(&obj->uevent.uobject); err_copy: ib_destroy_wq(wq); @@ -3197,41 +2957,23 @@ err_uobj: return err; } -int ib_uverbs_ex_destroy_wq(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_destroy_wq(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_destroy_wq cmd = {}; + struct ib_uverbs_ex_destroy_wq cmd; struct ib_uverbs_ex_destroy_wq_resp resp = {}; struct ib_uobject *uobj; struct ib_uwq_object *obj; - size_t required_cmd_sz; - size_t required_resp_len; int ret; - required_cmd_sz = offsetof(typeof(cmd), wq_handle) + sizeof(cmd.wq_handle); - required_resp_len = offsetof(typeof(resp), reserved) + sizeof(resp.reserved); - - if (ucore->inlen < required_cmd_sz) - return -EINVAL; - - if (ucore->outlen < required_resp_len) - return -ENOSPC; - - if (ucore->inlen > sizeof(cmd) && - !ib_is_udata_cleared(ucore, sizeof(cmd), - ucore->inlen - sizeof(cmd))) - return -EOPNOTSUPP; - - ret = ib_copy_from_udata(&cmd, ucore, min(sizeof(cmd), ucore->inlen)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); if (ret) return ret; if (cmd.comp_mask) return -EOPNOTSUPP; - resp.response_length = required_resp_len; - uobj = uobj_get_destroy(UVERBS_OBJECT_WQ, cmd.wq_handle, file); + resp.response_length = uverbs_response_length(attrs, sizeof(resp)); + uobj = uobj_get_destroy(UVERBS_OBJECT_WQ, cmd.wq_handle, attrs); if 
(IS_ERR(uobj)) return PTR_ERR(uobj); @@ -3240,29 +2982,17 @@ int ib_uverbs_ex_destroy_wq(struct ib_uverbs_file *file, uobj_put_destroy(uobj); - return ib_copy_to_udata(ucore, &resp, resp.response_length); + return uverbs_response(attrs, &resp, sizeof(resp)); } -int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_modify_wq cmd = {}; + struct ib_uverbs_ex_modify_wq cmd; struct ib_wq *wq; struct ib_wq_attr wq_attr = {}; - size_t required_cmd_sz; int ret; - required_cmd_sz = offsetof(typeof(cmd), curr_wq_state) + sizeof(cmd.curr_wq_state); - if (ucore->inlen < required_cmd_sz) - return -EINVAL; - - if (ucore->inlen > sizeof(cmd) && - !ib_is_udata_cleared(ucore, sizeof(cmd), - ucore->inlen - sizeof(cmd))) - return -EOPNOTSUPP; - - ret = ib_copy_from_udata(&cmd, ucore, min(sizeof(cmd), ucore->inlen)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); if (ret) return ret; @@ -3272,7 +3002,7 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, if (cmd.attr_mask > (IB_WQ_STATE | IB_WQ_CUR_STATE | IB_WQ_FLAGS)) return -EINVAL; - wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, cmd.wq_handle, file); + wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, cmd.wq_handle, attrs); if (!wq) return -EINVAL; @@ -3282,24 +3012,18 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, wq_attr.flags = cmd.flags; wq_attr.flags_mask = cmd.flags_mask; } - if (!wq->device->modify_wq) { - ret = -EOPNOTSUPP; - goto out; - } - ret = wq->device->modify_wq(wq, &wq_attr, cmd.attr_mask, uhw); -out: + ret = wq->device->ops.modify_wq(wq, &wq_attr, cmd.attr_mask, + &attrs->driver_udata); uobj_put_obj_read(wq); return ret; } -int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_create_rwq_ind_table(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_create_rwq_ind_table cmd = {}; + struct ib_uverbs_ex_create_rwq_ind_table cmd; struct ib_uverbs_ex_create_rwq_ind_table_resp resp = {}; struct ib_uobject *uobj; - int err = 0; + int err; struct ib_rwq_ind_table_init_attr init_attr = {}; struct ib_rwq_ind_table *rwq_ind_tbl; struct ib_wq **wqs = NULL; @@ -3307,27 +3031,13 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, struct ib_wq *wq = NULL; int i, j, num_read_wqs; u32 num_wq_handles; - u32 expected_in_size; - size_t required_cmd_sz_header; - size_t required_resp_len; + struct uverbs_req_iter iter; struct ib_device *ib_dev; - required_cmd_sz_header = offsetof(typeof(cmd), log_ind_tbl_size) + sizeof(cmd.log_ind_tbl_size); - required_resp_len = offsetof(typeof(resp), ind_tbl_num) + sizeof(resp.ind_tbl_num); - - if (ucore->inlen < required_cmd_sz_header) - return -EINVAL; - - if (ucore->outlen < required_resp_len) - return -ENOSPC; - - err = ib_copy_from_udata(&cmd, ucore, required_cmd_sz_header); + err = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd)); if (err) return err; - ucore->inbuf += required_cmd_sz_header; - ucore->inlen -= required_cmd_sz_header; - if (cmd.comp_mask) return -EOPNOTSUPP; @@ -3335,26 +3045,17 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, return -EINVAL; num_wq_handles = 1 << cmd.log_ind_tbl_size; - expected_in_size = num_wq_handles * sizeof(__u32); - if (num_wq_handles == 1) - /* input size for wq handles is u64 aligned */ - expected_in_size += sizeof(__u32); - - if (ucore->inlen < expected_in_size) - return -EINVAL; - - 
if (ucore->inlen > expected_in_size && - !ib_is_udata_cleared(ucore, expected_in_size, - ucore->inlen - expected_in_size)) - return -EOPNOTSUPP; - wqs_handles = kcalloc(num_wq_handles, sizeof(*wqs_handles), GFP_KERNEL); if (!wqs_handles) return -ENOMEM; - err = ib_copy_from_udata(wqs_handles, ucore, - num_wq_handles * sizeof(__u32)); + err = uverbs_request_next(&iter, wqs_handles, + num_wq_handles * sizeof(__u32)); + if (err) + goto err_free; + + err = uverbs_request_finish(&iter); if (err) goto err_free; @@ -3367,7 +3068,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, for (num_read_wqs = 0; num_read_wqs < num_wq_handles; num_read_wqs++) { wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, - wqs_handles[num_read_wqs], file); + wqs_handles[num_read_wqs], attrs); if (!wq) { err = -EINVAL; goto put_wqs; @@ -3376,7 +3077,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, wqs[num_read_wqs] = wq; } - uobj = uobj_alloc(UVERBS_OBJECT_RWQ_IND_TBL, file, &ib_dev); + uobj = uobj_alloc(UVERBS_OBJECT_RWQ_IND_TBL, attrs, &ib_dev); if (IS_ERR(uobj)) { err = PTR_ERR(uobj); goto put_wqs; @@ -3385,11 +3086,8 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, init_attr.log_ind_tbl_size = cmd.log_ind_tbl_size; init_attr.ind_tbl = wqs; - if (!ib_dev->create_rwq_ind_table) { - err = -EOPNOTSUPP; - goto err_uobj; - } - rwq_ind_tbl = ib_dev->create_rwq_ind_table(ib_dev, &init_attr, uhw); + rwq_ind_tbl = ib_dev->ops.create_rwq_ind_table(ib_dev, &init_attr, + &attrs->driver_udata); if (IS_ERR(rwq_ind_tbl)) { err = PTR_ERR(rwq_ind_tbl); @@ -3408,10 +3106,9 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, resp.ind_tbl_handle = uobj->id; resp.ind_tbl_num = rwq_ind_tbl->ind_tbl_num; - resp.response_length = required_resp_len; + resp.response_length = uverbs_response_length(attrs, sizeof(resp)); - err = ib_copy_to_udata(ucore, - &resp, resp.response_length); + err = uverbs_response(attrs, &resp, sizeof(resp)); if (err) goto err_copy; @@ -3420,7 +3117,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, for (j = 0; j < num_read_wqs; j++) uobj_put_obj_read(wqs[j]); - return uobj_alloc_commit(uobj, 0); + return uobj_alloc_commit(uobj); err_copy: ib_destroy_rwq_ind_table(rwq_ind_tbl); @@ -3435,25 +3132,12 @@ err_free: return err; } -int ib_uverbs_ex_destroy_rwq_ind_table(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_destroy_rwq_ind_table(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_destroy_rwq_ind_table cmd = {}; - int ret; - size_t required_cmd_sz; - - required_cmd_sz = offsetof(typeof(cmd), ind_tbl_handle) + sizeof(cmd.ind_tbl_handle); - - if (ucore->inlen < required_cmd_sz) - return -EINVAL; - - if (ucore->inlen > sizeof(cmd) && - !ib_is_udata_cleared(ucore, sizeof(cmd), - ucore->inlen - sizeof(cmd))) - return -EOPNOTSUPP; + struct ib_uverbs_ex_destroy_rwq_ind_table cmd; + int ret; - ret = ib_copy_from_udata(&cmd, ucore, min(sizeof(cmd), ucore->inlen)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); if (ret) return ret; @@ -3461,12 +3145,10 @@ int ib_uverbs_ex_destroy_rwq_ind_table(struct ib_uverbs_file *file, return -EOPNOTSUPP; return uobj_perform_destroy(UVERBS_OBJECT_RWQ_IND_TBL, - cmd.ind_tbl_handle, file, 0); + cmd.ind_tbl_handle, attrs); } -int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_create_flow(struct uverbs_attr_bundle *attrs) { struct 
ib_uverbs_create_flow cmd; struct ib_uverbs_create_flow_resp resp; @@ -3477,24 +3159,16 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, struct ib_qp *qp; struct ib_uflow_resources *uflow_res; struct ib_uverbs_flow_spec_hdr *kern_spec; - int err = 0; + struct uverbs_req_iter iter; + int err; void *ib_spec; int i; struct ib_device *ib_dev; - if (ucore->inlen < sizeof(cmd)) - return -EINVAL; - - if (ucore->outlen < sizeof(resp)) - return -ENOSPC; - - err = ib_copy_from_udata(&cmd, ucore, sizeof(cmd)); + err = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd)); if (err) return err; - ucore->inbuf += sizeof(cmd); - ucore->inlen -= sizeof(cmd); - if (cmd.comp_mask) return -EINVAL; @@ -3512,8 +3186,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, if (cmd.flow_attr.num_of_specs > IB_FLOW_SPEC_SUPPORT_LAYERS) return -EINVAL; - if (cmd.flow_attr.size > ucore->inlen || - cmd.flow_attr.size > + if (cmd.flow_attr.size > (cmd.flow_attr.num_of_specs * sizeof(struct ib_uverbs_flow_spec))) return -EINVAL; @@ -3528,21 +3201,25 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, return -ENOMEM; *kern_flow_attr = cmd.flow_attr; - err = ib_copy_from_udata(&kern_flow_attr->flow_specs, ucore, - cmd.flow_attr.size); + err = uverbs_request_next(&iter, &kern_flow_attr->flow_specs, + cmd.flow_attr.size); if (err) goto err_free_attr; } else { kern_flow_attr = &cmd.flow_attr; } - uobj = uobj_alloc(UVERBS_OBJECT_FLOW, file, &ib_dev); + err = uverbs_request_finish(&iter); + if (err) + goto err_free_attr; + + uobj = uobj_alloc(UVERBS_OBJECT_FLOW, attrs, &ib_dev); if (IS_ERR(uobj)) { err = PTR_ERR(uobj); goto err_free_attr; } - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); if (!qp) { err = -EINVAL; goto err_uobj; @@ -3553,11 +3230,6 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, goto err_put; } - if (!qp->device->create_flow) { - err = -EOPNOTSUPP; - goto err_put; - } - flow_attr = kzalloc(struct_size(flow_attr, flows, cmd.flow_attr.num_of_specs), GFP_KERNEL); if (!flow_attr) { @@ -3584,7 +3256,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, cmd.flow_attr.size >= kern_spec->size; i++) { err = kern_spec_to_ib_spec( - file, (struct ib_uverbs_flow_spec *)kern_spec, + attrs, (struct ib_uverbs_flow_spec *)kern_spec, ib_spec, uflow_res); if (err) goto err_free; @@ -3602,8 +3274,8 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, goto err_free; } - flow_id = qp->device->create_flow(qp, flow_attr, - IB_FLOW_DOMAIN_USER, uhw); + flow_id = qp->device->ops.create_flow( + qp, flow_attr, IB_FLOW_DOMAIN_USER, &attrs->driver_udata); if (IS_ERR(flow_id)) { err = PTR_ERR(flow_id); @@ -3615,8 +3287,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, memset(&resp, 0, sizeof(resp)); resp.flow_handle = uobj->id; - err = ib_copy_to_udata(ucore, - &resp, sizeof(resp)); + err = uverbs_response(attrs, &resp, sizeof(resp)); if (err) goto err_copy; @@ -3624,9 +3295,9 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, kfree(flow_attr); if (cmd.flow_attr.num_of_specs) kfree(kern_flow_attr); - return uobj_alloc_commit(uobj, 0); + return uobj_alloc_commit(uobj); err_copy: - if (!qp->device->destroy_flow(flow_id)) + if (!qp->device->ops.destroy_flow(flow_id)) atomic_dec(&qp->usecnt); err_free: ib_uverbs_flow_resources_free(uflow_res); @@ -3642,28 +3313,22 @@ err_free_attr: return err; } -int ib_uverbs_ex_destroy_flow(struct ib_uverbs_file *file, - struct ib_udata 
*ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_destroy_flow(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_destroy_flow cmd; int ret; - if (ucore->inlen < sizeof(cmd)) - return -EINVAL; - - ret = ib_copy_from_udata(&cmd, ucore, sizeof(cmd)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); if (ret) return ret; if (cmd.comp_mask) return -EINVAL; - return uobj_perform_destroy(UVERBS_OBJECT_FLOW, cmd.flow_handle, file, - 0); + return uobj_perform_destroy(UVERBS_OBJECT_FLOW, cmd.flow_handle, attrs); } -static int __uverbs_create_xsrq(struct ib_uverbs_file *file, +static int __uverbs_create_xsrq(struct uverbs_attr_bundle *attrs, struct ib_uverbs_create_xsrq *cmd, struct ib_udata *udata) { @@ -3676,7 +3341,7 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, int ret; struct ib_device *ib_dev; - obj = (struct ib_usrq_object *)uobj_alloc(UVERBS_OBJECT_SRQ, file, + obj = (struct ib_usrq_object *)uobj_alloc(UVERBS_OBJECT_SRQ, attrs, &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); @@ -3686,7 +3351,7 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, if (cmd->srq_type == IB_SRQT_XRC) { xrcd_uobj = uobj_get_read(UVERBS_OBJECT_XRCD, cmd->xrcd_handle, - file); + attrs); if (IS_ERR(xrcd_uobj)) { ret = -EINVAL; goto err; @@ -3704,21 +3369,21 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, if (ib_srq_has_cq(cmd->srq_type)) { attr.ext.cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, - cmd->cq_handle, file); + cmd->cq_handle, attrs); if (!attr.ext.cq) { ret = -EINVAL; goto err_put_xrcd; } } - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, attrs); if (!pd) { ret = -EINVAL; goto err_put_cq; } attr.event_handler = ib_uverbs_srq_event_handler; - attr.srq_context = file; + attr.srq_context = attrs->ufile; attr.srq_type = cmd->srq_type; attr.attr.max_wr = cmd->max_wr; attr.attr.max_sge = cmd->max_sge; @@ -3727,7 +3392,7 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, obj->uevent.events_reported = 0; INIT_LIST_HEAD(&obj->uevent.event_list); - srq = pd->device->create_srq(pd, &attr, udata); + srq = pd->device->ops.create_srq(pd, &attr, udata); if (IS_ERR(srq)) { ret = PTR_ERR(srq); goto err_put; @@ -3763,11 +3428,9 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, if (cmd->srq_type == IB_SRQT_XRC) resp.srqn = srq->ext.xrc.srq_num; - if (copy_to_user(u64_to_user_ptr(cmd->response), - &resp, sizeof resp)) { - ret = -EFAULT; + ret = uverbs_response(attrs, &resp, sizeof(resp)); + if (ret) goto err_copy; - } if (cmd->srq_type == IB_SRQT_XRC) uobj_put_read(xrcd_uobj); @@ -3776,7 +3439,7 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, uobj_put_obj_read(attr.ext.cq); uobj_put_obj_read(pd); - return uobj_alloc_commit(&obj->uevent.uobject, 0); + return uobj_alloc_commit(&obj->uevent.uobject); err_copy: ib_destroy_srq(srq); @@ -3799,21 +3462,15 @@ err: return ret; } -ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_create_srq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_create_srq cmd; struct ib_uverbs_create_xsrq xcmd; - struct ib_uverbs_create_srq_resp resp; - struct ib_udata udata; int ret; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; memset(&xcmd, 0, sizeof(xcmd)); xcmd.response = cmd.response; @@ -3824,77 
+3481,48 @@ ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file, xcmd.max_sge = cmd.max_sge; xcmd.srq_limit = cmd.srq_limit; - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); - - ret = __uverbs_create_xsrq(file, &xcmd, &udata); - if (ret) - return ret; - - return in_len; + return __uverbs_create_xsrq(attrs, &xcmd, &attrs->driver_udata); } -ssize_t ib_uverbs_create_xsrq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, int out_len) +static int ib_uverbs_create_xsrq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_create_xsrq cmd; - struct ib_uverbs_create_srq_resp resp; - struct ib_udata udata; int ret; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), - u64_to_user_ptr(cmd.response) + sizeof(resp), - in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), - out_len - sizeof(resp)); - - ret = __uverbs_create_xsrq(file, &cmd, &udata); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); if (ret) return ret; - return in_len; + return __uverbs_create_xsrq(attrs, &cmd, &attrs->driver_udata); } -ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_modify_srq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_modify_srq cmd; - struct ib_udata udata; struct ib_srq *srq; struct ib_srq_attr attr; int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; - - ib_uverbs_init_udata(&udata, buf + sizeof cmd, NULL, in_len - sizeof cmd, - out_len); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file); + srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs); if (!srq) return -EINVAL; attr.max_wr = cmd.max_wr; attr.srq_limit = cmd.srq_limit; - ret = srq->device->modify_srq(srq, &attr, cmd.attr_mask, &udata); + ret = srq->device->ops.modify_srq(srq, &attr, cmd.attr_mask, + &attrs->driver_udata); uobj_put_obj_read(srq); - return ret ? 
ret : in_len; + return ret; } -ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file, - const char __user *buf, - int in_len, int out_len) +static int ib_uverbs_query_srq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_query_srq cmd; struct ib_uverbs_query_srq_resp resp; @@ -3902,13 +3530,11 @@ ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file, struct ib_srq *srq; int ret; - if (out_len < sizeof resp) - return -ENOSPC; - - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file); + srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs); if (!srq) return -EINVAL; @@ -3925,25 +3551,22 @@ ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file, resp.max_sge = attr.max_sge; resp.srq_limit = attr.srq_limit; - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) - return -EFAULT; - - return in_len; + return uverbs_response(attrs, &resp, sizeof(resp)); } -ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) +static int ib_uverbs_destroy_srq(struct uverbs_attr_bundle *attrs) { struct ib_uverbs_destroy_srq cmd; struct ib_uverbs_destroy_srq_resp resp; struct ib_uobject *uobj; struct ib_uevent_object *obj; + int ret; - if (copy_from_user(&cmd, buf, sizeof cmd)) - return -EFAULT; + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); + if (ret) + return ret; - uobj = uobj_get_destroy(UVERBS_OBJECT_SRQ, cmd.srq_handle, file); + uobj = uobj_get_destroy(UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -3953,35 +3576,24 @@ ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file, uobj_put_destroy(uobj); - if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof(resp))) - return -EFAULT; - - return in_len; + return uverbs_response(attrs, &resp, sizeof(resp)); } -int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_query_device(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_query_device_resp resp = { {0} }; + struct ib_uverbs_ex_query_device_resp resp = {}; struct ib_uverbs_ex_query_device cmd; struct ib_device_attr attr = {0}; struct ib_ucontext *ucontext; struct ib_device *ib_dev; int err; - ucontext = ib_uverbs_get_ucontext(file); + ucontext = ib_uverbs_get_ucontext(attrs); if (IS_ERR(ucontext)) return PTR_ERR(ucontext); ib_dev = ucontext->device; - if (!ib_dev->query_device) - return -EOPNOTSUPP; - - if (ucore->inlen < sizeof(cmd)) - return -EINVAL; - - err = ib_copy_from_udata(&cmd, ucore, sizeof(cmd)); + err = uverbs_request(attrs, &cmd, sizeof(cmd)); if (err) return err; @@ -3991,20 +3603,12 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, if (cmd.reserved) return -EINVAL; - resp.response_length = offsetof(typeof(resp), odp_caps); - - if (ucore->outlen < resp.response_length) - return -ENOSPC; - - err = ib_dev->query_device(ib_dev, &attr, uhw); + err = ib_dev->ops.query_device(ib_dev, &attr, &attrs->driver_udata); if (err) return err; copy_query_dev_fields(ucontext, &resp.base, &attr); - if (ucore->outlen < resp.response_length + sizeof(resp.odp_caps)) - goto end; - #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING resp.odp_caps.general_caps = attr.odp_caps.general_caps; resp.odp_caps.per_transport_caps.rc_odp_caps = @@ -4014,99 +3618,39 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, 
resp.odp_caps.per_transport_caps.ud_odp_caps = attr.odp_caps.per_transport_caps.ud_odp_caps; #endif - resp.response_length += sizeof(resp.odp_caps); - - if (ucore->outlen < resp.response_length + sizeof(resp.timestamp_mask)) - goto end; resp.timestamp_mask = attr.timestamp_mask; - resp.response_length += sizeof(resp.timestamp_mask); - - if (ucore->outlen < resp.response_length + sizeof(resp.hca_core_clock)) - goto end; - resp.hca_core_clock = attr.hca_core_clock; - resp.response_length += sizeof(resp.hca_core_clock); - - if (ucore->outlen < resp.response_length + sizeof(resp.device_cap_flags_ex)) - goto end; - resp.device_cap_flags_ex = attr.device_cap_flags; - resp.response_length += sizeof(resp.device_cap_flags_ex); - - if (ucore->outlen < resp.response_length + sizeof(resp.rss_caps)) - goto end; - resp.rss_caps.supported_qpts = attr.rss_caps.supported_qpts; resp.rss_caps.max_rwq_indirection_tables = attr.rss_caps.max_rwq_indirection_tables; resp.rss_caps.max_rwq_indirection_table_size = attr.rss_caps.max_rwq_indirection_table_size; - - resp.response_length += sizeof(resp.rss_caps); - - if (ucore->outlen < resp.response_length + sizeof(resp.max_wq_type_rq)) - goto end; - resp.max_wq_type_rq = attr.max_wq_type_rq; - resp.response_length += sizeof(resp.max_wq_type_rq); - - if (ucore->outlen < resp.response_length + sizeof(resp.raw_packet_caps)) - goto end; - resp.raw_packet_caps = attr.raw_packet_caps; - resp.response_length += sizeof(resp.raw_packet_caps); - - if (ucore->outlen < resp.response_length + sizeof(resp.tm_caps)) - goto end; - resp.tm_caps.max_rndv_hdr_size = attr.tm_caps.max_rndv_hdr_size; resp.tm_caps.max_num_tags = attr.tm_caps.max_num_tags; resp.tm_caps.max_ops = attr.tm_caps.max_ops; resp.tm_caps.max_sge = attr.tm_caps.max_sge; resp.tm_caps.flags = attr.tm_caps.flags; - resp.response_length += sizeof(resp.tm_caps); - - if (ucore->outlen < resp.response_length + sizeof(resp.cq_moderation_caps)) - goto end; - resp.cq_moderation_caps.max_cq_moderation_count = attr.cq_caps.max_cq_moderation_count; resp.cq_moderation_caps.max_cq_moderation_period = attr.cq_caps.max_cq_moderation_period; - resp.response_length += sizeof(resp.cq_moderation_caps); - - if (ucore->outlen < resp.response_length + sizeof(resp.max_dm_size)) - goto end; - resp.max_dm_size = attr.max_dm_size; - resp.response_length += sizeof(resp.max_dm_size); -end: - err = ib_copy_to_udata(ucore, &resp, resp.response_length); - return err; + resp.response_length = uverbs_response_length(attrs, sizeof(resp)); + + return uverbs_response(attrs, &resp, sizeof(resp)); } -int ib_uverbs_ex_modify_cq(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) +static int ib_uverbs_ex_modify_cq(struct uverbs_attr_bundle *attrs) { - struct ib_uverbs_ex_modify_cq cmd = {}; + struct ib_uverbs_ex_modify_cq cmd; struct ib_cq *cq; - size_t required_cmd_sz; int ret; - required_cmd_sz = offsetof(typeof(cmd), reserved) + - sizeof(cmd.reserved); - if (ucore->inlen < required_cmd_sz) - return -EINVAL; - - /* sanity checks */ - if (ucore->inlen > sizeof(cmd) && - !ib_is_udata_cleared(ucore, sizeof(cmd), - ucore->inlen - sizeof(cmd))) - return -EOPNOTSUPP; - - ret = ib_copy_from_udata(&cmd, ucore, min(sizeof(cmd), ucore->inlen)); + ret = uverbs_request(attrs, &cmd, sizeof(cmd)); if (ret) return ret; @@ -4116,7 +3660,7 @@ int ib_uverbs_ex_modify_cq(struct ib_uverbs_file *file, if (cmd.attr_mask > IB_CQ_MODERATE) return -EOPNOTSUPP; - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = 
uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs); if (!cq) return -EINVAL; @@ -4126,3 +3670,381 @@ int ib_uverbs_ex_modify_cq(struct ib_uverbs_file *file, return ret; } + +/* + * Describe the input structs for write(). Some write methods have an input + * only struct, most have an input and output. If the struct has an output then + * the 'response' u64 must be the first field in the request structure. + * + * If udata is present then both the request and response structs have a + * trailing driver_data flex array. In this case the size of the base struct + * cannot be changed. + */ +#define offsetof_after(_struct, _member) \ + (offsetof(_struct, _member) + sizeof(((_struct *)NULL)->_member)) + +#define UAPI_DEF_WRITE_IO(req, resp) \ + .write.has_resp = 1 + \ + BUILD_BUG_ON_ZERO(offsetof(req, response) != 0) + \ + BUILD_BUG_ON_ZERO(sizeof(((req *)0)->response) != \ + sizeof(u64)), \ + .write.req_size = sizeof(req), .write.resp_size = sizeof(resp) + +#define UAPI_DEF_WRITE_I(req) .write.req_size = sizeof(req) + +#define UAPI_DEF_WRITE_UDATA_IO(req, resp) \ + UAPI_DEF_WRITE_IO(req, resp), \ + .write.has_udata = \ + 1 + \ + BUILD_BUG_ON_ZERO(offsetof(req, driver_data) != \ + sizeof(req)) + \ + BUILD_BUG_ON_ZERO(offsetof(resp, driver_data) != \ + sizeof(resp)) + +#define UAPI_DEF_WRITE_UDATA_I(req) \ + UAPI_DEF_WRITE_I(req), \ + .write.has_udata = \ + 1 + BUILD_BUG_ON_ZERO(offsetof(req, driver_data) != \ + sizeof(req)) + +/* + * The _EX versions are for use with WRITE_EX and allow the last struct member + * to be specified. Buffers that do not include that member will be rejected. + */ +#define UAPI_DEF_WRITE_IO_EX(req, req_last_member, resp, resp_last_member) \ + .write.has_resp = 1, \ + .write.req_size = offsetof_after(req, req_last_member), \ + .write.resp_size = offsetof_after(resp, resp_last_member) + +#define UAPI_DEF_WRITE_I_EX(req, req_last_member) \ + .write.req_size = offsetof_after(req, req_last_member) + +const struct uapi_definition uverbs_def_write_intf[] = { + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_AH, + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_CREATE_AH, + ib_uverbs_create_ah, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_create_ah, + struct ib_uverbs_create_ah_resp), + UAPI_DEF_METHOD_NEEDS_FN(create_ah)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_DESTROY_AH, + ib_uverbs_destroy_ah, + UAPI_DEF_WRITE_I(struct ib_uverbs_destroy_ah), + UAPI_DEF_METHOD_NEEDS_FN(destroy_ah))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_COMP_CHANNEL, + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL, + ib_uverbs_create_comp_channel, + UAPI_DEF_WRITE_IO( + struct ib_uverbs_create_comp_channel, + struct ib_uverbs_create_comp_channel_resp))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_CQ, + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_CREATE_CQ, + ib_uverbs_create_cq, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_create_cq, + struct ib_uverbs_create_cq_resp), + UAPI_DEF_METHOD_NEEDS_FN(create_cq)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_DESTROY_CQ, + ib_uverbs_destroy_cq, + UAPI_DEF_WRITE_IO(struct ib_uverbs_destroy_cq, + struct ib_uverbs_destroy_cq_resp), + UAPI_DEF_METHOD_NEEDS_FN(destroy_cq)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_POLL_CQ, + ib_uverbs_poll_cq, + UAPI_DEF_WRITE_IO(struct ib_uverbs_poll_cq, + struct ib_uverbs_poll_cq_resp), + UAPI_DEF_METHOD_NEEDS_FN(poll_cq)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_REQ_NOTIFY_CQ, + ib_uverbs_req_notify_cq, + UAPI_DEF_WRITE_I(struct ib_uverbs_req_notify_cq), + UAPI_DEF_METHOD_NEEDS_FN(req_notify_cq)), + 
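+		/*
+		 * A note on the UAPI_DEF_WRITE_*() helpers used throughout
+		 * this table: their BUILD_BUG_ON_ZERO() terms turn the
+		 * layout rules from the comment above into compile-time
+		 * checks.  UAPI_DEF_WRITE_IO(struct ib_uverbs_destroy_cq,
+		 * struct ib_uverbs_destroy_cq_resp), for instance,
+		 * effectively asserts
+		 *
+		 *	offsetof(struct ib_uverbs_destroy_cq, response) == 0
+		 *	sizeof(((struct ib_uverbs_destroy_cq *)0)->response)
+		 *		== sizeof(u64)
+		 *
+		 * so a request struct with a missing or mis-sized leading
+		 * 'response' member breaks the build rather than corrupting
+		 * the raw get_user() of the response pointer in
+		 * ib_uverbs_write().
+		 */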
DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_RESIZE_CQ, + ib_uverbs_resize_cq, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_resize_cq, + struct ib_uverbs_resize_cq_resp), + UAPI_DEF_METHOD_NEEDS_FN(resize_cq)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_CREATE_CQ, + ib_uverbs_ex_create_cq, + UAPI_DEF_WRITE_IO_EX(struct ib_uverbs_ex_create_cq, + reserved, + struct ib_uverbs_ex_create_cq_resp, + response_length), + UAPI_DEF_METHOD_NEEDS_FN(create_cq)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_MODIFY_CQ, + ib_uverbs_ex_modify_cq, + UAPI_DEF_WRITE_I(struct ib_uverbs_ex_modify_cq), + UAPI_DEF_METHOD_NEEDS_FN(create_cq))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_DEVICE, + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_GET_CONTEXT, + ib_uverbs_get_context, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_get_context, + struct ib_uverbs_get_context_resp)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_QUERY_DEVICE, + ib_uverbs_query_device, + UAPI_DEF_WRITE_IO(struct ib_uverbs_query_device, + struct ib_uverbs_query_device_resp)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_QUERY_PORT, + ib_uverbs_query_port, + UAPI_DEF_WRITE_IO(struct ib_uverbs_query_port, + struct ib_uverbs_query_port_resp), + UAPI_DEF_METHOD_NEEDS_FN(query_port)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_QUERY_DEVICE, + ib_uverbs_ex_query_device, + UAPI_DEF_WRITE_IO_EX( + struct ib_uverbs_ex_query_device, + reserved, + struct ib_uverbs_ex_query_device_resp, + response_length), + UAPI_DEF_METHOD_NEEDS_FN(query_device)), + UAPI_DEF_OBJ_NEEDS_FN(alloc_ucontext), + UAPI_DEF_OBJ_NEEDS_FN(dealloc_ucontext)), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_FLOW, + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_CREATE_FLOW, + ib_uverbs_ex_create_flow, + UAPI_DEF_WRITE_IO_EX(struct ib_uverbs_create_flow, + flow_attr, + struct ib_uverbs_create_flow_resp, + flow_handle), + UAPI_DEF_METHOD_NEEDS_FN(create_flow)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_DESTROY_FLOW, + ib_uverbs_ex_destroy_flow, + UAPI_DEF_WRITE_I(struct ib_uverbs_destroy_flow), + UAPI_DEF_METHOD_NEEDS_FN(destroy_flow))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_MR, + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_DEREG_MR, + ib_uverbs_dereg_mr, + UAPI_DEF_WRITE_I(struct ib_uverbs_dereg_mr), + UAPI_DEF_METHOD_NEEDS_FN(dereg_mr)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_REG_MR, + ib_uverbs_reg_mr, + UAPI_DEF_WRITE_UDATA_IO(struct ib_uverbs_reg_mr, + struct ib_uverbs_reg_mr_resp), + UAPI_DEF_METHOD_NEEDS_FN(reg_user_mr)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_REREG_MR, + ib_uverbs_rereg_mr, + UAPI_DEF_WRITE_UDATA_IO(struct ib_uverbs_rereg_mr, + struct ib_uverbs_rereg_mr_resp), + UAPI_DEF_METHOD_NEEDS_FN(rereg_user_mr))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_MW, + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_ALLOC_MW, + ib_uverbs_alloc_mw, + UAPI_DEF_WRITE_UDATA_IO(struct ib_uverbs_alloc_mw, + struct ib_uverbs_alloc_mw_resp), + UAPI_DEF_METHOD_NEEDS_FN(alloc_mw)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_DEALLOC_MW, + ib_uverbs_dealloc_mw, + UAPI_DEF_WRITE_I(struct ib_uverbs_dealloc_mw), + UAPI_DEF_METHOD_NEEDS_FN(dealloc_mw))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_PD, + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_ALLOC_PD, + ib_uverbs_alloc_pd, + UAPI_DEF_WRITE_UDATA_IO(struct ib_uverbs_alloc_pd, + struct ib_uverbs_alloc_pd_resp), + UAPI_DEF_METHOD_NEEDS_FN(alloc_pd)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_DEALLOC_PD, + ib_uverbs_dealloc_pd, + UAPI_DEF_WRITE_I(struct ib_uverbs_dealloc_pd), + UAPI_DEF_METHOD_NEEDS_FN(dealloc_pd))), + + 
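+	/*
+	 * Two details in the entries nearby: the EX_CMD_MODIFY_CQ entry
+	 * above is gated on ops.create_cq, where ops.modify_cq was
+	 * presumably intended; and ATTACH_MCAST in the QP block below
+	 * deliberately requires both attach_mcast and detach_mcast, since
+	 * permitting a join that could never be undone would leak the
+	 * attachment.
+	 */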
DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_QP, + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_ATTACH_MCAST, + ib_uverbs_attach_mcast, + UAPI_DEF_WRITE_I(struct ib_uverbs_attach_mcast), + UAPI_DEF_METHOD_NEEDS_FN(attach_mcast), + UAPI_DEF_METHOD_NEEDS_FN(detach_mcast)), + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_CREATE_QP, + ib_uverbs_create_qp, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_create_qp, + struct ib_uverbs_create_qp_resp), + UAPI_DEF_METHOD_NEEDS_FN(create_qp)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_DESTROY_QP, + ib_uverbs_destroy_qp, + UAPI_DEF_WRITE_IO(struct ib_uverbs_destroy_qp, + struct ib_uverbs_destroy_qp_resp), + UAPI_DEF_METHOD_NEEDS_FN(destroy_qp)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_DETACH_MCAST, + ib_uverbs_detach_mcast, + UAPI_DEF_WRITE_I(struct ib_uverbs_detach_mcast), + UAPI_DEF_METHOD_NEEDS_FN(detach_mcast)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_MODIFY_QP, + ib_uverbs_modify_qp, + UAPI_DEF_WRITE_I(struct ib_uverbs_modify_qp), + UAPI_DEF_METHOD_NEEDS_FN(modify_qp)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_POST_RECV, + ib_uverbs_post_recv, + UAPI_DEF_WRITE_IO(struct ib_uverbs_post_recv, + struct ib_uverbs_post_recv_resp), + UAPI_DEF_METHOD_NEEDS_FN(post_recv)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_POST_SEND, + ib_uverbs_post_send, + UAPI_DEF_WRITE_IO(struct ib_uverbs_post_send, + struct ib_uverbs_post_send_resp), + UAPI_DEF_METHOD_NEEDS_FN(post_send)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_QUERY_QP, + ib_uverbs_query_qp, + UAPI_DEF_WRITE_IO(struct ib_uverbs_query_qp, + struct ib_uverbs_query_qp_resp), + UAPI_DEF_METHOD_NEEDS_FN(query_qp)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_CREATE_QP, + ib_uverbs_ex_create_qp, + UAPI_DEF_WRITE_IO_EX(struct ib_uverbs_ex_create_qp, + comp_mask, + struct ib_uverbs_ex_create_qp_resp, + response_length), + UAPI_DEF_METHOD_NEEDS_FN(create_qp)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_MODIFY_QP, + ib_uverbs_ex_modify_qp, + UAPI_DEF_WRITE_IO_EX(struct ib_uverbs_ex_modify_qp, + base, + struct ib_uverbs_ex_modify_qp_resp, + response_length), + UAPI_DEF_METHOD_NEEDS_FN(modify_qp))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_RWQ_IND_TBL, + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL, + ib_uverbs_ex_create_rwq_ind_table, + UAPI_DEF_WRITE_IO_EX( + struct ib_uverbs_ex_create_rwq_ind_table, + log_ind_tbl_size, + struct ib_uverbs_ex_create_rwq_ind_table_resp, + ind_tbl_num), + UAPI_DEF_METHOD_NEEDS_FN(create_rwq_ind_table)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL, + ib_uverbs_ex_destroy_rwq_ind_table, + UAPI_DEF_WRITE_I( + struct ib_uverbs_ex_destroy_rwq_ind_table), + UAPI_DEF_METHOD_NEEDS_FN(destroy_rwq_ind_table))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_WQ, + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_CREATE_WQ, + ib_uverbs_ex_create_wq, + UAPI_DEF_WRITE_IO_EX(struct ib_uverbs_ex_create_wq, + max_sge, + struct ib_uverbs_ex_create_wq_resp, + wqn), + UAPI_DEF_METHOD_NEEDS_FN(create_wq)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_DESTROY_WQ, + ib_uverbs_ex_destroy_wq, + UAPI_DEF_WRITE_IO_EX(struct ib_uverbs_ex_destroy_wq, + wq_handle, + struct ib_uverbs_ex_destroy_wq_resp, + reserved), + UAPI_DEF_METHOD_NEEDS_FN(destroy_wq)), + DECLARE_UVERBS_WRITE_EX( + IB_USER_VERBS_EX_CMD_MODIFY_WQ, + ib_uverbs_ex_modify_wq, + UAPI_DEF_WRITE_I_EX(struct ib_uverbs_ex_modify_wq, + curr_wq_state), + UAPI_DEF_METHOD_NEEDS_FN(modify_wq))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_SRQ, + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_CREATE_SRQ, + 
ib_uverbs_create_srq, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_create_srq, + struct ib_uverbs_create_srq_resp), + UAPI_DEF_METHOD_NEEDS_FN(create_srq)), + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_CREATE_XSRQ, + ib_uverbs_create_xsrq, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_create_xsrq, + struct ib_uverbs_create_srq_resp), + UAPI_DEF_METHOD_NEEDS_FN(create_srq)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_DESTROY_SRQ, + ib_uverbs_destroy_srq, + UAPI_DEF_WRITE_IO(struct ib_uverbs_destroy_srq, + struct ib_uverbs_destroy_srq_resp), + UAPI_DEF_METHOD_NEEDS_FN(destroy_srq)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_MODIFY_SRQ, + ib_uverbs_modify_srq, + UAPI_DEF_WRITE_UDATA_I(struct ib_uverbs_modify_srq), + UAPI_DEF_METHOD_NEEDS_FN(modify_srq)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_POST_SRQ_RECV, + ib_uverbs_post_srq_recv, + UAPI_DEF_WRITE_IO(struct ib_uverbs_post_srq_recv, + struct ib_uverbs_post_srq_recv_resp), + UAPI_DEF_METHOD_NEEDS_FN(post_srq_recv)), + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_QUERY_SRQ, + ib_uverbs_query_srq, + UAPI_DEF_WRITE_IO(struct ib_uverbs_query_srq, + struct ib_uverbs_query_srq_resp), + UAPI_DEF_METHOD_NEEDS_FN(query_srq))), + + DECLARE_UVERBS_OBJECT( + UVERBS_OBJECT_XRCD, + DECLARE_UVERBS_WRITE( + IB_USER_VERBS_CMD_CLOSE_XRCD, + ib_uverbs_close_xrcd, + UAPI_DEF_WRITE_I(struct ib_uverbs_close_xrcd), + UAPI_DEF_METHOD_NEEDS_FN(dealloc_xrcd)), + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_OPEN_QP, + ib_uverbs_open_qp, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_open_qp, + struct ib_uverbs_create_qp_resp)), + DECLARE_UVERBS_WRITE(IB_USER_VERBS_CMD_OPEN_XRCD, + ib_uverbs_open_xrcd, + UAPI_DEF_WRITE_UDATA_IO( + struct ib_uverbs_open_xrcd, + struct ib_uverbs_open_xrcd_resp), + UAPI_DEF_METHOD_NEEDS_FN(alloc_xrcd))), + + {}, +}; diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c index b0e493e8d860..8c81ff698052 100644 --- a/drivers/infiniband/core/uverbs_ioctl.c +++ b/drivers/infiniband/core/uverbs_ioctl.c @@ -404,8 +404,7 @@ static int uverbs_set_attr(struct bundle_priv *pbundle, static int ib_uverbs_run_method(struct bundle_priv *pbundle, unsigned int num_attrs) { - int (*handler)(struct ib_uverbs_file *ufile, - struct uverbs_attr_bundle *ctx); + int (*handler)(struct uverbs_attr_bundle *attrs); size_t uattrs_size = array_size(sizeof(*pbundle->uattrs), num_attrs); unsigned int destroy_bkey = pbundle->method_elm->destroy_bkey; unsigned int i; @@ -436,6 +435,11 @@ static int ib_uverbs_run_method(struct bundle_priv *pbundle, pbundle->method_elm->key_bitmap_len))) return -EINVAL; + if (pbundle->method_elm->has_udata) + uverbs_fill_udata(&pbundle->bundle, + &pbundle->bundle.driver_udata, + UVERBS_ATTR_UHW_IN, UVERBS_ATTR_UHW_OUT); + if (destroy_bkey != UVERBS_API_ATTR_BKEY_LEN) { struct uverbs_obj_attr *destroy_attr = &pbundle->bundle.attrs[destroy_bkey].obj_attr; @@ -445,10 +449,10 @@ static int ib_uverbs_run_method(struct bundle_priv *pbundle, return ret; __clear_bit(destroy_bkey, pbundle->uobj_finalize); - ret = handler(pbundle->bundle.ufile, &pbundle->bundle); + ret = handler(&pbundle->bundle); uobj_put_destroy(destroy_attr->uobject); } else { - ret = handler(pbundle->bundle.ufile, &pbundle->bundle); + ret = handler(&pbundle->bundle); } /* @@ -662,35 +666,37 @@ int uverbs_get_flags32(u32 *to, const struct uverbs_attr_bundle *attrs_bundle, EXPORT_SYMBOL(uverbs_get_flags32); /* - * This is for ease of conversion. The purpose is to convert all drivers to - * use uverbs_attr_bundle instead of ib_udata. 
Assume attr == 0 is input and - * attr == 1 is output. + * Fill a ib_udata struct (core or uhw) using the given attribute IDs. + * This is primarily used to convert the UVERBS_ATTR_UHW() into the + * ib_udata format used by the drivers. */ -void create_udata(struct uverbs_attr_bundle *bundle, struct ib_udata *udata) +void uverbs_fill_udata(struct uverbs_attr_bundle *bundle, + struct ib_udata *udata, unsigned int attr_in, + unsigned int attr_out) { struct bundle_priv *pbundle = container_of(bundle, struct bundle_priv, bundle); - const struct uverbs_attr *uhw_in = - uverbs_attr_get(bundle, UVERBS_ATTR_UHW_IN); - const struct uverbs_attr *uhw_out = - uverbs_attr_get(bundle, UVERBS_ATTR_UHW_OUT); - - if (!IS_ERR(uhw_in)) { - udata->inlen = uhw_in->ptr_attr.len; - if (uverbs_attr_ptr_is_inline(uhw_in)) + const struct uverbs_attr *in = + uverbs_attr_get(&pbundle->bundle, attr_in); + const struct uverbs_attr *out = + uverbs_attr_get(&pbundle->bundle, attr_out); + + if (!IS_ERR(in)) { + udata->inlen = in->ptr_attr.len; + if (uverbs_attr_ptr_is_inline(in)) udata->inbuf = - &pbundle->user_attrs[uhw_in->ptr_attr.uattr_idx] + &pbundle->user_attrs[in->ptr_attr.uattr_idx] .data; else - udata->inbuf = u64_to_user_ptr(uhw_in->ptr_attr.data); + udata->inbuf = u64_to_user_ptr(in->ptr_attr.data); } else { udata->inbuf = NULL; udata->inlen = 0; } - if (!IS_ERR(uhw_out)) { - udata->outbuf = u64_to_user_ptr(uhw_out->ptr_attr.data); - udata->outlen = uhw_out->ptr_attr.len; + if (!IS_ERR(out)) { + udata->outbuf = u64_to_user_ptr(out->ptr_attr.data); + udata->outlen = out->ptr_attr.len; } else { udata->outbuf = NULL; udata->outlen = 0; @@ -745,3 +751,14 @@ int _uverbs_get_const(s64 *to, const struct uverbs_attr_bundle *attrs_bundle, return 0; } EXPORT_SYMBOL(_uverbs_get_const); + +int uverbs_copy_to_struct_or_zero(const struct uverbs_attr_bundle *bundle, + size_t idx, const void *from, size_t size) +{ + const struct uverbs_attr *attr = uverbs_attr_get(bundle, idx); + + if (clear_user(u64_to_user_ptr(attr->ptr_attr.data), + attr->ptr_attr.len)) + return -EFAULT; + return uverbs_copy_to(bundle, idx, from, size); +} diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c index 6d373f5515b7..9f9172eb1512 100644 --- a/drivers/infiniband/core/uverbs_main.c +++ b/drivers/infiniband/core/uverbs_main.c @@ -74,64 +74,6 @@ static dev_t dynamic_uverbs_dev; static struct class *uverbs_class; static DEFINE_IDA(uverbs_ida); - -static ssize_t (*uverbs_cmd_table[])(struct ib_uverbs_file *file, - const char __user *buf, int in_len, - int out_len) = { - [IB_USER_VERBS_CMD_GET_CONTEXT] = ib_uverbs_get_context, - [IB_USER_VERBS_CMD_QUERY_DEVICE] = ib_uverbs_query_device, - [IB_USER_VERBS_CMD_QUERY_PORT] = ib_uverbs_query_port, - [IB_USER_VERBS_CMD_ALLOC_PD] = ib_uverbs_alloc_pd, - [IB_USER_VERBS_CMD_DEALLOC_PD] = ib_uverbs_dealloc_pd, - [IB_USER_VERBS_CMD_REG_MR] = ib_uverbs_reg_mr, - [IB_USER_VERBS_CMD_REREG_MR] = ib_uverbs_rereg_mr, - [IB_USER_VERBS_CMD_DEREG_MR] = ib_uverbs_dereg_mr, - [IB_USER_VERBS_CMD_ALLOC_MW] = ib_uverbs_alloc_mw, - [IB_USER_VERBS_CMD_DEALLOC_MW] = ib_uverbs_dealloc_mw, - [IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL] = ib_uverbs_create_comp_channel, - [IB_USER_VERBS_CMD_CREATE_CQ] = ib_uverbs_create_cq, - [IB_USER_VERBS_CMD_RESIZE_CQ] = ib_uverbs_resize_cq, - [IB_USER_VERBS_CMD_POLL_CQ] = ib_uverbs_poll_cq, - [IB_USER_VERBS_CMD_REQ_NOTIFY_CQ] = ib_uverbs_req_notify_cq, - [IB_USER_VERBS_CMD_DESTROY_CQ] = ib_uverbs_destroy_cq, - [IB_USER_VERBS_CMD_CREATE_QP] = ib_uverbs_create_qp, - 
[IB_USER_VERBS_CMD_QUERY_QP] = ib_uverbs_query_qp, - [IB_USER_VERBS_CMD_MODIFY_QP] = ib_uverbs_modify_qp, - [IB_USER_VERBS_CMD_DESTROY_QP] = ib_uverbs_destroy_qp, - [IB_USER_VERBS_CMD_POST_SEND] = ib_uverbs_post_send, - [IB_USER_VERBS_CMD_POST_RECV] = ib_uverbs_post_recv, - [IB_USER_VERBS_CMD_POST_SRQ_RECV] = ib_uverbs_post_srq_recv, - [IB_USER_VERBS_CMD_CREATE_AH] = ib_uverbs_create_ah, - [IB_USER_VERBS_CMD_DESTROY_AH] = ib_uverbs_destroy_ah, - [IB_USER_VERBS_CMD_ATTACH_MCAST] = ib_uverbs_attach_mcast, - [IB_USER_VERBS_CMD_DETACH_MCAST] = ib_uverbs_detach_mcast, - [IB_USER_VERBS_CMD_CREATE_SRQ] = ib_uverbs_create_srq, - [IB_USER_VERBS_CMD_MODIFY_SRQ] = ib_uverbs_modify_srq, - [IB_USER_VERBS_CMD_QUERY_SRQ] = ib_uverbs_query_srq, - [IB_USER_VERBS_CMD_DESTROY_SRQ] = ib_uverbs_destroy_srq, - [IB_USER_VERBS_CMD_OPEN_XRCD] = ib_uverbs_open_xrcd, - [IB_USER_VERBS_CMD_CLOSE_XRCD] = ib_uverbs_close_xrcd, - [IB_USER_VERBS_CMD_CREATE_XSRQ] = ib_uverbs_create_xsrq, - [IB_USER_VERBS_CMD_OPEN_QP] = ib_uverbs_open_qp, -}; - -static int (*uverbs_ex_cmd_table[])(struct ib_uverbs_file *file, - struct ib_udata *ucore, - struct ib_udata *uhw) = { - [IB_USER_VERBS_EX_CMD_CREATE_FLOW] = ib_uverbs_ex_create_flow, - [IB_USER_VERBS_EX_CMD_DESTROY_FLOW] = ib_uverbs_ex_destroy_flow, - [IB_USER_VERBS_EX_CMD_QUERY_DEVICE] = ib_uverbs_ex_query_device, - [IB_USER_VERBS_EX_CMD_CREATE_CQ] = ib_uverbs_ex_create_cq, - [IB_USER_VERBS_EX_CMD_CREATE_QP] = ib_uverbs_ex_create_qp, - [IB_USER_VERBS_EX_CMD_CREATE_WQ] = ib_uverbs_ex_create_wq, - [IB_USER_VERBS_EX_CMD_MODIFY_WQ] = ib_uverbs_ex_modify_wq, - [IB_USER_VERBS_EX_CMD_DESTROY_WQ] = ib_uverbs_ex_destroy_wq, - [IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL] = ib_uverbs_ex_create_rwq_ind_table, - [IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL] = ib_uverbs_ex_destroy_rwq_ind_table, - [IB_USER_VERBS_EX_CMD_MODIFY_QP] = ib_uverbs_ex_modify_qp, - [IB_USER_VERBS_EX_CMD_MODIFY_CQ] = ib_uverbs_ex_modify_cq, -}; - static void ib_uverbs_add_one(struct ib_device *device); static void ib_uverbs_remove_one(struct ib_device *device, void *client_data); @@ -139,7 +81,7 @@ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data); * Must be called with the ufile->device->disassociate_srcu held, and the lock * must be held until use of the ucontext is finished. 
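 * The _file suffix distinguishes this ufile-based lookup from
 * ib_uverbs_get_ucontext(), which converted handlers call on a bundle
 * and which presumably just passes attrs->ufile through to this
 * function.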
*/ -struct ib_ucontext *ib_uverbs_get_ucontext(struct ib_uverbs_file *ufile) +struct ib_ucontext *ib_uverbs_get_ucontext_file(struct ib_uverbs_file *ufile) { /* * We do not hold the hw_destroy_rwsem lock for this flow, instead @@ -157,14 +99,14 @@ struct ib_ucontext *ib_uverbs_get_ucontext(struct ib_uverbs_file *ufile) return ucontext; } -EXPORT_SYMBOL(ib_uverbs_get_ucontext); +EXPORT_SYMBOL(ib_uverbs_get_ucontext_file); int uverbs_dealloc_mw(struct ib_mw *mw) { struct ib_pd *pd = mw->pd; int ret; - ret = mw->device->dealloc_mw(mw); + ret = mw->device->ops.dealloc_mw(mw); if (!ret) atomic_dec(&pd->usecnt); return ret; @@ -255,7 +197,7 @@ void ib_uverbs_release_file(struct kref *ref) srcu_key = srcu_read_lock(&file->device->disassociate_srcu); ib_dev = srcu_dereference(file->device->ib_dev, &file->device->disassociate_srcu); - if (ib_dev && !ib_dev->disassociate_ucontext) + if (ib_dev && !ib_dev->ops.disassociate_ucontext) module_put(ib_dev->owner); srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); @@ -646,51 +588,19 @@ err_put_refs: return filp; } -static bool verify_command_mask(struct ib_uverbs_file *ufile, u32 command, - bool extended) -{ - if (!extended) - return ufile->uverbs_cmd_mask & BIT_ULL(command); - - return ufile->uverbs_ex_cmd_mask & BIT_ULL(command); -} - -static bool verify_command_idx(u32 command, bool extended) -{ - if (extended) - return command < ARRAY_SIZE(uverbs_ex_cmd_table) && - uverbs_ex_cmd_table[command]; - - return command < ARRAY_SIZE(uverbs_cmd_table) && - uverbs_cmd_table[command]; -} - -static ssize_t process_hdr(struct ib_uverbs_cmd_hdr *hdr, - u32 *command, bool *extended) -{ - if (hdr->command & ~(u32)(IB_USER_VERBS_CMD_FLAG_EXTENDED | - IB_USER_VERBS_CMD_COMMAND_MASK)) - return -EINVAL; - - *command = hdr->command & IB_USER_VERBS_CMD_COMMAND_MASK; - *extended = hdr->command & IB_USER_VERBS_CMD_FLAG_EXTENDED; - - if (!verify_command_idx(*command, *extended)) - return -EOPNOTSUPP; - - return 0; -} - static ssize_t verify_hdr(struct ib_uverbs_cmd_hdr *hdr, - struct ib_uverbs_ex_cmd_hdr *ex_hdr, - size_t count, bool extended) + struct ib_uverbs_ex_cmd_hdr *ex_hdr, size_t count, + const struct uverbs_api_write_method *method_elm) { - if (extended) { + if (method_elm->is_ex) { count -= sizeof(*hdr) + sizeof(*ex_hdr); if ((hdr->in_words + ex_hdr->provider_in_words) * 8 != count) return -EINVAL; + if (hdr->in_words * 8 < method_elm->req_size) + return -ENOSPC; + if (ex_hdr->cmd_hdr_reserved) return -EINVAL; @@ -698,6 +608,9 @@ static ssize_t verify_hdr(struct ib_uverbs_cmd_hdr *hdr, if (!hdr->out_words && !ex_hdr->provider_out_words) return -EINVAL; + if (hdr->out_words * 8 < method_elm->resp_size) + return -ENOSPC; + if (!access_ok(VERIFY_WRITE, u64_to_user_ptr(ex_hdr->response), (hdr->out_words + ex_hdr->provider_out_words) * 8)) @@ -714,6 +627,24 @@ static ssize_t verify_hdr(struct ib_uverbs_cmd_hdr *hdr, if (hdr->in_words * 4 != count) return -EINVAL; + if (count < method_elm->req_size + sizeof(hdr)) { + /* + * rdma-core v18 and v19 have a bug where they send DESTROY_CQ + * with a 16 byte write instead of 24. Old kernels didn't + * check the size so they allowed this. Now that the size is + * checked provide a compatibility work around to not break + * those userspaces. 
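+ * (For reference: the legacy command header is 8 bytes and struct
+ * ib_uverbs_destroy_cq - a response pointer plus cq_handle and
+ * reserved - is 16 bytes, so a well-formed write is 24 bytes;
+ * rewriting in_words to 6 makes 6 * 4 == 24, the size the remaining
+ * checks and the handler expect.  Note also that the sizeof(hdr) in
+ * the check above is the size of a pointer, which happens to equal
+ * the 8-byte header on 64-bit; sizeof(*hdr) is what is meant.)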
+ */ + if (hdr->command == IB_USER_VERBS_CMD_DESTROY_CQ && + count == 16) { + hdr->in_words = 6; + return 0; + } + return -ENOSPC; + } + if (hdr->out_words * 4 < method_elm->resp_size) + return -ENOSPC; + return 0; } @@ -721,11 +652,12 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf, size_t count, loff_t *pos) { struct ib_uverbs_file *file = filp->private_data; + const struct uverbs_api_write_method *method_elm; + struct uverbs_api *uapi = file->device->uapi; struct ib_uverbs_ex_cmd_hdr ex_hdr; struct ib_uverbs_cmd_hdr hdr; - bool extended; + struct uverbs_attr_bundle bundle; int srcu_key; - u32 command; ssize_t ret; if (!ib_safe_file_access(filp)) { @@ -740,57 +672,92 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf, if (copy_from_user(&hdr, buf, sizeof(hdr))) return -EFAULT; - ret = process_hdr(&hdr, &command, &extended); - if (ret) - return ret; + method_elm = uapi_get_method(uapi, hdr.command); + if (IS_ERR(method_elm)) + return PTR_ERR(method_elm); - if (extended) { + if (method_elm->is_ex) { if (count < (sizeof(hdr) + sizeof(ex_hdr))) return -EINVAL; if (copy_from_user(&ex_hdr, buf + sizeof(hdr), sizeof(ex_hdr))) return -EFAULT; } - ret = verify_hdr(&hdr, &ex_hdr, count, extended); + ret = verify_hdr(&hdr, &ex_hdr, count, method_elm); if (ret) return ret; srcu_key = srcu_read_lock(&file->device->disassociate_srcu); - if (!verify_command_mask(file, command, extended)) { - ret = -EOPNOTSUPP; - goto out; - } - buf += sizeof(hdr); - if (!extended) { - ret = uverbs_cmd_table[command](file, buf, - hdr.in_words * 4, - hdr.out_words * 4); - } else { - struct ib_udata ucore; - struct ib_udata uhw; + bundle.ufile = file; + if (!method_elm->is_ex) { + size_t in_len = hdr.in_words * 4 - sizeof(hdr); + size_t out_len = hdr.out_words * 4; + u64 response = 0; + + if (method_elm->has_udata) { + bundle.driver_udata.inlen = + in_len - method_elm->req_size; + in_len = method_elm->req_size; + if (bundle.driver_udata.inlen) + bundle.driver_udata.inbuf = buf + in_len; + else + bundle.driver_udata.inbuf = NULL; + } else { + memset(&bundle.driver_udata, 0, + sizeof(bundle.driver_udata)); + } + + if (method_elm->has_resp) { + /* + * The macros check that if has_resp is set + * then the command request structure starts + * with a '__aligned u64 response' member. 
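+ * (The BUILD_BUG_ON_ZERO() terms in UAPI_DEF_WRITE_IO() enforce
+ * exactly this layout, which is what makes the raw get_user() below
+ * safe.)  When the method also carries udata, in_len and out_len are
+ * then split at req_size/resp_size: the fixed command keeps the
+ * leading bytes and everything beyond becomes driver_udata, so a
+ * 40 byte request payload against req_size == 24 leaves a 16 byte
+ * driver inbuf starting right after the command struct.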
+ */ + ret = get_user(response, (const u64 *)buf); + if (ret) + goto out_unlock; + + if (method_elm->has_udata) { + bundle.driver_udata.outlen = + out_len - method_elm->resp_size; + out_len = method_elm->resp_size; + if (bundle.driver_udata.outlen) + bundle.driver_udata.outbuf = + u64_to_user_ptr(response + + out_len); + else + bundle.driver_udata.outbuf = NULL; + } + } else { + bundle.driver_udata.outlen = 0; + bundle.driver_udata.outbuf = NULL; + } + ib_uverbs_init_udata_buf_or_null( + &bundle.ucore, buf, u64_to_user_ptr(response), + in_len, out_len); + } else { buf += sizeof(ex_hdr); - ib_uverbs_init_udata_buf_or_null(&ucore, buf, + ib_uverbs_init_udata_buf_or_null(&bundle.ucore, buf, u64_to_user_ptr(ex_hdr.response), hdr.in_words * 8, hdr.out_words * 8); - ib_uverbs_init_udata_buf_or_null(&uhw, - buf + ucore.inlen, - u64_to_user_ptr(ex_hdr.response) + ucore.outlen, - ex_hdr.provider_in_words * 8, - ex_hdr.provider_out_words * 8); + ib_uverbs_init_udata_buf_or_null( + &bundle.driver_udata, buf + bundle.ucore.inlen, + u64_to_user_ptr(ex_hdr.response) + bundle.ucore.outlen, + ex_hdr.provider_in_words * 8, + ex_hdr.provider_out_words * 8); - ret = uverbs_ex_cmd_table[command](file, &ucore, &uhw); - ret = (ret) ? : count; } -out: + ret = method_elm->handler(&bundle); +out_unlock: srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); - return ret; + return (ret) ? : count; } static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma) @@ -801,13 +768,13 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma) int srcu_key; srcu_key = srcu_read_lock(&file->device->disassociate_srcu); - ucontext = ib_uverbs_get_ucontext(file); + ucontext = ib_uverbs_get_ucontext_file(file); if (IS_ERR(ucontext)) { ret = PTR_ERR(ucontext); goto out; } - ret = ucontext->device->mmap(ucontext, vma); + ret = ucontext->device->ops.mmap(ucontext, vma); out: srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); return ret; @@ -1069,7 +1036,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp) /* In case IB device supports disassociate ucontext, there is no hard * dependency between uverbs device and its low level device. */ - module_dependent = !(ib_dev->disassociate_ucontext); + module_dependent = !(ib_dev->ops.disassociate_ucontext); if (module_dependent) { if (!try_module_get(ib_dev->owner)) { @@ -1102,9 +1069,6 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp) mutex_unlock(&dev->lists_mutex); srcu_read_unlock(&dev->disassociate_srcu, srcu_key); - file->uverbs_cmd_mask = ib_dev->uverbs_cmd_mask; - file->uverbs_ex_cmd_mask = ib_dev->uverbs_ex_cmd_mask; - setup_ufile_idr_uobject(file); return nonseekable_open(inode, filp); @@ -1224,7 +1188,7 @@ static int ib_uverbs_create_uapi(struct ib_device *device, { struct uverbs_api *uapi; - uapi = uverbs_alloc_api(device->driver_specs, device->driver_id); + uapi = uverbs_alloc_api(device); if (IS_ERR(uapi)) return PTR_ERR(uapi); @@ -1239,7 +1203,7 @@ static void ib_uverbs_add_one(struct ib_device *device) struct ib_uverbs_device *uverbs_dev; int ret; - if (!device->alloc_ucontext) + if (!device->ops.alloc_ucontext) return; uverbs_dev = kzalloc(sizeof(*uverbs_dev), GFP_KERNEL); @@ -1285,7 +1249,7 @@ static void ib_uverbs_add_one(struct ib_device *device) dev_set_name(&uverbs_dev->dev, "uverbs%d", uverbs_dev->devnum); cdev_init(&uverbs_dev->cdev, - device->mmap ? &uverbs_mmap_fops : &uverbs_fops); + device->ops.mmap ? 
&uverbs_mmap_fops : &uverbs_fops); uverbs_dev->cdev.owner = THIS_MODULE; ret = cdev_device_add(&uverbs_dev->cdev, &uverbs_dev->dev); @@ -1373,7 +1337,7 @@ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data) cdev_device_del(&uverbs_dev->cdev, &uverbs_dev->dev); ida_free(&uverbs_ida, uverbs_dev->devnum); - if (device->disassociate_ucontext) { + if (device->ops.disassociate_ucontext) { /* We disassociate HW resources and immediately return. * Userspace will see a EIO errno for all future access. * Upon returning, ib_device may be freed internally and is not diff --git a/drivers/infiniband/core/uverbs_std_types.c b/drivers/infiniband/core/uverbs_std_types.c index 203cc96ac6f5..cbc72312eb41 100644 --- a/drivers/infiniband/core/uverbs_std_types.c +++ b/drivers/infiniband/core/uverbs_std_types.c @@ -42,7 +42,8 @@ static int uverbs_free_ah(struct ib_uobject *uobject, enum rdma_remove_reason why) { - return rdma_destroy_ah((struct ib_ah *)uobject->object); + return rdma_destroy_ah((struct ib_ah *)uobject->object, + RDMA_DESTROY_AH_SLEEPABLE); } static int uverbs_free_flow(struct ib_uobject *uobject, @@ -54,7 +55,7 @@ static int uverbs_free_flow(struct ib_uobject *uobject, struct ib_qp *qp = flow->qp; int ret; - ret = flow->device->destroy_flow(flow); + ret = flow->device->ops.destroy_flow(flow); if (!ret) { if (qp) atomic_dec(&qp->usecnt); @@ -210,8 +211,7 @@ static int uverbs_hot_unplug_completion_event_file(struct ib_uobject *uobj, return 0; }; -int uverbs_destroy_def_handler(struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +int uverbs_destroy_def_handler(struct uverbs_attr_bundle *attrs) { return 0; } @@ -229,58 +229,106 @@ DECLARE_UVERBS_NAMED_OBJECT( UVERBS_OBJECT_QP, UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_uqp_object), uverbs_free_qp)); +DECLARE_UVERBS_NAMED_METHOD_DESTROY( + UVERBS_METHOD_MW_DESTROY, + UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_MW_HANDLE, + UVERBS_OBJECT_MW, + UVERBS_ACCESS_DESTROY, + UA_MANDATORY)); + DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_MW, - UVERBS_TYPE_ALLOC_IDR(uverbs_free_mw)); + UVERBS_TYPE_ALLOC_IDR(uverbs_free_mw), + &UVERBS_METHOD(UVERBS_METHOD_MW_DESTROY)); DECLARE_UVERBS_NAMED_OBJECT( UVERBS_OBJECT_SRQ, UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_usrq_object), uverbs_free_srq)); +DECLARE_UVERBS_NAMED_METHOD_DESTROY( + UVERBS_METHOD_AH_DESTROY, + UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_AH_HANDLE, + UVERBS_OBJECT_AH, + UVERBS_ACCESS_DESTROY, + UA_MANDATORY)); + DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_AH, - UVERBS_TYPE_ALLOC_IDR(uverbs_free_ah)); + UVERBS_TYPE_ALLOC_IDR(uverbs_free_ah), + &UVERBS_METHOD(UVERBS_METHOD_AH_DESTROY)); + +DECLARE_UVERBS_NAMED_METHOD_DESTROY( + UVERBS_METHOD_FLOW_DESTROY, + UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_FLOW_HANDLE, + UVERBS_OBJECT_FLOW, + UVERBS_ACCESS_DESTROY, + UA_MANDATORY)); DECLARE_UVERBS_NAMED_OBJECT( UVERBS_OBJECT_FLOW, UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_uflow_object), - uverbs_free_flow)); + uverbs_free_flow), + &UVERBS_METHOD(UVERBS_METHOD_FLOW_DESTROY)); DECLARE_UVERBS_NAMED_OBJECT( UVERBS_OBJECT_WQ, UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_uwq_object), uverbs_free_wq)); +DECLARE_UVERBS_NAMED_METHOD_DESTROY( + UVERBS_METHOD_RWQ_IND_TBL_DESTROY, + UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_RWQ_IND_TBL_HANDLE, + UVERBS_OBJECT_RWQ_IND_TBL, + UVERBS_ACCESS_DESTROY, + UA_MANDATORY)); + DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_RWQ_IND_TBL, - UVERBS_TYPE_ALLOC_IDR(uverbs_free_rwq_ind_tbl)); + UVERBS_TYPE_ALLOC_IDR(uverbs_free_rwq_ind_tbl), + 
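+			    /*
+			     * All of these generic *_DESTROY methods bind
+			     * uverbs_destroy_def_handler(), which simply
+			     * returns 0: the destroy_bkey path in
+			     * ib_uverbs_run_method() destroys the object
+			     * around the handler call (see uobj_put_destroy()
+			     * earlier in this patch), so the real cleanup
+			     * happens in the per-type uverbs_free_*()
+			     * callbacks, not in the handler.
+			     */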
&UVERBS_METHOD(UVERBS_METHOD_RWQ_IND_TBL_DESTROY)); + +DECLARE_UVERBS_NAMED_METHOD_DESTROY( + UVERBS_METHOD_XRCD_DESTROY, + UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_XRCD_HANDLE, + UVERBS_OBJECT_XRCD, + UVERBS_ACCESS_DESTROY, + UA_MANDATORY)); DECLARE_UVERBS_NAMED_OBJECT( UVERBS_OBJECT_XRCD, UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_uxrcd_object), - uverbs_free_xrcd)); + uverbs_free_xrcd), + &UVERBS_METHOD(UVERBS_METHOD_XRCD_DESTROY)); + +DECLARE_UVERBS_NAMED_METHOD_DESTROY( + UVERBS_METHOD_PD_DESTROY, + UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_PD_HANDLE, + UVERBS_OBJECT_PD, + UVERBS_ACCESS_DESTROY, + UA_MANDATORY)); DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_PD, - UVERBS_TYPE_ALLOC_IDR(uverbs_free_pd)); - -DECLARE_UVERBS_GLOBAL_METHODS(UVERBS_OBJECT_DEVICE); - -DECLARE_UVERBS_OBJECT_TREE(uverbs_default_objects, - &UVERBS_OBJECT(UVERBS_OBJECT_DEVICE), - &UVERBS_OBJECT(UVERBS_OBJECT_PD), - &UVERBS_OBJECT(UVERBS_OBJECT_MR), - &UVERBS_OBJECT(UVERBS_OBJECT_COMP_CHANNEL), - &UVERBS_OBJECT(UVERBS_OBJECT_CQ), - &UVERBS_OBJECT(UVERBS_OBJECT_QP), - &UVERBS_OBJECT(UVERBS_OBJECT_AH), - &UVERBS_OBJECT(UVERBS_OBJECT_MW), - &UVERBS_OBJECT(UVERBS_OBJECT_SRQ), - &UVERBS_OBJECT(UVERBS_OBJECT_FLOW), - &UVERBS_OBJECT(UVERBS_OBJECT_WQ), - &UVERBS_OBJECT(UVERBS_OBJECT_RWQ_IND_TBL), - &UVERBS_OBJECT(UVERBS_OBJECT_XRCD), - &UVERBS_OBJECT(UVERBS_OBJECT_FLOW_ACTION), - &UVERBS_OBJECT(UVERBS_OBJECT_DM), - &UVERBS_OBJECT(UVERBS_OBJECT_COUNTERS)); - -const struct uverbs_object_tree_def *uverbs_default_get_objects(void) -{ - return &uverbs_default_objects; -} + UVERBS_TYPE_ALLOC_IDR(uverbs_free_pd), + &UVERBS_METHOD(UVERBS_METHOD_PD_DESTROY)); + +const struct uapi_definition uverbs_def_obj_intf[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_PD, + UAPI_DEF_OBJ_NEEDS_FN(dealloc_pd)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_COMP_CHANNEL, + UAPI_DEF_OBJ_NEEDS_FN(dealloc_pd)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_QP, + UAPI_DEF_OBJ_NEEDS_FN(destroy_qp)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_AH, + UAPI_DEF_OBJ_NEEDS_FN(destroy_ah)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_MW, + UAPI_DEF_OBJ_NEEDS_FN(dealloc_mw)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_SRQ, + UAPI_DEF_OBJ_NEEDS_FN(destroy_srq)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_FLOW, + UAPI_DEF_OBJ_NEEDS_FN(destroy_flow)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_WQ, + UAPI_DEF_OBJ_NEEDS_FN(destroy_wq)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED( + UVERBS_OBJECT_RWQ_IND_TBL, + UAPI_DEF_OBJ_NEEDS_FN(destroy_rwq_ind_table)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_XRCD, + UAPI_DEF_OBJ_NEEDS_FN(dealloc_xrcd)), + {} +}; diff --git a/drivers/infiniband/core/uverbs_std_types_counters.c b/drivers/infiniband/core/uverbs_std_types_counters.c index a0ffdcf9a51c..309c5e80988d 100644 --- a/drivers/infiniband/core/uverbs_std_types_counters.c +++ b/drivers/infiniband/core/uverbs_std_types_counters.c @@ -44,11 +44,11 @@ static int uverbs_free_counters(struct ib_uobject *uobject, if (ret) return ret; - return counters->device->destroy_counters(counters); + return counters->device->ops.destroy_counters(counters); } static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject( attrs, UVERBS_ATTR_CREATE_COUNTERS_HANDLE); @@ -61,10 +61,10 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)( * have the ability to remove methods from parse tree once * such condition is met. 
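 * (The uapi_definition machinery added by this patch can express such
 * requirements as UAPI_DEF_OBJ_NEEDS_FN()/UAPI_DEF_METHOD_NEEDS_FN()
 * and disable the method at registration time, as
 * uverbs_def_obj_counters below does for destroy_counters; open-coded
 * checks like this one remain until the defs cover them.)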
*/ - if (!ib_dev->create_counters) + if (!ib_dev->ops.create_counters) return -EOPNOTSUPP; - counters = ib_dev->create_counters(ib_dev, attrs); + counters = ib_dev->ops.create_counters(ib_dev, attrs); if (IS_ERR(counters)) { ret = PTR_ERR(counters); goto err_create_counters; @@ -82,7 +82,7 @@ err_create_counters: } static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_counters_read_attr read_attr = {}; const struct uverbs_attr *uattr; @@ -90,7 +90,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( uverbs_attr_get_obj(attrs, UVERBS_ATTR_READ_COUNTERS_HANDLE); int ret; - if (!counters->device->read_counters) + if (!counters->device->ops.read_counters) return -EOPNOTSUPP; if (!atomic_read(&counters->usecnt)) @@ -109,7 +109,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( if (IS_ERR(read_attr.counters_buff)) return PTR_ERR(read_attr.counters_buff); - ret = counters->device->read_counters(counters, &read_attr, attrs); + ret = counters->device->ops.read_counters(counters, &read_attr, attrs); if (ret) return ret; @@ -149,3 +149,9 @@ DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_COUNTERS, &UVERBS_METHOD(UVERBS_METHOD_COUNTERS_CREATE), &UVERBS_METHOD(UVERBS_METHOD_COUNTERS_DESTROY), &UVERBS_METHOD(UVERBS_METHOD_COUNTERS_READ)); + +const struct uapi_definition uverbs_def_obj_counters[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_COUNTERS, + UAPI_DEF_OBJ_NEEDS_FN(destroy_counters)), + {} +}; diff --git a/drivers/infiniband/core/uverbs_std_types_cq.c b/drivers/infiniband/core/uverbs_std_types_cq.c index 5b5f2052cd52..a59ea89e3f2b 100644 --- a/drivers/infiniband/core/uverbs_std_types_cq.c +++ b/drivers/infiniband/core/uverbs_std_types_cq.c @@ -58,13 +58,12 @@ static int uverbs_free_cq(struct ib_uobject *uobject, } static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_ucq_object *obj = container_of( uverbs_attr_get_uobject(attrs, UVERBS_ATTR_CREATE_CQ_HANDLE), typeof(*obj), uobject); struct ib_device *ib_dev = obj->uobject.context->device; - struct ib_udata uhw; int ret; u64 user_handle; struct ib_cq_init_attr attr = {}; @@ -72,7 +71,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( struct ib_uverbs_completion_event_file *ev_file = NULL; struct ib_uobject *ev_file_uobj; - if (!ib_dev->create_cq || !ib_dev->destroy_cq) + if (!ib_dev->ops.create_cq || !ib_dev->ops.destroy_cq) return -EOPNOTSUPP; ret = uverbs_copy_from(&attr.comp_vector, attrs, @@ -101,7 +100,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( uverbs_uobject_get(ev_file_uobj); } - if (attr.comp_vector >= file->device->num_comp_vectors) { + if (attr.comp_vector >= attrs->ufile->device->num_comp_vectors) { ret = -EINVAL; goto err_event_file; } @@ -111,10 +110,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( INIT_LIST_HEAD(&obj->comp_list); INIT_LIST_HEAD(&obj->async_list); - /* Temporary, only until drivers get the new uverbs_attr_bundle */ - create_udata(attrs, &uhw); - - cq = ib_dev->create_cq(ib_dev, &attr, obj->uobject.context, &uhw); + cq = ib_dev->ops.create_cq(ib_dev, &attr, obj->uobject.context, + &attrs->driver_udata); if (IS_ERR(cq)) { ret = PTR_ERR(cq); goto err_event_file; @@ -129,7 +126,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( obj->uobject.user_handle = user_handle; atomic_set(&cq->usecnt, 0); cq->res.type = RDMA_RESTRACK_CQ; - 
rdma_restrack_add(&cq->res); + rdma_restrack_uadd(&cq->res); ret = uverbs_copy_to(attrs, UVERBS_ATTR_CREATE_CQ_RESP_CQE, &cq->cqe, sizeof(cq->cqe)); @@ -173,7 +170,7 @@ DECLARE_UVERBS_NAMED_METHOD( UVERBS_ATTR_UHW()); static int UVERBS_HANDLER(UVERBS_METHOD_CQ_DESTROY)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, UVERBS_ATTR_DESTROY_CQ_HANDLE); @@ -207,3 +204,9 @@ DECLARE_UVERBS_NAMED_OBJECT( &UVERBS_METHOD(UVERBS_METHOD_CQ_DESTROY) #endif ); + +const struct uapi_definition uverbs_def_obj_cq[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_CQ, + UAPI_DEF_OBJ_NEEDS_FN(destroy_cq)), + {} +}; diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c new file mode 100644 index 000000000000..5030ec480370 --- /dev/null +++ b/drivers/infiniband/core/uverbs_std_types_device.c @@ -0,0 +1,224 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2018, Mellanox Technologies inc. All rights reserved. + */ + +#include +#include "rdma_core.h" +#include "uverbs.h" +#include +#include + +/* + * This ioctl method allows calling any defined write or write_ex + * handler. This essentially replaces the hdr/ex_hdr system with the ioctl + * marshalling, and brings the non-ex path into the same marshalling as the ex + * path. + */ +static int UVERBS_HANDLER(UVERBS_METHOD_INVOKE_WRITE)( + struct uverbs_attr_bundle *attrs) +{ + struct uverbs_api *uapi = attrs->ufile->device->uapi; + const struct uverbs_api_write_method *method_elm; + u32 cmd; + int rc; + + rc = uverbs_get_const(&cmd, attrs, UVERBS_ATTR_WRITE_CMD); + if (rc) + return rc; + + method_elm = uapi_get_method(uapi, cmd); + if (IS_ERR(method_elm)) + return PTR_ERR(method_elm); + + uverbs_fill_udata(attrs, &attrs->ucore, UVERBS_ATTR_CORE_IN, + UVERBS_ATTR_CORE_OUT); + + if (attrs->ucore.inlen < method_elm->req_size || + attrs->ucore.outlen < method_elm->resp_size) + return -ENOSPC; + + return method_elm->handler(attrs); +} + +DECLARE_UVERBS_NAMED_METHOD(UVERBS_METHOD_INVOKE_WRITE, + UVERBS_ATTR_CONST_IN(UVERBS_ATTR_WRITE_CMD, + enum ib_uverbs_write_cmds, + UA_MANDATORY), + UVERBS_ATTR_PTR_IN(UVERBS_ATTR_CORE_IN, + UVERBS_ATTR_MIN_SIZE(sizeof(u32)), + UA_OPTIONAL), + UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_CORE_OUT, + UVERBS_ATTR_MIN_SIZE(0), + UA_OPTIONAL), + UVERBS_ATTR_UHW()); + +static uint32_t * +gather_objects_handle(struct ib_uverbs_file *ufile, + const struct uverbs_api_object *uapi_object, + struct uverbs_attr_bundle *attrs, + ssize_t out_len, + u64 *total) +{ + u64 max_count = out_len / sizeof(u32); + struct ib_uobject *obj; + u64 count = 0; + u32 *handles; + + /* Allocated memory that cannot page out where we gather + * all object ids under a spin_lock. 
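+ * The buffer comes from uverbs_zalloc() on the bundle, i.e. ordinary
+ * kernel memory obtained before the lock is taken, so filling it
+ * cannot fault while uobjects_lock is held.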
+ */ + handles = uverbs_zalloc(attrs, out_len); + if (IS_ERR(handles)) + return handles; + + spin_lock_irq(&ufile->uobjects_lock); + list_for_each_entry(obj, &ufile->uobjects, list) { + u32 obj_id = obj->id; + + if (obj->uapi_object != uapi_object) + continue; + + if (count >= max_count) + break; + + handles[count] = obj_id; + count++; + } + spin_unlock_irq(&ufile->uobjects_lock); + + *total = count; + return handles; +} + +static int UVERBS_HANDLER(UVERBS_METHOD_INFO_HANDLES)( + struct uverbs_attr_bundle *attrs) +{ + const struct uverbs_api_object *uapi_object; + ssize_t out_len; + u64 total = 0; + u16 object_id; + u32 *handles; + int ret; + + out_len = uverbs_attr_get_len(attrs, UVERBS_ATTR_INFO_HANDLES_LIST); + if (out_len <= 0 || (out_len % sizeof(u32) != 0)) + return -EINVAL; + + ret = uverbs_get_const(&object_id, attrs, UVERBS_ATTR_INFO_OBJECT_ID); + if (ret) + return ret; + + uapi_object = uapi_get_object(attrs->ufile->device->uapi, object_id); + if (!uapi_object) + return -EINVAL; + + handles = gather_objects_handle(attrs->ufile, uapi_object, attrs, + out_len, &total); + if (IS_ERR(handles)) + return PTR_ERR(handles); + + ret = uverbs_copy_to(attrs, UVERBS_ATTR_INFO_HANDLES_LIST, handles, + sizeof(u32) * total); + if (ret) + goto err; + + ret = uverbs_copy_to(attrs, UVERBS_ATTR_INFO_TOTAL_HANDLES, &total, + sizeof(total)); +err: + return ret; +} + +void copy_port_attr_to_resp(struct ib_port_attr *attr, + struct ib_uverbs_query_port_resp *resp, + struct ib_device *ib_dev, u8 port_num) +{ + resp->state = attr->state; + resp->max_mtu = attr->max_mtu; + resp->active_mtu = attr->active_mtu; + resp->gid_tbl_len = attr->gid_tbl_len; + resp->port_cap_flags = make_port_cap_flags(attr); + resp->max_msg_sz = attr->max_msg_sz; + resp->bad_pkey_cntr = attr->bad_pkey_cntr; + resp->qkey_viol_cntr = attr->qkey_viol_cntr; + resp->pkey_tbl_len = attr->pkey_tbl_len; + + if (rdma_is_grh_required(ib_dev, port_num)) + resp->flags |= IB_UVERBS_QPF_GRH_REQUIRED; + + if (rdma_cap_opa_ah(ib_dev, port_num)) { + resp->lid = OPA_TO_IB_UCAST_LID(attr->lid); + resp->sm_lid = OPA_TO_IB_UCAST_LID(attr->sm_lid); + } else { + resp->lid = ib_lid_cpu16(attr->lid); + resp->sm_lid = ib_lid_cpu16(attr->sm_lid); + } + + resp->lmc = attr->lmc; + resp->max_vl_num = attr->max_vl_num; + resp->sm_sl = attr->sm_sl; + resp->subnet_timeout = attr->subnet_timeout; + resp->init_type_reply = attr->init_type_reply; + resp->active_width = attr->active_width; + resp->active_speed = attr->active_speed; + resp->phys_state = attr->phys_state; + resp->link_layer = rdma_port_get_link_layer(ib_dev, port_num); +} + +static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_PORT)( + struct uverbs_attr_bundle *attrs) +{ + struct ib_device *ib_dev = attrs->ufile->device->ib_dev; + struct ib_port_attr attr = {}; + struct ib_uverbs_query_port_resp_ex resp = {}; + int ret; + u8 port_num; + + /* FIXME: Extend the UAPI_DEF_OBJ_NEEDS_FN stuff.. 
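+	 * today that would mean teaching the defs to express this
+	 * ops.query_port requirement so the method is disabled at
+	 * registration instead of failing here at run time.  Separately,
+	 * the uverbs_copy_to_struct_or_zero() at the end pairs with the
+	 * UVERBS_ATTR_STRUCT() response attribute: the whole user buffer
+	 * is cleared before the copy, so a userspace that passed a larger
+	 * response struct reads zeros, not stale data, in the tail.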
*/ + if (!ib_dev->ops.query_port) + return -EOPNOTSUPP; + + ret = uverbs_get_const(&port_num, attrs, + UVERBS_ATTR_QUERY_PORT_PORT_NUM); + if (ret) + return ret; + + ret = ib_query_port(ib_dev, port_num, &attr); + if (ret) + return ret; + + copy_port_attr_to_resp(&attr, &resp.legacy_resp, ib_dev, port_num); + resp.port_cap_flags2 = attr.port_cap_flags2; + + return uverbs_copy_to_struct_or_zero(attrs, UVERBS_ATTR_QUERY_PORT_RESP, + &resp, sizeof(resp)); +} + +DECLARE_UVERBS_NAMED_METHOD( + UVERBS_METHOD_INFO_HANDLES, + /* Also includes any device specific object ids */ + UVERBS_ATTR_CONST_IN(UVERBS_ATTR_INFO_OBJECT_ID, + enum uverbs_default_objects, UA_MANDATORY), + UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_INFO_TOTAL_HANDLES, + UVERBS_ATTR_TYPE(u32), UA_OPTIONAL), + UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_INFO_HANDLES_LIST, + UVERBS_ATTR_MIN_SIZE(sizeof(u32)), UA_OPTIONAL)); + +DECLARE_UVERBS_NAMED_METHOD( + UVERBS_METHOD_QUERY_PORT, + UVERBS_ATTR_CONST_IN(UVERBS_ATTR_QUERY_PORT_PORT_NUM, u8, UA_MANDATORY), + UVERBS_ATTR_PTR_OUT( + UVERBS_ATTR_QUERY_PORT_RESP, + UVERBS_ATTR_STRUCT(struct ib_uverbs_query_port_resp_ex, + reserved), + UA_MANDATORY)); + +DECLARE_UVERBS_GLOBAL_METHODS(UVERBS_OBJECT_DEVICE, + &UVERBS_METHOD(UVERBS_METHOD_INVOKE_WRITE), + &UVERBS_METHOD(UVERBS_METHOD_INFO_HANDLES), + &UVERBS_METHOD(UVERBS_METHOD_QUERY_PORT)); + +const struct uapi_definition uverbs_def_obj_device[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_DEVICE), + {}, +}; diff --git a/drivers/infiniband/core/uverbs_std_types_dm.c b/drivers/infiniband/core/uverbs_std_types_dm.c index edc3ff7733d4..2ef70637bee1 100644 --- a/drivers/infiniband/core/uverbs_std_types_dm.c +++ b/drivers/infiniband/core/uverbs_std_types_dm.c @@ -43,12 +43,11 @@ static int uverbs_free_dm(struct ib_uobject *uobject, if (ret) return ret; - return dm->device->dealloc_dm(dm); + return dm->device->ops.dealloc_dm(dm); } -static int -UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)( + struct uverbs_attr_bundle *attrs) { struct ib_dm_alloc_attr attr = {}; struct ib_uobject *uobj = @@ -58,7 +57,7 @@ UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_uverbs_file *file, struct ib_dm *dm; int ret; - if (!ib_dev->alloc_dm) + if (!ib_dev->ops.alloc_dm) return -EOPNOTSUPP; ret = uverbs_copy_from(&attr.length, attrs, @@ -71,7 +70,7 @@ UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_uverbs_file *file, if (ret) return ret; - dm = ib_dev->alloc_dm(ib_dev, uobj->context, &attr, attrs); + dm = ib_dev->ops.alloc_dm(ib_dev, uobj->context, &attr, attrs); if (IS_ERR(dm)) return PTR_ERR(dm); @@ -109,3 +108,9 @@ DECLARE_UVERBS_NAMED_OBJECT(UVERBS_OBJECT_DM, UVERBS_TYPE_ALLOC_IDR(uverbs_free_dm), &UVERBS_METHOD(UVERBS_METHOD_DM_ALLOC), &UVERBS_METHOD(UVERBS_METHOD_DM_FREE)); + +const struct uapi_definition uverbs_def_obj_dm[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_DM, + UAPI_DEF_OBJ_NEEDS_FN(dealloc_dm)), + {} +}; diff --git a/drivers/infiniband/core/uverbs_std_types_flow_action.c b/drivers/infiniband/core/uverbs_std_types_flow_action.c index cb9486ad5c67..4962b87fa600 100644 --- a/drivers/infiniband/core/uverbs_std_types_flow_action.c +++ b/drivers/infiniband/core/uverbs_std_types_flow_action.c @@ -43,7 +43,7 @@ static int uverbs_free_flow_action(struct ib_uobject *uobject, if (ret) return ret; - return action->device->destroy_flow_action(action); + return action->device->ops.destroy_flow_action(action); } static u64 esp_flags_uverbs_to_verbs(struct 
uverbs_attr_bundle *attrs, @@ -223,7 +223,6 @@ struct ib_flow_action_esp_attr { #define ESP_LAST_SUPPORTED_FLAG IB_UVERBS_FLOW_ACTION_ESP_FLAGS_ESN_NEW_WINDOW static int parse_flow_action_esp(struct ib_device *ib_dev, - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs, struct ib_flow_action_esp_attr *esp_attr, bool is_modify) @@ -305,7 +304,7 @@ static int parse_flow_action_esp(struct ib_device *ib_dev, } static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject( attrs, UVERBS_ATTR_CREATE_FLOW_ACTION_ESP_HANDLE); @@ -314,15 +313,16 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)( struct ib_flow_action *action; struct ib_flow_action_esp_attr esp_attr = {}; - if (!ib_dev->create_flow_action_esp) + if (!ib_dev->ops.create_flow_action_esp) return -EOPNOTSUPP; - ret = parse_flow_action_esp(ib_dev, file, attrs, &esp_attr, false); + ret = parse_flow_action_esp(ib_dev, attrs, &esp_attr, false); if (ret) return ret; /* No need to check as this attribute is marked as MANDATORY */ - action = ib_dev->create_flow_action_esp(ib_dev, &esp_attr.hdr, attrs); + action = ib_dev->ops.create_flow_action_esp(ib_dev, &esp_attr.hdr, + attrs); if (IS_ERR(action)) return PTR_ERR(action); @@ -333,7 +333,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)( } static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject( attrs, UVERBS_ATTR_MODIFY_FLOW_ACTION_ESP_HANDLE); @@ -341,19 +341,19 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)( int ret; struct ib_flow_action_esp_attr esp_attr = {}; - if (!action->device->modify_flow_action_esp) + if (!action->device->ops.modify_flow_action_esp) return -EOPNOTSUPP; - ret = parse_flow_action_esp(action->device, file, attrs, &esp_attr, - true); + ret = parse_flow_action_esp(action->device, attrs, &esp_attr, true); if (ret) return ret; if (action->type != IB_FLOW_ACTION_ESP) return -EINVAL; - return action->device->modify_flow_action_esp(action, &esp_attr.hdr, - attrs); + return action->device->ops.modify_flow_action_esp(action, + &esp_attr.hdr, + attrs); } static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = { @@ -438,3 +438,10 @@ DECLARE_UVERBS_NAMED_OBJECT( &UVERBS_METHOD(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE), &UVERBS_METHOD(UVERBS_METHOD_FLOW_ACTION_DESTROY), &UVERBS_METHOD(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)); + +const struct uapi_definition uverbs_def_obj_flow_action[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED( + UVERBS_OBJECT_FLOW_ACTION, + UAPI_DEF_OBJ_NEEDS_FN(destroy_flow_action)), + {} +}; diff --git a/drivers/infiniband/core/uverbs_std_types_mr.c b/drivers/infiniband/core/uverbs_std_types_mr.c index cf02e774303e..4d4be0c2b752 100644 --- a/drivers/infiniband/core/uverbs_std_types_mr.c +++ b/drivers/infiniband/core/uverbs_std_types_mr.c @@ -39,8 +39,44 @@ static int uverbs_free_mr(struct ib_uobject *uobject, return ib_dereg_mr((struct ib_mr *)uobject->object); } +static int UVERBS_HANDLER(UVERBS_METHOD_ADVISE_MR)( + struct uverbs_attr_bundle *attrs) +{ + struct ib_pd *pd = + uverbs_attr_get_obj(attrs, UVERBS_ATTR_ADVISE_MR_PD_HANDLE); + enum ib_uverbs_advise_mr_advice advice; + struct ib_device *ib_dev = pd->device; + struct ib_sge *sg_list; + int num_sge; + u32 flags; + int 
ret; + + /* FIXME: Extend the UAPI_DEF_OBJ_NEEDS_FN stuff.. */ + if (!ib_dev->ops.advise_mr) + return -EOPNOTSUPP; + + ret = uverbs_get_const(&advice, attrs, UVERBS_ATTR_ADVISE_MR_ADVICE); + if (ret) + return ret; + + ret = uverbs_get_flags32(&flags, attrs, UVERBS_ATTR_ADVISE_MR_FLAGS, + IB_UVERBS_ADVISE_MR_FLAG_FLUSH); + if (ret) + return ret; + + num_sge = uverbs_attr_ptr_get_array_size( + attrs, UVERBS_ATTR_ADVISE_MR_SGE_LIST, sizeof(struct ib_sge)); + if (num_sge < 0) + return num_sge; + + sg_list = uverbs_attr_get_alloced_ptr(attrs, + UVERBS_ATTR_ADVISE_MR_SGE_LIST); + return ib_dev->ops.advise_mr(pd, advice, flags, sg_list, num_sge, + attrs); +} + static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_dm_mr_attr attr = {}; struct ib_uobject *uobj = @@ -54,7 +90,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)( struct ib_mr *mr; int ret; - if (!ib_dev->reg_dm_mr) + if (!ib_dev->ops.reg_dm_mr) return -EOPNOTSUPP; ret = uverbs_copy_from(&attr.offset, attrs, UVERBS_ATTR_REG_DM_MR_OFFSET); @@ -83,7 +119,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)( attr.length > dm->length - attr.offset) return -EINVAL; - mr = pd->device->reg_dm_mr(pd, dm, &attr, attrs); + mr = pd->device->ops.reg_dm_mr(pd, dm, &attr, attrs); if (IS_ERR(mr)) return PTR_ERR(mr); @@ -114,6 +150,23 @@ err_dereg: return ret; } +DECLARE_UVERBS_NAMED_METHOD( + UVERBS_METHOD_ADVISE_MR, + UVERBS_ATTR_IDR(UVERBS_ATTR_ADVISE_MR_PD_HANDLE, + UVERBS_OBJECT_PD, + UVERBS_ACCESS_READ, + UA_MANDATORY), + UVERBS_ATTR_CONST_IN(UVERBS_ATTR_ADVISE_MR_ADVICE, + enum ib_uverbs_advise_mr_advice, + UA_MANDATORY), + UVERBS_ATTR_FLAGS_IN(UVERBS_ATTR_ADVISE_MR_FLAGS, + enum ib_uverbs_advise_mr_flag, + UA_MANDATORY), + UVERBS_ATTR_PTR_IN(UVERBS_ATTR_ADVISE_MR_SGE_LIST, + UVERBS_ATTR_MIN_SIZE(sizeof(struct ib_uverbs_sge)), + UA_MANDATORY, + UA_ALLOC_AND_COPY)); + DECLARE_UVERBS_NAMED_METHOD( UVERBS_METHOD_DM_MR_REG, UVERBS_ATTR_IDR(UVERBS_ATTR_REG_DM_MR_HANDLE, @@ -143,7 +196,22 @@ DECLARE_UVERBS_NAMED_METHOD( UVERBS_ATTR_TYPE(u32), UA_MANDATORY)); +DECLARE_UVERBS_NAMED_METHOD_DESTROY( + UVERBS_METHOD_MR_DESTROY, + UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_MR_HANDLE, + UVERBS_OBJECT_MR, + UVERBS_ACCESS_DESTROY, + UA_MANDATORY)); + DECLARE_UVERBS_NAMED_OBJECT( UVERBS_OBJECT_MR, UVERBS_TYPE_ALLOC_IDR(uverbs_free_mr), - &UVERBS_METHOD(UVERBS_METHOD_DM_MR_REG)); + &UVERBS_METHOD(UVERBS_METHOD_DM_MR_REG), + &UVERBS_METHOD(UVERBS_METHOD_MR_DESTROY), + &UVERBS_METHOD(UVERBS_METHOD_ADVISE_MR)); + +const struct uapi_definition uverbs_def_obj_mr[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(UVERBS_OBJECT_MR, + UAPI_DEF_OBJ_NEEDS_FN(dereg_mr)), + {} +}; diff --git a/drivers/infiniband/core/uverbs_uapi.c b/drivers/infiniband/core/uverbs_uapi.c index 86f3fc5e04b4..9ae08e4b78a3 100644 --- a/drivers/infiniband/core/uverbs_uapi.c +++ b/drivers/infiniband/core/uverbs_uapi.c @@ -8,6 +8,11 @@ #include "rdma_core.h" #include "uverbs.h" +static int ib_uverbs_notsupp(struct uverbs_attr_bundle *attrs) +{ + return -EOPNOTSUPP; +} + static void *uapi_add_elm(struct uverbs_api *uapi, u32 key, size_t alloc_size) { void *elm; @@ -26,6 +31,70 @@ static void *uapi_add_elm(struct uverbs_api *uapi, u32 key, size_t alloc_size) return elm; } +static void *uapi_add_get_elm(struct uverbs_api *uapi, u32 key, + size_t alloc_size, bool *exists) +{ + void *elm; + + elm = uapi_add_elm(uapi, key, alloc_size); + if (!IS_ERR(elm)) { + *exists = false; + return elm; + } + + if (elm 
!= ERR_PTR(-EEXIST)) + return elm; + + elm = radix_tree_lookup(&uapi->radix, key); + if (WARN_ON(!elm)) + return ERR_PTR(-EINVAL); + *exists = true; + return elm; +} + +static int uapi_create_write(struct uverbs_api *uapi, + struct ib_device *ibdev, + const struct uapi_definition *def, + u32 obj_key, + u32 *cur_method_key) +{ + struct uverbs_api_write_method *method_elm; + u32 method_key = obj_key; + bool exists; + + if (def->write.is_ex) + method_key |= uapi_key_write_ex_method(def->write.command_num); + else + method_key |= uapi_key_write_method(def->write.command_num); + + method_elm = uapi_add_get_elm(uapi, method_key, sizeof(*method_elm), + &exists); + if (IS_ERR(method_elm)) + return PTR_ERR(method_elm); + + if (WARN_ON(exists && (def->write.is_ex != method_elm->is_ex))) + return -EINVAL; + + method_elm->is_ex = def->write.is_ex; + method_elm->handler = def->func_write; + if (def->write.is_ex) + method_elm->disabled = !(ibdev->uverbs_ex_cmd_mask & + BIT_ULL(def->write.command_num)); + else + method_elm->disabled = !(ibdev->uverbs_cmd_mask & + BIT_ULL(def->write.command_num)); + + if (!def->write.is_ex && def->func_write) { + method_elm->has_udata = def->write.has_udata; + method_elm->has_resp = def->write.has_resp; + method_elm->req_size = def->write.req_size; + method_elm->resp_size = def->write.resp_size; + } + + *cur_method_key = method_key; + return 0; +} + static int uapi_merge_method(struct uverbs_api *uapi, struct uverbs_api_object *obj_elm, u32 obj_key, const struct uverbs_method_def *method, @@ -34,23 +103,21 @@ static int uapi_merge_method(struct uverbs_api *uapi, u32 method_key = obj_key | uapi_key_ioctl_method(method->id); struct uverbs_api_ioctl_method *method_elm; unsigned int i; + bool exists; if (!method->attrs) return 0; - method_elm = uapi_add_elm(uapi, method_key, sizeof(*method_elm)); - if (IS_ERR(method_elm)) { - if (method_elm != ERR_PTR(-EEXIST)) - return PTR_ERR(method_elm); - + method_elm = uapi_add_get_elm(uapi, method_key, sizeof(*method_elm), + &exists); + if (IS_ERR(method_elm)) + return PTR_ERR(method_elm); + if (exists) { /* * This occurs when a driver uses ADD_UVERBS_ATTRIBUTES_SIMPLE */ if (WARN_ON(method->handler)) return -EINVAL; - method_elm = radix_tree_lookup(&uapi->radix, method_key); - if (WARN_ON(!method_elm)) - return -EINVAL; } else { WARN_ON(!method->handler); rcu_assign_pointer(method_elm->handler, method->handler); @@ -98,72 +165,183 @@ static int uapi_merge_method(struct uverbs_api *uapi, return 0; } -static int uapi_merge_tree(struct uverbs_api *uapi, - const struct uverbs_object_tree_def *tree, - bool is_driver) +static int uapi_merge_obj_tree(struct uverbs_api *uapi, + const struct uverbs_object_def *obj, + bool is_driver) { - unsigned int i, j; + struct uverbs_api_object *obj_elm; + unsigned int i; + u32 obj_key; + bool exists; int rc; - if (!tree->objects) + obj_key = uapi_key_obj(obj->id); + obj_elm = uapi_add_get_elm(uapi, obj_key, sizeof(*obj_elm), &exists); + if (IS_ERR(obj_elm)) + return PTR_ERR(obj_elm); + + if (obj->type_attrs) { + if (WARN_ON(obj_elm->type_attrs)) + return -EINVAL; + + obj_elm->id = obj->id; + obj_elm->type_attrs = obj->type_attrs; + obj_elm->type_class = obj->type_attrs->type_class; + /* + * Today drivers are only permitted to use idr_class + * types. They cannot use FD types because we currently have + * no way to revoke the fops pointer after device + * disassociation. 
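+	 * (An FD type hands its file_operations out inside a struct file
+	 * that userspace can keep open across module unload, while an idr
+	 * uobject can always be destroyed from the core, hence the
+	 * restriction.)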
+ */ + if (WARN_ON(is_driver && + obj->type_attrs->type_class != &uverbs_idr_class)) + return -EINVAL; + } + + if (!obj->methods) return 0; - for (i = 0; i != tree->num_objects; i++) { - const struct uverbs_object_def *obj = (*tree->objects)[i]; - struct uverbs_api_object *obj_elm; - u32 obj_key; + for (i = 0; i != obj->num_methods; i++) { + const struct uverbs_method_def *method = (*obj->methods)[i]; - if (!obj) + if (!method) continue; - obj_key = uapi_key_obj(obj->id); - obj_elm = uapi_add_elm(uapi, obj_key, sizeof(*obj_elm)); - if (IS_ERR(obj_elm)) { - if (obj_elm != ERR_PTR(-EEXIST)) - return PTR_ERR(obj_elm); + rc = uapi_merge_method(uapi, obj_elm, obj_key, method, + is_driver); + if (rc) + return rc; + } - /* This occurs when a driver uses ADD_UVERBS_METHODS */ - if (WARN_ON(obj->type_attrs)) - return -EINVAL; - obj_elm = radix_tree_lookup(&uapi->radix, obj_key); - if (WARN_ON(!obj_elm)) + return 0; +} + +static int uapi_disable_elm(struct uverbs_api *uapi, + const struct uapi_definition *def, + u32 obj_key, + u32 method_key) +{ + bool exists; + + if (def->scope == UAPI_SCOPE_OBJECT) { + struct uverbs_api_object *obj_elm; + + obj_elm = uapi_add_get_elm( + uapi, obj_key, sizeof(*obj_elm), &exists); + if (IS_ERR(obj_elm)) + return PTR_ERR(obj_elm); + obj_elm->disabled = 1; + return 0; + } + + if (def->scope == UAPI_SCOPE_METHOD && + uapi_key_is_ioctl_method(method_key)) { + struct uverbs_api_ioctl_method *method_elm; + + method_elm = uapi_add_get_elm(uapi, method_key, + sizeof(*method_elm), &exists); + if (IS_ERR(method_elm)) + return PTR_ERR(method_elm); + method_elm->disabled = 1; + return 0; + } + + if (def->scope == UAPI_SCOPE_METHOD && + (uapi_key_is_write_method(method_key) || + uapi_key_is_write_ex_method(method_key))) { + struct uverbs_api_write_method *write_elm; + + write_elm = uapi_add_get_elm(uapi, method_key, + sizeof(*write_elm), &exists); + if (IS_ERR(write_elm)) + return PTR_ERR(write_elm); + write_elm->disabled = 1; + return 0; + } + + WARN_ON(true); + return -EINVAL; +} + +static int uapi_merge_def(struct uverbs_api *uapi, struct ib_device *ibdev, + const struct uapi_definition *def_list, + bool is_driver) +{ + const struct uapi_definition *def = def_list; + u32 cur_obj_key = UVERBS_API_KEY_ERR; + u32 cur_method_key = UVERBS_API_KEY_ERR; + bool exists; + int rc; + + if (!def_list) + return 0; + + for (;; def++) { + switch ((enum uapi_definition_kind)def->kind) { + case UAPI_DEF_CHAIN: + rc = uapi_merge_def(uapi, ibdev, def->chain, is_driver); + if (rc) + return rc; + continue; + + case UAPI_DEF_CHAIN_OBJ_TREE: + if (WARN_ON(def->object_start.object_id != + def->chain_obj_tree->id)) return -EINVAL; - } else { - obj_elm->type_attrs = obj->type_attrs; - if (obj->type_attrs) { - obj_elm->type_class = - obj->type_attrs->type_class; - /* - * Today drivers are only permitted to use - * idr_class types. They cannot use FD types - * because we currently have no way to revoke - * the fops pointer after device - * disassociation. 
- */ - if (WARN_ON(is_driver && - obj->type_attrs->type_class != - &uverbs_idr_class)) - return -EINVAL; - } - } - if (!obj->methods) + cur_obj_key = uapi_key_obj(def->object_start.object_id); + rc = uapi_merge_obj_tree(uapi, def->chain_obj_tree, + is_driver); + if (rc) + return rc; continue; - for (j = 0; j != obj->num_methods; j++) { - const struct uverbs_method_def *method = - (*obj->methods)[j]; - if (!method) + case UAPI_DEF_END: + return 0; + + case UAPI_DEF_IS_SUPPORTED_DEV_FN: { + void **ibdev_fn = + (void *)(&ibdev->ops) + def->needs_fn_offset; + + if (*ibdev_fn) continue; + rc = uapi_disable_elm( + uapi, def, cur_obj_key, cur_method_key); + if (rc) + return rc; + continue; + } - rc = uapi_merge_method(uapi, obj_elm, obj_key, method, - is_driver); + case UAPI_DEF_IS_SUPPORTED_FUNC: + if (def->func_is_supported(ibdev)) + continue; + rc = uapi_disable_elm( + uapi, def, cur_obj_key, cur_method_key); if (rc) return rc; + continue; + + case UAPI_DEF_OBJECT_START: { + struct uverbs_api_object *obj_elm; + + cur_obj_key = uapi_key_obj(def->object_start.object_id); + obj_elm = uapi_add_get_elm(uapi, cur_obj_key, + sizeof(*obj_elm), &exists); + if (IS_ERR(obj_elm)) + return PTR_ERR(obj_elm); + continue; } - } - return 0; + case UAPI_DEF_WRITE: + rc = uapi_create_write( + uapi, ibdev, def, cur_obj_key, &cur_method_key); + if (rc) + return rc; + continue; + } + WARN_ON(true); + return -EINVAL; + } } static int @@ -186,13 +364,16 @@ uapi_finalize_ioctl_method(struct uverbs_api *uapi, u32 attr_bkey = uapi_bkey_attr(attr_key); u8 type = elm->spec.type; - if (uapi_key_attr_to_method(iter.index) != - uapi_key_attr_to_method(method_key)) + if (uapi_key_attr_to_ioctl_method(iter.index) != + uapi_key_attr_to_ioctl_method(method_key)) break; if (elm->spec.mandatory) __set_bit(attr_bkey, method_elm->attr_mandatory); + if (elm->spec.is_udata) + method_elm->has_udata = true; + if (type == UVERBS_ATTR_TYPE_IDR || type == UVERBS_ATTR_TYPE_FD) { u8 access = elm->spec.u.obj.access; @@ -229,9 +410,13 @@ uapi_finalize_ioctl_method(struct uverbs_api *uapi, static int uapi_finalize(struct uverbs_api *uapi) { + const struct uverbs_api_write_method **data; + unsigned long max_write_ex = 0; + unsigned long max_write = 0; struct radix_tree_iter iter; void __rcu **slot; int rc; + int i; radix_tree_for_each_slot (slot, &uapi->radix, &iter, 0) { struct uverbs_api_ioctl_method *method_elm = @@ -243,29 +428,209 @@ static int uapi_finalize(struct uverbs_api *uapi) if (rc) return rc; } + + if (uapi_key_is_write_method(iter.index)) + max_write = max(max_write, + iter.index & UVERBS_API_ATTR_KEY_MASK); + if (uapi_key_is_write_ex_method(iter.index)) + max_write_ex = + max(max_write_ex, + iter.index & UVERBS_API_ATTR_KEY_MASK); + } + + uapi->notsupp_method.handler = ib_uverbs_notsupp; + uapi->num_write = max_write + 1; + uapi->num_write_ex = max_write_ex + 1; + data = kmalloc_array(uapi->num_write + uapi->num_write_ex, + sizeof(*uapi->write_methods), GFP_KERNEL); + for (i = 0; i != uapi->num_write + uapi->num_write_ex; i++) + data[i] = &uapi->notsupp_method; + uapi->write_methods = data; + uapi->write_ex_methods = data + uapi->num_write; + + radix_tree_for_each_slot (slot, &uapi->radix, &iter, 0) { + if (uapi_key_is_write_method(iter.index)) + uapi->write_methods[iter.index & + UVERBS_API_ATTR_KEY_MASK] = + rcu_dereference_protected(*slot, true); + if (uapi_key_is_write_ex_method(iter.index)) + uapi->write_ex_methods[iter.index & + UVERBS_API_ATTR_KEY_MASK] = + rcu_dereference_protected(*slot, true); } return 0; } -void 
uverbs_destroy_api(struct uverbs_api *uapi) +static void uapi_remove_range(struct uverbs_api *uapi, u32 start, u32 last) { struct radix_tree_iter iter; void __rcu **slot; - if (!uapi) - return; - - radix_tree_for_each_slot (slot, &uapi->radix, &iter, 0) { + radix_tree_for_each_slot (slot, &uapi->radix, &iter, start) { + if (iter.index > last) + return; kfree(rcu_dereference_protected(*slot, true)); radix_tree_iter_delete(&uapi->radix, &iter, slot); } +} + +static void uapi_remove_object(struct uverbs_api *uapi, u32 obj_key) +{ + uapi_remove_range(uapi, obj_key, + obj_key | UVERBS_API_METHOD_KEY_MASK | + UVERBS_API_ATTR_KEY_MASK); +} + +static void uapi_remove_method(struct uverbs_api *uapi, u32 method_key) +{ + uapi_remove_range(uapi, method_key, + method_key | UVERBS_API_ATTR_KEY_MASK); +} + + +static u32 uapi_get_obj_id(struct uverbs_attr_spec *spec) +{ + if (spec->type == UVERBS_ATTR_TYPE_IDR || + spec->type == UVERBS_ATTR_TYPE_FD) + return spec->u.obj.obj_type; + if (spec->type == UVERBS_ATTR_TYPE_IDRS_ARRAY) + return spec->u2.objs_arr.obj_type; + return UVERBS_API_KEY_ERR; +} + +static void uapi_key_okay(u32 key) +{ + unsigned int count = 0; + + if (uapi_key_is_object(key)) + count++; + if (uapi_key_is_ioctl_method(key)) + count++; + if (uapi_key_is_write_method(key)) + count++; + if (uapi_key_is_write_ex_method(key)) + count++; + if (uapi_key_is_attr(key)) + count++; + WARN(count != 1, "Bad count %d key=%x", count, key); +} + +static void uapi_finalize_disable(struct uverbs_api *uapi) +{ + struct radix_tree_iter iter; + u32 starting_key = 0; + bool scan_again = false; + void __rcu **slot; + +again: + radix_tree_for_each_slot (slot, &uapi->radix, &iter, starting_key) { + uapi_key_okay(iter.index); + + if (uapi_key_is_object(iter.index)) { + struct uverbs_api_object *obj_elm = + rcu_dereference_protected(*slot, true); + + if (obj_elm->disabled) { + /* Have to check all the attrs again */ + scan_again = true; + starting_key = iter.index; + uapi_remove_object(uapi, iter.index); + goto again; + } + continue; + } + + if (uapi_key_is_ioctl_method(iter.index)) { + struct uverbs_api_ioctl_method *method_elm = + rcu_dereference_protected(*slot, true); + + if (method_elm->disabled) { + starting_key = iter.index; + uapi_remove_method(uapi, iter.index); + goto again; + } + continue; + } + + if (uapi_key_is_write_method(iter.index) || + uapi_key_is_write_ex_method(iter.index)) { + struct uverbs_api_write_method *method_elm = + rcu_dereference_protected(*slot, true); + + if (method_elm->disabled) { + kfree(method_elm); + radix_tree_iter_delete(&uapi->radix, &iter, slot); + } + continue; + } + + if (uapi_key_is_attr(iter.index)) { + struct uverbs_api_attr *attr_elm = + rcu_dereference_protected(*slot, true); + const struct uverbs_api_object *tmp_obj; + u32 obj_key; + + /* + * If the method has a mandatory object handle + * attribute which relies on an object which is not + * present then the entire method is uncallable. 
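+ *
+ * (Editor's sketch, not part of this patch: the rule above reduces to
+ * a single predicate over the attr spec. The helper name is
+ * hypothetical; uapi_get_obj_id() and uapi_get_object() are the
+ * functions used by the code below.)
+ *
+ *	static bool uapi_attr_blocks_method(struct uverbs_api *uapi,
+ *					    struct uverbs_api_attr *attr)
+ *	{
+ *		u32 obj_key = uapi_get_obj_id(&attr->spec);
+ *		const struct uverbs_api_object *obj;
+ *
+ *		if (!attr->spec.mandatory || obj_key == UVERBS_API_KEY_ERR)
+ *			return false;
+ *		obj = uapi_get_object(uapi, obj_key);
+ *		if (IS_ERR(obj))
+ *			return PTR_ERR(obj) != -ENOMSG;
+ *		return obj->disabled;
+ *	}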
+ */ + if (!attr_elm->spec.mandatory) + continue; + obj_key = uapi_get_obj_id(&attr_elm->spec); + if (obj_key == UVERBS_API_KEY_ERR) + continue; + tmp_obj = uapi_get_object(uapi, obj_key); + if (IS_ERR(tmp_obj)) { + if (PTR_ERR(tmp_obj) == -ENOMSG) + continue; + } else { + if (!tmp_obj->disabled) + continue; + } + + starting_key = iter.index; + uapi_remove_method( + uapi, + iter.index & (UVERBS_API_OBJ_KEY_MASK | + UVERBS_API_METHOD_KEY_MASK)); + goto again; + } + + WARN_ON(false); + } + + if (!scan_again) + return; + scan_again = false; + starting_key = 0; + goto again; +} + +void uverbs_destroy_api(struct uverbs_api *uapi) +{ + if (!uapi) + return; + + uapi_remove_range(uapi, 0, U32_MAX); + kfree(uapi->write_methods); kfree(uapi); } -struct uverbs_api *uverbs_alloc_api( - const struct uverbs_object_tree_def *const *driver_specs, - enum rdma_driver_id driver_id) +static const struct uapi_definition uverbs_core_api[] = { + UAPI_DEF_CHAIN(uverbs_def_obj_counters), + UAPI_DEF_CHAIN(uverbs_def_obj_cq), + UAPI_DEF_CHAIN(uverbs_def_obj_device), + UAPI_DEF_CHAIN(uverbs_def_obj_dm), + UAPI_DEF_CHAIN(uverbs_def_obj_flow_action), + UAPI_DEF_CHAIN(uverbs_def_obj_intf), + UAPI_DEF_CHAIN(uverbs_def_obj_mr), + UAPI_DEF_CHAIN(uverbs_def_write_intf), + {}, +}; + +struct uverbs_api *uverbs_alloc_api(struct ib_device *ibdev) { struct uverbs_api *uapi; int rc; @@ -275,18 +640,16 @@ struct uverbs_api *uverbs_alloc_api( return ERR_PTR(-ENOMEM); INIT_RADIX_TREE(&uapi->radix, GFP_KERNEL); - uapi->driver_id = driver_id; + uapi->driver_id = ibdev->driver_id; - rc = uapi_merge_tree(uapi, uverbs_default_get_objects(), false); + rc = uapi_merge_def(uapi, ibdev, uverbs_core_api, false); + if (rc) + goto err; + rc = uapi_merge_def(uapi, ibdev, ibdev->driver_def, true); if (rc) goto err; - for (; driver_specs && *driver_specs; driver_specs++) { - rc = uapi_merge_tree(uapi, *driver_specs, true); - if (rc) - goto err; - } - + uapi_finalize_disable(uapi); rc = uapi_finalize(uapi); if (rc) goto err; @@ -294,8 +657,9 @@ struct uverbs_api *uverbs_alloc_api( return uapi; err: if (rc != -ENOMEM) - pr_err("Setup of uverbs_api failed, kernel parsing tree description is not valid (%d)??\n", - rc); + dev_err(&ibdev->dev, + "Setup of uverbs_api failed, kernel parsing tree description is not valid (%d)??\n", + rc); uverbs_destroy_api(uapi); return ERR_PTR(rc); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 178899e3ce73..ac011836bb54 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -141,6 +141,10 @@ __attribute_const__ int ib_rate_to_mult(enum ib_rate rate) case IB_RATE_100_GBPS: return 40; case IB_RATE_200_GBPS: return 80; case IB_RATE_300_GBPS: return 120; + case IB_RATE_28_GBPS: return 11; + case IB_RATE_50_GBPS: return 20; + case IB_RATE_400_GBPS: return 160; + case IB_RATE_600_GBPS: return 240; default: return -1; } } @@ -166,6 +170,10 @@ __attribute_const__ enum ib_rate mult_to_ib_rate(int mult) case 40: return IB_RATE_100_GBPS; case 80: return IB_RATE_200_GBPS; case 120: return IB_RATE_300_GBPS; + case 11: return IB_RATE_28_GBPS; + case 20: return IB_RATE_50_GBPS; + case 160: return IB_RATE_400_GBPS; + case 240: return IB_RATE_600_GBPS; default: return IB_RATE_PORT_CURRENT; } } @@ -191,6 +199,10 @@ __attribute_const__ int ib_rate_to_mbps(enum ib_rate rate) case IB_RATE_100_GBPS: return 103125; case IB_RATE_200_GBPS: return 206250; case IB_RATE_300_GBPS: return 309375; + case IB_RATE_28_GBPS: return 28125; + case IB_RATE_50_GBPS: return 53125; + case 
IB_RATE_400_GBPS: return 425000; + case IB_RATE_600_GBPS: return 637500; default: return -1; } } @@ -214,8 +226,8 @@ EXPORT_SYMBOL(rdma_node_get_transport); enum rdma_link_layer rdma_port_get_link_layer(struct ib_device *device, u8 port_num) { enum rdma_transport_type lt; - if (device->get_link_layer) - return device->get_link_layer(device, port_num); + if (device->ops.get_link_layer) + return device->ops.get_link_layer(device, port_num); lt = rdma_node_get_transport(device->node_type); if (lt == RDMA_TRANSPORT_IB) @@ -243,7 +255,7 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags, struct ib_pd *pd; int mr_access_flags = 0; - pd = device->alloc_pd(device, NULL, NULL); + pd = device->ops.alloc_pd(device, NULL, NULL); if (IS_ERR(pd)) return pd; @@ -265,12 +277,12 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags, pd->res.type = RDMA_RESTRACK_PD; rdma_restrack_set_task(&pd->res, caller); - rdma_restrack_add(&pd->res); + rdma_restrack_kadd(&pd->res); if (mr_access_flags) { struct ib_mr *mr; - mr = pd->device->get_dma_mr(pd, mr_access_flags); + mr = pd->device->ops.get_dma_mr(pd, mr_access_flags); if (IS_ERR(mr)) { ib_dealloc_pd(pd); return ERR_CAST(mr); @@ -307,7 +319,7 @@ void ib_dealloc_pd(struct ib_pd *pd) int ret; if (pd->__internal_mr) { - ret = pd->device->dereg_mr(pd->__internal_mr); + ret = pd->device->ops.dereg_mr(pd->__internal_mr); WARN_ON(ret); pd->__internal_mr = NULL; } @@ -319,7 +331,7 @@ void ib_dealloc_pd(struct ib_pd *pd) rdma_restrack_del(&pd->res); /* Making delalloc_pd a void return is a WIP, no driver should return an error here. */ - ret = pd->device->dealloc_pd(pd); + ret = pd->device->ops.dealloc_pd(pd); WARN_ONCE(ret, "Infiniband HW driver failed dealloc_pd"); } EXPORT_SYMBOL(ib_dealloc_pd); @@ -475,14 +487,17 @@ rdma_update_sgid_attr(struct rdma_ah_attr *ah_attr, static struct ib_ah *_rdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata) { struct ib_ah *ah; - if (!pd->device->create_ah) + might_sleep_if(flags & RDMA_CREATE_AH_SLEEPABLE); + + if (!pd->device->ops.create_ah) return ERR_PTR(-EOPNOTSUPP); - ah = pd->device->create_ah(pd, ah_attr, udata); + ah = pd->device->ops.create_ah(pd, ah_attr, flags, udata); if (!IS_ERR(ah)) { ah->device = pd->device; @@ -502,12 +517,14 @@ static struct ib_ah *_rdma_create_ah(struct ib_pd *pd, * given address vector. * @pd: The protection domain associated with the address handle. * @ah_attr: The attributes of the address vector. + * @flags: Create address handle flags (see enum rdma_create_ah_flags). * * It returns 0 on success and returns appropriate error code on error. * The address handle is used to reference a local or global destination * in all UD QP post sends. 
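 *
 * (Editor's usage sketch, not part of this patch: process-context
 * callers pass RDMA_CREATE_AH_SLEEPABLE so a driver may block on its
 * command channel, while callers that cannot sleep pass 0:
 *
 *	ah = rdma_create_ah(pd, &ah_attr, RDMA_CREATE_AH_SLEEPABLE);
 *	if (IS_ERR(ah))
 *		return PTR_ERR(ah);
 *	...
 *	rdma_destroy_ah(ah, RDMA_DESTROY_AH_SLEEPABLE);
 *
 * The might_sleep_if() annotations added to _rdma_create_ah() and
 * rdma_destroy_ah() in this patch enforce exactly this contract.)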
*/ -struct ib_ah *rdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr) +struct ib_ah *rdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 flags) { const struct ib_gid_attr *old_sgid_attr; struct ib_ah *ah; @@ -517,7 +534,7 @@ struct ib_ah *rdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr) if (ret) return ERR_PTR(ret); - ah = _rdma_create_ah(pd, ah_attr, NULL); + ah = _rdma_create_ah(pd, ah_attr, flags, NULL); rdma_unfill_sgid_attr(ah_attr, old_sgid_attr); return ah; @@ -557,7 +574,7 @@ struct ib_ah *rdma_create_user_ah(struct ib_pd *pd, } } - ah = _rdma_create_ah(pd, ah_attr, udata); + ah = _rdma_create_ah(pd, ah_attr, RDMA_CREATE_AH_SLEEPABLE, udata); out: rdma_unfill_sgid_attr(ah_attr, old_sgid_attr); @@ -869,7 +886,7 @@ struct ib_ah *ib_create_ah_from_wc(struct ib_pd *pd, const struct ib_wc *wc, if (ret) return ERR_PTR(ret); - ah = rdma_create_ah(pd, &ah_attr); + ah = rdma_create_ah(pd, &ah_attr, RDMA_CREATE_AH_SLEEPABLE); rdma_destroy_ah_attr(&ah_attr); return ah; @@ -888,8 +905,8 @@ int rdma_modify_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr) if (ret) return ret; - ret = ah->device->modify_ah ? - ah->device->modify_ah(ah, ah_attr) : + ret = ah->device->ops.modify_ah ? + ah->device->ops.modify_ah(ah, ah_attr) : -EOPNOTSUPP; ah->sgid_attr = rdma_update_sgid_attr(ah_attr, ah->sgid_attr); @@ -902,20 +919,22 @@ int rdma_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr) { ah_attr->grh.sgid_attr = NULL; - return ah->device->query_ah ? - ah->device->query_ah(ah, ah_attr) : + return ah->device->ops.query_ah ? + ah->device->ops.query_ah(ah, ah_attr) : -EOPNOTSUPP; } EXPORT_SYMBOL(rdma_query_ah); -int rdma_destroy_ah(struct ib_ah *ah) +int rdma_destroy_ah(struct ib_ah *ah, u32 flags) { const struct ib_gid_attr *sgid_attr = ah->sgid_attr; struct ib_pd *pd; int ret; + might_sleep_if(flags & RDMA_DESTROY_AH_SLEEPABLE); + pd = ah->pd; - ret = ah->device->destroy_ah(ah); + ret = ah->device->ops.destroy_ah(ah, flags); if (!ret) { atomic_dec(&pd->usecnt); if (sgid_attr) @@ -933,10 +952,10 @@ struct ib_srq *ib_create_srq(struct ib_pd *pd, { struct ib_srq *srq; - if (!pd->device->create_srq) + if (!pd->device->ops.create_srq) return ERR_PTR(-EOPNOTSUPP); - srq = pd->device->create_srq(pd, srq_init_attr, NULL); + srq = pd->device->ops.create_srq(pd, srq_init_attr, NULL); if (!IS_ERR(srq)) { srq->device = pd->device; @@ -965,17 +984,17 @@ int ib_modify_srq(struct ib_srq *srq, struct ib_srq_attr *srq_attr, enum ib_srq_attr_mask srq_attr_mask) { - return srq->device->modify_srq ? - srq->device->modify_srq(srq, srq_attr, srq_attr_mask, NULL) : - -EOPNOTSUPP; + return srq->device->ops.modify_srq ? + srq->device->ops.modify_srq(srq, srq_attr, srq_attr_mask, + NULL) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_modify_srq); int ib_query_srq(struct ib_srq *srq, struct ib_srq_attr *srq_attr) { - return srq->device->query_srq ? - srq->device->query_srq(srq, srq_attr) : -EOPNOTSUPP; + return srq->device->ops.query_srq ? 
+ srq->device->ops.query_srq(srq, srq_attr) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_query_srq); @@ -997,7 +1016,7 @@ int ib_destroy_srq(struct ib_srq *srq) if (srq_type == IB_SRQT_XRC) xrcd = srq->ext.xrc.xrcd; - ret = srq->device->destroy_srq(srq); + ret = srq->device->ops.destroy_srq(srq); if (!ret) { atomic_dec(&pd->usecnt); if (srq_type == IB_SRQT_XRC) @@ -1106,7 +1125,7 @@ static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp, if (!IS_ERR(qp)) __ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp); else - real_qp->device->destroy_qp(real_qp); + real_qp->device->ops.destroy_qp(real_qp); return qp; } @@ -1692,10 +1711,10 @@ int ib_get_eth_speed(struct ib_device *dev, u8 port_num, u8 *speed, u8 *width) if (rdma_port_get_link_layer(dev, port_num) != IB_LINK_LAYER_ETHERNET) return -EINVAL; - if (!dev->get_netdev) + if (!dev->ops.get_netdev) return -EOPNOTSUPP; - netdev = dev->get_netdev(dev, port_num); + netdev = dev->ops.get_netdev(dev, port_num); if (!netdev) return -ENODEV; @@ -1753,9 +1772,9 @@ int ib_query_qp(struct ib_qp *qp, qp_attr->ah_attr.grh.sgid_attr = NULL; qp_attr->alt_ah_attr.grh.sgid_attr = NULL; - return qp->device->query_qp ? - qp->device->query_qp(qp->real_qp, qp_attr, qp_attr_mask, qp_init_attr) : - -EOPNOTSUPP; + return qp->device->ops.query_qp ? + qp->device->ops.query_qp(qp->real_qp, qp_attr, qp_attr_mask, + qp_init_attr) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_query_qp); @@ -1841,7 +1860,7 @@ int ib_destroy_qp(struct ib_qp *qp) rdma_rw_cleanup_mrs(qp); rdma_restrack_del(&qp->res); - ret = qp->device->destroy_qp(qp); + ret = qp->device->ops.destroy_qp(qp); if (!ret) { if (alt_path_sgid_attr) rdma_put_gid_attr(alt_path_sgid_attr); @@ -1879,7 +1898,7 @@ struct ib_cq *__ib_create_cq(struct ib_device *device, { struct ib_cq *cq; - cq = device->create_cq(device, cq_attr, NULL, NULL); + cq = device->ops.create_cq(device, cq_attr, NULL, NULL); if (!IS_ERR(cq)) { cq->device = device; @@ -1890,7 +1909,7 @@ struct ib_cq *__ib_create_cq(struct ib_device *device, atomic_set(&cq->usecnt, 0); cq->res.type = RDMA_RESTRACK_CQ; rdma_restrack_set_task(&cq->res, caller); - rdma_restrack_add(&cq->res); + rdma_restrack_kadd(&cq->res); } return cq; @@ -1899,8 +1918,9 @@ EXPORT_SYMBOL(__ib_create_cq); int rdma_set_cq_moderation(struct ib_cq *cq, u16 cq_count, u16 cq_period) { - return cq->device->modify_cq ? - cq->device->modify_cq(cq, cq_count, cq_period) : -EOPNOTSUPP; + return cq->device->ops.modify_cq ? + cq->device->ops.modify_cq(cq, cq_count, + cq_period) : -EOPNOTSUPP; } EXPORT_SYMBOL(rdma_set_cq_moderation); @@ -1910,14 +1930,14 @@ int ib_destroy_cq(struct ib_cq *cq) return -EBUSY; rdma_restrack_del(&cq->res); - return cq->device->destroy_cq(cq); + return cq->device->ops.destroy_cq(cq); } EXPORT_SYMBOL(ib_destroy_cq); int ib_resize_cq(struct ib_cq *cq, int cqe) { - return cq->device->resize_cq ? - cq->device->resize_cq(cq, cqe, NULL) : -EOPNOTSUPP; + return cq->device->ops.resize_cq ? 
+ cq->device->ops.resize_cq(cq, cqe, NULL) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_resize_cq); @@ -1930,7 +1950,7 @@ int ib_dereg_mr(struct ib_mr *mr) int ret; rdma_restrack_del(&mr->res); - ret = mr->device->dereg_mr(mr); + ret = mr->device->ops.dereg_mr(mr); if (!ret) { atomic_dec(&pd->usecnt); if (dm) @@ -1959,10 +1979,10 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, { struct ib_mr *mr; - if (!pd->device->alloc_mr) + if (!pd->device->ops.alloc_mr) return ERR_PTR(-EOPNOTSUPP); - mr = pd->device->alloc_mr(pd, mr_type, max_num_sg); + mr = pd->device->ops.alloc_mr(pd, mr_type, max_num_sg); if (!IS_ERR(mr)) { mr->device = pd->device; mr->pd = pd; @@ -1971,7 +1991,7 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, atomic_inc(&pd->usecnt); mr->need_inval = false; mr->res.type = RDMA_RESTRACK_MR; - rdma_restrack_add(&mr->res); + rdma_restrack_kadd(&mr->res); } return mr; @@ -1986,10 +2006,10 @@ struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd, { struct ib_fmr *fmr; - if (!pd->device->alloc_fmr) + if (!pd->device->ops.alloc_fmr) return ERR_PTR(-EOPNOTSUPP); - fmr = pd->device->alloc_fmr(pd, mr_access_flags, fmr_attr); + fmr = pd->device->ops.alloc_fmr(pd, mr_access_flags, fmr_attr); if (!IS_ERR(fmr)) { fmr->device = pd->device; fmr->pd = pd; @@ -2008,7 +2028,7 @@ int ib_unmap_fmr(struct list_head *fmr_list) return 0; fmr = list_entry(fmr_list->next, struct ib_fmr, list); - return fmr->device->unmap_fmr(fmr_list); + return fmr->device->ops.unmap_fmr(fmr_list); } EXPORT_SYMBOL(ib_unmap_fmr); @@ -2018,7 +2038,7 @@ int ib_dealloc_fmr(struct ib_fmr *fmr) int ret; pd = fmr->pd; - ret = fmr->device->dealloc_fmr(fmr); + ret = fmr->device->ops.dealloc_fmr(fmr); if (!ret) atomic_dec(&pd->usecnt); @@ -2070,14 +2090,14 @@ int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid) { int ret; - if (!qp->device->attach_mcast) + if (!qp->device->ops.attach_mcast) return -EOPNOTSUPP; if (!rdma_is_multicast_addr((struct in6_addr *)gid->raw) || qp->qp_type != IB_QPT_UD || !is_valid_mcast_lid(qp, lid)) return -EINVAL; - ret = qp->device->attach_mcast(qp, gid, lid); + ret = qp->device->ops.attach_mcast(qp, gid, lid); if (!ret) atomic_inc(&qp->usecnt); return ret; @@ -2088,14 +2108,14 @@ int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid) { int ret; - if (!qp->device->detach_mcast) + if (!qp->device->ops.detach_mcast) return -EOPNOTSUPP; if (!rdma_is_multicast_addr((struct in6_addr *)gid->raw) || qp->qp_type != IB_QPT_UD || !is_valid_mcast_lid(qp, lid)) return -EINVAL; - ret = qp->device->detach_mcast(qp, gid, lid); + ret = qp->device->ops.detach_mcast(qp, gid, lid); if (!ret) atomic_dec(&qp->usecnt); return ret; @@ -2106,10 +2126,10 @@ struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller) { struct ib_xrcd *xrcd; - if (!device->alloc_xrcd) + if (!device->ops.alloc_xrcd) return ERR_PTR(-EOPNOTSUPP); - xrcd = device->alloc_xrcd(device, NULL, NULL); + xrcd = device->ops.alloc_xrcd(device, NULL, NULL); if (!IS_ERR(xrcd)) { xrcd->device = device; xrcd->inode = NULL; @@ -2137,7 +2157,7 @@ int ib_dealloc_xrcd(struct ib_xrcd *xrcd) return ret; } - return xrcd->device->dealloc_xrcd(xrcd); + return xrcd->device->ops.dealloc_xrcd(xrcd); } EXPORT_SYMBOL(ib_dealloc_xrcd); @@ -2160,10 +2180,10 @@ struct ib_wq *ib_create_wq(struct ib_pd *pd, { struct ib_wq *wq; - if (!pd->device->create_wq) + if (!pd->device->ops.create_wq) return ERR_PTR(-EOPNOTSUPP); - wq = pd->device->create_wq(pd, wq_attr, NULL); + wq = pd->device->ops.create_wq(pd, wq_attr, NULL); if (!IS_ERR(wq)) { wq->event_handler = 
wq_attr->event_handler; wq->wq_context = wq_attr->wq_context; @@ -2193,7 +2213,7 @@ int ib_destroy_wq(struct ib_wq *wq) if (atomic_read(&wq->usecnt)) return -EBUSY; - err = wq->device->destroy_wq(wq); + err = wq->device->ops.destroy_wq(wq); if (!err) { atomic_dec(&pd->usecnt); atomic_dec(&cq->usecnt); @@ -2215,10 +2235,10 @@ int ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr, { int err; - if (!wq->device->modify_wq) + if (!wq->device->ops.modify_wq) return -EOPNOTSUPP; - err = wq->device->modify_wq(wq, wq_attr, wq_attr_mask, NULL); + err = wq->device->ops.modify_wq(wq, wq_attr, wq_attr_mask, NULL); return err; } EXPORT_SYMBOL(ib_modify_wq); @@ -2240,12 +2260,12 @@ struct ib_rwq_ind_table *ib_create_rwq_ind_table(struct ib_device *device, int i; u32 table_size; - if (!device->create_rwq_ind_table) + if (!device->ops.create_rwq_ind_table) return ERR_PTR(-EOPNOTSUPP); table_size = (1 << init_attr->log_ind_tbl_size); - rwq_ind_table = device->create_rwq_ind_table(device, - init_attr, NULL); + rwq_ind_table = device->ops.create_rwq_ind_table(device, + init_attr, NULL); if (IS_ERR(rwq_ind_table)) return rwq_ind_table; @@ -2275,7 +2295,7 @@ int ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *rwq_ind_table) if (atomic_read(&rwq_ind_table->usecnt)) return -EBUSY; - err = rwq_ind_table->device->destroy_rwq_ind_table(rwq_ind_table); + err = rwq_ind_table->device->ops.destroy_rwq_ind_table(rwq_ind_table); if (!err) { for (i = 0; i < table_size; i++) atomic_dec(&ind_tbl[i]->usecnt); @@ -2288,48 +2308,50 @@ EXPORT_SYMBOL(ib_destroy_rwq_ind_table); int ib_check_mr_status(struct ib_mr *mr, u32 check_mask, struct ib_mr_status *mr_status) { - return mr->device->check_mr_status ? - mr->device->check_mr_status(mr, check_mask, mr_status) : -EOPNOTSUPP; + if (!mr->device->ops.check_mr_status) + return -EOPNOTSUPP; + + return mr->device->ops.check_mr_status(mr, check_mask, mr_status); } EXPORT_SYMBOL(ib_check_mr_status); int ib_set_vf_link_state(struct ib_device *device, int vf, u8 port, int state) { - if (!device->set_vf_link_state) + if (!device->ops.set_vf_link_state) return -EOPNOTSUPP; - return device->set_vf_link_state(device, vf, port, state); + return device->ops.set_vf_link_state(device, vf, port, state); } EXPORT_SYMBOL(ib_set_vf_link_state); int ib_get_vf_config(struct ib_device *device, int vf, u8 port, struct ifla_vf_info *info) { - if (!device->get_vf_config) + if (!device->ops.get_vf_config) return -EOPNOTSUPP; - return device->get_vf_config(device, vf, port, info); + return device->ops.get_vf_config(device, vf, port, info); } EXPORT_SYMBOL(ib_get_vf_config); int ib_get_vf_stats(struct ib_device *device, int vf, u8 port, struct ifla_vf_stats *stats) { - if (!device->get_vf_stats) + if (!device->ops.get_vf_stats) return -EOPNOTSUPP; - return device->get_vf_stats(device, vf, port, stats); + return device->ops.get_vf_stats(device, vf, port, stats); } EXPORT_SYMBOL(ib_get_vf_stats); int ib_set_vf_guid(struct ib_device *device, int vf, u8 port, u64 guid, int type) { - if (!device->set_vf_guid) + if (!device->ops.set_vf_guid) return -EOPNOTSUPP; - return device->set_vf_guid(device, vf, port, guid, type); + return device->ops.set_vf_guid(device, vf, port, guid, type); } EXPORT_SYMBOL(ib_set_vf_guid); @@ -2361,12 +2383,12 @@ EXPORT_SYMBOL(ib_set_vf_guid); int ib_map_mr_sg(struct ib_mr *mr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset, unsigned int page_size) { - if (unlikely(!mr->device->map_mr_sg)) + if (unlikely(!mr->device->ops.map_mr_sg)) return -EOPNOTSUPP; mr->page_size 
= page_size; - return mr->device->map_mr_sg(mr, sg, sg_nents, sg_offset); + return mr->device->ops.map_mr_sg(mr, sg, sg_nents, sg_offset); } EXPORT_SYMBOL(ib_map_mr_sg); @@ -2565,8 +2587,8 @@ static void __ib_drain_rq(struct ib_qp *qp) */ void ib_drain_sq(struct ib_qp *qp) { - if (qp->device->drain_sq) - qp->device->drain_sq(qp); + if (qp->device->ops.drain_sq) + qp->device->ops.drain_sq(qp); else __ib_drain_sq(qp); } @@ -2593,8 +2615,8 @@ EXPORT_SYMBOL(ib_drain_sq); */ void ib_drain_rq(struct ib_qp *qp) { - if (qp->device->drain_rq) - qp->device->drain_rq(qp); + if (qp->device->ops.drain_rq) + qp->device->ops.drain_rq(qp); else __ib_drain_rq(qp); } @@ -2632,10 +2654,11 @@ struct net_device *rdma_alloc_netdev(struct ib_device *device, u8 port_num, struct net_device *netdev; int rc; - if (!device->rdma_netdev_get_params) + if (!device->ops.rdma_netdev_get_params) return ERR_PTR(-EOPNOTSUPP); - rc = device->rdma_netdev_get_params(device, port_num, type, &params); + rc = device->ops.rdma_netdev_get_params(device, port_num, type, + &params); if (rc) return ERR_PTR(rc); @@ -2657,10 +2680,11 @@ int rdma_init_netdev(struct ib_device *device, u8 port_num, struct rdma_netdev_alloc_params params; int rc; - if (!device->rdma_netdev_get_params) + if (!device->ops.rdma_netdev_get_params) return -EOPNOTSUPP; - rc = device->rdma_netdev_get_params(device, port_num, type, &params); + rc = device->ops.rdma_netdev_get_params(device, port_num, type, + &params); if (rc) return rc; diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index 54fdd4cf5288..1e2515e2eb62 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -647,13 +647,14 @@ fail: } /* Address Handles */ -int bnxt_re_destroy_ah(struct ib_ah *ib_ah) +int bnxt_re_destroy_ah(struct ib_ah *ib_ah, u32 flags) { struct bnxt_re_ah *ah = container_of(ib_ah, struct bnxt_re_ah, ib_ah); struct bnxt_re_dev *rdev = ah->rdev; int rc; - rc = bnxt_qplib_destroy_ah(&rdev->qplib_res, &ah->qplib_ah); + rc = bnxt_qplib_destroy_ah(&rdev->qplib_res, &ah->qplib_ah, + !(flags & RDMA_DESTROY_AH_SLEEPABLE)); if (rc) { dev_err(rdev_to_dev(rdev), "Failed to destroy HW AH"); return rc; @@ -664,6 +665,7 @@ int bnxt_re_destroy_ah(struct ib_ah *ib_ah) struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata) { struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd); @@ -698,7 +700,7 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd, ah->qplib_ah.flow_label = grh->flow_label; ah->qplib_ah.hop_limit = grh->hop_limit; ah->qplib_ah.sl = rdma_ah_get_sl(ah_attr); - if (ib_pd->uobject && + if (udata && !rdma_is_multicast_addr((struct in6_addr *) grh->dgid.raw) && !rdma_link_local_addr((struct in6_addr *) @@ -722,14 +724,15 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd, } memcpy(ah->qplib_ah.dmac, ah_attr->roce.dmac, ETH_ALEN); - rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah); + rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah, + !(flags & RDMA_CREATE_AH_SLEEPABLE)); if (rc) { dev_err(rdev_to_dev(rdev), "Failed to allocate HW AH"); goto fail; } /* Write AVID to shared page. 
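 *
 * (Editor's note, not part of this patch: in the hunks above the
 * firmware "block" argument is derived from the new flags bit, i.e.
 *
 *	bool block = !(flags & RDMA_CREATE_AH_SLEEPABLE);
 *
 * so a caller that cannot sleep makes bnxt_qplib_create_ah() and
 * bnxt_qplib_destroy_ah() use the polled, non-sleeping RCFW command
 * path instead of the wait_event_timeout() path.)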
*/ - if (ib_pd->uobject) { + if (udata) { struct ib_ucontext *ib_uctx = ib_pd->uobject->context; struct bnxt_re_ucontext *uctx; unsigned long flag; @@ -818,7 +821,7 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp) if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp) { rc = bnxt_qplib_destroy_ah(&rdev->qplib_res, - &rdev->sqp_ah->qplib_ah); + &rdev->sqp_ah->qplib_ah, false); if (rc) { dev_err(rdev_to_dev(rdev), "Failed to destroy HW AH for shadow QP"); @@ -958,7 +961,7 @@ static struct bnxt_re_ah *bnxt_re_create_shadow_qp_ah /* Have DMAC same as SMAC */ ether_addr_copy(ah->qplib_ah.dmac, rdev->netdev->dev_addr); - rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah); + rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah, false); if (rc) { dev_err(rdev_to_dev(rdev), "Failed to allocate HW AH for Shadow QP"); diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h index aa33e7b82c84..c4af72604b4f 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h @@ -169,10 +169,11 @@ struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev, int bnxt_re_dealloc_pd(struct ib_pd *pd); struct ib_ah *bnxt_re_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata); int bnxt_re_modify_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr); int bnxt_re_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr); -int bnxt_re_destroy_ah(struct ib_ah *ah); +int bnxt_re_destroy_ah(struct ib_ah *ah, u32 flags); struct ib_srq *bnxt_re_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *srq_init_attr, struct ib_udata *udata); diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c index 77f095e5fbe3..e7a997f2a537 100644 --- a/drivers/infiniband/hw/bnxt_re/main.c +++ b/drivers/infiniband/hw/bnxt_re/main.c @@ -568,6 +568,50 @@ static void bnxt_re_unregister_ib(struct bnxt_re_dev *rdev) ib_unregister_device(&rdev->ibdev); } +static const struct ib_device_ops bnxt_re_dev_ops = { + .add_gid = bnxt_re_add_gid, + .alloc_hw_stats = bnxt_re_ib_alloc_hw_stats, + .alloc_mr = bnxt_re_alloc_mr, + .alloc_pd = bnxt_re_alloc_pd, + .alloc_ucontext = bnxt_re_alloc_ucontext, + .create_ah = bnxt_re_create_ah, + .create_cq = bnxt_re_create_cq, + .create_qp = bnxt_re_create_qp, + .create_srq = bnxt_re_create_srq, + .dealloc_pd = bnxt_re_dealloc_pd, + .dealloc_ucontext = bnxt_re_dealloc_ucontext, + .del_gid = bnxt_re_del_gid, + .dereg_mr = bnxt_re_dereg_mr, + .destroy_ah = bnxt_re_destroy_ah, + .destroy_cq = bnxt_re_destroy_cq, + .destroy_qp = bnxt_re_destroy_qp, + .destroy_srq = bnxt_re_destroy_srq, + .get_dev_fw_str = bnxt_re_query_fw_str, + .get_dma_mr = bnxt_re_get_dma_mr, + .get_hw_stats = bnxt_re_ib_get_hw_stats, + .get_link_layer = bnxt_re_get_link_layer, + .get_netdev = bnxt_re_get_netdev, + .get_port_immutable = bnxt_re_get_port_immutable, + .map_mr_sg = bnxt_re_map_mr_sg, + .mmap = bnxt_re_mmap, + .modify_ah = bnxt_re_modify_ah, + .modify_device = bnxt_re_modify_device, + .modify_qp = bnxt_re_modify_qp, + .modify_srq = bnxt_re_modify_srq, + .poll_cq = bnxt_re_poll_cq, + .post_recv = bnxt_re_post_recv, + .post_send = bnxt_re_post_send, + .post_srq_recv = bnxt_re_post_srq_recv, + .query_ah = bnxt_re_query_ah, + .query_device = bnxt_re_query_device, + .query_pkey = bnxt_re_query_pkey, + .query_port = bnxt_re_query_port, + .query_qp = bnxt_re_query_qp, + .query_srq = bnxt_re_query_srq, + .reg_user_mr = bnxt_re_reg_user_mr, + .req_notify_cq = bnxt_re_req_notify_cq, +}; + static 
int bnxt_re_register_ib(struct bnxt_re_dev *rdev) { struct ib_device *ibdev = &rdev->ibdev; @@ -614,60 +658,10 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev) (1ull << IB_USER_VERBS_CMD_DESTROY_AH); /* POLL_CQ and REQ_NOTIFY_CQ is directly handled in libbnxt_re */ - /* Kernel verbs */ - ibdev->query_device = bnxt_re_query_device; - ibdev->modify_device = bnxt_re_modify_device; - - ibdev->query_port = bnxt_re_query_port; - ibdev->get_port_immutable = bnxt_re_get_port_immutable; - ibdev->get_dev_fw_str = bnxt_re_query_fw_str; - ibdev->query_pkey = bnxt_re_query_pkey; - ibdev->get_netdev = bnxt_re_get_netdev; - ibdev->add_gid = bnxt_re_add_gid; - ibdev->del_gid = bnxt_re_del_gid; - ibdev->get_link_layer = bnxt_re_get_link_layer; - - ibdev->alloc_pd = bnxt_re_alloc_pd; - ibdev->dealloc_pd = bnxt_re_dealloc_pd; - - ibdev->create_ah = bnxt_re_create_ah; - ibdev->modify_ah = bnxt_re_modify_ah; - ibdev->query_ah = bnxt_re_query_ah; - ibdev->destroy_ah = bnxt_re_destroy_ah; - - ibdev->create_srq = bnxt_re_create_srq; - ibdev->modify_srq = bnxt_re_modify_srq; - ibdev->query_srq = bnxt_re_query_srq; - ibdev->destroy_srq = bnxt_re_destroy_srq; - ibdev->post_srq_recv = bnxt_re_post_srq_recv; - - ibdev->create_qp = bnxt_re_create_qp; - ibdev->modify_qp = bnxt_re_modify_qp; - ibdev->query_qp = bnxt_re_query_qp; - ibdev->destroy_qp = bnxt_re_destroy_qp; - - ibdev->post_send = bnxt_re_post_send; - ibdev->post_recv = bnxt_re_post_recv; - - ibdev->create_cq = bnxt_re_create_cq; - ibdev->destroy_cq = bnxt_re_destroy_cq; - ibdev->poll_cq = bnxt_re_poll_cq; - ibdev->req_notify_cq = bnxt_re_req_notify_cq; - - ibdev->get_dma_mr = bnxt_re_get_dma_mr; - ibdev->dereg_mr = bnxt_re_dereg_mr; - ibdev->alloc_mr = bnxt_re_alloc_mr; - ibdev->map_mr_sg = bnxt_re_map_mr_sg; - - ibdev->reg_user_mr = bnxt_re_reg_user_mr; - ibdev->alloc_ucontext = bnxt_re_alloc_ucontext; - ibdev->dealloc_ucontext = bnxt_re_dealloc_ucontext; - ibdev->mmap = bnxt_re_mmap; - ibdev->get_hw_stats = bnxt_re_ib_get_hw_stats; - ibdev->alloc_hw_stats = bnxt_re_ib_alloc_hw_stats; rdma_set_device_sysfs_group(ibdev, &bnxt_re_dev_attr_group); ibdev->driver_id = RDMA_DRIVER_BNXT_RE; + ib_set_device_ops(ibdev, &bnxt_re_dev_ops); return ib_register_device(ibdev, "bnxt_re%d", NULL); } @@ -1203,6 +1197,35 @@ static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev) return 0; } +static void bnxt_re_query_hwrm_intf_version(struct bnxt_re_dev *rdev) +{ + struct bnxt_en_dev *en_dev = rdev->en_dev; + struct hwrm_ver_get_output resp = {0}; + struct hwrm_ver_get_input req = {0}; + struct bnxt_fw_msg fw_msg; + int rc = 0; + + memset(&fw_msg, 0, sizeof(fw_msg)); + bnxt_re_init_hwrm_hdr(rdev, (void *)&req, + HWRM_VER_GET, -1, -1); + req.hwrm_intf_maj = HWRM_VERSION_MAJOR; + req.hwrm_intf_min = HWRM_VERSION_MINOR; + req.hwrm_intf_upd = HWRM_VERSION_UPDATE; + bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp, + sizeof(resp), DFLT_HWRM_CMD_TIMEOUT); + rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg); + if (rc) { + dev_err(rdev_to_dev(rdev), + "Failed to query HW version, rc = 0x%x", rc); + return; + } + rdev->qplib_ctx.hwrm_intf_ver = + (u64)resp.hwrm_intf_major << 48 | + (u64)resp.hwrm_intf_minor << 32 | + (u64)resp.hwrm_intf_build << 16 | + resp.hwrm_intf_patch; +} + static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev) { int rc; @@ -1285,10 +1308,13 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev) } set_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags); + bnxt_re_query_hwrm_intf_version(rdev); + /* Establish RCFW 
Communication Channel to initialize the context * memory for the function and all child VFs */ rc = bnxt_qplib_alloc_rcfw_channel(rdev->en_dev->pdev, &rdev->rcfw, + &rdev->qplib_ctx, BNXT_RE_MAX_QPC_COUNT); if (rc) { pr_err("Failed to allocate RCFW Channel: %#x\n", rc); diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c index be4e33e9f962..326805461265 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c @@ -58,7 +58,7 @@ static int __wait_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie) u16 cbit; int rc; - cbit = cookie % RCFW_MAX_OUTSTANDING_CMD; + cbit = cookie % rcfw->cmdq_depth; rc = wait_event_timeout(rcfw->waitq, !test_bit(cbit, rcfw->cmdq_bitmap), msecs_to_jiffies(RCFW_CMD_WAIT_TIME_MS)); @@ -70,7 +70,7 @@ static int __block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie) u32 count = RCFW_BLOCKED_CMD_WAIT_COUNT; u16 cbit; - cbit = cookie % RCFW_MAX_OUTSTANDING_CMD; + cbit = cookie % rcfw->cmdq_depth; if (!test_bit(cbit, rcfw->cmdq_bitmap)) goto done; do { @@ -86,6 +86,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, struct cmdq_base *req, { struct bnxt_qplib_cmdqe *cmdqe, **cmdq_ptr; struct bnxt_qplib_hwq *cmdq = &rcfw->cmdq; + u32 cmdq_depth = rcfw->cmdq_depth; struct bnxt_qplib_crsq *crsqe; u32 sw_prod, cmdq_prod; unsigned long flags; @@ -124,7 +125,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, struct cmdq_base *req, cookie = rcfw->seq_num & RCFW_MAX_COOKIE_VALUE; - cbit = cookie % RCFW_MAX_OUTSTANDING_CMD; + cbit = cookie % rcfw->cmdq_depth; if (is_block) cookie |= RCFW_CMD_IS_BLOCKING; @@ -153,7 +154,8 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, struct cmdq_base *req, do { /* Locate the next cmdq slot */ sw_prod = HWQ_CMP(cmdq->prod, cmdq); - cmdqe = &cmdq_ptr[get_cmdq_pg(sw_prod)][get_cmdq_idx(sw_prod)]; + cmdqe = &cmdq_ptr[get_cmdq_pg(sw_prod, cmdq_depth)] + [get_cmdq_idx(sw_prod, cmdq_depth)]; if (!cmdqe) { dev_err(&rcfw->pdev->dev, "RCFW request failed with no cmdqe!\n"); @@ -326,7 +328,7 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw, mcookie = qp_event->cookie; blocked = cookie & RCFW_CMD_IS_BLOCKING; cookie &= RCFW_MAX_COOKIE_VALUE; - cbit = cookie % RCFW_MAX_OUTSTANDING_CMD; + cbit = cookie % rcfw->cmdq_depth; crsqe = &rcfw->crsqe_tbl[cbit]; if (crsqe->resp && crsqe->resp->cookie == mcookie) { @@ -555,6 +557,7 @@ void bnxt_qplib_free_rcfw_channel(struct bnxt_qplib_rcfw *rcfw) int bnxt_qplib_alloc_rcfw_channel(struct pci_dev *pdev, struct bnxt_qplib_rcfw *rcfw, + struct bnxt_qplib_ctx *ctx, int qp_tbl_sz) { rcfw->pdev = pdev; @@ -567,11 +570,18 @@ int bnxt_qplib_alloc_rcfw_channel(struct pci_dev *pdev, "HW channel CREQ allocation failed\n"); goto fail; } - rcfw->cmdq.max_elements = BNXT_QPLIB_CMDQE_MAX_CNT; - if (bnxt_qplib_alloc_init_hwq(rcfw->pdev, &rcfw->cmdq, NULL, 0, - &rcfw->cmdq.max_elements, - BNXT_QPLIB_CMDQE_UNITS, 0, PAGE_SIZE, - HWQ_TYPE_CTX)) { + if (ctx->hwrm_intf_ver < HWRM_VERSION_RCFW_CMDQ_DEPTH_CHECK) + rcfw->cmdq_depth = BNXT_QPLIB_CMDQE_MAX_CNT_256; + else + rcfw->cmdq_depth = BNXT_QPLIB_CMDQE_MAX_CNT_8192; + + rcfw->cmdq.max_elements = rcfw->cmdq_depth; + if (bnxt_qplib_alloc_init_hwq + (rcfw->pdev, &rcfw->cmdq, NULL, 0, + &rcfw->cmdq.max_elements, + BNXT_QPLIB_CMDQE_UNITS, 0, + bnxt_qplib_cmdqe_page_size(rcfw->cmdq_depth), + HWQ_TYPE_CTX)) { dev_err(&rcfw->pdev->dev, "HW channel CMDQ allocation failed\n"); goto fail; @@ -674,7 +684,7 @@ int bnxt_qplib_enable_rcfw_channel(struct 
pci_dev *pdev, /* General */ rcfw->seq_num = 0; set_bit(FIRMWARE_FIRST_FLAG, &rcfw->flags); - bmap_size = BITS_TO_LONGS(RCFW_MAX_OUTSTANDING_CMD * + bmap_size = BITS_TO_LONGS(rcfw->cmdq_depth * sizeof(unsigned long)); rcfw->cmdq_bitmap = kzalloc(bmap_size, GFP_KERNEL); if (!rcfw->cmdq_bitmap) @@ -734,7 +744,7 @@ int bnxt_qplib_enable_rcfw_channel(struct pci_dev *pdev, init.cmdq_pbl = cpu_to_le64(rcfw->cmdq.pbl[PBL_LVL_0].pg_map_arr[0]); init.cmdq_size_cmdq_lvl = cpu_to_le16( - ((BNXT_QPLIB_CMDQE_MAX_CNT << CMDQ_INIT_CMDQ_SIZE_SFT) & + ((rcfw->cmdq_depth << CMDQ_INIT_CMDQ_SIZE_SFT) & CMDQ_INIT_CMDQ_SIZE_MASK) | ((rcfw->cmdq.level << CMDQ_INIT_CMDQ_LVL_SFT) & CMDQ_INIT_CMDQ_LVL_MASK)); diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h index 9a8687dc0a79..be0ef0e8c53e 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h @@ -63,32 +63,60 @@ #define RCFW_CMD_WAIT_TIME_MS 20000 /* 20 Seconds timeout */ +/* Cmdq contains a fix number of a 16-Byte slots */ +struct bnxt_qplib_cmdqe { + u8 data[16]; +}; + /* CMDQ elements */ -#define BNXT_QPLIB_CMDQE_MAX_CNT 256 +#define BNXT_QPLIB_CMDQE_MAX_CNT_256 256 +#define BNXT_QPLIB_CMDQE_MAX_CNT_8192 8192 #define BNXT_QPLIB_CMDQE_UNITS sizeof(struct bnxt_qplib_cmdqe) -#define BNXT_QPLIB_CMDQE_CNT_PER_PG (PAGE_SIZE / BNXT_QPLIB_CMDQE_UNITS) +#define BNXT_QPLIB_CMDQE_BYTES(depth) ((depth) * BNXT_QPLIB_CMDQE_UNITS) + +static inline u32 bnxt_qplib_cmdqe_npages(u32 depth) +{ + u32 npages; + + npages = BNXT_QPLIB_CMDQE_BYTES(depth) / PAGE_SIZE; + if (BNXT_QPLIB_CMDQE_BYTES(depth) % PAGE_SIZE) + npages++; + return npages; +} + +static inline u32 bnxt_qplib_cmdqe_page_size(u32 depth) +{ + return (bnxt_qplib_cmdqe_npages(depth) * PAGE_SIZE); +} + +static inline u32 bnxt_qplib_cmdqe_cnt_per_pg(u32 depth) +{ + return (bnxt_qplib_cmdqe_page_size(depth) / + BNXT_QPLIB_CMDQE_UNITS); +} -#define MAX_CMDQ_IDX (BNXT_QPLIB_CMDQE_MAX_CNT - 1) -#define MAX_CMDQ_IDX_PER_PG (BNXT_QPLIB_CMDQE_CNT_PER_PG - 1) +#define MAX_CMDQ_IDX(depth) ((depth) - 1) + +static inline u32 bnxt_qplib_max_cmdq_idx_per_pg(u32 depth) +{ + return (bnxt_qplib_cmdqe_cnt_per_pg(depth) - 1); +} -#define RCFW_MAX_OUTSTANDING_CMD BNXT_QPLIB_CMDQE_MAX_CNT #define RCFW_MAX_COOKIE_VALUE 0x7FFF #define RCFW_CMD_IS_BLOCKING 0x8000 #define RCFW_BLOCKED_CMD_WAIT_COUNT 0x4E20 -/* Cmdq contains a fix number of a 16-Byte slots */ -struct bnxt_qplib_cmdqe { - u8 data[16]; -}; +#define HWRM_VERSION_RCFW_CMDQ_DEPTH_CHECK 0x1000900020011ULL -static inline u32 get_cmdq_pg(u32 val) +static inline u32 get_cmdq_pg(u32 val, u32 depth) { - return (val & ~MAX_CMDQ_IDX_PER_PG) / BNXT_QPLIB_CMDQE_CNT_PER_PG; + return (val & ~(bnxt_qplib_max_cmdq_idx_per_pg(depth))) / + (bnxt_qplib_cmdqe_cnt_per_pg(depth)); } -static inline u32 get_cmdq_idx(u32 val) +static inline u32 get_cmdq_idx(u32 val, u32 depth) { - return val & MAX_CMDQ_IDX_PER_PG; + return val & (bnxt_qplib_max_cmdq_idx_per_pg(depth)); } /* Crsq buf is 1024-Byte */ @@ -194,11 +222,14 @@ struct bnxt_qplib_rcfw { struct bnxt_qplib_qp_node *qp_tbl; u64 oos_prev; u32 init_oos_stats; + u32 cmdq_depth; }; void bnxt_qplib_free_rcfw_channel(struct bnxt_qplib_rcfw *rcfw); int bnxt_qplib_alloc_rcfw_channel(struct pci_dev *pdev, - struct bnxt_qplib_rcfw *rcfw, int qp_tbl_sz); + struct bnxt_qplib_rcfw *rcfw, + struct bnxt_qplib_ctx *ctx, + int qp_tbl_sz); void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill); void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw 
*rcfw); int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector, diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h index 2e5c052da5a9..1e80aa7bbcce 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_res.h +++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h @@ -177,6 +177,7 @@ struct bnxt_qplib_ctx { struct bnxt_qplib_hwq tqm_tbl[MAX_TQM_ALLOC_REQ]; struct bnxt_qplib_stats stats; struct bnxt_qplib_vf_res vf_res; + u64 hwrm_intf_ver; }; struct bnxt_qplib_res { diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c index 5216b5f844cc..be03b5738f71 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c +++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c @@ -488,7 +488,8 @@ int bnxt_qplib_add_pkey(struct bnxt_qplib_res *res, } /* AH */ -int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah) +int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah, + bool block) { struct bnxt_qplib_rcfw *rcfw = res->rcfw; struct cmdq_create_ah req; @@ -522,7 +523,7 @@ int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah) req.dest_mac[2] = cpu_to_le16(temp16[2]); rc = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, (void *)&resp, - NULL, 1); + NULL, block); if (rc) return rc; @@ -530,7 +531,8 @@ int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah) return 0; } -int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah) +int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah, + bool block) { struct bnxt_qplib_rcfw *rcfw = res->rcfw; struct cmdq_destroy_ah req; @@ -544,7 +546,7 @@ int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah) req.ah_cid = cpu_to_le32(ah->id); rc = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, (void *)&resp, - NULL, 1); + NULL, block); if (rc) return rc; return 0; diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h index 8079d7f5a008..39454b3f738d 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h +++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h @@ -241,8 +241,10 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw, int bnxt_qplib_set_func_resources(struct bnxt_qplib_res *res, struct bnxt_qplib_rcfw *rcfw, struct bnxt_qplib_ctx *ctx); -int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah); -int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah); +int bnxt_qplib_create_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah, + bool block); +int bnxt_qplib_destroy_ah(struct bnxt_qplib_res *res, struct bnxt_qplib_ah *ah, + bool block); int bnxt_qplib_alloc_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw); int bnxt_qplib_dereg_mrw(struct bnxt_qplib_res *res, struct bnxt_qplib_mrw *mrw, diff --git a/drivers/infiniband/hw/cxgb3/cxio_hal.c b/drivers/infiniband/hw/cxgb3/cxio_hal.c index dcb4bba522ba..df4f7a3f043d 100644 --- a/drivers/infiniband/hw/cxgb3/cxio_hal.c +++ b/drivers/infiniband/hw/cxgb3/cxio_hal.c @@ -291,13 +291,12 @@ int cxio_create_qp(struct cxio_rdev *rdev_p, u32 kernel_domain, if (!wq->sq) goto err3; - wq->queue = dma_alloc_coherent(&(rdev_p->rnic_info.pdev->dev), + wq->queue = dma_zalloc_coherent(&(rdev_p->rnic_info.pdev->dev), depth * sizeof(union t3_wr), &(wq->dma_addr), GFP_KERNEL); if (!wq->queue) goto err4; - memset(wq->queue, 0, depth * sizeof(union t3_wr)); dma_unmap_addr_set(wq, mapping, 
wq->dma_addr); wq->doorbell = (void __iomem *)rdev_p->rnic_info.kdb_addr; if (!kernel_domain) diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c index ebbec02cebe0..b34b1a1bd94b 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c @@ -836,7 +836,7 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd, * Kernel users need more wq space for fastreg WRs which can take * 2 WR fragments. */ - ucontext = pd->uobject ? to_iwch_ucontext(pd->uobject->context) : NULL; + ucontext = udata ? to_iwch_ucontext(pd->uobject->context) : NULL; if (!ucontext && wqsize < (rqsize + (2 * sqsize))) wqsize = roundup_pow_of_two(rqsize + roundup_pow_of_two(attrs->cap.max_send_wr * 2)); @@ -1317,6 +1317,39 @@ static void get_dev_fw_ver_str(struct ib_device *ibdev, char *str) snprintf(str, IB_FW_VERSION_NAME_MAX, "%s", info.fw_version); } +static const struct ib_device_ops iwch_dev_ops = { + .alloc_hw_stats = iwch_alloc_stats, + .alloc_mr = iwch_alloc_mr, + .alloc_mw = iwch_alloc_mw, + .alloc_pd = iwch_allocate_pd, + .alloc_ucontext = iwch_alloc_ucontext, + .create_cq = iwch_create_cq, + .create_qp = iwch_create_qp, + .dealloc_mw = iwch_dealloc_mw, + .dealloc_pd = iwch_deallocate_pd, + .dealloc_ucontext = iwch_dealloc_ucontext, + .dereg_mr = iwch_dereg_mr, + .destroy_cq = iwch_destroy_cq, + .destroy_qp = iwch_destroy_qp, + .get_dev_fw_str = get_dev_fw_ver_str, + .get_dma_mr = iwch_get_dma_mr, + .get_hw_stats = iwch_get_mib, + .get_port_immutable = iwch_port_immutable, + .map_mr_sg = iwch_map_mr_sg, + .mmap = iwch_mmap, + .modify_qp = iwch_ib_modify_qp, + .poll_cq = iwch_poll_cq, + .post_recv = iwch_post_receive, + .post_send = iwch_post_send, + .query_device = iwch_query_device, + .query_gid = iwch_query_gid, + .query_pkey = iwch_query_pkey, + .query_port = iwch_query_port, + .reg_user_mr = iwch_reg_user_mr, + .req_notify_cq = iwch_arm_cq, + .resize_cq = iwch_resize_cq, +}; + int iwch_register_device(struct iwch_dev *dev) { int ret; @@ -1356,37 +1389,7 @@ int iwch_register_device(struct iwch_dev *dev) dev->ibdev.phys_port_cnt = dev->rdev.port_info.nports; dev->ibdev.num_comp_vectors = 1; dev->ibdev.dev.parent = &dev->rdev.rnic_info.pdev->dev; - dev->ibdev.query_device = iwch_query_device; - dev->ibdev.query_port = iwch_query_port; - dev->ibdev.query_pkey = iwch_query_pkey; - dev->ibdev.query_gid = iwch_query_gid; - dev->ibdev.alloc_ucontext = iwch_alloc_ucontext; - dev->ibdev.dealloc_ucontext = iwch_dealloc_ucontext; - dev->ibdev.mmap = iwch_mmap; - dev->ibdev.alloc_pd = iwch_allocate_pd; - dev->ibdev.dealloc_pd = iwch_deallocate_pd; - dev->ibdev.create_qp = iwch_create_qp; - dev->ibdev.modify_qp = iwch_ib_modify_qp; - dev->ibdev.destroy_qp = iwch_destroy_qp; - dev->ibdev.create_cq = iwch_create_cq; - dev->ibdev.destroy_cq = iwch_destroy_cq; - dev->ibdev.resize_cq = iwch_resize_cq; - dev->ibdev.poll_cq = iwch_poll_cq; - dev->ibdev.get_dma_mr = iwch_get_dma_mr; - dev->ibdev.reg_user_mr = iwch_reg_user_mr; - dev->ibdev.dereg_mr = iwch_dereg_mr; - dev->ibdev.alloc_mw = iwch_alloc_mw; - dev->ibdev.dealloc_mw = iwch_dealloc_mw; - dev->ibdev.alloc_mr = iwch_alloc_mr; - dev->ibdev.map_mr_sg = iwch_map_mr_sg; - dev->ibdev.req_notify_cq = iwch_arm_cq; - dev->ibdev.post_send = iwch_post_send; - dev->ibdev.post_recv = iwch_post_receive; - dev->ibdev.alloc_hw_stats = iwch_alloc_stats; - dev->ibdev.get_hw_stats = iwch_get_mib; dev->ibdev.uverbs_abi_ver = IWCH_UVERBS_ABI_VERSION; - dev->ibdev.get_port_immutable = 
iwch_port_immutable; - dev->ibdev.get_dev_fw_str = get_dev_fw_ver_str; dev->ibdev.iwcm = kmalloc(sizeof(struct iw_cm_verbs), GFP_KERNEL); if (!dev->ibdev.iwcm) @@ -1405,6 +1408,7 @@ int iwch_register_device(struct iwch_dev *dev) dev->ibdev.driver_id = RDMA_DRIVER_CXGB3; rdma_set_device_sysfs_group(&dev->ibdev, &iwch_attr_group); + ib_set_device_ops(&dev->ibdev, &iwch_dev_ops); ret = ib_register_device(&dev->ibdev, "cxgb3_%d", NULL); if (ret) kfree(dev->ibdev.iwcm); diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c index 615413bd3e8d..8221813219e5 100644 --- a/drivers/infiniband/hw/cxgb4/cm.c +++ b/drivers/infiniband/hw/cxgb4/cm.c @@ -2058,8 +2058,7 @@ static int import_ep(struct c4iw_ep *ep, int iptype, __u8 *peer_ip, } ep->mtu = pdev->mtu; ep->tx_chan = cxgb4_port_chan(pdev); - ep->smac_idx = cxgb4_tp_smt_idx(adapter_type, - cxgb4_port_viid(pdev)); + ep->smac_idx = ((struct port_info *)netdev_priv(pdev))->smt_idx; step = cdev->rdev.lldi.ntxq / cdev->rdev.lldi.nchan; ep->txq_idx = cxgb4_port_idx(pdev) * step; @@ -2078,8 +2077,7 @@ static int import_ep(struct c4iw_ep *ep, int iptype, __u8 *peer_ip, goto out; ep->mtu = dst_mtu(dst); ep->tx_chan = cxgb4_port_chan(pdev); - ep->smac_idx = cxgb4_tp_smt_idx(adapter_type, - cxgb4_port_viid(pdev)); + ep->smac_idx = ((struct port_info *)netdev_priv(pdev))->smt_idx; step = cdev->rdev.lldi.ntxq / cdev->rdev.lldi.nchan; ep->txq_idx = cxgb4_port_idx(pdev) * step; @@ -2795,7 +2793,8 @@ static int peer_abort(struct c4iw_dev *dev, struct sk_buff *skb) break; case MPA_REQ_SENT: (void)stop_ep_timer(ep); - if (mpa_rev == 1 || (mpa_rev == 2 && ep->tried_with_mpa_v1)) + if (status != CPL_ERR_CONN_RESET || mpa_rev == 1 || + (mpa_rev == 2 && ep->tried_with_mpa_v1)) connect_reply_upcall(ep, -ECONNRESET); else { /* @@ -3944,7 +3943,7 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb) } else { vlan_eh = (struct vlan_ethhdr *)(req + 1); iph = (struct iphdr *)(vlan_eh + 1); - skb->vlan_tci = ntohs(cpl->vlan); + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), ntohs(cpl->vlan)); } if (iph->version != 0x4) diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c index cbb3c0ddd990..586b0c37481f 100644 --- a/drivers/infiniband/hw/cxgb4/provider.c +++ b/drivers/infiniband/hw/cxgb4/provider.c @@ -531,6 +531,44 @@ static int fill_res_entry(struct sk_buff *msg, struct rdma_restrack_entry *res) c4iw_restrack_funcs[res->type](msg, res) : 0; } +static const struct ib_device_ops c4iw_dev_ops = { + .alloc_hw_stats = c4iw_alloc_stats, + .alloc_mr = c4iw_alloc_mr, + .alloc_mw = c4iw_alloc_mw, + .alloc_pd = c4iw_allocate_pd, + .alloc_ucontext = c4iw_alloc_ucontext, + .create_cq = c4iw_create_cq, + .create_qp = c4iw_create_qp, + .create_srq = c4iw_create_srq, + .dealloc_mw = c4iw_dealloc_mw, + .dealloc_pd = c4iw_deallocate_pd, + .dealloc_ucontext = c4iw_dealloc_ucontext, + .dereg_mr = c4iw_dereg_mr, + .destroy_cq = c4iw_destroy_cq, + .destroy_qp = c4iw_destroy_qp, + .destroy_srq = c4iw_destroy_srq, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = c4iw_get_dma_mr, + .get_hw_stats = c4iw_get_mib, + .get_netdev = get_netdev, + .get_port_immutable = c4iw_port_immutable, + .map_mr_sg = c4iw_map_mr_sg, + .mmap = c4iw_mmap, + .modify_qp = c4iw_ib_modify_qp, + .modify_srq = c4iw_modify_srq, + .poll_cq = c4iw_poll_cq, + .post_recv = c4iw_post_receive, + .post_send = c4iw_post_send, + .post_srq_recv = c4iw_post_srq_recv, + .query_device = c4iw_query_device, + .query_gid = c4iw_query_gid, + .query_pkey = 
c4iw_query_pkey, + .query_port = c4iw_query_port, + .query_qp = c4iw_ib_query_qp, + .reg_user_mr = c4iw_reg_user_mr, + .req_notify_cq = c4iw_arm_cq, +}; + void c4iw_register_device(struct work_struct *work) { int ret; @@ -573,42 +611,7 @@ void c4iw_register_device(struct work_struct *work) dev->ibdev.phys_port_cnt = dev->rdev.lldi.nports; dev->ibdev.num_comp_vectors = dev->rdev.lldi.nciq; dev->ibdev.dev.parent = &dev->rdev.lldi.pdev->dev; - dev->ibdev.query_device = c4iw_query_device; - dev->ibdev.query_port = c4iw_query_port; - dev->ibdev.query_pkey = c4iw_query_pkey; - dev->ibdev.query_gid = c4iw_query_gid; - dev->ibdev.alloc_ucontext = c4iw_alloc_ucontext; - dev->ibdev.dealloc_ucontext = c4iw_dealloc_ucontext; - dev->ibdev.mmap = c4iw_mmap; - dev->ibdev.alloc_pd = c4iw_allocate_pd; - dev->ibdev.dealloc_pd = c4iw_deallocate_pd; - dev->ibdev.create_qp = c4iw_create_qp; - dev->ibdev.modify_qp = c4iw_ib_modify_qp; - dev->ibdev.query_qp = c4iw_ib_query_qp; - dev->ibdev.destroy_qp = c4iw_destroy_qp; - dev->ibdev.create_srq = c4iw_create_srq; - dev->ibdev.modify_srq = c4iw_modify_srq; - dev->ibdev.destroy_srq = c4iw_destroy_srq; - dev->ibdev.create_cq = c4iw_create_cq; - dev->ibdev.destroy_cq = c4iw_destroy_cq; - dev->ibdev.poll_cq = c4iw_poll_cq; - dev->ibdev.get_dma_mr = c4iw_get_dma_mr; - dev->ibdev.reg_user_mr = c4iw_reg_user_mr; - dev->ibdev.dereg_mr = c4iw_dereg_mr; - dev->ibdev.alloc_mw = c4iw_alloc_mw; - dev->ibdev.dealloc_mw = c4iw_dealloc_mw; - dev->ibdev.alloc_mr = c4iw_alloc_mr; - dev->ibdev.map_mr_sg = c4iw_map_mr_sg; - dev->ibdev.req_notify_cq = c4iw_arm_cq; - dev->ibdev.post_send = c4iw_post_send; - dev->ibdev.post_recv = c4iw_post_receive; - dev->ibdev.post_srq_recv = c4iw_post_srq_recv; - dev->ibdev.alloc_hw_stats = c4iw_alloc_stats; - dev->ibdev.get_hw_stats = c4iw_get_mib; dev->ibdev.uverbs_abi_ver = C4IW_UVERBS_ABI_VERSION; - dev->ibdev.get_port_immutable = c4iw_port_immutable; - dev->ibdev.get_dev_fw_str = get_dev_fw_str; - dev->ibdev.get_netdev = get_netdev; dev->ibdev.iwcm = kmalloc(sizeof(struct iw_cm_verbs), GFP_KERNEL); if (!dev->ibdev.iwcm) { @@ -630,6 +633,7 @@ void c4iw_register_device(struct work_struct *work) rdma_set_device_sysfs_group(&dev->ibdev, &c4iw_attr_group); dev->ibdev.driver_id = RDMA_DRIVER_CXGB4; + ib_set_device_ops(&dev->ibdev, &c4iw_dev_ops); ret = ib_register_device(&dev->ibdev, "cxgb4_%d", NULL); if (ret) goto err_kfree_iwcm; diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c index 13478f3b7057..981ff5cfb5d1 100644 --- a/drivers/infiniband/hw/cxgb4/qp.c +++ b/drivers/infiniband/hw/cxgb4/qp.c @@ -2163,7 +2163,7 @@ struct ib_qp *c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attrs, if (sqsize < 8) sqsize = 8; - ucontext = pd->uobject ? to_c4iw_ucontext(pd->uobject->context) : NULL; + ucontext = udata ? 
to_c4iw_ucontext(pd->uobject->context) : NULL; qhp = kzalloc(sizeof(*qhp), GFP_KERNEL); if (!qhp) @@ -2564,13 +2564,12 @@ static int alloc_srq_queue(struct c4iw_srq *srq, struct c4iw_dev_ucontext *uctx, wq->rqt_abs_idx = (wq->rqt_hwaddr - rdev->lldi.vr->rq.start) >> T4_RQT_ENTRY_SHIFT; - wq->queue = dma_alloc_coherent(&rdev->lldi.pdev->dev, + wq->queue = dma_zalloc_coherent(&rdev->lldi.pdev->dev, wq->memsize, &wq->dma_addr, GFP_KERNEL); if (!wq->queue) goto err_free_rqtpool; - memset(wq->queue, 0, wq->memsize); dma_unmap_addr_set(wq, mapping, wq->dma_addr); wq->bar2_va = c4iw_bar2_addrs(rdev, wq->qid, CXGB4_BAR2_QTYPE_EGRESS, @@ -2713,7 +2712,7 @@ struct ib_srq *c4iw_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *attrs, rqsize = attrs->attr.max_wr + 1; rqsize = roundup_pow_of_two(max_t(u16, rqsize, 16)); - ucontext = pd->uobject ? to_c4iw_ucontext(pd->uobject->context) : NULL; + ucontext = udata ? to_c4iw_ucontext(pd->uobject->context) : NULL; srq = kzalloc(sizeof(*srq), GFP_KERNEL); if (!srq) diff --git a/drivers/infiniband/hw/hfi1/Makefile b/drivers/infiniband/hw/hfi1/Makefile index ff790390c91a..3ce9dc8c3463 100644 --- a/drivers/infiniband/hw/hfi1/Makefile +++ b/drivers/infiniband/hw/hfi1/Makefile @@ -34,6 +34,7 @@ hfi1-y := \ ruc.o \ sdma.o \ sysfs.o \ + tid_rdma.o \ trace.o \ uc.o \ ud.o \ diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c index 7e6d70936c63..b443642eac02 100644 --- a/drivers/infiniband/hw/hfi1/chip.c +++ b/drivers/infiniband/hw/hfi1/chip.c @@ -1072,6 +1072,8 @@ static void log_state_transition(struct hfi1_pportdata *ppd, u32 state); static void log_physical_state(struct hfi1_pportdata *ppd, u32 state); static int wait_physical_linkstate(struct hfi1_pportdata *ppd, u32 state, int msecs); +static int wait_phys_link_out_of_offline(struct hfi1_pportdata *ppd, + int msecs); static void read_planned_down_reason_code(struct hfi1_devdata *dd, u8 *pdrrc); static void read_link_down_reason(struct hfi1_devdata *dd, u8 *ldr); static void handle_temp_err(struct hfi1_devdata *dd); @@ -10770,13 +10772,15 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state) break; ppd->port_error_action = 0; - ppd->host_link_state = HLS_DN_POLL; if (quick_linkup) { /* quick linkup does not go into polling */ ret = do_quick_linkup(dd); } else { ret1 = set_physical_link_state(dd, PLS_POLLING); + if (!ret1) + ret1 = wait_phys_link_out_of_offline(ppd, + 3000); if (ret1 != HCMD_SUCCESS) { dd_dev_err(dd, "Failed to transition to Polling link state, return 0x%x\n", @@ -10784,6 +10788,14 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state) ret = -EINVAL; } } + + /* + * Change the host link state after requesting DC8051 to + * change its physical state so that we can ignore any + * interrupt with stale LNI(XX) error, which will not be + * cleared until DC8051 transitions to Polling state. + */ + ppd->host_link_state = HLS_DN_POLL; ppd->offline_disabled_reason = HFI1_ODR_MASK(OPA_LINKDOWN_REASON_NONE); /* @@ -12928,6 +12940,39 @@ static int wait_phys_link_offline_substates(struct hfi1_pportdata *ppd, return read_state; } +/* + * wait_phys_link_out_of_offline - wait for any out of offline state + * @ppd: port device + * @msecs: the number of milliseconds to wait + * + * Wait up to msecs milliseconds for any out of offline physical link + * state change to occur. + * Returns 0 if at least one state is reached, otherwise -ETIMEDOUT. 
+ */ +static int wait_phys_link_out_of_offline(struct hfi1_pportdata *ppd, + int msecs) +{ + u32 read_state; + unsigned long timeout; + + timeout = jiffies + msecs_to_jiffies(msecs); + while (1) { + read_state = read_physical_state(ppd->dd); + if ((read_state & 0xF0) != PLS_OFFLINE) + break; + if (time_after(jiffies, timeout)) { + dd_dev_err(ppd->dd, + "timeout waiting for phy link out of offline. Read state 0x%x, %dms\n", + read_state, msecs); + return -ETIMEDOUT; + } + usleep_range(1950, 2050); /* sleep 2ms-ish */ + } + + log_state_transition(ppd, read_state); + return read_state; +} + #define CLEAR_STATIC_RATE_CONTROL_SMASK(r) \ (r &= ~SEND_CTXT_CHECK_ENABLE_DISALLOW_PBC_STATIC_RATE_CONTROL_SMASK) diff --git a/drivers/infiniband/hw/hfi1/chip_registers.h b/drivers/infiniband/hw/hfi1/chip_registers.h index c6163a347e93..c0800ea5a3f8 100644 --- a/drivers/infiniband/hw/hfi1/chip_registers.h +++ b/drivers/infiniband/hw/hfi1/chip_registers.h @@ -935,6 +935,10 @@ #define SEND_CTXT_CREDIT_CTRL_THRESHOLD_MASK 0x7FFull #define SEND_CTXT_CREDIT_CTRL_THRESHOLD_SHIFT 0 #define SEND_CTXT_CREDIT_CTRL_THRESHOLD_SMASK 0x7FFull +#define SEND_CTXT_CREDIT_STATUS (TXE + 0x000000100018) +#define SEND_CTXT_CREDIT_STATUS_CURRENT_FREE_COUNTER_MASK 0x7FFull +#define SEND_CTXT_CREDIT_STATUS_CURRENT_FREE_COUNTER_SHIFT 32 +#define SEND_CTXT_CREDIT_STATUS_LAST_RETURNED_COUNTER_SMASK 0x7FFull #define SEND_CTXT_CREDIT_FORCE (TXE + 0x000000100028) #define SEND_CTXT_CREDIT_FORCE_FORCE_RETURN_SMASK 0x1ull #define SEND_CTXT_CREDIT_RETURN_ADDR (TXE + 0x000000100020) diff --git a/drivers/infiniband/hw/hfi1/common.h b/drivers/infiniband/hw/hfi1/common.h index 7108d4d92259..40d3cfb58bd1 100644 --- a/drivers/infiniband/hw/hfi1/common.h +++ b/drivers/infiniband/hw/hfi1/common.h @@ -1,5 +1,5 @@ /* - * Copyright(c) 2015, 2016 Intel Corporation. + * Copyright(c) 2015 - 2018 Intel Corporation. * * This file is provided under a dual BSD/GPLv2 license. When using or * redistributing this file, you may do so under either license. @@ -136,18 +136,21 @@ HFI1_CAP_ALLOW_PERM_JKEY | \ HFI1_CAP_STATIC_RATE_CTRL | \ HFI1_CAP_PRINT_UNIMPL | \ - HFI1_CAP_TID_UNMAP) + HFI1_CAP_TID_UNMAP | \ + HFI1_CAP_OPFN) /* * A set of capability bits that are "global" and are not allowed to be * set in the user bitmask. */ #define HFI1_CAP_RESERVED_MASK ((HFI1_CAP_SDMA | \ - HFI1_CAP_USE_SDMA_HEAD | \ - HFI1_CAP_EXTENDED_PSN | \ - HFI1_CAP_PRINT_UNIMPL | \ - HFI1_CAP_NO_INTEGRITY | \ - HFI1_CAP_PKEY_CHECK) << \ - HFI1_CAP_USER_SHIFT) + HFI1_CAP_USE_SDMA_HEAD | \ + HFI1_CAP_EXTENDED_PSN | \ + HFI1_CAP_PRINT_UNIMPL | \ + HFI1_CAP_NO_INTEGRITY | \ + HFI1_CAP_PKEY_CHECK | \ + HFI1_CAP_TID_RDMA | \ + HFI1_CAP_OPFN) << \ + HFI1_CAP_USER_SHIFT) /* * Set of capabilities that need to be enabled for kernel context in * order to be allowed for user contexts, as well. 
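The wait_phys_link_out_of_offline() helper added above follows a common bounded-polling idiom: re-read the DC8051 physical link state until its upper nibble leaves the offline encoding or a deadline passes, pausing about 2 ms between reads. Below is a minimal, self-contained userspace sketch of the same idiom; read_state(), the PLS_OFFLINE value and the simulated state sequence are illustrative stand-ins, not driver API:

#define _POSIX_C_SOURCE 199309L
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define PLS_OFFLINE 0x90	/* assumed encoding, for illustration only */

/* Fake hardware register: offline for the first few reads, then polling. */
static uint32_t read_state(void)
{
	static int calls;

	return ++calls < 5 ? 0x90 : 0x20;
}

/* Poll until the state leaves offline or msecs elapse. */
static int wait_out_of_offline(int msecs, uint32_t *state)
{
	struct timespec start, now;
	struct timespec pause = { 0, 2 * 1000 * 1000 };	/* ~2 ms between reads */
	long elapsed;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		*state = read_state();
		if ((*state & 0xF0) != PLS_OFFLINE)
			return 0;
		clock_gettime(CLOCK_MONOTONIC, &now);
		elapsed = (now.tv_sec - start.tv_sec) * 1000 +
			  (now.tv_nsec - start.tv_nsec) / 1000000;
		if (elapsed > msecs)
			return -ETIMEDOUT;
		nanosleep(&pause, NULL);
	}
}

int main(void)
{
	uint32_t state;

	if (wait_out_of_offline(3000, &state))
		fprintf(stderr, "timeout waiting for phy link out of offline\n");
	else
		printf("left offline, state 0x%x\n", state);
	return 0;
}

The driver variant above differs only in the details: it reads a CSR via read_physical_state(), sleeps with usleep_range(1950, 2050), and on success logs the transition and returns the state it read.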
diff --git a/drivers/infiniband/hw/hfi1/debugfs.c b/drivers/infiniband/hw/hfi1/debugfs.c index 9f992ae36c89..0a557795563c 100644 --- a/drivers/infiniband/hw/hfi1/debugfs.c +++ b/drivers/infiniband/hw/hfi1/debugfs.c @@ -407,6 +407,54 @@ DEBUGFS_SEQ_FILE_OPS(rcds); DEBUGFS_SEQ_FILE_OPEN(rcds) DEBUGFS_FILE_OPS(rcds); +static void *_pios_seq_start(struct seq_file *s, loff_t *pos) +{ + struct hfi1_ibdev *ibd; + struct hfi1_devdata *dd; + + ibd = (struct hfi1_ibdev *)s->private; + dd = dd_from_dev(ibd); + if (!dd->send_contexts || *pos >= dd->num_send_contexts) + return NULL; + return pos; +} + +static void *_pios_seq_next(struct seq_file *s, void *v, loff_t *pos) +{ + struct hfi1_ibdev *ibd = (struct hfi1_ibdev *)s->private; + struct hfi1_devdata *dd = dd_from_dev(ibd); + + ++*pos; + if (!dd->send_contexts || *pos >= dd->num_send_contexts) + return NULL; + return pos; +} + +static void _pios_seq_stop(struct seq_file *s, void *v) +{ +} + +static int _pios_seq_show(struct seq_file *s, void *v) +{ + struct hfi1_ibdev *ibd = (struct hfi1_ibdev *)s->private; + struct hfi1_devdata *dd = dd_from_dev(ibd); + struct send_context_info *sci; + loff_t *spos = v; + loff_t i = *spos; + unsigned long flags; + + spin_lock_irqsave(&dd->sc_lock, flags); + sci = &dd->send_contexts[i]; + if (sci && sci->type != SC_USER && sci->allocated && sci->sc) + seqfile_dump_sci(s, i, sci); + spin_unlock_irqrestore(&dd->sc_lock, flags); + return 0; +} + +DEBUGFS_SEQ_FILE_OPS(pios); +DEBUGFS_SEQ_FILE_OPEN(pios) +DEBUGFS_FILE_OPS(pios); + /* read the per-device counters */ static ssize_t dev_counters_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) @@ -1143,6 +1191,7 @@ void hfi1_dbg_ibdev_init(struct hfi1_ibdev *ibd) DEBUGFS_SEQ_FILE_CREATE(qp_stats, ibd->hfi1_ibdev_dbg, ibd); DEBUGFS_SEQ_FILE_CREATE(sdes, ibd->hfi1_ibdev_dbg, ibd); DEBUGFS_SEQ_FILE_CREATE(rcds, ibd->hfi1_ibdev_dbg, ibd); + DEBUGFS_SEQ_FILE_CREATE(pios, ibd->hfi1_ibdev_dbg, ibd); DEBUGFS_SEQ_FILE_CREATE(sdma_cpu_list, ibd->hfi1_ibdev_dbg, ibd); /* dev counter files */ for (i = 0; i < ARRAY_SIZE(cntr_ops); i++) diff --git a/drivers/infiniband/hw/hfi1/driver.c b/drivers/infiniband/hw/hfi1/driver.c index a41f85558312..a8ad70730203 100644 --- a/drivers/infiniband/hw/hfi1/driver.c +++ b/drivers/infiniband/hw/hfi1/driver.c @@ -430,40 +430,60 @@ static const hfi1_handle_cnp hfi1_handle_cnp_tbl[2] = { [HFI1_PKT_TYPE_16B] = &return_cnp_16B }; -void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt, - bool do_cnp) +/** + * hfi1_process_ecn_slowpath - Process FECN or BECN bits + * @qp: The packet's destination QP + * @pkt: The packet itself. + * @prescan: Is the caller the RXQ prescan + * + * Process the packet's FECN or BECN bits. By now, the packet + * has already been evaluated whether processing of those bits should + * be done. + * The significance of the @prescan argument is that if the caller + * is the RXQ prescan, a CNP will be sent out instead of waiting for the + * normal packet processing to send an ACK with BECN set (or a CNP).
+ */ +bool hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt, + bool prescan) { struct hfi1_ibport *ibp = to_iport(qp->ibqp.device, qp->port_num); struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); struct ib_other_headers *ohdr = pkt->ohdr; struct ib_grh *grh = pkt->grh; - u32 rqpn = 0, bth1; + u32 rqpn = 0; u16 pkey; u32 rlid, slid, dlid = 0; - u8 hdr_type, sc, svc_type; - bool is_mcast = false; + u8 hdr_type, sc, svc_type, opcode; + bool is_mcast = false, ignore_fecn = false, do_cnp = false, + fecn, becn; /* can be called from prescan */ if (pkt->etype == RHF_RCV_TYPE_BYPASS) { - is_mcast = hfi1_is_16B_mcast(dlid); pkey = hfi1_16B_get_pkey(pkt->hdr); sc = hfi1_16B_get_sc(pkt->hdr); dlid = hfi1_16B_get_dlid(pkt->hdr); slid = hfi1_16B_get_slid(pkt->hdr); + is_mcast = hfi1_is_16B_mcast(dlid); + opcode = ib_bth_get_opcode(ohdr); hdr_type = HFI1_PKT_TYPE_16B; + fecn = hfi1_16B_get_fecn(pkt->hdr); + becn = hfi1_16B_get_becn(pkt->hdr); } else { - is_mcast = (dlid > be16_to_cpu(IB_MULTICAST_LID_BASE)) && - (dlid != be16_to_cpu(IB_LID_PERMISSIVE)); pkey = ib_bth_get_pkey(ohdr); sc = hfi1_9B_get_sc5(pkt->hdr, pkt->rhf); - dlid = ib_get_dlid(pkt->hdr); + dlid = qp->ibqp.qp_type != IB_QPT_UD ? ib_get_dlid(pkt->hdr) : + ppd->lid; slid = ib_get_slid(pkt->hdr); + is_mcast = (dlid > be16_to_cpu(IB_MULTICAST_LID_BASE)) && + (dlid != be16_to_cpu(IB_LID_PERMISSIVE)); + opcode = ib_bth_get_opcode(ohdr); hdr_type = HFI1_PKT_TYPE_9B; + fecn = ib_bth_get_fecn(ohdr); + becn = ib_bth_get_becn(ohdr); } switch (qp->ibqp.qp_type) { case IB_QPT_UD: - dlid = ppd->lid; rlid = slid; rqpn = ib_get_sqpn(pkt->ohdr); svc_type = IB_CC_SVCTYPE_UD; @@ -485,22 +505,31 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt, svc_type = IB_CC_SVCTYPE_RC; break; default: - return; + return false; } - bth1 = be32_to_cpu(ohdr->bth[1]); + ignore_fecn = is_mcast || (opcode == IB_OPCODE_CNP) || + (opcode == IB_OPCODE_RC_ACKNOWLEDGE); + /* + * ACKNOWLEDGE packets do not get a CNP but this will be + * guarded by ignore_fecn above. 
+ */ + do_cnp = prescan || + (opcode >= IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST && + opcode <= IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE); + /* Call appropriate CNP handler */ - if (do_cnp && (bth1 & IB_FECN_SMASK)) + if (!ignore_fecn && do_cnp && fecn) hfi1_handle_cnp_tbl[hdr_type](ibp, qp, rqpn, pkey, dlid, rlid, sc, grh); - if (!is_mcast && (bth1 & IB_BECN_SMASK)) { - u32 lqpn = bth1 & RVT_QPN_MASK; + if (becn) { + u32 lqpn = be32_to_cpu(ohdr->bth[1]) & RVT_QPN_MASK; u8 sl = ibp->sc_to_sl[sc]; process_becn(ppd, sl, rlid, lqpn, rqpn, svc_type); } - + return !ignore_fecn && fecn; } struct ps_mdata { @@ -599,7 +628,6 @@ static void __prescan_rxq(struct hfi1_packet *packet) struct rvt_dev_info *rdi = &rcd->dd->verbs_dev.rdi; u64 rhf = rhf_to_cpu(rhf_addr); u32 etype = rhf_rcv_type(rhf), qpn, bth1; - int is_ecn = 0; u8 lnh; if (ps_done(&mdata, rhf, rcd)) @@ -625,12 +653,10 @@ static void __prescan_rxq(struct hfi1_packet *packet) goto next; /* just in case */ } - bth1 = be32_to_cpu(packet->ohdr->bth[1]); - is_ecn = !!(bth1 & (IB_FECN_SMASK | IB_BECN_SMASK)); - - if (!is_ecn) + if (!hfi1_may_ecn(packet)) goto next; + bth1 = be32_to_cpu(packet->ohdr->bth[1]); qpn = bth1 & RVT_QPN_MASK; rcu_read_lock(); qp = rvt_lookup_qpn(rdi, &ibp->rvp, qpn); @@ -640,7 +666,7 @@ static void __prescan_rxq(struct hfi1_packet *packet) goto next; } - process_ecn(qp, packet, true); + hfi1_process_ecn_slowpath(qp, packet, true); rcu_read_unlock(); /* turn off BECN, FECN */ @@ -1400,7 +1426,7 @@ static int hfi1_bypass_ingress_pkt_check(struct hfi1_packet *packet) if ((!(hfi1_is_16B_mcast(packet->dlid))) && (packet->dlid != opa_get_lid(be32_to_cpu(OPA_LID_PERMISSIVE), 16B))) { - if (packet->dlid != ppd->lid) + if ((packet->dlid & ~((1 << ppd->lmc) - 1)) != ppd->lid) return -EINVAL; } diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h index 2b882347d0c2..6db2276f5c13 100644 --- a/drivers/infiniband/hw/hfi1/hfi.h +++ b/drivers/infiniband/hw/hfi1/hfi.h @@ -1804,13 +1804,20 @@ static inline struct hfi1_ibport *rcd_to_iport(struct hfi1_ctxtdata *rcd) return &rcd->ppd->ibport_data; } -void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt, - bool do_cnp); -static inline bool process_ecn(struct rvt_qp *qp, struct hfi1_packet *pkt, - bool do_cnp) +/** + * hfi1_may_ecn - Check whether FECN or BECN processing should be done + * @pkt: the packet to be evaluated + * + * Check whether the FECN or BECN bits in the packet's header are + * enabled, depending on packet type. + * + * This function only checks for FECN and BECN bits. Additional checks + * are done in the slowpath (hfi1_process_ecn_slowpath()) in order to + * ensure correct handling. 
+ */ +static inline bool hfi1_may_ecn(struct hfi1_packet *pkt) { - bool becn; - bool fecn; + bool fecn, becn; if (pkt->etype == RHF_RCV_TYPE_BYPASS) { fecn = hfi1_16B_get_fecn(pkt->hdr); @@ -1819,10 +1826,18 @@ static inline bool process_ecn(struct rvt_qp *qp, struct hfi1_packet *pkt, fecn = ib_bth_get_fecn(pkt->ohdr); becn = ib_bth_get_becn(pkt->ohdr); } - if (unlikely(fecn || becn)) { - hfi1_process_ecn_slowpath(qp, pkt, do_cnp); - return fecn; - } + return fecn || becn; +} + +bool hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt, + bool prescan); +static inline bool process_ecn(struct rvt_qp *qp, struct hfi1_packet *pkt) +{ + bool do_work; + + do_work = hfi1_may_ecn(pkt); + if (unlikely(do_work)) + return hfi1_process_ecn_slowpath(qp, pkt, false); return false; } diff --git a/drivers/infiniband/hw/hfi1/mad.c b/drivers/infiniband/hw/hfi1/mad.c index 88a0cf930136..4228393e6c4c 100644 --- a/drivers/infiniband/hw/hfi1/mad.c +++ b/drivers/infiniband/hw/hfi1/mad.c @@ -305,7 +305,7 @@ static struct ib_ah *hfi1_create_qp0_ah(struct hfi1_ibport *ibp, u32 dlid) rcu_read_lock(); qp0 = rcu_dereference(ibp->rvp.qp[0]); if (qp0) - ah = rdma_create_ah(qp0->ibqp.pd, &attr); + ah = rdma_create_ah(qp0->ibqp.pd, &attr, 0); rcu_read_unlock(); return ah; } diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c index 475b769e120c..14d2a90964c3 100644 --- a/drivers/infiniband/hw/hfi1/mmu_rb.c +++ b/drivers/infiniband/hw/hfi1/mmu_rb.c @@ -68,8 +68,7 @@ struct mmu_rb_handler { static unsigned long mmu_node_start(struct mmu_rb_node *); static unsigned long mmu_node_last(struct mmu_rb_node *); static int mmu_notifier_range_start(struct mmu_notifier *, - struct mm_struct *, - unsigned long, unsigned long, bool); + const struct mmu_notifier_range *); static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *, unsigned long, unsigned long); static void do_remove(struct mmu_rb_handler *handler, @@ -284,10 +283,7 @@ void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler, } static int mmu_notifier_range_start(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct mmu_rb_handler *handler = container_of(mn, struct mmu_rb_handler, mn); @@ -297,10 +293,11 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn, bool added = false; spin_lock_irqsave(&handler->lock, flags); - for (node = __mmu_int_rb_iter_first(root, start, end - 1); + for (node = __mmu_int_rb_iter_first(root, range->start, range->end-1); node; node = ptr) { /* Guard against node removal. 
*/ - ptr = __mmu_int_rb_iter_next(node, start, end - 1); + ptr = __mmu_int_rb_iter_next(node, range->start, + range->end - 1); trace_hfi1_mmu_mem_invalidate(node->addr, node->len); if (handler->ops->invalidate(handler->ops_arg, node)) { __mmu_int_rb_remove(node, root); diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c index 9ab50d2308dc..dd5a5c030066 100644 --- a/drivers/infiniband/hw/hfi1/pio.c +++ b/drivers/infiniband/hw/hfi1/pio.c @@ -742,6 +742,7 @@ struct send_context *sc_alloc(struct hfi1_devdata *dd, int type, spin_lock_init(&sc->alloc_lock); spin_lock_init(&sc->release_lock); spin_lock_init(&sc->credit_ctrl_lock); + seqlock_init(&sc->waitlock); INIT_LIST_HEAD(&sc->piowait); INIT_WORK(&sc->halt_work, sc_halted); init_waitqueue_head(&sc->halt_wait); @@ -1593,7 +1594,6 @@ void hfi1_sc_wantpiobuf_intr(struct send_context *sc, u32 needint) static void sc_piobufavail(struct send_context *sc) { struct hfi1_devdata *dd = sc->dd; - struct hfi1_ibdev *dev = &dd->verbs_dev; struct list_head *list; struct rvt_qp *qps[PIO_WAIT_BATCH_SIZE]; struct rvt_qp *qp; @@ -1612,7 +1612,7 @@ static void sc_piobufavail(struct send_context *sc) * could end up with QPs on the wait list with the interrupt * disabled. */ - write_seqlock_irqsave(&dev->iowait_lock, flags); + write_seqlock_irqsave(&sc->waitlock, flags); while (!list_empty(list)) { struct iowait *wait; @@ -1636,7 +1636,7 @@ static void sc_piobufavail(struct send_context *sc) if (!list_empty(list)) hfi1_sc_wantpiobuf_intr(sc, 1); } - write_sequnlock_irqrestore(&dev->iowait_lock, flags); + write_sequnlock_irqrestore(&sc->waitlock, flags); /* Wake up the most starved one first */ if (n) @@ -2137,3 +2137,28 @@ void free_credit_return(struct hfi1_devdata *dd) kfree(dd->cr_base); dd->cr_base = NULL; } + +void seqfile_dump_sci(struct seq_file *s, u32 i, + struct send_context_info *sci) +{ + struct send_context *sc = sci->sc; + u64 reg; + + seq_printf(s, "SCI %u: type %u base %u credits %u\n", + i, sci->type, sci->base, sci->credits); + seq_printf(s, " flags 0x%x sw_inx %u hw_ctxt %u grp %u\n", + sc->flags, sc->sw_index, sc->hw_context, sc->group); + seq_printf(s, " sr_size %u credits %u sr_head %u sr_tail %u\n", + sc->sr_size, sc->credits, sc->sr_head, sc->sr_tail); + seq_printf(s, " fill %lu free %lu fill_wrap %u alloc_free %lu\n", + sc->fill, sc->free, sc->fill_wrap, sc->alloc_free); + seq_printf(s, " credit_intr_count %u credit_ctrl 0x%llx\n", + sc->credit_intr_count, sc->credit_ctrl); + reg = read_kctxt_csr(sc->dd, sc->hw_context, SC(CREDIT_STATUS)); + seq_printf(s, " *hw_free %llu CurrentFree %llu LastReturned %llu\n", + (le64_to_cpu(*sc->hw_free) & CR_COUNTER_SMASK) >> + CR_COUNTER_SHIFT, + (reg >> SC(CREDIT_STATUS_CURRENT_FREE_COUNTER_SHIFT)) & + SC(CREDIT_STATUS_CURRENT_FREE_COUNTER_MASK), + reg & SC(CREDIT_STATUS_LAST_RETURNED_COUNTER_SMASK)); +} diff --git a/drivers/infiniband/hw/hfi1/pio.h b/drivers/infiniband/hw/hfi1/pio.h index aaf372c3e5d6..c9a58b642bdd 100644 --- a/drivers/infiniband/hw/hfi1/pio.h +++ b/drivers/infiniband/hw/hfi1/pio.h @@ -127,6 +127,8 @@ struct send_context { volatile __le64 *hw_free; /* HW free counter */ /* list for PIO waiters */ struct list_head piowait ____cacheline_aligned_in_smp; + seqlock_t waitlock; + spinlock_t credit_ctrl_lock ____cacheline_aligned_in_smp; u32 credit_intr_count; /* count of credit intr users */ u64 credit_ctrl; /* cache for credit control */ @@ -329,4 +331,7 @@ void seg_pio_copy_start(struct pio_buf *pbuf, u64 pbc, void seg_pio_copy_mid(struct pio_buf *pbuf, const 
void *from, size_t nbytes); void seg_pio_copy_end(struct pio_buf *pbuf); +void seqfile_dump_sci(struct seq_file *s, u32 i, + struct send_context_info *sci); + #endif /* _PIO_H */ diff --git a/drivers/infiniband/hw/hfi1/qp.c b/drivers/infiniband/hw/hfi1/qp.c index 1a016248039f..5344e8993b28 100644 --- a/drivers/infiniband/hw/hfi1/qp.c +++ b/drivers/infiniband/hw/hfi1/qp.c @@ -375,20 +375,18 @@ bool _hfi1_schedule_send(struct rvt_qp *qp) static void qp_pio_drain(struct rvt_qp *qp) { - struct hfi1_ibdev *dev; struct hfi1_qp_priv *priv = qp->priv; if (!priv->s_sendcontext) return; - dev = to_idev(qp->ibqp.device); while (iowait_pio_pending(&priv->s_iowait)) { - write_seqlock_irq(&dev->iowait_lock); + write_seqlock_irq(&priv->s_sendcontext->waitlock); hfi1_sc_wantpiobuf_intr(priv->s_sendcontext, 1); - write_sequnlock_irq(&dev->iowait_lock); + write_sequnlock_irq(&priv->s_sendcontext->waitlock); iowait_pio_drain(&priv->s_iowait); - write_seqlock_irq(&dev->iowait_lock); + write_seqlock_irq(&priv->s_sendcontext->waitlock); hfi1_sc_wantpiobuf_intr(priv->s_sendcontext, 0); - write_sequnlock_irq(&dev->iowait_lock); + write_sequnlock_irq(&priv->s_sendcontext->waitlock); } } @@ -459,7 +457,6 @@ static int iowait_sleep( struct hfi1_qp_priv *priv; unsigned long flags; int ret = 0; - struct hfi1_ibdev *dev; qp = tx->qp; priv = qp->priv; @@ -472,9 +469,8 @@ static int iowait_sleep( * buffer and undoing the side effects of the copy. */ /* Make a common routine? */ - dev = &sde->dd->verbs_dev; list_add_tail(&stx->list, &wait->tx_head); - write_seqlock(&dev->iowait_lock); + write_seqlock(&sde->waitlock); if (sdma_progress(sde, seq, stx)) goto eagain; if (list_empty(&priv->s_iowait.list)) { @@ -485,11 +481,11 @@ static int iowait_sleep( qp->s_flags |= RVT_S_WAIT_DMA_DESC; iowait_queue(pkts_sent, &priv->s_iowait, &sde->dmawait); - priv->s_iowait.lock = &dev->iowait_lock; + priv->s_iowait.lock = &sde->waitlock; trace_hfi1_qpsleep(qp, RVT_S_WAIT_DMA_DESC); rvt_get_qp(qp); } - write_sequnlock(&dev->iowait_lock); + write_sequnlock(&sde->waitlock); hfi1_qp_unbusy(qp, wait); spin_unlock_irqrestore(&qp->s_lock, flags); ret = -EBUSY; @@ -499,7 +495,7 @@ static int iowait_sleep( } return ret; eagain: - write_sequnlock(&dev->iowait_lock); + write_sequnlock(&sde->waitlock); spin_unlock_irqrestore(&qp->s_lock, flags); list_del_init(&stx->list); return -EAGAIN; diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c index 188aa4f686a0..be603f35d7e4 100644 --- a/drivers/infiniband/hw/hfi1/rc.c +++ b/drivers/infiniband/hw/hfi1/rc.c @@ -1157,6 +1157,7 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah) if (cmp_psn(wqe->lpsn, qp->s_sending_psn) >= 0 && cmp_psn(qp->s_sending_psn, qp->s_sending_hpsn) <= 0) break; + rvt_qp_wqe_unreserve(qp, wqe); s_last = qp->s_last; trace_hfi1_qp_send_completion(qp, wqe, s_last); if (++s_last >= qp->s_size) @@ -1209,6 +1210,7 @@ static struct rvt_swqe *do_rc_completion(struct rvt_qp *qp, u32 s_last; rvt_put_swqe(wqe); + rvt_qp_wqe_unreserve(qp, wqe); s_last = qp->s_last; trace_hfi1_qp_send_completion(qp, wqe, s_last); if (++s_last >= qp->s_size) @@ -2049,8 +2051,7 @@ void hfi1_rc_rcv(struct hfi1_packet *packet) struct ib_reth *reth; unsigned long flags; int ret; - bool is_fecn = false; - bool copy_last = false; + bool copy_last = false, fecn; u32 rkey; u8 extra_bytes = pad + packet->extra_byte + (SIZE_OF_CRC << 2); @@ -2059,7 +2060,7 @@ void hfi1_rc_rcv(struct hfi1_packet *packet) if (hfi1_ruc_check_hdr(ibp, packet)) return; - is_fecn = 
process_ecn(qp, packet, false); + fecn = process_ecn(qp, packet); /* * Process responses (ACKs) before anything else. Note that the @@ -2070,8 +2071,6 @@ void hfi1_rc_rcv(struct hfi1_packet *packet) if (opcode >= OP(RDMA_READ_RESPONSE_FIRST) && opcode <= OP(ATOMIC_ACKNOWLEDGE)) { rc_rcv_resp(packet); - if (is_fecn) - goto send_ack; return; } @@ -2347,11 +2346,11 @@ send_last: /* Schedule the send engine. */ qp->s_flags |= RVT_S_RESP_PENDING; + if (fecn) + qp->s_flags |= RVT_S_ECN; hfi1_schedule_send(qp); spin_unlock_irqrestore(&qp->s_lock, flags); - if (is_fecn) - goto send_ack; return; } @@ -2413,11 +2412,11 @@ send_last: /* Schedule the send engine. */ qp->s_flags |= RVT_S_RESP_PENDING; + if (fecn) + qp->s_flags |= RVT_S_ECN; hfi1_schedule_send(qp); spin_unlock_irqrestore(&qp->s_lock, flags); - if (is_fecn) - goto send_ack; return; } @@ -2430,16 +2429,9 @@ send_last: qp->r_ack_psn = psn; qp->r_nak_state = 0; /* Send an ACK if requested or required. */ - if (psn & IB_BTH_REQ_ACK) { - if (packet->numpkt == 0) { - rc_cancel_ack(qp); - goto send_ack; - } - if (qp->r_adefered >= HFI1_PSN_CREDIT) { - rc_cancel_ack(qp); - goto send_ack; - } - if (unlikely(is_fecn)) { + if (psn & IB_BTH_REQ_ACK || fecn) { + if (packet->numpkt == 0 || fecn || + qp->r_adefered >= HFI1_PSN_CREDIT) { rc_cancel_ack(qp); goto send_ack; } @@ -2480,7 +2472,7 @@ nack_acc: qp->r_nak_state = IB_NAK_REMOTE_ACCESS_ERROR; qp->r_ack_psn = qp->r_psn; send_ack: - hfi1_send_rc_ack(packet, is_fecn); + hfi1_send_rc_ack(packet, fecn); } void hfi1_rc_hdrerr( diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c index 891d2386d1ca..b84356e1a4c1 100644 --- a/drivers/infiniband/hw/hfi1/sdma.c +++ b/drivers/infiniband/hw/hfi1/sdma.c @@ -1424,6 +1424,7 @@ int sdma_init(struct hfi1_devdata *dd, u8 port) seqlock_init(&sde->head_lock); spin_lock_init(&sde->senddmactrl_lock); spin_lock_init(&sde->flushlist_lock); + seqlock_init(&sde->waitlock); /* insure there is always a zero bit */ sde->ahg_bits = 0xfffffffe00000000ULL; @@ -1758,7 +1759,6 @@ static void sdma_desc_avail(struct sdma_engine *sde, uint avail) struct iowait *wait, *nw; struct iowait *waits[SDMA_WAIT_BATCH_SIZE]; uint i, n = 0, seq, max_idx = 0; - struct hfi1_ibdev *dev = &sde->dd->verbs_dev; u8 max_starved_cnt = 0; #ifdef CONFIG_SDMA_VERBOSITY @@ -1768,10 +1768,10 @@ static void sdma_desc_avail(struct sdma_engine *sde, uint avail) #endif do { - seq = read_seqbegin(&dev->iowait_lock); + seq = read_seqbegin(&sde->waitlock); if (!list_empty(&sde->dmawait)) { /* at least one item */ - write_seqlock(&dev->iowait_lock); + write_seqlock(&sde->waitlock); /* Harvest waiters wanting DMA descriptors */ list_for_each_entry_safe( wait, @@ -1794,10 +1794,10 @@ static void sdma_desc_avail(struct sdma_engine *sde, uint avail) list_del_init(&wait->list); waits[n++] = wait; } - write_sequnlock(&dev->iowait_lock); + write_sequnlock(&sde->waitlock); break; } - } while (read_seqretry(&dev->iowait_lock, seq)); + } while (read_seqretry(&sde->waitlock, seq)); /* Schedule the most starved one first */ if (n) diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h index 6dc63d7c5685..1e2e40f79cb2 100644 --- a/drivers/infiniband/hw/hfi1/sdma.h +++ b/drivers/infiniband/hw/hfi1/sdma.h @@ -382,6 +382,7 @@ struct sdma_engine { u64 progress_int_cnt; /* private: */ + seqlock_t waitlock; struct list_head dmawait; /* CONFIG SDMA for now, just blindly duplicate */ diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c new 
file mode 100644 index 000000000000..da1ecb68a928 --- /dev/null +++ b/drivers/infiniband/hw/hfi1/tid_rdma.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) +/* + * Copyright(c) 2018 Intel Corporation. + * + */ + +#include "hfi.h" +#include "verbs.h" +#include "tid_rdma.h" + +/** + * qp_to_rcd - determine the receive context used by a qp + * @qp - the qp + * + * This routine returns the receive context associated + * with a qp's qpn. + * + * Returns the context. + */ +static struct hfi1_ctxtdata *qp_to_rcd(struct rvt_dev_info *rdi, + struct rvt_qp *qp) +{ + struct hfi1_ibdev *verbs_dev = container_of(rdi, + struct hfi1_ibdev, + rdi); + struct hfi1_devdata *dd = container_of(verbs_dev, + struct hfi1_devdata, + verbs_dev); + unsigned int ctxt; + + if (qp->ibqp.qp_num == 0) + ctxt = 0; + else + ctxt = ((qp->ibqp.qp_num >> dd->qos_shift) % + (dd->n_krcv_queues - 1)) + 1; + + return dd->rcd[ctxt]; +} + +int hfi1_qp_priv_init(struct rvt_dev_info *rdi, struct rvt_qp *qp, + struct ib_qp_init_attr *init_attr) +{ + struct hfi1_qp_priv *qpriv = qp->priv; + + qpriv->rcd = qp_to_rcd(rdi, qp); + + return 0; +} diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.h b/drivers/infiniband/hw/hfi1/tid_rdma.h new file mode 100644 index 000000000000..6fcd3adcdcc3 --- /dev/null +++ b/drivers/infiniband/hw/hfi1/tid_rdma.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* + * Copyright(c) 2018 Intel Corporation. + * + */ +#ifndef HFI1_TID_RDMA_H +#define HFI1_TID_RDMA_H + +int hfi1_qp_priv_init(struct rvt_dev_info *rdi, struct rvt_qp *qp, + struct ib_qp_init_attr *init_attr); + +#endif /* HFI1_TID_RDMA_H */ + diff --git a/drivers/infiniband/hw/hfi1/uc.c b/drivers/infiniband/hw/hfi1/uc.c index 6aca0c5a7f97..6ba47037c424 100644 --- a/drivers/infiniband/hw/hfi1/uc.c +++ b/drivers/infiniband/hw/hfi1/uc.c @@ -321,7 +321,7 @@ void hfi1_uc_rcv(struct hfi1_packet *packet) if (hfi1_ruc_check_hdr(ibp, packet)) return; - process_ecn(qp, packet, true); + process_ecn(qp, packet); psn = ib_bth_get_psn(ohdr); /* Compare the PSN versus the expected PSN.
*/ diff --git a/drivers/infiniband/hw/hfi1/ud.c b/drivers/infiniband/hw/hfi1/ud.c index 4baa8f4d49de..88242fe95eaa 100644 --- a/drivers/infiniband/hw/hfi1/ud.c +++ b/drivers/infiniband/hw/hfi1/ud.c @@ -51,6 +51,7 @@ #include "hfi.h" #include "mad.h" #include "verbs_txreq.h" +#include "trace_ibhdrs.h" #include "qp.h" /* We support only two types - 9B and 16B for now */ @@ -656,18 +657,19 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 bth0, plen, vl, hwords = 7; u16 len; u8 l4; - struct hfi1_16b_header hdr; + struct hfi1_opa_header hdr; struct ib_other_headers *ohdr; struct pio_buf *pbuf; struct send_context *ctxt = qp_to_send_context(qp, sc5); struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); u32 nwords; + hdr.hdr_type = HFI1_PKT_TYPE_16B; /* Populate length */ nwords = ((hfi1_get_16b_padding(hwords << 2, 0) + SIZE_OF_LT) >> 2) + SIZE_OF_CRC; if (old_grh) { - struct ib_grh *grh = &hdr.u.l.grh; + struct ib_grh *grh = &hdr.opah.u.l.grh; grh->version_tclass_flow = old_grh->version_tclass_flow; grh->paylen = cpu_to_be16( @@ -675,11 +677,11 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, grh->hop_limit = 0xff; grh->sgid = old_grh->dgid; grh->dgid = old_grh->sgid; - ohdr = &hdr.u.l.oth; + ohdr = &hdr.opah.u.l.oth; l4 = OPA_16B_L4_IB_GLOBAL; hwords += sizeof(struct ib_grh) / sizeof(u32); } else { - ohdr = &hdr.u.oth; + ohdr = &hdr.opah.u.oth; l4 = OPA_16B_L4_IB_LOCAL; } @@ -693,7 +695,7 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, /* Convert dwords to flits */ len = (hwords + nwords) >> 1; - hfi1_make_16b_hdr(&hdr, slid, dlid, len, pkey, 1, 0, l4, sc5); + hfi1_make_16b_hdr(&hdr.opah, slid, dlid, len, pkey, 1, 0, l4, sc5); plen = 2 /* PBC */ + hwords + nwords; pbc_flags |= PBC_PACKET_BYPASS | PBC_INSERT_BYPASS_ICRC; @@ -701,9 +703,11 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, pbc = create_pbc(ppd, pbc_flags, qp->srate_mbps, vl, plen); if (ctxt) { pbuf = sc_buffer_alloc(ctxt, plen, NULL, NULL); - if (pbuf) + if (pbuf) { + trace_pio_output_ibhdr(ppd->dd, &hdr, sc5); ppd->dd->pio_inline_send(ppd->dd, pbuf, pbc, &hdr, hwords); + } } } @@ -715,14 +719,15 @@ void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, u32 bth0, plen, vl, hwords = 5; u16 lrh0; u8 sl = ibp->sc_to_sl[sc5]; - struct ib_header hdr; + struct hfi1_opa_header hdr; struct ib_other_headers *ohdr; struct pio_buf *pbuf; struct send_context *ctxt = qp_to_send_context(qp, sc5); struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); + hdr.hdr_type = HFI1_PKT_TYPE_9B; if (old_grh) { - struct ib_grh *grh = &hdr.u.l.grh; + struct ib_grh *grh = &hdr.ibh.u.l.grh; grh->version_tclass_flow = old_grh->version_tclass_flow; grh->paylen = cpu_to_be16( @@ -730,11 +735,11 @@ void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, grh->hop_limit = 0xff; grh->sgid = old_grh->dgid; grh->dgid = old_grh->sgid; - ohdr = &hdr.u.l.oth; + ohdr = &hdr.ibh.u.l.oth; lrh0 = HFI1_LRH_GRH; hwords += sizeof(struct ib_grh) / sizeof(u32); } else { - ohdr = &hdr.u.oth; + ohdr = &hdr.ibh.u.oth; lrh0 = HFI1_LRH_BTH; } @@ -746,16 +751,18 @@ void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, ohdr->bth[1] = cpu_to_be32(remote_qpn | (1 << IB_BECN_SHIFT)); ohdr->bth[2] = 0; /* PSN 0 */ - hfi1_make_ib_hdr(&hdr, lrh0, hwords + SIZE_OF_CRC, dlid, slid); + hfi1_make_ib_hdr(&hdr.ibh, lrh0, hwords + SIZE_OF_CRC, dlid, slid); plen = 2 /* PBC */ + hwords; pbc_flags |= (ib_is_sc5(sc5) << PBC_DC_INFO_SHIFT); vl = sc_to_vlt(ppd->dd, sc5); pbc = 
create_pbc(ppd, pbc_flags, qp->srate_mbps, vl, plen); if (ctxt) { pbuf = sc_buffer_alloc(ctxt, plen, NULL, NULL); - if (pbuf) + if (pbuf) { + trace_pio_output_ibhdr(ppd->dd, &hdr, sc5); ppd->dd->pio_inline_send(ppd->dd, pbuf, pbc, &hdr, hwords); + } } } @@ -912,7 +919,7 @@ void hfi1_ud_rcv(struct hfi1_packet *packet) src_qp = hfi1_16B_get_src_qpn(packet->mgmt); } - process_ecn(qp, packet, (opcode != IB_OPCODE_CNP)); + process_ecn(qp, packet); /* * Get the number of bytes the message was padded by * and drop incomplete packets. diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c index 3f0aadccd9f6..e5e7fad09f32 100644 --- a/drivers/infiniband/hw/hfi1/user_sdma.c +++ b/drivers/infiniband/hw/hfi1/user_sdma.c @@ -130,7 +130,6 @@ static int defer_packet_queue( { struct hfi1_user_sdma_pkt_q *pq = container_of(wait->iow, struct hfi1_user_sdma_pkt_q, busy); - struct hfi1_ibdev *dev = &pq->dd->verbs_dev; struct user_sdma_txreq *tx = container_of(txreq, struct user_sdma_txreq, txreq); @@ -144,10 +143,10 @@ static int defer_packet_queue( * it is supposed to be enqueued. */ xchg(&pq->state, SDMA_PKT_Q_DEFERRED); - write_seqlock(&dev->iowait_lock); + write_seqlock(&sde->waitlock); if (list_empty(&pq->busy.list)) iowait_queue(pkts_sent, &pq->busy, &sde->dmawait); - write_sequnlock(&dev->iowait_lock); + write_sequnlock(&sde->waitlock); return -EBUSY; eagain: return -EAGAIN; diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c index a365089a9305..ec582d86025f 100644 --- a/drivers/infiniband/hw/hfi1/verbs.c +++ b/drivers/infiniband/hw/hfi1/verbs.c @@ -765,7 +765,6 @@ static int pio_wait(struct rvt_qp *qp, { struct hfi1_qp_priv *priv = qp->priv; struct hfi1_devdata *dd = sc->dd; - struct hfi1_ibdev *dev = &dd->verbs_dev; unsigned long flags; int ret = 0; @@ -777,7 +776,7 @@ static int pio_wait(struct rvt_qp *qp, */ spin_lock_irqsave(&qp->s_lock, flags); if (ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) { - write_seqlock(&dev->iowait_lock); + write_seqlock(&sc->waitlock); list_add_tail(&ps->s_txreq->txreq.list, &ps->wait->tx_head); if (list_empty(&priv->s_iowait.list)) { @@ -790,14 +789,14 @@ static int pio_wait(struct rvt_qp *qp, was_empty = list_empty(&sc->piowait); iowait_queue(ps->pkts_sent, &priv->s_iowait, &sc->piowait); - priv->s_iowait.lock = &dev->iowait_lock; + priv->s_iowait.lock = &sc->waitlock; trace_hfi1_qpsleep(qp, RVT_S_WAIT_PIO); rvt_get_qp(qp); /* counting: only call wantpiobuf_intr if first user */ if (was_empty) hfi1_sc_wantpiobuf_intr(sc, 1); } - write_sequnlock(&dev->iowait_lock); + write_sequnlock(&sc->waitlock); hfi1_qp_unbusy(qp, ps->wait); ret = -EBUSY; } @@ -919,6 +918,8 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps, if (slen > len) slen = len; + if (slen > ss->sge.sge_length) + slen = ss->sge.sge_length; rvt_update_sge(ss, slen, false); seg_pio_copy_mid(pbuf, addr, slen); len -= slen; @@ -1616,6 +1617,16 @@ static int get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats, return count; } +static const struct ib_device_ops hfi1_dev_ops = { + .alloc_hw_stats = alloc_hw_stats, + .alloc_rdma_netdev = hfi1_vnic_alloc_rn, + .get_dev_fw_str = hfi1_get_dev_fw_str, + .get_hw_stats = get_hw_stats, + .modify_device = modify_device, + /* keep process mad in the driver */ + .process_mad = hfi1_process_mad, +}; + /** * hfi1_register_ib_device - register our device with the infiniband core * @dd: the device data structure @@ -1659,14 +1670,8 @@ int hfi1_register_ib_device(struct 
hfi1_devdata *dd) ibdev->owner = THIS_MODULE; ibdev->phys_port_cnt = dd->num_pports; ibdev->dev.parent = &dd->pcidev->dev; - ibdev->modify_device = modify_device; - ibdev->alloc_hw_stats = alloc_hw_stats; - ibdev->get_hw_stats = get_hw_stats; - ibdev->alloc_rdma_netdev = hfi1_vnic_alloc_rn; - /* keep process mad in the driver */ - ibdev->process_mad = hfi1_process_mad; - ibdev->get_dev_fw_str = hfi1_get_dev_fw_str; + ib_set_device_ops(ibdev, &hfi1_dev_ops); strlcpy(ibdev->node_desc, init_utsname()->nodename, sizeof(ibdev->node_desc)); @@ -1704,6 +1709,7 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd) dd->verbs_dev.rdi.dparms.max_mad_size = OPA_MGMT_MAD_SIZE; dd->verbs_dev.rdi.driver_f.qp_priv_alloc = qp_priv_alloc; + dd->verbs_dev.rdi.driver_f.qp_priv_init = hfi1_qp_priv_init; dd->verbs_dev.rdi.driver_f.qp_priv_free = qp_priv_free; dd->verbs_dev.rdi.driver_f.free_all_qps = free_all_qps; dd->verbs_dev.rdi.driver_f.notify_qp_reset = notify_qp_reset; diff --git a/drivers/infiniband/hw/hfi1/verbs.h b/drivers/infiniband/hw/hfi1/verbs.h index 64c9054db5f3..1ad0b14bdb3c 100644 --- a/drivers/infiniband/hw/hfi1/verbs.h +++ b/drivers/infiniband/hw/hfi1/verbs.h @@ -71,6 +71,7 @@ struct hfi1_devdata; struct hfi1_packet; #include "iowait.h" +#include "tid_rdma.h" #define HFI1_MAX_RDMA_ATOMIC 16 @@ -156,6 +157,7 @@ struct hfi1_qp_priv { struct hfi1_ahg_info *s_ahg; /* ahg info for next header */ struct sdma_engine *s_sde; /* current sde */ struct send_context *s_sendcontext; /* current sendcontext */ + struct hfi1_ctxtdata *rcd; /* QP's receive context */ u8 s_sc; /* SC[0..4] for next packet */ struct iowait s_iowait; struct rvt_qp *owner; diff --git a/drivers/infiniband/hw/hfi1/vnic_main.c b/drivers/infiniband/hw/hfi1/vnic_main.c index c9876d9e3cb9..a922db58be14 100644 --- a/drivers/infiniband/hw/hfi1/vnic_main.c +++ b/drivers/infiniband/hw/hfi1/vnic_main.c @@ -816,14 +816,14 @@ struct net_device *hfi1_vnic_alloc_rn(struct ib_device *device, size = sizeof(struct opa_vnic_rdma_netdev) + sizeof(*vinfo); netdev = alloc_netdev_mqs(size, name, name_assign_type, setup, - chip_sdma_engines(dd), dd->num_vnic_contexts); + dd->num_sdma, dd->num_vnic_contexts); if (!netdev) return ERR_PTR(-ENOMEM); rn = netdev_priv(netdev); vinfo = opa_vnic_dev_priv(netdev); vinfo->dd = dd; - vinfo->num_tx_q = chip_sdma_engines(dd); + vinfo->num_tx_q = dd->num_sdma; vinfo->num_rx_q = dd->num_vnic_contexts; vinfo->netdev = netdev; rn->free_rdma_netdev = hfi1_vnic_free_rn; diff --git a/drivers/infiniband/hw/hfi1/vnic_sdma.c b/drivers/infiniband/hw/hfi1/vnic_sdma.c index 97bd940a056a..1f81c480e028 100644 --- a/drivers/infiniband/hw/hfi1/vnic_sdma.c +++ b/drivers/infiniband/hw/hfi1/vnic_sdma.c @@ -57,7 +57,6 @@ #define HFI1_VNIC_TXREQ_NAME_LEN 32 #define HFI1_VNIC_SDMA_DESC_WTRMRK 64 -#define HFI1_VNIC_SDMA_RETRY_COUNT 1 /* * struct vnic_txreq - VNIC transmit descriptor @@ -67,7 +66,6 @@ * @pad: pad buffer * @plen: pad length * @pbc_val: pbc value - * @retry_count: tx retry count */ struct vnic_txreq { struct sdma_txreq txreq; @@ -77,8 +75,6 @@ struct vnic_txreq { unsigned char pad[HFI1_VNIC_MAX_PAD]; u16 plen; __le64 pbc_val; - - u32 retry_count; }; static void vnic_sdma_complete(struct sdma_txreq *txreq, @@ -196,7 +192,6 @@ int hfi1_vnic_send_dma(struct hfi1_devdata *dd, u8 q_idx, ret = build_vnic_tx_desc(sde, tx, pbc); if (unlikely(ret)) goto free_desc; - tx->retry_count = 0; ret = sdma_send_txreq(sde, iowait_get_ib_work(&vnic_sdma->wait), &tx->txreq, vnic_sdma->pkts_sent); @@ -237,18 +232,17 @@ static int 
hfi1_vnic_sdma_sleep(struct sdma_engine *sde, { struct hfi1_vnic_sdma *vnic_sdma = container_of(wait->iow, struct hfi1_vnic_sdma, wait); - struct hfi1_ibdev *dev = &vnic_sdma->dd->verbs_dev; - struct vnic_txreq *tx = container_of(txreq, struct vnic_txreq, txreq); - if (sdma_progress(sde, seq, txreq)) - if (tx->retry_count++ < HFI1_VNIC_SDMA_RETRY_COUNT) - return -EAGAIN; + write_seqlock(&sde->waitlock); + if (sdma_progress(sde, seq, txreq)) { + write_sequnlock(&sde->waitlock); + return -EAGAIN; + } vnic_sdma->state = HFI1_VNIC_SDMA_Q_DEFERRED; - write_seqlock(&dev->iowait_lock); if (list_empty(&vnic_sdma->wait.list)) iowait_queue(pkts_sent, wait->iow, &sde->dmawait); - write_sequnlock(&dev->iowait_lock); + write_sequnlock(&sde->waitlock); return -EBUSY; } diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile index cf03404b9d58..004c88b32e13 100644 --- a/drivers/infiniband/hw/hns/Makefile +++ b/drivers/infiniband/hw/hns/Makefile @@ -7,7 +7,7 @@ ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3 obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \ hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \ - hns_roce_cq.o hns_roce_alloc.o hns_roce_db.o + hns_roce_cq.o hns_roce_alloc.o hns_roce_db.o hns_roce_srq.o obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o hns-roce-hw-v1-objs := hns_roce_hw_v1.o obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns-roce-hw-v2.o diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c index 9990dc9eb96a..b3c8c45ec1e3 100644 --- a/drivers/infiniband/hw/hns/hns_roce_ah.c +++ b/drivers/infiniband/hw/hns/hns_roce_ah.c @@ -41,6 +41,7 @@ struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata) { struct hns_roce_dev *hr_dev = to_hr_dev(ibpd->device); @@ -110,7 +111,7 @@ int hns_roce_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr) return 0; } -int hns_roce_destroy_ah(struct ib_ah *ah) +int hns_roce_destroy_ah(struct ib_ah *ah, u32 flags) { kfree(to_hr_ah(ah)); diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c index 46f65f9f59d0..6300033a448f 100644 --- a/drivers/infiniband/hw/hns/hns_roce_alloc.c +++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c @@ -239,6 +239,8 @@ err_free: void hns_roce_cleanup_bitmap(struct hns_roce_dev *hr_dev) { + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) + hns_roce_cleanup_srq_table(hr_dev); hns_roce_cleanup_qp_table(hr_dev); hns_roce_cleanup_cq_table(hr_dev); hns_roce_cleanup_mr_table(hr_dev); diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.h b/drivers/infiniband/hw/hns/hns_roce_cmd.h index 9549ae51a0dd..927701df5eff 100644 --- a/drivers/infiniband/hw/hns/hns_roce_cmd.h +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.h @@ -120,6 +120,10 @@ enum { HNS_ROCE_CMD_SQD2RTS_QP = 0x20, HNS_ROCE_CMD_2RST_QP = 0x21, HNS_ROCE_CMD_QUERY_QP = 0x22, + HNS_ROCE_CMD_SW2HW_SRQ = 0x70, + HNS_ROCE_CMD_MODIFY_SRQC = 0x72, + HNS_ROCE_CMD_QUERY_SRQC = 0x73, + HNS_ROCE_CMD_HW2SW_SRQ = 0x74, }; int hns_roce_cmd_mbox(struct hns_roce_dev *hr_dev, u64 in_param, u64 out_param, diff --git a/drivers/infiniband/hw/hns/hns_roce_common.h b/drivers/infiniband/hw/hns/hns_roce_common.h index 93d4b4ec002d..f4c92a7ac1ce 100644 --- a/drivers/infiniband/hw/hns/hns_roce_common.h +++ b/drivers/infiniband/hw/hns/hns_roce_common.h @@ -376,9 +376,6 @@ #define ROCEE_RX_CMQ_TAIL_REG 0x07024 #define ROCEE_RX_CMQ_HEAD_REG 0x07028 -#define 
ROCEE_VF_MB_CFG0_REG 0x40 -#define ROCEE_VF_MB_STATUS_REG 0x58 - #define ROCEE_VF_EQ_DB_CFG0_REG 0x238 #define ROCEE_VF_EQ_DB_CFG1_REG 0x23C diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index d39bdfdb5de9..509e467843f6 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -111,6 +111,9 @@ #define PAGES_SHIFT_24 24 #define PAGES_SHIFT_32 32 +#define HNS_ROCE_IDX_QUE_ENTRY_SZ 4 +#define SRQ_DB_REG 0x230 + enum { HNS_ROCE_SUPPORT_RQ_RECORD_DB = 1 << 0, HNS_ROCE_SUPPORT_SQ_RECORD_DB = 1 << 1, @@ -196,6 +199,7 @@ enum { HNS_ROCE_CAP_FLAG_RQ_INLINE = BIT(2), HNS_ROCE_CAP_FLAG_RECORD_DB = BIT(3), HNS_ROCE_CAP_FLAG_SQ_RECORD_DB = BIT(4), + HNS_ROCE_CAP_FLAG_SRQ = BIT(5), HNS_ROCE_CAP_FLAG_MW = BIT(7), HNS_ROCE_CAP_FLAG_FRMR = BIT(8), HNS_ROCE_CAP_FLAG_ATOMIC = BIT(10), @@ -204,6 +208,8 @@ enum { enum hns_roce_mtt_type { MTT_TYPE_WQE, MTT_TYPE_CQE, + MTT_TYPE_SRQWQE, + MTT_TYPE_IDX }; enum { @@ -339,6 +345,10 @@ struct hns_roce_mr_table { struct hns_roce_hem_table mtpt_table; struct hns_roce_buddy mtt_cqe_buddy; struct hns_roce_hem_table mtt_cqe_table; + struct hns_roce_buddy mtt_srqwqe_buddy; + struct hns_roce_hem_table mtt_srqwqe_table; + struct hns_roce_buddy mtt_idx_buddy; + struct hns_roce_hem_table mtt_idx_table; }; struct hns_roce_wq { @@ -429,9 +439,37 @@ struct hns_roce_cq { struct completion free; }; +struct hns_roce_idx_que { + struct hns_roce_buf idx_buf; + int entry_sz; + u32 buf_size; + struct ib_umem *umem; + struct hns_roce_mtt mtt; + u64 *bitmap; +}; + struct hns_roce_srq { struct ib_srq ibsrq; - int srqn; + void (*event)(struct hns_roce_srq *srq, enum hns_roce_event event); + unsigned long srqn; + int max; + int max_gs; + int wqe_shift; + void __iomem *db_reg_l; + + atomic_t refcount; + struct completion free; + + struct hns_roce_buf buf; + u64 *wrid; + struct ib_umem *umem; + struct hns_roce_mtt mtt; + struct hns_roce_idx_que idx_que; + spinlock_t lock; + int head; + int tail; + u16 wqe_ctr; + struct mutex mutex; }; struct hns_roce_uar_table { @@ -453,6 +491,12 @@ struct hns_roce_cq_table { struct hns_roce_hem_table table; }; +struct hns_roce_srq_table { + struct hns_roce_bitmap bitmap; + struct xarray xa; + struct hns_roce_hem_table table; +}; + struct hns_roce_raq_table { struct hns_roce_buf_list *e_raq_buf; }; @@ -602,6 +646,12 @@ struct hns_roce_aeqe { u32 rsv1; } qp_event; + struct { + __le32 srq; + u32 rsv0; + u32 rsv1; + } srq_event; + struct { __le32 cq; u32 rsv0; @@ -679,7 +729,12 @@ struct hns_roce_caps { u32 max_extend_sg; int num_qps; /* 256k */ int reserved_qps; + u32 max_srq_sg; + int num_srqs; u32 max_wqes; /* 16k */ + u32 max_srqs; + u32 max_srq_wrs; + u32 max_srq_sges; u32 max_sq_desc_sz; /* 64 */ u32 max_rq_desc_sz; /* 64 */ u32 max_srq_desc_sz; @@ -690,12 +745,16 @@ struct hns_roce_caps { int min_cqes; u32 min_wqes; int reserved_cqs; + int reserved_srqs; + u32 max_srqwqes; int num_aeq_vectors; /* 1 */ int num_comp_vectors; int num_other_vectors; int num_mtpts; u32 num_mtt_segs; u32 num_cqe_segs; + u32 num_srqwqe_segs; + u32 num_idx_segs; int reserved_mrws; int reserved_uars; int num_pds; @@ -709,6 +768,8 @@ struct hns_roce_caps { int irrl_entry_sz; int trrl_entry_sz; int cqc_entry_sz; + int srqc_entry_sz; + int idx_entry_sz; u32 pbl_ba_pg_sz; u32 pbl_buf_pg_sz; u32 pbl_hop_num; @@ -737,6 +798,12 @@ struct hns_roce_caps { u32 cqe_ba_pg_sz; u32 cqe_buf_pg_sz; u32 cqe_hop_num; + u32 srqwqe_ba_pg_sz; + u32 srqwqe_buf_pg_sz; + u32 srqwqe_hop_num; + u32 
idx_ba_pg_sz; + u32 idx_buf_pg_sz; + u32 idx_hop_num; u32 eqe_ba_pg_sz; u32 eqe_buf_pg_sz; u32 eqe_hop_num; @@ -805,6 +872,19 @@ struct hns_roce_hw { int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period); int (*init_eq)(struct hns_roce_dev *hr_dev); void (*cleanup_eq)(struct hns_roce_dev *hr_dev); + void (*write_srqc)(struct hns_roce_dev *hr_dev, + struct hns_roce_srq *srq, u32 pdn, u16 xrcd, u32 cqn, + void *mb_buf, u64 *mtts_wqe, u64 *mtts_idx, + dma_addr_t dma_handle_wqe, + dma_addr_t dma_handle_idx); + int (*modify_srq)(struct ib_srq *ibsrq, struct ib_srq_attr *srq_attr, + enum ib_srq_attr_mask srq_attr_mask, + struct ib_udata *udata); + int (*query_srq)(struct ib_srq *ibsrq, struct ib_srq_attr *attr); + int (*post_srq_recv)(struct ib_srq *ibsrq, const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad_wr); + const struct ib_device_ops *hns_roce_dev_ops; + const struct ib_device_ops *hns_roce_dev_srq_ops; }; struct hns_roce_dev { @@ -839,6 +919,7 @@ struct hns_roce_dev { struct hns_roce_uar_table uar_table; struct hns_roce_mr_table mr_table; struct hns_roce_cq_table cq_table; + struct hns_roce_srq_table srq_table; struct hns_roce_qp_table qp_table; struct hns_roce_eq_table eq_table; @@ -951,12 +1032,14 @@ int hns_roce_init_mr_table(struct hns_roce_dev *hr_dev); int hns_roce_init_eq_table(struct hns_roce_dev *hr_dev); int hns_roce_init_cq_table(struct hns_roce_dev *hr_dev); int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev); +int hns_roce_init_srq_table(struct hns_roce_dev *hr_dev); void hns_roce_cleanup_pd_table(struct hns_roce_dev *hr_dev); void hns_roce_cleanup_mr_table(struct hns_roce_dev *hr_dev); void hns_roce_cleanup_eq_table(struct hns_roce_dev *hr_dev); void hns_roce_cleanup_cq_table(struct hns_roce_dev *hr_dev); void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev); +void hns_roce_cleanup_srq_table(struct hns_roce_dev *hr_dev); int hns_roce_bitmap_alloc(struct hns_roce_bitmap *bitmap, unsigned long *obj); void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj, @@ -973,9 +1056,10 @@ void hns_roce_bitmap_free_range(struct hns_roce_bitmap *bitmap, struct ib_ah *hns_roce_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata); int hns_roce_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); -int hns_roce_destroy_ah(struct ib_ah *ah); +int hns_roce_destroy_ah(struct ib_ah *ah, u32 flags); struct ib_pd *hns_roce_alloc_pd(struct ib_device *ib_dev, struct ib_ucontext *context, @@ -1011,6 +1095,14 @@ int hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size, u32 max_direct, int hns_roce_ib_umem_write_mtt(struct hns_roce_dev *hr_dev, struct hns_roce_mtt *mtt, struct ib_umem *umem); +struct ib_srq *hns_roce_create_srq(struct ib_pd *pd, + struct ib_srq_init_attr *srq_init_attr, + struct ib_udata *udata); +int hns_roce_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *srq_attr, + enum ib_srq_attr_mask srq_attr_mask, + struct ib_udata *udata); +int hns_roce_destroy_srq(struct ib_srq *ibsrq); + struct ib_qp *hns_roce_create_qp(struct ib_pd *ib_pd, struct ib_qp_init_attr *init_attr, struct ib_udata *udata); @@ -1052,6 +1144,7 @@ void hns_roce_free_db(struct hns_roce_dev *hr_dev, struct hns_roce_db *db); void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn); void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type); void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type); +void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int 
event_type); int hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index); int hns_roce_init(struct hns_roce_dev *hr_dev); void hns_roce_exit(struct hns_roce_dev *hr_dev); diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c index f6faefed96e8..4cdbcafa5915 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hem.c +++ b/drivers/infiniband/hw/hns/hns_roce_hem.c @@ -46,7 +46,9 @@ bool hns_roce_check_whether_mhop(struct hns_roce_dev *hr_dev, u32 type) (hr_dev->caps.cqc_hop_num && type == HEM_TYPE_CQC) || (hr_dev->caps.srqc_hop_num && type == HEM_TYPE_SRQC) || (hr_dev->caps.cqe_hop_num && type == HEM_TYPE_CQE) || - (hr_dev->caps.mtt_hop_num && type == HEM_TYPE_MTT)) + (hr_dev->caps.mtt_hop_num && type == HEM_TYPE_MTT) || + (hr_dev->caps.srqwqe_hop_num && type == HEM_TYPE_SRQWQE) || + (hr_dev->caps.idx_hop_num && type == HEM_TYPE_IDX)) return true; return false; @@ -147,6 +149,22 @@ int hns_roce_calc_hem_mhop(struct hns_roce_dev *hr_dev, mhop->ba_l0_num = mhop->bt_chunk_size / 8; mhop->hop_num = hr_dev->caps.cqe_hop_num; break; + case HEM_TYPE_SRQWQE: + mhop->buf_chunk_size = 1 << (hr_dev->caps.srqwqe_buf_pg_sz + + PAGE_SHIFT); + mhop->bt_chunk_size = 1 << (hr_dev->caps.srqwqe_ba_pg_sz + + PAGE_SHIFT); + mhop->ba_l0_num = mhop->bt_chunk_size / 8; + mhop->hop_num = hr_dev->caps.srqwqe_hop_num; + break; + case HEM_TYPE_IDX: + mhop->buf_chunk_size = 1 << (hr_dev->caps.idx_buf_pg_sz + + PAGE_SHIFT); + mhop->bt_chunk_size = 1 << (hr_dev->caps.idx_ba_pg_sz + + PAGE_SHIFT); + mhop->ba_l0_num = mhop->bt_chunk_size / 8; + mhop->hop_num = hr_dev->caps.idx_hop_num; + break; default: dev_err(dev, "Table %d not support multi-hop addressing!\n", table->type); @@ -906,6 +924,18 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev, bt_chunk_size = buf_chunk_size; hop_num = hr_dev->caps.cqe_hop_num; break; + case HEM_TYPE_SRQWQE: + buf_chunk_size = 1 << (hr_dev->caps.srqwqe_ba_pg_sz + + PAGE_SHIFT); + bt_chunk_size = buf_chunk_size; + hop_num = hr_dev->caps.srqwqe_hop_num; + break; + case HEM_TYPE_IDX: + buf_chunk_size = 1 << (hr_dev->caps.idx_ba_pg_sz + + PAGE_SHIFT); + bt_chunk_size = buf_chunk_size; + hop_num = hr_dev->caps.idx_hop_num; + break; default: dev_err(dev, "Table %d not support to init hem table here!\n", @@ -1041,6 +1071,15 @@ void hns_roce_cleanup_hem_table(struct hns_roce_dev *hr_dev, void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev) { + if ((hr_dev->caps.num_idx_segs)) + hns_roce_cleanup_hem_table(hr_dev, + &hr_dev->mr_table.mtt_idx_table); + if (hr_dev->caps.num_srqwqe_segs) + hns_roce_cleanup_hem_table(hr_dev, + &hr_dev->mr_table.mtt_srqwqe_table); + if (hr_dev->caps.srqc_entry_sz) + hns_roce_cleanup_hem_table(hr_dev, + &hr_dev->srq_table.table); hns_roce_cleanup_hem_table(hr_dev, &hr_dev->cq_table.table); if (hr_dev->caps.trrl_entry_sz) hns_roce_cleanup_hem_table(hr_dev, diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h index e8850d59e780..a650278c6fbd 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hem.h +++ b/drivers/infiniband/hw/hns/hns_roce_hem.h @@ -48,6 +48,8 @@ enum { /* UNMAP HEM */ HEM_TYPE_MTT, HEM_TYPE_CQE, + HEM_TYPE_SRQWQE, + HEM_TYPE_IDX, HEM_TYPE_IRRL, HEM_TYPE_TRRL, }; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c index ca05810c92dc..b74c742b000c 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c @@ -3926,7 +3926,7 @@ int hns_roce_v1_destroy_qp(struct ib_qp 
*ibqp) struct hns_roce_qp_work *qp_work; struct hns_roce_v1_priv *priv; struct hns_roce_cq *send_cq, *recv_cq; - int is_user = !!ibqp->pd->uobject; + bool is_user = ibqp->uobject; int is_timeout = 0; int ret; @@ -4793,6 +4793,16 @@ static void hns_roce_v1_cleanup_eq_table(struct hns_roce_dev *hr_dev) kfree(eq_table->eq); } +static const struct ib_device_ops hns_roce_v1_dev_ops = { + .destroy_qp = hns_roce_v1_destroy_qp, + .modify_cq = hns_roce_v1_modify_cq, + .poll_cq = hns_roce_v1_poll_cq, + .post_recv = hns_roce_v1_post_recv, + .post_send = hns_roce_v1_post_send, + .query_qp = hns_roce_v1_query_qp, + .req_notify_cq = hns_roce_v1_req_notify_cq, +}; + static const struct hns_roce_hw hns_roce_hw_v1 = { .reset = hns_roce_v1_reset, .hw_profile = hns_roce_v1_profile, @@ -4818,6 +4828,7 @@ static const struct hns_roce_hw hns_roce_hw_v1 = { .destroy_cq = hns_roce_v1_destroy_cq, .init_eq = hns_roce_v1_init_eq_table, .cleanup_eq = hns_roce_v1_cleanup_eq_table, + .hns_roce_dev_ops = &hns_roce_v1_dev_ops, }; static const struct of_device_id hns_roce_of_match[] = { diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 3beb1523e17c..3a669451cf86 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -1082,6 +1082,33 @@ static int hns_roce_query_pf_resource(struct hns_roce_dev *hr_dev) return 0; } +static int hns_roce_set_vf_switch_param(struct hns_roce_dev *hr_dev, + int vf_id) +{ + struct hns_roce_cmq_desc desc; + struct hns_roce_vf_switch *swt; + int ret; + + swt = (struct hns_roce_vf_switch *)desc.data; + hns_roce_cmq_setup_basic_desc(&desc, HNS_SWITCH_PARAMETER_CFG, true); + swt->rocee_sel |= cpu_to_le16(HNS_ICL_SWITCH_CMD_ROCEE_SEL); + roce_set_field(swt->fun_id, + VF_SWITCH_DATA_FUN_ID_VF_ID_M, + VF_SWITCH_DATA_FUN_ID_VF_ID_S, + vf_id); + ret = hns_roce_cmq_send(hr_dev, &desc, 1); + if (ret) + return ret; + desc.flag = + cpu_to_le16(HNS_ROCE_CMD_FLAG_NO_INTR | HNS_ROCE_CMD_FLAG_IN); + desc.flag &= cpu_to_le16(~HNS_ROCE_CMD_FLAG_WR); + roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LPBK_S, 1); + roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S, 1); + roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_DST_OVRD_S, 1); + + return hns_roce_cmq_send(hr_dev, &desc, 1); +} + static int hns_roce_alloc_vf_resource(struct hns_roce_dev *hr_dev) { struct hns_roce_cmq_desc desc[2]; @@ -1269,6 +1296,15 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev) return ret; } + if (hr_dev->pci_dev->revision == 0x21) { + ret = hns_roce_set_vf_switch_param(hr_dev, 0); + if (ret) { + dev_err(hr_dev->dev, + "Set function switch param fail, ret = %d.\n", + ret); + return ret; + } + } hr_dev->vendor_part_id = hr_dev->pci_dev->device; hr_dev->sys_image_guid = be64_to_cpu(hr_dev->ib_dev.node_guid); @@ -1276,11 +1312,14 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev) caps->num_qps = HNS_ROCE_V2_MAX_QP_NUM; caps->max_wqes = HNS_ROCE_V2_MAX_WQE_NUM; caps->num_cqs = HNS_ROCE_V2_MAX_CQ_NUM; + caps->num_srqs = HNS_ROCE_V2_MAX_SRQ_NUM; caps->max_cqes = HNS_ROCE_V2_MAX_CQE_NUM; + caps->max_srqwqes = HNS_ROCE_V2_MAX_SRQWQE_NUM; caps->max_sq_sg = HNS_ROCE_V2_MAX_SQ_SGE_NUM; caps->max_extend_sg = HNS_ROCE_V2_MAX_EXTEND_SGE_NUM; caps->max_rq_sg = HNS_ROCE_V2_MAX_RQ_SGE_NUM; caps->max_sq_inline = HNS_ROCE_V2_MAX_SQ_INLINE; + caps->max_srq_sg = HNS_ROCE_V2_MAX_SRQ_SGE_NUM; caps->num_uars = HNS_ROCE_V2_UAR_NUM; caps->phy_num_uars = HNS_ROCE_V2_PHY_UAR_NUM; caps->num_aeq_vectors = HNS_ROCE_V2_AEQE_VEC_NUM; @@ 
-1289,6 +1328,8 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev) caps->num_mtpts = HNS_ROCE_V2_MAX_MTPT_NUM; caps->num_mtt_segs = HNS_ROCE_V2_MAX_MTT_SEGS; caps->num_cqe_segs = HNS_ROCE_V2_MAX_CQE_SEGS; + caps->num_srqwqe_segs = HNS_ROCE_V2_MAX_SRQWQE_SEGS; + caps->num_idx_segs = HNS_ROCE_V2_MAX_IDX_SEGS; caps->num_pds = HNS_ROCE_V2_MAX_PD_NUM; caps->max_qp_init_rdma = HNS_ROCE_V2_MAX_QP_INIT_RDMA; caps->max_qp_dest_rdma = HNS_ROCE_V2_MAX_QP_DEST_RDMA; @@ -1299,8 +1340,10 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev) caps->irrl_entry_sz = HNS_ROCE_V2_IRRL_ENTRY_SZ; caps->trrl_entry_sz = HNS_ROCE_V2_TRRL_ENTRY_SZ; caps->cqc_entry_sz = HNS_ROCE_V2_CQC_ENTRY_SZ; + caps->srqc_entry_sz = HNS_ROCE_V2_SRQC_ENTRY_SZ; caps->mtpt_entry_sz = HNS_ROCE_V2_MTPT_ENTRY_SZ; caps->mtt_entry_sz = HNS_ROCE_V2_MTT_ENTRY_SZ; + caps->idx_entry_sz = 4; caps->cq_entry_sz = HNS_ROCE_V2_CQE_ENTRY_SIZE; caps->page_size_cap = HNS_ROCE_V2_PAGE_SIZE_SUPPORTED; caps->reserved_lkey = 0; @@ -1308,6 +1351,7 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev) caps->reserved_mrws = 1; caps->reserved_uars = 0; caps->reserved_cqs = 0; + caps->reserved_srqs = 0; caps->reserved_qps = HNS_ROCE_V2_RSV_QPS; caps->qpc_ba_pg_sz = 0; @@ -1331,6 +1375,12 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev) caps->cqe_ba_pg_sz = 0; caps->cqe_buf_pg_sz = 0; caps->cqe_hop_num = HNS_ROCE_CQE_HOP_NUM; + caps->srqwqe_ba_pg_sz = 0; + caps->srqwqe_buf_pg_sz = 0; + caps->srqwqe_hop_num = HNS_ROCE_SRQWQE_HOP_NUM; + caps->idx_ba_pg_sz = 0; + caps->idx_buf_pg_sz = 0; + caps->idx_hop_num = HNS_ROCE_IDX_HOP_NUM; caps->eqe_ba_pg_sz = 0; caps->eqe_buf_pg_sz = 0; caps->eqe_hop_num = HNS_ROCE_EQE_HOP_NUM; @@ -1354,8 +1404,13 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev) caps->local_ca_ack_delay = 0; caps->max_mtu = IB_MTU_4096; + caps->max_srqs = HNS_ROCE_V2_MAX_SRQ; + caps->max_srq_wrs = HNS_ROCE_V2_MAX_SRQ_WR; + caps->max_srq_sges = HNS_ROCE_V2_MAX_SRQ_SGE; + if (hr_dev->pci_dev->revision == 0x21) - caps->flags |= HNS_ROCE_CAP_FLAG_ATOMIC; + caps->flags |= HNS_ROCE_CAP_FLAG_ATOMIC | + HNS_ROCE_CAP_FLAG_SRQ; ret = hns_roce_v2_set_bt(hr_dev); if (ret) @@ -1587,30 +1642,62 @@ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev) hns_roce_free_link_table(hr_dev, &priv->tsq); } +static int hns_roce_query_mbox_status(struct hns_roce_dev *hr_dev) +{ + struct hns_roce_cmq_desc desc; + struct hns_roce_mbox_status *mb_st = + (struct hns_roce_mbox_status *)desc.data; + enum hns_roce_cmd_return_status status; + + hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_MB_ST, true); + + status = hns_roce_cmq_send(hr_dev, &desc, 1); + if (status) + return status; + + return cpu_to_le32(mb_st->mb_status_hw_run); +} + static int hns_roce_v2_cmd_pending(struct hns_roce_dev *hr_dev) { - u32 status = readl(hr_dev->reg_base + ROCEE_VF_MB_STATUS_REG); + u32 status = hns_roce_query_mbox_status(hr_dev); return status >> HNS_ROCE_HW_RUN_BIT_SHIFT; } static int hns_roce_v2_cmd_complete(struct hns_roce_dev *hr_dev) { - u32 status = readl(hr_dev->reg_base + ROCEE_VF_MB_STATUS_REG); + u32 status = hns_roce_query_mbox_status(hr_dev); return status & HNS_ROCE_HW_MB_STATUS_MASK; } +static int hns_roce_mbox_post(struct hns_roce_dev *hr_dev, u64 in_param, + u64 out_param, u32 in_modifier, u8 op_modifier, + u16 op, u16 token, int event) +{ + struct hns_roce_cmq_desc desc; + struct hns_roce_post_mbox *mb = (struct hns_roce_post_mbox *)desc.data; + + hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_POST_MB, false); 
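+	/* The mailbox parameters are 64 bits wide but each descriptor word + * holds 32, so the low and high halves are written separately below; + * cmd_tag packs in_modifier with the opcode, and token_event_en packs + * the event flag with the token. + */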
+ + mb->in_param_l = cpu_to_le64(in_param); + mb->in_param_h = cpu_to_le64(in_param) >> 32; + mb->out_param_l = cpu_to_le64(out_param); + mb->out_param_h = cpu_to_le64(out_param) >> 32; + mb->cmd_tag = cpu_to_le32(in_modifier << 8 | op); + mb->token_event_en = cpu_to_le32(event << 16 | token); + + return hns_roce_cmq_send(hr_dev, &desc, 1); +} + static int hns_roce_v2_post_mbox(struct hns_roce_dev *hr_dev, u64 in_param, u64 out_param, u32 in_modifier, u8 op_modifier, u16 op, u16 token, int event) { struct device *dev = hr_dev->dev; - u32 __iomem *hcr = (u32 __iomem *)(hr_dev->reg_base + - ROCEE_VF_MB_CFG0_REG); unsigned long end; - u32 val0 = 0; - u32 val1 = 0; + int ret; end = msecs_to_jiffies(HNS_ROCE_V2_GO_BIT_TIMEOUT_MSECS) + jiffies; while (hns_roce_v2_cmd_pending(hr_dev)) { @@ -1622,27 +1709,12 @@ static int hns_roce_v2_post_mbox(struct hns_roce_dev *hr_dev, u64 in_param, cond_resched(); } - roce_set_field(val0, HNS_ROCE_VF_MB4_TAG_MASK, - HNS_ROCE_VF_MB4_TAG_SHIFT, in_modifier); - roce_set_field(val0, HNS_ROCE_VF_MB4_CMD_MASK, - HNS_ROCE_VF_MB4_CMD_SHIFT, op); - roce_set_field(val1, HNS_ROCE_VF_MB5_EVENT_MASK, - HNS_ROCE_VF_MB5_EVENT_SHIFT, event); - roce_set_field(val1, HNS_ROCE_VF_MB5_TOKEN_MASK, - HNS_ROCE_VF_MB5_TOKEN_SHIFT, token); - - writeq(in_param, hcr + 0); - writeq(out_param, hcr + 2); - - /* Memory barrier */ - wmb(); - - writel(val0, hcr + 4); - writel(val1, hcr + 5); - - mmiowb(); + ret = hns_roce_mbox_post(hr_dev, in_param, out_param, in_modifier, + op_modifier, op, token, event); + if (ret) + dev_err(dev, "Post mailbox fail(%d)\n", ret); - return 0; + return ret; } static int hns_roce_v2_chk_mbox(struct hns_roce_dev *hr_dev, @@ -2007,6 +2079,27 @@ static struct hns_roce_v2_cqe *next_cqe_sw_v2(struct hns_roce_cq *hr_cq) return get_sw_cqe_v2(hr_cq, hr_cq->cons_index); } +static void *get_srq_wqe(struct hns_roce_srq *srq, int n) +{ + return hns_roce_buf_offset(&srq->buf, n << srq->wqe_shift); +} + +static void hns_roce_free_srq_wqe(struct hns_roce_srq *srq, int wqe_index) +{ + u32 bitmap_num; + int bit_num; + + /* always called with interrupts disabled. 
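+	 * Freeing returns the WQE index to the free pool: the u64 bitmap + * word at wqe_index / 64 gets bit wqe_index % 64 set again, and + * srq->tail advances, all under srq->lock.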
*/ + spin_lock(&srq->lock); + + bitmap_num = wqe_index / (sizeof(u64) * 8); + bit_num = wqe_index % (sizeof(u64) * 8); + srq->idx_que.bitmap[bitmap_num] |= (1ULL << bit_num); + srq->tail++; + + spin_unlock(&srq->lock); +} + static void hns_roce_v2_cq_set_ci(struct hns_roce_cq *hr_cq, u32 cons_index) { *hr_cq->set_ci_db = cons_index & 0xffffff; @@ -2018,6 +2111,7 @@ static void __hns_roce_v2_cq_clean(struct hns_roce_cq *hr_cq, u32 qpn, struct hns_roce_v2_cqe *cqe, *dest; u32 prod_index; int nfreed = 0; + int wqe_index; u8 owner_bit; for (prod_index = hr_cq->cons_index; get_sw_cqe_v2(hr_cq, prod_index); @@ -2035,7 +2129,13 @@ static void __hns_roce_v2_cq_clean(struct hns_roce_cq *hr_cq, u32 qpn, if ((roce_get_field(cqe->byte_16, V2_CQE_BYTE_16_LCL_QPN_M, V2_CQE_BYTE_16_LCL_QPN_S) & HNS_ROCE_V2_CQE_QPN_MASK) == qpn) { - /* In v1 engine, not support SRQ */ + if (srq && + roce_get_bit(cqe->byte_4, V2_CQE_BYTE_4_S_R_S)) { + wqe_index = roce_get_field(cqe->byte_4, + V2_CQE_BYTE_4_WQE_INDX_M, + V2_CQE_BYTE_4_WQE_INDX_S); + hns_roce_free_srq_wqe(srq, wqe_index); + } ++nfreed; } else if (nfreed) { dest = get_cqe_v2(hr_cq, (prod_index + nfreed) & @@ -2212,6 +2312,7 @@ static int hns_roce_handle_recv_inl_wqe(struct hns_roce_v2_cqe *cqe, static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, struct hns_roce_qp **cur_qp, struct ib_wc *wc) { + struct hns_roce_srq *srq = NULL; struct hns_roce_dev *hr_dev; struct hns_roce_v2_cqe *cqe; struct hns_roce_qp *hr_qp; @@ -2254,6 +2355,37 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, wc->qp = &(*cur_qp)->ibqp; wc->vendor_err = 0; + if (is_send) { + wq = &(*cur_qp)->sq; + if ((*cur_qp)->sq_signal_bits) { + /* + * If sg_signal_bit is 1, + * firstly tail pointer updated to wqe + * which current cqe correspond to + */ + wqe_ctr = (u16)roce_get_field(cqe->byte_4, + V2_CQE_BYTE_4_WQE_INDX_M, + V2_CQE_BYTE_4_WQE_INDX_S); + wq->tail += (wqe_ctr - (u16)wq->tail) & + (wq->wqe_cnt - 1); + } + + wc->wr_id = wq->wrid[wq->tail & (wq->wqe_cnt - 1)]; + ++wq->tail; + } else if ((*cur_qp)->ibqp.srq) { + srq = to_hr_srq((*cur_qp)->ibqp.srq); + wqe_ctr = le16_to_cpu(roce_get_field(cqe->byte_4, + V2_CQE_BYTE_4_WQE_INDX_M, + V2_CQE_BYTE_4_WQE_INDX_S)); + wc->wr_id = srq->wrid[wqe_ctr]; + hns_roce_free_srq_wqe(srq, wqe_ctr); + } else { + /* Update tail pointer, record wr_id */ + wq = &(*cur_qp)->rq; + wc->wr_id = wq->wrid[wq->tail & (wq->wqe_cnt - 1)]; + ++wq->tail; + } + status = roce_get_field(cqe->byte_4, V2_CQE_BYTE_4_STATUS_M, V2_CQE_BYTE_4_STATUS_S); switch (status & HNS_ROCE_V2_CQE_STATUS_MASK) { @@ -2373,23 +2505,6 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, wc->status = IB_WC_GENERAL_ERR; break; } - - wq = &(*cur_qp)->sq; - if ((*cur_qp)->sq_signal_bits) { - /* - * If sg_signal_bit is 1, - * firstly tail pointer updated to wqe - * which current cqe correspond to - */ - wqe_ctr = (u16)roce_get_field(cqe->byte_4, - V2_CQE_BYTE_4_WQE_INDX_M, - V2_CQE_BYTE_4_WQE_INDX_S); - wq->tail += (wqe_ctr - (u16)wq->tail) & - (wq->wqe_cnt - 1); - } - - wc->wr_id = wq->wrid[wq->tail & (wq->wqe_cnt - 1)]; - ++wq->tail; } else { /* RQ correspond to CQE */ wc->byte_len = le32_to_cpu(cqe->byte_cnt); @@ -2434,11 +2549,6 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, return -EAGAIN; } - /* Update tail pointer, record wr_id */ - wq = &(*cur_qp)->rq; - wc->wr_id = wq->wrid[wq->tail & (wq->wqe_cnt - 1)]; - ++wq->tail; - wc->sl = (u8)roce_get_field(cqe->byte_32, V2_CQE_BYTE_32_SL_M, V2_CQE_BYTE_32_SL_S); wc->src_qp = (u8)roce_get_field(cqe->byte_32, @@ 
-2747,6 +2857,8 @@ static void modify_qp_reset_to_init(struct ib_qp *ibqp, roce_set_field(context->byte_20_smac_sgid_idx, V2_QPC_BYTE_20_RQ_SHIFT_M, V2_QPC_BYTE_20_RQ_SHIFT_S, + (hr_qp->ibqp.qp_type == IB_QPT_XRC_INI || + hr_qp->ibqp.qp_type == IB_QPT_XRC_TGT || ibqp->srq) ? 0 : ilog2((unsigned int)hr_qp->rq.wqe_cnt)); roce_set_field(qpc_mask->byte_20_smac_sgid_idx, V2_QPC_BYTE_20_RQ_SHIFT_M, V2_QPC_BYTE_20_RQ_SHIFT_S, 0); @@ -3088,6 +3200,8 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp, roce_set_field(context->byte_20_smac_sgid_idx, V2_QPC_BYTE_20_RQ_SHIFT_M, V2_QPC_BYTE_20_RQ_SHIFT_S, + (hr_qp->ibqp.qp_type == IB_QPT_XRC_INI || + hr_qp->ibqp.qp_type == IB_QPT_XRC_TGT || ibqp->srq) ? 0 : ilog2((unsigned int)hr_qp->rq.wqe_cnt)); roce_set_field(qpc_mask->byte_20_smac_sgid_idx, V2_QPC_BYTE_20_RQ_SHIFT_M, V2_QPC_BYTE_20_RQ_SHIFT_S, 0); @@ -3601,6 +3715,21 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp, return 0; } +static inline bool hns_roce_v2_check_qp_stat(enum ib_qp_state cur_state, + enum ib_qp_state new_state) +{ + + if ((cur_state != IB_QPS_RESET && + (new_state == IB_QPS_ERR || new_state == IB_QPS_RESET)) || + ((cur_state == IB_QPS_RTS || cur_state == IB_QPS_SQD) && + (new_state == IB_QPS_RTS || new_state == IB_QPS_SQD)) || + (cur_state == IB_QPS_SQE && new_state == IB_QPS_RTS)) + return true; + + return false; + +} + static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr, int attr_mask, enum ib_qp_state cur_state, @@ -3626,6 +3755,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, */ memset(qpc_mask, 0xff, sizeof(*qpc_mask)); if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) { + memset(qpc_mask, 0, sizeof(*qpc_mask)); modify_qp_reset_to_init(ibqp, attr, attr_mask, context, qpc_mask); } else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) { @@ -3641,21 +3771,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, qpc_mask); if (ret) goto out; - } else if ((cur_state == IB_QPS_RTS && new_state == IB_QPS_RTS) || - (cur_state == IB_QPS_SQE && new_state == IB_QPS_RTS) || - (cur_state == IB_QPS_RTS && new_state == IB_QPS_SQD) || - (cur_state == IB_QPS_SQD && new_state == IB_QPS_SQD) || - (cur_state == IB_QPS_SQD && new_state == IB_QPS_RTS) || - (cur_state == IB_QPS_INIT && new_state == IB_QPS_RESET) || - (cur_state == IB_QPS_RTR && new_state == IB_QPS_RESET) || - (cur_state == IB_QPS_RTS && new_state == IB_QPS_RESET) || - (cur_state == IB_QPS_ERR && new_state == IB_QPS_RESET) || - (cur_state == IB_QPS_INIT && new_state == IB_QPS_ERR) || - (cur_state == IB_QPS_RTR && new_state == IB_QPS_ERR) || - (cur_state == IB_QPS_RTS && new_state == IB_QPS_ERR) || - (cur_state == IB_QPS_SQD && new_state == IB_QPS_ERR) || - (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR) || - (cur_state == IB_QPS_ERR && new_state == IB_QPS_ERR)) { + } else if (hns_roce_v2_check_qp_stat(cur_state, new_state)) { /* Nothing */ ; } else { @@ -3789,6 +3905,11 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, if (attr_mask & (IB_QP_ACCESS_FLAGS | IB_QP_MAX_DEST_RD_ATOMIC)) set_access_flags(hr_qp, context, qpc_mask, attr, attr_mask); + roce_set_bit(context->byte_108_rx_reqepsn, V2_QPC_BYTE_108_INV_CREDIT_S, + ibqp->srq ? 
1 : 0); + roce_set_bit(qpc_mask->byte_108_rx_reqepsn, + V2_QPC_BYTE_108_INV_CREDIT_S, 0); + /* Every status migrate must change state */ roce_set_field(context->byte_60_qpst_tempid, V2_QPC_BYTE_60_QP_ST_M, V2_QPC_BYTE_60_QP_ST_S, new_state); @@ -4012,7 +4133,7 @@ out: static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp, - int is_user) + bool is_user) { struct hns_roce_cq *send_cq, *recv_cq; struct device *dev = hr_dev->dev; @@ -4074,7 +4195,8 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev, hns_roce_free_db(hr_dev, &hr_qp->rdb); } - if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) { + if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) && + hr_qp->rq.wqe_cnt) { kfree(hr_qp->rq_inl_buf.wqe_list[0].sg_list); kfree(hr_qp->rq_inl_buf.wqe_list); } @@ -4088,7 +4210,7 @@ static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp) struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); int ret; - ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, !!ibqp->pd->uobject); + ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, ibqp->uobject); if (ret) { dev_err(hr_dev->dev, "Destroy qp failed(%d)\n", ret); return ret; @@ -4384,6 +4506,7 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, int aeqe_found = 0; int event_type; int sub_type; + u32 srqn; u32 qpn; u32 cqn; @@ -4406,6 +4529,9 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, cqn = roce_get_field(aeqe->event.cq_event.cq, HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_M, HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_S); + srqn = roce_get_field(aeqe->event.srq_event.srq, + HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_M, + HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_S); switch (event_type) { case HNS_ROCE_EVENT_TYPE_PATH_MIG: @@ -4413,13 +4539,14 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, case HNS_ROCE_EVENT_TYPE_COMM_EST: case HNS_ROCE_EVENT_TYPE_SQ_DRAINED: case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR: + case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH: case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR: case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR: hns_roce_qp_event(hr_dev, qpn, event_type); break; case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH: - case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH: case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR: + hns_roce_srq_event(hr_dev, srqn, event_type); break; case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR: case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW: @@ -4964,13 +5091,12 @@ static int hns_roce_mhop_alloc_eq(struct hns_roce_dev *hr_dev, eqe_alloc = i * (buf_chk_sz / eq->eqe_size); size = (eq->entries - eqe_alloc) * eq->eqe_size; } - eq->buf[i] = dma_alloc_coherent(dev, size, + eq->buf[i] = dma_zalloc_coherent(dev, size, &(eq->buf_dma[i]), GFP_KERNEL); if (!eq->buf[i]) goto err_dma_alloc_buf; - memset(eq->buf[i], 0, size); *(eq->bt_l0 + i) = eq->buf_dma[i]; eq_buf_cnt++; @@ -5000,13 +5126,12 @@ static int hns_roce_mhop_alloc_eq(struct hns_roce_dev *hr_dev, size = (eq->entries - eqe_alloc) * eq->eqe_size; } - eq->buf[idx] = dma_alloc_coherent(dev, size, + eq->buf[idx] = dma_zalloc_coherent(dev, size, &(eq->buf_dma[idx]), GFP_KERNEL); if (!eq->buf[idx]) goto err_dma_alloc_buf; - memset(eq->buf[idx], 0, size); *(eq->bt_l1[i] + j) = eq->buf_dma[idx]; eq_buf_cnt++; @@ -5116,7 +5241,7 @@ static int hns_roce_v2_create_eq(struct hns_roce_dev *hr_dev, goto free_cmd_mbox; } - eq->buf_list->buf = dma_alloc_coherent(dev, buf_chk_sz, + eq->buf_list->buf = dma_zalloc_coherent(dev, buf_chk_sz, &(eq->buf_list->map), GFP_KERNEL); if (!eq->buf_list->buf) { @@ -5124,7 +5249,6 @@ static int hns_roce_v2_create_eq(struct 
hns_roce_dev *hr_dev, goto err_alloc_buf; } - memset(eq->buf_list->buf, 0, buf_chk_sz); } else { ret = hns_roce_mhop_alloc_eq(hr_dev, eq); if (ret) { @@ -5332,6 +5456,300 @@ static void hns_roce_v2_cleanup_eq_table(struct hns_roce_dev *hr_dev) destroy_workqueue(hr_dev->irq_workq); } +static void hns_roce_v2_write_srqc(struct hns_roce_dev *hr_dev, + struct hns_roce_srq *srq, u32 pdn, u16 xrcd, + u32 cqn, void *mb_buf, u64 *mtts_wqe, + u64 *mtts_idx, dma_addr_t dma_handle_wqe, + dma_addr_t dma_handle_idx) +{ + struct hns_roce_srq_context *srq_context; + + srq_context = mb_buf; + memset(srq_context, 0, sizeof(*srq_context)); + + roce_set_field(srq_context->byte_4_srqn_srqst, SRQC_BYTE_4_SRQ_ST_M, + SRQC_BYTE_4_SRQ_ST_S, 1); + + roce_set_field(srq_context->byte_4_srqn_srqst, + SRQC_BYTE_4_SRQ_WQE_HOP_NUM_M, + SRQC_BYTE_4_SRQ_WQE_HOP_NUM_S, + (hr_dev->caps.srqwqe_hop_num == HNS_ROCE_HOP_NUM_0 ? 0 : + hr_dev->caps.srqwqe_hop_num)); + roce_set_field(srq_context->byte_4_srqn_srqst, + SRQC_BYTE_4_SRQ_SHIFT_M, SRQC_BYTE_4_SRQ_SHIFT_S, + ilog2(srq->max)); + + roce_set_field(srq_context->byte_4_srqn_srqst, SRQC_BYTE_4_SRQN_M, + SRQC_BYTE_4_SRQN_S, srq->srqn); + + roce_set_field(srq_context->byte_8_limit_wl, SRQC_BYTE_8_SRQ_LIMIT_WL_M, + SRQC_BYTE_8_SRQ_LIMIT_WL_S, 0); + + roce_set_field(srq_context->byte_12_xrcd, SRQC_BYTE_12_SRQ_XRCD_M, + SRQC_BYTE_12_SRQ_XRCD_S, xrcd); + + srq_context->wqe_bt_ba = cpu_to_le32((u32)(dma_handle_wqe >> 3)); + + roce_set_field(srq_context->byte_24_wqe_bt_ba, + SRQC_BYTE_24_SRQ_WQE_BT_BA_M, + SRQC_BYTE_24_SRQ_WQE_BT_BA_S, + cpu_to_le32(dma_handle_wqe >> 35)); + + roce_set_field(srq_context->byte_28_rqws_pd, SRQC_BYTE_28_PD_M, + SRQC_BYTE_28_PD_S, pdn); + roce_set_field(srq_context->byte_28_rqws_pd, SRQC_BYTE_28_RQWS_M, + SRQC_BYTE_28_RQWS_S, srq->max_gs <= 0 ? 0 : + fls(srq->max_gs - 1)); + + srq_context->idx_bt_ba = (u32)(dma_handle_idx >> 3); + srq_context->idx_bt_ba = cpu_to_le32(srq_context->idx_bt_ba); + roce_set_field(srq_context->rsv_idx_bt_ba, + SRQC_BYTE_36_SRQ_IDX_BT_BA_M, + SRQC_BYTE_36_SRQ_IDX_BT_BA_S, + cpu_to_le32(dma_handle_idx >> 35)); + + srq_context->idx_cur_blk_addr = (u32)(mtts_idx[0] >> PAGE_ADDR_SHIFT); + srq_context->idx_cur_blk_addr = + cpu_to_le32(srq_context->idx_cur_blk_addr); + roce_set_field(srq_context->byte_44_idxbufpgsz_addr, + SRQC_BYTE_44_SRQ_IDX_CUR_BLK_ADDR_M, + SRQC_BYTE_44_SRQ_IDX_CUR_BLK_ADDR_S, + cpu_to_le32((mtts_idx[0]) >> (32 + PAGE_ADDR_SHIFT))); + roce_set_field(srq_context->byte_44_idxbufpgsz_addr, + SRQC_BYTE_44_SRQ_IDX_HOP_NUM_M, + SRQC_BYTE_44_SRQ_IDX_HOP_NUM_S, + hr_dev->caps.idx_hop_num == HNS_ROCE_HOP_NUM_0 ? 
0 : + hr_dev->caps.idx_hop_num); + + roce_set_field(srq_context->byte_44_idxbufpgsz_addr, + SRQC_BYTE_44_SRQ_IDX_BA_PG_SZ_M, + SRQC_BYTE_44_SRQ_IDX_BA_PG_SZ_S, + hr_dev->caps.idx_ba_pg_sz); + roce_set_field(srq_context->byte_44_idxbufpgsz_addr, + SRQC_BYTE_44_SRQ_IDX_BUF_PG_SZ_M, + SRQC_BYTE_44_SRQ_IDX_BUF_PG_SZ_S, + hr_dev->caps.idx_buf_pg_sz); + + srq_context->idx_nxt_blk_addr = (u32)(mtts_idx[1] >> PAGE_ADDR_SHIFT); + srq_context->idx_nxt_blk_addr = + cpu_to_le32(srq_context->idx_nxt_blk_addr); + roce_set_field(srq_context->rsv_idxnxtblkaddr, + SRQC_BYTE_52_SRQ_IDX_NXT_BLK_ADDR_M, + SRQC_BYTE_52_SRQ_IDX_NXT_BLK_ADDR_S, + cpu_to_le32((mtts_idx[1]) >> (32 + PAGE_ADDR_SHIFT))); + roce_set_field(srq_context->byte_56_xrc_cqn, + SRQC_BYTE_56_SRQ_XRC_CQN_M, SRQC_BYTE_56_SRQ_XRC_CQN_S, + cqn); + roce_set_field(srq_context->byte_56_xrc_cqn, + SRQC_BYTE_56_SRQ_WQE_BA_PG_SZ_M, + SRQC_BYTE_56_SRQ_WQE_BA_PG_SZ_S, + hr_dev->caps.srqwqe_ba_pg_sz + PG_SHIFT_OFFSET); + roce_set_field(srq_context->byte_56_xrc_cqn, + SRQC_BYTE_56_SRQ_WQE_BUF_PG_SZ_M, + SRQC_BYTE_56_SRQ_WQE_BUF_PG_SZ_S, + hr_dev->caps.srqwqe_buf_pg_sz + PG_SHIFT_OFFSET); + + roce_set_bit(srq_context->db_record_addr_record_en, + SRQC_BYTE_60_SRQ_RECORD_EN_S, 0); +} + +static int hns_roce_v2_modify_srq(struct ib_srq *ibsrq, + struct ib_srq_attr *srq_attr, + enum ib_srq_attr_mask srq_attr_mask, + struct ib_udata *udata) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(ibsrq->device); + struct hns_roce_srq *srq = to_hr_srq(ibsrq); + struct hns_roce_srq_context *srq_context; + struct hns_roce_srq_context *srqc_mask; + struct hns_roce_cmd_mailbox *mailbox; + int ret; + + if (srq_attr_mask & IB_SRQ_LIMIT) { + if (srq_attr->srq_limit >= srq->max) + return -EINVAL; + + mailbox = hns_roce_alloc_cmd_mailbox(hr_dev); + if (IS_ERR(mailbox)) + return PTR_ERR(mailbox); + + srq_context = mailbox->buf; + srqc_mask = (struct hns_roce_srq_context *)mailbox->buf + 1; + + memset(srqc_mask, 0xff, sizeof(*srqc_mask)); + + roce_set_field(srq_context->byte_8_limit_wl, + SRQC_BYTE_8_SRQ_LIMIT_WL_M, + SRQC_BYTE_8_SRQ_LIMIT_WL_S, srq_attr->srq_limit); + roce_set_field(srqc_mask->byte_8_limit_wl, + SRQC_BYTE_8_SRQ_LIMIT_WL_M, + SRQC_BYTE_8_SRQ_LIMIT_WL_S, 0); + + ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, srq->srqn, 0, + HNS_ROCE_CMD_MODIFY_SRQC, + HNS_ROCE_CMD_TIMEOUT_MSECS); + hns_roce_free_cmd_mailbox(hr_dev, mailbox); + if (ret) { + dev_err(hr_dev->dev, + "MODIFY SRQ Failed to cmd mailbox.\n"); + return ret; + } + } + + return 0; +} + +int hns_roce_v2_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(ibsrq->device); + struct hns_roce_srq *srq = to_hr_srq(ibsrq); + struct hns_roce_srq_context *srq_context; + struct hns_roce_cmd_mailbox *mailbox; + int limit_wl; + int ret; + + mailbox = hns_roce_alloc_cmd_mailbox(hr_dev); + if (IS_ERR(mailbox)) + return PTR_ERR(mailbox); + + srq_context = mailbox->buf; + ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, srq->srqn, 0, + HNS_ROCE_CMD_QUERY_SRQC, + HNS_ROCE_CMD_TIMEOUT_MSECS); + if (ret) { + dev_err(hr_dev->dev, "QUERY SRQ cmd process error\n"); + goto out; + } + + limit_wl = roce_get_field(srq_context->byte_8_limit_wl, + SRQC_BYTE_8_SRQ_LIMIT_WL_M, + SRQC_BYTE_8_SRQ_LIMIT_WL_S); + + attr->srq_limit = limit_wl; + attr->max_wr = srq->max - 1; + attr->max_sge = srq->max_gs; + + memcpy(srq_context, mailbox->buf, sizeof(*srq_context)); + +out: + hns_roce_free_cmd_mailbox(hr_dev, mailbox); + return ret; +} + +static int find_empty_entry(struct hns_roce_idx_que *idx_que) 
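+/* The scan below assumes at least one free (set) bit exists; the caller + * (hns_roce_v2_post_srq_recv) guarantees this by checking srq->head + * against srq->tail before calling. + */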
+{ + int bit_num; + int i; + + /* bitmap[i] is set zero if all bits are allocated */ + for (i = 0; idx_que->bitmap[i] == 0; ++i) + ; + bit_num = ffs(idx_que->bitmap[i]); + idx_que->bitmap[i] &= ~(1ULL << (bit_num - 1)); + + return i * sizeof(u64) * 8 + (bit_num - 1); +} + +static void fill_idx_queue(struct hns_roce_idx_que *idx_que, + int cur_idx, int wqe_idx) +{ + unsigned int *addr; + + addr = (unsigned int *)hns_roce_buf_offset(&idx_que->idx_buf, + cur_idx * idx_que->entry_sz); + *addr = wqe_idx; +} + +static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq, + const struct ib_recv_wr *wr, + const struct ib_recv_wr **bad_wr) +{ + struct hns_roce_srq *srq = to_hr_srq(ibsrq); + struct hns_roce_v2_wqe_data_seg *dseg; + struct hns_roce_v2_db srq_db; + unsigned long flags; + int ret = 0; + int wqe_idx; + void *wqe; + int nreq; + int ind; + int i; + + spin_lock_irqsave(&srq->lock, flags); + + ind = srq->head & (srq->max - 1); + + for (nreq = 0; wr; ++nreq, wr = wr->next) { + if (unlikely(wr->num_sge > srq->max_gs)) { + ret = -EINVAL; + *bad_wr = wr; + break; + } + + if (unlikely(srq->head == srq->tail)) { + ret = -ENOMEM; + *bad_wr = wr; + break; + } + + wqe_idx = find_empty_entry(&srq->idx_que); + fill_idx_queue(&srq->idx_que, ind, wqe_idx); + wqe = get_srq_wqe(srq, wqe_idx); + dseg = (struct hns_roce_v2_wqe_data_seg *)wqe; + + for (i = 0; i < wr->num_sge; ++i) { + dseg[i].len = cpu_to_le32(wr->sg_list[i].length); + dseg[i].lkey = cpu_to_le32(wr->sg_list[i].lkey); + dseg[i].addr = cpu_to_le64(wr->sg_list[i].addr); + } + + if (i < srq->max_gs) { + dseg->len = 0; + dseg->lkey = cpu_to_le32(0x100); + dseg->addr = 0; + } + + srq->wrid[wqe_idx] = wr->wr_id; + ind = (ind + 1) & (srq->max - 1); + } + + if (likely(nreq)) { + srq->head += nreq; + + /* + * Make sure that descriptors are written before + * doorbell record. 
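+	 * The wmb() that follows keeps the hardware from observing the + * doorbell before the WQE and index-queue stores above it.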
+ */ + wmb(); + + srq_db.byte_4 = HNS_ROCE_V2_SRQ_DB << 24 | srq->srqn; + srq_db.parameter = srq->head; + + hns_roce_write64_k((__le32 *)&srq_db, srq->db_reg_l); + + } + + spin_unlock_irqrestore(&srq->lock, flags); + + return ret; +} + +static const struct ib_device_ops hns_roce_v2_dev_ops = { + .destroy_qp = hns_roce_v2_destroy_qp, + .modify_cq = hns_roce_v2_modify_cq, + .poll_cq = hns_roce_v2_poll_cq, + .post_recv = hns_roce_v2_post_recv, + .post_send = hns_roce_v2_post_send, + .query_qp = hns_roce_v2_query_qp, + .req_notify_cq = hns_roce_v2_req_notify_cq, +}; + +static const struct ib_device_ops hns_roce_v2_dev_srq_ops = { + .modify_srq = hns_roce_v2_modify_srq, + .post_srq_recv = hns_roce_v2_post_srq_recv, + .query_srq = hns_roce_v2_query_srq, +}; + static const struct hns_roce_hw hns_roce_hw_v2 = { .cmq_init = hns_roce_v2_cmq_init, .cmq_exit = hns_roce_v2_cmq_exit, @@ -5359,6 +5777,12 @@ static const struct hns_roce_hw hns_roce_hw_v2 = { .poll_cq = hns_roce_v2_poll_cq, .init_eq = hns_roce_v2_init_eq_table, .cleanup_eq = hns_roce_v2_cleanup_eq_table, + .write_srqc = hns_roce_v2_write_srqc, + .modify_srq = hns_roce_v2_modify_srq, + .query_srq = hns_roce_v2_query_srq, + .post_srq_recv = hns_roce_v2_post_srq_recv, + .hns_roce_dev_ops = &hns_roce_v2_dev_ops, + .hns_roce_dev_srq_ops = &hns_roce_v2_dev_srq_ops, }; static const struct pci_device_id hns_roce_hw_v2_pci_tbl[] = { diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h index 8bc820635bbd..b72d0443c835 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h @@ -46,10 +46,16 @@ #define HNS_ROCE_V2_MAX_QP_NUM 0x2000 #define HNS_ROCE_V2_MAX_WQE_NUM 0x8000 +#define HNS_ROCE_V2_MAX_SRQ 0x100000 +#define HNS_ROCE_V2_MAX_SRQ_WR 0x8000 +#define HNS_ROCE_V2_MAX_SRQ_SGE 0x100 #define HNS_ROCE_V2_MAX_CQ_NUM 0x8000 +#define HNS_ROCE_V2_MAX_SRQ_NUM 0x100000 #define HNS_ROCE_V2_MAX_CQE_NUM 0x10000 +#define HNS_ROCE_V2_MAX_SRQWQE_NUM 0x8000 #define HNS_ROCE_V2_MAX_RQ_SGE_NUM 0x100 #define HNS_ROCE_V2_MAX_SQ_SGE_NUM 0xff +#define HNS_ROCE_V2_MAX_SRQ_SGE_NUM 0x100 #define HNS_ROCE_V2_MAX_EXTEND_SGE_NUM 0x200000 #define HNS_ROCE_V2_MAX_SQ_INLINE 0x20 #define HNS_ROCE_V2_UAR_NUM 256 @@ -61,6 +67,8 @@ #define HNS_ROCE_V2_MAX_MTPT_NUM 0x8000 #define HNS_ROCE_V2_MAX_MTT_SEGS 0x1000000 #define HNS_ROCE_V2_MAX_CQE_SEGS 0x1000000 +#define HNS_ROCE_V2_MAX_SRQWQE_SEGS 0x1000000 +#define HNS_ROCE_V2_MAX_IDX_SEGS 0x1000000 #define HNS_ROCE_V2_MAX_PD_NUM 0x1000000 #define HNS_ROCE_V2_MAX_QP_INIT_RDMA 128 #define HNS_ROCE_V2_MAX_QP_DEST_RDMA 128 @@ -71,6 +79,7 @@ #define HNS_ROCE_V2_IRRL_ENTRY_SZ 64 #define HNS_ROCE_V2_TRRL_ENTRY_SZ 48 #define HNS_ROCE_V2_CQC_ENTRY_SZ 64 +#define HNS_ROCE_V2_SRQC_ENTRY_SZ 64 #define HNS_ROCE_V2_MTPT_ENTRY_SZ 64 #define HNS_ROCE_V2_MTT_ENTRY_SZ 64 #define HNS_ROCE_V2_CQE_ENTRY_SIZE 32 @@ -84,8 +93,10 @@ #define HNS_ROCE_CONTEXT_HOP_NUM 1 #define HNS_ROCE_MTT_HOP_NUM 1 #define HNS_ROCE_CQE_HOP_NUM 1 +#define HNS_ROCE_SRQWQE_HOP_NUM 1 #define HNS_ROCE_PBL_HOP_NUM 2 #define HNS_ROCE_EQE_HOP_NUM 2 +#define HNS_ROCE_IDX_HOP_NUM 1 #define HNS_ROCE_V2_GID_INDEX_NUM 256 @@ -113,6 +124,8 @@ ((step_idx == 0 && hop_num == HNS_ROCE_HOP_NUM_0) || \ (step_idx == 1 && hop_num == 1) || \ (step_idx == 2 && hop_num == 2)) +#define HNS_ICL_SWITCH_CMD_ROCEE_SEL_SHIFT 0 +#define HNS_ICL_SWITCH_CMD_ROCEE_SEL BIT(HNS_ICL_SWITCH_CMD_ROCEE_SEL_SHIFT) #define CMD_CSQ_DESC_NUM 1024 #define CMD_CRQ_DESC_NUM 1024 @@ -213,7 +226,10 @@ enum hns_roce_opcode_type { 
HNS_ROCE_OPC_CFG_TMOUT_LLM = 0x8404, HNS_ROCE_OPC_CFG_SGID_TB = 0x8500, HNS_ROCE_OPC_CFG_SMAC_TB = 0x8501, + HNS_ROCE_OPC_POST_MB = 0x8504, + HNS_ROCE_OPC_QUERY_MB_ST = 0x8505, HNS_ROCE_OPC_CFG_BT_ATTR = 0x8506, + HNS_SWITCH_PARAMETER_CFG = 0x1033, }; enum { @@ -325,6 +341,90 @@ struct hns_roce_v2_cq_context { #define V2_CQC_BYTE_64_SE_CQE_IDX_S 0 #define V2_CQC_BYTE_64_SE_CQE_IDX_M GENMASK(23, 0) +struct hns_roce_srq_context { + __le32 byte_4_srqn_srqst; + __le32 byte_8_limit_wl; + __le32 byte_12_xrcd; + __le32 byte_16_pi_ci; + __le32 wqe_bt_ba; + __le32 byte_24_wqe_bt_ba; + __le32 byte_28_rqws_pd; + __le32 idx_bt_ba; + __le32 rsv_idx_bt_ba; + __le32 idx_cur_blk_addr; + __le32 byte_44_idxbufpgsz_addr; + __le32 idx_nxt_blk_addr; + __le32 rsv_idxnxtblkaddr; + __le32 byte_56_xrc_cqn; + __le32 db_record_addr_record_en; + __le32 db_record_addr; +}; + +#define SRQC_BYTE_4_SRQ_ST_S 0 +#define SRQC_BYTE_4_SRQ_ST_M GENMASK(1, 0) + +#define SRQC_BYTE_4_SRQ_WQE_HOP_NUM_S 2 +#define SRQC_BYTE_4_SRQ_WQE_HOP_NUM_M GENMASK(3, 2) + +#define SRQC_BYTE_4_SRQ_SHIFT_S 4 +#define SRQC_BYTE_4_SRQ_SHIFT_M GENMASK(7, 4) + +#define SRQC_BYTE_4_SRQN_S 8 +#define SRQC_BYTE_4_SRQN_M GENMASK(31, 8) + +#define SRQC_BYTE_8_SRQ_LIMIT_WL_S 0 +#define SRQC_BYTE_8_SRQ_LIMIT_WL_M GENMASK(15, 0) + +#define SRQC_BYTE_12_SRQ_XRCD_S 0 +#define SRQC_BYTE_12_SRQ_XRCD_M GENMASK(23, 0) + +#define SRQC_BYTE_16_SRQ_PRODUCER_IDX_S 0 +#define SRQC_BYTE_16_SRQ_PRODUCER_IDX_M GENMASK(15, 0) + +#define SRQC_BYTE_16_SRQ_CONSUMER_IDX_S 0 +#define SRQC_BYTE_16_SRQ_CONSUMER_IDX_M GENMASK(31, 16) + +#define SRQC_BYTE_24_SRQ_WQE_BT_BA_S 0 +#define SRQC_BYTE_24_SRQ_WQE_BT_BA_M GENMASK(28, 0) + +#define SRQC_BYTE_28_PD_S 0 +#define SRQC_BYTE_28_PD_M GENMASK(23, 0) + +#define SRQC_BYTE_28_RQWS_S 24 +#define SRQC_BYTE_28_RQWS_M GENMASK(27, 24) + +#define SRQC_BYTE_36_SRQ_IDX_BT_BA_S 0 +#define SRQC_BYTE_36_SRQ_IDX_BT_BA_M GENMASK(28, 0) + +#define SRQC_BYTE_44_SRQ_IDX_CUR_BLK_ADDR_S 0 +#define SRQC_BYTE_44_SRQ_IDX_CUR_BLK_ADDR_M GENMASK(19, 0) + +#define SRQC_BYTE_44_SRQ_IDX_HOP_NUM_S 22 +#define SRQC_BYTE_44_SRQ_IDX_HOP_NUM_M GENMASK(23, 22) + +#define SRQC_BYTE_44_SRQ_IDX_BA_PG_SZ_S 24 +#define SRQC_BYTE_44_SRQ_IDX_BA_PG_SZ_M GENMASK(27, 24) + +#define SRQC_BYTE_44_SRQ_IDX_BUF_PG_SZ_S 28 +#define SRQC_BYTE_44_SRQ_IDX_BUF_PG_SZ_M GENMASK(31, 28) + +#define SRQC_BYTE_52_SRQ_IDX_NXT_BLK_ADDR_S 0 +#define SRQC_BYTE_52_SRQ_IDX_NXT_BLK_ADDR_M GENMASK(19, 0) + +#define SRQC_BYTE_56_SRQ_XRC_CQN_S 0 +#define SRQC_BYTE_56_SRQ_XRC_CQN_M GENMASK(23, 0) + +#define SRQC_BYTE_56_SRQ_WQE_BA_PG_SZ_S 24 +#define SRQC_BYTE_56_SRQ_WQE_BA_PG_SZ_M GENMASK(27, 24) + +#define SRQC_BYTE_56_SRQ_WQE_BUF_PG_SZ_S 28 +#define SRQC_BYTE_56_SRQ_WQE_BUF_PG_SZ_M GENMASK(31, 28) + +#define SRQC_BYTE_60_SRQ_RECORD_EN_S 0 + +#define SRQC_BYTE_60_SRQ_DB_RECORD_ADDR_S 1 +#define SRQC_BYTE_60_SRQ_DB_RECORD_ADDR_M GENMASK(31, 1) + enum{ V2_MPT_ST_VALID = 0x1, V2_MPT_ST_FREE = 0x2, @@ -1289,6 +1389,36 @@ struct hns_roce_vf_res_b { #define VF_RES_B_DATA_3_VF_SL_NUM_S 16 #define VF_RES_B_DATA_3_VF_SL_NUM_M GENMASK(19, 16) +struct hns_roce_vf_switch { + __le32 rocee_sel; + __le32 fun_id; + __le32 cfg; + __le32 resv1; + __le32 resv2; + __le32 resv3; +}; + +#define VF_SWITCH_DATA_FUN_ID_VF_ID_S 3 +#define VF_SWITCH_DATA_FUN_ID_VF_ID_M GENMASK(10, 3) + +#define VF_SWITCH_DATA_CFG_ALW_LPBK_S 1 +#define VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S 2 +#define VF_SWITCH_DATA_CFG_ALW_DST_OVRD_S 3 + +struct hns_roce_post_mbox { + __le32 in_param_l; + __le32 in_param_h; + __le32 out_param_l; + __le32 
out_param_h; + __le32 cmd_tag; + __le32 token_event_en; +}; + +struct hns_roce_mbox_status { + __le32 mb_status_hw_run; + __le32 rsv[5]; +}; + struct hns_roce_cfg_bt_attr { __le32 vf_qpc_cfg; __le32 vf_srqc_cfg; @@ -1372,18 +1502,6 @@ struct hns_roce_cmq_desc { #define HNS_ROCE_HW_RUN_BIT_SHIFT 31 #define HNS_ROCE_HW_MB_STATUS_MASK 0xFF -#define HNS_ROCE_VF_MB4_TAG_MASK 0xFFFFFF00 -#define HNS_ROCE_VF_MB4_TAG_SHIFT 8 - -#define HNS_ROCE_VF_MB4_CMD_MASK 0xFF -#define HNS_ROCE_VF_MB4_CMD_SHIFT 0 - -#define HNS_ROCE_VF_MB5_EVENT_MASK 0x10000 -#define HNS_ROCE_VF_MB5_EVENT_SHIFT 16 - -#define HNS_ROCE_VF_MB5_TOKEN_MASK 0xFFFF -#define HNS_ROCE_VF_MB5_TOKEN_SHIFT 0 - struct hns_roce_v2_cmq_ring { dma_addr_t desc_dma_addr; struct hns_roce_cmq_desc *desc; diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c index 1b3ee514f2ef..c79054ba9495 100644 --- a/drivers/infiniband/hw/hns/hns_roce_main.c +++ b/drivers/infiniband/hw/hns/hns_roce_main.c @@ -220,6 +220,11 @@ static int hns_roce_query_device(struct ib_device *ib_dev, IB_ATOMIC_HCA : IB_ATOMIC_NONE; props->max_pkeys = 1; props->local_ca_ack_delay = hr_dev->caps.local_ca_ack_delay; + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) { + props->max_srq = hr_dev->caps.max_srqs; + props->max_srq_wr = hr_dev->caps.max_srq_wrs; + props->max_srq_sge = hr_dev->caps.max_srq_sges; + } return 0; } @@ -440,6 +445,54 @@ static void hns_roce_unregister_device(struct hns_roce_dev *hr_dev) ib_unregister_device(&hr_dev->ib_dev); } +static const struct ib_device_ops hns_roce_dev_ops = { + .add_gid = hns_roce_add_gid, + .alloc_pd = hns_roce_alloc_pd, + .alloc_ucontext = hns_roce_alloc_ucontext, + .create_ah = hns_roce_create_ah, + .create_cq = hns_roce_ib_create_cq, + .create_qp = hns_roce_create_qp, + .dealloc_pd = hns_roce_dealloc_pd, + .dealloc_ucontext = hns_roce_dealloc_ucontext, + .del_gid = hns_roce_del_gid, + .dereg_mr = hns_roce_dereg_mr, + .destroy_ah = hns_roce_destroy_ah, + .destroy_cq = hns_roce_ib_destroy_cq, + .disassociate_ucontext = hns_roce_disassociate_ucontext, + .get_dma_mr = hns_roce_get_dma_mr, + .get_link_layer = hns_roce_get_link_layer, + .get_netdev = hns_roce_get_netdev, + .get_port_immutable = hns_roce_port_immutable, + .mmap = hns_roce_mmap, + .modify_device = hns_roce_modify_device, + .modify_port = hns_roce_modify_port, + .modify_qp = hns_roce_modify_qp, + .query_ah = hns_roce_query_ah, + .query_device = hns_roce_query_device, + .query_pkey = hns_roce_query_pkey, + .query_port = hns_roce_query_port, + .reg_user_mr = hns_roce_reg_user_mr, +}; + +static const struct ib_device_ops hns_roce_dev_mr_ops = { + .rereg_user_mr = hns_roce_rereg_user_mr, +}; + +static const struct ib_device_ops hns_roce_dev_mw_ops = { + .alloc_mw = hns_roce_alloc_mw, + .dealloc_mw = hns_roce_dealloc_mw, +}; + +static const struct ib_device_ops hns_roce_dev_frmr_ops = { + .alloc_mr = hns_roce_alloc_mr, + .map_mr_sg = hns_roce_map_mr_sg, +}; + +static const struct ib_device_ops hns_roce_dev_srq_ops = { + .create_srq = hns_roce_create_srq, + .destroy_srq = hns_roce_destroy_srq, +}; + static int hns_roce_register_device(struct hns_roce_dev *hr_dev) { int ret; @@ -479,73 +532,38 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev) ib_dev->uverbs_ex_cmd_mask |= (1ULL << IB_USER_VERBS_EX_CMD_MODIFY_CQ); - /* HCA||device||port */ - ib_dev->modify_device = hns_roce_modify_device; - ib_dev->query_device = hns_roce_query_device; - ib_dev->query_port = hns_roce_query_port; - ib_dev->modify_port = 
hns_roce_modify_port; - ib_dev->get_link_layer = hns_roce_get_link_layer; - ib_dev->get_netdev = hns_roce_get_netdev; - ib_dev->add_gid = hns_roce_add_gid; - ib_dev->del_gid = hns_roce_del_gid; - ib_dev->query_pkey = hns_roce_query_pkey; - ib_dev->alloc_ucontext = hns_roce_alloc_ucontext; - ib_dev->dealloc_ucontext = hns_roce_dealloc_ucontext; - ib_dev->mmap = hns_roce_mmap; - - /* PD */ - ib_dev->alloc_pd = hns_roce_alloc_pd; - ib_dev->dealloc_pd = hns_roce_dealloc_pd; - - /* AH */ - ib_dev->create_ah = hns_roce_create_ah; - ib_dev->query_ah = hns_roce_query_ah; - ib_dev->destroy_ah = hns_roce_destroy_ah; - - /* QP */ - ib_dev->create_qp = hns_roce_create_qp; - ib_dev->modify_qp = hns_roce_modify_qp; - ib_dev->query_qp = hr_dev->hw->query_qp; - ib_dev->destroy_qp = hr_dev->hw->destroy_qp; - ib_dev->post_send = hr_dev->hw->post_send; - ib_dev->post_recv = hr_dev->hw->post_recv; - - /* CQ */ - ib_dev->create_cq = hns_roce_ib_create_cq; - ib_dev->modify_cq = hr_dev->hw->modify_cq; - ib_dev->destroy_cq = hns_roce_ib_destroy_cq; - ib_dev->req_notify_cq = hr_dev->hw->req_notify_cq; - ib_dev->poll_cq = hr_dev->hw->poll_cq; - - /* MR */ - ib_dev->get_dma_mr = hns_roce_get_dma_mr; - ib_dev->reg_user_mr = hns_roce_reg_user_mr; - ib_dev->dereg_mr = hns_roce_dereg_mr; if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_REREG_MR) { - ib_dev->rereg_user_mr = hns_roce_rereg_user_mr; ib_dev->uverbs_cmd_mask |= (1ULL << IB_USER_VERBS_CMD_REREG_MR); + ib_set_device_ops(ib_dev, &hns_roce_dev_mr_ops); } /* MW */ if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_MW) { - ib_dev->alloc_mw = hns_roce_alloc_mw; - ib_dev->dealloc_mw = hns_roce_dealloc_mw; ib_dev->uverbs_cmd_mask |= (1ULL << IB_USER_VERBS_CMD_ALLOC_MW) | (1ULL << IB_USER_VERBS_CMD_DEALLOC_MW); + ib_set_device_ops(ib_dev, &hns_roce_dev_mw_ops); } /* FRMR */ - if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_FRMR) { - ib_dev->alloc_mr = hns_roce_alloc_mr; - ib_dev->map_mr_sg = hns_roce_map_mr_sg; - } + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_FRMR) + ib_set_device_ops(ib_dev, &hns_roce_dev_frmr_ops); - /* OTHERS */ - ib_dev->get_port_immutable = hns_roce_port_immutable; - ib_dev->disassociate_ucontext = hns_roce_disassociate_ucontext; + /* SRQ */ + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) { + ib_dev->uverbs_cmd_mask |= + (1ULL << IB_USER_VERBS_CMD_CREATE_SRQ) | + (1ULL << IB_USER_VERBS_CMD_MODIFY_SRQ) | + (1ULL << IB_USER_VERBS_CMD_QUERY_SRQ) | + (1ULL << IB_USER_VERBS_CMD_DESTROY_SRQ) | + (1ULL << IB_USER_VERBS_CMD_POST_SRQ_RECV); + ib_set_device_ops(ib_dev, &hns_roce_dev_srq_ops); + ib_set_device_ops(ib_dev, hr_dev->hw->hns_roce_dev_srq_ops); + } ib_dev->driver_id = RDMA_DRIVER_HNS; + ib_set_device_ops(ib_dev, hr_dev->hw->hns_roce_dev_ops); + ib_set_device_ops(ib_dev, &hns_roce_dev_ops); ret = ib_register_device(ib_dev, "hns_%d", NULL); if (ret) { dev_err(dev, "ib_register_device failed!\n"); @@ -646,8 +664,58 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev) goto err_unmap_trrl; } + if (hr_dev->caps.srqc_entry_sz) { + ret = hns_roce_init_hem_table(hr_dev, &hr_dev->srq_table.table, + HEM_TYPE_SRQC, + hr_dev->caps.srqc_entry_sz, + hr_dev->caps.num_srqs, 1); + if (ret) { + dev_err(dev, + "Failed to init SRQ context memory, aborting.\n"); + goto err_unmap_cq; + } + } + + if (hr_dev->caps.num_srqwqe_segs) { + ret = hns_roce_init_hem_table(hr_dev, + &hr_dev->mr_table.mtt_srqwqe_table, + HEM_TYPE_SRQWQE, + hr_dev->caps.mtt_entry_sz, + hr_dev->caps.num_srqwqe_segs, 1); + if (ret) { + dev_err(dev, + "Failed to init MTT srqwqe memory, aborting.\n"); + goto 
err_unmap_srq; + } + } + + if (hr_dev->caps.num_idx_segs) { + ret = hns_roce_init_hem_table(hr_dev, + &hr_dev->mr_table.mtt_idx_table, + HEM_TYPE_IDX, + hr_dev->caps.idx_entry_sz, + hr_dev->caps.num_idx_segs, 1); + if (ret) { + dev_err(dev, + "Failed to init MTT idx memory, aborting.\n"); + goto err_unmap_srqwqe; + } + } + + return 0; +err_unmap_srqwqe: + if (hr_dev->caps.num_srqwqe_segs) + hns_roce_cleanup_hem_table(hr_dev, + &hr_dev->mr_table.mtt_srqwqe_table); + +err_unmap_srq: + if (hr_dev->caps.srqc_entry_sz) + hns_roce_cleanup_hem_table(hr_dev, &hr_dev->srq_table.table); + +err_unmap_cq: + hns_roce_cleanup_hem_table(hr_dev, &hr_dev->cq_table.table); + err_unmap_trrl: if (hr_dev->caps.trrl_entry_sz) hns_roce_cleanup_hem_table(hr_dev, @@ -727,8 +795,21 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev) goto err_cq_table_free; } + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) { + ret = hns_roce_init_srq_table(hr_dev); + if (ret) { + dev_err(dev, + "Failed to init shared receive queue table.\n"); + goto err_qp_table_free; + } + } + return 0; +err_qp_table_free: + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) + hns_roce_cleanup_qp_table(hr_dev); + err_cq_table_free: hns_roce_cleanup_cq_table(hr_dev); diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c index 521ad2aa3a4e..ee5991bd4171 100644 --- a/drivers/infiniband/hw/hns/hns_roce_mr.c +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c @@ -184,12 +184,27 @@ static int hns_roce_alloc_mtt_range(struct hns_roce_dev *hr_dev, int order, struct hns_roce_buddy *buddy; int ret; - if (mtt_type == MTT_TYPE_WQE) { + switch (mtt_type) { + case MTT_TYPE_WQE: buddy = &mr_table->mtt_buddy; table = &mr_table->mtt_table; - } else { + break; + case MTT_TYPE_CQE: buddy = &mr_table->mtt_cqe_buddy; table = &mr_table->mtt_cqe_table; + break; + case MTT_TYPE_SRQWQE: + buddy = &mr_table->mtt_srqwqe_buddy; + table = &mr_table->mtt_srqwqe_table; + break; + case MTT_TYPE_IDX: + buddy = &mr_table->mtt_idx_buddy; + table = &mr_table->mtt_idx_table; + break; + default: + dev_err(hr_dev->dev, "Unsupported MTT table type: %d\n", + mtt_type); + return -EINVAL; } ret = hns_roce_buddy_alloc(buddy, order, seg); @@ -242,18 +257,40 @@ void hns_roce_mtt_cleanup(struct hns_roce_dev *hr_dev, struct hns_roce_mtt *mtt) if (mtt->order < 0) return; - if (mtt->mtt_type == MTT_TYPE_WQE) { + switch (mtt->mtt_type) { + case MTT_TYPE_WQE: hns_roce_buddy_free(&mr_table->mtt_buddy, mtt->first_seg, mtt->order); hns_roce_table_put_range(hr_dev, &mr_table->mtt_table, mtt->first_seg, mtt->first_seg + (1 << mtt->order) - 1); - } else { + break; + case MTT_TYPE_CQE: hns_roce_buddy_free(&mr_table->mtt_cqe_buddy, mtt->first_seg, mtt->order); hns_roce_table_put_range(hr_dev, &mr_table->mtt_cqe_table, mtt->first_seg, mtt->first_seg + (1 << mtt->order) - 1); + break; + case MTT_TYPE_SRQWQE: + hns_roce_buddy_free(&mr_table->mtt_srqwqe_buddy, mtt->first_seg, + mtt->order); + hns_roce_table_put_range(hr_dev, &mr_table->mtt_srqwqe_table, + mtt->first_seg, + mtt->first_seg + (1 << mtt->order) - 1); + break; + case MTT_TYPE_IDX: + hns_roce_buddy_free(&mr_table->mtt_idx_buddy, mtt->first_seg, + mtt->order); + hns_roce_table_put_range(hr_dev, &mr_table->mtt_idx_table, + mtt->first_seg, + mtt->first_seg + (1 << mtt->order) - 1); + break; + default: + dev_err(hr_dev->dev, + "Unsupported mtt type %d, clean mtt failed\n", + mtt->mtt_type); + break; } } EXPORT_SYMBOL_GPL(hns_roce_mtt_cleanup); @@ -713,10 +750,26 @@ static int hns_roce_write_mtt_chunk(struct 
hns_roce_dev *hr_dev, u32 bt_page_size; u32 i; - if (mtt->mtt_type == MTT_TYPE_WQE) + switch (mtt->mtt_type) { + case MTT_TYPE_WQE: + table = &hr_dev->mr_table.mtt_table; bt_page_size = 1 << (hr_dev->caps.mtt_ba_pg_sz + PAGE_SHIFT); - else + break; + case MTT_TYPE_CQE: + table = &hr_dev->mr_table.mtt_cqe_table; bt_page_size = 1 << (hr_dev->caps.cqe_ba_pg_sz + PAGE_SHIFT); + break; + case MTT_TYPE_SRQWQE: + table = &hr_dev->mr_table.mtt_srqwqe_table; + bt_page_size = 1 << (hr_dev->caps.srqwqe_ba_pg_sz + PAGE_SHIFT); + break; + case MTT_TYPE_IDX: + table = &hr_dev->mr_table.mtt_idx_table; + bt_page_size = 1 << (hr_dev->caps.idx_ba_pg_sz + PAGE_SHIFT); + break; + default: + return -EINVAL; + } /* All MTTs must fit in the same page */ if (start_index / (bt_page_size / sizeof(u64)) != @@ -726,11 +779,6 @@ static int hns_roce_write_mtt_chunk(struct hns_roce_dev *hr_dev, if (start_index & (HNS_ROCE_MTT_ENTRY_PER_SEG - 1)) return -EINVAL; - if (mtt->mtt_type == MTT_TYPE_WQE) - table = &hr_dev->mr_table.mtt_table; - else - table = &hr_dev->mr_table.mtt_cqe_table; - mtts = hns_roce_table_find(hr_dev, table, mtt->first_seg + s / hr_dev->caps.mtt_entry_sz, &dma_handle); @@ -759,10 +807,25 @@ static int hns_roce_write_mtt(struct hns_roce_dev *hr_dev, if (mtt->order < 0) return -EINVAL; - if (mtt->mtt_type == MTT_TYPE_WQE) + switch (mtt->mtt_type) { + case MTT_TYPE_WQE: bt_page_size = 1 << (hr_dev->caps.mtt_ba_pg_sz + PAGE_SHIFT); - else + break; + case MTT_TYPE_CQE: bt_page_size = 1 << (hr_dev->caps.cqe_ba_pg_sz + PAGE_SHIFT); + break; + case MTT_TYPE_SRQWQE: + bt_page_size = 1 << (hr_dev->caps.srqwqe_ba_pg_sz + PAGE_SHIFT); + break; + case MTT_TYPE_IDX: + bt_page_size = 1 << (hr_dev->caps.idx_ba_pg_sz + PAGE_SHIFT); + break; + default: + dev_err(hr_dev->dev, + "Unsupport mtt type %d, write mtt failed\n", + mtt->mtt_type); + return -EINVAL; + } while (npages > 0) { chunk = min_t(int, bt_page_size / sizeof(u64), npages); @@ -828,8 +891,31 @@ int hns_roce_init_mr_table(struct hns_roce_dev *hr_dev) if (ret) goto err_buddy_cqe; } + + if (hr_dev->caps.num_srqwqe_segs) { + ret = hns_roce_buddy_init(&mr_table->mtt_srqwqe_buddy, + ilog2(hr_dev->caps.num_srqwqe_segs)); + if (ret) + goto err_buddy_srqwqe; + } + + if (hr_dev->caps.num_idx_segs) { + ret = hns_roce_buddy_init(&mr_table->mtt_idx_buddy, + ilog2(hr_dev->caps.num_idx_segs)); + if (ret) + goto err_buddy_idx; + } + return 0; +err_buddy_idx: + if (hr_dev->caps.num_srqwqe_segs) + hns_roce_buddy_cleanup(&mr_table->mtt_srqwqe_buddy); + +err_buddy_srqwqe: + if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE)) + hns_roce_buddy_cleanup(&mr_table->mtt_cqe_buddy); + err_buddy_cqe: hns_roce_buddy_cleanup(&mr_table->mtt_buddy); @@ -842,6 +928,10 @@ void hns_roce_cleanup_mr_table(struct hns_roce_dev *hr_dev) { struct hns_roce_mr_table *mr_table = &hr_dev->mr_table; + if (hr_dev->caps.num_idx_segs) + hns_roce_buddy_cleanup(&mr_table->mtt_idx_buddy); + if (hr_dev->caps.num_srqwqe_segs) + hns_roce_buddy_cleanup(&mr_table->mtt_srqwqe_buddy); hns_roce_buddy_cleanup(&mr_table->mtt_buddy); if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE)) hns_roce_buddy_cleanup(&mr_table->mtt_cqe_buddy); @@ -897,8 +987,25 @@ int hns_roce_ib_umem_write_mtt(struct hns_roce_dev *hr_dev, u32 bt_page_size; u32 n; - order = mtt->mtt_type == MTT_TYPE_WQE ? 
hr_dev->caps.mtt_ba_pg_sz : - hr_dev->caps.cqe_ba_pg_sz; + switch (mtt->mtt_type) { + case MTT_TYPE_WQE: + order = hr_dev->caps.mtt_ba_pg_sz; + break; + case MTT_TYPE_CQE: + order = hr_dev->caps.cqe_ba_pg_sz; + break; + case MTT_TYPE_SRQWQE: + order = hr_dev->caps.srqwqe_ba_pg_sz; + break; + case MTT_TYPE_IDX: + order = hr_dev->caps.idx_ba_pg_sz; + break; + default: + dev_err(dev, "Unsupported mtt type %d, write mtt failed\n", + mtt->mtt_type); + return -EINVAL; + } + bt_page_size = 1 << (order + PAGE_SHIFT); pages = (u64 *) __get_free_pages(GFP_KERNEL, order); @@ -1021,14 +1128,14 @@ struct ib_mr *hns_roce_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, goto err_umem; } } else { - int pbl_size = 1; + u64 pbl_size = 1; bt_size = (1 << (hr_dev->caps.pbl_ba_pg_sz + PAGE_SHIFT)) / 8; for (i = 0; i < hr_dev->caps.pbl_hop_num; i++) pbl_size *= bt_size; if (n > pbl_size) { dev_err(dev, - " MR len %lld err. MR page num is limited to %d!\n", + " MR len %lld err. MR page num is limited to %lld!\n", length, pbl_size); ret = -EINVAL; goto err_umem; diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index 5ebf481a39d9..54031c5b53fa 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -280,7 +280,7 @@ void hns_roce_release_range_qp(struct hns_roce_dev *hr_dev, int base_qpn, EXPORT_SYMBOL_GPL(hns_roce_release_range_qp); static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev, - struct ib_qp_cap *cap, int is_user, int has_srq, + struct ib_qp_cap *cap, bool is_user, int has_rq, struct hns_roce_qp *hr_qp) { struct device *dev = hr_dev->dev; @@ -294,14 +294,12 @@ static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev, return -EINVAL; } - /* If srq exit, set zero for relative number of rq */ - if (has_srq) { - if (cap->max_recv_wr) { - dev_dbg(dev, "srq no need config max_recv_wr\n"); - return -EINVAL; - } - - hr_qp->rq.wqe_cnt = hr_qp->rq.max_gs = 0; + /* If the QP has no RQ, set its RQ-related fields to zero */ + if (!has_rq) { + hr_qp->rq.wqe_cnt = 0; + hr_qp->rq.max_gs = 0; + cap->max_recv_wr = 0; + cap->max_recv_sge = 0; } else { if (is_user && (!cap->max_recv_wr || !cap->max_recv_sge)) { dev_err(dev, "user space no need config max_recv_wr max_recv_sge\n"); @@ -562,14 +560,15 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, else hr_qp->sq_signal_bits = cpu_to_le32(IB_SIGNAL_REQ_WR); - ret = hns_roce_set_rq_size(hr_dev, &init_attr->cap, !!ib_pd->uobject, - !!init_attr->srq, hr_qp); + ret = hns_roce_set_rq_size(hr_dev, &init_attr->cap, udata, + hns_roce_qp_has_rq(init_attr), hr_qp); if (ret) { dev_err(dev, "hns_roce_set_rq_size failed\n"); goto err_out; } - if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) { + if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) && + hns_roce_qp_has_rq(init_attr)) { /* allocate recv inline buf */ hr_qp->rq_inl_buf.wqe_list = kcalloc(hr_qp->rq.wqe_cnt, sizeof(struct hns_roce_rinl_wqe), @@ -599,7 +598,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, init_attr->cap.max_recv_sge]; } - if (ib_pd->uobject) { + if (udata) { if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) { dev_err(dev, "ib_copy_from_udata error for create qp\n"); ret = -EFAULT; @@ -784,7 +783,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, else hr_qp->doorbell_qpn = cpu_to_le64(hr_qp->qpn); - if (ib_pd->uobject && (udata->outlen >= sizeof(resp)) && + if (udata && (udata->outlen >= sizeof(resp)) && (hr_dev->caps.flags & 
HNS_ROCE_CAP_FLAG_RECORD_DB)) { /* indicate kernel supports rq record db */ @@ -811,7 +810,7 @@ err_qpn: hns_roce_release_range_qp(hr_dev, qpn, 1); err_wrid: - if (ib_pd->uobject) { + if (udata) { if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB) && (udata->outlen >= sizeof(resp)) && hns_roce_qp_has_rq(init_attr)) @@ -824,7 +823,7 @@ err_wrid: } err_sq_dbmap: - if (ib_pd->uobject) + if (udata) if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SQ_RECORD_DB) && (udata->inlen >= sizeof(ucmd)) && (udata->outlen >= sizeof(resp)) && @@ -837,13 +836,13 @@ err_mtt: hns_roce_mtt_cleanup(hr_dev, &hr_qp->mtt); err_buf: - if (ib_pd->uobject) + if (hr_qp->umem) ib_umem_release(hr_qp->umem); else hns_roce_buf_free(hr_dev, hr_qp->buff_size, &hr_qp->hr_buf); err_db: - if (!ib_pd->uobject && hns_roce_qp_has_rq(init_attr) && + if (!udata && hns_roce_qp_has_rq(init_attr) && (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB)) hns_roce_free_db(hr_dev, &hr_qp->rdb); @@ -889,7 +888,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd, } case IB_QPT_GSI: { /* Userspace is not allowed to create special QPs: */ - if (pd->uobject) { + if (udata) { dev_err(dev, "not support usr space GSI\n"); return ERR_PTR(-EINVAL); } diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c new file mode 100644 index 000000000000..960b1946c365 --- /dev/null +++ b/drivers/infiniband/hw/hns/hns_roce_srq.c @@ -0,0 +1,457 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2018 Hisilicon Limited. + */ + +#include +#include +#include "hns_roce_device.h" +#include "hns_roce_cmd.h" +#include "hns_roce_hem.h" + +void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type) +{ + struct hns_roce_srq_table *srq_table = &hr_dev->srq_table; + struct hns_roce_srq *srq; + + xa_lock(&srq_table->xa); + srq = xa_load(&srq_table->xa, srqn & (hr_dev->caps.num_srqs - 1)); + if (srq) + atomic_inc(&srq->refcount); + xa_unlock(&srq_table->xa); + + if (!srq) { + dev_warn(hr_dev->dev, "Async event for bogus SRQ %08x\n", srqn); + return; + } + + srq->event(srq, event_type); + + if (atomic_dec_and_test(&srq->refcount)) + complete(&srq->free); +} +EXPORT_SYMBOL_GPL(hns_roce_srq_event); + +static void hns_roce_ib_srq_event(struct hns_roce_srq *srq, + enum hns_roce_event event_type) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(srq->ibsrq.device); + struct ib_srq *ibsrq = &srq->ibsrq; + struct ib_event event; + + if (ibsrq->event_handler) { + event.device = ibsrq->device; + event.element.srq = ibsrq; + switch (event_type) { + case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH: + event.event = IB_EVENT_SRQ_LIMIT_REACHED; + break; + case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR: + event.event = IB_EVENT_SRQ_ERR; + break; + default: + dev_err(hr_dev->dev, + "hns_roce:Unexpected event type 0x%x on SRQ %06lx\n", + event_type, srq->srqn); + return; + } + + ibsrq->event_handler(&event, ibsrq->srq_context); + } +} + +static int hns_roce_sw2hw_srq(struct hns_roce_dev *dev, + struct hns_roce_cmd_mailbox *mailbox, + unsigned long srq_num) +{ + return hns_roce_cmd_mbox(dev, mailbox->dma, 0, srq_num, 0, + HNS_ROCE_CMD_SW2HW_SRQ, + HNS_ROCE_CMD_TIMEOUT_MSECS); +} + +static int hns_roce_hw2sw_srq(struct hns_roce_dev *dev, + struct hns_roce_cmd_mailbox *mailbox, + unsigned long srq_num) +{ + return hns_roce_cmd_mbox(dev, 0, mailbox ? mailbox->dma : 0, srq_num, + mailbox ? 
0 : 1, HNS_ROCE_CMD_HW2SW_SRQ, + HNS_ROCE_CMD_TIMEOUT_MSECS); +} + +int hns_roce_srq_alloc(struct hns_roce_dev *hr_dev, u32 pdn, u32 cqn, u16 xrcd, + struct hns_roce_mtt *hr_mtt, u64 db_rec_addr, + struct hns_roce_srq *srq) +{ + struct hns_roce_srq_table *srq_table = &hr_dev->srq_table; + struct hns_roce_cmd_mailbox *mailbox; + dma_addr_t dma_handle_wqe; + dma_addr_t dma_handle_idx; + u64 *mtts_wqe; + u64 *mtts_idx; + int ret; + + /* Get the physical address of srq buf */ + mtts_wqe = hns_roce_table_find(hr_dev, + &hr_dev->mr_table.mtt_srqwqe_table, + srq->mtt.first_seg, + &dma_handle_wqe); + if (!mtts_wqe) { + dev_err(hr_dev->dev, + "SRQ alloc: failed to find srq buf addr.\n"); + return -EINVAL; + } + + /* Get physical address of idx que buf */ + mtts_idx = hns_roce_table_find(hr_dev, &hr_dev->mr_table.mtt_idx_table, + srq->idx_que.mtt.first_seg, + &dma_handle_idx); + if (!mtts_idx) { + dev_err(hr_dev->dev, + "SRQ alloc: failed to find idx que buf addr.\n"); + return -EINVAL; + } + + ret = hns_roce_bitmap_alloc(&srq_table->bitmap, &srq->srqn); + if (ret == -1) { + dev_err(hr_dev->dev, "SRQ alloc: failed to alloc index.\n"); + return -ENOMEM; + } + + ret = hns_roce_table_get(hr_dev, &srq_table->table, srq->srqn); + if (ret) + goto err_out; + + ret = xa_err(xa_store(&srq_table->xa, srq->srqn, srq, GFP_KERNEL)); + if (ret) + goto err_put; + + mailbox = hns_roce_alloc_cmd_mailbox(hr_dev); + if (IS_ERR(mailbox)) { + ret = PTR_ERR(mailbox); + goto err_xa; + } + + hr_dev->hw->write_srqc(hr_dev, srq, pdn, xrcd, cqn, mailbox->buf, + mtts_wqe, mtts_idx, dma_handle_wqe, + dma_handle_idx); + + ret = hns_roce_sw2hw_srq(hr_dev, mailbox, srq->srqn); + hns_roce_free_cmd_mailbox(hr_dev, mailbox); + if (ret) + goto err_xa; + + atomic_set(&srq->refcount, 1); + init_completion(&srq->free); + return ret; + +err_xa: + xa_erase(&srq_table->xa, srq->srqn); + +err_put: + hns_roce_table_put(hr_dev, &srq_table->table, srq->srqn); + +err_out: + hns_roce_bitmap_free(&srq_table->bitmap, srq->srqn, BITMAP_NO_RR); + return ret; +} + +void hns_roce_srq_free(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq) +{ + struct hns_roce_srq_table *srq_table = &hr_dev->srq_table; + int ret; + + ret = hns_roce_hw2sw_srq(hr_dev, NULL, srq->srqn); + if (ret) + dev_err(hr_dev->dev, "HW2SW_SRQ failed (%d) for SRQN %06lx\n", + ret, srq->srqn); + + xa_erase(&srq_table->xa, srq->srqn); + + if (atomic_dec_and_test(&srq->refcount)) + complete(&srq->free); + wait_for_completion(&srq->free); + + hns_roce_table_put(hr_dev, &srq_table->table, srq->srqn); + hns_roce_bitmap_free(&srq_table->bitmap, srq->srqn, BITMAP_NO_RR); +} + +static int hns_roce_create_idx_que(struct ib_pd *pd, struct hns_roce_srq *srq, + u32 page_shift) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(pd->device); + struct hns_roce_idx_que *idx_que = &srq->idx_que; + u32 bitmap_num; + int i; + + bitmap_num = HNS_ROCE_ALOGN_UP(srq->max, 8 * sizeof(u64)); + + idx_que->bitmap = kcalloc(1, bitmap_num / 8, GFP_KERNEL); + if (!idx_que->bitmap) + return -ENOMEM; + + bitmap_num = bitmap_num / (8 * sizeof(u64)); + + idx_que->buf_size = srq->idx_que.buf_size; + + if (hns_roce_buf_alloc(hr_dev, idx_que->buf_size, (1 << page_shift) * 2, + &idx_que->idx_buf, page_shift)) { + kfree(idx_que->bitmap); + return -ENOMEM; + } + + for (i = 0; i < bitmap_num; i++) + idx_que->bitmap[i] = ~(0UL); + + return 0; +} + +struct ib_srq *hns_roce_create_srq(struct ib_pd *pd, + struct ib_srq_init_attr *srq_init_attr, + struct ib_udata *udata) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(pd->device); + struct 
hns_roce_srq *srq; + int srq_desc_size; + int srq_buf_size; + u32 page_shift; + int ret = 0; + u32 npages; + u32 cqn; + + /* Check the actual SRQ wqe and SRQ sge num */ + if (srq_init_attr->attr.max_wr >= hr_dev->caps.max_srq_wrs || + srq_init_attr->attr.max_sge > hr_dev->caps.max_srq_sges) + return ERR_PTR(-EINVAL); + + srq = kzalloc(sizeof(*srq), GFP_KERNEL); + if (!srq) + return ERR_PTR(-ENOMEM); + + mutex_init(&srq->mutex); + spin_lock_init(&srq->lock); + + srq->max = roundup_pow_of_two(srq_init_attr->attr.max_wr + 1); + srq->max_gs = srq_init_attr->attr.max_sge; + + srq_desc_size = max(16, 16 * srq->max_gs); + + srq->wqe_shift = ilog2(srq_desc_size); + + srq_buf_size = srq->max * srq_desc_size; + + srq->idx_que.entry_sz = HNS_ROCE_IDX_QUE_ENTRY_SZ; + srq->idx_que.buf_size = srq->max * srq->idx_que.entry_sz; + srq->mtt.mtt_type = MTT_TYPE_SRQWQE; + srq->idx_que.mtt.mtt_type = MTT_TYPE_IDX; + + if (udata) { + struct hns_roce_ib_create_srq ucmd; + + if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) { + ret = -EFAULT; + goto err_srq; + } + + srq->umem = ib_umem_get(pd->uobject->context, ucmd.buf_addr, + srq_buf_size, 0, 0); + if (IS_ERR(srq->umem)) { + ret = PTR_ERR(srq->umem); + goto err_srq; + } + + if (hr_dev->caps.srqwqe_buf_pg_sz) { + npages = (ib_umem_page_count(srq->umem) + + (1 << hr_dev->caps.srqwqe_buf_pg_sz) - 1) / + (1 << hr_dev->caps.srqwqe_buf_pg_sz); + page_shift = PAGE_SHIFT + hr_dev->caps.srqwqe_buf_pg_sz; + ret = hns_roce_mtt_init(hr_dev, npages, + page_shift, + &srq->mtt); + } else + ret = hns_roce_mtt_init(hr_dev, + ib_umem_page_count(srq->umem), + srq->umem->page_shift, + &srq->mtt); + if (ret) + goto err_buf; + + ret = hns_roce_ib_umem_write_mtt(hr_dev, &srq->mtt, srq->umem); + if (ret) + goto err_srq_mtt; + + /* config index queue BA */ + srq->idx_que.umem = ib_umem_get(pd->uobject->context, + ucmd.que_addr, + srq->idx_que.buf_size, 0, 0); + if (IS_ERR(srq->idx_que.umem)) { + dev_err(hr_dev->dev, + "ib_umem_get error for index queue\n"); + ret = PTR_ERR(srq->idx_que.umem); + goto err_srq_mtt; + } + + if (hr_dev->caps.idx_buf_pg_sz) { + npages = (ib_umem_page_count(srq->idx_que.umem) + + (1 << hr_dev->caps.idx_buf_pg_sz) - 1) / + (1 << hr_dev->caps.idx_buf_pg_sz); + page_shift = PAGE_SHIFT + hr_dev->caps.idx_buf_pg_sz; + ret = hns_roce_mtt_init(hr_dev, npages, + page_shift, &srq->idx_que.mtt); + } else { + ret = hns_roce_mtt_init(hr_dev, + ib_umem_page_count(srq->idx_que.umem), + srq->idx_que.umem->page_shift, + &srq->idx_que.mtt); + } + + if (ret) { + dev_err(hr_dev->dev, + "hns_roce_mtt_init error for idx que\n"); + goto err_idx_mtt; + } + + ret = hns_roce_ib_umem_write_mtt(hr_dev, &srq->idx_que.mtt, + srq->idx_que.umem); + if (ret) { + dev_err(hr_dev->dev, + "hns_roce_ib_umem_write_mtt error for idx que\n"); + goto err_idx_buf; + } + } else { + page_shift = PAGE_SHIFT + hr_dev->caps.srqwqe_buf_pg_sz; + if (hns_roce_buf_alloc(hr_dev, srq_buf_size, + (1 << page_shift) * 2, + &srq->buf, page_shift)) { + ret = -ENOMEM; + goto err_srq; + } + + srq->head = 0; + srq->tail = srq->max - 1; + + ret = hns_roce_mtt_init(hr_dev, srq->buf.npages, + srq->buf.page_shift, &srq->mtt); + if (ret) + goto err_buf; + + ret = hns_roce_buf_write_mtt(hr_dev, &srq->mtt, &srq->buf); + if (ret) + goto err_srq_mtt; + + page_shift = PAGE_SHIFT + hr_dev->caps.idx_buf_pg_sz; + ret = hns_roce_create_idx_que(pd, srq, page_shift); + if (ret) { + dev_err(hr_dev->dev, "Create idx queue fail(%d)!\n", + ret); + goto err_srq_mtt; + } + + /* Init mtt table for idx_que */ + ret = 
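/*
 * Worked example for the user-space branch above: when the device uses
 * buffer pages larger than PAGE_SIZE (caps.srqwqe_buf_pg_sz > 0), the
 * umem page count is scaled down to device pages by rounding up. With
 * PAGE_SHIFT == 12 and srqwqe_buf_pg_sz == 2 (16 KB device pages), five
 * 4 KB umem pages become (5 + 4 - 1) / 4 = 2 device pages, and page_shift
 * becomes 12 + 2 = 14.
 */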
hns_roce_mtt_init(hr_dev, srq->idx_que.idx_buf.npages, + srq->idx_que.idx_buf.page_shift, + &srq->idx_que.mtt); + if (ret) + goto err_create_idx; + + /* Write buffer address into the mtt table */ + ret = hns_roce_buf_write_mtt(hr_dev, &srq->idx_que.mtt, + &srq->idx_que.idx_buf); + if (ret) + goto err_idx_buf; + + srq->wrid = kvmalloc_array(srq->max, sizeof(u64), GFP_KERNEL); + if (!srq->wrid) { + ret = -ENOMEM; + goto err_idx_buf; + } + } + + cqn = ib_srq_has_cq(srq_init_attr->srq_type) ? + to_hr_cq(srq_init_attr->ext.cq)->cqn : 0; + + srq->db_reg_l = hr_dev->reg_base + SRQ_DB_REG; + + ret = hns_roce_srq_alloc(hr_dev, to_hr_pd(pd)->pdn, cqn, 0, + &srq->mtt, 0, srq); + if (ret) + goto err_wrid; + + srq->event = hns_roce_ib_srq_event; + srq->ibsrq.ext.xrc.srq_num = srq->srqn; + + if (udata) { + if (ib_copy_to_udata(udata, &srq->srqn, sizeof(__u32))) { + ret = -EFAULT; + goto err_wrid; + } + } + + return &srq->ibsrq; + +err_wrid: + kvfree(srq->wrid); + +err_idx_buf: + hns_roce_mtt_cleanup(hr_dev, &srq->idx_que.mtt); + +err_idx_mtt: + if (udata) + ib_umem_release(srq->idx_que.umem); + +err_create_idx: + hns_roce_buf_free(hr_dev, srq->idx_que.buf_size, + &srq->idx_que.idx_buf); + kfree(srq->idx_que.bitmap); + +err_srq_mtt: + hns_roce_mtt_cleanup(hr_dev, &srq->mtt); + +err_buf: + if (udata) + ib_umem_release(srq->umem); + else + hns_roce_buf_free(hr_dev, srq_buf_size, &srq->buf); + +err_srq: + kfree(srq); + return ERR_PTR(ret); +} + +int hns_roce_destroy_srq(struct ib_srq *ibsrq) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(ibsrq->device); + struct hns_roce_srq *srq = to_hr_srq(ibsrq); + + hns_roce_srq_free(hr_dev, srq); + hns_roce_mtt_cleanup(hr_dev, &srq->mtt); + + if (ibsrq->uobject) { + hns_roce_mtt_cleanup(hr_dev, &srq->idx_que.mtt); + ib_umem_release(srq->idx_que.umem); + ib_umem_release(srq->umem); + } else { + kvfree(srq->wrid); + hns_roce_buf_free(hr_dev, srq->max << srq->wqe_shift, + &srq->buf); + } + + kfree(srq); + + return 0; +} + +int hns_roce_init_srq_table(struct hns_roce_dev *hr_dev) +{ + struct hns_roce_srq_table *srq_table = &hr_dev->srq_table; + + xa_init(&srq_table->xa); + + return hns_roce_bitmap_init(&srq_table->bitmap, hr_dev->caps.num_srqs, + hr_dev->caps.num_srqs - 1, + hr_dev->caps.reserved_srqs, 0); +} + +void hns_roce_cleanup_srq_table(struct hns_roce_dev *hr_dev) +{ + hns_roce_bitmap_cleanup(&hr_dev->srq_table.bitmap); +} diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c index 771eb6bd0785..206cfb0016f8 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_cm.c +++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c @@ -404,7 +404,7 @@ static struct i40iw_puda_buf *i40iw_form_cm_frame(struct i40iw_cm_node *cm_node, if (pdata) pd_len = pdata->size; - if (cm_node->vlan_id < VLAN_TAG_PRESENT) + if (cm_node->vlan_id <= VLAN_VID_MASK) eth_hlen += 4; if (cm_node->ipv4) @@ -433,7 +433,7 @@ static struct i40iw_puda_buf *i40iw_form_cm_frame(struct i40iw_cm_node *cm_node, ether_addr_copy(ethh->h_dest, cm_node->rem_mac); ether_addr_copy(ethh->h_source, cm_node->loc_mac); - if (cm_node->vlan_id < VLAN_TAG_PRESENT) { + if (cm_node->vlan_id <= VLAN_VID_MASK) { ((struct vlan_ethhdr *)ethh)->h_vlan_proto = htons(ETH_P_8021Q); vtag = (cm_node->user_pri << VLAN_PRIO_SHIFT) | cm_node->vlan_id; ((struct vlan_ethhdr *)ethh)->h_vlan_TCI = htons(vtag); @@ -463,7 +463,7 @@ static struct i40iw_puda_buf *i40iw_form_cm_frame(struct i40iw_cm_node *cm_node, ether_addr_copy(ethh->h_dest, cm_node->rem_mac); ether_addr_copy(ethh->h_source, cm_node->loc_mac); - if 
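/*
 * VLAN_TAG_PRESENT (0x1000) is being removed from the kernel, so these
 * tests switch to VLAN_VID_MASK (0x0fff): "vlan_id <= VLAN_VID_MASK"
 * accepts the same 0..4095 range that "vlan_id < VLAN_TAG_PRESENT" did.
 * The TCI itself is still built as
 * (user_pri << VLAN_PRIO_SHIFT) | vlan_id.
 */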
(cm_node->vlan_id < VLAN_TAG_PRESENT) { + if (cm_node->vlan_id <= VLAN_VID_MASK) { ((struct vlan_ethhdr *)ethh)->h_vlan_proto = htons(ETH_P_8021Q); vtag = (cm_node->user_pri << VLAN_PRIO_SHIFT) | cm_node->vlan_id; ((struct vlan_ethhdr *)ethh)->h_vlan_TCI = htons(vtag); @@ -3323,7 +3323,7 @@ static void i40iw_init_tcp_ctx(struct i40iw_cm_node *cm_node, tcp_info->flow_label = 0; tcp_info->snd_mss = cpu_to_le32(((u32)cm_node->tcp_cntxt.mss)); - if (cm_node->vlan_id < VLAN_TAG_PRESENT) { + if (cm_node->vlan_id <= VLAN_VID_MASK) { tcp_info->insert_vlan_tag = true; tcp_info->vlan_tag = cpu_to_le16(((u16)cm_node->user_pri << I40IW_VLAN_PRIO_SHIFT) | cm_node->vlan_id); @@ -3478,7 +3478,7 @@ static void i40iw_qp_disconnect(struct i40iw_qp *iwqp) /* Need to free the Last Streaming Mode Message */ if (iwqp->ietf_mem.va) { if (iwqp->lsmm_mr) - iwibdev->ibdev.dereg_mr(iwqp->lsmm_mr); + iwibdev->ibdev.ops.dereg_mr(iwqp->lsmm_mr); i40iw_free_dma_mem(iwdev->sc_dev.hw, &iwqp->ietf_mem); } } diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index 102875872bea..0b675b0742c2 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -673,28 +673,26 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd, goto error; } iwqp->ctx_info.qp_compl_ctx = req.user_compl_ctx; - if (ibpd->uobject && ibpd->uobject->context) { - iwqp->user_mode = 1; - ucontext = to_ucontext(ibpd->uobject->context); - - if (req.user_wqe_buffers) { - struct i40iw_pbl *iwpbl; - - spin_lock_irqsave( - &ucontext->qp_reg_mem_list_lock, flags); - iwpbl = i40iw_get_pbl( - (unsigned long)req.user_wqe_buffers, - &ucontext->qp_reg_mem_list); - spin_unlock_irqrestore( - &ucontext->qp_reg_mem_list_lock, flags); - - if (!iwpbl) { - err_code = -ENODATA; - i40iw_pr_err("no pbl info\n"); - goto error; - } - memcpy(&iwqp->iwpbl, iwpbl, sizeof(iwqp->iwpbl)); + iwqp->user_mode = 1; + ucontext = to_ucontext(ibpd->uobject->context); + + if (req.user_wqe_buffers) { + struct i40iw_pbl *iwpbl; + + spin_lock_irqsave( + &ucontext->qp_reg_mem_list_lock, flags); + iwpbl = i40iw_get_pbl( + (unsigned long)req.user_wqe_buffers, + &ucontext->qp_reg_mem_list); + spin_unlock_irqrestore( + &ucontext->qp_reg_mem_list_lock, flags); + + if (!iwpbl) { + err_code = -ENODATA; + i40iw_pr_err("no pbl info\n"); + goto error; } + memcpy(&iwqp->iwpbl, iwpbl, sizeof(iwqp->iwpbl)); } err_code = i40iw_setup_virt_qp(iwdev, iwqp, &init_info); } else { @@ -768,7 +766,7 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd, iwdev->qp_table[qp_num] = iwqp; i40iw_add_pdusecount(iwqp->iwpd); i40iw_add_devusecount(iwdev); - if (ibpd->uobject && udata) { + if (udata) { memset(&uresp, 0, sizeof(uresp)); uresp.actual_sq_size = sq_size; uresp.actual_rq_size = rq_size; @@ -2092,7 +2090,8 @@ static int i40iw_dereg_mr(struct ib_mr *ib_mr) ib_umem_release(iwmr->region); if (iwmr->type != IW_MEMREG_TYPE_MEM) { - if (ibpd->uobject) { + /* region is released. only test for userness. 
*/ + if (iwmr->region) { struct i40iw_ucontext *ucontext; ucontext = to_ucontext(ibpd->uobject->context); @@ -2721,24 +2720,38 @@ static int i40iw_query_pkey(struct ib_device *ibdev, return 0; } -/** - * i40iw_get_vector_affinity - report IRQ affinity mask - * @ibdev: IB device - * @comp_vector: completion vector index - */ -static const struct cpumask *i40iw_get_vector_affinity(struct ib_device *ibdev, - int comp_vector) -{ - struct i40iw_device *iwdev = to_iwdev(ibdev); - struct i40iw_msix_vector *msix_vec; - - if (iwdev->msix_shared) - msix_vec = &iwdev->iw_msixtbl[comp_vector]; - else - msix_vec = &iwdev->iw_msixtbl[comp_vector + 1]; - - return irq_get_affinity_mask(msix_vec->irq); -} +static const struct ib_device_ops i40iw_dev_ops = { + .alloc_hw_stats = i40iw_alloc_hw_stats, + .alloc_mr = i40iw_alloc_mr, + .alloc_pd = i40iw_alloc_pd, + .alloc_ucontext = i40iw_alloc_ucontext, + .create_cq = i40iw_create_cq, + .create_qp = i40iw_create_qp, + .dealloc_pd = i40iw_dealloc_pd, + .dealloc_ucontext = i40iw_dealloc_ucontext, + .dereg_mr = i40iw_dereg_mr, + .destroy_cq = i40iw_destroy_cq, + .destroy_qp = i40iw_destroy_qp, + .drain_rq = i40iw_drain_rq, + .drain_sq = i40iw_drain_sq, + .get_dev_fw_str = i40iw_get_dev_fw_str, + .get_dma_mr = i40iw_get_dma_mr, + .get_hw_stats = i40iw_get_hw_stats, + .get_port_immutable = i40iw_port_immutable, + .map_mr_sg = i40iw_map_mr_sg, + .mmap = i40iw_mmap, + .modify_qp = i40iw_modify_qp, + .poll_cq = i40iw_poll_cq, + .post_recv = i40iw_post_recv, + .post_send = i40iw_post_send, + .query_device = i40iw_query_device, + .query_gid = i40iw_query_gid, + .query_pkey = i40iw_query_pkey, + .query_port = i40iw_query_port, + .query_qp = i40iw_query_qp, + .reg_user_mr = i40iw_reg_user_mr, + .req_notify_cq = i40iw_req_notify_cq, +}; /** * i40iw_init_rdma_device - initialization of iwarp device @@ -2786,30 +2799,6 @@ static struct i40iw_ib_device *i40iw_init_rdma_device(struct i40iw_device *iwdev iwibdev->ibdev.phys_port_cnt = 1; iwibdev->ibdev.num_comp_vectors = iwdev->ceqs_count; iwibdev->ibdev.dev.parent = &pcidev->dev; - iwibdev->ibdev.query_port = i40iw_query_port; - iwibdev->ibdev.query_pkey = i40iw_query_pkey; - iwibdev->ibdev.query_gid = i40iw_query_gid; - iwibdev->ibdev.alloc_ucontext = i40iw_alloc_ucontext; - iwibdev->ibdev.dealloc_ucontext = i40iw_dealloc_ucontext; - iwibdev->ibdev.mmap = i40iw_mmap; - iwibdev->ibdev.alloc_pd = i40iw_alloc_pd; - iwibdev->ibdev.dealloc_pd = i40iw_dealloc_pd; - iwibdev->ibdev.create_qp = i40iw_create_qp; - iwibdev->ibdev.modify_qp = i40iw_modify_qp; - iwibdev->ibdev.query_qp = i40iw_query_qp; - iwibdev->ibdev.destroy_qp = i40iw_destroy_qp; - iwibdev->ibdev.create_cq = i40iw_create_cq; - iwibdev->ibdev.destroy_cq = i40iw_destroy_cq; - iwibdev->ibdev.get_dma_mr = i40iw_get_dma_mr; - iwibdev->ibdev.reg_user_mr = i40iw_reg_user_mr; - iwibdev->ibdev.dereg_mr = i40iw_dereg_mr; - iwibdev->ibdev.alloc_hw_stats = i40iw_alloc_hw_stats; - iwibdev->ibdev.get_hw_stats = i40iw_get_hw_stats; - iwibdev->ibdev.query_device = i40iw_query_device; - iwibdev->ibdev.drain_sq = i40iw_drain_sq; - iwibdev->ibdev.drain_rq = i40iw_drain_rq; - iwibdev->ibdev.alloc_mr = i40iw_alloc_mr; - iwibdev->ibdev.map_mr_sg = i40iw_map_mr_sg; iwibdev->ibdev.iwcm = kzalloc(sizeof(*iwibdev->ibdev.iwcm), GFP_KERNEL); if (!iwibdev->ibdev.iwcm) { ib_dealloc_device(&iwibdev->ibdev); @@ -2826,13 +2815,7 @@ static struct i40iw_ib_device *i40iw_init_rdma_device(struct i40iw_device *iwdev iwibdev->ibdev.iwcm->destroy_listen = i40iw_destroy_listen; 
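/*
 * The i40iw_dev_ops table above replaces the long run of per-field
 * assignments removed below: callbacks now live in one const struct
 * ib_device_ops and are applied with a single ib_set_device_ops() call.
 * A rough sketch of that core helper (not the exact implementation; the
 * real one copies each member the ops table provides):
 *
 *	#define SET_DEVICE_OP(dev, name)		\
 *		do {					\
 *			if (ops->name)			\
 *				(dev)->name = ops->name;\
 *		} while (0)
 *
 * which lets a driver layer several small ops tables (base plus optional
 * features) onto one ib_device, as mlx4 does further down.
 */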
memcpy(iwibdev->ibdev.iwcm->ifname, netdev->name, sizeof(iwibdev->ibdev.iwcm->ifname)); - iwibdev->ibdev.get_port_immutable = i40iw_port_immutable; - iwibdev->ibdev.get_dev_fw_str = i40iw_get_dev_fw_str; - iwibdev->ibdev.poll_cq = i40iw_poll_cq; - iwibdev->ibdev.req_notify_cq = i40iw_req_notify_cq; - iwibdev->ibdev.post_send = i40iw_post_send; - iwibdev->ibdev.post_recv = i40iw_post_recv; - iwibdev->ibdev.get_vector_affinity = i40iw_get_vector_affinity; + ib_set_device_ops(&iwibdev->ibdev, &i40iw_dev_ops); return iwibdev; } diff --git a/drivers/infiniband/hw/mlx4/ah.c b/drivers/infiniband/hw/mlx4/ah.c index e9e3a6f390db..1672808262ba 100644 --- a/drivers/infiniband/hw/mlx4/ah.c +++ b/drivers/infiniband/hw/mlx4/ah.c @@ -144,7 +144,7 @@ static struct ib_ah *create_iboe_ah(struct ib_pd *pd, } struct ib_ah *mlx4_ib_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, - struct ib_udata *udata) + u32 flags, struct ib_udata *udata) { struct mlx4_ib_ah *ah; @@ -189,7 +189,7 @@ struct ib_ah *mlx4_ib_create_ah_slave(struct ib_pd *pd, slave_attr.grh.sgid_attr = NULL; slave_attr.grh.sgid_index = slave_sgid_index; - ah = mlx4_ib_create_ah(pd, &slave_attr, NULL); + ah = mlx4_ib_create_ah(pd, &slave_attr, 0, NULL); if (IS_ERR(ah)) return ah; @@ -250,7 +250,7 @@ int mlx4_ib_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr) return 0; } -int mlx4_ib_destroy_ah(struct ib_ah *ah) +int mlx4_ib_destroy_ah(struct ib_ah *ah, u32 flags) { kfree(to_mah(ah)); return 0; diff --git a/drivers/infiniband/hw/mlx4/alias_GUID.c b/drivers/infiniband/hw/mlx4/alias_GUID.c index 155b4dfc0ae8..782499abcd98 100644 --- a/drivers/infiniband/hw/mlx4/alias_GUID.c +++ b/drivers/infiniband/hw/mlx4/alias_GUID.c @@ -849,7 +849,7 @@ int mlx4_ib_init_alias_guid_service(struct mlx4_ib_dev *dev) spin_lock_init(&dev->sriov.alias_guid.ag_work_lock); for (i = 1; i <= dev->num_ports; ++i) { - if (dev->ib_dev.query_gid(&dev->ib_dev , i, 0, &gid)) { + if (dev->ib_dev.ops.query_gid(&dev->ib_dev, i, 0, &gid)) { ret = -EFAULT; goto err_unregister; } diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c index 82adc0d1d30e..43512347b4f0 100644 --- a/drivers/infiniband/hw/mlx4/cq.c +++ b/drivers/infiniband/hw/mlx4/cq.c @@ -181,6 +181,7 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, struct mlx4_ib_dev *dev = to_mdev(ibdev); struct mlx4_ib_cq *cq; struct mlx4_uar *uar; + void *buf_addr; int err; if (entries < 1 || entries > dev->dev->caps.max_cqes) @@ -211,6 +212,8 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, goto err_cq; } + buf_addr = (void *)(unsigned long)ucmd.buf_addr; + err = mlx4_ib_get_cq_umem(dev, context, &cq->buf, &cq->umem, ucmd.buf_addr, entries); if (err) @@ -237,6 +240,8 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, if (err) goto err_db; + buf_addr = &cq->buf.buf; + uar = &dev->priv_uar; cq->mcq.usage = MLX4_RES_USAGE_DRIVER; } @@ -246,7 +251,9 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, err = mlx4_cq_alloc(dev->dev, entries, &cq->buf.mtt, uar, cq->db.dma, &cq->mcq, vector, 0, - !!(cq->create_flags & IB_UVERBS_CQ_FLAGS_TIMESTAMP_COMPLETION)); + !!(cq->create_flags & + IB_UVERBS_CQ_FLAGS_TIMESTAMP_COMPLETION), + buf_addr, !!context); if (err) goto err_dbmap; diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c index 8942f5f7f04d..25439da8976c 100644 --- a/drivers/infiniband/hw/mlx4/mad.c +++ b/drivers/infiniband/hw/mlx4/mad.c @@ -202,13 +202,13 @@ static void update_sm_ah(struct mlx4_ib_dev *dev, u8 port_num, u16 lid, 
u8 sl) rdma_ah_set_port_num(&ah_attr, port_num); new_ah = rdma_create_ah(dev->send_agent[port_num - 1][0]->qp->pd, - &ah_attr); + &ah_attr, 0); if (IS_ERR(new_ah)) return; spin_lock_irqsave(&dev->sm_lock, flags); if (dev->sm_ah[port_num - 1]) - rdma_destroy_ah(dev->sm_ah[port_num - 1]); + rdma_destroy_ah(dev->sm_ah[port_num - 1], 0); dev->sm_ah[port_num - 1] = new_ah; spin_unlock_irqrestore(&dev->sm_lock, flags); } @@ -567,7 +567,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port, return -EINVAL; rdma_ah_set_grh(&attr, &dgid, 0, 0, 0, 0); } - ah = rdma_create_ah(tun_ctx->pd, &attr); + ah = rdma_create_ah(tun_ctx->pd, &attr, 0); if (IS_ERR(ah)) return -ENOMEM; @@ -584,7 +584,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port, tun_mad = (struct mlx4_rcv_tunnel_mad *) (tun_qp->tx_ring[tun_tx_ix].buf.addr); if (tun_qp->tx_ring[tun_tx_ix].ah) - rdma_destroy_ah(tun_qp->tx_ring[tun_tx_ix].ah); + rdma_destroy_ah(tun_qp->tx_ring[tun_tx_ix].ah, 0); tun_qp->tx_ring[tun_tx_ix].ah = ah; ib_dma_sync_single_for_cpu(&dev->ib_dev, tun_qp->tx_ring[tun_tx_ix].buf.map, @@ -657,7 +657,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port, spin_unlock(&tun_qp->tx_lock); tun_qp->tx_ring[tun_tx_ix].ah = NULL; end: - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, 0); return ret; } @@ -1024,7 +1024,7 @@ static void send_handler(struct ib_mad_agent *agent, struct ib_mad_send_wc *mad_send_wc) { if (mad_send_wc->send_buf->context[0]) - rdma_destroy_ah(mad_send_wc->send_buf->context[0]); + rdma_destroy_ah(mad_send_wc->send_buf->context[0], 0); ib_free_send_mad(mad_send_wc->send_buf); } @@ -1079,7 +1079,7 @@ void mlx4_ib_mad_cleanup(struct mlx4_ib_dev *dev) } if (dev->sm_ah[p]) - rdma_destroy_ah(dev->sm_ah[p]); + rdma_destroy_ah(dev->sm_ah[p], 0); } } @@ -1411,7 +1411,7 @@ int mlx4_ib_send_to_wire(struct mlx4_ib_dev *dev, int slave, u8 port, sqp_mad = (struct mlx4_mad_snd_buf *) (sqp->tx_ring[wire_tx_ix].buf.addr); if (sqp->tx_ring[wire_tx_ix].ah) - rdma_destroy_ah(sqp->tx_ring[wire_tx_ix].ah); + rdma_destroy_ah(sqp->tx_ring[wire_tx_ix].ah, 0); sqp->tx_ring[wire_tx_ix].ah = ah; ib_dma_sync_single_for_cpu(&dev->ib_dev, sqp->tx_ring[wire_tx_ix].buf.map, @@ -1450,7 +1450,7 @@ int mlx4_ib_send_to_wire(struct mlx4_ib_dev *dev, int slave, u8 port, spin_unlock(&sqp->tx_lock); sqp->tx_ring[wire_tx_ix].ah = NULL; out: - mlx4_ib_destroy_ah(ah); + mlx4_ib_destroy_ah(ah, 0); return ret; } @@ -1716,7 +1716,7 @@ static void mlx4_ib_free_pv_qp_bufs(struct mlx4_ib_demux_pv_ctx *ctx, tx_buf_size, DMA_TO_DEVICE); kfree(tun_qp->tx_ring[i].buf.addr); if (tun_qp->tx_ring[i].ah) - rdma_destroy_ah(tun_qp->tx_ring[i].ah); + rdma_destroy_ah(tun_qp->tx_ring[i].ah, 0); } kfree(tun_qp->tx_ring); kfree(tun_qp->ring); @@ -1749,7 +1749,7 @@ static void mlx4_ib_tunnel_comp_worker(struct work_struct *work) "wrid=0x%llx, status=0x%x\n", wc.wr_id, wc.status); rdma_destroy_ah(tun_qp->tx_ring[wc.wr_id & - (MLX4_NUM_TUNNEL_BUFS - 1)].ah); + (MLX4_NUM_TUNNEL_BUFS - 1)].ah, 0); tun_qp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah = NULL; spin_lock(&tun_qp->tx_lock); @@ -1766,7 +1766,7 @@ static void mlx4_ib_tunnel_comp_worker(struct work_struct *work) ctx->slave, wc.status, wc.wr_id); if (!MLX4_TUN_IS_RECV(wc.wr_id)) { rdma_destroy_ah(tun_qp->tx_ring[wc.wr_id & - (MLX4_NUM_TUNNEL_BUFS - 1)].ah); + (MLX4_NUM_TUNNEL_BUFS - 1)].ah, 0); tun_qp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah = NULL; spin_lock(&tun_qp->tx_lock); @@ -1903,7 +1903,7 @@ static void mlx4_ib_sqp_comp_worker(struct 
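/*
 * Every rdma_create_ah()/rdma_destroy_ah() call in this file gains a new
 * u32 flags argument, passed as 0 here. The flag bits are defined by the
 * RDMA core as part of this series (describing the calling context, e.g.
 * whether the caller may sleep); 0 requests no special treatment for
 * these MAD paths.
 */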
work_struct *work) switch (wc.opcode) { case IB_WC_SEND: rdma_destroy_ah(sqp->tx_ring[wc.wr_id & - (MLX4_NUM_TUNNEL_BUFS - 1)].ah); + (MLX4_NUM_TUNNEL_BUFS - 1)].ah, 0); sqp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah = NULL; spin_lock(&sqp->tx_lock); @@ -1932,7 +1932,7 @@ static void mlx4_ib_sqp_comp_worker(struct work_struct *work) ctx->slave, wc.status, wc.wr_id); if (!MLX4_TUN_IS_RECV(wc.wr_id)) { rdma_destroy_ah(sqp->tx_ring[wc.wr_id & - (MLX4_NUM_TUNNEL_BUFS - 1)].ah); + (MLX4_NUM_TUNNEL_BUFS - 1)].ah, 0); sqp->tx_ring[wc.wr_id & (MLX4_NUM_TUNNEL_BUFS - 1)].ah = NULL; spin_lock(&sqp->tx_lock); diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c index 0def2323459c..1f15ec3e2b83 100644 --- a/drivers/infiniband/hw/mlx4/main.c +++ b/drivers/infiniband/hw/mlx4/main.c @@ -2220,6 +2220,11 @@ static void mlx4_ib_fill_diag_counters(struct mlx4_ib_dev *ibdev, } } +static const struct ib_device_ops mlx4_ib_hw_stats_ops = { + .alloc_hw_stats = mlx4_ib_alloc_hw_stats, + .get_hw_stats = mlx4_ib_get_hw_stats, +}; + static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev) { struct mlx4_ib_diag_counters *diag = ibdev->diag_counters; @@ -2246,8 +2251,7 @@ static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev) diag[i].offset, i); } - ibdev->ib_dev.get_hw_stats = mlx4_ib_get_hw_stats; - ibdev->ib_dev.alloc_hw_stats = mlx4_ib_alloc_hw_stats; + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_hw_stats_ops); return 0; @@ -2352,6 +2356,32 @@ static void mlx4_ib_scan_netdevs(struct mlx4_ib_dev *ibdev, event == NETDEV_UP || event == NETDEV_CHANGE)) update_qps_port = port; + if (dev == iboe->netdevs[port - 1] && + (event == NETDEV_UP || event == NETDEV_DOWN)) { + enum ib_port_state port_state; + struct ib_event ibev = { }; + + if (ib_get_cached_port_state(&ibdev->ib_dev, port, + &port_state)) + continue; + + if (event == NETDEV_UP && + (port_state != IB_PORT_ACTIVE || + iboe->last_port_state[port - 1] != IB_PORT_DOWN)) + continue; + if (event == NETDEV_DOWN && + (port_state != IB_PORT_DOWN || + iboe->last_port_state[port - 1] != IB_PORT_ACTIVE)) + continue; + iboe->last_port_state[port - 1] = port_state; + + ibev.device = &ibdev->ib_dev; + ibev.element.port_num = port; + ibev.event = event == NETDEV_UP ? 
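/*
 * The continue checks above act as a debounce: NETDEV_UP is only turned
 * into an IB event when the port really moved from IB_PORT_DOWN to
 * IB_PORT_ACTIVE (and the mirror condition for NETDEV_DOWN), with the
 * new last_port_state[] cache remembering the previously reported state
 * so consumers do not see duplicate PORT_ACTIVE/PORT_ERR events.
 */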
IB_EVENT_PORT_ACTIVE : + IB_EVENT_PORT_ERR; + ib_dispatch_event(&ibev); + } + } spin_unlock_bh(&iboe->lock); @@ -2499,6 +2529,88 @@ static void get_fw_ver_str(struct ib_device *device, char *str) (int) dev->dev->caps.fw_ver & 0xffff); } +static const struct ib_device_ops mlx4_ib_dev_ops = { + .add_gid = mlx4_ib_add_gid, + .alloc_mr = mlx4_ib_alloc_mr, + .alloc_pd = mlx4_ib_alloc_pd, + .alloc_ucontext = mlx4_ib_alloc_ucontext, + .attach_mcast = mlx4_ib_mcg_attach, + .create_ah = mlx4_ib_create_ah, + .create_cq = mlx4_ib_create_cq, + .create_qp = mlx4_ib_create_qp, + .create_srq = mlx4_ib_create_srq, + .dealloc_pd = mlx4_ib_dealloc_pd, + .dealloc_ucontext = mlx4_ib_dealloc_ucontext, + .del_gid = mlx4_ib_del_gid, + .dereg_mr = mlx4_ib_dereg_mr, + .destroy_ah = mlx4_ib_destroy_ah, + .destroy_cq = mlx4_ib_destroy_cq, + .destroy_qp = mlx4_ib_destroy_qp, + .destroy_srq = mlx4_ib_destroy_srq, + .detach_mcast = mlx4_ib_mcg_detach, + .disassociate_ucontext = mlx4_ib_disassociate_ucontext, + .drain_rq = mlx4_ib_drain_rq, + .drain_sq = mlx4_ib_drain_sq, + .get_dev_fw_str = get_fw_ver_str, + .get_dma_mr = mlx4_ib_get_dma_mr, + .get_link_layer = mlx4_ib_port_link_layer, + .get_netdev = mlx4_ib_get_netdev, + .get_port_immutable = mlx4_port_immutable, + .map_mr_sg = mlx4_ib_map_mr_sg, + .mmap = mlx4_ib_mmap, + .modify_cq = mlx4_ib_modify_cq, + .modify_device = mlx4_ib_modify_device, + .modify_port = mlx4_ib_modify_port, + .modify_qp = mlx4_ib_modify_qp, + .modify_srq = mlx4_ib_modify_srq, + .poll_cq = mlx4_ib_poll_cq, + .post_recv = mlx4_ib_post_recv, + .post_send = mlx4_ib_post_send, + .post_srq_recv = mlx4_ib_post_srq_recv, + .process_mad = mlx4_ib_process_mad, + .query_ah = mlx4_ib_query_ah, + .query_device = mlx4_ib_query_device, + .query_gid = mlx4_ib_query_gid, + .query_pkey = mlx4_ib_query_pkey, + .query_port = mlx4_ib_query_port, + .query_qp = mlx4_ib_query_qp, + .query_srq = mlx4_ib_query_srq, + .reg_user_mr = mlx4_ib_reg_user_mr, + .req_notify_cq = mlx4_ib_arm_cq, + .rereg_user_mr = mlx4_ib_rereg_user_mr, + .resize_cq = mlx4_ib_resize_cq, +}; + +static const struct ib_device_ops mlx4_ib_dev_wq_ops = { + .create_rwq_ind_table = mlx4_ib_create_rwq_ind_table, + .create_wq = mlx4_ib_create_wq, + .destroy_rwq_ind_table = mlx4_ib_destroy_rwq_ind_table, + .destroy_wq = mlx4_ib_destroy_wq, + .modify_wq = mlx4_ib_modify_wq, +}; + +static const struct ib_device_ops mlx4_ib_dev_fmr_ops = { + .alloc_fmr = mlx4_ib_fmr_alloc, + .dealloc_fmr = mlx4_ib_fmr_dealloc, + .map_phys_fmr = mlx4_ib_map_phys_fmr, + .unmap_fmr = mlx4_ib_unmap_fmr, +}; + +static const struct ib_device_ops mlx4_ib_dev_mw_ops = { + .alloc_mw = mlx4_ib_alloc_mw, + .dealloc_mw = mlx4_ib_dealloc_mw, +}; + +static const struct ib_device_ops mlx4_ib_dev_xrc_ops = { + .alloc_xrcd = mlx4_ib_alloc_xrcd, + .dealloc_xrcd = mlx4_ib_dealloc_xrcd, +}; + +static const struct ib_device_ops mlx4_ib_dev_fs_ops = { + .create_flow = mlx4_ib_create_flow, + .destroy_flow = mlx4_ib_destroy_flow, +}; + static void *mlx4_ib_add(struct mlx4_dev *dev) { struct mlx4_ib_dev *ibdev; @@ -2554,9 +2666,6 @@ static void *mlx4_ib_add(struct mlx4_dev *dev) 1 : ibdev->num_ports; ibdev->ib_dev.num_comp_vectors = dev->caps.num_comp_vectors; ibdev->ib_dev.dev.parent = &dev->persist->pdev->dev; - ibdev->ib_dev.get_netdev = mlx4_ib_get_netdev; - ibdev->ib_dev.add_gid = mlx4_ib_add_gid; - ibdev->ib_dev.del_gid = mlx4_ib_del_gid; if (dev->caps.userspace_caps) ibdev->ib_dev.uverbs_abi_ver = MLX4_IB_UVERBS_ABI_VERSION; @@ -2589,116 +2698,53 @@ static void *mlx4_ib_add(struct mlx4_dev 
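/*
 * mlx4 splits its callbacks into one base table (mlx4_ib_dev_ops) plus
 * small optional tables (wq, fmr, mw, xrc and fs ops) that are layered
 * on only when the device reports the matching capability - see the
 * capability-guarded ib_set_device_ops() calls in mlx4_ib_add() below.
 */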
*dev) (1ull << IB_USER_VERBS_CMD_CREATE_XSRQ) | (1ull << IB_USER_VERBS_CMD_OPEN_QP); - ibdev->ib_dev.query_device = mlx4_ib_query_device; - ibdev->ib_dev.query_port = mlx4_ib_query_port; - ibdev->ib_dev.get_link_layer = mlx4_ib_port_link_layer; - ibdev->ib_dev.query_gid = mlx4_ib_query_gid; - ibdev->ib_dev.query_pkey = mlx4_ib_query_pkey; - ibdev->ib_dev.modify_device = mlx4_ib_modify_device; - ibdev->ib_dev.modify_port = mlx4_ib_modify_port; - ibdev->ib_dev.alloc_ucontext = mlx4_ib_alloc_ucontext; - ibdev->ib_dev.dealloc_ucontext = mlx4_ib_dealloc_ucontext; - ibdev->ib_dev.mmap = mlx4_ib_mmap; - ibdev->ib_dev.alloc_pd = mlx4_ib_alloc_pd; - ibdev->ib_dev.dealloc_pd = mlx4_ib_dealloc_pd; - ibdev->ib_dev.create_ah = mlx4_ib_create_ah; - ibdev->ib_dev.query_ah = mlx4_ib_query_ah; - ibdev->ib_dev.destroy_ah = mlx4_ib_destroy_ah; - ibdev->ib_dev.create_srq = mlx4_ib_create_srq; - ibdev->ib_dev.modify_srq = mlx4_ib_modify_srq; - ibdev->ib_dev.query_srq = mlx4_ib_query_srq; - ibdev->ib_dev.destroy_srq = mlx4_ib_destroy_srq; - ibdev->ib_dev.post_srq_recv = mlx4_ib_post_srq_recv; - ibdev->ib_dev.create_qp = mlx4_ib_create_qp; - ibdev->ib_dev.modify_qp = mlx4_ib_modify_qp; - ibdev->ib_dev.query_qp = mlx4_ib_query_qp; - ibdev->ib_dev.destroy_qp = mlx4_ib_destroy_qp; - ibdev->ib_dev.drain_sq = mlx4_ib_drain_sq; - ibdev->ib_dev.drain_rq = mlx4_ib_drain_rq; - ibdev->ib_dev.post_send = mlx4_ib_post_send; - ibdev->ib_dev.post_recv = mlx4_ib_post_recv; - ibdev->ib_dev.create_cq = mlx4_ib_create_cq; - ibdev->ib_dev.modify_cq = mlx4_ib_modify_cq; - ibdev->ib_dev.resize_cq = mlx4_ib_resize_cq; - ibdev->ib_dev.destroy_cq = mlx4_ib_destroy_cq; - ibdev->ib_dev.poll_cq = mlx4_ib_poll_cq; - ibdev->ib_dev.req_notify_cq = mlx4_ib_arm_cq; - ibdev->ib_dev.get_dma_mr = mlx4_ib_get_dma_mr; - ibdev->ib_dev.reg_user_mr = mlx4_ib_reg_user_mr; - ibdev->ib_dev.rereg_user_mr = mlx4_ib_rereg_user_mr; - ibdev->ib_dev.dereg_mr = mlx4_ib_dereg_mr; - ibdev->ib_dev.alloc_mr = mlx4_ib_alloc_mr; - ibdev->ib_dev.map_mr_sg = mlx4_ib_map_mr_sg; - ibdev->ib_dev.attach_mcast = mlx4_ib_mcg_attach; - ibdev->ib_dev.detach_mcast = mlx4_ib_mcg_detach; - ibdev->ib_dev.process_mad = mlx4_ib_process_mad; - ibdev->ib_dev.get_port_immutable = mlx4_port_immutable; - ibdev->ib_dev.get_dev_fw_str = get_fw_ver_str; - ibdev->ib_dev.disassociate_ucontext = mlx4_ib_disassociate_ucontext; - + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_ops); ibdev->ib_dev.uverbs_ex_cmd_mask |= - (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ); + (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) | + (1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) | + (1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) | + (1ull << IB_USER_VERBS_EX_CMD_CREATE_QP); if ((dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS) && ((mlx4_ib_port_link_layer(&ibdev->ib_dev, 1) == IB_LINK_LAYER_ETHERNET) || (mlx4_ib_port_link_layer(&ibdev->ib_dev, 2) == IB_LINK_LAYER_ETHERNET))) { - ibdev->ib_dev.create_wq = mlx4_ib_create_wq; - ibdev->ib_dev.modify_wq = mlx4_ib_modify_wq; - ibdev->ib_dev.destroy_wq = mlx4_ib_destroy_wq; - ibdev->ib_dev.create_rwq_ind_table = - mlx4_ib_create_rwq_ind_table; - ibdev->ib_dev.destroy_rwq_ind_table = - mlx4_ib_destroy_rwq_ind_table; ibdev->ib_dev.uverbs_ex_cmd_mask |= (1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_wq_ops); } - if (!mlx4_is_slave(ibdev->dev)) { - 
ibdev->ib_dev.alloc_fmr = mlx4_ib_fmr_alloc; - ibdev->ib_dev.map_phys_fmr = mlx4_ib_map_phys_fmr; - ibdev->ib_dev.unmap_fmr = mlx4_ib_unmap_fmr; - ibdev->ib_dev.dealloc_fmr = mlx4_ib_fmr_dealloc; - } + if (!mlx4_is_slave(ibdev->dev)) + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fmr_ops); if (dev->caps.flags & MLX4_DEV_CAP_FLAG_MEM_WINDOW || dev->caps.bmme_flags & MLX4_BMME_FLAG_TYPE_2_WIN) { - ibdev->ib_dev.alloc_mw = mlx4_ib_alloc_mw; - ibdev->ib_dev.dealloc_mw = mlx4_ib_dealloc_mw; - ibdev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_ALLOC_MW) | (1ull << IB_USER_VERBS_CMD_DEALLOC_MW); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_mw_ops); } if (dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC) { - ibdev->ib_dev.alloc_xrcd = mlx4_ib_alloc_xrcd; - ibdev->ib_dev.dealloc_xrcd = mlx4_ib_dealloc_xrcd; ibdev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_OPEN_XRCD) | (1ull << IB_USER_VERBS_CMD_CLOSE_XRCD); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_xrc_ops); } if (check_flow_steering_support(dev)) { ibdev->steering_support = MLX4_STEERING_MODE_DEVICE_MANAGED; - ibdev->ib_dev.create_flow = mlx4_ib_create_flow; - ibdev->ib_dev.destroy_flow = mlx4_ib_destroy_flow; - ibdev->ib_dev.uverbs_ex_cmd_mask |= (1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fs_ops); } - ibdev->ib_dev.uverbs_ex_cmd_mask |= - (1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) | - (1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) | - (1ull << IB_USER_VERBS_EX_CMD_CREATE_QP); - mlx4_ib_alloc_eqs(dev, ibdev); spin_lock_init(&iboe->lock); @@ -2710,6 +2756,7 @@ static void *mlx4_ib_add(struct mlx4_dev *dev) for (i = 0; i < ibdev->num_ports; ++i) { mutex_init(&ibdev->counters_table[i].mutex); INIT_LIST_HEAD(&ibdev->counters_table[i].counters_list); + iboe->last_port_state[i] = IB_PORT_DOWN; } num_req_counters = mlx4_is_bonded(dev) ? 
1 : ibdev->num_ports; diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h index 8850dfc3826d..e491f3eda6e7 100644 --- a/drivers/infiniband/hw/mlx4/mlx4_ib.h +++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h @@ -519,6 +519,7 @@ struct mlx4_ib_iboe { atomic64_t mac[MLX4_MAX_PORTS]; struct notifier_block nb; struct mlx4_port_gid_table gids[MLX4_MAX_PORTS]; + enum ib_port_state last_port_state[MLX4_MAX_PORTS]; }; struct pkey_mgt { @@ -753,13 +754,13 @@ void __mlx4_ib_cq_clean(struct mlx4_ib_cq *cq, u32 qpn, struct mlx4_ib_srq *srq) void mlx4_ib_cq_clean(struct mlx4_ib_cq *cq, u32 qpn, struct mlx4_ib_srq *srq); struct ib_ah *mlx4_ib_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, - struct ib_udata *udata); + u32 flags, struct ib_udata *udata); struct ib_ah *mlx4_ib_create_ah_slave(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, int slave_sgid_index, u8 *s_mac, u16 vlan_tag); int mlx4_ib_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); -int mlx4_ib_destroy_ah(struct ib_ah *ah); +int mlx4_ib_destroy_ah(struct ib_ah *ah, u32 flags); struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *init_attr, diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index 0711ca1dfb8f..971e9a9ebdaf 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ b/drivers/infiniband/hw/mlx4/qp.c @@ -323,7 +323,7 @@ static int send_wqe_overhead(enum mlx4_ib_qp_type type, u32 flags) } static int set_rq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap, - int is_user, int has_rq, struct mlx4_ib_qp *qp, + bool is_user, int has_rq, struct mlx4_ib_qp *qp, u32 inl_recv_sz) { /* Sanity check RQ size before proceeding */ @@ -401,7 +401,7 @@ static int set_kernel_sq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap, * We need to leave 2 KB + 1 WR of headroom in the SQ to * allow HW to prefetch. 
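As a worked example, assuming MLX4_IB_SQ_HEADROOM(shift) expands to the same (2048 >> shift) + 1 it replaces just below: 64-byte WQEs (wqe_shift == 6) leave (2048 >> 6) + 1 = 33 spare WQEs.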
*/ - qp->sq_spare_wqes = (2048 >> qp->sq.wqe_shift) + 1; + qp->sq_spare_wqes = MLX4_IB_SQ_HEADROOM(qp->sq.wqe_shift); qp->sq.wqe_cnt = roundup_pow_of_two(cap->max_send_wr + qp->sq_spare_wqes); @@ -942,7 +942,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, qp->sq_signal_bits = cpu_to_be32(MLX4_WQE_CTRL_CQ_UPDATE); - if (pd->uobject) { + if (udata) { union { struct mlx4_ib_create_qp qp; struct mlx4_ib_create_wq wq; @@ -991,7 +991,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, qp->flags |= MLX4_IB_QP_SCATTER_FCS; } - err = set_rq_size(dev, &init_attr->cap, !!pd->uobject, + err = set_rq_size(dev, &init_attr->cap, udata, qp_has_rq(init_attr), qp, qp->inl_recv_sz); if (err) goto err; @@ -1043,7 +1043,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, } qp->mqp.usage = MLX4_RES_USAGE_USER_VERBS; } else { - err = set_rq_size(dev, &init_attr->cap, !!pd->uobject, + err = set_rq_size(dev, &init_attr->cap, udata, qp_has_rq(init_attr), qp, 0); if (err) goto err; @@ -1189,7 +1189,7 @@ err_proxy: if (qp->mlx4_ib_qp_type == MLX4_IB_QPT_PROXY_GSI) free_proxy_bufs(pd->device, qp); err_wrid: - if (pd->uobject) { + if (udata) { if (qp_has_rq(init_attr)) mlx4_ib_db_unmap_user(to_mucontext(pd->uobject->context), &qp->db); } else { @@ -1201,20 +1201,20 @@ err_mtt: mlx4_mtt_cleanup(dev->dev, &qp->mtt); err_buf: - if (pd->uobject) + if (qp->umem) ib_umem_release(qp->umem); else mlx4_buf_free(dev->dev, qp->buf_size, &qp->buf); err_db: - if (!pd->uobject && qp_has_rq(init_attr)) + if (!udata && qp_has_rq(init_attr)) mlx4_db_free(dev->dev, &qp->db); err: - if (sqp) - kfree(sqp); - else if (!*caller_qp) + if (!sqp && !*caller_qp) kfree(qp); + kfree(sqp); + return err; } @@ -1332,7 +1332,7 @@ static void destroy_qp_rss(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp) } static void destroy_qp_common(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp, - enum mlx4_ib_source_type src, int is_user) + enum mlx4_ib_source_type src, bool is_user) { struct mlx4_ib_cq *send_cq, *recv_cq; unsigned long flags; @@ -1609,10 +1609,7 @@ static int _mlx4_ib_destroy_qp(struct ib_qp *qp) if (qp->rwq_ind_tbl) { destroy_qp_rss(dev, mqp); } else { - struct mlx4_ib_pd *pd; - - pd = get_pd(mqp); - destroy_qp_common(dev, mqp, MLX4_IB_QP_SRC, !!pd->ibpd.uobject); + destroy_qp_common(dev, mqp, MLX4_IB_QP_SRC, qp->uobject); } if (is_sqp(dev, mqp)) @@ -4044,7 +4041,7 @@ struct ib_wq *mlx4_ib_create_wq(struct ib_pd *pd, struct mlx4_ib_create_wq ucmd; int err, required_cmd_sz; - if (!(udata && pd->uobject)) + if (!udata) return ERR_PTR(-EINVAL); required_cmd_sz = offsetof(typeof(ucmd), comp_mask) + diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c index 3731b31c3653..4456f1b8921d 100644 --- a/drivers/infiniband/hw/mlx4/srq.c +++ b/drivers/infiniband/hw/mlx4/srq.c @@ -105,7 +105,7 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd, buf_size = srq->msrq.max * desc_size; - if (pd->uobject) { + if (udata) { struct mlx4_ib_create_srq ucmd; if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) { @@ -191,7 +191,7 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd, srq->msrq.event = mlx4_ib_srq_event; srq->ibsrq.ext.xrc.srq_num = srq->msrq.srqn; - if (pd->uobject) + if (udata) if (ib_copy_to_udata(udata, &srq->msrq.srqn, sizeof (__u32))) { err = -EFAULT; goto err_wrid; @@ -202,7 +202,7 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd, return &srq->ibsrq; err_wrid: - if (pd->uobject) + if (udata) 
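/*
 * Throughout this series "udata != NULL" replaces "pd->uobject" as the
 * test for a user-space verbs call: udata is only supplied on the uverbs
 * path, which is also why set_rq_size() above now takes a bool is_user
 * rather than poking at the uobject.
 */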
mlx4_ib_db_unmap_user(to_mucontext(pd->uobject->context), &srq->db); else kvfree(srq->wrid); @@ -211,13 +211,13 @@ err_mtt: mlx4_mtt_cleanup(dev->dev, &srq->mtt); err_buf: - if (pd->uobject) + if (srq->umem) ib_umem_release(srq->umem); else mlx4_buf_free(dev->dev, buf_size, &srq->buf); err_db: - if (!pd->uobject) + if (!udata) mlx4_db_free(dev->dev, &srq->db); err_srq: diff --git a/drivers/infiniband/hw/mlx4/sysfs.c b/drivers/infiniband/hw/mlx4/sysfs.c index 752bdd536130..ea1f3a081b05 100644 --- a/drivers/infiniband/hw/mlx4/sysfs.c +++ b/drivers/infiniband/hw/mlx4/sysfs.c @@ -353,16 +353,12 @@ err: static void get_name(struct mlx4_ib_dev *dev, char *name, int i, int max) { - char base_name[9]; - - /* pci_name format is: bus:dev:func -> xxxx:yy:zz.n */ - strlcpy(name, pci_name(dev->dev->persist->pdev), max); - strncpy(base_name, name, 8); /*till xxxx:yy:*/ - base_name[8] = '\0'; - /* with no ARI only 3 last bits are used so when the fn is higher than 8 + /* pci_name format is: bus:dev:func -> xxxx:yy:zz.n + * with no ARI only 3 last bits are used so when the fn is higher than 8 * need to add it to the dev num, so count in the last number will be * modulo 8 */ - sprintf(name, "%s%.2d.%d", base_name, (i/8), (i%8)); + snprintf(name, max, "%.8s%.2d.%d", pci_name(dev->dev->persist->pdev), + i / 8, i % 8); } struct mlx4_port { diff --git a/drivers/infiniband/hw/mlx5/Makefile b/drivers/infiniband/hw/mlx5/Makefile index b8e4b15e2674..33f5adb14e4e 100644 --- a/drivers/infiniband/hw/mlx5/Makefile +++ b/drivers/infiniband/hw/mlx5/Makefile @@ -1,6 +1,8 @@ obj-$(CONFIG_MLX5_INFINIBAND) += mlx5_ib.o -mlx5_ib-y := main.o cq.o doorbell.o qp.o mem.o srq.o mr.o ah.o mad.o gsi.o ib_virt.o cmd.o cong.o +mlx5_ib-y := main.o cq.o doorbell.o qp.o mem.o srq_cmd.o \ + srq.o mr.o ah.o mad.o gsi.o ib_virt.o cmd.o \ + cong.o mlx5_ib-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += odp.o mlx5_ib-$(CONFIG_MLX5_ESWITCH) += ib_rep.o mlx5_ib-$(CONFIG_INFINIBAND_USER_ACCESS) += devx.o diff --git a/drivers/infiniband/hw/mlx5/ah.c b/drivers/infiniband/hw/mlx5/ah.c index ffd03bf1a71e..420ae0897333 100644 --- a/drivers/infiniband/hw/mlx5/ah.c +++ b/drivers/infiniband/hw/mlx5/ah.c @@ -72,7 +72,7 @@ static struct ib_ah *create_ib_ah(struct mlx5_ib_dev *dev, } struct ib_ah *mlx5_ib_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, - struct ib_udata *udata) + u32 flags, struct ib_udata *udata) { struct mlx5_ib_ah *ah; @@ -131,7 +131,7 @@ int mlx5_ib_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr) return 0; } -int mlx5_ib_destroy_ah(struct ib_ah *ah) +int mlx5_ib_destroy_ah(struct ib_ah *ah, u32 flags) { kfree(to_mah(ah)); return 0; diff --git a/drivers/infiniband/hw/mlx5/cmd.c b/drivers/infiniband/hw/mlx5/cmd.c index ca060a2e2b36..356bccc715ee 100644 --- a/drivers/infiniband/hw/mlx5/cmd.c +++ b/drivers/infiniband/hw/mlx5/cmd.c @@ -240,6 +240,7 @@ int mlx5_cmd_alloc_transport_domain(struct mlx5_core_dev *dev, u32 *tdn, MLX5_SET(alloc_transport_domain_in, in, opcode, MLX5_CMD_OP_ALLOC_TRANSPORT_DOMAIN); + MLX5_SET(alloc_transport_domain_in, in, uid, uid); err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); if (!err) @@ -257,6 +258,7 @@ void mlx5_cmd_dealloc_transport_domain(struct mlx5_core_dev *dev, u32 tdn, MLX5_SET(dealloc_transport_domain_in, in, opcode, MLX5_CMD_OP_DEALLOC_TRANSPORT_DOMAIN); + MLX5_SET(dealloc_transport_domain_in, in, uid, uid); MLX5_SET(dealloc_transport_domain_in, in, transport_domain, tdn); mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); } @@ -326,3 +328,20 @@ int 
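/*
 * The command builders in this file now stamp the caller's uid into each
 * mailbox command (the added MLX5_SET(..., uid, uid) lines) so firmware
 * can attribute, and later clean up, objects per DEVX user context;
 * mlx5_cmd_alloc_q_counter() below is added following the usual
 * MLX5_SET() / mlx5_cmd_exec() / MLX5_GET() pattern.
 */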
mlx5_cmd_xrcd_dealloc(struct mlx5_core_dev *dev, u32 xrcdn, u16 uid) MLX5_SET(dealloc_xrcd_in, in, uid, uid); return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); } + +int mlx5_cmd_alloc_q_counter(struct mlx5_core_dev *dev, u16 *counter_id, + u16 uid) +{ + u32 in[MLX5_ST_SZ_DW(alloc_q_counter_in)] = {0}; + u32 out[MLX5_ST_SZ_DW(alloc_q_counter_out)] = {0}; + int err; + + MLX5_SET(alloc_q_counter_in, in, opcode, MLX5_CMD_OP_ALLOC_Q_COUNTER); + MLX5_SET(alloc_q_counter_in, in, uid, uid); + + err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); + if (!err) + *counter_id = MLX5_GET(alloc_q_counter_out, out, + counter_set_id); + return err; +} diff --git a/drivers/infiniband/hw/mlx5/cmd.h b/drivers/infiniband/hw/mlx5/cmd.h index c03c56455534..1e76dc67a369 100644 --- a/drivers/infiniband/hw/mlx5/cmd.h +++ b/drivers/infiniband/hw/mlx5/cmd.h @@ -61,4 +61,6 @@ int mlx5_cmd_detach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn, u16 uid); int mlx5_cmd_xrcd_alloc(struct mlx5_core_dev *dev, u32 *xrcdn, u16 uid); int mlx5_cmd_xrcd_dealloc(struct mlx5_core_dev *dev, u32 xrcdn, u16 uid); +int mlx5_cmd_alloc_q_counter(struct mlx5_core_dev *dev, u16 *counter_id, + u16 uid); #endif /* MLX5_IB_CMD_H */ diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c index 7d769b5538b4..90f1b0bae5b5 100644 --- a/drivers/infiniband/hw/mlx5/cq.c +++ b/drivers/infiniband/hw/mlx5/cq.c @@ -35,6 +35,7 @@ #include #include #include "mlx5_ib.h" +#include "srq.h" static void mlx5_ib_cq_comp(struct mlx5_core_cq *cq) { @@ -81,7 +82,7 @@ static void *get_sw_cqe(struct mlx5_ib_cq *cq, int n) cqe64 = (cq->mcq.cqe_sz == 64) ? cqe : cqe + 64; - if (likely((cqe64->op_own) >> 4 != MLX5_CQE_INVALID) && + if (likely(get_cqe_opcode(cqe64) != MLX5_CQE_INVALID) && !((cqe64->op_own & MLX5_CQE_OWNER_MASK) ^ !!(n & (cq->ibcq.cqe + 1)))) { return cqe; } else { @@ -177,8 +178,7 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe, struct mlx5_core_srq *msrq = NULL; if (qp->ibqp.xrcd) { - msrq = mlx5_core_get_srq(dev->mdev, - be32_to_cpu(cqe->srqn)); + msrq = mlx5_cmd_get_srq(dev, be32_to_cpu(cqe->srqn)); srq = to_mibsrq(msrq); } else { srq = to_msrq(qp->ibqp.srq); @@ -197,7 +197,7 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe, } wc->byte_len = be32_to_cpu(cqe->byte_cnt); - switch (cqe->op_own >> 4) { + switch (get_cqe_opcode(cqe)) { case MLX5_CQE_RESP_WR_IMM: wc->opcode = IB_WC_RECV_RDMA_WITH_IMM; wc->wc_flags = IB_WC_WITH_IMM; @@ -330,67 +330,6 @@ static void mlx5_handle_error_cqe(struct mlx5_ib_dev *dev, dump_cqe(dev, cqe); } -static int is_atomic_response(struct mlx5_ib_qp *qp, uint16_t idx) -{ - /* TBD: waiting decision - */ - return 0; -} - -static void *mlx5_get_atomic_laddr(struct mlx5_ib_qp *qp, uint16_t idx) -{ - struct mlx5_wqe_data_seg *dpseg; - void *addr; - - dpseg = mlx5_get_send_wqe(qp, idx) + sizeof(struct mlx5_wqe_ctrl_seg) + - sizeof(struct mlx5_wqe_raddr_seg) + - sizeof(struct mlx5_wqe_atomic_seg); - addr = (void *)(unsigned long)be64_to_cpu(dpseg->addr); - return addr; -} - -static void handle_atomic(struct mlx5_ib_qp *qp, struct mlx5_cqe64 *cqe64, - uint16_t idx) -{ - void *addr; - int byte_count; - int i; - - if (!is_atomic_response(qp, idx)) - return; - - byte_count = be32_to_cpu(cqe64->byte_cnt); - addr = mlx5_get_atomic_laddr(qp, idx); - - if (byte_count == 4) { - *(uint32_t *)addr = be32_to_cpu(*((__be32 *)addr)); - } else { - for (i = 0; i < byte_count; i += 8) { - *(uint64_t *)addr = be64_to_cpu(*((__be64 *)addr)); - addr 
+= 8; - } - } - - return; -} - -static void handle_atomics(struct mlx5_ib_qp *qp, struct mlx5_cqe64 *cqe64, - u16 tail, u16 head) -{ - u16 idx; - - do { - idx = tail & (qp->sq.wqe_cnt - 1); - handle_atomic(qp, cqe64, idx); - if (idx == head) - break; - - tail = qp->sq.w_list[idx].next; - } while (1); - tail = qp->sq.w_list[idx].next; - qp->sq.last_poll = tail; -} - static void free_cq_buf(struct mlx5_ib_dev *dev, struct mlx5_ib_cq_buf *buf) { mlx5_frag_buf_free(dev->mdev, &buf->frag_buf); @@ -428,45 +367,15 @@ static void get_sig_err_item(struct mlx5_sig_err_cqe *cqe, item->key = be32_to_cpu(cqe->mkey); } -static void sw_send_comp(struct mlx5_ib_qp *qp, int num_entries, - struct ib_wc *wc, int *npolled) -{ - struct mlx5_ib_wq *wq; - unsigned int cur; - unsigned int idx; - int np; - int i; - - wq = &qp->sq; - cur = wq->head - wq->tail; - np = *npolled; - - if (cur == 0) - return; - - for (i = 0; i < cur && np < num_entries; i++) { - idx = wq->last_poll & (wq->wqe_cnt - 1); - wc->wr_id = wq->wrid[idx]; - wc->status = IB_WC_WR_FLUSH_ERR; - wc->vendor_err = MLX5_CQE_SYNDROME_WR_FLUSH_ERR; - wq->tail++; - np++; - wc->qp = &qp->ibqp; - wc++; - wq->last_poll = wq->w_list[idx].next; - } - *npolled = np; -} - -static void sw_recv_comp(struct mlx5_ib_qp *qp, int num_entries, - struct ib_wc *wc, int *npolled) +static void sw_comp(struct mlx5_ib_qp *qp, int num_entries, struct ib_wc *wc, + int *npolled, int is_send) { struct mlx5_ib_wq *wq; unsigned int cur; int np; int i; - wq = &qp->rq; + wq = (is_send) ? &qp->sq : &qp->rq; cur = wq->head - wq->tail; np = *npolled; @@ -493,13 +402,13 @@ static void mlx5_ib_poll_sw_comp(struct mlx5_ib_cq *cq, int num_entries, *npolled = 0; /* Find uncompleted WQEs belonging to that cq and return mmics ones */ list_for_each_entry(qp, &cq->list_send_qp, cq_send_list) { - sw_send_comp(qp, num_entries, wc + *npolled, npolled); + sw_comp(qp, num_entries, wc + *npolled, npolled, true); if (*npolled >= num_entries) return; } list_for_each_entry(qp, &cq->list_recv_qp, cq_recv_list) { - sw_recv_comp(qp, num_entries, wc + *npolled, npolled); + sw_comp(qp, num_entries, wc + *npolled, npolled, false); if (*npolled >= num_entries) return; } @@ -537,7 +446,7 @@ repoll: */ rmb(); - opcode = cqe64->op_own >> 4; + opcode = get_cqe_opcode(cqe64); if (unlikely(opcode == MLX5_CQE_RESIZE_CQ)) { if (likely(cq->resize_buf)) { free_cq_buf(dev, &cq->buf); @@ -567,7 +476,6 @@ repoll: wqe_ctr = be16_to_cpu(cqe64->wqe_counter); idx = wqe_ctr & (wq->wqe_cnt - 1); handle_good_req(wc, cqe64, wq, idx); - handle_atomics(*cur_qp, cqe64, wq->last_poll, idx); wc->wr_id = wq->wrid[idx]; wq->tail = wq->wqe_head[idx] + 1; wc->status = IB_WC_SUCCESS; @@ -1295,7 +1203,7 @@ static int copy_resize_cqes(struct mlx5_ib_cq *cq) return -EINVAL; } - while ((scqe64->op_own >> 4) != MLX5_CQE_RESIZE_CQ) { + while (get_cqe_opcode(scqe64) != MLX5_CQE_RESIZE_CQ) { dcqe = mlx5_frag_buf_get_wqe(&cq->resize_buf->fbc, (i + 1) & cq->resize_buf->nent); dcqe64 = dsize == 64 ? 
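/*
 * The open-coded "op_own >> 4" tests in this file are replaced with
 * get_cqe_opcode(). A sketch of what such a helper looks like (the real
 * definition lives in the mlx5 headers):
 *
 *	static inline u8 get_cqe_opcode(struct mlx5_cqe64 *cqe)
 *	{
 *		return cqe->op_own >> 4;
 *	}
 *
 * op_own packs the CQE opcode in the high nibble and ownership/validity
 * bits in the low nibble.
 */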
dcqe : dcqe + 64; diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c index 45c421c87100..5a588f3cfb1b 100644 --- a/drivers/infiniband/hw/mlx5/devx.c +++ b/drivers/infiniband/hw/mlx5/devx.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include "mlx5_ib.h" @@ -40,29 +41,32 @@ struct devx_umem_reg_cmd { u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; }; -static struct mlx5_ib_ucontext *devx_ufile2uctx(struct ib_uverbs_file *file) +static struct mlx5_ib_ucontext * +devx_ufile2uctx(const struct uverbs_attr_bundle *attrs) { - return to_mucontext(ib_uverbs_get_ucontext(file)); + return to_mucontext(ib_uverbs_get_ucontext(attrs)); } -int mlx5_ib_devx_create(struct mlx5_ib_dev *dev) +int mlx5_ib_devx_create(struct mlx5_ib_dev *dev, bool is_user) { u32 in[MLX5_ST_SZ_DW(create_uctx_in)] = {0}; u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; - u64 general_obj_types; - void *hdr; + void *uctx; int err; u16 uid; + u32 cap = 0; - hdr = MLX5_ADDR_OF(create_uctx_in, in, hdr); - - general_obj_types = MLX5_CAP_GEN_64(dev->mdev, general_obj_types); - if (!(general_obj_types & MLX5_GENERAL_OBJ_TYPES_CAP_UCTX) || - !(general_obj_types & MLX5_GENERAL_OBJ_TYPES_CAP_UMEM)) + /* 0 means not supported */ + if (!MLX5_CAP_GEN(dev->mdev, log_max_uctx)) return -EINVAL; - MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); - MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type, MLX5_OBJ_TYPE_UCTX); + uctx = MLX5_ADDR_OF(create_uctx_in, in, uctx); + if (is_user && capable(CAP_NET_RAW) && + (MLX5_CAP_GEN(dev->mdev, uctx_cap) & MLX5_UCTX_CAP_RAW_TX)) + cap |= MLX5_UCTX_CAP_RAW_TX; + + MLX5_SET(create_uctx_in, in, opcode, MLX5_CMD_OP_CREATE_UCTX); + MLX5_SET(uctx, uctx, cap, cap); err = mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); if (err) @@ -74,12 +78,11 @@ int mlx5_ib_devx_create(struct mlx5_ib_dev *dev) void mlx5_ib_devx_destroy(struct mlx5_ib_dev *dev, u16 uid) { - u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0}; + u32 in[MLX5_ST_SZ_DW(destroy_uctx_in)] = {0}; u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; - MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_DESTROY_GENERAL_OBJECT); - MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_OBJ_TYPE_UCTX); - MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, uid); + MLX5_SET(destroy_uctx_in, in, opcode, MLX5_CMD_OP_DESTROY_UCTX); + MLX5_SET(destroy_uctx_in, in, uid, uid); mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); } @@ -106,6 +109,21 @@ bool mlx5_ib_devx_is_flow_dest(void *obj, int *dest_id, int *dest_type) } } +bool mlx5_ib_devx_is_flow_counter(void *obj, u32 *counter_id) +{ + struct devx_obj *devx_obj = obj; + u16 opcode = MLX5_GET(general_obj_in_cmd_hdr, devx_obj->dinbox, opcode); + + if (opcode == MLX5_CMD_OP_DEALLOC_FLOW_COUNTER) { + *counter_id = MLX5_GET(dealloc_flow_counter_in, + devx_obj->dinbox, + flow_counter_id); + return true; + } + + return false; +} + /* * As the obj_id in the firmware is not globally unique the object type * must be considered upon checking for a valid object id. 
@@ -116,7 +134,7 @@ static u64 get_enc_obj_id(u16 opcode, u32 obj_id) return ((u64)opcode << 32) | obj_id; } -static int devx_is_valid_obj_id(struct devx_obj *obj, const void *in) +static u64 devx_get_obj_id(const void *in) { u16 opcode = MLX5_GET(general_obj_in_cmd_hdr, in, opcode); u64 obj_id; @@ -290,6 +308,8 @@ static int devx_is_valid_obj_id(struct devx_obj *obj, const void *in) MLX5_GET(query_dct_in, in, dctn)); break; case MLX5_CMD_OP_QUERY_XRQ: + case MLX5_CMD_OP_QUERY_XRQ_DC_PARAMS_ENTRY: + case MLX5_CMD_OP_QUERY_XRQ_ERROR_PARAMS: obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_XRQ, MLX5_GET(query_xrq_in, in, xrqn)); break; @@ -316,17 +336,107 @@ static int devx_is_valid_obj_id(struct devx_obj *obj, const void *in) MLX5_GET(drain_dct_in, in, dctn)); break; case MLX5_CMD_OP_ARM_XRQ: + case MLX5_CMD_OP_SET_XRQ_DC_PARAMS_ENTRY: obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_XRQ, MLX5_GET(arm_xrq_in, in, xrqn)); break; + case MLX5_CMD_OP_QUERY_PACKET_REFORMAT_CONTEXT: + obj_id = get_enc_obj_id + (MLX5_CMD_OP_ALLOC_PACKET_REFORMAT_CONTEXT, + MLX5_GET(query_packet_reformat_context_in, + in, packet_reformat_id)); + break; default: + obj_id = 0; + } + + return obj_id; +} + +static bool devx_is_valid_obj_id(struct ib_uobject *uobj, const void *in) +{ + u64 obj_id = devx_get_obj_id(in); + + if (!obj_id) return false; + + switch (uobj_get_object_id(uobj)) { + case UVERBS_OBJECT_CQ: + return get_enc_obj_id(MLX5_CMD_OP_CREATE_CQ, + to_mcq(uobj->object)->mcq.cqn) == + obj_id; + + case UVERBS_OBJECT_SRQ: + { + struct mlx5_core_srq *srq = &(to_msrq(uobj->object)->msrq); + struct mlx5_ib_dev *dev = to_mdev(uobj->context->device); + u16 opcode; + + switch (srq->common.res) { + case MLX5_RES_XSRQ: + opcode = MLX5_CMD_OP_CREATE_XRC_SRQ; + break; + case MLX5_RES_XRQ: + opcode = MLX5_CMD_OP_CREATE_XRQ; + break; + default: + if (!dev->mdev->issi) + opcode = MLX5_CMD_OP_CREATE_SRQ; + else + opcode = MLX5_CMD_OP_CREATE_RMP; + } + + return get_enc_obj_id(opcode, + to_msrq(uobj->object)->msrq.srqn) == + obj_id; } - if (obj_id == obj->obj_id) - return true; + case UVERBS_OBJECT_QP: + { + struct mlx5_ib_qp *qp = to_mqp(uobj->object); + enum ib_qp_type qp_type = qp->ibqp.qp_type; + + if (qp_type == IB_QPT_RAW_PACKET || + (qp->flags & MLX5_IB_QP_UNDERLAY)) { + struct mlx5_ib_raw_packet_qp *raw_packet_qp = + &qp->raw_packet_qp; + struct mlx5_ib_rq *rq = &raw_packet_qp->rq; + struct mlx5_ib_sq *sq = &raw_packet_qp->sq; + + return (get_enc_obj_id(MLX5_CMD_OP_CREATE_RQ, + rq->base.mqp.qpn) == obj_id || + get_enc_obj_id(MLX5_CMD_OP_CREATE_SQ, + sq->base.mqp.qpn) == obj_id || + get_enc_obj_id(MLX5_CMD_OP_CREATE_TIR, + rq->tirn) == obj_id || + get_enc_obj_id(MLX5_CMD_OP_CREATE_TIS, + sq->tisn) == obj_id); + } + + if (qp_type == MLX5_IB_QPT_DCT) + return get_enc_obj_id(MLX5_CMD_OP_CREATE_DCT, + qp->dct.mdct.mqp.qpn) == obj_id; + + return get_enc_obj_id(MLX5_CMD_OP_CREATE_QP, + qp->ibqp.qp_num) == obj_id; + } - return false; + case UVERBS_OBJECT_WQ: + return get_enc_obj_id(MLX5_CMD_OP_CREATE_RQ, + to_mrwq(uobj->object)->core_qp.qpn) == + obj_id; + + case UVERBS_OBJECT_RWQ_IND_TBL: + return get_enc_obj_id(MLX5_CMD_OP_CREATE_RQT, + to_mrwq_ind_table(uobj->object)->rqtn) == + obj_id; + + case MLX5_IB_OBJECT_DEVX_OBJ: + return ((struct devx_obj *)uobj->object)->obj_id == obj_id; + + default: + return false; + } } static void devx_set_umem_valid(const void *in) @@ -494,6 +604,7 @@ static bool devx_is_obj_modify_cmd(const void *in) case MLX5_CMD_OP_DRAIN_DCT: case MLX5_CMD_OP_ARM_DCT_FOR_KEY_VIOLATION: case MLX5_CMD_OP_ARM_XRQ: + case 
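/*
 * devx_is_valid_obj_id() above checks a command against the uobject it
 * targets: with get_enc_obj_id() producing ((u64)opcode << 32) | obj_id,
 * a modify/query on e.g. a CQ uobject only matches when the command's
 * encoded id equals get_enc_obj_id(MLX5_CMD_OP_CREATE_CQ, cqn). That
 * per-type check is what allows the MODIFY/QUERY methods to accept
 * UVERBS_IDR_ANY_OBJECT later in this patch.
 */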
MLX5_CMD_OP_SET_XRQ_DC_PARAMS_ENTRY: return true; case MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY: { @@ -535,6 +646,9 @@ static bool devx_is_obj_query_cmd(const void *in) case MLX5_CMD_OP_QUERY_XRC_SRQ: case MLX5_CMD_OP_QUERY_DCT: case MLX5_CMD_OP_QUERY_XRQ: + case MLX5_CMD_OP_QUERY_XRQ_DC_PARAMS_ENTRY: + case MLX5_CMD_OP_QUERY_XRQ_ERROR_PARAMS: + case MLX5_CMD_OP_QUERY_PACKET_REFORMAT_CONTEXT: return true; default: return false; @@ -572,15 +686,16 @@ static int devx_get_uid(struct mlx5_ib_ucontext *c, void *cmd_in) if (!c->devx_uid) return -EINVAL; - if (!capable(CAP_NET_RAW)) - return -EPERM; - return c->devx_uid; } static bool devx_is_general_cmd(void *in) { u16 opcode = MLX5_GET(general_obj_in_cmd_hdr, in, opcode); + if (opcode >= MLX5_CMD_OP_GENERAL_START && + opcode < MLX5_CMD_OP_GENERAL_END) + return true; + switch (opcode) { case MLX5_CMD_OP_QUERY_HCA_CAP: case MLX5_CMD_OP_QUERY_HCA_VPORT_CONTEXT: @@ -603,7 +718,7 @@ static bool devx_is_general_cmd(void *in) } static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct mlx5_ib_ucontext *c; struct mlx5_ib_dev *dev; @@ -616,7 +731,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)( MLX5_IB_ATTR_DEVX_QUERY_EQN_USER_VEC)) return -EFAULT; - c = devx_ufile2uctx(file); + c = devx_ufile2uctx(attrs); if (IS_ERR(c)) return PTR_ERR(c); dev = to_mdev(c->ibucontext.device); @@ -653,14 +768,14 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)( * queue or arm its CQ for event generation), no further harm is expected. */ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_UAR)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct mlx5_ib_ucontext *c; struct mlx5_ib_dev *dev; u32 user_idx; s32 dev_idx; - c = devx_ufile2uctx(file); + c = devx_ufile2uctx(attrs); if (IS_ERR(c)) return PTR_ERR(c); dev = to_mdev(c->ibucontext.device); @@ -681,7 +796,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_UAR)( } static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OTHER)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct mlx5_ib_ucontext *c; struct mlx5_ib_dev *dev; @@ -693,7 +808,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OTHER)( int err; int uid; - c = devx_ufile2uctx(file); + c = devx_ufile2uctx(attrs); if (IS_ERR(c)) return PTR_ERR(c); dev = to_mdev(c->ibucontext.device); @@ -740,6 +855,10 @@ static void devx_obj_build_destroy_cmd(void *in, void *out, void *din, MLX5_SET(general_obj_in_cmd_hdr, din, obj_type, obj_type); break; + case MLX5_CMD_OP_CREATE_UMEM: + MLX5_SET(general_obj_in_cmd_hdr, din, opcode, + MLX5_CMD_OP_DESTROY_UMEM); + break; case MLX5_CMD_OP_CREATE_MKEY: MLX5_SET(general_obj_in_cmd_hdr, din, opcode, MLX5_CMD_OP_DESTROY_MKEY); break; @@ -908,7 +1027,7 @@ static int devx_obj_cleanup(struct ib_uobject *uobject, } static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { void *cmd_in = uverbs_attr_get_alloced_ptr(attrs, MLX5_IB_ATTR_DEVX_OBJ_CREATE_CMD_IN); int cmd_out_len = uverbs_attr_get_len(attrs, @@ -970,7 +1089,7 @@ obj_free: } static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { void *cmd_in = uverbs_attr_get_alloced_ptr(attrs, MLX5_IB_ATTR_DEVX_OBJ_MODIFY_CMD_IN); int cmd_out_len = 
uverbs_attr_get_len(attrs, @@ -978,7 +1097,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)( struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, MLX5_IB_ATTR_DEVX_OBJ_MODIFY_HANDLE); struct mlx5_ib_ucontext *c = to_mucontext(uobj->context); - struct devx_obj *obj = uobj->object; + struct mlx5_ib_dev *mdev = to_mdev(uobj->context->device); void *cmd_out; int err; int uid; @@ -990,7 +1109,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)( if (!devx_is_obj_modify_cmd(cmd_in)) return -EINVAL; - if (!devx_is_valid_obj_id(obj, cmd_in)) + if (!devx_is_valid_obj_id(uobj, cmd_in)) return -EINVAL; cmd_out = uverbs_zalloc(attrs, cmd_out_len); @@ -1000,7 +1119,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)( MLX5_SET(general_obj_in_cmd_hdr, cmd_in, uid, uid); devx_set_umem_valid(cmd_in); - err = mlx5_cmd_exec(obj->mdev, cmd_in, + err = mlx5_cmd_exec(mdev->mdev, cmd_in, uverbs_attr_get_len(attrs, MLX5_IB_ATTR_DEVX_OBJ_MODIFY_CMD_IN), cmd_out, cmd_out_len); if (err) @@ -1011,7 +1130,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)( } static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_QUERY)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { void *cmd_in = uverbs_attr_get_alloced_ptr(attrs, MLX5_IB_ATTR_DEVX_OBJ_QUERY_CMD_IN); int cmd_out_len = uverbs_attr_get_len(attrs, @@ -1019,10 +1138,10 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_QUERY)( struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, MLX5_IB_ATTR_DEVX_OBJ_QUERY_HANDLE); struct mlx5_ib_ucontext *c = to_mucontext(uobj->context); - struct devx_obj *obj = uobj->object; void *cmd_out; int err; int uid; + struct mlx5_ib_dev *mdev = to_mdev(uobj->context->device); uid = devx_get_uid(c, cmd_in); if (uid < 0) @@ -1031,7 +1150,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_QUERY)( if (!devx_is_obj_query_cmd(cmd_in)) return -EINVAL; - if (!devx_is_valid_obj_id(obj, cmd_in)) + if (!devx_is_valid_obj_id(uobj, cmd_in)) return -EINVAL; cmd_out = uverbs_zalloc(attrs, cmd_out_len); @@ -1039,7 +1158,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_QUERY)( return PTR_ERR(cmd_out); MLX5_SET(general_obj_in_cmd_hdr, cmd_in, uid, uid); - err = mlx5_cmd_exec(obj->mdev, cmd_in, + err = mlx5_cmd_exec(mdev->mdev, cmd_in, uverbs_attr_get_len(attrs, MLX5_IB_ATTR_DEVX_OBJ_QUERY_CMD_IN), cmd_out, cmd_out_len); if (err) @@ -1115,8 +1234,7 @@ static void devx_umem_reg_cmd_build(struct mlx5_ib_dev *dev, umem = MLX5_ADDR_OF(create_umem_in, cmd->in, umem); mtt = (__be64 *)MLX5_ADDR_OF(umem, umem, mtt); - MLX5_SET(general_obj_in_cmd_hdr, cmd->in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); - MLX5_SET(general_obj_in_cmd_hdr, cmd->in, obj_type, MLX5_OBJ_TYPE_UMEM); + MLX5_SET(create_umem_in, cmd->in, opcode, MLX5_CMD_OP_CREATE_UMEM); MLX5_SET64(umem, umem, num_of_mtt, obj->ncont); MLX5_SET(umem, umem, log_page_size, obj->page_shift - MLX5_ADAPTER_PAGE_SHIFT); @@ -1127,7 +1245,7 @@ static void devx_umem_reg_cmd_build(struct mlx5_ib_dev *dev, } static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_UMEM_REG)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct devx_umem_reg_cmd cmd; struct devx_umem *obj; @@ -1141,9 +1259,6 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_UMEM_REG)( if (!c->devx_uid) return -EINVAL; - if (!capable(CAP_NET_RAW)) - return -EPERM; - obj = kzalloc(sizeof(struct devx_umem), GFP_KERNEL); if (!obj) return -ENOMEM; @@ -1158,7 +1273,7 @@ static int 
UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_UMEM_REG)( devx_umem_reg_cmd_build(dev, obj, &cmd); - MLX5_SET(general_obj_in_cmd_hdr, cmd.in, uid, c->devx_uid); + MLX5_SET(create_umem_in, cmd.in, uid, c->devx_uid); err = mlx5_cmd_exec(dev->mdev, cmd.in, cmd.inlen, cmd.out, sizeof(cmd.out)); if (err) @@ -1279,7 +1394,7 @@ DECLARE_UVERBS_NAMED_METHOD_DESTROY( DECLARE_UVERBS_NAMED_METHOD( MLX5_IB_METHOD_DEVX_OBJ_MODIFY, UVERBS_ATTR_IDR(MLX5_IB_ATTR_DEVX_OBJ_MODIFY_HANDLE, - MLX5_IB_OBJECT_DEVX_OBJ, + UVERBS_IDR_ANY_OBJECT, UVERBS_ACCESS_WRITE, UA_MANDATORY), UVERBS_ATTR_PTR_IN( @@ -1295,7 +1410,7 @@ DECLARE_UVERBS_NAMED_METHOD( DECLARE_UVERBS_NAMED_METHOD( MLX5_IB_METHOD_DEVX_OBJ_QUERY, UVERBS_ATTR_IDR(MLX5_IB_ATTR_DEVX_OBJ_QUERY_HANDLE, - MLX5_IB_OBJECT_DEVX_OBJ, + UVERBS_IDR_ANY_OBJECT, UVERBS_ACCESS_READ, UA_MANDATORY), UVERBS_ATTR_PTR_IN( @@ -1325,12 +1440,22 @@ DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_DEVX_UMEM, &UVERBS_METHOD(MLX5_IB_METHOD_DEVX_UMEM_REG), &UVERBS_METHOD(MLX5_IB_METHOD_DEVX_UMEM_DEREG)); -DECLARE_UVERBS_OBJECT_TREE(devx_objects, - &UVERBS_OBJECT(MLX5_IB_OBJECT_DEVX), - &UVERBS_OBJECT(MLX5_IB_OBJECT_DEVX_OBJ), - &UVERBS_OBJECT(MLX5_IB_OBJECT_DEVX_UMEM)); - -const struct uverbs_object_tree_def *mlx5_ib_get_devx_tree(void) +static bool devx_is_supported(struct ib_device *device) { - return &devx_objects; + struct mlx5_ib_dev *dev = to_mdev(device); + + return !dev->rep && MLX5_CAP_GEN(dev->mdev, log_max_uctx); } + +const struct uapi_definition mlx5_ib_devx_defs[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED( + MLX5_IB_OBJECT_DEVX, + UAPI_DEF_IS_OBJ_SUPPORTED(devx_is_supported)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED( + MLX5_IB_OBJECT_DEVX_OBJ, + UAPI_DEF_IS_OBJ_SUPPORTED(devx_is_supported)), + UAPI_DEF_CHAIN_OBJ_TREE_NAMED( + MLX5_IB_OBJECT_DEVX_UMEM, + UAPI_DEF_IS_OBJ_SUPPORTED(devx_is_supported)), + {}, +}; diff --git a/drivers/infiniband/hw/mlx5/flow.c b/drivers/infiniband/hw/mlx5/flow.c index f86cdcafdafc..e8a1e4498e3f 100644 --- a/drivers/infiniband/hw/mlx5/flow.c +++ b/drivers/infiniband/hw/mlx5/flow.c @@ -60,7 +60,7 @@ static const struct uverbs_attr_spec mlx5_ib_flow_type[] = { #define MLX5_IB_CREATE_FLOW_MAX_FLOW_ACTIONS 2 static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct mlx5_flow_act flow_act = {.flow_tag = MLX5_FS_DEFAULT_FLOW_TAG}; struct mlx5_ib_flow_handler *flow_handler; @@ -77,6 +77,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)( uverbs_attr_get_uobject(attrs, MLX5_IB_ATTR_CREATE_FLOW_HANDLE); struct mlx5_ib_dev *dev = to_mdev(uobj->context->device); int len, ret, i; + u32 counter_id = 0; if (!capable(CAP_NET_RAW)) return -EPERM; @@ -92,10 +93,6 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)( ((dest_devx && dest_qp) || (!dest_devx && !dest_qp))) return -EINVAL; - if (fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS && - (dest_devx || dest_qp)) - return -EINVAL; - if (dest_devx) { devx_obj = uverbs_attr_get_obj( attrs, MLX5_IB_ATTR_CREATE_FLOW_DEST_DEVX); @@ -128,8 +125,19 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)( dest_type = MLX5_FLOW_DESTINATION_TYPE_PORT; } - if (dev->rep) - return -ENOTSUPP; + len = uverbs_attr_get_uobjs_arr(attrs, + MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX, &arr_flow_actions); + if (len) { + devx_obj = arr_flow_actions[0]->object; + + if (!mlx5_ib_devx_is_flow_counter(devx_obj, &counter_id)) + return -EINVAL; + flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT; + } + + if (dest_type == 
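
/*
 * A loose sketch of the pattern behind mlx5_ib_devx_defs[] above,
 * assuming the uapi core walks a zero-terminated table and skips any
 * entry whose is_supported() predicate fails. The struct layout, the
 * predicate signature, and the capability flag here are illustrative
 * stand-ins, not the real uapi_definition machinery.
 */
#include <stdbool.h>
#include <stdio.h>

struct uapi_def {
	const char *name;
	bool (*is_supported)(void *ctx);
};

static bool devx_supported(void *ctx)
{
	return *(bool *)ctx;	/* stand-in for !dev->rep && log_max_uctx */
}

/* Zero-terminated, like mlx5_ib_devx_defs[]. */
static const struct uapi_def defs[] = {
	{ "DEVX", devx_supported },
	{ "DEVX_OBJ", devx_supported },
	{ "DEVX_UMEM", devx_supported },
	{ 0 },
};

int main(void)
{
	bool cap = true;

	for (const struct uapi_def *d = defs; d->name; d++)
		if (d->is_supported(&cap))
			printf("register %s\n", d->name);
	return 0;
}
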
MLX5_FLOW_DESTINATION_TYPE_TIR && + fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS) + return -EINVAL; cmd_in = uverbs_attr_get_alloced_ptr( attrs, MLX5_IB_ATTR_CREATE_FLOW_MATCH_VALUE); @@ -164,6 +172,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)( } flow_handler = mlx5_ib_raw_fs_rule_add(dev, fs_matcher, &flow_act, + counter_id, cmd_in, inlen, dest_id, dest_type); if (IS_ERR(flow_handler)) { @@ -194,7 +203,7 @@ static int flow_matcher_cleanup(struct ib_uobject *uobject, } static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_MATCHER_CREATE)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) + struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject( attrs, MLX5_IB_ATTR_FLOW_MATCHER_CREATE_HANDLE); @@ -313,7 +322,6 @@ static bool mlx5_ib_modify_header_supported(struct mlx5_ib_dev *dev) } static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_ACTION_CREATE_MODIFY_HEADER)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject( @@ -321,9 +329,8 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_ACTION_CREATE_MODIFY_HEADER)( struct mlx5_ib_dev *mdev = to_mdev(uobj->context->device); enum mlx5_ib_uapi_flow_table_type ft_type; struct ib_flow_action *action; - size_t num_actions; + int num_actions; void *in; - int len; int ret; if (!mlx5_ib_modify_header_supported(mdev)) @@ -331,18 +338,17 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_ACTION_CREATE_MODIFY_HEADER)( in = uverbs_attr_get_alloced_ptr(attrs, MLX5_IB_ATTR_CREATE_MODIFY_HEADER_ACTIONS_PRM); - len = uverbs_attr_get_len(attrs, - MLX5_IB_ATTR_CREATE_MODIFY_HEADER_ACTIONS_PRM); - if (len % MLX5_UN_SZ_BYTES(set_action_in_add_action_in_auto)) - return -EINVAL; + num_actions = uverbs_attr_ptr_get_array_size( + attrs, MLX5_IB_ATTR_CREATE_MODIFY_HEADER_ACTIONS_PRM, + MLX5_UN_SZ_BYTES(set_action_in_add_action_in_auto)); + if (num_actions < 0) + return num_actions; ret = uverbs_get_const(&ft_type, attrs, MLX5_IB_ATTR_CREATE_MODIFY_HEADER_FT_TYPE); if (ret) return ret; - - num_actions = len / MLX5_UN_SZ_BYTES(set_action_in_add_action_in_auto), action = mlx5_ib_create_modify_header(mdev, ft_type, num_actions, in); if (IS_ERR(action)) return PTR_ERR(action); @@ -435,7 +441,6 @@ static int mlx5_ib_flow_action_create_packet_reformat_ctx( } static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_ACTION_CREATE_PACKET_REFORMAT)( - struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, @@ -526,7 +531,11 @@ DECLARE_UVERBS_NAMED_METHOD( UA_OPTIONAL), UVERBS_ATTR_PTR_IN(MLX5_IB_ATTR_CREATE_FLOW_TAG, UVERBS_ATTR_TYPE(u32), - UA_OPTIONAL)); + UA_OPTIONAL), + UVERBS_ATTR_IDRS_ARR(MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX, + MLX5_IB_OBJECT_DEVX_OBJ, + UVERBS_ACCESS_READ, 1, 1, + UA_OPTIONAL)); DECLARE_UVERBS_NAMED_METHOD_DESTROY( MLX5_IB_METHOD_DESTROY_FLOW, @@ -610,16 +619,20 @@ DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_FLOW_MATCHER, &UVERBS_METHOD(MLX5_IB_METHOD_FLOW_MATCHER_CREATE), &UVERBS_METHOD(MLX5_IB_METHOD_FLOW_MATCHER_DESTROY)); -DECLARE_UVERBS_OBJECT_TREE(flow_objects, - &UVERBS_OBJECT(MLX5_IB_OBJECT_FLOW_MATCHER)); - -int mlx5_ib_get_flow_trees(const struct uverbs_object_tree_def **root) +static bool flow_is_supported(struct ib_device *device) { - int i = 0; - - root[i++] = &flow_objects; - root[i++] = &mlx5_ib_fs; - root[i++] = &mlx5_ib_flow_actions; - - return i; + return !to_mdev(device)->rep; } + +const struct uapi_definition mlx5_ib_flow_defs[] = { + UAPI_DEF_CHAIN_OBJ_TREE_NAMED( + 
MLX5_IB_OBJECT_FLOW_MATCHER, + UAPI_DEF_IS_OBJ_SUPPORTED(flow_is_supported)), + UAPI_DEF_CHAIN_OBJ_TREE( + UVERBS_OBJECT_FLOW, + &mlx5_ib_fs, + UAPI_DEF_IS_OBJ_SUPPORTED(flow_is_supported)), + UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_FLOW_ACTION, + &mlx5_ib_flow_actions), + {}, +}; diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c index 584ff2ea7810..46a9ddc8ca56 100644 --- a/drivers/infiniband/hw/mlx5/ib_rep.c +++ b/drivers/infiniband/hw/mlx5/ib_rep.c @@ -4,6 +4,7 @@ */ #include "ib_rep.h" +#include "srq.h" static const struct mlx5_ib_profile rep_profile = { STAGE_CREATE(MLX5_IB_STAGE_INIT, @@ -21,6 +22,9 @@ static const struct mlx5_ib_profile rep_profile = { STAGE_CREATE(MLX5_IB_STAGE_ROCE, mlx5_ib_stage_rep_roce_init, mlx5_ib_stage_rep_roce_cleanup), + STAGE_CREATE(MLX5_IB_STAGE_SRQ, + mlx5_init_srq_table, + mlx5_cleanup_srq_table), STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES, mlx5_ib_stage_dev_res_init, mlx5_ib_stage_dev_res_cleanup), @@ -44,13 +48,21 @@ static const struct mlx5_ib_profile rep_profile = { static int mlx5_ib_nic_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) { + struct mlx5_ib_dev *ibdev; + + ibdev = mlx5_ib_rep_to_dev(rep); + if (!__mlx5_ib_add(ibdev, ibdev->profile)) + return -EINVAL; return 0; } static void mlx5_ib_nic_rep_unload(struct mlx5_eswitch_rep *rep) { - rep->rep_if[REP_IB].priv = NULL; + struct mlx5_ib_dev *ibdev; + + ibdev = mlx5_ib_rep_to_dev(rep); + __mlx5_ib_remove(ibdev, ibdev->profile, MLX5_IB_STAGE_MAX); } static int @@ -85,6 +97,7 @@ mlx5_ib_vport_rep_unload(struct mlx5_eswitch_rep *rep) dev = mlx5_ib_rep_to_dev(rep); __mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX); rep->rep_if[REP_IB].priv = NULL; + ib_dealloc_device(&dev->ib_dev); } static void *mlx5_ib_vport_get_proto_dev(struct mlx5_eswitch_rep *rep) diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c index 32a9e9228b13..558638468edb 100644 --- a/drivers/infiniband/hw/mlx5/mad.c +++ b/drivers/infiniband/hw/mlx5/mad.c @@ -526,11 +526,6 @@ int mlx5_query_mad_ifc_port(struct ib_device *ibdev, u8 port, int ext_active_speed; int err = -ENOMEM; - if (port < 1 || port > dev->num_ports) { - mlx5_ib_warn(dev, "invalid port number %d\n", port); - return -EINVAL; - } - in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL); out_mad = kmalloc(sizeof(*out_mad), GFP_KERNEL); if (!in_mad || !out_mad) @@ -568,6 +563,14 @@ int mlx5_query_mad_ifc_port(struct ib_device *ibdev, u8 port, props->max_vl_num = out_mad->data[37] >> 4; props->init_type_reply = out_mad->data[41] >> 4; + if (props->port_cap_flags & IB_PORT_CAP_MASK2_SUP) { + props->port_cap_flags2 = + be16_to_cpup((__be16 *)(out_mad->data + 60)); + + if (props->port_cap_flags2 & IB_PORT_LINK_WIDTH_2X_SUP) + props->active_width = out_mad->data[31] & 0x1f; + } + /* Check if extended speeds (EDR/FDR/...) 
are supported */ if (props->port_cap_flags & IB_PORT_EXTENDED_SPEEDS_SUP) { ext_active_speed = out_mad->data[62] >> 4; @@ -579,6 +582,11 @@ int mlx5_query_mad_ifc_port(struct ib_device *ibdev, u8 port, case 2: props->active_speed = 32; /* EDR */ break; + case 4: + if (props->port_cap_flags & IB_PORT_CAP_MASK2_SUP && + props->port_cap_flags2 & IB_PORT_LINK_SPEED_HDR_SUP) + props->active_speed = IB_SPEED_HDR; + break; } } diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 3569fda07e07..94fe253d4956 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -60,6 +60,7 @@ #include "mlx5_ib.h" #include "ib_rep.h" #include "cmd.h" +#include "srq.h" #include #include #include @@ -82,10 +83,13 @@ static char mlx5_version[] = struct mlx5_ib_event_work { struct work_struct work; - struct mlx5_core_dev *dev; - void *context; - enum mlx5_dev_event event; - unsigned long param; + union { + struct mlx5_ib_dev *dev; + struct mlx5_ib_multiport_info *mpi; + }; + bool is_slave; + unsigned int event; + void *param; }; enum { @@ -146,7 +150,7 @@ static int get_port_state(struct ib_device *ibdev, int ret; memset(&attr, 0, sizeof(attr)); - ret = ibdev->query_port(ibdev, port_num, &attr); + ret = ibdev->ops.query_port(ibdev, port_num, &attr); if (!ret) *state = attr.state; return ret; @@ -168,7 +172,6 @@ static int mlx5_netdev_event(struct notifier_block *this, switch (event) { case NETDEV_REGISTER: - case NETDEV_UNREGISTER: write_lock(&roce->netdev_lock); if (ibdev->rep) { struct mlx5_eswitch *esw = ibdev->mdev->priv.eswitch; @@ -177,15 +180,20 @@ static int mlx5_netdev_event(struct notifier_block *this, rep_ndev = mlx5_ib_get_rep_netdev(esw, ibdev->rep->vport); if (rep_ndev == ndev) - roce->netdev = (event == NETDEV_UNREGISTER) ? - NULL : ndev; + roce->netdev = ndev; } else if (ndev->dev.parent == &mdev->pdev->dev) { - roce->netdev = (event == NETDEV_UNREGISTER) ? 
- NULL : ndev; + roce->netdev = ndev; } write_unlock(&roce->netdev_lock); break; + case NETDEV_UNREGISTER: + write_lock(&roce->netdev_lock); + if (roce->netdev == ndev) + roce->netdev = NULL; + write_unlock(&roce->netdev_lock); + break; + case NETDEV_CHANGE: case NETDEV_UP: case NETDEV_DOWN: { @@ -441,7 +449,7 @@ static int mlx5_query_port_roce(struct ib_device *device, u8 port_num, if (!ndev) goto out; - if (mlx5_lag_is_active(dev->mdev)) { + if (dev->lag_active) { rcu_read_lock(); upper = netdev_master_upper_dev_get_rcu(ndev); if (upper) { @@ -1014,6 +1022,9 @@ static int mlx5_ib_query_device(struct ib_device *ibdev, if (MLX5_CAP_GEN(mdev, cqe_128_always)) resp.flags |= MLX5_IB_QUERY_DEV_RESP_FLAGS_CQE_128B_PAD; + if (MLX5_CAP_GEN(mdev, qp_packet_based)) + resp.flags |= + MLX5_IB_QUERY_DEV_RESP_PACKET_BASED_CREDIT_MODE; } if (field_avail(typeof(resp), sw_parsing_caps, @@ -1101,6 +1112,8 @@ static void translate_active_width(struct ib_device *ibdev, u8 active_width, if (active_width & MLX5_IB_WIDTH_1X) *ib_width = IB_WIDTH_1X; + else if (active_width & MLX5_IB_WIDTH_2X) + *ib_width = IB_WIDTH_2X; else if (active_width & MLX5_IB_WIDTH_4X) *ib_width = IB_WIDTH_4X; else if (active_width & MLX5_IB_WIDTH_8X) @@ -1216,6 +1229,9 @@ static int mlx5_query_hca_port(struct ib_device *ibdev, u8 port, props->subnet_timeout = rep->subnet_timeout; props->init_type_reply = rep->init_type_reply; + if (props->port_cap_flags & IB_PORT_CAP_MASK2_SUP) + props->port_cap_flags2 = rep->cap_mask2; + err = mlx5_query_port_link_width_oper(mdev, &ib_link_width_oper, port); if (err) goto out; @@ -1752,7 +1768,7 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev, #endif if (req.flags & MLX5_IB_ALLOC_UCTX_DEVX) { - err = mlx5_ib_devx_create(dev); + err = mlx5_ib_devx_create(dev, true); if (err < 0) goto out_uars; context->devx_uid = err; @@ -1844,7 +1860,7 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev, context->lib_caps = req.lib_caps; print_lib_caps(dev, context->lib_caps); - if (mlx5_lag_is_active(dev->mdev)) { + if (dev->lag_active) { u8 port = mlx5_core_native_port_num(dev->mdev); atomic_set(&context->tx_port_affinity, @@ -2669,11 +2685,11 @@ static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c, ntohs(ib_spec->gre.val.protocol)); memcpy(MLX5_ADDR_OF(fte_match_set_misc, misc_params_c, - gre_key_h), + gre_key.nvgre.hi), &ib_spec->gre.mask.key, sizeof(ib_spec->gre.mask.key)); memcpy(MLX5_ADDR_OF(fte_match_set_misc, misc_params_v, - gre_key_h), + gre_key.nvgre.hi), &ib_spec->gre.val.key, sizeof(ib_spec->gre.val.key)); break; @@ -3706,7 +3722,8 @@ _create_raw_flow_rule(struct mlx5_ib_dev *dev, struct mlx5_flow_destination *dst, struct mlx5_ib_flow_matcher *fs_matcher, struct mlx5_flow_act *flow_act, - void *cmd_in, int inlen) + void *cmd_in, int inlen, + int dst_num) { struct mlx5_ib_flow_handler *handler; struct mlx5_flow_spec *spec; @@ -3728,7 +3745,7 @@ _create_raw_flow_rule(struct mlx5_ib_dev *dev, spec->match_criteria_enable = fs_matcher->match_criteria_enable; handler->rule = mlx5_add_flow_rules(ft, spec, - flow_act, dst, 1); + flow_act, dst, dst_num); if (IS_ERR(handler->rule)) { err = PTR_ERR(handler->rule); @@ -3791,12 +3808,14 @@ struct mlx5_ib_flow_handler * mlx5_ib_raw_fs_rule_add(struct mlx5_ib_dev *dev, struct mlx5_ib_flow_matcher *fs_matcher, struct mlx5_flow_act *flow_act, + u32 counter_id, void *cmd_in, int inlen, int dest_id, int dest_type) { struct mlx5_flow_destination *dst; struct mlx5_ib_flow_prio *ft_prio; struct 
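
/*
 * A minimal sketch of the NETDEV_UNREGISTER fix in the hunk above: the
 * cached RoCE netdev is cleared only when it is the netdev actually
 * going away, so an unrelated unregister can no longer wipe the cache.
 * Locking and the notifier plumbing are elided; types are stand-ins.
 */
#include <stdio.h>

struct roce { void *netdev; };

static void on_unregister(struct roce *roce, void *ndev)
{
	if (roce->netdev == ndev)
		roce->netdev = NULL;
}

int main(void)
{
	int a, b;
	struct roce r = { &a };

	on_unregister(&r, &b);	/* unrelated netdev: cache kept */
	printf("%d\n", r.netdev == &a);
	on_unregister(&r, &a);	/* matching netdev: cache cleared */
	printf("%d\n", r.netdev == NULL);
	return 0;
}
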
mlx5_ib_flow_handler *handler; + int dst_num = 0; bool mcast; int err; @@ -3806,7 +3825,7 @@ mlx5_ib_raw_fs_rule_add(struct mlx5_ib_dev *dev, if (fs_matcher->priority > MLX5_IB_FLOW_LAST_PRIO) return ERR_PTR(-ENOMEM); - dst = kzalloc(sizeof(*dst), GFP_KERNEL); + dst = kzalloc(sizeof(*dst) * 2, GFP_KERNEL); if (!dst) return ERR_PTR(-ENOMEM); @@ -3820,20 +3839,28 @@ mlx5_ib_raw_fs_rule_add(struct mlx5_ib_dev *dev, } if (dest_type == MLX5_FLOW_DESTINATION_TYPE_TIR) { - dst->type = dest_type; - dst->tir_num = dest_id; + dst[dst_num].type = dest_type; + dst[dst_num].tir_num = dest_id; flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; } else if (dest_type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE) { - dst->type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM; - dst->ft_num = dest_id; + dst[dst_num].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM; + dst[dst_num].ft_num = dest_id; flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; } else { - dst->type = MLX5_FLOW_DESTINATION_TYPE_PORT; + dst[dst_num].type = MLX5_FLOW_DESTINATION_TYPE_PORT; flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_ALLOW; } + dst_num++; + + if (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) { + dst[dst_num].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER; + dst[dst_num].counter_id = counter_id; + dst_num++; + } + handler = _create_raw_flow_rule(dev, ft_prio, dst, fs_matcher, flow_act, - cmd_in, inlen); + cmd_in, inlen, dst_num); if (IS_ERR(handler)) { err = PTR_ERR(handler); @@ -4226,6 +4253,63 @@ static void delay_drop_handler(struct work_struct *work) mutex_unlock(&delay_drop->lock); } +static void handle_general_event(struct mlx5_ib_dev *ibdev, struct mlx5_eqe *eqe, + struct ib_event *ibev) +{ + switch (eqe->sub_type) { + case MLX5_GENERAL_SUBTYPE_DELAY_DROP_TIMEOUT: + schedule_work(&ibdev->delay_drop.delay_drop_work); + break; + default: /* do nothing */ + return; + } +} + +static int handle_port_change(struct mlx5_ib_dev *ibdev, struct mlx5_eqe *eqe, + struct ib_event *ibev) +{ + u8 port = (eqe->data.port.port >> 4) & 0xf; + + ibev->element.port_num = port; + + switch (eqe->sub_type) { + case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE: + case MLX5_PORT_CHANGE_SUBTYPE_DOWN: + case MLX5_PORT_CHANGE_SUBTYPE_INITIALIZED: + /* In RoCE, port up/down events are handled in + * mlx5_netdev_event(). + */ + if (mlx5_ib_port_link_layer(&ibdev->ib_dev, port) == + IB_LINK_LAYER_ETHERNET) + return -EINVAL; + + ibev->event = (eqe->sub_type == MLX5_PORT_CHANGE_SUBTYPE_ACTIVE) ? 
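
/*
 * A sketch of the destination-array construction in the hunk above:
 * the forwarding destination always occupies slot 0, and when the
 * COUNT action is set a flow counter is appended as a second
 * destination, with dst_num recording how many entries are valid.
 * The enum, ids, and the two-slot array mirror the kzalloc(sizeof(*dst) * 2)
 * change; all values are stand-ins.
 */
#include <stdio.h>

enum dst_type { DST_TIR, DST_COUNTER };

struct dst { enum dst_type type; unsigned int id; };

int main(void)
{
	struct dst dst[2];
	int dst_num = 0;
	unsigned int counter_id = 0x10;	/* stand-in counter object id */
	int with_counter = 1;

	dst[dst_num].type = DST_TIR;
	dst[dst_num].id = 0x2a;		/* stand-in TIR number */
	dst_num++;

	if (with_counter) {
		dst[dst_num].type = DST_COUNTER;
		dst[dst_num].id = counter_id;
		dst_num++;
	}

	printf("%d destinations\n", dst_num);
	return 0;
}
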
+ IB_EVENT_PORT_ACTIVE : IB_EVENT_PORT_ERR; + break; + + case MLX5_PORT_CHANGE_SUBTYPE_LID: + ibev->event = IB_EVENT_LID_CHANGE; + break; + + case MLX5_PORT_CHANGE_SUBTYPE_PKEY: + ibev->event = IB_EVENT_PKEY_CHANGE; + schedule_work(&ibdev->devr.ports[port - 1].pkey_change_work); + break; + + case MLX5_PORT_CHANGE_SUBTYPE_GUID: + ibev->event = IB_EVENT_GID_CHANGE; + break; + + case MLX5_PORT_CHANGE_SUBTYPE_CLIENT_REREG: + ibev->event = IB_EVENT_CLIENT_REREGISTER; + break; + default: + return -EINVAL; + } + + return 0; +} + static void mlx5_ib_handle_event(struct work_struct *_work) { struct mlx5_ib_event_work *work = @@ -4233,65 +4317,37 @@ static void mlx5_ib_handle_event(struct work_struct *_work) struct mlx5_ib_dev *ibdev; struct ib_event ibev; bool fatal = false; - u8 port = (u8)work->param; - if (mlx5_core_is_mp_slave(work->dev)) { - ibdev = mlx5_ib_get_ibdev_from_mpi(work->context); + if (work->is_slave) { + ibdev = mlx5_ib_get_ibdev_from_mpi(work->mpi); if (!ibdev) goto out; } else { - ibdev = work->context; + ibdev = work->dev; } switch (work->event) { case MLX5_DEV_EVENT_SYS_ERROR: ibev.event = IB_EVENT_DEVICE_FATAL; mlx5_ib_handle_internal_error(ibdev); + ibev.element.port_num = (u8)(unsigned long)work->param; fatal = true; break; - - case MLX5_DEV_EVENT_PORT_UP: - case MLX5_DEV_EVENT_PORT_DOWN: - case MLX5_DEV_EVENT_PORT_INITIALIZED: - /* In RoCE, port up/down events are handled in - * mlx5_netdev_event(). - */ - if (mlx5_ib_port_link_layer(&ibdev->ib_dev, port) == - IB_LINK_LAYER_ETHERNET) + case MLX5_EVENT_TYPE_PORT_CHANGE: + if (handle_port_change(ibdev, work->param, &ibev)) goto out; - - ibev.event = (work->event == MLX5_DEV_EVENT_PORT_UP) ? - IB_EVENT_PORT_ACTIVE : IB_EVENT_PORT_ERR; - break; - - case MLX5_DEV_EVENT_LID_CHANGE: - ibev.event = IB_EVENT_LID_CHANGE; - break; - - case MLX5_DEV_EVENT_PKEY_CHANGE: - ibev.event = IB_EVENT_PKEY_CHANGE; - schedule_work(&ibdev->devr.ports[port - 1].pkey_change_work); break; - - case MLX5_DEV_EVENT_GUID_CHANGE: - ibev.event = IB_EVENT_GID_CHANGE; - break; - - case MLX5_DEV_EVENT_CLIENT_REREG: - ibev.event = IB_EVENT_CLIENT_REREGISTER; - break; - case MLX5_DEV_EVENT_DELAY_DROP_TIMEOUT: - schedule_work(&ibdev->delay_drop.delay_drop_work); - goto out; + case MLX5_EVENT_TYPE_GENERAL_EVENT: + handle_general_event(ibdev, work->param, &ibev); + /* fall through */ default: goto out; } - ibev.device = &ibdev->ib_dev; - ibev.element.port_num = port; + ibev.device = &ibdev->ib_dev; - if (!rdma_is_port_valid(&ibdev->ib_dev, port)) { - mlx5_ib_warn(ibdev, "warning: event on port %d\n", port); + if (!rdma_is_port_valid(&ibdev->ib_dev, ibev.element.port_num)) { + mlx5_ib_warn(ibdev, "warning: event on port %d\n", ibev.element.port_num); goto out; } @@ -4304,22 +4360,43 @@ out: kfree(work); } -static void mlx5_ib_event(struct mlx5_core_dev *dev, void *context, - enum mlx5_dev_event event, unsigned long param) +static int mlx5_ib_event(struct notifier_block *nb, + unsigned long event, void *param) { struct mlx5_ib_event_work *work; work = kmalloc(sizeof(*work), GFP_ATOMIC); if (!work) - return; + return NOTIFY_DONE; INIT_WORK(&work->work, mlx5_ib_handle_event); - work->dev = dev; + work->dev = container_of(nb, struct mlx5_ib_dev, mdev_events); + work->is_slave = false; work->param = param; - work->context = context; work->event = event; queue_work(mlx5_ib_event_wq, &work->work); + + return NOTIFY_OK; +} + +static int mlx5_ib_event_slave_port(struct notifier_block *nb, + unsigned long event, void *param) +{ + struct mlx5_ib_event_work *work; + + work = 
kmalloc(sizeof(*work), GFP_ATOMIC); + if (!work) + return NOTIFY_DONE; + + INIT_WORK(&work->work, mlx5_ib_handle_event); + work->mpi = container_of(nb, struct mlx5_ib_multiport_info, mdev_events); + work->is_slave = true; + work->param = param; + work->event = event; + queue_work(mlx5_ib_event_wq, &work->work); + + return NOTIFY_OK; } static int set_has_smi_cap(struct mlx5_ib_dev *dev) @@ -4787,7 +4864,7 @@ static int mlx5_eth_lag_init(struct mlx5_ib_dev *dev) struct mlx5_flow_table *ft; int err; - if (!ns || !mlx5_lag_is_active(mdev)) + if (!ns || !mlx5_lag_is_roce(mdev)) return 0; err = mlx5_cmd_create_vport_lag(mdev); @@ -4801,6 +4878,7 @@ static int mlx5_eth_lag_init(struct mlx5_ib_dev *dev) } dev->flow_db->lag_demux_ft = ft; + dev->lag_active = true; return 0; err_destroy_vport_lag: @@ -4812,7 +4890,9 @@ static void mlx5_eth_lag_cleanup(struct mlx5_ib_dev *dev) { struct mlx5_core_dev *mdev = dev->mdev; - if (dev->flow_db->lag_demux_ft) { + if (dev->lag_active) { + dev->lag_active = false; + mlx5_destroy_flow_table(dev->flow_db->lag_demux_ft); dev->flow_db->lag_demux_ft = NULL; @@ -5038,6 +5118,9 @@ static int mlx5_ib_alloc_counters(struct mlx5_ib_dev *dev) { int err = 0; int i; + bool is_shared; + + is_shared = MLX5_CAP_GEN(dev->mdev, log_max_uctx) != 0; for (i = 0; i < dev->num_ports; i++) { err = __mlx5_ib_alloc_counters(dev, &dev->port[i].cnts); @@ -5047,8 +5130,10 @@ static int mlx5_ib_alloc_counters(struct mlx5_ib_dev *dev) mlx5_ib_fill_counters(dev, dev->port[i].cnts.names, dev->port[i].cnts.offsets); - err = mlx5_core_alloc_q_counter(dev->mdev, - &dev->port[i].cnts.set_id); + err = mlx5_cmd_alloc_q_counter(dev->mdev, + &dev->port[i].cnts.set_id, + is_shared ? + MLX5_SHARED_RESOURCE_UID : 0); if (err) { mlx5_ib_warn(dev, "couldn't allocate queue counter for port %d, err %d\n", @@ -5325,14 +5410,6 @@ static void init_delay_drop(struct mlx5_ib_dev *dev) mlx5_ib_warn(dev, "Failed to init delay drop debugfs\n"); } -static const struct cpumask * -mlx5_ib_get_vector_affinity(struct ib_device *ibdev, int comp_vector) -{ - struct mlx5_ib_dev *dev = to_mdev(ibdev); - - return mlx5_get_vector_affinity_hint(dev->mdev, comp_vector); -} - /* The mlx5_ib_multiport_mutex should be held when calling this function */ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev, struct mlx5_ib_multiport_info *mpi) @@ -5350,6 +5427,11 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev, spin_unlock(&port->mp.mpi_lock); return; } + + if (mpi->mdev_events.notifier_call) + mlx5_notifier_unregister(mpi->mdev, &mpi->mdev_events); + mpi->mdev_events.notifier_call = NULL; + mpi->ibdev = NULL; spin_unlock(&port->mp.mpi_lock); @@ -5405,6 +5487,7 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev, ibdev->port[port_num].mp.mpi = mpi; mpi->ibdev = ibdev; + mpi->mdev_events.notifier_call = NULL; spin_unlock(&ibdev->port[port_num].mp.mpi_lock); err = mlx5_nic_vport_affiliate_multiport(ibdev->mdev, mpi->mdev); @@ -5422,6 +5505,9 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev, goto unbind; } + mpi->mdev_events.notifier_call = mlx5_ib_event_slave_port; + mlx5_notifier_register(mpi->mdev, &mpi->mdev_events); + err = mlx5_ib_init_cong_debugfs(ibdev, port_num); if (err) goto unbind; @@ -5551,30 +5637,17 @@ ADD_UVERBS_ATTRIBUTES_SIMPLE( UVERBS_ATTR_FLAGS_IN(MLX5_IB_ATTR_CREATE_FLOW_ACTION_FLAGS, enum mlx5_ib_uapi_flow_action_flags)); -static int populate_specs_root(struct mlx5_ib_dev *dev) -{ - const struct uverbs_object_tree_def **trees = dev->driver_trees; - size_t 
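
/*
 * A minimal sketch of the notifier conversion above: the callback now
 * receives only the notifier_block, and the owning device (or
 * multiport info) is recovered with container_of(), as mlx5_ib_event()
 * and mlx5_ib_event_slave_port() do. The notifier struct here is a
 * stub, not the kernel's.
 */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct notifier_block {
	int (*call)(struct notifier_block *nb, unsigned long event, void *param);
};

struct dev {
	int id;
	struct notifier_block mdev_events;
};

static int event_cb(struct notifier_block *nb, unsigned long event, void *param)
{
	/* Recover the embedding device from the notifier pointer. */
	struct dev *d = container_of(nb, struct dev, mdev_events);

	(void)param;
	printf("event %lu on dev %d\n", event, d->id);
	return 0;
}

int main(void)
{
	struct dev d = { 7, { event_cb } };

	return d.mdev_events.call(&d.mdev_events, 1, NULL);
}
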
num_trees = 0; - - if (mlx5_accel_ipsec_device_caps(dev->mdev) & - MLX5_ACCEL_IPSEC_CAP_DEVICE) - trees[num_trees++] = &mlx5_ib_flow_action; - - if (MLX5_CAP_DEV_MEM(dev->mdev, memic)) - trees[num_trees++] = &mlx5_ib_dm; - - if (MLX5_CAP_GEN_64(dev->mdev, general_obj_types) & - MLX5_GENERAL_OBJ_TYPES_CAP_UCTX) - trees[num_trees++] = mlx5_ib_get_devx_tree(); - - num_trees += mlx5_ib_get_flow_trees(trees + num_trees); - - WARN_ON(num_trees >= ARRAY_SIZE(dev->driver_trees)); - trees[num_trees] = NULL; - dev->ib_dev.driver_specs = trees; +static const struct uapi_definition mlx5_ib_defs[] = { +#if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS) + UAPI_DEF_CHAIN(mlx5_ib_devx_defs), + UAPI_DEF_CHAIN(mlx5_ib_flow_defs), +#endif - return 0; -} + UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_FLOW_ACTION, + &mlx5_ib_flow_action), + UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_DM, &mlx5_ib_dm), + {} +}; static int mlx5_ib_read_counters(struct ib_counters *counters, struct ib_counters_read_attr *read_attr, @@ -5651,6 +5724,8 @@ void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev) mlx5_ib_cleanup_multiport_master(dev); #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING cleanup_srcu_struct(&dev->mr_srcu); + drain_workqueue(dev->advise_mr_wq); + destroy_workqueue(dev->advise_mr_wq); #endif kfree(dev->port); } @@ -5694,8 +5769,7 @@ int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev) dev->ib_dev.node_type = RDMA_NODE_IB_CA; dev->ib_dev.local_dma_lkey = 0 /* not supported for now */; dev->ib_dev.phys_port_cnt = dev->num_ports; - dev->ib_dev.num_comp_vectors = - dev->mdev->priv.eq_table.num_comp_vectors; + dev->ib_dev.num_comp_vectors = mlx5_comp_vectors_count(mdev); dev->ib_dev.dev.parent = &mdev->pdev->dev; mutex_init(&dev->cap_mask_mutex); @@ -5706,9 +5780,17 @@ int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev) dev->memic.dev = mdev; #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING + dev->advise_mr_wq = alloc_ordered_workqueue("mlx5_ib_advise_mr_wq", 0); + if (!dev->advise_mr_wq) { + err = -ENOMEM; + goto err_mp; + } + err = init_srcu_struct(&dev->mr_srcu); - if (err) - goto err_free_port; + if (err) { + destroy_workqueue(dev->advise_mr_wq); + goto err_mp; + } #endif return 0; @@ -5752,6 +5834,94 @@ static void mlx5_ib_stage_flow_db_cleanup(struct mlx5_ib_dev *dev) kfree(dev->flow_db); } +static const struct ib_device_ops mlx5_ib_dev_ops = { + .add_gid = mlx5_ib_add_gid, + .alloc_mr = mlx5_ib_alloc_mr, + .alloc_pd = mlx5_ib_alloc_pd, + .alloc_ucontext = mlx5_ib_alloc_ucontext, + .attach_mcast = mlx5_ib_mcg_attach, + .check_mr_status = mlx5_ib_check_mr_status, + .create_ah = mlx5_ib_create_ah, + .create_counters = mlx5_ib_create_counters, + .create_cq = mlx5_ib_create_cq, + .create_flow = mlx5_ib_create_flow, + .create_qp = mlx5_ib_create_qp, + .create_srq = mlx5_ib_create_srq, + .dealloc_pd = mlx5_ib_dealloc_pd, + .dealloc_ucontext = mlx5_ib_dealloc_ucontext, + .del_gid = mlx5_ib_del_gid, + .dereg_mr = mlx5_ib_dereg_mr, + .destroy_ah = mlx5_ib_destroy_ah, + .destroy_counters = mlx5_ib_destroy_counters, + .destroy_cq = mlx5_ib_destroy_cq, + .destroy_flow = mlx5_ib_destroy_flow, + .destroy_flow_action = mlx5_ib_destroy_flow_action, + .destroy_qp = mlx5_ib_destroy_qp, + .destroy_srq = mlx5_ib_destroy_srq, + .detach_mcast = mlx5_ib_mcg_detach, + .disassociate_ucontext = mlx5_ib_disassociate_ucontext, + .drain_rq = mlx5_ib_drain_rq, + .drain_sq = mlx5_ib_drain_sq, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = mlx5_ib_get_dma_mr, + .get_link_layer = mlx5_ib_port_link_layer, + .map_mr_sg = mlx5_ib_map_mr_sg, + .mmap = mlx5_ib_mmap, + 
.modify_cq = mlx5_ib_modify_cq, + .modify_device = mlx5_ib_modify_device, + .modify_port = mlx5_ib_modify_port, + .modify_qp = mlx5_ib_modify_qp, + .modify_srq = mlx5_ib_modify_srq, + .poll_cq = mlx5_ib_poll_cq, + .post_recv = mlx5_ib_post_recv, + .post_send = mlx5_ib_post_send, + .post_srq_recv = mlx5_ib_post_srq_recv, + .process_mad = mlx5_ib_process_mad, + .query_ah = mlx5_ib_query_ah, + .query_device = mlx5_ib_query_device, + .query_gid = mlx5_ib_query_gid, + .query_pkey = mlx5_ib_query_pkey, + .query_qp = mlx5_ib_query_qp, + .query_srq = mlx5_ib_query_srq, + .read_counters = mlx5_ib_read_counters, + .reg_user_mr = mlx5_ib_reg_user_mr, + .req_notify_cq = mlx5_ib_arm_cq, + .rereg_user_mr = mlx5_ib_rereg_user_mr, + .resize_cq = mlx5_ib_resize_cq, +}; + +static const struct ib_device_ops mlx5_ib_dev_flow_ipsec_ops = { + .create_flow_action_esp = mlx5_ib_create_flow_action_esp, + .modify_flow_action_esp = mlx5_ib_modify_flow_action_esp, +}; + +static const struct ib_device_ops mlx5_ib_dev_ipoib_enhanced_ops = { + .rdma_netdev_get_params = mlx5_ib_rn_get_params, +}; + +static const struct ib_device_ops mlx5_ib_dev_sriov_ops = { + .get_vf_config = mlx5_ib_get_vf_config, + .get_vf_stats = mlx5_ib_get_vf_stats, + .set_vf_guid = mlx5_ib_set_vf_guid, + .set_vf_link_state = mlx5_ib_set_vf_link_state, +}; + +static const struct ib_device_ops mlx5_ib_dev_mw_ops = { + .alloc_mw = mlx5_ib_alloc_mw, + .dealloc_mw = mlx5_ib_dealloc_mw, +}; + +static const struct ib_device_ops mlx5_ib_dev_xrc_ops = { + .alloc_xrcd = mlx5_ib_alloc_xrcd, + .dealloc_xrcd = mlx5_ib_dealloc_xrcd, +}; + +static const struct ib_device_ops mlx5_ib_dev_dm_ops = { + .alloc_dm = mlx5_ib_alloc_dm, + .dealloc_dm = mlx5_ib_dealloc_dm, + .reg_dm_mr = mlx5_ib_reg_dm_mr, +}; + int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) { struct mlx5_core_dev *mdev = dev->mdev; @@ -5790,104 +5960,45 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) (1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_QP) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_QP) | - (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ); - - dev->ib_dev.query_device = mlx5_ib_query_device; - dev->ib_dev.get_link_layer = mlx5_ib_port_link_layer; - dev->ib_dev.query_gid = mlx5_ib_query_gid; - dev->ib_dev.add_gid = mlx5_ib_add_gid; - dev->ib_dev.del_gid = mlx5_ib_del_gid; - dev->ib_dev.query_pkey = mlx5_ib_query_pkey; - dev->ib_dev.modify_device = mlx5_ib_modify_device; - dev->ib_dev.modify_port = mlx5_ib_modify_port; - dev->ib_dev.alloc_ucontext = mlx5_ib_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = mlx5_ib_dealloc_ucontext; - dev->ib_dev.mmap = mlx5_ib_mmap; - dev->ib_dev.alloc_pd = mlx5_ib_alloc_pd; - dev->ib_dev.dealloc_pd = mlx5_ib_dealloc_pd; - dev->ib_dev.create_ah = mlx5_ib_create_ah; - dev->ib_dev.query_ah = mlx5_ib_query_ah; - dev->ib_dev.destroy_ah = mlx5_ib_destroy_ah; - dev->ib_dev.create_srq = mlx5_ib_create_srq; - dev->ib_dev.modify_srq = mlx5_ib_modify_srq; - dev->ib_dev.query_srq = mlx5_ib_query_srq; - dev->ib_dev.destroy_srq = mlx5_ib_destroy_srq; - dev->ib_dev.post_srq_recv = mlx5_ib_post_srq_recv; - dev->ib_dev.create_qp = mlx5_ib_create_qp; - dev->ib_dev.modify_qp = mlx5_ib_modify_qp; - dev->ib_dev.query_qp = mlx5_ib_query_qp; - dev->ib_dev.destroy_qp = mlx5_ib_destroy_qp; - dev->ib_dev.drain_sq = mlx5_ib_drain_sq; - dev->ib_dev.drain_rq = mlx5_ib_drain_rq; - dev->ib_dev.post_send = mlx5_ib_post_send; - dev->ib_dev.post_recv = mlx5_ib_post_recv; - dev->ib_dev.create_cq = mlx5_ib_create_cq; - dev->ib_dev.modify_cq = mlx5_ib_modify_cq; - 
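
/*
 * A sketch of the idea behind the ib_device_ops conversion above,
 * assuming ib_set_device_ops() copies only the callbacks a driver
 * actually provides and leaves the rest untouched, so optional op
 * groups (MW, XRC, DM, SR-IOV, ...) can be applied conditionally.
 * The two-member ops struct and the copy helper are illustrative
 * stand-ins for the real macro-driven implementation.
 */
#include <stdio.h>

struct ops {
	int (*query)(void);
	int (*modify)(void);
};

static int q(void) { return 1; }

static void set_ops(struct ops *dev, const struct ops *src)
{
	if (src->query)
		dev->query = src->query;
	if (src->modify)
		dev->modify = src->modify;
}

int main(void)
{
	struct ops dev = { 0 };
	const struct ops drv = { .query = q };

	set_ops(&dev, &drv);
	printf("%d %d\n", dev.query != NULL, dev.modify != NULL);
	return 0;
}
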
dev->ib_dev.resize_cq = mlx5_ib_resize_cq; - dev->ib_dev.destroy_cq = mlx5_ib_destroy_cq; - dev->ib_dev.poll_cq = mlx5_ib_poll_cq; - dev->ib_dev.req_notify_cq = mlx5_ib_arm_cq; - dev->ib_dev.get_dma_mr = mlx5_ib_get_dma_mr; - dev->ib_dev.reg_user_mr = mlx5_ib_reg_user_mr; - dev->ib_dev.rereg_user_mr = mlx5_ib_rereg_user_mr; - dev->ib_dev.dereg_mr = mlx5_ib_dereg_mr; - dev->ib_dev.attach_mcast = mlx5_ib_mcg_attach; - dev->ib_dev.detach_mcast = mlx5_ib_mcg_detach; - dev->ib_dev.process_mad = mlx5_ib_process_mad; - dev->ib_dev.alloc_mr = mlx5_ib_alloc_mr; - dev->ib_dev.map_mr_sg = mlx5_ib_map_mr_sg; - dev->ib_dev.check_mr_status = mlx5_ib_check_mr_status; - dev->ib_dev.get_dev_fw_str = get_dev_fw_str; - dev->ib_dev.get_vector_affinity = mlx5_ib_get_vector_affinity; + (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) | + (1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) | + (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); + if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads) && IS_ENABLED(CONFIG_MLX5_CORE_IPOIB)) - dev->ib_dev.rdma_netdev_get_params = mlx5_ib_rn_get_params; - - if (mlx5_core_is_pf(mdev)) { - dev->ib_dev.get_vf_config = mlx5_ib_get_vf_config; - dev->ib_dev.set_vf_link_state = mlx5_ib_set_vf_link_state; - dev->ib_dev.get_vf_stats = mlx5_ib_get_vf_stats; - dev->ib_dev.set_vf_guid = mlx5_ib_set_vf_guid; - } + ib_set_device_ops(&dev->ib_dev, + &mlx5_ib_dev_ipoib_enhanced_ops); - dev->ib_dev.disassociate_ucontext = mlx5_ib_disassociate_ucontext; + if (mlx5_core_is_pf(mdev)) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_sriov_ops); dev->umr_fence = mlx5_get_umr_fence(MLX5_CAP_GEN(mdev, umr_fence)); if (MLX5_CAP_GEN(mdev, imaicl)) { - dev->ib_dev.alloc_mw = mlx5_ib_alloc_mw; - dev->ib_dev.dealloc_mw = mlx5_ib_dealloc_mw; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_ALLOC_MW) | (1ull << IB_USER_VERBS_CMD_DEALLOC_MW); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_mw_ops); } if (MLX5_CAP_GEN(mdev, xrc)) { - dev->ib_dev.alloc_xrcd = mlx5_ib_alloc_xrcd; - dev->ib_dev.dealloc_xrcd = mlx5_ib_dealloc_xrcd; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_OPEN_XRCD) | (1ull << IB_USER_VERBS_CMD_CLOSE_XRCD); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_xrc_ops); } - if (MLX5_CAP_DEV_MEM(mdev, memic)) { - dev->ib_dev.alloc_dm = mlx5_ib_alloc_dm; - dev->ib_dev.dealloc_dm = mlx5_ib_dealloc_dm; - dev->ib_dev.reg_dm_mr = mlx5_ib_reg_dm_mr; - } + if (MLX5_CAP_DEV_MEM(mdev, memic)) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_dm_ops); - dev->ib_dev.create_flow = mlx5_ib_create_flow; - dev->ib_dev.destroy_flow = mlx5_ib_destroy_flow; - dev->ib_dev.uverbs_ex_cmd_mask |= - (1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) | - (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); - dev->ib_dev.create_flow_action_esp = mlx5_ib_create_flow_action_esp; - dev->ib_dev.destroy_flow_action = mlx5_ib_destroy_flow_action; - dev->ib_dev.modify_flow_action_esp = mlx5_ib_modify_flow_action_esp; + if (mlx5_accel_ipsec_device_caps(dev->mdev) & + MLX5_ACCEL_IPSEC_CAP_DEVICE) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_flow_ipsec_ops); dev->ib_dev.driver_id = RDMA_DRIVER_MLX5; - dev->ib_dev.create_counters = mlx5_ib_create_counters; - dev->ib_dev.destroy_counters = mlx5_ib_destroy_counters; - dev->ib_dev.read_counters = mlx5_ib_read_counters; + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_ops); + + if (IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)) + dev->ib_dev.driver_def = mlx5_ib_defs; err = init_node_data(dev); if (err) @@ -5901,22 +6012,37 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) return 0; } +static const struct 
ib_device_ops mlx5_ib_dev_port_ops = { + .get_port_immutable = mlx5_port_immutable, + .query_port = mlx5_ib_query_port, +}; + static int mlx5_ib_stage_non_default_cb(struct mlx5_ib_dev *dev) { - dev->ib_dev.get_port_immutable = mlx5_port_immutable; - dev->ib_dev.query_port = mlx5_ib_query_port; - + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_ops); return 0; } +static const struct ib_device_ops mlx5_ib_dev_port_rep_ops = { + .get_port_immutable = mlx5_port_rep_immutable, + .query_port = mlx5_ib_rep_query_port, +}; + int mlx5_ib_stage_rep_non_default_cb(struct mlx5_ib_dev *dev) { - dev->ib_dev.get_port_immutable = mlx5_port_rep_immutable; - dev->ib_dev.query_port = mlx5_ib_rep_query_port; - + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_rep_ops); return 0; } +static const struct ib_device_ops mlx5_ib_dev_common_roce_ops = { + .create_rwq_ind_table = mlx5_ib_create_rwq_ind_table, + .create_wq = mlx5_ib_create_wq, + .destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table, + .destroy_wq = mlx5_ib_destroy_wq, + .get_netdev = mlx5_ib_get_netdev, + .modify_wq = mlx5_ib_modify_wq, +}; + static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev) { u8 port_num; @@ -5928,19 +6054,13 @@ static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev) dev->roce[i].last_port_state = IB_PORT_DOWN; } - dev->ib_dev.get_netdev = mlx5_ib_get_netdev; - dev->ib_dev.create_wq = mlx5_ib_create_wq; - dev->ib_dev.modify_wq = mlx5_ib_modify_wq; - dev->ib_dev.destroy_wq = mlx5_ib_destroy_wq; - dev->ib_dev.create_rwq_ind_table = mlx5_ib_create_rwq_ind_table; - dev->ib_dev.destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table; - dev->ib_dev.uverbs_ex_cmd_mask |= (1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_common_roce_ops); port_num = mlx5_core_native_port_num(dev->mdev) - 1; @@ -6034,11 +6154,20 @@ static int mlx5_ib_stage_odp_init(struct mlx5_ib_dev *dev) return mlx5_ib_odp_init_one(dev); } +void mlx5_ib_stage_odp_cleanup(struct mlx5_ib_dev *dev) +{ + mlx5_ib_odp_cleanup_one(dev); +} + +static const struct ib_device_ops mlx5_ib_dev_hw_stats_ops = { + .alloc_hw_stats = mlx5_ib_alloc_hw_stats, + .get_hw_stats = mlx5_ib_get_hw_stats, +}; + int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev) { if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt)) { - dev->ib_dev.get_hw_stats = mlx5_ib_get_hw_stats; - dev->ib_dev.alloc_hw_stats = mlx5_ib_alloc_hw_stats; + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_hw_stats_ops); return mlx5_ib_alloc_counters(dev); } @@ -6096,17 +6225,12 @@ void mlx5_ib_stage_bfrag_cleanup(struct mlx5_ib_dev *dev) mlx5_free_bfreg(dev->mdev, &dev->bfreg); } -static int mlx5_ib_stage_populate_specs(struct mlx5_ib_dev *dev) -{ - return populate_specs_root(dev); -} - int mlx5_ib_stage_ib_reg_init(struct mlx5_ib_dev *dev) { const char *name; rdma_set_device_sysfs_group(&dev->ib_dev, &mlx5_attr_group); - if (!mlx5_lag_is_active(dev->mdev)) + if (!mlx5_lag_is_roce(dev->mdev)) name = "mlx5_%d"; else name = "mlx5_bond_%d"; @@ -6140,16 +6264,32 @@ static void mlx5_ib_stage_delay_drop_cleanup(struct mlx5_ib_dev *dev) cancel_delay_drop(dev); } -static int mlx5_ib_stage_rep_reg_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev) { - mlx5_ib_register_vport_reps(dev); - + dev->mdev_events.notifier_call = mlx5_ib_event; + 
mlx5_notifier_register(dev->mdev, &dev->mdev_events); return 0; } -static void mlx5_ib_stage_rep_reg_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_dev_notifier_cleanup(struct mlx5_ib_dev *dev) +{ + mlx5_notifier_unregister(dev->mdev, &dev->mdev_events); +} + +static int mlx5_ib_stage_devx_init(struct mlx5_ib_dev *dev) +{ + int uid; + + uid = mlx5_ib_devx_create(dev, false); + if (uid > 0) + dev->devx_whitelist_uid = uid; + + return 0; +} +static void mlx5_ib_stage_devx_cleanup(struct mlx5_ib_dev *dev) { - mlx5_ib_unregister_vport_reps(dev); + if (dev->devx_whitelist_uid) + mlx5_ib_devx_destroy(dev, dev->devx_whitelist_uid); } void __mlx5_ib_remove(struct mlx5_ib_dev *dev, @@ -6162,10 +6302,6 @@ void __mlx5_ib_remove(struct mlx5_ib_dev *dev, if (profile->stage[stage].cleanup) profile->stage[stage].cleanup(dev); } - - if (dev->devx_whitelist_uid) - mlx5_ib_devx_destroy(dev, dev->devx_whitelist_uid); - ib_dealloc_device((struct ib_device *)dev); } void *__mlx5_ib_add(struct mlx5_ib_dev *dev, @@ -6173,7 +6309,6 @@ void *__mlx5_ib_add(struct mlx5_ib_dev *dev, { int err; int i; - int uid; for (i = 0; i < MLX5_IB_STAGE_MAX; i++) { if (profile->stage[i].init) { @@ -6183,10 +6318,6 @@ void *__mlx5_ib_add(struct mlx5_ib_dev *dev, } } - uid = mlx5_ib_devx_create(dev); - if (uid > 0) - dev->devx_whitelist_uid = uid; - dev->profile = profile; dev->ib_active = true; @@ -6214,12 +6345,18 @@ static const struct mlx5_ib_profile pf_profile = { STAGE_CREATE(MLX5_IB_STAGE_ROCE, mlx5_ib_stage_roce_init, mlx5_ib_stage_roce_cleanup), + STAGE_CREATE(MLX5_IB_STAGE_SRQ, + mlx5_init_srq_table, + mlx5_cleanup_srq_table), STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES, mlx5_ib_stage_dev_res_init, mlx5_ib_stage_dev_res_cleanup), + STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER, + mlx5_ib_stage_dev_notifier_init, + mlx5_ib_stage_dev_notifier_cleanup), STAGE_CREATE(MLX5_IB_STAGE_ODP, mlx5_ib_stage_odp_init, - NULL), + mlx5_ib_stage_odp_cleanup), STAGE_CREATE(MLX5_IB_STAGE_COUNTERS, mlx5_ib_stage_counters_init, mlx5_ib_stage_counters_cleanup), @@ -6235,9 +6372,9 @@ static const struct mlx5_ib_profile pf_profile = { STAGE_CREATE(MLX5_IB_STAGE_PRE_IB_REG_UMR, NULL, mlx5_ib_stage_pre_ib_reg_umr_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_SPECS, - mlx5_ib_stage_populate_specs, - NULL), + STAGE_CREATE(MLX5_IB_STAGE_WHITELIST_UID, + mlx5_ib_stage_devx_init, + mlx5_ib_stage_devx_cleanup), STAGE_CREATE(MLX5_IB_STAGE_IB_REG, mlx5_ib_stage_ib_reg_init, mlx5_ib_stage_ib_reg_cleanup), @@ -6265,9 +6402,15 @@ static const struct mlx5_ib_profile nic_rep_profile = { STAGE_CREATE(MLX5_IB_STAGE_ROCE, mlx5_ib_stage_rep_roce_init, mlx5_ib_stage_rep_roce_cleanup), + STAGE_CREATE(MLX5_IB_STAGE_SRQ, + mlx5_init_srq_table, + mlx5_cleanup_srq_table), STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES, mlx5_ib_stage_dev_res_init, mlx5_ib_stage_dev_res_cleanup), + STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER, + mlx5_ib_stage_dev_notifier_init, + mlx5_ib_stage_dev_notifier_cleanup), STAGE_CREATE(MLX5_IB_STAGE_COUNTERS, mlx5_ib_stage_counters_init, mlx5_ib_stage_counters_cleanup), @@ -6280,18 +6423,12 @@ static const struct mlx5_ib_profile nic_rep_profile = { STAGE_CREATE(MLX5_IB_STAGE_PRE_IB_REG_UMR, NULL, mlx5_ib_stage_pre_ib_reg_umr_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_SPECS, - mlx5_ib_stage_populate_specs, - NULL), STAGE_CREATE(MLX5_IB_STAGE_IB_REG, mlx5_ib_stage_ib_reg_init, mlx5_ib_stage_ib_reg_cleanup), STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR, mlx5_ib_stage_post_ib_reg_umr_init, NULL), - STAGE_CREATE(MLX5_IB_STAGE_REP_REG, - 
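
/*
 * A minimal sketch of the staged-profile contract the STAGE_CREATE()
 * tables above rely on: init stages run in order, and on failure the
 * already-completed stages are unwound in reverse, which is also how
 * __mlx5_ib_remove() tears down from a given stage. Stage bodies are
 * trivial stand-ins.
 */
#include <stdio.h>

struct stage {
	int (*init)(void);
	void (*cleanup)(void);
};

static int ok(void) { return 0; }
static int fail(void) { return -1; }
static void undo(void) { puts("cleanup"); }

static int run(const struct stage *s, int max)
{
	int i, err = 0;

	for (i = 0; i < max; i++)
		if (s[i].init && (err = s[i].init()))
			goto unwind;
	return 0;
unwind:
	/* Unwind only the stages whose init completed. */
	while (i-- > 0)
		if (s[i].cleanup)
			s[i].cleanup();
	return err;
}

int main(void)
{
	const struct stage profile[] = { { ok, undo }, { fail, undo } };

	return run(profile, 2) ? 0 : 1;
}
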
mlx5_ib_stage_rep_reg_init, - mlx5_ib_stage_rep_reg_cleanup), }; static void *mlx5_ib_add_slave_port(struct mlx5_core_dev *mdev) @@ -6359,8 +6496,9 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev) if (MLX5_ESWITCH_MANAGER(mdev) && mlx5_ib_eswitch_mode(mdev->priv.eswitch) == SRIOV_OFFLOADS) { dev->rep = mlx5_ib_vport_rep(mdev->priv.eswitch, 0); - - return __mlx5_ib_add(dev, &nic_rep_profile); + dev->profile = &nic_rep_profile; + mlx5_ib_register_vport_reps(dev); + return dev; } return __mlx5_ib_add(dev, &pf_profile); @@ -6382,16 +6520,17 @@ static void mlx5_ib_remove(struct mlx5_core_dev *mdev, void *context) } dev = context; - __mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX); + if (dev->profile == &nic_rep_profile) + mlx5_ib_unregister_vport_reps(dev); + else + __mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX); + + ib_dealloc_device((struct ib_device *)dev); } static struct mlx5_interface mlx5_ib_interface = { .add = mlx5_ib_add, .remove = mlx5_ib_remove, - .event = mlx5_ib_event, -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - .pfault = mlx5_ib_pfault, -#endif .protocol = MLX5_INTERFACE_PROTOCOL_IB, }; diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index b651a7a6fde9..b06d3b1efea8 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -41,8 +41,6 @@ #include #include #include -#include -#include #include #include #include @@ -50,6 +48,8 @@ #include #include +#include "srq.h" + #define mlx5_ib_dbg(_dev, format, arg...) \ dev_dbg(&(_dev)->ib_dev.dev, "%s:%d:(pid %d): " format, __func__, \ __LINE__, current->pid, ##arg) @@ -257,6 +257,7 @@ enum mlx5_ib_rq_flags { }; struct mlx5_ib_wq { + struct mlx5_frag_buf_ctrl fbc; u64 *wrid; u32 *wr_data; struct wr_list *w_list; @@ -274,8 +275,7 @@ struct mlx5_ib_wq { unsigned head; unsigned tail; u16 cur_post; - u16 last_poll; - void *qend; + void *cur_edge; }; enum mlx5_ib_wq_flags { @@ -460,6 +460,7 @@ enum mlx5_ib_qp_flags { MLX5_IB_QP_UNDERLAY = 1 << 10, MLX5_IB_QP_PCI_WRITE_END_PADDING = 1 << 11, MLX5_IB_QP_TUNNEL_OFFLOAD = 1 << 12, + MLX5_IB_QP_PACKET_BASED_CREDIT = 1 << 13, }; struct mlx5_umr_wr { @@ -523,6 +524,7 @@ struct mlx5_ib_srq { struct mlx5_core_srq msrq; struct mlx5_frag_buf buf; struct mlx5_db db; + struct mlx5_frag_buf_ctrl fbc; u64 *wrid; /* protect SRQ hanlding */ @@ -540,7 +542,6 @@ struct mlx5_ib_srq { struct mlx5_ib_xrcd { struct ib_xrcd ibxrcd; u32 xrcdn; - u16 uid; }; enum mlx5_ib_mtt_access_flags { @@ -774,19 +775,20 @@ enum mlx5_ib_stages { MLX5_IB_STAGE_CAPS, MLX5_IB_STAGE_NON_DEFAULT_CB, MLX5_IB_STAGE_ROCE, + MLX5_IB_STAGE_SRQ, MLX5_IB_STAGE_DEVICE_RESOURCES, + MLX5_IB_STAGE_DEVICE_NOTIFIER, MLX5_IB_STAGE_ODP, MLX5_IB_STAGE_COUNTERS, MLX5_IB_STAGE_CONG_DEBUGFS, MLX5_IB_STAGE_UAR, MLX5_IB_STAGE_BFREG, MLX5_IB_STAGE_PRE_IB_REG_UMR, - MLX5_IB_STAGE_SPECS, + MLX5_IB_STAGE_WHITELIST_UID, MLX5_IB_STAGE_IB_REG, MLX5_IB_STAGE_POST_IB_REG_UMR, MLX5_IB_STAGE_DELAY_DROP, MLX5_IB_STAGE_CLASS_ATTR, - MLX5_IB_STAGE_REP_REG, MLX5_IB_STAGE_MAX, }; @@ -806,6 +808,7 @@ struct mlx5_ib_multiport_info { struct list_head list; struct mlx5_ib_dev *ibdev; struct mlx5_core_dev *mdev; + struct notifier_block mdev_events; struct completion unref_comp; u64 sys_image_guid; u32 mdev_refcnt; @@ -880,10 +883,19 @@ struct mlx5_ib_lb_state { bool enabled; }; +struct mlx5_ib_pf_eq { + struct mlx5_ib_dev *dev; + struct mlx5_eq *core; + struct work_struct work; + spinlock_t lock; /* Pagefaults spinlock */ + struct workqueue_struct *wq; + mempool_t *pool; +}; + 
struct mlx5_ib_dev { struct ib_device ib_dev; - const struct uverbs_object_tree_def *driver_trees[7]; struct mlx5_core_dev *mdev; + struct notifier_block mdev_events; struct mlx5_roce roce[MLX5_MAX_PORTS]; int num_ports; /* serialize update of capability mask @@ -902,12 +914,15 @@ struct mlx5_ib_dev { #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING struct ib_odp_caps odp_caps; u64 odp_max_size; + struct mlx5_ib_pf_eq odp_pf_eq; + /* * Sleepable RCU that prevents destruction of MRs while they are still * being used by a page fault handler. */ struct srcu_struct mr_srcu; u32 null_mkey; + struct workqueue_struct *advise_mr_wq; #endif struct mlx5_ib_flow_db *flow_db; /* protect resources needed as part of reset flow */ @@ -920,6 +935,7 @@ struct mlx5_ib_dev { struct mlx5_ib_delay_drop delay_drop; const struct mlx5_ib_profile *profile; struct mlx5_eswitch_rep *rep; + int lag_active; struct mlx5_ib_lb_state lb; u8 umr_fence; @@ -927,6 +943,7 @@ struct mlx5_ib_dev { u64 sys_image_guid; struct mlx5_memic memic; u16 devx_whitelist_uid; + struct mlx5_srq_table srq_table; }; static inline struct mlx5_ib_cq *to_mibcq(struct mlx5_core_cq *mcq) @@ -1025,9 +1042,9 @@ int mlx5_MAD_IFC(struct mlx5_ib_dev *dev, int ignore_mkey, int ignore_bkey, u8 port, const struct ib_wc *in_wc, const struct ib_grh *in_grh, const void *in_mad, void *response_mad); struct ib_ah *mlx5_ib_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, - struct ib_udata *udata); + u32 flags, struct ib_udata *udata); int mlx5_ib_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); -int mlx5_ib_destroy_ah(struct ib_ah *ah); +int mlx5_ib_destroy_ah(struct ib_ah *ah, u32 flags); struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *init_attr, struct ib_udata *udata); @@ -1053,7 +1070,6 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, const struct ib_send_wr **bad_wr); int mlx5_ib_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, const struct ib_recv_wr **bad_wr); -void *mlx5_get_send_wqe(struct mlx5_ib_qp *qp, int n); int mlx5_ib_read_user_wqe(struct mlx5_ib_qp *qp, int send, int wqe_index, void *buffer, u32 length, struct mlx5_ib_qp_base *base); @@ -1070,6 +1086,12 @@ struct ib_mr *mlx5_ib_get_dma_mr(struct ib_pd *pd, int acc); struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, u64 virt_addr, int access_flags, struct ib_udata *udata); +int mlx5_ib_advise_mr(struct ib_pd *pd, + enum ib_uverbs_advise_mr_advice advice, + u32 flags, + struct ib_sge *sg_list, + u32 num_sge, + struct uverbs_attr_bundle *attrs); struct ib_mw *mlx5_ib_alloc_mw(struct ib_pd *pd, enum ib_mw_type type, struct ib_udata *udata); int mlx5_ib_dealloc_mw(struct ib_mw *mw); @@ -1158,9 +1180,8 @@ struct ib_mr *mlx5_ib_reg_dm_mr(struct ib_pd *pd, struct ib_dm *dm, #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev); -void mlx5_ib_pfault(struct mlx5_core_dev *mdev, void *context, - struct mlx5_pagefault *pfault); int mlx5_ib_odp_init_one(struct mlx5_ib_dev *ibdev); +void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev); int __init mlx5_ib_odp_init(void); void mlx5_ib_odp_cleanup(void); void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long start, @@ -1168,6 +1189,10 @@ void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long start, void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent); void mlx5_odp_populate_klm(struct mlx5_klm *pklm, size_t offset, size_t nentries, struct mlx5_ib_mr *mr, int 
flags); + +int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, + enum ib_uverbs_advise_mr_advice advice, + u32 flags, struct ib_sge *sg_list, u32 num_sge); #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */ static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev) { @@ -1175,6 +1200,7 @@ static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev) } static inline int mlx5_ib_odp_init_one(struct mlx5_ib_dev *ibdev) { return 0; } +static inline void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev) {} static inline int mlx5_ib_odp_init(void) { return 0; } static inline void mlx5_ib_odp_cleanup(void) {} static inline void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent) {} @@ -1182,6 +1208,13 @@ static inline void mlx5_odp_populate_klm(struct mlx5_klm *pklm, size_t offset, size_t nentries, struct mlx5_ib_mr *mr, int flags) {} +static inline int +mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, + enum ib_uverbs_advise_mr_advice advice, u32 flags, + struct ib_sge *sg_list, u32 num_sge) +{ + return -EOPNOTSUPP; +} #endif /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */ /* Needed for rep profile */ @@ -1250,32 +1283,29 @@ void mlx5_ib_put_native_port_mdev(struct mlx5_ib_dev *dev, u8 port_num); #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS) -int mlx5_ib_devx_create(struct mlx5_ib_dev *dev); +int mlx5_ib_devx_create(struct mlx5_ib_dev *dev, bool is_user); void mlx5_ib_devx_destroy(struct mlx5_ib_dev *dev, u16 uid); const struct uverbs_object_tree_def *mlx5_ib_get_devx_tree(void); +extern const struct uapi_definition mlx5_ib_devx_defs[]; +extern const struct uapi_definition mlx5_ib_flow_defs[]; struct mlx5_ib_flow_handler *mlx5_ib_raw_fs_rule_add( struct mlx5_ib_dev *dev, struct mlx5_ib_flow_matcher *fs_matcher, - struct mlx5_flow_act *flow_act, void *cmd_in, int inlen, - int dest_id, int dest_type); + struct mlx5_flow_act *flow_act, u32 counter_id, + void *cmd_in, int inlen, int dest_id, int dest_type); bool mlx5_ib_devx_is_flow_dest(void *obj, int *dest_id, int *dest_type); +bool mlx5_ib_devx_is_flow_counter(void *obj, u32 *counter_id); int mlx5_ib_get_flow_trees(const struct uverbs_object_tree_def **root); void mlx5_ib_destroy_flow_action_raw(struct mlx5_ib_flow_action *maction); #else static inline int -mlx5_ib_devx_create(struct mlx5_ib_dev *dev) { return -EOPNOTSUPP; }; +mlx5_ib_devx_create(struct mlx5_ib_dev *dev, + bool is_user) { return -EOPNOTSUPP; } static inline void mlx5_ib_devx_destroy(struct mlx5_ib_dev *dev, u16 uid) {} -static inline const struct uverbs_object_tree_def * -mlx5_ib_get_devx_tree(void) { return NULL; } static inline bool mlx5_ib_devx_is_flow_dest(void *obj, int *dest_id, int *dest_type) { return false; } -static inline int -mlx5_ib_get_flow_trees(const struct uverbs_object_tree_def **root) -{ - return 0; -} static inline void mlx5_ib_destroy_flow_action_raw(struct mlx5_ib_flow_action *maction) { diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index 9b195d65a13e..1bd8c1b1dba1 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -73,7 +73,8 @@ static int destroy_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING /* Wait until all page fault handlers using the mr complete. 
*/ - synchronize_srcu(&dev->mr_srcu); + if (mr->umem && mr->umem->is_odp) + synchronize_srcu(&dev->mr_srcu); #endif return err; @@ -237,6 +238,9 @@ static void remove_keys(struct mlx5_ib_dev *dev, int c, int num) { struct mlx5_mr_cache *cache = &dev->cache; struct mlx5_cache_ent *ent = &cache->ent[c]; +#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING + bool odp_mkey_exist = false; +#endif struct mlx5_ib_mr *tmp_mr; struct mlx5_ib_mr *mr; LIST_HEAD(del_list); @@ -249,6 +253,10 @@ static void remove_keys(struct mlx5_ib_dev *dev, int c, int num) break; } mr = list_first_entry(&ent->head, struct mlx5_ib_mr, list); +#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING + if (mr->umem && mr->umem->is_odp) + odp_mkey_exist = true; +#endif list_move(&mr->list, &del_list); ent->cur--; ent->size--; @@ -257,7 +265,8 @@ static void remove_keys(struct mlx5_ib_dev *dev, int c, int num) } #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - synchronize_srcu(&dev->mr_srcu); + if (odp_mkey_exist) + synchronize_srcu(&dev->mr_srcu); #endif list_for_each_entry_safe(mr, tmp_mr, &del_list, list) { @@ -572,6 +581,7 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c) { struct mlx5_mr_cache *cache = &dev->cache; struct mlx5_cache_ent *ent = &cache->ent[c]; + bool odp_mkey_exist = false; struct mlx5_ib_mr *tmp_mr; struct mlx5_ib_mr *mr; LIST_HEAD(del_list); @@ -584,6 +594,8 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c) break; } mr = list_first_entry(&ent->head, struct mlx5_ib_mr, list); + if (mr->umem && mr->umem->is_odp) + odp_mkey_exist = true; list_move(&mr->list, &del_list); ent->cur--; ent->size--; @@ -592,7 +604,8 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c) } #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - synchronize_srcu(&dev->mr_srcu); + if (odp_mkey_exist) + synchronize_srcu(&dev->mr_srcu); #endif list_for_each_entry_safe(mr, tmp_mr, &del_list, list) { @@ -1211,7 +1224,7 @@ err_1: return ERR_PTR(err); } -static void set_mr_fileds(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr, +static void set_mr_fields(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr, int npages, u64 length, int access_flags) { mr->npages = npages; @@ -1267,7 +1280,7 @@ static struct ib_mr *mlx5_ib_get_memic_mr(struct ib_pd *pd, u64 memic_addr, kfree(in); mr->umem = NULL; - set_mr_fileds(dev, mr, 0, length, acc); + set_mr_fields(dev, mr, 0, length, acc); return &mr->ibmr; @@ -1280,6 +1293,21 @@ err_free: return ERR_PTR(err); } +int mlx5_ib_advise_mr(struct ib_pd *pd, + enum ib_uverbs_advise_mr_advice advice, + u32 flags, + struct ib_sge *sg_list, + u32 num_sge, + struct uverbs_attr_bundle *attrs) +{ + if (advice != IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH && + advice != IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_WRITE) + return -EOPNOTSUPP; + + return mlx5_ib_advise_mr_prefetch(pd, advice, flags, + sg_list, num_sge); +} + struct ib_mr *mlx5_ib_reg_dm_mr(struct ib_pd *pd, struct ib_dm *dm, struct ib_dm_mr_attr *attr, struct uverbs_attr_bundle *attrs) @@ -1369,7 +1397,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, mlx5_ib_dbg(dev, "mkey 0x%x\n", mr->mmkey.key); mr->umem = umem; - set_mr_fileds(dev, mr, npages, length, access_flags); + set_mr_fields(dev, mr, npages, length, access_flags); #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING update_odp_mr(mr); @@ -1536,7 +1564,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start, goto err; } - set_mr_fileds(dev, mr, npages, len, access_flags); + set_mr_fields(dev, mr, npages, len, access_flags); #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING update_odp_mr(mr);
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 4dc6cc640ce0..01e0f6200631 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -37,6 +37,46 @@ #include "mlx5_ib.h" #include "cmd.h" +#include + +/* Contains the details of a pagefault. */ +struct mlx5_pagefault { + u32 bytes_committed; + u32 token; + u8 event_subtype; + u8 type; + union { + /* Initiator or send message responder pagefault details. */ + struct { + /* Received packet size, only valid for responders. */ + u32 packet_size; + /* + * Number of resource holding WQE, depends on type. + */ + u32 wq_num; + /* + * WQE index. Refers to either the send queue or + * receive queue, according to event_subtype. + */ + u16 wqe_index; + } wqe; + /* RDMA responder pagefault details */ + struct { + u32 r_key; + /* + * Received packet size, minimal size page fault + * resolution required for forward progress. + */ + u32 packet_size; + u32 rdma_op_len; + u64 rdma_va; + } rdma; + }; + + struct mlx5_ib_pf_eq *eq; + struct work_struct work; +}; + #define MAX_PREFETCH_LEN (4*1024*1024U) /* Timeout in ms to wait for an active mmu notifier to complete when handling @@ -304,14 +344,20 @@ static void mlx5_ib_page_fault_resume(struct mlx5_ib_dev *dev, { int wq_num = pfault->event_subtype == MLX5_PFAULT_SUBTYPE_WQE ? pfault->wqe.wq_num : pfault->token; - int ret = mlx5_core_page_fault_resume(dev->mdev, - pfault->token, - wq_num, - pfault->type, - error); - if (ret) - mlx5_ib_err(dev, "Failed to resolve the page fault on WQ 0x%x\n", - wq_num); + u32 out[MLX5_ST_SZ_DW(page_fault_resume_out)] = { }; + u32 in[MLX5_ST_SZ_DW(page_fault_resume_in)] = { }; + int err; + + MLX5_SET(page_fault_resume_in, in, opcode, MLX5_CMD_OP_PAGE_FAULT_RESUME); + MLX5_SET(page_fault_resume_in, in, page_fault_type, pfault->type); + MLX5_SET(page_fault_resume_in, in, token, pfault->token); + MLX5_SET(page_fault_resume_in, in, wq_number, wq_num); + MLX5_SET(page_fault_resume_in, in, error, !!error); + + err = mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); + if (err) + mlx5_ib_err(dev, "Failed to resolve the page fault on WQ 0x%x err %d\n", + wq_num, err); } static struct mlx5_ib_mr *implicit_mr_alloc(struct ib_pd *pd, @@ -503,12 +549,17 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr) wait_event(imr->q_leaf_free, !atomic_read(&imr->num_leaf_free)); } +#define MLX5_PF_FLAGS_PREFETCH BIT(0) +#define MLX5_PF_FLAGS_DOWNGRADE BIT(1) static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr, - u64 io_virt, size_t bcnt, u32 *bytes_mapped) + u64 io_virt, size_t bcnt, u32 *bytes_mapped, + u32 flags) { int npages = 0, current_seq, page_shift, ret, np; bool implicit = false; struct ib_umem_odp *odp_mr = to_ib_umem_odp(mr->umem); + bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE; + bool prefetch = flags & MLX5_PF_FLAGS_PREFETCH; u64 access_mask = ODP_READ_ALLOWED_BIT; u64 start_idx, page_mask; struct ib_umem_odp *odp; @@ -532,7 +583,15 @@ next_mr: page_mask = ~(BIT(page_shift) - 1); start_idx = (io_virt - (mr->mmkey.iova & page_mask)) >> page_shift; - if (mr->umem->writable) + if (prefetch && !downgrade && !mr->umem->writable) { + /* prefetch with write-access must + * be supported by the MR + */ + ret = -EINVAL; + goto out; + } + + if (mr->umem->writable && !downgrade) access_mask |= ODP_WRITE_ALLOWED_BIT; current_seq = READ_ONCE(odp->notifiers_seq); @@ -606,8 +665,8 @@ out: if (!wait_for_completion_timeout( &odp->notifier_completion, timeout)) { - mlx5_ib_warn(dev, "timeout waiting for mmu
notifier. seq %d against %d\n", - current_seq, odp->notifiers_seq); + mlx5_ib_warn(dev, "timeout waiting for mmu notifier. seq %d against %d. notifiers_count=%d\n", + current_seq, odp->notifiers_seq, odp->notifiers_count); } } else { /* The MR is being killed, kill the QP as well. */ @@ -637,12 +696,13 @@ struct pf_frame { * -EFAULT when there's an error mapping the requested pages. The caller will * abort the page fault handling. */ -static int pagefault_single_data_segment(struct mlx5_ib_dev *dev, - u32 key, u64 io_virt, size_t bcnt, +static int pagefault_single_data_segment(struct mlx5_ib_dev *dev, u32 key, + u64 io_virt, size_t bcnt, u32 *bytes_committed, - u32 *bytes_mapped) + u32 *bytes_mapped, u32 flags) { int npages = 0, srcu_key, ret, i, outlen, cur_outlen = 0, depth = 0; + bool prefetch = flags & MLX5_PF_FLAGS_PREFETCH; struct pf_frame *head = NULL, *frame; struct mlx5_core_mkey *mmkey; struct mlx5_ib_mw *mw; @@ -664,6 +724,12 @@ next_mr: goto srcu_unlock; } + if (prefetch && mmkey->type != MLX5_MKEY_MR) { + mlx5_ib_dbg(dev, "prefetch is allowed only for MR\n"); + ret = -EINVAL; + goto srcu_unlock; + } + switch (mmkey->type) { case MLX5_MKEY_MR: mr = container_of(mmkey, struct mlx5_ib_mr, mmkey); @@ -673,6 +739,11 @@ next_mr: goto srcu_unlock; } + if (prefetch && !mr->umem->is_odp) { + ret = -EINVAL; + goto srcu_unlock; + } + if (!mr->umem->is_odp) { mlx5_ib_dbg(dev, "skipping non ODP MR (lkey=0x%06x) in page fault handler.\n", key); @@ -682,7 +753,7 @@ next_mr: goto srcu_unlock; } - ret = pagefault_mr(dev, mr, io_virt, bcnt, bytes_mapped); + ret = pagefault_mr(dev, mr, io_virt, bcnt, bytes_mapped, flags); if (ret < 0) goto srcu_unlock; @@ -859,7 +930,7 @@ static int pagefault_data_segments(struct mlx5_ib_dev *dev, ret = pagefault_single_data_segment(dev, key, io_virt, bcnt, &pfault->bytes_committed, - bytes_mapped); + bytes_mapped, 0); if (ret < 0) break; npages += ret; @@ -1025,16 +1096,31 @@ invalid_transport_or_opcode: return 0; } -static struct mlx5_ib_qp *mlx5_ib_odp_find_qp(struct mlx5_ib_dev *dev, - u32 wq_num) +static inline struct mlx5_core_rsc_common *odp_get_rsc(struct mlx5_ib_dev *dev, + u32 wq_num, int pf_type) { - struct mlx5_core_qp *mqp = __mlx5_qp_lookup(dev->mdev, wq_num); + enum mlx5_res_type res_type; - if (!mqp) { - mlx5_ib_err(dev, "QPN 0x%6x not found\n", wq_num); + switch (pf_type) { + case MLX5_WQE_PF_TYPE_RMP: + res_type = MLX5_RES_SRQ; + break; + case MLX5_WQE_PF_TYPE_REQ_SEND_OR_WRITE: + case MLX5_WQE_PF_TYPE_RESP: + case MLX5_WQE_PF_TYPE_REQ_READ_OR_ATOMIC: + res_type = MLX5_RES_QP; + break; + default: return NULL; } + return mlx5_core_res_hold(dev->mdev, wq_num, res_type); +} + +static inline struct mlx5_ib_qp *res_to_qp(struct mlx5_core_rsc_common *res) +{ + struct mlx5_core_qp *mqp = (struct mlx5_core_qp *)res; + return to_mibqp(mqp); } @@ -1048,18 +1134,30 @@ static void mlx5_ib_mr_wqe_pfault_handler(struct mlx5_ib_dev *dev, int resume_with_error = 1; u16 wqe_index = pfault->wqe.wqe_index; int requestor = pfault->type & MLX5_PFAULT_REQUESTOR; + struct mlx5_core_rsc_common *res; struct mlx5_ib_qp *qp; + res = odp_get_rsc(dev, pfault->wqe.wq_num, pfault->type); + if (!res) { + mlx5_ib_dbg(dev, "wqe page fault for missing resource %d\n", pfault->wqe.wq_num); + return; + } + + switch (res->res) { + case MLX5_RES_QP: + qp = res_to_qp(res); + break; + default: + mlx5_ib_err(dev, "wqe page fault for unsupported type %d\n", pfault->type); + goto resolve_page_fault; + } + buffer = (char *)__get_free_page(GFP_KERNEL); if (!buffer) { mlx5_ib_err(dev, "Error 
allocating memory for IO page fault handling.\n"); goto resolve_page_fault; } - qp = mlx5_ib_odp_find_qp(dev, pfault->wqe.wq_num); - if (!qp) - goto resolve_page_fault; - ret = mlx5_ib_read_user_wqe(qp, requestor, wqe_index, buffer, PAGE_SIZE, &qp->trans_qp.base); if (ret < 0) { @@ -1099,6 +1197,7 @@ resolve_page_fault: mlx5_ib_dbg(dev, "PAGE FAULT completed. QP 0x%x resume_with_error=%d, type: 0x%x\n", pfault->wqe.wq_num, resume_with_error, pfault->type); + mlx5_core_res_put(res); free_page((unsigned long)buffer); } @@ -1142,7 +1241,8 @@ static void mlx5_ib_mr_rdma_pfault_handler(struct mlx5_ib_dev *dev, } ret = pagefault_single_data_segment(dev, rkey, address, length, - &pfault->bytes_committed, NULL); + &pfault->bytes_committed, NULL, + 0); if (ret == -EAGAIN) { /* We're racing with an invalidation, don't prefetch */ prefetch_activated = 0; @@ -1169,7 +1269,8 @@ static void mlx5_ib_mr_rdma_pfault_handler(struct mlx5_ib_dev *dev, ret = pagefault_single_data_segment(dev, rkey, address, prefetch_len, - &bytes_committed, NULL); + &bytes_committed, NULL, + 0); if (ret < 0 && ret != -EAGAIN) { mlx5_ib_dbg(dev, "Prefetch failed. ret: %d, QP 0x%x, address: 0x%.16llx, length = 0x%.16x\n", ret, pfault->token, address, prefetch_len); @@ -1177,10 +1278,8 @@ static void mlx5_ib_mr_rdma_pfault_handler(struct mlx5_ib_dev *dev, } } -void mlx5_ib_pfault(struct mlx5_core_dev *mdev, void *context, - struct mlx5_pagefault *pfault) +static void mlx5_ib_pfault(struct mlx5_ib_dev *dev, struct mlx5_pagefault *pfault) { - struct mlx5_ib_dev *dev = context; u8 event_subtype = pfault->event_subtype; switch (event_subtype) { @@ -1197,6 +1296,203 @@ void mlx5_ib_pfault(struct mlx5_core_dev *mdev, void *context, } } +static void mlx5_ib_eqe_pf_action(struct work_struct *work) +{ + struct mlx5_pagefault *pfault = container_of(work, + struct mlx5_pagefault, + work); + struct mlx5_ib_pf_eq *eq = pfault->eq; + + mlx5_ib_pfault(eq->dev, pfault); + mempool_free(pfault, eq->pool); +} + +static void mlx5_ib_eq_pf_process(struct mlx5_ib_pf_eq *eq) +{ + struct mlx5_eqe_page_fault *pf_eqe; + struct mlx5_pagefault *pfault; + struct mlx5_eqe *eqe; + int cc = 0; + + while ((eqe = mlx5_eq_get_eqe(eq->core, cc))) { + pfault = mempool_alloc(eq->pool, GFP_ATOMIC); + if (!pfault) { + schedule_work(&eq->work); + break; + } + + pf_eqe = &eqe->data.page_fault; + pfault->event_subtype = eqe->sub_type; + pfault->bytes_committed = be32_to_cpu(pf_eqe->bytes_committed); + + mlx5_ib_dbg(eq->dev, + "PAGE_FAULT: subtype: 0x%02x, bytes_committed: 0x%06x\n", + eqe->sub_type, pfault->bytes_committed); + + switch (eqe->sub_type) { + case MLX5_PFAULT_SUBTYPE_RDMA: + /* RDMA based event */ + pfault->type = + be32_to_cpu(pf_eqe->rdma.pftype_token) >> 24; + pfault->token = + be32_to_cpu(pf_eqe->rdma.pftype_token) & + MLX5_24BIT_MASK; + pfault->rdma.r_key = + be32_to_cpu(pf_eqe->rdma.r_key); + pfault->rdma.packet_size = + be16_to_cpu(pf_eqe->rdma.packet_length); + pfault->rdma.rdma_op_len = + be32_to_cpu(pf_eqe->rdma.rdma_op_len); + pfault->rdma.rdma_va = + be64_to_cpu(pf_eqe->rdma.rdma_va); + mlx5_ib_dbg(eq->dev, + "PAGE_FAULT: type:0x%x, token: 0x%06x, r_key: 0x%08x\n", + pfault->type, pfault->token, + pfault->rdma.r_key); + mlx5_ib_dbg(eq->dev, + "PAGE_FAULT: rdma_op_len: 0x%08x, rdma_va: 0x%016llx\n", + pfault->rdma.rdma_op_len, + pfault->rdma.rdma_va); + break; + + case MLX5_PFAULT_SUBTYPE_WQE: + /* WQE based event */ + pfault->type = + (be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7; + pfault->token = + be32_to_cpu(pf_eqe->wqe.token); + 
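Both fault subtypes unpack multiplexed big-endian words here: the top byte of pftype_token/pftype_wq carries the fault type, and the low 24 bits carry the token or WQ number, masked with MLX5_24BIT_MASK. A self-contained sketch of that unpacking (the value 0x05abcdef is made up, and 0xffffff stands in for MLX5_24BIT_MASK, assumed to be (1 << 24) - 1):

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>	/* ntohl(); the kernel uses be32_to_cpu() */

int main(void)
{
	uint32_t pftype_token = htonl(0x05abcdef);	/* wire-format sample */
	uint32_t host = ntohl(pftype_token);
	uint8_t type = host >> 24;		/* top 8 bits */
	uint32_t token = host & 0xffffff;	/* low 24 bits */

	printf("type 0x%02x token 0x%06x\n", type, token);	/* 0x05, 0xabcdef */
	return 0;
}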
pfault->wqe.wq_num = + be32_to_cpu(pf_eqe->wqe.pftype_wq) & + MLX5_24BIT_MASK; + pfault->wqe.wqe_index = + be16_to_cpu(pf_eqe->wqe.wqe_index); + pfault->wqe.packet_size = + be16_to_cpu(pf_eqe->wqe.packet_length); + mlx5_ib_dbg(eq->dev, + "PAGE_FAULT: type:0x%x, token: 0x%06x, wq_num: 0x%06x, wqe_index: 0x%04x\n", + pfault->type, pfault->token, + pfault->wqe.wq_num, + pfault->wqe.wqe_index); + break; + + default: + mlx5_ib_warn(eq->dev, + "Unsupported page fault event sub-type: 0x%02hhx\n", + eqe->sub_type); + /* Unsupported page faults should still be + * resolved by the page fault handler + */ + } + + pfault->eq = eq; + INIT_WORK(&pfault->work, mlx5_ib_eqe_pf_action); + queue_work(eq->wq, &pfault->work); + + cc = mlx5_eq_update_cc(eq->core, ++cc); + } + + mlx5_eq_update_ci(eq->core, cc, 1); +} + +static irqreturn_t mlx5_ib_eq_pf_int(int irq, void *eq_ptr) +{ + struct mlx5_ib_pf_eq *eq = eq_ptr; + unsigned long flags; + + if (spin_trylock_irqsave(&eq->lock, flags)) { + mlx5_ib_eq_pf_process(eq); + spin_unlock_irqrestore(&eq->lock, flags); + } else { + schedule_work(&eq->work); + } + + return IRQ_HANDLED; +} + +/* mempool_refill() was proposed but unfortunately wasn't accepted + * http://lkml.iu.edu/hypermail/linux/kernel/1512.1/05073.html + * Cheap workaround. + */ +static void mempool_refill(mempool_t *pool) +{ + while (pool->curr_nr < pool->min_nr) + mempool_free(mempool_alloc(pool, GFP_KERNEL), pool); +} + +static void mlx5_ib_eq_pf_action(struct work_struct *work) +{ + struct mlx5_ib_pf_eq *eq = + container_of(work, struct mlx5_ib_pf_eq, work); + + mempool_refill(eq->pool); + + spin_lock_irq(&eq->lock); + mlx5_ib_eq_pf_process(eq); + spin_unlock_irq(&eq->lock); +} + +enum { + MLX5_IB_NUM_PF_EQE = 0x1000, + MLX5_IB_NUM_PF_DRAIN = 64, +}; + +static int +mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq) +{ + struct mlx5_eq_param param = {}; + int err; + + INIT_WORK(&eq->work, mlx5_ib_eq_pf_action); + spin_lock_init(&eq->lock); + eq->dev = dev; + + eq->pool = mempool_create_kmalloc_pool(MLX5_IB_NUM_PF_DRAIN, + sizeof(struct mlx5_pagefault)); + if (!eq->pool) + return -ENOMEM; + + eq->wq = alloc_workqueue("mlx5_ib_page_fault", + WQ_HIGHPRI | WQ_UNBOUND | WQ_MEM_RECLAIM, + MLX5_NUM_CMD_EQE); + if (!eq->wq) { + err = -ENOMEM; + goto err_mempool; + } + + param = (struct mlx5_eq_param) { + .index = MLX5_EQ_PFAULT_IDX, + .mask = 1 << MLX5_EVENT_TYPE_PAGE_FAULT, + .nent = MLX5_IB_NUM_PF_EQE, + .context = eq, + .handler = mlx5_ib_eq_pf_int + }; + eq->core = mlx5_eq_create_generic(dev->mdev, "mlx5_ib_page_fault_eq", ¶m); + if (IS_ERR(eq->core)) { + err = PTR_ERR(eq->core); + goto err_wq; + } + + return 0; +err_wq: + destroy_workqueue(eq->wq); +err_mempool: + mempool_destroy(eq->pool); + return err; +} + +static int +mlx5_ib_destroy_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq) +{ + int err; + + err = mlx5_eq_destroy_generic(dev->mdev, eq->core); + cancel_work_sync(&eq->work); + destroy_workqueue(eq->wq); + mempool_destroy(eq->pool); + + return err; +} + void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent) { if (!(ent->dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT)) @@ -1223,9 +1519,16 @@ void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent) } } +static const struct ib_device_ops mlx5_ib_dev_odp_ops = { + .advise_mr = mlx5_ib_advise_mr, +}; + int mlx5_ib_odp_init_one(struct mlx5_ib_dev *dev) { - int ret; + int ret = 0; + + if (dev->odp_caps.general_caps & IB_ODP_SUPPORT) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_odp_ops); if 
(dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT) { ret = mlx5_cmd_null_mkey(dev->mdev, &dev->null_mkey); @@ -1235,7 +1538,20 @@ int mlx5_ib_odp_init_one(struct mlx5_ib_dev *dev) } } - return 0; + if (!MLX5_CAP_GEN(dev->mdev, pg)) + return ret; + + ret = mlx5_ib_create_pf_eq(dev, &dev->odp_pf_eq); + + return ret; +} + +void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *dev) +{ + if (!MLX5_CAP_GEN(dev->mdev, pg)) + return; + + mlx5_ib_destroy_pf_eq(dev, &dev->odp_pf_eq); } int mlx5_ib_odp_init(void) @@ -1246,3 +1562,75 @@ int mlx5_ib_odp_init(void) return 0; } +struct prefetch_mr_work { + struct work_struct work; + struct mlx5_ib_dev *dev; + u32 pf_flags; + u32 num_sge; + struct ib_sge sg_list[0]; +}; + +static int mlx5_ib_prefetch_sg_list(struct mlx5_ib_dev *dev, u32 pf_flags, + struct ib_sge *sg_list, u32 num_sge) +{ + int i; + + for (i = 0; i < num_sge; ++i) { + struct ib_sge *sg = &sg_list[i]; + int bytes_committed = 0; + int ret; + + ret = pagefault_single_data_segment(dev, sg->lkey, sg->addr, + sg->length, + &bytes_committed, NULL, + pf_flags); + if (ret < 0) + return ret; + } + return 0; +} + +static void mlx5_ib_prefetch_mr_work(struct work_struct *work) +{ + struct prefetch_mr_work *w = + container_of(work, struct prefetch_mr_work, work); + + if (w->dev->ib_dev.reg_state == IB_DEV_REGISTERED) + mlx5_ib_prefetch_sg_list(w->dev, w->pf_flags, w->sg_list, + w->num_sge); + + kfree(w); +} + +int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, + enum ib_uverbs_advise_mr_advice advice, + u32 flags, struct ib_sge *sg_list, u32 num_sge) +{ + struct mlx5_ib_dev *dev = to_mdev(pd->device); + u32 pf_flags = MLX5_PF_FLAGS_PREFETCH; + struct prefetch_mr_work *work; + + if (advice == IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH) + pf_flags |= MLX5_PF_FLAGS_DOWNGRADE; + + if (flags & IB_UVERBS_ADVISE_MR_FLAG_FLUSH) + return mlx5_ib_prefetch_sg_list(dev, pf_flags, sg_list, + num_sge); + + if (dev->ib_dev.reg_state != IB_DEV_REGISTERED) + return -ENODEV; + + work = kvzalloc(struct_size(work, sg_list, num_sge), GFP_KERNEL); + if (!work) + return -ENOMEM; + + memcpy(work->sg_list, sg_list, num_sge * sizeof(struct ib_sge)); + + work->dev = dev; + work->pf_flags = pf_flags; + work->num_sge = num_sge; + + INIT_WORK(&work->work, mlx5_ib_prefetch_mr_work); + schedule_work(&work->work); + return 0; +} diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 3747cc681b18..9c94c1b9ec35 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -108,21 +108,6 @@ static int is_sqp(enum ib_qp_type qp_type) return is_qp0(qp_type) || is_qp1(qp_type); } -static void *get_wqe(struct mlx5_ib_qp *qp, int offset) -{ - return mlx5_buf_offset(&qp->buf, offset); -} - -static void *get_recv_wqe(struct mlx5_ib_qp *qp, int n) -{ - return get_wqe(qp, qp->rq.offset + (n << qp->rq.wqe_shift)); -} - -void *mlx5_get_send_wqe(struct mlx5_ib_qp *qp, int n) -{ - return get_wqe(qp, qp->sq.offset + (n << MLX5_IB_SQ_STRIDE)); -} - /** * mlx5_ib_read_user_wqe() - Copy a user-space WQE to kernel space. * @@ -790,6 +775,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd, __be64 *pas; void *qpc; int err; + u16 uid; err = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)); if (err) { @@ -851,7 +837,8 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd, goto err_umem; } - MLX5_SET(create_qp_in, *in, uid, to_mpd(pd)->uid); + uid = (attr->qp_type != IB_QPT_XRC_TGT) ? 
to_mpd(pd)->uid : 0; + MLX5_SET(create_qp_in, *in, uid, uid); pas = (__be64 *)MLX5_ADDR_OF(create_qp_in, *in, pas); if (ubuffer->umem) mlx5_ib_populate_pas(dev, ubuffer->umem, page_shift, pas, 0); @@ -917,6 +904,30 @@ static void destroy_qp_user(struct mlx5_ib_dev *dev, struct ib_pd *pd, mlx5_ib_free_bfreg(dev, &context->bfregi, qp->bfregn); } +/* get_sq_edge - Get the next nearby edge. + * + * An 'edge' is defined as the first following address after the end + * of the fragment or the SQ. Accordingly, during the WQE construction + * which repetitively increases the pointer to write the next data, it + * simply should check if it gets to an edge. + * + * @sq - SQ buffer. + * @idx - Stride index in the SQ buffer. + * + * Return: + * The new edge. + */ +static void *get_sq_edge(struct mlx5_ib_wq *sq, u32 idx) +{ + void *fragment_end; + + fragment_end = mlx5_frag_buf_get_wqe + (&sq->fbc, + mlx5_frag_buf_get_idx_last_contig_stride(&sq->fbc, idx)); + + return fragment_end + MLX5_SEND_WQE_BB; +} + static int create_kernel_qp(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *init_attr, struct mlx5_ib_qp *qp, @@ -955,13 +966,29 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev, qp->sq.offset = qp->rq.wqe_cnt << qp->rq.wqe_shift; base->ubuffer.buf_size = err + (qp->rq.wqe_cnt << qp->rq.wqe_shift); - err = mlx5_buf_alloc(dev->mdev, base->ubuffer.buf_size, &qp->buf); + err = mlx5_frag_buf_alloc_node(dev->mdev, base->ubuffer.buf_size, + &qp->buf, dev->mdev->priv.numa_node); if (err) { mlx5_ib_dbg(dev, "err %d\n", err); return err; } - qp->sq.qend = mlx5_get_send_wqe(qp, qp->sq.wqe_cnt); + if (qp->rq.wqe_cnt) + mlx5_init_fbc(qp->buf.frags, qp->rq.wqe_shift, + ilog2(qp->rq.wqe_cnt), &qp->rq.fbc); + + if (qp->sq.wqe_cnt) { + int sq_strides_offset = (qp->sq.offset & (PAGE_SIZE - 1)) / + MLX5_SEND_WQE_BB; + mlx5_init_fbc_offset(qp->buf.frags + + (qp->sq.offset / PAGE_SIZE), + ilog2(MLX5_SEND_WQE_BB), + ilog2(qp->sq.wqe_cnt), + sq_strides_offset, &qp->sq.fbc); + + qp->sq.cur_edge = get_sq_edge(&qp->sq, 0); + } + *inlen = MLX5_ST_SZ_BYTES(create_qp_in) + MLX5_FLD_SZ_BYTES(create_qp_in, pas[0]) * qp->buf.npages; *in = kvzalloc(*inlen, GFP_KERNEL); @@ -983,8 +1010,9 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev, qp->flags |= MLX5_IB_QP_SQPN_QP1; } - mlx5_fill_page_array(&qp->buf, - (__be64 *)MLX5_ADDR_OF(create_qp_in, *in, pas)); + mlx5_fill_page_frag_array(&qp->buf, + (__be64 *)MLX5_ADDR_OF(create_qp_in, + *in, pas)); err = mlx5_db_alloc(dev->mdev, &qp->db); if (err) { @@ -1024,7 +1052,7 @@ err_free: kvfree(*in); err_buf: - mlx5_buf_free(dev->mdev, &qp->buf); + mlx5_frag_buf_free(dev->mdev, &qp->buf); return err; } @@ -1036,7 +1064,7 @@ static void destroy_qp_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp) kvfree(qp->sq.wr_data); kvfree(qp->rq.wrid); mlx5_db_free(dev->mdev, &qp->db); - mlx5_buf_free(dev->mdev, &qp->buf); + mlx5_frag_buf_free(dev->mdev, &qp->buf); } static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr) @@ -1876,7 +1904,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd, qp->flags |= MLX5_IB_QP_CVLAN_STRIPPING; } - if (pd && pd->uobject) { + if (udata) { if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) { mlx5_ib_dbg(dev, "copy failed\n"); return -EFAULT; @@ -1889,7 +1917,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd, MLX5_QP_FLAG_BFREG_INDEX | MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI | - MLX5_QP_FLAG_ALLOW_SCATTER_CQE)) + MLX5_QP_FLAG_ALLOW_SCATTER_CQE | + 
MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE)) return -EINVAL; err = get_qp_user_index(to_mucontext(pd->uobject->context), @@ -1925,6 +1954,15 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd, qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC; } + if (ucmd.flags & MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE) { + if (init_attr->qp_type != IB_QPT_RC || + !MLX5_CAP_GEN(dev->mdev, qp_packet_based)) { + mlx5_ib_dbg(dev, "packet based credit mode isn't supported\n"); + return -EOPNOTSUPP; + } + qp->flags |= MLX5_IB_QP_PACKET_BASED_CREDIT; + } + if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) { if (init_attr->qp_type != IB_QPT_UD || (MLX5_CAP_GEN(dev->mdev, port_type) != @@ -1948,14 +1986,14 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd, qp->has_rq = qp_has_rq(init_attr); err = set_rq_size(dev, &init_attr->cap, qp->has_rq, - qp, (pd && pd->uobject) ? &ucmd : NULL); + qp, udata ? &ucmd : NULL); if (err) { mlx5_ib_dbg(dev, "err %d\n", err); return err; } if (pd) { - if (pd->uobject) { + if (udata) { __u32 max_wqes = 1 << MLX5_CAP_GEN(mdev, log_max_qp_sz); mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count); @@ -2021,11 +2059,12 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd, MLX5_SET(qpc, qpc, cd_slave_send, 1); if (qp->flags & MLX5_IB_QP_MANAGED_RECV) MLX5_SET(qpc, qpc, cd_slave_receive, 1); - + if (qp->flags & MLX5_IB_QP_PACKET_BASED_CREDIT) + MLX5_SET(qpc, qpc, req_e2e_credit_mode, 1); if (qp->scat_cqe && is_connected(init_attr->qp_type)) { configure_responder_scat_cqe(init_attr, qpc); configure_requester_scat_cqe(dev, init_attr, - (pd && pd->uobject) ? &ucmd : NULL, + udata ? &ucmd : NULL, qpc); } @@ -2465,7 +2504,7 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, dev = to_mdev(pd->device); if (init_attr->qp_type == IB_QPT_RAW_PACKET) { - if (!pd->uobject) { + if (!udata) { mlx5_ib_dbg(dev, "Raw Packet QP is not supported for kernel consumers\n"); return ERR_PTR(-EINVAL); } else if (!to_mucontext(pd->uobject->context)->cqe_version) { @@ -2663,7 +2702,7 @@ static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate) if (rate == IB_RATE_PORT_CURRENT) return 0; - if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) + if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_600_GBPS) return -EINVAL; while (rate != IB_RATE_PORT_CURRENT && @@ -3258,7 +3297,7 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp, (ibqp->qp_type == IB_QPT_RAW_PACKET) || (ibqp->qp_type == IB_QPT_XRC_INI) || (ibqp->qp_type == IB_QPT_XRC_TGT)) { - if (mlx5_lag_is_active(dev->mdev)) { + if (dev->lag_active) { u8 p = mlx5_core_native_port_num(dev->mdev); tx_affinity = get_tx_affinity(dev, pd, base, p); context->flags |= cpu_to_be32(tx_affinity << 24); @@ -3475,7 +3514,8 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp, qp->sq.head = 0; qp->sq.tail = 0; qp->sq.cur_post = 0; - qp->sq.last_poll = 0; + if (qp->sq.wqe_cnt) + qp->sq.cur_edge = get_sq_edge(&qp->sq, 0); qp->db.db[MLX5_RCV_DBR] = 0; qp->db.db[MLX5_SND_DBR] = 0; } @@ -3515,7 +3555,7 @@ static bool modify_dci_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state new return is_valid_mask(attr_mask, req, opt); } else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) { req |= IB_QP_PATH_MTU; - opt = IB_QP_PKEY_INDEX; + opt = IB_QP_PKEY_INDEX | IB_QP_AV; return is_valid_mask(attr_mask, req, opt); } else if (cur_state == IB_QPS_RTR && new_state == IB_QPS_RTS) { req |= IB_QP_TIMEOUT | IB_QP_RETRY_CNT | IB_QP_RNR_RETRY | @@ -3749,6 +3789,62 @@ out: return err; } +static void 
_handle_post_send_edge(struct mlx5_ib_wq *sq, void **seg, + u32 wqe_sz, void **cur_edge) +{ + u32 idx; + + idx = (sq->cur_post + (wqe_sz >> 2)) & (sq->wqe_cnt - 1); + *cur_edge = get_sq_edge(sq, idx); + + *seg = mlx5_frag_buf_get_wqe(&sq->fbc, idx); +} + +/* handle_post_send_edge - Check if we get to SQ edge. If yes, update to the + * next nearby edge and get new address translation for current WQE position. + * @sq - SQ buffer. + * @seg: Current WQE position (16B aligned). + * @wqe_sz: Total current WQE size [16B]. + * @cur_edge: Updated current edge. + */ +static inline void handle_post_send_edge(struct mlx5_ib_wq *sq, void **seg, + u32 wqe_sz, void **cur_edge) +{ + if (likely(*seg != *cur_edge)) + return; + + _handle_post_send_edge(sq, seg, wqe_sz, cur_edge); +} + +/* memcpy_send_wqe - copy data from src to WQE and update the relevant WQ's + * pointers. At the end @seg is aligned to 16B regardless the copied size. + * @sq - SQ buffer. + * @cur_edge: Updated current edge. + * @seg: Current WQE position (16B aligned). + * @wqe_sz: Total current WQE size [16B]. + * @src: Pointer to copy from. + * @n: Number of bytes to copy. + */ +static inline void memcpy_send_wqe(struct mlx5_ib_wq *sq, void **cur_edge, + void **seg, u32 *wqe_sz, const void *src, + size_t n) +{ + while (likely(n)) { + size_t leftlen = *cur_edge - *seg; + size_t copysz = min_t(size_t, leftlen, n); + size_t stride; + + memcpy(*seg, src, copysz); + + n -= copysz; + src += copysz; + stride = !n ? ALIGN(copysz, 16) : copysz; + *seg += stride; + *wqe_sz += stride >> 4; + handle_post_send_edge(sq, seg, *wqe_sz, cur_edge); + } +} + static int mlx5_wq_overflow(struct mlx5_ib_wq *wq, int nreq, struct ib_cq *ib_cq) { struct mlx5_ib_cq *cq; @@ -3774,11 +3870,10 @@ static __always_inline void set_raddr_seg(struct mlx5_wqe_raddr_seg *rseg, rseg->reserved = 0; } -static void *set_eth_seg(struct mlx5_wqe_eth_seg *eseg, - const struct ib_send_wr *wr, void *qend, - struct mlx5_ib_qp *qp, int *size) +static void set_eth_seg(const struct ib_send_wr *wr, struct mlx5_ib_qp *qp, + void **seg, int *size, void **cur_edge) { - void *seg = eseg; + struct mlx5_wqe_eth_seg *eseg = *seg; memset(eseg, 0, sizeof(struct mlx5_wqe_eth_seg)); @@ -3786,45 +3881,41 @@ static void *set_eth_seg(struct mlx5_wqe_eth_seg *eseg, eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM; - seg += sizeof(struct mlx5_wqe_eth_seg); - *size += sizeof(struct mlx5_wqe_eth_seg) / 16; - if (wr->opcode == IB_WR_LSO) { struct ib_ud_wr *ud_wr = container_of(wr, struct ib_ud_wr, wr); - int size_of_inl_hdr_start = sizeof(eseg->inline_hdr.start); - u64 left, leftlen, copysz; + size_t left, copysz; void *pdata = ud_wr->header; + size_t stride; left = ud_wr->hlen; eseg->mss = cpu_to_be16(ud_wr->mss); eseg->inline_hdr.sz = cpu_to_be16(left); - /* - * check if there is space till the end of queue, if yes, - * copy all in one shot, otherwise copy till the end of queue, - * rollback and than the copy the left + /* memcpy_send_wqe should get a 16B align address. Hence, we + * first copy up to the current edge and then, if needed, + * fall-through to memcpy_send_wqe. 
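Put differently: no single memcpy() ever crosses a fragment edge; the loop copies up to the cached edge, hops to the next fragment, and repeats. A simplified standalone version of that loop, with a fixed FRAG_SZ and without the ring wrap-around or 16-byte tail alignment that memcpy_send_wqe does handle:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FRAG_SZ 64	/* stand-in fragment size */

/* Copy n bytes into a buffer made of FRAG_SZ-byte fragments; *idx and
 * *off track the write position across fragment boundaries.
 */
static void frag_copy(uint8_t **frags, unsigned int *idx, unsigned int *off,
		      const void *src, size_t n)
{
	const uint8_t *s = src;

	while (n) {
		size_t left = FRAG_SZ - *off;	/* room before the edge */
		size_t chunk = n < left ? n : left;

		memcpy(frags[*idx] + *off, s, chunk);
		s += chunk;
		n -= chunk;
		*off += chunk;
		if (*off == FRAG_SZ) {	/* reached the edge: next fragment */
			*off = 0;
			(*idx)++;
		}
	}
}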
*/ - leftlen = qend - (void *)eseg->inline_hdr.start; - copysz = min_t(u64, leftlen, left); - - memcpy(seg - size_of_inl_hdr_start, pdata, copysz); - - if (likely(copysz > size_of_inl_hdr_start)) { - seg += ALIGN(copysz - size_of_inl_hdr_start, 16); - *size += ALIGN(copysz - size_of_inl_hdr_start, 16) / 16; - } - - if (unlikely(copysz < left)) { /* the last wqe in the queue */ - seg = mlx5_get_send_wqe(qp, 0); + copysz = min_t(u64, *cur_edge - (void *)eseg->inline_hdr.start, + left); + memcpy(eseg->inline_hdr.start, pdata, copysz); + stride = ALIGN(sizeof(struct mlx5_wqe_eth_seg) - + sizeof(eseg->inline_hdr.start) + copysz, 16); + *size += stride / 16; + *seg += stride; + + if (copysz < left) { + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); left -= copysz; pdata += copysz; - memcpy(seg, pdata, left); - seg += ALIGN(left, 16); - *size += ALIGN(left, 16) / 16; + memcpy_send_wqe(&qp->sq, cur_edge, seg, size, pdata, + left); } + + return; } - return seg; + *seg += sizeof(struct mlx5_wqe_eth_seg); + *size += sizeof(struct mlx5_wqe_eth_seg) / 16; } static void set_datagram_seg(struct mlx5_wqe_datagram_seg *dseg, @@ -4083,24 +4174,6 @@ static void set_reg_data_seg(struct mlx5_wqe_data_seg *dseg, dseg->lkey = cpu_to_be32(pd->ibpd.local_dma_lkey); } -static void set_reg_umr_inline_seg(void *seg, struct mlx5_ib_qp *qp, - struct mlx5_ib_mr *mr, int mr_list_size) -{ - void *qend = qp->sq.qend; - void *addr = mr->descs; - int copy; - - if (unlikely(seg + mr_list_size > qend)) { - copy = qend - seg; - memcpy(seg, addr, copy); - addr += copy; - mr_list_size -= copy; - seg = mlx5_get_send_wqe(qp, 0); - } - memcpy(seg, addr, mr_list_size); - seg += mr_list_size; -} - static __be32 send_ieth(const struct ib_send_wr *wr) { switch (wr->opcode) { @@ -4134,40 +4207,48 @@ static u8 wq_sig(void *wqe) } static int set_data_inl_seg(struct mlx5_ib_qp *qp, const struct ib_send_wr *wr, - void *wqe, int *sz) + void **wqe, int *wqe_sz, void **cur_edge) { struct mlx5_wqe_inline_seg *seg; - void *qend = qp->sq.qend; - void *addr; + size_t offset; int inl = 0; - int copy; - int len; int i; - seg = wqe; - wqe += sizeof(*seg); + seg = *wqe; + *wqe += sizeof(*seg); + offset = sizeof(*seg); + for (i = 0; i < wr->num_sge; i++) { - addr = (void *)(unsigned long)(wr->sg_list[i].addr); - len = wr->sg_list[i].length; + size_t len = wr->sg_list[i].length; + void *addr = (void *)(unsigned long)(wr->sg_list[i].addr); + inl += len; if (unlikely(inl > qp->max_inline_data)) return -ENOMEM; - if (unlikely(wqe + len > qend)) { - copy = qend - wqe; - memcpy(wqe, addr, copy); - addr += copy; - len -= copy; - wqe = mlx5_get_send_wqe(qp, 0); + while (likely(len)) { + size_t leftlen; + size_t copysz; + + handle_post_send_edge(&qp->sq, wqe, + *wqe_sz + (offset >> 4), + cur_edge); + + leftlen = *cur_edge - *wqe; + copysz = min_t(size_t, leftlen, len); + + memcpy(*wqe, addr, copysz); + len -= copysz; + addr += copysz; + *wqe += copysz; + offset += copysz; } - memcpy(wqe, addr, len); - wqe += len; } seg->byte_count = cpu_to_be32(inl | MLX5_INLINE_SEG); - *sz = ALIGN(inl + sizeof(seg->byte_count), 16) / 16; + *wqe_sz += ALIGN(inl + sizeof(seg->byte_count), 16) / 16; return 0; } @@ -4280,7 +4361,8 @@ static int mlx5_set_bsf(struct ib_mr *sig_mr, } static int set_sig_data_segment(const struct ib_sig_handover_wr *wr, - struct mlx5_ib_qp *qp, void **seg, int *size) + struct mlx5_ib_qp *qp, void **seg, + int *size, void **cur_edge) { struct ib_sig_attrs *sig_attrs = wr->sig_attrs; struct ib_mr *sig_mr = wr->sig_mr; @@ -4364,8 +4446,7 @@ static 
int set_sig_data_segment(const struct ib_sig_handover_wr *wr, *seg += wqe_size; *size += wqe_size / 16; - if (unlikely((*seg == qp->sq.qend))) - *seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); bsf = *seg; ret = mlx5_set_bsf(sig_mr, sig_attrs, bsf, data_len); @@ -4374,8 +4455,7 @@ static int set_sig_data_segment(const struct ib_sig_handover_wr *wr, *seg += sizeof(*bsf); *size += sizeof(*bsf) / 16; - if (unlikely((*seg == qp->sq.qend))) - *seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); return 0; } @@ -4413,7 +4493,8 @@ static void set_sig_umr_segment(struct mlx5_wqe_umr_ctrl_seg *umr, static int set_sig_umr_wr(const struct ib_send_wr *send_wr, - struct mlx5_ib_qp *qp, void **seg, int *size) + struct mlx5_ib_qp *qp, void **seg, int *size, + void **cur_edge) { const struct ib_sig_handover_wr *wr = sig_handover_wr(send_wr); struct mlx5_ib_mr *sig_mr = to_mmr(wr->sig_mr); @@ -4445,16 +4526,14 @@ static int set_sig_umr_wr(const struct ib_send_wr *send_wr, set_sig_umr_segment(*seg, xlt_size); *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; - if (unlikely((*seg == qp->sq.qend))) - *seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); set_sig_mkey_segment(*seg, wr, xlt_size, region_len, pdn); *seg += sizeof(struct mlx5_mkey_seg); *size += sizeof(struct mlx5_mkey_seg) / 16; - if (unlikely((*seg == qp->sq.qend))) - *seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); - ret = set_sig_data_segment(wr, qp, seg, size); + ret = set_sig_data_segment(wr, qp, seg, size, cur_edge); if (ret) return ret; @@ -4491,11 +4570,11 @@ static int set_psv_wr(struct ib_sig_domain *domain, static int set_reg_wr(struct mlx5_ib_qp *qp, const struct ib_reg_wr *wr, - void **seg, int *size) + void **seg, int *size, void **cur_edge) { struct mlx5_ib_mr *mr = to_mmr(wr->mr); struct mlx5_ib_pd *pd = to_mpd(qp->ibqp.pd); - int mr_list_size = mr->ndescs * mr->desc_size; + size_t mr_list_size = mr->ndescs * mr->desc_size; bool umr_inline = mr_list_size <= MLX5_IB_SQ_UMR_INLINE_THRESHOLD; if (unlikely(wr->wr.send_flags & IB_SEND_INLINE)) { @@ -4507,18 +4586,17 @@ static int set_reg_wr(struct mlx5_ib_qp *qp, set_reg_umr_seg(*seg, mr, umr_inline); *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; - if (unlikely((*seg == qp->sq.qend))) - *seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); set_reg_mkey_seg(*seg, mr, wr->key, wr->access); *seg += sizeof(struct mlx5_mkey_seg); *size += sizeof(struct mlx5_mkey_seg) / 16; - if (unlikely((*seg == qp->sq.qend))) - *seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); if (umr_inline) { - set_reg_umr_inline_seg(*seg, qp, mr, mr_list_size); - *size += get_xlt_octo(mr_list_size); + memcpy_send_wqe(&qp->sq, cur_edge, seg, size, mr->descs, + mr_list_size); + *size = ALIGN(*size, MLX5_SEND_WQE_BB >> 4); } else { set_reg_data_seg(*seg, mr, pd); *seg += sizeof(struct mlx5_wqe_data_seg); @@ -4527,32 +4605,31 @@ static int set_reg_wr(struct mlx5_ib_qp *qp, return 0; } -static void set_linv_wr(struct mlx5_ib_qp *qp, void **seg, int *size) +static void set_linv_wr(struct mlx5_ib_qp *qp, void **seg, int *size, + void **cur_edge) { set_linv_umr_seg(*seg); *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; - if (unlikely((*seg == qp->sq.qend))) - 
*seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); set_linv_mkey_seg(*seg); *seg += sizeof(struct mlx5_mkey_seg); *size += sizeof(struct mlx5_mkey_seg) / 16; - if (unlikely((*seg == qp->sq.qend))) - *seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); } -static void dump_wqe(struct mlx5_ib_qp *qp, int idx, int size_16) +static void dump_wqe(struct mlx5_ib_qp *qp, u32 idx, int size_16) { __be32 *p = NULL; - int tidx = idx; + u32 tidx = idx; int i, j; - pr_debug("dump wqe at %p\n", mlx5_get_send_wqe(qp, tidx)); + pr_debug("dump WQE index %u:\n", idx); for (i = 0, j = 0; i < size_16 * 4; i += 4, j += 4) { if ((i & 0xf) == 0) { - void *buf = mlx5_get_send_wqe(qp, tidx); tidx = (tidx + 1) & (qp->sq.wqe_cnt - 1); - p = buf; + p = mlx5_frag_buf_get_wqe(&qp->sq.fbc, tidx); + pr_debug("WQBB at %p:\n", (void *)p); j = 0; } pr_debug("%08x %08x %08x %08x\n", be32_to_cpu(p[j]), @@ -4562,15 +4639,16 @@ static void dump_wqe(struct mlx5_ib_qp *qp, int idx, int size_16) } static int __begin_wqe(struct mlx5_ib_qp *qp, void **seg, - struct mlx5_wqe_ctrl_seg **ctrl, - const struct ib_send_wr *wr, unsigned *idx, - int *size, int nreq, bool send_signaled, bool solicited) + struct mlx5_wqe_ctrl_seg **ctrl, + const struct ib_send_wr *wr, unsigned int *idx, + int *size, void **cur_edge, int nreq, + bool send_signaled, bool solicited) { if (unlikely(mlx5_wq_overflow(&qp->sq, nreq, qp->ibqp.send_cq))) return -ENOMEM; *idx = qp->sq.cur_post & (qp->sq.wqe_cnt - 1); - *seg = mlx5_get_send_wqe(qp, *idx); + *seg = mlx5_frag_buf_get_wqe(&qp->sq.fbc, *idx); *ctrl = *seg; *(uint32_t *)(*seg + 8) = 0; (*ctrl)->imm = send_ieth(wr); @@ -4580,6 +4658,7 @@ static int __begin_wqe(struct mlx5_ib_qp *qp, void **seg, *seg += sizeof(**ctrl); *size = sizeof(**ctrl) / 16; + *cur_edge = qp->sq.cur_edge; return 0; } @@ -4587,17 +4666,18 @@ static int __begin_wqe(struct mlx5_ib_qp *qp, void **seg, static int begin_wqe(struct mlx5_ib_qp *qp, void **seg, struct mlx5_wqe_ctrl_seg **ctrl, const struct ib_send_wr *wr, unsigned *idx, - int *size, int nreq) + int *size, void **cur_edge, int nreq) { - return __begin_wqe(qp, seg, ctrl, wr, idx, size, nreq, + return __begin_wqe(qp, seg, ctrl, wr, idx, size, cur_edge, nreq, wr->send_flags & IB_SEND_SIGNALED, wr->send_flags & IB_SEND_SOLICITED); } static void finish_wqe(struct mlx5_ib_qp *qp, struct mlx5_wqe_ctrl_seg *ctrl, - u8 size, unsigned idx, u64 wr_id, - int nreq, u8 fence, u32 mlx5_opcode) + void *seg, u8 size, void *cur_edge, + unsigned int idx, u64 wr_id, int nreq, u8 fence, + u32 mlx5_opcode) { u8 opmod = 0; @@ -4613,6 +4693,15 @@ static void finish_wqe(struct mlx5_ib_qp *qp, qp->sq.wqe_head[idx] = qp->sq.head + nreq; qp->sq.cur_post += DIV_ROUND_UP(size * 16, MLX5_SEND_WQE_BB); qp->sq.w_list[idx].next = qp->sq.cur_post; + + /* We save the edge which was possibly updated during the WQE + * construction, into SQ's cache. + */ + seg = PTR_ALIGN(seg, MLX5_SEND_WQE_BB); + qp->sq.cur_edge = (unlikely(seg == cur_edge)) ? 
+ get_sq_edge(&qp->sq, qp->sq.cur_post & + (qp->sq.wqe_cnt - 1)) : + cur_edge; } static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, @@ -4623,11 +4712,10 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, struct mlx5_core_dev *mdev = dev->mdev; struct mlx5_ib_qp *qp; struct mlx5_ib_mr *mr; - struct mlx5_wqe_data_seg *dpseg; struct mlx5_wqe_xrc_seg *xrc; struct mlx5_bf *bf; + void *cur_edge; int uninitialized_var(size); - void *qend; unsigned long flags; unsigned idx; int err = 0; @@ -4649,7 +4737,6 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, qp = to_mqp(ibqp); bf = &qp->bf; - qend = qp->sq.qend; spin_lock_irqsave(&qp->sq.lock, flags); @@ -4669,7 +4756,8 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, goto out; } - err = begin_wqe(qp, &seg, &ctrl, wr, &idx, &size, nreq); + err = begin_wqe(qp, &seg, &ctrl, wr, &idx, &size, &cur_edge, + nreq); if (err) { mlx5_ib_warn(dev, "\n"); err = -ENOMEM; @@ -4719,14 +4807,15 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, case IB_WR_LOCAL_INV: qp->sq.wr_data[idx] = IB_WR_LOCAL_INV; ctrl->imm = cpu_to_be32(wr->ex.invalidate_rkey); - set_linv_wr(qp, &seg, &size); + set_linv_wr(qp, &seg, &size, &cur_edge); num_sge = 0; break; case IB_WR_REG_MR: qp->sq.wr_data[idx] = IB_WR_REG_MR; ctrl->imm = cpu_to_be32(reg_wr(wr)->key); - err = set_reg_wr(qp, reg_wr(wr), &seg, &size); + err = set_reg_wr(qp, reg_wr(wr), &seg, &size, + &cur_edge); if (err) { *bad_wr = wr; goto out; @@ -4739,21 +4828,24 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, mr = to_mmr(sig_handover_wr(wr)->sig_mr); ctrl->imm = cpu_to_be32(mr->ibmr.rkey); - err = set_sig_umr_wr(wr, qp, &seg, &size); + err = set_sig_umr_wr(wr, qp, &seg, &size, + &cur_edge); if (err) { mlx5_ib_warn(dev, "\n"); *bad_wr = wr; goto out; } - finish_wqe(qp, ctrl, size, idx, wr->wr_id, nreq, - fence, MLX5_OPCODE_UMR); + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, + wr->wr_id, nreq, fence, + MLX5_OPCODE_UMR); /* * SET_PSV WQEs are not signaled and solicited * on error */ err = __begin_wqe(qp, &seg, &ctrl, wr, &idx, - &size, nreq, false, true); + &size, &cur_edge, nreq, false, + true); if (err) { mlx5_ib_warn(dev, "\n"); err = -ENOMEM; @@ -4770,10 +4862,12 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, goto out; } - finish_wqe(qp, ctrl, size, idx, wr->wr_id, nreq, - fence, MLX5_OPCODE_SET_PSV); + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, + wr->wr_id, nreq, fence, + MLX5_OPCODE_SET_PSV); err = __begin_wqe(qp, &seg, &ctrl, wr, &idx, - &size, nreq, false, true); + &size, &cur_edge, nreq, false, + true); if (err) { mlx5_ib_warn(dev, "\n"); err = -ENOMEM; @@ -4790,8 +4884,9 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, goto out; } - finish_wqe(qp, ctrl, size, idx, wr->wr_id, nreq, - fence, MLX5_OPCODE_SET_PSV); + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, + wr->wr_id, nreq, fence, + MLX5_OPCODE_SET_PSV); qp->next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL; num_sge = 0; goto skip_psv; @@ -4828,16 +4923,14 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, set_datagram_seg(seg, wr); seg += sizeof(struct mlx5_wqe_datagram_seg); size += sizeof(struct mlx5_wqe_datagram_seg) / 16; - if (unlikely((seg == qend))) - seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, &seg, size, &cur_edge); + break; case IB_QPT_UD: 
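An aside on the finish_wqe() logic just above: cur_post counts send-queue basic blocks and runs free; only its masked value indexes the SQ, which is why wqe_cnt must be a power of two. A worked sketch with made-up numbers (MLX5_SEND_WQE_BB assumed to be 64 bytes):

#include <stdint.h>

/* A WQE of size 12 (in 16-byte units) spans DIV_ROUND_UP(12 * 16, 64) = 3
 * basic blocks, so cur_post advances by 3; with wqe_cnt = 256 the next
 * slot is (cur_post + 3) & 0xff.
 */
static uint32_t next_slot(uint32_t cur_post, uint32_t size_16, uint32_t wqe_cnt)
{
	cur_post += (size_16 * 16 + 63) / 64;	/* whole basic blocks */
	return cur_post & (wqe_cnt - 1);	/* power-of-two ring mask */
}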
set_datagram_seg(seg, wr); seg += sizeof(struct mlx5_wqe_datagram_seg); size += sizeof(struct mlx5_wqe_datagram_seg) / 16; - - if (unlikely((seg == qend))) - seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, &seg, size, &cur_edge); /* handle qp that supports ud offload */ if (qp->flags & IB_QP_CREATE_IPOIB_UD_LSO) { @@ -4847,11 +4940,9 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, memset(pad, 0, sizeof(struct mlx5_wqe_eth_pad)); seg += sizeof(struct mlx5_wqe_eth_pad); size += sizeof(struct mlx5_wqe_eth_pad) / 16; - - seg = set_eth_seg(seg, wr, qend, qp, &size); - - if (unlikely((seg == qend))) - seg = mlx5_get_send_wqe(qp, 0); + set_eth_seg(wr, qp, &seg, &size, &cur_edge); + handle_post_send_edge(&qp->sq, &seg, size, + &cur_edge); } break; case MLX5_IB_QPT_REG_UMR: @@ -4867,13 +4958,11 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, goto out; seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; - if (unlikely((seg == qend))) - seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, &seg, size, &cur_edge); set_reg_mkey_segment(seg, wr); seg += sizeof(struct mlx5_mkey_seg); size += sizeof(struct mlx5_mkey_seg) / 16; - if (unlikely((seg == qend))) - seg = mlx5_get_send_wqe(qp, 0); + handle_post_send_edge(&qp->sq, &seg, size, &cur_edge); break; default: @@ -4881,33 +4970,29 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, } if (wr->send_flags & IB_SEND_INLINE && num_sge) { - int uninitialized_var(sz); - - err = set_data_inl_seg(qp, wr, seg, &sz); + err = set_data_inl_seg(qp, wr, &seg, &size, &cur_edge); if (unlikely(err)) { mlx5_ib_warn(dev, "\n"); *bad_wr = wr; goto out; } - size += sz; } else { - dpseg = seg; for (i = 0; i < num_sge; i++) { - if (unlikely(dpseg == qend)) { - seg = mlx5_get_send_wqe(qp, 0); - dpseg = seg; - } + handle_post_send_edge(&qp->sq, &seg, size, + &cur_edge); if (likely(wr->sg_list[i].length)) { - set_data_ptr_seg(dpseg, wr->sg_list + i); + set_data_ptr_seg + ((struct mlx5_wqe_data_seg *)seg, + wr->sg_list + i); size += sizeof(struct mlx5_wqe_data_seg) / 16; - dpseg++; + seg += sizeof(struct mlx5_wqe_data_seg); } } } qp->next_fence = next_fence; - finish_wqe(qp, ctrl, size, idx, wr->wr_id, nreq, fence, - mlx5_ib_opcode[wr->opcode]); + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, wr->wr_id, nreq, + fence, mlx5_ib_opcode[wr->opcode]); skip_psv: if (0) dump_wqe(qp, idx, size); @@ -4993,7 +5078,7 @@ static int _mlx5_ib_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, goto out; } - scat = get_recv_wqe(qp, ind); + scat = mlx5_frag_buf_get_wqe(&qp->rq.fbc, ind); if (qp->wq_sig) scat++; @@ -5441,7 +5526,6 @@ struct ib_xrcd *mlx5_ib_alloc_xrcd(struct ib_device *ibdev, struct mlx5_ib_dev *dev = to_mdev(ibdev); struct mlx5_ib_xrcd *xrcd; int err; - u16 uid; if (!MLX5_CAP_GEN(dev->mdev, xrc)) return ERR_PTR(-ENOSYS); @@ -5450,14 +5534,12 @@ struct ib_xrcd *mlx5_ib_alloc_xrcd(struct ib_device *ibdev, if (!xrcd) return ERR_PTR(-ENOMEM); - uid = context ? 
to_mucontext(context)->devx_uid : 0; - err = mlx5_cmd_xrcd_alloc(dev->mdev, &xrcd->xrcdn, uid); + err = mlx5_cmd_xrcd_alloc(dev->mdev, &xrcd->xrcdn, 0); if (err) { kfree(xrcd); return ERR_PTR(-ENOMEM); } - xrcd->uid = uid; return &xrcd->ibxrcd; } @@ -5465,10 +5547,9 @@ int mlx5_ib_dealloc_xrcd(struct ib_xrcd *xrcd) { struct mlx5_ib_dev *dev = to_mdev(xrcd->device); u32 xrcdn = to_mxrcd(xrcd)->xrcdn; - u16 uid = to_mxrcd(xrcd)->uid; int err; - err = mlx5_cmd_xrcd_dealloc(dev->mdev, xrcdn, uid); + err = mlx5_cmd_xrcd_dealloc(dev->mdev, xrcdn, 0); if (err) mlx5_ib_warn(dev, "failed to dealloc xrcdn 0x%x\n", xrcdn); diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c index d012e7dbcc38..4e8d18009f58 100644 --- a/drivers/infiniband/hw/mlx5/srq.c +++ b/drivers/infiniband/hw/mlx5/srq.c @@ -1,50 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* - * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved. - * - * This software is available to you under a choice of one of two - * licenses. You may choose to be licensed under the terms of the GNU - * General Public License (GPL) Version 2, available from the file - * COPYING in the main directory of this source tree, or the - * OpenIB.org BSD license below: - * - * Redistribution and use in source and binary forms, with or - * without modification, are permitted provided that the following - * conditions are met: - * - * - Redistributions of source code must retain the above - * copyright notice, this list of conditions and the following - * disclaimer. - * - * - Redistributions in binary form must reproduce the above - * copyright notice, this list of conditions and the following - * disclaimer in the documentation and/or other materials - * provided with the distribution. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. + * Copyright (c) 2013-2018, Mellanox Technologies inc. All rights reserved. */ #include #include -#include #include #include #include - #include "mlx5_ib.h" - -/* not supported currently */ -static int srq_signature; +#include "srq.h" static void *get_wqe(struct mlx5_ib_srq *srq, int n) { - return mlx5_buf_offset(&srq->buf, n << srq->msrq.wqe_shift); + return mlx5_frag_buf_get_wqe(&srq->fbc, n); } static void mlx5_ib_srq_event(struct mlx5_core_srq *srq, enum mlx5_event type) @@ -144,7 +113,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq, in->log_page_size = page_shift - MLX5_ADAPTER_PAGE_SHIFT; in->page_offset = offset; - in->uid = to_mpd(pd)->uid; + in->uid = (in->type != IB_SRQT_XRC) ? 
to_mpd(pd)->uid : 0; if (MLX5_CAP_GEN(dev->mdev, cqe_version) == MLX5_CQE_VERSION_V1 && in->type != IB_SRQT_BASIC) in->user_index = uidx; @@ -173,12 +142,16 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq, return err; } - if (mlx5_buf_alloc(dev->mdev, buf_size, &srq->buf)) { + if (mlx5_frag_buf_alloc_node(dev->mdev, buf_size, &srq->buf, + dev->mdev->priv.numa_node)) { mlx5_ib_dbg(dev, "buf alloc failed\n"); err = -ENOMEM; goto err_db; } + mlx5_init_fbc(srq->buf.frags, srq->msrq.wqe_shift, ilog2(srq->msrq.max), + &srq->fbc); + srq->head = 0; srq->tail = srq->msrq.max - 1; srq->wqe_ctr = 0; @@ -195,14 +168,14 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq, err = -ENOMEM; goto err_buf; } - mlx5_fill_page_array(&srq->buf, in->pas); + mlx5_fill_page_frag_array(&srq->buf, in->pas); srq->wrid = kvmalloc_array(srq->msrq.max, sizeof(u64), GFP_KERNEL); if (!srq->wrid) { err = -ENOMEM; goto err_in; } - srq->wq_sig = !!srq_signature; + srq->wq_sig = 0; in->log_page_size = srq->buf.page_shift - MLX5_ADAPTER_PAGE_SHIFT; if (MLX5_CAP_GEN(dev->mdev, cqe_version) == MLX5_CQE_VERSION_V1 && @@ -215,7 +188,7 @@ err_in: kvfree(in->pas); err_buf: - mlx5_buf_free(dev->mdev, &srq->buf); + mlx5_frag_buf_free(dev->mdev, &srq->buf); err_db: mlx5_db_free(dev->mdev, &srq->db); @@ -232,7 +205,7 @@ static void destroy_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq) static void destroy_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq) { kvfree(srq->wrid); - mlx5_buf_free(dev->mdev, &srq->buf); + mlx5_frag_buf_free(dev->mdev, &srq->buf); mlx5_db_free(dev->mdev, &srq->db); } @@ -287,14 +260,14 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd, } in.type = init_attr->srq_type; - if (pd->uobject) + if (udata) err = create_srq_user(pd, srq, &in, udata, buf_size); else err = create_srq_kernel(dev, srq, &in, buf_size); if (err) { mlx5_ib_warn(dev, "create srq %s failed, err %d\n", - pd->uobject ? "user" : "kernel", err); + udata ? 
"user" : "kernel", err); goto err_srq; } @@ -327,7 +300,7 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd, in.pd = to_mpd(pd)->pdn; in.db_record = srq->db.dma; - err = mlx5_core_create_srq(dev->mdev, &srq->msrq, &in); + err = mlx5_cmd_create_srq(dev, &srq->msrq, &in); kvfree(in.pas); if (err) { mlx5_ib_dbg(dev, "create SRQ failed, err %d\n", err); @@ -339,7 +312,7 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd, srq->msrq.event = mlx5_ib_srq_event; srq->ibsrq.ext.xrc.srq_num = srq->msrq.srqn; - if (pd->uobject) + if (udata) if (ib_copy_to_udata(udata, &srq->msrq.srqn, sizeof(__u32))) { mlx5_ib_dbg(dev, "copy to user failed\n"); err = -EFAULT; @@ -351,10 +324,10 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd, return &srq->ibsrq; err_core: - mlx5_core_destroy_srq(dev->mdev, &srq->msrq); + mlx5_cmd_destroy_srq(dev, &srq->msrq); err_usr_kern_srq: - if (pd->uobject) + if (udata) destroy_srq_user(pd, srq); else destroy_srq_kernel(dev, srq); @@ -381,7 +354,7 @@ int mlx5_ib_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr, return -EINVAL; mutex_lock(&srq->mutex); - ret = mlx5_core_arm_srq(dev->mdev, &srq->msrq, attr->srq_limit, 1); + ret = mlx5_cmd_arm_srq(dev, &srq->msrq, attr->srq_limit, 1); mutex_unlock(&srq->mutex); if (ret) @@ -402,7 +375,7 @@ int mlx5_ib_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *srq_attr) if (!out) return -ENOMEM; - ret = mlx5_core_query_srq(dev->mdev, &srq->msrq, out); + ret = mlx5_cmd_query_srq(dev, &srq->msrq, out); if (ret) goto out_box; @@ -420,7 +393,7 @@ int mlx5_ib_destroy_srq(struct ib_srq *srq) struct mlx5_ib_dev *dev = to_mdev(srq->device); struct mlx5_ib_srq *msrq = to_msrq(srq); - mlx5_core_destroy_srq(dev->mdev, &msrq->msrq); + mlx5_cmd_destroy_srq(dev, &msrq->msrq); if (srq->uobject) { mlx5_ib_db_unmap_user(to_mucontext(srq->uobject->context), &msrq->db); diff --git a/drivers/infiniband/hw/mlx5/srq.h b/drivers/infiniband/hw/mlx5/srq.h new file mode 100644 index 000000000000..75eb5839ae95 --- /dev/null +++ b/drivers/infiniband/hw/mlx5/srq.h @@ -0,0 +1,73 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright (c) 2013-2018, Mellanox Technologies. All rights reserved. 
+ */
+
+#ifndef MLX5_IB_SRQ_H
+#define MLX5_IB_SRQ_H
+
+enum {
+	MLX5_SRQ_FLAG_ERR    = (1 << 0),
+	MLX5_SRQ_FLAG_WQ_SIG = (1 << 1),
+	MLX5_SRQ_FLAG_RNDV   = (1 << 2),
+};
+
+struct mlx5_srq_attr {
+	u32 type;
+	u32 flags;
+	u32 log_size;
+	u32 wqe_shift;
+	u32 log_page_size;
+	u32 wqe_cnt;
+	u32 srqn;
+	u32 xrcd;
+	u32 page_offset;
+	u32 cqn;
+	u32 pd;
+	u32 lwm;
+	u32 user_index;
+	u64 db_record;
+	__be64 *pas;
+	u32 tm_log_list_size;
+	u32 tm_next_tag;
+	u32 tm_hw_phase_cnt;
+	u32 tm_sw_phase_cnt;
+	u16 uid;
+};
+
+struct mlx5_ib_dev;
+
+struct mlx5_core_srq {
+	struct mlx5_core_rsc_common common; /* must be first */
+	u32 srqn;
+	int max;
+	size_t max_gs;
+	size_t max_avail_gather;
+	int wqe_shift;
+	void (*event)(struct mlx5_core_srq *srq, enum mlx5_event e);
+
+	atomic_t refcount;
+	struct completion free;
+	u16 uid;
+};
+
+struct mlx5_srq_table {
+	struct notifier_block nb;
+	/* protect radix tree
+	 */
+	spinlock_t lock;
+	struct radix_tree_root tree;
+};
+
+int mlx5_cmd_create_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq,
+			struct mlx5_srq_attr *in);
+int mlx5_cmd_destroy_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq);
+int mlx5_cmd_query_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq,
+		       struct mlx5_srq_attr *out);
+int mlx5_cmd_arm_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq,
+		     u16 lwm, int is_srq);
+struct mlx5_core_srq *mlx5_cmd_get_srq(struct mlx5_ib_dev *dev, u32 srqn);
+
+int mlx5_init_srq_table(struct mlx5_ib_dev *dev);
+void mlx5_cleanup_srq_table(struct mlx5_ib_dev *dev);
+#endif /* MLX5_IB_SRQ_H */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/srq.c b/drivers/infiniband/hw/mlx5/srq_cmd.c
similarity index 71%
rename from drivers/net/ethernet/mellanox/mlx5/core/srq.c
rename to drivers/infiniband/hw/mlx5/srq_cmd.c
index 6a6fc9be01e6..7aaaffbd4afa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq_cmd.c
@@ -1,67 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
- * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses. You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- * Redistribution and use in source and binary forms, with or
- * without modification, are permitted provided that the following
- * conditions are met:
- *
- * - Redistributions of source code must retain the above
- *   copyright notice, this list of conditions and the following
- *   disclaimer.
- *
- * - Redistributions in binary form must reproduce the above
- *   copyright notice, this list of conditions and the following
- *   disclaimer in the documentation and/or other materials
- *   provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
+ * Copyright (c) 2013-2018, Mellanox Technologies inc. All rights reserved.
*/ #include -#include #include #include -#include -#include -#include "mlx5_core.h" -#include - -void mlx5_srq_event(struct mlx5_core_dev *dev, u32 srqn, int event_type) -{ - struct mlx5_srq_table *table = &dev->priv.srq_table; - struct mlx5_core_srq *srq; - - spin_lock(&table->lock); - - srq = radix_tree_lookup(&table->tree, srqn); - if (srq) - atomic_inc(&srq->refcount); - - spin_unlock(&table->lock); - - if (!srq) { - mlx5_core_warn(dev, "Async event for bogus SRQ 0x%08x\n", srqn); - return; - } - - srq->event(srq, event_type); - - if (atomic_dec_and_test(&srq->refcount)) - complete(&srq->free); -} +#include "mlx5_ib.h" +#include "srq.h" static int get_pas_size(struct mlx5_srq_attr *in) { @@ -132,9 +78,9 @@ static void get_srqc(void *srqc, struct mlx5_srq_attr *in) in->db_record = MLX5_GET64(srqc, srqc, dbr_addr); } -struct mlx5_core_srq *mlx5_core_get_srq(struct mlx5_core_dev *dev, u32 srqn) +struct mlx5_core_srq *mlx5_cmd_get_srq(struct mlx5_ib_dev *dev, u32 srqn) { - struct mlx5_srq_table *table = &dev->priv.srq_table; + struct mlx5_srq_table *table = &dev->srq_table; struct mlx5_core_srq *srq; spin_lock(&table->lock); @@ -147,9 +93,8 @@ struct mlx5_core_srq *mlx5_core_get_srq(struct mlx5_core_dev *dev, u32 srqn) return srq; } -EXPORT_SYMBOL(mlx5_core_get_srq); -static int create_srq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, +static int create_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *in) { u32 create_out[MLX5_ST_SZ_DW(create_srq_out)] = {0}; @@ -176,7 +121,7 @@ static int create_srq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, MLX5_SET(create_srq_in, create_in, opcode, MLX5_CMD_OP_CREATE_SRQ); - err = mlx5_cmd_exec(dev, create_in, inlen, create_out, + err = mlx5_cmd_exec(dev->mdev, create_in, inlen, create_out, sizeof(create_out)); kvfree(create_in); if (!err) { @@ -187,8 +132,7 @@ static int create_srq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, return err; } -static int destroy_srq_cmd(struct mlx5_core_dev *dev, - struct mlx5_core_srq *srq) +static int destroy_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq) { u32 srq_in[MLX5_ST_SZ_DW(destroy_srq_in)] = {0}; u32 srq_out[MLX5_ST_SZ_DW(destroy_srq_out)] = {0}; @@ -198,11 +142,11 @@ static int destroy_srq_cmd(struct mlx5_core_dev *dev, MLX5_SET(destroy_srq_in, srq_in, srqn, srq->srqn); MLX5_SET(destroy_srq_in, srq_in, uid, srq->uid); - return mlx5_cmd_exec(dev, srq_in, sizeof(srq_in), - srq_out, sizeof(srq_out)); + return mlx5_cmd_exec(dev->mdev, srq_in, sizeof(srq_in), srq_out, + sizeof(srq_out)); } -static int arm_srq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, +static int arm_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, u16 lwm, int is_srq) { u32 srq_in[MLX5_ST_SZ_DW(arm_rq_in)] = {0}; @@ -214,11 +158,11 @@ static int arm_srq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, MLX5_SET(arm_rq_in, srq_in, lwm, lwm); MLX5_SET(arm_rq_in, srq_in, uid, srq->uid); - return mlx5_cmd_exec(dev, srq_in, sizeof(srq_in), - srq_out, sizeof(srq_out)); + return mlx5_cmd_exec(dev->mdev, srq_in, sizeof(srq_in), srq_out, + sizeof(srq_out)); } -static int query_srq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, +static int query_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *out) { u32 srq_in[MLX5_ST_SZ_DW(query_srq_in)] = {0}; @@ -233,8 +177,8 @@ static int query_srq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, MLX5_SET(query_srq_in, srq_in, opcode, 
MLX5_CMD_OP_QUERY_SRQ); MLX5_SET(query_srq_in, srq_in, srqn, srq->srqn); - err = mlx5_cmd_exec(dev, srq_in, sizeof(srq_in), - srq_out, MLX5_ST_SZ_BYTES(query_srq_out)); + err = mlx5_cmd_exec(dev->mdev, srq_in, sizeof(srq_in), srq_out, + MLX5_ST_SZ_BYTES(query_srq_out)); if (err) goto out; @@ -247,7 +191,7 @@ out: return err; } -static int create_xrc_srq_cmd(struct mlx5_core_dev *dev, +static int create_xrc_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *in) { @@ -277,7 +221,7 @@ static int create_xrc_srq_cmd(struct mlx5_core_dev *dev, MLX5_CMD_OP_CREATE_XRC_SRQ); memset(create_out, 0, sizeof(create_out)); - err = mlx5_cmd_exec(dev, create_in, inlen, create_out, + err = mlx5_cmd_exec(dev->mdev, create_in, inlen, create_out, sizeof(create_out)); if (err) goto out; @@ -289,7 +233,7 @@ out: return err; } -static int destroy_xrc_srq_cmd(struct mlx5_core_dev *dev, +static int destroy_xrc_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq) { u32 xrcsrq_in[MLX5_ST_SZ_DW(destroy_xrc_srq_in)] = {0}; @@ -300,12 +244,12 @@ static int destroy_xrc_srq_cmd(struct mlx5_core_dev *dev, MLX5_SET(destroy_xrc_srq_in, xrcsrq_in, xrc_srqn, srq->srqn); MLX5_SET(destroy_xrc_srq_in, xrcsrq_in, uid, srq->uid); - return mlx5_cmd_exec(dev, xrcsrq_in, sizeof(xrcsrq_in), + return mlx5_cmd_exec(dev->mdev, xrcsrq_in, sizeof(xrcsrq_in), xrcsrq_out, sizeof(xrcsrq_out)); } -static int arm_xrc_srq_cmd(struct mlx5_core_dev *dev, - struct mlx5_core_srq *srq, u16 lwm) +static int arm_xrc_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, + u16 lwm) { u32 xrcsrq_in[MLX5_ST_SZ_DW(arm_xrc_srq_in)] = {0}; u32 xrcsrq_out[MLX5_ST_SZ_DW(arm_xrc_srq_out)] = {0}; @@ -316,11 +260,11 @@ static int arm_xrc_srq_cmd(struct mlx5_core_dev *dev, MLX5_SET(arm_xrc_srq_in, xrcsrq_in, lwm, lwm); MLX5_SET(arm_xrc_srq_in, xrcsrq_in, uid, srq->uid); - return mlx5_cmd_exec(dev, xrcsrq_in, sizeof(xrcsrq_in), + return mlx5_cmd_exec(dev->mdev, xrcsrq_in, sizeof(xrcsrq_in), xrcsrq_out, sizeof(xrcsrq_out)); } -static int query_xrc_srq_cmd(struct mlx5_core_dev *dev, +static int query_xrc_srq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *out) { @@ -338,8 +282,8 @@ static int query_xrc_srq_cmd(struct mlx5_core_dev *dev, MLX5_CMD_OP_QUERY_XRC_SRQ); MLX5_SET(query_xrc_srq_in, xrcsrq_in, xrc_srqn, srq->srqn); - err = mlx5_cmd_exec(dev, xrcsrq_in, sizeof(xrcsrq_in), xrcsrq_out, - MLX5_ST_SZ_BYTES(query_xrc_srq_out)); + err = mlx5_cmd_exec(dev->mdev, xrcsrq_in, sizeof(xrcsrq_in), + xrcsrq_out, MLX5_ST_SZ_BYTES(query_xrc_srq_out)); if (err) goto out; @@ -354,21 +298,27 @@ out: return err; } -static int create_rmp_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, +static int create_rmp_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *in) { - void *create_in; + void *create_out = NULL; + void *create_in = NULL; void *rmpc; void *wq; int pas_size; + int outlen; int inlen; int err; pas_size = get_pas_size(in); inlen = MLX5_ST_SZ_BYTES(create_rmp_in) + pas_size; + outlen = MLX5_ST_SZ_BYTES(create_rmp_out); create_in = kvzalloc(inlen, GFP_KERNEL); - if (!create_in) - return -ENOMEM; + create_out = kvzalloc(outlen, GFP_KERNEL); + if (!create_in || !create_out) { + err = -ENOMEM; + goto out; + } rmpc = MLX5_ADDR_OF(create_rmp_in, create_in, ctx); wq = MLX5_ADDR_OF(rmpc, rmpc, wq); @@ -378,16 +328,20 @@ static int create_rmp_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, set_wq(wq, in); memcpy(MLX5_ADDR_OF(rmpc, rmpc, wq.pas), in->pas, pas_size); 
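The create_rmp_cmd() rewrite in progress here trades a fixed-size stack output for a kvzalloc()'d in/out pair. Allocating both buffers before testing either lets a single cleanup label serve every failure path, because kvfree() is a no-op on NULL. A minimal sketch of the idiom, detached from the RMP specifics (the helper name is illustrative, not part of this patch):

	static int do_cmd(struct mlx5_core_dev *mdev, int inlen, int outlen)
	{
		void *in = NULL;
		void *out = NULL;
		int err;

		/* Allocate both buffers first, then check them together. */
		in = kvzalloc(inlen, GFP_KERNEL);
		out = kvzalloc(outlen, GFP_KERNEL);
		if (!in || !out) {
			err = -ENOMEM;
			goto out;
		}

		err = mlx5_cmd_exec(mdev, in, inlen, out, outlen);
	out:
		/* kvfree(NULL) is safe, so one label covers all paths. */
		kvfree(in);
		kvfree(out);
		return err;
	}

The same shape recurs below in arm_rmp_cmd() and query_rmp_cmd(), which gain the second allocation for the same reason.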
- err = mlx5_core_create_rmp(dev, create_in, inlen, &srq->srqn); - if (!err) + MLX5_SET(create_rmp_in, create_in, opcode, MLX5_CMD_OP_CREATE_RMP); + err = mlx5_cmd_exec(dev->mdev, create_in, inlen, create_out, outlen); + if (!err) { + srq->srqn = MLX5_GET(create_rmp_out, create_out, rmpn); srq->uid = in->uid; + } +out: kvfree(create_in); + kvfree(create_out); return err; } -static int destroy_rmp_cmd(struct mlx5_core_dev *dev, - struct mlx5_core_srq *srq) +static int destroy_rmp_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq) { u32 in[MLX5_ST_SZ_DW(destroy_rmp_in)] = {}; u32 out[MLX5_ST_SZ_DW(destroy_rmp_out)] = {}; @@ -395,22 +349,30 @@ static int destroy_rmp_cmd(struct mlx5_core_dev *dev, MLX5_SET(destroy_rmp_in, in, opcode, MLX5_CMD_OP_DESTROY_RMP); MLX5_SET(destroy_rmp_in, in, rmpn, srq->srqn); MLX5_SET(destroy_rmp_in, in, uid, srq->uid); - return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); + return mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); } -static int arm_rmp_cmd(struct mlx5_core_dev *dev, - struct mlx5_core_srq *srq, +static int arm_rmp_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, u16 lwm) { - void *in; + void *out = NULL; + void *in = NULL; void *rmpc; void *wq; void *bitmask; + int outlen; + int inlen; int err; - in = kvzalloc(MLX5_ST_SZ_BYTES(modify_rmp_in), GFP_KERNEL); - if (!in) - return -ENOMEM; + inlen = MLX5_ST_SZ_BYTES(modify_rmp_in); + outlen = MLX5_ST_SZ_BYTES(modify_rmp_out); + + in = kvzalloc(inlen, GFP_KERNEL); + out = kvzalloc(outlen, GFP_KERNEL); + if (!in || !out) { + err = -ENOMEM; + goto out; + } rmpc = MLX5_ADDR_OF(modify_rmp_in, in, ctx); bitmask = MLX5_ADDR_OF(modify_rmp_in, in, bitmask); @@ -422,25 +384,39 @@ static int arm_rmp_cmd(struct mlx5_core_dev *dev, MLX5_SET(wq, wq, lwm, lwm); MLX5_SET(rmp_bitmask, bitmask, lwm, 1); MLX5_SET(rmpc, rmpc, state, MLX5_RMPC_STATE_RDY); + MLX5_SET(modify_rmp_in, in, opcode, MLX5_CMD_OP_MODIFY_RMP); - err = mlx5_core_modify_rmp(dev, in, MLX5_ST_SZ_BYTES(modify_rmp_in)); + err = mlx5_cmd_exec(dev->mdev, in, inlen, out, outlen); +out: kvfree(in); + kvfree(out); return err; } -static int query_rmp_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, +static int query_rmp_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *out) { - u32 *rmp_out; + u32 *rmp_out = NULL; + u32 *rmp_in = NULL; void *rmpc; + int outlen; + int inlen; int err; - rmp_out = kvzalloc(MLX5_ST_SZ_BYTES(query_rmp_out), GFP_KERNEL); - if (!rmp_out) - return -ENOMEM; + outlen = MLX5_ST_SZ_BYTES(query_rmp_out); + inlen = MLX5_ST_SZ_BYTES(query_rmp_in); - err = mlx5_core_query_rmp(dev, srq->srqn, rmp_out); + rmp_out = kvzalloc(outlen, GFP_KERNEL); + rmp_in = kvzalloc(inlen, GFP_KERNEL); + if (!rmp_out || !rmp_in) { + err = -ENOMEM; + goto out; + } + + MLX5_SET(query_rmp_in, rmp_in, opcode, MLX5_CMD_OP_QUERY_RMP); + MLX5_SET(query_rmp_in, rmp_in, rmpn, srq->srqn); + err = mlx5_cmd_exec(dev->mdev, rmp_in, inlen, rmp_out, outlen); if (err) goto out; @@ -451,10 +427,11 @@ static int query_rmp_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, out: kvfree(rmp_out); + kvfree(rmp_in); return err; } -static int create_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, +static int create_xrq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *in) { u32 create_out[MLX5_ST_SZ_DW(create_xrq_out)] = {0}; @@ -489,7 +466,7 @@ static int create_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, MLX5_SET(xrqc, xrqc, cqn, in->cqn); 
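With the mlx5_core_*() transobj wrappers gone, every converted helper in this file follows the same three-step shape: compose the command with MLX5_SET() against a named firmware layout, execute it with mlx5_cmd_exec() on dev->mdev, and read reply fields back with MLX5_GET(). A condensed sketch of that round trip, lifted from the create_rmp conversion above (error handling trimmed):

	/* MLX5_SET() addresses a field by (layout, buffer, field) and does
	 * the byte-order conversion into the big-endian command buffer. */
	MLX5_SET(create_rmp_in, create_in, opcode, MLX5_CMD_OP_CREATE_RMP);

	err = mlx5_cmd_exec(dev->mdev, create_in, inlen, create_out, outlen);
	if (!err)
		/* Reply fields come back out by name the same way. */
		srq->srqn = MLX5_GET(create_rmp_out, create_out, rmpn);

The create_xrq path continuing below sets its opcode the same way.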
MLX5_SET(create_xrq_in, create_in, opcode, MLX5_CMD_OP_CREATE_XRQ); MLX5_SET(create_xrq_in, create_in, uid, in->uid); - err = mlx5_cmd_exec(dev, create_in, inlen, create_out, + err = mlx5_cmd_exec(dev->mdev, create_in, inlen, create_out, sizeof(create_out)); kvfree(create_in); if (!err) { @@ -500,7 +477,7 @@ static int create_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, return err; } -static int destroy_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq) +static int destroy_xrq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq) { u32 in[MLX5_ST_SZ_DW(destroy_xrq_in)] = {0}; u32 out[MLX5_ST_SZ_DW(destroy_xrq_out)] = {0}; @@ -509,10 +486,10 @@ static int destroy_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq) MLX5_SET(destroy_xrq_in, in, xrqn, srq->srqn); MLX5_SET(destroy_xrq_in, in, uid, srq->uid); - return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); + return mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); } -static int arm_xrq_cmd(struct mlx5_core_dev *dev, +static int arm_xrq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, u16 lwm) { @@ -525,10 +502,10 @@ static int arm_xrq_cmd(struct mlx5_core_dev *dev, MLX5_SET(arm_rq_in, in, lwm, lwm); MLX5_SET(arm_rq_in, in, uid, srq->uid); - return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); + return mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); } -static int query_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, +static int query_xrq_cmd(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *out) { u32 in[MLX5_ST_SZ_DW(query_xrq_in)] = {0}; @@ -544,7 +521,7 @@ static int query_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, MLX5_SET(query_xrq_in, in, opcode, MLX5_CMD_OP_QUERY_XRQ); MLX5_SET(query_xrq_in, in, xrqn, srq->srqn); - err = mlx5_cmd_exec(dev, in, sizeof(in), xrq_out, outlen); + err = mlx5_cmd_exec(dev->mdev, in, sizeof(in), xrq_out, outlen); if (err) goto out; @@ -567,11 +544,10 @@ out: return err; } -static int create_srq_split(struct mlx5_core_dev *dev, - struct mlx5_core_srq *srq, +static int create_srq_split(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, struct mlx5_srq_attr *in) { - if (!dev->issi) + if (!dev->mdev->issi) return create_srq_cmd(dev, srq, in); switch (srq->common.res) { case MLX5_RES_XSRQ: @@ -583,10 +559,9 @@ static int create_srq_split(struct mlx5_core_dev *dev, } } -static int destroy_srq_split(struct mlx5_core_dev *dev, - struct mlx5_core_srq *srq) +static int destroy_srq_split(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq) { - if (!dev->issi) + if (!dev->mdev->issi) return destroy_srq_cmd(dev, srq); switch (srq->common.res) { case MLX5_RES_XSRQ: @@ -598,11 +573,11 @@ static int destroy_srq_split(struct mlx5_core_dev *dev, } } -int mlx5_core_create_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, - struct mlx5_srq_attr *in) +int mlx5_cmd_create_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, + struct mlx5_srq_attr *in) { + struct mlx5_srq_table *table = &dev->srq_table; int err; - struct mlx5_srq_table *table = &dev->priv.srq_table; switch (in->type) { case IB_SRQT_XRC: @@ -625,10 +600,8 @@ int mlx5_core_create_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, spin_lock_irq(&table->lock); err = radix_tree_insert(&table->tree, srq->srqn, srq); spin_unlock_irq(&table->lock); - if (err) { - mlx5_core_warn(dev, "err %d, srqn 0x%x\n", err, srq->srqn); + if (err) goto err_destroy_srq_split; - } return 0; @@ -637,25 +610,18 @@ 
err_destroy_srq_split: return err; } -EXPORT_SYMBOL(mlx5_core_create_srq); -int mlx5_core_destroy_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq) +int mlx5_cmd_destroy_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq) { - struct mlx5_srq_table *table = &dev->priv.srq_table; + struct mlx5_srq_table *table = &dev->srq_table; struct mlx5_core_srq *tmp; int err; spin_lock_irq(&table->lock); tmp = radix_tree_delete(&table->tree, srq->srqn); spin_unlock_irq(&table->lock); - if (!tmp) { - mlx5_core_warn(dev, "srq 0x%x not found in tree\n", srq->srqn); - return -EINVAL; - } - if (tmp != srq) { - mlx5_core_warn(dev, "corruption on srqn 0x%x\n", srq->srqn); + if (!tmp || tmp != srq) return -EINVAL; - } err = destroy_srq_split(dev, srq); if (err) @@ -667,12 +633,11 @@ int mlx5_core_destroy_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq) return 0; } -EXPORT_SYMBOL(mlx5_core_destroy_srq); -int mlx5_core_query_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, - struct mlx5_srq_attr *out) +int mlx5_cmd_query_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, + struct mlx5_srq_attr *out) { - if (!dev->issi) + if (!dev->mdev->issi) return query_srq_cmd(dev, srq, out); switch (srq->common.res) { case MLX5_RES_XSRQ: @@ -683,12 +648,11 @@ int mlx5_core_query_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, return query_rmp_cmd(dev, srq, out); } } -EXPORT_SYMBOL(mlx5_core_query_srq); -int mlx5_core_arm_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, - u16 lwm, int is_srq) +int mlx5_cmd_arm_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq, + u16 lwm, int is_srq) { - if (!dev->issi) + if (!dev->mdev->issi) return arm_srq_cmd(dev, srq, lwm, is_srq); switch (srq->common.res) { case MLX5_RES_XSRQ: @@ -699,18 +663,60 @@ int mlx5_core_arm_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, return arm_rmp_cmd(dev, srq, lwm); } } -EXPORT_SYMBOL(mlx5_core_arm_srq); -void mlx5_init_srq_table(struct mlx5_core_dev *dev) +static int srq_event_notifier(struct notifier_block *nb, + unsigned long type, void *data) +{ + struct mlx5_srq_table *table; + struct mlx5_core_srq *srq; + struct mlx5_eqe *eqe; + u32 srqn; + + if (type != MLX5_EVENT_TYPE_SRQ_CATAS_ERROR && + type != MLX5_EVENT_TYPE_SRQ_RQ_LIMIT) + return NOTIFY_DONE; + + table = container_of(nb, struct mlx5_srq_table, nb); + + eqe = data; + srqn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff; + + spin_lock(&table->lock); + + srq = radix_tree_lookup(&table->tree, srqn); + if (srq) + atomic_inc(&srq->refcount); + + spin_unlock(&table->lock); + + if (!srq) + return NOTIFY_OK; + + srq->event(srq, eqe->type); + + if (atomic_dec_and_test(&srq->refcount)) + complete(&srq->free); + + return NOTIFY_OK; +} + +int mlx5_init_srq_table(struct mlx5_ib_dev *dev) { - struct mlx5_srq_table *table = &dev->priv.srq_table; + struct mlx5_srq_table *table = &dev->srq_table; memset(table, 0, sizeof(*table)); spin_lock_init(&table->lock); INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); + + table->nb.notifier_call = srq_event_notifier; + mlx5_notifier_register(dev->mdev, &table->nb); + + return 0; } -void mlx5_cleanup_srq_table(struct mlx5_core_dev *dev) +void mlx5_cleanup_srq_table(struct mlx5_ib_dev *dev) { - /* nothing */ + struct mlx5_srq_table *table = &dev->srq_table; + + mlx5_notifier_unregister(dev->mdev, &table->nb); } diff --git a/drivers/infiniband/hw/mthca/mthca_dev.h b/drivers/infiniband/hw/mthca/mthca_dev.h index 220a3e4717a3..bfd4eebc1182 100644 --- a/drivers/infiniband/hw/mthca/mthca_dev.h +++ 
b/drivers/infiniband/hw/mthca/mthca_dev.h @@ -510,7 +510,8 @@ int mthca_alloc_cq_buf(struct mthca_dev *dev, struct mthca_cq_buf *buf, int nent void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq_buf *buf, int cqe); int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, - struct ib_srq_attr *attr, struct mthca_srq *srq); + struct ib_srq_attr *attr, struct mthca_srq *srq, + struct ib_udata *udata); void mthca_free_srq(struct mthca_dev *dev, struct mthca_srq *srq); int mthca_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr, enum ib_srq_attr_mask attr_mask, struct ib_udata *udata); @@ -547,7 +548,8 @@ int mthca_alloc_qp(struct mthca_dev *dev, enum ib_qp_type type, enum ib_sig_type send_policy, struct ib_qp_cap *cap, - struct mthca_qp *qp); + struct mthca_qp *qp, + struct ib_udata *udata); int mthca_alloc_sqp(struct mthca_dev *dev, struct mthca_pd *pd, struct mthca_cq *send_cq, @@ -556,7 +558,8 @@ int mthca_alloc_sqp(struct mthca_dev *dev, struct ib_qp_cap *cap, int qpn, int port, - struct mthca_sqp *sqp); + struct mthca_sqp *sqp, + struct ib_udata *udata); void mthca_free_qp(struct mthca_dev *dev, struct mthca_qp *qp); int mthca_create_ah(struct mthca_dev *dev, struct mthca_pd *pd, diff --git a/drivers/infiniband/hw/mthca/mthca_mad.c b/drivers/infiniband/hw/mthca/mthca_mad.c index 2e5dc0a67cfc..7ad517da4917 100644 --- a/drivers/infiniband/hw/mthca/mthca_mad.c +++ b/drivers/infiniband/hw/mthca/mthca_mad.c @@ -89,13 +89,13 @@ static void update_sm_ah(struct mthca_dev *dev, rdma_ah_set_port_num(&ah_attr, port_num); new_ah = rdma_create_ah(dev->send_agent[port_num - 1][0]->qp->pd, - &ah_attr); + &ah_attr, 0); if (IS_ERR(new_ah)) return; spin_lock_irqsave(&dev->sm_lock, flags); if (dev->sm_ah[port_num - 1]) - rdma_destroy_ah(dev->sm_ah[port_num - 1]); + rdma_destroy_ah(dev->sm_ah[port_num - 1], 0); dev->sm_ah[port_num - 1] = new_ah; spin_unlock_irqrestore(&dev->sm_lock, flags); } @@ -347,6 +347,7 @@ void mthca_free_agents(struct mthca_dev *dev) } if (dev->sm_ah[p]) - rdma_destroy_ah(dev->sm_ah[p]); + rdma_destroy_ah(dev->sm_ah[p], + RDMA_DESTROY_AH_SLEEPABLE); } } diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c index 691c6f048938..82cb6b71ac7c 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.c +++ b/drivers/infiniband/hw/mthca/mthca_provider.c @@ -412,6 +412,7 @@ static int mthca_dealloc_pd(struct ib_pd *pd) static struct ib_ah *mthca_ah_create(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata) { @@ -431,7 +432,7 @@ static struct ib_ah *mthca_ah_create(struct ib_pd *pd, return &ah->ibah; } -static int mthca_ah_destroy(struct ib_ah *ah) +static int mthca_ah_destroy(struct ib_ah *ah, u32 flags) { mthca_destroy_ah(to_mdev(ah->device), to_mah(ah)); kfree(ah); @@ -455,7 +456,7 @@ static struct ib_srq *mthca_create_srq(struct ib_pd *pd, if (!srq) return ERR_PTR(-ENOMEM); - if (pd->uobject) { + if (udata) { context = to_mucontext(pd->uobject->context); if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) { @@ -475,9 +476,9 @@ static struct ib_srq *mthca_create_srq(struct ib_pd *pd, } err = mthca_alloc_srq(to_mdev(pd->device), to_mpd(pd), - &init_attr->attr, srq); + &init_attr->attr, srq, udata); - if (err && pd->uobject) + if (err && udata) mthca_unmap_user_db(to_mdev(pd->device), &context->uar, context->db_tab, ucmd.db_index); @@ -537,7 +538,7 @@ static struct ib_qp *mthca_create_qp(struct ib_pd *pd, if (!qp) return ERR_PTR(-ENOMEM); - if (pd->uobject) { + if (udata) { context = 
to_mucontext(pd->uobject->context); if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) { @@ -574,9 +575,9 @@ static struct ib_qp *mthca_create_qp(struct ib_pd *pd, to_mcq(init_attr->send_cq), to_mcq(init_attr->recv_cq), init_attr->qp_type, init_attr->sq_sig_type, - &init_attr->cap, qp); + &init_attr->cap, qp, udata); - if (err && pd->uobject) { + if (err && udata) { context = to_mucontext(pd->uobject->context); mthca_unmap_user_db(to_mdev(pd->device), @@ -596,7 +597,7 @@ static struct ib_qp *mthca_create_qp(struct ib_pd *pd, case IB_QPT_GSI: { /* Don't allow userspace to create special QPs */ - if (pd->uobject) + if (udata) return ERR_PTR(-EINVAL); qp = kmalloc(sizeof (struct mthca_sqp), GFP_KERNEL); @@ -610,7 +611,7 @@ static struct ib_qp *mthca_create_qp(struct ib_pd *pd, to_mcq(init_attr->recv_cq), init_attr->sq_sig_type, &init_attr->cap, qp->ibqp.qp_num, init_attr->port_num, - to_msqp(qp)); + to_msqp(qp), udata); break; } default: @@ -1193,6 +1194,81 @@ static void get_dev_fw_str(struct ib_device *device, char *str) (int) dev->fw_ver & 0xffff); } +static const struct ib_device_ops mthca_dev_ops = { + .alloc_pd = mthca_alloc_pd, + .alloc_ucontext = mthca_alloc_ucontext, + .attach_mcast = mthca_multicast_attach, + .create_ah = mthca_ah_create, + .create_cq = mthca_create_cq, + .create_qp = mthca_create_qp, + .dealloc_pd = mthca_dealloc_pd, + .dealloc_ucontext = mthca_dealloc_ucontext, + .dereg_mr = mthca_dereg_mr, + .destroy_ah = mthca_ah_destroy, + .destroy_cq = mthca_destroy_cq, + .destroy_qp = mthca_destroy_qp, + .detach_mcast = mthca_multicast_detach, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = mthca_get_dma_mr, + .get_port_immutable = mthca_port_immutable, + .mmap = mthca_mmap_uar, + .modify_device = mthca_modify_device, + .modify_port = mthca_modify_port, + .modify_qp = mthca_modify_qp, + .poll_cq = mthca_poll_cq, + .process_mad = mthca_process_mad, + .query_ah = mthca_ah_query, + .query_device = mthca_query_device, + .query_gid = mthca_query_gid, + .query_pkey = mthca_query_pkey, + .query_port = mthca_query_port, + .query_qp = mthca_query_qp, + .reg_user_mr = mthca_reg_user_mr, + .resize_cq = mthca_resize_cq, +}; + +static const struct ib_device_ops mthca_dev_arbel_srq_ops = { + .create_srq = mthca_create_srq, + .destroy_srq = mthca_destroy_srq, + .modify_srq = mthca_modify_srq, + .post_srq_recv = mthca_arbel_post_srq_recv, + .query_srq = mthca_query_srq, +}; + +static const struct ib_device_ops mthca_dev_tavor_srq_ops = { + .create_srq = mthca_create_srq, + .destroy_srq = mthca_destroy_srq, + .modify_srq = mthca_modify_srq, + .post_srq_recv = mthca_tavor_post_srq_recv, + .query_srq = mthca_query_srq, +}; + +static const struct ib_device_ops mthca_dev_arbel_fmr_ops = { + .alloc_fmr = mthca_alloc_fmr, + .dealloc_fmr = mthca_dealloc_fmr, + .map_phys_fmr = mthca_arbel_map_phys_fmr, + .unmap_fmr = mthca_unmap_fmr, +}; + +static const struct ib_device_ops mthca_dev_tavor_fmr_ops = { + .alloc_fmr = mthca_alloc_fmr, + .dealloc_fmr = mthca_dealloc_fmr, + .map_phys_fmr = mthca_tavor_map_phys_fmr, + .unmap_fmr = mthca_unmap_fmr, +}; + +static const struct ib_device_ops mthca_dev_arbel_ops = { + .post_recv = mthca_arbel_post_receive, + .post_send = mthca_arbel_post_send, + .req_notify_cq = mthca_arbel_arm_cq, +}; + +static const struct ib_device_ops mthca_dev_tavor_ops = { + .post_recv = mthca_tavor_post_receive, + .post_send = mthca_tavor_post_send, + .req_notify_cq = mthca_tavor_arm_cq, +}; + int mthca_register_device(struct mthca_dev *dev) { int ret; @@ -1226,26 +1302,8 @@ int 
mthca_register_device(struct mthca_dev *dev) dev->ib_dev.phys_port_cnt = dev->limits.num_ports; dev->ib_dev.num_comp_vectors = 1; dev->ib_dev.dev.parent = &dev->pdev->dev; - dev->ib_dev.query_device = mthca_query_device; - dev->ib_dev.query_port = mthca_query_port; - dev->ib_dev.modify_device = mthca_modify_device; - dev->ib_dev.modify_port = mthca_modify_port; - dev->ib_dev.query_pkey = mthca_query_pkey; - dev->ib_dev.query_gid = mthca_query_gid; - dev->ib_dev.alloc_ucontext = mthca_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = mthca_dealloc_ucontext; - dev->ib_dev.mmap = mthca_mmap_uar; - dev->ib_dev.alloc_pd = mthca_alloc_pd; - dev->ib_dev.dealloc_pd = mthca_dealloc_pd; - dev->ib_dev.create_ah = mthca_ah_create; - dev->ib_dev.query_ah = mthca_ah_query; - dev->ib_dev.destroy_ah = mthca_ah_destroy; if (dev->mthca_flags & MTHCA_FLAG_SRQ) { - dev->ib_dev.create_srq = mthca_create_srq; - dev->ib_dev.modify_srq = mthca_modify_srq; - dev->ib_dev.query_srq = mthca_query_srq; - dev->ib_dev.destroy_srq = mthca_destroy_srq; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_CREATE_SRQ) | (1ull << IB_USER_VERBS_CMD_MODIFY_SRQ) | @@ -1253,48 +1311,28 @@ int mthca_register_device(struct mthca_dev *dev) (1ull << IB_USER_VERBS_CMD_DESTROY_SRQ); if (mthca_is_memfree(dev)) - dev->ib_dev.post_srq_recv = mthca_arbel_post_srq_recv; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_arbel_srq_ops); else - dev->ib_dev.post_srq_recv = mthca_tavor_post_srq_recv; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_tavor_srq_ops); } - dev->ib_dev.create_qp = mthca_create_qp; - dev->ib_dev.modify_qp = mthca_modify_qp; - dev->ib_dev.query_qp = mthca_query_qp; - dev->ib_dev.destroy_qp = mthca_destroy_qp; - dev->ib_dev.create_cq = mthca_create_cq; - dev->ib_dev.resize_cq = mthca_resize_cq; - dev->ib_dev.destroy_cq = mthca_destroy_cq; - dev->ib_dev.poll_cq = mthca_poll_cq; - dev->ib_dev.get_dma_mr = mthca_get_dma_mr; - dev->ib_dev.reg_user_mr = mthca_reg_user_mr; - dev->ib_dev.dereg_mr = mthca_dereg_mr; - dev->ib_dev.get_port_immutable = mthca_port_immutable; - dev->ib_dev.get_dev_fw_str = get_dev_fw_str; - if (dev->mthca_flags & MTHCA_FLAG_FMR) { - dev->ib_dev.alloc_fmr = mthca_alloc_fmr; - dev->ib_dev.unmap_fmr = mthca_unmap_fmr; - dev->ib_dev.dealloc_fmr = mthca_dealloc_fmr; if (mthca_is_memfree(dev)) - dev->ib_dev.map_phys_fmr = mthca_arbel_map_phys_fmr; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_arbel_fmr_ops); else - dev->ib_dev.map_phys_fmr = mthca_tavor_map_phys_fmr; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_tavor_fmr_ops); } - dev->ib_dev.attach_mcast = mthca_multicast_attach; - dev->ib_dev.detach_mcast = mthca_multicast_detach; - dev->ib_dev.process_mad = mthca_process_mad; + ib_set_device_ops(&dev->ib_dev, &mthca_dev_ops); - if (mthca_is_memfree(dev)) { - dev->ib_dev.req_notify_cq = mthca_arbel_arm_cq; - dev->ib_dev.post_send = mthca_arbel_post_send; - dev->ib_dev.post_recv = mthca_arbel_post_receive; - } else { - dev->ib_dev.req_notify_cq = mthca_tavor_arm_cq; - dev->ib_dev.post_send = mthca_tavor_post_send; - dev->ib_dev.post_recv = mthca_tavor_post_receive; - } + if (mthca_is_memfree(dev)) + ib_set_device_ops(&dev->ib_dev, &mthca_dev_arbel_ops); + else + ib_set_device_ops(&dev->ib_dev, &mthca_dev_tavor_ops); mutex_init(&dev->cap_mask_mutex); diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c index 9d178ee3c96a..4e5b5cc17f1d 100644 --- a/drivers/infiniband/hw/mthca/mthca_qp.c +++ b/drivers/infiniband/hw/mthca/mthca_qp.c @@ -981,7 +981,8 @@ static void 
mthca_adjust_qp_caps(struct mthca_dev *dev, */ static int mthca_alloc_wqe_buf(struct mthca_dev *dev, struct mthca_pd *pd, - struct mthca_qp *qp) + struct mthca_qp *qp, + struct ib_udata *udata) { int size; int err = -ENOMEM; @@ -1048,7 +1049,7 @@ static int mthca_alloc_wqe_buf(struct mthca_dev *dev, * allocate anything. All we need is to calculate the WQE * sizes and the send_wqe_offset, so we're done now. */ - if (pd->ibpd.uobject) + if (udata) return 0; size = PAGE_ALIGN(qp->send_wqe_offset + @@ -1155,7 +1156,8 @@ static int mthca_alloc_qp_common(struct mthca_dev *dev, struct mthca_cq *send_cq, struct mthca_cq *recv_cq, enum ib_sig_type send_policy, - struct mthca_qp *qp) + struct mthca_qp *qp, + struct ib_udata *udata) { int ret; int i; @@ -1178,7 +1180,7 @@ static int mthca_alloc_qp_common(struct mthca_dev *dev, if (ret) return ret; - ret = mthca_alloc_wqe_buf(dev, pd, qp); + ret = mthca_alloc_wqe_buf(dev, pd, qp, udata); if (ret) { mthca_unmap_memfree(dev, qp); return ret; @@ -1191,7 +1193,7 @@ static int mthca_alloc_qp_common(struct mthca_dev *dev, * will be allocated and buffers will be initialized in * userspace. */ - if (pd->ibpd.uobject) + if (udata) return 0; ret = mthca_alloc_memfree(dev, qp); @@ -1285,7 +1287,8 @@ int mthca_alloc_qp(struct mthca_dev *dev, enum ib_qp_type type, enum ib_sig_type send_policy, struct ib_qp_cap *cap, - struct mthca_qp *qp) + struct mthca_qp *qp, + struct ib_udata *udata) { int err; @@ -1308,7 +1311,7 @@ int mthca_alloc_qp(struct mthca_dev *dev, qp->port = 0; err = mthca_alloc_qp_common(dev, pd, send_cq, recv_cq, - send_policy, qp); + send_policy, qp, udata); if (err) { mthca_free(&dev->qp_table.alloc, qp->qpn); return err; @@ -1360,7 +1363,8 @@ int mthca_alloc_sqp(struct mthca_dev *dev, struct ib_qp_cap *cap, int qpn, int port, - struct mthca_sqp *sqp) + struct mthca_sqp *sqp, + struct ib_udata *udata) { u32 mqpn = qpn * 2 + dev->qp_table.sqp_start + port - 1; int err; @@ -1391,7 +1395,7 @@ int mthca_alloc_sqp(struct mthca_dev *dev, sqp->qp.transport = MLX; err = mthca_alloc_qp_common(dev, pd, send_cq, recv_cq, - send_policy, &sqp->qp); + send_policy, &sqp->qp, udata); if (err) goto err_out_free; diff --git a/drivers/infiniband/hw/mthca/mthca_srq.c b/drivers/infiniband/hw/mthca/mthca_srq.c index 9a3fc6fb0d7e..b8333c79e3fa 100644 --- a/drivers/infiniband/hw/mthca/mthca_srq.c +++ b/drivers/infiniband/hw/mthca/mthca_srq.c @@ -95,7 +95,8 @@ static inline int *wqe_to_link(void *wqe) static void mthca_tavor_init_srq_context(struct mthca_dev *dev, struct mthca_pd *pd, struct mthca_srq *srq, - struct mthca_tavor_srq_context *context) + struct mthca_tavor_srq_context *context, + bool is_user) { memset(context, 0, sizeof *context); @@ -103,7 +104,7 @@ static void mthca_tavor_init_srq_context(struct mthca_dev *dev, context->state_pd = cpu_to_be32(pd->pd_num); context->lkey = cpu_to_be32(srq->mr.ibmr.lkey); - if (pd->ibpd.uobject) + if (is_user) context->uar = cpu_to_be32(to_mucontext(pd->ibpd.uobject->context)->uar.index); else @@ -113,7 +114,8 @@ static void mthca_tavor_init_srq_context(struct mthca_dev *dev, static void mthca_arbel_init_srq_context(struct mthca_dev *dev, struct mthca_pd *pd, struct mthca_srq *srq, - struct mthca_arbel_srq_context *context) + struct mthca_arbel_srq_context *context, + bool is_user) { int logsize, max; @@ -129,7 +131,7 @@ static void mthca_arbel_init_srq_context(struct mthca_dev *dev, context->lkey = cpu_to_be32(srq->mr.ibmr.lkey); context->db_index = cpu_to_be32(srq->db_index); context->logstride_usrpage = 
cpu_to_be32((srq->wqe_shift - 4) << 29); - if (pd->ibpd.uobject) + if (is_user) context->logstride_usrpage |= cpu_to_be32(to_mucontext(pd->ibpd.uobject->context)->uar.index); else @@ -145,14 +147,14 @@ static void mthca_free_srq_buf(struct mthca_dev *dev, struct mthca_srq *srq) } static int mthca_alloc_srq_buf(struct mthca_dev *dev, struct mthca_pd *pd, - struct mthca_srq *srq) + struct mthca_srq *srq, struct ib_udata *udata) { struct mthca_data_seg *scatter; void *wqe; int err; int i; - if (pd->ibpd.uobject) + if (udata) return 0; srq->wrid = kmalloc_array(srq->max, sizeof(u64), GFP_KERNEL); @@ -197,7 +199,8 @@ static int mthca_alloc_srq_buf(struct mthca_dev *dev, struct mthca_pd *pd, } int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, - struct ib_srq_attr *attr, struct mthca_srq *srq) + struct ib_srq_attr *attr, struct mthca_srq *srq, + struct ib_udata *udata) { struct mthca_mailbox *mailbox; int ds; @@ -235,7 +238,7 @@ int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, if (err) goto err_out; - if (!pd->ibpd.uobject) { + if (!udata) { srq->db_index = mthca_alloc_db(dev, MTHCA_DB_TYPE_SRQ, srq->srqn, &srq->db); if (srq->db_index < 0) { @@ -251,7 +254,7 @@ int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, goto err_out_db; } - err = mthca_alloc_srq_buf(dev, pd, srq); + err = mthca_alloc_srq_buf(dev, pd, srq, udata); if (err) goto err_out_mailbox; @@ -261,9 +264,9 @@ int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, mutex_init(&srq->mutex); if (mthca_is_memfree(dev)) - mthca_arbel_init_srq_context(dev, pd, srq, mailbox->buf); + mthca_arbel_init_srq_context(dev, pd, srq, mailbox->buf, udata); else - mthca_tavor_init_srq_context(dev, pd, srq, mailbox->buf); + mthca_tavor_init_srq_context(dev, pd, srq, mailbox->buf, udata); err = mthca_SW2HW_SRQ(dev, mailbox, srq->srqn); @@ -297,14 +300,14 @@ err_out_free_srq: mthca_warn(dev, "HW2SW_SRQ failed (%d)\n", err); err_out_free_buf: - if (!pd->ibpd.uobject) + if (!udata) mthca_free_srq_buf(dev, srq); err_out_mailbox: mthca_free_mailbox(dev, mailbox); err_out_db: - if (!pd->ibpd.uobject && mthca_is_memfree(dev)) + if (!udata && mthca_is_memfree(dev)) mthca_free_db(dev, MTHCA_DB_TYPE_SRQ, srq->db_index); err_out_icm: diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c index 2b67ace5b614..032883180f65 100644 --- a/drivers/infiniband/hw/nes/nes_cm.c +++ b/drivers/infiniband/hw/nes/nes_cm.c @@ -3033,7 +3033,7 @@ static int nes_disconnect(struct nes_qp *nesqp, int abrupt) /* Need to free the Last Streaming Mode Message */ if (nesqp->ietf_frame) { if (nesqp->lsmm_mr) - nesibdev->ibdev.dereg_mr(nesqp->lsmm_mr); + nesibdev->ibdev.ops.dereg_mr(nesqp->lsmm_mr); pci_free_consistent(nesdev->pcidev, nesqp->private_data_len + nesqp->ietf_frame_size, nesqp->ietf_frame, nesqp->ietf_frame_pbase); diff --git a/drivers/infiniband/hw/nes/nes_mgt.c b/drivers/infiniband/hw/nes/nes_mgt.c index e96ffff61c3a..cc4dce5c3e5f 100644 --- a/drivers/infiniband/hw/nes/nes_mgt.c +++ b/drivers/infiniband/hw/nes/nes_mgt.c @@ -223,11 +223,11 @@ static struct sk_buff *nes_get_next_skb(struct nes_device *nesdev, struct nes_qp } old_skb = skb; - skb = skb->next; + skb = skb_peek_next(skb, &nesqp->pau_list); skb_unlink(old_skb, &nesqp->pau_list); nes_mgt_free_skb(nesdev, old_skb, PCI_DMA_TODEVICE); nes_rem_ref_cm_node(nesqp->cm_node); - if (skb == (struct sk_buff *)&nesqp->pau_list) + if (!skb) goto out; } return skb; @@ -551,14 +551,14 @@ static void queue_fpdus(struct sk_buff *skb, struct nes_vnic 
*nesvnic, struct ne /* Queue skb by sequence number */ if (skb_queue_len(&nesqp->pau_list) == 0) { - skb_queue_head(&nesqp->pau_list, skb); + __skb_queue_head(&nesqp->pau_list, skb); } else { skb_queue_walk(&nesqp->pau_list, tmpskb) { cb = (struct nes_rskb_cb *)&tmpskb->cb[0]; if (before(seqnum, cb->seqnum)) break; } - skb_insert(tmpskb, skb, &nesqp->pau_list); + __skb_insert(skb, tmpskb->prev, tmpskb, &nesqp->pau_list); } if (nesqp->pau_state == PAU_READY) process_it = true; diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c index 92d1cadd4cfd..4e7f08ee1907 100644 --- a/drivers/infiniband/hw/nes/nes_verbs.c +++ b/drivers/infiniband/hw/nes/nes_verbs.c @@ -1066,7 +1066,7 @@ static struct ib_qp *nes_create_qp(struct ib_pd *ibpd, } if (req.user_qp_buffer) nesqp->nesuqp_addr = req.user_qp_buffer; - if ((ibpd->uobject) && (ibpd->uobject->context)) { + if (udata && (ibpd->uobject->context)) { nesqp->user_mode = 1; nes_ucontext = to_nesucontext(ibpd->uobject->context); if (virt_wqs) { @@ -1257,7 +1257,7 @@ static struct ib_qp *nes_create_qp(struct ib_pd *ibpd, nes_put_cqp_request(nesdev, cqp_request); - if (ibpd->uobject) { + if (udata) { uresp.mmap_sq_db_index = nesqp->mmap_sq_db_index; uresp.mmap_rq_db_index = 0; uresp.actual_sq_size = sq_size; @@ -3627,6 +3627,39 @@ static void get_dev_fw_str(struct ib_device *dev, char *str) (nesvnic->nesdev->nesadapter->firmware_version & 0x000000ff)); } +static const struct ib_device_ops nes_dev_ops = { + .alloc_mr = nes_alloc_mr, + .alloc_mw = nes_alloc_mw, + .alloc_pd = nes_alloc_pd, + .alloc_ucontext = nes_alloc_ucontext, + .create_cq = nes_create_cq, + .create_qp = nes_create_qp, + .dealloc_mw = nes_dealloc_mw, + .dealloc_pd = nes_dealloc_pd, + .dealloc_ucontext = nes_dealloc_ucontext, + .dereg_mr = nes_dereg_mr, + .destroy_cq = nes_destroy_cq, + .destroy_qp = nes_destroy_qp, + .drain_rq = nes_drain_rq, + .drain_sq = nes_drain_sq, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = nes_get_dma_mr, + .get_port_immutable = nes_port_immutable, + .map_mr_sg = nes_map_mr_sg, + .mmap = nes_mmap, + .modify_qp = nes_modify_qp, + .poll_cq = nes_poll_cq, + .post_recv = nes_post_recv, + .post_send = nes_post_send, + .query_device = nes_query_device, + .query_gid = nes_query_gid, + .query_pkey = nes_query_pkey, + .query_port = nes_query_port, + .query_qp = nes_query_qp, + .reg_user_mr = nes_reg_user_mr, + .req_notify_cq = nes_req_notify_cq, +}; + /** * nes_init_ofa_device */ @@ -3673,36 +3706,6 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev) nesibdev->ibdev.phys_port_cnt = 1; nesibdev->ibdev.num_comp_vectors = 1; nesibdev->ibdev.dev.parent = &nesdev->pcidev->dev; - nesibdev->ibdev.query_device = nes_query_device; - nesibdev->ibdev.query_port = nes_query_port; - nesibdev->ibdev.query_pkey = nes_query_pkey; - nesibdev->ibdev.query_gid = nes_query_gid; - nesibdev->ibdev.alloc_ucontext = nes_alloc_ucontext; - nesibdev->ibdev.dealloc_ucontext = nes_dealloc_ucontext; - nesibdev->ibdev.mmap = nes_mmap; - nesibdev->ibdev.alloc_pd = nes_alloc_pd; - nesibdev->ibdev.dealloc_pd = nes_dealloc_pd; - nesibdev->ibdev.create_qp = nes_create_qp; - nesibdev->ibdev.modify_qp = nes_modify_qp; - nesibdev->ibdev.query_qp = nes_query_qp; - nesibdev->ibdev.destroy_qp = nes_destroy_qp; - nesibdev->ibdev.create_cq = nes_create_cq; - nesibdev->ibdev.destroy_cq = nes_destroy_cq; - nesibdev->ibdev.poll_cq = nes_poll_cq; - nesibdev->ibdev.get_dma_mr = nes_get_dma_mr; - nesibdev->ibdev.reg_user_mr = nes_reg_user_mr; - 
nesibdev->ibdev.dereg_mr = nes_dereg_mr; - nesibdev->ibdev.alloc_mw = nes_alloc_mw; - nesibdev->ibdev.dealloc_mw = nes_dealloc_mw; - - nesibdev->ibdev.alloc_mr = nes_alloc_mr; - nesibdev->ibdev.map_mr_sg = nes_map_mr_sg; - - nesibdev->ibdev.req_notify_cq = nes_req_notify_cq; - nesibdev->ibdev.post_send = nes_post_send; - nesibdev->ibdev.post_recv = nes_post_recv; - nesibdev->ibdev.drain_sq = nes_drain_sq; - nesibdev->ibdev.drain_rq = nes_drain_rq; nesibdev->ibdev.iwcm = kzalloc(sizeof(*nesibdev->ibdev.iwcm), GFP_KERNEL); if (nesibdev->ibdev.iwcm == NULL) { @@ -3717,8 +3720,8 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev) nesibdev->ibdev.iwcm->reject = nes_reject; nesibdev->ibdev.iwcm->create_listen = nes_create_listen; nesibdev->ibdev.iwcm->destroy_listen = nes_destroy_listen; - nesibdev->ibdev.get_port_immutable = nes_port_immutable; - nesibdev->ibdev.get_dev_fw_str = get_dev_fw_str; + + ib_set_device_ops(&nesibdev->ibdev, &nes_dev_ops); memcpy(nesibdev->ibdev.iwcm->ifname, netdev->name, sizeof(nesibdev->ibdev.iwcm->ifname)); diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c index 58188fe5aed2..a7295322efbc 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_ah.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_ah.c @@ -157,7 +157,7 @@ static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah, } struct ib_ah *ocrdma_create_ah(struct ib_pd *ibpd, struct rdma_ah_attr *attr, - struct ib_udata *udata) + u32 flags, struct ib_udata *udata) { u32 *ahid_addr; int status; @@ -219,7 +219,7 @@ av_err: return ERR_PTR(status); } -int ocrdma_destroy_ah(struct ib_ah *ibah) +int ocrdma_destroy_ah(struct ib_ah *ibah, u32 flags) { struct ocrdma_ah *ah = get_ocrdma_ah(ibah); struct ocrdma_dev *dev = get_ocrdma_dev(ibah->device); diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_ah.h b/drivers/infiniband/hw/ocrdma/ocrdma_ah.h index c0c32c9b80ae..eb996e14b520 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_ah.h +++ b/drivers/infiniband/hw/ocrdma/ocrdma_ah.h @@ -52,8 +52,8 @@ enum { }; struct ib_ah *ocrdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, - struct ib_udata *udata); -int ocrdma_destroy_ah(struct ib_ah *ah); + u32 flags, struct ib_udata *udata); +int ocrdma_destroy_ah(struct ib_ah *ah, u32 flags); int ocrdma_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr); int ocrdma_process_mad(struct ib_device *, diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c index 873cc7f6fe61..1f393842453a 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c @@ -143,6 +143,50 @@ static const struct attribute_group ocrdma_attr_group = { .attrs = ocrdma_attributes, }; +static const struct ib_device_ops ocrdma_dev_ops = { + .alloc_mr = ocrdma_alloc_mr, + .alloc_pd = ocrdma_alloc_pd, + .alloc_ucontext = ocrdma_alloc_ucontext, + .create_ah = ocrdma_create_ah, + .create_cq = ocrdma_create_cq, + .create_qp = ocrdma_create_qp, + .dealloc_pd = ocrdma_dealloc_pd, + .dealloc_ucontext = ocrdma_dealloc_ucontext, + .dereg_mr = ocrdma_dereg_mr, + .destroy_ah = ocrdma_destroy_ah, + .destroy_cq = ocrdma_destroy_cq, + .destroy_qp = ocrdma_destroy_qp, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = ocrdma_get_dma_mr, + .get_link_layer = ocrdma_link_layer, + .get_netdev = ocrdma_get_netdev, + .get_port_immutable = ocrdma_port_immutable, + .map_mr_sg = ocrdma_map_mr_sg, + .mmap = ocrdma_mmap, + .modify_port = ocrdma_modify_port, + .modify_qp = 
ocrdma_modify_qp, + .poll_cq = ocrdma_poll_cq, + .post_recv = ocrdma_post_recv, + .post_send = ocrdma_post_send, + .process_mad = ocrdma_process_mad, + .query_ah = ocrdma_query_ah, + .query_device = ocrdma_query_device, + .query_pkey = ocrdma_query_pkey, + .query_port = ocrdma_query_port, + .query_qp = ocrdma_query_qp, + .reg_user_mr = ocrdma_reg_user_mr, + .req_notify_cq = ocrdma_arm_cq, + .resize_cq = ocrdma_resize_cq, +}; + +static const struct ib_device_ops ocrdma_dev_srq_ops = { + .create_srq = ocrdma_create_srq, + .destroy_srq = ocrdma_destroy_srq, + .modify_srq = ocrdma_modify_srq, + .post_srq_recv = ocrdma_post_srq_recv, + .query_srq = ocrdma_query_srq, +}; + static int ocrdma_register_device(struct ocrdma_dev *dev) { ocrdma_get_guid(dev, (u8 *)&dev->ibdev.node_guid); @@ -182,50 +226,10 @@ static int ocrdma_register_device(struct ocrdma_dev *dev) dev->ibdev.phys_port_cnt = 1; dev->ibdev.num_comp_vectors = dev->eq_cnt; - /* mandatory verbs. */ - dev->ibdev.query_device = ocrdma_query_device; - dev->ibdev.query_port = ocrdma_query_port; - dev->ibdev.modify_port = ocrdma_modify_port; - dev->ibdev.get_netdev = ocrdma_get_netdev; - dev->ibdev.get_link_layer = ocrdma_link_layer; - dev->ibdev.alloc_pd = ocrdma_alloc_pd; - dev->ibdev.dealloc_pd = ocrdma_dealloc_pd; - - dev->ibdev.create_cq = ocrdma_create_cq; - dev->ibdev.destroy_cq = ocrdma_destroy_cq; - dev->ibdev.resize_cq = ocrdma_resize_cq; - - dev->ibdev.create_qp = ocrdma_create_qp; - dev->ibdev.modify_qp = ocrdma_modify_qp; - dev->ibdev.query_qp = ocrdma_query_qp; - dev->ibdev.destroy_qp = ocrdma_destroy_qp; - - dev->ibdev.query_pkey = ocrdma_query_pkey; - dev->ibdev.create_ah = ocrdma_create_ah; - dev->ibdev.destroy_ah = ocrdma_destroy_ah; - dev->ibdev.query_ah = ocrdma_query_ah; - - dev->ibdev.poll_cq = ocrdma_poll_cq; - dev->ibdev.post_send = ocrdma_post_send; - dev->ibdev.post_recv = ocrdma_post_recv; - dev->ibdev.req_notify_cq = ocrdma_arm_cq; - - dev->ibdev.get_dma_mr = ocrdma_get_dma_mr; - dev->ibdev.dereg_mr = ocrdma_dereg_mr; - dev->ibdev.reg_user_mr = ocrdma_reg_user_mr; - - dev->ibdev.alloc_mr = ocrdma_alloc_mr; - dev->ibdev.map_mr_sg = ocrdma_map_mr_sg; - /* mandatory to support user space verbs consumer. 
*/ - dev->ibdev.alloc_ucontext = ocrdma_alloc_ucontext; - dev->ibdev.dealloc_ucontext = ocrdma_dealloc_ucontext; - dev->ibdev.mmap = ocrdma_mmap; dev->ibdev.dev.parent = &dev->nic_info.pdev->dev; - dev->ibdev.process_mad = ocrdma_process_mad; - dev->ibdev.get_port_immutable = ocrdma_port_immutable; - dev->ibdev.get_dev_fw_str = get_dev_fw_str; + ib_set_device_ops(&dev->ibdev, &ocrdma_dev_ops); if (ocrdma_get_asic_type(dev) == OCRDMA_ASIC_GEN_SKH_R) { dev->ibdev.uverbs_cmd_mask |= @@ -235,11 +239,7 @@ static int ocrdma_register_device(struct ocrdma_dev *dev) OCRDMA_UVERBS(DESTROY_SRQ) | OCRDMA_UVERBS(POST_SRQ_RECV); - dev->ibdev.create_srq = ocrdma_create_srq; - dev->ibdev.modify_srq = ocrdma_modify_srq; - dev->ibdev.query_srq = ocrdma_query_srq; - dev->ibdev.destroy_srq = ocrdma_destroy_srq; - dev->ibdev.post_srq_recv = ocrdma_post_srq_recv; + ib_set_device_ops(&dev->ibdev, &ocrdma_dev_srq_ops); } rdma_set_device_sysfs_group(&dev->ibdev, &ocrdma_attr_group); dev->ibdev.driver_id = RDMA_DRIVER_OCRDMA; diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_stats.c b/drivers/infiniband/hw/ocrdma/ocrdma_stats.c index 290d776edf48..dd15474b19b7 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_stats.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_stats.c @@ -760,12 +760,13 @@ static const struct file_operations ocrdma_dbg_ops = { void ocrdma_add_port_stats(struct ocrdma_dev *dev) { + const struct pci_dev *pdev = dev->nic_info.pdev; + if (!ocrdma_dbgfs_dir) return; /* Create post stats base dir */ - dev->dir = - debugfs_create_dir(dev_name(&dev->ibdev.dev), ocrdma_dbgfs_dir); + dev->dir = debugfs_create_dir(pci_name(pdev), ocrdma_dbgfs_dir); if (!dev->dir) goto err; diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c index 06d2a7f3304c..c46bed0c5513 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c @@ -177,11 +177,6 @@ int ocrdma_query_port(struct ib_device *ibdev, /* props being zeroed by the caller, avoid zeroing it here */ dev = get_ocrdma_dev(ibdev); - if (port > 1) { - pr_err("%s(%d) invalid_port=0x%x\n", __func__, - dev->id, port); - return -EINVAL; - } netdev = dev->nic_info.netdev; if (netif_running(netdev) && netif_oper_up(netdev)) { port_state = IB_PORT_ACTIVE; @@ -215,13 +210,6 @@ int ocrdma_query_port(struct ib_device *ibdev, int ocrdma_modify_port(struct ib_device *ibdev, u8 port, int mask, struct ib_port_modify *props) { - struct ocrdma_dev *dev; - - dev = get_ocrdma_dev(ibdev); - if (port > 1) { - pr_err("%s(%d) invalid_port=0x%x\n", __func__, dev->id, port); - return -EINVAL; - } return 0; } @@ -1169,7 +1157,8 @@ static void ocrdma_del_qpn_map(struct ocrdma_dev *dev, struct ocrdma_qp *qp) } static int ocrdma_check_qp_params(struct ib_pd *ibpd, struct ocrdma_dev *dev, - struct ib_qp_init_attr *attrs) + struct ib_qp_init_attr *attrs, + struct ib_udata *udata) { if ((attrs->qp_type != IB_QPT_GSI) && (attrs->qp_type != IB_QPT_RC) && @@ -1217,7 +1206,7 @@ static int ocrdma_check_qp_params(struct ib_pd *ibpd, struct ocrdma_dev *dev, return -EINVAL; } /* unprivileged user space cannot create special QP */ - if (ibpd->uobject && attrs->qp_type == IB_QPT_GSI) { + if (udata && attrs->qp_type == IB_QPT_GSI) { pr_err ("%s(%d) Userspace can't create special QPs of type=0x%x\n", __func__, dev->id, attrs->qp_type); @@ -1374,7 +1363,7 @@ struct ib_qp *ocrdma_create_qp(struct ib_pd *ibpd, struct ocrdma_create_qp_ureq ureq; u16 dpp_credit_lmt, dpp_offset; - status = ocrdma_check_qp_params(ibpd, dev, 
attrs); + status = ocrdma_check_qp_params(ibpd, dev, attrs, udata); if (status) goto gen_err; diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c index 8d6ff9df49fe..75940e2a8791 100644 --- a/drivers/infiniband/hw/qedr/main.c +++ b/drivers/infiniband/hw/qedr/main.c @@ -160,12 +160,16 @@ static const struct attribute_group qedr_attr_group = { .attrs = qedr_attributes, }; +static const struct ib_device_ops qedr_iw_dev_ops = { + .get_port_immutable = qedr_iw_port_immutable, + .query_gid = qedr_iw_query_gid, +}; + static int qedr_iw_register_device(struct qedr_dev *dev) { dev->ibdev.node_type = RDMA_NODE_RNIC; - dev->ibdev.query_gid = qedr_iw_query_gid; - dev->ibdev.get_port_immutable = qedr_iw_port_immutable; + ib_set_device_ops(&dev->ibdev, &qedr_iw_dev_ops); dev->ibdev.iwcm = kzalloc(sizeof(*dev->ibdev.iwcm), GFP_KERNEL); if (!dev->ibdev.iwcm) @@ -186,13 +190,56 @@ static int qedr_iw_register_device(struct qedr_dev *dev) return 0; } +static const struct ib_device_ops qedr_roce_dev_ops = { + .get_port_immutable = qedr_roce_port_immutable, +}; + static void qedr_roce_register_device(struct qedr_dev *dev) { dev->ibdev.node_type = RDMA_NODE_IB_CA; - dev->ibdev.get_port_immutable = qedr_roce_port_immutable; + ib_set_device_ops(&dev->ibdev, &qedr_roce_dev_ops); } +static const struct ib_device_ops qedr_dev_ops = { + .alloc_mr = qedr_alloc_mr, + .alloc_pd = qedr_alloc_pd, + .alloc_ucontext = qedr_alloc_ucontext, + .create_ah = qedr_create_ah, + .create_cq = qedr_create_cq, + .create_qp = qedr_create_qp, + .create_srq = qedr_create_srq, + .dealloc_pd = qedr_dealloc_pd, + .dealloc_ucontext = qedr_dealloc_ucontext, + .dereg_mr = qedr_dereg_mr, + .destroy_ah = qedr_destroy_ah, + .destroy_cq = qedr_destroy_cq, + .destroy_qp = qedr_destroy_qp, + .destroy_srq = qedr_destroy_srq, + .get_dev_fw_str = qedr_get_dev_fw_str, + .get_dma_mr = qedr_get_dma_mr, + .get_link_layer = qedr_link_layer, + .get_netdev = qedr_get_netdev, + .map_mr_sg = qedr_map_mr_sg, + .mmap = qedr_mmap, + .modify_port = qedr_modify_port, + .modify_qp = qedr_modify_qp, + .modify_srq = qedr_modify_srq, + .poll_cq = qedr_poll_cq, + .post_recv = qedr_post_recv, + .post_send = qedr_post_send, + .post_srq_recv = qedr_post_srq_recv, + .process_mad = qedr_process_mad, + .query_device = qedr_query_device, + .query_pkey = qedr_query_pkey, + .query_port = qedr_query_port, + .query_qp = qedr_query_qp, + .query_srq = qedr_query_srq, + .reg_user_mr = qedr_reg_user_mr, + .req_notify_cq = qedr_arm_cq, + .resize_cq = qedr_resize_cq, +}; + static int qedr_register_device(struct qedr_dev *dev) { int rc; @@ -237,57 +284,11 @@ static int qedr_register_device(struct qedr_dev *dev) dev->ibdev.phys_port_cnt = 1; dev->ibdev.num_comp_vectors = dev->num_cnq; - - dev->ibdev.query_device = qedr_query_device; - dev->ibdev.query_port = qedr_query_port; - dev->ibdev.modify_port = qedr_modify_port; - - dev->ibdev.alloc_ucontext = qedr_alloc_ucontext; - dev->ibdev.dealloc_ucontext = qedr_dealloc_ucontext; - dev->ibdev.mmap = qedr_mmap; - - dev->ibdev.alloc_pd = qedr_alloc_pd; - dev->ibdev.dealloc_pd = qedr_dealloc_pd; - - dev->ibdev.create_cq = qedr_create_cq; - dev->ibdev.destroy_cq = qedr_destroy_cq; - dev->ibdev.resize_cq = qedr_resize_cq; - dev->ibdev.req_notify_cq = qedr_arm_cq; - - dev->ibdev.create_qp = qedr_create_qp; - dev->ibdev.modify_qp = qedr_modify_qp; - dev->ibdev.query_qp = qedr_query_qp; - dev->ibdev.destroy_qp = qedr_destroy_qp; - - dev->ibdev.create_srq = qedr_create_srq; - dev->ibdev.destroy_srq = qedr_destroy_srq; - 
dev->ibdev.modify_srq = qedr_modify_srq; - dev->ibdev.query_srq = qedr_query_srq; - dev->ibdev.post_srq_recv = qedr_post_srq_recv; - dev->ibdev.query_pkey = qedr_query_pkey; - - dev->ibdev.create_ah = qedr_create_ah; - dev->ibdev.destroy_ah = qedr_destroy_ah; - - dev->ibdev.get_dma_mr = qedr_get_dma_mr; - dev->ibdev.dereg_mr = qedr_dereg_mr; - dev->ibdev.reg_user_mr = qedr_reg_user_mr; - dev->ibdev.alloc_mr = qedr_alloc_mr; - dev->ibdev.map_mr_sg = qedr_map_mr_sg; - - dev->ibdev.poll_cq = qedr_poll_cq; - dev->ibdev.post_send = qedr_post_send; - dev->ibdev.post_recv = qedr_post_recv; - - dev->ibdev.process_mad = qedr_process_mad; - - dev->ibdev.get_netdev = qedr_get_netdev; - dev->ibdev.dev.parent = &dev->pdev->dev; - dev->ibdev.get_link_layer = qedr_link_layer; - dev->ibdev.get_dev_fw_str = qedr_get_dev_fw_str; rdma_set_device_sysfs_group(&dev->ibdev, &qedr_attr_group); + ib_set_device_ops(&dev->ibdev, &qedr_dev_ops); + dev->ibdev.driver_id = RDMA_DRIVER_QEDR; return ib_register_device(&dev->ibdev, "qedr%d", NULL); } diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index 82ee4b4a7084..b342a70e2814 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -216,10 +216,6 @@ int qedr_query_port(struct ib_device *ibdev, u8 port, struct ib_port_attr *attr) struct qed_rdma_port *rdma_port; dev = get_qedr_dev(ibdev); - if (port > 1) { - DP_ERR(dev, "invalid_port=0x%x\n", port); - return -EINVAL; - } if (!dev->rdma_ctx) { DP_ERR(dev, "rdma_ctx is NULL\n"); @@ -263,14 +259,6 @@ int qedr_query_port(struct ib_device *ibdev, u8 port, struct ib_port_attr *attr) int qedr_modify_port(struct ib_device *ibdev, u8 port, int mask, struct ib_port_modify *props) { - struct qedr_dev *dev; - - dev = get_qedr_dev(ibdev); - if (port > 1) { - DP_ERR(dev, "invalid_port=0x%x\n", port); - return -EINVAL; - } - return 0; } @@ -1148,7 +1136,8 @@ static inline int get_gid_info_from_table(struct ib_qp *ibqp, } static int qedr_check_qp_attrs(struct ib_pd *ibpd, struct qedr_dev *dev, - struct ib_qp_init_attr *attrs) + struct ib_qp_init_attr *attrs, + struct ib_udata *udata) { struct qedr_device_attr *qattr = &dev->attr; @@ -1189,7 +1178,7 @@ static int qedr_check_qp_attrs(struct ib_pd *ibpd, struct qedr_dev *dev, } /* Unprivileged user space cannot create special QP */ - if (ibpd->uobject && attrs->qp_type == IB_QPT_GSI) { + if (udata && attrs->qp_type == IB_QPT_GSI) { DP_ERR(dev, "create qp: userspace can't create special QPs of type=0x%x\n", attrs->qp_type); @@ -1552,7 +1541,7 @@ int qedr_destroy_srq(struct ib_srq *ibsrq) in_params.srq_id = srq->srq_id; dev->ops->rdma_destroy_srq(dev->rdma_ctx, &in_params); - if (ibsrq->pd->uobject) + if (ibsrq->uobject) qedr_free_srq_user_params(srq); else qedr_free_srq_kernel_params(srq); @@ -2005,7 +1994,7 @@ struct ib_qp *qedr_create_qp(struct ib_pd *ibpd, DP_DEBUG(dev, QEDR_MSG_QP, "create qp: called from %s, pd=%p\n", udata ? 
"user library" : "kernel", pd); - rc = qedr_check_qp_attrs(ibpd, dev, attrs); + rc = qedr_check_qp_attrs(ibpd, dev, attrs, udata); if (rc) return ERR_PTR(rc); @@ -2626,7 +2615,7 @@ int qedr_destroy_qp(struct ib_qp *ibqp) } struct ib_ah *qedr_create_ah(struct ib_pd *ibpd, struct rdma_ah_attr *attr, - struct ib_udata *udata) + u32 flags, struct ib_udata *udata) { struct qedr_ah *ah; @@ -2639,7 +2628,7 @@ struct ib_ah *qedr_create_ah(struct ib_pd *ibpd, struct rdma_ah_attr *attr, return &ah->ibah; } -int qedr_destroy_ah(struct ib_ah *ibah) +int qedr_destroy_ah(struct ib_ah *ibah, u32 flags) { struct qedr_ah *ah = get_qedr_ah(ibah); diff --git a/drivers/infiniband/hw/qedr/verbs.h b/drivers/infiniband/hw/qedr/verbs.h index 0b7d0124b16c..1852b7012bf4 100644 --- a/drivers/infiniband/hw/qedr/verbs.h +++ b/drivers/infiniband/hw/qedr/verbs.h @@ -76,8 +76,8 @@ int qedr_destroy_srq(struct ib_srq *ibsrq); int qedr_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr, const struct ib_recv_wr **bad_recv_wr); struct ib_ah *qedr_create_ah(struct ib_pd *ibpd, struct rdma_ah_attr *attr, - struct ib_udata *udata); -int qedr_destroy_ah(struct ib_ah *ibah); + u32 flags, struct ib_udata *udata); +int qedr_destroy_ah(struct ib_ah *ibah, u32 flags); int qedr_dereg_mr(struct ib_mr *); struct ib_mr *qedr_get_dma_mr(struct ib_pd *, int acc); diff --git a/drivers/infiniband/hw/qib/qib_iba6120.c b/drivers/infiniband/hw/qib/qib_iba6120.c index fb1ff59f40bd..cdbf707fa267 100644 --- a/drivers/infiniband/hw/qib/qib_iba6120.c +++ b/drivers/infiniband/hw/qib/qib_iba6120.c @@ -3237,7 +3237,6 @@ static int init_6120_variables(struct qib_devdata *dd) /* we always allocate at least 2048 bytes for eager buffers */ ret = ib_mtu_enum_to_int(qib_ibmtu); dd->rcvegrbufsize = ret != -1 ? max(ret, 2048) : QIB_DEFAULT_MTU; - BUG_ON(!is_power_of_2(dd->rcvegrbufsize)); dd->rcvegrbufsize_shift = ilog2(dd->rcvegrbufsize); qib_6120_tidtemplate(dd); diff --git a/drivers/infiniband/hw/qib/qib_iba7220.c b/drivers/infiniband/hw/qib/qib_iba7220.c index 163a57a88742..9fde45538f6e 100644 --- a/drivers/infiniband/hw/qib/qib_iba7220.c +++ b/drivers/infiniband/hw/qib/qib_iba7220.c @@ -4043,7 +4043,6 @@ static int qib_init_7220_variables(struct qib_devdata *dd) /* we always allocate at least 2048 bytes for eager buffers */ ret = ib_mtu_enum_to_int(qib_ibmtu); dd->rcvegrbufsize = ret != -1 ? 
max(ret, 2048) : QIB_DEFAULT_MTU; - BUG_ON(!is_power_of_2(dd->rcvegrbufsize)); dd->rcvegrbufsize_shift = ilog2(dd->rcvegrbufsize); qib_7220_tidtemplate(dd); @@ -4252,7 +4251,6 @@ static int init_sdma_7220_regs(struct qib_pportdata *ppd) unsigned word = i / 64; unsigned bit = i & 63; - BUG_ON(word >= 3); senddmabufmask[word] |= 1ULL << bit; } qib_write_kreg(dd, kr_senddmabufmask0, senddmabufmask[0]); diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c index bf5e222eed8e..17d6b24b3473 100644 --- a/drivers/infiniband/hw/qib/qib_iba7322.c +++ b/drivers/infiniband/hw/qib/qib_iba7322.c @@ -1382,7 +1382,6 @@ static void err_decode(char *msg, size_t len, u64 errs, *msg++ = ','; len--; } - BUG_ON(!msp->sz); /* msp->sz counts the nul */ took = min_t(size_t, msp->sz - (size_t)1, len); memcpy(msg, msp->msg, took); @@ -6599,7 +6598,6 @@ static int qib_init_7322_variables(struct qib_devdata *dd) /* we always allocate at least 2048 bytes for eager buffers */ dd->rcvegrbufsize = max(mtu, 2048); - BUG_ON(!is_power_of_2(dd->rcvegrbufsize)); dd->rcvegrbufsize_shift = ilog2(dd->rcvegrbufsize); qib_7322_tidtemplate(dd); @@ -6904,7 +6902,6 @@ static int init_sdma_7322_regs(struct qib_pportdata *ppd) unsigned word = erstbuf / BITS_PER_LONG; unsigned bit = erstbuf & (BITS_PER_LONG - 1); - BUG_ON(word >= 3); senddmabufmask[word] |= 1ULL << bit; } qib_write_kreg_port(ppd, krp_senddmabufmask0, senddmabufmask[0]); diff --git a/drivers/infiniband/hw/qib/qib_init.c b/drivers/infiniband/hw/qib/qib_init.c index d7cdc77d6306..9fd69903ca57 100644 --- a/drivers/infiniband/hw/qib/qib_init.c +++ b/drivers/infiniband/hw/qib/qib_init.c @@ -209,7 +209,6 @@ struct qib_ctxtdata *qib_create_ctxtdata(struct qib_pportdata *ppd, u32 ctxt, rcd->rcvegrbuf_chunks = (rcd->rcvegrcnt + rcd->rcvegrbufs_perchunk - 1) / rcd->rcvegrbufs_perchunk; - BUG_ON(!is_power_of_2(rcd->rcvegrbufs_perchunk)); rcd->rcvegrbufs_perchunk_shift = ilog2(rcd->rcvegrbufs_perchunk); } diff --git a/drivers/infiniband/hw/qib/qib_mad.c b/drivers/infiniband/hw/qib/qib_mad.c index 4845d000c22f..f92faf5ec369 100644 --- a/drivers/infiniband/hw/qib/qib_mad.c +++ b/drivers/infiniband/hw/qib/qib_mad.c @@ -2494,5 +2494,6 @@ void qib_notify_free_mad_agent(struct rvt_dev_info *rdi, int port_idx) del_timer_sync(&dd->pport[port_idx].cong_stats.timer); if (dd->pport[port_idx].ibport_data.smi_ah) - rdma_destroy_ah(&dd->pport[port_idx].ibport_data.smi_ah->ibah); + rdma_destroy_ah(&dd->pport[port_idx].ibport_data.smi_ah->ibah, + RDMA_DESTROY_AH_SLEEPABLE); } diff --git a/drivers/infiniband/hw/qib/qib_pcie.c b/drivers/infiniband/hw/qib/qib_pcie.c index 30595b358d8f..864f2af171f7 100644 --- a/drivers/infiniband/hw/qib/qib_pcie.c +++ b/drivers/infiniband/hw/qib/qib_pcie.c @@ -387,7 +387,7 @@ void qib_pcie_reenable(struct qib_devdata *dd, u16 cmd, u8 iline, u8 cline) static int qib_pcie_coalesce; module_param_named(pcie_coalesce, qib_pcie_coalesce, int, S_IRUGO); -MODULE_PARM_DESC(pcie_coalesce, "tune PCIe colescing on some Intel chipsets"); +MODULE_PARM_DESC(pcie_coalesce, "tune PCIe coalescing on some Intel chipsets"); /* * Enable PCIe completion and data coalescing, on Intel 5x00 and 7300 diff --git a/drivers/infiniband/hw/qib/qib_sdma.c b/drivers/infiniband/hw/qib/qib_sdma.c index 757d4c9d713d..3d64081c4819 100644 --- a/drivers/infiniband/hw/qib/qib_sdma.c +++ b/drivers/infiniband/hw/qib/qib_sdma.c @@ -572,12 +572,13 @@ retry: len = sge->length; if (len > sge->sge_length) len = sge->sge_length; - BUG_ON(len == 0); dw = (len + 3) >> 2; addr = 
dma_map_single(&ppd->dd->pcidev->dev, sge->vaddr, dw << 2, DMA_TO_DEVICE); - if (dma_mapping_error(&ppd->dd->pcidev->dev, addr)) + if (dma_mapping_error(&ppd->dd->pcidev->dev, addr)) { + ret = -ENOMEM; goto unmap; + } sdmadesc[0] = 0; make_sdma_desc(ppd, sdmadesc, (u64) addr, dw, dwoffset); /* SDmaUseLargeBuf has to be set in every descriptor */ diff --git a/drivers/infiniband/hw/qib/qib_ud.c b/drivers/infiniband/hw/qib/qib_ud.c index 4d4c31ea4e2d..868da0ece7ba 100644 --- a/drivers/infiniband/hw/qib/qib_ud.c +++ b/drivers/infiniband/hw/qib/qib_ud.c @@ -178,7 +178,6 @@ static void qib_ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe) len = length; if (len > sge->sge_length) len = sge->sge_length; - BUG_ON(len == 0); rvt_copy_sge(qp, &qp->r_sge, sge->vaddr, len, true, false); sge->vaddr += len; sge->length -= len; diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c index 926f3c8eba69..31c523b2a9f5 100644 --- a/drivers/infiniband/hw/qib/qib_user_sdma.c +++ b/drivers/infiniband/hw/qib/qib_user_sdma.c @@ -237,7 +237,6 @@ qib_user_sdma_queue_create(struct device *dev, int unit, int ctxt, int sctxt) ret = qib_user_sdma_rb_insert(&qib_user_sdma_rb_root, sdma_rb_node); - BUG_ON(ret == 0); } pq->sdma_rb_node = sdma_rb_node; diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c index 4b0f5761a646..276304f611ab 100644 --- a/drivers/infiniband/hw/qib/qib_verbs.c +++ b/drivers/infiniband/hw/qib/qib_verbs.c @@ -150,7 +150,6 @@ static u32 qib_count_sge(struct rvt_sge_state *ss, u32 length) len = length; if (len > sge.sge_length) len = sge.sge_length; - BUG_ON(len == 0); if (((long) sge.vaddr & (sizeof(u32) - 1)) || (len != length && (len & (sizeof(u32) - 1)))) { ndesc = 0; @@ -193,7 +192,6 @@ static void qib_copy_from_sge(void *data, struct rvt_sge_state *ss, u32 length) len = length; if (len > sge->sge_length) len = sge->sge_length; - BUG_ON(len == 0); memcpy(data, sge->vaddr, len); sge->vaddr += len; sge->length -= len; @@ -449,7 +447,6 @@ static void copy_io(u32 __iomem *piobuf, struct rvt_sge_state *ss, len = length; if (len > ss->sge.sge_length) len = ss->sge.sge_length; - BUG_ON(len == 0); /* If the source address is not aligned, try to align it. 
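
A note on the qib_sdma.c hunk above: dma_map_single() can fail, the only portable way to detect that is dma_mapping_error(), and the queueing path now reports -ENOMEM instead of building a descriptor from a bad handle. A minimal sketch of the pattern, with a hypothetical helper name (demo_map_for_send):

#include <linux/dma-mapping.h>

/* Map one buffer for CPU-to-device DMA; returns 0 or -ENOMEM. */
static int demo_map_for_send(struct device *dev, void *vaddr, size_t len,
			     dma_addr_t *out)
{
	dma_addr_t addr = dma_map_single(dev, vaddr, len, DMA_TO_DEVICE);

	/*
	 * The handle is opaque: test it only with dma_mapping_error(),
	 * never by comparing against 0 or ~0.
	 */
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	*out = addr;
	return 0;
}

The same reasoning seems to drive the BUG_ON() removals throughout these qib hunks: zero-length SGEs and non-power-of-2 buffer sizes are either impossible by construction or better handled by an error return, so the series prefers trusting the invariant over crashing the machine.
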
*/ off = (unsigned long)ss->sge.vaddr & (sizeof(u32) - 1); if (off) { @@ -1365,7 +1362,7 @@ struct ib_ah *qib_create_qp0_ah(struct qib_ibport *ibp, u16 dlid) rcu_read_lock(); qp0 = rcu_dereference(ibp->rvp.qp[0]); if (qp0) - ah = rdma_create_ah(qp0->ibqp.pd, &attr); + ah = rdma_create_ah(qp0->ibqp.pd, &attr, 0); rcu_read_unlock(); return ah; } @@ -1496,6 +1493,11 @@ static void qib_fill_device_attr(struct qib_devdata *dd) dd->verbs_dev.rdi.wc_opcode = ib_qib_wc_opcode; } +static const struct ib_device_ops qib_dev_ops = { + .modify_device = qib_modify_device, + .process_mad = qib_process_mad, +}; + /** * qib_register_ib_device - register our device with the infiniband core * @dd: the device data structure @@ -1558,8 +1560,6 @@ int qib_register_ib_device(struct qib_devdata *dd) ibdev->node_guid = ppd->guid; ibdev->phys_port_cnt = dd->num_pports; ibdev->dev.parent = &dd->pcidev->dev; - ibdev->modify_device = qib_modify_device; - ibdev->process_mad = qib_process_mad; snprintf(ibdev->node_desc, sizeof(ibdev->node_desc), "Intel Infiniband HCA %s", init_utsname()->nodename); @@ -1627,6 +1627,7 @@ int qib_register_ib_device(struct qib_devdata *dd) } rdma_set_device_sysfs_group(&dd->verbs_dev.rdi.ibdev, &qib_attr_group); + ib_set_device_ops(ibdev, &qib_dev_ops); ret = rvt_register_device(&dd->verbs_dev.rdi, RDMA_DRIVER_QIB); if (ret) goto err_tx; diff --git a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c index 73bd00f8d2c8..b2323a52a0dd 100644 --- a/drivers/infiniband/hw/usnic/usnic_ib_main.c +++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c @@ -330,6 +330,37 @@ static void usnic_get_dev_fw_str(struct ib_device *device, char *str) snprintf(str, IB_FW_VERSION_NAME_MAX, "%s", info.fw_version); } +static const struct ib_device_ops usnic_dev_ops = { + .alloc_pd = usnic_ib_alloc_pd, + .alloc_ucontext = usnic_ib_alloc_ucontext, + .create_ah = usnic_ib_create_ah, + .create_cq = usnic_ib_create_cq, + .create_qp = usnic_ib_create_qp, + .dealloc_pd = usnic_ib_dealloc_pd, + .dealloc_ucontext = usnic_ib_dealloc_ucontext, + .dereg_mr = usnic_ib_dereg_mr, + .destroy_ah = usnic_ib_destroy_ah, + .destroy_cq = usnic_ib_destroy_cq, + .destroy_qp = usnic_ib_destroy_qp, + .get_dev_fw_str = usnic_get_dev_fw_str, + .get_dma_mr = usnic_ib_get_dma_mr, + .get_link_layer = usnic_ib_port_link_layer, + .get_netdev = usnic_get_netdev, + .get_port_immutable = usnic_port_immutable, + .mmap = usnic_ib_mmap, + .modify_qp = usnic_ib_modify_qp, + .poll_cq = usnic_ib_poll_cq, + .post_recv = usnic_ib_post_recv, + .post_send = usnic_ib_post_send, + .query_device = usnic_ib_query_device, + .query_gid = usnic_ib_query_gid, + .query_pkey = usnic_ib_query_pkey, + .query_port = usnic_ib_query_port, + .query_qp = usnic_ib_query_qp, + .reg_user_mr = usnic_ib_reg_mr, + .req_notify_cq = usnic_ib_req_notify_cq, +}; + /* Start of PF discovery section */ static void *usnic_ib_device_add(struct pci_dev *dev) { @@ -386,35 +417,7 @@ static void *usnic_ib_device_add(struct pci_dev *dev) (1ull << IB_USER_VERBS_CMD_DETACH_MCAST) | (1ull << IB_USER_VERBS_CMD_OPEN_QP); - us_ibdev->ib_dev.query_device = usnic_ib_query_device; - us_ibdev->ib_dev.query_port = usnic_ib_query_port; - us_ibdev->ib_dev.query_pkey = usnic_ib_query_pkey; - us_ibdev->ib_dev.query_gid = usnic_ib_query_gid; - us_ibdev->ib_dev.get_netdev = usnic_get_netdev; - us_ibdev->ib_dev.get_link_layer = usnic_ib_port_link_layer; - us_ibdev->ib_dev.alloc_pd = usnic_ib_alloc_pd; - us_ibdev->ib_dev.dealloc_pd = usnic_ib_dealloc_pd; - 
us_ibdev->ib_dev.create_qp = usnic_ib_create_qp; - us_ibdev->ib_dev.modify_qp = usnic_ib_modify_qp; - us_ibdev->ib_dev.query_qp = usnic_ib_query_qp; - us_ibdev->ib_dev.destroy_qp = usnic_ib_destroy_qp; - us_ibdev->ib_dev.create_cq = usnic_ib_create_cq; - us_ibdev->ib_dev.destroy_cq = usnic_ib_destroy_cq; - us_ibdev->ib_dev.reg_user_mr = usnic_ib_reg_mr; - us_ibdev->ib_dev.dereg_mr = usnic_ib_dereg_mr; - us_ibdev->ib_dev.alloc_ucontext = usnic_ib_alloc_ucontext; - us_ibdev->ib_dev.dealloc_ucontext = usnic_ib_dealloc_ucontext; - us_ibdev->ib_dev.mmap = usnic_ib_mmap; - us_ibdev->ib_dev.create_ah = usnic_ib_create_ah; - us_ibdev->ib_dev.destroy_ah = usnic_ib_destroy_ah; - us_ibdev->ib_dev.post_send = usnic_ib_post_send; - us_ibdev->ib_dev.post_recv = usnic_ib_post_recv; - us_ibdev->ib_dev.poll_cq = usnic_ib_poll_cq; - us_ibdev->ib_dev.req_notify_cq = usnic_ib_req_notify_cq; - us_ibdev->ib_dev.get_dma_mr = usnic_ib_get_dma_mr; - us_ibdev->ib_dev.get_port_immutable = usnic_port_immutable; - us_ibdev->ib_dev.get_dev_fw_str = usnic_get_dev_fw_str; - + ib_set_device_ops(&us_ibdev->ib_dev, &usnic_dev_ops); us_ibdev->ib_dev.driver_id = RDMA_DRIVER_USNIC; rdma_set_device_sysfs_group(&us_ibdev->ib_dev, &usnic_attr_group); @@ -649,7 +652,7 @@ static int __init usnic_ib_init(void) err = usnic_uiom_init(DRV_NAME); if (err) { - usnic_err("Unable to initalize umem with err %d\n", err); + usnic_err("Unable to initialize umem with err %d\n", err); return err; } diff --git a/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c b/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c index bf5136533d49..0cdb156e165e 100644 --- a/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c +++ b/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c @@ -681,7 +681,7 @@ usnic_ib_qp_grp_create(struct usnic_fwd_dev *ufdev, struct usnic_ib_vf *vf, err = usnic_vnic_res_spec_satisfied(&min_transport_spec[transport], res_spec); if (err) { - usnic_err("Spec does not meet miniumum req for transport %d\n", + usnic_err("Spec does not meet minimum req for transport %d\n", transport); log_spec(res_spec); return ERR_PTR(err); diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c index 0b91ff36768a..1d4abef17e38 100644 --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c @@ -336,13 +336,16 @@ int usnic_ib_query_port(struct ib_device *ibdev, u8 port, usnic_dbg("\n"); - mutex_lock(&us_ibdev->usdev_lock); if (ib_get_eth_speed(ibdev, port, &props->active_speed, - &props->active_width)) { - mutex_unlock(&us_ibdev->usdev_lock); + &props->active_width)) return -EINVAL; - } + /* + * usdev_lock is acquired after (and not before) ib_get_eth_speed call + * because acquiring rtnl_lock in ib_get_eth_speed, while holding + * usdev_lock could lead to a deadlock. 
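
The comment being added to usnic_ib_query_port() here records a classic ABBA hazard: the old code took usdev_lock and then called ib_get_eth_speed(), which takes rtnl_lock internally, so any path acquiring rtnl_lock before usdev_lock could deadlock against it. The fix completes the rtnl-taking call before taking the mutex, imposing a single acquisition order. A hedged sketch of the rule, with made-up names (demo_dev_lock, demo_get_eth_speed):

#include <linux/mutex.h>
#include <linux/rtnetlink.h>

static DEFINE_MUTEX(demo_dev_lock);

/* Stand-in for ib_get_eth_speed(): takes rtnl_lock internally. */
static int demo_get_eth_speed(void)
{
	rtnl_lock();
	/* ... read link speed and width from the netdev ... */
	rtnl_unlock();
	return 0;
}

static int demo_query_port(void)
{
	int ret;

	/*
	 * Finish the rtnl-taking helper before taking demo_dev_lock,
	 * so rtnl_lock is always acquired first on every path.
	 */
	ret = demo_get_eth_speed();
	if (ret)
		return ret;

	mutex_lock(&demo_dev_lock);
	/* ... fill the remaining attributes under the device lock ... */
	mutex_unlock(&demo_dev_lock);
	return 0;
}
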
+ */ + mutex_lock(&us_ibdev->usdev_lock); /* props being zeroed by the caller, avoid zeroing it here */ props->lid = 0; @@ -760,6 +763,7 @@ int usnic_ib_mmap(struct ib_ucontext *context, /* In ib callbacks section - Start of stub funcs */ struct ib_ah *usnic_ib_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata) { @@ -767,7 +771,7 @@ struct ib_ah *usnic_ib_create_ah(struct ib_pd *pd, return ERR_PTR(-EPERM); } -int usnic_ib_destroy_ah(struct ib_ah *ah) +int usnic_ib_destroy_ah(struct ib_ah *ah, u32 flags) { usnic_dbg("\n"); return -EINVAL; diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h index 2a2c9beb715f..e33144261b9a 100644 --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h @@ -77,9 +77,10 @@ int usnic_ib_mmap(struct ib_ucontext *context, struct vm_area_struct *vma); struct ib_ah *usnic_ib_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 flags, struct ib_udata *udata); -int usnic_ib_destroy_ah(struct ib_ah *ah); +int usnic_ib_destroy_ah(struct ib_ah *ah, u32 flags); int usnic_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, const struct ib_send_wr **bad_wr); int usnic_ib_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c index 398443f43dc3..eaa109dbc96a 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c @@ -161,6 +161,49 @@ static struct net_device *pvrdma_get_netdev(struct ib_device *ibdev, return netdev; } +static const struct ib_device_ops pvrdma_dev_ops = { + .add_gid = pvrdma_add_gid, + .alloc_mr = pvrdma_alloc_mr, + .alloc_pd = pvrdma_alloc_pd, + .alloc_ucontext = pvrdma_alloc_ucontext, + .create_ah = pvrdma_create_ah, + .create_cq = pvrdma_create_cq, + .create_qp = pvrdma_create_qp, + .dealloc_pd = pvrdma_dealloc_pd, + .dealloc_ucontext = pvrdma_dealloc_ucontext, + .del_gid = pvrdma_del_gid, + .dereg_mr = pvrdma_dereg_mr, + .destroy_ah = pvrdma_destroy_ah, + .destroy_cq = pvrdma_destroy_cq, + .destroy_qp = pvrdma_destroy_qp, + .get_dev_fw_str = pvrdma_get_fw_ver_str, + .get_dma_mr = pvrdma_get_dma_mr, + .get_link_layer = pvrdma_port_link_layer, + .get_netdev = pvrdma_get_netdev, + .get_port_immutable = pvrdma_port_immutable, + .map_mr_sg = pvrdma_map_mr_sg, + .mmap = pvrdma_mmap, + .modify_port = pvrdma_modify_port, + .modify_qp = pvrdma_modify_qp, + .poll_cq = pvrdma_poll_cq, + .post_recv = pvrdma_post_recv, + .post_send = pvrdma_post_send, + .query_device = pvrdma_query_device, + .query_gid = pvrdma_query_gid, + .query_pkey = pvrdma_query_pkey, + .query_port = pvrdma_query_port, + .query_qp = pvrdma_query_qp, + .reg_user_mr = pvrdma_reg_user_mr, + .req_notify_cq = pvrdma_req_notify_cq, +}; + +static const struct ib_device_ops pvrdma_dev_srq_ops = { + .create_srq = pvrdma_create_srq, + .destroy_srq = pvrdma_destroy_srq, + .modify_srq = pvrdma_modify_srq, + .query_srq = pvrdma_query_srq, +}; + static int pvrdma_register_device(struct pvrdma_dev *dev) { int ret = -1; @@ -197,39 +240,7 @@ static int pvrdma_register_device(struct pvrdma_dev *dev) dev->ib_dev.node_type = RDMA_NODE_IB_CA; dev->ib_dev.phys_port_cnt = dev->dsr->caps.phys_port_cnt; - dev->ib_dev.query_device = pvrdma_query_device; - dev->ib_dev.query_port = pvrdma_query_port; - dev->ib_dev.query_gid = pvrdma_query_gid; - dev->ib_dev.query_pkey = pvrdma_query_pkey; - 
dev->ib_dev.modify_port = pvrdma_modify_port; - dev->ib_dev.alloc_ucontext = pvrdma_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = pvrdma_dealloc_ucontext; - dev->ib_dev.mmap = pvrdma_mmap; - dev->ib_dev.alloc_pd = pvrdma_alloc_pd; - dev->ib_dev.dealloc_pd = pvrdma_dealloc_pd; - dev->ib_dev.create_ah = pvrdma_create_ah; - dev->ib_dev.destroy_ah = pvrdma_destroy_ah; - dev->ib_dev.create_qp = pvrdma_create_qp; - dev->ib_dev.modify_qp = pvrdma_modify_qp; - dev->ib_dev.query_qp = pvrdma_query_qp; - dev->ib_dev.destroy_qp = pvrdma_destroy_qp; - dev->ib_dev.post_send = pvrdma_post_send; - dev->ib_dev.post_recv = pvrdma_post_recv; - dev->ib_dev.create_cq = pvrdma_create_cq; - dev->ib_dev.destroy_cq = pvrdma_destroy_cq; - dev->ib_dev.poll_cq = pvrdma_poll_cq; - dev->ib_dev.req_notify_cq = pvrdma_req_notify_cq; - dev->ib_dev.get_dma_mr = pvrdma_get_dma_mr; - dev->ib_dev.reg_user_mr = pvrdma_reg_user_mr; - dev->ib_dev.dereg_mr = pvrdma_dereg_mr; - dev->ib_dev.alloc_mr = pvrdma_alloc_mr; - dev->ib_dev.map_mr_sg = pvrdma_map_mr_sg; - dev->ib_dev.add_gid = pvrdma_add_gid; - dev->ib_dev.del_gid = pvrdma_del_gid; - dev->ib_dev.get_netdev = pvrdma_get_netdev; - dev->ib_dev.get_port_immutable = pvrdma_port_immutable; - dev->ib_dev.get_link_layer = pvrdma_port_link_layer; - dev->ib_dev.get_dev_fw_str = pvrdma_get_fw_ver_str; + ib_set_device_ops(&dev->ib_dev, &pvrdma_dev_ops); mutex_init(&dev->port_mutex); spin_lock_init(&dev->desc_lock); @@ -255,10 +266,7 @@ static int pvrdma_register_device(struct pvrdma_dev *dev) (1ull << IB_USER_VERBS_CMD_DESTROY_SRQ) | (1ull << IB_USER_VERBS_CMD_POST_SRQ_RECV); - dev->ib_dev.create_srq = pvrdma_create_srq; - dev->ib_dev.modify_srq = pvrdma_modify_srq; - dev->ib_dev.query_srq = pvrdma_query_srq; - dev->ib_dev.destroy_srq = pvrdma_destroy_srq; + ib_set_device_ops(&dev->ib_dev, &pvrdma_dev_srq_ops); dev->srq_tbl = kcalloc(dev->dsr->caps.max_srq, sizeof(struct pvrdma_srq *), diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c index cf22f57a9f0d..3acf74cbe266 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c @@ -249,7 +249,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd, init_completion(&qp->free); qp->state = IB_QPS_RESET; - qp->is_kernel = !(pd->uobject && udata); + qp->is_kernel = !udata; if (!qp->is_kernel) { dev_dbg(&dev->pdev->dev, diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c index dc0ce877c7a3..06ba7c7a2235 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c @@ -111,7 +111,7 @@ struct ib_srq *pvrdma_create_srq(struct ib_pd *pd, unsigned long flags; int ret; - if (!(pd->uobject && udata)) { + if (!udata) { /* No support for kernel clients. */ dev_warn(&dev->pdev->dev, "no shared receive queue support for kernel client\n"); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c index b65d10b0a875..4d238d0e484b 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c @@ -533,11 +533,12 @@ int pvrdma_dealloc_pd(struct ib_pd *pd) * @pd: the protection domain * @ah_attr: the attributes of the AH * @udata: user data blob + * @flags: create address handle flags (see enum rdma_create_ah_flags) * * @return: the ib_ah pointer on success, otherwise errno. 
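
The usnic and pvrdma hunks above (and the qib and rdmavt ones elsewhere in this series) all apply the same recipe: collect the verbs callbacks in a static const struct ib_device_ops and register them with a single ib_set_device_ops() call, instead of assigning dozens of function pointers on struct ib_device one by one. The shape of a conversion, with hypothetical driver callbacks whose prototypes follow this tree's ops table:

#include <rdma/ib_verbs.h>

static int demo_query_port(struct ib_device *ibdev, u8 port_num,
			   struct ib_port_attr *attr)
{
	/* ... fill *attr from device state ... */
	return 0;
}

static int demo_modify_port(struct ib_device *ibdev, u8 port_num,
			    int mask, struct ib_port_modify *props)
{
	return 0;
}

static const struct ib_device_ops demo_dev_ops = {
	.modify_port = demo_modify_port,
	.query_port  = demo_query_port,
};

static int demo_register(struct ib_device *ibdev)
{
	/* Copies every callback set in demo_dev_ops into ibdev->ops. */
	ib_set_device_ops(ibdev, &demo_dev_ops);
	return 0;
}

Making the table const also lets it live in read-only memory; the alphabetical ordering of each driver's entries appears to be a readability convention of the series.
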
*/ struct ib_ah *pvrdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, - struct ib_udata *udata) + u32 flags, struct ib_udata *udata) { struct pvrdma_dev *dev = to_vdev(pd->device); struct pvrdma_ah *ah; @@ -555,7 +556,7 @@ struct ib_ah *pvrdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, if (!atomic_add_unless(&dev->num_ahs, 1, dev->dsr->caps.max_ah)) return ERR_PTR(-ENOMEM); - ah = kzalloc(sizeof(*ah), GFP_KERNEL); + ah = kzalloc(sizeof(*ah), GFP_ATOMIC); if (!ah) { atomic_dec(&dev->num_ahs); return ERR_PTR(-ENOMEM); @@ -581,10 +582,11 @@ struct ib_ah *pvrdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, /** * pvrdma_destroy_ah - destroy an address handle * @ah: the address handle to destroyed + * @flags: destroy address handle flags (see enum rdma_destroy_ah_flags) * * @return: 0 on success. */ -int pvrdma_destroy_ah(struct ib_ah *ah) +int pvrdma_destroy_ah(struct ib_ah *ah, u32 flags) { struct pvrdma_dev *dev = to_vdev(ah->device); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h index b2e3ab50cb08..f7f758d60110 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h @@ -420,8 +420,8 @@ int pvrdma_destroy_cq(struct ib_cq *cq); int pvrdma_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc); int pvrdma_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags); struct ib_ah *pvrdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, - struct ib_udata *udata); -int pvrdma_destroy_ah(struct ib_ah *ah); + u32 flags, struct ib_udata *udata); +int pvrdma_destroy_ah(struct ib_ah *ah, u32 flags); struct ib_srq *pvrdma_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *init_attr, diff --git a/drivers/infiniband/sw/rdmavt/ah.c b/drivers/infiniband/sw/rdmavt/ah.c index 084bb4baebb5..fc10e4e26ca7 100644 --- a/drivers/infiniband/sw/rdmavt/ah.c +++ b/drivers/infiniband/sw/rdmavt/ah.c @@ -91,6 +91,7 @@ EXPORT_SYMBOL(rvt_check_ah); * rvt_create_ah - create an address handle * @pd: the protection domain * @ah_attr: the attributes of the AH + * @create_flags: create address handle flags (see enum rdma_create_ah_flags) * @udata: pointer to user's input output buffer information. * * This may be called from interrupt context. 
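
The u32 flags argument threaded through create_ah/destroy_ah in these hunks exists because address handles are sometimes created from atomic context, as the rvt comment just above notes. A driver may branch its allocation mode on RDMA_CREATE_AH_SLEEPABLE; pvrdma above instead switches to GFP_ATOMIC unconditionally, the conservative choice. A sketch of the flag-aware variant, assuming a hypothetical struct demo_ah that wraps struct ib_ah:

#include <linux/err.h>
#include <linux/slab.h>
#include <rdma/ib_verbs.h>

struct demo_ah {
	struct ib_ah ibah;
	/* ... device-specific AH state ... */
};

static struct ib_ah *demo_create_ah(struct ib_pd *pd,
				    struct rdma_ah_attr *attr,
				    u32 flags, struct ib_udata *udata)
{
	/* Only sleep for the allocation when the core says we may. */
	gfp_t gfp = (flags & RDMA_CREATE_AH_SLEEPABLE) ?
		    GFP_KERNEL : GFP_ATOMIC;
	struct demo_ah *ah = kzalloc(sizeof(*ah), gfp);

	if (!ah)
		return ERR_PTR(-ENOMEM);
	/* ... translate *attr into the device's AH format ... */
	return &ah->ibah;
}
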
@@ -99,6 +100,7 @@ EXPORT_SYMBOL(rvt_check_ah); */ struct ib_ah *rvt_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 create_flags, struct ib_udata *udata) { struct rvt_ah *ah; @@ -135,10 +137,11 @@ struct ib_ah *rvt_create_ah(struct ib_pd *pd, /** * rvt_destory_ah - Destory an address handle * @ibah: address handle + * @destroy_flags: destroy address handle flags (see enum rdma_destroy_ah_flags) * * Return: 0 on success */ -int rvt_destroy_ah(struct ib_ah *ibah) +int rvt_destroy_ah(struct ib_ah *ibah, u32 destroy_flags) { struct rvt_dev_info *dev = ib_to_rvt(ibah->device); struct rvt_ah *ah = ibah_to_rvtah(ibah); diff --git a/drivers/infiniband/sw/rdmavt/ah.h b/drivers/infiniband/sw/rdmavt/ah.h index 25271b48a683..72431a618d5d 100644 --- a/drivers/infiniband/sw/rdmavt/ah.h +++ b/drivers/infiniband/sw/rdmavt/ah.h @@ -52,8 +52,9 @@ struct ib_ah *rvt_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr, + u32 create_flags, struct ib_udata *udata); -int rvt_destroy_ah(struct ib_ah *ibah); +int rvt_destroy_ah(struct ib_ah *ibah, u32 destroy_flags); int rvt_modify_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); int rvt_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); diff --git a/drivers/infiniband/sw/rdmavt/mad.c b/drivers/infiniband/sw/rdmavt/mad.c index d6981dc04adb..108c71e3ac23 100644 --- a/drivers/infiniband/sw/rdmavt/mad.c +++ b/drivers/infiniband/sw/rdmavt/mad.c @@ -160,7 +160,8 @@ void rvt_free_mad_agents(struct rvt_dev_info *rdi) ib_unregister_mad_agent(agent); } if (rvp->sm_ah) { - rdma_destroy_ah(&rvp->sm_ah->ibah); + rdma_destroy_ah(&rvp->sm_ah->ibah, + RDMA_DESTROY_AH_SLEEPABLE); rvp->sm_ah = NULL; } diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c index 1735deb1a9d4..a1bd8cfc2c25 100644 --- a/drivers/infiniband/sw/rdmavt/qp.c +++ b/drivers/infiniband/sw/rdmavt/qp.c @@ -1,5 +1,5 @@ /* - * Copyright(c) 2016, 2017 Intel Corporation. + * Copyright(c) 2016 - 2018 Intel Corporation. * * This file is provided under a dual BSD/GPLv2 license. When using or * redistributing this file, you may do so under either license. @@ -1094,6 +1094,13 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd, qp->ibqp.qp_num = err; qp->port_num = init_attr->port_num; rvt_init_qp(rdi, qp, init_attr->qp_type); + if (rdi->driver_f.qp_priv_init) { + err = rdi->driver_f.qp_priv_init(rdi, qp, init_attr); + if (err) { + ret = ERR_PTR(err); + goto bail_rq_wq; + } + } break; default: diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c index 723d3daf2eba..aef3aa3fe667 100644 --- a/drivers/infiniband/sw/rdmavt/vt.c +++ b/drivers/infiniband/sw/rdmavt/vt.c @@ -392,16 +392,51 @@ enum { _VERB_IDX_MAX /* Must always be last! 
*/ }; -static inline int check_driver_override(struct rvt_dev_info *rdi, - size_t offset, void *func) -{ - if (!*(void **)((void *)&rdi->ibdev + offset)) { - *(void **)((void *)&rdi->ibdev + offset) = func; - return 0; - } - - return 1; -} +static const struct ib_device_ops rvt_dev_ops = { + .alloc_fmr = rvt_alloc_fmr, + .alloc_mr = rvt_alloc_mr, + .alloc_pd = rvt_alloc_pd, + .alloc_ucontext = rvt_alloc_ucontext, + .attach_mcast = rvt_attach_mcast, + .create_ah = rvt_create_ah, + .create_cq = rvt_create_cq, + .create_qp = rvt_create_qp, + .create_srq = rvt_create_srq, + .dealloc_fmr = rvt_dealloc_fmr, + .dealloc_pd = rvt_dealloc_pd, + .dealloc_ucontext = rvt_dealloc_ucontext, + .dereg_mr = rvt_dereg_mr, + .destroy_ah = rvt_destroy_ah, + .destroy_cq = rvt_destroy_cq, + .destroy_qp = rvt_destroy_qp, + .destroy_srq = rvt_destroy_srq, + .detach_mcast = rvt_detach_mcast, + .get_dma_mr = rvt_get_dma_mr, + .get_port_immutable = rvt_get_port_immutable, + .map_mr_sg = rvt_map_mr_sg, + .map_phys_fmr = rvt_map_phys_fmr, + .mmap = rvt_mmap, + .modify_ah = rvt_modify_ah, + .modify_device = rvt_modify_device, + .modify_port = rvt_modify_port, + .modify_qp = rvt_modify_qp, + .modify_srq = rvt_modify_srq, + .poll_cq = rvt_poll_cq, + .post_recv = rvt_post_recv, + .post_send = rvt_post_send, + .post_srq_recv = rvt_post_srq_recv, + .query_ah = rvt_query_ah, + .query_device = rvt_query_device, + .query_gid = rvt_query_gid, + .query_pkey = rvt_query_pkey, + .query_port = rvt_query_port, + .query_qp = rvt_query_qp, + .query_srq = rvt_query_srq, + .reg_user_mr = rvt_reg_user_mr, + .req_notify_cq = rvt_req_notify_cq, + .resize_cq = rvt_resize_cq, + .unmap_fmr = rvt_unmap_fmr, +}; static noinline int check_support(struct rvt_dev_info *rdi, int verb) { @@ -416,76 +451,36 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) return -EINVAL; break; - case QUERY_DEVICE: - check_driver_override(rdi, offsetof(struct ib_device, - query_device), - rvt_query_device); - break; - case MODIFY_DEVICE: /* * rdmavt does not support modify device currently drivers must * provide. 
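
The check_driver_override() helper deleted above reached into struct ib_device with offsetof() both to install rdmavt's default callback and to report whether the driver had supplied its own. With the ops table the two jobs separate cleanly: these checks run before the registration path applies rvt_dev_ops, so a non-NULL rdi->ibdev.ops entry can only mean a driver override, and ib_set_device_ops(), which leaves already-set callbacks alone, fills in the defaults afterwards. A sketch of one such check, mirroring the QUERY_PORT case below:

#include <rdma/rdma_vt.h>

/*
 * Hypothetical pre-registration check: the verb is usable if the
 * driver overrode it outright, or left it to rdmavt and provided
 * the helper the generic implementation depends on.
 */
static int demo_check_query_port(struct rvt_dev_info *rdi)
{
	if (!rdi->ibdev.ops.query_port &&
	    !rdi->driver_f.query_port_state)
		return -EINVAL;
	return 0;
}
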
*/ - if (!check_driver_override(rdi, offsetof(struct ib_device, - modify_device), - rvt_modify_device)) + if (!rdi->ibdev.ops.modify_device) return -EOPNOTSUPP; break; case QUERY_PORT: - if (!check_driver_override(rdi, offsetof(struct ib_device, - query_port), - rvt_query_port)) + if (!rdi->ibdev.ops.query_port) if (!rdi->driver_f.query_port_state) return -EINVAL; break; case MODIFY_PORT: - if (!check_driver_override(rdi, offsetof(struct ib_device, - modify_port), - rvt_modify_port)) + if (!rdi->ibdev.ops.modify_port) if (!rdi->driver_f.cap_mask_chg || !rdi->driver_f.shut_down_port) return -EINVAL; break; - case QUERY_PKEY: - check_driver_override(rdi, offsetof(struct ib_device, - query_pkey), - rvt_query_pkey); - break; - case QUERY_GID: - if (!check_driver_override(rdi, offsetof(struct ib_device, - query_gid), - rvt_query_gid)) + if (!rdi->ibdev.ops.query_gid) if (!rdi->driver_f.get_guid_be) return -EINVAL; break; - case ALLOC_UCONTEXT: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_ucontext), - rvt_alloc_ucontext); - break; - - case DEALLOC_UCONTEXT: - check_driver_override(rdi, offsetof(struct ib_device, - dealloc_ucontext), - rvt_dealloc_ucontext); - break; - - case GET_PORT_IMMUTABLE: - check_driver_override(rdi, offsetof(struct ib_device, - get_port_immutable), - rvt_get_port_immutable); - break; - case CREATE_QP: - if (!check_driver_override(rdi, offsetof(struct ib_device, - create_qp), - rvt_create_qp)) + if (!rdi->ibdev.ops.create_qp) if (!rdi->driver_f.qp_priv_alloc || !rdi->driver_f.qp_priv_free || !rdi->driver_f.notify_qp_reset || @@ -496,9 +491,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) break; case MODIFY_QP: - if (!check_driver_override(rdi, offsetof(struct ib_device, - modify_qp), - rvt_modify_qp)) + if (!rdi->ibdev.ops.modify_qp) if (!rdi->driver_f.notify_qp_reset || !rdi->driver_f.schedule_send || !rdi->driver_f.get_pmtu_from_attr || @@ -512,9 +505,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) break; case DESTROY_QP: - if (!check_driver_override(rdi, offsetof(struct ib_device, - destroy_qp), - rvt_destroy_qp)) + if (!rdi->ibdev.ops.destroy_qp) if (!rdi->driver_f.qp_priv_free || !rdi->driver_f.notify_qp_reset || !rdi->driver_f.flush_qp_waiters || @@ -523,197 +514,14 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) return -EINVAL; break; - case QUERY_QP: - check_driver_override(rdi, offsetof(struct ib_device, - query_qp), - rvt_query_qp); - break; - case POST_SEND: - if (!check_driver_override(rdi, offsetof(struct ib_device, - post_send), - rvt_post_send)) + if (!rdi->ibdev.ops.post_send) if (!rdi->driver_f.schedule_send || !rdi->driver_f.do_send || !rdi->post_parms) return -EINVAL; break; - case POST_RECV: - check_driver_override(rdi, offsetof(struct ib_device, - post_recv), - rvt_post_recv); - break; - case POST_SRQ_RECV: - check_driver_override(rdi, offsetof(struct ib_device, - post_srq_recv), - rvt_post_srq_recv); - break; - - case CREATE_AH: - check_driver_override(rdi, offsetof(struct ib_device, - create_ah), - rvt_create_ah); - break; - - case DESTROY_AH: - check_driver_override(rdi, offsetof(struct ib_device, - destroy_ah), - rvt_destroy_ah); - break; - - case MODIFY_AH: - check_driver_override(rdi, offsetof(struct ib_device, - modify_ah), - rvt_modify_ah); - break; - - case QUERY_AH: - check_driver_override(rdi, offsetof(struct ib_device, - query_ah), - rvt_query_ah); - break; - - case CREATE_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - 
create_srq), - rvt_create_srq); - break; - - case MODIFY_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - modify_srq), - rvt_modify_srq); - break; - - case DESTROY_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - destroy_srq), - rvt_destroy_srq); - break; - - case QUERY_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - query_srq), - rvt_query_srq); - break; - - case ATTACH_MCAST: - check_driver_override(rdi, offsetof(struct ib_device, - attach_mcast), - rvt_attach_mcast); - break; - - case DETACH_MCAST: - check_driver_override(rdi, offsetof(struct ib_device, - detach_mcast), - rvt_detach_mcast); - break; - - case GET_DMA_MR: - check_driver_override(rdi, offsetof(struct ib_device, - get_dma_mr), - rvt_get_dma_mr); - break; - - case REG_USER_MR: - check_driver_override(rdi, offsetof(struct ib_device, - reg_user_mr), - rvt_reg_user_mr); - break; - - case DEREG_MR: - check_driver_override(rdi, offsetof(struct ib_device, - dereg_mr), - rvt_dereg_mr); - break; - - case ALLOC_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_fmr), - rvt_alloc_fmr); - break; - - case ALLOC_MR: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_mr), - rvt_alloc_mr); - break; - - case MAP_MR_SG: - check_driver_override(rdi, offsetof(struct ib_device, - map_mr_sg), - rvt_map_mr_sg); - break; - - case MAP_PHYS_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - map_phys_fmr), - rvt_map_phys_fmr); - break; - - case UNMAP_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - unmap_fmr), - rvt_unmap_fmr); - break; - - case DEALLOC_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - dealloc_fmr), - rvt_dealloc_fmr); - break; - - case MMAP: - check_driver_override(rdi, offsetof(struct ib_device, - mmap), - rvt_mmap); - break; - - case CREATE_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - create_cq), - rvt_create_cq); - break; - - case DESTROY_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - destroy_cq), - rvt_destroy_cq); - break; - - case POLL_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - poll_cq), - rvt_poll_cq); - break; - - case REQ_NOTFIY_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - req_notify_cq), - rvt_req_notify_cq); - break; - - case RESIZE_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - resize_cq), - rvt_resize_cq); - break; - - case ALLOC_PD: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_pd), - rvt_alloc_pd); - break; - - case DEALLOC_PD: - check_driver_override(rdi, offsetof(struct ib_device, - dealloc_pd), - rvt_dealloc_pd); - break; - - default: - return -EINVAL; } return 0; @@ -745,6 +553,7 @@ int rvt_register_device(struct rvt_dev_info *rdi, u32 driver_id) return -EINVAL; } + ib_set_device_ops(&rdi->ibdev, &rvt_dev_ops); /* Once we get past here we can use rvt_pr macros and tracepoints */ trace_rvt_dbg(rdi, "Driver attempting registration"); diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index d9ec2de68738..5bde2ad964d2 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -65,8 +65,9 @@ */ #define RXE_UVERBS_ABI_VERSION 2 -#define IB_PHYS_STATE_LINK_UP (5) -#define IB_PHYS_STATE_LINK_DOWN (3) +#define RDMA_LINK_PHYS_STATE_LINK_UP (5) +#define RDMA_LINK_PHYS_STATE_DISABLED (3) +#define RDMA_LINK_PHYS_STATE_POLLING (2) #define RXE_ROCE_V2_SPORT (0xc000) @@ -109,5 +110,6 @@ struct rxe_dev *get_rxe_by_name(const char *name); void rxe_port_up(struct 
rxe_dev *rxe); void rxe_port_down(struct rxe_dev *rxe); +void rxe_set_port_state(struct rxe_dev *rxe); #endif /* RXE_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index ea089cb091ad..e996da67a851 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -439,6 +439,7 @@ static void make_send_cqe(struct rxe_qp *qp, struct rxe_send_wqe *wqe, */ static void do_complete(struct rxe_qp *qp, struct rxe_send_wqe *wqe) { + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct rxe_cqe cqe; if ((qp->sq_sig_type == IB_SIGNAL_ALL_WR) || @@ -451,6 +452,11 @@ static void do_complete(struct rxe_qp *qp, struct rxe_send_wqe *wqe) advance_consumer(qp->sq.queue); } + if (wqe->wr.opcode == IB_WR_SEND || + wqe->wr.opcode == IB_WR_SEND_WITH_IMM || + wqe->wr.opcode == IB_WR_SEND_WITH_INV) + rxe_counter_inc(rxe, RXE_CNT_RDMA_SEND); + /* * we completed something so let req run again * if it is trying to fence diff --git a/drivers/infiniband/sw/rxe/rxe_hw_counters.c b/drivers/infiniband/sw/rxe/rxe_hw_counters.c index 6aeb7a165e46..636edb5f4cf4 100644 --- a/drivers/infiniband/sw/rxe/rxe_hw_counters.c +++ b/drivers/infiniband/sw/rxe/rxe_hw_counters.c @@ -37,15 +37,18 @@ static const char * const rxe_counter_name[] = { [RXE_CNT_SENT_PKTS] = "sent_pkts", [RXE_CNT_RCVD_PKTS] = "rcvd_pkts", [RXE_CNT_DUP_REQ] = "duplicate_request", - [RXE_CNT_OUT_OF_SEQ_REQ] = "out_of_sequence", + [RXE_CNT_OUT_OF_SEQ_REQ] = "out_of_seq_request", [RXE_CNT_RCV_RNR] = "rcvd_rnr_err", [RXE_CNT_SND_RNR] = "send_rnr_err", [RXE_CNT_RCV_SEQ_ERR] = "rcvd_seq_err", - [RXE_CNT_COMPLETER_SCHED] = "ack_deffered", + [RXE_CNT_COMPLETER_SCHED] = "ack_deferred", [RXE_CNT_RETRY_EXCEEDED] = "retry_exceeded_err", [RXE_CNT_RNR_RETRY_EXCEEDED] = "retry_rnr_exceeded_err", [RXE_CNT_COMP_RETRY] = "completer_retry_err", [RXE_CNT_SEND_ERR] = "send_err", + [RXE_CNT_LINK_DOWNED] = "link_downed", + [RXE_CNT_RDMA_SEND] = "rdma_sends", + [RXE_CNT_RDMA_RECV] = "rdma_recvs", }; int rxe_ib_get_hw_stats(struct ib_device *ibdev, @@ -59,7 +62,7 @@ int rxe_ib_get_hw_stats(struct ib_device *ibdev, return -EINVAL; for (cnt = 0; cnt < ARRAY_SIZE(rxe_counter_name); cnt++) - stats->value[cnt] = dev->stats_counters[cnt]; + stats->value[cnt] = atomic64_read(&dev->stats_counters[cnt]); return ARRAY_SIZE(rxe_counter_name); } diff --git a/drivers/infiniband/sw/rxe/rxe_hw_counters.h b/drivers/infiniband/sw/rxe/rxe_hw_counters.h index f44df1b76742..72c0d63c79e0 100644 --- a/drivers/infiniband/sw/rxe/rxe_hw_counters.h +++ b/drivers/infiniband/sw/rxe/rxe_hw_counters.h @@ -50,6 +50,9 @@ enum rxe_counters { RXE_CNT_RNR_RETRY_EXCEEDED, RXE_CNT_COMP_RETRY, RXE_CNT_SEND_ERR, + RXE_CNT_LINK_DOWNED, + RXE_CNT_RDMA_SEND, + RXE_CNT_RDMA_RECV, RXE_NUM_OF_COUNTERS }; diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index afd53f57a62b..01b74597b36a 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -157,7 +157,7 @@ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init); int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd); + struct ib_pd *ibpd, struct ib_udata *udata); int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init); @@ -250,11 +250,12 @@ static inline unsigned int wr_opcode_mask(int opcode, struct rxe_qp *qp) return rxe_wr_opcode_info[opcode].mask[qp->ibqp.qp_type]; } -static inline 
int rxe_xmit_packet(struct rxe_dev *rxe, struct rxe_qp *qp, - struct rxe_pkt_info *pkt, struct sk_buff *skb) +static inline int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, + struct sk_buff *skb) { int err; int is_request = pkt->mask & RXE_REQ_MASK; + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); if ((is_request && (qp->req.state != QP_STATE_READY)) || (!is_request && (qp->resp.state != QP_STATE_READY))) { diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 40e82e0f6c2d..8fd03ae20efc 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -607,7 +607,6 @@ void rxe_port_up(struct rxe_dev *rxe) port = &rxe->port; port->attr.state = IB_PORT_ACTIVE; - port->attr.phys_state = IB_PHYS_STATE_LINK_UP; rxe_port_event(rxe, IB_EVENT_PORT_ACTIVE); dev_info(&rxe->ib_dev.dev, "set active\n"); @@ -620,12 +619,20 @@ void rxe_port_down(struct rxe_dev *rxe) port = &rxe->port; port->attr.state = IB_PORT_DOWN; - port->attr.phys_state = IB_PHYS_STATE_LINK_DOWN; rxe_port_event(rxe, IB_EVENT_PORT_ERR); + rxe_counter_inc(rxe, RXE_CNT_LINK_DOWNED); dev_info(&rxe->ib_dev.dev, "set down\n"); } +void rxe_set_port_state(struct rxe_dev *rxe) +{ + if (netif_running(rxe->ndev) && netif_carrier_ok(rxe->ndev)) + rxe_port_up(rxe); + else + rxe_port_down(rxe); +} + static int rxe_notify(struct notifier_block *not_blk, unsigned long event, void *arg) @@ -652,10 +659,7 @@ static int rxe_notify(struct notifier_block *not_blk, rxe_set_mtu(rxe, ndev->mtu); break; case NETDEV_CHANGE: - if (netif_running(ndev) && netif_carrier_ok(ndev)) - rxe_port_up(rxe); - else - rxe_port_down(rxe); + rxe_set_port_state(rxe); break; case NETDEV_REBOOT: case NETDEV_GOING_DOWN: diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 36b53fb94a49..b5c91df22047 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -112,6 +112,18 @@ static inline struct kmem_cache *pool_cache(struct rxe_pool *pool) return rxe_type_info[pool->type].cache; } +static void rxe_cache_clean(size_t cnt) +{ + int i; + struct rxe_type_info *type; + + for (i = 0; i < cnt; i++) { + type = &rxe_type_info[i]; + kmem_cache_destroy(type->cache); + type->cache = NULL; + } +} + int rxe_cache_init(void) { int err; @@ -136,24 +148,14 @@ int rxe_cache_init(void) return 0; err1: - while (--i >= 0) { - kmem_cache_destroy(type->cache); - type->cache = NULL; - } + rxe_cache_clean(i); return err; } void rxe_cache_exit(void) { - int i; - struct rxe_type_info *type; - - for (i = 0; i < RXE_NUM_TYPES; i++) { - type = &rxe_type_info[i]; - kmem_cache_destroy(type->cache); - type->cache = NULL; - } + rxe_cache_clean(RXE_NUM_TYPES); } static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min) @@ -241,7 +243,7 @@ static void rxe_pool_put(struct rxe_pool *pool) kref_put(&pool->ref_cnt, rxe_pool_release); } -int rxe_pool_cleanup(struct rxe_pool *pool) +void rxe_pool_cleanup(struct rxe_pool *pool) { unsigned long flags; @@ -253,8 +255,6 @@ int rxe_pool_cleanup(struct rxe_pool *pool) write_unlock_irqrestore(&pool->pool_lock, flags); rxe_pool_put(pool); - - return 0; } static u32 alloc_index(struct rxe_pool *pool) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index aa4ba307097b..72968c29e01f 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -126,7 +126,7 @@ int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, enum 
rxe_elem_type type, u32 max_elem); /* free resources from object pool */ -int rxe_pool_cleanup(struct rxe_pool *pool); +void rxe_pool_cleanup(struct rxe_pool *pool); /* allocate an object from pool */ void *rxe_alloc(struct rxe_pool *pool); diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index b9710907dac2..fd86fd2fbb26 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -97,7 +97,7 @@ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init) goto err1; if (init->qp_type == IB_QPT_SMI || init->qp_type == IB_QPT_GSI) { - if (port_num != 1) { + if (!rdma_is_port_valid(&rxe->ib_dev, port_num)) { pr_warn("invalid port = %d\n", port_num); goto err1; } @@ -336,13 +336,14 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd) + struct ib_pd *ibpd, + struct ib_udata *udata) { int err; struct rxe_cq *rcq = to_rcq(init->recv_cq); struct rxe_cq *scq = to_rcq(init->send_cq); struct rxe_srq *srq = init->srq ? to_rsrq(init->srq) : NULL; - struct ib_ucontext *context = ibpd->uobject ? ibpd->uobject->context : NULL; + struct ib_ucontext *context = udata ? ibpd->uobject->context : NULL; rxe_add_ref(pd); rxe_add_ref(rcq); @@ -433,7 +434,7 @@ int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp, } if (mask & IB_QP_PORT) { - if (attr->port_num != 1) { + if (!rdma_is_port_valid(&rxe->ib_dev, attr->port_num)) { pr_warn("invalid port %d\n", attr->port_num); goto err1; } @@ -448,7 +449,7 @@ int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp, if (mask & IB_QP_ALT_PATH) { if (rxe_av_chk_attr(rxe, &attr->alt_ah_attr)) goto err1; - if (attr->alt_port_num != 1) { + if (!rdma_is_port_valid(&rxe->ib_dev, attr->alt_port_num)) { pr_warn("invalid alt port %d\n", attr->alt_port_num); goto err1; } diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 6c361d70d7cd..c5d9b558fa90 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -643,6 +643,7 @@ next_wqe: rmr->access = wqe->wr.wr.reg.access; rmr->lkey = wqe->wr.wr.reg.key; rmr->rkey = wqe->wr.wr.reg.key; + rmr->iova = wqe->wr.wr.reg.mr->iova; wqe->state = wqe_state_done; wqe->status = IB_WC_SUCCESS; } else { @@ -728,7 +729,7 @@ next_wqe: save_state(wqe, qp, &rollback_wqe, &rollback_psn); update_wqe_state(qp, wqe, &pkt); update_wqe_psn(qp, wqe, &pkt, payload); - ret = rxe_xmit_packet(to_rdev(qp->ibqp.device), qp, &pkt, skb); + ret = rxe_xmit_packet(qp, &pkt, skb); if (ret) { qp->need_req_skb = 1; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index c962160292f4..231528188250 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -124,12 +124,9 @@ static inline enum resp_states get_req(struct rxe_qp *qp, struct sk_buff *skb; if (qp->resp.state == QP_STATE_ERROR) { - skb = skb_dequeue(&qp->req_pkts); - if (skb) { - /* drain request packet queue */ + while ((skb = skb_dequeue(&qp->req_pkts))) { rxe_drop_ref(qp); kfree_skb(skb); - return RESPST_GET_REQ; } /* go drain recv wr queue */ @@ -660,7 +657,6 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, static enum resp_states read_reply(struct rxe_qp *qp, struct rxe_pkt_info *req_pkt) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct rxe_pkt_info ack_pkt; struct 
sk_buff *skb; int mtu = qp->mtu; @@ -739,7 +735,7 @@ static enum resp_states read_reply(struct rxe_qp *qp, p = payload_addr(&ack_pkt) + payload + bth_pad(&ack_pkt); *p = ~icrc; - err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb); + err = rxe_xmit_packet(qp, &ack_pkt, skb); if (err) { pr_err("Failed sending RDMA reply.\n"); return RESPST_ERR_RNR; @@ -838,18 +834,25 @@ static enum resp_states do_complete(struct rxe_qp *qp, struct ib_wc *wc = &cqe.ibwc; struct ib_uverbs_wc *uwc = &cqe.uibwc; struct rxe_recv_wqe *wqe = qp->resp.wqe; + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); if (unlikely(!wqe)) return RESPST_CLEANUP; memset(&cqe, 0, sizeof(cqe)); - wc->wr_id = wqe->wr_id; - wc->status = qp->resp.status; - wc->qp = &qp->ibqp; + if (qp->rcq->is_user) { + uwc->status = qp->resp.status; + uwc->qp_num = qp->ibqp.qp_num; + uwc->wr_id = wqe->wr_id; + } else { + wc->status = qp->resp.status; + wc->qp = &qp->ibqp; + wc->wr_id = wqe->wr_id; + } - /* fields after status are not required for errors */ if (wc->status == IB_WC_SUCCESS) { + rxe_counter_inc(rxe, RXE_CNT_RDMA_RECV); wc->opcode = (pkt->mask & RXE_IMMDT_MASK && pkt->mask & RXE_WRITE_MASK) ? IB_WC_RECV_RDMA_WITH_IMM : IB_WC_RECV; @@ -898,7 +901,6 @@ static enum resp_states do_complete(struct rxe_qp *qp, } if (pkt->mask & RXE_IETH_MASK) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct rxe_mem *rmr; wc->wc_flags |= IB_WC_WITH_INVALIDATE; @@ -950,7 +952,6 @@ static int send_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, int err = 0; struct rxe_pkt_info ack_pkt; struct sk_buff *skb; - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); skb = prepare_ack_packet(qp, pkt, &ack_pkt, IB_OPCODE_RC_ACKNOWLEDGE, 0, psn, syndrome, NULL); @@ -959,7 +960,7 @@ static int send_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, goto err1; } - err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb); + err = rxe_xmit_packet(qp, &ack_pkt, skb); if (err) pr_err_ratelimited("Failed sending ack\n"); @@ -973,7 +974,6 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, int rc = 0; struct rxe_pkt_info ack_pkt; struct sk_buff *skb; - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct resp_res *res; skb = prepare_ack_packet(qp, pkt, &ack_pkt, @@ -1001,7 +1001,7 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, res->last_psn = ack_pkt.psn; res->cur_psn = ack_pkt.psn; - rc = rxe_xmit_packet(rxe, qp, &ack_pkt, skb); + rc = rxe_xmit_packet(qp, &ack_pkt, skb); if (rc) { pr_err_ratelimited("Failed sending ack\n"); rxe_drop_ref(qp); @@ -1131,8 +1131,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp, if (res) { skb_get(res->atomic.skb); /* Resend the result. */ - rc = rxe_xmit_packet(to_rdev(qp->ibqp.device), qp, - pkt, res->atomic.skb); + rc = rxe_xmit_packet(qp, pkt, res->atomic.skb); if (rc) { pr_err("Failed resending result. 
This flow is not handled - skb ignored\n"); rc = RESPST_CLEANUP; diff --git a/drivers/infiniband/sw/rxe/rxe_sysfs.c b/drivers/infiniband/sw/rxe/rxe_sysfs.c index 73a19f808e1b..95a15892f7e6 100644 --- a/drivers/infiniband/sw/rxe/rxe_sysfs.c +++ b/drivers/infiniband/sw/rxe/rxe_sysfs.c @@ -53,22 +53,6 @@ static int sanitize_arg(const char *val, char *intf, int intf_len) return len; } -static void rxe_set_port_state(struct net_device *ndev) -{ - struct rxe_dev *rxe = net_to_rxe(ndev); - bool is_up = netif_running(ndev) && netif_carrier_ok(ndev); - - if (!rxe) - goto out; - - if (is_up) - rxe_port_up(rxe); - else - rxe_port_down(rxe); /* down for unknown state */ -out: - return; -} - static int rxe_param_set_add(const char *val, const struct kernel_param *kp) { int len; @@ -104,7 +88,7 @@ static int rxe_param_set_add(const char *val, const struct kernel_param *kp) goto err; } - rxe_set_port_state(ndev); + rxe_set_port_state(rxe); dev_info(&rxe->ib_dev.dev, "added %s\n", intf); err: if (ndev) diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 9c19f2027511..b20e6e0415f5 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -56,12 +56,7 @@ static int rxe_query_port(struct ib_device *dev, { struct rxe_dev *rxe = to_rdev(dev); struct rxe_port *port; - int rc = -EINVAL; - - if (unlikely(port_num != 1)) { - pr_warn("invalid port_number %d\n", port_num); - goto out; - } + int rc; port = &rxe->port; @@ -71,9 +66,16 @@ static int rxe_query_port(struct ib_device *dev, mutex_lock(&rxe->usdev_lock); rc = ib_get_eth_speed(dev, port_num, &attr->active_speed, &attr->active_width); + + if (attr->state == IB_PORT_ACTIVE) + attr->phys_state = RDMA_LINK_PHYS_STATE_LINK_UP; + else if (dev_get_flags(rxe->ndev) & IFF_UP) + attr->phys_state = RDMA_LINK_PHYS_STATE_POLLING; + else + attr->phys_state = RDMA_LINK_PHYS_STATE_DISABLED; + mutex_unlock(&rxe->usdev_lock); -out: return rc; } @@ -96,12 +98,6 @@ static int rxe_query_pkey(struct ib_device *device, struct rxe_dev *rxe = to_rdev(device); struct rxe_port *port; - if (unlikely(port_num != 1)) { - dev_warn(device->dev.parent, "invalid port_num = %d\n", - port_num); - goto err1; - } - port = &rxe->port; if (unlikely(index >= port->attr.pkey_tbl_len)) { @@ -139,11 +135,6 @@ static int rxe_modify_port(struct ib_device *dev, struct rxe_dev *rxe = to_rdev(dev); struct rxe_port *port; - if (unlikely(port_num != 1)) { - pr_warn("invalid port_num = %d\n", port_num); - goto err1; - } - port = &rxe->port; port->attr.port_cap_flags |= attr->set_port_cap_mask; @@ -153,9 +144,6 @@ static int rxe_modify_port(struct ib_device *dev, port->attr.qkey_viol_cntr = 0; return 0; - -err1: - return -EINVAL; } static enum rdma_link_layer rxe_get_link_layer(struct ib_device *dev, @@ -231,6 +219,7 @@ static void rxe_init_av(struct rxe_dev *rxe, struct rdma_ah_attr *attr, static struct ib_ah *rxe_create_ah(struct ib_pd *ibpd, struct rdma_ah_attr *attr, + u32 flags, struct ib_udata *udata) { @@ -278,7 +267,7 @@ static int rxe_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *attr) return 0; } -static int rxe_destroy_ah(struct ib_ah *ibah) +static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) { struct rxe_ah *ah = to_rah(ibah); @@ -498,7 +487,7 @@ static struct ib_qp *rxe_create_qp(struct ib_pd *ibpd, rxe_add_index(qp); - err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibpd); + err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibpd, udata); if (err) goto err3; @@ -1157,6 +1146,52 @@ static const struct 
attribute_group rxe_attr_group = { .attrs = rxe_dev_attributes, }; +static const struct ib_device_ops rxe_dev_ops = { + .alloc_hw_stats = rxe_ib_alloc_hw_stats, + .alloc_mr = rxe_alloc_mr, + .alloc_pd = rxe_alloc_pd, + .alloc_ucontext = rxe_alloc_ucontext, + .attach_mcast = rxe_attach_mcast, + .create_ah = rxe_create_ah, + .create_cq = rxe_create_cq, + .create_qp = rxe_create_qp, + .create_srq = rxe_create_srq, + .dealloc_pd = rxe_dealloc_pd, + .dealloc_ucontext = rxe_dealloc_ucontext, + .dereg_mr = rxe_dereg_mr, + .destroy_ah = rxe_destroy_ah, + .destroy_cq = rxe_destroy_cq, + .destroy_qp = rxe_destroy_qp, + .destroy_srq = rxe_destroy_srq, + .detach_mcast = rxe_detach_mcast, + .get_dma_mr = rxe_get_dma_mr, + .get_hw_stats = rxe_ib_get_hw_stats, + .get_link_layer = rxe_get_link_layer, + .get_netdev = rxe_get_netdev, + .get_port_immutable = rxe_port_immutable, + .map_mr_sg = rxe_map_mr_sg, + .mmap = rxe_mmap, + .modify_ah = rxe_modify_ah, + .modify_device = rxe_modify_device, + .modify_port = rxe_modify_port, + .modify_qp = rxe_modify_qp, + .modify_srq = rxe_modify_srq, + .peek_cq = rxe_peek_cq, + .poll_cq = rxe_poll_cq, + .post_recv = rxe_post_recv, + .post_send = rxe_post_send, + .post_srq_recv = rxe_post_srq_recv, + .query_ah = rxe_query_ah, + .query_device = rxe_query_device, + .query_pkey = rxe_query_pkey, + .query_port = rxe_query_port, + .query_qp = rxe_query_qp, + .query_srq = rxe_query_srq, + .reg_user_mr = rxe_reg_user_mr, + .req_notify_cq = rxe_req_notify_cq, + .resize_cq = rxe_resize_cq, +}; + int rxe_register_device(struct rxe_dev *rxe) { int err; @@ -1211,49 +1246,7 @@ int rxe_register_device(struct rxe_dev *rxe) | BIT_ULL(IB_USER_VERBS_CMD_DETACH_MCAST) ; - dev->query_device = rxe_query_device; - dev->modify_device = rxe_modify_device; - dev->query_port = rxe_query_port; - dev->modify_port = rxe_modify_port; - dev->get_link_layer = rxe_get_link_layer; - dev->get_netdev = rxe_get_netdev; - dev->query_pkey = rxe_query_pkey; - dev->alloc_ucontext = rxe_alloc_ucontext; - dev->dealloc_ucontext = rxe_dealloc_ucontext; - dev->mmap = rxe_mmap; - dev->get_port_immutable = rxe_port_immutable; - dev->alloc_pd = rxe_alloc_pd; - dev->dealloc_pd = rxe_dealloc_pd; - dev->create_ah = rxe_create_ah; - dev->modify_ah = rxe_modify_ah; - dev->query_ah = rxe_query_ah; - dev->destroy_ah = rxe_destroy_ah; - dev->create_srq = rxe_create_srq; - dev->modify_srq = rxe_modify_srq; - dev->query_srq = rxe_query_srq; - dev->destroy_srq = rxe_destroy_srq; - dev->post_srq_recv = rxe_post_srq_recv; - dev->create_qp = rxe_create_qp; - dev->modify_qp = rxe_modify_qp; - dev->query_qp = rxe_query_qp; - dev->destroy_qp = rxe_destroy_qp; - dev->post_send = rxe_post_send; - dev->post_recv = rxe_post_recv; - dev->create_cq = rxe_create_cq; - dev->destroy_cq = rxe_destroy_cq; - dev->resize_cq = rxe_resize_cq; - dev->poll_cq = rxe_poll_cq; - dev->peek_cq = rxe_peek_cq; - dev->req_notify_cq = rxe_req_notify_cq; - dev->get_dma_mr = rxe_get_dma_mr; - dev->reg_user_mr = rxe_reg_user_mr; - dev->dereg_mr = rxe_dereg_mr; - dev->alloc_mr = rxe_alloc_mr; - dev->map_mr_sg = rxe_map_mr_sg; - dev->attach_mcast = rxe_attach_mcast; - dev->detach_mcast = rxe_detach_mcast; - dev->get_hw_stats = rxe_ib_get_hw_stats; - dev->alloc_hw_stats = rxe_ib_alloc_hw_stats; + ib_set_device_ops(dev, &rxe_dev_ops); tfm = crypto_alloc_shash("crc32", 0, 0); if (IS_ERR(tfm)) { @@ -1279,11 +1272,9 @@ err1: return err; } -int rxe_unregister_device(struct rxe_dev *rxe) +void rxe_unregister_device(struct rxe_dev *rxe) { struct ib_device *dev = 
&rxe->ib_dev; ib_unregister_device(dev); - - return 0; } diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 82e670d6eeea..74e04801d34d 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -409,16 +409,16 @@ struct rxe_dev { spinlock_t mmap_offset_lock; /* guard mmap_offset */ int mmap_offset; - u64 stats_counters[RXE_NUM_OF_COUNTERS]; + atomic64_t stats_counters[RXE_NUM_OF_COUNTERS]; struct rxe_port port; struct list_head list; struct crypto_shash *tfm; }; -static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters cnt) +static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters index) { - rxe->stats_counters[cnt]++; + atomic64_inc(&rxe->stats_counters[index]); } static inline struct rxe_dev *to_rdev(struct ib_device *dev) @@ -467,7 +467,7 @@ static inline struct rxe_mem *to_rmw(struct ib_mw *mw) } int rxe_register_device(struct rxe_dev *rxe); -int rxe_unregister_device(struct rxe_dev *rxe); +void rxe_unregister_device(struct rxe_dev *rxe); void rxe_mc_cleanup(struct rxe_pool_entry *arg); diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c index 9006a13af1de..6d35570092d6 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c @@ -66,7 +66,7 @@ struct ipoib_ah *ipoib_create_ah(struct net_device *dev, ah->last_send = 0; kref_init(&ah->ref); - vah = rdma_create_ah(pd, attr); + vah = rdma_create_ah(pd, attr, RDMA_CREATE_AH_SLEEPABLE); if (IS_ERR(vah)) { kfree(ah); ah = (struct ipoib_ah *)vah; @@ -678,7 +678,7 @@ static void __ipoib_reap_ah(struct net_device *dev) list_for_each_entry_safe(ah, tah, &priv->dead_ahs, list) if ((int) priv->tx_tail - (int) ah->last_send >= 0) { list_del(&ah->list); - rdma_destroy_ah(ah->ah); + rdma_destroy_ah(ah->ah, 0); kfree(ah); } diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c index 8710214594d8..d932f99201d1 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_main.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c @@ -167,7 +167,7 @@ int ipoib_open(struct net_device *dev) if (flags & IFF_UP) continue; - dev_change_flags(cpriv->dev, flags | IFF_UP); + dev_change_flags(cpriv->dev, flags | IFF_UP, NULL); } up_read(&priv->vlan_rwsem); } @@ -207,7 +207,7 @@ static int ipoib_stop(struct net_device *dev) if (!(flags & IFF_UP)) continue; - dev_change_flags(cpriv->dev, flags & ~IFF_UP); + dev_change_flags(cpriv->dev, flags & ~IFF_UP, NULL); } up_read(&priv->vlan_rwsem); } @@ -1823,7 +1823,7 @@ static void ipoib_parent_unregister_pre(struct net_device *ndev) * running ensures the it will not add more work. 
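
Stepping back to the rxe_verbs.h hunk above: the per-device statistics move from plain u64 to atomic64_t because rxe's hot paths can bump the same counter from several contexts at once, and atomic64_inc()/atomic64_read() give lockless, tear-free updates. The access pattern in miniature, with hypothetical names:

#include <linux/atomic.h>
#include <linux/types.h>

struct demo_stats {
	atomic64_t pkts_sent;
};

static inline void demo_count_send(struct demo_stats *s)
{
	atomic64_inc(&s->pkts_sent);	/* safe from any context */
}

static inline u64 demo_read_sends(struct demo_stats *s)
{
	/* atomic64_read() returns s64; these counters only grow */
	return atomic64_read(&s->pkts_sent);
}
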
*/ rtnl_lock(); - dev_change_flags(priv->dev, priv->dev->flags & ~IFF_UP); + dev_change_flags(priv->dev, priv->dev->flags & ~IFF_UP, NULL); rtnl_unlock(); /* ipoib_event() cannot be running once this returns */ @@ -2453,8 +2453,8 @@ static struct net_device *ipoib_add_port(const char *format, return ERR_PTR(result); } - if (hca->rdma_netdev_get_params) { - int rc = hca->rdma_netdev_get_params(hca, port, + if (hca->ops.rdma_netdev_get_params) { + int rc = hca->ops.rdma_netdev_get_params(hca, port, RDMA_NETDEV_IPOIB, &params); diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c index 3fecd87c9f2b..8c707accd148 100644 --- a/drivers/infiniband/ulp/iser/iscsi_iser.c +++ b/drivers/infiniband/ulp/iser/iscsi_iser.c @@ -997,7 +997,6 @@ static struct scsi_host_template iscsi_iser_sht = { .eh_device_reset_handler= iscsi_eh_device_reset, .eh_target_reset_handler = iscsi_eh_recover_target, .target_alloc = iscsi_target_alloc, - .use_clustering = ENABLE_CLUSTERING, .slave_alloc = iscsi_iser_slave_alloc, .proc_name = "iscsi_iser", .this_id = -1, diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index 009be8889d71..e9b7efc302d0 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -77,8 +77,8 @@ int iser_assign_reg_ops(struct iser_device *device) struct ib_device *ib_dev = device->ib_device; /* Assign function handles - based on FMR support */ - if (ib_dev->alloc_fmr && ib_dev->dealloc_fmr && - ib_dev->map_phys_fmr && ib_dev->unmap_fmr) { + if (ib_dev->ops.alloc_fmr && ib_dev->ops.dealloc_fmr && + ib_dev->ops.map_phys_fmr && ib_dev->ops.unmap_fmr) { iser_info("FMR supported, using FMR for registration\n"); device->reg_ops = &fmr_ops; } else if (ib_dev->attrs.device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS) { @@ -277,16 +277,13 @@ void iser_unreg_mem_fmr(struct iscsi_iser_task *iser_task, enum iser_data_dir cmd_dir) { struct iser_mem_reg *reg = &iser_task->rdma_reg[cmd_dir]; - int ret; if (!reg->mem_h) return; iser_dbg("PHYSICAL Mem.Unregister mem_h %p\n", reg->mem_h); - ret = ib_fmr_pool_unmap((struct ib_pool_fmr *)reg->mem_h); - if (ret) - iser_err("ib_fmr_pool_unmap failed %d\n", ret); + ib_fmr_pool_unmap((struct ib_pool_fmr *)reg->mem_h); reg->mem_h = NULL; } diff --git a/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c b/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c index 61558788b3fa..ae70cd18903e 100644 --- a/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c +++ b/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c @@ -330,10 +330,10 @@ struct opa_vnic_adapter *opa_vnic_add_netdev(struct ib_device *ibdev, struct rdma_netdev *rn; int rc; - netdev = ibdev->alloc_rdma_netdev(ibdev, port_num, - RDMA_NETDEV_OPA_VNIC, - "veth%d", NET_NAME_UNKNOWN, - ether_setup); + netdev = ibdev->ops.alloc_rdma_netdev(ibdev, port_num, + RDMA_NETDEV_OPA_VNIC, + "veth%d", NET_NAME_UNKNOWN, + ether_setup); if (!netdev) return ERR_PTR(-ENOMEM); else if (IS_ERR(netdev)) diff --git a/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c b/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c index d119d9afa845..560e4f2d466e 100644 --- a/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c +++ b/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c @@ -606,7 +606,7 @@ static void vema_set(struct opa_vnic_vema_port *port, static void vema_send(struct ib_mad_agent *mad_agent, struct ib_mad_send_wc *mad_wc) { - rdma_destroy_ah(mad_wc->send_buf->ah,
RDMA_DESTROY_AH_SLEEPABLE); ib_free_send_mad(mad_wc->send_buf); } @@ -680,7 +680,7 @@ static void vema_recv(struct ib_mad_agent *mad_agent, ib_free_send_mad(rsp); err_rsp: - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, RDMA_DESTROY_AH_SLEEPABLE); free_recv_mad: ib_free_recv_mad(mad_wc); } @@ -777,7 +777,7 @@ void opa_vnic_vema_send_trap(struct opa_vnic_adapter *adapter, } rdma_ah_set_dlid(&ah_attr, trap_lid); - ah = rdma_create_ah(port->mad_agent->qp->pd, &ah_attr); + ah = rdma_create_ah(port->mad_agent->qp->pd, &ah_attr, 0); if (IS_ERR(ah)) { c_err("%s:Couldn't create new AH = %p\n", __func__, ah); c_err("%s:dlid = %d, sl = %d, port = %d\n", __func__, @@ -848,7 +848,7 @@ void opa_vnic_vema_send_trap(struct opa_vnic_adapter *adapter, } err_sndbuf: - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, 0); err_exit: v_err("Aborting trap\n"); } diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index eed0eb3bb04c..31d91538bbf4 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -132,6 +132,15 @@ MODULE_PARM_DESC(dev_loss_tmo, " if fast_io_fail_tmo has not been set. \"off\" means that" " this functionality is disabled."); +static bool srp_use_imm_data = true; +module_param_named(use_imm_data, srp_use_imm_data, bool, 0644); +MODULE_PARM_DESC(use_imm_data, + "Whether or not to request permission to use immediate data during SRP login."); + +static unsigned int srp_max_imm_data = 8 * 1024; +module_param_named(max_imm_data, srp_max_imm_data, uint, 0644); +MODULE_PARM_DESC(max_imm_data, "Maximum immediate data size."); + static unsigned ch_count; module_param(ch_count, uint, 0444); MODULE_PARM_DESC(ch_count, @@ -573,7 +582,7 @@ static int srp_create_ch_ib(struct srp_rdma_ch *ch) init_attr->cap.max_send_wr = m * target->queue_size; init_attr->cap.max_recv_wr = target->queue_size + 1; init_attr->cap.max_recv_sge = 1; - init_attr->cap.max_send_sge = 1; + init_attr->cap.max_send_sge = SRP_MAX_SGE; init_attr->sq_sig_type = IB_SIGNAL_REQ_WR; init_attr->qp_type = IB_QPT_RC; init_attr->send_cq = send_cq; @@ -823,7 +832,8 @@ static u8 srp_get_subnet_timeout(struct srp_host *host) return subnet_timeout; } -static int srp_send_req(struct srp_rdma_ch *ch, bool multich) +static int srp_send_req(struct srp_rdma_ch *ch, uint32_t max_iu_len, + bool multich) { struct srp_target_port *target = ch->target; struct { @@ -852,11 +862,15 @@ static int srp_send_req(struct srp_rdma_ch *ch, bool multich) req->ib_req.opcode = SRP_LOGIN_REQ; req->ib_req.tag = 0; - req->ib_req.req_it_iu_len = cpu_to_be32(target->max_iu_len); + req->ib_req.req_it_iu_len = cpu_to_be32(max_iu_len); req->ib_req.req_buf_fmt = cpu_to_be16(SRP_BUF_FORMAT_DIRECT | SRP_BUF_FORMAT_INDIRECT); req->ib_req.req_flags = (multich ? 
SRP_MULTICHAN_MULTI : SRP_MULTICHAN_SINGLE); + if (srp_use_imm_data) { + req->ib_req.req_flags |= SRP_IMMED_REQUESTED; + req->ib_req.imm_data_offset = cpu_to_be16(SRP_IMM_DATA_OFFSET); + } if (target->using_rdma_cm) { req->rdma_param.flow_control = req->ib_param.flow_control; @@ -873,6 +887,7 @@ static int srp_send_req(struct srp_rdma_ch *ch, bool multich) req->rdma_req.req_it_iu_len = req->ib_req.req_it_iu_len; req->rdma_req.req_buf_fmt = req->ib_req.req_buf_fmt; req->rdma_req.req_flags = req->ib_req.req_flags; + req->rdma_req.imm_data_offset = req->ib_req.imm_data_offset; ipi = req->rdma_req.initiator_port_id; tpi = req->rdma_req.target_port_id; @@ -1145,7 +1160,8 @@ static int srp_connected_ch(struct srp_target_port *target) return c; } -static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich) +static int srp_connect_ch(struct srp_rdma_ch *ch, uint32_t max_iu_len, + bool multich) { struct srp_target_port *target = ch->target; int ret; @@ -1158,7 +1174,7 @@ static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich) while (1) { init_completion(&ch->done); - ret = srp_send_req(ch, multich); + ret = srp_send_req(ch, max_iu_len, multich); if (ret) goto out; ret = wait_for_completion_interruptible(&ch->done); @@ -1344,6 +1360,20 @@ static void srp_terminate_io(struct srp_rport *rport) } } +/* Calculate maximum initiator to target information unit length. */ +static uint32_t srp_max_it_iu_len(int cmd_sg_cnt, bool use_imm_data) +{ + uint32_t max_iu_len = sizeof(struct srp_cmd) + SRP_MAX_ADD_CDB_LEN + + sizeof(struct srp_indirect_buf) + + cmd_sg_cnt * sizeof(struct srp_direct_buf); + + if (use_imm_data) + max_iu_len = max(max_iu_len, SRP_IMM_DATA_OFFSET + + srp_max_imm_data); + + return max_iu_len; +} + /* * It is up to the caller to ensure that srp_rport_reconnect() calls are * serialized and that no concurrent srp_queuecommand(), srp_abort(), @@ -1357,6 +1387,8 @@ static int srp_rport_reconnect(struct srp_rport *rport) { struct srp_target_port *target = rport->lld_data; struct srp_rdma_ch *ch; + uint32_t max_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt, + srp_use_imm_data); int i, j, ret = 0; bool multich = false; @@ -1402,7 +1434,7 @@ static int srp_rport_reconnect(struct srp_rport *rport) ch = &target->ch[i]; if (ret) break; - ret = srp_connect_ch(ch, multich); + ret = srp_connect_ch(ch, max_iu_len, multich); multich = true; } @@ -1764,25 +1796,29 @@ static void srp_check_mapping(struct srp_map_state *state, * @req: SRP request * * Returns the length in bytes of the SRP_CMD IU or a negative value if - * mapping failed. + * mapping failed. The size of any immediate data is not included in the + * return value. 
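+ * For a write sent as immediate data the returned length is
+ * SRP_IMM_DATA_OFFSET, i.e. sizeof(struct srp_cmd) + SRP_MAX_ADD_CDB_LEN +
+ * sizeof(struct srp_imm_buf) = 48 + 16 + 4 = 68 bytes; the payload itself
+ * travels in the additional gather entries instead of in the IU buffer.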
*/ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch, struct srp_request *req) { struct srp_target_port *target = ch->target; - struct scatterlist *scat; + struct scatterlist *scat, *sg; struct srp_cmd *cmd = req->cmd->buf; - int len, nents, count, ret; + int i, len, nents, count, ret; struct srp_device *dev; struct ib_device *ibdev; struct srp_map_state state; struct srp_indirect_buf *indirect_hdr; + u64 data_len; u32 idb_len, table_len; __be32 idb_rkey; u8 fmt; + req->cmd->num_sge = 1; + if (!scsi_sglist(scmnd) || scmnd->sc_data_direction == DMA_NONE) - return sizeof (struct srp_cmd); + return sizeof(struct srp_cmd) + cmd->add_cdb_len; if (scmnd->sc_data_direction != DMA_FROM_DEVICE && scmnd->sc_data_direction != DMA_TO_DEVICE) { @@ -1794,6 +1830,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch, nents = scsi_sg_count(scmnd); scat = scsi_sglist(scmnd); + data_len = scsi_bufflen(scmnd); dev = target->srp_host->srp_dev; ibdev = dev->dev; @@ -1802,8 +1839,31 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch, if (unlikely(count == 0)) return -EIO; + if (ch->use_imm_data && + count <= SRP_MAX_IMM_SGE && + SRP_IMM_DATA_OFFSET + data_len <= ch->max_it_iu_len && + scmnd->sc_data_direction == DMA_TO_DEVICE) { + struct srp_imm_buf *buf; + struct ib_sge *sge = &req->cmd->sge[1]; + + fmt = SRP_DATA_DESC_IMM; + len = SRP_IMM_DATA_OFFSET; + req->nmdesc = 0; + buf = (void *)cmd->add_data + cmd->add_cdb_len; + buf->len = cpu_to_be32(data_len); + WARN_ON_ONCE((void *)(buf + 1) > (void *)cmd + len); + for_each_sg(scat, sg, count, i) { + sge[i].addr = ib_sg_dma_address(ibdev, sg); + sge[i].length = ib_sg_dma_len(ibdev, sg); + sge[i].lkey = target->lkey; + } + req->cmd->num_sge += count; + goto map_complete; + } + fmt = SRP_DATA_DESC_DIRECT; - len = sizeof (struct srp_cmd) + sizeof (struct srp_direct_buf); + len = sizeof(struct srp_cmd) + cmd->add_cdb_len + + sizeof(struct srp_direct_buf); if (count == 1 && target->global_rkey) { /* @@ -1812,8 +1872,9 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch, * single entry. So a direct descriptor along with * the DMA MR suffices. */ - struct srp_direct_buf *buf = (void *) cmd->add_data; + struct srp_direct_buf *buf; + buf = (void *)cmd->add_data + cmd->add_cdb_len; buf->va = cpu_to_be64(ib_sg_dma_address(ibdev, scat)); buf->key = cpu_to_be32(target->global_rkey); buf->len = cpu_to_be32(ib_sg_dma_len(ibdev, scat)); @@ -1826,7 +1887,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch, * We have more than one scatter/gather entry, so build our indirect * descriptor table, trying to merge as many entries as we can. */ - indirect_hdr = (void *) cmd->add_data; + indirect_hdr = (void *)cmd->add_data + cmd->add_cdb_len; ib_dma_sync_single_for_cpu(ibdev, req->indirect_dma_addr, target->indirect_size, DMA_TO_DEVICE); @@ -1861,8 +1922,9 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch, * Memory registration collapsed the sg-list into one entry, * so use a direct descriptor. 
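 * The descriptor was already built by the memory registration code in
 * req->indirect_desc[0] and is simply copied into the IU here.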
*/ - struct srp_direct_buf *buf = (void *) cmd->add_data; + struct srp_direct_buf *buf; + buf = (void *)cmd->add_data + cmd->add_cdb_len; *buf = req->indirect_desc[0]; goto map_complete; } @@ -1880,7 +1942,8 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch, idb_len = sizeof(struct srp_indirect_buf) + table_len; fmt = SRP_DATA_DESC_INDIRECT; - len = sizeof(struct srp_cmd) + sizeof (struct srp_indirect_buf); + len = sizeof(struct srp_cmd) + cmd->add_cdb_len + + sizeof(struct srp_indirect_buf); len += count * sizeof (struct srp_direct_buf); memcpy(indirect_hdr->desc_list, req->indirect_desc, @@ -2001,22 +2064,30 @@ static void srp_send_done(struct ib_cq *cq, struct ib_wc *wc) list_add(&iu->list, &ch->free_tx); } +/** + * srp_post_send() - send an SRP information unit + * @ch: RDMA channel over which to send the information unit. + * @iu: Information unit to send. + * @len: Length of the information unit excluding immediate data. + */ static int srp_post_send(struct srp_rdma_ch *ch, struct srp_iu *iu, int len) { struct srp_target_port *target = ch->target; - struct ib_sge list; struct ib_send_wr wr; - list.addr = iu->dma; - list.length = len; - list.lkey = target->lkey; + if (WARN_ON_ONCE(iu->num_sge > SRP_MAX_SGE)) + return -EINVAL; + + iu->sge[0].addr = iu->dma; + iu->sge[0].length = len; + iu->sge[0].lkey = target->lkey; iu->cqe.done = srp_send_done; wr.next = NULL; wr.wr_cqe = &iu->cqe; - wr.sg_list = &list; - wr.num_sge = 1; + wr.sg_list = &iu->sge[0]; + wr.num_sge = iu->num_sge; wr.opcode = IB_WR_SEND; wr.send_flags = IB_SEND_SIGNALED; @@ -2129,6 +2200,7 @@ static int srp_response_common(struct srp_rdma_ch *ch, s32 req_delta, return 1; } + iu->num_sge = 1; ib_dma_sync_single_for_cpu(dev, iu->dma, len, DMA_TO_DEVICE); memcpy(iu->buf, rsp, len); ib_dma_sync_single_for_device(dev, iu->dma, len, DMA_TO_DEVICE); @@ -2312,7 +2384,7 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd) req = &ch->req_ring[idx]; dev = target->srp_host->srp_dev->dev; - ib_dma_sync_single_for_cpu(dev, iu->dma, target->max_iu_len, + ib_dma_sync_single_for_cpu(dev, iu->dma, ch->max_it_iu_len, DMA_TO_DEVICE); scmnd->host_scribble = (void *) req; @@ -2324,6 +2396,12 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd) int_to_scsilun(scmnd->device->lun, &cmd->lun); cmd->tag = tag; memcpy(cmd->cdb, scmnd->cmnd, scmnd->cmd_len); + if (unlikely(scmnd->cmd_len > sizeof(cmd->cdb))) { + cmd->add_cdb_len = round_up(scmnd->cmd_len - sizeof(cmd->cdb), + 4); + if (WARN_ON_ONCE(cmd->add_cdb_len > SRP_MAX_ADD_CDB_LEN)) + goto err_iu; + } req->scmnd = scmnd; req->cmd = iu; @@ -2343,11 +2421,12 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd) goto err_iu; } - ib_dma_sync_single_for_device(dev, iu->dma, target->max_iu_len, + ib_dma_sync_single_for_device(dev, iu->dma, ch->max_it_iu_len, DMA_TO_DEVICE); if (srp_post_send(ch, iu, len)) { shost_printk(KERN_ERR, target->scsi_host, PFX "Send failed\n"); + scmnd->result = DID_ERROR << 16; goto err_unmap; } @@ -2410,7 +2489,7 @@ static int srp_alloc_iu_bufs(struct srp_rdma_ch *ch) for (i = 0; i < target->queue_size; ++i) { ch->tx_ring[i] = srp_alloc_iu(target->srp_host, - target->max_iu_len, + ch->max_it_iu_len, GFP_KERNEL, DMA_TO_DEVICE); if (!ch->tx_ring[i]) goto err; @@ -2476,6 +2555,15 @@ static void srp_cm_rep_handler(struct ib_cm_id *cm_id, if (lrsp->opcode == SRP_LOGIN_RSP) { ch->max_ti_iu_len = be32_to_cpu(lrsp->max_ti_iu_len); ch->req_lim = 
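/* Initial credit count: the number of IUs the initiator may post
 * before the target has returned credits in its responses. */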
be32_to_cpu(lrsp->req_lim_delta); + ch->use_imm_data = lrsp->rsp_flags & SRP_LOGIN_RSP_IMMED_SUPP; + ch->max_it_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt, + ch->use_imm_data); + WARN_ON_ONCE(ch->max_it_iu_len > + be32_to_cpu(lrsp->max_it_iu_len)); + + if (ch->use_imm_data) + shost_printk(KERN_DEBUG, target->scsi_host, + PFX "using immediate data\n"); /* * Reserve credits for task management so we don't @@ -2864,6 +2952,8 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun, return -1; } + iu->num_sge = 1; + ib_dma_sync_single_for_cpu(dev, iu->dma, sizeof *tsk_mgmt, DMA_TO_DEVICE); tsk_mgmt = iu->buf; @@ -3215,7 +3305,6 @@ static struct scsi_host_template srp_template = { .can_queue = SRP_DEFAULT_CMD_SQ_SIZE, .this_id = -1, .cmd_per_lun = SRP_DEFAULT_CMD_SQ_SIZE, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = srp_host_attrs, .track_queue_depth = 1, }; @@ -3403,6 +3492,9 @@ static const match_table_t srp_opt_tokens = { /** * srp_parse_in - parse an IP address and port number combination + * @net: [in] Network namespace. + * @sa: [out] Address family, IP address and port number. + * @addr_port_str: [in] IP address and port number. * * Parse the following address formats: * - IPv4: :, e.g. 1.2.3.4:5. @@ -3720,6 +3812,7 @@ static ssize_t srp_create_target(struct device *dev, int ret, node_idx, node, cpu, i; unsigned int max_sectors_per_mr, mr_per_cmd = 0; bool multich = false; + uint32_t max_iu_len; target_host = scsi_host_alloc(&srp_template, sizeof (struct srp_target_port)); @@ -3825,9 +3918,7 @@ static ssize_t srp_create_target(struct device *dev, target->mr_per_cmd = mr_per_cmd; target->indirect_size = target->sg_tablesize * sizeof (struct srp_direct_buf); - target->max_iu_len = sizeof (struct srp_cmd) + - sizeof (struct srp_indirect_buf) + - target->cmd_sg_cnt * sizeof (struct srp_direct_buf); + max_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt, srp_use_imm_data); INIT_WORK(&target->tl_err_work, srp_tl_err_work); INIT_WORK(&target->remove_work, srp_remove_work); @@ -3882,7 +3973,7 @@ static ssize_t srp_create_target(struct device *dev, if (ret) goto err_disconnect; - ret = srp_connect_ch(ch, multich); + ret = srp_connect_ch(ch, max_iu_len, multich); if (ret) { char dst[64]; @@ -4063,8 +4154,10 @@ static void srp_add_one(struct ib_device *device) srp_dev->max_pages_per_mr = min_t(u64, SRP_MAX_PAGES_PER_MR, max_pages_per_mr); - srp_dev->has_fmr = (device->alloc_fmr && device->dealloc_fmr && - device->map_phys_fmr && device->unmap_fmr); + srp_dev->has_fmr = (device->ops.alloc_fmr && + device->ops.dealloc_fmr && + device->ops.map_phys_fmr && + device->ops.unmap_fmr); srp_dev->has_fr = (attr->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS); if (!never_register && !srp_dev->has_fmr && !srp_dev->has_fr) { @@ -4172,6 +4265,11 @@ static int __init srp_init_module(void) { int ret; + BUILD_BUG_ON(sizeof(struct srp_imm_buf) != 4); + BUILD_BUG_ON(sizeof(struct srp_login_req) != 64); + BUILD_BUG_ON(sizeof(struct srp_login_req_rdma) != 56); + BUILD_BUG_ON(sizeof(struct srp_cmd) != 48); + if (srp_sg_tablesize) { pr_warn("srp_sg_tablesize is deprecated, please use cmd_sg_entries\n"); if (!cmd_sg_entries) diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h index a2706086b9c7..b2861cd2087a 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.h +++ b/drivers/infiniband/ulp/srp/ib_srp.h @@ -67,6 +67,17 @@ enum { SRP_TAG_TSK_MGMT = 1U << 31, SRP_MAX_PAGES_PER_MR = 512, + + SRP_MAX_ADD_CDB_LEN = 16, + + SRP_MAX_IMM_SGE = 2, + SRP_MAX_SGE = 
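+ /* one gather entry for the SRP IU itself plus at most
+  * SRP_MAX_IMM_SGE entries for immediate data, i.e. 2 + 1 = 3 */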
SRP_MAX_IMM_SGE + 1, + /* + * Choose the immediate data offset such that a 32 byte CDB still fits. + */ + SRP_IMM_DATA_OFFSET = sizeof(struct srp_cmd) + + SRP_MAX_ADD_CDB_LEN + + sizeof(struct srp_imm_buf), }; enum srp_target_state { @@ -130,6 +141,8 @@ struct srp_request { /** * struct srp_rdma_ch * @comp_vector: Completion vector used by this RDMA channel. + * @max_it_iu_len: Maximum initiator-to-target information unit length. + * @max_ti_iu_len: Maximum target-to-initiator information unit length. */ struct srp_rdma_ch { /* These are RW in the hot path, and commonly used together */ @@ -146,6 +159,9 @@ struct srp_rdma_ch { struct ib_fmr_pool *fmr_pool; struct srp_fr_pool *fr_pool; }; + uint32_t max_it_iu_len; + uint32_t max_ti_iu_len; + bool use_imm_data; /* Everything above this point is used in the hot path of * command processing. Try to keep them packed into cachelines. @@ -169,7 +185,6 @@ struct srp_rdma_ch { struct srp_iu **tx_ring; struct srp_iu **rx_ring; struct srp_request *req_ring; - int max_ti_iu_len; int comp_vector; u64 tsk_mgmt_tag; @@ -194,7 +209,6 @@ struct srp_target_port { u32 ch_count; u32 lkey; enum srp_target_state state; - unsigned int max_iu_len; unsigned int cmd_sg_cnt; unsigned int indirect_size; bool allow_ext_sg; @@ -259,6 +273,8 @@ struct srp_iu { void *buf; size_t size; enum dma_data_direction direction; + u32 num_sge; + struct ib_sge sge[SRP_MAX_SGE]; struct ib_cqe cqe; }; diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c index 2357aa727dcf..e9c336cff8f5 100644 --- a/drivers/infiniband/ulp/srpt/ib_srpt.c +++ b/drivers/infiniband/ulp/srpt/ib_srpt.c @@ -51,8 +51,6 @@ /* Name of this kernel module. */ #define DRV_NAME "ib_srpt" -#define DRV_VERSION "2.0.0" -#define DRV_RELDATE "2011-02-14" #define SRPT_ID_STRING "Linux SRP target" @@ -60,8 +58,7 @@ #define pr_fmt(fmt) DRV_NAME " " fmt MODULE_AUTHOR("Vu Pham and Bart Van Assche"); -MODULE_DESCRIPTION("InfiniBand SCSI RDMA Protocol target " - "v" DRV_VERSION " (" DRV_RELDATE ")"); +MODULE_DESCRIPTION("SCSI RDMA Protocol target driver"); MODULE_LICENSE("Dual BSD/GPL"); /* @@ -89,8 +86,7 @@ static int srpt_get_u64_x(char *buffer, const struct kernel_param *kp) module_param_call(srpt_service_guid, NULL, srpt_get_u64_x, &srpt_service_guid, 0444); MODULE_PARM_DESC(srpt_service_guid, - "Using this value for ioc_guid, id_ext, and cm_listen_id" - " instead of using the node_guid of the first HCA."); + "Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA."); static struct ib_client srpt_client; /* Protects both rdma_cm_port and rdma_cm_id. */ @@ -462,7 +458,7 @@ static void srpt_mgmt_method_get(struct srpt_port *sp, struct ib_mad *rq_mad, static void srpt_mad_send_handler(struct ib_mad_agent *mad_agent, struct ib_mad_send_wc *mad_wc) { - rdma_destroy_ah(mad_wc->send_buf->ah); + rdma_destroy_ah(mad_wc->send_buf->ah, RDMA_DESTROY_AH_SLEEPABLE); ib_free_send_mad(mad_wc->send_buf); } @@ -529,7 +525,7 @@ static void srpt_mad_recv_handler(struct ib_mad_agent *mad_agent, ib_free_send_mad(rsp); err_rsp: - rdma_destroy_ah(ah); + rdma_destroy_ah(ah, RDMA_DESTROY_AH_SLEEPABLE); err: ib_free_recv_mad(mad_wc); } @@ -652,31 +648,33 @@ static void srpt_unregister_mad_agent(struct srpt_device *sdev) * srpt_alloc_ioctx - allocate a SRPT I/O context structure * @sdev: SRPT HCA pointer. * @ioctx_size: I/O context size. - * @dma_size: Size of I/O context DMA buffer. + * @buf_cache: I/O buffer cache. * @dir: DMA data direction. 
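 * The buffer comes from @buf_cache and is mapped for the full
 * kmem_cache_size(@buf_cache), so the alignment of the cache (e.g. 512
 * bytes for the per-channel caches) carries over to the DMA address.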
*/ static struct srpt_ioctx *srpt_alloc_ioctx(struct srpt_device *sdev, - int ioctx_size, int dma_size, + int ioctx_size, + struct kmem_cache *buf_cache, enum dma_data_direction dir) { struct srpt_ioctx *ioctx; - ioctx = kmalloc(ioctx_size, GFP_KERNEL); + ioctx = kzalloc(ioctx_size, GFP_KERNEL); if (!ioctx) goto err; - ioctx->buf = kmalloc(dma_size, GFP_KERNEL); + ioctx->buf = kmem_cache_alloc(buf_cache, GFP_KERNEL); if (!ioctx->buf) goto err_free_ioctx; - ioctx->dma = ib_dma_map_single(sdev->device, ioctx->buf, dma_size, dir); + ioctx->dma = ib_dma_map_single(sdev->device, ioctx->buf, + kmem_cache_size(buf_cache), dir); if (ib_dma_mapping_error(sdev->device, ioctx->dma)) goto err_free_buf; return ioctx; err_free_buf: - kfree(ioctx->buf); + kmem_cache_free(buf_cache, ioctx->buf); err_free_ioctx: kfree(ioctx); err: @@ -687,17 +685,19 @@ err: * srpt_free_ioctx - free a SRPT I/O context structure * @sdev: SRPT HCA pointer. * @ioctx: I/O context pointer. - * @dma_size: Size of I/O context DMA buffer. + * @buf_cache: I/O buffer cache. * @dir: DMA data direction. */ static void srpt_free_ioctx(struct srpt_device *sdev, struct srpt_ioctx *ioctx, - int dma_size, enum dma_data_direction dir) + struct kmem_cache *buf_cache, + enum dma_data_direction dir) { if (!ioctx) return; - ib_dma_unmap_single(sdev->device, ioctx->dma, dma_size, dir); - kfree(ioctx->buf); + ib_dma_unmap_single(sdev->device, ioctx->dma, + kmem_cache_size(buf_cache), dir); + kmem_cache_free(buf_cache, ioctx->buf); kfree(ioctx); } @@ -706,33 +706,38 @@ static void srpt_free_ioctx(struct srpt_device *sdev, struct srpt_ioctx *ioctx, * @sdev: Device to allocate the I/O context ring for. * @ring_size: Number of elements in the I/O context ring. * @ioctx_size: I/O context size. - * @dma_size: DMA buffer size. + * @buf_cache: I/O buffer cache. + * @alignment_offset: Offset in each ring buffer at which the SRP information + * unit starts. * @dir: DMA data direction. */ static struct srpt_ioctx **srpt_alloc_ioctx_ring(struct srpt_device *sdev, int ring_size, int ioctx_size, - int dma_size, enum dma_data_direction dir) + struct kmem_cache *buf_cache, + int alignment_offset, + enum dma_data_direction dir) { struct srpt_ioctx **ring; int i; - WARN_ON(ioctx_size != sizeof(struct srpt_recv_ioctx) - && ioctx_size != sizeof(struct srpt_send_ioctx)); + WARN_ON(ioctx_size != sizeof(struct srpt_recv_ioctx) && + ioctx_size != sizeof(struct srpt_send_ioctx)); ring = kvmalloc_array(ring_size, sizeof(ring[0]), GFP_KERNEL); if (!ring) goto out; for (i = 0; i < ring_size; ++i) { - ring[i] = srpt_alloc_ioctx(sdev, ioctx_size, dma_size, dir); + ring[i] = srpt_alloc_ioctx(sdev, ioctx_size, buf_cache, dir); if (!ring[i]) goto err; ring[i]->index = i; + ring[i]->offset = alignment_offset; } goto out; err: while (--i >= 0) - srpt_free_ioctx(sdev, ring[i], dma_size, dir); + srpt_free_ioctx(sdev, ring[i], buf_cache, dir); kvfree(ring); ring = NULL; out: @@ -744,12 +749,13 @@ out: * @ioctx_ring: I/O context ring to be freed. * @sdev: SRPT HCA pointer. * @ring_size: Number of ring elements. - * @dma_size: Size of I/O context DMA buffer. + * @buf_cache: I/O buffer cache. * @dir: DMA data direction. 
*/ static void srpt_free_ioctx_ring(struct srpt_ioctx **ioctx_ring, struct srpt_device *sdev, int ring_size, - int dma_size, enum dma_data_direction dir) + struct kmem_cache *buf_cache, + enum dma_data_direction dir) { int i; @@ -757,7 +763,7 @@ static void srpt_free_ioctx_ring(struct srpt_ioctx **ioctx_ring, return; for (i = 0; i < ring_size; ++i) - srpt_free_ioctx(sdev, ioctx_ring[i], dma_size, dir); + srpt_free_ioctx(sdev, ioctx_ring[i], buf_cache, dir); kvfree(ioctx_ring); } @@ -819,7 +825,7 @@ static int srpt_post_recv(struct srpt_device *sdev, struct srpt_rdma_ch *ch, struct ib_recv_wr wr; BUG_ON(!sdev); - list.addr = ioctx->ioctx.dma; + list.addr = ioctx->ioctx.dma + ioctx->ioctx.offset; list.length = srp_max_req_size; list.lkey = sdev->lkey; @@ -985,23 +991,28 @@ static inline void *srpt_get_desc_buf(struct srp_cmd *srp_cmd) /** * srpt_get_desc_tbl - parse the data descriptors of a SRP_CMD request - * @ioctx: Pointer to the I/O context associated with the request. + * @recv_ioctx: I/O context associated with the received command @srp_cmd. + * @ioctx: I/O context that will be used for responding to the initiator. * @srp_cmd: Pointer to the SRP_CMD request data. * @dir: Pointer to the variable to which the transfer direction will be * written. - * @sg: [out] scatterlist allocated for the parsed SRP_CMD. + * @sg: [out] scatterlist for the parsed SRP_CMD. * @sg_cnt: [out] length of @sg. * @data_len: Pointer to the variable to which the total data length of all * descriptors in the SRP_CMD request will be written. + * @imm_data_offset: [in] Offset in SRP_CMD requests at which immediate data + * starts. * * This function initializes ioctx->nrbuf and ioctx->r_bufs. * * Returns -EINVAL when the SRP_CMD request contains inconsistent descriptors; * -ENOMEM when memory allocation fails and zero upon success. 
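 * For immediate data (SRP_DATA_DESC_IMM) it additionally returns -EIO
 * when fewer bytes were received than the srp_imm_buf descriptor
 * announces.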
*/ -static int srpt_get_desc_tbl(struct srpt_send_ioctx *ioctx, +static int srpt_get_desc_tbl(struct srpt_recv_ioctx *recv_ioctx, + struct srpt_send_ioctx *ioctx, struct srp_cmd *srp_cmd, enum dma_data_direction *dir, - struct scatterlist **sg, unsigned *sg_cnt, u64 *data_len) + struct scatterlist **sg, unsigned int *sg_cnt, u64 *data_len, + u16 imm_data_offset) { BUG_ON(!dir); BUG_ON(!data_len); @@ -1025,7 +1036,7 @@ static int srpt_get_desc_tbl(struct srpt_send_ioctx *ioctx, if (((srp_cmd->buf_fmt & 0xf) == SRP_DATA_DESC_DIRECT) || ((srp_cmd->buf_fmt >> 4) == SRP_DATA_DESC_DIRECT)) { - struct srp_direct_buf *db = srpt_get_desc_buf(srp_cmd); + struct srp_direct_buf *db = srpt_get_desc_buf(srp_cmd); *data_len = be32_to_cpu(db->len); return srpt_alloc_rw_ctxs(ioctx, db, 1, sg, sg_cnt); @@ -1037,8 +1048,7 @@ static int srpt_get_desc_tbl(struct srpt_send_ioctx *ioctx, if (nbufs > (srp_cmd->data_out_desc_cnt + srp_cmd->data_in_desc_cnt)) { - pr_err("received unsupported SRP_CMD request" - " type (%u out + %u in != %u / %zu)\n", + pr_err("received unsupported SRP_CMD request type (%u out + %u in != %u / %zu)\n", srp_cmd->data_out_desc_cnt, srp_cmd->data_in_desc_cnt, be32_to_cpu(idb->table_desc.len), @@ -1049,6 +1059,40 @@ static int srpt_get_desc_tbl(struct srpt_send_ioctx *ioctx, *data_len = be32_to_cpu(idb->len); return srpt_alloc_rw_ctxs(ioctx, idb->desc_list, nbufs, sg, sg_cnt); + } else if ((srp_cmd->buf_fmt >> 4) == SRP_DATA_DESC_IMM) { + struct srp_imm_buf *imm_buf = srpt_get_desc_buf(srp_cmd); + void *data = (void *)srp_cmd + imm_data_offset; + uint32_t len = be32_to_cpu(imm_buf->len); + uint32_t req_size = imm_data_offset + len; + + if (req_size > srp_max_req_size) { + pr_err("Immediate data (length %d + %d) exceeds request size %d\n", + imm_data_offset, len, srp_max_req_size); + return -EINVAL; + } + if (recv_ioctx->byte_len < req_size) { + pr_err("Received too few data - %d < %d\n", + recv_ioctx->byte_len, req_size); + return -EIO; + } + /* + * The immediate data buffer descriptor must occur before the + * immediate data itself. 
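+ * With the 68-byte offset advertised by the Linux initiator the layout
+ * is: bytes 0..47 srp_cmd, 48..63 additional CDB, 64..67 srp_imm_buf,
+ * and the immediate data itself from byte 68 onwards.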
+ */ + if ((void *)(imm_buf + 1) > (void *)data) { + pr_err("Received invalid write request\n"); + return -EINVAL; + } + *data_len = len; + ioctx->recv_ioctx = recv_ioctx; + if ((uintptr_t)data & 511) { + pr_warn_once("Internal error - the receive buffers are not aligned properly.\n"); + return -EINVAL; + } + sg_init_one(&ioctx->imm_sg, data, len); + *sg = &ioctx->imm_sg; + *sg_cnt = 1; + return 0; } else { *data_len = 0; return 0; @@ -1191,6 +1235,7 @@ static struct srpt_send_ioctx *srpt_get_send_ioctx(struct srpt_rdma_ch *ch) BUG_ON(ioctx->ch != ch); ioctx->state = SRPT_STATE_NEW; + WARN_ON_ONCE(ioctx->recv_ioctx); ioctx->n_rdma = 0; ioctx->n_rw_ctx = 0; ioctx->queue_status_only = false; @@ -1352,8 +1397,8 @@ static int srpt_build_cmd_rsp(struct srpt_rdma_ch *ch, BUILD_BUG_ON(MIN_MAX_RSP_SIZE <= sizeof(*srp_rsp)); max_sense_len = ch->max_ti_iu_len - sizeof(*srp_rsp); if (sense_data_len > max_sense_len) { - pr_warn("truncated sense data from %d to %d" - " bytes\n", sense_data_len, max_sense_len); + pr_warn("truncated sense data from %d to %d bytes\n", + sense_data_len, max_sense_len); sense_data_len = max_sense_len; } @@ -1433,7 +1478,7 @@ static void srpt_handle_cmd(struct srpt_rdma_ch *ch, BUG_ON(!send_ioctx); - srp_cmd = recv_ioctx->ioctx.buf; + srp_cmd = recv_ioctx->ioctx.buf + recv_ioctx->ioctx.offset; cmd = &send_ioctx->cmd; cmd->tag = srp_cmd->tag; @@ -1453,8 +1498,8 @@ static void srpt_handle_cmd(struct srpt_rdma_ch *ch, break; } - rc = srpt_get_desc_tbl(send_ioctx, srp_cmd, &dir, &sg, &sg_cnt, - &data_len); + rc = srpt_get_desc_tbl(recv_ioctx, send_ioctx, srp_cmd, &dir, + &sg, &sg_cnt, &data_len, ch->imm_data_offset); if (rc) { if (rc != -EAGAIN) { pr_err("0x%llx: parsing SRP descriptor table failed.\n", @@ -1521,7 +1566,7 @@ static void srpt_handle_tsk_mgmt(struct srpt_rdma_ch *ch, BUG_ON(!send_ioctx); - srp_tsk = recv_ioctx->ioctx.buf; + srp_tsk = recv_ioctx->ioctx.buf + recv_ioctx->ioctx.offset; cmd = &send_ioctx->cmd; pr_debug("recv tsk_mgmt fn %d for task_tag %lld and cmd tag %lld ch %p sess %p\n", @@ -1564,10 +1609,11 @@ srpt_handle_new_iu(struct srpt_rdma_ch *ch, struct srpt_recv_ioctx *recv_ioctx) goto push; ib_dma_sync_single_for_cpu(ch->sport->sdev->device, - recv_ioctx->ioctx.dma, srp_max_req_size, + recv_ioctx->ioctx.dma, + recv_ioctx->ioctx.offset + srp_max_req_size, DMA_FROM_DEVICE); - srp_cmd = recv_ioctx->ioctx.buf; + srp_cmd = recv_ioctx->ioctx.buf + recv_ioctx->ioctx.offset; opcode = srp_cmd->opcode; if (opcode == SRP_CMD || opcode == SRP_TSK_MGMT) { send_ioctx = srpt_get_send_ioctx(ch); @@ -1604,7 +1650,8 @@ srpt_handle_new_iu(struct srpt_rdma_ch *ch, struct srpt_recv_ioctx *recv_ioctx) break; } - srpt_post_recv(ch->sport->sdev, ch, recv_ioctx); + if (!send_ioctx || !send_ioctx->recv_ioctx) + srpt_post_recv(ch->sport->sdev, ch, recv_ioctx); res = true; out: @@ -1630,6 +1677,7 @@ static void srpt_recv_done(struct ib_cq *cq, struct ib_wc *wc) req_lim = atomic_dec_return(&ch->req_lim); if (unlikely(req_lim < 0)) pr_err("req_lim = %d < 0\n", req_lim); + ioctx->byte_len = wc->byte_len; srpt_handle_new_iu(ch, ioctx); } else { pr_info_ratelimited("receiving failed for ioctx %p with status %d\n", @@ -1693,14 +1741,14 @@ static void srpt_send_done(struct ib_cq *cq, struct ib_wc *wc) atomic_add(1 + ioctx->n_rdma, &ch->sq_wr_avail); if (wc->status != IB_WC_SUCCESS) - pr_info("sending response for ioctx 0x%p failed" - " with status %d\n", ioctx, wc->status); + pr_info("sending response for ioctx 0x%p failed with status %d\n", + ioctx, wc->status); if (state != 
SRPT_STATE_DONE) { transport_generic_free_cmd(&ioctx->cmd, 0); } else { - pr_err("IB completion has been received too late for" - " wr_id = %u.\n", ioctx->ioctx.index); + pr_err("IB completion has been received too late for wr_id = %u.\n", + ioctx->ioctx.index); } srpt_process_wait_list(ch); @@ -1754,6 +1802,8 @@ retry: qp_init->cap.max_rdma_ctxs = sq_size / 2; qp_init->cap.max_send_sge = min(attrs->max_send_sge, SRPT_MAX_SG_PER_WQE); + qp_init->cap.max_recv_sge = min(attrs->max_recv_sge, + SRPT_MAX_SG_PER_WQE); qp_init->port_num = ch->sport->port; if (sdev->use_srq) { qp_init->srq = sdev->srq; @@ -2010,6 +2060,14 @@ static void srpt_free_ch(struct kref *kref) kfree_rcu(ch, rcu); } +/* + * Shut down the SCSI target session, tell the connection manager to + * disconnect the associated RDMA channel, transition the QP to the error + * state and remove the channel from the channel list. This function is + * typically called from inside srpt_zerolength_write_done(). Concurrent + * srpt_zerolength_write() calls from inside srpt_close_ch() are possible + * as long as the channel is on sport->nexus_list. + */ static void srpt_release_channel_work(struct work_struct *w) { struct srpt_rdma_ch *ch; @@ -2037,20 +2095,24 @@ static void srpt_release_channel_work(struct work_struct *w) else ib_destroy_cm_id(ch->ib_cm.cm_id); + sport = ch->sport; + mutex_lock(&sport->mutex); + list_del_rcu(&ch->list); + mutex_unlock(&sport->mutex); + srpt_destroy_ch_ib(ch); srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring, ch->sport->sdev, ch->rq_size, - ch->max_rsp_size, DMA_TO_DEVICE); + ch->rsp_buf_cache, DMA_TO_DEVICE); + + kmem_cache_destroy(ch->rsp_buf_cache); srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_recv_ring, sdev, ch->rq_size, - srp_max_req_size, DMA_FROM_DEVICE); + ch->req_buf_cache, DMA_FROM_DEVICE); - sport = ch->sport; - mutex_lock(&sport->mutex); - list_del_rcu(&ch->list); - mutex_unlock(&sport->mutex); + kmem_cache_destroy(ch->req_buf_cache); wake_up(&sport->ch_releaseQ); @@ -2174,14 +2236,19 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev, INIT_LIST_HEAD(&ch->cmd_wait_list); ch->max_rsp_size = ch->sport->port_attrib.srp_max_rsp_size; + ch->rsp_buf_cache = kmem_cache_create("srpt-rsp-buf", ch->max_rsp_size, + 512, 0, NULL); + if (!ch->rsp_buf_cache) + goto free_ch; + ch->ioctx_ring = (struct srpt_send_ioctx **) srpt_alloc_ioctx_ring(ch->sport->sdev, ch->rq_size, sizeof(*ch->ioctx_ring[0]), - ch->max_rsp_size, DMA_TO_DEVICE); + ch->rsp_buf_cache, 0, DMA_TO_DEVICE); if (!ch->ioctx_ring) { pr_err("rejected SRP_LOGIN_REQ because creating a new QP SQ ring failed.\n"); rej->reason = cpu_to_be32(SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); - goto free_ch; + goto free_rsp_cache; } INIT_LIST_HEAD(&ch->free_list); @@ -2190,16 +2257,39 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev, list_add_tail(&ch->ioctx_ring[i]->free_list, &ch->free_list); } if (!sdev->use_srq) { + u16 imm_data_offset = req->req_flags & SRP_IMMED_REQUESTED ? 
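+ /* Honour the initiator's immediate-data offset only when
+  * SRP_IMMED_REQUESTED is set. For example, imm_data_offset = 68
+  * yields alignment_offset = round_up(68, 512) - 68 = 444 below, so
+  * the immediate data starts 512-byte aligned in the receive buffer.
+  */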
+ be16_to_cpu(req->imm_data_offset) : 0; + u16 alignment_offset; + u32 req_sz; + + if (req->req_flags & SRP_IMMED_REQUESTED) + pr_debug("imm_data_offset = %d\n", + be16_to_cpu(req->imm_data_offset)); + if (imm_data_offset >= sizeof(struct srp_cmd)) { + ch->imm_data_offset = imm_data_offset; + rsp->rsp_flags |= SRP_LOGIN_RSP_IMMED_SUPP; + } else { + ch->imm_data_offset = 0; + } + alignment_offset = round_up(imm_data_offset, 512) - + imm_data_offset; + req_sz = alignment_offset + imm_data_offset + srp_max_req_size; + ch->req_buf_cache = kmem_cache_create("srpt-req-buf", req_sz, + 512, 0, NULL); + if (!ch->req_buf_cache) + goto free_rsp_ring; + ch->ioctx_recv_ring = (struct srpt_recv_ioctx **) srpt_alloc_ioctx_ring(ch->sport->sdev, ch->rq_size, sizeof(*ch->ioctx_recv_ring[0]), - srp_max_req_size, + ch->req_buf_cache, + alignment_offset, DMA_FROM_DEVICE); if (!ch->ioctx_recv_ring) { pr_err("rejected SRP_LOGIN_REQ because creating a new QP RQ ring failed.\n"); rej->reason = cpu_to_be32(SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); - goto free_ring; + goto free_recv_cache; } for (i = 0; i < ch->rq_size; i++) INIT_LIST_HEAD(&ch->ioctx_recv_ring[i]->wait_list); @@ -2249,17 +2339,15 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev, if ((req->req_flags & SRP_MTCH_ACTION) == SRP_MULTICHAN_SINGLE) { struct srpt_rdma_ch *ch2; - rsp->rsp_flags = SRP_LOGIN_RSP_MULTICHAN_NO_CHAN; - list_for_each_entry(ch2, &nexus->ch_list, list) { if (srpt_disconnect_ch(ch2) < 0) continue; pr_info("Relogin - closed existing channel %s\n", ch2->sess_name); - rsp->rsp_flags = SRP_LOGIN_RSP_MULTICHAN_TERMINATED; + rsp->rsp_flags |= SRP_LOGIN_RSP_MULTICHAN_TERMINATED; } } else { - rsp->rsp_flags = SRP_LOGIN_RSP_MULTICHAN_MAINTAINED; + rsp->rsp_flags |= SRP_LOGIN_RSP_MULTICHAN_MAINTAINED; } list_add_tail_rcu(&ch->list, &nexus->ch_list); @@ -2289,7 +2377,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev, /* create srp_login_response */ rsp->opcode = SRP_LOGIN_RSP; rsp->tag = req->tag; - rsp->max_it_iu_len = req->req_it_iu_len; + rsp->max_it_iu_len = cpu_to_be32(srp_max_req_size); rsp->max_ti_iu_len = req->req_it_iu_len; ch->max_ti_iu_len = it_iu_len; rsp->buf_fmt = cpu_to_be16(SRP_BUF_FORMAT_DIRECT | @@ -2353,12 +2441,18 @@ destroy_ib: free_recv_ring: srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_recv_ring, ch->sport->sdev, ch->rq_size, - srp_max_req_size, DMA_FROM_DEVICE); + ch->req_buf_cache, DMA_FROM_DEVICE); + +free_recv_cache: + kmem_cache_destroy(ch->req_buf_cache); -free_ring: +free_rsp_ring: srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring, ch->sport->sdev, ch->rq_size, - ch->max_rsp_size, DMA_TO_DEVICE); + ch->rsp_buf_cache, DMA_TO_DEVICE); + +free_rsp_cache: + kmem_cache_destroy(ch->rsp_buf_cache); free_ch: if (rdma_cm_id) @@ -2439,6 +2533,7 @@ static int srpt_rdma_cm_req_recv(struct rdma_cm_id *cm_id, req.req_flags = req_rdma->req_flags; memcpy(req.initiator_port_id, req_rdma->initiator_port_id, 16); memcpy(req.target_port_id, req_rdma->target_port_id, 16); + req.imm_data_offset = req_rdma->imm_data_offset; snprintf(src_addr, sizeof(src_addr), "%pIS", &cm_id->route.addr.src_addr); @@ -2629,6 +2724,12 @@ static int srpt_write_pending(struct se_cmd *se_cmd) enum srpt_command_state new_state; int ret, i; + if (ioctx->recv_ioctx) { + srpt_set_cmd_state(ioctx, SRPT_STATE_DATA_IN); + target_execute_cmd(&ioctx->cmd); + return 0; + } + new_state = srpt_set_cmd_state(ioctx, SRPT_STATE_NEED_DATA); WARN_ON(new_state == SRPT_STATE_DONE); @@ -2908,7 +3009,9 @@ static void srpt_free_srq(struct 
srpt_device *sdev) ib_destroy_srq(sdev->srq); srpt_free_ioctx_ring((struct srpt_ioctx **)sdev->ioctx_ring, sdev, - sdev->srq_size, srp_max_req_size, DMA_FROM_DEVICE); + sdev->srq_size, sdev->req_buf_cache, + DMA_FROM_DEVICE); + kmem_cache_destroy(sdev->req_buf_cache); sdev->srq = NULL; } @@ -2935,14 +3038,17 @@ static int srpt_alloc_srq(struct srpt_device *sdev) pr_debug("create SRQ #wr= %d max_allow=%d dev= %s\n", sdev->srq_size, sdev->device->attrs.max_srq_wr, dev_name(&device->dev)); + sdev->req_buf_cache = kmem_cache_create("srpt-srq-req-buf", + srp_max_req_size, 0, 0, NULL); + if (!sdev->req_buf_cache) + goto free_srq; + sdev->ioctx_ring = (struct srpt_recv_ioctx **) srpt_alloc_ioctx_ring(sdev, sdev->srq_size, sizeof(*sdev->ioctx_ring[0]), - srp_max_req_size, DMA_FROM_DEVICE); - if (!sdev->ioctx_ring) { - ib_destroy_srq(srq); - return -ENOMEM; - } + sdev->req_buf_cache, 0, DMA_FROM_DEVICE); + if (!sdev->ioctx_ring) + goto free_cache; sdev->use_srq = true; sdev->srq = srq; @@ -2953,6 +3059,13 @@ static int srpt_alloc_srq(struct srpt_device *sdev) } return 0; + +free_cache: + kmem_cache_destroy(sdev->req_buf_cache); + +free_srq: + ib_destroy_srq(srq); + return -ENOMEM; } static int srpt_use_srq(struct srpt_device *sdev, bool use_srq) @@ -3015,9 +3128,8 @@ static void srpt_add_one(struct ib_device *device) } /* print out target login information */ - pr_debug("Target login info: id_ext=%016llx,ioc_guid=%016llx," - "pkey=ffff,service_id=%016llx\n", srpt_service_guid, - srpt_service_guid, srpt_service_guid); + pr_debug("Target login info: id_ext=%016llx,ioc_guid=%016llx,pkey=ffff,service_id=%016llx\n", + srpt_service_guid, srpt_service_guid, srpt_service_guid); /* * We do not have a consistent service_id (ie. also id_ext of target_id) @@ -3147,11 +3259,6 @@ static int srpt_check_false(struct se_portal_group *se_tpg) return 0; } -static char *srpt_get_fabric_name(void) -{ - return "srpt"; -} - static struct srpt_port *srpt_tpg_to_sport(struct se_portal_group *tpg) { return tpg->se_tpg_wwn->priv; @@ -3182,11 +3289,18 @@ static void srpt_release_cmd(struct se_cmd *se_cmd) struct srpt_send_ioctx *ioctx = container_of(se_cmd, struct srpt_send_ioctx, cmd); struct srpt_rdma_ch *ch = ioctx->ch; + struct srpt_recv_ioctx *recv_ioctx = ioctx->recv_ioctx; unsigned long flags; WARN_ON_ONCE(ioctx->state != SRPT_STATE_DONE && !(ioctx->cmd.transport_state & CMD_T_ABORTED)); + if (recv_ioctx) { + WARN_ON_ONCE(!list_empty(&recv_ioctx->wait_list)); + ioctx->recv_ioctx = NULL; + srpt_post_recv(ch->sport->sdev, ch, recv_ioctx); + } + if (ioctx->n_rw_ctx) { srpt_free_rw_ctxs(ch, ioctx); ioctx->n_rw_ctx = 0; @@ -3572,7 +3686,7 @@ static ssize_t srpt_tpg_enable_show(struct config_item *item, char *page) struct se_portal_group *se_tpg = to_tpg(item); struct srpt_port *sport = srpt_tpg_to_sport(se_tpg); - return snprintf(page, PAGE_SIZE, "%d\n", (sport->enabled) ? 
1: 0); + return snprintf(page, PAGE_SIZE, "%d\n", sport->enabled); } static ssize_t srpt_tpg_enable_store(struct config_item *item, @@ -3581,7 +3695,7 @@ static ssize_t srpt_tpg_enable_store(struct config_item *item, struct se_portal_group *se_tpg = to_tpg(item); struct srpt_port *sport = srpt_tpg_to_sport(se_tpg); unsigned long tmp; - int ret; + int ret; ret = kstrtoul(page, 0, &tmp); if (ret < 0) { @@ -3617,7 +3731,7 @@ static struct se_portal_group *srpt_make_tpg(struct se_wwn *wwn, const char *name) { struct srpt_port *sport = wwn->priv; - static struct se_portal_group *tpg; + struct se_portal_group *tpg; int res; WARN_ON_ONCE(wwn != &sport->port_guid_wwn && @@ -3666,7 +3780,7 @@ static void srpt_drop_tport(struct se_wwn *wwn) static ssize_t srpt_wwn_version_show(struct config_item *item, char *buf) { - return scnprintf(buf, PAGE_SIZE, "%s\n", DRV_VERSION); + return scnprintf(buf, PAGE_SIZE, "\n"); } CONFIGFS_ATTR_RO(srpt_wwn_, version); @@ -3678,8 +3792,7 @@ static struct configfs_attribute *srpt_wwn_attrs[] = { static const struct target_core_fabric_ops srpt_template = { .module = THIS_MODULE, - .name = "srpt", - .get_fabric_name = srpt_get_fabric_name, + .fabric_name = "srpt", .tpg_get_wwn = srpt_get_fabric_wwn, .tpg_get_tag = srpt_get_tag, .tpg_check_demo_mode = srpt_check_false, @@ -3730,16 +3843,14 @@ static int __init srpt_init_module(void) ret = -EINVAL; if (srp_max_req_size < MIN_MAX_REQ_SIZE) { - pr_err("invalid value %d for kernel module parameter" - " srp_max_req_size -- must be at least %d.\n", + pr_err("invalid value %d for kernel module parameter srp_max_req_size -- must be at least %d.\n", srp_max_req_size, MIN_MAX_REQ_SIZE); goto out; } if (srpt_srq_size < MIN_SRPT_SRQ_SIZE || srpt_srq_size > MAX_SRPT_SRQ_SIZE) { - pr_err("invalid value %d for kernel module parameter" - " srpt_srq_size -- must be in the range [%d..%d].\n", + pr_err("invalid value %d for kernel module parameter srpt_srq_size -- must be in the range [%d..%d].\n", srpt_srq_size, MIN_SRPT_SRQ_SIZE, MAX_SRPT_SRQ_SIZE); goto out; } diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h index 444dfd7281b5..39b3e50baf3d 100644 --- a/drivers/infiniband/ulp/srpt/ib_srpt.h +++ b/drivers/infiniband/ulp/srpt/ib_srpt.h @@ -104,10 +104,6 @@ enum { SRP_CMD_ORDERED_Q = 0x2, SRP_CMD_ACA = 0x4, - SRP_LOGIN_RSP_MULTICHAN_NO_CHAN = 0x0, - SRP_LOGIN_RSP_MULTICHAN_TERMINATED = 0x1, - SRP_LOGIN_RSP_MULTICHAN_MAINTAINED = 0x2, - SRPT_DEF_SG_TABLESIZE = 128, /* * An experimentally determined value that avoids that QP creation @@ -124,11 +120,18 @@ enum { MAX_SRPT_RDMA_SIZE = 1U << 24, MAX_SRPT_RSP_SIZE = 1024, + SRP_MAX_ADD_CDB_LEN = 16, + SRP_MAX_IMM_DATA_OFFSET = 80, + SRP_MAX_IMM_DATA = 8 * 1024, MIN_MAX_REQ_SIZE = 996, - DEFAULT_MAX_REQ_SIZE - = sizeof(struct srp_cmd)/*48*/ - + sizeof(struct srp_indirect_buf)/*20*/ - + 128 * sizeof(struct srp_direct_buf)/*16*/, + DEFAULT_MAX_REQ_SIZE_1 = sizeof(struct srp_cmd)/*48*/ + + SRP_MAX_ADD_CDB_LEN + + sizeof(struct srp_indirect_buf)/*20*/ + + 128 * sizeof(struct srp_direct_buf)/*16*/, + DEFAULT_MAX_REQ_SIZE_2 = SRP_MAX_IMM_DATA_OFFSET + + sizeof(struct srp_imm_buf) + SRP_MAX_IMM_DATA, + DEFAULT_MAX_REQ_SIZE = DEFAULT_MAX_REQ_SIZE_1 > DEFAULT_MAX_REQ_SIZE_2 ? + DEFAULT_MAX_REQ_SIZE_1 : DEFAULT_MAX_REQ_SIZE_2, MIN_MAX_RSP_SIZE = sizeof(struct srp_rsp)/*36*/ + 4, DEFAULT_MAX_RSP_SIZE = 256, /* leaves 220 bytes for sense data */ @@ -165,12 +168,14 @@ enum srpt_command_state { * @cqe: Completion queue element. * @buf: Pointer to the buffer. 
* @dma: DMA address of the buffer. + * @offset: Offset of the first byte in @buf and @dma that is actually used. * @index: Index of the I/O context in its ioctx_ring array. */ struct srpt_ioctx { struct ib_cqe cqe; void *buf; dma_addr_t dma; + uint32_t offset; uint32_t index; }; @@ -178,12 +183,14 @@ struct srpt_ioctx { * struct srpt_recv_ioctx - SRPT receive I/O context * @ioctx: See above. * @wait_list: Node for insertion in srpt_rdma_ch.cmd_wait_list. + * @byte_len: Number of bytes in @ioctx.buf. */ struct srpt_recv_ioctx { struct srpt_ioctx ioctx; struct list_head wait_list; + int byte_len; }; - + struct srpt_rw_ctx { struct rdma_rw_ctx rw; struct scatterlist *sg; @@ -194,8 +201,11 @@ struct srpt_rw_ctx { * struct srpt_send_ioctx - SRPT send I/O context * @ioctx: See above. * @ch: Channel pointer. + * @recv_ioctx: Receive I/O context associated with this send I/O context. + * Only used for processing immediate data. * @s_rw_ctx: @rw_ctxs points here if only a single rw_ctx is needed. * @rw_ctxs: RDMA read/write contexts. + * @imm_sg: Scatterlist for immediate data. * @rdma_cqe: RDMA completion queue element. * @free_list: Node in srpt_rdma_ch.free_list. * @state: I/O context state. @@ -209,10 +219,13 @@ struct srpt_rw_ctx { struct srpt_send_ioctx { struct srpt_ioctx ioctx; struct srpt_rdma_ch *ch; + struct srpt_recv_ioctx *recv_ioctx; struct srpt_rw_ctx s_rw_ctx; struct srpt_rw_ctx *rw_ctxs; + struct scatterlist imm_sg; + struct ib_cqe rdma_cqe; struct list_head free_list; enum srpt_command_state state; @@ -245,7 +258,10 @@ enum rdma_ch_state { * struct srpt_rdma_ch - RDMA channel * @nexus: I_T nexus this channel is associated with. * @qp: IB queue pair used for communicating over this channel. - * @cm_id: IB CM ID associated with the channel. + * @ib_cm: See below. + * @ib_cm.cm_id: IB CM ID associated with the channel. + * @rdma_cm: See below. + * @rdma_cm.cm_id: RDMA CM ID associated with the channel. * @cq: IB completion queue for this channel. * @zw_cqe: Zero-length write CQE. * @rcu: RCU head. @@ -259,12 +275,15 @@ enum rdma_ch_state { * @req_lim: request limit: maximum number of requests that may be sent * by the initiator without having received a response. * @req_lim_delta: Number of credits not yet sent back to the initiator. + * @imm_data_offset: Offset from start of SRP_CMD for immediate data. * @spinlock: Protects free_list and state. * @free_list: Head of list with free send I/O contexts. * @state: channel state. See also enum rdma_ch_state. * @using_rdma_cm: Whether the RDMA/CM or IB/CM is used for this channel. * @processing_wait_list: Whether or not cmd_wait_list is being processed. + * @rsp_buf_cache: kmem_cache for @ioctx_ring. * @ioctx_ring: Send ring. + * @req_buf_cache: kmem_cache for @ioctx_recv_ring. * @ioctx_recv_ring: Receive I/O context ring. * @list: Node in srpt_nexus.ch_list. * @cmd_wait_list: List of SCSI commands that arrived before the RTU event. This @@ -297,10 +316,13 @@ struct srpt_rdma_ch { int max_ti_iu_len; atomic_t req_lim; atomic_t req_lim_delta; + u16 imm_data_offset; spinlock_t spinlock; struct list_head free_list; enum rdma_ch_state state; + struct kmem_cache *rsp_buf_cache; struct srpt_send_ioctx **ioctx_ring; + struct kmem_cache *req_buf_cache; struct srpt_recv_ioctx **ioctx_recv_ring; struct list_head list; struct list_head cmd_wait_list; @@ -395,6 +417,7 @@ struct srpt_port { * @srq_size: SRQ size. * @sdev_mutex: Serializes use_srq changes. * @use_srq: Whether or not to use SRQ. + * @req_buf_cache: kmem_cache for @ioctx_ring buffers. 
* @ioctx_ring: Per-HCA SRQ. * @event_handler: Per-HCA asynchronous IB event handler. * @list: Node in srpt_dev_list. @@ -409,6 +432,7 @@ struct srpt_device { int srq_size; struct mutex sdev_mutex; bool use_srq; + struct kmem_cache *req_buf_cache; struct srpt_recv_ioctx **ioctx_ring; struct ib_event_handler event_handler; struct list_head list; diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c index 1167ff0416cf..567221cca13c 100644 --- a/drivers/iommu/amd_iommu.c +++ b/drivers/iommu/amd_iommu.c @@ -55,8 +55,6 @@ #include "amd_iommu_types.h" #include "irq_remapping.h" -#define AMD_IOMMU_MAPPING_ERROR 0 - #define CMD_SET_TYPE(cmd, t) ((cmd)->data[1] |= ((t) << 28)) #define LOOP_TIMEOUT 100000 @@ -2186,7 +2184,7 @@ static int amd_iommu_add_device(struct device *dev) dev_name(dev)); iommu_ignore_device(dev); - dev->dma_ops = &dma_direct_ops; + dev->dma_ops = NULL; goto out; } init_iommu_group(dev); @@ -2339,7 +2337,7 @@ static dma_addr_t __map_single(struct device *dev, paddr &= PAGE_MASK; address = dma_ops_alloc_iova(dev, dma_dom, pages, dma_mask); - if (address == AMD_IOMMU_MAPPING_ERROR) + if (!address) goto out; prot = dir2prot(direction); @@ -2376,7 +2374,7 @@ out_unmap: dma_ops_free_iova(dma_dom, address, pages); - return AMD_IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } /* @@ -2427,7 +2425,7 @@ static dma_addr_t map_page(struct device *dev, struct page *page, if (PTR_ERR(domain) == -EINVAL) return (dma_addr_t)paddr; else if (IS_ERR(domain)) - return AMD_IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; dma_mask = *dev->dma_mask; dma_dom = to_dma_ops_domain(domain); @@ -2504,7 +2502,7 @@ static int map_sg(struct device *dev, struct scatterlist *sglist, npages = sg_num_pages(dev, sglist, nelems); address = dma_ops_alloc_iova(dev, dma_dom, npages, dma_mask); - if (address == AMD_IOMMU_MAPPING_ERROR) + if (address == DMA_MAPPING_ERROR) goto out_err; prot = dir2prot(direction); @@ -2627,7 +2625,7 @@ static void *alloc_coherent(struct device *dev, size_t size, *dma_addr = __map_single(dev, dma_dom, page_to_phys(page), size, DMA_BIDIRECTIONAL, dma_mask); - if (*dma_addr == AMD_IOMMU_MAPPING_ERROR) + if (*dma_addr == DMA_MAPPING_ERROR) goto out_free; return page_address(page); @@ -2678,11 +2676,6 @@ static int amd_iommu_dma_supported(struct device *dev, u64 mask) return check_device(dev); } -static int amd_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == AMD_IOMMU_MAPPING_ERROR; -} - static const struct dma_map_ops amd_iommu_dma_ops = { .alloc = alloc_coherent, .free = free_coherent, @@ -2691,7 +2684,6 @@ static const struct dma_map_ops amd_iommu_dma_ops = { .map_sg = map_sg, .unmap_sg = unmap_sg, .dma_supported = amd_iommu_dma_supported, - .mapping_error = amd_iommu_mapping_error, }; static int init_reserved_iova_ranges(void) @@ -2778,17 +2770,6 @@ int __init amd_iommu_init_dma_ops(void) swiotlb = (iommu_pass_through || sme_me_mask) ? 1 : 0; iommu_detected = 1; - /* - * In case we don't initialize SWIOTLB (actually the common case - * when AMD IOMMU is enabled and SME is not active), make sure there - * are global dma_ops set as a fall-back for devices not handled by - * this driver (for example non-PCI devices). When SME is active, - * make sure that swiotlb variable remains set so the global dma_ops - * continue to be SWIOTLB. 
- */ - if (!swiotlb) - dma_ops = &dma_direct_ops; - if (amd_iommu_unmap_flush) pr_info("AMD-Vi: IO/TLB flush on unmap enabled\n"); else diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index d1b04753b204..60c7e9e9901e 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -32,8 +32,6 @@ #include #include -#define IOMMU_MAPPING_ERROR 0 - struct iommu_dma_msi_page { struct list_head list; dma_addr_t iova; @@ -523,7 +521,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size, { __iommu_dma_unmap(iommu_get_dma_domain(dev), *handle, size); __iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT); - *handle = IOMMU_MAPPING_ERROR; + *handle = DMA_MAPPING_ERROR; } /** @@ -556,7 +554,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp, dma_addr_t iova; unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap; - *handle = IOMMU_MAPPING_ERROR; + *handle = DMA_MAPPING_ERROR; min_size = alloc_sizes & -alloc_sizes; if (min_size < PAGE_SIZE) { @@ -649,11 +647,11 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev); if (!iova) - return IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; if (iommu_map(domain, iova, phys - iova_off, size, prot)) { iommu_dma_free_iova(cookie, iova, size); - return IOMMU_MAPPING_ERROR; + return DMA_MAPPING_ERROR; } return iova + iova_off; } @@ -694,7 +692,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents, s->offset += s_iova_off; s->length = s_length; - sg_dma_address(s) = IOMMU_MAPPING_ERROR; + sg_dma_address(s) = DMA_MAPPING_ERROR; sg_dma_len(s) = 0; /* @@ -737,11 +735,11 @@ static void __invalidate_sg(struct scatterlist *sg, int nents) int i; for_each_sg(sg, s, nents, i) { - if (sg_dma_address(s) != IOMMU_MAPPING_ERROR) + if (sg_dma_address(s) != DMA_MAPPING_ERROR) s->offset += sg_dma_address(s); if (sg_dma_len(s)) s->length = sg_dma_len(s); - sg_dma_address(s) = IOMMU_MAPPING_ERROR; + sg_dma_address(s) = DMA_MAPPING_ERROR; sg_dma_len(s) = 0; } } @@ -858,11 +856,6 @@ void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, __iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size); } -int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == IOMMU_MAPPING_ERROR; -} - static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, phys_addr_t msi_addr, struct iommu_domain *domain) { @@ -882,7 +875,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, return NULL; iova = __iommu_dma_map(dev, msi_addr, size, prot, domain); - if (iommu_dma_mapping_error(dev, iova)) + if (iova == DMA_MAPPING_ERROR) goto out_free_page; INIT_LIST_HEAD(&msi_page->list); diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c index d9c748b6f9e4..1edf2a251336 100644 --- a/drivers/iommu/dmar.c +++ b/drivers/iommu/dmar.c @@ -2042,3 +2042,28 @@ int dmar_device_remove(acpi_handle handle) { return dmar_device_hotplug(handle, false); } + +/* + * dmar_platform_optin - Is %DMA_CTRL_PLATFORM_OPT_IN_FLAG set in DMAR table + * + * Returns true if the platform has %DMA_CTRL_PLATFORM_OPT_IN_FLAG set in + * the ACPI DMAR table. This means that the platform boot firmware has made + * sure no device can issue DMA outside of RMRR regions. 
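+ * The Intel IOMMU driver uses this to force the IOMMU on when such a
+ * platform also exposes untrusted (external) PCI devices.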
+ */ +bool dmar_platform_optin(void) +{ + struct acpi_table_dmar *dmar; + acpi_status status; + bool ret; + + status = acpi_get_table(ACPI_SIG_DMAR, 0, + (struct acpi_table_header **)&dmar); + if (ACPI_FAILURE(status)) + return false; + + ret = !!(dmar->flags & DMAR_PLATFORM_OPT_IN); + acpi_put_table((struct acpi_table_header *)dmar); + + return ret; +} +EXPORT_SYMBOL_GPL(dmar_platform_optin); diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index 41a4b8808802..63b6ce78492a 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -184,6 +184,7 @@ static int rwbf_quirk; */ static int force_on = 0; int intel_iommu_tboot_noforce; +static int no_platform_optin; #define ROOT_ENTRY_NR (VTD_PAGE_SIZE/sizeof(struct root_entry)) @@ -503,6 +504,7 @@ static int __init intel_iommu_setup(char *str) pr_info("IOMMU enabled\n"); } else if (!strncmp(str, "off", 3)) { dmar_disabled = 1; + no_platform_optin = 1; pr_info("IOMMU disabled\n"); } else if (!strncmp(str, "igfx_off", 8)) { dmar_map_gfx = 0; @@ -1471,7 +1473,8 @@ static void iommu_enable_dev_iotlb(struct device_domain_info *info) if (info->pri_supported && !pci_reset_pri(pdev) && !pci_enable_pri(pdev, 32)) info->pri_enabled = 1; #endif - if (info->ats_supported && !pci_enable_ats(pdev, VTD_PAGE_SHIFT)) { + if (!pdev->untrusted && info->ats_supported && + !pci_enable_ats(pdev, VTD_PAGE_SHIFT)) { info->ats_enabled = 1; domain_update_iotlb(info->domain); info->ats_qdep = pci_ats_queue_depth(pdev); @@ -2895,6 +2898,13 @@ static int iommu_should_identity_map(struct device *dev, int startup) if (device_is_rmrr_locked(dev)) return 0; + /* + * Prevent any device marked as untrusted from getting + * placed into the statically identity mapping domain. + */ + if (pdev->untrusted) + return 0; + if ((iommu_identity_mapping & IDENTMAP_AZALIA) && IS_AZALIA(pdev)) return 1; @@ -3597,9 +3607,11 @@ static int iommu_no_mapping(struct device *dev) return 0; } -static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr, - size_t size, int dir, u64 dma_mask) +static dma_addr_t __intel_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t size, int dir, + u64 dma_mask) { + phys_addr_t paddr = page_to_phys(page) + offset; struct dmar_domain *domain; phys_addr_t start_paddr; unsigned long iova_pfn; @@ -3615,7 +3627,7 @@ static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr, domain = get_valid_domain_for_dev(dev); if (!domain) - return 0; + return DMA_MAPPING_ERROR; iommu = domain_get_iommu(domain); size = aligned_nrpages(paddr, size); @@ -3653,7 +3665,7 @@ error: free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size)); pr_err("Device %s request: %zx@%llx dir %d --- failed\n", dev_name(dev), size, (unsigned long long)paddr, dir); - return 0; + return DMA_MAPPING_ERROR; } static dma_addr_t intel_map_page(struct device *dev, struct page *page, @@ -3661,8 +3673,7 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page, enum dma_data_direction dir, unsigned long attrs) { - return __intel_map_single(dev, page_to_phys(page) + offset, size, - dir, *dev->dma_mask); + return __intel_map_page(dev, page, offset, size, dir, *dev->dma_mask); } static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size) @@ -3753,10 +3764,9 @@ static void *intel_alloc_coherent(struct device *dev, size_t size, return NULL; memset(page_address(page), 0, size); - *dma_handle = __intel_map_single(dev, page_to_phys(page), size, - DMA_BIDIRECTIONAL, - 
dev->coherent_dma_mask); - if (*dma_handle) + *dma_handle = __intel_map_page(dev, page, 0, size, DMA_BIDIRECTIONAL, + dev->coherent_dma_mask); + if (*dma_handle != DMA_MAPPING_ERROR) return page_address(page); if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT)) __free_pages(page, order); @@ -3865,11 +3875,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele return nelems; } -static int intel_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return !dma_addr; -} - static const struct dma_map_ops intel_dma_ops = { .alloc = intel_alloc_coherent, .free = intel_free_coherent, @@ -3877,7 +3882,6 @@ static const struct dma_map_ops intel_dma_ops = { .unmap_sg = intel_unmap_sg, .map_page = intel_map_page, .unmap_page = intel_unmap_page, - .mapping_error = intel_mapping_error, .dma_supported = dma_direct_supported, }; @@ -4728,14 +4732,54 @@ const struct attribute_group *intel_iommu_groups[] = { NULL, }; +static int __init platform_optin_force_iommu(void) +{ + struct pci_dev *pdev = NULL; + bool has_untrusted_dev = false; + + if (!dmar_platform_optin() || no_platform_optin) + return 0; + + for_each_pci_dev(pdev) { + if (pdev->untrusted) { + has_untrusted_dev = true; + break; + } + } + + if (!has_untrusted_dev) + return 0; + + if (no_iommu || dmar_disabled) + pr_info("Intel-IOMMU force enabled due to platform opt in\n"); + + /* + * If Intel-IOMMU is disabled by default, we will apply identity + * map for all devices except those marked as being untrusted. + */ + if (dmar_disabled) + iommu_identity_mapping |= IDENTMAP_ALL; + + dmar_disabled = 0; +#if defined(CONFIG_X86) && defined(CONFIG_SWIOTLB) + swiotlb = 0; +#endif + no_iommu = 0; + + return 1; +} + int __init intel_iommu_init(void) { int ret = -ENODEV; struct dmar_drhd_unit *drhd; struct intel_iommu *iommu; - /* VT-d is required for a TXT/tboot launch, so enforce that */ - force_on = tboot_force_iommu(); + /* + * Intel IOMMU is required for a TXT/tboot launch or platform + * opt in, so enforce that. 
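+ * platform_optin_force_iommu() only takes effect when the firmware set
+ * DMA_CTRL_PLATFORM_OPT_IN_FLAG, an untrusted device is present, and
+ * the user did not boot with intel_iommu=off.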
+ */ + force_on = tboot_force_iommu() || platform_optin_force_iommu(); if (iommu_init_mempool()) { if (force_on) diff --git a/drivers/irqchip/irq-ativic32.c b/drivers/irqchip/irq-ativic32.c index f69a8588521c..85cf6e0e0e52 100644 --- a/drivers/irqchip/irq-ativic32.c +++ b/drivers/irqchip/irq-ativic32.c @@ -10,6 +10,8 @@ #include #include +unsigned long wake_mask; + static void ativic32_ack_irq(struct irq_data *data) { __nds32__mtsr_dsb(BIT(data->hwirq), NDS32_SR_INT_PEND2); @@ -27,11 +29,40 @@ static void ativic32_unmask_irq(struct irq_data *data) __nds32__mtsr_dsb(int_mask2 | (BIT(data->hwirq)), NDS32_SR_INT_MASK2); } +static int nointc_set_wake(struct irq_data *data, unsigned int on) +{ + unsigned long int_mask = __nds32__mfsr(NDS32_SR_INT_MASK); + static unsigned long irq_orig_bit; + u32 bit = 1 << data->hwirq; + + if (on) { + if (int_mask & bit) + __assign_bit(data->hwirq, &irq_orig_bit, true); + else + __assign_bit(data->hwirq, &irq_orig_bit, false); + + __assign_bit(data->hwirq, &int_mask, true); + __assign_bit(data->hwirq, &wake_mask, true); + + } else { + if (!(irq_orig_bit & bit)) + __assign_bit(data->hwirq, &int_mask, false); + + __assign_bit(data->hwirq, &wake_mask, false); + __assign_bit(data->hwirq, &irq_orig_bit, false); + } + + __nds32__mtsr_dsb(int_mask, NDS32_SR_INT_MASK); + + return 0; +} + static struct irq_chip ativic32_chip = { .name = "ativic32", .irq_ack = ativic32_ack_irq, .irq_mask = ativic32_mask_irq, .irq_unmask = ativic32_unmask_irq, + .irq_set_wake = nointc_set_wake, }; static unsigned int __initdata nivic_map[6] = { 6, 2, 10, 16, 24, 32 }; diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index d3d4f65b377b..0868a9d81c3c 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -1199,8 +1199,8 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node) part->partition_id = of_node_to_fwnode(child_part); - pr_info("GIC: PPI partition %s[%d] { ", - child_part->name, part_idx); + pr_info("GIC: PPI partition %pOFn[%d] { ", + child_part, part_idx); n = of_property_count_elems_of_size(child_part, "affinity", sizeof(u32)); diff --git a/drivers/irqchip/irq-orion.c b/drivers/irqchip/irq-orion.c index be4c5a8c9659..c4b5ffb61954 100644 --- a/drivers/irqchip/irq-orion.c +++ b/drivers/irqchip/irq-orion.c @@ -64,14 +64,14 @@ static int __init orion_irq_init(struct device_node *np, num_chips * ORION_IRQS_PER_CHIP, &irq_generic_chip_ops, NULL); if (!orion_irq_domain) - panic("%s: unable to add irq domain\n", np->name); + panic("%pOFn: unable to add irq domain\n", np); ret = irq_alloc_domain_generic_chips(orion_irq_domain, - ORION_IRQS_PER_CHIP, 1, np->name, + ORION_IRQS_PER_CHIP, 1, np->full_name, handle_level_irq, clr, 0, IRQ_GC_INIT_MASK_CACHE); if (ret) - panic("%s: unable to alloc irq domain gc\n", np->name); + panic("%pOFn: unable to alloc irq domain gc\n", np); for (n = 0, base = 0; n < num_chips; n++, base += ORION_IRQS_PER_CHIP) { struct irq_chip_generic *gc = @@ -80,12 +80,12 @@ static int __init orion_irq_init(struct device_node *np, of_address_to_resource(np, n, &r); if (!request_mem_region(r.start, resource_size(&r), np->name)) - panic("%s: unable to request mem region %d", - np->name, n); + panic("%pOFn: unable to request mem region %d", + np, n); gc->reg_base = ioremap(r.start, resource_size(&r)); if (!gc->reg_base) - panic("%s: unable to map resource %d", np->name, n); + panic("%pOFn: unable to map resource %d", np, n); gc->chip_types[0].regs.mask = ORION_IRQ_MASK; gc->chip_types[0].chip.irq_mask 
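/* %pOFn prints the name of the device_node itself, replacing the
 * former np->name dereferences in these messages. */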
= irq_gc_mask_clr_bit; @@ -150,20 +150,20 @@ static int __init orion_bridge_irq_init(struct device_node *np, domain = irq_domain_add_linear(np, nrirqs, &irq_generic_chip_ops, NULL); if (!domain) { - pr_err("%s: unable to add irq domain\n", np->name); + pr_err("%pOFn: unable to add irq domain\n", np); return -ENOMEM; } ret = irq_alloc_domain_generic_chips(domain, nrirqs, 1, np->name, handle_edge_irq, clr, 0, IRQ_GC_INIT_MASK_CACHE); if (ret) { - pr_err("%s: unable to alloc irq domain gc\n", np->name); + pr_err("%pOFn: unable to alloc irq domain gc\n", np); return ret; } ret = of_address_to_resource(np, 0, &r); if (ret) { - pr_err("%s: unable to get resource\n", np->name); + pr_err("%pOFn: unable to get resource\n", np); return ret; } @@ -175,14 +175,14 @@ static int __init orion_bridge_irq_init(struct device_node *np, /* Map the parent interrupt for the chained handler */ irq = irq_of_parse_and_map(np, 0); if (irq <= 0) { - pr_err("%s: unable to parse irq\n", np->name); + pr_err("%pOFn: unable to parse irq\n", np); return -EINVAL; } gc = irq_get_domain_generic_chip(domain, 0); gc->reg_base = ioremap(r.start, resource_size(&r)); if (!gc->reg_base) { - pr_err("%s: unable to map resource\n", np->name); + pr_err("%pOFn: unable to map resource\n", np); return -ENOMEM; } diff --git a/drivers/irqchip/irq-tb10x.c b/drivers/irqchip/irq-tb10x.c index 848d782a2a3b..7e6708099a7b 100644 --- a/drivers/irqchip/irq-tb10x.c +++ b/drivers/irqchip/irq-tb10x.c @@ -115,21 +115,21 @@ static int __init of_tb10x_init_irq(struct device_node *ictl, void __iomem *reg_base; if (of_address_to_resource(ictl, 0, &mem)) { - pr_err("%s: No registers declared in DeviceTree.\n", - ictl->name); + pr_err("%pOFn: No registers declared in DeviceTree.\n", + ictl); return -EINVAL; } if (!request_mem_region(mem.start, resource_size(&mem), - ictl->name)) { - pr_err("%s: Request mem region failed.\n", ictl->name); + ictl->full_name)) { + pr_err("%pOFn: Request mem region failed.\n", ictl); return -EBUSY; } reg_base = ioremap(mem.start, resource_size(&mem)); if (!reg_base) { ret = -EBUSY; - pr_err("%s: ioremap failed.\n", ictl->name); + pr_err("%pOFn: ioremap failed.\n", ictl); goto ioremap_fail; } @@ -137,8 +137,8 @@ static int __init of_tb10x_init_irq(struct device_node *ictl, &irq_generic_chip_ops, NULL); if (!domain) { ret = -ENOMEM; - pr_err("%s: Could not register interrupt domain.\n", - ictl->name); + pr_err("%pOFn: Could not register interrupt domain.\n", + ictl); goto irq_domain_add_fail; } @@ -147,8 +147,8 @@ static int __init of_tb10x_init_irq(struct device_node *ictl, IRQ_NOREQUEST, IRQ_NOPROBE, IRQ_GC_INIT_MASK_CACHE); if (ret) { - pr_err("%s: Could not allocate generic interrupt chip.\n", - ictl->name); + pr_err("%pOFn: Could not allocate generic interrupt chip.\n", + ictl); goto gc_alloc_fail; } diff --git a/drivers/irqchip/irq-xtensa-mx.c b/drivers/irqchip/irq-xtensa-mx.c index e539500752d4..5385f5768345 100644 --- a/drivers/irqchip/irq-xtensa-mx.c +++ b/drivers/irqchip/irq-xtensa-mx.c @@ -62,7 +62,7 @@ void secondary_init_irq(void) __this_cpu_write(cached_irq_mask, XCHAL_INTTYPE_MASK_EXTERN_EDGE | XCHAL_INTTYPE_MASK_EXTERN_LEVEL); - set_sr(XCHAL_INTTYPE_MASK_EXTERN_EDGE | + xtensa_set_sr(XCHAL_INTTYPE_MASK_EXTERN_EDGE | XCHAL_INTTYPE_MASK_EXTERN_LEVEL, intenable); } @@ -77,7 +77,7 @@ static void xtensa_mx_irq_mask(struct irq_data *d) } else { mask = __this_cpu_read(cached_irq_mask) & ~mask; __this_cpu_write(cached_irq_mask, mask); - set_sr(mask, intenable); + xtensa_set_sr(mask, intenable); } } @@ -92,7 +92,7 @@ 
static void xtensa_mx_irq_unmask(struct irq_data *d) } else { mask |= __this_cpu_read(cached_irq_mask); __this_cpu_write(cached_irq_mask, mask); - set_sr(mask, intenable); + xtensa_set_sr(mask, intenable); } } @@ -108,12 +108,12 @@ static void xtensa_mx_irq_disable(struct irq_data *d) static void xtensa_mx_irq_ack(struct irq_data *d) { - set_sr(1 << d->hwirq, intclear); + xtensa_set_sr(1 << d->hwirq, intclear); } static int xtensa_mx_irq_retrigger(struct irq_data *d) { - set_sr(1 << d->hwirq, intset); + xtensa_set_sr(1 << d->hwirq, intset); return 1; } diff --git a/drivers/irqchip/irq-xtensa-pic.c b/drivers/irqchip/irq-xtensa-pic.c index 000cb5462bcf..c200234dd2c9 100644 --- a/drivers/irqchip/irq-xtensa-pic.c +++ b/drivers/irqchip/irq-xtensa-pic.c @@ -44,13 +44,13 @@ static const struct irq_domain_ops xtensa_irq_domain_ops = { static void xtensa_irq_mask(struct irq_data *d) { cached_irq_mask &= ~(1 << d->hwirq); - set_sr(cached_irq_mask, intenable); + xtensa_set_sr(cached_irq_mask, intenable); } static void xtensa_irq_unmask(struct irq_data *d) { cached_irq_mask |= 1 << d->hwirq; - set_sr(cached_irq_mask, intenable); + xtensa_set_sr(cached_irq_mask, intenable); } static void xtensa_irq_enable(struct irq_data *d) @@ -65,12 +65,12 @@ static void xtensa_irq_disable(struct irq_data *d) static void xtensa_irq_ack(struct irq_data *d) { - set_sr(1 << d->hwirq, intclear); + xtensa_set_sr(1 << d->hwirq, intclear); } static int xtensa_irq_retrigger(struct irq_data *d) { - set_sr(1 << d->hwirq, intset); + xtensa_set_sr(1 << d->hwirq, intset); return 1; } diff --git a/drivers/isdn/hardware/Kconfig b/drivers/isdn/hardware/Kconfig index 30d028d24955..95c403088cce 100644 --- a/drivers/isdn/hardware/Kconfig +++ b/drivers/isdn/hardware/Kconfig @@ -5,5 +5,3 @@ comment "CAPI hardware drivers" source "drivers/isdn/hardware/avm/Kconfig" -source "drivers/isdn/hardware/eicon/Kconfig" - diff --git a/drivers/isdn/hardware/Makefile b/drivers/isdn/hardware/Makefile index a5d8fce4c4c4..e503032b05a0 100644 --- a/drivers/isdn/hardware/Makefile +++ b/drivers/isdn/hardware/Makefile @@ -3,5 +3,4 @@ # Object files in subdirectories obj-$(CONFIG_CAPI_AVM) += avm/ -obj-$(CONFIG_CAPI_EICON) += eicon/ obj-$(CONFIG_MISDN) += mISDN/ diff --git a/drivers/isdn/hardware/eicon/Kconfig b/drivers/isdn/hardware/eicon/Kconfig deleted file mode 100644 index 6082b6a5ced3..000000000000 --- a/drivers/isdn/hardware/eicon/Kconfig +++ /dev/null @@ -1,51 +0,0 @@ -# -# ISDN DIVAS Eicon driver -# - -menuconfig CAPI_EICON - bool "Active Eicon DIVA Server cards" - help - Enable support for Eicon Networks active ISDN cards. - -if CAPI_EICON - -config ISDN_DIVAS - tristate "Support Eicon DIVA Server cards" - depends on PROC_FS && PCI - help - Say Y here if you have an Eicon Networks DIVA Server PCI ISDN card. - In order to use this card, additional firmware is necessary, which - has to be downloaded into the card using the divactrl utility. - -if ISDN_DIVAS - -config ISDN_DIVAS_BRIPCI - bool "DIVA Server BRI/PCI support" - help - Enable support for DIVA Server BRI-PCI. - -config ISDN_DIVAS_PRIPCI - bool "DIVA Server PRI/PCI support" - help - Enable support for DIVA Server PRI-PCI. - -config ISDN_DIVAS_DIVACAPI - tristate "DIVA CAPI2.0 interface support" - help - You need this to provide the CAPI interface - for DIVA Server cards. - -config ISDN_DIVAS_USERIDI - tristate "DIVA User-IDI interface support" - help - Enable support for user-mode IDI interface. 
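For orientation while reading the deleted Kconfig entries above and below: tristate options such as the ISDN_DIVAS_* symbols can be built into the kernel (=y) or as loadable modules (=m), and kernel C code conventionally distinguishes the cases with the IS_ENABLED()/IS_BUILTIN()/IS_MODULE() helpers from <linux/kconfig.h>. The following is a minimal illustrative sketch, not part of this patch; CONFIG_EXAMPLE_FEATURE is a made-up symbol standing in for any tristate option.

	#include <linux/kconfig.h>
	#include <linux/printk.h>

	static void example_report_feature(void)
	{
		/* True when the option is =y or =m. */
		if (IS_ENABLED(CONFIG_EXAMPLE_FEATURE))
			pr_info("example feature is available\n");

		/* True only when the option is =y (built in). */
		if (IS_BUILTIN(CONFIG_EXAMPLE_FEATURE))
			pr_info("example feature is built into the kernel\n");

		/* True only when the option is =m (module). */
		if (IS_MODULE(CONFIG_EXAMPLE_FEATURE))
			pr_info("example feature is built as a module\n");
	}

Note that an option like ISDN_DIVAS_MAINT below is additionally restricted with "depends on m", meaning it can only ever be built as a module, never into the kernel image.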
- -config ISDN_DIVAS_MAINT - tristate "DIVA Maint driver support" - depends on m - help - Enable Divas Maintenance driver. - -endif # ISDN_DIVAS - -endif # CAPI_EICON diff --git a/drivers/isdn/hardware/eicon/Makefile b/drivers/isdn/hardware/eicon/Makefile deleted file mode 100644 index a0ab2e2d7df0..000000000000 --- a/drivers/isdn/hardware/eicon/Makefile +++ /dev/null @@ -1,24 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0 -# Makefile for the Eicon DIVA ISDN drivers. - -# Each configuration option enables a list of files. - -obj-$(CONFIG_ISDN_DIVAS) += divadidd.o divas.o -obj-$(CONFIG_ISDN_DIVAS_MAINT) += diva_mnt.o -obj-$(CONFIG_ISDN_DIVAS_USERIDI) += diva_idi.o -obj-$(CONFIG_ISDN_DIVAS_DIVACAPI) += divacapi.o - -# Multipart objects. - -divas-y := divasmain.o divasfunc.o di.o io.o istream.o \ - diva.o divasproc.o diva_dma.o -divas-$(CONFIG_ISDN_DIVAS_BRIPCI) += os_bri.o s_bri.o os_4bri.o s_4bri.o -divas-$(CONFIG_ISDN_DIVAS_PRIPCI) += os_pri.o s_pri.o - -divacapi-y := capimain.o capifunc.o message.o capidtmf.o - -divadidd-y := diva_didd.o diddfunc.o dadapter.o - -diva_mnt-y := divamnt.o mntfunc.o debug.o maintidi.o - -diva_idi-y := divasi.o idifunc.o um_idi.o dqueue.o diff --git a/drivers/isdn/hardware/eicon/adapter.h b/drivers/isdn/hardware/eicon/adapter.h deleted file mode 100644 index f9b24eb8781d..000000000000 --- a/drivers/isdn/hardware/eicon/adapter.h +++ /dev/null @@ -1,18 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: adapter.h,v 1.4 2004/03/21 17:26:01 armin Exp $ */ - -#ifndef __DIVA_USER_MODE_IDI_ADAPTER_H__ -#define __DIVA_USER_MODE_IDI_ADAPTER_H__ - -#define DIVA_UM_IDI_ADAPTER_REMOVED 0x00000001 - -typedef struct _diva_um_idi_adapter { - struct list_head link; - DESCRIPTOR d; - int adapter_nr; - struct list_head entity_q; /* entities linked to this adapter */ - dword status; -} diva_um_idi_adapter_t; - - -#endif diff --git a/drivers/isdn/hardware/eicon/capi20.h b/drivers/isdn/hardware/eicon/capi20.h deleted file mode 100644 index 391e4175b0b5..000000000000 --- a/drivers/isdn/hardware/eicon/capi20.h +++ /dev/null @@ -1,699 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef _INC_CAPI20 -#define _INC_CAPI20 -/* operations on message queues */ -/* the common device type for CAPI20 drivers */ -#define FILE_DEVICE_CAPI20 0x8001 -/* DEVICE_CONTROL codes for user and kernel mode applications */ -#define CAPI20_CTL_REGISTER 0x0801 -#define CAPI20_CTL_RELEASE 0x0802 -#define CAPI20_CTL_GET_MANUFACTURER 0x0805 -#define CAPI20_CTL_GET_VERSION 0x0806 -#define CAPI20_CTL_GET_SERIAL 0x0807 -#define CAPI20_CTL_GET_PROFILE 0x0808 -/* INTERNAL_DEVICE_CONTROL codes for kernel mode applicatios only */ -#define CAPI20_CTL_PUT_MESSAGE 0x0803 -#define CAPI20_CTL_GET_MESSAGE 0x0804 -/* the wrapped codes as required by the system */ -#define CAPI_CTL_CODE(f, m) CTL_CODE(FILE_DEVICE_CAPI20, f, m, FILE_ANY_ACCESS) -#define IOCTL_CAPI_REGISTER CAPI_CTL_CODE(CAPI20_CTL_REGISTER, METHOD_BUFFERED) -#define IOCTL_CAPI_RELEASE CAPI_CTL_CODE(CAPI20_CTL_RELEASE, METHOD_BUFFERED) -#define IOCTL_CAPI_GET_MANUFACTURER CAPI_CTL_CODE(CAPI20_CTL_GET_MANUFACTURER, METHOD_BUFFERED) -#define IOCTL_CAPI_GET_VERSION CAPI_CTL_CODE(CAPI20_CTL_GET_VERSION, METHOD_BUFFERED) -#define IOCTL_CAPI_GET_SERIAL CAPI_CTL_CODE(CAPI20_CTL_GET_SERIAL, METHOD_BUFFERED) -#define IOCTL_CAPI_GET_PROFILE CAPI_CTL_CODE(CAPI20_CTL_GET_PROFILE, METHOD_BUFFERED) -#define IOCTL_CAPI_PUT_MESSAGE CAPI_CTL_CODE(CAPI20_CTL_PUT_MESSAGE, METHOD_BUFFERED) -#define IOCTL_CAPI_GET_MESSAGE CAPI_CTL_CODE(CAPI20_CTL_GET_MESSAGE, METHOD_BUFFERED) -struct divas_capi_register_params { - word MessageBufferSize; - word maxLogicalConnection; - word maxBDataBlocks; - word maxBDataLen; -}; -struct divas_capi_version { - word CapiMajor; - word CapiMinor; - word ManuMajor; - word ManuMinor; -}; -typedef struct api_profile_s { - word Number; - word Channels; - dword Global_Options; - dword B1_Protocols; - dword B2_Protocols; - dword B3_Protocols; -} API_PROFILE; -/* ISDN Common API message types */ -#define _ALERT_R 0x8001 -#define _CONNECT_R 0x8002 -#define _CONNECT_I 0x8202 -#define _CONNECT_ACTIVE_I 0x8203 -#define _DISCONNECT_R 0x8004 -#define _DISCONNECT_I 0x8204 -#define _LISTEN_R 0x8005 -#define _INFO_R 0x8008 -#define _INFO_I 0x8208 -#define _SELECT_B_REQ 0x8041 -#define _FACILITY_R 0x8080 -#define _FACILITY_I 0x8280 -#define _CONNECT_B3_R 0x8082 -#define _CONNECT_B3_I 0x8282 -#define _CONNECT_B3_ACTIVE_I 0x8283 -#define _DISCONNECT_B3_R 0x8084 -#define _DISCONNECT_B3_I 0x8284 -#define _DATA_B3_R 0x8086 -#define _DATA_B3_I 0x8286 -#define _RESET_B3_R 0x8087 -#define _RESET_B3_I 0x8287 -#define _CONNECT_B3_T90_ACTIVE_I 0x8288 -#define _MANUFACTURER_R 0x80ff -#define _MANUFACTURER_I 0x82ff -/* OR this to convert a REQUEST to a CONFIRM */ -#define CONFIRM 0x0100 -/* OR this to convert a INDICATION to a RESPONSE */ -#define RESPONSE 0x0100 -/*------------------------------------------------------------------*/ -/* diehl isdn private MANUFACTURER codes */ -/*------------------------------------------------------------------*/ -#define _DI_MANU_ID 0x44444944 -#define _DI_ASSIGN_PLCI 0x0001 -#define _DI_ADV_CODEC 0x0002 -#define _DI_DSP_CTRL 0x0003 -#define _DI_SIG_CTRL 0x0004 -#define _DI_RXT_CTRL 0x0005 -#define _DI_IDI_CTRL 0x0006 -#define _DI_CFG_CTRL 0x0007 -#define _DI_REMOVE_CODEC 0x0008 -#define _DI_OPTIONS_REQUEST 0x0009 -#define _DI_SSEXT_CTRL 0x000a -#define _DI_NEGOTIATE_B3 0x000b -/*------------------------------------------------------------------*/ -/* parameter structures */ -/*------------------------------------------------------------------*/ -/* ALERT-REQUEST */ -typedef struct { - byte structs[0]; /* 
Additional Info */ -} _ALT_REQP; -/* ALERT-CONFIRM */ -typedef struct { - word Info; -} _ALT_CONP; -/* CONNECT-REQUEST */ -typedef struct { - word CIP_Value; - byte structs[0]; /* Called party number, - Called party subaddress, - Calling party number, - Calling party subaddress, - B_protocol, - BC, - LLC, - HLC, - Additional Info */ -} _CON_REQP; -/* CONNECT-CONFIRM */ -typedef struct { - word Info; -} _CON_CONP; -/* CONNECT-INDICATION */ -typedef struct { - word CIP_Value; - byte structs[0]; /* Called party number, - Called party subaddress, - Calling party number, - Calling party subaddress, - BC, - LLC, - HLC, - Additional Info */ -} _CON_INDP; -/* CONNECT-RESPONSE */ -typedef struct { - word Accept; - byte structs[0]; /* B_protocol, - Connected party number, - Connected party subaddress, - LLC */ -} _CON_RESP; -/* CONNECT-ACTIVE-INDICATION */ -typedef struct { - byte structs[0]; /* Connected party number, - Connected party subaddress, - LLC */ -} _CON_A_INDP; -/* CONNECT-ACTIVE-RESPONSE */ -typedef struct { - byte structs[0]; /* empty */ -} _CON_A_RESP; -/* DISCONNECT-REQUEST */ -typedef struct { - byte structs[0]; /* Additional Info */ -} _DIS_REQP; -/* DISCONNECT-CONFIRM */ -typedef struct { - word Info; -} _DIS_CONP; -/* DISCONNECT-INDICATION */ -typedef struct { - word Info; -} _DIS_INDP; -/* DISCONNECT-RESPONSE */ -typedef struct { - byte structs[0]; /* empty */ -} _DIS_RESP; -/* LISTEN-REQUEST */ -typedef struct { - dword Info_Mask; - dword CIP_Mask; - byte structs[0]; /* Calling party number, - Calling party subaddress */ -} _LIS_REQP; -/* LISTEN-CONFIRM */ -typedef struct { - word Info; -} _LIS_CONP; -/* INFO-REQUEST */ -typedef struct { - byte structs[0]; /* Called party number, - Additional Info */ -} _INF_REQP; -/* INFO-CONFIRM */ -typedef struct { - word Info; -} _INF_CONP; -/* INFO-INDICATION */ -typedef struct { - word Number; - byte structs[0]; /* Info element */ -} _INF_INDP; -/* INFO-RESPONSE */ -typedef struct { - byte structs[0]; /* empty */ -} _INF_RESP; -/* SELECT-B-REQUEST */ -typedef struct { - byte structs[0]; /* B-protocol */ -} _SEL_B_REQP; -/* SELECT-B-CONFIRM */ -typedef struct { - word Info; -} _SEL_B_CONP; -/* FACILITY-REQUEST */ -typedef struct { - word Selector; - byte structs[0]; /* Facility parameters */ -} _FAC_REQP; -/* FACILITY-CONFIRM STRUCT FOR SUPPLEMENT. 
SERVICES */ -typedef struct { - byte struct_length; - word function; - byte length; - word SupplementaryServiceInfo; - dword SupportedServices; -} _FAC_CON_STRUCTS; -/* FACILITY-CONFIRM */ -typedef struct { - word Info; - word Selector; - byte structs[0]; /* Facility parameters */ -} _FAC_CONP; -/* FACILITY-INDICATION */ -typedef struct { - word Selector; - byte structs[0]; /* Facility parameters */ -} _FAC_INDP; -/* FACILITY-RESPONSE */ -typedef struct { - word Selector; - byte structs[0]; /* Facility parameters */ -} _FAC_RESP; -/* CONNECT-B3-REQUEST */ -typedef struct { - byte structs[0]; /* NCPI */ -} _CON_B3_REQP; -/* CONNECT-B3-CONFIRM */ -typedef struct { - word Info; -} _CON_B3_CONP; -/* CONNECT-B3-INDICATION */ -typedef struct { - byte structs[0]; /* NCPI */ -} _CON_B3_INDP; -/* CONNECT-B3-RESPONSE */ -typedef struct { - word Accept; - byte structs[0]; /* NCPI */ -} _CON_B3_RESP; -/* CONNECT-B3-ACTIVE-INDICATION */ -typedef struct { - byte structs[0]; /* NCPI */ -} _CON_B3_A_INDP; -/* CONNECT-B3-ACTIVE-RESPONSE */ -typedef struct { - byte structs[0]; /* empty */ -} _CON_B3_A_RESP; -/* DISCONNECT-B3-REQUEST */ -typedef struct { - byte structs[0]; /* NCPI */ -} _DIS_B3_REQP; -/* DISCONNECT-B3-CONFIRM */ -typedef struct { - word Info; -} _DIS_B3_CONP; -/* DISCONNECT-B3-INDICATION */ -typedef struct { - word Info; - byte structs[0]; /* NCPI */ -} _DIS_B3_INDP; -/* DISCONNECT-B3-RESPONSE */ -typedef struct { - byte structs[0]; /* empty */ -} _DIS_B3_RESP; -/* DATA-B3-REQUEST */ -typedef struct { - dword Data; - word Data_Length; - word Number; - word Flags; -} _DAT_B3_REQP; -/* DATA-B3-REQUEST 64 BIT Systems */ -typedef struct { - dword Data; - word Data_Length; - word Number; - word Flags; - void *pData; -} _DAT_B3_REQ64P; -/* DATA-B3-CONFIRM */ -typedef struct { - word Number; - word Info; -} _DAT_B3_CONP; -/* DATA-B3-INDICATION */ -typedef struct { - dword Data; - word Data_Length; - word Number; - word Flags; -} _DAT_B3_INDP; -/* DATA-B3-INDICATION 64 BIT Systems */ -typedef struct { - dword Data; - word Data_Length; - word Number; - word Flags; - void *pData; -} _DAT_B3_IND64P; -/* DATA-B3-RESPONSE */ -typedef struct { - word Number; -} _DAT_B3_RESP; -/* RESET-B3-REQUEST */ -typedef struct { - byte structs[0]; /* NCPI */ -} _RES_B3_REQP; -/* RESET-B3-CONFIRM */ -typedef struct { - word Info; -} _RES_B3_CONP; -/* RESET-B3-INDICATION */ -typedef struct { - byte structs[0]; /* NCPI */ -} _RES_B3_INDP; -/* RESET-B3-RESPONSE */ -typedef struct { - byte structs[0]; /* empty */ -} _RES_B3_RESP; -/* CONNECT-B3-T90-ACTIVE-INDICATION */ -typedef struct { - byte structs[0]; /* NCPI */ -} _CON_B3_T90_A_INDP; -/* CONNECT-B3-T90-ACTIVE-RESPONSE */ -typedef struct { - word Reject; - byte structs[0]; /* NCPI */ -} _CON_B3_T90_A_RESP; -/*------------------------------------------------------------------*/ -/* message structure */ -/*------------------------------------------------------------------*/ -typedef struct _API_MSG CAPI_MSG; -typedef struct _MSG_HEADER CAPI_MSG_HEADER; -struct _API_MSG { - struct _MSG_HEADER { - word length; - word appl_id; - word command; - word number; - byte controller; - byte plci; - word ncci; - } header; - union { - _ALT_REQP alert_req; - _ALT_CONP alert_con; - _CON_REQP connect_req; - _CON_CONP connect_con; - _CON_INDP connect_ind; - _CON_RESP connect_res; - _CON_A_INDP connect_a_ind; - _CON_A_RESP connect_a_res; - _DIS_REQP disconnect_req; - _DIS_CONP disconnect_con; - _DIS_INDP disconnect_ind; - _DIS_RESP disconnect_res; - _LIS_REQP listen_req; - _LIS_CONP 
listen_con; - _INF_REQP info_req; - _INF_CONP info_con; - _INF_INDP info_ind; - _INF_RESP info_res; - _SEL_B_REQP select_b_req; - _SEL_B_CONP select_b_con; - _FAC_REQP facility_req; - _FAC_CONP facility_con; - _FAC_INDP facility_ind; - _FAC_RESP facility_res; - _CON_B3_REQP connect_b3_req; - _CON_B3_CONP connect_b3_con; - _CON_B3_INDP connect_b3_ind; - _CON_B3_RESP connect_b3_res; - _CON_B3_A_INDP connect_b3_a_ind; - _CON_B3_A_RESP connect_b3_a_res; - _DIS_B3_REQP disconnect_b3_req; - _DIS_B3_CONP disconnect_b3_con; - _DIS_B3_INDP disconnect_b3_ind; - _DIS_B3_RESP disconnect_b3_res; - _DAT_B3_REQP data_b3_req; - _DAT_B3_REQ64P data_b3_req64; - _DAT_B3_CONP data_b3_con; - _DAT_B3_INDP data_b3_ind; - _DAT_B3_IND64P data_b3_ind64; - _DAT_B3_RESP data_b3_res; - _RES_B3_REQP reset_b3_req; - _RES_B3_CONP reset_b3_con; - _RES_B3_INDP reset_b3_ind; - _RES_B3_RESP reset_b3_res; - _CON_B3_T90_A_INDP connect_b3_t90_a_ind; - _CON_B3_T90_A_RESP connect_b3_t90_a_res; - byte b[200]; - } info; -}; -/*------------------------------------------------------------------*/ -/* non-fatal errors */ -/*------------------------------------------------------------------*/ -#define _NCPI_IGNORED 0x0001 -#define _FLAGS_IGNORED 0x0002 -#define _ALERT_IGNORED 0x0003 -/*------------------------------------------------------------------*/ -/* API function error codes */ -/*------------------------------------------------------------------*/ -#define GOOD 0x0000 -#define _TOO_MANY_APPLICATIONS 0x1001 -#define _BLOCK_TOO_SMALL 0x1002 -#define _BUFFER_TOO_BIG 0x1003 -#define _MSG_BUFFER_TOO_SMALL 0x1004 -#define _TOO_MANY_CONNECTIONS 0x1005 -#define _REG_CAPI_BUSY 0x1007 -#define _REG_RESOURCE_ERROR 0x1008 -#define _REG_CAPI_NOT_INSTALLED 0x1009 -#define _WRONG_APPL_ID 0x1101 -#define _BAD_MSG 0x1102 -#define _QUEUE_FULL 0x1103 -#define _GET_NO_MSG 0x1104 -#define _MSG_LOST 0x1105 -#define _WRONG_NOTIFY 0x1106 -#define _CAPI_BUSY 0x1107 -#define _RESOURCE_ERROR 0x1108 -#define _CAPI_NOT_INSTALLED 0x1109 -#define _NO_EXTERNAL_EQUIPMENT 0x110a -#define _ONLY_EXTERNAL_EQUIPMENT 0x110b -/*------------------------------------------------------------------*/ -/* addressing/coding error codes */ -/*------------------------------------------------------------------*/ -#define _WRONG_STATE 0x2001 -#define _WRONG_IDENTIFIER 0x2002 -#define _OUT_OF_PLCI 0x2003 -#define _OUT_OF_NCCI 0x2004 -#define _OUT_OF_LISTEN 0x2005 -#define _OUT_OF_FAX 0x2006 -#define _WRONG_MESSAGE_FORMAT 0x2007 -#define _OUT_OF_INTERCONNECT_RESOURCES 0x2008 -/*------------------------------------------------------------------*/ -/* configuration error codes */ -/*------------------------------------------------------------------*/ -#define _B1_NOT_SUPPORTED 0x3001 -#define _B2_NOT_SUPPORTED 0x3002 -#define _B3_NOT_SUPPORTED 0x3003 -#define _B1_PARM_NOT_SUPPORTED 0x3004 -#define _B2_PARM_NOT_SUPPORTED 0x3005 -#define _B3_PARM_NOT_SUPPORTED 0x3006 -#define _B_STACK_NOT_SUPPORTED 0x3007 -#define _NCPI_NOT_SUPPORTED 0x3008 -#define _CIP_NOT_SUPPORTED 0x3009 -#define _FLAGS_NOT_SUPPORTED 0x300a -#define _FACILITY_NOT_SUPPORTED 0x300b -#define _DATA_LEN_NOT_SUPPORTED 0x300c -#define _RESET_NOT_SUPPORTED 0x300d -#define _SUPPLEMENTARY_SERVICE_NOT_SUPPORTED 0x300e -#define _REQUEST_NOT_ALLOWED_IN_THIS_STATE 0x3010 -#define _FACILITY_SPECIFIC_FUNCTION_NOT_SUPP 0x3011 -/*------------------------------------------------------------------*/ -/* reason codes */ -/*------------------------------------------------------------------*/ -#define _L1_ERROR 0x3301 -#define 
_L2_ERROR 0x3302 -#define _L3_ERROR 0x3303 -#define _OTHER_APPL_CONNECTED 0x3304 -#define _CAPI_GUARD_ERROR 0x3305 -#define _L3_CAUSE 0x3400 -/*------------------------------------------------------------------*/ -/* b3 reason codes */ -/*------------------------------------------------------------------*/ -#define _B_CHANNEL_LOST 0x3301 -#define _B2_ERROR 0x3302 -#define _B3_ERROR 0x3303 -/*------------------------------------------------------------------*/ -/* fax error codes */ -/*------------------------------------------------------------------*/ -#define _FAX_NO_CONNECTION 0x3311 -#define _FAX_TRAINING_ERROR 0x3312 -#define _FAX_REMOTE_REJECT 0x3313 -#define _FAX_REMOTE_ABORT 0x3314 -#define _FAX_PROTOCOL_ERROR 0x3315 -#define _FAX_TX_UNDERRUN 0x3316 -#define _FAX_RX_OVERFLOW 0x3317 -#define _FAX_LOCAL_ABORT 0x3318 -#define _FAX_PARAMETER_ERROR 0x3319 -/*------------------------------------------------------------------*/ -/* line interconnect error codes */ -/*------------------------------------------------------------------*/ -#define _LI_USER_INITIATED 0x0000 -#define _LI_LINE_NO_LONGER_AVAILABLE 0x3805 -#define _LI_INTERCONNECT_NOT_ESTABLISHED 0x3806 -#define _LI_LINES_NOT_COMPATIBLE 0x3807 -#define _LI2_USER_INITIATED 0x0000 -#define _LI2_PLCI_HAS_NO_BCHANNEL 0x3800 -#define _LI2_LINES_NOT_COMPATIBLE 0x3801 -#define _LI2_NOT_IN_SAME_INTERCONNECTION 0x3802 -/*------------------------------------------------------------------*/ -/* global options */ -/*------------------------------------------------------------------*/ -#define GL_INTERNAL_CONTROLLER_SUPPORTED 0x00000001L -#define GL_EXTERNAL_EQUIPMENT_SUPPORTED 0x00000002L -#define GL_HANDSET_SUPPORTED 0x00000004L -#define GL_DTMF_SUPPORTED 0x00000008L -#define GL_SUPPLEMENTARY_SERVICES_SUPPORTED 0x00000010L -#define GL_CHANNEL_ALLOCATION_SUPPORTED 0x00000020L -#define GL_BCHANNEL_OPERATION_SUPPORTED 0x00000040L -#define GL_LINE_INTERCONNECT_SUPPORTED 0x00000080L -#define GL_ECHO_CANCELLER_SUPPORTED 0x00000100L -/*------------------------------------------------------------------*/ -/* protocol selection */ -/*------------------------------------------------------------------*/ -#define B1_HDLC 0 -#define B1_TRANSPARENT 1 -#define B1_V110_ASYNC 2 -#define B1_V110_SYNC 3 -#define B1_T30 4 -#define B1_HDLC_INVERTED 5 -#define B1_TRANSPARENT_R 6 -#define B1_MODEM_ALL_NEGOTIATE 7 -#define B1_MODEM_ASYNC 8 -#define B1_MODEM_SYNC_HDLC 9 -#define B2_X75 0 -#define B2_TRANSPARENT 1 -#define B2_SDLC 2 -#define B2_LAPD 3 -#define B2_T30 4 -#define B2_PPP 5 -#define B2_TRANSPARENT_NO_CRC 6 -#define B2_MODEM_EC_COMPRESSION 7 -#define B2_X75_V42BIS 8 -#define B2_V120_ASYNC 9 -#define B2_V120_ASYNC_V42BIS 10 -#define B2_V120_BIT_TRANSPARENT 11 -#define B2_LAPD_FREE_SAPI_SEL 12 -#define B3_TRANSPARENT 0 -#define B3_T90NL 1 -#define B3_ISO8208 2 -#define B3_X25_DCE 3 -#define B3_T30 4 -#define B3_T30_WITH_EXTENSIONS 5 -#define B3_RESERVED 6 -#define B3_MODEM 7 -/*------------------------------------------------------------------*/ -/* facility definitions */ -/*------------------------------------------------------------------*/ -#define SELECTOR_HANDSET 0 -#define SELECTOR_DTMF 1 -#define SELECTOR_V42BIS 2 -#define SELECTOR_SU_SERV 3 -#define SELECTOR_POWER_MANAGEMENT 4 -#define SELECTOR_LINE_INTERCONNECT 5 -#define SELECTOR_ECHO_CANCELLER 6 -/*------------------------------------------------------------------*/ -/* supplementary services definitions */ -/*------------------------------------------------------------------*/ -#define 
S_GET_SUPPORTED_SERVICES 0x0000 -#define S_LISTEN 0x0001 -#define S_HOLD 0x0002 -#define S_RETRIEVE 0x0003 -#define S_SUSPEND 0x0004 -#define S_RESUME 0x0005 -#define S_ECT 0x0006 -#define S_3PTY_BEGIN 0x0007 -#define S_3PTY_END 0x0008 -#define S_CALL_DEFLECTION 0x000d -#define S_CALL_FORWARDING_START 0x0009 -#define S_CALL_FORWARDING_STOP 0x000a -#define S_INTERROGATE_DIVERSION 0x000b /* or interrogate parameters */ -#define S_INTERROGATE_NUMBERS 0x000c -#define S_CCBS_REQUEST 0x000f -#define S_CCBS_DEACTIVATE 0x0010 -#define S_CCBS_INTERROGATE 0x0011 -#define S_CCBS_CALL 0x0012 -#define S_MWI_ACTIVATE 0x0013 -#define S_MWI_DEACTIVATE 0x0014 -#define S_CONF_BEGIN 0x0017 -#define S_CONF_ADD 0x0018 -#define S_CONF_SPLIT 0x0019 -#define S_CONF_DROP 0x001a -#define S_CONF_ISOLATE 0x001b -#define S_CONF_REATTACH 0x001c -#define S_CCBS_ERASECALLLINKAGEID 0x800d -#define S_CCBS_STOP_ALERTING 0x8012 -#define S_CCBS_INFO_RETAIN 0x8013 -#define S_MWI_INDICATE 0x8014 -#define S_CONF_PARTYDISC 0x8016 -#define S_CONF_NOTIFICATION 0x8017 -/* Service Masks */ -#define MASK_HOLD_RETRIEVE 0x00000001 -#define MASK_TERMINAL_PORTABILITY 0x00000002 -#define MASK_ECT 0x00000004 -#define MASK_3PTY 0x00000008 -#define MASK_CALL_FORWARDING 0x00000010 -#define MASK_CALL_DEFLECTION 0x00000020 -#define MASK_MWI 0x00000100 -#define MASK_CCNR 0x00000200 -#define MASK_CONF 0x00000400 -/*------------------------------------------------------------------*/ -/* dtmf definitions */ -/*------------------------------------------------------------------*/ -#define DTMF_LISTEN_START 1 -#define DTMF_LISTEN_STOP 2 -#define DTMF_DIGITS_SEND 3 -#define DTMF_SUCCESS 0 -#define DTMF_INCORRECT_DIGIT 1 -#define DTMF_UNKNOWN_REQUEST 2 -/*------------------------------------------------------------------*/ -/* line interconnect definitions */ -/*------------------------------------------------------------------*/ -#define LI_GET_SUPPORTED_SERVICES 0 -#define LI_REQ_CONNECT 1 -#define LI_REQ_DISCONNECT 2 -#define LI_IND_CONNECT_ACTIVE 1 -#define LI_IND_DISCONNECT 2 -#define LI_FLAG_CONFERENCE_A_B ((dword) 0x00000001L) -#define LI_FLAG_CONFERENCE_B_A ((dword) 0x00000002L) -#define LI_FLAG_MONITOR_A ((dword) 0x00000004L) -#define LI_FLAG_MONITOR_B ((dword) 0x00000008L) -#define LI_FLAG_ANNOUNCEMENT_A ((dword) 0x00000010L) -#define LI_FLAG_ANNOUNCEMENT_B ((dword) 0x00000020L) -#define LI_FLAG_MIX_A ((dword) 0x00000040L) -#define LI_FLAG_MIX_B ((dword) 0x00000080L) -#define LI_CONFERENCING_SUPPORTED ((dword) 0x00000001L) -#define LI_MONITORING_SUPPORTED ((dword) 0x00000002L) -#define LI_ANNOUNCEMENTS_SUPPORTED ((dword) 0x00000004L) -#define LI_MIXING_SUPPORTED ((dword) 0x00000008L) -#define LI_CROSS_CONTROLLER_SUPPORTED ((dword) 0x00000010L) -#define LI2_GET_SUPPORTED_SERVICES 0 -#define LI2_REQ_CONNECT 1 -#define LI2_REQ_DISCONNECT 2 -#define LI2_IND_CONNECT_ACTIVE 1 -#define LI2_IND_DISCONNECT 2 -#define LI2_FLAG_INTERCONNECT_A_B ((dword) 0x00000001L) -#define LI2_FLAG_INTERCONNECT_B_A ((dword) 0x00000002L) -#define LI2_FLAG_MONITOR_B ((dword) 0x00000004L) -#define LI2_FLAG_MIX_B ((dword) 0x00000008L) -#define LI2_FLAG_MONITOR_X ((dword) 0x00000010L) -#define LI2_FLAG_MIX_X ((dword) 0x00000020L) -#define LI2_FLAG_LOOP_B ((dword) 0x00000040L) -#define LI2_FLAG_LOOP_PC ((dword) 0x00000080L) -#define LI2_FLAG_LOOP_X ((dword) 0x00000100L) -#define LI2_CROSS_CONTROLLER_SUPPORTED ((dword) 0x00000001L) -#define LI2_ASYMMETRIC_SUPPORTED ((dword) 0x00000002L) -#define LI2_MONITORING_SUPPORTED ((dword) 0x00000004L) -#define LI2_MIXING_SUPPORTED 
((dword) 0x00000008L) -#define LI2_REMOTE_MONITORING_SUPPORTED ((dword) 0x00000010L) -#define LI2_REMOTE_MIXING_SUPPORTED ((dword) 0x00000020L) -#define LI2_B_LOOPING_SUPPORTED ((dword) 0x00000040L) -#define LI2_PC_LOOPING_SUPPORTED ((dword) 0x00000080L) -#define LI2_X_LOOPING_SUPPORTED ((dword) 0x00000100L) -/*------------------------------------------------------------------*/ -/* echo canceller definitions */ -/*------------------------------------------------------------------*/ -#define EC_GET_SUPPORTED_SERVICES 0 -#define EC_ENABLE_OPERATION 1 -#define EC_DISABLE_OPERATION 2 -#define EC_ENABLE_NON_LINEAR_PROCESSING 0x0001 -#define EC_DO_NOT_REQUIRE_REVERSALS 0x0002 -#define EC_DETECT_DISABLE_TONE 0x0004 -#define EC_ENABLE_ADAPTIVE_PREDELAY 0x0008 -#define EC_NON_LINEAR_PROCESSING_SUPPORTED 0x0001 -#define EC_BYPASS_ON_ANY_2100HZ_SUPPORTED 0x0002 -#define EC_BYPASS_ON_REV_2100HZ_SUPPORTED 0x0004 -#define EC_ADAPTIVE_PREDELAY_SUPPORTED 0x0008 -#define EC_BYPASS_INDICATION 1 -#define EC_BYPASS_DUE_TO_CONTINUOUS_2100HZ 1 -#define EC_BYPASS_DUE_TO_REVERSED_2100HZ 2 -#define EC_BYPASS_RELEASED 3 -/*------------------------------------------------------------------*/ -/* function prototypes */ -/*------------------------------------------------------------------*/ -/*------------------------------------------------------------------*/ -#endif /* _INC_CAPI20 */ diff --git a/drivers/isdn/hardware/eicon/capidtmf.c b/drivers/isdn/hardware/eicon/capidtmf.c deleted file mode 100644 index e3f778415199..000000000000 --- a/drivers/isdn/hardware/eicon/capidtmf.c +++ /dev/null @@ -1,685 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ - -#include "platform.h" - - - - - - - - - -#include "capidtmf.h" - -/* #define TRACE_ */ - -#define FILE_ "CAPIDTMF.C" - -/*---------------------------------------------------------------------------*/ - - -#define trace(a) - - - -/*---------------------------------------------------------------------------*/ - -static short capidtmf_expand_table_alaw[0x0100] = -{ - -5504, 5504, -344, 344, -22016, 22016, -1376, 1376, - -2752, 2752, -88, 88, -11008, 11008, -688, 688, - -7552, 7552, -472, 472, -30208, 30208, -1888, 1888, - -3776, 3776, -216, 216, -15104, 15104, -944, 944, - -4480, 4480, -280, 280, -17920, 17920, -1120, 1120, - -2240, 2240, -24, 24, -8960, 8960, -560, 560, - -6528, 6528, -408, 408, -26112, 26112, -1632, 1632, - -3264, 3264, -152, 152, -13056, 13056, -816, 816, - -6016, 6016, -376, 376, -24064, 24064, -1504, 1504, - -3008, 3008, -120, 120, -12032, 12032, -752, 752, - -8064, 8064, -504, 504, -32256, 32256, -2016, 2016, - -4032, 4032, -248, 248, -16128, 16128, -1008, 1008, - -4992, 4992, -312, 312, -19968, 19968, -1248, 1248, - -2496, 2496, -56, 56, -9984, 9984, -624, 624, - -7040, 7040, -440, 440, -28160, 28160, -1760, 1760, - -3520, 3520, -184, 184, -14080, 14080, -880, 880, - -5248, 5248, -328, 328, -20992, 20992, -1312, 1312, - -2624, 2624, -72, 72, -10496, 10496, -656, 656, - -7296, 7296, -456, 456, -29184, 29184, -1824, 1824, - -3648, 3648, -200, 200, -14592, 14592, -912, 912, - -4224, 4224, -264, 264, -16896, 16896, -1056, 1056, - -2112, 2112, -8, 8, -8448, 8448, -528, 528, - -6272, 6272, -392, 392, -25088, 25088, -1568, 1568, - -3136, 3136, -136, 136, -12544, 12544, -784, 784, - -5760, 5760, -360, 360, -23040, 23040, -1440, 1440, - -2880, 2880, -104, 104, -11520, 11520, -720, 720, - -7808, 7808, -488, 488, -31232, 31232, -1952, 1952, - -3904, 3904, -232, 232, -15616, 15616, -976, 976, - -4736, 4736, -296, 296, -18944, 18944, -1184, 1184, - -2368, 2368, -40, 40, -9472, 9472, -592, 592, - -6784, 6784, -424, 424, -27136, 27136, -1696, 1696, - -3392, 3392, -168, 168, -13568, 13568, -848, 848 -}; - -static short capidtmf_expand_table_ulaw[0x0100] = -{ - -32124, 32124, -1884, 1884, -7932, 7932, -372, 372, - -15996, 15996, -876, 876, -3900, 3900, -120, 120, - -23932, 23932, -1372, 1372, -5884, 5884, -244, 244, - -11900, 11900, -620, 620, -2876, 2876, -56, 56, - -28028, 28028, -1628, 1628, -6908, 6908, -308, 308, - -13948, 13948, -748, 748, -3388, 3388, -88, 88, - -19836, 19836, -1116, 1116, -4860, 4860, -180, 180, - -9852, 9852, -492, 492, -2364, 2364, -24, 24, - -30076, 30076, -1756, 1756, -7420, 7420, -340, 340, - -14972, 14972, -812, 812, -3644, 3644, -104, 104, - -21884, 21884, -1244, 1244, -5372, 5372, -212, 212, - -10876, 10876, -556, 556, -2620, 2620, -40, 40, - -25980, 25980, -1500, 1500, -6396, 6396, -276, 276, - -12924, 12924, -684, 684, -3132, 3132, -72, 72, - -17788, 17788, -988, 988, -4348, 4348, -148, 148, - -8828, 8828, -428, 428, -2108, 2108, -8, 8, - -31100, 31100, -1820, 1820, -7676, 7676, -356, 356, - -15484, 15484, -844, 844, -3772, 3772, -112, 112, - -22908, 22908, -1308, 1308, -5628, 5628, -228, 228, - -11388, 11388, -588, 588, -2748, 2748, -48, 48, - -27004, 27004, -1564, 1564, -6652, 6652, -292, 292, - -13436, 13436, -716, 716, -3260, 3260, -80, 80, - -18812, 18812, -1052, 1052, -4604, 4604, -164, 164, - -9340, 9340, -460, 460, -2236, 2236, -16, 16, - -29052, 29052, -1692, 1692, -7164, 7164, -324, 324, - -14460, 14460, -780, 780, -3516, 3516, -96, 96, - -20860, 20860, -1180, 1180, -5116, 5116, -196, 196, - -10364, 10364, -524, 524, -2492, 2492, 
-32, 32, - -24956, 24956, -1436, 1436, -6140, 6140, -260, 260, - -12412, 12412, -652, 652, -3004, 3004, -64, 64, - -16764, 16764, -924, 924, -4092, 4092, -132, 132, - -8316, 8316, -396, 396, -1980, 1980, 0, 0 -}; - - -/*---------------------------------------------------------------------------*/ - -static short capidtmf_recv_window_function[CAPIDTMF_RECV_ACCUMULATE_CYCLES] = -{ - -500L, -999L, -1499L, -1998L, -2496L, -2994L, -3491L, -3988L, - -4483L, -4978L, -5471L, -5963L, -6454L, -6943L, -7431L, -7917L, - -8401L, -8883L, -9363L, -9840L, -10316L, -10789L, -11259L, -11727L, - -12193L, -12655L, -13115L, -13571L, -14024L, -14474L, -14921L, -15364L, - -15804L, -16240L, -16672L, -17100L, -17524L, -17944L, -18360L, -18772L, - -19180L, -19583L, -19981L, -20375L, -20764L, -21148L, -21527L, -21901L, - -22270L, -22634L, -22993L, -23346L, -23694L, -24037L, -24374L, -24705L, - -25030L, -25350L, -25664L, -25971L, -26273L, -26568L, -26858L, -27141L, - -27418L, -27688L, -27952L, -28210L, -28461L, -28705L, -28943L, -29174L, - -29398L, -29615L, -29826L, -30029L, -30226L, -30415L, -30598L, -30773L, - -30941L, -31102L, -31256L, -31402L, -31541L, -31673L, -31797L, -31914L, - -32024L, -32126L, -32221L, -32308L, -32388L, -32460L, -32524L, -32581L, - -32631L, -32673L, -32707L, -32734L, -32753L, -32764L, -32768L, -32764L, - -32753L, -32734L, -32707L, -32673L, -32631L, -32581L, -32524L, -32460L, - -32388L, -32308L, -32221L, -32126L, -32024L, -31914L, -31797L, -31673L, - -31541L, -31402L, -31256L, -31102L, -30941L, -30773L, -30598L, -30415L, - -30226L, -30029L, -29826L, -29615L, -29398L, -29174L, -28943L, -28705L, - -28461L, -28210L, -27952L, -27688L, -27418L, -27141L, -26858L, -26568L, - -26273L, -25971L, -25664L, -25350L, -25030L, -24705L, -24374L, -24037L, - -23694L, -23346L, -22993L, -22634L, -22270L, -21901L, -21527L, -21148L, - -20764L, -20375L, -19981L, -19583L, -19180L, -18772L, -18360L, -17944L, - -17524L, -17100L, -16672L, -16240L, -15804L, -15364L, -14921L, -14474L, - -14024L, -13571L, -13115L, -12655L, -12193L, -11727L, -11259L, -10789L, - -10316L, -9840L, -9363L, -8883L, -8401L, -7917L, -7431L, -6943L, - -6454L, -5963L, -5471L, -4978L, -4483L, -3988L, -3491L, -2994L, - -2496L, -1998L, -1499L, -999L, -500L, -}; - -static byte capidtmf_leading_zeroes_table[0x100] = -{ - 8, 7, 6, 6, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, - 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, - 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, - 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 -}; - -#define capidtmf_byte_leading_zeroes(b) (capidtmf_leading_zeroes_table[(BYTE)(b)]) -#define capidtmf_word_leading_zeroes(w) (((w) & 0xff00) ? capidtmf_leading_zeroes_table[(w) >> 8] : 8 + capidtmf_leading_zeroes_table[(w)]) -#define capidtmf_dword_leading_zeroes(d) (((d) & 0xffff0000L) ? (((d) & 0xff000000L) ? capidtmf_leading_zeroes_table[(d) >> 24] : 8 + capidtmf_leading_zeroes_table[(d) >> 16]) : (((d) & 0xff00) ? 
16 + capidtmf_leading_zeroes_table[(d) >> 8] : 24 + capidtmf_leading_zeroes_table[(d)])) - - -/*---------------------------------------------------------------------------*/ - - -static void capidtmf_goertzel_loop(long *buffer, long *coeffs, short *sample, long count) -{ - int i, j; - long c, d, q0, q1, q2; - - for (i = 0; i < CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT - 1; i++) - { - q1 = buffer[i]; - q2 = buffer[i + CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT]; - d = coeffs[i] >> 1; - c = d << 1; - if (c >= 0) - { - for (j = 0; j < count; j++) - { - q0 = sample[j] - q2 + (c * (q1 >> 16)) + (((dword)(((dword) d) * ((dword)(q1 & 0xffff)))) >> 15); - q2 = q1; - q1 = q0; - } - } - else - { - c = -c; - d = -d; - for (j = 0; j < count; j++) - { - q0 = sample[j] - q2 - ((c * (q1 >> 16)) + (((dword)(((dword) d) * ((dword)(q1 & 0xffff)))) >> 15)); - q2 = q1; - q1 = q0; - } - } - buffer[i] = q1; - buffer[i + CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT] = q2; - } - q1 = buffer[i]; - q2 = buffer[i + CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT]; - c = (coeffs[i] >> 1) << 1; - if (c >= 0) - { - for (j = 0; j < count; j++) - { - q0 = sample[j] - q2 + (c * (q1 >> 16)) + (((dword)(((dword)(c >> 1)) * ((dword)(q1 & 0xffff)))) >> 15); - q2 = q1; - q1 = q0; - c -= CAPIDTMF_RECV_FUNDAMENTAL_DECREMENT; - } - } - else - { - c = -c; - for (j = 0; j < count; j++) - { - q0 = sample[j] - q2 - ((c * (q1 >> 16)) + (((dword)(((dword)(c >> 1)) * ((dword)(q1 & 0xffff)))) >> 15)); - q2 = q1; - q1 = q0; - c += CAPIDTMF_RECV_FUNDAMENTAL_DECREMENT; - } - } - coeffs[i] = c; - buffer[i] = q1; - buffer[i + CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT] = q2; -} - - -static void capidtmf_goertzel_result(long *buffer, long *coeffs) -{ - int i; - long d, e, q1, q2, lo, mid, hi; - dword k; - - for (i = 0; i < CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT; i++) - { - q1 = buffer[i]; - q2 = buffer[i + CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT]; - d = coeffs[i] >> 1; - if (d >= 0) - d = ((d << 1) * (-q1 >> 16)) + (((dword)(((dword) d) * ((dword)(-q1 & 0xffff)))) >> 15); - else - d = ((-d << 1) * (-q1 >> 16)) + (((dword)(((dword) -d) * ((dword)(-q1 & 0xffff)))) >> 15); - e = (q2 >= 0) ? q2 : -q2; - if (d >= 0) - { - k = ((dword)(d & 0xffff)) * ((dword)(e & 0xffff)); - lo = k & 0xffff; - mid = k >> 16; - k = ((dword)(d >> 16)) * ((dword)(e & 0xffff)); - mid += k & 0xffff; - hi = k >> 16; - k = ((dword)(d & 0xffff)) * ((dword)(e >> 16)); - mid += k & 0xffff; - hi += k >> 16; - hi += ((dword)(d >> 16)) * ((dword)(e >> 16)); - } - else - { - d = -d; - k = ((dword)(d & 0xffff)) * ((dword)(e & 0xffff)); - lo = -((long)(k & 0xffff)); - mid = -((long)(k >> 16)); - k = ((dword)(d >> 16)) * ((dword)(e & 0xffff)); - mid -= k & 0xffff; - hi = -((long)(k >> 16)); - k = ((dword)(d & 0xffff)) * ((dword)(e >> 16)); - mid -= k & 0xffff; - hi -= k >> 16; - hi -= ((dword)(d >> 16)) * ((dword)(e >> 16)); - } - if (q2 < 0) - { - lo = -lo; - mid = -mid; - hi = -hi; - } - d = (q1 >= 0) ? q1 : -q1; - k = ((dword)(d & 0xffff)) * ((dword)(d & 0xffff)); - lo += k & 0xffff; - mid += k >> 16; - k = ((dword)(d >> 16)) * ((dword)(d & 0xffff)); - mid += (k & 0xffff) << 1; - hi += (k >> 16) << 1; - hi += ((dword)(d >> 16)) * ((dword)(d >> 16)); - d = (q2 >= 0) ? 
q2 : -q2; - k = ((dword)(d & 0xffff)) * ((dword)(d & 0xffff)); - lo += k & 0xffff; - mid += k >> 16; - k = ((dword)(d >> 16)) * ((dword)(d & 0xffff)); - mid += (k & 0xffff) << 1; - hi += (k >> 16) << 1; - hi += ((dword)(d >> 16)) * ((dword)(d >> 16)); - mid += lo >> 16; - hi += mid >> 16; - buffer[i] = (lo & 0xffff) | (mid << 16); - buffer[i + CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT] = hi; - } -} - - -/*---------------------------------------------------------------------------*/ - -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_697 0 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_770 1 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_852 2 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_941 3 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1209 4 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1336 5 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1477 6 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1633 7 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_635 8 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1010 9 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1140 10 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1272 11 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1405 12 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1555 13 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1715 14 -#define CAPIDTMF_RECV_GUARD_SNR_INDEX_1875 15 - -#define CAPIDTMF_RECV_GUARD_SNR_DONTCARE 0xc000 -#define CAPIDTMF_RECV_NO_DIGIT 0xff -#define CAPIDTMF_RECV_TIME_GRANULARITY (CAPIDTMF_RECV_ACCUMULATE_CYCLES + 1) - -#define CAPIDTMF_RECV_INDICATION_DIGIT 0x0001 - -static long capidtmf_recv_goertzel_coef_table[CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT] = -{ - 0xda97L * 2, /* 697 Hz (Low group 697 Hz) */ - 0xd299L * 2, /* 770 Hz (Low group 770 Hz) */ - 0xc8cbL * 2, /* 852 Hz (Low group 852 Hz) */ - 0xbd36L * 2, /* 941 Hz (Low group 941 Hz) */ - 0x9501L * 2, /* 1209 Hz (High group 1209 Hz) */ - 0x7f89L * 2, /* 1336 Hz (High group 1336 Hz) */ - 0x6639L * 2, /* 1477 Hz (High group 1477 Hz) */ - 0x48c6L * 2, /* 1633 Hz (High group 1633 Hz) */ - 0xe14cL * 2, /* 630 Hz (Lower guard of low group 631 Hz) */ - 0xb2e0L * 2, /* 1015 Hz (Upper guard of low group 1039 Hz) */ - 0xa1a0L * 2, /* 1130 Hz (Lower guard of high group 1140 Hz) */ - 0x8a87L * 2, /* 1272 Hz (Guard between 1209 Hz and 1336 Hz: 1271 Hz) */ - 0x7353L * 2, /* 1405 Hz (2nd harmonics of 697 Hz and guard between 1336 Hz and 1477 Hz: 1405 Hz) */ - 0x583bL * 2, /* 1552 Hz (2nd harmonics of 770 Hz and guard between 1477 Hz and 1633 Hz: 1553 Hz) */ - 0x37d8L * 2, /* 1720 Hz (2nd harmonics of 852 Hz and upper guard of high group: 1715 Hz) */ - 0x0000L * 2 /* 100-630 Hz (fundamentals) */ -}; - - -static word capidtmf_recv_guard_snr_low_table[CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT] = -{ - 14, /* Low group peak versus 697 Hz */ - 14, /* Low group peak versus 770 Hz */ - 16, /* Low group peak versus 852 Hz */ - 16, /* Low group peak versus 941 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* Low group peak versus 1209 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* Low group peak versus 1336 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* Low group peak versus 1477 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* Low group peak versus 1633 Hz */ - 14, /* Low group peak versus 635 Hz */ - 16, /* Low group peak versus 1010 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* Low group peak versus 1140 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* Low group peak versus 1272 Hz */ - DSPDTMF_RX_HARMONICS_SEL_DEFAULT - 8, /* Low group peak versus 1405 Hz */ - DSPDTMF_RX_HARMONICS_SEL_DEFAULT - 4, /* Low group peak versus 1555 Hz */ - DSPDTMF_RX_HARMONICS_SEL_DEFAULT - 4, /* Low group peak versus 1715 Hz */ - 12 /* Low group peak 
versus 100-630 Hz */ -}; - - -static word capidtmf_recv_guard_snr_high_table[CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT] = -{ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* High group peak versus 697 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* High group peak versus 770 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* High group peak versus 852 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* High group peak versus 941 Hz */ - 20, /* High group peak versus 1209 Hz */ - 20, /* High group peak versus 1336 Hz */ - 20, /* High group peak versus 1477 Hz */ - 20, /* High group peak versus 1633 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* High group peak versus 635 Hz */ - CAPIDTMF_RECV_GUARD_SNR_DONTCARE, /* High group peak versus 1010 Hz */ - 16, /* High group peak versus 1140 Hz */ - 4, /* High group peak versus 1272 Hz */ - 6, /* High group peak versus 1405 Hz */ - 8, /* High group peak versus 1555 Hz */ - 16, /* High group peak versus 1715 Hz */ - 12 /* High group peak versus 100-630 Hz */ -}; - - -/*---------------------------------------------------------------------------*/ - -static void capidtmf_recv_init(t_capidtmf_state *p_state) -{ - p_state->recv.min_gap_duration = 1; - p_state->recv.min_digit_duration = 1; - - p_state->recv.cycle_counter = 0; - p_state->recv.current_digit_on_time = 0; - p_state->recv.current_digit_off_time = 0; - p_state->recv.current_digit_value = CAPIDTMF_RECV_NO_DIGIT; - - p_state->recv.digit_write_pos = 0; - p_state->recv.digit_read_pos = 0; - p_state->recv.indication_state = 0; - p_state->recv.indication_state_ack = 0; - p_state->recv.state = CAPIDTMF_RECV_STATE_IDLE; -} - - -void capidtmf_recv_enable(t_capidtmf_state *p_state, word min_digit_duration, word min_gap_duration) -{ - p_state->recv.indication_state_ack &= CAPIDTMF_RECV_INDICATION_DIGIT; - p_state->recv.min_digit_duration = (word)(((((dword) min_digit_duration) * 8) + - ((dword)(CAPIDTMF_RECV_TIME_GRANULARITY / 2))) / ((dword) CAPIDTMF_RECV_TIME_GRANULARITY)); - if (p_state->recv.min_digit_duration <= 1) - p_state->recv.min_digit_duration = 1; - else - (p_state->recv.min_digit_duration)--; - p_state->recv.min_gap_duration = - (word)((((dword) min_gap_duration) * 8) / ((dword) CAPIDTMF_RECV_TIME_GRANULARITY)); - if (p_state->recv.min_gap_duration <= 1) - p_state->recv.min_gap_duration = 1; - else - (p_state->recv.min_gap_duration)--; - p_state->recv.state |= CAPIDTMF_RECV_STATE_DTMF_ACTIVE; -} - - -void capidtmf_recv_disable(t_capidtmf_state *p_state) -{ - p_state->recv.state &= ~CAPIDTMF_RECV_STATE_DTMF_ACTIVE; - if (p_state->recv.state == CAPIDTMF_RECV_STATE_IDLE) - capidtmf_recv_init(p_state); - else - { - p_state->recv.cycle_counter = 0; - p_state->recv.current_digit_on_time = 0; - p_state->recv.current_digit_off_time = 0; - p_state->recv.current_digit_value = CAPIDTMF_RECV_NO_DIGIT; - } -} - - -word capidtmf_recv_indication(t_capidtmf_state *p_state, byte *buffer) -{ - word i, j, k, flags; - - flags = p_state->recv.indication_state ^ p_state->recv.indication_state_ack; - p_state->recv.indication_state_ack ^= flags & CAPIDTMF_RECV_INDICATION_DIGIT; - if (p_state->recv.digit_write_pos != p_state->recv.digit_read_pos) - { - i = 0; - k = p_state->recv.digit_write_pos; - j = p_state->recv.digit_read_pos; - do - { - buffer[i++] = p_state->recv.digit_buffer[j]; - j = (j == CAPIDTMF_RECV_DIGIT_BUFFER_SIZE - 1) ? 
0 : j + 1; - } while (j != k); - p_state->recv.digit_read_pos = k; - return (i); - } - p_state->recv.indication_state_ack ^= flags; - return (0); -} - - -#define CAPIDTMF_RECV_WINDOWED_SAMPLES 32 - -void capidtmf_recv_block(t_capidtmf_state *p_state, byte *buffer, word length) -{ - byte result_digit; - word sample_number, cycle_counter, n, i; - word low_peak, high_peak; - dword lo, hi; - byte *p; - short *q; - byte goertzel_result_buffer[CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT]; - short windowed_sample_buffer[CAPIDTMF_RECV_WINDOWED_SAMPLES]; - - - if (p_state->recv.state & CAPIDTMF_RECV_STATE_DTMF_ACTIVE) - { - cycle_counter = p_state->recv.cycle_counter; - sample_number = 0; - while (sample_number < length) - { - if (cycle_counter < CAPIDTMF_RECV_ACCUMULATE_CYCLES) - { - if (cycle_counter == 0) - { - for (i = 0; i < CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT; i++) - { - p_state->recv.goertzel_buffer[0][i] = 0; - p_state->recv.goertzel_buffer[1][i] = 0; - } - } - n = CAPIDTMF_RECV_ACCUMULATE_CYCLES - cycle_counter; - if (n > length - sample_number) - n = length - sample_number; - if (n > CAPIDTMF_RECV_WINDOWED_SAMPLES) - n = CAPIDTMF_RECV_WINDOWED_SAMPLES; - p = buffer + sample_number; - q = capidtmf_recv_window_function + cycle_counter; - if (p_state->ulaw) - { - for (i = 0; i < n; i++) - { - windowed_sample_buffer[i] = - (short)((capidtmf_expand_table_ulaw[p[i]] * ((long)(q[i]))) >> 15); - } - } - else - { - for (i = 0; i < n; i++) - { - windowed_sample_buffer[i] = - (short)((capidtmf_expand_table_alaw[p[i]] * ((long)(q[i]))) >> 15); - } - } - capidtmf_recv_goertzel_coef_table[CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT - 1] = CAPIDTMF_RECV_FUNDAMENTAL_OFFSET; - capidtmf_goertzel_loop(p_state->recv.goertzel_buffer[0], - capidtmf_recv_goertzel_coef_table, windowed_sample_buffer, n); - cycle_counter += n; - sample_number += n; - } - else - { - capidtmf_goertzel_result(p_state->recv.goertzel_buffer[0], - capidtmf_recv_goertzel_coef_table); - for (i = 0; i < CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT; i++) - { - lo = (dword)(p_state->recv.goertzel_buffer[0][i]); - hi = (dword)(p_state->recv.goertzel_buffer[1][i]); - if (hi != 0) - { - n = capidtmf_dword_leading_zeroes(hi); - hi = (hi << n) | (lo >> (32 - n)); - } - else - { - n = capidtmf_dword_leading_zeroes(lo); - hi = lo << n; - n += 32; - } - n = 195 - 3 * n; - if (hi >= 0xcb300000L) - n += 2; - else if (hi >= 0xa1450000L) - n++; - goertzel_result_buffer[i] = (byte) n; - } - low_peak = DSPDTMF_RX_SENSITIVITY_LOW_DEFAULT; - result_digit = CAPIDTMF_RECV_NO_DIGIT; - for (i = 0; i < CAPIDTMF_LOW_GROUP_FREQUENCIES; i++) - { - if (goertzel_result_buffer[i] > low_peak) - { - low_peak = goertzel_result_buffer[i]; - result_digit = (byte) i; - } - } - high_peak = DSPDTMF_RX_SENSITIVITY_HIGH_DEFAULT; - n = CAPIDTMF_RECV_NO_DIGIT; - for (i = CAPIDTMF_LOW_GROUP_FREQUENCIES; i < CAPIDTMF_RECV_BASE_FREQUENCY_COUNT; i++) - { - if (goertzel_result_buffer[i] > high_peak) - { - high_peak = goertzel_result_buffer[i]; - n = (i - CAPIDTMF_LOW_GROUP_FREQUENCIES) << 2; - } - } - result_digit |= (byte) n; - if (low_peak + DSPDTMF_RX_HIGH_EXCEEDING_LOW_DEFAULT < high_peak) - result_digit = CAPIDTMF_RECV_NO_DIGIT; - if (high_peak + DSPDTMF_RX_LOW_EXCEEDING_HIGH_DEFAULT < low_peak) - result_digit = CAPIDTMF_RECV_NO_DIGIT; - n = 0; - for (i = 0; i < CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT; i++) - { - if ((((short)(low_peak - goertzel_result_buffer[i] - capidtmf_recv_guard_snr_low_table[i])) < 0) - || (((short)(high_peak - goertzel_result_buffer[i] - capidtmf_recv_guard_snr_high_table[i])) < 0)) 
- { - n++; - } - } - if (n != 2) - result_digit = CAPIDTMF_RECV_NO_DIGIT; - - if (result_digit == CAPIDTMF_RECV_NO_DIGIT) - { - if (p_state->recv.current_digit_on_time != 0) - { - if (++(p_state->recv.current_digit_off_time) >= p_state->recv.min_gap_duration) - { - p_state->recv.current_digit_on_time = 0; - p_state->recv.current_digit_off_time = 0; - } - } - else - { - if (p_state->recv.current_digit_off_time != 0) - (p_state->recv.current_digit_off_time)--; - } - } - else - { - if ((p_state->recv.current_digit_on_time == 0) - && (p_state->recv.current_digit_off_time != 0)) - { - (p_state->recv.current_digit_off_time)--; - } - else - { - n = p_state->recv.current_digit_off_time; - if ((p_state->recv.current_digit_on_time != 0) - && (result_digit != p_state->recv.current_digit_value)) - { - p_state->recv.current_digit_on_time = 0; - n = 0; - } - p_state->recv.current_digit_value = result_digit; - p_state->recv.current_digit_off_time = 0; - if (p_state->recv.current_digit_on_time != 0xffff) - { - p_state->recv.current_digit_on_time += n + 1; - if (p_state->recv.current_digit_on_time >= p_state->recv.min_digit_duration) - { - p_state->recv.current_digit_on_time = 0xffff; - i = (p_state->recv.digit_write_pos == CAPIDTMF_RECV_DIGIT_BUFFER_SIZE - 1) ? - 0 : p_state->recv.digit_write_pos + 1; - if (i == p_state->recv.digit_read_pos) - { - trace(dprintf("%s,%d: Receive digit overrun", - (char *)(FILE_), __LINE__)); - } - else - { - p_state->recv.digit_buffer[p_state->recv.digit_write_pos] = result_digit; - p_state->recv.digit_write_pos = i; - p_state->recv.indication_state = - (p_state->recv.indication_state & ~CAPIDTMF_RECV_INDICATION_DIGIT) | - (~p_state->recv.indication_state_ack & CAPIDTMF_RECV_INDICATION_DIGIT); - } - } - } - } - } - cycle_counter = 0; - sample_number++; - } - } - p_state->recv.cycle_counter = cycle_counter; - } -} - - -void capidtmf_init(t_capidtmf_state *p_state, byte ulaw) -{ - p_state->ulaw = ulaw; - capidtmf_recv_init(p_state); -} - - -/*---------------------------------------------------------------------------*/ diff --git a/drivers/isdn/hardware/eicon/capidtmf.h b/drivers/isdn/hardware/eicon/capidtmf.h deleted file mode 100644 index 0a9cf59bb224..000000000000 --- a/drivers/isdn/hardware/eicon/capidtmf.h +++ /dev/null @@ -1,79 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef CAPIDTMF_H_ -#define CAPIDTMF_H_ -/*---------------------------------------------------------------------------*/ -/*---------------------------------------------------------------------------*/ -#define CAPIDTMF_TONE_GROUP_COUNT 2 -#define CAPIDTMF_LOW_GROUP_FREQUENCIES 4 -#define CAPIDTMF_HIGH_GROUP_FREQUENCIES 4 -#define DSPDTMF_RX_SENSITIVITY_LOW_DEFAULT 50 /* -52 dBm */ -#define DSPDTMF_RX_SENSITIVITY_HIGH_DEFAULT 50 /* -52 dBm */ -#define DSPDTMF_RX_HIGH_EXCEEDING_LOW_DEFAULT 10 /* dB */ -#define DSPDTMF_RX_LOW_EXCEEDING_HIGH_DEFAULT 10 /* dB */ -#define DSPDTMF_RX_HARMONICS_SEL_DEFAULT 12 /* dB */ -#define CAPIDTMF_RECV_BASE_FREQUENCY_COUNT (CAPIDTMF_LOW_GROUP_FREQUENCIES + CAPIDTMF_HIGH_GROUP_FREQUENCIES) -#define CAPIDTMF_RECV_GUARD_FREQUENCY_COUNT 8 -#define CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT (CAPIDTMF_RECV_BASE_FREQUENCY_COUNT + CAPIDTMF_RECV_GUARD_FREQUENCY_COUNT) -#define CAPIDTMF_RECV_POSITIVE_COEFF_COUNT 16 -#define CAPIDTMF_RECV_NEGATIVE_COEFF_COUNT (CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT - CAPIDTMF_RECV_POSITIVE_COEFF_COUNT) -#define CAPIDTMF_RECV_ACCUMULATE_CYCLES 205 -#define CAPIDTMF_RECV_FUNDAMENTAL_OFFSET (0xff35L * 2) -#define CAPIDTMF_RECV_FUNDAMENTAL_DECREMENT (0x0028L * 2) -#define CAPIDTMF_RECV_DIGIT_BUFFER_SIZE 32 -#define CAPIDTMF_RECV_STATE_IDLE 0x00 -#define CAPIDTMF_RECV_STATE_DTMF_ACTIVE 0x01 -typedef struct tag_capidtmf_recv_state -{ - byte digit_buffer[CAPIDTMF_RECV_DIGIT_BUFFER_SIZE]; - word digit_write_pos; - word digit_read_pos; - word indication_state; - word indication_state_ack; - long goertzel_buffer[2][CAPIDTMF_RECV_TOTAL_FREQUENCY_COUNT]; - word min_gap_duration; - word min_digit_duration; - word cycle_counter; - word current_digit_on_time; - word current_digit_off_time; - byte current_digit_value; - byte state; -} t_capidtmf_recv_state; -typedef struct tag_capidtmf_state -{ - byte ulaw; - t_capidtmf_recv_state recv; -} t_capidtmf_state; -word capidtmf_recv_indication(t_capidtmf_state *p_state, byte *buffer); -void capidtmf_recv_block(t_capidtmf_state *p_state, byte *buffer, word length); -void capidtmf_init(t_capidtmf_state *p_state, byte ulaw); -void capidtmf_recv_enable(t_capidtmf_state *p_state, word min_digit_duration, word min_gap_duration); -void capidtmf_recv_disable(t_capidtmf_state *p_state); -#define capidtmf_indication(p_state, buffer) (((p_state)->recv.indication_state != (p_state)->recv.indication_state_ack) ? capidtmf_recv_indication(p_state, buffer) : 0) -#define capidtmf_recv_process_block(p_state, buffer, length) { if ((p_state)->recv.state != CAPIDTMF_RECV_STATE_IDLE) capidtmf_recv_block(p_state, buffer, length); } -/*---------------------------------------------------------------------------*/ -/*---------------------------------------------------------------------------*/ -#endif diff --git a/drivers/isdn/hardware/eicon/capifunc.c b/drivers/isdn/hardware/eicon/capifunc.c deleted file mode 100644 index 7a0bdbdd87ea..000000000000 --- a/drivers/isdn/hardware/eicon/capifunc.c +++ /dev/null @@ -1,1219 +0,0 @@ -/* $Id: capifunc.c,v 1.61.4.7 2005/02/11 19:40:25 armin Exp $ - * - * ISDN interface module for Eicon active cards DIVA. - * CAPI Interface common functions - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
- * - */ - -#include "platform.h" -#include "os_capi.h" -#include "di_defs.h" -#include "capi20.h" -#include "divacapi.h" -#include "divasync.h" -#include "capifunc.h" - -#define DBG_MINIMUM (DL_LOG + DL_FTL + DL_ERR) -#define DBG_DEFAULT (DBG_MINIMUM + DL_XLOG + DL_REG) - -DIVA_CAPI_ADAPTER *adapter = (DIVA_CAPI_ADAPTER *) NULL; -APPL *application = (APPL *) NULL; -byte max_appl = MAX_APPL; -byte max_adapter = 0; -static CAPI_MSG *mapped_msg = (CAPI_MSG *) NULL; - -byte UnMapController(byte); -char DRIVERRELEASE_CAPI[32]; - -extern void AutomaticLaw(DIVA_CAPI_ADAPTER *); -extern void callback(ENTITY *); -extern word api_remove_start(void); -extern word CapiRelease(word); -extern word CapiRegister(word); -extern word api_put(APPL *, CAPI_MSG *); - -static diva_os_spin_lock_t api_lock; - -static LIST_HEAD(cards); - -static dword notify_handle; -static void DIRequest(ENTITY *e); -static DESCRIPTOR MAdapter; -static DESCRIPTOR DAdapter; -static byte ControllerMap[MAX_DESCRIPTORS + 1]; - - -static void diva_register_appl(struct capi_ctr *, __u16, - capi_register_params *); -static void diva_release_appl(struct capi_ctr *, __u16); -static char *diva_procinfo(struct capi_ctr *); -static u16 diva_send_message(struct capi_ctr *, - diva_os_message_buffer_s *); -extern void diva_os_set_controller_struct(struct capi_ctr *); - -extern void DIVA_DIDD_Read(DESCRIPTOR *, int); - -/* - * debug - */ -static void no_printf(unsigned char *, ...); -#include "debuglib.c" -static void xlog(char *x, ...) -{ -#ifndef DIVA_NO_DEBUGLIB - va_list ap; - if (myDriverDebugHandle.dbgMask & DL_XLOG) { - va_start(ap, x); - if (myDriverDebugHandle.dbg_irq) { - myDriverDebugHandle.dbg_irq(myDriverDebugHandle.id, - DLI_XLOG, x, ap); - } else if (myDriverDebugHandle.dbg_old) { - myDriverDebugHandle.dbg_old(myDriverDebugHandle.id, - x, ap); - } - va_end(ap); - } -#endif -} - -/* - * info for proc - */ -static char *diva_procinfo(struct capi_ctr *ctrl) -{ - return (ctrl->serial); -} - -/* - * stop debugging - */ -static void stop_dbg(void) -{ - DbgDeregister(); - memset(&MAdapter, 0, sizeof(MAdapter)); - dprintf = no_printf; -} - -/* - * dummy debug function - */ -static void no_printf(unsigned char *x, ...) 
-{ -} - -/* - * Controller mapping - */ -byte MapController(byte Controller) -{ - byte i; - byte MappedController = 0; - byte ctrl = Controller & 0x7f; /* mask external controller bit off */ - - for (i = 1; i < max_adapter + 1; i++) { - if (ctrl == ControllerMap[i]) { - MappedController = (byte) i; - break; - } - } - if (i > max_adapter) { - ControllerMap[0] = ctrl; - MappedController = 0; - } - return (MappedController | (Controller & 0x80)); /* put back external controller bit */ -} - -/* - * Controller unmapping - */ -byte UnMapController(byte MappedController) -{ - byte Controller; - byte ctrl = MappedController & 0x7f; /* mask external controller bit off */ - - if (ctrl <= max_adapter) { - Controller = ControllerMap[ctrl]; - } else { - Controller = 0; - } - - return (Controller | (MappedController & 0x80)); /* put back external controller bit */ -} - -/* - * find a new free id - */ -static int find_free_id(void) -{ - int num = 0; - DIVA_CAPI_ADAPTER *a; - - while (num < MAX_DESCRIPTORS) { - a = &adapter[num]; - if (!a->Id) - break; - num++; - } - return (num + 1); -} - -/* - * find a card structure by controller number - */ -static diva_card *find_card_by_ctrl(word controller) -{ - struct list_head *tmp; - diva_card *card; - - list_for_each(tmp, &cards) { - card = list_entry(tmp, diva_card, list); - if (ControllerMap[card->Id] == controller) { - if (card->remove_in_progress) - card = NULL; - return (card); - } - } - return (diva_card *) 0; -} - -/* - * Buffer RX/TX - */ -void *TransmitBufferSet(APPL *appl, dword ref) -{ - appl->xbuffer_used[ref] = true; - DBG_PRV1(("%d:xbuf_used(%d)", appl->Id, ref + 1)) - return (void *)(long)ref; -} - -void *TransmitBufferGet(APPL *appl, void *p) -{ - if (appl->xbuffer_internal[(dword)(long)p]) - return appl->xbuffer_internal[(dword)(long)p]; - - return appl->xbuffer_ptr[(dword)(long)p]; -} - -void TransmitBufferFree(APPL *appl, void *p) -{ - appl->xbuffer_used[(dword)(long)p] = false; - DBG_PRV1(("%d:xbuf_free(%d)", appl->Id, ((dword)(long)p) + 1)) - } - -void *ReceiveBufferGet(APPL *appl, int Num) -{ - return &appl->ReceiveBuffer[Num * appl->MaxDataLength]; -} - -/* - * api_remove_start/complete for cleanup - */ -void api_remove_complete(void) -{ - DBG_PRV1(("api_remove_complete")) - } - -/* - * main function called by message.c - */ -void sendf(APPL *appl, word command, dword Id, word Number, byte *format, ...) 
-{ - word i, j; - word length = 12, dlength = 0; - byte *write; - CAPI_MSG msg; - byte *string = NULL; - va_list ap; - diva_os_message_buffer_s *dmb; - diva_card *card = NULL; - dword tmp; - - if (!appl) - return; - - DBG_PRV1(("sendf(a=%d,cmd=%x,format=%s)", - appl->Id, command, (byte *) format)) - - PUT_WORD(&msg.header.appl_id, appl->Id); - PUT_WORD(&msg.header.command, command); - if ((byte) (command >> 8) == 0x82) - Number = appl->Number++; - PUT_WORD(&msg.header.number, Number); - - PUT_DWORD(&msg.header.controller, Id); - write = (byte *)&msg; - write += 12; - - va_start(ap, format); - for (i = 0; format[i]; i++) { - switch (format[i]) { - case 'b': - tmp = va_arg(ap, dword); - *(byte *) write = (byte) (tmp & 0xff); - write += 1; - length += 1; - break; - case 'w': - tmp = va_arg(ap, dword); - PUT_WORD(write, (tmp & 0xffff)); - write += 2; - length += 2; - break; - case 'd': - tmp = va_arg(ap, dword); - PUT_DWORD(write, tmp); - write += 4; - length += 4; - break; - case 's': - case 'S': - string = va_arg(ap, byte *); - length += string[0] + 1; - for (j = 0; j <= string[0]; j++) - *write++ = string[j]; - break; - } - } - va_end(ap); - - PUT_WORD(&msg.header.length, length); - msg.header.controller = UnMapController(msg.header.controller); - - if (command == _DATA_B3_I) - dlength = GET_WORD( - ((byte *)&msg.info.data_b3_ind.Data_Length)); - - if (!(dmb = diva_os_alloc_message_buffer(length + dlength, - (void **) &write))) { - DBG_ERR(("sendf: alloc_message_buffer failed, incoming msg dropped.")) - return; - } - - /* copy msg header to sk_buff */ - memcpy(write, (byte *)&msg, length); - - /* if DATA_B3_IND, copy data too */ - if (command == _DATA_B3_I) { - dword data = GET_DWORD(&msg.info.data_b3_ind.Data); - memcpy(write + length, (void *)(long)data, dlength); - } - -#ifndef DIVA_NO_DEBUGLIB - if (myDriverDebugHandle.dbgMask & DL_XLOG) { - switch (command) { - default: - xlog("\x00\x02", &msg, 0x81, length); - break; - case _DATA_B3_R | CONFIRM: - if (myDriverDebugHandle.dbgMask & DL_BLK) - xlog("\x00\x02", &msg, 0x81, length); - break; - case _DATA_B3_I: - if (myDriverDebugHandle.dbgMask & DL_BLK) { - xlog("\x00\x02", &msg, 0x81, length); - for (i = 0; i < dlength; i += 256) { - DBG_BLK((((char *)(long)GET_DWORD(&msg.info.data_b3_ind.Data)) + i, - ((dlength - i) < 256) ? (dlength - i) : 256)) - if (!(myDriverDebugHandle.dbgMask & DL_PRV0)) - break; /* not more if not explicitly requested */ - } - } - break; - } - } -#endif - - /* find the card structure for this controller */ - if (!(card = find_card_by_ctrl(write[8] & 0x7f))) { - DBG_ERR(("sendf - controller %d not found, incoming msg dropped", - write[8] & 0x7f)) - diva_os_free_message_buffer(dmb); - return; - } - /* send capi msg to capi layer */ - capi_ctr_handle_message(&card->capi_ctrl, appl->Id, dmb); -} - -/* - * cleanup adapter - */ -static void clean_adapter(int id, struct list_head *free_mem_q) -{ - DIVA_CAPI_ADAPTER *a; - int i, k; - - a = &adapter[id]; - k = li_total_channels - a->li_channels; - if (k == 0) { - if (li_config_table) { - list_add((struct list_head *)li_config_table, free_mem_q); - li_config_table = NULL; - } - } else { - if (a->li_base < k) { - memmove(&li_config_table[a->li_base], - &li_config_table[a->li_base + a->li_channels], - (k - a->li_base) * sizeof(LI_CONFIG)); - for (i = 0; i < k; i++) { - memmove(&li_config_table[i].flag_table[a->li_base], - &li_config_table[i].flag_table[a->li_base + a->li_channels], - k - a->li_base); - memmove(&li_config_table[i]. 
- coef_table[a->li_base], - &li_config_table[i].coef_table[a->li_base + a->li_channels], - k - a->li_base); - } - } - } - li_total_channels = k; - for (i = id; i < max_adapter; i++) { - if (adapter[i].request) - adapter[i].li_base -= a->li_channels; - } - if (a->plci) - list_add((struct list_head *)a->plci, free_mem_q); - - memset(a, 0x00, sizeof(DIVA_CAPI_ADAPTER)); - while ((max_adapter != 0) && !adapter[max_adapter - 1].request) - max_adapter--; -} - -/* - * remove a card, but ensures consistent state of LI tables - * in the time adapter is removed - */ -static void divacapi_remove_card(DESCRIPTOR *d) -{ - diva_card *card = NULL; - diva_os_spin_lock_magic_t old_irql; - LIST_HEAD(free_mem_q); - struct list_head *link; - struct list_head *tmp; - - /* - * Set "remove in progress flag". - * Ensures that there is no call from sendf to CAPI in - * the time CAPI controller is about to be removed. - */ - diva_os_enter_spin_lock(&api_lock, &old_irql, "remove card"); - list_for_each(tmp, &cards) { - card = list_entry(tmp, diva_card, list); - if (card->d.request == d->request) { - card->remove_in_progress = 1; - list_del(tmp); - break; - } - } - diva_os_leave_spin_lock(&api_lock, &old_irql, "remove card"); - - if (card) { - /* - * Detach CAPI. Sendf cannot call to CAPI any more. - * After detach no call to send_message() is done too. - */ - detach_capi_ctr(&card->capi_ctrl); - - /* - * Now get API lock (to ensure stable state of LI tables) - * and update the adapter map/LI table. - */ - diva_os_enter_spin_lock(&api_lock, &old_irql, "remove card"); - - clean_adapter(card->Id - 1, &free_mem_q); - DBG_TRC(("DelAdapterMap (%d) -> (%d)", - ControllerMap[card->Id], card->Id)) - ControllerMap[card->Id] = 0; - DBG_TRC(("adapter remove, max_adapter=%d", - max_adapter)); - diva_os_leave_spin_lock(&api_lock, &old_irql, "remove card"); - - /* After releasing the lock, we can free the memory */ - diva_os_free(0, card); - } - - /* free queued memory areas */ - list_for_each_safe(link, tmp, &free_mem_q) { - list_del(link); - diva_os_free(0, link); - } -} - -/* - * remove cards - */ -static void divacapi_remove_cards(void) -{ - DESCRIPTOR d; - struct list_head *tmp; - diva_card *card; - diva_os_spin_lock_magic_t old_irql; - -rescan: - diva_os_enter_spin_lock(&api_lock, &old_irql, "remove cards"); - list_for_each(tmp, &cards) { - card = list_entry(tmp, diva_card, list); - diva_os_leave_spin_lock(&api_lock, &old_irql, "remove cards"); - d.request = card->d.request; - divacapi_remove_card(&d); - goto rescan; - } - diva_os_leave_spin_lock(&api_lock, &old_irql, "remove cards"); -} - -/* - * sync_callback - */ -static void sync_callback(ENTITY *e) -{ - diva_os_spin_lock_magic_t old_irql; - - DBG_TRC(("cb:Id=%x,Rc=%x,Ind=%x", e->Id, e->Rc, e->Ind)) - - diva_os_enter_spin_lock(&api_lock, &old_irql, "sync_callback"); - callback(e); - diva_os_leave_spin_lock(&api_lock, &old_irql, "sync_callback"); -} - -/* - * add a new card - */ -static int diva_add_card(DESCRIPTOR *d) -{ - int k = 0, i = 0; - diva_os_spin_lock_magic_t old_irql; - diva_card *card = NULL; - struct capi_ctr *ctrl = NULL; - DIVA_CAPI_ADAPTER *a = NULL; - IDI_SYNC_REQ sync_req; - char serial[16]; - void *mem_to_free; - LI_CONFIG *new_li_config_table; - int j; - - if (!(card = (diva_card *) diva_os_malloc(0, sizeof(diva_card)))) { - DBG_ERR(("diva_add_card: failed to allocate card struct.")) - return (0); - } - memset((char *) card, 0x00, sizeof(diva_card)); - memcpy(&card->d, d, sizeof(DESCRIPTOR)); - sync_req.GetName.Req = 0; - sync_req.GetName.Rc = 
IDI_SYNC_REQ_GET_NAME; - card->d.request((ENTITY *)&sync_req); - strlcpy(card->name, sync_req.GetName.name, sizeof(card->name)); - ctrl = &card->capi_ctrl; - strcpy(ctrl->name, card->name); - ctrl->register_appl = diva_register_appl; - ctrl->release_appl = diva_release_appl; - ctrl->send_message = diva_send_message; - ctrl->procinfo = diva_procinfo; - ctrl->driverdata = card; - diva_os_set_controller_struct(ctrl); - - if (attach_capi_ctr(ctrl)) { - DBG_ERR(("diva_add_card: failed to attach controller.")) - diva_os_free(0, card); - return (0); - } - - diva_os_enter_spin_lock(&api_lock, &old_irql, "find id"); - card->Id = find_free_id(); - diva_os_leave_spin_lock(&api_lock, &old_irql, "find id"); - - strlcpy(ctrl->manu, M_COMPANY, sizeof(ctrl->manu)); - ctrl->version.majorversion = 2; - ctrl->version.minorversion = 0; - ctrl->version.majormanuversion = DRRELMAJOR; - ctrl->version.minormanuversion = DRRELMINOR; - sync_req.GetSerial.Req = 0; - sync_req.GetSerial.Rc = IDI_SYNC_REQ_GET_SERIAL; - sync_req.GetSerial.serial = 0; - card->d.request((ENTITY *)&sync_req); - if ((i = ((sync_req.GetSerial.serial & 0xff000000) >> 24))) { - sprintf(serial, "%ld-%d", - sync_req.GetSerial.serial & 0x00ffffff, i + 1); - } else { - sprintf(serial, "%ld", sync_req.GetSerial.serial); - } - serial[CAPI_SERIAL_LEN - 1] = 0; - strlcpy(ctrl->serial, serial, sizeof(ctrl->serial)); - - a = &adapter[card->Id - 1]; - card->adapter = a; - a->os_card = card; - ControllerMap[card->Id] = (byte) (ctrl->cnr); - - DBG_TRC(("AddAdapterMap (%d) -> (%d)", ctrl->cnr, card->Id)) - - sync_req.xdi_capi_prms.Req = 0; - sync_req.xdi_capi_prms.Rc = IDI_SYNC_REQ_XDI_GET_CAPI_PARAMS; - sync_req.xdi_capi_prms.info.structure_length = - sizeof(diva_xdi_get_capi_parameters_t); - card->d.request((ENTITY *)&sync_req); - a->flag_dynamic_l1_down = - sync_req.xdi_capi_prms.info.flag_dynamic_l1_down; - a->group_optimization_enabled = - sync_req.xdi_capi_prms.info.group_optimization_enabled; - a->request = DIRequest; /* card->d.request; */ - a->max_plci = card->d.channels + 30; - a->max_listen = (card->d.channels > 2) ? 8 : 2; - if (! 
- (a->plci = - (PLCI *) diva_os_malloc(0, sizeof(PLCI) * a->max_plci))) { - DBG_ERR(("diva_add_card: failed alloc plci struct.")) - memset(a, 0, sizeof(DIVA_CAPI_ADAPTER)); - return (0); - } - memset(a->plci, 0, sizeof(PLCI) * a->max_plci); - - for (k = 0; k < a->max_plci; k++) { - a->Id = (byte) card->Id; - a->plci[k].Sig.callback = sync_callback; - a->plci[k].Sig.XNum = 1; - a->plci[k].Sig.X = a->plci[k].XData; - a->plci[k].Sig.user[0] = (word) (card->Id - 1); - a->plci[k].Sig.user[1] = (word) k; - a->plci[k].NL.callback = sync_callback; - a->plci[k].NL.XNum = 1; - a->plci[k].NL.X = a->plci[k].XData; - a->plci[k].NL.user[0] = (word) ((card->Id - 1) | 0x8000); - a->plci[k].NL.user[1] = (word) k; - a->plci[k].adapter = a; - } - - a->profile.Number = card->Id; - a->profile.Channels = card->d.channels; - if (card->d.features & DI_FAX3) { - a->profile.Global_Options = 0x71; - if (card->d.features & DI_CODEC) - a->profile.Global_Options |= 0x6; -#if IMPLEMENT_DTMF - a->profile.Global_Options |= 0x8; -#endif /* IMPLEMENT_DTMF */ - a->profile.Global_Options |= 0x80; /* Line Interconnect */ -#if IMPLEMENT_ECHO_CANCELLER - a->profile.Global_Options |= 0x100; -#endif /* IMPLEMENT_ECHO_CANCELLER */ - a->profile.B1_Protocols = 0xdf; - a->profile.B2_Protocols = 0x1fdb; - a->profile.B3_Protocols = 0xb7; - a->manufacturer_features = MANUFACTURER_FEATURE_HARDDTMF; - } else { - a->profile.Global_Options = 0x71; - if (card->d.features & DI_CODEC) - a->profile.Global_Options |= 0x2; - a->profile.B1_Protocols = 0x43; - a->profile.B2_Protocols = 0x1f0f; - a->profile.B3_Protocols = 0x07; - a->manufacturer_features = 0; - } - - a->li_pri = (a->profile.Channels > 2); - a->li_channels = a->li_pri ? MIXER_CHANNELS_PRI : MIXER_CHANNELS_BRI; - a->li_base = 0; - for (i = 0; &adapter[i] != a; i++) { - if (adapter[i].request) - a->li_base = adapter[i].li_base + adapter[i].li_channels; - } - k = li_total_channels + a->li_channels; - new_li_config_table = - (LI_CONFIG *) diva_os_malloc(0, ((k * sizeof(LI_CONFIG) + 3) & ~3) + (2 * k) * ((k + 3) & ~3)); - if (new_li_config_table == NULL) { - DBG_ERR(("diva_add_card: failed alloc li_config table.")) - memset(a, 0, sizeof(DIVA_CAPI_ADAPTER)); - return (0); - } - - /* Prevent access to line interconnect table in process update */ - diva_os_enter_spin_lock(&api_lock, &old_irql, "add card"); - - j = 0; - for (i = 0; i < k; i++) { - if ((i >= a->li_base) && (i < a->li_base + a->li_channels)) - memset(&new_li_config_table[i], 0, sizeof(LI_CONFIG)); - else - memcpy(&new_li_config_table[i], &li_config_table[j], sizeof(LI_CONFIG)); - new_li_config_table[i].flag_table = - ((byte *) new_li_config_table) + (((k * sizeof(LI_CONFIG) + 3) & ~3) + (2 * i) * ((k + 3) & ~3)); - new_li_config_table[i].coef_table = - ((byte *) new_li_config_table) + (((k * sizeof(LI_CONFIG) + 3) & ~3) + (2 * i + 1) * ((k + 3) & ~3)); - if ((i >= a->li_base) && (i < a->li_base + a->li_channels)) { - new_li_config_table[i].adapter = a; - memset(&new_li_config_table[i].flag_table[0], 0, k); - memset(&new_li_config_table[i].coef_table[0], 0, k); - } else { - if (a->li_base != 0) { - memcpy(&new_li_config_table[i].flag_table[0], - &li_config_table[j].flag_table[0], - a->li_base); - memcpy(&new_li_config_table[i].coef_table[0], - &li_config_table[j].coef_table[0], - a->li_base); - } - memset(&new_li_config_table[i].flag_table[a->li_base], 0, a->li_channels); - memset(&new_li_config_table[i].coef_table[a->li_base], 0, a->li_channels); - if (a->li_base + a->li_channels < k) { - 
memcpy(&new_li_config_table[i].flag_table[a->li_base + - a->li_channels], - &li_config_table[j].flag_table[a->li_base], - k - (a->li_base + a->li_channels)); - memcpy(&new_li_config_table[i].coef_table[a->li_base + - a->li_channels], - &li_config_table[j].coef_table[a->li_base], - k - (a->li_base + a->li_channels)); - } - j++; - } - } - li_total_channels = k; - - mem_to_free = li_config_table; - - li_config_table = new_li_config_table; - for (i = card->Id; i < max_adapter; i++) { - if (adapter[i].request) - adapter[i].li_base += a->li_channels; - } - - if (a == &adapter[max_adapter]) - max_adapter++; - - list_add(&(card->list), &cards); - AutomaticLaw(a); - - diva_os_leave_spin_lock(&api_lock, &old_irql, "add card"); - - if (mem_to_free) { - diva_os_free(0, mem_to_free); - } - - i = 0; - while (i++ < 30) { - if (a->automatic_law > 3) - break; - diva_os_sleep(10); - } - - /* profile information */ - PUT_WORD(&ctrl->profile.nbchannel, card->d.channels); - ctrl->profile.goptions = a->profile.Global_Options; - ctrl->profile.support1 = a->profile.B1_Protocols; - ctrl->profile.support2 = a->profile.B2_Protocols; - ctrl->profile.support3 = a->profile.B3_Protocols; - /* manufacturer profile information */ - ctrl->profile.manu[0] = a->man_profile.private_options; - ctrl->profile.manu[1] = a->man_profile.rtp_primary_payloads; - ctrl->profile.manu[2] = a->man_profile.rtp_additional_payloads; - ctrl->profile.manu[3] = 0; - ctrl->profile.manu[4] = 0; - - capi_ctr_ready(ctrl); - - DBG_TRC(("adapter added, max_adapter=%d", max_adapter)); - return (1); -} - -/* - * register appl - */ -static void diva_register_appl(struct capi_ctr *ctrl, __u16 appl, - capi_register_params *rp) -{ - APPL *this; - word bnum, xnum; - int i = 0; - unsigned char *p; - void *DataNCCI, *DataFlags, *ReceiveBuffer, *xbuffer_used; - void **xbuffer_ptr, **xbuffer_internal; - diva_os_spin_lock_magic_t old_irql; - unsigned int mem_len; - int nconn = rp->level3cnt; - - - if (diva_os_in_irq()) { - DBG_ERR(("CAPI_REGISTER - in irq context !")) - return; - } - - DBG_TRC(("application register Id=%d", appl)) - - if (appl > MAX_APPL) { - DBG_ERR(("CAPI_REGISTER - appl.Id exceeds MAX_APPL")) - return; - } - - if (nconn <= 0) - nconn = ctrl->profile.nbchannel * -nconn; - - if (nconn == 0) - nconn = ctrl->profile.nbchannel; - - DBG_LOG(("CAPI_REGISTER - Id = %d", appl)) - DBG_LOG((" MaxLogicalConnections = %d(%d)", nconn, rp->level3cnt)) - DBG_LOG((" MaxBDataBuffers = %d", rp->datablkcnt)) - DBG_LOG((" MaxBDataLength = %d", rp->datablklen)) - - if (nconn < 1 || - nconn > 255 || - rp->datablklen < 80 || - rp->datablklen > 2150 || rp->datablkcnt > 255) { - DBG_ERR(("CAPI_REGISTER - invalid parameters")) - return; - } - - if (application[appl - 1].Id == appl) { - DBG_LOG(("CAPI_REGISTER - appl already registered")) - return; /* appl already registered */ - } - - /* alloc memory */ - - bnum = nconn * rp->datablkcnt; - xnum = nconn * MAX_DATA_B3; - - mem_len = bnum * sizeof(word); /* DataNCCI */ - mem_len += bnum * sizeof(word); /* DataFlags */ - mem_len += bnum * rp->datablklen; /* ReceiveBuffer */ - mem_len += xnum; /* xbuffer_used */ - mem_len += xnum * sizeof(void *); /* xbuffer_ptr */ - mem_len += xnum * sizeof(void *); /* xbuffer_internal */ - mem_len += xnum * rp->datablklen; /* xbuffer_ptr[xnum] */ - - DBG_LOG((" Allocated Memory = %d", mem_len)) - if (!(p = diva_os_malloc(0, mem_len))) { - DBG_ERR(("CAPI_REGISTER - memory allocation failed")) - return; - } - memset(p, 0, mem_len); - - DataNCCI = (void *)p; - p += bnum * sizeof(word); - 
DataFlags = (void *)p; - p += bnum * sizeof(word); - ReceiveBuffer = (void *)p; - p += bnum * rp->datablklen; - xbuffer_used = (void *)p; - p += xnum; - xbuffer_ptr = (void **)p; - p += xnum * sizeof(void *); - xbuffer_internal = (void **)p; - p += xnum * sizeof(void *); - for (i = 0; i < xnum; i++) { - xbuffer_ptr[i] = (void *)p; - p += rp->datablklen; - } - - /* initialize application data */ - diva_os_enter_spin_lock(&api_lock, &old_irql, "register_appl"); - - this = &application[appl - 1]; - memset(this, 0, sizeof(APPL)); - - this->Id = appl; - - for (i = 0; i < max_adapter; i++) { - adapter[i].CIP_Mask[appl - 1] = 0; - } - - this->queue_size = 1000; - - this->MaxNCCI = (byte) nconn; - this->MaxNCCIData = (byte) rp->datablkcnt; - this->MaxBuffer = bnum; - this->MaxDataLength = rp->datablklen; - - this->DataNCCI = DataNCCI; - this->DataFlags = DataFlags; - this->ReceiveBuffer = ReceiveBuffer; - this->xbuffer_used = xbuffer_used; - this->xbuffer_ptr = xbuffer_ptr; - this->xbuffer_internal = xbuffer_internal; - for (i = 0; i < xnum; i++) { - this->xbuffer_ptr[i] = xbuffer_ptr[i]; - } - - CapiRegister(this->Id); - diva_os_leave_spin_lock(&api_lock, &old_irql, "register_appl"); - -} - -/* - * release appl - */ -static void diva_release_appl(struct capi_ctr *ctrl, __u16 appl) -{ - diva_os_spin_lock_magic_t old_irql; - APPL *this = &application[appl - 1]; - void *mem_to_free = NULL; - - DBG_TRC(("application %d(%d) cleanup", this->Id, appl)) - - if (diva_os_in_irq()) { - DBG_ERR(("CAPI_RELEASE - in irq context !")) - return; - } - - diva_os_enter_spin_lock(&api_lock, &old_irql, "release_appl"); - if (this->Id) { - CapiRelease(this->Id); - mem_to_free = this->DataNCCI; - this->DataNCCI = NULL; - this->Id = 0; - } - diva_os_leave_spin_lock(&api_lock, &old_irql, "release_appl"); - - if (mem_to_free) - diva_os_free(0, mem_to_free); - -} - -/* - * send message - */ -static u16 diva_send_message(struct capi_ctr *ctrl, - diva_os_message_buffer_s *dmb) -{ - int i = 0; - word ret = 0; - diva_os_spin_lock_magic_t old_irql; - CAPI_MSG *msg = (CAPI_MSG *) DIVA_MESSAGE_BUFFER_DATA(dmb); - APPL *this = &application[GET_WORD(&msg->header.appl_id) - 1]; - diva_card *card = ctrl->driverdata; - __u32 length = DIVA_MESSAGE_BUFFER_LEN(dmb); - word clength = GET_WORD(&msg->header.length); - word command = GET_WORD(&msg->header.command); - u16 retval = CAPI_NOERROR; - - if (diva_os_in_irq()) { - DBG_ERR(("CAPI_SEND_MSG - in irq context !")) - return CAPI_REGOSRESOURCEERR; - } - DBG_PRV1(("Write - appl = %d, cmd = 0x%x", this->Id, command)) - - if (card->remove_in_progress) { - DBG_ERR(("CAPI_SEND_MSG - remove in progress!")) - return CAPI_REGOSRESOURCEERR; - } - - diva_os_enter_spin_lock(&api_lock, &old_irql, "send message"); - - if (!this->Id) { - diva_os_leave_spin_lock(&api_lock, &old_irql, "send message"); - return CAPI_ILLAPPNR; - } - - /* patch controller number */ - msg->header.controller = ControllerMap[card->Id] - | (msg->header.controller & 0x80); /* preserve external controller bit */ - - switch (command) { - default: - xlog("\x00\x02", msg, 0x80, clength); - break; - - case _DATA_B3_I | RESPONSE: -#ifndef DIVA_NO_DEBUGLIB - if (myDriverDebugHandle.dbgMask & DL_BLK) - xlog("\x00\x02", msg, 0x80, clength); -#endif - break; - - case _DATA_B3_R: -#ifndef DIVA_NO_DEBUGLIB - if (myDriverDebugHandle.dbgMask & DL_BLK) - xlog("\x00\x02", msg, 0x80, clength); -#endif - - if (clength == 24) - clength = 22; /* workaround for PPcom bug */ - /* header is always 22 */ - if 
(GET_WORD(&msg->info.data_b3_req.Data_Length) > - this->MaxDataLength - || GET_WORD(&msg->info.data_b3_req.Data_Length) > - (length - clength)) { - DBG_ERR(("Write - invalid message size")) - retval = CAPI_ILLCMDORSUBCMDORMSGTOSMALL; - goto write_end; - } - - for (i = 0; i < (MAX_DATA_B3 * this->MaxNCCI) - && this->xbuffer_used[i]; i++); - if (i == (MAX_DATA_B3 * this->MaxNCCI)) { - DBG_ERR(("Write - too many data pending")) - retval = CAPI_SENDQUEUEFULL; - goto write_end; - } - msg->info.data_b3_req.Data = i; - - this->xbuffer_internal[i] = NULL; - memcpy(this->xbuffer_ptr[i], &((__u8 *) msg)[clength], - GET_WORD(&msg->info.data_b3_req.Data_Length)); - -#ifndef DIVA_NO_DEBUGLIB - if ((myDriverDebugHandle.dbgMask & DL_BLK) - && (myDriverDebugHandle.dbgMask & DL_XLOG)) { - int j; - for (j = 0; j < - GET_WORD(&msg->info.data_b3_req.Data_Length); - j += 256) { - DBG_BLK((((char *) this->xbuffer_ptr[i]) + j, - ((GET_WORD(&msg->info.data_b3_req.Data_Length) - j) < - 256) ? (GET_WORD(&msg->info.data_b3_req.Data_Length) - j) : 256)) - if (!(myDriverDebugHandle.dbgMask & DL_PRV0)) - break; /* not more if not explicitly requested */ - } - } -#endif - break; - } - - memcpy(mapped_msg, msg, (__u32) clength); - mapped_msg->header.controller = MapController(mapped_msg->header.controller); - mapped_msg->header.length = clength; - mapped_msg->header.command = command; - mapped_msg->header.number = GET_WORD(&msg->header.number); - - ret = api_put(this, mapped_msg); - switch (ret) { - case 0: - break; - case _BAD_MSG: - DBG_ERR(("Write - bad message")) - retval = CAPI_ILLCMDORSUBCMDORMSGTOSMALL; - break; - case _QUEUE_FULL: - DBG_ERR(("Write - queue full")) - retval = CAPI_SENDQUEUEFULL; - break; - default: - DBG_ERR(("Write - api_put returned unknown error")) - retval = CAPI_UNKNOWNNOTPAR; - break; - } - -write_end: - diva_os_leave_spin_lock(&api_lock, &old_irql, "send message"); - if (retval == CAPI_NOERROR) - diva_os_free_message_buffer(dmb); - return retval; -} - - -/* - * cards request function - */ -static void DIRequest(ENTITY *e) -{ - DIVA_CAPI_ADAPTER *a = &(adapter[(byte) e->user[0]]); - diva_card *os_card = (diva_card *) a->os_card; - - if (e->Req && (a->FlowControlIdTable[e->ReqCh] == e->Id)) { - a->FlowControlSkipTable[e->ReqCh] = 1; - } - - (*(os_card->d.request)) (e); -} - -/* - * callback function from didd - */ -static void didd_callback(void *context, DESCRIPTOR *adapter, int removal) -{ - if (adapter->type == IDI_DADAPTER) { - DBG_ERR(("Notification about IDI_DADAPTER change ! 
Oops.")); - return; - } else if (adapter->type == IDI_DIMAINT) { - if (removal) { - stop_dbg(); - } else { - memcpy(&MAdapter, adapter, sizeof(MAdapter)); - dprintf = (DIVA_DI_PRINTF) MAdapter.request; - DbgRegister("CAPI20", DRIVERRELEASE_CAPI, DBG_DEFAULT); - } - } else if ((adapter->type > 0) && (adapter->type < 16)) { /* IDI Adapter */ - if (removal) { - divacapi_remove_card(adapter); - } else { - diva_add_card(adapter); - } - } - return; -} - -/* - * connect to didd - */ -static int divacapi_connect_didd(void) -{ - int x = 0; - int dadapter = 0; - IDI_SYNC_REQ req; - DESCRIPTOR DIDD_Table[MAX_DESCRIPTORS]; - - DIVA_DIDD_Read(DIDD_Table, sizeof(DIDD_Table)); - - for (x = 0; x < MAX_DESCRIPTORS; x++) { - if (DIDD_Table[x].type == IDI_DIMAINT) { /* MAINT found */ - memcpy(&MAdapter, &DIDD_Table[x], sizeof(DAdapter)); - dprintf = (DIVA_DI_PRINTF) MAdapter.request; - DbgRegister("CAPI20", DRIVERRELEASE_CAPI, DBG_DEFAULT); - break; - } - } - for (x = 0; x < MAX_DESCRIPTORS; x++) { - if (DIDD_Table[x].type == IDI_DADAPTER) { /* DADAPTER found */ - dadapter = 1; - memcpy(&DAdapter, &DIDD_Table[x], sizeof(DAdapter)); - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = - IDI_SYNC_REQ_DIDD_REGISTER_ADAPTER_NOTIFY; - req.didd_notify.info.callback = (void *)didd_callback; - req.didd_notify.info.context = NULL; - DAdapter.request((ENTITY *)&req); - if (req.didd_notify.e.Rc != 0xff) { - stop_dbg(); - return (0); - } - notify_handle = req.didd_notify.info.handle; - } - else if ((DIDD_Table[x].type > 0) && (DIDD_Table[x].type < 16)) { /* IDI Adapter found */ - diva_add_card(&DIDD_Table[x]); - } - } - - if (!dadapter) { - stop_dbg(); - } - - return (dadapter); -} - -/* - * diconnect from didd - */ -static void divacapi_disconnect_didd(void) -{ - IDI_SYNC_REQ req; - - stop_dbg(); - - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER_NOTIFY; - req.didd_notify.info.handle = notify_handle; - DAdapter.request((ENTITY *)&req); -} - -/* - * we do not provide date/time here, - * the application should do this. 
- */ -int fax_head_line_time(char *buffer) -{ - return (0); -} - -/* - * init (alloc) main structures - */ -static int __init init_main_structs(void) -{ - if (!(mapped_msg = (CAPI_MSG *) diva_os_malloc(0, MAX_MSG_SIZE))) { - DBG_ERR(("init: failed alloc mapped_msg.")) - return 0; - } - - if (!(adapter = diva_os_malloc(0, sizeof(DIVA_CAPI_ADAPTER) * MAX_DESCRIPTORS))) { - DBG_ERR(("init: failed alloc adapter struct.")) - diva_os_free(0, mapped_msg); - return 0; - } - memset(adapter, 0, sizeof(DIVA_CAPI_ADAPTER) * MAX_DESCRIPTORS); - - if (!(application = diva_os_malloc(0, sizeof(APPL) * MAX_APPL))) { - DBG_ERR(("init: failed alloc application struct.")) - diva_os_free(0, mapped_msg); - diva_os_free(0, adapter); - return 0; - } - memset(application, 0, sizeof(APPL) * MAX_APPL); - - return (1); -} - -/* - * remove (free) main structures - */ -static void remove_main_structs(void) -{ - if (application) - diva_os_free(0, application); - if (adapter) - diva_os_free(0, adapter); - if (mapped_msg) - diva_os_free(0, mapped_msg); -} - -/* - * api_remove_start - */ -static void do_api_remove_start(void) -{ - diva_os_spin_lock_magic_t old_irql; - int ret = 1, count = 100; - - do { - diva_os_enter_spin_lock(&api_lock, &old_irql, "api remove start"); - ret = api_remove_start(); - diva_os_leave_spin_lock(&api_lock, &old_irql, "api remove start"); - - diva_os_sleep(10); - } while (ret && count--); - - if (ret) - DBG_ERR(("could not remove signaling ID's")) - } - -/* - * init - */ -int __init init_capifunc(void) -{ - diva_os_initialize_spin_lock(&api_lock, "capifunc"); - memset(ControllerMap, 0, MAX_DESCRIPTORS + 1); - max_adapter = 0; - - - if (!init_main_structs()) { - DBG_ERR(("init: failed to init main structs.")) - diva_os_destroy_spin_lock(&api_lock, "capifunc"); - return (0); - } - - if (!divacapi_connect_didd()) { - DBG_ERR(("init: failed to connect to DIDD.")) - do_api_remove_start(); - divacapi_remove_cards(); - remove_main_structs(); - diva_os_destroy_spin_lock(&api_lock, "capifunc"); - return (0); - } - - return (1); -} - -/* - * finit - */ -void __exit finit_capifunc(void) -{ - do_api_remove_start(); - divacapi_disconnect_didd(); - divacapi_remove_cards(); - remove_main_structs(); - diva_os_destroy_spin_lock(&api_lock, "capifunc"); -} diff --git a/drivers/isdn/hardware/eicon/capifunc.h b/drivers/isdn/hardware/eicon/capifunc.h deleted file mode 100644 index e96c45bb5638..000000000000 --- a/drivers/isdn/hardware/eicon/capifunc.h +++ /dev/null @@ -1,40 +0,0 @@ -/* $Id: capifunc.h,v 1.11.4.1 2004/08/28 20:03:53 armin Exp $ - * - * ISDN interface module for Eicon active cards DIVA. - * CAPI Interface common functions - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
- */
-
-#ifndef __CAPIFUNC_H__
-#define __CAPIFUNC_H__
-
-#define DRRELMAJOR 2
-#define DRRELMINOR 0
-#define DRRELEXTRA ""
-
-#define M_COMPANY "Eicon Networks"
-
-extern char DRIVERRELEASE_CAPI[];
-
-typedef struct _diva_card {
-	struct list_head list;
-	int remove_in_progress;
-	int Id;
-	struct capi_ctr capi_ctrl;
-	DIVA_CAPI_ADAPTER *adapter;
-	DESCRIPTOR d;
-	char name[32];
-} diva_card;
-
-/*
- * prototypes
- */
-int init_capifunc(void);
-void finit_capifunc(void);
-
-#endif /* __CAPIFUNC_H__ */
diff --git a/drivers/isdn/hardware/eicon/capimain.c b/drivers/isdn/hardware/eicon/capimain.c
deleted file mode 100644
index f9244dc1c3c9..000000000000
--- a/drivers/isdn/hardware/eicon/capimain.c
+++ /dev/null
@@ -1,141 +0,0 @@
-/* $Id: capimain.c,v 1.24 2003/09/09 06:51:05 schindler Exp $
- *
- * ISDN interface module for Eicon active cards DIVA.
- * CAPI Interface
- *
- * Copyright 2000-2003 by Armin Schindler (mac@melware.de)
- * Copyright 2000-2003 Cytronics & Melware (info@melware.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/uaccess.h>
-#include <linux/seq_file.h>
-#include <linux/skbuff.h>
-#include <linux/proc_fs.h>
-
-#include "os_capi.h"
-
-#include "platform.h"
-#include "di_defs.h"
-#include "capi20.h"
-#include "divacapi.h"
-#include "cp_vers.h"
-#include "capifunc.h"
-
-static char *main_revision = "$Revision: 1.24 $";
-static char *DRIVERNAME =
-	"Eicon DIVA - CAPI Interface driver (http://www.melware.net)";
-static char *DRIVERLNAME = "divacapi";
-
-MODULE_DESCRIPTION("CAPI driver for Eicon DIVA cards");
-MODULE_AUTHOR("Cytronics & Melware, Eicon Networks");
-MODULE_SUPPORTED_DEVICE("CAPI and DIVA card drivers");
-MODULE_LICENSE("GPL");
-
-/*
- * get revision number from revision string
- */
-static char *getrev(const char *revision)
-{
-	char *rev;
-	char *p;
-	if ((p = strchr(revision, ':'))) {
-		rev = p + 2;
-		p = strchr(rev, '$');
-		*--p = 0;
-	} else
-		rev = "1.0";
-	return rev;
-
-}
-
-/*
- * alloc a message buffer
- */
-diva_os_message_buffer_s *diva_os_alloc_message_buffer(unsigned long size,
-						       void **data_buf)
-{
-	diva_os_message_buffer_s *dmb = alloc_skb(size, GFP_ATOMIC);
-	if (dmb) {
-		*data_buf = skb_put(dmb, size);
-	}
-	return (dmb);
-}
-
-/*
- * free a message buffer
- */
-void diva_os_free_message_buffer(diva_os_message_buffer_s *dmb)
-{
-	kfree_skb(dmb);
-}
-
-/*
- * proc function for controller info
- */
-static int diva_ctl_proc_show(struct seq_file *m, void *v)
-{
-	struct capi_ctr *ctrl = m->private;
-	diva_card *card = (diva_card *) ctrl->driverdata;
-
-	seq_printf(m, "%s\n", ctrl->name);
-	seq_printf(m, "Serial No. 
: %s\n", ctrl->serial); - seq_printf(m, "Id : %d\n", card->Id); - seq_printf(m, "Channels : %d\n", card->d.channels); - - return 0; -} - -/* - * set additional os settings in capi_ctr struct - */ -void diva_os_set_controller_struct(struct capi_ctr *ctrl) -{ - ctrl->driver_name = DRIVERLNAME; - ctrl->load_firmware = NULL; - ctrl->reset_ctr = NULL; - ctrl->proc_show = diva_ctl_proc_show; - ctrl->owner = THIS_MODULE; -} - -/* - * module init - */ -static int __init divacapi_init(void) -{ - char tmprev[32]; - int ret = 0; - - sprintf(DRIVERRELEASE_CAPI, "%d.%d%s", DRRELMAJOR, DRRELMINOR, - DRRELEXTRA); - - printk(KERN_INFO "%s\n", DRIVERNAME); - printk(KERN_INFO "%s: Rel:%s Rev:", DRIVERLNAME, DRIVERRELEASE_CAPI); - strcpy(tmprev, main_revision); - printk("%s Build: %s(%s)\n", getrev(tmprev), - diva_capi_common_code_build, DIVA_BUILD); - - if (!(init_capifunc())) { - printk(KERN_ERR "%s: failed init capi_driver.\n", - DRIVERLNAME); - ret = -EIO; - } - - return ret; -} - -/* - * module exit - */ -static void __exit divacapi_exit(void) -{ - finit_capifunc(); - printk(KERN_INFO "%s: module unloaded.\n", DRIVERLNAME); -} - -module_init(divacapi_init); -module_exit(divacapi_exit); diff --git a/drivers/isdn/hardware/eicon/cardtype.h b/drivers/isdn/hardware/eicon/cardtype.h deleted file mode 100644 index 8b20e22cae1e..000000000000 --- a/drivers/isdn/hardware/eicon/cardtype.h +++ /dev/null @@ -1,1098 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef _CARDTYPE_H_ -#define _CARDTYPE_H_ -#ifndef CARDTYPE_H_WANT_DATA -#define CARDTYPE_H_WANT_DATA 0 -#endif -#ifndef CARDTYPE_H_WANT_IDI_DATA -#define CARDTYPE_H_WANT_IDI_DATA 0 -#endif -#ifndef CARDTYPE_H_WANT_RESOURCE_DATA -#define CARDTYPE_H_WANT_RESOURCE_DATA 1 -#endif -#ifndef CARDTYPE_H_WANT_FILE_DATA -#define CARDTYPE_H_WANT_FILE_DATA 1 -#endif -/* - * D-channel protocol identifiers - * - * Attention: Unfortunately the identifiers defined here differ from - * the identifiers used in Protocol/1/Common/prot/q931.h . - * The only reason for this is that q931.h has not a global - * scope and we did not know about the definitions there. - * But the definitions here cannot be changed easily because - * they are used in setup scripts and programs. - * Thus the definitions here have to be mapped if they are - * used in the protocol code context ! - * - * Now the identifiers are defined in the q931lib/constant.h file. - * Unfortunately this file has also not a global scope. - * But beginning with PROTTYPE_US any new identifier will get the same - * value as the corresponding PROT_* definition in q931lib/constant.h ! 
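
(Illustration only, not part of the patch: as the note above explains, the PROTTYPE_* identifiers defined below have to be translated before use in protocol code, and only the identifiers from PROTTYPE_US upwards share their values with the corresponding PROT_* definitions in q931lib/constant.h. A sketch of such a translation; PROT_HYPOTHETICAL_BASE is a stand-in for the real PROT_* values, which are not quoted in this file:)

/* Placeholder for the real PROT_* values from q931lib/constant.h,
 * which is not included here; illustration only. */
enum { PROT_HYPOTHETICAL_BASE = 0x100 };

/* Uses the PROTTYPE_* macros defined just below. */
static int prottype_to_prot_code(int prottype)
{
	/* From PROTTYPE_US (9) on, both schemes use the same value. */
	if (prottype >= PROTTYPE_US && prottype <= PROTTYPE_MAXVAL)
		return prottype;
	/* Identifiers below PROTTYPE_US predate that alignment and
	 * must be remapped; the targets here are placeholders only. */
	if (prottype >= PROTTYPE_MINVAL && prottype < PROTTYPE_US)
		return PROT_HYPOTHETICAL_BASE + prottype;
	return -1;	/* unknown identifier */
}

(Nothing in this header performs the translation itself; the real mapping lives with the protocol code, which is why the note insists these setup-script values cannot be changed.)
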
- */ -#define PROTTYPE_MINVAL 0 -#define PROTTYPE_ETSI 0 -#define PROTTYPE_1TR6 1 -#define PROTTYPE_BELG 2 -#define PROTTYPE_FRANC 3 -#define PROTTYPE_ATEL 4 -#define PROTTYPE_NI 5 /* DMS 100, Nortel, National ISDN */ -#define PROTTYPE_5ESS 6 /* 5ESS , AT&T, 5ESS Custom */ -#define PROTTYPE_JAPAN 7 -#define PROTTYPE_SWED 8 -#define PROTTYPE_US 9 /* US autodetect */ -#define PROTTYPE_ITALY 10 -#define PROTTYPE_TWAN 11 -#define PROTTYPE_AUSTRAL 12 -#define PROTTYPE_4ESDN 13 -#define PROTTYPE_4ESDS 14 -#define PROTTYPE_4ELDS 15 -#define PROTTYPE_4EMGC 16 -#define PROTTYPE_4EMGI 17 -#define PROTTYPE_HONGKONG 18 -#define PROTTYPE_RBSCAS 19 -#define PROTTYPE_CORNETN 20 -#define PROTTYPE_QSIG 21 -#define PROTTYPE_NI_EWSD 22 /* EWSD, Siemens, National ISDN */ -#define PROTTYPE_5ESS_NI 23 /* 5ESS, Lucent, National ISDN */ -#define PROTTYPE_T1CORNETN 24 -#define PROTTYPE_CORNETNQ 25 -#define PROTTYPE_T1CORNETNQ 26 -#define PROTTYPE_T1QSIG 27 -#define PROTTYPE_E1UNCH 28 -#define PROTTYPE_T1UNCH 29 -#define PROTTYPE_E1CHAN 30 -#define PROTTYPE_T1CHAN 31 -#define PROTTYPE_R2CAS 32 -#define PROTTYPE_MAXVAL 32 -/* - * Card type identifiers - */ -#define CARD_UNKNOWN 0 -#define CARD_NONE 0 -/* DIVA cards */ -#define CARDTYPE_DIVA_MCA 0 -#define CARDTYPE_DIVA_ISA 1 -#define CARDTYPE_DIVA_PCM 2 -#define CARDTYPE_DIVAPRO_ISA 3 -#define CARDTYPE_DIVAPRO_PCM 4 -#define CARDTYPE_DIVAPICO_ISA 5 -#define CARDTYPE_DIVAPICO_PCM 6 -/* DIVA 2.0 cards */ -#define CARDTYPE_DIVAPRO20_PCI 7 -#define CARDTYPE_DIVA20_PCI 8 -/* S cards */ -#define CARDTYPE_QUADRO_ISA 9 -#define CARDTYPE_S_ISA 10 -#define CARDTYPE_S_MCA 11 -#define CARDTYPE_SX_ISA 12 -#define CARDTYPE_SX_MCA 13 -#define CARDTYPE_SXN_ISA 14 -#define CARDTYPE_SXN_MCA 15 -#define CARDTYPE_SCOM_ISA 16 -#define CARDTYPE_SCOM_MCA 17 -#define CARDTYPE_PR_ISA 18 -#define CARDTYPE_PR_MCA 19 -/* Diva Server cards (formerly called Maestra, later Amadeo) */ -#define CARDTYPE_MAESTRA_ISA 20 -#define CARDTYPE_MAESTRA_PCI 21 -/* Diva Server cards to be developed (Quadro, Primary rate) */ -#define CARDTYPE_DIVASRV_Q_8M_PCI 22 -#define CARDTYPE_DIVASRV_P_30M_PCI 23 -#define CARDTYPE_DIVASRV_P_2M_PCI 24 -#define CARDTYPE_DIVASRV_P_9M_PCI 25 -/* DIVA 2.0 cards */ -#define CARDTYPE_DIVA20_ISA 26 -#define CARDTYPE_DIVA20U_ISA 27 -#define CARDTYPE_DIVA20U_PCI 28 -#define CARDTYPE_DIVAPRO20_ISA 29 -#define CARDTYPE_DIVAPRO20U_ISA 30 -#define CARDTYPE_DIVAPRO20U_PCI 31 -/* DIVA combi cards (piccola ISDN + rockwell V.34 modem) */ -#define CARDTYPE_DIVAMOBILE_PCM 32 -#define CARDTYPE_TDKGLOBALPRO_PCM 33 -/* DIVA Pro PC OEM card for 'New Media Corporation' */ -#define CARDTYPE_NMC_DIVAPRO_PCM 34 -/* DIVA Pro 2.0 OEM cards for 'British Telecom' */ -#define CARDTYPE_BT_EXLANE_PCI 35 -#define CARDTYPE_BT_EXLANE_ISA 36 -/* DIVA low cost cards, 1st name DIVA 3.0, 2nd DIVA 2.01, 3rd ??? 
*/ -#define CARDTYPE_DIVALOW_ISA 37 -#define CARDTYPE_DIVALOWU_ISA 38 -#define CARDTYPE_DIVALOW_PCI 39 -#define CARDTYPE_DIVALOWU_PCI 40 -/* DIVA combi cards (piccola ISDN + rockwell V.90 modem) */ -#define CARDTYPE_DIVAMOBILE_V90_PCM 41 -#define CARDTYPE_TDKGLOBPRO_V90_PCM 42 -#define CARDTYPE_DIVASRV_P_23M_PCI 43 -#define CARDTYPE_DIVALOW_USB 44 -/* DIVA Audio (CT) family */ -#define CARDTYPE_DIVA_CT_ST 45 -#define CARDTYPE_DIVA_CT_U 46 -#define CARDTYPE_DIVA_CTLITE_ST 47 -#define CARDTYPE_DIVA_CTLITE_U 48 -/* DIVA ISDN plus V.90 series */ -#define CARDTYPE_DIVAISDN_V90_PCM 49 -#define CARDTYPE_DIVAISDN_V90_PCI 50 -#define CARDTYPE_DIVAISDN_TA 51 -/* DIVA Server Voice cards */ -#define CARDTYPE_DIVASRV_VOICE_Q_8M_PCI 52 -/* DIVA Server V2 cards */ -#define CARDTYPE_DIVASRV_Q_8M_V2_PCI 53 -#define CARDTYPE_DIVASRV_P_30M_V2_PCI 54 -/* DIVA Server Voice V2 cards */ -#define CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI 55 -#define CARDTYPE_DIVASRV_VOICE_P_30M_V2_PCI 56 -/* Diva LAN */ -#define CARDTYPE_DIVAISDN_LAN 57 -#define CARDTYPE_DIVA_202_PCI_ST 58 -#define CARDTYPE_DIVA_202_PCI_U 59 -#define CARDTYPE_DIVASRV_B_2M_V2_PCI 60 -#define CARDTYPE_DIVASRV_B_2F_PCI 61 -#define CARDTYPE_DIVALOW_USBV2 62 -#define CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI 63 -#define CARDTYPE_DIVA_PRO_30_PCI_ST 64 -#define CARDTYPE_DIVA_CT_ST_V20 65 -/* Diva Mobile V.90 PC Card and Diva ISDN PC Card */ -#define CARDTYPE_DIVAMOBILE_V2_PCM 66 -#define CARDTYPE_DIVA_V2_PCM 67 -/* Re-badged Diva Pro PC Card */ -#define CARDTYPE_DIVA_PC_CARD 68 -/* next free card type identifier */ -#define CARDTYPE_MAX 69 -/* - * The card families - */ -#define FAMILY_DIVA 1 -#define FAMILY_S 2 -#define FAMILY_MAESTRA 3 -#define FAMILY_MAX 4 -/* - * The basic card types - */ -#define CARD_DIVA 1 /* DSP based, old DSP */ -#define CARD_PRO 2 /* DSP based, new DSP */ -#define CARD_PICO 3 /* HSCX based */ -#define CARD_S 4 /* IDI on board based */ -#define CARD_SX 5 /* IDI on board based */ -#define CARD_SXN 6 /* IDI on board based */ -#define CARD_SCOM 7 /* IDI on board based */ -#define CARD_QUAD 8 /* IDI on board based */ -#define CARD_PR 9 /* IDI on board based */ -#define CARD_MAE 10 /* IDI on board based */ -#define CARD_MAEQ 11 /* IDI on board based */ -#define CARD_MAEP 12 /* IDI on board based */ -#define CARD_DIVALOW 13 /* IPAC based */ -#define CARD_CT 14 /* SCOUT based */ -#define CARD_DIVATA 15 /* DIVA TA */ -#define CARD_DIVALAN 16 /* DIVA LAN */ -#define CARD_MAE2 17 /* IDI on board based */ -#define CARD_MAX 18 -/* - * The internal card types of the S family - */ -#define CARD_I_NONE 0 -#define CARD_I_S 0 -#define CARD_I_SX 1 -#define CARD_I_SCOM 2 -#define CARD_I_QUAD 3 -#define CARD_I_PR 4 -/* - * The bus types we support - */ -#define BUS_ISA 1 -#define BUS_PCM 2 -#define BUS_PCI 3 -#define BUS_MCA 4 -#define BUS_USB 5 -#define BUS_COM 6 -#define BUS_LAN 7 -/* - * The chips we use for B-channel traffic - */ -#define CHIP_NONE 0 -#define CHIP_DSP 1 -#define CHIP_HSCX 2 -#define CHIP_IPAC 3 -#define CHIP_SCOUT 4 -#define CHIP_EXTERN 5 -#define CHIP_IPACX 6 -/* - * The structures where the card properties are aggregated by id - */ -typedef struct CARD_PROPERTIES -{ char *Name; /* official marketing name */ - unsigned short PnPId; /* plug and play ID (for non PCMIA cards) */ - unsigned short Version; /* major and minor version no of the card */ - unsigned char DescType; /* card type to set in the IDI descriptor */ - unsigned char Family; /* basic family of the card */ - unsigned short Features; /* features bits to set in the IDI desc. 
*/ - unsigned char Card; /* basic card type */ - unsigned char IType; /* internal type of S cards (read from ram) */ - unsigned char Bus; /* bus type this card is designed for */ - unsigned char Chip; /* chipset used on card */ - unsigned char Adapters; /* number of adapters on card */ - unsigned char Channels; /* # of channels per adapter */ - unsigned short E_info; /* # of ram entity info structs per adapter */ - unsigned short SizeIo; /* size of IO window per adapter */ - unsigned short SizeMem; /* size of memory window per adapter */ -} CARD_PROPERTIES; -typedef struct CARD_RESOURCE -{ unsigned char Int[10]; - unsigned short IoFirst; - unsigned short IoStep; - unsigned short IoCnt; - unsigned long MemFirst; - unsigned long MemStep; - unsigned short MemCnt; -} CARD_RESOURCE; -/* test if the card of type 't' is a plug & play card */ -#define IS_PNP(t) \ - ( \ - ( \ - CardProperties[t].Bus != BUS_ISA \ - && \ - CardProperties[t].Bus != BUS_MCA \ - ) \ - || \ - ( \ - CardProperties[t].Family != FAMILY_S \ - && \ - CardProperties[t].Card != CARD_DIVA \ - ) \ - ) -/* extract IDI Descriptor info for card type 't' (p == DescType/Features) */ -#define IDI_PROP(t, p) (CardProperties[t].p) -#if CARDTYPE_H_WANT_DATA -#if CARDTYPE_H_WANT_IDI_DATA -/* include "di_defs.h" for IDI adapter type and feature flag definitions */ -#include "di_defs.h" -#else /*!CARDTYPE_H_WANT_IDI_DATA*/ -/* define IDI adapter types and feature flags here to prevent inclusion */ -#ifndef IDI_ADAPTER_S -#define IDI_ADAPTER_S 1 -#define IDI_ADAPTER_PR 2 -#define IDI_ADAPTER_DIVA 3 -#define IDI_ADAPTER_MAESTRA 4 -#endif -#ifndef DI_VOICE -#define DI_VOICE 0x0 /* obsolete define */ -#define DI_FAX3 0x1 -#define DI_MODEM 0x2 -#define DI_POST 0x4 -#define DI_V110 0x8 -#define DI_V120 0x10 -#define DI_POTS 0x20 -#define DI_CODEC 0x40 -#define DI_MANAGE 0x80 -#define DI_V_42 0x0100 -#define DI_EXTD_FAX 0x0200 /* Extended FAX (ECM, 2D, T.6, Polling) */ -#define DI_AT_PARSER 0x0400 /* Build-in AT Parser in the L2 */ -#define DI_VOICE_OVER_IP 0x0800 /* Voice over IP support */ -#endif -#endif /*CARDTYPE_H_WANT_IDI_DATA*/ -#define DI_V1x0 (DI_V110 | DI_V120) -#define DI_NULL 0x0000 -#if defined(SOFT_DSP_SUPPORT) -#define SOFT_DSP_ADD_FEATURES (DI_MODEM | DI_FAX3 | DI_AT_PARSER) -#else -#define SOFT_DSP_ADD_FEATURES 0 -#endif -#if defined(SOFT_V110_SUPPORT) -#define DI_SOFT_V110 DI_V110 -#else -#define DI_SOFT_V110 0 -#endif -/*--- CardProperties [Index=CARDTYPE_....] 
---------------------------------*/ -CARD_PROPERTIES CardProperties[] = -{ - { /* 0 */ - "Diva MCA", 0x6336, 0x0100, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3, - CARD_DIVA, CARD_I_NONE, BUS_MCA, CHIP_DSP, - 1, 2, 0, 8, 0 - }, - { /* 1 */ - "Diva ISA", 0x0000, 0x0100, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3, - CARD_DIVA, CARD_I_NONE, BUS_ISA, CHIP_DSP, - 1, 2, 0, 8, 0 - }, - { /* 2 */ - "Diva/PCM", 0x0000, 0x0100, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3, - CARD_DIVA, CARD_I_NONE, BUS_PCM, CHIP_DSP, - 1, 2, 0, 8, 0 - }, - { /* 3 */ - "Diva PRO ISA", 0x0031, 0x0100, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_CODEC, - CARD_PRO, CARD_I_NONE, BUS_ISA, CHIP_DSP, - 1, 2, 0, 8, 0 - }, - { /* 4 */ - "Diva PRO PC-Card", 0x0000, 0x0100, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM, - CARD_PRO, CARD_I_NONE, BUS_PCM, CHIP_DSP, - 1, 2, 0, 8, 0 - }, - { /* 5 */ - "Diva PICCOLA ISA", 0x0051, 0x0100, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES, - CARD_PICO, CARD_I_NONE, BUS_ISA, CHIP_HSCX, - 1, 2, 0, 8, 0 - }, - { /* 6 */ - "Diva PICCOLA PCM", 0x0000, 0x0100, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES, - CARD_PICO, CARD_I_NONE, BUS_PCM, CHIP_HSCX, - 1, 2, 0, 8, 0 - }, - { /* 7 */ - "Diva PRO 2.0 S/T PCI", 0xe001, 0x0200, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_POTS, - CARD_PRO, CARD_I_NONE, BUS_PCI, CHIP_DSP, - 1, 2, 0, 8, 0 - }, - { /* 8 */ - "Diva 2.0 S/T PCI", 0xe002, 0x0200, - IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | DI_POTS | SOFT_DSP_ADD_FEATURES, - CARD_PICO, CARD_I_NONE, BUS_PCI, CHIP_HSCX, - 1, 2, 0, 8, 0 - }, - { /* 9 */ - "QUADRO ISA", 0x0000, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_NULL, - CARD_QUAD, CARD_I_QUAD, BUS_ISA, CHIP_NONE, - 4, 2, 16, 0, 0x800 - }, - { /* 10 */ - "S ISA", 0x0000, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_CODEC, - CARD_S, CARD_I_S, BUS_ISA, CHIP_NONE, - 1, 1, 16, 0, 0x800 - }, - { /* 11 */ - "S MCA", 0x6a93, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_CODEC, - CARD_S, CARD_I_S, BUS_MCA, CHIP_NONE, - 1, 1, 16, 16, 0x400 - }, - { /* 12 */ - "SX ISA", 0x0000, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_NULL, - CARD_SX, CARD_I_SX, BUS_ISA, CHIP_NONE, - 1, 2, 16, 0, 0x800 - }, - { /* 13 */ - "SX MCA", 0x6a93, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_NULL, - CARD_SX, CARD_I_SX, BUS_MCA, CHIP_NONE, - 1, 2, 16, 16, 0x400 - }, - { /* 14 */ - "SXN ISA", 0x0000, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_NULL, - CARD_SXN, CARD_I_SCOM, BUS_ISA, CHIP_NONE, - 1, 2, 16, 0, 0x800 - }, - { /* 15 */ - "SXN MCA", 0x6a93, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_NULL, - CARD_SXN, CARD_I_SCOM, BUS_MCA, CHIP_NONE, - 1, 2, 16, 16, 0x400 - }, - { /* 16 */ - "SCOM ISA", 0x0000, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_CODEC, - CARD_SCOM, CARD_I_SCOM, BUS_ISA, CHIP_NONE, - 1, 2, 16, 0, 0x800 - }, - { /* 17 */ - "SCOM MCA", 0x6a93, 0x0100, - IDI_ADAPTER_S, FAMILY_S, DI_CODEC, - CARD_SCOM, CARD_I_SCOM, BUS_MCA, CHIP_NONE, - 1, 2, 16, 16, 0x400 - }, - { /* 18 */ - "S2M ISA", 0x0000, 0x0100, - IDI_ADAPTER_PR, FAMILY_S, DI_NULL, - CARD_PR, CARD_I_PR, BUS_ISA, CHIP_NONE, - 1, 30, 256, 0, 0x4000 - }, - { /* 19 */ - "S2M MCA", 0x6abb, 0x0100, - IDI_ADAPTER_PR, FAMILY_S, DI_NULL, - CARD_PR, CARD_I_PR, BUS_MCA, CHIP_NONE, - 1, 30, 256, 16, 0x4000 - }, - { /* 20 */ - "Diva Server BRI-2M ISA", 0x0041, 0x0100, - IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM, - CARD_MAE, CARD_I_NONE, BUS_ISA, CHIP_DSP, - 1, 2, 16, 8, 0 - }, - { /* 21 */ - "Diva Server BRI-2M PCI", 0xE010, 0x0100, - 
IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAE, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 16, 8, 0
-	},
-	{ /* 22 */
-	  "Diva Server 4BRI-8M PCI", 0xE012, 0x0100,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAEQ, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  4, 2, 16, 8, 0
-	},
-	{ /* 23 */
-	  "Diva Server PRI-30M PCI", 0xE014, 0x0100,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAEP, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 30, 256, 8, 0
-	},
-	{ /* 24 */
-	  "Diva Server PRI-2M PCI", 0xe014, 0x0100,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAEP, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 30, 256, 8, 0
-	},
-	{ /* 25 */
-	  "Diva Server PRI-9M PCI", 0x0000, 0x0100,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAEP, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 30, 256, 8, 0
-	},
-	{ /* 26 */
-	  "Diva 2.0 S/T ISA", 0x0071, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | DI_POTS | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_ISA, CHIP_HSCX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 27 */
-	  "Diva 2.0 U ISA", 0x0091, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | DI_POTS | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_ISA, CHIP_HSCX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 28 */
-	  "Diva 2.0 U PCI", 0xe004, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | DI_POTS | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_PCI, CHIP_HSCX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 29 */
-	  "Diva PRO 2.0 S/T ISA", 0x0061, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_POTS,
-	  CARD_PRO, CARD_I_NONE, BUS_ISA, CHIP_DSP,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 30 */
-	  "Diva PRO 2.0 U ISA", 0x0081, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_POTS,
-	  CARD_PRO, CARD_I_NONE, BUS_ISA, CHIP_DSP,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 31 */
-	  "Diva PRO 2.0 U PCI", 0xe003, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_POTS,
-	  CARD_PRO, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 32 */
-	  "Diva MOBILE", 0x0000, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_PCM, CHIP_HSCX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 33 */
-	  "TDK DFI3600", 0x0000, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_PCM, CHIP_HSCX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 34 (OEM version of 4 - "Diva PRO PC-Card") */
-	  "New Media ISDN", 0x0000, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_PRO, CARD_I_NONE, BUS_PCM, CHIP_DSP,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 35 (OEM version of 7 - "Diva PRO 2.0 S/T PCI") */
-	  "BT ExLane PCI", 0xe101, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_POTS,
-	  CARD_PRO, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 36 (OEM version of 29 - "Diva PRO 2.0 S/T ISA") */
-	  "BT ExLane ISA", 0x1061, 0x0200,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_POTS,
-	  CARD_PRO, CARD_I_NONE, BUS_ISA, CHIP_DSP,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 37 */
-	  "Diva 2.01 S/T ISA", 0x00A1, 0x0300,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_ISA, CHIP_IPAC,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 38 */
-	  "Diva 2.01 U ISA", 0x00B1, 0x0300,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_ISA, CHIP_IPAC,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 39 */
-	  "Diva 2.01 S/T PCI", 0xe005, 0x0300,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_PCI, CHIP_IPAC,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 40 no ID yet */
-	  "Diva 2.01 U PCI", 0x0000, 0x0300,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_PCI, CHIP_IPAC,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 41 */
-	  "Diva MOBILE V.90", 0x0000, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_PCM, CHIP_HSCX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 42 */
-	  "TDK DFI3600 V.90", 0x0000, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_PCM, CHIP_HSCX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 43 */
-	  "Diva Server PRI-23M PCI", 0xe014, 0x0100,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAEP, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 30, 256, 8, 0
-	},
-	{ /* 44 */
-	  "Diva 2.01 S/T USB", 0x1000, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_USB, CHIP_IPAC,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 45 */
-	  "Diva CT S/T PCI", 0xe006, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_CODEC,
-	  CARD_CT, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 0, 0
-	},
-	{ /* 46 */
-	  "Diva CT U PCI", 0xe007, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_CODEC,
-	  CARD_CT, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 0, 0
-	},
-	{ /* 47 */
-	  "Diva CT Lite S/T PCI", 0xe008, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_CODEC,
-	  CARD_CT, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 0, 0
-	},
-	{ /* 48 */
-	  "Diva CT Lite U PCI", 0xe009, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_CODEC,
-	  CARD_CT, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 0, 0
-	},
-	{ /* 49 */
-	  "Diva ISDN+V.90 PC Card", 0x8D8C, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_CODEC,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_PCM, CHIP_IPAC,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 50 */
-	  "Diva ISDN+V.90 PCI", 0xe00A, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_PCI, CHIP_IPAC,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 51 (DivaTA) no ID */
-	  "Diva TA", 0x0000, 0x0300,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V110 | DI_FAX3 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVATA, CARD_I_NONE, BUS_COM, CHIP_EXTERN,
-	  1, 1, 0, 8, 0
-	},
-	{ /* 52 (Diva Server 4BRI-8M PCI adapter enabled for Voice) */
-	  "Diva Server Voice 4BRI-8M PCI", 0xE016, 0x0100,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_VOICE_OVER_IP,
-	  CARD_MAEQ, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  4, 2, 16, 8, 0
-	},
-	{ /* 53 (Diva Server 4BRI 2.0 adapter) */
-	  "Diva Server 4BRI-8M 2.0 PCI", 0xE013, 0x0200,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAEQ, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  4, 2, 16, 8, 0
-	},
-	{ /* 54 (Diva Server PRI 2.0 adapter) */
-	  "Diva Server PRI 2.0 PCI", 0xE015, 0x0200,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAEP, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 30, 256, 8, 0
-	},
-	{ /* 55 (Diva Server 4BRI-8M 2.0 PCI adapter enabled for Voice) */
-	  "Diva Server Voice 4BRI-8M 2.0 PCI", 0xE017, 0x0200,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_VOICE_OVER_IP,
-	  CARD_MAEQ, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  4, 2, 16, 8, 0
-	},
-	{ /* 56 (Diva Server PRI 2.0 PCI adapter enabled for Voice) */
-	  "Diva Server Voice PRI 2.0 PCI", 0xE019, 0x0200,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_VOICE_OVER_IP,
-	  CARD_MAEP, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 30, 256, 8, 0
-	},
-	{
-		/* 57 (DivaLan ) no ID */
-		"Diva LAN", 0x0000, 0x0300,
-		IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V110 | DI_FAX3 | SOFT_DSP_ADD_FEATURES,
-		CARD_DIVALAN, CARD_I_NONE, BUS_LAN, CHIP_EXTERN,
-		1, 1, 0, 8, 0
-	},
-	{ /* 58 */
-	  "Diva 2.02 PCI S/T", 0xE00B, 0x0300,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES | DI_SOFT_V110,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_PCI, CHIP_IPACX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 59 */
-	  "Diva 2.02 PCI U", 0xE00C, 0x0300,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_PCI, CHIP_IPACX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 60 */
-	  "Diva Server BRI-2M 2.0 PCI", 0xE018, 0x0200,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_MAE2, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 16, 8, 0
-	},
-	{ /* 61 (the previous name was Diva Server BRI-2F 2.0 PCI) */
-	  "Diva Server 2FX", 0xE01A, 0x0200,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_SOFT_V110,
-	  CARD_MAE2, CARD_I_NONE, BUS_PCI, CHIP_IPACX,
-	  1, 2, 16, 8, 0
-	},
-	{ /* 62 */
-	  " Diva ISDN USB 2.0", 0x1003, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_DIVALOW, CARD_I_NONE, BUS_USB, CHIP_IPACX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 63 (Diva Server BRI-2M 2.0 PCI adapter enabled for Voice) */
-	  "Diva Server Voice BRI-2M 2.0 PCI", 0xE01B, 0x0200,
-	  IDI_ADAPTER_MAESTRA, FAMILY_MAESTRA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_VOICE_OVER_IP,
-	  CARD_MAE2, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 16, 8, 0
-	},
-	{ /* 64 */
-	  "Diva Pro 3.0 PCI", 0xe00d, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM,
-	  CARD_PRO, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 0, 0
-	},
-	{ /* 65 */
-	  "Diva ISDN + CT 2.0", 0xE00E, 0x0300,
-	  IDI_ADAPTER_DIVA , FAMILY_DIVA, DI_V1x0 | DI_FAX3 | DI_MODEM | DI_CODEC,
-	  CARD_CT, CARD_I_NONE, BUS_PCI, CHIP_DSP,
-	  1, 2, 0, 0, 0
-	},
-	{ /* 66 */
-	  "Diva Mobile V.90 PC Card", 0x8331, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_PCM, CHIP_IPACX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 67 */
-	  "Diva ISDN PC Card", 0x8311, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_PICO, CARD_I_NONE, BUS_PCM, CHIP_IPACX,
-	  1, 2, 0, 8, 0
-	},
-	{ /* 68 */
-	  "Diva ISDN PC Card", 0x0000, 0x0100,
-	  IDI_ADAPTER_DIVA, FAMILY_DIVA, DI_V120 | SOFT_DSP_ADD_FEATURES,
-	  CARD_PRO, CARD_I_NONE, BUS_PCM, CHIP_DSP,
-	  1, 2, 0, 8, 0
-	},
-};
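[The CardProperties[] table being deleted above maps each card-type ordinal to a product name, PnP/PCI device ID, hardware revision, adapter family, feature bits, bus and chip type, and controller/channel counts. A minimal sketch of how such a table is typically consumed follows; the struct layout and field names are illustrative assumptions, not the driver's actual CARD_PROPERTIES definition.]

	#include <stddef.h>

	/* Hypothetical, simplified mirror of one table row. */
	typedef struct {
		const char *name;       /* product name */
		unsigned short id;      /* device id; 0x0000 = none assigned */
		unsigned short version; /* hardware revision */
		int controllers;        /* e.g. 4 for the 4BRI cards */
		int channels;           /* B-channels per controller */
	} card_properties_t;

	/* Linear scan by device id; the table index itself is the
	 * card-type ordinal used elsewhere (CARDTYPE_...). */
	static const card_properties_t *
	card_lookup(const card_properties_t *tab, size_t n, unsigned short id)
	{
		size_t i;
		for (i = 0; i < n; i++)
			if (id && tab[i].id == id)
				return &tab[i];
		return NULL; /* unknown or unassigned id */
	}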
-#if CARDTYPE_H_WANT_RESOURCE_DATA
-/*--- CardResource [Index=CARDTYPE_....] ---------------------------(GEI)-*/
-CARD_RESOURCE CardResource[] = {
-/* Interrupts                    IO-Address     Mem-Address */
-	/* 0*/ { 3,4,9,0,0,0,0,0,0,0,        0x200,0x20,16,  0x0,0x0,0 },         // DIVA MCA
-	/* 1*/ { 3,4,9,10,11,12,0,0,0,0,     0x200,0x20,16,  0x0,0x0,0 },         // DIVA ISA
-	/* 2*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA PCMCIA
-	/* 3*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA PRO ISA
-	/* 4*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA PRO PCMCIA
-	/* 5*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA PICCOLA ISA
-	/* 6*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA PICCOLA PCMCIA
-	/* 7*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA PRO 2.0 PCI
-	/* 8*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA 2.0 PCI
-	/* 9*/ { 3,4,5,7,9,10,11,12,0,0,     0x0,0x0,0,      0x80000,0x2000,64 }, // QUADRO ISA
-	/*10*/ { 3,4,9,10,11,12,0,0,0,0,     0x0,0x0,0,      0xc0000,0x2000,16 }, // S ISA
-	/*11*/ { 3,4,9,0,0,0,0,0,0,0,        0xc00,0x10,16,  0xc0000,0x2000,16 }, // S MCA
-	/*12*/ { 3,4,9,10,11,12,0,0,0,0,     0x0,0x0,0,      0xc0000,0x2000,16 }, // SX ISA
-	/*13*/ { 3,4,9,0,0,0,0,0,0,0,        0xc00,0x10,16,  0xc0000,0x2000,16 }, // SX MCA
-	/*14*/ { 3,4,5,7,9,10,11,12,0,0,     0x0,0x0,0,      0x80000,0x0800,256 }, // SXN ISA
-	/*15*/ { 3,4,9,0,0,0,0,0,0,0,        0xc00,0x10,16,  0xc0000,0x2000,16 }, // SXN MCA
-	/*16*/ { 3,4,5,7,9,10,11,12,0,0,     0x0,0x0,0,      0x80000,0x0800,256 }, // SCOM ISA
-	/*17*/ { 3,4,9,0,0,0,0,0,0,0,        0xc00,0x10,16,  0xc0000,0x2000,16 }, // SCOM MCA
-	/*18*/ { 3,4,5,7,9,10,11,12,0,0,     0x0,0x0,0,      0xc0000,0x4000,16 }, // S2M ISA
-	/*19*/ { 3,4,9,0,0,0,0,0,0,0,        0xc00,0x10,16,  0xc0000,0x4000,16 }, // S2M MCA
-	/*20*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // MAESTRA ISA
-	/*21*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // MAESTRA PCI
-	/*22*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // MAESTRA QUADRO ISA
-	/*23*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x20,2048,  0x0,0x0,0 },         // MAESTRA QUADRO PCI
-	/*24*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // MAESTRA PRIMARY ISA
-	/*25*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // MAESTRA PRIMARY PCI
-	/*26*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA 2.0 ISA
-	/*27*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA 2.0 /U ISA
-	/*28*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA 2.0 /U PCI
-	/*29*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA PRO 2.0 ISA
-	/*30*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA PRO 2.0 /U ISA
-	/*31*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA PRO 2.0 /U PCI
-	/*32*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA MOBILE
-	/*33*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // TDK DFI3600 (same as DIVA MOBILE [32])
-	/*34*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // New Media ISDN (same as DIVA PRO PCMCIA [4])
-	/*35*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // BT ExLane PCI (same as DIVA PRO 2.0 PCI [7])
-	/*36*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // BT ExLane ISA (same as DIVA PRO 2.0 ISA [29])
-	/*37*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA 2.01 S/T ISA
-	/*38*/ { 3,5,7,9,10,11,12,14,15,0,   0x200,0x20,16,  0x0,0x0,0 },         // DIVA 2.01 U ISA
-	/*39*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA 2.01 S/T PCI
-	/*40*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA 2.01 U PCI
-	/*41*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA MOBILE V.90
-	/*42*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // TDK DFI3600 V.90 (same as DIVA MOBILE V.90 [39])
-	/*43*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x20,2048,  0x0,0x0,0 },         // DIVA Server PRI-23M PCI
-	/*44*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA 2.01 S/T USB
-	/*45*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA CT S/T PCI
-	/*46*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA CT U PCI
-	/*47*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA CT Lite S/T PCI
-	/*48*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA CT Lite U PCI
-	/*49*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA ISDN+V.90 PC Card
-	/*50*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA ISDN+V.90 PCI
-	/*51*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA TA
-	/*52*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x20,2048,  0x0,0x0,0 },         // MAESTRA VOICE QUADRO PCI
-	/*53*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x20,2048,  0x0,0x0,0 },         // MAESTRA VOICE QUADRO PCI
-	/*54*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // MAESTRA VOICE PRIMARY PCI
-	/*55*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x20,2048,  0x0,0x0,0 },         // MAESTRA VOICE QUADRO PCI
-	/*56*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // MAESTRA VOICE PRIMARY PCI
-	/*57*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA LAN
-	/*58*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA 2.02 S/T PCI
-	/*59*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA 2.02 U PCI
-	/*60*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // Diva Server BRI-2M 2.0 PCI
-	/*61*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // Diva Server BRI-2F PCI
-	/*62*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA 2.01 S/T USB
-	/*63*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // Diva Server Voice BRI-2M 2.0 PCI
-	/*64*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA 3.0 PCI
-	/*65*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA CT S/T PCI V2.0
-	/*66*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA Mobile V.90 PC Card
-	/*67*/ { 0,0,0,0,0,0,0,0,0,0,        0x0,0x0,0,      0x0,0x0,0 },         // DIVA ISDN PC Card
-	/*68*/ { 3,4,5,7,9,10,11,12,14,15,   0x0,0x8,8192,   0x0,0x0,0 },         // DIVA ISDN PC Card
-};
-#endif /*CARDTYPE_H_WANT_RESOURCE_DATA*/
-#else /*!CARDTYPE_H_WANT_DATA*/
-extern CARD_PROPERTIES CardProperties[];
-extern CARD_RESOURCE CardResource[];
-#endif /*CARDTYPE_H_WANT_DATA*/
-/*
- * all existing download files
- */
-#define CARD_DSP_CNT 5
-#define CARD_PROT_CNT 9
-#define CARD_FT_UNKNOWN 0
-#define CARD_FT_B 1
-#define CARD_FT_D 2
-#define CARD_FT_S 3
-#define CARD_FT_M 4
-#define CARD_FT_NEW_DSP_COMBIFILE 5 /* File format of new DSP code (the DSP code powered by Telindus) */
-#define CARD_FILE_NONE 0
-#define CARD_B_S 1
-#define CARD_B_P 2
-#define CARD_D_K1 3
-#define CARD_D_K2 4
-#define CARD_D_H 5
-#define CARD_D_V 6
-#define CARD_D_M 7
-#define CARD_D_F 8
-#define CARD_P_S_E 9
-#define CARD_P_S_1 10
-#define CARD_P_S_B 11
-#define CARD_P_S_F 12
-#define CARD_P_S_A 13
-#define CARD_P_S_N 14
-#define CARD_P_S_5 15
-#define CARD_P_S_J 16
-#define CARD_P_SX_E 17
-#define CARD_P_SX_1 18
-#define CARD_P_SX_B 19
-#define CARD_P_SX_F 20
-#define CARD_P_SX_A 21
-#define CARD_P_SX_N 22
-#define CARD_P_SX_5 23
-#define CARD_P_SX_J 24
-#define CARD_P_SY_E 25
-#define CARD_P_SY_1 26
-#define CARD_P_SY_B 27
-#define CARD_P_SY_F 28
-#define CARD_P_SY_A 29
-#define CARD_P_SY_N 30
-#define CARD_P_SY_5 31
-#define CARD_P_SY_J 32
-#define CARD_P_SQ_E 33
-#define CARD_P_SQ_1 34
-#define CARD_P_SQ_B 35
-#define CARD_P_SQ_F 36
-#define CARD_P_SQ_A 37
-#define CARD_P_SQ_N 38
-#define CARD_P_SQ_5 39
-#define CARD_P_SQ_J 40
-#define CARD_P_P_E 41
-#define CARD_P_P_1 42
-#define CARD_P_P_B 43
-#define CARD_P_P_F 44
-#define CARD_P_P_A 45
-#define CARD_P_P_N 46
-#define CARD_P_P_5 47
-#define CARD_P_P_J 48
-#define CARD_P_M_E 49
-#define CARD_P_M_1 50
-#define CARD_P_M_B 51
-#define CARD_P_M_F 52
-#define CARD_P_M_A 53
-#define CARD_P_M_N 54
-#define CARD_P_M_5 55
-#define CARD_P_M_J 56
-#define CARD_P_S_S 57
-#define CARD_P_SX_S 58
-#define CARD_P_SY_S 59
-#define CARD_P_SQ_S 60
-#define CARD_P_P_S 61
-#define CARD_P_M_S 62
-#define CARD_D_NEW_DSP_COMBIFILE 63
-typedef struct CARD_FILES_DATA
-{
-	char *Name;
-	unsigned char Type;
-}
-	CARD_FILES_DATA;
-typedef struct CARD_FILES
-{
-	unsigned char Boot;
-	unsigned char Dsp[CARD_DSP_CNT];
-	unsigned char DspTelindus;
-	unsigned char Prot[CARD_PROT_CNT];
-}
-	CARD_FILES;
-#if CARDTYPE_H_WANT_DATA
-#if CARDTYPE_H_WANT_FILE_DATA
-CARD_FILES_DATA CardFData[] = {
-// Filename     Filetype
-	0,              CARD_FT_UNKNOWN,
-	"didnload.bin", CARD_FT_B,
-	"diprload.bin", CARD_FT_B,
-	"didiva.bin",   CARD_FT_D,
-	"didivapp.bin", CARD_FT_D,
-	"dihscx.bin",   CARD_FT_D,
-	"div110.bin",   CARD_FT_D,
-	"dimodem.bin",  CARD_FT_D,
-	"difax.bin",    CARD_FT_D,
-	"di_etsi.bin",  CARD_FT_S,
-	"di_1tr6.bin",  CARD_FT_S,
-	"di_belg.bin",  CARD_FT_S,
-	"di_franc.bin", CARD_FT_S,
-	"di_atel.bin",  CARD_FT_S,
-	"di_ni.bin",    CARD_FT_S,
-	"di_5ess.bin",  CARD_FT_S,
-	"di_japan.bin", CARD_FT_S,
-	"di_etsi.sx",   CARD_FT_S,
-	"di_1tr6.sx",   CARD_FT_S,
-	"di_belg.sx",   CARD_FT_S,
-	"di_franc.sx",  CARD_FT_S,
-	"di_atel.sx",   CARD_FT_S,
-	"di_ni.sx",     CARD_FT_S,
-	"di_5ess.sx",   CARD_FT_S,
-	"di_japan.sx",  CARD_FT_S,
-	"di_etsi.sy",   CARD_FT_S,
-	"di_1tr6.sy",   CARD_FT_S,
-	"di_belg.sy",   CARD_FT_S,
-	"di_franc.sy",  CARD_FT_S,
-	"di_atel.sy",   CARD_FT_S,
-	"di_ni.sy",     CARD_FT_S,
-	"di_5ess.sy",   CARD_FT_S,
-	"di_japan.sy",  CARD_FT_S,
-	"di_etsi.sq",   CARD_FT_S,
-	"di_1tr6.sq",   CARD_FT_S,
-	"di_belg.sq",   CARD_FT_S,
-	"di_franc.sq",  CARD_FT_S,
-	"di_atel.sq",   CARD_FT_S,
-	"di_ni.sq",     CARD_FT_S,
-	"di_5ess.sq",   CARD_FT_S,
-	"di_japan.sq",  CARD_FT_S,
-	"di_etsi.p",    CARD_FT_S,
-	"di_1tr6.p",    CARD_FT_S,
-	"di_belg.p",    CARD_FT_S,
-	"di_franc.p",   CARD_FT_S,
-	"di_atel.p",    CARD_FT_S,
-	"di_ni.p",      CARD_FT_S,
-	"di_5ess.p",    CARD_FT_S,
-	"di_japan.p",   CARD_FT_S,
-	"di_etsi.sm",   CARD_FT_M,
-	"di_1tr6.sm",   CARD_FT_M,
-	"di_belg.sm",   CARD_FT_M,
-	"di_franc.sm",  CARD_FT_M,
-	"di_atel.sm",   CARD_FT_M,
-	"di_ni.sm",     CARD_FT_M,
-	"di_5ess.sm",   CARD_FT_M,
-	"di_japan.sm",  CARD_FT_M,
-	"di_swed.bin",  CARD_FT_S,
-	"di_swed.sx",   CARD_FT_S,
-	"di_swed.sy",   CARD_FT_S,
-	"di_swed.sq",   CARD_FT_S,
-	"di_swed.p",    CARD_FT_S,
-	"di_swed.sm",   CARD_FT_M,
-	"didspdld.bin", CARD_FT_NEW_DSP_COMBIFILE
-};
-CARD_FILES CardFiles[] =
-{
-	{ /* CARD_UNKNOWN */
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE
-	},
-	{ /* CARD_DIVA */
-		CARD_FILE_NONE,
-		CARD_D_K1, CARD_D_H, CARD_D_V, CARD_FILE_NONE, CARD_D_F,
-		CARD_D_NEW_DSP_COMBIFILE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE
-	},
-	{ /* CARD_PRO */
-		CARD_FILE_NONE,
-		CARD_D_K2, CARD_D_H, CARD_D_V, CARD_D_M, CARD_D_F,
-		CARD_D_NEW_DSP_COMBIFILE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE
-	},
-	{ /* CARD_PICO */
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE
-	},
-	{ /* CARD_S */
-		CARD_B_S,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_P_S_E, CARD_P_S_1, CARD_P_S_B, CARD_P_S_F,
-		CARD_P_S_A, CARD_P_S_N, CARD_P_S_5, CARD_P_S_J,
-		CARD_P_S_S
-	},
-	{ /* CARD_SX */
-		CARD_B_S,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_P_SX_E, CARD_P_SX_1, CARD_P_SX_B, CARD_P_SX_F,
-		CARD_P_SX_A, CARD_P_SX_N, CARD_P_SX_5, CARD_P_SX_J,
-		CARD_P_SX_S
-	},
-	{ /* CARD_SXN */
-		CARD_B_S,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_P_SY_E, CARD_P_SY_1, CARD_P_SY_B, CARD_P_SY_F,
-		CARD_P_SY_A, CARD_P_SY_N, CARD_P_SY_5, CARD_P_SY_J,
-		CARD_P_SY_S
-	},
-	{ /* CARD_SCOM */
-		CARD_B_S,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_P_SY_E, CARD_P_SY_1, CARD_P_SY_B, CARD_P_SY_F,
-		CARD_P_SY_A, CARD_P_SY_N, CARD_P_SY_5, CARD_P_SY_J,
-		CARD_P_SY_S
-	},
-	{ /* CARD_QUAD */
-		CARD_B_S,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_P_SQ_E, CARD_P_SQ_1, CARD_P_SQ_B, CARD_P_SQ_F,
-		CARD_P_SQ_A, CARD_P_SQ_N, CARD_P_SQ_5, CARD_P_SQ_J,
-		CARD_P_SQ_S
-	},
-	{ /* CARD_PR */
-		CARD_B_P,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_P_P_E, CARD_P_P_1, CARD_P_P_B, CARD_P_P_F,
-		CARD_P_P_A, CARD_P_P_N, CARD_P_P_5, CARD_P_P_J,
-		CARD_P_P_S
-	},
-	{ /* CARD_MAE */
-		CARD_FILE_NONE,
-		CARD_D_K2, CARD_D_H, CARD_D_V, CARD_D_M, CARD_D_F,
-		CARD_D_NEW_DSP_COMBIFILE,
-		CARD_P_M_E, CARD_P_M_1, CARD_P_M_B, CARD_P_M_F,
-		CARD_P_M_A, CARD_P_M_N, CARD_P_M_5, CARD_P_M_J,
-		CARD_P_M_S
-	},
-	{ /* CARD_MAEQ */ /* currently not supported */
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE
-	},
-	{ /* CARD_MAEP */ /* currently not supported */
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE, CARD_FILE_NONE,
-		CARD_FILE_NONE
-	}
-};
-#endif /*CARDTYPE_H_WANT_FILE_DATA*/
-#else /*!CARDTYPE_H_WANT_DATA*/
-extern CARD_FILES_DATA CardFData[];
-extern CARD_FILES CardFiles[];
-#endif /*CARDTYPE_H_WANT_DATA*/
-#endif /* _CARDTYPE_H_ */
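[cardtype.h, deleted above, uses the classic "define once, declare everywhere else" header idiom: the CARDTYPE_H_WANT_DATA / CARDTYPE_H_WANT_FILE_DATA macros select between emitting the array definitions (in exactly one translation unit) and emitting extern declarations for every other includer. A minimal sketch of the same pattern, with placeholder names:]

	/* table.h -- sketch of the CARDTYPE_H_WANT_DATA idiom */
	#ifndef _TABLE_H_
	#define _TABLE_H_

	typedef struct { int id; } ENTRY;

	#if TABLE_H_WANT_DATA        /* defined to 1 by exactly one .c file */
	ENTRY Table[] = { { 1 }, { 2 }, { 3 } };
	#else                        /* all other includers see declarations */
	extern ENTRY Table[];
	#endif

	#endif /* _TABLE_H_ */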
diff --git a/drivers/isdn/hardware/eicon/cp_vers.h b/drivers/isdn/hardware/eicon/cp_vers.h
deleted file mode 100644
index c97230c60e71..000000000000
--- a/drivers/isdn/hardware/eicon/cp_vers.h
+++ /dev/null
@@ -1,26 +0,0 @@
-
-/*
- *
-  Copyright (c) Eicon Networks, 2002.
- *
-  This source file is supplied for the use with
-  Eicon Networks range of DIVA Server Adapters.
- *
-  Eicon File Revision : 2.1
- *
-  This program is free software; you can redistribute it and/or modify
-  it under the terms of the GNU General Public License as published by
-  the Free Software Foundation; either version 2, or (at your option)
-  any later version.
- *
-  This program is distributed in the hope that it will be useful,
-  but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY
-  implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-  See the GNU General Public License for more details.
- *
-  You should have received a copy of the GNU General Public License
-  along with this program; if not, write to the Free Software
-  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- */
-static char diva_capi_common_code_build[] = "102-28";
diff --git a/drivers/isdn/hardware/eicon/dadapter.c b/drivers/isdn/hardware/eicon/dadapter.c
deleted file mode 100644
index 51420999418d..000000000000
--- a/drivers/isdn/hardware/eicon/dadapter.c
+++ /dev/null
@@ -1,364 +0,0 @@
-/*
- *
-  Copyright (c) Eicon Networks, 2002.
- *
-  This source file is supplied for the use with
-  Eicon Networks range of DIVA Server Adapters.
- *
-  Eicon File Revision : 2.1
- *
-  This program is free software; you can redistribute it and/or modify
-  it under the terms of the GNU General Public License as published by
-  the Free Software Foundation; either version 2, or (at your option)
-  any later version.
- *
-  This program is distributed in the hope that it will be useful,
-  but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY
-  implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-  See the GNU General Public License for more details.
- *
-  You should have received a copy of the GNU General Public License
-  along with this program; if not, write to the Free Software
-  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- */
-#include "platform.h"
-#include "pc.h"
-#include "debuglib.h"
-#include "di_defs.h"
-#include "divasync.h"
-#include "dadapter.h"
-/* --------------------------------------------------------------------------
-   Adapter array change notification framework
-   -------------------------------------------------------------------------- */
-typedef struct _didd_adapter_change_notification {
-	didd_adapter_change_callback_t callback;
-	void IDI_CALL_ENTITY_T *context;
-} didd_adapter_change_notification_t, \
-	* IDI_CALL_ENTITY_T pdidd_adapter_change_notification_t;
-#define DIVA_DIDD_MAX_NOTIFICATIONS 256
-static didd_adapter_change_notification_t \
-NotificationTable[DIVA_DIDD_MAX_NOTIFICATIONS];
-/* --------------------------------------------------------------------------
-   Array to held adapter information
-   -------------------------------------------------------------------------- */
-static DESCRIPTOR HandleTable[NEW_MAX_DESCRIPTORS];
-static dword Adapters = 0; /* Number of adapters */
-/* --------------------------------------------------------------------------
-   Shadow IDI_DIMAINT
-   and 'shadow' debug stuff
-   -------------------------------------------------------------------------- */
-static void no_printf(unsigned char *format, ...)
-{
-#ifdef EBUG
-	va_list ap;
-	va_start(ap, format);
-	debug((format, ap));
-	va_end(ap);
-#endif
-}
-
-/* -------------------------------------------------------------------------
-   Portable debug Library
-   ------------------------------------------------------------------------- */
-#include "debuglib.c"
-
-static DESCRIPTOR MAdapter = {IDI_DIMAINT, /* Adapter Type */
-			      0x00,        /* Channels */
-			      0x0000,      /* Features */
-			      (IDI_CALL)no_printf};
-/* --------------------------------------------------------------------------
-   DAdapter. Only IDI clients with buffer, that is huge enough to
-   get all descriptors will receive information about DAdapter
-   { byte type, byte channels, word features, IDI_CALL request }
-   -------------------------------------------------------------------------- */
-static void IDI_CALL_LINK_T diva_dadapter_request(ENTITY IDI_CALL_ENTITY_T *);
-static DESCRIPTOR DAdapter = {IDI_DADAPTER, /* Adapter Type */
-			      0x00,         /* Channels */
-			      0x0000,       /* Features */
-			      diva_dadapter_request };
-/* --------------------------------------------------------------------------
-   LOCALS
-   -------------------------------------------------------------------------- */
-static dword diva_register_adapter_callback(\
-	didd_adapter_change_callback_t callback,
-	void IDI_CALL_ENTITY_T *context);
-static void diva_remove_adapter_callback(dword handle);
-static void diva_notify_adapter_change(DESCRIPTOR *d, int removal);
-static diva_os_spin_lock_t didd_spin;
-/* --------------------------------------------------------------------------
-   Should be called as first step, after driver init
-   -------------------------------------------------------------------------- */
-void diva_didd_load_time_init(void) {
-	memset(&HandleTable[0], 0x00, sizeof(HandleTable));
-	memset(&NotificationTable[0], 0x00, sizeof(NotificationTable));
-	diva_os_initialize_spin_lock(&didd_spin, "didd");
-}
-/* --------------------------------------------------------------------------
-   Should be called as last step, if driver does unload
-   -------------------------------------------------------------------------- */
-void diva_didd_load_time_finit(void) {
-	diva_os_destroy_spin_lock(&didd_spin, "didd");
-}
-/* --------------------------------------------------------------------------
-   Called in order to register new adapter in adapter array
-   return adapter handle (> 0) on success
-   return -1 adapter array overflow
-   -------------------------------------------------------------------------- */
-static int diva_didd_add_descriptor(DESCRIPTOR *d) {
-	diva_os_spin_lock_magic_t irql;
-	int i;
-	if (d->type == IDI_DIMAINT) {
-		if (d->request) {
-			MAdapter.request = d->request;
-			dprintf = (DIVA_DI_PRINTF)d->request;
-			diva_notify_adapter_change(&MAdapter, 0); /* Inserted */
-			DBG_TRC(("DIMAINT registered, dprintf=%08x", d->request))
-		} else {
-			DBG_TRC(("DIMAINT removed"))
-			diva_notify_adapter_change(&MAdapter, 1); /* About to remove */
-			MAdapter.request = (IDI_CALL)no_printf;
-			dprintf = no_printf;
-		}
-		return (NEW_MAX_DESCRIPTORS);
-	}
-	for (i = 0; i < NEW_MAX_DESCRIPTORS; i++) {
-		diva_os_enter_spin_lock(&didd_spin, &irql, "didd_add");
-		if (HandleTable[i].type == 0) {
-			memcpy(&HandleTable[i], d, sizeof(*d));
-			Adapters++;
-			diva_os_leave_spin_lock(&didd_spin, &irql, "didd_add");
-			diva_notify_adapter_change(d, 0); /* we have new adapter */
-			DBG_TRC(("Add adapter[%d], request=%08x", (i + 1), d->request))
-			return (i + 1);
-		}
-		diva_os_leave_spin_lock(&didd_spin, &irql, "didd_add");
-	}
-	DBG_ERR(("Can't add adapter, out of resources"))
-	return (-1);
-}
-/* --------------------------------------------------------------------------
-   Called in order to remove one registered adapter from array
-   return adapter handle (> 0) on success
-   return 0 on success
-   -------------------------------------------------------------------------- */
-static int diva_didd_remove_descriptor(IDI_CALL request) {
-	diva_os_spin_lock_magic_t irql;
-	int i;
-	if (request == MAdapter.request) {
-		DBG_TRC(("DIMAINT removed"))
-		dprintf = no_printf;
-		diva_notify_adapter_change(&MAdapter, 1); /* About to remove */
-		MAdapter.request = (IDI_CALL)no_printf;
-		return (0);
-	}
-	for (i = 0; (Adapters && (i < NEW_MAX_DESCRIPTORS)); i++) {
-		if (HandleTable[i].request == request) {
-			diva_notify_adapter_change(&HandleTable[i], 1); /* About to remove */
-			diva_os_enter_spin_lock(&didd_spin, &irql, "didd_rm");
-			memset(&HandleTable[i], 0x00, sizeof(HandleTable[0]));
-			Adapters--;
-			diva_os_leave_spin_lock(&didd_spin, &irql, "didd_rm");
-			DBG_TRC(("Remove adapter[%d], request=%08x", (i + 1), request))
-			return (0);
-		}
-	}
-	DBG_ERR(("Invalid request=%08x, can't remove adapter", request))
-	return (-1);
-}
-/* --------------------------------------------------------------------------
-   Read adapter array
-   return 1 if not enough space to save all available adapters
-   -------------------------------------------------------------------------- */
-static int diva_didd_read_adapter_array(DESCRIPTOR *buffer, int length) {
-	diva_os_spin_lock_magic_t irql;
-	int src, dst;
-	memset(buffer, 0x00, length);
-	length /= sizeof(DESCRIPTOR);
-	DBG_TRC(("DIDD_Read, space = %d, Adapters = %d", length, Adapters + 2))
-
-	diva_os_enter_spin_lock(&didd_spin, &irql, "didd_read");
-	for (src = 0, dst = 0;
-	     (Adapters && (src < NEW_MAX_DESCRIPTORS) && (dst < length));
-	     src++) {
-		if (HandleTable[src].type) {
-			memcpy(&buffer[dst], &HandleTable[src], sizeof(DESCRIPTOR));
-			dst++;
-		}
-	}
-	diva_os_leave_spin_lock(&didd_spin, &irql, "didd_read");
-	if (dst < length) {
-		memcpy(&buffer[dst], &MAdapter, sizeof(DESCRIPTOR));
-		dst++;
-	} else {
-		DBG_ERR(("Can't write DIMAINT. Array too small"))
-	}
-	if (dst < length) {
-		memcpy(&buffer[dst], &DAdapter, sizeof(DESCRIPTOR));
-		dst++;
-	} else {
-		DBG_ERR(("Can't write DADAPTER. Array too small"))
-	}
-	DBG_TRC(("Read %d adapters", dst))
-	return (dst == length);
-}
-/* --------------------------------------------------------------------------
-   DAdapter request function.
-   This function does process only synchronous requests, and is used
-   for reception/registration of new interfaces
-   -------------------------------------------------------------------------- */
-static void IDI_CALL_LINK_T diva_dadapter_request( \
-	ENTITY IDI_CALL_ENTITY_T *e) {
-	IDI_SYNC_REQ *syncReq = (IDI_SYNC_REQ *)e;
-	if (e->Req) { /* We do not process it, also return error */
-		e->Rc = OUT_OF_RESOURCES;
-		DBG_ERR(("Can't process async request, Req=%02x", e->Req))
-		return;
-	}
-	/*
-	  So, we process sync request
-	*/
-	switch (e->Rc) {
-	case IDI_SYNC_REQ_DIDD_REGISTER_ADAPTER_NOTIFY: {
-		diva_didd_adapter_notify_t *pinfo = &syncReq->didd_notify.info;
-		pinfo->handle = diva_register_adapter_callback( \
-			(didd_adapter_change_callback_t)pinfo->callback,
-			(void IDI_CALL_ENTITY_T *)pinfo->context);
-		e->Rc = 0xff;
-	} break;
-	case IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER_NOTIFY: {
-		diva_didd_adapter_notify_t *pinfo = &syncReq->didd_notify.info;
-		diva_remove_adapter_callback(pinfo->handle);
-		e->Rc = 0xff;
-	} break;
-	case IDI_SYNC_REQ_DIDD_ADD_ADAPTER: {
-		diva_didd_add_adapter_t *pinfo = &syncReq->didd_add_adapter.info;
-		if (diva_didd_add_descriptor((DESCRIPTOR *)pinfo->descriptor) < 0) {
-			e->Rc = OUT_OF_RESOURCES;
-		} else {
-			e->Rc = 0xff;
-		}
-	} break;
-	case IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER: {
-		diva_didd_remove_adapter_t *pinfo = &syncReq->didd_remove_adapter.info;
-		if (diva_didd_remove_descriptor((IDI_CALL)pinfo->p_request) < 0) {
-			e->Rc = OUT_OF_RESOURCES;
-		} else {
-			e->Rc = 0xff;
-		}
-	} break;
-	case IDI_SYNC_REQ_DIDD_READ_ADAPTER_ARRAY: {
-		diva_didd_read_adapter_array_t *pinfo =\
-			&syncReq->didd_read_adapter_array.info;
-		if (diva_didd_read_adapter_array((DESCRIPTOR *)pinfo->buffer,
-						 (int)pinfo->length)) {
-			e->Rc = OUT_OF_RESOURCES;
-		} else {
-			e->Rc = 0xff;
-		}
-	} break;
-	default:
-		DBG_ERR(("Can't process sync request, Req=%02x", e->Rc))
-		e->Rc = OUT_OF_RESOURCES;
-	}
-}
-/* --------------------------------------------------------------------------
-   IDI client does register his notification function
-   -------------------------------------------------------------------------- */
-static dword diva_register_adapter_callback( \
-	didd_adapter_change_callback_t callback,
-	void IDI_CALL_ENTITY_T *context) {
-	diva_os_spin_lock_magic_t irql;
-	dword i;
-
-	for (i = 0; i < DIVA_DIDD_MAX_NOTIFICATIONS; i++) {
-		diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy_add");
-		if (!NotificationTable[i].callback) {
-			NotificationTable[i].callback = callback;
-			NotificationTable[i].context = context;
-			diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_add");
-			DBG_TRC(("Register adapter notification[%d]=%08x", i + 1, callback))
-			return (i + 1);
-		}
-		diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_add");
-	}
-	DBG_ERR(("Can't register adapter notification, overflow"))
-	return (0);
-}
-/* --------------------------------------------------------------------------
-   IDI client does register his notification function
-   -------------------------------------------------------------------------- */
-static void diva_remove_adapter_callback(dword handle) {
-	diva_os_spin_lock_magic_t irql;
-	if (handle && ((--handle) < DIVA_DIDD_MAX_NOTIFICATIONS)) {
-		diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy_rm");
-		NotificationTable[handle].callback = NULL;
-		NotificationTable[handle].context = NULL;
-		diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy_rm");
-		DBG_TRC(("Remove adapter notification[%d]", (int)(handle + 1)))
-		return;
-	}
-	DBG_ERR(("Can't remove adapter notification, handle=%d", handle))
-}
-/* --------------------------------------------------------------------------
-   Notify all client about adapter array change
-   Does suppose following behavior in the client side:
-   Step 1: Redister Notification
-   Step 2: Read Adapter Array
-   -------------------------------------------------------------------------- */
-static void diva_notify_adapter_change(DESCRIPTOR *d, int removal) {
-	int i, do_notify;
-	didd_adapter_change_notification_t nfy;
-	diva_os_spin_lock_magic_t irql;
-	for (i = 0; i < DIVA_DIDD_MAX_NOTIFICATIONS; i++) {
-		do_notify = 0;
-		diva_os_enter_spin_lock(&didd_spin, &irql, "didd_nfy");
-		if (NotificationTable[i].callback) {
-			memcpy(&nfy, &NotificationTable[i], sizeof(nfy));
-			do_notify = 1;
-		}
-		diva_os_leave_spin_lock(&didd_spin, &irql, "didd_nfy");
-		if (do_notify) {
-			(*(nfy.callback))(nfy.context, d, removal);
-		}
-	}
-}
-/* --------------------------------------------------------------------------
-   For all systems, that are linked by Kernel Mode Linker this is ONLY one
-   function thet should be exported by this device driver
-   IDI clients should look for IDI_DADAPTER, and use request function
-   of this adapter (sync request) in order to receive appropriate services:
-   - add new adapter
-   - remove existing adapter
-   - add adapter array notification
-   - remove adapter array notification
-   (read adapter is redundant in this case)
-   INPUT:
-   buffer - pointer to buffer that will receive adapter array
-   length - length (in bytes) of space in buffer
-   OUTPUT:
-   Adapter array will be written to memory described by 'buffer'
-   If the last adapter seen in the returned adapter array is
-   IDI_DADAPTER or if last adapter in array does have type '0', then
-   it was enougth space in buffer to accommodate all available
-   adapter descriptors
-   *NOTE 1 (debug interface):
-   The IDI adapter of type 'IDI_DIMAINT' does register as 'request'
-   famous 'dprintf' function (of type DI_PRINTF, please look
-   include/debuglib.c and include/debuglib.h) for details.
-   So dprintf is not exported from module debug module directly,
-   instead of this IDI_DIMAINT is registered.
-   Module load order will receive in this case:
-   1. DIDD (this file)
-   2. DIMAINT does load and register 'IDI_DIMAINT', at this step
-   DIDD should be able to get 'dprintf', save it, and
-   register with DIDD by means of 'dprintf' function.
-   3. any other driver is loaded and is able to access adapter array
-   and debug interface
-   This approach does allow to load/unload debug interface on demand,
-   and save memory, it it is necessary.
-   -------------------------------------------------------------------------- */
-void IDI_CALL_LINK_T DIVA_DIDD_Read(void IDI_CALL_ENTITY_T *buffer,
-				    int length) {
-	diva_didd_read_adapter_array(buffer, length);
-}
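[One detail worth noting in diva_notify_adapter_change() above: each registered callback is copied into a local variable while the spin lock is held, and then invoked after the lock is dropped, so a callback may safely re-enter the registration API without deadlocking. A minimal user-space sketch of that snapshot-then-call pattern (pthreads stand in for the driver's spin locks; names are placeholders):]

	#include <pthread.h>

	typedef void (*change_cb_t)(void *ctx, int removal);
	typedef struct { change_cb_t cb; void *ctx; } notify_t;

	#define MAX_NOTIFY 256
	static notify_t notify_tab[MAX_NOTIFY];
	static pthread_mutex_t notify_lock = PTHREAD_MUTEX_INITIALIZER;

	static void notify_all(int removal)
	{
		int i;
		notify_t snap;
		for (i = 0; i < MAX_NOTIFY; i++) {
			pthread_mutex_lock(&notify_lock);
			snap = notify_tab[i];   /* snapshot under the lock */
			pthread_mutex_unlock(&notify_lock);
			if (snap.cb)
				snap.cb(snap.ctx, removal); /* call with lock dropped */
		}
	}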
diff --git a/drivers/isdn/hardware/eicon/dadapter.h b/drivers/isdn/hardware/eicon/dadapter.h
deleted file mode 100644
index 5540f46a5be3..000000000000
--- a/drivers/isdn/hardware/eicon/dadapter.h
+++ /dev/null
@@ -1,34 +0,0 @@
-
-/*
- *
-  Copyright (c) Eicon Networks, 2002.
- *
-  This source file is supplied for the use with
-  Eicon Networks range of DIVA Server Adapters.
- *
-  Eicon File Revision : 2.1
- *
-  This program is free software; you can redistribute it and/or modify
-  it under the terms of the GNU General Public License as published by
-  the Free Software Foundation; either version 2, or (at your option)
-  any later version.
- *
-  This program is distributed in the hope that it will be useful,
-  but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY
-  implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-  See the GNU General Public License for more details.
- *
-  You should have received a copy of the GNU General Public License
-  along with this program; if not, write to the Free Software
-  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- */
-#ifndef __DIVA_DIDD_DADAPTER_INC__
-#define __DIVA_DIDD_DADAPTER_INC__
-
-void diva_didd_load_time_init(void);
-void diva_didd_load_time_finit(void);
-
-#define NEW_MAX_DESCRIPTORS 64
-
-#endif
diff --git a/drivers/isdn/hardware/eicon/debug.c b/drivers/isdn/hardware/eicon/debug.c
deleted file mode 100644
index 301788115c4f..000000000000
--- a/drivers/isdn/hardware/eicon/debug.c
+++ /dev/null
@@ -1,2128 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include "platform.h"
-#include "pc.h"
-#include "di_defs.h"
-#include "debug_if.h"
-#include "divasync.h"
-#include "kst_ifc.h"
-#include "maintidi.h"
-#include "man_defs.h"
-
-/*
-  LOCALS
-*/
-#define DBG_MAGIC (0x47114711L)
-
-static void DI_register(void *arg);
-static void DI_deregister(pDbgHandle hDbg);
-static void DI_format(int do_lock, word id, int type, char *format, va_list argument_list);
-static void DI_format_locked(word id, int type, char *format, va_list argument_list);
-static void DI_format_old(word id, char *format, va_list ap) { }
-static void DiProcessEventLog(unsigned short id, unsigned long msgID, va_list ap) { }
-static void single_p(byte *P, word *PLength, byte Id);
-static void diva_maint_xdi_cb(ENTITY *e);
-static word SuperTraceCreateReadReq(byte *P, const char *path);
-static int diva_mnt_cmp_nmbr(const char *nmbr);
-static void diva_free_dma_descriptor(IDI_CALL request, int nr);
-static int diva_get_dma_descriptor(IDI_CALL request, dword *dma_magic);
-void diva_mnt_internal_dprintf(dword drv_id, dword type, char *p, ...);
-
-static dword MaxDumpSize = 256;
-static dword MaxXlogSize = 2 + 128;
-static char TraceFilter[DIVA_MAX_SELECTIVE_FILTER_LENGTH + 1];
-static int TraceFilterIdent = -1;
-static int TraceFilterChannel = -1;
-
-typedef struct _diva_maint_client {
-	dword sec;
-	dword usec;
-	pDbgHandle hDbg;
-	char drvName[128];
-	dword dbgMask;
-	dword last_dbgMask;
-	IDI_CALL request;
-	_DbgHandle_ Dbg;
-	int logical;
-	int channels;
-	diva_strace_library_interface_t *pIdiLib;
-	BUFFERS XData;
-	char xbuffer[2048 + 512];
-	byte *pmem;
-	int request_pending;
-	int dma_handle;
-} diva_maint_client_t;
-static diva_maint_client_t clients[MAX_DESCRIPTORS];
-
-static void diva_change_management_debug_mask(diva_maint_client_t *pC, dword old_mask);
-
-static void diva_maint_error(void *user_context,
-			     diva_strace_library_interface_t *hLib,
-			     int Adapter,
-			     int error,
-			     const char *file,
-			     int line);
-static void diva_maint_state_change_notify(void *user_context,
-					   diva_strace_library_interface_t *hLib,
-					   int Adapter,
-					   diva_trace_line_state_t *channel,
-					   int notify_subject);
-static void diva_maint_trace_notify(void *user_context,
-				    diva_strace_library_interface_t *hLib,
-				    int Adapter,
-				    void *xlog_buffer,
-				    int length);
-
-
-
-typedef struct MSG_QUEUE {
-	dword Size;  /* total size of queue (constant) */
-	byte *Base;  /* lowest address (constant) */
-	byte *High;  /* Base + Size (constant) */
-	byte *Head;  /* first message in queue (if any) */
-	byte *Tail;  /* first free position */
-	byte *Wrap;  /* current wraparound position */
-	dword Count; /* current no of bytes in queue */
-} MSG_QUEUE;
-
-typedef struct MSG_HEAD {
-	volatile dword Size; /* size of data following MSG_HEAD */
-#define MSG_INCOMPLETE 0x8000 /* ored to Size until queueCompleteMsg */
-} MSG_HEAD;
-
-#define queueCompleteMsg(p) do { ((MSG_HEAD *)p - 1)->Size &= ~MSG_INCOMPLETE; } while (0)
-#define queueCount(q) ((q)->Count)
-#define MSG_NEED(size) \
-	((sizeof(MSG_HEAD) + size + sizeof(dword) - 1) & ~(sizeof(dword) - 1))
-
-static void queueInit(MSG_QUEUE *Q, byte *Buffer, dword sizeBuffer) {
-	Q->Size = sizeBuffer;
-	Q->Base = Q->Head = Q->Tail = Buffer;
-	Q->High = Buffer + sizeBuffer;
-	Q->Wrap = NULL;
-	Q->Count = 0;
-}
-
-static byte *queueAllocMsg(MSG_QUEUE *Q, word size) {
-	/* Allocate 'size' bytes at tail of queue which will be filled later
-	 * directly with callers own message header info and/or message.
-	 * An 'alloced' message is marked incomplete by oring the 'Size' field
-	 * with MSG_INCOMPLETE.
-	 * This must be reset via queueCompleteMsg() after the message is filled.
-	 * As long as a message is marked incomplete queuePeekMsg() will return
-	 * a 'queue empty' condition when it reaches such a message. */
-
-	MSG_HEAD *Msg;
-	word need = MSG_NEED(size);
-
-	if (Q->Tail == Q->Head) {
-		if (Q->Wrap || need > Q->Size) {
-			return NULL; /* full */
-		}
-		goto alloc; /* empty */
-	}
-
-	if (Q->Tail > Q->Head) {
-		if (Q->Tail + need <= Q->High) goto alloc; /* append */
-		if (Q->Base + need > Q->Head) {
-			return NULL; /* too much */
-		}
-		/* wraparound the queue (but not the message) */
-		Q->Wrap = Q->Tail;
-		Q->Tail = Q->Base;
-		goto alloc;
-	}
-
-	if (Q->Tail + need > Q->Head) {
-		return NULL; /* too much */
-	}
-
-alloc:
-	Msg = (MSG_HEAD *)Q->Tail;
-
-	Msg->Size = size | MSG_INCOMPLETE;
-
-	Q->Tail += need;
-	Q->Count += size;
-
-
-
-	return ((byte *)(Msg + 1));
-}
-
-static void queueFreeMsg(MSG_QUEUE *Q) {
-/* Free the message at head of queue */
-
-	word size = ((MSG_HEAD *)Q->Head)->Size & ~MSG_INCOMPLETE;
-
-	Q->Head += MSG_NEED(size);
-	Q->Count -= size;
-
-	if (Q->Wrap) {
-		if (Q->Head >= Q->Wrap) {
-			Q->Head = Q->Base;
-			Q->Wrap = NULL;
-		}
-	} else if (Q->Head >= Q->Tail) {
-		Q->Head = Q->Tail = Q->Base;
-	}
-}
-
-static byte *queuePeekMsg(MSG_QUEUE *Q, word *size) {
-	/* Show the first valid message in queue BUT DON'T free the message.
-	 * After looking on the message contents it can be freed queueFreeMsg()
-	 * or simply remain in message queue. */
-
-	MSG_HEAD *Msg = (MSG_HEAD *)Q->Head;
-
-	if (((byte *)Msg == Q->Tail && !Q->Wrap) ||
-	    (Msg->Size & MSG_INCOMPLETE)) {
-		return NULL;
-	} else {
-		*size = Msg->Size;
-		return ((byte *)(Msg + 1));
-	}
-}
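[The deleted queue code above is a wraparound ring buffer with an "incomplete message" marker: producers reserve space first, fill it, and only then make it visible, so a concurrent reader never sees a half-written entry. A sketch of how it is meant to be driven, reusing the functions above as they appear in this file (byte/word/dword come from platform.h in the original context; the buffer size and message content are arbitrary):]

	/* Sketch only -- not part of the deleted file. */
	static byte backing[4096];
	static MSG_QUEUE q;

	static void ring_demo(void)
	{
		word size;
		byte *slot, *msg;

		queueInit(&q, backing, sizeof(backing));

		/* producer: reserve space, fill it, then clear MSG_INCOMPLETE */
		if ((slot = queueAllocMsg(&q, 5))) {
			memcpy(slot, "hello", 5);
			queueCompleteMsg(slot); /* now visible to the consumer */
		}

		/* consumer: peek returns NULL for empty or still-incomplete head */
		while ((msg = queuePeekMsg(&q, &size))) {
			/* ... process 'size' bytes at 'msg' ... */
			queueFreeMsg(&q); /* advance head, handling wraparound */
		}
	}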
-
-/*
-  Message queue header
-*/
-static MSG_QUEUE *dbg_queue;
-static byte *dbg_base;
-static int external_dbg_queue;
-static diva_os_spin_lock_t dbg_q_lock;
-static diva_os_spin_lock_t dbg_adapter_lock;
-static int dbg_q_busy;
-static volatile dword dbg_sequence;
-
-/*
-  INTERFACE:
-  Initialize run time queue structures.
-  base:    base of the message queue
-  length:  length of the message queue
-  do_init: perfor queue reset
-
-  return:  zero on success, -1 on error
-*/
-int diva_maint_init(byte *base, unsigned long length, int do_init) {
-	if (dbg_queue || (!base) || (length < (4096 * 4))) {
-		return (-1);
-	}
-
-	TraceFilter[0] = 0;
-	TraceFilterIdent = -1;
-	TraceFilterChannel = -1;
-
-	dbg_base = base;
-
-	*(dword *)base = (dword)DBG_MAGIC; /* Store Magic */
-	base += sizeof(dword);
-	length -= sizeof(dword);
-
-	*(dword *)base = 2048; /* Extension Field Length */
-	base += sizeof(dword);
-	length -= sizeof(dword);
-
-	strcpy(base, "KERNEL MODE BUFFER\n");
-	base += 2048;
-	length -= 2048;
-
-	*(dword *)base = 0; /* Terminate extension */
-	base += sizeof(dword);
-	length -= sizeof(dword);
-
-	*(void **)base = (void *)(base + sizeof(void *)); /* Store Base */
-	base += sizeof(void *);
-	length -= sizeof(void *);
-
-	dbg_queue = (MSG_QUEUE *)base;
-	queueInit(dbg_queue, base + sizeof(MSG_QUEUE), length - sizeof(MSG_QUEUE) - 512);
-	external_dbg_queue = 0;
-
-	if (!do_init) {
-		external_dbg_queue = 1; /* memory was located on the external device */
-	}
-
-
-	if (diva_os_initialize_spin_lock(&dbg_q_lock, "dbg_init")) {
-		dbg_queue = NULL;
-		dbg_base = NULL;
-		external_dbg_queue = 0;
-		return (-1);
-	}
-
-	if (diva_os_initialize_spin_lock(&dbg_adapter_lock, "dbg_init")) {
-		diva_os_destroy_spin_lock(&dbg_q_lock, "dbg_init");
-		dbg_queue = NULL;
-		dbg_base = NULL;
-		external_dbg_queue = 0;
-		return (-1);
-	}
-
-	return (0);
-}
-
-/*
-  INTERFACE:
-  Finit at unload time
-  return address of internal queue or zero if queue
-  was external
-*/
-void *diva_maint_finit(void) {
-	void *ret = (void *)dbg_base;
-	int i;
-
-	dbg_queue = NULL;
-	dbg_base = NULL;
-
-	if (ret) {
-		diva_os_destroy_spin_lock(&dbg_q_lock, "dbg_finit");
-		diva_os_destroy_spin_lock(&dbg_adapter_lock, "dbg_finit");
-	}
-
-	if (external_dbg_queue) {
-		ret = NULL;
-	}
-	external_dbg_queue = 0;
-
-	for (i = 1; i < ARRAY_SIZE(clients); i++) {
-		if (clients[i].pmem) {
-			diva_os_free(0, clients[i].pmem);
-		}
-	}
-
-	return (ret);
-}
-
-/*
-  INTERFACE:
-  Return amount of messages in debug queue
-*/
-dword diva_dbg_q_length(void) {
-	return (dbg_queue ? queueCount(dbg_queue) : 0);
-}
-
-/*
-  INTERFACE:
-  Lock message queue and return the pointer to the first
-  entry.
-*/
-diva_dbg_entry_head_t *diva_maint_get_message(word *size,
-					      diva_os_spin_lock_magic_t *old_irql) {
-	diva_dbg_entry_head_t *pmsg = NULL;
-
-	diva_os_enter_spin_lock(&dbg_q_lock, old_irql, "read");
-	if (dbg_q_busy) {
-		diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_busy");
-		return NULL;
-	}
-	dbg_q_busy = 1;
-
-	if (!(pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, size))) {
-		dbg_q_busy = 0;
-		diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_empty");
-	}
-
-	return (pmsg);
-}
-
-/*
-  INTERFACE:
-  acknowledge last message and unlock queue
-*/
-void diva_maint_ack_message(int do_release,
-			    diva_os_spin_lock_magic_t *old_irql) {
-	if (!dbg_q_busy) {
-		return;
-	}
-	if (do_release) {
-		queueFreeMsg(dbg_queue);
-	}
-	dbg_q_busy = 0;
-	diva_os_leave_spin_lock(&dbg_q_lock, old_irql, "read_ack");
-}
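[diva_maint_get_message()/diva_maint_ack_message() above form a peek/ack protocol: get_message() takes the queue lock and sets dbg_q_busy, the caller consumes the entry, and ack_message() optionally frees it and releases the lock. A sketch of the intended consumer loop, reusing those deleted functions as declared in this file (the loop body is an assumed placeholder):]

	/* Sketch only -- assumed consumer, not from the patch. */
	static void drain_debug_queue(void)
	{
		diva_os_spin_lock_magic_t irql;
		diva_dbg_entry_head_t *e;
		word size;

		while ((e = diva_maint_get_message(&size, &irql))) {
			/* the queue is locked and marked busy here;
			 * copy the entry out to the reader */
			diva_maint_ack_message(1 /* release entry */, &irql);
		}
	}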
-
-/*
-  INTERFACE:
-  PRT COMP function used to register
-  with MAINT adapter or log in compatibility
-  mode in case older driver version is connected too
-*/
-void diva_maint_prtComp(char *format, ...)
-{
-	void *hDbg;
-	va_list ap;
-
-	if (!format)
-		return;
-
-	va_start(ap, format);
-
-	/*
-	  register to new log driver functions
-	*/
-	if ((format[0] == 0) && ((unsigned char)format[1] == 255)) {
-		hDbg = va_arg(ap, void *); /* ptr to DbgHandle */
-		DI_register(hDbg);
-	}
-
-	va_end(ap);
-}
-
-static void DI_register(void *arg) {
-	diva_os_spin_lock_magic_t old_irql;
-	dword sec, usec;
-	pDbgHandle hDbg;
-	int id, free_id = -1, best_id = 0;
-
-	diva_os_get_time(&sec, &usec);
-
-	hDbg = (pDbgHandle)arg;
-	/*
-	  Check for bad args, specially for the old obsolete debug handle
-	*/
-	if ((hDbg == NULL) ||
-	    ((hDbg->id == 0) && (((_OldDbgHandle_ *)hDbg)->id == -1)) ||
-	    (hDbg->Registered != 0)) {
-		return;
-	}
-
-	diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "register");
-
-	for (id = 1; id < ARRAY_SIZE(clients); id++) {
-		if (clients[id].hDbg == hDbg) {
-			/*
-			  driver already registered
-			*/
-			diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
-			return;
-		}
-		if (clients[id].hDbg) { /* slot is busy */
-			continue;
-		}
-		free_id = id;
-		if (!strcmp(clients[id].drvName, hDbg->drvName)) {
-			/*
-			  This driver was already registered with this name
-			  and slot is still free - reuse it
-			*/
-			best_id = 1;
-			break;
-		}
-		if (!clients[id].hDbg) { /* slot is busy */
-			break;
-		}
-	}
-
-	if (free_id != -1) {
-		diva_dbg_entry_head_t *pmsg = NULL;
-		int len;
-		char tmp[256];
-		word size;
-
-		/*
-		  Register new driver with id == free_id
-		*/
-		clients[free_id].hDbg = hDbg;
-		clients[free_id].sec = sec;
-		clients[free_id].usec = usec;
-		strcpy(clients[free_id].drvName, hDbg->drvName);
-
-		clients[free_id].dbgMask = hDbg->dbgMask;
-		if (best_id) {
-			hDbg->dbgMask |= clients[free_id].last_dbgMask;
-		} else {
-			clients[free_id].last_dbgMask = 0;
-		}
-
-		hDbg->Registered = DBG_HANDLE_REG_NEW;
-		hDbg->id = (byte)free_id;
-		hDbg->dbg_end = DI_deregister;
-		hDbg->dbg_prt = DI_format_locked;
-		hDbg->dbg_ev = DiProcessEventLog;
-		hDbg->dbg_irq = DI_format_locked;
-		if (hDbg->Version > 0) {
-			hDbg->dbg_old = DI_format_old;
-		}
-		hDbg->next = (pDbgHandle)DBG_MAGIC;
-
-		/*
-		  Log driver register, MAINT driver ID is '0'
-		*/
-		len = sprintf(tmp, "DIMAINT - drv # %d = '%s' registered",
-			      free_id, hDbg->drvName);
-
-		while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue,
-								       (word)(len + 1 + sizeof(*pmsg))))) {
-			if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) {
-				queueFreeMsg(dbg_queue);
-			} else {
-				break;
-			}
-		}
-
-		if (pmsg) {
-			pmsg->sequence = dbg_sequence++;
-			pmsg->time_sec = sec;
-			pmsg->time_usec = usec;
-			pmsg->facility = MSG_TYPE_STRING;
-			pmsg->dli = DLI_REG;
-			pmsg->drv_id = 0; /* id 0 - DIMAINT */
-			pmsg->di_cpu = 0;
-			pmsg->data_length = len + 1;
-
-			memcpy(&pmsg[1], tmp, len + 1);
-			queueCompleteMsg(pmsg);
-			diva_maint_wakeup_read();
-		}
-	}
-
-	diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
-}
-
-static void DI_deregister(pDbgHandle hDbg) {
-	diva_os_spin_lock_magic_t old_irql, old_irql1;
-	dword sec, usec;
-	int i;
-	word size;
-	byte *pmem = NULL;
-
-	diva_os_get_time(&sec, &usec);
-
-	diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "read");
-	diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read");
-
-	for (i = 1; i < ARRAY_SIZE(clients); i++) {
-		if (clients[i].hDbg == hDbg) {
-			diva_dbg_entry_head_t *pmsg;
-			char tmp[256];
-			int len;
-
-			clients[i].hDbg = NULL;
-
-			hDbg->id = -1;
-			hDbg->dbgMask = 0;
-			hDbg->dbg_end = NULL;
-			hDbg->dbg_prt = NULL;
-			hDbg->dbg_irq = NULL;
-			if (hDbg->Version > 0)
-				hDbg->dbg_old = NULL;
-			hDbg->Registered = 0;
-			hDbg->next = NULL;
-
-			if (clients[i].pIdiLib) {
-				(*(clients[i].pIdiLib->DivaSTraceLibraryFinit))(clients[i].pIdiLib->hLib);
-				clients[i].pIdiLib = NULL;
-
-				pmem = clients[i].pmem;
-				clients[i].pmem = NULL;
-			}
-
-			/*
-			  Log driver register, MAINT driver ID is '0'
-			*/
-			len = sprintf(tmp, "DIMAINT - drv # %d = '%s' de-registered",
-				      i, hDbg->drvName);
-
-			while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue,
-									       (word)(len + 1 + sizeof(*pmsg))))) {
-				if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) {
-					queueFreeMsg(dbg_queue);
-				} else {
-					break;
-				}
-			}
-
-			if (pmsg) {
-				pmsg->sequence = dbg_sequence++;
-				pmsg->time_sec = sec;
-				pmsg->time_usec = usec;
-				pmsg->facility = MSG_TYPE_STRING;
-				pmsg->dli = DLI_REG;
-				pmsg->drv_id = 0; /* id 0 - DIMAINT */
-				pmsg->di_cpu = 0;
-				pmsg->data_length = len + 1;
-
-				memcpy(&pmsg[1], tmp, len + 1);
-				queueCompleteMsg(pmsg);
-				diva_maint_wakeup_read();
-			}
-
-			break;
-		}
-	}
-
-	diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_ack");
-	diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "read_ack");
-
-	if (pmem) {
-		diva_os_free(0, pmem);
-	}
-}
-
-static void DI_format_locked(unsigned short id,
-			     int type,
-			     char *format,
-			     va_list argument_list) {
-	DI_format(1, id, type, format, argument_list);
-}
-
-static void DI_format(int do_lock,
-		      unsigned short id,
-		      int type,
-		      char *format,
-		      va_list ap) {
-	diva_os_spin_lock_magic_t old_irql;
-	dword sec, usec;
-	diva_dbg_entry_head_t *pmsg = NULL;
-	dword length;
-	word size;
-	static char fmtBuf[MSG_FRAME_MAX_SIZE + sizeof(*pmsg) + 1];
-	char *data;
-	unsigned short code;
-
-	if (diva_os_in_irq()) {
-		dbg_sequence++;
-		return;
-	}
-
-	if ((!format) ||
-	    ((TraceFilter[0] != 0) && ((TraceFilterIdent < 0) || (TraceFilterChannel < 0)))) {
-		return;
-	}
-
-
-
-	diva_os_get_time(&sec, &usec);
-
-	if (do_lock) {
-		diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "format");
-	}
-
-	switch (type) {
-	case DLI_MXLOG:
-	case DLI_BLK:
-	case DLI_SEND:
-	case DLI_RECV:
-		if (!(length = va_arg(ap, unsigned long))) {
-			break;
-		}
-		if (length > MaxDumpSize) {
-			length = MaxDumpSize;
-		}
-		while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue,
-								       (word)length + sizeof(*pmsg)))) {
-			if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) {
-				queueFreeMsg(dbg_queue);
-			} else {
-				break;
-			}
-		}
-		if (pmsg) {
-			memcpy(&pmsg[1], format, length);
-			pmsg->sequence = dbg_sequence++;
-			pmsg->time_sec = sec;
-			pmsg->time_usec = usec;
-			pmsg->facility = MSG_TYPE_BINARY;
-			pmsg->dli = type; /* DLI_XXX */
-			pmsg->drv_id = id; /* driver MAINT id */
-			pmsg->di_cpu = 0;
-			pmsg->data_length = length;
-			queueCompleteMsg(pmsg);
-		}
-		break;
-
-	case DLI_XLOG: {
-		byte *p;
-		data = va_arg(ap, char *);
-		code = (unsigned short)va_arg(ap, unsigned int);
-		length = (unsigned long)va_arg(ap, unsigned int);
-
-		if (length > MaxXlogSize)
-			length = MaxXlogSize;
-
-		while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue,
-								       (word)length + sizeof(*pmsg) + 2))) {
-			if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) {
-				queueFreeMsg(dbg_queue);
-			} else {
-				break;
-			}
-		}
-		if (pmsg) {
-			p = (byte *)&pmsg[1];
-			p[0] = (char)(code);
-			p[1] = (char)(code >> 8);
-			if (data && length) {
-				memcpy(&p[2], &data[0], length);
-			}
-			length += 2;
-
-			pmsg->sequence = dbg_sequence++;
-			pmsg->time_sec = sec;
-			pmsg->time_usec = usec;
-			pmsg->facility = MSG_TYPE_BINARY;
-			pmsg->dli = type; /* DLI_XXX */
-			pmsg->drv_id = id; /* driver MAINT id */
-			pmsg->di_cpu = 0;
-			pmsg->data_length = length;
-			queueCompleteMsg(pmsg);
-		}
-	} break;
-
-	case DLI_LOG:
-	case DLI_FTL:
-	case DLI_ERR:
-	case DLI_TRC:
-	case DLI_REG:
-	case DLI_MEM:
-	case DLI_SPL:
-	case DLI_IRP:
-	case DLI_TIM:
-	case DLI_TAPI:
-	case DLI_NDIS:
-	case DLI_CONN:
-	case DLI_STAT:
-	case DLI_PRV0:
-	case DLI_PRV1:
-	case DLI_PRV2:
-	case DLI_PRV3:
-		if ((length = (unsigned long)vsprintf(&fmtBuf[0], format, ap)) > 0) {
-			length += (sizeof(*pmsg) + 1);
-
-			while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue,
-									       (word)length))) {
-				if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) {
-					queueFreeMsg(dbg_queue);
-				} else {
-					break;
-				}
-			}
-
-			pmsg->sequence = dbg_sequence++;
-			pmsg->time_sec = sec;
-			pmsg->time_usec = usec;
-			pmsg->facility = MSG_TYPE_STRING;
-			pmsg->dli = type; /* DLI_XXX */
-			pmsg->drv_id = id; /* driver MAINT id */
-			pmsg->di_cpu = 0;
-			pmsg->data_length = length - sizeof(*pmsg);
-
-			memcpy(&pmsg[1], fmtBuf, pmsg->data_length);
-			queueCompleteMsg(pmsg);
-		}
-		break;
-
-	} /* switch type */
-
-
-	if (queueCount(dbg_queue)) {
-		diva_maint_wakeup_read();
-	}
-
-	if (do_lock) {
-		diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "format");
-	}
-}
-
-/*
-  Write driver ID and driver revision to callers buffer
-*/
-int diva_get_driver_info(dword id, byte *data, int data_length) {
-	diva_os_spin_lock_magic_t old_irql;
-	byte *p = data;
-	int to_copy;
-
-	if (!data || !id || (data_length < 17) ||
-	    (id >= ARRAY_SIZE(clients))) {
-		return (-1);
-	}
-
-	diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "driver info");
-
-	if (clients[id].hDbg) {
-		*p++ = 1;
-		*p++ = (byte)clients[id].sec; /* save seconds */
-		*p++ = (byte)(clients[id].sec >> 8);
-		*p++ = (byte)(clients[id].sec >> 16);
-		*p++ = (byte)(clients[id].sec >> 24);
-
-		*p++ = (byte)(clients[id].usec / 1000); /* save mseconds */
-		*p++ = (byte)((clients[id].usec / 1000) >> 8);
-		*p++ = (byte)((clients[id].usec / 1000) >> 16);
-		*p++ = (byte)((clients[id].usec / 1000) >> 24);
-
-		data_length -= 9;
-
-		if ((to_copy = min(strlen(clients[id].drvName), (size_t)(data_length - 1)))) {
-			memcpy(p, clients[id].drvName, to_copy);
-			p += to_copy;
-			data_length -= to_copy;
-			if ((data_length >= 4) && clients[id].hDbg->drvTag[0]) {
-				*p++ = '(';
-				data_length -= 1;
-				if ((to_copy = min(strlen(clients[id].hDbg->drvTag), (size_t)(data_length - 2)))) {
-					memcpy(p, clients[id].hDbg->drvTag, to_copy);
-					p += to_copy;
-					data_length -= to_copy;
-					if (data_length >= 2) {
-						*p++ = ')';
-						data_length--;
-					}
-				}
-			}
-		}
-	}
-	*p++ = 0;
-
-	diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "driver info");
-
-	return (p - data);
-}
-
-int diva_get_driver_dbg_mask(dword id, byte *data) {
-	diva_os_spin_lock_magic_t old_irql;
-	int ret = -1;
-
-	if (!data || !id || (id >= ARRAY_SIZE(clients))) {
-		return (-1);
-	}
-	diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "driver info");
-
-	if (clients[id].hDbg) {
-		ret = 4;
-		*data++ = (byte)(clients[id].hDbg->dbgMask);
-		*data++ = (byte)(clients[id].hDbg->dbgMask >> 8);
-		*data++ = (byte)(clients[id].hDbg->dbgMask >> 16);
-		*data++ = (byte)(clients[id].hDbg->dbgMask >> 24);
-	}
-
-	diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "driver info");
-
-	return (ret);
-}
-
-int diva_set_driver_dbg_mask(dword id, dword mask) {
-	diva_os_spin_lock_magic_t old_irql, old_irql1;
-	int ret = -1;
-
-
-	if (!id || (id >= ARRAY_SIZE(clients))) {
-		return (-1);
-	}
-
-	diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask");
-	diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "dbg mask");
-
-	if (clients[id].hDbg) {
-		dword old_mask = clients[id].hDbg->dbgMask;
-		mask &= 0x7fffffff;
-		clients[id].hDbg->dbgMask = mask;
-		clients[id].last_dbgMask = (clients[id].hDbg->dbgMask | clients[id].dbgMask);
-		ret = 4;
-		diva_change_management_debug_mask(&clients[id], old_mask);
-	}
-
-
-	diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "dbg mask");
-
-	if (clients[id].request_pending) {
-		clients[id].request_pending = 0;
-		(*(clients[id].request))((ENTITY *)(*(clients[id].pIdiLib->DivaSTraceGetHandle))(clients[id].pIdiLib->hLib));
-	}
-
-	diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask");
-
-	return (ret);
-}
-
-static int diva_get_idi_adapter_info(IDI_CALL request, dword *serial, dword *logical) {
-	IDI_SYNC_REQ sync_req;
-
-	sync_req.xdi_logical_adapter_number.Req = 0;
-	sync_req.xdi_logical_adapter_number.Rc = IDI_SYNC_REQ_XDI_GET_LOGICAL_ADAPTER_NUMBER;
-	(*request)((ENTITY *)&sync_req);
-	*logical = sync_req.xdi_logical_adapter_number.info.logical_adapter_number;
-
-	sync_req.GetSerial.Req = 0;
-	sync_req.GetSerial.Rc = IDI_SYNC_REQ_GET_SERIAL;
-	sync_req.GetSerial.serial = 0;
-	(*request)((ENTITY *)&sync_req);
-	*serial = sync_req.GetSerial.serial;
-
-	return (0);
-}
-
-/*
-  Register XDI adapter as MAINT compatible driver
-*/
-void diva_mnt_add_xdi_adapter(const DESCRIPTOR *d) {
-	diva_os_spin_lock_magic_t old_irql, old_irql1;
-	dword sec, usec, logical, serial, org_mask;
-	int id, free_id = -1;
-	char tmp[128];
-	diva_dbg_entry_head_t *pmsg = NULL;
-	int len;
-	word size;
-	byte *pmem;
-
-	diva_os_get_time(&sec, &usec);
-	diva_get_idi_adapter_info(d->request, &serial, &logical);
-	if (serial & 0xff000000) {
-		sprintf(tmp, "ADAPTER:%d SN:%u-%d",
-			(int)logical,
-			serial & 0x00ffffff,
-			(byte)(((serial & 0xff000000) >> 24) + 1));
-	} else {
-		sprintf(tmp, "ADAPTER:%d SN:%u", (int)logical, serial);
-	}
-
-	if (!(pmem = diva_os_malloc(0, DivaSTraceGetMemotyRequirement(d->channels)))) {
-		return;
-	}
-	memset(pmem, 0x00, DivaSTraceGetMemotyRequirement(d->channels));
-
-	diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
-	diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "register");
-
-	for (id = 1; id < ARRAY_SIZE(clients); id++) {
-		if (clients[id].hDbg && (clients[id].request == d->request)) {
-			diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
-			diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
-			diva_os_free(0, pmem);
-			return;
-		}
-		if (clients[id].hDbg) { /* slot is busy */
-			continue;
-		}
-		if (free_id < 0) {
-			free_id = id;
-		}
-		if (!strcmp(clients[id].drvName, tmp)) {
-			/*
-			  This driver was already registered with this name
-			  and slot is still free - reuse it
-			*/
-			free_id = id;
-			break;
-		}
-	}
-
-	if (free_id < 0) {
-		diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
-		diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
-		diva_os_free(0, pmem);
-		return;
-	}
-
-	id = free_id;
-	clients[id].request = d->request;
-	clients[id].request_pending = 0;
-	clients[id].hDbg = &clients[id].Dbg;
-	clients[id].sec = sec;
-	clients[id].usec = usec;
-	strcpy(clients[id].drvName, tmp);
-	strcpy(clients[id].Dbg.drvName, tmp);
-	clients[id].Dbg.drvTag[0] = 0;
-	clients[id].logical = (int)logical;
-	clients[id].channels = (int)d->channels;
-	clients[id].dma_handle = -1;
-
-	clients[id].Dbg.dbgMask = 0;
-	clients[id].dbgMask = clients[id].Dbg.dbgMask;
-	if (id) {
-		clients[id].Dbg.dbgMask |= clients[free_id].last_dbgMask;
-	} else {
-		clients[id].last_dbgMask = 0;
-	}
-	clients[id].Dbg.Registered = DBG_HANDLE_REG_NEW;
-	clients[id].Dbg.id = (byte)id;
-	clients[id].Dbg.dbg_end = DI_deregister;
-	clients[id].Dbg.dbg_prt = DI_format_locked;
-	clients[id].Dbg.dbg_ev = DiProcessEventLog;
-	clients[id].Dbg.dbg_irq = DI_format_locked;
-	clients[id].Dbg.next = (pDbgHandle)DBG_MAGIC;
-
-	{
-		diva_trace_library_user_interface_t diva_maint_user_ifc = { &clients[id],
-									    diva_maint_state_change_notify,
-									    diva_maint_trace_notify,
-									    diva_maint_error };
-
-		/*
-		  Attach to adapter management interface
-		*/
-		if ((clients[id].pIdiLib =
-		     DivaSTraceLibraryCreateInstance((int)logical, &diva_maint_user_ifc, pmem))) {
-			if (((*(clients[id].pIdiLib->DivaSTraceLibraryStart))(clients[id].pIdiLib->hLib))) {
-				diva_mnt_internal_dprintf(0, DLI_ERR, "Adapter(%d) Start failed", (int)logical);
-				(*(clients[id].pIdiLib->DivaSTraceLibraryFinit))(clients[id].pIdiLib->hLib);
-				clients[id].pIdiLib = NULL;
-			}
-		} else {
-			diva_mnt_internal_dprintf(0, DLI_ERR, "A(%d) management init failed", (int)logical);
-		}
-	}
-
-	if (!clients[id].pIdiLib) {
-		clients[id].request = NULL;
-		clients[id].request_pending = 0;
-		clients[id].hDbg = NULL;
-		diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
-		diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
-		diva_os_free(0, pmem);
-		return;
-	}
-
-	/*
-	  Log driver register, MAINT driver ID is '0'
-	*/
-	len = sprintf(tmp, "DIMAINT - drv # %d = '%s' registered",
-		      id, clients[id].Dbg.drvName);
-
-	while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue,
-							       (word)(len + 1 + sizeof(*pmsg))))) {
-		if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) {
-			queueFreeMsg(dbg_queue);
-		} else {
-			break;
-		}
-	}
-
-	if (pmsg) {
-		pmsg->sequence = dbg_sequence++;
-		pmsg->time_sec = sec;
-		pmsg->time_usec = usec;
-		pmsg->facility = MSG_TYPE_STRING;
-		pmsg->dli = DLI_REG;
-		pmsg->drv_id = 0; /* id 0 - DIMAINT */
-		pmsg->di_cpu = 0;
-		pmsg->data_length = len + 1;
-
-		memcpy(&pmsg[1], tmp, len + 1);
-		queueCompleteMsg(pmsg);
-		diva_maint_wakeup_read();
-	}
-
-	org_mask = clients[id].Dbg.dbgMask;
-	clients[id].Dbg.dbgMask = 0;
-
-	diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "register");
-
-	if (clients[id].request_pending) {
-		clients[id].request_pending = 0;
-		(*(clients[id].request))((ENTITY *)(*(clients[id].pIdiLib->DivaSTraceGetHandle))(clients[id].pIdiLib->hLib));
-	}
-
-	diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "register");
-
-	diva_set_driver_dbg_mask(id, org_mask);
-}
-
-/*
-  De-Register XDI adapter
-*/
-void diva_mnt_remove_xdi_adapter(const DESCRIPTOR *d) {
-	diva_os_spin_lock_magic_t old_irql, old_irql1;
-	dword sec, usec;
-	int i;
-	word size;
-	byte *pmem = NULL;
-
-	diva_os_get_time(&sec, &usec);
-
-	diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "read");
-	diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read");
-
-	for (i = 1; i < ARRAY_SIZE(clients); i++) {
-		if (clients[i].hDbg && (clients[i].request == d->request)) {
-			diva_dbg_entry_head_t *pmsg;
-			char tmp[256];
-			int len;
-
-			if (clients[i].pIdiLib) {
-				(*(clients[i].pIdiLib->DivaSTraceLibraryFinit))(clients[i].pIdiLib->hLib);
-				clients[i].pIdiLib = NULL;
-
-				pmem = clients[i].pmem;
-				clients[i].pmem = NULL;
-			}
-
-			clients[i].hDbg = NULL;
-			clients[i].request_pending = 0;
-			if (clients[i].dma_handle >= 0) {
-				/*
-				  Free DMA handle
-				*/
-				diva_free_dma_descriptor(clients[i].request, clients[i].dma_handle);
-				clients[i].dma_handle = -1;
-			}
-			clients[i].request = NULL;
-
-			/*
-			  Log driver register, MAINT driver ID is '0'
-			*/
-			len = sprintf(tmp, "DIMAINT -
drv # %d = '%s' de-registered", - i, clients[i].Dbg.drvName); - - memset(&clients[i].Dbg, 0x00, sizeof(clients[i].Dbg)); - - while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue, - (word)(len + 1 + sizeof(*pmsg))))) { - if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) { - queueFreeMsg(dbg_queue); - } else { - break; - } - } - - if (pmsg) { - pmsg->sequence = dbg_sequence++; - pmsg->time_sec = sec; - pmsg->time_usec = usec; - pmsg->facility = MSG_TYPE_STRING; - pmsg->dli = DLI_REG; - pmsg->drv_id = 0; /* id 0 - DIMAINT */ - pmsg->di_cpu = 0; - pmsg->data_length = len + 1; - - memcpy(&pmsg[1], tmp, len + 1); - queueCompleteMsg(pmsg); - diva_maint_wakeup_read(); - } - - break; - } - } - - diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_ack"); - diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "read_ack"); - - if (pmem) { - diva_os_free(0, pmem); - } -} - -/* ---------------------------------------------------------------- - Low level interface for management interface client - ---------------------------------------------------------------- */ -/* - Return handle to client structure -*/ -void *SuperTraceOpenAdapter(int AdapterNumber) { - int i; - - for (i = 1; i < ARRAY_SIZE(clients); i++) { - if (clients[i].hDbg && clients[i].request && (clients[i].logical == AdapterNumber)) { - return (&clients[i]); - } - } - - return NULL; -} - -int SuperTraceCloseAdapter(void *AdapterHandle) { - return (0); -} - -int SuperTraceReadRequest(void *AdapterHandle, const char *name, byte *data) { - diva_maint_client_t *pC = (diva_maint_client_t *)AdapterHandle; - - if (pC && pC->pIdiLib && pC->request) { - ENTITY *e = (ENTITY *)(*(pC->pIdiLib->DivaSTraceGetHandle))(pC->pIdiLib->hLib); - byte *xdata = (byte *)&pC->xbuffer[0]; - char tmp = 0; - word length; - - if (!strcmp(name, "\\")) { /* Read ROOT */ - name = &tmp; - } - length = SuperTraceCreateReadReq(xdata, name); - single_p(xdata, &length, 0); /* End Of Message */ - - e->Req = MAN_READ; - e->ReqCh = 0; - e->X->PLength = length; - e->X->P = (byte *)xdata; - - pC->request_pending = 1; - - return (0); - } - - return (-1); -} - -int SuperTraceGetNumberOfChannels(void *AdapterHandle) { - if (AdapterHandle) { - diva_maint_client_t *pC = (diva_maint_client_t *)AdapterHandle; - - return (pC->channels); - } - - return (0); -} - -int SuperTraceASSIGN(void *AdapterHandle, byte *data) { - diva_maint_client_t *pC = (diva_maint_client_t *)AdapterHandle; - - if (pC && pC->pIdiLib && pC->request) { - ENTITY *e = (ENTITY *)(*(pC->pIdiLib->DivaSTraceGetHandle))(pC->pIdiLib->hLib); - IDI_SYNC_REQ *preq; - char buffer[((sizeof(preq->xdi_extended_features) + 4) > sizeof(ENTITY)) ? 
(sizeof(preq->xdi_extended_features) + 4) : sizeof(ENTITY)]; - char features[4]; - word assign_data_length = 1; - - features[0] = 0; - pC->xbuffer[0] = 0; - preq = (IDI_SYNC_REQ *)&buffer[0]; - preq->xdi_extended_features.Req = 0; - preq->xdi_extended_features.Rc = IDI_SYNC_REQ_XDI_GET_EXTENDED_FEATURES; - preq->xdi_extended_features.info.buffer_length_in_bytes = sizeof(features); - preq->xdi_extended_features.info.features = &features[0]; - - (*(pC->request))((ENTITY *)preq); - - if ((features[0] & DIVA_XDI_EXTENDED_FEATURES_VALID) && - (features[0] & DIVA_XDI_EXTENDED_FEATURE_MANAGEMENT_DMA)) { - dword uninitialized_var(rx_dma_magic); - if ((pC->dma_handle = diva_get_dma_descriptor(pC->request, &rx_dma_magic)) >= 0) { - pC->xbuffer[0] = LLI; - pC->xbuffer[1] = 8; - pC->xbuffer[2] = 0x40; - pC->xbuffer[3] = (byte)pC->dma_handle; - pC->xbuffer[4] = (byte)rx_dma_magic; - pC->xbuffer[5] = (byte)(rx_dma_magic >> 8); - pC->xbuffer[6] = (byte)(rx_dma_magic >> 16); - pC->xbuffer[7] = (byte)(rx_dma_magic >> 24); - pC->xbuffer[8] = (byte)(DIVA_MAX_MANAGEMENT_TRANSFER_SIZE & 0xFF); - pC->xbuffer[9] = (byte)(DIVA_MAX_MANAGEMENT_TRANSFER_SIZE >> 8); - pC->xbuffer[10] = 0; - - assign_data_length = 11; - } - } else { - pC->dma_handle = -1; - } - - e->Id = MAN_ID; - e->callback = diva_maint_xdi_cb; - e->XNum = 1; - e->X = &pC->XData; - e->Req = ASSIGN; - e->ReqCh = 0; - e->X->PLength = assign_data_length; - e->X->P = (byte *)&pC->xbuffer[0]; - - pC->request_pending = 1; - - return (0); - } - - return (-1); -} - -int SuperTraceREMOVE(void *AdapterHandle) { - diva_maint_client_t *pC = (diva_maint_client_t *)AdapterHandle; - - if (pC && pC->pIdiLib && pC->request) { - ENTITY *e = (ENTITY *)(*(pC->pIdiLib->DivaSTraceGetHandle))(pC->pIdiLib->hLib); - - e->XNum = 1; - e->X = &pC->XData; - e->Req = REMOVE; - e->ReqCh = 0; - e->X->PLength = 1; - e->X->P = (byte *)&pC->xbuffer[0]; - pC->xbuffer[0] = 0; - - pC->request_pending = 1; - - return (0); - } - - return (-1); -} - -int SuperTraceTraceOnRequest(void *hAdapter, const char *name, byte *data) { - diva_maint_client_t *pC = (diva_maint_client_t *)hAdapter; - - if (pC && pC->pIdiLib && pC->request) { - ENTITY *e = (ENTITY *)(*(pC->pIdiLib->DivaSTraceGetHandle))(pC->pIdiLib->hLib); - byte *xdata = (byte *)&pC->xbuffer[0]; - char tmp = 0; - word length; - - if (!strcmp(name, "\\")) { /* Read ROOT */ - name = &tmp; - } - length = SuperTraceCreateReadReq(xdata, name); - single_p(xdata, &length, 0); /* End Of Message */ - e->Req = MAN_EVENT_ON; - e->ReqCh = 0; - e->X->PLength = length; - e->X->P = (byte *)xdata; - - pC->request_pending = 1; - - return (0); - } - - return (-1); -} - -int SuperTraceWriteVar(void *AdapterHandle, - byte *data, - const char *name, - void *var, - byte type, - byte var_length) { - diva_maint_client_t *pC = (diva_maint_client_t *)AdapterHandle; - - if (pC && pC->pIdiLib && pC->request) { - ENTITY *e = (ENTITY *)(*(pC->pIdiLib->DivaSTraceGetHandle))(pC->pIdiLib->hLib); - diva_man_var_header_t *pVar = (diva_man_var_header_t *)&pC->xbuffer[0]; - word length = SuperTraceCreateReadReq((byte *)pVar, name); - - memcpy(&pC->xbuffer[length], var, var_length); - length += var_length; - pVar->length += var_length; - pVar->value_length = var_length; - pVar->type = type; - single_p((byte *)pVar, &length, 0); /* End Of Message */ - - e->Req = MAN_WRITE; - e->ReqCh = 0; - e->X->PLength = length; - e->X->P = (byte *)pVar; - - pC->request_pending = 1; - - return (0); - } - - return (-1); -} - -int SuperTraceExecuteRequest(void *AdapterHandle, - const char 
*name, - byte *data) { - diva_maint_client_t *pC = (diva_maint_client_t *)AdapterHandle; - - if (pC && pC->pIdiLib && pC->request) { - ENTITY *e = (ENTITY *)(*(pC->pIdiLib->DivaSTraceGetHandle))(pC->pIdiLib->hLib); - byte *xdata = (byte *)&pC->xbuffer[0]; - word length; - - length = SuperTraceCreateReadReq(xdata, name); - single_p(xdata, &length, 0); /* End Of Message */ - - e->Req = MAN_EXECUTE; - e->ReqCh = 0; - e->X->PLength = length; - e->X->P = (byte *)xdata; - - pC->request_pending = 1; - - return (0); - } - - return (-1); -} - -static word SuperTraceCreateReadReq(byte *P, const char *path) { - byte var_length; - byte *plen; - - var_length = (byte)strlen(path); - - *P++ = ESC; - plen = P++; - *P++ = 0x80; /* MAN_IE */ - *P++ = 0x00; /* Type */ - *P++ = 0x00; /* Attribute */ - *P++ = 0x00; /* Status */ - *P++ = 0x00; /* Variable Length */ - *P++ = var_length; - memcpy(P, path, var_length); - P += var_length; - *plen = var_length + 0x06; - - return ((word)(var_length + 0x08)); -} - -static void single_p(byte *P, word *PLength, byte Id) { - P[(*PLength)++] = Id; -} - -static void diva_maint_xdi_cb(ENTITY *e) { - diva_strace_context_t *pLib = DIVAS_CONTAINING_RECORD(e, diva_strace_context_t, e); - diva_maint_client_t *pC; - diva_os_spin_lock_magic_t old_irql, old_irql1; - - - diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "xdi_cb"); - diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "xdi_cb"); - - pC = (diva_maint_client_t *)pLib->hAdapter; - - if ((e->complete == 255) || (pC->dma_handle < 0)) { - if ((*(pLib->instance.DivaSTraceMessageInput))(&pLib->instance)) { - diva_mnt_internal_dprintf(0, DLI_ERR, "Trace internal library error"); - } - } else { - /* - Process combined management interface indication - */ - if ((*(pLib->instance.DivaSTraceMessageInput))(&pLib->instance)) { - diva_mnt_internal_dprintf(0, DLI_ERR, "Trace internal library error (DMA mode)"); - } - } - - diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "xdi_cb"); - - - if (pC->request_pending) { - pC->request_pending = 0; - (*(pC->request))(e); - } - - diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "xdi_cb"); -} - - -static void diva_maint_error(void *user_context, - diva_strace_library_interface_t *hLib, - int Adapter, - int error, - const char *file, - int line) { - diva_mnt_internal_dprintf(0, DLI_ERR, - "Trace library error(%d) A(%d) %s %d", error, Adapter, file, line); -} - -static void print_ie(diva_trace_ie_t *ie, char *buffer, int length) { - int i; - - buffer[0] = 0; - - if (length > 32) { - for (i = 0; ((i < ie->length) && (length > 3)); i++) { - sprintf(buffer, "%02x", ie->data[i]); - buffer += 2; - length -= 2; - if (i < (ie->length - 1)) { - strcpy(buffer, " "); - buffer++; - length--; - } - } - } -} - -static void diva_maint_state_change_notify(void *user_context, - diva_strace_library_interface_t *hLib, - int Adapter, - diva_trace_line_state_t *channel, - int notify_subject) { - diva_maint_client_t *pC = (diva_maint_client_t *)user_context; - diva_trace_fax_state_t *fax = &channel->fax; - diva_trace_modem_state_t *modem = &channel->modem; - char tmp[256]; - - if (!pC->hDbg) { - return; - } - - switch (notify_subject) { - case DIVA_SUPER_TRACE_NOTIFY_LINE_CHANGE: { - int view = (TraceFilter[0] == 0); - /* - Process selective Trace - */ - if (channel->Line[0] == 'I' && channel->Line[1] == 'd' && - channel->Line[2] == 'l' && channel->Line[3] == 'e') { - if ((TraceFilterIdent == pC->hDbg->id) && (TraceFilterChannel == (int)channel->ChannelNumber)) { - (*(hLib->DivaSTraceSetBChannel))(hLib, 
(int)channel->ChannelNumber, 0); - (*(hLib->DivaSTraceSetAudioTap))(hLib, (int)channel->ChannelNumber, 0); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, "Selective Trace OFF for Ch=%d", - (int)channel->ChannelNumber); - TraceFilterIdent = -1; - TraceFilterChannel = -1; - view = 1; - } - } else if (TraceFilter[0] && (TraceFilterIdent < 0) && !(diva_mnt_cmp_nmbr(&channel->RemoteAddress[0]) && - diva_mnt_cmp_nmbr(&channel->LocalAddress[0]))) { - - if ((pC->hDbg->dbgMask & DIVA_MGT_DBG_IFC_BCHANNEL) != 0) { /* Activate B-channel trace */ - (*(hLib->DivaSTraceSetBChannel))(hLib, (int)channel->ChannelNumber, 1); - } - if ((pC->hDbg->dbgMask & DIVA_MGT_DBG_IFC_AUDIO) != 0) { /* Activate AudioTap Trace */ - (*(hLib->DivaSTraceSetAudioTap))(hLib, (int)channel->ChannelNumber, 1); - } - - TraceFilterIdent = pC->hDbg->id; - TraceFilterChannel = (int)channel->ChannelNumber; - - if (TraceFilterIdent >= 0) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, "Selective Trace ON for Ch=%d", - (int)channel->ChannelNumber); - view = 1; - } - } - if (view && (pC->hDbg->dbgMask & DIVA_MGT_DBG_LINE_EVENTS)) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L Ch = %d", - (int)channel->ChannelNumber); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L Status = <%s>", &channel->Line[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L Layer1 = <%s>", &channel->Framing[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L Layer2 = <%s>", &channel->Layer2[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L Layer3 = <%s>", &channel->Layer3[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L RAddr = <%s>", - &channel->RemoteAddress[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L RSAddr = <%s>", - &channel->RemoteSubAddress[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L LAddr = <%s>", - &channel->LocalAddress[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L LSAddr = <%s>", - &channel->LocalSubAddress[0]); - print_ie(&channel->call_BC, tmp, sizeof(tmp)); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L BC = <%s>", tmp); - print_ie(&channel->call_HLC, tmp, sizeof(tmp)); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L HLC = <%s>", tmp); - print_ie(&channel->call_LLC, tmp, sizeof(tmp)); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L LLC = <%s>", tmp); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L CR = 0x%x", channel->CallReference); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L Disc = 0x%x", - channel->LastDisconnecCause); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "L Owner = <%s>", &channel->UserID[0]); - } - - } break; - - case DIVA_SUPER_TRACE_NOTIFY_MODEM_CHANGE: - if (pC->hDbg->dbgMask & DIVA_MGT_DBG_MDM_PROGRESS) { - { - int ch = TraceFilterChannel; - int id = TraceFilterIdent; - - if ((id >= 0) && (ch >= 0) && (id < ARRAY_SIZE(clients)) && - (clients[id].Dbg.id == (byte)id) && (clients[id].pIdiLib == hLib)) { - if (ch != (int)modem->ChannelNumber) { - break; - } - } else if (TraceFilter[0] != 0) { - break; - } - } - - - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Ch = %lu", - (int)modem->ChannelNumber); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Event = %lu", modem->Event); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Norm = %lu", modem->Norm); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Opts. 
= 0x%08x", modem->Options); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Tx = %lu Bps", modem->TxSpeed); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Rx = %lu Bps", modem->RxSpeed); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM RT = %lu mSec", - modem->RoundtripMsec); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Sr = %lu", modem->SymbolRate); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Rxl = %d dBm", modem->RxLeveldBm); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM El = %d dBm", modem->EchoLeveldBm); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM SNR = %lu dB", modem->SNRdb); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM MAE = %lu", modem->MAE); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM LRet = %lu", - modem->LocalRetrains); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM RRet = %lu", - modem->RemoteRetrains); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM LRes = %lu", modem->LocalResyncs); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM RRes = %lu", - modem->RemoteResyncs); - if (modem->Event == 3) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "MDM Disc = %lu", modem->DiscReason); - } - } - if ((modem->Event == 3) && (pC->hDbg->dbgMask & DIVA_MGT_DBG_MDM_STATISTICS)) { - (*(pC->pIdiLib->DivaSTraceGetModemStatistics))(pC->pIdiLib); - } - break; - - case DIVA_SUPER_TRACE_NOTIFY_FAX_CHANGE: - if (pC->hDbg->dbgMask & DIVA_MGT_DBG_FAX_PROGRESS) { - { - int ch = TraceFilterChannel; - int id = TraceFilterIdent; - - if ((id >= 0) && (ch >= 0) && (id < ARRAY_SIZE(clients)) && - (clients[id].Dbg.id == (byte)id) && (clients[id].pIdiLib == hLib)) { - if (ch != (int)fax->ChannelNumber) { - break; - } - } else if (TraceFilter[0] != 0) { - break; - } - } - - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Ch = %lu", (int)fax->ChannelNumber); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Event = %lu", fax->Event); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Pages = %lu", fax->Page_Counter); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Feat. = 0x%08x", fax->Features); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX ID = <%s>", &fax->Station_ID[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Saddr = <%s>", &fax->Subaddress[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Pwd = <%s>", &fax->Password[0]); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Speed = %lu", fax->Speed); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Res. 
= 0x%08x", fax->Resolution); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Width = %lu", fax->Paper_Width); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Length= %lu", fax->Paper_Length); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX SLT = %lu", fax->Scanline_Time); - if (fax->Event == 3) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, "FAX Disc = %lu", fax->Disc_Reason); - } - } - if ((fax->Event == 3) && (pC->hDbg->dbgMask & DIVA_MGT_DBG_FAX_STATISTICS)) { - (*(pC->pIdiLib->DivaSTraceGetFaxStatistics))(pC->pIdiLib); - } - break; - - case DIVA_SUPER_TRACE_INTERFACE_CHANGE: - if (pC->hDbg->dbgMask & DIVA_MGT_DBG_IFC_EVENTS) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, - "Layer 1 -> [%s]", channel->pInterface->Layer1); - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_STAT, - "Layer 2 -> [%s]", channel->pInterface->Layer2); - } - break; - - case DIVA_SUPER_TRACE_NOTIFY_STAT_CHANGE: - if (pC->hDbg->dbgMask & DIVA_MGT_DBG_IFC_STATISTICS) { - /* - Incoming Statistics - */ - if (channel->pInterfaceStat->inc.Calls) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Calls =%lu", channel->pInterfaceStat->inc.Calls); - } - if (channel->pInterfaceStat->inc.Connected) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Connected =%lu", channel->pInterfaceStat->inc.Connected); - } - if (channel->pInterfaceStat->inc.User_Busy) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Busy =%lu", channel->pInterfaceStat->inc.User_Busy); - } - if (channel->pInterfaceStat->inc.Call_Rejected) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Rejected =%lu", channel->pInterfaceStat->inc.Call_Rejected); - } - if (channel->pInterfaceStat->inc.Wrong_Number) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Wrong Nr =%lu", channel->pInterfaceStat->inc.Wrong_Number); - } - if (channel->pInterfaceStat->inc.Incompatible_Dst) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Incomp. 
Dest =%lu", channel->pInterfaceStat->inc.Incompatible_Dst); - } - if (channel->pInterfaceStat->inc.Out_of_Order) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Out of Order =%lu", channel->pInterfaceStat->inc.Out_of_Order); - } - if (channel->pInterfaceStat->inc.Ignored) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Inc Ignored =%lu", channel->pInterfaceStat->inc.Ignored); - } - - /* - Outgoing Statistics - */ - if (channel->pInterfaceStat->outg.Calls) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Outg Calls =%lu", channel->pInterfaceStat->outg.Calls); - } - if (channel->pInterfaceStat->outg.Connected) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Outg Connected =%lu", channel->pInterfaceStat->outg.Connected); - } - if (channel->pInterfaceStat->outg.User_Busy) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Outg Busy =%lu", channel->pInterfaceStat->outg.User_Busy); - } - if (channel->pInterfaceStat->outg.No_Answer) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Outg No Answer =%lu", channel->pInterfaceStat->outg.No_Answer); - } - if (channel->pInterfaceStat->outg.Wrong_Number) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Outg Wrong Nr =%lu", channel->pInterfaceStat->outg.Wrong_Number); - } - if (channel->pInterfaceStat->outg.Call_Rejected) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Outg Rejected =%lu", channel->pInterfaceStat->outg.Call_Rejected); - } - if (channel->pInterfaceStat->outg.Other_Failures) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "Outg Other Failures =%lu", channel->pInterfaceStat->outg.Other_Failures); - } - } - break; - - case DIVA_SUPER_TRACE_NOTIFY_MDM_STAT_CHANGE: - if (channel->pInterfaceStat->mdm.Disc_Normal) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Normal = %lu", channel->pInterfaceStat->mdm.Disc_Normal); - } - if (channel->pInterfaceStat->mdm.Disc_Unspecified) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Unsp. = %lu", channel->pInterfaceStat->mdm.Disc_Unspecified); - } - if (channel->pInterfaceStat->mdm.Disc_Busy_Tone) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Busy Tone = %lu", channel->pInterfaceStat->mdm.Disc_Busy_Tone); - } - if (channel->pInterfaceStat->mdm.Disc_Congestion) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Congestion = %lu", channel->pInterfaceStat->mdm.Disc_Congestion); - } - if (channel->pInterfaceStat->mdm.Disc_Carr_Wait) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Carrier Wait = %lu", channel->pInterfaceStat->mdm.Disc_Carr_Wait); - } - if (channel->pInterfaceStat->mdm.Disc_Trn_Timeout) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Trn. T.o. 
= %lu", channel->pInterfaceStat->mdm.Disc_Trn_Timeout); - } - if (channel->pInterfaceStat->mdm.Disc_Incompat) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Incompatible = %lu", channel->pInterfaceStat->mdm.Disc_Incompat); - } - if (channel->pInterfaceStat->mdm.Disc_Frame_Rej) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc Frame Reject = %lu", channel->pInterfaceStat->mdm.Disc_Frame_Rej); - } - if (channel->pInterfaceStat->mdm.Disc_V42bis) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "MDM Disc V.42bis = %lu", channel->pInterfaceStat->mdm.Disc_V42bis); - } - break; - - case DIVA_SUPER_TRACE_NOTIFY_FAX_STAT_CHANGE: - if (channel->pInterfaceStat->fax.Disc_Normal) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Normal = %lu", channel->pInterfaceStat->fax.Disc_Normal); - } - if (channel->pInterfaceStat->fax.Disc_Not_Ident) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Not Ident. = %lu", channel->pInterfaceStat->fax.Disc_Not_Ident); - } - if (channel->pInterfaceStat->fax.Disc_No_Response) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc No Response = %lu", channel->pInterfaceStat->fax.Disc_No_Response); - } - if (channel->pInterfaceStat->fax.Disc_Retries) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Max Retries = %lu", channel->pInterfaceStat->fax.Disc_Retries); - } - if (channel->pInterfaceStat->fax.Disc_Unexp_Msg) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Unexp. Msg. = %lu", channel->pInterfaceStat->fax.Disc_Unexp_Msg); - } - if (channel->pInterfaceStat->fax.Disc_No_Polling) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc No Polling = %lu", channel->pInterfaceStat->fax.Disc_No_Polling); - } - if (channel->pInterfaceStat->fax.Disc_Training) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Training = %lu", channel->pInterfaceStat->fax.Disc_Training); - } - if (channel->pInterfaceStat->fax.Disc_Unexpected) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Unexpected = %lu", channel->pInterfaceStat->fax.Disc_Unexpected); - } - if (channel->pInterfaceStat->fax.Disc_Application) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Application = %lu", channel->pInterfaceStat->fax.Disc_Application); - } - if (channel->pInterfaceStat->fax.Disc_Incompat) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Incompatible = %lu", channel->pInterfaceStat->fax.Disc_Incompat); - } - if (channel->pInterfaceStat->fax.Disc_No_Command) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc No Command = %lu", channel->pInterfaceStat->fax.Disc_No_Command); - } - if (channel->pInterfaceStat->fax.Disc_Long_Msg) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Long Msg. = %lu", channel->pInterfaceStat->fax.Disc_Long_Msg); - } - if (channel->pInterfaceStat->fax.Disc_Supervisor) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Supervisor = %lu", channel->pInterfaceStat->fax.Disc_Supervisor); - } - if (channel->pInterfaceStat->fax.Disc_SUB_SEP_PWD) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc SUP SEP PWD = %lu", channel->pInterfaceStat->fax.Disc_SUB_SEP_PWD); - } - if (channel->pInterfaceStat->fax.Disc_Invalid_Msg) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Invalid Msg. 
= %lu", channel->pInterfaceStat->fax.Disc_Invalid_Msg); - } - if (channel->pInterfaceStat->fax.Disc_Page_Coding) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Page Coding = %lu", channel->pInterfaceStat->fax.Disc_Page_Coding); - } - if (channel->pInterfaceStat->fax.Disc_App_Timeout) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Appl. T.o. = %lu", channel->pInterfaceStat->fax.Disc_App_Timeout); - } - if (channel->pInterfaceStat->fax.Disc_Unspecified) { - diva_mnt_internal_dprintf(pC->hDbg->id, DLI_LOG, - "FAX Disc Unspec. = %lu", channel->pInterfaceStat->fax.Disc_Unspecified); - } - break; - } -} - -/* - Receive trace information from the Management Interface and store it in the - internal trace buffer with MSG_TYPE_MLOG as is, without any filtering. - Event Filtering and formatting is done in Management Interface self. -*/ -static void diva_maint_trace_notify(void *user_context, - diva_strace_library_interface_t *hLib, - int Adapter, - void *xlog_buffer, - int length) { - diva_maint_client_t *pC = (diva_maint_client_t *)user_context; - diva_dbg_entry_head_t *pmsg; - word size; - dword sec, usec; - int ch = TraceFilterChannel; - int id = TraceFilterIdent; - - /* - Selective trace - */ - if ((id >= 0) && (ch >= 0) && (id < ARRAY_SIZE(clients)) && - (clients[id].Dbg.id == (byte)id) && (clients[id].pIdiLib == hLib)) { - const char *p = NULL; - int ch_value = -1; - MI_XLOG_HDR *TrcData = (MI_XLOG_HDR *)xlog_buffer; - - if (Adapter != clients[id].logical) { - return; /* Ignore all trace messages from other adapters */ - } - - if (TrcData->code == 24) { - p = (char *)&TrcData->code; - p += 2; - } - - /* - All L1 messages start as [dsp,ch], so we can filter this information - and filter out all messages that use different channel - */ - if (p && p[0] == '[') { - if (p[2] == ',') { - p += 3; - ch_value = *p - '0'; - } else if (p[3] == ',') { - p += 4; - ch_value = *p - '0'; - } - if (ch_value >= 0) { - if (p[2] == ']') { - ch_value = ch_value * 10 + p[1] - '0'; - } - if (ch_value != ch) { - return; /* Ignore other channels */ - } - } - } - - } else if (TraceFilter[0] != 0) { - return; /* Ignore trace if trace filter is activated, but idle */ - } - - diva_os_get_time(&sec, &usec); - - while (!(pmsg = (diva_dbg_entry_head_t *)queueAllocMsg(dbg_queue, - (word)length + sizeof(*pmsg)))) { - if ((pmsg = (diva_dbg_entry_head_t *)queuePeekMsg(dbg_queue, &size))) { - queueFreeMsg(dbg_queue); - } else { - break; - } - } - if (pmsg) { - memcpy(&pmsg[1], xlog_buffer, length); - pmsg->sequence = dbg_sequence++; - pmsg->time_sec = sec; - pmsg->time_usec = usec; - pmsg->facility = MSG_TYPE_MLOG; - pmsg->dli = pC->logical; - pmsg->drv_id = pC->hDbg->id; - pmsg->di_cpu = 0; - pmsg->data_length = length; - queueCompleteMsg(pmsg); - if (queueCount(dbg_queue)) { - diva_maint_wakeup_read(); - } - } -} - - -/* - Convert MAINT trace mask to management interface trace mask/work/facility and - issue command to management interface -*/ -static void diva_change_management_debug_mask(diva_maint_client_t *pC, dword old_mask) { - if (pC->request && pC->hDbg && pC->pIdiLib) { - dword changed = pC->hDbg->dbgMask ^ old_mask; - - if (changed & DIVA_MGT_DBG_TRACE) { - (*(pC->pIdiLib->DivaSTraceSetInfo))(pC->pIdiLib, - (pC->hDbg->dbgMask & DIVA_MGT_DBG_TRACE) != 0); - } - if (changed & DIVA_MGT_DBG_DCHAN) { - (*(pC->pIdiLib->DivaSTraceSetDChannel))(pC->pIdiLib, - (pC->hDbg->dbgMask & DIVA_MGT_DBG_DCHAN) != 0); - } - if (!TraceFilter[0]) { - if (changed & DIVA_MGT_DBG_IFC_BCHANNEL) { - int i, 
state = ((pC->hDbg->dbgMask & DIVA_MGT_DBG_IFC_BCHANNEL) != 0); - - for (i = 0; i < pC->channels; i++) { - (*(pC->pIdiLib->DivaSTraceSetBChannel))(pC->pIdiLib, i + 1, state); - } - } - if (changed & DIVA_MGT_DBG_IFC_AUDIO) { - int i, state = ((pC->hDbg->dbgMask & DIVA_MGT_DBG_IFC_AUDIO) != 0); - - for (i = 0; i < pC->channels; i++) { - (*(pC->pIdiLib->DivaSTraceSetAudioTap))(pC->pIdiLib, i + 1, state); - } - } - } - } -} - - -void diva_mnt_internal_dprintf(dword drv_id, dword type, char *fmt, ...) { - va_list ap; - - va_start(ap, fmt); - DI_format(0, (word)drv_id, (int)type, fmt, ap); - va_end(ap); -} - -/* - Shutdown all adapters before driver removal -*/ -int diva_mnt_shutdown_xdi_adapters(void) { - diva_os_spin_lock_magic_t old_irql, old_irql1; - int i, fret = 0; - byte *pmem; - - - for (i = 1; i < ARRAY_SIZE(clients); i++) { - pmem = NULL; - - diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "unload"); - diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "unload"); - - if (clients[i].hDbg && clients[i].pIdiLib && clients[i].request) { - if ((*(clients[i].pIdiLib->DivaSTraceLibraryStop))(clients[i].pIdiLib) == 1) { - /* - Adapter removal complete - */ - if (clients[i].pIdiLib) { - (*(clients[i].pIdiLib->DivaSTraceLibraryFinit))(clients[i].pIdiLib->hLib); - clients[i].pIdiLib = NULL; - - pmem = clients[i].pmem; - clients[i].pmem = NULL; - } - clients[i].hDbg = NULL; - clients[i].request_pending = 0; - - if (clients[i].dma_handle >= 0) { - /* - Free DMA handle - */ - diva_free_dma_descriptor(clients[i].request, clients[i].dma_handle); - clients[i].dma_handle = -1; - } - clients[i].request = NULL; - } else { - fret = -1; - } - } - - diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "unload"); - if (clients[i].hDbg && clients[i].pIdiLib && clients[i].request && clients[i].request_pending) { - clients[i].request_pending = 0; - (*(clients[i].request))((ENTITY *)(*(clients[i].pIdiLib->DivaSTraceGetHandle))(clients[i].pIdiLib->hLib)); - if (clients[i].dma_handle >= 0) { - diva_free_dma_descriptor(clients[i].request, clients[i].dma_handle); - clients[i].dma_handle = -1; - } - } - diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "unload"); - - if (pmem) { - diva_os_free(0, pmem); - } - } - - return (fret); -} - -/* - Set/Read the trace filter used for selective tracing. 
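/*
 * [Editor's sketch -- not part of the original patch.]
 * diva_mnt_shutdown_xdi_adapters() above shows a pattern used throughout
 * this file: a pending adapter request is recorded under dbg_q_lock but
 * only issued after that lock is dropped -- presumably because the
 * request() path can post trace messages that take dbg_q_lock again
 * (the patch does not state the reason). A self-contained sketch with
 * hypothetical names (struct client, flush_pending_request):
 */
#include <pthread.h>

struct client {
	int request_pending;
	void (*request)(void *entity);
	void *entity;
};

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;

static void flush_pending_request(struct client *c)
{
	int pending;

	pthread_mutex_lock(&q_lock);
	pending = c->request_pending;		/* snapshot under the lock */
	c->request_pending = 0;
	pthread_mutex_unlock(&q_lock);		/* drop the lock first ...      */

	if (pending)
		c->request(c->entity);		/* ... then re-enter the driver */
}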
- Affects B- and Audio Tap trace mask at run time -*/ -int diva_set_trace_filter(int filter_length, const char *filter) { - diva_os_spin_lock_magic_t old_irql, old_irql1; - int i, ch, on, client_b_on, client_atap_on; - - diva_os_enter_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask"); - diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "write_filter"); - - if (filter_length <= DIVA_MAX_SELECTIVE_FILTER_LENGTH) { - memcpy(&TraceFilter[0], filter, filter_length); - if (TraceFilter[filter_length]) { - TraceFilter[filter_length] = 0; - } - if (TraceFilter[0] == '*') { - TraceFilter[0] = 0; - } - } else { - filter_length = -1; - } - - TraceFilterIdent = -1; - TraceFilterChannel = -1; - - on = (TraceFilter[0] == 0); - - for (i = 1; i < ARRAY_SIZE(clients); i++) { - if (clients[i].hDbg && clients[i].pIdiLib && clients[i].request) { - client_b_on = on && ((clients[i].hDbg->dbgMask & DIVA_MGT_DBG_IFC_BCHANNEL) != 0); - client_atap_on = on && ((clients[i].hDbg->dbgMask & DIVA_MGT_DBG_IFC_AUDIO) != 0); - for (ch = 0; ch < clients[i].channels; ch++) { - (*(clients[i].pIdiLib->DivaSTraceSetBChannel))(clients[i].pIdiLib->hLib, ch + 1, client_b_on); - (*(clients[i].pIdiLib->DivaSTraceSetAudioTap))(clients[i].pIdiLib->hLib, ch + 1, client_atap_on); - } - } - } - - for (i = 1; i < ARRAY_SIZE(clients); i++) { - if (clients[i].hDbg && clients[i].pIdiLib && clients[i].request && clients[i].request_pending) { - diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "write_filter"); - clients[i].request_pending = 0; - (*(clients[i].request))((ENTITY *)(*(clients[i].pIdiLib->DivaSTraceGetHandle))(clients[i].pIdiLib->hLib)); - diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "write_filter"); - } - } - - diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "write_filter"); - diva_os_leave_spin_lock(&dbg_adapter_lock, &old_irql1, "dbg mask"); - - return (filter_length); -} - -int diva_get_trace_filter(int max_length, char *filter) { - diva_os_spin_lock_magic_t old_irql; - int len; - - diva_os_enter_spin_lock(&dbg_q_lock, &old_irql, "read_filter"); - len = strlen(&TraceFilter[0]) + 1; - if (max_length >= len) { - memcpy(filter, &TraceFilter[0], len); - } - diva_os_leave_spin_lock(&dbg_q_lock, &old_irql, "read_filter"); - - return (len); -} - -static int diva_dbg_cmp_key(const char *ref, const char *key) { - while (*key && (*ref++ == *key++)); - return (!*key && !*ref); -} - -/* - In case trace filter starts with "C" character then - all following characters are interpreted as command. 
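/*
 * [Editor's sketch -- not part of the original patch.]
 * Aside from the "C<command>" form this comment goes on to describe,
 * the filter matches when it is a suffix of the call number:
 * diva_mnt_cmp_nmbr() below walks both strings backwards. An
 * equivalent, self-contained formulation (filter_is_suffix is an
 * illustrative name):
 */
#include <string.h>

/* returns 0 on match, -1 otherwise, mirroring the driver's convention */
static int filter_is_suffix(const char *nmbr, const char *filter)
{
	size_t nl = strlen(nmbr), fl = strlen(filter);

	if (!fl || fl > nl)
		return -1;
	return memcmp(nmbr + nl - fl, filter, fl) ? -1 : 0;
}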
- Following commands are available: - - single, trace single call at time, independent from CPN/CiPN -*/ -static int diva_mnt_cmp_nmbr(const char *nmbr) { - const char *ref = &TraceFilter[0]; - int ref_len = strlen(&TraceFilter[0]), nmbr_len = strlen(nmbr); - - if (ref[0] == 'C') { - if (diva_dbg_cmp_key(&ref[1], "single")) { - return (0); - } - return (-1); - } - - if (!ref_len || (ref_len > nmbr_len)) { - return (-1); - } - - nmbr = nmbr + nmbr_len - 1; - ref = ref + ref_len - 1; - - while (ref_len--) { - if (*nmbr-- != *ref--) { - return (-1); - } - } - - return (0); -} - -static int diva_get_dma_descriptor(IDI_CALL request, dword *dma_magic) { - ENTITY e; - IDI_SYNC_REQ *pReq = (IDI_SYNC_REQ *)&e; - - if (!request) { - return (-1); - } - - pReq->xdi_dma_descriptor_operation.Req = 0; - pReq->xdi_dma_descriptor_operation.Rc = IDI_SYNC_REQ_DMA_DESCRIPTOR_OPERATION; - - pReq->xdi_dma_descriptor_operation.info.operation = IDI_SYNC_REQ_DMA_DESCRIPTOR_ALLOC; - pReq->xdi_dma_descriptor_operation.info.descriptor_number = -1; - pReq->xdi_dma_descriptor_operation.info.descriptor_address = NULL; - pReq->xdi_dma_descriptor_operation.info.descriptor_magic = 0; - - (*request)((ENTITY *)pReq); - - if (!pReq->xdi_dma_descriptor_operation.info.operation && - (pReq->xdi_dma_descriptor_operation.info.descriptor_number >= 0) && - pReq->xdi_dma_descriptor_operation.info.descriptor_magic) { - *dma_magic = pReq->xdi_dma_descriptor_operation.info.descriptor_magic; - return (pReq->xdi_dma_descriptor_operation.info.descriptor_number); - } else { - return (-1); - } -} - -static void diva_free_dma_descriptor(IDI_CALL request, int nr) { - ENTITY e; - IDI_SYNC_REQ *pReq = (IDI_SYNC_REQ *)&e; - - if (!request || (nr < 0)) { - return; - } - - pReq->xdi_dma_descriptor_operation.Req = 0; - pReq->xdi_dma_descriptor_operation.Rc = IDI_SYNC_REQ_DMA_DESCRIPTOR_OPERATION; - - pReq->xdi_dma_descriptor_operation.info.operation = IDI_SYNC_REQ_DMA_DESCRIPTOR_FREE; - pReq->xdi_dma_descriptor_operation.info.descriptor_number = nr; - pReq->xdi_dma_descriptor_operation.info.descriptor_address = NULL; - pReq->xdi_dma_descriptor_operation.info.descriptor_magic = 0; - - (*request)((ENTITY *)pReq); -} diff --git a/drivers/isdn/hardware/eicon/debug_if.h b/drivers/isdn/hardware/eicon/debug_if.h deleted file mode 100644 index fc5953a35ff6..000000000000 --- a/drivers/isdn/hardware/eicon/debug_if.h +++ /dev/null @@ -1,88 +0,0 @@ -/* - * - Copyright (c) Eicon Technology Corporation, 2000. - * - This source file is supplied for the use with Eicon - Technology Corporation's range of DIVA Server Adapters. - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef __DIVA_DEBUG_IF_H__ -#define __DIVA_DEBUG_IF_H__ -#define MSG_TYPE_DRV_ID 0x0001 -#define MSG_TYPE_FLAGS 0x0002 -#define MSG_TYPE_STRING 0x0003 -#define MSG_TYPE_BINARY 0x0004 -#define MSG_TYPE_MLOG 0x0005 - -#define MSG_FRAME_MAX_SIZE 2150 - -typedef struct _diva_dbg_entry_head { - dword sequence; - dword time_sec; - dword time_usec; - dword facility; - dword dli; - dword drv_id; - dword di_cpu; - dword data_length; -} diva_dbg_entry_head_t; - -int diva_maint_init(byte *base, unsigned long length, int do_init); -void *diva_maint_finit(void); -dword diva_dbg_q_length(void); -diva_dbg_entry_head_t *diva_maint_get_message(word *size, - diva_os_spin_lock_magic_t *old_irql); -void diva_maint_ack_message(int do_release, - diva_os_spin_lock_magic_t *old_irql); -void diva_maint_prtComp(char *format, ...); -void diva_maint_wakeup_read(void); -int diva_get_driver_info(dword id, byte *data, int data_length); -int diva_get_driver_dbg_mask(dword id, byte *data); -int diva_set_driver_dbg_mask(dword id, dword mask); -void diva_mnt_remove_xdi_adapter(const DESCRIPTOR *d); -void diva_mnt_add_xdi_adapter(const DESCRIPTOR *d); -int diva_mnt_shutdown_xdi_adapters(void); - -#define DIVA_MAX_SELECTIVE_FILTER_LENGTH 127 -int diva_set_trace_filter(int filter_length, const char *filter); -int diva_get_trace_filter(int max_length, char *filter); - - -#define DITRACE_CMD_GET_DRIVER_INFO 1 -#define DITRACE_READ_DRIVER_DBG_MASK 2 -#define DITRACE_WRITE_DRIVER_DBG_MASK 3 -#define DITRACE_READ_TRACE_ENTRY 4 -#define DITRACE_READ_TRACE_ENTRYS 5 -#define DITRACE_WRITE_SELECTIVE_TRACE_FILTER 6 -#define DITRACE_READ_SELECTIVE_TRACE_FILTER 7 - -/* - Trace levels for debug via management interface -*/ -#define DIVA_MGT_DBG_TRACE 0x00000001 /* All trace messages from the card */ -#define DIVA_MGT_DBG_DCHAN 0x00000002 /* All D-channel related trace messages */ -#define DIVA_MGT_DBG_MDM_PROGRESS 0x00000004 /* Modem progress events */ -#define DIVA_MGT_DBG_FAX_PROGRESS 0x00000008 /* Fax progress events */ -#define DIVA_MGT_DBG_IFC_STATISTICS 0x00000010 /* Interface call statistics */ -#define DIVA_MGT_DBG_MDM_STATISTICS 0x00000020 /* Global modem statistics */ -#define DIVA_MGT_DBG_FAX_STATISTICS 0x00000040 /* Global fax statistics */ -#define DIVA_MGT_DBG_LINE_EVENTS 0x00000080 /* Line state events */ -#define DIVA_MGT_DBG_IFC_EVENTS 0x00000100 /* Interface/L1/L2 state events */ -#define DIVA_MGT_DBG_IFC_BCHANNEL 0x00000200 /* B-Channel trace for all channels */ -#define DIVA_MGT_DBG_IFC_AUDIO 0x00000400 /* Audio Tap trace for all channels */ - -# endif /* __DIVA_DEBUG_IF_H__ */ diff --git a/drivers/isdn/hardware/eicon/debuglib.c b/drivers/isdn/hardware/eicon/debuglib.c deleted file mode 100644 index d5b1092a54f0..000000000000 --- a/drivers/isdn/hardware/eicon/debuglib.c +++ /dev/null @@ -1,156 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. 
- * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ - -#include "debuglib.h" - -#ifdef DIVA_NO_DEBUGLIB -static DIVA_DI_PRINTF dprintf; -#else /* DIVA_NO_DEBUGLIB */ - -_DbgHandle_ myDriverDebugHandle = { 0 /*!Registered*/, DBG_HANDLE_VERSION }; -DIVA_DI_PRINTF dprintf = no_printf; -/*****************************************************************************/ -#define DBG_FUNC(name) \ - void \ - myDbgPrint_##name(char *format, ...) \ - { va_list ap; \ - if (myDriverDebugHandle.dbg_prt) \ - { va_start(ap, format); \ - (myDriverDebugHandle.dbg_prt) \ - (myDriverDebugHandle.id, DLI_##name, format, ap); \ - va_end(ap); \ - } } -DBG_FUNC(LOG) -DBG_FUNC(FTL) -DBG_FUNC(ERR) -DBG_FUNC(TRC) -DBG_FUNC(MXLOG) -DBG_FUNC(FTL_MXLOG) -void -myDbgPrint_EVL(long msgID, ...) -{ va_list ap; - if (myDriverDebugHandle.dbg_ev) - { va_start(ap, msgID); - (myDriverDebugHandle.dbg_ev) - (myDriverDebugHandle.id, (unsigned long)msgID, ap); - va_end(ap); - } } -DBG_FUNC(REG) -DBG_FUNC(MEM) -DBG_FUNC(SPL) -DBG_FUNC(IRP) -DBG_FUNC(TIM) -DBG_FUNC(BLK) -DBG_FUNC(TAPI) -DBG_FUNC(NDIS) -DBG_FUNC(CONN) -DBG_FUNC(STAT) -DBG_FUNC(SEND) -DBG_FUNC(RECV) -DBG_FUNC(PRV0) -DBG_FUNC(PRV1) -DBG_FUNC(PRV2) -DBG_FUNC(PRV3) -/*****************************************************************************/ -int -DbgRegister(char *drvName, char *drvTag, unsigned long dbgMask) -{ - int len; -/* - * deregister (if already registered) and zero out myDriverDebugHandle - */ - DbgDeregister(); -/* - * initialize the debug handle - */ - myDriverDebugHandle.Version = DBG_HANDLE_VERSION; - myDriverDebugHandle.id = -1; - myDriverDebugHandle.dbgMask = dbgMask | (DL_EVL | DL_FTL | DL_LOG); - len = strlen(drvName); - memcpy(myDriverDebugHandle.drvName, drvName, - (len < sizeof(myDriverDebugHandle.drvName)) ? - len : sizeof(myDriverDebugHandle.drvName) - 1); - len = strlen(drvTag); - memcpy(myDriverDebugHandle.drvTag, drvTag, - (len < sizeof(myDriverDebugHandle.drvTag)) ? - len : sizeof(myDriverDebugHandle.drvTag) - 1); -/* - * Try to register debugging via old (and only) interface - */ - dprintf("\000\377", &myDriverDebugHandle); - if (myDriverDebugHandle.dbg_prt) - { - return (1); - } -/* - * Check if we registered with an old maint driver (see debuglib.h) - */ - if (myDriverDebugHandle.dbg_end != NULL - /* location of 'dbg_prt' in _OldDbgHandle_ struct */ - && (myDriverDebugHandle.regTime.LowPart || - myDriverDebugHandle.regTime.HighPart)) - /* same location as in _OldDbgHandle_ struct */ - { - dprintf("%s: Cannot log to old maint driver !", drvName); - myDriverDebugHandle.dbg_end = - ((_OldDbgHandle_ *)&myDriverDebugHandle)->dbg_end; - DbgDeregister(); - } - return (0); -} -/*****************************************************************************/ -void -DbgSetLevel(unsigned long dbgMask) -{ - myDriverDebugHandle.dbgMask = dbgMask | (DL_EVL | DL_FTL | DL_LOG); -} -/*****************************************************************************/ -void -DbgDeregister(void) -{ - if (myDriverDebugHandle.dbg_end) - { - (myDriverDebugHandle.dbg_end)(&myDriverDebugHandle); - } - memset(&myDriverDebugHandle, 0, sizeof(myDriverDebugHandle)); -} -void xdi_dbg_xlog(char *x, ...) 
{ - va_list ap; - va_start(ap, x); - if (myDriverDebugHandle.dbg_end && - (myDriverDebugHandle.dbg_irq || myDriverDebugHandle.dbg_old) && - (myDriverDebugHandle.dbgMask & DL_STAT)) { - if (myDriverDebugHandle.dbg_irq) { - (*(myDriverDebugHandle.dbg_irq))(myDriverDebugHandle.id, - (x[0] != 0) ? DLI_TRC : DLI_XLOG, x, ap); - } else { - (*(myDriverDebugHandle.dbg_old))(myDriverDebugHandle.id, x, ap); - } - } - va_end(ap); -} -/*****************************************************************************/ -#endif /* DIVA_NO_DEBUGLIB */ diff --git a/drivers/isdn/hardware/eicon/debuglib.h b/drivers/isdn/hardware/eicon/debuglib.h deleted file mode 100644 index 6dcbf6afb8f9..000000000000 --- a/drivers/isdn/hardware/eicon/debuglib.h +++ /dev/null @@ -1,322 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#if !defined(__DEBUGLIB_H__) -#define __DEBUGLIB_H__ -#include -/* - * define global debug priorities - */ -#define DL_LOG 0x00000001 /* always worth mentioning */ -#define DL_FTL 0x00000002 /* always sampled error */ -#define DL_ERR 0x00000004 /* any kind of error */ -#define DL_TRC 0x00000008 /* verbose information */ -#define DL_XLOG 0x00000010 /* old xlog info */ -#define DL_MXLOG 0x00000020 /* maestra xlog info */ -#define DL_FTL_MXLOG 0x00000021 /* fatal maestra xlog info */ -#define DL_EVL 0x00000080 /* special NT eventlog msg */ -#define DL_COMPAT (DL_MXLOG | DL_XLOG) -#define DL_PRIOR_MASK (DL_EVL | DL_COMPAT | DL_TRC | DL_ERR | DL_FTL | DL_LOG) -#define DLI_LOG 0x0100 -#define DLI_FTL 0x0200 -#define DLI_ERR 0x0300 -#define DLI_TRC 0x0400 -#define DLI_XLOG 0x0500 -#define DLI_MXLOG 0x0600 -#define DLI_FTL_MXLOG 0x0600 -#define DLI_EVL 0x0800 -/* - * define OS (operating system interface) debuglevel - */ -#define DL_REG 0x00000100 /* init/query registry */ -#define DL_MEM 0x00000200 /* memory management */ -#define DL_SPL 0x00000400 /* event/spinlock handling */ -#define DL_IRP 0x00000800 /* I/O request handling */ -#define DL_TIM 0x00001000 /* timer/watchdog handling */ -#define DL_BLK 0x00002000 /* raw data block contents */ -#define DL_OS_MASK (DL_BLK | DL_TIM | DL_IRP | DL_SPL | DL_MEM | DL_REG) -#define DLI_REG 0x0900 -#define DLI_MEM 0x0A00 -#define DLI_SPL 0x0B00 -#define DLI_IRP 0x0C00 -#define DLI_TIM 0x0D00 -#define DLI_BLK 0x0E00 -/* - * define ISDN (connection interface) debuglevel - */ -#define DL_TAPI 0x00010000 /* debug TAPI interface */ -#define DL_NDIS 0x00020000 /* debug NDIS interface */ -#define DL_CONN 0x00040000 /* connection handling */ -#define DL_STAT 0x00080000 /* trace state machines */ -#define DL_SEND 0x00100000 /* trace raw xmitted data */ -#define DL_RECV 0x00200000 /* trace raw received data */ -#define DL_DATA (DL_SEND | DL_RECV) 
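/*
 * [Editor's sketch -- not part of the original patch.]
 * For the single-bit levels in this header, the DLI_* codes carry the
 * 1-based index of the matching DL_* mask bit in their high byte
 * (aggregates such as DL_FTL_MXLOG and the DL_*_MASK values are
 * exceptions); DL_INDEX(), defined a few lines further down, recovers
 * the bit number. A worked check of that relation, with EX_ copies
 * standing in for the header's values:
 */
#include <assert.h>

#define EX_DL_TRC	0x00000008	/* value copied from this header */
#define EX_DLI_TRC	0x0400		/* value copied from this header */
#define EX_DL_INDEX(x)	((((x) >> 8) & 0x00FF) - 1)

int main(void)
{
	/* DLI_TRC's high byte is 4, so bit number 4 - 1 = 3,
	   and 1UL << 3 == 0x8 == DL_TRC */
	assert((1UL << EX_DL_INDEX(EX_DLI_TRC)) == EX_DL_TRC);
	return 0;
}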
-#define DL_ISDN_MASK (DL_DATA | DL_STAT | DL_CONN | DL_NDIS | DL_TAPI) -#define DLI_TAPI 0x1100 -#define DLI_NDIS 0x1200 -#define DLI_CONN 0x1300 -#define DLI_STAT 0x1400 -#define DLI_SEND 0x1500 -#define DLI_RECV 0x1600 -/* - * define some private (unspecified) debuglevel - */ -#define DL_PRV0 0x01000000 -#define DL_PRV1 0x02000000 -#define DL_PRV2 0x04000000 -#define DL_PRV3 0x08000000 -#define DL_PRIV_MASK (DL_PRV0 | DL_PRV1 | DL_PRV2 | DL_PRV3) -#define DLI_PRV0 0x1900 -#define DLI_PRV1 0x1A00 -#define DLI_PRV2 0x1B00 -#define DLI_PRV3 0x1C00 -#define DT_INDEX(x) ((x) & 0x000F) -#define DL_INDEX(x) ((((x) >> 8) & 0x00FF) - 1) -#define DLI_NAME(x) ((x) & 0xFF00) -/* - * Debug mask for kernel mode tracing, if set the output is also sent to - * the system debug function. Requires that the project is compiled - * with _KERNEL_DBG_PRINT_ - */ -#define DL_TO_KERNEL 0x40000000 - -#ifdef DIVA_NO_DEBUGLIB -#define myDbgPrint_LOG(x...) do { } while (0); -#define myDbgPrint_FTL(x...) do { } while (0); -#define myDbgPrint_ERR(x...) do { } while (0); -#define myDbgPrint_TRC(x...) do { } while (0); -#define myDbgPrint_MXLOG(x...) do { } while (0); -#define myDbgPrint_EVL(x...) do { } while (0); -#define myDbgPrint_REG(x...) do { } while (0); -#define myDbgPrint_MEM(x...) do { } while (0); -#define myDbgPrint_SPL(x...) do { } while (0); -#define myDbgPrint_IRP(x...) do { } while (0); -#define myDbgPrint_TIM(x...) do { } while (0); -#define myDbgPrint_BLK(x...) do { } while (0); -#define myDbgPrint_TAPI(x...) do { } while (0); -#define myDbgPrint_NDIS(x...) do { } while (0); -#define myDbgPrint_CONN(x...) do { } while (0); -#define myDbgPrint_STAT(x...) do { } while (0); -#define myDbgPrint_SEND(x...) do { } while (0); -#define myDbgPrint_RECV(x...) do { } while (0); -#define myDbgPrint_PRV0(x...) do { } while (0); -#define myDbgPrint_PRV1(x...) do { } while (0); -#define myDbgPrint_PRV2(x...) do { } while (0); -#define myDbgPrint_PRV3(x...) do { } while (0); -#define DBG_TEST(func, args) do { } while (0); -#define DBG_EVL_ID(args) do { } while (0); - -#else /* DIVA_NO_DEBUGLIB */ -/* - * define low level macros for formatted & raw debugging - */ -#define DBG_DECL(func) extern void myDbgPrint_##func(char *, ...); -DBG_DECL(LOG) -DBG_DECL(FTL) -DBG_DECL(ERR) -DBG_DECL(TRC) -DBG_DECL(MXLOG) -DBG_DECL(FTL_MXLOG) -extern void myDbgPrint_EVL(long, ...); -DBG_DECL(REG) -DBG_DECL(MEM) -DBG_DECL(SPL) -DBG_DECL(IRP) -DBG_DECL(TIM) -DBG_DECL(BLK) -DBG_DECL(TAPI) -DBG_DECL(NDIS) -DBG_DECL(CONN) -DBG_DECL(STAT) -DBG_DECL(SEND) -DBG_DECL(RECV) -DBG_DECL(PRV0) -DBG_DECL(PRV1) -DBG_DECL(PRV2) -DBG_DECL(PRV3) -#ifdef _KERNEL_DBG_PRINT_ -/* - * tracing to maint and kernel if selected in the trace mask. - */ -#define DBG_TEST(func, args) \ - { if ((myDriverDebugHandle.dbgMask) & (unsigned long)DL_##func) \ - { \ - if ((myDriverDebugHandle.dbgMask) & DL_TO_KERNEL) \ - { DbgPrint args; DbgPrint("\r\n"); } \ - myDbgPrint_##func args; \ - } } -#else -/* - * Standard tracing to maint driver. - */ -#define DBG_TEST(func, args) \ - { if ((myDriverDebugHandle.dbgMask) & (unsigned long)DL_##func) \ - { myDbgPrint_##func args; \ - } } -#endif -/* - * For event level debug use a separate define, the parameter are - * different and cause compiler errors on some systems. 
- */ -#define DBG_EVL_ID(args) \ - { if ((myDriverDebugHandle.dbgMask) & (unsigned long)DL_EVL) \ - { myDbgPrint_EVL args; \ - } } - -#endif /* DIVA_NO_DEBUGLIB */ - -#define DBG_LOG(args) DBG_TEST(LOG, args) -#define DBG_FTL(args) DBG_TEST(FTL, args) -#define DBG_ERR(args) DBG_TEST(ERR, args) -#define DBG_TRC(args) DBG_TEST(TRC, args) -#define DBG_MXLOG(args) DBG_TEST(MXLOG, args) -#define DBG_FTL_MXLOG(args) DBG_TEST(FTL_MXLOG, args) -#define DBG_EVL(args) DBG_EVL_ID(args) -#define DBG_REG(args) DBG_TEST(REG, args) -#define DBG_MEM(args) DBG_TEST(MEM, args) -#define DBG_SPL(args) DBG_TEST(SPL, args) -#define DBG_IRP(args) DBG_TEST(IRP, args) -#define DBG_TIM(args) DBG_TEST(TIM, args) -#define DBG_BLK(args) DBG_TEST(BLK, args) -#define DBG_TAPI(args) DBG_TEST(TAPI, args) -#define DBG_NDIS(args) DBG_TEST(NDIS, args) -#define DBG_CONN(args) DBG_TEST(CONN, args) -#define DBG_STAT(args) DBG_TEST(STAT, args) -#define DBG_SEND(args) DBG_TEST(SEND, args) -#define DBG_RECV(args) DBG_TEST(RECV, args) -#define DBG_PRV0(args) DBG_TEST(PRV0, args) -#define DBG_PRV1(args) DBG_TEST(PRV1, args) -#define DBG_PRV2(args) DBG_TEST(PRV2, args) -#define DBG_PRV3(args) DBG_TEST(PRV3, args) -/* - * prototypes for debug register/deregister functions in "debuglib.c" - */ -#ifdef DIVA_NO_DEBUGLIB -#define DbgRegister(name, tag, mask) do { } while (0) -#define DbgDeregister() do { } while (0) -#define DbgSetLevel(mask) do { } while (0) -#else -extern DIVA_DI_PRINTF dprintf; -extern int DbgRegister(char *drvName, char *drvTag, unsigned long dbgMask); -extern void DbgDeregister(void); -extern void DbgSetLevel(unsigned long dbgMask); -#endif -/* - * driver internal structure for debug handling; - * in client drivers this structure is maintained in "debuglib.c", - * in the debug driver "debug.c" maintains a chain of such structs. 
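/*
 * [Editor's sketch -- not part of the original patch.]
 * That is, the central debug driver threads all registered handles
 * through the 'next' member declared below. A minimal sketch of such an
 * intrusive chain; dbg_handle/chain_register are illustrative names,
 * and pushing at the head is an assumption, not taken from the patch:
 */
struct dbg_handle {
	struct dbg_handle *next;	/* ptr to next registered driver */
	/* ... remaining fields as in _DbgHandle_ below ... */
};

static struct dbg_handle *registered_chain;

static void chain_register(struct dbg_handle *h)
{
	h->next = registered_chain;	/* push at the head of the chain */
	registered_chain = h;
}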
- */ -typedef struct _DbgHandle_ *pDbgHandle; -typedef void (*DbgEnd)(pDbgHandle); -typedef void (*DbgLog)(unsigned short, int, char *, va_list); -typedef void (*DbgOld)(unsigned short, char *, va_list); -typedef void (*DbgEv)(unsigned short, unsigned long, va_list); -typedef void (*DbgIrq)(unsigned short, int, char *, va_list); -typedef struct _DbgHandle_ -{ char Registered; /* driver successfully registered */ -#define DBG_HANDLE_REG_NEW 0x01 /* this (new) structure */ -#define DBG_HANDLE_REG_OLD 0x7f /* old structure (see below) */ - char Version; /* version of this structure */ -#define DBG_HANDLE_VERSION 1 /* contains dbg_old function now */ -#define DBG_HANDLE_VER_EXT 2 /* pReserved points to extended info*/ - short id; /* internal id of registered driver */ - struct _DbgHandle_ *next; /* ptr to next registered driver */ - struct /*LARGE_INTEGER*/ { - unsigned long LowPart; - long HighPart; - } regTime; /* timestamp for registration */ - void *pIrp; /* ptr to pending i/o request */ - unsigned long dbgMask; /* current debug mask */ - char drvName[128]; /* ASCII name of registered driver */ - char drvTag[64]; /* revision string */ - DbgEnd dbg_end; /* function for debug closing */ - DbgLog dbg_prt; /* function for debug appending */ - DbgOld dbg_old; /* function for old debug appending */ - DbgEv dbg_ev; /* function for Windows NT Eventlog */ - DbgIrq dbg_irq; /* function for irql checked debug */ - void *pReserved3; -} _DbgHandle_; -extern _DbgHandle_ myDriverDebugHandle; -typedef struct _OldDbgHandle_ -{ struct _OldDbgHandle_ *next; - void *pIrp; - long regTime[2]; - unsigned long dbgMask; - short id; - char drvName[78]; - DbgEnd dbg_end; - DbgLog dbg_prt; -} _OldDbgHandle_; -/* the differences in DbgHandles - old: tmp: new: - 0 long next char Registered char Registered - char filler char Version - short id short id - 4 long pIrp long regTime.lo long next - 8 long regTime.lo long regTime.hi long regTime.lo - 12 long regTime.hi long next long regTime.hi - 16 long dbgMask long pIrp long pIrp - 20 short id long dbgMask long dbgMask - 22 char drvName[78] .. - 24 .. char drvName[16] char drvName[16] - 40 .. char drvTag[64] char drvTag[64] - 100 void *dbg_end .. .. - 104 void *dbg_prt void *dbg_end void *dbg_end - 108 .. void *dbg_prt void *dbg_prt - 112 .. .. void *dbg_old - 116 .. .. void *dbg_ev - 120 .. .. void *dbg_irq - 124 .. .. void *pReserved3 - ( new->id == 0 && *((short *)&new->dbgMask) == -1 ) identifies "old", - new->Registered and new->Version overlay old->next, - new->next overlays old->pIrp, new->regTime matches old->regTime and - thus these fields can be maintained in the new struct without trouble; - id, dbgMask, drvName, dbg_end and dbg_prt need special handling ! 
diff --git a/drivers/isdn/hardware/eicon/dfifo.h b/drivers/isdn/hardware/eicon/dfifo.h
deleted file mode 100644
index 6a1d3337f99e..000000000000
--- a/drivers/isdn/hardware/eicon/dfifo.h
+++ /dev/null
@@ -1,54 +0,0 @@
-
-/*
- *
- Copyright (c) Eicon Networks, 2002.
- *
- This source file is supplied for the use with
- Eicon Networks range of DIVA Server Adapters.
- *
- Eicon File Revision : 2.1
- *
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2, or (at your option)
- any later version.
- *
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY
- implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- See the GNU General Public License for more details.
- *
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- */
-#ifndef __DIVA_IDI_DFIFO_INC__
-#define __DIVA_IDI_DFIFO_INC__
-#define DIVA_DFIFO_CACHE_SZ 64 /* Used to isolate pipe from
- rest of the world
- should be divisible by 4
- */
-#define DIVA_DFIFO_RAW_SZ (2512 * 8)
-#define DIVA_DFIFO_DATA_SZ 68
-#define DIVA_DFIFO_HDR_SZ 4
-#define DIVA_DFIFO_SEGMENT_SZ (DIVA_DFIFO_DATA_SZ + DIVA_DFIFO_HDR_SZ)
-#define DIVA_DFIFO_SEGMENTS ((DIVA_DFIFO_RAW_SZ) / (DIVA_DFIFO_SEGMENT_SZ) + 1)
-#define DIVA_DFIFO_MEM_SZ ( \
- (DIVA_DFIFO_SEGMENT_SZ) * (DIVA_DFIFO_SEGMENTS) + \
- (DIVA_DFIFO_CACHE_SZ) * 2 \
- )
-#define DIVA_DFIFO_STEP DIVA_DFIFO_SEGMENT_SZ
-/* -------------------------------------------------------------------------
- Block header layout is:
- byte[0] -> flags
- byte[1] -> length of data in block
- byte[2] -> reserved
- byte[3] -> reserved
- ------------------------------------------------------------------------- */
-#define DIVA_DFIFO_WRAP 0x80 /* This is the last block in fifo */
-#define DIVA_DFIFO_READY 0x40 /* This block is ready for processing */
-#define DIVA_DFIFO_LAST 0x20 /* This block is last in message */
-#define DIVA_DFIFO_AUTO 0x10 /* Don't look for 'ready', don't ack */
-int diva_dfifo_create(void *start, int length);
-#endif
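The dfifo.h sizing macros just removed are easy to sanity-check by hand: DIVA_DFIFO_RAW_SZ = 2512 * 8 = 20096 bytes, DIVA_DFIFO_SEGMENT_SZ = 68 + 4 = 72 bytes, DIVA_DFIFO_SEGMENTS = 20096 / 72 + 1 = 279 + 1 = 280 (integer division plus one spare segment), and DIVA_DFIFO_MEM_SZ = 72 * 280 + 64 * 2 = 20288 bytes, i.e. the segment array plus one cache-isolation pad on each side. A tiny compile-time mirror of that arithmetic, purely for verification (the enum names are illustrative, not part of the driver):

#include <assert.h>

int main(void)
{
	/* mirror of the removed DIVA_DFIFO_* macros */
	enum {
		RAW = 2512 * 8,          /* DIVA_DFIFO_RAW_SZ      */
		SEG = 68 + 4,            /* DIVA_DFIFO_SEGMENT_SZ  */
		N   = RAW / SEG + 1,     /* DIVA_DFIFO_SEGMENTS    */
		MEM = SEG * N + 64 * 2   /* DIVA_DFIFO_MEM_SZ      */
	};
	assert(N == 280 && MEM == 20288);
	return 0;
}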
diff --git a/drivers/isdn/hardware/eicon/di.c b/drivers/isdn/hardware/eicon/di.c
deleted file mode 100644
index cd3fba1add12..000000000000
--- a/drivers/isdn/hardware/eicon/di.c
+++ /dev/null
@@ -1,835 +0,0 @@
-
-/*
- *
- Copyright (c) Eicon Networks, 2002.
- *
- This source file is supplied for the use with
- Eicon Networks range of DIVA Server Adapters.
- *
- Eicon File Revision : 2.1
- *
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2, or (at your option)
- any later version.
- *
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY
- implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- See the GNU General Public License for more details.
- *
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- */
-#include "platform.h"
-#include "pc.h"
-#include "pr_pc.h"
-#include "di_defs.h"
-#include "di.h"
-#if !defined USE_EXTENDED_DEBUGS
-#include "dimaint.h"
-#else
-#define dprintf
-#endif
-#include "io.h"
-#include "dfifo.h"
-#define PR_RAM ((struct pr_ram *)0)
-#define RAM ((struct dual *)0)
-/*------------------------------------------------------------------*/
-/* local function prototypes */
-/*------------------------------------------------------------------*/
-void pr_out(ADAPTER *a);
-byte pr_dpc(ADAPTER *a);
-static byte pr_ready(ADAPTER *a);
-static byte isdn_rc(ADAPTER *, byte, byte, byte, word, dword, dword);
-static byte isdn_ind(ADAPTER *, byte, byte, byte, PBUFFER *, byte, word);
-/* -----------------------------------------------------------------
- Functions used for the extended XDI Debug
- macros
- global convergence counter (used by all adapters)
- See the implementation part of these functions for details
- about the parameters.
- If you change the debugging parameters, then you should update
- the aididbg.doc in the IDI docs.
- ----------------------------------------------------------------- */
-#if defined(XDI_USE_XLOG)
-#define XDI_A_NR(_x_) ((byte)(((ISDN_ADAPTER *)(_x_->io))->ANum))
-static void xdi_xlog(byte *msg, word code, int length);
-static byte xdi_xlog_sec = 0;
-#else
-#define XDI_A_NR(_x_) ((byte)0)
-#endif
-static void xdi_xlog_rc_event(byte Adapter,
- byte Id, byte Ch, byte Rc, byte cb, byte type);
-static void xdi_xlog_request(byte Adapter, byte Id,
- byte Ch, byte Req, byte type);
-static void xdi_xlog_ind(byte Adapter,
- byte Id,
- byte Ch,
- byte Ind,
- byte rnr_valid,
- byte rnr,
- byte type);
-/*------------------------------------------------------------------*/
-/* output function */
-/*------------------------------------------------------------------*/
-void pr_out(ADAPTER *a)
-{
- byte e_no;
- ENTITY *this = NULL;
- BUFFERS *X;
- word length;
- word i;
- word clength;
- REQ *ReqOut;
- byte more;
- byte ReadyCount;
- byte ReqCount;
- byte Id;
- dtrc(dprintf("pr_out"));
- /* while a request is pending ...
*/ - e_no = look_req(a); - if (!e_no) - { - dtrc(dprintf("no_req")); - return; - } - ReadyCount = pr_ready(a); - if (!ReadyCount) - { - dtrc(dprintf("not_ready")); - return; - } - ReqCount = 0; - while (e_no && ReadyCount) { - next_req(a); - this = entity_ptr(a, e_no); -#ifdef USE_EXTENDED_DEBUGS - if (!this) - { - DBG_FTL(("XDI: [%02x] !A%d ==> NULL entity ptr - try to ignore", - xdi_xlog_sec++, (int)((ISDN_ADAPTER *)a->io)->ANum)) - e_no = look_req(a); - ReadyCount--; - continue; - } - { - DBG_TRC((">A%d Id=0x%x Req=0x%x", ((ISDN_ADAPTER *)a->io)->ANum, this->Id, this->Req)) - } -#else - dbug(dprintf("out:Req=%x,Id=%x,Ch=%x", this->Req, this->Id, this->ReqCh)); -#endif - /* get address of next available request buffer */ - ReqOut = (REQ *)&PR_RAM->B[a->ram_inw(a, &PR_RAM->NextReq)]; -#if defined(DIVA_ISTREAM) - if (!(a->tx_stream[this->Id] && - this->Req == N_DATA)) { -#endif - /* now copy the data from the current data buffer into the */ - /* adapters request buffer */ - length = 0; - i = this->XCurrent; - X = PTR_X(a, this); - while (i < this->XNum && length < 270) { - clength = min((word)(270 - length), (word)(X[i].PLength-this->XOffset)); - a->ram_out_buffer(a, - &ReqOut->XBuffer.P[length], - PTR_P(a, this, &X[i].P[this->XOffset]), - clength); - length += clength; - this->XOffset += clength; - if (this->XOffset == X[i].PLength) { - this->XCurrent = (byte)++i; - this->XOffset = 0; - } - } -#if defined(DIVA_ISTREAM) - } else { /* Use CMA extension in order to transfer data to the card */ - i = this->XCurrent; - X = PTR_X(a, this); - while (i < this->XNum) { - diva_istream_write(a, - this->Id, - PTR_P(a, this, &X[i].P[0]), - X[i].PLength, - ((i + 1) == this->XNum), - 0, 0); - this->XCurrent = (byte)++i; - } - length = 0; - } -#endif - a->ram_outw(a, &ReqOut->XBuffer.length, length); - a->ram_out(a, &ReqOut->ReqId, this->Id); - a->ram_out(a, &ReqOut->ReqCh, this->ReqCh); - /* if it's a specific request (no ASSIGN) ... 
*/ - if (this->Id & 0x1f) { - /* if buffers are left in the list of data buffers do */ - /* do chaining (LL_MDATA, N_MDATA) */ - this->More++; - if (i < this->XNum && this->MInd) { - xdi_xlog_request(XDI_A_NR(a), this->Id, this->ReqCh, this->MInd, - a->IdTypeTable[this->No]); - a->ram_out(a, &ReqOut->Req, this->MInd); - more = true; - } - else { - xdi_xlog_request(XDI_A_NR(a), this->Id, this->ReqCh, this->Req, - a->IdTypeTable[this->No]); - this->More |= XMOREF; - a->ram_out(a, &ReqOut->Req, this->Req); - more = false; - if (a->FlowControlIdTable[this->ReqCh] == this->Id) - a->FlowControlSkipTable[this->ReqCh] = true; - /* - Note that remove request was sent to the card - */ - if (this->Req == REMOVE) { - a->misc_flags_table[e_no] |= DIVA_MISC_FLAGS_REMOVE_PENDING; - } - } - /* if we did chaining, this entity is put back into the */ - /* request queue */ - if (more) { - req_queue(a, this->No); - } - } - /* else it's a ASSIGN */ - else { - /* save the request code used for buffer chaining */ - this->MInd = 0; - if (this->Id == BLLC_ID) this->MInd = LL_MDATA; - if (this->Id == NL_ID || - this->Id == TASK_ID || - this->Id == MAN_ID - ) this->MInd = N_MDATA; - /* send the ASSIGN */ - a->IdTypeTable[this->No] = this->Id; - xdi_xlog_request(XDI_A_NR(a), this->Id, this->ReqCh, this->Req, this->Id); - this->More |= XMOREF; - a->ram_out(a, &ReqOut->Req, this->Req); - /* save the reference of the ASSIGN */ - assign_queue(a, this->No, a->ram_inw(a, &ReqOut->Reference)); - } - a->ram_outw(a, &PR_RAM->NextReq, a->ram_inw(a, &ReqOut->next)); - ReadyCount--; - ReqCount++; - e_no = look_req(a); - } - /* send the filled request buffers to the ISDN adapter */ - a->ram_out(a, &PR_RAM->ReqInput, - (byte)(a->ram_in(a, &PR_RAM->ReqInput) + ReqCount)); - /* if it is a 'unreturncoded' UREMOVE request, remove the */ - /* Id from our table after sending the request */ - if (this && (this->Req == UREMOVE) && this->Id) { - Id = this->Id; - e_no = a->IdTable[Id]; - free_entity(a, e_no); - for (i = 0; i < 256; i++) - { - if (a->FlowControlIdTable[i] == Id) - a->FlowControlIdTable[i] = 0; - } - a->IdTable[Id] = 0; - this->Id = 0; - } -} -static byte pr_ready(ADAPTER *a) -{ - byte ReadyCount; - ReadyCount = (byte)(a->ram_in(a, &PR_RAM->ReqOutput) - - a->ram_in(a, &PR_RAM->ReqInput)); - if (!ReadyCount) { - if (!a->ReadyInt) { - a->ram_inc(a, &PR_RAM->ReadyInt); - a->ReadyInt++; - } - } - return ReadyCount; -} -/*------------------------------------------------------------------*/ -/* isdn interrupt handler */ -/*------------------------------------------------------------------*/ -byte pr_dpc(ADAPTER *a) -{ - byte Count; - RC *RcIn; - IND *IndIn; - byte c; - byte RNRId; - byte Rc; - byte Ind; - /* if return codes are available ... */ - if ((Count = a->ram_in(a, &PR_RAM->RcOutput)) != 0) { - dtrc(dprintf("#Rc=%x", Count)); - /* get the buffer address of the first return code */ - RcIn = (RC *)&PR_RAM->B[a->ram_inw(a, &PR_RAM->NextRc)]; - /* for all return codes do ... 
*/ - while (Count--) { - if ((Rc = a->ram_in(a, &RcIn->Rc)) != 0) { - dword tmp[2]; - /* - Get extended information, associated with return code - */ - a->ram_in_buffer(a, - &RcIn->Reserved2[0], - (byte *)&tmp[0], - 8); - /* call return code handler, if it is not our return code */ - /* the handler returns 2 */ - /* for all return codes we process, we clear the Rc field */ - isdn_rc(a, - Rc, - a->ram_in(a, &RcIn->RcId), - a->ram_in(a, &RcIn->RcCh), - a->ram_inw(a, &RcIn->Reference), - tmp[0], /* type of extended information */ - tmp[1]); /* extended information */ - a->ram_out(a, &RcIn->Rc, 0); - } - /* get buffer address of next return code */ - RcIn = (RC *)&PR_RAM->B[a->ram_inw(a, &RcIn->next)]; - } - /* clear all return codes (no chaining!) */ - a->ram_out(a, &PR_RAM->RcOutput, 0); - /* call output function */ - pr_out(a); - } - /* clear RNR flag */ - RNRId = 0; - /* if indications are available ... */ - if ((Count = a->ram_in(a, &PR_RAM->IndOutput)) != 0) { - dtrc(dprintf("#Ind=%x", Count)); - /* get the buffer address of the first indication */ - IndIn = (IND *)&PR_RAM->B[a->ram_inw(a, &PR_RAM->NextInd)]; - /* for all indications do ... */ - while (Count--) { - /* if the application marks an indication as RNR, all */ - /* indications from the same Id delivered in this interrupt */ - /* are marked RNR */ - if (RNRId && RNRId == a->ram_in(a, &IndIn->IndId)) { - a->ram_out(a, &IndIn->Ind, 0); - a->ram_out(a, &IndIn->RNR, true); - } - else { - Ind = a->ram_in(a, &IndIn->Ind); - if (Ind) { - RNRId = 0; - /* call indication handler, a return value of 2 means chain */ - /* a return value of 1 means RNR */ - /* for all indications we process, we clear the Ind field */ - c = isdn_ind(a, - Ind, - a->ram_in(a, &IndIn->IndId), - a->ram_in(a, &IndIn->IndCh), - &IndIn->RBuffer, - a->ram_in(a, &IndIn->MInd), - a->ram_inw(a, &IndIn->MLength)); - if (c == 1) { - dtrc(dprintf("RNR")); - a->ram_out(a, &IndIn->Ind, 0); - RNRId = a->ram_in(a, &IndIn->IndId); - a->ram_out(a, &IndIn->RNR, true); - } - } - } - /* get buffer address of next indication */ - IndIn = (IND *)&PR_RAM->B[a->ram_inw(a, &IndIn->next)]; - } - a->ram_out(a, &PR_RAM->IndOutput, 0); - } - return false; -} -byte scom_test_int(ADAPTER *a) -{ - return a->ram_in(a, (void *)0x3fe); -} -void scom_clear_int(ADAPTER *a) -{ - a->ram_out(a, (void *)0x3fe, 0); -} -/*------------------------------------------------------------------*/ -/* return code handler */ -/*------------------------------------------------------------------*/ -static byte isdn_rc(ADAPTER *a, - byte Rc, - byte Id, - byte Ch, - word Ref, - dword extended_info_type, - dword extended_info) -{ - ENTITY *this; - byte e_no; - word i; - int cancel_rc; -#ifdef USE_EXTENDED_DEBUGS - { - DBG_TRC(("io)->ANum, Id, Rc)) - } -#else - dbug(dprintf("isdn_rc(Rc=%x,Id=%x,Ch=%x)", Rc, Id, Ch)); -#endif - /* check for ready interrupt */ - if (Rc == READY_INT) { - xdi_xlog_rc_event(XDI_A_NR(a), Id, Ch, Rc, 0, 0); - if (a->ReadyInt) { - a->ReadyInt--; - return 0; - } - return 2; - } - /* if we know this Id ... 
*/
- e_no = a->IdTable[Id];
- if (e_no) {
- this = entity_ptr(a, e_no);
- xdi_xlog_rc_event(XDI_A_NR(a), Id, Ch, Rc, 0, a->IdTypeTable[this->No]);
- this->RcCh = Ch;
- /* if it is a return code to a REMOVE request, remove the */
- /* Id from our table */
- if ((a->misc_flags_table[e_no] & DIVA_MISC_FLAGS_REMOVE_PENDING) &&
- (Rc == OK)) {
- if (a->IdTypeTable[e_no] == NL_ID) {
- if (a->RcExtensionSupported &&
- (extended_info_type != DIVA_RC_TYPE_REMOVE_COMPLETE)) {
- dtrc(dprintf("XDI: N-REMOVE, A(%02x) Id:%02x, ignore RC=OK",
- XDI_A_NR(a), Id));
- return (0);
- }
- if (extended_info_type == DIVA_RC_TYPE_REMOVE_COMPLETE)
- a->RcExtensionSupported = true;
- }
- a->misc_flags_table[e_no] &= ~DIVA_MISC_FLAGS_REMOVE_PENDING;
- a->misc_flags_table[e_no] &= ~DIVA_MISC_FLAGS_NO_RC_CANCELLING;
- free_entity(a, e_no);
- for (i = 0; i < 256; i++)
- {
- if (a->FlowControlIdTable[i] == Id)
- a->FlowControlIdTable[i] = 0;
- }
- a->IdTable[Id] = 0;
- this->Id = 0;
- /* ---------------------------------------------------------------
- If we send N_DISC or N_DISC_ACK after we have received OK_FC
- then the card will respond with OK_FC and later with RC==OK.
- If we send N_REMOVE in this state we will receive only RC==OK.
- This creates a state in which the XDI waits for the
- additional RC and does not deliver the RC to the client. This
- code corrects the counter of outstanding RCs in this case.
- --------------------------------------------------------------- */
- if ((this->More & XMOREC) > 1) {
- this->More &= ~XMOREC;
- this->More |= 1;
- dtrc(dprintf("XDI: correct MORE on REMOVE A(%02x) Id:%02x",
- XDI_A_NR(a), Id));
- }
- }
- if (Rc == OK_FC) {
- a->FlowControlIdTable[Ch] = Id;
- a->FlowControlSkipTable[Ch] = false;
- this->Rc = Rc;
- this->More &= ~(XBUSY | XMOREC);
- this->complete = 0xff;
- xdi_xlog_rc_event(XDI_A_NR(a), Id, Ch, Rc, 1, a->IdTypeTable[this->No]);
- CALLBACK(a, this);
- return 0;
- }
- /*
- Newer protocol code sends return codes that result from the release
- of a flow control condition, marked with the DIVA_RC_TYPE_OK_FC
- extended information element type.
- If such a return code arrives, the application is able to process
- all return codes itself and XDI should not cancel return codes.
- This return code does not decrement the XMOREC partial return code
- counter, because there was no request for this return code and
- XMOREC was therefore never incremented.
- */ - if (extended_info_type == DIVA_RC_TYPE_OK_FC) { - a->misc_flags_table[e_no] |= DIVA_MISC_FLAGS_NO_RC_CANCELLING; - this->Rc = Rc; - this->complete = 0xff; - xdi_xlog_rc_event(XDI_A_NR(a), Id, Ch, Rc, 1, a->IdTypeTable[this->No]); - DBG_TRC(("XDI OK_FC A(%02x) Id:%02x Ch:%02x Rc:%02x", - XDI_A_NR(a), Id, Ch, Rc)) - CALLBACK(a, this); - return 0; - } - cancel_rc = !(a->misc_flags_table[e_no] & DIVA_MISC_FLAGS_NO_RC_CANCELLING); - if (cancel_rc && (a->FlowControlIdTable[Ch] == Id)) - { - a->FlowControlIdTable[Ch] = 0; - if ((Rc != OK) || !a->FlowControlSkipTable[Ch]) - { - this->Rc = Rc; - if (Ch == this->ReqCh) - { - this->More &= ~(XBUSY | XMOREC); - this->complete = 0xff; - } - xdi_xlog_rc_event(XDI_A_NR(a), Id, Ch, Rc, 1, a->IdTypeTable[this->No]); - CALLBACK(a, this); - } - return 0; - } - if (this->More & XMOREC) - this->More--; - /* call the application callback function */ - if (((!cancel_rc) || (this->More & XMOREF)) && !(this->More & XMOREC)) { - this->Rc = Rc; - this->More &= ~XBUSY; - this->complete = 0xff; - xdi_xlog_rc_event(XDI_A_NR(a), Id, Ch, Rc, 1, a->IdTypeTable[this->No]); - CALLBACK(a, this); - } - return 0; - } - /* if it's an ASSIGN return code check if it's a return */ - /* code to an ASSIGN request from us */ - if ((Rc & 0xf0) == ASSIGN_RC) { - e_no = get_assign(a, Ref); - if (e_no) { - this = entity_ptr(a, e_no); - this->Id = Id; - xdi_xlog_rc_event(XDI_A_NR(a), Id, Ch, Rc, 2, a->IdTypeTable[this->No]); - /* call the application callback function */ - this->Rc = Rc; - this->More &= ~XBUSY; - this->complete = 0xff; -#if defined(DIVA_ISTREAM) /* { */ - if ((Rc == ASSIGN_OK) && a->ram_offset && - (a->IdTypeTable[this->No] == NL_ID) && - ((extended_info_type == DIVA_RC_TYPE_RX_DMA) || - (extended_info_type == DIVA_RC_TYPE_CMA_PTR)) && - extended_info) { - dword offset = (*(a->ram_offset)) (a); - dword tmp[2]; - extended_info -= offset; -#ifdef PLATFORM_GT_32BIT - a->ram_in_dw(a, (void *)ULongToPtr(extended_info), (dword *)&tmp[0], 2); -#else - a->ram_in_dw(a, (void *)extended_info, (dword *)&tmp[0], 2); -#endif - a->tx_stream[Id] = tmp[0]; - a->rx_stream[Id] = tmp[1]; - if (extended_info_type == DIVA_RC_TYPE_RX_DMA) { - DBG_TRC(("Id=0x%x RxDMA=%08x:%08x", - Id, a->tx_stream[Id], a->rx_stream[Id])) - a->misc_flags_table[this->No] |= DIVA_MISC_FLAGS_RX_DMA; - } else { - DBG_TRC(("Id=0x%x CMA=%08x:%08x", - Id, a->tx_stream[Id], a->rx_stream[Id])) - a->misc_flags_table[this->No] &= ~DIVA_MISC_FLAGS_RX_DMA; - a->rx_pos[Id] = 0; - a->rx_stream[Id] -= offset; - } - a->tx_pos[Id] = 0; - a->tx_stream[Id] -= offset; - } else { - a->tx_stream[Id] = 0; - a->rx_stream[Id] = 0; - a->misc_flags_table[this->No] &= ~DIVA_MISC_FLAGS_RX_DMA; - } -#endif /* } */ - CALLBACK(a, this); - if (Rc == ASSIGN_OK) { - a->IdTable[Id] = e_no; - } - else - { - free_entity(a, e_no); - for (i = 0; i < 256; i++) - { - if (a->FlowControlIdTable[i] == Id) - a->FlowControlIdTable[i] = 0; - } - a->IdTable[Id] = 0; - this->Id = 0; - } - return 1; - } - } - return 2; -} -/*------------------------------------------------------------------*/ -/* indication handler */ -/*------------------------------------------------------------------*/ -static byte isdn_ind(ADAPTER *a, - byte Ind, - byte Id, - byte Ch, - PBUFFER *RBuffer, - byte MInd, - word MLength) -{ - ENTITY *this; - word clength; - word offset; - BUFFERS *R; - byte *cma = NULL; -#ifdef USE_EXTENDED_DEBUGS - { - DBG_TRC(("io)->ANum, Id, Ind)) - } -#else - dbug(dprintf("isdn_ind(Ind=%x,Id=%x,Ch=%x)", Ind, Id, Ch)); -#endif - if (a->IdTable[Id]) { - 
this = entity_ptr(a, a->IdTable[Id]); - this->IndCh = Ch; - xdi_xlog_ind(XDI_A_NR(a), Id, Ch, Ind, - 0/* rnr_valid */, 0 /* rnr */, a->IdTypeTable[this->No]); - /* if the Receive More flag is not yet set, this is the */ - /* first buffer of the packet */ - if (this->RCurrent == 0xff) { - /* check for receive buffer chaining */ - if (Ind == this->MInd) { - this->complete = 0; - this->Ind = MInd; - } - else { - this->complete = 1; - this->Ind = Ind; - } - /* call the application callback function for the receive */ - /* look ahead */ - this->RLength = MLength; -#if defined(DIVA_ISTREAM) - if ((a->rx_stream[this->Id] || - (a->misc_flags_table[this->No] & DIVA_MISC_FLAGS_RX_DMA)) && - ((Ind == N_DATA) || - (a->protocol_capabilities & PROTCAP_CMA_ALLPR))) { - PISDN_ADAPTER IoAdapter = (PISDN_ADAPTER)a->io; - if (a->misc_flags_table[this->No] & DIVA_MISC_FLAGS_RX_DMA) { -#if defined(DIVA_IDI_RX_DMA) - dword d; - diva_get_dma_map_entry(\ - (struct _diva_dma_map_entry *)IoAdapter->dma_map, - (int)a->rx_stream[this->Id], (void **)&cma, &d); -#else - cma = &a->stream_buffer[0]; - cma[0] = cma[1] = cma[2] = cma[3] = 0; -#endif - this->RLength = MLength = (word)*(dword *)cma; - cma += 4; - } else { - int final = 0; - cma = &a->stream_buffer[0]; - this->RLength = MLength = (word)diva_istream_read(a, - Id, - cma, - sizeof(a->stream_buffer), - &final, NULL, NULL); - } - IoAdapter->RBuffer.length = min(MLength, (word)270); - if (IoAdapter->RBuffer.length != MLength) { - this->complete = 0; - } else { - this->complete = 1; - } - memcpy(IoAdapter->RBuffer.P, cma, IoAdapter->RBuffer.length); - this->RBuffer = (DBUFFER *)&IoAdapter->RBuffer; - } -#endif - if (!cma) { - a->ram_look_ahead(a, RBuffer, this); - } - this->RNum = 0; - CALLBACK(a, this); - /* map entity ptr, selector could be re-mapped by call to */ - /* IDI from within callback */ - this = entity_ptr(a, a->IdTable[Id]); - xdi_xlog_ind(XDI_A_NR(a), Id, Ch, Ind, - 1/* rnr_valid */, this->RNR/* rnr */, a->IdTypeTable[this->No]); - /* check for RNR */ - if (this->RNR == 1) { - this->RNR = 0; - return 1; - } - /* if no buffers are provided by the application, the */ - /* application want to copy the data itself including */ - /* N_MDATA/LL_MDATA chaining */ - if (!this->RNR && !this->RNum) { - xdi_xlog_ind(XDI_A_NR(a), Id, Ch, Ind, - 2/* rnr_valid */, 0/* rnr */, a->IdTypeTable[this->No]); - return 0; - } - /* if there is no RNR, set the More flag */ - this->RCurrent = 0; - this->ROffset = 0; - } - if (this->RNR == 2) { - if (Ind != this->MInd) { - this->RCurrent = 0xff; - this->RNR = 0; - } - return 0; - } - /* if we have received buffers from the application, copy */ - /* the data into these buffers */ - offset = 0; - R = PTR_R(a, this); - do { - if (this->ROffset == R[this->RCurrent].PLength) { - this->ROffset = 0; - this->RCurrent++; - } - if (cma) { - clength = min(MLength, (word)(R[this->RCurrent].PLength-this->ROffset)); - } else { - clength = min(a->ram_inw(a, &RBuffer->length)-offset, - R[this->RCurrent].PLength-this->ROffset); - } - if (R[this->RCurrent].P) { - if (cma) { - memcpy(PTR_P(a, this, &R[this->RCurrent].P[this->ROffset]), - &cma[offset], - clength); - } else { - a->ram_in_buffer(a, - &RBuffer->P[offset], - PTR_P(a, this, &R[this->RCurrent].P[this->ROffset]), - clength); - } - } - offset += clength; - this->ROffset += clength; - if (cma) { - if (offset >= MLength) { - break; - } - continue; - } - } while (offset < (a->ram_inw(a, &RBuffer->length))); - /* if it's the last buffer of the packet, call the */ - /* application callback 
function for the receive complete */
- /* call */
- if (Ind != this->MInd) {
- R[this->RCurrent].PLength = this->ROffset;
- if (this->ROffset) this->RCurrent++;
- this->RNum = this->RCurrent;
- this->RCurrent = 0xff;
- this->Ind = Ind;
- this->complete = 2;
- xdi_xlog_ind(XDI_A_NR(a), Id, Ch, Ind,
- 3/* rnr_valid */, 0/* rnr */, a->IdTypeTable[this->No]);
- CALLBACK(a, this);
- }
- return 0;
- }
- return 2;
-}
-#if defined(XDI_USE_XLOG)
-/* -----------------------------------------------------------
- This function works in the same way as xlog on the
- active board
- ----------------------------------------------------------- */
-static void xdi_xlog(byte *msg, word code, int length) {
- xdi_dbg_xlog("\x00\x02", msg, code, length);
-}
-#endif
-/* -----------------------------------------------------------
- This function writes the information about the Return Code
- processing in the trace buffer. Trace ID is 221.
- INPUT:
- Adapter - system unique adapter number (0 ... 255)
- Id - Id of the entity that had sent this return code
- Ch - Channel of the entity that had sent this return code
- Rc - return code value
- cb: (0...2)
- switch (cb) {
- case 0: printf ("DELIVERY"); break;
- case 1: printf ("CALLBACK"); break;
- case 2: printf ("ASSIGN"); break;
- }
- DELIVERY - have entered isdn_rc with this RC
- CALLBACK - about to make callback to the application
- for this RC
- ASSIGN - about to make callback for RC that is result
- of ASSIGN request. There is no DELIVERY message
- before this message
- type - the Id that was sent by the ASSIGN of this entity.
- This should be a global Id like NL_ID, DSIG_ID, MAN_ID.
- An unknown Id will cause "?-" in front of the request.
- In this case log.c is to be extended.
- ----------------------------------------------------------- */
-static void xdi_xlog_rc_event(byte Adapter,
- byte Id, byte Ch, byte Rc, byte cb, byte type) {
-#if defined(XDI_USE_XLOG)
- word LogInfo[4];
- PUT_WORD(&LogInfo[0], ((word)Adapter | (word)(xdi_xlog_sec++ << 8)));
- PUT_WORD(&LogInfo[1], ((word)Id | (word)(Ch << 8)));
- PUT_WORD(&LogInfo[2], ((word)Rc | (word)(type << 8)));
- PUT_WORD(&LogInfo[3], cb);
- xdi_xlog((byte *)&LogInfo[0], 221, sizeof(LogInfo));
-#endif
-}
-/* ------------------------------------------------------------------------
- This function writes the information about the request processing
- in the trace buffer. Trace ID is 220.
- INPUT:
- Adapter - system unique adapter number (0 ... 255)
- Id - Id of the entity that had sent this request
- Ch - Channel of the entity that had sent this request
- Req - Code of the request
- type - the Id that was sent by the ASSIGN of this entity.
- This should be a global Id like NL_ID, DSIG_ID, MAN_ID.
- An unknown Id will cause "?-" in front of the request.
- In this case log.c is to be extended.
- ------------------------------------------------------------------------ */
-static void xdi_xlog_request(byte Adapter, byte Id,
- byte Ch, byte Req, byte type) {
-#if defined(XDI_USE_XLOG)
- word LogInfo[3];
- PUT_WORD(&LogInfo[0], ((word)Adapter | (word)(xdi_xlog_sec++ << 8)));
- PUT_WORD(&LogInfo[1], ((word)Id | (word)(Ch << 8)));
- PUT_WORD(&LogInfo[2], ((word)Req | (word)(type << 8)));
- xdi_xlog((byte *)&LogInfo[0], 220, sizeof(LogInfo));
-#endif
-}
-/* ------------------------------------------------------------------------
- This function writes the information about the indication processing
- in the trace buffer. Trace ID is 222.
- INPUT:
- Adapter - system unique adapter number (0 ... 255)
- Id - Id of the entity that had sent this indication
- Ch - Channel of the entity that had sent this indication
- Ind - Code of the indication
- rnr_valid: (0 .. 3) supported
- switch (rnr_valid) {
- case 0: printf ("DELIVERY"); break;
- case 1: printf ("RNR=%d", rnr);
- case 2: printf ("RNum=0");
- case 3: printf ("COMPLETE");
- }
- DELIVERY - indication entered the isdn_ind function
- RNR=... - application had returned RNR=... after the
- look ahead callback
- RNum=0 - application had not returned any buffer to copy
- this indication and will copy it itself
- COMPLETE - XDI had copied the data to the buffers provided
- by the application and is about to issue the
- final callback
- rnr: See case 1 of rnr_valid
- type: the Id that was sent by the ASSIGN of this entity. This should
- be a global Id like NL_ID, DSIG_ID, MAN_ID. An unknown Id will
- cause "?-" in front of the request. In this case
- log.c is to be extended.
- ------------------------------------------------------------------------ */
-static void xdi_xlog_ind(byte Adapter,
- byte Id,
- byte Ch,
- byte Ind,
- byte rnr_valid,
- byte rnr,
- byte type) {
-#if defined(XDI_USE_XLOG)
- word LogInfo[4];
- PUT_WORD(&LogInfo[0], ((word)Adapter | (word)(xdi_xlog_sec++ << 8)));
- PUT_WORD(&LogInfo[1], ((word)Id | (word)(Ch << 8)));
- PUT_WORD(&LogInfo[2], ((word)Ind | (word)(type << 8)));
- PUT_WORD(&LogInfo[3], ((word)rnr | (word)(rnr_valid << 8)));
- xdi_xlog((byte *)&LogInfo[0], 222, sizeof(LogInfo));
-#endif
-}
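Worth noting before the headers below: the pr_out()/pr_dpc() core just removed never touches adapter memory directly; every access goes through the ram_* function pointers carried by the ADAPTER structure (deleted next in di.h), which is how the same IDI code served memory-mapped and port-I/O cards alike. A minimal sketch of that dispatch pattern follows; the names, the flat-memory backend and the offset convention are illustrative assumptions, not the driver's actual implementations:

#include <stdio.h>
#include <stddef.h>

typedef unsigned char byte;

/* cut-down adapter_s: accessors are chosen once, per bus type */
typedef struct adapter_s ADAPTER;
struct adapter_s {
	void *io;                              /* bus-specific state */
	byte (*ram_in)(ADAPTER *a, void *adr);
	void (*ram_out)(ADAPTER *a, void *adr, byte data);
};

/* one possible backend: plain memory-mapped access; 'adr' is an offset,
 * mirroring the PR_RAM ((struct pr_ram *)0) idiom in di.c */
static byte mmio_ram_in(ADAPTER *a, void *adr)
{
	return ((byte *)a->io)[(size_t)adr];
}

static void mmio_ram_out(ADAPTER *a, void *adr, byte data)
{
	((byte *)a->io)[(size_t)adr] = data;
}

int main(void)
{
	static byte shared_ram[0x400];
	ADAPTER a = { shared_ram, mmio_ram_in, mmio_ram_out };

	a.ram_out(&a, (void *)0x3fe, 1);   /* same shape as scom_clear_int()'s access */
	printf("int flag: %d\n", a.ram_in(&a, (void *)0x3fe));
	return 0;
}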
diff --git a/drivers/isdn/hardware/eicon/di.h b/drivers/isdn/hardware/eicon/di.h
deleted file mode 100644
index ff26c65631d6..000000000000
--- a/drivers/isdn/hardware/eicon/di.h
+++ /dev/null
@@ -1,118 +0,0 @@
-
-/*
- *
- Copyright (c) Eicon Networks, 2002.
- *
- This source file is supplied for the use with
- Eicon Networks range of DIVA Server Adapters.
- *
- Eicon File Revision : 2.1
- *
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2, or (at your option)
- any later version.
- *
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY
- implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- See the GNU General Public License for more details.
- *
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- */
-/*
- * some macros for detailed trace management
- */
-#include "di_dbg.h"
-/*****************************************************************************/
-#define XMOREC 0x1f
-#define XMOREF 0x20
-#define XBUSY 0x40
-#define RMORE 0x80
-#define DIVA_MISC_FLAGS_REMOVE_PENDING 0x01
-#define DIVA_MISC_FLAGS_NO_RC_CANCELLING 0x02
-#define DIVA_MISC_FLAGS_RX_DMA 0x04
-/* structure for all information we have to keep on a per */
-/* adapter basis */
-typedef struct adapter_s ADAPTER;
-struct adapter_s {
- void *io;
- byte IdTable[256];
- byte IdTypeTable[256];
- byte FlowControlIdTable[256];
- byte FlowControlSkipTable[256];
- byte ReadyInt;
- byte RcExtensionSupported;
- byte misc_flags_table[256];
- dword protocol_capabilities;
- byte (*ram_in)(ADAPTER *a, void *adr);
- word (*ram_inw)(ADAPTER *a, void *adr);
- void (*ram_in_buffer)(ADAPTER *a, void *adr, void *P, word length);
- void (*ram_look_ahead)(ADAPTER *a, PBUFFER *RBuffer, ENTITY *e);
- void (*ram_out)(ADAPTER *a, void *adr, byte data);
- void (*ram_outw)(ADAPTER *a, void *adr, word data);
- void (*ram_out_buffer)(ADAPTER *a, void *adr, void *P, word length);
- void (*ram_inc)(ADAPTER *a, void *adr);
-#if defined(DIVA_ISTREAM)
- dword rx_stream[256];
- dword tx_stream[256];
- word tx_pos[256];
- word rx_pos[256];
- byte stream_buffer[2512];
- dword (*ram_offset)(ADAPTER *a);
- void (*ram_out_dw)(ADAPTER *a,
- void *addr,
- const dword *data,
- int dwords);
- void (*ram_in_dw)(ADAPTER *a,
- void *addr,
- dword *data,
- int dwords);
- void (*istream_wakeup)(ADAPTER *a);
-#else
- byte stream_buffer[4];
-#endif
-};
-/*------------------------------------------------------------------*/
-/* public functions of IDI common code */
-/*------------------------------------------------------------------*/
-void pr_out(ADAPTER *a);
-byte pr_dpc(ADAPTER *a);
-byte scom_test_int(ADAPTER *a);
-void scom_clear_int(ADAPTER *a);
-/*------------------------------------------------------------------*/
-/* OS specific functions used by IDI common code */
-/*------------------------------------------------------------------*/
-void free_entity(ADAPTER *a, byte e_no);
-void assign_queue(ADAPTER *a, byte e_no, word ref);
-byte get_assign(ADAPTER *a, word ref);
-void req_queue(ADAPTER *a, byte e_no);
-byte look_req(ADAPTER *a);
-void next_req(ADAPTER *a);
-ENTITY *entity_ptr(ADAPTER *a, byte e_no);
-#if defined(DIVA_ISTREAM)
-struct _diva_xdi_stream_interface;
-void diva_xdi_provide_istream_info(ADAPTER *a,
- struct _diva_xdi_stream_interface *pI);
-void pr_stream(ADAPTER *a);
-int diva_istream_write(void *context,
- int Id,
- void *data,
- int length,
- int final,
- byte usr1,
- byte usr2);
-int diva_istream_read(void *context,
- int Id,
- void *data,
- int max_length,
- int *final,
- byte *usr1,
- byte *usr2);
-#if defined(DIVA_IDI_RX_DMA)
-#include "diva_dma.h"
-#endif
-#endif
diff --git a/drivers/isdn/hardware/eicon/di_dbg.h b/drivers/isdn/hardware/eicon/di_dbg.h
deleted file mode 100644
index 1380b60e526e..000000000000
--- a/drivers/isdn/hardware/eicon/di_dbg.h
+++ /dev/null
@@ -1,37 +0,0 @@
-
-/*
- *
- Copyright (c) Eicon Networks, 2002.
- *
- This source file is supplied for the use with
- Eicon Networks range of DIVA Server Adapters.
- *
- Eicon File Revision : 2.1
- *
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2, or (at your option)
- any later version.
- * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_DI_DBG_INC__ -#define __DIVA_DI_DBG_INC__ -#if !defined(dtrc) -#define dtrc(a) -#endif -#if !defined(dbug) -#define dbug(a) -#endif -#if !defined USE_EXTENDED_DEBUGS -extern void (*dprintf)(char*, ...); -#endif -#endif diff --git a/drivers/isdn/hardware/eicon/di_defs.h b/drivers/isdn/hardware/eicon/di_defs.h deleted file mode 100644 index a5094d221086..000000000000 --- a/drivers/isdn/hardware/eicon/di_defs.h +++ /dev/null @@ -1,181 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef _DI_DEFS_ -#define _DI_DEFS_ -/* typedefs for our data structures */ -typedef struct get_name_s GET_NAME; -/* The entity_s structure is used to pass all - parameters between application and IDI */ -typedef struct entity_s ENTITY; -typedef struct buffers_s BUFFERS; -typedef struct postcall_s POSTCALL; -typedef struct get_para_s GET_PARA; -#define BOARD_NAME_LENGTH 9 -#define IDI_CALL_LINK_T -#define IDI_CALL_ENTITY_T -/* typedef void ( * IDI_CALL)(ENTITY *); */ -/* -------------------------------------------------------- - IDI_CALL - -------------------------------------------------------- */ -typedef void (IDI_CALL_LINK_T *IDI_CALL)(ENTITY IDI_CALL_ENTITY_T *); -typedef struct { - word length; /* length of data/parameter field */ - byte P[270]; /* data/parameter field */ -} DBUFFER; -struct get_name_s { - word command; /* command = 0x0100 */ - byte name[BOARD_NAME_LENGTH]; -}; -struct postcall_s { - word command; /* command = 0x0300 */ - word dummy; /* not used */ - void (*callback)(void *); /* call back */ - void *context; /* context pointer */ -}; -#define REQ_PARA 0x0600 /* request command line parameters */ -#define REQ_PARA_LEN 1 /* number of data bytes */ -#define L1_STARTUP_DOWN_POS 0 /* '-y' command line parameter in......*/ -#define L1_STARTUP_DOWN_MSK 0x01 /* first byte position (index 0) with value 0x01 */ -struct get_para_s { - word command; /* command = 0x0600 */ - byte len; /* max length of para field in bytes */ - byte para[REQ_PARA_LEN]; /* parameter field */ -}; -struct buffers_s { - word PLength; - byte *P; -}; -struct entity_s { - byte Req; /* pending request */ - byte Rc; /* return code received */ - byte Ind; /* indication received */ - byte ReqCh; /* channel of current Req */ - byte RcCh; /* channel of current Rc */ - byte IndCh; /* channel of current Ind */ - byte Id; /* ID used by this entity */ - byte GlobalId; /* reserved field */ - byte XNum; /* number of X-buffers */ - byte RNum; /* number of R-buffers */ - BUFFERS *X; /* pointer to X-buffer list */ - BUFFERS *R; /* pointer to R-buffer list */ - word RLength; /* length of current R-data */ - DBUFFER *RBuffer; /* buffer of current R-data */ - byte RNR; /* receive not ready flag */ - byte complete; /* receive complete status */ - IDI_CALL callback; - word user[2]; - /* fields used by the driver internally */ - byte No; /* entity number */ - byte reserved2; /* reserved field */ - byte More; /* R/X More flags */ - byte MInd; /* MDATA coding for this ID */ - byte XCurrent; /* current transmit buffer */ - byte RCurrent; /* current receive buffer */ - word XOffset; /* offset in x-buffer */ - word ROffset; /* offset in r-buffer */ -}; -typedef struct { - byte type; - byte channels; - word features; - IDI_CALL request; -} DESCRIPTOR; -/* descriptor type field coding */ -#define IDI_ADAPTER_S 1 -#define IDI_ADAPTER_PR 2 -#define IDI_ADAPTER_DIVA 3 -#define IDI_ADAPTER_MAESTRA 4 -#define IDI_VADAPTER 0x40 -#define IDI_DRIVER 0x80 -#define IDI_DADAPTER 0xfd -#define IDI_DIDDPNP 0xfe -#define IDI_DIMAINT 0xff -/* Hardware IDs ISA PNP */ -#define HW_ID_DIVA_PRO 3 /* same as IDI_ADAPTER_DIVA */ -#define HW_ID_MAESTRA 4 /* same as IDI_ADAPTER_MAESTRA */ -#define HW_ID_PICCOLA 5 -#define HW_ID_DIVA_PRO20 6 -#define HW_ID_DIVA20 7 -#define HW_ID_DIVA_PRO20_U 8 -#define HW_ID_DIVA20_U 9 -#define HW_ID_DIVA30 10 -#define HW_ID_DIVA30_U 11 -/* Hardware IDs PCI */ -#define HW_ID_EICON_PCI 0x1133 -#define HW_ID_SIEMENS_PCI 0x8001 /* unused SubVendor ID for Siemens Cornet-N cards */ -#define HW_ID_PROTTYPE_CORNETN 
0x0014 /* SubDevice ID for Siemens Cornet-N cards */ -#define HW_ID_FUJITSU_SIEMENS_PCI 0x110A /* SubVendor ID for Fujitsu Siemens */ -#define HW_ID_GS03_PCI 0x0021 /* SubDevice ID for Fujitsu Siemens ISDN S0 card */ -#define HW_ID_DIVA_PRO20_PCI 0xe001 -#define HW_ID_DIVA20_PCI 0xe002 -#define HW_ID_DIVA_PRO20_PCI_U 0xe003 -#define HW_ID_DIVA20_PCI_U 0xe004 -#define HW_ID_DIVA201_PCI 0xe005 -#define HW_ID_DIVA_CT_ST 0xe006 -#define HW_ID_DIVA_CT_U 0xe007 -#define HW_ID_DIVA_CTL_ST 0xe008 -#define HW_ID_DIVA_CTL_U 0xe009 -#define HW_ID_DIVA_ISDN_V90_PCI 0xe00a -#define HW_ID_DIVA202_PCI_ST 0xe00b -#define HW_ID_DIVA202_PCI_U 0xe00c -#define HW_ID_DIVA_PRO30_PCI 0xe00d -#define HW_ID_MAESTRA_PCI 0xe010 -#define HW_ID_MAESTRAQ_PCI 0xe012 -#define HW_ID_DSRV_Q8M_V2_PCI 0xe013 -#define HW_ID_MAESTRAP_PCI 0xe014 -#define HW_ID_DSRV_P30M_V2_PCI 0xe015 -#define HW_ID_DSRV_VOICE_Q8M_PCI 0xe016 -#define HW_ID_DSRV_VOICE_Q8M_V2_PCI 0xe017 -#define HW_ID_DSRV_B2M_V2_PCI 0xe018 -#define HW_ID_DSRV_VOICE_P30M_V2_PCI 0xe019 -#define HW_ID_DSRV_B2F_PCI 0xe01a -#define HW_ID_DSRV_VOICE_B2M_V2_PCI 0xe01b -/* Hardware IDs USB */ -#define EICON_USB_VENDOR_ID 0x071D -#define HW_ID_DIVA_USB_REV1 0x1000 -#define HW_ID_DIVA_USB_REV2 0x1003 -#define HW_ID_TELEDAT_SURF_USB_REV2 0x1004 -#define HW_ID_TELEDAT_SURF_USB_REV1 0x2000 -/* -------------------------------------------------------------------------- - Adapter array change notification framework - -------------------------------------------------------------------------- */ -typedef void (IDI_CALL_LINK_T *didd_adapter_change_callback_t)(void IDI_CALL_ENTITY_T *context, DESCRIPTOR *adapter, int removal); -/* -------------------------------------------------------------------------- */ -#define DI_VOICE 0x0 /* obsolete define */ -#define DI_FAX3 0x1 -#define DI_MODEM 0x2 -#define DI_POST 0x4 -#define DI_V110 0x8 -#define DI_V120 0x10 -#define DI_POTS 0x20 -#define DI_CODEC 0x40 -#define DI_MANAGE 0x80 -#define DI_V_42 0x0100 -#define DI_EXTD_FAX 0x0200 /* Extended FAX (ECM, 2D, T.6, Polling) */ -#define DI_AT_PARSER 0x0400 /* Build-in AT Parser in the L2 */ -#define DI_VOICE_OVER_IP 0x0800 /* Voice over IP support */ -typedef void (IDI_CALL_LINK_T *_IDI_CALL)(void *, ENTITY *); -#endif diff --git a/drivers/isdn/hardware/eicon/did_vers.h b/drivers/isdn/hardware/eicon/did_vers.h deleted file mode 100644 index fa8db8249235..000000000000 --- a/drivers/isdn/hardware/eicon/did_vers.h +++ /dev/null @@ -1,26 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -static char diva_didd_common_code_build[] = "102-51"; diff --git a/drivers/isdn/hardware/eicon/diddfunc.c b/drivers/isdn/hardware/eicon/diddfunc.c deleted file mode 100644 index b0b23ed8b374..000000000000 --- a/drivers/isdn/hardware/eicon/diddfunc.c +++ /dev/null @@ -1,115 +0,0 @@ -/* $Id: diddfunc.c,v 1.14.6.2 2004/08/28 20:03:53 armin Exp $ - * - * DIDD Interface module for Eicon active cards. - * - * Functions are in dadapter.c - * - * Copyright 2002-2003 by Armin Schindler (mac@melware.de) - * Copyright 2002-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. - */ - -#include "platform.h" -#include "di_defs.h" -#include "dadapter.h" -#include "divasync.h" - -#define DBG_MINIMUM (DL_LOG + DL_FTL + DL_ERR) -#define DBG_DEFAULT (DBG_MINIMUM + DL_XLOG + DL_REG) - - -extern void DIVA_DIDD_Read(void *, int); -extern char *DRIVERRELEASE_DIDD; -static dword notify_handle; -static DESCRIPTOR _DAdapter; - -/* - * didd callback function - */ -static void *didd_callback(void *context, DESCRIPTOR *adapter, - int removal) -{ - if (adapter->type == IDI_DADAPTER) { - DBG_ERR(("Notification about IDI_DADAPTER change ! Oops.")) - return (NULL); - } else if (adapter->type == IDI_DIMAINT) { - if (removal) { - DbgDeregister(); - } else { - DbgRegister("DIDD", DRIVERRELEASE_DIDD, DBG_DEFAULT); - } - } - return (NULL); -} - -/* - * connect to didd - */ -static int __init connect_didd(void) -{ - int x = 0; - int dadapter = 0; - IDI_SYNC_REQ req; - DESCRIPTOR DIDD_Table[MAX_DESCRIPTORS]; - - DIVA_DIDD_Read(DIDD_Table, sizeof(DIDD_Table)); - - for (x = 0; x < MAX_DESCRIPTORS; x++) { - if (DIDD_Table[x].type == IDI_DADAPTER) { /* DADAPTER found */ - dadapter = 1; - memcpy(&_DAdapter, &DIDD_Table[x], sizeof(_DAdapter)); - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = - IDI_SYNC_REQ_DIDD_REGISTER_ADAPTER_NOTIFY; - req.didd_notify.info.callback = (void *)didd_callback; - req.didd_notify.info.context = NULL; - _DAdapter.request((ENTITY *)&req); - if (req.didd_notify.e.Rc != 0xff) - return (0); - notify_handle = req.didd_notify.info.handle; - } else if (DIDD_Table[x].type == IDI_DIMAINT) { /* MAINT found */ - DbgRegister("DIDD", DRIVERRELEASE_DIDD, DBG_DEFAULT); - } - } - return (dadapter); -} - -/* - * disconnect from didd - */ -static void __exit disconnect_didd(void) -{ - IDI_SYNC_REQ req; - - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER_NOTIFY; - req.didd_notify.info.handle = notify_handle; - _DAdapter.request((ENTITY *)&req); -} - -/* - * init - */ -int __init diddfunc_init(void) -{ - diva_didd_load_time_init(); - - if (!connect_didd()) { - DBG_ERR(("init: failed to connect to DIDD.")) - diva_didd_load_time_finit(); - return (0); - } - return (1); -} - -/* - * finit - */ -void __exit diddfunc_finit(void) -{ - DbgDeregister(); - disconnect_didd(); - diva_didd_load_time_finit(); -} diff --git a/drivers/isdn/hardware/eicon/diva.c b/drivers/isdn/hardware/eicon/diva.c deleted file mode 100644 index 1b25d8bc153a..000000000000 --- a/drivers/isdn/hardware/eicon/diva.c +++ /dev/null @@ -1,666 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* $Id: diva.c,v 1.21.4.1 2004/05/08 14:33:43 armin Exp $ */ - -#define CARDTYPE_H_WANT_DATA 1 -#define CARDTYPE_H_WANT_IDI_DATA 0 -#define CARDTYPE_H_WANT_RESOURCE_DATA 0 -#define CARDTYPE_H_WANT_FILE_DATA 0 - -#include "platform.h" -#include "debuglib.h" -#include "cardtype.h" -#include "pc.h" 
-#include "di_defs.h" -#include "di.h" -#include "io.h" -#include "pc_maint.h" -#include "xdi_msg.h" -#include "xdi_adapter.h" -#include "diva_pci.h" -#include "diva.h" - -#ifdef CONFIG_ISDN_DIVAS_PRIPCI -#include "os_pri.h" -#endif -#ifdef CONFIG_ISDN_DIVAS_BRIPCI -#include "os_bri.h" -#include "os_4bri.h" -#endif - -PISDN_ADAPTER IoAdapters[MAX_ADAPTER]; -extern IDI_CALL Requests[MAX_ADAPTER]; -extern int create_adapter_proc(diva_os_xdi_adapter_t *a); -extern void remove_adapter_proc(diva_os_xdi_adapter_t *a); - -#define DivaIdiReqFunc(N) \ - static void DivaIdiRequest##N(ENTITY *e) \ - { if (IoAdapters[N]) (*IoAdapters[N]->DIRequest)(IoAdapters[N], e); } - -/* -** Create own 32 Adapters -*/ -DivaIdiReqFunc(0) -DivaIdiReqFunc(1) -DivaIdiReqFunc(2) -DivaIdiReqFunc(3) -DivaIdiReqFunc(4) -DivaIdiReqFunc(5) -DivaIdiReqFunc(6) -DivaIdiReqFunc(7) -DivaIdiReqFunc(8) -DivaIdiReqFunc(9) -DivaIdiReqFunc(10) -DivaIdiReqFunc(11) -DivaIdiReqFunc(12) -DivaIdiReqFunc(13) -DivaIdiReqFunc(14) -DivaIdiReqFunc(15) -DivaIdiReqFunc(16) -DivaIdiReqFunc(17) -DivaIdiReqFunc(18) -DivaIdiReqFunc(19) -DivaIdiReqFunc(20) -DivaIdiReqFunc(21) -DivaIdiReqFunc(22) -DivaIdiReqFunc(23) -DivaIdiReqFunc(24) -DivaIdiReqFunc(25) -DivaIdiReqFunc(26) -DivaIdiReqFunc(27) -DivaIdiReqFunc(28) -DivaIdiReqFunc(29) -DivaIdiReqFunc(30) -DivaIdiReqFunc(31) - -/* -** LOCALS -*/ -static LIST_HEAD(adapter_queue); - -typedef struct _diva_get_xlog { - word command; - byte req; - byte rc; - byte data[sizeof(struct mi_pc_maint)]; -} diva_get_xlog_t; - -typedef struct _diva_supported_cards_info { - int CardOrdinal; - diva_init_card_proc_t init_card; -} diva_supported_cards_info_t; - -static diva_supported_cards_info_t divas_supported_cards[] = { -#ifdef CONFIG_ISDN_DIVAS_PRIPCI - /* - PRI Cards - */ - {CARDTYPE_DIVASRV_P_30M_PCI, diva_pri_init_card}, - /* - PRI Rev.2 Cards - */ - {CARDTYPE_DIVASRV_P_30M_V2_PCI, diva_pri_init_card}, - /* - PRI Rev.2 VoIP Cards - */ - {CARDTYPE_DIVASRV_VOICE_P_30M_V2_PCI, diva_pri_init_card}, -#endif -#ifdef CONFIG_ISDN_DIVAS_BRIPCI - /* - 4BRI Rev 1 Cards - */ - {CARDTYPE_DIVASRV_Q_8M_PCI, diva_4bri_init_card}, - {CARDTYPE_DIVASRV_VOICE_Q_8M_PCI, diva_4bri_init_card}, - /* - 4BRI Rev 2 Cards - */ - {CARDTYPE_DIVASRV_Q_8M_V2_PCI, diva_4bri_init_card}, - {CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI, diva_4bri_init_card}, - /* - 4BRI Based BRI Rev 2 Cards - */ - {CARDTYPE_DIVASRV_B_2M_V2_PCI, diva_4bri_init_card}, - {CARDTYPE_DIVASRV_B_2F_PCI, diva_4bri_init_card}, - {CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI, diva_4bri_init_card}, - /* - BRI - */ - {CARDTYPE_MAESTRA_PCI, diva_bri_init_card}, -#endif - - /* - EOL - */ - {-1} -}; - -static void diva_init_request_array(void); -static void *divas_create_pci_card(int handle, void *pci_dev_handle); - -static diva_os_spin_lock_t adapter_lock; - -static int diva_find_free_adapters(int base, int nr) -{ - int i; - - for (i = 0; i < nr; i++) { - if (IoAdapters[base + i]) { - return (-1); - } - } - - return (0); -} - -static diva_os_xdi_adapter_t *diva_q_get_next(struct list_head *what) -{ - diva_os_xdi_adapter_t *a = NULL; - - if (what && (what->next != &adapter_queue)) - a = list_entry(what->next, diva_os_xdi_adapter_t, link); - - return (a); -} - -/* -------------------------------------------------------------------------- - Add card to the card list - -------------------------------------------------------------------------- */ -void *diva_driver_add_card(void *pdev, unsigned long CardOrdinal) -{ - diva_os_spin_lock_magic_t old_irql; - diva_os_xdi_adapter_t *pdiva, *pa; - int i, j, 
max, nr; - - for (i = 0; divas_supported_cards[i].CardOrdinal != -1; i++) { - if (divas_supported_cards[i].CardOrdinal == CardOrdinal) { - if (!(pdiva = divas_create_pci_card(i, pdev))) { - return NULL; - } - switch (CardOrdinal) { - case CARDTYPE_DIVASRV_Q_8M_PCI: - case CARDTYPE_DIVASRV_VOICE_Q_8M_PCI: - case CARDTYPE_DIVASRV_Q_8M_V2_PCI: - case CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI: - max = MAX_ADAPTER - 4; - nr = 4; - break; - - default: - max = MAX_ADAPTER; - nr = 1; - } - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card"); - - for (i = 0; i < max; i++) { - if (!diva_find_free_adapters(i, nr)) { - pdiva->controller = i + 1; - pdiva->xdi_adapter.ANum = pdiva->controller; - IoAdapters[i] = &pdiva->xdi_adapter; - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card"); - create_adapter_proc(pdiva); /* add adapter to proc file system */ - - DBG_LOG(("add %s:%d", - CardProperties - [CardOrdinal].Name, - pdiva->controller)) - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card"); - pa = pdiva; - for (j = 1; j < nr; j++) { /* slave adapters, if any */ - pa = diva_q_get_next(&pa->link); - if (pa && !pa->interface.cleanup_adapter_proc) { - pa->controller = i + 1 + j; - pa->xdi_adapter.ANum = pa->controller; - IoAdapters[i + j] = &pa->xdi_adapter; - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card"); - DBG_LOG(("add slave adapter (%d)", - pa->controller)) - create_adapter_proc(pa); /* add adapter to proc file system */ - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add card"); - } else { - DBG_ERR(("slave adapter problem")) - break; - } - } - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card"); - return (pdiva); - } - } - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add card"); - - /* - Not able to add adapter - remove it and return error - */ - DBG_ERR(("can not alloc request array")) - diva_driver_remove_card(pdiva); - - return NULL; - } - } - - return NULL; -} - -/* -------------------------------------------------------------------------- - Called on driver load, MAIN, main, DriverEntry - -------------------------------------------------------------------------- */ -int divasa_xdi_driver_entry(void) -{ - diva_os_initialize_spin_lock(&adapter_lock, "adapter"); - memset(&IoAdapters[0], 0x00, sizeof(IoAdapters)); - diva_init_request_array(); - - return (0); -} - -/* -------------------------------------------------------------------------- - Remove adapter from list - -------------------------------------------------------------------------- */ -static diva_os_xdi_adapter_t *get_and_remove_from_queue(void) -{ - diva_os_spin_lock_magic_t old_irql; - diva_os_xdi_adapter_t *a = NULL; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "driver_unload"); - - if (!list_empty(&adapter_queue)) { - a = list_entry(adapter_queue.next, diva_os_xdi_adapter_t, link); - list_del(adapter_queue.next); - } - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "driver_unload"); - return (a); -} - -/* -------------------------------------------------------------------------- - Remove card from the card list - -------------------------------------------------------------------------- */ -void diva_driver_remove_card(void *pdiva) -{ - diva_os_spin_lock_magic_t old_irql; - diva_os_xdi_adapter_t *a[4]; - diva_os_xdi_adapter_t *pa; - int i; - - pa = a[0] = (diva_os_xdi_adapter_t *) pdiva; - a[1] = a[2] = a[3] = NULL; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "remode adapter"); - - for (i = 1; i < 4; i++) { - if ((pa = 
diva_q_get_next(&pa->link)) - && !pa->interface.cleanup_adapter_proc) { - a[i] = pa; - } else { - break; - } - } - - for (i = 0; ((i < 4) && a[i]); i++) { - list_del(&a[i]->link); - } - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "driver_unload"); - - (*(a[0]->interface.cleanup_adapter_proc)) (a[0]); - - for (i = 0; i < 4; i++) { - if (a[i]) { - if (a[i]->controller) { - DBG_LOG(("remove adapter (%d)", - a[i]->controller)) IoAdapters[a[i]->controller - 1] = NULL; - remove_adapter_proc(a[i]); - } - diva_os_free(0, a[i]); - } - } -} - -/* -------------------------------------------------------------------------- - Create diva PCI adapter and init internal adapter structures - -------------------------------------------------------------------------- */ -static void *divas_create_pci_card(int handle, void *pci_dev_handle) -{ - diva_supported_cards_info_t *pI = &divas_supported_cards[handle]; - diva_os_spin_lock_magic_t old_irql; - diva_os_xdi_adapter_t *a; - - DBG_LOG(("found %d-%s", pI->CardOrdinal, CardProperties[pI->CardOrdinal].Name)) - - if (!(a = (diva_os_xdi_adapter_t *) diva_os_malloc(0, sizeof(*a)))) { - DBG_ERR(("A: can't alloc adapter")); - return NULL; - } - - memset(a, 0x00, sizeof(*a)); - - a->CardIndex = handle; - a->CardOrdinal = pI->CardOrdinal; - a->Bus = DIVAS_XDI_ADAPTER_BUS_PCI; - a->xdi_adapter.cardType = a->CardOrdinal; - a->resources.pci.bus = diva_os_get_pci_bus(pci_dev_handle); - a->resources.pci.func = diva_os_get_pci_func(pci_dev_handle); - a->resources.pci.hdev = pci_dev_handle; - - /* - Add master adapter first, so slave adapters will receive higher - numbers as master adapter - */ - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "found_pci_card"); - list_add_tail(&a->link, &adapter_queue); - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "found_pci_card"); - - if ((*(pI->init_card)) (a)) { - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "found_pci_card"); - list_del(&a->link); - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "found_pci_card"); - diva_os_free(0, a); - DBG_ERR(("A: can't get adapter resources")); - return NULL; - } - - return (a); -} - -/* -------------------------------------------------------------------------- - Called on driver unload FINIT, finit, Unload - -------------------------------------------------------------------------- */ -void divasa_xdi_driver_unload(void) -{ - diva_os_xdi_adapter_t *a; - - while ((a = get_and_remove_from_queue())) { - if (a->interface.cleanup_adapter_proc) { - (*(a->interface.cleanup_adapter_proc)) (a); - } - if (a->controller) { - IoAdapters[a->controller - 1] = NULL; - remove_adapter_proc(a); - } - diva_os_free(0, a); - } - diva_os_destroy_spin_lock(&adapter_lock, "adapter"); -} - -/* -** Receive and process command from user mode utility -*/ -void *diva_xdi_open_adapter(void *os_handle, const void __user *src, - int length, void *mptr, - divas_xdi_copy_from_user_fn_t cp_fn) -{ - diva_xdi_um_cfg_cmd_t *msg = (diva_xdi_um_cfg_cmd_t *)mptr; - diva_os_xdi_adapter_t *a = NULL; - diva_os_spin_lock_magic_t old_irql; - struct list_head *tmp; - - if (length < sizeof(diva_xdi_um_cfg_cmd_t)) { - DBG_ERR(("A: A(?) open, msg too small (%d < %d)", - length, sizeof(diva_xdi_um_cfg_cmd_t))) - return NULL; - } - if ((*cp_fn) (os_handle, msg, src, sizeof(*msg)) <= 0) { - DBG_ERR(("A: A(?) 
open, write error")) - return NULL; - } - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "open_adapter"); - list_for_each(tmp, &adapter_queue) { - a = list_entry(tmp, diva_os_xdi_adapter_t, link); - if (a->controller == (int)msg->adapter) - break; - a = NULL; - } - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "open_adapter"); - - if (!a) { - DBG_ERR(("A: A(%d) open, adapter not found", msg->adapter)) - } - - return (a); -} - -/* -** Easy cleanup mailbox status -*/ -void diva_xdi_close_adapter(void *adapter, void *os_handle) -{ - diva_os_xdi_adapter_t *a = (diva_os_xdi_adapter_t *) adapter; - - a->xdi_mbox.status &= ~DIVA_XDI_MBOX_BUSY; - if (a->xdi_mbox.data) { - diva_os_free(0, a->xdi_mbox.data); - a->xdi_mbox.data = NULL; - } -} - -int -diva_xdi_write(void *adapter, void *os_handle, const void __user *src, - int length, void *mptr, - divas_xdi_copy_from_user_fn_t cp_fn) -{ - diva_xdi_um_cfg_cmd_t *msg = (diva_xdi_um_cfg_cmd_t *)mptr; - diva_os_xdi_adapter_t *a = (diva_os_xdi_adapter_t *) adapter; - void *data; - - if (a->xdi_mbox.status & DIVA_XDI_MBOX_BUSY) { - DBG_ERR(("A: A(%d) write, mbox busy", a->controller)) - return (-1); - } - - if (length < sizeof(diva_xdi_um_cfg_cmd_t)) { - DBG_ERR(("A: A(%d) write, message too small (%d < %d)", - a->controller, length, - sizeof(diva_xdi_um_cfg_cmd_t))) - return (-3); - } - - if (!(data = diva_os_malloc(0, length))) { - DBG_ERR(("A: A(%d) write, ENOMEM", a->controller)) - return (-2); - } - - if (msg) { - *(diva_xdi_um_cfg_cmd_t *)data = *msg; - length = (*cp_fn) (os_handle, (char *)data + sizeof(*msg), - src + sizeof(*msg), length - sizeof(*msg)); - } else { - length = (*cp_fn) (os_handle, data, src, length); - } - if (length > 0) { - if ((*(a->interface.cmd_proc)) - (a, (diva_xdi_um_cfg_cmd_t *) data, length)) { - length = -3; - } - } else { - DBG_ERR(("A: A(%d) write error (%d)", a->controller, - length)) - } - - diva_os_free(0, data); - - return (length); -} - -/* -** Write answers to user mode utility, if any -*/ -int -diva_xdi_read(void *adapter, void *os_handle, void __user *dst, - int max_length, divas_xdi_copy_to_user_fn_t cp_fn) -{ - diva_os_xdi_adapter_t *a = (diva_os_xdi_adapter_t *) adapter; - int ret; - - if (!(a->xdi_mbox.status & DIVA_XDI_MBOX_BUSY)) { - DBG_ERR(("A: A(%d) rx mbox empty", a->controller)) - return (-1); - } - if (!a->xdi_mbox.data) { - a->xdi_mbox.status &= ~DIVA_XDI_MBOX_BUSY; - DBG_ERR(("A: A(%d) rx ENOMEM", a->controller)) - return (-2); - } - - if (max_length < a->xdi_mbox.data_length) { - DBG_ERR(("A: A(%d) rx buffer too short(%d < %d)", - a->controller, max_length, - a->xdi_mbox.data_length)) - return (-3); - } - - ret = (*cp_fn) (os_handle, dst, a->xdi_mbox.data, - a->xdi_mbox.data_length); - if (ret > 0) { - diva_os_free(0, a->xdi_mbox.data); - a->xdi_mbox.data = NULL; - a->xdi_mbox.status &= ~DIVA_XDI_MBOX_BUSY; - } - - return (ret); -} - - -irqreturn_t diva_os_irq_wrapper(int irq, void *context) -{ - diva_os_xdi_adapter_t *a = context; - diva_xdi_clear_interrupts_proc_t clear_int_proc; - - if (!a || !a->xdi_adapter.diva_isr_handler) - return IRQ_NONE; - - if ((clear_int_proc = a->clear_interrupts_proc)) { - (*clear_int_proc) (a); - a->clear_interrupts_proc = NULL; - return IRQ_HANDLED; - } - - (*(a->xdi_adapter.diva_isr_handler)) (&a->xdi_adapter); - return IRQ_HANDLED; -} - -static void diva_init_request_array(void) -{ - Requests[0] = DivaIdiRequest0; - Requests[1] = DivaIdiRequest1; - Requests[2] = DivaIdiRequest2; - Requests[3] = DivaIdiRequest3; - Requests[4] = DivaIdiRequest4; - 
Requests[5] = DivaIdiRequest5; - Requests[6] = DivaIdiRequest6; - Requests[7] = DivaIdiRequest7; - Requests[8] = DivaIdiRequest8; - Requests[9] = DivaIdiRequest9; - Requests[10] = DivaIdiRequest10; - Requests[11] = DivaIdiRequest11; - Requests[12] = DivaIdiRequest12; - Requests[13] = DivaIdiRequest13; - Requests[14] = DivaIdiRequest14; - Requests[15] = DivaIdiRequest15; - Requests[16] = DivaIdiRequest16; - Requests[17] = DivaIdiRequest17; - Requests[18] = DivaIdiRequest18; - Requests[19] = DivaIdiRequest19; - Requests[20] = DivaIdiRequest20; - Requests[21] = DivaIdiRequest21; - Requests[22] = DivaIdiRequest22; - Requests[23] = DivaIdiRequest23; - Requests[24] = DivaIdiRequest24; - Requests[25] = DivaIdiRequest25; - Requests[26] = DivaIdiRequest26; - Requests[27] = DivaIdiRequest27; - Requests[28] = DivaIdiRequest28; - Requests[29] = DivaIdiRequest29; - Requests[30] = DivaIdiRequest30; - Requests[31] = DivaIdiRequest31; -} - -void diva_xdi_display_adapter_features(int card) -{ - dword features; - if (!card || ((card - 1) >= MAX_ADAPTER) || !IoAdapters[card - 1]) { - return; - } - card--; - features = IoAdapters[card]->Properties.Features; - - DBG_LOG(("FEATURES FOR ADAPTER: %d", card + 1)) - DBG_LOG((" DI_FAX3 : %s", - (features & DI_FAX3) ? "Y" : "N")) - DBG_LOG((" DI_MODEM : %s", - (features & DI_MODEM) ? "Y" : "N")) - DBG_LOG((" DI_POST : %s", - (features & DI_POST) ? "Y" : "N")) - DBG_LOG((" DI_V110 : %s", - (features & DI_V110) ? "Y" : "N")) - DBG_LOG((" DI_V120 : %s", - (features & DI_V120) ? "Y" : "N")) - DBG_LOG((" DI_POTS : %s", - (features & DI_POTS) ? "Y" : "N")) - DBG_LOG((" DI_CODEC : %s", - (features & DI_CODEC) ? "Y" : "N")) - DBG_LOG((" DI_MANAGE : %s", - (features & DI_MANAGE) ? "Y" : "N")) - DBG_LOG((" DI_V_42 : %s", - (features & DI_V_42) ? "Y" : "N")) - DBG_LOG((" DI_EXTD_FAX : %s", - (features & DI_EXTD_FAX) ? "Y" : "N")) - DBG_LOG((" DI_AT_PARSER : %s", - (features & DI_AT_PARSER) ? "Y" : "N")) - DBG_LOG((" DI_VOICE_OVER_IP : %s", - (features & DI_VOICE_OVER_IP) ? 
"Y" : "N")) - } - -void diva_add_slave_adapter(diva_os_xdi_adapter_t *a) -{ - diva_os_spin_lock_magic_t old_irql; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "add_slave"); - list_add_tail(&a->link, &adapter_queue); - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "add_slave"); -} - -int diva_card_read_xlog(diva_os_xdi_adapter_t *a) -{ - diva_get_xlog_t *req; - byte *data; - - if (!a->xdi_adapter.Initialized || !a->xdi_adapter.DIRequest) { - return (-1); - } - if (!(data = diva_os_malloc(0, sizeof(struct mi_pc_maint)))) { - return (-1); - } - memset(data, 0x00, sizeof(struct mi_pc_maint)); - - if (!(req = diva_os_malloc(0, sizeof(*req)))) { - diva_os_free(0, data); - return (-1); - } - req->command = 0x0400; - req->req = LOG; - req->rc = 0x00; - - (*(a->xdi_adapter.DIRequest)) (&a->xdi_adapter, (ENTITY *) req); - - if (!req->rc || req->req) { - diva_os_free(0, data); - diva_os_free(0, req); - return (-1); - } - - memcpy(data, &req->req, sizeof(struct mi_pc_maint)); - - diva_os_free(0, req); - - a->xdi_mbox.data_length = sizeof(struct mi_pc_maint); - a->xdi_mbox.data = data; - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - - return (0); -} - -void xdiFreeFile(void *handle) -{ -} diff --git a/drivers/isdn/hardware/eicon/diva.h b/drivers/isdn/hardware/eicon/diva.h deleted file mode 100644 index 1ad76650fbf9..000000000000 --- a/drivers/isdn/hardware/eicon/diva.h +++ /dev/null @@ -1,33 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: diva.h,v 1.1.2.2 2001/02/08 12:25:43 armin Exp $ */ - -#ifndef __DIVA_XDI_OS_PART_H__ -#define __DIVA_XDI_OS_PART_H__ - - -int divasa_xdi_driver_entry(void); -void divasa_xdi_driver_unload(void); -void *diva_driver_add_card(void *pdev, unsigned long CardOrdinal); -void diva_driver_remove_card(void *pdiva); - -typedef int (*divas_xdi_copy_to_user_fn_t) (void *os_handle, void __user *dst, - const void *src, int length); - -typedef int (*divas_xdi_copy_from_user_fn_t) (void *os_handle, void *dst, - const void __user *src, int length); - -int diva_xdi_read(void *adapter, void *os_handle, void __user *dst, - int max_length, divas_xdi_copy_to_user_fn_t cp_fn); - -int diva_xdi_write(void *adapter, void *os_handle, const void __user *src, - int length, void *msg, - divas_xdi_copy_from_user_fn_t cp_fn); - -void *diva_xdi_open_adapter(void *os_handle, const void __user *src, - int length, void *msg, - divas_xdi_copy_from_user_fn_t cp_fn); - -void diva_xdi_close_adapter(void *adapter, void *os_handle); - - -#endif diff --git a/drivers/isdn/hardware/eicon/diva_didd.c b/drivers/isdn/hardware/eicon/diva_didd.c deleted file mode 100644 index 60e79257dd5f..000000000000 --- a/drivers/isdn/hardware/eicon/diva_didd.c +++ /dev/null @@ -1,139 +0,0 @@ -/* $Id: diva_didd.c,v 1.13.6.4 2005/02/11 19:40:25 armin Exp $ - * - * DIDD Interface module for Eicon active cards. - * - * Functions are in dadapter.c - * - * Copyright 2002-2003 by Armin Schindler (mac@melware.de) - * Copyright 2002-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
- */ - -#include -#include -#include -#include -#include -#include - -#include "platform.h" -#include "di_defs.h" -#include "dadapter.h" -#include "divasync.h" -#include "did_vers.h" - -static char *main_revision = "$Revision: 1.13.6.4 $"; - -static char *DRIVERNAME = - "Eicon DIVA - DIDD table (http://www.melware.net)"; -static char *DRIVERLNAME = "divadidd"; -char *DRIVERRELEASE_DIDD = "2.0"; - -MODULE_DESCRIPTION("DIDD table driver for diva drivers"); -MODULE_AUTHOR("Cytronics & Melware, Eicon Networks"); -MODULE_SUPPORTED_DEVICE("Eicon diva drivers"); -MODULE_LICENSE("GPL"); - -#define DBG_MINIMUM (DL_LOG + DL_FTL + DL_ERR) -#define DBG_DEFAULT (DBG_MINIMUM + DL_XLOG + DL_REG) - -extern int diddfunc_init(void); -extern void diddfunc_finit(void); - -extern void DIVA_DIDD_Read(void *, int); - -static struct proc_dir_entry *proc_didd; -struct proc_dir_entry *proc_net_eicon = NULL; - -EXPORT_SYMBOL(DIVA_DIDD_Read); -EXPORT_SYMBOL(proc_net_eicon); - -static char *getrev(const char *revision) -{ - char *rev; - char *p; - if ((p = strchr(revision, ':'))) { - rev = p + 2; - p = strchr(rev, '$'); - *--p = 0; - } else - rev = "1.0"; - return rev; -} - -static int divadidd_proc_show(struct seq_file *m, void *v) -{ - char tmprev[32]; - - strcpy(tmprev, main_revision); - seq_printf(m, "%s\n", DRIVERNAME); - seq_printf(m, "name : %s\n", DRIVERLNAME); - seq_printf(m, "release : %s\n", DRIVERRELEASE_DIDD); - seq_printf(m, "build : %s(%s)\n", - diva_didd_common_code_build, DIVA_BUILD); - seq_printf(m, "revision : %s\n", getrev(tmprev)); - - return 0; -} - -static int __init create_proc(void) -{ - proc_net_eicon = proc_mkdir("eicon", init_net.proc_net); - - if (proc_net_eicon) { - proc_didd = proc_create_single(DRIVERLNAME, S_IRUGO, - proc_net_eicon, divadidd_proc_show); - return (1); - } - return (0); -} - -static void remove_proc(void) -{ - remove_proc_entry(DRIVERLNAME, proc_net_eicon); - remove_proc_entry("eicon", init_net.proc_net); -} - -static int __init divadidd_init(void) -{ - char tmprev[32]; - int ret = 0; - - printk(KERN_INFO "%s\n", DRIVERNAME); - printk(KERN_INFO "%s: Rel:%s Rev:", DRIVERLNAME, DRIVERRELEASE_DIDD); - strcpy(tmprev, main_revision); - printk("%s Build:%s(%s)\n", getrev(tmprev), - diva_didd_common_code_build, DIVA_BUILD); - - if (!create_proc()) { - printk(KERN_ERR "%s: could not create proc entry\n", - DRIVERLNAME); - ret = -EIO; - goto out; - } - - if (!diddfunc_init()) { - printk(KERN_ERR "%s: failed to connect to DIDD.\n", - DRIVERLNAME); -#ifdef MODULE - remove_proc(); -#endif - ret = -EIO; - goto out; - } - -out: - return (ret); -} - -static void __exit divadidd_exit(void) -{ - diddfunc_finit(); - remove_proc(); - printk(KERN_INFO "%s: module unloaded.\n", DRIVERLNAME); -} - -module_init(divadidd_init); -module_exit(divadidd_exit); diff --git a/drivers/isdn/hardware/eicon/diva_dma.c b/drivers/isdn/hardware/eicon/diva_dma.c deleted file mode 100644 index 217b6aa9f612..000000000000 --- a/drivers/isdn/hardware/eicon/diva_dma.c +++ /dev/null @@ -1,94 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. 
- * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#include "platform.h" -#include "diva_dma.h" -/* - Every entry has length of PAGE_SIZE - and represents one single physical page -*/ -struct _diva_dma_map_entry { - int busy; - dword phys_bus_addr; /* 32bit address as seen by the card */ - void *local_ram_addr; /* local address as seen by the host */ - void *addr_handle; /* handle uset to free allocated memory */ -}; -/* - Create local mapping structure and init it to default state -*/ -struct _diva_dma_map_entry *diva_alloc_dma_map(void *os_context, int nentries) { - diva_dma_map_entry_t *pmap = diva_os_malloc(0, sizeof(*pmap) * (nentries + 1)); - if (pmap) - memset(pmap, 0, sizeof(*pmap) * (nentries + 1)); - return pmap; -} -/* - Free local map (context should be freed before) if any -*/ -void diva_free_dma_mapping(struct _diva_dma_map_entry *pmap) { - if (pmap) { - diva_os_free(0, pmap); - } -} -/* - Set information saved on the map entry -*/ -void diva_init_dma_map_entry(struct _diva_dma_map_entry *pmap, - int nr, void *virt, dword phys, - void *addr_handle) { - pmap[nr].phys_bus_addr = phys; - pmap[nr].local_ram_addr = virt; - pmap[nr].addr_handle = addr_handle; -} -/* - Allocate one single entry in the map -*/ -int diva_alloc_dma_map_entry(struct _diva_dma_map_entry *pmap) { - int i; - for (i = 0; (pmap && pmap[i].local_ram_addr); i++) { - if (!pmap[i].busy) { - pmap[i].busy = 1; - return (i); - } - } - return (-1); -} -/* - Free one single entry in the map -*/ -void diva_free_dma_map_entry(struct _diva_dma_map_entry *pmap, int nr) { - pmap[nr].busy = 0; -} -/* - Get information saved on the map entry -*/ -void diva_get_dma_map_entry(struct _diva_dma_map_entry *pmap, int nr, - void **pvirt, dword *pphys) { - *pphys = pmap[nr].phys_bus_addr; - *pvirt = pmap[nr].local_ram_addr; -} -void *diva_get_entry_handle(struct _diva_dma_map_entry *pmap, int nr) { - return (pmap[nr].addr_handle); -} diff --git a/drivers/isdn/hardware/eicon/diva_dma.h b/drivers/isdn/hardware/eicon/diva_dma.h deleted file mode 100644 index d32c91be562b..000000000000 --- a/drivers/isdn/hardware/eicon/diva_dma.h +++ /dev/null @@ -1,48 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef __DIVA_DMA_MAPPING_IFC_H__ -#define __DIVA_DMA_MAPPING_IFC_H__ -typedef struct _diva_dma_map_entry diva_dma_map_entry_t; -struct _diva_dma_map_entry *diva_alloc_dma_map(void *os_context, int nentries); -void diva_init_dma_map_entry(struct _diva_dma_map_entry *pmap, - int nr, void *virt, dword phys, - void *addr_handle); -int diva_alloc_dma_map_entry(struct _diva_dma_map_entry *pmap); -void diva_free_dma_map_entry(struct _diva_dma_map_entry *pmap, int entry); -void diva_get_dma_map_entry(struct _diva_dma_map_entry *pmap, int nr, - void **pvirt, dword *pphys); -void diva_free_dma_mapping(struct _diva_dma_map_entry *pmap); -/* - Functionality to be implemented by OS wrapper - and running in process context -*/ -void diva_init_dma_map(void *hdev, - struct _diva_dma_map_entry **ppmap, - int nentries); -void diva_free_dma_map(void *hdev, - struct _diva_dma_map_entry *pmap); -void *diva_get_entry_handle(struct _diva_dma_map_entry *pmap, int nr); -#endif diff --git a/drivers/isdn/hardware/eicon/diva_pci.h b/drivers/isdn/hardware/eicon/diva_pci.h deleted file mode 100644 index 7ef5db98ad3c..000000000000 --- a/drivers/isdn/hardware/eicon/diva_pci.h +++ /dev/null @@ -1,20 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: diva_pci.h,v 1.6 2003/01/04 15:29:45 schindler Exp $ */ - -#ifndef __DIVA_PCI_INTERFACE_H__ -#define __DIVA_PCI_INTERFACE_H__ - -void __iomem *divasa_remap_pci_bar(diva_os_xdi_adapter_t *a, - int id, - unsigned long bar, - unsigned long area_length); -void divasa_unmap_pci_bar(void __iomem *bar); -unsigned long divasa_get_pci_irq(unsigned char bus, - unsigned char func, void *pci_dev_handle); -unsigned long divasa_get_pci_bar(unsigned char bus, - unsigned char func, - int bar, void *pci_dev_handle); -byte diva_os_get_pci_bus(void *pci_dev_handle); -byte diva_os_get_pci_func(void *pci_dev_handle); - -#endif diff --git a/drivers/isdn/hardware/eicon/divacapi.h b/drivers/isdn/hardware/eicon/divacapi.h deleted file mode 100644 index c4868a0d82f4..000000000000 --- a/drivers/isdn/hardware/eicon/divacapi.h +++ /dev/null @@ -1,1350 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ - -/*#define DEBUG */ - -#include - -#define IMPLEMENT_DTMF 1 -#define IMPLEMENT_LINE_INTERCONNECT2 1 -#define IMPLEMENT_ECHO_CANCELLER 1 -#define IMPLEMENT_RTP 1 -#define IMPLEMENT_T38 1 -#define IMPLEMENT_FAX_SUB_SEP_PWD 1 -#define IMPLEMENT_V18 1 -#define IMPLEMENT_DTMF_TONE 1 -#define IMPLEMENT_PIAFS 1 -#define IMPLEMENT_FAX_PAPER_FORMATS 1 -#define IMPLEMENT_VOWN 1 -#define IMPLEMENT_CAPIDTMF 1 -#define IMPLEMENT_FAX_NONSTANDARD 1 -#define VSWITCH_SUPPORT 1 - - -#define IMPLEMENT_LINE_INTERCONNECT 0 -#define IMPLEMENT_MARKED_OK_AFTER_FC 1 - -#include "capidtmf.h" - -/*------------------------------------------------------------------*/ -/* Common API internal definitions */ -/*------------------------------------------------------------------*/ - -#define MAX_APPL 240 -#define MAX_NCCI 127 - -#define MSG_IN_QUEUE_SIZE ((4096 + 3) & 0xfffc) /* must be multiple of 4 */ - - -#define MSG_IN_OVERHEAD sizeof(APPL *) - -#define MAX_NL_CHANNEL 255 -#define MAX_DATA_B3 8 -#define MAX_DATA_ACK MAX_DATA_B3 -#define MAX_MULTI_IE 6 -#define MAX_MSG_SIZE 256 -#define MAX_MSG_PARMS 10 -#define MAX_CPN_MASK_SIZE 16 -#define MAX_MSN_CONFIG 10 -#define EXT_CONTROLLER 0x80 -#define CODEC 0x01 -#define CODEC_PERMANENT 0x02 -#define ADV_VOICE 0x03 -#define MAX_CIP_TYPES 5 /* kind of CIP types for group optimization */ - -#define FAX_CONNECT_INFO_BUFFER_SIZE 256 -#define NCPI_BUFFER_SIZE 256 - -#define MAX_CHANNELS_PER_PLCI 8 -#define MAX_INTERNAL_COMMAND_LEVELS 4 -#define INTERNAL_REQ_BUFFER_SIZE 272 - -#define INTERNAL_IND_BUFFER_SIZE 768 - -#define DTMF_PARAMETER_BUFFER_SIZE 12 -#define ADV_VOICE_COEF_BUFFER_SIZE 50 - -#define LI_PLCI_B_QUEUE_ENTRIES 256 - - - -typedef struct _APPL APPL; -typedef struct _PLCI PLCI; -typedef struct _NCCI NCCI; -typedef struct _DIVA_CAPI_ADAPTER DIVA_CAPI_ADAPTER; -typedef struct _DATA_B3_DESC DATA_B3_DESC; -typedef struct _DATA_ACK_DESC DATA_ACK_DESC; -typedef struct manufacturer_profile_s MANUFACTURER_PROFILE; -typedef struct fax_ncpi_s FAX_NCPI; -typedef struct api_parse_s API_PARSE; -typedef struct api_save_s API_SAVE; -typedef struct msn_config_s MSN_CONFIG; -typedef struct msn_config_max_s MSN_CONFIG_MAX; -typedef struct msn_ld_s MSN_LD; - -struct manufacturer_profile_s { - dword private_options; - dword rtp_primary_payloads; - dword rtp_additional_payloads; -}; - -struct fax_ncpi_s { - word options; - word format; -}; - -struct msn_config_s { - byte msn[MAX_CPN_MASK_SIZE]; -}; - -struct msn_config_max_s { - MSN_CONFIG msn_conf[MAX_MSN_CONFIG]; -}; - -struct msn_ld_s { - dword low; - dword high; -}; - -struct api_parse_s { - word length; - byte *info; -}; - -struct api_save_s { - API_PARSE parms[MAX_MSG_PARMS + 1]; - byte info[MAX_MSG_SIZE]; -}; - -struct _DATA_B3_DESC { - word Handle; - word Number; - word Flags; - word Length; - void *P; -}; - -struct _DATA_ACK_DESC { - word Handle; - word Number; -}; - -typedef void (*t_std_internal_command)(dword Id, PLCI *plci, byte Rc); - -/************************************************************************/ -/* Don't forget to adapt dos.asm after changing the _APPL structure!!!! 
*/ -struct _APPL { - word Id; - word NullCREnable; - word CDEnable; - dword S_Handle; - - - - - - - LIST_ENTRY s_function; - dword s_context; - word s_count; - APPL *s_next; - byte *xbuffer_used; - void **xbuffer_internal; - void **xbuffer_ptr; - - - - - - - byte *queue; - word queue_size; - word queue_free; - word queue_read; - word queue_write; - word queue_signal; - byte msg_lost; - byte appl_flags; - word Number; - - word MaxBuffer; - byte MaxNCCI; - byte MaxNCCIData; - word MaxDataLength; - word NCCIDataFlowCtrlTimer; - byte *ReceiveBuffer; - word *DataNCCI; - word *DataFlags; -}; - - -struct _PLCI { - ENTITY Sig; - ENTITY NL; - word RNum; - word RFlags; - BUFFERS RData[2]; - BUFFERS XData[1]; - BUFFERS NData[2]; - - DIVA_CAPI_ADAPTER *adapter; - APPL *appl; - PLCI *relatedPTYPLCI; - byte Id; - byte State; - byte sig_req; - byte nl_req; - byte SuppState; - byte channels; - byte tel; - byte B1_resource; - byte B2_prot; - byte B3_prot; - - word command; - word m_command; - word internal_command; - word number; - word req_in_start; - word req_in; - word req_out; - word msg_in_write_pos; - word msg_in_read_pos; - word msg_in_wrap_pos; - - void *data_sent_ptr; - byte data_sent; - byte send_disc; - byte sig_global_req; - byte sig_remove_id; - byte nl_global_req; - byte nl_remove_id; - byte b_channel; - byte adv_nl; - byte manufacturer; - byte call_dir; - byte hook_state; - byte spoofed_msg; - byte ptyState; - byte cr_enquiry; - word hangup_flow_ctrl_timer; - - word ncci_ring_list; - byte inc_dis_ncci_table[MAX_CHANNELS_PER_PLCI]; - t_std_internal_command internal_command_queue[MAX_INTERNAL_COMMAND_LEVELS]; - DECLARE_BITMAP(c_ind_mask_table, MAX_APPL); - DECLARE_BITMAP(group_optimization_mask_table, MAX_APPL); - byte RBuffer[200]; - dword msg_in_queue[MSG_IN_QUEUE_SIZE/sizeof(dword)]; - API_SAVE saved_msg; - API_SAVE B_protocol; - byte fax_connect_info_length; - byte fax_connect_info_buffer[FAX_CONNECT_INFO_BUFFER_SIZE]; - byte fax_edata_ack_length; - word nsf_control_bits; - byte ncpi_state; - byte ncpi_buffer[NCPI_BUFFER_SIZE]; - - byte internal_req_buffer[INTERNAL_REQ_BUFFER_SIZE]; - byte internal_ind_buffer[INTERNAL_IND_BUFFER_SIZE + 3]; - dword requested_options_conn; - dword requested_options; - word B1_facilities; - API_SAVE *adjust_b_parms_msg; - word adjust_b_facilities; - word adjust_b_command; - word adjust_b_ncci; - word adjust_b_mode; - word adjust_b_state; - byte adjust_b_restore; - - byte dtmf_rec_active; - word dtmf_rec_pulse_ms; - word dtmf_rec_pause_ms; - byte dtmf_send_requests; - word dtmf_send_pulse_ms; - word dtmf_send_pause_ms; - word dtmf_cmd; - word dtmf_msg_number_queue[8]; - byte dtmf_parameter_length; - byte dtmf_parameter_buffer[DTMF_PARAMETER_BUFFER_SIZE]; - - - t_capidtmf_state capidtmf_state; - - - byte li_bchannel_id; /* BRI: 1..2, PRI: 1..32 */ - byte li_channel_bits; - byte li_notify_update; - word li_cmd; - word li_write_command; - word li_write_channel; - word li_plci_b_write_pos; - word li_plci_b_read_pos; - word li_plci_b_req_pos; - dword li_plci_b_queue[LI_PLCI_B_QUEUE_ENTRIES]; - - - word ec_cmd; - word ec_idi_options; - word ec_tail_length; - - - byte tone_last_indication_code; - - byte vswitchstate; - byte vsprot; - byte vsprotdialect; - byte notifiedcall; /* Flag if it is a spoofed call */ - - int rx_dma_descriptor; - dword rx_dma_magic; -}; - - -struct _NCCI { - byte data_out; - byte data_pending; - byte data_ack_out; - byte data_ack_pending; - DATA_B3_DESC DBuffer[MAX_DATA_B3]; - DATA_ACK_DESC DataAck[MAX_DATA_ACK]; -}; - - -struct 
_DIVA_CAPI_ADAPTER { - IDI_CALL request; - byte Id; - byte max_plci; - byte max_listen; - byte listen_active; - PLCI *plci; - byte ch_ncci[MAX_NL_CHANNEL + 1]; - byte ncci_ch[MAX_NCCI + 1]; - byte ncci_plci[MAX_NCCI + 1]; - byte ncci_state[MAX_NCCI + 1]; - byte ncci_next[MAX_NCCI + 1]; - NCCI ncci[MAX_NCCI + 1]; - - byte ch_flow_control[MAX_NL_CHANNEL + 1]; /* Used by XON protocol */ - byte ch_flow_control_pending; - byte ch_flow_plci[MAX_NL_CHANNEL + 1]; - int last_flow_control_ch; - - dword Info_Mask[MAX_APPL]; - dword CIP_Mask[MAX_APPL]; - - dword Notification_Mask[MAX_APPL]; - PLCI *codec_listen[MAX_APPL]; - dword requested_options_table[MAX_APPL]; - API_PROFILE profile; - MANUFACTURER_PROFILE man_profile; - dword manufacturer_features; - - byte AdvCodecFLAG; - PLCI *AdvCodecPLCI; - PLCI *AdvSignalPLCI; - APPL *AdvSignalAppl; - byte TelOAD[23]; - byte TelOSA[23]; - byte scom_appl_disable; - PLCI *automatic_lawPLCI; - byte automatic_law; - byte u_law; - - byte adv_voice_coef_length; - byte adv_voice_coef_buffer[ADV_VOICE_COEF_BUFFER_SIZE]; - - byte li_pri; - byte li_channels; - word li_base; - - byte adapter_disabled; - byte group_optimization_enabled; /* use application groups if enabled */ - dword sdram_bar; - byte flag_dynamic_l1_down; /* for hunt groups:down layer 1 if no appl present*/ - byte FlowControlIdTable[256]; - byte FlowControlSkipTable[256]; - void *os_card; /* pointer to associated OS dependent adapter structure */ -}; - - -/*------------------------------------------------------------------*/ -/* Application flags */ -/*------------------------------------------------------------------*/ - -#define APPL_FLAG_OLD_LI_SPEC 0x01 -#define APPL_FLAG_PRIV_EC_SPEC 0x02 - - -/*------------------------------------------------------------------*/ -/* API parameter definitions */ -/*------------------------------------------------------------------*/ - -#define X75_TTX 1 /* x.75 for ttx */ -#define TRF 2 /* transparent with hdlc framing */ -#define TRF_IN 3 /* transparent with hdlc fr. inc. 
*/ -#define SDLC 4 /* sdlc, sna layer-2 */ -#define X75_BTX 5 /* x.75 for btx */ -#define LAPD 6 /* lapd (Q.921) */ -#define X25_L2 7 /* x.25 layer-2 */ -#define V120_L2 8 /* V.120 layer-2 protocol */ -#define V42_IN 9 /* V.42 layer-2 protocol, incoming */ -#define V42 10 /* V.42 layer-2 protocol */ -#define MDM_ATP 11 /* AT Parser built in the L2 */ -#define X75_V42BIS 12 /* ISO7776 (X.75 SLP) modified to support V.42 bis compression */ -#define RTPL2_IN 13 /* RTP layer-2 protocol, incoming */ -#define RTPL2 14 /* RTP layer-2 protocol */ -#define V120_V42BIS 15 /* V.120 layer-2 protocol supporting V.42 bis compression */ - -#define T70NL 1 -#define X25PLP 2 -#define T70NLX 3 -#define TRANSPARENT_NL 4 -#define ISO8208 5 -#define T30 6 - - -/*------------------------------------------------------------------*/ -/* FAX interface to IDI */ -/*------------------------------------------------------------------*/ - -#define CAPI_MAX_HEAD_LINE_SPACE 89 -#define CAPI_MAX_DATE_TIME_LENGTH 18 - -#define T30_MAX_STATION_ID_LENGTH 20 -#define T30_MAX_SUBADDRESS_LENGTH 20 -#define T30_MAX_PASSWORD_LENGTH 20 - -typedef struct t30_info_s T30_INFO; -struct t30_info_s { - byte code; - byte rate_div_2400; - byte resolution; - byte data_format; - byte pages_low; - byte pages_high; - byte operating_mode; - byte control_bits_low; - byte control_bits_high; - byte feature_bits_low; - byte feature_bits_high; - byte recording_properties; - byte universal_6; - byte universal_7; - byte station_id_len; - byte head_line_len; - byte station_id[T30_MAX_STATION_ID_LENGTH]; -/* byte head_line[]; */ -/* byte sub_sep_length; */ -/* byte sub_sep_field[]; */ -/* byte pwd_length; */ -/* byte pwd_field[]; */ -/* byte nsf_info_length; */ -/* byte nsf_info_field[]; */ -}; - - -#define T30_RESOLUTION_R8_0385 0x00 -#define T30_RESOLUTION_R8_0770_OR_200 0x01 -#define T30_RESOLUTION_R8_1540 0x02 -#define T30_RESOLUTION_R16_1540_OR_400 0x04 -#define T30_RESOLUTION_R4_0385_OR_100 0x08 -#define T30_RESOLUTION_300_300 0x10 -#define T30_RESOLUTION_INCH_BASED 0x40 -#define T30_RESOLUTION_METRIC_BASED 0x80 - -#define T30_RECORDING_WIDTH_ISO_A4 0 -#define T30_RECORDING_WIDTH_ISO_B4 1 -#define T30_RECORDING_WIDTH_ISO_A3 2 -#define T30_RECORDING_WIDTH_COUNT 3 - -#define T30_RECORDING_LENGTH_ISO_A4 0 -#define T30_RECORDING_LENGTH_ISO_B4 1 -#define T30_RECORDING_LENGTH_UNLIMITED 2 -#define T30_RECORDING_LENGTH_COUNT 3 - -#define T30_MIN_SCANLINE_TIME_00_00_00 0 -#define T30_MIN_SCANLINE_TIME_05_05_05 1 -#define T30_MIN_SCANLINE_TIME_10_05_05 2 -#define T30_MIN_SCANLINE_TIME_10_10_10 3 -#define T30_MIN_SCANLINE_TIME_20_10_10 4 -#define T30_MIN_SCANLINE_TIME_20_20_20 5 -#define T30_MIN_SCANLINE_TIME_40_20_20 6 -#define T30_MIN_SCANLINE_TIME_40_40_40 7 -#define T30_MIN_SCANLINE_TIME_RES_8 8 -#define T30_MIN_SCANLINE_TIME_RES_9 9 -#define T30_MIN_SCANLINE_TIME_RES_10 10 -#define T30_MIN_SCANLINE_TIME_10_10_05 11 -#define T30_MIN_SCANLINE_TIME_20_10_05 12 -#define T30_MIN_SCANLINE_TIME_20_20_10 13 -#define T30_MIN_SCANLINE_TIME_40_20_10 14 -#define T30_MIN_SCANLINE_TIME_40_40_20 15 -#define T30_MIN_SCANLINE_TIME_COUNT 16 - -#define T30_DATA_FORMAT_SFF 0 -#define T30_DATA_FORMAT_ASCII 1 -#define T30_DATA_FORMAT_NATIVE 2 -#define T30_DATA_FORMAT_COUNT 3 - - -#define T30_OPERATING_MODE_STANDARD 0 -#define T30_OPERATING_MODE_CLASS2 1 -#define T30_OPERATING_MODE_CLASS1 2 -#define T30_OPERATING_MODE_CAPI 3 -#define T30_OPERATING_MODE_CAPI_NEG 4 -#define T30_OPERATING_MODE_COUNT 5 - -/* EDATA transmit messages */ -#define EDATA_T30_DIS 0x01 -#define 
EDATA_T30_FTT 0x02 -#define EDATA_T30_MCF 0x03 -#define EDATA_T30_PARAMETERS 0x04 - -/* EDATA receive messages */ -#define EDATA_T30_DCS 0x81 -#define EDATA_T30_TRAIN_OK 0x82 -#define EDATA_T30_EOP 0x83 -#define EDATA_T30_MPS 0x84 -#define EDATA_T30_EOM 0x85 -#define EDATA_T30_DTC 0x86 -#define EDATA_T30_PAGE_END 0x87 /* Indicates end of page data. Reserved, but not implemented ! */ -#define EDATA_T30_EOP_CAPI 0x88 - - -#define T30_SUCCESS 0 -#define T30_ERR_NO_DIS_RECEIVED 1 -#define T30_ERR_TIMEOUT_NO_RESPONSE 2 -#define T30_ERR_RETRY_NO_RESPONSE 3 -#define T30_ERR_TOO_MANY_REPEATS 4 -#define T30_ERR_UNEXPECTED_MESSAGE 5 -#define T30_ERR_UNEXPECTED_DCN 6 -#define T30_ERR_DTC_UNSUPPORTED 7 -#define T30_ERR_ALL_RATES_FAILED 8 -#define T30_ERR_TOO_MANY_TRAINS 9 -#define T30_ERR_RECEIVE_CORRUPTED 10 -#define T30_ERR_UNEXPECTED_DISC 11 -#define T30_ERR_APPLICATION_DISC 12 -#define T30_ERR_INCOMPATIBLE_DIS 13 -#define T30_ERR_INCOMPATIBLE_DCS 14 -#define T30_ERR_TIMEOUT_NO_COMMAND 15 -#define T30_ERR_RETRY_NO_COMMAND 16 -#define T30_ERR_TIMEOUT_COMMAND_TOO_LONG 17 -#define T30_ERR_TIMEOUT_RESPONSE_TOO_LONG 18 -#define T30_ERR_NOT_IDENTIFIED 19 -#define T30_ERR_SUPERVISORY_TIMEOUT 20 -#define T30_ERR_TOO_LONG_SCAN_LINE 21 -/* #define T30_ERR_RETRY_NO_PAGE_AFTER_MPS 22 */ -#define T30_ERR_RETRY_NO_PAGE_RECEIVED 23 -#define T30_ERR_RETRY_NO_DCS_AFTER_FTT 24 -#define T30_ERR_RETRY_NO_DCS_AFTER_EOM 25 -#define T30_ERR_RETRY_NO_DCS_AFTER_MPS 26 -#define T30_ERR_RETRY_NO_DCN_AFTER_MCF 27 -#define T30_ERR_RETRY_NO_DCN_AFTER_RTN 28 -#define T30_ERR_RETRY_NO_CFR 29 -#define T30_ERR_RETRY_NO_MCF_AFTER_EOP 30 -#define T30_ERR_RETRY_NO_MCF_AFTER_EOM 31 -#define T30_ERR_RETRY_NO_MCF_AFTER_MPS 32 -#define T30_ERR_SUB_SEP_UNSUPPORTED 33 -#define T30_ERR_PWD_UNSUPPORTED 34 -#define T30_ERR_SUB_SEP_PWD_UNSUPPORTED 35 -#define T30_ERR_INVALID_COMMAND_FRAME 36 -#define T30_ERR_UNSUPPORTED_PAGE_CODING 37 -#define T30_ERR_INVALID_PAGE_CODING 38 -#define T30_ERR_INCOMPATIBLE_PAGE_CONFIG 39 -#define T30_ERR_TIMEOUT_FROM_APPLICATION 40 -#define T30_ERR_V34FAX_NO_REACTION_ON_MARK 41 -#define T30_ERR_V34FAX_TRAINING_TIMEOUT 42 -#define T30_ERR_V34FAX_UNEXPECTED_V21 43 -#define T30_ERR_V34FAX_PRIMARY_CTS_ON 44 -#define T30_ERR_V34FAX_TURNAROUND_POLLING 45 -#define T30_ERR_V34FAX_V8_INCOMPATIBILITY 46 - - -#define T30_CONTROL_BIT_DISABLE_FINE 0x0001 -#define T30_CONTROL_BIT_ENABLE_ECM 0x0002 -#define T30_CONTROL_BIT_ECM_64_BYTES 0x0004 -#define T30_CONTROL_BIT_ENABLE_2D_CODING 0x0008 -#define T30_CONTROL_BIT_ENABLE_T6_CODING 0x0010 -#define T30_CONTROL_BIT_ENABLE_UNCOMPR 0x0020 -#define T30_CONTROL_BIT_ACCEPT_POLLING 0x0040 -#define T30_CONTROL_BIT_REQUEST_POLLING 0x0080 -#define T30_CONTROL_BIT_MORE_DOCUMENTS 0x0100 -#define T30_CONTROL_BIT_ACCEPT_SUBADDRESS 0x0200 -#define T30_CONTROL_BIT_ACCEPT_SEL_POLLING 0x0400 -#define T30_CONTROL_BIT_ACCEPT_PASSWORD 0x0800 -#define T30_CONTROL_BIT_ENABLE_V34FAX 0x1000 -#define T30_CONTROL_BIT_EARLY_CONNECT 0x2000 - -#define T30_CONTROL_BIT_ALL_FEATURES (T30_CONTROL_BIT_ENABLE_ECM | T30_CONTROL_BIT_ENABLE_2D_CODING | T30_CONTROL_BIT_ENABLE_T6_CODING | T30_CONTROL_BIT_ENABLE_UNCOMPR | T30_CONTROL_BIT_ENABLE_V34FAX) - -#define T30_FEATURE_BIT_FINE 0x0001 -#define T30_FEATURE_BIT_ECM 0x0002 -#define T30_FEATURE_BIT_ECM_64_BYTES 0x0004 -#define T30_FEATURE_BIT_2D_CODING 0x0008 -#define T30_FEATURE_BIT_T6_CODING 0x0010 -#define T30_FEATURE_BIT_UNCOMPR_ENABLED 0x0020 -#define T30_FEATURE_BIT_POLLING 0x0040 -#define T30_FEATURE_BIT_MORE_DOCUMENTS 0x0100 -#define T30_FEATURE_BIT_V34FAX 
0x1000 - - -#define T30_NSF_CONTROL_BIT_ENABLE_NSF 0x0001 -#define T30_NSF_CONTROL_BIT_RAW_INFO 0x0002 -#define T30_NSF_CONTROL_BIT_NEGOTIATE_IND 0x0004 -#define T30_NSF_CONTROL_BIT_NEGOTIATE_RESP 0x0008 - -#define T30_NSF_ELEMENT_NSF_FIF 0x00 -#define T30_NSF_ELEMENT_NSC_FIF 0x01 -#define T30_NSF_ELEMENT_NSS_FIF 0x02 -#define T30_NSF_ELEMENT_COMPANY_NAME 0x03 - - -/*------------------------------------------------------------------*/ -/* Analog modem definitions */ -/*------------------------------------------------------------------*/ - -typedef struct async_s ASYNC_FORMAT; -struct async_s { - unsigned pe:1; - unsigned parity:2; - unsigned spare:2; - unsigned stp:1; - unsigned ch_len:2; /* 3th octett in CAI */ -}; - - -/*------------------------------------------------------------------*/ -/* PLCI/NCCI states */ -/*------------------------------------------------------------------*/ - -#define IDLE 0 -#define OUTG_CON_PENDING 1 -#define INC_CON_PENDING 2 -#define INC_CON_ALERT 3 -#define INC_CON_ACCEPT 4 -#define INC_ACT_PENDING 5 -#define LISTENING 6 -#define CONNECTED 7 -#define OUTG_DIS_PENDING 8 -#define INC_DIS_PENDING 9 -#define LOCAL_CONNECT 10 -#define INC_RES_PENDING 11 -#define OUTG_RES_PENDING 12 -#define SUSPENDING 13 -#define ADVANCED_VOICE_SIG 14 -#define ADVANCED_VOICE_NOSIG 15 -#define RESUMING 16 -#define INC_CON_CONNECTED_ALERT 17 -#define OUTG_REJ_PENDING 18 - - -/*------------------------------------------------------------------*/ -/* auxiliary states for supplementary services */ -/*------------------------------------------------------------------*/ - -#define IDLE 0 -#define HOLD_REQUEST 1 -#define HOLD_INDICATE 2 -#define CALL_HELD 3 -#define RETRIEVE_REQUEST 4 -#define RETRIEVE_INDICATION 5 - -/*------------------------------------------------------------------*/ -/* Capi IE + Msg types */ -/*------------------------------------------------------------------*/ -#define ESC_CAUSE 0x800 | CAU /* Escape cause element */ -#define ESC_MSGTYPE 0x800 | MSGTYPEIE /* Escape message type */ -#define ESC_CHI 0x800 | CHI /* Escape channel id */ -#define ESC_LAW 0x800 | BC /* Escape law info */ -#define ESC_CR 0x800 | CRIE /* Escape CallReference */ -#define ESC_PROFILE 0x800 | PROFILEIE /* Escape profile */ -#define ESC_SSEXT 0x800 | SSEXTIE /* Escape Supplem. Serv.*/ -#define ESC_VSWITCH 0x800 | VSWITCHIE /* Escape VSwitch */ -#define CST 0x14 /* Call State i.e. */ -#define PI 0x1E /* Progress Indicator */ -#define NI 0x27 /* Notification Ind */ -#define CONN_NR 0x4C /* Connected Number */ -#define CONG_RNR 0xBF /* Congestion RNR */ -#define CONG_RR 0xB0 /* Congestion RR */ -#define RESERVED 0xFF /* Res. 
for future use */ -#define ON_BOARD_CODEC 0x02 /* external controller */ -#define HANDSET 0x04 /* Codec+Handset(Pro11) */ -#define HOOK_SUPPORT 0x01 /* activate Hook signal */ -#define SCR 0x7a /* unscreened number */ - -#define HOOK_OFF_REQ 0x9001 /* internal conn req */ -#define HOOK_ON_REQ 0x9002 /* internal disc req */ -#define SUSPEND_REQ 0x9003 /* internal susp req */ -#define RESUME_REQ 0x9004 /* internal resume req */ -#define USELAW_REQ 0x9005 /* internal law req */ -#define LISTEN_SIG_ASSIGN_PEND 0x9006 -#define PERM_LIST_REQ 0x900a /* permanent conn DCE */ -#define C_HOLD_REQ 0x9011 -#define C_RETRIEVE_REQ 0x9012 -#define C_NCR_FAC_REQ 0x9013 -#define PERM_COD_ASSIGN 0x9014 -#define PERM_COD_CALL 0x9015 -#define PERM_COD_HOOK 0x9016 -#define PERM_COD_CONN_PEND 0x9017 /* wait for connect_con */ -#define PTY_REQ_PEND 0x9018 -#define CD_REQ_PEND 0x9019 -#define CF_START_PEND 0x901a -#define CF_STOP_PEND 0x901b -#define ECT_REQ_PEND 0x901c -#define GETSERV_REQ_PEND 0x901d -#define BLOCK_PLCI 0x901e -#define INTERR_NUMBERS_REQ_PEND 0x901f -#define INTERR_DIVERSION_REQ_PEND 0x9020 -#define MWI_ACTIVATE_REQ_PEND 0x9021 -#define MWI_DEACTIVATE_REQ_PEND 0x9022 -#define SSEXT_REQ_COMMAND 0x9023 -#define SSEXT_NC_REQ_COMMAND 0x9024 -#define START_L1_SIG_ASSIGN_PEND 0x9025 -#define REM_L1_SIG_ASSIGN_PEND 0x9026 -#define CONF_BEGIN_REQ_PEND 0x9027 -#define CONF_ADD_REQ_PEND 0x9028 -#define CONF_SPLIT_REQ_PEND 0x9029 -#define CONF_DROP_REQ_PEND 0x902a -#define CONF_ISOLATE_REQ_PEND 0x902b -#define CONF_REATTACH_REQ_PEND 0x902c -#define VSWITCH_REQ_PEND 0x902d -#define GET_MWI_STATE 0x902e -#define CCBS_REQUEST_REQ_PEND 0x902f -#define CCBS_DEACTIVATE_REQ_PEND 0x9030 -#define CCBS_INTERROGATE_REQ_PEND 0x9031 - -#define NO_INTERNAL_COMMAND 0 -#define DTMF_COMMAND_1 1 -#define DTMF_COMMAND_2 2 -#define DTMF_COMMAND_3 3 -#define MIXER_COMMAND_1 4 -#define MIXER_COMMAND_2 5 -#define MIXER_COMMAND_3 6 -#define ADV_VOICE_COMMAND_CONNECT_1 7 -#define ADV_VOICE_COMMAND_CONNECT_2 8 -#define ADV_VOICE_COMMAND_CONNECT_3 9 -#define ADV_VOICE_COMMAND_DISCONNECT_1 10 -#define ADV_VOICE_COMMAND_DISCONNECT_2 11 -#define ADV_VOICE_COMMAND_DISCONNECT_3 12 -#define ADJUST_B_RESTORE_1 13 -#define ADJUST_B_RESTORE_2 14 -#define RESET_B3_COMMAND_1 15 -#define SELECT_B_COMMAND_1 16 -#define FAX_CONNECT_INFO_COMMAND_1 17 -#define FAX_CONNECT_INFO_COMMAND_2 18 -#define FAX_ADJUST_B23_COMMAND_1 19 -#define FAX_ADJUST_B23_COMMAND_2 20 -#define EC_COMMAND_1 21 -#define EC_COMMAND_2 22 -#define EC_COMMAND_3 23 -#define RTP_CONNECT_B3_REQ_COMMAND_1 24 -#define RTP_CONNECT_B3_REQ_COMMAND_2 25 -#define RTP_CONNECT_B3_REQ_COMMAND_3 26 -#define RTP_CONNECT_B3_RES_COMMAND_1 27 -#define RTP_CONNECT_B3_RES_COMMAND_2 28 -#define RTP_CONNECT_B3_RES_COMMAND_3 29 -#define HOLD_SAVE_COMMAND_1 30 -#define RETRIEVE_RESTORE_COMMAND_1 31 -#define FAX_DISCONNECT_COMMAND_1 32 -#define FAX_DISCONNECT_COMMAND_2 33 -#define FAX_DISCONNECT_COMMAND_3 34 -#define FAX_EDATA_ACK_COMMAND_1 35 -#define FAX_EDATA_ACK_COMMAND_2 36 -#define FAX_CONNECT_ACK_COMMAND_1 37 -#define FAX_CONNECT_ACK_COMMAND_2 38 -#define STD_INTERNAL_COMMAND_COUNT 39 - -#define UID 0x2d /* User Id for Mgmt */ - -#define CALL_DIR_OUT 0x01 /* call direction of initial call */ -#define CALL_DIR_IN 0x02 -#define CALL_DIR_ORIGINATE 0x04 /* DTE/DCE direction according to */ -#define CALL_DIR_ANSWER 0x08 /* state of B-Channel Operation */ -#define CALL_DIR_FORCE_OUTG_NL 0x10 /* for RESET_B3 reconnect, after DISC_B3... 
*/ - -#define AWAITING_MANUF_CON 0x80 /* command spoofing flags */ -#define SPOOFING_REQUIRED 0xff -#define AWAITING_SELECT_B 0xef - -/*------------------------------------------------------------------*/ -/* B_CTRL / DSP_CTRL */ -/*------------------------------------------------------------------*/ - -#define DSP_CTRL_OLD_SET_MIXER_COEFFICIENTS 0x01 -#define DSP_CTRL_SET_BCHANNEL_PASSIVATION_BRI 0x02 -#define DSP_CTRL_SET_DTMF_PARAMETERS 0x03 - -#define MANUFACTURER_FEATURE_SLAVE_CODEC 0x00000001L -#define MANUFACTURER_FEATURE_FAX_MORE_DOCUMENTS 0x00000002L -#define MANUFACTURER_FEATURE_HARDDTMF 0x00000004L -#define MANUFACTURER_FEATURE_SOFTDTMF_SEND 0x00000008L -#define MANUFACTURER_FEATURE_DTMF_PARAMETERS 0x00000010L -#define MANUFACTURER_FEATURE_SOFTDTMF_RECEIVE 0x00000020L -#define MANUFACTURER_FEATURE_FAX_SUB_SEP_PWD 0x00000040L -#define MANUFACTURER_FEATURE_V18 0x00000080L -#define MANUFACTURER_FEATURE_MIXER_CH_CH 0x00000100L -#define MANUFACTURER_FEATURE_MIXER_CH_PC 0x00000200L -#define MANUFACTURER_FEATURE_MIXER_PC_CH 0x00000400L -#define MANUFACTURER_FEATURE_MIXER_PC_PC 0x00000800L -#define MANUFACTURER_FEATURE_ECHO_CANCELLER 0x00001000L -#define MANUFACTURER_FEATURE_RTP 0x00002000L -#define MANUFACTURER_FEATURE_T38 0x00004000L -#define MANUFACTURER_FEATURE_TRANSP_DELIVERY_CONF 0x00008000L -#define MANUFACTURER_FEATURE_XONOFF_FLOW_CONTROL 0x00010000L -#define MANUFACTURER_FEATURE_OOB_CHANNEL 0x00020000L -#define MANUFACTURER_FEATURE_IN_BAND_CHANNEL 0x00040000L -#define MANUFACTURER_FEATURE_IN_BAND_FEATURE 0x00080000L -#define MANUFACTURER_FEATURE_PIAFS 0x00100000L -#define MANUFACTURER_FEATURE_DTMF_TONE 0x00200000L -#define MANUFACTURER_FEATURE_FAX_PAPER_FORMATS 0x00400000L -#define MANUFACTURER_FEATURE_OK_FC_LABEL 0x00800000L -#define MANUFACTURER_FEATURE_VOWN 0x01000000L -#define MANUFACTURER_FEATURE_XCONNECT 0x02000000L -#define MANUFACTURER_FEATURE_DMACONNECT 0x04000000L -#define MANUFACTURER_FEATURE_AUDIO_TAP 0x08000000L -#define MANUFACTURER_FEATURE_FAX_NONSTANDARD 0x10000000L - -/*------------------------------------------------------------------*/ -/* DTMF interface to IDI */ -/*------------------------------------------------------------------*/ - - -#define DTMF_DIGIT_TONE_LOW_GROUP_697_HZ 0x00 -#define DTMF_DIGIT_TONE_LOW_GROUP_770_HZ 0x01 -#define DTMF_DIGIT_TONE_LOW_GROUP_852_HZ 0x02 -#define DTMF_DIGIT_TONE_LOW_GROUP_941_HZ 0x03 -#define DTMF_DIGIT_TONE_LOW_GROUP_MASK 0x03 -#define DTMF_DIGIT_TONE_HIGH_GROUP_1209_HZ 0x00 -#define DTMF_DIGIT_TONE_HIGH_GROUP_1336_HZ 0x04 -#define DTMF_DIGIT_TONE_HIGH_GROUP_1477_HZ 0x08 -#define DTMF_DIGIT_TONE_HIGH_GROUP_1633_HZ 0x0c -#define DTMF_DIGIT_TONE_HIGH_GROUP_MASK 0x0c -#define DTMF_DIGIT_TONE_CODE_0 0x07 -#define DTMF_DIGIT_TONE_CODE_1 0x00 -#define DTMF_DIGIT_TONE_CODE_2 0x04 -#define DTMF_DIGIT_TONE_CODE_3 0x08 -#define DTMF_DIGIT_TONE_CODE_4 0x01 -#define DTMF_DIGIT_TONE_CODE_5 0x05 -#define DTMF_DIGIT_TONE_CODE_6 0x09 -#define DTMF_DIGIT_TONE_CODE_7 0x02 -#define DTMF_DIGIT_TONE_CODE_8 0x06 -#define DTMF_DIGIT_TONE_CODE_9 0x0a -#define DTMF_DIGIT_TONE_CODE_STAR 0x03 -#define DTMF_DIGIT_TONE_CODE_HASHMARK 0x0b -#define DTMF_DIGIT_TONE_CODE_A 0x0c -#define DTMF_DIGIT_TONE_CODE_B 0x0d -#define DTMF_DIGIT_TONE_CODE_C 0x0e -#define DTMF_DIGIT_TONE_CODE_D 0x0f - -#define DTMF_UDATA_REQUEST_SEND_DIGITS 16 -#define DTMF_UDATA_REQUEST_ENABLE_RECEIVER 17 -#define DTMF_UDATA_REQUEST_DISABLE_RECEIVER 18 -#define DTMF_UDATA_INDICATION_DIGITS_SENT 16 -#define DTMF_UDATA_INDICATION_DIGITS_RECEIVED 17 -#define 
DTMF_UDATA_INDICATION_MODEM_CALLING_TONE 18 -#define DTMF_UDATA_INDICATION_FAX_CALLING_TONE 19 -#define DTMF_UDATA_INDICATION_ANSWER_TONE 20 - -#define UDATA_REQUEST_MIXER_TAP_DATA 27 -#define UDATA_INDICATION_MIXER_TAP_DATA 27 - -#define DTMF_LISTEN_ACTIVE_FLAG 0x01 -#define DTMF_SEND_DIGIT_FLAG 0x01 - - -/*------------------------------------------------------------------*/ -/* Mixer interface to IDI */ -/*------------------------------------------------------------------*/ - - -#define LI2_FLAG_PCCONNECT_A_B 0x40000000 -#define LI2_FLAG_PCCONNECT_B_A 0x80000000 - -#define MIXER_BCHANNELS_BRI 2 -#define MIXER_IC_CHANNELS_BRI MIXER_BCHANNELS_BRI -#define MIXER_IC_CHANNEL_BASE MIXER_BCHANNELS_BRI -#define MIXER_CHANNELS_BRI (MIXER_BCHANNELS_BRI + MIXER_IC_CHANNELS_BRI) -#define MIXER_CHANNELS_PRI 32 - -typedef struct li_config_s LI_CONFIG; - -struct xconnect_card_address_s { - dword low; - dword high; -}; - -struct xconnect_transfer_address_s { - struct xconnect_card_address_s card_address; - dword offset; -}; - -struct li_config_s { - DIVA_CAPI_ADAPTER *adapter; - PLCI *plci; - struct xconnect_transfer_address_s send_b; - struct xconnect_transfer_address_s send_pc; - byte *flag_table; /* dword aligned and sized */ - byte *coef_table; /* dword aligned and sized */ - byte channel; - byte curchnl; - byte chflags; -}; - -extern LI_CONFIG *li_config_table; -extern word li_total_channels; - -#define LI_CHANNEL_INVOLVED 0x01 -#define LI_CHANNEL_ACTIVE 0x02 -#define LI_CHANNEL_TX_DATA 0x04 -#define LI_CHANNEL_RX_DATA 0x08 -#define LI_CHANNEL_CONFERENCE 0x10 -#define LI_CHANNEL_ADDRESSES_SET 0x80 - -#define LI_CHFLAG_MONITOR 0x01 -#define LI_CHFLAG_MIX 0x02 -#define LI_CHFLAG_LOOP 0x04 - -#define LI_FLAG_INTERCONNECT 0x01 -#define LI_FLAG_MONITOR 0x02 -#define LI_FLAG_MIX 0x04 -#define LI_FLAG_PCCONNECT 0x08 -#define LI_FLAG_CONFERENCE 0x10 -#define LI_FLAG_ANNOUNCEMENT 0x20 - -#define LI_COEF_CH_CH 0x01 -#define LI_COEF_CH_PC 0x02 -#define LI_COEF_PC_CH 0x04 -#define LI_COEF_PC_PC 0x08 -#define LI_COEF_CH_CH_SET 0x10 -#define LI_COEF_CH_PC_SET 0x20 -#define LI_COEF_PC_CH_SET 0x40 -#define LI_COEF_PC_PC_SET 0x80 - -#define LI_REQ_SILENT_UPDATE 0xffff - -#define LI_PLCI_B_LAST_FLAG ((dword) 0x80000000L) -#define LI_PLCI_B_DISC_FLAG ((dword) 0x40000000L) -#define LI_PLCI_B_SKIP_FLAG ((dword) 0x20000000L) -#define LI_PLCI_B_FLAG_MASK ((dword) 0xe0000000L) - -#define UDATA_REQUEST_SET_MIXER_COEFS_BRI 24 -#define UDATA_REQUEST_SET_MIXER_COEFS_PRI_SYNC 25 -#define UDATA_REQUEST_SET_MIXER_COEFS_PRI_ASYN 26 -#define UDATA_INDICATION_MIXER_COEFS_SET 24 - -#define MIXER_FEATURE_ENABLE_TX_DATA 0x0001 -#define MIXER_FEATURE_ENABLE_RX_DATA 0x0002 - -#define MIXER_COEF_LINE_CHANNEL_MASK 0x1f -#define MIXER_COEF_LINE_FROM_PC_FLAG 0x20 -#define MIXER_COEF_LINE_TO_PC_FLAG 0x40 -#define MIXER_COEF_LINE_ROW_FLAG 0x80 - -#define UDATA_REQUEST_XCONNECT_FROM 28 -#define UDATA_INDICATION_XCONNECT_FROM 28 -#define UDATA_REQUEST_XCONNECT_TO 29 -#define UDATA_INDICATION_XCONNECT_TO 29 - -#define XCONNECT_CHANNEL_PORT_B 0x0000 -#define XCONNECT_CHANNEL_PORT_PC 0x8000 -#define XCONNECT_CHANNEL_PORT_MASK 0x8000 -#define XCONNECT_CHANNEL_NUMBER_MASK 0x7fff -#define XCONNECT_CHANNEL_PORT_COUNT 2 - -#define XCONNECT_SUCCESS 0x0000 -#define XCONNECT_ERROR 0x0001 - - -/*------------------------------------------------------------------*/ -/* Echo canceller interface to IDI */ -/*------------------------------------------------------------------*/ - - -#define PRIVATE_ECHO_CANCELLER 0 - -#define PRIV_SELECTOR_ECHO_CANCELLER 255 - 
-#define EC_ENABLE_OPERATION 1 -#define EC_DISABLE_OPERATION 2 -#define EC_FREEZE_COEFFICIENTS 3 -#define EC_RESUME_COEFFICIENT_UPDATE 4 -#define EC_RESET_COEFFICIENTS 5 - -#define EC_DISABLE_NON_LINEAR_PROCESSING 0x0001 -#define EC_DO_NOT_REQUIRE_REVERSALS 0x0002 -#define EC_DETECT_DISABLE_TONE 0x0004 - -#define EC_SUCCESS 0 -#define EC_UNSUPPORTED_OPERATION 1 - -#define EC_BYPASS_DUE_TO_CONTINUOUS_2100HZ 1 -#define EC_BYPASS_DUE_TO_REVERSED_2100HZ 2 -#define EC_BYPASS_RELEASED 3 - -#define DSP_CTRL_SET_LEC_PARAMETERS 0x05 - -#define LEC_ENABLE_ECHO_CANCELLER 0x0001 -#define LEC_ENABLE_2100HZ_DETECTOR 0x0002 -#define LEC_REQUIRE_2100HZ_REVERSALS 0x0004 -#define LEC_MANUAL_DISABLE 0x0008 -#define LEC_ENABLE_NONLINEAR_PROCESSING 0x0010 -#define LEC_FREEZE_COEFFICIENTS 0x0020 -#define LEC_RESET_COEFFICIENTS 0x8000 - -#define LEC_MAX_SUPPORTED_TAIL_LENGTH 32 - -#define LEC_UDATA_INDICATION_DISABLE_DETECT 9 - -#define LEC_DISABLE_TYPE_CONTIGNUOUS_2100HZ 0x00 -#define LEC_DISABLE_TYPE_REVERSED_2100HZ 0x01 -#define LEC_DISABLE_RELEASED 0x02 - - -/*------------------------------------------------------------------*/ -/* RTP interface to IDI */ -/*------------------------------------------------------------------*/ - - -#define B1_RTP 31 -#define B2_RTP 31 -#define B3_RTP 31 - -#define PRIVATE_RTP 1 - -#define RTP_PRIM_PAYLOAD_PCMU_8000 0 -#define RTP_PRIM_PAYLOAD_1016_8000 1 -#define RTP_PRIM_PAYLOAD_G726_32_8000 2 -#define RTP_PRIM_PAYLOAD_GSM_8000 3 -#define RTP_PRIM_PAYLOAD_G723_8000 4 -#define RTP_PRIM_PAYLOAD_DVI4_8000 5 -#define RTP_PRIM_PAYLOAD_DVI4_16000 6 -#define RTP_PRIM_PAYLOAD_LPC_8000 7 -#define RTP_PRIM_PAYLOAD_PCMA_8000 8 -#define RTP_PRIM_PAYLOAD_G722_16000 9 -#define RTP_PRIM_PAYLOAD_QCELP_8000 12 -#define RTP_PRIM_PAYLOAD_G728_8000 14 -#define RTP_PRIM_PAYLOAD_G729_8000 18 -#define RTP_PRIM_PAYLOAD_GSM_HR_8000 30 -#define RTP_PRIM_PAYLOAD_GSM_EFR_8000 31 - -#define RTP_ADD_PAYLOAD_BASE 32 -#define RTP_ADD_PAYLOAD_RED 32 -#define RTP_ADD_PAYLOAD_CN_8000 33 -#define RTP_ADD_PAYLOAD_DTMF 34 - -#define RTP_SUCCESS 0 -#define RTP_ERR_SSRC_OR_PAYLOAD_CHANGE 1 - -#define UDATA_REQUEST_RTP_RECONFIGURE 64 -#define UDATA_INDICATION_RTP_CHANGE 65 -#define BUDATA_REQUEST_QUERY_RTCP_REPORT 1 -#define BUDATA_INDICATION_RTCP_REPORT 1 - -#define RTP_CONNECT_OPTION_DISC_ON_SSRC_CHANGE 0x00000001L -#define RTP_CONNECT_OPTION_DISC_ON_PT_CHANGE 0x00000002L -#define RTP_CONNECT_OPTION_DISC_ON_UNKNOWN_PT 0x00000004L -#define RTP_CONNECT_OPTION_NO_SILENCE_TRANSMIT 0x00010000L - -#define RTP_PAYLOAD_OPTION_VOICE_ACTIVITY_DETECT 0x0001 -#define RTP_PAYLOAD_OPTION_DISABLE_POST_FILTER 0x0002 -#define RTP_PAYLOAD_OPTION_G723_LOW_CODING_RATE 0x0100 - -#define RTP_PACKET_FILTER_IGNORE_UNKNOWN_SSRC 0x00000001L - -#define RTP_CHANGE_FLAG_SSRC_CHANGE 0x00000001L -#define RTP_CHANGE_FLAG_PAYLOAD_TYPE_CHANGE 0x00000002L -#define RTP_CHANGE_FLAG_UNKNOWN_PAYLOAD_TYPE 0x00000004L - - -/*------------------------------------------------------------------*/ -/* T.38 interface to IDI */ -/*------------------------------------------------------------------*/ - - -#define B1_T38 30 -#define B2_T38 30 -#define B3_T38 30 - -#define PRIVATE_T38 2 - - -/*------------------------------------------------------------------*/ -/* PIAFS interface to IDI */ -/*------------------------------------------------------------------*/ - - -#define B1_PIAFS 29 -#define B2_PIAFS 29 - -#define PRIVATE_PIAFS 29 - -/* - B2 configuration for PIAFS: - +---------------------+------+-----------------------------------------+ - | PIAFS Protocol | 
byte | Bit 1 - Protocol Speed | - | Speed configuration | | 0 - 32K | - | | | 1 - 64K (default) | - | | | Bit 2 - Variable Protocol Speed | - | | | 0 - Speed is fix | - | | | 1 - Speed is variable (default) | - +---------------------+------+-----------------------------------------+ - | Direction | word | Enable compression/decompression for | - | | | 0: All direction | - | | | 1: disable outgoing data | - | | | 2: disable incoming data | - | | | 3: disable both direction (default) | - +---------------------+------+-----------------------------------------+ - | Number of code | word | Parameter P1 of V.42bis in accordance | - | words | | with V.42bis | - +---------------------+------+-----------------------------------------+ - | Maximum String | word | Parameter P2 of V.42bis in accordance | - | Length | | with V.42bis | - +---------------------+------+-----------------------------------------+ - | control (UDATA) | byte | enable PIAFS control communication | - | abilities | | | - +---------------------+------+-----------------------------------------+ -*/ -#define PIAFS_UDATA_ABILITIES 0x80 - -/*------------------------------------------------------------------*/ -/* FAX SUB/SEP/PWD extension */ -/*------------------------------------------------------------------*/ - - -#define PRIVATE_FAX_SUB_SEP_PWD 3 - - - -/*------------------------------------------------------------------*/ -/* V.18 extension */ -/*------------------------------------------------------------------*/ - - -#define PRIVATE_V18 4 - - - -/*------------------------------------------------------------------*/ -/* DTMF TONE extension */ -/*------------------------------------------------------------------*/ - - -#define DTMF_GET_SUPPORTED_DETECT_CODES 0xf8 -#define DTMF_GET_SUPPORTED_SEND_CODES 0xf9 -#define DTMF_LISTEN_TONE_START 0xfa -#define DTMF_LISTEN_TONE_STOP 0xfb -#define DTMF_SEND_TONE 0xfc -#define DTMF_LISTEN_MF_START 0xfd -#define DTMF_LISTEN_MF_STOP 0xfe -#define DTMF_SEND_MF 0xff - -#define DTMF_MF_DIGIT_TONE_CODE_1 0x10 -#define DTMF_MF_DIGIT_TONE_CODE_2 0x11 -#define DTMF_MF_DIGIT_TONE_CODE_3 0x12 -#define DTMF_MF_DIGIT_TONE_CODE_4 0x13 -#define DTMF_MF_DIGIT_TONE_CODE_5 0x14 -#define DTMF_MF_DIGIT_TONE_CODE_6 0x15 -#define DTMF_MF_DIGIT_TONE_CODE_7 0x16 -#define DTMF_MF_DIGIT_TONE_CODE_8 0x17 -#define DTMF_MF_DIGIT_TONE_CODE_9 0x18 -#define DTMF_MF_DIGIT_TONE_CODE_0 0x19 -#define DTMF_MF_DIGIT_TONE_CODE_K1 0x1a -#define DTMF_MF_DIGIT_TONE_CODE_K2 0x1b -#define DTMF_MF_DIGIT_TONE_CODE_KP 0x1c -#define DTMF_MF_DIGIT_TONE_CODE_S1 0x1d -#define DTMF_MF_DIGIT_TONE_CODE_ST 0x1e - -#define DTMF_DIGIT_CODE_COUNT 16 -#define DTMF_MF_DIGIT_CODE_BASE DSP_DTMF_DIGIT_CODE_COUNT -#define DTMF_MF_DIGIT_CODE_COUNT 15 -#define DTMF_TOTAL_DIGIT_CODE_COUNT (DSP_MF_DIGIT_CODE_BASE + DSP_MF_DIGIT_CODE_COUNT) - -#define DTMF_TONE_DIGIT_BASE 0x80 - -#define DTMF_SIGNAL_NO_TONE (DTMF_TONE_DIGIT_BASE + 0) -#define DTMF_SIGNAL_UNIDENTIFIED_TONE (DTMF_TONE_DIGIT_BASE + 1) - -#define DTMF_SIGNAL_DIAL_TONE (DTMF_TONE_DIGIT_BASE + 2) -#define DTMF_SIGNAL_PABX_INTERNAL_DIAL_TONE (DTMF_TONE_DIGIT_BASE + 3) -#define DTMF_SIGNAL_SPECIAL_DIAL_TONE (DTMF_TONE_DIGIT_BASE + 4) /* stutter dial tone */ -#define DTMF_SIGNAL_SECOND_DIAL_TONE (DTMF_TONE_DIGIT_BASE + 5) -#define DTMF_SIGNAL_RINGING_TONE (DTMF_TONE_DIGIT_BASE + 6) -#define DTMF_SIGNAL_SPECIAL_RINGING_TONE (DTMF_TONE_DIGIT_BASE + 7) -#define DTMF_SIGNAL_BUSY_TONE (DTMF_TONE_DIGIT_BASE + 8) -#define DTMF_SIGNAL_CONGESTION_TONE (DTMF_TONE_DIGIT_BASE + 9) /* reorder tone */ -#define 
DTMF_SIGNAL_SPECIAL_INFORMATION_TONE (DTMF_TONE_DIGIT_BASE + 10) -#define DTMF_SIGNAL_COMFORT_TONE (DTMF_TONE_DIGIT_BASE + 11) -#define DTMF_SIGNAL_HOLD_TONE (DTMF_TONE_DIGIT_BASE + 12) -#define DTMF_SIGNAL_RECORD_TONE (DTMF_TONE_DIGIT_BASE + 13) -#define DTMF_SIGNAL_CALLER_WAITING_TONE (DTMF_TONE_DIGIT_BASE + 14) -#define DTMF_SIGNAL_CALL_WAITING_TONE (DTMF_TONE_DIGIT_BASE + 15) -#define DTMF_SIGNAL_PAY_TONE (DTMF_TONE_DIGIT_BASE + 16) -#define DTMF_SIGNAL_POSITIVE_INDICATION_TONE (DTMF_TONE_DIGIT_BASE + 17) -#define DTMF_SIGNAL_NEGATIVE_INDICATION_TONE (DTMF_TONE_DIGIT_BASE + 18) -#define DTMF_SIGNAL_WARNING_TONE (DTMF_TONE_DIGIT_BASE + 19) -#define DTMF_SIGNAL_INTRUSION_TONE (DTMF_TONE_DIGIT_BASE + 20) -#define DTMF_SIGNAL_CALLING_CARD_SERVICE_TONE (DTMF_TONE_DIGIT_BASE + 21) -#define DTMF_SIGNAL_PAYPHONE_RECOGNITION_TONE (DTMF_TONE_DIGIT_BASE + 22) -#define DTMF_SIGNAL_CPE_ALERTING_SIGNAL (DTMF_TONE_DIGIT_BASE + 23) -#define DTMF_SIGNAL_OFF_HOOK_WARNING_TONE (DTMF_TONE_DIGIT_BASE + 24) - -#define DTMF_SIGNAL_INTERCEPT_TONE (DTMF_TONE_DIGIT_BASE + 63) - -#define DTMF_SIGNAL_MODEM_CALLING_TONE (DTMF_TONE_DIGIT_BASE + 64) -#define DTMF_SIGNAL_FAX_CALLING_TONE (DTMF_TONE_DIGIT_BASE + 65) -#define DTMF_SIGNAL_ANSWER_TONE (DTMF_TONE_DIGIT_BASE + 66) -#define DTMF_SIGNAL_REVERSED_ANSWER_TONE (DTMF_TONE_DIGIT_BASE + 67) -#define DTMF_SIGNAL_ANSAM_TONE (DTMF_TONE_DIGIT_BASE + 68) -#define DTMF_SIGNAL_REVERSED_ANSAM_TONE (DTMF_TONE_DIGIT_BASE + 69) -#define DTMF_SIGNAL_BELL103_ANSWER_TONE (DTMF_TONE_DIGIT_BASE + 70) -#define DTMF_SIGNAL_FAX_FLAGS (DTMF_TONE_DIGIT_BASE + 71) -#define DTMF_SIGNAL_G2_FAX_GROUP_ID (DTMF_TONE_DIGIT_BASE + 72) -#define DTMF_SIGNAL_HUMAN_SPEECH (DTMF_TONE_DIGIT_BASE + 73) -#define DTMF_SIGNAL_ANSWERING_MACHINE_390 (DTMF_TONE_DIGIT_BASE + 74) - -#define DTMF_MF_LISTEN_ACTIVE_FLAG 0x02 -#define DTMF_SEND_MF_FLAG 0x02 -#define DTMF_TONE_LISTEN_ACTIVE_FLAG 0x04 -#define DTMF_SEND_TONE_FLAG 0x04 - -#define PRIVATE_DTMF_TONE 5 - - -/*------------------------------------------------------------------*/ -/* FAX paper format extension */ -/*------------------------------------------------------------------*/ - - -#define PRIVATE_FAX_PAPER_FORMATS 6 - - - -/*------------------------------------------------------------------*/ -/* V.OWN extension */ -/*------------------------------------------------------------------*/ - - -#define PRIVATE_VOWN 7 - - - -/*------------------------------------------------------------------*/ -/* FAX non-standard facilities extension */ -/*------------------------------------------------------------------*/ - - -#define PRIVATE_FAX_NONSTANDARD 8 - - - -/*------------------------------------------------------------------*/ -/* Advanced voice */ -/*------------------------------------------------------------------*/ - -#define ADV_VOICE_WRITE_ACTIVATION 0 -#define ADV_VOICE_WRITE_DEACTIVATION 1 -#define ADV_VOICE_WRITE_UPDATE 2 - -#define ADV_VOICE_OLD_COEF_COUNT 6 -#define ADV_VOICE_NEW_COEF_BASE (ADV_VOICE_OLD_COEF_COUNT * sizeof(word)) - -/*------------------------------------------------------------------*/ -/* B1 resource switching */ -/*------------------------------------------------------------------*/ - -#define B1_FACILITY_LOCAL 0x01 -#define B1_FACILITY_MIXER 0x02 -#define B1_FACILITY_DTMFX 0x04 -#define B1_FACILITY_DTMFR 0x08 -#define B1_FACILITY_VOICE 0x10 -#define B1_FACILITY_EC 0x20 - -#define ADJUST_B_MODE_SAVE 0x0001 -#define ADJUST_B_MODE_REMOVE_L23 0x0002 -#define ADJUST_B_MODE_SWITCH_L1 0x0004 -#define ADJUST_B_MODE_NO_RESOURCE 
0x0008 -#define ADJUST_B_MODE_ASSIGN_L23 0x0010 -#define ADJUST_B_MODE_USER_CONNECT 0x0020 -#define ADJUST_B_MODE_CONNECT 0x0040 -#define ADJUST_B_MODE_RESTORE 0x0080 - -#define ADJUST_B_START 0 -#define ADJUST_B_SAVE_MIXER_1 1 -#define ADJUST_B_SAVE_DTMF_1 2 -#define ADJUST_B_REMOVE_L23_1 3 -#define ADJUST_B_REMOVE_L23_2 4 -#define ADJUST_B_SAVE_EC_1 5 -#define ADJUST_B_SAVE_DTMF_PARAMETER_1 6 -#define ADJUST_B_SAVE_VOICE_1 7 -#define ADJUST_B_SWITCH_L1_1 8 -#define ADJUST_B_SWITCH_L1_2 9 -#define ADJUST_B_RESTORE_VOICE_1 10 -#define ADJUST_B_RESTORE_VOICE_2 11 -#define ADJUST_B_RESTORE_DTMF_PARAMETER_1 12 -#define ADJUST_B_RESTORE_DTMF_PARAMETER_2 13 -#define ADJUST_B_RESTORE_EC_1 14 -#define ADJUST_B_RESTORE_EC_2 15 -#define ADJUST_B_ASSIGN_L23_1 16 -#define ADJUST_B_ASSIGN_L23_2 17 -#define ADJUST_B_CONNECT_1 18 -#define ADJUST_B_CONNECT_2 19 -#define ADJUST_B_CONNECT_3 20 -#define ADJUST_B_CONNECT_4 21 -#define ADJUST_B_RESTORE_DTMF_1 22 -#define ADJUST_B_RESTORE_DTMF_2 23 -#define ADJUST_B_RESTORE_MIXER_1 24 -#define ADJUST_B_RESTORE_MIXER_2 25 -#define ADJUST_B_RESTORE_MIXER_3 26 -#define ADJUST_B_RESTORE_MIXER_4 27 -#define ADJUST_B_RESTORE_MIXER_5 28 -#define ADJUST_B_RESTORE_MIXER_6 29 -#define ADJUST_B_RESTORE_MIXER_7 30 -#define ADJUST_B_END 31 - -/*------------------------------------------------------------------*/ -/* XON Protocol def's */ -/*------------------------------------------------------------------*/ -#define N_CH_XOFF 0x01 -#define N_XON_SENT 0x02 -#define N_XON_REQ 0x04 -#define N_XON_CONNECT_IND 0x08 -#define N_RX_FLOW_CONTROL_MASK 0x3f -#define N_OK_FC_PENDING 0x80 -#define N_TX_FLOW_CONTROL_MASK 0xc0 - -/*------------------------------------------------------------------*/ -/* NCPI state */ -/*------------------------------------------------------------------*/ -#define NCPI_VALID_CONNECT_B3_IND 0x01 -#define NCPI_VALID_CONNECT_B3_ACT 0x02 -#define NCPI_VALID_DISC_B3_IND 0x04 -#define NCPI_CONNECT_B3_ACT_SENT 0x08 -#define NCPI_NEGOTIATE_B3_SENT 0x10 -#define NCPI_MDM_CTS_ON_RECEIVED 0x40 -#define NCPI_MDM_DCD_ON_RECEIVED 0x80 - -/*------------------------------------------------------------------*/ diff --git a/drivers/isdn/hardware/eicon/divamnt.c b/drivers/isdn/hardware/eicon/divamnt.c deleted file mode 100644 index 5a95587b3117..000000000000 --- a/drivers/isdn/hardware/eicon/divamnt.c +++ /dev/null @@ -1,239 +0,0 @@ -/* $Id: divamnt.c,v 1.32.6.10 2005/02/11 19:40:25 armin Exp $ - * - * Driver for Eicon DIVA Server ISDN cards. - * Maint module - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
- */ - -#include -#include -#include -#include -#include -#include - -#include "platform.h" -#include "di_defs.h" -#include "divasync.h" -#include "debug_if.h" - -static DEFINE_MUTEX(maint_mutex); -static char *main_revision = "$Revision: 1.32.6.10 $"; - -static int major; - -MODULE_DESCRIPTION("Maint driver for Eicon DIVA Server cards"); -MODULE_AUTHOR("Cytronics & Melware, Eicon Networks"); -MODULE_SUPPORTED_DEVICE("DIVA card driver"); -MODULE_LICENSE("GPL"); - -static int buffer_length = 128; -module_param(buffer_length, int, 0); -static unsigned long diva_dbg_mem = 0; -module_param(diva_dbg_mem, ulong, 0); - -static char *DRIVERNAME = - "Eicon DIVA - MAINT module (http://www.melware.net)"; -static char *DRIVERLNAME = "diva_mnt"; -static char *DEVNAME = "DivasMAINT"; -char *DRIVERRELEASE_MNT = "2.0"; - -static wait_queue_head_t msgwaitq; -static unsigned long opened; - -extern int mntfunc_init(int *, void **, unsigned long); -extern void mntfunc_finit(void); -extern int maint_read_write(void __user *buf, int count); - -/* - * helper functions - */ -static char *getrev(const char *revision) -{ - char *rev; - char *p; - - if ((p = strchr(revision, ':'))) { - rev = p + 2; - p = strchr(rev, '$'); - *--p = 0; - } else - rev = "1.0"; - - return rev; -} - -/* - * kernel/user space copy functions - */ -int diva_os_copy_to_user(void *os_handle, void __user *dst, const void *src, - int length) -{ - return (copy_to_user(dst, src, length)); -} -int diva_os_copy_from_user(void *os_handle, void *dst, const void __user *src, - int length) -{ - return (copy_from_user(dst, src, length)); -} - -/* - * get time - */ -void diva_os_get_time(dword *sec, dword *usec) -{ - struct timespec64 time; - - ktime_get_ts64(&time); - - *sec = (dword) time.tv_sec; - *usec = (dword) (time.tv_nsec / NSEC_PER_USEC); -} - -/* - * device node operations - */ -static __poll_t maint_poll(struct file *file, poll_table *wait) -{ - __poll_t mask = 0; - - poll_wait(file, &msgwaitq, wait); - mask = EPOLLOUT | EPOLLWRNORM; - if (file->private_data || diva_dbg_q_length()) { - mask |= EPOLLIN | EPOLLRDNORM; - } - return (mask); -} - -static int maint_open(struct inode *ino, struct file *filep) -{ - int ret; - - mutex_lock(&maint_mutex); - /* only one open is allowed, so we test - it atomically */ - if (test_and_set_bit(0, &opened)) - ret = -EBUSY; - else { - filep->private_data = NULL; - ret = nonseekable_open(ino, filep); - } - mutex_unlock(&maint_mutex); - return ret; -} - -static int maint_close(struct inode *ino, struct file *filep) -{ - if (filep->private_data) { - diva_os_free(0, filep->private_data); - filep->private_data = NULL; - } - - /* clear 'used' flag */ - clear_bit(0, &opened); - - return (0); -} - -static ssize_t divas_maint_write(struct file *file, const char __user *buf, - size_t count, loff_t *ppos) -{ - return (maint_read_write((char __user *) buf, (int) count)); -} - -static ssize_t divas_maint_read(struct file *file, char __user *buf, - size_t count, loff_t *ppos) -{ - return (maint_read_write(buf, (int) count)); -} - -static const struct file_operations divas_maint_fops = { - .owner = THIS_MODULE, - .llseek = no_llseek, - .read = divas_maint_read, - .write = divas_maint_write, - .poll = maint_poll, - .open = maint_open, - .release = maint_close -}; - -static void divas_maint_unregister_chrdev(void) -{ - unregister_chrdev(major, DEVNAME); -} - -static int __init divas_maint_register_chrdev(void) -{ - if ((major = register_chrdev(0, DEVNAME, &divas_maint_fops)) < 0) - { - printk(KERN_ERR "%s: failed to create /dev 
entry.\n", - DRIVERLNAME); - return (0); - } - - return (1); -} - -/* - * wake up reader - */ -void diva_maint_wakeup_read(void) -{ - wake_up_interruptible(&msgwaitq); -} - -/* - * Driver Load - */ -static int __init maint_init(void) -{ - char tmprev[50]; - int ret = 0; - void *buffer = NULL; - - init_waitqueue_head(&msgwaitq); - - printk(KERN_INFO "%s\n", DRIVERNAME); - printk(KERN_INFO "%s: Rel:%s Rev:", DRIVERLNAME, DRIVERRELEASE_MNT); - strcpy(tmprev, main_revision); - printk("%s Build: %s \n", getrev(tmprev), DIVA_BUILD); - - if (!divas_maint_register_chrdev()) { - ret = -EIO; - goto out; - } - - if (!(mntfunc_init(&buffer_length, &buffer, diva_dbg_mem))) { - printk(KERN_ERR "%s: failed to connect to DIDD.\n", - DRIVERLNAME); - divas_maint_unregister_chrdev(); - ret = -EIO; - goto out; - } - - printk(KERN_INFO "%s: trace buffer = %p - %d kBytes, %s (Major: %d)\n", - DRIVERLNAME, buffer, (buffer_length / 1024), - (diva_dbg_mem == 0) ? "internal" : "external", major); - -out: - return (ret); -} - -/* -** Driver Unload -*/ -static void __exit maint_exit(void) -{ - divas_maint_unregister_chrdev(); - mntfunc_finit(); - - printk(KERN_INFO "%s: module unloaded.\n", DRIVERLNAME); -} - -module_init(maint_init); -module_exit(maint_exit); diff --git a/drivers/isdn/hardware/eicon/divasfunc.c b/drivers/isdn/hardware/eicon/divasfunc.c deleted file mode 100644 index 4be5f8814777..000000000000 --- a/drivers/isdn/hardware/eicon/divasfunc.c +++ /dev/null @@ -1,237 +0,0 @@ -/* $Id: divasfunc.c,v 1.23.4.2 2004/08/28 20:03:53 armin Exp $ - * - * Low level driver for Eicon DIVA Server ISDN cards. - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. - */ - -#include "platform.h" -#include "di_defs.h" -#include "pc.h" -#include "di.h" -#include "io.h" -#include "divasync.h" -#include "diva.h" -#include "xdi_vers.h" - -#define DBG_MINIMUM (DL_LOG + DL_FTL + DL_ERR) -#define DBG_DEFAULT (DBG_MINIMUM + DL_XLOG + DL_REG) - -static int debugmask; - -extern void DIVA_DIDD_Read(void *, int); - -extern PISDN_ADAPTER IoAdapters[MAX_ADAPTER]; - -extern char *DRIVERRELEASE_DIVAS; - -static dword notify_handle; -static DESCRIPTOR DAdapter; -static DESCRIPTOR MAdapter; - -/* -------------------------------------------------------------------------- - MAINT driver connector section - -------------------------------------------------------------------------- */ -static void no_printf(unsigned char *x, ...) 
-{ - /* dummy debug function */ -} - -#include "debuglib.c" - -/* - * get the adapters serial number - */ -void diva_get_vserial_number(PISDN_ADAPTER IoAdapter, char *buf) -{ - int contr = 0; - - if ((contr = ((IoAdapter->serialNo & 0xff000000) >> 24))) { - sprintf(buf, "%d-%d", - IoAdapter->serialNo & 0x00ffffff, contr + 1); - } else { - sprintf(buf, "%d", IoAdapter->serialNo); - } -} - -/* - * register a new adapter - */ -void diva_xdi_didd_register_adapter(int card) -{ - DESCRIPTOR d; - IDI_SYNC_REQ req; - - if (card && ((card - 1) < MAX_ADAPTER) && - IoAdapters[card - 1] && Requests[card - 1]) { - d.type = IoAdapters[card - 1]->Properties.DescType; - d.request = Requests[card - 1]; - d.channels = IoAdapters[card - 1]->Properties.Channels; - d.features = IoAdapters[card - 1]->Properties.Features; - DBG_TRC(("DIDD register A(%d) channels=%d", card, - d.channels)) - /* workaround for different Name in structure */ - strlcpy(IoAdapters[card - 1]->Name, - IoAdapters[card - 1]->Properties.Name, - sizeof(IoAdapters[card - 1]->Name)); - req.didd_remove_adapter.e.Req = 0; - req.didd_add_adapter.e.Rc = IDI_SYNC_REQ_DIDD_ADD_ADAPTER; - req.didd_add_adapter.info.descriptor = (void *) &d; - DAdapter.request((ENTITY *)&req); - if (req.didd_add_adapter.e.Rc != 0xff) { - DBG_ERR(("DIDD register A(%d) failed !", card)) - } - IoAdapters[card - 1]->os_trap_nfy_Fnc = NULL; - } -} - -/* - * remove an adapter - */ -void diva_xdi_didd_remove_adapter(int card) -{ - IDI_SYNC_REQ req; - ADAPTER *a = &IoAdapters[card - 1]->a; - - IoAdapters[card - 1]->os_trap_nfy_Fnc = NULL; - DBG_TRC(("DIDD de-register A(%d)", card)) - req.didd_remove_adapter.e.Req = 0; - req.didd_remove_adapter.e.Rc = IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER; - req.didd_remove_adapter.info.p_request = - (IDI_CALL) Requests[card - 1]; - DAdapter.request((ENTITY *)&req); - memset(&(a->IdTable), 0x00, 256); -} - -/* - * start debug - */ -static void start_dbg(void) -{ - DbgRegister("DIVAS", DRIVERRELEASE_DIVAS, (debugmask) ? debugmask : DBG_DEFAULT); - DBG_LOG(("DIVA ISDNXDI BUILD (%s[%s])", - DIVA_BUILD, diva_xdi_common_code_build)) - } - -/* - * stop debug - */ -static void stop_dbg(void) -{ - DbgDeregister(); - memset(&MAdapter, 0, sizeof(MAdapter)); - dprintf = no_printf; -} - -/* - * didd callback function - */ -static void *didd_callback(void *context, DESCRIPTOR *adapter, - int removal) -{ - if (adapter->type == IDI_DADAPTER) { - DBG_ERR(("Notification about IDI_DADAPTER change ! 
Oops.")); - return (NULL); - } - - if (adapter->type == IDI_DIMAINT) { - if (removal) { - stop_dbg(); - } else { - memcpy(&MAdapter, adapter, sizeof(MAdapter)); - dprintf = (DIVA_DI_PRINTF) MAdapter.request; - start_dbg(); - } - } - return (NULL); -} - -/* - * connect to didd - */ -static int __init connect_didd(void) -{ - int x = 0; - int dadapter = 0; - IDI_SYNC_REQ req; - DESCRIPTOR DIDD_Table[MAX_DESCRIPTORS]; - - DIVA_DIDD_Read(DIDD_Table, sizeof(DIDD_Table)); - - for (x = 0; x < MAX_DESCRIPTORS; x++) { - if (DIDD_Table[x].type == IDI_DADAPTER) { /* DADAPTER found */ - dadapter = 1; - memcpy(&DAdapter, &DIDD_Table[x], sizeof(DAdapter)); - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = - IDI_SYNC_REQ_DIDD_REGISTER_ADAPTER_NOTIFY; - req.didd_notify.info.callback = (void *)didd_callback; - req.didd_notify.info.context = NULL; - DAdapter.request((ENTITY *)&req); - if (req.didd_notify.e.Rc != 0xff) { - stop_dbg(); - return (0); - } - notify_handle = req.didd_notify.info.handle; - } else if (DIDD_Table[x].type == IDI_DIMAINT) { /* MAINT found */ - memcpy(&MAdapter, &DIDD_Table[x], sizeof(DAdapter)); - dprintf = (DIVA_DI_PRINTF) MAdapter.request; - start_dbg(); - } - } - - if (!dadapter) { - stop_dbg(); - } - - return (dadapter); -} - -/* - * disconnect from didd - */ -static void disconnect_didd(void) -{ - IDI_SYNC_REQ req; - - stop_dbg(); - - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER_NOTIFY; - req.didd_notify.info.handle = notify_handle; - DAdapter.request((ENTITY *)&req); -} - -/* - * init - */ -int __init divasfunc_init(int dbgmask) -{ - char *version; - - debugmask = dbgmask; - - if (!connect_didd()) { - DBG_ERR(("divasfunc: failed to connect to DIDD.")) - return (0); - } - - version = diva_xdi_common_code_build; - - divasa_xdi_driver_entry(); - - return (1); -} - -/* - * exit - */ -void divasfunc_exit(void) -{ - divasa_xdi_driver_unload(); - disconnect_didd(); -} diff --git a/drivers/isdn/hardware/eicon/divasi.c b/drivers/isdn/hardware/eicon/divasi.c deleted file mode 100644 index e7081e0c0e35..000000000000 --- a/drivers/isdn/hardware/eicon/divasi.c +++ /dev/null @@ -1,562 +0,0 @@ -/* $Id: divasi.c,v 1.25.6.2 2005/01/31 12:22:20 armin Exp $ - * - * Driver for Eicon DIVA Server ISDN cards. - * User Mode IDI Interface - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "platform.h" -#include "di_defs.h" -#include "divasync.h" -#include "um_xdi.h" -#include "um_idi.h" - -static char *main_revision = "$Revision: 1.25.6.2 $"; - -static int major; - -MODULE_DESCRIPTION("User IDI Interface for Eicon ISDN cards"); -MODULE_AUTHOR("Cytronics & Melware, Eicon Networks"); -MODULE_SUPPORTED_DEVICE("DIVA card driver"); -MODULE_LICENSE("GPL"); - -typedef struct _diva_um_idi_os_context { - wait_queue_head_t read_wait; - wait_queue_head_t close_wait; - struct timer_list diva_timer_id; - int aborted; - int adapter_nr; -} diva_um_idi_os_context_t; - -static char *DRIVERNAME = "Eicon DIVA - User IDI (http://www.melware.net)"; -static char *DRIVERLNAME = "diva_idi"; -static char *DEVNAME = "DivasIDI"; -char *DRIVERRELEASE_IDI = "2.0"; - -extern int idifunc_init(void); -extern void idifunc_finit(void); - -/* - * helper functions - */ -static char *getrev(const char *revision) -{ - char *rev; - char *p; - if ((p = strchr(revision, ':'))) { - rev = p + 2; - p = strchr(rev, '$'); - *--p = 0; - } else - rev = "1.0"; - return rev; -} - -/* - * LOCALS - */ -static ssize_t um_idi_read(struct file *file, char __user *buf, size_t count, - loff_t *offset); -static ssize_t um_idi_write(struct file *file, const char __user *buf, - size_t count, loff_t *offset); -static __poll_t um_idi_poll(struct file *file, poll_table *wait); -static int um_idi_open(struct inode *inode, struct file *file); -static int um_idi_release(struct inode *inode, struct file *file); -static int remove_entity(void *entity); -static void diva_um_timer_function(struct timer_list *t); - -/* - * proc entry - */ -extern struct proc_dir_entry *proc_net_eicon; -static struct proc_dir_entry *um_idi_proc_entry = NULL; - -static int um_idi_proc_show(struct seq_file *m, void *v) -{ - char tmprev[32]; - - seq_printf(m, "%s\n", DRIVERNAME); - seq_printf(m, "name : %s\n", DRIVERLNAME); - seq_printf(m, "release : %s\n", DRIVERRELEASE_IDI); - strcpy(tmprev, main_revision); - seq_printf(m, "revision : %s\n", getrev(tmprev)); - seq_printf(m, "build : %s\n", DIVA_BUILD); - seq_printf(m, "major : %d\n", major); - - return 0; -} - -static int __init create_um_idi_proc(void) -{ - um_idi_proc_entry = proc_create_single(DRIVERLNAME, S_IRUGO, - proc_net_eicon, um_idi_proc_show); - if (!um_idi_proc_entry) - return (0); - return (1); -} - -static void remove_um_idi_proc(void) -{ - if (um_idi_proc_entry) { - remove_proc_entry(DRIVERLNAME, proc_net_eicon); - um_idi_proc_entry = NULL; - } -} - -static const struct file_operations divas_idi_fops = { - .owner = THIS_MODULE, - .llseek = no_llseek, - .read = um_idi_read, - .write = um_idi_write, - .poll = um_idi_poll, - .open = um_idi_open, - .release = um_idi_release -}; - -static void divas_idi_unregister_chrdev(void) -{ - unregister_chrdev(major, DEVNAME); -} - -static int __init divas_idi_register_chrdev(void) -{ - if ((major = register_chrdev(0, DEVNAME, &divas_idi_fops)) < 0) - { - printk(KERN_ERR "%s: failed to create /dev entry.\n", - DRIVERLNAME); - return (0); - } - - return (1); -} - -/* -** Driver Load -*/ -static int __init divasi_init(void) -{ - char tmprev[50]; - int ret = 0; - - printk(KERN_INFO "%s\n", DRIVERNAME); - printk(KERN_INFO "%s: Rel:%s Rev:", DRIVERLNAME, DRIVERRELEASE_IDI); - strcpy(tmprev, main_revision); - printk("%s Build: %s\n", getrev(tmprev), DIVA_BUILD); - - if (!divas_idi_register_chrdev()) { - ret = -EIO; - goto out; - } - - if (!create_um_idi_proc()) 
{ - divas_idi_unregister_chrdev(); - printk(KERN_ERR "%s: failed to create proc entry.\n", - DRIVERLNAME); - ret = -EIO; - goto out; - } - - if (!(idifunc_init())) { - remove_um_idi_proc(); - divas_idi_unregister_chrdev(); - printk(KERN_ERR "%s: failed to connect to DIDD.\n", - DRIVERLNAME); - ret = -EIO; - goto out; - } - printk(KERN_INFO "%s: started with major %d\n", DRIVERLNAME, major); - -out: - return (ret); -} - - -/* -** Driver Unload -*/ -static void __exit divasi_exit(void) -{ - idifunc_finit(); - remove_um_idi_proc(); - divas_idi_unregister_chrdev(); - - printk(KERN_INFO "%s: module unloaded.\n", DRIVERLNAME); -} - -module_init(divasi_init); -module_exit(divasi_exit); - - -/* - * FILE OPERATIONS - */ - -static int -divas_um_idi_copy_to_user(void *os_handle, void *dst, const void *src, - int length) -{ - memcpy(dst, src, length); - return (length); -} - -static ssize_t -um_idi_read(struct file *file, char __user *buf, size_t count, loff_t *offset) -{ - diva_um_idi_os_context_t *p_os; - int ret = -EINVAL; - void *data; - - if (!file->private_data) { - return (-ENODEV); - } - - if (! - (p_os = - (diva_um_idi_os_context_t *) diva_um_id_get_os_context(file-> - private_data))) - { - return (-ENODEV); - } - if (p_os->aborted) { - return (-ENODEV); - } - - if (!(data = diva_os_malloc(0, count))) { - return (-ENOMEM); - } - - ret = diva_um_idi_read(file->private_data, - file, data, count, - divas_um_idi_copy_to_user); - switch (ret) { - case 0: /* no message available */ - ret = (-EAGAIN); - break; - case (-1): /* adapter was removed */ - ret = (-ENODEV); - break; - case (-2): /* message_length > length of user buffer */ - ret = (-EFAULT); - break; - } - - if (ret > 0) { - if (copy_to_user(buf, data, ret)) { - ret = (-EFAULT); - } - } - - diva_os_free(0, data); - DBG_TRC(("read: ret %d", ret)); - return (ret); -} - - -static int -divas_um_idi_copy_from_user(void *os_handle, void *dst, const void *src, - int length) -{ - memcpy(dst, src, length); - return (length); -} - -static int um_idi_open_adapter(struct file *file, int adapter_nr) -{ - diva_um_idi_os_context_t *p_os; - void *e = - divas_um_idi_create_entity((dword) adapter_nr, (void *) file); - - if (!(file->private_data = e)) { - return (0); - } - p_os = (diva_um_idi_os_context_t *) diva_um_id_get_os_context(e); - init_waitqueue_head(&p_os->read_wait); - init_waitqueue_head(&p_os->close_wait); - timer_setup(&p_os->diva_timer_id, diva_um_timer_function, 0); - p_os->aborted = 0; - p_os->adapter_nr = adapter_nr; - return (1); -} - -static ssize_t -um_idi_write(struct file *file, const char __user *buf, size_t count, - loff_t *offset) -{ - diva_um_idi_os_context_t *p_os; - int ret = -EINVAL; - void *data; - int adapter_nr = 0; - - if (!file->private_data) { - /* the first write() selects the adapter_nr */ - if (count == sizeof(int)) { - if (copy_from_user - ((void *) &adapter_nr, buf, - count)) return (-EFAULT); - if (!(um_idi_open_adapter(file, adapter_nr))) - return (-ENODEV); - return (count); - } else - return (-ENODEV); - } - - if (!(p_os = - (diva_um_idi_os_context_t *) diva_um_id_get_os_context(file-> - private_data))) - { - return (-ENODEV); - } - if (p_os->aborted) { - return (-ENODEV); - } - - if (!(data = diva_os_malloc(0, count))) { - return (-ENOMEM); - } - - if (copy_from_user(data, buf, count)) { - ret = -EFAULT; - } else { - ret = diva_um_idi_write(file->private_data, - file, data, count, - divas_um_idi_copy_from_user); - switch (ret) { - case 0: /* no space available */ - ret = (-EAGAIN); - break; - case (-1): /* 
adapter was removed */ - ret = (-ENODEV); - break; - case (-2): /* length of user buffer > max message_length */ - ret = (-EFAULT); - break; - } - } - diva_os_free(0, data); - DBG_TRC(("write: ret %d", ret)); - return (ret); -} - -static __poll_t um_idi_poll(struct file *file, poll_table *wait) -{ - diva_um_idi_os_context_t *p_os; - - if (!file->private_data) { - return (EPOLLERR); - } - - if ((!(p_os = - (diva_um_idi_os_context_t *) - diva_um_id_get_os_context(file->private_data))) - || p_os->aborted) { - return (EPOLLERR); - } - - poll_wait(file, &p_os->read_wait, wait); - - if (p_os->aborted) { - return (EPOLLERR); - } - - switch (diva_user_mode_idi_ind_ready(file->private_data, file)) { - case (-1): - return (EPOLLERR); - - case 0: - return (0); - } - - return (EPOLLIN | EPOLLRDNORM); -} - -static int um_idi_open(struct inode *inode, struct file *file) -{ - return (0); -} - - -static int um_idi_release(struct inode *inode, struct file *file) -{ - diva_um_idi_os_context_t *p_os; - unsigned int adapter_nr; - int ret = 0; - - if (!(file->private_data)) { - ret = -ENODEV; - goto out; - } - - if (!(p_os = - (diva_um_idi_os_context_t *) diva_um_id_get_os_context(file->private_data))) { - ret = -ENODEV; - goto out; - } - - adapter_nr = p_os->adapter_nr; - - if ((ret = remove_entity(file->private_data))) { - goto out; - } - - if (divas_um_idi_delete_entity - ((int) adapter_nr, file->private_data)) { - ret = -ENODEV; - goto out; - } - -out: - return (ret); -} - -int diva_os_get_context_size(void) -{ - return (sizeof(diva_um_idi_os_context_t)); -} - -void diva_os_wakeup_read(void *os_context) -{ - diva_um_idi_os_context_t *p_os = - (diva_um_idi_os_context_t *) os_context; - wake_up_interruptible(&p_os->read_wait); -} - -void diva_os_wakeup_close(void *os_context) -{ - diva_um_idi_os_context_t *p_os = - (diva_um_idi_os_context_t *) os_context; - wake_up_interruptible(&p_os->close_wait); -} - -static -void diva_um_timer_function(struct timer_list *t) -{ - diva_um_idi_os_context_t *p_os = from_timer(p_os, t, diva_timer_id); - - p_os->aborted = 1; - wake_up_interruptible(&p_os->read_wait); - wake_up_interruptible(&p_os->close_wait); - DBG_ERR(("entity removal watchdog")) - } - -/* -** If application exits without entity removal this function will remove -** entity and block until removal is complete -*/ -static int remove_entity(void *entity) -{ - struct task_struct *curtask = current; - diva_um_idi_os_context_t *p_os; - - diva_um_idi_stop_wdog(entity); - - if (!entity) { - DBG_FTL(("Zero entity on remove")) - return (0); - } - - if (!(p_os = - (diva_um_idi_os_context_t *) - diva_um_id_get_os_context(entity))) { - DBG_FTL(("Zero entity os context on remove")) - return (0); - } - - if (!divas_um_idi_entity_assigned(entity) || p_os->aborted) { - /* - Entity is not assigned, also can be removed - */ - return (0); - } - - DBG_TRC(("E(%08x) check remove", entity)) - - /* - If adapter not answers on remove request inside of - 10 Sec, then adapter is dead - */ - diva_um_idi_start_wdog(entity); - - { - DECLARE_WAITQUEUE(wait, curtask); - - add_wait_queue(&p_os->close_wait, &wait); - for (;;) { - set_current_state(TASK_INTERRUPTIBLE); - if (!divas_um_idi_entity_start_remove(entity) - || p_os->aborted) { - break; - } - schedule(); - } - set_current_state(TASK_RUNNING); - remove_wait_queue(&p_os->close_wait, &wait); - } - - DBG_TRC(("E(%08x) start remove", entity)) - { - DECLARE_WAITQUEUE(wait, curtask); - - add_wait_queue(&p_os->close_wait, &wait); - for (;;) { - set_current_state(TASK_INTERRUPTIBLE); - if 
(!divas_um_idi_entity_assigned(entity) - || p_os->aborted) { - break; - } - schedule(); - } - set_current_state(TASK_RUNNING); - remove_wait_queue(&p_os->close_wait, &wait); - } - - DBG_TRC(("E(%08x) remove complete, aborted:%d", entity, - p_os->aborted)) - - diva_um_idi_stop_wdog(entity); - - p_os->aborted = 0; - - return (0); -} - -/* - * timer watchdog - */ -void diva_um_idi_start_wdog(void *entity) -{ - diva_um_idi_os_context_t *p_os; - - if (entity && - ((p_os = - (diva_um_idi_os_context_t *) - diva_um_id_get_os_context(entity)))) { - mod_timer(&p_os->diva_timer_id, jiffies + 10 * HZ); - } -} - -void diva_um_idi_stop_wdog(void *entity) -{ - diva_um_idi_os_context_t *p_os; - - if (entity && - ((p_os = - (diva_um_idi_os_context_t *) - diva_um_id_get_os_context(entity)))) { - del_timer(&p_os->diva_timer_id); - } -} diff --git a/drivers/isdn/hardware/eicon/divasmain.c b/drivers/isdn/hardware/eicon/divasmain.c deleted file mode 100644 index b6a3950b2564..000000000000 --- a/drivers/isdn/hardware/eicon/divasmain.c +++ /dev/null @@ -1,848 +0,0 @@ -/* $Id: divasmain.c,v 1.55.4.6 2005/02/09 19:28:20 armin Exp $ - * - * Low level driver for Eicon DIVA Server ISDN cards. - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "platform.h" -#undef ID_MASK -#undef N_DATA -#include "pc.h" -#include "di_defs.h" -#include "divasync.h" -#include "diva.h" -#include "di.h" -#include "io.h" -#include "xdi_msg.h" -#include "xdi_adapter.h" -#include "xdi_vers.h" -#include "diva_dma.h" -#include "diva_pci.h" - -static char *main_revision = "$Revision: 1.55.4.6 $"; - -static int major; - -static int dbgmask; - -MODULE_DESCRIPTION("Kernel driver for Eicon DIVA Server cards"); -MODULE_AUTHOR("Cytronics & Melware, Eicon Networks"); -MODULE_LICENSE("GPL"); - -module_param(dbgmask, int, 0); -MODULE_PARM_DESC(dbgmask, "initial debug mask"); - -static char *DRIVERNAME = - "Eicon DIVA Server driver (http://www.melware.net)"; -static char *DRIVERLNAME = "divas"; -static char *DEVNAME = "Divas"; -char *DRIVERRELEASE_DIVAS = "2.0"; - -extern irqreturn_t diva_os_irq_wrapper(int irq, void *context); -extern int create_divas_proc(void); -extern void remove_divas_proc(void); -extern void diva_get_vserial_number(PISDN_ADAPTER IoAdapter, char *buf); -extern int divasfunc_init(int dbgmask); -extern void divasfunc_exit(void); - -typedef struct _diva_os_thread_dpc { - struct tasklet_struct divas_task; - diva_os_soft_isr_t *psoft_isr; -} diva_os_thread_dpc_t; - -/* -------------------------------------------------------------------------- - PCI driver interface section - -------------------------------------------------------------------------- */ -/* - vendor, device Vendor and device ID to match (or PCI_ANY_ID) - subvendor, Subsystem vendor and device ID to match (or PCI_ANY_ID) - subdevice - class, Device class to match. The class_mask tells which bits - class_mask of the class are honored during the comparison. - driver_data Data private to the driver. 
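As a concrete reading of the field descriptions above: each table entry below stores a CARDTYPE_* constant in driver_data, and the PCI core hands the matching entry back to the probe callback, which recovers the constant through ent->driver_data (divas_init_one uses it as a CardProperties index). A sketch of that round-trip, not taken from the patch; CARDTYPE_EXAMPLE is a placeholder value:

#include <linux/pci.h>

#define CARDTYPE_EXAMPLE 0  /* stand-in for one of the CARDTYPE_* constants */

/* 0xE010 is the BRI-2M ID matched by the real table below */
static const struct pci_device_id example_tbl[] = {
        { PCI_VDEVICE(EICON, 0xE010), CARDTYPE_EXAMPLE },
        { 0, }  /* zero-terminated, as required */
};

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
        /* driver_data is exactly the value the matching table entry stored */
        unsigned long cardtype = ent->driver_data;

        dev_info(&pdev->dev, "card type %lu\n", cardtype);
        return 0;
}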
-*/ - -#if !defined(PCI_DEVICE_ID_EICON_MAESTRAP_2) -#define PCI_DEVICE_ID_EICON_MAESTRAP_2 0xE015 -#endif - -#if !defined(PCI_DEVICE_ID_EICON_4BRI_VOIP) -#define PCI_DEVICE_ID_EICON_4BRI_VOIP 0xE016 -#endif - -#if !defined(PCI_DEVICE_ID_EICON_4BRI_2_VOIP) -#define PCI_DEVICE_ID_EICON_4BRI_2_VOIP 0xE017 -#endif - -#if !defined(PCI_DEVICE_ID_EICON_BRI2M_2) -#define PCI_DEVICE_ID_EICON_BRI2M_2 0xE018 -#endif - -#if !defined(PCI_DEVICE_ID_EICON_MAESTRAP_2_VOIP) -#define PCI_DEVICE_ID_EICON_MAESTRAP_2_VOIP 0xE019 -#endif - -#if !defined(PCI_DEVICE_ID_EICON_2F) -#define PCI_DEVICE_ID_EICON_2F 0xE01A -#endif - -#if !defined(PCI_DEVICE_ID_EICON_BRI2M_2_VOIP) -#define PCI_DEVICE_ID_EICON_BRI2M_2_VOIP 0xE01B -#endif - -/* - This table should be sorted by PCI device ID -*/ -static const struct pci_device_id divas_pci_tbl[] = { - /* Diva Server BRI-2M PCI 0xE010 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRA), - CARDTYPE_MAESTRA_PCI }, - /* Diva Server 4BRI-8M PCI 0xE012 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAQ), - CARDTYPE_DIVASRV_Q_8M_PCI }, - /* Diva Server 4BRI-8M 2.0 PCI 0xE013 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAQ_U), - CARDTYPE_DIVASRV_Q_8M_V2_PCI }, - /* Diva Server PRI-30M PCI 0xE014 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAP), - CARDTYPE_DIVASRV_P_30M_PCI }, - /* Diva Server PRI 2.0 adapter 0xE015 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAP_2), - CARDTYPE_DIVASRV_P_30M_V2_PCI }, - /* Diva Server Voice 4BRI-8M PCI 0xE016 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_4BRI_VOIP), - CARDTYPE_DIVASRV_VOICE_Q_8M_PCI }, - /* Diva Server Voice 4BRI-8M 2.0 PCI 0xE017 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_4BRI_2_VOIP), - CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI }, - /* Diva Server BRI-2M 2.0 PCI 0xE018 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_BRI2M_2), - CARDTYPE_DIVASRV_B_2M_V2_PCI }, - /* Diva Server Voice PRI 2.0 PCI 0xE019 */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAP_2_VOIP), - CARDTYPE_DIVASRV_VOICE_P_30M_V2_PCI }, - /* Diva Server 2FX 0xE01A */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_2F), - CARDTYPE_DIVASRV_B_2F_PCI }, - /* Diva Server Voice BRI-2M 2.0 PCI 0xE01B */ - { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_BRI2M_2_VOIP), - CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI }, - { 0, } /* 0 terminated list. */ -}; -MODULE_DEVICE_TABLE(pci, divas_pci_tbl); - -static int divas_init_one(struct pci_dev *pdev, - const struct pci_device_id *ent); -static void divas_remove_one(struct pci_dev *pdev); - -static struct pci_driver diva_pci_driver = { - .name = "divas", - .probe = divas_init_one, - .remove = divas_remove_one, - .id_table = divas_pci_tbl, -}; - -/********************************************************* - ** little helper functions - *********************************************************/ -static char *getrev(const char *revision) -{ - char *rev; - char *p; - if ((p = strchr(revision, ':'))) { - rev = p + 2; - p = strchr(rev, '$'); - *--p = 0; - } else - rev = "1.0"; - return rev; -} - -void diva_log_info(unsigned char *format, ...) 
-{ - va_list args; - unsigned char line[160]; - - va_start(args, format); - vsnprintf(line, sizeof(line), format, args); - va_end(args); - - printk(KERN_INFO "%s: %s\n", DRIVERLNAME, line); -} - -void divas_get_version(char *p) -{ - char tmprev[32]; - - strcpy(tmprev, main_revision); - sprintf(p, "%s: %s(%s) %s(%s) major=%d\n", DRIVERLNAME, DRIVERRELEASE_DIVAS, - getrev(tmprev), diva_xdi_common_code_build, DIVA_BUILD, major); -} - -/* -------------------------------------------------------------------------- - PCI Bus services - -------------------------------------------------------------------------- */ -byte diva_os_get_pci_bus(void *pci_dev_handle) -{ - struct pci_dev *pdev = (struct pci_dev *) pci_dev_handle; - return ((byte) pdev->bus->number); -} - -byte diva_os_get_pci_func(void *pci_dev_handle) -{ - struct pci_dev *pdev = (struct pci_dev *) pci_dev_handle; - return ((byte) pdev->devfn); -} - -unsigned long divasa_get_pci_irq(unsigned char bus, unsigned char func, - void *pci_dev_handle) -{ - unsigned char irq = 0; - struct pci_dev *dev = (struct pci_dev *) pci_dev_handle; - - irq = dev->irq; - - return ((unsigned long) irq); -} - -unsigned long divasa_get_pci_bar(unsigned char bus, unsigned char func, - int bar, void *pci_dev_handle) -{ - unsigned long ret = 0; - struct pci_dev *dev = (struct pci_dev *) pci_dev_handle; - - if (bar < 6) { - ret = dev->resource[bar].start; - } - - DBG_TRC(("GOT BAR[%d]=%08x", bar, ret)); - - { - unsigned long type = (ret & 0x00000001); - if (type & PCI_BASE_ADDRESS_SPACE_IO) { - DBG_TRC((" I/O")); - ret &= PCI_BASE_ADDRESS_IO_MASK; - } else { - DBG_TRC((" memory")); - ret &= PCI_BASE_ADDRESS_MEM_MASK; - } - DBG_TRC((" final=%08x", ret)); - } - - return (ret); -} - -void PCIwrite(byte bus, byte func, int offset, void *data, int length, - void *pci_dev_handle) -{ - struct pci_dev *dev = (struct pci_dev *) pci_dev_handle; - - switch (length) { - case 1: /* byte */ - pci_write_config_byte(dev, offset, - *(unsigned char *) data); - break; - case 2: /* word */ - pci_write_config_word(dev, offset, - *(unsigned short *) data); - break; - case 4: /* dword */ - pci_write_config_dword(dev, offset, - *(unsigned int *) data); - break; - - default: /* buffer */ - if (!(length % 4) && !(length & 0x03)) { /* Copy as dword */ - dword *p = (dword *) data; - length /= 4; - - while (length--) { - pci_write_config_dword(dev, offset, - *(unsigned int *) - p++); - } - } else { /* copy as byte stream */ - byte *p = (byte *) data; - - while (length--) { - pci_write_config_byte(dev, offset, - *(unsigned char *) - p++); - } - } - } -} - -void PCIread(byte bus, byte func, int offset, void *data, int length, - void *pci_dev_handle) -{ - struct pci_dev *dev = (struct pci_dev *) pci_dev_handle; - - switch (length) { - case 1: /* byte */ - pci_read_config_byte(dev, offset, (unsigned char *) data); - break; - case 2: /* word */ - pci_read_config_word(dev, offset, (unsigned short *) data); - break; - case 4: /* dword */ - pci_read_config_dword(dev, offset, (unsigned int *) data); - break; - - default: /* buffer */ - if (!(length % 4) && !(length & 0x03)) { /* Copy as dword */ - dword *p = (dword *) data; - length /= 4; - - while (length--) { - pci_read_config_dword(dev, offset, - (unsigned int *) - p++); - } - } else { /* copy as byte stream */ - byte *p = (byte *) data; - - while (length--) { - pci_read_config_byte(dev, offset, - (unsigned char *) - p++); - } - } - } -} - -/* - Init map with DMA pages. 
It is not problem if some allocations fail - - the channels that will not get one DMA page will use standard PIO - interface -*/ -static void *diva_pci_alloc_consistent(struct pci_dev *hwdev, - size_t size, - dma_addr_t *dma_handle, - void **addr_handle) -{ - void *addr = pci_alloc_consistent(hwdev, size, dma_handle); - - *addr_handle = addr; - - return (addr); -} - -void diva_init_dma_map(void *hdev, - struct _diva_dma_map_entry **ppmap, int nentries) -{ - struct pci_dev *pdev = (struct pci_dev *) hdev; - struct _diva_dma_map_entry *pmap = - diva_alloc_dma_map(hdev, nentries); - - if (pmap) { - int i; - dma_addr_t dma_handle; - void *cpu_addr; - void *addr_handle; - - for (i = 0; i < nentries; i++) { - if (!(cpu_addr = diva_pci_alloc_consistent(pdev, - PAGE_SIZE, - &dma_handle, - &addr_handle))) - { - break; - } - diva_init_dma_map_entry(pmap, i, cpu_addr, - (dword) dma_handle, - addr_handle); - DBG_TRC(("dma map alloc [%d]=(%08lx:%08x:%08lx)", - i, (unsigned long) cpu_addr, - (dword) dma_handle, - (unsigned long) addr_handle))} - } - - *ppmap = pmap; -} - -/* - Free all contained in the map entries and memory used by the map - Should be always called after adapter removal from DIDD array -*/ -void diva_free_dma_map(void *hdev, struct _diva_dma_map_entry *pmap) -{ - struct pci_dev *pdev = (struct pci_dev *) hdev; - int i; - dword phys_addr; - void *cpu_addr; - dma_addr_t dma_handle; - void *addr_handle; - - for (i = 0; (pmap != NULL); i++) { - diva_get_dma_map_entry(pmap, i, &cpu_addr, &phys_addr); - if (!cpu_addr) { - break; - } - addr_handle = diva_get_entry_handle(pmap, i); - dma_handle = (dma_addr_t) phys_addr; - pci_free_consistent(pdev, PAGE_SIZE, addr_handle, - dma_handle); - DBG_TRC(("dma map free [%d]=(%08lx:%08x:%08lx)", i, - (unsigned long) cpu_addr, (dword) dma_handle, - (unsigned long) addr_handle)) - } - - diva_free_dma_mapping(pmap); -} - - -/********************************************************* - ** I/O port utilities - *********************************************************/ - -int -diva_os_register_io_port(void *adapter, int on, unsigned long port, - unsigned long length, const char *name, int id) -{ - if (on) { - if (!request_region(port, length, name)) { - DBG_ERR(("A: I/O: can't register port=%08x", port)) - return (-1); - } - } else { - release_region(port, length); - } - return (0); -} - -void __iomem *divasa_remap_pci_bar(diva_os_xdi_adapter_t *a, int id, unsigned long bar, unsigned long area_length) -{ - void __iomem *ret = ioremap(bar, area_length); - DBG_TRC(("remap(%08x)->%p", bar, ret)); - return (ret); -} - -void divasa_unmap_pci_bar(void __iomem *bar) -{ - if (bar) { - iounmap(bar); - } -} - -/********************************************************* - ** I/O port access - *********************************************************/ -inline byte inpp(void __iomem *addr) -{ - return (inb((unsigned long) addr)); -} - -inline word inppw(void __iomem *addr) -{ - return (inw((unsigned long) addr)); -} - -inline void inppw_buffer(void __iomem *addr, void *P, int length) -{ - insw((unsigned long) addr, (word *) P, length >> 1); -} - -inline void outppw_buffer(void __iomem *addr, void *P, int length) -{ - outsw((unsigned long) addr, (word *) P, length >> 1); -} - -inline void outppw(void __iomem *addr, word w) -{ - outw(w, (unsigned long) addr); -} - -inline void outpp(void __iomem *addr, word p) -{ - outb(p, (unsigned long) addr); -} - -/* -------------------------------------------------------------------------- - IRQ request / remove - 
-------------------------------------------------------------------------- */ -int diva_os_register_irq(void *context, byte irq, const char *name) -{ - int result = request_irq(irq, diva_os_irq_wrapper, - IRQF_SHARED, name, context); - return (result); -} - -void diva_os_remove_irq(void *context, byte irq) -{ - free_irq(irq, context); -} - -/* -------------------------------------------------------------------------- - DPC framework implementation - -------------------------------------------------------------------------- */ -static void diva_os_dpc_proc(unsigned long context) -{ - diva_os_thread_dpc_t *psoft_isr = (diva_os_thread_dpc_t *) context; - diva_os_soft_isr_t *pisr = psoft_isr->psoft_isr; - - (*(pisr->callback)) (pisr, pisr->callback_context); -} - -int diva_os_initialize_soft_isr(diva_os_soft_isr_t *psoft_isr, - diva_os_soft_isr_callback_t callback, - void *callback_context) -{ - diva_os_thread_dpc_t *pdpc; - - pdpc = (diva_os_thread_dpc_t *) diva_os_malloc(0, sizeof(*pdpc)); - if (!(psoft_isr->object = pdpc)) { - return (-1); - } - memset(pdpc, 0x00, sizeof(*pdpc)); - psoft_isr->callback = callback; - psoft_isr->callback_context = callback_context; - pdpc->psoft_isr = psoft_isr; - tasklet_init(&pdpc->divas_task, diva_os_dpc_proc, (unsigned long)pdpc); - - return (0); -} - -int diva_os_schedule_soft_isr(diva_os_soft_isr_t *psoft_isr) -{ - if (psoft_isr && psoft_isr->object) { - diva_os_thread_dpc_t *pdpc = - (diva_os_thread_dpc_t *) psoft_isr->object; - - tasklet_schedule(&pdpc->divas_task); - } - - return (1); -} - -int diva_os_cancel_soft_isr(diva_os_soft_isr_t *psoft_isr) -{ - return (0); -} - -void diva_os_remove_soft_isr(diva_os_soft_isr_t *psoft_isr) -{ - if (psoft_isr && psoft_isr->object) { - diva_os_thread_dpc_t *pdpc = - (diva_os_thread_dpc_t *) psoft_isr->object; - void *mem; - - tasklet_kill(&pdpc->divas_task); - mem = psoft_isr->object; - psoft_isr->object = NULL; - diva_os_free(0, mem); - } -} - -/* - * kernel/user space copy functions - */ -static int -xdi_copy_to_user(void *os_handle, void __user *dst, const void *src, int length) -{ - if (copy_to_user(dst, src, length)) { - return (-EFAULT); - } - return (length); -} - -static int -xdi_copy_from_user(void *os_handle, void *dst, const void __user *src, int length) -{ - if (copy_from_user(dst, src, length)) { - return (-EFAULT); - } - return (length); -} - -/* - * device node operations - */ -static int divas_open(struct inode *inode, struct file *file) -{ - return (0); -} - -static int divas_release(struct inode *inode, struct file *file) -{ - if (file->private_data) { - diva_xdi_close_adapter(file->private_data, file); - } - return (0); -} - -static ssize_t divas_write(struct file *file, const char __user *buf, - size_t count, loff_t *ppos) -{ - diva_xdi_um_cfg_cmd_t msg; - int ret = -EINVAL; - - if (!file->private_data) { - file->private_data = diva_xdi_open_adapter(file, buf, - count, &msg, - xdi_copy_from_user); - if (!file->private_data) - return (-ENODEV); - ret = diva_xdi_write(file->private_data, file, - buf, count, &msg, xdi_copy_from_user); - } else { - ret = diva_xdi_write(file->private_data, file, - buf, count, NULL, xdi_copy_from_user); - } - - switch (ret) { - case -1: /* Message should be removed from rx mailbox first */ - ret = -EBUSY; - break; - case -2: /* invalid adapter was specified in this call */ - ret = -ENOMEM; - break; - case -3: - ret = -ENXIO; - break; - } - DBG_TRC(("write: ret %d", ret)); - return (ret); -} - -static ssize_t divas_read(struct file *file, char __user *buf, - size_t 
count, loff_t *ppos) -{ - diva_xdi_um_cfg_cmd_t msg; - int ret = -EINVAL; - - if (!file->private_data) { - file->private_data = diva_xdi_open_adapter(file, buf, - count, &msg, - xdi_copy_from_user); - } - if (!file->private_data) { - return (-ENODEV); - } - - ret = diva_xdi_read(file->private_data, file, - buf, count, xdi_copy_to_user); - switch (ret) { - case -1: /* RX mailbox is empty */ - ret = -EAGAIN; - break; - case -2: /* no memory, mailbox was cleared, last command is failed */ - ret = -ENOMEM; - break; - case -3: /* can't copy to user, retry */ - ret = -EFAULT; - break; - } - DBG_TRC(("read: ret %d", ret)); - return (ret); -} - -static __poll_t divas_poll(struct file *file, poll_table *wait) -{ - if (!file->private_data) { - return (EPOLLERR); - } - return (EPOLLIN | EPOLLRDNORM); -} - -static const struct file_operations divas_fops = { - .owner = THIS_MODULE, - .llseek = no_llseek, - .read = divas_read, - .write = divas_write, - .poll = divas_poll, - .open = divas_open, - .release = divas_release -}; - -static void divas_unregister_chrdev(void) -{ - unregister_chrdev(major, DEVNAME); -} - -static int __init divas_register_chrdev(void) -{ - if ((major = register_chrdev(0, DEVNAME, &divas_fops)) < 0) - { - printk(KERN_ERR "%s: failed to create /dev entry.\n", - DRIVERLNAME); - return (0); - } - - return (1); -} - -/* -------------------------------------------------------------------------- - PCI driver section - -------------------------------------------------------------------------- */ -static int divas_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) -{ - void *pdiva = NULL; - u8 pci_latency; - u8 new_latency = 32; - - DBG_TRC(("%s bus: %08x fn: %08x insertion.\n", - CardProperties[ent->driver_data].Name, - pdev->bus->number, pdev->devfn)) - printk(KERN_INFO "%s: %s bus: %08x fn: %08x insertion.\n", - DRIVERLNAME, CardProperties[ent->driver_data].Name, - pdev->bus->number, pdev->devfn); - - if (pci_enable_device(pdev)) { - DBG_TRC(("%s: %s bus: %08x fn: %08x device init failed.\n", - DRIVERLNAME, - CardProperties[ent->driver_data].Name, - pdev->bus->number, - pdev->devfn)) - printk(KERN_ERR - "%s: %s bus: %08x fn: %08x device init failed.\n", - DRIVERLNAME, - CardProperties[ent->driver_data]. - Name, pdev->bus->number, - pdev->devfn); - return (-EIO); - } - - pci_set_master(pdev); - - pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &pci_latency); - if (!pci_latency) { - DBG_TRC(("%s: bus: %08x fn: %08x fix latency.\n", - DRIVERLNAME, pdev->bus->number, pdev->devfn)) - printk(KERN_INFO - "%s: bus: %08x fn: %08x fix latency.\n", - DRIVERLNAME, pdev->bus->number, pdev->devfn); - pci_write_config_byte(pdev, PCI_LATENCY_TIMER, new_latency); - } - - if (!(pdiva = diva_driver_add_card(pdev, ent->driver_data))) { - DBG_TRC(("%s: %s bus: %08x fn: %08x card init failed.\n", - DRIVERLNAME, - CardProperties[ent->driver_data].Name, - pdev->bus->number, - pdev->devfn)) - printk(KERN_ERR - "%s: %s bus: %08x fn: %08x card init failed.\n", - DRIVERLNAME, - CardProperties[ent->driver_data]. 
- Name, pdev->bus->number, - pdev->devfn); - return (-EIO); - } - - pci_set_drvdata(pdev, pdiva); - - return (0); -} - -static void divas_remove_one(struct pci_dev *pdev) -{ - void *pdiva = pci_get_drvdata(pdev); - - DBG_TRC(("bus: %08x fn: %08x removal.\n", - pdev->bus->number, pdev->devfn)) - printk(KERN_INFO "%s: bus: %08x fn: %08x removal.\n", - DRIVERLNAME, pdev->bus->number, pdev->devfn); - - if (pdiva) { - diva_driver_remove_card(pdiva); - } - -} - -/* -------------------------------------------------------------------------- - Driver Load / Startup - -------------------------------------------------------------------------- */ -static int __init divas_init(void) -{ - char tmprev[50]; - int ret = 0; - - printk(KERN_INFO "%s\n", DRIVERNAME); - printk(KERN_INFO "%s: Rel:%s Rev:", DRIVERLNAME, DRIVERRELEASE_DIVAS); - strcpy(tmprev, main_revision); - printk("%s Build: %s(%s)\n", getrev(tmprev), - diva_xdi_common_code_build, DIVA_BUILD); - printk(KERN_INFO "%s: support for: ", DRIVERLNAME); -#ifdef CONFIG_ISDN_DIVAS_BRIPCI - printk("BRI/PCI "); -#endif -#ifdef CONFIG_ISDN_DIVAS_PRIPCI - printk("PRI/PCI "); -#endif - printk("adapters\n"); - - if (!divasfunc_init(dbgmask)) { - printk(KERN_ERR "%s: failed to connect to DIDD.\n", - DRIVERLNAME); - ret = -EIO; - goto out; - } - - if (!divas_register_chrdev()) { -#ifdef MODULE - divasfunc_exit(); -#endif - ret = -EIO; - goto out; - } - - if (!create_divas_proc()) { -#ifdef MODULE - divas_unregister_chrdev(); - divasfunc_exit(); -#endif - printk(KERN_ERR "%s: failed to create proc entry.\n", - DRIVERLNAME); - ret = -EIO; - goto out; - } - - if ((ret = pci_register_driver(&diva_pci_driver))) { -#ifdef MODULE - remove_divas_proc(); - divas_unregister_chrdev(); - divasfunc_exit(); -#endif - printk(KERN_ERR "%s: failed to init pci driver.\n", - DRIVERLNAME); - goto out; - } - printk(KERN_INFO "%s: started with major %d\n", DRIVERLNAME, major); - -out: - return (ret); -} - -/* -------------------------------------------------------------------------- - Driver Unload - -------------------------------------------------------------------------- */ -static void __exit divas_exit(void) -{ - pci_unregister_driver(&diva_pci_driver); - remove_divas_proc(); - divas_unregister_chrdev(); - divasfunc_exit(); - - printk(KERN_INFO "%s: module unloaded.\n", DRIVERLNAME); -} - -module_init(divas_init); -module_exit(divas_exit); diff --git a/drivers/isdn/hardware/eicon/divasproc.c b/drivers/isdn/hardware/eicon/divasproc.c deleted file mode 100644 index f52f4622b10b..000000000000 --- a/drivers/isdn/hardware/eicon/divasproc.c +++ /dev/null @@ -1,412 +0,0 @@ -/* $Id: divasproc.c,v 1.19.4.3 2005/01/31 12:22:20 armin Exp $ - * - * Low level driver for Eicon DIVA Server ISDN cards. - * /proc functions - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
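The /proc interface removed below creates, per adapter, an info file plus two one-character toggles (group_optimization and dynamic_l1_down) whose write handlers accept a single '0' or '1', optionally followed by a newline, and reject anything else. A user-space sketch of flipping one toggle, assuming proc_net_eicon corresponds to /proc/net/eicon and the first controller's directory is adapter1 (both paths inferred from the naming code below, not stated in the patch):

#include <stdio.h>

int main(void)
{
        /* assumed path: proc_net_eicon + adapter_dir_name + controller */
        const char *path = "/proc/net/eicon/adapter1/dynamic_l1_down";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror("fopen");
                return 1;
        }
        /* the handler reads only the first byte: '1' enables, '0' disables */
        if (fwrite("1", 1, 1, f) != 1)
                perror("fwrite");
        fclose(f);
        return 0;
}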
- */ - -#include -#include -#include -#include -#include -#include -#include - -#include "platform.h" -#include "debuglib.h" -#undef ID_MASK -#undef N_DATA -#include "pc.h" -#include "di_defs.h" -#include "divasync.h" -#include "di.h" -#include "io.h" -#include "xdi_msg.h" -#include "xdi_adapter.h" -#include "diva.h" -#include "diva_pci.h" - - -extern PISDN_ADAPTER IoAdapters[MAX_ADAPTER]; -extern void divas_get_version(char *); -extern void diva_get_vserial_number(PISDN_ADAPTER IoAdapter, char *buf); - -/********************************************************* - ** Functions for /proc interface / File operations - *********************************************************/ - -static char *divas_proc_name = "divas"; -static char *adapter_dir_name = "adapter"; -static char *info_proc_name = "info"; -static char *grp_opt_proc_name = "group_optimization"; -static char *d_l1_down_proc_name = "dynamic_l1_down"; - -/* -** "divas" entry -*/ - -extern struct proc_dir_entry *proc_net_eicon; -static struct proc_dir_entry *divas_proc_entry = NULL; - -static ssize_t -divas_read(struct file *file, char __user *buf, size_t count, loff_t *off) -{ - int len = 0; - int cadapter; - char tmpbuf[80]; - char tmpser[16]; - - if (*off) - return 0; - - divas_get_version(tmpbuf); - if (copy_to_user(buf + len, &tmpbuf, strlen(tmpbuf))) - return -EFAULT; - len += strlen(tmpbuf); - - for (cadapter = 0; cadapter < MAX_ADAPTER; cadapter++) { - if (IoAdapters[cadapter]) { - diva_get_vserial_number(IoAdapters[cadapter], - tmpser); - sprintf(tmpbuf, - "%2d: %-30s Serial:%-10s IRQ:%2d\n", - cadapter + 1, - IoAdapters[cadapter]->Properties.Name, - tmpser, - IoAdapters[cadapter]->irq_info.irq_nr); - if ((strlen(tmpbuf) + len) > count) - break; - if (copy_to_user - (buf + len, &tmpbuf, - strlen(tmpbuf))) return -EFAULT; - len += strlen(tmpbuf); - } - } - - *off += len; - return (len); -} - -static ssize_t -divas_write(struct file *file, const char __user *buf, size_t count, loff_t *off) -{ - return (-ENODEV); -} - -static __poll_t divas_poll(struct file *file, poll_table *wait) -{ - return (EPOLLERR); -} - -static int divas_open(struct inode *inode, struct file *file) -{ - return nonseekable_open(inode, file); -} - -static int divas_close(struct inode *inode, struct file *file) -{ - return (0); -} - -static const struct file_operations divas_fops = { - .owner = THIS_MODULE, - .llseek = no_llseek, - .read = divas_read, - .write = divas_write, - .poll = divas_poll, - .open = divas_open, - .release = divas_close -}; - -int create_divas_proc(void) -{ - divas_proc_entry = proc_create(divas_proc_name, S_IFREG | S_IRUGO, - proc_net_eicon, &divas_fops); - if (!divas_proc_entry) - return (0); - - return (1); -} - -void remove_divas_proc(void) -{ - if (divas_proc_entry) { - remove_proc_entry(divas_proc_name, proc_net_eicon); - divas_proc_entry = NULL; - } -} - -static ssize_t grp_opt_proc_write(struct file *file, const char __user *buffer, - size_t count, loff_t *pos) -{ - diva_os_xdi_adapter_t *a = PDE_DATA(file_inode(file)); - PISDN_ADAPTER IoAdapter = IoAdapters[a->controller - 1]; - - if ((count == 1) || (count == 2)) { - char c; - if (get_user(c, buffer)) - return -EFAULT; - switch (c) { - case '0': - IoAdapter->capi_cfg.cfg_1 &= - ~DIVA_XDI_CAPI_CFG_1_GROUP_POPTIMIZATION_ON; - break; - case '1': - IoAdapter->capi_cfg.cfg_1 |= - DIVA_XDI_CAPI_CFG_1_GROUP_POPTIMIZATION_ON; - break; - default: - return (-EINVAL); - } - return (count); - } - return (-EINVAL); -} - -static ssize_t d_l1_down_proc_write(struct file *file, const char 
__user *buffer, - size_t count, loff_t *pos) -{ - diva_os_xdi_adapter_t *a = PDE_DATA(file_inode(file)); - PISDN_ADAPTER IoAdapter = IoAdapters[a->controller - 1]; - - if ((count == 1) || (count == 2)) { - char c; - if (get_user(c, buffer)) - return -EFAULT; - switch (c) { - case '0': - IoAdapter->capi_cfg.cfg_1 &= - ~DIVA_XDI_CAPI_CFG_1_DYNAMIC_L1_ON; - break; - case '1': - IoAdapter->capi_cfg.cfg_1 |= - DIVA_XDI_CAPI_CFG_1_DYNAMIC_L1_ON; - break; - default: - return (-EINVAL); - } - return (count); - } - return (-EINVAL); -} - -static int d_l1_down_proc_show(struct seq_file *m, void *v) -{ - diva_os_xdi_adapter_t *a = m->private; - PISDN_ADAPTER IoAdapter = IoAdapters[a->controller - 1]; - - seq_printf(m, "%s\n", - (IoAdapter->capi_cfg. - cfg_1 & DIVA_XDI_CAPI_CFG_1_DYNAMIC_L1_ON) ? "1" : - "0"); - return 0; -} - -static int d_l1_down_proc_open(struct inode *inode, struct file *file) -{ - return single_open(file, d_l1_down_proc_show, PDE_DATA(inode)); -} - -static const struct file_operations d_l1_down_proc_fops = { - .owner = THIS_MODULE, - .open = d_l1_down_proc_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, - .write = d_l1_down_proc_write, -}; - -static int grp_opt_proc_show(struct seq_file *m, void *v) -{ - diva_os_xdi_adapter_t *a = m->private; - PISDN_ADAPTER IoAdapter = IoAdapters[a->controller - 1]; - - seq_printf(m, "%s\n", - (IoAdapter->capi_cfg. - cfg_1 & DIVA_XDI_CAPI_CFG_1_GROUP_POPTIMIZATION_ON) - ? "1" : "0"); - return 0; -} - -static int grp_opt_proc_open(struct inode *inode, struct file *file) -{ - return single_open(file, grp_opt_proc_show, PDE_DATA(inode)); -} - -static const struct file_operations grp_opt_proc_fops = { - .owner = THIS_MODULE, - .open = grp_opt_proc_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, - .write = grp_opt_proc_write, -}; - -static ssize_t info_proc_write(struct file *file, const char __user *buffer, - size_t count, loff_t *pos) -{ - diva_os_xdi_adapter_t *a = PDE_DATA(file_inode(file)); - PISDN_ADAPTER IoAdapter = IoAdapters[a->controller - 1]; - char c[4]; - - if (count <= 4) - return -EINVAL; - - if (copy_from_user(c, buffer, 4)) - return -EFAULT; - - /* this is for test purposes only */ - if (!memcmp(c, "trap", 4)) { - (*(IoAdapter->os_trap_nfy_Fnc)) (IoAdapter, IoAdapter->ANum); - return (count); - } - return (-EINVAL); -} - -static int info_proc_show(struct seq_file *m, void *v) -{ - int i = 0; - char *p; - char tmpser[16]; - diva_os_xdi_adapter_t *a = m->private; - PISDN_ADAPTER IoAdapter = IoAdapters[a->controller - 1]; - - seq_printf(m, "Name : %s\n", IoAdapter->Properties.Name); - seq_printf(m, "DSP state : %08x\n", a->dsp_mask); - seq_printf(m, "Channels : %02d\n", IoAdapter->Properties.Channels); - seq_printf(m, "E. max/used : %03d/%03d\n", - IoAdapter->e_max, IoAdapter->e_count); - diva_get_vserial_number(IoAdapter, tmpser); - seq_printf(m, "Serial : %s\n", tmpser); - seq_printf(m, "IRQ : %d\n", IoAdapter->irq_info.irq_nr); - seq_printf(m, "CardIndex : %d\n", a->CardIndex); - seq_printf(m, "CardOrdinal : %d\n", a->CardOrdinal); - seq_printf(m, "Controller : %d\n", a->controller); - seq_printf(m, "Bus-Type : %s\n", - (a->Bus == - DIVAS_XDI_ADAPTER_BUS_ISA) ? 
"ISA" : "PCI"); - seq_printf(m, "Port-Name : %s\n", a->port_name); - if (a->Bus == DIVAS_XDI_ADAPTER_BUS_PCI) { - seq_printf(m, "PCI-bus : %d\n", a->resources.pci.bus); - seq_printf(m, "PCI-func : %d\n", a->resources.pci.func); - for (i = 0; i < 8; i++) { - if (a->resources.pci.bar[i]) { - seq_printf(m, - "Mem / I/O %d : 0x%x / mapped : 0x%lx", - i, a->resources.pci.bar[i], - (unsigned long) a->resources. - pci.addr[i]); - if (a->resources.pci.length[i]) { - seq_printf(m, - " / length : %d", - a->resources.pci. - length[i]); - } - seq_putc(m, '\n'); - } - } - } - if ((!a->xdi_adapter.port) && - ((!a->xdi_adapter.ram) || - (!a->xdi_adapter.reset) - || (!a->xdi_adapter.cfg))) { - if (!IoAdapter->irq_info.irq_nr) { - p = "slave"; - } else { - p = "out of service"; - } - } else if (a->xdi_adapter.trapped) { - p = "trapped"; - } else if (a->xdi_adapter.Initialized) { - p = "active"; - } else { - p = "ready"; - } - seq_printf(m, "State : %s\n", p); - - return 0; -} - -static int info_proc_open(struct inode *inode, struct file *file) -{ - return single_open(file, info_proc_show, PDE_DATA(inode)); -} - -static const struct file_operations info_proc_fops = { - .owner = THIS_MODULE, - .open = info_proc_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, - .write = info_proc_write, -}; - -/* -** adapter proc init/de-init -*/ - -/* -------------------------------------------------------------------------- - Create adapter directory and files in proc file system - -------------------------------------------------------------------------- */ -int create_adapter_proc(diva_os_xdi_adapter_t *a) -{ - struct proc_dir_entry *de, *pe; - char tmp[16]; - - sprintf(tmp, "%s%d", adapter_dir_name, a->controller); - if (!(de = proc_mkdir(tmp, proc_net_eicon))) - return (0); - a->proc_adapter_dir = (void *) de; - - pe = proc_create_data(info_proc_name, S_IRUGO | S_IWUSR, de, - &info_proc_fops, a); - if (!pe) - return (0); - a->proc_info = (void *) pe; - - pe = proc_create_data(grp_opt_proc_name, S_IRUGO | S_IWUSR, de, - &grp_opt_proc_fops, a); - if (pe) - a->proc_grp_opt = (void *) pe; - pe = proc_create_data(d_l1_down_proc_name, S_IRUGO | S_IWUSR, de, - &d_l1_down_proc_fops, a); - if (pe) - a->proc_d_l1_down = (void *) pe; - - DBG_TRC(("proc entry %s created", tmp)); - - return (1); -} - -/* -------------------------------------------------------------------------- - Remove adapter directory and files in proc file system - -------------------------------------------------------------------------- */ -void remove_adapter_proc(diva_os_xdi_adapter_t *a) -{ - char tmp[16]; - - if (a->proc_adapter_dir) { - if (a->proc_d_l1_down) { - remove_proc_entry(d_l1_down_proc_name, - (struct proc_dir_entry *) a->proc_adapter_dir); - } - if (a->proc_grp_opt) { - remove_proc_entry(grp_opt_proc_name, - (struct proc_dir_entry *) a->proc_adapter_dir); - } - if (a->proc_info) { - remove_proc_entry(info_proc_name, - (struct proc_dir_entry *) a->proc_adapter_dir); - } - sprintf(tmp, "%s%d", adapter_dir_name, a->controller); - remove_proc_entry(tmp, proc_net_eicon); - DBG_TRC(("proc entry %s%d removed", adapter_dir_name, - a->controller)); - } -} diff --git a/drivers/isdn/hardware/eicon/divasync.h b/drivers/isdn/hardware/eicon/divasync.h deleted file mode 100644 index dd6b53a2c2c8..000000000000 --- a/drivers/isdn/hardware/eicon/divasync.h +++ /dev/null @@ -1,489 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. 
- * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_SYNC__H -#define __DIVA_SYNC__H -#define IDI_SYNC_REQ_REMOVE 0x00 -#define IDI_SYNC_REQ_GET_NAME 0x01 -#define IDI_SYNC_REQ_GET_SERIAL 0x02 -#define IDI_SYNC_REQ_SET_POSTCALL 0x03 -#define IDI_SYNC_REQ_GET_XLOG 0x04 -#define IDI_SYNC_REQ_GET_FEATURES 0x05 -#define IDI_SYNC_REQ_USB_REGISTER 0x06 -#define IDI_SYNC_REQ_USB_RELEASE 0x07 -#define IDI_SYNC_REQ_USB_ADD_DEVICE 0x08 -#define IDI_SYNC_REQ_USB_START_DEVICE 0x09 -#define IDI_SYNC_REQ_USB_STOP_DEVICE 0x0A -#define IDI_SYNC_REQ_USB_REMOVE_DEVICE 0x0B -#define IDI_SYNC_REQ_GET_CARDTYPE 0x0C -#define IDI_SYNC_REQ_GET_DBG_XLOG 0x0D -#define DIVA_USB -#define DIVA_USB_REQ 0xAC -#define DIVA_USB_TEST 0xAB -#define DIVA_USB_ADD_ADAPTER 0xAC -#define DIVA_USB_REMOVE_ADAPTER 0xAD -#define IDI_SYNC_REQ_SERIAL_HOOK 0x80 -#define IDI_SYNC_REQ_XCHANGE_STATUS 0x81 -#define IDI_SYNC_REQ_USB_HOOK 0x82 -#define IDI_SYNC_REQ_PORTDRV_HOOK 0x83 -#define IDI_SYNC_REQ_SLI 0x84 /* SLI request from 3signal modem drivers */ -#define IDI_SYNC_REQ_RECONFIGURE 0x85 -#define IDI_SYNC_REQ_RESET 0x86 -#define IDI_SYNC_REQ_GET_85X_DEVICE_DATA 0x87 -#define IDI_SYNC_REQ_LOCK_85X 0x88 -#define IDI_SYNC_REQ_DIVA_85X_USB_DATA_EXCHANGE 0x99 -#define IDI_SYNC_REQ_DIPORT_EXCHANGE_REQ 0x98 -#define IDI_SYNC_REQ_GET_85X_EXT_PORT_TYPE 0xA0 -/******************************************************************************/ -#define IDI_SYNC_REQ_XDI_GET_EXTENDED_FEATURES 0x92 -/* - To receive XDI features: - 1. set 'buffer_length_in_bytes' to length of you buffer - 2. set 'features' to pointer to your buffer - 3. issue synchronous request to XDI - 4. Check that feature 'DIVA_XDI_EXTENDED_FEATURES_VALID' is present - after call. This feature does indicate that your request - was processed and XDI does support this synchronous request - 5. if on return bit 31 (0x80000000) in 'buffer_length_in_bytes' is - set then provided buffer was too small, and bits 30-0 does - contain necessary length of buffer. 
- in this case only features that do find place in the buffer - are indicated to caller -*/ -typedef struct _diva_xdi_get_extended_xdi_features { - dword buffer_length_in_bytes; - byte *features; -} diva_xdi_get_extended_xdi_features_t; -/* - features[0] -*/ -#define DIVA_XDI_EXTENDED_FEATURES_VALID 0x01 -#define DIVA_XDI_EXTENDED_FEATURE_CMA 0x02 -#define DIVA_XDI_EXTENDED_FEATURE_SDRAM_BAR 0x04 -#define DIVA_XDI_EXTENDED_FEATURE_CAPI_PRMS 0x08 -#define DIVA_XDI_EXTENDED_FEATURE_NO_CANCEL_RC 0x10 -#define DIVA_XDI_EXTENDED_FEATURE_RX_DMA 0x20 -#define DIVA_XDI_EXTENDED_FEATURE_MANAGEMENT_DMA 0x40 -#define DIVA_XDI_EXTENDED_FEATURE_WIDE_ID 0x80 -#define DIVA_XDI_EXTENDED_FEATURES_MAX_SZ 1 -/******************************************************************************/ -#define IDI_SYNC_REQ_XDI_GET_ADAPTER_SDRAM_BAR 0x93 -typedef struct _diva_xdi_get_adapter_sdram_bar { - dword bar; -} diva_xdi_get_adapter_sdram_bar_t; -/******************************************************************************/ -#define IDI_SYNC_REQ_XDI_GET_CAPI_PARAMS 0x94 -/* - CAPI Parameters will be written in the caller's buffer -*/ -typedef struct _diva_xdi_get_capi_parameters { - dword structure_length; - byte flag_dynamic_l1_down; - byte group_optimization_enabled; -} diva_xdi_get_capi_parameters_t; -/******************************************************************************/ -#define IDI_SYNC_REQ_XDI_GET_LOGICAL_ADAPTER_NUMBER 0x95 -/* - Get logical adapter number, as assigned by XDI - 'controller' is starting with zero 'sub' controller number - in case of one adapter that supports multiple interfaces - 'controller' is zero for Master adapter (and adapter that supports - only one interface) -*/ -typedef struct _diva_xdi_get_logical_adapter_number { - dword logical_adapter_number; - dword controller; - dword total_controllers; -} diva_xdi_get_logical_adapter_number_s_t; -/******************************************************************************/ -#define IDI_SYNC_REQ_UP1DM_OPERATION 0x96 -/******************************************************************************/ -#define IDI_SYNC_REQ_DMA_DESCRIPTOR_OPERATION 0x97 -#define IDI_SYNC_REQ_DMA_DESCRIPTOR_ALLOC 0x01 -#define IDI_SYNC_REQ_DMA_DESCRIPTOR_FREE 0x02 -typedef struct _diva_xdi_dma_descriptor_operation { - int operation; - int descriptor_number; - void *descriptor_address; - dword descriptor_magic; -} diva_xdi_dma_descriptor_operation_t; -/******************************************************************************/ -#define IDI_SYNC_REQ_DIDD_REGISTER_ADAPTER_NOTIFY 0x01 -#define IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER_NOTIFY 0x02 -#define IDI_SYNC_REQ_DIDD_ADD_ADAPTER 0x03 -#define IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER 0x04 -#define IDI_SYNC_REQ_DIDD_READ_ADAPTER_ARRAY 0x05 -#define IDI_SYNC_REQ_DIDD_GET_CFG_LIB_IFC 0x10 -typedef struct _diva_didd_adapter_notify { - dword handle; /* Notification handle */ - void *callback; - void *context; -} diva_didd_adapter_notify_t; -typedef struct _diva_didd_add_adapter { - void *descriptor; -} diva_didd_add_adapter_t; -typedef struct _diva_didd_remove_adapter { - IDI_CALL p_request; -} diva_didd_remove_adapter_t; -typedef struct _diva_didd_read_adapter_array { - void *buffer; - dword length; -} diva_didd_read_adapter_array_t; -typedef struct _diva_didd_get_cfg_lib_ifc { - void *ifc; -} diva_didd_get_cfg_lib_ifc_t; -/******************************************************************************/ -#define IDI_SYNC_REQ_XDI_GET_STREAM 0x91 -#define DIVA_XDI_SYNCHRONOUS_SERVICE 0x01 -#define DIVA_XDI_DMA_SERVICE 
0x02 -#define DIVA_XDI_AUTO_SERVICE 0x03 -#define DIVA_ISTREAM_COMPLETE_NOTIFY 0 -#define DIVA_ISTREAM_COMPLETE_READ 1 -#define DIVA_ISTREAM_COMPLETE_WRITE 2 -typedef struct _diva_xdi_stream_interface { - unsigned char Id; /* filled by XDI client */ - unsigned char provided_service; /* filled by XDI */ - unsigned char requested_service; /* filled by XDI Client */ - void *xdi_context; /* filled by XDI */ - void *client_context; /* filled by XDI client */ - int (*write)(void *context, - int Id, - void *data, - int length, - int final, - byte usr1, - byte usr2); - int (*read)(void *context, - int Id, - void *data, - int max_length, - int *final, - byte *usr1, - byte *usr2); - int (*complete)(void *client_context, - int Id, - int what, - void *data, - int length, - int *final); -} diva_xdi_stream_interface_t; -/******************************************************************************/ -/* - * IDI_SYNC_REQ_SERIAL_HOOK - special interface for the DIVA Mobile card - */ -typedef struct -{ unsigned char LineState; /* Modem line state (STATUS_R) */ -#define SERIAL_GSM_CELL 0x01 /* GSM or CELL cable attached */ - unsigned char CardState; /* PCMCIA card state (0 = down) */ - unsigned char IsdnState; /* ISDN layer 1 state (0 = down)*/ - unsigned char HookState; /* current logical hook state */ -#define SERIAL_ON_HOOK 0x02 /* set in DIVA CTRL_R register */ -} SERIAL_STATE; -typedef int (*SERIAL_INT_CB)(void *Context); -typedef int (*SERIAL_DPC_CB)(void *Context); -typedef unsigned char (*SERIAL_I_SYNC)(void *Context); -typedef struct -{ /* 'Req' and 'Rc' must be at the same place as in the ENTITY struct */ - unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (is the request) */ - unsigned char Function; /* private function code */ -#define SERIAL_HOOK_ATTACH 0x81 -#define SERIAL_HOOK_STATUS 0x82 -#define SERIAL_HOOK_I_SYNC 0x83 -#define SERIAL_HOOK_NOECHO 0x84 -#define SERIAL_HOOK_RING 0x85 -#define SERIAL_HOOK_DETACH 0x8f - unsigned char Flags; /* function refinements */ - /* parameters passed by the ATTACH request */ - SERIAL_INT_CB InterruptHandler; /* called on each interrupt */ - SERIAL_DPC_CB DeferredHandler; /* called on hook state changes */ - void *HandlerContext; /* context for both handlers */ - /* return values for both the ATTACH and the STATUS request */ - unsigned long IoBase; /* IO port assigned to UART */ - SERIAL_STATE State; - /* parameters and return values for the I_SYNC function */ - SERIAL_I_SYNC SyncFunction; /* to be called synchronized */ - void *SyncContext; /* context for this function */ - unsigned char SyncResult; /* return value of function */ -} SERIAL_HOOK; -/* - * IDI_SYNC_REQ_XCHANGE_STATUS - exchange the status between IDI and WMP - * IDI_SYNC_REQ_RECONFIGURE - reconfiguration of IDI from WMP - */ -typedef struct -{ /* 'Req' and 'Rc' must be at the same place as in the ENTITY struct */ - unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (is the request) */ -#define DRIVER_STATUS_BOOT 0xA1 -#define DRIVER_STATUS_INIT_DEV 0xA2 -#define DRIVER_STATUS_RUNNING 0xA3 -#define DRIVER_STATUS_SHUTDOWN 0xAF -#define DRIVER_STATUS_TRAPPED 0xAE - unsigned char wmpStatus; /* exported by WMP */ - unsigned char idiStatus; /* exported by IDI */ - unsigned long wizProto; /* from WMP registry to IDI */ - /* the cardtype value is defined by cardtype.h */ - unsigned long cardType; /* from IDI registry to WMP */ - unsigned long nt2; /* from IDI registry to WMP */ - unsigned long permanent; /* from IDI registry to 
WMP */ - unsigned long stableL2; /* from IDI registry to WMP */ - unsigned long tei; /* from IDI registry to WMP */ -#define CRC4_MASK 0x00000003 -#define L1_TRISTATE_MASK 0x00000004 -#define WATCHDOG_MASK 0x00000008 -#define NO_ORDER_CHECK_MASK 0x00000010 -#define LOW_CHANNEL_MASK 0x00000020 -#define NO_HSCX30_MASK 0x00000040 -#define SET_BOARD 0x00001000 -#define SET_CRC4 0x00030000 -#define SET_L1_TRISTATE 0x00040000 -#define SET_WATCHDOG 0x00080000 -#define SET_NO_ORDER_CHECK 0x00100000 -#define SET_LOW_CHANNEL 0x00200000 -#define SET_NO_HSCX30 0x00400000 -#define SET_MODE 0x00800000 -#define SET_PROTO 0x02000000 -#define SET_CARDTYPE 0x04000000 -#define SET_NT2 0x08000000 -#define SET_PERMANENT 0x10000000 -#define SET_STABLEL2 0x20000000 -#define SET_TEI 0x40000000 -#define SET_NUMBERLEN 0x80000000 - unsigned long Flag; /* |31-Type-16|15-Mask-0| */ - unsigned long NumberLen; /* reconfiguration: union is empty */ - union { - struct { /* possible reconfiguration, but ... ; SET_BOARD */ - unsigned long SerialNumber; - char *pCardname; /* di_defs.h: BOARD_NAME_LENGTH */ - } board; - struct { /* reset: need resources */ - void *pRawResources; - void *pXlatResources; - } res; - struct { /* reconfiguration: wizProto == PROTTYPE_RBSCAS */ -#define GLARE_RESOLVE_MASK 0x00000001 -#define DID_MASK 0x00000002 -#define BEARER_CAP_MASK 0x0000000c -#define SET_GLARE_RESOLVE 0x00010000 -#define SET_DID 0x00020000 -#define SET_BEARER_CAP 0x000c0000 - unsigned long Flag; /* |31-Type-16|15-VALUE-0| */ - unsigned short DigitTimeout; - unsigned short AnswerDelay; - } rbs; - struct { /* reconfiguration: wizProto == PROTTYPE_QSIG */ -#define CALL_REF_LENGTH1_MASK 0x00000001 -#define BRI_CHANNEL_ID_MASK 0x00000002 -#define SET_CALL_REF_LENGTH 0x00010000 -#define SET_BRI_CHANNEL_ID 0x00020000 - unsigned long Flag; /* |31-Type-16|15-VALUE-0| */ - } qsig; - struct { /* reconfiguration: NumberLen != 0 */ -#define SET_SPID1 0x00010000 -#define SET_NUMBER1 0x00020000 -#define SET_SUBADDRESS1 0x00040000 -#define SET_SPID2 0x00100000 -#define SET_NUMBER2 0x00200000 -#define SET_SUBADDRESS2 0x00400000 -#define MASK_SET 0xffff0000 - unsigned long Flag; /* |31-Type-16|15-Channel-0| */ - unsigned char *pBuffer; /* number value */ - } isdnNo; - } - parms - ; -} isdnProps; -/* - * IDI_SYNC_REQ_PORTDRV_HOOK - signal plug/unplug (Award Cardware only) - */ -typedef void (*PORTDRV_HOOK_CB)(void *Context, int Plug); -typedef struct -{ /* 'Req' and 'Rc' must be at the same place as in the ENTITY struct */ - unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (is the request) */ - unsigned char Function; /* private function code */ - unsigned char Flags; /* function refinements */ - PORTDRV_HOOK_CB Callback; /* to be called on plug/unplug */ - void *Context; /* context for callback */ - unsigned long Info; /* more info if needed */ -} PORTDRV_HOOK; -/* Codes for the 'Rc' element in structure below. 
*/ -#define SLI_INSTALL (0xA1) -#define SLI_UNINSTALL (0xA2) -typedef int (*SLIENTRYPOINT)(void *p3SignalAPI, void *pContext); -typedef struct -{ /* 'Req' and 'Rc' must be at the same place as in the ENTITY struct */ - unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (is the request) */ - unsigned char Function; /* private function code */ - unsigned char Flags; /* function refinements */ - SLIENTRYPOINT Callback; /* to be called on plug/unplug */ - void *Context; /* context for callback */ - unsigned long Info; /* more info if needed */ -} SLIENTRYPOINT_REQ; -/******************************************************************************/ -/* - * Definitions for DIVA USB - */ -typedef int (*USB_SEND_REQ)(unsigned char PipeIndex, unsigned char Type, void *Data, int sizeData); -typedef int (*USB_START_DEV)(void *Adapter, void *Ipac); -/* called from WDM */ -typedef void (*USB_RECV_NOTIFY)(void *Ipac, void *msg); -typedef void (*USB_XMIT_NOTIFY)(void *Ipac, unsigned char PipeIndex); -/******************************************************************************/ -/* - * Parameter description for synchronous requests. - * - * Sorry, must repeat some parts of di_defs.h here because - * they are not defined for all operating environments - */ -typedef union -{ ENTITY Entity; - struct - { /* 'Req' and 'Rc' are at the same place as in the ENTITY struct */ - unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (is the request) */ - } Request; - struct - { unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (0x01) */ - unsigned char name[BOARD_NAME_LENGTH]; - } GetName; - struct - { unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (0x02) */ - unsigned long serial; /* serial number */ - } GetSerial; - struct - { unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (0x02) */ - unsigned long lineIdx;/* line, 0 if card has only one */ - } GetLineIdx; - struct - { unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (0x02) */ - unsigned long cardtype;/* card type */ - } GetCardType; - struct - { unsigned short command;/* command = 0x0300 */ - unsigned short dummy; /* not used */ - IDI_CALL callback;/* routine to call back */ - ENTITY *contxt; /* ptr to entity to use */ - } PostCall; - struct - { unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (0x04) */ - unsigned char pcm[1]; /* buffer (a pc_maint struct) */ - } GetXlog; - struct - { unsigned char Req; /* request (must be always 0) */ - unsigned char Rc; /* return code (0x05) */ - unsigned short features;/* feature defines see below */ - } GetFeatures; - SERIAL_HOOK SerialHook; -/* Added for DIVA USB */ - struct - { unsigned char Req; - unsigned char Rc; - USB_SEND_REQ UsbSendRequest; /* function in Diva Usb WDM driver in usb_os.c, */ - /* called from usb_drv.c to send a message to our device */ - /* eg UsbSendRequest (USB_PIPE_SIGNAL, USB_IPAC_START, 0, 0); */ - USB_RECV_NOTIFY usb_recv; /* called from usb_os.c to pass a received message and ptr to IPAC */ - /* on to usb_drv.c by a call to usb_recv(). */ - USB_XMIT_NOTIFY usb_xmit; /* called from usb_os.c in DivaUSB.sys WDM to indicate a completed transmit */ - /* to usb_drv.c by a call to usb_xmit(). 
*/ - USB_START_DEV UsbStartDevice; /* Start the USB Device, in usb_os.c */ - IDI_CALL callback; /* routine to call back */ - ENTITY *contxt; /* ptr to entity to use */ - void **ipac_ptr; /* pointer to struct IPAC in VxD */ - } Usb_Msg_old; -/* message used by WDM and VXD to pass pointers of function and IPAC* */ - struct - { unsigned char Req; - unsigned char Rc; - USB_SEND_REQ pUsbSendRequest;/* function in Diva Usb WDM driver in usb_os.c, */ - /* called from usb_drv.c to send a message to our device */ - /* eg UsbSendRequest (USB_PIPE_SIGNAL, USB_IPAC_START, 0, 0); */ - USB_RECV_NOTIFY p_usb_recv; /* called from usb_os.c to pass a received message and ptr to IPAC */ - /* on to usb_drv.c by a call to usb_recv(). */ - USB_XMIT_NOTIFY p_usb_xmit; /* called from usb_os.c in DivaUSB.sys WDM to indicate a completed transmit */ - /* to usb_drv.c by a call to usb_xmit().*/ - void *ipac_ptr; /* &Diva.ipac pointer to struct IPAC in VxD */ - } Usb_Msg; - PORTDRV_HOOK PortdrvHook; - SLIENTRYPOINT_REQ sliEntryPointReq; - struct { - unsigned char Req; - unsigned char Rc; - diva_xdi_stream_interface_t info; - } xdi_stream_info; - struct { - unsigned char Req; - unsigned char Rc; - diva_xdi_get_extended_xdi_features_t info; - } xdi_extended_features; - struct { - unsigned char Req; - unsigned char Rc; - diva_xdi_get_adapter_sdram_bar_t info; - } xdi_sdram_bar; - struct { - unsigned char Req; - unsigned char Rc; - diva_xdi_get_capi_parameters_t info; - } xdi_capi_prms; - struct { - ENTITY e; - diva_didd_adapter_notify_t info; - } didd_notify; - struct { - ENTITY e; - diva_didd_add_adapter_t info; - } didd_add_adapter; - struct { - ENTITY e; - diva_didd_remove_adapter_t info; - } didd_remove_adapter; - struct { - ENTITY e; - diva_didd_read_adapter_array_t info; - } didd_read_adapter_array; - struct { - ENTITY e; - diva_didd_get_cfg_lib_ifc_t info; - } didd_get_cfg_lib_ifc; - struct { - unsigned char Req; - unsigned char Rc; - diva_xdi_get_logical_adapter_number_s_t info; - } xdi_logical_adapter_number; - struct { - unsigned char Req; - unsigned char Rc; - diva_xdi_dma_descriptor_operation_t info; - } xdi_dma_descriptor_operation; -} IDI_SYNC_REQ; -/******************************************************************************/ -#endif /* __DIVA_SYNC__H */ diff --git a/drivers/isdn/hardware/eicon/dqueue.c b/drivers/isdn/hardware/eicon/dqueue.c deleted file mode 100644 index 7958a2536a10..000000000000 --- a/drivers/isdn/hardware/eicon/dqueue.c +++ /dev/null @@ -1,110 +0,0 @@ -/* $Id: dqueue.c,v 1.5 2003/04/12 21:40:49 schindler Exp $ - * - * Driver for Eicon DIVA Server ISDN cards. - * User Mode IDI Interface - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
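The IDI_SYNC_REQ union above is the whole synchronous-call convention in one place: a client picks the member matching the request, sets Req to 0 (which marks the call as a special synchronous function, see request() in io.c further down) and Rc to the request code, then invokes the adapter's request entry point. A minimal caller sketch, assuming a DESCRIPTOR *d already obtained from the DIDD adapter table; it mirrors what um_new_card() in idifunc.c below actually does:

	IDI_SYNC_REQ sync_req;

	sync_req.xdi_logical_adapter_number.Req = 0;	/* 0 = synchronous request */
	sync_req.xdi_logical_adapter_number.Rc =
		IDI_SYNC_REQ_XDI_GET_LOGICAL_ADAPTER_NUMBER;
	d->request((ENTITY *)&sync_req);
	/* on return, .info.logical_adapter_number carries the XDI-assigned number */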
- */ - -#include "platform.h" -#include "dqueue.h" - -int -diva_data_q_init(diva_um_idi_data_queue_t *q, - int max_length, int max_segments) -{ - int i; - - q->max_length = max_length; - q->segments = max_segments; - - for (i = 0; i < q->segments; i++) { - q->data[i] = NULL; - q->length[i] = 0; - } - q->read = q->write = q->count = q->segment_pending = 0; - - for (i = 0; i < q->segments; i++) { - if (!(q->data[i] = diva_os_malloc(0, q->max_length))) { - diva_data_q_finit(q); - return (-1); - } - } - - return (0); -} - -int diva_data_q_finit(diva_um_idi_data_queue_t *q) -{ - int i; - - for (i = 0; i < q->segments; i++) { - if (q->data[i]) { - diva_os_free(0, q->data[i]); - } - q->data[i] = NULL; - q->length[i] = 0; - } - q->read = q->write = q->count = q->segment_pending = 0; - - return (0); -} - -int diva_data_q_get_max_length(const diva_um_idi_data_queue_t *q) -{ - return (q->max_length); -} - -void *diva_data_q_get_segment4write(diva_um_idi_data_queue_t *q) -{ - if ((!q->segment_pending) && (q->count < q->segments)) { - q->segment_pending = 1; - return (q->data[q->write]); - } - - return NULL; -} - -void -diva_data_q_ack_segment4write(diva_um_idi_data_queue_t *q, int length) -{ - if (q->segment_pending) { - q->length[q->write] = length; - q->count++; - q->write++; - if (q->write >= q->segments) { - q->write = 0; - } - q->segment_pending = 0; - } -} - -const void *diva_data_q_get_segment4read(const diva_um_idi_data_queue_t * - q) -{ - if (q->count) { - return (q->data[q->read]); - } - return NULL; -} - -int diva_data_q_get_segment_length(const diva_um_idi_data_queue_t *q) -{ - return (q->length[q->read]); -} - -void diva_data_q_ack_segment4read(diva_um_idi_data_queue_t *q) -{ - if (q->count) { - q->length[q->read] = 0; - q->count--; - q->read++; - if (q->read >= q->segments) { - q->read = 0; - } - } -} diff --git a/drivers/isdn/hardware/eicon/dqueue.h b/drivers/isdn/hardware/eicon/dqueue.h deleted file mode 100644 index 2da9799686ab..000000000000 --- a/drivers/isdn/hardware/eicon/dqueue.h +++ /dev/null @@ -1,32 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: dqueue.h,v 1.1.2.2 2001/02/08 12:25:43 armin Exp $ */ - -#ifndef _DIVA_USER_MODE_IDI_DATA_QUEUE_H__ -#define _DIVA_USER_MODE_IDI_DATA_QUEUE_H__ - -#define DIVA_UM_IDI_MAX_MSGS 64 - -typedef struct _diva_um_idi_data_queue { - int segments; - int max_length; - int read; - int write; - int count; - int segment_pending; - void *data[DIVA_UM_IDI_MAX_MSGS]; - int length[DIVA_UM_IDI_MAX_MSGS]; -} diva_um_idi_data_queue_t; - -int diva_data_q_init(diva_um_idi_data_queue_t *q, - int max_length, int max_segments); -int diva_data_q_finit(diva_um_idi_data_queue_t *q); -int diva_data_q_get_max_length(const diva_um_idi_data_queue_t *q); -void *diva_data_q_get_segment4write(diva_um_idi_data_queue_t *q); -void diva_data_q_ack_segment4write(diva_um_idi_data_queue_t *q, - int length); -const void *diva_data_q_get_segment4read(const diva_um_idi_data_queue_t * - q); -int diva_data_q_get_segment_length(const diva_um_idi_data_queue_t *q); -void diva_data_q_ack_segment4read(diva_um_idi_data_queue_t *q); - -#endif diff --git a/drivers/isdn/hardware/eicon/dsp_defs.h b/drivers/isdn/hardware/eicon/dsp_defs.h deleted file mode 100644 index 94828c87e2a4..000000000000 --- a/drivers/isdn/hardware/eicon/dsp_defs.h +++ /dev/null @@ -1,301 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. 
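The queue implementation just deleted is a fixed ring of preallocated, equal-sized segments with a one-segment write window: diva_data_q_get_segment4write() lends out the current write slot and returns NULL until the producer commits it (segment_pending) with the real payload length, while the consumer reads committed segments in FIFO order and releases them one by one. A minimal round trip over exactly the functions above; dqueue_demo is a hypothetical wrapper and error handling is reduced to the init check:

	static void dqueue_demo(void)
	{
		diva_um_idi_data_queue_t q;
		void *seg;
		const void *data;

		if (diva_data_q_init(&q, 2048, 8))	/* 8 segments, 2048 bytes each */
			return;				/* segment allocation failed */

		if ((seg = diva_data_q_get_segment4write(&q)) != NULL) {
			memcpy(seg, "hello", 5);
			diva_data_q_ack_segment4write(&q, 5);	/* commit 5 bytes */
		}

		if ((data = diva_data_q_get_segment4read(&q)) != NULL) {
			int len = diva_data_q_get_segment_length(&q);
			/* ... consume len bytes at data ... */
			diva_data_q_ack_segment4read(&q);	/* free the slot for reuse */
		}

		diva_data_q_finit(&q);
	}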
- * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef DSP_DEFS_H_ -#define DSP_DEFS_H_ -#include "dspdids.h" -/*---------------------------------------------------------------------------*/ -#define dsp_download_reserve_space(fp, length) -/*****************************************************************************/ -/* - * OS file access abstraction layer - * - * I/O functions returns -1 on error, 0 on EOF - */ -struct _OsFileHandle_; -typedef long (*OsFileIo)(struct _OsFileHandle_ *handle, - void *buffer, - long size); -typedef long (*OsFileSeek)(struct _OsFileHandle_ *handle, - long position, - int mode); -typedef long (*OsCardLoad)(struct _OsFileHandle_ *handle, - long length, - void **addr); -typedef struct _OsFileHandle_ -{ void *sysFileDesc; - unsigned long sysFileSize; - OsFileIo sysFileRead; - OsFileSeek sysFileSeek; - void *sysLoadDesc; - OsCardLoad sysCardLoad; -} OsFileHandle; -extern OsFileHandle *OsOpenFile(char *path_name); -extern void OsCloseFile(OsFileHandle *fp); -/*****************************************************************************/ -#define DSP_TELINDUS_FILE "dspdload.bin" -/* special DSP file for BRI cards for Qsig and CornetN because of missing memory */ -#define DSP_QSIG_TELINDUS_FILE "dspdqsig.bin" -#define DSP_MDM_TELINDUS_FILE "dspdvmdm.bin" -#define DSP_FAX_TELINDUS_FILE "dspdvfax.bin" -#define DSP_DIRECTORY_ENTRIES 64 -#define DSP_MEMORY_TYPE_EXTERNAL_DM 0 -#define DSP_MEMORY_TYPE_EXTERNAL_PM 1 -#define DSP_MEMORY_TYPE_INTERNAL_DM 2 -#define DSP_MEMORY_TYPE_INTERNAL_PM 3 -#define DSP_DOWNLOAD_FLAG_BOOTABLE 0x0001 -#define DSP_DOWNLOAD_FLAG_2181 0x0002 -#define DSP_DOWNLOAD_FLAG_TIMECRITICAL 0x0004 -#define DSP_DOWNLOAD_FLAG_COMPAND 0x0008 -#define DSP_MEMORY_BLOCK_COUNT 16 -#define DSP_SEGMENT_PM_FLAG 0x0001 -#define DSP_SEGMENT_SHARED_FLAG 0x0002 -#define DSP_SEGMENT_EXTERNAL_DM DSP_MEMORY_TYPE_EXTERNAL_DM -#define DSP_SEGMENT_EXTERNAL_PM DSP_MEMORY_TYPE_EXTERNAL_PM -#define DSP_SEGMENT_INTERNAL_DM DSP_MEMORY_TYPE_INTERNAL_DM -#define DSP_SEGMENT_INTERNAL_PM DSP_MEMORY_TYPE_INTERNAL_PM -#define DSP_SEGMENT_FIRST_RELOCATABLE 4 -#define DSP_DATA_BLOCK_PM_FLAG 0x0001 -#define DSP_DATA_BLOCK_DWORD_FLAG 0x0002 -#define DSP_DATA_BLOCK_RESOLVE_FLAG 0x0004 -#define DSP_RELOC_NONE 0x00 -#define DSP_RELOC_SEGMENT_MASK 0x3f -#define DSP_RELOC_TYPE_MASK 0xc0 -#define DSP_RELOC_TYPE_0 0x00 /* relocation of address in DM word / high part of PM word */ -#define DSP_RELOC_TYPE_1 0x40 /* relocation of address in low part of PM data word */ -#define DSP_RELOC_TYPE_2 0x80 /* relocation of address in standard command */ -#define DSP_RELOC_TYPE_3 0xc0 /* relocation of address in call/jump on flag in */ -#define DSP_COMBIFILE_FORMAT_IDENTIFICATION_SIZE 48 -#define DSP_COMBIFILE_FORMAT_VERSION_BCD 0x0100 -#define DSP_FILE_FORMAT_IDENTIFICATION_SIZE 48 -#define DSP_FILE_FORMAT_VERSION_BCD 0x0100 
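The constants above, together with the structures defined just below, describe the on-disk layout of a DSP "combifile": a fixed identification/version header, a directory of (card type, file set) pairs, then per-download file headers. As an illustration of how the OsFileHandle abstraction and these definitions fit together, a sketch of reading and validating the top-level header; read_combifile_header is a hypothetical helper, and byte order plus short reads are glossed over:

	static int read_combifile_header(OsFileHandle *fp,
					 t_dsp_combifile_header *hdr)
	{
		/* sysFileRead() returns -1 on error, 0 on EOF */
		if (fp->sysFileRead(fp, hdr, sizeof(*hdr)) != (long)sizeof(*hdr))
			return -1;
		if (hdr->format_version_bcd != DSP_COMBIFILE_FORMAT_VERSION_BCD)
			return -1;	/* unknown combifile revision */
		return 0;
	}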
-typedef struct tag_dsp_combifile_header -{ - char format_identification[DSP_COMBIFILE_FORMAT_IDENTIFICATION_SIZE]; - word format_version_bcd; - word header_size; - word combifile_description_size; - word directory_entries; - word directory_size; - word download_count; - word usage_mask_size; -} t_dsp_combifile_header; -typedef struct tag_dsp_combifile_directory_entry -{ - word card_type_number; - word file_set_number; -} t_dsp_combifile_directory_entry; -typedef struct tag_dsp_file_header -{ - char format_identification[DSP_FILE_FORMAT_IDENTIFICATION_SIZE]; - word format_version_bcd; - word download_id; - word download_flags; - word required_processing_power; - word interface_channel_count; - word header_size; - word download_description_size; - word memory_block_table_size; - word memory_block_count; - word segment_table_size; - word segment_count; - word symbol_table_size; - word symbol_count; - word total_data_size_dm; - word data_block_count_dm; - word total_data_size_pm; - word data_block_count_pm; -} t_dsp_file_header; -typedef struct tag_dsp_memory_block_desc -{ - word alias_memory_block; - word memory_type; - word address; - word size; /* DSP words */ -} t_dsp_memory_block_desc; -typedef struct tag_dsp_segment_desc -{ - word memory_block; - word attributes; - word base; - word size; - word alignment; /* ==0 -> no other legal start address than base */ -} t_dsp_segment_desc; -typedef struct tag_dsp_symbol_desc -{ - word symbol_id; - word segment; - word offset; - word size; /* DSP words */ -} t_dsp_symbol_desc; -typedef struct tag_dsp_data_block_header -{ - word attributes; - word segment; - word offset; - word size; /* DSP words */ -} t_dsp_data_block_header; -typedef struct tag_dsp_download_desc -{ - word download_id; - word download_flags; - word required_processing_power; - word interface_channel_count; - word excess_header_size; - word memory_block_count; - word segment_count; - word symbol_count; - word data_block_count_dm; - word data_block_count_pm; - byte *p_excess_header_data; - char *p_download_description; - t_dsp_memory_block_desc *p_memory_block_table; - t_dsp_segment_desc *p_segment_table; - t_dsp_symbol_desc *p_symbol_table; - word *p_data_blocks_dm; - word *p_data_blocks_pm; -} t_dsp_desc; -typedef struct tag_dsp_portable_download_desc /* be sure to keep native alignment for MAESTRA's */ -{ - word download_id; - word download_flags; - word required_processing_power; - word interface_channel_count; - word excess_header_size; - word memory_block_count; - word segment_count; - word symbol_count; - word data_block_count_dm; - word data_block_count_pm; - dword p_excess_header_data; - dword p_download_description; - dword p_memory_block_table; - dword p_segment_table; - dword p_symbol_table; - dword p_data_blocks_dm; - dword p_data_blocks_pm; -} t_dsp_portable_desc; -#define DSP_DOWNLOAD_INDEX_KERNEL 0 -#define DSP30TX_DOWNLOAD_INDEX_KERNEL 1 -#define DSP30RX_DOWNLOAD_INDEX_KERNEL 2 -#define DSP_MAX_DOWNLOAD_COUNT 64 -#define DSP_DOWNLOAD_MAX_SEGMENTS 16 -#define DSP_UDATA_REQUEST_RECONFIGURE 0 -/* - parameters: - reconfigure delay (in 8kHz samples) - reconfigure code - reconfigure hdlc preamble flags -*/ -#define DSP_RECONFIGURE_TX_FLAG 0x8000 -#define DSP_RECONFIGURE_SHORT_TRAIN_FLAG 0x4000 -#define DSP_RECONFIGURE_ECHO_PROTECT_FLAG 0x2000 -#define DSP_RECONFIGURE_HDLC_FLAG 0x1000 -#define DSP_RECONFIGURE_SYNC_FLAG 0x0800 -#define DSP_RECONFIGURE_PROTOCOL_MASK 0x00ff -#define DSP_RECONFIGURE_IDLE 0 -#define DSP_RECONFIGURE_V25 1 -#define DSP_RECONFIGURE_V21_CH2 2 
-#define DSP_RECONFIGURE_V27_2400 3 -#define DSP_RECONFIGURE_V27_4800 4 -#define DSP_RECONFIGURE_V29_7200 5 -#define DSP_RECONFIGURE_V29_9600 6 -#define DSP_RECONFIGURE_V33_12000 7 -#define DSP_RECONFIGURE_V33_14400 8 -#define DSP_RECONFIGURE_V17_7200 9 -#define DSP_RECONFIGURE_V17_9600 10 -#define DSP_RECONFIGURE_V17_12000 11 -#define DSP_RECONFIGURE_V17_14400 12 -/* - data indications if transparent framer - data 0 - data 1 - ... - data indications if HDLC framer - data 0 - data 1 - ... - CRC 0 - CRC 1 - preamble flags -*/ -#define DSP_UDATA_INDICATION_SYNC 0 -/* - returns: - time of sync (sampled from counter at 8kHz) -*/ -#define DSP_UDATA_INDICATION_DCD_OFF 1 -/* - returns: - time of DCD off (sampled from counter at 8kHz) -*/ -#define DSP_UDATA_INDICATION_DCD_ON 2 -/* - returns: - time of DCD on (sampled from counter at 8kHz) - connected norm - connected options - connected speed (bit/s) -*/ -#define DSP_UDATA_INDICATION_CTS_OFF 3 -/* - returns: - time of CTS off (sampled from counter at 8kHz) -*/ -#define DSP_UDATA_INDICATION_CTS_ON 4 -/* - returns: - time of CTS on (sampled from counter at 8kHz) - connected norm - connected options - connected speed (bit/s) -*/ -#define DSP_CONNECTED_NORM_UNSPECIFIED 0 -#define DSP_CONNECTED_NORM_V21 1 -#define DSP_CONNECTED_NORM_V23 2 -#define DSP_CONNECTED_NORM_V22 3 -#define DSP_CONNECTED_NORM_V22_BIS 4 -#define DSP_CONNECTED_NORM_V32_BIS 5 -#define DSP_CONNECTED_NORM_V34 6 -#define DSP_CONNECTED_NORM_V8 7 -#define DSP_CONNECTED_NORM_BELL_212A 8 -#define DSP_CONNECTED_NORM_BELL_103 9 -#define DSP_CONNECTED_NORM_V29_LEASED_LINE 10 -#define DSP_CONNECTED_NORM_V33_LEASED_LINE 11 -#define DSP_CONNECTED_NORM_TFAST 12 -#define DSP_CONNECTED_NORM_V21_CH2 13 -#define DSP_CONNECTED_NORM_V27_TER 14 -#define DSP_CONNECTED_NORM_V29 15 -#define DSP_CONNECTED_NORM_V33 16 -#define DSP_CONNECTED_NORM_V17 17 -#define DSP_CONNECTED_OPTION_TRELLIS 0x0001 -/*---------------------------------------------------------------------------*/ -extern char *dsp_read_file(OsFileHandle *fp, - word card_type_number, - word *p_dsp_download_count, - t_dsp_desc *p_dsp_download_table, - t_dsp_portable_desc *p_dsp_portable_download_table); -/*---------------------------------------------------------------------------*/ -#endif /* DSP_DEFS_H_ */ diff --git a/drivers/isdn/hardware/eicon/dsp_tst.h b/drivers/isdn/hardware/eicon/dsp_tst.h deleted file mode 100644 index 85edd3ea50f7..000000000000 --- a/drivers/isdn/hardware/eicon/dsp_tst.h +++ /dev/null @@ -1,48 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: dsp_tst.h,v 1.1.2.2 2001/02/08 12:25:43 armin Exp $ */ - -#ifndef __DIVA_PRI_HOST_TEST_DSPS_H__ -#define __DIVA_PRI_HOST_TEST_DSPS_H__ - -/* - DSP registers on maestra pri -*/ -#define DSP1_PORT (0x00) -#define DSP2_PORT (0x8) -#define DSP3_PORT (0x800) -#define DSP4_PORT (0x808) -#define DSP5_PORT (0x810) -#define DSP6_PORT (0x818) -#define DSP7_PORT (0x820) -#define DSP8_PORT (0x828) -#define DSP9_PORT (0x830) -#define DSP10_PORT (0x840) -#define DSP11_PORT (0x848) -#define DSP12_PORT (0x850) -#define DSP13_PORT (0x858) -#define DSP14_PORT (0x860) -#define DSP15_PORT (0x868) -#define DSP16_PORT (0x870) -#define DSP17_PORT (0x1000) -#define DSP18_PORT (0x1008) -#define DSP19_PORT (0x1010) -#define DSP20_PORT (0x1018) -#define DSP21_PORT (0x1020) -#define DSP22_PORT (0x1028) -#define DSP23_PORT (0x1030) -#define DSP24_PORT (0x1040) -#define DSP25_PORT (0x1048) -#define DSP26_PORT (0x1050) -#define DSP27_PORT (0x1058) -#define DSP28_PORT (0x1060) -#define DSP29_PORT (0x1068) 
-#define DSP30_PORT (0x1070) -#define DSP_ADR_OFFS 0x80 - -/*------------------------------------------------------------------ - Dsp related definitions - ------------------------------------------------------------------ */ -#define DSP_SIGNATURE_PROBE_WORD 0x5a5a -#define dsp_make_address_ex(pm, address) ((word)((pm) ? (address) : (address) + 0x4000)) - -#endif diff --git a/drivers/isdn/hardware/eicon/dspdids.h b/drivers/isdn/hardware/eicon/dspdids.h deleted file mode 100644 index 957b33cc0022..000000000000 --- a/drivers/isdn/hardware/eicon/dspdids.h +++ /dev/null @@ -1,75 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef DSPDIDS_H_ -#define DSPDIDS_H_ -/*---------------------------------------------------------------------------*/ -#define DSP_DID_INVALID 0 -#define DSP_DID_DIVA 1 -#define DSP_DID_DIVA_PRO 2 -#define DSP_DID_DIVA_PRO_20 3 -#define DSP_DID_DIVA_PRO_PCCARD 4 -#define DSP_DID_DIVA_SERVER_BRI_1M 5 -#define DSP_DID_DIVA_SERVER_BRI_2M 6 -#define DSP_DID_DIVA_SERVER_PRI_2M_TX 7 -#define DSP_DID_DIVA_SERVER_PRI_2M_RX 8 -#define DSP_DID_DIVA_SERVER_PRI_30M 9 -#define DSP_DID_TASK_HSCX 100 -#define DSP_DID_TASK_HSCX_PRI_2M_TX 101 -#define DSP_DID_TASK_HSCX_PRI_2M_RX 102 -#define DSP_DID_TASK_V110KRNL 200 -#define DSP_DID_OVERLAY_V1100 201 -#define DSP_DID_OVERLAY_V1101 202 -#define DSP_DID_OVERLAY_V1102 203 -#define DSP_DID_OVERLAY_V1103 204 -#define DSP_DID_OVERLAY_V1104 205 -#define DSP_DID_OVERLAY_V1105 206 -#define DSP_DID_OVERLAY_V1106 207 -#define DSP_DID_OVERLAY_V1107 208 -#define DSP_DID_OVERLAY_V1108 209 -#define DSP_DID_OVERLAY_V1109 210 -#define DSP_DID_TASK_V110_PRI_2M_TX 220 -#define DSP_DID_TASK_V110_PRI_2M_RX 221 -#define DSP_DID_TASK_MODEM 300 -#define DSP_DID_TASK_FAX05 400 -#define DSP_DID_TASK_VOICE 500 -#define DSP_DID_TASK_TIKRNL81 600 -#define DSP_DID_OVERLAY_DIAL 601 -#define DSP_DID_OVERLAY_V22 602 -#define DSP_DID_OVERLAY_V32 603 -#define DSP_DID_OVERLAY_FSK 604 -#define DSP_DID_OVERLAY_FAX 605 -#define DSP_DID_OVERLAY_VXX 606 -#define DSP_DID_OVERLAY_V8 607 -#define DSP_DID_OVERLAY_INFO 608 -#define DSP_DID_OVERLAY_V34 609 -#define DSP_DID_OVERLAY_DFX 610 -#define DSP_DID_PARTIAL_OVERLAY_DIAL 611 -#define DSP_DID_PARTIAL_OVERLAY_FSK 612 -#define DSP_DID_PARTIAL_OVERLAY_FAX 613 -#define DSP_DID_TASK_TIKRNL05 700 -/*---------------------------------------------------------------------------*/ -#endif -/*---------------------------------------------------------------------------*/ diff --git a/drivers/isdn/hardware/eicon/dsrv4bri.h b/drivers/isdn/hardware/eicon/dsrv4bri.h deleted file mode 100644 index f353fb6b8933..000000000000 --- a/drivers/isdn/hardware/eicon/dsrv4bri.h +++ /dev/null @@ -1,40 +0,0 @@ - -/* - * - 
Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_XDI_DSRV_4_BRI_INC__ -#define __DIVA_XDI_DSRV_4_BRI_INC__ -/* - * Some special registers in the PLX 9054 - */ -#define PLX9054_P2LDBELL 0x60 -#define PLX9054_L2PDBELL 0x64 -#define PLX9054_INTCSR 0x69 -#define PLX9054_INT_ENABLE 0x09 -#define PLX9054_SOFT_RESET 0x4000 -#define PLX9054_RELOAD_EEPROM 0x2000 -#define DIVA_4BRI_REVISION(__x__) (((__x__)->cardType == CARDTYPE_DIVASRV_Q_8M_V2_PCI) || ((__x__)->cardType == CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI) || ((__x__)->cardType == CARDTYPE_DIVASRV_B_2M_V2_PCI) || ((__x__)->cardType == CARDTYPE_DIVASRV_B_2F_PCI) || ((__x__)->cardType == CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI)) -void diva_os_set_qBri_functions(PISDN_ADAPTER IoAdapter); -void diva_os_set_qBri2_functions(PISDN_ADAPTER IoAdapter); -#endif diff --git a/drivers/isdn/hardware/eicon/dsrv_bri.h b/drivers/isdn/hardware/eicon/dsrv_bri.h deleted file mode 100644 index 8a67dbc65be4..000000000000 --- a/drivers/isdn/hardware/eicon/dsrv_bri.h +++ /dev/null @@ -1,37 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_XDI_DSRV_BRI_INC__ -#define __DIVA_XDI_DSRV_BRI_INC__ -/* - Functions exported from os dependent part of - BRI card configuration and used in - OS independed part -*/ -/* - Prepare OS dependent part of BRI functions -*/ -void diva_os_prepare_maestra_functions(PISDN_ADAPTER IoAdapter); -#endif diff --git a/drivers/isdn/hardware/eicon/dsrv_pri.h b/drivers/isdn/hardware/eicon/dsrv_pri.h deleted file mode 100644 index fd1a9ff9f195..000000000000 --- a/drivers/isdn/hardware/eicon/dsrv_pri.h +++ /dev/null @@ -1,38 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. 
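The DIVA_4BRI_REVISION() macro above folds the whole list of second-generation 4BRI/BRI card types into a single predicate on cardType. A sketch of the kind of branch it enables, given a PISDN_ADAPTER IoAdapter; whether the OS glue selects between exactly these two entry points on this predicate is an assumption here, not something the header states:

	if (DIVA_4BRI_REVISION(IoAdapter))
		diva_os_set_qBri2_functions(IoAdapter);	/* V2 silicon */
	else
		diva_os_set_qBri_functions(IoAdapter);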
- * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_XDI_DSRV_PRI_INC__ -#define __DIVA_XDI_DSRV_PRI_INC__ -/* - Functions exported from os dependent part of - PRI card configuration and used in - OS independed part -*/ -/* - Prepare OS dependent part of PRI/PRI Rev.2 functions -*/ -void diva_os_prepare_pri_functions(PISDN_ADAPTER IoAdapter); -void diva_os_prepare_pri2_functions(PISDN_ADAPTER IoAdapter); -#endif diff --git a/drivers/isdn/hardware/eicon/entity.h b/drivers/isdn/hardware/eicon/entity.h deleted file mode 100644 index f9767d321db9..000000000000 --- a/drivers/isdn/hardware/eicon/entity.h +++ /dev/null @@ -1,29 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: entity.h,v 1.4 2004/03/21 17:26:01 armin Exp $ */ - -#ifndef __DIVAS_USER_MODE_IDI_ENTITY__ -#define __DIVAS_USER_MODE_IDI_ENTITY__ - -#define DIVA_UM_IDI_RC_PENDING 0x00000001 -#define DIVA_UM_IDI_REMOVE_PENDING 0x00000002 -#define DIVA_UM_IDI_TX_FLOW_CONTROL 0x00000004 -#define DIVA_UM_IDI_REMOVED 0x00000008 -#define DIVA_UM_IDI_ASSIGN_PENDING 0x00000010 - -typedef struct _divas_um_idi_entity { - struct list_head link; - diva_um_idi_adapter_t *adapter; /* Back to adapter */ - ENTITY e; - void *os_ref; - dword status; - void *os_context; - int rc_count; - diva_um_idi_data_queue_t data; /* definad by user 1 ... MAX */ - diva_um_idi_data_queue_t rc; /* two entries */ - BUFFERS XData; - BUFFERS RData; - byte buffer[2048 + 512]; -} divas_um_idi_entity_t; - - -#endif diff --git a/drivers/isdn/hardware/eicon/helpers.h b/drivers/isdn/hardware/eicon/helpers.h deleted file mode 100644 index c9156b0acaba..000000000000 --- a/drivers/isdn/hardware/eicon/helpers.h +++ /dev/null @@ -1,51 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef __DIVA_XDI_CARD_CONFIG_HELPERS_INC__ -#define __DIVA_XDI_CARD_CONFIG_HELPERS_INC__ -dword diva_get_protocol_file_features(byte *File, - int offset, - char *IdStringBuffer, - dword IdBufferSize); -void diva_configure_protocol(PISDN_ADAPTER IoAdapter); -/* - Low level file access system abstraction -*/ -/* ------------------------------------------------------------------------- - Access to single file - Return pointer to the image of the requested file, - write image length to 'FileLength' - ------------------------------------------------------------------------- */ -void *xdiLoadFile(char *FileName, dword *FileLength, unsigned long MaxLoadSize); -/* ------------------------------------------------------------------------- - Dependent on the protocol settings does read return pointer - to the image of appropriate protocol file - ------------------------------------------------------------------------- */ -void *xdiLoadArchive(PISDN_ADAPTER IoAdapter, dword *FileLength, unsigned long MaxLoadSize); -/* -------------------------------------------------------------------------- - Free all system resources accessed by xdiLoadFile and xdiLoadArchive - -------------------------------------------------------------------------- */ -void xdiFreeFile(void *handle); -#endif diff --git a/drivers/isdn/hardware/eicon/idifunc.c b/drivers/isdn/hardware/eicon/idifunc.c deleted file mode 100644 index fef6586fe5ac..000000000000 --- a/drivers/isdn/hardware/eicon/idifunc.c +++ /dev/null @@ -1,268 +0,0 @@ -/* $Id: idifunc.c,v 1.14.4.4 2004/08/28 20:03:53 armin Exp $ - * - * Driver for Eicon DIVA Server ISDN cards. - * User Mode IDI Interface - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. - */ - -#include "platform.h" -#include "di_defs.h" -#include "divasync.h" -#include "um_xdi.h" -#include "um_idi.h" - -#define DBG_MINIMUM (DL_LOG + DL_FTL + DL_ERR) -#define DBG_DEFAULT (DBG_MINIMUM + DL_XLOG + DL_REG) - -extern char *DRIVERRELEASE_IDI; - -extern void DIVA_DIDD_Read(void *, int); -extern int diva_user_mode_idi_create_adapter(const DESCRIPTOR *, int); -extern void diva_user_mode_idi_remove_adapter(int); - -static dword notify_handle; -static DESCRIPTOR DAdapter; -static DESCRIPTOR MAdapter; - -static void no_printf(unsigned char *x, ...) 
-{ - /* dummy debug function */ -} - -#include "debuglib.c" - -/* - * stop debug - */ -static void stop_dbg(void) -{ - DbgDeregister(); - memset(&MAdapter, 0, sizeof(MAdapter)); - dprintf = no_printf; -} - -typedef struct _udiva_card { - struct list_head list; - int Id; - DESCRIPTOR d; -} udiva_card; - -static LIST_HEAD(cards); -static diva_os_spin_lock_t ll_lock; - -/* - * find card in list - */ -static udiva_card *find_card_in_list(DESCRIPTOR *d) -{ - udiva_card *card; - struct list_head *tmp; - diva_os_spin_lock_magic_t old_irql; - - diva_os_enter_spin_lock(&ll_lock, &old_irql, "find card"); - list_for_each(tmp, &cards) { - card = list_entry(tmp, udiva_card, list); - if (card->d.request == d->request) { - diva_os_leave_spin_lock(&ll_lock, &old_irql, - "find card"); - return (card); - } - } - diva_os_leave_spin_lock(&ll_lock, &old_irql, "find card"); - return ((udiva_card *) NULL); -} - -/* - * new card - */ -static void um_new_card(DESCRIPTOR *d) -{ - int adapter_nr = 0; - udiva_card *card = NULL; - IDI_SYNC_REQ sync_req; - diva_os_spin_lock_magic_t old_irql; - - if (!(card = diva_os_malloc(0, sizeof(udiva_card)))) { - DBG_ERR(("cannot get buffer for card")); - return; - } - memcpy(&card->d, d, sizeof(DESCRIPTOR)); - sync_req.xdi_logical_adapter_number.Req = 0; - sync_req.xdi_logical_adapter_number.Rc = - IDI_SYNC_REQ_XDI_GET_LOGICAL_ADAPTER_NUMBER; - card->d.request((ENTITY *)&sync_req); - adapter_nr = - sync_req.xdi_logical_adapter_number.info.logical_adapter_number; - card->Id = adapter_nr; - if (!(diva_user_mode_idi_create_adapter(d, adapter_nr))) { - diva_os_enter_spin_lock(&ll_lock, &old_irql, "add card"); - list_add_tail(&card->list, &cards); - diva_os_leave_spin_lock(&ll_lock, &old_irql, "add card"); - } else { - DBG_ERR(("could not create user mode idi card %d", - adapter_nr)); - diva_os_free(0, card); - } -} - -/* - * remove card - */ -static void um_remove_card(DESCRIPTOR *d) -{ - diva_os_spin_lock_magic_t old_irql; - udiva_card *card = NULL; - - if (!(card = find_card_in_list(d))) { - DBG_ERR(("cannot find card to remove")); - return; - } - diva_user_mode_idi_remove_adapter(card->Id); - diva_os_enter_spin_lock(&ll_lock, &old_irql, "remove card"); - list_del(&card->list); - diva_os_leave_spin_lock(&ll_lock, &old_irql, "remove card"); - DBG_LOG(("idi proc entry removed for card %d", card->Id)); - diva_os_free(0, card); -} - -/* - * remove all adapter - */ -static void __exit remove_all_idi_proc(void) -{ - udiva_card *card; - diva_os_spin_lock_magic_t old_irql; - -rescan: - diva_os_enter_spin_lock(&ll_lock, &old_irql, "remove all"); - if (!list_empty(&cards)) { - card = list_entry(cards.next, udiva_card, list); - list_del(&card->list); - diva_os_leave_spin_lock(&ll_lock, &old_irql, "remove all"); - diva_user_mode_idi_remove_adapter(card->Id); - diva_os_free(0, card); - goto rescan; - } - diva_os_leave_spin_lock(&ll_lock, &old_irql, "remove all"); -} - -/* - * DIDD notify callback - */ -static void *didd_callback(void *context, DESCRIPTOR *adapter, - int removal) -{ - if (adapter->type == IDI_DADAPTER) { - DBG_ERR(("Notification about IDI_DADAPTER change ! 
Oops.")); - return (NULL); - } else if (adapter->type == IDI_DIMAINT) { - if (removal) { - stop_dbg(); - } else { - memcpy(&MAdapter, adapter, sizeof(MAdapter)); - dprintf = (DIVA_DI_PRINTF) MAdapter.request; - DbgRegister("User IDI", DRIVERRELEASE_IDI, DBG_DEFAULT); - } - } else if ((adapter->type > 0) && (adapter->type < 16)) { /* IDI Adapter */ - if (removal) { - um_remove_card(adapter); - } else { - um_new_card(adapter); - } - } - return (NULL); -} - -/* - * connect DIDD - */ -static int __init connect_didd(void) -{ - int x = 0; - int dadapter = 0; - IDI_SYNC_REQ req; - DESCRIPTOR DIDD_Table[MAX_DESCRIPTORS]; - - DIVA_DIDD_Read(DIDD_Table, sizeof(DIDD_Table)); - - for (x = 0; x < MAX_DESCRIPTORS; x++) { - if (DIDD_Table[x].type == IDI_DADAPTER) { /* DADAPTER found */ - dadapter = 1; - memcpy(&DAdapter, &DIDD_Table[x], sizeof(DAdapter)); - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = - IDI_SYNC_REQ_DIDD_REGISTER_ADAPTER_NOTIFY; - req.didd_notify.info.callback = (void *)didd_callback; - req.didd_notify.info.context = NULL; - DAdapter.request((ENTITY *)&req); - if (req.didd_notify.e.Rc != 0xff) { - stop_dbg(); - return (0); - } - notify_handle = req.didd_notify.info.handle; - } else if (DIDD_Table[x].type == IDI_DIMAINT) { /* MAINT found */ - memcpy(&MAdapter, &DIDD_Table[x], sizeof(DAdapter)); - dprintf = (DIVA_DI_PRINTF) MAdapter.request; - DbgRegister("User IDI", DRIVERRELEASE_IDI, DBG_DEFAULT); - } else if ((DIDD_Table[x].type > 0) - && (DIDD_Table[x].type < 16)) { /* IDI Adapter found */ - um_new_card(&DIDD_Table[x]); - } - } - - if (!dadapter) { - stop_dbg(); - } - - return (dadapter); -} - -/* - * Disconnect from DIDD - */ -static void __exit disconnect_didd(void) -{ - IDI_SYNC_REQ req; - - stop_dbg(); - - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER_NOTIFY; - req.didd_notify.info.handle = notify_handle; - DAdapter.request((ENTITY *)&req); -} - -/* - * init - */ -int __init idifunc_init(void) -{ - diva_os_initialize_spin_lock(&ll_lock, "idifunc"); - - if (diva_user_mode_idi_init()) { - DBG_ERR(("init: init failed.")); - return (0); - } - - if (!connect_didd()) { - diva_user_mode_idi_finit(); - DBG_ERR(("init: failed to connect to DIDD.")); - return (0); - } - return (1); -} - -/* - * finit - */ -void __exit idifunc_finit(void) -{ - diva_user_mode_idi_finit(); - disconnect_didd(); - remove_all_idi_proc(); -} diff --git a/drivers/isdn/hardware/eicon/io.c b/drivers/isdn/hardware/eicon/io.c deleted file mode 100644 index 8851ce580c23..000000000000 --- a/drivers/isdn/hardware/eicon/io.c +++ /dev/null @@ -1,852 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#include "platform.h" -#include "di_defs.h" -#include "pc.h" -#include "pr_pc.h" -#include "divasync.h" -#define MIPS_SCOM -#include "pkmaint.h" /* pc_main.h, packed in os-dependent fashion */ -#include "di.h" -#include "mi_pc.h" -#include "io.h" -extern ADAPTER *adapter[MAX_ADAPTER]; -extern PISDN_ADAPTER IoAdapters[MAX_ADAPTER]; -void request(PISDN_ADAPTER, ENTITY *); -static void pcm_req(PISDN_ADAPTER, ENTITY *); -/* -------------------------------------------------------------------------- - local functions - -------------------------------------------------------------------------- */ -#define ReqFunc(N) \ - static void Request##N(ENTITY *e) \ - { if (IoAdapters[N]) (*IoAdapters[N]->DIRequest)(IoAdapters[N], e); } -ReqFunc(0) -ReqFunc(1) -ReqFunc(2) -ReqFunc(3) -ReqFunc(4) -ReqFunc(5) -ReqFunc(6) -ReqFunc(7) -ReqFunc(8) -ReqFunc(9) -ReqFunc(10) -ReqFunc(11) -ReqFunc(12) -ReqFunc(13) -ReqFunc(14) -ReqFunc(15) -IDI_CALL Requests[MAX_ADAPTER] = -{ &Request0, &Request1, &Request2, &Request3, - &Request4, &Request5, &Request6, &Request7, - &Request8, &Request9, &Request10, &Request11, - &Request12, &Request13, &Request14, &Request15 -}; -/*****************************************************************************/ -/* - This array should indicate all new services, that this version of XDI - is able to provide to his clients -*/ -static byte extended_xdi_features[DIVA_XDI_EXTENDED_FEATURES_MAX_SZ + 1] = { - (DIVA_XDI_EXTENDED_FEATURES_VALID | - DIVA_XDI_EXTENDED_FEATURE_SDRAM_BAR | - DIVA_XDI_EXTENDED_FEATURE_CAPI_PRMS | -#if defined(DIVA_IDI_RX_DMA) - DIVA_XDI_EXTENDED_FEATURE_CMA | - DIVA_XDI_EXTENDED_FEATURE_RX_DMA | - DIVA_XDI_EXTENDED_FEATURE_MANAGEMENT_DMA | -#endif - DIVA_XDI_EXTENDED_FEATURE_NO_CANCEL_RC), - 0 -}; -/*****************************************************************************/ -void -dump_xlog_buffer(PISDN_ADAPTER IoAdapter, Xdesc *xlogDesc) -{ - dword logLen; - word *Xlog = xlogDesc->buf; - word logCnt = xlogDesc->cnt; - word logOut = xlogDesc->out / sizeof(*Xlog); - DBG_FTL(("%s: ************* XLOG recovery (%d) *************", - &IoAdapter->Name[0], (int)logCnt)) - DBG_FTL(("Microcode: %s", &IoAdapter->ProtocolIdString[0])) - for (; logCnt > 0; --logCnt) - { - if (!GET_WORD(&Xlog[logOut])) - { - if (--logCnt == 0) - break; - logOut = 0; - } - if (GET_WORD(&Xlog[logOut]) <= (logOut * sizeof(*Xlog))) - { - if (logCnt > 2) - { - DBG_FTL(("Possibly corrupted XLOG: %d entries left", - (int)logCnt)) - } - break; - } - logLen = (dword)(GET_WORD(&Xlog[logOut]) - (logOut * sizeof(*Xlog))); - DBG_FTL_MXLOG(((char *)&Xlog[logOut + 1], (dword)(logLen - 2))) - logOut = (GET_WORD(&Xlog[logOut]) + 1) / sizeof(*Xlog); - } - DBG_FTL(("%s: ***************** end of XLOG *****************", - &IoAdapter->Name[0])) - } -/*****************************************************************************/ -#if defined(XDI_USE_XLOG) -static char *(ExceptionCauseTable[]) = -{ - "Interrupt", - "TLB mod /IBOUND", - "TLB load /DBOUND", - "TLB store", - "Address error load", - "Address error store", - "Instruction load bus error", - "Data load/store bus error", - "Syscall", - "Breakpoint", - "Reverd instruction", - "Coprocessor unusable", - "Overflow", - "TRAP", - "VCEI", - "Floating Point Exception", - "CP2", - "Reserved 17", - "Reserved 18", - "Reserved 19", - "Reserved 20", - "Reserved 21", - "Reserved 22", - "WATCH", - "Reserved 24", - "Reserved 25", - "Reserved 26", - "Reserved 27", - "Reserved 28", - "Reserved 29", - "Reserved 30", - "VCED" -}; -#endif -void 
-dump_trap_frame(PISDN_ADAPTER IoAdapter, byte __iomem *exceptionFrame) -{ - MP_XCPTC __iomem *xcept = (MP_XCPTC __iomem *)exceptionFrame; - dword __iomem *regs; - regs = &xcept->regs[0]; - DBG_FTL(("%s: ***************** CPU TRAPPED *****************", - &IoAdapter->Name[0])) - DBG_FTL(("Microcode: %s", &IoAdapter->ProtocolIdString[0])) - DBG_FTL(("Cause: %s", - ExceptionCauseTable[(READ_DWORD(&xcept->cr) & 0x0000007c) >> 2])) - DBG_FTL(("sr 0x%08x cr 0x%08x epc 0x%08x vaddr 0x%08x", - READ_DWORD(&xcept->sr), READ_DWORD(&xcept->cr), - READ_DWORD(&xcept->epc), READ_DWORD(&xcept->vaddr))) - DBG_FTL(("zero 0x%08x at 0x%08x v0 0x%08x v1 0x%08x", - READ_DWORD(®s[0]), READ_DWORD(®s[1]), - READ_DWORD(®s[2]), READ_DWORD(®s[3]))) - DBG_FTL(("a0 0x%08x a1 0x%08x a2 0x%08x a3 0x%08x", - READ_DWORD(®s[4]), READ_DWORD(®s[5]), - READ_DWORD(®s[6]), READ_DWORD(®s[7]))) - DBG_FTL(("t0 0x%08x t1 0x%08x t2 0x%08x t3 0x%08x", - READ_DWORD(®s[8]), READ_DWORD(®s[9]), - READ_DWORD(®s[10]), READ_DWORD(®s[11]))) - DBG_FTL(("t4 0x%08x t5 0x%08x t6 0x%08x t7 0x%08x", - READ_DWORD(®s[12]), READ_DWORD(®s[13]), - READ_DWORD(®s[14]), READ_DWORD(®s[15]))) - DBG_FTL(("s0 0x%08x s1 0x%08x s2 0x%08x s3 0x%08x", - READ_DWORD(®s[16]), READ_DWORD(®s[17]), - READ_DWORD(®s[18]), READ_DWORD(®s[19]))) - DBG_FTL(("s4 0x%08x s5 0x%08x s6 0x%08x s7 0x%08x", - READ_DWORD(®s[20]), READ_DWORD(®s[21]), - READ_DWORD(®s[22]), READ_DWORD(®s[23]))) - DBG_FTL(("t8 0x%08x t9 0x%08x k0 0x%08x k1 0x%08x", - READ_DWORD(®s[24]), READ_DWORD(®s[25]), - READ_DWORD(®s[26]), READ_DWORD(®s[27]))) - DBG_FTL(("gp 0x%08x sp 0x%08x s8 0x%08x ra 0x%08x", - READ_DWORD(®s[28]), READ_DWORD(®s[29]), - READ_DWORD(®s[30]), READ_DWORD(®s[31]))) - DBG_FTL(("md 0x%08x|%08x resvd 0x%08x class 0x%08x", - READ_DWORD(&xcept->mdhi), READ_DWORD(&xcept->mdlo), - READ_DWORD(&xcept->reseverd), READ_DWORD(&xcept->xclass))) - } -/* -------------------------------------------------------------------------- - Real XDI Request function - -------------------------------------------------------------------------- */ -void request(PISDN_ADAPTER IoAdapter, ENTITY *e) -{ - byte i; - diva_os_spin_lock_magic_t irql; -/* - * if the Req field in the entity structure is 0, - * we treat this request as a special function call - */ - if (!e->Req) - { - IDI_SYNC_REQ *syncReq = (IDI_SYNC_REQ *)e; - switch (e->Rc) - { -#if defined(DIVA_IDI_RX_DMA) - case IDI_SYNC_REQ_DMA_DESCRIPTOR_OPERATION: { - diva_xdi_dma_descriptor_operation_t *pI = \ - &syncReq->xdi_dma_descriptor_operation.info; - if (!IoAdapter->dma_map) { - pI->operation = -1; - pI->descriptor_number = -1; - return; - } - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "dma_op"); - if (pI->operation == IDI_SYNC_REQ_DMA_DESCRIPTOR_ALLOC) { - pI->descriptor_number = diva_alloc_dma_map_entry(\ - (struct _diva_dma_map_entry *)IoAdapter->dma_map); - if (pI->descriptor_number >= 0) { - dword dma_magic; - void *local_addr; - diva_get_dma_map_entry(\ - (struct _diva_dma_map_entry *)IoAdapter->dma_map, - pI->descriptor_number, - &local_addr, &dma_magic); - pI->descriptor_address = local_addr; - pI->descriptor_magic = dma_magic; - pI->operation = 0; - } else { - pI->operation = -1; - } - } else if ((pI->operation == IDI_SYNC_REQ_DMA_DESCRIPTOR_FREE) && - (pI->descriptor_number >= 0)) { - diva_free_dma_map_entry((struct _diva_dma_map_entry *)IoAdapter->dma_map, - pI->descriptor_number); - pI->descriptor_number = -1; - pI->operation = 0; - } else { - pI->descriptor_number = -1; - pI->operation = -1; - } - 
diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "dma_op"); - } return; -#endif - case IDI_SYNC_REQ_XDI_GET_LOGICAL_ADAPTER_NUMBER: { - diva_xdi_get_logical_adapter_number_s_t *pI = \ - &syncReq->xdi_logical_adapter_number.info; - pI->logical_adapter_number = IoAdapter->ANum; - pI->controller = IoAdapter->ControllerNumber; - pI->total_controllers = IoAdapter->Properties.Adapters; - } return; - case IDI_SYNC_REQ_XDI_GET_CAPI_PARAMS: { - diva_xdi_get_capi_parameters_t prms, *pI = &syncReq->xdi_capi_prms.info; - memset(&prms, 0x00, sizeof(prms)); - prms.structure_length = min_t(size_t, sizeof(prms), pI->structure_length); - memset(pI, 0x00, pI->structure_length); - prms.flag_dynamic_l1_down = (IoAdapter->capi_cfg.cfg_1 & \ - DIVA_XDI_CAPI_CFG_1_DYNAMIC_L1_ON) ? 1 : 0; - prms.group_optimization_enabled = (IoAdapter->capi_cfg.cfg_1 & \ - DIVA_XDI_CAPI_CFG_1_GROUP_POPTIMIZATION_ON) ? 1 : 0; - memcpy(pI, &prms, prms.structure_length); - } return; - case IDI_SYNC_REQ_XDI_GET_ADAPTER_SDRAM_BAR: - syncReq->xdi_sdram_bar.info.bar = IoAdapter->sdram_bar; - return; - case IDI_SYNC_REQ_XDI_GET_EXTENDED_FEATURES: { - dword i; - diva_xdi_get_extended_xdi_features_t *pI =\ - &syncReq->xdi_extended_features.info; - pI->buffer_length_in_bytes &= ~0x80000000; - if (pI->buffer_length_in_bytes && pI->features) { - memset(pI->features, 0x00, pI->buffer_length_in_bytes); - } - for (i = 0; ((pI->features) && (i < pI->buffer_length_in_bytes) && - (i < DIVA_XDI_EXTENDED_FEATURES_MAX_SZ)); i++) { - pI->features[i] = extended_xdi_features[i]; - } - if ((pI->buffer_length_in_bytes < DIVA_XDI_EXTENDED_FEATURES_MAX_SZ) || - (!pI->features)) { - pI->buffer_length_in_bytes =\ - (0x80000000 | DIVA_XDI_EXTENDED_FEATURES_MAX_SZ); - } - } return; - case IDI_SYNC_REQ_XDI_GET_STREAM: - if (IoAdapter) { - diva_xdi_provide_istream_info(&IoAdapter->a, - &syncReq->xdi_stream_info.info); - } else { - syncReq->xdi_stream_info.info.provided_service = 0; - } - return; - case IDI_SYNC_REQ_GET_NAME: - if (IoAdapter) - { - strcpy(&syncReq->GetName.name[0], IoAdapter->Name); - DBG_TRC(("xdi: Adapter %d / Name '%s'", - IoAdapter->ANum, IoAdapter->Name)) - return; - } - syncReq->GetName.name[0] = '\0'; - break; - case IDI_SYNC_REQ_GET_SERIAL: - if (IoAdapter) - { - syncReq->GetSerial.serial = IoAdapter->serialNo; - DBG_TRC(("xdi: Adapter %d / SerialNo %ld", - IoAdapter->ANum, IoAdapter->serialNo)) - return; - } - syncReq->GetSerial.serial = 0; - break; - case IDI_SYNC_REQ_GET_CARDTYPE: - if (IoAdapter) - { - syncReq->GetCardType.cardtype = IoAdapter->cardType; - DBG_TRC(("xdi: Adapter %d / CardType %ld", - IoAdapter->ANum, IoAdapter->cardType)) - return; - } - syncReq->GetCardType.cardtype = 0; - break; - case IDI_SYNC_REQ_GET_XLOG: - if (IoAdapter) - { - pcm_req(IoAdapter, e); - return; - } - e->Ind = 0; - break; - case IDI_SYNC_REQ_GET_DBG_XLOG: - if (IoAdapter) - { - pcm_req(IoAdapter, e); - return; - } - e->Ind = 0; - break; - case IDI_SYNC_REQ_GET_FEATURES: - if (IoAdapter) - { - syncReq->GetFeatures.features = - (unsigned short)IoAdapter->features; - return; - } - syncReq->GetFeatures.features = 0; - break; - case IDI_SYNC_REQ_PORTDRV_HOOK: - if (IoAdapter) - { - DBG_TRC(("Xdi:IDI_SYNC_REQ_PORTDRV_HOOK - ignored")) - return; - } - break; - } - if (IoAdapter) - { - return; - } - } - DBG_TRC(("xdi: Id 0x%x / Req 0x%x / Rc 0x%x", e->Id, e->Req, e->Rc)) - if (!IoAdapter) - { - DBG_FTL(("xdi: uninitialized Adapter used - ignore request")) - return; - } - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req"); -/* - 
* assign an entity - */ - if (!(e->Id & 0x1f)) - { - if (IoAdapter->e_count >= IoAdapter->e_max) - { - DBG_FTL(("xdi: all Ids in use (max=%d) --> Req ignored", - IoAdapter->e_max)) - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req"); - return; - } -/* - * find a new free id - */ - for (i = 1; IoAdapter->e_tbl[i].e; ++i); - IoAdapter->e_tbl[i].e = e; - IoAdapter->e_count++; - e->No = (byte)i; - e->More = 0; - e->RCurrent = 0xff; - } - else - { - i = e->No; - } -/* - * if the entity is still busy, ignore the request call - */ - if (e->More & XBUSY) - { - DBG_FTL(("xdi: Id 0x%x busy --> Req 0x%x ignored", e->Id, e->Req)) - if (!IoAdapter->trapped && IoAdapter->trapFnc) - { - IoAdapter->trapFnc(IoAdapter); - /* - Firs trap, also notify user if supported - */ - if (IoAdapter->trapped && IoAdapter->os_trap_nfy_Fnc) { - (*(IoAdapter->os_trap_nfy_Fnc))(IoAdapter, IoAdapter->ANum); - } - } - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req"); - return; - } -/* - * initialize transmit status variables - */ - e->More |= XBUSY; - e->More &= ~XMOREF; - e->XCurrent = 0; - e->XOffset = 0; -/* - * queue this entity in the adapter request queue - */ - IoAdapter->e_tbl[i].next = 0; - if (IoAdapter->head) - { - IoAdapter->e_tbl[IoAdapter->tail].next = i; - IoAdapter->tail = i; - } - else - { - IoAdapter->head = i; - IoAdapter->tail = i; - } -/* - * queue the DPC to process the request - */ - diva_os_schedule_soft_isr(&IoAdapter->req_soft_isr); - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req"); -} -/* --------------------------------------------------------------------- - Main DPC routine - --------------------------------------------------------------------- */ -void DIDpcRoutine(struct _diva_os_soft_isr *psoft_isr, void *Context) { - PISDN_ADAPTER IoAdapter = (PISDN_ADAPTER)Context; - ADAPTER *a = &IoAdapter->a; - diva_os_atomic_t *pin_dpc = &IoAdapter->in_dpc; - if (diva_os_atomic_increment(pin_dpc) == 1) { - do { - if (IoAdapter->tst_irq(a)) - { - if (!IoAdapter->Unavailable) - IoAdapter->dpc(a); - IoAdapter->clr_irq(a); - } - IoAdapter->out(a); - } while (diva_os_atomic_decrement(pin_dpc) > 0); - /* ---------------------------------------------------------------- - Look for XLOG request (cards with indirect addressing) - ---------------------------------------------------------------- */ - if (IoAdapter->pcm_pending) { - struct pc_maint *pcm; - diva_os_spin_lock_magic_t OldIrql; - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_dpc"); - pcm = (struct pc_maint *)IoAdapter->pcm_data; - switch (IoAdapter->pcm_pending) { - case 1: /* ask card for XLOG */ - a->ram_out(a, &IoAdapter->pcm->rc, 0); - a->ram_out(a, &IoAdapter->pcm->req, pcm->req); - IoAdapter->pcm_pending = 2; - break; - case 2: /* Try to get XLOG from the card */ - if ((int)(a->ram_in(a, &IoAdapter->pcm->rc))) { - a->ram_in_buffer(a, IoAdapter->pcm, pcm, sizeof(*pcm)); - IoAdapter->pcm_pending = 3; - } - break; - case 3: /* let XDI recovery XLOG */ - break; - } - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_dpc"); - } - /* ---------------------------------------------------------------- */ - } -} -/* -------------------------------------------------------------------------- - XLOG interface - -------------------------------------------------------------------------- */ -static void -pcm_req(PISDN_ADAPTER IoAdapter, ENTITY *e) -{ - diva_os_spin_lock_magic_t OldIrql; - int i, rc; - ADAPTER *a = &IoAdapter->a; - struct pc_maint *pcm = 
(struct pc_maint *)&e->Ind; -/* - * special handling of I/O based card interface - * the memory access isn't an atomic operation ! - */ - if (IoAdapter->Properties.Card == CARD_MAE) - { - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_1"); - IoAdapter->pcm_data = (void *)pcm; - IoAdapter->pcm_pending = 1; - diva_os_schedule_soft_isr(&IoAdapter->req_soft_isr); - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_1"); - for (rc = 0, i = (IoAdapter->trapped ? 3000 : 250); !rc && (i > 0); --i) - { - diva_os_sleep(1); - if (IoAdapter->pcm_pending == 3) { - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_3"); - IoAdapter->pcm_pending = 0; - IoAdapter->pcm_data = NULL; - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_3"); - return; - } - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_2"); - diva_os_schedule_soft_isr(&IoAdapter->req_soft_isr); - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_2"); - } - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_4"); - IoAdapter->pcm_pending = 0; - IoAdapter->pcm_data = NULL; - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, - &OldIrql, - "data_pcm_4"); - goto Trapped; - } -/* - * memory based shared ram is accessible from different - * processors without disturbing concurrent processes. - */ - a->ram_out(a, &IoAdapter->pcm->rc, 0); - a->ram_out(a, &IoAdapter->pcm->req, pcm->req); - for (i = (IoAdapter->trapped ? 3000 : 250); --i > 0;) - { - diva_os_sleep(1); - rc = (int)(a->ram_in(a, &IoAdapter->pcm->rc)); - if (rc) - { - a->ram_in_buffer(a, IoAdapter->pcm, pcm, sizeof(*pcm)); - return; - } - } -Trapped: - if (IoAdapter->trapFnc) - { - int trapped = IoAdapter->trapped; - IoAdapter->trapFnc(IoAdapter); - /* - Firs trap, also notify user if supported - */ - if (!trapped && IoAdapter->trapped && IoAdapter->os_trap_nfy_Fnc) { - (*(IoAdapter->os_trap_nfy_Fnc))(IoAdapter, IoAdapter->ANum); - } - } -} -/*------------------------------------------------------------------*/ -/* ram access functions for memory mapped cards */ -/*------------------------------------------------------------------*/ -byte mem_in(ADAPTER *a, void *addr) -{ - byte val; - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - val = READ_BYTE(Base + (unsigned long)addr); - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); - return (val); -} -word mem_inw(ADAPTER *a, void *addr) -{ - word val; - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - val = READ_WORD((Base + (unsigned long)addr)); - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); - return (val); -} -void mem_in_dw(ADAPTER *a, void *addr, dword *data, int dwords) -{ - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - while (dwords--) { - *data++ = READ_DWORD((Base + (unsigned long)addr)); - addr += 4; - } - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); -} -void mem_in_buffer(ADAPTER *a, void *addr, void *buffer, word length) -{ - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - memcpy_fromio(buffer, (Base + (unsigned long)addr), length); - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); -} -void mem_look_ahead(ADAPTER *a, PBUFFER *RBuffer, ENTITY *e) -{ - PISDN_ADAPTER IoAdapter = (PISDN_ADAPTER)a->io; - IoAdapter->RBuffer.length = mem_inw(a, &RBuffer->length); - mem_in_buffer(a, RBuffer->P, IoAdapter->RBuffer.P, - 
IoAdapter->RBuffer.length); - e->RBuffer = (DBUFFER *)&IoAdapter->RBuffer; -} -void mem_out(ADAPTER *a, void *addr, byte data) -{ - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - WRITE_BYTE(Base + (unsigned long)addr, data); - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); -} -void mem_outw(ADAPTER *a, void *addr, word data) -{ - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - WRITE_WORD((Base + (unsigned long)addr), data); - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); -} -void mem_out_dw(ADAPTER *a, void *addr, const dword *data, int dwords) -{ - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - while (dwords--) { - WRITE_DWORD((Base + (unsigned long)addr), *data); - addr += 4; - data++; - } - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); -} -void mem_out_buffer(ADAPTER *a, void *addr, void *buffer, word length) -{ - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - memcpy_toio((Base + (unsigned long)addr), buffer, length); - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); -} -void mem_inc(ADAPTER *a, void *addr) -{ - volatile byte __iomem *Base = DIVA_OS_MEM_ATTACH_RAM((PISDN_ADAPTER)a->io); - byte x = READ_BYTE(Base + (unsigned long)addr); - WRITE_BYTE(Base + (unsigned long)addr, x + 1); - DIVA_OS_MEM_DETACH_RAM((PISDN_ADAPTER)a->io, Base); -} -/*------------------------------------------------------------------*/ -/* ram access functions for io-mapped cards */ -/*------------------------------------------------------------------*/ -byte io_in(ADAPTER *a, void *adr) -{ - byte val; - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - outppw(Port + 4, (word)(unsigned long)adr); - val = inpp(Port); - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); - return (val); -} -word io_inw(ADAPTER *a, void *adr) -{ - word val; - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - outppw(Port + 4, (word)(unsigned long)adr); - val = inppw(Port); - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); - return (val); -} -void io_in_buffer(ADAPTER *a, void *adr, void *buffer, word len) -{ - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - byte *P = (byte *)buffer; - if ((long)adr & 1) { - outppw(Port + 4, (word)(unsigned long)adr); - *P = inpp(Port); - P++; - adr = ((byte *) adr) + 1; - len--; - if (!len) { - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); - return; - } - } - outppw(Port + 4, (word)(unsigned long)adr); - inppw_buffer(Port, P, len + 1); - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); -} -void io_look_ahead(ADAPTER *a, PBUFFER *RBuffer, ENTITY *e) -{ - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - outppw(Port + 4, (word)(unsigned long)RBuffer); - ((PISDN_ADAPTER)a->io)->RBuffer.length = inppw(Port); - inppw_buffer(Port, ((PISDN_ADAPTER)a->io)->RBuffer.P, ((PISDN_ADAPTER)a->io)->RBuffer.length + 1); - e->RBuffer = (DBUFFER *) &(((PISDN_ADAPTER)a->io)->RBuffer); - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); -} -void io_out(ADAPTER *a, void *adr, byte data) -{ - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - outppw(Port + 4, (word)(unsigned long)adr); - outpp(Port, data); - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); -} -void io_outw(ADAPTER *a, void *adr, word data) -{ - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - outppw(Port + 4, (word)(unsigned long)adr); - outppw(Port, data); - 
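
The mem_* family above and the io_* family around it are interchangeable behind the ADAPTER function-pointer table; common code in this file (a->ram_in, a->ram_out, a->ram_in_buffer, ... in pcm_req() and DIDpcRoutine()) never cares which bus interface backs them. A sketch of the assumed init-time selection (the actual wiring happens elsewhere in the driver; only the pointer names used in this file are taken as given):

/* Hedged sketch, not driver code: select an accessor family once. */
static void wire_ram_ops(ADAPTER *a, int io_mapped)
{
	if (io_mapped) {		/* banked port access via Port + 4 */
		a->ram_in         = io_in;
		a->ram_in_buffer  = io_in_buffer;
		a->ram_out        = io_out;
		a->ram_out_buffer = io_out_buffer;
	} else {			/* directly mapped shared RAM */
		a->ram_in         = mem_in;
		a->ram_in_buffer  = mem_in_buffer;
		a->ram_out        = mem_out;
		a->ram_out_buffer = mem_out_buffer;
	}
}

Note the asymmetry this hides: the io_* accessors first latch the shared-RAM offset into the address register at Port + 4 and then move data through Port, so a single logical access is two port operations and relies on the adapter lock for atomicity, which is why pcm_req() treats I/O-mapped cards specially.
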
DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); -} -void io_out_buffer(ADAPTER *a, void *adr, void *buffer, word len) -{ - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - byte *P = (byte *)buffer; - if ((long)adr & 1) { - outppw(Port + 4, (word)(unsigned long)adr); - outpp(Port, *P); - P++; - adr = ((byte *) adr) + 1; - len--; - if (!len) { - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); - return; - } - } - outppw(Port + 4, (word)(unsigned long)adr); - outppw_buffer(Port, P, len + 1); - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); -} -void io_inc(ADAPTER *a, void *adr) -{ - byte x; - byte __iomem *Port = DIVA_OS_MEM_ATTACH_PORT((PISDN_ADAPTER)a->io); - outppw(Port + 4, (word)(unsigned long)adr); - x = inpp(Port); - outppw(Port + 4, (word)(unsigned long)adr); - outpp(Port, x + 1); - DIVA_OS_MEM_DETACH_PORT((PISDN_ADAPTER)a->io, Port); -} -/*------------------------------------------------------------------*/ -/* OS specific functions related to queuing of entities */ -/*------------------------------------------------------------------*/ -void free_entity(ADAPTER *a, byte e_no) -{ - PISDN_ADAPTER IoAdapter; - diva_os_spin_lock_magic_t irql; - IoAdapter = (PISDN_ADAPTER) a->io; - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_free"); - IoAdapter->e_tbl[e_no].e = NULL; - IoAdapter->e_count--; - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_free"); -} -void assign_queue(ADAPTER *a, byte e_no, word ref) -{ - PISDN_ADAPTER IoAdapter; - diva_os_spin_lock_magic_t irql; - IoAdapter = (PISDN_ADAPTER) a->io; - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_assign"); - IoAdapter->e_tbl[e_no].assign_ref = ref; - IoAdapter->e_tbl[e_no].next = (byte)IoAdapter->assign; - IoAdapter->assign = e_no; - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_assign"); -} -byte get_assign(ADAPTER *a, word ref) -{ - PISDN_ADAPTER IoAdapter; - diva_os_spin_lock_magic_t irql; - byte e_no; - IoAdapter = (PISDN_ADAPTER) a->io; - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, - &irql, - "data_assign_get"); - for (e_no = (byte)IoAdapter->assign; - e_no && IoAdapter->e_tbl[e_no].assign_ref != ref; - e_no = IoAdapter->e_tbl[e_no].next); - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, - &irql, - "data_assign_get"); - return e_no; -} -void req_queue(ADAPTER *a, byte e_no) -{ - PISDN_ADAPTER IoAdapter; - diva_os_spin_lock_magic_t irql; - IoAdapter = (PISDN_ADAPTER) a->io; - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_q"); - IoAdapter->e_tbl[e_no].next = 0; - if (IoAdapter->head) { - IoAdapter->e_tbl[IoAdapter->tail].next = e_no; - IoAdapter->tail = e_no; - } - else { - IoAdapter->head = e_no; - IoAdapter->tail = e_no; - } - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_q"); -} -byte look_req(ADAPTER *a) -{ - PISDN_ADAPTER IoAdapter; - IoAdapter = (PISDN_ADAPTER) a->io; - return ((byte)IoAdapter->head); -} -void next_req(ADAPTER *a) -{ - PISDN_ADAPTER IoAdapter; - diva_os_spin_lock_magic_t irql; - IoAdapter = (PISDN_ADAPTER) a->io; - diva_os_enter_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_next"); - IoAdapter->head = IoAdapter->e_tbl[IoAdapter->head].next; - if (!IoAdapter->head) IoAdapter->tail = 0; - diva_os_leave_spin_lock(&IoAdapter->data_spin_lock, &irql, "data_req_next"); -} -/*------------------------------------------------------------------*/ -/* memory map functions */ -/*------------------------------------------------------------------*/ 
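
free_entity() through next_req() above manage a FIFO of entity numbers threaded through e_tbl[].next, with index 0 doubling as the "empty" sentinel; slot 0 is never handed out, which is why request() starts its free-slot search at i = 1. A self-contained model of that queue discipline (standalone C, names invented for the sketch):

#include <stdio.h>

#define SLOTS 8

struct slot { int next; };		/* chaining index, as in E_INFO */

static struct slot e_tbl[SLOTS];
static int head, tail;			/* 0 means "queue empty" */

static void enqueue(int e_no)
{
	e_tbl[e_no].next = 0;
	if (head) {			/* append behind current tail */
		e_tbl[tail].next = e_no;
		tail = e_no;
	} else {			/* first element */
		head = tail = e_no;
	}
}

static int dequeue(void)
{
	int e_no = head;
	if (head) {
		head = e_tbl[head].next;
		if (!head)
			tail = 0;	/* queue ran empty */
	}
	return e_no;			/* 0 when empty */
}

int main(void)
{
	enqueue(3); enqueue(5); enqueue(2);
	{
		int r1 = dequeue();
		int r2 = dequeue();
		int r3 = dequeue();
		int r4 = dequeue();
		printf("%d %d %d %d\n", r1, r2, r3, r4);	/* 3 5 2 0 */
	}
	return 0;
}

Because links are small array indices rather than pointers, the whole queue state fits in the per-adapter e_tbl and survives being inspected from both request context and the DPC under the same spin lock.
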
-ENTITY *entity_ptr(ADAPTER *a, byte e_no) -{ - PISDN_ADAPTER IoAdapter; - IoAdapter = (PISDN_ADAPTER)a->io; - return (IoAdapter->e_tbl[e_no].e); -} -void *PTR_X(ADAPTER *a, ENTITY *e) -{ - return ((void *) e->X); -} -void *PTR_R(ADAPTER *a, ENTITY *e) -{ - return ((void *) e->R); -} -void *PTR_P(ADAPTER *a, ENTITY *e, void *P) -{ - return P; -} -void CALLBACK(ADAPTER *a, ENTITY *e) -{ - if (e && e->callback) - e->callback(e); -} diff --git a/drivers/isdn/hardware/eicon/io.h b/drivers/isdn/hardware/eicon/io.h deleted file mode 100644 index 01deced18ab8..000000000000 --- a/drivers/isdn/hardware/eicon/io.h +++ /dev/null @@ -1,308 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_XDI_COMMON_IO_H_INC__ /* { */ -#define __DIVA_XDI_COMMON_IO_H_INC__ -/* - maximum = 16 adapters -*/ -#define DI_MAX_LINKS MAX_ADAPTER -#define ISDN_MAX_NUM_LEN 60 -/* -------------------------------------------------------------------------- - structure for quadro card management (obsolete for - systems that do provide per card load event) - -------------------------------------------------------------------------- */ -typedef struct { - dword Num; - DEVICE_NAME DeviceName[4]; - PISDN_ADAPTER QuadroAdapter[4]; -} ADAPTER_LIST_ENTRY, *PADAPTER_LIST_ENTRY; -/* -------------------------------------------------------------------------- - Special OS memory support structures - -------------------------------------------------------------------------- */ -#define MAX_MAPPED_ENTRIES 8 -typedef struct { - void *Address; - dword Length; -} ADAPTER_MEMORY; -/* -------------------------------------------------------------------------- - Configuration of XDI clients carried by XDI - -------------------------------------------------------------------------- */ -#define DIVA_XDI_CAPI_CFG_1_DYNAMIC_L1_ON 0x01 -#define DIVA_XDI_CAPI_CFG_1_GROUP_POPTIMIZATION_ON 0x02 -typedef struct _diva_xdi_capi_cfg { - byte cfg_1; -} diva_xdi_capi_cfg_t; -/* -------------------------------------------------------------------------- - Main data structure kept per adapter - -------------------------------------------------------------------------- */ -struct _ISDN_ADAPTER { - void (*DIRequest)(PISDN_ADAPTER, ENTITY *); - int State; /* from NT4 1.srv, a good idea, but a poor achievement */ - int Initialized; - int RegisteredWithDidd; - int Unavailable; /* callback function possible? 
*/ - int ResourcesClaimed; - int PnpBiosConfigUsed; - dword Logging; - dword features; - char ProtocolIdString[80]; - /* - remember mapped memory areas - */ - ADAPTER_MEMORY MappedMemory[MAX_MAPPED_ENTRIES]; - CARD_PROPERTIES Properties; - dword cardType; - dword protocol_id; /* configured protocol identifier */ - char protocol_name[8]; /* readable name of protocol */ - dword BusType; - dword BusNumber; - dword slotNumber; - dword slotId; - dword ControllerNumber; /* for QUADRO cards only */ - PISDN_ADAPTER MultiMaster; /* for 4-BRI card only - use MultiMaster or QuadroList */ - PADAPTER_LIST_ENTRY QuadroList; /* for QUADRO card only */ - PDEVICE_OBJECT DeviceObject; - dword DeviceId; - diva_os_adapter_irq_info_t irq_info; - dword volatile IrqCount; - int trapped; - dword DspCodeBaseAddr; - dword MaxDspCodeSize; - dword downloadAddr; - dword DspCodeBaseAddrTable[4]; /* add. for MultiMaster */ - dword MaxDspCodeSizeTable[4]; /* add. for MultiMaster */ - dword downloadAddrTable[4]; /* add. for MultiMaster */ - dword MemoryBase; - dword MemorySize; - byte __iomem *Address; - byte __iomem *Config; - byte __iomem *Control; - byte __iomem *reset; - byte __iomem *port; - byte __iomem *ram; - byte __iomem *cfg; - byte __iomem *prom; - byte __iomem *ctlReg; - struct pc_maint *pcm; - diva_os_dependent_devica_name_t os_name; - byte Name[32]; - dword serialNo; - dword ANum; - dword ArchiveType; /* ARCHIVE_TYPE_NONE ..._SINGLE ..._USGEN ..._MULTI */ - char *ProtocolSuffix; /* internal protocolfile table */ - char Archive[32]; - char Protocol[32]; - char AddDownload[32]; /* Dsp- or other additional download files */ - char Oad1[ISDN_MAX_NUM_LEN]; - char Osa1[ISDN_MAX_NUM_LEN]; - char Oad2[ISDN_MAX_NUM_LEN]; - char Osa2[ISDN_MAX_NUM_LEN]; - char Spid1[ISDN_MAX_NUM_LEN]; - char Spid2[ISDN_MAX_NUM_LEN]; - byte nosig; - byte BriLayer2LinkCount; /* amount of TEI's that adapter will support in P2MP mode */ - dword Channels; - dword tei; - dword nt2; - dword TerminalCount; - dword WatchDog; - dword Permanent; - dword BChMask; /* B channel mask for unchannelized modes */ - dword StableL2; - dword DidLen; - dword NoOrderCheck; - dword ForceLaw; /* VoiceCoding - default:0, a-law: 1, my-law: 2 */ - dword SigFlags; - dword LowChannel; - dword NoHscx30; - dword ProtVersion; - dword crc4; - dword L1TristateOrQsig; /* enable Layer 1 Tristate (bit 2)Or Qsig params (bit 0,1)*/ - dword InitialDspInfo; - dword ModemGuardTone; - dword ModemMinSpeed; - dword ModemMaxSpeed; - dword ModemOptions; - dword ModemOptions2; - dword ModemNegotiationMode; - dword ModemModulationsMask; - dword ModemTransmitLevel; - dword FaxOptions; - dword FaxMaxSpeed; - dword Part68LevelLimiter; - dword UsEktsNumCallApp; - byte UsEktsFeatAddConf; - byte UsEktsFeatRemoveConf; - byte UsEktsFeatCallTransfer; - byte UsEktsFeatMsgWaiting; - byte QsigDialect; - byte ForceVoiceMailAlert; - byte DisableAutoSpid; - byte ModemCarrierWaitTimeSec; - byte ModemCarrierLossWaitTimeTenthSec; - byte PiafsLinkTurnaroundInFrames; - byte DiscAfterProgress; - byte AniDniLimiter[3]; - byte TxAttenuation; /* PRI/E1 only: attenuate TX signal */ - word QsigFeatures; - dword GenerateRingtone; - dword SupplementaryServicesFeatures; - dword R2Dialect; - dword R2CasOptions; - dword FaxV34Options; - dword DisabledDspMask; - dword AdapterTestMask; - dword DspImageLength; - word AlertToIn20mSecTicks; - word ModemEyeSetup; - byte R2CtryLength; - byte CCBSRelTimer; - byte *PcCfgBufferFile;/* flexible parameter via file */ - byte *PcCfgBuffer; /* flexible parameter via multistring 
*/ - diva_os_dump_file_t dump_file; /* dump memory to file at lowest irq level */ - diva_os_board_trace_t board_trace; /* traces from the board */ - diva_os_spin_lock_t isr_spin_lock; - diva_os_spin_lock_t data_spin_lock; - diva_os_soft_isr_t req_soft_isr; - diva_os_soft_isr_t isr_soft_isr; - diva_os_atomic_t in_dpc; - PBUFFER RBuffer; /* Copy of receive lookahead buffer */ - word e_max; - word e_count; - E_INFO *e_tbl; - word assign; /* list of pending ASSIGNs */ - word head; /* head of request queue */ - word tail; /* tail of request queue */ - ADAPTER a; /* not a separate structure */ - void (*out)(ADAPTER *a); - byte (*dpc)(ADAPTER *a); - byte (*tst_irq)(ADAPTER *a); - void (*clr_irq)(ADAPTER *a); - int (*load)(PISDN_ADAPTER); - int (*mapmem)(PISDN_ADAPTER); - int (*chkIrq)(PISDN_ADAPTER); - void (*disIrq)(PISDN_ADAPTER); - void (*start)(PISDN_ADAPTER); - void (*stop)(PISDN_ADAPTER); - void (*rstFnc)(PISDN_ADAPTER); - void (*trapFnc)(PISDN_ADAPTER); - dword (*DetectDsps)(PISDN_ADAPTER); - void (*os_trap_nfy_Fnc)(PISDN_ADAPTER, dword); - diva_os_isr_callback_t diva_isr_handler; - dword sdram_bar; /* must be 32 bit */ - dword fpga_features; - volatile int pcm_pending; - volatile void *pcm_data; - diva_xdi_capi_cfg_t capi_cfg; - dword tasks; - void *dma_map; - int (*DivaAdapterTestProc)(PISDN_ADAPTER); - void *AdapterTestMemoryStart; - dword AdapterTestMemoryLength; - const byte *cfg_lib_memory_init; - dword cfg_lib_memory_init_length; -}; -/* --------------------------------------------------------------------- - Entity table - --------------------------------------------------------------------- */ -struct e_info_s { - ENTITY *e; - byte next; /* chaining index */ - word assign_ref; /* assign reference */ -}; -/* --------------------------------------------------------------------- - S-cards shared ram structure for loading - --------------------------------------------------------------------- */ -struct s_load { - byte ctrl; - byte card; - byte msize; - byte fill0; - word ebit; - word elocl; - word eloch; - byte reserved[20]; - word signature; - byte fill[224]; - byte b[256]; -}; -#define PR_RAM ((struct pr_ram *)0) -#define RAM ((struct dual *)0) -/* --------------------------------------------------------------------- - platform specific conversions - --------------------------------------------------------------------- */ -extern void *PTR_P(ADAPTER *a, ENTITY *e, void *P); -extern void *PTR_X(ADAPTER *a, ENTITY *e); -extern void *PTR_R(ADAPTER *a, ENTITY *e); -extern void CALLBACK(ADAPTER *a, ENTITY *e); -extern void set_ram(void **adr_ptr); -/* --------------------------------------------------------------------- - ram access functions for io mapped cards - --------------------------------------------------------------------- */ -byte io_in(ADAPTER *a, void *adr); -word io_inw(ADAPTER *a, void *adr); -void io_in_buffer(ADAPTER *a, void *adr, void *P, word length); -void io_look_ahead(ADAPTER *a, PBUFFER *RBuffer, ENTITY *e); -void io_out(ADAPTER *a, void *adr, byte data); -void io_outw(ADAPTER *a, void *adr, word data); -void io_out_buffer(ADAPTER *a, void *adr, void *P, word length); -void io_inc(ADAPTER *a, void *adr); -void bri_in_buffer(PISDN_ADAPTER IoAdapter, dword Pos, - void *Buf, dword Len); -int bri_out_buffer(PISDN_ADAPTER IoAdapter, dword Pos, - void *Buf, dword Len, int Verify); -/* --------------------------------------------------------------------- - ram access functions for memory mapped cards - --------------------------------------------------------------------- 
*/ -byte mem_in(ADAPTER *a, void *adr); -word mem_inw(ADAPTER *a, void *adr); -void mem_in_buffer(ADAPTER *a, void *adr, void *P, word length); -void mem_look_ahead(ADAPTER *a, PBUFFER *RBuffer, ENTITY *e); -void mem_out(ADAPTER *a, void *adr, byte data); -void mem_outw(ADAPTER *a, void *adr, word data); -void mem_out_buffer(ADAPTER *a, void *adr, void *P, word length); -void mem_inc(ADAPTER *a, void *adr); -void mem_in_dw(ADAPTER *a, void *addr, dword *data, int dwords); -void mem_out_dw(ADAPTER *a, void *addr, const dword *data, int dwords); -/* --------------------------------------------------------------------- - functions exported by io.c - --------------------------------------------------------------------- */ -extern IDI_CALL Requests[MAX_ADAPTER]; -extern void DIDpcRoutine(struct _diva_os_soft_isr *psoft_isr, - void *context); -extern void request(PISDN_ADAPTER, ENTITY *); -/* --------------------------------------------------------------------- - trapFn helpers, used to recover debug trace from dead card - --------------------------------------------------------------------- */ -typedef struct { - word *buf; - word cnt; - word out; -} Xdesc; -extern void dump_trap_frame(PISDN_ADAPTER IoAdapter, byte __iomem *exception); -extern void dump_xlog_buffer(PISDN_ADAPTER IoAdapter, Xdesc *xlogDesc); -/* --------------------------------------------------------------------- */ -#endif /* } __DIVA_XDI_COMMON_IO_H_INC__ */ diff --git a/drivers/isdn/hardware/eicon/istream.c b/drivers/isdn/hardware/eicon/istream.c deleted file mode 100644 index 045bda5c839f..000000000000 --- a/drivers/isdn/hardware/eicon/istream.c +++ /dev/null @@ -1,226 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#include "platform.h" -#if defined(DIVA_ISTREAM) /* { */ -#include "pc.h" -#include "pr_pc.h" -#include "di_defs.h" -#include "divasync.h" -#include "di.h" -#if !defined USE_EXTENDED_DEBUGS -#include "dimaint.h" -#else -#define dprintf -#endif -#include "dfifo.h" -int diva_istream_write(void *context, - int Id, - void *data, - int length, - int final, - byte usr1, - byte usr2); -int diva_istream_read(void *context, - int Id, - void *data, - int max_length, - int *final, - byte *usr1, - byte *usr2); -/* ------------------------------------------------------------------- - Does provide iStream interface to the client - ------------------------------------------------------------------- */ -void diva_xdi_provide_istream_info(ADAPTER *a, - diva_xdi_stream_interface_t *pi) { - pi->provided_service = 0; -} -/* ------------------------------------------------------------------ - Does write the data from caller's buffer to the card's - stream interface. 
- If synchronous service was requested, then function - does return amount of data written to stream. - 'final' does indicate that piece of data to be written is - final part of frame (necessary only by structured datatransfer) - return 0 if zero lengh packet was written - return -1 if stream is full - ------------------------------------------------------------------ */ -int diva_istream_write(void *context, - int Id, - void *data, - int length, - int final, - byte usr1, - byte usr2) { - ADAPTER *a = (ADAPTER *)context; - int written = 0, to_write = -1; - char tmp[4]; - byte *data_ptr = (byte *)data; - for (;;) { - a->ram_in_dw(a, -#ifdef PLATFORM_GT_32BIT - ULongToPtr(a->tx_stream[Id] + a->tx_pos[Id]), -#else - (void *)(a->tx_stream[Id] + a->tx_pos[Id]), -#endif - (dword *)&tmp[0], - 1); - if (tmp[0] & DIVA_DFIFO_READY) { /* No free blocks more */ - if (to_write < 0) - return (-1); /* was not able to write */ - break; /* only part of message was written */ - } - to_write = min(length, DIVA_DFIFO_DATA_SZ); - if (to_write) { - a->ram_out_buffer(a, -#ifdef PLATFORM_GT_32BIT - ULongToPtr(a->tx_stream[Id] + a->tx_pos[Id] + 4), -#else - (void *)(a->tx_stream[Id] + a->tx_pos[Id] + 4), -#endif - data_ptr, - (word)to_write); - length -= to_write; - written += to_write; - data_ptr += to_write; - } - tmp[1] = (char)to_write; - tmp[0] = (tmp[0] & DIVA_DFIFO_WRAP) | - DIVA_DFIFO_READY | - ((!length && final) ? DIVA_DFIFO_LAST : 0); - if (tmp[0] & DIVA_DFIFO_LAST) { - tmp[2] = usr1; - tmp[3] = usr2; - } - a->ram_out_dw(a, -#ifdef PLATFORM_GT_32BIT - ULongToPtr(a->tx_stream[Id] + a->tx_pos[Id]), -#else - (void *)(a->tx_stream[Id] + a->tx_pos[Id]), -#endif - (dword *)&tmp[0], - 1); - if (tmp[0] & DIVA_DFIFO_WRAP) { - a->tx_pos[Id] = 0; - } else { - a->tx_pos[Id] += DIVA_DFIFO_STEP; - } - if (!length) { - break; - } - } - return (written); -} -/* ------------------------------------------------------------------- - In case of SYNCRONOUS service: - Does write data from stream in caller's buffer. 
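
The write loop above treats each DFIFO block as a 4-byte header plus payload: byte 0 holds the READY/LAST/WRAP flags, byte 1 the payload length, and bytes 2-3 the usr1/usr2 tags that are only meaningful on the block carrying DIVA_DFIFO_LAST. READY acts as the ownership bit: the producer sets it after filling a block, the consumer clears everything except WRAP when done, and WRAP sends the ring position back to offset 0 instead of advancing by DIVA_DFIFO_STEP. A reduced host-side model, with placeholder flag values and payload size that are *not* dfifo.h's real constants:

#include <string.h>

enum {				/* placeholder bit values, illustration only */
	DFIFO_READY = 0x01,
	DFIFO_LAST  = 0x02,
	DFIFO_WRAP  = 0x04,
};

struct dfifo_block {
	unsigned char flags;	/* byte 0 of the header dword */
	unsigned char length;	/* byte 1: payload bytes in this block */
	unsigned char usr1, usr2; /* bytes 2-3: valid only with LAST */
	unsigned char data[64];	/* stands in for DIVA_DFIFO_DATA_SZ */
};

/* producer side, mirroring diva_istream_write(): fill, then publish */
static int block_put(struct dfifo_block *b, const void *p,
		     unsigned char n, int last)
{
	if (b->flags & DFIFO_READY)
		return -1;	/* slot still owned by the consumer */
	if (n > sizeof(b->data))
		return -1;	/* larger frames span several blocks */
	memcpy(b->data, p, n);
	b->length = n;
	b->flags = (b->flags & DFIFO_WRAP) | DFIFO_READY |
		   (last ? DFIFO_LAST : 0);	/* publish last */
	return 0;
}

/* consumer side, mirroring diva_istream_read(): drain, then release */
static int block_get(struct dfifo_block *b, void *p)
{
	int n;
	if (!(b->flags & DFIFO_READY))
		return -1;	/* nothing published at this slot */
	n = b->length;
	memcpy(p, b->data, n);
	b->flags &= DFIFO_WRAP;	/* hand the slot back, keep only WRAP */
	return n;
}

The WRAP bit is the only state that survives a release, which is what lets both sides walk the ring without ever exchanging read/write indices.
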
- Does return amount of data written to buffer - Final flag is set on return if last part of structured frame - was received - return 0 if zero packet was received - return -1 if stream is empty - return -2 if read buffer does not profide sufficient space - to accommodate entire segment - max_length should be at least 68 bytes - ------------------------------------------------------------------- */ -int diva_istream_read(void *context, - int Id, - void *data, - int max_length, - int *final, - byte *usr1, - byte *usr2) { - ADAPTER *a = (ADAPTER *)context; - int read = 0, to_read = -1; - char tmp[4]; - byte *data_ptr = (byte *)data; - *final = 0; - for (;;) { - a->ram_in_dw(a, -#ifdef PLATFORM_GT_32BIT - ULongToPtr(a->rx_stream[Id] + a->rx_pos[Id]), -#else - (void *)(a->rx_stream[Id] + a->rx_pos[Id]), -#endif - (dword *)&tmp[0], - 1); - if (tmp[1] > max_length) { - if (to_read < 0) - return (-2); /* was not able to read */ - break; - } - if (!(tmp[0] & DIVA_DFIFO_READY)) { - if (to_read < 0) - return (-1); /* was not able to read */ - break; - } - to_read = min(max_length, (int)tmp[1]); - if (to_read) { - a->ram_in_buffer(a, -#ifdef PLATFORM_GT_32BIT - ULongToPtr(a->rx_stream[Id] + a->rx_pos[Id] + 4), -#else - (void *)(a->rx_stream[Id] + a->rx_pos[Id] + 4), -#endif - data_ptr, - (word)to_read); - max_length -= to_read; - read += to_read; - data_ptr += to_read; - } - if (tmp[0] & DIVA_DFIFO_LAST) { - *final = 1; - } - tmp[0] &= DIVA_DFIFO_WRAP; - a->ram_out_dw(a, -#ifdef PLATFORM_GT_32BIT - ULongToPtr(a->rx_stream[Id] + a->rx_pos[Id]), -#else - (void *)(a->rx_stream[Id] + a->rx_pos[Id]), -#endif - (dword *)&tmp[0], - 1); - if (tmp[0] & DIVA_DFIFO_WRAP) { - a->rx_pos[Id] = 0; - } else { - a->rx_pos[Id] += DIVA_DFIFO_STEP; - } - if (*final) { - if (usr1) - *usr1 = tmp[2]; - if (usr2) - *usr2 = tmp[3]; - break; - } - } - return (read); -} -/* --------------------------------------------------------------------- - Does check if one of streams had caused interrupt and does - wake up corresponding application - --------------------------------------------------------------------- */ -void pr_stream(ADAPTER *a) { -} -#endif /* } */ diff --git a/drivers/isdn/hardware/eicon/kst_ifc.h b/drivers/isdn/hardware/eicon/kst_ifc.h deleted file mode 100644 index 894fdfda1090..000000000000 --- a/drivers/isdn/hardware/eicon/kst_ifc.h +++ /dev/null @@ -1,335 +0,0 @@ -/* - * - Copyright (c) Eicon Networks, 2000. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 1.9 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef __DIVA_EICON_TRACE_API__ -#define __DIVA_EICON_TRACE_API__ - -#define DIVA_TRACE_LINE_TYPE_LEN 64 -#define DIVA_TRACE_IE_LEN 64 -#define DIVA_TRACE_FAX_PRMS_LEN 128 - -typedef struct _diva_trace_ie { - byte length; - byte data[DIVA_TRACE_IE_LEN]; -} diva_trace_ie_t; - -/* - Structure used to represent "State\\BX\\Modem" directory - to user. -*/ -typedef struct _diva_trace_modem_state { - dword ChannelNumber; - - dword Event; - - dword Norm; - - dword Options; /* Options received from Application */ - - dword TxSpeed; - dword RxSpeed; - - dword RoundtripMsec; - - dword SymbolRate; - - int RxLeveldBm; - int EchoLeveldBm; - - dword SNRdb; - dword MAE; - - dword LocalRetrains; - dword RemoteRetrains; - dword LocalResyncs; - dword RemoteResyncs; - - dword DiscReason; - -} diva_trace_modem_state_t; - -/* - Representation of "State\\BX\\FAX" directory -*/ -typedef struct _diva_trace_fax_state { - dword ChannelNumber; - dword Event; - dword Page_Counter; - dword Features; - char Station_ID[DIVA_TRACE_FAX_PRMS_LEN]; - char Subaddress[DIVA_TRACE_FAX_PRMS_LEN]; - char Password[DIVA_TRACE_FAX_PRMS_LEN]; - dword Speed; - dword Resolution; - dword Paper_Width; - dword Paper_Length; - dword Scanline_Time; - dword Disc_Reason; - dword dummy; -} diva_trace_fax_state_t; - -/* - Structure used to represent Interface State in the abstract - and interface/D-channel protocol independent form. -*/ -typedef struct _diva_trace_interface_state { - char Layer1[DIVA_TRACE_LINE_TYPE_LEN]; - char Layer2[DIVA_TRACE_LINE_TYPE_LEN]; -} diva_trace_interface_state_t; - -typedef struct _diva_incoming_call_statistics { - dword Calls; - dword Connected; - dword User_Busy; - dword Call_Rejected; - dword Wrong_Number; - dword Incompatible_Dst; - dword Out_of_Order; - dword Ignored; -} diva_incoming_call_statistics_t; - -typedef struct _diva_outgoing_call_statistics { - dword Calls; - dword Connected; - dword User_Busy; - dword No_Answer; - dword Wrong_Number; - dword Call_Rejected; - dword Other_Failures; -} diva_outgoing_call_statistics_t; - -typedef struct _diva_modem_call_statistics { - dword Disc_Normal; - dword Disc_Unspecified; - dword Disc_Busy_Tone; - dword Disc_Congestion; - dword Disc_Carr_Wait; - dword Disc_Trn_Timeout; - dword Disc_Incompat; - dword Disc_Frame_Rej; - dword Disc_V42bis; -} diva_modem_call_statistics_t; - -typedef struct _diva_fax_call_statistics { - dword Disc_Normal; - dword Disc_Not_Ident; - dword Disc_No_Response; - dword Disc_Retries; - dword Disc_Unexp_Msg; - dword Disc_No_Polling; - dword Disc_Training; - dword Disc_Unexpected; - dword Disc_Application; - dword Disc_Incompat; - dword Disc_No_Command; - dword Disc_Long_Msg; - dword Disc_Supervisor; - dword Disc_SUB_SEP_PWD; - dword Disc_Invalid_Msg; - dword Disc_Page_Coding; - dword Disc_App_Timeout; - dword Disc_Unspecified; -} diva_fax_call_statistics_t; - -typedef struct _diva_prot_statistics { - dword X_Frames; - dword X_Bytes; - dword X_Errors; - dword R_Frames; - dword R_Bytes; - dword R_Errors; -} diva_prot_statistics_t; - -typedef struct _diva_ifc_statistics { - diva_incoming_call_statistics_t inc; - diva_outgoing_call_statistics_t outg; - diva_modem_call_statistics_t mdm; - diva_fax_call_statistics_t fax; - diva_prot_statistics_t b1; - diva_prot_statistics_t b2; - diva_prot_statistics_t d1; - diva_prot_statistics_t d2; -} diva_ifc_statistics_t; - -/* - Structure used to represent "State\\BX" directory - to user. 
-*/ -typedef struct _diva_trace_line_state { - dword ChannelNumber; - - char Line[DIVA_TRACE_LINE_TYPE_LEN]; - - char Framing[DIVA_TRACE_LINE_TYPE_LEN]; - - char Layer2[DIVA_TRACE_LINE_TYPE_LEN]; - char Layer3[DIVA_TRACE_LINE_TYPE_LEN]; - - char RemoteAddress[DIVA_TRACE_LINE_TYPE_LEN]; - char RemoteSubAddress[DIVA_TRACE_LINE_TYPE_LEN]; - - char LocalAddress[DIVA_TRACE_LINE_TYPE_LEN]; - char LocalSubAddress[DIVA_TRACE_LINE_TYPE_LEN]; - - diva_trace_ie_t call_BC; - diva_trace_ie_t call_HLC; - diva_trace_ie_t call_LLC; - - dword Charges; - - dword CallReference; - - dword LastDisconnecCause; - - char UserID[DIVA_TRACE_LINE_TYPE_LEN]; - - diva_trace_modem_state_t modem; - diva_trace_fax_state_t fax; - - diva_trace_interface_state_t *pInterface; - - diva_ifc_statistics_t *pInterfaceStat; - -} diva_trace_line_state_t; - -#define DIVA_SUPER_TRACE_NOTIFY_LINE_CHANGE ('l') -#define DIVA_SUPER_TRACE_NOTIFY_MODEM_CHANGE ('m') -#define DIVA_SUPER_TRACE_NOTIFY_FAX_CHANGE ('f') -#define DIVA_SUPER_TRACE_INTERFACE_CHANGE ('i') -#define DIVA_SUPER_TRACE_NOTIFY_STAT_CHANGE ('s') -#define DIVA_SUPER_TRACE_NOTIFY_MDM_STAT_CHANGE ('M') -#define DIVA_SUPER_TRACE_NOTIFY_FAX_STAT_CHANGE ('F') - -struct _diva_strace_library_interface; -typedef void (*diva_trace_channel_state_change_proc_t)(void *user_context, - struct _diva_strace_library_interface *hLib, - int Adapter, - diva_trace_line_state_t *channel, int notify_subject); -typedef void (*diva_trace_channel_trace_proc_t)(void *user_context, - struct _diva_strace_library_interface *hLib, - int Adapter, void *xlog_buffer, int length); -typedef void (*diva_trace_error_proc_t)(void *user_context, - struct _diva_strace_library_interface *hLib, - int Adapter, - int error, const char *file, int line); - -/* - This structure creates interface from user to library -*/ -typedef struct _diva_trace_library_user_interface { - void *user_context; - diva_trace_channel_state_change_proc_t notify_proc; - diva_trace_channel_trace_proc_t trace_proc; - diva_trace_error_proc_t error_notify_proc; -} diva_trace_library_user_interface_t; - -/* - Interface from Library to User -*/ -typedef int (*DivaSTraceLibraryStart_proc_t)(void *hLib); -typedef int (*DivaSTraceLibraryFinit_proc_t)(void *hLib); -typedef int (*DivaSTraceMessageInput_proc_t)(void *hLib); -typedef void* (*DivaSTraceGetHandle_proc_t)(void *hLib); - -/* - Turn Audio Tap trace on/off - Channel should be in the range 1 ... Number of Channels -*/ -typedef int (*DivaSTraceSetAudioTap_proc_t)(void *hLib, int Channel, int on); - -/* - Turn B-channel trace on/off - Channel should be in the range 1 ... Number of Channels -*/ -typedef int (*DivaSTraceSetBChannel_proc_t)(void *hLib, int Channel, int on); - -/* - Turn D-channel (Layer1/Layer2/Layer3) trace on/off - Layer1 - All D-channel frames received/sent over the interface - inclusive Layer 2 headers, Layer 2 frames and TEI management frames - Layer2 - Events from LAPD protocol instance with SAPI of signalling protocol - Layer3 - All D-channel frames addressed to assigned to the card TEI and - SAPI of signalling protocol, and signalling protocol events. 
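
The library talks back to its client exclusively through the three procedure types above, bundled into diva_trace_library_user_interface_t. A hypothetical client wiring (the three signatures are the typedefs above; all names and bodies here are invented for the sketch):

static void my_notify(void *ctx, struct _diva_strace_library_interface *hLib,
		      int Adapter, diva_trace_line_state_t *channel,
		      int notify_subject)
{
	if (notify_subject == DIVA_SUPER_TRACE_NOTIFY_LINE_CHANGE) {
		/* inspect channel->Line, channel->RemoteAddress, ... */
	}
}

static void my_trace(void *ctx, struct _diva_strace_library_interface *hLib,
		     int Adapter, void *xlog_buffer, int length)
{
	/* forward the XLOG bytes to the application's trace sink */
}

static void my_error(void *ctx, struct _diva_strace_library_interface *hLib,
		     int Adapter, int error, const char *file, int line)
{
	/* log the failure and tear the library instance down */
}

static const diva_trace_library_user_interface_t my_user_ifc = {
	NULL,		/* user_context */
	my_notify,	/* notify_proc */
	my_trace,	/* trace_proc */
	my_error,	/* error_notify_proc */
};
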
-*/ -typedef int (*DivaSTraceSetDChannel_proc_t)(void *hLib, int on); - -/* - Get overall card statistics -*/ -typedef int (*DivaSTraceGetOutgoingCallStatistics_proc_t)(void *hLib); -typedef int (*DivaSTraceGetIncomingCallStatistics_proc_t)(void *hLib); -typedef int (*DivaSTraceGetModemStatistics_proc_t)(void *hLib); -typedef int (*DivaSTraceGetFaxStatistics_proc_t)(void *hLib); -typedef int (*DivaSTraceGetBLayer1Statistics_proc_t)(void *hLib); -typedef int (*DivaSTraceGetBLayer2Statistics_proc_t)(void *hLib); -typedef int (*DivaSTraceGetDLayer1Statistics_proc_t)(void *hLib); -typedef int (*DivaSTraceGetDLayer2Statistics_proc_t)(void *hLib); - -/* - Call control -*/ -typedef int (*DivaSTraceClearCall_proc_t)(void *hLib, int Channel); - -typedef struct _diva_strace_library_interface { - void *hLib; - DivaSTraceLibraryStart_proc_t DivaSTraceLibraryStart; - DivaSTraceLibraryStart_proc_t DivaSTraceLibraryStop; - DivaSTraceLibraryFinit_proc_t DivaSTraceLibraryFinit; - DivaSTraceMessageInput_proc_t DivaSTraceMessageInput; - DivaSTraceGetHandle_proc_t DivaSTraceGetHandle; - DivaSTraceSetAudioTap_proc_t DivaSTraceSetAudioTap; - DivaSTraceSetBChannel_proc_t DivaSTraceSetBChannel; - DivaSTraceSetDChannel_proc_t DivaSTraceSetDChannel; - DivaSTraceSetDChannel_proc_t DivaSTraceSetInfo; - DivaSTraceGetOutgoingCallStatistics_proc_t \ - DivaSTraceGetOutgoingCallStatistics; - DivaSTraceGetIncomingCallStatistics_proc_t \ - DivaSTraceGetIncomingCallStatistics; - DivaSTraceGetModemStatistics_proc_t \ - DivaSTraceGetModemStatistics; - DivaSTraceGetFaxStatistics_proc_t \ - DivaSTraceGetFaxStatistics; - DivaSTraceGetBLayer1Statistics_proc_t \ - DivaSTraceGetBLayer1Statistics; - DivaSTraceGetBLayer2Statistics_proc_t \ - DivaSTraceGetBLayer2Statistics; - DivaSTraceGetDLayer1Statistics_proc_t \ - DivaSTraceGetDLayer1Statistics; - DivaSTraceGetDLayer2Statistics_proc_t \ - DivaSTraceGetDLayer2Statistics; - DivaSTraceClearCall_proc_t DivaSTraceClearCall; -} diva_strace_library_interface_t; - -/* - Create and return Library interface -*/ -diva_strace_library_interface_t *DivaSTraceLibraryCreateInstance(int Adapter, - const diva_trace_library_user_interface_t *user_proc, - byte *pmem); -dword DivaSTraceGetMemotyRequirement(int channels); - -#define DIVA_MAX_ADAPTERS 64 -#define DIVA_MAX_LINES 32 - -#endif diff --git a/drivers/isdn/hardware/eicon/maintidi.c b/drivers/isdn/hardware/eicon/maintidi.c deleted file mode 100644 index 2ee789f95867..000000000000 --- a/drivers/isdn/hardware/eicon/maintidi.c +++ /dev/null @@ -1,2194 +0,0 @@ -/* - * - Copyright (c) Eicon Networks, 2000. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 1.9 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#include "platform.h" -#include "kst_ifc.h" -#include "di_defs.h" -#include "maintidi.h" -#include "pc.h" -#include "man_defs.h" - - -extern void diva_mnt_internal_dprintf(dword drv_id, dword type, char *p, ...); - -#define MODEM_PARSE_ENTRIES 16 /* amount of variables of interest */ -#define FAX_PARSE_ENTRIES 12 /* amount of variables of interest */ -#define LINE_PARSE_ENTRIES 15 /* amount of variables of interest */ -#define STAT_PARSE_ENTRIES 70 /* amount of variables of interest */ - -/* - LOCAL FUNCTIONS -*/ -static int DivaSTraceLibraryStart(void *hLib); -static int DivaSTraceLibraryStop(void *hLib); -static int SuperTraceLibraryFinit(void *hLib); -static void *SuperTraceGetHandle(void *hLib); -static int SuperTraceMessageInput(void *hLib); -static int SuperTraceSetAudioTap(void *hLib, int Channel, int on); -static int SuperTraceSetBChannel(void *hLib, int Channel, int on); -static int SuperTraceSetDChannel(void *hLib, int on); -static int SuperTraceSetInfo(void *hLib, int on); -static int SuperTraceClearCall(void *hLib, int Channel); -static int SuperTraceGetOutgoingCallStatistics(void *hLib); -static int SuperTraceGetIncomingCallStatistics(void *hLib); -static int SuperTraceGetModemStatistics(void *hLib); -static int SuperTraceGetFaxStatistics(void *hLib); -static int SuperTraceGetBLayer1Statistics(void *hLib); -static int SuperTraceGetBLayer2Statistics(void *hLib); -static int SuperTraceGetDLayer1Statistics(void *hLib); -static int SuperTraceGetDLayer2Statistics(void *hLib); - -/* - LOCAL FUNCTIONS -*/ -static int ScheduleNextTraceRequest(diva_strace_context_t *pLib); -static int process_idi_event(diva_strace_context_t *pLib, - diva_man_var_header_t *pVar); -static int process_idi_info(diva_strace_context_t *pLib, - diva_man_var_header_t *pVar); -static int diva_modem_event(diva_strace_context_t *pLib, int Channel); -static int diva_fax_event(diva_strace_context_t *pLib, int Channel); -static int diva_line_event(diva_strace_context_t *pLib, int Channel); -static int diva_modem_info(diva_strace_context_t *pLib, - int Channel, - diva_man_var_header_t *pVar); -static int diva_fax_info(diva_strace_context_t *pLib, - int Channel, - diva_man_var_header_t *pVar); -static int diva_line_info(diva_strace_context_t *pLib, - int Channel, - diva_man_var_header_t *pVar); -static int diva_ifc_statistics(diva_strace_context_t *pLib, - diva_man_var_header_t *pVar); -static diva_man_var_header_t *get_next_var(diva_man_var_header_t *pVar); -static diva_man_var_header_t *find_var(diva_man_var_header_t *pVar, - const char *name); -static int diva_strace_read_int(diva_man_var_header_t *pVar, int *var); -static int diva_strace_read_uint(diva_man_var_header_t *pVar, dword *var); -static int diva_strace_read_asz(diva_man_var_header_t *pVar, char *var); -static int diva_strace_read_asc(diva_man_var_header_t *pVar, char *var); -static int diva_strace_read_ie(diva_man_var_header_t *pVar, - diva_trace_ie_t *var); -static void diva_create_parse_table(diva_strace_context_t *pLib); -static void diva_trace_error(diva_strace_context_t *pLib, - int error, const char *file, int line); -static void diva_trace_notify_user(diva_strace_context_t *pLib, - int Channel, - int notify_subject); -static int diva_trace_read_variable(diva_man_var_header_t *pVar, - void *variable); - -/* - Initialize the library and return context - of the created trace object that will represent - the IDI adapter. - Return 0 on error. 
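
The contract implied here, and spelled out by the function body that follows, is that the caller provides one flat memory block sized up front; the library places its context at the start of it and carves the parse table out of the remainder. A hedged caller sketch (diva_os_malloc()/diva_os_free() are the driver's allocator, whose (flags, size) signature is assumed here; DivaSTraceGetMemotyRequirement is spelled that way in kst_ifc.h above; my_user_ifc stands for a client-side diva_trace_library_user_interface_t):

/* Hypothetical caller, error handling trimmed. */
byte *pmem = diva_os_malloc(0, DivaSTraceGetMemotyRequirement(channels));
diva_strace_library_interface_t *lib;

lib = DivaSTraceLibraryCreateInstance(Adapter, &my_user_ifc, pmem);
if (!lib)
	diva_os_free(0, pmem);	/* create failed, block is all ours again */
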
-*/ -diva_strace_library_interface_t *DivaSTraceLibraryCreateInstance(int Adapter, - const diva_trace_library_user_interface_t *user_proc, - byte *pmem) { - diva_strace_context_t *pLib = (diva_strace_context_t *)pmem; - int i; - - if (!pLib) { - return NULL; - } - - pmem += sizeof(*pLib); - memset(pLib, 0x00, sizeof(*pLib)); - - pLib->Adapter = Adapter; - - /* - Set up Library Interface - */ - pLib->instance.hLib = pLib; - pLib->instance.DivaSTraceLibraryStart = DivaSTraceLibraryStart; - pLib->instance.DivaSTraceLibraryStop = DivaSTraceLibraryStop; - pLib->instance.DivaSTraceLibraryFinit = SuperTraceLibraryFinit; - pLib->instance.DivaSTraceMessageInput = SuperTraceMessageInput; - pLib->instance.DivaSTraceGetHandle = SuperTraceGetHandle; - pLib->instance.DivaSTraceSetAudioTap = SuperTraceSetAudioTap; - pLib->instance.DivaSTraceSetBChannel = SuperTraceSetBChannel; - pLib->instance.DivaSTraceSetDChannel = SuperTraceSetDChannel; - pLib->instance.DivaSTraceSetInfo = SuperTraceSetInfo; - pLib->instance.DivaSTraceGetOutgoingCallStatistics = \ - SuperTraceGetOutgoingCallStatistics; - pLib->instance.DivaSTraceGetIncomingCallStatistics = \ - SuperTraceGetIncomingCallStatistics; - pLib->instance.DivaSTraceGetModemStatistics = \ - SuperTraceGetModemStatistics; - pLib->instance.DivaSTraceGetFaxStatistics = \ - SuperTraceGetFaxStatistics; - pLib->instance.DivaSTraceGetBLayer1Statistics = \ - SuperTraceGetBLayer1Statistics; - pLib->instance.DivaSTraceGetBLayer2Statistics = \ - SuperTraceGetBLayer2Statistics; - pLib->instance.DivaSTraceGetDLayer1Statistics = \ - SuperTraceGetDLayer1Statistics; - pLib->instance.DivaSTraceGetDLayer2Statistics = \ - SuperTraceGetDLayer2Statistics; - pLib->instance.DivaSTraceClearCall = SuperTraceClearCall; - - - if (user_proc) { - pLib->user_proc_table.user_context = user_proc->user_context; - pLib->user_proc_table.notify_proc = user_proc->notify_proc; - pLib->user_proc_table.trace_proc = user_proc->trace_proc; - pLib->user_proc_table.error_notify_proc = user_proc->error_notify_proc; - } - - if (!(pLib->hAdapter = SuperTraceOpenAdapter(Adapter))) { - diva_mnt_internal_dprintf(0, DLI_ERR, "Can not open XDI adapter"); - return NULL; - } - pLib->Channels = SuperTraceGetNumberOfChannels(pLib->hAdapter); - - /* - Calculate amount of parte table entites necessary to translate - information from all events of onterest - */ - pLib->parse_entries = (MODEM_PARSE_ENTRIES + FAX_PARSE_ENTRIES + \ - STAT_PARSE_ENTRIES + \ - LINE_PARSE_ENTRIES + 1) * pLib->Channels; - pLib->parse_table = (diva_strace_path2action_t *)pmem; - - for (i = 0; i < 30; i++) { - pLib->lines[i].pInterface = &pLib->Interface; - pLib->lines[i].pInterfaceStat = &pLib->InterfaceStat; - } - - pLib->e.R = &pLib->RData; - - pLib->req_busy = 1; - pLib->rc_ok = ASSIGN_OK; - - diva_create_parse_table(pLib); - - return ((diva_strace_library_interface_t *)pLib); -} - -static int DivaSTraceLibraryStart(void *hLib) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - - return (SuperTraceASSIGN(pLib->hAdapter, pLib->buffer)); -} - -/* - Return (-1) on error - Return (0) if was initiated or pending - Return (1) if removal is complete -*/ -static int DivaSTraceLibraryStop(void *hLib) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - - if (!pLib->e.Id) { /* Was never started/assigned */ - return (1); - } - - switch (pLib->removal_state) { - case 0: - pLib->removal_state = 1; - ScheduleNextTraceRequest(pLib); - break; - - case 3: - return (1); - } - - return (0); -} - -static int 
SuperTraceLibraryFinit(void *hLib) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - if (pLib) { - if (pLib->hAdapter) { - SuperTraceCloseAdapter(pLib->hAdapter); - } - return (0); - } - return (-1); -} - -static void *SuperTraceGetHandle(void *hLib) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - - return (&pLib->e); -} - -/* - After library handle object is gone in signaled state - this function should be called and will pick up incoming - IDI messages (return codes and indications). -*/ -static int SuperTraceMessageInput(void *hLib) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - int ret = 0; - byte Rc, Ind; - - if (pLib->e.complete == 255) { - /* - Process return code - */ - pLib->req_busy = 0; - Rc = pLib->e.Rc; - pLib->e.Rc = 0; - - if (pLib->removal_state == 2) { - pLib->removal_state = 3; - return (0); - } - - if (Rc != pLib->rc_ok) { - int ignore = 0; - /* - Auto-detect amount of events/channels and features - */ - if (pLib->general_b_ch_event == 1) { - pLib->general_b_ch_event = 2; - ignore = 1; - } else if (pLib->general_fax_event == 1) { - pLib->general_fax_event = 2; - ignore = 1; - } else if (pLib->general_mdm_event == 1) { - pLib->general_mdm_event = 2; - ignore = 1; - } else if ((pLib->ChannelsTraceActive < pLib->Channels) && pLib->ChannelsTraceActive) { - pLib->ChannelsTraceActive = pLib->Channels; - ignore = 1; - } else if (pLib->ModemTraceActive < pLib->Channels) { - pLib->ModemTraceActive = pLib->Channels; - ignore = 1; - } else if (pLib->FaxTraceActive < pLib->Channels) { - pLib->FaxTraceActive = pLib->Channels; - ignore = 1; - } else if (pLib->audio_trace_init == 2) { - ignore = 1; - pLib->audio_trace_init = 1; - } else if (pLib->eye_pattern_pending) { - pLib->eye_pattern_pending = 0; - ignore = 1; - } else if (pLib->audio_tap_pending) { - pLib->audio_tap_pending = 0; - ignore = 1; - } - - if (!ignore) { - return (-1); /* request failed */ - } - } else { - if (pLib->general_b_ch_event == 1) { - pLib->ChannelsTraceActive = pLib->Channels; - pLib->general_b_ch_event = 2; - } else if (pLib->general_fax_event == 1) { - pLib->general_fax_event = 2; - pLib->FaxTraceActive = pLib->Channels; - } else if (pLib->general_mdm_event == 1) { - pLib->general_mdm_event = 2; - pLib->ModemTraceActive = pLib->Channels; - } - } - if (pLib->audio_trace_init == 2) { - pLib->audio_trace_init = 1; - } - pLib->rc_ok = 0xff; /* default OK after assign was done */ - if ((ret = ScheduleNextTraceRequest(pLib))) { - return (-1); - } - } else { - /* - Process indication - Always 'RNR' indication if return code is pending - */ - Ind = pLib->e.Ind; - pLib->e.Ind = 0; - if (pLib->removal_state) { - pLib->e.RNum = 0; - pLib->e.RNR = 2; - } else if (pLib->req_busy) { - pLib->e.RNum = 0; - pLib->e.RNR = 1; - } else { - if (pLib->e.complete != 0x02) { - /* - Look-ahead call, set up buffers - */ - pLib->e.RNum = 1; - pLib->e.R->P = (byte *)&pLib->buffer[0]; - pLib->e.R->PLength = (word)(sizeof(pLib->buffer) - 1); - - } else { - /* - Indication reception complete, process it now - */ - byte *p = (byte *)&pLib->buffer[0]; - pLib->buffer[pLib->e.R->PLength] = 0; /* terminate I.E. 
with zero */ - - switch (Ind) { - case MAN_COMBI_IND: { - int total_length = pLib->e.R->PLength; - word this_ind_length; - - while (total_length > 3 && *p) { - Ind = *p++; - this_ind_length = (word)p[0] | ((word)p[1] << 8); - p += 2; - - switch (Ind) { - case MAN_INFO_IND: - if (process_idi_info(pLib, (diva_man_var_header_t *)p)) { - return (-1); - } - break; - case MAN_EVENT_IND: - if (process_idi_event(pLib, (diva_man_var_header_t *)p)) { - return (-1); - } - break; - case MAN_TRACE_IND: - if (pLib->trace_on == 1) { - /* - Ignore first trace event that is result of - EVENT_ON operation - */ - pLib->trace_on++; - } else { - /* - Delivery XLOG buffer to application - */ - if (pLib->user_proc_table.trace_proc) { - (*(pLib->user_proc_table.trace_proc))(pLib->user_proc_table.user_context, - &pLib->instance, pLib->Adapter, - p, this_ind_length); - } - } - break; - default: - diva_mnt_internal_dprintf(0, DLI_ERR, "Unknown IDI Ind (DMA mode): %02x", Ind); - } - p += (this_ind_length + 1); - total_length -= (4 + this_ind_length); - } - } break; - case MAN_INFO_IND: - if (process_idi_info(pLib, (diva_man_var_header_t *)p)) { - return (-1); - } - break; - case MAN_EVENT_IND: - if (process_idi_event(pLib, (diva_man_var_header_t *)p)) { - return (-1); - } - break; - case MAN_TRACE_IND: - if (pLib->trace_on == 1) { - /* - Ignore first trace event that is result of - EVENT_ON operation - */ - pLib->trace_on++; - } else { - /* - Delivery XLOG buffer to application - */ - if (pLib->user_proc_table.trace_proc) { - (*(pLib->user_proc_table.trace_proc))(pLib->user_proc_table.user_context, - &pLib->instance, pLib->Adapter, - p, pLib->e.R->PLength); - } - } - break; - default: - diva_mnt_internal_dprintf(0, DLI_ERR, "Unknown IDI Ind: %02x", Ind); - } - } - } - } - - if ((ret = ScheduleNextTraceRequest(pLib))) { - return (-1); - } - - return (ret); -} - -/* - Internal state machine responsible for scheduling of requests -*/ -static int ScheduleNextTraceRequest(diva_strace_context_t *pLib) { - char name[64]; - int ret = 0; - int i; - - if (pLib->req_busy) { - return (0); - } - - if (pLib->removal_state == 1) { - if (SuperTraceREMOVE(pLib->hAdapter)) { - pLib->removal_state = 3; - } else { - pLib->req_busy = 1; - pLib->removal_state = 2; - } - return (0); - } - - if (pLib->removal_state) { - return (0); - } - - if (!pLib->general_b_ch_event) { - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, "State\\B Event", pLib->buffer))) { - return (-1); - } - pLib->general_b_ch_event = 1; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->general_fax_event) { - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, "State\\FAX Event", pLib->buffer))) { - return (-1); - } - pLib->general_fax_event = 1; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->general_mdm_event) { - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, "State\\Modem Event", pLib->buffer))) { - return (-1); - } - pLib->general_mdm_event = 1; - pLib->req_busy = 1; - return (0); - } - - if (pLib->ChannelsTraceActive < pLib->Channels) { - pLib->ChannelsTraceActive++; - sprintf(name, "State\\B%d\\Line", pLib->ChannelsTraceActive); - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, name, pLib->buffer))) { - pLib->ChannelsTraceActive--; - return (-1); - } - pLib->req_busy = 1; - return (0); - } - - if (pLib->ModemTraceActive < pLib->Channels) { - pLib->ModemTraceActive++; - sprintf(name, "State\\B%d\\Modem\\Event", pLib->ModemTraceActive); - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, name, pLib->buffer))) { - pLib->ModemTraceActive--; - 
return (-1); - } - pLib->req_busy = 1; - return (0); - } - - if (pLib->FaxTraceActive < pLib->Channels) { - pLib->FaxTraceActive++; - sprintf(name, "State\\B%d\\FAX\\Event", pLib->FaxTraceActive); - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, name, pLib->buffer))) { - pLib->FaxTraceActive--; - return (-1); - } - pLib->req_busy = 1; - return (0); - } - - if (!pLib->trace_mask_init) { - word tmp = 0x0000; - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\Event Enable", - &tmp, - 0x87, /* MI_BITFLD */ - sizeof(tmp))) { - return (-1); - } - pLib->trace_mask_init = 1; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->audio_trace_init) { - dword tmp = 0x00000000; - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\AudioCh# Enable", - &tmp, - 0x87, /* MI_BITFLD */ - sizeof(tmp))) { - return (-1); - } - pLib->audio_trace_init = 2; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->bchannel_init) { - dword tmp = 0x00000000; - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\B-Ch# Enable", - &tmp, - 0x87, /* MI_BITFLD */ - sizeof(tmp))) { - return (-1); - } - pLib->bchannel_init = 1; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->trace_length_init) { - word tmp = 30; - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\Max Log Length", - &tmp, - 0x82, /* MI_UINT */ - sizeof(tmp))) { - return (-1); - } - pLib->trace_length_init = 1; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->trace_on) { - if (SuperTraceTraceOnRequest(pLib->hAdapter, - "Trace\\Log Buffer", - pLib->buffer)) { - return (-1); - } - pLib->trace_on = 1; - pLib->req_busy = 1; - return (0); - } - - if (pLib->trace_event_mask != pLib->current_trace_event_mask) { - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\Event Enable", - &pLib->trace_event_mask, - 0x87, /* MI_BITFLD */ - sizeof(pLib->trace_event_mask))) { - return (-1); - } - pLib->current_trace_event_mask = pLib->trace_event_mask; - pLib->req_busy = 1; - return (0); - } - - if ((pLib->audio_tap_pending >= 0) && (pLib->audio_tap_mask != pLib->current_audio_tap_mask)) { - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\AudioCh# Enable", - &pLib->audio_tap_mask, - 0x87, /* MI_BITFLD */ - sizeof(pLib->audio_tap_mask))) { - return (-1); - } - pLib->current_audio_tap_mask = pLib->audio_tap_mask; - pLib->audio_tap_pending = 1; - pLib->req_busy = 1; - return (0); - } - - if ((pLib->eye_pattern_pending >= 0) && (pLib->audio_tap_mask != pLib->current_eye_pattern_mask)) { - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\EyeCh# Enable", - &pLib->audio_tap_mask, - 0x87, /* MI_BITFLD */ - sizeof(pLib->audio_tap_mask))) { - return (-1); - } - pLib->current_eye_pattern_mask = pLib->audio_tap_mask; - pLib->eye_pattern_pending = 1; - pLib->req_busy = 1; - return (0); - } - - if (pLib->bchannel_trace_mask != pLib->current_bchannel_trace_mask) { - if (SuperTraceWriteVar(pLib->hAdapter, - pLib->buffer, - "Trace\\B-Ch# Enable", - &pLib->bchannel_trace_mask, - 0x87, /* MI_BITFLD */ - sizeof(pLib->bchannel_trace_mask))) { - return (-1); - } - pLib->current_bchannel_trace_mask = pLib->bchannel_trace_mask; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->trace_events_down) { - if (SuperTraceTraceOnRequest(pLib->hAdapter, - "Events Down", - pLib->buffer)) { - return (-1); - } - pLib->trace_events_down = 1; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->l1_trace) { - if (SuperTraceTraceOnRequest(pLib->hAdapter, - "State\\Layer1", - pLib->buffer)) { - 
return (-1); - } - pLib->l1_trace = 1; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->l2_trace) { - if (SuperTraceTraceOnRequest(pLib->hAdapter, - "State\\Layer2 No1", - pLib->buffer)) { - return (-1); - } - pLib->l2_trace = 1; - pLib->req_busy = 1; - return (0); - } - - for (i = 0; i < 30; i++) { - if (pLib->pending_line_status & (1L << i)) { - sprintf(name, "State\\B%d", i + 1); - if (SuperTraceReadRequest(pLib->hAdapter, name, pLib->buffer)) { - return (-1); - } - pLib->pending_line_status &= ~(1L << i); - pLib->req_busy = 1; - return (0); - } - if (pLib->pending_modem_status & (1L << i)) { - sprintf(name, "State\\B%d\\Modem", i + 1); - if (SuperTraceReadRequest(pLib->hAdapter, name, pLib->buffer)) { - return (-1); - } - pLib->pending_modem_status &= ~(1L << i); - pLib->req_busy = 1; - return (0); - } - if (pLib->pending_fax_status & (1L << i)) { - sprintf(name, "State\\B%d\\FAX", i + 1); - if (SuperTraceReadRequest(pLib->hAdapter, name, pLib->buffer)) { - return (-1); - } - pLib->pending_fax_status &= ~(1L << i); - pLib->req_busy = 1; - return (0); - } - if (pLib->clear_call_command & (1L << i)) { - sprintf(name, "State\\B%d\\Clear Call", i + 1); - if (SuperTraceExecuteRequest(pLib->hAdapter, name, pLib->buffer)) { - return (-1); - } - pLib->clear_call_command &= ~(1L << i); - pLib->req_busy = 1; - return (0); - } - } - - if (pLib->outgoing_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\Outgoing Calls", - pLib->buffer)) { - return (-1); - } - pLib->outgoing_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (pLib->incoming_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\Incoming Calls", - pLib->buffer)) { - return (-1); - } - pLib->incoming_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (pLib->modem_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\Modem", - pLib->buffer)) { - return (-1); - } - pLib->modem_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (pLib->fax_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\FAX", - pLib->buffer)) { - return (-1); - } - pLib->fax_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (pLib->b1_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\B-Layer1", - pLib->buffer)) { - return (-1); - } - pLib->b1_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (pLib->b2_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\B-Layer2", - pLib->buffer)) { - return (-1); - } - pLib->b2_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (pLib->d1_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\D-Layer1", - pLib->buffer)) { - return (-1); - } - pLib->d1_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (pLib->d2_ifc_stats) { - if (SuperTraceReadRequest(pLib->hAdapter, - "Statistics\\D-Layer2", - pLib->buffer)) { - return (-1); - } - pLib->d2_ifc_stats = 0; - pLib->req_busy = 1; - return (0); - } - - if (!pLib->IncomingCallsCallsActive) { - pLib->IncomingCallsCallsActive = 1; - sprintf(name, "%s", "Statistics\\Incoming Calls\\Calls"); - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, name, pLib->buffer))) { - pLib->IncomingCallsCallsActive = 0; - return (-1); - } - pLib->req_busy = 1; - return (0); - } - if (!pLib->IncomingCallsConnectedActive) { - pLib->IncomingCallsConnectedActive = 1; - sprintf(name, "%s", "Statistics\\Incoming Calls\\Connected"); - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, name, 
pLib->buffer))) { - pLib->IncomingCallsConnectedActive = 0; - return (-1); - } - pLib->req_busy = 1; - return (0); - } - if (!pLib->OutgoingCallsCallsActive) { - pLib->OutgoingCallsCallsActive = 1; - sprintf(name, "%s", "Statistics\\Outgoing Calls\\Calls"); - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, name, pLib->buffer))) { - pLib->OutgoingCallsCallsActive = 0; - return (-1); - } - pLib->req_busy = 1; - return (0); - } - if (!pLib->OutgoingCallsConnectedActive) { - pLib->OutgoingCallsConnectedActive = 1; - sprintf(name, "%s", "Statistics\\Outgoing Calls\\Connected"); - if ((ret = SuperTraceTraceOnRequest(pLib->hAdapter, name, pLib->buffer))) { - pLib->OutgoingCallsConnectedActive = 0; - return (-1); - } - pLib->req_busy = 1; - return (0); - } - - return (0); -} - -static int process_idi_event(diva_strace_context_t *pLib, - diva_man_var_header_t *pVar) { - const char *path = (char *)&pVar->path_length + 1; - char name[64]; - int i; - - if (!strncmp("State\\B Event", path, pVar->path_length)) { - dword ch_id; - if (!diva_trace_read_variable(pVar, &ch_id)) { - if (!pLib->line_init_event && !pLib->pending_line_status) { - for (i = 1; i <= pLib->Channels; i++) { - diva_line_event(pLib, i); - } - return (0); - } else if (ch_id && ch_id <= pLib->Channels) { - return (diva_line_event(pLib, (int)ch_id)); - } - return (0); - } - return (-1); - } - - if (!strncmp("State\\FAX Event", path, pVar->path_length)) { - dword ch_id; - if (!diva_trace_read_variable(pVar, &ch_id)) { - if (!pLib->pending_fax_status && !pLib->fax_init_event) { - for (i = 1; i <= pLib->Channels; i++) { - diva_fax_event(pLib, i); - } - return (0); - } else if (ch_id && ch_id <= pLib->Channels) { - return (diva_fax_event(pLib, (int)ch_id)); - } - return (0); - } - return (-1); - } - - if (!strncmp("State\\Modem Event", path, pVar->path_length)) { - dword ch_id; - if (!diva_trace_read_variable(pVar, &ch_id)) { - if (!pLib->pending_modem_status && !pLib->modem_init_event) { - for (i = 1; i <= pLib->Channels; i++) { - diva_modem_event(pLib, i); - } - return (0); - } else if (ch_id && ch_id <= pLib->Channels) { - return (diva_modem_event(pLib, (int)ch_id)); - } - return (0); - } - return (-1); - } - - /* - First look for Line Event - */ - for (i = 1; i <= pLib->Channels; i++) { - sprintf(name, "State\\B%d\\Line", i); - if (find_var(pVar, name)) { - return (diva_line_event(pLib, i)); - } - } - - /* - Look for Moden Progress Event - */ - for (i = 1; i <= pLib->Channels; i++) { - sprintf(name, "State\\B%d\\Modem\\Event", i); - if (find_var(pVar, name)) { - return (diva_modem_event(pLib, i)); - } - } - - /* - Look for Fax Event - */ - for (i = 1; i <= pLib->Channels; i++) { - sprintf(name, "State\\B%d\\FAX\\Event", i); - if (find_var(pVar, name)) { - return (diva_fax_event(pLib, i)); - } - } - - /* - Notification about loss of events - */ - if (!strncmp("Events Down", path, pVar->path_length)) { - if (pLib->trace_events_down == 1) { - pLib->trace_events_down = 2; - } else { - diva_trace_error(pLib, 1, "Events Down", 0); - } - return (0); - } - - if (!strncmp("State\\Layer1", path, pVar->path_length)) { - diva_strace_read_asz(pVar, &pLib->lines[0].pInterface->Layer1[0]); - if (pLib->l1_trace == 1) { - pLib->l1_trace = 2; - } else { - diva_trace_notify_user(pLib, 0, DIVA_SUPER_TRACE_INTERFACE_CHANGE); - } - return (0); - } - if (!strncmp("State\\Layer2 No1", path, pVar->path_length)) { - char *tmp = &pLib->lines[0].pInterface->Layer2[0]; - dword l2_state; - if (diva_strace_read_uint(pVar, &l2_state)) - return -1; - - switch 
(l2_state) { - case 0: - strcpy(tmp, "Idle"); - break; - case 1: - strcpy(tmp, "Layer2 UP"); - break; - case 2: - strcpy(tmp, "Layer2 Disconnecting"); - break; - case 3: - strcpy(tmp, "Layer2 Connecting"); - break; - case 4: - strcpy(tmp, "SPID Initializing"); - break; - case 5: - strcpy(tmp, "SPID Initialised"); - break; - case 6: - strcpy(tmp, "Layer2 Connecting"); - break; - - case 7: - strcpy(tmp, "Auto SPID Stopped"); - break; - - case 8: - strcpy(tmp, "Auto SPID Idle"); - break; - - case 9: - strcpy(tmp, "Auto SPID Requested"); - break; - - case 10: - strcpy(tmp, "Auto SPID Delivery"); - break; - - case 11: - strcpy(tmp, "Auto SPID Complete"); - break; - - default: - sprintf(tmp, "U:%d", (int)l2_state); - } - if (pLib->l2_trace == 1) { - pLib->l2_trace = 2; - } else { - diva_trace_notify_user(pLib, 0, DIVA_SUPER_TRACE_INTERFACE_CHANGE); - } - return (0); - } - - if (!strncmp("Statistics\\Incoming Calls\\Calls", path, pVar->path_length) || - !strncmp("Statistics\\Incoming Calls\\Connected", path, pVar->path_length)) { - return (SuperTraceGetIncomingCallStatistics(pLib)); - } - - if (!strncmp("Statistics\\Outgoing Calls\\Calls", path, pVar->path_length) || - !strncmp("Statistics\\Outgoing Calls\\Connected", path, pVar->path_length)) { - return (SuperTraceGetOutgoingCallStatistics(pLib)); - } - - return (-1); -} - -static int diva_line_event(diva_strace_context_t *pLib, int Channel) { - pLib->pending_line_status |= (1L << (Channel - 1)); - return (0); -} - -static int diva_modem_event(diva_strace_context_t *pLib, int Channel) { - pLib->pending_modem_status |= (1L << (Channel - 1)); - return (0); -} - -static int diva_fax_event(diva_strace_context_t *pLib, int Channel) { - pLib->pending_fax_status |= (1L << (Channel - 1)); - return (0); -} - -/* - Process INFO indications that arrive from the card - Uses path of first I.E. 
to detect the source of the
-  indication
-*/
-static int process_idi_info(diva_strace_context_t *pLib,
-                            diva_man_var_header_t *pVar) {
-    const char *path = (char *)&pVar->path_length + 1;
-    char name[64];
-    int i, len;
-
-    /*
-      First look for Modem Status Info
-    */
-    for (i = pLib->Channels; i > 0; i--) {
-        len = sprintf(name, "State\\B%d\\Modem", i);
-        if (!strncmp(name, path, len)) {
-            return (diva_modem_info(pLib, i, pVar));
-        }
-    }
-
-    /*
-      Look for Fax Status Info
-    */
-    for (i = pLib->Channels; i > 0; i--) {
-        len = sprintf(name, "State\\B%d\\FAX", i);
-        if (!strncmp(name, path, len)) {
-            return (diva_fax_info(pLib, i, pVar));
-        }
-    }
-
-    /*
-      Look for Line Status Info
-    */
-    for (i = pLib->Channels; i > 0; i--) {
-        len = sprintf(name, "State\\B%d", i);
-        if (!strncmp(name, path, len)) {
-            return (diva_line_info(pLib, i, pVar));
-        }
-    }
-
-    if (!diva_ifc_statistics(pLib, pVar)) {
-        return (0);
-    }
-
-    return (-1);
-}
-
-/*
-  MODEM INSTANCE STATE UPDATE
-
-  Update Modem Status Information and issue notification to user,
-  that will inform about change in the state of the modem instance that is
-  associated with this channel
-*/
-static int diva_modem_info(diva_strace_context_t *pLib,
-                           int Channel,
-                           diva_man_var_header_t *pVar) {
-    diva_man_var_header_t *cur;
-    int i, nr = Channel - 1;
-
-    for (i = pLib->modem_parse_entry_first[nr];
-         i <= pLib->modem_parse_entry_last[nr]; i++) {
-        if ((cur = find_var(pVar, pLib->parse_table[i].path))) {
-            if (diva_trace_read_variable(cur, pLib->parse_table[i].variable)) {
-                diva_trace_error(pLib, -3, __FILE__, __LINE__);
-                return (-1);
-            }
-        } else {
-            diva_trace_error(pLib, -2, __FILE__, __LINE__);
-            return (-1);
-        }
-    }
-
-    /*
-      We do not use the first event to notify the user - this is the event that
-      is generated as result of the EVENT ON operation and is used only to
-      initialize internal variables of the application
-    */
-    if (pLib->modem_init_event & (1L << nr)) {
-        diva_trace_notify_user(pLib, nr, DIVA_SUPER_TRACE_NOTIFY_MODEM_CHANGE);
-    } else {
-        pLib->modem_init_event |= (1L << nr);
-    }
-
-    return (0);
-}
-
-static int diva_fax_info(diva_strace_context_t *pLib,
-                         int Channel,
-                         diva_man_var_header_t *pVar) {
-    diva_man_var_header_t *cur;
-    int i, nr = Channel - 1;
-
-    for (i = pLib->fax_parse_entry_first[nr];
-         i <= pLib->fax_parse_entry_last[nr]; i++) {
-        if ((cur = find_var(pVar, pLib->parse_table[i].path))) {
-            if (diva_trace_read_variable(cur, pLib->parse_table[i].variable)) {
-                diva_trace_error(pLib, -3, __FILE__, __LINE__);
-                return (-1);
-            }
-        } else {
-            diva_trace_error(pLib, -2, __FILE__, __LINE__);
-            return (-1);
-        }
-    }
-
-    /*
-      We do not use the first event to notify the user - this is the event that
-      is generated as result of the EVENT ON operation and is used only to
-      initialize internal variables of the application
-    */
-    if (pLib->fax_init_event & (1L << nr)) {
-        diva_trace_notify_user(pLib, nr, DIVA_SUPER_TRACE_NOTIFY_FAX_CHANGE);
-    } else {
-        pLib->fax_init_event |= (1L << nr);
-    }
-
-    return (0);
-}
-
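The two update functions above share one init-event idiom: the first indication per channel only primes the cached state (it is the echo of the EVENT ON request), and only later indications reach the user callback. A minimal standalone sketch of that idiom, with simplified, illustrative types rather than the driver's own structures:

#include <stdio.h>

static unsigned long init_event;   /* one bit per channel, as in the driver */

static void channel_event(int nr)  /* nr is the zero-based channel */
{
    if (init_event & (1UL << nr))
        printf("channel %d: notify user\n", nr + 1);
    else
        init_event |= (1UL << nr);  /* first event: initialize only */
}

int main(void)
{
    channel_event(0);   /* primes channel 1, no notification */
    channel_event(0);   /* now notifies */
    return 0;
}

Priming on the first event keeps the EVENT ON echo from being reported as a state change. diva_line_info(), next, applies the same idiom with one extra twist for lines that are already non-idle.
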
-/*
-  LINE STATE UPDATE
-  Update Line Status Information and issue notification to user,
-  that will inform about change in the line state.
-*/
-static int diva_line_info(diva_strace_context_t *pLib,
-                          int Channel,
-                          diva_man_var_header_t *pVar) {
-    diva_man_var_header_t *cur;
-    int i, nr = Channel - 1;
-
-    for (i = pLib->line_parse_entry_first[nr];
-         i <= pLib->line_parse_entry_last[nr]; i++) {
-        if ((cur = find_var(pVar, pLib->parse_table[i].path))) {
-            if (diva_trace_read_variable(cur, pLib->parse_table[i].variable)) {
-                diva_trace_error(pLib, -3, __FILE__, __LINE__);
-                return (-1);
-            }
-        } else {
-            diva_trace_error(pLib, -2, __FILE__, __LINE__);
-            return (-1);
-        }
-    }
-
-    /*
-      We do not use the first event to notify the user - this is the event that
-      is generated as result of the EVENT ON operation and is used only to
-      initialize internal variables of the application
-
-      The exception is if the line is "online". In this case we have to notify
-      the user about this condition.
-    */
-    if (pLib->line_init_event & (1L << nr)) {
-        diva_trace_notify_user(pLib, nr, DIVA_SUPER_TRACE_NOTIFY_LINE_CHANGE);
-    } else {
-        pLib->line_init_event |= (1L << nr);
-        if (strcmp(&pLib->lines[nr].Line[0], "Idle")) {
-            diva_trace_notify_user(pLib, nr, DIVA_SUPER_TRACE_NOTIFY_LINE_CHANGE);
-        }
-    }
-
-    return (0);
-}
-
-/*
-  Move position to the next variable in the chain
-*/
-static diva_man_var_header_t *get_next_var(diva_man_var_header_t *pVar) {
-    byte *msg = (byte *)pVar;
-    byte *start;
-    int msg_length;
-
-    if (*msg != ESC) return NULL;
-
-    start = msg + 2;
-    msg_length = *(msg + 1);
-    msg = (start + msg_length);
-
-    if (*msg != ESC) return NULL;
-
-    return ((diva_man_var_header_t *)msg);
-}
-
-/*
-  Move position to the variable with the given name
-*/
-static diva_man_var_header_t *find_var(diva_man_var_header_t *pVar,
-                                       const char *name) {
-    const char *path;
-
-    do {
-        path = (char *)&pVar->path_length + 1;
-
-        if (!strncmp(name, path, pVar->path_length)) {
-            break;
-        }
-    } while ((pVar = get_next_var(pVar)));
-
-    return (pVar);
-}
-
-static void diva_create_line_parse_table(diva_strace_context_t *pLib,
-                                         int Channel) {
-    diva_trace_line_state_t *pLine = &pLib->lines[Channel];
-    int nr = Channel + 1;
-
-    if ((pLib->cur_parse_entry + LINE_PARSE_ENTRIES) >= pLib->parse_entries) {
-        diva_trace_error(pLib, -1, __FILE__, __LINE__);
-        return;
-    }
-
-    pLine->ChannelNumber = nr;
-
-    pLib->line_parse_entry_first[Channel] = pLib->cur_parse_entry;
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
-            "State\\B%d\\Framing", nr);
-    pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->Framing[0];
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
-            "State\\B%d\\Line", nr);
-    pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->Line[0];
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
-            "State\\B%d\\Layer2", nr);
-    pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->Layer2[0];
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
-            "State\\B%d\\Layer3", nr);
-    pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->Layer3[0];
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
-            "State\\B%d\\Remote Address", nr);
-    pLib->parse_table[pLib->cur_parse_entry++].variable = \
-        &pLine->RemoteAddress[0];
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
-            "State\\B%d\\Remote SubAddr", nr);
-    pLib->parse_table[pLib->cur_parse_entry++].variable = \
-        &pLine->RemoteSubAddress[0];
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
-            "State\\B%d\\Local Address", nr);
-    pLib->parse_table[pLib->cur_parse_entry++].variable = \
-        &pLine->LocalAddress[0];
-
-    sprintf(pLib->parse_table[pLib->cur_parse_entry].path,
"State\\B%d\\Local SubAddr", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLine->LocalSubAddress[0]; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\BC", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->call_BC; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\HLC", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->call_HLC; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\LLC", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->call_LLC; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Charges", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->Charges; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Call Reference", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->CallReference; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Last Disc Cause", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLine->LastDisconnecCause; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\User ID", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pLine->UserID[0]; - - pLib->line_parse_entry_last[Channel] = pLib->cur_parse_entry - 1; -} - -static void diva_create_fax_parse_table(diva_strace_context_t *pLib, - int Channel) { - diva_trace_fax_state_t *pFax = &pLib->lines[Channel].fax; - int nr = Channel + 1; - - if ((pLib->cur_parse_entry + FAX_PARSE_ENTRIES) >= pLib->parse_entries) { - diva_trace_error(pLib, -1, __FILE__, __LINE__); - return; - } - pFax->ChannelNumber = nr; - - pLib->fax_parse_entry_first[Channel] = pLib->cur_parse_entry; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Event", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Event; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Page Counter", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Page_Counter; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Features", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Features; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Station ID", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Station_ID[0]; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Subaddress", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Subaddress[0]; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Password", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Password[0]; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Speed", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Speed; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Resolution", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Resolution; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Paper Width", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Paper_Width; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Paper Length", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Paper_Length; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - 
"State\\B%d\\FAX\\Scanline Time", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Scanline_Time; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\FAX\\Disc Reason", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pFax->Disc_Reason; - - pLib->fax_parse_entry_last[Channel] = pLib->cur_parse_entry - 1; -} - -static void diva_create_modem_parse_table(diva_strace_context_t *pLib, - int Channel) { - diva_trace_modem_state_t *pModem = &pLib->lines[Channel].modem; - int nr = Channel + 1; - - if ((pLib->cur_parse_entry + MODEM_PARSE_ENTRIES) >= pLib->parse_entries) { - diva_trace_error(pLib, -1, __FILE__, __LINE__); - return; - } - pModem->ChannelNumber = nr; - - pLib->modem_parse_entry_first[Channel] = pLib->cur_parse_entry; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Event", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->Event; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Norm", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->Norm; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Options", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->Options; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\TX Speed", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->TxSpeed; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\RX Speed", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->RxSpeed; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Roundtrip ms", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->RoundtripMsec; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Symbol Rate", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->SymbolRate; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\RX Level dBm", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->RxLeveldBm; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Echo Level dBm", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->EchoLeveldBm; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\SNR dB", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->SNRdb; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\MAE", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->MAE; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Local Retrains", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->LocalRetrains; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Remote Retrains", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->RemoteRetrains; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Local Resyncs", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->LocalResyncs; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Remote Resyncs", nr); - pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->RemoteResyncs; - - sprintf(pLib->parse_table[pLib->cur_parse_entry].path, - "State\\B%d\\Modem\\Disc Reason", nr); - 
pLib->parse_table[pLib->cur_parse_entry++].variable = &pModem->DiscReason; - - pLib->modem_parse_entry_last[Channel] = pLib->cur_parse_entry - 1; -} - -static void diva_create_parse_table(diva_strace_context_t *pLib) { - int i; - - for (i = 0; i < pLib->Channels; i++) { - diva_create_line_parse_table(pLib, i); - diva_create_modem_parse_table(pLib, i); - diva_create_fax_parse_table(pLib, i); - } - - pLib->statistic_parse_first = pLib->cur_parse_entry; - - /* - Outgoing Calls - */ - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Outgoing Calls\\Calls"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.outg.Calls; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Outgoing Calls\\Connected"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.outg.Connected; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Outgoing Calls\\User Busy"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.outg.User_Busy; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Outgoing Calls\\No Answer"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.outg.No_Answer; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Outgoing Calls\\Wrong Number"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.outg.Wrong_Number; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Outgoing Calls\\Call Rejected"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.outg.Call_Rejected; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Outgoing Calls\\Other Failures"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.outg.Other_Failures; - - /* - Incoming Calls - */ - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\Calls"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.Calls; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\Connected"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.Connected; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\User Busy"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.User_Busy; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\Call Rejected"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.Call_Rejected; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\Wrong Number"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.Wrong_Number; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\Incompatible Dst"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.Incompatible_Dst; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\Out of Order"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.Out_of_Order; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Incoming Calls\\Ignored"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.inc.Ignored; - - /* - Modem Statistics - */ 
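All of these registrations follow one pattern: store a full management path next to the pointer that will later receive the decoded value, then advance cur_parse_entry with an overflow check against parse_entries. A self-contained sketch of the pattern; the path2action_t type and add_entry() helper are illustrative only, not the driver's API:

#include <stdio.h>
#include <string.h>

typedef struct {
    char path[64];    /* full management path of the variable */
    void *variable;   /* destination that receives the value  */
} path2action_t;

static int add_entry(path2action_t *tbl, int *cur, int max,
                     const char *path, void *var)
{
    if (*cur >= max)
        return -1;    /* table overflow, as cur_parse_entry is checked above */
    strcpy(tbl[*cur].path, path);
    tbl[*cur].variable = var;
    (*cur)++;
    return 0;
}

int main(void)
{
    path2action_t tbl[4];
    int cur = 0;
    unsigned int calls, connected;

    add_entry(tbl, &cur, 4, "Statistics\\Outgoing Calls\\Calls", &calls);
    add_entry(tbl, &cur, 4, "Statistics\\Outgoing Calls\\Connected", &connected);
    printf("%d entries, first path: %s\n", cur, tbl[0].path);
    return 0;
}

The modem-statistics entries registered below follow the same scheme.
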
- pLib->mdm_statistic_parse_first = pLib->cur_parse_entry; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Normal"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Normal; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Unspecified"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Unspecified; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Busy Tone"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Busy_Tone; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Congestion"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Congestion; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Carr. Wait"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Carr_Wait; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Trn Timeout"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Trn_Timeout; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Incompat."); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Incompat; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc Frame Rej."); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_Frame_Rej; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\Modem\\Disc V42bis"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.mdm.Disc_V42bis; - - pLib->mdm_statistic_parse_last = pLib->cur_parse_entry - 1; - - /* - Fax Statistics - */ - pLib->fax_statistic_parse_first = pLib->cur_parse_entry; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Normal"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Normal; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Not Ident."); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Not_Ident; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc No Response"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_No_Response; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Retries"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Retries; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Unexp. 
Msg."); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Unexp_Msg; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc No Polling."); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_No_Polling; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Training"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Training; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Unexpected"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Unexpected; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Application"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Application; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Incompat."); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Incompat; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc No Command"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_No_Command; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Long Msg"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Long_Msg; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Supervisor"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Supervisor; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc SUB SEP PWD"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_SUB_SEP_PWD; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Invalid Msg"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Invalid_Msg; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Page Coding"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Page_Coding; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc App Timeout"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_App_Timeout; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\FAX\\Disc Unspecified"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.fax.Disc_Unspecified; - - pLib->fax_statistic_parse_last = pLib->cur_parse_entry - 1; - - /* - B-Layer1" - */ - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer1\\X-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b1.X_Frames; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer1\\X-Bytes"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b1.X_Bytes; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer1\\X-Errors"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b1.X_Errors; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer1\\R-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b1.R_Frames; - - 
strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer1\\R-Bytes"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b1.R_Bytes; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer1\\R-Errors"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b1.R_Errors; - - /* - B-Layer2 - */ - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer2\\X-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b2.X_Frames; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer2\\X-Bytes"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b2.X_Bytes; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer2\\X-Errors"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b2.X_Errors; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer2\\R-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b2.R_Frames; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer2\\R-Bytes"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b2.R_Bytes; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\B-Layer2\\R-Errors"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.b2.R_Errors; - - /* - D-Layer1 - */ - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer1\\X-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d1.X_Frames; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer1\\X-Bytes"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d1.X_Bytes; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer1\\X-Errors"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d1.X_Errors; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer1\\R-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d1.R_Frames; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer1\\R-Bytes"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d1.R_Bytes; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer1\\R-Errors"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d1.R_Errors; - - /* - D-Layer2 - */ - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer2\\X-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d2.X_Frames; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer2\\X-Bytes"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d2.X_Bytes; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer2\\X-Errors"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d2.X_Errors; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer2\\R-Frames"); - pLib->parse_table[pLib->cur_parse_entry++].variable = \ - &pLib->InterfaceStat.d2.R_Frames; - - strcpy(pLib->parse_table[pLib->cur_parse_entry].path, - "Statistics\\D-Layer2\\R-Bytes"); - 
pLib->parse_table[pLib->cur_parse_entry++].variable = \
-        &pLib->InterfaceStat.d2.R_Bytes;
-
-    strcpy(pLib->parse_table[pLib->cur_parse_entry].path,
-           "Statistics\\D-Layer2\\R-Errors");
-    pLib->parse_table[pLib->cur_parse_entry++].variable = \
-        &pLib->InterfaceStat.d2.R_Errors;
-
-
-    pLib->statistic_parse_last = pLib->cur_parse_entry - 1;
-}
-
-static void diva_trace_error(diva_strace_context_t *pLib,
-                             int error, const char *file, int line) {
-    if (pLib->user_proc_table.error_notify_proc) {
-        (*(pLib->user_proc_table.error_notify_proc))(\
-            pLib->user_proc_table.user_context,
-            &pLib->instance, pLib->Adapter,
-            error, file, line);
-    }
-}
-
-/*
-  Deliver notification to user
-*/
-static void diva_trace_notify_user(diva_strace_context_t *pLib,
-                                   int Channel,
-                                   int notify_subject) {
-    if (pLib->user_proc_table.notify_proc) {
-        (*(pLib->user_proc_table.notify_proc))(pLib->user_proc_table.user_context,
-                                               &pLib->instance,
-                                               pLib->Adapter,
-                                               &pLib->lines[Channel],
-                                               notify_subject);
-    }
-}
-
-/*
-  Read the variable value to the destination based on the variable type
-*/
-static int diva_trace_read_variable(diva_man_var_header_t *pVar,
-                                    void *variable) {
-    switch (pVar->type) {
-    case 0x03: /* MI_ASCIIZ - zero terminated string */
-        return (diva_strace_read_asz(pVar, (char *)variable));
-    case 0x04: /* MI_ASCII - string */
-        return (diva_strace_read_asc(pVar, (char *)variable));
-    case 0x05: /* MI_NUMBER - counted sequence of bytes */
-        return (diva_strace_read_ie(pVar, (diva_trace_ie_t *)variable));
-    case 0x81: /* MI_INT - signed integer */
-        return (diva_strace_read_int(pVar, (int *)variable));
-    case 0x82: /* MI_UINT - unsigned integer */
-        return (diva_strace_read_uint(pVar, (dword *)variable));
-    case 0x83: /* MI_HINT - unsigned integer, hex representation */
-        return (diva_strace_read_uint(pVar, (dword *)variable));
-    case 0x87: /* MI_BITFLD - unsigned integer, bit representation */
-        return (diva_strace_read_uint(pVar, (dword *)variable));
-    }
-
-    /*
-      This type of variable is not handled; indicate an error.
-      Either there is a problem in the management interface, or in the
-      application decoding table, or this application should handle it.
- */ - return (-1); -} - -/* - Read signed integer to destination -*/ -static int diva_strace_read_int(diva_man_var_header_t *pVar, int *var) { - byte *ptr = (char *)&pVar->path_length; - int value; - - ptr += (pVar->path_length + 1); - - switch (pVar->value_length) { - case 1: - value = *(char *)ptr; - break; - - case 2: - value = (short)GET_WORD(ptr); - break; - - case 4: - value = (int)GET_DWORD(ptr); - break; - - default: - return (-1); - } - - *var = value; - - return (0); -} - -static int diva_strace_read_uint(diva_man_var_header_t *pVar, dword *var) { - byte *ptr = (char *)&pVar->path_length; - dword value; - - ptr += (pVar->path_length + 1); - - switch (pVar->value_length) { - case 1: - value = (byte)(*ptr); - break; - - case 2: - value = (word)GET_WORD(ptr); - break; - - case 3: - value = (dword)GET_DWORD(ptr); - value &= 0x00ffffff; - break; - - case 4: - value = (dword)GET_DWORD(ptr); - break; - - default: - return (-1); - } - - *var = value; - - return (0); -} - -/* - Read zero terminated ASCII string -*/ -static int diva_strace_read_asz(diva_man_var_header_t *pVar, char *var) { - char *ptr = (char *)&pVar->path_length; - int length; - - ptr += (pVar->path_length + 1); - - if (!(length = pVar->value_length)) { - length = strlen(ptr); - } - memcpy(var, ptr, length); - var[length] = 0; - - return (0); -} - -/* - Read counted (with leading length byte) ASCII string -*/ -static int diva_strace_read_asc(diva_man_var_header_t *pVar, char *var) { - char *ptr = (char *)&pVar->path_length; - - ptr += (pVar->path_length + 1); - memcpy(var, ptr + 1, *ptr); - var[(int)*ptr] = 0; - - return (0); -} - -/* - Read one information element - i.e. one string of byte values with - one length byte in front -*/ -static int diva_strace_read_ie(diva_man_var_header_t *pVar, - diva_trace_ie_t *var) { - char *ptr = (char *)&pVar->path_length; - - ptr += (pVar->path_length + 1); - - var->length = *ptr; - memcpy(&var->data[0], ptr + 1, *ptr); - - return (0); -} - -static int SuperTraceSetAudioTap(void *hLib, int Channel, int on) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - - if ((Channel < 1) || (Channel > pLib->Channels)) { - return (-1); - } - Channel--; - - if (on) { - pLib->audio_tap_mask |= (1L << Channel); - } else { - pLib->audio_tap_mask &= ~(1L << Channel); - } - - /* - EYE patterns have TM_M_DATA set as additional - condition - */ - if (pLib->audio_tap_mask) { - pLib->trace_event_mask |= TM_M_DATA; - } else { - pLib->trace_event_mask &= ~TM_M_DATA; - } - - return (ScheduleNextTraceRequest(pLib)); -} - -static int SuperTraceSetBChannel(void *hLib, int Channel, int on) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - - if ((Channel < 1) || (Channel > pLib->Channels)) { - return (-1); - } - Channel--; - - if (on) { - pLib->bchannel_trace_mask |= (1L << Channel); - } else { - pLib->bchannel_trace_mask &= ~(1L << Channel); - } - - return (ScheduleNextTraceRequest(pLib)); -} - -static int SuperTraceSetDChannel(void *hLib, int on) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - - if (on) { - pLib->trace_event_mask |= (TM_D_CHAN | TM_C_COMM | TM_DL_ERR | TM_LAYER1); - } else { - pLib->trace_event_mask &= ~(TM_D_CHAN | TM_C_COMM | TM_DL_ERR | TM_LAYER1); - } - - return (ScheduleNextTraceRequest(pLib)); -} - -static int SuperTraceSetInfo(void *hLib, int on) { - diva_strace_context_t *pLib = (diva_strace_context_t *)hLib; - - if (on) { - pLib->trace_event_mask |= TM_STRING; - } else { - pLib->trace_event_mask &= ~TM_STRING; - } - - return 
(ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceClearCall(void *hLib, int Channel) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-
-    if ((Channel < 1) || (Channel > pLib->Channels)) {
-        return (-1);
-    }
-    Channel--;
-
-    pLib->clear_call_command |= (1L << Channel);
-
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-/*
-  Parse and update cumulative statistics
-*/
-static int diva_ifc_statistics(diva_strace_context_t *pLib,
-                               diva_man_var_header_t *pVar) {
-    diva_man_var_header_t *cur;
-    int i, one_updated = 0, mdm_updated = 0, fax_updated = 0;
-
-    for (i = pLib->statistic_parse_first; i <= pLib->statistic_parse_last; i++) {
-        if ((cur = find_var(pVar, pLib->parse_table[i].path))) {
-            if (diva_trace_read_variable(cur, pLib->parse_table[i].variable)) {
-                diva_trace_error(pLib, -3, __FILE__, __LINE__);
-                return (-1);
-            }
-            one_updated = 1;
-            if ((i >= pLib->mdm_statistic_parse_first) && (i <= pLib->mdm_statistic_parse_last)) {
-                mdm_updated = 1;
-            }
-            if ((i >= pLib->fax_statistic_parse_first) && (i <= pLib->fax_statistic_parse_last)) {
-                fax_updated = 1;
-            }
-        }
-    }
-
-    /*
-      We do not use the first event to notify the user - this is the event that
-      is generated as result of the EVENT ON operation and is used only to
-      initialize internal variables of the application
-    */
-    if (mdm_updated) {
-        diva_trace_notify_user(pLib, 0, DIVA_SUPER_TRACE_NOTIFY_MDM_STAT_CHANGE);
-    } else if (fax_updated) {
-        diva_trace_notify_user(pLib, 0, DIVA_SUPER_TRACE_NOTIFY_FAX_STAT_CHANGE);
-    } else if (one_updated) {
-        diva_trace_notify_user(pLib, 0, DIVA_SUPER_TRACE_NOTIFY_STAT_CHANGE);
-    }
-
-    return (one_updated ? 0 : -1);
-}
-
-static int SuperTraceGetOutgoingCallStatistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->outgoing_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceGetIncomingCallStatistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->incoming_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceGetModemStatistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->modem_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceGetFaxStatistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->fax_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceGetBLayer1Statistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->b1_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceGetBLayer2Statistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->b2_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceGetDLayer1Statistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->d1_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-static int SuperTraceGetDLayer2Statistics(void *hLib) {
-    diva_strace_context_t *pLib = (diva_strace_context_t *)hLib;
-    pLib->d2_ifc_stats = 1;
-    return (ScheduleNextTraceRequest(pLib));
-}
-
-dword DivaSTraceGetMemotyRequirement(int channels) {
-    dword parse_entries = (MODEM_PARSE_ENTRIES + FAX_PARSE_ENTRIES + \
-                           STAT_PARSE_ENTRIES + \
-                           LINE_PARSE_ENTRIES + 1) * channels;
-    return (sizeof(diva_strace_context_t) + \
-            (parse_entries * sizeof(diva_strace_path2action_t)));
-} diff --git
a/drivers/isdn/hardware/eicon/maintidi.h b/drivers/isdn/hardware/eicon/maintidi.h deleted file mode 100644 index 2b46147c5532..000000000000 --- a/drivers/isdn/hardware/eicon/maintidi.h +++ /dev/null @@ -1,171 +0,0 @@ -/* - * - Copyright (c) Eicon Networks, 2000. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 1.9 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_EICON_TRACE_IDI_IFC_H__ -#define __DIVA_EICON_TRACE_IDI_IFC_H__ - -void *SuperTraceOpenAdapter(int AdapterNumber); -int SuperTraceCloseAdapter(void *AdapterHandle); -int SuperTraceWrite(void *AdapterHandle, - const void *data, int length); -int SuperTraceReadRequest(void *AdapterHandle, const char *name, byte *data); -int SuperTraceGetNumberOfChannels(void *AdapterHandle); -int SuperTraceASSIGN(void *AdapterHandle, byte *data); -int SuperTraceREMOVE(void *AdapterHandle); -int SuperTraceTraceOnRequest(void *hAdapter, const char *name, byte *data); -int SuperTraceWriteVar(void *AdapterHandle, - byte *data, - const char *name, - void *var, - byte type, - byte var_length); -int SuperTraceExecuteRequest(void *AdapterHandle, - const char *name, - byte *data); - -typedef struct _diva_strace_path2action { - char path[64]; /* Full path to variable */ - void *variable; /* Variable that will receive value */ -} diva_strace_path2action_t; - -#define DIVA_MAX_MANAGEMENT_TRANSFER_SIZE 4096 - -typedef struct _diva_strace_context { - diva_strace_library_interface_t instance; - - int Adapter; - void *hAdapter; - - int Channels; - int req_busy; - - ENTITY e; - IDI_CALL request; - BUFFERS XData; - BUFFERS RData; - byte buffer[DIVA_MAX_MANAGEMENT_TRANSFER_SIZE + 1]; - int removal_state; - int general_b_ch_event; - int general_fax_event; - int general_mdm_event; - - byte rc_ok; - - /* - Initialization request state machine - */ - int ChannelsTraceActive; - int ModemTraceActive; - int FaxTraceActive; - int IncomingCallsCallsActive; - int IncomingCallsConnectedActive; - int OutgoingCallsCallsActive; - int OutgoingCallsConnectedActive; - - int trace_mask_init; - int audio_trace_init; - int bchannel_init; - int trace_length_init; - int trace_on; - int trace_events_down; - int l1_trace; - int l2_trace; - - /* - Trace\Event Enable - */ - word trace_event_mask; - word current_trace_event_mask; - - dword audio_tap_mask; - dword current_audio_tap_mask; - dword current_eye_pattern_mask; - int audio_tap_pending; - int eye_pattern_pending; - - dword bchannel_trace_mask; - dword current_bchannel_trace_mask; - - - diva_trace_line_state_t lines[30]; - - int parse_entries; - int cur_parse_entry; - diva_strace_path2action_t *parse_table; - - diva_trace_library_user_interface_t user_proc_table; - - int line_parse_entry_first[30]; - int line_parse_entry_last[30]; - - int modem_parse_entry_first[30]; - int modem_parse_entry_last[30]; - 
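The *_parse_entry_first/last pairs in this context structure mark per-channel slices of the single flat parse_table, and the update functions (diva_modem_info() and friends, earlier in this patch) walk one slice with a bounded for loop. A small sketch of that slicing, with simplified illustrative types:

#include <stdio.h>

#define CHANNELS 2
#define ENTRIES_PER_CH 3

typedef struct { char path[64]; int value; } entry_t;

int main(void)
{
    entry_t table[CHANNELS * ENTRIES_PER_CH];
    int first[CHANNELS], last[CHANNELS];
    int ch, i, n = 0;

    /* build each channel's contiguous slice of the flat table */
    for (ch = 0; ch < CHANNELS; ch++) {
        first[ch] = n;
        for (i = 0; i < ENTRIES_PER_CH; i++, n++)
            snprintf(table[n].path, sizeof(table[n].path),
                     "State\\B%d\\Var%d", ch + 1, i);
        last[ch] = n - 1;
    }

    /* walk only channel 2's slice, as the driver's loops do */
    for (i = first[1]; i <= last[1]; i++)
        printf("%s\n", table[i].path);
    return 0;
}

The remaining slice bookkeeping and event/status bitmask fields of the context structure continue below.
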
- int fax_parse_entry_first[30]; - int fax_parse_entry_last[30]; - - int statistic_parse_first; - int statistic_parse_last; - - int mdm_statistic_parse_first; - int mdm_statistic_parse_last; - - int fax_statistic_parse_first; - int fax_statistic_parse_last; - - dword line_init_event; - dword modem_init_event; - dword fax_init_event; - - dword pending_line_status; - dword pending_modem_status; - dword pending_fax_status; - - dword clear_call_command; - - int outgoing_ifc_stats; - int incoming_ifc_stats; - int modem_ifc_stats; - int fax_ifc_stats; - int b1_ifc_stats; - int b2_ifc_stats; - int d1_ifc_stats; - int d2_ifc_stats; - - diva_trace_interface_state_t Interface; - diva_ifc_statistics_t InterfaceStat; -} diva_strace_context_t; - -typedef struct _diva_man_var_header { - byte escape; - byte length; - byte management_id; - byte type; - byte attribute; - byte status; - byte value_length; - byte path_length; -} diva_man_var_header_t; - -#endif diff --git a/drivers/isdn/hardware/eicon/man_defs.h b/drivers/isdn/hardware/eicon/man_defs.h deleted file mode 100644 index 249c471700e7..000000000000 --- a/drivers/isdn/hardware/eicon/man_defs.h +++ /dev/null @@ -1,133 +0,0 @@ -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 1.9 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -/* Definitions for use with the Management Information Element */ - -/*------------------------------------------------------------------*/ -/* Management information element */ -/* ---------------------------------------------------------- */ -/* Byte Coding Comment */ -/* ---------------------------------------------------------- */ -/* 0 | 0 1 1 1 1 1 1 1 | ESC */ -/* 1 | 0 x x x x x x x | Length of information element (m-1) */ -/* 2 | 1 0 0 0 0 0 0 0 | Management Information Id */ -/* 3 | x x x x x x x x | Type */ -/* 4 | x x x x x x x x | Attribute */ -/* 5 | x x x x x x x x | Status */ -/* 6 | x x x x x x x x | Variable Value Length (m-n) */ -/* 7 | x x x x x x x x | Path / Variable Name String Length (n-8)*/ -/* 8..n | x x x x x x x x | Path/Node Name String separated by '\' */ -/* n..m | x x x x x x x x | Variable content */ -/*------------------------------------------------------------------*/ - -/*------------------------------------------------------------------*/ -/* Type Field */ -/* */ -/* MAN_READ: not used */ -/* MAN_WRITE: not used */ -/* MAN_EVENT_ON: not used */ -/* MAN_EVENT_OFF: not used */ -/* MAN_INFO_IND: type of variable */ -/* MAN_EVENT_IND: type of variable */ -/* MAN_TRACE_IND not used */ -/*------------------------------------------------------------------*/ -#define MI_DIR 0x01 /* Directory string (zero terminated) */ -#define MI_EXECUTE 0x02 /* Executable function (has no value) */ -#define MI_ASCIIZ 0x03 /* Zero terminated string */ -#define MI_ASCII 0x04 /* String, first byte is length */ -#define MI_NUMBER 0x05 /* Number string, first byte is length*/ -#define MI_TRACE 0x06 /* Trace information, format see below*/ - -#define MI_FIXED_LENGTH 0x80 /* get length from MAN_INFO max_len */ -#define MI_INT 0x81 /* number to display as signed int */ -#define MI_UINT 0x82 /* number to display as unsigned int */ -#define MI_HINT 0x83 /* number to display in hex format */ -#define MI_HSTR 0x84 /* number to display as a hex string */ -#define MI_BOOLEAN 0x85 /* number to display as boolean */ -#define MI_IP_ADDRESS 0x86 /* number to display as IP address */ -#define MI_BITFLD 0x87 /* number to display as bit field */ -#define MI_SPID_STATE 0x88 /* state# of SPID initialisation */ - -/*------------------------------------------------------------------*/ -/* Attribute Field */ -/* */ -/* MAN_READ: not used */ -/* MAN_WRITE: not used */ -/* MAN_EVENT_ON: not used */ -/* MAN_EVENT_OFF: not used */ -/* MAN_INFO_IND: set according to capabilities of that variable */ -/* MAN_EVENT_IND: not used */ -/* MAN_TRACE_IND not used */ -/*------------------------------------------------------------------*/ -#define MI_WRITE 0x01 /* Variable is writeable */ -#define MI_EVENT 0x02 /* Variable can indicate changes */ - -/*------------------------------------------------------------------*/ -/* Status Field */ -/* */ -/* MAN_READ: not used */ -/* MAN_WRITE: not used */ -/* MAN_EVENT_ON: not used */ -/* MAN_EVENT_OFF: not used */ -/* MAN_INFO_IND: set according to the actual status */ -/* MAN_EVENT_IND: set according to the actual status */ -/* MAN_TRACE_IND not used */ -/*------------------------------------------------------------------*/ -#define MI_LOCKED 0x01 /* write protected by another instance*/ -#define MI_EVENT_ON 0x02 /* Event logging switched on */ -#define MI_PROTECTED 0x04 /* write protected by this instance */ - -/*------------------------------------------------------------------*/ -/* Data Format used for MAN_TRACE_IND (no MI-element used) */ 
-/*------------------------------------------------------------------*/ -typedef struct mi_xlog_hdr_s MI_XLOG_HDR; -struct mi_xlog_hdr_s -{ - unsigned long time; /* Timestamp in msec units */ - unsigned short size; /* Size of data that follows */ - unsigned short code; /* code of trace event */ -}; /* unspecified data follows this header */ - -/*------------------------------------------------------------------*/ -/* Trace mask definitions for trace events except B channel and */ -/* debug trace events */ -/*------------------------------------------------------------------*/ -#define TM_D_CHAN 0x0001 /* D-Channel (D-.) Code 3,4 */ -#define TM_L_LAYER 0x0002 /* Low Layer (LL) Code 6,7 */ -#define TM_N_LAYER 0x0004 /* Network Layer (N) Code 14,15 */ -#define TM_DL_ERR 0x0008 /* Data Link Error (MDL) Code 9 */ -#define TM_LAYER1 0x0010 /* Layer 1 Code 20 */ -#define TM_C_COMM 0x0020 /* Call Comment (SIG) Code 5,21,22 */ -#define TM_M_DATA 0x0040 /* Modulation Data (EYE) Code 23 */ -#define TM_STRING 0x0080 /* String data Code 24 */ -#define TM_N_USED2 0x0100 /* not used */ -#define TM_N_USED3 0x0200 /* not used */ -#define TM_N_USED4 0x0400 /* not used */ -#define TM_N_USED5 0x0800 /* not used */ -#define TM_N_USED6 0x1000 /* not used */ -#define TM_N_USED7 0x2000 /* not used */ -#define TM_N_USED8 0x4000 /* not used */ -#define TM_REST 0x8000 /* Codes 10,11,12,13,16,18,19,128,129 */ - -/*------ End of file -----------------------------------------------*/ diff --git a/drivers/isdn/hardware/eicon/mdm_msg.h b/drivers/isdn/hardware/eicon/mdm_msg.h deleted file mode 100644 index 0e6b2e009a74..000000000000 --- a/drivers/isdn/hardware/eicon/mdm_msg.h +++ /dev/null @@ -1,346 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. 
- * - */ -#ifndef __EICON_MDM_MSG_H__ -#define __EICON_MDM_MSG_H__ -#define DSP_UDATA_INDICATION_DCD_OFF 0x01 -#define DSP_UDATA_INDICATION_DCD_ON 0x02 -#define DSP_UDATA_INDICATION_CTS_OFF 0x03 -#define DSP_UDATA_INDICATION_CTS_ON 0x04 -/* ===================================================================== - DCD_OFF Message: - time of DCD off (sampled from counter at 8kHz) - DCD_ON Message: - time of DCD on (sampled from counter at 8kHz) - connected norm - connected options - connected speed (bit/s, max of tx and rx speed) - roundtrip delay (ms) - connected speed tx (bit/s) - connected speed rx (bit/s) - Size of this message == 19 bytes, but we will receive only 11 - ===================================================================== */ -#define DSP_CONNECTED_NORM_UNSPECIFIED 0 -#define DSP_CONNECTED_NORM_V21 1 -#define DSP_CONNECTED_NORM_V23 2 -#define DSP_CONNECTED_NORM_V22 3 -#define DSP_CONNECTED_NORM_V22_BIS 4 -#define DSP_CONNECTED_NORM_V32_BIS 5 -#define DSP_CONNECTED_NORM_V34 6 -#define DSP_CONNECTED_NORM_V8 7 -#define DSP_CONNECTED_NORM_BELL_212A 8 -#define DSP_CONNECTED_NORM_BELL_103 9 -#define DSP_CONNECTED_NORM_V29_LEASED_LINE 10 -#define DSP_CONNECTED_NORM_V33_LEASED_LINE 11 -#define DSP_CONNECTED_NORM_V90 12 -#define DSP_CONNECTED_NORM_V21_CH2 13 -#define DSP_CONNECTED_NORM_V27_TER 14 -#define DSP_CONNECTED_NORM_V29 15 -#define DSP_CONNECTED_NORM_V33 16 -#define DSP_CONNECTED_NORM_V17 17 -#define DSP_CONNECTED_NORM_V32 18 -#define DSP_CONNECTED_NORM_K56_FLEX 19 -#define DSP_CONNECTED_NORM_X2 20 -#define DSP_CONNECTED_NORM_V18 21 -#define DSP_CONNECTED_NORM_V18_LOW_HIGH 22 -#define DSP_CONNECTED_NORM_V18_HIGH_LOW 23 -#define DSP_CONNECTED_NORM_V21_LOW_HIGH 24 -#define DSP_CONNECTED_NORM_V21_HIGH_LOW 25 -#define DSP_CONNECTED_NORM_BELL103_LOW_HIGH 26 -#define DSP_CONNECTED_NORM_BELL103_HIGH_LOW 27 -#define DSP_CONNECTED_NORM_V23_75_1200 28 -#define DSP_CONNECTED_NORM_V23_1200_75 29 -#define DSP_CONNECTED_NORM_EDT_110 30 -#define DSP_CONNECTED_NORM_BAUDOT_45 31 -#define DSP_CONNECTED_NORM_BAUDOT_47 32 -#define DSP_CONNECTED_NORM_BAUDOT_50 33 -#define DSP_CONNECTED_NORM_DTMF 34 -#define DSP_CONNECTED_NORM_V18_RESERVED_13 35 -#define DSP_CONNECTED_NORM_V18_RESERVED_14 36 -#define DSP_CONNECTED_NORM_V18_RESERVED_15 37 -#define DSP_CONNECTED_NORM_VOWN 38 -#define DSP_CONNECTED_NORM_V23_OFF_HOOK 39 -#define DSP_CONNECTED_NORM_V23_ON_HOOK 40 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_3 41 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_4 42 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_5 43 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_6 44 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_7 45 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_8 46 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_9 47 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_10 48 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_11 49 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_12 50 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_13 51 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_14 52 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_15 53 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_16 54 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_17 55 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_18 56 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_19 57 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_20 58 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_21 59 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_22 60 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_23 61 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_24 62 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_25 63 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_26 64 -#define 
DSP_CONNECTED_NORM_VOWN_RESERVED_27 65 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_28 66 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_29 67 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_30 68 -#define DSP_CONNECTED_NORM_VOWN_RESERVED_31 69 -#define DSP_CONNECTED_OPTION_TRELLIS 0x0001 -#define DSP_CONNECTED_OPTION_V42_TRANS 0x0002 -#define DSP_CONNECTED_OPTION_V42_LAPM 0x0004 -#define DSP_CONNECTED_OPTION_SHORT_TRAIN 0x0008 -#define DSP_CONNECTED_OPTION_TALKER_ECHO_PROTECT 0x0010 -#define DSP_CONNECTED_OPTION_V42BIS 0x0020 -#define DSP_CONNECTED_OPTION_MNP2 0x0040 -#define DSP_CONNECTED_OPTION_MNP3 0x0080 -#define DSP_CONNECTED_OPTION_MNP4 0x00c0 -#define DSP_CONNECTED_OPTION_MNP5 0x0100 -#define DSP_CONNECTED_OPTION_MNP10 0x0200 -#define DSP_CONNECTED_OPTION_MASK_V42 0x0024 -#define DSP_CONNECTED_OPTION_MASK_MNP 0x03c0 -#define DSP_CONNECTED_OPTION_MASK_ERROR_CORRECT 0x03e4 -#define DSP_CONNECTED_OPTION_MASK_COMPRESSION 0x0320 -#define DSP_UDATA_INDICATION_DISCONNECT 5 -/* - returns: - cause -*/ -/* ========================================================== - DLC: B2 modem configuration - ========================================================== */ -/* - Fields in assign DLC information element for modem protocol V.42/MNP: - length of information element - information field length - address A (not used, default 3) - address B (not used, default 1) - modulo mode (not used, default 7) - window size (not used, default 7) - XID length (not used, default 0) - ... XID information (not used, default empty) - modem protocol negotiation options - modem protocol options - modem protocol break configuration - modem protocol application options -*/ -#define DLC_MODEMPROT_DISABLE_V42_V42BIS 0x01 -#define DLC_MODEMPROT_DISABLE_MNP_MNP5 0x02 -#define DLC_MODEMPROT_REQUIRE_PROTOCOL 0x04 -#define DLC_MODEMPROT_DISABLE_V42_DETECT 0x08 -#define DLC_MODEMPROT_DISABLE_COMPRESSION 0x10 -#define DLC_MODEMPROT_REQUIRE_PROTOCOL_V34UP 0x20 -#define DLC_MODEMPROT_NO_PROTOCOL_IF_1200 0x01 -#define DLC_MODEMPROT_BUFFER_IN_V42_DETECT 0x02 -#define DLC_MODEMPROT_DISABLE_V42_SREJ 0x04 -#define DLC_MODEMPROT_DISABLE_MNP3 0x08 -#define DLC_MODEMPROT_DISABLE_MNP4 0x10 -#define DLC_MODEMPROT_DISABLE_MNP10 0x20 -#define DLC_MODEMPROT_NO_PROTOCOL_IF_V22BIS 0x40 -#define DLC_MODEMPROT_NO_PROTOCOL_IF_V32BIS 0x80 -#define DLC_MODEMPROT_BREAK_DISABLED 0x00 -#define DLC_MODEMPROT_BREAK_NORMAL 0x01 -#define DLC_MODEMPROT_BREAK_EXPEDITED 0x02 -#define DLC_MODEMPROT_BREAK_DESTRUCTIVE 0x03 -#define DLC_MODEMPROT_BREAK_CONFIG_MASK 0x03 -#define DLC_MODEMPROT_APPL_EARLY_CONNECT 0x01 -#define DLC_MODEMPROT_APPL_PASS_INDICATIONS 0x02 -/* ========================================================== - CAI parameters used for the modem L1 configuration - ========================================================== */ -/* - Fields in assign CAI information element: - length of information element - info field and B-channel hardware - rate adaptation bit rate - async framing parameters - reserved - packet length - modem line taking options - modem modulation negotiation parameters - modem modulation options - modem disabled modulations mask low - modem disabled modulations mask high - modem enabled modulations mask - modem min TX speed - modem max TX speed - modem min RX speed - modem max RX speed - modem disabled symbol rates mask - modem info options mask - modem transmit level adjust - modem speaker parameters - modem private debug config - modem reserved - v18 config parameters - v18 probing sequence - v18 probing message -*/ -#define 
DSP_CAI_HARDWARE_HDLC_64K 0x05 -#define DSP_CAI_HARDWARE_HDLC_56K 0x08 -#define DSP_CAI_HARDWARE_TRANSP 0x09 -#define DSP_CAI_HARDWARE_V110_SYNC 0x0c -#define DSP_CAI_HARDWARE_V110_ASYNC 0x0d -#define DSP_CAI_HARDWARE_HDLC_128K 0x0f -#define DSP_CAI_HARDWARE_FAX 0x10 -#define DSP_CAI_HARDWARE_MODEM_ASYNC 0x11 -#define DSP_CAI_HARDWARE_MODEM_SYNC 0x12 -#define DSP_CAI_HARDWARE_V110_HDLCA 0x13 -#define DSP_CAI_HARDWARE_ADVANCED_VOICE 0x14 -#define DSP_CAI_HARDWARE_TRANSP_DTMF 0x16 -#define DSP_CAI_HARDWARE_DTMF_VOICE_ISDN 0x17 -#define DSP_CAI_HARDWARE_DTMF_VOICE_LOCAL 0x18 -#define DSP_CAI_HARDWARE_MASK 0x3f -#define DSP_CAI_ENABLE_INFO_INDICATIONS 0x80 -#define DSP_CAI_RATE_ADAPTATION_300 0x00 -#define DSP_CAI_RATE_ADAPTATION_600 0x01 -#define DSP_CAI_RATE_ADAPTATION_1200 0x02 -#define DSP_CAI_RATE_ADAPTATION_2400 0x03 -#define DSP_CAI_RATE_ADAPTATION_4800 0x04 -#define DSP_CAI_RATE_ADAPTATION_9600 0x05 -#define DSP_CAI_RATE_ADAPTATION_19200 0x06 -#define DSP_CAI_RATE_ADAPTATION_38400 0x07 -#define DSP_CAI_RATE_ADAPTATION_48000 0x08 -#define DSP_CAI_RATE_ADAPTATION_56000 0x09 -#define DSP_CAI_RATE_ADAPTATION_7200 0x0a -#define DSP_CAI_RATE_ADAPTATION_14400 0x0b -#define DSP_CAI_RATE_ADAPTATION_28800 0x0c -#define DSP_CAI_RATE_ADAPTATION_12000 0x0d -#define DSP_CAI_RATE_ADAPTATION_1200_75 0x0e -#define DSP_CAI_RATE_ADAPTATION_75_1200 0x0f -#define DSP_CAI_RATE_ADAPTATION_MASK 0x0f -#define DSP_CAI_ASYNC_PARITY_ENABLE 0x01 -#define DSP_CAI_ASYNC_PARITY_SPACE 0x00 -#define DSP_CAI_ASYNC_PARITY_ODD 0x02 -#define DSP_CAI_ASYNC_PARITY_EVEN 0x04 -#define DSP_CAI_ASYNC_PARITY_MARK 0x06 -#define DSP_CAI_ASYNC_PARITY_MASK 0x06 -#define DSP_CAI_ASYNC_ONE_STOP_BIT 0x00 -#define DSP_CAI_ASYNC_TWO_STOP_BITS 0x20 -#define DSP_CAI_ASYNC_CHAR_LENGTH_8 0x00 -#define DSP_CAI_ASYNC_CHAR_LENGTH_7 0x40 -#define DSP_CAI_ASYNC_CHAR_LENGTH_6 0x80 -#define DSP_CAI_ASYNC_CHAR_LENGTH_5 0xc0 -#define DSP_CAI_ASYNC_CHAR_LENGTH_MASK 0xc0 -#define DSP_CAI_MODEM_LEASED_LINE_MODE 0x01 -#define DSP_CAI_MODEM_4_WIRE_OPERATION 0x02 -#define DSP_CAI_MODEM_DISABLE_BUSY_DETECT 0x04 -#define DSP_CAI_MODEM_DISABLE_CALLING_TONE 0x08 -#define DSP_CAI_MODEM_DISABLE_ANSWER_TONE 0x10 -#define DSP_CAI_MODEM_ENABLE_DIAL_TONE_DET 0x20 -#define DSP_CAI_MODEM_USE_POTS_INTERFACE 0x40 -#define DSP_CAI_MODEM_FORCE_RAY_TAYLOR_FAX 0x80 -#define DSP_CAI_MODEM_NEGOTIATE_HIGHEST 0x00 -#define DSP_CAI_MODEM_NEGOTIATE_DISABLED 0x01 -#define DSP_CAI_MODEM_NEGOTIATE_IN_CLASS 0x02 -#define DSP_CAI_MODEM_NEGOTIATE_V100 0x03 -#define DSP_CAI_MODEM_NEGOTIATE_V8 0x04 -#define DSP_CAI_MODEM_NEGOTIATE_V8BIS 0x05 -#define DSP_CAI_MODEM_NEGOTIATE_MASK 0x07 -#define DSP_CAI_MODEM_GUARD_TONE_NONE 0x00 -#define DSP_CAI_MODEM_GUARD_TONE_550HZ 0x40 -#define DSP_CAI_MODEM_GUARD_TONE_1800HZ 0x80 -#define DSP_CAI_MODEM_GUARD_TONE_MASK 0xc0 -#define DSP_CAI_MODEM_DISABLE_RETRAIN 0x01 -#define DSP_CAI_MODEM_DISABLE_STEPUPDOWN 0x02 -#define DSP_CAI_MODEM_DISABLE_SPLIT_SPEED 0x04 -#define DSP_CAI_MODEM_DISABLE_TRELLIS 0x08 -#define DSP_CAI_MODEM_ALLOW_RDL_TEST_LOOP 0x10 -#define DSP_CAI_MODEM_DISABLE_FLUSH_TIMER 0x40 -#define DSP_CAI_MODEM_REVERSE_DIRECTION 0x80 -#define DSP_CAI_MODEM_DISABLE_V21 0x01 -#define DSP_CAI_MODEM_DISABLE_V23 0x02 -#define DSP_CAI_MODEM_DISABLE_V22 0x04 -#define DSP_CAI_MODEM_DISABLE_V22BIS 0x08 -#define DSP_CAI_MODEM_DISABLE_V32 0x10 -#define DSP_CAI_MODEM_DISABLE_V32BIS 0x20 -#define DSP_CAI_MODEM_DISABLE_V34 0x40 -#define DSP_CAI_MODEM_DISABLE_V90 0x80 -#define DSP_CAI_MODEM_DISABLE_BELL103 0x01 -#define DSP_CAI_MODEM_DISABLE_BELL212A 0x02 
-#define DSP_CAI_MODEM_DISABLE_VFC 0x04 -#define DSP_CAI_MODEM_DISABLE_K56FLEX 0x08 -#define DSP_CAI_MODEM_DISABLE_X2 0x10 -#define DSP_CAI_MODEM_ENABLE_V29FDX 0x01 -#define DSP_CAI_MODEM_ENABLE_V33 0x02 -#define DSP_CAI_MODEM_DISABLE_2400_SYMBOLS 0x01 -#define DSP_CAI_MODEM_DISABLE_2743_SYMBOLS 0x02 -#define DSP_CAI_MODEM_DISABLE_2800_SYMBOLS 0x04 -#define DSP_CAI_MODEM_DISABLE_3000_SYMBOLS 0x08 -#define DSP_CAI_MODEM_DISABLE_3200_SYMBOLS 0x10 -#define DSP_CAI_MODEM_DISABLE_3429_SYMBOLS 0x20 -#define DSP_CAI_MODEM_DISABLE_TX_REDUCTION 0x01 -#define DSP_CAI_MODEM_DISABLE_PRECODING 0x02 -#define DSP_CAI_MODEM_DISABLE_PREEMPHASIS 0x04 -#define DSP_CAI_MODEM_DISABLE_SHAPING 0x08 -#define DSP_CAI_MODEM_DISABLE_NONLINEAR_EN 0x10 -#define DSP_CAI_MODEM_SPEAKER_OFF 0x00 -#define DSP_CAI_MODEM_SPEAKER_DURING_TRAIN 0x01 -#define DSP_CAI_MODEM_SPEAKER_TIL_CONNECT 0x02 -#define DSP_CAI_MODEM_SPEAKER_ALWAYS_ON 0x03 -#define DSP_CAI_MODEM_SPEAKER_CONTROL_MASK 0x03 -#define DSP_CAI_MODEM_SPEAKER_VOLUME_MIN 0x00 -#define DSP_CAI_MODEM_SPEAKER_VOLUME_LOW 0x04 -#define DSP_CAI_MODEM_SPEAKER_VOLUME_HIGH 0x08 -#define DSP_CAI_MODEM_SPEAKER_VOLUME_MAX 0x0c -#define DSP_CAI_MODEM_SPEAKER_VOLUME_MASK 0x0c -/* ========================================================== - DCD/CTS State - ========================================================== */ -#define MDM_WANT_CONNECT_B3_ACTIVE_I 0x01 -#define MDM_NCPI_VALID 0x02 -#define MDM_NCPI_CTS_ON_RECEIVED 0x04 -#define MDM_NCPI_DCD_ON_RECEIVED 0x08 -/* ========================================================== - CAPI NCPI Constants - ========================================================== */ -#define MDM_NCPI_ECM_V42 0x0001 -#define MDM_NCPI_ECM_MNP 0x0002 -#define MDM_NCPI_TRANSPARENT 0x0004 -#define MDM_NCPI_COMPRESSED 0x0010 -/* ========================================================== - CAPI B2 Config Constants - ========================================================== */ -#define MDM_B2_DISABLE_V42bis 0x0001 -#define MDM_B2_DISABLE_MNP 0x0002 -#define MDM_B2_DISABLE_TRANS 0x0004 -#define MDM_B2_DISABLE_V42 0x0008 -#define MDM_B2_DISABLE_COMP 0x0010 -/* ========================================================== - CAPI B1 Config Constants - ========================================================== */ -#define MDM_CAPI_DISABLE_RETRAIN 0x0001 -#define MDM_CAPI_DISABLE_RING_TONE 0x0002 -#define MDM_CAPI_GUARD_1800 0x0004 -#define MDM_CAPI_GUARD_550 0x0008 -#define MDM_CAPI_NEG_V8 0x0003 -#define MDM_CAPI_NEG_V100 0x0002 -#define MDM_CAPI_NEG_MOD_CLASS 0x0001 -#define MDM_CAPI_NEG_DISABLED 0x0000 -#endif diff --git a/drivers/isdn/hardware/eicon/message.c b/drivers/isdn/hardware/eicon/message.c deleted file mode 100644 index def7992a38e6..000000000000 --- a/drivers/isdn/hardware/eicon/message.c +++ /dev/null @@ -1,14954 +0,0 @@ -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. 
- * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ - -#include - -#include "platform.h" -#include "di_defs.h" -#include "pc.h" -#include "capi20.h" -#include "divacapi.h" -#include "mdm_msg.h" -#include "divasync.h" - -#define FILE_ "MESSAGE.C" -#define dprintf - -/*------------------------------------------------------------------*/ -/* These are options supported for all adapters that are served by */ -/* the XDI driver. Also it is not necessary to ask it from every adapter*/ -/* and it is not necessary to save it separately for every adapter */ -/* Macros defined here have only local meaning */ -/*------------------------------------------------------------------*/ -static dword diva_xdi_extended_features = 0; - -#define DIVA_CAPI_USE_CMA 0x00000001 -#define DIVA_CAPI_XDI_PROVIDES_SDRAM_BAR 0x00000002 -#define DIVA_CAPI_XDI_PROVIDES_NO_CANCEL 0x00000004 -#define DIVA_CAPI_XDI_PROVIDES_RX_DMA 0x00000008 - -/* - CAPI can request to process all return codes itself only if: - protocol code supports this && xdi supports this -*/ -#define DIVA_CAPI_SUPPORTS_NO_CANCEL(__a__) (((__a__)->manufacturer_features & MANUFACTURER_FEATURE_XONOFF_FLOW_CONTROL) && ((__a__)->manufacturer_features & MANUFACTURER_FEATURE_OK_FC_LABEL) && (diva_xdi_extended_features & DIVA_CAPI_XDI_PROVIDES_NO_CANCEL)) - -/*------------------------------------------------------------------*/ -/* local function prototypes */ -/*------------------------------------------------------------------*/ - -static void group_optimization(DIVA_CAPI_ADAPTER *a, PLCI *plci); -void AutomaticLaw(DIVA_CAPI_ADAPTER *); -word CapiRelease(word); -word CapiRegister(word); -word api_put(APPL *, CAPI_MSG *); -static word api_parse(byte *, word, byte *, API_PARSE *); -static void api_save_msg(API_PARSE *in, byte *format, API_SAVE *out); -static void api_load_msg(API_SAVE *in, API_PARSE *out); - -word api_remove_start(void); -void api_remove_complete(void); - -static void plci_remove(PLCI *); -static void diva_get_extended_adapter_features(DIVA_CAPI_ADAPTER *a); -static void diva_ask_for_xdi_sdram_bar(DIVA_CAPI_ADAPTER *, IDI_SYNC_REQ *); - -void callback(ENTITY *); - -static void control_rc(PLCI *, byte, byte, byte, byte, byte); -static void data_rc(PLCI *, byte); -static void data_ack(PLCI *, byte); -static void sig_ind(PLCI *); -static void SendInfo(PLCI *, dword, byte **, byte); -static void SendSetupInfo(APPL *, PLCI *, dword, byte **, byte); -static void SendSSExtInd(APPL *, PLCI *plci, dword Id, byte **parms); - -static void VSwitchReqInd(PLCI *plci, dword Id, byte **parms); - -static void nl_ind(PLCI *); - -static byte connect_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte connect_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte connect_a_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte disconnect_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte disconnect_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte listen_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte info_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte info_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte alert_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static 
byte facility_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte facility_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte connect_b3_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte connect_b3_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte connect_b3_a_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte disconnect_b3_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte disconnect_b3_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte data_b3_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte data_b3_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte reset_b3_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte reset_b3_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte connect_b3_t90_a_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte select_b_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte manufacturer_req(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -static byte manufacturer_res(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); - -static word get_plci(DIVA_CAPI_ADAPTER *); -static void add_p(PLCI *, byte, byte *); -static void add_s(PLCI *plci, byte code, API_PARSE *p); -static void add_ss(PLCI *plci, byte code, API_PARSE *p); -static void add_ie(PLCI *plci, byte code, byte *p, word p_length); -static void add_d(PLCI *, word, byte *); -static void add_ai(PLCI *, API_PARSE *); -static word add_b1(PLCI *, API_PARSE *, word, word); -static word add_b23(PLCI *, API_PARSE *); -static word add_modem_b23(PLCI *plci, API_PARSE *bp_parms); -static void sig_req(PLCI *, byte, byte); -static void nl_req_ncci(PLCI *, byte, byte); -static void send_req(PLCI *); -static void send_data(PLCI *); -static word plci_remove_check(PLCI *); -static void listen_check(DIVA_CAPI_ADAPTER *); -static byte AddInfo(byte **, byte **, byte *, byte *); -static byte getChannel(API_PARSE *); -static void IndParse(PLCI *, const word *, byte **, byte); -static byte ie_compare(byte *, byte *); -static word find_cip(DIVA_CAPI_ADAPTER *, byte *, byte *); -static word CPN_filter_ok(byte *cpn, DIVA_CAPI_ADAPTER *, word); - -/* - XON protocol helpers -*/ -static void channel_flow_control_remove(PLCI *plci); -static void channel_x_off(PLCI *plci, byte ch, byte flag); -static void channel_x_on(PLCI *plci, byte ch); -static void channel_request_xon(PLCI *plci, byte ch); -static void channel_xmit_xon(PLCI *plci); -static int channel_can_xon(PLCI *plci, byte ch); -static void channel_xmit_extended_xon(PLCI *plci); - -static byte SendMultiIE(PLCI *plci, dword Id, byte **parms, byte ie_type, dword info_mask, byte setupParse); -static word AdvCodecSupport(DIVA_CAPI_ADAPTER *, PLCI *, APPL *, byte); -static void CodecIdCheck(DIVA_CAPI_ADAPTER *, PLCI *); -static void SetVoiceChannel(PLCI *, byte *, DIVA_CAPI_ADAPTER *); -static void VoiceChannelOff(PLCI *plci); -static void adv_voice_write_coefs(PLCI *plci, word write_command); -static void adv_voice_clear_config(PLCI *plci); - -static word get_b1_facilities(PLCI *plci, byte b1_resource); -static byte add_b1_facilities(PLCI *plci, byte b1_resource, word b1_facilities); -static void adjust_b1_facilities(PLCI *plci, byte new_b1_resource, word new_b1_facilities); 
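For readability, the condition packed into the DIVA_CAPI_SUPPORTS_NO_CANCEL() macro above can be restated as a function; a minimal sketch (hypothetical helper, not part of the driver):

static int supports_no_cancel(const DIVA_CAPI_ADAPTER *a)
{
	/* The adapter must advertise both manufacturer feature bits and
	   the XDI layer must provide the no-cancel extension before CAPI
	   keeps processing of all return codes to itself. */
	return (a->manufacturer_features & MANUFACTURER_FEATURE_XONOFF_FLOW_CONTROL)
		&& (a->manufacturer_features & MANUFACTURER_FEATURE_OK_FC_LABEL)
		&& (diva_xdi_extended_features & DIVA_CAPI_XDI_PROVIDES_NO_CANCEL);
}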
-static word adjust_b_process(dword Id, PLCI *plci, byte Rc); -static void adjust_b1_resource(dword Id, PLCI *plci, API_SAVE *bp_msg, word b1_facilities, word internal_command); -static void adjust_b_restore(dword Id, PLCI *plci, byte Rc); -static void reset_b3_command(dword Id, PLCI *plci, byte Rc); -static void select_b_command(dword Id, PLCI *plci, byte Rc); -static void fax_connect_ack_command(dword Id, PLCI *plci, byte Rc); -static void fax_edata_ack_command(dword Id, PLCI *plci, byte Rc); -static void fax_connect_info_command(dword Id, PLCI *plci, byte Rc); -static void fax_adjust_b23_command(dword Id, PLCI *plci, byte Rc); -static void fax_disconnect_command(dword Id, PLCI *plci, byte Rc); -static void hold_save_command(dword Id, PLCI *plci, byte Rc); -static void retrieve_restore_command(dword Id, PLCI *plci, byte Rc); -static void init_b1_config(PLCI *plci); -static void clear_b1_config(PLCI *plci); - -static void dtmf_command(dword Id, PLCI *plci, byte Rc); -static byte dtmf_request(dword Id, word Number, DIVA_CAPI_ADAPTER *a, PLCI *plci, APPL *appl, API_PARSE *msg); -static void dtmf_confirmation(dword Id, PLCI *plci); -static void dtmf_indication(dword Id, PLCI *plci, byte *msg, word length); -static void dtmf_parameter_write(PLCI *plci); - - -static void mixer_set_bchannel_id_esc(PLCI *plci, byte bchannel_id); -static void mixer_set_bchannel_id(PLCI *plci, byte *chi); -static void mixer_clear_config(PLCI *plci); -static void mixer_notify_update(PLCI *plci, byte others); -static void mixer_command(dword Id, PLCI *plci, byte Rc); -static byte mixer_request(dword Id, word Number, DIVA_CAPI_ADAPTER *a, PLCI *plci, APPL *appl, API_PARSE *msg); -static void mixer_indication_coefs_set(dword Id, PLCI *plci); -static void mixer_indication_xconnect_from(dword Id, PLCI *plci, byte *msg, word length); -static void mixer_indication_xconnect_to(dword Id, PLCI *plci, byte *msg, word length); -static void mixer_remove(PLCI *plci); - - -static void ec_command(dword Id, PLCI *plci, byte Rc); -static byte ec_request(dword Id, word Number, DIVA_CAPI_ADAPTER *a, PLCI *plci, APPL *appl, API_PARSE *msg); -static void ec_indication(dword Id, PLCI *plci, byte *msg, word length); - - -static void rtp_connect_b3_req_command(dword Id, PLCI *plci, byte Rc); -static void rtp_connect_b3_res_command(dword Id, PLCI *plci, byte Rc); - - -static int diva_get_dma_descriptor(PLCI *plci, dword *dma_magic); -static void diva_free_dma_descriptor(PLCI *plci, int nr); - -/*------------------------------------------------------------------*/ -/* external function prototypes */ -/*------------------------------------------------------------------*/ - -extern byte MapController(byte); -extern byte UnMapController(byte); -#define MapId(Id)(((Id) & 0xffffff00L) | MapController((byte)(Id))) -#define UnMapId(Id)(((Id) & 0xffffff00L) | UnMapController((byte)(Id))) - -void sendf(APPL *, word, dword, word, byte *, ...); -void *TransmitBufferSet(APPL *appl, dword ref); -void *TransmitBufferGet(APPL *appl, void *p); -void TransmitBufferFree(APPL *appl, void *p); -void *ReceiveBufferGet(APPL *appl, int Num); - -int fax_head_line_time(char *buffer); - - -/*------------------------------------------------------------------*/ -/* Global data definitions */ -/*------------------------------------------------------------------*/ -extern byte max_adapter; -extern byte max_appl; -extern DIVA_CAPI_ADAPTER *adapter; -extern APPL *application; - - - - - - - -static byte remove_started = false; -static PLCI dummy_plci; - - -static struct 
_ftable { - word command; - byte *format; - byte (*function)(dword, word, DIVA_CAPI_ADAPTER *, PLCI *, APPL *, API_PARSE *); -} ftable[] = { - {_DATA_B3_R, "dwww", data_b3_req}, - {_DATA_B3_I | RESPONSE, "w", data_b3_res}, - {_INFO_R, "ss", info_req}, - {_INFO_I | RESPONSE, "", info_res}, - {_CONNECT_R, "wsssssssss", connect_req}, - {_CONNECT_I | RESPONSE, "wsssss", connect_res}, - {_CONNECT_ACTIVE_I | RESPONSE, "", connect_a_res}, - {_DISCONNECT_R, "s", disconnect_req}, - {_DISCONNECT_I | RESPONSE, "", disconnect_res}, - {_LISTEN_R, "dddss", listen_req}, - {_ALERT_R, "s", alert_req}, - {_FACILITY_R, "ws", facility_req}, - {_FACILITY_I | RESPONSE, "ws", facility_res}, - {_CONNECT_B3_R, "s", connect_b3_req}, - {_CONNECT_B3_I | RESPONSE, "ws", connect_b3_res}, - {_CONNECT_B3_ACTIVE_I | RESPONSE, "", connect_b3_a_res}, - {_DISCONNECT_B3_R, "s", disconnect_b3_req}, - {_DISCONNECT_B3_I | RESPONSE, "", disconnect_b3_res}, - {_RESET_B3_R, "s", reset_b3_req}, - {_RESET_B3_I | RESPONSE, "", reset_b3_res}, - {_CONNECT_B3_T90_ACTIVE_I | RESPONSE, "ws", connect_b3_t90_a_res}, - {_CONNECT_B3_T90_ACTIVE_I | RESPONSE, "", connect_b3_t90_a_res}, - {_SELECT_B_REQ, "s", select_b_req}, - {_MANUFACTURER_R, "dws", manufacturer_req}, - {_MANUFACTURER_I | RESPONSE, "dws", manufacturer_res}, - {_MANUFACTURER_I | RESPONSE, "", manufacturer_res} -}; - -static byte *cip_bc[29][2] = { - { "", "" }, /* 0 */ - { "\x03\x80\x90\xa3", "\x03\x80\x90\xa2" }, /* 1 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 2 */ - { "\x02\x89\x90", "\x02\x89\x90" }, /* 3 */ - { "\x03\x90\x90\xa3", "\x03\x90\x90\xa2" }, /* 4 */ - { "\x03\x91\x90\xa5", "\x03\x91\x90\xa5" }, /* 5 */ - { "\x02\x98\x90", "\x02\x98\x90" }, /* 6 */ - { "\x04\x88\xc0\xc6\xe6", "\x04\x88\xc0\xc6\xe6" }, /* 7 */ - { "\x04\x88\x90\x21\x8f", "\x04\x88\x90\x21\x8f" }, /* 8 */ - { "\x03\x91\x90\xa5", "\x03\x91\x90\xa5" }, /* 9 */ - { "", "" }, /* 10 */ - { "", "" }, /* 11 */ - { "", "" }, /* 12 */ - { "", "" }, /* 13 */ - { "", "" }, /* 14 */ - { "", "" }, /* 15 */ - - { "\x03\x80\x90\xa3", "\x03\x80\x90\xa2" }, /* 16 */ - { "\x03\x90\x90\xa3", "\x03\x90\x90\xa2" }, /* 17 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 18 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 19 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 20 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 21 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 22 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 23 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 24 */ - { "\x02\x88\x90", "\x02\x88\x90" }, /* 25 */ - { "\x03\x91\x90\xa5", "\x03\x91\x90\xa5" }, /* 26 */ - { "\x03\x91\x90\xa5", "\x03\x91\x90\xa5" }, /* 27 */ - { "\x02\x88\x90", "\x02\x88\x90" } /* 28 */ -}; - -static byte *cip_hlc[29] = { - "", /* 0 */ - "", /* 1 */ - "", /* 2 */ - "", /* 3 */ - "", /* 4 */ - "", /* 5 */ - "", /* 6 */ - "", /* 7 */ - "", /* 8 */ - "", /* 9 */ - "", /* 10 */ - "", /* 11 */ - "", /* 12 */ - "", /* 13 */ - "", /* 14 */ - "", /* 15 */ - - "\x02\x91\x81", /* 16 */ - "\x02\x91\x84", /* 17 */ - "\x02\x91\xa1", /* 18 */ - "\x02\x91\xa4", /* 19 */ - "\x02\x91\xa8", /* 20 */ - "\x02\x91\xb1", /* 21 */ - "\x02\x91\xb2", /* 22 */ - "\x02\x91\xb5", /* 23 */ - "\x02\x91\xb8", /* 24 */ - "\x02\x91\xc1", /* 25 */ - "\x02\x91\x81", /* 26 */ - "\x03\x91\xe0\x01", /* 27 */ - "\x03\x91\xe0\x02" /* 28 */ -}; - -/*------------------------------------------------------------------*/ - -#define V120_HEADER_LENGTH 1 -#define V120_HEADER_EXTEND_BIT 0x80 -#define V120_HEADER_BREAK_BIT 0x40 -#define V120_HEADER_C1_BIT 0x04 -#define V120_HEADER_C2_BIT 0x08 -#define 
V120_HEADER_FLUSH_COND (V120_HEADER_BREAK_BIT | V120_HEADER_C1_BIT | V120_HEADER_C2_BIT) - -static byte v120_default_header[] = -{ - - 0x83 /* Ext, BR , res, res, C2 , C1 , B , F */ - -}; - -static byte v120_break_header[] = -{ - - 0xc3 | V120_HEADER_BREAK_BIT /* Ext, BR , res, res, C2 , C1 , B , F */ - -}; - - -/*------------------------------------------------------------------*/ -/* API_PUT function */ -/*------------------------------------------------------------------*/ - -word api_put(APPL *appl, CAPI_MSG *msg) -{ - word i, j, k, l, n; - word ret; - byte c; - byte controller; - DIVA_CAPI_ADAPTER *a; - PLCI *plci; - NCCI *ncci_ptr; - word ncci; - CAPI_MSG *m; - API_PARSE msg_parms[MAX_MSG_PARMS + 1]; - - if (msg->header.length < sizeof(msg->header) || - msg->header.length > MAX_MSG_SIZE) { - dbug(1, dprintf("bad len")); - return _BAD_MSG; - } - - controller = (byte)((msg->header.controller & 0x7f) - 1); - - /* controller starts with 0 up to (max_adapter - 1) */ - if (controller >= max_adapter) - { - dbug(1, dprintf("invalid ctrl")); - return _BAD_MSG; - } - - a = &adapter[controller]; - plci = NULL; - if ((msg->header.plci != 0) && (msg->header.plci <= a->max_plci) && !a->adapter_disabled) - { - dbug(1, dprintf("plci=%x", msg->header.plci)); - plci = &a->plci[msg->header.plci - 1]; - ncci = GET_WORD(&msg->header.ncci); - if (plci->Id - && (plci->appl - || (plci->State == INC_CON_PENDING) - || (plci->State == INC_CON_ALERT) - || (msg->header.command == (_DISCONNECT_I | RESPONSE))) - && ((ncci == 0) - || (msg->header.command == (_DISCONNECT_B3_I | RESPONSE)) - || ((ncci < MAX_NCCI + 1) && (a->ncci_plci[ncci] == plci->Id)))) - { - i = plci->msg_in_read_pos; - j = plci->msg_in_write_pos; - if (j >= i) - { - if (j + msg->header.length + MSG_IN_OVERHEAD <= MSG_IN_QUEUE_SIZE) - i += MSG_IN_QUEUE_SIZE - j; - else - j = 0; - } - else - { - - n = (((CAPI_MSG *)(plci->msg_in_queue))->header.length + MSG_IN_OVERHEAD + 3) & 0xfffc; - - if (i > MSG_IN_QUEUE_SIZE - n) - i = MSG_IN_QUEUE_SIZE - n + 1; - i -= j; - } - - if (i <= ((msg->header.length + MSG_IN_OVERHEAD + 3) & 0xfffc)) - - { - dbug(0, dprintf("Q-FULL1(msg) - len=%d write=%d read=%d wrap=%d free=%d", - msg->header.length, plci->msg_in_write_pos, - plci->msg_in_read_pos, plci->msg_in_wrap_pos, i)); - - return _QUEUE_FULL; - } - c = false; - if ((((byte *) msg) < ((byte *)(plci->msg_in_queue))) - || (((byte *) msg) >= ((byte *)(plci->msg_in_queue)) + sizeof(plci->msg_in_queue))) - { - if (plci->msg_in_write_pos != plci->msg_in_read_pos) - c = true; - } - if (msg->header.command == _DATA_B3_R) - { - if (msg->header.length < 20) - { - dbug(1, dprintf("DATA_B3 REQ wrong length %d", msg->header.length)); - return _BAD_MSG; - } - ncci_ptr = &(a->ncci[ncci]); - n = ncci_ptr->data_pending; - l = ncci_ptr->data_ack_pending; - k = plci->msg_in_read_pos; - while (k != plci->msg_in_write_pos) - { - if (k == plci->msg_in_wrap_pos) - k = 0; - if ((((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[k]))->header.command == _DATA_B3_R) - && (((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[k]))->header.ncci == ncci)) - { - n++; - if (((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[k]))->info.data_b3_req.Flags & 0x0004) - l++; - } - - k += (((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[k]))->header.length + - MSG_IN_OVERHEAD + 3) & 0xfffc; - - } - if ((n >= MAX_DATA_B3) || (l >= MAX_DATA_ACK)) - { - dbug(0, dprintf("Q-FULL2(data) - pending=%d/%d ack_pending=%d/%d", - ncci_ptr->data_pending, n, ncci_ptr->data_ack_pending, l)); - - return _QUEUE_FULL; - } - if 
(plci->req_in || plci->internal_command) - { - if ((((byte *) msg) >= ((byte *)(plci->msg_in_queue))) - && (((byte *) msg) < ((byte *)(plci->msg_in_queue)) + sizeof(plci->msg_in_queue))) - { - dbug(0, dprintf("Q-FULL3(requeue)")); - - return _QUEUE_FULL; - } - c = true; - } - } - else - { - if (plci->req_in || plci->internal_command) - c = true; - else - { - plci->command = msg->header.command; - plci->number = msg->header.number; - } - } - if (c) - { - dbug(1, dprintf("enqueue msg(0x%04x,0x%x,0x%x) - len=%d write=%d read=%d wrap=%d free=%d", - msg->header.command, plci->req_in, plci->internal_command, - msg->header.length, plci->msg_in_write_pos, - plci->msg_in_read_pos, plci->msg_in_wrap_pos, i)); - if (j == 0) - plci->msg_in_wrap_pos = plci->msg_in_write_pos; - m = (CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[j]); - for (i = 0; i < msg->header.length; i++) - ((byte *)(plci->msg_in_queue))[j++] = ((byte *) msg)[i]; - if (m->header.command == _DATA_B3_R) - { - - m->info.data_b3_req.Data = (dword)(long)(TransmitBufferSet(appl, m->info.data_b3_req.Data)); - - } - - j = (j + 3) & 0xfffc; - - *((APPL **)(&((byte *)(plci->msg_in_queue))[j])) = appl; - plci->msg_in_write_pos = j + MSG_IN_OVERHEAD; - return 0; - } - } - else - { - plci = NULL; - } - } - dbug(1, dprintf("com=%x", msg->header.command)); - - for (j = 0; j < MAX_MSG_PARMS + 1; j++) msg_parms[j].length = 0; - for (i = 0, ret = _BAD_MSG; i < ARRAY_SIZE(ftable); i++) { - - if (ftable[i].command == msg->header.command) { - /* break loop if the message is correct, otherwise continue scan */ - /* (for example: CONNECT_B3_T90_ACT_RES has two specifications) */ - if (!api_parse(msg->info.b, (word)(msg->header.length - 12), ftable[i].format, msg_parms)) { - ret = 0; - break; - } - for (j = 0; j < MAX_MSG_PARMS + 1; j++) msg_parms[j].length = 0; - } - } - if (ret) { - dbug(1, dprintf("BAD_MSG")); - if (plci) plci->command = 0; - return ret; - } - - - c = ftable[i].function(GET_DWORD(&msg->header.controller), - msg->header.number, - a, - plci, - appl, - msg_parms); - - channel_xmit_extended_xon(plci); - - if (c == 1) send_req(plci); - if (c == 2 && plci) plci->req_in = plci->req_in_start = plci->req_out = 0; - if (plci && !plci->req_in) plci->command = 0; - return 0; -} - - -/*------------------------------------------------------------------*/ -/* api_parse function, check the format of api messages */ -/*------------------------------------------------------------------*/ - -static word api_parse(byte *msg, word length, byte *format, API_PARSE *parms) -{ - word i; - word p; - - for (i = 0, p = 0; format[i]; i++) { - if (parms) - { - parms[i].info = &msg[p]; - } - switch (format[i]) { - case 'b': - p += 1; - break; - case 'w': - p += 2; - break; - case 'd': - p += 4; - break; - case 's': - if (msg[p] == 0xff) { - parms[i].info += 2; - parms[i].length = msg[p + 1] + (msg[p + 2] << 8); - p += (parms[i].length + 3); - } - else { - parms[i].length = msg[p]; - p += (parms[i].length + 1); - } - break; - } - - if (p > length) return true; - } - if (parms) parms[i].info = NULL; - return false; -} - -static void api_save_msg(API_PARSE *in, byte *format, API_SAVE *out) -{ - word i, j, n = 0; - byte *p; - - p = out->info; - for (i = 0; format[i] != '\0'; i++) - { - out->parms[i].info = p; - out->parms[i].length = in[i].length; - switch (format[i]) - { - case 'b': - n = 1; - break; - case 'w': - n = 2; - break; - case 'd': - n = 4; - break; - case 's': - n = in[i].length + 1; - break; - } - for (j = 0; j < n; j++) - *(p++) = in[i].info[j]; - } - 
out->parms[i].info = NULL; - out->parms[i].length = 0; -} - -static void api_load_msg(API_SAVE *in, API_PARSE *out) -{ - word i; - - i = 0; - do - { - out[i].info = in->parms[i].info; - out[i].length = in->parms[i].length; - } while (in->parms[i++].info); -} - - -/*------------------------------------------------------------------*/ -/* CAPI remove function */ -/*------------------------------------------------------------------*/ - -word api_remove_start(void) -{ - word i; - word j; - - if (!remove_started) { - remove_started = true; - for (i = 0; i < max_adapter; i++) { - if (adapter[i].request) { - for (j = 0; j < adapter[i].max_plci; j++) { - if (adapter[i].plci[j].Sig.Id) plci_remove(&adapter[i].plci[j]); - } - } - } - return 1; - } - else { - for (i = 0; i < max_adapter; i++) { - if (adapter[i].request) { - for (j = 0; j < adapter[i].max_plci; j++) { - if (adapter[i].plci[j].Sig.Id) return 1; - } - } - } - } - api_remove_complete(); - return 0; -} - - -/*------------------------------------------------------------------*/ -/* internal command queue */ -/*------------------------------------------------------------------*/ - -static void init_internal_command_queue(PLCI *plci) -{ - word i; - - dbug(1, dprintf("%s,%d: init_internal_command_queue", - (char *)(FILE_), __LINE__)); - - plci->internal_command = 0; - for (i = 0; i < MAX_INTERNAL_COMMAND_LEVELS; i++) - plci->internal_command_queue[i] = NULL; -} - - -static void start_internal_command(dword Id, PLCI *plci, t_std_internal_command command_function) -{ - word i; - - dbug(1, dprintf("[%06lx] %s,%d: start_internal_command", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - if (plci->internal_command == 0) - { - plci->internal_command_queue[0] = command_function; - (*command_function)(Id, plci, OK); - } - else - { - i = 1; - while (plci->internal_command_queue[i] != NULL) - i++; - plci->internal_command_queue[i] = command_function; - } -} - - -static void next_internal_command(dword Id, PLCI *plci) -{ - word i; - - dbug(1, dprintf("[%06lx] %s,%d: next_internal_command", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - plci->internal_command = 0; - plci->internal_command_queue[0] = NULL; - while (plci->internal_command_queue[1] != NULL) - { - for (i = 0; i < MAX_INTERNAL_COMMAND_LEVELS - 1; i++) - plci->internal_command_queue[i] = plci->internal_command_queue[i + 1]; - plci->internal_command_queue[MAX_INTERNAL_COMMAND_LEVELS - 1] = NULL; - (*(plci->internal_command_queue[0]))(Id, plci, OK); - if (plci->internal_command != 0) - return; - plci->internal_command_queue[0] = NULL; - } -} - - -/*------------------------------------------------------------------*/ -/* NCCI allocate/remove function */ -/*------------------------------------------------------------------*/ - -static dword ncci_mapping_bug = 0; - -static word get_ncci(PLCI *plci, byte ch, word force_ncci) -{ - DIVA_CAPI_ADAPTER *a; - word ncci, i, j, k; - - a = plci->adapter; - if (!ch || a->ch_ncci[ch]) - { - ncci_mapping_bug++; - dbug(1, dprintf("NCCI mapping exists %ld %02x %02x %02x-%02x", - ncci_mapping_bug, ch, force_ncci, a->ncci_ch[a->ch_ncci[ch]], a->ch_ncci[ch])); - ncci = ch; - } - else - { - if (force_ncci) - ncci = force_ncci; - else - { - if ((ch < MAX_NCCI + 1) && !a->ncci_ch[ch]) - ncci = ch; - else - { - ncci = 1; - while ((ncci < MAX_NCCI + 1) && a->ncci_ch[ncci]) - ncci++; - if (ncci == MAX_NCCI + 1) - { - ncci_mapping_bug++; - i = 1; - do - { - j = 1; - while ((j < MAX_NCCI + 1) && (a->ncci_ch[j] != i)) - j++; - k = j; - if (j < MAX_NCCI + 1) - { - do - { - 
j++; - } while ((j < MAX_NCCI + 1) && (a->ncci_ch[j] != i)); - } - } while ((i < MAX_NL_CHANNEL + 1) && (j < MAX_NCCI + 1)); - if (i < MAX_NL_CHANNEL + 1) - { - dbug(1, dprintf("NCCI mapping overflow %ld %02x %02x %02x-%02x-%02x", - ncci_mapping_bug, ch, force_ncci, i, k, j)); - } - else - { - dbug(1, dprintf("NCCI mapping overflow %ld %02x %02x", - ncci_mapping_bug, ch, force_ncci)); - } - ncci = ch; - } - } - a->ncci_plci[ncci] = plci->Id; - a->ncci_state[ncci] = IDLE; - if (!plci->ncci_ring_list) - plci->ncci_ring_list = ncci; - else - a->ncci_next[ncci] = a->ncci_next[plci->ncci_ring_list]; - a->ncci_next[plci->ncci_ring_list] = (byte) ncci; - } - a->ncci_ch[ncci] = ch; - a->ch_ncci[ch] = (byte) ncci; - dbug(1, dprintf("NCCI mapping established %ld %02x %02x %02x-%02x", - ncci_mapping_bug, ch, force_ncci, ch, ncci)); - } - return (ncci); -} - - -static void ncci_free_receive_buffers(PLCI *plci, word ncci) -{ - DIVA_CAPI_ADAPTER *a; - APPL *appl; - word i, ncci_code; - dword Id; - - a = plci->adapter; - Id = (((dword) ncci) << 16) | (((word)(plci->Id)) << 8) | a->Id; - if (ncci) - { - if (a->ncci_plci[ncci] == plci->Id) - { - if (!plci->appl) - { - ncci_mapping_bug++; - dbug(1, dprintf("NCCI mapping appl expected %ld %08lx", - ncci_mapping_bug, Id)); - } - else - { - appl = plci->appl; - ncci_code = ncci | (((word) a->Id) << 8); - for (i = 0; i < appl->MaxBuffer; i++) - { - if ((appl->DataNCCI[i] == ncci_code) - && (((byte)(appl->DataFlags[i] >> 8)) == plci->Id)) - { - appl->DataNCCI[i] = 0; - } - } - } - } - } - else - { - for (ncci = 1; ncci < MAX_NCCI + 1; ncci++) - { - if (a->ncci_plci[ncci] == plci->Id) - { - if (!plci->appl) - { - ncci_mapping_bug++; - dbug(1, dprintf("NCCI mapping no appl %ld %08lx", - ncci_mapping_bug, Id)); - } - else - { - appl = plci->appl; - ncci_code = ncci | (((word) a->Id) << 8); - for (i = 0; i < appl->MaxBuffer; i++) - { - if ((appl->DataNCCI[i] == ncci_code) - && (((byte)(appl->DataFlags[i] >> 8)) == plci->Id)) - { - appl->DataNCCI[i] = 0; - } - } - } - } - } - } -} - - -static void cleanup_ncci_data(PLCI *plci, word ncci) -{ - NCCI *ncci_ptr; - - if (ncci && (plci->adapter->ncci_plci[ncci] == plci->Id)) - { - ncci_ptr = &(plci->adapter->ncci[ncci]); - if (plci->appl) - { - while (ncci_ptr->data_pending != 0) - { - if (!plci->data_sent || (ncci_ptr->DBuffer[ncci_ptr->data_out].P != plci->data_sent_ptr)) - TransmitBufferFree(plci->appl, ncci_ptr->DBuffer[ncci_ptr->data_out].P); - (ncci_ptr->data_out)++; - if (ncci_ptr->data_out == MAX_DATA_B3) - ncci_ptr->data_out = 0; - (ncci_ptr->data_pending)--; - } - } - ncci_ptr->data_out = 0; - ncci_ptr->data_pending = 0; - ncci_ptr->data_ack_out = 0; - ncci_ptr->data_ack_pending = 0; - } -} - - -static void ncci_remove(PLCI *plci, word ncci, byte preserve_ncci) -{ - DIVA_CAPI_ADAPTER *a; - dword Id; - word i; - - a = plci->adapter; - Id = (((dword) ncci) << 16) | (((word)(plci->Id)) << 8) | a->Id; - if (!preserve_ncci) - ncci_free_receive_buffers(plci, ncci); - if (ncci) - { - if (a->ncci_plci[ncci] != plci->Id) - { - ncci_mapping_bug++; - dbug(1, dprintf("NCCI mapping doesn't exist %ld %08lx %02x", - ncci_mapping_bug, Id, preserve_ncci)); - } - else - { - cleanup_ncci_data(plci, ncci); - dbug(1, dprintf("NCCI mapping released %ld %08lx %02x %02x-%02x", - ncci_mapping_bug, Id, preserve_ncci, a->ncci_ch[ncci], ncci)); - a->ch_ncci[a->ncci_ch[ncci]] = 0; - if (!preserve_ncci) - { - a->ncci_ch[ncci] = 0; - a->ncci_plci[ncci] = 0; - a->ncci_state[ncci] = IDLE; - i = plci->ncci_ring_list; - while ((i != 0) && 
(a->ncci_next[i] != plci->ncci_ring_list) && (a->ncci_next[i] != ncci)) - i = a->ncci_next[i]; - if ((i != 0) && (a->ncci_next[i] == ncci)) - { - if (i == ncci) - plci->ncci_ring_list = 0; - else if (plci->ncci_ring_list == ncci) - plci->ncci_ring_list = i; - a->ncci_next[i] = a->ncci_next[ncci]; - } - a->ncci_next[ncci] = 0; - } - } - } - else - { - for (ncci = 1; ncci < MAX_NCCI + 1; ncci++) - { - if (a->ncci_plci[ncci] == plci->Id) - { - cleanup_ncci_data(plci, ncci); - dbug(1, dprintf("NCCI mapping released %ld %08lx %02x %02x-%02x", - ncci_mapping_bug, Id, preserve_ncci, a->ncci_ch[ncci], ncci)); - a->ch_ncci[a->ncci_ch[ncci]] = 0; - if (!preserve_ncci) - { - a->ncci_ch[ncci] = 0; - a->ncci_plci[ncci] = 0; - a->ncci_state[ncci] = IDLE; - a->ncci_next[ncci] = 0; - } - } - } - if (!preserve_ncci) - plci->ncci_ring_list = 0; - } -} - - -/*------------------------------------------------------------------*/ -/* PLCI remove function */ -/*------------------------------------------------------------------*/ - -static void plci_free_msg_in_queue(PLCI *plci) -{ - word i; - - if (plci->appl) - { - i = plci->msg_in_read_pos; - while (i != plci->msg_in_write_pos) - { - if (i == plci->msg_in_wrap_pos) - i = 0; - if (((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[i]))->header.command == _DATA_B3_R) - { - - TransmitBufferFree(plci->appl, - (byte *)(long)(((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[i]))->info.data_b3_req.Data)); - - } - - i += (((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[i]))->header.length + - MSG_IN_OVERHEAD + 3) & 0xfffc; - - } - } - plci->msg_in_write_pos = MSG_IN_QUEUE_SIZE; - plci->msg_in_read_pos = MSG_IN_QUEUE_SIZE; - plci->msg_in_wrap_pos = MSG_IN_QUEUE_SIZE; -} - - -static void plci_remove(PLCI *plci) -{ - - if (!plci) { - dbug(1, dprintf("plci_remove(no plci)")); - return; - } - init_internal_command_queue(plci); - dbug(1, dprintf("plci_remove(%x,tel=%x)", plci->Id, plci->tel)); - if (plci_remove_check(plci)) - { - return; - } - if (plci->Sig.Id == 0xff) - { - dbug(1, dprintf("D-channel X.25 plci->NL.Id:%0x", plci->NL.Id)); - if (plci->NL.Id && !plci->nl_remove_id) - { - nl_req_ncci(plci, REMOVE, 0); - send_req(plci); - } - } - else - { - if (!plci->sig_remove_id - && (plci->Sig.Id - || (plci->req_in != plci->req_out) - || (plci->nl_req || plci->sig_req))) - { - sig_req(plci, HANGUP, 0); - send_req(plci); - } - } - ncci_remove(plci, 0, false); - plci_free_msg_in_queue(plci); - - plci->channels = 0; - plci->appl = NULL; - if ((plci->State == INC_CON_PENDING) || (plci->State == INC_CON_ALERT)) - plci->State = OUTG_DIS_PENDING; -} - -/*------------------------------------------------------------------*/ -/* translation function for each message */ -/*------------------------------------------------------------------*/ - -static byte connect_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word ch; - word i; - word Info; - byte LinkLayer; - API_PARSE *ai; - API_PARSE *bp; - API_PARSE ai_parms[5]; - word channel = 0; - dword ch_mask; - byte m; - static byte esc_chi[35] = {0x02, 0x18, 0x01}; - static byte lli[2] = {0x01, 0x00}; - byte noCh = 0; - word dir = 0; - byte *p_chi = ""; - - for (i = 0; i < 5; i++) ai_parms[i].length = 0; - - dbug(1, dprintf("connect_req(%d)", parms->length)); - Info = _WRONG_IDENTIFIER; - if (a) - { - if (a->adapter_disabled) - { - dbug(1, dprintf("adapter disabled")); - Id = ((word)1 << 8) | a->Id; - sendf(appl, _CONNECT_R | CONFIRM, Id, Number, "w", 0); - sendf(appl, _DISCONNECT_I, Id, 0, "w", 
_L1_ERROR); - return false; - } - Info = _OUT_OF_PLCI; - if ((i = get_plci(a))) - { - Info = 0; - plci = &a->plci[i - 1]; - plci->appl = appl; - plci->call_dir = CALL_DIR_OUT | CALL_DIR_ORIGINATE; - /* check 'external controller' bit for codec support */ - if (Id & EXT_CONTROLLER) - { - if (AdvCodecSupport(a, plci, appl, 0)) - { - plci->Id = 0; - sendf(appl, _CONNECT_R | CONFIRM, Id, Number, "w", _WRONG_IDENTIFIER); - return 2; - } - } - ai = &parms[9]; - bp = &parms[5]; - ch = 0; - if (bp->length)LinkLayer = bp->info[3]; - else LinkLayer = 0; - if (ai->length) - { - ch = 0xffff; - if (!api_parse(&ai->info[1], (word)ai->length, "ssss", ai_parms)) - { - ch = 0; - if (ai_parms[0].length) - { - ch = GET_WORD(ai_parms[0].info + 1); - if (ch > 4) ch = 0; /* safety -> ignore ChannelID */ - if (ch == 4) /* explizit CHI in message */ - { - /* check length of B-CH struct */ - if ((ai_parms[0].info)[3] >= 1) - { - if ((ai_parms[0].info)[4] == CHI) - { - p_chi = &((ai_parms[0].info)[5]); - } - else - { - p_chi = &((ai_parms[0].info)[3]); - } - if (p_chi[0] > 35) /* check length of channel ID */ - { - Info = _WRONG_MESSAGE_FORMAT; - } - } - else Info = _WRONG_MESSAGE_FORMAT; - } - - if (ch == 3 && ai_parms[0].length >= 7 && ai_parms[0].length <= 36) - { - dir = GET_WORD(ai_parms[0].info + 3); - ch_mask = 0; - m = 0x3f; - for (i = 0; i + 5 <= ai_parms[0].length; i++) - { - if (ai_parms[0].info[i + 5] != 0) - { - if ((ai_parms[0].info[i + 5] | m) != 0xff) - Info = _WRONG_MESSAGE_FORMAT; - else - { - if (ch_mask == 0) - channel = i; - ch_mask |= 1L << i; - } - } - m = 0; - } - if (ch_mask == 0) - Info = _WRONG_MESSAGE_FORMAT; - if (!Info) - { - if ((ai_parms[0].length == 36) || (ch_mask != ((dword)(1L << channel)))) - { - esc_chi[0] = (byte)(ai_parms[0].length - 2); - for (i = 0; i + 5 <= ai_parms[0].length; i++) - esc_chi[i + 3] = ai_parms[0].info[i + 5]; - } - else - esc_chi[0] = 2; - esc_chi[2] = (byte)channel; - plci->b_channel = (byte)channel; /* not correct for ETSI ch 17..31 */ - add_p(plci, LLI, lli); - add_p(plci, ESC, esc_chi); - plci->State = LOCAL_CONNECT; - if (!dir) plci->call_dir |= CALL_DIR_FORCE_OUTG_NL; /* dir 0=DTE, 1=DCE */ - } - } - } - } - else Info = _WRONG_MESSAGE_FORMAT; - } - - dbug(1, dprintf("ch=%x,dir=%x,p_ch=%d", ch, dir, channel)); - plci->command = _CONNECT_R; - plci->number = Number; - /* x.31 or D-ch free SAPI in LinkLayer? */ - if (ch == 1 && LinkLayer != 3 && LinkLayer != 12) noCh = true; - if ((ch == 0 || ch == 2 || noCh || ch == 3 || ch == 4) && !Info) - { - /* B-channel used for B3 connections (ch==0), or no B channel */ - /* is used (ch==2) or perm. 
connection (3) is used do a CALL */ - if (noCh) Info = add_b1(plci, &parms[5], 2, 0); /* no resource */ - else Info = add_b1(plci, &parms[5], ch, 0); - add_s(plci, OAD, &parms[2]); - add_s(plci, OSA, &parms[4]); - add_s(plci, BC, &parms[6]); - add_s(plci, LLC, &parms[7]); - add_s(plci, HLC, &parms[8]); - if (a->Info_Mask[appl->Id - 1] & 0x200) - { - /* early B3 connect (CIP mask bit 9) no release after a disc */ - add_p(plci, LLI, "\x01\x01"); - } - if (GET_WORD(parms[0].info) < 29) { - add_p(plci, BC, cip_bc[GET_WORD(parms[0].info)][a->u_law]); - add_p(plci, HLC, cip_hlc[GET_WORD(parms[0].info)]); - } - add_p(plci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(plci, ASSIGN, DSIG_ID); - } - else if (ch == 1) { - - /* D-Channel used for B3 connections */ - plci->Sig.Id = 0xff; - Info = 0; - } - - if (!Info && ch != 2 && !noCh) { - Info = add_b23(plci, &parms[5]); - if (!Info) { - if (!(plci->tel && !plci->adv_nl))nl_req_ncci(plci, ASSIGN, 0); - } - } - - if (!Info) - { - if (ch == 0 || ch == 2 || ch == 3 || noCh || ch == 4) - { - if (plci->spoofed_msg == SPOOFING_REQUIRED) - { - api_save_msg(parms, "wsssssssss", &plci->saved_msg); - plci->spoofed_msg = CALL_REQ; - plci->internal_command = BLOCK_PLCI; - plci->command = 0; - dbug(1, dprintf("Spoof")); - send_req(plci); - return false; - } - if (ch == 4)add_p(plci, CHI, p_chi); - add_s(plci, CPN, &parms[1]); - add_s(plci, DSA, &parms[3]); - if (noCh) add_p(plci, ESC, "\x02\x18\xfd"); /* D-channel, no B-L3 */ - add_ai(plci, &parms[9]); - if (!dir)sig_req(plci, CALL_REQ, 0); - else - { - plci->command = PERM_LIST_REQ; - plci->appl = appl; - sig_req(plci, LISTEN_REQ, 0); - send_req(plci); - return false; - } - } - send_req(plci); - return false; - } - plci->Id = 0; - } - } - sendf(appl, - _CONNECT_R | CONFIRM, - Id, - Number, - "w", Info); - return 2; -} - -static byte connect_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word i, Info; - word Reject; - static byte cau_t[] = {0, 0, 0x90, 0x91, 0xac, 0x9d, 0x86, 0xd8, 0x9b}; - static byte esc_t[] = {0x03, 0x08, 0x00, 0x00}; - API_PARSE *ai; - API_PARSE ai_parms[5]; - word ch = 0; - - if (!plci) { - dbug(1, dprintf("connect_res(no plci)")); - return 0; /* no plci, no send */ - } - - dbug(1, dprintf("connect_res(State=0x%x)", plci->State)); - for (i = 0; i < 5; i++) ai_parms[i].length = 0; - ai = &parms[5]; - dbug(1, dprintf("ai->length=%d", ai->length)); - - if (ai->length) - { - if (!api_parse(&ai->info[1], (word)ai->length, "ssss", ai_parms)) - { - dbug(1, dprintf("ai_parms[0].length=%d/0x%x", ai_parms[0].length, GET_WORD(ai_parms[0].info + 1))); - ch = 0; - if (ai_parms[0].length) - { - ch = GET_WORD(ai_parms[0].info + 1); - dbug(1, dprintf("BCH-I=0x%x", ch)); - } - } - } - - if (plci->State == INC_CON_CONNECTED_ALERT) - { - dbug(1, dprintf("Connected Alert Call_Res")); - if (a->Info_Mask[appl->Id - 1] & 0x200) - { - /* early B3 connect (CIP mask bit 9) no release after a disc */ - add_p(plci, LLI, "\x01\x01"); - } - add_s(plci, CONN_NR, &parms[2]); - add_s(plci, LLC, &parms[4]); - add_ai(plci, &parms[5]); - plci->State = INC_CON_ACCEPT; - sig_req(plci, CALL_RES, 0); - return 1; - } - else if (plci->State == INC_CON_PENDING || plci->State == INC_CON_ALERT) { - __clear_bit(appl->Id - 1, plci->c_ind_mask_table); - dbug(1, dprintf("c_ind_mask =%*pb", MAX_APPL, plci->c_ind_mask_table)); - Reject = GET_WORD(parms[0].info); - dbug(1, dprintf("Reject=0x%x", Reject)); - if (Reject) - { - if (bitmap_empty(plci->c_ind_mask_table, MAX_APPL)) - { - if 
((Reject & 0xff00) == 0x3400) - { - esc_t[2] = ((byte)(Reject & 0x00ff)) | 0x80; - add_p(plci, ESC, esc_t); - add_ai(plci, &parms[5]); - sig_req(plci, REJECT, 0); - } - else if (Reject == 1 || Reject >= 9) - { - add_ai(plci, &parms[5]); - sig_req(plci, HANGUP, 0); - } - else - { - esc_t[2] = cau_t[(Reject&0x000f)]; - add_p(plci, ESC, esc_t); - add_ai(plci, &parms[5]); - sig_req(plci, REJECT, 0); - } - plci->appl = appl; - } - else - { - sendf(appl, _DISCONNECT_I, Id, 0, "w", _OTHER_APPL_CONNECTED); - } - } - else { - plci->appl = appl; - if (Id & EXT_CONTROLLER) { - if (AdvCodecSupport(a, plci, appl, 0)) { - dbug(1, dprintf("connect_res(error from AdvCodecSupport)")); - sig_req(plci, HANGUP, 0); - return 1; - } - if (plci->tel == ADV_VOICE && a->AdvCodecPLCI) - { - Info = add_b23(plci, &parms[1]); - if (Info) - { - dbug(1, dprintf("connect_res(error from add_b23)")); - sig_req(plci, HANGUP, 0); - return 1; - } - if (plci->adv_nl) - { - nl_req_ncci(plci, ASSIGN, 0); - } - } - } - else - { - plci->tel = 0; - if (ch != 2) - { - Info = add_b23(plci, &parms[1]); - if (Info) - { - dbug(1, dprintf("connect_res(error from add_b23 2)")); - sig_req(plci, HANGUP, 0); - return 1; - } - } - nl_req_ncci(plci, ASSIGN, 0); - } - - if (plci->spoofed_msg == SPOOFING_REQUIRED) - { - api_save_msg(parms, "wsssss", &plci->saved_msg); - plci->spoofed_msg = CALL_RES; - plci->internal_command = BLOCK_PLCI; - plci->command = 0; - dbug(1, dprintf("Spoof")); - } - else - { - add_b1(plci, &parms[1], ch, plci->B1_facilities); - if (a->Info_Mask[appl->Id - 1] & 0x200) - { - /* early B3 connect (CIP mask bit 9) no release after a disc */ - add_p(plci, LLI, "\x01\x01"); - } - add_s(plci, CONN_NR, &parms[2]); - add_s(plci, LLC, &parms[4]); - add_ai(plci, &parms[5]); - plci->State = INC_CON_ACCEPT; - sig_req(plci, CALL_RES, 0); - } - - for_each_set_bit(i, plci->c_ind_mask_table, max_appl) - sendf(&application[i], _DISCONNECT_I, Id, 0, "w", _OTHER_APPL_CONNECTED); - } - } - return 1; -} - -static byte connect_a_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - dbug(1, dprintf("connect_a_res")); - return false; -} - -static byte disconnect_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word Info; - word i; - - dbug(1, dprintf("disconnect_req")); - - Info = _WRONG_IDENTIFIER; - - if (plci) - { - if (plci->State == INC_CON_PENDING || plci->State == INC_CON_ALERT) - { - __clear_bit(appl->Id - 1, plci->c_ind_mask_table); - plci->appl = appl; - for_each_set_bit(i, plci->c_ind_mask_table, max_appl) - sendf(&application[i], _DISCONNECT_I, Id, 0, "w", 0); - plci->State = OUTG_DIS_PENDING; - } - if (plci->Sig.Id && plci->appl) - { - Info = 0; - if (plci->Sig.Id != 0xff) - { - if (plci->State != INC_DIS_PENDING) - { - add_ai(plci, &msg[0]); - sig_req(plci, HANGUP, 0); - plci->State = OUTG_DIS_PENDING; - return 1; - } - } - else - { - if (plci->NL.Id && !plci->nl_remove_id) - { - mixer_remove(plci); - nl_req_ncci(plci, REMOVE, 0); - sendf(appl, _DISCONNECT_R | CONFIRM, Id, Number, "w", 0); - sendf(appl, _DISCONNECT_I, Id, 0, "w", 0); - plci->State = INC_DIS_PENDING; - } - return 1; - } - } - } - - if (!appl) return false; - sendf(appl, _DISCONNECT_R | CONFIRM, Id, Number, "w", Info); - return false; -} - -static byte disconnect_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - dbug(1, dprintf("disconnect_res")); - if (plci) - { - /* clear ind mask bit, just in case of collision of */ - /* 
DISCONNECT_IND and CONNECT_RES */ - __clear_bit(appl->Id - 1, plci->c_ind_mask_table); - ncci_free_receive_buffers(plci, 0); - if (plci_remove_check(plci)) - { - return 0; - } - if (plci->State == INC_DIS_PENDING - || plci->State == SUSPENDING) { - if (bitmap_empty(plci->c_ind_mask_table, MAX_APPL)) { - if (plci->State != SUSPENDING) plci->State = IDLE; - dbug(1, dprintf("chs=%d", plci->channels)); - if (!plci->channels) { - plci_remove(plci); - } - } - } - } - return 0; -} - -static byte listen_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word Info; - byte i; - - dbug(1, dprintf("listen_req(Appl=0x%x)", appl->Id)); - - Info = _WRONG_IDENTIFIER; - if (a) { - Info = 0; - a->Info_Mask[appl->Id - 1] = GET_DWORD(parms[0].info); - a->CIP_Mask[appl->Id - 1] = GET_DWORD(parms[1].info); - dbug(1, dprintf("CIP_MASK=0x%lx", GET_DWORD(parms[1].info))); - if (a->Info_Mask[appl->Id - 1] & 0x200) { /* early B3 connect provides */ - a->Info_Mask[appl->Id - 1] |= 0x10; /* call progression infos */ - } - - /* check if external controller listen and switch listen on or off*/ - if (Id&EXT_CONTROLLER && GET_DWORD(parms[1].info)) { - if (a->profile.Global_Options & ON_BOARD_CODEC) { - dummy_plci.State = IDLE; - a->codec_listen[appl->Id - 1] = &dummy_plci; - a->TelOAD[0] = (byte)(parms[3].length); - for (i = 1; parms[3].length >= i && i < 22; i++) { - a->TelOAD[i] = parms[3].info[i]; - } - a->TelOAD[i] = 0; - a->TelOSA[0] = (byte)(parms[4].length); - for (i = 1; parms[4].length >= i && i < 22; i++) { - a->TelOSA[i] = parms[4].info[i]; - } - a->TelOSA[i] = 0; - } - else Info = 0x2002; /* wrong controller, codec not supported */ - } - else{ /* clear listen */ - a->codec_listen[appl->Id - 1] = (PLCI *)0; - } - } - sendf(appl, - _LISTEN_R | CONFIRM, - Id, - Number, - "w", Info); - - if (a) listen_check(a); - return false; -} - -static byte info_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word i; - API_PARSE *ai; - PLCI *rc_plci = NULL; - API_PARSE ai_parms[5]; - word Info = 0; - - dbug(1, dprintf("info_req")); - for (i = 0; i < 5; i++) ai_parms[i].length = 0; - - ai = &msg[1]; - - if (ai->length) - { - if (api_parse(&ai->info[1], (word)ai->length, "ssss", ai_parms)) - { - dbug(1, dprintf("AddInfo wrong")); - Info = _WRONG_MESSAGE_FORMAT; - } - } - if (!a) Info = _WRONG_STATE; - - if (!Info && plci) - { /* no fac, with CPN, or KEY */ - rc_plci = plci; - if (!ai_parms[3].length && plci->State && (msg[0].length || ai_parms[1].length)) - { - /* overlap sending option */ - dbug(1, dprintf("OvlSnd")); - add_s(plci, CPN, &msg[0]); - add_s(plci, KEY, &ai_parms[1]); - sig_req(plci, INFO_REQ, 0); - send_req(plci); - return false; - } - - if (plci->State && ai_parms[2].length) - { - /* User_Info option */ - dbug(1, dprintf("UUI")); - add_s(plci, UUI, &ai_parms[2]); - sig_req(plci, USER_DATA, 0); - } - else if (plci->State && ai_parms[3].length) - { - /* Facility option */ - dbug(1, dprintf("FAC")); - add_s(plci, CPN, &msg[0]); - add_ai(plci, &msg[1]); - sig_req(plci, FACILITY_REQ, 0); - } - else - { - Info = _WRONG_STATE; - } - } - else if ((ai_parms[1].length || ai_parms[2].length || ai_parms[3].length) && !Info) - { - /* NCR_Facility option -> send UUI and Keypad too */ - dbug(1, dprintf("NCR_FAC")); - if ((i = get_plci(a))) - { - rc_plci = &a->plci[i - 1]; - appl->NullCREnable = true; - rc_plci->internal_command = C_NCR_FAC_REQ; - rc_plci->appl = appl; - add_p(rc_plci, CAI, "\x01\x80"); - add_p(rc_plci, UID, 
"\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rc_plci, ASSIGN, DSIG_ID); - send_req(rc_plci); - } - else - { - Info = _OUT_OF_PLCI; - } - - if (!Info) - { - add_s(rc_plci, CPN, &msg[0]); - add_ai(rc_plci, &msg[1]); - sig_req(rc_plci, NCR_FACILITY, 0); - send_req(rc_plci); - return false; - /* for application controlled supplementary services */ - } - } - - if (!rc_plci) - { - Info = _WRONG_MESSAGE_FORMAT; - } - - if (!Info) - { - send_req(rc_plci); - } - else - { /* appl is not assigned to a PLCI or error condition */ - dbug(1, dprintf("localInfoCon")); - sendf(appl, - _INFO_R | CONFIRM, - Id, - Number, - "w", Info); - } - return false; -} - -static byte info_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - dbug(1, dprintf("info_res")); - return false; -} - -static byte alert_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word Info; - byte ret; - - dbug(1, dprintf("alert_req")); - - Info = _WRONG_IDENTIFIER; - ret = false; - if (plci) { - Info = _ALERT_IGNORED; - if (plci->State != INC_CON_ALERT) { - Info = _WRONG_STATE; - if (plci->State == INC_CON_PENDING) { - Info = 0; - plci->State = INC_CON_ALERT; - add_ai(plci, &msg[0]); - sig_req(plci, CALL_ALERT, 0); - ret = 1; - } - } - } - sendf(appl, - _ALERT_R | CONFIRM, - Id, - Number, - "w", Info); - return ret; -} - -static byte facility_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word Info = 0; - word i = 0; - - word selector; - word SSreq; - long relatedPLCIvalue; - DIVA_CAPI_ADAPTER *relatedadapter; - byte *SSparms = ""; - byte RCparms[] = "\x05\x00\x00\x02\x00\x00"; - byte SSstruct[] = "\x09\x00\x00\x06\x00\x00\x00\x00\x00\x00"; - API_PARSE *parms; - API_PARSE ss_parms[11]; - PLCI *rplci; - byte cai[15]; - dword d; - API_PARSE dummy; - - dbug(1, dprintf("facility_req")); - for (i = 0; i < 9; i++) ss_parms[i].length = 0; - - parms = &msg[1]; - - if (!a) - { - dbug(1, dprintf("wrong Ctrl")); - Info = _WRONG_IDENTIFIER; - } - - selector = GET_WORD(msg[0].info); - - if (!Info) - { - switch (selector) - { - case SELECTOR_HANDSET: - Info = AdvCodecSupport(a, plci, appl, HOOK_SUPPORT); - break; - - case SELECTOR_SU_SERV: - if (!msg[1].length) - { - Info = _WRONG_MESSAGE_FORMAT; - break; - } - SSreq = GET_WORD(&(msg[1].info[1])); - PUT_WORD(&RCparms[1], SSreq); - SSparms = RCparms; - switch (SSreq) - { - case S_GET_SUPPORTED_SERVICES: - if ((i = get_plci(a))) - { - rplci = &a->plci[i - 1]; - rplci->appl = appl; - add_p(rplci, CAI, "\x01\x80"); - add_p(rplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rplci, ASSIGN, DSIG_ID); - send_req(rplci); - } - else - { - PUT_DWORD(&SSstruct[6], MASK_TERMINAL_PORTABILITY); - SSparms = (byte *)SSstruct; - break; - } - rplci->internal_command = GETSERV_REQ_PEND; - rplci->number = Number; - rplci->appl = appl; - sig_req(rplci, S_SUPPORTED, 0); - send_req(rplci); - return false; - break; - - case S_LISTEN: - if (parms->length == 7) - { - if (api_parse(&parms->info[1], (word)parms->length, "wbd", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - } - else - { - Info = _WRONG_MESSAGE_FORMAT; - break; - } - a->Notification_Mask[appl->Id - 1] = GET_DWORD(ss_parms[2].info); - if (a->Notification_Mask[appl->Id - 1] & SMASK_MWI) /* MWI active? 
*/ - { - if ((i = get_plci(a))) - { - rplci = &a->plci[i - 1]; - rplci->appl = appl; - add_p(rplci, CAI, "\x01\x80"); - add_p(rplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rplci, ASSIGN, DSIG_ID); - send_req(rplci); - } - else - { - break; - } - rplci->internal_command = GET_MWI_STATE; - rplci->number = Number; - sig_req(rplci, MWI_POLL, 0); - send_req(rplci); - } - break; - - case S_HOLD: - api_parse(&parms->info[1], (word)parms->length, "ws", ss_parms); - if (plci && plci->State && plci->SuppState == IDLE) - { - plci->SuppState = HOLD_REQUEST; - plci->command = C_HOLD_REQ; - add_s(plci, CAI, &ss_parms[1]); - sig_req(plci, CALL_HOLD, 0); - send_req(plci); - return false; - } - else Info = 0x3010; /* wrong state */ - break; - case S_RETRIEVE: - if (plci && plci->State && plci->SuppState == CALL_HELD) - { - if (Id & EXT_CONTROLLER) - { - if (AdvCodecSupport(a, plci, appl, 0)) - { - Info = 0x3010; /* wrong state */ - break; - } - } - else plci->tel = 0; - - plci->SuppState = RETRIEVE_REQUEST; - plci->command = C_RETRIEVE_REQ; - if (plci->spoofed_msg == SPOOFING_REQUIRED) - { - plci->spoofed_msg = CALL_RETRIEVE; - plci->internal_command = BLOCK_PLCI; - plci->command = 0; - dbug(1, dprintf("Spoof")); - return false; - } - else - { - sig_req(plci, CALL_RETRIEVE, 0); - send_req(plci); - return false; - } - } - else Info = 0x3010; /* wrong state */ - break; - case S_SUSPEND: - if (parms->length) - { - if (api_parse(&parms->info[1], (word)parms->length, "wbs", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - } - if (plci && plci->State) - { - add_s(plci, CAI, &ss_parms[2]); - plci->command = SUSPEND_REQ; - sig_req(plci, SUSPEND, 0); - plci->State = SUSPENDING; - send_req(plci); - } - else Info = 0x3010; /* wrong state */ - break; - - case S_RESUME: - if (!(i = get_plci(a))) - { - Info = _OUT_OF_PLCI; - break; - } - rplci = &a->plci[i - 1]; - rplci->appl = appl; - rplci->number = Number; - rplci->tel = 0; - rplci->call_dir = CALL_DIR_OUT | CALL_DIR_ORIGINATE; - /* check 'external controller' bit for codec support */ - if (Id & EXT_CONTROLLER) - { - if (AdvCodecSupport(a, rplci, appl, 0)) - { - rplci->Id = 0; - Info = 0x300A; - break; - } - } - if (parms->length) - { - if (api_parse(&parms->info[1], (word)parms->length, "wbs", ss_parms)) - { - dbug(1, dprintf("format wrong")); - rplci->Id = 0; - Info = _WRONG_MESSAGE_FORMAT; - break; - } - } - dummy.length = 0; - dummy.info = "\x00"; - add_b1(rplci, &dummy, 0, 0); - if (a->Info_Mask[appl->Id - 1] & 0x200) - { - /* early B3 connect (CIP mask bit 9) no release after a disc */ - add_p(rplci, LLI, "\x01\x01"); - } - add_p(rplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rplci, ASSIGN, DSIG_ID); - send_req(rplci); - add_s(rplci, CAI, &ss_parms[2]); - rplci->command = RESUME_REQ; - sig_req(rplci, RESUME, 0); - rplci->State = RESUMING; - send_req(rplci); - break; - - case S_CONF_BEGIN: /* Request */ - case S_CONF_DROP: - case S_CONF_ISOLATE: - case S_CONF_REATTACH: - if (api_parse(&parms->info[1], (word)parms->length, "wbd", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (plci && plci->State && ((plci->SuppState == IDLE) || (plci->SuppState == CALL_HELD))) - { - d = GET_DWORD(ss_parms[2].info); - if (d >= 0x80) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - plci->ptyState = (byte)SSreq; - plci->command = 0; - cai[0] = 2; - switch (SSreq) - { - case S_CONF_BEGIN: - cai[1] = CONF_BEGIN; - 
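/* cai[1] carries the conference function code for the S_SERVICE request; internal_command remembers which operation is pending for the return code handling */ -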
plci->internal_command = CONF_BEGIN_REQ_PEND; - break; - case S_CONF_DROP: - cai[1] = CONF_DROP; - plci->internal_command = CONF_DROP_REQ_PEND; - break; - case S_CONF_ISOLATE: - cai[1] = CONF_ISOLATE; - plci->internal_command = CONF_ISOLATE_REQ_PEND; - break; - case S_CONF_REATTACH: - cai[1] = CONF_REATTACH; - plci->internal_command = CONF_REATTACH_REQ_PEND; - break; - } - cai[2] = (byte)d; /* Conference Size resp. PartyId */ - add_p(plci, CAI, cai); - sig_req(plci, S_SERVICE, 0); - send_req(plci); - return false; - } - else Info = 0x3010; /* wrong state */ - break; - - case S_ECT: - case S_3PTY_BEGIN: - case S_3PTY_END: - case S_CONF_ADD: - if (parms->length == 7) - { - if (api_parse(&parms->info[1], (word)parms->length, "wbd", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - } - else if (parms->length == 8) /* workaround for the T-View-S */ - { - if (api_parse(&parms->info[1], (word)parms->length, "wbdb", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - } - else - { - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (!msg[1].length) - { - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (!plci) - { - Info = _WRONG_IDENTIFIER; - break; - } - relatedPLCIvalue = GET_DWORD(ss_parms[2].info); - relatedPLCIvalue &= 0x0000FFFF; - dbug(1, dprintf("PTY/ECT/addCONF,relPLCI=%lx", relatedPLCIvalue)); - /* controller starts with 0 up to (max_adapter - 1) */ - if (((relatedPLCIvalue & 0x7f) == 0) - || (MapController((byte)(relatedPLCIvalue & 0x7f)) == 0) - || (MapController((byte)(relatedPLCIvalue & 0x7f)) > max_adapter)) - { - if (SSreq == S_3PTY_END) - { - dbug(1, dprintf("wrong Controller use 2nd PLCI=PLCI")); - rplci = plci; - } - else - { - Info = 0x3010; /* wrong state */ - break; - } - } - else - { - relatedadapter = &adapter[MapController((byte)(relatedPLCIvalue & 0x7f)) - 1]; - relatedPLCIvalue >>= 8; - /* find PLCI PTR*/ - for (i = 0, rplci = NULL; i < relatedadapter->max_plci; i++) - { - if (relatedadapter->plci[i].Id == (byte)relatedPLCIvalue) - { - rplci = &relatedadapter->plci[i]; - } - } - if (!rplci || !relatedPLCIvalue) - { - if (SSreq == S_3PTY_END) - { - dbug(1, dprintf("use 2nd PLCI=PLCI")); - rplci = plci; - } - else - { - Info = 0x3010; /* wrong state */ - break; - } - } - } -/* - dbug(1, dprintf("rplci:%x", rplci)); - dbug(1, dprintf("plci:%x", plci)); - dbug(1, dprintf("rplci->ptyState:%x", rplci->ptyState)); - dbug(1, dprintf("plci->ptyState:%x", plci->ptyState)); - dbug(1, dprintf("SSreq:%x", SSreq)); - dbug(1, dprintf("rplci->internal_command:%x", rplci->internal_command)); - dbug(1, dprintf("rplci->appl:%x", rplci->appl)); - dbug(1, dprintf("rplci->Id:%x", rplci->Id)); -*/ - /* send PTY/ECT req, cannot check all states because of US stuff */ - if (!rplci->internal_command && rplci->appl) - { - plci->command = 0; - rplci->relatedPTYPLCI = plci; - plci->relatedPTYPLCI = rplci; - rplci->ptyState = (byte)SSreq; - if (SSreq == S_ECT) - { - rplci->internal_command = ECT_REQ_PEND; - cai[1] = ECT_EXECUTE; - - rplci->vswitchstate = 0; - rplci->vsprot = 0; - rplci->vsprotdialect = 0; - plci->vswitchstate = 0; - plci->vsprot = 0; - plci->vsprotdialect = 0; - - } - else if (SSreq == S_CONF_ADD) - { - rplci->internal_command = CONF_ADD_REQ_PEND; - cai[1] = CONF_ADD; - } - else - { - rplci->internal_command = PTY_REQ_PEND; - cai[1] = (byte)(SSreq - 3); - } - rplci->number = Number; - if (plci != rplci) /* explicit invocation */ - { - cai[0] = 2; - cai[2] = plci->Sig.Id; - dbug(1, 
dprintf("explicit invocation")); - } - else - { - dbug(1, dprintf("implicit invocation")); - cai[0] = 1; - } - add_p(rplci, CAI, cai); - sig_req(rplci, S_SERVICE, 0); - send_req(rplci); - return false; - } - else - { - dbug(0, dprintf("Wrong line")); - Info = 0x3010; /* wrong state */ - break; - } - break; - - case S_CALL_DEFLECTION: - if (api_parse(&parms->info[1], (word)parms->length, "wbwss", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (!plci) - { - Info = _WRONG_IDENTIFIER; - break; - } - /* reuse unused screening indicator */ - ss_parms[3].info[3] = (byte)GET_WORD(&(ss_parms[2].info[0])); - plci->command = 0; - plci->internal_command = CD_REQ_PEND; - appl->CDEnable = true; - cai[0] = 1; - cai[1] = CALL_DEFLECTION; - add_p(plci, CAI, cai); - add_p(plci, CPN, ss_parms[3].info); - sig_req(plci, S_SERVICE, 0); - send_req(plci); - return false; - break; - - case S_CALL_FORWARDING_START: - if (api_parse(&parms->info[1], (word)parms->length, "wbdwwsss", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - - if ((i = get_plci(a))) - { - rplci = &a->plci[i - 1]; - rplci->appl = appl; - add_p(rplci, CAI, "\x01\x80"); - add_p(rplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rplci, ASSIGN, DSIG_ID); - send_req(rplci); - } - else - { - Info = _OUT_OF_PLCI; - break; - } - - /* reuse unused screening indicator */ - rplci->internal_command = CF_START_PEND; - rplci->appl = appl; - rplci->number = Number; - appl->S_Handle = GET_DWORD(&(ss_parms[2].info[0])); - cai[0] = 2; - cai[1] = 0x70 | (byte)GET_WORD(&(ss_parms[3].info[0])); /* Function */ - cai[2] = (byte)GET_WORD(&(ss_parms[4].info[0])); /* Basic Service */ - add_p(rplci, CAI, cai); - add_p(rplci, OAD, ss_parms[5].info); - add_p(rplci, CPN, ss_parms[6].info); - sig_req(rplci, S_SERVICE, 0); - send_req(rplci); - return false; - break; - - case S_INTERROGATE_DIVERSION: - case S_INTERROGATE_NUMBERS: - case S_CALL_FORWARDING_STOP: - case S_CCBS_REQUEST: - case S_CCBS_DEACTIVATE: - case S_CCBS_INTERROGATE: - switch (SSreq) - { - case S_INTERROGATE_NUMBERS: - if (api_parse(&parms->info[1], (word)parms->length, "wbd", ss_parms)) - { - dbug(0, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - } - break; - case S_CCBS_REQUEST: - case S_CCBS_DEACTIVATE: - if (api_parse(&parms->info[1], (word)parms->length, "wbdw", ss_parms)) - { - dbug(0, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - } - break; - case S_CCBS_INTERROGATE: - if (api_parse(&parms->info[1], (word)parms->length, "wbdws", ss_parms)) - { - dbug(0, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - } - break; - default: - if (api_parse(&parms->info[1], (word)parms->length, "wbdwws", ss_parms)) - { - dbug(0, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - break; - } - - if (Info) break; - if ((i = get_plci(a))) - { - rplci = &a->plci[i - 1]; - switch (SSreq) - { - case S_INTERROGATE_DIVERSION: /* use cai with S_SERVICE below */ - cai[1] = 0x60 | (byte)GET_WORD(&(ss_parms[3].info[0])); /* Function */ - rplci->internal_command = INTERR_DIVERSION_REQ_PEND; /* move to rplci if assigned */ - break; - case S_INTERROGATE_NUMBERS: /* use cai with S_SERVICE below */ - cai[1] = DIVERSION_INTERROGATE_NUM; /* Function */ - rplci->internal_command = INTERR_NUMBERS_REQ_PEND; /* move to rplci if assigned */ - break; - case S_CALL_FORWARDING_STOP: - rplci->internal_command = CF_STOP_PEND; - cai[1] = 0x80 | (byte)GET_WORD(&(ss_parms[3].info[0])); /* 
Function */ - break; - case S_CCBS_REQUEST: - cai[1] = CCBS_REQUEST; - rplci->internal_command = CCBS_REQUEST_REQ_PEND; - break; - case S_CCBS_DEACTIVATE: - cai[1] = CCBS_DEACTIVATE; - rplci->internal_command = CCBS_DEACTIVATE_REQ_PEND; - break; - case S_CCBS_INTERROGATE: - cai[1] = CCBS_INTERROGATE; - rplci->internal_command = CCBS_INTERROGATE_REQ_PEND; - break; - default: - cai[1] = 0; - break; - } - rplci->appl = appl; - rplci->number = Number; - add_p(rplci, CAI, "\x01\x80"); - add_p(rplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rplci, ASSIGN, DSIG_ID); - send_req(rplci); - } - else - { - Info = _OUT_OF_PLCI; - break; - } - - appl->S_Handle = GET_DWORD(&(ss_parms[2].info[0])); - switch (SSreq) - { - case S_INTERROGATE_NUMBERS: - cai[0] = 1; - add_p(rplci, CAI, cai); - break; - case S_CCBS_REQUEST: - case S_CCBS_DEACTIVATE: - cai[0] = 3; - PUT_WORD(&cai[2], GET_WORD(&(ss_parms[3].info[0]))); - add_p(rplci, CAI, cai); - break; - case S_CCBS_INTERROGATE: - cai[0] = 3; - PUT_WORD(&cai[2], GET_WORD(&(ss_parms[3].info[0]))); - add_p(rplci, CAI, cai); - add_p(rplci, OAD, ss_parms[4].info); - break; - default: - cai[0] = 2; - cai[2] = (byte)GET_WORD(&(ss_parms[4].info[0])); /* Basic Service */ - add_p(rplci, CAI, cai); - add_p(rplci, OAD, ss_parms[5].info); - break; - } - - sig_req(rplci, S_SERVICE, 0); - send_req(rplci); - return false; - break; - - case S_MWI_ACTIVATE: - if (api_parse(&parms->info[1], (word)parms->length, "wbwdwwwssss", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (!plci) - { - if ((i = get_plci(a))) - { - rplci = &a->plci[i - 1]; - rplci->appl = appl; - rplci->cr_enquiry = true; - add_p(rplci, CAI, "\x01\x80"); - add_p(rplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rplci, ASSIGN, DSIG_ID); - send_req(rplci); - } - else - { - Info = _OUT_OF_PLCI; - break; - } - } - else - { - rplci = plci; - rplci->cr_enquiry = false; - } - - rplci->command = 0; - rplci->internal_command = MWI_ACTIVATE_REQ_PEND; - rplci->appl = appl; - rplci->number = Number; - - cai[0] = 13; - cai[1] = ACTIVATION_MWI; /* Function */ - PUT_WORD(&cai[2], GET_WORD(&(ss_parms[2].info[0]))); /* Basic Service */ - PUT_DWORD(&cai[4], GET_DWORD(&(ss_parms[3].info[0]))); /* Number of Messages */ - PUT_WORD(&cai[8], GET_WORD(&(ss_parms[4].info[0]))); /* Message Status */ - PUT_WORD(&cai[10], GET_WORD(&(ss_parms[5].info[0]))); /* Message Reference */ - PUT_WORD(&cai[12], GET_WORD(&(ss_parms[6].info[0]))); /* Invocation Mode */ - add_p(rplci, CAI, cai); - add_p(rplci, CPN, ss_parms[7].info); /* Receiving User Number */ - add_p(rplci, OAD, ss_parms[8].info); /* Controlling User Number */ - add_p(rplci, OSA, ss_parms[9].info); /* Controlling User Provided Number */ - add_p(rplci, UID, ss_parms[10].info); /* Time */ - sig_req(rplci, S_SERVICE, 0); - send_req(rplci); - return false; - - case S_MWI_DEACTIVATE: - if (api_parse(&parms->info[1], (word)parms->length, "wbwwss", ss_parms)) - { - dbug(1, dprintf("format wrong")); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (!plci) - { - if ((i = get_plci(a))) - { - rplci = &a->plci[i - 1]; - rplci->appl = appl; - rplci->cr_enquiry = true; - add_p(rplci, CAI, "\x01\x80"); - add_p(rplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(rplci, ASSIGN, DSIG_ID); - send_req(rplci); - } - else - { - Info = _OUT_OF_PLCI; - break; - } - } - else - { - rplci = plci; - rplci->cr_enquiry = false; - } - - rplci->command = 0; - rplci->internal_command = MWI_DEACTIVATE_REQ_PEND; - rplci->appl = appl; - rplci->number = 
Number; - - cai[0] = 5; - cai[1] = DEACTIVATION_MWI; /* Function */ - PUT_WORD(&cai[2], GET_WORD(&(ss_parms[2].info[0]))); /* Basic Service */ - PUT_WORD(&cai[4], GET_WORD(&(ss_parms[3].info[0]))); /* Invocation Mode */ - add_p(rplci, CAI, cai); - add_p(rplci, CPN, ss_parms[4].info); /* Receiving User Number */ - add_p(rplci, OAD, ss_parms[5].info); /* Controlling User Number */ - sig_req(rplci, S_SERVICE, 0); - send_req(rplci); - return false; - - default: - Info = 0x300E; /* not supported */ - break; - } - break; /* case SELECTOR_SU_SERV: end */ - - - case SELECTOR_DTMF: - return (dtmf_request(Id, Number, a, plci, appl, msg)); - - - - case SELECTOR_LINE_INTERCONNECT: - return (mixer_request(Id, Number, a, plci, appl, msg)); - - - - case PRIV_SELECTOR_ECHO_CANCELLER: - appl->appl_flags |= APPL_FLAG_PRIV_EC_SPEC; - return (ec_request(Id, Number, a, plci, appl, msg)); - - case SELECTOR_ECHO_CANCELLER: - appl->appl_flags &= ~APPL_FLAG_PRIV_EC_SPEC; - return (ec_request(Id, Number, a, plci, appl, msg)); - - - case SELECTOR_V42BIS: - default: - Info = _FACILITY_NOT_SUPPORTED; - break; - } /* end of switch (selector) */ - } - - dbug(1, dprintf("SendFacRc")); - sendf(appl, - _FACILITY_R | CONFIRM, - Id, - Number, - "wws", Info, selector, SSparms); - return false; -} - -static byte facility_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - dbug(1, dprintf("facility_res")); - return false; -} - -static byte connect_b3_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word Info = 0; - byte req; - byte len; - word w; - word fax_control_bits, fax_feature_bits, fax_info_change; - API_PARSE *ncpi; - byte pvc[2]; - - API_PARSE fax_parms[9]; - word i; - - - dbug(1, dprintf("connect_b3_req")); - if (plci) - { - if ((plci->State == IDLE) || (plci->State == OUTG_DIS_PENDING) - || (plci->State == INC_DIS_PENDING) || (plci->SuppState != IDLE)) - { - Info = _WRONG_STATE; - } - else - { - /* local reply if assign unsuccessful - or B3 protocol allows only one layer 3 connection - and already connected - or B2 protocol not any LAPD - and connect_b3_req contradicts originate/answer direction */ - if (!plci->NL.Id - || (((plci->B3_prot != B3_T90NL) && (plci->B3_prot != B3_ISO8208) && (plci->B3_prot != B3_X25_DCE)) - && ((plci->channels != 0) - || (((plci->B2_prot != B2_SDLC) && (plci->B2_prot != B2_LAPD) && (plci->B2_prot != B2_LAPD_FREE_SAPI_SEL)) - && ((plci->call_dir & CALL_DIR_ANSWER) && !(plci->call_dir & CALL_DIR_FORCE_OUTG_NL)))))) - { - dbug(1, dprintf("B3 already connected=%d or no NL.Id=0x%x, dir=%d sstate=0x%x", - plci->channels, plci->NL.Id, plci->call_dir, plci->SuppState)); - Info = _WRONG_STATE; - sendf(appl, - _CONNECT_B3_R | CONFIRM, - Id, - Number, - "w", Info); - return false; - } - plci->requested_options_conn = 0; - - req = N_CONNECT; - ncpi = &parms[0]; - if (plci->B3_prot == 2 || plci->B3_prot == 3) - { - if (ncpi->length > 2) - { - /* check for PVC */ - if (ncpi->info[2] || ncpi->info[3]) - { - pvc[0] = ncpi->info[3]; - pvc[1] = ncpi->info[2]; - add_d(plci, 2, pvc); - req = N_RESET; - } - else - { - if (ncpi->info[1] & 1) req = N_CONNECT | N_D_BIT; - add_d(plci, (word)(ncpi->length - 3), &ncpi->info[4]); - } - } - } - else if (plci->B3_prot == 5) - { - if (plci->NL.Id && !plci->nl_remove_id) - { - fax_control_bits = GET_WORD(&((T30_INFO *)plci->fax_connect_info_buffer)->control_bits_low); - fax_feature_bits = GET_WORD(&((T30_INFO *)plci->fax_connect_info_buffer)->feature_bits_low); - if 
(!(fax_control_bits & T30_CONTROL_BIT_MORE_DOCUMENTS) - || (fax_feature_bits & T30_FEATURE_BIT_MORE_DOCUMENTS)) - { - len = offsetof(T30_INFO, universal_6); - fax_info_change = false; - if (ncpi->length >= 4) - { - w = GET_WORD(&ncpi->info[3]); - if ((w & 0x0001) != ((word)(((T30_INFO *)(plci->fax_connect_info_buffer))->resolution & 0x0001))) - { - ((T30_INFO *)(plci->fax_connect_info_buffer))->resolution = - (byte)((((T30_INFO *)(plci->fax_connect_info_buffer))->resolution & ~T30_RESOLUTION_R8_0770_OR_200) | - ((w & 0x0001) ? T30_RESOLUTION_R8_0770_OR_200 : 0)); - fax_info_change = true; - } - fax_control_bits &= ~(T30_CONTROL_BIT_REQUEST_POLLING | T30_CONTROL_BIT_MORE_DOCUMENTS); - if (w & 0x0002) /* Fax-polling request */ - fax_control_bits |= T30_CONTROL_BIT_REQUEST_POLLING; - if ((w & 0x0004) /* Request to send / poll another document */ - && (a->manufacturer_features & MANUFACTURER_FEATURE_FAX_MORE_DOCUMENTS)) - { - fax_control_bits |= T30_CONTROL_BIT_MORE_DOCUMENTS; - } - if (ncpi->length >= 6) - { - w = GET_WORD(&ncpi->info[5]); - if (((byte) w) != ((T30_INFO *)(plci->fax_connect_info_buffer))->data_format) - { - ((T30_INFO *)(plci->fax_connect_info_buffer))->data_format = (byte) w; - fax_info_change = true; - } - - if ((a->man_profile.private_options & (1L << PRIVATE_FAX_SUB_SEP_PWD)) - && (GET_WORD(&ncpi->info[5]) & 0x8000)) /* Private SEP/SUB/PWD enable */ - { - plci->requested_options_conn |= (1L << PRIVATE_FAX_SUB_SEP_PWD); - } - if ((a->man_profile.private_options & (1L << PRIVATE_FAX_NONSTANDARD)) - && (GET_WORD(&ncpi->info[5]) & 0x4000)) /* Private non-standard facilities enable */ - { - plci->requested_options_conn |= (1L << PRIVATE_FAX_NONSTANDARD); - } - fax_control_bits &= ~(T30_CONTROL_BIT_ACCEPT_SUBADDRESS | T30_CONTROL_BIT_ACCEPT_SEL_POLLING | - T30_CONTROL_BIT_ACCEPT_PASSWORD); - if ((plci->requested_options_conn | plci->requested_options | a->requested_options_table[appl->Id - 1]) - & ((1L << PRIVATE_FAX_SUB_SEP_PWD) | (1L << PRIVATE_FAX_NONSTANDARD))) - { - if (api_parse(&ncpi->info[1], ncpi->length, "wwwwsss", fax_parms)) - Info = _WRONG_MESSAGE_FORMAT; - else - { - if ((plci->requested_options_conn | plci->requested_options | a->requested_options_table[appl->Id - 1]) - & (1L << PRIVATE_FAX_SUB_SEP_PWD)) - { - fax_control_bits |= T30_CONTROL_BIT_ACCEPT_SUBADDRESS | T30_CONTROL_BIT_ACCEPT_PASSWORD; - if (fax_control_bits & T30_CONTROL_BIT_ACCEPT_POLLING) - fax_control_bits |= T30_CONTROL_BIT_ACCEPT_SEL_POLLING; - } - w = fax_parms[4].length; - if (w > 20) - w = 20; - ((T30_INFO *)(plci->fax_connect_info_buffer))->station_id_len = (byte) w; - for (i = 0; i < w; i++) - ((T30_INFO *)(plci->fax_connect_info_buffer))->station_id[i] = fax_parms[4].info[1 + i]; - ((T30_INFO *)(plci->fax_connect_info_buffer))->head_line_len = 0; - len = offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH; - w = fax_parms[5].length; - if (w > 20) - w = 20; - plci->fax_connect_info_buffer[len++] = (byte) w; - for (i = 0; i < w; i++) - plci->fax_connect_info_buffer[len++] = fax_parms[5].info[1 + i]; - w = fax_parms[6].length; - if (w > 20) - w = 20; - plci->fax_connect_info_buffer[len++] = (byte) w; - for (i = 0; i < w; i++) - plci->fax_connect_info_buffer[len++] = fax_parms[6].info[1 + i]; - if ((plci->requested_options_conn | plci->requested_options | a->requested_options_table[appl->Id - 1]) - & (1L << PRIVATE_FAX_NONSTANDARD)) - { - if (api_parse(&ncpi->info[1], ncpi->length, "wwwwssss", fax_parms)) - { - dbug(1, dprintf("non-standard facilities info missing or wrong format")); - 
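/* parse failed: append a zero length non-standard facilities element so the T.30 info buffer stays well formed */ -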
plci->fax_connect_info_buffer[len++] = 0; - } - else - { - if ((fax_parms[7].length >= 3) && (fax_parms[7].info[1] >= 2)) - plci->nsf_control_bits = GET_WORD(&fax_parms[7].info[2]); - plci->fax_connect_info_buffer[len++] = (byte)(fax_parms[7].length); - for (i = 0; i < fax_parms[7].length; i++) - plci->fax_connect_info_buffer[len++] = fax_parms[7].info[1 + i]; - } - } - } - } - else - { - len = offsetof(T30_INFO, universal_6); - } - fax_info_change = true; - - } - if (fax_control_bits != GET_WORD(&((T30_INFO *)plci->fax_connect_info_buffer)->control_bits_low)) - { - PUT_WORD(&((T30_INFO *)plci->fax_connect_info_buffer)->control_bits_low, fax_control_bits); - fax_info_change = true; - } - } - if (Info == GOOD) - { - plci->fax_connect_info_length = len; - if (fax_info_change) - { - if (fax_feature_bits & T30_FEATURE_BIT_MORE_DOCUMENTS) - { - start_internal_command(Id, plci, fax_connect_info_command); - return false; - } - else - { - start_internal_command(Id, plci, fax_adjust_b23_command); - return false; - } - } - } - } - else Info = _WRONG_STATE; - } - else Info = _WRONG_STATE; - } - - else if (plci->B3_prot == B3_RTP) - { - plci->internal_req_buffer[0] = ncpi->length + 1; - plci->internal_req_buffer[1] = UDATA_REQUEST_RTP_RECONFIGURE; - for (w = 0; w < ncpi->length; w++) - plci->internal_req_buffer[2 + w] = ncpi->info[1 + w]; - start_internal_command(Id, plci, rtp_connect_b3_req_command); - return false; - } - - if (!Info) - { - nl_req_ncci(plci, req, 0); - return 1; - } - } - } - else Info = _WRONG_IDENTIFIER; - - sendf(appl, - _CONNECT_B3_R | CONFIRM, - Id, - Number, - "w", Info); - return false; -} - -static byte connect_b3_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word ncci; - API_PARSE *ncpi; - byte req; - - word w; - - - API_PARSE fax_parms[9]; - word i; - byte len; - - - dbug(1, dprintf("connect_b3_res")); - - ncci = (word)(Id >> 16); - if (plci && ncci) { - if (a->ncci_state[ncci] == INC_CON_PENDING) { - if (GET_WORD(&parms[0].info[0]) != 0) - { - a->ncci_state[ncci] = OUTG_REJ_PENDING; - channel_request_xon(plci, a->ncci_ch[ncci]); - channel_xmit_xon(plci); - cleanup_ncci_data(plci, ncci); - nl_req_ncci(plci, N_DISC, (byte)ncci); - return 1; - } - a->ncci_state[ncci] = INC_ACT_PENDING; - - req = N_CONNECT_ACK; - ncpi = &parms[1]; - if ((plci->B3_prot == 4) || (plci->B3_prot == 5) || (plci->B3_prot == 7)) - { - - if ((plci->requested_options_conn | plci->requested_options | a->requested_options_table[plci->appl->Id - 1]) - & (1L << PRIVATE_FAX_NONSTANDARD)) - { - if (((plci->B3_prot == 4) || (plci->B3_prot == 5)) - && (plci->nsf_control_bits & T30_NSF_CONTROL_BIT_ENABLE_NSF) - && (plci->nsf_control_bits & T30_NSF_CONTROL_BIT_NEGOTIATE_RESP)) - { - len = offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH; - if (plci->fax_connect_info_length < len) - { - ((T30_INFO *)(plci->fax_connect_info_buffer))->station_id_len = 0; - ((T30_INFO *)(plci->fax_connect_info_buffer))->head_line_len = 0; - } - if (api_parse(&ncpi->info[1], ncpi->length, "wwwwssss", fax_parms)) - { - dbug(1, dprintf("non-standard facilities info missing or wrong format")); - } - else - { - if (plci->fax_connect_info_length <= len) - plci->fax_connect_info_buffer[len] = 0; - len += 1 + plci->fax_connect_info_buffer[len]; - if (plci->fax_connect_info_length <= len) - plci->fax_connect_info_buffer[len] = 0; - len += 1 + plci->fax_connect_info_buffer[len]; - if ((fax_parms[7].length >= 3) && (fax_parms[7].info[1] >= 2)) - plci->nsf_control_bits = 
GET_WORD(&fax_parms[7].info[2]); - plci->fax_connect_info_buffer[len++] = (byte)(fax_parms[7].length); - for (i = 0; i < fax_parms[7].length; i++) - plci->fax_connect_info_buffer[len++] = fax_parms[7].info[1 + i]; - } - plci->fax_connect_info_length = len; - ((T30_INFO *)(plci->fax_connect_info_buffer))->code = 0; - start_internal_command(Id, plci, fax_connect_ack_command); - return false; - } - } - - nl_req_ncci(plci, req, (byte)ncci); - if ((plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_CONNECT_B3_ACT_SENT)) - { - if (plci->B3_prot == 4) - sendf(appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - else - sendf(appl, _CONNECT_B3_ACTIVE_I, Id, 0, "S", plci->ncpi_buffer); - plci->ncpi_state |= NCPI_CONNECT_B3_ACT_SENT; - } - } - - else if (plci->B3_prot == B3_RTP) - { - plci->internal_req_buffer[0] = ncpi->length + 1; - plci->internal_req_buffer[1] = UDATA_REQUEST_RTP_RECONFIGURE; - for (w = 0; w < ncpi->length; w++) - plci->internal_req_buffer[2 + w] = ncpi->info[1+w]; - start_internal_command(Id, plci, rtp_connect_b3_res_command); - return false; - } - - else - { - if (ncpi->length > 2) { - if (ncpi->info[1] & 1) req = N_CONNECT_ACK | N_D_BIT; - add_d(plci, (word)(ncpi->length - 3), &ncpi->info[4]); - } - nl_req_ncci(plci, req, (byte)ncci); - sendf(appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - if (plci->adjust_b_restore) - { - plci->adjust_b_restore = false; - start_internal_command(Id, plci, adjust_b_restore); - } - } - return 1; - } - } - return false; -} - -static byte connect_b3_a_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word ncci; - - ncci = (word)(Id >> 16); - dbug(1, dprintf("connect_b3_a_res(ncci=0x%x)", ncci)); - - if (plci && ncci && (plci->State != IDLE) && (plci->State != INC_DIS_PENDING) - && (plci->State != OUTG_DIS_PENDING)) - { - if (a->ncci_state[ncci] == INC_ACT_PENDING) { - a->ncci_state[ncci] = CONNECTED; - if (plci->State != INC_CON_CONNECTED_ALERT) plci->State = CONNECTED; - channel_request_xon(plci, a->ncci_ch[ncci]); - channel_xmit_xon(plci); - } - } - return false; -} - -static byte disconnect_b3_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word Info; - word ncci; - API_PARSE *ncpi; - - dbug(1, dprintf("disconnect_b3_req")); - - Info = _WRONG_IDENTIFIER; - ncci = (word)(Id >> 16); - if (plci && ncci) - { - Info = _WRONG_STATE; - if ((a->ncci_state[ncci] == CONNECTED) - || (a->ncci_state[ncci] == OUTG_CON_PENDING) - || (a->ncci_state[ncci] == INC_CON_PENDING) - || (a->ncci_state[ncci] == INC_ACT_PENDING)) - { - a->ncci_state[ncci] = OUTG_DIS_PENDING; - channel_request_xon(plci, a->ncci_ch[ncci]); - channel_xmit_xon(plci); - - if (a->ncci[ncci].data_pending - && ((plci->B3_prot == B3_TRANSPARENT) - || (plci->B3_prot == B3_T30) - || (plci->B3_prot == B3_T30_WITH_EXTENSIONS))) - { - plci->send_disc = (byte)ncci; - plci->command = 0; - return false; - } - else - { - cleanup_ncci_data(plci, ncci); - - if (plci->B3_prot == 2 || plci->B3_prot == 3) - { - ncpi = &parms[0]; - if (ncpi->length > 3) - { - add_d(plci, (word)(ncpi->length - 3), (byte *)&(ncpi->info[4])); - } - } - nl_req_ncci(plci, N_DISC, (byte)ncci); - } - return 1; - } - } - sendf(appl, - _DISCONNECT_B3_R | CONFIRM, - Id, - Number, - "w", Info); - return false; -} - -static byte disconnect_b3_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word ncci; - word i; - - ncci = (word)(Id >> 16); - dbug(1, 
dprintf("disconnect_b3_res(ncci=0x%x", ncci)); - if (plci && ncci) { - plci->requested_options_conn = 0; - plci->fax_connect_info_length = 0; - plci->ncpi_state = 0x00; - if (((plci->B3_prot != B3_T90NL) && (plci->B3_prot != B3_ISO8208) && (plci->B3_prot != B3_X25_DCE)) - && ((plci->B2_prot != B2_LAPD) && (plci->B2_prot != B2_LAPD_FREE_SAPI_SEL))) - { - plci->call_dir |= CALL_DIR_FORCE_OUTG_NL; - } - for (i = 0; i < MAX_CHANNELS_PER_PLCI && plci->inc_dis_ncci_table[i] != (byte)ncci; i++); - if (i < MAX_CHANNELS_PER_PLCI) { - if (plci->channels)plci->channels--; - for (; i < MAX_CHANNELS_PER_PLCI - 1; i++) plci->inc_dis_ncci_table[i] = plci->inc_dis_ncci_table[i + 1]; - plci->inc_dis_ncci_table[MAX_CHANNELS_PER_PLCI - 1] = 0; - - ncci_free_receive_buffers(plci, ncci); - - if ((plci->State == IDLE || plci->State == SUSPENDING) && !plci->channels) { - if (plci->State == SUSPENDING) { - sendf(plci->appl, - _FACILITY_I, - Id & 0xffffL, - 0, - "ws", (word)3, "\x03\x04\x00\x00"); - sendf(plci->appl, _DISCONNECT_I, Id & 0xffffL, 0, "w", 0); - } - plci_remove(plci); - plci->State = IDLE; - } - } - else - { - if ((a->manufacturer_features & MANUFACTURER_FEATURE_FAX_PAPER_FORMATS) - && ((plci->B3_prot == 4) || (plci->B3_prot == 5)) - && (a->ncci_state[ncci] == INC_DIS_PENDING)) - { - ncci_free_receive_buffers(plci, ncci); - - nl_req_ncci(plci, N_EDATA, (byte)ncci); - - plci->adapter->ncci_state[ncci] = IDLE; - start_internal_command(Id, plci, fax_disconnect_command); - return 1; - } - } - } - return false; -} - -static byte data_b3_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - NCCI *ncci_ptr; - DATA_B3_DESC *data; - word Info; - word ncci; - word i; - - dbug(1, dprintf("data_b3_req")); - - Info = _WRONG_IDENTIFIER; - ncci = (word)(Id >> 16); - dbug(1, dprintf("ncci=0x%x, plci=0x%x", ncci, plci)); - - if (plci && ncci) - { - Info = _WRONG_STATE; - if ((a->ncci_state[ncci] == CONNECTED) - || (a->ncci_state[ncci] == INC_ACT_PENDING)) - { - /* queue data */ - ncci_ptr = &(a->ncci[ncci]); - i = ncci_ptr->data_out + ncci_ptr->data_pending; - if (i >= MAX_DATA_B3) - i -= MAX_DATA_B3; - data = &(ncci_ptr->DBuffer[i]); - data->Number = Number; - if ((((byte *)(parms[0].info)) >= ((byte *)(plci->msg_in_queue))) - && (((byte *)(parms[0].info)) < ((byte *)(plci->msg_in_queue)) + sizeof(plci->msg_in_queue))) - { - - data->P = (byte *)(long)(*((dword *)(parms[0].info))); - - } - else - data->P = TransmitBufferSet(appl, *(dword *)parms[0].info); - data->Length = GET_WORD(parms[1].info); - data->Handle = GET_WORD(parms[2].info); - data->Flags = GET_WORD(parms[3].info); - (ncci_ptr->data_pending)++; - - /* check for delivery confirmation */ - if (data->Flags & 0x0004) - { - i = ncci_ptr->data_ack_out + ncci_ptr->data_ack_pending; - if (i >= MAX_DATA_ACK) - i -= MAX_DATA_ACK; - ncci_ptr->DataAck[i].Number = data->Number; - ncci_ptr->DataAck[i].Handle = data->Handle; - (ncci_ptr->data_ack_pending)++; - } - - send_data(plci); - return false; - } - } - if (appl) - { - if (plci) - { - if ((((byte *)(parms[0].info)) >= ((byte *)(plci->msg_in_queue))) - && (((byte *)(parms[0].info)) < ((byte *)(plci->msg_in_queue)) + sizeof(plci->msg_in_queue))) - { - - TransmitBufferFree(appl, (byte *)(long)(*((dword *)(parms[0].info)))); - - } - } - sendf(appl, - _DATA_B3_R | CONFIRM, - Id, - Number, - "ww", GET_WORD(parms[2].info), Info); - } - return false; -} - -static byte data_b3_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word 
n; - word ncci; - word NCCIcode; - - dbug(1, dprintf("data_b3_res")); - - ncci = (word)(Id >> 16); - if (plci && ncci) { - n = GET_WORD(parms[0].info); - dbug(1, dprintf("free(%d)", n)); - NCCIcode = ncci | (((word) a->Id) << 8); - if (n < appl->MaxBuffer && - appl->DataNCCI[n] == NCCIcode && - (byte)(appl->DataFlags[n] >> 8) == plci->Id) { - dbug(1, dprintf("found")); - appl->DataNCCI[n] = 0; - - if (channel_can_xon(plci, a->ncci_ch[ncci])) { - channel_request_xon(plci, a->ncci_ch[ncci]); - } - channel_xmit_xon(plci); - - if (appl->DataFlags[n] & 4) { - nl_req_ncci(plci, N_DATA_ACK, (byte)ncci); - return 1; - } - } - } - return false; -} - -static byte reset_b3_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word Info; - word ncci; - - dbug(1, dprintf("reset_b3_req")); - - Info = _WRONG_IDENTIFIER; - ncci = (word)(Id >> 16); - if (plci && ncci) - { - Info = _WRONG_STATE; - switch (plci->B3_prot) - { - case B3_ISO8208: - case B3_X25_DCE: - if (a->ncci_state[ncci] == CONNECTED) - { - nl_req_ncci(plci, N_RESET, (byte)ncci); - send_req(plci); - Info = GOOD; - } - break; - case B3_TRANSPARENT: - if (a->ncci_state[ncci] == CONNECTED) - { - start_internal_command(Id, plci, reset_b3_command); - Info = GOOD; - } - break; - } - } - /* reset_b3 must result in a reset_b3_con & reset_b3_Ind */ - sendf(appl, - _RESET_B3_R | CONFIRM, - Id, - Number, - "w", Info); - return false; -} - -static byte reset_b3_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word ncci; - - dbug(1, dprintf("reset_b3_res")); - - ncci = (word)(Id >> 16); - if (plci && ncci) { - switch (plci->B3_prot) - { - case B3_ISO8208: - case B3_X25_DCE: - if (a->ncci_state[ncci] == INC_RES_PENDING) - { - a->ncci_state[ncci] = CONNECTED; - nl_req_ncci(plci, N_RESET_ACK, (byte)ncci); - return true; - } - break; - } - } - return false; -} - -static byte connect_b3_t90_a_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word ncci; - API_PARSE *ncpi; - byte req; - - dbug(1, dprintf("connect_b3_t90_a_res")); - - ncci = (word)(Id >> 16); - if (plci && ncci) { - if (a->ncci_state[ncci] == INC_ACT_PENDING) { - a->ncci_state[ncci] = CONNECTED; - } - else if (a->ncci_state[ncci] == INC_CON_PENDING) { - a->ncci_state[ncci] = CONNECTED; - - req = N_CONNECT_ACK; - - /* parms[0]==0 for CAPI original message definition! 
*/ - if (parms[0].info) { - ncpi = &parms[1]; - if (ncpi->length > 2) { - if (ncpi->info[1] & 1) req = N_CONNECT_ACK | N_D_BIT; - add_d(plci, (word)(ncpi->length - 3), &ncpi->info[4]); - } - } - nl_req_ncci(plci, req, (byte)ncci); - return 1; - } - } - return false; -} - - -static byte select_b_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word Info = 0; - word i; - byte tel; - API_PARSE bp_parms[7]; - - if (!plci || !msg) - { - Info = _WRONG_IDENTIFIER; - } - else - { - dbug(1, dprintf("select_b_req[%d],PLCI=0x%x,Tel=0x%x,NL=0x%x,appl=0x%x,sstate=0x%x", - msg->length, plci->Id, plci->tel, plci->NL.Id, plci->appl, plci->SuppState)); - dbug(1, dprintf("PlciState=0x%x", plci->State)); - for (i = 0; i < 7; i++) bp_parms[i].length = 0; - - /* check if no channel is open, no B3 connected only */ - if ((plci->State == IDLE) || (plci->State == OUTG_DIS_PENDING) || (plci->State == INC_DIS_PENDING) - || (plci->SuppState != IDLE) || plci->channels || plci->nl_remove_id) - { - Info = _WRONG_STATE; - } - /* check message format and fill bp_parms pointer */ - else if (msg->length && api_parse(&msg->info[1], (word)msg->length, "wwwsss", bp_parms)) - { - Info = _WRONG_MESSAGE_FORMAT; - } - else - { - if ((plci->State == INC_CON_PENDING) || (plci->State == INC_CON_ALERT)) /* send alert tone inband to the network, */ - { /* e.g. Qsig or RBS or Cornet-N or xess PRI */ - if (Id & EXT_CONTROLLER) - { - sendf(appl, _SELECT_B_REQ | CONFIRM, Id, Number, "w", 0x2002); /* wrong controller */ - return 0; - } - plci->State = INC_CON_CONNECTED_ALERT; - plci->appl = appl; - __clear_bit(appl->Id - 1, plci->c_ind_mask_table); - dbug(1, dprintf("c_ind_mask =%*pb", MAX_APPL, plci->c_ind_mask_table)); - /* disconnect the other appls, it's quasi a connect */ - for_each_set_bit(i, plci->c_ind_mask_table, max_appl) - sendf(&application[i], _DISCONNECT_I, Id, 0, "w", _OTHER_APPL_CONNECTED); - } - - api_save_msg(msg, "s", &plci->saved_msg); - tel = plci->tel; - if (Id & EXT_CONTROLLER) - { - if (tel) /* external controller in use by this PLCI */ - { - if (a->AdvSignalAppl && a->AdvSignalAppl != appl) - { - dbug(1, dprintf("Ext_Ctrl in use 1")); - Info = _WRONG_STATE; - } - } - else /* external controller NOT in use by this PLCI ? 
*/ - { - if (a->AdvSignalPLCI) - { - dbug(1, dprintf("Ext_Ctrl in use 2")); - Info = _WRONG_STATE; - } - else /* activate the codec */ - { - dbug(1, dprintf("Ext_Ctrl start")); - if (AdvCodecSupport(a, plci, appl, 0)) - { - dbug(1, dprintf("Error in codec procedures")); - Info = _WRONG_STATE; - } - else if (plci->spoofed_msg == SPOOFING_REQUIRED) /* wait until codec is active */ - { - plci->spoofed_msg = AWAITING_SELECT_B; - plci->internal_command = BLOCK_PLCI; /* lock other commands */ - plci->command = 0; - dbug(1, dprintf("continue if codec loaded")); - return false; - } - } - } - } - else /* external controller bit is OFF */ - { - if (tel) /* external controller in use, need to switch off */ - { - if (a->AdvSignalAppl == appl) - { - CodecIdCheck(a, plci); - plci->tel = 0; - plci->adv_nl = 0; - dbug(1, dprintf("Ext_Ctrl disable")); - } - else - { - dbug(1, dprintf("Ext_Ctrl not requested")); - } - } - } - if (!Info) - { - if (plci->call_dir & CALL_DIR_OUT) - plci->call_dir = CALL_DIR_OUT | CALL_DIR_ORIGINATE; - else if (plci->call_dir & CALL_DIR_IN) - plci->call_dir = CALL_DIR_IN | CALL_DIR_ANSWER; - start_internal_command(Id, plci, select_b_command); - return false; - } - } - } - sendf(appl, _SELECT_B_REQ | CONFIRM, Id, Number, "w", Info); - return false; -} - -static byte manufacturer_req(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *parms) -{ - word command; - word i; - word ncci; - API_PARSE *m; - API_PARSE m_parms[5]; - word codec; - byte req; - byte ch; - byte dir; - static byte chi[2] = {0x01, 0x00}; - static byte lli[2] = {0x01, 0x00}; - static byte codec_cai[2] = {0x01, 0x01}; - static byte null_msg = {0}; - static API_PARSE null_parms = { 0, &null_msg }; - PLCI *v_plci; - word Info = 0; - - dbug(1, dprintf("manufacturer_req")); - for (i = 0; i < 5; i++) m_parms[i].length = 0; - - if (GET_DWORD(parms[0].info) != _DI_MANU_ID) { - Info = _WRONG_MESSAGE_FORMAT; - } - command = GET_WORD(parms[1].info); - m = &parms[2]; - if (!Info) - { - switch (command) { - case _DI_ASSIGN_PLCI: - if (api_parse(&m->info[1], (word)m->length, "wbbs", m_parms)) { - Info = _WRONG_MESSAGE_FORMAT; - break; - } - codec = GET_WORD(m_parms[0].info); - ch = m_parms[1].info[0]; - dir = m_parms[2].info[0]; - if ((i = get_plci(a))) { - plci = &a->plci[i - 1]; - plci->appl = appl; - plci->command = _MANUFACTURER_R; - plci->m_command = command; - plci->number = Number; - plci->State = LOCAL_CONNECT; - Id = (((word)plci->Id << 8) | plci->adapter->Id | 0x80); - dbug(1, dprintf("ManCMD,plci=0x%x", Id)); - - if ((ch == 1 || ch == 2) && (dir <= 2)) { - chi[1] = (byte)(0x80 | ch); - lli[1] = 0; - plci->call_dir = CALL_DIR_OUT | CALL_DIR_ORIGINATE; - switch (codec) - { - case 0: - Info = add_b1(plci, &m_parms[3], 0, 0); - break; - case 1: - add_p(plci, CAI, codec_cai); - break; - /* manual 'switch on' to the codec support without signalling */ - /* first 'assign plci' with this function, then use */ - case 2: - if (AdvCodecSupport(a, plci, appl, 0)) { - Info = _RESOURCE_ERROR; - } - else { - Info = add_b1(plci, &null_parms, 0, B1_FACILITY_LOCAL); - lli[1] = 0x10; /* local call codec stream */ - } - break; - } - - plci->State = LOCAL_CONNECT; - plci->manufacturer = true; - plci->command = _MANUFACTURER_R; - plci->m_command = command; - plci->number = Number; - - if (!Info) - { - add_p(plci, LLI, lli); - add_p(plci, CHI, chi); - add_p(plci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(plci, ASSIGN, DSIG_ID); - - if (!codec) - { - Info = add_b23(plci, &m_parms[3]); - if (!Info) - { - 
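/* B2/B3 configuration succeeded, assign the network layer entity as well */ -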
nl_req_ncci(plci, ASSIGN, 0); - send_req(plci); - } - } - if (!Info) - { - dbug(1, dprintf("dir=0x%x,spoof=0x%x", dir, plci->spoofed_msg)); - if (plci->spoofed_msg == SPOOFING_REQUIRED) - { - api_save_msg(m_parms, "wbbs", &plci->saved_msg); - plci->spoofed_msg = AWAITING_MANUF_CON; - plci->internal_command = BLOCK_PLCI; /* reject other req meanwhile */ - plci->command = 0; - send_req(plci); - return false; - } - if (dir == 1) { - sig_req(plci, CALL_REQ, 0); - } - else if (!dir) { - sig_req(plci, LISTEN_REQ, 0); - } - send_req(plci); - } - else - { - sendf(appl, - _MANUFACTURER_R | CONFIRM, - Id, - Number, - "dww", _DI_MANU_ID, command, Info); - return 2; - } - } - } - } - else Info = _OUT_OF_PLCI; - break; - - case _DI_IDI_CTRL: - if (!plci) - { - Info = _WRONG_IDENTIFIER; - break; - } - if (api_parse(&m->info[1], (word)m->length, "bs", m_parms)) { - Info = _WRONG_MESSAGE_FORMAT; - break; - } - req = m_parms[0].info[0]; - plci->command = _MANUFACTURER_R; - plci->m_command = command; - plci->number = Number; - if (req == CALL_REQ) - { - plci->b_channel = getChannel(&m_parms[1]); - mixer_set_bchannel_id_esc(plci, plci->b_channel); - if (plci->spoofed_msg == SPOOFING_REQUIRED) - { - plci->spoofed_msg = CALL_REQ | AWAITING_MANUF_CON; - plci->internal_command = BLOCK_PLCI; /* reject other req meanwhile */ - plci->command = 0; - break; - } - } - else if (req == LAW_REQ) - { - plci->cr_enquiry = true; - } - add_ss(plci, FTY, &m_parms[1]); - sig_req(plci, req, 0); - send_req(plci); - if (req == HANGUP) - { - if (plci->NL.Id && !plci->nl_remove_id) - { - if (plci->channels) - { - for (ncci = 1; ncci < MAX_NCCI + 1; ncci++) - { - if ((a->ncci_plci[ncci] == plci->Id) && (a->ncci_state[ncci] == CONNECTED)) - { - a->ncci_state[ncci] = OUTG_DIS_PENDING; - cleanup_ncci_data(plci, ncci); - nl_req_ncci(plci, N_DISC, (byte)ncci); - } - } - } - mixer_remove(plci); - nl_req_ncci(plci, REMOVE, 0); - send_req(plci); - } - } - break; - - case _DI_SIG_CTRL: - /* signalling control for loop activation B-channel */ - if (!plci) - { - Info = _WRONG_IDENTIFIER; - break; - } - if (m->length) { - plci->command = _MANUFACTURER_R; - plci->number = Number; - add_ss(plci, FTY, m); - sig_req(plci, SIG_CTRL, 0); - send_req(plci); - } - else Info = _WRONG_MESSAGE_FORMAT; - break; - - case _DI_RXT_CTRL: - /* activation control for receiver/transmitter B-channel */ - if (!plci) - { - Info = _WRONG_IDENTIFIER; - break; - } - if (m->length) { - plci->command = _MANUFACTURER_R; - plci->number = Number; - add_ss(plci, FTY, m); - sig_req(plci, DSP_CTRL, 0); - send_req(plci); - } - else Info = _WRONG_MESSAGE_FORMAT; - break; - - case _DI_ADV_CODEC: - case _DI_DSP_CTRL: - /* TEL_CTRL commands to support non standard adjustments: */ - /* Ring on/off, Handset micro volume, external micro vol. */ - /* handset+external speaker volume, receiver+transm. 
gain,*/ - /* handsfree on (hookinfo off), set mixer command */ - - if (command == _DI_ADV_CODEC) - { - if (!a->AdvCodecPLCI) { - Info = _WRONG_STATE; - break; - } - v_plci = a->AdvCodecPLCI; - } - else - { - if (plci - && (m->length >= 3) - && (m->info[1] == 0x1c) - && (m->info[2] >= 1)) - { - if (m->info[3] == DSP_CTRL_OLD_SET_MIXER_COEFFICIENTS) - { - if ((plci->tel != ADV_VOICE) || (plci != a->AdvSignalPLCI)) - { - Info = _WRONG_STATE; - break; - } - a->adv_voice_coef_length = m->info[2] - 1; - if (a->adv_voice_coef_length > m->length - 3) - a->adv_voice_coef_length = (byte)(m->length - 3); - if (a->adv_voice_coef_length > ADV_VOICE_COEF_BUFFER_SIZE) - a->adv_voice_coef_length = ADV_VOICE_COEF_BUFFER_SIZE; - for (i = 0; i < a->adv_voice_coef_length; i++) - a->adv_voice_coef_buffer[i] = m->info[4 + i]; - if (plci->B1_facilities & B1_FACILITY_VOICE) - adv_voice_write_coefs(plci, ADV_VOICE_WRITE_UPDATE); - break; - } - else if (m->info[3] == DSP_CTRL_SET_DTMF_PARAMETERS) - { - if (!(a->manufacturer_features & MANUFACTURER_FEATURE_DTMF_PARAMETERS)) - { - Info = _FACILITY_NOT_SUPPORTED; - break; - } - - plci->dtmf_parameter_length = m->info[2] - 1; - if (plci->dtmf_parameter_length > m->length - 3) - plci->dtmf_parameter_length = (byte)(m->length - 3); - if (plci->dtmf_parameter_length > DTMF_PARAMETER_BUFFER_SIZE) - plci->dtmf_parameter_length = DTMF_PARAMETER_BUFFER_SIZE; - for (i = 0; i < plci->dtmf_parameter_length; i++) - plci->dtmf_parameter_buffer[i] = m->info[4 + i]; - if (plci->B1_facilities & B1_FACILITY_DTMFR) - dtmf_parameter_write(plci); - break; - - } - } - v_plci = plci; - } - - if (!v_plci) - { - Info = _WRONG_IDENTIFIER; - break; - } - if (m->length) { - add_ss(v_plci, FTY, m); - sig_req(v_plci, TEL_CTRL, 0); - send_req(v_plci); - } - else Info = _WRONG_MESSAGE_FORMAT; - - break; - - case _DI_OPTIONS_REQUEST: - if (api_parse(&m->info[1], (word)m->length, "d", m_parms)) { - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (GET_DWORD(m_parms[0].info) & ~a->man_profile.private_options) - { - Info = _FACILITY_NOT_SUPPORTED; - break; - } - a->requested_options_table[appl->Id - 1] = GET_DWORD(m_parms[0].info); - break; - - - - default: - Info = _WRONG_MESSAGE_FORMAT; - break; - } - } - - sendf(appl, - _MANUFACTURER_R | CONFIRM, - Id, - Number, - "dww", _DI_MANU_ID, command, Info); - return false; -} - - -static byte manufacturer_res(dword Id, word Number, DIVA_CAPI_ADAPTER *a, - PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word indication; - - API_PARSE m_parms[3]; - API_PARSE *ncpi; - API_PARSE fax_parms[9]; - word i; - byte len; - - - dbug(1, dprintf("manufacturer_res")); - - if ((msg[0].length == 0) - || (msg[1].length == 0) - || (GET_DWORD(msg[0].info) != _DI_MANU_ID)) - { - return false; - } - indication = GET_WORD(msg[1].info); - switch (indication) - { - - case _DI_NEGOTIATE_B3: - if (!plci) - break; - if (((plci->B3_prot != 4) && (plci->B3_prot != 5)) - || !(plci->ncpi_state & NCPI_NEGOTIATE_B3_SENT)) - { - dbug(1, dprintf("wrong state for NEGOTIATE_B3 parameters")); - break; - } - if (api_parse(&msg[2].info[1], msg[2].length, "ws", m_parms)) - { - dbug(1, dprintf("wrong format in NEGOTIATE_B3 parameters")); - break; - } - ncpi = &m_parms[1]; - len = offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH; - if (plci->fax_connect_info_length < len) - { - ((T30_INFO *)(plci->fax_connect_info_buffer))->station_id_len = 0; - ((T30_INFO *)(plci->fax_connect_info_buffer))->head_line_len = 0; - } - if (api_parse(&ncpi->info[1], ncpi->length, "wwwwssss", fax_parms)) - { - 
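/* malformed NCPI: leave the stored T.30 info without merged non-standard facilities data */ -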
dbug(1, dprintf("non-standard facilities info missing or wrong format")); - } - else - { - if (plci->fax_connect_info_length <= len) - plci->fax_connect_info_buffer[len] = 0; - len += 1 + plci->fax_connect_info_buffer[len]; - if (plci->fax_connect_info_length <= len) - plci->fax_connect_info_buffer[len] = 0; - len += 1 + plci->fax_connect_info_buffer[len]; - if ((fax_parms[7].length >= 3) && (fax_parms[7].info[1] >= 2)) - plci->nsf_control_bits = GET_WORD(&fax_parms[7].info[2]); - plci->fax_connect_info_buffer[len++] = (byte)(fax_parms[7].length); - for (i = 0; i < fax_parms[7].length; i++) - plci->fax_connect_info_buffer[len++] = fax_parms[7].info[1 + i]; - } - plci->fax_connect_info_length = len; - plci->fax_edata_ack_length = plci->fax_connect_info_length; - start_internal_command(Id, plci, fax_edata_ack_command); - break; - - } - return false; -} - -/*------------------------------------------------------------------*/ -/* IDI callback function */ -/*------------------------------------------------------------------*/ - -void callback(ENTITY *e) -{ - DIVA_CAPI_ADAPTER *a; - APPL *appl; - PLCI *plci; - CAPI_MSG *m; - word i, j; - byte rc; - byte ch; - byte req; - byte global_req; - int no_cancel_rc; - - dbug(1, dprintf("%x:CB(%x:Req=%x,Rc=%x,Ind=%x)", - (e->user[0] + 1) & 0x7fff, e->Id, e->Req, e->Rc, e->Ind)); - - a = &(adapter[(byte)e->user[0]]); - plci = &(a->plci[e->user[1]]); - no_cancel_rc = DIVA_CAPI_SUPPORTS_NO_CANCEL(a); - - /* - If new protocol code and new XDI is used then CAPI should work - fully in accordance with IDI cpec an look on callback field instead - of Rc field for return codes. - */ - if (((e->complete == 0xff) && no_cancel_rc) || - (e->Rc && !no_cancel_rc)) { - rc = e->Rc; - ch = e->RcCh; - req = e->Req; - e->Rc = 0; - - if (e->user[0] & 0x8000) - { - /* - If REMOVE request was sent then we have to wait until - return code with Id set to zero arrives. - All other return codes should be ignored. 
- */ - if (req == REMOVE) - { - if (e->Id) - { - dbug(1, dprintf("cancel RC in REMOVE state")); - return; - } - channel_flow_control_remove(plci); - for (i = 0; i < 256; i++) - { - if (a->FlowControlIdTable[i] == plci->nl_remove_id) - a->FlowControlIdTable[i] = 0; - } - plci->nl_remove_id = 0; - if (plci->rx_dma_descriptor > 0) { - diva_free_dma_descriptor(plci, plci->rx_dma_descriptor - 1); - plci->rx_dma_descriptor = 0; - } - } - if (rc == OK_FC) - { - a->FlowControlIdTable[ch] = e->Id; - a->FlowControlSkipTable[ch] = 0; - - a->ch_flow_control[ch] |= N_OK_FC_PENDING; - a->ch_flow_plci[ch] = plci->Id; - plci->nl_req = 0; - } - else - { - /* - Cancel return codes self, if feature was requested - */ - if (no_cancel_rc && (a->FlowControlIdTable[ch] == e->Id) && e->Id) { - a->FlowControlIdTable[ch] = 0; - if ((rc == OK) && a->FlowControlSkipTable[ch]) { - dbug(3, dprintf("XDI CAPI: RC cancelled Id:0x02, Ch:%02x", e->Id, ch)); - return; - } - } - - if (a->ch_flow_control[ch] & N_OK_FC_PENDING) - { - a->ch_flow_control[ch] &= ~N_OK_FC_PENDING; - if (ch == e->ReqCh) - plci->nl_req = 0; - } - else - plci->nl_req = 0; - } - if (plci->nl_req) - control_rc(plci, 0, rc, ch, 0, true); - else - { - if (req == N_XON) - { - channel_x_on(plci, ch); - if (plci->internal_command) - control_rc(plci, req, rc, ch, 0, true); - } - else - { - if (plci->nl_global_req) - { - global_req = plci->nl_global_req; - plci->nl_global_req = 0; - if (rc != ASSIGN_OK) { - e->Id = 0; - if (plci->rx_dma_descriptor > 0) { - diva_free_dma_descriptor(plci, plci->rx_dma_descriptor - 1); - plci->rx_dma_descriptor = 0; - } - } - channel_xmit_xon(plci); - control_rc(plci, 0, rc, ch, global_req, true); - } - else if (plci->data_sent) - { - channel_xmit_xon(plci); - plci->data_sent = false; - plci->NL.XNum = 1; - data_rc(plci, ch); - if (plci->internal_command) - control_rc(plci, req, rc, ch, 0, true); - } - else - { - channel_xmit_xon(plci); - control_rc(plci, req, rc, ch, 0, true); - } - } - } - } - else - { - /* - If REMOVE request was sent then we have to wait until - return code with Id set to zero arrives. - All other return codes should be ignored. - */ - if (req == REMOVE) - { - if (e->Id) - { - dbug(1, dprintf("cancel RC in REMOVE state")); - return; - } - plci->sig_remove_id = 0; - } - plci->sig_req = 0; - if (plci->sig_global_req) - { - global_req = plci->sig_global_req; - plci->sig_global_req = 0; - if (rc != ASSIGN_OK) - e->Id = 0; - channel_xmit_xon(plci); - control_rc(plci, 0, rc, ch, global_req, false); - } - else - { - channel_xmit_xon(plci); - control_rc(plci, req, rc, ch, 0, false); - } - } - /* - Again: in accordance with IDI spec Rc and Ind can't be delivered in the - same callback. Also if new XDI and protocol code used then jump - direct to finish. 
- */ - if (no_cancel_rc) { - channel_xmit_xon(plci); - goto capi_callback_suffix; - } - } - - channel_xmit_xon(plci); - - if (e->Ind) { - if (e->user[0] & 0x8000) { - byte Ind = e->Ind & 0x0f; - byte Ch = e->IndCh; - if (((Ind == N_DISC) || (Ind == N_DISC_ACK)) && - (a->ch_flow_plci[Ch] == plci->Id)) { - if (a->ch_flow_control[Ch] & N_RX_FLOW_CONTROL_MASK) { - dbug(3, dprintf("XDI CAPI: I: pending N-XON Ch:%02x", Ch)); - } - a->ch_flow_control[Ch] &= ~N_RX_FLOW_CONTROL_MASK; - } - nl_ind(plci); - if ((e->RNR != 1) && - (a->ch_flow_plci[Ch] == plci->Id) && - (a->ch_flow_control[Ch] & N_RX_FLOW_CONTROL_MASK)) { - a->ch_flow_control[Ch] &= ~N_RX_FLOW_CONTROL_MASK; - dbug(3, dprintf("XDI CAPI: I: remove faked N-XON Ch:%02x", Ch)); - } - } else { - sig_ind(plci); - } - e->Ind = 0; - } - -capi_callback_suffix: - - while (!plci->req_in - && !plci->internal_command - && (plci->msg_in_write_pos != plci->msg_in_read_pos)) - { - j = (plci->msg_in_read_pos == plci->msg_in_wrap_pos) ? 0 : plci->msg_in_read_pos; - - i = (((CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[j]))->header.length + 3) & 0xfffc; - - m = (CAPI_MSG *)(&((byte *)(plci->msg_in_queue))[j]); - appl = *((APPL **)(&((byte *)(plci->msg_in_queue))[j + i])); - dbug(1, dprintf("dequeue msg(0x%04x) - write=%d read=%d wrap=%d", - m->header.command, plci->msg_in_write_pos, plci->msg_in_read_pos, plci->msg_in_wrap_pos)); - if (plci->msg_in_read_pos == plci->msg_in_wrap_pos) - { - plci->msg_in_wrap_pos = MSG_IN_QUEUE_SIZE; - plci->msg_in_read_pos = i + MSG_IN_OVERHEAD; - } - else - { - plci->msg_in_read_pos = j + i + MSG_IN_OVERHEAD; - } - if (plci->msg_in_read_pos == plci->msg_in_write_pos) - { - plci->msg_in_write_pos = MSG_IN_QUEUE_SIZE; - plci->msg_in_read_pos = MSG_IN_QUEUE_SIZE; - } - else if (plci->msg_in_read_pos == plci->msg_in_wrap_pos) - { - plci->msg_in_read_pos = MSG_IN_QUEUE_SIZE; - plci->msg_in_wrap_pos = MSG_IN_QUEUE_SIZE; - } - i = api_put(appl, m); - if (i != 0) - { - if (m->header.command == _DATA_B3_R) - - TransmitBufferFree(appl, (byte *)(long)(m->info.data_b3_req.Data)); - - dbug(1, dprintf("Error 0x%04x from msg(0x%04x)", i, m->header.command)); - break; - } - - if (plci->li_notify_update) - { - plci->li_notify_update = false; - mixer_notify_update(plci, false); - } - - } - send_data(plci); - send_req(plci); -} - - -static void control_rc(PLCI *plci, byte req, byte rc, byte ch, byte global_req, - byte nl_rc) -{ - dword Id; - dword rId; - word Number; - word Info = 0; - word i; - word ncci; - DIVA_CAPI_ADAPTER *a; - APPL *appl; - PLCI *rplci; - byte SSparms[] = "\x05\x00\x00\x02\x00\x00"; - byte SSstruct[] = "\x09\x00\x00\x06\x00\x00\x00\x00\x00\x00"; - - if (!plci) { - dbug(0, dprintf("A: control_rc, no plci %02x:%02x:%02x:%02x:%02x", req, rc, ch, global_req, nl_rc)); - return; - } - dbug(1, dprintf("req0_in/out=%d/%d", plci->req_in, plci->req_out)); - if (plci->req_in != plci->req_out) - { - if (nl_rc || (global_req != ASSIGN) || (rc == ASSIGN_OK)) - { - dbug(1, dprintf("req_1return")); - return; - } - /* cancel outstanding request on the PLCI after SIG ASSIGN failure */ - } - plci->req_in = plci->req_in_start = plci->req_out = 0; - dbug(1, dprintf("control_rc")); - - appl = plci->appl; - a = plci->adapter; - ncci = a->ch_ncci[ch]; - if (appl) - { - Id = (((dword)(ncci ? 
ncci : ch)) << 16) | ((word)plci->Id << 8) | a->Id; - if (plci->tel && plci->SuppState != CALL_HELD) Id |= EXT_CONTROLLER; - Number = plci->number; - dbug(1, dprintf("Contr_RC-Id=%08lx,plci=%x,tel=%x, entity=0x%x, command=0x%x, int_command=0x%x", Id, plci->Id, plci->tel, plci->Sig.Id, plci->command, plci->internal_command)); - dbug(1, dprintf("channels=0x%x", plci->channels)); - if (plci_remove_check(plci)) - return; - if (req == REMOVE && rc == ASSIGN_OK) - { - sig_req(plci, HANGUP, 0); - sig_req(plci, REMOVE, 0); - send_req(plci); - } - if (plci->command) - { - switch (plci->command) - { - case C_HOLD_REQ: - dbug(1, dprintf("HoldRC=0x%x", rc)); - SSparms[1] = (byte)S_HOLD; - if (rc != OK) - { - plci->SuppState = IDLE; - Info = 0x2001; - } - sendf(appl, _FACILITY_R | CONFIRM, Id, Number, "wws", Info, 3, SSparms); - break; - - case C_RETRIEVE_REQ: - dbug(1, dprintf("RetrieveRC=0x%x", rc)); - SSparms[1] = (byte)S_RETRIEVE; - if (rc != OK) - { - plci->SuppState = CALL_HELD; - Info = 0x2001; - } - sendf(appl, _FACILITY_R | CONFIRM, Id, Number, "wws", Info, 3, SSparms); - break; - - case _INFO_R: - dbug(1, dprintf("InfoRC=0x%x", rc)); - if (rc != OK) Info = _WRONG_STATE; - sendf(appl, _INFO_R | CONFIRM, Id, Number, "w", Info); - break; - - case _CONNECT_R: - dbug(1, dprintf("Connect_R=0x%x/0x%x/0x%x/0x%x", req, rc, global_req, nl_rc)); - if (plci->State == INC_DIS_PENDING) - break; - if (plci->Sig.Id != 0xff) - { - if (((global_req == ASSIGN) && (rc != ASSIGN_OK)) - || (!nl_rc && (req == CALL_REQ) && (rc != OK))) - { - dbug(1, dprintf("No more IDs/Call_Req failed")); - sendf(appl, _CONNECT_R | CONFIRM, Id & 0xffL, Number, "w", _OUT_OF_PLCI); - plci_remove(plci); - plci->State = IDLE; - break; - } - if (plci->State != LOCAL_CONNECT) plci->State = OUTG_CON_PENDING; - sendf(appl, _CONNECT_R | CONFIRM, Id, Number, "w", 0); - } - else /* D-ch activation */ - { - if (rc != ASSIGN_OK) - { - dbug(1, dprintf("No more IDs/X.25 Call_Req failed")); - sendf(appl, _CONNECT_R | CONFIRM, Id & 0xffL, Number, "w", _OUT_OF_PLCI); - plci_remove(plci); - plci->State = IDLE; - break; - } - sendf(appl, _CONNECT_R | CONFIRM, Id, Number, "w", 0); - sendf(plci->appl, _CONNECT_ACTIVE_I, Id, 0, "sss", "", "", ""); - plci->State = INC_ACT_PENDING; - } - break; - - case _CONNECT_I | RESPONSE: - if (plci->State != INC_DIS_PENDING) - plci->State = INC_CON_ACCEPT; - break; - - case _DISCONNECT_R: - if (plci->State == INC_DIS_PENDING) - break; - if (plci->Sig.Id != 0xff) - { - plci->State = OUTG_DIS_PENDING; - sendf(appl, _DISCONNECT_R | CONFIRM, Id, Number, "w", 0); - } - break; - - case SUSPEND_REQ: - break; - - case RESUME_REQ: - break; - - case _CONNECT_B3_R: - if (rc != OK) - { - sendf(appl, _CONNECT_B3_R | CONFIRM, Id, Number, "w", _WRONG_IDENTIFIER); - break; - } - ncci = get_ncci(plci, ch, 0); - Id = (Id & 0xffff) | (((dword) ncci) << 16); - plci->channels++; - if (req == N_RESET) - { - a->ncci_state[ncci] = INC_ACT_PENDING; - sendf(appl, _CONNECT_B3_R | CONFIRM, Id, Number, "w", 0); - sendf(appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - } - else - { - a->ncci_state[ncci] = OUTG_CON_PENDING; - sendf(appl, _CONNECT_B3_R | CONFIRM, Id, Number, "w", 0); - } - break; - - case _CONNECT_B3_I | RESPONSE: - break; - - case _RESET_B3_R: -/* sendf(appl, _RESET_B3_R | CONFIRM, Id, Number, "w", 0);*/ - break; - - case _DISCONNECT_B3_R: - sendf(appl, _DISCONNECT_B3_R | CONFIRM, Id, Number, "w", 0); - break; - - case _MANUFACTURER_R: - break; - - case PERM_LIST_REQ: - if (rc != OK) - { - Info = _WRONG_IDENTIFIER; - sendf(plci->appl, 
_CONNECT_R | CONFIRM, Id, Number, "w", Info); - plci_remove(plci); - } - else - sendf(plci->appl, _CONNECT_R | CONFIRM, Id, Number, "w", Info); - break; - - default: - break; - } - plci->command = 0; - } - else if (plci->internal_command) - { - switch (plci->internal_command) - { - case BLOCK_PLCI: - return; - - case GET_MWI_STATE: - if (rc == OK) /* command supported, wait for indication */ - { - return; - } - plci_remove(plci); - break; - - /* Get Supported Services */ - case GETSERV_REQ_PEND: - if (rc == OK) /* command supported, wait for indication */ - { - break; - } - PUT_DWORD(&SSstruct[6], MASK_TERMINAL_PORTABILITY); - sendf(appl, _FACILITY_R | CONFIRM, Id, Number, "wws", 0, 3, SSstruct); - plci_remove(plci); - break; - - case INTERR_DIVERSION_REQ_PEND: /* Interrogate Parameters */ - case INTERR_NUMBERS_REQ_PEND: - case CF_START_PEND: /* Call Forwarding Start pending */ - case CF_STOP_PEND: /* Call Forwarding Stop pending */ - case CCBS_REQUEST_REQ_PEND: - case CCBS_DEACTIVATE_REQ_PEND: - case CCBS_INTERROGATE_REQ_PEND: - switch (plci->internal_command) - { - case INTERR_DIVERSION_REQ_PEND: - SSparms[1] = S_INTERROGATE_DIVERSION; - break; - case INTERR_NUMBERS_REQ_PEND: - SSparms[1] = S_INTERROGATE_NUMBERS; - break; - case CF_START_PEND: - SSparms[1] = S_CALL_FORWARDING_START; - break; - case CF_STOP_PEND: - SSparms[1] = S_CALL_FORWARDING_STOP; - break; - case CCBS_REQUEST_REQ_PEND: - SSparms[1] = S_CCBS_REQUEST; - break; - case CCBS_DEACTIVATE_REQ_PEND: - SSparms[1] = S_CCBS_DEACTIVATE; - break; - case CCBS_INTERROGATE_REQ_PEND: - SSparms[1] = S_CCBS_INTERROGATE; - break; - } - if (global_req == ASSIGN) - { - dbug(1, dprintf("AssignDiversion_RC=0x%x/0x%x", req, rc)); - return; - } - if (!plci->appl) break; - if (rc == ISDN_GUARD_REJ) - { - Info = _CAPI_GUARD_ERROR; - } - else if (rc != OK) - { - Info = _SUPPLEMENTARY_SERVICE_NOT_SUPPORTED; - } - sendf(plci->appl, _FACILITY_R | CONFIRM, Id & 0x7, - plci->number, "wws", Info, (word)3, SSparms); - if (Info) plci_remove(plci); - break; - - /* 3pty conference pending */ - case PTY_REQ_PEND: - if (!plci->relatedPTYPLCI) break; - rplci = plci->relatedPTYPLCI; - SSparms[1] = plci->ptyState; - rId = ((word)rplci->Id << 8) | rplci->adapter->Id; - if (rplci->tel) rId |= EXT_CONTROLLER; - if (rc != OK) - { - Info = 0x300E; /* not supported */ - plci->relatedPTYPLCI = NULL; - plci->ptyState = 0; - } - sendf(rplci->appl, - _FACILITY_R | CONFIRM, - rId, - plci->number, - "wws", Info, (word)3, SSparms); - break; - - /* Explicit Call Transfer pending */ - case ECT_REQ_PEND: - dbug(1, dprintf("ECT_RC=0x%x/0x%x", req, rc)); - if (!plci->relatedPTYPLCI) break; - rplci = plci->relatedPTYPLCI; - SSparms[1] = S_ECT; - rId = ((word)rplci->Id << 8) | rplci->adapter->Id; - if (rplci->tel) rId |= EXT_CONTROLLER; - if (rc != OK) - { - Info = 0x300E; /* not supported */ - plci->relatedPTYPLCI = NULL; - plci->ptyState = 0; - } - sendf(rplci->appl, - _FACILITY_R | CONFIRM, - rId, - plci->number, - "wws", Info, (word)3, SSparms); - break; - - case _MANUFACTURER_R: - dbug(1, dprintf("_Manufacturer_R=0x%x/0x%x", req, rc)); - if ((global_req == ASSIGN) && (rc != ASSIGN_OK)) - { - dbug(1, dprintf("No more IDs")); - sendf(appl, _MANUFACTURER_R | CONFIRM, Id, Number, "dww", _DI_MANU_ID, _MANUFACTURER_R, _OUT_OF_PLCI); - plci_remove(plci); /* after codec init, internal codec commands pending */ - } - break; - - case _CONNECT_R: - dbug(1, dprintf("_Connect_R=0x%x/0x%x", req, rc)); - if ((global_req == ASSIGN) && (rc != ASSIGN_OK)) - { - dbug(1, dprintf("No more IDs")); 
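(The confirm parameters above, SSparms and SSstruct, are byte arrays filled in place with PUT_WORD()/PUT_DWORD(), which store multi-byte values little-endian regardless of host byte order. A small sketch of that encoding follows; the array layout comments and the mask value written here are illustrative stand-ins, not the driver's real constants.)

#include <stdio.h>

typedef unsigned char byte;
typedef unsigned short word;
typedef unsigned int dword;

static void put_word(byte *p, word v)
{
	p[0] = (byte)v;
	p[1] = (byte)(v >> 8);
}

static void put_dword(byte *p, dword v)
{
	put_word(p, (word)v);
	put_word(p + 2, (word)(v >> 16));
}

int main(void)
{
	/* byte 0: total length, bytes 1..2: service id (word),
	 * byte 3: parameter length, bytes 4..5: reason (word) */
	byte ss[6] = { 0x05, 0, 0, 0x02, 0, 0 };
	/* hypothetical service id and reason code */
	put_word(&ss[1], 0x0002);
	put_word(&ss[4], 0x2001);

	/* longer variant with a 32-bit service mask at offset 6,
	 * mirroring the PUT_DWORD(&SSstruct[6], ...) call above */
	byte services[10] = { 0x09, 0, 0, 0x06, 0, 0, 0, 0, 0, 0 };
	put_dword(&services[6], 0x00000002u);  /* hypothetical mask */

	for (int i = 0; i < 6; i++)
		printf("%02x ", ss[i]);
	printf("/ mask bytes: %02x %02x %02x %02x\n",
	       services[6], services[7], services[8], services[9]);
	return 0;
}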
- sendf(appl, _CONNECT_R | CONFIRM, Id & 0xffL, Number, "w", _OUT_OF_PLCI); - plci_remove(plci); /* after codec init, internal codec commands pending */ - } - break; - - case PERM_COD_HOOK: /* finished with Hook_Ind */ - return; - - case PERM_COD_CALL: - dbug(1, dprintf("***Codec Connect_Pending A, Rc = 0x%x", rc)); - plci->internal_command = PERM_COD_CONN_PEND; - return; - - case PERM_COD_ASSIGN: - dbug(1, dprintf("***Codec Assign A, Rc = 0x%x", rc)); - if (rc != ASSIGN_OK) break; - sig_req(plci, CALL_REQ, 0); - send_req(plci); - plci->internal_command = PERM_COD_CALL; - return; - - /* Null Call Reference Request pending */ - case C_NCR_FAC_REQ: - dbug(1, dprintf("NCR_FAC=0x%x/0x%x", req, rc)); - if (global_req == ASSIGN) - { - if (rc == ASSIGN_OK) - { - return; - } - else - { - sendf(appl, _INFO_R | CONFIRM, Id & 0xf, Number, "w", _WRONG_STATE); - appl->NullCREnable = false; - plci_remove(plci); - } - } - else if (req == NCR_FACILITY) - { - if (rc == OK) - { - sendf(appl, _INFO_R | CONFIRM, Id & 0xf, Number, "w", 0); - } - else - { - sendf(appl, _INFO_R | CONFIRM, Id & 0xf, Number, "w", _WRONG_STATE); - appl->NullCREnable = false; - } - plci_remove(plci); - } - break; - - case HOOK_ON_REQ: - if (plci->channels) - { - if (a->ncci_state[ncci] == CONNECTED) - { - a->ncci_state[ncci] = OUTG_DIS_PENDING; - cleanup_ncci_data(plci, ncci); - nl_req_ncci(plci, N_DISC, (byte)ncci); - } - break; - } - break; - - case HOOK_OFF_REQ: - if (plci->State == INC_DIS_PENDING) - break; - sig_req(plci, CALL_REQ, 0); - send_req(plci); - plci->State = OUTG_CON_PENDING; - break; - - - case MWI_ACTIVATE_REQ_PEND: - case MWI_DEACTIVATE_REQ_PEND: - if (global_req == ASSIGN && rc == ASSIGN_OK) - { - dbug(1, dprintf("MWI_REQ assigned")); - return; - } - else if (rc != OK) - { - if (rc == WRONG_IE) - { - Info = 0x2007; /* Illegal message parameter coding */ - dbug(1, dprintf("MWI_REQ invalid parameter")); - } - else - { - Info = 0x300B; /* not supported */ - dbug(1, dprintf("MWI_REQ not supported")); - } - /* 0x3010: Request not allowed in this state */ - PUT_WORD(&SSparms[4], 0x300E); /* SS not supported */ - - } - if (plci->internal_command == MWI_ACTIVATE_REQ_PEND) - { - PUT_WORD(&SSparms[1], S_MWI_ACTIVATE); - } - else PUT_WORD(&SSparms[1], S_MWI_DEACTIVATE); - - if (plci->cr_enquiry) - { - sendf(plci->appl, - _FACILITY_R | CONFIRM, - Id & 0xf, - plci->number, - "wws", Info, (word)3, SSparms); - if (rc != OK) plci_remove(plci); - } - else - { - sendf(plci->appl, - _FACILITY_R | CONFIRM, - Id, - plci->number, - "wws", Info, (word)3, SSparms); - } - break; - - case CONF_BEGIN_REQ_PEND: - case CONF_ADD_REQ_PEND: - case CONF_SPLIT_REQ_PEND: - case CONF_DROP_REQ_PEND: - case CONF_ISOLATE_REQ_PEND: - case CONF_REATTACH_REQ_PEND: - dbug(1, dprintf("CONF_RC=0x%x/0x%x", req, rc)); - if ((plci->internal_command == CONF_ADD_REQ_PEND) && (!plci->relatedPTYPLCI)) break; - rplci = plci; - rId = Id; - switch (plci->internal_command) - { - case CONF_BEGIN_REQ_PEND: - SSparms[1] = S_CONF_BEGIN; - break; - case CONF_ADD_REQ_PEND: - SSparms[1] = S_CONF_ADD; - rplci = plci->relatedPTYPLCI; - rId = ((word)rplci->Id << 8) | rplci->adapter->Id; - break; - case CONF_SPLIT_REQ_PEND: - SSparms[1] = S_CONF_SPLIT; - break; - case CONF_DROP_REQ_PEND: - SSparms[1] = S_CONF_DROP; - break; - case CONF_ISOLATE_REQ_PEND: - SSparms[1] = S_CONF_ISOLATE; - break; - case CONF_REATTACH_REQ_PEND: - SSparms[1] = S_CONF_REATTACH; - break; - } - - if (rc != OK) - { - Info = 0x300E; /* not supported */ - plci->relatedPTYPLCI = NULL; - plci->ptyState = 0; - 
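(Throughout control_rc() the dword Id handed to sendf() packs three identifiers: controller in the low byte, PLCI in the next byte, NCCI in the upper 16 bits, with EXT_CONTROLLER as a flag bit in the controller byte. A sketch of that layout, assuming the 0x80 flag value and using illustrative numbers:)

#include <stdio.h>

typedef unsigned char byte;
typedef unsigned short word;
typedef unsigned int dword;

#define EXT_CONTROLLER 0x80u  /* assumed flag value for this sketch */

static dword make_id(byte controller, byte plci, word ncci)
{
	return ((dword)ncci << 16) | ((word)plci << 8) | controller;
}

int main(void)
{
	dword id = make_id(1, 0x05, 0x0003) | EXT_CONTROLLER;
	printf("controller=%u plci=%u ncci=%u\n",
	       (unsigned)(id & 0x7f),        /* mask off EXT_CONTROLLER */
	       (unsigned)((id >> 8) & 0xff),
	       (unsigned)(id >> 16));
	return 0;
}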
} - sendf(rplci->appl, - _FACILITY_R | CONFIRM, - rId, - plci->number, - "wws", Info, (word)3, SSparms); - break; - - case VSWITCH_REQ_PEND: - if (rc != OK) - { - if (plci->relatedPTYPLCI) - { - plci->relatedPTYPLCI->vswitchstate = 0; - plci->relatedPTYPLCI->vsprot = 0; - plci->relatedPTYPLCI->vsprotdialect = 0; - } - plci->vswitchstate = 0; - plci->vsprot = 0; - plci->vsprotdialect = 0; - } - else - { - if (plci->relatedPTYPLCI && - plci->vswitchstate == 1 && - plci->relatedPTYPLCI->vswitchstate == 3) /* join complete */ - plci->vswitchstate = 3; - } - break; - - /* Call Deflection Request pending (SSCT) */ - case CD_REQ_PEND: - SSparms[1] = S_CALL_DEFLECTION; - if (rc != OK) - { - Info = 0x300E; /* not supported */ - plci->appl->CDEnable = 0; - } - sendf(plci->appl, _FACILITY_R | CONFIRM, Id, - plci->number, "wws", Info, (word)3, SSparms); - break; - - case RTP_CONNECT_B3_REQ_COMMAND_2: - if (rc == OK) - { - ncci = get_ncci(plci, ch, 0); - Id = (Id & 0xffff) | (((dword) ncci) << 16); - plci->channels++; - a->ncci_state[ncci] = OUTG_CON_PENDING; - } - /* fall through */ - - default: - if (plci->internal_command_queue[0]) - { - (*(plci->internal_command_queue[0]))(Id, plci, rc); - if (plci->internal_command) - return; - } - break; - } - next_internal_command(Id, plci); - } - } - else /* appl==0 */ - { - Id = ((word)plci->Id << 8) | plci->adapter->Id; - if (plci->tel) Id |= EXT_CONTROLLER; - - switch (plci->internal_command) - { - case BLOCK_PLCI: - return; - - case START_L1_SIG_ASSIGN_PEND: - case REM_L1_SIG_ASSIGN_PEND: - if (global_req == ASSIGN) - { - break; - } - else - { - dbug(1, dprintf("***L1 Req rem PLCI")); - plci->internal_command = 0; - sig_req(plci, REMOVE, 0); - send_req(plci); - } - break; - - /* Call Deflection Request pending, just no appl ptr assigned */ - case CD_REQ_PEND: - SSparms[1] = S_CALL_DEFLECTION; - if (rc != OK) - { - Info = 0x300E; /* not supported */ - } - for (i = 0; i < max_appl; i++) - { - if (application[i].CDEnable) - { - if (!application[i].Id) application[i].CDEnable = 0; - else - { - sendf(&application[i], _FACILITY_R | CONFIRM, Id, - plci->number, "wws", Info, (word)3, SSparms); - if (Info) application[i].CDEnable = 0; - } - } - } - plci->internal_command = 0; - break; - - case PERM_COD_HOOK: /* finished with Hook_Ind */ - return; - - case PERM_COD_CALL: - plci->internal_command = PERM_COD_CONN_PEND; - dbug(1, dprintf("***Codec Connect_Pending, Rc = 0x%x", rc)); - return; - - case PERM_COD_ASSIGN: - dbug(1, dprintf("***Codec Assign, Rc = 0x%x", rc)); - plci->internal_command = 0; - if (rc != ASSIGN_OK) break; - plci->internal_command = PERM_COD_CALL; - sig_req(plci, CALL_REQ, 0); - send_req(plci); - return; - - case LISTEN_SIG_ASSIGN_PEND: - if (rc == ASSIGN_OK) - { - plci->internal_command = 0; - dbug(1, dprintf("ListenCheck, new SIG_ID = 0x%x", plci->Sig.Id)); - add_p(plci, ESC, "\x02\x18\x00"); /* support call waiting */ - sig_req(plci, INDICATE_REQ, 0); - send_req(plci); - } - else - { - dbug(1, dprintf("ListenCheck failed (assignRc=0x%x)", rc)); - a->listen_active--; - plci_remove(plci); - plci->State = IDLE; - } - break; - - case USELAW_REQ: - if (global_req == ASSIGN) - { - if (rc == ASSIGN_OK) - { - sig_req(plci, LAW_REQ, 0); - send_req(plci); - dbug(1, dprintf("Auto-Law assigned")); - } - else - { - dbug(1, dprintf("Auto-Law assign failed")); - a->automatic_law = 3; - plci->internal_command = 0; - a->automatic_lawPLCI = NULL; - } - break; - } - else if (req == LAW_REQ && rc == OK) - { - dbug(1, dprintf("Auto-Law initiated")); - 
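(The default branch above dispatches through plci->internal_command_queue[0] and then calls next_internal_command(), i.e. a queue of callbacks where a step that leaves a command pending halts the queue until the next return code re-enters it. A simplified standalone model of that pattern; the ctx type, QUEUE_LEN and the step functions are illustrative, not the driver's:)

#include <stdio.h>
#include <string.h>

#define QUEUE_LEN 3

struct ctx;
typedef void (*cmd_fn)(struct ctx *c, int rc);

struct ctx {
	cmd_fn queue[QUEUE_LEN];
	int pending;   /* nonzero while a step awaits a result */
};

static void next_command(struct ctx *c, int rc)
{
	while (c->queue[0] && !c->pending) {
		c->queue[0](c, rc);
		if (c->pending)
			return;   /* step still waiting, re-enter later */
		memmove(&c->queue[0], &c->queue[1],
			(QUEUE_LEN - 1) * sizeof(cmd_fn));
		c->queue[QUEUE_LEN - 1] = NULL;
	}
}

static void step_a(struct ctx *c, int rc) { (void)c; printf("step_a rc=%d\n", rc); }
static void step_b(struct ctx *c, int rc) { (void)c; printf("step_b rc=%d\n", rc); }

int main(void)
{
	struct ctx c = { { step_a, step_b, NULL }, 0 };
	next_command(&c, 0);   /* runs step_a, then step_b */
	return 0;
}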
a->automatic_law = 2; - plci->internal_command = 0; - } - else - { - dbug(1, dprintf("Auto-Law not supported")); - a->automatic_law = 3; - plci->internal_command = 0; - sig_req(plci, REMOVE, 0); - send_req(plci); - a->automatic_lawPLCI = NULL; - } - break; - } - plci_remove_check(plci); - } -} - -static void data_rc(PLCI *plci, byte ch) -{ - dword Id; - DIVA_CAPI_ADAPTER *a; - NCCI *ncci_ptr; - DATA_B3_DESC *data; - word ncci; - - if (plci->appl) - { - TransmitBufferFree(plci->appl, plci->data_sent_ptr); - a = plci->adapter; - ncci = a->ch_ncci[ch]; - if (ncci && (a->ncci_plci[ncci] == plci->Id)) - { - ncci_ptr = &(a->ncci[ncci]); - dbug(1, dprintf("data_out=%d, data_pending=%d", ncci_ptr->data_out, ncci_ptr->data_pending)); - if (ncci_ptr->data_pending) - { - data = &(ncci_ptr->DBuffer[ncci_ptr->data_out]); - if (!(data->Flags & 4) && a->ncci_state[ncci]) - { - Id = (((dword)ncci) << 16) | ((word)plci->Id << 8) | a->Id; - if (plci->tel) Id |= EXT_CONTROLLER; - sendf(plci->appl, _DATA_B3_R | CONFIRM, Id, data->Number, - "ww", data->Handle, 0); - } - (ncci_ptr->data_out)++; - if (ncci_ptr->data_out == MAX_DATA_B3) - ncci_ptr->data_out = 0; - (ncci_ptr->data_pending)--; - } - } - } -} - -static void data_ack(PLCI *plci, byte ch) -{ - dword Id; - DIVA_CAPI_ADAPTER *a; - NCCI *ncci_ptr; - word ncci; - - a = plci->adapter; - ncci = a->ch_ncci[ch]; - ncci_ptr = &(a->ncci[ncci]); - if (ncci_ptr->data_ack_pending) - { - if (a->ncci_state[ncci] && (a->ncci_plci[ncci] == plci->Id)) - { - Id = (((dword)ncci) << 16) | ((word)plci->Id << 8) | a->Id; - if (plci->tel) Id |= EXT_CONTROLLER; - sendf(plci->appl, _DATA_B3_R | CONFIRM, Id, ncci_ptr->DataAck[ncci_ptr->data_ack_out].Number, - "ww", ncci_ptr->DataAck[ncci_ptr->data_ack_out].Handle, 0); - } - (ncci_ptr->data_ack_out)++; - if (ncci_ptr->data_ack_out == MAX_DATA_ACK) - ncci_ptr->data_ack_out = 0; - (ncci_ptr->data_ack_pending)--; - } -} - -static void sig_ind(PLCI *plci) -{ - dword x_Id; - dword Id; - dword rId; - word i; - word cip; - dword cip_mask; - byte *ie; - DIVA_CAPI_ADAPTER *a; - API_PARSE saved_parms[MAX_MSG_PARMS + 1]; -#define MAXPARMSIDS 31 - byte *parms[MAXPARMSIDS]; - byte *add_i[4]; - byte *multi_fac_parms[MAX_MULTI_IE]; - byte *multi_pi_parms[MAX_MULTI_IE]; - byte *multi_ssext_parms[MAX_MULTI_IE]; - byte *multi_CiPN_parms[MAX_MULTI_IE]; - - byte *multi_vswitch_parms[MAX_MULTI_IE]; - - byte ai_len; - byte *esc_chi = ""; - byte *esc_law = ""; - byte *pty_cai = ""; - byte *esc_cr = ""; - byte *esc_profile = ""; - - byte facility[256]; - PLCI *tplci = NULL; - byte chi[] = "\x02\x18\x01"; - byte voice_cai[] = "\x06\x14\x00\x00\x00\x00\x08"; - byte resume_cau[] = "\x05\x05\x00\x02\x00\x00"; - /* ESC_MSGTYPE must be the last but one message, a new IE has to be */ - /* included before the ESC_MSGTYPE and MAXPARMSIDS has to be incremented */ - /* SMSG is situated at the end because its 0 (for compatibility reasons */ - /* (see Info_Mask Bit 4, first IE. 
then the message type) */ - static const word parms_id[] = - {MAXPARMSIDS, CPN, 0xff, DSA, OSA, BC, LLC, HLC, ESC_CAUSE, DSP, DT, CHA, - UUI, CONG_RR, CONG_RNR, ESC_CHI, KEY, CHI, CAU, ESC_LAW, - RDN, RDX, CONN_NR, RIN, NI, CAI, ESC_CR, - CST, ESC_PROFILE, 0xff, ESC_MSGTYPE, SMSG}; - /* 14 FTY repl by ESC_CHI */ - /* 18 PI repl by ESC_LAW */ - /* removed OAD changed to 0xff for future use, OAD is multiIE now */ - static const word multi_fac_id[] = {1, FTY}; - static const word multi_pi_id[] = {1, PI}; - static const word multi_CiPN_id[] = {1, OAD}; - static const word multi_ssext_id[] = {1, ESC_SSEXT}; - - static const word multi_vswitch_id[] = {1, ESC_VSWITCH}; - - byte *cau; - word ncci; - byte SS_Ind[] = "\x05\x02\x00\x02\x00\x00"; /* Hold_Ind struct*/ - byte CF_Ind[] = "\x09\x02\x00\x06\x00\x00\x00\x00\x00\x00"; - byte Interr_Err_Ind[] = "\x0a\x02\x00\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"; - byte CONF_Ind[] = "\x09\x16\x00\x06\x00\x00\x00\x00\x00\x00"; - byte force_mt_info = false; - byte dir; - dword d; - word w; - - a = plci->adapter; - Id = ((word)plci->Id << 8) | a->Id; - PUT_WORD(&SS_Ind[4], 0x0000); - - if (plci->sig_remove_id) - { - plci->Sig.RNR = 2; /* discard */ - dbug(1, dprintf("SIG discard while remove pending")); - return; - } - if (plci->tel && plci->SuppState != CALL_HELD) Id |= EXT_CONTROLLER; - dbug(1, dprintf("SigInd-Id=%08lx,plci=%x,tel=%x,state=0x%x,channels=%d,Discflowcl=%d", - Id, plci->Id, plci->tel, plci->State, plci->channels, plci->hangup_flow_ctrl_timer)); - if (plci->Sig.Ind == CALL_HOLD_ACK && plci->channels) - { - plci->Sig.RNR = 1; - return; - } - if (plci->Sig.Ind == HANGUP && plci->channels) - { - plci->Sig.RNR = 1; - plci->hangup_flow_ctrl_timer++; - /* recover the network layer after timeout */ - if (plci->hangup_flow_ctrl_timer == 100) - { - dbug(1, dprintf("Exceptional disc")); - plci->Sig.RNR = 0; - plci->hangup_flow_ctrl_timer = 0; - for (ncci = 1; ncci < MAX_NCCI + 1; ncci++) - { - if (a->ncci_plci[ncci] == plci->Id) - { - cleanup_ncci_data(plci, ncci); - if (plci->channels)plci->channels--; - if (plci->appl) - sendf(plci->appl, _DISCONNECT_B3_I, (((dword) ncci) << 16) | Id, 0, "ws", 0, ""); - } - } - if (plci->appl) - sendf(plci->appl, _DISCONNECT_I, Id, 0, "w", 0); - plci_remove(plci); - plci->State = IDLE; - } - return; - } - - /* do first parse the info with no OAD in, because OAD will be converted */ - /* first the multiple facility IE, then mult. progress ind. */ - /* then the parameters for the info_ind + conn_ind */ - IndParse(plci, multi_fac_id, multi_fac_parms, MAX_MULTI_IE); - IndParse(plci, multi_pi_id, multi_pi_parms, MAX_MULTI_IE); - IndParse(plci, multi_ssext_id, multi_ssext_parms, MAX_MULTI_IE); - - IndParse(plci, multi_vswitch_id, multi_vswitch_parms, MAX_MULTI_IE); - - IndParse(plci, parms_id, parms, 0); - IndParse(plci, multi_CiPN_id, multi_CiPN_parms, MAX_MULTI_IE); - esc_chi = parms[14]; - esc_law = parms[18]; - pty_cai = parms[24]; - esc_cr = parms[25]; - esc_profile = parms[27]; - if (esc_cr[0] && plci) - { - if (plci->cr_enquiry && plci->appl) - { - plci->cr_enquiry = false; - /* d = MANU_ID */ - /* w = m_command */ - /* b = total length */ - /* b = indication type */ - /* b = length of all IEs */ - /* b = IE1 */ - /* S = IE1 length + cont. */ - /* b = IE2 */ - /* S = IE2 length + cont. 
*/ - sendf(plci->appl, - _MANUFACTURER_I, - Id, - 0, - "dwbbbbSbS", _DI_MANU_ID, plci->m_command, - 2 + 1 + 1 + esc_cr[0] + 1 + 1 + esc_law[0], plci->Sig.Ind, 1 + 1 + esc_cr[0] + 1 + 1 + esc_law[0], ESC, esc_cr, ESC, esc_law); - } - } - /* create the additional info structure */ - add_i[1] = parms[15]; /* KEY of additional info */ - add_i[2] = parms[11]; /* UUI of additional info */ - ai_len = AddInfo(add_i, multi_fac_parms, esc_chi, facility); - - /* the ESC_LAW indicates if u-Law or a-Law is actually used by the card */ - /* indication returns by the card if requested by the function */ - /* AutomaticLaw() after driver init */ - if (a->automatic_law < 4) - { - if (esc_law[0]) { - if (esc_law[2]) { - dbug(0, dprintf("u-Law selected")); - a->u_law = 1; - } - else { - dbug(0, dprintf("a-Law selected")); - a->u_law = 0; - } - a->automatic_law = 4; - if (plci == a->automatic_lawPLCI) { - plci->internal_command = 0; - sig_req(plci, REMOVE, 0); - send_req(plci); - a->automatic_lawPLCI = NULL; - } - } - if (esc_profile[0]) - { - dbug(1, dprintf("[%06x] CardProfile: %lx %lx %lx %lx %lx", - UnMapController(a->Id), GET_DWORD(&esc_profile[6]), - GET_DWORD(&esc_profile[10]), GET_DWORD(&esc_profile[14]), - GET_DWORD(&esc_profile[18]), GET_DWORD(&esc_profile[46]))); - - a->profile.Global_Options &= 0x000000ffL; - a->profile.B1_Protocols &= 0x000003ffL; - a->profile.B2_Protocols &= 0x00001fdfL; - a->profile.B3_Protocols &= 0x000000b7L; - - a->profile.Global_Options &= GET_DWORD(&esc_profile[6]) | - GL_BCHANNEL_OPERATION_SUPPORTED; - a->profile.B1_Protocols &= GET_DWORD(&esc_profile[10]); - a->profile.B2_Protocols &= GET_DWORD(&esc_profile[14]); - a->profile.B3_Protocols &= GET_DWORD(&esc_profile[18]); - a->manufacturer_features = GET_DWORD(&esc_profile[46]); - a->man_profile.private_options = 0; - - if (a->manufacturer_features & MANUFACTURER_FEATURE_ECHO_CANCELLER) - { - a->man_profile.private_options |= 1L << PRIVATE_ECHO_CANCELLER; - a->profile.Global_Options |= GL_ECHO_CANCELLER_SUPPORTED; - } - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_RTP) - a->man_profile.private_options |= 1L << PRIVATE_RTP; - a->man_profile.rtp_primary_payloads = GET_DWORD(&esc_profile[50]); - a->man_profile.rtp_additional_payloads = GET_DWORD(&esc_profile[54]); - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_T38) - a->man_profile.private_options |= 1L << PRIVATE_T38; - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_FAX_SUB_SEP_PWD) - a->man_profile.private_options |= 1L << PRIVATE_FAX_SUB_SEP_PWD; - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_V18) - a->man_profile.private_options |= 1L << PRIVATE_V18; - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_DTMF_TONE) - a->man_profile.private_options |= 1L << PRIVATE_DTMF_TONE; - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_PIAFS) - a->man_profile.private_options |= 1L << PRIVATE_PIAFS; - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_FAX_PAPER_FORMATS) - a->man_profile.private_options |= 1L << PRIVATE_FAX_PAPER_FORMATS; - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_VOWN) - a->man_profile.private_options |= 1L << PRIVATE_VOWN; - - - if (a->manufacturer_features & MANUFACTURER_FEATURE_FAX_NONSTANDARD) - a->man_profile.private_options |= 1L << PRIVATE_FAX_NONSTANDARD; - - } - else - { - a->profile.Global_Options &= 0x0000007fL; - a->profile.B1_Protocols &= 0x000003dfL; - a->profile.B2_Protocols &= 0x00001adfL; - a->profile.B3_Protocols &= 0x000000b7L; - a->manufacturer_features &= 
MANUFACTURER_FEATURE_HARDDTMF; - } - if (a->manufacturer_features & (MANUFACTURER_FEATURE_HARDDTMF | - MANUFACTURER_FEATURE_SOFTDTMF_SEND | MANUFACTURER_FEATURE_SOFTDTMF_RECEIVE)) - { - a->profile.Global_Options |= GL_DTMF_SUPPORTED; - } - a->manufacturer_features &= ~MANUFACTURER_FEATURE_OOB_CHANNEL; - dbug(1, dprintf("[%06x] Profile: %lx %lx %lx %lx %lx", - UnMapController(a->Id), a->profile.Global_Options, - a->profile.B1_Protocols, a->profile.B2_Protocols, - a->profile.B3_Protocols, a->manufacturer_features)); - } - /* codec plci for the handset/hook state support is just an internal id */ - if (plci != a->AdvCodecPLCI) - { - force_mt_info = SendMultiIE(plci, Id, multi_fac_parms, FTY, 0x20, 0); - force_mt_info |= SendMultiIE(plci, Id, multi_pi_parms, PI, 0x210, 0); - SendSSExtInd(NULL, plci, Id, multi_ssext_parms); - SendInfo(plci, Id, parms, force_mt_info); - - VSwitchReqInd(plci, Id, multi_vswitch_parms); - - } - - /* switch the codec to the b-channel */ - if (esc_chi[0] && plci && !plci->SuppState) { - plci->b_channel = esc_chi[esc_chi[0]]&0x1f; - mixer_set_bchannel_id_esc(plci, plci->b_channel); - dbug(1, dprintf("storeChannel=0x%x", plci->b_channel)); - if (plci->tel == ADV_VOICE && plci->appl) { - SetVoiceChannel(a->AdvCodecPLCI, esc_chi, a); - } - } - - if (plci->appl) plci->appl->Number++; - - switch (plci->Sig.Ind) { - /* Response to Get_Supported_Services request */ - case S_SUPPORTED: - dbug(1, dprintf("S_Supported")); - if (!plci->appl) break; - if (pty_cai[0] == 4) - { - PUT_DWORD(&CF_Ind[6], GET_DWORD(&pty_cai[1])); - } - else - { - PUT_DWORD(&CF_Ind[6], MASK_TERMINAL_PORTABILITY | MASK_HOLD_RETRIEVE); - } - PUT_WORD(&CF_Ind[1], 0); - PUT_WORD(&CF_Ind[4], 0); - sendf(plci->appl, _FACILITY_R | CONFIRM, Id & 0x7, plci->number, "wws", 0, 3, CF_Ind); - plci_remove(plci); - break; - - /* Supplementary Service rejected */ - case S_SERVICE_REJ: - dbug(1, dprintf("S_Reject=0x%x", pty_cai[5])); - if (!pty_cai[0]) break; - switch (pty_cai[5]) - { - case ECT_EXECUTE: - case THREE_PTY_END: - case THREE_PTY_BEGIN: - if (!plci->relatedPTYPLCI) break; - tplci = plci->relatedPTYPLCI; - rId = ((word)tplci->Id << 8) | tplci->adapter->Id; - if (tplci->tel) rId |= EXT_CONTROLLER; - if (pty_cai[5] == ECT_EXECUTE) - { - PUT_WORD(&SS_Ind[1], S_ECT); - - plci->vswitchstate = 0; - plci->relatedPTYPLCI->vswitchstate = 0; - - } - else - { - PUT_WORD(&SS_Ind[1], pty_cai[5] + 3); - } - if (pty_cai[2] != 0xff) - { - PUT_WORD(&SS_Ind[4], 0x3600 | (word)pty_cai[2]); - } - else - { - PUT_WORD(&SS_Ind[4], 0x300E); - } - plci->relatedPTYPLCI = NULL; - plci->ptyState = 0; - sendf(tplci->appl, _FACILITY_I, rId, 0, "ws", 3, SS_Ind); - break; - - case CALL_DEFLECTION: - if (pty_cai[2] != 0xff) - { - PUT_WORD(&SS_Ind[4], 0x3600 | (word)pty_cai[2]); - } - else - { - PUT_WORD(&SS_Ind[4], 0x300E); - } - PUT_WORD(&SS_Ind[1], pty_cai[5]); - for (i = 0; i < max_appl; i++) - { - if (application[i].CDEnable) - { - if (application[i].Id) sendf(&application[i], _FACILITY_I, Id, 0, "ws", 3, SS_Ind); - application[i].CDEnable = false; - } - } - break; - - case DEACTIVATION_DIVERSION: - case ACTIVATION_DIVERSION: - case DIVERSION_INTERROGATE_CFU: - case DIVERSION_INTERROGATE_CFB: - case DIVERSION_INTERROGATE_CFNR: - case DIVERSION_INTERROGATE_NUM: - case CCBS_REQUEST: - case CCBS_DEACTIVATE: - case CCBS_INTERROGATE: - if (!plci->appl) break; - if (pty_cai[2] != 0xff) - { - PUT_WORD(&Interr_Err_Ind[4], 0x3600 | (word)pty_cai[2]); - } - else - { - PUT_WORD(&Interr_Err_Ind[4], 0x300E); - } - switch (pty_cai[5]) - { - case 
DEACTIVATION_DIVERSION: - dbug(1, dprintf("Deact_Div")); - Interr_Err_Ind[0] = 0x9; - Interr_Err_Ind[3] = 0x6; - PUT_WORD(&Interr_Err_Ind[1], S_CALL_FORWARDING_STOP); - break; - case ACTIVATION_DIVERSION: - dbug(1, dprintf("Act_Div")); - Interr_Err_Ind[0] = 0x9; - Interr_Err_Ind[3] = 0x6; - PUT_WORD(&Interr_Err_Ind[1], S_CALL_FORWARDING_START); - break; - case DIVERSION_INTERROGATE_CFU: - case DIVERSION_INTERROGATE_CFB: - case DIVERSION_INTERROGATE_CFNR: - dbug(1, dprintf("Interr_Div")); - Interr_Err_Ind[0] = 0xa; - Interr_Err_Ind[3] = 0x7; - PUT_WORD(&Interr_Err_Ind[1], S_INTERROGATE_DIVERSION); - break; - case DIVERSION_INTERROGATE_NUM: - dbug(1, dprintf("Interr_Num")); - Interr_Err_Ind[0] = 0xa; - Interr_Err_Ind[3] = 0x7; - PUT_WORD(&Interr_Err_Ind[1], S_INTERROGATE_NUMBERS); - break; - case CCBS_REQUEST: - dbug(1, dprintf("CCBS Request")); - Interr_Err_Ind[0] = 0xd; - Interr_Err_Ind[3] = 0xa; - PUT_WORD(&Interr_Err_Ind[1], S_CCBS_REQUEST); - break; - case CCBS_DEACTIVATE: - dbug(1, dprintf("CCBS Deactivate")); - Interr_Err_Ind[0] = 0x9; - Interr_Err_Ind[3] = 0x6; - PUT_WORD(&Interr_Err_Ind[1], S_CCBS_DEACTIVATE); - break; - case CCBS_INTERROGATE: - dbug(1, dprintf("CCBS Interrogate")); - Interr_Err_Ind[0] = 0xb; - Interr_Err_Ind[3] = 0x8; - PUT_WORD(&Interr_Err_Ind[1], S_CCBS_INTERROGATE); - break; - } - PUT_DWORD(&Interr_Err_Ind[6], plci->appl->S_Handle); - sendf(plci->appl, _FACILITY_I, Id & 0x7, 0, "ws", 3, Interr_Err_Ind); - plci_remove(plci); - break; - case ACTIVATION_MWI: - case DEACTIVATION_MWI: - if (pty_cai[5] == ACTIVATION_MWI) - { - PUT_WORD(&SS_Ind[1], S_MWI_ACTIVATE); - } - else PUT_WORD(&SS_Ind[1], S_MWI_DEACTIVATE); - - if (pty_cai[2] != 0xff) - { - PUT_WORD(&SS_Ind[4], 0x3600 | (word)pty_cai[2]); - } - else - { - PUT_WORD(&SS_Ind[4], 0x300E); - } - - if (plci->cr_enquiry) - { - sendf(plci->appl, _FACILITY_I, Id & 0xf, 0, "ws", 3, SS_Ind); - plci_remove(plci); - } - else - { - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, SS_Ind); - } - break; - case CONF_ADD: /* ERROR */ - case CONF_BEGIN: - case CONF_DROP: - case CONF_ISOLATE: - case CONF_REATTACH: - CONF_Ind[0] = 9; - CONF_Ind[3] = 6; - switch (pty_cai[5]) - { - case CONF_BEGIN: - PUT_WORD(&CONF_Ind[1], S_CONF_BEGIN); - plci->ptyState = 0; - break; - case CONF_DROP: - CONF_Ind[0] = 5; - CONF_Ind[3] = 2; - PUT_WORD(&CONF_Ind[1], S_CONF_DROP); - plci->ptyState = CONNECTED; - break; - case CONF_ISOLATE: - CONF_Ind[0] = 5; - CONF_Ind[3] = 2; - PUT_WORD(&CONF_Ind[1], S_CONF_ISOLATE); - plci->ptyState = CONNECTED; - break; - case CONF_REATTACH: - CONF_Ind[0] = 5; - CONF_Ind[3] = 2; - PUT_WORD(&CONF_Ind[1], S_CONF_REATTACH); - plci->ptyState = CONNECTED; - break; - case CONF_ADD: - PUT_WORD(&CONF_Ind[1], S_CONF_ADD); - plci->relatedPTYPLCI = NULL; - tplci = plci->relatedPTYPLCI; - if (tplci) tplci->ptyState = CONNECTED; - plci->ptyState = CONNECTED; - break; - } - - if (pty_cai[2] != 0xff) - { - PUT_WORD(&CONF_Ind[4], 0x3600 | (word)pty_cai[2]); - } - else - { - PUT_WORD(&CONF_Ind[4], 0x3303); /* Time-out: network did not respond - within the required time */ - } - - PUT_DWORD(&CONF_Ind[6], 0x0); - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, CONF_Ind); - break; - } - break; - - /* Supplementary Service indicates success */ - case S_SERVICE: - dbug(1, dprintf("Service_Ind")); - PUT_WORD(&CF_Ind[4], 0); - switch (pty_cai[5]) - { - case THREE_PTY_END: - case THREE_PTY_BEGIN: - case ECT_EXECUTE: - if (!plci->relatedPTYPLCI) break; - tplci = plci->relatedPTYPLCI; - rId = ((word)tplci->Id << 8) | tplci->adapter->Id; - if 
(tplci->tel) rId |= EXT_CONTROLLER; - if (pty_cai[5] == ECT_EXECUTE) - { - PUT_WORD(&SS_Ind[1], S_ECT); - - if (plci->vswitchstate != 3) - { - - plci->ptyState = IDLE; - plci->relatedPTYPLCI = NULL; - plci->ptyState = 0; - - } - - dbug(1, dprintf("ECT OK")); - sendf(tplci->appl, _FACILITY_I, rId, 0, "ws", 3, SS_Ind); - - - - } - else - { - switch (plci->ptyState) - { - case S_3PTY_BEGIN: - plci->ptyState = CONNECTED; - dbug(1, dprintf("3PTY ON")); - break; - - case S_3PTY_END: - plci->ptyState = IDLE; - plci->relatedPTYPLCI = NULL; - plci->ptyState = 0; - dbug(1, dprintf("3PTY OFF")); - break; - } - PUT_WORD(&SS_Ind[1], pty_cai[5] + 3); - sendf(tplci->appl, _FACILITY_I, rId, 0, "ws", 3, SS_Ind); - } - break; - - case CALL_DEFLECTION: - PUT_WORD(&SS_Ind[1], pty_cai[5]); - for (i = 0; i < max_appl; i++) - { - if (application[i].CDEnable) - { - if (application[i].Id) sendf(&application[i], _FACILITY_I, Id, 0, "ws", 3, SS_Ind); - application[i].CDEnable = false; - } - } - break; - - case DEACTIVATION_DIVERSION: - case ACTIVATION_DIVERSION: - if (!plci->appl) break; - PUT_WORD(&CF_Ind[1], pty_cai[5] + 2); - PUT_DWORD(&CF_Ind[6], plci->appl->S_Handle); - sendf(plci->appl, _FACILITY_I, Id & 0x7, 0, "ws", 3, CF_Ind); - plci_remove(plci); - break; - - case DIVERSION_INTERROGATE_CFU: - case DIVERSION_INTERROGATE_CFB: - case DIVERSION_INTERROGATE_CFNR: - case DIVERSION_INTERROGATE_NUM: - case CCBS_REQUEST: - case CCBS_DEACTIVATE: - case CCBS_INTERROGATE: - if (!plci->appl) break; - switch (pty_cai[5]) - { - case DIVERSION_INTERROGATE_CFU: - case DIVERSION_INTERROGATE_CFB: - case DIVERSION_INTERROGATE_CFNR: - dbug(1, dprintf("Interr_Div")); - PUT_WORD(&pty_cai[1], S_INTERROGATE_DIVERSION); - pty_cai[3] = pty_cai[0] - 3; /* Supplementary Service-specific parameter len */ - break; - case DIVERSION_INTERROGATE_NUM: - dbug(1, dprintf("Interr_Num")); - PUT_WORD(&pty_cai[1], S_INTERROGATE_NUMBERS); - pty_cai[3] = pty_cai[0] - 3; /* Supplementary Service-specific parameter len */ - break; - case CCBS_REQUEST: - dbug(1, dprintf("CCBS Request")); - PUT_WORD(&pty_cai[1], S_CCBS_REQUEST); - pty_cai[3] = pty_cai[0] - 3; /* Supplementary Service-specific parameter len */ - break; - case CCBS_DEACTIVATE: - dbug(1, dprintf("CCBS Deactivate")); - PUT_WORD(&pty_cai[1], S_CCBS_DEACTIVATE); - pty_cai[3] = pty_cai[0] - 3; /* Supplementary Service-specific parameter len */ - break; - case CCBS_INTERROGATE: - dbug(1, dprintf("CCBS Interrogate")); - PUT_WORD(&pty_cai[1], S_CCBS_INTERROGATE); - pty_cai[3] = pty_cai[0] - 3; /* Supplementary Service-specific parameter len */ - break; - } - PUT_WORD(&pty_cai[4], 0); /* Supplementary Service Reason */ - PUT_DWORD(&pty_cai[6], plci->appl->S_Handle); - sendf(plci->appl, _FACILITY_I, Id & 0x7, 0, "wS", 3, pty_cai); - plci_remove(plci); - break; - - case ACTIVATION_MWI: - case DEACTIVATION_MWI: - if (pty_cai[5] == ACTIVATION_MWI) - { - PUT_WORD(&SS_Ind[1], S_MWI_ACTIVATE); - } - else PUT_WORD(&SS_Ind[1], S_MWI_DEACTIVATE); - if (plci->cr_enquiry) - { - sendf(plci->appl, _FACILITY_I, Id & 0xf, 0, "ws", 3, SS_Ind); - plci_remove(plci); - } - else - { - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, SS_Ind); - } - break; - case MWI_INDICATION: - if (pty_cai[0] >= 0x12) - { - PUT_WORD(&pty_cai[3], S_MWI_INDICATE); - pty_cai[2] = pty_cai[0] - 2; /* len Parameter */ - pty_cai[5] = pty_cai[0] - 5; /* Supplementary Service-specific parameter len */ - if (plci->appl && (a->Notification_Mask[plci->appl->Id - 1] & SMASK_MWI)) - { - if (plci->internal_command == GET_MWI_STATE) /* result on 
Message Waiting Listen */ - { - sendf(plci->appl, _FACILITY_I, Id & 0xf, 0, "wS", 3, &pty_cai[2]); - plci_remove(plci); - return; - } - else sendf(plci->appl, _FACILITY_I, Id, 0, "wS", 3, &pty_cai[2]); - pty_cai[0] = 0; - } - else - { - for (i = 0; i < max_appl; i++) - { - if (a->Notification_Mask[i]&SMASK_MWI) - { - sendf(&application[i], _FACILITY_I, Id & 0x7, 0, "wS", 3, &pty_cai[2]); - pty_cai[0] = 0; - } - } - } - - if (!pty_cai[0]) - { /* acknowledge */ - facility[2] = 0; /* returncode */ - } - else facility[2] = 0xff; - } - else - { - /* reject */ - facility[2] = 0xff; /* returncode */ - } - facility[0] = 2; - facility[1] = MWI_RESPONSE; /* Function */ - add_p(plci, CAI, facility); - add_p(plci, ESC, multi_ssext_parms[0]); /* remembered parameter -> only one possible */ - sig_req(plci, S_SERVICE, 0); - send_req(plci); - plci->command = 0; - next_internal_command(Id, plci); - break; - case CONF_ADD: /* OK */ - case CONF_BEGIN: - case CONF_DROP: - case CONF_ISOLATE: - case CONF_REATTACH: - case CONF_PARTYDISC: - CONF_Ind[0] = 9; - CONF_Ind[3] = 6; - switch (pty_cai[5]) - { - case CONF_BEGIN: - PUT_WORD(&CONF_Ind[1], S_CONF_BEGIN); - if (pty_cai[0] == 6) - { - d = pty_cai[6]; - PUT_DWORD(&CONF_Ind[6], d); /* PartyID */ - } - else - { - PUT_DWORD(&CONF_Ind[6], 0x0); - } - break; - case CONF_ISOLATE: - PUT_WORD(&CONF_Ind[1], S_CONF_ISOLATE); - CONF_Ind[0] = 5; - CONF_Ind[3] = 2; - break; - case CONF_REATTACH: - PUT_WORD(&CONF_Ind[1], S_CONF_REATTACH); - CONF_Ind[0] = 5; - CONF_Ind[3] = 2; - break; - case CONF_DROP: - PUT_WORD(&CONF_Ind[1], S_CONF_DROP); - CONF_Ind[0] = 5; - CONF_Ind[3] = 2; - break; - case CONF_ADD: - PUT_WORD(&CONF_Ind[1], S_CONF_ADD); - d = pty_cai[6]; - PUT_DWORD(&CONF_Ind[6], d); /* PartyID */ - tplci = plci->relatedPTYPLCI; - if (tplci) tplci->ptyState = CONNECTED; - break; - case CONF_PARTYDISC: - CONF_Ind[0] = 7; - CONF_Ind[3] = 4; - PUT_WORD(&CONF_Ind[1], S_CONF_PARTYDISC); - d = pty_cai[6]; - PUT_DWORD(&CONF_Ind[4], d); /* PartyID */ - break; - } - plci->ptyState = CONNECTED; - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, CONF_Ind); - break; - case CCBS_INFO_RETAIN: - case CCBS_ERASECALLLINKAGEID: - case CCBS_STOP_ALERTING: - CONF_Ind[0] = 5; - CONF_Ind[3] = 2; - switch (pty_cai[5]) - { - case CCBS_INFO_RETAIN: - PUT_WORD(&CONF_Ind[1], S_CCBS_INFO_RETAIN); - break; - case CCBS_STOP_ALERTING: - PUT_WORD(&CONF_Ind[1], S_CCBS_STOP_ALERTING); - break; - case CCBS_ERASECALLLINKAGEID: - PUT_WORD(&CONF_Ind[1], S_CCBS_ERASECALLLINKAGEID); - CONF_Ind[0] = 7; - CONF_Ind[3] = 4; - CONF_Ind[6] = 0; - CONF_Ind[7] = 0; - break; - } - w = pty_cai[6]; - PUT_WORD(&CONF_Ind[4], w); /* PartyID */ - - if (plci->appl && (a->Notification_Mask[plci->appl->Id - 1] & SMASK_CCBS)) - { - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, CONF_Ind); - } - else - { - for (i = 0; i < max_appl; i++) - if (a->Notification_Mask[i] & SMASK_CCBS) - sendf(&application[i], _FACILITY_I, Id & 0x7, 0, "ws", 3, CONF_Ind); - } - break; - } - break; - case CALL_HOLD_REJ: - cau = parms[7]; - if (cau) - { - i = _L3_CAUSE | cau[2]; - if (cau[2] == 0) i = 0x3603; - } - else - { - i = 0x3603; - } - PUT_WORD(&SS_Ind[1], S_HOLD); - PUT_WORD(&SS_Ind[4], i); - if (plci->SuppState == HOLD_REQUEST) - { - plci->SuppState = IDLE; - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, SS_Ind); - } - break; - - case CALL_HOLD_ACK: - if (plci->SuppState == HOLD_REQUEST) - { - plci->SuppState = CALL_HELD; - CodecIdCheck(a, plci); - start_internal_command(Id, plci, hold_save_command); - } - break; - - case CALL_RETRIEVE_REJ: - 
cau = parms[7]; - if (cau) - { - i = _L3_CAUSE | cau[2]; - if (cau[2] == 0) i = 0x3603; - } - else - { - i = 0x3603; - } - PUT_WORD(&SS_Ind[1], S_RETRIEVE); - PUT_WORD(&SS_Ind[4], i); - if (plci->SuppState == RETRIEVE_REQUEST) - { - plci->SuppState = CALL_HELD; - CodecIdCheck(a, plci); - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, SS_Ind); - } - break; - - case CALL_RETRIEVE_ACK: - PUT_WORD(&SS_Ind[1], S_RETRIEVE); - if (plci->SuppState == RETRIEVE_REQUEST) - { - plci->SuppState = IDLE; - plci->call_dir |= CALL_DIR_FORCE_OUTG_NL; - plci->b_channel = esc_chi[esc_chi[0]]&0x1f; - if (plci->tel) - { - mixer_set_bchannel_id_esc(plci, plci->b_channel); - dbug(1, dprintf("RetrChannel=0x%x", plci->b_channel)); - SetVoiceChannel(a->AdvCodecPLCI, esc_chi, a); - if (plci->B2_prot == B2_TRANSPARENT && plci->B3_prot == B3_TRANSPARENT) - { - dbug(1, dprintf("Get B-ch")); - start_internal_command(Id, plci, retrieve_restore_command); - } - else - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", 3, SS_Ind); - } - else - start_internal_command(Id, plci, retrieve_restore_command); - } - break; - - case INDICATE_IND: - if (plci->State != LISTENING) { - sig_req(plci, HANGUP, 0); - send_req(plci); - break; - } - cip = find_cip(a, parms[4], parms[6]); - cip_mask = 1L << cip; - dbug(1, dprintf("cip=%d,cip_mask=%lx", cip, cip_mask)); - bitmap_zero(plci->c_ind_mask_table, MAX_APPL); - if (!remove_started && !a->adapter_disabled) - { - group_optimization(a, plci); - for_each_set_bit(i, plci->group_optimization_mask_table, max_appl) { - if (application[i].Id - && (a->CIP_Mask[i] & 1 || a->CIP_Mask[i] & cip_mask) - && CPN_filter_ok(parms[0], a, i)) { - dbug(1, dprintf("storedcip_mask[%d]=0x%lx", i, a->CIP_Mask[i])); - __set_bit(i, plci->c_ind_mask_table); - dbug(1, dprintf("c_ind_mask =%*pb", MAX_APPL, plci->c_ind_mask_table)); - plci->State = INC_CON_PENDING; - plci->call_dir = (plci->call_dir & ~(CALL_DIR_OUT | CALL_DIR_ORIGINATE)) | - CALL_DIR_IN | CALL_DIR_ANSWER; - if (esc_chi[0]) { - plci->b_channel = esc_chi[esc_chi[0]] & 0x1f; - mixer_set_bchannel_id_esc(plci, plci->b_channel); - } - /* if a listen on the ext controller is done, check if hook states */ - /* are supported or if just a on board codec must be activated */ - if (a->codec_listen[i] && !a->AdvSignalPLCI) { - if (a->profile.Global_Options & HANDSET) - plci->tel = ADV_VOICE; - else if (a->profile.Global_Options & ON_BOARD_CODEC) - plci->tel = CODEC; - if (plci->tel) Id |= EXT_CONTROLLER; - a->codec_listen[i] = plci; - } - - sendf(&application[i], _CONNECT_I, Id, 0, - "wSSSSSSSbSSSSS", cip, /* CIP */ - parms[0], /* CalledPartyNumber */ - multi_CiPN_parms[0], /* CallingPartyNumber */ - parms[2], /* CalledPartySubad */ - parms[3], /* CallingPartySubad */ - parms[4], /* BearerCapability */ - parms[5], /* LowLC */ - parms[6], /* HighLC */ - ai_len, /* nested struct add_i */ - add_i[0], /* B channel info */ - add_i[1], /* keypad facility */ - add_i[2], /* user user data */ - add_i[3], /* nested facility */ - multi_CiPN_parms[1] /* second CiPN(SCR) */ - ); - SendSSExtInd(&application[i], - plci, - Id, - multi_ssext_parms); - SendSetupInfo(&application[i], - plci, - Id, - parms, - SendMultiIE(plci, Id, multi_pi_parms, PI, 0x210, true)); - } - } - dbug(1, dprintf("c_ind_mask =%*pb", MAX_APPL, plci->c_ind_mask_table)); - } - if (bitmap_empty(plci->c_ind_mask_table, MAX_APPL)) { - sig_req(plci, HANGUP, 0); - send_req(plci); - plci->State = IDLE; - } - plci->notifiedcall = 0; - a->listen_active--; - listen_check(a); - break; - - case CALL_PEND_NOTIFY: - 
plci->notifiedcall = 1; - listen_check(a); - break; - - case CALL_IND: - case CALL_CON: - if (plci->State == ADVANCED_VOICE_SIG || plci->State == ADVANCED_VOICE_NOSIG) - { - if (plci->internal_command == PERM_COD_CONN_PEND) - { - if (plci->State == ADVANCED_VOICE_NOSIG) - { - dbug(1, dprintf("***Codec OK")); - if (a->AdvSignalPLCI) - { - tplci = a->AdvSignalPLCI; - if (tplci->spoofed_msg) - { - dbug(1, dprintf("***Spoofed Msg(0x%x)", tplci->spoofed_msg)); - tplci->command = 0; - tplci->internal_command = 0; - x_Id = ((word)tplci->Id << 8) | tplci->adapter->Id | 0x80; - switch (tplci->spoofed_msg) - { - case CALL_RES: - tplci->command = _CONNECT_I | RESPONSE; - api_load_msg(&tplci->saved_msg, saved_parms); - add_b1(tplci, &saved_parms[1], 0, tplci->B1_facilities); - if (tplci->adapter->Info_Mask[tplci->appl->Id - 1] & 0x200) - { - /* early B3 connect (CIP mask bit 9) no release after a disc */ - add_p(tplci, LLI, "\x01\x01"); - } - add_s(tplci, CONN_NR, &saved_parms[2]); - add_s(tplci, LLC, &saved_parms[4]); - add_ai(tplci, &saved_parms[5]); - tplci->State = INC_CON_ACCEPT; - sig_req(tplci, CALL_RES, 0); - send_req(tplci); - break; - - case AWAITING_SELECT_B: - dbug(1, dprintf("Select_B continue")); - start_internal_command(x_Id, tplci, select_b_command); - break; - - case AWAITING_MANUF_CON: /* Get_Plci per Manufacturer_Req to ext controller */ - if (!tplci->Sig.Id) - { - dbug(1, dprintf("No SigID!")); - sendf(tplci->appl, _MANUFACTURER_R | CONFIRM, x_Id, tplci->number, "dww", _DI_MANU_ID, _MANUFACTURER_R, _OUT_OF_PLCI); - plci_remove(tplci); - break; - } - tplci->command = _MANUFACTURER_R; - api_load_msg(&tplci->saved_msg, saved_parms); - dir = saved_parms[2].info[0]; - if (dir == 1) { - sig_req(tplci, CALL_REQ, 0); - } - else if (!dir) { - sig_req(tplci, LISTEN_REQ, 0); - } - send_req(tplci); - sendf(tplci->appl, _MANUFACTURER_R | CONFIRM, x_Id, tplci->number, "dww", _DI_MANU_ID, _MANUFACTURER_R, 0); - break; - - case (CALL_REQ | AWAITING_MANUF_CON): - sig_req(tplci, CALL_REQ, 0); - send_req(tplci); - break; - - case CALL_REQ: - if (!tplci->Sig.Id) - { - dbug(1, dprintf("No SigID!")); - sendf(tplci->appl, _CONNECT_R | CONFIRM, tplci->adapter->Id, 0, "w", _OUT_OF_PLCI); - plci_remove(tplci); - break; - } - tplci->command = _CONNECT_R; - api_load_msg(&tplci->saved_msg, saved_parms); - add_s(tplci, CPN, &saved_parms[1]); - add_s(tplci, DSA, &saved_parms[3]); - add_ai(tplci, &saved_parms[9]); - sig_req(tplci, CALL_REQ, 0); - send_req(tplci); - break; - - case CALL_RETRIEVE: - tplci->command = C_RETRIEVE_REQ; - sig_req(tplci, CALL_RETRIEVE, 0); - send_req(tplci); - break; - } - tplci->spoofed_msg = 0; - if (tplci->internal_command == 0) - next_internal_command(x_Id, tplci); - } - } - next_internal_command(Id, plci); - break; - } - dbug(1, dprintf("***Codec Hook Init Req")); - plci->internal_command = PERM_COD_HOOK; - add_p(plci, FTY, "\x01\x09"); /* Get Hook State*/ - sig_req(plci, TEL_CTRL, 0); - send_req(plci); - } - } - else if (plci->command != _MANUFACTURER_R /* old style permanent connect */ - && plci->State != INC_ACT_PENDING) - { - mixer_set_bchannel_id_esc(plci, plci->b_channel); - if (plci->tel == ADV_VOICE && plci->SuppState == IDLE) /* with permanent codec switch on immediately */ - { - chi[2] = plci->b_channel; - SetVoiceChannel(a->AdvCodecPLCI, chi, a); - } - sendf(plci->appl, _CONNECT_ACTIVE_I, Id, 0, "Sss", parms[21], "", ""); - plci->State = INC_ACT_PENDING; - } - break; - - case TEL_CTRL: - ie = multi_fac_parms[0]; /* inspect the facility hook indications */ - if 
(plci->State == ADVANCED_VOICE_SIG && ie[0]) { - switch (ie[1] & 0x91) { - case 0x80: /* hook off */ - case 0x81: - if (plci->internal_command == PERM_COD_HOOK) - { - dbug(1, dprintf("init:hook_off")); - plci->hook_state = ie[1]; - next_internal_command(Id, plci); - break; - } - else /* ignore doubled hook indications */ - { - if (((plci->hook_state) & 0xf0) == 0x80) - { - dbug(1, dprintf("ignore hook")); - break; - } - plci->hook_state = ie[1]&0x91; - } - /* check for incoming call pending */ - /* and signal '+'.Appl must decide */ - /* with connect_res if call must */ - /* accepted or not */ - for (i = 0, tplci = NULL; i < max_appl; i++) { - if (a->codec_listen[i] - && (a->codec_listen[i]->State == INC_CON_PENDING - || a->codec_listen[i]->State == INC_CON_ALERT)) { - tplci = a->codec_listen[i]; - tplci->appl = &application[i]; - } - } - /* no incoming call, do outgoing call */ - /* and signal '+' if outg. setup */ - if (!a->AdvSignalPLCI && !tplci) { - if ((i = get_plci(a))) { - a->AdvSignalPLCI = &a->plci[i - 1]; - tplci = a->AdvSignalPLCI; - tplci->tel = ADV_VOICE; - PUT_WORD(&voice_cai[5], a->AdvSignalAppl->MaxDataLength); - if (a->Info_Mask[a->AdvSignalAppl->Id - 1] & 0x200) { - /* early B3 connect (CIP mask bit 9) no release after a disc */ - add_p(tplci, LLI, "\x01\x01"); - } - add_p(tplci, CAI, voice_cai); - add_p(tplci, OAD, a->TelOAD); - add_p(tplci, OSA, a->TelOSA); - add_p(tplci, SHIFT | 6, NULL); - add_p(tplci, SIN, "\x02\x01\x00"); - add_p(tplci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(tplci, ASSIGN, DSIG_ID); - a->AdvSignalPLCI->internal_command = HOOK_OFF_REQ; - a->AdvSignalPLCI->command = 0; - tplci->appl = a->AdvSignalAppl; - tplci->call_dir = CALL_DIR_OUT | CALL_DIR_ORIGINATE; - send_req(tplci); - } - - } - - if (!tplci) break; - Id = ((word)tplci->Id << 8) | a->Id; - Id |= EXT_CONTROLLER; - sendf(tplci->appl, - _FACILITY_I, - Id, - 0, - "ws", (word)0, "\x01+"); - break; - - case 0x90: /* hook on */ - case 0x91: - if (plci->internal_command == PERM_COD_HOOK) - { - dbug(1, dprintf("init:hook_on")); - plci->hook_state = ie[1] & 0x91; - next_internal_command(Id, plci); - break; - } - else /* ignore doubled hook indications */ - { - if (((plci->hook_state) & 0xf0) == 0x90) break; - plci->hook_state = ie[1] & 0x91; - } - /* hangup the adv. 
voice call and signal '-' to the appl */ - if (a->AdvSignalPLCI) { - Id = ((word)a->AdvSignalPLCI->Id << 8) | a->Id; - if (plci->tel) Id |= EXT_CONTROLLER; - sendf(a->AdvSignalAppl, - _FACILITY_I, - Id, - 0, - "ws", (word)0, "\x01-"); - a->AdvSignalPLCI->internal_command = HOOK_ON_REQ; - a->AdvSignalPLCI->command = 0; - sig_req(a->AdvSignalPLCI, HANGUP, 0); - send_req(a->AdvSignalPLCI); - } - break; - } - } - break; - - case RESUME: - __clear_bit(plci->appl->Id - 1, plci->c_ind_mask_table); - PUT_WORD(&resume_cau[4], GOOD); - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", (word)3, resume_cau); - break; - - case SUSPEND: - bitmap_zero(plci->c_ind_mask_table, MAX_APPL); - - if (plci->NL.Id && !plci->nl_remove_id) { - mixer_remove(plci); - nl_req_ncci(plci, REMOVE, 0); - } - if (!plci->sig_remove_id) { - plci->internal_command = 0; - sig_req(plci, REMOVE, 0); - } - send_req(plci); - if (!plci->channels) { - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", (word)3, "\x05\x04\x00\x02\x00\x00"); - sendf(plci->appl, _DISCONNECT_I, Id, 0, "w", 0); - } - break; - - case SUSPEND_REJ: - break; - - case HANGUP: - plci->hangup_flow_ctrl_timer = 0; - if (plci->manufacturer && plci->State == LOCAL_CONNECT) break; - cau = parms[7]; - if (cau) { - i = _L3_CAUSE | cau[2]; - if (cau[2] == 0) i = 0; - else if (cau[2] == 8) i = _L1_ERROR; - else if (cau[2] == 9 || cau[2] == 10) i = _L2_ERROR; - else if (cau[2] == 5) i = _CAPI_GUARD_ERROR; - } - else { - i = _L3_ERROR; - } - - if (plci->State == INC_CON_PENDING || plci->State == INC_CON_ALERT) - { - for_each_set_bit(i, plci->c_ind_mask_table, max_appl) - sendf(&application[i], _DISCONNECT_I, Id, 0, "w", 0); - } - else - { - bitmap_zero(plci->c_ind_mask_table, MAX_APPL); - } - if (!plci->appl) - { - if (plci->State == LISTENING) - { - plci->notifiedcall = 0; - a->listen_active--; - } - plci->State = INC_DIS_PENDING; - if (bitmap_empty(plci->c_ind_mask_table, MAX_APPL)) - { - plci->State = IDLE; - if (plci->NL.Id && !plci->nl_remove_id) - { - mixer_remove(plci); - nl_req_ncci(plci, REMOVE, 0); - } - if (!plci->sig_remove_id) - { - plci->internal_command = 0; - sig_req(plci, REMOVE, 0); - } - send_req(plci); - } - } - else - { - /* collision of DISCONNECT or CONNECT_RES with HANGUP can */ - /* result in a second HANGUP! 
Don't generate another */ - /* DISCONNECT */ - if (plci->State != IDLE && plci->State != INC_DIS_PENDING) - { - if (plci->State == RESUMING) - { - PUT_WORD(&resume_cau[4], i); - sendf(plci->appl, _FACILITY_I, Id, 0, "ws", (word)3, resume_cau); - } - plci->State = INC_DIS_PENDING; - sendf(plci->appl, _DISCONNECT_I, Id, 0, "w", i); - } - } - break; - - case SSEXT_IND: - SendSSExtInd(NULL, plci, Id, multi_ssext_parms); - break; - - case VSWITCH_REQ: - VSwitchReqInd(plci, Id, multi_vswitch_parms); - break; - case VSWITCH_IND: - if (plci->relatedPTYPLCI && - plci->vswitchstate == 3 && - plci->relatedPTYPLCI->vswitchstate == 3 && - parms[MAXPARMSIDS - 1][0]) - { - add_p(plci->relatedPTYPLCI, SMSG, parms[MAXPARMSIDS - 1]); - sig_req(plci->relatedPTYPLCI, VSWITCH_REQ, 0); - send_req(plci->relatedPTYPLCI); - } - else VSwitchReqInd(plci, Id, multi_vswitch_parms); - break; - - } -} - - -static void SendSetupInfo(APPL *appl, PLCI *plci, dword Id, byte **parms, byte Info_Sent_Flag) -{ - word i; - byte *ie; - word Info_Number; - byte *Info_Element; - word Info_Mask = 0; - - dbug(1, dprintf("SetupInfo")); - - for (i = 0; i < MAXPARMSIDS; i++) { - ie = parms[i]; - Info_Number = 0; - Info_Element = ie; - if (ie[0]) { - switch (i) { - case 0: - dbug(1, dprintf("CPN ")); - Info_Number = 0x0070; - Info_Mask = 0x80; - Info_Sent_Flag = true; - break; - case 8: /* display */ - dbug(1, dprintf("display(%d)", i)); - Info_Number = 0x0028; - Info_Mask = 0x04; - Info_Sent_Flag = true; - break; - case 16: /* Channel Id */ - dbug(1, dprintf("CHI")); - Info_Number = 0x0018; - Info_Mask = 0x100; - Info_Sent_Flag = true; - mixer_set_bchannel_id(plci, Info_Element); - break; - case 19: /* Redirected Number */ - dbug(1, dprintf("RDN")); - Info_Number = 0x0074; - Info_Mask = 0x400; - Info_Sent_Flag = true; - break; - case 20: /* Redirected Number extended */ - dbug(1, dprintf("RDX")); - Info_Number = 0x0073; - Info_Mask = 0x400; - Info_Sent_Flag = true; - break; - case 22: /* Redirecing Number */ - dbug(1, dprintf("RIN")); - Info_Number = 0x0076; - Info_Mask = 0x400; - Info_Sent_Flag = true; - break; - default: - Info_Number = 0; - break; - } - } - - if (i == MAXPARMSIDS - 2) { /* to indicate the message type "Setup" */ - Info_Number = 0x8000 | 5; - Info_Mask = 0x10; - Info_Element = ""; - } - - if (Info_Sent_Flag && Info_Number) { - if (plci->adapter->Info_Mask[appl->Id - 1] & Info_Mask) { - sendf(appl, _INFO_I, Id, 0, "wS", Info_Number, Info_Element); - } - } - } -} - - -static void SendInfo(PLCI *plci, dword Id, byte **parms, byte iesent) -{ - word i; - word j; - word k; - byte *ie; - word Info_Number; - byte *Info_Element; - word Info_Mask = 0; - static byte charges[5] = {4, 0, 0, 0, 0}; - static byte cause[] = {0x02, 0x80, 0x00}; - APPL *appl; - - dbug(1, dprintf("InfoParse ")); - - if ( - !plci->appl - && !plci->State - && plci->Sig.Ind != NCR_FACILITY - ) - { - dbug(1, dprintf("NoParse ")); - return; - } - cause[2] = 0; - for (i = 0; i < MAXPARMSIDS; i++) { - ie = parms[i]; - Info_Number = 0; - Info_Element = ie; - if (ie[0]) { - switch (i) { - case 0: - dbug(1, dprintf("CPN ")); - Info_Number = 0x0070; - Info_Mask = 0x80; - break; - case 7: /* ESC_CAU */ - dbug(1, dprintf("cau(0x%x)", ie[2])); - Info_Number = 0x0008; - Info_Mask = 0x00; - cause[2] = ie[2]; - Info_Element = NULL; - break; - case 8: /* display */ - dbug(1, dprintf("display(%d)", i)); - Info_Number = 0x0028; - Info_Mask = 0x04; - break; - case 9: /* Date display */ - dbug(1, dprintf("date(%d)", i)); - Info_Number = 0x0029; - Info_Mask = 0x02; - break; - 
case 10: /* charges */ - for (j = 0; j < 4; j++) charges[1 + j] = 0; - for (j = 0; j < ie[0] && !(ie[1 + j] & 0x80); j++); - for (k = 1, j++; j < ie[0] && k <= 4; j++, k++) charges[k] = ie[1 + j]; - Info_Number = 0x4000; - Info_Mask = 0x40; - Info_Element = charges; - break; - case 11: /* user user info */ - dbug(1, dprintf("uui")); - Info_Number = 0x007E; - Info_Mask = 0x08; - break; - case 12: /* congestion receiver ready */ - dbug(1, dprintf("clRDY")); - Info_Number = 0x00B0; - Info_Mask = 0x08; - Info_Element = ""; - break; - case 13: /* congestion receiver not ready */ - dbug(1, dprintf("clNRDY")); - Info_Number = 0x00BF; - Info_Mask = 0x08; - Info_Element = ""; - break; - case 15: /* Keypad Facility */ - dbug(1, dprintf("KEY")); - Info_Number = 0x002C; - Info_Mask = 0x20; - break; - case 16: /* Channel Id */ - dbug(1, dprintf("CHI")); - Info_Number = 0x0018; - Info_Mask = 0x100; - mixer_set_bchannel_id(plci, Info_Element); - break; - case 17: /* if no 1tr6 cause, send full cause, else esc_cause */ - dbug(1, dprintf("q9cau(0x%x)", ie[2])); - if (!cause[2] || cause[2] < 0x80) break; /* eg. layer 1 error */ - Info_Number = 0x0008; - Info_Mask = 0x01; - if (cause[2] != ie[2]) Info_Element = cause; - break; - case 19: /* Redirected Number */ - dbug(1, dprintf("RDN")); - Info_Number = 0x0074; - Info_Mask = 0x400; - break; - case 22: /* Redirecing Number */ - dbug(1, dprintf("RIN")); - Info_Number = 0x0076; - Info_Mask = 0x400; - break; - case 23: /* Notification Indicator */ - dbug(1, dprintf("NI")); - Info_Number = (word)NI; - Info_Mask = 0x210; - break; - case 26: /* Call State */ - dbug(1, dprintf("CST")); - Info_Number = (word)CST; - Info_Mask = 0x01; /* do with cause i.e. for now */ - break; - case MAXPARMSIDS - 2: /* Escape Message Type, must be the last indication */ - dbug(1, dprintf("ESC/MT[0x%x]", ie[3])); - Info_Number = 0x8000 | ie[3]; - if (iesent) Info_Mask = 0xffff; - else Info_Mask = 0x10; - Info_Element = ""; - break; - default: - Info_Number = 0; - Info_Mask = 0; - Info_Element = ""; - break; - } - } - - if (plci->Sig.Ind == NCR_FACILITY) /* check controller broadcast */ - { - for (j = 0; j < max_appl; j++) - { - appl = &application[j]; - if (Info_Number - && appl->Id - && plci->adapter->Info_Mask[appl->Id - 1] & Info_Mask) - { - dbug(1, dprintf("NCR_Ind")); - iesent = true; - sendf(&application[j], _INFO_I, Id & 0x0f, 0, "wS", Info_Number, Info_Element); - } - } - } - else if (!plci->appl) - { /* overlap receiving broadcast */ - if (Info_Number == CPN - || Info_Number == KEY - || Info_Number == NI - || Info_Number == DSP - || Info_Number == UUI) - { - for_each_set_bit(j, plci->c_ind_mask_table, max_appl) { - dbug(1, dprintf("Ovl_Ind")); - iesent = true; - sendf(&application[j], _INFO_I, Id, 0, "wS", Info_Number, Info_Element); - } - } - } /* all other signalling states */ - else if (Info_Number - && plci->adapter->Info_Mask[plci->appl->Id - 1] & Info_Mask) - { - dbug(1, dprintf("Std_Ind")); - iesent = true; - sendf(plci->appl, _INFO_I, Id, 0, "wS", Info_Number, Info_Element); - } - } -} - - -static byte SendMultiIE(PLCI *plci, dword Id, byte **parms, byte ie_type, - dword info_mask, byte setupParse) -{ - word i; - word j; - byte *ie; - word Info_Number; - byte *Info_Element; - APPL *appl; - word Info_Mask = 0; - byte iesent = 0; - - if ( - !plci->appl - && !plci->State - && plci->Sig.Ind != NCR_FACILITY - && !setupParse - ) - { - dbug(1, dprintf("NoM-IEParse ")); - return 0; - } - dbug(1, dprintf("M-IEParse ")); - - for (i = 0; i < MAX_MULTI_IE; i++) - { - ie = parms[i]; 
- Info_Number = 0; - Info_Element = ie; - if (ie[0]) - { - dbug(1, dprintf("[Ind0x%x]:IE=0x%x", plci->Sig.Ind, ie_type)); - Info_Number = (word)ie_type; - Info_Mask = (word)info_mask; - } - - if (plci->Sig.Ind == NCR_FACILITY) /* check controller broadcast */ - { - for (j = 0; j < max_appl; j++) - { - appl = &application[j]; - if (Info_Number - && appl->Id - && plci->adapter->Info_Mask[appl->Id - 1] & Info_Mask) - { - iesent = true; - dbug(1, dprintf("Mlt_NCR_Ind")); - sendf(&application[j], _INFO_I, Id & 0x0f, 0, "wS", Info_Number, Info_Element); - } - } - } - else if (!plci->appl && Info_Number) - { /* overlap receiving broadcast */ - for_each_set_bit(j, plci->c_ind_mask_table, max_appl) { - iesent = true; - dbug(1, dprintf("Mlt_Ovl_Ind")); - sendf(&application[j] , _INFO_I, Id, 0, "wS", Info_Number, Info_Element); - } - } /* all other signalling states */ - else if (Info_Number - && plci->adapter->Info_Mask[plci->appl->Id - 1] & Info_Mask) - { - iesent = true; - dbug(1, dprintf("Mlt_Std_Ind")); - sendf(plci->appl, _INFO_I, Id, 0, "wS", Info_Number, Info_Element); - } - } - return iesent; -} - -static void SendSSExtInd(APPL *appl, PLCI *plci, dword Id, byte **parms) -{ - word i; - /* Format of multi_ssext_parms[i][]: - 0 byte length - 1 byte SSEXTIE - 2 byte SSEXT_REQ/SSEXT_IND - 3 byte length - 4 word SSExtCommand - 6... Params - */ - if ( - plci - && plci->State - && plci->Sig.Ind != NCR_FACILITY - ) - for (i = 0; i < MAX_MULTI_IE; i++) - { - if (parms[i][0] < 6) continue; - if (parms[i][2] == SSEXT_REQ) continue; - - if (appl) - { - parms[i][0] = 0; /* kill it */ - sendf(appl, _MANUFACTURER_I, - Id, - 0, - "dwS", - _DI_MANU_ID, - _DI_SSEXT_CTRL, - &parms[i][3]); - } - else if (plci->appl) - { - parms[i][0] = 0; /* kill it */ - sendf(plci->appl, _MANUFACTURER_I, - Id, - 0, - "dwS", - _DI_MANU_ID, - _DI_SSEXT_CTRL, - &parms[i][3]); - } - } -}; - -static void nl_ind(PLCI *plci) -{ - byte ch; - word ncci; - dword Id; - DIVA_CAPI_ADAPTER *a; - word NCCIcode; - APPL *APPLptr; - word count; - word Num; - word i, ncpi_state; - byte len, ncci_state; - word msg; - word info = 0; - word fax_feature_bits; - byte fax_send_edata_ack; - static byte v120_header_buffer[2 + 3]; - static word fax_info[] = { - 0, /* T30_SUCCESS */ - _FAX_NO_CONNECTION, /* T30_ERR_NO_DIS_RECEIVED */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_TIMEOUT_NO_RESPONSE */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_RESPONSE */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_TOO_MANY_REPEATS */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_UNEXPECTED_MESSAGE */ - _FAX_REMOTE_ABORT, /* T30_ERR_UNEXPECTED_DCN */ - _FAX_LOCAL_ABORT, /* T30_ERR_DTC_UNSUPPORTED */ - _FAX_TRAINING_ERROR, /* T30_ERR_ALL_RATES_FAILED */ - _FAX_TRAINING_ERROR, /* T30_ERR_TOO_MANY_TRAINS */ - _FAX_PARAMETER_ERROR, /* T30_ERR_RECEIVE_CORRUPTED */ - _FAX_REMOTE_ABORT, /* T30_ERR_UNEXPECTED_DISC */ - _FAX_LOCAL_ABORT, /* T30_ERR_APPLICATION_DISC */ - _FAX_REMOTE_REJECT, /* T30_ERR_INCOMPATIBLE_DIS */ - _FAX_LOCAL_ABORT, /* T30_ERR_INCOMPATIBLE_DCS */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_TIMEOUT_NO_COMMAND */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_COMMAND */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_TIMEOUT_COMMAND_TOO_LONG */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_TIMEOUT_RESPONSE_TOO_LONG */ - _FAX_NO_CONNECTION, /* T30_ERR_NOT_IDENTIFIED */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_SUPERVISORY_TIMEOUT */ - _FAX_PARAMETER_ERROR, /* T30_ERR_TOO_LONG_SCAN_LINE */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_PAGE_AFTER_MPS */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_PAGE_AFTER_CFR */ - _FAX_PROTOCOL_ERROR, /* 
T30_ERR_RETRY_NO_DCS_AFTER_FTT */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_DCS_AFTER_EOM */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_DCS_AFTER_MPS */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_DCN_AFTER_MCF */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_DCN_AFTER_RTN */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_CFR */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_MCF_AFTER_EOP */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_MCF_AFTER_EOM */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_RETRY_NO_MCF_AFTER_MPS */ - 0x331d, /* T30_ERR_SUB_SEP_UNSUPPORTED */ - 0x331e, /* T30_ERR_PWD_UNSUPPORTED */ - 0x331f, /* T30_ERR_SUB_SEP_PWD_UNSUPPORTED */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_INVALID_COMMAND_FRAME */ - _FAX_PARAMETER_ERROR, /* T30_ERR_UNSUPPORTED_PAGE_CODING */ - _FAX_PARAMETER_ERROR, /* T30_ERR_INVALID_PAGE_CODING */ - _FAX_REMOTE_REJECT, /* T30_ERR_INCOMPATIBLE_PAGE_CONFIG */ - _FAX_LOCAL_ABORT, /* T30_ERR_TIMEOUT_FROM_APPLICATION */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_V34FAX_NO_REACTION_ON_MARK */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_V34FAX_TRAINING_TIMEOUT */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_V34FAX_UNEXPECTED_V21 */ - _FAX_PROTOCOL_ERROR, /* T30_ERR_V34FAX_PRIMARY_CTS_ON */ - _FAX_LOCAL_ABORT, /* T30_ERR_V34FAX_TURNAROUND_POLLING */ - _FAX_LOCAL_ABORT /* T30_ERR_V34FAX_V8_INCOMPATIBILITY */ - }; - - byte dtmf_code_buffer[CAPIDTMF_RECV_DIGIT_BUFFER_SIZE + 1]; - - - static word rtp_info[] = { - GOOD, /* RTP_SUCCESS */ - 0x3600 /* RTP_ERR_SSRC_OR_PAYLOAD_CHANGE */ - }; - - static dword udata_forwarding_table[0x100 / sizeof(dword)] = - { - 0x0020301e, 0x00000000, 0x00000000, 0x00000000, - 0x00000000, 0x00000000, 0x00000000, 0x00000000 - }; - - ch = plci->NL.IndCh; - a = plci->adapter; - ncci = a->ch_ncci[ch]; - Id = (((dword)(ncci ? ncci : ch)) << 16) | (((word) plci->Id) << 8) | a->Id; - if (plci->tel) Id |= EXT_CONTROLLER; - APPLptr = plci->appl; - dbug(1, dprintf("NL_IND-Id(NL:0x%x)=0x%08lx,plci=%x,tel=%x,state=0x%x,ch=0x%x,chs=%d,Ind=%x", - plci->NL.Id, Id, plci->Id, plci->tel, plci->State, ch, plci->channels, plci->NL.Ind & 0x0f)); - - /* in the case if no connect_active_Ind was sent to the appl we wait for */ - - if (plci->nl_remove_id) - { - plci->NL.RNR = 2; /* discard */ - dbug(1, dprintf("NL discard while remove pending")); - return; - } - if ((plci->NL.Ind & 0x0f) == N_CONNECT) - { - if (plci->State == INC_DIS_PENDING - || plci->State == OUTG_DIS_PENDING - || plci->State == IDLE) - { - plci->NL.RNR = 2; /* discard */ - dbug(1, dprintf("discard n_connect")); - return; - } - if (plci->State < INC_ACT_PENDING) - { - plci->NL.RNR = 1; /* flow control */ - channel_x_off(plci, ch, N_XON_CONNECT_IND); - return; - } - } - - if (!APPLptr) /* no application or invalid data */ - { /* while reloading the DSP */ - dbug(1, dprintf("discard1")); - plci->NL.RNR = 2; - return; - } - - if (((plci->NL.Ind & 0x0f) == N_UDATA) - && (((plci->B2_prot != B2_SDLC) && ((plci->B1_resource == 17) || (plci->B1_resource == 18))) - || (plci->B2_prot == 7) - || (plci->B3_prot == 7))) - { - plci->ncpi_buffer[0] = 0; - - ncpi_state = plci->ncpi_state; - if (plci->NL.complete == 1) - { - byte *data = &plci->NL.RBuffer->P[0]; - - if ((plci->NL.RBuffer->length >= 12) - && ((*data == DSP_UDATA_INDICATION_DCD_ON) - || (*data == DSP_UDATA_INDICATION_CTS_ON))) - { - word conn_opt, ncpi_opt = 0x00; -/* HexDump ("MDM N_UDATA:", plci->NL.RBuffer->length, data); */ - - if (*data == DSP_UDATA_INDICATION_DCD_ON) - plci->ncpi_state |= NCPI_MDM_DCD_ON_RECEIVED; - if (*data == DSP_UDATA_INDICATION_CTS_ON) - plci->ncpi_state |= 
NCPI_MDM_CTS_ON_RECEIVED; - - data++; /* indication code */ - data += 2; /* timestamp */ - if ((*data == DSP_CONNECTED_NORM_V18) || (*data == DSP_CONNECTED_NORM_VOWN)) - ncpi_state &= ~(NCPI_MDM_DCD_ON_RECEIVED | NCPI_MDM_CTS_ON_RECEIVED); - data++; /* connected norm */ - conn_opt = GET_WORD(data); - data += 2; /* connected options */ - - PUT_WORD(&(plci->ncpi_buffer[1]), (word)(GET_DWORD(data) & 0x0000FFFF)); - - if (conn_opt & DSP_CONNECTED_OPTION_MASK_V42) - { - ncpi_opt |= MDM_NCPI_ECM_V42; - } - else if (conn_opt & DSP_CONNECTED_OPTION_MASK_MNP) - { - ncpi_opt |= MDM_NCPI_ECM_MNP; - } - else - { - ncpi_opt |= MDM_NCPI_TRANSPARENT; - } - if (conn_opt & DSP_CONNECTED_OPTION_MASK_COMPRESSION) - { - ncpi_opt |= MDM_NCPI_COMPRESSED; - } - PUT_WORD(&(plci->ncpi_buffer[3]), ncpi_opt); - plci->ncpi_buffer[0] = 4; - - plci->ncpi_state |= NCPI_VALID_CONNECT_B3_IND | NCPI_VALID_CONNECT_B3_ACT | NCPI_VALID_DISC_B3_IND; - } - } - if (plci->B3_prot == 7) - { - if (((a->ncci_state[ncci] == INC_ACT_PENDING) || (a->ncci_state[ncci] == OUTG_CON_PENDING)) - && (plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_CONNECT_B3_ACT_SENT)) - { - a->ncci_state[ncci] = INC_ACT_PENDING; - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "S", plci->ncpi_buffer); - plci->ncpi_state |= NCPI_CONNECT_B3_ACT_SENT; - } - } - - if (!((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[plci->appl->Id - 1]) - & ((1L << PRIVATE_V18) | (1L << PRIVATE_VOWN))) - || !(ncpi_state & NCPI_MDM_DCD_ON_RECEIVED) - || !(ncpi_state & NCPI_MDM_CTS_ON_RECEIVED)) - - { - plci->NL.RNR = 2; - return; - } - } - - if (plci->NL.complete == 2) - { - if (((plci->NL.Ind & 0x0f) == N_UDATA) - && !(udata_forwarding_table[plci->RData[0].P[0] >> 5] & (1L << (plci->RData[0].P[0] & 0x1f)))) - { - switch (plci->RData[0].P[0]) - { - - case DTMF_UDATA_INDICATION_FAX_CALLING_TONE: - if (plci->dtmf_rec_active & DTMF_LISTEN_ACTIVE_FLAG) - sendf(plci->appl, _FACILITY_I, Id & 0xffffL, 0, "ws", SELECTOR_DTMF, "\x01X"); - break; - case DTMF_UDATA_INDICATION_ANSWER_TONE: - if (plci->dtmf_rec_active & DTMF_LISTEN_ACTIVE_FLAG) - sendf(plci->appl, _FACILITY_I, Id & 0xffffL, 0, "ws", SELECTOR_DTMF, "\x01Y"); - break; - case DTMF_UDATA_INDICATION_DIGITS_RECEIVED: - dtmf_indication(Id, plci, plci->RData[0].P, plci->RData[0].PLength); - break; - case DTMF_UDATA_INDICATION_DIGITS_SENT: - dtmf_confirmation(Id, plci); - break; - - - case UDATA_INDICATION_MIXER_TAP_DATA: - capidtmf_recv_process_block(&(plci->capidtmf_state), plci->RData[0].P + 1, (word)(plci->RData[0].PLength - 1)); - i = capidtmf_indication(&(plci->capidtmf_state), dtmf_code_buffer + 1); - if (i != 0) - { - dtmf_code_buffer[0] = DTMF_UDATA_INDICATION_DIGITS_RECEIVED; - dtmf_indication(Id, plci, dtmf_code_buffer, (word)(i + 1)); - } - break; - - - case UDATA_INDICATION_MIXER_COEFS_SET: - mixer_indication_coefs_set(Id, plci); - break; - case UDATA_INDICATION_XCONNECT_FROM: - mixer_indication_xconnect_from(Id, plci, plci->RData[0].P, plci->RData[0].PLength); - break; - case UDATA_INDICATION_XCONNECT_TO: - mixer_indication_xconnect_to(Id, plci, plci->RData[0].P, plci->RData[0].PLength); - break; - - - case LEC_UDATA_INDICATION_DISABLE_DETECT: - ec_indication(Id, plci, plci->RData[0].P, plci->RData[0].PLength); - break; - - - - default: - break; - } - } - else - { - if ((plci->RData[0].PLength != 0) - && ((plci->B2_prot == B2_V120_ASYNC) - || (plci->B2_prot == B2_V120_ASYNC_V42BIS) - || (plci->B2_prot == B2_V120_BIT_TRANSPARENT))) - { - - 
sendf(plci->appl, _DATA_B3_I, Id, 0, - "dwww", - plci->RData[1].P, - (plci->NL.RNum < 2) ? 0 : plci->RData[1].PLength, - plci->RNum, - plci->RFlags); - - } - else - { - - sendf(plci->appl, _DATA_B3_I, Id, 0, - "dwww", - plci->RData[0].P, - plci->RData[0].PLength, - plci->RNum, - plci->RFlags); - - } - } - return; - } - - fax_feature_bits = 0; - if ((plci->NL.Ind & 0x0f) == N_CONNECT || - (plci->NL.Ind & 0x0f) == N_CONNECT_ACK || - (plci->NL.Ind & 0x0f) == N_DISC || - (plci->NL.Ind & 0x0f) == N_EDATA || - (plci->NL.Ind & 0x0f) == N_DISC_ACK) - { - info = 0; - plci->ncpi_buffer[0] = 0; - switch (plci->B3_prot) { - case 0: /*XPARENT*/ - case 1: /*T.90 NL*/ - break; /* no network control protocol info - jfr */ - case 2: /*ISO8202*/ - case 3: /*X25 DCE*/ - for (i = 0; i < plci->NL.RLength; i++) plci->ncpi_buffer[4 + i] = plci->NL.RBuffer->P[i]; - plci->ncpi_buffer[0] = (byte)(i + 3); - plci->ncpi_buffer[1] = (byte)(plci->NL.Ind & N_D_BIT ? 1 : 0); - plci->ncpi_buffer[2] = 0; - plci->ncpi_buffer[3] = 0; - break; - case 4: /*T.30 - FAX*/ - case 5: /*T.30 - FAX*/ - if (plci->NL.RLength >= sizeof(T30_INFO)) - { - dbug(1, dprintf("FaxStatus %04x", ((T30_INFO *)plci->NL.RBuffer->P)->code)); - len = 9; - PUT_WORD(&(plci->ncpi_buffer[1]), ((T30_INFO *)plci->NL.RBuffer->P)->rate_div_2400 * 2400); - fax_feature_bits = GET_WORD(&((T30_INFO *)plci->NL.RBuffer->P)->feature_bits_low); - i = (((T30_INFO *)plci->NL.RBuffer->P)->resolution & T30_RESOLUTION_R8_0770_OR_200) ? 0x0001 : 0x0000; - if (plci->B3_prot == 5) - { - if (!(fax_feature_bits & T30_FEATURE_BIT_ECM)) - i |= 0x8000; /* This is not an ECM connection */ - if (fax_feature_bits & T30_FEATURE_BIT_T6_CODING) - i |= 0x4000; /* This is a connection with MMR compression */ - if (fax_feature_bits & T30_FEATURE_BIT_2D_CODING) - i |= 0x2000; /* This is a connection with MR compression */ - if (fax_feature_bits & T30_FEATURE_BIT_MORE_DOCUMENTS) - i |= 0x0004; /* More documents */ - if (fax_feature_bits & T30_FEATURE_BIT_POLLING) - i |= 0x0002; /* Fax-polling indication */ - } - dbug(1, dprintf("FAX Options %04x %04x", fax_feature_bits, i)); - PUT_WORD(&(plci->ncpi_buffer[3]), i); - PUT_WORD(&(plci->ncpi_buffer[5]), ((T30_INFO *)plci->NL.RBuffer->P)->data_format); - plci->ncpi_buffer[7] = ((T30_INFO *)plci->NL.RBuffer->P)->pages_low; - plci->ncpi_buffer[8] = ((T30_INFO *)plci->NL.RBuffer->P)->pages_high; - plci->ncpi_buffer[len] = 0; - if (((T30_INFO *)plci->NL.RBuffer->P)->station_id_len) - { - plci->ncpi_buffer[len] = 20; - for (i = 0; i < T30_MAX_STATION_ID_LENGTH; i++) - plci->ncpi_buffer[++len] = ((T30_INFO *)plci->NL.RBuffer->P)->station_id[i]; - } - if (((plci->NL.Ind & 0x0f) == N_DISC) || ((plci->NL.Ind & 0x0f) == N_DISC_ACK)) - { - if (((T30_INFO *)plci->NL.RBuffer->P)->code < ARRAY_SIZE(fax_info)) - info = fax_info[((T30_INFO *)plci->NL.RBuffer->P)->code]; - else - info = _FAX_PROTOCOL_ERROR; - } - - if ((plci->requested_options_conn | plci->requested_options | a->requested_options_table[plci->appl->Id - 1]) - & ((1L << PRIVATE_FAX_SUB_SEP_PWD) | (1L << PRIVATE_FAX_NONSTANDARD))) - { - i = offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH + ((T30_INFO *)plci->NL.RBuffer->P)->head_line_len; - while (i < plci->NL.RBuffer->length) - plci->ncpi_buffer[++len] = plci->NL.RBuffer->P[i++]; - } - - plci->ncpi_buffer[0] = len; - fax_feature_bits = GET_WORD(&((T30_INFO *)plci->NL.RBuffer->P)->feature_bits_low); - PUT_WORD(&((T30_INFO *)plci->fax_connect_info_buffer)->feature_bits_low, fax_feature_bits); - - plci->ncpi_state |= 
NCPI_VALID_CONNECT_B3_IND; - if (((plci->NL.Ind & 0x0f) == N_CONNECT_ACK) - || (((plci->NL.Ind & 0x0f) == N_CONNECT) - && (fax_feature_bits & T30_FEATURE_BIT_POLLING)) - || (((plci->NL.Ind & 0x0f) == N_EDATA) - && ((((T30_INFO *)plci->NL.RBuffer->P)->code == EDATA_T30_TRAIN_OK) - || (((T30_INFO *)plci->NL.RBuffer->P)->code == EDATA_T30_DIS) - || (((T30_INFO *)plci->NL.RBuffer->P)->code == EDATA_T30_DTC)))) - { - plci->ncpi_state |= NCPI_VALID_CONNECT_B3_ACT; - } - if (((plci->NL.Ind & 0x0f) == N_DISC) - || ((plci->NL.Ind & 0x0f) == N_DISC_ACK) - || (((plci->NL.Ind & 0x0f) == N_EDATA) - && (((T30_INFO *)plci->NL.RBuffer->P)->code == EDATA_T30_EOP_CAPI))) - { - plci->ncpi_state |= NCPI_VALID_CONNECT_B3_ACT | NCPI_VALID_DISC_B3_IND; - } - } - break; - - case B3_RTP: - if (((plci->NL.Ind & 0x0f) == N_DISC) || ((plci->NL.Ind & 0x0f) == N_DISC_ACK)) - { - if (plci->NL.RLength != 0) - { - info = rtp_info[plci->NL.RBuffer->P[0]]; - plci->ncpi_buffer[0] = plci->NL.RLength - 1; - for (i = 1; i < plci->NL.RLength; i++) - plci->ncpi_buffer[i] = plci->NL.RBuffer->P[i]; - } - } - break; - - } - plci->NL.RNR = 2; - } - switch (plci->NL.Ind & 0x0f) { - case N_EDATA: - if ((plci->B3_prot == 4) || (plci->B3_prot == 5)) - { - dbug(1, dprintf("EDATA ncci=0x%x state=%d code=%02x", ncci, a->ncci_state[ncci], - ((T30_INFO *)plci->NL.RBuffer->P)->code)); - fax_send_edata_ack = (((T30_INFO *)(plci->fax_connect_info_buffer))->operating_mode == T30_OPERATING_MODE_CAPI_NEG); - - if ((plci->nsf_control_bits & T30_NSF_CONTROL_BIT_ENABLE_NSF) - && (plci->nsf_control_bits & (T30_NSF_CONTROL_BIT_NEGOTIATE_IND | T30_NSF_CONTROL_BIT_NEGOTIATE_RESP)) - && (((T30_INFO *)plci->NL.RBuffer->P)->code == EDATA_T30_DIS) - && (a->ncci_state[ncci] == OUTG_CON_PENDING) - && (plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_NEGOTIATE_B3_SENT)) - { - ((T30_INFO *)(plci->fax_connect_info_buffer))->code = ((T30_INFO *)plci->NL.RBuffer->P)->code; - sendf(plci->appl, _MANUFACTURER_I, Id, 0, "dwbS", _DI_MANU_ID, _DI_NEGOTIATE_B3, - (byte)(plci->ncpi_buffer[0] + 1), plci->ncpi_buffer); - plci->ncpi_state |= NCPI_NEGOTIATE_B3_SENT; - if (plci->nsf_control_bits & T30_NSF_CONTROL_BIT_NEGOTIATE_RESP) - fax_send_edata_ack = false; - } - - if (a->manufacturer_features & MANUFACTURER_FEATURE_FAX_PAPER_FORMATS) - { - switch (((T30_INFO *)plci->NL.RBuffer->P)->code) - { - case EDATA_T30_DIS: - if ((a->ncci_state[ncci] == OUTG_CON_PENDING) - && !(GET_WORD(&((T30_INFO *)plci->fax_connect_info_buffer)->control_bits_low) & T30_CONTROL_BIT_REQUEST_POLLING) - && (plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_CONNECT_B3_ACT_SENT)) - { - a->ncci_state[ncci] = INC_ACT_PENDING; - if (plci->B3_prot == 4) - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - else - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "S", plci->ncpi_buffer); - plci->ncpi_state |= NCPI_CONNECT_B3_ACT_SENT; - } - break; - - case EDATA_T30_TRAIN_OK: - if ((a->ncci_state[ncci] == INC_ACT_PENDING) - && (plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_CONNECT_B3_ACT_SENT)) - { - if (plci->B3_prot == 4) - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - else - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "S", plci->ncpi_buffer); - plci->ncpi_state |= NCPI_CONNECT_B3_ACT_SENT; - } - break; - - case EDATA_T30_EOP_CAPI: - if (a->ncci_state[ncci] == CONNECTED) - { - sendf(plci->appl, _DISCONNECT_B3_I, Id, 0, "wS", GOOD, plci->ncpi_buffer); - a->ncci_state[ncci] = INC_DIS_PENDING; - 
plci->ncpi_state = 0; - fax_send_edata_ack = false; - } - break; - } - } - else - { - switch (((T30_INFO *)plci->NL.RBuffer->P)->code) - { - case EDATA_T30_TRAIN_OK: - if ((a->ncci_state[ncci] == INC_ACT_PENDING) - && (plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_CONNECT_B3_ACT_SENT)) - { - if (plci->B3_prot == 4) - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - else - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "S", plci->ncpi_buffer); - plci->ncpi_state |= NCPI_CONNECT_B3_ACT_SENT; - } - break; - } - } - if (fax_send_edata_ack) - { - ((T30_INFO *)(plci->fax_connect_info_buffer))->code = ((T30_INFO *)plci->NL.RBuffer->P)->code; - plci->fax_edata_ack_length = 1; - start_internal_command(Id, plci, fax_edata_ack_command); - } - } - else - { - dbug(1, dprintf("EDATA ncci=0x%x state=%d", ncci, a->ncci_state[ncci])); - } - break; - case N_CONNECT: - if (!a->ch_ncci[ch]) - { - ncci = get_ncci(plci, ch, 0); - Id = (Id & 0xffff) | (((dword) ncci) << 16); - } - dbug(1, dprintf("N_CONNECT: ch=%d state=%d plci=%lx plci_Id=%lx plci_State=%d", - ch, a->ncci_state[ncci], a->ncci_plci[ncci], plci->Id, plci->State)); - - msg = _CONNECT_B3_I; - if (a->ncci_state[ncci] == IDLE) - plci->channels++; - else if (plci->B3_prot == 1) - msg = _CONNECT_B3_T90_ACTIVE_I; - - a->ncci_state[ncci] = INC_CON_PENDING; - if (plci->B3_prot == 4) - sendf(plci->appl, msg, Id, 0, "s", ""); - else - sendf(plci->appl, msg, Id, 0, "S", plci->ncpi_buffer); - break; - case N_CONNECT_ACK: - dbug(1, dprintf("N_connect_Ack")); - if (plci->internal_command_queue[0] - && ((plci->adjust_b_state == ADJUST_B_CONNECT_2) - || (plci->adjust_b_state == ADJUST_B_CONNECT_3) - || (plci->adjust_b_state == ADJUST_B_CONNECT_4))) - { - (*(plci->internal_command_queue[0]))(Id, plci, 0); - if (!plci->internal_command) - next_internal_command(Id, plci); - break; - } - msg = _CONNECT_B3_ACTIVE_I; - if (plci->B3_prot == 1) - { - if (a->ncci_state[ncci] != OUTG_CON_PENDING) - msg = _CONNECT_B3_T90_ACTIVE_I; - a->ncci_state[ncci] = INC_ACT_PENDING; - sendf(plci->appl, msg, Id, 0, "S", plci->ncpi_buffer); - } - else if ((plci->B3_prot == 4) || (plci->B3_prot == 5) || (plci->B3_prot == 7)) - { - if ((a->ncci_state[ncci] == OUTG_CON_PENDING) - && (plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_CONNECT_B3_ACT_SENT)) - { - a->ncci_state[ncci] = INC_ACT_PENDING; - if (plci->B3_prot == 4) - sendf(plci->appl, msg, Id, 0, "s", ""); - else - sendf(plci->appl, msg, Id, 0, "S", plci->ncpi_buffer); - plci->ncpi_state |= NCPI_CONNECT_B3_ACT_SENT; - } - } - else - { - a->ncci_state[ncci] = INC_ACT_PENDING; - sendf(plci->appl, msg, Id, 0, "S", plci->ncpi_buffer); - } - if (plci->adjust_b_restore) - { - plci->adjust_b_restore = false; - start_internal_command(Id, plci, adjust_b_restore); - } - break; - case N_DISC: - case N_DISC_ACK: - if (plci->internal_command_queue[0] - && ((plci->internal_command == FAX_DISCONNECT_COMMAND_1) - || (plci->internal_command == FAX_DISCONNECT_COMMAND_2) - || (plci->internal_command == FAX_DISCONNECT_COMMAND_3))) - { - (*(plci->internal_command_queue[0]))(Id, plci, 0); - if (!plci->internal_command) - next_internal_command(Id, plci); - } - ncci_state = a->ncci_state[ncci]; - ncci_remove(plci, ncci, false); - - /* with N_DISC or N_DISC_ACK the IDI frees the respective */ - /* channel, so we cannot store the state in ncci_state! The */ - /* information which channel we received a N_DISC is thus */ - /* stored in the inc_dis_ncci_table buffer. 
*/ - for (i = 0; plci->inc_dis_ncci_table[i]; i++); - plci->inc_dis_ncci_table[i] = (byte) ncci; - - /* need a connect_b3_ind before a disconnect_b3_ind with FAX */ - if (!plci->channels - && (plci->B1_resource == 16) - && (plci->State <= CONNECTED)) - { - len = 9; - i = ((T30_INFO *)plci->fax_connect_info_buffer)->rate_div_2400 * 2400; - PUT_WORD(&plci->ncpi_buffer[1], i); - PUT_WORD(&plci->ncpi_buffer[3], 0); - i = ((T30_INFO *)plci->fax_connect_info_buffer)->data_format; - PUT_WORD(&plci->ncpi_buffer[5], i); - PUT_WORD(&plci->ncpi_buffer[7], 0); - plci->ncpi_buffer[len] = 0; - plci->ncpi_buffer[0] = len; - if (plci->B3_prot == 4) - sendf(plci->appl, _CONNECT_B3_I, Id, 0, "s", ""); - else - { - - if ((plci->requested_options_conn | plci->requested_options | a->requested_options_table[plci->appl->Id - 1]) - & ((1L << PRIVATE_FAX_SUB_SEP_PWD) | (1L << PRIVATE_FAX_NONSTANDARD))) - { - plci->ncpi_buffer[++len] = 0; - plci->ncpi_buffer[++len] = 0; - plci->ncpi_buffer[++len] = 0; - plci->ncpi_buffer[0] = len; - } - - sendf(plci->appl, _CONNECT_B3_I, Id, 0, "S", plci->ncpi_buffer); - } - sendf(plci->appl, _DISCONNECT_B3_I, Id, 0, "wS", info, plci->ncpi_buffer); - plci->ncpi_state = 0; - sig_req(plci, HANGUP, 0); - send_req(plci); - plci->State = OUTG_DIS_PENDING; - /* disc here */ - } - else if ((a->manufacturer_features & MANUFACTURER_FEATURE_FAX_PAPER_FORMATS) - && ((plci->B3_prot == 4) || (plci->B3_prot == 5)) - && ((ncci_state == INC_DIS_PENDING) || (ncci_state == IDLE))) - { - if (ncci_state == IDLE) - { - if (plci->channels) - plci->channels--; - if ((plci->State == IDLE || plci->State == SUSPENDING) && !plci->channels) { - if (plci->State == SUSPENDING) { - sendf(plci->appl, - _FACILITY_I, - Id & 0xffffL, - 0, - "ws", (word)3, "\x03\x04\x00\x00"); - sendf(plci->appl, _DISCONNECT_I, Id & 0xffffL, 0, "w", 0); - } - plci_remove(plci); - plci->State = IDLE; - } - } - } - else if (plci->channels) - { - sendf(plci->appl, _DISCONNECT_B3_I, Id, 0, "wS", info, plci->ncpi_buffer); - plci->ncpi_state = 0; - if ((ncci_state == OUTG_REJ_PENDING) - && ((plci->B3_prot != B3_T90NL) && (plci->B3_prot != B3_ISO8208) && (plci->B3_prot != B3_X25_DCE))) - { - sig_req(plci, HANGUP, 0); - send_req(plci); - plci->State = OUTG_DIS_PENDING; - } - } - break; - case N_RESET: - a->ncci_state[ncci] = INC_RES_PENDING; - sendf(plci->appl, _RESET_B3_I, Id, 0, "S", plci->ncpi_buffer); - break; - case N_RESET_ACK: - a->ncci_state[ncci] = CONNECTED; - sendf(plci->appl, _RESET_B3_I, Id, 0, "S", plci->ncpi_buffer); - break; - - case N_UDATA: - if (!(udata_forwarding_table[plci->NL.RBuffer->P[0] >> 5] & (1L << (plci->NL.RBuffer->P[0] & 0x1f)))) - { - plci->RData[0].P = plci->internal_ind_buffer + (-((int)(long)(plci->internal_ind_buffer)) & 3); - plci->RData[0].PLength = INTERNAL_IND_BUFFER_SIZE; - plci->NL.R = plci->RData; - plci->NL.RNum = 1; - return; - } - /* fall through */ - case N_BDATA: - case N_DATA: - if (((a->ncci_state[ncci] != CONNECTED) && (plci->B2_prot == 1)) /* transparent */ - || (a->ncci_state[ncci] == IDLE) - || (a->ncci_state[ncci] == INC_DIS_PENDING)) - { - plci->NL.RNR = 2; - break; - } - if ((a->ncci_state[ncci] != CONNECTED) - && (a->ncci_state[ncci] != OUTG_DIS_PENDING) - && (a->ncci_state[ncci] != OUTG_REJ_PENDING)) - { - dbug(1, dprintf("flow control")); - plci->NL.RNR = 1; /* flow control */ - channel_x_off(plci, ch, 0); - break; - } - - NCCIcode = ncci | (((word)a->Id) << 8); - - /* count all buffers within the Application pool */ - /* belonging to the same NCCI. 
If this is below the */ - /* number of buffers available per NCCI we accept */ - /* this packet, otherwise we reject it */ - count = 0; - Num = 0xffff; - for (i = 0; i < APPLptr->MaxBuffer; i++) { - if (NCCIcode == APPLptr->DataNCCI[i]) count++; - if (!APPLptr->DataNCCI[i] && Num == 0xffff) Num = i; - } - - if (count >= APPLptr->MaxNCCIData || Num == 0xffff) - { - dbug(3, dprintf("Flow-Control")); - plci->NL.RNR = 1; - if (++(APPLptr->NCCIDataFlowCtrlTimer) >= - (word)((a->manufacturer_features & MANUFACTURER_FEATURE_OOB_CHANNEL) ? 40 : 2000)) - { - plci->NL.RNR = 2; - dbug(3, dprintf("DiscardData")); - } else { - channel_x_off(plci, ch, 0); - } - break; - } - else - { - APPLptr->NCCIDataFlowCtrlTimer = 0; - } - - plci->RData[0].P = ReceiveBufferGet(APPLptr, Num); - if (!plci->RData[0].P) { - plci->NL.RNR = 1; - channel_x_off(plci, ch, 0); - break; - } - - APPLptr->DataNCCI[Num] = NCCIcode; - APPLptr->DataFlags[Num] = (plci->Id << 8) | (plci->NL.Ind >> 4); - dbug(3, dprintf("Buffer(%d), Max = %d", Num, APPLptr->MaxBuffer)); - - plci->RNum = Num; - plci->RFlags = plci->NL.Ind >> 4; - plci->RData[0].PLength = APPLptr->MaxDataLength; - plci->NL.R = plci->RData; - if ((plci->NL.RLength != 0) - && ((plci->B2_prot == B2_V120_ASYNC) - || (plci->B2_prot == B2_V120_ASYNC_V42BIS) - || (plci->B2_prot == B2_V120_BIT_TRANSPARENT))) - { - plci->RData[1].P = plci->RData[0].P; - plci->RData[1].PLength = plci->RData[0].PLength; - plci->RData[0].P = v120_header_buffer + (-((unsigned long)v120_header_buffer) & 3); - if ((plci->NL.RBuffer->P[0] & V120_HEADER_EXTEND_BIT) || (plci->NL.RLength == 1)) - plci->RData[0].PLength = 1; - else - plci->RData[0].PLength = 2; - if (plci->NL.RBuffer->P[0] & V120_HEADER_BREAK_BIT) - plci->RFlags |= 0x0010; - if (plci->NL.RBuffer->P[0] & (V120_HEADER_C1_BIT | V120_HEADER_C2_BIT)) - plci->RFlags |= 0x8000; - plci->NL.RNum = 2; - } - else - { - if ((plci->NL.Ind & 0x0f) == N_UDATA) - plci->RFlags |= 0x0010; - - else if ((plci->B3_prot == B3_RTP) && ((plci->NL.Ind & 0x0f) == N_BDATA)) - plci->RFlags |= 0x0001; - - plci->NL.RNum = 1; - } - break; - case N_DATA_ACK: - data_ack(plci, ch); - break; - default: - plci->NL.RNR = 2; - break; - } -} - -/*------------------------------------------------------------------*/ -/* find a free PLCI */ -/*------------------------------------------------------------------*/ - -static word get_plci(DIVA_CAPI_ADAPTER *a) -{ - word i, j; - PLCI *plci; - - for (i = 0; i < a->max_plci && a->plci[i].Id; i++); - if (i == a->max_plci) { - dbug(1, dprintf("get_plci: out of PLCIs")); - return 0; - } - plci = &a->plci[i]; - plci->Id = (byte)(i + 1); - - plci->Sig.Id = 0; - plci->NL.Id = 0; - plci->sig_req = 0; - plci->nl_req = 0; - - plci->appl = NULL; - plci->relatedPTYPLCI = NULL; - plci->State = IDLE; - plci->SuppState = IDLE; - plci->channels = 0; - plci->tel = 0; - plci->B1_resource = 0; - plci->B2_prot = 0; - plci->B3_prot = 0; - - plci->command = 0; - plci->m_command = 0; - init_internal_command_queue(plci); - plci->number = 0; - plci->req_in_start = 0; - plci->req_in = 0; - plci->req_out = 0; - plci->msg_in_write_pos = MSG_IN_QUEUE_SIZE; - plci->msg_in_read_pos = MSG_IN_QUEUE_SIZE; - plci->msg_in_wrap_pos = MSG_IN_QUEUE_SIZE; - - plci->data_sent = false; - plci->send_disc = 0; - plci->sig_global_req = 0; - plci->sig_remove_id = 0; - plci->nl_global_req = 0; - plci->nl_remove_id = 0; - plci->adv_nl = 0; - plci->manufacturer = false; - plci->call_dir = CALL_DIR_OUT | CALL_DIR_ORIGINATE; - plci->spoofed_msg = 0; - plci->ptyState = 0; - 
plci->cr_enquiry = false; - plci->hangup_flow_ctrl_timer = 0; - - plci->ncci_ring_list = 0; - for (j = 0; j < MAX_CHANNELS_PER_PLCI; j++) plci->inc_dis_ncci_table[j] = 0; - bitmap_zero(plci->c_ind_mask_table, MAX_APPL); - bitmap_fill(plci->group_optimization_mask_table, MAX_APPL); - plci->fax_connect_info_length = 0; - plci->nsf_control_bits = 0; - plci->ncpi_state = 0x00; - plci->ncpi_buffer[0] = 0; - - plci->requested_options_conn = 0; - plci->requested_options = 0; - plci->notifiedcall = 0; - plci->vswitchstate = 0; - plci->vsprot = 0; - plci->vsprotdialect = 0; - init_b1_config(plci); - dbug(1, dprintf("get_plci(%x)", plci->Id)); - return i + 1; -} - -/*------------------------------------------------------------------*/ -/* put a parameter in the parameter buffer */ -/*------------------------------------------------------------------*/ - -static void add_p(PLCI *plci, byte code, byte *p) -{ - word p_length; - - p_length = 0; - if (p) p_length = p[0]; - add_ie(plci, code, p, p_length); -} - -/*------------------------------------------------------------------*/ -/* put a structure in the parameter buffer */ -/*------------------------------------------------------------------*/ -static void add_s(PLCI *plci, byte code, API_PARSE *p) -{ - if (p) add_ie(plci, code, p->info, (word)p->length); -} - -/*------------------------------------------------------------------*/ -/* put multiple structures in the parameter buffer */ -/*------------------------------------------------------------------*/ -static void add_ss(PLCI *plci, byte code, API_PARSE *p) -{ - byte i; - - if (p) { - dbug(1, dprintf("add_ss(%x,len=%d)", code, p->length)); - for (i = 2; i < (byte)p->length; i += p->info[i] + 2) { - dbug(1, dprintf("add_ss_ie(%x,len=%d)", p->info[i - 1], p->info[i])); - add_ie(plci, p->info[i - 1], (byte *)&(p->info[i]), (word)p->info[i]); - } - } -} - -/*------------------------------------------------------------------*/ -/* return the channel number sent by the application in a esc_chi */ -/*------------------------------------------------------------------*/ -static byte getChannel(API_PARSE *p) -{ - byte i; - - if (p) { - for (i = 2; i < (byte)p->length; i += p->info[i] + 2) { - if (p->info[i] == 2) { - if (p->info[i - 1] == ESC && p->info[i + 1] == CHI) return (p->info[i + 2]); - } - } - } - return 0; -} - - -/*------------------------------------------------------------------*/ -/* put an information element in the parameter buffer */ -/*------------------------------------------------------------------*/ - -static void add_ie(PLCI *plci, byte code, byte *p, word p_length) -{ - word i; - - if (!(code & 0x80) && !p_length) return; - - if (plci->req_in == plci->req_in_start) { - plci->req_in += 2; - } - else { - plci->req_in--; - } - plci->RBuffer[plci->req_in++] = code; - - if (p) { - plci->RBuffer[plci->req_in++] = (byte)p_length; - for (i = 0; i < p_length; i++) plci->RBuffer[plci->req_in++] = p[1 + i]; - } - - plci->RBuffer[plci->req_in++] = 0; -} - -/*------------------------------------------------------------------*/ -/* put a unstructured data into the buffer */ -/*------------------------------------------------------------------*/ - -static void add_d(PLCI *plci, word length, byte *p) -{ - word i; - - if (plci->req_in == plci->req_in_start) { - plci->req_in += 2; - } - else { - plci->req_in--; - } - for (i = 0; i < length; i++) plci->RBuffer[plci->req_in++] = p[i]; -} - -/*------------------------------------------------------------------*/ -/* put parameters from the Additional 
Info parameter in the */ -/* parameter buffer */ -/*------------------------------------------------------------------*/ - -static void add_ai(PLCI *plci, API_PARSE *ai) -{ - word i; - API_PARSE ai_parms[5]; - - for (i = 0; i < 5; i++) ai_parms[i].length = 0; - - if (!ai->length) - return; - if (api_parse(&ai->info[1], (word)ai->length, "ssss", ai_parms)) - return; - - add_s(plci, KEY, &ai_parms[1]); - add_s(plci, UUI, &ai_parms[2]); - add_ss(plci, FTY, &ai_parms[3]); -} - -/*------------------------------------------------------------------*/ -/* put parameter for b1 protocol in the parameter buffer */ -/*------------------------------------------------------------------*/ - -static word add_b1(PLCI *plci, API_PARSE *bp, word b_channel_info, - word b1_facilities) -{ - API_PARSE bp_parms[8]; - API_PARSE mdm_cfg[9]; - API_PARSE global_config[2]; - byte cai[256]; - byte resource[] = {5, 9, 13, 12, 16, 39, 9, 17, 17, 18}; - byte voice_cai[] = "\x06\x14\x00\x00\x00\x00\x08"; - word i; - - API_PARSE mdm_cfg_v18[4]; - word j, n, w; - dword d; - - - for (i = 0; i < 8; i++) bp_parms[i].length = 0; - for (i = 0; i < 2; i++) global_config[i].length = 0; - - dbug(1, dprintf("add_b1")); - api_save_msg(bp, "s", &plci->B_protocol); - - if (b_channel_info == 2) { - plci->B1_resource = 0; - adjust_b1_facilities(plci, plci->B1_resource, b1_facilities); - add_p(plci, CAI, "\x01\x00"); - dbug(1, dprintf("Cai=1,0 (no resource)")); - return 0; - } - - if (plci->tel == CODEC_PERMANENT) return 0; - else if (plci->tel == CODEC) { - plci->B1_resource = 1; - adjust_b1_facilities(plci, plci->B1_resource, b1_facilities); - add_p(plci, CAI, "\x01\x01"); - dbug(1, dprintf("Cai=1,1 (Codec)")); - return 0; - } - else if (plci->tel == ADV_VOICE) { - plci->B1_resource = add_b1_facilities(plci, 9, (word)(b1_facilities | B1_FACILITY_VOICE)); - adjust_b1_facilities(plci, plci->B1_resource, (word)(b1_facilities | B1_FACILITY_VOICE)); - voice_cai[1] = plci->B1_resource; - PUT_WORD(&voice_cai[5], plci->appl->MaxDataLength); - add_p(plci, CAI, voice_cai); - dbug(1, dprintf("Cai=1,0x%x (AdvVoice)", voice_cai[1])); - return 0; - } - plci->call_dir &= ~(CALL_DIR_ORIGINATE | CALL_DIR_ANSWER); - if (plci->call_dir & CALL_DIR_OUT) - plci->call_dir |= CALL_DIR_ORIGINATE; - else if (plci->call_dir & CALL_DIR_IN) - plci->call_dir |= CALL_DIR_ANSWER; - - if (!bp->length) { - plci->B1_resource = 0x5; - adjust_b1_facilities(plci, plci->B1_resource, b1_facilities); - add_p(plci, CAI, "\x01\x05"); - return 0; - } - - dbug(1, dprintf("b_prot_len=%d", (word)bp->length)); - if (bp->length > 256) return _WRONG_MESSAGE_FORMAT; - if (api_parse(&bp->info[1], (word)bp->length, "wwwsssb", bp_parms)) - { - bp_parms[6].length = 0; - if (api_parse(&bp->info[1], (word)bp->length, "wwwsss", bp_parms)) - { - dbug(1, dprintf("b-form.!")); - return _WRONG_MESSAGE_FORMAT; - } - } - else if (api_parse(&bp->info[1], (word)bp->length, "wwwssss", bp_parms)) - { - dbug(1, dprintf("b-form.!")); - return _WRONG_MESSAGE_FORMAT; - } - - if (bp_parms[6].length) - { - if (api_parse(&bp_parms[6].info[1], (word)bp_parms[6].length, "w", global_config)) - { - return _WRONG_MESSAGE_FORMAT; - } - switch (GET_WORD(global_config[0].info)) - { - case 1: - plci->call_dir = (plci->call_dir & ~CALL_DIR_ANSWER) | CALL_DIR_ORIGINATE; - break; - case 2: - plci->call_dir = (plci->call_dir & ~CALL_DIR_ORIGINATE) | CALL_DIR_ANSWER; - break; - } - } - dbug(1, dprintf("call_dir=%04x", plci->call_dir)); - - - if ((GET_WORD(bp_parms[0].info) == B1_RTP) - && 
(plci->adapter->man_profile.private_options & (1L << PRIVATE_RTP))) - { - plci->B1_resource = add_b1_facilities(plci, 31, (word)(b1_facilities & ~B1_FACILITY_VOICE)); - adjust_b1_facilities(plci, plci->B1_resource, (word)(b1_facilities & ~B1_FACILITY_VOICE)); - cai[1] = plci->B1_resource; - cai[2] = 0; - cai[3] = 0; - cai[4] = 0; - PUT_WORD(&cai[5], plci->appl->MaxDataLength); - for (i = 0; i < bp_parms[3].length; i++) - cai[7 + i] = bp_parms[3].info[1 + i]; - cai[0] = 6 + bp_parms[3].length; - add_p(plci, CAI, cai); - return 0; - } - - - if ((GET_WORD(bp_parms[0].info) == B1_PIAFS) - && (plci->adapter->man_profile.private_options & (1L << PRIVATE_PIAFS))) - { - plci->B1_resource = add_b1_facilities(plci, 35/* PIAFS HARDWARE FACILITY */, (word)(b1_facilities & ~B1_FACILITY_VOICE)); - adjust_b1_facilities(plci, plci->B1_resource, (word)(b1_facilities & ~B1_FACILITY_VOICE)); - cai[1] = plci->B1_resource; - cai[2] = 0; - cai[3] = 0; - cai[4] = 0; - PUT_WORD(&cai[5], plci->appl->MaxDataLength); - cai[0] = 6; - add_p(plci, CAI, cai); - return 0; - } - - - if ((GET_WORD(bp_parms[0].info) >= 32) - || (!((1L << GET_WORD(bp_parms[0].info)) & plci->adapter->profile.B1_Protocols) - && ((GET_WORD(bp_parms[0].info) != 3) - || !((1L << B1_HDLC) & plci->adapter->profile.B1_Protocols) - || ((bp_parms[3].length != 0) && (GET_WORD(&bp_parms[3].info[1]) != 0) && (GET_WORD(&bp_parms[3].info[1]) != 56000))))) - { - return _B1_NOT_SUPPORTED; - } - plci->B1_resource = add_b1_facilities(plci, resource[GET_WORD(bp_parms[0].info)], - (word)(b1_facilities & ~B1_FACILITY_VOICE)); - adjust_b1_facilities(plci, plci->B1_resource, (word)(b1_facilities & ~B1_FACILITY_VOICE)); - cai[0] = 6; - cai[1] = plci->B1_resource; - for (i = 2; i < sizeof(cai); i++) cai[i] = 0; - - if ((GET_WORD(bp_parms[0].info) == B1_MODEM_ALL_NEGOTIATE) - || (GET_WORD(bp_parms[0].info) == B1_MODEM_ASYNC) - || (GET_WORD(bp_parms[0].info) == B1_MODEM_SYNC_HDLC)) - { /* B1 - modem */ - for (i = 0; i < 7; i++) mdm_cfg[i].length = 0; - - if (bp_parms[3].length) - { - if (api_parse(&bp_parms[3].info[1], (word)bp_parms[3].length, "wwwwww", mdm_cfg)) - { - return (_WRONG_MESSAGE_FORMAT); - } - - cai[2] = 0; /* Bit rate for adaptation */ - - dbug(1, dprintf("MDM Max Bit Rate:<%d>", GET_WORD(mdm_cfg[0].info))); - - PUT_WORD(&cai[13], 0); /* Min Tx speed */ - PUT_WORD(&cai[15], GET_WORD(mdm_cfg[0].info)); /* Max Tx speed */ - PUT_WORD(&cai[17], 0); /* Min Rx speed */ - PUT_WORD(&cai[19], GET_WORD(mdm_cfg[0].info)); /* Max Rx speed */ - - cai[3] = 0; /* Async framing parameters */ - switch (GET_WORD(mdm_cfg[2].info)) - { /* Parity */ - case 1: /* odd parity */ - cai[3] |= (DSP_CAI_ASYNC_PARITY_ENABLE | DSP_CAI_ASYNC_PARITY_ODD); - dbug(1, dprintf("MDM: odd parity")); - break; - - case 2: /* even parity */ - cai[3] |= (DSP_CAI_ASYNC_PARITY_ENABLE | DSP_CAI_ASYNC_PARITY_EVEN); - dbug(1, dprintf("MDM: even parity")); - break; - - default: - dbug(1, dprintf("MDM: no parity")); - break; - } - - switch (GET_WORD(mdm_cfg[3].info)) - { /* stop bits */ - case 1: /* 2 stop bits */ - cai[3] |= DSP_CAI_ASYNC_TWO_STOP_BITS; - dbug(1, dprintf("MDM: 2 stop bits")); - break; - - default: - dbug(1, dprintf("MDM: 1 stop bit")); - break; - } - - switch (GET_WORD(mdm_cfg[1].info)) - { /* char length */ - case 5: - cai[3] |= DSP_CAI_ASYNC_CHAR_LENGTH_5; - dbug(1, dprintf("MDM: 5 bits")); - break; - - case 6: - cai[3] |= DSP_CAI_ASYNC_CHAR_LENGTH_6; - dbug(1, dprintf("MDM: 6 bits")); - break; - - case 7: - cai[3] |= DSP_CAI_ASYNC_CHAR_LENGTH_7; - dbug(1, dprintf("MDM: 7 bits")); - 
break; - - default: - dbug(1, dprintf("MDM: 8 bits")); - break; - } - - cai[7] = 0; /* Line taking options */ - cai[8] = 0; /* Modulation negotiation options */ - cai[9] = 0; /* Modulation options */ - - if (((plci->call_dir & CALL_DIR_ORIGINATE) != 0) ^ ((plci->call_dir & CALL_DIR_OUT) != 0)) - { - cai[9] |= DSP_CAI_MODEM_REVERSE_DIRECTION; - dbug(1, dprintf("MDM: Reverse direction")); - } - - if (GET_WORD(mdm_cfg[4].info) & MDM_CAPI_DISABLE_RETRAIN) - { - cai[9] |= DSP_CAI_MODEM_DISABLE_RETRAIN; - dbug(1, dprintf("MDM: Disable retrain")); - } - - if (GET_WORD(mdm_cfg[4].info) & MDM_CAPI_DISABLE_RING_TONE) - { - cai[7] |= DSP_CAI_MODEM_DISABLE_CALLING_TONE | DSP_CAI_MODEM_DISABLE_ANSWER_TONE; - dbug(1, dprintf("MDM: Disable ring tone")); - } - - if (GET_WORD(mdm_cfg[4].info) & MDM_CAPI_GUARD_1800) - { - cai[8] |= DSP_CAI_MODEM_GUARD_TONE_1800HZ; - dbug(1, dprintf("MDM: 1800 guard tone")); - } - else if (GET_WORD(mdm_cfg[4].info) & MDM_CAPI_GUARD_550) - { - cai[8] |= DSP_CAI_MODEM_GUARD_TONE_550HZ; - dbug(1, dprintf("MDM: 550 guard tone")); - } - - if ((GET_WORD(mdm_cfg[5].info) & 0x00ff) == MDM_CAPI_NEG_V100) - { - cai[8] |= DSP_CAI_MODEM_NEGOTIATE_V100; - dbug(1, dprintf("MDM: V100")); - } - else if ((GET_WORD(mdm_cfg[5].info) & 0x00ff) == MDM_CAPI_NEG_MOD_CLASS) - { - cai[8] |= DSP_CAI_MODEM_NEGOTIATE_IN_CLASS; - dbug(1, dprintf("MDM: IN CLASS")); - } - else if ((GET_WORD(mdm_cfg[5].info) & 0x00ff) == MDM_CAPI_NEG_DISABLED) - { - cai[8] |= DSP_CAI_MODEM_NEGOTIATE_DISABLED; - dbug(1, dprintf("MDM: DISABLED")); - } - cai[0] = 20; - - if ((plci->adapter->man_profile.private_options & (1L << PRIVATE_V18)) - && (GET_WORD(mdm_cfg[5].info) & 0x8000)) /* Private V.18 enable */ - { - plci->requested_options |= 1L << PRIVATE_V18; - } - if (GET_WORD(mdm_cfg[5].info) & 0x4000) /* Private VOWN enable */ - plci->requested_options |= 1L << PRIVATE_VOWN; - - if ((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[plci->appl->Id - 1]) - & ((1L << PRIVATE_V18) | (1L << PRIVATE_VOWN))) - { - if (!api_parse(&bp_parms[3].info[1], (word)bp_parms[3].length, "wwwwwws", mdm_cfg)) - { - i = 27; - if (mdm_cfg[6].length >= 4) - { - d = GET_DWORD(&mdm_cfg[6].info[1]); - cai[7] |= (byte) d; /* line taking options */ - cai[9] |= (byte)(d >> 8); /* modulation options */ - cai[++i] = (byte)(d >> 16); /* vown modulation options */ - cai[++i] = (byte)(d >> 24); - if (mdm_cfg[6].length >= 8) - { - d = GET_DWORD(&mdm_cfg[6].info[5]); - cai[10] |= (byte) d; /* disabled modulations mask */ - cai[11] |= (byte)(d >> 8); - if (mdm_cfg[6].length >= 12) - { - d = GET_DWORD(&mdm_cfg[6].info[9]); - cai[12] = (byte) d; /* enabled modulations mask */ - cai[++i] = (byte)(d >> 8); /* vown enabled modulations */ - cai[++i] = (byte)(d >> 16); - cai[++i] = (byte)(d >> 24); - cai[++i] = 0; - if (mdm_cfg[6].length >= 14) - { - w = GET_WORD(&mdm_cfg[6].info[13]); - if (w != 0) - PUT_WORD(&cai[13], w); /* min tx speed */ - if (mdm_cfg[6].length >= 16) - { - w = GET_WORD(&mdm_cfg[6].info[15]); - if (w != 0) - PUT_WORD(&cai[15], w); /* max tx speed */ - if (mdm_cfg[6].length >= 18) - { - w = GET_WORD(&mdm_cfg[6].info[17]); - if (w != 0) - PUT_WORD(&cai[17], w); /* min rx speed */ - if (mdm_cfg[6].length >= 20) - { - w = GET_WORD(&mdm_cfg[6].info[19]); - if (w != 0) - PUT_WORD(&cai[19], w); /* max rx speed */ - if (mdm_cfg[6].length >= 22) - { - w = GET_WORD(&mdm_cfg[6].info[21]); - cai[23] = (byte)(-((short) w)); /* transmit level */ - if (mdm_cfg[6].length >= 24) - { - w = 
GET_WORD(&mdm_cfg[6].info[23]); - cai[22] |= (byte) w; /* info options mask */ - cai[21] |= (byte)(w >> 8); /* disabled symbol rates */ - } - } - } - } - } - } - } - } - } - cai[27] = i - 27; - i++; - if (!api_parse(&bp_parms[3].info[1], (word)bp_parms[3].length, "wwwwwwss", mdm_cfg)) - { - if (!api_parse(&mdm_cfg[7].info[1], (word)mdm_cfg[7].length, "sss", mdm_cfg_v18)) - { - for (n = 0; n < 3; n++) - { - cai[i] = (byte)(mdm_cfg_v18[n].length); - for (j = 1; j < ((word)(cai[i] + 1)); j++) - cai[i + j] = mdm_cfg_v18[n].info[j]; - i += cai[i] + 1; - } - } - } - cai[0] = (byte)(i - 1); - } - } - - } - } - if (GET_WORD(bp_parms[0].info) == 2 || /* V.110 async */ - GET_WORD(bp_parms[0].info) == 3) /* V.110 sync */ - { - if (bp_parms[3].length) { - dbug(1, dprintf("V.110,%d", GET_WORD(&bp_parms[3].info[1]))); - switch (GET_WORD(&bp_parms[3].info[1])) { /* Rate */ - case 0: - case 56000: - if (GET_WORD(bp_parms[0].info) == 3) { /* V.110 sync 56k */ - dbug(1, dprintf("56k sync HSCX")); - cai[1] = 8; - cai[2] = 0; - cai[3] = 0; - } - else if (GET_WORD(bp_parms[0].info) == 2) { - dbug(1, dprintf("56k async DSP")); - cai[2] = 9; - } - break; - case 50: cai[2] = 1; break; - case 75: cai[2] = 1; break; - case 110: cai[2] = 1; break; - case 150: cai[2] = 1; break; - case 200: cai[2] = 1; break; - case 300: cai[2] = 1; break; - case 600: cai[2] = 1; break; - case 1200: cai[2] = 2; break; - case 2400: cai[2] = 3; break; - case 4800: cai[2] = 4; break; - case 7200: cai[2] = 10; break; - case 9600: cai[2] = 5; break; - case 12000: cai[2] = 13; break; - case 24000: cai[2] = 0; break; - case 14400: cai[2] = 11; break; - case 19200: cai[2] = 6; break; - case 28800: cai[2] = 12; break; - case 38400: cai[2] = 7; break; - case 48000: cai[2] = 8; break; - case 76: cai[2] = 15; break; /* 75/1200 */ - case 1201: cai[2] = 14; break; /* 1200/75 */ - case 56001: cai[2] = 9; break; /* V.110 56000 */ - - default: - return _B1_PARM_NOT_SUPPORTED; - } - cai[3] = 0; - if (cai[1] == 13) /* v.110 async */ - { - if (bp_parms[3].length >= 8) - { - switch (GET_WORD(&bp_parms[3].info[3])) - { /* char length */ - case 5: - cai[3] |= DSP_CAI_ASYNC_CHAR_LENGTH_5; - break; - case 6: - cai[3] |= DSP_CAI_ASYNC_CHAR_LENGTH_6; - break; - case 7: - cai[3] |= DSP_CAI_ASYNC_CHAR_LENGTH_7; - break; - } - switch (GET_WORD(&bp_parms[3].info[5])) - { /* Parity */ - case 1: /* odd parity */ - cai[3] |= (DSP_CAI_ASYNC_PARITY_ENABLE | DSP_CAI_ASYNC_PARITY_ODD); - break; - case 2: /* even parity */ - cai[3] |= (DSP_CAI_ASYNC_PARITY_ENABLE | DSP_CAI_ASYNC_PARITY_EVEN); - break; - } - switch (GET_WORD(&bp_parms[3].info[7])) - { /* stop bits */ - case 1: /* 2 stop bits */ - cai[3] |= DSP_CAI_ASYNC_TWO_STOP_BITS; - break; - } - } - } - } - else if (cai[1] == 8 || GET_WORD(bp_parms[0].info) == 3) { - dbug(1, dprintf("V.110 default 56k sync")); - cai[1] = 8; - cai[2] = 0; - cai[3] = 0; - } - else { - dbug(1, dprintf("V.110 default 9600 async")); - cai[2] = 5; - } - } - PUT_WORD(&cai[5], plci->appl->MaxDataLength); - dbug(1, dprintf("CAI[%d]=%x,%x,%x,%x,%x,%x", cai[0], cai[1], cai[2], cai[3], cai[4], cai[5], cai[6])); -/* HexDump ("CAI", sizeof(cai), &cai[0]); */ - - add_p(plci, CAI, cai); - return 0; -} - -/*------------------------------------------------------------------*/ -/* put parameter for b2 and B3 protocol in the parameter buffer */ -/*------------------------------------------------------------------*/ - -static word add_b23(PLCI *plci, API_PARSE *bp) -{ - word i, fax_control_bits; - byte pos, len; - byte SAPI = 0x40; /* default SAPI 16 for 
x.31 */ - API_PARSE bp_parms[8]; - API_PARSE *b1_config; - API_PARSE *b2_config; - API_PARSE b2_config_parms[8]; - API_PARSE *b3_config; - API_PARSE b3_config_parms[6]; - API_PARSE global_config[2]; - - static byte llc[3] = {2,0,0}; - static byte dlc[20] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}; - static byte nlc[256]; - static byte lli[12] = {1,1}; - - const byte llc2_out[] = {1,2,4,6,2,0,0,0, X75_V42BIS,V120_L2,V120_V42BIS,V120_L2,6}; - const byte llc2_in[] = {1,3,4,6,3,0,0,0, X75_V42BIS,V120_L2,V120_V42BIS,V120_L2,6}; - - const byte llc3[] = {4,3,2,2,6,6,0}; - const byte header[] = {0,2,3,3,0,0,0}; - - for (i = 0; i < 8; i++) bp_parms[i].length = 0; - for (i = 0; i < 6; i++) b2_config_parms[i].length = 0; - for (i = 0; i < 5; i++) b3_config_parms[i].length = 0; - - lli[0] = 1; - lli[1] = 1; - if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_XONOFF_FLOW_CONTROL) - lli[1] |= 2; - if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_OOB_CHANNEL) - lli[1] |= 4; - - if ((lli[1] & 0x02) && (diva_xdi_extended_features & DIVA_CAPI_USE_CMA)) { - lli[1] |= 0x10; - if (plci->rx_dma_descriptor <= 0) { - plci->rx_dma_descriptor = diva_get_dma_descriptor(plci, &plci->rx_dma_magic); - if (plci->rx_dma_descriptor >= 0) - plci->rx_dma_descriptor++; - } - if (plci->rx_dma_descriptor > 0) { - lli[0] = 6; - lli[1] |= 0x40; - lli[2] = (byte)(plci->rx_dma_descriptor - 1); - lli[3] = (byte)plci->rx_dma_magic; - lli[4] = (byte)(plci->rx_dma_magic >> 8); - lli[5] = (byte)(plci->rx_dma_magic >> 16); - lli[6] = (byte)(plci->rx_dma_magic >> 24); - } - } - - if (DIVA_CAPI_SUPPORTS_NO_CANCEL(plci->adapter)) { - lli[1] |= 0x20; - } - - dbug(1, dprintf("add_b23")); - api_save_msg(bp, "s", &plci->B_protocol); - - if (!bp->length && plci->tel) - { - plci->adv_nl = true; - dbug(1, dprintf("Default adv.Nl")); - add_p(plci, LLI, lli); - plci->B2_prot = 1 /*XPARENT*/; - plci->B3_prot = 0 /*XPARENT*/; - llc[1] = 2; - llc[2] = 4; - add_p(plci, LLC, llc); - dlc[0] = 2; - PUT_WORD(&dlc[1], plci->appl->MaxDataLength); - add_p(plci, DLC, dlc); - return 0; - } - - if (!bp->length) /*default*/ - { - dbug(1, dprintf("ret default")); - add_p(plci, LLI, lli); - plci->B2_prot = 0 /*X.75 */; - plci->B3_prot = 0 /*XPARENT*/; - llc[1] = 1; - llc[2] = 4; - add_p(plci, LLC, llc); - dlc[0] = 2; - PUT_WORD(&dlc[1], plci->appl->MaxDataLength); - add_p(plci, DLC, dlc); - return 0; - } - dbug(1, dprintf("b_prot_len=%d", (word)bp->length)); - if ((word)bp->length > 256) return _WRONG_MESSAGE_FORMAT; - - if (api_parse(&bp->info[1], (word)bp->length, "wwwsssb", bp_parms)) - { - bp_parms[6].length = 0; - if (api_parse(&bp->info[1], (word)bp->length, "wwwsss", bp_parms)) - { - dbug(1, dprintf("b-form.!")); - return _WRONG_MESSAGE_FORMAT; - } - } - else if (api_parse(&bp->info[1], (word)bp->length, "wwwssss", bp_parms)) - { - dbug(1, dprintf("b-form.!")); - return _WRONG_MESSAGE_FORMAT; - } - - if (plci->tel == ADV_VOICE) /* transparent B on advanced voice */ - { - if (GET_WORD(bp_parms[1].info) != 1 - || GET_WORD(bp_parms[2].info) != 0) return _B2_NOT_SUPPORTED; - plci->adv_nl = true; - } - else if (plci->tel) return _B2_NOT_SUPPORTED; - - - if ((GET_WORD(bp_parms[1].info) == B2_RTP) - && (GET_WORD(bp_parms[2].info) == B3_RTP) - && (plci->adapter->man_profile.private_options & (1L << PRIVATE_RTP))) - { - add_p(plci, LLI, lli); - plci->B2_prot = (byte) GET_WORD(bp_parms[1].info); - plci->B3_prot = (byte) GET_WORD(bp_parms[2].info); - llc[1] = (plci->call_dir & (CALL_DIR_ORIGINATE | CALL_DIR_FORCE_OUTG_NL)) ? 
14 : 13; - llc[2] = 4; - add_p(plci, LLC, llc); - dlc[0] = 2; - PUT_WORD(&dlc[1], plci->appl->MaxDataLength); - dlc[3] = 3; /* Addr A */ - dlc[4] = 1; /* Addr B */ - dlc[5] = 7; /* modulo mode */ - dlc[6] = 7; /* window size */ - dlc[7] = 0; /* XID len Lo */ - dlc[8] = 0; /* XID len Hi */ - for (i = 0; i < bp_parms[4].length; i++) - dlc[9 + i] = bp_parms[4].info[1 + i]; - dlc[0] = (byte)(8 + bp_parms[4].length); - add_p(plci, DLC, dlc); - for (i = 0; i < bp_parms[5].length; i++) - nlc[1 + i] = bp_parms[5].info[1 + i]; - nlc[0] = (byte)(bp_parms[5].length); - add_p(plci, NLC, nlc); - return 0; - } - - - - if ((GET_WORD(bp_parms[1].info) >= 32) - || (!((1L << GET_WORD(bp_parms[1].info)) & plci->adapter->profile.B2_Protocols) - && ((GET_WORD(bp_parms[1].info) != B2_PIAFS) - || !(plci->adapter->man_profile.private_options & (1L << PRIVATE_PIAFS))))) - - { - return _B2_NOT_SUPPORTED; - } - if ((GET_WORD(bp_parms[2].info) >= 32) - || !((1L << GET_WORD(bp_parms[2].info)) & plci->adapter->profile.B3_Protocols)) - { - return _B3_NOT_SUPPORTED; - } - if ((GET_WORD(bp_parms[1].info) != B2_SDLC) - && ((GET_WORD(bp_parms[0].info) == B1_MODEM_ALL_NEGOTIATE) - || (GET_WORD(bp_parms[0].info) == B1_MODEM_ASYNC) - || (GET_WORD(bp_parms[0].info) == B1_MODEM_SYNC_HDLC))) - { - return (add_modem_b23(plci, bp_parms)); - } - - add_p(plci, LLI, lli); - - plci->B2_prot = (byte)GET_WORD(bp_parms[1].info); - plci->B3_prot = (byte)GET_WORD(bp_parms[2].info); - if (plci->B2_prot == 12) SAPI = 0; /* default SAPI D-channel */ - - if (bp_parms[6].length) - { - if (api_parse(&bp_parms[6].info[1], (word)bp_parms[6].length, "w", global_config)) - { - return _WRONG_MESSAGE_FORMAT; - } - switch (GET_WORD(global_config[0].info)) - { - case 1: - plci->call_dir = (plci->call_dir & ~CALL_DIR_ANSWER) | CALL_DIR_ORIGINATE; - break; - case 2: - plci->call_dir = (plci->call_dir & ~CALL_DIR_ORIGINATE) | CALL_DIR_ANSWER; - break; - } - } - dbug(1, dprintf("call_dir=%04x", plci->call_dir)); - - - if (plci->B2_prot == B2_PIAFS) - llc[1] = PIAFS_CRC; - else -/* IMPLEMENT_PIAFS */ - { - llc[1] = (plci->call_dir & (CALL_DIR_ORIGINATE | CALL_DIR_FORCE_OUTG_NL)) ? 
- llc2_out[GET_WORD(bp_parms[1].info)] : llc2_in[GET_WORD(bp_parms[1].info)]; - } - llc[2] = llc3[GET_WORD(bp_parms[2].info)]; - - add_p(plci, LLC, llc); - - dlc[0] = 2; - PUT_WORD(&dlc[1], plci->appl->MaxDataLength + - header[GET_WORD(bp_parms[2].info)]); - - b1_config = &bp_parms[3]; - nlc[0] = 0; - if (plci->B3_prot == 4 - || plci->B3_prot == 5) - { - for (i = 0; i < sizeof(T30_INFO); i++) nlc[i] = 0; - nlc[0] = sizeof(T30_INFO); - if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_FAX_PAPER_FORMATS) - ((T30_INFO *)&nlc[1])->operating_mode = T30_OPERATING_MODE_CAPI; - ((T30_INFO *)&nlc[1])->rate_div_2400 = 0xff; - if (b1_config->length >= 2) - { - ((T30_INFO *)&nlc[1])->rate_div_2400 = (byte)(GET_WORD(&b1_config->info[1]) / 2400); - } - } - b2_config = &bp_parms[4]; - - - if (llc[1] == PIAFS_CRC) - { - if (plci->B3_prot != B3_TRANSPARENT) - { - return _B_STACK_NOT_SUPPORTED; - } - if (b2_config->length && api_parse(&b2_config->info[1], (word)b2_config->length, "bwww", b2_config_parms)) { - return _WRONG_MESSAGE_FORMAT; - } - PUT_WORD(&dlc[1], plci->appl->MaxDataLength); - dlc[3] = 0; /* Addr A */ - dlc[4] = 0; /* Addr B */ - dlc[5] = 0; /* modulo mode */ - dlc[6] = 0; /* window size */ - if (b2_config->length >= 7) { - dlc[7] = 7; - dlc[8] = 0; - dlc[9] = b2_config_parms[0].info[0]; /* PIAFS protocol Speed configuration */ - dlc[10] = b2_config_parms[1].info[0]; /* V.42bis P0 */ - dlc[11] = b2_config_parms[1].info[1]; /* V.42bis P0 */ - dlc[12] = b2_config_parms[2].info[0]; /* V.42bis P1 */ - dlc[13] = b2_config_parms[2].info[1]; /* V.42bis P1 */ - dlc[14] = b2_config_parms[3].info[0]; /* V.42bis P2 */ - dlc[15] = b2_config_parms[3].info[1]; /* V.42bis P2 */ - dlc[0] = 15; - if (b2_config->length >= 8) { /* PIAFS control abilities */ - dlc[7] = 10; - dlc[16] = 2; /* Length of PIAFS extension */ - dlc[17] = PIAFS_UDATA_ABILITIES; /* control (UDATA) ability */ - dlc[18] = b2_config_parms[4].info[0]; /* value */ - dlc[0] = 18; - } - } - else /* default values, 64K, variable, no compression */ - { - dlc[7] = 7; - dlc[8] = 0; - dlc[9] = 0x03; /* PIAFS protocol Speed configuration */ - dlc[10] = 0x03; /* V.42bis P0 */ - dlc[11] = 0; /* V.42bis P0 */ - dlc[12] = 0; /* V.42bis P1 */ - dlc[13] = 0; /* V.42bis P1 */ - dlc[14] = 0; /* V.42bis P2 */ - dlc[15] = 0; /* V.42bis P2 */ - dlc[0] = 15; - } - add_p(plci, DLC, dlc); - } - else - - if ((llc[1] == V120_L2) || (llc[1] == V120_V42BIS)) - { - if (plci->B3_prot != B3_TRANSPARENT) - return _B_STACK_NOT_SUPPORTED; - - dlc[0] = 6; - PUT_WORD(&dlc[1], GET_WORD(&dlc[1]) + 2); - dlc[3] = 0x08; - dlc[4] = 0x01; - dlc[5] = 127; - dlc[6] = 7; - if (b2_config->length != 0) - { - if ((llc[1] == V120_V42BIS) && api_parse(&b2_config->info[1], (word)b2_config->length, "bbbbwww", b2_config_parms)) { - return _WRONG_MESSAGE_FORMAT; - } - dlc[3] = (byte)((b2_config->info[2] << 3) | ((b2_config->info[1] >> 5) & 0x04)); - dlc[4] = (byte)((b2_config->info[1] << 1) | 0x01); - if (b2_config->info[3] != 128) - { - dbug(1, dprintf("1D-dlc= %x %x %x %x %x", dlc[0], dlc[1], dlc[2], dlc[3], dlc[4])); - return _B2_PARM_NOT_SUPPORTED; - } - dlc[5] = (byte)(b2_config->info[3] - 1); - dlc[6] = b2_config->info[4]; - if (llc[1] == V120_V42BIS) { - if (b2_config->length >= 10) { - dlc[7] = 6; - dlc[8] = 0; - dlc[9] = b2_config_parms[4].info[0]; - dlc[10] = b2_config_parms[4].info[1]; - dlc[11] = b2_config_parms[5].info[0]; - dlc[12] = b2_config_parms[5].info[1]; - dlc[13] = b2_config_parms[6].info[0]; - dlc[14] = b2_config_parms[6].info[1]; - dlc[0] = 14; - dbug(1, 
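PUT_WORD() and GET_WORD() are used throughout these DLC/NLC builders; plausible definitions, assuming the little-endian byte order of the CAPI wire format (the real macros live in the driver's headers):

#include <stdio.h>

typedef unsigned char byte;
typedef unsigned short word;

/* 16-bit values stored low byte first, independent of host
   endianness, matching how dlc[]/nlc[] are filled above. */
#define PUT_WORD(p, w) do { (p)[0] = (byte)(w); (p)[1] = (byte)((w) >> 8); } while (0)
#define GET_WORD(p)    ((word)((p)[0] | ((p)[1] << 8)))

int main(void)
{
    byte dlc[4] = { 0 };
    PUT_WORD(&dlc[1], 2048);        /* e.g. a max data length */
    printf("%02x %02x -> %u\n", dlc[1], dlc[2], GET_WORD(&dlc[1]));
    return 0;                       /* prints: 00 08 -> 2048  */
}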
dprintf("b2_config_parms[4].info[0] [1]: %x %x", b2_config_parms[4].info[0], b2_config_parms[4].info[1])); - dbug(1, dprintf("b2_config_parms[5].info[0] [1]: %x %x", b2_config_parms[5].info[0], b2_config_parms[5].info[1])); - dbug(1, dprintf("b2_config_parms[6].info[0] [1]: %x %x", b2_config_parms[6].info[0], b2_config_parms[6].info[1])); - } - else { - dlc[6] = 14; - } - } - } - } - else - { - if (b2_config->length) - { - dbug(1, dprintf("B2-Config")); - if (llc[1] == X75_V42BIS) { - if (api_parse(&b2_config->info[1], (word)b2_config->length, "bbbbwww", b2_config_parms)) - { - return _WRONG_MESSAGE_FORMAT; - } - } - else { - if (api_parse(&b2_config->info[1], (word)b2_config->length, "bbbbs", b2_config_parms)) - { - return _WRONG_MESSAGE_FORMAT; - } - } - /* if B2 Protocol is LAPD, b2_config structure is different */ - if (llc[1] == 6) - { - dlc[0] = 4; - if (b2_config->length >= 1) dlc[2] = b2_config->info[1]; /* TEI */ - else dlc[2] = 0x01; - if ((b2_config->length >= 2) && (plci->B2_prot == 12)) - { - SAPI = b2_config->info[2]; /* SAPI */ - } - dlc[1] = SAPI; - if ((b2_config->length >= 3) && (b2_config->info[3] == 128)) - { - dlc[3] = 127; /* Mode */ - } - else - { - dlc[3] = 7; /* Mode */ - } - - if (b2_config->length >= 4) dlc[4] = b2_config->info[4]; /* Window */ - else dlc[4] = 1; - dbug(1, dprintf("D-dlc[%d]=%x,%x,%x,%x", dlc[0], dlc[1], dlc[2], dlc[3], dlc[4])); - if (b2_config->length > 5) return _B2_PARM_NOT_SUPPORTED; - } - else - { - dlc[0] = (byte)(b2_config_parms[4].length + 6); - dlc[3] = b2_config->info[1]; - dlc[4] = b2_config->info[2]; - if (b2_config->info[3] != 8 && b2_config->info[3] != 128) { - dbug(1, dprintf("1D-dlc= %x %x %x %x %x", dlc[0], dlc[1], dlc[2], dlc[3], dlc[4])); - return _B2_PARM_NOT_SUPPORTED; - } - - dlc[5] = (byte)(b2_config->info[3] - 1); - dlc[6] = b2_config->info[4]; - if (dlc[6] > dlc[5]) { - dbug(1, dprintf("2D-dlc= %x %x %x %x %x %x %x", dlc[0], dlc[1], dlc[2], dlc[3], dlc[4], dlc[5], dlc[6])); - return _B2_PARM_NOT_SUPPORTED; - } - - if (llc[1] == X75_V42BIS) { - if (b2_config->length >= 10) { - dlc[7] = 6; - dlc[8] = 0; - dlc[9] = b2_config_parms[4].info[0]; - dlc[10] = b2_config_parms[4].info[1]; - dlc[11] = b2_config_parms[5].info[0]; - dlc[12] = b2_config_parms[5].info[1]; - dlc[13] = b2_config_parms[6].info[0]; - dlc[14] = b2_config_parms[6].info[1]; - dlc[0] = 14; - dbug(1, dprintf("b2_config_parms[4].info[0] [1]: %x %x", b2_config_parms[4].info[0], b2_config_parms[4].info[1])); - dbug(1, dprintf("b2_config_parms[5].info[0] [1]: %x %x", b2_config_parms[5].info[0], b2_config_parms[5].info[1])); - dbug(1, dprintf("b2_config_parms[6].info[0] [1]: %x %x", b2_config_parms[6].info[0], b2_config_parms[6].info[1])); - } - else { - dlc[6] = 14; - } - - } - else { - PUT_WORD(&dlc[7], (word)b2_config_parms[4].length); - for (i = 0; i < b2_config_parms[4].length; i++) - dlc[11 + i] = b2_config_parms[4].info[1 + i]; - } - } - } - } - add_p(plci, DLC, dlc); - - b3_config = &bp_parms[5]; - if (b3_config->length) - { - if (plci->B3_prot == 4 - || plci->B3_prot == 5) - { - if (api_parse(&b3_config->info[1], (word)b3_config->length, "wwss", b3_config_parms)) - { - return _WRONG_MESSAGE_FORMAT; - } - i = GET_WORD((byte *)(b3_config_parms[0].info)); - ((T30_INFO *)&nlc[1])->resolution = (byte)(((i & 0x0001) || - ((plci->B3_prot == 4) && (((byte)(GET_WORD((byte *)b3_config_parms[1].info))) != 5))) ? 
T30_RESOLUTION_R8_0770_OR_200 : 0); - ((T30_INFO *)&nlc[1])->data_format = (byte)(GET_WORD((byte *)b3_config_parms[1].info)); - fax_control_bits = T30_CONTROL_BIT_ALL_FEATURES; - if ((((T30_INFO *)&nlc[1])->rate_div_2400 != 0) && (((T30_INFO *)&nlc[1])->rate_div_2400 <= 6)) - fax_control_bits &= ~T30_CONTROL_BIT_ENABLE_V34FAX; - if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_FAX_PAPER_FORMATS) - { - - if ((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[plci->appl->Id - 1]) - & (1L << PRIVATE_FAX_PAPER_FORMATS)) - { - ((T30_INFO *)&nlc[1])->resolution |= T30_RESOLUTION_R8_1540 | - T30_RESOLUTION_R16_1540_OR_400 | T30_RESOLUTION_300_300 | - T30_RESOLUTION_INCH_BASED | T30_RESOLUTION_METRIC_BASED; - } - - ((T30_INFO *)&nlc[1])->recording_properties = - T30_RECORDING_WIDTH_ISO_A3 | - (T30_RECORDING_LENGTH_UNLIMITED << 2) | - (T30_MIN_SCANLINE_TIME_00_00_00 << 4); - } - if (plci->B3_prot == 5) - { - if (i & 0x0002) /* Accept incoming fax-polling requests */ - fax_control_bits |= T30_CONTROL_BIT_ACCEPT_POLLING; - if (i & 0x2000) /* Do not use MR compression */ - fax_control_bits &= ~T30_CONTROL_BIT_ENABLE_2D_CODING; - if (i & 0x4000) /* Do not use MMR compression */ - fax_control_bits &= ~T30_CONTROL_BIT_ENABLE_T6_CODING; - if (i & 0x8000) /* Do not use ECM */ - fax_control_bits &= ~T30_CONTROL_BIT_ENABLE_ECM; - if (plci->fax_connect_info_length != 0) - { - ((T30_INFO *)&nlc[1])->resolution = ((T30_INFO *)plci->fax_connect_info_buffer)->resolution; - ((T30_INFO *)&nlc[1])->data_format = ((T30_INFO *)plci->fax_connect_info_buffer)->data_format; - ((T30_INFO *)&nlc[1])->recording_properties = ((T30_INFO *)plci->fax_connect_info_buffer)->recording_properties; - fax_control_bits |= GET_WORD(&((T30_INFO *)plci->fax_connect_info_buffer)->control_bits_low) & - (T30_CONTROL_BIT_REQUEST_POLLING | T30_CONTROL_BIT_MORE_DOCUMENTS); - } - } - /* copy station id to NLC */ - for (i = 0; i < T30_MAX_STATION_ID_LENGTH; i++) - { - if (i < b3_config_parms[2].length) - { - ((T30_INFO *)&nlc[1])->station_id[i] = ((byte *)b3_config_parms[2].info)[1 + i]; - } - else - { - ((T30_INFO *)&nlc[1])->station_id[i] = ' '; - } - } - ((T30_INFO *)&nlc[1])->station_id_len = T30_MAX_STATION_ID_LENGTH; - /* copy head line to NLC */ - if (b3_config_parms[3].length) - { - - pos = (byte)(fax_head_line_time(&(((T30_INFO *)&nlc[1])->station_id[T30_MAX_STATION_ID_LENGTH]))); - if (pos != 0) - { - if (CAPI_MAX_DATE_TIME_LENGTH + 2 + b3_config_parms[3].length > CAPI_MAX_HEAD_LINE_SPACE) - pos = 0; - else - { - nlc[1 + offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH + pos++] = ' '; - nlc[1 + offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH + pos++] = ' '; - len = (byte)b3_config_parms[2].length; - if (len > 20) - len = 20; - if (CAPI_MAX_DATE_TIME_LENGTH + 2 + len + 2 + b3_config_parms[3].length <= CAPI_MAX_HEAD_LINE_SPACE) - { - for (i = 0; i < len; i++) - nlc[1 + offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH + pos++] = ((byte *)b3_config_parms[2].info)[1 + i]; - nlc[1 + offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH + pos++] = ' '; - nlc[1 + offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH + pos++] = ' '; - } - } - } - - len = (byte)b3_config_parms[3].length; - if (len > CAPI_MAX_HEAD_LINE_SPACE - pos) - len = (byte)(CAPI_MAX_HEAD_LINE_SPACE - pos); - ((T30_INFO *)&nlc[1])->head_line_len = (byte)(pos + len); - nlc[0] += (byte)(pos + len); - for (i = 0; i < len; i++) - nlc[1 + offsetof(T30_INFO, station_id) + 
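In the T.30 branch above, fax_control_bits starts with every feature enabled and individual capabilities are cleared according to the B3 option word (0x2000 disables MR, 0x4000 disables MMR, 0x8000 disables ECM). A sketch of that masking, with placeholder bit values rather than the driver's T30_CONTROL_BIT_* constants:

#include <stdio.h>

enum {                              /* placeholder bit values  */
    CTRL_2D_CODING = 0x0001,        /* MR compression          */
    CTRL_T6_CODING = 0x0002,        /* MMR compression         */
    CTRL_ECM       = 0x0004,        /* error correction mode   */
    CTRL_ALL       = 0x0007
};

static unsigned fax_control(unsigned options)
{
    unsigned bits = CTRL_ALL;            /* start fully enabled */
    if (options & 0x2000)
        bits &= ~CTRL_2D_CODING;         /* "do not use MR"     */
    if (options & 0x4000)
        bits &= ~CTRL_T6_CODING;         /* "do not use MMR"    */
    if (options & 0x8000)
        bits &= ~CTRL_ECM;               /* "do not use ECM"    */
    return bits;
}

int main(void)
{
    printf("%x\n", fax_control(0x8000)); /* 3: ECM cleared */
    return 0;
}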
T30_MAX_STATION_ID_LENGTH + pos++] = ((byte *)b3_config_parms[3].info)[1 + i]; - } else - ((T30_INFO *)&nlc[1])->head_line_len = 0; - - plci->nsf_control_bits = 0; - if (plci->B3_prot == 5) - { - if ((plci->adapter->man_profile.private_options & (1L << PRIVATE_FAX_SUB_SEP_PWD)) - && (GET_WORD((byte *)b3_config_parms[1].info) & 0x8000)) /* Private SUB/SEP/PWD enable */ - { - plci->requested_options |= 1L << PRIVATE_FAX_SUB_SEP_PWD; - } - if ((plci->adapter->man_profile.private_options & (1L << PRIVATE_FAX_NONSTANDARD)) - && (GET_WORD((byte *)b3_config_parms[1].info) & 0x4000)) /* Private non-standard facilities enable */ - { - plci->requested_options |= 1L << PRIVATE_FAX_NONSTANDARD; - } - if ((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[plci->appl->Id - 1]) - & ((1L << PRIVATE_FAX_SUB_SEP_PWD) | (1L << PRIVATE_FAX_NONSTANDARD))) - { - if ((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[plci->appl->Id - 1]) - & (1L << PRIVATE_FAX_SUB_SEP_PWD)) - { - fax_control_bits |= T30_CONTROL_BIT_ACCEPT_SUBADDRESS | T30_CONTROL_BIT_ACCEPT_PASSWORD; - if (fax_control_bits & T30_CONTROL_BIT_ACCEPT_POLLING) - fax_control_bits |= T30_CONTROL_BIT_ACCEPT_SEL_POLLING; - } - len = nlc[0]; - pos = offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH; - if (pos < plci->fax_connect_info_length) - { - for (i = 1 + plci->fax_connect_info_buffer[pos]; i != 0; i--) - nlc[++len] = plci->fax_connect_info_buffer[pos++]; - } - else - nlc[++len] = 0; - if (pos < plci->fax_connect_info_length) - { - for (i = 1 + plci->fax_connect_info_buffer[pos]; i != 0; i--) - nlc[++len] = plci->fax_connect_info_buffer[pos++]; - } - else - nlc[++len] = 0; - if ((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[plci->appl->Id - 1]) - & (1L << PRIVATE_FAX_NONSTANDARD)) - { - if ((pos < plci->fax_connect_info_length) && (plci->fax_connect_info_buffer[pos] != 0)) - { - if ((plci->fax_connect_info_buffer[pos] >= 3) && (plci->fax_connect_info_buffer[pos + 1] >= 2)) - plci->nsf_control_bits = GET_WORD(&plci->fax_connect_info_buffer[pos + 2]); - for (i = 1 + plci->fax_connect_info_buffer[pos]; i != 0; i--) - nlc[++len] = plci->fax_connect_info_buffer[pos++]; - } - else - { - if (api_parse(&b3_config->info[1], (word)b3_config->length, "wwsss", b3_config_parms)) - { - dbug(1, dprintf("non-standard facilities info missing or wrong format")); - nlc[++len] = 0; - } - else - { - if ((b3_config_parms[4].length >= 3) && (b3_config_parms[4].info[1] >= 2)) - plci->nsf_control_bits = GET_WORD(&b3_config_parms[4].info[2]); - nlc[++len] = (byte)(b3_config_parms[4].length); - for (i = 0; i < b3_config_parms[4].length; i++) - nlc[++len] = b3_config_parms[4].info[1 + i]; - } - } - } - nlc[0] = len; - if ((plci->nsf_control_bits & T30_NSF_CONTROL_BIT_ENABLE_NSF) - && (plci->nsf_control_bits & T30_NSF_CONTROL_BIT_NEGOTIATE_RESP)) - { - ((T30_INFO *)&nlc[1])->operating_mode = T30_OPERATING_MODE_CAPI_NEG; - } - } - } - - PUT_WORD(&(((T30_INFO *)&nlc[1])->control_bits_low), fax_control_bits); - len = offsetof(T30_INFO, station_id) + T30_MAX_STATION_ID_LENGTH; - for (i = 0; i < len; i++) - plci->fax_connect_info_buffer[i] = nlc[1 + i]; - ((T30_INFO *) plci->fax_connect_info_buffer)->head_line_len = 0; - i += ((T30_INFO *)&nlc[1])->head_line_len; - while (i < nlc[0]) - plci->fax_connect_info_buffer[len++] = nlc[++i]; - plci->fax_connect_info_length = len; - } - else - { - nlc[0] = 14; - if (b3_config->length != 16) - 
return _B3_PARM_NOT_SUPPORTED; - for (i = 0; i < 12; i++) nlc[1 + i] = b3_config->info[1 + i]; - if (GET_WORD(&b3_config->info[13]) != 8 && GET_WORD(&b3_config->info[13]) != 128) - return _B3_PARM_NOT_SUPPORTED; - nlc[13] = b3_config->info[13]; - if (GET_WORD(&b3_config->info[15]) >= nlc[13]) - return _B3_PARM_NOT_SUPPORTED; - nlc[14] = b3_config->info[15]; - } - } - else - { - if (plci->B3_prot == 4 - || plci->B3_prot == 5 /*T.30 - FAX*/) return _B3_PARM_NOT_SUPPORTED; - } - add_p(plci, NLC, nlc); - return 0; -} - -/*----------------------------------------------------------------*/ -/* make the same as add_b23, but only for the modem related */ -/* L2 and L3 B-Chan protocol. */ -/* */ -/* Enabled L2 and L3 Configurations: */ -/* If L1 == Modem all negotiation */ -/* only L2 == Modem with full negotiation is allowed */ -/* If L1 == Modem async or sync */ -/* only L2 == Transparent is allowed */ -/* L3 == Modem or L3 == Transparent are allowed */ -/* B2 Configuration for modem: */ -/* word : enable/disable compression, bitoptions */ -/* B3 Configuration for modem: */ -/* empty */ -/*----------------------------------------------------------------*/ -static word add_modem_b23(PLCI *plci, API_PARSE *bp_parms) -{ - static byte lli[12] = {1,1}; - static byte llc[3] = {2,0,0}; - static byte dlc[16] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}; - API_PARSE mdm_config[2]; - word i; - word b2_config = 0; - - for (i = 0; i < 2; i++) mdm_config[i].length = 0; - for (i = 0; i < sizeof(dlc); i++) dlc[i] = 0; - - if (((GET_WORD(bp_parms[0].info) == B1_MODEM_ALL_NEGOTIATE) - && (GET_WORD(bp_parms[1].info) != B2_MODEM_EC_COMPRESSION)) - || ((GET_WORD(bp_parms[0].info) != B1_MODEM_ALL_NEGOTIATE) - && (GET_WORD(bp_parms[1].info) != B2_TRANSPARENT))) - { - return (_B_STACK_NOT_SUPPORTED); - } - if ((GET_WORD(bp_parms[2].info) != B3_MODEM) - && (GET_WORD(bp_parms[2].info) != B3_TRANSPARENT)) - { - return (_B_STACK_NOT_SUPPORTED); - } - - plci->B2_prot = (byte) GET_WORD(bp_parms[1].info); - plci->B3_prot = (byte) GET_WORD(bp_parms[2].info); - - if ((GET_WORD(bp_parms[1].info) == B2_MODEM_EC_COMPRESSION) && bp_parms[4].length) - { - if (api_parse(&bp_parms[4].info[1], - (word)bp_parms[4].length, "w", - mdm_config)) - { - return (_WRONG_MESSAGE_FORMAT); - } - b2_config = GET_WORD(mdm_config[0].info); - } - - /* OK, L2 is modem */ - - lli[0] = 1; - lli[1] = 1; - if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_XONOFF_FLOW_CONTROL) - lli[1] |= 2; - if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_OOB_CHANNEL) - lli[1] |= 4; - - if ((lli[1] & 0x02) && (diva_xdi_extended_features & DIVA_CAPI_USE_CMA)) { - lli[1] |= 0x10; - if (plci->rx_dma_descriptor <= 0) { - plci->rx_dma_descriptor = diva_get_dma_descriptor(plci, &plci->rx_dma_magic); - if (plci->rx_dma_descriptor >= 0) - plci->rx_dma_descriptor++; - } - if (plci->rx_dma_descriptor > 0) { - lli[1] |= 0x40; - lli[0] = 6; - lli[2] = (byte)(plci->rx_dma_descriptor - 1); - lli[3] = (byte)plci->rx_dma_magic; - lli[4] = (byte)(plci->rx_dma_magic >> 8); - lli[5] = (byte)(plci->rx_dma_magic >> 16); - lli[6] = (byte)(plci->rx_dma_magic >> 24); - } - } - - if (DIVA_CAPI_SUPPORTS_NO_CANCEL(plci->adapter)) { - lli[1] |= 0x20; - } - - llc[1] = (plci->call_dir & (CALL_DIR_ORIGINATE | CALL_DIR_FORCE_OUTG_NL)) ? 
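The header comment of add_modem_b23() above spells out which B-protocol stacks the function accepts; the same rule expressed as a small predicate (enum members stand in for the CAPI protocol codes):

#include <stdbool.h>
#include <stdio.h>

enum l2 { L2_TRANSPARENT, L2_MODEM_EC_COMPRESSION, L2_OTHER };
enum l3 { L3_TRANSPARENT, L3_MODEM, L3_OTHER };

/* L1 "all negotiation" pairs only with L2 modem EC/compression,
   any other modem L1 pairs only with L2 transparent, and L3
   must be Modem or Transparent, per the checks above. */
static bool modem_stack_allowed(bool l1_all_negotiate, enum l2 l2, enum l3 l3)
{
    if (l3 == L3_OTHER)
        return false;
    if (l1_all_negotiate)
        return l2 == L2_MODEM_EC_COMPRESSION;
    return l2 == L2_TRANSPARENT;
}

int main(void)
{
    printf("%d\n", modem_stack_allowed(true,  L2_MODEM_EC_COMPRESSION, L3_MODEM)); /* 1 */
    printf("%d\n", modem_stack_allowed(false, L2_MODEM_EC_COMPRESSION, L3_MODEM)); /* 0 */
    return 0;
}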
- /*V42*/ 10 : /*V42_IN*/ 9; - llc[2] = 4; /* pass L3 always transparent */ - add_p(plci, LLI, lli); - add_p(plci, LLC, llc); - i = 1; - PUT_WORD(&dlc[i], plci->appl->MaxDataLength); - i += 2; - if (GET_WORD(bp_parms[1].info) == B2_MODEM_EC_COMPRESSION) - { - if (bp_parms[4].length) - { - dbug(1, dprintf("MDM b2_config=%02x", b2_config)); - dlc[i++] = 3; /* Addr A */ - dlc[i++] = 1; /* Addr B */ - dlc[i++] = 7; /* modulo mode */ - dlc[i++] = 7; /* window size */ - dlc[i++] = 0; /* XID len Lo */ - dlc[i++] = 0; /* XID len Hi */ - - if (b2_config & MDM_B2_DISABLE_V42bis) - { - dlc[i] |= DLC_MODEMPROT_DISABLE_V42_V42BIS; - } - if (b2_config & MDM_B2_DISABLE_MNP) - { - dlc[i] |= DLC_MODEMPROT_DISABLE_MNP_MNP5; - } - if (b2_config & MDM_B2_DISABLE_TRANS) - { - dlc[i] |= DLC_MODEMPROT_REQUIRE_PROTOCOL; - } - if (b2_config & MDM_B2_DISABLE_V42) - { - dlc[i] |= DLC_MODEMPROT_DISABLE_V42_DETECT; - } - if (b2_config & MDM_B2_DISABLE_COMP) - { - dlc[i] |= DLC_MODEMPROT_DISABLE_COMPRESSION; - } - i++; - } - } - else - { - dlc[i++] = 3; /* Addr A */ - dlc[i++] = 1; /* Addr B */ - dlc[i++] = 7; /* modulo mode */ - dlc[i++] = 7; /* window size */ - dlc[i++] = 0; /* XID len Lo */ - dlc[i++] = 0; /* XID len Hi */ - dlc[i++] = DLC_MODEMPROT_DISABLE_V42_V42BIS | - DLC_MODEMPROT_DISABLE_MNP_MNP5 | - DLC_MODEMPROT_DISABLE_V42_DETECT | - DLC_MODEMPROT_DISABLE_COMPRESSION; - } - dlc[0] = (byte)(i - 1); -/* HexDump ("DLC", sizeof(dlc), &dlc[0]); */ - add_p(plci, DLC, dlc); - return (0); -} - - -/*------------------------------------------------------------------*/ -/* send a request for the signaling entity */ -/*------------------------------------------------------------------*/ - -static void sig_req(PLCI *plci, byte req, byte Id) -{ - if (!plci) return; - if (plci->adapter->adapter_disabled) return; - dbug(1, dprintf("sig_req(%x)", req)); - if (req == REMOVE) - plci->sig_remove_id = plci->Sig.Id; - if (plci->req_in == plci->req_in_start) { - plci->req_in += 2; - plci->RBuffer[plci->req_in++] = 0; - } - PUT_WORD(&plci->RBuffer[plci->req_in_start], plci->req_in-plci->req_in_start - 2); - plci->RBuffer[plci->req_in++] = Id; /* sig/nl flag */ - plci->RBuffer[plci->req_in++] = req; /* request */ - plci->RBuffer[plci->req_in++] = 0; /* channel */ - plci->req_in_start = plci->req_in; -} - -/*------------------------------------------------------------------*/ -/* send a request for the network layer entity */ -/*------------------------------------------------------------------*/ - -static void nl_req_ncci(PLCI *plci, byte req, byte ncci) -{ - if (!plci) return; - if (plci->adapter->adapter_disabled) return; - dbug(1, dprintf("nl_req %02x %02x %02x", plci->Id, req, ncci)); - if (req == REMOVE) - { - plci->nl_remove_id = plci->NL.Id; - ncci_remove(plci, 0, (byte)(ncci != 0)); - ncci = 0; - } - if (plci->req_in == plci->req_in_start) { - plci->req_in += 2; - plci->RBuffer[plci->req_in++] = 0; - } - PUT_WORD(&plci->RBuffer[plci->req_in_start], plci->req_in-plci->req_in_start - 2); - plci->RBuffer[plci->req_in++] = 1; /* sig/nl flag */ - plci->RBuffer[plci->req_in++] = req; /* request */ - plci->RBuffer[plci->req_in++] = plci->adapter->ncci_ch[ncci]; /* channel */ - plci->req_in_start = plci->req_in; -} - -static void send_req(PLCI *plci) -{ - ENTITY *e; - word l; -/* word i; */ - - if (!plci) return; - if (plci->adapter->adapter_disabled) return; - channel_xmit_xon(plci); - - /* if nothing to do, return */ - if (plci->req_in == plci->req_out) return; - dbug(1, dprintf("send_req(in=%d,out=%d)", plci->req_in, 
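sig_req() and nl_req_ncci() above append fixed-shape records to the PLCI request buffer: a 16-bit length over the parameter area, then an entity flag, a request code and a channel byte. A toy version of that framing, simplified to a flat buffer with an empty parameter area and no ring wraparound (helper names invented):

#include <stdio.h>

typedef unsigned char byte;
typedef unsigned short word;

#define PUT_WORD(p, w) do { (p)[0] = (byte)(w); (p)[1] = (byte)((w) >> 8); } while (0)

/* Append one record: [len:word][params][flag][req][channel],
   where len counts the parameter bytes after the length word
   (a single empty terminator byte here). */
static word enqueue_req(byte *buf, word in, byte flag, byte req, byte ch)
{
    word start = in;
    in += 2;                 /* room for the length word      */
    buf[in++] = 0;           /* empty parameter area          */
    PUT_WORD(&buf[start], (word)(in - start - 2));
    buf[in++] = flag;        /* sig (0) or nl (1) entity      */
    buf[in++] = req;         /* request code                  */
    buf[in++] = ch;          /* channel                       */
    return in;               /* new write position            */
}

int main(void)
{
    byte rb[32];
    word in = enqueue_req(rb, 0, 0, 0x01 /* illustrative */, 0);
    for (word i = 0; i < in; i++)
        printf("%02x ", rb[i]);
    printf("\n");
    return 0;
}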
plci->req_out)); - - if (plci->nl_req || plci->sig_req) return; - - l = GET_WORD(&plci->RBuffer[plci->req_out]); - plci->req_out += 2; - plci->XData[0].P = &plci->RBuffer[plci->req_out]; - plci->req_out += l; - if (plci->RBuffer[plci->req_out] == 1) - { - e = &plci->NL; - plci->req_out++; - e->Req = plci->nl_req = plci->RBuffer[plci->req_out++]; - e->ReqCh = plci->RBuffer[plci->req_out++]; - if (!(e->Id & 0x1f)) - { - e->Id = NL_ID; - plci->RBuffer[plci->req_out - 4] = CAI; - plci->RBuffer[plci->req_out - 3] = 1; - plci->RBuffer[plci->req_out - 2] = (plci->Sig.Id == 0xff) ? 0 : plci->Sig.Id; - plci->RBuffer[plci->req_out - 1] = 0; - l += 3; - plci->nl_global_req = plci->nl_req; - } - dbug(1, dprintf("%x:NLREQ(%x:%x:%x)", plci->adapter->Id, e->Id, e->Req, e->ReqCh)); - } - else - { - e = &plci->Sig; - if (plci->RBuffer[plci->req_out]) - e->Id = plci->RBuffer[plci->req_out]; - plci->req_out++; - e->Req = plci->sig_req = plci->RBuffer[plci->req_out++]; - e->ReqCh = plci->RBuffer[plci->req_out++]; - if (!(e->Id & 0x1f)) - plci->sig_global_req = plci->sig_req; - dbug(1, dprintf("%x:SIGREQ(%x:%x:%x)", plci->adapter->Id, e->Id, e->Req, e->ReqCh)); - } - plci->XData[0].PLength = l; - e->X = plci->XData; - plci->adapter->request(e); - dbug(1, dprintf("send_ok")); -} - -static void send_data(PLCI *plci) -{ - DIVA_CAPI_ADAPTER *a; - DATA_B3_DESC *data; - NCCI *ncci_ptr; - word ncci; - - if (!plci->nl_req && plci->ncci_ring_list) - { - a = plci->adapter; - ncci = plci->ncci_ring_list; - do - { - ncci = a->ncci_next[ncci]; - ncci_ptr = &(a->ncci[ncci]); - if (!(a->ncci_ch[ncci] - && (a->ch_flow_control[a->ncci_ch[ncci]] & N_OK_FC_PENDING))) - { - if (ncci_ptr->data_pending) - { - if ((a->ncci_state[ncci] == CONNECTED) - || (a->ncci_state[ncci] == INC_ACT_PENDING) - || (plci->send_disc == ncci)) - { - data = &(ncci_ptr->DBuffer[ncci_ptr->data_out]); - if ((plci->B2_prot == B2_V120_ASYNC) - || (plci->B2_prot == B2_V120_ASYNC_V42BIS) - || (plci->B2_prot == B2_V120_BIT_TRANSPARENT)) - { - plci->NData[1].P = TransmitBufferGet(plci->appl, data->P); - plci->NData[1].PLength = data->Length; - if (data->Flags & 0x10) - plci->NData[0].P = v120_break_header; - else - plci->NData[0].P = v120_default_header; - plci->NData[0].PLength = 1; - plci->NL.XNum = 2; - plci->NL.Req = plci->nl_req = (byte)((data->Flags & 0x07) << 4 | N_DATA); - } - else - { - plci->NData[0].P = TransmitBufferGet(plci->appl, data->P); - plci->NData[0].PLength = data->Length; - if (data->Flags & 0x10) - plci->NL.Req = plci->nl_req = (byte)N_UDATA; - - else if ((plci->B3_prot == B3_RTP) && (data->Flags & 0x01)) - plci->NL.Req = plci->nl_req = (byte)N_BDATA; - - else - plci->NL.Req = plci->nl_req = (byte)((data->Flags & 0x07) << 4 | N_DATA); - } - plci->NL.X = plci->NData; - plci->NL.ReqCh = a->ncci_ch[ncci]; - dbug(1, dprintf("%x:DREQ(%x:%x)", a->Id, plci->NL.Id, plci->NL.Req)); - plci->data_sent = true; - plci->data_sent_ptr = data->P; - a->request(&plci->NL); - } - else { - cleanup_ncci_data(plci, ncci); - } - } - else if (plci->send_disc == ncci) - { - /* dprintf("N_DISC"); */ - plci->NData[0].PLength = 0; - plci->NL.ReqCh = a->ncci_ch[ncci]; - plci->NL.Req = plci->nl_req = N_DISC; - a->request(&plci->NL); - plci->command = _DISCONNECT_B3_R; - plci->send_disc = 0; - } - } - } while (!plci->nl_req && (ncci != plci->ncci_ring_list)); - plci->ncci_ring_list = ncci; - } -} - -static void listen_check(DIVA_CAPI_ADAPTER *a) -{ - word i, j; - PLCI *plci; - byte activnotifiedcalls = 0; - - dbug(1, dprintf("listen_check(%d,%d)", a->listen_active, 
a->max_listen)); - if (!remove_started && !a->adapter_disabled) - { - for (i = 0; i < a->max_plci; i++) - { - plci = &(a->plci[i]); - if (plci->notifiedcall) activnotifiedcalls++; - } - dbug(1, dprintf("listen_check(%d)", activnotifiedcalls)); - - for (i = a->listen_active; i < ((word)(a->max_listen + activnotifiedcalls)); i++) { - if ((j = get_plci(a))) { - a->listen_active++; - plci = &a->plci[j - 1]; - plci->State = LISTENING; - - add_p(plci, OAD, "\x01\xfd"); - - add_p(plci, KEY, "\x04\x43\x41\x32\x30"); - - add_p(plci, CAI, "\x01\xc0"); - add_p(plci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - add_p(plci, LLI, "\x01\xc4"); /* support Dummy CR FAC + MWI + SpoofNotify */ - add_p(plci, SHIFT | 6, NULL); - add_p(plci, SIN, "\x02\x00\x00"); - plci->internal_command = LISTEN_SIG_ASSIGN_PEND; /* do indicate_req if OK */ - sig_req(plci, ASSIGN, DSIG_ID); - send_req(plci); - } - } - } -} - -/*------------------------------------------------------------------*/ -/* functions for all parameters sent in INDs */ -/*------------------------------------------------------------------*/ - -static void IndParse(PLCI *plci, const word *parms_id, byte **parms, byte multiIEsize) -{ - word ploc; /* points to current location within packet */ - byte w; - byte wlen; - byte codeset, lock; - byte *in; - word i; - word code; - word mIEindex = 0; - ploc = 0; - codeset = 0; - lock = 0; - - in = plci->Sig.RBuffer->P; - for (i = 0; i < parms_id[0]; i++) /* multiIE parms_id contains just the 1st */ - { /* element but parms array is larger */ - parms[i] = (byte *)""; - } - for (i = 0; i < multiIEsize; i++) - { - parms[i] = (byte *)""; - } - - while (ploc < plci->Sig.RBuffer->length - 1) { - - /* read information element id and length */ - w = in[ploc]; - - if (w & 0x80) { -/* w &=0xf0; removed, cannot detect congestion levels */ -/* upper 4 bit masked with w==SHIFT now */ - wlen = 0; - } - else { - wlen = (byte)(in[ploc + 1] + 1); - } - /* check if length valid (not exceeding end of packet) */ - if ((ploc + wlen) > 270) return; - if (lock & 0x80) lock &= 0x7f; - else codeset = lock; - - if ((w & 0xf0) == SHIFT) { - codeset = in[ploc]; - if (!(codeset & 0x08)) lock = (byte)(codeset & 7); - codeset &= 7; - lock |= 0x80; - } - else { - if (w == ESC && wlen >= 3) code = in[ploc + 2] | 0x800; - else code = w; - code |= (codeset << 8); - - for (i = 1; i < parms_id[0] + 1 && parms_id[i] != code; i++); - - if (i < parms_id[0] + 1) { - if (!multiIEsize) { /* with multiIEs use next field index, */ - mIEindex = i - 1; /* with normal IEs use same index like parms_id */ - } - - parms[mIEindex] = &in[ploc + 1]; - dbug(1, dprintf("mIE[%d]=0x%x", *parms[mIEindex], in[ploc])); - if (parms_id[i] == OAD - || parms_id[i] == CONN_NR - || parms_id[i] == CAD) { - if (in[ploc + 2] & 0x80) { - in[ploc + 0] = (byte)(in[ploc + 1] + 1); - in[ploc + 1] = (byte)(in[ploc + 2] & 0x7f); - in[ploc + 2] = 0x80; - parms[mIEindex] = &in[ploc]; - } - } - mIEindex++; /* effects multiIEs only */ - } - } - - ploc += (wlen + 1); - } - return; -} - -/*------------------------------------------------------------------*/ -/* try to match a cip from received BC and HLC */ -/*------------------------------------------------------------------*/ - -static byte ie_compare(byte *ie1, byte *ie2) -{ - word i; - if (!ie1 || !ie2) return false; - if (!ie1[0]) return false; - for (i = 0; i < (word)(ie1[0] + 1); i++) if (ie1[i] != ie2[i]) return false; - return true; -} - -static word find_cip(DIVA_CAPI_ADAPTER *a, byte *bc, byte *hlc) -{ - word i; - word j; - - for (i = 9; i 
&& !ie_compare(bc, cip_bc[i][a->u_law]); i--); - - for (j = 16; j < 29 && - (!ie_compare(bc, cip_bc[j][a->u_law]) || !ie_compare(hlc, cip_hlc[j])); j++); - if (j == 29) return i; - return j; -} - - -static byte AddInfo(byte **add_i, - byte **fty_i, - byte *esc_chi, - byte *facility) -{ - byte i; - byte j; - byte k; - byte flen; - byte len = 0; - /* facility is a nested structure */ - /* FTY can be more than once */ - - if (esc_chi[0] && !(esc_chi[esc_chi[0]] & 0x7f)) - { - add_i[0] = (byte *)"\x02\x02\x00"; /* use neither b nor d channel */ - } - - else - { - add_i[0] = (byte *)""; - } - if (!fty_i[0][0]) - { - add_i[3] = (byte *)""; - } - else - { /* facility array found */ - for (i = 0, j = 1; i < MAX_MULTI_IE && fty_i[i][0]; i++) - { - dbug(1, dprintf("AddIFac[%d]", fty_i[i][0])); - len += fty_i[i][0]; - len += 2; - flen = fty_i[i][0]; - facility[j++] = 0x1c; /* copy fac IE */ - for (k = 0; k <= flen; k++, j++) - { - facility[j] = fty_i[i][k]; -/* dbug(1, dprintf("%x ",facility[j])); */ - } - } - facility[0] = len; - add_i[3] = facility; - } -/* dbug(1, dprintf("FacArrLen=%d ",len)); */ - len = add_i[0][0] + add_i[1][0] + add_i[2][0] + add_i[3][0]; - len += 4; /* calculate length of all */ - return (len); -} - -/*------------------------------------------------------------------*/ -/* voice and codec features */ -/*------------------------------------------------------------------*/ - -static void SetVoiceChannel(PLCI *plci, byte *chi, DIVA_CAPI_ADAPTER *a) -{ - byte voice_chi[] = "\x02\x18\x01"; - byte channel; - - channel = chi[chi[0]] & 0x3; - dbug(1, dprintf("ExtDevON(Ch=0x%x)", channel)); - voice_chi[2] = (channel) ? channel : 1; - add_p(plci, FTY, "\x02\x01\x07"); /* B On, default on 1 */ - add_p(plci, ESC, voice_chi); /* Channel */ - sig_req(plci, TEL_CTRL, 0); - send_req(plci); - if (a->AdvSignalPLCI) - { - adv_voice_write_coefs(a->AdvSignalPLCI, ADV_VOICE_WRITE_ACTIVATION); - } -} - -static void VoiceChannelOff(PLCI *plci) -{ - dbug(1, dprintf("ExtDevOFF")); - add_p(plci, FTY, "\x02\x01\x08"); /* B Off */ - sig_req(plci, TEL_CTRL, 0); - send_req(plci); - if (plci->adapter->AdvSignalPLCI) - { - adv_voice_clear_config(plci->adapter->AdvSignalPLCI); - } -} - - -static word AdvCodecSupport(DIVA_CAPI_ADAPTER *a, PLCI *plci, APPL *appl, - byte hook_listen) -{ - word j; - PLCI *splci; - - /* check if hardware supports handset with hook states (adv.codec) */ - /* or if just a on board codec is supported */ - /* the advanced codec plci is just for internal use */ - - /* diva Pro with on-board codec: */ - if (a->profile.Global_Options & HANDSET) - { - /* new call, but hook states are already signalled */ - if (a->AdvCodecFLAG) - { - if (a->AdvSignalAppl != appl || a->AdvSignalPLCI) - { - dbug(1, dprintf("AdvSigPlci=0x%x", a->AdvSignalPLCI)); - return 0x2001; /* codec in use by another application */ - } - if (plci != NULL) - { - a->AdvSignalPLCI = plci; - plci->tel = ADV_VOICE; - } - return 0; /* adv codec still used */ - } - if ((j = get_plci(a))) - { - splci = &a->plci[j - 1]; - splci->tel = CODEC_PERMANENT; - /* hook_listen indicates if a facility_req with handset/hook support */ - /* was sent. 
Otherwise if just a call on an external device was made */ - /* the codec will be used but the hook info will be discarded (just */ - /* the external controller is in use */ - if (hook_listen) splci->State = ADVANCED_VOICE_SIG; - else - { - splci->State = ADVANCED_VOICE_NOSIG; - if (plci) - { - plci->spoofed_msg = SPOOFING_REQUIRED; - } - /* indicate D-ch connect if */ - } /* codec is connected OK */ - if (plci != NULL) - { - a->AdvSignalPLCI = plci; - plci->tel = ADV_VOICE; - } - a->AdvSignalAppl = appl; - a->AdvCodecFLAG = true; - a->AdvCodecPLCI = splci; - add_p(splci, CAI, "\x01\x15"); - add_p(splci, LLI, "\x01\x00"); - add_p(splci, ESC, "\x02\x18\x00"); - add_p(splci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - splci->internal_command = PERM_COD_ASSIGN; - dbug(1, dprintf("Codec Assign")); - sig_req(splci, ASSIGN, DSIG_ID); - send_req(splci); - } - else - { - return 0x2001; /* wrong state, no more plcis */ - } - } - else if (a->profile.Global_Options & ON_BOARD_CODEC) - { - if (hook_listen) return 0x300B; /* Facility not supported */ - /* no hook with SCOM */ - if (plci != NULL) plci->tel = CODEC; - dbug(1, dprintf("S/SCOM codec")); - /* first time we use the scom-s codec we must shut down the internal */ - /* handset application of the card. This can be done by an assign with */ - /* a cai with the 0x80 bit set. Assign return code is 'out of resource'*/ - if (!a->scom_appl_disable) { - if ((j = get_plci(a))) { - splci = &a->plci[j - 1]; - add_p(splci, CAI, "\x01\x80"); - add_p(splci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - sig_req(splci, ASSIGN, 0xC0); /* 0xc0 is the TEL_ID */ - send_req(splci); - a->scom_appl_disable = true; - } - else{ - return 0x2001; /* wrong state, no more plcis */ - } - } - } - else return 0x300B; /* Facility not supported */ - - return 0; -} - - -static void CodecIdCheck(DIVA_CAPI_ADAPTER *a, PLCI *plci) -{ - - dbug(1, dprintf("CodecIdCheck")); - - if (a->AdvSignalPLCI == plci) - { - dbug(1, dprintf("PLCI owns codec")); - VoiceChannelOff(a->AdvCodecPLCI); - if (a->AdvCodecPLCI->State == ADVANCED_VOICE_NOSIG) - { - dbug(1, dprintf("remove temp codec PLCI")); - plci_remove(a->AdvCodecPLCI); - a->AdvCodecFLAG = 0; - a->AdvCodecPLCI = NULL; - a->AdvSignalAppl = NULL; - } - a->AdvSignalPLCI = NULL; - } -} - -/* ------------------------------------------------------------------- - Ask for physical address of card on PCI bus - ------------------------------------------------------------------- */ -static void diva_ask_for_xdi_sdram_bar(DIVA_CAPI_ADAPTER *a, - IDI_SYNC_REQ *preq) { - a->sdram_bar = 0; - if (diva_xdi_extended_features & DIVA_CAPI_XDI_PROVIDES_SDRAM_BAR) { - ENTITY *e = (ENTITY *)preq; - - e->user[0] = a->Id - 1; - preq->xdi_sdram_bar.info.bar = 0; - preq->xdi_sdram_bar.Req = 0; - preq->xdi_sdram_bar.Rc = IDI_SYNC_REQ_XDI_GET_ADAPTER_SDRAM_BAR; - - (*(a->request))(e); - - a->sdram_bar = preq->xdi_sdram_bar.info.bar; - dbug(3, dprintf("A(%d) SDRAM BAR = %08x", a->Id, a->sdram_bar)); - } -} - -/* ------------------------------------------------------------------- - Ask XDI about extended features - ------------------------------------------------------------------- */ -static void diva_get_extended_adapter_features(DIVA_CAPI_ADAPTER *a) { - IDI_SYNC_REQ *preq; - char buffer[((sizeof(preq->xdi_extended_features) + 4) > sizeof(ENTITY)) ? 
(sizeof(preq->xdi_extended_features) + 4) : sizeof(ENTITY)]; - - char features[4]; - preq = (IDI_SYNC_REQ *)&buffer[0]; - - if (!diva_xdi_extended_features) { - ENTITY *e = (ENTITY *)preq; - diva_xdi_extended_features |= 0x80000000; - - e->user[0] = a->Id - 1; - preq->xdi_extended_features.Req = 0; - preq->xdi_extended_features.Rc = IDI_SYNC_REQ_XDI_GET_EXTENDED_FEATURES; - preq->xdi_extended_features.info.buffer_length_in_bytes = sizeof(features); - preq->xdi_extended_features.info.features = &features[0]; - - (*(a->request))(e); - - if (features[0] & DIVA_XDI_EXTENDED_FEATURES_VALID) { - /* - Check features located in the byte '0' - */ - if (features[0] & DIVA_XDI_EXTENDED_FEATURE_CMA) { - diva_xdi_extended_features |= DIVA_CAPI_USE_CMA; - } - if (features[0] & DIVA_XDI_EXTENDED_FEATURE_RX_DMA) { - diva_xdi_extended_features |= DIVA_CAPI_XDI_PROVIDES_RX_DMA; - dbug(1, dprintf("XDI provides RxDMA")); - } - if (features[0] & DIVA_XDI_EXTENDED_FEATURE_SDRAM_BAR) { - diva_xdi_extended_features |= DIVA_CAPI_XDI_PROVIDES_SDRAM_BAR; - } - if (features[0] & DIVA_XDI_EXTENDED_FEATURE_NO_CANCEL_RC) { - diva_xdi_extended_features |= DIVA_CAPI_XDI_PROVIDES_NO_CANCEL; - dbug(3, dprintf("XDI provides NO_CANCEL_RC feature")); - } - - } - } - - diva_ask_for_xdi_sdram_bar(a, preq); -} - -/*------------------------------------------------------------------*/ -/* automatic law */ -/*------------------------------------------------------------------*/ -/* called from OS specific part after init time to get the Law */ -/* a-law (Euro) and u-law (us,japan) use different BCs in the Setup message */ -void AutomaticLaw(DIVA_CAPI_ADAPTER *a) -{ - word j; - PLCI *splci; - - if (a->automatic_law) { - return; - } - if ((j = get_plci(a))) { - diva_get_extended_adapter_features(a); - splci = &a->plci[j - 1]; - a->automatic_lawPLCI = splci; - a->automatic_law = 1; - add_p(splci, CAI, "\x01\x80"); - add_p(splci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - splci->internal_command = USELAW_REQ; - splci->command = 0; - splci->number = 0; - sig_req(splci, ASSIGN, DSIG_ID); - send_req(splci); - } -} - -/* called from OS specific part if an application sends an Capi20Release */ -word CapiRelease(word Id) -{ - word i, j, appls_found; - PLCI *plci; - APPL *this; - DIVA_CAPI_ADAPTER *a; - - if (!Id) - { - dbug(0, dprintf("A: CapiRelease(Id==0)")); - return (_WRONG_APPL_ID); - } - - this = &application[Id - 1]; /* get application pointer */ - - for (i = 0, appls_found = 0; i < max_appl; i++) - { - if (application[i].Id) /* an application has been found */ - { - appls_found++; - } - } - - for (i = 0; i < max_adapter; i++) /* scan all adapters... 
*/ - { - a = &adapter[i]; - if (a->request) - { - a->Info_Mask[Id - 1] = 0; - a->CIP_Mask[Id - 1] = 0; - a->Notification_Mask[Id - 1] = 0; - a->codec_listen[Id - 1] = NULL; - a->requested_options_table[Id - 1] = 0; - for (j = 0; j < a->max_plci; j++) /* and all PLCIs connected */ - { /* with this application */ - plci = &a->plci[j]; - if (plci->Id) /* if plci owns no application */ - { /* it may be not jet connected */ - if (plci->State == INC_CON_PENDING - || plci->State == INC_CON_ALERT) - { - if (test_bit(Id - 1, plci->c_ind_mask_table)) - { - __clear_bit(Id - 1, plci->c_ind_mask_table); - if (bitmap_empty(plci->c_ind_mask_table, MAX_APPL)) - { - sig_req(plci, HANGUP, 0); - send_req(plci); - plci->State = OUTG_DIS_PENDING; - } - } - } - if (test_bit(Id - 1, plci->c_ind_mask_table)) - { - __clear_bit(Id - 1, plci->c_ind_mask_table); - if (bitmap_empty(plci->c_ind_mask_table, MAX_APPL)) - { - if (!plci->appl) - { - plci_remove(plci); - plci->State = IDLE; - } - } - } - if (plci->appl == this) - { - plci->appl = NULL; - plci_remove(plci); - plci->State = IDLE; - } - } - } - listen_check(a); - - if (a->flag_dynamic_l1_down) - { - if (appls_found == 1) /* last application does a capi release */ - { - if ((j = get_plci(a))) - { - plci = &a->plci[j - 1]; - plci->command = 0; - add_p(plci, OAD, "\x01\xfd"); - add_p(plci, CAI, "\x01\x80"); - add_p(plci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - add_p(plci, SHIFT | 6, NULL); - add_p(plci, SIN, "\x02\x00\x00"); - plci->internal_command = REM_L1_SIG_ASSIGN_PEND; - sig_req(plci, ASSIGN, DSIG_ID); - add_p(plci, FTY, "\x02\xff\x06"); /* l1 down */ - sig_req(plci, SIG_CTRL, 0); - send_req(plci); - } - } - } - if (a->AdvSignalAppl == this) - { - this->NullCREnable = false; - if (a->AdvCodecPLCI) - { - plci_remove(a->AdvCodecPLCI); - a->AdvCodecPLCI->tel = 0; - a->AdvCodecPLCI->adv_nl = 0; - } - a->AdvSignalAppl = NULL; - a->AdvSignalPLCI = NULL; - a->AdvCodecFLAG = 0; - a->AdvCodecPLCI = NULL; - } - } - } - - this->Id = 0; - - return GOOD; -} - -static word plci_remove_check(PLCI *plci) -{ - if (!plci) return true; - if (!plci->NL.Id && bitmap_empty(plci->c_ind_mask_table, MAX_APPL)) - { - if (plci->Sig.Id == 0xff) - plci->Sig.Id = 0; - if (!plci->Sig.Id) - { - dbug(1, dprintf("plci_remove_complete(%x)", plci->Id)); - dbug(1, dprintf("tel=0x%x,Sig=0x%x", plci->tel, plci->Sig.Id)); - if (plci->Id) - { - CodecIdCheck(plci->adapter, plci); - clear_b1_config(plci); - ncci_remove(plci, 0, false); - plci_free_msg_in_queue(plci); - channel_flow_control_remove(plci); - plci->Id = 0; - plci->State = IDLE; - plci->channels = 0; - plci->appl = NULL; - plci->notifiedcall = 0; - } - listen_check(plci->adapter); - return true; - } - } - return false; -} - - -/*------------------------------------------------------------------*/ - -static byte plci_nl_busy(PLCI *plci) -{ - /* only applicable for non-multiplexed protocols */ - return (plci->nl_req - || (plci->ncci_ring_list - && plci->adapter->ncci_ch[plci->ncci_ring_list] - && (plci->adapter->ch_flow_control[plci->adapter->ncci_ch[plci->ncci_ring_list]] & N_OK_FC_PENDING))); -} - - -/*------------------------------------------------------------------*/ -/* DTMF facilities */ -/*------------------------------------------------------------------*/ - - -static struct -{ - byte send_mask; - byte listen_mask; - byte character; - byte code; -} dtmf_digit_map[] = -{ - { 0x01, 0x01, 0x23, DTMF_DIGIT_TONE_CODE_HASHMARK }, - { 0x01, 0x01, 0x2a, DTMF_DIGIT_TONE_CODE_STAR }, - { 0x01, 0x01, 0x30, DTMF_DIGIT_TONE_CODE_0 }, - { 0x01, 
0x01, 0x31, DTMF_DIGIT_TONE_CODE_1 }, - { 0x01, 0x01, 0x32, DTMF_DIGIT_TONE_CODE_2 }, - { 0x01, 0x01, 0x33, DTMF_DIGIT_TONE_CODE_3 }, - { 0x01, 0x01, 0x34, DTMF_DIGIT_TONE_CODE_4 }, - { 0x01, 0x01, 0x35, DTMF_DIGIT_TONE_CODE_5 }, - { 0x01, 0x01, 0x36, DTMF_DIGIT_TONE_CODE_6 }, - { 0x01, 0x01, 0x37, DTMF_DIGIT_TONE_CODE_7 }, - { 0x01, 0x01, 0x38, DTMF_DIGIT_TONE_CODE_8 }, - { 0x01, 0x01, 0x39, DTMF_DIGIT_TONE_CODE_9 }, - { 0x01, 0x01, 0x41, DTMF_DIGIT_TONE_CODE_A }, - { 0x01, 0x01, 0x42, DTMF_DIGIT_TONE_CODE_B }, - { 0x01, 0x01, 0x43, DTMF_DIGIT_TONE_CODE_C }, - { 0x01, 0x01, 0x44, DTMF_DIGIT_TONE_CODE_D }, - { 0x01, 0x00, 0x61, DTMF_DIGIT_TONE_CODE_A }, - { 0x01, 0x00, 0x62, DTMF_DIGIT_TONE_CODE_B }, - { 0x01, 0x00, 0x63, DTMF_DIGIT_TONE_CODE_C }, - { 0x01, 0x00, 0x64, DTMF_DIGIT_TONE_CODE_D }, - - { 0x04, 0x04, 0x80, DTMF_SIGNAL_NO_TONE }, - { 0x00, 0x04, 0x81, DTMF_SIGNAL_UNIDENTIFIED_TONE }, - { 0x04, 0x04, 0x82, DTMF_SIGNAL_DIAL_TONE }, - { 0x04, 0x04, 0x83, DTMF_SIGNAL_PABX_INTERNAL_DIAL_TONE }, - { 0x04, 0x04, 0x84, DTMF_SIGNAL_SPECIAL_DIAL_TONE }, - { 0x04, 0x04, 0x85, DTMF_SIGNAL_SECOND_DIAL_TONE }, - { 0x04, 0x04, 0x86, DTMF_SIGNAL_RINGING_TONE }, - { 0x04, 0x04, 0x87, DTMF_SIGNAL_SPECIAL_RINGING_TONE }, - { 0x04, 0x04, 0x88, DTMF_SIGNAL_BUSY_TONE }, - { 0x04, 0x04, 0x89, DTMF_SIGNAL_CONGESTION_TONE }, - { 0x04, 0x04, 0x8a, DTMF_SIGNAL_SPECIAL_INFORMATION_TONE }, - { 0x04, 0x04, 0x8b, DTMF_SIGNAL_COMFORT_TONE }, - { 0x04, 0x04, 0x8c, DTMF_SIGNAL_HOLD_TONE }, - { 0x04, 0x04, 0x8d, DTMF_SIGNAL_RECORD_TONE }, - { 0x04, 0x04, 0x8e, DTMF_SIGNAL_CALLER_WAITING_TONE }, - { 0x04, 0x04, 0x8f, DTMF_SIGNAL_CALL_WAITING_TONE }, - { 0x04, 0x04, 0x90, DTMF_SIGNAL_PAY_TONE }, - { 0x04, 0x04, 0x91, DTMF_SIGNAL_POSITIVE_INDICATION_TONE }, - { 0x04, 0x04, 0x92, DTMF_SIGNAL_NEGATIVE_INDICATION_TONE }, - { 0x04, 0x04, 0x93, DTMF_SIGNAL_WARNING_TONE }, - { 0x04, 0x04, 0x94, DTMF_SIGNAL_INTRUSION_TONE }, - { 0x04, 0x04, 0x95, DTMF_SIGNAL_CALLING_CARD_SERVICE_TONE }, - { 0x04, 0x04, 0x96, DTMF_SIGNAL_PAYPHONE_RECOGNITION_TONE }, - { 0x04, 0x04, 0x97, DTMF_SIGNAL_CPE_ALERTING_SIGNAL }, - { 0x04, 0x04, 0x98, DTMF_SIGNAL_OFF_HOOK_WARNING_TONE }, - { 0x04, 0x04, 0xbf, DTMF_SIGNAL_INTERCEPT_TONE }, - { 0x04, 0x04, 0xc0, DTMF_SIGNAL_MODEM_CALLING_TONE }, - { 0x04, 0x04, 0xc1, DTMF_SIGNAL_FAX_CALLING_TONE }, - { 0x04, 0x04, 0xc2, DTMF_SIGNAL_ANSWER_TONE }, - { 0x04, 0x04, 0xc3, DTMF_SIGNAL_REVERSED_ANSWER_TONE }, - { 0x04, 0x04, 0xc4, DTMF_SIGNAL_ANSAM_TONE }, - { 0x04, 0x04, 0xc5, DTMF_SIGNAL_REVERSED_ANSAM_TONE }, - { 0x04, 0x04, 0xc6, DTMF_SIGNAL_BELL103_ANSWER_TONE }, - { 0x04, 0x04, 0xc7, DTMF_SIGNAL_FAX_FLAGS }, - { 0x04, 0x04, 0xc8, DTMF_SIGNAL_G2_FAX_GROUP_ID }, - { 0x00, 0x04, 0xc9, DTMF_SIGNAL_HUMAN_SPEECH }, - { 0x04, 0x04, 0xca, DTMF_SIGNAL_ANSWERING_MACHINE_390 }, - { 0x02, 0x02, 0xf1, DTMF_MF_DIGIT_TONE_CODE_1 }, - { 0x02, 0x02, 0xf2, DTMF_MF_DIGIT_TONE_CODE_2 }, - { 0x02, 0x02, 0xf3, DTMF_MF_DIGIT_TONE_CODE_3 }, - { 0x02, 0x02, 0xf4, DTMF_MF_DIGIT_TONE_CODE_4 }, - { 0x02, 0x02, 0xf5, DTMF_MF_DIGIT_TONE_CODE_5 }, - { 0x02, 0x02, 0xf6, DTMF_MF_DIGIT_TONE_CODE_6 }, - { 0x02, 0x02, 0xf7, DTMF_MF_DIGIT_TONE_CODE_7 }, - { 0x02, 0x02, 0xf8, DTMF_MF_DIGIT_TONE_CODE_8 }, - { 0x02, 0x02, 0xf9, DTMF_MF_DIGIT_TONE_CODE_9 }, - { 0x02, 0x02, 0xfa, DTMF_MF_DIGIT_TONE_CODE_0 }, - { 0x02, 0x02, 0xfb, DTMF_MF_DIGIT_TONE_CODE_K1 }, - { 0x02, 0x02, 0xfc, DTMF_MF_DIGIT_TONE_CODE_K2 }, - { 0x02, 0x02, 0xfd, DTMF_MF_DIGIT_TONE_CODE_KP }, - { 0x02, 0x02, 0xfe, DTMF_MF_DIGIT_TONE_CODE_S1 }, - { 0x02, 0x02, 0xff, 
DTMF_MF_DIGIT_TONE_CODE_ST }, - -}; - -#define DTMF_DIGIT_MAP_ENTRIES ARRAY_SIZE(dtmf_digit_map) - - -static void dtmf_enable_receiver(PLCI *plci, byte enable_mask) -{ - word min_digit_duration, min_gap_duration; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_enable_receiver %02x", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, enable_mask)); - - if (enable_mask != 0) - { - min_digit_duration = (plci->dtmf_rec_pulse_ms == 0) ? 40 : plci->dtmf_rec_pulse_ms; - min_gap_duration = (plci->dtmf_rec_pause_ms == 0) ? 40 : plci->dtmf_rec_pause_ms; - plci->internal_req_buffer[0] = DTMF_UDATA_REQUEST_ENABLE_RECEIVER; - PUT_WORD(&plci->internal_req_buffer[1], min_digit_duration); - PUT_WORD(&plci->internal_req_buffer[3], min_gap_duration); - plci->NData[0].PLength = 5; - - PUT_WORD(&plci->internal_req_buffer[5], INTERNAL_IND_BUFFER_SIZE); - plci->NData[0].PLength += 2; - capidtmf_recv_enable(&(plci->capidtmf_state), min_digit_duration, min_gap_duration); - - } - else - { - plci->internal_req_buffer[0] = DTMF_UDATA_REQUEST_DISABLE_RECEIVER; - plci->NData[0].PLength = 1; - - capidtmf_recv_disable(&(plci->capidtmf_state)); - - } - plci->NData[0].P = plci->internal_req_buffer; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_UDATA; - plci->adapter->request(&plci->NL); -} - - -static void dtmf_send_digits(PLCI *plci, byte *digit_buffer, word digit_count) -{ - word w, i; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_send_digits %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, digit_count)); - - plci->internal_req_buffer[0] = DTMF_UDATA_REQUEST_SEND_DIGITS; - w = (plci->dtmf_send_pulse_ms == 0) ? 40 : plci->dtmf_send_pulse_ms; - PUT_WORD(&plci->internal_req_buffer[1], w); - w = (plci->dtmf_send_pause_ms == 0) ? 40 : plci->dtmf_send_pause_ms; - PUT_WORD(&plci->internal_req_buffer[3], w); - for (i = 0; i < digit_count; i++) - { - w = 0; - while ((w < DTMF_DIGIT_MAP_ENTRIES) - && (digit_buffer[i] != dtmf_digit_map[w].character)) - { - w++; - } - plci->internal_req_buffer[5 + i] = (w < DTMF_DIGIT_MAP_ENTRIES) ? 
- dtmf_digit_map[w].code : DTMF_DIGIT_TONE_CODE_STAR; - } - plci->NData[0].PLength = 5 + digit_count; - plci->NData[0].P = plci->internal_req_buffer; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_UDATA; - plci->adapter->request(&plci->NL); -} - - -static void dtmf_rec_clear_config(PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_rec_clear_config", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - plci->dtmf_rec_active = 0; - plci->dtmf_rec_pulse_ms = 0; - plci->dtmf_rec_pause_ms = 0; - - capidtmf_init(&(plci->capidtmf_state), plci->adapter->u_law); - -} - - -static void dtmf_send_clear_config(PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_send_clear_config", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - plci->dtmf_send_requests = 0; - plci->dtmf_send_pulse_ms = 0; - plci->dtmf_send_pause_ms = 0; -} - - -static void dtmf_prepare_switch(dword Id, PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_prepare_switch", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - while (plci->dtmf_send_requests != 0) - dtmf_confirmation(Id, plci); -} - - -static word dtmf_save_config(dword Id, PLCI *plci, byte Rc) -{ - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_save_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - return (GOOD); -} - - -static word dtmf_restore_config(dword Id, PLCI *plci, byte Rc) -{ - word Info; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_restore_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - Info = GOOD; - if (plci->B1_facilities & B1_FACILITY_DTMFR) - { - switch (plci->adjust_b_state) - { - case ADJUST_B_RESTORE_DTMF_1: - plci->internal_command = plci->adjust_b_command; - if (plci_nl_busy(plci)) - { - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_1; - break; - } - dtmf_enable_receiver(plci, plci->dtmf_rec_active); - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_2; - break; - case ADJUST_B_RESTORE_DTMF_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Reenable DTMF receiver failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - break; - } - } - return (Info); -} - - -static void dtmf_command(dword Id, PLCI *plci, byte Rc) -{ - word internal_command, Info; - byte mask; - byte result[4]; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_command %02x %04x %04x %d %d %d %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command, - plci->dtmf_cmd, plci->dtmf_rec_pulse_ms, plci->dtmf_rec_pause_ms, - plci->dtmf_send_pulse_ms, plci->dtmf_send_pause_ms)); - - Info = GOOD; - result[0] = 2; - PUT_WORD(&result[1], DTMF_SUCCESS); - internal_command = plci->internal_command; - plci->internal_command = 0; - mask = 0x01; - switch (plci->dtmf_cmd) - { - - case DTMF_LISTEN_TONE_START: - mask <<= 1; /* fall through */ - case DTMF_LISTEN_MF_START: - mask <<= 1; /* fall through */ - - case DTMF_LISTEN_START: - switch (internal_command) - { - default: - adjust_b1_resource(Id, plci, NULL, (word)(plci->B1_facilities | - B1_FACILITY_DTMFR), DTMF_COMMAND_1); - /* fall through */ - case DTMF_COMMAND_1: - if (adjust_b_process(Id, plci, Rc) != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Load DTMF failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - if (plci->internal_command) - return; - /* fall through */ - case DTMF_COMMAND_2: - if (plci_nl_busy(plci)) 
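dtmf_send_digits() above builds a UDATA request of the form [request][tone duration][gap duration][digit codes...], with both durations defaulting to 40 ms and each code obtained from the dtmf_digit_map[] lookup. The buffer layout rebuilt standalone (the request code byte is a placeholder):

#include <stdio.h>
#include <string.h>

typedef unsigned char byte;
typedef unsigned short word;

#define PUT_WORD(p, w) do { (p)[0] = (byte)(w); (p)[1] = (byte)((w) >> 8); } while (0)

/* Build [request][pulse ms][pause ms][codes...], with the 40 ms
   defaults used above. */
static word build_send_digits(byte *buf, word pulse_ms, word pause_ms,
                              const byte *codes, word count)
{
    buf[0] = 0x01;  /* placeholder for the send-digits request code */
    PUT_WORD(&buf[1], pulse_ms ? pulse_ms : 40);
    PUT_WORD(&buf[3], pause_ms ? pause_ms : 40);
    memcpy(&buf[5], codes, count);
    return (word)(5 + count);
}

int main(void)
{
    byte req[16];
    const byte codes[] = { 0x01, 0x02, 0x03 };
    word len = build_send_digits(req, 0, 0, codes, 3);
    for (word i = 0; i < len; i++)
        printf("%02x ", req[i]);
    printf("\n"); /* 01 28 00 28 00 01 02 03 */
    return 0;
}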
- { - plci->internal_command = DTMF_COMMAND_2; - return; - } - plci->internal_command = DTMF_COMMAND_3; - dtmf_enable_receiver(plci, (byte)(plci->dtmf_rec_active | mask)); - return; - case DTMF_COMMAND_3: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Enable DTMF receiver failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - - plci->tone_last_indication_code = DTMF_SIGNAL_NO_TONE; - - plci->dtmf_rec_active |= mask; - break; - } - break; - - - case DTMF_LISTEN_TONE_STOP: - mask <<= 1; /* fall through */ - case DTMF_LISTEN_MF_STOP: - mask <<= 1; /* fall through */ - - case DTMF_LISTEN_STOP: - switch (internal_command) - { - default: - plci->dtmf_rec_active &= ~mask; - if (plci->dtmf_rec_active) - break; -/* - case DTMF_COMMAND_1: - if (plci->dtmf_rec_active) - { - if (plci_nl_busy (plci)) - { - plci->internal_command = DTMF_COMMAND_1; - return; - } - plci->dtmf_rec_active &= ~mask; - plci->internal_command = DTMF_COMMAND_2; - dtmf_enable_receiver (plci, false); - return; - } - Rc = OK; - case DTMF_COMMAND_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug (1, dprintf("[%06lx] %s,%d: Disable DTMF receiver failed %02x", - UnMapId (Id), (char far *)(FILE_), __LINE__, Rc)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } -*/ - adjust_b1_resource(Id, plci, NULL, (word)(plci->B1_facilities & - ~(B1_FACILITY_DTMFX | B1_FACILITY_DTMFR)), DTMF_COMMAND_3); - /* fall through */ - case DTMF_COMMAND_3: - if (adjust_b_process(Id, plci, Rc) != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Unload DTMF failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - if (plci->internal_command) - return; - break; - } - break; - - - case DTMF_SEND_TONE: - mask <<= 1; /* fall through */ - case DTMF_SEND_MF: - mask <<= 1; /* fall through */ - - case DTMF_DIGITS_SEND: - switch (internal_command) - { - default: - adjust_b1_resource(Id, plci, NULL, (word)(plci->B1_facilities | - ((plci->dtmf_parameter_length != 0) ? 
B1_FACILITY_DTMFX | B1_FACILITY_DTMFR : B1_FACILITY_DTMFX)), - DTMF_COMMAND_1); - /* fall through */ - case DTMF_COMMAND_1: - if (adjust_b_process(Id, plci, Rc) != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Load DTMF failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - if (plci->internal_command) - return; - /* fall through */ - case DTMF_COMMAND_2: - if (plci_nl_busy(plci)) - { - plci->internal_command = DTMF_COMMAND_2; - return; - } - plci->dtmf_msg_number_queue[(plci->dtmf_send_requests)++] = plci->number; - plci->internal_command = DTMF_COMMAND_3; - dtmf_send_digits(plci, &plci->saved_msg.parms[3].info[1], plci->saved_msg.parms[3].length); - return; - case DTMF_COMMAND_3: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Send DTMF digits failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - if (plci->dtmf_send_requests != 0) - (plci->dtmf_send_requests)--; - Info = _FACILITY_NOT_SUPPORTED; - break; - } - return; - } - break; - } - sendf(plci->appl, _FACILITY_R | CONFIRM, Id & 0xffffL, plci->number, - "wws", Info, SELECTOR_DTMF, result); -} - - -static byte dtmf_request(dword Id, word Number, DIVA_CAPI_ADAPTER *a, PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word Info; - word i, j; - byte mask; - API_PARSE dtmf_parms[5]; - byte result[40]; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_request", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - Info = GOOD; - result[0] = 2; - PUT_WORD(&result[1], DTMF_SUCCESS); - if (!(a->profile.Global_Options & GL_DTMF_SUPPORTED)) - { - dbug(1, dprintf("[%06lx] %s,%d: Facility not supported", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - } - else if (api_parse(&msg[1].info[1], msg[1].length, "w", dtmf_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - } - - else if ((GET_WORD(dtmf_parms[0].info) == DTMF_GET_SUPPORTED_DETECT_CODES) - || (GET_WORD(dtmf_parms[0].info) == DTMF_GET_SUPPORTED_SEND_CODES)) - { - if (!((a->requested_options_table[appl->Id - 1]) - & (1L << PRIVATE_DTMF_TONE))) - { - dbug(1, dprintf("[%06lx] %s,%d: DTMF unknown request %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, GET_WORD(dtmf_parms[0].info))); - PUT_WORD(&result[1], DTMF_UNKNOWN_REQUEST); - } - else - { - for (i = 0; i < 32; i++) - result[4 + i] = 0; - if (GET_WORD(dtmf_parms[0].info) == DTMF_GET_SUPPORTED_DETECT_CODES) - { - for (i = 0; i < DTMF_DIGIT_MAP_ENTRIES; i++) - { - if (dtmf_digit_map[i].listen_mask != 0) - result[4 + (dtmf_digit_map[i].character >> 3)] |= (1 << (dtmf_digit_map[i].character & 0x7)); - } - } - else - { - for (i = 0; i < DTMF_DIGIT_MAP_ENTRIES; i++) - { - if (dtmf_digit_map[i].send_mask != 0) - result[4 + (dtmf_digit_map[i].character >> 3)] |= (1 << (dtmf_digit_map[i].character & 0x7)); - } - } - result[0] = 3 + 32; - result[3] = 32; - } - } - - else if (plci == NULL) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong PLCI", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_IDENTIFIER; - } - else - { - if (!plci->State - || !plci->NL.Id || plci->nl_remove_id) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong state", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_STATE; - } - else - { - plci->command = 0; - plci->dtmf_cmd = GET_WORD(dtmf_parms[0].info); - mask = 0x01; - switch (plci->dtmf_cmd) - { - - case DTMF_LISTEN_TONE_START: - case DTMF_LISTEN_TONE_STOP: - mask <<= 1; /* fall through */ - case DTMF_LISTEN_MF_START: - case 
DTMF_LISTEN_MF_STOP: - mask <<= 1; - if (!((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[appl->Id - 1]) - & (1L << PRIVATE_DTMF_TONE))) - { - dbug(1, dprintf("[%06lx] %s,%d: DTMF unknown request %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, GET_WORD(dtmf_parms[0].info))); - PUT_WORD(&result[1], DTMF_UNKNOWN_REQUEST); - break; - } - /* fall through */ - - case DTMF_LISTEN_START: - case DTMF_LISTEN_STOP: - if (!(a->manufacturer_features & MANUFACTURER_FEATURE_HARDDTMF) - && !(a->manufacturer_features & MANUFACTURER_FEATURE_SOFTDTMF_RECEIVE)) - { - dbug(1, dprintf("[%06lx] %s,%d: Facility not supported", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - if (mask & DTMF_LISTEN_ACTIVE_FLAG) - { - if (api_parse(&msg[1].info[1], msg[1].length, "wwws", dtmf_parms)) - { - plci->dtmf_rec_pulse_ms = 0; - plci->dtmf_rec_pause_ms = 0; - } - else - { - plci->dtmf_rec_pulse_ms = GET_WORD(dtmf_parms[1].info); - plci->dtmf_rec_pause_ms = GET_WORD(dtmf_parms[2].info); - } - } - start_internal_command(Id, plci, dtmf_command); - return (false); - - - case DTMF_SEND_TONE: - mask <<= 1; /* fall through */ - case DTMF_SEND_MF: - mask <<= 1; - if (!((plci->requested_options_conn | plci->requested_options | plci->adapter->requested_options_table[appl->Id - 1]) - & (1L << PRIVATE_DTMF_TONE))) - { - dbug(1, dprintf("[%06lx] %s,%d: DTMF unknown request %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, GET_WORD(dtmf_parms[0].info))); - PUT_WORD(&result[1], DTMF_UNKNOWN_REQUEST); - break; - } - /* fall through */ - - case DTMF_DIGITS_SEND: - if (api_parse(&msg[1].info[1], msg[1].length, "wwws", dtmf_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - if (mask & DTMF_LISTEN_ACTIVE_FLAG) - { - plci->dtmf_send_pulse_ms = GET_WORD(dtmf_parms[1].info); - plci->dtmf_send_pause_ms = GET_WORD(dtmf_parms[2].info); - } - i = 0; - j = 0; - while ((i < dtmf_parms[3].length) && (j < DTMF_DIGIT_MAP_ENTRIES)) - { - j = 0; - while ((j < DTMF_DIGIT_MAP_ENTRIES) - && ((dtmf_parms[3].info[i + 1] != dtmf_digit_map[j].character) - || ((dtmf_digit_map[j].send_mask & mask) == 0))) - { - j++; - } - i++; - } - if (j == DTMF_DIGIT_MAP_ENTRIES) - { - dbug(1, dprintf("[%06lx] %s,%d: Incorrect DTMF digit %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, dtmf_parms[3].info[i])); - PUT_WORD(&result[1], DTMF_INCORRECT_DIGIT); - break; - } - if (plci->dtmf_send_requests >= ARRAY_SIZE(plci->dtmf_msg_number_queue)) - { - dbug(1, dprintf("[%06lx] %s,%d: DTMF request overrun", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_STATE; - break; - } - api_save_msg(dtmf_parms, "wwws", &plci->saved_msg); - start_internal_command(Id, plci, dtmf_command); - return (false); - - default: - dbug(1, dprintf("[%06lx] %s,%d: DTMF unknown request %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, plci->dtmf_cmd)); - PUT_WORD(&result[1], DTMF_UNKNOWN_REQUEST); - } - } - } - sendf(appl, _FACILITY_R | CONFIRM, Id & 0xffffL, Number, - "wws", Info, SELECTOR_DTMF, result); - return (false); -} - - -static void dtmf_confirmation(dword Id, PLCI *plci) -{ - word i; - byte result[4]; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_confirmation", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - result[0] = 2; - PUT_WORD(&result[1], DTMF_SUCCESS); - if (plci->dtmf_send_requests != 0) - { - sendf(plci->appl, _FACILITY_R | CONFIRM, Id & 0xffffL, plci->dtmf_msg_number_queue[0], - 
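The DTMF_GET_SUPPORTED_*_CODES replies in dtmf_request() above report each supported character as one bit of a 32-byte map indexed by character value; the packing alone:

#include <stdio.h>

typedef unsigned char byte;

/* Set the bit for character ch in a 256-bit (32-byte) map,
   mirroring "result[4 + (ch >> 3)] |= 1 << (ch & 0x7)" above. */
static void mark_supported(byte map[32], byte ch)
{
    map[ch >> 3] |= (byte)(1 << (ch & 0x7));
}

int main(void)
{
    byte map[32] = { 0 };
    mark_supported(map, '0');   /* 0x30 -> byte 6, bit 0 */
    mark_supported(map, '#');   /* 0x23 -> byte 4, bit 3 */
    printf("%02x %02x\n", map[6], map[4]); /* 01 08 */
    return 0;
}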
"wws", GOOD, SELECTOR_DTMF, result); - (plci->dtmf_send_requests)--; - for (i = 0; i < plci->dtmf_send_requests; i++) - plci->dtmf_msg_number_queue[i] = plci->dtmf_msg_number_queue[i + 1]; - } -} - - -static void dtmf_indication(dword Id, PLCI *plci, byte *msg, word length) -{ - word i, j, n; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_indication", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - n = 0; - for (i = 1; i < length; i++) - { - j = 0; - while ((j < DTMF_DIGIT_MAP_ENTRIES) - && ((msg[i] != dtmf_digit_map[j].code) - || ((dtmf_digit_map[j].listen_mask & plci->dtmf_rec_active) == 0))) - { - j++; - } - if (j < DTMF_DIGIT_MAP_ENTRIES) - { - - if ((dtmf_digit_map[j].listen_mask & DTMF_TONE_LISTEN_ACTIVE_FLAG) - && (plci->tone_last_indication_code == DTMF_SIGNAL_NO_TONE) - && (dtmf_digit_map[j].character != DTMF_SIGNAL_UNIDENTIFIED_TONE)) - { - if (n + 1 == i) - { - for (i = length; i > n + 1; i--) - msg[i] = msg[i - 1]; - length++; - i++; - } - msg[++n] = DTMF_SIGNAL_UNIDENTIFIED_TONE; - } - plci->tone_last_indication_code = dtmf_digit_map[j].character; - - msg[++n] = dtmf_digit_map[j].character; - } - } - if (n != 0) - { - msg[0] = (byte) n; - sendf(plci->appl, _FACILITY_I, Id & 0xffffL, 0, "wS", SELECTOR_DTMF, msg); - } -} - - -/*------------------------------------------------------------------*/ -/* DTMF parameters */ -/*------------------------------------------------------------------*/ - -static void dtmf_parameter_write(PLCI *plci) -{ - word i; - byte parameter_buffer[DTMF_PARAMETER_BUFFER_SIZE + 2]; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_parameter_write", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - parameter_buffer[0] = plci->dtmf_parameter_length + 1; - parameter_buffer[1] = DSP_CTRL_SET_DTMF_PARAMETERS; - for (i = 0; i < plci->dtmf_parameter_length; i++) - parameter_buffer[2 + i] = plci->dtmf_parameter_buffer[i]; - add_p(plci, FTY, parameter_buffer); - sig_req(plci, TEL_CTRL, 0); - send_req(plci); -} - - -static void dtmf_parameter_clear_config(PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_parameter_clear_config", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - plci->dtmf_parameter_length = 0; -} - - -static void dtmf_parameter_prepare_switch(dword Id, PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_parameter_prepare_switch", - UnMapId(Id), (char *)(FILE_), __LINE__)); - -} - - -static word dtmf_parameter_save_config(dword Id, PLCI *plci, byte Rc) -{ - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_parameter_save_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - return (GOOD); -} - - -static word dtmf_parameter_restore_config(dword Id, PLCI *plci, byte Rc) -{ - word Info; - - dbug(1, dprintf("[%06lx] %s,%d: dtmf_parameter_restore_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - Info = GOOD; - if ((plci->B1_facilities & B1_FACILITY_DTMFR) - && (plci->dtmf_parameter_length != 0)) - { - switch (plci->adjust_b_state) - { - case ADJUST_B_RESTORE_DTMF_PARAMETER_1: - plci->internal_command = plci->adjust_b_command; - if (plci->sig_req) - { - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_PARAMETER_1; - break; - } - dtmf_parameter_write(plci); - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_PARAMETER_2; - break; - case ADJUST_B_RESTORE_DTMF_PARAMETER_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Restore DTMF parameters failed %02x", - UnMapId(Id), (char 
*)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - break; - } - } - return (Info); -} - - -/*------------------------------------------------------------------*/ -/* Line interconnect facilities */ -/*------------------------------------------------------------------*/ - - -LI_CONFIG *li_config_table; -word li_total_channels; - - -/*------------------------------------------------------------------*/ -/* translate a CHI information element to a channel number */ -/* returns 0xff - any channel */ -/* 0xfe - chi wrong coding */ -/* 0xfd - D-channel */ -/* 0x00 - no channel */ -/* else channel number / PRI: timeslot */ -/* if channels is provided we accept more than one channel. */ -/*------------------------------------------------------------------*/ - -static byte chi_to_channel(byte *chi, dword *pchannelmap) -{ - int p; - int i; - dword map; - byte excl; - byte ofs; - byte ch; - - if (pchannelmap) *pchannelmap = 0; - if (!chi[0]) return 0xff; - excl = 0; - - if (chi[1] & 0x20) { - if (chi[0] == 1 && chi[1] == 0xac) return 0xfd; /* exclusive d-channel */ - for (i = 1; i < chi[0] && !(chi[i] & 0x80); i++); - if (i == chi[0] || !(chi[i] & 0x80)) return 0xfe; - if ((chi[1] | 0xc8) != 0xe9) return 0xfe; - if (chi[1] & 0x08) excl = 0x40; - - /* int. id present */ - if (chi[1] & 0x40) { - p = i + 1; - for (i = p; i < chi[0] && !(chi[i] & 0x80); i++); - if (i == chi[0] || !(chi[i] & 0x80)) return 0xfe; - } - - /* coding standard, Number/Map, Channel Type */ - p = i + 1; - for (i = p; i < chi[0] && !(chi[i] & 0x80); i++); - if (i == chi[0] || !(chi[i] & 0x80)) return 0xfe; - if ((chi[p] | 0xd0) != 0xd3) return 0xfe; - - /* Number/Map */ - if (chi[p] & 0x10) { - - /* map */ - if ((chi[0] - p) == 4) ofs = 0; - else if ((chi[0] - p) == 3) ofs = 1; - else return 0xfe; - ch = 0; - map = 0; - for (i = 0; i < 4 && p < chi[0]; i++) { - p++; - ch += 8; - map <<= 8; - if (chi[p]) { - for (ch = 0; !(chi[p] & (1 << ch)); ch++); - map |= chi[p]; - } - } - ch += ofs; - map <<= ofs; - } - else { - - /* number */ - p = i + 1; - ch = chi[p] & 0x3f; - if (pchannelmap) { - if ((byte)(chi[0] - p) > 30) return 0xfe; - map = 0; - for (i = p; i <= chi[0]; i++) { - if ((chi[i] & 0x7f) > 31) return 0xfe; - map |= (1L << (chi[i] & 0x7f)); - } - } - else { - if (p != chi[0]) return 0xfe; - if (ch > 31) return 0xfe; - map = (1L << ch); - } - if (chi[p] & 0x40) return 0xfe; - } - if (pchannelmap) *pchannelmap = map; - else if (map != ((dword)(1L << ch))) return 0xfe; - return (byte)(excl | ch); - } - else { /* not PRI */ - for (i = 1; i < chi[0] && !(chi[i] & 0x80); i++); - if (i != chi[0] || !(chi[i] & 0x80)) return 0xfe; - if (chi[1] & 0x08) excl = 0x40; - - switch (chi[1] | 0x98) { - case 0x98: return 0; - case 0x99: - if (pchannelmap) *pchannelmap = 2; - return excl | 1; - case 0x9a: - if (pchannelmap) *pchannelmap = 4; - return excl | 2; - case 0x9b: return 0xff; - case 0x9c: return 0xfd; /* d-ch */ - default: return 0xfe; - } - } -} - - -static void mixer_set_bchannel_id_esc(PLCI *plci, byte bchannel_id) -{ - DIVA_CAPI_ADAPTER *a; - PLCI *splci; - byte old_id; - - a = plci->adapter; - old_id = plci->li_bchannel_id; - if (a->li_pri) - { - if ((old_id != 0) && (li_config_table[a->li_base + (old_id - 1)].plci == plci)) - li_config_table[a->li_base + (old_id - 1)].plci = NULL; - plci->li_bchannel_id = (bchannel_id & 0x1f) + 1; - if (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == NULL) - li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci = plci; - } - else - { - if (((bchannel_id & 
0x03) == 1) || ((bchannel_id & 0x03) == 2)) - { - if ((old_id != 0) && (li_config_table[a->li_base + (old_id - 1)].plci == plci)) - li_config_table[a->li_base + (old_id - 1)].plci = NULL; - plci->li_bchannel_id = bchannel_id & 0x03; - if ((a->AdvSignalPLCI != NULL) && (a->AdvSignalPLCI != plci) && (a->AdvSignalPLCI->tel == ADV_VOICE)) - { - splci = a->AdvSignalPLCI; - if (li_config_table[a->li_base + (2 - plci->li_bchannel_id)].plci == NULL) - { - if ((splci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (splci->li_bchannel_id - 1)].plci == splci)) - { - li_config_table[a->li_base + (splci->li_bchannel_id - 1)].plci = NULL; - } - splci->li_bchannel_id = 3 - plci->li_bchannel_id; - li_config_table[a->li_base + (2 - plci->li_bchannel_id)].plci = splci; - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_set_bchannel_id_esc %d", - (dword)((splci->Id << 8) | UnMapController(splci->adapter->Id)), - (char *)(FILE_), __LINE__, splci->li_bchannel_id)); - } - } - if (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == NULL) - li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci = plci; - } - } - if ((old_id == 0) && (plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - mixer_clear_config(plci); - } - dbug(1, dprintf("[%06lx] %s,%d: mixer_set_bchannel_id_esc %d %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, bchannel_id, plci->li_bchannel_id)); -} - - -static void mixer_set_bchannel_id(PLCI *plci, byte *chi) -{ - DIVA_CAPI_ADAPTER *a; - PLCI *splci; - byte ch, old_id; - - a = plci->adapter; - old_id = plci->li_bchannel_id; - ch = chi_to_channel(chi, NULL); - if (!(ch & 0x80)) - { - if (a->li_pri) - { - if ((old_id != 0) && (li_config_table[a->li_base + (old_id - 1)].plci == plci)) - li_config_table[a->li_base + (old_id - 1)].plci = NULL; - plci->li_bchannel_id = (ch & 0x1f) + 1; - if (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == NULL) - li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci = plci; - } - else - { - if (((ch & 0x1f) == 1) || ((ch & 0x1f) == 2)) - { - if ((old_id != 0) && (li_config_table[a->li_base + (old_id - 1)].plci == plci)) - li_config_table[a->li_base + (old_id - 1)].plci = NULL; - plci->li_bchannel_id = ch & 0x1f; - if ((a->AdvSignalPLCI != NULL) && (a->AdvSignalPLCI != plci) && (a->AdvSignalPLCI->tel == ADV_VOICE)) - { - splci = a->AdvSignalPLCI; - if (li_config_table[a->li_base + (2 - plci->li_bchannel_id)].plci == NULL) - { - if ((splci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (splci->li_bchannel_id - 1)].plci == splci)) - { - li_config_table[a->li_base + (splci->li_bchannel_id - 1)].plci = NULL; - } - splci->li_bchannel_id = 3 - plci->li_bchannel_id; - li_config_table[a->li_base + (2 - plci->li_bchannel_id)].plci = splci; - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_set_bchannel_id %d", - (dword)((splci->Id << 8) | UnMapController(splci->adapter->Id)), - (char *)(FILE_), __LINE__, splci->li_bchannel_id)); - } - } - if (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == NULL) - li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci = plci; - } - } - } - if ((old_id == 0) && (plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - mixer_clear_config(plci); - } - dbug(1, dprintf("[%06lx] %s,%d: mixer_set_bchannel_id %02x %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, ch, 
plci->li_bchannel_id)); -} - - -#define MIXER_MAX_DUMP_CHANNELS 34 - -static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a) -{ - word n, i, j; - char *p; - char hex_line[2 * MIXER_MAX_DUMP_CHANNELS + MIXER_MAX_DUMP_CHANNELS / 8 + 4]; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_calculate_coefs", - (dword)(UnMapController(a->Id)), (char *)(FILE_), __LINE__)); - - for (i = 0; i < li_total_channels; i++) - { - li_config_table[i].channel &= LI_CHANNEL_ADDRESSES_SET; - if (li_config_table[i].chflags != 0) - li_config_table[i].channel |= LI_CHANNEL_INVOLVED; - else - { - for (j = 0; j < li_total_channels; j++) - { - if (((li_config_table[i].flag_table[j]) != 0) - || ((li_config_table[j].flag_table[i]) != 0)) - { - li_config_table[i].channel |= LI_CHANNEL_INVOLVED; - } - if (((li_config_table[i].flag_table[j] & LI_FLAG_CONFERENCE) != 0) - || ((li_config_table[j].flag_table[i] & LI_FLAG_CONFERENCE) != 0)) - { - li_config_table[i].channel |= LI_CHANNEL_CONFERENCE; - } - } - } - } - for (i = 0; i < li_total_channels; i++) - { - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].coef_table[j] &= ~(LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC); - if (li_config_table[i].flag_table[j] & LI_FLAG_CONFERENCE) - li_config_table[i].coef_table[j] |= LI_COEF_CH_CH; - } - } - for (n = 0; n < li_total_channels; n++) - { - if (li_config_table[n].channel & LI_CHANNEL_CONFERENCE) - { - for (i = 0; i < li_total_channels; i++) - { - if (li_config_table[i].channel & LI_CHANNEL_CONFERENCE) - { - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].coef_table[j] |= - li_config_table[i].coef_table[n] & li_config_table[n].coef_table[j]; - } - } - } - } - } - for (i = 0; i < li_total_channels; i++) - { - if (li_config_table[i].channel & LI_CHANNEL_INVOLVED) - { - li_config_table[i].coef_table[i] &= ~LI_COEF_CH_CH; - for (j = 0; j < li_total_channels; j++) - { - if (li_config_table[i].coef_table[j] & LI_COEF_CH_CH) - li_config_table[i].flag_table[j] |= LI_FLAG_CONFERENCE; - } - if (li_config_table[i].flag_table[i] & LI_FLAG_CONFERENCE) - li_config_table[i].coef_table[i] |= LI_COEF_CH_CH; - } - } - for (i = 0; i < li_total_channels; i++) - { - if (li_config_table[i].channel & LI_CHANNEL_INVOLVED) - { - for (j = 0; j < li_total_channels; j++) - { - if (li_config_table[i].flag_table[j] & LI_FLAG_INTERCONNECT) - li_config_table[i].coef_table[j] |= LI_COEF_CH_CH; - if (li_config_table[i].flag_table[j] & LI_FLAG_MONITOR) - li_config_table[i].coef_table[j] |= LI_COEF_CH_PC; - if (li_config_table[i].flag_table[j] & LI_FLAG_MIX) - li_config_table[i].coef_table[j] |= LI_COEF_PC_CH; - if (li_config_table[i].flag_table[j] & LI_FLAG_PCCONNECT) - li_config_table[i].coef_table[j] |= LI_COEF_PC_PC; - } - if (li_config_table[i].chflags & LI_CHFLAG_MONITOR) - { - for (j = 0; j < li_total_channels; j++) - { - if (li_config_table[i].flag_table[j] & LI_FLAG_INTERCONNECT) - { - li_config_table[i].coef_table[j] |= LI_COEF_CH_PC; - if (li_config_table[j].chflags & LI_CHFLAG_MIX) - li_config_table[i].coef_table[j] |= LI_COEF_PC_CH | LI_COEF_PC_PC; - } - } - } - if (li_config_table[i].chflags & LI_CHFLAG_MIX) - { - for (j = 0; j < li_total_channels; j++) - { - if (li_config_table[j].flag_table[i] & LI_FLAG_INTERCONNECT) - li_config_table[j].coef_table[i] |= LI_COEF_PC_CH; - } - } - if (li_config_table[i].chflags & LI_CHFLAG_LOOP) - { - for (j = 0; j < li_total_channels; j++) - { - if (li_config_table[i].flag_table[j] & LI_FLAG_INTERCONNECT) - { - for (n = 0; n < li_total_channels; n++) - { - if 
(li_config_table[n].flag_table[i] & LI_FLAG_INTERCONNECT) - { - li_config_table[n].coef_table[j] |= LI_COEF_CH_CH; - if (li_config_table[j].chflags & LI_CHFLAG_MIX) - { - li_config_table[n].coef_table[j] |= LI_COEF_PC_CH; - if (li_config_table[n].chflags & LI_CHFLAG_MONITOR) - li_config_table[n].coef_table[j] |= LI_COEF_CH_PC | LI_COEF_PC_PC; - } - else if (li_config_table[n].chflags & LI_CHFLAG_MONITOR) - li_config_table[n].coef_table[j] |= LI_COEF_CH_PC; - } - } - } - } - } - } - } - for (i = 0; i < li_total_channels; i++) - { - if (li_config_table[i].channel & LI_CHANNEL_INVOLVED) - { - if (li_config_table[i].chflags & (LI_CHFLAG_MONITOR | LI_CHFLAG_MIX | LI_CHFLAG_LOOP)) - li_config_table[i].channel |= LI_CHANNEL_ACTIVE; - if (li_config_table[i].chflags & LI_CHFLAG_MONITOR) - li_config_table[i].channel |= LI_CHANNEL_RX_DATA; - if (li_config_table[i].chflags & LI_CHFLAG_MIX) - li_config_table[i].channel |= LI_CHANNEL_TX_DATA; - for (j = 0; j < li_total_channels; j++) - { - if ((li_config_table[i].flag_table[j] & - (LI_FLAG_INTERCONNECT | LI_FLAG_PCCONNECT | LI_FLAG_CONFERENCE | LI_FLAG_MONITOR)) - || (li_config_table[j].flag_table[i] & - (LI_FLAG_INTERCONNECT | LI_FLAG_PCCONNECT | LI_FLAG_CONFERENCE | LI_FLAG_ANNOUNCEMENT | LI_FLAG_MIX))) - { - li_config_table[i].channel |= LI_CHANNEL_ACTIVE; - } - if (li_config_table[i].flag_table[j] & (LI_FLAG_PCCONNECT | LI_FLAG_MONITOR)) - li_config_table[i].channel |= LI_CHANNEL_RX_DATA; - if (li_config_table[j].flag_table[i] & (LI_FLAG_PCCONNECT | LI_FLAG_ANNOUNCEMENT | LI_FLAG_MIX)) - li_config_table[i].channel |= LI_CHANNEL_TX_DATA; - } - if (!(li_config_table[i].channel & LI_CHANNEL_ACTIVE)) - { - li_config_table[i].coef_table[i] |= LI_COEF_PC_CH | LI_COEF_CH_PC; - li_config_table[i].channel |= LI_CHANNEL_TX_DATA | LI_CHANNEL_RX_DATA; - } - } - } - for (i = 0; i < li_total_channels; i++) - { - if (li_config_table[i].channel & LI_CHANNEL_INVOLVED) - { - j = 0; - while ((j < li_total_channels) && !(li_config_table[i].flag_table[j] & LI_FLAG_ANNOUNCEMENT)) - j++; - if (j < li_total_channels) - { - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].coef_table[j] &= ~(LI_COEF_CH_CH | LI_COEF_PC_CH); - if (li_config_table[i].flag_table[j] & LI_FLAG_ANNOUNCEMENT) - li_config_table[i].coef_table[j] |= LI_COEF_PC_CH; - } - } - } - } - n = li_total_channels; - if (n > MIXER_MAX_DUMP_CHANNELS) - n = MIXER_MAX_DUMP_CHANNELS; - - p = hex_line; - for (j = 0; j < n; j++) - { - if ((j & 0x7) == 0) - *(p++) = ' '; - p = hex_byte_pack(p, li_config_table[j].curchnl); - } - *p = '\0'; - dbug(1, dprintf("[%06lx] CURRENT %s", - (dword)(UnMapController(a->Id)), (char *)hex_line)); - p = hex_line; - for (j = 0; j < n; j++) - { - if ((j & 0x7) == 0) - *(p++) = ' '; - p = hex_byte_pack(p, li_config_table[j].channel); - } - *p = '\0'; - dbug(1, dprintf("[%06lx] CHANNEL %s", - (dword)(UnMapController(a->Id)), (char *)hex_line)); - p = hex_line; - for (j = 0; j < n; j++) - { - if ((j & 0x7) == 0) - *(p++) = ' '; - p = hex_byte_pack(p, li_config_table[j].chflags); - } - *p = '\0'; - dbug(1, dprintf("[%06lx] CHFLAG %s", - (dword)(UnMapController(a->Id)), (char *)hex_line)); - for (i = 0; i < n; i++) - { - p = hex_line; - for (j = 0; j < n; j++) - { - if ((j & 0x7) == 0) - *(p++) = ' '; - p = hex_byte_pack(p, li_config_table[i].flag_table[j]); - } - *p = '\0'; - dbug(1, dprintf("[%06lx] FLAG[%02x]%s", - (dword)(UnMapController(a->Id)), i, (char *)hex_line)); - } - for (i = 0; i < n; i++) - { - p = hex_line; - for (j = 0; j < n; j++) - { - if ((j & 0x7) == 0) - 
				*(p++) = ' ';
-			p = hex_byte_pack(p, li_config_table[i].coef_table[j]);
-		}
-		*p = '\0';
-		dbug(1, dprintf("[%06lx] COEF[%02x]%s",
-			(dword)(UnMapController(a->Id)), i, (char *)hex_line));
-	}
-}
-
-
-static struct
-{
-	byte mask;
-	byte line_flags;
-} mixer_write_prog_pri[] =
-{
-	{ LI_COEF_CH_CH, 0 },
-	{ LI_COEF_CH_PC, MIXER_COEF_LINE_TO_PC_FLAG },
-	{ LI_COEF_PC_CH, MIXER_COEF_LINE_FROM_PC_FLAG },
-	{ LI_COEF_PC_PC, MIXER_COEF_LINE_TO_PC_FLAG | MIXER_COEF_LINE_FROM_PC_FLAG }
-};
-
-static struct
-{
-	byte from_ch;
-	byte to_ch;
-	byte mask;
-	byte xconnect_override;
-} mixer_write_prog_bri[] =
-{
-	{ 0, 0, LI_COEF_CH_CH, 0x01 },	/* B to B */
-	{ 1, 0, LI_COEF_CH_CH, 0x01 },	/* Alt B to B */
-	{ 0, 0, LI_COEF_PC_CH, 0x80 },	/* PC to B */
-	{ 1, 0, LI_COEF_PC_CH, 0x01 },	/* Alt PC to B */
-	{ 2, 0, LI_COEF_CH_CH, 0x00 },	/* IC to B */
-	{ 3, 0, LI_COEF_CH_CH, 0x00 },	/* Alt IC to B */
-	{ 0, 0, LI_COEF_CH_PC, 0x80 },	/* B to PC */
-	{ 1, 0, LI_COEF_CH_PC, 0x01 },	/* Alt B to PC */
-	{ 0, 0, LI_COEF_PC_PC, 0x01 },	/* PC to PC */
-	{ 1, 0, LI_COEF_PC_PC, 0x01 },	/* Alt PC to PC */
-	{ 2, 0, LI_COEF_CH_PC, 0x00 },	/* IC to PC */
-	{ 3, 0, LI_COEF_CH_PC, 0x00 },	/* Alt IC to PC */
-	{ 0, 2, LI_COEF_CH_CH, 0x00 },	/* B to IC */
-	{ 1, 2, LI_COEF_CH_CH, 0x00 },	/* Alt B to IC */
-	{ 0, 2, LI_COEF_PC_CH, 0x00 },	/* PC to IC */
-	{ 1, 2, LI_COEF_PC_CH, 0x00 },	/* Alt PC to IC */
-	{ 2, 2, LI_COEF_CH_CH, 0x00 },	/* IC to IC */
-	{ 3, 2, LI_COEF_CH_CH, 0x00 },	/* Alt IC to IC */
-	{ 1, 1, LI_COEF_CH_CH, 0x01 },	/* Alt B to Alt B */
-	{ 0, 1, LI_COEF_CH_CH, 0x01 },	/* B to Alt B */
-	{ 1, 1, LI_COEF_PC_CH, 0x80 },	/* Alt PC to Alt B */
-	{ 0, 1, LI_COEF_PC_CH, 0x01 },	/* PC to Alt B */
-	{ 3, 1, LI_COEF_CH_CH, 0x00 },	/* Alt IC to Alt B */
-	{ 2, 1, LI_COEF_CH_CH, 0x00 },	/* IC to Alt B */
-	{ 1, 1, LI_COEF_CH_PC, 0x80 },	/* Alt B to Alt PC */
-	{ 0, 1, LI_COEF_CH_PC, 0x01 },	/* B to Alt PC */
-	{ 1, 1, LI_COEF_PC_PC, 0x01 },	/* Alt PC to Alt PC */
-	{ 0, 1, LI_COEF_PC_PC, 0x01 },	/* PC to Alt PC */
-	{ 3, 1, LI_COEF_CH_PC, 0x00 },	/* Alt IC to Alt PC */
-	{ 2, 1, LI_COEF_CH_PC, 0x00 },	/* IC to Alt PC */
-	{ 1, 3, LI_COEF_CH_CH, 0x00 },	/* Alt B to Alt IC */
-	{ 0, 3, LI_COEF_CH_CH, 0x00 },	/* B to Alt IC */
-	{ 1, 3, LI_COEF_PC_CH, 0x00 },	/* Alt PC to Alt IC */
-	{ 0, 3, LI_COEF_PC_CH, 0x00 },	/* PC to Alt IC */
-	{ 3, 3, LI_COEF_CH_CH, 0x00 },	/* Alt IC to Alt IC */
-	{ 2, 3, LI_COEF_CH_CH, 0x00 }	/* IC to Alt IC */
-};
-
-static byte mixer_swapped_index_bri[] =
-{
-	18,	/* B to B */
-	19,	/* Alt B to B */
-	20,	/* PC to B */
-	21,	/* Alt PC to B */
-	22,	/* IC to B */
-	23,	/* Alt IC to B */
-	24,	/* B to PC */
-	25,	/* Alt B to PC */
-	26,	/* PC to PC */
-	27,	/* Alt PC to PC */
-	28,	/* IC to PC */
-	29,	/* Alt IC to PC */
-	30,	/* B to IC */
-	31,	/* Alt B to IC */
-	32,	/* PC to IC */
-	33,	/* Alt PC to IC */
-	34,	/* IC to IC */
-	35,	/* Alt IC to IC */
-	0,	/* Alt B to Alt B */
-	1,	/* B to Alt B */
-	2,	/* Alt PC to Alt B */
-	3,	/* PC to Alt B */
-	4,	/* Alt IC to Alt B */
-	5,	/* IC to Alt B */
-	6,	/* Alt B to Alt PC */
-	7,	/* B to Alt PC */
-	8,	/* Alt PC to Alt PC */
-	9,	/* PC to Alt PC */
-	10,	/* Alt IC to Alt PC */
-	11,	/* IC to Alt PC */
-	12,	/* Alt B to Alt IC */
-	13,	/* B to Alt IC */
-	14,	/* Alt PC to Alt IC */
-	15,	/* PC to Alt IC */
-	16,	/* Alt IC to Alt IC */
-	17	/* IC to Alt IC */
-};
-
-static struct
-{
-	byte mask;
-	byte from_pc;
-	byte to_pc;
-} xconnect_write_prog[] =
-{
-	{ LI_COEF_CH_CH, false, false },
-	{ LI_COEF_CH_PC, false, true },
-	{ LI_COEF_PC_CH,
true, false }, - { LI_COEF_PC_PC, true, true } -}; - - -static void xconnect_query_addresses(PLCI *plci) -{ - DIVA_CAPI_ADAPTER *a; - word w, ch; - byte *p; - - dbug(1, dprintf("[%06lx] %s,%d: xconnect_query_addresses", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - a = plci->adapter; - if (a->li_pri && ((plci->li_bchannel_id == 0) - || (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci != plci))) - { - dbug(1, dprintf("[%06x] %s,%d: Channel id wiped out", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - return; - } - p = plci->internal_req_buffer; - ch = (a->li_pri) ? plci->li_bchannel_id - 1 : 0; - *(p++) = UDATA_REQUEST_XCONNECT_FROM; - w = ch; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - w = ch | XCONNECT_CHANNEL_PORT_PC; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - plci->NData[0].P = plci->internal_req_buffer; - plci->NData[0].PLength = p - plci->internal_req_buffer; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_UDATA; - plci->adapter->request(&plci->NL); -} - - -static void xconnect_write_coefs(PLCI *plci, word internal_command) -{ - - dbug(1, dprintf("[%06lx] %s,%d: xconnect_write_coefs %04x", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, internal_command)); - - plci->li_write_command = internal_command; - plci->li_write_channel = 0; -} - - -static byte xconnect_write_coefs_process(dword Id, PLCI *plci, byte Rc) -{ - DIVA_CAPI_ADAPTER *a; - word w, n, i, j, r, s, to_ch; - dword d; - byte *p; - struct xconnect_transfer_address_s *transfer_address; - byte ch_map[MIXER_CHANNELS_BRI]; - - dbug(1, dprintf("[%06x] %s,%d: xconnect_write_coefs_process %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->li_write_channel)); - - a = plci->adapter; - if ((plci->li_bchannel_id == 0) - || (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci != plci)) - { - dbug(1, dprintf("[%06x] %s,%d: Channel id wiped out", - UnMapId(Id), (char *)(FILE_), __LINE__)); - return (true); - } - i = a->li_base + (plci->li_bchannel_id - 1); - j = plci->li_write_channel; - p = plci->internal_req_buffer; - if (j != 0) - { - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: LI write coefs failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - return (false); - } - } - if (li_config_table[i].adapter->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - { - r = 0; - s = 0; - if (j < li_total_channels) - { - if (li_config_table[i].channel & LI_CHANNEL_ADDRESSES_SET) - { - s = ((li_config_table[i].send_b.card_address.low | li_config_table[i].send_b.card_address.high) ? - (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_CH_PC | LI_COEF_PC_PC)) & - ((li_config_table[i].send_pc.card_address.low | li_config_table[i].send_pc.card_address.high) ? 
- (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_CH_CH | LI_COEF_PC_CH)); - } - r = ((li_config_table[i].coef_table[j] & 0xf) ^ (li_config_table[i].coef_table[j] >> 4)); - while ((j < li_total_channels) - && ((r == 0) - || (!(li_config_table[j].channel & LI_CHANNEL_ADDRESSES_SET)) - || (!li_config_table[j].adapter->li_pri - && (j >= li_config_table[j].adapter->li_base + MIXER_BCHANNELS_BRI)) - || (((li_config_table[j].send_b.card_address.low != li_config_table[i].send_b.card_address.low) - || (li_config_table[j].send_b.card_address.high != li_config_table[i].send_b.card_address.high)) - && (!(a->manufacturer_features & MANUFACTURER_FEATURE_DMACONNECT) - || !(li_config_table[j].adapter->manufacturer_features & MANUFACTURER_FEATURE_DMACONNECT))) - || ((li_config_table[j].adapter->li_base != a->li_base) - && !(r & s & - ((li_config_table[j].send_b.card_address.low | li_config_table[j].send_b.card_address.high) ? - (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_PC_CH | LI_COEF_PC_PC)) & - ((li_config_table[j].send_pc.card_address.low | li_config_table[j].send_pc.card_address.high) ? - (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_CH_CH | LI_COEF_CH_PC)))))) - { - j++; - if (j < li_total_channels) - r = ((li_config_table[i].coef_table[j] & 0xf) ^ (li_config_table[i].coef_table[j] >> 4)); - } - } - if (j < li_total_channels) - { - plci->internal_command = plci->li_write_command; - if (plci_nl_busy(plci)) - return (true); - to_ch = (a->li_pri) ? plci->li_bchannel_id - 1 : 0; - *(p++) = UDATA_REQUEST_XCONNECT_TO; - do - { - if (li_config_table[j].adapter->li_base != a->li_base) - { - r &= s & - ((li_config_table[j].send_b.card_address.low | li_config_table[j].send_b.card_address.high) ? - (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_PC_CH | LI_COEF_PC_PC)) & - ((li_config_table[j].send_pc.card_address.low | li_config_table[j].send_pc.card_address.high) ? - (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_CH_CH | LI_COEF_CH_PC)); - } - n = 0; - do - { - if (r & xconnect_write_prog[n].mask) - { - if (xconnect_write_prog[n].from_pc) - transfer_address = &(li_config_table[j].send_pc); - else - transfer_address = &(li_config_table[j].send_b); - d = transfer_address->card_address.low; - *(p++) = (byte) d; - *(p++) = (byte)(d >> 8); - *(p++) = (byte)(d >> 16); - *(p++) = (byte)(d >> 24); - d = transfer_address->card_address.high; - *(p++) = (byte) d; - *(p++) = (byte)(d >> 8); - *(p++) = (byte)(d >> 16); - *(p++) = (byte)(d >> 24); - d = transfer_address->offset; - *(p++) = (byte) d; - *(p++) = (byte)(d >> 8); - *(p++) = (byte)(d >> 16); - *(p++) = (byte)(d >> 24); - w = xconnect_write_prog[n].to_pc ? to_ch | XCONNECT_CHANNEL_PORT_PC : to_ch; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - w = ((li_config_table[i].coef_table[j] & xconnect_write_prog[n].mask) == 0) ? 0x01 : - (li_config_table[i].adapter->u_law ? - (li_config_table[j].adapter->u_law ? 0x80 : 0x86) : - (li_config_table[j].adapter->u_law ? 
0x7a : 0x80)); - *(p++) = (byte) w; - *(p++) = (byte) 0; - li_config_table[i].coef_table[j] ^= xconnect_write_prog[n].mask << 4; - } - n++; - } while ((n < ARRAY_SIZE(xconnect_write_prog)) - && ((p - plci->internal_req_buffer) + 16 < INTERNAL_REQ_BUFFER_SIZE)); - if (n == ARRAY_SIZE(xconnect_write_prog)) - { - do - { - j++; - if (j < li_total_channels) - r = ((li_config_table[i].coef_table[j] & 0xf) ^ (li_config_table[i].coef_table[j] >> 4)); - } while ((j < li_total_channels) - && ((r == 0) - || (!(li_config_table[j].channel & LI_CHANNEL_ADDRESSES_SET)) - || (!li_config_table[j].adapter->li_pri - && (j >= li_config_table[j].adapter->li_base + MIXER_BCHANNELS_BRI)) - || (((li_config_table[j].send_b.card_address.low != li_config_table[i].send_b.card_address.low) - || (li_config_table[j].send_b.card_address.high != li_config_table[i].send_b.card_address.high)) - && (!(a->manufacturer_features & MANUFACTURER_FEATURE_DMACONNECT) - || !(li_config_table[j].adapter->manufacturer_features & MANUFACTURER_FEATURE_DMACONNECT))) - || ((li_config_table[j].adapter->li_base != a->li_base) - && !(r & s & - ((li_config_table[j].send_b.card_address.low | li_config_table[j].send_b.card_address.high) ? - (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_PC_CH | LI_COEF_PC_PC)) & - ((li_config_table[j].send_pc.card_address.low | li_config_table[j].send_pc.card_address.high) ? - (LI_COEF_CH_CH | LI_COEF_CH_PC | LI_COEF_PC_CH | LI_COEF_PC_PC) : (LI_COEF_CH_CH | LI_COEF_CH_PC)))))); - } - } while ((j < li_total_channels) - && ((p - plci->internal_req_buffer) + 16 < INTERNAL_REQ_BUFFER_SIZE)); - } - else if (j == li_total_channels) - { - plci->internal_command = plci->li_write_command; - if (plci_nl_busy(plci)) - return (true); - if (a->li_pri) - { - *(p++) = UDATA_REQUEST_SET_MIXER_COEFS_PRI_SYNC; - w = 0; - if (li_config_table[i].channel & LI_CHANNEL_TX_DATA) - w |= MIXER_FEATURE_ENABLE_TX_DATA; - if (li_config_table[i].channel & LI_CHANNEL_RX_DATA) - w |= MIXER_FEATURE_ENABLE_RX_DATA; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - } - else - { - *(p++) = UDATA_REQUEST_SET_MIXER_COEFS_BRI; - w = 0; - if ((plci->tel == ADV_VOICE) && (plci == a->AdvSignalPLCI) - && (ADV_VOICE_NEW_COEF_BASE + sizeof(word) <= a->adv_voice_coef_length)) - { - w = GET_WORD(a->adv_voice_coef_buffer + ADV_VOICE_NEW_COEF_BASE); - } - if (li_config_table[i].channel & LI_CHANNEL_TX_DATA) - w |= MIXER_FEATURE_ENABLE_TX_DATA; - if (li_config_table[i].channel & LI_CHANNEL_RX_DATA) - w |= MIXER_FEATURE_ENABLE_RX_DATA; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - for (j = 0; j < sizeof(ch_map); j += 2) - { - if (plci->li_bchannel_id == 2) - { - ch_map[j] = (byte)(j + 1); - ch_map[j + 1] = (byte) j; - } - else - { - ch_map[j] = (byte) j; - ch_map[j + 1] = (byte)(j + 1); - } - } - for (n = 0; n < ARRAY_SIZE(mixer_write_prog_bri); n++) - { - i = a->li_base + ch_map[mixer_write_prog_bri[n].to_ch]; - j = a->li_base + ch_map[mixer_write_prog_bri[n].from_ch]; - if (li_config_table[i].channel & li_config_table[j].channel & LI_CHANNEL_INVOLVED) - { - *p = (mixer_write_prog_bri[n].xconnect_override != 0) ? - mixer_write_prog_bri[n].xconnect_override : - ((li_config_table[i].coef_table[j] & mixer_write_prog_bri[n].mask) ? 
0x80 : 0x01); - if ((i >= a->li_base + MIXER_BCHANNELS_BRI) || (j >= a->li_base + MIXER_BCHANNELS_BRI)) - { - w = ((li_config_table[i].coef_table[j] & 0xf) ^ (li_config_table[i].coef_table[j] >> 4)); - li_config_table[i].coef_table[j] ^= (w & mixer_write_prog_bri[n].mask) << 4; - } - } - else - { - *p = 0x00; - if ((a->AdvSignalPLCI != NULL) && (a->AdvSignalPLCI->tel == ADV_VOICE)) - { - w = (plci == a->AdvSignalPLCI) ? n : mixer_swapped_index_bri[n]; - if (ADV_VOICE_NEW_COEF_BASE + sizeof(word) + w < a->adv_voice_coef_length) - *p = a->adv_voice_coef_buffer[ADV_VOICE_NEW_COEF_BASE + sizeof(word) + w]; - } - } - p++; - } - } - j = li_total_channels + 1; - } - } - else - { - if (j <= li_total_channels) - { - plci->internal_command = plci->li_write_command; - if (plci_nl_busy(plci)) - return (true); - if (j < a->li_base) - j = a->li_base; - if (a->li_pri) - { - *(p++) = UDATA_REQUEST_SET_MIXER_COEFS_PRI_SYNC; - w = 0; - if (li_config_table[i].channel & LI_CHANNEL_TX_DATA) - w |= MIXER_FEATURE_ENABLE_TX_DATA; - if (li_config_table[i].channel & LI_CHANNEL_RX_DATA) - w |= MIXER_FEATURE_ENABLE_RX_DATA; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - for (n = 0; n < ARRAY_SIZE(mixer_write_prog_pri); n++) - { - *(p++) = (byte)((plci->li_bchannel_id - 1) | mixer_write_prog_pri[n].line_flags); - for (j = a->li_base; j < a->li_base + MIXER_CHANNELS_PRI; j++) - { - w = ((li_config_table[i].coef_table[j] & 0xf) ^ (li_config_table[i].coef_table[j] >> 4)); - if (w & mixer_write_prog_pri[n].mask) - { - *(p++) = (li_config_table[i].coef_table[j] & mixer_write_prog_pri[n].mask) ? 0x80 : 0x01; - li_config_table[i].coef_table[j] ^= mixer_write_prog_pri[n].mask << 4; - } - else - *(p++) = 0x00; - } - *(p++) = (byte)((plci->li_bchannel_id - 1) | MIXER_COEF_LINE_ROW_FLAG | mixer_write_prog_pri[n].line_flags); - for (j = a->li_base; j < a->li_base + MIXER_CHANNELS_PRI; j++) - { - w = ((li_config_table[j].coef_table[i] & 0xf) ^ (li_config_table[j].coef_table[i] >> 4)); - if (w & mixer_write_prog_pri[n].mask) - { - *(p++) = (li_config_table[j].coef_table[i] & mixer_write_prog_pri[n].mask) ? 0x80 : 0x01; - li_config_table[j].coef_table[i] ^= mixer_write_prog_pri[n].mask << 4; - } - else - *(p++) = 0x00; - } - } - } - else - { - *(p++) = UDATA_REQUEST_SET_MIXER_COEFS_BRI; - w = 0; - if ((plci->tel == ADV_VOICE) && (plci == a->AdvSignalPLCI) - && (ADV_VOICE_NEW_COEF_BASE + sizeof(word) <= a->adv_voice_coef_length)) - { - w = GET_WORD(a->adv_voice_coef_buffer + ADV_VOICE_NEW_COEF_BASE); - } - if (li_config_table[i].channel & LI_CHANNEL_TX_DATA) - w |= MIXER_FEATURE_ENABLE_TX_DATA; - if (li_config_table[i].channel & LI_CHANNEL_RX_DATA) - w |= MIXER_FEATURE_ENABLE_RX_DATA; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - for (j = 0; j < sizeof(ch_map); j += 2) - { - if (plci->li_bchannel_id == 2) - { - ch_map[j] = (byte)(j + 1); - ch_map[j + 1] = (byte) j; - } - else - { - ch_map[j] = (byte) j; - ch_map[j + 1] = (byte)(j + 1); - } - } - for (n = 0; n < ARRAY_SIZE(mixer_write_prog_bri); n++) - { - i = a->li_base + ch_map[mixer_write_prog_bri[n].to_ch]; - j = a->li_base + ch_map[mixer_write_prog_bri[n].from_ch]; - if (li_config_table[i].channel & li_config_table[j].channel & LI_CHANNEL_INVOLVED) - { - *p = ((li_config_table[i].coef_table[j] & mixer_write_prog_bri[n].mask) ? 
0x80 : 0x01); - w = ((li_config_table[i].coef_table[j] & 0xf) ^ (li_config_table[i].coef_table[j] >> 4)); - li_config_table[i].coef_table[j] ^= (w & mixer_write_prog_bri[n].mask) << 4; - } - else - { - *p = 0x00; - if ((a->AdvSignalPLCI != NULL) && (a->AdvSignalPLCI->tel == ADV_VOICE)) - { - w = (plci == a->AdvSignalPLCI) ? n : mixer_swapped_index_bri[n]; - if (ADV_VOICE_NEW_COEF_BASE + sizeof(word) + w < a->adv_voice_coef_length) - *p = a->adv_voice_coef_buffer[ADV_VOICE_NEW_COEF_BASE + sizeof(word) + w]; - } - } - p++; - } - } - j = li_total_channels + 1; - } - } - plci->li_write_channel = j; - if (p != plci->internal_req_buffer) - { - plci->NData[0].P = plci->internal_req_buffer; - plci->NData[0].PLength = p - plci->internal_req_buffer; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_UDATA; - plci->adapter->request(&plci->NL); - } - return (true); -} - - -static void mixer_notify_update(PLCI *plci, byte others) -{ - DIVA_CAPI_ADAPTER *a; - word i, w; - PLCI *notify_plci; - byte msg[sizeof(CAPI_MSG_HEADER) + 6]; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_notify_update %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, others)); - - a = plci->adapter; - if (a->profile.Global_Options & GL_LINE_INTERCONNECT_SUPPORTED) - { - if (others) - plci->li_notify_update = true; - i = 0; - do - { - notify_plci = NULL; - if (others) - { - while ((i < li_total_channels) && (li_config_table[i].plci == NULL)) - i++; - if (i < li_total_channels) - notify_plci = li_config_table[i++].plci; - } - else - { - if ((plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - notify_plci = plci; - } - } - if ((notify_plci != NULL) - && !notify_plci->li_notify_update - && (notify_plci->appl != NULL) - && (notify_plci->State) - && notify_plci->NL.Id && !notify_plci->nl_remove_id) - { - notify_plci->li_notify_update = true; - ((CAPI_MSG *) msg)->header.length = 18; - ((CAPI_MSG *) msg)->header.appl_id = notify_plci->appl->Id; - ((CAPI_MSG *) msg)->header.command = _FACILITY_R; - ((CAPI_MSG *) msg)->header.number = 0; - ((CAPI_MSG *) msg)->header.controller = notify_plci->adapter->Id; - ((CAPI_MSG *) msg)->header.plci = notify_plci->Id; - ((CAPI_MSG *) msg)->header.ncci = 0; - ((CAPI_MSG *) msg)->info.facility_req.Selector = SELECTOR_LINE_INTERCONNECT; - ((CAPI_MSG *) msg)->info.facility_req.structs[0] = 3; - ((CAPI_MSG *) msg)->info.facility_req.structs[1] = LI_REQ_SILENT_UPDATE & 0xff; - ((CAPI_MSG *) msg)->info.facility_req.structs[2] = LI_REQ_SILENT_UPDATE >> 8; - ((CAPI_MSG *) msg)->info.facility_req.structs[3] = 0; - w = api_put(notify_plci->appl, (CAPI_MSG *) msg); - if (w != _QUEUE_FULL) - { - if (w != 0) - { - dbug(1, dprintf("[%06lx] %s,%d: Interconnect notify failed %06x %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, - (dword)((notify_plci->Id << 8) | UnMapController(notify_plci->adapter->Id)), w)); - } - notify_plci->li_notify_update = false; - } - } - } while (others && (notify_plci != NULL)); - if (others) - plci->li_notify_update = false; - } -} - - -static void mixer_clear_config(PLCI *plci) -{ - DIVA_CAPI_ADAPTER *a; - word i, j; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_clear_config", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - plci->li_notify_update = false; - plci->li_plci_b_write_pos = 0; - plci->li_plci_b_read_pos = 0; - plci->li_plci_b_req_pos = 0; - 
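/* The three li_plci_b_*_pos fields reset above index a fixed-size ring
 * of pending interconnect requests.  A minimal sketch of the free-space
 * test that li_check_plci_b() (further below) applies before queueing;
 * the queue size and helper name here are illustrative, not taken from
 * this file, and the check requires at least two free slots before
 * another request is accepted: */

#define LI_QUEUE_ENTRIES 256	/* stands in for LI_PLCI_B_QUEUE_ENTRIES */

static int li_queue_has_room(unsigned int read_pos, unsigned int write_pos)
{
	/* Distance from write to read, wrapping once around the ring;
	 * one slot stays unused so a full ring is distinguishable from
	 * an empty one. */
	unsigned int free_slots = ((read_pos > write_pos) ?
				   read_pos : LI_QUEUE_ENTRIES + read_pos)
		- write_pos - 1;
	return (free_slots >= 2);
}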
a = plci->adapter; - if ((plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - i = a->li_base + (plci->li_bchannel_id - 1); - li_config_table[i].curchnl = 0; - li_config_table[i].channel = 0; - li_config_table[i].chflags = 0; - for (j = 0; j < li_total_channels; j++) - { - li_config_table[j].flag_table[i] = 0; - li_config_table[i].flag_table[j] = 0; - li_config_table[i].coef_table[j] = 0; - li_config_table[j].coef_table[i] = 0; - } - if (!a->li_pri) - { - li_config_table[i].coef_table[i] |= LI_COEF_CH_PC_SET | LI_COEF_PC_CH_SET; - if ((plci->tel == ADV_VOICE) && (plci == a->AdvSignalPLCI)) - { - i = a->li_base + MIXER_IC_CHANNEL_BASE + (plci->li_bchannel_id - 1); - li_config_table[i].curchnl = 0; - li_config_table[i].channel = 0; - li_config_table[i].chflags = 0; - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].flag_table[j] = 0; - li_config_table[j].flag_table[i] = 0; - li_config_table[i].coef_table[j] = 0; - li_config_table[j].coef_table[i] = 0; - } - if (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) - { - i = a->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci->li_bchannel_id); - li_config_table[i].curchnl = 0; - li_config_table[i].channel = 0; - li_config_table[i].chflags = 0; - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].flag_table[j] = 0; - li_config_table[j].flag_table[i] = 0; - li_config_table[i].coef_table[j] = 0; - li_config_table[j].coef_table[i] = 0; - } - } - } - } - } -} - - -static void mixer_prepare_switch(dword Id, PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: mixer_prepare_switch", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - do - { - mixer_indication_coefs_set(Id, plci); - } while (plci->li_plci_b_read_pos != plci->li_plci_b_req_pos); -} - - -static word mixer_save_config(dword Id, PLCI *plci, byte Rc) -{ - DIVA_CAPI_ADAPTER *a; - word i, j; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_save_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - a = plci->adapter; - if ((plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - i = a->li_base + (plci->li_bchannel_id - 1); - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].coef_table[j] &= 0xf; - li_config_table[j].coef_table[i] &= 0xf; - } - if (!a->li_pri) - li_config_table[i].coef_table[i] |= LI_COEF_CH_PC_SET | LI_COEF_PC_CH_SET; - } - return (GOOD); -} - - -static word mixer_restore_config(dword Id, PLCI *plci, byte Rc) -{ - DIVA_CAPI_ADAPTER *a; - word Info; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_restore_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - Info = GOOD; - a = plci->adapter; - if ((plci->B1_facilities & B1_FACILITY_MIXER) - && (plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - switch (plci->adjust_b_state) - { - case ADJUST_B_RESTORE_MIXER_1: - if (a->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - { - plci->internal_command = plci->adjust_b_command; - if (plci_nl_busy(plci)) - { - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_1; - break; - } - xconnect_query_addresses(plci); - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_2; - break; - } - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_5; - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_MIXER_2: - case ADJUST_B_RESTORE_MIXER_3: - case ADJUST_B_RESTORE_MIXER_4: - if ((Rc != OK) && (Rc != OK_FC) && (Rc != 0)) - { - dbug(1, 
dprintf("[%06lx] %s,%d: Adjust B query addresses failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - if (Rc == OK) - { - if (plci->adjust_b_state == ADJUST_B_RESTORE_MIXER_2) - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_3; - else if (plci->adjust_b_state == ADJUST_B_RESTORE_MIXER_4) - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_5; - } - else if (Rc == 0) - { - if (plci->adjust_b_state == ADJUST_B_RESTORE_MIXER_2) - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_4; - else if (plci->adjust_b_state == ADJUST_B_RESTORE_MIXER_3) - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_5; - } - if (plci->adjust_b_state != ADJUST_B_RESTORE_MIXER_5) - { - plci->internal_command = plci->adjust_b_command; - break; - } - /* fall through */ - case ADJUST_B_RESTORE_MIXER_5: - xconnect_write_coefs(plci, plci->adjust_b_command); - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_6; - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_MIXER_6: - if (!xconnect_write_coefs_process(Id, plci, Rc)) - { - dbug(1, dprintf("[%06lx] %s,%d: Write mixer coefs failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - if (plci->internal_command) - break; - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_7; - case ADJUST_B_RESTORE_MIXER_7: - break; - } - } - return (Info); -} - - -static void mixer_command(dword Id, PLCI *plci, byte Rc) -{ - DIVA_CAPI_ADAPTER *a; - word i, internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_command %02x %04x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command, - plci->li_cmd)); - - a = plci->adapter; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (plci->li_cmd) - { - case LI_REQ_CONNECT: - case LI_REQ_DISCONNECT: - case LI_REQ_SILENT_UPDATE: - switch (internal_command) - { - default: - if (plci->li_channel_bits & LI_CHANNEL_INVOLVED) - { - adjust_b1_resource(Id, plci, NULL, (word)(plci->B1_facilities | - B1_FACILITY_MIXER), MIXER_COMMAND_1); - } - /* fall through */ - case MIXER_COMMAND_1: - if (plci->li_channel_bits & LI_CHANNEL_INVOLVED) - { - if (adjust_b_process(Id, plci, Rc) != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Load mixer failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - break; - } - if (plci->internal_command) - return; - } - plci->li_plci_b_req_pos = plci->li_plci_b_write_pos; - if ((plci->li_channel_bits & LI_CHANNEL_INVOLVED) - || ((get_b1_facilities(plci, plci->B1_resource) & B1_FACILITY_MIXER) - && (add_b1_facilities(plci, plci->B1_resource, (word)(plci->B1_facilities & - ~B1_FACILITY_MIXER)) == plci->B1_resource))) - { - xconnect_write_coefs(plci, MIXER_COMMAND_2); - } - else - { - do - { - mixer_indication_coefs_set(Id, plci); - } while (plci->li_plci_b_read_pos != plci->li_plci_b_req_pos); - } - /* fall through */ - case MIXER_COMMAND_2: - if ((plci->li_channel_bits & LI_CHANNEL_INVOLVED) - || ((get_b1_facilities(plci, plci->B1_resource) & B1_FACILITY_MIXER) - && (add_b1_facilities(plci, plci->B1_resource, (word)(plci->B1_facilities & - ~B1_FACILITY_MIXER)) == plci->B1_resource))) - { - if (!xconnect_write_coefs_process(Id, plci, Rc)) - { - dbug(1, dprintf("[%06lx] %s,%d: Write mixer coefs failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - if (plci->li_plci_b_write_pos != plci->li_plci_b_req_pos) - { - do - { - plci->li_plci_b_write_pos = (plci->li_plci_b_write_pos == 0) ? - LI_PLCI_B_QUEUE_ENTRIES - 1 : plci->li_plci_b_write_pos - 1; - i = (plci->li_plci_b_write_pos == 0) ? 
- LI_PLCI_B_QUEUE_ENTRIES - 1 : plci->li_plci_b_write_pos - 1; - } while ((plci->li_plci_b_write_pos != plci->li_plci_b_req_pos) - && !(plci->li_plci_b_queue[i] & LI_PLCI_B_LAST_FLAG)); - } - break; - } - if (plci->internal_command) - return; - } - if (!(plci->li_channel_bits & LI_CHANNEL_INVOLVED)) - { - adjust_b1_resource(Id, plci, NULL, (word)(plci->B1_facilities & - ~B1_FACILITY_MIXER), MIXER_COMMAND_3); - } - /* fall through */ - case MIXER_COMMAND_3: - if (!(plci->li_channel_bits & LI_CHANNEL_INVOLVED)) - { - if (adjust_b_process(Id, plci, Rc) != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Unload mixer failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - break; - } - if (plci->internal_command) - return; - } - break; - } - break; - } - if ((plci->li_bchannel_id == 0) - || (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci != plci)) - { - dbug(1, dprintf("[%06x] %s,%d: Channel id wiped out %d", - UnMapId(Id), (char *)(FILE_), __LINE__, (int)(plci->li_bchannel_id))); - } - else - { - i = a->li_base + (plci->li_bchannel_id - 1); - li_config_table[i].curchnl = plci->li_channel_bits; - if (!a->li_pri && (plci->tel == ADV_VOICE) && (plci == a->AdvSignalPLCI)) - { - i = a->li_base + MIXER_IC_CHANNEL_BASE + (plci->li_bchannel_id - 1); - li_config_table[i].curchnl = plci->li_channel_bits; - if (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) - { - i = a->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci->li_bchannel_id); - li_config_table[i].curchnl = plci->li_channel_bits; - } - } - } -} - - -static void li_update_connect(dword Id, DIVA_CAPI_ADAPTER *a, PLCI *plci, - dword plci_b_id, byte connect, dword li_flags) -{ - word i, ch_a, ch_a_v, ch_a_s, ch_b, ch_b_v, ch_b_s; - PLCI *plci_b; - DIVA_CAPI_ADAPTER *a_b; - - a_b = &(adapter[MapController((byte)(plci_b_id & 0x7f)) - 1]); - plci_b = &(a_b->plci[((plci_b_id >> 8) & 0xff) - 1]); - ch_a = a->li_base + (plci->li_bchannel_id - 1); - if (!a->li_pri && (plci->tel == ADV_VOICE) - && (plci == a->AdvSignalPLCI) && (Id & EXT_CONTROLLER)) - { - ch_a_v = ch_a + MIXER_IC_CHANNEL_BASE; - ch_a_s = (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) ? - a->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci->li_bchannel_id) : ch_a_v; - } - else - { - ch_a_v = ch_a; - ch_a_s = ch_a; - } - ch_b = a_b->li_base + (plci_b->li_bchannel_id - 1); - if (!a_b->li_pri && (plci_b->tel == ADV_VOICE) - && (plci_b == a_b->AdvSignalPLCI) && (plci_b_id & EXT_CONTROLLER)) - { - ch_b_v = ch_b + MIXER_IC_CHANNEL_BASE; - ch_b_s = (a_b->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) ? 
- a_b->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci_b->li_bchannel_id) : ch_b_v; - } - else - { - ch_b_v = ch_b; - ch_b_s = ch_b; - } - if (connect) - { - li_config_table[ch_a].flag_table[ch_a_v] &= ~LI_FLAG_MONITOR; - li_config_table[ch_a].flag_table[ch_a_s] &= ~LI_FLAG_MONITOR; - li_config_table[ch_a_v].flag_table[ch_a] &= ~(LI_FLAG_ANNOUNCEMENT | LI_FLAG_MIX); - li_config_table[ch_a_s].flag_table[ch_a] &= ~(LI_FLAG_ANNOUNCEMENT | LI_FLAG_MIX); - } - li_config_table[ch_a].flag_table[ch_b_v] &= ~LI_FLAG_MONITOR; - li_config_table[ch_a].flag_table[ch_b_s] &= ~LI_FLAG_MONITOR; - li_config_table[ch_b_v].flag_table[ch_a] &= ~(LI_FLAG_ANNOUNCEMENT | LI_FLAG_MIX); - li_config_table[ch_b_s].flag_table[ch_a] &= ~(LI_FLAG_ANNOUNCEMENT | LI_FLAG_MIX); - if (ch_a_v == ch_b_v) - { - li_config_table[ch_a_v].flag_table[ch_b_v] &= ~LI_FLAG_CONFERENCE; - li_config_table[ch_a_s].flag_table[ch_b_s] &= ~LI_FLAG_CONFERENCE; - } - else - { - if (li_config_table[ch_a_v].flag_table[ch_b_v] & LI_FLAG_CONFERENCE) - { - for (i = 0; i < li_total_channels; i++) - { - if (i != ch_a_v) - li_config_table[ch_a_v].flag_table[i] &= ~LI_FLAG_CONFERENCE; - } - } - if (li_config_table[ch_a_s].flag_table[ch_b_v] & LI_FLAG_CONFERENCE) - { - for (i = 0; i < li_total_channels; i++) - { - if (i != ch_a_s) - li_config_table[ch_a_s].flag_table[i] &= ~LI_FLAG_CONFERENCE; - } - } - if (li_config_table[ch_b_v].flag_table[ch_a_v] & LI_FLAG_CONFERENCE) - { - for (i = 0; i < li_total_channels; i++) - { - if (i != ch_a_v) - li_config_table[i].flag_table[ch_a_v] &= ~LI_FLAG_CONFERENCE; - } - } - if (li_config_table[ch_b_v].flag_table[ch_a_s] & LI_FLAG_CONFERENCE) - { - for (i = 0; i < li_total_channels; i++) - { - if (i != ch_a_s) - li_config_table[i].flag_table[ch_a_s] &= ~LI_FLAG_CONFERENCE; - } - } - } - if (li_flags & LI_FLAG_CONFERENCE_A_B) - { - li_config_table[ch_b_v].flag_table[ch_a_v] |= LI_FLAG_CONFERENCE; - li_config_table[ch_b_s].flag_table[ch_a_v] |= LI_FLAG_CONFERENCE; - li_config_table[ch_b_v].flag_table[ch_a_s] |= LI_FLAG_CONFERENCE; - li_config_table[ch_b_s].flag_table[ch_a_s] |= LI_FLAG_CONFERENCE; - } - if (li_flags & LI_FLAG_CONFERENCE_B_A) - { - li_config_table[ch_a_v].flag_table[ch_b_v] |= LI_FLAG_CONFERENCE; - li_config_table[ch_a_v].flag_table[ch_b_s] |= LI_FLAG_CONFERENCE; - li_config_table[ch_a_s].flag_table[ch_b_v] |= LI_FLAG_CONFERENCE; - li_config_table[ch_a_s].flag_table[ch_b_s] |= LI_FLAG_CONFERENCE; - } - if (li_flags & LI_FLAG_MONITOR_A) - { - li_config_table[ch_a].flag_table[ch_a_v] |= LI_FLAG_MONITOR; - li_config_table[ch_a].flag_table[ch_a_s] |= LI_FLAG_MONITOR; - } - if (li_flags & LI_FLAG_MONITOR_B) - { - li_config_table[ch_a].flag_table[ch_b_v] |= LI_FLAG_MONITOR; - li_config_table[ch_a].flag_table[ch_b_s] |= LI_FLAG_MONITOR; - } - if (li_flags & LI_FLAG_ANNOUNCEMENT_A) - { - li_config_table[ch_a_v].flag_table[ch_a] |= LI_FLAG_ANNOUNCEMENT; - li_config_table[ch_a_s].flag_table[ch_a] |= LI_FLAG_ANNOUNCEMENT; - } - if (li_flags & LI_FLAG_ANNOUNCEMENT_B) - { - li_config_table[ch_b_v].flag_table[ch_a] |= LI_FLAG_ANNOUNCEMENT; - li_config_table[ch_b_s].flag_table[ch_a] |= LI_FLAG_ANNOUNCEMENT; - } - if (li_flags & LI_FLAG_MIX_A) - { - li_config_table[ch_a_v].flag_table[ch_a] |= LI_FLAG_MIX; - li_config_table[ch_a_s].flag_table[ch_a] |= LI_FLAG_MIX; - } - if (li_flags & LI_FLAG_MIX_B) - { - li_config_table[ch_b_v].flag_table[ch_a] |= LI_FLAG_MIX; - li_config_table[ch_b_s].flag_table[ch_a] |= LI_FLAG_MIX; - } - if (ch_a_v != ch_a_s) - { - li_config_table[ch_a_v].flag_table[ch_a_s] |= LI_FLAG_CONFERENCE; - 
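/* For orientation: the per-pair bits written into flag_table above are
 * later folded into coefficient requests by mixer_calculate_coefs().
 * A minimal sketch of that flag-to-coefficient mapping, mirroring the
 * loop in mixer_calculate_coefs(); the helper name is illustrative and
 * the LI_FLAG_*/LI_COEF_* constants come from the driver headers: */

static unsigned char li_coef_bits_from_flags(unsigned char flags)
{
	unsigned char coef = 0;

	if (flags & LI_FLAG_INTERCONNECT)
		coef |= LI_COEF_CH_CH;	/* line to line */
	if (flags & LI_FLAG_MONITOR)
		coef |= LI_COEF_CH_PC;	/* line to PC */
	if (flags & LI_FLAG_MIX)
		coef |= LI_COEF_PC_CH;	/* PC to line */
	if (flags & LI_FLAG_PCCONNECT)
		coef |= LI_COEF_PC_PC;	/* PC to PC */
	return coef;
}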
li_config_table[ch_a_s].flag_table[ch_a_v] |= LI_FLAG_CONFERENCE; - } - if (ch_b_v != ch_b_s) - { - li_config_table[ch_b_v].flag_table[ch_b_s] |= LI_FLAG_CONFERENCE; - li_config_table[ch_b_s].flag_table[ch_b_v] |= LI_FLAG_CONFERENCE; - } -} - - -static void li2_update_connect(dword Id, DIVA_CAPI_ADAPTER *a, PLCI *plci, - dword plci_b_id, byte connect, dword li_flags) -{ - word ch_a, ch_a_v, ch_a_s, ch_b, ch_b_v, ch_b_s; - PLCI *plci_b; - DIVA_CAPI_ADAPTER *a_b; - - a_b = &(adapter[MapController((byte)(plci_b_id & 0x7f)) - 1]); - plci_b = &(a_b->plci[((plci_b_id >> 8) & 0xff) - 1]); - ch_a = a->li_base + (plci->li_bchannel_id - 1); - if (!a->li_pri && (plci->tel == ADV_VOICE) - && (plci == a->AdvSignalPLCI) && (Id & EXT_CONTROLLER)) - { - ch_a_v = ch_a + MIXER_IC_CHANNEL_BASE; - ch_a_s = (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) ? - a->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci->li_bchannel_id) : ch_a_v; - } - else - { - ch_a_v = ch_a; - ch_a_s = ch_a; - } - ch_b = a_b->li_base + (plci_b->li_bchannel_id - 1); - if (!a_b->li_pri && (plci_b->tel == ADV_VOICE) - && (plci_b == a_b->AdvSignalPLCI) && (plci_b_id & EXT_CONTROLLER)) - { - ch_b_v = ch_b + MIXER_IC_CHANNEL_BASE; - ch_b_s = (a_b->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) ? - a_b->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci_b->li_bchannel_id) : ch_b_v; - } - else - { - ch_b_v = ch_b; - ch_b_s = ch_b; - } - if (connect) - { - li_config_table[ch_b].flag_table[ch_b_v] &= ~LI_FLAG_MONITOR; - li_config_table[ch_b].flag_table[ch_b_s] &= ~LI_FLAG_MONITOR; - li_config_table[ch_b_v].flag_table[ch_b] &= ~LI_FLAG_MIX; - li_config_table[ch_b_s].flag_table[ch_b] &= ~LI_FLAG_MIX; - li_config_table[ch_b].flag_table[ch_b] &= ~LI_FLAG_PCCONNECT; - li_config_table[ch_b].chflags &= ~(LI_CHFLAG_MONITOR | LI_CHFLAG_MIX | LI_CHFLAG_LOOP); - } - li_config_table[ch_b_v].flag_table[ch_a_v] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - li_config_table[ch_b_s].flag_table[ch_a_v] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - li_config_table[ch_b_v].flag_table[ch_a_s] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - li_config_table[ch_b_s].flag_table[ch_a_s] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - li_config_table[ch_a_v].flag_table[ch_b_v] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - li_config_table[ch_a_v].flag_table[ch_b_s] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - li_config_table[ch_a_s].flag_table[ch_b_v] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - li_config_table[ch_a_s].flag_table[ch_b_s] &= ~(LI_FLAG_INTERCONNECT | LI_FLAG_CONFERENCE); - if (li_flags & LI2_FLAG_INTERCONNECT_A_B) - { - li_config_table[ch_b_v].flag_table[ch_a_v] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_b_s].flag_table[ch_a_v] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_b_v].flag_table[ch_a_s] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_b_s].flag_table[ch_a_s] |= LI_FLAG_INTERCONNECT; - } - if (li_flags & LI2_FLAG_INTERCONNECT_B_A) - { - li_config_table[ch_a_v].flag_table[ch_b_v] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_a_v].flag_table[ch_b_s] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_a_s].flag_table[ch_b_v] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_a_s].flag_table[ch_b_s] |= LI_FLAG_INTERCONNECT; - } - if (li_flags & LI2_FLAG_MONITOR_B) - { - li_config_table[ch_b].flag_table[ch_b_v] |= LI_FLAG_MONITOR; - li_config_table[ch_b].flag_table[ch_b_s] |= LI_FLAG_MONITOR; - } - if (li_flags & LI2_FLAG_MIX_B) - { - li_config_table[ch_b_v].flag_table[ch_b] |= LI_FLAG_MIX; - 
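/* Each coef_table byte doubles as a change log: the low nibble appears
 * to hold the requested LI_COEF_* bits and the high nibble the state
 * last programmed into the card, so the XOR of the two nibbles yields
 * the bits still to be written.  A minimal sketch of this idiom as used
 * throughout xconnect_write_coefs_process(); helper names are
 * illustrative: */

static unsigned char li_coef_dirty(unsigned char coef)
{
	return (coef & 0xf) ^ (coef >> 4);	/* requested ^ programmed */
}

static unsigned char li_coef_mark_written(unsigned char coef,
					  unsigned char mask)
{
	/* Toggle the high-nibble copy for the bits in mask so it matches
	 * the low nibble, as done after each write request is queued. */
	return coef ^ ((li_coef_dirty(coef) & mask) << 4);
}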
li_config_table[ch_b_s].flag_table[ch_b] |= LI_FLAG_MIX; - } - if (li_flags & LI2_FLAG_MONITOR_X) - li_config_table[ch_b].chflags |= LI_CHFLAG_MONITOR; - if (li_flags & LI2_FLAG_MIX_X) - li_config_table[ch_b].chflags |= LI_CHFLAG_MIX; - if (li_flags & LI2_FLAG_LOOP_B) - { - li_config_table[ch_b_v].flag_table[ch_b_v] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_b_s].flag_table[ch_b_v] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_b_v].flag_table[ch_b_s] |= LI_FLAG_INTERCONNECT; - li_config_table[ch_b_s].flag_table[ch_b_s] |= LI_FLAG_INTERCONNECT; - } - if (li_flags & LI2_FLAG_LOOP_PC) - li_config_table[ch_b].flag_table[ch_b] |= LI_FLAG_PCCONNECT; - if (li_flags & LI2_FLAG_LOOP_X) - li_config_table[ch_b].chflags |= LI_CHFLAG_LOOP; - if (li_flags & LI2_FLAG_PCCONNECT_A_B) - li_config_table[ch_b_s].flag_table[ch_a_s] |= LI_FLAG_PCCONNECT; - if (li_flags & LI2_FLAG_PCCONNECT_B_A) - li_config_table[ch_a_s].flag_table[ch_b_s] |= LI_FLAG_PCCONNECT; - if (ch_a_v != ch_a_s) - { - li_config_table[ch_a_v].flag_table[ch_a_s] |= LI_FLAG_CONFERENCE; - li_config_table[ch_a_s].flag_table[ch_a_v] |= LI_FLAG_CONFERENCE; - } - if (ch_b_v != ch_b_s) - { - li_config_table[ch_b_v].flag_table[ch_b_s] |= LI_FLAG_CONFERENCE; - li_config_table[ch_b_s].flag_table[ch_b_v] |= LI_FLAG_CONFERENCE; - } -} - - -static word li_check_main_plci(dword Id, PLCI *plci) -{ - if (plci == NULL) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong PLCI", - UnMapId(Id), (char *)(FILE_), __LINE__)); - return (_WRONG_IDENTIFIER); - } - if (!plci->State - || !plci->NL.Id || plci->nl_remove_id - || (plci->li_bchannel_id == 0)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong state", - UnMapId(Id), (char *)(FILE_), __LINE__)); - return (_WRONG_STATE); - } - li_config_table[plci->adapter->li_base + (plci->li_bchannel_id - 1)].plci = plci; - return (GOOD); -} - - -static PLCI *li_check_plci_b(dword Id, PLCI *plci, - dword plci_b_id, word plci_b_write_pos, byte *p_result) -{ - byte ctlr_b; - PLCI *plci_b; - - if (((plci->li_plci_b_read_pos > plci_b_write_pos) ? 
plci->li_plci_b_read_pos : - LI_PLCI_B_QUEUE_ENTRIES + plci->li_plci_b_read_pos) - plci_b_write_pos - 1 < 2) - { - dbug(1, dprintf("[%06lx] %s,%d: LI request overrun", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(p_result, _REQUEST_NOT_ALLOWED_IN_THIS_STATE); - return (NULL); - } - ctlr_b = 0; - if ((plci_b_id & 0x7f) != 0) - { - ctlr_b = MapController((byte)(plci_b_id & 0x7f)); - if ((ctlr_b > max_adapter) || ((ctlr_b != 0) && (adapter[ctlr_b - 1].request == NULL))) - ctlr_b = 0; - } - if ((ctlr_b == 0) - || (((plci_b_id >> 8) & 0xff) == 0) - || (((plci_b_id >> 8) & 0xff) > adapter[ctlr_b - 1].max_plci)) - { - dbug(1, dprintf("[%06lx] %s,%d: LI invalid second PLCI %08lx", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b_id)); - PUT_WORD(p_result, _WRONG_IDENTIFIER); - return (NULL); - } - plci_b = &(adapter[ctlr_b - 1].plci[((plci_b_id >> 8) & 0xff) - 1]); - if (!plci_b->State - || !plci_b->NL.Id || plci_b->nl_remove_id - || (plci_b->li_bchannel_id == 0)) - { - dbug(1, dprintf("[%06lx] %s,%d: LI peer in wrong state %08lx", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b_id)); - PUT_WORD(p_result, _REQUEST_NOT_ALLOWED_IN_THIS_STATE); - return (NULL); - } - li_config_table[plci_b->adapter->li_base + (plci_b->li_bchannel_id - 1)].plci = plci_b; - if (((byte)(plci_b_id & ~EXT_CONTROLLER)) != - ((byte)(UnMapController(plci->adapter->Id) & ~EXT_CONTROLLER)) - && (!(plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - || !(plci_b->adapter->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT))) - { - dbug(1, dprintf("[%06lx] %s,%d: LI not on same ctrl %08lx", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b_id)); - PUT_WORD(p_result, _WRONG_IDENTIFIER); - return (NULL); - } - if (!(get_b1_facilities(plci_b, add_b1_facilities(plci_b, plci_b->B1_resource, - (word)(plci_b->B1_facilities | B1_FACILITY_MIXER))) & B1_FACILITY_MIXER)) - { - dbug(1, dprintf("[%06lx] %s,%d: Interconnect peer cannot mix %d", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b->B1_resource)); - PUT_WORD(p_result, _REQUEST_NOT_ALLOWED_IN_THIS_STATE); - return (NULL); - } - return (plci_b); -} - - -static PLCI *li2_check_plci_b(dword Id, PLCI *plci, - dword plci_b_id, word plci_b_write_pos, byte *p_result) -{ - byte ctlr_b; - PLCI *plci_b; - - if (((plci->li_plci_b_read_pos > plci_b_write_pos) ? 
plci->li_plci_b_read_pos : - LI_PLCI_B_QUEUE_ENTRIES + plci->li_plci_b_read_pos) - plci_b_write_pos - 1 < 2) - { - dbug(1, dprintf("[%06lx] %s,%d: LI request overrun", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(p_result, _WRONG_STATE); - return (NULL); - } - ctlr_b = 0; - if ((plci_b_id & 0x7f) != 0) - { - ctlr_b = MapController((byte)(plci_b_id & 0x7f)); - if ((ctlr_b > max_adapter) || ((ctlr_b != 0) && (adapter[ctlr_b - 1].request == NULL))) - ctlr_b = 0; - } - if ((ctlr_b == 0) - || (((plci_b_id >> 8) & 0xff) == 0) - || (((plci_b_id >> 8) & 0xff) > adapter[ctlr_b - 1].max_plci)) - { - dbug(1, dprintf("[%06lx] %s,%d: LI invalid second PLCI %08lx", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b_id)); - PUT_WORD(p_result, _WRONG_IDENTIFIER); - return (NULL); - } - plci_b = &(adapter[ctlr_b - 1].plci[((plci_b_id >> 8) & 0xff) - 1]); - if (!plci_b->State - || !plci_b->NL.Id || plci_b->nl_remove_id - || (plci_b->li_bchannel_id == 0) - || (li_config_table[plci_b->adapter->li_base + (plci_b->li_bchannel_id - 1)].plci != plci_b)) - { - dbug(1, dprintf("[%06lx] %s,%d: LI peer in wrong state %08lx", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b_id)); - PUT_WORD(p_result, _WRONG_STATE); - return (NULL); - } - if (((byte)(plci_b_id & ~EXT_CONTROLLER)) != - ((byte)(UnMapController(plci->adapter->Id) & ~EXT_CONTROLLER)) - && (!(plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - || !(plci_b->adapter->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT))) - { - dbug(1, dprintf("[%06lx] %s,%d: LI not on same ctrl %08lx", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b_id)); - PUT_WORD(p_result, _WRONG_IDENTIFIER); - return (NULL); - } - if (!(get_b1_facilities(plci_b, add_b1_facilities(plci_b, plci_b->B1_resource, - (word)(plci_b->B1_facilities | B1_FACILITY_MIXER))) & B1_FACILITY_MIXER)) - { - dbug(1, dprintf("[%06lx] %s,%d: Interconnect peer cannot mix %d", - UnMapId(Id), (char *)(FILE_), __LINE__, plci_b->B1_resource)); - PUT_WORD(p_result, _WRONG_STATE); - return (NULL); - } - return (plci_b); -} - - -static byte mixer_request(dword Id, word Number, DIVA_CAPI_ADAPTER *a, PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word Info; - word i; - dword d, li_flags, plci_b_id; - PLCI *plci_b; - API_PARSE li_parms[3]; - API_PARSE li_req_parms[3]; - API_PARSE li_participant_struct[2]; - API_PARSE li_participant_parms[3]; - word participant_parms_pos; - byte result_buffer[32]; - byte *result; - word result_pos; - word plci_b_write_pos; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_request", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - Info = GOOD; - result = result_buffer; - result_buffer[0] = 0; - if (!(a->profile.Global_Options & GL_LINE_INTERCONNECT_SUPPORTED)) - { - dbug(1, dprintf("[%06lx] %s,%d: Facility not supported", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - } - else if (api_parse(&msg[1].info[1], msg[1].length, "ws", li_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - } - else - { - result_buffer[0] = 3; - PUT_WORD(&result_buffer[1], GET_WORD(li_parms[0].info)); - result_buffer[3] = 0; - switch (GET_WORD(li_parms[0].info)) - { - case LI_GET_SUPPORTED_SERVICES: - if (appl->appl_flags & APPL_FLAG_OLD_LI_SPEC) - { - result_buffer[0] = 17; - result_buffer[3] = 14; - PUT_WORD(&result_buffer[4], GOOD); - d = 0; - if (a->manufacturer_features & MANUFACTURER_FEATURE_MIXER_CH_CH) - d |= LI_CONFERENCING_SUPPORTED; - if 
(a->manufacturer_features & MANUFACTURER_FEATURE_MIXER_CH_PC) - d |= LI_MONITORING_SUPPORTED; - if (a->manufacturer_features & MANUFACTURER_FEATURE_MIXER_PC_CH) - d |= LI_ANNOUNCEMENTS_SUPPORTED | LI_MIXING_SUPPORTED; - if (a->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - d |= LI_CROSS_CONTROLLER_SUPPORTED; - PUT_DWORD(&result_buffer[6], d); - if (a->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - { - d = 0; - for (i = 0; i < li_total_channels; i++) - { - if ((li_config_table[i].adapter->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - && (li_config_table[i].adapter->li_pri - || (i < li_config_table[i].adapter->li_base + MIXER_BCHANNELS_BRI))) - { - d++; - } - } - } - else - { - d = a->li_pri ? a->li_channels : MIXER_BCHANNELS_BRI; - } - PUT_DWORD(&result_buffer[10], d / 2); - PUT_DWORD(&result_buffer[14], d); - } - else - { - result_buffer[0] = 25; - result_buffer[3] = 22; - PUT_WORD(&result_buffer[4], GOOD); - d = LI2_ASYMMETRIC_SUPPORTED | LI2_B_LOOPING_SUPPORTED | LI2_X_LOOPING_SUPPORTED; - if (a->manufacturer_features & MANUFACTURER_FEATURE_MIXER_CH_PC) - d |= LI2_MONITORING_SUPPORTED | LI2_REMOTE_MONITORING_SUPPORTED; - if (a->manufacturer_features & MANUFACTURER_FEATURE_MIXER_PC_CH) - d |= LI2_MIXING_SUPPORTED | LI2_REMOTE_MIXING_SUPPORTED; - if (a->manufacturer_features & MANUFACTURER_FEATURE_MIXER_PC_PC) - d |= LI2_PC_LOOPING_SUPPORTED; - if (a->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - d |= LI2_CROSS_CONTROLLER_SUPPORTED; - PUT_DWORD(&result_buffer[6], d); - d = a->li_pri ? a->li_channels : MIXER_BCHANNELS_BRI; - PUT_DWORD(&result_buffer[10], d / 2); - PUT_DWORD(&result_buffer[14], d - 1); - if (a->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - { - d = 0; - for (i = 0; i < li_total_channels; i++) - { - if ((li_config_table[i].adapter->manufacturer_features & MANUFACTURER_FEATURE_XCONNECT) - && (li_config_table[i].adapter->li_pri - || (i < li_config_table[i].adapter->li_base + MIXER_BCHANNELS_BRI))) - { - d++; - } - } - } - PUT_DWORD(&result_buffer[18], d / 2); - PUT_DWORD(&result_buffer[22], d - 1); - } - break; - - case LI_REQ_CONNECT: - if (li_parms[1].length == 8) - { - appl->appl_flags |= APPL_FLAG_OLD_LI_SPEC; - if (api_parse(&li_parms[1].info[1], li_parms[1].length, "dd", li_req_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - plci_b_id = GET_DWORD(li_req_parms[0].info) & 0xffff; - li_flags = GET_DWORD(li_req_parms[1].info); - Info = li_check_main_plci(Id, plci); - result_buffer[0] = 9; - result_buffer[3] = 6; - PUT_DWORD(&result_buffer[4], plci_b_id); - PUT_WORD(&result_buffer[8], GOOD); - if (Info != GOOD) - break; - result = plci->saved_msg.info; - for (i = 0; i <= result_buffer[0]; i++) - result[i] = result_buffer[i]; - plci_b_write_pos = plci->li_plci_b_write_pos; - plci_b = li_check_plci_b(Id, plci, plci_b_id, plci_b_write_pos, &result[8]); - if (plci_b == NULL) - break; - li_update_connect(Id, a, plci, plci_b_id, true, li_flags); - plci->li_plci_b_queue[plci_b_write_pos] = plci_b_id | LI_PLCI_B_LAST_FLAG; - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 
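The LI_GET_SUPPORTED_SERVICES replies above are assembled byte by byte with PUT_WORD/PUT_DWORD into length-prefixed structures. A self-contained sketch of that little-endian encoding; the helpers mirror the macros, and the capability and count values are illustrative, not the driver's:

#include <stdio.h>

static void put_word(unsigned char *p, unsigned int w)
{
	p[0] = (unsigned char)w;
	p[1] = (unsigned char)(w >> 8);
}

static void put_dword(unsigned char *p, unsigned long d)
{
	put_word(p, (unsigned int)d);
	put_word(p + 2, (unsigned int)(d >> 16));
}

int main(void)
{
	unsigned char result[18];
	unsigned long services = 0x5;	/* illustrative capability bits */

	result[0] = 17;			/* length of the whole struct    */
	put_word(&result[1], 0x0000);	/* echoed request word (example) */
	result[3] = 14;			/* length of the inner struct    */
	put_word(&result[4], 0);	/* GOOD                          */
	put_dword(&result[6], services);
	put_dword(&result[10], 1);	/* interconnections (example)    */
	put_dword(&result[14], 2);	/* channels (example)            */

	for (int i = 0; i < 18; i++)
		printf("%02x ", result[i]);
	printf("\n");
	return 0;
}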
0 : plci_b_write_pos + 1; - plci->li_plci_b_write_pos = plci_b_write_pos; - } - else - { - appl->appl_flags &= ~APPL_FLAG_OLD_LI_SPEC; - if (api_parse(&li_parms[1].info[1], li_parms[1].length, "ds", li_req_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - li_flags = GET_DWORD(li_req_parms[0].info) & ~(LI2_FLAG_INTERCONNECT_A_B | LI2_FLAG_INTERCONNECT_B_A); - Info = li_check_main_plci(Id, plci); - result_buffer[0] = 7; - result_buffer[3] = 4; - PUT_WORD(&result_buffer[4], Info); - result_buffer[6] = 0; - if (Info != GOOD) - break; - result = plci->saved_msg.info; - for (i = 0; i <= result_buffer[0]; i++) - result[i] = result_buffer[i]; - plci_b_write_pos = plci->li_plci_b_write_pos; - participant_parms_pos = 0; - result_pos = 7; - li2_update_connect(Id, a, plci, UnMapId(Id), true, li_flags); - while (participant_parms_pos < li_req_parms[1].length) - { - result[result_pos] = 6; - result_pos += 7; - PUT_DWORD(&result[result_pos - 6], 0); - PUT_WORD(&result[result_pos - 2], GOOD); - if (api_parse(&li_req_parms[1].info[1 + participant_parms_pos], - (word)(li_parms[1].length - participant_parms_pos), "s", li_participant_struct)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(&result[result_pos - 2], _WRONG_MESSAGE_FORMAT); - break; - } - if (api_parse(&li_participant_struct[0].info[1], - li_participant_struct[0].length, "dd", li_participant_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(&result[result_pos - 2], _WRONG_MESSAGE_FORMAT); - break; - } - plci_b_id = GET_DWORD(li_participant_parms[0].info) & 0xffff; - li_flags = GET_DWORD(li_participant_parms[1].info); - PUT_DWORD(&result[result_pos - 6], plci_b_id); - if (sizeof(result) - result_pos < 7) - { - dbug(1, dprintf("[%06lx] %s,%d: LI result overrun", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(&result[result_pos - 2], _WRONG_STATE); - break; - } - plci_b = li2_check_plci_b(Id, plci, plci_b_id, plci_b_write_pos, &result[result_pos - 2]); - if (plci_b != NULL) - { - li2_update_connect(Id, a, plci, plci_b_id, true, li_flags); - plci->li_plci_b_queue[plci_b_write_pos] = plci_b_id | - ((li_flags & (LI2_FLAG_INTERCONNECT_A_B | LI2_FLAG_INTERCONNECT_B_A | - LI2_FLAG_PCCONNECT_A_B | LI2_FLAG_PCCONNECT_B_A)) ? 0 : LI_PLCI_B_DISC_FLAG); - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 0 : plci_b_write_pos + 1; - } - participant_parms_pos = (word)((&li_participant_struct[0].info[1 + li_participant_struct[0].length]) - - (&li_req_parms[1].info[1])); - } - result[0] = (byte)(result_pos - 1); - result[3] = (byte)(result_pos - 4); - result[6] = (byte)(result_pos - 7); - i = (plci_b_write_pos == 0) ? LI_PLCI_B_QUEUE_ENTRIES - 1 : plci_b_write_pos - 1; - if ((plci_b_write_pos == plci->li_plci_b_read_pos) - || (plci->li_plci_b_queue[i] & LI_PLCI_B_LAST_FLAG)) - { - plci->li_plci_b_queue[plci_b_write_pos] = LI_PLCI_B_SKIP_FLAG | LI_PLCI_B_LAST_FLAG; - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 
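One detail worth flagging in the participant loop above: once result has been redirected to plci->saved_msg.info it is a plain byte pointer, so the guard sizeof(result) - result_pos < 7 measures the pointer (4 or 8 bytes), not the destination buffer. A two-line demonstration of that C pitfall:

#include <stdio.h>

int main(void)
{
	unsigned char buffer[32];
	unsigned char *result = buffer;

	printf("sizeof(buffer) = %zu\n", sizeof(buffer));	/* 32 */
	printf("sizeof(result) = %zu\n", sizeof(result));	/* 4 or 8 */
	return 0;
}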
0 : plci_b_write_pos + 1; - } - else - plci->li_plci_b_queue[i] |= LI_PLCI_B_LAST_FLAG; - plci->li_plci_b_write_pos = plci_b_write_pos; - } - mixer_calculate_coefs(a); - plci->li_channel_bits = li_config_table[a->li_base + (plci->li_bchannel_id - 1)].channel; - mixer_notify_update(plci, true); - sendf(appl, _FACILITY_R | CONFIRM, Id & 0xffffL, Number, - "wwS", Info, SELECTOR_LINE_INTERCONNECT, result); - plci->command = 0; - plci->li_cmd = GET_WORD(li_parms[0].info); - start_internal_command(Id, plci, mixer_command); - return (false); - - case LI_REQ_DISCONNECT: - if (li_parms[1].length == 4) - { - appl->appl_flags |= APPL_FLAG_OLD_LI_SPEC; - if (api_parse(&li_parms[1].info[1], li_parms[1].length, "d", li_req_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - plci_b_id = GET_DWORD(li_req_parms[0].info) & 0xffff; - Info = li_check_main_plci(Id, plci); - result_buffer[0] = 9; - result_buffer[3] = 6; - PUT_DWORD(&result_buffer[4], GET_DWORD(li_req_parms[0].info)); - PUT_WORD(&result_buffer[8], GOOD); - if (Info != GOOD) - break; - result = plci->saved_msg.info; - for (i = 0; i <= result_buffer[0]; i++) - result[i] = result_buffer[i]; - plci_b_write_pos = plci->li_plci_b_write_pos; - plci_b = li_check_plci_b(Id, plci, plci_b_id, plci_b_write_pos, &result[8]); - if (plci_b == NULL) - break; - li_update_connect(Id, a, plci, plci_b_id, false, 0); - plci->li_plci_b_queue[plci_b_write_pos] = plci_b_id | LI_PLCI_B_DISC_FLAG | LI_PLCI_B_LAST_FLAG; - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 0 : plci_b_write_pos + 1; - plci->li_plci_b_write_pos = plci_b_write_pos; - } - else - { - appl->appl_flags &= ~APPL_FLAG_OLD_LI_SPEC; - if (api_parse(&li_parms[1].info[1], li_parms[1].length, "s", li_req_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - break; - } - Info = li_check_main_plci(Id, plci); - result_buffer[0] = 7; - result_buffer[3] = 4; - PUT_WORD(&result_buffer[4], Info); - result_buffer[6] = 0; - if (Info != GOOD) - break; - result = plci->saved_msg.info; - for (i = 0; i <= result_buffer[0]; i++) - result[i] = result_buffer[i]; - plci_b_write_pos = plci->li_plci_b_write_pos; - participant_parms_pos = 0; - result_pos = 7; - while (participant_parms_pos < li_req_parms[0].length) - { - result[result_pos] = 6; - result_pos += 7; - PUT_DWORD(&result[result_pos - 6], 0); - PUT_WORD(&result[result_pos - 2], GOOD); - if (api_parse(&li_req_parms[0].info[1 + participant_parms_pos], - (word)(li_parms[1].length - participant_parms_pos), "s", li_participant_struct)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(&result[result_pos - 2], _WRONG_MESSAGE_FORMAT); - break; - } - if (api_parse(&li_participant_struct[0].info[1], - li_participant_struct[0].length, "d", li_participant_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(&result[result_pos - 2], _WRONG_MESSAGE_FORMAT); - break; - } - plci_b_id = GET_DWORD(li_participant_parms[0].info) & 0xffff; - PUT_DWORD(&result[result_pos - 6], plci_b_id); - if (sizeof(result) - result_pos < 7) - { - dbug(1, dprintf("[%06lx] %s,%d: LI result overrun", - UnMapId(Id), (char *)(FILE_), __LINE__)); - PUT_WORD(&result[result_pos - 2], _WRONG_STATE); - break; - } - plci_b = li2_check_plci_b(Id, plci, 
plci_b_id, plci_b_write_pos, &result[result_pos - 2]); - if (plci_b != NULL) - { - li2_update_connect(Id, a, plci, plci_b_id, false, 0); - plci->li_plci_b_queue[plci_b_write_pos] = plci_b_id | LI_PLCI_B_DISC_FLAG; - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 0 : plci_b_write_pos + 1; - } - participant_parms_pos = (word)((&li_participant_struct[0].info[1 + li_participant_struct[0].length]) - - (&li_req_parms[0].info[1])); - } - result[0] = (byte)(result_pos - 1); - result[3] = (byte)(result_pos - 4); - result[6] = (byte)(result_pos - 7); - i = (plci_b_write_pos == 0) ? LI_PLCI_B_QUEUE_ENTRIES - 1 : plci_b_write_pos - 1; - if ((plci_b_write_pos == plci->li_plci_b_read_pos) - || (plci->li_plci_b_queue[i] & LI_PLCI_B_LAST_FLAG)) - { - plci->li_plci_b_queue[plci_b_write_pos] = LI_PLCI_B_SKIP_FLAG | LI_PLCI_B_LAST_FLAG; - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 0 : plci_b_write_pos + 1; - } - else - plci->li_plci_b_queue[i] |= LI_PLCI_B_LAST_FLAG; - plci->li_plci_b_write_pos = plci_b_write_pos; - } - mixer_calculate_coefs(a); - plci->li_channel_bits = li_config_table[a->li_base + (plci->li_bchannel_id - 1)].channel; - mixer_notify_update(plci, true); - sendf(appl, _FACILITY_R | CONFIRM, Id & 0xffffL, Number, - "wwS", Info, SELECTOR_LINE_INTERCONNECT, result); - plci->command = 0; - plci->li_cmd = GET_WORD(li_parms[0].info); - start_internal_command(Id, plci, mixer_command); - return (false); - - case LI_REQ_SILENT_UPDATE: - if (!plci || !plci->State - || !plci->NL.Id || plci->nl_remove_id - || (plci->li_bchannel_id == 0) - || (li_config_table[plci->adapter->li_base + (plci->li_bchannel_id - 1)].plci != plci)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong state", - UnMapId(Id), (char *)(FILE_), __LINE__)); - return (false); - } - plci_b_write_pos = plci->li_plci_b_write_pos; - if (((plci->li_plci_b_read_pos > plci_b_write_pos) ? plci->li_plci_b_read_pos : - LI_PLCI_B_QUEUE_ENTRIES + plci->li_plci_b_read_pos) - plci_b_write_pos - 1 < 2) - { - dbug(1, dprintf("[%06lx] %s,%d: LI request overrun", - UnMapId(Id), (char *)(FILE_), __LINE__)); - return (false); - } - i = (plci_b_write_pos == 0) ? LI_PLCI_B_QUEUE_ENTRIES - 1 : plci_b_write_pos - 1; - if ((plci_b_write_pos == plci->li_plci_b_read_pos) - || (plci->li_plci_b_queue[i] & LI_PLCI_B_LAST_FLAG)) - { - plci->li_plci_b_queue[plci_b_write_pos] = LI_PLCI_B_SKIP_FLAG | LI_PLCI_B_LAST_FLAG; - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 
0 : plci_b_write_pos + 1; - } - else - plci->li_plci_b_queue[i] |= LI_PLCI_B_LAST_FLAG; - plci->li_plci_b_write_pos = plci_b_write_pos; - plci->li_channel_bits = li_config_table[a->li_base + (plci->li_bchannel_id - 1)].channel; - plci->command = 0; - plci->li_cmd = GET_WORD(li_parms[0].info); - start_internal_command(Id, plci, mixer_command); - return (false); - - default: - dbug(1, dprintf("[%06lx] %s,%d: LI unknown request %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, GET_WORD(li_parms[0].info))); - Info = _FACILITY_NOT_SUPPORTED; - } - } - sendf(appl, _FACILITY_R | CONFIRM, Id & 0xffffL, Number, - "wwS", Info, SELECTOR_LINE_INTERCONNECT, result); - return (false); -} - - -static void mixer_indication_coefs_set(dword Id, PLCI *plci) -{ - dword d; - byte result[12]; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_indication_coefs_set", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - if (plci->li_plci_b_read_pos != plci->li_plci_b_req_pos) - { - do - { - d = plci->li_plci_b_queue[plci->li_plci_b_read_pos]; - if (!(d & LI_PLCI_B_SKIP_FLAG)) - { - if (plci->appl->appl_flags & APPL_FLAG_OLD_LI_SPEC) - { - if (d & LI_PLCI_B_DISC_FLAG) - { - result[0] = 5; - PUT_WORD(&result[1], LI_IND_DISCONNECT); - result[3] = 2; - PUT_WORD(&result[4], _LI_USER_INITIATED); - } - else - { - result[0] = 7; - PUT_WORD(&result[1], LI_IND_CONNECT_ACTIVE); - result[3] = 4; - PUT_DWORD(&result[4], d & ~LI_PLCI_B_FLAG_MASK); - } - } - else - { - if (d & LI_PLCI_B_DISC_FLAG) - { - result[0] = 9; - PUT_WORD(&result[1], LI_IND_DISCONNECT); - result[3] = 6; - PUT_DWORD(&result[4], d & ~LI_PLCI_B_FLAG_MASK); - PUT_WORD(&result[8], _LI_USER_INITIATED); - } - else - { - result[0] = 7; - PUT_WORD(&result[1], LI_IND_CONNECT_ACTIVE); - result[3] = 4; - PUT_DWORD(&result[4], d & ~LI_PLCI_B_FLAG_MASK); - } - } - sendf(plci->appl, _FACILITY_I, Id & 0xffffL, 0, - "ws", SELECTOR_LINE_INTERCONNECT, result); - } - plci->li_plci_b_read_pos = (plci->li_plci_b_read_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 
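mixer_indication_coefs_set here drains the li_plci_b_queue: each entry is either a SKIP placeholder or a PLCI id tagged with DISC/LAST bits, and consumption stops at the first LAST-flagged entry. A reduced sketch of that consume-until-LAST pattern, with assumed flag values:

#include <stdio.h>

#define QUEUE_ENTRIES 8
#define SKIP_FLAG 0x40000000u	/* assumed bit positions */
#define LAST_FLAG 0x80000000u
#define DISC_FLAG 0x20000000u
#define FLAG_MASK (SKIP_FLAG | LAST_FLAG | DISC_FLAG)

int main(void)
{
	unsigned int queue[QUEUE_ENTRIES] = {
		0x0101,				/* connect for PLCI 0x0101    */
		SKIP_FLAG,			/* placeholder, no indication */
		0x0201 | DISC_FLAG | LAST_FLAG	/* disconnect, ends the run   */
	};
	unsigned int read_pos = 0, req_pos = 3, d;

	do {
		d = queue[read_pos];
		if (!(d & SKIP_FLAG))
			printf("%s PLCI %#x\n",
			       (d & DISC_FLAG) ? "disconnect" : "connect",
			       d & ~FLAG_MASK);
		read_pos = (read_pos == QUEUE_ENTRIES - 1) ? 0 : read_pos + 1;
	} while (!(d & LAST_FLAG) && (read_pos != req_pos));
	return 0;
}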
- 0 : plci->li_plci_b_read_pos + 1; - } while (!(d & LI_PLCI_B_LAST_FLAG) && (plci->li_plci_b_read_pos != plci->li_plci_b_req_pos)); - } -} - - -static void mixer_indication_xconnect_from(dword Id, PLCI *plci, byte *msg, word length) -{ - word i, j, ch; - struct xconnect_transfer_address_s s, *p; - DIVA_CAPI_ADAPTER *a; - - dbug(1, dprintf("[%06lx] %s,%d: mixer_indication_xconnect_from %d", - UnMapId(Id), (char *)(FILE_), __LINE__, (int)length)); - - a = plci->adapter; - i = 1; - for (i = 1; i < length; i += 16) - { - s.card_address.low = msg[i] | (msg[i + 1] << 8) | (((dword)(msg[i + 2])) << 16) | (((dword)(msg[i + 3])) << 24); - s.card_address.high = msg[i + 4] | (msg[i + 5] << 8) | (((dword)(msg[i + 6])) << 16) | (((dword)(msg[i + 7])) << 24); - s.offset = msg[i + 8] | (msg[i + 9] << 8) | (((dword)(msg[i + 10])) << 16) | (((dword)(msg[i + 11])) << 24); - ch = msg[i + 12] | (msg[i + 13] << 8); - j = ch & XCONNECT_CHANNEL_NUMBER_MASK; - if (!a->li_pri && (plci->li_bchannel_id == 2)) - j = 1 - j; - j += a->li_base; - if (ch & XCONNECT_CHANNEL_PORT_PC) - p = &(li_config_table[j].send_pc); - else - p = &(li_config_table[j].send_b); - p->card_address.low = s.card_address.low; - p->card_address.high = s.card_address.high; - p->offset = s.offset; - li_config_table[j].channel |= LI_CHANNEL_ADDRESSES_SET; - } - if (plci->internal_command_queue[0] - && ((plci->adjust_b_state == ADJUST_B_RESTORE_MIXER_2) - || (plci->adjust_b_state == ADJUST_B_RESTORE_MIXER_3) - || (plci->adjust_b_state == ADJUST_B_RESTORE_MIXER_4))) - { - (*(plci->internal_command_queue[0]))(Id, plci, 0); - if (!plci->internal_command) - next_internal_command(Id, plci); - } - mixer_notify_update(plci, true); -} - - -static void mixer_indication_xconnect_to(dword Id, PLCI *plci, byte *msg, word length) -{ - - dbug(1, dprintf("[%06lx] %s,%d: mixer_indication_xconnect_to %d", - UnMapId(Id), (char *)(FILE_), __LINE__, (int) length)); - -} - - -static byte mixer_notify_source_removed(PLCI *plci, dword plci_b_id) -{ - word plci_b_write_pos; - - plci_b_write_pos = plci->li_plci_b_write_pos; - if (((plci->li_plci_b_read_pos > plci_b_write_pos) ? plci->li_plci_b_read_pos : - LI_PLCI_B_QUEUE_ENTRIES + plci->li_plci_b_read_pos) - plci_b_write_pos - 1 < 1) - { - dbug(1, dprintf("[%06lx] %s,%d: LI request overrun", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - return (false); - } - plci->li_plci_b_queue[plci_b_write_pos] = plci_b_id | LI_PLCI_B_DISC_FLAG; - plci_b_write_pos = (plci_b_write_pos == LI_PLCI_B_QUEUE_ENTRIES - 1) ? 
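mixer_indication_xconnect_from above walks the indication in 16-byte records, assembling each little-endian field from raw bytes. A standalone sketch of that unpacking; the record layout follows the code above, and the channel-word bit positions are assumptions:

#include <stdio.h>

static unsigned long get_le32(const unsigned char *p)
{
	return (unsigned long)p[0] | ((unsigned long)p[1] << 8) |
	       ((unsigned long)p[2] << 16) | ((unsigned long)p[3] << 24);
}

int main(void)
{
	/* One transfer-address record: 64-bit card address, 32-bit
	 * offset, then a channel word (number plus port flags). */
	const unsigned char record[16] = {
		0x78, 0x56, 0x34, 0x12,	/* card_address.low  */
		0x00, 0x00, 0x00, 0x00,	/* card_address.high */
		0x00, 0x10, 0x00, 0x00,	/* offset            */
		0x01, 0x80		/* channel 1, PC-port bit (assumed) */
	};
	unsigned int ch = record[12] | (record[13] << 8);

	printf("low=%#lx offset=%#lx channel=%u pc_port=%d\n",
	       get_le32(&record[0]), get_le32(&record[8]),
	       ch & 0x7fffu, (ch & 0x8000u) != 0);
	return 0;
}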
-		0 : plci_b_write_pos + 1;
-	plci->li_plci_b_write_pos = plci_b_write_pos;
-	return (true);
-}
-
-
-static void mixer_remove(PLCI *plci)
-{
-	DIVA_CAPI_ADAPTER *a;
-	PLCI *notify_plci;
-	dword plci_b_id;
-	word i, j;
-
-	dbug(1, dprintf("[%06lx] %s,%d: mixer_remove",
-		(dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)),
-		(char *)(FILE_), __LINE__));
-
-	a = plci->adapter;
-	plci_b_id = (plci->Id << 8) | UnMapController(plci->adapter->Id);
-	if (a->profile.Global_Options & GL_LINE_INTERCONNECT_SUPPORTED)
-	{
-		if ((plci->li_bchannel_id != 0)
-		    && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci))
-		{
-			i = a->li_base + (plci->li_bchannel_id - 1);
-			if ((li_config_table[i].curchnl | li_config_table[i].channel) & LI_CHANNEL_INVOLVED)
-			{
-				for (j = 0; j < li_total_channels; j++)
-				{
-					if ((li_config_table[i].flag_table[j] & LI_FLAG_INTERCONNECT)
-					    || (li_config_table[j].flag_table[i] & LI_FLAG_INTERCONNECT))
-					{
-						notify_plci = li_config_table[j].plci;
-						if ((notify_plci != NULL)
-						    && (notify_plci != plci)
-						    && (notify_plci->appl != NULL)
-						    && !(notify_plci->appl->appl_flags & APPL_FLAG_OLD_LI_SPEC)
-						    && (notify_plci->State)
-						    && notify_plci->NL.Id && !notify_plci->nl_remove_id)
-						{
-							mixer_notify_source_removed(notify_plci, plci_b_id);
-						}
-					}
-				}
-				mixer_clear_config(plci);
-				mixer_calculate_coefs(a);
-				mixer_notify_update(plci, true);
-			}
-			li_config_table[i].plci = NULL;
-			plci->li_bchannel_id = 0;
-		}
-	}
-}
-
-
-/*------------------------------------------------------------------*/
-/* Echo canceller facilities                                        */
-/*------------------------------------------------------------------*/
-
-
-static void ec_write_parameters(PLCI *plci)
-{
-	word w;
-	byte parameter_buffer[6];
-
-	dbug(1, dprintf("[%06lx] %s,%d: ec_write_parameters",
-		(dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)),
-		(char *)(FILE_), __LINE__));
-
-	parameter_buffer[0] = 5;
-	parameter_buffer[1] = DSP_CTRL_SET_LEC_PARAMETERS;
-	PUT_WORD(&parameter_buffer[2], plci->ec_idi_options);
-	plci->ec_idi_options &= ~LEC_RESET_COEFFICIENTS;
-	w = (plci->ec_tail_length == 0) ?
-		128 : plci->ec_tail_length;
-	PUT_WORD(&parameter_buffer[4], w);
-	add_p(plci, FTY, parameter_buffer);
-	sig_req(plci, TEL_CTRL, 0);
-	send_req(plci);
-}
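ec_write_parameters above frames the echo canceller settings as a small FTY parameter block: a length byte, the DSP opcode, the LEC option word, and the tail length, where 0 selects the default of 128. A sketch of that framing; the opcode value is assumed:

#include <stdio.h>

#define DSP_CTRL_SET_LEC_PARAMETERS 0x01	/* assumed opcode value */

static void put_word(unsigned char *p, unsigned int w)
{
	p[0] = (unsigned char)w;
	p[1] = (unsigned char)(w >> 8);
}

int main(void)
{
	unsigned char fty[6];
	unsigned int ec_options = 0x0009;	/* illustrative LEC_* bits   */
	unsigned int tail = 0;			/* 0 selects the 128 default */

	fty[0] = 5;	/* length of the parameter bytes that follow */
	fty[1] = DSP_CTRL_SET_LEC_PARAMETERS;
	put_word(&fty[2], ec_options);
	put_word(&fty[4], (tail == 0) ? 128 : tail);

	for (int i = 0; i < 6; i++)
		printf("%02x ", fty[i]);
	printf("\n");
	return 0;
}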
-
-
-static void ec_clear_config(PLCI *plci)
-{
-
-	dbug(1, dprintf("[%06lx] %s,%d: ec_clear_config",
-		(dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)),
-		(char *)(FILE_), __LINE__));
-
-	plci->ec_idi_options = LEC_ENABLE_ECHO_CANCELLER |
-		LEC_MANUAL_DISABLE | LEC_ENABLE_NONLINEAR_PROCESSING;
-	plci->ec_tail_length = 0;
-}
-
-
-static void ec_prepare_switch(dword Id, PLCI *plci)
-{
-
-	dbug(1, dprintf("[%06lx] %s,%d: ec_prepare_switch",
-		UnMapId(Id), (char *)(FILE_), __LINE__));
-
-}
-
-
-static word ec_save_config(dword Id, PLCI *plci, byte Rc)
-{
-
-	dbug(1, dprintf("[%06lx] %s,%d: ec_save_config %02x %d",
-		UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state));
-
-	return (GOOD);
-}
-
-
-static word ec_restore_config(dword Id, PLCI *plci, byte Rc)
-{
-	word Info;
-
-	dbug(1, dprintf("[%06lx] %s,%d: ec_restore_config %02x %d",
-		UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state));
-
-	Info = GOOD;
-	if (plci->B1_facilities & B1_FACILITY_EC)
-	{
-		switch (plci->adjust_b_state)
-		{
-		case ADJUST_B_RESTORE_EC_1:
-			plci->internal_command = plci->adjust_b_command;
-			if (plci->sig_req)
-			{
-				plci->adjust_b_state = ADJUST_B_RESTORE_EC_1;
-				break;
-			}
-			ec_write_parameters(plci);
-			plci->adjust_b_state = ADJUST_B_RESTORE_EC_2;
-			break;
-		case ADJUST_B_RESTORE_EC_2:
-			if ((Rc != OK) && (Rc != OK_FC))
-			{
-				dbug(1, dprintf("[%06lx] %s,%d: Restore EC failed %02x",
-					UnMapId(Id), (char *)(FILE_), __LINE__, Rc));
-				Info = _WRONG_STATE;
-				break;
-			}
-			break;
-		}
-	}
-	return (Info);
-}
-
-
-static void ec_command(dword Id, PLCI *plci, byte Rc)
-{
-	word internal_command, Info;
-	byte result[8];
-
-	dbug(1, dprintf("[%06lx] %s,%d: ec_command %02x %04x %04x %04x %d",
-		UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command,
-		plci->ec_cmd, plci->ec_idi_options, plci->ec_tail_length));
-
-	Info = GOOD;
-	if (plci->appl->appl_flags & APPL_FLAG_PRIV_EC_SPEC)
-	{
-		result[0] = 2;
-		PUT_WORD(&result[1], EC_SUCCESS);
-	}
-	else
-	{
-		result[0] = 5;
-		PUT_WORD(&result[1], plci->ec_cmd);
-		result[3] = 2;
-		PUT_WORD(&result[4], GOOD);
-	}
-	internal_command = plci->internal_command;
-	plci->internal_command = 0;
-	switch (plci->ec_cmd)
-	{
-	case EC_ENABLE_OPERATION:
-	case EC_FREEZE_COEFFICIENTS:
-	case EC_RESUME_COEFFICIENT_UPDATE:
-	case EC_RESET_COEFFICIENTS:
-		switch (internal_command)
-		{
-		default:
-			adjust_b1_resource(Id, plci, NULL, (word)(plci->B1_facilities |
-				B1_FACILITY_EC), EC_COMMAND_1);
-			/* fall through */
-		case EC_COMMAND_1:
-			if (adjust_b_process(Id, plci, Rc) != GOOD)
-			{
-				dbug(1, dprintf("[%06lx] %s,%d: Load EC failed",
-					UnMapId(Id), (char *)(FILE_), __LINE__));
-				Info = _FACILITY_NOT_SUPPORTED;
-				break;
-			}
-			if (plci->internal_command)
-				return;
-			/* fall through */
-		case EC_COMMAND_2:
-			if (plci->sig_req)
-			{
-				plci->internal_command = EC_COMMAND_2;
-				return;
-			}
-			plci->internal_command = EC_COMMAND_3;
-			ec_write_parameters(plci);
-			return;
-		case EC_COMMAND_3:
-			if ((Rc != OK) && (Rc != OK_FC))
-			{
-				dbug(1, dprintf("[%06lx] %s,%d: Enable EC failed %02x",
-					UnMapId(Id), (char *)(FILE_), __LINE__, Rc));
-				Info = _FACILITY_NOT_SUPPORTED;
-				break;
-			}
-			break;
-		}
-		break;
-
-	case EC_DISABLE_OPERATION:
-		switch (internal_command)
-		{
-		default:
-		case EC_COMMAND_1:
-			if (plci->B1_facilities & B1_FACILITY_EC)
-			{
-				if (plci->sig_req)
-				{
-					plci->internal_command = EC_COMMAND_1;
-
return; - } - plci->internal_command = EC_COMMAND_2; - ec_write_parameters(plci); - return; - } - Rc = OK; - /* fall through */ - case EC_COMMAND_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Disable EC failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - adjust_b1_resource(Id, plci, NULL, (word)(plci->B1_facilities & - ~B1_FACILITY_EC), EC_COMMAND_3); - /* fall through */ - case EC_COMMAND_3: - if (adjust_b_process(Id, plci, Rc) != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Unload EC failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - break; - } - if (plci->internal_command) - return; - break; - } - break; - } - sendf(plci->appl, _FACILITY_R | CONFIRM, Id & 0xffffL, plci->number, - "wws", Info, (plci->appl->appl_flags & APPL_FLAG_PRIV_EC_SPEC) ? - PRIV_SELECTOR_ECHO_CANCELLER : SELECTOR_ECHO_CANCELLER, result); -} - - -static byte ec_request(dword Id, word Number, DIVA_CAPI_ADAPTER *a, PLCI *plci, APPL *appl, API_PARSE *msg) -{ - word Info; - word opt; - API_PARSE ec_parms[3]; - byte result[16]; - - dbug(1, dprintf("[%06lx] %s,%d: ec_request", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - Info = GOOD; - result[0] = 0; - if (!(a->man_profile.private_options & (1L << PRIVATE_ECHO_CANCELLER))) - { - dbug(1, dprintf("[%06lx] %s,%d: Facility not supported", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _FACILITY_NOT_SUPPORTED; - } - else - { - if (appl->appl_flags & APPL_FLAG_PRIV_EC_SPEC) - { - if (api_parse(&msg[1].info[1], msg[1].length, "w", ec_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - } - else - { - if (plci == NULL) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong PLCI", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_IDENTIFIER; - } - else if (!plci->State || !plci->NL.Id || plci->nl_remove_id) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong state", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_STATE; - } - else - { - plci->command = 0; - plci->ec_cmd = GET_WORD(ec_parms[0].info); - plci->ec_idi_options &= ~(LEC_MANUAL_DISABLE | LEC_RESET_COEFFICIENTS); - result[0] = 2; - PUT_WORD(&result[1], EC_SUCCESS); - if (msg[1].length >= 4) - { - opt = GET_WORD(&ec_parms[0].info[2]); - plci->ec_idi_options &= ~(LEC_ENABLE_NONLINEAR_PROCESSING | - LEC_ENABLE_2100HZ_DETECTOR | LEC_REQUIRE_2100HZ_REVERSALS); - if (!(opt & EC_DISABLE_NON_LINEAR_PROCESSING)) - plci->ec_idi_options |= LEC_ENABLE_NONLINEAR_PROCESSING; - if (opt & EC_DETECT_DISABLE_TONE) - plci->ec_idi_options |= LEC_ENABLE_2100HZ_DETECTOR; - if (!(opt & EC_DO_NOT_REQUIRE_REVERSALS)) - plci->ec_idi_options |= LEC_REQUIRE_2100HZ_REVERSALS; - if (msg[1].length >= 6) - { - plci->ec_tail_length = GET_WORD(&ec_parms[0].info[4]); - } - } - switch (plci->ec_cmd) - { - case EC_ENABLE_OPERATION: - plci->ec_idi_options &= ~LEC_FREEZE_COEFFICIENTS; - start_internal_command(Id, plci, ec_command); - return (false); - - case EC_DISABLE_OPERATION: - plci->ec_idi_options = LEC_ENABLE_ECHO_CANCELLER | - LEC_MANUAL_DISABLE | LEC_ENABLE_NONLINEAR_PROCESSING | - LEC_RESET_COEFFICIENTS; - start_internal_command(Id, plci, ec_command); - return (false); - - case EC_FREEZE_COEFFICIENTS: - plci->ec_idi_options |= LEC_FREEZE_COEFFICIENTS; - start_internal_command(Id, plci, ec_command); - return (false); - - case EC_RESUME_COEFFICIENT_UPDATE: - plci->ec_idi_options &= ~LEC_FREEZE_COEFFICIENTS; - 
start_internal_command(Id, plci, ec_command); - return (false); - - case EC_RESET_COEFFICIENTS: - plci->ec_idi_options |= LEC_RESET_COEFFICIENTS; - start_internal_command(Id, plci, ec_command); - return (false); - - default: - dbug(1, dprintf("[%06lx] %s,%d: EC unknown request %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, plci->ec_cmd)); - PUT_WORD(&result[1], EC_UNSUPPORTED_OPERATION); - } - } - } - } - else - { - if (api_parse(&msg[1].info[1], msg[1].length, "ws", ec_parms)) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong message format", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_MESSAGE_FORMAT; - } - else - { - if (GET_WORD(ec_parms[0].info) == EC_GET_SUPPORTED_SERVICES) - { - result[0] = 11; - PUT_WORD(&result[1], EC_GET_SUPPORTED_SERVICES); - result[3] = 8; - PUT_WORD(&result[4], GOOD); - PUT_WORD(&result[6], 0x0007); - PUT_WORD(&result[8], LEC_MAX_SUPPORTED_TAIL_LENGTH); - PUT_WORD(&result[10], 0); - } - else if (plci == NULL) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong PLCI", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_IDENTIFIER; - } - else if (!plci->State || !plci->NL.Id || plci->nl_remove_id) - { - dbug(1, dprintf("[%06lx] %s,%d: Wrong state", - UnMapId(Id), (char *)(FILE_), __LINE__)); - Info = _WRONG_STATE; - } - else - { - plci->command = 0; - plci->ec_cmd = GET_WORD(ec_parms[0].info); - plci->ec_idi_options &= ~(LEC_MANUAL_DISABLE | LEC_RESET_COEFFICIENTS); - result[0] = 5; - PUT_WORD(&result[1], plci->ec_cmd); - result[3] = 2; - PUT_WORD(&result[4], GOOD); - plci->ec_idi_options &= ~(LEC_ENABLE_NONLINEAR_PROCESSING | - LEC_ENABLE_2100HZ_DETECTOR | LEC_REQUIRE_2100HZ_REVERSALS); - plci->ec_tail_length = 0; - if (ec_parms[1].length >= 2) - { - opt = GET_WORD(&ec_parms[1].info[1]); - if (opt & EC_ENABLE_NON_LINEAR_PROCESSING) - plci->ec_idi_options |= LEC_ENABLE_NONLINEAR_PROCESSING; - if (opt & EC_DETECT_DISABLE_TONE) - plci->ec_idi_options |= LEC_ENABLE_2100HZ_DETECTOR; - if (!(opt & EC_DO_NOT_REQUIRE_REVERSALS)) - plci->ec_idi_options |= LEC_REQUIRE_2100HZ_REVERSALS; - if (ec_parms[1].length >= 4) - { - plci->ec_tail_length = GET_WORD(&ec_parms[1].info[3]); - } - } - switch (plci->ec_cmd) - { - case EC_ENABLE_OPERATION: - plci->ec_idi_options &= ~LEC_FREEZE_COEFFICIENTS; - start_internal_command(Id, plci, ec_command); - return (false); - - case EC_DISABLE_OPERATION: - plci->ec_idi_options = LEC_ENABLE_ECHO_CANCELLER | - LEC_MANUAL_DISABLE | LEC_ENABLE_NONLINEAR_PROCESSING | - LEC_RESET_COEFFICIENTS; - start_internal_command(Id, plci, ec_command); - return (false); - - default: - dbug(1, dprintf("[%06lx] %s,%d: EC unknown request %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, plci->ec_cmd)); - PUT_WORD(&result[4], _FACILITY_SPECIFIC_FUNCTION_NOT_SUPP); - } - } - } - } - } - sendf(appl, _FACILITY_R | CONFIRM, Id & 0xffffL, Number, - "wws", Info, (appl->appl_flags & APPL_FLAG_PRIV_EC_SPEC) ? 
- PRIV_SELECTOR_ECHO_CANCELLER : SELECTOR_ECHO_CANCELLER, result); - return (false); -} - - -static void ec_indication(dword Id, PLCI *plci, byte *msg, word length) -{ - byte result[8]; - - dbug(1, dprintf("[%06lx] %s,%d: ec_indication", - UnMapId(Id), (char *)(FILE_), __LINE__)); - - if (!(plci->ec_idi_options & LEC_MANUAL_DISABLE)) - { - if (plci->appl->appl_flags & APPL_FLAG_PRIV_EC_SPEC) - { - result[0] = 2; - PUT_WORD(&result[1], 0); - switch (msg[1]) - { - case LEC_DISABLE_TYPE_CONTIGNUOUS_2100HZ: - PUT_WORD(&result[1], EC_BYPASS_DUE_TO_CONTINUOUS_2100HZ); - break; - case LEC_DISABLE_TYPE_REVERSED_2100HZ: - PUT_WORD(&result[1], EC_BYPASS_DUE_TO_REVERSED_2100HZ); - break; - case LEC_DISABLE_RELEASED: - PUT_WORD(&result[1], EC_BYPASS_RELEASED); - break; - } - } - else - { - result[0] = 5; - PUT_WORD(&result[1], EC_BYPASS_INDICATION); - result[3] = 2; - PUT_WORD(&result[4], 0); - switch (msg[1]) - { - case LEC_DISABLE_TYPE_CONTIGNUOUS_2100HZ: - PUT_WORD(&result[4], EC_BYPASS_DUE_TO_CONTINUOUS_2100HZ); - break; - case LEC_DISABLE_TYPE_REVERSED_2100HZ: - PUT_WORD(&result[4], EC_BYPASS_DUE_TO_REVERSED_2100HZ); - break; - case LEC_DISABLE_RELEASED: - PUT_WORD(&result[4], EC_BYPASS_RELEASED); - break; - } - } - sendf(plci->appl, _FACILITY_I, Id & 0xffffL, 0, "ws", (plci->appl->appl_flags & APPL_FLAG_PRIV_EC_SPEC) ? - PRIV_SELECTOR_ECHO_CANCELLER : SELECTOR_ECHO_CANCELLER, result); - } -} - - - -/*------------------------------------------------------------------*/ -/* Advanced voice */ -/*------------------------------------------------------------------*/ - -static void adv_voice_write_coefs(PLCI *plci, word write_command) -{ - DIVA_CAPI_ADAPTER *a; - word i; - byte *p; - - word w, n, j, k; - byte ch_map[MIXER_CHANNELS_BRI]; - - byte coef_buffer[ADV_VOICE_COEF_BUFFER_SIZE + 2]; - - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_write_coefs %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, write_command)); - - a = plci->adapter; - p = coef_buffer + 1; - *(p++) = DSP_CTRL_OLD_SET_MIXER_COEFFICIENTS; - i = 0; - while (i + sizeof(word) <= a->adv_voice_coef_length) - { - PUT_WORD(p, GET_WORD(a->adv_voice_coef_buffer + i)); - p += 2; - i += 2; - } - while (i < ADV_VOICE_OLD_COEF_COUNT * sizeof(word)) - { - PUT_WORD(p, 0x8000); - p += 2; - i += 2; - } - - if (!a->li_pri && (plci->li_bchannel_id == 0)) - { - if ((li_config_table[a->li_base].plci == NULL) && (li_config_table[a->li_base + 1].plci != NULL)) - { - plci->li_bchannel_id = 1; - li_config_table[a->li_base].plci = plci; - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_set_bchannel_id %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, plci->li_bchannel_id)); - } - else if ((li_config_table[a->li_base].plci != NULL) && (li_config_table[a->li_base + 1].plci == NULL)) - { - plci->li_bchannel_id = 2; - li_config_table[a->li_base + 1].plci = plci; - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_set_bchannel_id %d", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, plci->li_bchannel_id)); - } - } - if (!a->li_pri && (plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - i = a->li_base + (plci->li_bchannel_id - 1); - switch (write_command) - { - case ADV_VOICE_WRITE_ACTIVATION: - j = a->li_base + MIXER_IC_CHANNEL_BASE + (plci->li_bchannel_id - 1); - k = a->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci->li_bchannel_id); - if (!(plci->B1_facilities & B1_FACILITY_MIXER)) - 
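ec_indication above translates the DSP's LEC disable reason into a CAPI bypass code, in either the private or the standard result layout. The core of it reduces to a small mapping; the constant values here are placeholders for the driver's real definitions:

#include <stdio.h>

enum {	/* placeholder values, not the driver's */
	LEC_DISABLE_TYPE_CONTIGNUOUS_2100HZ = 1,
	LEC_DISABLE_TYPE_REVERSED_2100HZ = 2,
	LEC_DISABLE_RELEASED = 3
};
enum {
	EC_BYPASS_DUE_TO_CONTINUOUS_2100HZ = 1,
	EC_BYPASS_DUE_TO_REVERSED_2100HZ = 2,
	EC_BYPASS_RELEASED = 3
};

static unsigned int ec_bypass_code(unsigned char lec_reason)
{
	switch (lec_reason) {
	case LEC_DISABLE_TYPE_CONTIGNUOUS_2100HZ:
		return EC_BYPASS_DUE_TO_CONTINUOUS_2100HZ;
	case LEC_DISABLE_TYPE_REVERSED_2100HZ:
		return EC_BYPASS_DUE_TO_REVERSED_2100HZ;
	case LEC_DISABLE_RELEASED:
		return EC_BYPASS_RELEASED;
	default:
		return 0;	/* unknown reason stays 0, as above */
	}
}

int main(void)
{
	printf("%u\n", ec_bypass_code(LEC_DISABLE_TYPE_REVERSED_2100HZ));
	return 0;
}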
{ - li_config_table[j].flag_table[i] |= LI_FLAG_CONFERENCE | LI_FLAG_MIX; - li_config_table[i].flag_table[j] |= LI_FLAG_CONFERENCE | LI_FLAG_MONITOR; - } - if (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) - { - li_config_table[k].flag_table[i] |= LI_FLAG_CONFERENCE | LI_FLAG_MIX; - li_config_table[i].flag_table[k] |= LI_FLAG_CONFERENCE | LI_FLAG_MONITOR; - li_config_table[k].flag_table[j] |= LI_FLAG_CONFERENCE; - li_config_table[j].flag_table[k] |= LI_FLAG_CONFERENCE; - } - mixer_calculate_coefs(a); - li_config_table[i].curchnl = li_config_table[i].channel; - li_config_table[j].curchnl = li_config_table[j].channel; - if (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) - li_config_table[k].curchnl = li_config_table[k].channel; - break; - - case ADV_VOICE_WRITE_DEACTIVATION: - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].flag_table[j] = 0; - li_config_table[j].flag_table[i] = 0; - } - k = a->li_base + MIXER_IC_CHANNEL_BASE + (plci->li_bchannel_id - 1); - for (j = 0; j < li_total_channels; j++) - { - li_config_table[k].flag_table[j] = 0; - li_config_table[j].flag_table[k] = 0; - } - if (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) - { - k = a->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci->li_bchannel_id); - for (j = 0; j < li_total_channels; j++) - { - li_config_table[k].flag_table[j] = 0; - li_config_table[j].flag_table[k] = 0; - } - } - mixer_calculate_coefs(a); - break; - } - if (plci->B1_facilities & B1_FACILITY_MIXER) - { - w = 0; - if (ADV_VOICE_NEW_COEF_BASE + sizeof(word) <= a->adv_voice_coef_length) - w = GET_WORD(a->adv_voice_coef_buffer + ADV_VOICE_NEW_COEF_BASE); - if (li_config_table[i].channel & LI_CHANNEL_TX_DATA) - w |= MIXER_FEATURE_ENABLE_TX_DATA; - if (li_config_table[i].channel & LI_CHANNEL_RX_DATA) - w |= MIXER_FEATURE_ENABLE_RX_DATA; - *(p++) = (byte) w; - *(p++) = (byte)(w >> 8); - for (j = 0; j < sizeof(ch_map); j += 2) - { - ch_map[j] = (byte)(j + (plci->li_bchannel_id - 1)); - ch_map[j + 1] = (byte)(j + (2 - plci->li_bchannel_id)); - } - for (n = 0; n < ARRAY_SIZE(mixer_write_prog_bri); n++) - { - i = a->li_base + ch_map[mixer_write_prog_bri[n].to_ch]; - j = a->li_base + ch_map[mixer_write_prog_bri[n].from_ch]; - if (li_config_table[i].channel & li_config_table[j].channel & LI_CHANNEL_INVOLVED) - { - *(p++) = ((li_config_table[i].coef_table[j] & mixer_write_prog_bri[n].mask) ? 0x80 : 0x01); - w = ((li_config_table[i].coef_table[j] & 0xf) ^ (li_config_table[i].coef_table[j] >> 4)); - li_config_table[i].coef_table[j] ^= (w & mixer_write_prog_bri[n].mask) << 4; - } - else - { - *(p++) = (ADV_VOICE_NEW_COEF_BASE + sizeof(word) + n < a->adv_voice_coef_length) ? 
- a->adv_voice_coef_buffer[ADV_VOICE_NEW_COEF_BASE + sizeof(word) + n] : 0x00; - } - } - } - else - { - for (i = ADV_VOICE_NEW_COEF_BASE; i < a->adv_voice_coef_length; i++) - *(p++) = a->adv_voice_coef_buffer[i]; - } - } - else - - { - for (i = ADV_VOICE_NEW_COEF_BASE; i < a->adv_voice_coef_length; i++) - *(p++) = a->adv_voice_coef_buffer[i]; - } - coef_buffer[0] = (p - coef_buffer) - 1; - add_p(plci, FTY, coef_buffer); - sig_req(plci, TEL_CTRL, 0); - send_req(plci); -} - - -static void adv_voice_clear_config(PLCI *plci) -{ - DIVA_CAPI_ADAPTER *a; - - word i, j; - - - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_clear_config", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - a = plci->adapter; - if ((plci->tel == ADV_VOICE) && (plci == a->AdvSignalPLCI)) - { - a->adv_voice_coef_length = 0; - - if (!a->li_pri && (plci->li_bchannel_id != 0) - && (li_config_table[a->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - i = a->li_base + (plci->li_bchannel_id - 1); - li_config_table[i].curchnl = 0; - li_config_table[i].channel = 0; - li_config_table[i].chflags = 0; - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].flag_table[j] = 0; - li_config_table[j].flag_table[i] = 0; - li_config_table[i].coef_table[j] = 0; - li_config_table[j].coef_table[i] = 0; - } - li_config_table[i].coef_table[i] |= LI_COEF_CH_PC_SET | LI_COEF_PC_CH_SET; - i = a->li_base + MIXER_IC_CHANNEL_BASE + (plci->li_bchannel_id - 1); - li_config_table[i].curchnl = 0; - li_config_table[i].channel = 0; - li_config_table[i].chflags = 0; - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].flag_table[j] = 0; - li_config_table[j].flag_table[i] = 0; - li_config_table[i].coef_table[j] = 0; - li_config_table[j].coef_table[i] = 0; - } - if (a->manufacturer_features & MANUFACTURER_FEATURE_SLAVE_CODEC) - { - i = a->li_base + MIXER_IC_CHANNEL_BASE + (2 - plci->li_bchannel_id); - li_config_table[i].curchnl = 0; - li_config_table[i].channel = 0; - li_config_table[i].chflags = 0; - for (j = 0; j < li_total_channels; j++) - { - li_config_table[i].flag_table[j] = 0; - li_config_table[j].flag_table[i] = 0; - li_config_table[i].coef_table[j] = 0; - li_config_table[j].coef_table[i] = 0; - } - } - } - - } -} - - -static void adv_voice_prepare_switch(dword Id, PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_prepare_switch", - UnMapId(Id), (char *)(FILE_), __LINE__)); - -} - - -static word adv_voice_save_config(dword Id, PLCI *plci, byte Rc) -{ - - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_save_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - return (GOOD); -} - - -static word adv_voice_restore_config(dword Id, PLCI *plci, byte Rc) -{ - DIVA_CAPI_ADAPTER *a; - word Info; - - dbug(1, dprintf("[%06lx] %s,%d: adv_voice_restore_config %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - Info = GOOD; - a = plci->adapter; - if ((plci->B1_facilities & B1_FACILITY_VOICE) - && (plci->tel == ADV_VOICE) && (plci == a->AdvSignalPLCI)) - { - switch (plci->adjust_b_state) - { - case ADJUST_B_RESTORE_VOICE_1: - plci->internal_command = plci->adjust_b_command; - if (plci->sig_req) - { - plci->adjust_b_state = ADJUST_B_RESTORE_VOICE_1; - break; - } - adv_voice_write_coefs(plci, ADV_VOICE_WRITE_UPDATE); - plci->adjust_b_state = ADJUST_B_RESTORE_VOICE_2; - break; - case ADJUST_B_RESTORE_VOICE_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Restore voice config failed %02x", - 
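The ch_map construction inside adv_voice_write_coefs above swaps each channel pair according to which B channel the PLCI owns, so the mixer program is always written from the local channel's point of view. A reduced sketch under the assumption that MIXER_CHANNELS_BRI is 2:

#include <stdio.h>

#define MIXER_CHANNELS_BRI 2	/* assumed table size for a BRI */

int main(void)
{
	unsigned char ch_map[MIXER_CHANNELS_BRI];

	for (unsigned int id = 1; id <= 2; id++) {	/* li_bchannel_id */
		for (unsigned int j = 0; j < MIXER_CHANNELS_BRI; j += 2) {
			ch_map[j] = (unsigned char)(j + (id - 1));
			ch_map[j + 1] = (unsigned char)(j + (2 - id));
		}
		printf("bchannel %u: ch_map = { %u, %u }\n",
		       id, ch_map[0], ch_map[1]);
	}
	return 0;
}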
-				UnMapId(Id), (char *)(FILE_), __LINE__, Rc));
-				Info = _WRONG_STATE;
-				break;
-			}
-			break;
-		}
-	}
-	return (Info);
-}
-
-
-
-
-/*------------------------------------------------------------------*/
-/* B1 resource switching                                            */
-/*------------------------------------------------------------------*/
-
-static byte b1_facilities_table[] =
-{
-	0x00,  /* 0  No bchannel resources      */
-	0x00,  /* 1  Codec (automatic law)      */
-	0x00,  /* 2  Codec (A-law)              */
-	0x00,  /* 3  Codec (µ-law)              */
-	0x00,  /* 4  HDLC for X.21              */
-	0x00,  /* 5  HDLC                       */
-	0x00,  /* 6  External Device 0          */
-	0x00,  /* 7  External Device 1          */
-	0x00,  /* 8  HDLC 56k                   */
-	0x00,  /* 9  Transparent                */
-	0x00,  /* 10 Loopback to network        */
-	0x00,  /* 11 Test pattern to net        */
-	0x00,  /* 12 Rate adaptation sync       */
-	0x00,  /* 13 Rate adaptation async      */
-	0x00,  /* 14 R-Interface                */
-	0x00,  /* 15 HDLC 128k leased line      */
-	0x00,  /* 16 FAX                        */
-	0x00,  /* 17 Modem async                */
-	0x00,  /* 18 Modem sync HDLC            */
-	0x00,  /* 19 V.110 async HDLC           */
-	0x12,  /* 20 Adv voice (Trans,mixer)    */
-	0x00,  /* 21 Codec connected to IC      */
-	0x0c,  /* 22 Trans,DTMF                 */
-	0x1e,  /* 23 Trans,DTMF+mixer           */
-	0x1f,  /* 24 Trans,DTMF+mixer+local     */
-	0x13,  /* 25 Trans,mixer+local          */
-	0x12,  /* 26 HDLC,mixer                 */
-	0x12,  /* 27 HDLC 56k,mixer             */
-	0x2c,  /* 28 Trans,LEC+DTMF             */
-	0x3e,  /* 29 Trans,LEC+DTMF+mixer       */
-	0x3f,  /* 30 Trans,LEC+DTMF+mixer+local */
-	0x2c,  /* 31 RTP,LEC+DTMF               */
-	0x3e,  /* 32 RTP,LEC+DTMF+mixer         */
-	0x3f,  /* 33 RTP,LEC+DTMF+mixer+local   */
-	0x00,  /* 34 Signaling task             */
-	0x00,  /* 35 PIAFS                      */
-	0x0c,  /* 36 Trans,DTMF+TONE            */
-	0x1e,  /* 37 Trans,DTMF+TONE+mixer      */
-	0x1f   /* 38 Trans,DTMF+TONE+mixer+local*/
-};
-
-
-static word get_b1_facilities(PLCI *plci, byte b1_resource)
-{
-	word b1_facilities;
-
-	b1_facilities = b1_facilities_table[b1_resource];
-	if ((b1_resource == 9) || (b1_resource == 20) || (b1_resource == 25))
-	{
-
-		if (!(((plci->requested_options_conn | plci->requested_options) & (1L << PRIVATE_DTMF_TONE))
-		      || (plci->appl && (plci->adapter->requested_options_table[plci->appl->Id - 1] & (1L << PRIVATE_DTMF_TONE)))))
-
-		{
-			if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_SOFTDTMF_SEND)
-				b1_facilities |= B1_FACILITY_DTMFX;
-			if (plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_SOFTDTMF_RECEIVE)
-				b1_facilities |= B1_FACILITY_DTMFR;
-		}
-	}
-	if ((b1_resource == 17) || (b1_resource == 18))
-	{
-		if (plci->adapter->manufacturer_features & (MANUFACTURER_FEATURE_V18 | MANUFACTURER_FEATURE_VOWN))
-			b1_facilities |= B1_FACILITY_DTMFX | B1_FACILITY_DTMFR;
-	}
-/*
-	dbug (1, dprintf("[%06lx] %s,%d: get_b1_facilities %d %04x",
-		(dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)),
-		(char far *)(FILE_), __LINE__, b1_resource, b1_facilities));
-*/
-	return (b1_facilities);
-}
-
-
-static byte add_b1_facilities(PLCI *plci, byte b1_resource, word b1_facilities)
-{
-	byte b;
-
-	switch (b1_resource)
-	{
-	case 5:
-	case 26:
-		if (b1_facilities & (B1_FACILITY_MIXER | B1_FACILITY_VOICE))
-			b = 26;
-		else
-			b = 5;
-		break;
-
-	case 8:
-	case 27:
-		if (b1_facilities & (B1_FACILITY_MIXER | B1_FACILITY_VOICE))
-			b = 27;
-		else
-			b = 8;
-		break;
-
-	case 9:
-	case 20:
-	case 22:
-	case 23:
-	case 24:
-	case 25:
-	case 28:
-	case 29:
-	case 30:
-	case 36:
-	case 37:
-	case 38:
-		if (b1_facilities & B1_FACILITY_EC)
-		{
-			if (b1_facilities & B1_FACILITY_LOCAL)
-				b = 30;
-			else if (b1_facilities & (B1_FACILITY_MIXER | B1_FACILITY_VOICE))
-				b = 29;
-			else
-				b = 28;
-		}
-
-		else if ((b1_facilities & (B1_FACILITY_DTMFX | B1_FACILITY_DTMFR |
B1_FACILITY_MIXER)) - && (((plci->requested_options_conn | plci->requested_options) & (1L << PRIVATE_DTMF_TONE)) - || (plci->appl && (plci->adapter->requested_options_table[plci->appl->Id - 1] & (1L << PRIVATE_DTMF_TONE))))) - { - if (b1_facilities & B1_FACILITY_LOCAL) - b = 38; - else if (b1_facilities & (B1_FACILITY_MIXER | B1_FACILITY_VOICE)) - b = 37; - else - b = 36; - } - - else if (((plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_HARDDTMF) - && !(plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_SOFTDTMF_RECEIVE)) - || ((b1_facilities & B1_FACILITY_DTMFR) - && ((b1_facilities & B1_FACILITY_MIXER) - || !(plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_SOFTDTMF_RECEIVE))) - || ((b1_facilities & B1_FACILITY_DTMFX) - && ((b1_facilities & B1_FACILITY_MIXER) - || !(plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_SOFTDTMF_SEND)))) - { - if (b1_facilities & B1_FACILITY_LOCAL) - b = 24; - else if (b1_facilities & (B1_FACILITY_MIXER | B1_FACILITY_VOICE)) - b = 23; - else - b = 22; - } - else - { - if (b1_facilities & B1_FACILITY_LOCAL) - b = 25; - else if (b1_facilities & (B1_FACILITY_MIXER | B1_FACILITY_VOICE)) - b = 20; - else - b = 9; - } - break; - - case 31: - case 32: - case 33: - if (b1_facilities & B1_FACILITY_LOCAL) - b = 33; - else if (b1_facilities & (B1_FACILITY_MIXER | B1_FACILITY_VOICE)) - b = 32; - else - b = 31; - break; - - default: - b = b1_resource; - } - dbug(1, dprintf("[%06lx] %s,%d: add_b1_facilities %d %04x %d %04x", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, - b1_resource, b1_facilities, b, get_b1_facilities(plci, b))); - return (b); -} - - -static void adjust_b1_facilities(PLCI *plci, byte new_b1_resource, word new_b1_facilities) -{ - word removed_facilities; - - dbug(1, dprintf("[%06lx] %s,%d: adjust_b1_facilities %d %04x %04x", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__, new_b1_resource, new_b1_facilities, - new_b1_facilities & get_b1_facilities(plci, new_b1_resource))); - - new_b1_facilities &= get_b1_facilities(plci, new_b1_resource); - removed_facilities = plci->B1_facilities & ~new_b1_facilities; - - if (removed_facilities & B1_FACILITY_EC) - ec_clear_config(plci); - - - if (removed_facilities & B1_FACILITY_DTMFR) - { - dtmf_rec_clear_config(plci); - dtmf_parameter_clear_config(plci); - } - if (removed_facilities & B1_FACILITY_DTMFX) - dtmf_send_clear_config(plci); - - - if (removed_facilities & B1_FACILITY_MIXER) - mixer_clear_config(plci); - - if (removed_facilities & B1_FACILITY_VOICE) - adv_voice_clear_config(plci); - plci->B1_facilities = new_b1_facilities; -} - - -static void adjust_b_clear(PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: adjust_b_clear", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - plci->adjust_b_restore = false; -} - - -static word adjust_b_process(dword Id, PLCI *plci, byte Rc) -{ - word Info; - byte b1_resource; - NCCI *ncci_ptr; - API_PARSE bp[2]; - - dbug(1, dprintf("[%06lx] %s,%d: adjust_b_process %02x %d", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->adjust_b_state)); - - Info = GOOD; - switch (plci->adjust_b_state) - { - case ADJUST_B_START: - if ((plci->adjust_b_parms_msg == NULL) - && (plci->adjust_b_mode & ADJUST_B_MODE_SWITCH_L1) - && ((plci->adjust_b_mode & ~(ADJUST_B_MODE_SAVE | ADJUST_B_MODE_SWITCH_L1 | - ADJUST_B_MODE_NO_RESOURCE | ADJUST_B_MODE_RESTORE)) == 0)) - { - b1_resource = (plci->adjust_b_mode == 
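get_b1_facilities and add_b1_facilities above work against the bit layout encoded in b1_facilities_table: 0x12 for the "Trans,mixer" rows, 0x0c for "Trans,DTMF", 0x2c once the LEC is added, and so on. A decoding sketch under that inferred layout; the bit names match the B1_FACILITY_* identifiers, and the values are read off the table comments:

#include <stdio.h>

/* Bit assignments inferred from the table above. */
#define B1_FACILITY_LOCAL 0x01
#define B1_FACILITY_MIXER 0x02
#define B1_FACILITY_DTMFX 0x04
#define B1_FACILITY_DTMFR 0x08
#define B1_FACILITY_VOICE 0x10
#define B1_FACILITY_EC    0x20

static void decode(unsigned int f)
{
	printf("%02x:%s%s%s%s%s%s\n", f,
	       (f & B1_FACILITY_LOCAL) ? " local" : "",
	       (f & B1_FACILITY_MIXER) ? " mixer" : "",
	       (f & B1_FACILITY_DTMFX) ? " dtmf-send" : "",
	       (f & B1_FACILITY_DTMFR) ? " dtmf-recv" : "",
	       (f & B1_FACILITY_VOICE) ? " voice" : "",
	       (f & B1_FACILITY_EC) ? " ec" : "");
}

int main(void)
{
	decode(0x12);	/* resource 20: Adv voice (Trans,mixer)    */
	decode(0x1e);	/* resource 23: Trans,DTMF+mixer           */
	decode(0x3f);	/* resource 30: Trans,LEC+DTMF+mixer+local */
	return 0;
}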
ADJUST_B_MODE_NO_RESOURCE) ? - 0 : add_b1_facilities(plci, plci->B1_resource, plci->adjust_b_facilities); - if (b1_resource == plci->B1_resource) - { - adjust_b1_facilities(plci, b1_resource, plci->adjust_b_facilities); - break; - } - if (plci->adjust_b_facilities & ~get_b1_facilities(plci, b1_resource)) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B nonsupported facilities %d %d %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, - plci->B1_resource, b1_resource, plci->adjust_b_facilities)); - Info = _WRONG_STATE; - break; - } - } - if (plci->adjust_b_mode & ADJUST_B_MODE_SAVE) - { - - mixer_prepare_switch(Id, plci); - - - dtmf_prepare_switch(Id, plci); - dtmf_parameter_prepare_switch(Id, plci); - - - ec_prepare_switch(Id, plci); - - adv_voice_prepare_switch(Id, plci); - } - plci->adjust_b_state = ADJUST_B_SAVE_MIXER_1; - Rc = OK; - /* fall through */ - case ADJUST_B_SAVE_MIXER_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_SAVE) - { - - Info = mixer_save_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_SAVE_DTMF_1; - Rc = OK; - /* fall through */ - case ADJUST_B_SAVE_DTMF_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_SAVE) - { - - Info = dtmf_save_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_REMOVE_L23_1; - /* fall through */ - case ADJUST_B_REMOVE_L23_1: - if ((plci->adjust_b_mode & ADJUST_B_MODE_REMOVE_L23) - && plci->NL.Id && !plci->nl_remove_id) - { - plci->internal_command = plci->adjust_b_command; - if (plci->adjust_b_ncci != 0) - { - ncci_ptr = &(plci->adapter->ncci[plci->adjust_b_ncci]); - while (ncci_ptr->data_pending) - { - plci->data_sent_ptr = ncci_ptr->DBuffer[ncci_ptr->data_out].P; - data_rc(plci, plci->adapter->ncci_ch[plci->adjust_b_ncci]); - } - while (ncci_ptr->data_ack_pending) - data_ack(plci, plci->adapter->ncci_ch[plci->adjust_b_ncci]); - } - nl_req_ncci(plci, REMOVE, - (byte)((plci->adjust_b_mode & ADJUST_B_MODE_CONNECT) ? 
plci->adjust_b_ncci : 0)); - send_req(plci); - plci->adjust_b_state = ADJUST_B_REMOVE_L23_2; - break; - } - plci->adjust_b_state = ADJUST_B_REMOVE_L23_2; - Rc = OK; - /* fall through */ - case ADJUST_B_REMOVE_L23_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B remove failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - if (plci->adjust_b_mode & ADJUST_B_MODE_REMOVE_L23) - { - if (plci_nl_busy(plci)) - { - plci->internal_command = plci->adjust_b_command; - break; - } - } - plci->adjust_b_state = ADJUST_B_SAVE_EC_1; - Rc = OK; - /* fall through */ - case ADJUST_B_SAVE_EC_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_SAVE) - { - - Info = ec_save_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_SAVE_DTMF_PARAMETER_1; - Rc = OK; - /* fall through */ - case ADJUST_B_SAVE_DTMF_PARAMETER_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_SAVE) - { - - Info = dtmf_parameter_save_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_SAVE_VOICE_1; - Rc = OK; - /* fall through */ - case ADJUST_B_SAVE_VOICE_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_SAVE) - { - Info = adv_voice_save_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - } - plci->adjust_b_state = ADJUST_B_SWITCH_L1_1; - /* fall through */ - case ADJUST_B_SWITCH_L1_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_SWITCH_L1) - { - if (plci->sig_req) - { - plci->internal_command = plci->adjust_b_command; - break; - } - if (plci->adjust_b_parms_msg != NULL) - api_load_msg(plci->adjust_b_parms_msg, bp); - else - api_load_msg(&plci->B_protocol, bp); - Info = add_b1(plci, bp, - (word)((plci->adjust_b_mode & ADJUST_B_MODE_NO_RESOURCE) ? 
2 : 0), - plci->adjust_b_facilities); - if (Info != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B invalid L1 parameters %d %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, - plci->B1_resource, plci->adjust_b_facilities)); - break; - } - plci->internal_command = plci->adjust_b_command; - sig_req(plci, RESOURCES, 0); - send_req(plci); - plci->adjust_b_state = ADJUST_B_SWITCH_L1_2; - break; - } - plci->adjust_b_state = ADJUST_B_SWITCH_L1_2; - Rc = OK; - /* fall through */ - case ADJUST_B_SWITCH_L1_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B switch failed %02x %d %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, - Rc, plci->B1_resource, plci->adjust_b_facilities)); - Info = _WRONG_STATE; - break; - } - plci->adjust_b_state = ADJUST_B_RESTORE_VOICE_1; - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_VOICE_1: - case ADJUST_B_RESTORE_VOICE_2: - if (plci->adjust_b_mode & ADJUST_B_MODE_RESTORE) - { - Info = adv_voice_restore_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - } - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_PARAMETER_1; - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_DTMF_PARAMETER_1: - case ADJUST_B_RESTORE_DTMF_PARAMETER_2: - if (plci->adjust_b_mode & ADJUST_B_MODE_RESTORE) - { - - Info = dtmf_parameter_restore_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_RESTORE_EC_1; - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_EC_1: - case ADJUST_B_RESTORE_EC_2: - if (plci->adjust_b_mode & ADJUST_B_MODE_RESTORE) - { - - Info = ec_restore_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_ASSIGN_L23_1; - /* fall through */ - case ADJUST_B_ASSIGN_L23_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_ASSIGN_L23) - { - if (plci_nl_busy(plci)) - { - plci->internal_command = plci->adjust_b_command; - break; - } - if (plci->adjust_b_mode & ADJUST_B_MODE_CONNECT) - plci->call_dir |= CALL_DIR_FORCE_OUTG_NL; - if (plci->adjust_b_parms_msg != NULL) - api_load_msg(plci->adjust_b_parms_msg, bp); - else - api_load_msg(&plci->B_protocol, bp); - Info = add_b23(plci, bp); - if (Info != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B invalid L23 parameters %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Info)); - break; - } - plci->internal_command = plci->adjust_b_command; - nl_req_ncci(plci, ASSIGN, 0); - send_req(plci); - plci->adjust_b_state = ADJUST_B_ASSIGN_L23_2; - break; - } - plci->adjust_b_state = ADJUST_B_ASSIGN_L23_2; - Rc = ASSIGN_OK; - /* fall through */ - case ADJUST_B_ASSIGN_L23_2: - if ((Rc != OK) && (Rc != OK_FC) && (Rc != ASSIGN_OK)) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B assign failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - if (plci->adjust_b_mode & ADJUST_B_MODE_ASSIGN_L23) - { - if (Rc != ASSIGN_OK) - { - plci->internal_command = plci->adjust_b_command; - break; - } - } - if (plci->adjust_b_mode & ADJUST_B_MODE_USER_CONNECT) - { - plci->adjust_b_restore = true; - break; - } - plci->adjust_b_state = ADJUST_B_CONNECT_1; - /* fall through */ - case ADJUST_B_CONNECT_1: - if (plci->adjust_b_mode & ADJUST_B_MODE_CONNECT) - { - plci->internal_command = plci->adjust_b_command; - if (plci_nl_busy(plci)) - break; - nl_req_ncci(plci, N_CONNECT, 0); - send_req(plci); - plci->adjust_b_state = ADJUST_B_CONNECT_2; - break; - } - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_1; - Rc = OK; - /* fall through 
*/ - case ADJUST_B_CONNECT_2: - case ADJUST_B_CONNECT_3: - case ADJUST_B_CONNECT_4: - if ((Rc != OK) && (Rc != OK_FC) && (Rc != 0)) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B connect failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - if (Rc == OK) - { - if (plci->adjust_b_mode & ADJUST_B_MODE_CONNECT) - { - get_ncci(plci, (byte)(Id >> 16), plci->adjust_b_ncci); - Id = (Id & 0xffff) | (((dword)(plci->adjust_b_ncci)) << 16); - } - if (plci->adjust_b_state == ADJUST_B_CONNECT_2) - plci->adjust_b_state = ADJUST_B_CONNECT_3; - else if (plci->adjust_b_state == ADJUST_B_CONNECT_4) - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_1; - } - else if (Rc == 0) - { - if (plci->adjust_b_state == ADJUST_B_CONNECT_2) - plci->adjust_b_state = ADJUST_B_CONNECT_4; - else if (plci->adjust_b_state == ADJUST_B_CONNECT_3) - plci->adjust_b_state = ADJUST_B_RESTORE_DTMF_1; - } - if (plci->adjust_b_state != ADJUST_B_RESTORE_DTMF_1) - { - plci->internal_command = plci->adjust_b_command; - break; - } - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_DTMF_1: - case ADJUST_B_RESTORE_DTMF_2: - if (plci->adjust_b_mode & ADJUST_B_MODE_RESTORE) - { - - Info = dtmf_restore_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_RESTORE_MIXER_1; - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_MIXER_1: - case ADJUST_B_RESTORE_MIXER_2: - case ADJUST_B_RESTORE_MIXER_3: - case ADJUST_B_RESTORE_MIXER_4: - case ADJUST_B_RESTORE_MIXER_5: - case ADJUST_B_RESTORE_MIXER_6: - case ADJUST_B_RESTORE_MIXER_7: - if (plci->adjust_b_mode & ADJUST_B_MODE_RESTORE) - { - - Info = mixer_restore_config(Id, plci, Rc); - if ((Info != GOOD) || plci->internal_command) - break; - - } - plci->adjust_b_state = ADJUST_B_END; - case ADJUST_B_END: - break; - } - return (Info); -} - - -static void adjust_b1_resource(dword Id, PLCI *plci, API_SAVE *bp_msg, word b1_facilities, word internal_command) -{ - - dbug(1, dprintf("[%06lx] %s,%d: adjust_b1_resource %d %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, - plci->B1_resource, b1_facilities)); - - plci->adjust_b_parms_msg = bp_msg; - plci->adjust_b_facilities = b1_facilities; - plci->adjust_b_command = internal_command; - plci->adjust_b_ncci = (word)(Id >> 16); - if ((bp_msg == NULL) && (plci->B1_resource == 0)) - plci->adjust_b_mode = ADJUST_B_MODE_SAVE | ADJUST_B_MODE_NO_RESOURCE | ADJUST_B_MODE_SWITCH_L1; - else - plci->adjust_b_mode = ADJUST_B_MODE_SAVE | ADJUST_B_MODE_SWITCH_L1 | ADJUST_B_MODE_RESTORE; - plci->adjust_b_state = ADJUST_B_START; - dbug(1, dprintf("[%06lx] %s,%d: Adjust B1 resource %d %04x...", - UnMapId(Id), (char *)(FILE_), __LINE__, - plci->B1_resource, b1_facilities)); -} - - -static void adjust_b_restore(dword Id, PLCI *plci, byte Rc) -{ - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: adjust_b_restore %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; - if (plci->req_in != 0) - { - plci->internal_command = ADJUST_B_RESTORE_1; - break; - } - Rc = OK; - /* fall through */ - case ADJUST_B_RESTORE_1: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B enqueued failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - } - plci->adjust_b_parms_msg = NULL; - plci->adjust_b_facilities = plci->B1_facilities; - plci->adjust_b_command = 
ADJUST_B_RESTORE_2; - plci->adjust_b_ncci = (word)(Id >> 16); - plci->adjust_b_mode = ADJUST_B_MODE_RESTORE; - plci->adjust_b_state = ADJUST_B_START; - dbug(1, dprintf("[%06lx] %s,%d: Adjust B restore...", - UnMapId(Id), (char *)(FILE_), __LINE__)); - /* fall through */ - case ADJUST_B_RESTORE_2: - if (adjust_b_process(Id, plci, Rc) != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Adjust B restore failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - } - if (plci->internal_command) - break; - break; - } -} - - -static void reset_b3_command(dword Id, PLCI *plci, byte Rc) -{ - word Info; - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: reset_b3_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - Info = GOOD; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; - plci->adjust_b_parms_msg = NULL; - plci->adjust_b_facilities = plci->B1_facilities; - plci->adjust_b_command = RESET_B3_COMMAND_1; - plci->adjust_b_ncci = (word)(Id >> 16); - plci->adjust_b_mode = ADJUST_B_MODE_REMOVE_L23 | ADJUST_B_MODE_ASSIGN_L23 | ADJUST_B_MODE_CONNECT; - plci->adjust_b_state = ADJUST_B_START; - dbug(1, dprintf("[%06lx] %s,%d: Reset B3...", - UnMapId(Id), (char *)(FILE_), __LINE__)); - /* fall through */ - case RESET_B3_COMMAND_1: - Info = adjust_b_process(Id, plci, Rc); - if (Info != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Reset failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - break; - } - if (plci->internal_command) - return; - break; - } -/* sendf (plci->appl, _RESET_B3_R | CONFIRM, Id, plci->number, "w", Info);*/ - sendf(plci->appl, _RESET_B3_I, Id, 0, "s", ""); -} - - -static void select_b_command(dword Id, PLCI *plci, byte Rc) -{ - word Info; - word internal_command; - byte esc_chi[3]; - - dbug(1, dprintf("[%06lx] %s,%d: select_b_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - Info = GOOD; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; - plci->adjust_b_parms_msg = &plci->saved_msg; - if ((plci->tel == ADV_VOICE) && (plci == plci->adapter->AdvSignalPLCI)) - plci->adjust_b_facilities = plci->B1_facilities | B1_FACILITY_VOICE; - else - plci->adjust_b_facilities = plci->B1_facilities & ~B1_FACILITY_VOICE; - plci->adjust_b_command = SELECT_B_COMMAND_1; - plci->adjust_b_ncci = (word)(Id >> 16); - if (plci->saved_msg.parms[0].length == 0) - { - plci->adjust_b_mode = ADJUST_B_MODE_SAVE | ADJUST_B_MODE_REMOVE_L23 | ADJUST_B_MODE_SWITCH_L1 | - ADJUST_B_MODE_NO_RESOURCE; - } - else - { - plci->adjust_b_mode = ADJUST_B_MODE_SAVE | ADJUST_B_MODE_REMOVE_L23 | ADJUST_B_MODE_SWITCH_L1 | - ADJUST_B_MODE_ASSIGN_L23 | ADJUST_B_MODE_USER_CONNECT | ADJUST_B_MODE_RESTORE; - } - plci->adjust_b_state = ADJUST_B_START; - dbug(1, dprintf("[%06lx] %s,%d: Select B protocol...", - UnMapId(Id), (char *)(FILE_), __LINE__)); - /* fall through */ - case SELECT_B_COMMAND_1: - Info = adjust_b_process(Id, plci, Rc); - if (Info != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: Select B protocol failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - break; - } - if (plci->internal_command) - return; - if (plci->tel == ADV_VOICE) - { - esc_chi[0] = 0x02; - esc_chi[1] = 0x18; - esc_chi[2] = plci->b_channel; - SetVoiceChannel(plci->adapter->AdvCodecPLCI, esc_chi, plci->adapter); - } - break; - } - sendf(plci->appl, _SELECT_B_REQ | CONFIRM, Id, plci->number, "w", 
Info); -} - - -static void fax_connect_ack_command(dword Id, PLCI *plci, byte Rc) -{ - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: fax_connect_ack_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; /* fall through */ - case FAX_CONNECT_ACK_COMMAND_1: - if (plci_nl_busy(plci)) - { - plci->internal_command = FAX_CONNECT_ACK_COMMAND_1; - return; - } - plci->internal_command = FAX_CONNECT_ACK_COMMAND_2; - plci->NData[0].P = plci->fax_connect_info_buffer; - plci->NData[0].PLength = plci->fax_connect_info_length; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_CONNECT_ACK; - plci->adapter->request(&plci->NL); - return; - case FAX_CONNECT_ACK_COMMAND_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: FAX issue CONNECT ACK failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - break; - } - } - if ((plci->ncpi_state & NCPI_VALID_CONNECT_B3_ACT) - && !(plci->ncpi_state & NCPI_CONNECT_B3_ACT_SENT)) - { - if (plci->B3_prot == 4) - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - else - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "S", plci->ncpi_buffer); - plci->ncpi_state |= NCPI_CONNECT_B3_ACT_SENT; - } -} - - -static void fax_edata_ack_command(dword Id, PLCI *plci, byte Rc) -{ - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: fax_edata_ack_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; - /* fall through */ - case FAX_EDATA_ACK_COMMAND_1: - if (plci_nl_busy(plci)) - { - plci->internal_command = FAX_EDATA_ACK_COMMAND_1; - return; - } - plci->internal_command = FAX_EDATA_ACK_COMMAND_2; - plci->NData[0].P = plci->fax_connect_info_buffer; - plci->NData[0].PLength = plci->fax_edata_ack_length; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_EDATA; - plci->adapter->request(&plci->NL); - return; - case FAX_EDATA_ACK_COMMAND_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: FAX issue EDATA ACK failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - break; - } - } -} - - -static void fax_connect_info_command(dword Id, PLCI *plci, byte Rc) -{ - word Info; - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: fax_connect_info_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - Info = GOOD; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; /* fall through */ - case FAX_CONNECT_INFO_COMMAND_1: - if (plci_nl_busy(plci)) - { - plci->internal_command = FAX_CONNECT_INFO_COMMAND_1; - return; - } - plci->internal_command = FAX_CONNECT_INFO_COMMAND_2; - plci->NData[0].P = plci->fax_connect_info_buffer; - plci->NData[0].PLength = plci->fax_connect_info_length; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_EDATA; - plci->adapter->request(&plci->NL); - return; - case FAX_CONNECT_INFO_COMMAND_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: FAX setting connect info failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - if 
(plci_nl_busy(plci)) - { - plci->internal_command = FAX_CONNECT_INFO_COMMAND_2; - return; - } - plci->command = _CONNECT_B3_R; - nl_req_ncci(plci, N_CONNECT, 0); - send_req(plci); - return; - } - sendf(plci->appl, _CONNECT_B3_R | CONFIRM, Id, plci->number, "w", Info); -} - - -static void fax_adjust_b23_command(dword Id, PLCI *plci, byte Rc) -{ - word Info; - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: fax_adjust_b23_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - Info = GOOD; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; - plci->adjust_b_parms_msg = NULL; - plci->adjust_b_facilities = plci->B1_facilities; - plci->adjust_b_command = FAX_ADJUST_B23_COMMAND_1; - plci->adjust_b_ncci = (word)(Id >> 16); - plci->adjust_b_mode = ADJUST_B_MODE_REMOVE_L23 | ADJUST_B_MODE_ASSIGN_L23; - plci->adjust_b_state = ADJUST_B_START; - dbug(1, dprintf("[%06lx] %s,%d: FAX adjust B23...", - UnMapId(Id), (char *)(FILE_), __LINE__)); - /* fall through */ - case FAX_ADJUST_B23_COMMAND_1: - Info = adjust_b_process(Id, plci, Rc); - if (Info != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: FAX adjust failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - break; - } - if (plci->internal_command) - return; - /* fall through */ - case FAX_ADJUST_B23_COMMAND_2: - if (plci_nl_busy(plci)) - { - plci->internal_command = FAX_ADJUST_B23_COMMAND_2; - return; - } - plci->command = _CONNECT_B3_R; - nl_req_ncci(plci, N_CONNECT, 0); - send_req(plci); - return; - } - sendf(plci->appl, _CONNECT_B3_R | CONFIRM, Id, plci->number, "w", Info); -} - - -static void fax_disconnect_command(dword Id, PLCI *plci, byte Rc) -{ - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: fax_disconnect_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; - plci->internal_command = FAX_DISCONNECT_COMMAND_1; - return; - case FAX_DISCONNECT_COMMAND_1: - case FAX_DISCONNECT_COMMAND_2: - case FAX_DISCONNECT_COMMAND_3: - if ((Rc != OK) && (Rc != OK_FC) && (Rc != 0)) - { - dbug(1, dprintf("[%06lx] %s,%d: FAX disconnect EDATA failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - break; - } - if (Rc == OK) - { - if ((internal_command == FAX_DISCONNECT_COMMAND_1) - || (internal_command == FAX_DISCONNECT_COMMAND_2)) - { - plci->internal_command = FAX_DISCONNECT_COMMAND_2; - } - } - else if (Rc == 0) - { - if (internal_command == FAX_DISCONNECT_COMMAND_1) - plci->internal_command = FAX_DISCONNECT_COMMAND_3; - } - return; - } -} - - - -static void rtp_connect_b3_req_command(dword Id, PLCI *plci, byte Rc) -{ - word Info; - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: rtp_connect_b3_req_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - Info = GOOD; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; /* fall through */ - case RTP_CONNECT_B3_REQ_COMMAND_1: - if (plci_nl_busy(plci)) - { - plci->internal_command = RTP_CONNECT_B3_REQ_COMMAND_1; - return; - } - plci->internal_command = RTP_CONNECT_B3_REQ_COMMAND_2; - nl_req_ncci(plci, N_CONNECT, 0); - send_req(plci); - return; - case RTP_CONNECT_B3_REQ_COMMAND_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: 
RTP setting connect info failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - Info = _WRONG_STATE; - break; - } - if (plci_nl_busy(plci)) - { - plci->internal_command = RTP_CONNECT_B3_REQ_COMMAND_2; - return; - } - plci->internal_command = RTP_CONNECT_B3_REQ_COMMAND_3; - plci->NData[0].PLength = plci->internal_req_buffer[0]; - plci->NData[0].P = plci->internal_req_buffer + 1; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_UDATA; - plci->adapter->request(&plci->NL); - break; - case RTP_CONNECT_B3_REQ_COMMAND_3: - return; - } - sendf(plci->appl, _CONNECT_B3_R | CONFIRM, Id, plci->number, "w", Info); -} - - -static void rtp_connect_b3_res_command(dword Id, PLCI *plci, byte Rc) -{ - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: rtp_connect_b3_res_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; /* fall through */ - case RTP_CONNECT_B3_RES_COMMAND_1: - if (plci_nl_busy(plci)) - { - plci->internal_command = RTP_CONNECT_B3_RES_COMMAND_1; - return; - } - plci->internal_command = RTP_CONNECT_B3_RES_COMMAND_2; - nl_req_ncci(plci, N_CONNECT_ACK, (byte)(Id >> 16)); - send_req(plci); - return; - case RTP_CONNECT_B3_RES_COMMAND_2: - if ((Rc != OK) && (Rc != OK_FC)) - { - dbug(1, dprintf("[%06lx] %s,%d: RTP setting connect resp info failed %02x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc)); - break; - } - if (plci_nl_busy(plci)) - { - plci->internal_command = RTP_CONNECT_B3_RES_COMMAND_2; - return; - } - sendf(plci->appl, _CONNECT_B3_ACTIVE_I, Id, 0, "s", ""); - plci->internal_command = RTP_CONNECT_B3_RES_COMMAND_3; - plci->NData[0].PLength = plci->internal_req_buffer[0]; - plci->NData[0].P = plci->internal_req_buffer + 1; - plci->NL.X = plci->NData; - plci->NL.ReqCh = 0; - plci->NL.Req = plci->nl_req = (byte) N_UDATA; - plci->adapter->request(&plci->NL); - return; - case RTP_CONNECT_B3_RES_COMMAND_3: - return; - } -} - - - -static void hold_save_command(dword Id, PLCI *plci, byte Rc) -{ - byte SS_Ind[] = "\x05\x02\x00\x02\x00\x00"; /* Hold_Ind struct*/ - word Info; - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: hold_save_command %02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - Info = GOOD; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - if (!plci->NL.Id) - break; - plci->command = 0; - plci->adjust_b_parms_msg = NULL; - plci->adjust_b_facilities = plci->B1_facilities; - plci->adjust_b_command = HOLD_SAVE_COMMAND_1; - plci->adjust_b_ncci = (word)(Id >> 16); - plci->adjust_b_mode = ADJUST_B_MODE_SAVE | ADJUST_B_MODE_REMOVE_L23; - plci->adjust_b_state = ADJUST_B_START; - dbug(1, dprintf("[%06lx] %s,%d: HOLD save...", - UnMapId(Id), (char *)(FILE_), __LINE__)); - /* fall through */ - case HOLD_SAVE_COMMAND_1: - Info = adjust_b_process(Id, plci, Rc); - if (Info != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: HOLD save failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - break; - } - if (plci->internal_command) - return; - } - sendf(plci->appl, _FACILITY_I, Id & 0xffffL, 0, "ws", 3, SS_Ind); -} - - -static void retrieve_restore_command(dword Id, PLCI *plci, byte Rc) -{ - byte SS_Ind[] = "\x05\x03\x00\x02\x00\x00"; /* Retrieve_Ind struct*/ - word Info; - word internal_command; - - dbug(1, dprintf("[%06lx] %s,%d: retrieve_restore_command 
%02x %04x", - UnMapId(Id), (char *)(FILE_), __LINE__, Rc, plci->internal_command)); - - Info = GOOD; - internal_command = plci->internal_command; - plci->internal_command = 0; - switch (internal_command) - { - default: - plci->command = 0; - plci->adjust_b_parms_msg = NULL; - plci->adjust_b_facilities = plci->B1_facilities; - plci->adjust_b_command = RETRIEVE_RESTORE_COMMAND_1; - plci->adjust_b_ncci = (word)(Id >> 16); - plci->adjust_b_mode = ADJUST_B_MODE_ASSIGN_L23 | ADJUST_B_MODE_USER_CONNECT | ADJUST_B_MODE_RESTORE; - plci->adjust_b_state = ADJUST_B_START; - dbug(1, dprintf("[%06lx] %s,%d: RETRIEVE restore...", - UnMapId(Id), (char *)(FILE_), __LINE__)); - /* fall through */ - case RETRIEVE_RESTORE_COMMAND_1: - Info = adjust_b_process(Id, plci, Rc); - if (Info != GOOD) - { - dbug(1, dprintf("[%06lx] %s,%d: RETRIEVE restore failed", - UnMapId(Id), (char *)(FILE_), __LINE__)); - break; - } - if (plci->internal_command) - return; - } - sendf(plci->appl, _FACILITY_I, Id & 0xffffL, 0, "ws", 3, SS_Ind); -} - - -static void init_b1_config(PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: init_b1_config", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - plci->B1_resource = 0; - plci->B1_facilities = 0; - - plci->li_bchannel_id = 0; - mixer_clear_config(plci); - - - ec_clear_config(plci); - - - dtmf_rec_clear_config(plci); - dtmf_send_clear_config(plci); - dtmf_parameter_clear_config(plci); - - adv_voice_clear_config(plci); - adjust_b_clear(plci); -} - - -static void clear_b1_config(PLCI *plci) -{ - - dbug(1, dprintf("[%06lx] %s,%d: clear_b1_config", - (dword)((plci->Id << 8) | UnMapController(plci->adapter->Id)), - (char *)(FILE_), __LINE__)); - - adv_voice_clear_config(plci); - adjust_b_clear(plci); - - ec_clear_config(plci); - - - dtmf_rec_clear_config(plci); - dtmf_send_clear_config(plci); - dtmf_parameter_clear_config(plci); - - - if ((plci->li_bchannel_id != 0) - && (li_config_table[plci->adapter->li_base + (plci->li_bchannel_id - 1)].plci == plci)) - { - mixer_clear_config(plci); - li_config_table[plci->adapter->li_base + (plci->li_bchannel_id - 1)].plci = NULL; - plci->li_bchannel_id = 0; - } - - plci->B1_resource = 0; - plci->B1_facilities = 0; -} - - -/* ----------------------------------------------------------------- - XON protocol local helpers - ----------------------------------------------------------------- */ -static void channel_flow_control_remove(PLCI *plci) { - DIVA_CAPI_ADAPTER *a = plci->adapter; - word i; - for (i = 1; i < MAX_NL_CHANNEL + 1; i++) { - if (a->ch_flow_plci[i] == plci->Id) { - a->ch_flow_plci[i] = 0; - a->ch_flow_control[i] = 0; - } - } -} - -static void channel_x_on(PLCI *plci, byte ch) { - DIVA_CAPI_ADAPTER *a = plci->adapter; - if (a->ch_flow_control[ch] & N_XON_SENT) { - a->ch_flow_control[ch] &= ~N_XON_SENT; - } -} - -static void channel_x_off(PLCI *plci, byte ch, byte flag) { - DIVA_CAPI_ADAPTER *a = plci->adapter; - if ((a->ch_flow_control[ch] & N_RX_FLOW_CONTROL_MASK) == 0) { - a->ch_flow_control[ch] |= (N_CH_XOFF | flag); - a->ch_flow_plci[ch] = plci->Id; - a->ch_flow_control_pending++; - } -} - -static void channel_request_xon(PLCI *plci, byte ch) { - DIVA_CAPI_ADAPTER *a = plci->adapter; - - if (a->ch_flow_control[ch] & N_CH_XOFF) { - a->ch_flow_control[ch] |= N_XON_REQ; - a->ch_flow_control[ch] &= ~N_CH_XOFF; - a->ch_flow_control[ch] &= ~N_XON_CONNECT_IND; - } -} - -static void channel_xmit_extended_xon(PLCI *plci) { - DIVA_CAPI_ADAPTER *a; - int max_ch = ARRAY_SIZE(a->ch_flow_control); - int 
i, one_requested = 0; - - if ((!plci) || (!plci->Id) || ((a = plci->adapter) == NULL)) { - return; - } - - for (i = 0; i < max_ch; i++) { - if ((a->ch_flow_control[i] & N_CH_XOFF) && - (a->ch_flow_control[i] & N_XON_CONNECT_IND) && - (plci->Id == a->ch_flow_plci[i])) { - channel_request_xon(plci, (byte)i); - one_requested = 1; - } - } - - if (one_requested) { - channel_xmit_xon(plci); - } -} - -/* - Try to xmit next X_ON -*/ -static int find_channel_with_pending_x_on(DIVA_CAPI_ADAPTER *a, PLCI *plci) { - int max_ch = ARRAY_SIZE(a->ch_flow_control); - int i; - - if (!(plci->adapter->manufacturer_features & MANUFACTURER_FEATURE_XONOFF_FLOW_CONTROL)) { - return (0); - } - - if (a->last_flow_control_ch >= max_ch) { - a->last_flow_control_ch = 1; - } - for (i = a->last_flow_control_ch; i < max_ch; i++) { - if ((a->ch_flow_control[i] & N_XON_REQ) && - (plci->Id == a->ch_flow_plci[i])) { - a->last_flow_control_ch = i + 1; - return (i); - } - } - - for (i = 1; i < a->last_flow_control_ch; i++) { - if ((a->ch_flow_control[i] & N_XON_REQ) && - (plci->Id == a->ch_flow_plci[i])) { - a->last_flow_control_ch = i + 1; - return (i); - } - } - - return (0); -} - -static void channel_xmit_xon(PLCI *plci) { - DIVA_CAPI_ADAPTER *a = plci->adapter; - byte ch; - - if (plci->nl_req || !plci->NL.Id || plci->nl_remove_id) { - return; - } - if ((ch = (byte)find_channel_with_pending_x_on(a, plci)) == 0) { - return; - } - a->ch_flow_control[ch] &= ~N_XON_REQ; - a->ch_flow_control[ch] |= N_XON_SENT; - - plci->NL.Req = plci->nl_req = (byte)N_XON; - plci->NL.ReqCh = ch; - plci->NL.X = plci->NData; - plci->NL.XNum = 1; - plci->NData[0].P = &plci->RBuffer[0]; - plci->NData[0].PLength = 0; - - plci->adapter->request(&plci->NL); -} - -static int channel_can_xon(PLCI *plci, byte ch) { - APPL *APPLptr; - DIVA_CAPI_ADAPTER *a; - word NCCIcode; - dword count; - word Num; - word i; - - APPLptr = plci->appl; - a = plci->adapter; - - if (!APPLptr) - return (0); - - NCCIcode = a->ch_ncci[ch] | (((word) a->Id) << 8); - - /* count all buffers within the Application pool */ - /* belonging to the same NCCI. XON if a first is */ - /* used. */ - count = 0; - Num = 0xffff; - for (i = 0; i < APPLptr->MaxBuffer; i++) { - if (NCCIcode == APPLptr->DataNCCI[i]) count++; - if (!APPLptr->DataNCCI[i] && Num == 0xffff) Num = i; - } - if ((count > 2) || (Num == 0xffff)) { - return (0); - } - return (1); -} - - -/*------------------------------------------------------------------*/ - -static word CPN_filter_ok(byte *cpn, DIVA_CAPI_ADAPTER *a, word offset) -{ - return 1; -} - - - -/**********************************************************************************/ -/* function groups the listening applications according to the CIP mask and the */ -/* Info_Mask. Each group gets just one Connect_Ind. Some application manufacturers */ -/* are not multi-instance capable, so they start e.g. 30 applications which causes */ -/* big problems on application level (one call, 30 Connect_Ind, etc.). The */ -/* function must be enabled by setting "a->group_optimization_enabled" from the */ -/* OS specific part (per adapter). */ -/**********************************************************************************/ -static void group_optimization(DIVA_CAPI_ADAPTER *a, PLCI *plci) -{ - word i, j, k, busy, group_found; - dword info_mask_group[MAX_CIP_TYPES]; - dword cip_mask_group[MAX_CIP_TYPES]; - word appl_number_group_type[MAX_APPL]; - PLCI *auxplci; - - /* all APPLs within this inc.
call are allowed to dial in */ - bitmap_fill(plci->group_optimization_mask_table, MAX_APPL); - - if (!a->group_optimization_enabled) - { - dbug(1, dprintf("No group optimization")); - return; - } - - dbug(1, dprintf("Group optimization = 0x%x...", a->group_optimization_enabled)); - - for (i = 0; i < MAX_CIP_TYPES; i++) - { - info_mask_group[i] = 0; - cip_mask_group[i] = 0; - } - for (i = 0; i < MAX_APPL; i++) - { - appl_number_group_type[i] = 0; - } - for (i = 0; i < max_appl; i++) /* check if any multi instance capable application is present */ - { /* group_optimization set to 1 means not to optimize multi-instance capable applications (default) */ - if (application[i].Id && (application[i].MaxNCCI) > 1 && (a->CIP_Mask[i]) && (a->group_optimization_enabled == 1)) - { - dbug(1, dprintf("Multi-Instance capable, no optimization required")); - return; /* allow good application unfiltered access */ - } - } - for (i = 0; i < max_appl; i++) /* Build CIP Groups */ - { - if (application[i].Id && a->CIP_Mask[i]) - { - for (k = 0, busy = false; k < a->max_plci; k++) - { - if (a->plci[k].Id) - { - auxplci = &a->plci[k]; - if (auxplci->appl == &application[i]) { - /* application has a busy PLCI */ - busy = true; - dbug(1, dprintf("Appl 0x%x is busy", i + 1)); - } else if (test_bit(i, plci->c_ind_mask_table)) { - /* application has an incoming call pending */ - busy = true; - dbug(1, dprintf("Appl 0x%x has inc. call pending", i + 1)); - } - } - } - - for (j = 0, group_found = 0; j <= (MAX_CIP_TYPES) && !busy && !group_found; j++) /* build groups with free applications only */ - { - if (j == MAX_CIP_TYPES) /* all groups are in use but group still not found */ - { /* the MAX_CIP_TYPES group enables all calls because of field overflow */ - appl_number_group_type[i] = MAX_CIP_TYPES; - group_found = true; - dbug(1, dprintf("Field overflow appl 0x%x", i + 1)); - } - else if ((info_mask_group[j] == a->CIP_Mask[i]) && (cip_mask_group[j] == a->Info_Mask[i])) - { /* is group already present ? 
*/ - appl_number_group_type[i] = j | 0x80; /* store the group number for each application */ - group_found = true; - dbug(1, dprintf("Group 0x%x found with appl 0x%x, CIP=0x%lx", appl_number_group_type[i], i + 1, info_mask_group[j])); - } - else if (!info_mask_group[j]) - { /* establish a new group */ - appl_number_group_type[i] = j | 0x80; /* store the group number for each application */ - info_mask_group[j] = a->CIP_Mask[i]; /* store the new CIP mask for the new group */ - cip_mask_group[j] = a->Info_Mask[i]; /* store the new Info_Mask for this new group */ - group_found = true; - dbug(1, dprintf("New Group 0x%x established with appl 0x%x, CIP=0x%lx", appl_number_group_type[i], i + 1, info_mask_group[j])); - } - } - } - } - - for (i = 0; i < max_appl; i++) /* Build group_optimization_mask_table */ - { - if (appl_number_group_type[i]) /* application is free, has listens and is member of a group */ - { - if (appl_number_group_type[i] == MAX_CIP_TYPES) - { - dbug(1, dprintf("OverflowGroup 0x%x, valid appl = 0x%x, call enabled", appl_number_group_type[i], i + 1)); - } - else - { - dbug(1, dprintf("Group 0x%x, valid appl = 0x%x", appl_number_group_type[i], i + 1)); - for (j = i + 1; j < max_appl; j++) /* search other group members and mark them as busy */ - { - if (appl_number_group_type[i] == appl_number_group_type[j]) - { - dbug(1, dprintf("Appl 0x%x is member of group 0x%x, no call", j + 1, appl_number_group_type[j])); - /* disable call on other group members */ - __clear_bit(j, plci->group_optimization_mask_table); - appl_number_group_type[j] = 0; /* remove disabled group member from group list */ - } - } - } - } - else /* application should not get a call */ - { - __clear_bit(i, plci->group_optimization_mask_table); - } - } - -} - - - -/* OS notifies the driver about a application Capi_Register */ -word CapiRegister(word id) -{ - word i, j, appls_found; - - PLCI *plci; - DIVA_CAPI_ADAPTER *a; - - for (i = 0, appls_found = 0; i < max_appl; i++) - { - if (application[i].Id && (application[i].Id != id)) - { - appls_found++; /* an application has been found */ - } - } - - if (appls_found) return true; - for (i = 0; i < max_adapter; i++) /* scan all adapters... */ - { - a = &adapter[i]; - if (a->request) - { - if (a->flag_dynamic_l1_down) /* remove adapter from L1 tristate (Huntgroup) */ - { - if (!appls_found) /* first application does a capi register */ - { - if ((j = get_plci(a))) /* activate L1 of all adapters */ - { - plci = &a->plci[j - 1]; - plci->command = 0; - add_p(plci, OAD, "\x01\xfd"); - add_p(plci, CAI, "\x01\x80"); - add_p(plci, UID, "\x06\x43\x61\x70\x69\x32\x30"); - add_p(plci, SHIFT | 6, NULL); - add_p(plci, SIN, "\x02\x00\x00"); - plci->internal_command = START_L1_SIG_ASSIGN_PEND; - sig_req(plci, ASSIGN, DSIG_ID); - add_p(plci, FTY, "\x02\xff\x07"); /* l1 start */ - sig_req(plci, SIG_CTRL, 0); - send_req(plci); - } - } - } - } - } - return false; -} - -/*------------------------------------------------------------------*/ - -/* Functions for virtual Switching e.g. Transfer by join, Conference */ - -static void VSwitchReqInd(PLCI *plci, dword Id, byte **parms) -{ - word i; - /* Format of vswitch_t: - 0 byte length - 1 byte VSWITCHIE - 2 byte VSWITCH_REQ/VSWITCH_IND - 3 byte reserved - 4 word VSwitchcommand - 6 word returnerror - 8... 
Params - */ - if (!plci || - !plci->appl || - !plci->State || - plci->Sig.Ind == NCR_FACILITY - ) - return; - - for (i = 0; i < MAX_MULTI_IE; i++) - { - if (!parms[i][0]) continue; - if (parms[i][0] < 7) - { - parms[i][0] = 0; /* kill it */ - continue; - } - dbug(1, dprintf("VSwitchReqInd(%d)", parms[i][4])); - switch (parms[i][4]) - { - case VSJOIN: - if (!plci->relatedPTYPLCI || - (plci->ptyState != S_ECT && plci->relatedPTYPLCI->ptyState != S_ECT)) - { /* Error */ - break; - } - /* remember all necessary informations */ - if (parms[i][0] != 11 || parms[i][8] != 3) /* Length Test */ - { - break; - } - if (parms[i][2] == VSWITCH_IND && parms[i][9] == 1) - { /* first indication after ECT-Request on Consultation Call */ - plci->vswitchstate = parms[i][9]; - parms[i][9] = 2; /* State */ - /* now ask first Call to join */ - } - else if (parms[i][2] == VSWITCH_REQ && parms[i][9] == 3) - { /* Answer of VSWITCH_REQ from first Call */ - plci->vswitchstate = parms[i][9]; - /* tell consultation call to join - and the protocol capabilities of the first call */ - } - else - { /* Error */ - break; - } - plci->vsprot = parms[i][10]; /* protocol */ - plci->vsprotdialect = parms[i][11]; /* protocoldialect */ - /* send join request to related PLCI */ - parms[i][1] = VSWITCHIE; - parms[i][2] = VSWITCH_REQ; - - plci->relatedPTYPLCI->command = 0; - plci->relatedPTYPLCI->internal_command = VSWITCH_REQ_PEND; - add_p(plci->relatedPTYPLCI, ESC, &parms[i][0]); - sig_req(plci->relatedPTYPLCI, VSWITCH_REQ, 0); - send_req(plci->relatedPTYPLCI); - break; - case VSTRANSPORT: - default: - if (plci->relatedPTYPLCI && - plci->vswitchstate == 3 && - plci->relatedPTYPLCI->vswitchstate == 3) - { - add_p(plci->relatedPTYPLCI, ESC, &parms[i][0]); - sig_req(plci->relatedPTYPLCI, VSWITCH_REQ, 0); - send_req(plci->relatedPTYPLCI); - } - break; - } - parms[i][0] = 0; /* kill it */ - } -} - - -/*------------------------------------------------------------------*/ - -static int diva_get_dma_descriptor(PLCI *plci, dword *dma_magic) { - ENTITY e; - IDI_SYNC_REQ *pReq = (IDI_SYNC_REQ *)&e; - - if (!(diva_xdi_extended_features & DIVA_CAPI_XDI_PROVIDES_RX_DMA)) { - return (-1); - } - - pReq->xdi_dma_descriptor_operation.Req = 0; - pReq->xdi_dma_descriptor_operation.Rc = IDI_SYNC_REQ_DMA_DESCRIPTOR_OPERATION; - - pReq->xdi_dma_descriptor_operation.info.operation = IDI_SYNC_REQ_DMA_DESCRIPTOR_ALLOC; - pReq->xdi_dma_descriptor_operation.info.descriptor_number = -1; - pReq->xdi_dma_descriptor_operation.info.descriptor_address = NULL; - pReq->xdi_dma_descriptor_operation.info.descriptor_magic = 0; - - e.user[0] = plci->adapter->Id - 1; - plci->adapter->request((ENTITY *)pReq); - - if (!pReq->xdi_dma_descriptor_operation.info.operation && - (pReq->xdi_dma_descriptor_operation.info.descriptor_number >= 0) && - pReq->xdi_dma_descriptor_operation.info.descriptor_magic) { - *dma_magic = pReq->xdi_dma_descriptor_operation.info.descriptor_magic; - dbug(3, dprintf("dma_alloc, a:%d (%d-%08x)", - plci->adapter->Id, - pReq->xdi_dma_descriptor_operation.info.descriptor_number, - *dma_magic)); - return (pReq->xdi_dma_descriptor_operation.info.descriptor_number); - } else { - dbug(1, dprintf("dma_alloc failed")); - return (-1); - } -} - -static void diva_free_dma_descriptor(PLCI *plci, int nr) { - ENTITY e; - IDI_SYNC_REQ *pReq = (IDI_SYNC_REQ *)&e; - - if (nr < 0) { - return; - } - - pReq->xdi_dma_descriptor_operation.Req = 0; - pReq->xdi_dma_descriptor_operation.Rc = IDI_SYNC_REQ_DMA_DESCRIPTOR_OPERATION; - - 
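/*
 * Both DMA helpers use the same synchronous-request convention: an
 * ENTITY is overlaid with an IDI_SYNC_REQ, Req is cleared and the
 * operation code travels in the Rc field. The adapter's request()
 * callback executes the operation in place and reports the outcome
 * by rewriting info.operation - 0 means success, which is exactly
 * what diva_get_dma_descriptor() above and the check below test for.
 */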
pReq->xdi_dma_descriptor_operation.info.operation = IDI_SYNC_REQ_DMA_DESCRIPTOR_FREE; - pReq->xdi_dma_descriptor_operation.info.descriptor_number = nr; - pReq->xdi_dma_descriptor_operation.info.descriptor_address = NULL; - pReq->xdi_dma_descriptor_operation.info.descriptor_magic = 0; - - e.user[0] = plci->adapter->Id - 1; - plci->adapter->request((ENTITY *)pReq); - - if (!pReq->xdi_dma_descriptor_operation.info.operation) { - dbug(1, dprintf("dma_free(%d)", nr)); - } else { - dbug(1, dprintf("dma_free failed (%d)", nr)); - } -} - -/*------------------------------------------------------------------*/ diff --git a/drivers/isdn/hardware/eicon/mi_pc.h b/drivers/isdn/hardware/eicon/mi_pc.h deleted file mode 100644 index 83e9ed8c1bf3..000000000000 --- a/drivers/isdn/hardware/eicon/mi_pc.h +++ /dev/null @@ -1,204 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -/*---------------------------------------------------------------------------- -// MAESTRA ISA PnP */ -#define BRI_MEMORY_BASE 0x1f700000 -#define BRI_MEMORY_SIZE 0x00100000 /* 1MB on the BRI */ -#define BRI_SHARED_RAM_SIZE 0x00010000 /* 64k shared RAM */ -#define BRI_RAY_TAYLOR_DSP_CODE_SIZE 0x00020000 /* max 128k DSP-Code (Ray Taylor's code) */ -#define BRI_ORG_MAX_DSP_CODE_SIZE 0x00050000 /* max 320k DSP-Code (Telindus) */ -#define BRI_V90D_MAX_DSP_CODE_SIZE 0x00060000 /* max 384k DSP-Code if V.90D included */ -#define BRI_CACHED_ADDR(x) (((x) & 0x1fffffffL) | 0x80000000L) -#define BRI_UNCACHED_ADDR(x) (((x) & 0x1fffffffL) | 0xa0000000L) -#define ADDR 4 -#define ADDRH 6 -#define DATA 0 -#define RESET 7 -#define DEFAULT_ADDRESS 0x240 -#define DEFAULT_IRQ 3 -#define M_PCI_ADDR 0x04 /* MAESTRA BRI PCI */ -#define M_PCI_ADDRH 0x0c /* MAESTRA BRI PCI */ -#define M_PCI_DATA 0x00 /* MAESTRA BRI PCI */ -#define M_PCI_RESET 0x10 /* MAESTRA BRI PCI */ -/*---------------------------------------------------------------------------- -// MAESTRA PRI PCI */ -#define MP_IRQ_RESET 0xc18 /* offset of isr in the CONFIG memory bar */ -#define MP_IRQ_RESET_VAL 0xfe /* value to clear an interrupt */ -#define MP_MEMORY_SIZE 0x00400000 /* 4MB on standard PRI */ -#define MP2_MEMORY_SIZE 0x00800000 /* 8MB on PRI Rev. 
2 */ -#define MP_SHARED_RAM_OFFSET 0x00001000 /* offset of shared RAM base in the DRAM memory bar */ -#define MP_SHARED_RAM_SIZE 0x00010000 /* 64k shared RAM */ -#define MP_PROTOCOL_OFFSET (MP_SHARED_RAM_OFFSET + MP_SHARED_RAM_SIZE) -#define MP_RAY_TAYLOR_DSP_CODE_SIZE 0x00040000 /* max 256k DSP-Code (Ray Taylor's code) */ -#define MP_ORG_MAX_DSP_CODE_SIZE 0x00060000 /* max 384k DSP-Code (Telindus) */ -#define MP_V90D_MAX_DSP_CODE_SIZE 0x00070000 /* max 448k DSP-Code if V.90D included) */ -#define MP_VOIP_MAX_DSP_CODE_SIZE 0x00090000 /* max 576k DSP-Code if voice over IP included */ -#define MP_CACHED_ADDR(x) (((x) & 0x1fffffffL) | 0x80000000L) -#define MP_UNCACHED_ADDR(x) (((x) & 0x1fffffffL) | 0xa0000000L) -#define MP_RESET 0x20 /* offset of RESET register in the DEVICES memory bar */ -/* RESET register bits */ -#define _MP_S2M_RESET 0x10 /* active lo */ -#define _MP_LED2 0x08 /* 1 = on */ -#define _MP_LED1 0x04 /* 1 = on */ -#define _MP_DSP_RESET 0x02 /* active lo */ -#define _MP_RISC_RESET 0x81 /* active hi, bit 7 for compatibility with old boards */ -/* CPU exception context structure in MP shared ram after trap */ -typedef struct mp_xcptcontext_s MP_XCPTC; -struct mp_xcptcontext_s { - dword sr; - dword cr; - dword epc; - dword vaddr; - dword regs[32]; - dword mdlo; - dword mdhi; - dword reseverd; - dword xclass; -}; -/* boot interface structure for PRI */ -struct mp_load { - dword volatile cmd; - dword volatile addr; - dword volatile len; - dword volatile err; - dword volatile live; - dword volatile res1[0x1b]; - dword volatile TrapId; /* has value 0x999999XX on a CPU trap */ - dword volatile res2[0x03]; - MP_XCPTC volatile xcpt; /* contains register dump */ - dword volatile rest[((0x1020 >> 2) - 6) - 0x1b - 1 - 0x03 - (sizeof(MP_XCPTC) >> 2)]; - dword volatile signature; - dword data[60000]; /* real interface description */ -}; -/*----------------------------------------------------------------------------*/ -/* SERVER 4BRI (Quattro PCI) */ -#define MQ_BOARD_REG_OFFSET 0x800000 /* PC relative On board registers offset */ -#define MQ_BREG_RISC 0x1200 /* RISC Reset ect */ -#define MQ_RISC_COLD_RESET_MASK 0x0001 /* RISC Cold reset */ -#define MQ_RISC_WARM_RESET_MASK 0x0002 /* RISC Warm reset */ -#define MQ_BREG_IRQ_TEST 0x0608 /* Interrupt request, no CPU interaction */ -#define MQ_IRQ_REQ_ON 0x1 -#define MQ_IRQ_REQ_OFF 0x0 -#define MQ_BOARD_DSP_OFFSET 0xa00000 /* PC relative On board DSP regs offset */ -#define MQ_DSP1_ADDR_OFFSET 0x0008 /* Addr register offset DSP 1 subboard 1 */ -#define MQ_DSP2_ADDR_OFFSET 0x0208 /* Addr register offset DSP 2 subboard 1 */ -#define MQ_DSP1_DATA_OFFSET 0x0000 /* Data register offset DSP 1 subboard 1 */ -#define MQ_DSP2_DATA_OFFSET 0x0200 /* Data register offset DSP 2 subboard 1 */ -#define MQ_DSP_JUNK_OFFSET 0x0400 /* DSP Data/Addr regs subboard offset */ -#define MQ_ISAC_DSP_RESET 0x0028 /* ISAC and DSP reset address offset */ -#define MQ_BOARD_ISAC_DSP_RESET 0x800028 /* ISAC and DSP reset address offset */ -#define MQ_INSTANCE_COUNT 4 /* 4BRI consists of four instances */ -#define MQ_MEMORY_SIZE 0x00400000 /* 4MB on standard 4BRI */ -#define MQ_CTRL_SIZE 0x00002000 /* 8K memory mapped registers */ -#define MQ_SHARED_RAM_SIZE 0x00010000 /* 64k shared RAM */ -#define MQ_ORG_MAX_DSP_CODE_SIZE 0x00050000 /* max 320k DSP-Code (Telindus) */ -#define MQ_V90D_MAX_DSP_CODE_SIZE 0x00060000 /* max 384K DSP-Code if V.90D included */ -#define MQ_VOIP_MAX_DSP_CODE_SIZE 0x00028000 /* max 4*160k = 640K DSP-Code if voice over IP included */ -#define 
MQ_CACHED_ADDR(x) (((x) & 0x1fffffffL) | 0x80000000L) -#define MQ_UNCACHED_ADDR(x) (((x) & 0x1fffffffL) | 0xa0000000L) -/*--------------------------------------------------------------------------------------------*/ -/* Additional definitions reflecting the different address map of the SERVER 4BRI V2 */ -#define MQ2_BREG_RISC 0x0200 /* RISC Reset ect */ -#define MQ2_BREG_IRQ_TEST 0x0400 /* Interrupt request, no CPU interaction */ -#define MQ2_BOARD_DSP_OFFSET 0x800000 /* PC relative On board DSP regs offset */ -#define MQ2_DSP1_DATA_OFFSET 0x1800 /* Data register offset DSP 1 subboard 1 */ -#define MQ2_DSP1_ADDR_OFFSET 0x1808 /* Addr register offset DSP 1 subboard 1 */ -#define MQ2_DSP2_DATA_OFFSET 0x1810 /* Data register offset DSP 2 subboard 1 */ -#define MQ2_DSP2_ADDR_OFFSET 0x1818 /* Addr register offset DSP 2 subboard 1 */ -#define MQ2_DSP_JUNK_OFFSET 0x1000 /* DSP Data/Addr regs subboard offset */ -#define MQ2_ISAC_DSP_RESET 0x0000 /* ISAC and DSP reset address offset */ -#define MQ2_BOARD_ISAC_DSP_RESET 0x800000 /* ISAC and DSP reset address offset */ -#define MQ2_IPACX_CONFIG 0x0300 /* IPACX Configuration TE(0)/NT(1) */ -#define MQ2_BOARD_IPACX_CONFIG 0x800300 /* "" */ -#define MQ2_MEMORY_SIZE 0x01000000 /* 16MB code/data memory */ -#define MQ2_CTRL_SIZE 0x00008000 /* 32K memory mapped registers */ -/*----------------------------------------------------------------------------*/ -/* SERVER BRI 2M/2F as derived from 4BRI V2 */ -#define BRI2_MEMORY_SIZE 0x00800000 /* 8MB code/data memory */ -#define BRI2_PROTOCOL_MEMORY_SIZE (MQ2_MEMORY_SIZE >> 2) /* same as one 4BRI Rev.2 task */ -#define BRI2_CTRL_SIZE 0x00008000 /* 32K memory mapped registers */ -#define M_INSTANCE_COUNT 1 /* BRI consists of one instance */ -/* - * Some useful constants for proper initialization of the GT6401x - */ -#define ID_REG 0x0000 /*Pci reg-contain the Dev&Ven ID of the card*/ -#define RAS0_BASEREG 0x0010 /*Ras0 register - contain the base addr Ras0*/ -#define RAS2_BASEREG 0x0014 -#define CS_BASEREG 0x0018 -#define BOOT_BASEREG 0x001c -#define GTREGS_BASEREG 0x0024 /*GTRegsBase reg-contain the base addr where*/ - /*the GT64010 internal regs where mapped */ -/* - * GT64010 internal registers - */ -/* DRAM device coding */ -#define LOW_RAS0_DREG 0x0400 /*Ras0 low decode address*/ -#define HI_RAS0_DREG 0x0404 /*Ras0 high decode address*/ -#define LOW_RAS1_DREG 0x0408 /*Ras1 low decode address*/ -#define HI_RAS1_DREG 0x040c /*Ras1 high decode address*/ -#define LOW_RAS2_DREG 0x0410 /*Ras2 low decode address*/ -#define HI_RAS2_DREG 0x0414 /*Ras2 high decode address*/ -#define LOW_RAS3_DREG 0x0418 /*Ras3 low decode address*/ -#define HI_RAS3_DREG 0x041c /*Ras3 high decode address*/ -/* I/O CS device coding */ -#define LOW_CS0_DREG 0x0420 /* CS0* low decode register */ -#define HI_CS0_DREG 0x0424 /* CS0* high decode register */ -#define LOW_CS1_DREG 0x0428 /* CS1* low decode register */ -#define HI_CS1_DREG 0x042c /* CS1* high decode register */ -#define LOW_CS2_DREG 0x0430 /* CS2* low decode register */ -#define HI_CS2_DREG 0x0434 /* CS2* high decode register */ -#define LOW_CS3_DREG 0x0438 /* CS3* low decode register */ -#define HI_CS3_DREG 0x043c /* CS3* high decode register */ -/* Boot PROM device coding */ -#define LOW_BOOTCS_DREG 0x0440 /* Boot CS low decode register */ -#define HI_BOOTCS_DREG 0x0444 /* Boot CS High decode register */ -/* DRAM group coding (for CPU) */ -#define LO_RAS10_GREG 0x0008 /*Ras1..0 group low decode address*/ -#define HI_RAS10_GREG 0x0010 /*Ras1..0 group high decode address*/ 
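/*
 * Note the pattern: every GT64010 decode window is a low/high register
 * pair bounding the address range of one device or group. It repeats
 * below for the Ras3..2 DRAM group, the I/O chip-select groups and the
 * boot PROM window.
 */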
-#define LO_RAS32_GREG 0x0018 /*Ras3..2 group low decode address */ -#define HI_RAS32_GREG 0x0020 /*Ras3..2 group high decode address */ -/* I/O CS group coding for (CPU) */ -#define LO_CS20_GREG 0x0028 /* CS2..0 group low decode register */ -#define HI_CS20_GREG 0x0030 /* CS2..0 group high decode register */ -#define LO_CS3B_GREG 0x0038 /* CS3 & PROM group low decode register */ -#define HI_CS3B_GREG 0x0040 /* CS3 & PROM group high decode register */ -/* Galileo specific PCI config. */ -#define PCI_TIMEOUT_RET 0x0c04 /* Time Out and retry register */ -#define RAS10_BANKSIZE 0x0c08 /* RAS 1..0 group PCI bank size */ -#define RAS32_BANKSIZE 0x0c0c /* RAS 3..2 group PCI bank size */ -#define CS20_BANKSIZE 0x0c10 /* CS 2..0 group PCI bank size */ -#define CS3B_BANKSIZE 0x0c14 /* CS 3 & Boot group PCI bank size */ -#define DRAM_SIZE 0x0001 /*Dram size in mega bytes*/ -#define PROM_SIZE 0x08000 /*Prom size in bytes*/ -/*--------------------------------------------------------------------------*/ -#define OFFS_DIVA_INIT_TASK_COUNT 0x68 -#define OFFS_DSP_CODE_BASE_ADDR 0x6c -#define OFFS_XLOG_BUF_ADDR 0x70 -#define OFFS_XLOG_COUNT_ADDR 0x74 -#define OFFS_XLOG_OUT_ADDR 0x78 -#define OFFS_PROTOCOL_END_ADDR 0x7c -#define OFFS_PROTOCOL_ID_STRING 0x80 -/*--------------------------------------------------------------------------*/ diff --git a/drivers/isdn/hardware/eicon/mntfunc.c b/drivers/isdn/hardware/eicon/mntfunc.c deleted file mode 100644 index 1cd9affb6058..000000000000 --- a/drivers/isdn/hardware/eicon/mntfunc.c +++ /dev/null @@ -1,370 +0,0 @@ -/* $Id: mntfunc.c,v 1.19.6.4 2005/01/31 12:22:20 armin Exp $ - * - * Driver for Eicon DIVA Server ISDN cards. - * Maint module - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. - */ - - -#include "platform.h" -#include "di_defs.h" -#include "divasync.h" -#include "debug_if.h" - -extern char *DRIVERRELEASE_MNT; - -#define DBG_MINIMUM (DL_LOG + DL_FTL + DL_ERR) -#define DBG_DEFAULT (DBG_MINIMUM + DL_XLOG + DL_REG) - -extern void DIVA_DIDD_Read(void *, int); - -static dword notify_handle; -static DESCRIPTOR DAdapter; -static DESCRIPTOR MAdapter; -static DESCRIPTOR MaintDescriptor = -{ IDI_DIMAINT, 0, 0, (IDI_CALL) diva_maint_prtComp }; - -extern int diva_os_copy_to_user(void *os_handle, void __user *dst, - const void *src, int length); -extern int diva_os_copy_from_user(void *os_handle, void *dst, - const void __user *src, int length); - -static void no_printf(unsigned char *x, ...) -{ - /* dummy debug function */ -} - -#include "debuglib.c" - -/* - * DIDD callback function - */ -static void *didd_callback(void *context, DESCRIPTOR *adapter, - int removal) -{ - if (adapter->type == IDI_DADAPTER) { - DBG_ERR(("cb: Change in DAdapter ? 
Oops ?.")); - } else if (adapter->type == IDI_DIMAINT) { - if (removal) { - DbgDeregister(); - memset(&MAdapter, 0, sizeof(MAdapter)); - dprintf = no_printf; - } else { - memcpy(&MAdapter, adapter, sizeof(MAdapter)); - dprintf = (DIVA_DI_PRINTF) MAdapter.request; - DbgRegister("MAINT", DRIVERRELEASE_MNT, DBG_DEFAULT); - } - } else if ((adapter->type > 0) && (adapter->type < 16)) { - if (removal) { - diva_mnt_remove_xdi_adapter(adapter); - } else { - diva_mnt_add_xdi_adapter(adapter); - } - } - return (NULL); -} - -/* - * connect to didd - */ -static int __init connect_didd(void) -{ - int x = 0; - int dadapter = 0; - IDI_SYNC_REQ req; - DESCRIPTOR DIDD_Table[MAX_DESCRIPTORS]; - - DIVA_DIDD_Read(DIDD_Table, sizeof(DIDD_Table)); - - for (x = 0; x < MAX_DESCRIPTORS; x++) { - if (DIDD_Table[x].type == IDI_DADAPTER) { /* DADAPTER found */ - dadapter = 1; - memcpy(&DAdapter, &DIDD_Table[x], sizeof(DAdapter)); - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = - IDI_SYNC_REQ_DIDD_REGISTER_ADAPTER_NOTIFY; - req.didd_notify.info.callback = (void *)didd_callback; - req.didd_notify.info.context = NULL; - DAdapter.request((ENTITY *)&req); - if (req.didd_notify.e.Rc != 0xff) - return (0); - notify_handle = req.didd_notify.info.handle; - /* Register MAINT (me) */ - req.didd_add_adapter.e.Req = 0; - req.didd_add_adapter.e.Rc = - IDI_SYNC_REQ_DIDD_ADD_ADAPTER; - req.didd_add_adapter.info.descriptor = - (void *) &MaintDescriptor; - DAdapter.request((ENTITY *)&req); - if (req.didd_add_adapter.e.Rc != 0xff) - return (0); - } else if ((DIDD_Table[x].type > 0) - && (DIDD_Table[x].type < 16)) { - diva_mnt_add_xdi_adapter(&DIDD_Table[x]); - } - } - return (dadapter); -} - -/* - * disconnect from didd - */ -static void __exit disconnect_didd(void) -{ - IDI_SYNC_REQ req; - - req.didd_notify.e.Req = 0; - req.didd_notify.e.Rc = IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER_NOTIFY; - req.didd_notify.info.handle = notify_handle; - DAdapter.request((ENTITY *)&req); - - req.didd_remove_adapter.e.Req = 0; - req.didd_remove_adapter.e.Rc = IDI_SYNC_REQ_DIDD_REMOVE_ADAPTER; - req.didd_remove_adapter.info.p_request = - (IDI_CALL) MaintDescriptor.request; - DAdapter.request((ENTITY *)&req); -} - -/* - * read/write maint - */ -int maint_read_write(void __user *buf, int count) -{ - byte data[128]; - dword cmd, id, mask; - int ret = 0; - - if (count < (3 * sizeof(dword))) - return (-EFAULT); - - if (diva_os_copy_from_user(NULL, (void *) &data[0], - buf, 3 * sizeof(dword))) { - return (-EFAULT); - } - - cmd = *(dword *)&data[0]; /* command */ - id = *(dword *)&data[4]; /* driver id */ - mask = *(dword *)&data[8]; /* mask or size */ - - switch (cmd) { - case DITRACE_CMD_GET_DRIVER_INFO: - if ((ret = diva_get_driver_info(id, data, sizeof(data))) > 0) { - if ((count < ret) || diva_os_copy_to_user - (NULL, buf, (void *) &data[0], ret)) - ret = -EFAULT; - } else { - ret = -EINVAL; - } - break; - - case DITRACE_READ_DRIVER_DBG_MASK: - if ((ret = diva_get_driver_dbg_mask(id, (byte *) data)) > 0) { - if ((count < ret) || diva_os_copy_to_user - (NULL, buf, (void *) &data[0], ret)) - ret = -EFAULT; - } else { - ret = -ENODEV; - } - break; - - case DITRACE_WRITE_DRIVER_DBG_MASK: - if ((ret = diva_set_driver_dbg_mask(id, mask)) <= 0) { - ret = -ENODEV; - } - break; - - /* - Filter commands will ignore the ID due to fact that filtering affects - the B- channel and Audio Tap trace levels only. 
Also MAINT driver will - select the right trace ID by itself - */ - case DITRACE_WRITE_SELECTIVE_TRACE_FILTER: - if (!mask) { - ret = diva_set_trace_filter(1, "*"); - } else if (mask < sizeof(data)) { - if (diva_os_copy_from_user(NULL, data, (char __user *)buf + 12, mask)) { - ret = -EFAULT; - } else { - ret = diva_set_trace_filter((int)mask, data); - } - } else { - ret = -EINVAL; - } - break; - - case DITRACE_READ_SELECTIVE_TRACE_FILTER: - if ((ret = diva_get_trace_filter(sizeof(data), data)) > 0) { - if (diva_os_copy_to_user(NULL, buf, data, ret)) - ret = -EFAULT; - } else { - ret = -ENODEV; - } - break; - - case DITRACE_READ_TRACE_ENTRY:{ - diva_os_spin_lock_magic_t old_irql; - word size; - diva_dbg_entry_head_t *pmsg; - byte *pbuf; - - if (!(pbuf = diva_os_malloc(0, mask))) { - return (-ENOMEM); - } - - for (;;) { - if (!(pmsg = - diva_maint_get_message(&size, &old_irql))) { - break; - } - if (size > mask) { - diva_maint_ack_message(0, &old_irql); - ret = -EINVAL; - break; - } - ret = size; - memcpy(pbuf, pmsg, size); - diva_maint_ack_message(1, &old_irql); - if ((count < size) || - diva_os_copy_to_user(NULL, buf, (void *) pbuf, size)) - ret = -EFAULT; - break; - } - diva_os_free(0, pbuf); - } - break; - - case DITRACE_READ_TRACE_ENTRYS:{ - diva_os_spin_lock_magic_t old_irql; - word size; - diva_dbg_entry_head_t *pmsg; - byte *pbuf = NULL; - int written = 0; - - if (mask < 4096) { - ret = -EINVAL; - break; - } - if (!(pbuf = diva_os_malloc(0, mask))) { - return (-ENOMEM); - } - - for (;;) { - if (!(pmsg = - diva_maint_get_message(&size, &old_irql))) { - break; - } - if ((size + 8) > mask) { - diva_maint_ack_message(0, &old_irql); - break; - } - /* - Write entry length - */ - pbuf[written++] = (byte) size; - pbuf[written++] = (byte) (size >> 8); - pbuf[written++] = 0; - pbuf[written++] = 0; - /* - Write message - */ - memcpy(&pbuf[written], pmsg, size); - diva_maint_ack_message(1, &old_irql); - written += size; - mask -= (size + 4); - } - pbuf[written++] = 0; - pbuf[written++] = 0; - pbuf[written++] = 0; - pbuf[written++] = 0; - - if ((count < written) || diva_os_copy_to_user(NULL, buf, (void *) pbuf, written)) { - ret = -EFAULT; - } else { - ret = written; - } - diva_os_free(0, pbuf); - } - break; - - default: - ret = -EINVAL; - } - return (ret); -} - -/* - * init - */ -int __init mntfunc_init(int *buffer_length, void **buffer, - unsigned long diva_dbg_mem) -{ - if (*buffer_length < 64) { - *buffer_length = 64; - } - if (*buffer_length > 512) { - *buffer_length = 512; - } - *buffer_length *= 1024; - - if (diva_dbg_mem) { - *buffer = (void *) diva_dbg_mem; - } else { - while ((*buffer_length >= (64 * 1024)) - && - (!(*buffer = diva_os_malloc(0, *buffer_length)))) { - *buffer_length -= 1024; - } - - if (!*buffer) { - DBG_ERR(("init: Can not alloc trace buffer")); - return (0); - } - } - - if (diva_maint_init(*buffer, *buffer_length, (diva_dbg_mem == 0))) { - if (!diva_dbg_mem) { - diva_os_free(0, *buffer); - } - DBG_ERR(("init: maint init failed")); - return (0); - } - - if (!connect_didd()) { - DBG_ERR(("init: failed to connect to DIDD.")); - diva_maint_finit(); - if (!diva_dbg_mem) { - diva_os_free(0, *buffer); - } - return (0); - } - return (1); -} - -/* - * exit - */ -void __exit mntfunc_finit(void) -{ - void *buffer; - int i = 100; - - DbgDeregister(); - - while (diva_mnt_shutdown_xdi_adapters() && i--) { - diva_os_sleep(10); - } - - disconnect_didd(); - - if ((buffer = diva_maint_finit())) { - diva_os_free(0, buffer); - } - - memset(&MAdapter, 0, sizeof(MAdapter)); - dprintf = 
no_printf; -} diff --git a/drivers/isdn/hardware/eicon/os_4bri.c b/drivers/isdn/hardware/eicon/os_4bri.c deleted file mode 100644 index 87db5f4df27d..000000000000 --- a/drivers/isdn/hardware/eicon/os_4bri.c +++ /dev/null @@ -1,1132 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* $Id: os_4bri.c,v 1.28.4.4 2005/02/11 19:40:25 armin Exp $ */ - -#include "platform.h" -#include "debuglib.h" -#include "cardtype.h" -#include "pc.h" -#include "pr_pc.h" -#include "di_defs.h" -#include "dsp_defs.h" -#include "di.h" -#include "io.h" - -#include "xdi_msg.h" -#include "xdi_adapter.h" -#include "os_4bri.h" -#include "diva_pci.h" -#include "mi_pc.h" -#include "dsrv4bri.h" -#include "helpers.h" - -static void *diva_xdiLoadFileFile = NULL; -static dword diva_xdiLoadFileLength = 0; - -/* -** IMPORTS -*/ -extern void prepare_qBri_functions(PISDN_ADAPTER IoAdapter); -extern void prepare_qBri2_functions(PISDN_ADAPTER IoAdapter); -extern void diva_xdi_display_adapter_features(int card); -extern void diva_add_slave_adapter(diva_os_xdi_adapter_t *a); - -extern int qBri_FPGA_download(PISDN_ADAPTER IoAdapter); -extern void start_qBri_hardware(PISDN_ADAPTER IoAdapter); - -extern int diva_card_read_xlog(diva_os_xdi_adapter_t *a); - -/* -** LOCALS -*/ -static unsigned long _4bri_bar_length[4] = { - 0x100, - 0x100, /* I/O */ - MQ_MEMORY_SIZE, - 0x2000 -}; -static unsigned long _4bri_v2_bar_length[4] = { - 0x100, - 0x100, /* I/O */ - MQ2_MEMORY_SIZE, - 0x10000 -}; -static unsigned long _4bri_v2_bri_bar_length[4] = { - 0x100, - 0x100, /* I/O */ - BRI2_MEMORY_SIZE, - 0x10000 -}; - - -static int diva_4bri_cleanup_adapter(diva_os_xdi_adapter_t *a); -static int _4bri_get_serial_number(diva_os_xdi_adapter_t *a); -static int diva_4bri_cmd_card_proc(struct _diva_os_xdi_adapter *a, - diva_xdi_um_cfg_cmd_t *cmd, - int length); -static int diva_4bri_cleanup_slave_adapters(diva_os_xdi_adapter_t *a); -static int diva_4bri_write_fpga_image(diva_os_xdi_adapter_t *a, - byte *data, dword length); -static int diva_4bri_reset_adapter(PISDN_ADAPTER IoAdapter); -static int diva_4bri_write_sdram_block(PISDN_ADAPTER IoAdapter, - dword address, - const byte *data, - dword length, dword limit); -static int diva_4bri_start_adapter(PISDN_ADAPTER IoAdapter, - dword start_address, dword features); -static int check_qBri_interrupt(PISDN_ADAPTER IoAdapter); -static int diva_4bri_stop_adapter(diva_os_xdi_adapter_t *a); - -static int _4bri_is_rev_2_card(int card_ordinal) -{ - switch (card_ordinal) { - case CARDTYPE_DIVASRV_Q_8M_V2_PCI: - case CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI: - case CARDTYPE_DIVASRV_B_2M_V2_PCI: - case CARDTYPE_DIVASRV_B_2F_PCI: - case CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI: - return (1); - } - return (0); -} - -static int _4bri_is_rev_2_bri_card(int card_ordinal) -{ - switch (card_ordinal) { - case CARDTYPE_DIVASRV_B_2M_V2_PCI: - case CARDTYPE_DIVASRV_B_2F_PCI: - case CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI: - return (1); - } - return (0); -} - -static void diva_4bri_set_addresses(diva_os_xdi_adapter_t *a) -{ - dword offset = a->resources.pci.qoffset; - dword c_offset = offset * a->xdi_adapter.ControllerNumber; - - a->resources.pci.mem_type_id[MEM_TYPE_RAM] = 2; - a->resources.pci.mem_type_id[MEM_TYPE_ADDRESS] = 2; - a->resources.pci.mem_type_id[MEM_TYPE_CONTROL] = 2; - a->resources.pci.mem_type_id[MEM_TYPE_RESET] = 0; - a->resources.pci.mem_type_id[MEM_TYPE_CTLREG] = 3; - a->resources.pci.mem_type_id[MEM_TYPE_PROM] = 0; - - /* - Set up hardware related pointers - */ - a->xdi_adapter.Address = a->resources.pci.addr[2]; /* BAR2 SDRAM */ - 
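/*
 * All logical adapters of one 4BRI share the single BAR2 SDRAM window:
 * each controller owns a slice of qoffset bytes, so controller n
 * starts at BAR2 + n * qoffset (the c_offset below), and its shared
 * RAM occupies the last MQ_SHARED_RAM_SIZE bytes of that slice, as
 * the ram pointer arithmetic shows.
 */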
a->xdi_adapter.Address += c_offset; - - a->xdi_adapter.Control = a->resources.pci.addr[2]; /* BAR2 SDRAM */ - - a->xdi_adapter.ram = a->resources.pci.addr[2]; /* BAR2 SDRAM */ - a->xdi_adapter.ram += c_offset + (offset - MQ_SHARED_RAM_SIZE); - - a->xdi_adapter.reset = a->resources.pci.addr[0]; /* BAR0 CONFIG */ - /* - ctlReg contains the register address for the MIPS CPU reset control - */ - a->xdi_adapter.ctlReg = a->resources.pci.addr[3]; /* BAR3 CNTRL */ - /* - prom contains the register address for FPGA and EEPROM programming - */ - a->xdi_adapter.prom = &a->xdi_adapter.reset[0x6E]; -} - -/* -** BAR0 - MEM - 0x100 - CONFIG MEM -** BAR1 - I/O - 0x100 - UNUSED -** BAR2 - MEM - MQ_MEMORY_SIZE (MQ2_MEMORY_SIZE on Rev.2) - SDRAM -** BAR3 - MEM - 0x2000 (0x10000 on Rev.2) - CNTRL -** -** Called by the master adapter, which will initialize and add the slave adapters -*/ -int diva_4bri_init_card(diva_os_xdi_adapter_t *a) -{ - int bar, i; - byte __iomem *p; - PADAPTER_LIST_ENTRY quadro_list; - diva_os_xdi_adapter_t *diva_current; - diva_os_xdi_adapter_t *adapter_list[4]; - PISDN_ADAPTER Slave; - unsigned long bar_length[ARRAY_SIZE(_4bri_bar_length)]; - int v2 = _4bri_is_rev_2_card(a->CardOrdinal); - int tasks = _4bri_is_rev_2_bri_card(a->CardOrdinal) ? 1 : MQ_INSTANCE_COUNT; - int factor = (tasks == 1) ? 1 : 2; - - if (v2) { - if (_4bri_is_rev_2_bri_card(a->CardOrdinal)) { - memcpy(bar_length, _4bri_v2_bri_bar_length, - sizeof(bar_length)); - } else { - memcpy(bar_length, _4bri_v2_bar_length, - sizeof(bar_length)); - } - } else { - memcpy(bar_length, _4bri_bar_length, sizeof(bar_length)); - } - DBG_TRC(("SDRAM_LENGTH=%08x, tasks=%d, factor=%d", - bar_length[2], tasks, factor)) - - /* - Get Serial Number - The serial number of the 4BRI is accessible, in accordance with the PCI spec, - via a command register located in configuration space, so we do not - have to map any BAR before we can access it - */ - if (!_4bri_get_serial_number(a)) { - DBG_ERR(("A: 4BRI can't get Serial Number")) - diva_4bri_cleanup_adapter(a); - return (-1); - } - - /* - Set properties - */ - a->xdi_adapter.Properties = CardProperties[a->CardOrdinal]; - DBG_LOG(("Load %s, SN:%ld, bus:%02x, func:%02x", - a->xdi_adapter.Properties.Name, - a->xdi_adapter.serialNo, - a->resources.pci.bus, a->resources.pci.func)) - - /* - First initialization step: get and check hardware resources. 
- Do not map resources and do not access card at this step - */ - for (bar = 0; bar < 4; bar++) { - a->resources.pci.bar[bar] = - divasa_get_pci_bar(a->resources.pci.bus, - a->resources.pci.func, bar, - a->resources.pci.hdev); - if (!a->resources.pci.bar[bar] - || (a->resources.pci.bar[bar] == 0xFFFFFFF0)) { - DBG_ERR( - ("A: invalid bar[%d]=%08x", bar, - a->resources.pci.bar[bar])) - return (-1); - } - } - a->resources.pci.irq = - (byte) divasa_get_pci_irq(a->resources.pci.bus, - a->resources.pci.func, - a->resources.pci.hdev); - if (!a->resources.pci.irq) { - DBG_ERR(("A: invalid irq")); - return (-1); - } - - a->xdi_adapter.sdram_bar = a->resources.pci.bar[2]; - - /* - Map all MEMORY BAR's - */ - for (bar = 0; bar < 4; bar++) { - if (bar != 1) { /* ignore I/O */ - a->resources.pci.addr[bar] = - divasa_remap_pci_bar(a, bar, a->resources.pci.bar[bar], - bar_length[bar]); - if (!a->resources.pci.addr[bar]) { - DBG_ERR(("A: 4BRI: can't map bar[%d]", bar)) - diva_4bri_cleanup_adapter(a); - return (-1); - } - } - } - - /* - Register I/O port - */ - sprintf(&a->port_name[0], "DIVA 4BRI %ld", (long) a->xdi_adapter.serialNo); - - if (diva_os_register_io_port(a, 1, a->resources.pci.bar[1], - bar_length[1], &a->port_name[0], 1)) { - DBG_ERR(("A: 4BRI: can't register bar[1]")) - diva_4bri_cleanup_adapter(a); - return (-1); - } - - a->resources.pci.addr[1] = - (void *) (unsigned long) a->resources.pci.bar[1]; - - /* - Set cleanup pointer for base adapter only, so slave adapter - will be unable to get cleanup - */ - a->interface.cleanup_adapter_proc = diva_4bri_cleanup_adapter; - - /* - Create slave adapters - */ - if (tasks > 1) { - if (!(a->slave_adapters[0] = - (diva_os_xdi_adapter_t *) diva_os_malloc(0, sizeof(*a)))) - { - diva_4bri_cleanup_adapter(a); - return (-1); - } - if (!(a->slave_adapters[1] = - (diva_os_xdi_adapter_t *) diva_os_malloc(0, sizeof(*a)))) - { - diva_os_free(0, a->slave_adapters[0]); - a->slave_adapters[0] = NULL; - diva_4bri_cleanup_adapter(a); - return (-1); - } - if (!(a->slave_adapters[2] = - (diva_os_xdi_adapter_t *) diva_os_malloc(0, sizeof(*a)))) - { - diva_os_free(0, a->slave_adapters[0]); - diva_os_free(0, a->slave_adapters[1]); - a->slave_adapters[0] = NULL; - a->slave_adapters[1] = NULL; - diva_4bri_cleanup_adapter(a); - return (-1); - } - memset(a->slave_adapters[0], 0x00, sizeof(*a)); - memset(a->slave_adapters[1], 0x00, sizeof(*a)); - memset(a->slave_adapters[2], 0x00, sizeof(*a)); - } - - adapter_list[0] = a; - adapter_list[1] = a->slave_adapters[0]; - adapter_list[2] = a->slave_adapters[1]; - adapter_list[3] = a->slave_adapters[2]; - - /* - Allocate slave list - */ - quadro_list = - (PADAPTER_LIST_ENTRY) diva_os_malloc(0, sizeof(*quadro_list)); - if (!(a->slave_list = quadro_list)) { - for (i = 0; i < (tasks - 1); i++) { - diva_os_free(0, a->slave_adapters[i]); - a->slave_adapters[i] = NULL; - } - diva_4bri_cleanup_adapter(a); - return (-1); - } - memset(quadro_list, 0x00, sizeof(*quadro_list)); - - /* - Set interfaces - */ - a->xdi_adapter.QuadroList = quadro_list; - for (i = 0; i < tasks; i++) { - adapter_list[i]->xdi_adapter.ControllerNumber = i; - adapter_list[i]->xdi_adapter.tasks = tasks; - quadro_list->QuadroAdapter[i] = - &adapter_list[i]->xdi_adapter; - } - - for (i = 0; i < tasks; i++) { - diva_current = adapter_list[i]; - - diva_current->dsp_mask = 0x00000003; - - diva_current->xdi_adapter.a.io = - &diva_current->xdi_adapter; - diva_current->xdi_adapter.DIRequest = request; - diva_current->interface.cmd_proc = diva_4bri_cmd_card_proc; - 
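/*
  From here on, every logical adapter answers user-mode configuration
  commands through diva_4bri_cmd_card_proc (below). Note that the
  hardware-level commands (reset, FPGA image, SDRAM writes, start/stop)
  are only honoured on the master adapter, i.e. when ControllerNumber
  is 0.
*/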
diva_current->xdi_adapter.Properties = - CardProperties[a->CardOrdinal]; - diva_current->CardOrdinal = a->CardOrdinal; - - diva_current->xdi_adapter.Channels = - CardProperties[a->CardOrdinal].Channels; - diva_current->xdi_adapter.e_max = - CardProperties[a->CardOrdinal].E_info; - diva_current->xdi_adapter.e_tbl = - diva_os_malloc(0, - diva_current->xdi_adapter.e_max * - sizeof(E_INFO)); - - if (!diva_current->xdi_adapter.e_tbl) { - diva_4bri_cleanup_slave_adapters(a); - diva_4bri_cleanup_adapter(a); - for (i = 1; i < (tasks - 1); i++) { - diva_os_free(0, adapter_list[i]); - } - return (-1); - } - memset(diva_current->xdi_adapter.e_tbl, 0x00, - diva_current->xdi_adapter.e_max * sizeof(E_INFO)); - - if (diva_os_initialize_spin_lock(&diva_current->xdi_adapter.isr_spin_lock, "isr")) { - diva_4bri_cleanup_slave_adapters(a); - diva_4bri_cleanup_adapter(a); - for (i = 1; i < (tasks - 1); i++) { - diva_os_free(0, adapter_list[i]); - } - return (-1); - } - if (diva_os_initialize_spin_lock(&diva_current->xdi_adapter.data_spin_lock, "data")) { - diva_4bri_cleanup_slave_adapters(a); - diva_4bri_cleanup_adapter(a); - for (i = 1; i < (tasks - 1); i++) { - diva_os_free(0, adapter_list[i]); - } - return (-1); - } - - strcpy(diva_current->xdi_adapter.req_soft_isr. dpc_thread_name, "kdivas4brid"); - - if (diva_os_initialize_soft_isr(&diva_current->xdi_adapter.req_soft_isr, DIDpcRoutine, - &diva_current->xdi_adapter)) { - diva_4bri_cleanup_slave_adapters(a); - diva_4bri_cleanup_adapter(a); - for (i = 1; i < (tasks - 1); i++) { - diva_os_free(0, adapter_list[i]); - } - return (-1); - } - - /* - Do not initialize second DPC - only one thread will be created - */ - diva_current->xdi_adapter.isr_soft_isr.object = - diva_current->xdi_adapter.req_soft_isr.object; - } - - if (v2) { - prepare_qBri2_functions(&a->xdi_adapter); - } else { - prepare_qBri_functions(&a->xdi_adapter); - } - - for (i = 0; i < tasks; i++) { - diva_current = adapter_list[i]; - if (i) - memcpy(&diva_current->resources, &a->resources, sizeof(divas_card_resources_t)); - diva_current->resources.pci.qoffset = (a->xdi_adapter.MemorySize >> factor); - } - - /* - Set up hardware related pointers - */ - a->xdi_adapter.cfg = (void *) (unsigned long) a->resources.pci.bar[0]; /* BAR0 CONFIG */ - a->xdi_adapter.port = (void *) (unsigned long) a->resources.pci.bar[1]; /* BAR1 */ - a->xdi_adapter.ctlReg = (void *) (unsigned long) a->resources.pci.bar[3]; /* BAR3 CNTRL */ - - for (i = 0; i < tasks; i++) { - diva_current = adapter_list[i]; - diva_4bri_set_addresses(diva_current); - Slave = a->xdi_adapter.QuadroList->QuadroAdapter[i]; - Slave->MultiMaster = &a->xdi_adapter; - Slave->sdram_bar = a->xdi_adapter.sdram_bar; - if (i) { - Slave->serialNo = ((dword) (Slave->ControllerNumber << 24)) | - a->xdi_adapter.serialNo; - Slave->cardType = a->xdi_adapter.cardType; - } - } - - /* - reset contains the base address for the PLX 9054 register set - */ - p = DIVA_OS_MEM_ATTACH_RESET(&a->xdi_adapter); - WRITE_BYTE(&p[PLX9054_INTCSR], 0x00); /* disable PCI interrupts */ - DIVA_OS_MEM_DETACH_RESET(&a->xdi_adapter, p); - - /* - Set IRQ handler - */ - a->xdi_adapter.irq_info.irq_nr = a->resources.pci.irq; - sprintf(a->xdi_adapter.irq_info.irq_name, "DIVA 4BRI %ld", - (long) a->xdi_adapter.serialNo); - - if (diva_os_register_irq(a, a->xdi_adapter.irq_info.irq_nr, - a->xdi_adapter.irq_info.irq_name)) { - diva_4bri_cleanup_slave_adapters(a); - diva_4bri_cleanup_adapter(a); - for (i = 1; i < (tasks - 1); i++) { - diva_os_free(0, adapter_list[i]); - } - return (-1); - } - - 
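/*
  The error paths above all follow the same unwind discipline: release
  the per-task OS objects, then the base adapter, then the slave
  allocations. A minimal sketch of the idiom, using the driver's own
  allocator (illustrative only; alloc_all is a hypothetical helper, not
  part of this driver):

      static int alloc_all(diva_os_xdi_adapter_t **list, int n)
      {
              int i;

              for (i = 0; i < n; i++) {
                      list[i] = diva_os_malloc(0, sizeof(*list[i]));
                      if (!list[i]) {
                              while (--i >= 0) {      /* unwind in reverse order */
                                      diva_os_free(0, list[i]);
                                      list[i] = NULL;
                              }
                              return (-1);
                      }
                      memset(list[i], 0x00, sizeof(*list[i]));
              }
              return (0);
      }
*/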
a->xdi_adapter.irq_info.registered = 1; - - /* - Add three slave adapters - */ - if (tasks > 1) { - diva_add_slave_adapter(adapter_list[1]); - diva_add_slave_adapter(adapter_list[2]); - diva_add_slave_adapter(adapter_list[3]); - } - - diva_log_info("%s IRQ:%d SerNo:%d", a->xdi_adapter.Properties.Name, - a->resources.pci.irq, a->xdi_adapter.serialNo); - - return (0); -} - -/* -** Cleanup function will be called for master adapter only -** this is guaranteed by design: cleanup callback is set -** by master adapter only -*/ -static int diva_4bri_cleanup_adapter(diva_os_xdi_adapter_t *a) -{ - int bar; - - /* - Stop adapter if running - */ - if (a->xdi_adapter.Initialized) { - diva_4bri_stop_adapter(a); - } - - /* - Remove IRQ handler - */ - if (a->xdi_adapter.irq_info.registered) { - diva_os_remove_irq(a, a->xdi_adapter.irq_info.irq_nr); - } - a->xdi_adapter.irq_info.registered = 0; - - /* - Free DPC's and spin locks on all adapters - */ - diva_4bri_cleanup_slave_adapters(a); - - /* - Unmap all BARS - */ - for (bar = 0; bar < 4; bar++) { - if (bar != 1) { - if (a->resources.pci.bar[bar] - && a->resources.pci.addr[bar]) { - divasa_unmap_pci_bar(a->resources.pci.addr[bar]); - a->resources.pci.bar[bar] = 0; - a->resources.pci.addr[bar] = NULL; - } - } - } - - /* - Unregister I/O - */ - if (a->resources.pci.bar[1] && a->resources.pci.addr[1]) { - diva_os_register_io_port(a, 0, a->resources.pci.bar[1], - _4bri_is_rev_2_card(a-> - CardOrdinal) ? - _4bri_v2_bar_length[1] : - _4bri_bar_length[1], - &a->port_name[0], 1); - a->resources.pci.bar[1] = 0; - a->resources.pci.addr[1] = NULL; - } - - if (a->slave_list) { - diva_os_free(0, a->slave_list); - a->slave_list = NULL; - } - - return (0); -} - -static int _4bri_get_serial_number(diva_os_xdi_adapter_t *a) -{ - dword data[64]; - dword serNo; - word addr, status, i, j; - byte Bus, Slot; - void *hdev; - - Bus = a->resources.pci.bus; - Slot = a->resources.pci.func; - hdev = a->resources.pci.hdev; - - for (i = 0; i < 64; ++i) { - addr = i * 4; - for (j = 0; j < 5; ++j) { - PCIwrite(Bus, Slot, 0x4E, &addr, sizeof(addr), - hdev); - diva_os_wait(1); - PCIread(Bus, Slot, 0x4E, &status, sizeof(status), - hdev); - if (status & 0x8000) - break; - } - if (j >= 5) { - DBG_ERR(("EEPROM[%d] read failed (0x%x)", i * 4, addr)) - return (0); - } - PCIread(Bus, Slot, 0x50, &data[i], sizeof(data[i]), hdev); - } - DBG_BLK(((char *) &data[0], sizeof(data))) - - serNo = data[32]; - if (serNo == 0 || serNo == 0xffffffff) - serNo = data[63]; - - if (!serNo) { - DBG_LOG(("W: Serial Number == 0, create one serial number")); - serNo = a->resources.pci.bar[1] & 0xffff0000; - serNo |= a->resources.pci.bus << 8; - serNo |= a->resources.pci.func; - } - - a->xdi_adapter.serialNo = serNo; - - DBG_REG(("Serial No. : %ld", a->xdi_adapter.serialNo)) - - return (serNo); -} - -/* -** Release resources of slave adapters -*/ -static int diva_4bri_cleanup_slave_adapters(diva_os_xdi_adapter_t *a) -{ - diva_os_xdi_adapter_t *adapter_list[4]; - diva_os_xdi_adapter_t *diva_current; - int i; - - adapter_list[0] = a; - adapter_list[1] = a->slave_adapters[0]; - adapter_list[2] = a->slave_adapters[1]; - adapter_list[3] = a->slave_adapters[2]; - - for (i = 0; i < a->xdi_adapter.tasks; i++) { - diva_current = adapter_list[i]; - if (diva_current) { - diva_os_destroy_spin_lock(&diva_current-> - xdi_adapter. - isr_spin_lock, "unload"); - diva_os_destroy_spin_lock(&diva_current-> - xdi_adapter. - data_spin_lock, - "unload"); - - diva_os_cancel_soft_isr(&diva_current->xdi_adapter. 
- req_soft_isr); - diva_os_cancel_soft_isr(&diva_current->xdi_adapter. - isr_soft_isr); - - diva_os_remove_soft_isr(&diva_current->xdi_adapter. - req_soft_isr); - diva_current->xdi_adapter.isr_soft_isr.object = NULL; - - if (diva_current->xdi_adapter.e_tbl) { - diva_os_free(0, - diva_current->xdi_adapter. - e_tbl); - } - diva_current->xdi_adapter.e_tbl = NULL; - diva_current->xdi_adapter.e_max = 0; - diva_current->xdi_adapter.e_count = 0; - } - } - - return (0); -} - -static int -diva_4bri_cmd_card_proc(struct _diva_os_xdi_adapter *a, - diva_xdi_um_cfg_cmd_t *cmd, int length) -{ - int ret = -1; - - if (cmd->adapter != a->controller) { - DBG_ERR(("A: 4bri_cmd, invalid controller=%d != %d", - cmd->adapter, a->controller)) - return (-1); - } - - switch (cmd->command) { - case DIVA_XDI_UM_CMD_GET_CARD_ORDINAL: - a->xdi_mbox.data_length = sizeof(dword); - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - *(dword *) a->xdi_mbox.data = - (dword) a->CardOrdinal; - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - break; - - case DIVA_XDI_UM_CMD_GET_SERIAL_NR: - a->xdi_mbox.data_length = sizeof(dword); - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - *(dword *) a->xdi_mbox.data = - (dword) a->xdi_adapter.serialNo; - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - break; - - case DIVA_XDI_UM_CMD_GET_PCI_HW_CONFIG: - if (!a->xdi_adapter.ControllerNumber) { - /* - Only master adapter can access hardware config - */ - a->xdi_mbox.data_length = sizeof(dword) * 9; - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - int i; - dword *data = (dword *) a->xdi_mbox.data; - - for (i = 0; i < 8; i++) { - *data++ = a->resources.pci.bar[i]; - } - *data++ = (dword) a->resources.pci.irq; - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - } - break; - - case DIVA_XDI_UM_CMD_GET_CARD_STATE: - if (!a->xdi_adapter.ControllerNumber) { - a->xdi_mbox.data_length = sizeof(dword); - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - dword *data = (dword *) a->xdi_mbox.data; - if (!a->xdi_adapter.ram - || !a->xdi_adapter.reset - || !a->xdi_adapter.cfg) { - *data = 3; - } else if (a->xdi_adapter.trapped) { - *data = 2; - } else if (a->xdi_adapter.Initialized) { - *data = 1; - } else { - *data = 0; - } - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - } - break; - - case DIVA_XDI_UM_CMD_WRITE_FPGA: - if (!a->xdi_adapter.ControllerNumber) { - ret = - diva_4bri_write_fpga_image(a, - (byte *)&cmd[1], - cmd->command_data. - write_fpga. - image_length); - } - break; - - case DIVA_XDI_UM_CMD_RESET_ADAPTER: - if (!a->xdi_adapter.ControllerNumber) { - ret = diva_4bri_reset_adapter(&a->xdi_adapter); - } - break; - - case DIVA_XDI_UM_CMD_WRITE_SDRAM_BLOCK: - if (!a->xdi_adapter.ControllerNumber) { - ret = diva_4bri_write_sdram_block(&a->xdi_adapter, - cmd-> - command_data. - write_sdram. - offset, - (byte *) & - cmd[1], - cmd-> - command_data. - write_sdram. - length, - a->xdi_adapter. - MemorySize); - } - break; - - case DIVA_XDI_UM_CMD_START_ADAPTER: - if (!a->xdi_adapter.ControllerNumber) { - ret = diva_4bri_start_adapter(&a->xdi_adapter, - cmd->command_data. - start.offset, - cmd->command_data. 
- start.features); - } - break; - - case DIVA_XDI_UM_CMD_SET_PROTOCOL_FEATURES: - if (!a->xdi_adapter.ControllerNumber) { - a->xdi_adapter.features = - cmd->command_data.features.features; - a->xdi_adapter.a.protocol_capabilities = - a->xdi_adapter.features; - DBG_TRC(("Set raw protocol features (%08x)", - a->xdi_adapter.features)) - ret = 0; - } - break; - - case DIVA_XDI_UM_CMD_STOP_ADAPTER: - if (!a->xdi_adapter.ControllerNumber) { - ret = diva_4bri_stop_adapter(a); - } - break; - - case DIVA_XDI_UM_CMD_READ_XLOG_ENTRY: - ret = diva_card_read_xlog(a); - break; - - case DIVA_XDI_UM_CMD_READ_SDRAM: - if (!a->xdi_adapter.ControllerNumber - && a->xdi_adapter.Address) { - if ( - (a->xdi_mbox.data_length = - cmd->command_data.read_sdram.length)) { - if ( - (a->xdi_mbox.data_length + - cmd->command_data.read_sdram.offset) < - a->xdi_adapter.MemorySize) { - a->xdi_mbox.data = - diva_os_malloc(0, - a->xdi_mbox. - data_length); - if (a->xdi_mbox.data) { - byte __iomem *p = DIVA_OS_MEM_ATTACH_ADDRESS(&a->xdi_adapter); - byte __iomem *src = p; - byte *dst = a->xdi_mbox.data; - dword len = a->xdi_mbox.data_length; - - src += cmd->command_data.read_sdram.offset; - - while (len--) { - *dst++ = READ_BYTE(src++); - } - DIVA_OS_MEM_DETACH_ADDRESS(&a->xdi_adapter, p); - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - } - } - } - break; - - default: - DBG_ERR(("A: A(%d) invalid cmd=%d", a->controller, - cmd->command)) - } - - return (ret); -} - -void *xdiLoadFile(char *FileName, dword *FileLength, - unsigned long lim) -{ - void *ret = diva_xdiLoadFileFile; - - if (FileLength) { - *FileLength = diva_xdiLoadFileLength; - } - diva_xdiLoadFileFile = NULL; - diva_xdiLoadFileLength = 0; - - return (ret); -} - -void diva_os_set_qBri_functions(PISDN_ADAPTER IoAdapter) -{ -} - -void diva_os_set_qBri2_functions(PISDN_ADAPTER IoAdapter) -{ -} - -static int -diva_4bri_write_fpga_image(diva_os_xdi_adapter_t *a, byte *data, - dword length) -{ - int ret; - - diva_xdiLoadFileFile = data; - diva_xdiLoadFileLength = length; - - ret = qBri_FPGA_download(&a->xdi_adapter); - - diva_xdiLoadFileFile = NULL; - diva_xdiLoadFileLength = 0; - - return (ret ? 
0 : -1); -} - -static int diva_4bri_reset_adapter(PISDN_ADAPTER IoAdapter) -{ - PISDN_ADAPTER Slave; - int i; - - if (!IoAdapter->Address || !IoAdapter->reset) { - return (-1); - } - if (IoAdapter->Initialized) { - DBG_ERR(("A: A(%d) can't reset 4BRI adapter - please stop first", - IoAdapter->ANum)) - return (-1); - } - - /* - Forget all entities on all adapters - */ - for (i = 0; ((i < IoAdapter->tasks) && IoAdapter->QuadroList); i++) { - Slave = IoAdapter->QuadroList->QuadroAdapter[i]; - Slave->e_count = 0; - if (Slave->e_tbl) { - memset(Slave->e_tbl, 0x00, - Slave->e_max * sizeof(E_INFO)); - } - Slave->head = 0; - Slave->tail = 0; - Slave->assign = 0; - Slave->trapped = 0; - - memset(&Slave->a.IdTable[0], 0x00, - sizeof(Slave->a.IdTable)); - memset(&Slave->a.IdTypeTable[0], 0x00, - sizeof(Slave->a.IdTypeTable)); - memset(&Slave->a.FlowControlIdTable[0], 0x00, - sizeof(Slave->a.FlowControlIdTable)); - memset(&Slave->a.FlowControlSkipTable[0], 0x00, - sizeof(Slave->a.FlowControlSkipTable)); - memset(&Slave->a.misc_flags_table[0], 0x00, - sizeof(Slave->a.misc_flags_table)); - memset(&Slave->a.rx_stream[0], 0x00, - sizeof(Slave->a.rx_stream)); - memset(&Slave->a.tx_stream[0], 0x00, - sizeof(Slave->a.tx_stream)); - memset(&Slave->a.tx_pos[0], 0x00, sizeof(Slave->a.tx_pos)); - memset(&Slave->a.rx_pos[0], 0x00, sizeof(Slave->a.rx_pos)); - } - - return (0); -} - - -static int -diva_4bri_write_sdram_block(PISDN_ADAPTER IoAdapter, - dword address, - const byte *data, dword length, dword limit) -{ - byte __iomem *p = DIVA_OS_MEM_ATTACH_ADDRESS(IoAdapter); - byte __iomem *mem = p; - - if (((address + length) >= limit) || !mem) { - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, p); - DBG_ERR(("A: A(%d) write 4BRI address=0x%08lx", - IoAdapter->ANum, address + length)) - return (-1); - } - mem += address; - - while (length--) { - WRITE_BYTE(mem++, *data++); - } - - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, p); - return (0); -} - -static int -diva_4bri_start_adapter(PISDN_ADAPTER IoAdapter, - dword start_address, dword features) -{ - volatile word __iomem *signature; - int started = 0; - int i; - byte __iomem *p; - - /* - start adapter - */ - start_qBri_hardware(IoAdapter); - - p = DIVA_OS_MEM_ATTACH_RAM(IoAdapter); - /* - wait for signature in shared memory (max. 3 seconds) - */ - signature = (volatile word __iomem *) (&p[0x1E]); - - for (i = 0; i < 300; ++i) { - diva_os_wait(10); - if (READ_WORD(&signature[0]) == 0x4447) { - DBG_TRC(("Protocol startup time %d.%02d seconds", - (i / 100), (i % 100))) - started = 1; - break; - } - } - - for (i = 1; i < IoAdapter->tasks; i++) { - IoAdapter->QuadroList->QuadroAdapter[i]->features = - IoAdapter->features; - IoAdapter->QuadroList->QuadroAdapter[i]->a. 
- protocol_capabilities = IoAdapter->features; - } - - if (!started) { - DBG_FTL(("%s: Adapter selftest failed, signature=%04x", - IoAdapter->Properties.Name, - READ_WORD(&signature[0]))) - DIVA_OS_MEM_DETACH_RAM(IoAdapter, p); - (*(IoAdapter->trapFnc)) (IoAdapter); - IoAdapter->stop(IoAdapter); - return (-1); - } - DIVA_OS_MEM_DETACH_RAM(IoAdapter, p); - - for (i = 0; i < IoAdapter->tasks; i++) { - IoAdapter->QuadroList->QuadroAdapter[i]->Initialized = 1; - IoAdapter->QuadroList->QuadroAdapter[i]->IrqCount = 0; - } - - if (check_qBri_interrupt(IoAdapter)) { - DBG_ERR(("A: A(%d) interrupt test failed", - IoAdapter->ANum)) - for (i = 0; i < IoAdapter->tasks; i++) { - IoAdapter->QuadroList->QuadroAdapter[i]->Initialized = 0; - } - IoAdapter->stop(IoAdapter); - return (-1); - } - - IoAdapter->Properties.Features = (word) features; - diva_xdi_display_adapter_features(IoAdapter->ANum); - - for (i = 0; i < IoAdapter->tasks; i++) { - DBG_LOG(("A(%d) %s adapter successfully started", - IoAdapter->QuadroList->QuadroAdapter[i]->ANum, - (IoAdapter->tasks == 1) ? "BRI 2.0" : "4BRI")) - diva_xdi_didd_register_adapter(IoAdapter->QuadroList->QuadroAdapter[i]->ANum); - IoAdapter->QuadroList->QuadroAdapter[i]->Properties.Features = (word) features; - } - - return (0); -} - -static int check_qBri_interrupt(PISDN_ADAPTER IoAdapter) -{ -#ifdef SUPPORT_INTERRUPT_TEST_ON_4BRI - int i; - ADAPTER *a = &IoAdapter->a; - byte __iomem *p; - - IoAdapter->IrqCount = 0; - - if (IoAdapter->ControllerNumber > 0) - return (-1); - - p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - WRITE_BYTE(&p[PLX9054_INTCSR], PLX9054_INT_ENABLE); - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - /* - interrupt test - */ - a->ReadyInt = 1; - a->ram_out(a, &PR_RAM->ReadyInt, 1); - - for (i = 100; !IoAdapter->IrqCount && (i-- > 0); diva_os_wait(10)); - - return ((IoAdapter->IrqCount > 0) ? 0 : -1); -#else - dword volatile __iomem *qBriIrq; - byte __iomem *p; - /* - Reset on-board interrupt register - */ - IoAdapter->IrqCount = 0; - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - qBriIrq = (dword volatile __iomem *) (&p[_4bri_is_rev_2_card - (IoAdapter-> - cardType) ? 
(MQ2_BREG_IRQ_TEST) - : (MQ_BREG_IRQ_TEST)]); - - WRITE_DWORD(qBriIrq, MQ_IRQ_REQ_OFF); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); - - p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - WRITE_BYTE(&p[PLX9054_INTCSR], PLX9054_INT_ENABLE); - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - - diva_os_wait(100); - - return (0); -#endif /* SUPPORT_INTERRUPT_TEST_ON_4BRI */ -} - -static void diva_4bri_clear_interrupts(diva_os_xdi_adapter_t *a) -{ - PISDN_ADAPTER IoAdapter = &a->xdi_adapter; - - /* - clear any pending interrupt - */ - IoAdapter->disIrq(IoAdapter); - - IoAdapter->tst_irq(&IoAdapter->a); - IoAdapter->clr_irq(&IoAdapter->a); - IoAdapter->tst_irq(&IoAdapter->a); - - /* - kill pending dpcs - */ - diva_os_cancel_soft_isr(&IoAdapter->req_soft_isr); - diva_os_cancel_soft_isr(&IoAdapter->isr_soft_isr); -} - -static int diva_4bri_stop_adapter(diva_os_xdi_adapter_t *a) -{ - PISDN_ADAPTER IoAdapter = &a->xdi_adapter; - int i; - - if (!IoAdapter->ram) { - return (-1); - } - - if (!IoAdapter->Initialized) { - DBG_ERR(("A: A(%d) can't stop PRI adapter - not running", - IoAdapter->ANum)) - return (-1); /* nothing to stop */ - } - - for (i = 0; i < IoAdapter->tasks; i++) { - IoAdapter->QuadroList->QuadroAdapter[i]->Initialized = 0; - } - - /* - Disconnect Adapters from DIDD - */ - for (i = 0; i < IoAdapter->tasks; i++) { - diva_xdi_didd_remove_adapter(IoAdapter->QuadroList->QuadroAdapter[i]->ANum); - } - - i = 100; - - /* - Stop interrupts - */ - a->clear_interrupts_proc = diva_4bri_clear_interrupts; - IoAdapter->a.ReadyInt = 1; - IoAdapter->a.ram_inc(&IoAdapter->a, &PR_RAM->ReadyInt); - do { - diva_os_sleep(10); - } while (i-- && a->clear_interrupts_proc); - - if (a->clear_interrupts_proc) { - diva_4bri_clear_interrupts(a); - a->clear_interrupts_proc = NULL; - DBG_ERR(("A: A(%d) no final interrupt from 4BRI adapter", - IoAdapter->ANum)) - } - IoAdapter->a.ReadyInt = 0; - - /* - Stop and reset adapter - */ - IoAdapter->stop(IoAdapter); - - return (0); -} diff --git a/drivers/isdn/hardware/eicon/os_4bri.h b/drivers/isdn/hardware/eicon/os_4bri.h deleted file mode 100644 index 94b2709537d8..000000000000 --- a/drivers/isdn/hardware/eicon/os_4bri.h +++ /dev/null @@ -1,9 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: os_4bri.h,v 1.1.2.2 2001/02/08 12:25:44 armin Exp $ */ - -#ifndef __DIVA_OS_4_BRI_H__ -#define __DIVA_OS_4_BRI_H__ - -int diva_4bri_init_card(diva_os_xdi_adapter_t *a); - -#endif diff --git a/drivers/isdn/hardware/eicon/os_bri.c b/drivers/isdn/hardware/eicon/os_bri.c deleted file mode 100644 index de93090bcacb..000000000000 --- a/drivers/isdn/hardware/eicon/os_bri.c +++ /dev/null @@ -1,815 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* $Id: os_bri.c,v 1.21 2004/03/21 17:26:01 armin Exp $ */ - -#include "platform.h" -#include "debuglib.h" -#include "cardtype.h" -#include "pc.h" -#include "pr_pc.h" -#include "di_defs.h" -#include "dsp_defs.h" -#include "di.h" -#include "io.h" - -#include "xdi_msg.h" -#include "xdi_adapter.h" -#include "os_bri.h" -#include "diva_pci.h" -#include "mi_pc.h" -#include "pc_maint.h" -#include "dsrv_bri.h" - -/* -** IMPORTS -*/ -extern void prepare_maestra_functions(PISDN_ADAPTER IoAdapter); -extern void diva_xdi_display_adapter_features(int card); -extern int diva_card_read_xlog(diva_os_xdi_adapter_t *a); - -/* -** LOCALS -*/ -static int bri_bar_length[3] = { - 0x80, - 0x80, - 0x20 -}; -static int diva_bri_cleanup_adapter(diva_os_xdi_adapter_t *a); -static dword diva_bri_get_serial_number(diva_os_xdi_adapter_t *a); -static int diva_bri_cmd_card_proc(struct 
_diva_os_xdi_adapter *a, - diva_xdi_um_cfg_cmd_t *cmd, int length); -static int diva_bri_reregister_io(diva_os_xdi_adapter_t *a); -static int diva_bri_reset_adapter(PISDN_ADAPTER IoAdapter); -static int diva_bri_write_sdram_block(PISDN_ADAPTER IoAdapter, - dword address, - const byte *data, dword length); -static int diva_bri_start_adapter(PISDN_ADAPTER IoAdapter, - dword start_address, dword features); -static int diva_bri_stop_adapter(diva_os_xdi_adapter_t *a); - -static void diva_bri_set_addresses(diva_os_xdi_adapter_t *a) -{ - a->resources.pci.mem_type_id[MEM_TYPE_RAM] = 0; - a->resources.pci.mem_type_id[MEM_TYPE_CFG] = 1; - a->resources.pci.mem_type_id[MEM_TYPE_ADDRESS] = 2; - a->resources.pci.mem_type_id[MEM_TYPE_RESET] = 1; - a->resources.pci.mem_type_id[MEM_TYPE_PORT] = 2; - a->resources.pci.mem_type_id[MEM_TYPE_CTLREG] = 2; - - a->xdi_adapter.ram = a->resources.pci.addr[0]; - a->xdi_adapter.cfg = a->resources.pci.addr[1]; - a->xdi_adapter.Address = a->resources.pci.addr[2]; - - a->xdi_adapter.reset = a->xdi_adapter.cfg; - a->xdi_adapter.port = a->xdi_adapter.Address; - - a->xdi_adapter.ctlReg = a->xdi_adapter.port + M_PCI_RESET; - - a->xdi_adapter.reset += 0x4C; /* PLX 9050 !! */ -} - -/* -** BAR0 - MEM Addr - 0x80 - NOT USED -** BAR1 - I/O Addr - 0x80 -** BAR2 - I/O Addr - 0x20 -*/ -int diva_bri_init_card(diva_os_xdi_adapter_t *a) -{ - int bar; - dword bar2 = 0, bar2_length = 0xffffffff; - word cmd = 0, cmd_org; - byte Bus, Slot; - void *hdev; - byte __iomem *p; - - /* - Set properties - */ - a->xdi_adapter.Properties = CardProperties[a->CardOrdinal]; - DBG_LOG(("Load %s", a->xdi_adapter.Properties.Name)) - - /* - Get resources - */ - for (bar = 0; bar < 3; bar++) { - a->resources.pci.bar[bar] = - divasa_get_pci_bar(a->resources.pci.bus, - a->resources.pci.func, bar, - a->resources.pci.hdev); - if (!a->resources.pci.bar[bar]) { - DBG_ERR(("A: can't get BAR[%d]", bar)) - return (-1); - } - } - - a->resources.pci.irq = - (byte) divasa_get_pci_irq(a->resources.pci.bus, - a->resources.pci.func, - a->resources.pci.hdev); - if (!a->resources.pci.irq) { - DBG_ERR(("A: invalid irq")); - return (-1); - } - - /* - Get length of I/O bar 2 - it differs with older - EEPROM versions - */ - Bus = a->resources.pci.bus; - Slot = a->resources.pci.func; - hdev = a->resources.pci.hdev; - - /* - Get plain original values of the BAR2 and CMD registers - */ - PCIread(Bus, Slot, 0x18, &bar2, sizeof(bar2), hdev); - PCIread(Bus, Slot, 0x04, &cmd_org, sizeof(cmd_org), hdev); - /* - Disable device and get BAR2 length - */ - PCIwrite(Bus, Slot, 0x04, &cmd, sizeof(cmd), hdev); - PCIwrite(Bus, Slot, 0x18, &bar2_length, sizeof(bar2_length), hdev); - PCIread(Bus, Slot, 0x18, &bar2_length, sizeof(bar2_length), hdev); - /* - Restore BAR2 and CMD registers - */ - PCIwrite(Bus, Slot, 0x18, &bar2, sizeof(bar2), hdev); - PCIwrite(Bus, Slot, 0x04, &cmd_org, sizeof(cmd_org), hdev); - - /* - Calculate BAR2 length: after the all-ones write, the read-back value - has the size bits forced to zero, so masking the low flag bits, - inverting and adding one yields the region size, e.g. a read-back of - 0xFFFFFFE1 gives (~0xFFFFFFE0) + 1 = 0x20 bytes - */ - bar2_length = (~(bar2_length & ~7)) + 1; - DBG_LOG(("BAR[2] length=%lx", bar2_length)) - - /* - Map and register resources - */ - if (!(a->resources.pci.addr[0] = - divasa_remap_pci_bar(a, 0, a->resources.pci.bar[0], - bri_bar_length[0]))) { - DBG_ERR(("A: BRI, can't map BAR[0]")) - diva_bri_cleanup_adapter(a); - return (-1); - } - - sprintf(&a->port_name[0], "BRI %02x:%02x", - a->resources.pci.bus, a->resources.pci.func); - - if (diva_os_register_io_port(a, 1, a->resources.pci.bar[1], - bri_bar_length[1], &a->port_name[0], 1)) { - DBG_ERR(("A: BRI, can't register BAR[1]")) - diva_bri_cleanup_adapter(a); - 
return (-1); - } - a->resources.pci.addr[1] = (void *) (unsigned long) a->resources.pci.bar[1]; - a->resources.pci.length[1] = bri_bar_length[1]; - - if (diva_os_register_io_port(a, 1, a->resources.pci.bar[2], - bar2_length, &a->port_name[0], 2)) { - DBG_ERR(("A: BRI, can't register BAR[2]")) - diva_bri_cleanup_adapter(a); - return (-1); - } - a->resources.pci.addr[2] = (void *) (unsigned long) a->resources.pci.bar[2]; - a->resources.pci.length[2] = bar2_length; - - /* - Set all memory areas - */ - diva_bri_set_addresses(a); - - /* - Get Serial Number - */ - a->xdi_adapter.serialNo = diva_bri_get_serial_number(a); - - /* - Register I/O ports with correct name now - */ - if (diva_bri_reregister_io(a)) { - diva_bri_cleanup_adapter(a); - return (-1); - } - - /* - Initialize OS dependent objects - */ - if (diva_os_initialize_spin_lock - (&a->xdi_adapter.isr_spin_lock, "isr")) { - diva_bri_cleanup_adapter(a); - return (-1); - } - if (diva_os_initialize_spin_lock - (&a->xdi_adapter.data_spin_lock, "data")) { - diva_bri_cleanup_adapter(a); - return (-1); - } - - strcpy(a->xdi_adapter.req_soft_isr.dpc_thread_name, "kdivasbrid"); - - if (diva_os_initialize_soft_isr(&a->xdi_adapter.req_soft_isr, - DIDpcRoutine, &a->xdi_adapter)) { - diva_bri_cleanup_adapter(a); - return (-1); - } - /* - Do not initialize second DPC - only one thread will be created - */ - a->xdi_adapter.isr_soft_isr.object = a->xdi_adapter.req_soft_isr.object; - - /* - Create entity table - */ - a->xdi_adapter.Channels = CardProperties[a->CardOrdinal].Channels; - a->xdi_adapter.e_max = CardProperties[a->CardOrdinal].E_info; - a->xdi_adapter.e_tbl = diva_os_malloc(0, a->xdi_adapter.e_max * sizeof(E_INFO)); - if (!a->xdi_adapter.e_tbl) { - diva_bri_cleanup_adapter(a); - return (-1); - } - memset(a->xdi_adapter.e_tbl, 0x00, a->xdi_adapter.e_max * sizeof(E_INFO)); - - /* - Set up interface - */ - a->xdi_adapter.a.io = &a->xdi_adapter; - a->xdi_adapter.DIRequest = request; - a->interface.cleanup_adapter_proc = diva_bri_cleanup_adapter; - a->interface.cmd_proc = diva_bri_cmd_card_proc; - - p = DIVA_OS_MEM_ATTACH_RESET(&a->xdi_adapter); - outpp(p, 0x41); - DIVA_OS_MEM_DETACH_RESET(&a->xdi_adapter, p); - - prepare_maestra_functions(&a->xdi_adapter); - - a->dsp_mask = 0x00000003; - - /* - Set IRQ handler - */ - a->xdi_adapter.irq_info.irq_nr = a->resources.pci.irq; - sprintf(a->xdi_adapter.irq_info.irq_name, "DIVA BRI %ld", - (long) a->xdi_adapter.serialNo); - if (diva_os_register_irq(a, a->xdi_adapter.irq_info.irq_nr, - a->xdi_adapter.irq_info.irq_name)) { - diva_bri_cleanup_adapter(a); - return (-1); - } - a->xdi_adapter.irq_info.registered = 1; - - diva_log_info("%s IRQ:%d SerNo:%d", a->xdi_adapter.Properties.Name, - a->resources.pci.irq, a->xdi_adapter.serialNo); - - return (0); -} - - -static int diva_bri_cleanup_adapter(diva_os_xdi_adapter_t *a) -{ - int i; - - if (a->xdi_adapter.Initialized) { - diva_bri_stop_adapter(a); - } - - /* - Remove ISR Handler - */ - if (a->xdi_adapter.irq_info.registered) { - diva_os_remove_irq(a, a->xdi_adapter.irq_info.irq_nr); - } - a->xdi_adapter.irq_info.registered = 0; - - if (a->resources.pci.addr[0] && a->resources.pci.bar[0]) { - divasa_unmap_pci_bar(a->resources.pci.addr[0]); - a->resources.pci.addr[0] = NULL; - a->resources.pci.bar[0] = 0; - } - - for (i = 1; i < 3; i++) { - if (a->resources.pci.addr[i] && a->resources.pci.bar[i]) { - diva_os_register_io_port(a, 0, - a->resources.pci.bar[i], - a->resources.pci. 
- length[i], - &a->port_name[0], i); - a->resources.pci.addr[i] = NULL; - a->resources.pci.bar[i] = 0; - } - } - - /* - Free OS objects - */ - diva_os_cancel_soft_isr(&a->xdi_adapter.req_soft_isr); - diva_os_cancel_soft_isr(&a->xdi_adapter.isr_soft_isr); - - diva_os_remove_soft_isr(&a->xdi_adapter.req_soft_isr); - a->xdi_adapter.isr_soft_isr.object = NULL; - - diva_os_destroy_spin_lock(&a->xdi_adapter.isr_spin_lock, "rm"); - diva_os_destroy_spin_lock(&a->xdi_adapter.data_spin_lock, "rm"); - - /* - Free memory - */ - if (a->xdi_adapter.e_tbl) { - diva_os_free(0, a->xdi_adapter.e_tbl); - a->xdi_adapter.e_tbl = NULL; - } - - return (0); -} - -void diva_os_prepare_maestra_functions(PISDN_ADAPTER IoAdapter) -{ -} - -/* -** Get serial number -*/ -static dword diva_bri_get_serial_number(diva_os_xdi_adapter_t *a) -{ - dword serNo = 0; - byte __iomem *confIO; - word serHi, serLo; - word __iomem *confMem; - - confIO = DIVA_OS_MEM_ATTACH_CFG(&a->xdi_adapter); - serHi = (word) (inppw(&confIO[0x22]) & 0x0FFF); - serLo = (word) (inppw(&confIO[0x26]) & 0x0FFF); - serNo = ((dword) serHi << 16) | (dword) serLo; - DIVA_OS_MEM_DETACH_CFG(&a->xdi_adapter, confIO); - - if ((serNo == 0) || (serNo == 0xFFFFFFFF)) { - DBG_FTL(("W: BRI use BAR[0] to get card serial number")) - - confMem = (word __iomem *)DIVA_OS_MEM_ATTACH_RAM(&a->xdi_adapter); - serHi = (word) (READ_WORD(&confMem[0x11]) & 0x0FFF); - serLo = (word) (READ_WORD(&confMem[0x13]) & 0x0FFF); - serNo = (((dword) serHi) << 16) | ((dword) serLo); - DIVA_OS_MEM_DETACH_RAM(&a->xdi_adapter, confMem); - } - - DBG_LOG(("Serial Number=%ld", serNo)) - - return (serNo); -} - -/* -** Unregister I/O and register it with new name, -** based on Serial Number -*/ -static int diva_bri_reregister_io(diva_os_xdi_adapter_t *a) -{ - int i; - - for (i = 1; i < 3; i++) { - diva_os_register_io_port(a, 0, a->resources.pci.bar[i], - a->resources.pci.length[i], - &a->port_name[0], i); - a->resources.pci.addr[i] = NULL; - } - - sprintf(a->port_name, "DIVA BRI %ld", - (long) a->xdi_adapter.serialNo); - - for (i = 1; i < 3; i++) { - if (diva_os_register_io_port(a, 1, a->resources.pci.bar[i], - a->resources.pci.length[i], - &a->port_name[0], i)) { - DBG_ERR(("A: failed to reregister BAR[%d]", i)) - return (-1); - } - a->resources.pci.addr[i] = - (void *) (unsigned long) a->resources.pci.bar[i]; - } - - return (0); -} - -/* -** Process commands from user mode -*/ -static int -diva_bri_cmd_card_proc(struct _diva_os_xdi_adapter *a, - diva_xdi_um_cfg_cmd_t *cmd, int length) -{ - int ret = -1; - - if (cmd->adapter != a->controller) { - DBG_ERR(("A: bri_cmd, invalid controller=%d != %d", - cmd->adapter, a->controller)) - return (-1); - } - - switch (cmd->command) { - case DIVA_XDI_UM_CMD_GET_CARD_ORDINAL: - a->xdi_mbox.data_length = sizeof(dword); - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - *(dword *) a->xdi_mbox.data = - (dword) a->CardOrdinal; - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - break; - - case DIVA_XDI_UM_CMD_GET_SERIAL_NR: - a->xdi_mbox.data_length = sizeof(dword); - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - *(dword *) a->xdi_mbox.data = - (dword) a->xdi_adapter.serialNo; - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - break; - - case DIVA_XDI_UM_CMD_GET_PCI_HW_CONFIG: - a->xdi_mbox.data_length = sizeof(dword) * 9; - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - int i; - dword *data = (dword *) 
a->xdi_mbox.data; - - for (i = 0; i < 8; i++) { - *data++ = a->resources.pci.bar[i]; - } - *data++ = (dword) a->resources.pci.irq; - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - break; - - case DIVA_XDI_UM_CMD_GET_CARD_STATE: - a->xdi_mbox.data_length = sizeof(dword); - a->xdi_mbox.data = - diva_os_malloc(0, a->xdi_mbox.data_length); - if (a->xdi_mbox.data) { - dword *data = (dword *) a->xdi_mbox.data; - if (!a->xdi_adapter.port) { - *data = 3; - } else if (a->xdi_adapter.trapped) { - *data = 2; - } else if (a->xdi_adapter.Initialized) { - *data = 1; - } else { - *data = 0; - } - a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY; - ret = 0; - } - break; - - case DIVA_XDI_UM_CMD_RESET_ADAPTER: - ret = diva_bri_reset_adapter(&a->xdi_adapter); - break; - - case DIVA_XDI_UM_CMD_WRITE_SDRAM_BLOCK: - ret = diva_bri_write_sdram_block(&a->xdi_adapter, - cmd->command_data. - write_sdram.offset, - (byte *)&cmd[1], - cmd->command_data. - write_sdram.length); - break; - - case DIVA_XDI_UM_CMD_START_ADAPTER: - ret = diva_bri_start_adapter(&a->xdi_adapter, - cmd->command_data.start. - offset, - cmd->command_data.start. - features); - break; - - case DIVA_XDI_UM_CMD_SET_PROTOCOL_FEATURES: - a->xdi_adapter.features = - cmd->command_data.features.features; - a->xdi_adapter.a.protocol_capabilities = - a->xdi_adapter.features; - DBG_TRC( - ("Set raw protocol features (%08x)", - a->xdi_adapter.features)) ret = 0; - break; - - case DIVA_XDI_UM_CMD_STOP_ADAPTER: - ret = diva_bri_stop_adapter(a); - break; - - case DIVA_XDI_UM_CMD_READ_XLOG_ENTRY: - ret = diva_card_read_xlog(a); - break; - - default: - DBG_ERR( - ("A: A(%d) invalid cmd=%d", a->controller, - cmd->command))} - - return (ret); -} - -static int diva_bri_reset_adapter(PISDN_ADAPTER IoAdapter) -{ - byte __iomem *addrHi, *addrLo, *ioaddr; - dword i; - byte __iomem *Port; - - if (!IoAdapter->port) { - return (-1); - } - if (IoAdapter->Initialized) { - DBG_ERR(("A: A(%d) can't reset BRI adapter - please stop first", - IoAdapter->ANum)) return (-1); - } - (*(IoAdapter->rstFnc)) (IoAdapter); - diva_os_wait(100); - Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter); - addrHi = Port + - ((IoAdapter->Properties.Bus == BUS_PCI) ? 
M_PCI_ADDRH : ADDRH); - addrLo = Port + ADDR; - ioaddr = Port + DATA; - /* - recover - */ - outpp(addrHi, (byte) 0); - outppw(addrLo, (word) 0); - outppw(ioaddr, (word) 0); - /* - clear shared memory - */ - outpp(addrHi, - (byte) ( - (IoAdapter->MemoryBase + IoAdapter->MemorySize - - BRI_SHARED_RAM_SIZE) >> 16)); - outppw(addrLo, 0); - for (i = 0; i < 0x8000; outppw(ioaddr, 0), ++i); - diva_os_wait(100); - - /* - clear signature - */ - outpp(addrHi, - (byte) ( - (IoAdapter->MemoryBase + IoAdapter->MemorySize - - BRI_SHARED_RAM_SIZE) >> 16)); - outppw(addrLo, 0x1e); - outpp(ioaddr, 0); - outpp(ioaddr, 0); - - outpp(addrHi, (byte) 0); - outppw(addrLo, (word) 0); - outppw(ioaddr, (word) 0); - - DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port); - - /* - Forget all outstanding entities - */ - IoAdapter->e_count = 0; - if (IoAdapter->e_tbl) { - memset(IoAdapter->e_tbl, 0x00, - IoAdapter->e_max * sizeof(E_INFO)); - } - IoAdapter->head = 0; - IoAdapter->tail = 0; - IoAdapter->assign = 0; - IoAdapter->trapped = 0; - - memset(&IoAdapter->a.IdTable[0], 0x00, - sizeof(IoAdapter->a.IdTable)); - memset(&IoAdapter->a.IdTypeTable[0], 0x00, - sizeof(IoAdapter->a.IdTypeTable)); - memset(&IoAdapter->a.FlowControlIdTable[0], 0x00, - sizeof(IoAdapter->a.FlowControlIdTable)); - memset(&IoAdapter->a.FlowControlSkipTable[0], 0x00, - sizeof(IoAdapter->a.FlowControlSkipTable)); - memset(&IoAdapter->a.misc_flags_table[0], 0x00, - sizeof(IoAdapter->a.misc_flags_table)); - memset(&IoAdapter->a.rx_stream[0], 0x00, - sizeof(IoAdapter->a.rx_stream)); - memset(&IoAdapter->a.tx_stream[0], 0x00, - sizeof(IoAdapter->a.tx_stream)); - memset(&IoAdapter->a.tx_pos[0], 0x00, sizeof(IoAdapter->a.tx_pos)); - memset(&IoAdapter->a.rx_pos[0], 0x00, sizeof(IoAdapter->a.rx_pos)); - - return (0); -} - -static int -diva_bri_write_sdram_block(PISDN_ADAPTER IoAdapter, - dword address, const byte *data, dword length) -{ - byte __iomem *addrHi, *addrLo, *ioaddr; - byte __iomem *Port; - - if (!IoAdapter->port) { - return (-1); - } - - Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter); - addrHi = Port + - ((IoAdapter->Properties.Bus == BUS_PCI) ? M_PCI_ADDRH : ADDRH); - addrLo = Port + ADDR; - ioaddr = Port + DATA; - - while (length--) { - outpp(addrHi, (word) (address >> 16)); - outppw(addrLo, (word) (address & 0x0000ffff)); - outpp(ioaddr, *data++); - address++; - } - - DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port); - return (0); -} - -static int -diva_bri_start_adapter(PISDN_ADAPTER IoAdapter, - dword start_address, dword features) -{ - byte __iomem *Port; - dword i, test; - byte __iomem *addrHi, *addrLo, *ioaddr; - int started = 0; - ADAPTER *a = &IoAdapter->a; - - if (IoAdapter->Initialized) { - DBG_ERR( - ("A: A(%d) bri_start_adapter, adapter already running", - IoAdapter->ANum)) return (-1); - } - if (!IoAdapter->port) { - DBG_ERR(("A: A(%d) bri_start_adapter, adapter not mapped", - IoAdapter->ANum)) return (-1); - } - - sprintf(IoAdapter->Name, "A(%d)", (int) IoAdapter->ANum); - DBG_LOG(("A(%d) start BRI", IoAdapter->ANum)) - - Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter); - addrHi = Port + - ((IoAdapter->Properties.Bus == BUS_PCI) ? 
M_PCI_ADDRH : ADDRH); - addrLo = Port + ADDR; - ioaddr = Port + DATA; - - outpp(addrHi, - (byte) ( - (IoAdapter->MemoryBase + IoAdapter->MemorySize - - BRI_SHARED_RAM_SIZE) >> 16)); - outppw(addrLo, 0x1e); - outppw(ioaddr, 0x00); - DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port); - - /* - start the protocol code - */ - Port = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - outpp(Port, 0x08); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, Port); - - Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter); - addrHi = Port + - ((IoAdapter->Properties.Bus == BUS_PCI) ? M_PCI_ADDRH : ADDRH); - addrLo = Port + ADDR; - ioaddr = Port + DATA; - /* - wait for signature (max. 3 seconds) - */ - for (i = 0; i < 300; ++i) { - diva_os_wait(10); - outpp(addrHi, - (byte) ( - (IoAdapter->MemoryBase + - IoAdapter->MemorySize - - BRI_SHARED_RAM_SIZE) >> 16)); - outppw(addrLo, 0x1e); - test = (dword) inppw(ioaddr); - if (test == 0x4447) { - DBG_LOG( - ("Protocol startup time %d.%02d seconds", - (i / 100), (i % 100))) - started = 1; - break; - } - } - DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port); - - if (!started) { - DBG_FTL(("A: A(%d) %s: Adapter selftest failed 0x%04X", - IoAdapter->ANum, IoAdapter->Properties.Name, - test)) - (*(IoAdapter->trapFnc)) (IoAdapter); - return (-1); - } - - IoAdapter->Initialized = 1; - - /* - Check Interrupt - */ - IoAdapter->IrqCount = 0; - a->ReadyInt = 1; - - if (IoAdapter->reset) { - Port = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - outpp(Port, 0x41); - DIVA_OS_MEM_DETACH_RESET(IoAdapter, Port); - } - - a->ram_out(a, &PR_RAM->ReadyInt, 1); - for (i = 0; ((!IoAdapter->IrqCount) && (i < 100)); i++) { - diva_os_wait(10); - } - if (!IoAdapter->IrqCount) { - DBG_ERR( - ("A: A(%d) interrupt test failed", - IoAdapter->ANum)) - IoAdapter->Initialized = 0; - IoAdapter->stop(IoAdapter); - return (-1); - } - - IoAdapter->Properties.Features = (word) features; - diva_xdi_display_adapter_features(IoAdapter->ANum); - DBG_LOG(("A(%d) BRI adapter successfully started", IoAdapter->ANum)) - /* - Register with DIDD - */ - diva_xdi_didd_register_adapter(IoAdapter->ANum); - - return (0); -} - -static void diva_bri_clear_interrupts(diva_os_xdi_adapter_t *a) -{ - PISDN_ADAPTER IoAdapter = &a->xdi_adapter; - - /* - clear any pending interrupt - */ - IoAdapter->disIrq(IoAdapter); - - IoAdapter->tst_irq(&IoAdapter->a); - IoAdapter->clr_irq(&IoAdapter->a); - IoAdapter->tst_irq(&IoAdapter->a); - - /* - kill pending dpcs - */ - diva_os_cancel_soft_isr(&IoAdapter->req_soft_isr); - diva_os_cancel_soft_isr(&IoAdapter->isr_soft_isr); -} - -/* -** Stop card -*/ -static int diva_bri_stop_adapter(diva_os_xdi_adapter_t *a) -{ - PISDN_ADAPTER IoAdapter = &a->xdi_adapter; - int i = 100; - - if (!IoAdapter->port) { - return (-1); - } - if (!IoAdapter->Initialized) { - DBG_ERR(("A: A(%d) can't stop BRI adapter - not running", - IoAdapter->ANum)) - return (-1); /* nothing to stop */ - } - IoAdapter->Initialized = 0; - - /* - Disconnect Adapter from DIDD - */ - diva_xdi_didd_remove_adapter(IoAdapter->ANum); - - /* - Stop interrupts - */ - a->clear_interrupts_proc = diva_bri_clear_interrupts; - IoAdapter->a.ReadyInt = 1; - IoAdapter->a.ram_inc(&IoAdapter->a, &PR_RAM->ReadyInt); - do { - diva_os_sleep(10); - } while (i-- && a->clear_interrupts_proc); - if (a->clear_interrupts_proc) { - diva_bri_clear_interrupts(a); - a->clear_interrupts_proc = NULL; - DBG_ERR(("A: A(%d) no final interrupt from BRI adapter", - IoAdapter->ANum)) - } - IoAdapter->a.ReadyInt = 0; - - /* - Stop and reset adapter - */ - IoAdapter->stop(IoAdapter); - - return (0); -} diff --git 
a/drivers/isdn/hardware/eicon/os_bri.h b/drivers/isdn/hardware/eicon/os_bri.h deleted file mode 100644 index 37c92cc53ded..000000000000 --- a/drivers/isdn/hardware/eicon/os_bri.h +++ /dev/null @@ -1,9 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: os_bri.h,v 1.1.2.2 2001/02/08 12:25:44 armin Exp $ */ - -#ifndef __DIVA_OS_BRI_REV_1_H__ -#define __DIVA_OS_BRI_REV_1_H__ - -int diva_bri_init_card(diva_os_xdi_adapter_t *a); - -#endif diff --git a/drivers/isdn/hardware/eicon/os_capi.h b/drivers/isdn/hardware/eicon/os_capi.h deleted file mode 100644 index e72394b95d50..000000000000 --- a/drivers/isdn/hardware/eicon/os_capi.h +++ /dev/null @@ -1,21 +0,0 @@ -/* $Id: os_capi.h,v 1.7 2003/04/12 21:40:49 schindler Exp $ - * - * ISDN interface module for Eicon active cards DIVA. - * CAPI Interface OS include files - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000-2003 Cytronics & Melware (info@melware.de) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. - */ - -#ifndef __OS_CAPI_H__ -#define __OS_CAPI_H__ - -#include -#include -#include -#include - -#endif /* __OS_CAPI_H__ */ diff --git a/drivers/isdn/hardware/eicon/os_pri.c b/drivers/isdn/hardware/eicon/os_pri.c deleted file mode 100644 index b20f1fb89d14..000000000000 --- a/drivers/isdn/hardware/eicon/os_pri.c +++ /dev/null @@ -1,1053 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* $Id: os_pri.c,v 1.32 2004/03/21 17:26:01 armin Exp $ */ - -#include "platform.h" -#include "debuglib.h" -#include "cardtype.h" -#include "pc.h" -#include "pr_pc.h" -#include "di_defs.h" -#include "dsp_defs.h" -#include "di.h" -#include "io.h" - -#include "xdi_msg.h" -#include "xdi_adapter.h" -#include "os_pri.h" -#include "diva_pci.h" -#include "mi_pc.h" -#include "pc_maint.h" -#include "dsp_tst.h" -#include "diva_dma.h" -#include "dsrv_pri.h" - -/* -------------------------------------------------------------------------- - OS Dependent part of XDI driver for DIVA PRI Adapter - - DSP detection/validation by Anthony Booth (Eicon Networks, www.eicon.com) - -------------------------------------------------------------------------- */ - -#define DIVA_PRI_NO_PCI_BIOS_WORKAROUND 1 - -extern int diva_card_read_xlog(diva_os_xdi_adapter_t *a); - -/* -** IMPORTS -*/ -extern void prepare_pri_functions(PISDN_ADAPTER IoAdapter); -extern void prepare_pri2_functions(PISDN_ADAPTER IoAdapter); -extern void diva_xdi_display_adapter_features(int card); - -static int diva_pri_cleanup_adapter(diva_os_xdi_adapter_t *a); -static int diva_pri_cmd_card_proc(struct _diva_os_xdi_adapter *a, - diva_xdi_um_cfg_cmd_t *cmd, int length); -static int pri_get_serial_number(diva_os_xdi_adapter_t *a); -static int diva_pri_stop_adapter(diva_os_xdi_adapter_t *a); -static dword diva_pri_detect_dsps(diva_os_xdi_adapter_t *a); - -/* -** Check card revision -*/ -static int pri_is_rev_2_card(int card_ordinal) -{ - switch (card_ordinal) { - case CARDTYPE_DIVASRV_P_30M_V2_PCI: - case CARDTYPE_DIVASRV_VOICE_P_30M_V2_PCI: - return (1); - } - return (0); -} - -static void diva_pri_set_addresses(diva_os_xdi_adapter_t *a) -{ - a->resources.pci.mem_type_id[MEM_TYPE_ADDRESS] = 0; - a->resources.pci.mem_type_id[MEM_TYPE_CONTROL] = 2; - a->resources.pci.mem_type_id[MEM_TYPE_CONFIG] = 4; - a->resources.pci.mem_type_id[MEM_TYPE_RAM] = 0; - a->resources.pci.mem_type_id[MEM_TYPE_RESET] = 2; - a->resources.pci.mem_type_id[MEM_TYPE_CFG] = 4; - a->resources.pci.mem_type_id[MEM_TYPE_PROM] = 3; 
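/*
  mem_type_id[] maps each logical region (RAM, RESET, CFG, PROM, ...) to
  the index of the PCI BAR that backs it, so attach/detach can be done
  by logical name. A plausible resolver (illustrative sketch only, not
  the actual DIVA_OS_MEM_ATTACH_* definition):

      static void *mem_attach(diva_os_xdi_adapter_t *a, int mem_type)
      {
              int bar = a->resources.pci.mem_type_id[mem_type];

              return a->resources.pci.addr[bar];
      }
*/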
- - a->xdi_adapter.Address = a->resources.pci.addr[0]; - a->xdi_adapter.Control = a->resources.pci.addr[2]; - a->xdi_adapter.Config = a->resources.pci.addr[4]; - - a->xdi_adapter.ram = a->resources.pci.addr[0]; - a->xdi_adapter.ram += MP_SHARED_RAM_OFFSET; - - a->xdi_adapter.reset = a->resources.pci.addr[2]; - a->xdi_adapter.reset += MP_RESET; - - a->xdi_adapter.cfg = a->resources.pci.addr[4]; - a->xdi_adapter.cfg += MP_IRQ_RESET; - - a->xdi_adapter.sdram_bar = a->resources.pci.bar[0]; - - a->xdi_adapter.prom = a->resources.pci.addr[3]; -} - -/* -** BAR0 - SDRAM, MP_MEMORY_SIZE, MP2_MEMORY_SIZE on Rev.2 -** BAR1 - DEVICES, 0x1000 -** BAR2 - CONTROL (REG), 0x2000 -** BAR3 - FLASH (REG), 0x8000 -** BAR4 - CONFIG (CFG), 0x1000 -*/ -int diva_pri_init_card(diva_os_xdi_adapter_t *a) -{ - int bar = 0; - int pri_rev_2; - unsigned long bar_length[5] = { - MP_MEMORY_SIZE, - 0x1000, - 0x2000, - 0x8000, - 0x1000 - }; - - pri_rev_2 = pri_is_rev_2_card(a->CardOrdinal); - - if (pri_rev_2) { - bar_length[0] = MP2_MEMORY_SIZE; - } - /* - Set properties - */ - a->xdi_adapter.Properties = CardProperties[a->CardOrdinal]; - DBG_LOG(("Load %s", a->xdi_adapter.Properties.Name)) - - /* - First initialization step: get and check hardware resources. - Do not map resources and do not access card at this step - */ - for (bar = 0; bar < 5; bar++) { - a->resources.pci.bar[bar] = - divasa_get_pci_bar(a->resources.pci.bus, - a->resources.pci.func, bar, - a->resources.pci.hdev); - if (!a->resources.pci.bar[bar] - || (a->resources.pci.bar[bar] == 0xFFFFFFF0)) { - DBG_ERR(("A: invalid bar[%d]=%08x", bar, - a->resources.pci.bar[bar])) - return (-1); - } - } - a->resources.pci.irq = - (byte) divasa_get_pci_irq(a->resources.pci.bus, - a->resources.pci.func, - a->resources.pci.hdev); - if (!a->resources.pci.irq) { - DBG_ERR(("A: invalid irq")); - return (-1); - } - - /* - Map all BAR's - */ - for (bar = 0; bar < 5; bar++) { - a->resources.pci.addr[bar] = - divasa_remap_pci_bar(a, bar, a->resources.pci.bar[bar], - bar_length[bar]); - if (!a->resources.pci.addr[bar]) { - DBG_ERR(("A: A(%d), can't map bar[%d]", - a->controller, bar)) - diva_pri_cleanup_adapter(a); - return (-1); - } - } - - /* - Set all memory areas - */ - diva_pri_set_addresses(a); - - /* - Get Serial Number of this adapter - */ - if (pri_get_serial_number(a)) { - dword serNo; - serNo = a->resources.pci.bar[1] & 0xffff0000; - serNo |= ((dword) a->resources.pci.bus) << 8; - serNo += (a->resources.pci.func + a->controller + 1); - a->xdi_adapter.serialNo = serNo & ~0xFF000000; - DBG_ERR(("A: A(%d) can't get Serial Number, generated serNo=%ld", - a->controller, a->xdi_adapter.serialNo)) - } - - - /* - Initialize os objects - */ - if (diva_os_initialize_spin_lock(&a->xdi_adapter.isr_spin_lock, "isr")) { - diva_pri_cleanup_adapter(a); - return (-1); - } - if (diva_os_initialize_spin_lock - (&a->xdi_adapter.data_spin_lock, "data")) { - diva_pri_cleanup_adapter(a); - return (-1); - } - - strcpy(a->xdi_adapter.req_soft_isr.dpc_thread_name, "kdivasprid"); - - if (diva_os_initialize_soft_isr(&a->xdi_adapter.req_soft_isr, - DIDpcRoutine, &a->xdi_adapter)) { - diva_pri_cleanup_adapter(a); - return (-1); - } - - /* - Do not initialize second DPC - only one thread will be created - */ - a->xdi_adapter.isr_soft_isr.object = - a->xdi_adapter.req_soft_isr.object; - - /* - Next step of card initialization: - set up all interface pointers - */ - a->xdi_adapter.Channels = CardProperties[a->CardOrdinal].Channels; - a->xdi_adapter.e_max = CardProperties[a->CardOrdinal].E_info; - - 
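/*
  Worked example for the fallback serial number computed above, with
  hypothetical values bar[1] = 0xfebf0000, bus = 0x02, func = 0,
  controller = 0:

      serNo  = 0xfebf0000 & 0xffff0000;   /* 0xfebf0000 */
      serNo |= ((dword) 0x02) << 8;       /* 0xfebf0200 */
      serNo += (0 + 0 + 1);               /* 0xfebf0201 */
      serNo &= ~0xFF000000;               /* 0x00bf0201 */

  The result is a stable number derived from the PCI topology; the top
  byte is cleared, presumably to leave room for the ControllerNumber
  encoding in bits 24-31 that the 4BRI slave adapters use.
*/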
a->xdi_adapter.e_tbl = - diva_os_malloc(0, a->xdi_adapter.e_max * sizeof(E_INFO)); - if (!a->xdi_adapter.e_tbl) { - diva_pri_cleanup_adapter(a); - return (-1); - } - memset(a->xdi_adapter.e_tbl, 0x00, a->xdi_adapter.e_max * sizeof(E_INFO)); - - a->xdi_adapter.a.io = &a->xdi_adapter; - a->xdi_adapter.DIRequest = request; - a->interface.cleanup_adapter_proc = diva_pri_cleanup_adapter; - a->interface.cmd_proc = diva_pri_cmd_card_proc; - - if (pri_rev_2) { - prepare_pri2_functions(&a->xdi_adapter); - } else { - prepare_pri_functions(&a->xdi_adapter); - } - - a->dsp_mask = diva_pri_detect_dsps(a); - - /* - Allocate DMA map - */ - if (pri_rev_2) { - diva_init_dma_map(a->resources.pci.hdev, - (struct _diva_dma_map_entry **) &a->xdi_adapter.dma_map, 32); - } - - /* - Set IRQ handler - */ - a->xdi_adapter.irq_info.irq_nr = a->resources.pci.irq; - sprintf(a->xdi_adapter.irq_info.irq_name, - "DIVA PRI %ld", (long) a->xdi_adapter.serialNo); - - if (diva_os_register_irq(a, a->xdi_adapter.irq_info.irq_nr, - a->xdi_adapter.irq_info.irq_name)) { - diva_pri_cleanup_adapter(a); - return (-1); - } - a->xdi_adapter.irq_info.registered = 1; - - diva_log_info("%s IRQ:%d SerNo:%d", a->xdi_adapter.Properties.Name, - a->resources.pci.irq, a->xdi_adapter.serialNo); - - return (0); -} - -static int diva_pri_cleanup_adapter(diva_os_xdi_adapter_t *a) -{ - int bar = 0; - - /* - Stop Adapter if adapter is running - */ - if (a->xdi_adapter.Initialized) { - diva_pri_stop_adapter(a); - } - - /* - Remove ISR Handler - */ - if (a->xdi_adapter.irq_info.registered) { - diva_os_remove_irq(a, a->xdi_adapter.irq_info.irq_nr); - } - a->xdi_adapter.irq_info.registered = 0; - - /* - Step 1: unmap all BARs, if any were mapped - */ - for (bar = 0; bar < 5; bar++) { - if (a->resources.pci.bar[bar] - && a->resources.pci.addr[bar]) { - divasa_unmap_pci_bar(a->resources.pci.addr[bar]); - a->resources.pci.bar[bar] = 0; - a->resources.pci.addr[bar] = NULL; - } - } - - /* - Free OS objects - */ - diva_os_cancel_soft_isr(&a->xdi_adapter.isr_soft_isr); - diva_os_cancel_soft_isr(&a->xdi_adapter.req_soft_isr); - - diva_os_remove_soft_isr(&a->xdi_adapter.req_soft_isr); - a->xdi_adapter.isr_soft_isr.object = NULL; - - diva_os_destroy_spin_lock(&a->xdi_adapter.isr_spin_lock, "rm"); - diva_os_destroy_spin_lock(&a->xdi_adapter.data_spin_lock, "rm"); - - /* - Free memory occupied by XDI adapter - */ - if (a->xdi_adapter.e_tbl) { - diva_os_free(0, a->xdi_adapter.e_tbl); - a->xdi_adapter.e_tbl = NULL; - } - a->xdi_adapter.Channels = 0; - a->xdi_adapter.e_max = 0; - - - /* - Free adapter DMA map - */ - diva_free_dma_map(a->resources.pci.hdev, - (struct _diva_dma_map_entry *) a->xdi_adapter. 
- dma_map); - a->xdi_adapter.dma_map = NULL; - - - /* - Detach this adapter from debug driver - */ - - return (0); -} - -/* -** Activate On Board Boot Loader -*/ -static int diva_pri_reset_adapter(PISDN_ADAPTER IoAdapter) -{ - dword i; - struct mp_load __iomem *boot; - - if (!IoAdapter->Address || !IoAdapter->reset) { - return (-1); - } - if (IoAdapter->Initialized) { - DBG_ERR(("A: A(%d) can't reset PRI adapter - please stop first", - IoAdapter->ANum)) - return (-1); - } - - boot = (struct mp_load __iomem *) DIVA_OS_MEM_ATTACH_ADDRESS(IoAdapter); - WRITE_DWORD(&boot->err, 0); - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot); - - IoAdapter->rstFnc(IoAdapter); - - diva_os_wait(10); - - boot = (struct mp_load __iomem *) DIVA_OS_MEM_ATTACH_ADDRESS(IoAdapter); - i = READ_DWORD(&boot->live); - - diva_os_wait(10); - if (i == READ_DWORD(&boot->live)) { - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot); - DBG_ERR(("A: A(%d) CPU on PRI %ld is not alive!", - IoAdapter->ANum, IoAdapter->serialNo)) - return (-1); - } - if (READ_DWORD(&boot->err)) { - DBG_ERR(("A: A(%d) PRI %ld Board Selftest failed, error=%08lx", - IoAdapter->ANum, IoAdapter->serialNo, - READ_DWORD(&boot->err))) - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot); - return (-1); - } - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot); - - /* - Forget all outstanding entities - */ - IoAdapter->e_count = 0; - if (IoAdapter->e_tbl) { - memset(IoAdapter->e_tbl, 0x00, - IoAdapter->e_max * sizeof(E_INFO)); - } - IoAdapter->head = 0; - IoAdapter->tail = 0; - IoAdapter->assign = 0; - IoAdapter->trapped = 0; - - memset(&IoAdapter->a.IdTable[0], 0x00, - sizeof(IoAdapter->a.IdTable)); - memset(&IoAdapter->a.IdTypeTable[0], 0x00, - sizeof(IoAdapter->a.IdTypeTable)); - memset(&IoAdapter->a.FlowControlIdTable[0], 0x00, - sizeof(IoAdapter->a.FlowControlIdTable)); - memset(&IoAdapter->a.FlowControlSkipTable[0], 0x00, - sizeof(IoAdapter->a.FlowControlSkipTable)); - memset(&IoAdapter->a.misc_flags_table[0], 0x00, - sizeof(IoAdapter->a.misc_flags_table)); - memset(&IoAdapter->a.rx_stream[0], 0x00, - sizeof(IoAdapter->a.rx_stream)); - memset(&IoAdapter->a.tx_stream[0], 0x00, - sizeof(IoAdapter->a.tx_stream)); - memset(&IoAdapter->a.tx_pos[0], 0x00, sizeof(IoAdapter->a.tx_pos)); - memset(&IoAdapter->a.rx_pos[0], 0x00, sizeof(IoAdapter->a.rx_pos)); - - return (0); -} - -static int -diva_pri_write_sdram_block(PISDN_ADAPTER IoAdapter, - dword address, - const byte *data, dword length, dword limit) -{ - byte __iomem *p = DIVA_OS_MEM_ATTACH_ADDRESS(IoAdapter); - byte __iomem *mem = p; - - if (((address + length) >= limit) || !mem) { - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, p); - DBG_ERR(("A: A(%d) write PRI address=0x%08lx", - IoAdapter->ANum, address + length)) - return (-1); - } - mem += address; - - /* memcpy_toio(), maybe? 
-	while (length--) {
-		WRITE_BYTE(mem++, *data++);
-	}
-
-	DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, p);
-	return (0);
-}
-
-static int
-diva_pri_start_adapter(PISDN_ADAPTER IoAdapter,
-		       dword start_address, dword features)
-{
-	dword i;
-	int started = 0;
-	byte __iomem *p;
-	struct mp_load __iomem *boot = (struct mp_load __iomem *) DIVA_OS_MEM_ATTACH_ADDRESS(IoAdapter);
-	ADAPTER *a = &IoAdapter->a;
-
-	if (IoAdapter->Initialized) {
-		DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot);
-		DBG_ERR(("A: A(%d) pri_start_adapter, adapter already running",
-			 IoAdapter->ANum))
-		return (-1);
-	}
-	if (!boot) {
-		DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot);
-		DBG_ERR(("A: PRI %ld can't start, adapter not mapped",
-			 IoAdapter->serialNo))
-		return (-1);
-	}
-
-	sprintf(IoAdapter->Name, "A(%d)", (int) IoAdapter->ANum);
-	DBG_LOG(("A(%d) start PRI at 0x%08lx", IoAdapter->ANum,
-		 start_address))
-
-	WRITE_DWORD(&boot->addr, start_address);
-	WRITE_DWORD(&boot->cmd, 3);
-
-	for (i = 0; i < 300; ++i) {
-		diva_os_wait(10);
-		if ((READ_DWORD(&boot->signature) >> 16) == 0x4447) {
-			DBG_LOG(("A(%d) Protocol startup time %d.%02d seconds",
-				 IoAdapter->ANum, (i / 100), (i % 100)))
-			started = 1;
-			break;
-		}
-	}
-
-	if (!started) {
-		byte __iomem *p = (byte __iomem *)boot;
-		dword TrapId;
-		dword debug;
-		TrapId = READ_DWORD(&p[0x80]);
-		debug = READ_DWORD(&p[0x1c]);
-		DBG_ERR(("A(%d) Adapter start failed 0x%08lx, TrapId=%08lx, debug=%08lx",
-			 IoAdapter->ANum, READ_DWORD(&boot->signature),
-			 TrapId, debug))
-		DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot);
-		if (IoAdapter->trapFnc) {
-			(*(IoAdapter->trapFnc)) (IoAdapter);
-		}
-		IoAdapter->stop(IoAdapter);
-		return (-1);
-	}
-	DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, boot);
-
-	IoAdapter->Initialized = true;
-
-	/*
-	  Check Interrupt
-	*/
-	IoAdapter->IrqCount = 0;
-	p = DIVA_OS_MEM_ATTACH_CFG(IoAdapter);
-	WRITE_DWORD(p, (dword)~0x03E00000);
-	DIVA_OS_MEM_DETACH_CFG(IoAdapter, p);
-	a->ReadyInt = 1;
-	a->ram_out(a, &PR_RAM->ReadyInt, 1);
-
-	for (i = 100; !IoAdapter->IrqCount && (i-- > 0); diva_os_wait(10));
-
-	if (!IoAdapter->IrqCount) {
-		DBG_ERR(("A: A(%d) interrupt test failed",
-			 IoAdapter->ANum))
-		IoAdapter->Initialized = false;
-		IoAdapter->stop(IoAdapter);
-		return (-1);
-	}
-
-	IoAdapter->Properties.Features = (word) features;
-
-	diva_xdi_display_adapter_features(IoAdapter->ANum);
-
-	DBG_LOG(("A(%d) PRI adapter successfully started", IoAdapter->ANum))
-	/*
-	  Register with DIDD
-	*/
-	diva_xdi_didd_register_adapter(IoAdapter->ANum);
-
-	return (0);
-}
-
-static void diva_pri_clear_interrupts(diva_os_xdi_adapter_t *a)
-{
-	PISDN_ADAPTER IoAdapter = &a->xdi_adapter;
-
-	/*
-	  clear any pending interrupt
-	*/
-	IoAdapter->disIrq(IoAdapter);
-
-	IoAdapter->tst_irq(&IoAdapter->a);
-	IoAdapter->clr_irq(&IoAdapter->a);
-	IoAdapter->tst_irq(&IoAdapter->a);
-
-	/*
-	  kill pending dpcs
-	*/
-	diva_os_cancel_soft_isr(&IoAdapter->req_soft_isr);
-	diva_os_cancel_soft_isr(&IoAdapter->isr_soft_isr);
-}
-
-/*
-** Stop Adapter, but do not unmap/unregister - adapter
-** will be restarted later
-*/
-static int diva_pri_stop_adapter(diva_os_xdi_adapter_t *a)
-{
-	PISDN_ADAPTER IoAdapter = &a->xdi_adapter;
-	int i = 100;
-
-	if (!IoAdapter->ram) {
-		return (-1);
-	}
-	if (!IoAdapter->Initialized) {
-		DBG_ERR(("A: A(%d) can't stop PRI adapter - not running",
-			 IoAdapter->ANum))
-		return (-1);	/* nothing to stop */
-	}
-	IoAdapter->Initialized = 0;
-
-	/*
-	  Disconnect Adapter from DIDD
-	*/
-	diva_xdi_didd_remove_adapter(IoAdapter->ANum);
-
-	/*
-	  Stop interrupts
-	*/
-	a->clear_interrupts_proc = diva_pri_clear_interrupts;
-	IoAdapter->a.ReadyInt = 1;
-	IoAdapter->a.ram_inc(&IoAdapter->a, &PR_RAM->ReadyInt);
-	do {
-		diva_os_sleep(10);
-	} while (i-- && a->clear_interrupts_proc);
-
-	if (a->clear_interrupts_proc) {
-		diva_pri_clear_interrupts(a);
-		a->clear_interrupts_proc = NULL;
-		DBG_ERR(("A: A(%d) no final interrupt from PRI adapter",
-			 IoAdapter->ANum))
-	}
-	IoAdapter->a.ReadyInt = 0;
-
-	/*
-	  Stop and reset adapter
-	*/
-	IoAdapter->stop(IoAdapter);
-
-	return (0);
-}
-
-/*
-** Process commands from configuration/download framework and from
-** user mode
-**
-** return 0 on success
-*/
-static int
-diva_pri_cmd_card_proc(struct _diva_os_xdi_adapter *a,
-		       diva_xdi_um_cfg_cmd_t *cmd, int length)
-{
-	int ret = -1;
-
-	if (cmd->adapter != a->controller) {
-		DBG_ERR(("A: pri_cmd, invalid controller=%d != %d",
-			 cmd->adapter, a->controller))
-		return (-1);
-	}
-
-	switch (cmd->command) {
-	case DIVA_XDI_UM_CMD_GET_CARD_ORDINAL:
-		a->xdi_mbox.data_length = sizeof(dword);
-		a->xdi_mbox.data =
-			diva_os_malloc(0, a->xdi_mbox.data_length);
-		if (a->xdi_mbox.data) {
-			*(dword *) a->xdi_mbox.data =
-				(dword) a->CardOrdinal;
-			a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY;
-			ret = 0;
-		}
-		break;
-
-	case DIVA_XDI_UM_CMD_GET_SERIAL_NR:
-		a->xdi_mbox.data_length = sizeof(dword);
-		a->xdi_mbox.data =
-			diva_os_malloc(0, a->xdi_mbox.data_length);
-		if (a->xdi_mbox.data) {
-			*(dword *) a->xdi_mbox.data =
-				(dword) a->xdi_adapter.serialNo;
-			a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY;
-			ret = 0;
-		}
-		break;
-
-	case DIVA_XDI_UM_CMD_GET_PCI_HW_CONFIG:
-		a->xdi_mbox.data_length = sizeof(dword) * 9;
-		a->xdi_mbox.data =
-			diva_os_malloc(0, a->xdi_mbox.data_length);
-		if (a->xdi_mbox.data) {
-			int i;
-			dword *data = (dword *) a->xdi_mbox.data;
-
-			for (i = 0; i < 8; i++) {
-				*data++ = a->resources.pci.bar[i];
-			}
-			*data++ = (dword) a->resources.pci.irq;
-			a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY;
-			ret = 0;
-		}
-		break;
-
-	case DIVA_XDI_UM_CMD_RESET_ADAPTER:
-		ret = diva_pri_reset_adapter(&a->xdi_adapter);
-		break;
-
-	case DIVA_XDI_UM_CMD_WRITE_SDRAM_BLOCK:
-		ret = diva_pri_write_sdram_block(&a->xdi_adapter,
-						 cmd->command_data.write_sdram.offset,
-						 (byte *)&cmd[1],
-						 cmd->command_data.write_sdram.length,
-						 pri_is_rev_2_card(a->CardOrdinal)
-						 ? MP2_MEMORY_SIZE :
-						 MP_MEMORY_SIZE);
-		break;
-
-	case DIVA_XDI_UM_CMD_STOP_ADAPTER:
-		ret = diva_pri_stop_adapter(a);
-		break;
-
-	case DIVA_XDI_UM_CMD_START_ADAPTER:
-		ret = diva_pri_start_adapter(&a->xdi_adapter,
-					     cmd->command_data.start.offset,
-					     cmd->command_data.start.features);
-		break;
-
-	case DIVA_XDI_UM_CMD_SET_PROTOCOL_FEATURES:
-		a->xdi_adapter.features =
-			cmd->command_data.features.features;
-		a->xdi_adapter.a.protocol_capabilities =
-			a->xdi_adapter.features;
-		DBG_TRC(("Set raw protocol features (%08x)",
-			 a->xdi_adapter.features))
-		ret = 0;
-		break;
-
-	case DIVA_XDI_UM_CMD_GET_CARD_STATE:
-		a->xdi_mbox.data_length = sizeof(dword);
-		a->xdi_mbox.data =
-			diva_os_malloc(0, a->xdi_mbox.data_length);
-		if (a->xdi_mbox.data) {
-			dword *data = (dword *) a->xdi_mbox.data;
-			if (!a->xdi_adapter.ram ||
-			    !a->xdi_adapter.reset ||
-			    !a->xdi_adapter.cfg) {
-				*data = 3;
-			} else if (a->xdi_adapter.trapped) {
-				*data = 2;
-			} else if (a->xdi_adapter.Initialized) {
-				*data = 1;
-			} else {
-				*data = 0;
-			}
-			a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY;
-			ret = 0;
-		}
-		break;
-
-	case DIVA_XDI_UM_CMD_READ_XLOG_ENTRY:
-		ret = diva_card_read_xlog(a);
-		break;
-
-	case DIVA_XDI_UM_CMD_READ_SDRAM:
-		if (a->xdi_adapter.Address) {
-			if ((a->xdi_mbox.data_length =
-			     cmd->command_data.read_sdram.length)) {
-				if ((a->xdi_mbox.data_length +
-				     cmd->command_data.read_sdram.offset) <
-				    a->xdi_adapter.MemorySize) {
-					a->xdi_mbox.data =
-						diva_os_malloc(0, a->xdi_mbox.data_length);
-					if (a->xdi_mbox.data) {
-						byte __iomem *p = DIVA_OS_MEM_ATTACH_ADDRESS(&a->xdi_adapter);
-						byte __iomem *src = p;
-						byte *dst = a->xdi_mbox.data;
-						dword len = a->xdi_mbox.data_length;
-
-						src += cmd->command_data.read_sdram.offset;
-
-						while (len--) {
-							*dst++ = READ_BYTE(src++);
-						}
-						a->xdi_mbox.status = DIVA_XDI_MBOX_BUSY;
-						DIVA_OS_MEM_DETACH_ADDRESS(&a->xdi_adapter, p);
-						ret = 0;
-					}
-				}
-			}
-		}
-		break;
-
-	default:
-		DBG_ERR(("A: A(%d) invalid cmd=%d", a->controller,
-			 cmd->command))
-	}
-
-	return (ret);
-}
-
-/*
-** Get Serial Number
-*/
-static int pri_get_serial_number(diva_os_xdi_adapter_t *a)
-{
-	byte data[64];
-	int i;
-	dword len = sizeof(data);
-	volatile byte __iomem *config;
-	volatile byte __iomem *flash;
-	byte c;
-
-/*
- * First set some GT6401x config registers before accessing the BOOT-ROM
- */
-	config = DIVA_OS_MEM_ATTACH_CONFIG(&a->xdi_adapter);
-	c = READ_BYTE(&config[0xc3c]);
-	if (!(c & 0x08)) {
-		WRITE_BYTE(&config[0xc3c], c | 0x08);	/* Base Address enable register */
-	}
-	WRITE_BYTE(&config[LOW_BOOTCS_DREG], 0x00);
-	WRITE_BYTE(&config[HI_BOOTCS_DREG], 0xFF);
-	DIVA_OS_MEM_DETACH_CONFIG(&a->xdi_adapter, config);
-/*
- * Read only the last 64 bytes of manufacturing data
- */
-	memset(data, '\0', len);
-	flash = DIVA_OS_MEM_ATTACH_PROM(&a->xdi_adapter);
-	for (i = 0; i < len; i++) {
-		data[i] = READ_BYTE(&flash[0x8000 - len + i]);
-	}
-	DIVA_OS_MEM_DETACH_PROM(&a->xdi_adapter, flash);
-
-	config = DIVA_OS_MEM_ATTACH_CONFIG(&a->xdi_adapter);
-	WRITE_BYTE(&config[LOW_BOOTCS_DREG], 0xFC);	/* Disable FLASH EPROM access */
-	WRITE_BYTE(&config[HI_BOOTCS_DREG], 0xFF);
-	DIVA_OS_MEM_DETACH_CONFIG(&a->xdi_adapter, config);
-
-	if (memcmp(&data[48], "DIVAserverPR", 12)) {
-#if !defined(DIVA_PRI_NO_PCI_BIOS_WORKAROUND) /* { */
-		word cmd = 0, cmd_org;
-		void *addr;
-		dword addr1, addr3, addr4;
-		byte Bus, Slot;
-		void *hdev;
-		addr4 = a->resources.pci.bar[4];
-		addr3 = a->resources.pci.bar[3];	/* flash */
-		addr1 = a->resources.pci.bar[1];	/* unused */
-
-		DBG_ERR(("A: apply Compaq BIOS workaround"))
-		DBG_LOG(("%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
-			 data[0], data[1], data[2], data[3],
-			 data[4], data[5], data[6], data[7]))
-
-		Bus = a->resources.pci.bus;
-		Slot = a->resources.pci.func;
-		hdev = a->resources.pci.hdev;
-		PCIread(Bus, Slot, 0x04, &cmd_org,
-			sizeof(cmd_org), hdev);
-		PCIwrite(Bus, Slot, 0x04, &cmd, sizeof(cmd), hdev);
-
-		PCIwrite(Bus, Slot, 0x14, &addr4, sizeof(addr4), hdev);
-		PCIwrite(Bus, Slot, 0x20, &addr1, sizeof(addr1), hdev);
-
-		PCIwrite(Bus, Slot, 0x04, &cmd_org, sizeof(cmd_org), hdev);
-
-		addr = a->resources.pci.addr[1];
-		a->resources.pci.addr[1] = a->resources.pci.addr[4];
-		a->resources.pci.addr[4] = addr;
-
-		addr1 = a->resources.pci.bar[1];
-		a->resources.pci.bar[1] = a->resources.pci.bar[4];
-		a->resources.pci.bar[4] = addr1;
-
-		/*
-		  Try to read Flash again
-		*/
-		len = sizeof(data);
-
-		config = DIVA_OS_MEM_ATTACH_CONFIG(&a->xdi_adapter);
-		if (!(config[0xc3c] & 0x08)) {
-			config[0xc3c] |= 0x08;	/* Base Address enable register */
-		}
-		config[LOW_BOOTCS_DREG] = 0x00;
-		config[HI_BOOTCS_DREG] = 0xFF;
-		DIVA_OS_MEM_DETACH_CONFIG(&a->xdi_adapter, config);
-
-		memset(data, '\0', len);
-		flash = DIVA_OS_MEM_ATTACH_PROM(&a->xdi_adapter);
-		for (i = 0; i < len; i++) {
-			data[i] = flash[0x8000 - len + i];
-		}
-		DIVA_OS_MEM_DETACH_PROM(&a->xdi_adapter, flash);
-		config = DIVA_OS_MEM_ATTACH_CONFIG(&a->xdi_adapter);
-		config[LOW_BOOTCS_DREG] = 0xFC;
-		config[HI_BOOTCS_DREG] = 0xFF;
-		DIVA_OS_MEM_DETACH_CONFIG(&a->xdi_adapter, config);
-
-		if (memcmp(&data[48], "DIVAserverPR", 12)) {
-			DBG_ERR(("A: failed to read serial number"))
-			DBG_LOG(("%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
-				 data[0], data[1], data[2], data[3],
-				 data[4], data[5], data[6], data[7]))
-			return (-1);
-		}
-#else /* } { */
-		DBG_ERR(("A: failed to read DIVA signature word"))
-		DBG_LOG(("%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
-			 data[0], data[1], data[2], data[3],
-			 data[4], data[5], data[6], data[7]))
-		DBG_LOG(("%02x:%02x:%02x:%02x", data[47], data[46],
-			 data[45], data[44]))
-#endif /* } */
-	}
-
-	a->xdi_adapter.serialNo =
-		(data[47] << 24) | (data[46] << 16) | (data[45] << 8) |
-		data[44];
-	if (!a->xdi_adapter.serialNo
-	    || (a->xdi_adapter.serialNo == 0xffffffff)) {
-		a->xdi_adapter.serialNo = 0;
-		DBG_ERR(("A: failed to read serial number"))
-		return (-1);
-	}
-
-	DBG_LOG(("Serial No. : %ld", a->xdi_adapter.serialNo))
-	DBG_TRC(("Board Revision : %d.%02d", (int) data[41],
-		 (int) data[40]))
-	DBG_TRC(("PLD revision : %d.%02d", (int) data[33],
-		 (int) data[32]))
-	DBG_TRC(("Boot loader version : %d.%02d", (int) data[37],
-		 (int) data[36]))
-
-	DBG_TRC(("Manufacturing Date : %d/%02d/%02d (yyyy/mm/dd)",
-		 (int) ((data[28] > 90) ? 1900 : 2000) + (int) data[28],
-		 (int) data[29], (int) data[30]))
-
-	return (0);
-}
-
-void diva_os_prepare_pri2_functions(PISDN_ADAPTER IoAdapter)
-{
-}
-
-void diva_os_prepare_pri_functions(PISDN_ADAPTER IoAdapter)
-{
-}
-
-/*
-** Checks presence of DSP on board
-*/
-static int
-dsp_check_presence(volatile byte __iomem *addr, volatile byte __iomem *data, int dsp)
-{
-	word pattern;
-
-	WRITE_WORD(addr, 0x4000);
-	WRITE_WORD(data, DSP_SIGNATURE_PROBE_WORD);
-
-	WRITE_WORD(addr, 0x4000);
-	pattern = READ_WORD(data);
-
-	if (pattern != DSP_SIGNATURE_PROBE_WORD) {
-		DBG_TRC(("W: DSP[%d] %04x(is) != %04x(should)",
-			 dsp, pattern, DSP_SIGNATURE_PROBE_WORD))
-		return (-1);
-	}
-
-	WRITE_WORD(addr, 0x4000);
-	WRITE_WORD(data, ~DSP_SIGNATURE_PROBE_WORD);
-
-	WRITE_WORD(addr, 0x4000);
-	pattern = READ_WORD(data);
-
-	if (pattern != (word)~DSP_SIGNATURE_PROBE_WORD) {
-		DBG_ERR(("A: DSP[%d] %04x(is) != %04x(should)",
-			 dsp, pattern, (word)~DSP_SIGNATURE_PROBE_WORD))
-		return (-2);
-	}
-
-	DBG_TRC(("DSP[%d] present", dsp))
-
-	return (0);
-}
-
-
-/*
-** Check if DSP's are present and operating
-** Information about detected DSP's is returned as bit mask
-** Bit 0 - DSP1
-** ...
-** ...
-** ...
-** Bit 29 - DSP30
-*/
-static dword diva_pri_detect_dsps(diva_os_xdi_adapter_t *a)
-{
-	byte __iomem *base;
-	byte __iomem *p;
-	dword ret = 0;
-	dword row_offset[7] = {
-		0x00000000,
-		0x00000800, /* 1 - ROW 1 */
-		0x00000840, /* 2 - ROW 2 */
-		0x00001000, /* 3 - ROW 3 */
-		0x00001040, /* 4 - ROW 4 */
-		0x00000000  /* 5 - ROW 0 */
-	};
-
-	byte __iomem *dsp_addr_port;
-	byte __iomem *dsp_data_port;
-	byte row_state;
-	int dsp_row = 0, dsp_index, dsp_num;
-
-	if (!a->xdi_adapter.Control || !a->xdi_adapter.reset) {
-		return (0);
-	}
-
-	p = DIVA_OS_MEM_ATTACH_RESET(&a->xdi_adapter);
-	WRITE_BYTE(p, _MP_RISC_RESET | _MP_DSP_RESET);
-	DIVA_OS_MEM_DETACH_RESET(&a->xdi_adapter, p);
-	diva_os_wait(5);
-
-	base = DIVA_OS_MEM_ATTACH_CONTROL(&a->xdi_adapter);
-
-	for (dsp_num = 0; dsp_num < 30; dsp_num++) {
-		dsp_row = dsp_num / 7 + 1;
-		dsp_index = dsp_num % 7;
-
-		dsp_data_port = base;
-		dsp_addr_port = base;
-
-		dsp_data_port += row_offset[dsp_row];
-		dsp_addr_port += row_offset[dsp_row];
-
-		dsp_data_port += (dsp_index * 8);
-		dsp_addr_port += (dsp_index * 8) + 0x80;
-
-		if (!dsp_check_presence
-		    (dsp_addr_port, dsp_data_port, dsp_num + 1)) {
-			ret |= (1 << dsp_num);
-		}
-	}
-	DIVA_OS_MEM_DETACH_CONTROL(&a->xdi_adapter, base);
-
-	p = DIVA_OS_MEM_ATTACH_RESET(&a->xdi_adapter);
-	WRITE_BYTE(p, _MP_RISC_RESET | _MP_LED1 | _MP_LED2);
-	DIVA_OS_MEM_DETACH_RESET(&a->xdi_adapter, p);
-	diva_os_wait(5);
-
-	/*
-	  Verify modules
-	*/
-	for (dsp_row = 0; dsp_row < 4; dsp_row++) {
-		row_state = ((ret >> (dsp_row * 7)) & 0x7F);
-		if (row_state && (row_state != 0x7F)) {
-			for (dsp_index = 0; dsp_index < 7; dsp_index++) {
-				if (!(row_state & (1 << dsp_index))) {
-					DBG_ERR(("A: MODULE[%d]-DSP[%d] failed",
-						 dsp_row + 1,
-						 dsp_index + 1))
-				}
-			}
-		}
-	}
-
-	if (!(ret & 0x10000000)) {
-		DBG_ERR(("A: ON BOARD-DSP[1] failed"))
-	}
-	if (!(ret & 0x20000000)) {
-		DBG_ERR(("A: ON BOARD-DSP[2] failed"))
-	}
-
-	/*
-	  Print module population now
-	*/
-	DBG_LOG(("+-----------------------+"))
-	DBG_LOG(("| DSP MODULE POPULATION |"))
-	DBG_LOG(("+-----------------------+"))
-	DBG_LOG(("| 1 | 2 | 3 | 4 |"))
-	DBG_LOG(("+-----------------------+"))
-	DBG_LOG(("| %s | %s | %s | %s |",
-		 ((ret >> (0 * 7)) & 0x7F) ? "Y" : "N",
-		 ((ret >> (1 * 7)) & 0x7F) ? "Y" : "N",
-		 ((ret >> (2 * 7)) & 0x7F) ? "Y" : "N",
"Y" : "N")) - DBG_LOG(("+-----------------------+")) - - DBG_LOG(("DSP's(present-absent):%08x-%08x", ret, - ~ret & 0x3fffffff)) - - return (ret); -} diff --git a/drivers/isdn/hardware/eicon/os_pri.h b/drivers/isdn/hardware/eicon/os_pri.h deleted file mode 100644 index 0e91855b171a..000000000000 --- a/drivers/isdn/hardware/eicon/os_pri.h +++ /dev/null @@ -1,9 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: os_pri.h,v 1.1.2.2 2001/02/08 12:25:44 armin Exp $ */ - -#ifndef __DIVA_OS_PRI_REV_1_H__ -#define __DIVA_OS_PRI_REV_1_H__ - -int diva_pri_init_card(diva_os_xdi_adapter_t *a); - -#endif diff --git a/drivers/isdn/hardware/eicon/pc.h b/drivers/isdn/hardware/eicon/pc.h deleted file mode 100644 index 329c0c26abfb..000000000000 --- a/drivers/isdn/hardware/eicon/pc.h +++ /dev/null @@ -1,738 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef PC_H_INCLUDED /* { */ -#define PC_H_INCLUDED -/*------------------------------------------------------------------*/ -/* buffer definition */ -/*------------------------------------------------------------------*/ -typedef struct { - word length; /* length of data/parameter field */ - byte P[270]; /* data/parameter field */ -} PBUFFER; -/*------------------------------------------------------------------*/ -/* dual port ram structure */ -/*------------------------------------------------------------------*/ -struct dual -{ - byte Req; /* request register */ - byte ReqId; /* request task/entity identification */ - byte Rc; /* return code register */ - byte RcId; /* return code task/entity identification */ - byte Ind; /* Indication register */ - byte IndId; /* Indication task/entity identification */ - byte IMask; /* Interrupt Mask Flag */ - byte RNR; /* Receiver Not Ready (set by PC) */ - byte XLock; /* XBuffer locked Flag */ - byte Int; /* ISDN-S interrupt */ - byte ReqCh; /* Channel field for layer-3 Requests */ - byte RcCh; /* Channel field for layer-3 Returncodes */ - byte IndCh; /* Channel field for layer-3 Indications */ - byte MInd; /* more data indication field */ - word MLength; /* more data total packet length */ - byte ReadyInt; /* request field for ready interrupt */ - byte SWReg; /* Software register for special purposes */ - byte Reserved[11]; /* reserved space */ - byte InterfaceType; /* interface type 1=16K interface */ - word Signature; /* ISDN-S adapter Signature (GD) */ - PBUFFER XBuffer; /* Transmit Buffer */ - PBUFFER RBuffer; /* Receive Buffer */ -}; -/*------------------------------------------------------------------*/ -/* SWReg Values (0 means no command) */ -/*------------------------------------------------------------------*/ -#define SWREG_DIE_WITH_LEDON 0x01 -#define SWREG_HALT_CPU 0x02 
-/*------------------------------------------------------------------*/
-/* Id Fields Coding */
-/*------------------------------------------------------------------*/
-#define ID_MASK 0xe0 /* Mask for the ID field */
-#define GL_ERR_ID 0x1f /* ID for error reporting on global requests*/
-#define DSIG_ID 0x00 /* ID for D-channel signaling */
-#define NL_ID 0x20 /* ID for network-layer access (B or D) */
-#define BLLC_ID 0x60 /* ID for B-channel link level access */
-#define TASK_ID 0x80 /* ID for dynamic user tasks */
-#define TIMER_ID 0xa0 /* ID for timer task */
-#define TEL_ID 0xc0 /* ID for telephone support */
-#define MAN_ID 0xe0 /* ID for management */
-/*------------------------------------------------------------------*/
-/* ASSIGN and REMOVE requests are the same for all entities */
-/*------------------------------------------------------------------*/
-#define ASSIGN 0x01
-#define UREMOVE 0xfe /* without return code */
-#define REMOVE 0xff
-/*------------------------------------------------------------------*/
-/* Timer Interrupt Task Interface */
-/*------------------------------------------------------------------*/
-#define ASSIGN_TIM 0x01
-#define REMOVE_TIM 0xff
-/*------------------------------------------------------------------*/
-/* dynamic user task interface */
-/*------------------------------------------------------------------*/
-#define ASSIGN_TSK 0x01
-#define REMOVE_TSK 0xff
-#define LOAD 0xf0
-#define RELOCATE 0xf1
-#define START 0xf2
-#define LOAD2 0xf3
-#define RELOCATE2 0xf4
-/*------------------------------------------------------------------*/
-/* dynamic user task messages */
-/*------------------------------------------------------------------*/
-#define TSK_B2 0x0000
-#define TSK_WAKEUP 0x2000
-#define TSK_TIMER 0x4000
-#define TSK_TSK 0x6000
-#define TSK_PC 0xe000
-/*------------------------------------------------------------------*/
-/* LL management primitives */
-/*------------------------------------------------------------------*/
-#define ASSIGN_LL 1 /* assign logical link */
-#define REMOVE_LL 0xff /* remove logical link */
-/*------------------------------------------------------------------*/
-/* LL service primitives */
-/*------------------------------------------------------------------*/
-#define LL_UDATA 1 /* link unit data request/indication */
-#define LL_ESTABLISH 2 /* link establish request/indication */
-#define LL_RELEASE 3 /* link release request/indication */
-#define LL_DATA 4 /* data request/indication */
-#define LL_LOCAL 5 /* switch to local operation (COM only) */
-#define LL_DATA_PEND 5 /* data pending indication (SDLC SHM only) */
-#define LL_REMOTE 6 /* switch to remote operation (COM only) */
-#define LL_TEST 8 /* link test request */
-#define LL_MDATA 9 /* more data request/indication */
-#define LL_BUDATA 10 /* broadcast unit data request/indication */
-#define LL_XID 12 /* XID command request/indication */
-#define LL_XID_R 13 /* XID response request/indication */
-/*------------------------------------------------------------------*/
-/* NL service primitives */
-/*------------------------------------------------------------------*/
-#define N_MDATA 1 /* more data to come REQ/IND */
-#define N_CONNECT 2 /* OSI N-CONNECT REQ/IND */
-#define N_CONNECT_ACK 3 /* OSI N-CONNECT CON/RES */
-#define N_DISC 4 /* OSI N-DISC REQ/IND */
-#define N_DISC_ACK 5 /* OSI N-DISC CON/RES */
-#define N_RESET 6 /* OSI N-RESET REQ/IND */
-#define N_RESET_ACK 7 /* OSI N-RESET CON/RES */
-#define N_DATA
8 /* OSI N-DATA REQ/IND */ -#define N_EDATA 9 /* OSI N-EXPEDITED DATA REQ/IND */ -#define N_UDATA 10 /* OSI D-UNIT-DATA REQ/IND */ -#define N_BDATA 11 /* BROADCAST-DATA REQ/IND */ -#define N_DATA_ACK 12 /* data ack ind for D-bit procedure */ -#define N_EDATA_ACK 13 /* data ack ind for INTERRUPT */ -#define N_XON 15 /* clear RNR state */ -#define N_COMBI_IND N_XON /* combined indication */ -#define N_Q_BIT 0x10 /* Q-bit for req/ind */ -#define N_M_BIT 0x20 /* M-bit for req/ind */ -#define N_D_BIT 0x40 /* D-bit for req/ind */ -/*------------------------------------------------------------------*/ -/* Signaling management primitives */ -/*------------------------------------------------------------------*/ -#define ASSIGN_SIG 1 /* assign signaling task */ -#define UREMOVE_SIG 0xfe /* remove signaling task without return code*/ -#define REMOVE_SIG 0xff /* remove signaling task */ -/*------------------------------------------------------------------*/ -/* Signaling service primitives */ -/*------------------------------------------------------------------*/ -#define CALL_REQ 1 /* call request */ -#define CALL_CON 1 /* call confirmation */ -#define CALL_IND 2 /* incoming call connected */ -#define LISTEN_REQ 2 /* listen request */ -#define HANGUP 3 /* hangup request/indication */ -#define SUSPEND 4 /* call suspend request/confirm */ -#define RESUME 5 /* call resume request/confirm */ -#define SUSPEND_REJ 6 /* suspend rejected indication */ -#define USER_DATA 8 /* user data for user to user signaling */ -#define CONGESTION 9 /* network congestion indication */ -#define INDICATE_REQ 10 /* request to indicate an incoming call */ -#define INDICATE_IND 10 /* indicates that there is an incoming call */ -#define CALL_RES 11 /* accept an incoming call */ -#define CALL_ALERT 12 /* send ALERT for incoming call */ -#define INFO_REQ 13 /* INFO request */ -#define INFO_IND 13 /* INFO indication */ -#define REJECT 14 /* reject an incoming call */ -#define RESOURCES 15 /* reserve B-Channel hardware resources */ -#define HW_CTRL 16 /* B-Channel hardware IOCTL req/ind */ -#define TEL_CTRL 16 /* Telephone control request/indication */ -#define STATUS_REQ 17 /* Request D-State (returned in INFO_IND) */ -#define FAC_REG_REQ 18 /* 1TR6 connection independent fac reg */ -#define FAC_REG_ACK 19 /* 1TR6 fac registration acknowledge */ -#define FAC_REG_REJ 20 /* 1TR6 fac registration reject */ -#define CALL_COMPLETE 21/* send a CALL_PROC for incoming call */ -#define SW_CTRL 22 /* extended software features */ -#define REGISTER_REQ 23 /* Q.931 connection independent reg req */ -#define REGISTER_IND 24 /* Q.931 connection independent reg ind */ -#define FACILITY_REQ 25 /* Q.931 connection independent fac req */ -#define FACILITY_IND 26 /* Q.931 connection independent fac ind */ -#define NCR_INFO_REQ 27 /* INFO_REQ with NULL CR */ -#define GCR_MIM_REQ 28 /* MANAGEMENT_INFO_REQ with global CR */ -#define SIG_CTRL 29 /* Control for Signalling Hardware */ -#define DSP_CTRL 30 /* Control for DSPs */ -#define LAW_REQ 31 /* Law config request for (returns info_i) */ -#define SPID_CTRL 32 /* Request/indication SPID related */ -#define NCR_FACILITY 33 /* Request/indication with NULL/DUMMY CR */ -#define CALL_HOLD 34 /* Request/indication to hold a CALL */ -#define CALL_RETRIEVE 35 /* Request/indication to retrieve a CALL */ -#define CALL_HOLD_ACK 36 /* OK of hold a CALL */ -#define CALL_RETRIEVE_ACK 37 /* OK of retrieve a CALL */ -#define CALL_HOLD_REJ 38 /* Reject of hold a CALL */ -#define CALL_RETRIEVE_REJ 39 /* Reject of 
retrieve a call */ -#define GCR_RESTART 40 /* Send/Receive Restart message */ -#define S_SERVICE 41 /* Send/Receive Supplementary Service */ -#define S_SERVICE_REJ 42 /* Reject Supplementary Service indication */ -#define S_SUPPORTED 43 /* Req/Ind to get Supported Services */ -#define STATUS_ENQ 44 /* Req to send the D-ch request if !state0 */ -#define CALL_GUARD 45 /* Req/Ind to use the FLAGS_CALL_OUTCHECK */ -#define CALL_GUARD_HP 46 /* Call Guard function to reject a call */ -#define CALL_GUARD_IF 47 /* Call Guard function, inform the appl */ -#define SSEXT_REQ 48 /* Supplem.Serv./QSIG specific request */ -#define SSEXT_IND 49 /* Supplem.Serv./QSIG specific indication */ -/* reserved commands for the US protocols */ -#define INT_3PTY_NIND 50 /* US specific indication */ -#define INT_CF_NIND 51 /* US specific indication */ -#define INT_3PTY_DROP 52 /* US specific indication */ -#define INT_MOVE_CONF 53 /* US specific indication */ -#define INT_MOVE_RC 54 /* US specific indication */ -#define INT_MOVE_FLIPPED_CONF 55 /* US specific indication */ -#define INT_X5NI_OK 56 /* internal transfer OK indication */ -#define INT_XDMS_START 57 /* internal transfer OK indication */ -#define INT_XDMS_STOP 58 /* internal transfer finish indication */ -#define INT_XDMS_STOP2 59 /* internal transfer send FA */ -#define INT_CUSTCONF_REJ 60 /* internal conference reject */ -#define INT_CUSTXFER 61 /* internal transfer request */ -#define INT_CUSTX_NIND 62 /* internal transfer ack */ -#define INT_CUSTXREJ_NIND 63 /* internal transfer rej */ -#define INT_X5NI_CF_XFER 64 /* internal transfer OK indication */ -#define VSWITCH_REQ 65 /* communication between protocol and */ -#define VSWITCH_IND 66 /* capifunctions for D-CH-switching */ -#define MWI_POLL 67 /* Message Waiting Status Request fkt */ -#define CALL_PEND_NOTIFY 68 /* notify capi to set new listen */ -#define DO_NOTHING 69 /* dont do somethin if you get this */ -#define INT_CT_REJ 70 /* ECT rejected internal command */ -#define CALL_HOLD_COMPLETE 71 /* In NT Mode indicate hold complete */ -#define CALL_RETRIEVE_COMPLETE 72 /* In NT Mode indicate retrieve complete */ -/*------------------------------------------------------------------*/ -/* management service primitives */ -/*------------------------------------------------------------------*/ -#define MAN_READ 2 -#define MAN_WRITE 3 -#define MAN_EXECUTE 4 -#define MAN_EVENT_ON 5 -#define MAN_EVENT_OFF 6 -#define MAN_LOCK 7 -#define MAN_UNLOCK 8 -#define MAN_INFO_IND 2 -#define MAN_EVENT_IND 3 -#define MAN_TRACE_IND 4 -#define MAN_COMBI_IND 9 -#define MAN_ESC 0x80 -/*------------------------------------------------------------------*/ -/* return code coding */ -/*------------------------------------------------------------------*/ -#define UNKNOWN_COMMAND 0x01 /* unknown command */ -#define WRONG_COMMAND 0x02 /* wrong command */ -#define WRONG_ID 0x03 /* unknown task/entity id */ -#define WRONG_CH 0x04 /* wrong task/entity id */ -#define UNKNOWN_IE 0x05 /* unknown information el. */ -#define WRONG_IE 0x06 /* wrong information el. */ -#define OUT_OF_RESOURCES 0x07 /* ISDN-S card out of res. 
*/ -#define ISDN_GUARD_REJ 0x09 /* ISDN-Guard SuppServ rej */ -#define N_FLOW_CONTROL 0x10 /* Flow-Control, retry */ -#define ASSIGN_RC 0xe0 /* ASSIGN acknowledgement */ -#define ASSIGN_OK 0xef /* ASSIGN OK */ -#define OK_FC 0xfc /* Flow-Control RC */ -#define READY_INT 0xfd /* Ready interrupt */ -#define TIMER_INT 0xfe /* timer interrupt */ -#define OK 0xff /* command accepted */ -/*------------------------------------------------------------------*/ -/* information elements */ -/*------------------------------------------------------------------*/ -#define SHIFT 0x90 /* codeset shift */ -#define MORE 0xa0 /* more data */ -#define SDNCMPL 0xa1 /* sending complete */ -#define CL 0xb0 /* congestion level */ -/* codeset 0 */ -#define SMSG 0x00 /* segmented message */ -#define BC 0x04 /* Bearer Capability */ -#define CAU 0x08 /* cause */ -#define CAD 0x0c /* Connected address */ -#define CAI 0x10 /* call identity */ -#define CHI 0x18 /* channel identification */ -#define LLI 0x19 /* logical link id */ -#define CHA 0x1a /* charge advice */ -#define FTY 0x1c /* Facility */ -#define DT 0x29 /* ETSI date/time */ -#define KEY 0x2c /* keypad information element */ -#define UID 0x2d /* User id information element */ -#define DSP 0x28 /* display */ -#define SIG 0x34 /* signalling hardware control */ -#define OAD 0x6c /* origination address */ -#define OSA 0x6d /* origination sub-address */ -#define CPN 0x70 /* called party number */ -#define DSA 0x71 /* destination sub-address */ -#define RDX 0x73 /* redirecting number extended */ -#define RDN 0x74 /* redirecting number */ -#define RIN 0x76 /* redirection number */ -#define IUP 0x76 /* VN6 rerouter->PCS (codeset 6) */ -#define IPU 0x77 /* VN6 PCS->rerouter (codeset 6) */ -#define RI 0x79 /* restart indicator */ -#define MIE 0x7a /* management info element */ -#define LLC 0x7c /* low layer compatibility */ -#define HLC 0x7d /* high layer compatibility */ -#define UUI 0x7e /* user user information */ -#define ESC 0x7f /* escape extension */ -#define DLC 0x20 /* data link layer configuration */ -#define NLC 0x21 /* network layer configuration */ -#define REDIRECT_IE 0x22 /* redirection request/indication data */ -#define REDIRECT_NET_IE 0x23 /* redirection network override data */ -/* codeset 6 */ -#define SIN 0x01 /* service indicator */ -#define CIF 0x02 /* charging information */ -#define DATE 0x03 /* date */ -#define CPS 0x07 /* called party status */ -/*------------------------------------------------------------------*/ -/* ESC information elements */ -/*------------------------------------------------------------------*/ -#define MSGTYPEIE 0x7a /* Messagetype info element */ -#define CRIE 0x7b /* INFO info element */ -#define CODESET6IE 0xec /* Tunnel for Codeset 6 IEs */ -#define VSWITCHIE 0xed /* VSwitch info element */ -#define SSEXTIE 0xee /* Supplem. 
Service info element */ -#define PROFILEIE 0xef /* Profile info element */ -/*------------------------------------------------------------------*/ -/* TEL_CTRL contents */ -/*------------------------------------------------------------------*/ -#define RING_ON 0x01 -#define RING_OFF 0x02 -#define HANDS_FREE_ON 0x03 -#define HANDS_FREE_OFF 0x04 -#define ON_HOOK 0x80 -#define OFF_HOOK 0x90 -/* operation values used by ETSI supplementary services */ -#define THREE_PTY_BEGIN 0x04 -#define THREE_PTY_END 0x05 -#define ECT_EXECUTE 0x06 -#define ACTIVATION_DIVERSION 0x07 -#define DEACTIVATION_DIVERSION 0x08 -#define CALL_DEFLECTION 0x0D -#define INTERROGATION_DIVERSION 0x0B -#define INTERROGATION_SERV_USR_NR 0x11 -#define ACTIVATION_MWI 0x20 -#define DEACTIVATION_MWI 0x21 -#define MWI_INDICATION 0x22 -#define MWI_RESPONSE 0x23 -#define CONF_BEGIN 0x28 -#define CONF_ADD 0x29 -#define CONF_SPLIT 0x2a -#define CONF_DROP 0x2b -#define CONF_ISOLATE 0x2c -#define CONF_REATTACH 0x2d -#define CONF_PARTYDISC 0x2e -#define CCBS_INFO_RETAIN 0x2f -#define CCBS_ERASECALLLINKAGEID 0x30 -#define CCBS_STOP_ALERTING 0x31 -#define CCBS_REQUEST 0x32 -#define CCBS_DEACTIVATE 0x33 -#define CCBS_INTERROGATE 0x34 -#define CCBS_STATUS 0x35 -#define CCBS_ERASE 0x36 -#define CCBS_B_FREE 0x37 -#define CCNR_INFO_RETAIN 0x38 -#define CCBS_REMOTE_USER_FREE 0x39 -#define CCNR_REQUEST 0x3a -#define CCNR_INTERROGATE 0x3b -#define GET_SUPPORTED_SERVICES 0xff -#define DIVERSION_PROCEDURE_CFU 0x70 -#define DIVERSION_PROCEDURE_CFB 0x71 -#define DIVERSION_PROCEDURE_CFNR 0x72 -#define DIVERSION_DEACTIVATION_CFU 0x80 -#define DIVERSION_DEACTIVATION_CFB 0x81 -#define DIVERSION_DEACTIVATION_CFNR 0x82 -#define DIVERSION_INTERROGATE_NUM 0x11 -#define DIVERSION_INTERROGATE_CFU 0x60 -#define DIVERSION_INTERROGATE_CFB 0x61 -#define DIVERSION_INTERROGATE_CFNR 0x62 -/* Service Masks */ -#define SMASK_HOLD_RETRIEVE 0x00000001 -#define SMASK_TERMINAL_PORTABILITY 0x00000002 -#define SMASK_ECT 0x00000004 -#define SMASK_3PTY 0x00000008 -#define SMASK_CALL_FORWARDING 0x00000010 -#define SMASK_CALL_DEFLECTION 0x00000020 -#define SMASK_MCID 0x00000040 -#define SMASK_CCBS 0x00000080 -#define SMASK_MWI 0x00000100 -#define SMASK_CCNR 0x00000200 -#define SMASK_CONF 0x00000400 -/* ---------------------------------------------- - Types of transfers used to transfer the - information in the 'struct RC->Reserved2[8]' - The information is transferred as 2 dwords - (2 4Byte unsigned values) - First of them is the transfer type. - 2^32-1 possible messages are possible in this way. - The context of the second one had no meaning - ---------------------------------------------- */ -#define DIVA_RC_TYPE_NONE 0x00000000 -#define DIVA_RC_TYPE_REMOVE_COMPLETE 0x00000008 -#define DIVA_RC_TYPE_STREAM_PTR 0x00000009 -#define DIVA_RC_TYPE_CMA_PTR 0x0000000a -#define DIVA_RC_TYPE_OK_FC 0x0000000b -#define DIVA_RC_TYPE_RX_DMA 0x0000000c -/* ------------------------------------------------------ - IO Control codes for IN BAND SIGNALING - ------------------------------------------------------ */ -#define CTRL_L1_SET_SIG_ID 5 -#define CTRL_L1_SET_DAD 6 -#define CTRL_L1_RESOURCES 7 -/* ------------------------------------------------------ */ -/* ------------------------------------------------------ - Layer 2 types - ------------------------------------------------------ */ -#define X75T 1 /* x.75 for ttx */ -#define TRF 2 /* transparent with hdlc framing */ -#define TRF_IN 3 /* transparent with hdlc fr. inc. 
*/ -#define SDLC 4 /* sdlc, sna layer-2 */ -#define X75 5 /* x.75 for btx */ -#define LAPD 6 /* lapd (Q.921) */ -#define X25_L2 7 /* x.25 layer-2 */ -#define V120_L2 8 /* V.120 layer-2 protocol */ -#define V42_IN 9 /* V.42 layer-2 protocol, incoming */ -#define V42 10 /* V.42 layer-2 protocol */ -#define MDM_ATP 11 /* AT Parser built in the L2 */ -#define X75_V42BIS 12 /* x.75 with V.42bis */ -#define RTPL2_IN 13 /* RTP layer-2 protocol, incoming */ -#define RTPL2 14 /* RTP layer-2 protocol */ -#define V120_V42BIS 15 /* V.120 asynchronous mode supporting V.42bis compression */ -#define LISTENER 27 /* Layer 2 to listen line */ -#define MTP2 28 /* MTP2 Layer 2 */ -#define PIAFS_CRC 29 /* PIAFS Layer 2 with CRC calculation at L2 */ -/* ------------------------------------------------------ - PIAFS DLC DEFINITIONS - ------------------------------------------------------ */ -#define PIAFS_64K 0x01 -#define PIAFS_VARIABLE_SPEED 0x02 -#define PIAFS_CHINESE_SPEED 0x04 -#define PIAFS_UDATA_ABILITY_ID 0x80 -#define PIAFS_UDATA_ABILITY_DCDON 0x01 -#define PIAFS_UDATA_ABILITY_DDI 0x80 -/* - DLC of PIAFS : - Byte | 8 7 6 5 4 3 2 1 - -----+-------------------------------------------------------- - 0 | 0 0 1 0 0 0 0 0 Data Link Configuration - 1 | X X X X X X X X Length of IE (at least 15 Bytes) - 2 | 0 0 0 0 0 0 0 0 max. information field, LOW byte (not used, fix 73 Bytes) - 3 | 0 0 0 0 0 0 0 0 max. information field, HIGH byte (not used, fix 73 Bytes) - 4 | 0 0 0 0 0 0 0 0 address A (not used) - 5 | 0 0 0 0 0 0 0 0 address B (not used) - 6 | 0 0 0 0 0 0 0 0 Mode (not used, fix 128) - 7 | 0 0 0 0 0 0 0 0 Window Size (not used, fix 127) - 8 | X X X X X X X X XID Length, Low Byte (at least 7 Bytes) - 9 | X X X X X X X X XID Length, High Byte - 10 | 0 0 0 0 0 C V S PIAFS Protocol Speed configuration -> Note(1) - | S = 0 -> Protocol Speed is 32K - | S = 1 -> Protocol Speed is 64K - | V = 0 -> Protocol Speed is fixed - | V = 1 -> Protocol Speed is variable - | C = 0 -> speed setting according to standard - | C = 1 -> speed setting for chinese implementation - 11 | 0 0 0 0 0 0 R T P0 - V42bis Compression enable/disable, Low Byte - | T = 0 -> Transmit Direction enable - | T = 1 -> Transmit Direction disable - | R = 0 -> Receive Direction enable - | R = 1 -> Receive Direction disable - 13 | 0 0 0 0 0 0 0 0 P0 - V42bis Compression enable/disable, High Byte - 14 | X X X X X X X X P1 - V42bis Dictionary Size, Low Byte - 15 | X X X X X X X X P1 - V42bis Dictionary Size, High Byte - 16 | X X X X X X X X P2 - V42bis String Length, Low Byte - 17 | X X X X X X X X P2 - V42bis String Length, High Byte - 18 | X X X X X X X X PIAFS extension length - 19 | 1 0 0 0 0 0 0 0 PIAFS extension Id (0x80) - UDATA abilities - 20 | U 0 0 0 0 0 0 D UDATA abilities -> Note (2) - | up to now the following Bits are defined: - | D - signal DCD ON - | U - use extensive UDATA control communication - | for DDI test application - + Note (1): ----------+------+-----------------------------------------+ - | PIAFS Protocol | Bit | | - | Speed configuration | S | Bit 1 - Protocol Speed | - | | | 0 - 32K | - | | | 1 - 64K (default) | - | | V | Bit 2 - Variable Protocol Speed | - | | | 0 - Speed is fix | - | | | 1 - Speed is variable (default) | - | | | OVERWRITES 32k Bit 1 | - | | C | Bit 3 0 - Speed Settings according to | - | | | PIAFS specification | - | | | 1 - Speed setting for chinese | - | | | PIAFS implementation | - | | | Explanation for chinese speed settings: | - | | | if Bit 3 is set the following | - | | | rules apply: | - | | | 
Bit1=0 Bit2=0: 32k fix | - | | | Bit1=1 Bit2=0: 64k fix | - | | | Bit1=0 Bit2=1: PIAFS is trying | - | | | to negotiate 32k is that is | - | | | not possible it tries to | - | | | negotiate 64k | - | | | Bit1=1 Bit2=1: PIAFS is trying | - | | | to negotiate 64k is that is | - | | | not possible it tries to | - | | | negotiate 32k | - + Note (2): ----------+------+-----------------------------------------+ - | PIAFS | Bit | this byte defines the usage of UDATA | - | Implementation | | control communication | - | UDATA usage | D | Bit 1 - DCD-ON signalling | - | | | 0 - no DCD-ON is signalled | - | | | (default) | - | | | 1 - DCD-ON will be signalled | - | | U | Bit 8 - DDI test application UDATA | - | | | control communication | - | | | 0 - no UDATA control | - | | | communication (default) | - | | | sets as well the DCD-ON | - | | | signalling | - | | | 1 - UDATA control communication | - | | | ATTENTION: Do not use these | - | | | setting if you | - | | | are not really | - | | | that you need it | - | | | and you know | - | | | exactly what you | - | | | are doing. | - | | | You can easily | - | | | disable any | - | | | data transfer. | - +---------------------+------+-----------------------------------------+ -*/ -/* ------------------------------------------------------ - LISTENER DLC DEFINITIONS - ------------------------------------------------------ */ -#define LISTENER_FEATURE_MASK_CUMMULATIVE 0x0001 -/* ------------------------------------------------------ - LISTENER META-FRAME CODE/PRIMITIVE DEFINITIONS - ------------------------------------------------------ */ -#define META_CODE_LL_UDATA_RX 0x01 -#define META_CODE_LL_UDATA_TX 0x02 -#define META_CODE_LL_DATA_RX 0x03 -#define META_CODE_LL_DATA_TX 0x04 -#define META_CODE_LL_MDATA_RX 0x05 -#define META_CODE_LL_MDATA_TX 0x06 -#define META_CODE_EMPTY 0x10 -#define META_CODE_LOST_FRAMES 0x11 -#define META_FLAG_TRUNCATED 0x0001 -/*------------------------------------------------------------------*/ -/* CAPI-like profile to indicate features on LAW_REQ */ -/*------------------------------------------------------------------*/ -#define GL_INTERNAL_CONTROLLER_SUPPORTED 0x00000001L -#define GL_EXTERNAL_EQUIPMENT_SUPPORTED 0x00000002L -#define GL_HANDSET_SUPPORTED 0x00000004L -#define GL_DTMF_SUPPORTED 0x00000008L -#define GL_SUPPLEMENTARY_SERVICES_SUPPORTED 0x00000010L -#define GL_CHANNEL_ALLOCATION_SUPPORTED 0x00000020L -#define GL_BCHANNEL_OPERATION_SUPPORTED 0x00000040L -#define GL_LINE_INTERCONNECT_SUPPORTED 0x00000080L -#define B1_HDLC_SUPPORTED 0x00000001L -#define B1_TRANSPARENT_SUPPORTED 0x00000002L -#define B1_V110_ASYNC_SUPPORTED 0x00000004L -#define B1_V110_SYNC_SUPPORTED 0x00000008L -#define B1_T30_SUPPORTED 0x00000010L -#define B1_HDLC_INVERTED_SUPPORTED 0x00000020L -#define B1_TRANSPARENT_R_SUPPORTED 0x00000040L -#define B1_MODEM_ALL_NEGOTIATE_SUPPORTED 0x00000080L -#define B1_MODEM_ASYNC_SUPPORTED 0x00000100L -#define B1_MODEM_SYNC_HDLC_SUPPORTED 0x00000200L -#define B2_X75_SUPPORTED 0x00000001L -#define B2_TRANSPARENT_SUPPORTED 0x00000002L -#define B2_SDLC_SUPPORTED 0x00000004L -#define B2_LAPD_SUPPORTED 0x00000008L -#define B2_T30_SUPPORTED 0x00000010L -#define B2_PPP_SUPPORTED 0x00000020L -#define B2_TRANSPARENT_NO_CRC_SUPPORTED 0x00000040L -#define B2_MODEM_EC_COMPRESSION_SUPPORTED 0x00000080L -#define B2_X75_V42BIS_SUPPORTED 0x00000100L -#define B2_V120_ASYNC_SUPPORTED 0x00000200L -#define B2_V120_ASYNC_V42BIS_SUPPORTED 0x00000400L -#define B2_V120_BIT_TRANSPARENT_SUPPORTED 0x00000800L -#define 
B2_LAPD_FREE_SAPI_SEL_SUPPORTED 0x00001000L -#define B3_TRANSPARENT_SUPPORTED 0x00000001L -#define B3_T90NL_SUPPORTED 0x00000002L -#define B3_ISO8208_SUPPORTED 0x00000004L -#define B3_X25_DCE_SUPPORTED 0x00000008L -#define B3_T30_SUPPORTED 0x00000010L -#define B3_T30_WITH_EXTENSIONS_SUPPORTED 0x00000020L -#define B3_RESERVED_SUPPORTED 0x00000040L -#define B3_MODEM_SUPPORTED 0x00000080L -#define MANUFACTURER_FEATURE_SLAVE_CODEC 0x00000001L -#define MANUFACTURER_FEATURE_FAX_MORE_DOCUMENTS 0x00000002L -#define MANUFACTURER_FEATURE_HARDDTMF 0x00000004L -#define MANUFACTURER_FEATURE_SOFTDTMF_SEND 0x00000008L -#define MANUFACTURER_FEATURE_DTMF_PARAMETERS 0x00000010L -#define MANUFACTURER_FEATURE_SOFTDTMF_RECEIVE 0x00000020L -#define MANUFACTURER_FEATURE_FAX_SUB_SEP_PWD 0x00000040L -#define MANUFACTURER_FEATURE_V18 0x00000080L -#define MANUFACTURER_FEATURE_MIXER_CH_CH 0x00000100L -#define MANUFACTURER_FEATURE_MIXER_CH_PC 0x00000200L -#define MANUFACTURER_FEATURE_MIXER_PC_CH 0x00000400L -#define MANUFACTURER_FEATURE_MIXER_PC_PC 0x00000800L -#define MANUFACTURER_FEATURE_ECHO_CANCELLER 0x00001000L -#define MANUFACTURER_FEATURE_RTP 0x00002000L -#define MANUFACTURER_FEATURE_T38 0x00004000L -#define MANUFACTURER_FEATURE_TRANSP_DELIVERY_CONF 0x00008000L -#define MANUFACTURER_FEATURE_XONOFF_FLOW_CONTROL 0x00010000L -#define MANUFACTURER_FEATURE_OOB_CHANNEL 0x00020000L -#define MANUFACTURER_FEATURE_IN_BAND_CHANNEL 0x00040000L -#define MANUFACTURER_FEATURE_IN_BAND_FEATURE 0x00080000L -#define MANUFACTURER_FEATURE_PIAFS 0x00100000L -#define MANUFACTURER_FEATURE_DTMF_TONE 0x00200000L -#define MANUFACTURER_FEATURE_FAX_PAPER_FORMATS 0x00400000L -#define MANUFACTURER_FEATURE_OK_FC_LABEL 0x00800000L -#define MANUFACTURER_FEATURE_VOWN 0x01000000L -#define MANUFACTURER_FEATURE_XCONNECT 0x02000000L -#define MANUFACTURER_FEATURE_DMACONNECT 0x04000000L -#define MANUFACTURER_FEATURE_AUDIO_TAP 0x08000000L -#define MANUFACTURER_FEATURE_FAX_NONSTANDARD 0x10000000L -#define MANUFACTURER_FEATURE_SS7 0x20000000L -#define MANUFACTURER_FEATURE_MADAPTER 0x40000000L -#define MANUFACTURER_FEATURE_MEASURE 0x80000000L -#define MANUFACTURER_FEATURE2_LISTENING 0x00000001L -#define MANUFACTURER_FEATURE2_SS_DIFFCONTPOSSIBLE 0x00000002L -#define MANUFACTURER_FEATURE2_GENERIC_TONE 0x00000004L -#define MANUFACTURER_FEATURE2_COLOR_FAX 0x00000008L -#define MANUFACTURER_FEATURE2_SS_ECT_DIFFCONTPOSSIBLE 0x00000010L -#define RTP_PRIM_PAYLOAD_PCMU_8000 0 -#define RTP_PRIM_PAYLOAD_1016_8000 1 -#define RTP_PRIM_PAYLOAD_G726_32_8000 2 -#define RTP_PRIM_PAYLOAD_GSM_8000 3 -#define RTP_PRIM_PAYLOAD_G723_8000 4 -#define RTP_PRIM_PAYLOAD_DVI4_8000 5 -#define RTP_PRIM_PAYLOAD_DVI4_16000 6 -#define RTP_PRIM_PAYLOAD_LPC_8000 7 -#define RTP_PRIM_PAYLOAD_PCMA_8000 8 -#define RTP_PRIM_PAYLOAD_G722_16000 9 -#define RTP_PRIM_PAYLOAD_QCELP_8000 12 -#define RTP_PRIM_PAYLOAD_G728_8000 14 -#define RTP_PRIM_PAYLOAD_G729_8000 18 -#define RTP_PRIM_PAYLOAD_GSM_HR_8000 30 -#define RTP_PRIM_PAYLOAD_GSM_EFR_8000 31 -#define RTP_ADD_PAYLOAD_BASE 32 -#define RTP_ADD_PAYLOAD_RED 32 -#define RTP_ADD_PAYLOAD_CN_8000 33 -#define RTP_ADD_PAYLOAD_DTMF 34 -#define RTP_PRIM_PAYLOAD_PCMU_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_PCMU_8000) -#define RTP_PRIM_PAYLOAD_1016_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_1016_8000) -#define RTP_PRIM_PAYLOAD_G726_32_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_G726_32_8000) -#define RTP_PRIM_PAYLOAD_GSM_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_GSM_8000) -#define RTP_PRIM_PAYLOAD_G723_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_G723_8000) -#define 
RTP_PRIM_PAYLOAD_DVI4_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_DVI4_8000) -#define RTP_PRIM_PAYLOAD_DVI4_16000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_DVI4_16000) -#define RTP_PRIM_PAYLOAD_LPC_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_LPC_8000) -#define RTP_PRIM_PAYLOAD_PCMA_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_PCMA_8000) -#define RTP_PRIM_PAYLOAD_G722_16000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_G722_16000) -#define RTP_PRIM_PAYLOAD_QCELP_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_QCELP_8000) -#define RTP_PRIM_PAYLOAD_G728_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_G728_8000) -#define RTP_PRIM_PAYLOAD_G729_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_G729_8000) -#define RTP_PRIM_PAYLOAD_GSM_HR_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_GSM_HR_8000) -#define RTP_PRIM_PAYLOAD_GSM_EFR_8000_SUPPORTED (1L << RTP_PRIM_PAYLOAD_GSM_EFR_8000) -#define RTP_ADD_PAYLOAD_RED_SUPPORTED (1L << (RTP_ADD_PAYLOAD_RED - RTP_ADD_PAYLOAD_BASE)) -#define RTP_ADD_PAYLOAD_CN_8000_SUPPORTED (1L << (RTP_ADD_PAYLOAD_CN_8000 - RTP_ADD_PAYLOAD_BASE)) -#define RTP_ADD_PAYLOAD_DTMF_SUPPORTED (1L << (RTP_ADD_PAYLOAD_DTMF - RTP_ADD_PAYLOAD_BASE)) -/* virtual switching definitions */ -#define VSJOIN 1 -#define VSTRANSPORT 2 -#define VSGETPARAMS 3 -#define VSCAD 1 -#define VSRXCPNAME 2 -#define VSCALLSTAT 3 -#define VSINVOKEID 4 -#define VSCLMRKS 5 -#define VSTBCTIDENT 6 -#define VSETSILINKID 7 -#define VSSAMECONTROLLER 8 -/* Errorcodes for VSETSILINKID begin */ -#define VSETSILINKIDRRWC 1 -#define VSETSILINKIDREJECT 2 -#define VSETSILINKIDTIMEOUT 3 -#define VSETSILINKIDFAILCOUNT 4 -#define VSETSILINKIDERROR 5 -/* Errorcodes for VSETSILINKID end */ -/* -----------------------------------------------------------** -** The PROTOCOL_FEATURE_STRING in feature.h (included ** -** in prstart.sx and astart.sx) defines capabilities and ** -** features of the actual protocol code. It's used as a bit ** -** mask. 
** -** The following Bits are defined: ** -** -----------------------------------------------------------*/ -#define PROTCAP_TELINDUS 0x0001 /* Telindus Variant of protocol code */ -#define PROTCAP_MAN_IF 0x0002 /* Management interface implemented */ -#define PROTCAP_V_42 0x0004 /* V42 implemented */ -#define PROTCAP_V90D 0x0008 /* V.90D (implies up to 384k DSP code) */ -#define PROTCAP_EXTD_FAX 0x0010 /* Extended FAX (ECM, 2D, T6, Polling) */ -#define PROTCAP_EXTD_RXFC 0x0020 /* RxFC (Extd Flow Control), OOB Chnl */ -#define PROTCAP_VOIP 0x0040 /* VoIP (implies up to 512k DSP code) */ -#define PROTCAP_CMA_ALLPR 0x0080 /* CMA support for all NL primitives */ -#define PROTCAP_FREE8 0x0100 /* not used */ -#define PROTCAP_FREE9 0x0200 /* not used */ -#define PROTCAP_FREE10 0x0400 /* not used */ -#define PROTCAP_FREE11 0x0800 /* not used */ -#define PROTCAP_FREE12 0x1000 /* not used */ -#define PROTCAP_FREE13 0x2000 /* not used */ -#define PROTCAP_FREE14 0x4000 /* not used */ -#define PROTCAP_EXTENSION 0x8000 /* used for future extensions */ -/* -----------------------------------------------------------* */ -/* Onhook data transmission ETS30065901 */ -/* Message Type */ -/*#define RESERVED4 0x4*/ -#define CALL_SETUP 0x80 -#define MESSAGE_WAITING_INDICATOR 0x82 -/*#define RESERVED84 0x84*/ -/*#define RESERVED85 0x85*/ -#define ADVICE_OF_CHARGE 0x86 -/*1111 0001 - to - 1111 1111 - F1H - Reserved for network operator use - to - FFH*/ -/* Parameter Types */ -#define DATE_AND_TIME 1 -#define CLI_PARAMETER_TYPE 2 -#define CALLED_DIRECTORY_NUMBER_PARAMETER_TYPE 3 -#define REASON_FOR_ABSENCE_OF_CLI_PARAMETER_TYPE 4 -#define NAME_PARAMETER_TYPE 7 -#define REASON_FOR_ABSENCE_OF_CALLING_PARTY_NAME_PARAMETER_TYPE 8 -#define VISUAL_INDICATOR_PARAMETER_TYPE 0xb -#define COMPLEMENTARY_CLI_PARAMETER_TYPE 0x10 -#define CALL_TYPE_PARAMETER_TYPE 0x11 -#define FIRST_CALLED_LINE_DIRECTORY_NUMBER_PARAMETER_TYPE 0x12 -#define NETWORK_MESSAGE_SYSTEM_STATUS_PARAMETER_TYPE 0x13 -#define FORWARDED_CALL_TYPE_PARAMETER_TYPE 0x15 -#define TYPE_OF_CALLING_USER_PARAMETER_TYPE 0x16 -#define REDIRECTING_NUMBER_PARAMETER_TYPE 0x1a -#define EXTENSION_FOR_NETWORK_OPERATOR_USE_PARAMETER_TYPE 0xe0 -/* -----------------------------------------------------------* */ -#else -#endif /* PC_H_INCLUDED } */ diff --git a/drivers/isdn/hardware/eicon/pc_init.h b/drivers/isdn/hardware/eicon/pc_init.h deleted file mode 100644 index d1d00866e8d4..000000000000 --- a/drivers/isdn/hardware/eicon/pc_init.h +++ /dev/null @@ -1,267 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#ifndef PC_INIT_H_ -#define PC_INIT_H_ -/*------------------------------------------------------------------*/ -/* - Initialisation parameters for the card - 0x0008 TEI - 0x0009 NT2 flag - 0x000a Default DID length - 0x000b Disable watchdog flag - 0x000c Permanent connection flag - 0x000d Bit 3-8: L1 Hunt Group/Tristate - 0x000d Bit 1: QSig small CR length if set to 1 - 0x000d Bit 2: QSig small CHI length if set to 1 - 0x000e Bit 1-3: Stable L2, 0=OnDemand,1=NoDisc,2=permanent - 0x000e Bit 4: NT mode - 0x000e Bit 5: QSig Channel ID format - 0x000e Bit 6: QSig Call Forwarding Allowed Flag - 0x000e Bit 7: Disable AutoSPID Flag - 0x000f No order check flag - 0x0010 Force companding type:0=default,1=a-law,2=u-law - 0x0012 Low channel flag - 0x0013 Protocol version - 0x0014 CRC4 option:0=default,1=double_frm,2=multi_frm,3=auto - 0x0015 Bit 0: NoHscx30, Bit 1: Loopback flag, Bit 2: ForceHscx30 - 0x0016 DSP info - 0x0017-0x0019 Serial number - 0x001a Card type - 0x0020 OAD 0 - 0x0040 OSA 0 - 0x0060 SPID 0 (if not T.1) - 0x0060 if T.1: Robbed Bit Configuration - 0x0060 length (8) - 0x0061 RBS Answer Delay - 0x0062 RBS Config Bit 3, 4: - 0 0 -> Wink Start - 1 0 -> Loop Start - 0 1 -> Ground Start - 1 1 -> reserved - Bit 5, 6: - 0 0 -> Pulse Dial -> Rotary - 1 0 -> DTMF - 0 1 -> MF - 1 1 -> reserved - 0x0063 RBS RX Digit Timeout - 0x0064 RBS Bearer Capability - 0x0065-0x0069 RBS Debug Mask - 0x0080 OAD 1 - 0x00a0 OSA 1 - 0x00c0 SPID 1 - 0x00e0 Additional configuration -*/ -#define PCINIT_END_OF_LIST 0x00 -#define PCINIT_MODEM_GUARD_TONE 0x01 -#define PCINIT_MODEM_MIN_SPEED 0x02 -#define PCINIT_MODEM_MAX_SPEED 0x03 -#define PCINIT_MODEM_PROTOCOL_OPTIONS 0x04 -#define PCINIT_FAX_OPTIONS 0x05 -#define PCINIT_FAX_MAX_SPEED 0x06 -#define PCINIT_MODEM_OPTIONS 0x07 -#define PCINIT_MODEM_NEGOTIATION_MODE 0x08 -#define PCINIT_MODEM_MODULATIONS_MASK 0x09 -#define PCINIT_MODEM_TRANSMIT_LEVEL 0x0a -#define PCINIT_FAX_DISABLED_RESOLUTIONS 0x0b -#define PCINIT_FAX_MAX_RECORDING_WIDTH 0x0c -#define PCINIT_FAX_MAX_RECORDING_LENGTH 0x0d -#define PCINIT_FAX_MIN_SCANLINE_TIME 0x0e -#define PCINIT_US_EKTS_CACH_HANDLES 0x0f -#define PCINIT_US_EKTS_BEGIN_CONF 0x10 -#define PCINIT_US_EKTS_DROP_CONF 0x11 -#define PCINIT_US_EKTS_CALL_TRANSFER 0x12 -#define PCINIT_RINGERTONE_OPTION 0x13 -#define PCINIT_CARD_ADDRESS 0x14 -#define PCINIT_FPGA_FEATURES 0x15 -#define PCINIT_US_EKTS_MWI 0x16 -#define PCINIT_MODEM_SPEAKER_CONTROL 0x17 -#define PCINIT_MODEM_SPEAKER_VOLUME 0x18 -#define PCINIT_MODEM_CARRIER_WAIT_TIME 0x19 -#define PCINIT_MODEM_CARRIER_LOSS_TIME 0x1a -#define PCINIT_UNCHAN_B_MASK 0x1b -#define PCINIT_PART68_LIMITER 0x1c -#define PCINIT_XDI_FEATURES 0x1d -#define PCINIT_QSIG_DIALECT 0x1e -#define PCINIT_DISABLE_AUTOSPID_FLAG 0x1f -#define PCINIT_FORCE_VOICE_MAIL_ALERT 0x20 -#define PCINIT_PIAFS_TURNAROUND_FRAMES 0x21 -#define PCINIT_L2_COUNT 0x22 -#define PCINIT_QSIG_FEATURES 0x23 -#define PCINIT_NO_SIGNALLING 0x24 -#define PCINIT_CARD_SN 0x25 -#define PCINIT_CARD_PORT 0x26 -#define PCINIT_ALERTTO 0x27 -#define PCINIT_MODEM_EYE_SETUP 0x28 -#define PCINIT_FAX_V34_OPTIONS 0x29 -/*------------------------------------------------------------------*/ -#define PCINIT_MODEM_GUARD_TONE_NONE 0x00 -#define PCINIT_MODEM_GUARD_TONE_550HZ 0x01 -#define PCINIT_MODEM_GUARD_TONE_1800HZ 0x02 -#define PCINIT_MODEM_GUARD_TONE_CHOICES 0x03 -#define PCINIT_MODEMPROT_DISABLE_V42_V42BIS 0x0001 -#define PCINIT_MODEMPROT_DISABLE_MNP_MNP5 0x0002 -#define PCINIT_MODEMPROT_REQUIRE_PROTOCOL 0x0004 -#define 
PCINIT_MODEMPROT_DISABLE_V42_DETECT 0x0008 -#define PCINIT_MODEMPROT_DISABLE_COMPRESSION 0x0010 -#define PCINIT_MODEMPROT_REQUIRE_PROTOCOL_V34UP 0x0020 -#define PCINIT_MODEMPROT_NO_PROTOCOL_IF_1200 0x0100 -#define PCINIT_MODEMPROT_BUFFER_IN_V42_DETECT 0x0200 -#define PCINIT_MODEMPROT_DISABLE_V42_SREJ 0x0400 -#define PCINIT_MODEMPROT_DISABLE_MNP3 0x0800 -#define PCINIT_MODEMPROT_DISABLE_MNP4 0x1000 -#define PCINIT_MODEMPROT_DISABLE_MNP10 0x2000 -#define PCINIT_MODEMPROT_NO_PROTOCOL_IF_V22BIS 0x4000 -#define PCINIT_MODEMPROT_NO_PROTOCOL_IF_V32BIS 0x8000 -#define PCINIT_MODEMCONFIG_LEASED_LINE_MODE 0x00000001L -#define PCINIT_MODEMCONFIG_4_WIRE_OPERATION 0x00000002L -#define PCINIT_MODEMCONFIG_DISABLE_BUSY_DETECT 0x00000004L -#define PCINIT_MODEMCONFIG_DISABLE_CALLING_TONE 0x00000008L -#define PCINIT_MODEMCONFIG_DISABLE_ANSWER_TONE 0x00000010L -#define PCINIT_MODEMCONFIG_ENABLE_DIAL_TONE_DET 0x00000020L -#define PCINIT_MODEMCONFIG_USE_POTS_INTERFACE 0x00000040L -#define PCINIT_MODEMCONFIG_FORCE_RAY_TAYLOR_FAX 0x00000080L -#define PCINIT_MODEMCONFIG_DISABLE_RETRAIN 0x00000100L -#define PCINIT_MODEMCONFIG_DISABLE_STEPDOWN 0x00000200L -#define PCINIT_MODEMCONFIG_DISABLE_SPLIT_SPEED 0x00000400L -#define PCINIT_MODEMCONFIG_DISABLE_TRELLIS 0x00000800L -#define PCINIT_MODEMCONFIG_ALLOW_RDL_TEST_LOOP 0x00001000L -#define PCINIT_MODEMCONFIG_DISABLE_STEPUP 0x00002000L -#define PCINIT_MODEMCONFIG_DISABLE_FLUSH_TIMER 0x00004000L -#define PCINIT_MODEMCONFIG_REVERSE_DIRECTION 0x00008000L -#define PCINIT_MODEMCONFIG_DISABLE_TX_REDUCTION 0x00010000L -#define PCINIT_MODEMCONFIG_DISABLE_PRECODING 0x00020000L -#define PCINIT_MODEMCONFIG_DISABLE_PREEMPHASIS 0x00040000L -#define PCINIT_MODEMCONFIG_DISABLE_SHAPING 0x00080000L -#define PCINIT_MODEMCONFIG_DISABLE_NONLINEAR_EN 0x00100000L -#define PCINIT_MODEMCONFIG_DISABLE_MANUALREDUCT 0x00200000L -#define PCINIT_MODEMCONFIG_DISABLE_16_POINT_TRN 0x00400000L -#define PCINIT_MODEMCONFIG_DISABLE_2400_SYMBOLS 0x01000000L -#define PCINIT_MODEMCONFIG_DISABLE_2743_SYMBOLS 0x02000000L -#define PCINIT_MODEMCONFIG_DISABLE_2800_SYMBOLS 0x04000000L -#define PCINIT_MODEMCONFIG_DISABLE_3000_SYMBOLS 0x08000000L -#define PCINIT_MODEMCONFIG_DISABLE_3200_SYMBOLS 0x10000000L -#define PCINIT_MODEMCONFIG_DISABLE_3429_SYMBOLS 0x20000000L -#define PCINIT_MODEM_NEGOTIATE_HIGHEST 0x00 -#define PCINIT_MODEM_NEGOTIATE_DISABLED 0x01 -#define PCINIT_MODEM_NEGOTIATE_IN_CLASS 0x02 -#define PCINIT_MODEM_NEGOTIATE_V100 0x03 -#define PCINIT_MODEM_NEGOTIATE_V8 0x04 -#define PCINIT_MODEM_NEGOTIATE_V8BIS 0x05 -#define PCINIT_MODEM_NEGOTIATE_CHOICES 0x06 -#define PCINIT_MODEMMODULATION_DISABLE_V21 0x00000001L -#define PCINIT_MODEMMODULATION_DISABLE_V23 0x00000002L -#define PCINIT_MODEMMODULATION_DISABLE_V22 0x00000004L -#define PCINIT_MODEMMODULATION_DISABLE_V22BIS 0x00000008L -#define PCINIT_MODEMMODULATION_DISABLE_V32 0x00000010L -#define PCINIT_MODEMMODULATION_DISABLE_V32BIS 0x00000020L -#define PCINIT_MODEMMODULATION_DISABLE_V34 0x00000040L -#define PCINIT_MODEMMODULATION_DISABLE_V90 0x00000080L -#define PCINIT_MODEMMODULATION_DISABLE_BELL103 0x00000100L -#define PCINIT_MODEMMODULATION_DISABLE_BELL212A 0x00000200L -#define PCINIT_MODEMMODULATION_DISABLE_VFC 0x00000400L -#define PCINIT_MODEMMODULATION_DISABLE_K56FLEX 0x00000800L -#define PCINIT_MODEMMODULATION_DISABLE_X2 0x00001000L -#define PCINIT_MODEMMODULATION_ENABLE_V29FDX 0x00010000L -#define PCINIT_MODEMMODULATION_ENABLE_V33 0x00020000L -#define PCINIT_MODEMMODULATION_ENABLE_V90A 0x00040000L -#define PCINIT_MODEM_TRANSMIT_LEVEL_CHOICES 0x10 
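The PCINIT_* names in this header appear to fall into two groups: small
sequential parameter tags (PCINIT_MODEM_GUARD_TONE through
PCINIT_FAX_V34_OPTIONS) and the bit flags or enumerators carried as the value
of such a parameter. A minimal, purely illustrative sketch of composing two of
these values, assuming the defines above are in scope (write_init_param() is a
hypothetical placeholder, not a function of this driver):

    /* Illustrative only: OR the PCINIT_* bit flags defined above into the
     * value for the matching parameter tag.  write_init_param() is a
     * hypothetical stand-in for the card's real parameter writer. */
    #include <stdio.h>

    static void write_init_param(unsigned char tag, unsigned long value)
    {
        printf("init param 0x%02x = 0x%08lx\n", tag, value);
    }

    static void example_modem_setup(void)
    {
        unsigned long prot = PCINIT_MODEMPROT_DISABLE_MNP_MNP5 |
                             PCINIT_MODEMPROT_DISABLE_V42_SREJ;
        unsigned long mods = PCINIT_MODEMMODULATION_DISABLE_K56FLEX |
                             PCINIT_MODEMMODULATION_DISABLE_X2;

        write_init_param(PCINIT_MODEM_PROTOCOL_OPTIONS, prot);
        write_init_param(PCINIT_MODEM_MODULATIONS_MASK, mods);
    }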
-#define PCINIT_MODEM_SPEAKER_OFF 0x00 -#define PCINIT_MODEM_SPEAKER_DURING_TRAIN 0x01 -#define PCINIT_MODEM_SPEAKER_TIL_CONNECT 0x02 -#define PCINIT_MODEM_SPEAKER_ALWAYS_ON 0x03 -#define PCINIT_MODEM_SPEAKER_CHOICES 0x04 -#define PCINIT_MODEM_SPEAKER_VOLUME_MIN 0x00 -#define PCINIT_MODEM_SPEAKER_VOLUME_LOW 0x01 -#define PCINIT_MODEM_SPEAKER_VOLUME_HIGH 0x02 -#define PCINIT_MODEM_SPEAKER_VOLUME_MAX 0x03 -#define PCINIT_MODEM_SPEAKER_VOLUME_CHOICES 0x04 -/*------------------------------------------------------------------*/ -#define PCINIT_FAXCONFIG_DISABLE_FINE 0x0001 -#define PCINIT_FAXCONFIG_DISABLE_ECM 0x0002 -#define PCINIT_FAXCONFIG_ECM_64_BYTES 0x0004 -#define PCINIT_FAXCONFIG_DISABLE_2D_CODING 0x0008 -#define PCINIT_FAXCONFIG_DISABLE_T6_CODING 0x0010 -#define PCINIT_FAXCONFIG_DISABLE_UNCOMPR 0x0020 -#define PCINIT_FAXCONFIG_REFUSE_POLLING 0x0040 -#define PCINIT_FAXCONFIG_HIDE_TOTAL_PAGES 0x0080 -#define PCINIT_FAXCONFIG_HIDE_ALL_HEADLINE 0x0100 -#define PCINIT_FAXCONFIG_HIDE_PAGE_INFO 0x0180 -#define PCINIT_FAXCONFIG_HEADLINE_OPTIONS_MASK 0x0180 -#define PCINIT_FAXCONFIG_DISABLE_FEATURE_FALLBACK 0x0200 -#define PCINIT_FAXCONFIG_V34FAX_CONTROL_RATE_1200 0x0800 -#define PCINIT_FAXCONFIG_DISABLE_V34FAX 0x1000 -#define PCINIT_FAXCONFIG_DISABLE_R8_0770_OR_200 0x01 -#define PCINIT_FAXCONFIG_DISABLE_R8_1540 0x02 -#define PCINIT_FAXCONFIG_DISABLE_R16_1540_OR_400 0x04 -#define PCINIT_FAXCONFIG_DISABLE_R4_0385_OR_100 0x08 -#define PCINIT_FAXCONFIG_DISABLE_300_300 0x10 -#define PCINIT_FAXCONFIG_DISABLE_INCH_BASED 0x40 -#define PCINIT_FAXCONFIG_DISABLE_METRIC_BASED 0x80 -#define PCINIT_FAXCONFIG_REC_WIDTH_ISO_A3 0 -#define PCINIT_FAXCONFIG_REC_WIDTH_ISO_B4 1 -#define PCINIT_FAXCONFIG_REC_WIDTH_ISO_A4 2 -#define PCINIT_FAXCONFIG_REC_WIDTH_COUNT 3 -#define PCINIT_FAXCONFIG_REC_LENGTH_UNLIMITED 0 -#define PCINIT_FAXCONFIG_REC_LENGTH_ISO_B4 1 -#define PCINIT_FAXCONFIG_REC_LENGTH_ISO_A4 2 -#define PCINIT_FAXCONFIG_REC_LENGTH_COUNT 3 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_00_00_00 0 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_05_05_05 1 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_10_05_05 2 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_10_10_10 3 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_20_10_10 4 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_20_20_20 5 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_40_20_20 6 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_40_40_40 7 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_RES_8 8 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_RES_9 9 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_RES_10 10 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_10_10_05 11 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_20_10_05 12 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_20_20_10 13 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_40_20_10 14 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_40_40_20 15 -#define PCINIT_FAXCONFIG_SCANLINE_TIME_COUNT 16 -#define PCINIT_FAXCONFIG_DISABLE_TX_REDUCTION 0x00010000L -#define PCINIT_FAXCONFIG_DISABLE_PRECODING 0x00020000L -#define PCINIT_FAXCONFIG_DISABLE_PREEMPHASIS 0x00040000L -#define PCINIT_FAXCONFIG_DISABLE_SHAPING 0x00080000L -#define PCINIT_FAXCONFIG_DISABLE_NONLINEAR_EN 0x00100000L -#define PCINIT_FAXCONFIG_DISABLE_MANUALREDUCT 0x00200000L -#define PCINIT_FAXCONFIG_DISABLE_16_POINT_TRN 0x00400000L -#define PCINIT_FAXCONFIG_DISABLE_2400_SYMBOLS 0x01000000L -#define PCINIT_FAXCONFIG_DISABLE_2743_SYMBOLS 0x02000000L -#define PCINIT_FAXCONFIG_DISABLE_2800_SYMBOLS 0x04000000L -#define PCINIT_FAXCONFIG_DISABLE_3000_SYMBOLS 0x08000000L -#define PCINIT_FAXCONFIG_DISABLE_3200_SYMBOLS 0x10000000L -#define 
PCINIT_FAXCONFIG_DISABLE_3429_SYMBOLS 0x20000000L -/*--------------------------------------------------------------------------*/ -#define PCINIT_XDI_CMA_FOR_ALL_NL_PRIMITIVES 0x01 -/*--------------------------------------------------------------------------*/ -#define PCINIT_FPGA_PLX_ACCESS_SUPPORTED 0x01 -/*--------------------------------------------------------------------------*/ -#endif -/*--------------------------------------------------------------------------*/ diff --git a/drivers/isdn/hardware/eicon/pc_maint.h b/drivers/isdn/hardware/eicon/pc_maint.h deleted file mode 100644 index 496f018fb5a2..000000000000 --- a/drivers/isdn/hardware/eicon/pc_maint.h +++ /dev/null @@ -1,160 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifdef PLATFORM_GT_32BIT -/* #define POINTER_32BIT byte * __ptr32 */ -#define POINTER_32BIT dword -#else -#define POINTER_32BIT byte * -#endif -#if !defined(MIPS_SCOM) -#define BUFFER_SZ 48 -#define MAINT_OFFS 0x380 -#else -#define BUFFER_SZ 128 -#if defined(PRI) -#define MAINT_OFFS 0xef00 -#else -#define MAINT_OFFS 0xff00 -#endif -#endif -#define MIPS_BUFFER_SZ 128 -#if defined(PRI) -#define MIPS_MAINT_OFFS 0xef00 -#else -#define MIPS_MAINT_OFFS 0xff00 -#endif -#define LOG 1 -#define MEMR 2 -#define MEMW 3 -#define IOR 4 -#define IOW 5 -#define B1TEST 6 -#define B2TEST 7 -#define BTESTOFF 8 -#define DSIG_STATS 9 -#define B_CH_STATS 10 -#define D_CH_STATS 11 -#define BL1_STATS 12 -#define BL1_STATS_C 13 -#define GET_VERSION 14 -#define OS_STATS 15 -#define XLOG_SET_MASK 16 -#define XLOG_GET_MASK 17 -#define DSP_READ 20 -#define DSP_WRITE 21 -#define OK 0xff -#define MORE_EVENTS 0xfe -#define NO_EVENT 1 -struct DSigStruc -{ - byte Id; - byte u; - byte listen; - byte active; - byte sin[3]; - byte bc[6]; - byte llc[6]; - byte hlc[6]; - byte oad[20]; -}; -struct BL1Struc { - dword cx_b1; - dword cx_b2; - dword cr_b1; - dword cr_b2; - dword px_b1; - dword px_b2; - dword pr_b1; - dword pr_b2; - word er_b1; - word er_b2; -}; -struct L2Struc { - dword XTotal; - dword RTotal; - word XError; - word RError; -}; -struct OSStruc { - dword free_n; -}; -typedef union -{ - struct DSigStruc DSigStats; - struct BL1Struc BL1Stats; - struct L2Struc L2Stats; - struct OSStruc OSStats; - byte b[BUFFER_SZ]; - word w[BUFFER_SZ >> 1]; - word l[BUFFER_SZ >> 2]; /* word is wrong, do not use! Use 'd' instead. */ - dword d[BUFFER_SZ >> 2]; -} BUFFER; -typedef union -{ - struct DSigStruc DSigStats; - struct BL1Struc BL1Stats; - struct L2Struc L2Stats; - struct OSStruc OSStats; - byte b[MIPS_BUFFER_SZ]; - word w[MIPS_BUFFER_SZ >> 1]; - word l[BUFFER_SZ >> 2]; /* word is wrong, do not use! Use 'd' instead. 
*/ - dword d[MIPS_BUFFER_SZ >> 2]; -} MIPS_BUFFER; -#if !defined(MIPS_SCOM) -struct pc_maint -{ - byte req; - byte rc; - POINTER_32BIT mem; - short length; - word port; - byte fill[6]; - BUFFER data; -}; -#else -struct pc_maint -{ - byte req; - byte rc; - byte reserved[2]; /* R3000 alignment ... */ - POINTER_32BIT mem; - short length; - word port; - byte fill[4]; /* data at offset 16 */ - BUFFER data; -}; -#endif -struct mi_pc_maint -{ - byte req; - byte rc; - byte reserved[2]; /* R3000 alignment ... */ - POINTER_32BIT mem; - short length; - word port; - byte fill[4]; /* data at offset 16 */ - MIPS_BUFFER data; -}; diff --git a/drivers/isdn/hardware/eicon/pkmaint.h b/drivers/isdn/hardware/eicon/pkmaint.h deleted file mode 100644 index cf3fb14a8e6f..000000000000 --- a/drivers/isdn/hardware/eicon/pkmaint.h +++ /dev/null @@ -1,43 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_XDI_OS_DEPENDENT_PACK_MAIN_ON_BYTE_INC__ -#define __DIVA_XDI_OS_DEPENDENT_PACK_MAIN_ON_BYTE_INC__ - - -/* - The only purpose of this compiler-dependent file is to pack - the structures described in pc_maint.h so that no padding - is included. - - With the Microsoft compiler this is done by "pshpack1.h" and - afterwards restored by "poppack.h" -*/ - - -#include "pc_maint.h" - - -#endif diff --git a/drivers/isdn/hardware/eicon/platform.h b/drivers/isdn/hardware/eicon/platform.h deleted file mode 100644 index 62e2073c3690..000000000000 --- a/drivers/isdn/hardware/eicon/platform.h +++ /dev/null @@ -1,369 +0,0 @@ -/* $Id: platform.h,v 1.37.4.6 2005/01/31 12:22:20 armin Exp $ - * - * platform.h - * - * - * Copyright 2000-2003 by Armin Schindler (mac@melware.de) - * Copyright 2000 Eicon Networks - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference.
- */ - - -#ifndef __PLATFORM_H__ -#define __PLATFORM_H__ - -#if !defined(DIVA_BUILD) -#define DIVA_BUILD "local" -#endif - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "cardtype.h" - -/* activate debuglib for modules only */ -#ifndef MODULE -#define DIVA_NO_DEBUGLIB -#endif - -#define DIVA_USER_MODE_CARD_CONFIG 1 -#define USE_EXTENDED_DEBUGS 1 - -#define MAX_ADAPTER 32 - -#define DIVA_ISTREAM 1 - -#define MEMORY_SPACE_TYPE 0 -#define PORT_SPACE_TYPE 1 - - -#include - -#ifndef byte -#define byte u8 -#endif - -#ifndef word -#define word u16 -#endif - -#ifndef dword -#define dword u32 -#endif - -#ifndef qword -#define qword u64 -#endif - -#ifndef NULL -#define NULL ((void *) 0) -#endif - -#ifndef far -#define far -#endif - -#ifndef _pascal -#define _pascal -#endif - -#ifndef _loadds -#define _loadds -#endif - -#ifndef _cdecl -#define _cdecl -#endif - -#define MEM_TYPE_RAM 0 -#define MEM_TYPE_PORT 1 -#define MEM_TYPE_PROM 2 -#define MEM_TYPE_CTLREG 3 -#define MEM_TYPE_RESET 4 -#define MEM_TYPE_CFG 5 -#define MEM_TYPE_ADDRESS 6 -#define MEM_TYPE_CONFIG 7 -#define MEM_TYPE_CONTROL 8 - -#define MAX_MEM_TYPE 10 - -#define DIVA_OS_MEM_ATTACH_RAM(a) ((a)->ram) -#define DIVA_OS_MEM_ATTACH_PORT(a) ((a)->port) -#define DIVA_OS_MEM_ATTACH_PROM(a) ((a)->prom) -#define DIVA_OS_MEM_ATTACH_CTLREG(a) ((a)->ctlReg) -#define DIVA_OS_MEM_ATTACH_RESET(a) ((a)->reset) -#define DIVA_OS_MEM_ATTACH_CFG(a) ((a)->cfg) -#define DIVA_OS_MEM_ATTACH_ADDRESS(a) ((a)->Address) -#define DIVA_OS_MEM_ATTACH_CONFIG(a) ((a)->Config) -#define DIVA_OS_MEM_ATTACH_CONTROL(a) ((a)->Control) - -#define DIVA_OS_MEM_DETACH_RAM(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_PORT(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_PROM(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_CTLREG(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_RESET(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_CFG(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_ADDRESS(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_CONFIG(a, x) do { } while (0) -#define DIVA_OS_MEM_DETACH_CONTROL(a, x) do { } while (0) - -#define DIVA_INVALID_FILE_HANDLE ((dword)(-1)) - -#define DIVAS_CONTAINING_RECORD(address, type, field) \ - ((type *)((char *)(address) - (char *)(&((type *)0)->field))) - -extern int sprintf(char *, const char *, ...); - -typedef void *LIST_ENTRY; - -typedef char DEVICE_NAME[64]; -typedef struct _ISDN_ADAPTER ISDN_ADAPTER; -typedef struct _ISDN_ADAPTER *PISDN_ADAPTER; - -typedef void (*DIVA_DI_PRINTF)(unsigned char *, ...); -#include "debuglib.h" - -#define dtrc(p) DBG_PRV0(p) -#define dbug(a, p) DBG_PRV1(p) - - -typedef struct e_info_s E_INFO; - -typedef char diva_os_dependent_devica_name_t[64]; -typedef void *PDEVICE_OBJECT; - -struct _diva_os_soft_isr; -struct _diva_os_timer; -struct _ISDN_ADAPTER; - -void diva_log_info(unsigned char *, ...); - -/* -** XDI DIDD Interface -*/ -void diva_xdi_didd_register_adapter(int card); -void diva_xdi_didd_remove_adapter(int card); - -/* -** memory allocation -*/ -static __inline__ void *diva_os_malloc(unsigned long flags, unsigned long size) -{ - void *ret = NULL; - - if (size) { - ret = (void *) vmalloc((unsigned int) size); - } - return (ret); -} -static __inline__ void diva_os_free(unsigned long flags, void *ptr) -{ - vfree(ptr); -} - -/* -** use skbuffs for message buffer -*/ -typedef struct sk_buff diva_os_message_buffer_s; -diva_os_message_buffer_s *diva_os_alloc_message_buffer(unsigned long size, void 
**data_buf); -void diva_os_free_message_buffer(diva_os_message_buffer_s *dmb); -#define DIVA_MESSAGE_BUFFER_LEN(x) x->len -#define DIVA_MESSAGE_BUFFER_DATA(x) x->data - -/* -** mSeconds waiting -*/ -static __inline__ void diva_os_sleep(dword mSec) -{ - msleep(mSec); -} -static __inline__ void diva_os_wait(dword mSec) -{ - mdelay(mSec); -} - -/* -** PCI Configuration space access -*/ -void PCIwrite(byte bus, byte func, int offset, void *data, int length, void *pci_dev_handle); -void PCIread(byte bus, byte func, int offset, void *data, int length, void *pci_dev_handle); - -/* -** I/O Port utilities -*/ -int diva_os_register_io_port(void *adapter, int reg, unsigned long port, - unsigned long length, const char *name, int id); -/* -** I/O port access abstraction -*/ -byte inpp(void __iomem *); -word inppw(void __iomem *); -void inppw_buffer(void __iomem *, void *, int); -void outppw(void __iomem *, word); -void outppw_buffer(void __iomem * , void*, int); -void outpp(void __iomem *, word); - -/* -** IRQ -*/ -typedef struct _diva_os_adapter_irq_info { - byte irq_nr; - int registered; - char irq_name[24]; -} diva_os_adapter_irq_info_t; -int diva_os_register_irq(void *context, byte irq, const char *name); -void diva_os_remove_irq(void *context, byte irq); - -#define diva_os_in_irq() in_irq() - -/* -** Spin Lock framework -*/ -typedef long diva_os_spin_lock_magic_t; -typedef spinlock_t diva_os_spin_lock_t; -static __inline__ int diva_os_initialize_spin_lock(spinlock_t *lock, void *unused) { \ - spin_lock_init(lock); return (0); } -static __inline__ void diva_os_enter_spin_lock(diva_os_spin_lock_t *a, \ - diva_os_spin_lock_magic_t *old_irql, \ - void *dbg) { spin_lock_bh(a); } -static __inline__ void diva_os_leave_spin_lock(diva_os_spin_lock_t *a, \ - diva_os_spin_lock_magic_t *old_irql, \ - void *dbg) { spin_unlock_bh(a); } - -#define diva_os_destroy_spin_lock(a, b) do { } while (0) - -/* -** Deffered processing framework -*/ -typedef int (*diva_os_isr_callback_t)(struct _ISDN_ADAPTER *); -typedef void (*diva_os_soft_isr_callback_t)(struct _diva_os_soft_isr *psoft_isr, void *context); - -typedef struct _diva_os_soft_isr { - void *object; - diva_os_soft_isr_callback_t callback; - void *callback_context; - char dpc_thread_name[24]; -} diva_os_soft_isr_t; - -int diva_os_initialize_soft_isr(diva_os_soft_isr_t *psoft_isr, diva_os_soft_isr_callback_t callback, void *callback_context); -int diva_os_schedule_soft_isr(diva_os_soft_isr_t *psoft_isr); -int diva_os_cancel_soft_isr(diva_os_soft_isr_t *psoft_isr); -void diva_os_remove_soft_isr(diva_os_soft_isr_t *psoft_isr); - -/* - Get time service -*/ -void diva_os_get_time(dword *sec, dword *usec); - -/* -** atomic operation, fake because we use threads -*/ -typedef int diva_os_atomic_t; -static inline diva_os_atomic_t -diva_os_atomic_increment(diva_os_atomic_t *pv) -{ - *pv += 1; - return (*pv); -} -static inline diva_os_atomic_t -diva_os_atomic_decrement(diva_os_atomic_t *pv) -{ - *pv -= 1; - return (*pv); -} - -/* -** CAPI SECTION -*/ -#define NO_CORNETN -#define IMPLEMENT_DTMF 1 -#define IMPLEMENT_ECHO_CANCELLER 1 -#define IMPLEMENT_RTP 1 -#define IMPLEMENT_T38 1 -#define IMPLEMENT_FAX_SUB_SEP_PWD 1 -#define IMPLEMENT_V18 1 -#define IMPLEMENT_DTMF_TONE 1 -#define IMPLEMENT_PIAFS 1 -#define IMPLEMENT_FAX_PAPER_FORMATS 1 -#define IMPLEMENT_VOWN 1 -#define IMPLEMENT_CAPIDTMF 1 -#define IMPLEMENT_FAX_NONSTANDARD 1 -#define VSWITCH_SUPPORT 1 - -#define IMPLEMENT_MARKED_OK_AFTER_FC 1 - -#define DIVA_IDI_RX_DMA 1 - -/* -** endian macros -** -** If only... 
In some cases we did use them for endianness conversion; -** unfortunately, other uses were real iomem accesses. -*/ -#define READ_BYTE(addr) readb(addr) -#define READ_WORD(addr) readw(addr) -#define READ_DWORD(addr) readl(addr) - -#define WRITE_BYTE(addr, v) writeb(v, addr) -#define WRITE_WORD(addr, v) writew(v, addr) -#define WRITE_DWORD(addr, v) writel(v, addr) - -static inline __u16 GET_WORD(void *addr) -{ - return le16_to_cpu(*(__le16 *)addr); -} -static inline __u32 GET_DWORD(void *addr) -{ - return le32_to_cpu(*(__le32 *)addr); -} -static inline void PUT_WORD(void *addr, __u16 v) -{ - *(__le16 *)addr = cpu_to_le16(v); -} -static inline void PUT_DWORD(void *addr, __u32 v) -{ - *(__le32 *)addr = cpu_to_le32(v); -} - -/* -** 32/64 bit macors -*/ -#ifdef BITS_PER_LONG -#if BITS_PER_LONG > 32 -#define PLATFORM_GT_32BIT -#define ULongToPtr(x) (void *)(unsigned long)(x) -#endif -#endif - -/* -** undef os definitions of macros we use -*/ -#undef ID_MASK -#undef N_DATA -#undef ADDR - -/* -** dump file -*/ -#define diva_os_dump_file_t char -#define diva_os_board_trace_t char -#define diva_os_dump_file(__x__) do { } while (0) - -/* -** size of internal arrays -*/ -#define MAX_DESCRIPTORS 64 - -#endif /* __PLATFORM_H__ */ diff --git a/drivers/isdn/hardware/eicon/pr_pc.h b/drivers/isdn/hardware/eicon/pr_pc.h deleted file mode 100644 index a08d6d57a486..000000000000 --- a/drivers/isdn/hardware/eicon/pr_pc.h +++ /dev/null @@ -1,76 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -struct pr_ram { - word NextReq; /* pointer to next Req Buffer */ - word NextRc; /* pointer to next Rc Buffer */ - word NextInd; /* pointer to next Ind Buffer */ - byte ReqInput; /* number of Req Buffers sent */ - byte ReqOutput; /* number of Req Buffers returned */ - byte ReqReserved; /* number of Req Buffers reserved */ - byte Int; /* ISDN-P interrupt */ - byte XLock; /* Lock field for arbitration */ - byte RcOutput; /* number of Rc buffers received */ - byte IndOutput; /* number of Ind buffers received */ - byte IMask; /* Interrupt Mask Flag */ - byte Reserved1[2]; /* reserved field, do not use */ - byte ReadyInt; /* request field for ready interrupt */ - byte Reserved2[12]; /* reserved field, do not use */ - byte InterfaceType; /* interface type 1=16K interface */ - word Signature; /* ISDN-P initialized indication */ - byte B[1]; /* buffer space for Req,Ind and Rc */ -}; -typedef struct { - word next; - byte Req; - byte ReqId; - byte ReqCh; - byte Reserved1; - word Reference; - byte Reserved[8]; - PBUFFER XBuffer; -} REQ; -typedef struct { - word next; - byte Rc; - byte RcId; - byte RcCh; - byte Reserved1; - word Reference; - byte Reserved2[8]; -} RC; -typedef struct { - word next; - byte Ind; - byte IndId; - byte IndCh; - byte MInd; - word MLength; - word Reference; - byte RNR; - byte Reserved; - dword Ack; - PBUFFER RBuffer; -} IND; diff --git a/drivers/isdn/hardware/eicon/s_4bri.c b/drivers/isdn/hardware/eicon/s_4bri.c deleted file mode 100644 index ec12165fbf62..000000000000 --- a/drivers/isdn/hardware/eicon/s_4bri.c +++ /dev/null @@ -1,510 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#include "platform.h" -#include "di_defs.h" -#include "pc.h" -#include "pr_pc.h" -#include "di.h" -#include "mi_pc.h" -#include "pc_maint.h" -#include "divasync.h" -#include "pc_init.h" -#include "io.h" -#include "helpers.h" -#include "dsrv4bri.h" -#include "dsp_defs.h" -#include "sdp_hdr.h" - -/*****************************************************************************/ -#define MAX_XLOG_SIZE (64 * 1024) - -/* -------------------------------------------------------------------------- - Recovery XLOG from QBRI Card - -------------------------------------------------------------------------- */ -static void qBri_cpu_trapped(PISDN_ADAPTER IoAdapter) { - byte __iomem *base; - word *Xlog; - dword regs[4], TrapID, offset, size; - Xdesc xlogDesc; - int factor = (IoAdapter->tasks == 1) ? 
1 : 2; - -/* - * check for trapped MIPS 46xx CPU, dump exception frame - */ - - base = DIVA_OS_MEM_ATTACH_CONTROL(IoAdapter); - offset = IoAdapter->ControllerNumber * (IoAdapter->MemorySize >> factor); - - TrapID = READ_DWORD(&base[0x80]); - - if ((TrapID == 0x99999999) || (TrapID == 0x99999901)) - { - dump_trap_frame(IoAdapter, &base[0x90]); - IoAdapter->trapped = 1; - } - - regs[0] = READ_DWORD((base + offset) + 0x70); - regs[1] = READ_DWORD((base + offset) + 0x74); - regs[2] = READ_DWORD((base + offset) + 0x78); - regs[3] = READ_DWORD((base + offset) + 0x7c); - regs[0] &= IoAdapter->MemorySize - 1; - - if ((regs[0] >= offset) - && (regs[0] < offset + (IoAdapter->MemorySize >> factor) - 1)) - { - if (!(Xlog = (word *)diva_os_malloc(0, MAX_XLOG_SIZE))) { - DIVA_OS_MEM_DETACH_CONTROL(IoAdapter, base); - return; - } - - size = offset + (IoAdapter->MemorySize >> factor) - regs[0]; - if (size > MAX_XLOG_SIZE) - size = MAX_XLOG_SIZE; - memcpy_fromio(Xlog, &base[regs[0]], size); - xlogDesc.buf = Xlog; - xlogDesc.cnt = READ_WORD(&base[regs[1] & (IoAdapter->MemorySize - 1)]); - xlogDesc.out = READ_WORD(&base[regs[2] & (IoAdapter->MemorySize - 1)]); - dump_xlog_buffer(IoAdapter, &xlogDesc); - diva_os_free(0, Xlog); - IoAdapter->trapped = 2; - } - DIVA_OS_MEM_DETACH_CONTROL(IoAdapter, base); -} - -/* -------------------------------------------------------------------------- - Reset QBRI Hardware - -------------------------------------------------------------------------- */ -static void reset_qBri_hardware(PISDN_ADAPTER IoAdapter) { - word volatile __iomem *qBriReset; - byte volatile __iomem *qBriCntrl; - byte volatile __iomem *p; - - qBriReset = (word volatile __iomem *)DIVA_OS_MEM_ATTACH_PROM(IoAdapter); - WRITE_WORD(qBriReset, READ_WORD(qBriReset) | PLX9054_SOFT_RESET); - diva_os_wait(1); - WRITE_WORD(qBriReset, READ_WORD(qBriReset) & ~PLX9054_SOFT_RESET); - diva_os_wait(1); - WRITE_WORD(qBriReset, READ_WORD(qBriReset) | PLX9054_RELOAD_EEPROM); - diva_os_wait(1); - WRITE_WORD(qBriReset, READ_WORD(qBriReset) & ~PLX9054_RELOAD_EEPROM); - diva_os_wait(1); - DIVA_OS_MEM_DETACH_PROM(IoAdapter, qBriReset); - - qBriCntrl = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - p = &qBriCntrl[DIVA_4BRI_REVISION(IoAdapter) ? (MQ2_BREG_RISC) : (MQ_BREG_RISC)]; - WRITE_DWORD(p, 0); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, qBriCntrl); - - DBG_TRC(("resetted board @ reset addr 0x%08lx", qBriReset)) - DBG_TRC(("resetted board @ cntrl addr 0x%08lx", p)) - } - -/* -------------------------------------------------------------------------- - Start Card CPU - -------------------------------------------------------------------------- */ -void start_qBri_hardware(PISDN_ADAPTER IoAdapter) { - byte volatile __iomem *qBriReset; - byte volatile __iomem *p; - - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - qBriReset = &p[(DIVA_4BRI_REVISION(IoAdapter)) ? 
(MQ2_BREG_RISC) : (MQ_BREG_RISC)]; - WRITE_DWORD(qBriReset, MQ_RISC_COLD_RESET_MASK); - diva_os_wait(2); - WRITE_DWORD(qBriReset, MQ_RISC_WARM_RESET_MASK | MQ_RISC_COLD_RESET_MASK); - diva_os_wait(10); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); - - DBG_TRC(("started processor @ addr 0x%08lx", qBriReset)) - } - -/* -------------------------------------------------------------------------- - Stop Card CPU - -------------------------------------------------------------------------- */ -static void stop_qBri_hardware(PISDN_ADAPTER IoAdapter) { - byte volatile __iomem *p; - dword volatile __iomem *qBriReset; - dword volatile __iomem *qBriIrq; - dword volatile __iomem *qBriIsacDspReset; - int rev2 = DIVA_4BRI_REVISION(IoAdapter); - int reset_offset = rev2 ? (MQ2_BREG_RISC) : (MQ_BREG_RISC); - int irq_offset = rev2 ? (MQ2_BREG_IRQ_TEST) : (MQ_BREG_IRQ_TEST); - int hw_offset = rev2 ? (MQ2_ISAC_DSP_RESET) : (MQ_ISAC_DSP_RESET); - - if (IoAdapter->ControllerNumber > 0) - return; - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - qBriReset = (dword volatile __iomem *)&p[reset_offset]; - qBriIsacDspReset = (dword volatile __iomem *)&p[hw_offset]; -/* - * clear interrupt line (reset Local Interrupt Test Register) - */ - WRITE_DWORD(qBriReset, 0); - WRITE_DWORD(qBriIsacDspReset, 0); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); - - p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - WRITE_BYTE(&p[PLX9054_INTCSR], 0x00); /* disable PCI interrupts */ - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - qBriIrq = (dword volatile __iomem *)&p[irq_offset]; - WRITE_DWORD(qBriIrq, MQ_IRQ_REQ_OFF); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); - - DBG_TRC(("stopped processor @ addr 0x%08lx", qBriReset)) - - } - -/* -------------------------------------------------------------------------- - FPGA download - -------------------------------------------------------------------------- */ -#define FPGA_NAME_OFFSET 0x10 - -static byte *qBri_check_FPGAsrc(PISDN_ADAPTER IoAdapter, char *FileName, - dword *Length, dword *code) { - byte *File; - char *fpgaFile, *fpgaType, *fpgaDate, *fpgaTime; - dword fpgaFlen, fpgaTlen, fpgaDlen, cnt, year, i; - - if (!(File = (byte *)xdiLoadFile(FileName, Length, 0))) { - return (NULL); - } -/* - * scan file until FF and put id string into buffer - */ - for (i = 0; File[i] != 0xff;) - { - if (++i >= *Length) - { - DBG_FTL(("FPGA download: start of data header not found")) - xdiFreeFile(File); - return (NULL); - } - } - *code = i++; - - if ((File[i] & 0xF0) != 0x20) - { - DBG_FTL(("FPGA download: data header corrupted")) - xdiFreeFile(File); - return (NULL); - } - fpgaFlen = (dword)File[FPGA_NAME_OFFSET - 1]; - if (fpgaFlen == 0) - fpgaFlen = 12; - fpgaFile = (char *)&File[FPGA_NAME_OFFSET]; - fpgaTlen = (dword)fpgaFile[fpgaFlen + 2]; - if (fpgaTlen == 0) - fpgaTlen = 10; - fpgaType = (char *)&fpgaFile[fpgaFlen + 3]; - fpgaDlen = (dword) fpgaType[fpgaTlen + 2]; - if (fpgaDlen == 0) - fpgaDlen = 11; - fpgaDate = (char *)&fpgaType[fpgaTlen + 3]; - fpgaTime = (char *)&fpgaDate[fpgaDlen + 3]; - cnt = (dword)(((File[i] & 0x0F) << 20) + (File[i + 1] << 12) - + (File[i + 2] << 4) + (File[i + 3] >> 4)); - - if ((dword)(i + (cnt / 8)) > *Length) - { - DBG_FTL(("FPGA download: '%s' file too small (%ld < %ld)", - FileName, *Length, code + ((cnt + 7) / 8))) - xdiFreeFile(File); - return (NULL); - } - i = 0; - do - { - while ((fpgaDate[i] != '\0') - && ((fpgaDate[i] < '0') || (fpgaDate[i] > '9'))) - { - i++; - } - year = 0; - while ((fpgaDate[i] >= '0') && (fpgaDate[i] <= '9')) - 
year = year * 10 + (fpgaDate[i++] - '0'); - } while ((year < 2000) && (fpgaDate[i] != '\0')); - - switch (IoAdapter->cardType) { - case CARDTYPE_DIVASRV_B_2F_PCI: - break; - - default: - if (year >= 2001) { - IoAdapter->fpga_features |= PCINIT_FPGA_PLX_ACCESS_SUPPORTED; - } - } - - DBG_LOG(("FPGA[%s] file %s (%s %s) len %d", - fpgaType, fpgaFile, fpgaDate, fpgaTime, cnt)) - return (File); -} - -/******************************************************************************/ - -#define FPGA_PROG 0x0001 /* PROG enable low */ -#define FPGA_BUSY 0x0002 /* BUSY high, DONE low */ -#define FPGA_CS 0x000C /* Enable I/O pins */ -#define FPGA_CCLK 0x0100 -#define FPGA_DOUT 0x0400 -#define FPGA_DIN FPGA_DOUT /* bidirectional I/O */ - -int qBri_FPGA_download(PISDN_ADAPTER IoAdapter) { - int bit; - byte *File; - dword code, FileLength; - word volatile __iomem *addr = (word volatile __iomem *)DIVA_OS_MEM_ATTACH_PROM(IoAdapter); - word val, baseval = FPGA_CS | FPGA_PROG; - - - - if (DIVA_4BRI_REVISION(IoAdapter)) - { - char *name; - - switch (IoAdapter->cardType) { - case CARDTYPE_DIVASRV_B_2F_PCI: - name = "dsbri2f.bit"; - break; - - case CARDTYPE_DIVASRV_B_2M_V2_PCI: - case CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI: - name = "dsbri2m.bit"; - break; - - default: - name = "ds4bri2.bit"; - } - - File = qBri_check_FPGAsrc(IoAdapter, name, - &FileLength, &code); - } - else - { - File = qBri_check_FPGAsrc(IoAdapter, "ds4bri.bit", - &FileLength, &code); - } - if (!File) { - DIVA_OS_MEM_DETACH_PROM(IoAdapter, addr); - return (0); - } -/* - * prepare download, pulse PROGRAM pin down. - */ - WRITE_WORD(addr, baseval & ~FPGA_PROG); /* PROGRAM low pulse */ - WRITE_WORD(addr, baseval); /* release */ - diva_os_wait(50); /* wait until FPGA finished internal memory clear */ -/* - * check done pin, must be low - */ - if (READ_WORD(addr) & FPGA_BUSY) - { - DBG_FTL(("FPGA download: acknowledge for FPGA memory clear missing")) - xdiFreeFile(File); - DIVA_OS_MEM_DETACH_PROM(IoAdapter, addr); - return (0); - } -/* - * put data onto the FPGA - */ - while (code < FileLength) - { - val = ((word)File[code++]) << 3; - - for (bit = 8; bit-- > 0; val <<= 1) /* put byte onto FPGA */ - { - baseval &= ~FPGA_DOUT; /* clr data bit */ - baseval |= (val & FPGA_DOUT); /* copy data bit */ - WRITE_WORD(addr, baseval); - WRITE_WORD(addr, baseval | FPGA_CCLK); /* set CCLK hi */ - WRITE_WORD(addr, baseval | FPGA_CCLK); /* set CCLK hi */ - WRITE_WORD(addr, baseval); /* set CCLK lo */ - } - } - xdiFreeFile(File); - diva_os_wait(100); - val = READ_WORD(addr); - - DIVA_OS_MEM_DETACH_PROM(IoAdapter, addr); - - if (!(val & FPGA_BUSY)) - { - DBG_FTL(("FPGA download: chip remains in busy state (0x%04x)", val)) - return (0); - } - - return (1); -} - -static int load_qBri_hardware(PISDN_ADAPTER IoAdapter) { - return (0); -} - -/* -------------------------------------------------------------------------- - Card ISR - -------------------------------------------------------------------------- */ -static int qBri_ISR(struct _ISDN_ADAPTER *IoAdapter) { - dword volatile __iomem *qBriIrq; - - PADAPTER_LIST_ENTRY QuadroList = IoAdapter->QuadroList; - - word i; - int serviced = 0; - byte __iomem *p; - - p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - - if (!(READ_BYTE(&p[PLX9054_INTCSR]) & 0x80)) { - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - return (0); - } - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - -/* - * clear interrupt line (reset Local Interrupt Test Register) - */ - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - qBriIrq = (dword volatile __iomem 
*)(&p[DIVA_4BRI_REVISION(IoAdapter) ? (MQ2_BREG_IRQ_TEST) : (MQ_BREG_IRQ_TEST)]); - WRITE_DWORD(qBriIrq, MQ_IRQ_REQ_OFF); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); - - for (i = 0; i < IoAdapter->tasks; ++i) - { - IoAdapter = QuadroList->QuadroAdapter[i]; - - if (IoAdapter && IoAdapter->Initialized - && IoAdapter->tst_irq(&IoAdapter->a)) - { - IoAdapter->IrqCount++; - serviced = 1; - diva_os_schedule_soft_isr(&IoAdapter->isr_soft_isr); - } - } - - return (serviced); -} - -/* -------------------------------------------------------------------------- - Does disable the interrupt on the card - -------------------------------------------------------------------------- */ -static void disable_qBri_interrupt(PISDN_ADAPTER IoAdapter) { - dword volatile __iomem *qBriIrq; - byte __iomem *p; - - if (IoAdapter->ControllerNumber > 0) - return; -/* - * clear interrupt line (reset Local Interrupt Test Register) - */ - p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - WRITE_BYTE(&p[PLX9054_INTCSR], 0x00); /* disable PCI interrupts */ - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - qBriIrq = (dword volatile __iomem *)(&p[DIVA_4BRI_REVISION(IoAdapter) ? (MQ2_BREG_IRQ_TEST) : (MQ_BREG_IRQ_TEST)]); - WRITE_DWORD(qBriIrq, MQ_IRQ_REQ_OFF); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); -} - -/* -------------------------------------------------------------------------- - Install Adapter Entry Points - -------------------------------------------------------------------------- */ -static void set_common_qBri_functions(PISDN_ADAPTER IoAdapter) { - ADAPTER *a; - - a = &IoAdapter->a; - - a->ram_in = mem_in; - a->ram_inw = mem_inw; - a->ram_in_buffer = mem_in_buffer; - a->ram_look_ahead = mem_look_ahead; - a->ram_out = mem_out; - a->ram_outw = mem_outw; - a->ram_out_buffer = mem_out_buffer; - a->ram_inc = mem_inc; - - IoAdapter->out = pr_out; - IoAdapter->dpc = pr_dpc; - IoAdapter->tst_irq = scom_test_int; - IoAdapter->clr_irq = scom_clear_int; - IoAdapter->pcm = (struct pc_maint *)MIPS_MAINT_OFFS; - - IoAdapter->load = load_qBri_hardware; - - IoAdapter->disIrq = disable_qBri_interrupt; - IoAdapter->rstFnc = reset_qBri_hardware; - IoAdapter->stop = stop_qBri_hardware; - IoAdapter->trapFnc = qBri_cpu_trapped; - - IoAdapter->diva_isr_handler = qBri_ISR; - - IoAdapter->a.io = (void *)IoAdapter; -} - -static void set_qBri_functions(PISDN_ADAPTER IoAdapter) { - if (!IoAdapter->tasks) { - IoAdapter->tasks = MQ_INSTANCE_COUNT; - } - IoAdapter->MemorySize = MQ_MEMORY_SIZE; - set_common_qBri_functions(IoAdapter); - diva_os_set_qBri_functions(IoAdapter); -} - -static void set_qBri2_functions(PISDN_ADAPTER IoAdapter) { - if (!IoAdapter->tasks) { - IoAdapter->tasks = MQ_INSTANCE_COUNT; - } - IoAdapter->MemorySize = (IoAdapter->tasks == 1) ? 
BRI2_MEMORY_SIZE : MQ2_MEMORY_SIZE; - set_common_qBri_functions(IoAdapter); - diva_os_set_qBri2_functions(IoAdapter); -} - -/******************************************************************************/ - -void prepare_qBri_functions(PISDN_ADAPTER IoAdapter) { - - set_qBri_functions(IoAdapter->QuadroList->QuadroAdapter[0]); - set_qBri_functions(IoAdapter->QuadroList->QuadroAdapter[1]); - set_qBri_functions(IoAdapter->QuadroList->QuadroAdapter[2]); - set_qBri_functions(IoAdapter->QuadroList->QuadroAdapter[3]); - -} - -void prepare_qBri2_functions(PISDN_ADAPTER IoAdapter) { - if (!IoAdapter->tasks) { - IoAdapter->tasks = MQ_INSTANCE_COUNT; - } - - set_qBri2_functions(IoAdapter->QuadroList->QuadroAdapter[0]); - if (IoAdapter->tasks > 1) { - set_qBri2_functions(IoAdapter->QuadroList->QuadroAdapter[1]); - set_qBri2_functions(IoAdapter->QuadroList->QuadroAdapter[2]); - set_qBri2_functions(IoAdapter->QuadroList->QuadroAdapter[3]); - } - -} - -/* -------------------------------------------------------------------------- */ diff --git a/drivers/isdn/hardware/eicon/s_bri.c b/drivers/isdn/hardware/eicon/s_bri.c deleted file mode 100644 index 6a5bb7462339..000000000000 --- a/drivers/isdn/hardware/eicon/s_bri.c +++ /dev/null @@ -1,191 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#include "platform.h" -#include "di_defs.h" -#include "pc.h" -#include "pr_pc.h" -#include "di.h" -#include "mi_pc.h" -#include "pc_maint.h" -#include "divasync.h" -#include "io.h" -#include "helpers.h" -#include "dsrv_bri.h" -#include "dsp_defs.h" -/*****************************************************************************/ -#define MAX_XLOG_SIZE (64 * 1024) -/* -------------------------------------------------------------------------- - Investigate card state, recovery trace buffer - -------------------------------------------------------------------------- */ -static void bri_cpu_trapped(PISDN_ADAPTER IoAdapter) { - byte __iomem *addrHi, *addrLo, *ioaddr; - word *Xlog; - dword regs[4], i, size; - Xdesc xlogDesc; - byte __iomem *Port; -/* - * first read pointers and trap frame - */ - if (!(Xlog = (word *)diva_os_malloc(0, MAX_XLOG_SIZE))) - return; - Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter); - addrHi = Port + ((IoAdapter->Properties.Bus == BUS_PCI) ? 
M_PCI_ADDRH : ADDRH); - addrLo = Port + ADDR; - ioaddr = Port + DATA; - outpp(addrHi, 0); - outppw(addrLo, 0); - for (i = 0; i < 0x100; Xlog[i++] = inppw(ioaddr)); -/* - * check for trapped MIPS 3xxx CPU, dump only exception frame - */ - if (GET_DWORD(&Xlog[0x80 / sizeof(Xlog[0])]) == 0x99999999) - { - dump_trap_frame(IoAdapter, &((byte *)Xlog)[0x90]); - IoAdapter->trapped = 1; - } - regs[0] = GET_DWORD(&((byte *)Xlog)[0x70]); - regs[1] = GET_DWORD(&((byte *)Xlog)[0x74]); - regs[2] = GET_DWORD(&((byte *)Xlog)[0x78]); - regs[3] = GET_DWORD(&((byte *)Xlog)[0x7c]); - outpp(addrHi, (regs[1] >> 16) & 0x7F); - outppw(addrLo, regs[1] & 0xFFFF); - xlogDesc.cnt = inppw(ioaddr); - outpp(addrHi, (regs[2] >> 16) & 0x7F); - outppw(addrLo, regs[2] & 0xFFFF); - xlogDesc.out = inppw(ioaddr); - xlogDesc.buf = Xlog; - regs[0] &= IoAdapter->MemorySize - 1; - if ((regs[0] < IoAdapter->MemorySize - 1)) - { - size = IoAdapter->MemorySize - regs[0]; - if (size > MAX_XLOG_SIZE) - size = MAX_XLOG_SIZE; - for (i = 0; i < (size / sizeof(*Xlog)); regs[0] += 2) - { - outpp(addrHi, (regs[0] >> 16) & 0x7F); - outppw(addrLo, regs[0] & 0xFFFF); - Xlog[i++] = inppw(ioaddr); - } - dump_xlog_buffer(IoAdapter, &xlogDesc); - diva_os_free(0, Xlog); - IoAdapter->trapped = 2; - } - outpp(addrHi, (byte)((BRI_UNCACHED_ADDR(IoAdapter->MemoryBase + IoAdapter->MemorySize - - BRI_SHARED_RAM_SIZE)) >> 16)); - outppw(addrLo, 0x00); - DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port); -} -/* --------------------------------------------------------------------- - Reset hardware - --------------------------------------------------------------------- */ -static void reset_bri_hardware(PISDN_ADAPTER IoAdapter) { - byte __iomem *p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - outpp(p, 0x00); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); -} -/* --------------------------------------------------------------------- - Halt system - --------------------------------------------------------------------- */ -static void stop_bri_hardware(PISDN_ADAPTER IoAdapter) { - byte __iomem *p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - if (p) { - outpp(p, 0x00); /* disable interrupts ! */ - } - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - outpp(p, 0x00); /* clear int, halt cpu */ - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); -} -static int load_bri_hardware(PISDN_ADAPTER IoAdapter) { - return (0); -} -/******************************************************************************/ -static int bri_ISR(struct _ISDN_ADAPTER *IoAdapter) { - byte __iomem *p; - - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - if (!(inpp(p) & 0x01)) { - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); - return (0); - } - /* - clear interrupt line - */ - outpp(p, 0x08); - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); - IoAdapter->IrqCount++; - if (IoAdapter->Initialized) { - diva_os_schedule_soft_isr(&IoAdapter->isr_soft_isr); - } - return (1); -} -/* -------------------------------------------------------------------------- - Disable IRQ in the card hardware - -------------------------------------------------------------------------- */ -static void disable_bri_interrupt(PISDN_ADAPTER IoAdapter) { - byte __iomem *p; - p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - if (p) - { - outpp(p, 0x00); /* disable interrupts ! 
*/ - } - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); - p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter); - outpp(p, 0x00); /* clear int, halt cpu */ - DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p); -} -/* ------------------------------------------------------------------------- - Fill card entry points - ------------------------------------------------------------------------- */ -void prepare_maestra_functions(PISDN_ADAPTER IoAdapter) { - ADAPTER *a = &IoAdapter->a; - a->ram_in = io_in; - a->ram_inw = io_inw; - a->ram_in_buffer = io_in_buffer; - a->ram_look_ahead = io_look_ahead; - a->ram_out = io_out; - a->ram_outw = io_outw; - a->ram_out_buffer = io_out_buffer; - a->ram_inc = io_inc; - IoAdapter->MemoryBase = BRI_MEMORY_BASE; - IoAdapter->MemorySize = BRI_MEMORY_SIZE; - IoAdapter->out = pr_out; - IoAdapter->dpc = pr_dpc; - IoAdapter->tst_irq = scom_test_int; - IoAdapter->clr_irq = scom_clear_int; - IoAdapter->pcm = (struct pc_maint *)MIPS_MAINT_OFFS; - IoAdapter->load = load_bri_hardware; - IoAdapter->disIrq = disable_bri_interrupt; - IoAdapter->rstFnc = reset_bri_hardware; - IoAdapter->stop = stop_bri_hardware; - IoAdapter->trapFnc = bri_cpu_trapped; - IoAdapter->diva_isr_handler = bri_ISR; - /* - Prepare OS dependent functions - */ - diva_os_prepare_maestra_functions(IoAdapter); -} -/* -------------------------------------------------------------------------- */ diff --git a/drivers/isdn/hardware/eicon/s_pri.c b/drivers/isdn/hardware/eicon/s_pri.c deleted file mode 100644 index ddd0e0ef8ed7..000000000000 --- a/drivers/isdn/hardware/eicon/s_pri.c +++ /dev/null @@ -1,205 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#include "platform.h" -#include "di_defs.h" -#include "pc.h" -#include "pr_pc.h" -#include "di.h" -#include "mi_pc.h" -#include "pc_maint.h" -#include "divasync.h" -#include "io.h" -#include "helpers.h" -#include "dsrv_pri.h" -#include "dsp_defs.h" -/*****************************************************************************/ -#define MAX_XLOG_SIZE (64 * 1024) -/* ------------------------------------------------------------------------- - Does return offset between ADAPTER->ram and real begin of memory - ------------------------------------------------------------------------- */ -static dword pri_ram_offset(ADAPTER *a) { - return ((dword)MP_SHARED_RAM_OFFSET); -} -/* ------------------------------------------------------------------------- - Recovery XLOG buffer from the card - ------------------------------------------------------------------------- */ -static void pri_cpu_trapped(PISDN_ADAPTER IoAdapter) { - byte __iomem *base; - word *Xlog; - dword regs[4], TrapID, size; - Xdesc xlogDesc; -/* - * check for trapped MIPS 46xx CPU, dump exception frame - */ - base = DIVA_OS_MEM_ATTACH_ADDRESS(IoAdapter); - TrapID = READ_DWORD(&base[0x80]); - if ((TrapID == 0x99999999) || (TrapID == 0x99999901)) - { - dump_trap_frame(IoAdapter, &base[0x90]); - IoAdapter->trapped = 1; - } - regs[0] = READ_DWORD(&base[MP_PROTOCOL_OFFSET + 0x70]); - regs[1] = READ_DWORD(&base[MP_PROTOCOL_OFFSET + 0x74]); - regs[2] = READ_DWORD(&base[MP_PROTOCOL_OFFSET + 0x78]); - regs[3] = READ_DWORD(&base[MP_PROTOCOL_OFFSET + 0x7c]); - regs[0] &= IoAdapter->MemorySize - 1; - if ((regs[0] < IoAdapter->MemorySize - 1)) - { - if (!(Xlog = (word *)diva_os_malloc(0, MAX_XLOG_SIZE))) { - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, base); - return; - } - size = IoAdapter->MemorySize - regs[0]; - if (size > MAX_XLOG_SIZE) - size = MAX_XLOG_SIZE; - memcpy_fromio(Xlog, &base[regs[0]], size); - xlogDesc.buf = Xlog; - xlogDesc.cnt = READ_WORD(&base[regs[1] & (IoAdapter->MemorySize - 1)]); - xlogDesc.out = READ_WORD(&base[regs[2] & (IoAdapter->MemorySize - 1)]); - dump_xlog_buffer(IoAdapter, &xlogDesc); - diva_os_free(0, Xlog); - IoAdapter->trapped = 2; - } - DIVA_OS_MEM_DETACH_ADDRESS(IoAdapter, base); -} -/* ------------------------------------------------------------------------- - Hardware reset of PRI card - ------------------------------------------------------------------------- */ -static void reset_pri_hardware(PISDN_ADAPTER IoAdapter) { - byte __iomem *p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - WRITE_BYTE(p, _MP_RISC_RESET | _MP_LED1 | _MP_LED2); - diva_os_wait(50); - WRITE_BYTE(p, 0x00); - diva_os_wait(50); - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); -} -/* ------------------------------------------------------------------------- - Stop Card Hardware - ------------------------------------------------------------------------- */ -static void stop_pri_hardware(PISDN_ADAPTER IoAdapter) { - dword i; - byte __iomem *p; - dword volatile __iomem *cfgReg = (void __iomem *)DIVA_OS_MEM_ATTACH_CFG(IoAdapter); - WRITE_DWORD(&cfgReg[3], 0); - WRITE_DWORD(&cfgReg[1], 0); - DIVA_OS_MEM_DETACH_CFG(IoAdapter, cfgReg); - IoAdapter->a.ram_out(&IoAdapter->a, &RAM->SWReg, SWREG_HALT_CPU); - i = 0; - while ((i < 100) && (IoAdapter->a.ram_in(&IoAdapter->a, &RAM->SWReg) != 0)) - { - diva_os_wait(1); - i++; - } - DBG_TRC(("%s: PRI stopped (%d)", IoAdapter->Name, i)) - cfgReg = (void __iomem *)DIVA_OS_MEM_ATTACH_CFG(IoAdapter); - WRITE_DWORD(&cfgReg[0], ((dword)(~0x03E00000))); - DIVA_OS_MEM_DETACH_CFG(IoAdapter, cfgReg); - diva_os_wait(1); - p = 
DIVA_OS_MEM_ATTACH_RESET(IoAdapter); - WRITE_BYTE(p, _MP_RISC_RESET | _MP_LED1 | _MP_LED2); - DIVA_OS_MEM_DETACH_RESET(IoAdapter, p); -} -static int load_pri_hardware(PISDN_ADAPTER IoAdapter) { - return (0); -} -/* -------------------------------------------------------------------------- - PRI Adapter interrupt Service Routine - -------------------------------------------------------------------------- */ -static int pri_ISR(struct _ISDN_ADAPTER *IoAdapter) { - byte __iomem *cfg = DIVA_OS_MEM_ATTACH_CFG(IoAdapter); - if (!(READ_DWORD(cfg) & 0x80000000)) { - DIVA_OS_MEM_DETACH_CFG(IoAdapter, cfg); - return (0); - } - /* - clear interrupt line - */ - WRITE_DWORD(cfg, (dword)~0x03E00000); - DIVA_OS_MEM_DETACH_CFG(IoAdapter, cfg); - IoAdapter->IrqCount++; - if (IoAdapter->Initialized) - { - diva_os_schedule_soft_isr(&IoAdapter->isr_soft_isr); - } - return (1); -} -/* ------------------------------------------------------------------------- - Disable interrupt in the card hardware - ------------------------------------------------------------------------- */ -static void disable_pri_interrupt(PISDN_ADAPTER IoAdapter) { - dword volatile __iomem *cfgReg = (dword volatile __iomem *)DIVA_OS_MEM_ATTACH_CFG(IoAdapter); - WRITE_DWORD(&cfgReg[3], 0); - WRITE_DWORD(&cfgReg[1], 0); - WRITE_DWORD(&cfgReg[0], (dword)(~0x03E00000)); - DIVA_OS_MEM_DETACH_CFG(IoAdapter, cfgReg); -} -/* ------------------------------------------------------------------------- - Install entry points for PRI Adapter - ------------------------------------------------------------------------- */ -static void prepare_common_pri_functions(PISDN_ADAPTER IoAdapter) { - ADAPTER *a = &IoAdapter->a; - a->ram_in = mem_in; - a->ram_inw = mem_inw; - a->ram_in_buffer = mem_in_buffer; - a->ram_look_ahead = mem_look_ahead; - a->ram_out = mem_out; - a->ram_outw = mem_outw; - a->ram_out_buffer = mem_out_buffer; - a->ram_inc = mem_inc; - a->ram_offset = pri_ram_offset; - a->ram_out_dw = mem_out_dw; - a->ram_in_dw = mem_in_dw; - a->istream_wakeup = pr_stream; - IoAdapter->out = pr_out; - IoAdapter->dpc = pr_dpc; - IoAdapter->tst_irq = scom_test_int; - IoAdapter->clr_irq = scom_clear_int; - IoAdapter->pcm = (struct pc_maint *)(MIPS_MAINT_OFFS - - MP_SHARED_RAM_OFFSET); - IoAdapter->load = load_pri_hardware; - IoAdapter->disIrq = disable_pri_interrupt; - IoAdapter->rstFnc = reset_pri_hardware; - IoAdapter->stop = stop_pri_hardware; - IoAdapter->trapFnc = pri_cpu_trapped; - IoAdapter->diva_isr_handler = pri_ISR; -} -/* ------------------------------------------------------------------------- - Install entry points for PRI Adapter - ------------------------------------------------------------------------- */ -void prepare_pri_functions(PISDN_ADAPTER IoAdapter) { - IoAdapter->MemorySize = MP_MEMORY_SIZE; - prepare_common_pri_functions(IoAdapter); - diva_os_prepare_pri_functions(IoAdapter); -} -/* ------------------------------------------------------------------------- - Install entry points for PRI Rev.2 Adapter - ------------------------------------------------------------------------- */ -void prepare_pri2_functions(PISDN_ADAPTER IoAdapter) { - IoAdapter->MemorySize = MP2_MEMORY_SIZE; - prepare_common_pri_functions(IoAdapter); - diva_os_prepare_pri2_functions(IoAdapter); -} -/* ------------------------------------------------------------------------- */ diff --git a/drivers/isdn/hardware/eicon/sdp_hdr.h b/drivers/isdn/hardware/eicon/sdp_hdr.h deleted file mode 100644 index 5e20f8d68673..000000000000 --- a/drivers/isdn/hardware/eicon/sdp_hdr.h +++ 
/dev/null @@ -1,117 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -#ifndef __DIVA_SOFT_DSP_TASK_ENTRY_H__ -#define __DIVA_SOFT_DSP_TASK_ENTRY_H__ -/* - The soft DSP image is described by a binary header contained at the - beginning of the image: - OFFSET FROM IMAGE START | VARIABLE - ------------------------------------------------------------------------ - DIVA_MIPS_TASK_IMAGE_LINK_OFFS | link to the next image - ---------------------------------------------------------------------- - DIVA_MIPS_TASK_IMAGE_GP_OFFS | image gp register value, void* - ---------------------------------------------------------------------- - DIVA_MIPS_TASK_IMAGE_ENTRY_OFFS | diva_mips_sdp_task_entry_t* - ---------------------------------------------------------------------- - DIVA_MIPS_TASK_IMAGE_LOAD_ADDR_OFFS | image start address (void*) - ---------------------------------------------------------------------- - DIVA_MIPS_TASK_IMAGE_END_ADDR_OFFS | image end address (void*) - ---------------------------------------------------------------------- - DIVA_MIPS_TASK_IMAGE_ID_STRING_OFFS | image id string char[...]; - ---------------------------------------------------------------------- -*/ -#define DIVA_MIPS_TASK_IMAGE_LINK_OFFS 0x6C -#define DIVA_MIPS_TASK_IMAGE_GP_OFFS 0x70 -#define DIVA_MIPS_TASK_IMAGE_ENTRY_OFFS 0x74 -#define DIVA_MIPS_TASK_IMAGE_LOAD_ADDR_OFFS 0x78 -#define DIVA_MIPS_TASK_IMAGE_END_ADDR_OFFS 0x7c -#define DIVA_MIPS_TASK_IMAGE_ID_STRING_OFFS 0x80 -/* - This function is called in order to set the GP register of this task. - It should always be called before any function of the - task is called -*/ -typedef void (*diva_task_set_prog_gp_proc_t)(void *new_gp); -/* - This function is called to clear .bss at task initialization step -*/ -typedef void (*diva_task_sys_reset_proc_t)(void); -/* - This function is called in order to provide the master's GP to the - task; it will be used by calls from the task to the master -*/ -typedef void (*diva_task_set_main_gp_proc_t)(void *main_gp); -/* - This function is called to provide the address of the 'dprintf' function - to the task -*/ -typedef word (*diva_prt_proc_t)(char *, ...); -typedef void (*diva_task_set_prt_proc_t)(diva_prt_proc_t fn); -/* - This function is called to set the task PID -*/ -typedef void (*diva_task_set_pid_proc_t)(dword id); -/* - This function is called for run-time task init -*/ -typedef int (*diva_task_run_time_init_proc_t)(void*, dword); -/* - This function is called from the system scheduler or from a timer -*/ -typedef void (*diva_task_callback_proc_t)(void); -/* - This callback is used by the task to get the current time in ms -*/ -typedef dword (*diva_task_get_tick_count_proc_t)(void); -typedef void (*diva_task_set_get_time_proc_t)(\ - diva_task_get_tick_count_proc_t fn); -typedef struct _diva_mips_sdp_task_entry { - diva_task_set_prog_gp_proc_t set_gp_proc; - diva_task_sys_reset_proc_t sys_reset_proc; - diva_task_set_main_gp_proc_t set_main_gp_proc; - diva_task_set_prt_proc_t set_dprintf_proc; - diva_task_set_pid_proc_t set_pid_proc; - diva_task_run_time_init_proc_t run_time_init_proc; - diva_task_callback_proc_t task_callback_proc; - diva_task_callback_proc_t timer_callback_proc; - diva_task_set_get_time_proc_t set_get_time_proc; - void *last_entry_proc; -} diva_mips_sdp_task_entry_t; -/* - 'last_entry_proc' should be set to zero and is used for future extensions -*/ -typedef struct _diva_mips_sw_task { - diva_mips_sdp_task_entry_t sdp_entry; - void *sdp_gp_reg; - void *own_gp_reg; -} diva_mips_sw_task_t; -#if !defined(DIVA_BRI2F_SDP_1_NAME) -#define DIVA_BRI2F_SDP_1_NAME "sdp0.2q0" -#endif -#if !defined(DIVA_BRI2F_SDP_2_NAME) -#define DIVA_BRI2F_SDP_2_NAME "sdp1.2q0" -#endif -#endif diff --git a/drivers/isdn/hardware/eicon/um_idi.c b/drivers/isdn/hardware/eicon/um_idi.c deleted file mode 100644 index db4dd4ff3642..000000000000 --- a/drivers/isdn/hardware/eicon/um_idi.c +++ /dev/null @@ -1,886 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* $Id: um_idi.c,v 1.14 2004/03/21 17:54:37 armin Exp $ */ - -#include "platform.h" -#include "di_defs.h" -#include "pc.h" -#include "dqueue.h" -#include "adapter.h" -#include "entity.h" -#include "um_xdi.h" -#include "um_idi.h" -#include "debuglib.h" -#include "divasync.h" - -#define DIVAS_MAX_XDI_ADAPTERS 64 - -/* -------------------------------------------------------------------------- - IMPORTS - -------------------------------------------------------------------------- */ -extern void diva_os_wakeup_read(void *os_context); -extern void diva_os_wakeup_close(void *os_context); -/* -------------------------------------------------------------------------- - LOCALS - -------------------------------------------------------------------------- */ -static LIST_HEAD(adapter_q); -static diva_os_spin_lock_t adapter_lock; - -static diva_um_idi_adapter_t *diva_um_idi_find_adapter(dword nr); -static void cleanup_adapter(diva_um_idi_adapter_t *a); -static void cleanup_entity(divas_um_idi_entity_t *e); -static int diva_user_mode_idi_adapter_features(diva_um_idi_adapter_t *a, - diva_um_idi_adapter_features_t - *features); -static int process_idi_request(divas_um_idi_entity_t *e, - const diva_um_idi_req_hdr_t *req); -static int process_idi_rc(divas_um_idi_entity_t *e, byte rc); -static int process_idi_ind(divas_um_idi_entity_t *e, byte ind); -static int write_return_code(divas_um_idi_entity_t *e, byte rc); - -/* -------------------------------------------------------------------------- - MAIN - -------------------------------------------------------------------------- */ -int diva_user_mode_idi_init(void) -{ - diva_os_initialize_spin_lock(&adapter_lock, "adapter"); - return (0); -} - -/* -------------------------------------------------------------------------- - Copy adapter features to user-supplied buffer - -------------------------------------------------------------------------- */ -static int -diva_user_mode_idi_adapter_features(diva_um_idi_adapter_t *a, - diva_um_idi_adapter_features_t * - features) -{ - IDI_SYNC_REQ sync_req; - - if ((a) && (a->d.request)) { - features->type = a->d.type; - features->features = a->d.features; - features->channels = a->d.channels; - memset(features->name, 0, sizeof(features->name)); - - sync_req.GetName.Req = 0; -
sync_req.GetName.Rc = IDI_SYNC_REQ_GET_NAME; - (*(a->d.request)) ((ENTITY *)&sync_req); - strlcpy(features->name, sync_req.GetName.name, - sizeof(features->name)); - - sync_req.GetSerial.Req = 0; - sync_req.GetSerial.Rc = IDI_SYNC_REQ_GET_SERIAL; - sync_req.GetSerial.serial = 0; - (*(a->d.request))((ENTITY *)&sync_req); - features->serial_number = sync_req.GetSerial.serial; - } - - return ((a) ? 0 : -1); -} - -/* -------------------------------------------------------------------------- - REMOVE ADAPTER - -------------------------------------------------------------------------- */ -void diva_user_mode_idi_remove_adapter(int adapter_nr) -{ - struct list_head *tmp; - diva_um_idi_adapter_t *a; - - list_for_each(tmp, &adapter_q) { - a = list_entry(tmp, diva_um_idi_adapter_t, link); - if (a->adapter_nr == adapter_nr) { - list_del(tmp); - cleanup_adapter(a); - DBG_LOG(("DIDD: del adapter(%d)", a->adapter_nr)); - diva_os_free(0, a); - break; - } - } -} - -/* -------------------------------------------------------------------------- - CALLED ON DRIVER EXIT (UNLOAD) - -------------------------------------------------------------------------- */ -void diva_user_mode_idi_finit(void) -{ - struct list_head *tmp, *safe; - diva_um_idi_adapter_t *a; - - list_for_each_safe(tmp, safe, &adapter_q) { - a = list_entry(tmp, diva_um_idi_adapter_t, link); - list_del(tmp); - cleanup_adapter(a); - DBG_LOG(("DIDD: del adapter(%d)", a->adapter_nr)); - diva_os_free(0, a); - } - diva_os_destroy_spin_lock(&adapter_lock, "adapter"); -} - -/* ------------------------------------------------------------------------- - CREATE AND INIT IDI ADAPTER - ------------------------------------------------------------------------- */ -int diva_user_mode_idi_create_adapter(const DESCRIPTOR *d, int adapter_nr) -{ - diva_os_spin_lock_magic_t old_irql; - diva_um_idi_adapter_t *a = - (diva_um_idi_adapter_t *) diva_os_malloc(0, - sizeof - (diva_um_idi_adapter_t)); - - if (!a) { - return (-1); - } - memset(a, 0x00, sizeof(*a)); - INIT_LIST_HEAD(&a->entity_q); - - a->d = *d; - a->adapter_nr = adapter_nr; - - DBG_LOG(("DIDD_ADD A(%d), type:%02x, features:%04x, channels:%d", - adapter_nr, a->d.type, a->d.features, a->d.channels)); - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "create_adapter"); - list_add_tail(&a->link, &adapter_q); - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "create_adapter"); - return (0); -} - -/* ------------------------------------------------------------------------ - Find adapter by Adapter number - ------------------------------------------------------------------------ */ -static diva_um_idi_adapter_t *diva_um_idi_find_adapter(dword nr) -{ - diva_um_idi_adapter_t *a = NULL; - struct list_head *tmp; - - list_for_each(tmp, &adapter_q) { - a = list_entry(tmp, diva_um_idi_adapter_t, link); - DBG_TRC(("find_adapter: (%d)-(%d)", nr, a->adapter_nr)); - if (a->adapter_nr == (int)nr) - break; - a = NULL; - } - return (a); -} - -/* ------------------------------------------------------------------------ - Cleanup this adapter and cleanup/delete all entities assigned - to this adapter - ------------------------------------------------------------------------ */ -static void cleanup_adapter(diva_um_idi_adapter_t *a) -{ - struct list_head *tmp, *safe; - divas_um_idi_entity_t *e; - - list_for_each_safe(tmp, safe, &a->entity_q) { - e = list_entry(tmp, divas_um_idi_entity_t, link); - list_del(tmp); - cleanup_entity(e); - if (e->os_context) { - diva_os_wakeup_read(e->os_context); - diva_os_wakeup_close(e->os_context); 
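
diva_user_mode_idi_finit() above walks adapter_q with list_for_each_safe() precisely because every node is freed mid-walk; the _safe variant latches the next pointer before list_del() runs. A stripped-down userspace rendering of that intrusive-list idiom (the kernel's <linux/list.h> supplies the real macros; this re-implements just enough of them to run):

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
/* 'n' caches the next node so 'pos' may be deleted and freed. */
#define list_for_each_safe(pos, n, head) \
	for (pos = (head)->next, n = pos->next; pos != (head); \
	     pos = n, n = pos->next)

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev; new->next = head;
	head->prev->next = new; head->prev = new;
}

static void list_del(struct list_head *e)
{
	e->prev->next = e->next; e->next->prev = e->prev;
}

struct adapter { int nr; struct list_head link; };

int main(void)
{
	struct list_head q = { &q, &q }, *tmp, *safe;

	for (int i = 0; i < 3; i++) {
		struct adapter *a = malloc(sizeof(*a));
		a->nr = i;
		list_add_tail(&a->link, &q);
	}
	list_for_each_safe(tmp, safe, &q) {
		struct adapter *a = list_entry(tmp, struct adapter, link);
		printf("del adapter(%d)\n", a->nr);
		list_del(tmp);
		free(a);	/* safe: iteration no longer touches 'a' */
	}
	return 0;
}
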
- } - } - memset(&a->d, 0x00, sizeof(DESCRIPTOR)); -} - -/* ------------------------------------------------------------------------ - Cleanup, but NOT delete this entity - ------------------------------------------------------------------------ */ -static void cleanup_entity(divas_um_idi_entity_t *e) -{ - e->os_ref = NULL; - e->status = 0; - e->adapter = NULL; - e->e.Id = 0; - e->rc_count = 0; - - e->status |= DIVA_UM_IDI_REMOVED; - e->status |= DIVA_UM_IDI_REMOVE_PENDING; - - diva_data_q_finit(&e->data); - diva_data_q_finit(&e->rc); -} - - -/* ------------------------------------------------------------------------ - Create ENTITY, link it to the adapter and remove pointer to entity - ------------------------------------------------------------------------ */ -void *divas_um_idi_create_entity(dword adapter_nr, void *file) -{ - divas_um_idi_entity_t *e; - diva_um_idi_adapter_t *a; - diva_os_spin_lock_magic_t old_irql; - - if ((e = (divas_um_idi_entity_t *) diva_os_malloc(0, sizeof(*e)))) { - memset(e, 0x00, sizeof(*e)); - if (! - (e->os_context = - diva_os_malloc(0, diva_os_get_context_size()))) { - DBG_LOG(("E(%08x) no memory for os context", e)); - diva_os_free(0, e); - return NULL; - } - memset(e->os_context, 0x00, diva_os_get_context_size()); - - if ((diva_data_q_init(&e->data, 2048 + 512, 16))) { - diva_os_free(0, e->os_context); - diva_os_free(0, e); - return NULL; - } - if ((diva_data_q_init(&e->rc, sizeof(diva_um_idi_ind_hdr_t), 2))) { - diva_data_q_finit(&e->data); - diva_os_free(0, e->os_context); - diva_os_free(0, e); - return NULL; - } - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "create_entity"); - /* - Look for Adapter requested - */ - if (!(a = diva_um_idi_find_adapter(adapter_nr))) { - /* - No adapter was found, or this adapter was removed - */ - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "create_entity"); - - DBG_LOG(("A: no adapter(%ld)", adapter_nr)); - - cleanup_entity(e); - diva_os_free(0, e->os_context); - diva_os_free(0, e); - - return NULL; - } - - e->os_ref = file; /* link to os handle */ - e->adapter = a; /* link to adapter */ - - list_add_tail(&e->link, &a->entity_q); /* link from adapter */ - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "create_entity"); - - DBG_LOG(("A(%ld), create E(%08x)", adapter_nr, e)); - } - - return (e); -} - -/* ------------------------------------------------------------------------ - Unlink entity and free memory - ------------------------------------------------------------------------ */ -int divas_um_idi_delete_entity(int adapter_nr, void *entity) -{ - divas_um_idi_entity_t *e; - diva_um_idi_adapter_t *a; - diva_os_spin_lock_magic_t old_irql; - - if (!(e = (divas_um_idi_entity_t *) entity)) - return (-1); - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "delete_entity"); - if ((a = e->adapter)) { - list_del(&e->link); - } - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "delete_entity"); - - diva_um_idi_stop_wdog(entity); - cleanup_entity(e); - diva_os_free(0, e->os_context); - memset(e, 0x00, sizeof(*e)); - - DBG_LOG(("A(%d) remove E:%08x", adapter_nr, e)); - diva_os_free(0, e); - - return (0); -} - -/* -------------------------------------------------------------------------- - Called by application to read data from IDI - -------------------------------------------------------------------------- */ -int diva_um_idi_read(void *entity, - void *os_handle, - void *dst, - int max_length, divas_um_idi_copy_to_user_fn_t cp_fn) -{ - divas_um_idi_entity_t *e; - diva_um_idi_adapter_t *a; - const void *data; 
- int length, ret = 0; - diva_um_idi_data_queue_t *q; - diva_os_spin_lock_magic_t old_irql; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "read"); - - e = (divas_um_idi_entity_t *) entity; - if (!e || (!(a = e->adapter)) || - (e->status & DIVA_UM_IDI_REMOVE_PENDING) || - (e->status & DIVA_UM_IDI_REMOVED) || - (a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) { - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "read"); - DBG_ERR(("E(%08x) read failed - adapter removed", e)) - return (-1); - } - - DBG_TRC(("A(%d) E(%08x) read(%d)", a->adapter_nr, e, max_length)); - - /* - Try to read return code first - */ - data = diva_data_q_get_segment4read(&e->rc); - q = &e->rc; - - /* - No return codes available, read indications now - */ - if (!data) { - if (!(e->status & DIVA_UM_IDI_RC_PENDING)) { - DBG_TRC(("A(%d) E(%08x) read data", a->adapter_nr, e)); - data = diva_data_q_get_segment4read(&e->data); - q = &e->data; - } - } else { - e->status &= ~DIVA_UM_IDI_RC_PENDING; - DBG_TRC(("A(%d) E(%08x) read rc", a->adapter_nr, e)); - } - - if (data) { - if ((length = diva_data_q_get_segment_length(q)) > - max_length) { - /* - Not enough space to read message - */ - DBG_ERR(("A: A(%d) E(%08x) read small buffer", - a->adapter_nr, e, ret)); - diva_os_leave_spin_lock(&adapter_lock, &old_irql, - "read"); - return (-2); - } - /* - Copy it to user, this function does access ONLY locked an verified - memory, also we can access it witch spin lock held - */ - - if ((ret = (*cp_fn) (os_handle, dst, data, length)) >= 0) { - /* - Acknowledge only if read was successful - */ - diva_data_q_ack_segment4read(q); - } - } - - - DBG_TRC(("A(%d) E(%08x) read=%d", a->adapter_nr, e, ret)); - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "read"); - - return (ret); -} - - -int diva_um_idi_write(void *entity, - void *os_handle, - const void *src, - int length, divas_um_idi_copy_from_user_fn_t cp_fn) -{ - divas_um_idi_entity_t *e; - diva_um_idi_adapter_t *a; - diva_um_idi_req_hdr_t *req; - void *data; - int ret = 0; - diva_os_spin_lock_magic_t old_irql; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "write"); - - e = (divas_um_idi_entity_t *) entity; - if (!e || (!(a = e->adapter)) || - (e->status & DIVA_UM_IDI_REMOVE_PENDING) || - (e->status & DIVA_UM_IDI_REMOVED) || - (a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) { - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - DBG_ERR(("E(%08x) write failed - adapter removed", e)) - return (-1); - } - - DBG_TRC(("A(%d) E(%08x) write(%d)", a->adapter_nr, e, length)); - - if ((length < sizeof(*req)) || (length > sizeof(e->buffer))) { - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - return (-2); - } - - if (e->status & DIVA_UM_IDI_RC_PENDING) { - DBG_ERR(("A: A(%d) E(%08x) rc pending", a->adapter_nr, e)); - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - return (-1); /* should wait for RC code first */ - } - - /* - Copy function does access only locked verified memory, - also it can be called with spin lock held - */ - if ((ret = (*cp_fn) (os_handle, e->buffer, src, length)) < 0) { - DBG_TRC(("A: A(%d) E(%08x) write error=%d", a->adapter_nr, - e, ret)); - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - return (ret); - } - - req = (diva_um_idi_req_hdr_t *)&e->buffer[0]; - - switch (req->type) { - case DIVA_UM_IDI_GET_FEATURES:{ - DBG_LOG(("A(%d) get_features", a->adapter_nr)); - if (!(data = - diva_data_q_get_segment4write(&e->data))) { - DBG_ERR(("A(%d) get_features, no free buffer", - a->adapter_nr)); - 
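
Note how diva_um_idi_read() never calls copy_to_user() itself: the OS layer passes in a divas_um_idi_copy_to_user_fn_t, so the portable core can move data while holding the adapter spin lock, as its comment points out, and only the callback knows how the destination address space is reached. A minimal sketch of the same inversion, with plain memcpy standing in for the user-copy primitive:

#include <stdio.h>
#include <string.h>

/* Mirrors divas_um_idi_copy_to_user_fn_t: returns bytes copied or <0. */
typedef int (*copy_to_user_fn_t)(void *os_handle, void *dst,
				 const void *src, int length);

/* Portable core: knows the queued segment, not the address space. */
static int core_read(const void *segment, int seg_len,
		     void *os_handle, void *dst, int max_len,
		     copy_to_user_fn_t cp_fn)
{
	if (seg_len > max_len)
		return -2;		/* caller's buffer too small */
	return cp_fn(os_handle, dst, segment, seg_len);
}

/* OS glue: in a kernel build this would wrap copy_to_user(). */
static int memcpy_to_user(void *os_handle, void *dst,
			  const void *src, int length)
{
	(void)os_handle;
	memcpy(dst, src, length);
	return length;
}

int main(void)
{
	char msg[] = "indication", buf[32];
	int n = core_read(msg, sizeof(msg), NULL, buf, sizeof(buf),
			  memcpy_to_user);
	printf("read=%d \"%s\"\n", n, buf);
	return 0;
}
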
diva_os_leave_spin_lock(&adapter_lock, - &old_irql, - "write"); - return (0); - } - diva_user_mode_idi_adapter_features(a, &(((diva_um_idi_ind_hdr_t - *) data)->hdr.features)); - ((diva_um_idi_ind_hdr_t *) data)->type = - DIVA_UM_IDI_IND_FEATURES; - ((diva_um_idi_ind_hdr_t *) data)->data_length = 0; - diva_data_q_ack_segment4write(&e->data, - sizeof(diva_um_idi_ind_hdr_t)); - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - - diva_os_wakeup_read(e->os_context); - } - break; - - case DIVA_UM_IDI_REQ: - case DIVA_UM_IDI_REQ_MAN: - case DIVA_UM_IDI_REQ_SIG: - case DIVA_UM_IDI_REQ_NET: - DBG_TRC(("A(%d) REQ(%02d)-(%02d)-(%08x)", a->adapter_nr, - req->Req, req->ReqCh, - req->type & DIVA_UM_IDI_REQ_TYPE_MASK)); - switch (process_idi_request(e, req)) { - case -1: - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - return (-1); - case -2: - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - diva_os_wakeup_read(e->os_context); - break; - default: - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - break; - } - break; - - default: - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "write"); - return (-1); - } - - DBG_TRC(("A(%d) E(%08x) write=%d", a->adapter_nr, e, ret)); - - return (ret); -} - -/* -------------------------------------------------------------------------- - CALLBACK FROM XDI - -------------------------------------------------------------------------- */ -static void diva_um_idi_xdi_callback(ENTITY *entity) -{ - divas_um_idi_entity_t *e = DIVAS_CONTAINING_RECORD(entity, - divas_um_idi_entity_t, - e); - diva_os_spin_lock_magic_t old_irql; - int call_wakeup = 0; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "xdi_callback"); - - if (e->e.complete == 255) { - if (!(e->status & DIVA_UM_IDI_REMOVE_PENDING)) { - diva_um_idi_stop_wdog(e); - } - if ((call_wakeup = process_idi_rc(e, e->e.Rc))) { - if (e->rc_count) { - e->rc_count--; - } - } - e->e.Rc = 0; - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "xdi_callback"); - - if (call_wakeup) { - diva_os_wakeup_read(e->os_context); - diva_os_wakeup_close(e->os_context); - } - } else { - if (e->status & DIVA_UM_IDI_REMOVE_PENDING) { - e->e.RNum = 0; - e->e.RNR = 2; - } else { - call_wakeup = process_idi_ind(e, e->e.Ind); - } - e->e.Ind = 0; - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "xdi_callback"); - if (call_wakeup) { - diva_os_wakeup_read(e->os_context); - } - } -} - -static int process_idi_request(divas_um_idi_entity_t *e, - const diva_um_idi_req_hdr_t *req) -{ - int assign = 0; - byte Req = (byte) req->Req; - dword type = req->type & DIVA_UM_IDI_REQ_TYPE_MASK; - - if (!e->e.Id || !e->e.callback) { /* not assigned */ - if (Req != ASSIGN) { - DBG_ERR(("A: A(%d) E(%08x) not assigned", - e->adapter->adapter_nr, e)); - return (-1); /* NOT ASSIGNED */ - } else { - switch (type) { - case DIVA_UM_IDI_REQ_TYPE_MAN: - e->e.Id = MAN_ID; - DBG_TRC(("A(%d) E(%08x) assign MAN", - e->adapter->adapter_nr, e)); - break; - - case DIVA_UM_IDI_REQ_TYPE_SIG: - e->e.Id = DSIG_ID; - DBG_TRC(("A(%d) E(%08x) assign SIG", - e->adapter->adapter_nr, e)); - break; - - case DIVA_UM_IDI_REQ_TYPE_NET: - e->e.Id = NL_ID; - DBG_TRC(("A(%d) E(%08x) assign NET", - e->adapter->adapter_nr, e)); - break; - - default: - DBG_ERR(("A: A(%d) E(%08x) unknown type=%08x", - e->adapter->adapter_nr, e, - type)); - return (-1); - } - } - e->e.XNum = 1; - e->e.RNum = 1; - e->e.callback = diva_um_idi_xdi_callback; - e->e.X = &e->XData; - e->e.R = &e->RData; - assign = 1; - } - e->status |= DIVA_UM_IDI_RC_PENDING; - 
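
diva_um_idi_xdi_callback() above is handed only the embedded ENTITY and recovers the enclosing divas_um_idi_entity_t with DIVAS_CONTAINING_RECORD, the driver's spelling of the kernel's container_of(). The pointer arithmetic is simple enough to show standalone:

#include <stdio.h>
#include <stddef.h>

#define CONTAINING_RECORD(address, type, field) \
	((type *)((char *)(address) - offsetof(type, field)))

struct entity_core { int Rc; int Ind; };	/* stand-in for ENTITY */

struct um_entity {
	int status;
	struct entity_core e;	/* embedded, not pointed to */
	int rc_count;
};

/* The callback gets the inner struct; recover the outer one. */
static void xdi_callback(struct entity_core *core)
{
	struct um_entity *ue = CONTAINING_RECORD(core, struct um_entity, e);

	printf("status=%d rc_count=%d Rc=%d\n",
	       ue->status, ue->rc_count, core->Rc);
}

int main(void)
{
	struct um_entity ue = { .status = 1, .e = { .Rc = 0xff }, .rc_count = 2 };

	xdi_callback(&ue.e);
	return 0;
}
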
e->e.Req = Req; - e->e.ReqCh = (byte) req->ReqCh; - e->e.X->PLength = (word) req->data_length; - e->e.X->P = (byte *)&req[1]; /* Our buffer is safe */ - - DBG_TRC(("A(%d) E(%08x) request(%02x-%02x-%02x (%d))", - e->adapter->adapter_nr, e, e->e.Id, e->e.Req, - e->e.ReqCh, e->e.X->PLength)); - - e->rc_count++; - - if (e->adapter && e->adapter->d.request) { - diva_um_idi_start_wdog(e); - (*(e->adapter->d.request)) (&e->e); - } - - if (assign) { - if (e->e.Rc == OUT_OF_RESOURCES) { - /* - XDI has no entities more, call was not forwarded to the card, - no callback will be scheduled - */ - DBG_ERR(("A: A(%d) E(%08x) XDI out of entities", - e->adapter->adapter_nr, e)); - - e->e.Id = 0; - e->e.ReqCh = 0; - e->e.RcCh = 0; - e->e.Ind = 0; - e->e.IndCh = 0; - e->e.XNum = 0; - e->e.RNum = 0; - e->e.callback = NULL; - e->e.X = NULL; - e->e.R = NULL; - write_return_code(e, ASSIGN_RC | OUT_OF_RESOURCES); - return (-2); - } else { - e->status |= DIVA_UM_IDI_ASSIGN_PENDING; - } - } - - return (0); -} - -static int process_idi_rc(divas_um_idi_entity_t *e, byte rc) -{ - DBG_TRC(("A(%d) E(%08x) rc(%02x-%02x-%02x)", - e->adapter->adapter_nr, e, e->e.Id, rc, e->e.RcCh)); - - if (e->status & DIVA_UM_IDI_ASSIGN_PENDING) { - e->status &= ~DIVA_UM_IDI_ASSIGN_PENDING; - if (rc != ASSIGN_OK) { - DBG_ERR(("A: A(%d) E(%08x) ASSIGN failed", - e->adapter->adapter_nr, e)); - e->e.callback = NULL; - e->e.Id = 0; - e->e.Req = 0; - e->e.ReqCh = 0; - e->e.Rc = 0; - e->e.RcCh = 0; - e->e.Ind = 0; - e->e.IndCh = 0; - e->e.X = NULL; - e->e.R = NULL; - e->e.XNum = 0; - e->e.RNum = 0; - } - } - if ((e->e.Req == REMOVE) && e->e.Id && (rc == 0xff)) { - DBG_ERR(("A: A(%d) E(%08x) discard OK in REMOVE", - e->adapter->adapter_nr, e)); - return (0); /* let us do it in the driver */ - } - if ((e->e.Req == REMOVE) && (!e->e.Id)) { /* REMOVE COMPLETE */ - e->e.callback = NULL; - e->e.Id = 0; - e->e.Req = 0; - e->e.ReqCh = 0; - e->e.Rc = 0; - e->e.RcCh = 0; - e->e.Ind = 0; - e->e.IndCh = 0; - e->e.X = NULL; - e->e.R = NULL; - e->e.XNum = 0; - e->e.RNum = 0; - e->rc_count = 0; - } - if ((e->e.Req == REMOVE) && (rc != 0xff)) { /* REMOVE FAILED */ - DBG_ERR(("A: A(%d) E(%08x) REMOVE FAILED", - e->adapter->adapter_nr, e)); - } - write_return_code(e, rc); - - return (1); -} - -static int process_idi_ind(divas_um_idi_entity_t *e, byte ind) -{ - int do_wakeup = 0; - - if (e->e.complete != 0x02) { - diva_um_idi_ind_hdr_t *pind = - (diva_um_idi_ind_hdr_t *) - diva_data_q_get_segment4write(&e->data); - if (pind) { - e->e.RNum = 1; - e->e.R->P = (byte *)&pind[1]; - e->e.R->PLength = - (word) (diva_data_q_get_max_length(&e->data) - - sizeof(*pind)); - DBG_TRC(("A(%d) E(%08x) ind_1(%02x-%02x-%02x)-[%d-%d]", - e->adapter->adapter_nr, e, e->e.Id, ind, - e->e.IndCh, e->e.RLength, - e->e.R->PLength)); - - } else { - DBG_TRC(("A(%d) E(%08x) ind(%02x-%02x-%02x)-RNR", - e->adapter->adapter_nr, e, e->e.Id, ind, - e->e.IndCh)); - e->e.RNum = 0; - e->e.RNR = 1; - do_wakeup = 1; - } - } else { - diva_um_idi_ind_hdr_t *pind = - (diva_um_idi_ind_hdr_t *) (e->e.R->P); - - DBG_TRC(("A(%d) E(%08x) ind(%02x-%02x-%02x)-[%d]", - e->adapter->adapter_nr, e, e->e.Id, ind, - e->e.IndCh, e->e.R->PLength)); - - pind--; - pind->type = DIVA_UM_IDI_IND; - pind->hdr.ind.Ind = ind; - pind->hdr.ind.IndCh = e->e.IndCh; - pind->data_length = e->e.R->PLength; - diva_data_q_ack_segment4write(&e->data, - (int) (sizeof(*pind) + - e->e.R->PLength)); - do_wakeup = 1; - } - - if ((e->status & DIVA_UM_IDI_RC_PENDING) && !e->rc.count) { - do_wakeup = 0; - } - - return (do_wakeup); -} - -/* 
-------------------------------------------------------------------------- - Write return code to the return code queue of entity - -------------------------------------------------------------------------- */ -static int write_return_code(divas_um_idi_entity_t *e, byte rc) -{ - diva_um_idi_ind_hdr_t *prc; - - if (!(prc = - (diva_um_idi_ind_hdr_t *) diva_data_q_get_segment4write(&e->rc))) - { - DBG_ERR(("A: A(%d) E(%08x) rc(%02x) lost", - e->adapter->adapter_nr, e, rc)); - e->status &= ~DIVA_UM_IDI_RC_PENDING; - return (-1); - } - - prc->type = DIVA_UM_IDI_IND_RC; - prc->hdr.rc.Rc = rc; - prc->hdr.rc.RcCh = e->e.RcCh; - prc->data_length = 0; - diva_data_q_ack_segment4write(&e->rc, sizeof(*prc)); - - return (0); -} - -/* -------------------------------------------------------------------------- - Return amount of entries that can be bead from this entity or - -1 if adapter was removed - -------------------------------------------------------------------------- */ -int diva_user_mode_idi_ind_ready(void *entity, void *os_handle) -{ - divas_um_idi_entity_t *e; - diva_um_idi_adapter_t *a; - diva_os_spin_lock_magic_t old_irql; - int ret; - - if (!entity) - return (-1); - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "ind_ready"); - e = (divas_um_idi_entity_t *) entity; - a = e->adapter; - - if ((!a) || (a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) { - /* - Adapter was unloaded - */ - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready"); - return (-1); /* adapter was removed */ - } - if (e->status & DIVA_UM_IDI_REMOVED) { - /* - entity was removed as result of adapter removal - user should assign this entity again - */ - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready"); - return (-1); - } - - ret = e->rc.count + e->data.count; - - if ((e->status & DIVA_UM_IDI_RC_PENDING) && !e->rc.count) { - ret = 0; - } - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "ind_ready"); - - return (ret); -} - -void *diva_um_id_get_os_context(void *entity) -{ - return (((divas_um_idi_entity_t *) entity)->os_context); -} - -int divas_um_idi_entity_assigned(void *entity) -{ - divas_um_idi_entity_t *e; - diva_um_idi_adapter_t *a; - int ret; - diva_os_spin_lock_magic_t old_irql; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "assigned?"); - - - e = (divas_um_idi_entity_t *) entity; - if (!e || (!(a = e->adapter)) || - (e->status & DIVA_UM_IDI_REMOVED) || - (a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) { - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "assigned?"); - return (0); - } - - e->status |= DIVA_UM_IDI_REMOVE_PENDING; - - ret = (e->e.Id || e->rc_count - || (e->status & DIVA_UM_IDI_ASSIGN_PENDING)); - - DBG_TRC(("Id:%02x, rc_count:%d, status:%08x", e->e.Id, e->rc_count, - e->status)) - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "assigned?"); - - return (ret); -} - -int divas_um_idi_entity_start_remove(void *entity) -{ - divas_um_idi_entity_t *e; - diva_um_idi_adapter_t *a; - diva_os_spin_lock_magic_t old_irql; - - diva_os_enter_spin_lock(&adapter_lock, &old_irql, "start_remove"); - - e = (divas_um_idi_entity_t *) entity; - if (!e || (!(a = e->adapter)) || - (e->status & DIVA_UM_IDI_REMOVED) || - (a->status & DIVA_UM_IDI_ADAPTER_REMOVED)) { - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove"); - return (0); - } - - if (e->rc_count) { - /* - Entity BUSY - */ - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove"); - return (1); - } - - if (!e->e.Id) { - /* - Remove request was already pending, and arrived now - */ - 
diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove"); - return (0); /* REMOVE was pending */ - } - - /* - Now send remove request - */ - e->e.Req = REMOVE; - e->e.ReqCh = 0; - - e->rc_count++; - - DBG_TRC(("A(%d) E(%08x) request(%02x-%02x-%02x (%d))", - e->adapter->adapter_nr, e, e->e.Id, e->e.Req, - e->e.ReqCh, e->e.X->PLength)); - - if (a->d.request) - (*(a->d.request)) (&e->e); - - diva_os_leave_spin_lock(&adapter_lock, &old_irql, "start_remove"); - - return (0); -} diff --git a/drivers/isdn/hardware/eicon/um_idi.h b/drivers/isdn/hardware/eicon/um_idi.h deleted file mode 100644 index 9aedd9e351a3..000000000000 --- a/drivers/isdn/hardware/eicon/um_idi.h +++ /dev/null @@ -1,44 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: um_idi.h,v 1.6 2004/03/21 17:26:01 armin Exp $ */ - -#ifndef __DIVA_USER_MODE_IDI_CORE_H__ -#define __DIVA_USER_MODE_IDI_CORE_H__ - - -/* - interface between UM IDI core and OS dependent part -*/ -int diva_user_mode_idi_init(void); -void diva_user_mode_idi_finit(void); -void *divas_um_idi_create_entity(dword adapter_nr, void *file); -int divas_um_idi_delete_entity(int adapter_nr, void *entity); - -typedef int (*divas_um_idi_copy_to_user_fn_t) (void *os_handle, - void *dst, - const void *src, - int length); -typedef int (*divas_um_idi_copy_from_user_fn_t) (void *os_handle, - void *dst, - const void *src, - int length); - -int diva_um_idi_read(void *entity, - void *os_handle, - void *dst, - int max_length, divas_um_idi_copy_to_user_fn_t cp_fn); - -int diva_um_idi_write(void *entity, - void *os_handle, - const void *src, - int length, divas_um_idi_copy_from_user_fn_t cp_fn); - -int diva_user_mode_idi_ind_ready(void *entity, void *os_handle); -void *diva_um_id_get_os_context(void *entity); -int diva_os_get_context_size(void); -int divas_um_idi_entity_assigned(void *entity); -int divas_um_idi_entity_start_remove(void *entity); - -void diva_um_idi_start_wdog(void *entity); -void diva_um_idi_stop_wdog(void *entity); - -#endif diff --git a/drivers/isdn/hardware/eicon/um_xdi.h b/drivers/isdn/hardware/eicon/um_xdi.h deleted file mode 100644 index 1f37aa4efd18..000000000000 --- a/drivers/isdn/hardware/eicon/um_xdi.h +++ /dev/null @@ -1,69 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: um_xdi.h,v 1.1.2.2 2002/10/02 14:38:38 armin Exp $ */ - -#ifndef __DIVA_USER_MODE_XDI_H__ -#define __DIVA_USER_MODE_XDI_H__ - -/* - Contains declaratiom of structures shared between application - and user mode idi driver -*/ - -typedef struct _diva_um_idi_adapter_features { - dword type; - dword features; - dword channels; - dword serial_number; - char name[128]; -} diva_um_idi_adapter_features_t; - -#define DIVA_UM_IDI_REQ_MASK 0x0000FFFF -#define DIVA_UM_IDI_REQ_TYPE_MASK (~(DIVA_UM_IDI_REQ_MASK)) -#define DIVA_UM_IDI_GET_FEATURES 1 /* trigger features indication */ -#define DIVA_UM_IDI_REQ 2 -#define DIVA_UM_IDI_REQ_TYPE_MAN 0x10000000 -#define DIVA_UM_IDI_REQ_TYPE_SIG 0x20000000 -#define DIVA_UM_IDI_REQ_TYPE_NET 0x30000000 -#define DIVA_UM_IDI_REQ_MAN (DIVA_UM_IDI_REQ | DIVA_UM_IDI_REQ_TYPE_MAN) -#define DIVA_UM_IDI_REQ_SIG (DIVA_UM_IDI_REQ | DIVA_UM_IDI_REQ_TYPE_SIG) -#define DIVA_UM_IDI_REQ_NET (DIVA_UM_IDI_REQ | DIVA_UM_IDI_REQ_TYPE_NET) -/* - data_length bytes will follow this structure -*/ -typedef struct _diva_um_idi_req_hdr { - dword type; - dword Req; - dword ReqCh; - dword data_length; -} diva_um_idi_req_hdr_t; - -typedef struct _diva_um_idi_ind_parameters { - dword Ind; - dword IndCh; -} diva_um_idi_ind_parameters_t; - -typedef struct 
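
The masks at the top of um_xdi.h pack two facts into one dword: the low 16 bits (DIVA_UM_IDI_REQ_MASK) carry the command, and the high bits select the entity type, which is why process_idi_request() earlier masked with DIVA_UM_IDI_REQ_TYPE_MASK before switching on MAN/SIG/NET. A small encode/decode sketch reusing those constants (names shortened):

#include <stdio.h>
#include <stdint.h>

#define REQ_MASK  0x0000FFFFu	/* DIVA_UM_IDI_REQ_MASK */
#define TYPE_MASK (~REQ_MASK)	/* DIVA_UM_IDI_REQ_TYPE_MASK */
#define REQ       2u
#define TYPE_MAN  0x10000000u
#define TYPE_SIG  0x20000000u
#define TYPE_NET  0x30000000u

static const char *type_name(uint32_t type)
{
	switch (type & TYPE_MASK) {
	case TYPE_MAN: return "MAN";
	case TYPE_SIG: return "SIG";
	case TYPE_NET: return "NET";
	default:       return "???";
	}
}

int main(void)
{
	uint32_t req_sig = REQ | TYPE_SIG;	/* DIVA_UM_IDI_REQ_SIG */

	printf("cmd=%u type=%s\n", req_sig & REQ_MASK, type_name(req_sig));
	return 0;
}
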
_diva_um_idi_rc_parameters { - dword Rc; - dword RcCh; -} diva_um_idi_rc_parameters_t; - -typedef union _diva_um_idi_ind { - diva_um_idi_adapter_features_t features; - diva_um_idi_ind_parameters_t ind; - diva_um_idi_rc_parameters_t rc; -} diva_um_idi_ind_t; - -#define DIVA_UM_IDI_IND_FEATURES 1 /* features indication */ -#define DIVA_UM_IDI_IND 2 -#define DIVA_UM_IDI_IND_RC 3 -/* - data_length bytes of data follow - this structure -*/ -typedef struct _diva_um_idi_ind_hdr { - dword type; - diva_um_idi_ind_t hdr; - dword data_length; -} diva_um_idi_ind_hdr_t; - -#endif diff --git a/drivers/isdn/hardware/eicon/xdi_adapter.h b/drivers/isdn/hardware/eicon/xdi_adapter.h deleted file mode 100644 index b036e217c659..000000000000 --- a/drivers/isdn/hardware/eicon/xdi_adapter.h +++ /dev/null @@ -1,71 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: xdi_adapter.h,v 1.7 2004/03/21 17:26:01 armin Exp $ */ - -#ifndef __DIVA_OS_XDI_ADAPTER_H__ -#define __DIVA_OS_XDI_ADAPTER_H__ - -#define DIVAS_XDI_ADAPTER_BUS_PCI 0 -#define DIVAS_XDI_ADAPTER_BUS_ISA 1 - -typedef struct _divas_pci_card_resources { - byte bus; - byte func; - void *hdev; - - dword bar[8]; /* contains context of appropriate BAR Register */ - void __iomem *addr[8]; /* same bar, but mapped into memory */ - dword length[8]; /* bar length */ - int mem_type_id[MAX_MEM_TYPE]; - unsigned int qoffset; - byte irq; -} divas_pci_card_resources_t; - -typedef union _divas_card_resources { - divas_pci_card_resources_t pci; -} divas_card_resources_t; - -struct _diva_os_xdi_adapter; -typedef int (*diva_init_card_proc_t)(struct _diva_os_xdi_adapter *a); -typedef int (*diva_cmd_card_proc_t)(struct _diva_os_xdi_adapter *a, - diva_xdi_um_cfg_cmd_t *data, - int length); -typedef void (*diva_xdi_clear_interrupts_proc_t)(struct - _diva_os_xdi_adapter *); - -#define DIVA_XDI_MBOX_BUSY 1 -#define DIVA_XDI_MBOX_WAIT_XLOG 2 - -typedef struct _xdi_mbox_t { - dword status; - diva_xdi_um_cfg_cmd_data_t cmd_data; - dword data_length; - void *data; -} xdi_mbox_t; - -typedef struct _diva_os_idi_adapter_interface { - diva_init_card_proc_t cleanup_adapter_proc; - diva_cmd_card_proc_t cmd_proc; -} diva_os_idi_adapter_interface_t; - -typedef struct _diva_os_xdi_adapter { - struct list_head link; - int CardIndex; - int CardOrdinal; - int controller; /* number of this controller */ - int Bus; /* PCI, ISA, ... 
*/ - divas_card_resources_t resources; - char port_name[24]; - ISDN_ADAPTER xdi_adapter; - xdi_mbox_t xdi_mbox; - diva_os_idi_adapter_interface_t interface; - struct _diva_os_xdi_adapter *slave_adapters[3]; - void *slave_list; - void *proc_adapter_dir; /* adapterX proc entry */ - void *proc_info; /* info proc entry */ - void *proc_grp_opt; /* group_optimization */ - void *proc_d_l1_down; /* dynamic_l1_down */ - volatile diva_xdi_clear_interrupts_proc_t clear_interrupts_proc; - dword dsp_mask; -} diva_os_xdi_adapter_t; - -#endif diff --git a/drivers/isdn/hardware/eicon/xdi_msg.h b/drivers/isdn/hardware/eicon/xdi_msg.h deleted file mode 100644 index 0646079bf466..000000000000 --- a/drivers/isdn/hardware/eicon/xdi_msg.h +++ /dev/null @@ -1,128 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* $Id: xdi_msg.h,v 1.1.2.2 2001/02/16 08:40:36 armin Exp $ */ - -#ifndef __DIVA_XDI_UM_CFG_MESSAGE_H__ -#define __DIVA_XDI_UM_CFG_MESSAGE_H__ - -/* - Definition of messages used to communicate between - XDI device driver and user mode configuration utility -*/ - -/* - As acknowledge one DWORD - card ordinal will be read from the card -*/ -#define DIVA_XDI_UM_CMD_GET_CARD_ORDINAL 0 - -/* - no acknowledge will be generated, memory block will be written in the - memory at given offset -*/ -#define DIVA_XDI_UM_CMD_WRITE_SDRAM_BLOCK 1 - -/* - no acknowledge will be genatated, FPGA will be programmed -*/ -#define DIVA_XDI_UM_CMD_WRITE_FPGA 2 - -/* - As acknowledge block of SDRAM will be read in the user buffer -*/ -#define DIVA_XDI_UM_CMD_READ_SDRAM 3 - -/* - As acknowledge dword with serial number will be read in the user buffer -*/ -#define DIVA_XDI_UM_CMD_GET_SERIAL_NR 4 - -/* - As acknowledge struct consisting from 9 dwords with PCI info. - dword[0...7] = 8 PCI BARS - dword[9] = IRQ -*/ -#define DIVA_XDI_UM_CMD_GET_PCI_HW_CONFIG 5 - -/* - Reset of the board + activation of primary - boot loader -*/ -#define DIVA_XDI_UM_CMD_RESET_ADAPTER 6 - -/* - Called after code download to start adapter - at specified address - Start does set new set of features due to fact that we not know - if protocol features have changed -*/ -#define DIVA_XDI_UM_CMD_START_ADAPTER 7 - -/* - Stop adapter, called if user - wishes to stop adapter without unload - of the driver, to reload adapter with - different protocol -*/ -#define DIVA_XDI_UM_CMD_STOP_ADAPTER 8 - -/* - Get state of current adapter - Acknowledge is one dword with following values: - 0 - adapter ready for download - 1 - adapter running - 2 - adapter dead - 3 - out of service, driver should be restarted or hardware problem -*/ -#define DIVA_XDI_UM_CMD_GET_CARD_STATE 9 - -/* - Reads XLOG entry from the card -*/ -#define DIVA_XDI_UM_CMD_READ_XLOG_ENTRY 10 - -/* - Set untranslated protocol code features -*/ -#define DIVA_XDI_UM_CMD_SET_PROTOCOL_FEATURES 11 - -typedef struct _diva_xdi_um_cfg_cmd_data_set_features { - dword features; -} diva_xdi_um_cfg_cmd_data_set_features_t; - -typedef struct _diva_xdi_um_cfg_cmd_data_start { - dword offset; - dword features; -} diva_xdi_um_cfg_cmd_data_start_t; - -typedef struct _diva_xdi_um_cfg_cmd_data_write_sdram { - dword ram_number; - dword offset; - dword length; -} diva_xdi_um_cfg_cmd_data_write_sdram_t; - -typedef struct _diva_xdi_um_cfg_cmd_data_write_fpga { - dword fpga_number; - dword image_length; -} diva_xdi_um_cfg_cmd_data_write_fpga_t; - -typedef struct _diva_xdi_um_cfg_cmd_data_read_sdram { - dword ram_number; - dword offset; - dword length; -} diva_xdi_um_cfg_cmd_data_read_sdram_t; - -typedef union 
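
xdi_msg.h uses the same wire convention as um_xdi.h: a fixed diva_xdi_um_cfg_cmd_t header whose data_length field announces how many plain binary bytes follow it in the same buffer. A hedged sketch of building and reading such a message (fields trimmed; a real decoder would also bound-check data_length against the buffer size):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Fixed header; 'data_length' payload bytes follow it in the buffer. */
struct cfg_cmd {
	uint32_t adapter;	/* adapter number 1...N */
	uint32_t command;
	uint32_t data_length;	/* plain binary data will follow */
};

int main(void)
{
	unsigned char buf[sizeof(struct cfg_cmd) + 32];
	struct cfg_cmd hdr = { .adapter = 1, .command = 1 }, in;
	const char payload[] = "fw-block";

	/* Encode: header first, then exactly data_length payload bytes. */
	hdr.data_length = sizeof(payload);
	memcpy(buf, &hdr, sizeof(hdr));
	memcpy(buf + sizeof(hdr), payload, sizeof(payload));

	/* Decode: read the header back, then find the trailing payload. */
	memcpy(&in, buf, sizeof(in));
	printf("adapter=%u cmd=%u len=%u data=\"%s\"\n",
	       in.adapter, in.command, in.data_length,
	       (const char *)(buf + sizeof(in)));
	return 0;
}
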
_diva_xdi_um_cfg_cmd_data { - diva_xdi_um_cfg_cmd_data_write_sdram_t write_sdram; - diva_xdi_um_cfg_cmd_data_write_fpga_t write_fpga; - diva_xdi_um_cfg_cmd_data_read_sdram_t read_sdram; - diva_xdi_um_cfg_cmd_data_start_t start; - diva_xdi_um_cfg_cmd_data_set_features_t features; -} diva_xdi_um_cfg_cmd_data_t; - -typedef struct _diva_xdi_um_cfg_cmd { - dword adapter; /* Adapter number 1...N */ - dword command; - diva_xdi_um_cfg_cmd_data_t command_data; - dword data_length; /* Plain binary data will follow */ -} diva_xdi_um_cfg_cmd_t; - -#endif diff --git a/drivers/isdn/hardware/eicon/xdi_vers.h b/drivers/isdn/hardware/eicon/xdi_vers.h deleted file mode 100644 index b3479e59c7c5..000000000000 --- a/drivers/isdn/hardware/eicon/xdi_vers.h +++ /dev/null @@ -1,26 +0,0 @@ - -/* - * - Copyright (c) Eicon Networks, 2002. - * - This source file is supplied for the use with - Eicon Networks range of DIVA Server Adapters. - * - Eicon File Revision : 2.1 - * - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2, or (at your option) - any later version. - * - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY - implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - * - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ -static char diva_xdi_common_code_build[] = "102-52"; diff --git a/drivers/isdn/hardware/mISDN/w6692.c b/drivers/isdn/hardware/mISDN/w6692.c index 5acf6ab67cd3..6f60aced11c5 100644 --- a/drivers/isdn/hardware/mISDN/w6692.c +++ b/drivers/isdn/hardware/mISDN/w6692.c @@ -52,10 +52,7 @@ static const struct w6692map w6692_map[] = {W6692_USR, "USR W6692"} }; -#ifndef PCI_VENDOR_ID_USR -#define PCI_VENDOR_ID_USR 0x16ec #define PCI_DEVICE_ID_USR_6692 0x3409 -#endif struct w6692_ch { struct bchannel bch; diff --git a/drivers/isdn/hisax/hfc_pci.c b/drivers/isdn/hisax/hfc_pci.c index ea0e4c6de3fb..5b719b561860 100644 --- a/drivers/isdn/hisax/hfc_pci.c +++ b/drivers/isdn/hisax/hfc_pci.c @@ -274,7 +274,7 @@ hfcpci_empty_fifo(struct BCState *bcs, bzfifo_type *bz, u_char *bdata, int count u_char *ptr, *ptr1, new_f2; struct sk_buff *skb; struct IsdnCardState *cs = bcs->cs; - int total, maxlen, new_z2; + int maxlen, new_z2; z_type *zp; if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO)) @@ -297,7 +297,6 @@ hfcpci_empty_fifo(struct BCState *bcs, bzfifo_type *bz, u_char *bdata, int count } else if (!(skb = dev_alloc_skb(count - 3))) printk(KERN_WARNING "HFCPCI: receive out of memory\n"); else { - total = count; count -= 3; ptr = skb_put(skb, count); diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c index efb976a863d2..5f82036fe322 100644 --- a/drivers/lightnvm/core.c +++ b/drivers/lightnvm/core.c @@ -389,7 +389,7 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create) goto err_dev; } - tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node, NULL); + tqueue = blk_alloc_queue_node(GFP_KERNEL, dev->q->node); if (!tqueue) { ret = -ENOMEM; goto err_disk; @@ -974,7 +974,7 @@ static int nvm_get_bb_meta(struct nvm_dev *dev, sector_t slba, struct ppa_addr ppa; u8 *blks; int ch, lun, nr_blks; - int ret; + int ret = 0; ppa.ppa = slba; ppa 
= dev_to_generic_addr(dev, ppa); @@ -1140,30 +1140,33 @@ EXPORT_SYMBOL(nvm_alloc_dev); int nvm_register(struct nvm_dev *dev) { - int ret; + int ret, exp_pool_size; if (!dev->q || !dev->ops) return -EINVAL; - dev->dma_pool = dev->ops->create_dma_pool(dev, "ppalist"); + ret = nvm_init(dev); + if (ret) + return ret; + + exp_pool_size = max_t(int, PAGE_SIZE, + (NVM_MAX_VLBA * (sizeof(u64) + dev->geo.sos))); + exp_pool_size = round_up(exp_pool_size, PAGE_SIZE); + + dev->dma_pool = dev->ops->create_dma_pool(dev, "ppalist", + exp_pool_size); if (!dev->dma_pool) { pr_err("nvm: could not create dma pool\n"); + nvm_free(dev); return -ENOMEM; } - ret = nvm_init(dev); - if (ret) - goto err_init; - /* register device with a supported media manager */ down_write(&nvm_lock); list_add(&dev->devices, &nvm_devices); up_write(&nvm_lock); return 0; -err_init: - dev->ops->destroy_dma_pool(dev->dma_pool); - return ret; } EXPORT_SYMBOL(nvm_register); diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c index 6944aac43b01..1ff165351180 100644 --- a/drivers/lightnvm/pblk-core.c +++ b/drivers/lightnvm/pblk-core.c @@ -250,8 +250,8 @@ int pblk_alloc_rqd_meta(struct pblk *pblk, struct nvm_rq *rqd) if (rqd->nr_ppas == 1) return 0; - rqd->ppa_list = rqd->meta_list + pblk_dma_meta_size; - rqd->dma_ppa_list = rqd->dma_meta_list + pblk_dma_meta_size; + rqd->ppa_list = rqd->meta_list + pblk_dma_meta_size(pblk); + rqd->dma_ppa_list = rqd->dma_meta_list + pblk_dma_meta_size(pblk); return 0; } @@ -376,7 +376,7 @@ void pblk_write_should_kick(struct pblk *pblk) { unsigned int secs_avail = pblk_rb_read_count(&pblk->rwb); - if (secs_avail >= pblk->min_write_pgs) + if (secs_avail >= pblk->min_write_pgs_data) pblk_write_kick(pblk); } @@ -407,7 +407,9 @@ struct list_head *pblk_line_gc_list(struct pblk *pblk, struct pblk_line *line) struct pblk_line_meta *lm = &pblk->lm; struct pblk_line_mgmt *l_mg = &pblk->l_mg; struct list_head *move_list = NULL; - int vsc = le32_to_cpu(*line->vsc); + int packed_meta = (le32_to_cpu(*line->vsc) / pblk->min_write_pgs_data) + * (pblk->min_write_pgs - pblk->min_write_pgs_data); + int vsc = le32_to_cpu(*line->vsc) + packed_meta; lockdep_assert_held(&line->lock); @@ -531,7 +533,7 @@ void pblk_check_chunk_state_update(struct pblk *pblk, struct nvm_rq *rqd) if (caddr == 0) trace_pblk_chunk_state(pblk_disk_name(pblk), ppa, NVM_CHK_ST_OPEN); - else if (caddr == chunk->cnlb) + else if (caddr == (chunk->cnlb - 1)) trace_pblk_chunk_state(pblk_disk_name(pblk), ppa, NVM_CHK_ST_CLOSED); } @@ -620,12 +622,15 @@ out: } int pblk_calc_secs(struct pblk *pblk, unsigned long secs_avail, - unsigned long secs_to_flush) + unsigned long secs_to_flush, bool skip_meta) { int max = pblk->sec_per_write; int min = pblk->min_write_pgs; int secs_to_sync = 0; + if (skip_meta && pblk->min_write_pgs_data != pblk->min_write_pgs) + min = max = pblk->min_write_pgs_data; + if (secs_avail >= max) secs_to_sync = max; else if (secs_avail >= min) @@ -796,10 +801,11 @@ static int pblk_line_smeta_write(struct pblk *pblk, struct pblk_line *line, rqd.is_seq = 1; for (i = 0; i < lm->smeta_sec; i++, paddr++) { - struct pblk_sec_meta *meta_list = rqd.meta_list; + struct pblk_sec_meta *meta = pblk_get_meta(pblk, + rqd.meta_list, i); rqd.ppa_list[i] = addr_to_gen_ppa(pblk, paddr, line->id); - meta_list[i].lba = lba_list[paddr] = addr_empty; + meta->lba = lba_list[paddr] = addr_empty; } ret = pblk_submit_io_sync_sem(pblk, &rqd); @@ -845,13 +851,13 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line, if (!meta_list) 
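
The reworked nvm_register() stops relying on an implicit pool size and computes one: room for NVM_MAX_VLBA entries of (sizeof(u64) + geo.sos) bytes each, floored at one page and rounded up to a page multiple. The arithmetic is worth seeing with numbers plugged in (the sos values are illustrative; NVM_MAX_VLBA is 64 in the lightnvm headers):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE    4096
#define NVM_MAX_VLBA 64		/* max logical blocks in one vector command */

#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))
/* Kernel round_up() for a power-of-two 'to'. */
#define round_up(x, to)   (((x) + (to) - 1) & ~((to) - 1))

static int exp_pool_size(int sos /* bytes of OOB meta per sector */)
{
	int sz = max_t(int, PAGE_SIZE,
		       NVM_MAX_VLBA * (sizeof(uint64_t) + sos));
	return round_up(sz, PAGE_SIZE);
}

int main(void)
{
	/* sos=16: 64 * 24 = 1536 < PAGE_SIZE, so one page suffices. */
	printf("sos=16  -> pool=%d\n", exp_pool_size(16));
	/* sos=128: 64 * 136 = 8704, rounded up to 12288 (three pages). */
	printf("sos=128 -> pool=%d\n", exp_pool_size(128));
	return 0;
}
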
return -ENOMEM; - ppa_list = meta_list + pblk_dma_meta_size; - dma_ppa_list = dma_meta_list + pblk_dma_meta_size; + ppa_list = meta_list + pblk_dma_meta_size(pblk); + dma_ppa_list = dma_meta_list + pblk_dma_meta_size(pblk); next_rq: memset(&rqd, 0, sizeof(struct nvm_rq)); - rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); + rq_ppas = pblk_calc_secs(pblk, left_ppas, 0, false); rq_len = rq_ppas * geo->csecs; bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len, @@ -1276,6 +1282,7 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line) return 0; } +/* Line allocations in the recovery path are always single threaded */ int pblk_line_recov_alloc(struct pblk *pblk, struct pblk_line *line) { struct pblk_line_mgmt *l_mg = &pblk->l_mg; @@ -1295,15 +1302,22 @@ int pblk_line_recov_alloc(struct pblk *pblk, struct pblk_line *line) ret = pblk_line_alloc_bitmaps(pblk, line); if (ret) - return ret; + goto fail; if (!pblk_line_init_bb(pblk, line, 0)) { - list_add(&line->list, &l_mg->free_list); - return -EINTR; + ret = -EINTR; + goto fail; } pblk_rl_free_lines_dec(&pblk->rl, line, true); return 0; + +fail: + spin_lock(&l_mg->free_lock); + list_add(&line->list, &l_mg->free_list); + spin_unlock(&l_mg->free_lock); + + return ret; } void pblk_line_recov_close(struct pblk *pblk, struct pblk_line *line) @@ -2160,3 +2174,38 @@ void pblk_lookup_l2p_rand(struct pblk *pblk, struct ppa_addr *ppas, } spin_unlock(&pblk->trans_lock); } + +void *pblk_get_meta_for_writes(struct pblk *pblk, struct nvm_rq *rqd) +{ + void *buffer; + + if (pblk_is_oob_meta_supported(pblk)) { + /* Just use OOB metadata buffer as always */ + buffer = rqd->meta_list; + } else { + /* We need to reuse last page of request (packed metadata) + * in similar way as traditional oob metadata + */ + buffer = page_to_virt( + rqd->bio->bi_io_vec[rqd->bio->bi_vcnt - 1].bv_page); + } + + return buffer; +} + +void pblk_get_packed_meta(struct pblk *pblk, struct nvm_rq *rqd) +{ + void *meta_list = rqd->meta_list; + void *page; + int i = 0; + + if (pblk_is_oob_meta_supported(pblk)) + return; + + page = page_to_virt(rqd->bio->bi_io_vec[rqd->bio->bi_vcnt - 1].bv_page); + /* We need to fill oob meta buffer with data from packed metadata */ + for (; i < rqd->nr_ppas; i++) + memcpy(pblk_get_meta(pblk, meta_list, i), + page + (i * sizeof(struct pblk_sec_meta)), + sizeof(struct pblk_sec_meta)); +} diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c index 13822594647c..f9a3e47b6a93 100644 --- a/drivers/lightnvm/pblk-init.c +++ b/drivers/lightnvm/pblk-init.c @@ -207,9 +207,6 @@ static int pblk_rwb_init(struct pblk *pblk) return pblk_rb_init(&pblk->rwb, buffer_size, threshold, geo->csecs); } -/* Minimum pages needed within a lun */ -#define ADDR_POOL_SIZE 64 - static int pblk_set_addrf_12(struct pblk *pblk, struct nvm_geo *geo, struct nvm_addrf_12 *dst) { @@ -350,23 +347,19 @@ fail_destroy_ws: static int pblk_get_global_caches(void) { - int ret; + int ret = 0; mutex_lock(&pblk_caches.mutex); - if (kref_read(&pblk_caches.kref) > 0) { - kref_get(&pblk_caches.kref); - mutex_unlock(&pblk_caches.mutex); - return 0; - } + if (kref_get_unless_zero(&pblk_caches.kref)) + goto out; ret = pblk_create_global_caches(); - if (!ret) - kref_get(&pblk_caches.kref); + kref_init(&pblk_caches.kref); +out: mutex_unlock(&pblk_caches.mutex); - return ret; } @@ -406,12 +399,45 @@ static int pblk_core_init(struct pblk *pblk) pblk->nr_flush_rst = 0; pblk->min_write_pgs = geo->ws_opt; + pblk->min_write_pgs_data = pblk->min_write_pgs; max_write_ppas = 
pblk->min_write_pgs * geo->all_luns; pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA); pblk->max_write_pgs = min_t(int, pblk->max_write_pgs, queue_max_hw_sectors(dev->q) / (geo->csecs >> SECTOR_SHIFT)); pblk_set_sec_per_write(pblk, pblk->min_write_pgs); + pblk->oob_meta_size = geo->sos; + if (!pblk_is_oob_meta_supported(pblk)) { + /* For drives which do not have the OOB metadata feature, + * in order to support the recovery feature we need to use + * so-called packed metadata. Packed metadata will store + * the same information as OOB metadata (the l2p table mapping), + * but in the form of a single page at the end of + * every write request. + */ + if (pblk->min_write_pgs + * sizeof(struct pblk_sec_meta) > PAGE_SIZE) { + /* We want to keep all the packed metadata on a single + * page per write request. So we need to ensure that + * it will fit. + * + * This is more of a sanity check, since there is + * no device with such a big minimal write size + * (above 1 megabyte). + */ + pblk_err(pblk, "Not supported min write size\n"); + return -EINVAL; + } + /* For the packed meta approach we do some simplification. + * On the read path we always issue requests whose size + * is equal to max_write_pgs, with all pages filled with + * user payload except the last page, which will be + * filled with packed metadata. + */ + pblk->max_write_pgs = pblk->min_write_pgs; + pblk->min_write_pgs_data = pblk->min_write_pgs - 1; + } + pblk->pad_dist = kcalloc(pblk->min_write_pgs - 1, sizeof(atomic64_t), GFP_KERNEL); if (!pblk->pad_dist) @@ -635,40 +661,61 @@ static unsigned int calc_emeta_len(struct pblk *pblk) return (lm->emeta_len[1] + lm->emeta_len[2] + lm->emeta_len[3]); } -static void pblk_set_provision(struct pblk *pblk, long nr_free_blks) +static int pblk_set_provision(struct pblk *pblk, int nr_free_chks) { struct nvm_tgt_dev *dev = pblk->dev; struct pblk_line_mgmt *l_mg = &pblk->l_mg; struct pblk_line_meta *lm = &pblk->lm; struct nvm_geo *geo = &dev->geo; sector_t provisioned; - int sec_meta, blk_meta; + int sec_meta, blk_meta, clba; + int minimum; if (geo->op == NVM_TARGET_DEFAULT_OP) pblk->op = PBLK_DEFAULT_OP; else pblk->op = geo->op; - provisioned = nr_free_blks; + minimum = pblk_get_min_chks(pblk); + provisioned = nr_free_chks; provisioned *= (100 - pblk->op); sector_div(provisioned, 100); - pblk->op_blks = nr_free_blks - provisioned; + if ((nr_free_chks - provisioned) < minimum) { + if (geo->op != NVM_TARGET_DEFAULT_OP) { + pblk_err(pblk, "OP too small to create a sane instance\n"); + return -EINTR; + } + + /* If the user did not specify an OP value, and PBLK_DEFAULT_OP + * is not enough, calculate and set a sane value + */ + + provisioned = nr_free_chks - minimum; + pblk->op = (100 * minimum) / nr_free_chks; + pblk_info(pblk, "Default OP insufficient, adjusting OP to %d\n", + pblk->op); + } + + pblk->op_blks = nr_free_chks - provisioned; /* Internally pblk manages all free blocks, but all calculations based * on user capacity consider only provisioned blocks */ - pblk->rl.total_blocks = nr_free_blks; - pblk->rl.nr_secs = nr_free_blks * geo->clba; + pblk->rl.total_blocks = nr_free_chks; + pblk->rl.nr_secs = nr_free_chks * geo->clba; /* Consider sectors used for metadata */ sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); - pblk->capacity = (provisioned - blk_meta) * geo->clba; + clba = (geo->clba / pblk->min_write_pgs) * pblk->min_write_pgs_data; + pblk->capacity = (provisioned - blk_meta) * clba; - atomic_set(&pblk->rl.free_blocks, 
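
pblk_set_provision() exposes only (100 - OP) percent of the free chunks as user capacity, and the new error handling refuses a user-specified OP that would leave fewer than pblk_get_min_chks() spare chunks, while a too-small default OP is recomputed as 100 * minimum / nr_free_chks instead of failing. A sketch of just that calculation (the minimum and the chunk counts are made up):

#include <stdio.h>

#define PBLK_DEFAULT_OP 11	/* percent, as in pblk */

/* Returns the chunks exposed to the user; bumps *op if the default
 * over-provision would leave fewer than 'minimum' spare chunks. */
static long set_provision(long nr_free_chks, long minimum, int *op)
{
	long provisioned = nr_free_chks * (100 - *op) / 100;

	if (nr_free_chks - provisioned < minimum) {
		/* Default OP insufficient: reserve exactly 'minimum'. */
		provisioned = nr_free_chks - minimum;
		*op = (int)((100 * minimum) / nr_free_chks);
	}
	return provisioned;
}

int main(void)
{
	int op = PBLK_DEFAULT_OP;
	long user = set_provision(1000, 64, &op);	/* plenty of chunks */
	printf("user=%ld op=%d%%\n", user, op);		/* 890, 11%% */

	op = PBLK_DEFAULT_OP;
	user = set_provision(100, 32, &op);		/* OP must grow */
	printf("user=%ld op=%d%%\n", user, op);		/* 68, 32%% */
	return 0;
}
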
nr_free_blks); - atomic_set(&pblk->rl.free_user_blocks, nr_free_blks); + atomic_set(&pblk->rl.free_blocks, nr_free_chks); + atomic_set(&pblk->rl.free_user_blocks, nr_free_chks); + + return 0; } static int pblk_setup_line_meta_chk(struct pblk *pblk, struct pblk_line *line, @@ -984,7 +1031,7 @@ static int pblk_lines_init(struct pblk *pblk) struct pblk_line_mgmt *l_mg = &pblk->l_mg; struct pblk_line *line; void *chunk_meta; - long nr_free_chks = 0; + int nr_free_chks = 0; int i, ret; ret = pblk_line_meta_init(pblk); @@ -1031,7 +1078,9 @@ static int pblk_lines_init(struct pblk *pblk) goto fail_free_lines; } - pblk_set_provision(pblk, nr_free_chks); + ret = pblk_set_provision(pblk, nr_free_chks); + if (ret) + goto fail_free_lines; vfree(chunk_meta); return 0; @@ -1041,7 +1090,7 @@ fail_free_lines: pblk_line_meta_free(l_mg, &pblk->lines[i]); kfree(pblk->lines); fail_free_chunk_meta: - kfree(chunk_meta); + vfree(chunk_meta); fail_free_luns: kfree(pblk->luns); fail_free_meta: @@ -1154,6 +1203,12 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk, return ERR_PTR(-EINVAL); } + if (geo->ext) { + pblk_err(pblk, "extended metadata not supported\n"); + kfree(pblk); + return ERR_PTR(-EINVAL); + } + spin_lock_init(&pblk->resubmit_lock); spin_lock_init(&pblk->trans_lock); spin_lock_init(&pblk->lock); diff --git a/drivers/lightnvm/pblk-map.c b/drivers/lightnvm/pblk-map.c index 6dcbd44e3acb..79df583ea709 100644 --- a/drivers/lightnvm/pblk-map.c +++ b/drivers/lightnvm/pblk-map.c @@ -22,7 +22,7 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry, struct ppa_addr *ppa_list, unsigned long *lun_bitmap, - struct pblk_sec_meta *meta_list, + void *meta_list, unsigned int valid_secs) { struct pblk_line *line = pblk_line_get_data(pblk); @@ -33,6 +33,9 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry, int nr_secs = pblk->min_write_pgs; int i; + if (!line) + return -ENOSPC; + if (pblk_line_is_full(line)) { struct pblk_line *prev_line = line; @@ -42,8 +45,11 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry, line = pblk_line_replace_data(pblk); pblk_line_close_meta(pblk, prev_line); - if (!line) - return -EINTR; + if (!line) { + pblk_pipeline_stop(pblk); + return -ENOSPC; + } + } emeta = line->emeta; @@ -52,6 +58,7 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry, paddr = pblk_alloc_page(pblk, line, nr_secs); for (i = 0; i < nr_secs; i++, paddr++) { + struct pblk_sec_meta *meta = pblk_get_meta(pblk, meta_list, i); __le64 addr_empty = cpu_to_le64(ADDR_EMPTY); /* ppa to be sent to the device */ @@ -68,14 +75,15 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry, kref_get(&line->ref); w_ctx = pblk_rb_w_ctx(&pblk->rwb, sentry + i); w_ctx->ppa = ppa_list[i]; - meta_list[i].lba = cpu_to_le64(w_ctx->lba); + meta->lba = cpu_to_le64(w_ctx->lba); lba_list[paddr] = cpu_to_le64(w_ctx->lba); if (lba_list[paddr] != addr_empty) line->nr_valid_lbas++; else atomic64_inc(&pblk->pad_wa); } else { - lba_list[paddr] = meta_list[i].lba = addr_empty; + lba_list[paddr] = addr_empty; + meta->lba = addr_empty; __pblk_map_invalidate(pblk, line, paddr); } } @@ -84,50 +92,57 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry, return 0; } -void pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry, +int pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry, unsigned long *lun_bitmap, unsigned int valid_secs, unsigned int off) { - struct pblk_sec_meta *meta_list = 
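
Most of this patch mechanically replaces meta_list[i] indexing with pblk_get_meta(pblk, meta_list, i): once packed metadata exists, the per-sector entry stride depends on the drive's OOB area size rather than being fixed at sizeof(struct pblk_sec_meta). A plausible shape for such an accessor, sketched in userspace (the real helper lives in pblk.h; treat the exact stride rule here as an assumption):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct pblk_sec_meta { uint64_t lba; };	/* one sector's OOB metadata */

#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))

/* Entry i starts i strides into the flat buffer; the stride is the
 * drive's OOB area size, but never less than the struct we read. */
static struct pblk_sec_meta *get_meta(int oob_meta_size,
				      void *meta_list, int i)
{
	int stride = max_t(int, (int)sizeof(struct pblk_sec_meta),
			   oob_meta_size);
	return (struct pblk_sec_meta *)((char *)meta_list +
					(size_t)i * (size_t)stride);
}

int main(void)
{
	int sos = 16;	/* pretend 16 bytes of OOB metadata per sector */
	union { unsigned char bytes[16 * 4]; uint64_t align; } buf;

	memset(&buf, 0, sizeof(buf));
	for (int i = 0; i < 4; i++)
		get_meta(sos, buf.bytes, i)->lba = 100 + i;
	printf("lba[2]=%llu\n",
	       (unsigned long long)get_meta(sos, buf.bytes, 2)->lba);
	return 0;
}
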
rqd->meta_list; + void *meta_list = pblk_get_meta_for_writes(pblk, rqd); + void *meta_buffer; struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd); unsigned int map_secs; int min = pblk->min_write_pgs; int i; + int ret; for (i = off; i < rqd->nr_ppas; i += min) { map_secs = (i + min > valid_secs) ? (valid_secs % min) : min; - if (pblk_map_page_data(pblk, sentry + i, &ppa_list[i], - lun_bitmap, &meta_list[i], map_secs)) { - bio_put(rqd->bio); - pblk_free_rqd(pblk, rqd, PBLK_WRITE); - pblk_pipeline_stop(pblk); - } + meta_buffer = pblk_get_meta(pblk, meta_list, i); + + ret = pblk_map_page_data(pblk, sentry + i, &ppa_list[i], + lun_bitmap, meta_buffer, map_secs); + if (ret) + return ret; } + + return 0; } /* only if erase_ppa is set, acquire erase semaphore */ -void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd, +int pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry, unsigned long *lun_bitmap, unsigned int valid_secs, struct ppa_addr *erase_ppa) { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; struct pblk_line_meta *lm = &pblk->lm; - struct pblk_sec_meta *meta_list = rqd->meta_list; + void *meta_list = pblk_get_meta_for_writes(pblk, rqd); + void *meta_buffer; struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd); struct pblk_line *e_line, *d_line; unsigned int map_secs; int min = pblk->min_write_pgs; int i, erase_lun; + int ret; + for (i = 0; i < rqd->nr_ppas; i += min) { map_secs = (i + min > valid_secs) ? (valid_secs % min) : min; - if (pblk_map_page_data(pblk, sentry + i, &ppa_list[i], - lun_bitmap, &meta_list[i], map_secs)) { - bio_put(rqd->bio); - pblk_free_rqd(pblk, rqd, PBLK_WRITE); - pblk_pipeline_stop(pblk); - } + meta_buffer = pblk_get_meta(pblk, meta_list, i); + + ret = pblk_map_page_data(pblk, sentry + i, &ppa_list[i], + lun_bitmap, meta_buffer, map_secs); + if (ret) + return ret; erase_lun = pblk_ppa_to_pos(geo, ppa_list[i]); @@ -163,7 +178,7 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd, */ e_line = pblk_line_get_erase(pblk); if (!e_line) - return; + return -ENOSPC; /* Erase blocks that are bad in this line but might not be in next */ if (unlikely(pblk_ppa_empty(*erase_ppa)) && @@ -174,7 +189,7 @@ retry: bit = find_next_bit(d_line->blk_bitmap, lm->blk_per_line, bit + 1); if (bit >= lm->blk_per_line) - return; + return 0; spin_lock(&e_line->lock); if (test_bit(bit, e_line->erase_bitmap)) { @@ -188,4 +203,6 @@ retry: *erase_ppa = pblk->luns[bit].bppa; /* set ch and lun */ erase_ppa->a.blk = e_line->id; } + + return 0; } diff --git a/drivers/lightnvm/pblk-rb.c b/drivers/lightnvm/pblk-rb.c index b1f4b51783f4..d4ca8c64ee0f 100644 --- a/drivers/lightnvm/pblk-rb.c +++ b/drivers/lightnvm/pblk-rb.c @@ -147,7 +147,7 @@ int pblk_rb_init(struct pblk_rb *rb, unsigned int size, unsigned int threshold, /* * Initialize rate-limiter, which controls access to the write buffer - * but user and GC I/O + * by user and GC I/O */ pblk_rl_init(&pblk->rl, rb->nr_entries); @@ -552,6 +552,9 @@ unsigned int pblk_rb_read_to_bio(struct pblk_rb *rb, struct nvm_rq *rqd, to_read = count; } + /* Add space for packed metadata if in use*/ + pad += (pblk->min_write_pgs - pblk->min_write_pgs_data); + c_ctx->sentry = pos; c_ctx->nr_valid = to_read; c_ctx->nr_padded = pad; diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c index 9fba614adeeb..3789185144da 100644 --- a/drivers/lightnvm/pblk-read.c +++ b/drivers/lightnvm/pblk-read.c @@ -43,7 +43,7 @@ static void pblk_read_ppalist_rq(struct pblk *pblk, struct nvm_rq *rqd, 
struct bio *bio, sector_t blba, unsigned long *read_bitmap) { - struct pblk_sec_meta *meta_list = rqd->meta_list; + void *meta_list = rqd->meta_list; struct ppa_addr ppas[NVM_MAX_VLBA]; int nr_secs = rqd->nr_ppas; bool advanced_bio = false; @@ -53,12 +53,15 @@ static void pblk_read_ppalist_rq(struct pblk *pblk, struct nvm_rq *rqd, for (i = 0; i < nr_secs; i++) { struct ppa_addr p = ppas[i]; + struct pblk_sec_meta *meta = pblk_get_meta(pblk, meta_list, i); sector_t lba = blba + i; retry: if (pblk_ppa_empty(p)) { + __le64 addr_empty = cpu_to_le64(ADDR_EMPTY); + WARN_ON(test_and_set_bit(i, read_bitmap)); - meta_list[i].lba = cpu_to_le64(ADDR_EMPTY); + meta->lba = addr_empty; if (unlikely(!advanced_bio)) { bio_advance(bio, (i) * PBLK_EXPOSED_PAGE_SIZE); @@ -78,7 +81,7 @@ retry: goto retry; } WARN_ON(test_and_set_bit(i, read_bitmap)); - meta_list[i].lba = cpu_to_le64(lba); + meta->lba = cpu_to_le64(lba); advanced_bio = true; #ifdef CONFIG_NVM_PBLK_DEBUG atomic_long_inc(&pblk->cache_reads); @@ -105,12 +108,16 @@ next: static void pblk_read_check_seq(struct pblk *pblk, struct nvm_rq *rqd, sector_t blba) { - struct pblk_sec_meta *meta_lba_list = rqd->meta_list; + void *meta_list = rqd->meta_list; int nr_lbas = rqd->nr_ppas; int i; + if (!pblk_is_oob_meta_supported(pblk)) + return; + for (i = 0; i < nr_lbas; i++) { - u64 lba = le64_to_cpu(meta_lba_list[i].lba); + struct pblk_sec_meta *meta = pblk_get_meta(pblk, meta_list, i); + u64 lba = le64_to_cpu(meta->lba); if (lba == ADDR_EMPTY) continue; @@ -134,17 +141,22 @@ static void pblk_read_check_seq(struct pblk *pblk, struct nvm_rq *rqd, static void pblk_read_check_rand(struct pblk *pblk, struct nvm_rq *rqd, u64 *lba_list, int nr_lbas) { - struct pblk_sec_meta *meta_lba_list = rqd->meta_list; + void *meta_lba_list = rqd->meta_list; int i, j; + if (!pblk_is_oob_meta_supported(pblk)) + return; + for (i = 0, j = 0; i < nr_lbas; i++) { + struct pblk_sec_meta *meta = pblk_get_meta(pblk, + meta_lba_list, j); u64 lba = lba_list[i]; u64 meta_lba; if (lba == ADDR_EMPTY) continue; - meta_lba = le64_to_cpu(meta_lba_list[j].lba); + meta_lba = le64_to_cpu(meta->lba); if (lba != meta_lba) { #ifdef CONFIG_NVM_PBLK_DEBUG @@ -216,15 +228,15 @@ static void pblk_end_partial_read(struct nvm_rq *rqd) struct pblk *pblk = rqd->private; struct pblk_g_ctx *r_ctx = nvm_rq_to_pdu(rqd); struct pblk_pr_ctx *pr_ctx = r_ctx->private; + struct pblk_sec_meta *meta; struct bio *new_bio = rqd->bio; struct bio *bio = pr_ctx->orig_bio; struct bio_vec src_bv, dst_bv; - struct pblk_sec_meta *meta_list = rqd->meta_list; + void *meta_list = rqd->meta_list; int bio_init_idx = pr_ctx->bio_init_idx; unsigned long *read_bitmap = pr_ctx->bitmap; int nr_secs = pr_ctx->orig_nr_secs; int nr_holes = nr_secs - bitmap_weight(read_bitmap, nr_secs); - __le64 *lba_list_mem, *lba_list_media; void *src_p, *dst_p; int hole, i; @@ -237,13 +249,10 @@ static void pblk_end_partial_read(struct nvm_rq *rqd) rqd->ppa_list[0] = ppa; } - /* Re-use allocated memory for intermediate lbas */ - lba_list_mem = (((void *)rqd->ppa_list) + pblk_dma_ppa_size); - lba_list_media = (((void *)rqd->ppa_list) + 2 * pblk_dma_ppa_size); - for (i = 0; i < nr_secs; i++) { - lba_list_media[i] = meta_list[i].lba; - meta_list[i].lba = lba_list_mem[i]; + meta = pblk_get_meta(pblk, meta_list, i); + pr_ctx->lba_list_media[i] = le64_to_cpu(meta->lba); + meta->lba = cpu_to_le64(pr_ctx->lba_list_mem[i]); } /* Fill the holes in the original bio */ @@ -255,7 +264,8 @@ static void pblk_end_partial_read(struct nvm_rq *rqd) line = 
pblk_ppa_to_line(pblk, rqd->ppa_list[i]); kref_put(&line->ref, pblk_line_put); - meta_list[hole].lba = lba_list_media[i]; + meta = pblk_get_meta(pblk, meta_list, hole); + meta->lba = cpu_to_le64(pr_ctx->lba_list_media[i]); src_bv = new_bio->bi_io_vec[i++]; dst_bv = bio->bi_io_vec[bio_init_idx + hole]; @@ -291,17 +301,13 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd, unsigned long *read_bitmap, int nr_holes) { - struct pblk_sec_meta *meta_list = rqd->meta_list; + void *meta_list = rqd->meta_list; struct pblk_g_ctx *r_ctx = nvm_rq_to_pdu(rqd); struct pblk_pr_ctx *pr_ctx; struct bio *new_bio, *bio = r_ctx->private; - __le64 *lba_list_mem; int nr_secs = rqd->nr_ppas; int i; - /* Re-use allocated memory for intermediate lbas */ - lba_list_mem = (((void *)rqd->ppa_list) + pblk_dma_ppa_size); - new_bio = bio_alloc(GFP_KERNEL, nr_holes); if (pblk_bio_add_pages(pblk, new_bio, GFP_KERNEL, nr_holes)) @@ -312,12 +318,15 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd, goto fail_free_pages; } - pr_ctx = kmalloc(sizeof(struct pblk_pr_ctx), GFP_KERNEL); + pr_ctx = kzalloc(sizeof(struct pblk_pr_ctx), GFP_KERNEL); if (!pr_ctx) goto fail_free_pages; - for (i = 0; i < nr_secs; i++) - lba_list_mem[i] = meta_list[i].lba; + for (i = 0; i < nr_secs; i++) { + struct pblk_sec_meta *meta = pblk_get_meta(pblk, meta_list, i); + + pr_ctx->lba_list_mem[i] = le64_to_cpu(meta->lba); + } new_bio->bi_iter.bi_sector = 0; /* internal bio */ bio_set_op_attrs(new_bio, REQ_OP_READ, 0); @@ -325,7 +334,6 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd, rqd->bio = new_bio; rqd->nr_ppas = nr_holes; - pr_ctx->ppa_ptr = NULL; pr_ctx->orig_bio = bio; bitmap_copy(pr_ctx->bitmap, read_bitmap, NVM_MAX_VLBA); pr_ctx->bio_init_idx = bio_init_idx; @@ -383,7 +391,7 @@ err: static void pblk_read_rq(struct pblk *pblk, struct nvm_rq *rqd, struct bio *bio, sector_t lba, unsigned long *read_bitmap) { - struct pblk_sec_meta *meta_list = rqd->meta_list; + struct pblk_sec_meta *meta = pblk_get_meta(pblk, rqd->meta_list, 0); struct ppa_addr ppa; pblk_lookup_l2p_seq(pblk, &ppa, lba, 1); @@ -394,8 +402,10 @@ static void pblk_read_rq(struct pblk *pblk, struct nvm_rq *rqd, struct bio *bio, retry: if (pblk_ppa_empty(ppa)) { + __le64 addr_empty = cpu_to_le64(ADDR_EMPTY); + WARN_ON(test_and_set_bit(0, read_bitmap)); - meta_list[0].lba = cpu_to_le64(ADDR_EMPTY); + meta->lba = addr_empty; return; } @@ -409,7 +419,7 @@ retry: } WARN_ON(test_and_set_bit(0, read_bitmap)); - meta_list[0].lba = cpu_to_le64(lba); + meta->lba = cpu_to_le64(lba); #ifdef CONFIG_NVM_PBLK_DEBUG atomic_long_inc(&pblk->cache_reads); diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c index 5740b7509bd8..3fcf062d752c 100644 --- a/drivers/lightnvm/pblk-recovery.c +++ b/drivers/lightnvm/pblk-recovery.c @@ -13,6 +13,9 @@ * General Public License for more details. * * pblk-recovery.c - pblk's recovery path + * + * The L2P recovery path is single threaded as the L2P table is updated in order + * following the line sequence ID. 
*/ #include "pblk.h" @@ -124,7 +127,7 @@ static u64 pblk_sec_in_open_line(struct pblk *pblk, struct pblk_line *line) struct pblk_recov_alloc { struct ppa_addr *ppa_list; - struct pblk_sec_meta *meta_list; + void *meta_list; struct nvm_rq *rqd; void *data; dma_addr_t dma_ppa_list; @@ -158,7 +161,7 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line, { struct nvm_tgt_dev *dev = pblk->dev; struct nvm_geo *geo = &dev->geo; - struct pblk_sec_meta *meta_list; + void *meta_list; struct pblk_pad_rq *pad_rq; struct nvm_rq *rqd; struct bio *bio; @@ -188,7 +191,7 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line, kref_init(&pad_rq->ref); next_pad_rq: - rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); + rq_ppas = pblk_calc_secs(pblk, left_ppas, 0, false); if (rq_ppas < pblk->min_write_pgs) { pblk_err(pblk, "corrupted pad line %d\n", line->id); goto fail_free_pad; @@ -237,12 +240,15 @@ next_pad_rq: for (j = 0; j < pblk->min_write_pgs; j++, i++, w_ptr++) { struct ppa_addr dev_ppa; + struct pblk_sec_meta *meta; __le64 addr_empty = cpu_to_le64(ADDR_EMPTY); dev_ppa = addr_to_gen_ppa(pblk, w_ptr, line->id); pblk_map_invalidate(pblk, dev_ppa); - lba_list[w_ptr] = meta_list[i].lba = addr_empty; + lba_list[w_ptr] = addr_empty; + meta = pblk_get_meta(pblk, meta_list, i); + meta->lba = addr_empty; rqd->ppa_list[i] = dev_ppa; } } @@ -334,20 +340,21 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, struct pblk_recov_alloc p) { struct nvm_tgt_dev *dev = pblk->dev; + struct pblk_line_meta *lm = &pblk->lm; struct nvm_geo *geo = &dev->geo; struct ppa_addr *ppa_list; - struct pblk_sec_meta *meta_list; + void *meta_list; struct nvm_rq *rqd; struct bio *bio; void *data; dma_addr_t dma_ppa_list, dma_meta_list; __le64 *lba_list; - u64 paddr = 0; + u64 paddr = pblk_line_smeta_start(pblk, line) + lm->smeta_sec; bool padded = false; int rq_ppas, rq_len; int i, j; int ret; - u64 left_ppas = pblk_sec_in_open_line(pblk, line); + u64 left_ppas = pblk_sec_in_open_line(pblk, line) - lm->smeta_sec; if (pblk_line_wp_is_unbalanced(pblk, line)) pblk_warn(pblk, "recovering unbalanced line (%d)\n", line->id); @@ -364,17 +371,19 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line, next_rq: memset(rqd, 0, pblk_g_rq_size); - rq_ppas = pblk_calc_secs(pblk, left_ppas, 0); + rq_ppas = pblk_calc_secs(pblk, left_ppas, 0, false); if (!rq_ppas) rq_ppas = pblk->min_write_pgs; rq_len = rq_ppas * geo->csecs; +retry_rq: bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL); if (IS_ERR(bio)) return PTR_ERR(bio); bio->bi_iter.bi_sector = 0; /* internal bio */ bio_set_op_attrs(bio, REQ_OP_READ, 0); + bio_get(bio); rqd->bio = bio; rqd->opcode = NVM_OP_PREAD; @@ -387,7 +396,6 @@ next_rq: if (pblk_io_aligned(pblk, rq_ppas)) rqd->is_seq = 1; -retry_rq: for (i = 0; i < rqd->nr_ppas; ) { struct ppa_addr ppa; int pos; @@ -410,6 +418,7 @@ retry_rq: if (ret) { pblk_err(pblk, "I/O submission failed: %d\n", ret); bio_put(bio); + bio_put(bio); return ret; } @@ -421,20 +430,28 @@ retry_rq: if (padded) { pblk_log_read_err(pblk, rqd); + bio_put(bio); return -EINTR; } pad_distance = pblk_pad_distance(pblk, line); ret = pblk_recov_pad_line(pblk, line, pad_distance); - if (ret) + if (ret) { + bio_put(bio); return ret; + } padded = true; + bio_put(bio); goto retry_rq; } + pblk_get_packed_meta(pblk, rqd); + bio_put(bio); + for (i = 0; i < rqd->nr_ppas; i++) { - u64 lba = le64_to_cpu(meta_list[i].lba); + struct pblk_sec_meta *meta = pblk_get_meta(pblk, meta_list, i); + u64 lba = 
le64_to_cpu(meta->lba); lba_list[paddr++] = cpu_to_le64(lba); @@ -463,7 +480,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) struct nvm_geo *geo = &dev->geo; struct nvm_rq *rqd; struct ppa_addr *ppa_list; - struct pblk_sec_meta *meta_list; + void *meta_list; struct pblk_recov_alloc p; void *data; dma_addr_t dma_ppa_list, dma_meta_list; @@ -473,8 +490,8 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line) if (!meta_list) return -ENOMEM; - ppa_list = (void *)(meta_list) + pblk_dma_meta_size; - dma_ppa_list = dma_meta_list + pblk_dma_meta_size; + ppa_list = (void *)(meta_list) + pblk_dma_meta_size(pblk); + dma_ppa_list = dma_meta_list + pblk_dma_meta_size(pblk); data = kcalloc(pblk->max_write_pgs, geo->csecs, GFP_KERNEL); if (!data) { @@ -804,7 +821,6 @@ next: WARN_ON_ONCE(!test_and_clear_bit(meta_line, &l_mg->meta_bitmap)); spin_unlock(&l_mg->free_lock); - pblk_line_replace_data(pblk); } else { spin_lock(&l_mg->free_lock); /* Allocate next line for preparation */ diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c index db55a1c89997..76116d5f78e4 100644 --- a/drivers/lightnvm/pblk-rl.c +++ b/drivers/lightnvm/pblk-rl.c @@ -214,11 +214,10 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) struct nvm_geo *geo = &dev->geo; struct pblk_line_mgmt *l_mg = &pblk->l_mg; struct pblk_line_meta *lm = &pblk->lm; - int min_blocks = lm->blk_per_line * PBLK_GC_RSV_LINE; int sec_meta, blk_meta; - unsigned int rb_windows; + /* Consider sectors used for metadata */ sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines; blk_meta = DIV_ROUND_UP(sec_meta, geo->clba); @@ -226,7 +225,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget) rl->high = pblk->op_blks - blk_meta - lm->blk_per_line; rl->high_pw = get_count_order(rl->high); - rl->rsv_blocks = min_blocks; + rl->rsv_blocks = pblk_get_min_chks(pblk); /* This will always be a power-of-2 */ rb_windows = budget / NVM_MAX_VLBA; diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c index 2d2818155aa8..7d8958df9472 100644 --- a/drivers/lightnvm/pblk-sysfs.c +++ b/drivers/lightnvm/pblk-sysfs.c @@ -479,6 +479,13 @@ static ssize_t pblk_sysfs_set_sec_per_write(struct pblk *pblk, if (kstrtouint(page, 0, &sec_per_write)) return -EINVAL; + if (!pblk_is_oob_meta_supported(pblk)) { + /* For the packed metadata case it is + * not allowed to change sec_per_write.
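When the drive exposes no usable OOB area, pblk_is_oob_meta_supported() returns false and the series emulates per-sector metadata by packing it into extra payload pages, which is why sec_per_write is made read-only here. A rough sketch of the page accounting, assuming the min_write_pgs/min_write_pgs_data split this series introduces (the numbers are made up):

#include <stdio.h>

int main(void)
{
	int min_write_pgs = 8;      /* hypothetical: controller wants 8-page writes */
	int min_write_pgs_data = 7; /* one page per write carries packed metadata */

	/* extra pages a write bio must reserve for the packed metadata */
	int packed_meta_pgs = min_write_pgs - min_write_pgs_data;

	printf("packed metadata pages per write: %d\n", packed_meta_pgs);
	return 0;
}

This mirrors the packed_meta_pgs computation added to pblk_submit_write() further down in the patch.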
+ */ + return -EINVAL; + } + if (sec_per_write < pblk->min_write_pgs || sec_per_write > pblk->max_write_pgs || sec_per_write % pblk->min_write_pgs != 0) diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c index fa8726493b39..06d56deb645d 100644 --- a/drivers/lightnvm/pblk-write.c +++ b/drivers/lightnvm/pblk-write.c @@ -105,14 +105,20 @@ retry: } /* Map remaining sectors in chunk, starting from ppa */ -static void pblk_map_remaining(struct pblk *pblk, struct ppa_addr *ppa) +static void pblk_map_remaining(struct pblk *pblk, struct ppa_addr *ppa, + int rqd_ppas) { struct pblk_line *line; struct ppa_addr map_ppa = *ppa; + __le64 addr_empty = cpu_to_le64(ADDR_EMPTY); + __le64 *lba_list; u64 paddr; int done = 0; + int n = 0; line = pblk_ppa_to_line(pblk, *ppa); + lba_list = emeta_to_lbas(pblk, line->emeta->buf); + spin_lock(&line->lock); while (!done) { @@ -121,10 +127,17 @@ static void pblk_map_remaining(struct pblk *pblk, struct ppa_addr *ppa) if (!test_and_set_bit(paddr, line->map_bitmap)) line->left_msecs--; + if (n < rqd_ppas && lba_list[paddr] != addr_empty) + line->nr_valid_lbas--; + + lba_list[paddr] = addr_empty; + if (!test_and_set_bit(paddr, line->invalid_bitmap)) le32_add_cpu(line->vsc, -1); done = nvm_next_ppa_in_chk(pblk->dev, &map_ppa); + + n++; } line->w_err_gc->has_write_err = 1; @@ -148,9 +161,11 @@ static void pblk_prepare_resubmit(struct pblk *pblk, unsigned int sentry, w_ctx = &entry->w_ctx; /* Check if the lba has been overwritten */ - ppa_l2p = pblk_trans_map_get(pblk, w_ctx->lba); - if (!pblk_ppa_comp(ppa_l2p, entry->cacheline)) - w_ctx->lba = ADDR_EMPTY; + if (w_ctx->lba != ADDR_EMPTY) { + ppa_l2p = pblk_trans_map_get(pblk, w_ctx->lba); + if (!pblk_ppa_comp(ppa_l2p, entry->cacheline)) + w_ctx->lba = ADDR_EMPTY; + } /* Mark up the entry as submittable again */ flags = READ_ONCE(w_ctx->flags); @@ -200,7 +215,7 @@ static void pblk_submit_rec(struct work_struct *work) pblk_log_write_err(pblk, rqd); - pblk_map_remaining(pblk, ppa_list); + pblk_map_remaining(pblk, ppa_list, rqd->nr_ppas); pblk_queue_resubmit(pblk, c_ctx); pblk_up_rq(pblk, c_ctx->lun_bitmap); @@ -319,12 +334,13 @@ static int pblk_setup_w_rq(struct pblk *pblk, struct nvm_rq *rqd, } if (likely(!e_line || !atomic_read(&e_line->left_eblks))) - pblk_map_rq(pblk, rqd, c_ctx->sentry, lun_bitmap, valid, 0); + ret = pblk_map_rq(pblk, rqd, c_ctx->sentry, lun_bitmap, + valid, 0); else - pblk_map_erase_rq(pblk, rqd, c_ctx->sentry, lun_bitmap, + ret = pblk_map_erase_rq(pblk, rqd, c_ctx->sentry, lun_bitmap, valid, erase_ppa); - return 0; + return ret; } static int pblk_calc_secs_to_sync(struct pblk *pblk, unsigned int secs_avail, @@ -332,7 +348,7 @@ static int pblk_calc_secs_to_sync(struct pblk *pblk, unsigned int secs_avail, { int secs_to_sync; - secs_to_sync = pblk_calc_secs(pblk, secs_avail, secs_to_flush); + secs_to_sync = pblk_calc_secs(pblk, secs_avail, secs_to_flush, true); #ifdef CONFIG_NVM_PBLK_DEBUG if ((!secs_to_sync && secs_to_flush) @@ -548,15 +564,17 @@ static void pblk_free_write_rqd(struct pblk *pblk, struct nvm_rq *rqd) c_ctx->nr_padded); } -static int pblk_submit_write(struct pblk *pblk) +static int pblk_submit_write(struct pblk *pblk, int *secs_left) { struct bio *bio; struct nvm_rq *rqd; unsigned int secs_avail, secs_to_sync, secs_to_com; - unsigned int secs_to_flush; + unsigned int secs_to_flush, packed_meta_pgs; unsigned long pos; unsigned int resubmit; + *secs_left = 0; + spin_lock(&pblk->resubmit_lock); resubmit = !list_empty(&pblk->resubmit_list); 
spin_unlock(&pblk->resubmit_lock); @@ -586,17 +604,17 @@ static int pblk_submit_write(struct pblk *pblk) */ secs_avail = pblk_rb_read_count(&pblk->rwb); if (!secs_avail) - return 1; + return 0; secs_to_flush = pblk_rb_flush_point_count(&pblk->rwb); - if (!secs_to_flush && secs_avail < pblk->min_write_pgs) - return 1; + if (!secs_to_flush && secs_avail < pblk->min_write_pgs_data) + return 0; secs_to_sync = pblk_calc_secs_to_sync(pblk, secs_avail, secs_to_flush); if (secs_to_sync > pblk->max_write_pgs) { pblk_err(pblk, "bad buffer sync calculation\n"); - return 1; + return 0; } secs_to_com = (secs_to_sync > secs_avail) ? @@ -604,7 +622,8 @@ static int pblk_submit_write(struct pblk *pblk) pos = pblk_rb_read_commit(&pblk->rwb, secs_to_com); } - bio = bio_alloc(GFP_KERNEL, secs_to_sync); + packed_meta_pgs = (pblk->min_write_pgs - pblk->min_write_pgs_data); + bio = bio_alloc(GFP_KERNEL, secs_to_sync + packed_meta_pgs); bio->bi_iter.bi_sector = 0; /* internal bio */ bio_set_op_attrs(bio, REQ_OP_WRITE, 0); @@ -625,6 +644,7 @@ static int pblk_submit_write(struct pblk *pblk) atomic_long_add(secs_to_sync, &pblk->sub_writes); #endif + *secs_left = 1; return 0; fail_free_bio: @@ -633,16 +653,22 @@ fail_put_bio: bio_put(bio); pblk_free_rqd(pblk, rqd, PBLK_WRITE); - return 1; + return -EINTR; } int pblk_write_ts(void *data) { struct pblk *pblk = data; + int secs_left; + int write_failure = 0; while (!kthread_should_stop()) { - if (!pblk_submit_write(pblk)) - continue; + if (!write_failure) { + write_failure = pblk_submit_write(pblk, &secs_left); + + if (secs_left) + continue; + } set_current_state(TASK_INTERRUPTIBLE); io_schedule(); } diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h index 02bb2e98f8a9..85e38ed62f85 100644 --- a/drivers/lightnvm/pblk.h +++ b/drivers/lightnvm/pblk.h @@ -104,7 +104,6 @@ enum { PBLK_RL_LOW = 4 }; -#define pblk_dma_meta_size (sizeof(struct pblk_sec_meta) * NVM_MAX_VLBA) #define pblk_dma_ppa_size (sizeof(u64) * NVM_MAX_VLBA) /* write buffer completion context */ @@ -132,6 +131,8 @@ struct pblk_pr_ctx { unsigned int bio_init_idx; void *ppa_ptr; dma_addr_t dma_ppa_list; + __le64 lba_list_mem[NVM_MAX_VLBA]; + __le64 lba_list_media[NVM_MAX_VLBA]; }; /* Pad context */ @@ -631,7 +632,9 @@ struct pblk { int state; /* pblk line state */ int min_write_pgs; /* Minimum amount of pages required by controller */ + int min_write_pgs_data; /* Minimum amount of payload pages */ int max_write_pgs; /* Maximum amount of pages supported by controller */ + int oob_meta_size; /* Size of OOB sector metadata */ sector_t capacity; /* Device capacity when bad blocks are subtracted */ @@ -836,7 +839,7 @@ void pblk_dealloc_page(struct pblk *pblk, struct pblk_line *line, int nr_secs); u64 pblk_alloc_page(struct pblk *pblk, struct pblk_line *line, int nr_secs); u64 __pblk_alloc_page(struct pblk *pblk, struct pblk_line *line, int nr_secs); int pblk_calc_secs(struct pblk *pblk, unsigned long secs_avail, - unsigned long secs_to_flush); + unsigned long secs_to_flush, bool skip_meta); void pblk_down_rq(struct pblk *pblk, struct ppa_addr ppa, unsigned long *lun_bitmap); void pblk_down_chunk(struct pblk *pblk, struct ppa_addr ppa); @@ -860,6 +863,8 @@ void pblk_lookup_l2p_rand(struct pblk *pblk, struct ppa_addr *ppas, u64 *lba_list, int nr_secs); void pblk_lookup_l2p_seq(struct pblk *pblk, struct ppa_addr *ppas, sector_t blba, int nr_secs); +void *pblk_get_meta_for_writes(struct pblk *pblk, struct nvm_rq *rqd); +void pblk_get_packed_meta(struct pblk *pblk, struct nvm_rq *rqd); /* * pblk user I/O 
write path @@ -871,10 +876,10 @@ int pblk_write_gc_to_cache(struct pblk *pblk, struct pblk_gc_rq *gc_rq); /* * pblk map */ -void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd, +int pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry, unsigned long *lun_bitmap, unsigned int valid_secs, struct ppa_addr *erase_ppa); -void pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry, +int pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry, unsigned long *lun_bitmap, unsigned int valid_secs, unsigned int off); @@ -905,7 +910,6 @@ int pblk_recov_check_emeta(struct pblk *pblk, struct line_emeta *emeta); #define PBLK_GC_MAX_READERS 8 /* Max number of outstanding GC reader jobs */ #define PBLK_GC_RQ_QD 128 /* Queue depth for inflight GC requests */ #define PBLK_GC_L_QD 4 /* Queue depth for inflight GC lines */ -#define PBLK_GC_RSV_LINE 1 /* Reserved lines for GC */ int pblk_gc_init(struct pblk *pblk); void pblk_gc_exit(struct pblk *pblk, bool graceful); @@ -1370,4 +1374,33 @@ static inline char *pblk_disk_name(struct pblk *pblk) return disk->disk_name; } + +static inline unsigned int pblk_get_min_chks(struct pblk *pblk) +{ + struct pblk_line_meta *lm = &pblk->lm; + /* In a worst-case scenario every line will have OP invalid sectors. + * We will then need a minimum of 1/OP lines to free up a single line + */ + + return DIV_ROUND_UP(100, pblk->op) * lm->blk_per_line; +} + +static inline struct pblk_sec_meta *pblk_get_meta(struct pblk *pblk, + void *meta, int index) +{ + return meta + + max_t(int, sizeof(struct pblk_sec_meta), pblk->oob_meta_size) + * index; +} + +static inline int pblk_dma_meta_size(struct pblk *pblk) +{ + return max_t(int, sizeof(struct pblk_sec_meta), pblk->oob_meta_size) + * NVM_MAX_VLBA; +} + +static inline int pblk_is_oob_meta_supported(struct pblk *pblk) +{ + return pblk->oob_meta_size >= sizeof(struct pblk_sec_meta); +} #endif /* PBLK_H_ */ diff --git a/drivers/macintosh/ans-lcd.c b/drivers/macintosh/ans-lcd.c index c8e078b911c7..ef0c2366cf59 100644 --- a/drivers/macintosh/ans-lcd.c +++ b/drivers/macintosh/ans-lcd.c @@ -160,7 +160,7 @@ anslcd_init(void) struct device_node* node; node = of_find_node_by_name(NULL, "lcd"); - if (!node || !node->parent || strcmp(node->parent->name, "gc")) { + if (!node || !of_node_name_eq(node->parent, "gc")) { of_node_put(node); return -ENODEV; } diff --git a/drivers/macintosh/macio_asic.c b/drivers/macintosh/macio_asic.c index 17d3bc917562..3543a82081de 100644 --- a/drivers/macintosh/macio_asic.c +++ b/drivers/macintosh/macio_asic.c @@ -190,11 +190,11 @@ static int macio_resource_quirks(struct device_node *np, struct resource *res, return 0; /* Grand Central has too large resource 0 on some machines */ - if (index == 0 && !strcmp(np->name, "gc")) + if (index == 0 && of_node_name_eq(np, "gc")) res->end = res->start + 0x1ffff; /* Airport has bogus resource 2 */ - if (index >= 2 && !strcmp(np->name, "radio")) + if (index >= 2 && of_node_name_eq(np, "radio")) return 1; #ifndef CONFIG_PPC64 @@ -207,21 +207,21 @@ static int macio_resource_quirks(struct device_node *np, struct resource *res, * level of hierarchy, but I don't really feel the need * for it */ - if (!strcmp(np->name, "escc")) + if (of_node_name_eq(np, "escc")) return 1; /* ESCC has bogus resources >= 3 */ - if (index >= 3 && !(strcmp(np->name, "ch-a") && - strcmp(np->name, "ch-b"))) + if (index >= 3 && (of_node_name_eq(np, "ch-a") || + of_node_name_eq(np, "ch-b"))) return 1; /* Media bay has too many resources, keep only 
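pblk_get_min_chks() above sizes the GC reserve from the over-provisioning ratio rather than the fixed PBLK_GC_RSV_LINE it removes. A worked example of the arithmetic with made-up geometry (op = 11 percent over-provisioning, 16 chunks per line):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	int op = 11;           /* over-provisioning, percent */
	int blk_per_line = 16; /* chunks per line */

	/*
	 * Worst case every line is op% invalid, so on the order of 100/op
	 * lines must be compacted to free one full line; reserve that many
	 * lines' worth of chunks.
	 */
	int rsv = DIV_ROUND_UP(100, op) * blk_per_line;

	printf("reserved chunks: %d\n", rsv); /* DIV_ROUND_UP(100, 11) = 10, so 160 */
	return 0;
}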
first one */ - if (index > 0 && !strcmp(np->name, "media-bay")) + if (index > 0 && of_node_name_eq(np, "media-bay")) return 1; /* Some older IDE resources have bogus sizes */ - if (!(strcmp(np->name, "IDE") && strcmp(np->name, "ATA") && - strcmp(np->type, "ide") && strcmp(np->type, "ata"))) { + if (of_node_name_eq(np, "IDE") || of_node_name_eq(np, "ATA") || + of_node_is_type(np, "ide") || of_node_is_type(np, "ata")) { if (index == 0 && (res->end - res->start) > 0xfff) res->end = res->start + 0xfff; if (index == 1 && (res->end - res->start) > 0xff) @@ -260,7 +260,7 @@ static void macio_add_missing_resources(struct macio_dev *dev) irq_base = 64; /* Fix SCC */ - if (strcmp(np->name, "ch-a") == 0) { + if (of_node_name_eq(np, "ch-a")) { macio_create_fixup_irq(dev, 0, 15 + irq_base); macio_create_fixup_irq(dev, 1, 4 + irq_base); macio_create_fixup_irq(dev, 2, 5 + irq_base); @@ -268,18 +268,18 @@ static void macio_add_missing_resources(struct macio_dev *dev) } /* Fix media-bay */ - if (strcmp(np->name, "media-bay") == 0) { + if (of_node_name_eq(np, "media-bay")) { macio_create_fixup_irq(dev, 0, 29 + irq_base); printk(KERN_INFO "macio: fixed media-bay irq on gatwick\n"); } /* Fix left media bay childs */ - if (dev->media_bay != NULL && strcmp(np->name, "floppy") == 0) { + if (dev->media_bay != NULL && of_node_name_eq(np, "floppy")) { macio_create_fixup_irq(dev, 0, 19 + irq_base); macio_create_fixup_irq(dev, 1, 1 + irq_base); printk(KERN_INFO "macio: fixed left floppy irqs\n"); } - if (dev->media_bay != NULL && strcasecmp(np->name, "ata4") == 0) { + if (dev->media_bay != NULL && of_node_name_eq(np, "ata4")) { macio_create_fixup_irq(dev, 0, 14 + irq_base); macio_create_fixup_irq(dev, 0, 3 + irq_base); printk(KERN_INFO "macio: fixed left ide irqs\n"); @@ -438,11 +438,8 @@ static struct macio_dev * macio_add_one_device(struct macio_chip *chip, static int macio_skip_device(struct device_node *np) { - if (strncmp(np->name, "battery", 7) == 0) - return 1; - if (strncmp(np->name, "escc-legacy", 11) == 0) - return 1; - return 0; + return of_node_name_prefix(np, "battery") || + of_node_name_prefix(np, "escc-legacy"); } /** @@ -489,9 +486,9 @@ static void macio_pci_add_devices(struct macio_chip *chip) root_res); if (mdev == NULL) of_node_put(np); - else if (strncmp(np->name, "media-bay", 9) == 0) + else if (of_node_name_prefix(np, "media-bay")) mbdev = mdev; - else if (strncmp(np->name, "escc", 4) == 0) + else if (of_node_name_prefix(np, "escc")) sdev = mdev; } diff --git a/drivers/macintosh/macio_sysfs.c b/drivers/macintosh/macio_sysfs.c index d2451e58acb9..27f5eefc508f 100644 --- a/drivers/macintosh/macio_sysfs.c +++ b/drivers/macintosh/macio_sysfs.c @@ -3,17 +3,6 @@ #include #include - -#define macio_config_of_attr(field, format_string) \ -static ssize_t \ -field##_show (struct device *dev, struct device_attribute *attr, \ - char *buf) \ -{ \ - struct macio_dev *mdev = to_macio_device (dev); \ - return sprintf (buf, format_string, mdev->ofdev.dev.of_node->field); \ -} \ -static DEVICE_ATTR_RO(field); - static ssize_t compatible_show (struct device *dev, struct device_attribute *attr, char *buf) { @@ -65,7 +54,12 @@ static ssize_t name_show(struct device *dev, } static DEVICE_ATTR_RO(name); -macio_config_of_attr (type, "%s\n"); +static ssize_t type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return sprintf(buf, "%s\n", of_node_get_device_type(dev->of_node)); +} +static DEVICE_ATTR_RO(type); static struct attribute *macio_dev_attrs[] = { &dev_attr_name.attr, diff --git 
a/drivers/macintosh/rack-meter.c b/drivers/macintosh/rack-meter.c index 1f29d2413c74..3940e2a032f7 100644 --- a/drivers/macintosh/rack-meter.c +++ b/drivers/macintosh/rack-meter.c @@ -376,18 +376,19 @@ static int rackmeter_probe(struct macio_dev* mdev, pr_debug("rackmeter_probe()\n"); /* Get i2s-a node */ - while ((i2s = of_get_next_child(mdev->ofdev.dev.of_node, i2s)) != NULL) - if (strcmp(i2s->name, "i2s-a") == 0) - break; + for_each_child_of_node(mdev->ofdev.dev.of_node, i2s) + if (of_node_name_eq(i2s, "i2s-a")) + break; + if (i2s == NULL) { pr_debug(" i2s-a child not found\n"); goto bail; } /* Get lightshow or virtual sound */ - while ((np = of_get_next_child(i2s, np)) != NULL) { - if (strcmp(np->name, "lightshow") == 0) + for_each_child_of_node(i2s, np) { + if (of_node_name_eq(np, "lightshow")) break; - if ((strcmp(np->name, "sound") == 0) && + if (of_node_name_eq(np, "sound") && of_get_property(np, "virtual", NULL) != NULL) break; } diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c index 60f57e2abf21..ac0cf37d6239 100644 --- a/drivers/macintosh/via-pmu.c +++ b/drivers/macintosh/via-pmu.c @@ -318,8 +318,8 @@ int __init find_via_pmu(void) PMU_INT_ADB | PMU_INT_TICK; - if (vias->parent->name && ((strcmp(vias->parent->name, "ohare") == 0) - || of_device_is_compatible(vias->parent, "ohare"))) + if (of_node_name_eq(vias->parent, "ohare") || + of_device_is_compatible(vias->parent, "ohare")) pmu_kind = PMU_OHARE_BASED; else if (of_device_is_compatible(vias->parent, "paddington")) pmu_kind = PMU_PADDINGTON_BASED; diff --git a/drivers/macintosh/windfarm_fcu_controls.c b/drivers/macintosh/windfarm_fcu_controls.c index fab7a21e9577..629f19875d7f 100644 --- a/drivers/macintosh/windfarm_fcu_controls.c +++ b/drivers/macintosh/windfarm_fcu_controls.c @@ -425,25 +425,25 @@ static void wf_fcu_lookup_fans(struct wf_fcu_priv *pv) { "CPU B 2", "cpu-fan-b-1", }, { "CPU B 3", "cpu-fan-c-1", }, }; - struct device_node *np = NULL, *fcu = pv->i2c->dev.of_node; + struct device_node *np, *fcu = pv->i2c->dev.of_node; int i; DBG("Looking up FCU controls in device-tree...\n"); - while ((np = of_get_next_child(fcu, np)) != NULL) { + for_each_child_of_node(fcu, np) { int id, type = -1; const char *loc; const char *name; const u32 *reg; - DBG(" control: %s, type: %s\n", np->name, np->type); + DBG(" control: %pOFn, type: %s\n", np, of_node_get_device_type(np)); /* Detect control type */ - if (!strcmp(np->type, "fan-rpm-control") || - !strcmp(np->type, "fan-rpm")) + if (of_node_is_type(np, "fan-rpm-control") || + of_node_is_type(np, "fan-rpm")) type = FCU_FAN_RPM; - if (!strcmp(np->type, "fan-pwm-control") || - !strcmp(np->type, "fan-pwm")) + if (of_node_is_type(np, "fan-pwm-control") || + of_node_is_type(np, "fan-pwm")) type = FCU_FAN_PWM; /* Only care about fans for now */ if (type == -1) diff --git a/drivers/macintosh/windfarm_lm87_sensor.c b/drivers/macintosh/windfarm_lm87_sensor.c index 35aa571d498a..09724acd70b6 100644 --- a/drivers/macintosh/windfarm_lm87_sensor.c +++ b/drivers/macintosh/windfarm_lm87_sensor.c @@ -110,8 +110,8 @@ static int wf_lm87_probe(struct i2c_client *client, * the Xserve G5 has several lm87's. 
However, for now we only * care about the internal temperature sensor */ - while ((np = of_get_next_child(client->dev.of_node, np)) != NULL) { - if (strcmp(np->name, "int-temp")) + for_each_child_of_node(client->dev.of_node, np) { + if (!of_node_name_eq(np, "int-temp")) continue; loc = of_get_property(np, "location", NULL); if (!loc) diff --git a/drivers/macintosh/windfarm_smu_controls.c b/drivers/macintosh/windfarm_smu_controls.c index 86d65462a61c..2cb9652a9998 100644 --- a/drivers/macintosh/windfarm_smu_controls.c +++ b/drivers/macintosh/windfarm_smu_controls.c @@ -267,7 +267,7 @@ static int __init smu_controls_init(void) /* Look for RPM fans */ for (fans = NULL; (fans = of_get_next_child(smu, fans)) != NULL;) - if (!strcmp(fans->name, "rpm-fans") || + if (of_node_name_eq(fans, "rpm-fans") || of_device_is_compatible(fans, "smu-rpm-fans")) break; for (fan = NULL; @@ -287,7 +287,7 @@ static int __init smu_controls_init(void) /* Look for PWM fans */ for (fans = NULL; (fans = of_get_next_child(smu, fans)) != NULL;) - if (!strcmp(fans->name, "pwm-fans")) + if (of_node_name_eq(fans, "pwm-fans")) break; for (fan = NULL; fans && (fan = of_get_next_child(fans, fan)) != NULL;) { diff --git a/drivers/macintosh/windfarm_smu_sat.c b/drivers/macintosh/windfarm_smu_sat.c index a0f61eb853c5..b4be718beba8 100644 --- a/drivers/macintosh/windfarm_smu_sat.c +++ b/drivers/macintosh/windfarm_smu_sat.c @@ -197,7 +197,7 @@ static int wf_sat_probe(struct i2c_client *client, struct wf_sat *sat; struct wf_sat_sensor *sens; const u32 *reg; - const char *loc, *type; + const char *loc; u8 chip, core; struct device_node *child; int shift, cpu, index; @@ -220,7 +220,6 @@ static int wf_sat_probe(struct i2c_client *client, child = NULL; while ((child = of_get_next_child(dev, child)) != NULL) { reg = of_get_property(child, "reg", NULL); - type = of_get_property(child, "device_type", NULL); loc = of_get_property(child, "location", NULL); if (reg == NULL || loc == NULL) continue; @@ -249,15 +248,15 @@ static int wf_sat_probe(struct i2c_client *client, continue; } - if (strcmp(type, "voltage-sensor") == 0) { + if (of_node_is_type(child, "voltage-sensor")) { name = "cpu-voltage"; shift = 4; vsens[core] = index; - } else if (strcmp(type, "current-sensor") == 0) { + } else if (of_node_is_type(child, "current-sensor")) { name = "cpu-current"; shift = 8; isens[core] = index; - } else if (strcmp(type, "temp-sensor") == 0) { + } else if (of_node_is_type(child, "temp-sensor")) { name = "cpu-temp"; shift = 10; } else diff --git a/drivers/macintosh/windfarm_smu_sensors.c b/drivers/macintosh/windfarm_smu_sensors.c index 172fd267dcf6..a58f6733381a 100644 --- a/drivers/macintosh/windfarm_smu_sensors.c +++ b/drivers/macintosh/windfarm_smu_sensors.c @@ -197,15 +197,14 @@ static const struct wf_sensor_ops smu_slotspow_ops = { static struct smu_ad_sensor *smu_ads_create(struct device_node *node) { struct smu_ad_sensor *ads; - const char *c, *l; + const char *l; const u32 *v; ads = kmalloc(sizeof(struct smu_ad_sensor), GFP_KERNEL); if (ads == NULL) return NULL; - c = of_get_property(node, "device_type", NULL); l = of_get_property(node, "location", NULL); - if (c == NULL || l == NULL) + if (l == NULL) goto fail; /* We currently pick the sensors based on the OF name and location @@ -215,7 +214,7 @@ static struct smu_ad_sensor *smu_ads_create(struct device_node *node) * the names and locations consistents so I'll stick with the names * and locations for now. 
*/ - if (!strcmp(c, "temp-sensor") && + if (of_node_is_type(node, "temp-sensor") && !strcmp(l, "CPU T-Diode")) { ads->sens.ops = &smu_cputemp_ops; ads->sens.name = "cpu-temp"; @@ -224,7 +223,7 @@ static struct smu_ad_sensor *smu_ads_create(struct device_node *node) SMU_SDB_CPUDIODE_ID); goto fail; } - } else if (!strcmp(c, "current-sensor") && + } else if (of_node_is_type(node, "current-sensor") && !strcmp(l, "CPU Current")) { ads->sens.ops = &smu_cpuamp_ops; ads->sens.name = "cpu-current"; @@ -233,7 +232,7 @@ static struct smu_ad_sensor *smu_ads_create(struct device_node *node) SMU_SDB_CPUVCP_ID); goto fail; } - } else if (!strcmp(c, "voltage-sensor") && + } else if (of_node_is_type(node, "voltage-sensor") && !strcmp(l, "CPU Voltage")) { ads->sens.ops = &smu_cpuvolt_ops; ads->sens.name = "cpu-voltage"; @@ -242,7 +241,7 @@ static struct smu_ad_sensor *smu_ads_create(struct device_node *node) SMU_SDB_CPUVCP_ID); goto fail; } - } else if (!strcmp(c, "power-sensor") && + } else if (of_node_is_type(node, "power-sensor") && !strcmp(l, "Slots Power")) { ads->sens.ops = &smu_slotspow_ops; ads->sens.name = "slots-power"; @@ -425,7 +424,7 @@ static int __init smu_sensors_init(void) /* Look for sensors subdir */ for (sensors = NULL; (sensors = of_get_next_child(smu, sensors)) != NULL;) - if (!strcmp(sensors->name, "sensors")) + if (of_node_name_eq(sensors, "sensors")) break; of_node_put(smu); diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h index b61b83bbcfff..fdf75352e16a 100644 --- a/drivers/md/bcache/bcache.h +++ b/drivers/md/bcache/bcache.h @@ -626,6 +626,20 @@ struct cache_set { /* Where in the btree gc currently is */ struct bkey gc_done; + /* + * For automatic garbage collection after writeback has completed, this + * variable is used as a bit field: + * - 0000 0001b (BCH_ENABLE_AUTO_GC): enable gc after writeback + * - 0000 0010b (BCH_DO_AUTO_GC): do gc after writeback + * This is an optimization for write requests that follow a finished + * writeback, at the cost of a lower read hit rate, because clean data + * in the cache is discarded. Unless the user explicitly sets it via + * sysfs, it won't be enabled. + */ +#define BCH_ENABLE_AUTO_GC 1 +#define BCH_DO_AUTO_GC 2 + uint8_t gc_after_writeback; + /* * The allocation code needs gc_mark in struct bucket to be correct, but * it's not while a gc is in progress. Protected by bucket_lock. @@ -658,7 +672,11 @@ struct cache_set { /* * A btree node on disk could have too many bsets for an iterator to fit - * on the stack - have to dynamically allocate them + * on the stack - have to dynamically allocate them. + * bch_cache_set_alloc() will make sure the pool can allocate iterators + * with enough room to host + * (sb.bucket_size / sb.block_size) + * btree_iter_sets, which is more than the static MAX_BSETS. */ mempool_t fill_iter; diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c index 3f4211b5cd33..23cb1dc7296b 100644 --- a/drivers/md/bcache/btree.c +++ b/drivers/md/bcache/btree.c @@ -207,6 +207,11 @@ void bch_btree_node_read_done(struct btree *b) struct bset *i = btree_bset_first(b); struct btree_iter *iter; + /* + * c->fill_iter can allocate an iterator with more memory space + * than the static MAX_BSETS allows. + * See the comment around cache_set->fill_iter.
+ */ iter = mempool_alloc(&b->c->fill_iter, GFP_NOIO); iter->size = b->c->sb.bucket_size / b->c->sb.block_size; iter->used = 0; diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h index a68d6c55783b..d1c72ef64edf 100644 --- a/drivers/md/bcache/btree.h +++ b/drivers/md/bcache/btree.h @@ -266,6 +266,24 @@ static inline void wake_up_gc(struct cache_set *c) wake_up(&c->gc_wait); } +static inline void force_wake_up_gc(struct cache_set *c) +{ + /* + * The garbage collection thread only works when sectors_to_gc < 0; + * calling wake_up_gc() won't start the gc thread if sectors_to_gc is + * not a negative value. + * Therefore sectors_to_gc is set to -1 here, before waking up the + * gc thread by calling wake_up_gc(). Then gc_should_run() will get + * a chance to permit the gc thread to run. "A chance" because, + * before gc_should_run() is entered, there is still a possibility + * that c->sectors_to_gc is set to another positive value, so this + * routine cannot guarantee that the gc thread will actually be + * woken up to run. + */ + atomic_set(&c->sectors_to_gc, -1); + wake_up_gc(c); +} + #define MAP_DONE 0 #define MAP_CONTINUE 1 diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c index 8f448b9c96a1..8b123be05254 100644 --- a/drivers/md/bcache/debug.c +++ b/drivers/md/bcache/debug.c @@ -249,8 +249,7 @@ void bch_debug_init_cache_set(struct cache_set *c) void bch_debug_exit(void) { - if (!IS_ERR_OR_NULL(bcache_debug)) - debugfs_remove_recursive(bcache_debug); + debugfs_remove_recursive(bcache_debug); } void __init bch_debug_init(void) diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c index 522c7426f3a0..b2fd412715b1 100644 --- a/drivers/md/bcache/journal.c +++ b/drivers/md/bcache/journal.c @@ -663,7 +663,7 @@ static void journal_write_unlocked(struct closure *cl) REQ_SYNC|REQ_META|REQ_PREFLUSH|REQ_FUA); bch_bio_map(bio, w->data); - trace_bcache_journal_write(bio); + trace_bcache_journal_write(bio, w->data->keys); bio_list_add(&list, bio); SET_PTR_OFFSET(k, i, PTR_OFFSET(k, i) + sectors); diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index 3bf35914bb57..15070412a32e 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -311,11 +311,11 @@ err: * data is written it calls bch_journal, and after the keys have been added to * the next journal write they're inserted into the btree. * - * It inserts the data in s->cache_bio; bi_sector is used for the key offset, + * It inserts the data in op->bio; bi_sector is used for the key offset, * and op->inode is used for the key inode. * - * If s->bypass is true, instead of inserting the data it invalidates the - * region of the cache represented by s->cache_bio and op->inode. + * If op->bypass is true, instead of inserting the data it invalidates the + * region of the cache represented by op->bio and op->inode.
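The bcache hunks in this series cooperate through a two-bit protocol: the user arms BCH_ENABLE_AUTO_GC via sysfs, the rate-update path ORs in BCH_DO_AUTO_GC once the dirty ratio crosses BCH_AUTO_GC_DIRTY_THRESHOLD, and the writeback thread calls force_wake_up_gc() only when both bits are set. A compressed single-threaded sketch of that protocol (plain ints stand in for the cache_set fields, and locking is omitted):

#include <stdbool.h>
#include <stdio.h>

#define BCH_ENABLE_AUTO_GC 1
#define BCH_DO_AUTO_GC     2

static unsigned int gc_after_writeback; /* stands in for c->gc_after_writeback */

static void update_gc_after_writeback(int dirty_in_use, int threshold)
{
	/* arm the one-shot DO bit only if the user enabled the feature */
	if (gc_after_writeback != BCH_ENABLE_AUTO_GC || dirty_in_use < threshold)
		return;
	gc_after_writeback |= BCH_DO_AUTO_GC;
}

static bool writeback_done_should_gc(void)
{
	/* writeback thread: consume the DO bit, leave ENABLE set */
	if (gc_after_writeback == (BCH_ENABLE_AUTO_GC | BCH_DO_AUTO_GC)) {
		gc_after_writeback &= ~BCH_DO_AUTO_GC;
		return true; /* the real code calls force_wake_up_gc() here */
	}
	return false;
}

int main(void)
{
	gc_after_writeback = BCH_ENABLE_AUTO_GC; /* user wrote 1 via sysfs */
	update_gc_after_writeback(55, 50);       /* dirty ratio crossed 50% */
	printf("trigger gc: %d\n", writeback_done_should_gc()); /* prints 1 */
	return 0;
}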
*/ void bch_data_insert(struct closure *cl) { diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index 7bbd670a5a84..4dee119c3664 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -25,8 +25,8 @@ #include #include -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Kent Overstreet "); +unsigned int bch_cutoff_writeback; +unsigned int bch_cutoff_writeback_sync; static const char bcache_magic[] = { 0xc6, 0x85, 0x73, 0xf6, 0x4e, 0x1a, 0x45, 0xca, @@ -1510,8 +1510,7 @@ static void cache_set_free(struct closure *cl) struct cache *ca; unsigned int i; - if (!IS_ERR_OR_NULL(c->debug)) - debugfs_remove(c->debug); + debugfs_remove(c->debug); bch_open_buckets_free(c); bch_btree_cache_free(c); @@ -2424,6 +2423,32 @@ static void bcache_exit(void) mutex_destroy(&bch_register_lock); } +/* Check and fixup module parameters */ +static void check_module_parameters(void) +{ + if (bch_cutoff_writeback_sync == 0) + bch_cutoff_writeback_sync = CUTOFF_WRITEBACK_SYNC; + else if (bch_cutoff_writeback_sync > CUTOFF_WRITEBACK_SYNC_MAX) { + pr_warn("set bch_cutoff_writeback_sync (%u) to max value %u", + bch_cutoff_writeback_sync, CUTOFF_WRITEBACK_SYNC_MAX); + bch_cutoff_writeback_sync = CUTOFF_WRITEBACK_SYNC_MAX; + } + + if (bch_cutoff_writeback == 0) + bch_cutoff_writeback = CUTOFF_WRITEBACK; + else if (bch_cutoff_writeback > CUTOFF_WRITEBACK_MAX) { + pr_warn("set bch_cutoff_writeback (%u) to max value %u", + bch_cutoff_writeback, CUTOFF_WRITEBACK_MAX); + bch_cutoff_writeback = CUTOFF_WRITEBACK_MAX; + } + + if (bch_cutoff_writeback > bch_cutoff_writeback_sync) { + pr_warn("set bch_cutoff_writeback (%u) to %u", + bch_cutoff_writeback, bch_cutoff_writeback_sync); + bch_cutoff_writeback = bch_cutoff_writeback_sync; + } +} + static int __init bcache_init(void) { static const struct attribute *files[] = { @@ -2432,6 +2457,8 @@ static int __init bcache_init(void) NULL }; + check_module_parameters(); + mutex_init(&bch_register_lock); init_waitqueue_head(&unregister_wait); register_reboot_notifier(&reboot); @@ -2468,5 +2495,18 @@ err: return -ENOMEM; } +/* + * Module hooks + */ module_exit(bcache_exit); module_init(bcache_init); + +module_param(bch_cutoff_writeback, uint, 0); +MODULE_PARM_DESC(bch_cutoff_writeback, "threshold to cutoff writeback"); + +module_param(bch_cutoff_writeback_sync, uint, 0); +MODULE_PARM_DESC(bch_cutoff_writeback_sync, "hard threshold to cutoff writeback"); + +MODULE_DESCRIPTION("Bcache: a Linux block layer cache"); +MODULE_AUTHOR("Kent Overstreet "); +MODULE_LICENSE("GPL"); diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c index 26f035a0c5b9..557a8a3270a1 100644 --- a/drivers/md/bcache/sysfs.c +++ b/drivers/md/bcache/sysfs.c @@ -16,7 +16,7 @@ #include #include -/* Default is -1; we skip past it for struct cached_dev's cache mode */ +/* Default is 0 ("writethrough") */ static const char * const bch_cache_modes[] = { "writethrough", "writeback", @@ -25,7 +25,7 @@ static const char * const bch_cache_modes[] = { NULL }; -/* Default is -1; we skip past it for stop_when_cache_set_failed */ +/* Default is 0 ("auto") */ static const char * const bch_stop_on_failure_modes[] = { "auto", "always", @@ -88,6 +88,8 @@ read_attribute(writeback_keys_done); read_attribute(writeback_keys_failed); read_attribute(io_errors); read_attribute(congested); +read_attribute(cutoff_writeback); +read_attribute(cutoff_writeback_sync); rw_attribute(congested_read_threshold_us); rw_attribute(congested_write_threshold_us); @@ -128,6 +130,7 @@ rw_attribute(expensive_debug_checks); 
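check_module_parameters() above enforces three load-time rules: a zero parameter falls back to its built-in default, an oversized value is clamped to its maximum, and the soft cutoff may never exceed the sync cutoff. A standalone sketch of the same validation, reusing the constants this patch adds to writeback.h:

#include <stdio.h>

#define CUTOFF_WRITEBACK          40
#define CUTOFF_WRITEBACK_SYNC     70
#define CUTOFF_WRITEBACK_MAX      70
#define CUTOFF_WRITEBACK_SYNC_MAX 90

static unsigned int clamp_cutoffs(unsigned int wb, unsigned int *sync)
{
	if (*sync == 0)
		*sync = CUTOFF_WRITEBACK_SYNC;
	else if (*sync > CUTOFF_WRITEBACK_SYNC_MAX)
		*sync = CUTOFF_WRITEBACK_SYNC_MAX;

	if (wb == 0)
		wb = CUTOFF_WRITEBACK;
	else if (wb > CUTOFF_WRITEBACK_MAX)
		wb = CUTOFF_WRITEBACK_MAX;

	if (wb > *sync) /* the soft cutoff must not exceed the hard one */
		wb = *sync;
	return wb;
}

int main(void)
{
	unsigned int sync = 95, wb = 80; /* deliberately out-of-range inputs */

	wb = clamp_cutoffs(wb, &sync);
	printf("cutoff=%u sync=%u\n", wb, sync); /* cutoff=70 sync=90 */
	return 0;
}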
rw_attribute(cache_replacement_policy); rw_attribute(btree_shrinker_disabled); rw_attribute(copy_gc_enabled); +rw_attribute(gc_after_writeback); rw_attribute(size); static ssize_t bch_snprint_string_list(char *buf, @@ -264,7 +267,8 @@ STORE(__cached_dev) d_strtoul(writeback_running); d_strtoul(writeback_delay); - sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent, 0, 40); + sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent, + 0, bch_cutoff_writeback); if (attr == &sysfs_writeback_rate) { ssize_t ret; @@ -384,8 +388,25 @@ STORE(bch_cached_dev) mutex_lock(&bch_register_lock); size = __cached_dev_store(kobj, attr, buf, size); - if (attr == &sysfs_writeback_running) - bch_writeback_queue(dc); + if (attr == &sysfs_writeback_running) { + /* dc->writeback_running changed in __cached_dev_store() */ + if (IS_ERR_OR_NULL(dc->writeback_thread)) { + /* + * reject setting it to 1 via sysfs if the writeback + * kthread is not created yet. + */ + if (dc->writeback_running) { + dc->writeback_running = false; + pr_err("%s: failed to run non-existent writeback thread", + dc->disk.disk->disk_name); + } + } else + /* + * the writeback kthread will check whether + * dc->writeback_running is true or false. + */ + bch_writeback_queue(dc); + } if (attr == &sysfs_writeback_percent) if (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) @@ -668,6 +689,9 @@ SHOW(__bch_cache_set) sysfs_print(congested_write_threshold_us, c->congested_write_threshold_us); + sysfs_print(cutoff_writeback, bch_cutoff_writeback); + sysfs_print(cutoff_writeback_sync, bch_cutoff_writeback_sync); + sysfs_print(active_journal_entries, fifo_used(&c->journal.pin)); sysfs_printf(verify, "%i", c->verify); sysfs_printf(key_merging_disabled, "%i", c->key_merging_disabled); @@ -676,6 +700,7 @@ SHOW(__bch_cache_set) sysfs_printf(gc_always_rewrite, "%i", c->gc_always_rewrite); sysfs_printf(btree_shrinker_disabled, "%i", c->shrinker_disabled); sysfs_printf(copy_gc_enabled, "%i", c->copy_gc_enabled); + sysfs_printf(gc_after_writeback, "%i", c->gc_after_writeback); sysfs_printf(io_disable, "%i", test_bit(CACHE_SET_IO_DISABLE, &c->flags)); @@ -725,21 +750,8 @@ STORE(__bch_cache_set) bch_cache_accounting_clear(&c->accounting); } - if (attr == &sysfs_trigger_gc) { - /* - * Garbage collection thread only works when sectors_to_gc < 0, - * when users write to sysfs entry trigger_gc, most of time - * they want to forcibly triger gargage collection. Here -1 is - * set to c->sectors_to_gc, to make gc_should_run() give a - * chance to permit gc thread to run. "give a chance" means - * before going into gc_should_run(), there is still chance - * that c->sectors_to_gc being set to other positive value. So - * writing sysfs entry trigger_gc won't always make sure gc - * thread takes effect. - */ - atomic_set(&c->sectors_to_gc, -1); - wake_up_gc(c); - } + if (attr == &sysfs_trigger_gc) + force_wake_up_gc(c); if (attr == &sysfs_prune_cache) { struct shrink_control sc; @@ -789,6 +801,12 @@ STORE(__bch_cache_set) sysfs_strtoul(gc_always_rewrite, c->gc_always_rewrite); sysfs_strtoul(btree_shrinker_disabled, c->shrinker_disabled); sysfs_strtoul(copy_gc_enabled, c->copy_gc_enabled); + /* + * Writing gc_after_writeback here may overwrite an already set + * BCH_DO_AUTO_GC; that doesn't matter, because the flag will be + * set again at the next opportunity.
+ */ + sysfs_strtoul_clamp(gc_after_writeback, c->gc_after_writeback, 0, 1); return size; } @@ -869,7 +887,10 @@ static struct attribute *bch_cache_set_internal_files[] = { &sysfs_gc_always_rewrite, &sysfs_btree_shrinker_disabled, &sysfs_copy_gc_enabled, + &sysfs_gc_after_writeback, &sysfs_io_disable, + &sysfs_cutoff_writeback, + &sysfs_cutoff_writeback_sync, NULL }; KTYPE(bch_cache_set_internal); diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c index 08c3a9f9676c..73f0efac2b9f 100644 --- a/drivers/md/bcache/writeback.c +++ b/drivers/md/bcache/writeback.c @@ -17,6 +17,15 @@ #include #include +static void update_gc_after_writeback(struct cache_set *c) +{ + if (c->gc_after_writeback != (BCH_ENABLE_AUTO_GC) || + c->gc_stats.in_use < BCH_AUTO_GC_DIRTY_THRESHOLD) + return; + + c->gc_after_writeback |= BCH_DO_AUTO_GC; +} + /* Rate limiting */ static uint64_t __calc_target_rate(struct cached_dev *dc) { @@ -191,6 +200,7 @@ static void update_writeback_rate(struct work_struct *work) if (!set_at_max_writeback_rate(c, dc)) { down_read(&dc->writeback_lock); __update_writeback_rate(dc); + update_gc_after_writeback(c); up_read(&dc->writeback_lock); } } @@ -689,6 +699,23 @@ static int bch_writeback_thread(void *arg) up_write(&dc->writeback_lock); break; } + + /* + * When the dirty data ratio is high (e.g. 50%+), there might + * be heavy bucket fragmentation after writeback + * finishes, which hurts subsequent write performance. + * If users really care about write performance they + * may set BCH_ENABLE_AUTO_GC via sysfs; then, when + * BCH_DO_AUTO_GC is set, the garbage collection thread + * will be woken up here. After gc moves data, the shrunk + * btree and the SSD space from discarded free buckets may + * help subsequent write requests. + */ + if (c->gc_after_writeback == + (BCH_ENABLE_AUTO_GC|BCH_DO_AUTO_GC)) { + c->gc_after_writeback &= ~BCH_DO_AUTO_GC; + force_wake_up_gc(c); + } } up_write(&dc->writeback_lock); @@ -777,7 +804,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc) bch_keybuf_init(&dc->writeback_keys); dc->writeback_metadata = true; - dc->writeback_running = true; + dc->writeback_running = false; dc->writeback_percent = 10; dc->writeback_delay = 30; atomic_long_set(&dc->writeback_rate.rate, 1024); @@ -805,6 +832,7 @@ int bch_cached_dev_writeback_start(struct cached_dev *dc) cached_dev_put(dc); return PTR_ERR(dc->writeback_thread); } + dc->writeback_running = true; WARN_ON(test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)); schedule_delayed_work(&dc->writeback_rate_update, diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h index d2b9fdbc8994..6a743d3bb338 100644 --- a/drivers/md/bcache/writeback.h +++ b/drivers/md/bcache/writeback.h @@ -5,12 +5,17 @@ #define CUTOFF_WRITEBACK 40 #define CUTOFF_WRITEBACK_SYNC 70 +#define CUTOFF_WRITEBACK_MAX 70 +#define CUTOFF_WRITEBACK_SYNC_MAX 90 + #define MAX_WRITEBACKS_IN_PASS 5 #define MAX_WRITESIZE_IN_PASS 5000 /* *512b */ #define WRITEBACK_RATE_UPDATE_SECS_MAX 60 #define WRITEBACK_RATE_UPDATE_SECS_DEFAULT 5 +#define BCH_AUTO_GC_DIRTY_THRESHOLD 50 + /* * 14 (16384ths) is chosen here as something that each backing device * should be a reasonable fraction of the share, and not to blow up @@ -53,6 +58,9 @@ static inline bool bcache_dev_stripe_dirty(struct cached_dev *dc, } } +extern unsigned int bch_cutoff_writeback; +extern unsigned int bch_cutoff_writeback_sync; + static inline bool should_writeback(struct cached_dev *dc, struct bio *bio, unsigned int cache_mode, bool would_skip) { @@ -60,7 +68,7
@@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio, if (cache_mode != CACHE_MODE_WRITEBACK || test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) || - in_use > CUTOFF_WRITEBACK_SYNC) + in_use > bch_cutoff_writeback_sync) return false; if (dc->partial_stripes_expensive && @@ -73,7 +81,7 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio, return (op_is_sync(bio->bi_opf) || bio->bi_opf & (REQ_META|REQ_PRIO) || - in_use <= CUTOFF_WRITEBACK); + in_use <= bch_cutoff_writeback); } static inline void bch_writeback_queue(struct cached_dev *dc) diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c index dc385b70e4c3..1ecef76225a1 100644 --- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -65,7 +65,7 @@ /* * Linking of buffers: - * All buffers are linked to cache_hash with their hash_list field. + * All buffers are linked to buffer_tree with their node field. * * Clean buffers that are not being written (B_WRITING not set) * are linked to lru[LIST_CLEAN] with their lru_list field. @@ -457,7 +457,7 @@ static void free_buffer(struct dm_buffer *b) } /* - * Link buffer to the hash list and clean or dirty queue. + * Link buffer to the buffer tree and clean or dirty queue. */ static void __link_buffer(struct dm_buffer *b, sector_t block, int dirty) { @@ -472,7 +472,7 @@ static void __link_buffer(struct dm_buffer *b, sector_t block, int dirty) } /* - * Unlink buffer from the hash list and dirty or clean queue. + * Unlink buffer from the buffer tree and dirty or clean queue. */ static void __unlink_buffer(struct dm_buffer *b) { @@ -993,7 +993,7 @@ static struct dm_buffer *__bufio_new(struct dm_bufio_client *c, sector_t block, /* * We've had a period where the mutex was unlocked, so need to - * recheck the hash table. + * recheck the buffer tree. */ b = __find(c, block); if (b) { @@ -1327,7 +1327,7 @@ again: EXPORT_SYMBOL_GPL(dm_bufio_write_dirty_buffers); /* - * Use dm-io to send and empty barrier flush the device. + * Use dm-io to send an empty barrier to flush the device. */ int dm_bufio_issue_flush(struct dm_bufio_client *c) { @@ -1356,7 +1356,7 @@ EXPORT_SYMBOL_GPL(dm_bufio_issue_flush); * Then, we write the buffer to the original location if it was dirty. * * Then, if we are the only one who is holding the buffer, relink the buffer - * in the hash queue for the new location. + * in the buffer tree for the new location. 
* * If there was someone else holding the buffer, we write it to the new * location but not relink it, because that other user needs to have the buffer @@ -1887,7 +1887,7 @@ static int __init dm_bufio_init(void) dm_bufio_allocated_vmalloc = 0; dm_bufio_current_allocated = 0; - mem = (__u64)mult_frac(totalram_pages - totalhigh_pages, + mem = (__u64)mult_frac(totalram_pages() - totalhigh_pages(), DM_BUFIO_MEMORY_PERCENT, 100) << PAGE_SHIFT; if (mem > ULONG_MAX) diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h index 224d44503a06..95c6d86ab5e8 100644 --- a/drivers/md/dm-core.h +++ b/drivers/md/dm-core.h @@ -65,7 +65,6 @@ struct mapped_device { */ struct work_struct work; wait_queue_head_t wait; - atomic_t pending[2]; spinlock_t deferred_lock; struct bio_list deferred; @@ -107,9 +106,6 @@ struct mapped_device { struct block_device *bdev; - /* zero-length flush that will be cloned and submitted to targets */ - struct bio flush_bio; - struct dm_stats stats; /* for blk-mq request-based DM support */ @@ -119,7 +115,6 @@ struct mapped_device { struct srcu_struct io_barrier; }; -int md_in_flight(struct mapped_device *md); void disable_write_same(struct mapped_device *md); void disable_write_zeroes(struct mapped_device *md); diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index b8eec515a003..0ff22159a0ca 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -49,7 +49,7 @@ struct convert_context { struct bio *bio_out; struct bvec_iter iter_in; struct bvec_iter iter_out; - sector_t cc_sector; + u64 cc_sector; atomic_t cc_pending; union { struct skcipher_request *req; @@ -81,7 +81,7 @@ struct dm_crypt_request { struct convert_context *ctx; struct scatterlist sg_in[4]; struct scatterlist sg_out[4]; - sector_t iv_sector; + u64 iv_sector; }; struct crypt_config; @@ -160,7 +160,7 @@ struct crypt_config { struct iv_lmk_private lmk; struct iv_tcw_private tcw; } iv_gen_private; - sector_t iv_offset; + u64 iv_offset; unsigned int iv_size; unsigned short int sector_size; unsigned char sector_shift; @@ -377,7 +377,7 @@ static struct crypto_cipher *alloc_essiv_cipher(struct crypt_config *cc, int err; /* Setup the essiv_tfm with the given salt */ - essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, CRYPTO_ALG_ASYNC); + essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, 0); if (IS_ERR(essiv_tfm)) { ti->error = "Error allocating crypto tfm for ESSIV"; return essiv_tfm; @@ -1885,6 +1885,13 @@ static int crypt_alloc_tfms_skcipher(struct crypt_config *cc, char *ciphermode) } } + /* + * dm-crypt performance can vary greatly depending on which crypto + * algorithm implementation is used. Help people debug performance + * problems by logging the ->cra_driver_name. 
+ */ + DMINFO("%s using implementation \"%s\"", ciphermode, + crypto_skcipher_alg(any_tfm(cc))->base.cra_driver_name); return 0; } @@ -1903,6 +1910,8 @@ static int crypt_alloc_tfms_aead(struct crypt_config *cc, char *ciphermode) return err; } + DMINFO("%s using implementation \"%s\"", ciphermode, + crypto_aead_alg(any_tfm_aead(cc))->base.cra_driver_name); return 0; } @@ -2158,7 +2167,7 @@ static int crypt_wipe_key(struct crypt_config *cc) static void crypt_calculate_pages_per_client(void) { - unsigned long pages = (totalram_pages - totalhigh_pages) * DM_CRYPT_MEMORY_PERCENT / 100; + unsigned long pages = (totalram_pages() - totalhigh_pages()) * DM_CRYPT_MEMORY_PERCENT / 100; if (!dm_crypt_clients_n) return; @@ -2781,7 +2790,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv) } ret = -EINVAL; - if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1) { + if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 || tmpll != (sector_t)tmpll) { ti->error = "Invalid device sector"; goto bad; } diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c index 2fb7bb4304ad..fddffe251bf6 100644 --- a/drivers/md/dm-delay.c +++ b/drivers/md/dm-delay.c @@ -141,7 +141,7 @@ static int delay_class_ctr(struct dm_target *ti, struct delay_class *c, char **a unsigned long long tmpll; char dummy; - if (sscanf(argv[1], "%llu%c", &tmpll, &dummy) != 1) { + if (sscanf(argv[1], "%llu%c", &tmpll, &dummy) != 1 || tmpll != (sector_t)tmpll) { ti->error = "Invalid device sector"; return -EINVAL; } diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c index 3cb97fa4c11d..a9bc518156f2 100644 --- a/drivers/md/dm-flakey.c +++ b/drivers/md/dm-flakey.c @@ -213,7 +213,7 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv) devname = dm_shift_arg(&as); r = -EINVAL; - if (sscanf(dm_shift_arg(&as), "%llu%c", &tmpll, &dummy) != 1) { + if (sscanf(dm_shift_arg(&as), "%llu%c", &tmpll, &dummy) != 1 || tmpll != (sector_t)tmpll) { ti->error = "Invalid device sector"; goto bad; } @@ -287,20 +287,31 @@ static void flakey_map_bio(struct dm_target *ti, struct bio *bio) static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc) { - unsigned bio_bytes = bio_cur_bytes(bio); - char *data = bio_data(bio); + unsigned int corrupt_bio_byte = fc->corrupt_bio_byte - 1; + + struct bvec_iter iter; + struct bio_vec bvec; + + if (!bio_has_data(bio)) + return; /* - * Overwrite the Nth byte of the data returned. + * Overwrite the Nth byte of the bio's data, on whichever page + * it falls. */ - if (data && bio_bytes >= fc->corrupt_bio_byte) { - data[fc->corrupt_bio_byte - 1] = fc->corrupt_bio_value; - - DMDEBUG("Corrupting data bio=%p by writing %u to byte %u " - "(rw=%c bi_opf=%u bi_sector=%llu cur_bytes=%u)\n", - bio, fc->corrupt_bio_value, fc->corrupt_bio_byte, - (bio_data_dir(bio) == WRITE) ? 'w' : 'r', bio->bi_opf, - (unsigned long long)bio->bi_iter.bi_sector, bio_bytes); + bio_for_each_segment(bvec, bio, iter) { + if (bio_iter_len(bio, iter) > corrupt_bio_byte) { + char *segment = (page_address(bio_iter_page(bio, iter)) + + bio_iter_offset(bio, iter)); + segment[corrupt_bio_byte] = fc->corrupt_bio_value; + DMDEBUG("Corrupting data bio=%p by writing %u to byte %u " + "(rw=%c bi_opf=%u bi_sector=%llu size=%u)\n", + bio, fc->corrupt_bio_value, fc->corrupt_bio_byte, + (bio_data_dir(bio) == WRITE) ? 
'w' : 'r', bio->bi_opf, + (unsigned long long)bio->bi_iter.bi_sector, bio->bi_iter.bi_size); + break; + } + corrupt_bio_byte -= bio_iter_len(bio, iter); } } diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index bb3096bf2cc6..457200ca6287 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -2804,7 +2804,7 @@ static int get_mac(struct crypto_shash **hash, struct alg_spec *a, char **error, int r; if (a->alg_string) { - *hash = crypto_alloc_shash(a->alg_string, 0, CRYPTO_ALG_ASYNC); + *hash = crypto_alloc_shash(a->alg_string, 0, 0); if (IS_ERR(*hash)) { *error = error_alg; r = PTR_ERR(*hash); @@ -2843,7 +2843,7 @@ static int create_journal(struct dm_integrity_c *ic, char **error) journal_pages = roundup((__u64)ic->journal_sections * ic->journal_section_sectors, PAGE_SIZE >> SECTOR_SHIFT) >> (PAGE_SHIFT - SECTOR_SHIFT); journal_desc_size = journal_pages * sizeof(struct page_list); - if (journal_pages >= totalram_pages - totalhigh_pages || journal_desc_size > ULONG_MAX) { + if (journal_pages >= totalram_pages() - totalhigh_pages() || journal_desc_size > ULONG_MAX) { *error = "Journal doesn't fit into memory"; r = -ENOMEM; goto bad; @@ -3460,7 +3460,7 @@ try_smaller_buffer: ti->error = "Recalculate is only valid with internal hash"; goto bad; } - ic->recalc_wq = alloc_workqueue("dm-intergrity-recalc", WQ_MEM_RECLAIM, 1); + ic->recalc_wq = alloc_workqueue("dm-integrity-recalc", WQ_MEM_RECLAIM, 1); if (!ic->recalc_wq ) { ti->error = "Cannot allocate workqueue"; r = -ENOMEM; diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c index 2fc4213e02b5..671c24332802 100644 --- a/drivers/md/dm-kcopyd.c +++ b/drivers/md/dm-kcopyd.c @@ -56,15 +56,17 @@ struct dm_kcopyd_client { atomic_t nr_jobs; /* - * We maintain three lists of jobs: + * We maintain four lists of jobs: * * i) jobs waiting for pages * ii) jobs that have pages, and are waiting for the io to be issued. - * iii) jobs that have completed. + * iii) jobs that don't need to do any IO and just run a callback + * iv) jobs that have completed. * - * All three of these are protected by job_lock. + * All four of these are protected by job_lock. */ spinlock_t job_lock; + struct list_head callback_jobs; struct list_head complete_jobs; struct list_head io_jobs; struct list_head pages_jobs; @@ -625,6 +627,7 @@ static void do_work(struct work_struct *work) struct dm_kcopyd_client *kc = container_of(work, struct dm_kcopyd_client, kcopyd_work); struct blk_plug plug; + unsigned long flags; /* * The order that these are called is *very* important. @@ -633,6 +636,10 @@ static void do_work(struct work_struct *work) * list. io jobs call wake when they complete and it all * starts again. 
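These dm-kcopyd hunks move zero-I/O jobs onto a separate callback_jobs list; do_work() then splices that list onto complete_jobs under job_lock before processing, so a completion callback that immediately queues another zero-length job is deferred to the next pass instead of extending the batch the worker is currently draining. A toy sketch of the splice-then-drain pattern (arrays instead of list_head, no locking):

#include <stdio.h>
#include <string.h>

#define MAX_JOBS 16

static int callback_jobs[MAX_JOBS], n_callback;
static int complete_jobs[MAX_JOBS], n_complete;

static void queue_callback_job(int id)
{
	callback_jobs[n_callback++] = id;
}

static void do_work(void)
{
	/* snapshot: everything queued so far joins this batch ... */
	memcpy(complete_jobs + n_complete, callback_jobs,
	       (size_t)n_callback * sizeof(int));
	n_complete += n_callback;
	n_callback = 0;

	/* ... jobs queued by the callbacks wait for the next pass */
	for (int i = 0; i < n_complete; i++)
		printf("completing job %d\n", complete_jobs[i]);
	n_complete = 0;
}

int main(void)
{
	queue_callback_job(1);
	queue_callback_job(2);
	do_work();
	return 0;
}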
*/ + spin_lock_irqsave(&kc->job_lock, flags); + list_splice_tail_init(&kc->callback_jobs, &kc->complete_jobs); + spin_unlock_irqrestore(&kc->job_lock, flags); + blk_start_plug(&plug); process_jobs(&kc->complete_jobs, kc, run_complete_job); process_jobs(&kc->pages_jobs, kc, run_pages_job); @@ -650,7 +657,7 @@ static void dispatch_job(struct kcopyd_job *job) struct dm_kcopyd_client *kc = job->kc; atomic_inc(&kc->nr_jobs); if (unlikely(!job->source.count)) - push(&kc->complete_jobs, job); + push(&kc->callback_jobs, job); else if (job->pages == &zero_page_list) push(&kc->io_jobs, job); else @@ -858,7 +865,7 @@ void dm_kcopyd_do_callback(void *j, int read_err, unsigned long write_err) job->read_err = read_err; job->write_err = write_err; - push(&kc->complete_jobs, job); + push(&kc->callback_jobs, job); wake(kc); } EXPORT_SYMBOL(dm_kcopyd_do_callback); @@ -888,6 +895,7 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *thro return ERR_PTR(-ENOMEM); spin_lock_init(&kc->job_lock); + INIT_LIST_HEAD(&kc->callback_jobs); INIT_LIST_HEAD(&kc->complete_jobs); INIT_LIST_HEAD(&kc->io_jobs); INIT_LIST_HEAD(&kc->pages_jobs); @@ -939,6 +947,7 @@ void dm_kcopyd_client_destroy(struct dm_kcopyd_client *kc) /* Wait for completion of all jobs submitted by this client. */ wait_event(kc->destroyq, !atomic_read(&kc->nr_jobs)); + BUG_ON(!list_empty(&kc->callback_jobs)); BUG_ON(!list_empty(&kc->complete_jobs)); BUG_ON(!list_empty(&kc->io_jobs)); BUG_ON(!list_empty(&kc->pages_jobs)); diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c index 8d7ddee6ac4d..ad980a38fb1e 100644 --- a/drivers/md/dm-linear.c +++ b/drivers/md/dm-linear.c @@ -45,7 +45,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv) } ret = -EINVAL; - if (sscanf(argv[1], "%llu%c", &tmp, &dummy) != 1) { + if (sscanf(argv[1], "%llu%c", &tmp, &dummy) != 1 || tmp != (sector_t)tmp) { ti->error = "Invalid device sector"; goto bad; } diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index d6a66921daf4..2ee5e357a0a7 100644 --- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -1211,14 +1211,16 @@ static void flush_multipath_work(struct multipath *m) set_bit(MPATHF_PG_INIT_DISABLED, &m->flags); smp_mb__after_atomic(); - flush_workqueue(kmpath_handlerd); + if (atomic_read(&m->pg_init_in_progress)) + flush_workqueue(kmpath_handlerd); multipath_wait_for_pg_init_completion(m); clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags); smp_mb__after_atomic(); } - flush_workqueue(kmultipathd); + if (m->queue_mode == DM_TYPE_BIO_BASED) + flush_work(&m->process_queued_bios); flush_work(&m->trigger_event); } diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c index e1dd1622a290..adcfe8ae10aa 100644 --- a/drivers/md/dm-raid.c +++ b/drivers/md/dm-raid.c @@ -3690,8 +3690,7 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv, set_bit(MD_RECOVERY_INTR, &mddev->recovery); md_reap_sync_thread(mddev); } - } else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) || - test_bit(MD_RECOVERY_NEEDED, &mddev->recovery)) + } else if (decipher_sync_action(mddev, mddev->recovery) != st_idle) return -EBUSY; else if (!strcasecmp(argv[0], "resync")) ; /* MD_RECOVERY_NEEDED set below */ diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c index 79eab1071ec2..5a51151f680d 100644 --- a/drivers/md/dm-raid1.c +++ b/drivers/md/dm-raid1.c @@ -943,7 +943,8 @@ static int get_mirror(struct mirror_set *ms, struct dm_target *ti, char dummy; int ret; - if (sscanf(argv[1], "%llu%c", &offset, &dummy) != 
1) { + if (sscanf(argv[1], "%llu%c", &offset, &dummy) != 1 || + offset != (sector_t)offset) { ti->error = "Invalid offset"; return -EINVAL; } diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c index 7cd36e4d1310..4eb5f8c56535 100644 --- a/drivers/md/dm-rq.c +++ b/drivers/md/dm-rq.c @@ -43,7 +43,7 @@ static unsigned dm_get_blk_mq_queue_depth(void) int dm_request_based(struct mapped_device *md) { - return queue_is_rq_based(md->queue); + return queue_is_mq(md->queue); } void dm_start_queue(struct request_queue *q) @@ -128,12 +128,10 @@ static void rq_end_stats(struct mapped_device *md, struct request *orig) * the md may be freed in dm_put() at the end of this function. * Or do dm_get() before calling this function and dm_put() later. */ -static void rq_completed(struct mapped_device *md, int rw, bool run_queue) +static void rq_completed(struct mapped_device *md) { - atomic_dec(&md->pending[rw]); - /* nudge anyone waiting on suspend queue */ - if (!md_in_flight(md)) + if (unlikely(waitqueue_active(&md->wait))) wake_up(&md->wait); /* @@ -149,7 +147,6 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue) */ static void dm_end_request(struct request *clone, blk_status_t error) { - int rw = rq_data_dir(clone); struct dm_rq_target_io *tio = clone->end_io_data; struct mapped_device *md = tio->md; struct request *rq = tio->orig; @@ -159,7 +156,7 @@ static void dm_end_request(struct request *clone, blk_status_t error) rq_end_stats(md, rq); blk_mq_end_request(rq, error); - rq_completed(md, rw, true); + rq_completed(md); } static void __dm_mq_kick_requeue_list(struct request_queue *q, unsigned long msecs) @@ -183,7 +180,6 @@ static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_ { struct mapped_device *md = tio->md; struct request *rq = tio->orig; - int rw = rq_data_dir(rq); unsigned long delay_ms = delay_requeue ? 
100 : 0; rq_end_stats(md, rq); @@ -193,7 +189,7 @@ static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_ } dm_mq_delay_requeue_request(rq, delay_ms); - rq_completed(md, rw, false); + rq_completed(md); } static void dm_done(struct request *clone, blk_status_t error, bool mapped) @@ -248,15 +244,13 @@ static void dm_softirq_done(struct request *rq) bool mapped = true; struct dm_rq_target_io *tio = tio_from_request(rq); struct request *clone = tio->clone; - int rw; if (!clone) { struct mapped_device *md = tio->md; rq_end_stats(md, rq); - rw = rq_data_dir(rq); blk_mq_end_request(rq, tio->error); - rq_completed(md, rw, false); + rq_completed(md); return; } @@ -378,7 +372,6 @@ static int map_request(struct dm_rq_target_io *tio) blk_status_t ret; r = ti->type->clone_and_map_rq(ti, rq, &tio->info, &clone); -check_again: switch (r) { case DM_MAPIO_SUBMITTED: /* The target has taken the I/O to submit by itself later */ @@ -398,8 +391,7 @@ check_again: blk_rq_unprep_clone(clone); tio->ti->type->release_clone_rq(clone); tio->clone = NULL; - r = DM_MAPIO_REQUEUE; - goto check_again; + return DM_MAPIO_REQUEUE; } break; case DM_MAPIO_REQUEUE: @@ -436,7 +428,6 @@ ssize_t dm_attr_rq_based_seq_io_merge_deadline_store(struct mapped_device *md, static void dm_start_request(struct mapped_device *md, struct request *orig) { blk_mq_start_request(orig); - atomic_inc(&md->pending[rq_data_dir(orig)]); if (unlikely(dm_stats_used(&md->stats))) { struct dm_rq_target_io *tio = tio_from_request(orig); @@ -510,7 +501,7 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx, if (map_request(tio) == DM_MAPIO_REQUEUE) { /* Undo dm_start_request() before requeuing */ rq_end_stats(md, rq); - rq_completed(md, rq_data_dir(rq), false); + rq_completed(md); return BLK_STS_RESOURCE; } diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c index ae4b33d10924..36805b12661e 100644 --- a/drivers/md/dm-snap.c +++ b/drivers/md/dm-snap.c @@ -19,6 +19,7 @@ #include #include #include +#include #include "dm.h" @@ -105,6 +106,9 @@ struct dm_snapshot { /* The on disk metadata handler */ struct dm_exception_store *store; + /* Maximum number of in-flight COW jobs. */ + struct semaphore cow_count; + struct dm_kcopyd_client *kcopyd_client; /* Wait for events based on state_bits */ @@ -145,6 +149,19 @@ struct dm_snapshot { #define RUNNING_MERGE 0 #define SHUTDOWN_MERGE 1 +/* + * Maximum number of chunks being copied on write. + * + * The value was decided experimentally as a trade-off between memory + * consumption, stalling the kernel's workqueues and maintaining a high enough + * throughput. + */ +#define DEFAULT_COW_THRESHOLD 2048 + +static int cow_threshold = DEFAULT_COW_THRESHOLD; +module_param_named(snapshot_cow_threshold, cow_threshold, int, 0644); +MODULE_PARM_DESC(snapshot_cow_threshold, "Maximum number of chunks being copied on write"); + DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(snapshot_copy_throttle, "A percentage of time allocated for copy on write"); @@ -1190,6 +1207,8 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) goto bad_hash_tables; } + sema_init(&s->cow_count, (cow_threshold > 0) ? 
cow_threshold : INT_MAX); + s->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle); if (IS_ERR(s->kcopyd_client)) { r = PTR_ERR(s->kcopyd_client); @@ -1575,6 +1594,7 @@ static void copy_callback(int read_err, unsigned long write_err, void *context) rb_link_node(&pe->out_of_order_node, parent, p); rb_insert_color(&pe->out_of_order_node, &s->out_of_order_tree); } + up(&s->cow_count); } /* @@ -1598,6 +1618,7 @@ static void start_copy(struct dm_snap_pending_exception *pe) dest.count = src.count; /* Hand over to kcopyd */ + down(&s->cow_count); dm_kcopyd_copy(s->kcopyd_client, &src, 1, &dest, 0, copy_callback, pe); } @@ -1617,6 +1638,7 @@ static void start_full_bio(struct dm_snap_pending_exception *pe, pe->full_bio = bio; pe->full_bio_end_io = bio->bi_end_io; + down(&s->cow_count); callback_data = dm_kcopyd_prepare_callback(s->kcopyd_client, copy_callback, pe); diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c index 21de30b4e2a1..45b92a3d9d8e 100644 --- a/drivers/md/dm-stats.c +++ b/drivers/md/dm-stats.c @@ -85,7 +85,7 @@ static bool __check_shared_memory(size_t alloc_size) a = shared_memory_amount + alloc_size; if (a < shared_memory_amount) return false; - if (a >> PAGE_SHIFT > totalram_pages / DM_STATS_MEMORY_FACTOR) + if (a >> PAGE_SHIFT > totalram_pages() / DM_STATS_MEMORY_FACTOR) return false; #ifdef CONFIG_MMU if (a > (VMALLOC_END - VMALLOC_START) / DM_STATS_VMALLOC_FACTOR) diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index 9038c302d5c2..4b1be754cc41 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -919,12 +919,12 @@ static int device_is_rq_based(struct dm_target *ti, struct dm_dev *dev, struct request_queue *q = bdev_get_queue(dev->bdev); struct verify_rq_based_data *v = data; - if (q->mq_ops) + if (queue_is_mq(q)) v->mq_count++; else v->sq_count++; - return queue_is_rq_based(q); + return queue_is_mq(q); } static int dm_table_determine_type(struct dm_table *t) @@ -1927,6 +1927,9 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, */ if (blk_queue_is_zoned(q)) blk_revalidate_disk_zones(t->md->disk); + + /* Allow reads to exceed readahead limits */ + q->backing_dev_info->io_pages = limits->max_sectors >> (PAGE_SHIFT - 9); } unsigned int dm_table_get_num_targets(struct dm_table *t) diff --git a/drivers/md/dm-unstripe.c b/drivers/md/dm-unstripe.c index 954b7ab4e684..e673dacf6418 100644 --- a/drivers/md/dm-unstripe.c +++ b/drivers/md/dm-unstripe.c @@ -78,7 +78,7 @@ static int unstripe_ctr(struct dm_target *ti, unsigned int argc, char **argv) goto err; } - if (sscanf(argv[4], "%llu%c", &start, &dummy) != 1) { + if (sscanf(argv[4], "%llu%c", &start, &dummy) != 1 || start != (sector_t)start) { ti->error = "Invalid striped device offset"; goto err; } diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index fc65f0dedf7f..f4c31ffaa88e 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -1040,6 +1040,15 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv) v->tfm = NULL; goto bad; } + + /* + * dm-verity performance can vary greatly depending on which hash + * algorithm implementation is used. Help people debug performance + * problems by logging the ->cra_driver_name. 
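/*
 * A minimal sketch (not part of the patch) of the throttling idiom the
 * dm-snapshot hunks above introduce: a counting semaphore sized to the
 * maximum number of in-flight copy jobs, taken before each job is handed
 * to the async copy engine and released from the job's completion
 * callback.  queue_copy_job(), submit_copy() and copy_done() are
 * illustrative names only.
 */
#include <linux/semaphore.h>

static struct semaphore cow_throttle;

extern void queue_copy_job(void *job, void (*done)(void *ctx)); /* hypothetical */

static void copy_done(void *context)
{
	/* ... process the finished chunk ... */
	up(&cow_throttle);		/* release one in-flight slot */
}

static void submit_copy(void *job)
{
	down(&cow_throttle);		/* sleeps while all slots are in use */
	queue_copy_job(job, copy_done);	/* asynchronous hand-off */
}

static void throttle_setup(void)
{
	sema_init(&cow_throttle, 2048);	/* cf. DEFAULT_COW_THRESHOLD above */
}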
+ */ + DMINFO("%s using implementation \"%s\"", v->alg_name, + crypto_hash_alg_common(v->tfm)->base.cra_driver_name); + v->digest_size = crypto_ahash_digestsize(v->tfm); if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) { ti->error = "Digest size too big"; diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c index 2d50eec94cd7..2b8cee35e4d5 100644 --- a/drivers/md/dm-writecache.c +++ b/drivers/md/dm-writecache.c @@ -2061,7 +2061,7 @@ invalid_optional: if (IS_ERR(wc->flush_thread)) { r = PTR_ERR(wc->flush_thread); wc->flush_thread = NULL; - ti->error = "Couldn't spawn endio thread"; + ti->error = "Couldn't spawn flush thread"; goto bad; } wake_up_process(wc->flush_thread); diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 63a7c416b224..d67c95ef8d7e 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -646,26 +646,38 @@ static void free_tio(struct dm_target_io *tio) bio_put(&tio->clone); } -int md_in_flight(struct mapped_device *md) +static bool md_in_flight_bios(struct mapped_device *md) { - return atomic_read(&md->pending[READ]) + - atomic_read(&md->pending[WRITE]); + int cpu; + struct hd_struct *part = &dm_disk(md)->part0; + long sum = 0; + + for_each_possible_cpu(cpu) { + sum += part_stat_local_read_cpu(part, in_flight[0], cpu); + sum += part_stat_local_read_cpu(part, in_flight[1], cpu); + } + + return sum != 0; +} + +static bool md_in_flight(struct mapped_device *md) +{ + if (queue_is_mq(md->queue)) + return blk_mq_queue_inflight(md->queue); + else + return md_in_flight_bios(md); } static void start_io_acct(struct dm_io *io) { struct mapped_device *md = io->md; struct bio *bio = io->orig_bio; - int rw = bio_data_dir(bio); io->start_time = jiffies; generic_start_io_acct(md->queue, bio_op(bio), bio_sectors(bio), &dm_disk(md)->part0); - atomic_set(&dm_disk(md)->part0.in_flight[rw], - atomic_inc_return(&md->pending[rw])); - if (unlikely(dm_stats_used(&md->stats))) dm_stats_account_io(&md->stats, bio_data_dir(bio), bio->bi_iter.bi_sector, bio_sectors(bio), @@ -677,8 +689,6 @@ static void end_io_acct(struct dm_io *io) struct mapped_device *md = io->md; struct bio *bio = io->orig_bio; unsigned long duration = jiffies - io->start_time; - int pending; - int rw = bio_data_dir(bio); generic_end_io_acct(md->queue, bio_op(bio), &dm_disk(md)->part0, io->start_time); @@ -688,16 +698,8 @@ static void end_io_acct(struct dm_io *io) bio->bi_iter.bi_sector, bio_sectors(bio), true, duration, &io->stats_aux); - /* - * After this is decremented the bio must not be touched if it is - * a flush. - */ - pending = atomic_dec_return(&md->pending[rw]); - atomic_set(&dm_disk(md)->part0.in_flight[rw], pending); - pending += atomic_read(&md->pending[rw^0x1]); - /* nudge anyone waiting on suspend queue */ - if (!pending) + if (unlikely(waitqueue_active(&md->wait))) wake_up(&md->wait); } @@ -1417,10 +1419,21 @@ static int __send_empty_flush(struct clone_info *ci) unsigned target_nr = 0; struct dm_target *ti; + /* + * Empty flush uses a statically initialized bio, as the base for + * cloning. However, blkg association requires that a bdev is + * associated with a gendisk, which doesn't happen until the bdev is + * opened. So, blkg association is done at issue time of the flush + * rather than when the device is created in alloc_dev(). 
+ */ + bio_set_dev(ci->bio, ci->io->md->bdev); + BUG_ON(bio_has_data(ci->bio)); while ((ti = dm_table_get_target(ci->map, target_nr++))) __send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL); + bio_disassociate_blkg(ci->bio); + return 0; } @@ -1473,11 +1486,9 @@ static bool is_split_required_for_discard(struct dm_target *ti) } static int __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti, - get_num_bios_fn get_num_bios, - is_split_required_fn is_split_required) + unsigned num_bios, bool is_split_required) { unsigned len; - unsigned num_bios; /* * Even though the device advertised support for this type of @@ -1485,11 +1496,10 @@ static int __send_changing_extent_only(struct clone_info *ci, struct dm_target * * reconfiguration might also have changed that since the * check was performed. */ - num_bios = get_num_bios ? get_num_bios(ti) : 0; if (!num_bios) return -EOPNOTSUPP; - if (is_split_required && !is_split_required(ti)) + if (!is_split_required) len = min((sector_t)ci->sector_count, max_io_len_target_boundary(ci->sector, ti)); else len = min((sector_t)ci->sector_count, max_io_len(ci->sector, ti)); @@ -1504,23 +1514,23 @@ static int __send_changing_extent_only(struct clone_info *ci, struct dm_target * static int __send_discard(struct clone_info *ci, struct dm_target *ti) { - return __send_changing_extent_only(ci, ti, get_num_discard_bios, - is_split_required_for_discard); + return __send_changing_extent_only(ci, ti, get_num_discard_bios(ti), + is_split_required_for_discard(ti)); } static int __send_secure_erase(struct clone_info *ci, struct dm_target *ti) { - return __send_changing_extent_only(ci, ti, get_num_secure_erase_bios, NULL); + return __send_changing_extent_only(ci, ti, get_num_secure_erase_bios(ti), false); } static int __send_write_same(struct clone_info *ci, struct dm_target *ti) { - return __send_changing_extent_only(ci, ti, get_num_write_same_bios, NULL); + return __send_changing_extent_only(ci, ti, get_num_write_same_bios(ti), false); } static int __send_write_zeroes(struct clone_info *ci, struct dm_target *ti) { - return __send_changing_extent_only(ci, ti, get_num_write_zeroes_bios, NULL); + return __send_changing_extent_only(ci, ti, get_num_write_zeroes_bios(ti), false); } static bool __process_abnormal_io(struct clone_info *ci, struct dm_target *ti, @@ -1598,7 +1608,16 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md, init_clone_info(&ci, md, map, bio); if (bio->bi_opf & REQ_PREFLUSH) { - ci.bio = &ci.io->md->flush_bio; + struct bio flush_bio; + + /* + * Use an on-stack bio for this, it's safe since we don't + * need to reference it after submit. It's just used as + * the basis for the clone(s). + */ + bio_init(&flush_bio, NULL, 0); + flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC; + ci.bio = &flush_bio; ci.sector_count = 0; error = __send_empty_flush(&ci); /* dec_pending submits any data associated with flush */ @@ -1654,7 +1673,16 @@ static blk_qc_t __process_bio(struct mapped_device *md, init_clone_info(&ci, md, map, bio); if (bio->bi_opf & REQ_PREFLUSH) { - ci.bio = &ci.io->md->flush_bio; + struct bio flush_bio; + + /* + * Use an on-stack bio for this, it's safe since we don't + * need to reference it after submit. It's just used as + * the basis for the clone(s). 
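/*
 * Sketch (illustrative, not from the patch) of the on-stack bio idiom
 * the comment above describes: a bio that is only ever used as a
 * template and is fully consumed before the function returns can live
 * on the stack, which is what allows the preallocated md->flush_bio to
 * be removed from alloc_dev() below.
 */
#include <linux/bio.h>

static void issue_empty_flush(struct block_device *bdev)
{
	struct bio flush_bio;

	bio_init(&flush_bio, NULL, 0);	/* no bvec table, zero segments */
	flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
	bio_set_dev(&flush_bio, bdev);
	/* clones of flush_bio are created and submitted here; the stack
	 * copy itself is never submitted, so no reference outlives us */
}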
+ */ + bio_init(&flush_bio, NULL, 0); + flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC; + ci.bio = &flush_bio; ci.sector_count = 0; error = __send_empty_flush(&ci); /* dec_pending submits any data associated with flush */ @@ -1685,10 +1713,7 @@ out: return ret; } -typedef blk_qc_t (process_bio_fn)(struct mapped_device *, struct dm_table *, struct bio *); - -static blk_qc_t __dm_make_request(struct request_queue *q, struct bio *bio, - process_bio_fn process_bio) +static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio) { struct mapped_device *md = q->queuedata; blk_qc_t ret = BLK_QC_T_NONE; @@ -1708,26 +1733,15 @@ static blk_qc_t __dm_make_request(struct request_queue *q, struct bio *bio, return ret; } - ret = process_bio(md, map, bio); + if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED) + ret = __process_bio(md, map, bio); + else + ret = __split_and_process_bio(md, map, bio); dm_put_live_table(md, srcu_idx); return ret; } -/* - * The request function that remaps the bio to one target and - * splits off any remainder. - */ -static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio) -{ - return __dm_make_request(q, bio, __split_and_process_bio); -} - -static blk_qc_t dm_make_request_nvme(struct request_queue *q, struct bio *bio) -{ - return __dm_make_request(q, bio, __process_bio); -} - static int dm_any_congested(void *congested_data, int bdi_bits) { int r = bdi_bits; @@ -1898,7 +1912,7 @@ static struct mapped_device *alloc_dev(int minor) INIT_LIST_HEAD(&md->table_devices); spin_lock_init(&md->uevent_lock); - md->queue = blk_alloc_queue_node(GFP_KERNEL, numa_node_id, NULL); + md->queue = blk_alloc_queue_node(GFP_KERNEL, numa_node_id); if (!md->queue) goto bad; md->queue->queuedata = md; @@ -1908,8 +1922,6 @@ static struct mapped_device *alloc_dev(int minor) if (!md->disk) goto bad; - atomic_set(&md->pending[0], 0); - atomic_set(&md->pending[1], 0); init_waitqueue_head(&md->wait); INIT_WORK(&md->work, dm_wq_work); init_waitqueue_head(&md->eventq); @@ -1940,10 +1952,6 @@ static struct mapped_device *alloc_dev(int minor) if (!md->bdev) goto bad; - bio_init(&md->flush_bio, NULL, 0); - bio_set_dev(&md->flush_bio, md->bdev); - md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC; - dm_stats_init(&md->stats); /* Populate the mapping, nobody knows we exist yet */ @@ -2221,12 +2229,9 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t) break; case DM_TYPE_BIO_BASED: case DM_TYPE_DAX_BIO_BASED: - dm_init_normal_md_queue(md); - blk_queue_make_request(md->queue, dm_make_request); - break; case DM_TYPE_NVME_BIO_BASED: dm_init_normal_md_queue(md); - blk_queue_make_request(md->queue, dm_make_request_nvme); + blk_queue_make_request(md->queue, dm_make_request); break; case DM_TYPE_NONE: WARN_ON_ONCE(true); diff --git a/drivers/md/md.c b/drivers/md/md.c index fc488cb30a94..9a0a1e0934d5 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -334,7 +334,6 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio) const int sgrp = op_stat_group(bio_op(bio)); struct mddev *mddev = q->queuedata; unsigned int sectors; - int cpu; blk_queue_split(q, &bio); @@ -359,9 +358,9 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio) md_handle_request(mddev, bio); - cpu = part_stat_lock(); - part_stat_inc(cpu, &mddev->gendisk->part0, ios[sgrp]); - part_stat_add(cpu, &mddev->gendisk->part0, sectors[sgrp], sectors); + part_stat_lock(); + part_stat_inc(&mddev->gendisk->part0, ios[sgrp]); + 
part_stat_add(&mddev->gendisk->part0, sectors[sgrp], sectors); part_stat_unlock(); return BLK_QC_T_NONE; diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index ac1cffd2a09b..f3fb5bb8c82a 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -542,7 +542,7 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio) !discard_bio) continue; bio_chain(discard_bio, bio); - bio_clone_blkcg_association(discard_bio, bio); + bio_clone_blkg_association(discard_bio, bio); if (mddev->gendisk) trace_block_bio_remap(bdev_get_queue(rdev->bdev), discard_bio, disk_devt(mddev->gendisk), diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c index 616f78b24a79..b6602490a247 100644 --- a/drivers/media/platform/mtk-vpu/mtk_vpu.c +++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c @@ -855,7 +855,7 @@ static int mtk_vpu_probe(struct platform_device *pdev) /* Set PTCM to 96K and DTCM to 32K */ vpu_cfg_writel(vpu, 0x2, VPU_TCM_CFG); - vpu->enable_4GB = !!(totalram_pages > (SZ_2G >> PAGE_SHIFT)); + vpu->enable_4GB = !!(totalram_pages() > (SZ_2G >> PAGE_SHIFT)); dev_info(dev, "4GB mode %u\n", vpu->enable_4GB); if (vpu->enable_4GB) { diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c index 8b97fd1f0cea..390a722e6211 100644 --- a/drivers/media/rc/bpf-lirc.c +++ b/drivers/media/rc/bpf-lirc.c @@ -59,6 +59,28 @@ static const struct bpf_func_proto rc_keydown_proto = { .arg4_type = ARG_ANYTHING, }; +BPF_CALL_3(bpf_rc_pointer_rel, u32*, sample, s32, rel_x, s32, rel_y) +{ + struct ir_raw_event_ctrl *ctrl; + + ctrl = container_of(sample, struct ir_raw_event_ctrl, bpf_sample); + + input_report_rel(ctrl->dev->input_dev, REL_X, rel_x); + input_report_rel(ctrl->dev->input_dev, REL_Y, rel_y); + input_sync(ctrl->dev->input_dev); + + return 0; +} + +static const struct bpf_func_proto rc_pointer_rel_proto = { + .func = bpf_rc_pointer_rel, + .gpl_only = true, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_CTX, + .arg2_type = ARG_ANYTHING, + .arg3_type = ARG_ANYTHING, +}; + static const struct bpf_func_proto * lirc_mode2_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { @@ -67,6 +89,8 @@ lirc_mode2_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &rc_repeat_proto; case BPF_FUNC_rc_keydown: return &rc_keydown_proto; + case BPF_FUNC_rc_pointer_rel: + return &rc_pointer_rel_proto; case BPF_FUNC_map_lookup_elem: return &bpf_map_lookup_elem_proto; case BPF_FUNC_map_update_elem: diff --git a/drivers/memory/Makefile.asm-offsets b/drivers/memory/Makefile.asm-offsets index 843ff60ccb5a..0447e174c752 100644 --- a/drivers/memory/Makefile.asm-offsets +++ b/drivers/memory/Makefile.asm-offsets @@ -1,5 +1,4 @@ -drivers/memory/emif-asm-offsets.s: drivers/memory/emif-asm-offsets.c - $(call if_changed_dep,cc_s_c) - include/generated/ti-emif-asm-offsets.h: drivers/memory/emif-asm-offsets.s FORCE $(call filechk,offsets,__TI_EMIF_ASM_OFFSETS_H__) + +targets += emif-asm-offsets.s diff --git a/drivers/memory/omap-gpmc.c b/drivers/memory/omap-gpmc.c index c215287e80cf..a2188f7c04c6 100644 --- a/drivers/memory/omap-gpmc.c +++ b/drivers/memory/omap-gpmc.c @@ -21,6 +21,7 @@ #include #include #include +#include /* GPIO descriptor enum */ #include #include #include @@ -2145,8 +2146,8 @@ static int gpmc_probe_generic_child(struct platform_device *pdev, gpmc_s.device_width = GPMC_DEVWIDTH_16BIT; break; default: - dev_err(&pdev->dev, "%s: invalid 'nand-bus-width'\n", - child->name); + dev_err(&pdev->dev, "%pOFn: invalid 'nand-bus-width'\n", 
+ child); ret = -EINVAL; goto err; } @@ -2170,7 +2171,8 @@ static int gpmc_probe_generic_child(struct platform_device *pdev, unsigned int wait_pin = gpmc_s.wait_pin; waitpin_desc = gpiochip_request_own_desc(&gpmc->gpio_chip, - wait_pin, "WAITPIN"); + wait_pin, "WAITPIN", + 0); if (IS_ERR(waitpin_desc)) { dev_err(&pdev->dev, "invalid wait-pin: %d\n", wait_pin); ret = PTR_ERR(waitpin_desc); @@ -2186,8 +2188,8 @@ static int gpmc_probe_generic_child(struct platform_device *pdev, ret = gpmc_cs_set_timings(cs, &gpmc_t, &gpmc_s); if (ret) { - dev_err(&pdev->dev, "failed to set gpmc timings for: %s\n", - child->name); + dev_err(&pdev->dev, "failed to set gpmc timings for: %pOFn\n", + child); goto err_cs; } @@ -2215,7 +2217,7 @@ no_timings: err_child_fail: - dev_err(&pdev->dev, "failed to create gpmc child %s\n", child->name); + dev_err(&pdev->dev, "failed to create gpmc child %pOFn\n", child); ret = -ENODEV; err_cs: @@ -2265,14 +2267,10 @@ static void gpmc_probe_dt_children(struct platform_device *pdev) struct device_node *child; for_each_available_child_of_node(pdev->dev.of_node, child) { - - if (!child->name) - continue; - ret = gpmc_probe_generic_child(pdev, child); if (ret) { - dev_err(&pdev->dev, "failed to probe DT child '%s': %d\n", - child->name, ret); + dev_err(&pdev->dev, "failed to probe DT child '%pOFn': %d\n", + child, ret); } } } diff --git a/drivers/memory/samsung/exynos-srom.c b/drivers/memory/samsung/exynos-srom.c index 7edd7fb540f2..c27c6105c66d 100644 --- a/drivers/memory/samsung/exynos-srom.c +++ b/drivers/memory/samsung/exynos-srom.c @@ -139,8 +139,8 @@ static int exynos_srom_probe(struct platform_device *pdev) for_each_child_of_node(np, child) { if (exynos_srom_configure_bank(srom, child)) { dev_err(dev, - "Could not decode bank configuration for %s\n", - child->name); + "Could not decode bank configuration for %pOFn\n", + child); bad_bank_config = true; } } diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c index bd25faf6d13d..24afc36833bf 100644 --- a/drivers/memory/tegra/mc.c +++ b/drivers/memory/tegra/mc.c @@ -345,7 +345,7 @@ static int load_one_timing(struct tegra_mc *mc, err = of_property_read_u32(node, "clock-frequency", &tmp); if (err) { dev_err(mc->dev, - "timing %s: failed to read rate\n", node->name); + "timing %pOFn: failed to read rate\n", node); return err; } @@ -360,8 +360,8 @@ static int load_one_timing(struct tegra_mc *mc, mc->soc->num_emem_regs); if (err) { dev_err(mc->dev, - "timing %s: failed to read EMEM configuration\n", - node->name); + "timing %pOFn: failed to read EMEM configuration\n", + node); return err; } diff --git a/drivers/memory/tegra/tegra124-emc.c b/drivers/memory/tegra/tegra124-emc.c index 392dc8dd481f..eedb7d48e2ea 100644 --- a/drivers/memory/tegra/tegra124-emc.c +++ b/drivers/memory/tegra/tegra124-emc.c @@ -888,8 +888,8 @@ static int load_one_timing_from_dt(struct tegra_emc *emc, err = of_property_read_u32(node, "clock-frequency", &value); if (err) { - dev_err(emc->dev, "timing %s: failed to read rate: %d\n", - node->name, err); + dev_err(emc->dev, "timing %pOFn: failed to read rate: %d\n", + node, err); return err; } @@ -900,16 +900,16 @@ static int load_one_timing_from_dt(struct tegra_emc *emc, ARRAY_SIZE(timing->emc_burst_data)); if (err) { dev_err(emc->dev, - "timing %s: failed to read emc burst data: %d\n", - node->name, err); + "timing %pOFn: failed to read emc burst data: %d\n", + node, err); return err; } #define EMC_READ_PROP(prop, dtprop) { \ err = of_property_read_u32(node, dtprop, &timing->prop); \ if (err) { \ 
- dev_err(emc->dev, "timing %s: failed to read " #prop ": %d\n", \ - node->name, err); \ + dev_err(emc->dev, "timing %pOFn: failed to read " #prop ": %d\n", \ + node, err); \ return err; \ } \ } diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c index 76382c858c35..1246d69ba187 100644 --- a/drivers/memstick/core/memstick.c +++ b/drivers/memstick/core/memstick.c @@ -18,6 +18,7 @@ #include #include #include +#include #define DRIVER_NAME "memstick" @@ -436,6 +437,7 @@ static void memstick_check(struct work_struct *work) struct memstick_dev *card; dev_dbg(&host->dev, "memstick_check started\n"); + pm_runtime_get_noresume(host->dev.parent); mutex_lock(&host->lock); if (!host->card) { if (memstick_power_on(host)) @@ -479,6 +481,7 @@ out_power_off: host->set_param(host, MEMSTICK_POWER, MEMSTICK_POWER_OFF); mutex_unlock(&host->lock); + pm_runtime_put(host->dev.parent); dev_dbg(&host->dev, "memstick_check finished\n"); } diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c index 8a02f11076f9..82daccc9ea62 100644 --- a/drivers/memstick/core/ms_block.c +++ b/drivers/memstick/core/ms_block.c @@ -15,7 +15,7 @@ #define pr_fmt(fmt) DRIVER_NAME ": " fmt #include -#include +#include #include #include #include @@ -1873,69 +1873,65 @@ static void msb_io_work(struct work_struct *work) struct msb_data *msb = container_of(work, struct msb_data, io_work); int page, error, len; sector_t lba; - unsigned long flags; struct scatterlist *sg = msb->prealloc_sg; + struct request *req; dbg_verbose("IO: work started"); while (1) { - spin_lock_irqsave(&msb->q_lock, flags); + spin_lock_irq(&msb->q_lock); if (msb->need_flush_cache) { msb->need_flush_cache = false; - spin_unlock_irqrestore(&msb->q_lock, flags); + spin_unlock_irq(&msb->q_lock); msb_cache_flush(msb); continue; } - if (!msb->req) { - msb->req = blk_fetch_request(msb->queue); - if (!msb->req) { - dbg_verbose("IO: no more requests exiting"); - spin_unlock_irqrestore(&msb->q_lock, flags); - return; - } + req = msb->req; + if (!req) { + dbg_verbose("IO: no more requests exiting"); + spin_unlock_irq(&msb->q_lock); + return; } - spin_unlock_irqrestore(&msb->q_lock, flags); - - /* If card was removed meanwhile */ - if (!msb->req) - return; + spin_unlock_irq(&msb->q_lock); /* process the request */ dbg_verbose("IO: processing new request"); - blk_rq_map_sg(msb->queue, msb->req, sg); + blk_rq_map_sg(msb->queue, req, sg); - lba = blk_rq_pos(msb->req); + lba = blk_rq_pos(req); sector_div(lba, msb->page_size / 512); page = sector_div(lba, msb->pages_in_block); if (rq_data_dir(msb->req) == READ) error = msb_do_read_request(msb, lba, page, sg, - blk_rq_bytes(msb->req), &len); + blk_rq_bytes(req), &len); else error = msb_do_write_request(msb, lba, page, sg, - blk_rq_bytes(msb->req), &len); - - spin_lock_irqsave(&msb->q_lock, flags); + blk_rq_bytes(req), &len); - if (len) - if (!__blk_end_request(msb->req, BLK_STS_OK, len)) - msb->req = NULL; + if (len && !blk_update_request(req, BLK_STS_OK, len)) { + __blk_mq_end_request(req, BLK_STS_OK); + spin_lock_irq(&msb->q_lock); + msb->req = NULL; + spin_unlock_irq(&msb->q_lock); + } if (error && msb->req) { blk_status_t ret = errno_to_blk_status(error); + dbg_verbose("IO: ending one sector of the request with error"); - if (!__blk_end_request(msb->req, ret, msb->page_size)) - msb->req = NULL; + blk_mq_end_request(req, ret); + spin_lock_irq(&msb->q_lock); + msb->req = NULL; + spin_unlock_irq(&msb->q_lock); } if (msb->req) dbg_verbose("IO: request still pending"); - - 
spin_unlock_irqrestore(&msb->q_lock, flags); } } @@ -2002,29 +1998,40 @@ static int msb_bd_getgeo(struct block_device *bdev, return 0; } -static void msb_submit_req(struct request_queue *q) +static blk_status_t msb_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) { - struct memstick_dev *card = q->queuedata; + struct memstick_dev *card = hctx->queue->queuedata; struct msb_data *msb = memstick_get_drvdata(card); - struct request *req = NULL; + struct request *req = bd->rq; dbg_verbose("Submit request"); + spin_lock_irq(&msb->q_lock); + if (msb->card_dead) { dbg("Refusing requests on removed card"); WARN_ON(!msb->io_queue_stopped); - while ((req = blk_fetch_request(q)) != NULL) - __blk_end_request_all(req, BLK_STS_IOERR); - return; + spin_unlock_irq(&msb->q_lock); + blk_mq_start_request(req); + return BLK_STS_IOERR; } - if (msb->req) - return; + if (msb->req) { + spin_unlock_irq(&msb->q_lock); + return BLK_STS_DEV_RESOURCE; + } + + blk_mq_start_request(req); + msb->req = req; if (!msb->io_queue_stopped) queue_work(msb->io_queue, &msb->io_work); + + spin_unlock_irq(&msb->q_lock); + return BLK_STS_OK; } static int msb_check_card(struct memstick_dev *card) @@ -2040,21 +2047,20 @@ static void msb_stop(struct memstick_dev *card) dbg("Stopping all msblock IO"); + blk_mq_stop_hw_queues(msb->queue); spin_lock_irqsave(&msb->q_lock, flags); - blk_stop_queue(msb->queue); msb->io_queue_stopped = true; spin_unlock_irqrestore(&msb->q_lock, flags); del_timer_sync(&msb->cache_flush_timer); flush_workqueue(msb->io_queue); + spin_lock_irqsave(&msb->q_lock, flags); if (msb->req) { - spin_lock_irqsave(&msb->q_lock, flags); - blk_requeue_request(msb->queue, msb->req); + blk_mq_requeue_request(msb->req, false); msb->req = NULL; - spin_unlock_irqrestore(&msb->q_lock, flags); } - + spin_unlock_irqrestore(&msb->q_lock, flags); } static void msb_start(struct memstick_dev *card) @@ -2077,9 +2083,7 @@ static void msb_start(struct memstick_dev *card) msb->need_flush_cache = true; msb->io_queue_stopped = false; - spin_lock_irqsave(&msb->q_lock, flags); - blk_start_queue(msb->queue); - spin_unlock_irqrestore(&msb->q_lock, flags); + blk_mq_start_hw_queues(msb->queue); queue_work(msb->io_queue, &msb->io_work); @@ -2092,6 +2096,10 @@ static const struct block_device_operations msb_bdops = { .owner = THIS_MODULE }; +static const struct blk_mq_ops msb_mq_ops = { + .queue_rq = msb_queue_rq, +}; + /* Registers the block device */ static int msb_init_disk(struct memstick_dev *card) { @@ -2112,9 +2120,11 @@ static int msb_init_disk(struct memstick_dev *card) goto out_release_id; } - msb->queue = blk_init_queue(msb_submit_req, &msb->q_lock); - if (!msb->queue) { - rc = -ENOMEM; + msb->queue = blk_mq_init_sq_queue(&msb->tag_set, &msb_mq_ops, 2, + BLK_MQ_F_SHOULD_MERGE); + if (IS_ERR(msb->queue)) { + rc = PTR_ERR(msb->queue); + msb->queue = NULL; goto out_put_disk; } @@ -2202,12 +2212,13 @@ static void msb_remove(struct memstick_dev *card) /* Take care of unhandled + new requests from now on */ spin_lock_irqsave(&msb->q_lock, flags); msb->card_dead = true; - blk_start_queue(msb->queue); spin_unlock_irqrestore(&msb->q_lock, flags); + blk_mq_start_hw_queues(msb->queue); /* Remove the disk */ del_gendisk(msb->disk); blk_cleanup_queue(msb->queue); + blk_mq_free_tag_set(&msb->tag_set); msb->queue = NULL; mutex_lock(&msb_disk_lock); diff --git a/drivers/memstick/core/ms_block.h b/drivers/memstick/core/ms_block.h index 53962c3b21df..9ba84e0ced63 100644 --- a/drivers/memstick/core/ms_block.h +++ 
b/drivers/memstick/core/ms_block.h @@ -152,6 +152,7 @@ struct msb_data { struct gendisk *disk; struct request_queue *queue; spinlock_t q_lock; + struct blk_mq_tag_set tag_set; struct hd_geometry geometry; struct attribute_group attr_group; struct request *req; diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c index 0cd30dcb6801..aba50ec98b4d 100644 --- a/drivers/memstick/core/mspro_block.c +++ b/drivers/memstick/core/mspro_block.c @@ -12,7 +12,7 @@ * */ -#include +#include #include #include #include @@ -142,6 +142,7 @@ struct mspro_block_data { struct gendisk *disk; struct request_queue *queue; struct request *block_req; + struct blk_mq_tag_set tag_set; spinlock_t q_lock; unsigned short page_size; @@ -152,7 +153,6 @@ struct mspro_block_data { unsigned char system; unsigned char read_only:1, eject:1, - has_request:1, data_dir:1, active:1; unsigned char transfer_cmd; @@ -694,13 +694,12 @@ static void h_mspro_block_setup_cmd(struct memstick_dev *card, u64 offset, /*** Data transfer ***/ -static int mspro_block_issue_req(struct memstick_dev *card, int chunk) +static int mspro_block_issue_req(struct memstick_dev *card, bool chunk) { struct mspro_block_data *msb = memstick_get_drvdata(card); u64 t_off; unsigned int count; -try_again: while (chunk) { msb->current_page = 0; msb->current_seg = 0; @@ -709,9 +708,17 @@ try_again: msb->req_sg); if (!msb->seg_count) { - chunk = __blk_end_request_cur(msb->block_req, - BLK_STS_RESOURCE); - continue; + unsigned int bytes = blk_rq_cur_bytes(msb->block_req); + + chunk = blk_update_request(msb->block_req, + BLK_STS_RESOURCE, + bytes); + if (chunk) + continue; + __blk_mq_end_request(msb->block_req, + BLK_STS_RESOURCE); + msb->block_req = NULL; + break; } t_off = blk_rq_pos(msb->block_req); @@ -729,30 +736,22 @@ try_again: return 0; } - dev_dbg(&card->dev, "blk_fetch\n"); - msb->block_req = blk_fetch_request(msb->queue); - if (!msb->block_req) { - dev_dbg(&card->dev, "issue end\n"); - return -EAGAIN; - } - - dev_dbg(&card->dev, "trying again\n"); - chunk = 1; - goto try_again; + return 1; } static int mspro_block_complete_req(struct memstick_dev *card, int error) { struct mspro_block_data *msb = memstick_get_drvdata(card); - int chunk, cnt; + int cnt; + bool chunk; unsigned int t_len = 0; unsigned long flags; spin_lock_irqsave(&msb->q_lock, flags); - dev_dbg(&card->dev, "complete %d, %d\n", msb->has_request ? 1 : 0, + dev_dbg(&card->dev, "complete %d, %d\n", msb->block_req ? 
1 : 0, error); - if (msb->has_request) { + if (msb->block_req) { /* Nothing to do - not really an error */ if (error == -EAGAIN) error = 0; @@ -777,15 +776,17 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error) if (error && !t_len) t_len = blk_rq_cur_bytes(msb->block_req); - chunk = __blk_end_request(msb->block_req, + chunk = blk_update_request(msb->block_req, errno_to_blk_status(error), t_len); - - error = mspro_block_issue_req(card, chunk); - - if (!error) - goto out; - else - msb->has_request = 0; + if (chunk) { + error = mspro_block_issue_req(card, chunk); + if (!error) + goto out; + } else { + __blk_mq_end_request(msb->block_req, + errno_to_blk_status(error)); + msb->block_req = NULL; + } } else { if (!error) error = -EAGAIN; @@ -806,8 +807,8 @@ static void mspro_block_stop(struct memstick_dev *card) while (1) { spin_lock_irqsave(&msb->q_lock, flags); - if (!msb->has_request) { - blk_stop_queue(msb->queue); + if (!msb->block_req) { + blk_mq_stop_hw_queues(msb->queue); rc = 1; } spin_unlock_irqrestore(&msb->q_lock, flags); @@ -822,32 +823,37 @@ static void mspro_block_stop(struct memstick_dev *card) static void mspro_block_start(struct memstick_dev *card) { struct mspro_block_data *msb = memstick_get_drvdata(card); - unsigned long flags; - spin_lock_irqsave(&msb->q_lock, flags); - blk_start_queue(msb->queue); - spin_unlock_irqrestore(&msb->q_lock, flags); + blk_mq_start_hw_queues(msb->queue); } -static void mspro_block_submit_req(struct request_queue *q) +static blk_status_t mspro_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) { - struct memstick_dev *card = q->queuedata; + struct memstick_dev *card = hctx->queue->queuedata; struct mspro_block_data *msb = memstick_get_drvdata(card); - struct request *req = NULL; - if (msb->has_request) - return; + spin_lock_irq(&msb->q_lock); - if (msb->eject) { - while ((req = blk_fetch_request(q)) != NULL) - __blk_end_request_all(req, BLK_STS_IOERR); + if (msb->block_req) { + spin_unlock_irq(&msb->q_lock); + return BLK_STS_DEV_RESOURCE; + } - return; + if (msb->eject) { + spin_unlock_irq(&msb->q_lock); + blk_mq_start_request(bd->rq); + return BLK_STS_IOERR; } - msb->has_request = 1; - if (mspro_block_issue_req(card, 0)) - msb->has_request = 0; + msb->block_req = bd->rq; + blk_mq_start_request(bd->rq); + + if (mspro_block_issue_req(card, true)) + msb->block_req = NULL; + + spin_unlock_irq(&msb->q_lock); + return BLK_STS_OK; } /*** Initialization ***/ @@ -1167,6 +1173,10 @@ static int mspro_block_init_card(struct memstick_dev *card) } +static const struct blk_mq_ops mspro_mq_ops = { + .queue_rq = mspro_queue_rq, +}; + static int mspro_block_init_disk(struct memstick_dev *card) { struct mspro_block_data *msb = memstick_get_drvdata(card); @@ -1206,9 +1216,11 @@ static int mspro_block_init_disk(struct memstick_dev *card) goto out_release_id; } - msb->queue = blk_init_queue(mspro_block_submit_req, &msb->q_lock); - if (!msb->queue) { - rc = -ENOMEM; + msb->queue = blk_mq_init_sq_queue(&msb->tag_set, &mspro_mq_ops, 2, + BLK_MQ_F_SHOULD_MERGE); + if (IS_ERR(msb->queue)) { + rc = PTR_ERR(msb->queue); + msb->queue = NULL; goto out_put_disk; } @@ -1318,13 +1330,14 @@ static void mspro_block_remove(struct memstick_dev *card) spin_lock_irqsave(&msb->q_lock, flags); msb->eject = 1; - blk_start_queue(msb->queue); spin_unlock_irqrestore(&msb->q_lock, flags); + blk_mq_start_hw_queues(msb->queue); del_gendisk(msb->disk); dev_dbg(&card->dev, "mspro block remove\n"); blk_cleanup_queue(msb->queue); + 
blk_mq_free_tag_set(&msb->tag_set); msb->queue = NULL; sysfs_remove_group(&card->dev.kobj, &msb->attr_group); @@ -1344,8 +1357,9 @@ static int mspro_block_suspend(struct memstick_dev *card, pm_message_t state) struct mspro_block_data *msb = memstick_get_drvdata(card); unsigned long flags; + blk_mq_stop_hw_queues(msb->queue); + spin_lock_irqsave(&msb->q_lock, flags); - blk_stop_queue(msb->queue); msb->active = 0; spin_unlock_irqrestore(&msb->q_lock, flags); @@ -1355,7 +1369,6 @@ static int mspro_block_suspend(struct memstick_dev *card, pm_message_t state) static int mspro_block_resume(struct memstick_dev *card) { struct mspro_block_data *msb = memstick_get_drvdata(card); - unsigned long flags; int rc = 0; #ifdef CONFIG_MEMSTICK_UNSAFE_RESUME @@ -1401,9 +1414,7 @@ out_unlock: #endif /* CONFIG_MEMSTICK_UNSAFE_RESUME */ - spin_lock_irqsave(&msb->q_lock, flags); - blk_start_queue(msb->queue); - spin_unlock_irqrestore(&msb->q_lock, flags); + blk_mq_start_hw_queues(msb->queue); return rc; } diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c index 4f64563df7de..97308dc28ccf 100644 --- a/drivers/memstick/host/rtsx_usb_ms.c +++ b/drivers/memstick/host/rtsx_usb_ms.c @@ -40,15 +40,14 @@ struct rtsx_usb_ms { struct mutex host_mutex; struct work_struct handle_req; - - struct task_struct *detect_ms; - struct completion detect_ms_exit; + struct delayed_work poll_card; u8 ssc_depth; unsigned int clock; int power_mode; unsigned char ifmode; bool eject; + bool system_suspending; }; static inline struct device *ms_dev(struct rtsx_usb_ms *host) @@ -545,7 +544,7 @@ static void rtsx_usb_ms_handle_req(struct work_struct *work) host->req->error); } } while (!rc); - pm_runtime_put(ms_dev(host)); + pm_runtime_put_sync(ms_dev(host)); } } @@ -585,14 +584,14 @@ static int rtsx_usb_ms_set_param(struct memstick_host *msh, break; if (value == MEMSTICK_POWER_ON) { - pm_runtime_get_sync(ms_dev(host)); + pm_runtime_get_noresume(ms_dev(host)); err = ms_power_on(host); + if (err) + pm_runtime_put_noidle(ms_dev(host)); } else if (value == MEMSTICK_POWER_OFF) { err = ms_power_off(host); - if (host->msh->card) + if (!err) pm_runtime_put_noidle(ms_dev(host)); - else - pm_runtime_put(ms_dev(host)); } else err = -EINVAL; if (!err) @@ -638,12 +637,16 @@ static int rtsx_usb_ms_set_param(struct memstick_host *msh, } out: mutex_unlock(&ucr->dev_mutex); - pm_runtime_put(ms_dev(host)); + pm_runtime_put_sync(ms_dev(host)); /* power-on delay */ - if (param == MEMSTICK_POWER && value == MEMSTICK_POWER_ON) + if (param == MEMSTICK_POWER && value == MEMSTICK_POWER_ON) { usleep_range(10000, 12000); + if (!host->eject) + schedule_delayed_work(&host->poll_card, 100); + } + dev_dbg(ms_dev(host), "%s: return = %d\n", __func__, err); return err; } @@ -654,9 +657,24 @@ static int rtsx_usb_ms_suspend(struct device *dev) struct rtsx_usb_ms *host = dev_get_drvdata(dev); struct memstick_host *msh = host->msh; - dev_dbg(ms_dev(host), "--> %s\n", __func__); + /* Since we use rtsx_usb's resume callback to runtime resume its + * children to implement remote wakeup signaling, this causes + * rtsx_usb_ms' runtime resume callback runs after its suspend + * callback: + * rtsx_usb_ms_suspend() + * rtsx_usb_resume() + * -> rtsx_usb_ms_runtime_resume() + * -> memstick_detect_change() + * + * rtsx_usb_suspend() + * + * To avoid this, skip runtime resume/suspend if system suspend is + * underway. 
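/*
 * Sketch of the guard described in the comment above (names shortened,
 * struct demo_host is hypothetical): a flag set across system sleep
 * turns the runtime-PM callbacks into no-ops, so a parent-driven
 * runtime resume cannot race with the system suspend path.  It mirrors
 * the system_suspending handling added below.
 */
#include <linux/device.h>

struct demo_host {
	bool system_suspending;
};

static int demo_suspend(struct device *dev)
{
	struct demo_host *host = dev_get_drvdata(dev);

	host->system_suspending = true;
	/* ... quiesce the device for system sleep ... */
	return 0;
}

static int demo_runtime_resume(struct device *dev)
{
	struct demo_host *host = dev_get_drvdata(dev);

	if (host->system_suspending)	/* system sleep owns the device now */
		return 0;
	/* ... normal runtime resume work ... */
	return 0;
}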
+ */ + host->system_suspending = true; memstick_suspend_host(msh); + return 0; } @@ -665,58 +683,85 @@ static int rtsx_usb_ms_resume(struct device *dev) struct rtsx_usb_ms *host = dev_get_drvdata(dev); struct memstick_host *msh = host->msh; - dev_dbg(ms_dev(host), "--> %s\n", __func__); - memstick_resume_host(msh); + host->system_suspending = false; + return 0; } #endif /* CONFIG_PM_SLEEP */ -/* - * Thread function of ms card slot detection. The thread starts right after - * successful host addition. It stops while the driver removal function sets - * host->eject true. - */ -static int rtsx_usb_detect_ms_card(void *__host) +#ifdef CONFIG_PM +static int rtsx_usb_ms_runtime_suspend(struct device *dev) { - struct rtsx_usb_ms *host = (struct rtsx_usb_ms *)__host; + struct rtsx_usb_ms *host = dev_get_drvdata(dev); + + if (host->system_suspending) + return 0; + + if (host->msh->card || host->power_mode != MEMSTICK_POWER_OFF) + return -EAGAIN; + + return 0; +} + +static int rtsx_usb_ms_runtime_resume(struct device *dev) +{ + struct rtsx_usb_ms *host = dev_get_drvdata(dev); + + + if (host->system_suspending) + return 0; + + memstick_detect_change(host->msh); + + return 0; +} +#endif /* CONFIG_PM */ + +static const struct dev_pm_ops rtsx_usb_ms_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(rtsx_usb_ms_suspend, rtsx_usb_ms_resume) + SET_RUNTIME_PM_OPS(rtsx_usb_ms_runtime_suspend, rtsx_usb_ms_runtime_resume, NULL) +}; + + +static void rtsx_usb_ms_poll_card(struct work_struct *work) +{ + struct rtsx_usb_ms *host = container_of(work, struct rtsx_usb_ms, + poll_card.work); struct rtsx_ucr *ucr = host->ucr; - u8 val = 0; int err; + u8 val; - for (;;) { - pm_runtime_get_sync(ms_dev(host)); - mutex_lock(&ucr->dev_mutex); - - /* Check pending MS card changes */ - err = rtsx_usb_read_register(ucr, CARD_INT_PEND, &val); - if (err) { - mutex_unlock(&ucr->dev_mutex); - goto poll_again; - } + if (host->eject || host->power_mode != MEMSTICK_POWER_ON) + return; - /* Clear the pending */ - rtsx_usb_write_register(ucr, CARD_INT_PEND, - XD_INT | MS_INT | SD_INT, - XD_INT | MS_INT | SD_INT); + pm_runtime_get_sync(ms_dev(host)); + mutex_lock(&ucr->dev_mutex); + /* Check pending MS card changes */ + err = rtsx_usb_read_register(ucr, CARD_INT_PEND, &val); + if (err) { mutex_unlock(&ucr->dev_mutex); + goto poll_again; + } - if (val & MS_INT) { - dev_dbg(ms_dev(host), "MS slot change detected\n"); - memstick_detect_change(host->msh); - } + /* Clear the pending */ + rtsx_usb_write_register(ucr, CARD_INT_PEND, + XD_INT | MS_INT | SD_INT, + XD_INT | MS_INT | SD_INT); -poll_again: - pm_runtime_put(ms_dev(host)); - if (host->eject) - break; + mutex_unlock(&ucr->dev_mutex); - schedule_timeout_idle(HZ); + if (val & MS_INT) { + dev_dbg(ms_dev(host), "MS slot change detected\n"); + memstick_detect_change(host->msh); } - complete(&host->detect_ms_exit); - return 0; +poll_again: + pm_runtime_put_sync(ms_dev(host)); + + if (!host->eject && host->power_mode == MEMSTICK_POWER_ON) + schedule_delayed_work(&host->poll_card, 100); } static int rtsx_usb_ms_drv_probe(struct platform_device *pdev) @@ -747,45 +792,42 @@ static int rtsx_usb_ms_drv_probe(struct platform_device *pdev) mutex_init(&host->host_mutex); INIT_WORK(&host->handle_req, rtsx_usb_ms_handle_req); - init_completion(&host->detect_ms_exit); - host->detect_ms = kthread_create(rtsx_usb_detect_ms_card, host, - "rtsx_usb_ms_%d", pdev->id); - if (IS_ERR(host->detect_ms)) { - dev_dbg(&(pdev->dev), - "Unable to create polling thread.\n"); - err = PTR_ERR(host->detect_ms); - goto err_out; - } + 
INIT_DELAYED_WORK(&host->poll_card, rtsx_usb_ms_poll_card); msh->request = rtsx_usb_ms_request; msh->set_param = rtsx_usb_ms_set_param; msh->caps = MEMSTICK_CAP_PAR4; - pm_runtime_enable(&pdev->dev); + pm_runtime_get_noresume(ms_dev(host)); + pm_runtime_set_active(ms_dev(host)); + pm_runtime_enable(ms_dev(host)); + err = memstick_add_host(msh); if (err) goto err_out; - wake_up_process(host->detect_ms); + pm_runtime_put(ms_dev(host)); + return 0; err_out: memstick_free_host(msh); + pm_runtime_disable(ms_dev(host)); + pm_runtime_put_noidle(ms_dev(host)); return err; } static int rtsx_usb_ms_drv_remove(struct platform_device *pdev) { struct rtsx_usb_ms *host = platform_get_drvdata(pdev); - struct memstick_host *msh; + struct memstick_host *msh = host->msh; int err; - msh = host->msh; host->eject = true; cancel_work_sync(&host->handle_req); mutex_lock(&host->host_mutex); if (host->req) { - dev_dbg(&(pdev->dev), + dev_dbg(ms_dev(host), "%s: Controller removed during transfer\n", dev_name(&msh->dev)); host->req->error = -ENOMEDIUM; @@ -797,7 +839,6 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev) } mutex_unlock(&host->host_mutex); - wait_for_completion(&host->detect_ms_exit); memstick_remove_host(msh); memstick_free_host(msh); @@ -807,18 +848,15 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev) if (pm_runtime_active(ms_dev(host))) pm_runtime_put(ms_dev(host)); - pm_runtime_disable(&pdev->dev); + pm_runtime_disable(ms_dev(host)); platform_set_drvdata(pdev, NULL); - dev_dbg(&(pdev->dev), + dev_dbg(ms_dev(host), ": Realtek USB Memstick controller has been removed\n"); return 0; } -static SIMPLE_DEV_PM_OPS(rtsx_usb_ms_pm_ops, - rtsx_usb_ms_suspend, rtsx_usb_ms_resume); - static struct platform_device_id rtsx_usb_ms_ids[] = { { .name = "rtsx_usb_ms", diff --git a/drivers/message/fusion/mptfc.c b/drivers/message/fusion/mptfc.c index b15fdc626fb8..4314a3352b96 100644 --- a/drivers/message/fusion/mptfc.c +++ b/drivers/message/fusion/mptfc.c @@ -129,7 +129,6 @@ static struct scsi_host_template mptfc_driver_template = { .sg_tablesize = MPT_SCSI_SG_DEPTH, .max_sectors = 8192, .cmd_per_lun = 7, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = mptscsih_host_attrs, }; diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c index 9b404fc69c90..612cb5bc1333 100644 --- a/drivers/message/fusion/mptsas.c +++ b/drivers/message/fusion/mptsas.c @@ -1992,7 +1992,6 @@ static struct scsi_host_template mptsas_driver_template = { .sg_tablesize = MPT_SCSI_SG_DEPTH, .max_sectors = 8192, .cmd_per_lun = 7, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = mptscsih_host_attrs, .no_write_same = 1, }; diff --git a/drivers/message/fusion/mptspi.c b/drivers/message/fusion/mptspi.c index 9a336a161d9f..7172b0b16bdd 100644 --- a/drivers/message/fusion/mptspi.c +++ b/drivers/message/fusion/mptspi.c @@ -848,7 +848,6 @@ static struct scsi_host_template mptspi_driver_template = { .sg_tablesize = MPT_SCSI_SG_DEPTH, .max_sectors = 8192, .cmd_per_lun = 7, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = mptscsih_host_attrs, }; diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig index 3726eacdf65d..f417b06e11c5 100644 --- a/drivers/misc/Kconfig +++ b/drivers/misc/Kconfig @@ -513,6 +513,14 @@ config MISC_RTSX tristate default MISC_RTSX_PCI || MISC_RTSX_USB +config PVPANIC + tristate "pvpanic device support" + depends on HAS_IOMEM && (ACPI || OF) + help + This driver provides support for the pvpanic device. 
pvpanic is
+	  a paravirtualized device provided by QEMU; it lets a virtual machine
+	  (guest) communicate panic events to the host.
+
 source "drivers/misc/c2port/Kconfig"
 source "drivers/misc/eeprom/Kconfig"
 source "drivers/misc/cb710/Kconfig"
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index af22bbc3d00c..e39ccbbc1b3a 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -57,4 +57,5 @@ obj-$(CONFIG_ASPEED_LPC_CTRL)	+= aspeed-lpc-ctrl.o
 obj-$(CONFIG_ASPEED_LPC_SNOOP)	+= aspeed-lpc-snoop.o
 obj-$(CONFIG_PCI_ENDPOINT_TEST)	+= pci_endpoint_test.o
 obj-$(CONFIG_OCXL)		+= ocxl/
-obj-$(CONFIG_MISC_RTSX)		+= cardreader/
+obj-y				+= cardreader/
+obj-$(CONFIG_PVPANIC)		+= pvpanic.o
diff --git a/drivers/misc/altera-stapl/altera.c b/drivers/misc/altera-stapl/altera.c
index ef83a9078646..d2ed3b9728b7 100644
--- a/drivers/misc/altera-stapl/altera.c
+++ b/drivers/misc/altera-stapl/altera.c
@@ -2176,8 +2176,7 @@ static int altera_get_note(u8 *p, s32 program_size,
 			key_ptr = &p[note_strings +
 					get_unaligned_be32(
 					&p[note_table + (8 * i)])];
-			if ((strncasecmp(key, key_ptr, strlen(key_ptr)) == 0) &&
-						(key != NULL)) {
+			if (key && !strncasecmp(key, key_ptr, strlen(key_ptr))) {
 				status = 0;

 				value_ptr = &p[note_strings +
diff --git a/drivers/misc/cardreader/Kconfig b/drivers/misc/cardreader/Kconfig
index 69e815e32a8c..ed8993b5d058 100644
--- a/drivers/misc/cardreader/Kconfig
+++ b/drivers/misc/cardreader/Kconfig
@@ -1,3 +1,14 @@
+config MISC_ALCOR_PCI
+	tristate "Alcor Micro/Alcor Link PCI-E card reader"
+	depends on PCI
+	select MFD_CORE
+	help
+	  This adds support for Alcor Micro PCI-Express card readers, including
+	  the au6601 and au6621.
+	  Alcor Micro card readers support access to many types of memory cards,
+	  such as Memory Stick, Memory Stick Pro, Secure Digital and
+	  MultiMediaCard.
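The MISC_ALCOR_PCI entry above selects MFD_CORE because the driver added
below registers its SD and Memory Stick functions as MFD child devices
sharing the parent's state. A minimal sketch of that registration pattern,
with hypothetical names (struct demo_priv, demo_cells, demo-sdmmc, demo-ms):

#include <linux/kernel.h>
#include <linux/mfd/core.h>

struct demo_priv {
	void __iomem *iobase;	/* hypothetical shared state */
};

static struct mfd_cell demo_cells[] = {
	{ .name = "demo-sdmmc" },	/* each cell is bound by its own driver */
	{ .name = "demo-ms" },
};

static int demo_register_children(struct device *parent, struct demo_priv *priv)
{
	int i;

	/* hand every child a pointer to the shared parent state */
	for (i = 0; i < ARRAY_SIZE(demo_cells); i++) {
		demo_cells[i].platform_data = priv;
		demo_cells[i].pdata_size = sizeof(*priv);
	}
	return mfd_add_devices(parent, 0, demo_cells, ARRAY_SIZE(demo_cells),
			       NULL, 0, NULL);
}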
+ config MISC_RTSX_PCI tristate "Realtek PCI-E card reader" depends on PCI diff --git a/drivers/misc/cardreader/Makefile b/drivers/misc/cardreader/Makefile index 9fabfcc6fa7a..9882d2a1025c 100644 --- a/drivers/misc/cardreader/Makefile +++ b/drivers/misc/cardreader/Makefile @@ -1,4 +1,4 @@ -rtsx_pci-objs := rtsx_pcr.o rts5209.o rts5229.o rtl8411.o rts5227.o rts5249.o rts5260.o - +obj-$(CONFIG_MISC_ALCOR_PCI) += alcor_pci.o obj-$(CONFIG_MISC_RTSX_PCI) += rtsx_pci.o +rtsx_pci-objs := rtsx_pcr.o rts5209.o rts5229.o rtl8411.o rts5227.o rts5249.o rts5260.o obj-$(CONFIG_MISC_RTSX_USB) += rtsx_usb.o diff --git a/drivers/misc/cardreader/alcor_pci.c b/drivers/misc/cardreader/alcor_pci.c new file mode 100644 index 000000000000..bcb10fa4bc3a --- /dev/null +++ b/drivers/misc/cardreader/alcor_pci.c @@ -0,0 +1,371 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Oleksij Rempel + * + * Driver for Alcor Micro AU6601 and AU6621 controllers + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#define DRV_NAME_ALCOR_PCI "alcor_pci" + +static DEFINE_IDA(alcor_pci_idr); + +static struct mfd_cell alcor_pci_cells[] = { + [ALCOR_SD_CARD] = { + .name = DRV_NAME_ALCOR_PCI_SDMMC, + }, + [ALCOR_MS_CARD] = { + .name = DRV_NAME_ALCOR_PCI_MS, + }, +}; + +static const struct alcor_dev_cfg alcor_cfg = { + .dma = 0, +}; + +static const struct alcor_dev_cfg au6621_cfg = { + .dma = 1, +}; + +static const struct pci_device_id pci_ids[] = { + { PCI_DEVICE(PCI_ID_ALCOR_MICRO, PCI_ID_AU6601), + .driver_data = (kernel_ulong_t)&alcor_cfg }, + { PCI_DEVICE(PCI_ID_ALCOR_MICRO, PCI_ID_AU6621), + .driver_data = (kernel_ulong_t)&au6621_cfg }, + { }, +}; +MODULE_DEVICE_TABLE(pci, pci_ids); + +void alcor_write8(struct alcor_pci_priv *priv, u8 val, unsigned int addr) +{ + writeb(val, priv->iobase + addr); +} +EXPORT_SYMBOL_GPL(alcor_write8); + +void alcor_write16(struct alcor_pci_priv *priv, u16 val, unsigned int addr) +{ + writew(val, priv->iobase + addr); +} +EXPORT_SYMBOL_GPL(alcor_write16); + +void alcor_write32(struct alcor_pci_priv *priv, u32 val, unsigned int addr) +{ + writel(val, priv->iobase + addr); +} +EXPORT_SYMBOL_GPL(alcor_write32); + +void alcor_write32be(struct alcor_pci_priv *priv, u32 val, unsigned int addr) +{ + iowrite32be(val, priv->iobase + addr); +} +EXPORT_SYMBOL_GPL(alcor_write32be); + +u8 alcor_read8(struct alcor_pci_priv *priv, unsigned int addr) +{ + return readb(priv->iobase + addr); +} +EXPORT_SYMBOL_GPL(alcor_read8); + +u32 alcor_read32(struct alcor_pci_priv *priv, unsigned int addr) +{ + return readl(priv->iobase + addr); +} +EXPORT_SYMBOL_GPL(alcor_read32); + +u32 alcor_read32be(struct alcor_pci_priv *priv, unsigned int addr) +{ + return ioread32be(priv->iobase + addr); +} +EXPORT_SYMBOL_GPL(alcor_read32be); + +static int alcor_pci_find_cap_offset(struct alcor_pci_priv *priv, + struct pci_dev *pci) +{ + int where; + u8 val8; + u32 val32; + + where = ALCOR_CAP_START_OFFSET; + pci_read_config_byte(pci, where, &val8); + if (!val8) + return 0; + + where = (int)val8; + while (1) { + pci_read_config_dword(pci, where, &val32); + if (val32 == 0xffffffff) { + dev_dbg(priv->dev, "find_cap_offset invalid value %x.\n", + val32); + return 0; + } + + if ((val32 & 0xff) == 0x10) { + dev_dbg(priv->dev, "pcie cap offset: %x\n", where); + return where; + } + + if ((val32 & 0xff00) == 0x00) { + dev_dbg(priv->dev, "pci_find_cap_offset invalid value %x.\n", + val32); + break; + } + where = (int)((val32 >> 8) & 0xff); + } + + return 0; +} + +static 
void alcor_pci_init_check_aspm(struct alcor_pci_priv *priv) +{ + struct pci_dev *pci; + int where; + u32 val32; + + priv->pdev_cap_off = alcor_pci_find_cap_offset(priv, priv->pdev); + priv->parent_cap_off = alcor_pci_find_cap_offset(priv, + priv->parent_pdev); + + if ((priv->pdev_cap_off == 0) || (priv->parent_cap_off == 0)) { + dev_dbg(priv->dev, "pci_cap_off: %x, parent_cap_off: %x\n", + priv->pdev_cap_off, priv->parent_cap_off); + return; + } + + /* link capability */ + pci = priv->pdev; + where = priv->pdev_cap_off + ALCOR_PCIE_LINK_CAP_OFFSET; + pci_read_config_dword(pci, where, &val32); + priv->pdev_aspm_cap = (u8)(val32 >> 10) & 0x03; + + pci = priv->parent_pdev; + where = priv->parent_cap_off + ALCOR_PCIE_LINK_CAP_OFFSET; + pci_read_config_dword(pci, where, &val32); + priv->parent_aspm_cap = (u8)(val32 >> 10) & 0x03; + + if (priv->pdev_aspm_cap != priv->parent_aspm_cap) { + u8 aspm_cap; + + dev_dbg(priv->dev, "pdev_aspm_cap: %x, parent_aspm_cap: %x\n", + priv->pdev_aspm_cap, priv->parent_aspm_cap); + aspm_cap = priv->pdev_aspm_cap & priv->parent_aspm_cap; + priv->pdev_aspm_cap = aspm_cap; + priv->parent_aspm_cap = aspm_cap; + } + + dev_dbg(priv->dev, "ext_config_dev_aspm: %x, pdev_aspm_cap: %x\n", + priv->ext_config_dev_aspm, priv->pdev_aspm_cap); + priv->ext_config_dev_aspm &= priv->pdev_aspm_cap; +} + +static void alcor_pci_aspm_ctrl(struct alcor_pci_priv *priv, u8 aspm_enable) +{ + struct pci_dev *pci; + u8 aspm_ctrl, i; + int where; + u32 val32; + + if ((!priv->pdev_cap_off) || (!priv->parent_cap_off)) { + dev_dbg(priv->dev, "pci_cap_off: %x, parent_cap_off: %x\n", + priv->pdev_cap_off, priv->parent_cap_off); + return; + } + + if (!priv->pdev_aspm_cap) + return; + + aspm_ctrl = 0; + if (aspm_enable) { + aspm_ctrl = priv->ext_config_dev_aspm; + + if (!aspm_ctrl) { + dev_dbg(priv->dev, "aspm_ctrl == 0\n"); + return; + } + } + + for (i = 0; i < 2; i++) { + + if (i) { + pci = priv->parent_pdev; + where = priv->parent_cap_off + + ALCOR_PCIE_LINK_CTRL_OFFSET; + } else { + pci = priv->pdev; + where = priv->pdev_cap_off + + ALCOR_PCIE_LINK_CTRL_OFFSET; + } + + pci_read_config_dword(pci, where, &val32); + val32 &= (~0x03); + val32 |= (aspm_ctrl & priv->pdev_aspm_cap); + pci_write_config_byte(pci, where, (u8)val32); + } + +} + +static inline void alcor_mask_sd_irqs(struct alcor_pci_priv *priv) +{ + alcor_write32(priv, 0, AU6601_REG_INT_ENABLE); +} + +static inline void alcor_unmask_sd_irqs(struct alcor_pci_priv *priv) +{ + alcor_write32(priv, AU6601_INT_CMD_MASK | AU6601_INT_DATA_MASK | + AU6601_INT_CARD_INSERT | AU6601_INT_CARD_REMOVE | + AU6601_INT_OVER_CURRENT_ERR, + AU6601_REG_INT_ENABLE); +} + +static inline void alcor_mask_ms_irqs(struct alcor_pci_priv *priv) +{ + alcor_write32(priv, 0, AU6601_MS_INT_ENABLE); +} + +static inline void alcor_unmask_ms_irqs(struct alcor_pci_priv *priv) +{ + alcor_write32(priv, 0x3d00fa, AU6601_MS_INT_ENABLE); +} + +static int alcor_pci_probe(struct pci_dev *pdev, + const struct pci_device_id *ent) +{ + struct alcor_dev_cfg *cfg; + struct alcor_pci_priv *priv; + int ret, i, bar = 0; + + cfg = (void *)ent->driver_data; + + ret = pcim_enable_device(pdev); + if (ret) + return ret; + + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + ret = ida_simple_get(&alcor_pci_idr, 0, 0, GFP_KERNEL); + if (ret < 0) + return ret; + priv->id = ret; + + priv->pdev = pdev; + priv->parent_pdev = pdev->bus->self; + priv->dev = &pdev->dev; + priv->cfg = cfg; + priv->irq = pdev->irq; + + ret = pci_request_regions(pdev, 
DRV_NAME_ALCOR_PCI); + if (ret) { + dev_err(&pdev->dev, "Cannot request region\n"); + return -ENOMEM; + } + + if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) { + dev_err(&pdev->dev, "BAR %d is not iomem. Aborting.\n", bar); + ret = -ENODEV; + goto error_release_regions; + } + + priv->iobase = pcim_iomap(pdev, bar, 0); + if (!priv->iobase) { + ret = -ENOMEM; + goto error_release_regions; + } + + /* make sure irqs are disabled */ + alcor_write32(priv, 0, AU6601_REG_INT_ENABLE); + alcor_write32(priv, 0, AU6601_MS_INT_ENABLE); + + ret = dma_set_mask_and_coherent(priv->dev, AU6601_SDMA_MASK); + if (ret) { + dev_err(priv->dev, "Failed to set DMA mask\n"); + goto error_release_regions; + } + + pci_set_master(pdev); + pci_set_drvdata(pdev, priv); + alcor_pci_init_check_aspm(priv); + + for (i = 0; i < ARRAY_SIZE(alcor_pci_cells); i++) { + alcor_pci_cells[i].platform_data = priv; + alcor_pci_cells[i].pdata_size = sizeof(*priv); + } + ret = mfd_add_devices(&pdev->dev, priv->id, alcor_pci_cells, + ARRAY_SIZE(alcor_pci_cells), NULL, 0, NULL); + if (ret < 0) + goto error_release_regions; + + alcor_pci_aspm_ctrl(priv, 0); + + return 0; + +error_release_regions: + pci_release_regions(pdev); + return ret; +} + +static void alcor_pci_remove(struct pci_dev *pdev) +{ + struct alcor_pci_priv *priv; + + priv = pci_get_drvdata(pdev); + + alcor_pci_aspm_ctrl(priv, 1); + + mfd_remove_devices(&pdev->dev); + + ida_simple_remove(&alcor_pci_idr, priv->id); + + pci_release_regions(pdev); + pci_set_drvdata(pdev, NULL); +} + +#ifdef CONFIG_PM_SLEEP +static int alcor_suspend(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct alcor_pci_priv *priv = pci_get_drvdata(pdev); + + alcor_pci_aspm_ctrl(priv, 1); + return 0; +} + +static int alcor_resume(struct device *dev) +{ + + struct pci_dev *pdev = to_pci_dev(dev); + struct alcor_pci_priv *priv = pci_get_drvdata(pdev); + + alcor_pci_aspm_ctrl(priv, 0); + return 0; +} +#endif /* CONFIG_PM_SLEEP */ + +static SIMPLE_DEV_PM_OPS(alcor_pci_pm_ops, alcor_suspend, alcor_resume); + +static struct pci_driver alcor_driver = { + .name = DRV_NAME_ALCOR_PCI, + .id_table = pci_ids, + .probe = alcor_pci_probe, + .remove = alcor_pci_remove, + .driver = { + .pm = &alcor_pci_pm_ops + }, +}; + +module_pci_driver(alcor_driver); + +MODULE_AUTHOR("Oleksij Rempel "); +MODULE_DESCRIPTION("PCI driver for Alcor Micro AU6601 Secure Digital Host Controller Interface"); +MODULE_LICENSE("GPL"); diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c index b97903ff1a72..f7a66f614085 100644 --- a/drivers/misc/cardreader/rtsx_usb.c +++ b/drivers/misc/cardreader/rtsx_usb.c @@ -723,8 +723,15 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message) return 0; } +static int rtsx_usb_resume_child(struct device *dev, void *data) +{ + pm_request_resume(dev); + return 0; +} + static int rtsx_usb_resume(struct usb_interface *intf) { + device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child); return 0; } @@ -734,6 +741,7 @@ static int rtsx_usb_reset_resume(struct usb_interface *intf) (struct rtsx_ucr *)usb_get_intfdata(intf); rtsx_usb_reset_chip(ucr); + device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child); return 0; } diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c index b66d832d3233..c79ba1c699ad 100644 --- a/drivers/misc/cxl/pci.c +++ b/drivers/misc/cxl/pci.c @@ -1718,7 +1718,6 @@ int cxl_slot_is_switched(struct pci_dev *dev) { struct device_node *np; int depth = 0; - const __be32 *prop; if (!(np = 
pci_device_to_OF_node(dev))) { pr_err("cxl: np = NULL\n"); @@ -1727,8 +1726,7 @@ int cxl_slot_is_switched(struct pci_dev *dev) of_node_get(np); while (np) { np = of_get_next_parent(np); - prop = of_get_property(np, "device_type", NULL); - if (!prop || strcmp((char *)prop, "pciex")) + if (!of_node_is_type(np, "pciex")) break; depth++; } diff --git a/drivers/misc/cxl/vphb.c b/drivers/misc/cxl/vphb.c index 7908633d9204..49da2f744bbf 100644 --- a/drivers/misc/cxl/vphb.c +++ b/drivers/misc/cxl/vphb.c @@ -11,17 +11,6 @@ #include #include "cxl.h" -static int cxl_dma_set_mask(struct pci_dev *pdev, u64 dma_mask) -{ - if (dma_mask < DMA_BIT_MASK(64)) { - pr_info("%s only 64bit DMA supported on CXL", __func__); - return -EIO; - } - - *(pdev->dev.dma_mask) = dma_mask; - return 0; -} - static int cxl_pci_probe_mode(struct pci_bus *bus) { return PCI_PROBE_NORMAL; @@ -220,7 +209,6 @@ static struct pci_controller_ops cxl_pci_controller_ops = .reset_secondary_bus = cxl_pci_reset_secondary_bus, .setup_msi_irqs = cxl_setup_msi_irqs, .teardown_msi_irqs = cxl_teardown_msi_irqs, - .dma_set_mask = cxl_dma_set_mask, }; int cxl_pci_vphb_add(struct cxl_afu *afu) diff --git a/drivers/misc/genwqe/card_debugfs.c b/drivers/misc/genwqe/card_debugfs.c index c6b82f09b3ba..7c713e01d198 100644 --- a/drivers/misc/genwqe/card_debugfs.c +++ b/drivers/misc/genwqe/card_debugfs.c @@ -33,19 +33,6 @@ #include "card_base.h" #include "card_ddcb.h" -#define GENWQE_DEBUGFS_RO(_name, _showfn) \ - static int genwqe_debugfs_##_name##_open(struct inode *inode, \ - struct file *file) \ - { \ - return single_open(file, _showfn, inode->i_private); \ - } \ - static const struct file_operations genwqe_##_name##_fops = { \ - .open = genwqe_debugfs_##_name##_open, \ - .read = seq_read, \ - .llseek = seq_lseek, \ - .release = single_release, \ - } - static void dbg_uidn_show(struct seq_file *s, struct genwqe_reg *regs, int entries) { @@ -87,26 +74,26 @@ static int curr_dbg_uidn_show(struct seq_file *s, void *unused, int uid) return 0; } -static int genwqe_curr_dbg_uid0_show(struct seq_file *s, void *unused) +static int curr_dbg_uid0_show(struct seq_file *s, void *unused) { return curr_dbg_uidn_show(s, unused, 0); } -GENWQE_DEBUGFS_RO(curr_dbg_uid0, genwqe_curr_dbg_uid0_show); +DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid0); -static int genwqe_curr_dbg_uid1_show(struct seq_file *s, void *unused) +static int curr_dbg_uid1_show(struct seq_file *s, void *unused) { return curr_dbg_uidn_show(s, unused, 1); } -GENWQE_DEBUGFS_RO(curr_dbg_uid1, genwqe_curr_dbg_uid1_show); +DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid1); -static int genwqe_curr_dbg_uid2_show(struct seq_file *s, void *unused) +static int curr_dbg_uid2_show(struct seq_file *s, void *unused) { return curr_dbg_uidn_show(s, unused, 2); } -GENWQE_DEBUGFS_RO(curr_dbg_uid2, genwqe_curr_dbg_uid2_show); +DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid2); static int prev_dbg_uidn_show(struct seq_file *s, void *unused, int uid) { @@ -116,28 +103,28 @@ static int prev_dbg_uidn_show(struct seq_file *s, void *unused, int uid) return 0; } -static int genwqe_prev_dbg_uid0_show(struct seq_file *s, void *unused) +static int prev_dbg_uid0_show(struct seq_file *s, void *unused) { return prev_dbg_uidn_show(s, unused, 0); } -GENWQE_DEBUGFS_RO(prev_dbg_uid0, genwqe_prev_dbg_uid0_show); +DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid0); -static int genwqe_prev_dbg_uid1_show(struct seq_file *s, void *unused) +static int prev_dbg_uid1_show(struct seq_file *s, void *unused) { return prev_dbg_uidn_show(s, unused, 1); } -GENWQE_DEBUGFS_RO(prev_dbg_uid1, 
genwqe_prev_dbg_uid1_show); +DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid1); -static int genwqe_prev_dbg_uid2_show(struct seq_file *s, void *unused) +static int prev_dbg_uid2_show(struct seq_file *s, void *unused) { return prev_dbg_uidn_show(s, unused, 2); } -GENWQE_DEBUGFS_RO(prev_dbg_uid2, genwqe_prev_dbg_uid2_show); +DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid2); -static int genwqe_curr_regs_show(struct seq_file *s, void *unused) +static int curr_regs_show(struct seq_file *s, void *unused) { struct genwqe_dev *cd = s->private; unsigned int i; @@ -164,9 +151,9 @@ static int genwqe_curr_regs_show(struct seq_file *s, void *unused) return 0; } -GENWQE_DEBUGFS_RO(curr_regs, genwqe_curr_regs_show); +DEFINE_SHOW_ATTRIBUTE(curr_regs); -static int genwqe_prev_regs_show(struct seq_file *s, void *unused) +static int prev_regs_show(struct seq_file *s, void *unused) { struct genwqe_dev *cd = s->private; unsigned int i; @@ -188,9 +175,9 @@ static int genwqe_prev_regs_show(struct seq_file *s, void *unused) return 0; } -GENWQE_DEBUGFS_RO(prev_regs, genwqe_prev_regs_show); +DEFINE_SHOW_ATTRIBUTE(prev_regs); -static int genwqe_jtimer_show(struct seq_file *s, void *unused) +static int jtimer_show(struct seq_file *s, void *unused) { struct genwqe_dev *cd = s->private; unsigned int vf_num; @@ -209,9 +196,9 @@ static int genwqe_jtimer_show(struct seq_file *s, void *unused) return 0; } -GENWQE_DEBUGFS_RO(jtimer, genwqe_jtimer_show); +DEFINE_SHOW_ATTRIBUTE(jtimer); -static int genwqe_queue_working_time_show(struct seq_file *s, void *unused) +static int queue_working_time_show(struct seq_file *s, void *unused) { struct genwqe_dev *cd = s->private; unsigned int vf_num; @@ -227,9 +214,9 @@ static int genwqe_queue_working_time_show(struct seq_file *s, void *unused) return 0; } -GENWQE_DEBUGFS_RO(queue_working_time, genwqe_queue_working_time_show); +DEFINE_SHOW_ATTRIBUTE(queue_working_time); -static int genwqe_ddcb_info_show(struct seq_file *s, void *unused) +static int ddcb_info_show(struct seq_file *s, void *unused) { struct genwqe_dev *cd = s->private; unsigned int i; @@ -300,9 +287,9 @@ static int genwqe_ddcb_info_show(struct seq_file *s, void *unused) return 0; } -GENWQE_DEBUGFS_RO(ddcb_info, genwqe_ddcb_info_show); +DEFINE_SHOW_ATTRIBUTE(ddcb_info); -static int genwqe_info_show(struct seq_file *s, void *unused) +static int info_show(struct seq_file *s, void *unused) { struct genwqe_dev *cd = s->private; u64 app_id, slu_id, bitstream = -1; @@ -335,7 +322,7 @@ static int genwqe_info_show(struct seq_file *s, void *unused) return 0; } -GENWQE_DEBUGFS_RO(info, genwqe_info_show); +DEFINE_SHOW_ATTRIBUTE(info); int genwqe_init_debugfs(struct genwqe_dev *cd) { @@ -356,14 +343,14 @@ int genwqe_init_debugfs(struct genwqe_dev *cd) /* non privileged interfaces are done here */ file = debugfs_create_file("ddcb_info", S_IRUGO, root, cd, - &genwqe_ddcb_info_fops); + &ddcb_info_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("info", S_IRUGO, root, cd, - &genwqe_info_fops); + &info_fops); if (!file) { ret = -ENOMEM; goto err1; @@ -396,56 +383,56 @@ int genwqe_init_debugfs(struct genwqe_dev *cd) } file = debugfs_create_file("curr_regs", S_IRUGO, root, cd, - &genwqe_curr_regs_fops); + &curr_regs_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("curr_dbg_uid0", S_IRUGO, root, cd, - &genwqe_curr_dbg_uid0_fops); + &curr_dbg_uid0_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("curr_dbg_uid1", S_IRUGO, root, cd, - &genwqe_curr_dbg_uid1_fops); + &curr_dbg_uid1_fops); if 
(!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("curr_dbg_uid2", S_IRUGO, root, cd, - &genwqe_curr_dbg_uid2_fops); + &curr_dbg_uid2_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("prev_regs", S_IRUGO, root, cd, - &genwqe_prev_regs_fops); + &prev_regs_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("prev_dbg_uid0", S_IRUGO, root, cd, - &genwqe_prev_dbg_uid0_fops); + &prev_dbg_uid0_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("prev_dbg_uid1", S_IRUGO, root, cd, - &genwqe_prev_dbg_uid1_fops); + &prev_dbg_uid1_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("prev_dbg_uid2", S_IRUGO, root, cd, - &genwqe_prev_dbg_uid2_fops); + &prev_dbg_uid2_fops); if (!file) { ret = -ENOMEM; goto err1; @@ -463,14 +450,14 @@ int genwqe_init_debugfs(struct genwqe_dev *cd) } file = debugfs_create_file("jobtimer", S_IRUGO, root, cd, - &genwqe_jtimer_fops); + &jtimer_fops); if (!file) { ret = -ENOMEM; goto err1; } file = debugfs_create_file("queue_working_time", S_IRUGO, root, cd, - &genwqe_queue_working_time_fops); + &queue_working_time_fops); if (!file) { ret = -ENOMEM; goto err1; diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c index 3fcb9a2fe1c9..efe2fb72d54b 100644 --- a/drivers/misc/genwqe/card_utils.c +++ b/drivers/misc/genwqe/card_utils.c @@ -215,7 +215,7 @@ u32 genwqe_crc32(u8 *buff, size_t len, u32 init) void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size, dma_addr_t *dma_handle) { - if (get_order(size) > MAX_ORDER) + if (get_order(size) >= MAX_ORDER) return NULL; return dma_zalloc_coherent(&cd->pci_dev->dev, size, dma_handle, diff --git a/drivers/misc/mei/Makefile b/drivers/misc/mei/Makefile index cd6825afa8e1..d9215fc4e499 100644 --- a/drivers/misc/mei/Makefile +++ b/drivers/misc/mei/Makefile @@ -9,6 +9,7 @@ mei-objs += hbm.o mei-objs += interrupt.o mei-objs += client.o mei-objs += main.o +mei-objs += dma-ring.o mei-objs += bus.o mei-objs += bus-fixup.o mei-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c index ebdcf0b450e2..1fc8ea0f519b 100644 --- a/drivers/misc/mei/client.c +++ b/drivers/misc/mei/client.c @@ -317,23 +317,6 @@ void mei_me_cl_rm_all(struct mei_device *dev) up_write(&dev->me_clients_rwsem); } -/** - * mei_cl_cmp_id - tells if the clients are the same - * - * @cl1: host client 1 - * @cl2: host client 2 - * - * Return: true - if the clients has same host and me ids - * false - otherwise - */ -static inline bool mei_cl_cmp_id(const struct mei_cl *cl1, - const struct mei_cl *cl2) -{ - return cl1 && cl2 && - (cl1->host_client_id == cl2->host_client_id) && - (mei_cl_me_id(cl1) == mei_cl_me_id(cl2)); -} - /** * mei_io_cb_free - free mei_cb_private related memory * @@ -418,7 +401,7 @@ static void mei_io_list_flush_cl(struct list_head *head, struct mei_cl_cb *cb, *next; list_for_each_entry_safe(cb, next, head, list) { - if (mei_cl_cmp_id(cl, cb->cl)) + if (cl == cb->cl) list_del_init(&cb->list); } } @@ -435,7 +418,7 @@ static void mei_io_tx_list_free_cl(struct list_head *head, struct mei_cl_cb *cb, *next; list_for_each_entry_safe(cb, next, head, list) { - if (mei_cl_cmp_id(cl, cb->cl)) + if (cl == cb->cl) mei_tx_cb_dequeue(cb); } } @@ -478,7 +461,7 @@ struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length, if (length == 0) return cb; - cb->buf.data = kmalloc(length, GFP_KERNEL); + cb->buf.data = kmalloc(roundup(length, MEI_SLOT_SIZE), GFP_KERNEL); if 
(!cb->buf.data) { mei_io_cb_free(cb); return NULL; @@ -1374,7 +1357,9 @@ int mei_cl_notify_request(struct mei_cl *cl, mutex_unlock(&dev->device_lock); wait_event_timeout(cl->wait, - cl->notify_en == request || !mei_cl_is_connected(cl), + cl->notify_en == request || + cl->status || + !mei_cl_is_connected(cl), mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); mutex_lock(&dev->device_lock); @@ -1573,10 +1558,13 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb, struct mei_msg_hdr mei_hdr; size_t hdr_len = sizeof(mei_hdr); size_t len; - size_t hbuf_len; + size_t hbuf_len, dr_len; int hbuf_slots; + u32 dr_slots; + u32 dma_len; int rets; bool first_chunk; + const void *data; if (WARN_ON(!cl || !cl->dev)) return -ENODEV; @@ -1597,6 +1585,7 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb, } len = buf->size - cb->buf_idx; + data = buf->data + cb->buf_idx; hbuf_slots = mei_hbuf_empty_slots(dev); if (hbuf_slots < 0) { rets = -EOVERFLOW; @@ -1604,6 +1593,8 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb, } hbuf_len = mei_slots2data(hbuf_slots); + dr_slots = mei_dma_ring_empty_slots(dev); + dr_len = mei_slots2data(dr_slots); mei_msg_hdr_init(&mei_hdr, cb); @@ -1614,23 +1605,33 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb, if (len + hdr_len <= hbuf_len) { mei_hdr.length = len; mei_hdr.msg_complete = 1; + } else if (dr_slots && hbuf_len >= hdr_len + sizeof(dma_len)) { + mei_hdr.dma_ring = 1; + if (len > dr_len) + len = dr_len; + else + mei_hdr.msg_complete = 1; + + mei_hdr.length = sizeof(dma_len); + dma_len = len; + data = &dma_len; } else if ((u32)hbuf_slots == mei_hbuf_depth(dev)) { - mei_hdr.length = hbuf_len - hdr_len; + len = hbuf_len - hdr_len; + mei_hdr.length = len; } else { return 0; } - cl_dbg(dev, cl, "buf: size = %zu idx = %zu\n", - cb->buf.size, cb->buf_idx); + if (mei_hdr.dma_ring) + mei_dma_ring_write(dev, buf->data + cb->buf_idx, len); - rets = mei_write_message(dev, &mei_hdr, hdr_len, - buf->data + cb->buf_idx, mei_hdr.length); + rets = mei_write_message(dev, &mei_hdr, hdr_len, data, mei_hdr.length); if (rets) goto err; cl->status = 0; cl->writing_state = MEI_WRITING; - cb->buf_idx += mei_hdr.length; + cb->buf_idx += len; if (first_chunk) { if (mei_cl_tx_flow_ctrl_creds_reduce(cl)) { @@ -1665,11 +1666,13 @@ ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb) struct mei_msg_data *buf; struct mei_msg_hdr mei_hdr; size_t hdr_len = sizeof(mei_hdr); - size_t len; - size_t hbuf_len; + size_t len, hbuf_len, dr_len; int hbuf_slots; + u32 dr_slots; + u32 dma_len; ssize_t rets; bool blocking; + const void *data; if (WARN_ON(!cl || !cl->dev)) return -ENODEV; @@ -1681,10 +1684,12 @@ ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb) buf = &cb->buf; len = buf->size; - blocking = cb->blocking; cl_dbg(dev, cl, "len=%zd\n", len); + blocking = cb->blocking; + data = buf->data; + rets = pm_runtime_get(dev->dev); if (rets < 0 && rets != -EINPROGRESS) { pm_runtime_put_noidle(dev->dev); @@ -1721,16 +1726,32 @@ ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb) } hbuf_len = mei_slots2data(hbuf_slots); + dr_slots = mei_dma_ring_empty_slots(dev); + dr_len = mei_slots2data(dr_slots); if (len + hdr_len <= hbuf_len) { mei_hdr.length = len; mei_hdr.msg_complete = 1; + } else if (dr_slots && hbuf_len >= hdr_len + sizeof(dma_len)) { + mei_hdr.dma_ring = 1; + if (len > dr_len) + len = dr_len; + else + mei_hdr.msg_complete = 1; + + mei_hdr.length = sizeof(dma_len); + dma_len = len; + data = &dma_len; } else { - mei_hdr.length 
= hbuf_len - hdr_len; + len = hbuf_len - hdr_len; + mei_hdr.length = len; } + if (mei_hdr.dma_ring) + mei_dma_ring_write(dev, buf->data, len); + rets = mei_write_message(dev, &mei_hdr, hdr_len, - buf->data, mei_hdr.length); + data, mei_hdr.length); if (rets) goto err; @@ -1739,7 +1760,9 @@ ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb) goto err; cl->writing_state = MEI_WRITING; - cb->buf_idx = mei_hdr.length; + cb->buf_idx = len; + /* restore return value */ + len = buf->size; out: if (mei_hdr.msg_complete) diff --git a/drivers/misc/mei/dma-ring.c b/drivers/misc/mei/dma-ring.c new file mode 100644 index 000000000000..795641b82181 --- /dev/null +++ b/drivers/misc/mei/dma-ring.c @@ -0,0 +1,269 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. + */ +#include +#include + +#include "mei_dev.h" + +/** + * mei_dmam_dscr_alloc() - allocate a managed coherent buffer + * for the dma descriptor + * @dev: mei_device + * @dscr: dma descriptor + * + * Return: + * * 0 - on success or zero allocation request + * * -EINVAL - if size is not power of 2 + * * -ENOMEM - if allocation has failed + */ +static int mei_dmam_dscr_alloc(struct mei_device *dev, + struct mei_dma_dscr *dscr) +{ + if (!dscr->size) + return 0; + + if (WARN_ON(!is_power_of_2(dscr->size))) + return -EINVAL; + + if (dscr->vaddr) + return 0; + + dscr->vaddr = dmam_alloc_coherent(dev->dev, dscr->size, &dscr->daddr, + GFP_KERNEL); + if (!dscr->vaddr) + return -ENOMEM; + + return 0; +} + +/** + * mei_dmam_dscr_free() - free a managed coherent buffer + * from the dma descriptor + * @dev: mei_device + * @dscr: dma descriptor + */ +static void mei_dmam_dscr_free(struct mei_device *dev, + struct mei_dma_dscr *dscr) +{ + if (!dscr->vaddr) + return; + + dmam_free_coherent(dev->dev, dscr->size, dscr->vaddr, dscr->daddr); + dscr->vaddr = NULL; +} + +/** + * mei_dmam_ring_free() - free dma ring buffers + * @dev: mei device + */ +void mei_dmam_ring_free(struct mei_device *dev) +{ + int i; + + for (i = 0; i < DMA_DSCR_NUM; i++) + mei_dmam_dscr_free(dev, &dev->dr_dscr[i]); +} + +/** + * mei_dmam_ring_alloc() - allocate dma ring buffers + * @dev: mei device + * + * Return: -ENOMEM on allocation failure, 0 otherwise + */ +int mei_dmam_ring_alloc(struct mei_device *dev) +{ + int i; + + for (i = 0; i < DMA_DSCR_NUM; i++) + if (mei_dmam_dscr_alloc(dev, &dev->dr_dscr[i])) + goto err; + + return 0; + +err: + mei_dmam_ring_free(dev); + return -ENOMEM; +} + +/** + * mei_dma_ring_is_allocated() - check if dma ring is allocated + * @dev: mei device + * + * Return: true if dma ring is allocated + */ +bool mei_dma_ring_is_allocated(struct mei_device *dev) +{ + return !!dev->dr_dscr[DMA_DSCR_HOST].vaddr; +} + +static inline +struct hbm_dma_ring_ctrl *mei_dma_ring_ctrl(struct mei_device *dev) +{ + return (struct hbm_dma_ring_ctrl *)dev->dr_dscr[DMA_DSCR_CTRL].vaddr; +} + +/** + * mei_dma_ring_reset() - reset the dma control block + * @dev: mei device + */ +void mei_dma_ring_reset(struct mei_device *dev) +{ + struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev); + + if (!ctrl) + return; + + memset(ctrl, 0, sizeof(*ctrl)); +} + +/** + * mei_dma_copy_from() - copy from dma ring into buffer + * @dev: mei device + * @buf: data buffer + * @offset: offset in slots. + * @n: number of slots to copy.
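+ * + * Note: offsets and counts in these helpers are in 4-byte slots; byte values are derived by shifting left by 2 (bytes = slots << 2, e.g. 16 slots = 64 bytes), the inverse of mei_data2slots().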
+ */ +static size_t mei_dma_copy_from(struct mei_device *dev, unsigned char *buf, + u32 offset, u32 n) +{ + unsigned char *dbuf = dev->dr_dscr[DMA_DSCR_DEVICE].vaddr; + + size_t b_offset = offset << 2; + size_t b_n = n << 2; + + memcpy(buf, dbuf + b_offset, b_n); + + return b_n; +} + +/** + * mei_dma_copy_to() - copy a buffer to the dma ring + * @dev: mei device + * @buf: data buffer + * @offset: offset in slots. + * @n: number of slots to copy. + */ +static size_t mei_dma_copy_to(struct mei_device *dev, unsigned char *buf, + u32 offset, u32 n) +{ + unsigned char *hbuf = dev->dr_dscr[DMA_DSCR_HOST].vaddr; + + size_t b_offset = offset << 2; + size_t b_n = n << 2; + + memcpy(hbuf + b_offset, buf, b_n); + + return b_n; +} + +/** + * mei_dma_ring_read() - read data from the ring + * @dev: mei device + * @buf: buffer to read into; may be NULL in case of dropping the data. + * @len: length to read. + */ +void mei_dma_ring_read(struct mei_device *dev, unsigned char *buf, u32 len) +{ + struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev); + u32 dbuf_depth; + u32 rd_idx, rem, slots; + + if (WARN_ON(!ctrl)) + return; + + dev_dbg(dev->dev, "reading from dma %u bytes\n", len); + + if (!len) + return; + + dbuf_depth = dev->dr_dscr[DMA_DSCR_DEVICE].size >> 2; + rd_idx = READ_ONCE(ctrl->dbuf_rd_idx) & (dbuf_depth - 1); + slots = mei_data2slots(len); + + /* if buf is NULL we drop the packet by advancing the pointer. */ + if (!buf) + goto out; + + if (rd_idx + slots > dbuf_depth) { + buf += mei_dma_copy_from(dev, buf, rd_idx, dbuf_depth - rd_idx); + rem = slots - (dbuf_depth - rd_idx); + rd_idx = 0; + } else { + rem = slots; + } + + mei_dma_copy_from(dev, buf, rd_idx, rem); +out: + WRITE_ONCE(ctrl->dbuf_rd_idx, ctrl->dbuf_rd_idx + slots); +} + +static inline u32 mei_dma_ring_hbuf_depth(struct mei_device *dev) +{ + return dev->dr_dscr[DMA_DSCR_HOST].size >> 2; +} + +/** + * mei_dma_ring_empty_slots() - calculate number of empty slots in dma ring + * @dev: mei_device + * + * Return: number of empty slots + */ +u32 mei_dma_ring_empty_slots(struct mei_device *dev) +{ + struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev); + u32 wr_idx, rd_idx, hbuf_depth, empty; + + if (!mei_dma_ring_is_allocated(dev)) + return 0; + + if (WARN_ON(!ctrl)) + return 0; + + /* easier to work in slots */ + hbuf_depth = mei_dma_ring_hbuf_depth(dev); + rd_idx = READ_ONCE(ctrl->hbuf_rd_idx); + wr_idx = READ_ONCE(ctrl->hbuf_wr_idx); + + if (rd_idx > wr_idx) + empty = rd_idx - wr_idx; + else + empty = hbuf_depth - (wr_idx - rd_idx); + + return empty; +} + +/** + * mei_dma_ring_write - write data to dma ring host buffer + * + * @dev: mei_device + * @buf: data to be written + * @len: data length + */ +void mei_dma_ring_write(struct mei_device *dev, unsigned char *buf, u32 len) +{ + struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev); + u32 hbuf_depth; + u32 wr_idx, rem, slots; + + if (WARN_ON(!ctrl)) + return; + + dev_dbg(dev->dev, "writing to dma %u bytes\n", len); + hbuf_depth = mei_dma_ring_hbuf_depth(dev); + wr_idx = READ_ONCE(ctrl->hbuf_wr_idx) & (hbuf_depth - 1); + slots = mei_data2slots(len); + + if (wr_idx + slots > hbuf_depth) { + buf += mei_dma_copy_to(dev, buf, wr_idx, hbuf_depth - wr_idx); + rem = slots - (hbuf_depth - wr_idx); + wr_idx = 0; + } else { + rem = slots; + } + + mei_dma_copy_to(dev, buf, wr_idx, rem); + + WRITE_ONCE(ctrl->hbuf_wr_idx, ctrl->hbuf_wr_idx + slots); +} diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c index e56f3e72d57a..78c26cebf5d4 100644 --- a/drivers/misc/mei/hbm.c +++
b/drivers/misc/mei/hbm.c @@ -65,6 +65,7 @@ const char *mei_hbm_state_str(enum mei_hbm_state state) MEI_HBM_STATE(IDLE); MEI_HBM_STATE(STARTING); MEI_HBM_STATE(STARTED); + MEI_HBM_STATE(DR_SETUP); MEI_HBM_STATE(ENUM_CLIENTS); MEI_HBM_STATE(CLIENT_PROPERTIES); MEI_HBM_STATE(STOPPED); @@ -295,6 +296,48 @@ int mei_hbm_start_req(struct mei_device *dev) return 0; } +/** + * mei_hbm_dma_setup_req() - setup DMA request + * @dev: the device structure + * + * Return: 0 on success and < 0 on failure + */ +static int mei_hbm_dma_setup_req(struct mei_device *dev) +{ + struct mei_msg_hdr mei_hdr; + struct hbm_dma_setup_request req; + const size_t len = sizeof(struct hbm_dma_setup_request); + unsigned int i; + int ret; + + mei_hbm_hdr(&mei_hdr, len); + + memset(&req, 0, len); + req.hbm_cmd = MEI_HBM_DMA_SETUP_REQ_CMD; + for (i = 0; i < DMA_DSCR_NUM; i++) { + phys_addr_t paddr; + + paddr = dev->dr_dscr[i].daddr; + req.dma_dscr[i].addr_hi = upper_32_bits(paddr); + req.dma_dscr[i].addr_lo = lower_32_bits(paddr); + req.dma_dscr[i].size = dev->dr_dscr[i].size; + } + + mei_dma_ring_reset(dev); + + ret = mei_hbm_write_message(dev, &mei_hdr, &req); + if (ret) { + dev_err(dev->dev, "dma setup request write failed: ret = %d.\n", + ret); + return ret; + } + + dev->hbm_state = MEI_HBM_DR_SETUP; + dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT; + mei_schedule_stall_timer(dev); + return 0; +} + /** * mei_hbm_enum_clients_req - sends enumeration client request message. * @@ -1044,6 +1087,7 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr) struct hbm_host_version_response *version_res; struct hbm_props_response *props_res; struct hbm_host_enum_response *enum_res; + struct hbm_dma_setup_response *dma_setup_res; struct hbm_add_client_request *add_cl_req; int ret; @@ -1108,14 +1152,52 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr) return -EPROTO; } - if (mei_hbm_enum_clients_req(dev)) { - dev_err(dev->dev, "hbm: start: failed to send enumeration request\n"); - return -EIO; + if (dev->hbm_f_dr_supported) { + if (mei_dmam_ring_alloc(dev)) + dev_info(dev->dev, "running w/o dma ring\n"); + if (mei_dma_ring_is_allocated(dev)) { + if (mei_hbm_dma_setup_req(dev)) + return -EIO; + + wake_up(&dev->wait_hbm_start); + break; + } } + dev->hbm_f_dr_supported = 0; + mei_dmam_ring_free(dev); + + if (mei_hbm_enum_clients_req(dev)) + return -EIO; + wake_up(&dev->wait_hbm_start); break; + case MEI_HBM_DMA_SETUP_RES_CMD: + dev_dbg(dev->dev, "hbm: dma setup response: message received.\n"); + + dev->init_clients_timer = 0; + + if (dev->hbm_state != MEI_HBM_DR_SETUP) { + dev_err(dev->dev, "hbm: dma setup response: state mismatch, [%d, %d]\n", + dev->dev_state, dev->hbm_state); + return -EPROTO; + } + + dma_setup_res = (struct hbm_dma_setup_response *)mei_msg; + + if (dma_setup_res->status) { + dev_info(dev->dev, "hbm: dma setup response: failure = %d %s\n", + dma_setup_res->status, + mei_hbm_status_str(dma_setup_res->status)); + dev->hbm_f_dr_supported = 0; + mei_dmam_ring_free(dev); + } + + if (mei_hbm_enum_clients_req(dev)) + return -EIO; + break; + case CLIENT_CONNECT_RES_CMD: dev_dbg(dev->dev, "hbm: client connect response: message received.\n"); mei_hbm_cl_res(dev, cl_cmd, MEI_FOP_CONNECT); @@ -1271,8 +1353,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr) break; default: - BUG(); - break; + WARN(1, "hbm: wrong command %d\n", mei_msg->hbm_cmd); + return -EPROTO; } return 0; diff --git a/drivers/misc/mei/hbm.h b/drivers/misc/mei/hbm.h index 
a2025a5083a3..0171a7e79bab 100644 --- a/drivers/misc/mei/hbm.h +++ b/drivers/misc/mei/hbm.h @@ -26,6 +26,7 @@ struct mei_cl; * * @MEI_HBM_IDLE : protocol not started * @MEI_HBM_STARTING : start request message was sent + * @MEI_HBM_DR_SETUP : dma ring setup request message was sent * @MEI_HBM_ENUM_CLIENTS : enumeration request was sent * @MEI_HBM_CLIENT_PROPERTIES : acquiring clients properties * @MEI_HBM_STARTED : enumeration was completed @@ -34,6 +35,7 @@ struct mei_cl; enum mei_hbm_state { MEI_HBM_IDLE = 0, MEI_HBM_STARTING, + MEI_HBM_DR_SETUP, MEI_HBM_ENUM_CLIENTS, MEI_HBM_CLIENT_PROPERTIES, MEI_HBM_STARTED, diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c index 0759c3a668de..3fbbadfa2ae1 100644 --- a/drivers/misc/mei/hw-me.c +++ b/drivers/misc/mei/hw-me.c @@ -1471,15 +1471,21 @@ struct mei_device *mei_me_dev_init(struct pci_dev *pdev, { struct mei_device *dev; struct mei_me_hw *hw; + int i; dev = devm_kzalloc(&pdev->dev, sizeof(struct mei_device) + sizeof(struct mei_me_hw), GFP_KERNEL); if (!dev) return NULL; + hw = to_me_hw(dev); + for (i = 0; i < DMA_DSCR_NUM; i++) + dev->dr_dscr[i].size = cfg->dma_size[i]; + mei_device_init(dev, &pdev->dev, &mei_me_hw_ops); hw->cfg = cfg; + return dev; } diff --git a/drivers/misc/mei/hw.h b/drivers/misc/mei/hw.h index 65655925791a..2b7f7677f8cc 100644 --- a/drivers/misc/mei/hw.h +++ b/drivers/misc/mei/hw.h @@ -35,7 +35,7 @@ /* * MEI Version */ -#define HBM_MINOR_VERSION 0 +#define HBM_MINOR_VERSION 1 #define HBM_MAJOR_VERSION 2 /* @@ -206,6 +206,7 @@ enum mei_cl_disconnect_status { * @dma_ring: message is on dma ring * @internal: message is internal * @msg_complete: last packet of the message + * @extension: extension of the header */ struct mei_msg_hdr { u32 me_addr:8; @@ -215,8 +216,11 @@ struct mei_msg_hdr { u32 dma_ring:1; u32 internal:1; u32 msg_complete:1; + u32 extension[0]; } __packed; +#define MEI_MSG_HDR_MAX 2 + struct mei_bus_message { u8 hbm_cmd; u8 data[0]; @@ -512,4 +516,27 @@ struct hbm_dma_setup_response { u8 reserved[2]; } __packed; +/** + * struct mei_dma_ring_ctrl - dma ring control block + * + * @hbuf_wr_idx: host circular buffer write index in slots + * @reserved1: reserved for alignment + * @hbuf_rd_idx: host circular buffer read index in slots + * @reserved2: reserved for alignment + * @dbuf_wr_idx: device circular buffer write index in slots + * @reserved3: reserved for alignment + * @dbuf_rd_idx: device circular buffer read index in slots + * @reserved4: reserved for alignment + */ +struct hbm_dma_ring_ctrl { + u32 hbuf_wr_idx; + u32 reserved1; + u32 hbuf_rd_idx; + u32 reserved2; + u32 dbuf_wr_idx; + u32 reserved3; + u32 dbuf_rd_idx; + u32 reserved4; +} __packed; + #endif diff --git a/drivers/misc/mei/init.c b/drivers/misc/mei/init.c index 4888ebc076b7..eb026e2a0537 100644 --- a/drivers/misc/mei/init.c +++ b/drivers/misc/mei/init.c @@ -151,7 +151,7 @@ int mei_reset(struct mei_device *dev) mei_hbm_reset(dev); - dev->rd_msg_hdr = 0; + memset(dev->rd_msg_hdr, 0, sizeof(dev->rd_msg_hdr)); if (ret) { dev_err(dev->dev, "hw_reset failed ret = %d\n", ret); diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c index 5a661cbdf2ae..055c2d89b310 100644 --- a/drivers/misc/mei/interrupt.c +++ b/drivers/misc/mei/interrupt.c @@ -75,6 +75,8 @@ static inline int mei_cl_hbm_equal(struct mei_cl *cl, */ static void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr) { + if (hdr->dma_ring) + mei_dma_ring_read(dev, NULL, hdr->extension[0]); /* * no need to check for size as it is guarantied * 
that length fits into rd_msg_buf @@ -100,6 +102,7 @@ static int mei_cl_irq_read_msg(struct mei_cl *cl, struct mei_device *dev = cl->dev; struct mei_cl_cb *cb; size_t buf_sz; + u32 length; cb = list_first_entry_or_null(&cl->rd_pending, struct mei_cl_cb, list); if (!cb) { @@ -119,25 +122,31 @@ static int mei_cl_irq_read_msg(struct mei_cl *cl, goto discard; } - buf_sz = mei_hdr->length + cb->buf_idx; + length = mei_hdr->dma_ring ? mei_hdr->extension[0] : mei_hdr->length; + + buf_sz = length + cb->buf_idx; /* catch for integer overflow */ if (buf_sz < cb->buf_idx) { cl_err(dev, cl, "message is too big len %d idx %zu\n", - mei_hdr->length, cb->buf_idx); + length, cb->buf_idx); cb->status = -EMSGSIZE; goto discard; } if (cb->buf.size < buf_sz) { cl_dbg(dev, cl, "message overflow. size %zu len %d idx %zu\n", - cb->buf.size, mei_hdr->length, cb->buf_idx); + cb->buf.size, length, cb->buf_idx); cb->status = -EMSGSIZE; goto discard; } + if (mei_hdr->dma_ring) + mei_dma_ring_read(dev, cb->buf.data + cb->buf_idx, length); + + /* for DMA read 0 length to generate an interrupt to the device */ mei_read_slots(dev, cb->buf.data + cb->buf_idx, mei_hdr->length); - cb->buf_idx += mei_hdr->length; + cb->buf_idx += length; if (mei_hdr->msg_complete) { cl_dbg(dev, cl, "completed read length = %zu\n", cb->buf_idx); @@ -247,6 +256,9 @@ static inline int hdr_is_valid(u32 msg_hdr) if (!msg_hdr || mei_hdr->reserved) return -EBADMSG; + if (mei_hdr->dma_ring && mei_hdr->length != MEI_SLOT_SIZE) + return -EBADMSG; + return 0; } @@ -267,20 +279,20 @@ int mei_irq_read_handler(struct mei_device *dev, struct mei_cl *cl; int ret; - if (!dev->rd_msg_hdr) { - dev->rd_msg_hdr = mei_read_hdr(dev); + if (!dev->rd_msg_hdr[0]) { + dev->rd_msg_hdr[0] = mei_read_hdr(dev); (*slots)--; dev_dbg(dev->dev, "slots =%08x.\n", *slots); - ret = hdr_is_valid(dev->rd_msg_hdr); + ret = hdr_is_valid(dev->rd_msg_hdr[0]); if (ret) { dev_err(dev->dev, "corrupted message header 0x%08X\n", - dev->rd_msg_hdr); + dev->rd_msg_hdr[0]); goto end; } } - mei_hdr = (struct mei_msg_hdr *)&dev->rd_msg_hdr; + mei_hdr = (struct mei_msg_hdr *)dev->rd_msg_hdr; dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(mei_hdr)); if (mei_slots2data(*slots) < mei_hdr->length) { @@ -291,6 +303,12 @@ int mei_irq_read_handler(struct mei_device *dev, goto end; } + if (mei_hdr->dma_ring) { + dev->rd_msg_hdr[1] = mei_read_hdr(dev); + (*slots)--; + mei_hdr->length = 0; + } + /* HBM message */ if (hdr_is_hbm(mei_hdr)) { ret = mei_hbm_dispatch(dev, mei_hdr); @@ -324,7 +342,7 @@ int mei_irq_read_handler(struct mei_device *dev, goto reset_slots; } dev_err(dev->dev, "no destination client found 0x%08X\n", - dev->rd_msg_hdr); + dev->rd_msg_hdr[0]); ret = -EBADMSG; goto end; } @@ -334,9 +352,8 @@ int mei_irq_read_handler(struct mei_device *dev, reset_slots: /* reset the number of slots and header */ + memset(dev->rd_msg_hdr, 0, sizeof(dev->rd_msg_hdr)); *slots = mei_count_full_read_slots(dev); - dev->rd_msg_hdr = 0; - if (*slots == -EOVERFLOW) { /* overflow - reset */ dev_err(dev->dev, "resetting due to slots overflow.\n"); diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h index 377397e1b5a5..685b78ce30a5 100644 --- a/drivers/misc/mei/mei_dev.h +++ b/drivers/misc/mei/mei_dev.h @@ -122,6 +122,19 @@ struct mei_msg_data { unsigned char *data; }; +/** + * struct mei_dma_dscr - dma address descriptor + * + * @vaddr: dma buffer virtual address + * @daddr: dma buffer physical address + * @size : dma buffer size + */ +struct mei_dma_dscr { + void *vaddr; + dma_addr_t daddr; + size_t 
size; +}; + /* Maximum number of processed FW status registers */ #define MEI_FW_STATUS_MAX 6 /* Minimal buffer for FW status string (8 bytes in dw + space or '\0') */ @@ -409,6 +422,7 @@ struct mei_fw_version { * @rd_msg_hdr : read message header storage * * @hbuf_is_ready : query if the host host/write buffer is ready + * @dr_dscr: DMA ring descriptors: TX, RX, and CTRL * * @version : HBM protocol version in use * @hbm_f_pg_supported : hbm feature pgi protocol @@ -483,11 +497,13 @@ struct mei_device { #endif /* CONFIG_PM */ unsigned char rd_msg_buf[MEI_RD_MSG_BUF_SIZE]; - u32 rd_msg_hdr; + u32 rd_msg_hdr[MEI_MSG_HDR_MAX]; /* write buffer */ bool hbuf_is_ready; + struct mei_dma_dscr dr_dscr[DMA_DSCR_NUM]; + struct hbm_version version; unsigned int hbm_f_pg_supported:1; unsigned int hbm_f_dc_supported:1; @@ -578,6 +594,14 @@ int mei_restart(struct mei_device *dev); void mei_stop(struct mei_device *dev); void mei_cancel_work(struct mei_device *dev); +int mei_dmam_ring_alloc(struct mei_device *dev); +void mei_dmam_ring_free(struct mei_device *dev); +bool mei_dma_ring_is_allocated(struct mei_device *dev); +void mei_dma_ring_reset(struct mei_device *dev); +void mei_dma_ring_read(struct mei_device *dev, unsigned char *buf, u32 len); +void mei_dma_ring_write(struct mei_device *dev, unsigned char *buf, u32 len); +u32 mei_dma_ring_empty_slots(struct mei_device *dev); + /* * MEI interrupt functions prototype */ diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c index ea4e152270a3..73ace2d59dea 100644 --- a/drivers/misc/mei/pci-me.c +++ b/drivers/misc/mei/pci-me.c @@ -98,9 +98,9 @@ static const struct pci_device_id mei_me_pci_tbl[] = { {MEI_PCI_DEVICE(MEI_DEV_ID_KBP, MEI_ME_PCH8_CFG)}, {MEI_PCI_DEVICE(MEI_DEV_ID_KBP_2, MEI_ME_PCH8_CFG)}, - {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH8_CFG)}, + {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH12_CFG)}, {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP_4, MEI_ME_PCH8_CFG)}, - {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH8_CFG)}, + {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH12_CFG)}, {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_4, MEI_ME_PCH8_CFG)}, /* required last entry */ diff --git a/drivers/misc/mic/card/mic_debugfs.c b/drivers/misc/mic/card/mic_debugfs.c index 421b3d7911df..7a4140874888 100644 --- a/drivers/misc/mic/card/mic_debugfs.c +++ b/drivers/misc/mic/card/mic_debugfs.c @@ -37,9 +37,9 @@ static struct dentry *mic_dbg; /** - * mic_intr_test - Send interrupts to host. + * mic_intr_show - Send interrupts to host. */ -static int mic_intr_test(struct seq_file *s, void *unused) +static int mic_intr_show(struct seq_file *s, void *unused) { struct mic_driver *mdrv = s->private; struct mic_device *mdev = &mdrv->mdev; @@ -56,23 +56,7 @@ static int mic_intr_test(struct seq_file *s, void *unused) return 0; } -static int mic_intr_test_open(struct inode *inode, struct file *file) -{ - return single_open(file, mic_intr_test, inode->i_private); -} - -static int mic_intr_test_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations intr_test_ops = { - .owner = THIS_MODULE, - .open = mic_intr_test_open, - .read = seq_read, - .llseek = seq_lseek, - .release = mic_intr_test_release -}; +DEFINE_SHOW_ATTRIBUTE(mic_intr); /** * mic_create_card_debug_dir - Initialize MIC debugfs entries. 
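 *
 * (Sketch for reference, assuming the stock macro: DEFINE_SHOW_ATTRIBUTE(name)
 * from <linux/seq_file.h> expects a name_show() helper and expands roughly to
 *
 *	static int name_open(struct inode *inode, struct file *file)
 *	{
 *		return single_open(file, name_show, inode->i_private);
 *	}
 *
 *	static const struct file_operations name_fops = {
 *		.owner   = THIS_MODULE,
 *		.open    = name_open,
 *		.read    = seq_read,
 *		.llseek  = seq_lseek,
 *		.release = single_release,
 *	};
 *
 * which is why each converted debugfs file below is handed &name_fops.)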
@@ -91,7 +75,7 @@ void __init mic_create_card_debug_dir(struct mic_driver *mdrv) } d = debugfs_create_file("intr_test", 0444, mdrv->dbg_dir, - mdrv, &intr_test_ops); + mdrv, &mic_intr_fops); if (!d) { dev_err(mdrv->dev, diff --git a/drivers/misc/mic/cosm/cosm_debugfs.c b/drivers/misc/mic/cosm/cosm_debugfs.c index 216cb3cd2fe3..71c216d0504d 100644 --- a/drivers/misc/mic/cosm/cosm_debugfs.c +++ b/drivers/misc/mic/cosm/cosm_debugfs.c @@ -28,12 +28,12 @@ static struct dentry *cosm_dbg; /** - * cosm_log_buf_show - Display MIC kernel log buffer + * log_buf_show - Display MIC kernel log buffer * * log_buf addr/len is read from System.map by user space * and populated in sysfs entries. */ -static int cosm_log_buf_show(struct seq_file *s, void *unused) +static int log_buf_show(struct seq_file *s, void *unused) { void __iomem *log_buf_va; int __iomem *log_buf_len_va; @@ -78,26 +78,15 @@ done: return 0; } -static int cosm_log_buf_open(struct inode *inode, struct file *file) -{ - return single_open(file, cosm_log_buf_show, inode->i_private); -} - -static const struct file_operations log_buf_ops = { - .owner = THIS_MODULE, - .open = cosm_log_buf_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release -}; +DEFINE_SHOW_ATTRIBUTE(log_buf); /** - * cosm_force_reset_show - Force MIC reset + * force_reset_show - Force MIC reset * * Invokes the force_reset COSM bus op instead of the standard reset * op in case a force reset of the MIC device is required */ -static int cosm_force_reset_show(struct seq_file *s, void *pos) +static int force_reset_show(struct seq_file *s, void *pos) { struct cosm_device *cdev = s->private; @@ -105,18 +94,7 @@ static int cosm_force_reset_show(struct seq_file *s, void *pos) return 0; } -static int cosm_force_reset_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, cosm_force_reset_show, inode->i_private); -} - -static const struct file_operations force_reset_ops = { - .owner = THIS_MODULE, - .open = cosm_force_reset_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release -}; +DEFINE_SHOW_ATTRIBUTE(force_reset); void cosm_create_debug_dir(struct cosm_device *cdev) { @@ -130,9 +108,10 @@ void cosm_create_debug_dir(struct cosm_device *cdev) if (!cdev->dbg_dir) return; - debugfs_create_file("log_buf", 0444, cdev->dbg_dir, cdev, &log_buf_ops); + debugfs_create_file("log_buf", 0444, cdev->dbg_dir, cdev, + &log_buf_fops); debugfs_create_file("force_reset", 0444, cdev->dbg_dir, cdev, - &force_reset_ops); + &force_reset_fops); } void cosm_delete_debug_dir(struct cosm_device *cdev) diff --git a/drivers/misc/mic/host/mic_boot.c b/drivers/misc/mic/host/mic_boot.c index c327985c9523..6479435ac96b 100644 --- a/drivers/misc/mic/host/mic_boot.c +++ b/drivers/misc/mic/host/mic_boot.c @@ -149,7 +149,7 @@ static void *__mic_dma_alloc(struct device *dev, size_t size, struct scif_hw_dev *scdev = dev_get_drvdata(dev); struct mic_device *mdev = scdev_to_mdev(scdev); dma_addr_t tmp; - void *va = kmalloc(size, gfp); + void *va = kmalloc(size, gfp | __GFP_ZERO); if (va) { tmp = mic_map_single(mdev, va, size); diff --git a/drivers/misc/mic/host/mic_debugfs.c b/drivers/misc/mic/host/mic_debugfs.c index 0a9daba8bb5d..c6e3c764699f 100644 --- a/drivers/misc/mic/host/mic_debugfs.c +++ b/drivers/misc/mic/host/mic_debugfs.c @@ -54,23 +54,7 @@ static int mic_smpt_show(struct seq_file *s, void *pos) return 0; } -static int mic_smpt_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, mic_smpt_show, 
inode->i_private); -} - -static int mic_smpt_debug_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations smpt_file_ops = { - .owner = THIS_MODULE, - .open = mic_smpt_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = mic_smpt_debug_release -}; +DEFINE_SHOW_ATTRIBUTE(mic_smpt); static int mic_post_code_show(struct seq_file *s, void *pos) { @@ -81,23 +65,7 @@ static int mic_post_code_show(struct seq_file *s, void *pos) return 0; } -static int mic_post_code_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, mic_post_code_show, inode->i_private); -} - -static int mic_post_code_debug_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations post_code_ops = { - .owner = THIS_MODULE, - .open = mic_post_code_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = mic_post_code_debug_release -}; +DEFINE_SHOW_ATTRIBUTE(mic_post_code); static int mic_msi_irq_info_show(struct seq_file *s, void *pos) { @@ -143,24 +111,7 @@ static int mic_msi_irq_info_show(struct seq_file *s, void *pos) return 0; } -static int mic_msi_irq_info_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, mic_msi_irq_info_show, inode->i_private); -} - -static int -mic_msi_irq_info_debug_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations msi_irq_info_ops = { - .owner = THIS_MODULE, - .open = mic_msi_irq_info_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = mic_msi_irq_info_debug_release -}; +DEFINE_SHOW_ATTRIBUTE(mic_msi_irq_info); /** * mic_create_debug_dir - Initialize MIC debugfs entries. 
@@ -177,13 +128,14 @@ void mic_create_debug_dir(struct mic_device *mdev) if (!mdev->dbg_dir) return; - debugfs_create_file("smpt", 0444, mdev->dbg_dir, mdev, &smpt_file_ops); + debugfs_create_file("smpt", 0444, mdev->dbg_dir, mdev, + &mic_smpt_fops); debugfs_create_file("post_code", 0444, mdev->dbg_dir, mdev, - &post_code_ops); + &mic_post_code_fops); debugfs_create_file("msi_irq_info", 0444, mdev->dbg_dir, mdev, - &msi_irq_info_ops); + &mic_msi_irq_info_fops); } /** diff --git a/drivers/misc/mic/scif/scif_debugfs.c b/drivers/misc/mic/scif/scif_debugfs.c index 6884dad97e17..cca5e980c710 100644 --- a/drivers/misc/mic/scif/scif_debugfs.c +++ b/drivers/misc/mic/scif/scif_debugfs.c @@ -24,7 +24,7 @@ /* Debugfs parent dir */ static struct dentry *scif_dbg; -static int scif_dev_test(struct seq_file *s, void *unused) +static int scif_dev_show(struct seq_file *s, void *unused) { int node; @@ -44,23 +44,7 @@ static int scif_dev_test(struct seq_file *s, void *unused) return 0; } -static int scif_dev_test_open(struct inode *inode, struct file *file) -{ - return single_open(file, scif_dev_test, inode->i_private); -} - -static int scif_dev_test_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations scif_dev_ops = { - .owner = THIS_MODULE, - .open = scif_dev_test_open, - .read = seq_read, - .llseek = seq_lseek, - .release = scif_dev_test_release -}; +DEFINE_SHOW_ATTRIBUTE(scif_dev); static void scif_display_window(struct scif_window *window, struct seq_file *s) { @@ -104,7 +88,7 @@ static void scif_display_all_windows(struct list_head *head, struct seq_file *s) } } -static int scif_rma_test(struct seq_file *s, void *unused) +static int scif_rma_show(struct seq_file *s, void *unused) { struct scif_endpt *ep; struct list_head *pos; @@ -123,23 +107,7 @@ static int scif_rma_test(struct seq_file *s, void *unused) return 0; } -static int scif_rma_test_open(struct inode *inode, struct file *file) -{ - return single_open(file, scif_rma_test, inode->i_private); -} - -static int scif_rma_test_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations scif_rma_ops = { - .owner = THIS_MODULE, - .open = scif_rma_test_open, - .read = seq_read, - .llseek = seq_lseek, - .release = scif_rma_test_release -}; +DEFINE_SHOW_ATTRIBUTE(scif_rma); void __init scif_init_debugfs(void) { @@ -150,8 +118,8 @@ void __init scif_init_debugfs(void) return; } - debugfs_create_file("scif_dev", 0444, scif_dbg, NULL, &scif_dev_ops); - debugfs_create_file("scif_rma", 0444, scif_dbg, NULL, &scif_rma_ops); + debugfs_create_file("scif_dev", 0444, scif_dbg, NULL, &scif_dev_fops); + debugfs_create_file("scif_rma", 0444, scif_dbg, NULL, &scif_rma_fops); debugfs_create_u8("en_msg_log", 0666, scif_dbg, &scif_info.en_msg_log); debugfs_create_u8("p2p_enable", 0666, scif_dbg, &scif_info.p2p_enable); } diff --git a/drivers/misc/mic/scif/scif_dma.c b/drivers/misc/mic/scif/scif_dma.c index 18b8ed57c4ac..e0d97044d0e9 100644 --- a/drivers/misc/mic/scif/scif_dma.c +++ b/drivers/misc/mic/scif/scif_dma.c @@ -201,23 +201,18 @@ static void scif_mmu_notifier_release(struct mmu_notifier *mn, } static int scif_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct scif_mmu_notif *mmn; mmn = container_of(mn, struct scif_mmu_notif, ep_mmu_notifier); - scif_rma_destroy_tcw(mmn, 
start, end - start); + scif_rma_destroy_tcw(mmn, range->start, range->end - range->start); return 0; } static void scif_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, - unsigned long end) + const struct mmu_notifier_range *range) { /* * Nothing to do here, everything needed was done in diff --git a/drivers/misc/mic/scif/scif_fence.c b/drivers/misc/mic/scif/scif_fence.c index 7bb929f05d85..2e7ce6ae9dd2 100644 --- a/drivers/misc/mic/scif/scif_fence.c +++ b/drivers/misc/mic/scif/scif_fence.c @@ -195,10 +195,11 @@ static inline void *scif_get_local_va(off_t off, struct scif_window *window) static void scif_prog_signal_cb(void *arg) { - struct scif_status *status = arg; + struct scif_cb_arg *cb_arg = arg; - dma_pool_free(status->ep->remote_dev->signal_pool, status, - status->src_dma_addr); + dma_pool_free(cb_arg->ep->remote_dev->signal_pool, cb_arg->status, + cb_arg->src_dma_addr); + kfree(cb_arg); } static int _scif_prog_signal(scif_epd_t epd, dma_addr_t dst, u64 val) @@ -209,6 +210,7 @@ static int _scif_prog_signal(scif_epd_t epd, dma_addr_t dst, u64 val) bool x100 = !is_dma_copy_aligned(chan->device, 1, 1, 1); struct dma_async_tx_descriptor *tx; struct scif_status *status = NULL; + struct scif_cb_arg *cb_arg = NULL; dma_addr_t src; dma_cookie_t cookie; int err; @@ -257,8 +259,16 @@ static int _scif_prog_signal(scif_epd_t epd, dma_addr_t dst, u64 val) goto dma_fail; } if (!x100) { + cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL); + if (!cb_arg) { + err = -ENOMEM; + goto dma_fail; + } + cb_arg->src_dma_addr = src; + cb_arg->status = status; + cb_arg->ep = ep; tx->callback = scif_prog_signal_cb; - tx->callback_param = status; + tx->callback_param = cb_arg; } cookie = tx->tx_submit(tx); if (dma_submit_error(cookie)) { @@ -270,9 +280,11 @@ static int _scif_prog_signal(scif_epd_t epd, dma_addr_t dst, u64 val) dma_async_issue_pending(chan); return 0; dma_fail: - if (!x100) + if (!x100) { dma_pool_free(ep->remote_dev->signal_pool, status, src - offsetof(struct scif_status, val)); + kfree(cb_arg); + } alloc_fail: return err; } diff --git a/drivers/misc/mic/scif/scif_rma.h b/drivers/misc/mic/scif/scif_rma.h index fa6722279196..84af3033a473 100644 --- a/drivers/misc/mic/scif/scif_rma.h +++ b/drivers/misc/mic/scif/scif_rma.h @@ -205,6 +205,19 @@ struct scif_status { struct scif_endpt *ep; }; +/* + * struct scif_cb_arg - Stores the argument of the callback func + * + * @src_dma_addr: Source buffer DMA address + * @status: DMA status + * @ep: SCIF endpoint + */ +struct scif_cb_arg { + dma_addr_t src_dma_addr; + struct scif_status *status; + struct scif_endpt *ep; +}; + /* * struct scif_window - Registration Window for Self and Remote * diff --git a/drivers/misc/mic/vop/vop_debugfs.c b/drivers/misc/mic/vop/vop_debugfs.c index ab43884e5cd7..2ccef52aca23 100644 --- a/drivers/misc/mic/vop/vop_debugfs.c +++ b/drivers/misc/mic/vop/vop_debugfs.c @@ -101,23 +101,7 @@ static int vop_dp_show(struct seq_file *s, void *pos) return 0; } -static int vop_dp_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, vop_dp_show, inode->i_private); -} - -static int vop_dp_debug_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations dp_ops = { - .owner = THIS_MODULE, - .open = vop_dp_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = vop_dp_debug_release -}; +DEFINE_SHOW_ATTRIBUTE(vop_dp); static int vop_vdev_info_show(struct seq_file *s, void *unused) 
{ @@ -194,23 +178,7 @@ static int vop_vdev_info_show(struct seq_file *s, void *unused) return 0; } -static int vop_vdev_info_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, vop_vdev_info_show, inode->i_private); -} - -static int vop_vdev_info_debug_release(struct inode *inode, struct file *file) -{ - return single_release(inode, file); -} - -static const struct file_operations vdev_info_ops = { - .owner = THIS_MODULE, - .open = vop_vdev_info_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = vop_vdev_info_debug_release -}; +DEFINE_SHOW_ATTRIBUTE(vop_vdev_info); void vop_init_debugfs(struct vop_info *vi) { @@ -222,8 +190,8 @@ void vop_init_debugfs(struct vop_info *vi) pr_err("can't create debugfs dir vop\n"); return; } - debugfs_create_file("dp", 0444, vi->dbg, vi, &dp_ops); - debugfs_create_file("vdev_info", 0444, vi->dbg, vi, &vdev_info_ops); + debugfs_create_file("dp", 0444, vi->dbg, vi, &vop_dp_fops); + debugfs_create_file("vdev_info", 0444, vi->dbg, vi, &vop_vdev_info_fops); } void vop_exit_debugfs(struct vop_info *vi) diff --git a/drivers/misc/mic/vop/vop_main.c b/drivers/misc/mic/vop/vop_main.c index 3633202e18f4..6b212c8b78e7 100644 --- a/drivers/misc/mic/vop/vop_main.c +++ b/drivers/misc/mic/vop/vop_main.c @@ -129,6 +129,16 @@ static u64 vop_get_features(struct virtio_device *vdev) return features; } +static void vop_transport_features(struct virtio_device *vdev) +{ + /* + * Packed ring isn't enabled on virtio_vop for now, + * because virtio_vop uses vring_new_virtqueue() which + * creates virtio rings on preallocated memory. + */ + __virtio_clear_bit(vdev, VIRTIO_F_RING_PACKED); +} + static int vop_finalize_features(struct virtio_device *vdev) { unsigned int i, bits; @@ -141,6 +151,9 @@ static int vop_finalize_features(struct virtio_device *vdev) /* Give virtio_ring a chance to accept features. */ vring_transport_features(vdev); + /* Give virtio_vop a chance to accept features. */ + vop_transport_features(vdev); + memset_io(out_features, 0, feature_len); bits = min_t(unsigned, feature_len, sizeof(vdev->features)) * 8; diff --git a/drivers/misc/ocxl/afu_irq.c b/drivers/misc/ocxl/afu_irq.c index e70cfa24577f..11ab996657a2 100644 --- a/drivers/misc/ocxl/afu_irq.c +++ b/drivers/misc/ocxl/afu_irq.c @@ -2,7 +2,6 @@ // Copyright 2017 IBM Corp. 
#include #include -#include #include "ocxl_internal.h" #include "trace.h" diff --git a/drivers/misc/ocxl/config.c b/drivers/misc/ocxl/config.c index 57a6bb1fd3c9..8f2c5d8bd2ee 100644 --- a/drivers/misc/ocxl/config.c +++ b/drivers/misc/ocxl/config.c @@ -318,7 +318,7 @@ static int read_afu_name(struct pci_dev *dev, struct ocxl_fn_config *fn, if (rc) return rc; ptr = (u32 *) &afu->name[i]; - *ptr = val; + *ptr = le32_to_cpu((__force __le32) val); } afu->name[OCXL_AFU_NAME_SZ - 1] = '\0'; /* play safe */ return 0; diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c index 31695a078485..d50b861d7e57 100644 --- a/drivers/misc/ocxl/link.c +++ b/drivers/misc/ocxl/link.c @@ -273,9 +273,9 @@ static int setup_xsl_irq(struct pci_dev *dev, struct link *link) spa->irq_name = kasprintf(GFP_KERNEL, "ocxl-xsl-%x-%x-%x", link->domain, link->bus, link->dev); if (!spa->irq_name) { - unmap_irq_registers(spa); dev_err(&dev->dev, "Can't allocate name for xsl interrupt\n"); - return -ENOMEM; + rc = -ENOMEM; + goto err_xsl; } /* * At some point, we'll need to look into allowing a higher @@ -283,11 +283,10 @@ static int setup_xsl_irq(struct pci_dev *dev, struct link *link) */ spa->virq = irq_create_mapping(NULL, hwirq); if (!spa->virq) { - kfree(spa->irq_name); - unmap_irq_registers(spa); dev_err(&dev->dev, "irq_create_mapping failed for translation interrupt\n"); - return -EINVAL; + rc = -EINVAL; + goto err_name; } dev_dbg(&dev->dev, "hwirq %d mapped to virq %d\n", hwirq, spa->virq); @@ -295,15 +294,21 @@ static int setup_xsl_irq(struct pci_dev *dev, struct link *link) rc = request_irq(spa->virq, xsl_fault_handler, 0, spa->irq_name, link); if (rc) { - irq_dispose_mapping(spa->virq); - kfree(spa->irq_name); - unmap_irq_registers(spa); dev_err(&dev->dev, "request_irq failed for translation interrupt: %d\n", rc); - return -EINVAL; + rc = -EINVAL; + goto err_mapping; } return 0; + +err_mapping: + irq_dispose_mapping(spa->virq); +err_name: + kfree(spa->irq_name); +err_xsl: + unmap_irq_registers(spa); + return rc; } static void release_xsl_irq(struct link *link) @@ -566,7 +571,7 @@ int ocxl_link_update_pe(void *link_handle, int pasid, __u16 tid) mutex_lock(&spa->spa_lock); - pe->tid = tid; + pe->tid = cpu_to_be32(tid); /* * The barrier makes sure the PE is updated diff --git a/drivers/misc/pvpanic.c b/drivers/misc/pvpanic.c new file mode 100644 index 000000000000..595ac065b401 --- /dev/null +++ b/drivers/misc/pvpanic.c @@ -0,0 +1,192 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Pvpanic Device Support + * + * Copyright (C) 2013 Fujitsu. + * Copyright (C) 2018 ZTE. 
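+ * + * The pvpanic device is a minimal paravirtual interface: a single byte-wide register, discovered either through ACPI (id "QEMU0001") or as an MMIO platform device; writing PVPANIC_PANICKED (bit 0) from a panic notifier signals the hypervisor that the guest kernel has panicked.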
+ */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include + +static void __iomem *base; + +#define PVPANIC_PANICKED (1 << 0) + +MODULE_AUTHOR("Hu Tao "); +MODULE_DESCRIPTION("pvpanic device driver"); +MODULE_LICENSE("GPL"); + +static void +pvpanic_send_event(unsigned int event) +{ + iowrite8(event, base); +} + +static int +pvpanic_panic_notify(struct notifier_block *nb, unsigned long code, + void *unused) +{ + pvpanic_send_event(PVPANIC_PANICKED); + return NOTIFY_DONE; +} + +static struct notifier_block pvpanic_panic_nb = { + .notifier_call = pvpanic_panic_notify, + .priority = 1, /* let this be called before broken drm_fb_helper */ +}; + +#ifdef CONFIG_ACPI +static int pvpanic_add(struct acpi_device *device); +static int pvpanic_remove(struct acpi_device *device); + +static const struct acpi_device_id pvpanic_device_ids[] = { + { "QEMU0001", 0 }, + { "", 0 } +}; +MODULE_DEVICE_TABLE(acpi, pvpanic_device_ids); + +static struct acpi_driver pvpanic_driver = { + .name = "pvpanic", + .class = "QEMU", + .ids = pvpanic_device_ids, + .ops = { + .add = pvpanic_add, + .remove = pvpanic_remove, + }, + .owner = THIS_MODULE, +}; + +static acpi_status +pvpanic_walk_resources(struct acpi_resource *res, void *context) +{ + struct resource r; + + if (acpi_dev_resource_io(res, &r)) { + base = ioport_map(r.start, resource_size(&r)); + return AE_OK; + } else if (acpi_dev_resource_memory(res, &r)) { + base = ioremap(r.start, resource_size(&r)); + return AE_OK; + } + + return AE_ERROR; +} + +static int pvpanic_add(struct acpi_device *device) +{ + int ret; + + ret = acpi_bus_get_status(device); + if (ret < 0) + return ret; + + if (!device->status.enabled || !device->status.functional) + return -ENODEV; + + acpi_walk_resources(device->handle, METHOD_NAME__CRS, + pvpanic_walk_resources, NULL); + + if (!base) + return -ENODEV; + + atomic_notifier_chain_register(&panic_notifier_list, + &pvpanic_panic_nb); + + return 0; +} + +static int pvpanic_remove(struct acpi_device *device) +{ + + atomic_notifier_chain_unregister(&panic_notifier_list, + &pvpanic_panic_nb); + iounmap(base); + + return 0; +} + +static int pvpanic_register_acpi_driver(void) +{ + return acpi_bus_register_driver(&pvpanic_driver); +} + +static void pvpanic_unregister_acpi_driver(void) +{ + acpi_bus_unregister_driver(&pvpanic_driver); +} +#else +static int pvpanic_register_acpi_driver(void) +{ + return -ENODEV; +} + +static void pvpanic_unregister_acpi_driver(void) {} +#endif + +static int pvpanic_mmio_probe(struct platform_device *pdev) +{ + struct resource *mem; + + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!mem) + return -EINVAL; + + base = devm_ioremap_resource(&pdev->dev, mem); + if (IS_ERR(base)) + return PTR_ERR(base); + + atomic_notifier_chain_register(&panic_notifier_list, + &pvpanic_panic_nb); + + return 0; +} + +static int pvpanic_mmio_remove(struct platform_device *pdev) +{ + + atomic_notifier_chain_unregister(&panic_notifier_list, + &pvpanic_panic_nb); + + return 0; +} + +static const struct of_device_id pvpanic_mmio_match[] = { + { .compatible = "qemu,pvpanic-mmio", }, + {} +}; + +static struct platform_driver pvpanic_mmio_driver = { + .driver = { + .name = "pvpanic-mmio", + .of_match_table = pvpanic_mmio_match, + }, + .probe = pvpanic_mmio_probe, + .remove = pvpanic_mmio_remove, +}; + +static int __init pvpanic_mmio_init(void) +{ + if (acpi_disabled) + return platform_driver_register(&pvpanic_mmio_driver); + else + return pvpanic_register_acpi_driver(); +}
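+
+/*
+ * A minimal sketch (illustration only, not part of this driver): other
+ * code can observe the same event by registering on panic_notifier_list
+ * exactly as pvpanic does above; the names below are hypothetical.
+ *
+ *	static int example_panic_notify(struct notifier_block *nb,
+ *					unsigned long code, void *unused)
+ *	{
+ *		pr_emerg("example: kernel panic observed\n");
+ *		return NOTIFY_DONE;
+ *	}
+ *
+ *	static struct notifier_block example_panic_nb = {
+ *		.notifier_call = example_panic_notify,
+ *	};
+ *
+ *	// at init time:
+ *	atomic_notifier_chain_register(&panic_notifier_list,
+ *				       &example_panic_nb);
+ */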
+ +static void __exit pvpanic_mmio_exit(void) +{ + if (acpi_disabled) + platform_driver_unregister(&pvpanic_mmio_driver); + else + pvpanic_unregister_acpi_driver(); +} + +module_init(pvpanic_mmio_init); +module_exit(pvpanic_mmio_exit); diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c index 03b49d52092e..ca2032afe035 100644 --- a/drivers/misc/sgi-gru/grutlbpurge.c +++ b/drivers/misc/sgi-gru/grutlbpurge.c @@ -220,9 +220,7 @@ void gru_flush_all_tlb(struct gru_state *gru) * MMUOPS notifier callout functions */ static int gru_invalidate_range_start(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct, ms_notifier); @@ -230,15 +228,14 @@ static int gru_invalidate_range_start(struct mmu_notifier *mn, STAT(mmu_invalidate_range); atomic_inc(&gms->ms_range_active); gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx, act %d\n", gms, - start, end, atomic_read(&gms->ms_range_active)); - gru_flush_tlb_range(gms, start, end - start); + range->start, range->end, atomic_read(&gms->ms_range_active)); + gru_flush_tlb_range(gms, range->start, range->end - range->start); return 0; } static void gru_invalidate_range_end(struct mmu_notifier *mn, - struct mm_struct *mm, unsigned long start, - unsigned long end) + const struct mmu_notifier_range *range) { struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct, ms_notifier); @@ -247,7 +244,8 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn, (void)atomic_dec_and_test(&gms->ms_range_active); wake_up_all(&gms->ms_wait_queue); - gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx\n", gms, start, end); + gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx\n", + gms, range->start, range->end); } static void gru_release(struct mmu_notifier *mn, struct mm_struct *mm) diff --git a/drivers/misc/ti-st/st_kim.c b/drivers/misc/ti-st/st_kim.c index 1874ac922166..e7cfdbd1f66d 100644 --- a/drivers/misc/ti-st/st_kim.c +++ b/drivers/misc/ti-st/st_kim.c @@ -211,7 +211,7 @@ static void kim_int_recv(struct kim_data_s *kim_gdata, static long read_local_version(struct kim_data_s *kim_gdata, char *bts_scr_name) { unsigned short version = 0, chip = 0, min_ver = 0, maj_ver = 0; - const char read_ver_cmd[] = { 0x01, 0x01, 0x10, 0x00 }; + static const char read_ver_cmd[] = { 0x01, 0x01, 0x10, 0x00 }; long timeout; pr_debug("%s", __func__); @@ -564,7 +564,7 @@ long st_kim_stop(void *kim_data) /* functions called from subsystems */ /* called when debugfs entry is read from */ -static int show_version(struct seq_file *s, void *unused) +static int version_show(struct seq_file *s, void *unused) { struct kim_data_s *kim_gdata = (struct kim_data_s *)s->private; seq_printf(s, "%04X %d.%d.%d\n", kim_gdata->version.full, @@ -573,7 +573,7 @@ static int show_version(struct seq_file *s, void *unused) return 0; } -static int show_list(struct seq_file *s, void *unused) +static int list_show(struct seq_file *s, void *unused) { struct kim_data_s *kim_gdata = (struct kim_data_s *)s->private; kim_st_list_protocols(kim_gdata->core_data, s); @@ -688,30 +688,8 @@ err: *core_data = NULL; } -static int kim_version_open(struct inode *i, struct file *f) -{ - return single_open(f, show_version, i->i_private); -} - -static int kim_list_open(struct inode *i, struct file *f) -{ - return single_open(f, show_list, i->i_private); -} - -static const struct file_operations version_debugfs_fops = { - /* version 
info */ - .open = kim_version_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; -static const struct file_operations list_debugfs_fops = { - /* protocols info */ - .open = kim_list_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(version); +DEFINE_SHOW_ATTRIBUTE(list); /**********************************************************************/ /* functions called from platform device driver subsystem @@ -789,9 +767,9 @@ static int kim_probe(struct platform_device *pdev) } debugfs_create_file("version", S_IRUGO, kim_debugfs_dir, - kim_gdata, &version_debugfs_fops); + kim_gdata, &version_fops); debugfs_create_file("protocols", S_IRUGO, kim_debugfs_dir, - kim_gdata, &list_debugfs_fops); + kim_gdata, &list_fops); return 0; err_sysfs_group: diff --git a/drivers/misc/vexpress-syscfg.c b/drivers/misc/vexpress-syscfg.c index 6c3591cdf855..a3c6c773d9dc 100644 --- a/drivers/misc/vexpress-syscfg.c +++ b/drivers/misc/vexpress-syscfg.c @@ -61,7 +61,7 @@ static int vexpress_syscfg_exec(struct vexpress_syscfg_func *func, int tries; long timeout; - if (WARN_ON(index > func->num_templates)) + if (WARN_ON(index >= func->num_templates)) return -EINVAL; command = readl(syscfg->base + SYS_CFGCTRL); diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c index 9b0b3fa4f836..f8240b87df22 100644 --- a/drivers/misc/vmw_balloon.c +++ b/drivers/misc/vmw_balloon.c @@ -570,7 +570,7 @@ static int vmballoon_send_get_target(struct vmballoon *b) unsigned long status; unsigned long limit; - limit = totalram_pages; + limit = totalram_pages(); /* Ensure limit fits in 32-bits */ if (limit != (u32)limit) @@ -1470,18 +1470,7 @@ static int vmballoon_debug_show(struct seq_file *f, void *offset) return 0; } -static int vmballoon_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, vmballoon_debug_show, inode->i_private); -} - -static const struct file_operations vmballoon_debug_fops = { - .owner = THIS_MODULE, - .open = vmballoon_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(vmballoon_debug); static int __init vmballoon_debugfs_init(struct vmballoon *b) { diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c index edfffc9699ba..5da1f3e3f997 100644 --- a/drivers/misc/vmw_vmci/vmci_host.c +++ b/drivers/misc/vmw_vmci/vmci_host.c @@ -750,19 +750,10 @@ static int vmci_host_do_ctx_set_cpt_state(struct vmci_host_dev *vmci_host_dev, if (copy_from_user(&set_info, uptr, sizeof(set_info))) return -EFAULT; - cpt_buf = kmalloc(set_info.buf_size, GFP_KERNEL); - if (!cpt_buf) { - vmci_ioctl_err( - "cannot allocate memory to set cpt state (type=%d)\n", - set_info.cpt_type); - return -ENOMEM; - } - - if (copy_from_user(cpt_buf, (void __user *)(uintptr_t)set_info.cpt_buf, - set_info.buf_size)) { - retval = -EFAULT; - goto out; - } + cpt_buf = memdup_user((void __user *)(uintptr_t)set_info.cpt_buf, + set_info.buf_size); + if (IS_ERR(cpt_buf)) + return PTR_ERR(cpt_buf); cid = vmci_ctx_get_id(vmci_host_dev->context); set_info.result = vmci_ctx_set_chkpt_state(cid, set_info.cpt_type, @@ -770,7 +761,6 @@ static int vmci_host_do_ctx_set_cpt_state(struct vmci_host_dev *vmci_host_dev, retval = copy_to_user(uptr, &set_info, sizeof(set_info)) ? 
-EFAULT : 0; -out: kfree(cpt_buf); return retval; } diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 111934838da2..aef1185f383d 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -100,7 +100,6 @@ static DEFINE_IDA(mmc_rpmb_ida); * There is one mmc_blk_data per slot. */ struct mmc_blk_data { - spinlock_t lock; struct device *parent; struct gendisk *disk; struct mmc_queue queue; @@ -1488,7 +1487,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req) blk_mq_end_request(req, BLK_STS_OK); } - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&mq->lock, flags); mq->in_flight[mmc_issue_type(mq, req)] -= 1; @@ -1496,7 +1495,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req) mmc_cqe_check_busy(mq); - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&mq->lock, flags); if (!mq->cqe_busy) blk_mq_run_hw_queues(q, true); @@ -1961,7 +1960,7 @@ static void mmc_blk_urgent_bkops(struct mmc_queue *mq, struct mmc_queue_req *mqrq) { if (mmc_blk_urgent_bkops_needed(mq, mqrq)) - mmc_start_bkops(mq->card, true); + mmc_run_bkops(mq->card); } void mmc_blk_mq_complete(struct request *req) @@ -1993,17 +1992,16 @@ static void mmc_blk_mq_poll_completion(struct mmc_queue *mq, static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req) { - struct request_queue *q = req->q; unsigned long flags; bool put_card; - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&mq->lock, flags); mq->in_flight[mmc_issue_type(mq, req)] -= 1; put_card = (mmc_tot_in_flight(mq) == 0); - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&mq->lock, flags); if (put_card) mmc_put_card(mq->card, &mq->ctx); @@ -2099,11 +2097,11 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq) * request does not need to wait (although it does need to * complete complete_req first). */ - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&mq->lock, flags); mq->complete_req = req; mq->rw_wait = false; waiting = mq->waiting; - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&mq->lock, flags); /* * If 'waiting' then the waiting task will complete this @@ -2122,10 +2120,10 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq) /* Take the recovery path for errors or urgent background operations */ if (mmc_blk_rq_error(&mqrq->brq) || mmc_blk_urgent_bkops_needed(mq, mqrq)) { - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&mq->lock, flags); mq->recovery_needed = true; mq->recovery_req = req; - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&mq->lock, flags); wake_up(&mq->wait); schedule_work(&mq->recovery_work); return; @@ -2141,7 +2139,6 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq) static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err) { - struct request_queue *q = mq->queue; unsigned long flags; bool done; @@ -2149,7 +2146,7 @@ static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err) * Wait while there is another request in progress, but not if recovery * is needed. Also indicate whether there is a request waiting to start. 
*/ - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&mq->lock, flags); if (mq->recovery_needed) { *err = -EBUSY; done = true; @@ -2157,7 +2154,7 @@ static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err) done = !mq->rw_wait; } mq->waiting = !done; - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&mq->lock, flags); return done; } @@ -2334,12 +2331,11 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card, goto err_kfree; } - spin_lock_init(&md->lock); INIT_LIST_HEAD(&md->part); INIT_LIST_HEAD(&md->rpmbs); md->usage = 1; - ret = mmc_init_queue(&md->queue, card, &md->lock, subname); + ret = mmc_init_queue(&md->queue, card); if (ret) goto err_putdisk; diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h index 1170feb8f969..eef301452406 100644 --- a/drivers/mmc/core/card.h +++ b/drivers/mmc/core/card.h @@ -23,15 +23,13 @@ #define MMC_STATE_BLOCKADDR (1<<2) /* card uses block-addressing */ #define MMC_CARD_SDXC (1<<3) /* card is SDXC */ #define MMC_CARD_REMOVED (1<<4) /* card has been removed */ -#define MMC_STATE_DOING_BKOPS (1<<5) /* card is doing BKOPS */ -#define MMC_STATE_SUSPENDED (1<<6) /* card is suspended */ +#define MMC_STATE_SUSPENDED (1<<5) /* card is suspended */ #define mmc_card_present(c) ((c)->state & MMC_STATE_PRESENT) #define mmc_card_readonly(c) ((c)->state & MMC_STATE_READONLY) #define mmc_card_blockaddr(c) ((c)->state & MMC_STATE_BLOCKADDR) #define mmc_card_ext_capacity(c) ((c)->state & MMC_CARD_SDXC) #define mmc_card_removed(c) ((c) && ((c)->state & MMC_CARD_REMOVED)) -#define mmc_card_doing_bkops(c) ((c)->state & MMC_STATE_DOING_BKOPS) #define mmc_card_suspended(c) ((c)->state & MMC_STATE_SUSPENDED) #define mmc_card_set_present(c) ((c)->state |= MMC_STATE_PRESENT) @@ -39,8 +37,6 @@ #define mmc_card_set_blockaddr(c) ((c)->state |= MMC_STATE_BLOCKADDR) #define mmc_card_set_ext_capacity(c) ((c)->state |= MMC_CARD_SDXC) #define mmc_card_set_removed(c) ((c)->state |= MMC_CARD_REMOVED) -#define mmc_card_set_doing_bkops(c) ((c)->state |= MMC_STATE_DOING_BKOPS) -#define mmc_card_clr_doing_bkops(c) ((c)->state &= ~MMC_STATE_DOING_BKOPS) #define mmc_card_set_suspended(c) ((c)->state |= MMC_STATE_SUSPENDED) #define mmc_card_clr_suspended(c) ((c)->state &= ~MMC_STATE_SUSPENDED) diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 50a5c340307b..5bd58b95d318 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -887,7 +887,10 @@ void mmc_release_host(struct mmc_host *host) spin_unlock_irqrestore(&host->lock, flags); wake_up(&host->wq); pm_runtime_mark_last_busy(mmc_dev(host)); - pm_runtime_put_autosuspend(mmc_dev(host)); + if (host->caps & MMC_CAP_SYNC_RUNTIME_PM) + pm_runtime_put_sync_suspend(mmc_dev(host)); + else + pm_runtime_put_autosuspend(mmc_dev(host)); } } EXPORT_SYMBOL(mmc_release_host); @@ -2413,20 +2416,6 @@ int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen) } EXPORT_SYMBOL(mmc_set_blocklen); -int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount, - bool is_rel_write) -{ - struct mmc_command cmd = {}; - - cmd.opcode = MMC_SET_BLOCK_COUNT; - cmd.arg = blockcount & 0x0000FFFF; - if (is_rel_write) - cmd.arg |= 1 << 31; - cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC; - return mmc_wait_for_cmd(card->host, &cmd, 5); -} -EXPORT_SYMBOL(mmc_set_blockcount); - static void mmc_hw_reset_for_init(struct mmc_host *host) { mmc_pwrseq_reset(host); diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h index 087ba68b2920..8fb6bc37f808 100644 
--- a/drivers/mmc/core/core.h +++ b/drivers/mmc/core/core.h @@ -118,8 +118,6 @@ int mmc_erase_group_aligned(struct mmc_card *card, unsigned int from, unsigned int mmc_calc_max_discard(struct mmc_card *card); int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen); -int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount, - bool is_rel_write); int __mmc_claim_host(struct mmc_host *host, struct mmc_ctx *ctx, atomic_t *abort); diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c index 55997cf84b39..da892a599524 100644 --- a/drivers/mmc/core/mmc.c +++ b/drivers/mmc/core/mmc.c @@ -1181,6 +1181,9 @@ static int mmc_select_hs400(struct mmc_card *card) if (err) goto out_err; + if (host->ops->hs400_prepare_ddr) + host->ops->hs400_prepare_ddr(host); + /* Switch card to DDR */ err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BUS_WIDTH, @@ -2011,12 +2014,6 @@ static int _mmc_suspend(struct mmc_host *host, bool is_suspend) if (mmc_card_suspended(host->card)) goto out; - if (mmc_card_doing_bkops(host->card)) { - err = mmc_stop_bkops(host->card); - if (err) - goto out; - } - err = mmc_flush_cache(host->card); if (err) goto out; diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c index 873b2aa0c155..9054329fe903 100644 --- a/drivers/mmc/core/mmc_ops.c +++ b/drivers/mmc/core/mmc_ops.c @@ -802,12 +802,6 @@ static int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status) unsigned int opcode; int err; - if (!card->ext_csd.hpi) { - pr_warn("%s: Card didn't support HPI command\n", - mmc_hostname(card->host)); - return -EINVAL; - } - opcode = card->ext_csd.hpi_cmd; if (opcode == MMC_STOP_TRANSMISSION) cmd.flags = MMC_RSP_R1B | MMC_CMD_AC; @@ -897,34 +891,6 @@ int mmc_can_ext_csd(struct mmc_card *card) return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3); } -/** - * mmc_stop_bkops - stop ongoing BKOPS - * @card: MMC card to check BKOPS - * - * Send HPI command to stop ongoing background operations to - * allow rapid servicing of foreground operations, e.g. read/ - * writes. Wait until the card comes out of the programming state - * to avoid errors in servicing read/write requests. - */ -int mmc_stop_bkops(struct mmc_card *card) -{ - int err = 0; - - err = mmc_interrupt_hpi(card); - - /* - * If err is EINVAL, we can't issue an HPI. - * It should complete the BKOPS. - */ - if (!err || (err == -EINVAL)) { - mmc_card_clr_doing_bkops(card); - mmc_retune_release(card->host); - err = 0; - } - - return err; -} - static int mmc_read_bkops_status(struct mmc_card *card) { int err; @@ -941,22 +907,17 @@ static int mmc_read_bkops_status(struct mmc_card *card) } /** - * mmc_start_bkops - start BKOPS for supported cards - * @card: MMC card to start BKOPS - * @from_exception: A flag to indicate if this function was - * called due to an exception raised by the card + * mmc_run_bkops - Run BKOPS for supported cards + * @card: MMC card to run BKOPS for * - * Start background operations whenever requested. - * When the urgent BKOPS bit is set in a R1 command response - * then background operations should be started immediately. + * Run background operations synchronously for cards having manual BKOPS + * enabled and in case it reports urgent BKOPS level. 
*/ -void mmc_start_bkops(struct mmc_card *card, bool from_exception) +void mmc_run_bkops(struct mmc_card *card) { int err; - int timeout; - bool use_busy_signal; - if (!card->ext_csd.man_bkops_en || mmc_card_doing_bkops(card)) + if (!card->ext_csd.man_bkops_en) return; err = mmc_read_bkops_status(card); @@ -966,44 +927,26 @@ void mmc_start_bkops(struct mmc_card *card, bool from_exception) return; } - if (!card->ext_csd.raw_bkops_status) - return; - - if (card->ext_csd.raw_bkops_status < EXT_CSD_BKOPS_LEVEL_2 && - from_exception) + if (!card->ext_csd.raw_bkops_status || + card->ext_csd.raw_bkops_status < EXT_CSD_BKOPS_LEVEL_2) return; - if (card->ext_csd.raw_bkops_status >= EXT_CSD_BKOPS_LEVEL_2) { - timeout = MMC_OPS_TIMEOUT_MS; - use_busy_signal = true; - } else { - timeout = 0; - use_busy_signal = false; - } - mmc_retune_hold(card->host); - err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, - EXT_CSD_BKOPS_START, 1, timeout, 0, - use_busy_signal, true, false); - if (err) { + /* + * For urgent BKOPS status, LEVEL_2 and higher, let's execute + * synchronously. Future wise, we may consider to start BKOPS, for less + * urgent levels by using an asynchronous background task, when idle. + */ + err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, + EXT_CSD_BKOPS_START, 1, MMC_OPS_TIMEOUT_MS); + if (err) pr_warn("%s: Error %d starting bkops\n", mmc_hostname(card->host), err); - mmc_retune_release(card->host); - return; - } - /* - * For urgent bkops status (LEVEL_2 and more) - * bkops executed synchronously, otherwise - * the operation is in progress - */ - if (!use_busy_signal) - mmc_card_set_doing_bkops(card); - else - mmc_retune_release(card->host); + mmc_retune_release(card->host); } -EXPORT_SYMBOL(mmc_start_bkops); +EXPORT_SYMBOL(mmc_run_bkops); /* * Flush the cache to the non-volatile storage. 
diff --git a/drivers/mmc/core/mmc_ops.h b/drivers/mmc/core/mmc_ops.h index a1390d486381..018a5e3f66d6 100644 --- a/drivers/mmc/core/mmc_ops.h +++ b/drivers/mmc/core/mmc_ops.h @@ -40,8 +40,7 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, bool use_busy_signal, bool send_status, bool retry_crc_err); int mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, unsigned int timeout_ms); -int mmc_stop_bkops(struct mmc_card *card); -void mmc_start_bkops(struct mmc_card *card, bool from_exception); +void mmc_run_bkops(struct mmc_card *card); int mmc_flush_cache(struct mmc_card *card); int mmc_cmdq_enable(struct mmc_card *card); int mmc_cmdq_disable(struct mmc_card *card); diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c index ef18daeaa4cc..eabb1cab1765 100644 --- a/drivers/mmc/core/mmc_test.c +++ b/drivers/mmc/core/mmc_test.c @@ -3145,17 +3145,7 @@ static int mtf_testlist_show(struct seq_file *sf, void *data) return 0; } -static int mtf_testlist_open(struct inode *inode, struct file *file) -{ - return single_open(file, mtf_testlist_show, inode->i_private); -} - -static const struct file_operations mmc_test_fops_testlist = { - .open = mtf_testlist_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(mtf_testlist); static void mmc_test_free_dbgfs_file(struct mmc_card *card) { @@ -3216,7 +3206,7 @@ static int mmc_test_register_dbgfs_file(struct mmc_card *card) goto err; ret = __mmc_test_register_dbgfs_file(card, "testlist", S_IRUGO, - &mmc_test_fops_testlist); + &mtf_testlist_fops); if (ret) goto err; diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 6edffeed9953..35cc138b096d 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -89,9 +89,9 @@ void mmc_cqe_recovery_notifier(struct mmc_request *mrq) struct mmc_queue *mq = q->queuedata; unsigned long flags; - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&mq->lock, flags); __mmc_cqe_recovery_notifier(mq); - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&mq->lock, flags); } static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req) @@ -128,14 +128,14 @@ static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req, unsigned long flags; int ret; - spin_lock_irqsave(q->queue_lock, flags); + spin_lock_irqsave(&mq->lock, flags); if (mq->recovery_needed || !mq->use_cqe) ret = BLK_EH_RESET_TIMER; else ret = mmc_cqe_timed_out(req); - spin_unlock_irqrestore(q->queue_lock, flags); + spin_unlock_irqrestore(&mq->lock, flags); return ret; } @@ -157,9 +157,9 @@ static void mmc_mq_recovery_handler(struct work_struct *work) mq->in_recovery = false; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&mq->lock); mq->recovery_needed = false; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&mq->lock); mmc_put_card(mq->card, &mq->ctx); @@ -258,10 +258,10 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx, issue_type = mmc_issue_type(mq, req); - spin_lock_irq(q->queue_lock); + spin_lock_irq(&mq->lock); if (mq->recovery_needed || mq->busy) { - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&mq->lock); return BLK_STS_RESOURCE; } @@ -269,7 +269,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx, case MMC_ISSUE_DCMD: if (mmc_cqe_dcmd_busy(mq)) { mq->cqe_busy |= MMC_CQE_DCMD_BUSY; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&mq->lock); return BLK_STS_RESOURCE; } break; @@ -294,7 +294,7 @@ static blk_status_t mmc_mq_queue_rq(struct 
blk_mq_hw_ctx *hctx, get_card = (mmc_tot_in_flight(mq) == 1); cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1); - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&mq->lock); if (!(req->rq_flags & RQF_DONTPREP)) { req_to_mmc_queue_req(req)->retries = 0; @@ -328,12 +328,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx, if (issued != MMC_REQ_STARTED) { bool put_card = false; - spin_lock_irq(q->queue_lock); + spin_lock_irq(&mq->lock); mq->in_flight[issue_type] -= 1; if (mmc_tot_in_flight(mq) == 0) put_card = true; mq->busy = false; - spin_unlock_irq(q->queue_lock); + spin_unlock_irq(&mq->lock); if (put_card) mmc_put_card(card, &mq->ctx); } else { @@ -378,14 +378,37 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card) init_waitqueue_head(&mq->wait); } -static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth, - const struct blk_mq_ops *mq_ops, spinlock_t *lock) +/* Set queue depth to get a reasonable value for q->nr_requests */ +#define MMC_QUEUE_DEPTH 64 + +/** + * mmc_init_queue - initialise a queue structure. + * @mq: mmc queue + * @card: mmc card to attach this queue + * + * Initialise a MMC card request queue. + */ +int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card) { + struct mmc_host *host = card->host; int ret; + mq->card = card; + mq->use_cqe = host->cqe_enabled; + + spin_lock_init(&mq->lock); + memset(&mq->tag_set, 0, sizeof(mq->tag_set)); - mq->tag_set.ops = mq_ops; - mq->tag_set.queue_depth = q_depth; + mq->tag_set.ops = &mmc_mq_ops; + /* + * The queue depth for CQE must match the hardware because the request + * tag is used to index the hardware queue. + */ + if (mq->use_cqe) + mq->tag_set.queue_depth = + min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth); + else + mq->tag_set.queue_depth = MMC_QUEUE_DEPTH; mq->tag_set.numa_node = NUMA_NO_NODE; mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE | BLK_MQ_F_BLOCKING; @@ -403,68 +426,17 @@ static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth, goto free_tag_set; } - mq->queue->queue_lock = lock; mq->queue->queuedata = mq; + blk_queue_rq_timeout(mq->queue, 60 * HZ); + mmc_setup_queue(mq, card); return 0; free_tag_set: blk_mq_free_tag_set(&mq->tag_set); - return ret; } -/* Set queue depth to get a reasonable value for q->nr_requests */ -#define MMC_QUEUE_DEPTH 64 - -static int mmc_mq_init(struct mmc_queue *mq, struct mmc_card *card, - spinlock_t *lock) -{ - struct mmc_host *host = card->host; - int q_depth; - int ret; - - /* - * The queue depth for CQE must match the hardware because the request - * tag is used to index the hardware queue. - */ - if (mq->use_cqe) - q_depth = min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth); - else - q_depth = MMC_QUEUE_DEPTH; - - ret = mmc_mq_init_queue(mq, q_depth, &mmc_mq_ops, lock); - if (ret) - return ret; - - blk_queue_rq_timeout(mq->queue, 60 * HZ); - - mmc_setup_queue(mq, card); - - return 0; -} - -/** - * mmc_init_queue - initialise a queue structure. - * @mq: mmc queue - * @card: mmc card to attach this queue - * @lock: queue lock - * @subname: partition subname - * - * Initialise a MMC card request queue. 
- */ -int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, - spinlock_t *lock, const char *subname) -{ - struct mmc_host *host = card->host; - - mq->card = card; - - mq->use_cqe = host->cqe_enabled; - - return mmc_mq_init(mq, card, lock); -} - void mmc_queue_suspend(struct mmc_queue *mq) { blk_mq_quiesce_queue(mq->queue); diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 9bf3c9245075..fd11491ced9f 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -77,6 +77,7 @@ struct mmc_queue { struct blk_mq_tag_set tag_set; struct mmc_blk_data *blkdata; struct request_queue *queue; + spinlock_t lock; int in_flight[MMC_ISSUE_MAX]; unsigned int cqe_busy; #define MMC_CQE_DCMD_BUSY BIT(0) @@ -95,8 +96,7 @@ struct mmc_queue { struct work_struct complete_work; }; -extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *, - const char *); +extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *); extern void mmc_cleanup_queue(struct mmc_queue *); extern void mmc_queue_suspend(struct mmc_queue *); extern void mmc_queue_resume(struct mmc_queue *); diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c index 86803a3a04dc..319ccd93383d 100644 --- a/drivers/mmc/core/slot-gpio.c +++ b/drivers/mmc/core/slot-gpio.c @@ -9,7 +9,6 @@ */ #include -#include #include #include #include @@ -27,8 +26,8 @@ struct mmc_gpio { bool override_cd_active_level; irqreturn_t (*cd_gpio_isr)(int irq, void *dev_id); char *ro_label; + char *cd_label; u32 cd_debounce_delay_ms; - char cd_label[]; }; static irqreturn_t mmc_gpio_cd_irqt(int irq, void *dev_id) @@ -45,15 +44,19 @@ static irqreturn_t mmc_gpio_cd_irqt(int irq, void *dev_id) int mmc_gpio_alloc(struct mmc_host *host) { - size_t len = strlen(dev_name(host->parent)) + 4; struct mmc_gpio *ctx = devm_kzalloc(host->parent, - sizeof(*ctx) + 2 * len, GFP_KERNEL); + sizeof(*ctx), GFP_KERNEL); if (ctx) { - ctx->ro_label = ctx->cd_label + len; ctx->cd_debounce_delay_ms = 200; - snprintf(ctx->cd_label, len, "%s cd", dev_name(host->parent)); - snprintf(ctx->ro_label, len, "%s ro", dev_name(host->parent)); + ctx->cd_label = devm_kasprintf(host->parent, GFP_KERNEL, + "%s cd", dev_name(host->parent)); + if (!ctx->cd_label) + return -ENOMEM; + ctx->ro_label = devm_kasprintf(host->parent, GFP_KERNEL, + "%s ro", dev_name(host->parent)); + if (!ctx->ro_label) + return -ENOMEM; host->slot.handler_priv = ctx; host->slot.cd_irq = -EINVAL; } @@ -98,36 +101,6 @@ int mmc_gpio_get_cd(struct mmc_host *host) } EXPORT_SYMBOL(mmc_gpio_get_cd); -/** - * mmc_gpio_request_ro - request a gpio for write-protection - * @host: mmc host - * @gpio: gpio number requested - * - * As devm_* managed functions are used in mmc_gpio_request_ro(), client - * drivers do not need to worry about freeing up memory. - * - * Returns zero on success, else an error. 
- */ -int mmc_gpio_request_ro(struct mmc_host *host, unsigned int gpio) -{ - struct mmc_gpio *ctx = host->slot.handler_priv; - int ret; - - if (!gpio_is_valid(gpio)) - return -EINVAL; - - ret = devm_gpio_request_one(host->parent, gpio, GPIOF_DIR_IN, - ctx->ro_label); - if (ret < 0) - return ret; - - ctx->override_ro_active_level = true; - ctx->ro_gpio = gpio_to_desc(gpio); - - return 0; -} -EXPORT_SYMBOL(mmc_gpio_request_ro); - void mmc_gpiod_request_cd_irq(struct mmc_host *host) { struct mmc_gpio *ctx = host->slot.handler_priv; @@ -196,50 +169,6 @@ void mmc_gpio_set_cd_isr(struct mmc_host *host, } EXPORT_SYMBOL(mmc_gpio_set_cd_isr); -/** - * mmc_gpio_request_cd - request a gpio for card-detection - * @host: mmc host - * @gpio: gpio number requested - * @debounce: debounce time in microseconds - * - * As devm_* managed functions are used in mmc_gpio_request_cd(), client - * drivers do not need to worry about freeing up memory. - * - * If GPIO debouncing is desired, set the debounce parameter to a non-zero - * value. The caller is responsible for ensuring that the GPIO driver associated - * with the GPIO supports debouncing, otherwise an error will be returned. - * - * Returns zero on success, else an error. - */ -int mmc_gpio_request_cd(struct mmc_host *host, unsigned int gpio, - unsigned int debounce) -{ - struct mmc_gpio *ctx = host->slot.handler_priv; - int ret; - - ret = devm_gpio_request_one(host->parent, gpio, GPIOF_DIR_IN, - ctx->cd_label); - if (ret < 0) - /* - * don't bother freeing memory. It might still get used by other - * slot functions, in any case it will be freed, when the device - * is destroyed. - */ - return ret; - - if (debounce) { - ret = gpio_set_debounce(gpio, debounce); - if (ret < 0) - return ret; - } - - ctx->override_cd_active_level = true; - ctx->cd_gpio = gpio_to_desc(gpio); - - return 0; -} -EXPORT_SYMBOL(mmc_gpio_request_cd); - /** * mmc_gpiod_request_cd - request a gpio descriptor for card-detection * @host: mmc host @@ -250,8 +179,7 @@ EXPORT_SYMBOL(mmc_gpio_request_cd); * @gpio_invert: will return whether the GPIO line is inverted or not, set * to NULL to ignore * - * Use this function in place of mmc_gpio_request_cd() to use the GPIO - * descriptor API. Note that it must be called prior to mmc_add_host() + * Note that this must be called prior to mmc_add_host() * otherwise the caller must also call mmc_gpiod_request_cd_irq(). * * Returns zero on success, else an error. @@ -302,9 +230,6 @@ EXPORT_SYMBOL(mmc_can_gpio_cd); * @gpio_invert: will return whether the GPIO line is inverted or not, * set to NULL to ignore * - * Use this function in place of mmc_gpio_request_ro() to use the GPIO - * descriptor API. - * * Returns zero on success, else an error. */ int mmc_gpiod_request_ro(struct mmc_host *host, const char *con_id, diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig index 1b58739d9744..e26b8145efb3 100644 --- a/drivers/mmc/host/Kconfig +++ b/drivers/mmc/host/Kconfig @@ -441,6 +441,13 @@ config MMC_WBSD If unsure, say N. 
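
The slot-gpio hunks just above delete the legacy integer-GPIO helpers (mmc_gpio_request_cd() and mmc_gpio_request_ro()) and move the label strings to per-allocation devm_kasprintf(), leaving the GPIO descriptor API as the only path. A hedged sketch of what a host driver is expected to call instead; example_setup_cd() is hypothetical, and the "cd" con_id assumes the platform (DT, ACPI or board lookup table) describes the card-detect line under that name.

#include <linux/mmc/host.h>
#include <linux/mmc/slot-gpio.h>

static int example_setup_cd(struct mmc_host *mmc)
{
	int ret;

	/*
	 * Descriptor lookup by con_id/index; no active-level override
	 * (false), no debounce override (0), inversion state ignored
	 * (NULL).
	 */
	ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, NULL);
	if (ret)
		return ret;

	/*
	 * Only needed when this runs after mmc_add_host(); if called
	 * before, the core wires up the card-detect IRQ itself.
	 */
	mmc_gpiod_request_cd_irq(mmc);
	return 0;
}
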
+config MMC_ALCOR + tristate "Alcor Micro/Alcor Link SD/MMC controller" + depends on MISC_ALCOR_PCI + help + Say Y here to include driver code to support SD/MMC card interface + of Alcor Micro PCI-E card reader + config MMC_AU1X tristate "Alchemy AU1XX0 MMC Card Interface support" depends on MIPS_ALCHEMY @@ -646,13 +653,14 @@ config MMC_SDHI_SYS_DMAC config MMC_SDHI_INTERNAL_DMAC tristate "DMA for SDHI SD/SDIO controllers using on-chip bus mastering" - depends on ARM64 || ARCH_R8A77470 || COMPILE_TEST + depends on ARM64 || ARCH_R7S9210 || ARCH_R8A77470 || COMPILE_TEST depends on MMC_SDHI - default MMC_SDHI if (ARM64 || ARCH_R8A77470) + default MMC_SDHI if (ARM64 || ARCH_R7S9210 || ARCH_R8A77470) help This provides DMA support for SDHI SD/SDIO controllers using on-chip bus mastering. This supports the controllers - found in arm64 based SoCs. + found in arm64 based SoCs. This controller is also found in + some RZ family SoCs. config MMC_UNIPHIER tristate "UniPhier SD/eMMC Host Controller support" @@ -969,6 +977,8 @@ config MMC_SDHCI_XENON config MMC_SDHCI_OMAP tristate "TI SDHCI Controller Support" depends on MMC_SDHCI_PLTFM && OF + select THERMAL + select TI_SOC_THERMAL help This selects the Secure Digital Host Controller Interface (SDHCI) support present in TI's DRA7 SOCs. The controller supports @@ -977,3 +987,15 @@ config MMC_SDHCI_OMAP If you have a controller with this interface, say Y or M here. If unsure, say N. + +config MMC_SDHCI_AM654 + tristate "Support for the SDHCI Controller in TI's AM654 SOCs" + depends on MMC_SDHCI_PLTFM && OF + help + This selects the Secure Digital Host Controller Interface (SDHCI) + support present in TI's AM654 SOCs. The controller supports + SD/MMC/SDIO devices. + + If you have a controller with this interface, say Y or M here. + + If unsure, say N. diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile index 720d37777098..73578718f119 100644 --- a/drivers/mmc/host/Makefile +++ b/drivers/mmc/host/Makefile @@ -22,8 +22,10 @@ obj-$(CONFIG_MMC_SDHCI_S3C) += sdhci-s3c.o obj-$(CONFIG_MMC_SDHCI_SIRF) += sdhci-sirf.o obj-$(CONFIG_MMC_SDHCI_F_SDH30) += sdhci_f_sdh30.o obj-$(CONFIG_MMC_SDHCI_SPEAR) += sdhci-spear.o +obj-$(CONFIG_MMC_SDHCI_AM654) += sdhci_am654.o obj-$(CONFIG_MMC_WBSD) += wbsd.o obj-$(CONFIG_MMC_AU1X) += au1xmmc.o +obj-$(CONFIG_MMC_ALCOR) += alcor.o obj-$(CONFIG_MMC_MTK) += mtk-sd.o obj-$(CONFIG_MMC_OMAP) += omap.o obj-$(CONFIG_MMC_OMAP_HS) += omap_hsmmc.o diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c new file mode 100644 index 000000000000..c712b7deb3a9 --- /dev/null +++ b/drivers/mmc/host/alcor.c @@ -0,0 +1,1162 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2018 Oleksij Rempel + * + * Driver for Alcor Micro AU6601 and AU6621 controllers + */ + +/* Note: this driver was created without any documentation. Based + * on sniffing, testing and in some cases mimic of original driver. + * As soon as some one with documentation or more experience in SD/MMC, or + * reverse engineering then me, please review this driver and question every + * thing what I did. 
2018 Oleksij Rempel + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include + +enum alcor_cookie { + COOKIE_UNMAPPED, + COOKIE_PRE_MAPPED, + COOKIE_MAPPED, +}; + +struct alcor_pll_conf { + unsigned int clk_src_freq; + unsigned int clk_src_reg; + unsigned int min_div; + unsigned int max_div; +}; + +struct alcor_sdmmc_host { + struct device *dev; + struct alcor_pci_priv *alcor_pci; + + struct mmc_host *mmc; + struct mmc_request *mrq; + struct mmc_command *cmd; + struct mmc_data *data; + unsigned int dma_on:1; + unsigned int early_data:1; + + struct mutex cmd_mutex; + + struct delayed_work timeout_work; + + struct sg_mapping_iter sg_miter; /* SG state for PIO */ + struct scatterlist *sg; + unsigned int blocks; /* remaining PIO blocks */ + int sg_count; + + u32 irq_status_sd; + unsigned char cur_power_mode; +}; + +static const struct alcor_pll_conf alcor_pll_cfg[] = { + /* MHZ, CLK src, max div, min div */ + { 31250000, AU6601_CLK_31_25_MHZ, 1, 511}, + { 48000000, AU6601_CLK_48_MHZ, 1, 511}, + {125000000, AU6601_CLK_125_MHZ, 1, 511}, + {384000000, AU6601_CLK_384_MHZ, 1, 511}, +}; + +static inline void alcor_rmw8(struct alcor_sdmmc_host *host, unsigned int addr, + u8 clear, u8 set) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + u32 var; + + var = alcor_read8(priv, addr); + var &= ~clear; + var |= set; + alcor_write8(priv, var, addr); +} + +/* As soon as irqs are masked, some status updates may be missed. + * Use this with care. + */ +static inline void alcor_mask_sd_irqs(struct alcor_sdmmc_host *host) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + + alcor_write32(priv, 0, AU6601_REG_INT_ENABLE); +} + +static inline void alcor_unmask_sd_irqs(struct alcor_sdmmc_host *host) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + + alcor_write32(priv, AU6601_INT_CMD_MASK | AU6601_INT_DATA_MASK | + AU6601_INT_CARD_INSERT | AU6601_INT_CARD_REMOVE | + AU6601_INT_OVER_CURRENT_ERR, + AU6601_REG_INT_ENABLE); +} + +static void alcor_reset(struct alcor_sdmmc_host *host, u8 val) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + int i; + + alcor_write8(priv, val | AU6601_BUF_CTRL_RESET, + AU6601_REG_SW_RESET); + for (i = 0; i < 100; i++) { + if (!(alcor_read8(priv, AU6601_REG_SW_RESET) & val)) + return; + udelay(50); + } + dev_err(host->dev, "%s: timeout\n", __func__); +} + +static void alcor_data_set_dma(struct alcor_sdmmc_host *host) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + u32 addr; + + if (!host->sg_count) + return; + + if (!host->sg) { + dev_err(host->dev, "have blocks, but no SG\n"); + return; + } + + if (!sg_dma_len(host->sg)) { + dev_err(host->dev, "DMA SG len == 0\n"); + return; + } + + + addr = (u32)sg_dma_address(host->sg); + + alcor_write32(priv, addr, AU6601_REG_SDMA_ADDR); + host->sg = sg_next(host->sg); + host->sg_count--; +} + +static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host, + bool early) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + struct mmc_data *data = host->data; + u8 ctrl = 0; + + if (data->flags & MMC_DATA_WRITE) + ctrl |= AU6601_DATA_WRITE; + + if (data->host_cookie == COOKIE_MAPPED) { + if (host->early_data) { + host->early_data = false; + return; + } + + host->early_data = early; + + alcor_data_set_dma(host); + ctrl |= AU6601_DATA_DMA_MODE; + host->dma_on = 1; + alcor_write32(priv, data->sg_count * 0x1000, + AU6601_REG_BLOCK_SIZE); + } else { + alcor_write32(priv, data->blksz, AU6601_REG_BLOCK_SIZE); + } + + alcor_write8(priv, ctrl | AU6601_DATA_START_XFER, + 
AU6601_DATA_XFER_CTRL); +} + +static void alcor_trf_block_pio(struct alcor_sdmmc_host *host, bool read) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + size_t blksize, len; + u8 *buf; + + if (!host->blocks) + return; + + if (host->dma_on) { + dev_err(host->dev, "configured DMA but got PIO request.\n"); + return; + } + + if (!!(host->data->flags & MMC_DATA_READ) != read) { + dev_err(host->dev, "got unexpected direction %i != %i\n", + !!(host->data->flags & MMC_DATA_READ), read); + } + + if (!sg_miter_next(&host->sg_miter)) + return; + + blksize = host->data->blksz; + len = min(host->sg_miter.length, blksize); + + dev_dbg(host->dev, "PIO, %s block size: 0x%zx\n", + read ? "read" : "write", blksize); + + host->sg_miter.consumed = len; + host->blocks--; + + buf = host->sg_miter.addr; + + if (read) + ioread32_rep(priv->iobase + AU6601_REG_BUFFER, buf, len >> 2); + else + iowrite32_rep(priv->iobase + AU6601_REG_BUFFER, buf, len >> 2); + + sg_miter_stop(&host->sg_miter); +} + +static void alcor_prepare_sg_miter(struct alcor_sdmmc_host *host) +{ + unsigned int flags = SG_MITER_ATOMIC; + struct mmc_data *data = host->data; + + if (data->flags & MMC_DATA_READ) + flags |= SG_MITER_TO_SG; + else + flags |= SG_MITER_FROM_SG; + sg_miter_start(&host->sg_miter, data->sg, data->sg_len, flags); +} + +static void alcor_prepare_data(struct alcor_sdmmc_host *host, + struct mmc_command *cmd) +{ + struct mmc_data *data = cmd->data; + + if (!data) + return; + + + host->data = data; + host->data->bytes_xfered = 0; + host->blocks = data->blocks; + host->sg = data->sg; + host->sg_count = data->sg_count; + dev_dbg(host->dev, "prepare DATA: sg %i, blocks: %i\n", + host->sg_count, host->blocks); + + if (data->host_cookie != COOKIE_MAPPED) + alcor_prepare_sg_miter(host); + + alcor_trigger_data_transfer(host, true); +} + +static void alcor_send_cmd(struct alcor_sdmmc_host *host, + struct mmc_command *cmd, bool set_timeout) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + unsigned long timeout = 0; + u8 ctrl = 0; + + host->cmd = cmd; + alcor_prepare_data(host, cmd); + + dev_dbg(host->dev, "send CMD. opcode: 0x%02x, arg; 0x%08x\n", + cmd->opcode, cmd->arg); + alcor_write8(priv, cmd->opcode | 0x40, AU6601_REG_CMD_OPCODE); + alcor_write32be(priv, cmd->arg, AU6601_REG_CMD_ARG); + + switch (mmc_resp_type(cmd)) { + case MMC_RSP_NONE: + ctrl = AU6601_CMD_NO_RESP; + break; + case MMC_RSP_R1: + ctrl = AU6601_CMD_6_BYTE_CRC; + break; + case MMC_RSP_R1B: + ctrl = AU6601_CMD_6_BYTE_CRC | AU6601_CMD_STOP_WAIT_RDY; + break; + case MMC_RSP_R2: + ctrl = AU6601_CMD_17_BYTE_CRC; + break; + case MMC_RSP_R3: + ctrl = AU6601_CMD_6_BYTE_WO_CRC; + break; + default: + dev_err(host->dev, "%s: cmd->flag (0x%02x) is not valid\n", + mmc_hostname(host->mmc), mmc_resp_type(cmd)); + break; + } + + if (set_timeout) { + if (!cmd->data && cmd->busy_timeout) + timeout = cmd->busy_timeout; + else + timeout = 10000; + + schedule_delayed_work(&host->timeout_work, + msecs_to_jiffies(timeout)); + } + + dev_dbg(host->dev, "xfer ctrl: 0x%02x; timeout: %lu\n", ctrl, timeout); + alcor_write8(priv, ctrl | AU6601_CMD_START_XFER, + AU6601_CMD_XFER_CTRL); +} + +static void alcor_request_complete(struct alcor_sdmmc_host *host, + bool cancel_timeout) +{ + struct mmc_request *mrq; + + /* + * If this work gets rescheduled while running, it will + * be run again afterwards but without any active request. 
+ */ + if (!host->mrq) + return; + + if (cancel_timeout) + cancel_delayed_work(&host->timeout_work); + + mrq = host->mrq; + + host->mrq = NULL; + host->cmd = NULL; + host->data = NULL; + host->dma_on = 0; + + mmc_request_done(host->mmc, mrq); +} + +static void alcor_finish_data(struct alcor_sdmmc_host *host) +{ + struct mmc_data *data; + + data = host->data; + host->data = NULL; + host->dma_on = 0; + + /* + * The specification states that the block count register must + * be updated, but it does not specify at what point in the + * data flow. That makes the register entirely useless to read + * back so we have to assume that nothing made it to the card + * in the event of an error. + */ + if (data->error) + data->bytes_xfered = 0; + else + data->bytes_xfered = data->blksz * data->blocks; + + /* + * Need to send CMD12 if - + * a) open-ended multiblock transfer (no CMD23) + * b) error in multiblock transfer + */ + if (data->stop && + (data->error || + !host->mrq->sbc)) { + + /* + * The controller needs a reset of internal state machines + * upon error conditions. + */ + if (data->error) + alcor_reset(host, AU6601_RESET_CMD | AU6601_RESET_DATA); + + alcor_unmask_sd_irqs(host); + alcor_send_cmd(host, data->stop, false); + return; + } + + alcor_request_complete(host, 1); +} + +static void alcor_err_irq(struct alcor_sdmmc_host *host, u32 intmask) +{ + dev_dbg(host->dev, "ERR IRQ %x\n", intmask); + + if (host->cmd) { + if (intmask & AU6601_INT_CMD_TIMEOUT_ERR) + host->cmd->error = -ETIMEDOUT; + else + host->cmd->error = -EILSEQ; + } + + if (host->data) { + if (intmask & AU6601_INT_DATA_TIMEOUT_ERR) + host->data->error = -ETIMEDOUT; + else + host->data->error = -EILSEQ; + + host->data->bytes_xfered = 0; + } + + alcor_reset(host, AU6601_RESET_CMD | AU6601_RESET_DATA); + alcor_request_complete(host, 1); +} + +static int alcor_cmd_irq_done(struct alcor_sdmmc_host *host, u32 intmask) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + + intmask &= AU6601_INT_CMD_END; + + if (!intmask) + return true; + + /* got CMD_END but no CMD is in progress, wake thread an process the + * error + */ + if (!host->cmd) + return false; + + if (host->cmd->flags & MMC_RSP_PRESENT) { + struct mmc_command *cmd = host->cmd; + + cmd->resp[0] = alcor_read32be(priv, AU6601_REG_CMD_RSP0); + dev_dbg(host->dev, "RSP0: 0x%04x\n", cmd->resp[0]); + if (host->cmd->flags & MMC_RSP_136) { + cmd->resp[1] = + alcor_read32be(priv, AU6601_REG_CMD_RSP1); + cmd->resp[2] = + alcor_read32be(priv, AU6601_REG_CMD_RSP2); + cmd->resp[3] = + alcor_read32be(priv, AU6601_REG_CMD_RSP3); + dev_dbg(host->dev, "RSP1,2,3: 0x%04x 0x%04x 0x%04x\n", + cmd->resp[1], cmd->resp[2], cmd->resp[3]); + } + + } + + host->cmd->error = 0; + + /* Processed actual command. */ + if (!host->data) + return false; + + alcor_trigger_data_transfer(host, false); + host->cmd = NULL; + return true; +} + +static void alcor_cmd_irq_thread(struct alcor_sdmmc_host *host, u32 intmask) +{ + intmask &= AU6601_INT_CMD_END; + + if (!intmask) + return; + + if (!host->cmd && intmask & AU6601_INT_CMD_END) { + dev_dbg(host->dev, "Got command interrupt 0x%08x even though no command operation was in progress.\n", + intmask); + } + + /* Processed actual command. 
*/ + if (!host->data) + alcor_request_complete(host, 1); + else + alcor_trigger_data_transfer(host, false); + host->cmd = NULL; +} + +static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask) +{ + u32 tmp; + + intmask &= AU6601_INT_DATA_MASK; + + /* nothing here to do */ + if (!intmask) + return 1; + + /* we was too fast and got DATA_END after it was processed? + * lets ignore it for now. + */ + if (!host->data && intmask == AU6601_INT_DATA_END) + return 1; + + /* looks like an error, so lets handle it. */ + if (!host->data) + return 0; + + tmp = intmask & (AU6601_INT_READ_BUF_RDY | AU6601_INT_WRITE_BUF_RDY + | AU6601_INT_DMA_END); + switch (tmp) { + case 0: + break; + case AU6601_INT_READ_BUF_RDY: + alcor_trf_block_pio(host, true); + if (!host->blocks) + break; + alcor_trigger_data_transfer(host, false); + return 1; + case AU6601_INT_WRITE_BUF_RDY: + alcor_trf_block_pio(host, false); + if (!host->blocks) + break; + alcor_trigger_data_transfer(host, false); + return 1; + case AU6601_INT_DMA_END: + if (!host->sg_count) + break; + + alcor_data_set_dma(host); + break; + default: + dev_err(host->dev, "Got READ_BUF_RDY and WRITE_BUF_RDY at same time\n"); + break; + } + + if (intmask & AU6601_INT_DATA_END) + return 0; + + return 1; +} + +static void alcor_data_irq_thread(struct alcor_sdmmc_host *host, u32 intmask) +{ + intmask &= AU6601_INT_DATA_MASK; + + if (!intmask) + return; + + if (!host->data) { + dev_dbg(host->dev, "Got data interrupt 0x%08x even though no data operation was in progress.\n", + intmask); + alcor_reset(host, AU6601_RESET_DATA); + return; + } + + if (alcor_data_irq_done(host, intmask)) + return; + + if ((intmask & AU6601_INT_DATA_END) || !host->blocks || + (host->dma_on && !host->sg_count)) + alcor_finish_data(host); +} + +static void alcor_cd_irq(struct alcor_sdmmc_host *host, u32 intmask) +{ + dev_dbg(host->dev, "card %s\n", + intmask & AU6601_INT_CARD_REMOVE ? 
"removed" : "inserted"); + + if (host->mrq) { + dev_dbg(host->dev, "cancel all pending tasks.\n"); + + if (host->data) + host->data->error = -ENOMEDIUM; + + if (host->cmd) + host->cmd->error = -ENOMEDIUM; + else + host->mrq->cmd->error = -ENOMEDIUM; + + alcor_request_complete(host, 1); + } + + mmc_detect_change(host->mmc, msecs_to_jiffies(1)); +} + +static irqreturn_t alcor_irq_thread(int irq, void *d) +{ + struct alcor_sdmmc_host *host = d; + irqreturn_t ret = IRQ_HANDLED; + u32 intmask, tmp; + + mutex_lock(&host->cmd_mutex); + + intmask = host->irq_status_sd; + + /* some thing bad */ + if (unlikely(!intmask || AU6601_INT_ALL_MASK == intmask)) { + dev_dbg(host->dev, "unexpected IRQ: 0x%04x\n", intmask); + ret = IRQ_NONE; + goto exit; + } + + tmp = intmask & (AU6601_INT_CMD_MASK | AU6601_INT_DATA_MASK); + if (tmp) { + if (tmp & AU6601_INT_ERROR_MASK) + alcor_err_irq(host, tmp); + else { + alcor_cmd_irq_thread(host, tmp); + alcor_data_irq_thread(host, tmp); + } + intmask &= ~(AU6601_INT_CMD_MASK | AU6601_INT_DATA_MASK); + } + + if (intmask & (AU6601_INT_CARD_INSERT | AU6601_INT_CARD_REMOVE)) { + alcor_cd_irq(host, intmask); + intmask &= ~(AU6601_INT_CARD_INSERT | AU6601_INT_CARD_REMOVE); + } + + if (intmask & AU6601_INT_OVER_CURRENT_ERR) { + dev_warn(host->dev, + "warning: over current detected!\n"); + intmask &= ~AU6601_INT_OVER_CURRENT_ERR; + } + + if (intmask) + dev_dbg(host->dev, "got not handled IRQ: 0x%04x\n", intmask); + +exit: + mutex_unlock(&host->cmd_mutex); + alcor_unmask_sd_irqs(host); + return ret; +} + + +static irqreturn_t alcor_irq(int irq, void *d) +{ + struct alcor_sdmmc_host *host = d; + struct alcor_pci_priv *priv = host->alcor_pci; + u32 status, tmp; + irqreturn_t ret; + int cmd_done, data_done; + + status = alcor_read32(priv, AU6601_REG_INT_STATUS); + if (!status) + return IRQ_NONE; + + alcor_write32(priv, status, AU6601_REG_INT_STATUS); + + tmp = status & (AU6601_INT_READ_BUF_RDY | AU6601_INT_WRITE_BUF_RDY + | AU6601_INT_DATA_END | AU6601_INT_DMA_END + | AU6601_INT_CMD_END); + if (tmp == status) { + cmd_done = alcor_cmd_irq_done(host, tmp); + data_done = alcor_data_irq_done(host, tmp); + /* use fast path for simple tasks */ + if (cmd_done && data_done) { + ret = IRQ_HANDLED; + goto alcor_irq_done; + } + } + + host->irq_status_sd = status; + ret = IRQ_WAKE_THREAD; + alcor_mask_sd_irqs(host); +alcor_irq_done: + return ret; +} + +static void alcor_set_clock(struct alcor_sdmmc_host *host, unsigned int clock) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + int i, diff = 0x7fffffff, tmp_clock = 0; + u16 clk_src = 0; + u8 clk_div = 0; + + if (clock == 0) { + alcor_write16(priv, 0, AU6601_CLK_SELECT); + return; + } + + for (i = 0; i < ARRAY_SIZE(alcor_pll_cfg); i++) { + unsigned int tmp_div, tmp_diff; + const struct alcor_pll_conf *cfg = &alcor_pll_cfg[i]; + + tmp_div = DIV_ROUND_UP(cfg->clk_src_freq, clock); + if (cfg->min_div > tmp_div || tmp_div > cfg->max_div) + continue; + + tmp_clock = DIV_ROUND_UP(cfg->clk_src_freq, tmp_div); + tmp_diff = abs(clock - tmp_clock); + + if (tmp_diff >= 0 && tmp_diff < diff) { + diff = tmp_diff; + clk_src = cfg->clk_src_reg; + clk_div = tmp_div; + } + } + + clk_src |= ((clk_div - 1) << 8); + clk_src |= AU6601_CLK_ENABLE; + + dev_dbg(host->dev, "set freq %d cal freq %d, use div %d, mod %x\n", + clock, tmp_clock, clk_div, clk_src); + + alcor_write16(priv, clk_src, AU6601_CLK_SELECT); + +} + +static void alcor_set_timing(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + + if (ios->timing == 
MMC_TIMING_LEGACY) { + alcor_rmw8(host, AU6601_CLK_DELAY, + AU6601_CLK_POSITIVE_EDGE_ALL, 0); + } else { + alcor_rmw8(host, AU6601_CLK_DELAY, + 0, AU6601_CLK_POSITIVE_EDGE_ALL); + } +} + +static void alcor_set_bus_width(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + struct alcor_pci_priv *priv = host->alcor_pci; + + if (ios->bus_width == MMC_BUS_WIDTH_1) { + alcor_write8(priv, 0, AU6601_REG_BUS_CTRL); + } else if (ios->bus_width == MMC_BUS_WIDTH_4) { + alcor_write8(priv, AU6601_BUS_WIDTH_4BIT, + AU6601_REG_BUS_CTRL); + } else + dev_err(host->dev, "Unknown BUS mode\n"); + +} + +static int alcor_card_busy(struct mmc_host *mmc) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + struct alcor_pci_priv *priv = host->alcor_pci; + u8 status; + + /* Check whether dat[0:3] low */ + status = alcor_read8(priv, AU6601_DATA_PIN_STATE); + + return !(status & AU6601_BUS_STAT_DAT_MASK); +} + +static int alcor_get_cd(struct mmc_host *mmc) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + struct alcor_pci_priv *priv = host->alcor_pci; + u8 detect; + + detect = alcor_read8(priv, AU6601_DETECT_STATUS) + & AU6601_DETECT_STATUS_M; + /* check if card is present then send command and data */ + return (detect == AU6601_SD_DETECTED); +} + +static int alcor_get_ro(struct mmc_host *mmc) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + struct alcor_pci_priv *priv = host->alcor_pci; + u8 status; + + /* get write protect pin status */ + status = alcor_read8(priv, AU6601_INTERFACE_MODE_CTRL); + + return !!(status & AU6601_SD_CARD_WP); +} + +static void alcor_request(struct mmc_host *mmc, struct mmc_request *mrq) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + + mutex_lock(&host->cmd_mutex); + + host->mrq = mrq; + + /* check if card is present then send command and data */ + if (alcor_get_cd(mmc)) + alcor_send_cmd(host, mrq->cmd, true); + else { + mrq->cmd->error = -ENOMEDIUM; + alcor_request_complete(host, 1); + } + + mutex_unlock(&host->cmd_mutex); +} + +static void alcor_pre_req(struct mmc_host *mmc, + struct mmc_request *mrq) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + struct mmc_data *data = mrq->data; + struct mmc_command *cmd = mrq->cmd; + struct scatterlist *sg; + unsigned int i, sg_len; + + if (!data || !cmd) + return; + + data->host_cookie = COOKIE_UNMAPPED; + + /* FIXME: looks like the DMA engine works only with CMD18 */ + if (cmd->opcode != 18) + return; + /* + * We don't do DMA on "complex" transfers, i.e. with + * non-word-aligned buffers or lengths. Also, we don't bother + * with all the DMA setup overhead for short transfers. 
+ */ + if (data->blocks * data->blksz < AU6601_MAX_DMA_BLOCK_SIZE) + return; + + if (data->blksz & 3) + return; + + for_each_sg(data->sg, sg, data->sg_len, i) { + if (sg->length != AU6601_MAX_DMA_BLOCK_SIZE) + return; + } + + /* This data might be unmapped at this time */ + + sg_len = dma_map_sg(host->dev, data->sg, data->sg_len, + mmc_get_dma_dir(data)); + if (sg_len) + data->host_cookie = COOKIE_MAPPED; + + data->sg_count = sg_len; +} + +static void alcor_post_req(struct mmc_host *mmc, + struct mmc_request *mrq, + int err) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + struct mmc_data *data = mrq->data; + + if (!data) + return; + + if (data->host_cookie == COOKIE_MAPPED) { + dma_unmap_sg(host->dev, + data->sg, + data->sg_len, + mmc_get_dma_dir(data)); + } + + data->host_cookie = COOKIE_UNMAPPED; +} + +static void alcor_set_power_mode(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + struct alcor_pci_priv *priv = host->alcor_pci; + + switch (ios->power_mode) { + case MMC_POWER_OFF: + alcor_set_clock(host, ios->clock); + /* set all pins to input */ + alcor_write8(priv, 0, AU6601_OUTPUT_ENABLE); + /* turn of VDD */ + alcor_write8(priv, 0, AU6601_POWER_CONTROL); + break; + case MMC_POWER_UP: + break; + case MMC_POWER_ON: + /* This is most trickiest part. The order and timings of + * instructions seems to play important role. Any changes may + * confuse internal state engine if this HW. + * FIXME: If we will ever get access to documentation, then this + * part should be reviewed again. + */ + + /* enable SD card mode */ + alcor_write8(priv, AU6601_SD_CARD, + AU6601_ACTIVE_CTRL); + /* set signal voltage to 3.3V */ + alcor_write8(priv, 0, AU6601_OPT); + /* no documentation about clk delay, for now just try to mimic + * original driver. + */ + alcor_write8(priv, 0x20, AU6601_CLK_DELAY); + /* set BUS width to 1 bit */ + alcor_write8(priv, 0, AU6601_REG_BUS_CTRL); + /* set CLK first time */ + alcor_set_clock(host, ios->clock); + /* power on VDD */ + alcor_write8(priv, AU6601_SD_CARD, + AU6601_POWER_CONTROL); + /* wait until the CLK will get stable */ + mdelay(20); + /* set CLK again, mimic original driver. */ + alcor_set_clock(host, ios->clock); + + /* enable output */ + alcor_write8(priv, AU6601_SD_CARD, + AU6601_OUTPUT_ENABLE); + /* The clk will not work on au6621. We need to trigger data + * transfer. + */ + alcor_write8(priv, AU6601_DATA_WRITE, + AU6601_DATA_XFER_CTRL); + /* configure timeout. Not clear what exactly it means. */ + alcor_write8(priv, 0x7d, AU6601_TIME_OUT_CTRL); + mdelay(100); + break; + default: + dev_err(host->dev, "Unknown power parameter\n"); + } +} + +static void alcor_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + + mutex_lock(&host->cmd_mutex); + + dev_dbg(host->dev, "set ios. 
bus width: %x, power mode: %x\n", + ios->bus_width, ios->power_mode); + + if (ios->power_mode != host->cur_power_mode) { + alcor_set_power_mode(mmc, ios); + host->cur_power_mode = ios->power_mode; + } else { + alcor_set_timing(mmc, ios); + alcor_set_bus_width(mmc, ios); + alcor_set_clock(host, ios->clock); + } + + mutex_unlock(&host->cmd_mutex); +} + +static int alcor_signal_voltage_switch(struct mmc_host *mmc, + struct mmc_ios *ios) +{ + struct alcor_sdmmc_host *host = mmc_priv(mmc); + + mutex_lock(&host->cmd_mutex); + + switch (ios->signal_voltage) { + case MMC_SIGNAL_VOLTAGE_330: + alcor_rmw8(host, AU6601_OPT, AU6601_OPT_SD_18V, 0); + break; + case MMC_SIGNAL_VOLTAGE_180: + alcor_rmw8(host, AU6601_OPT, 0, AU6601_OPT_SD_18V); + break; + default: + /* No signal voltage switch required */ + break; + } + + mutex_unlock(&host->cmd_mutex); + return 0; +} + +static const struct mmc_host_ops alcor_sdc_ops = { + .card_busy = alcor_card_busy, + .get_cd = alcor_get_cd, + .get_ro = alcor_get_ro, + .post_req = alcor_post_req, + .pre_req = alcor_pre_req, + .request = alcor_request, + .set_ios = alcor_set_ios, + .start_signal_voltage_switch = alcor_signal_voltage_switch, +}; + +static void alcor_timeout_timer(struct work_struct *work) +{ + struct delayed_work *d = to_delayed_work(work); + struct alcor_sdmmc_host *host = container_of(d, struct alcor_sdmmc_host, + timeout_work); + mutex_lock(&host->cmd_mutex); + + dev_dbg(host->dev, "triggered timeout\n"); + if (host->mrq) { + dev_err(host->dev, "Timeout waiting for hardware interrupt.\n"); + + if (host->data) { + host->data->error = -ETIMEDOUT; + } else { + if (host->cmd) + host->cmd->error = -ETIMEDOUT; + else + host->mrq->cmd->error = -ETIMEDOUT; + } + + alcor_reset(host, AU6601_RESET_CMD | AU6601_RESET_DATA); + alcor_request_complete(host, 0); + } + + mmiowb(); + mutex_unlock(&host->cmd_mutex); +} + +static void alcor_hw_init(struct alcor_sdmmc_host *host) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + struct alcor_dev_cfg *cfg = priv->cfg; + + /* FIXME: This part is a mimics HW init of original driver. + * If we will ever get access to documentation, then this part + * should be reviewed again. + */ + + /* reset command state engine */ + alcor_reset(host, AU6601_RESET_CMD); + + alcor_write8(priv, 0, AU6601_DMA_BOUNDARY); + /* enable sd card mode */ + alcor_write8(priv, AU6601_SD_CARD, AU6601_ACTIVE_CTRL); + + /* set BUS width to 1 bit */ + alcor_write8(priv, 0, AU6601_REG_BUS_CTRL); + + /* reset data state engine */ + alcor_reset(host, AU6601_RESET_DATA); + /* Not sure if a voodoo with AU6601_DMA_BOUNDARY is really needed */ + alcor_write8(priv, 0, AU6601_DMA_BOUNDARY); + + alcor_write8(priv, 0, AU6601_INTERFACE_MODE_CTRL); + /* not clear what we are doing here. */ + alcor_write8(priv, 0x44, AU6601_PAD_DRIVE0); + alcor_write8(priv, 0x44, AU6601_PAD_DRIVE1); + alcor_write8(priv, 0x00, AU6601_PAD_DRIVE2); + + /* for 6601 - dma_boundary; for 6621 - dma_page_cnt + * exact meaning of this register is not clear. 
+ */ + alcor_write8(priv, cfg->dma, AU6601_DMA_BOUNDARY); + + /* make sure all pins are set to input and VDD is off */ + alcor_write8(priv, 0, AU6601_OUTPUT_ENABLE); + alcor_write8(priv, 0, AU6601_POWER_CONTROL); + + alcor_write8(priv, AU6601_DETECT_EN, AU6601_DETECT_STATUS); + /* now we should be safe to enable IRQs */ + alcor_unmask_sd_irqs(host); +} + +static void alcor_hw_uninit(struct alcor_sdmmc_host *host) +{ + struct alcor_pci_priv *priv = host->alcor_pci; + + alcor_mask_sd_irqs(host); + alcor_reset(host, AU6601_RESET_CMD | AU6601_RESET_DATA); + + alcor_write8(priv, 0, AU6601_DETECT_STATUS); + + alcor_write8(priv, 0, AU6601_OUTPUT_ENABLE); + alcor_write8(priv, 0, AU6601_POWER_CONTROL); + + alcor_write8(priv, 0, AU6601_OPT); +} + +static void alcor_init_mmc(struct alcor_sdmmc_host *host) +{ + struct mmc_host *mmc = host->mmc; + + mmc->f_min = AU6601_MIN_CLOCK; + mmc->f_max = AU6601_MAX_CLOCK; + mmc->ocr_avail = MMC_VDD_33_34; + mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SD_HIGHSPEED + | MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | MMC_CAP_UHS_SDR50 + | MMC_CAP_UHS_SDR104 | MMC_CAP_UHS_DDR50; + mmc->caps2 = MMC_CAP2_NO_SDIO; + mmc->ops = &alcor_sdc_ops; + + /* Hardware cannot do scatter lists */ + mmc->max_segs = AU6601_MAX_DMA_SEGMENTS; + mmc->max_seg_size = AU6601_MAX_DMA_BLOCK_SIZE; + + mmc->max_blk_size = mmc->max_seg_size; + mmc->max_blk_count = mmc->max_segs; + + mmc->max_req_size = mmc->max_seg_size * mmc->max_segs; +} + +static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev) +{ + struct alcor_pci_priv *priv = pdev->dev.platform_data; + struct mmc_host *mmc; + struct alcor_sdmmc_host *host; + int ret; + + mmc = mmc_alloc_host(sizeof(*host), &pdev->dev); + if (!mmc) { + dev_err(&pdev->dev, "Can't allocate MMC\n"); + return -ENOMEM; + } + + host = mmc_priv(mmc); + host->mmc = mmc; + host->dev = &pdev->dev; + host->cur_power_mode = MMC_POWER_UNDEFINED; + host->alcor_pci = priv; + + /* make sure irqs are disabled */ + alcor_write32(priv, 0, AU6601_REG_INT_ENABLE); + alcor_write32(priv, 0, AU6601_MS_INT_ENABLE); + + ret = devm_request_threaded_irq(&pdev->dev, priv->irq, + alcor_irq, alcor_irq_thread, IRQF_SHARED, + DRV_NAME_ALCOR_PCI_SDMMC, host); + + if (ret) { + dev_err(&pdev->dev, "Failed to get irq for data line\n"); + return ret; + } + + mutex_init(&host->cmd_mutex); + INIT_DELAYED_WORK(&host->timeout_work, alcor_timeout_timer); + + alcor_init_mmc(host); + alcor_hw_init(host); + + dev_set_drvdata(&pdev->dev, host); + mmc_add_host(mmc); + return 0; +} + +static int alcor_pci_sdmmc_drv_remove(struct platform_device *pdev) +{ + struct alcor_sdmmc_host *host = dev_get_drvdata(&pdev->dev); + + if (cancel_delayed_work_sync(&host->timeout_work)) + alcor_request_complete(host, 0); + + alcor_hw_uninit(host); + mmc_remove_host(host->mmc); + mmc_free_host(host->mmc); + + return 0; +} + +#ifdef CONFIG_PM_SLEEP +static int alcor_pci_sdmmc_suspend(struct device *dev) +{ + struct alcor_sdmmc_host *host = dev_get_drvdata(dev); + + if (cancel_delayed_work_sync(&host->timeout_work)) + alcor_request_complete(host, 0); + + alcor_hw_uninit(host); + + return 0; +} + +static int alcor_pci_sdmmc_resume(struct device *dev) +{ + struct alcor_sdmmc_host *host = dev_get_drvdata(dev); + + alcor_hw_init(host); + + return 0; +} +#endif /* CONFIG_PM_SLEEP */ + +static SIMPLE_DEV_PM_OPS(alcor_mmc_pm_ops, alcor_pci_sdmmc_suspend, + alcor_pci_sdmmc_resume); + +static const struct platform_device_id alcor_pci_sdmmc_ids[] = { + { + .name = DRV_NAME_ALCOR_PCI_SDMMC, + }, { + /* sentinel */ + } +}; 
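+/*
+ * The empty entry above terminates the id table. MODULE_DEVICE_TABLE()
+ * exports it so that userspace can autoload this module when the alcor_pci
+ * core registers the matching platform device.
+ */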
+MODULE_DEVICE_TABLE(platform, alcor_pci_sdmmc_ids); + +static struct platform_driver alcor_pci_sdmmc_driver = { + .probe = alcor_pci_sdmmc_drv_probe, + .remove = alcor_pci_sdmmc_drv_remove, + .id_table = alcor_pci_sdmmc_ids, + .driver = { + .name = DRV_NAME_ALCOR_PCI_SDMMC, + .pm = &alcor_mmc_pm_ops + }, +}; +module_platform_driver(alcor_pci_sdmmc_driver); + +MODULE_AUTHOR("Oleksij Rempel "); +MODULE_DESCRIPTION("PCI driver for Alcor Micro AU6601 Secure Digital Host Controller Interface"); +MODULE_LICENSE("GPL"); diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c index be53044086c7..47189f9ed4e2 100644 --- a/drivers/mmc/host/atmel-mci.c +++ b/drivers/mmc/host/atmel-mci.c @@ -446,18 +446,7 @@ static int atmci_req_show(struct seq_file *s, void *v) return 0; } -static int atmci_req_open(struct inode *inode, struct file *file) -{ - return single_open(file, atmci_req_show, inode->i_private); -} - -static const struct file_operations atmci_req_fops = { - .owner = THIS_MODULE, - .open = atmci_req_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(atmci_req); static void atmci_show_status_reg(struct seq_file *s, const char *regname, u32 value) @@ -583,18 +572,7 @@ static int atmci_regs_show(struct seq_file *s, void *v) return ret; } -static int atmci_regs_open(struct inode *inode, struct file *file) -{ - return single_open(file, atmci_regs_show, inode->i_private); -} - -static const struct file_operations atmci_regs_fops = { - .owner = THIS_MODULE, - .open = atmci_regs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(atmci_regs); static void atmci_init_debugfs(struct atmel_mci_slot *slot) { @@ -608,13 +586,14 @@ static void atmci_init_debugfs(struct atmel_mci_slot *slot) return; node = debugfs_create_file("regs", S_IRUSR, root, host, - &atmci_regs_fops); + &atmci_regs_fops); if (IS_ERR(node)) return; if (!node) goto err; - node = debugfs_create_file("req", S_IRUSR, root, slot, &atmci_req_fops); + node = debugfs_create_file("req", S_IRUSR, root, slot, + &atmci_req_fops); if (!node) goto err; @@ -1954,13 +1933,14 @@ static void atmci_tasklet_func(unsigned long priv) } atmci_request_end(host, host->mrq); - state = STATE_IDLE; + goto unlock; /* atmci_request_end() sets host->state */ break; } } while (state != prev_state); host->state = state; +unlock: spin_unlock(&host->lock); } diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c index 768972af8b85..50293529d6de 100644 --- a/drivers/mmc/host/bcm2835.c +++ b/drivers/mmc/host/bcm2835.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * bcm2835 sdhost driver. * @@ -25,18 +26,6 @@ * sdhci-bcm2708.c by Broadcom * sdhci-bcm2835.c by Stephen Warren and Oleksandr Tymoshenko * sdhci.c and sdhci-pci.c by Pierre Ossman - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * - * You should have received a copy of the GNU General Public License - * along with this program. If not, see . 
*/ #include #include @@ -286,6 +275,7 @@ static void bcm2835_reset(struct mmc_host *mmc) if (host->dma_chan) dmaengine_terminate_sync(host->dma_chan); + host->dma_chan = NULL; bcm2835_reset_internal(host); } @@ -463,7 +453,7 @@ static void bcm2835_transfer_pio(struct bcm2835_host *host) static void bcm2835_prepare_dma(struct bcm2835_host *host, struct mmc_data *data) { - int len, dir_data, dir_slave; + int sg_len, dir_data, dir_slave; struct dma_async_tx_descriptor *desc = NULL; struct dma_chan *dma_chan; @@ -509,23 +499,24 @@ void bcm2835_prepare_dma(struct bcm2835_host *host, struct mmc_data *data) &host->dma_cfg_rx : &host->dma_cfg_tx); - len = dma_map_sg(dma_chan->device->dev, data->sg, data->sg_len, - dir_data); + sg_len = dma_map_sg(dma_chan->device->dev, data->sg, data->sg_len, + dir_data); + if (!sg_len) + return; - if (len > 0) { - desc = dmaengine_prep_slave_sg(dma_chan, data->sg, - len, dir_slave, - DMA_PREP_INTERRUPT | - DMA_CTRL_ACK); - } + desc = dmaengine_prep_slave_sg(dma_chan, data->sg, sg_len, dir_slave, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); - if (desc) { - desc->callback = bcm2835_dma_complete; - desc->callback_param = host; - host->dma_desc = desc; - host->dma_chan = dma_chan; - host->dma_dir = dir_data; + if (!desc) { + dma_unmap_sg(dma_chan->device->dev, data->sg, sg_len, dir_data); + return; } + + desc->callback = bcm2835_dma_complete; + desc->callback_param = host; + host->dma_desc = desc; + host->dma_chan = dma_chan; + host->dma_dir = dir_data; } static void bcm2835_start_dma(struct bcm2835_host *host) @@ -607,7 +598,7 @@ static void bcm2835_finish_request(struct bcm2835_host *host) struct dma_chan *terminate_chan = NULL; struct mmc_request *mrq; - cancel_delayed_work(&host->timeout_work); + cancel_delayed_work_sync(&host->timeout_work); mrq = host->mrq; @@ -772,6 +763,8 @@ static void bcm2835_finish_command(struct bcm2835_host *host) if (!(sdhsts & SDHSTS_CRC7_ERROR) || (host->cmd->opcode != MMC_SEND_OP_COND)) { + u32 edm, fsm; + if (sdhsts & SDHSTS_CMD_TIME_OUT) { host->cmd->error = -ETIMEDOUT; } else { @@ -780,6 +773,13 @@ static void bcm2835_finish_command(struct bcm2835_host *host) bcm2835_dumpregs(host); host->cmd->error = -EILSEQ; } + edm = readl(host->ioaddr + SDEDM); + fsm = edm & SDEDM_FSM_MASK; + if (fsm == SDEDM_FSM_READWAIT || + fsm == SDEDM_FSM_WRITESTART1) + /* Kick the FSM out of its wait */ + writel(edm | SDEDM_FORCE_DATA_MODE, + host->ioaddr + SDEDM); bcm2835_finish_request(host); return; } @@ -837,6 +837,8 @@ static void bcm2835_timeout(struct work_struct *work) dev_err(dev, "timeout waiting for hardware interrupt.\n"); bcm2835_dumpregs(host); + bcm2835_reset(host->mmc); + if (host->data) { host->data->error = -ETIMEDOUT; bcm2835_finish_data(host); @@ -1052,10 +1054,12 @@ static void bcm2835_dma_complete_work(struct work_struct *work) { struct bcm2835_host *host = container_of(work, struct bcm2835_host, dma_work); - struct mmc_data *data = host->data; + struct mmc_data *data; mutex_lock(&host->mutex); + data = host->data; + if (host->dma_chan) { dma_unmap_sg(host->dma_chan->device->dev, data->sg, data->sg_len, @@ -1180,9 +1184,6 @@ static void bcm2835_request(struct mmc_host *mmc, struct mmc_request *mrq) return; } - if (host->use_dma && mrq->data && (mrq->data->blocks > PIO_THRESHOLD)) - bcm2835_prepare_dma(host, mrq->data); - mutex_lock(&host->mutex); WARN_ON(host->mrq); @@ -1206,6 +1207,9 @@ static void bcm2835_request(struct mmc_host *mmc, struct mmc_request *mrq) return; } + if (host->use_dma && mrq->data && (mrq->data->blocks > 
PIO_THRESHOLD))
+		bcm2835_prepare_dma(host, mrq->data);
+
 	host->use_sbc = !!mrq->sbc && host->mrq->data &&
 			(host->mrq->data->flags & MMC_DATA_READ);
 	if (host->use_sbc) {
@@ -1445,6 +1449,9 @@ static int bcm2835_remove(struct platform_device *pdev)
 	cancel_work_sync(&host->dma_work);
 	cancel_delayed_work_sync(&host->timeout_work);
 
+	if (host->dma_chan_rxtx)
+		dma_release_channel(host->dma_chan_rxtx);
+
 	mmc_free_host(host->mmc);
 	platform_set_drvdata(pdev, NULL);
diff --git a/drivers/mmc/host/dw_mmc-bluefield.c b/drivers/mmc/host/dw_mmc-bluefield.c
index 54c3fbb4a391..ed8f2254b66a 100644
--- a/drivers/mmc/host/dw_mmc-bluefield.c
+++ b/drivers/mmc/host/dw_mmc-bluefield.c
@@ -52,16 +52,7 @@ MODULE_DEVICE_TABLE(of, dw_mci_bluefield_match);
 
 static int dw_mci_bluefield_probe(struct platform_device *pdev)
 {
-	const struct dw_mci_drv_data *drv_data = NULL;
-	const struct of_device_id *match;
-
-	if (pdev->dev.of_node) {
-		match = of_match_node(dw_mci_bluefield_match,
-				      pdev->dev.of_node);
-		drv_data = match->data;
-	}
-
-	return dw_mci_pltfm_register(pdev, drv_data);
+	return dw_mci_pltfm_register(pdev, &bluefield_drv_data);
 }
 
 static struct platform_driver dw_mci_bluefield_pltfm_driver = {
diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c
index 0c1efd5100b7..33215d66afa2 100644
--- a/drivers/mmc/host/jz4740_mmc.c
+++ b/drivers/mmc/host/jz4740_mmc.c
@@ -21,7 +21,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -126,9 +126,23 @@ enum jz4740_mmc_state {
 	JZ4740_MMC_STATE_DONE,
 };
 
-struct jz4740_mmc_host_next {
-	int sg_len;
-	s32 cookie;
+/*
+ * The MMC core allows a mmc_request to be prepared while another
+ * mmc_request is in flight. This is used via the pre_req/post_req hooks.
+ * This driver uses the pre_req/post_req hooks to map/unmap the mmc_request.
+ * Following what other drivers do (sdhci, dw_mmc), we use the following
+ * cookie values to keep track of the mmc_request mapping state.
+ *
+ * COOKIE_UNMAPPED: the request is not mapped.
+ * COOKIE_PREMAPPED: the request was mapped in pre_req,
+ * and should be unmapped in post_req.
+ * COOKIE_MAPPED: the request was mapped in the irq handler,
+ * and should be unmapped before mmc_request_done is called.
+ */
+enum jz4780_cookie {
+	COOKIE_UNMAPPED = 0,
+	COOKIE_PREMAPPED,
+	COOKIE_MAPPED,
 };
 
 struct jz4740_mmc_host {
@@ -136,6 +150,7 @@ struct jz4740_mmc_host {
 	struct platform_device *pdev;
 	struct jz4740_mmc_platform_data *pdata;
 	struct clk *clk;
+	struct gpio_desc *power;
 
 	enum jz4740_mmc_version version;
 
@@ -162,9 +177,7 @@ struct jz4740_mmc_host {
 	/* DMA support */
 	struct dma_chan *dma_rx;
 	struct dma_chan *dma_tx;
-	struct jz4740_mmc_host_next next_data;
 	bool use_dma;
-	int sg_len;
 
 	/* The DMA trigger level is 8 words, that is to say, the DMA read
 	 * trigger is when data words in MSC_RXFIFO is >= 8 and the DMA write
@@ -226,9 +239,6 @@ static int jz4740_mmc_acquire_dma_channels(struct jz4740_mmc_host *host)
 		return PTR_ERR(host->dma_rx);
 	}
 
-	/* Initialize DMA pre request cookie */
-	host->next_data.cookie = 1;
-
 	return 0;
 }
 
@@ -245,60 +255,44 @@ static void jz4740_mmc_dma_unmap(struct jz4740_mmc_host *host,
 	enum dma_data_direction dir = mmc_get_dma_dir(data);
 
 	dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
+	data->host_cookie = COOKIE_UNMAPPED;
 }
 
-/* Prepares DMA data for current/next transfer, returns non-zero on failure */
+/* Prepares DMA data for the current or next transfer.
+ * A request can be in flight when this is called.
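+ *
+ * Returns the number of mapped segments on success, or a negative error
+ * code if the scatterlist could not be mapped. A COOKIE_PREMAPPED request
+ * is returned as-is, so the mapping done in pre_req is reused.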
+ */ static int jz4740_mmc_prepare_dma_data(struct jz4740_mmc_host *host, struct mmc_data *data, - struct jz4740_mmc_host_next *next, - struct dma_chan *chan) + int cookie) { - struct jz4740_mmc_host_next *next_data = &host->next_data; + struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data); enum dma_data_direction dir = mmc_get_dma_dir(data); - int sg_len; - - if (!next && data->host_cookie && - data->host_cookie != host->next_data.cookie) { - dev_warn(mmc_dev(host->mmc), - "[%s] invalid cookie: data->host_cookie %d host->next_data.cookie %d\n", - __func__, - data->host_cookie, - host->next_data.cookie); - data->host_cookie = 0; - } + int sg_count; - /* Check if next job is already prepared */ - if (next || data->host_cookie != host->next_data.cookie) { - sg_len = dma_map_sg(chan->device->dev, - data->sg, - data->sg_len, - dir); + if (data->host_cookie == COOKIE_PREMAPPED) + return data->sg_count; - } else { - sg_len = next_data->sg_len; - next_data->sg_len = 0; - } + sg_count = dma_map_sg(chan->device->dev, + data->sg, + data->sg_len, + dir); - if (sg_len <= 0) { + if (sg_count <= 0) { dev_err(mmc_dev(host->mmc), "Failed to map scatterlist for DMA operation\n"); return -EINVAL; } - if (next) { - next->sg_len = sg_len; - data->host_cookie = ++next->cookie < 0 ? 1 : next->cookie; - } else - host->sg_len = sg_len; + data->sg_count = sg_count; + data->host_cookie = cookie; - return 0; + return data->sg_count; } static int jz4740_mmc_start_dma_transfer(struct jz4740_mmc_host *host, struct mmc_data *data) { - int ret; - struct dma_chan *chan; + struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data); struct dma_async_tx_descriptor *desc; struct dma_slave_config conf = { .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES, @@ -306,29 +300,26 @@ static int jz4740_mmc_start_dma_transfer(struct jz4740_mmc_host *host, .src_maxburst = JZ4740_MMC_FIFO_HALF_SIZE, .dst_maxburst = JZ4740_MMC_FIFO_HALF_SIZE, }; + int sg_count; if (data->flags & MMC_DATA_WRITE) { conf.direction = DMA_MEM_TO_DEV; conf.dst_addr = host->mem_res->start + JZ_REG_MMC_TXFIFO; conf.slave_id = JZ4740_DMA_TYPE_MMC_TRANSMIT; - chan = host->dma_tx; } else { conf.direction = DMA_DEV_TO_MEM; conf.src_addr = host->mem_res->start + JZ_REG_MMC_RXFIFO; conf.slave_id = JZ4740_DMA_TYPE_MMC_RECEIVE; - chan = host->dma_rx; } - ret = jz4740_mmc_prepare_dma_data(host, data, NULL, chan); - if (ret) - return ret; + sg_count = jz4740_mmc_prepare_dma_data(host, data, COOKIE_MAPPED); + if (sg_count < 0) + return sg_count; dmaengine_slave_config(chan, &conf); - desc = dmaengine_prep_slave_sg(chan, - data->sg, - host->sg_len, - conf.direction, - DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + desc = dmaengine_prep_slave_sg(chan, data->sg, sg_count, + conf.direction, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); if (!desc) { dev_err(mmc_dev(host->mmc), "Failed to allocate DMA %s descriptor", @@ -342,7 +333,8 @@ static int jz4740_mmc_start_dma_transfer(struct jz4740_mmc_host *host, return 0; dma_unmap: - jz4740_mmc_dma_unmap(host, data); + if (data->host_cookie == COOKIE_MAPPED) + jz4740_mmc_dma_unmap(host, data); return -ENOMEM; } @@ -351,16 +343,13 @@ static void jz4740_mmc_pre_request(struct mmc_host *mmc, { struct jz4740_mmc_host *host = mmc_priv(mmc); struct mmc_data *data = mrq->data; - struct jz4740_mmc_host_next *next_data = &host->next_data; - BUG_ON(data->host_cookie); - - if (host->use_dma) { - struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data); + if (!host->use_dma) + return; - if (jz4740_mmc_prepare_dma_data(host, data, next_data, chan)) - 
data->host_cookie = 0; - } + data->host_cookie = COOKIE_UNMAPPED; + if (jz4740_mmc_prepare_dma_data(host, data, COOKIE_PREMAPPED) < 0) + data->host_cookie = COOKIE_UNMAPPED; } static void jz4740_mmc_post_request(struct mmc_host *mmc, @@ -370,10 +359,8 @@ static void jz4740_mmc_post_request(struct mmc_host *mmc, struct jz4740_mmc_host *host = mmc_priv(mmc); struct mmc_data *data = mrq->data; - if (host->use_dma && data->host_cookie) { + if (data && data->host_cookie != COOKIE_UNMAPPED) jz4740_mmc_dma_unmap(host, data); - data->host_cookie = 0; - } if (err) { struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data); @@ -436,10 +423,14 @@ static void jz4740_mmc_reset(struct jz4740_mmc_host *host) static void jz4740_mmc_request_done(struct jz4740_mmc_host *host) { struct mmc_request *req; + struct mmc_data *data; req = host->req; + data = req->data; host->req = NULL; + if (data && data->host_cookie == COOKIE_MAPPED) + jz4740_mmc_dma_unmap(host, data); mmc_request_done(host->mmc, req); } @@ -903,18 +894,16 @@ static void jz4740_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) switch (ios->power_mode) { case MMC_POWER_UP: jz4740_mmc_reset(host); - if (host->pdata && gpio_is_valid(host->pdata->gpio_power)) - gpio_set_value(host->pdata->gpio_power, - !host->pdata->power_active_low); + if (host->power) + gpiod_set_value(host->power, 1); host->cmdat |= JZ_MMC_CMDAT_INIT; clk_prepare_enable(host->clk); break; case MMC_POWER_ON: break; default: - if (host->pdata && gpio_is_valid(host->pdata->gpio_power)) - gpio_set_value(host->pdata->gpio_power, - host->pdata->power_active_low); + if (host->power) + gpiod_set_value(host->power, 0); clk_disable_unprepare(host->clk); break; } @@ -947,30 +936,9 @@ static const struct mmc_host_ops jz4740_mmc_ops = { .enable_sdio_irq = jz4740_mmc_enable_sdio_irq, }; -static int jz4740_mmc_request_gpio(struct device *dev, int gpio, - const char *name, bool output, int value) -{ - int ret; - - if (!gpio_is_valid(gpio)) - return 0; - - ret = gpio_request(gpio, name); - if (ret) { - dev_err(dev, "Failed to request %s gpio: %d\n", name, ret); - return ret; - } - - if (output) - gpio_direction_output(gpio, value); - else - gpio_direction_input(gpio); - - return 0; -} - -static int jz4740_mmc_request_gpios(struct mmc_host *mmc, - struct platform_device *pdev) +static int jz4740_mmc_request_gpios(struct jz4740_mmc_host *host, + struct mmc_host *mmc, + struct platform_device *pdev) { struct jz4740_mmc_platform_data *pdata = dev_get_platdata(&pdev->dev); int ret = 0; @@ -983,31 +951,21 @@ static int jz4740_mmc_request_gpios(struct mmc_host *mmc, if (!pdata->read_only_active_low) mmc->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH; - if (gpio_is_valid(pdata->gpio_card_detect)) { - ret = mmc_gpio_request_cd(mmc, pdata->gpio_card_detect, 0); - if (ret) - return ret; - } - - if (gpio_is_valid(pdata->gpio_read_only)) { - ret = mmc_gpio_request_ro(mmc, pdata->gpio_read_only); - if (ret) - return ret; - } - - return jz4740_mmc_request_gpio(&pdev->dev, pdata->gpio_power, - "MMC read only", true, pdata->power_active_low); -} - -static void jz4740_mmc_free_gpios(struct platform_device *pdev) -{ - struct jz4740_mmc_platform_data *pdata = dev_get_platdata(&pdev->dev); + /* + * Get optional card detect and write protect GPIOs, + * only back out on probe deferral. 
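+	 * Any other error (e.g. -ENOENT when the line is not described) is
+	 * ignored on purpose: both GPIOs are optional on these boards.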
+ */ + ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, NULL); + if (ret == -EPROBE_DEFER) + return ret; - if (!pdata) - return; + ret = mmc_gpiod_request_ro(mmc, "wp", 0, false, 0, NULL); + if (ret == -EPROBE_DEFER) + return ret; - if (gpio_is_valid(pdata->gpio_power)) - gpio_free(pdata->gpio_power); + host->power = devm_gpiod_get_optional(&pdev->dev, "power", + GPIOD_OUT_HIGH); + return PTR_ERR_OR_ZERO(host->power); } static const struct of_device_id jz4740_mmc_of_match[] = { @@ -1053,7 +1011,7 @@ static int jz4740_mmc_probe(struct platform_device* pdev) mmc->caps |= MMC_CAP_SDIO_IRQ; if (!(pdata && pdata->data_1bit)) mmc->caps |= MMC_CAP_4_BIT_DATA; - ret = jz4740_mmc_request_gpios(mmc, pdev); + ret = jz4740_mmc_request_gpios(host, mmc, pdev); if (ret) goto err_free_host; } @@ -1104,7 +1062,7 @@ static int jz4740_mmc_probe(struct platform_device* pdev) dev_name(&pdev->dev), host); if (ret) { dev_err(&pdev->dev, "Failed to request irq: %d\n", ret); - goto err_free_gpios; + goto err_free_host; } jz4740_mmc_clock_disable(host); @@ -1135,8 +1093,6 @@ err_release_dma: jz4740_mmc_release_dma_channels(host); err_free_irq: free_irq(host->irq, host); -err_free_gpios: - jz4740_mmc_free_gpios(pdev); err_free_host: mmc_free_host(mmc); @@ -1155,8 +1111,6 @@ static int jz4740_mmc_remove(struct platform_device *pdev) free_irq(host->irq, host); - jz4740_mmc_free_gpios(pdev); - if (host->use_dma) jz4740_mmc_release_dma_channels(host); diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c index c201c378537e..c2690c1a50ff 100644 --- a/drivers/mmc/host/meson-gx-mmc.c +++ b/drivers/mmc/host/meson-gx-mmc.c @@ -21,11 +21,11 @@ #include #include #include +#include #include #include #include #include -#include #include #include #include @@ -66,6 +66,9 @@ #define SD_EMMC_DELAY 0x4 #define SD_EMMC_ADJUST 0x8 +#define ADJUST_ADJ_DELAY_MASK GENMASK(21, 16) +#define ADJUST_DS_EN BIT(15) +#define ADJUST_ADJ_EN BIT(13) #define SD_EMMC_DELAY1 0x4 #define SD_EMMC_DELAY2 0x8 @@ -90,9 +93,11 @@ #define CFG_CLK_ALWAYS_ON BIT(18) #define CFG_CHK_DS BIT(20) #define CFG_AUTO_CLK BIT(23) +#define CFG_ERR_ABORT BIT(27) #define SD_EMMC_STATUS 0x48 #define STATUS_BUSY BIT(31) +#define STATUS_DESC_BUSY BIT(30) #define STATUS_DATI GENMASK(23, 16) #define SD_EMMC_IRQ_EN 0x4c @@ -141,6 +146,7 @@ struct meson_mmc_data { unsigned int tx_delay_mask; unsigned int rx_delay_mask; unsigned int always_on; + unsigned int adjust; }; struct sd_emmc_desc { @@ -156,7 +162,6 @@ struct meson_host { struct mmc_host *mmc; struct mmc_command *cmd; - spinlock_t lock; void __iomem *regs; struct clk *core_clk; struct clk *mmc_clk; @@ -633,14 +638,8 @@ static int meson_mmc_clk_init(struct meson_host *host) if (ret) return ret; - /* - * Set phases : These values are mostly the datasheet recommended ones - * except for the Tx phase. Datasheet recommends 180 but some cards - * fail at initialisation with it. 270 works just fine, it fixes these - * initialisation issues and enable eMMC DDR52 mode. 
- */ clk_set_phase(host->mmc_clk, 180); - clk_set_phase(host->tx_clk, 270); + clk_set_phase(host->tx_clk, 0); clk_set_phase(host->rx_clk, 0); return clk_prepare_enable(host->mmc_clk); @@ -928,6 +927,7 @@ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd) cmd_cfg |= FIELD_PREP(CMD_CFG_CMD_INDEX_MASK, cmd->opcode); cmd_cfg |= CMD_CFG_OWNER; /* owned by CPU */ + cmd_cfg |= CMD_CFG_ERROR; /* stop in case of error */ meson_mmc_set_response_bits(cmd, &cmd_cfg); @@ -1022,29 +1022,34 @@ static irqreturn_t meson_mmc_irq(int irq, void *dev_id) u32 irq_en, status, raw_status; irqreturn_t ret = IRQ_NONE; - if (WARN_ON(!host) || WARN_ON(!host->cmd)) + irq_en = readl(host->regs + SD_EMMC_IRQ_EN); + raw_status = readl(host->regs + SD_EMMC_STATUS); + status = raw_status & irq_en; + + if (!status) { + dev_dbg(host->dev, + "Unexpected IRQ! irq_en 0x%08x - status 0x%08x\n", + irq_en, raw_status); return IRQ_NONE; + } - spin_lock(&host->lock); + if (WARN_ON(!host) || WARN_ON(!host->cmd)) + return IRQ_NONE; cmd = host->cmd; data = cmd->data; - irq_en = readl(host->regs + SD_EMMC_IRQ_EN); - raw_status = readl(host->regs + SD_EMMC_STATUS); - status = raw_status & irq_en; - cmd->error = 0; if (status & IRQ_CRC_ERR) { dev_dbg(host->dev, "CRC Error - status 0x%08x\n", status); cmd->error = -EILSEQ; - ret = IRQ_HANDLED; + ret = IRQ_WAKE_THREAD; goto out; } if (status & IRQ_TIMEOUTS) { dev_dbg(host->dev, "Timeout - status 0x%08x\n", status); cmd->error = -ETIMEDOUT; - ret = IRQ_HANDLED; + ret = IRQ_WAKE_THREAD; goto out; } @@ -1069,17 +1074,48 @@ out: /* ack all enabled interrupts */ writel(irq_en, host->regs + SD_EMMC_STATUS); + if (cmd->error) { + /* Stop desc in case of errors */ + u32 start = readl(host->regs + SD_EMMC_START); + + start &= ~START_DESC_BUSY; + writel(start, host->regs + SD_EMMC_START); + } + if (ret == IRQ_HANDLED) meson_mmc_request_done(host->mmc, cmd->mrq); - else if (ret == IRQ_NONE) - dev_warn(host->dev, - "Unexpected IRQ! status=0x%08x, irq_en=0x%08x\n", - raw_status, irq_en); - spin_unlock(&host->lock); return ret; } +static int meson_mmc_wait_desc_stop(struct meson_host *host) +{ + int loop; + u32 status; + + /* + * It may sometimes take a while for it to actually halt. Here, we + * are giving it 5ms to comply + * + * If we don't confirm the descriptor is stopped, it might raise new + * IRQs after we have called mmc_request_done() which is bad. 
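+	 *
+	 * The loop below polls STATUS_BUSY | STATUS_DESC_BUSY every 100us,
+	 * up to 50 times, which gives the ~5ms budget mentioned above.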
+ */ + for (loop = 50; loop; loop--) { + status = readl(host->regs + SD_EMMC_STATUS); + if (status & (STATUS_BUSY | STATUS_DESC_BUSY)) + udelay(100); + else + break; + } + + if (status & (STATUS_BUSY | STATUS_DESC_BUSY)) { + dev_err(host->dev, "Timed out waiting for host to stop\n"); + return -ETIMEDOUT; + } + + return 0; +} + static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id) { struct meson_host *host = dev_id; @@ -1090,6 +1126,13 @@ static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id) if (WARN_ON(!cmd)) return IRQ_NONE; + if (cmd->error) { + meson_mmc_wait_desc_stop(host); + meson_mmc_request_done(host->mmc, cmd->mrq); + + return IRQ_HANDLED; + } + data = cmd->data; if (meson_mmc_bounce_buf_read(data)) { xfer_bytes = data->blksz * data->blocks; @@ -1123,14 +1166,21 @@ static int meson_mmc_get_cd(struct mmc_host *mmc) static void meson_mmc_cfg_init(struct meson_host *host) { - u32 cfg = 0; + u32 cfg = 0, adj = 0; cfg |= FIELD_PREP(CFG_RESP_TIMEOUT_MASK, ilog2(SD_EMMC_CFG_RESP_TIMEOUT)); cfg |= FIELD_PREP(CFG_RC_CC_MASK, ilog2(SD_EMMC_CFG_CMD_GAP)); cfg |= FIELD_PREP(CFG_BLK_LEN_MASK, ilog2(SD_EMMC_CFG_BLK_SIZE)); + /* abort chain on R/W errors */ + cfg |= CFG_ERR_ABORT; + writel(cfg, host->regs + SD_EMMC_CFG); + + /* enable signal resampling w/o delay */ + adj = ADJUST_ADJ_EN; + writel(adj, host->regs + host->data->adjust); } static int meson_mmc_card_busy(struct mmc_host *mmc) @@ -1191,8 +1241,6 @@ static int meson_mmc_probe(struct platform_device *pdev) host->dev = &pdev->dev; dev_set_drvdata(&pdev->dev, host); - spin_lock_init(&host->lock); - /* Get regulators and the supported OCR mask */ host->vqmmc_enabled = false; ret = mmc_regulator_get_supply(mmc); @@ -1356,12 +1404,14 @@ static const struct meson_mmc_data meson_gx_data = { .tx_delay_mask = CLK_V2_TX_DELAY_MASK, .rx_delay_mask = CLK_V2_RX_DELAY_MASK, .always_on = CLK_V2_ALWAYS_ON, + .adjust = SD_EMMC_ADJUST, }; static const struct meson_mmc_data meson_axg_data = { .tx_delay_mask = CLK_V3_TX_DELAY_MASK, .rx_delay_mask = CLK_V3_RX_DELAY_MASK, .always_on = CLK_V3_ALWAYS_ON, + .adjust = SD_EMMC_V3_ADJUST, }; static const struct of_device_id meson_mmc_of_match[] = { diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c index abe253c262a2..ec980bda071c 100644 --- a/drivers/mmc/host/meson-mx-sdio.c +++ b/drivers/mmc/host/meson-mx-sdio.c @@ -596,6 +596,9 @@ static int meson_mx_mmc_register_clks(struct meson_mx_mmc_host *host) init.name = devm_kasprintf(host->controller_dev, GFP_KERNEL, "%s#fixed_factor", dev_name(host->controller_dev)); + if (!init.name) + return -ENOMEM; + init.ops = &clk_fixed_factor_ops; init.flags = 0; init.parent_names = &clk_fixed_factor_parent; @@ -612,6 +615,9 @@ static int meson_mx_mmc_register_clks(struct meson_mx_mmc_host *host) clk_div_parent = __clk_get_name(host->fixed_factor_clk); init.name = devm_kasprintf(host->controller_dev, GFP_KERNEL, "%s#div", dev_name(host->controller_dev)); + if (!init.name) + return -ENOMEM; + init.ops = &clk_divider_ops; init.flags = CLK_SET_RATE_PARENT; init.parent_names = &clk_div_parent; diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c index 476e53d30128..10ba46b728e8 100644 --- a/drivers/mmc/host/mmc_spi.c +++ b/drivers/mmc/host/mmc_spi.c @@ -1434,13 +1434,16 @@ static int mmc_spi_probe(struct spi_device *spi) if (status != 0) goto fail_add_host; - if (host->pdata && host->pdata->flags & MMC_SPI_USE_CD_GPIO) { - status = mmc_gpio_request_cd(mmc, host->pdata->cd_gpio, - host->pdata->cd_debounce); - if 
(status != 0) - goto fail_add_host; - - /* The platform has a CD GPIO signal that may support + /* + * Index 0 is card detect + * Old boardfiles were specifying 1 ms as debounce + */ + status = mmc_gpiod_request_cd(mmc, NULL, 0, false, 1, NULL); + if (status == -EPROBE_DEFER) + goto fail_add_host; + if (!status) { + /* + * The platform has a CD GPIO signal that may support * interrupts, so let mmc_gpiod_request_cd_irq() decide * if polling is needed or not. */ @@ -1448,12 +1451,12 @@ static int mmc_spi_probe(struct spi_device *spi) mmc_gpiod_request_cd_irq(mmc); } - if (host->pdata && host->pdata->flags & MMC_SPI_USE_RO_GPIO) { + /* Index 1 is write protect/read only */ + status = mmc_gpiod_request_ro(mmc, NULL, 1, false, 0, NULL); + if (status == -EPROBE_DEFER) + goto fail_add_host; + if (!status) has_ro = true; - status = mmc_gpio_request_ro(mmc, host->pdata->ro_gpio); - if (status != 0) - goto fail_add_host; - } dev_info(&spi->dev, "SD/MMC host %s%s%s%s%s\n", dev_name(&mmc->class_dev), diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c index 82bab35fff41..e352f5ad5801 100644 --- a/drivers/mmc/host/mmci.c +++ b/drivers/mmc/host/mmci.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -274,6 +275,7 @@ static struct variant_data variant_stm32_sdmmc = { .cmdreg_lrsp_crc = MCI_CPSM_STM32_LRSP_CRC, .cmdreg_srsp_crc = MCI_CPSM_STM32_SRSP_CRC, .cmdreg_srsp = MCI_CPSM_STM32_SRSP, + .cmdreg_stop = MCI_CPSM_STM32_CMDSTOP, .data_cmd_enable = MCI_CPSM_STM32_CMDTRANS, .irq_pio_mask = MCI_IRQ_PIO_STM32_MASK, .datactrl_first = true, @@ -1100,6 +1102,10 @@ mmci_start_command(struct mmci_host *host, struct mmc_command *cmd, u32 c) mmci_reg_delay(host); } + if (host->variant->cmdreg_stop && + cmd->opcode == MMC_STOP_TRANSMISSION) + c |= host->variant->cmdreg_stop; + c |= cmd->opcode | host->variant->cmdreg_cpsm_enable; if (cmd->flags & MMC_RSP_PRESENT) { if (cmd->flags & MMC_RSP_136) @@ -1190,11 +1196,10 @@ mmci_data_irq(struct mmci_host *host, struct mmc_data *data, /* The error clause is handled above, success! 
*/ data->bytes_xfered = data->blksz * data->blocks; - if (!data->stop || host->mrq->sbc) { + if (!data->stop || (host->mrq->sbc && !data->error)) mmci_request_end(host, data->mrq); - } else { + else mmci_start_command(host, data->stop, 0); - } } } diff --git a/drivers/mmc/host/mmci.h b/drivers/mmc/host/mmci.h index 550dd3914461..24229097d05c 100644 --- a/drivers/mmc/host/mmci.h +++ b/drivers/mmc/host/mmci.h @@ -264,6 +264,7 @@ struct mmci_host; * @cmdreg_lrsp_crc: enable value for long response with crc * @cmdreg_srsp_crc: enable value for short response with crc * @cmdreg_srsp: enable value for short response without crc + * @cmdreg_stop: enable value for stop and abort transmission * @datalength_bits: number of bits in the MMCIDATALENGTH register * @fifosize: number of bytes that can be written when MMCI_TXFIFOEMPTY * is asserted (likewise for RX) @@ -316,6 +317,7 @@ struct variant_data { unsigned int cmdreg_lrsp_crc; unsigned int cmdreg_srsp_crc; unsigned int cmdreg_srsp; + unsigned int cmdreg_stop; unsigned int datalength_bits; unsigned int fifosize; unsigned int fifohalfsize; diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c index 6334cc752d8b..8afeaf81ae66 100644 --- a/drivers/mmc/host/mtk-sd.c +++ b/drivers/mmc/host/mtk-sd.c @@ -1114,6 +1114,7 @@ static void msdc_start_command(struct msdc_host *host, struct mmc_request *mrq, struct mmc_command *cmd) { u32 rawcmd; + unsigned long flags; WARN_ON(host->cmd); host->cmd = cmd; @@ -1131,7 +1132,10 @@ static void msdc_start_command(struct msdc_host *host, cmd->error = 0; rawcmd = msdc_cmd_prepare_raw_cmd(host, mrq, cmd); + spin_lock_irqsave(&host->lock, flags); sdr_set_bits(host->base + MSDC_INTEN, cmd_ints_mask); + spin_unlock_irqrestore(&host->lock, flags); + writel(cmd->arg, host->base + SDC_ARG); writel(rawcmd, host->base + SDC_CMD); } @@ -1351,6 +1355,31 @@ static void msdc_request_timeout(struct work_struct *work) } } +static void __msdc_enable_sdio_irq(struct mmc_host *mmc, int enb) +{ + unsigned long flags; + struct msdc_host *host = mmc_priv(mmc); + + spin_lock_irqsave(&host->lock, flags); + if (enb) + sdr_set_bits(host->base + MSDC_INTEN, MSDC_INTEN_SDIOIRQ); + else + sdr_clr_bits(host->base + MSDC_INTEN, MSDC_INTEN_SDIOIRQ); + spin_unlock_irqrestore(&host->lock, flags); +} + +static void msdc_enable_sdio_irq(struct mmc_host *mmc, int enb) +{ + struct msdc_host *host = mmc_priv(mmc); + + __msdc_enable_sdio_irq(mmc, enb); + + if (enb) + pm_runtime_get_noresume(host->dev); + else + pm_runtime_put_noidle(host->dev); +} + static irqreturn_t msdc_irq(int irq, void *dev_id) { struct msdc_host *host = (struct msdc_host *) dev_id; @@ -1373,7 +1402,12 @@ static irqreturn_t msdc_irq(int irq, void *dev_id) data = host->data; spin_unlock_irqrestore(&host->lock, flags); - if (!(events & event_mask)) + if ((events & event_mask) & MSDC_INT_SDIOIRQ) { + __msdc_enable_sdio_irq(host->mmc, 0); + sdio_signal_irq(host->mmc); + } + + if (!(events & (event_mask & ~MSDC_INT_SDIOIRQ))) break; if (!mrq) { @@ -1493,8 +1527,11 @@ static void msdc_init_hw(struct msdc_host *host) */ sdr_set_bits(host->base + SDC_CFG, SDC_CFG_SDIO); - /* disable detect SDIO device interrupt function */ - sdr_clr_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE); + /* Config SDIO device detect interrupt function */ + if (host->mmc->caps & MMC_CAP_SDIO_IRQ) + sdr_set_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE); + else + sdr_clr_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE); /* Configure to default data timeout */ sdr_set_field(host->base + SDC_CFG, SDC_CFG_DTOC, 3); @@ 
-2013,6 +2050,11 @@ static void msdc_hw_reset(struct mmc_host *mmc) sdr_clr_bits(host->base + EMMC_IOCON, 1); } +static void msdc_ack_sdio_irq(struct mmc_host *mmc) +{ + __msdc_enable_sdio_irq(mmc, 1); +} + static const struct mmc_host_ops mt_msdc_ops = { .post_req = msdc_post_req, .pre_req = msdc_pre_req, @@ -2020,6 +2062,8 @@ static const struct mmc_host_ops mt_msdc_ops = { .set_ios = msdc_ops_set_ios, .get_ro = mmc_gpio_get_ro, .get_cd = mmc_gpio_get_cd, + .enable_sdio_irq = msdc_enable_sdio_irq, + .ack_sdio_irq = msdc_ack_sdio_irq, .start_signal_voltage_switch = msdc_ops_switch_volt, .card_busy = msdc_card_busy, .execute_tuning = msdc_execute_tuning, @@ -2147,6 +2191,9 @@ static int msdc_drv_probe(struct platform_device *pdev) else mmc->f_min = DIV_ROUND_UP(host->src_clk_freq, 4 * 4095); + if (mmc->caps & MMC_CAP_SDIO_IRQ) + mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD; + mmc->caps |= MMC_CAP_ERASE | MMC_CAP_CMD23; /* MMC core transfer sizes tunable parameters */ mmc->max_segs = MAX_BD_NUM; diff --git a/drivers/mmc/host/of_mmc_spi.c b/drivers/mmc/host/of_mmc_spi.c index c9eed8436b6b..b294b221f225 100644 --- a/drivers/mmc/host/of_mmc_spi.c +++ b/drivers/mmc/host/of_mmc_spi.c @@ -16,9 +16,7 @@ #include #include #include -#include #include -#include #include #include #include @@ -32,15 +30,7 @@ MODULE_LICENSE("GPL"); -enum { - CD_GPIO = 0, - WP_GPIO, - NUM_GPIOS, -}; - struct of_mmc_spi { - int gpios[NUM_GPIOS]; - bool alow_gpios[NUM_GPIOS]; int detect_irq; struct mmc_spi_platform_data pdata; }; @@ -102,30 +92,6 @@ struct mmc_spi_platform_data *mmc_spi_get_pdata(struct spi_device *spi) oms->pdata.ocr_mask |= mask; } - for (i = 0; i < ARRAY_SIZE(oms->gpios); i++) { - enum of_gpio_flags gpio_flags; - - oms->gpios[i] = of_get_gpio_flags(np, i, &gpio_flags); - if (!gpio_is_valid(oms->gpios[i])) - continue; - - if (gpio_flags & OF_GPIO_ACTIVE_LOW) - oms->alow_gpios[i] = true; - } - - if (gpio_is_valid(oms->gpios[CD_GPIO])) { - oms->pdata.cd_gpio = oms->gpios[CD_GPIO]; - oms->pdata.flags |= MMC_SPI_USE_CD_GPIO; - if (!oms->alow_gpios[CD_GPIO]) - oms->pdata.caps2 |= MMC_CAP2_CD_ACTIVE_HIGH; - } - if (gpio_is_valid(oms->gpios[WP_GPIO])) { - oms->pdata.ro_gpio = oms->gpios[WP_GPIO]; - oms->pdata.flags |= MMC_SPI_USE_RO_GPIO; - if (!oms->alow_gpios[WP_GPIO]) - oms->pdata.caps2 |= MMC_CAP2_RO_ACTIVE_HIGH; - } - oms->detect_irq = irq_of_parse_and_map(np, 0); if (oms->detect_irq != 0) { oms->pdata.init = of_mmc_spi_init; diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c index 3f4ea8f624be..29a1ddaa7466 100644 --- a/drivers/mmc/host/omap_hsmmc.c +++ b/drivers/mmc/host/omap_hsmmc.c @@ -1652,7 +1652,7 @@ static struct mmc_host_ops omap_hsmmc_ops = { #ifdef CONFIG_DEBUG_FS -static int omap_hsmmc_regs_show(struct seq_file *s, void *data) +static int mmc_regs_show(struct seq_file *s, void *data) { struct mmc_host *mmc = s->private; struct omap_hsmmc_host *host = mmc_priv(mmc); @@ -1691,17 +1691,7 @@ static int omap_hsmmc_regs_show(struct seq_file *s, void *data) return 0; } -static int omap_hsmmc_regs_open(struct inode *inode, struct file *file) -{ - return single_open(file, omap_hsmmc_regs_show, inode->i_private); -} - -static const struct file_operations mmc_regs_fops = { - .open = omap_hsmmc_regs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(mmc_regs); static void omap_hsmmc_debugfs(struct mmc_host *mmc) { diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c index f7ffbf1676b1..8779bbaa6b69 100644 --- 
a/drivers/mmc/host/pxamci.c +++ b/drivers/mmc/host/pxamci.c @@ -30,10 +30,9 @@ #include #include #include -#include +#include #include #include -#include #include #include @@ -63,6 +62,8 @@ struct pxamci_host { unsigned int imask; unsigned int power_mode; unsigned long detect_delay_ms; + bool use_ro_gpio; + struct gpio_desc *power; struct pxamci_platform_data *pdata; struct mmc_request *mrq; @@ -101,16 +102,13 @@ static inline int pxamci_set_power(struct pxamci_host *host, { struct mmc_host *mmc = host->mmc; struct regulator *supply = mmc->supply.vmmc; - int on; if (!IS_ERR(supply)) return mmc_regulator_set_ocr(mmc, supply, vdd); - if (host->pdata && - gpio_is_valid(host->pdata->gpio_power)) { - on = ((1 << vdd) & host->pdata->ocr_mask); - gpio_set_value(host->pdata->gpio_power, - !!on ^ host->pdata->gpio_power_invert); + if (host->power) { + bool on = !!((1 << vdd) & host->pdata->ocr_mask); + gpiod_set_value(host->power, on); } if (host->pdata && host->pdata->setpower) @@ -432,7 +430,7 @@ static int pxamci_get_ro(struct mmc_host *mmc) { struct pxamci_host *host = mmc_priv(mmc); - if (host->pdata && gpio_is_valid(host->pdata->gpio_card_ro)) + if (host->use_ro_gpio) return mmc_gpio_get_ro(mmc); if (host->pdata && host->pdata->get_ro) return !!host->pdata->get_ro(mmc_dev(mmc)); @@ -730,52 +728,38 @@ static int pxamci_probe(struct platform_device *pdev) } if (host->pdata) { - int gpio_cd = host->pdata->gpio_card_detect; - int gpio_ro = host->pdata->gpio_card_ro; - int gpio_power = host->pdata->gpio_power; - host->detect_delay_ms = host->pdata->detect_delay_ms; - if (gpio_is_valid(gpio_power)) { - ret = devm_gpio_request(dev, gpio_power, - "mmc card power"); - if (ret) { - dev_err(dev, - "Failed requesting gpio_power %d\n", - gpio_power); - goto out; - } - gpio_direction_output(gpio_power, - host->pdata->gpio_power_invert); + host->power = devm_gpiod_get_optional(dev, "power", GPIOD_OUT_LOW); + if (IS_ERR(host->power)) { + dev_err(dev, "Failed requesting gpio_power\n"); + goto out; } - if (gpio_is_valid(gpio_ro)) { - ret = mmc_gpio_request_ro(mmc, gpio_ro); - if (ret) { - dev_err(dev, - "Failed requesting gpio_ro %d\n", - gpio_ro); - goto out; - } else { - mmc->caps2 |= host->pdata->gpio_card_ro_invert ? - 0 : MMC_CAP2_RO_ACTIVE_HIGH; - } + /* FIXME: should we pass detection delay to debounce? */ + ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, NULL); + if (ret && ret != -ENOENT) { + dev_err(dev, "Failed requesting gpio_cd\n"); + goto out; } - if (gpio_is_valid(gpio_cd)) - ret = mmc_gpio_request_cd(mmc, gpio_cd, 0); - if (ret) { - dev_err(dev, "Failed requesting gpio_cd %d\n", - gpio_cd); + ret = mmc_gpiod_request_ro(mmc, "wp", 0, false, 0, NULL); + if (ret && ret != -ENOENT) { + dev_err(dev, "Failed requesting gpio_ro\n"); goto out; } + if (!ret) { + host->use_ro_gpio = true; + mmc->caps2 |= host->pdata->gpio_card_ro_invert ? 
+ 0 : MMC_CAP2_RO_ACTIVE_HIGH; + } if (host->pdata->init) host->pdata->init(dev, pxamci_detect_irq, mmc); - if (gpio_is_valid(gpio_power) && host->pdata->setpower) + if (host->power && host->pdata->setpower) dev_warn(dev, "gpio_power and setpower() both defined\n"); - if (gpio_is_valid(gpio_ro) && host->pdata->get_ro) + if (host->use_ro_gpio && host->pdata->get_ro) dev_warn(dev, "gpio_ro and get_ro() both defined\n"); } diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c index d3ac43c3d0b6..31a351a20dc0 100644 --- a/drivers/mmc/host/renesas_sdhi_core.c +++ b/drivers/mmc/host/renesas_sdhi_core.c @@ -32,6 +32,7 @@ #include #include #include +#include #include "renesas_sdhi.h" #include "tmio_mmc.h" @@ -45,6 +46,11 @@ #define SDHI_VER_GEN3_SD 0xcc10 #define SDHI_VER_GEN3_SDMMC 0xcd10 +struct renesas_sdhi_quirks { + bool hs400_disabled; + bool hs400_4taps; +}; + static void renesas_sdhi_sdbuf_width(struct tmio_mmc_host *host, int width) { u32 val; @@ -163,15 +169,6 @@ static void renesas_sdhi_set_clock(struct tmio_mmc_host *host, if (new_clock == 0) goto out; - /* - * Both HS400 and HS200/SD104 set 200MHz, but some devices need to - * set 400MHz to distinguish the CPG settings in HS400. - */ - if (host->mmc->ios.timing == MMC_TIMING_MMC_HS400 && - host->pdata->flags & TMIO_MMC_HAVE_4TAP_HS400 && - new_clock == 200000000) - new_clock = 400000000; - clock = renesas_sdhi_clk_update(host, new_clock) / 512; for (clk = 0x80000080; new_clock >= (clock << 1); clk >>= 1) @@ -532,6 +529,10 @@ static void renesas_sdhi_hw_reset(struct tmio_mmc_host *host) sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL, ~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN & sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL)); + + if (host->pdata->flags & TMIO_MMC_MIN_RCAR2) + sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, + TMIO_MASK_INIT_RCAR2); } static int renesas_sdhi_wait_idle(struct tmio_mmc_host *host, u32 bit) @@ -602,11 +603,31 @@ static void renesas_sdhi_enable_dma(struct tmio_mmc_host *host, bool enable) renesas_sdhi_sdbuf_width(host, enable ? width : 16); } +static const struct renesas_sdhi_quirks sdhi_quirks_h3_m3w_es1 = { + .hs400_disabled = true, + .hs400_4taps = true, +}; + +static const struct renesas_sdhi_quirks sdhi_quirks_h3_es2 = { + .hs400_disabled = false, + .hs400_4taps = true, +}; + +static const struct soc_device_attribute sdhi_quirks_match[] = { + { .soc_id = "r8a7795", .revision = "ES1.*", .data = &sdhi_quirks_h3_m3w_es1 }, + { .soc_id = "r8a7795", .revision = "ES2.0", .data = &sdhi_quirks_h3_es2 }, + { .soc_id = "r8a7796", .revision = "ES1.0", .data = &sdhi_quirks_h3_m3w_es1 }, + { .soc_id = "r8a7796", .revision = "ES1.1", .data = &sdhi_quirks_h3_m3w_es1 }, + { /* Sentinel. 
*/ },
+};
+
 int renesas_sdhi_probe(struct platform_device *pdev,
 		       const struct tmio_mmc_dma_ops *dma_ops)
 {
 	struct tmio_mmc_data *mmd = pdev->dev.platform_data;
+	const struct renesas_sdhi_quirks *quirks = NULL;
 	const struct renesas_sdhi_of_data *of_data;
+	const struct soc_device_attribute *attr;
 	struct tmio_mmc_data *mmc_data;
 	struct tmio_mmc_dma *dma_priv;
 	struct tmio_mmc_host *host;
@@ -616,6 +637,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
 
 	of_data = of_device_get_match_data(&pdev->dev);
 
+	attr = soc_device_match(sdhi_quirks_match);
+	if (attr)
+		quirks = attr->data;
+
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!res)
 		return -EINVAL;
@@ -681,6 +706,12 @@ int renesas_sdhi_probe(struct platform_device *pdev,
 	host->multi_io_quirk	= renesas_sdhi_multi_io_quirk;
 	host->dma_ops		= dma_ops;
 
+	if (quirks && quirks->hs400_disabled)
+		host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);
+
+	if (quirks && quirks->hs400_4taps)
+		mmc_data->flags |= TMIO_MMC_HAVE_4TAP_HS400;
+
 	/* For some SoC, we disable internal WP. GPIO may override this */
 	if (mmc_can_gpio_ro(host->mmc))
 		mmc_data->capabilities2 &= ~MMC_CAP2_NO_WRITE_PROTECT;
@@ -691,6 +722,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
 		host->ops.card_busy = renesas_sdhi_card_busy;
 		host->ops.start_signal_voltage_switch =
 			renesas_sdhi_start_signal_voltage_switch;
+		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
 	}
 
 	/* Originally registers were 16 bit apart, could be 32 or 64 nowadays */
diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
index b6f54102bfdd..92c9b15252da 100644
--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
@@ -34,7 +34,7 @@
 #define DTRAN_MODE_CH_NUM_CH0	0	/* "downstream" = for write commands */
 #define DTRAN_MODE_CH_NUM_CH1	BIT(16)	/* "upstream" = for read commands */
 #define DTRAN_MODE_BUS_WIDTH	(BIT(5) | BIT(4))
-#define DTRAN_MODE_ADDR_MODE	BIT(0)	/* 1 = Increment address */
+#define DTRAN_MODE_ADDR_MODE	BIT(0)	/* 1 = Increment address, 0 = Fixed */
 
 /* DM_CM_DTRAN_CTRL */
 #define DTRAN_CTRL_DM_START	BIT(0)
@@ -73,6 +73,9 @@ static unsigned long global_flags;
 #define SDHI_INTERNAL_DMAC_ONE_RX_ONLY	0
 #define SDHI_INTERNAL_DMAC_RX_IN_USE	1
 
+/* RZ/A2 does not have the ADDR_MODE bit */
+#define SDHI_INTERNAL_DMAC_ADDR_MODE_FIXED_ONLY	2
+
 /* Definitions for sampling clocks */
 static struct renesas_sdhi_scc rcar_gen3_scc_taps[] = {
 	{
@@ -81,15 +84,14 @@ static struct renesas_sdhi_scc rcar_gen3_scc_taps[] = {
 	},
 };
 
-static const struct renesas_sdhi_of_data of_rcar_r8a7795_compatible = {
+static const struct renesas_sdhi_of_data of_rza2_compatible = {
 	.tmio_flags	= TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_CLK_ACTUAL |
-			  TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2 |
-			  TMIO_MMC_HAVE_4TAP_HS400,
+			  TMIO_MMC_HAVE_CBSY,
+	.tmio_ocr_mask	= MMC_VDD_32_33,
 	.capabilities	= MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
 			  MMC_CAP_CMD23,
-	.capabilities2	= MMC_CAP2_NO_WRITE_PROTECT,
 	.bus_shift	= 2,
-	.scc_offset	= 0x1000,
+	.scc_offset	= 0 - 0x1000,
 	.taps		= rcar_gen3_scc_taps,
 	.taps_num	= ARRAY_SIZE(rcar_gen3_scc_taps),
 	/* DMAC can handle 0xffffffff blk count but only 1 segment */
@@ -113,9 +115,10 @@ static const struct renesas_sdhi_of_data of_rcar_gen3_compatible = {
 };
 
 static const struct of_device_id renesas_sdhi_internal_dmac_of_match[] = {
+	{ .compatible = "renesas,sdhi-r7s9210", .data = &of_rza2_compatible, },
 	{ .compatible = "renesas,sdhi-mmc-r8a77470", .data = &of_rcar_gen3_compatible, },
-	{ 
.compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_r8a7795_compatible, }, - { .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_r8a7795_compatible, }, + { .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_gen3_compatible, }, + { .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_gen3_compatible, }, { .compatible = "renesas,rcar-gen3-sdhi", .data = &of_rcar_gen3_compatible, }, {}, }; @@ -172,7 +175,10 @@ renesas_sdhi_internal_dmac_start_dma(struct tmio_mmc_host *host, struct mmc_data *data) { struct scatterlist *sg = host->sg_ptr; - u32 dtran_mode = DTRAN_MODE_BUS_WIDTH | DTRAN_MODE_ADDR_MODE; + u32 dtran_mode = DTRAN_MODE_BUS_WIDTH; + + if (!test_bit(SDHI_INTERNAL_DMAC_ADDR_MODE_FIXED_ONLY, &global_flags)) + dtran_mode |= DTRAN_MODE_ADDR_MODE; if (!dma_map_sg(&host->pdev->dev, sg, host->sg_len, mmc_get_dma_dir(data))) @@ -292,18 +298,22 @@ static const struct tmio_mmc_dma_ops renesas_sdhi_internal_dmac_dma_ops = { */ static const struct soc_device_attribute soc_whitelist[] = { /* specific ones */ + { .soc_id = "r7s9210", + .data = (void *)BIT(SDHI_INTERNAL_DMAC_ADDR_MODE_FIXED_ONLY) }, { .soc_id = "r8a7795", .revision = "ES1.*", .data = (void *)BIT(SDHI_INTERNAL_DMAC_ONE_RX_ONLY) }, { .soc_id = "r8a7796", .revision = "ES1.0", .data = (void *)BIT(SDHI_INTERNAL_DMAC_ONE_RX_ONLY) }, /* generic ones */ { .soc_id = "r8a774a1" }, + { .soc_id = "r8a774c0" }, { .soc_id = "r8a77470" }, { .soc_id = "r8a7795" }, { .soc_id = "r8a7796" }, { .soc_id = "r8a77965" }, { .soc_id = "r8a77970" }, { .soc_id = "r8a77980" }, + { .soc_id = "r8a77990" }, { .soc_id = "r8a77995" }, { /* sentinel */ } }; diff --git a/drivers/mmc/host/renesas_sdhi_sys_dmac.c b/drivers/mmc/host/renesas_sdhi_sys_dmac.c index 1a4016f635d3..8471160316e0 100644 --- a/drivers/mmc/host/renesas_sdhi_sys_dmac.c +++ b/drivers/mmc/host/renesas_sdhi_sys_dmac.c @@ -75,19 +75,6 @@ static struct renesas_sdhi_scc rcar_gen3_scc_taps[] = { }, }; -static const struct renesas_sdhi_of_data of_rcar_r8a7795_compatible = { - .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_CLK_ACTUAL | - TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2 | - TMIO_MMC_HAVE_4TAP_HS400, - .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ | - MMC_CAP_CMD23, - .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT, - .bus_shift = 2, - .scc_offset = 0x1000, - .taps = rcar_gen3_scc_taps, - .taps_num = ARRAY_SIZE(rcar_gen3_scc_taps), -}; - static const struct renesas_sdhi_of_data of_rcar_gen3_compatible = { .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_CLK_ACTUAL | TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2, @@ -114,8 +101,8 @@ static const struct of_device_id renesas_sdhi_sys_dmac_of_match[] = { { .compatible = "renesas,sdhi-r8a7792", .data = &of_rcar_gen2_compatible, }, { .compatible = "renesas,sdhi-r8a7793", .data = &of_rcar_gen2_compatible, }, { .compatible = "renesas,sdhi-r8a7794", .data = &of_rcar_gen2_compatible, }, - { .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_r8a7795_compatible, }, - { .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_r8a7795_compatible, }, + { .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_gen3_compatible, }, + { .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_gen3_compatible, }, { .compatible = "renesas,rcar-gen1-sdhi", .data = &of_rcar_gen1_compatible, }, { .compatible = "renesas,rcar-gen2-sdhi", .data = &of_rcar_gen2_compatible, }, { .compatible = "renesas,rcar-gen3-sdhi", .data = &of_rcar_gen3_compatible, }, @@ -493,8 +480,7 @@ static const struct soc_device_attribute gen3_soc_whitelist[] = { static int 
renesas_sdhi_sys_dmac_probe(struct platform_device *pdev) { - if ((of_device_get_match_data(&pdev->dev) == &of_rcar_gen3_compatible || - of_device_get_match_data(&pdev->dev) == &of_rcar_r8a7795_compatible) && + if (of_device_get_match_data(&pdev->dev) == &of_rcar_gen3_compatible && !soc_device_match(gen3_soc_whitelist)) return -ENODEV; diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c index 9a3ff22dd0fe..669c6ab021c8 100644 --- a/drivers/mmc/host/rtsx_usb_sdmmc.c +++ b/drivers/mmc/host/rtsx_usb_sdmmc.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include @@ -1042,9 +1043,9 @@ static int sd_set_power_mode(struct rtsx_usb_sdmmc *host, if (power_mode == MMC_POWER_OFF) { err = sd_power_off(host); - pm_runtime_put(sdmmc_dev(host)); + pm_runtime_put_noidle(sdmmc_dev(host)); } else { - pm_runtime_get_sync(sdmmc_dev(host)); + pm_runtime_get_noresume(sdmmc_dev(host)); err = sd_power_on(host); } @@ -1297,16 +1298,20 @@ static void rtsx_usb_update_led(struct work_struct *work) container_of(work, struct rtsx_usb_sdmmc, led_work); struct rtsx_ucr *ucr = host->ucr; - pm_runtime_get_sync(sdmmc_dev(host)); + pm_runtime_get_noresume(sdmmc_dev(host)); mutex_lock(&ucr->dev_mutex); + if (host->power_mode == MMC_POWER_OFF) + goto out; + if (host->led.brightness == LED_OFF) rtsx_usb_turn_off_led(ucr); else rtsx_usb_turn_on_led(ucr); +out: mutex_unlock(&ucr->dev_mutex); - pm_runtime_put(sdmmc_dev(host)); + pm_runtime_put_sync_suspend(sdmmc_dev(host)); } #endif @@ -1320,7 +1325,7 @@ static void rtsx_usb_init_host(struct rtsx_usb_sdmmc *host) mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED | MMC_CAP_BUS_WIDTH_TEST | MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | MMC_CAP_UHS_SDR50 | - MMC_CAP_NEEDS_POLL | MMC_CAP_ERASE; + MMC_CAP_ERASE | MMC_CAP_SYNC_RUNTIME_PM; mmc->caps2 = MMC_CAP2_NO_PRESCAN_POWERUP | MMC_CAP2_FULL_PWR_CYCLE | MMC_CAP2_NO_SDIO; @@ -1363,8 +1368,6 @@ static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev) mutex_init(&host->host_mutex); rtsx_usb_init_host(host); - pm_runtime_use_autosuspend(&pdev->dev); - pm_runtime_set_autosuspend_delay(&pdev->dev, 50); pm_runtime_enable(&pdev->dev); #ifdef RTSX_USB_USE_LEDS_CLASS @@ -1419,7 +1422,6 @@ static int rtsx_usb_sdmmc_drv_remove(struct platform_device *pdev) mmc_free_host(mmc); pm_runtime_disable(&pdev->dev); - pm_runtime_dont_use_autosuspend(&pdev->dev); platform_set_drvdata(pdev, NULL); dev_dbg(&(pdev->dev), @@ -1428,6 +1430,31 @@ static int rtsx_usb_sdmmc_drv_remove(struct platform_device *pdev) return 0; } +#ifdef CONFIG_PM +static int rtsx_usb_sdmmc_runtime_suspend(struct device *dev) +{ + struct rtsx_usb_sdmmc *host = dev_get_drvdata(dev); + + host->mmc->caps &= ~MMC_CAP_NEEDS_POLL; + return 0; +} + +static int rtsx_usb_sdmmc_runtime_resume(struct device *dev) +{ + struct rtsx_usb_sdmmc *host = dev_get_drvdata(dev); + + host->mmc->caps |= MMC_CAP_NEEDS_POLL; + if (sdmmc_get_cd(host->mmc) == 1) + mmc_detect_change(host->mmc, 0); + return 0; +} +#endif + +static const struct dev_pm_ops rtsx_usb_sdmmc_dev_pm_ops = { + SET_RUNTIME_PM_OPS(rtsx_usb_sdmmc_runtime_suspend, + rtsx_usb_sdmmc_runtime_resume, NULL) +}; + static const struct platform_device_id rtsx_usb_sdmmc_ids[] = { { .name = "rtsx_usb_sdmmc", @@ -1443,6 +1470,7 @@ static struct platform_driver rtsx_usb_sdmmc_driver = { .id_table = rtsx_usb_sdmmc_ids, .driver = { .name = "rtsx_usb_sdmmc", + .pm = &rtsx_usb_sdmmc_dev_pm_ops, }, }; module_platform_driver(rtsx_usb_sdmmc_driver); diff --git 
a/drivers/mmc/host/s3cmci.c b/drivers/mmc/host/s3cmci.c index f77493604312..10f5219b3b40 100644 --- a/drivers/mmc/host/s3cmci.c +++ b/drivers/mmc/host/s3cmci.c @@ -26,7 +26,6 @@ #include #include #include -#include #include #include @@ -1406,18 +1405,7 @@ static int s3cmci_state_show(struct seq_file *seq, void *v) return 0; } -static int s3cmci_state_open(struct inode *inode, struct file *file) -{ - return single_open(file, s3cmci_state_show, inode->i_private); -} - -static const struct file_operations s3cmci_fops_state = { - .owner = THIS_MODULE, - .open = s3cmci_state_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(s3cmci_state); #define DBG_REG(_r) { .addr = S3C2410_SDI##_r, .name = #_r } @@ -1459,18 +1447,7 @@ static int s3cmci_regs_show(struct seq_file *seq, void *v) return 0; } -static int s3cmci_regs_open(struct inode *inode, struct file *file) -{ - return single_open(file, s3cmci_regs_show, inode->i_private); -} - -static const struct file_operations s3cmci_fops_regs = { - .owner = THIS_MODULE, - .open = s3cmci_regs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(s3cmci_regs); static void s3cmci_debugfs_attach(struct s3cmci_host *host) { @@ -1484,14 +1461,14 @@ static void s3cmci_debugfs_attach(struct s3cmci_host *host) host->debug_state = debugfs_create_file("state", 0444, host->debug_root, host, - &s3cmci_fops_state); + &s3cmci_state_fops); if (IS_ERR(host->debug_state)) dev_err(dev, "failed to create debug state file\n"); host->debug_regs = debugfs_create_file("regs", 0444, host->debug_root, host, - &s3cmci_fops_regs); + &s3cmci_regs_fops); if (IS_ERR(host->debug_regs)) dev_err(dev, "failed to create debug regs file\n"); @@ -1545,25 +1522,19 @@ static int s3cmci_probe_pdata(struct s3cmci_host *host) if (pdata->wprotect_invert) mmc->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH; - if (pdata->detect_invert) - mmc->caps2 |= MMC_CAP2_CD_ACTIVE_HIGH; - - if (gpio_is_valid(pdata->gpio_detect)) { - ret = mmc_gpio_request_cd(mmc, pdata->gpio_detect, 0); - if (ret) { - dev_err(&pdev->dev, "error requesting GPIO for CD %d\n", - ret); - return ret; - } + /* If we get -ENOENT we have no card detect GPIO line */ + ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, NULL); + if (ret != -ENOENT) { + dev_err(&pdev->dev, "error requesting GPIO for CD %d\n", + ret); + return ret; } - if (gpio_is_valid(pdata->gpio_wprotect)) { - ret = mmc_gpio_request_ro(mmc, pdata->gpio_wprotect); - if (ret) { - dev_err(&pdev->dev, "error requesting GPIO for WP %d\n", - ret); - return ret; - } + ret = mmc_gpiod_request_ro(host->mmc, "wp", 0, false, 0, NULL); + if (ret != -ENOENT) { + dev_err(&pdev->dev, "error requesting GPIO for WP %d\n", + ret); + return ret; } return 0; diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c index 057e24f4a620..6669e540851d 100644 --- a/drivers/mmc/host/sdhci-acpi.c +++ b/drivers/mmc/host/sdhci-acpi.c @@ -437,7 +437,8 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = { MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | MMC_CAP_CMD_DURING_TFR | MMC_CAP_WAIT_WHILE_BUSY, .flags = SDHCI_ACPI_RUNTIME_PM, - .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | + SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | SDHCI_QUIRK2_STOP_WITH_TC | SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400, @@ -448,6 +449,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = { static const struct sdhci_acpi_slot 
sdhci_acpi_slot_int_sdio = { .quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION | + SDHCI_QUIRK_NO_LED | SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, .quirks2 = SDHCI_QUIRK2_HOST_OFF_CARD_ON, .caps = MMC_CAP_NONREMOVABLE | MMC_CAP_POWER_OFF_CARD | @@ -462,7 +464,8 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sdio = { static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sd = { .flags = SDHCI_ACPI_SD_CD | SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL | SDHCI_ACPI_RUNTIME_PM, - .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | + SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON | SDHCI_QUIRK2_STOP_WITH_TC, .caps = MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_AGGRESSIVE_PM, diff --git a/drivers/mmc/host/sdhci-cadence.c b/drivers/mmc/host/sdhci-cadence.c index 7a343b87b5e5..e2412875dac5 100644 --- a/drivers/mmc/host/sdhci-cadence.c +++ b/drivers/mmc/host/sdhci-cadence.c @@ -14,7 +14,7 @@ */ #include -#include +#include #include #include #include diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c index f44e49014a44..d0d319398a54 100644 --- a/drivers/mmc/host/sdhci-esdhc-imx.c +++ b/drivers/mmc/host/sdhci-esdhc-imx.c @@ -12,7 +12,6 @@ #include #include #include -#include #include #include #include @@ -21,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -429,7 +427,7 @@ static u16 esdhc_readw_le(struct sdhci_host *host, int reg) val = readl(host->ioaddr + ESDHC_MIX_CTRL); else if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) /* the std tuning bits is in ACMD12_ERR for imx6sl */ - val = readl(host->ioaddr + SDHCI_ACMD12_ERR); + val = readl(host->ioaddr + SDHCI_AUTO_CMD_STATUS); } if (val & ESDHC_MIX_CTRL_EXE_TUNE) @@ -494,7 +492,7 @@ static void esdhc_writew_le(struct sdhci_host *host, u16 val, int reg) } writel(new_val , host->ioaddr + ESDHC_MIX_CTRL); } else if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) { - u32 v = readl(host->ioaddr + SDHCI_ACMD12_ERR); + u32 v = readl(host->ioaddr + SDHCI_AUTO_CMD_STATUS); u32 m = readl(host->ioaddr + ESDHC_MIX_CTRL); if (val & SDHCI_CTRL_TUNED_CLK) { v |= ESDHC_MIX_CTRL_SMPCLK_SEL; @@ -512,7 +510,7 @@ static void esdhc_writew_le(struct sdhci_host *host, u16 val, int reg) v &= ~ESDHC_MIX_CTRL_EXE_TUNE; } - writel(v, host->ioaddr + SDHCI_ACMD12_ERR); + writel(v, host->ioaddr + SDHCI_AUTO_CMD_STATUS); writel(m, host->ioaddr + ESDHC_MIX_CTRL); } return; @@ -957,9 +955,9 @@ static void esdhc_reset_tuning(struct sdhci_host *host) writel(ctrl, host->ioaddr + ESDHC_MIX_CTRL); writel(0, host->ioaddr + ESDHC_TUNE_CTRL_STATUS); } else if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) { - ctrl = readl(host->ioaddr + SDHCI_ACMD12_ERR); + ctrl = readl(host->ioaddr + SDHCI_AUTO_CMD_STATUS); ctrl &= ~ESDHC_MIX_CTRL_SMPCLK_SEL; - writel(ctrl, host->ioaddr + SDHCI_ACMD12_ERR); + writel(ctrl, host->ioaddr + SDHCI_AUTO_CMD_STATUS); } } } @@ -1139,8 +1137,12 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev, if (of_get_property(np, "fsl,wp-controller", NULL)) boarddata->wp_type = ESDHC_WP_CONTROLLER; - boarddata->wp_gpio = of_get_named_gpio(np, "wp-gpios", 0); - if (gpio_is_valid(boarddata->wp_gpio)) + /* + * If we have this property, then activate WP check. + * Retrieveing and requesting the actual WP GPIO will happen + * in the call to mmc_of_parse(). 
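+	 * (mmc_of_parse() looks up "wp-gpios" itself and claims the line
+	 * through mmc_gpiod_request_ro(), so only the wp_type flag needs
+	 * to be set here.)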
+ */ + if (of_property_read_bool(np, "wp-gpios")) boarddata->wp_type = ESDHC_WP_GPIO; of_property_read_u32(np, "fsl,tuning-step", &boarddata->tuning_step); @@ -1198,7 +1200,7 @@ static int sdhci_esdhc_imx_probe_nondt(struct platform_device *pdev, host->mmc->parent->platform_data); /* write_protect */ if (boarddata->wp_type == ESDHC_WP_GPIO) { - err = mmc_gpio_request_ro(host->mmc, boarddata->wp_gpio); + err = mmc_gpiod_request_ro(host->mmc, "wp", 0, false, 0, NULL); if (err) { dev_err(mmc_dev(host->mmc), "failed to request write-protect gpio!\n"); @@ -1210,7 +1212,7 @@ static int sdhci_esdhc_imx_probe_nondt(struct platform_device *pdev, /* card_detect */ switch (boarddata->cd_type) { case ESDHC_CD_GPIO: - err = mmc_gpio_request_cd(host->mmc, boarddata->cd_gpio, 0); + err = mmc_gpiod_request_cd(host->mmc, "cd", 0, false, 0, NULL); if (err) { dev_err(mmc_dev(host->mmc), "failed to request card-detect gpio!\n"); @@ -1317,7 +1319,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev) /* clear tuning bits in case ROM has set it already */ writel(0x0, host->ioaddr + ESDHC_MIX_CTRL); - writel(0x0, host->ioaddr + SDHCI_ACMD12_ERR); + writel(0x0, host->ioaddr + SDHCI_AUTO_CMD_STATUS); writel(0x0, host->ioaddr + ESDHC_TUNE_CTRL_STATUS); } diff --git a/drivers/mmc/host/sdhci-esdhc.h b/drivers/mmc/host/sdhci-esdhc.h index 3f16d9c90ba2..39dbbd6eaf28 100644 --- a/drivers/mmc/host/sdhci-esdhc.h +++ b/drivers/mmc/host/sdhci-esdhc.h @@ -59,9 +59,33 @@ /* Tuning Block Control Register */ #define ESDHC_TBCTL 0x120 +#define ESDHC_HS400_WNDW_ADJUST 0x00000040 +#define ESDHC_HS400_MODE 0x00000010 #define ESDHC_TB_EN 0x00000004 #define ESDHC_TBPTR 0x128 +/* SD Clock Control Register */ +#define ESDHC_SDCLKCTL 0x144 +#define ESDHC_LPBK_CLK_SEL 0x80000000 +#define ESDHC_CMD_CLK_CTL 0x00008000 + +/* SD Timing Control Register */ +#define ESDHC_SDTIMNGCTL 0x148 +#define ESDHC_FLW_CTL_BG 0x00008000 + +/* DLL Config 0 Register */ +#define ESDHC_DLLCFG0 0x160 +#define ESDHC_DLL_ENABLE 0x80000000 +#define ESDHC_DLL_FREQ_SEL 0x08000000 + +/* DLL Config 1 Register */ +#define ESDHC_DLLCFG1 0x164 +#define ESDHC_DLL_PD_PULSE_STRETCH_SEL 0x80000000 + +/* DLL Status 0 Register */ +#define ESDHC_DLLSTAT0 0x170 +#define ESDHC_DLL_STS_SLV_LOCK 0x08000000 + /* Control Register for DMA transfer */ #define ESDHC_DMA_SYSCTL 0x40c #define ESDHC_PERIPHERAL_CLK_SEL 0x00080000 diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c index 3cc8bfee6c18..d6c9ebd8d263 100644 --- a/drivers/mmc/host/sdhci-msm.c +++ b/drivers/mmc/host/sdhci-msm.c @@ -232,6 +232,7 @@ struct sdhci_msm_variant_ops { */ struct sdhci_msm_variant_info { bool mci_removed; + bool restore_dll_config; const struct sdhci_msm_variant_ops *var_ops; const struct sdhci_msm_offset *offset; }; @@ -256,8 +257,11 @@ struct sdhci_msm_host { bool pwr_irq_flag; u32 caps_0; bool mci_removed; + bool restore_dll_config; const struct sdhci_msm_variant_ops *var_ops; const struct sdhci_msm_offset *offset; + bool use_cdr; + u32 transfer_mode; }; static const struct sdhci_msm_offset *sdhci_priv_msm_offset(struct sdhci_host *host) @@ -1025,6 +1029,69 @@ out: return ret; } +static bool sdhci_msm_is_tuning_needed(struct sdhci_host *host) +{ + struct mmc_ios *ios = &host->mmc->ios; + + /* + * Tuning is required for SDR104, HS200 and HS400 cards and + * if clock frequency is greater than 100MHz in these modes. 
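+	 * HS400 with enhanced strobe is the exception: the data strobe
+	 * replaces sample-point tuning, so enhanced_strobe mode skips
+	 * tuning entirely (hence the ios->enhanced_strobe check below).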
+ */ + if (host->clock <= CORE_FREQ_100MHZ || + !(ios->timing == MMC_TIMING_MMC_HS400 || + ios->timing == MMC_TIMING_MMC_HS200 || + ios->timing == MMC_TIMING_UHS_SDR104) || + ios->enhanced_strobe) + return false; + + return true; +} + +static int sdhci_msm_restore_sdr_dll_config(struct sdhci_host *host) +{ + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); + int ret; + + /* + * SDR DLL comes into picture only for timing modes which needs + * tuning. + */ + if (!sdhci_msm_is_tuning_needed(host)) + return 0; + + /* Reset the tuning block */ + ret = msm_init_cm_dll(host); + if (ret) + return ret; + + /* Restore the tuning block */ + ret = msm_config_cm_dll_phase(host, msm_host->saved_tuning_phase); + + return ret; +} + +static void sdhci_msm_set_cdr(struct sdhci_host *host, bool enable) +{ + const struct sdhci_msm_offset *msm_offset = sdhci_priv_msm_offset(host); + u32 config, oldconfig = readl_relaxed(host->ioaddr + + msm_offset->core_dll_config); + + config = oldconfig; + if (enable) { + config |= CORE_CDR_EN; + config &= ~CORE_CDR_EXT_EN; + } else { + config &= ~CORE_CDR_EN; + config |= CORE_CDR_EXT_EN; + } + + if (config != oldconfig) { + writel_relaxed(config, host->ioaddr + + msm_offset->core_dll_config); + } +} + static int sdhci_msm_execute_tuning(struct mmc_host *mmc, u32 opcode) { struct sdhci_host *host = mmc_priv(mmc); @@ -1035,15 +1102,14 @@ static int sdhci_msm_execute_tuning(struct mmc_host *mmc, u32 opcode) struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); - /* - * Tuning is required for SDR104, HS200 and HS400 cards and - * if clock frequency is greater than 100MHz in these modes. - */ - if (host->clock <= CORE_FREQ_100MHZ || - !(ios.timing == MMC_TIMING_MMC_HS400 || - ios.timing == MMC_TIMING_MMC_HS200 || - ios.timing == MMC_TIMING_UHS_SDR104)) + if (!sdhci_msm_is_tuning_needed(host)) { + msm_host->use_cdr = false; + sdhci_msm_set_cdr(host, false); return 0; + } + + /* Clock-Data-Recovery used to dynamically adjust RX sampling point */ + msm_host->use_cdr = true; /* * For HS400 tuning in HS200 timing requires: @@ -1069,7 +1135,6 @@ retry: if (rc) return rc; - msm_host->saved_tuning_phase = phase; rc = mmc_send_tuning(mmc, opcode, NULL); if (!rc) { /* Tuning is successful at this tuning point */ @@ -1094,6 +1159,7 @@ retry: rc = msm_config_cm_dll_phase(host, phase); if (rc) return rc; + msm_host->saved_tuning_phase = phase; dev_dbg(mmc_dev(mmc), "%s: Setting the tuning phase to %d\n", mmc_hostname(mmc), phase); } else { @@ -1525,6 +1591,19 @@ static int __sdhci_msm_check_write(struct sdhci_host *host, u16 val, int reg) case SDHCI_POWER_CONTROL: req_type = !val ? 
REQ_BUS_OFF : REQ_BUS_ON; break; + case SDHCI_TRANSFER_MODE: + msm_host->transfer_mode = val; + break; + case SDHCI_COMMAND: + if (!msm_host->use_cdr) + break; + if ((msm_host->transfer_mode & SDHCI_TRNS_READ) && + SDHCI_GET_CMD(val) != MMC_SEND_TUNING_BLOCK_HS200 && + SDHCI_GET_CMD(val) != MMC_SEND_TUNING_BLOCK) + sdhci_msm_set_cdr(host, true); + else + sdhci_msm_set_cdr(host, false); + break; } if (req_type) { @@ -1616,7 +1695,6 @@ static const struct sdhci_msm_variant_ops v5_var_ops = { }; static const struct sdhci_msm_variant_info sdhci_msm_mci_var = { - .mci_removed = false, .var_ops = &mci_var_ops, .offset = &sdhci_msm_mci_offset, }; @@ -1627,9 +1705,17 @@ static const struct sdhci_msm_variant_info sdhci_msm_v5_var = { .offset = &sdhci_msm_v5_offset, }; +static const struct sdhci_msm_variant_info sdm845_sdhci_var = { + .mci_removed = true, + .restore_dll_config = true, + .var_ops = &v5_var_ops, + .offset = &sdhci_msm_v5_offset, +}; + static const struct of_device_id sdhci_msm_dt_match[] = { {.compatible = "qcom,sdhci-msm-v4", .data = &sdhci_msm_mci_var}, {.compatible = "qcom,sdhci-msm-v5", .data = &sdhci_msm_v5_var}, + {.compatible = "qcom,sdm845-sdhci", .data = &sdm845_sdhci_var}, {}, }; @@ -1689,6 +1775,7 @@ static int sdhci_msm_probe(struct platform_device *pdev) var_info = of_device_get_match_data(&pdev->dev); msm_host->mci_removed = var_info->mci_removed; + msm_host->restore_dll_config = var_info->restore_dll_config; msm_host->var_ops = var_info->var_ops; msm_host->offset = var_info->offset; @@ -1910,8 +1997,7 @@ static int sdhci_msm_remove(struct platform_device *pdev) return 0; } -#ifdef CONFIG_PM -static int sdhci_msm_runtime_suspend(struct device *dev) +static __maybe_unused int sdhci_msm_runtime_suspend(struct device *dev) { struct sdhci_host *host = dev_get_drvdata(dev); struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); @@ -1923,16 +2009,26 @@ static int sdhci_msm_runtime_suspend(struct device *dev) return 0; } -static int sdhci_msm_runtime_resume(struct device *dev) +static __maybe_unused int sdhci_msm_runtime_resume(struct device *dev) { struct sdhci_host *host = dev_get_drvdata(dev); struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); + int ret; - return clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks), + ret = clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks), msm_host->bulk_clks); + if (ret) + return ret; + /* + * Whenever core-clock is gated dynamically, it's needed to + * restore the SDR DLL settings when the clock is ungated. 
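+	 * sdhci_msm_restore_sdr_dll_config() re-initializes the DLL and
+	 * re-applies the phase saved by the last successful tuning run,
+	 * avoiding a full retune on every runtime resume.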
+ */ + if (msm_host->restore_dll_config && msm_host->clk_rate) + return sdhci_msm_restore_sdr_dll_config(host); + + return 0; } -#endif static const struct dev_pm_ops sdhci_msm_pm_ops = { SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c index 142c4b802f31..c9e3e050ccc8 100644 --- a/drivers/mmc/host/sdhci-of-arasan.c +++ b/drivers/mmc/host/sdhci-of-arasan.c @@ -231,25 +231,6 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock) } } -static void sdhci_arasan_am654_set_clock(struct sdhci_host *host, - unsigned int clock) -{ - struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); - struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host); - - if (sdhci_arasan->is_phy_on) { - phy_power_off(sdhci_arasan->phy); - sdhci_arasan->is_phy_on = false; - } - - sdhci_set_clock(host, clock); - - if (clock > PHY_CLK_TOO_SLOW_HZ) { - phy_power_on(sdhci_arasan->phy); - sdhci_arasan->is_phy_on = true; - } -} - static void sdhci_arasan_hs400_enhanced_strobe(struct mmc_host *mmc, struct mmc_ios *ios) { @@ -335,29 +316,6 @@ static struct sdhci_arasan_of_data sdhci_arasan_data = { .pdata = &sdhci_arasan_pdata, }; -static const struct sdhci_ops sdhci_arasan_am654_ops = { - .set_clock = sdhci_arasan_am654_set_clock, - .get_max_clock = sdhci_pltfm_clk_get_max_clock, - .get_timeout_clock = sdhci_pltfm_clk_get_max_clock, - .set_bus_width = sdhci_set_bus_width, - .reset = sdhci_arasan_reset, - .set_uhs_signaling = sdhci_set_uhs_signaling, -}; - -static const struct sdhci_pltfm_data sdhci_arasan_am654_pdata = { - .ops = &sdhci_arasan_am654_ops, - .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | - SDHCI_QUIRK_INVERTED_WRITE_PROTECT | - SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12, - .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | - SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN | - SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400, -}; - -static const struct sdhci_arasan_of_data sdhci_arasan_am654_data = { - .pdata = &sdhci_arasan_am654_pdata, -}; - static u32 sdhci_arasan_cqhci_irq(struct sdhci_host *host, u32 intmask) { int cmd_error = 0; @@ -520,10 +478,6 @@ static const struct of_device_id sdhci_arasan_of_match[] = { .compatible = "rockchip,rk3399-sdhci-5.1", .data = &sdhci_arasan_rk3399_data, }, - { - .compatible = "ti,am654-sdhci-5.1", - .data = &sdhci_arasan_am654_data, - }, /* Generic compatible below here */ { .compatible = "arasan,sdhci-8.9a", diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c index 86fc9f022002..4e669b4edfc1 100644 --- a/drivers/mmc/host/sdhci-of-esdhc.c +++ b/drivers/mmc/host/sdhci-of-esdhc.c @@ -78,6 +78,8 @@ struct sdhci_esdhc { u8 vendor_ver; u8 spec_ver; bool quirk_incorrect_hostver; + bool quirk_limited_clk_division; + bool quirk_unreliable_pulse_detection; bool quirk_fixup_tuning; unsigned int peripheral_clock; const struct esdhc_clk_fixup *clk_fixup; @@ -528,8 +530,12 @@ static void esdhc_clock_enable(struct sdhci_host *host, bool enable) /* Wait max 20 ms */ timeout = ktime_add_ms(ktime_get(), 20); val = ESDHC_CLOCK_STABLE; - while (!(sdhci_readl(host, ESDHC_PRSSTAT) & val)) { - if (ktime_after(ktime_get(), timeout)) { + while (1) { + bool timedout = ktime_after(ktime_get(), timeout); + + if (sdhci_readl(host, ESDHC_PRSSTAT) & val) + break; + if (timedout) { pr_err("%s: Internal clock never stabilised.\n", mmc_hostname(host->mmc)); break; @@ -544,6 +550,7 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock) struct sdhci_esdhc *esdhc = 
sdhci_pltfm_priv(pltfm_host); int pre_div = 1; int div = 1; + int division; ktime_t timeout; long fixup = 0; u32 temp; @@ -579,6 +586,26 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock) while (host->max_clk / pre_div / div > clock && div < 16) div++; + if (esdhc->quirk_limited_clk_division && + clock == MMC_HS200_MAX_DTR && + (host->mmc->ios.timing == MMC_TIMING_MMC_HS400 || + host->flags & SDHCI_HS400_TUNING)) { + division = pre_div * div; + if (division <= 4) { + pre_div = 4; + div = 1; + } else if (division <= 8) { + pre_div = 4; + div = 2; + } else if (division <= 12) { + pre_div = 4; + div = 3; + } else { + pr_warn("%s: using unsupported clock division.\n", + mmc_hostname(host->mmc)); + } + } + dev_dbg(mmc_dev(host->mmc), "desired SD clock: %d, actual: %d\n", clock, host->max_clk / pre_div / div); host->mmc->actual_clock = host->max_clk / pre_div / div; @@ -592,10 +619,36 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock) | (pre_div << ESDHC_PREDIV_SHIFT)); sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL); + if (host->mmc->ios.timing == MMC_TIMING_MMC_HS400 && + clock == MMC_HS200_MAX_DTR) { + temp = sdhci_readl(host, ESDHC_TBCTL); + sdhci_writel(host, temp | ESDHC_HS400_MODE, ESDHC_TBCTL); + temp = sdhci_readl(host, ESDHC_SDCLKCTL); + sdhci_writel(host, temp | ESDHC_CMD_CLK_CTL, ESDHC_SDCLKCTL); + esdhc_clock_enable(host, true); + + temp = sdhci_readl(host, ESDHC_DLLCFG0); + temp |= ESDHC_DLL_ENABLE; + if (host->mmc->actual_clock == MMC_HS200_MAX_DTR) + temp |= ESDHC_DLL_FREQ_SEL; + sdhci_writel(host, temp, ESDHC_DLLCFG0); + temp = sdhci_readl(host, ESDHC_TBCTL); + sdhci_writel(host, temp | ESDHC_HS400_WNDW_ADJUST, ESDHC_TBCTL); + + esdhc_clock_enable(host, false); + temp = sdhci_readl(host, ESDHC_DMA_SYSCTL); + temp |= ESDHC_FLUSH_ASYNC_FIFO; + sdhci_writel(host, temp, ESDHC_DMA_SYSCTL); + } + /* Wait max 20 ms */ timeout = ktime_add_ms(ktime_get(), 20); - while (!(sdhci_readl(host, ESDHC_PRSSTAT) & ESDHC_CLOCK_STABLE)) { - if (ktime_after(ktime_get(), timeout)) { + while (1) { + bool timedout = ktime_after(ktime_get(), timeout); + + if (sdhci_readl(host, ESDHC_PRSSTAT) & ESDHC_CLOCK_STABLE) + break; + if (timedout) { pr_err("%s: Internal clock never stabilised.\n", mmc_hostname(host->mmc)); return; @@ -603,6 +656,7 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock) udelay(10); } + temp = sdhci_readl(host, ESDHC_SYSTEM_CONTROL); temp |= ESDHC_CLOCK_SDCLKEN; sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL); } @@ -631,6 +685,8 @@ static void esdhc_pltfm_set_bus_width(struct sdhci_host *host, int width) static void esdhc_reset(struct sdhci_host *host, u8 mask) { + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); u32 val; sdhci_reset(host, mask); @@ -642,6 +698,12 @@ static void esdhc_reset(struct sdhci_host *host, u8 mask) val = sdhci_readl(host, ESDHC_TBCTL); val &= ~ESDHC_TB_EN; sdhci_writel(host, val, ESDHC_TBCTL); + + if (esdhc->quirk_unreliable_pulse_detection) { + val = sdhci_readl(host, ESDHC_DLLCFG1); + val &= ~ESDHC_DLL_PD_PULSE_STRETCH_SEL; + sdhci_writel(host, val, ESDHC_DLLCFG1); + } } } @@ -728,25 +790,50 @@ static struct soc_device_attribute soc_fixup_tuning[] = { { }, }; -static int esdhc_execute_tuning(struct mmc_host *mmc, u32 opcode) +static void esdhc_tuning_block_enable(struct sdhci_host *host, bool enable) { - struct sdhci_host *host = mmc_priv(mmc); - struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); - struct 
sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); u32 val; - /* Use tuning block for tuning procedure */ esdhc_clock_enable(host, false); + val = sdhci_readl(host, ESDHC_DMA_SYSCTL); val |= ESDHC_FLUSH_ASYNC_FIFO; sdhci_writel(host, val, ESDHC_DMA_SYSCTL); val = sdhci_readl(host, ESDHC_TBCTL); - val |= ESDHC_TB_EN; + if (enable) + val |= ESDHC_TB_EN; + else + val &= ~ESDHC_TB_EN; sdhci_writel(host, val, ESDHC_TBCTL); + esdhc_clock_enable(host, true); +} + +static int esdhc_execute_tuning(struct mmc_host *mmc, u32 opcode) +{ + struct sdhci_host *host = mmc_priv(mmc); + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); + bool hs400_tuning; + u32 val; + int ret; + + if (esdhc->quirk_limited_clk_division && + host->flags & SDHCI_HS400_TUNING) + esdhc_of_set_clock(host, host->clock); + + esdhc_tuning_block_enable(host, true); + + hs400_tuning = host->flags & SDHCI_HS400_TUNING; + ret = sdhci_execute_tuning(mmc, opcode); + + if (hs400_tuning) { + val = sdhci_readl(host, ESDHC_SDTIMNGCTL); + val |= ESDHC_FLW_CTL_BG; + sdhci_writel(host, val, ESDHC_SDTIMNGCTL); + } - sdhci_execute_tuning(mmc, opcode); if (host->tuning_err == -EAGAIN && esdhc->quirk_fixup_tuning) { /* program TBPTR[TB_WNDW_END_PTR] = 3*DIV_RATIO and @@ -765,7 +852,16 @@ static int esdhc_execute_tuning(struct mmc_host *mmc, u32 opcode) sdhci_writel(host, val, ESDHC_TBCTL); sdhci_execute_tuning(mmc, opcode); } - return 0; + return ret; +} + +static void esdhc_set_uhs_signaling(struct sdhci_host *host, + unsigned int timing) +{ + if (timing == MMC_TIMING_MMC_HS400) + esdhc_tuning_block_enable(host, true); + else + sdhci_set_uhs_signaling(host, timing); } #ifdef CONFIG_PM_SLEEP @@ -814,7 +910,7 @@ static const struct sdhci_ops sdhci_esdhc_be_ops = { .adma_workaround = esdhc_of_adma_workaround, .set_bus_width = esdhc_pltfm_set_bus_width, .reset = esdhc_reset, - .set_uhs_signaling = sdhci_set_uhs_signaling, + .set_uhs_signaling = esdhc_set_uhs_signaling, }; static const struct sdhci_ops sdhci_esdhc_le_ops = { @@ -831,7 +927,7 @@ static const struct sdhci_ops sdhci_esdhc_le_ops = { .adma_workaround = esdhc_of_adma_workaround, .set_bus_width = esdhc_pltfm_set_bus_width, .reset = esdhc_reset, - .set_uhs_signaling = sdhci_set_uhs_signaling, + .set_uhs_signaling = esdhc_set_uhs_signaling, }; static const struct sdhci_pltfm_data sdhci_esdhc_be_pdata = { @@ -857,6 +953,16 @@ static struct soc_device_attribute soc_incorrect_hostver[] = { { }, }; +static struct soc_device_attribute soc_fixup_sdhc_clkdivs[] = { + { .family = "QorIQ LX2160A", .revision = "1.0", }, + { }, +}; + +static struct soc_device_attribute soc_unreliable_pulse_detection[] = { + { .family = "QorIQ LX2160A", .revision = "1.0", }, + { }, +}; + static void esdhc_init(struct platform_device *pdev, struct sdhci_host *host) { const struct of_device_id *match; @@ -879,6 +985,16 @@ static void esdhc_init(struct platform_device *pdev, struct sdhci_host *host) else esdhc->quirk_incorrect_hostver = false; + if (soc_device_match(soc_fixup_sdhc_clkdivs)) + esdhc->quirk_limited_clk_division = true; + else + esdhc->quirk_limited_clk_division = false; + + if (soc_device_match(soc_unreliable_pulse_detection)) + esdhc->quirk_unreliable_pulse_detection = true; + else + esdhc->quirk_unreliable_pulse_detection = false; + match = of_match_node(sdhci_esdhc_of_match, pdev->dev.of_node); if (match) esdhc->clk_fixup = match->data; @@ -909,6 +1025,12 @@ static void esdhc_init(struct platform_device *pdev, struct sdhci_host *host) } } 
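The reworked stabilisation waits above (and the matching hunks for sdhci-omap, sdhci-xenon-phy and sdhci-xenon further down) all follow the same pattern: the deadline is sampled before the status register is read, and the loop only gives up when a read fails after the deadline has already passed. On a preemptible kernel this avoids a spurious "Internal clock never stabilised" error when the thread is scheduled out between a failed read and the timeout check; the bit always gets one last look. A minimal sketch of the idiom as a standalone helper; the helper name is hypothetical (each call site open-codes it), and <linux/ktime.h> plus <linux/delay.h> are assumed:

static int esdhc_wait_for_bit(struct sdhci_host *host, int reg, u32 bit,
			      unsigned int timeout_ms)
{
	ktime_t timeout = ktime_add_ms(ktime_get(), timeout_ms);

	while (1) {
		/* Sample the deadline first ... */
		bool timedout = ktime_after(ktime_get(), timeout);

		/* ... then the hardware, so a late success still wins. */
		if (sdhci_readl(host, reg) & bit)
			return 0;
		if (timedout)
			return -ETIMEDOUT;
		udelay(10);
	}
}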
+static int esdhc_hs400_prepare_ddr(struct mmc_host *mmc) +{ + esdhc_tuning_block_enable(mmc_priv(mmc), false); + return 0; +} + static int sdhci_esdhc_probe(struct platform_device *pdev) { struct sdhci_host *host; @@ -932,6 +1054,7 @@ static int sdhci_esdhc_probe(struct platform_device *pdev) host->mmc_host_ops.start_signal_voltage_switch = esdhc_signal_voltage_switch; host->mmc_host_ops.execute_tuning = esdhc_execute_tuning; + host->mmc_host_ops.hs400_prepare_ddr = esdhc_hs400_prepare_ddr; host->tuning_delay = 1; esdhc_init(pdev, host); diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c index d264391616f9..c11c18a9aacb 100644 --- a/drivers/mmc/host/sdhci-omap.c +++ b/drivers/mmc/host/sdhci-omap.c @@ -27,6 +27,7 @@ #include #include #include +#include #include "sdhci-pltfm.h" @@ -115,6 +116,7 @@ struct sdhci_omap_host { struct pinctrl *pinctrl; struct pinctrl_state **pinctrl_state; + bool is_tuning; }; static void sdhci_omap_start_clock(struct sdhci_omap_host *omap_host); @@ -220,8 +222,12 @@ static void sdhci_omap_conf_bus_power(struct sdhci_omap_host *omap_host, /* wait 1ms */ timeout = ktime_add_ms(ktime_get(), SDHCI_OMAP_TIMEOUT); - while (!(sdhci_omap_readl(omap_host, SDHCI_OMAP_HCTL) & HCTL_SDBP)) { - if (WARN_ON(ktime_after(ktime_get(), timeout))) + while (1) { + bool timedout = ktime_after(ktime_get(), timeout); + + if (sdhci_omap_readl(omap_host, SDHCI_OMAP_HCTL) & HCTL_SDBP) + break; + if (WARN_ON(timedout)) return; usleep_range(5, 10); } @@ -285,19 +291,19 @@ static int sdhci_omap_execute_tuning(struct mmc_host *mmc, u32 opcode) struct sdhci_host *host = mmc_priv(mmc); struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host); + struct thermal_zone_device *thermal_dev; struct device *dev = omap_host->dev; struct mmc_ios *ios = &mmc->ios; u32 start_window = 0, max_window = 0; + bool single_point_failure = false; bool dcrc_was_enabled = false; u8 cur_match, prev_match = 0; u32 length = 0, max_len = 0; u32 phase_delay = 0; + int temperature; int ret = 0; u32 reg; - - pltfm_host = sdhci_priv(host); - omap_host = sdhci_pltfm_priv(pltfm_host); - dev = omap_host->dev; + int i; /* clock tuning is not needed for upto 52MHz */ if (ios->clock <= 52000000) @@ -307,6 +313,16 @@ static int sdhci_omap_execute_tuning(struct mmc_host *mmc, u32 opcode) if (ios->timing == MMC_TIMING_UHS_SDR50 && !(reg & CAPA2_TSDR50)) return 0; + thermal_dev = thermal_zone_get_zone_by_name("cpu_thermal"); + if (IS_ERR(thermal_dev)) { + dev_err(dev, "Unable to get thermal zone for tuning\n"); + return PTR_ERR(thermal_dev); + } + + ret = thermal_zone_get_temp(thermal_dev, &temperature); + if (ret) + return ret; + reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_DLL); reg |= DLL_SWT; sdhci_omap_writel(omap_host, SDHCI_OMAP_DLL, reg); @@ -322,6 +338,13 @@ static int sdhci_omap_execute_tuning(struct mmc_host *mmc, u32 opcode) dcrc_was_enabled = true; } + omap_host->is_tuning = true; + + /* + * Stage 1: Search for a maximum pass window ignoring any + * any single point failures. 
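+	 * (A lone failing phase inside an otherwise passing window still
+	 * counts toward the window length; stage 2 then checks whether
+	 * the chosen value landed next to such a failure.)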
If the tuning value ends up + * near it, move away from it in stage 2 below + */ while (phase_delay <= MAX_PHASE_DELAY) { sdhci_omap_set_dll(omap_host, phase_delay); @@ -329,10 +352,15 @@ static int sdhci_omap_execute_tuning(struct mmc_host *mmc, u32 opcode) if (cur_match) { if (prev_match) { length++; + } else if (single_point_failure) { + /* ignore single point failure */ + length++; } else { start_window = phase_delay; length = 1; } + } else { + single_point_failure = prev_match; } if (length > max_len) { @@ -350,18 +378,84 @@ static int sdhci_omap_execute_tuning(struct mmc_host *mmc, u32 opcode) goto tuning_error; } + /* + * Assign tuning value as a ratio of maximum pass window based + * on temperature + */ + if (temperature < -20000) + phase_delay = min(max_window + 4 * max_len - 24, + max_window + + DIV_ROUND_UP(13 * max_len, 16) * 4); + else if (temperature < 20000) + phase_delay = max_window + DIV_ROUND_UP(9 * max_len, 16) * 4; + else if (temperature < 40000) + phase_delay = max_window + DIV_ROUND_UP(8 * max_len, 16) * 4; + else if (temperature < 70000) + phase_delay = max_window + DIV_ROUND_UP(7 * max_len, 16) * 4; + else if (temperature < 90000) + phase_delay = max_window + DIV_ROUND_UP(5 * max_len, 16) * 4; + else if (temperature < 120000) + phase_delay = max_window + DIV_ROUND_UP(4 * max_len, 16) * 4; + else + phase_delay = max_window + DIV_ROUND_UP(3 * max_len, 16) * 4; + + /* + * Stage 2: Search for a single point failure near the chosen tuning + * value in two steps. First in the +3 to +10 range and then in the + * +2 to -10 range. If found, move away from it in the appropriate + * direction by the appropriate amount depending on the temperature. + */ + for (i = 3; i <= 10; i++) { + sdhci_omap_set_dll(omap_host, phase_delay + i); + + if (mmc_send_tuning(mmc, opcode, NULL)) { + if (temperature < 10000) + phase_delay += i + 6; + else if (temperature < 20000) + phase_delay += i - 12; + else if (temperature < 70000) + phase_delay += i - 8; + else + phase_delay += i - 6; + + goto single_failure_found; + } + } + + for (i = 2; i >= -10; i--) { + sdhci_omap_set_dll(omap_host, phase_delay + i); + + if (mmc_send_tuning(mmc, opcode, NULL)) { + if (temperature < 10000) + phase_delay += i + 12; + else if (temperature < 20000) + phase_delay += i + 8; + else if (temperature < 70000) + phase_delay += i + 8; + else if (temperature < 90000) + phase_delay += i + 10; + else + phase_delay += i + 12; + + goto single_failure_found; + } + } + +single_failure_found: reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_AC12); if (!(reg & AC12_SCLK_SEL)) { ret = -EIO; goto tuning_error; } - phase_delay = max_window + 4 * (max_len >> 1); sdhci_omap_set_dll(omap_host, phase_delay); + omap_host->is_tuning = false; + goto ret; tuning_error: + omap_host->is_tuning = false; dev_err(dev, "Tuning failed\n"); sdhci_omap_disable_tuning(omap_host); @@ -653,8 +747,12 @@ static void sdhci_omap_init_74_clocks(struct sdhci_host *host, u8 power_mode) /* wait 1ms */ timeout = ktime_add_ms(ktime_get(), SDHCI_OMAP_TIMEOUT); - while (!(sdhci_omap_readl(omap_host, SDHCI_OMAP_STAT) & INT_CC_EN)) { - if (WARN_ON(ktime_after(ktime_get(), timeout))) + while (1) { + bool timedout = ktime_after(ktime_get(), timeout); + + if (sdhci_omap_readl(omap_host, SDHCI_OMAP_STAT) & INT_CC_EN) + break; + if (WARN_ON(timedout)) return; usleep_range(5, 10); } @@ -687,6 +785,18 @@ static void sdhci_omap_set_uhs_signaling(struct sdhci_host *host, sdhci_omap_start_clock(omap_host); } +void sdhci_omap_reset(struct sdhci_host *host, u8 mask) +{ + struct 
sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host); + + /* Don't reset data lines during tuning operation */ + if (omap_host->is_tuning) + mask &= ~SDHCI_RESET_DATA; + + sdhci_reset(host, mask); +} + static struct sdhci_ops sdhci_omap_ops = { .set_clock = sdhci_omap_set_clock, .set_power = sdhci_omap_set_power, @@ -695,7 +805,7 @@ static struct sdhci_ops sdhci_omap_ops = { .get_min_clock = sdhci_omap_get_min_clock, .set_bus_width = sdhci_omap_set_bus_width, .platform_send_init_74_clocks = sdhci_omap_init_74_clocks, - .reset = sdhci_reset, + .reset = sdhci_omap_reset, .set_uhs_signaling = sdhci_omap_set_uhs_signaling, }; diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c index c4115bae5db1..2a6eba74b94e 100644 --- a/drivers/mmc/host/sdhci-pci-core.c +++ b/drivers/mmc/host/sdhci-pci-core.c @@ -710,11 +710,15 @@ static int intel_execute_tuning(struct mmc_host *mmc, u32 opcode) static void byt_probe_slot(struct sdhci_pci_slot *slot) { struct mmc_host_ops *ops = &slot->host->mmc_host_ops; + struct device *dev = &slot->chip->pdev->dev; + struct mmc_host *mmc = slot->host->mmc; byt_read_dsm(slot); ops->execute_tuning = intel_execute_tuning; ops->start_signal_voltage_switch = intel_start_signal_voltage_switch; + + device_property_read_u32(dev, "max-frequency", &mmc->f_max); } static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot) @@ -937,7 +941,8 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot) static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = { .allow_runtime_pm = true, .probe_slot = byt_emmc_probe_slot, - .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | + SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400 | SDHCI_QUIRK2_STOP_WITH_TC, @@ -957,7 +962,8 @@ static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = { .runtime_suspend = glk_runtime_suspend, .runtime_resume = glk_runtime_resume, #endif - .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | + SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400 | SDHCI_QUIRK2_STOP_WITH_TC, @@ -966,7 +972,8 @@ static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = { }; static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = { - .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | + SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_HOST_OFF_CARD_ON | SDHCI_QUIRK2_PRESET_VALUE_BROKEN, .allow_runtime_pm = true, @@ -976,7 +983,8 @@ static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = { }; static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = { - .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | + SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_HOST_OFF_CARD_ON | SDHCI_QUIRK2_PRESET_VALUE_BROKEN, .allow_runtime_pm = true, @@ -986,7 +994,8 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = { }; static const struct sdhci_pci_fixes sdhci_intel_byt_sd = { - .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, + .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | + SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON | SDHCI_QUIRK2_PRESET_VALUE_BROKEN | SDHCI_QUIRK2_STOP_WITH_TC, diff --git a/drivers/mmc/host/sdhci-xenon-phy.c b/drivers/mmc/host/sdhci-xenon-phy.c index 5956e90380e8..5b5eb53a63d2 100644 --- a/drivers/mmc/host/sdhci-xenon-phy.c +++ 
b/drivers/mmc/host/sdhci-xenon-phy.c @@ -357,9 +357,13 @@ static int xenon_emmc_phy_enable_dll(struct sdhci_host *host) /* Wait max 32 ms */ timeout = ktime_add_ms(ktime_get(), 32); - while (!(sdhci_readw(host, XENON_SLOT_EXT_PRESENT_STATE) & - XENON_DLL_LOCK_STATE)) { - if (ktime_after(ktime_get(), timeout)) { + while (1) { + bool timedout = ktime_after(ktime_get(), timeout); + + if (sdhci_readw(host, XENON_SLOT_EXT_PRESENT_STATE) & + XENON_DLL_LOCK_STATE) + break; + if (timedout) { dev_err(mmc_dev(host->mmc), "Wait for DLL Lock time-out\n"); return -ETIMEDOUT; } diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c index 4d0791f6ec23..a0b5089b3274 100644 --- a/drivers/mmc/host/sdhci-xenon.c +++ b/drivers/mmc/host/sdhci-xenon.c @@ -34,9 +34,13 @@ static int xenon_enable_internal_clk(struct sdhci_host *host) sdhci_writel(host, reg, SDHCI_CLOCK_CONTROL); /* Wait max 20 ms */ timeout = ktime_add_ms(ktime_get(), 20); - while (!((reg = sdhci_readw(host, SDHCI_CLOCK_CONTROL)) - & SDHCI_CLOCK_INT_STABLE)) { - if (ktime_after(ktime_get(), timeout)) { + while (1) { + bool timedout = ktime_after(ktime_get(), timeout); + + reg = sdhci_readw(host, SDHCI_CLOCK_CONTROL); + if (reg & SDHCI_CLOCK_INT_STABLE) + break; + if (timedout) { dev_err(mmc_dev(host->mmc), "Internal clock never stabilised.\n"); return -ETIMEDOUT; } diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index df05352b6a4a..a22e11a65658 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -82,8 +82,8 @@ void sdhci_dumpregs(struct sdhci_host *host) SDHCI_DUMP("Int enab: 0x%08x | Sig enab: 0x%08x\n", sdhci_readl(host, SDHCI_INT_ENABLE), sdhci_readl(host, SDHCI_SIGNAL_ENABLE)); - SDHCI_DUMP("AC12 err: 0x%08x | Slot int: 0x%08x\n", - sdhci_readw(host, SDHCI_ACMD12_ERR), + SDHCI_DUMP("ACmd stat: 0x%08x | Slot int: 0x%08x\n", + sdhci_readw(host, SDHCI_AUTO_CMD_STATUS), sdhci_readw(host, SDHCI_SLOT_INT_STATUS)); SDHCI_DUMP("Caps: 0x%08x | Caps_1: 0x%08x\n", sdhci_readl(host, SDHCI_CAPABILITIES), @@ -349,6 +349,9 @@ static void __sdhci_led_activate(struct sdhci_host *host) { u8 ctrl; + if (host->quirks & SDHCI_QUIRK_NO_LED) + return; + ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL); ctrl |= SDHCI_CTRL_LED; sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL); @@ -358,6 +361,9 @@ static void __sdhci_led_deactivate(struct sdhci_host *host) { u8 ctrl; + if (host->quirks & SDHCI_QUIRK_NO_LED) + return; + ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL); ctrl &= ~SDHCI_CTRL_LED; sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL); @@ -387,6 +393,9 @@ static int sdhci_led_register(struct sdhci_host *host) { struct mmc_host *mmc = host->mmc; + if (host->quirks & SDHCI_QUIRK_NO_LED) + return 0; + snprintf(host->led_name, sizeof(host->led_name), "%s::", mmc_hostname(mmc)); @@ -400,6 +409,9 @@ static int sdhci_led_register(struct sdhci_host *host) static void sdhci_led_unregister(struct sdhci_host *host) { + if (host->quirks & SDHCI_QUIRK_NO_LED) + return; + led_classdev_unregister(&host->led); } @@ -933,6 +945,11 @@ static void sdhci_set_transfer_irqs(struct sdhci_host *host) else host->ier = (host->ier & ~dma_irqs) | pio_irqs; + if (host->flags & (SDHCI_AUTO_CMD23 | SDHCI_AUTO_CMD12)) + host->ier |= SDHCI_INT_AUTO_CMD_ERR; + else + host->ier &= ~SDHCI_INT_AUTO_CMD_ERR; + sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); } @@ -1191,8 +1208,7 @@ static bool sdhci_needs_reset(struct sdhci_host *host, struct mmc_request *mrq) return (!(host->flags & SDHCI_DEVICE_DEAD) && 
((mrq->cmd && mrq->cmd->error) || (mrq->sbc && mrq->sbc->error) || - (mrq->data && ((mrq->data->error && !mrq->data->stop) || - (mrq->data->stop && mrq->data->stop->error))) || + (mrq->data && mrq->data->stop && mrq->data->stop->error) || (host->quirks & SDHCI_QUIRK_RESET_AFTER_REQUEST))); } @@ -1244,6 +1260,16 @@ static void sdhci_finish_data(struct sdhci_host *host) host->data = NULL; host->data_cmd = NULL; + /* + * The controller needs a reset of internal state machines upon error + * conditions. + */ + if (data->error) { + if (!host->cmd || host->cmd == data_cmd) + sdhci_do_reset(host, SDHCI_RESET_CMD); + sdhci_do_reset(host, SDHCI_RESET_DATA); + } + if ((host->flags & (SDHCI_REQ_USE_DMA | SDHCI_USE_ADMA)) == (SDHCI_REQ_USE_DMA | SDHCI_USE_ADMA)) sdhci_adma_table_post(host, data); @@ -1268,17 +1294,6 @@ static void sdhci_finish_data(struct sdhci_host *host) if (data->stop && (data->error || !data->mrq->sbc)) { - - /* - * The controller needs a reset of internal state machines - * upon error conditions. - */ - if (data->error) { - if (!host->cmd || host->cmd == data_cmd) - sdhci_do_reset(host, SDHCI_RESET_CMD); - sdhci_do_reset(host, SDHCI_RESET_DATA); - } - /* * 'cap_cmd_during_tfr' request must not use the command line * after mmc_command_done() has been called. It is upper layer's @@ -2757,8 +2772,23 @@ static void sdhci_timeout_data_timer(struct timer_list *t) * * \*****************************************************************************/ -static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask) +static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask, u32 *intmask_p) { + /* Handle auto-CMD12 error */ + if (intmask & SDHCI_INT_AUTO_CMD_ERR && host->data_cmd) { + struct mmc_request *mrq = host->data_cmd->mrq; + u16 auto_cmd_status = sdhci_readw(host, SDHCI_AUTO_CMD_STATUS); + int data_err_bit = (auto_cmd_status & SDHCI_AUTO_CMD_TIMEOUT) ? + SDHCI_INT_DATA_TIMEOUT : + SDHCI_INT_DATA_CRC; + + /* Treat auto-CMD12 error the same as data error */ + if (!mrq->sbc && (host->flags & SDHCI_AUTO_CMD12)) { + *intmask_p |= data_err_bit; + return; + } + } + if (!host->cmd) { /* * SDHCI recovers from errors by resetting the cmd and data @@ -2780,20 +2810,12 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask) else host->cmd->error = -EILSEQ; - /* - * If this command initiates a data phase and a response - * CRC error is signalled, the card can start transferring - * data - the card may have received the command without - * error. We must not terminate the mmc_request early. - * - * If the card did not receive the command or returned an - * error which prevented it sending data, the data phase - * will time out. - */ + /* Treat data command CRC error the same as data CRC error */ if (host->cmd->data && (intmask & (SDHCI_INT_CRC | SDHCI_INT_TIMEOUT)) == SDHCI_INT_CRC) { host->cmd = NULL; + *intmask_p |= SDHCI_INT_DATA_CRC; return; } @@ -2801,6 +2823,21 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask) return; } + /* Handle auto-CMD23 error */ + if (intmask & SDHCI_INT_AUTO_CMD_ERR) { + struct mmc_request *mrq = host->cmd->mrq; + u16 auto_cmd_status = sdhci_readw(host, SDHCI_AUTO_CMD_STATUS); + int err = (auto_cmd_status & SDHCI_AUTO_CMD_TIMEOUT) ? 
+ -ETIMEDOUT : + -EILSEQ; + + if (mrq->sbc && (host->flags & SDHCI_AUTO_CMD23)) { + mrq->sbc->error = err; + sdhci_finish_mrq(host, mrq); + return; + } + } + if (intmask & SDHCI_INT_RESPONSE) sdhci_finish_command(host); } @@ -3021,7 +3058,7 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id) } if (intmask & SDHCI_INT_CMD_MASK) - sdhci_cmd_irq(host, intmask & SDHCI_INT_CMD_MASK); + sdhci_cmd_irq(host, intmask & SDHCI_INT_CMD_MASK, &intmask); if (intmask & SDHCI_INT_DATA_MASK) sdhci_data_irq(host, intmask & SDHCI_INT_DATA_MASK); @@ -3541,7 +3578,7 @@ void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps, u32 *caps1) } EXPORT_SYMBOL_GPL(__sdhci_read_caps); -static int sdhci_allocate_bounce_buffer(struct sdhci_host *host) +static void sdhci_allocate_bounce_buffer(struct sdhci_host *host) { struct mmc_host *mmc = host->mmc; unsigned int max_blocks; @@ -3579,7 +3616,7 @@ static int sdhci_allocate_bounce_buffer(struct sdhci_host *host) * Exiting with zero here makes sure we proceed with * mmc->max_segs == 1. */ - return 0; + return; } host->bounce_addr = dma_map_single(mmc->parent, @@ -3589,7 +3626,7 @@ static int sdhci_allocate_bounce_buffer(struct sdhci_host *host) ret = dma_mapping_error(mmc->parent, host->bounce_addr); if (ret) /* Again fall back to max_segs == 1 */ - return 0; + return; host->bounce_buffer_size = bounce_size; /* Lie about this since we're bouncing */ @@ -3599,8 +3636,6 @@ static int sdhci_allocate_bounce_buffer(struct sdhci_host *host) pr_info("%s bounce up to %u segments into one, max segment size %u bytes\n", mmc_hostname(mmc), max_blocks, bounce_size); - - return 0; } static inline bool sdhci_can_64bit_dma(struct sdhci_host *host) @@ -4134,12 +4169,9 @@ int sdhci_setup_host(struct sdhci_host *host) */ mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 
1 : 65535; - if (mmc->max_segs == 1) { + if (mmc->max_segs == 1) /* This may alter mmc->*_blk_* parameters */ - ret = sdhci_allocate_bounce_buffer(host); - if (ret) - return ret; - } + sdhci_allocate_bounce_buffer(host); return 0; diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h index b001cf4d3d7e..6cc9a3c2ac66 100644 --- a/drivers/mmc/host/sdhci.h +++ b/drivers/mmc/host/sdhci.h @@ -146,14 +146,15 @@ #define SDHCI_INT_DATA_CRC 0x00200000 #define SDHCI_INT_DATA_END_BIT 0x00400000 #define SDHCI_INT_BUS_POWER 0x00800000 -#define SDHCI_INT_ACMD12ERR 0x01000000 +#define SDHCI_INT_AUTO_CMD_ERR 0x01000000 #define SDHCI_INT_ADMA_ERROR 0x02000000 #define SDHCI_INT_NORMAL_MASK 0x00007FFF #define SDHCI_INT_ERROR_MASK 0xFFFF8000 #define SDHCI_INT_CMD_MASK (SDHCI_INT_RESPONSE | SDHCI_INT_TIMEOUT | \ - SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX) + SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX | \ + SDHCI_INT_AUTO_CMD_ERR) #define SDHCI_INT_DATA_MASK (SDHCI_INT_DATA_END | SDHCI_INT_DMA_END | \ SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL | \ SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_DATA_CRC | \ @@ -168,7 +169,11 @@ #define SDHCI_CQE_INT_MASK (SDHCI_CQE_INT_ERR_MASK | SDHCI_INT_CQE) -#define SDHCI_ACMD12_ERR 0x3C +#define SDHCI_AUTO_CMD_STATUS 0x3C +#define SDHCI_AUTO_CMD_TIMEOUT 0x00000002 +#define SDHCI_AUTO_CMD_CRC 0x00000004 +#define SDHCI_AUTO_CMD_END_BIT 0x00000008 +#define SDHCI_AUTO_CMD_INDEX 0x00000010 #define SDHCI_HOST_CONTROL2 0x3E #define SDHCI_CTRL_UHS_MASK 0x0007 @@ -403,6 +408,8 @@ struct sdhci_host { #define SDHCI_QUIRK_INVERTED_WRITE_PROTECT (1<<16) /* Controller does not like fast PIO transfers */ #define SDHCI_QUIRK_PIO_NEEDS_DELAY (1<<18) +/* Controller does not have a LED */ +#define SDHCI_QUIRK_NO_LED (1<<19) /* Controller has to be forced to use block size of 2048 bytes */ #define SDHCI_QUIRK_FORCE_BLK_SZ_2048 (1<<20) /* Controller cannot do multi-block transfers */ diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c new file mode 100644 index 000000000000..8c05879850a0 --- /dev/null +++ b/drivers/mmc/host/sdhci_am654.c @@ -0,0 +1,374 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * sdhci_am654.c - SDHCI driver for TI's AM654 SOCs + * + * Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com + * + */ +#include +#include +#include +#include +#include + +#include "sdhci-pltfm.h" + +/* CTL_CFG Registers */ +#define CTL_CFG_2 0x14 + +#define SLOTTYPE_MASK GENMASK(31, 30) +#define SLOTTYPE_EMBEDDED BIT(30) + +/* PHY Registers */ +#define PHY_CTRL1 0x100 +#define PHY_CTRL2 0x104 +#define PHY_CTRL3 0x108 +#define PHY_CTRL4 0x10C +#define PHY_CTRL5 0x110 +#define PHY_CTRL6 0x114 +#define PHY_STAT1 0x130 +#define PHY_STAT2 0x134 + +#define IOMUX_ENABLE_SHIFT 31 +#define IOMUX_ENABLE_MASK BIT(IOMUX_ENABLE_SHIFT) +#define OTAPDLYENA_SHIFT 20 +#define OTAPDLYENA_MASK BIT(OTAPDLYENA_SHIFT) +#define OTAPDLYSEL_SHIFT 12 +#define OTAPDLYSEL_MASK GENMASK(15, 12) +#define STRBSEL_SHIFT 24 +#define STRBSEL_MASK GENMASK(27, 24) +#define SEL50_SHIFT 8 +#define SEL50_MASK BIT(SEL50_SHIFT) +#define SEL100_SHIFT 9 +#define SEL100_MASK BIT(SEL100_SHIFT) +#define DLL_TRIM_ICP_SHIFT 4 +#define DLL_TRIM_ICP_MASK GENMASK(7, 4) +#define DR_TY_SHIFT 20 +#define DR_TY_MASK GENMASK(22, 20) +#define ENDLL_SHIFT 1 +#define ENDLL_MASK BIT(ENDLL_SHIFT) +#define DLLRDY_SHIFT 0 +#define DLLRDY_MASK BIT(DLLRDY_SHIFT) +#define PDB_SHIFT 0 +#define PDB_MASK BIT(PDB_SHIFT) +#define CALDONE_SHIFT 1 +#define CALDONE_MASK BIT(CALDONE_SHIFT) +#define RETRIM_SHIFT 
17 +#define RETRIM_MASK BIT(RETRIM_SHIFT) + +#define DRIVER_STRENGTH_50_OHM 0x0 +#define DRIVER_STRENGTH_33_OHM 0x1 +#define DRIVER_STRENGTH_66_OHM 0x2 +#define DRIVER_STRENGTH_100_OHM 0x3 +#define DRIVER_STRENGTH_40_OHM 0x4 + +#define CLOCK_TOO_SLOW_HZ 400000 + +static struct regmap_config sdhci_am654_regmap_config = { + .reg_bits = 32, + .val_bits = 32, + .reg_stride = 4, + .fast_io = true, +}; + +struct sdhci_am654_data { + struct regmap *base; + int otap_del_sel; + int trm_icp; + int drv_strength; + bool dll_on; +}; + +static void sdhci_am654_set_clock(struct sdhci_host *host, unsigned int clock) +{ + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host); + int sel50, sel100; + u32 mask, val; + int ret; + + if (sdhci_am654->dll_on) { + regmap_update_bits(sdhci_am654->base, PHY_CTRL1, + ENDLL_MASK, 0); + + sdhci_am654->dll_on = false; + } + + sdhci_set_clock(host, clock); + + if (clock > CLOCK_TOO_SLOW_HZ) { + /* Setup DLL Output TAP delay */ + mask = OTAPDLYENA_MASK | OTAPDLYSEL_MASK; + val = (1 << OTAPDLYENA_SHIFT) | + (sdhci_am654->otap_del_sel << OTAPDLYSEL_SHIFT); + regmap_update_bits(sdhci_am654->base, PHY_CTRL4, + mask, val); + switch (clock) { + case 200000000: + sel50 = 0; + sel100 = 0; + break; + case 100000000: + sel50 = 0; + sel100 = 1; + break; + default: + sel50 = 1; + sel100 = 0; + } + + /* Configure PHY DLL frequency */ + mask = SEL50_MASK | SEL100_MASK; + val = (sel50 << SEL50_SHIFT) | (sel100 << SEL100_SHIFT); + regmap_update_bits(sdhci_am654->base, PHY_CTRL5, + mask, val); + /* Configure DLL TRIM */ + mask = DLL_TRIM_ICP_MASK; + val = sdhci_am654->trm_icp << DLL_TRIM_ICP_SHIFT; + + /* Configure DLL driver strength */ + mask |= DR_TY_MASK; + val |= sdhci_am654->drv_strength << DR_TY_SHIFT; + regmap_update_bits(sdhci_am654->base, PHY_CTRL1, + mask, val); + /* Enable DLL */ + regmap_update_bits(sdhci_am654->base, PHY_CTRL1, + ENDLL_MASK, 0x1 << ENDLL_SHIFT); + /* + * Poll for DLL ready. Use a one second timeout. 
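+	 * (regmap_read_poll_timeout() re-reads PHY_STAT1 every 1000 us
+	 * and returns -ETIMEDOUT if DLLRDY is still clear after the
+	 * 1000000 us budget.)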
+ * Works in all experiments done so far + */ + ret = regmap_read_poll_timeout(sdhci_am654->base, + PHY_STAT1, val, + val & DLLRDY_MASK, + 1000, 1000000); + + sdhci_am654->dll_on = true; + } +} + +static void sdhci_am654_set_power(struct sdhci_host *host, unsigned char mode, + unsigned short vdd) +{ + if (!IS_ERR(host->mmc->supply.vmmc)) { + struct mmc_host *mmc = host->mmc; + + mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd); + } + sdhci_set_power_noreg(host, mode, vdd); +} + +struct sdhci_ops sdhci_am654_ops = { + .get_max_clock = sdhci_pltfm_clk_get_max_clock, + .get_timeout_clock = sdhci_pltfm_clk_get_max_clock, + .set_uhs_signaling = sdhci_set_uhs_signaling, + .set_bus_width = sdhci_set_bus_width, + .set_power = sdhci_am654_set_power, + .set_clock = sdhci_am654_set_clock, + .reset = sdhci_reset, +}; + +static const struct sdhci_pltfm_data sdhci_am654_pdata = { + .ops = &sdhci_am654_ops, + .quirks = SDHCI_QUIRK_INVERTED_WRITE_PROTECT | + SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12, + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, +}; + +static int sdhci_am654_init(struct sdhci_host *host) +{ + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host); + u32 ctl_cfg_2 = 0; + u32 mask; + u32 val; + int ret; + + /* Reset OTAP to default value */ + mask = OTAPDLYENA_MASK | OTAPDLYSEL_MASK; + regmap_update_bits(sdhci_am654->base, PHY_CTRL4, + mask, 0x0); + + regmap_read(sdhci_am654->base, PHY_STAT1, &val); + if (~val & CALDONE_MASK) { + /* Calibrate IO lines */ + regmap_update_bits(sdhci_am654->base, PHY_CTRL1, + PDB_MASK, PDB_MASK); + ret = regmap_read_poll_timeout(sdhci_am654->base, PHY_STAT1, + val, val & CALDONE_MASK, 1, 20); + if (ret) + return ret; + } + + /* Enable pins by setting IO mux to 0 */ + regmap_update_bits(sdhci_am654->base, PHY_CTRL1, + IOMUX_ENABLE_MASK, 0); + + /* Set slot type based on SD or eMMC */ + if (host->mmc->caps & MMC_CAP_NONREMOVABLE) + ctl_cfg_2 = SLOTTYPE_EMBEDDED; + + regmap_update_bits(sdhci_am654->base, CTL_CFG_2, + ctl_cfg_2, SLOTTYPE_MASK); + + return sdhci_add_host(host); +} + +static int sdhci_am654_get_of_property(struct platform_device *pdev, + struct sdhci_am654_data *sdhci_am654) +{ + struct device *dev = &pdev->dev; + int drv_strength; + int ret; + + ret = device_property_read_u32(dev, "ti,trm-icp", + &sdhci_am654->trm_icp); + if (ret) + return ret; + + ret = device_property_read_u32(dev, "ti,otap-del-sel", + &sdhci_am654->otap_del_sel); + if (ret) + return ret; + + ret = device_property_read_u32(dev, "ti,driver-strength-ohm", + &drv_strength); + if (ret) + return ret; + + switch (drv_strength) { + case 50: + sdhci_am654->drv_strength = DRIVER_STRENGTH_50_OHM; + break; + case 33: + sdhci_am654->drv_strength = DRIVER_STRENGTH_33_OHM; + break; + case 66: + sdhci_am654->drv_strength = DRIVER_STRENGTH_66_OHM; + break; + case 100: + sdhci_am654->drv_strength = DRIVER_STRENGTH_100_OHM; + break; + case 40: + sdhci_am654->drv_strength = DRIVER_STRENGTH_40_OHM; + break; + default: + dev_err(dev, "Invalid driver strength\n"); + return -EINVAL; + } + + sdhci_get_of_property(pdev); + + return 0; +} + +static int sdhci_am654_probe(struct platform_device *pdev) +{ + struct sdhci_pltfm_host *pltfm_host; + struct sdhci_am654_data *sdhci_am654; + struct sdhci_host *host; + struct resource *res; + struct clk *clk_xin; + struct device *dev = &pdev->dev; + void __iomem *base; + int ret; + + host = sdhci_pltfm_init(pdev, &sdhci_am654_pdata, sizeof(*sdhci_am654)); + if (IS_ERR(host)) + return PTR_ERR(host); + 
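+	/*
+	 * sdhci_pltfm_init() above allocated struct sdhci_am654_data
+	 * together with the host; sdhci_pltfm_priv() recovers it.
+	 */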
+ pltfm_host = sdhci_priv(host); + sdhci_am654 = sdhci_pltfm_priv(pltfm_host); + + clk_xin = devm_clk_get(dev, "clk_xin"); + if (IS_ERR(clk_xin)) { + dev_err(dev, "clk_xin clock not found.\n"); + ret = PTR_ERR(clk_xin); + goto err_pltfm_free; + } + + pltfm_host->clk = clk_xin; + + /* Clocks are enabled using pm_runtime */ + pm_runtime_enable(dev); + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + pm_runtime_put_noidle(dev); + goto pm_runtime_disable; + } + + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); + base = devm_ioremap_resource(dev, res); + if (IS_ERR(base)) { + ret = PTR_ERR(base); + goto pm_runtime_put; + } + + sdhci_am654->base = devm_regmap_init_mmio(dev, base, + &sdhci_am654_regmap_config); + if (IS_ERR(sdhci_am654->base)) { + dev_err(dev, "Failed to initialize regmap\n"); + ret = PTR_ERR(sdhci_am654->base); + goto pm_runtime_put; + } + + ret = sdhci_am654_get_of_property(pdev, sdhci_am654); + if (ret) + goto pm_runtime_put; + + ret = mmc_of_parse(host->mmc); + if (ret) { + dev_err(dev, "parsing dt failed (%d)\n", ret); + goto pm_runtime_put; + } + + ret = sdhci_am654_init(host); + if (ret) + goto pm_runtime_put; + + return 0; + +pm_runtime_put: + pm_runtime_put_sync(dev); +pm_runtime_disable: + pm_runtime_disable(dev); +err_pltfm_free: + sdhci_pltfm_free(pdev); + return ret; +} + +static int sdhci_am654_remove(struct platform_device *pdev) +{ + struct sdhci_host *host = platform_get_drvdata(pdev); + int ret; + + sdhci_remove_host(host, true); + ret = pm_runtime_put_sync(&pdev->dev); + if (ret < 0) + return ret; + + pm_runtime_disable(&pdev->dev); + sdhci_pltfm_free(pdev); + + return 0; +} + +static const struct of_device_id sdhci_am654_of_match[] = { + { .compatible = "ti,am654-sdhci-5.1" }, + { /* sentinel */ } +}; + +static struct platform_driver sdhci_am654_driver = { + .driver = { + .name = "sdhci-am654", + .of_match_table = sdhci_am654_of_match, + }, + .probe = sdhci_am654_probe, + .remove = sdhci_am654_remove, +}; + +module_platform_driver(sdhci_am654_driver); + +MODULE_DESCRIPTION("Driver for SDHCI Controller on TI's AM654 devices"); +MODULE_AUTHOR("Faiz Abbas "); +MODULE_LICENSE("GPL"); diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h index 1e317027bf53..c03529e3f01a 100644 --- a/drivers/mmc/host/tmio_mmc.h +++ b/drivers/mmc/host/tmio_mmc.h @@ -70,6 +70,7 @@ #define TMIO_STAT_DAT0 BIT(23) /* only known on R-Car so far */ #define TMIO_STAT_RXRDY BIT(24) #define TMIO_STAT_TXRQ BIT(25) +#define TMIO_STAT_ALWAYS_SET_27 BIT(27) /* only known on R-Car 2+ so far */ #define TMIO_STAT_ILL_FUNC BIT(29) /* only when !TMIO_MMC_HAS_IDLE_WAIT */ #define TMIO_STAT_SCLKDIVEN BIT(29) /* only when TMIO_MMC_HAS_IDLE_WAIT */ #define TMIO_STAT_CMD_BUSY BIT(30) @@ -96,6 +97,7 @@ /* Define some IRQ masks */ /* This is the mask used at reset by the chip */ +#define TMIO_MASK_INIT_RCAR2 0x8b7f031d /* Initial value for R-Car Gen2+ */ #define TMIO_MASK_ALL 0x837f031d #define TMIO_MASK_READOP (TMIO_STAT_RXRDY | TMIO_STAT_DATAEND) #define TMIO_MASK_WRITEOP (TMIO_STAT_TXRQ | TMIO_STAT_DATAEND) @@ -153,6 +155,7 @@ struct tmio_mmc_host { u32 sdcard_irq_mask; u32 sdio_irq_mask; unsigned int clk_cache; + u32 sdcard_irq_setbit_mask; spinlock_t lock; /* protect host private data */ unsigned long last_req_ts; @@ -267,6 +270,9 @@ static inline void sd_ctrl_write16_rep(struct tmio_mmc_host *host, int addr, static inline void sd_ctrl_write32_as_16_and_16(struct tmio_mmc_host *host, int addr, u32 val) { + if (addr == CTL_IRQ_MASK || addr == CTL_STATUS) + val |= 
host->sdcard_irq_setbit_mask; + iowrite16(val & 0xffff, host->ctl + (addr << host->bus_shift)); iowrite16(val >> 16, host->ctl + ((addr + 2) << host->bus_shift)); } diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c index 8d64f6196f33..085a0fab769c 100644 --- a/drivers/mmc/host/tmio_mmc_core.c +++ b/drivers/mmc/host/tmio_mmc_core.c @@ -171,6 +171,18 @@ static void tmio_mmc_reset(struct tmio_mmc_host *host) } } +static void tmio_mmc_hw_reset(struct mmc_host *mmc) +{ + struct tmio_mmc_host *host = mmc_priv(mmc); + + host->reset(host); + + tmio_mmc_abort_dma(host); + + if (host->hw_reset) + host->hw_reset(host); +} + static void tmio_mmc_reset_work(struct work_struct *work) { struct tmio_mmc_host *host = container_of(work, struct tmio_mmc_host, @@ -209,12 +221,11 @@ static void tmio_mmc_reset_work(struct work_struct *work) spin_unlock_irqrestore(&host->lock, flags); - host->reset(host); + tmio_mmc_hw_reset(host->mmc); /* Ready for new calls */ host->mrq = NULL; - tmio_mmc_abort_dma(host); mmc_request_done(host->mmc, mrq); } @@ -696,14 +707,6 @@ static int tmio_mmc_start_data(struct tmio_mmc_host *host, return 0; } -static void tmio_mmc_hw_reset(struct mmc_host *mmc) -{ - struct tmio_mmc_host *host = mmc_priv(mmc); - - if (host->hw_reset) - host->hw_reset(host); -} - static int tmio_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode) { struct tmio_mmc_host *host = mmc_priv(mmc); @@ -734,8 +737,6 @@ static int tmio_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode) ret = mmc_send_tuning(mmc, opcode, NULL); if (ret == 0) set_bit(i, host->taps); - - usleep_range(1000, 1200); } ret = host->select_tuning(host); @@ -1167,11 +1168,13 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host) if (ret < 0) return ret; - if (pdata->flags & TMIO_MMC_USE_GPIO_CD) { - ret = mmc_gpio_request_cd(mmc, pdata->cd_gpio, 0); - if (ret) - return ret; - } + /* + * Look for a card detect GPIO, if it fails with anything + * else than a probe deferral, just live without it. + */ + ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, NULL); + if (ret == -EPROBE_DEFER) + return ret; mmc->caps |= MMC_CAP_4_BIT_DATA | pdata->capabilities; mmc->caps2 |= pdata->capabilities2; @@ -1228,7 +1231,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host) _host->sdio_irq_mask = TMIO_SDIO_MASK_ALL; _host->set_clock(_host, 0); - _host->reset(_host); + tmio_mmc_hw_reset(mmc); _host->sdcard_irq_mask = sd_ctrl_read16_and_16_as_32(_host, CTL_IRQ_MASK); tmio_mmc_disable_mmc_irqs(_host, TMIO_MASK_ALL); @@ -1328,8 +1331,8 @@ int tmio_mmc_host_runtime_resume(struct device *dev) { struct tmio_mmc_host *host = dev_get_drvdata(dev); - host->reset(host); tmio_mmc_clk_enable(host); + tmio_mmc_hw_reset(host->mmc); if (host->clk_cache) host->set_clock(host, host->clk_cache); diff --git a/drivers/mtd/Kconfig b/drivers/mtd/Kconfig index 1e18c9639c3e..79a8ff542883 100644 --- a/drivers/mtd/Kconfig +++ b/drivers/mtd/Kconfig @@ -1,5 +1,6 @@ menuconfig MTD tristate "Memory Technology Device (MTD) support" + imply NVMEM help Memory Technology Devices are flash, RAM and similar chips, often used for solid state file systems on embedded devices. 
This option diff --git a/drivers/mtd/devices/powernv_flash.c b/drivers/mtd/devices/powernv_flash.c index 33593122e49b..22f753e555ac 100644 --- a/drivers/mtd/devices/powernv_flash.c +++ b/drivers/mtd/devices/powernv_flash.c @@ -212,7 +212,7 @@ static int powernv_flash_set_driver_info(struct device *dev, * Going to have to check what details I need to set and how to * get them */ - mtd->name = of_get_property(dev->of_node, "name", NULL); + mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node); mtd->type = MTD_NORFLASH; mtd->flags = MTD_WRITEABLE; mtd->size = size; diff --git a/drivers/mtd/maps/scx200_docflash.c b/drivers/mtd/maps/scx200_docflash.c index f1c1f737d0d7..7f1a0e690c4f 100644 --- a/drivers/mtd/maps/scx200_docflash.c +++ b/drivers/mtd/maps/scx200_docflash.c @@ -216,10 +216,3 @@ static void __exit cleanup_scx200_docflash(void) module_init(init_scx200_docflash); module_exit(cleanup_scx200_docflash); - -/* - Local variables: - compile-command: "make -k -C ../../.. SUBDIRS=drivers/mtd/maps modules" - c-basic-offset: 8 - End: -*/ diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c index b6b93291aba9..21e3cdc04036 100644 --- a/drivers/mtd/mtdcore.c +++ b/drivers/mtd/mtdcore.c @@ -41,6 +41,7 @@ #include #include #include +#include #include #include @@ -488,6 +489,50 @@ int mtd_pairing_groups(struct mtd_info *mtd) } EXPORT_SYMBOL_GPL(mtd_pairing_groups); +static int mtd_nvmem_reg_read(void *priv, unsigned int offset, + void *val, size_t bytes) +{ + struct mtd_info *mtd = priv; + size_t retlen; + int err; + + err = mtd_read(mtd, offset, bytes, &retlen, val); + if (err && err != -EUCLEAN) + return err; + + return retlen == bytes ? 0 : -EIO; +} + +static int mtd_nvmem_add(struct mtd_info *mtd) +{ + struct nvmem_config config = {}; + + config.dev = &mtd->dev; + config.name = mtd->name; + config.owner = THIS_MODULE; + config.reg_read = mtd_nvmem_reg_read; + config.size = mtd->size; + config.word_size = 1; + config.stride = 1; + config.read_only = true; + config.root_only = true; + config.no_of_node = true; + config.priv = mtd; + + mtd->nvmem = nvmem_register(&config); + if (IS_ERR(mtd->nvmem)) { + /* Just ignore if there is no NVMEM support in the kernel */ + if (PTR_ERR(mtd->nvmem) == -ENOSYS) { + mtd->nvmem = NULL; + } else { + dev_err(&mtd->dev, "Failed to register NVMEM device\n"); + return PTR_ERR(mtd->nvmem); + } + } + + return 0; +} + static struct dentry *dfs_dir_mtd; /** @@ -570,6 +615,11 @@ int add_mtd_device(struct mtd_info *mtd) if (error) goto fail_added; + /* Add the nvmem provider */ + error = mtd_nvmem_add(mtd); + if (error) + goto fail_nvmem_add; + if (!IS_ERR_OR_NULL(dfs_dir_mtd)) { mtd->dbg.dfs_dir = debugfs_create_dir(dev_name(&mtd->dev), dfs_dir_mtd); if (IS_ERR_OR_NULL(mtd->dbg.dfs_dir)) { @@ -595,6 +645,8 @@ int add_mtd_device(struct mtd_info *mtd) __module_get(THIS_MODULE); return 0; +fail_nvmem_add: + device_unregister(&mtd->dev); fail_added: of_node_put(mtd_get_of_node(mtd)); idr_remove(&mtd_idr, i); @@ -637,6 +689,10 @@ int del_mtd_device(struct mtd_info *mtd) mtd->index, mtd->name, mtd->usecount); ret = -EBUSY; } else { + /* Try to remove the NVMEM provider */ + if (mtd->nvmem) + nvmem_unregister(mtd->nvmem); + device_unregister(&mtd->dev); idr_remove(&mtd_idr, mtd->index); diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index d03775100f7d..6371958dd170 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -397,10 +397,10 @@ config NET_SB1000 At present this driver only compiles as a module, so say M here if you have this card. 
The module will be called sb1000. Then read - for information on how - to use this module, as it needs special ppp scripts for establishing - a connection. Further documentation and the necessary scripts can be - found at: + for + information on how to use this module, as it needs special ppp + scripts for establishing a connection. Further documentation + and the necessary scripts can be found at: diff --git a/drivers/net/appletalk/cops.c b/drivers/net/appletalk/cops.c index bb49f6e40a19..f90bb723985f 100644 --- a/drivers/net/appletalk/cops.c +++ b/drivers/net/appletalk/cops.c @@ -777,10 +777,7 @@ static void cops_rx(struct net_device *dev) } /* Get response length. */ - if(lp->board==DAYNA) - pkt_len = inb(ioaddr) & 0xFF; - else - pkt_len = inb(ioaddr) & 0x00FF; + pkt_len = inb(ioaddr); pkt_len |= (inb(ioaddr) << 8); /* Input IO code. */ rsp_type=inb(ioaddr); @@ -892,10 +889,7 @@ static netdev_tx_t cops_send_packet(struct sk_buff *skb, /* Output IO length. */ outb(skb->len, ioaddr); - if(lp->board == DAYNA) - outb(skb->len >> 8, ioaddr); - else - outb((skb->len >> 8)&0x0FF, ioaddr); + outb(skb->len >> 8, ioaddr); /* Output IO code. */ outb(LAP_WRITE, ioaddr); diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c index 93dfcef8afc4..7c46d9f4fefd 100644 --- a/drivers/net/bonding/bond_3ad.c +++ b/drivers/net/bonding/bond_3ad.c @@ -1220,7 +1220,7 @@ static void ad_churn_machine(struct port *port) port->sm_churn_partner_state = AD_CHURN_MONITOR; port->sm_churn_actor_timer_counter = __ad_timer_to_ticks(AD_ACTOR_CHURN_TIMER, 0); - port->sm_churn_partner_timer_counter = + port->sm_churn_partner_timer_counter = __ad_timer_to_ticks(AD_PARTNER_CHURN_TIMER, 0); return; } @@ -2128,7 +2128,7 @@ void bond_3ad_unbind_slave(struct slave *slave) if ((new_aggregator->lag_ports == port) && new_aggregator->is_active) { netdev_info(bond->dev, "Removing an active aggregator\n"); - select_new_active_agg = 1; + select_new_active_agg = 1; } new_aggregator->is_individual = aggregator->is_individual; diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c index e82108c917a6..9431127bbc60 100644 --- a/drivers/net/bonding/bond_alb.c +++ b/drivers/net/bonding/bond_alb.c @@ -1031,7 +1031,7 @@ static int alb_set_slave_mac_addr(struct slave *slave, u8 addr[], */ memcpy(ss.__data, addr, len); ss.ss_family = dev->type; - if (dev_set_mac_address(dev, (struct sockaddr *)&ss)) { + if (dev_set_mac_address(dev, (struct sockaddr *)&ss, NULL)) { netdev_err(slave->bond->dev, "dev_set_mac_address of dev %s failed! 
ALB mode requires that the base driver support setting the hw address also when the network device's interface is open\n", dev->name); return -EOPNOTSUPP; @@ -1250,7 +1250,7 @@ static int alb_set_mac_address(struct bonding *bond, void *addr) bond_hw_addr_copy(tmp_addr, slave->dev->dev_addr, slave->dev->addr_len); - res = dev_set_mac_address(slave->dev, addr); + res = dev_set_mac_address(slave->dev, addr, NULL); /* restore net_device's hw address */ bond_hw_addr_copy(slave->dev->dev_addr, tmp_addr, @@ -1273,7 +1273,7 @@ unwind: bond_hw_addr_copy(tmp_addr, rollback_slave->dev->dev_addr, rollback_slave->dev->addr_len); dev_set_mac_address(rollback_slave->dev, - (struct sockaddr *)&ss); + (struct sockaddr *)&ss, NULL); bond_hw_addr_copy(rollback_slave->dev->dev_addr, tmp_addr, rollback_slave->dev->addr_len); } @@ -1732,7 +1732,8 @@ void bond_alb_handle_active_change(struct bonding *bond, struct slave *new_slave bond->dev->addr_len); ss.ss_family = bond->dev->type; /* we don't care if it can't change its mac, best effort */ - dev_set_mac_address(new_slave->dev, (struct sockaddr *)&ss); + dev_set_mac_address(new_slave->dev, (struct sockaddr *)&ss, + NULL); bond_hw_addr_copy(new_slave->dev->dev_addr, tmp_addr, new_slave->dev->addr_len); diff --git a/drivers/net/bonding/bond_debugfs.c b/drivers/net/bonding/bond_debugfs.c index 3868e1a5126d..1360f1ffe070 100644 --- a/drivers/net/bonding/bond_debugfs.c +++ b/drivers/net/bonding/bond_debugfs.c @@ -45,19 +45,7 @@ static int bond_debug_rlb_hash_show(struct seq_file *m, void *v) return 0; } - -static int bond_debug_rlb_hash_open(struct inode *inode, struct file *file) -{ - return single_open(file, bond_debug_rlb_hash_show, inode->i_private); -} - -static const struct file_operations bond_debug_rlb_hash_fops = { - .owner = THIS_MODULE, - .open = bond_debug_rlb_hash_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(bond_debug_rlb_hash); void bond_debug_register(struct bonding *bond) { diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 333387f1f1fe..a9d597f28023 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -609,14 +609,21 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active, * * Should be called with RTNL held. 
*/ -static void bond_set_dev_addr(struct net_device *bond_dev, - struct net_device *slave_dev) +static int bond_set_dev_addr(struct net_device *bond_dev, + struct net_device *slave_dev) { + int err; + netdev_dbg(bond_dev, "bond_dev=%p slave_dev=%p slave_dev->name=%s slave_dev->addr_len=%d\n", bond_dev, slave_dev, slave_dev->name, slave_dev->addr_len); + err = dev_pre_changeaddr_notify(bond_dev, slave_dev->dev_addr, NULL); + if (err) + return err; + memcpy(bond_dev->dev_addr, slave_dev->dev_addr, slave_dev->addr_len); bond_dev->addr_assign_type = NET_ADDR_STOLEN; call_netdevice_notifiers(NETDEV_CHANGEADDR, bond_dev); + return 0; } static struct slave *bond_get_old_active(struct bonding *bond, @@ -652,8 +659,12 @@ static void bond_do_fail_over_mac(struct bonding *bond, switch (bond->params.fail_over_mac) { case BOND_FOM_ACTIVE: - if (new_active) - bond_set_dev_addr(bond->dev, new_active->dev); + if (new_active) { + rv = bond_set_dev_addr(bond->dev, new_active->dev); + if (rv) + netdev_err(bond->dev, "Error %d setting MAC of slave %s\n", + -rv, bond->dev->name); + } break; case BOND_FOM_FOLLOW: /* if new_active && old_active, swap them @@ -680,7 +691,7 @@ static void bond_do_fail_over_mac(struct bonding *bond, } rv = dev_set_mac_address(new_active->dev, - (struct sockaddr *)&ss); + (struct sockaddr *)&ss, NULL); if (rv) { netdev_err(bond->dev, "Error %d setting MAC of slave %s\n", -rv, new_active->dev->name); @@ -695,7 +706,7 @@ static void bond_do_fail_over_mac(struct bonding *bond, ss.ss_family = old_active->dev->type; rv = dev_set_mac_address(old_active->dev, - (struct sockaddr *)&ss); + (struct sockaddr *)&ss, NULL); if (rv) netdev_err(bond->dev, "Error %d setting MAC of slave %s\n", -rv, new_active->dev->name); @@ -1489,8 +1500,11 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev, * address to be the same as the slave's. 
*/ if (!bond_has_slaves(bond) && - bond->dev->addr_assign_type == NET_ADDR_RANDOM) - bond_set_dev_addr(bond->dev, slave_dev); + bond->dev->addr_assign_type == NET_ADDR_RANDOM) { + res = bond_set_dev_addr(bond->dev, slave_dev); + if (res) + goto err_undo_flags; + } new_slave = bond_alloc_slave(bond); if (!new_slave) { @@ -1527,7 +1541,8 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev, */ memcpy(ss.__data, bond_dev->dev_addr, bond_dev->addr_len); ss.ss_family = slave_dev->type; - res = dev_set_mac_address(slave_dev, (struct sockaddr *)&ss); + res = dev_set_mac_address(slave_dev, (struct sockaddr *)&ss, + extack); if (res) { netdev_dbg(bond_dev, "Error %d calling set_mac_address\n", res); goto err_restore_mtu; @@ -1538,7 +1553,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev, slave_dev->flags |= IFF_SLAVE; /* open the slave since the application closed it */ - res = dev_open(slave_dev); + res = dev_open(slave_dev, extack); if (res) { netdev_dbg(bond_dev, "Opening slave %s failed\n", slave_dev->name); goto err_restore_mac; @@ -1818,7 +1833,7 @@ err_restore_mac: bond_hw_addr_copy(ss.__data, new_slave->perm_hwaddr, new_slave->dev->addr_len); ss.ss_family = slave_dev->type; - dev_set_mac_address(slave_dev, (struct sockaddr *)&ss); + dev_set_mac_address(slave_dev, (struct sockaddr *)&ss, NULL); } err_restore_mtu: @@ -1999,7 +2014,7 @@ static int __bond_release_one(struct net_device *bond_dev, bond_hw_addr_copy(ss.__data, slave->perm_hwaddr, slave->dev->addr_len); ss.ss_family = slave_dev->type; - dev_set_mac_address(slave_dev, (struct sockaddr *)&ss); + dev_set_mac_address(slave_dev, (struct sockaddr *)&ss, NULL); } if (unregister) @@ -3544,8 +3559,7 @@ static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd break; case BOND_SETHWADDR_OLD: case SIOCBONDSETHWADDR: - bond_set_dev_addr(bond_dev, slave_dev); - res = 0; + res = bond_set_dev_addr(bond_dev, slave_dev); break; case BOND_CHANGE_ACTIVE_OLD: case SIOCBONDCHANGEACTIVE: @@ -3732,7 +3746,7 @@ static int bond_set_mac_address(struct net_device *bond_dev, void *addr) bond_for_each_slave(bond, slave, iter) { netdev_dbg(bond_dev, "slave %p %s\n", slave, slave->dev->name); - res = dev_set_mac_address(slave->dev, addr); + res = dev_set_mac_address(slave->dev, addr, NULL); if (res) { /* TODO: consider downing the slave * and retry ? @@ -3761,7 +3775,7 @@ unwind: break; tmp_res = dev_set_mac_address(rollback_slave->dev, - (struct sockaddr *)&tmp_ss); + (struct sockaddr *)&tmp_ss, NULL); if (tmp_res) { netdev_dbg(bond_dev, "unwind err %d dev %s\n", tmp_res, rollback_slave->dev->name); diff --git a/drivers/net/can/Kconfig b/drivers/net/can/Kconfig index 7cdd0cead693..e0f0ad7a550a 100644 --- a/drivers/net/can/Kconfig +++ b/drivers/net/can/Kconfig @@ -96,7 +96,7 @@ config CAN_AT91 config CAN_FLEXCAN tristate "Support for Freescale FLEXCAN based chips" - depends on ARM || PPC + depends on OF && HAS_IOMEM ---help--- Say Y here if you want to support for Freescale FlexCAN. 
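The flexcan.c rework below replaces the fixed 64-entry struct flexcan_mb mb[64] array with two 512-byte message RAM banks (u8 mb[2][512]) that are addressed by mailbox index at runtime. A minimal stand-alone sketch of that bank/offset arithmetic, with the sizes taken from the patch (an 8-byte can_ctrl/can_id header plus CAN_MAX_DLEN bytes of payload per mailbox); the program itself is illustrative, not driver code:

        #include <stdio.h>
        #include <stdint.h>

        #define MB_BANK_SIZE    512     /* sizeof(regs->mb[0]) in the patch */
        #define CAN_MAX_DLEN    8
        #define MB_SIZE         (2 * sizeof(uint32_t) + CAN_MAX_DLEN)

        int main(void)
        {
                unsigned int per_bank = MB_BANK_SIZE / MB_SIZE; /* 32 mailboxes */
                unsigned int mb_count = 2 * per_bank;           /* 64 in total */
                unsigned int i;

                for (i = 0; i < mb_count; i++) {
                        /* same split as flexcan_get_mb(): high indexes fall
                         * into the second bank and the offset restarts at 0
                         */
                        unsigned int bank = i >= per_bank;
                        unsigned int idx = i - bank * per_bank;

                        printf("mb %2u -> bank %u, offset 0x%03x\n",
                               i, bank, (unsigned int)(idx * MB_SIZE));
                }
                return 0;
        }

With these numbers each bank holds 32 mailboxes, so flexcan_open() derives mb_count = 64 at open time and picks the last index (mb_count - 1) as the TX mailbox, instead of relying on the old hard-coded FLEXCAN_TX_MB of 63.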
diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c index 75ce11395ee8..0f36eafe3ac1 100644 --- a/drivers/net/can/flexcan.c +++ b/drivers/net/can/flexcan.c @@ -19,11 +19,13 @@ #include #include #include +#include #include #include #include #include #include +#include #define DRV_NAME "flexcan" @@ -131,16 +133,15 @@ (FLEXCAN_ESR_ERR_BUS | FLEXCAN_ESR_ERR_STATE) #define FLEXCAN_ESR_ALL_INT \ (FLEXCAN_ESR_TWRN_INT | FLEXCAN_ESR_RWRN_INT | \ - FLEXCAN_ESR_BOFF_INT | FLEXCAN_ESR_ERR_INT) + FLEXCAN_ESR_BOFF_INT | FLEXCAN_ESR_ERR_INT | \ + FLEXCAN_ESR_WAK_INT) /* FLEXCAN interrupt flag register (IFLAG) bits */ /* Errata ERR005829 step7: Reserve first valid MB */ #define FLEXCAN_TX_MB_RESERVED_OFF_FIFO 8 #define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP 0 -#define FLEXCAN_TX_MB 63 #define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST (FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1) -#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST (FLEXCAN_TX_MB - 1) -#define FLEXCAN_IFLAG_MB(x) BIT(x & 0x1f) +#define FLEXCAN_IFLAG_MB(x) BIT((x) & 0x1f) #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7) #define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6) #define FLEXCAN_IFLAG_RX_FIFO_AVAILABLE BIT(5) @@ -189,12 +190,13 @@ #define FLEXCAN_QUIRK_USE_OFF_TIMESTAMP BIT(5) /* Use timestamp based offloading */ #define FLEXCAN_QUIRK_BROKEN_PERR_STATE BIT(6) /* No interrupt for error passive */ #define FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN BIT(7) /* default to BE register access */ +#define FLEXCAN_QUIRK_SETUP_STOP_MODE BIT(8) /* Setup stop mode to support wakeup */ /* Structure of the message buffer */ struct flexcan_mb { u32 can_ctrl; u32 can_id; - u32 data[2]; + u32 data[]; }; /* Structure of the hardware registers */ @@ -223,7 +225,7 @@ struct flexcan_regs { u32 rxfgmask; /* 0x48 */ u32 rxfir; /* 0x4c */ u32 _reserved3[12]; /* 0x50 */ - struct flexcan_mb mb[64]; /* 0x80 */ + u8 mb[2][512]; /* 0x80 */ /* FIFO-mode: * MB * 0x080...0x08f 0 RX message buffer @@ -253,12 +255,24 @@ struct flexcan_devtype_data { u32 quirks; /* quirks needed for different IP cores */ }; +struct flexcan_stop_mode { + struct regmap *gpr; + u8 req_gpr; + u8 req_bit; + u8 ack_gpr; + u8 ack_bit; +}; + struct flexcan_priv { struct can_priv can; struct can_rx_offload offload; struct flexcan_regs __iomem *regs; + struct flexcan_mb __iomem *tx_mb; struct flexcan_mb __iomem *tx_mb_reserved; + u8 tx_mb_idx; + u8 mb_count; + u8 mb_size; u32 reg_ctrl_default; u32 reg_imask1_default; u32 reg_imask2_default; @@ -267,6 +281,7 @@ struct flexcan_priv { struct clk *clk_per; const struct flexcan_devtype_data *devtype_data; struct regulator *reg_xceiver; + struct flexcan_stop_mode stm; /* Read and Write APIs */ u32 (*read)(void __iomem *addr); @@ -290,7 +305,8 @@ static const struct flexcan_devtype_data fsl_imx28_devtype_data = { static const struct flexcan_devtype_data fsl_imx6q_devtype_data = { .quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS | - FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_BROKEN_PERR_STATE, + FLEXCAN_QUIRK_USE_OFF_TIMESTAMP | FLEXCAN_QUIRK_BROKEN_PERR_STATE | + FLEXCAN_QUIRK_SETUP_STOP_MODE, }; static const struct flexcan_devtype_data fsl_vf610_devtype_data = { @@ -350,6 +366,68 @@ static inline void flexcan_write_le(u32 val, void __iomem *addr) iowrite32(val, addr); } +static struct flexcan_mb __iomem *flexcan_get_mb(const struct flexcan_priv *priv, + u8 mb_index) +{ + u8 bank_size; + bool bank; + + if (WARN_ON(mb_index >= priv->mb_count)) + return NULL; + + bank_size = sizeof(priv->regs->mb[0]) / priv->mb_size; + + bank = mb_index >= bank_size; + if 
(bank) + mb_index -= bank_size; + + return (struct flexcan_mb __iomem *) + (&priv->regs->mb[bank][priv->mb_size * mb_index]); +} + +static void flexcan_enable_wakeup_irq(struct flexcan_priv *priv, bool enable) +{ + struct flexcan_regs __iomem *regs = priv->regs; + u32 reg_mcr; + + reg_mcr = priv->read(®s->mcr); + + if (enable) + reg_mcr |= FLEXCAN_MCR_WAK_MSK; + else + reg_mcr &= ~FLEXCAN_MCR_WAK_MSK; + + priv->write(reg_mcr, ®s->mcr); +} + +static inline void flexcan_enter_stop_mode(struct flexcan_priv *priv) +{ + struct flexcan_regs __iomem *regs = priv->regs; + u32 reg_mcr; + + reg_mcr = priv->read(®s->mcr); + reg_mcr |= FLEXCAN_MCR_SLF_WAK; + priv->write(reg_mcr, ®s->mcr); + + /* enable stop request */ + regmap_update_bits(priv->stm.gpr, priv->stm.req_gpr, + 1 << priv->stm.req_bit, 1 << priv->stm.req_bit); +} + +static inline void flexcan_exit_stop_mode(struct flexcan_priv *priv) +{ + struct flexcan_regs __iomem *regs = priv->regs; + u32 reg_mcr; + + /* remove stop request */ + regmap_update_bits(priv->stm.gpr, priv->stm.req_gpr, + 1 << priv->stm.req_bit, 0); + + reg_mcr = priv->read(®s->mcr); + reg_mcr &= ~FLEXCAN_MCR_SLF_WAK; + priv->write(reg_mcr, ®s->mcr); +} + static inline void flexcan_error_irq_enable(const struct flexcan_priv *priv) { struct flexcan_regs __iomem *regs = priv->regs; @@ -512,11 +590,11 @@ static int flexcan_get_berr_counter(const struct net_device *dev, static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev) { const struct flexcan_priv *priv = netdev_priv(dev); - struct flexcan_regs __iomem *regs = priv->regs; struct can_frame *cf = (struct can_frame *)skb->data; u32 can_id; u32 data; u32 ctrl = FLEXCAN_MB_CODE_TX_DATA | (cf->can_dlc << 16); + int i; if (can_dropped_invalid_skb(dev, skb)) return NETDEV_TX_OK; @@ -533,27 +611,23 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de if (cf->can_id & CAN_RTR_FLAG) ctrl |= FLEXCAN_MB_CNT_RTR; - if (cf->can_dlc > 0) { - data = be32_to_cpup((__be32 *)&cf->data[0]); - priv->write(data, ®s->mb[FLEXCAN_TX_MB].data[0]); - } - if (cf->can_dlc > 4) { - data = be32_to_cpup((__be32 *)&cf->data[4]); - priv->write(data, ®s->mb[FLEXCAN_TX_MB].data[1]); + for (i = 0; i < cf->can_dlc; i += sizeof(u32)) { + data = be32_to_cpup((__be32 *)&cf->data[i]); + priv->write(data, &priv->tx_mb->data[i / sizeof(u32)]); } can_put_echo_skb(skb, dev, 0); - priv->write(can_id, ®s->mb[FLEXCAN_TX_MB].can_id); - priv->write(ctrl, ®s->mb[FLEXCAN_TX_MB].can_ctrl); + priv->write(can_id, &priv->tx_mb->can_id); + priv->write(ctrl, &priv->tx_mb->can_ctrl); /* Errata ERR005829 step8: * Write twice INACTIVE(0x8) code to first MB. 
*/ priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, - &priv->tx_mb_reserved->can_ctrl); + &priv->tx_mb_reserved->can_ctrl); priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, - &priv->tx_mb_reserved->can_ctrl); + &priv->tx_mb_reserved->can_ctrl); return NETDEV_TX_OK; } @@ -672,8 +746,11 @@ static unsigned int flexcan_mailbox_read(struct can_rx_offload *offload, { struct flexcan_priv *priv = rx_offload_to_priv(offload); struct flexcan_regs __iomem *regs = priv->regs; - struct flexcan_mb __iomem *mb = ®s->mb[n]; + struct flexcan_mb __iomem *mb; u32 reg_ctrl, reg_id, reg_iflag1; + int i; + + mb = flexcan_get_mb(priv, n); if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { u32 code; @@ -714,8 +791,10 @@ static unsigned int flexcan_mailbox_read(struct can_rx_offload *offload, cf->can_id |= CAN_RTR_FLAG; cf->can_dlc = get_can_dlc((reg_ctrl >> 16) & 0xf); - *(__be32 *)(cf->data + 0) = cpu_to_be32(priv->read(&mb->data[0])); - *(__be32 *)(cf->data + 4) = cpu_to_be32(priv->read(&mb->data[1])); + for (i = 0; i < cf->can_dlc; i += sizeof(u32)) { + __be32 data = cpu_to_be32(priv->read(&mb->data[i / sizeof(u32)])); + *(__be32 *)(cf->data + i) = data; + } /* mark as read */ if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { @@ -744,7 +823,7 @@ static inline u64 flexcan_read_reg_iflag_rx(struct flexcan_priv *priv) u32 iflag1, iflag2; iflag2 = priv->read(®s->iflag2) & priv->reg_imask2_default & - ~FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB); + ~FLEXCAN_IFLAG_MB(priv->tx_mb_idx); iflag1 = priv->read(®s->iflag1) & priv->reg_imask1_default; return (u64)iflag2 << 32 | iflag1; @@ -794,8 +873,8 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id) reg_iflag2 = priv->read(®s->iflag2); /* transmission complete interrupt */ - if (reg_iflag2 & FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB)) { - u32 reg_ctrl = priv->read(®s->mb[FLEXCAN_TX_MB].can_ctrl); + if (reg_iflag2 & FLEXCAN_IFLAG_MB(priv->tx_mb_idx)) { + u32 reg_ctrl = priv->read(&priv->tx_mb->can_ctrl); handled = IRQ_HANDLED; stats->tx_bytes += can_rx_offload_get_echo_skb(&priv->offload, @@ -805,8 +884,8 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id) /* after sending a RTR frame MB is in RX mode */ priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, - ®s->mb[FLEXCAN_TX_MB].can_ctrl); - priv->write(FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB), ®s->iflag2); + &priv->tx_mb->can_ctrl); + priv->write(FLEXCAN_IFLAG_MB(priv->tx_mb_idx), ®s->iflag2); netif_wake_queue(dev); } @@ -821,7 +900,7 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id) /* state change interrupt or broken error state quirk fix is enabled */ if ((reg_esr & FLEXCAN_ESR_ERR_STATE) || (priv->devtype_data->quirks & (FLEXCAN_QUIRK_BROKEN_WERR_STATE | - FLEXCAN_QUIRK_BROKEN_PERR_STATE))) + FLEXCAN_QUIRK_BROKEN_PERR_STATE))) flexcan_irq_state(dev, reg_esr); /* bus error IRQ - handle if bus error reporting is activated */ @@ -919,6 +998,7 @@ static int flexcan_chip_start(struct net_device *dev) struct flexcan_regs __iomem *regs = priv->regs; u32 reg_mcr, reg_ctrl, reg_ctrl2, reg_mecr; int err, i; + struct flexcan_mb __iomem *mb; /* enable module */ err = flexcan_chip_enable(priv); @@ -935,11 +1015,9 @@ static int flexcan_chip_start(struct net_device *dev) /* MCR * * enable freeze - * enable fifo * halt now * only supervisor access * enable warning int - * disable local echo * enable individual RX masking * choose format C * set max mailbox number @@ -947,14 +1025,37 @@ static int flexcan_chip_start(struct net_device *dev) reg_mcr = priv->read(®s->mcr); reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff); reg_mcr |= FLEXCAN_MCR_FRZ | 
FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV | - FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_SRX_DIS | FLEXCAN_MCR_IRMQ | - FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(FLEXCAN_TX_MB); + FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ | FLEXCAN_MCR_IDAM_C | + FLEXCAN_MCR_MAXMB(priv->tx_mb_idx); + /* MCR + * + * FIFO: + * - disable for timestamp mode + * - enable for FIFO mode + */ if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) reg_mcr &= ~FLEXCAN_MCR_FEN; else reg_mcr |= FLEXCAN_MCR_FEN; + /* MCR + * + * NOTE: In loopback mode, the CAN_MCR[SRXDIS] cannot be + * asserted because this will impede the self reception + * of a transmitted message. This is not documented in + * earlier versions of flexcan block guide. + * + * Self Reception: + * - enable Self Reception for loopback mode + * (by clearing "Self Reception Disable" bit) + * - disable for normal operation + */ + if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) + reg_mcr &= ~FLEXCAN_MCR_SRX_DIS; + else + reg_mcr |= FLEXCAN_MCR_SRX_DIS; + netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr); priv->write(reg_mcr, ®s->mcr); @@ -999,14 +1100,16 @@ static int flexcan_chip_start(struct net_device *dev) if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) { + mb = flexcan_get_mb(priv, i); priv->write(FLEXCAN_MB_CODE_RX_EMPTY, - ®s->mb[i].can_ctrl); + &mb->can_ctrl); } } else { /* clear and invalidate unused mailboxes first */ - for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i <= ARRAY_SIZE(regs->mb); i++) { + for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i <= priv->mb_count; i++) { + mb = flexcan_get_mb(priv, i); priv->write(FLEXCAN_MB_CODE_RX_INACTIVE, - ®s->mb[i].can_ctrl); + &mb->can_ctrl); } } @@ -1016,7 +1119,7 @@ static int flexcan_chip_start(struct net_device *dev) /* mark TX mailbox as INACTIVE */ priv->write(FLEXCAN_MB_CODE_TX_INACTIVE, - ®s->mb[FLEXCAN_TX_MB].can_ctrl); + &priv->tx_mb->can_ctrl); /* acceptance mask/acceptance code (accept everything) */ priv->write(0x0, ®s->rxgmask); @@ -1027,7 +1130,7 @@ static int flexcan_chip_start(struct net_device *dev) priv->write(0x0, ®s->rxfgmask); /* clear acceptance filters */ - for (i = 0; i < ARRAY_SIZE(regs->mb); i++) + for (i = 0; i < priv->mb_count; i++) priv->write(0, ®s->rximr[i]); /* On Vybrid, disable memory error detection interrupts @@ -1128,10 +1231,49 @@ static int flexcan_open(struct net_device *dev) if (err) goto out_close; + priv->mb_size = sizeof(struct flexcan_mb) + CAN_MAX_DLEN; + priv->mb_count = (sizeof(priv->regs->mb[0]) / priv->mb_size) + + (sizeof(priv->regs->mb[1]) / priv->mb_size); + + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) + priv->tx_mb_reserved = + flexcan_get_mb(priv, FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP); + else + priv->tx_mb_reserved = + flexcan_get_mb(priv, FLEXCAN_TX_MB_RESERVED_OFF_FIFO); + priv->tx_mb_idx = priv->mb_count - 1; + priv->tx_mb = flexcan_get_mb(priv, priv->tx_mb_idx); + + priv->reg_imask1_default = 0; + priv->reg_imask2_default = FLEXCAN_IFLAG_MB(priv->tx_mb_idx); + + priv->offload.mailbox_read = flexcan_mailbox_read; + + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { + u64 imask; + + priv->offload.mb_first = FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST; + priv->offload.mb_last = priv->mb_count - 2; + + imask = GENMASK_ULL(priv->offload.mb_last, + priv->offload.mb_first); + priv->reg_imask1_default |= imask; + priv->reg_imask2_default |= imask >> 32; + + err = can_rx_offload_add_timestamp(dev, &priv->offload); + } else { + 
priv->reg_imask1_default |= FLEXCAN_IFLAG_RX_FIFO_OVERFLOW | + FLEXCAN_IFLAG_RX_FIFO_AVAILABLE; + err = can_rx_offload_add_fifo(dev, &priv->offload, + FLEXCAN_NAPI_WEIGHT); + } + if (err) + goto out_free_irq; + /* start chip and queuing */ err = flexcan_chip_start(dev); if (err) - goto out_free_irq; + goto out_offload_del; can_led_event(dev, CAN_LED_EVENT_OPEN); @@ -1140,6 +1282,8 @@ static int flexcan_open(struct net_device *dev) return 0; + out_offload_del: + can_rx_offload_del(&priv->offload); out_free_irq: free_irq(dev->irq, dev); out_close: @@ -1160,6 +1304,7 @@ static int flexcan_close(struct net_device *dev) can_rx_offload_disable(&priv->offload); flexcan_chip_stop(dev); + can_rx_offload_del(&priv->offload); free_irq(dev->irq, dev); clk_disable_unprepare(priv->clk_per); clk_disable_unprepare(priv->clk_ipg); @@ -1260,6 +1405,59 @@ static void unregister_flexcandev(struct net_device *dev) unregister_candev(dev); } +static int flexcan_setup_stop_mode(struct platform_device *pdev) +{ + struct net_device *dev = platform_get_drvdata(pdev); + struct device_node *np = pdev->dev.of_node; + struct device_node *gpr_np; + struct flexcan_priv *priv; + phandle phandle; + u32 out_val[5]; + int ret; + + if (!np) + return -EINVAL; + + /* stop mode property format is: + * <&gpr req_gpr req_bit ack_gpr ack_bit>. + */ + ret = of_property_read_u32_array(np, "fsl,stop-mode", out_val, + ARRAY_SIZE(out_val)); + if (ret) { + dev_dbg(&pdev->dev, "no stop-mode property\n"); + return ret; + } + phandle = *out_val; + + gpr_np = of_find_node_by_phandle(phandle); + if (!gpr_np) { + dev_dbg(&pdev->dev, "could not find gpr node by phandle\n"); + return PTR_ERR(gpr_np); + } + + priv = netdev_priv(dev); + priv->stm.gpr = syscon_node_to_regmap(gpr_np); + of_node_put(gpr_np); + if (IS_ERR(priv->stm.gpr)) { + dev_dbg(&pdev->dev, "could not find gpr regmap\n"); + return PTR_ERR(priv->stm.gpr); + } + + priv->stm.req_gpr = out_val[1]; + priv->stm.req_bit = out_val[2]; + priv->stm.ack_gpr = out_val[3]; + priv->stm.ack_bit = out_val[4]; + + dev_dbg(&pdev->dev, + "gpr %s req_gpr=0x02%x req_bit=%u ack_gpr=0x02%x ack_bit=%u\n", + gpr_np->full_name, priv->stm.req_gpr, priv->stm.req_bit, + priv->stm.ack_gpr, priv->stm.ack_bit); + + device_set_wakeup_capable(&pdev->dev, true); + + return 0; +} + static const struct of_device_id flexcan_of_match[] = { { .compatible = "fsl,imx6q-flexcan", .data = &fsl_imx6q_devtype_data, }, { .compatible = "fsl,imx28-flexcan", .data = &fsl_imx28_devtype_data, }, @@ -1371,35 +1569,6 @@ static int flexcan_probe(struct platform_device *pdev) priv->devtype_data = devtype_data; priv->reg_xceiver = reg_xceiver; - if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) - priv->tx_mb_reserved = ®s->mb[FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP]; - else - priv->tx_mb_reserved = ®s->mb[FLEXCAN_TX_MB_RESERVED_OFF_FIFO]; - - priv->reg_imask1_default = 0; - priv->reg_imask2_default = FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB); - - priv->offload.mailbox_read = flexcan_mailbox_read; - - if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) { - u64 imask; - - priv->offload.mb_first = FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST; - priv->offload.mb_last = FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST; - - imask = GENMASK_ULL(priv->offload.mb_last, priv->offload.mb_first); - priv->reg_imask1_default |= imask; - priv->reg_imask2_default |= imask >> 32; - - err = can_rx_offload_add_timestamp(dev, &priv->offload); - } else { - priv->reg_imask1_default |= FLEXCAN_IFLAG_RX_FIFO_OVERFLOW | - FLEXCAN_IFLAG_RX_FIFO_AVAILABLE; - err = 
can_rx_offload_add_fifo(dev, &priv->offload, FLEXCAN_NAPI_WEIGHT); - } - if (err) - goto failed_offload; - err = register_flexcandev(dev); if (err) { dev_err(&pdev->dev, "registering netdev failed\n"); @@ -1408,12 +1577,17 @@ static int flexcan_probe(struct platform_device *pdev) devm_can_led_init(dev); + if (priv->devtype_data->quirks & FLEXCAN_QUIRK_SETUP_STOP_MODE) { + err = flexcan_setup_stop_mode(pdev); + if (err) + dev_dbg(&pdev->dev, "failed to setup stop-mode\n"); + } + dev_info(&pdev->dev, "device registered (reg_base=%p, irq=%d)\n", priv->regs, dev->irq); return 0; - failed_offload: failed_register: free_candev(dev); return err; @@ -1422,10 +1596,8 @@ static int flexcan_probe(struct platform_device *pdev) static int flexcan_remove(struct platform_device *pdev) { struct net_device *dev = platform_get_drvdata(pdev); - struct flexcan_priv *priv = netdev_priv(dev); unregister_flexcandev(dev); - can_rx_offload_del(&priv->offload); free_candev(dev); return 0; @@ -1438,9 +1610,17 @@ static int __maybe_unused flexcan_suspend(struct device *device) int err; if (netif_running(dev)) { - err = flexcan_chip_disable(priv); - if (err) - return err; + /* if wakeup is enabled, enter stop mode + * else enter disabled mode. + */ + if (device_may_wakeup(device)) { + enable_irq_wake(dev->irq); + flexcan_enter_stop_mode(priv); + } else { + err = flexcan_chip_disable(priv); + if (err) + return err; + } netif_stop_queue(dev); netif_device_detach(dev); } @@ -1459,14 +1639,45 @@ static int __maybe_unused flexcan_resume(struct device *device) if (netif_running(dev)) { netif_device_attach(dev); netif_start_queue(dev); - err = flexcan_chip_enable(priv); - if (err) - return err; + if (device_may_wakeup(device)) { + disable_irq_wake(dev->irq); + } else { + err = flexcan_chip_enable(priv); + if (err) + return err; + } } return 0; } -static SIMPLE_DEV_PM_OPS(flexcan_pm_ops, flexcan_suspend, flexcan_resume); +static int __maybe_unused flexcan_noirq_suspend(struct device *device) +{ + struct net_device *dev = dev_get_drvdata(device); + struct flexcan_priv *priv = netdev_priv(dev); + + if (netif_running(dev) && device_may_wakeup(device)) + flexcan_enable_wakeup_irq(priv, true); + + return 0; +} + +static int __maybe_unused flexcan_noirq_resume(struct device *device) +{ + struct net_device *dev = dev_get_drvdata(device); + struct flexcan_priv *priv = netdev_priv(dev); + + if (netif_running(dev) && device_may_wakeup(device)) { + flexcan_enable_wakeup_irq(priv, false); + flexcan_exit_stop_mode(priv); + } + + return 0; +} + +static const struct dev_pm_ops flexcan_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(flexcan_suspend, flexcan_resume) + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(flexcan_noirq_suspend, flexcan_noirq_resume) +}; static struct platform_driver flexcan_driver = { .driver = { diff --git a/drivers/net/can/rcar/Kconfig b/drivers/net/can/rcar/Kconfig index 7b03a3a37db7..bd5a8fcd83e1 100644 --- a/drivers/net/can/rcar/Kconfig +++ b/drivers/net/can/rcar/Kconfig @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 config CAN_RCAR tristate "Renesas R-Car CAN controller" depends on ARCH_RENESAS || ARM diff --git a/drivers/net/can/rcar/Makefile b/drivers/net/can/rcar/Makefile index 08de36a4cfcc..c9185b0c04a8 100644 --- a/drivers/net/can/rcar/Makefile +++ b/drivers/net/can/rcar/Makefile @@ -1,3 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 # # Makefile for the Renesas R-Car CAN & CAN FD controller drivers # diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c index 771a46083739..13e66297b65f 100644 --- 
a/drivers/net/can/rcar/rcar_can.c +++ b/drivers/net/can/rcar/rcar_can.c @@ -1,12 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0+ /* Renesas R-Car CAN device driver * * Copyright (C) 2013 Cogent Embedded, Inc. * Copyright (C) 2013 Renesas Solutions Corp. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2 of the License, or (at your - * option) any later version. */ #include diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c index 602c19e23f05..05410008aa6b 100644 --- a/drivers/net/can/rcar/rcar_canfd.c +++ b/drivers/net/can/rcar/rcar_canfd.c @@ -1,11 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0+ /* Renesas R-Car CAN FD device driver * * Copyright (C) 2015 Renesas Electronics Corp. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2 of the License, or (at your - * option) any later version. */ /* The R-Car CAN FD controller can operate in either one of the below two modes diff --git a/drivers/net/can/sja1000/Kconfig b/drivers/net/can/sja1000/Kconfig index 1e65cb6c2591..f6dc89927ece 100644 --- a/drivers/net/can/sja1000/Kconfig +++ b/drivers/net/can/sja1000/Kconfig @@ -88,6 +88,7 @@ config CAN_PLX_PCI - TEWS TECHNOLOGIES TPMC810 card (http://www.tews.com/) - IXXAT Automation PC-I 04/PCI card (http://www.ixxat.com/) - Connect Tech Inc. CANpro/104-Plus Opto (CRG001) card (http://www.connecttech.com) + - ASEM CAN raw - 2 isolated CAN channels (www.asem.it) config CAN_TSCAN1 tristate "TS-CAN1 PC104 boards" diff --git a/drivers/net/can/sja1000/plx_pci.c b/drivers/net/can/sja1000/plx_pci.c index f8ff25c8ee2e..9bcdefea138a 100644 --- a/drivers/net/can/sja1000/plx_pci.c +++ b/drivers/net/can/sja1000/plx_pci.c @@ -46,7 +46,8 @@ MODULE_SUPPORTED_DEVICE("Adlink PCI-7841/cPCI-7841, " "esd CAN-PCIe/2000, " "Connect Tech Inc. CANpro/104-Plus Opto (CRG001), " "IXXAT PC-I 04/PCI, " - "ELCUS CAN-200-PCI") + "ELCUS CAN-200-PCI, " + "ASEM DUAL CAN-RAW") MODULE_LICENSE("GPL v2"); #define PLX_PCI_MAX_CHAN 2 @@ -70,7 +71,9 @@ struct plx_pci_card { */ #define PLX_LINT1_EN 0x1 /* Local interrupt 1 enable */ +#define PLX_LINT1_POL (1 << 1) /* Local interrupt 1 polarity */ #define PLX_LINT2_EN (1 << 3) /* Local interrupt 2 enable */ +#define PLX_LINT2_POL (1 << 4) /* Local interrupt 2 polarity */ #define PLX_PCI_INT_EN (1 << 6) /* PCI Interrupt Enable */ #define PLX_PCI_RESET (1 << 30) /* PCI Adapter Software Reset */ @@ -92,6 +95,9 @@ struct plx_pci_card { */ #define PLX_PCI_OCR (OCR_TX0_PUSHPULL | OCR_TX1_PUSHPULL) +/* OCR setting for ASEM Dual CAN raw */ +#define ASEM_PCI_OCR 0xfe + /* * In the CDR register, you should set CBP to 1. 
* You will probably also want to set the clock divider value to 7 @@ -145,10 +151,20 @@ struct plx_pci_card { #define MOXA_PCI_VENDOR_ID 0x1393 #define MOXA_PCI_DEVICE_ID 0x0100 +#define ASEM_RAW_CAN_VENDOR_ID 0x10b5 +#define ASEM_RAW_CAN_DEVICE_ID 0x9030 +#define ASEM_RAW_CAN_SUB_VENDOR_ID 0x3000 +#define ASEM_RAW_CAN_SUB_DEVICE_ID 0x1001 +#define ASEM_RAW_CAN_SUB_DEVICE_ID_BIS 0x1002 +#define ASEM_RAW_CAN_RST_REGISTER 0x54 +#define ASEM_RAW_CAN_RST_MASK_CAN1 0x20 +#define ASEM_RAW_CAN_RST_MASK_CAN2 0x04 + static void plx_pci_reset_common(struct pci_dev *pdev); static void plx9056_pci_reset_common(struct pci_dev *pdev); static void plx_pci_reset_marathon_pci(struct pci_dev *pdev); static void plx_pci_reset_marathon_pcie(struct pci_dev *pdev); +static void plx_pci_reset_asem_dual_can_raw(struct pci_dev *pdev); struct plx_pci_channel_map { u32 bar; @@ -269,6 +285,14 @@ static struct plx_pci_card_info plx_pci_card_info_moxa = { /* based on PLX9052 */ }; +static struct plx_pci_card_info plx_pci_card_info_asem_dual_can = { + "ASEM Dual CAN raw PCI", 2, + PLX_PCI_CAN_CLOCK, ASEM_PCI_OCR, PLX_PCI_CDR, + {0, 0x00, 0x00}, { {2, 0x00, 0x00}, {4, 0x00, 0x00} }, + &plx_pci_reset_asem_dual_can_raw + /* based on PLX9030 */ +}; + static const struct pci_device_id plx_pci_tbl[] = { { /* Adlink PCI-7841/cPCI-7841 */ @@ -375,6 +399,20 @@ static const struct pci_device_id plx_pci_tbl[] = { 0, 0, (kernel_ulong_t)&plx_pci_card_info_moxa }, + { + /* ASEM Dual CAN raw */ + ASEM_RAW_CAN_VENDOR_ID, ASEM_RAW_CAN_DEVICE_ID, + ASEM_RAW_CAN_SUB_VENDOR_ID, ASEM_RAW_CAN_SUB_DEVICE_ID, + 0, 0, + (kernel_ulong_t)&plx_pci_card_info_asem_dual_can + }, + { + /* ASEM Dual CAN raw -new model */ + ASEM_RAW_CAN_VENDOR_ID, ASEM_RAW_CAN_DEVICE_ID, + ASEM_RAW_CAN_SUB_VENDOR_ID, ASEM_RAW_CAN_SUB_DEVICE_ID_BIS, + 0, 0, + (kernel_ulong_t)&plx_pci_card_info_asem_dual_can + }, { 0,} }; MODULE_DEVICE_TABLE(pci, plx_pci_tbl); @@ -524,6 +562,31 @@ static void plx_pci_reset_marathon_pcie(struct pci_dev *pdev) } } +/* Special reset function for ASEM Dual CAN raw card */ +static void plx_pci_reset_asem_dual_can_raw(struct pci_dev *pdev) +{ + void __iomem *bar0_addr; + u8 tmpval; + + plx_pci_reset_common(pdev); + + bar0_addr = pci_iomap(pdev, 0, 0); + if (!bar0_addr) { + dev_err(&pdev->dev, "Failed to remap reset space 0 (BAR0)\n"); + return; + } + + /* reset the two SJA1000 chips */ + tmpval = ioread8(bar0_addr + ASEM_RAW_CAN_RST_REGISTER); + tmpval &= ~(ASEM_RAW_CAN_RST_MASK_CAN1 | ASEM_RAW_CAN_RST_MASK_CAN2); + iowrite8(tmpval, bar0_addr + ASEM_RAW_CAN_RST_REGISTER); + usleep_range(300, 400); + tmpval |= ASEM_RAW_CAN_RST_MASK_CAN1 | ASEM_RAW_CAN_RST_MASK_CAN2; + iowrite8(tmpval, bar0_addr + ASEM_RAW_CAN_RST_REGISTER); + usleep_range(300, 400); + pci_iounmap(pdev, bar0_addr); +} + static void plx_pci_del_card(struct pci_dev *pdev) { struct plx_pci_card *card = pci_get_drvdata(pdev); diff --git a/drivers/net/can/usb/ucan.c b/drivers/net/can/usb/ucan.c index f3d5bda012a1..04aac3bb54ef 100644 --- a/drivers/net/can/usb/ucan.c +++ b/drivers/net/can/usb/ucan.c @@ -715,7 +715,7 @@ static void ucan_read_bulk_callback(struct urb *urb) up->in_ep_size, urb->transfer_buffer, urb->transfer_dma); - netdev_dbg(up->netdev, "not resumbmitting urb; status: %d\n", + netdev_dbg(up->netdev, "not resubmitting urb; status: %d\n", urb->status); return; default: diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c index ed6828821fbd..80af658e530d 100644 --- a/drivers/net/can/vxcan.c +++ b/drivers/net/can/vxcan.c @@ -207,7 +207,7 @@ static int 
vxcan_newlink(struct net *net, struct net_device *dev, return PTR_ERR(peer_net); peer = rtnl_create_link(peer_net, ifname, name_assign_type, - &vxcan_link_ops, tbp); + &vxcan_link_ops, tbp, extack); if (IS_ERR(peer)) { put_net(peer_net); return PTR_ERR(peer); diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c index 045f0845e665..97d0933d9bd9 100644 --- a/drivers/net/can/xilinx_can.c +++ b/drivers/net/can/xilinx_can.c @@ -63,6 +63,7 @@ enum xcan_reg { XCAN_FSR_OFFSET = 0x00E8, /* RX FIFO Status */ XCAN_TXMSG_BASE_OFFSET = 0x0100, /* TX Message Space */ XCAN_RXMSG_BASE_OFFSET = 0x1100, /* RX Message Space */ + XCAN_RXMSG_2_BASE_OFFSET = 0x2100, /* RX Message Space */ }; #define XCAN_FRAME_ID_OFFSET(frame_base) ((frame_base) + 0x00) @@ -75,6 +76,8 @@ enum xcan_reg { XCAN_CANFD_FRAME_SIZE * (n)) #define XCAN_RXMSG_FRAME_OFFSET(n) (XCAN_RXMSG_BASE_OFFSET + \ XCAN_CANFD_FRAME_SIZE * (n)) +#define XCAN_RXMSG_2_FRAME_OFFSET(n) (XCAN_RXMSG_2_BASE_OFFSET + \ + XCAN_CANFD_FRAME_SIZE * (n)) /* the single TX mailbox used by this driver on CAN FD HW */ #define XCAN_TX_MAILBOX_IDX 0 @@ -152,6 +155,7 @@ enum xcan_reg { * instead of the regular FIFO at 0x50 */ #define XCAN_FLAG_RX_FIFO_MULTI 0x0010 +#define XCAN_FLAG_CANFD_2 0x0020 struct xcan_devtype_data { unsigned int flags; @@ -221,6 +225,18 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd = { .brp_inc = 1, }; +static const struct can_bittiming_const xcan_bittiming_const_canfd2 = { + .name = DRIVER_NAME, + .tseg1_min = 1, + .tseg1_max = 256, + .tseg2_min = 1, + .tseg2_max = 128, + .sjw_max = 128, + .brp_min = 1, + .brp_max = 256, + .brp_inc = 1, +}; + /** * xcan_write_reg_le - Write a value to the device register little endian * @priv: Driver private data structure @@ -612,7 +628,7 @@ static int xcan_start_xmit_mailbox(struct sk_buff *skb, struct net_device *ndev) * * Return: NETDEV_TX_OK on success and NETDEV_TX_BUSY when the tx queue is full */ -static int xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev) +static netdev_tx_t xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev) { struct xcan_priv *priv = netdev_priv(ndev); int ret; @@ -973,7 +989,10 @@ static int xcan_rx_fifo_get_next_frame(struct xcan_priv *priv) if (!(fsr & XCAN_FSR_FL_MASK)) return -ENOENT; - offset = XCAN_RXMSG_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK); + if (priv->devtype.flags & XCAN_FLAG_CANFD_2) + offset = XCAN_RXMSG_2_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK); + else + offset = XCAN_RXMSG_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK); } else { /* check if RX FIFO is empty */ @@ -1430,11 +1449,24 @@ static const struct xcan_devtype_data xcan_canfd_data = { .bus_clk_name = "s_axi_aclk", }; +static const struct xcan_devtype_data xcan_canfd2_data = { + .flags = XCAN_FLAG_EXT_FILTERS | + XCAN_FLAG_RXMNF | + XCAN_FLAG_TX_MAILBOXES | + XCAN_FLAG_CANFD_2 | + XCAN_FLAG_RX_FIFO_MULTI, + .bittiming_const = &xcan_bittiming_const_canfd2, + .btr_ts2_shift = XCAN_BTR_TS2_SHIFT_CANFD, + .btr_sjw_shift = XCAN_BTR_SJW_SHIFT_CANFD, + .bus_clk_name = "s_axi_aclk", +}; + /* Match table for OF platform binding */ static const struct of_device_id xcan_of_match[] = { { .compatible = "xlnx,zynq-can-1.0", .data = &xcan_zynq_data }, { .compatible = "xlnx,axi-can-1.00.a", .data = &xcan_axi_data }, { .compatible = "xlnx,canfd-1.0", .data = &xcan_canfd_data }, + { .compatible = "xlnx,canfd-2.0", .data = &xcan_canfd2_data }, { /* end of list */ }, }; MODULE_DEVICE_TABLE(of, xcan_of_match); diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c index 
2eb68769562c..aa4a1f5206f1 100644 --- a/drivers/net/dsa/bcm_sf2.c +++ b/drivers/net/dsa/bcm_sf2.c @@ -710,6 +710,10 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds) return ret; } + ret = bcm_sf2_cfp_resume(ds); + if (ret) + return ret; + if (priv->hw_params.num_gphy == 1) bcm_sf2_gphy_enable_set(ds, true); @@ -1061,6 +1065,7 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev) spin_lock_init(&priv->indir_lock); mutex_init(&priv->stats_mutex); mutex_init(&priv->cfp.lock); + INIT_LIST_HEAD(&priv->cfp.rules_list); /* CFP rule #0 cannot be used for specific classifications, flag it as * permanently used @@ -1090,12 +1095,16 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev) return ret; } + bcm_sf2_gphy_enable_set(priv->dev->ds, true); + ret = bcm_sf2_mdio_register(ds); if (ret) { pr_err("failed to register MDIO bus\n"); return ret; } + bcm_sf2_gphy_enable_set(priv->dev->ds, false); + ret = bcm_sf2_cfp_rst(priv); if (ret) { pr_err("failed to reset CFP\n"); @@ -1166,6 +1175,7 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev) priv->wol_ports_mask = 0; dsa_unregister_switch(priv->dev->ds); + bcm_sf2_cfp_exit(priv->dev->ds); /* Disable all ports and interrupts */ bcm_sf2_sw_suspend(priv->dev->ds); bcm_sf2_mdio_unregister(priv); diff --git a/drivers/net/dsa/bcm_sf2.h b/drivers/net/dsa/bcm_sf2.h index cc31e986e6e3..faaef320ec48 100644 --- a/drivers/net/dsa/bcm_sf2.h +++ b/drivers/net/dsa/bcm_sf2.h @@ -56,6 +56,7 @@ struct bcm_sf2_cfp_priv { DECLARE_BITMAP(used, CFP_NUM_RULES); DECLARE_BITMAP(unique, CFP_NUM_RULES); unsigned int rules_cnt; + struct list_head rules_list; }; struct bcm_sf2_priv { @@ -213,5 +214,7 @@ int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port, int bcm_sf2_set_rxnfc(struct dsa_switch *ds, int port, struct ethtool_rxnfc *nfc); int bcm_sf2_cfp_rst(struct bcm_sf2_priv *priv); +void bcm_sf2_cfp_exit(struct dsa_switch *ds); +int bcm_sf2_cfp_resume(struct dsa_switch *ds); #endif /* __BCM_SF2_H */ diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c index 47c5f272a084..e14663ab6dbc 100644 --- a/drivers/net/dsa/bcm_sf2_cfp.c +++ b/drivers/net/dsa/bcm_sf2_cfp.c @@ -20,6 +20,12 @@ #include "bcm_sf2.h" #include "bcm_sf2_regs.h" +struct cfp_rule { + int port; + struct ethtool_rx_flow_spec fs; + struct list_head next; +}; + struct cfp_udf_slice_layout { u8 slices[UDFS_PER_SLICE]; u32 mask_value; @@ -515,6 +521,61 @@ static void bcm_sf2_cfp_slice_ipv6(struct bcm_sf2_priv *priv, core_writel(priv, reg, offset); } +static struct cfp_rule *bcm_sf2_cfp_rule_find(struct bcm_sf2_priv *priv, + int port, u32 location) +{ + struct cfp_rule *rule = NULL; + + list_for_each_entry(rule, &priv->cfp.rules_list, next) { + if (rule->port == port && rule->fs.location == location) + break; + } + + return rule; +} + +static int bcm_sf2_cfp_rule_cmp(struct bcm_sf2_priv *priv, int port, + struct ethtool_rx_flow_spec *fs) +{ + struct cfp_rule *rule = NULL; + size_t fs_size = 0; + int ret = 1; + + if (list_empty(&priv->cfp.rules_list)) + return ret; + + list_for_each_entry(rule, &priv->cfp.rules_list, next) { + ret = 1; + if (rule->port != port) + continue; + + if (rule->fs.flow_type != fs->flow_type || + rule->fs.ring_cookie != fs->ring_cookie || + rule->fs.m_ext.data[0] != fs->m_ext.data[0]) + continue; + + switch (fs->flow_type & ~FLOW_EXT) { + case TCP_V6_FLOW: + case UDP_V6_FLOW: + fs_size = sizeof(struct ethtool_tcpip6_spec); + break; + case TCP_V4_FLOW: + case UDP_V4_FLOW: + fs_size = sizeof(struct ethtool_tcpip4_spec); + break; + default: + continue; 
+ } + + ret = memcmp(&rule->fs.h_u, &fs->h_u, fs_size); + ret |= memcmp(&rule->fs.m_u, &fs->m_u, fs_size); + if (ret == 0) + break; + } + + return ret; +} + static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port, unsigned int port_num, unsigned int queue_num, @@ -728,27 +789,14 @@ out_err: return ret; } -static int bcm_sf2_cfp_rule_set(struct dsa_switch *ds, int port, - struct ethtool_rx_flow_spec *fs) +static int bcm_sf2_cfp_rule_insert(struct dsa_switch *ds, int port, + struct ethtool_rx_flow_spec *fs) { struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); s8 cpu_port = ds->ports[port].cpu_dp->index; __u64 ring_cookie = fs->ring_cookie; unsigned int queue_num, port_num; - int ret = -EINVAL; - - /* Check for unsupported extensions */ - if ((fs->flow_type & FLOW_EXT) && (fs->m_ext.vlan_etype || - fs->m_ext.data[1])) - return -EINVAL; - - if (fs->location != RX_CLS_LOC_ANY && - test_bit(fs->location, priv->cfp.used)) - return -EBUSY; - - if (fs->location != RX_CLS_LOC_ANY && - fs->location > bcm_sf2_cfp_rule_size(priv)) - return -EINVAL; + int ret; /* This rule is a Wake-on-LAN filter and we must specifically * target the CPU port in order for it to be working. @@ -787,12 +835,54 @@ static int bcm_sf2_cfp_rule_set(struct dsa_switch *ds, int port, queue_num, fs); break; default: + ret = -EINVAL; break; } return ret; } +static int bcm_sf2_cfp_rule_set(struct dsa_switch *ds, int port, + struct ethtool_rx_flow_spec *fs) +{ + struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); + struct cfp_rule *rule = NULL; + int ret = -EINVAL; + + /* Check for unsupported extensions */ + if ((fs->flow_type & FLOW_EXT) && (fs->m_ext.vlan_etype || + fs->m_ext.data[1])) + return -EINVAL; + + if (fs->location != RX_CLS_LOC_ANY && + test_bit(fs->location, priv->cfp.used)) + return -EBUSY; + + if (fs->location != RX_CLS_LOC_ANY && + fs->location > bcm_sf2_cfp_rule_size(priv)) + return -EINVAL; + + ret = bcm_sf2_cfp_rule_cmp(priv, port, fs); + if (ret == 0) + return -EEXIST; + + rule = kzalloc(sizeof(*rule), GFP_KERNEL); + if (!rule) + return -ENOMEM; + + ret = bcm_sf2_cfp_rule_insert(ds, port, fs); + if (ret) { + kfree(rule); + return ret; + } + + rule->port = port; + memcpy(&rule->fs, fs, sizeof(*fs)); + list_add_tail(&rule->next, &priv->cfp.rules_list); + + return ret; +} + static int bcm_sf2_cfp_rule_del_one(struct bcm_sf2_priv *priv, int port, u32 loc, u32 *next_loc) { @@ -830,19 +920,12 @@ static int bcm_sf2_cfp_rule_del_one(struct bcm_sf2_priv *priv, int port, return 0; } -static int bcm_sf2_cfp_rule_del(struct bcm_sf2_priv *priv, int port, - u32 loc) +static int bcm_sf2_cfp_rule_remove(struct bcm_sf2_priv *priv, int port, + u32 loc) { u32 next_loc = 0; int ret; - /* Refuse deleting unused rules, and those that are not unique since - * that could leave IPv6 rules with one of the chained rule in the - * table. 
- */ - if (!test_bit(loc, priv->cfp.unique) || loc == 0) - return -EINVAL; - ret = bcm_sf2_cfp_rule_del_one(priv, port, loc, &next_loc); if (ret) return ret; @@ -854,318 +937,54 @@ static int bcm_sf2_cfp_rule_del(struct bcm_sf2_priv *priv, int port, return ret; } -static void bcm_sf2_invert_masks(struct ethtool_rx_flow_spec *flow) +static int bcm_sf2_cfp_rule_del(struct bcm_sf2_priv *priv, int port, u32 loc) { - unsigned int i; - - for (i = 0; i < sizeof(flow->m_u); i++) - flow->m_u.hdata[i] ^= 0xff; - - flow->m_ext.vlan_etype ^= cpu_to_be16(~0); - flow->m_ext.vlan_tci ^= cpu_to_be16(~0); - flow->m_ext.data[0] ^= cpu_to_be32(~0); - flow->m_ext.data[1] ^= cpu_to_be32(~0); -} - -static int bcm_sf2_cfp_unslice_ipv4(struct bcm_sf2_priv *priv, - struct ethtool_tcpip4_spec *v4_spec, - bool mask) -{ - u32 reg, offset, ipv4; - u16 src_dst_port; - - if (mask) - offset = CORE_CFP_MASK_PORT(3); - else - offset = CORE_CFP_DATA_PORT(3); - - reg = core_readl(priv, offset); - /* src port [15:8] */ - src_dst_port = reg << 8; - - if (mask) - offset = CORE_CFP_MASK_PORT(2); - else - offset = CORE_CFP_DATA_PORT(2); - - reg = core_readl(priv, offset); - /* src port [7:0] */ - src_dst_port |= (reg >> 24); - - v4_spec->pdst = cpu_to_be16(src_dst_port); - v4_spec->psrc = cpu_to_be16((u16)(reg >> 8)); - - /* IPv4 dst [15:8] */ - ipv4 = (reg & 0xff) << 8; - - if (mask) - offset = CORE_CFP_MASK_PORT(1); - else - offset = CORE_CFP_DATA_PORT(1); - - reg = core_readl(priv, offset); - /* IPv4 dst [31:16] */ - ipv4 |= ((reg >> 8) & 0xffff) << 16; - /* IPv4 dst [7:0] */ - ipv4 |= (reg >> 24) & 0xff; - v4_spec->ip4dst = cpu_to_be32(ipv4); - - /* IPv4 src [15:8] */ - ipv4 = (reg & 0xff) << 8; - - if (mask) - offset = CORE_CFP_MASK_PORT(0); - else - offset = CORE_CFP_DATA_PORT(0); - reg = core_readl(priv, offset); + struct cfp_rule *rule; + int ret; - /* Once the TCAM is programmed, the mask reflects the slice number - * being matched, don't bother checking it when reading back the - * mask spec + /* Refuse deleting unused rules, and those that are not unique since + * that could leave IPv6 rules with one of the chained rule in the + * table. 
*/ - if (!mask && !(reg & SLICE_VALID)) + if (!test_bit(loc, priv->cfp.unique) || loc == 0) return -EINVAL; - /* IPv4 src [7:0] */ - ipv4 |= (reg >> 24) & 0xff; - /* IPv4 src [31:16] */ - ipv4 |= ((reg >> 8) & 0xffff) << 16; - v4_spec->ip4src = cpu_to_be32(ipv4); - - return 0; -} - -static int bcm_sf2_cfp_ipv4_rule_get(struct bcm_sf2_priv *priv, int port, - struct ethtool_rx_flow_spec *fs) -{ - struct ethtool_tcpip4_spec *v4_spec = NULL, *v4_m_spec = NULL; - u32 reg; - int ret; - - reg = core_readl(priv, CORE_CFP_DATA_PORT(6)); - - switch ((reg & IPPROTO_MASK) >> IPPROTO_SHIFT) { - case IPPROTO_TCP: - fs->flow_type = TCP_V4_FLOW; - v4_spec = &fs->h_u.tcp_ip4_spec; - v4_m_spec = &fs->m_u.tcp_ip4_spec; - break; - case IPPROTO_UDP: - fs->flow_type = UDP_V4_FLOW; - v4_spec = &fs->h_u.udp_ip4_spec; - v4_m_spec = &fs->m_u.udp_ip4_spec; - break; - default: + rule = bcm_sf2_cfp_rule_find(priv, port, loc); + if (!rule) return -EINVAL; - } - - fs->m_ext.data[0] = cpu_to_be32((reg >> IP_FRAG_SHIFT) & 1); - v4_spec->tos = (reg >> IPTOS_SHIFT) & IPTOS_MASK; - - ret = bcm_sf2_cfp_unslice_ipv4(priv, v4_spec, false); - if (ret) - return ret; - - return bcm_sf2_cfp_unslice_ipv4(priv, v4_m_spec, true); -} - -static int bcm_sf2_cfp_unslice_ipv6(struct bcm_sf2_priv *priv, - __be32 *ip6_addr, __be16 *port, - bool mask) -{ - u32 reg, tmp, offset; - - /* C-Tag [31:24] - * UDF_n_B8 [23:8] (port) - * UDF_n_B7 (upper) [7:0] (addr[15:8]) - */ - if (mask) - offset = CORE_CFP_MASK_PORT(4); - else - offset = CORE_CFP_DATA_PORT(4); - reg = core_readl(priv, offset); - *port = cpu_to_be32(reg) >> 8; - tmp = (u32)(reg & 0xff) << 8; - - /* UDF_n_B7 (lower) [31:24] (addr[7:0]) - * UDF_n_B6 [23:8] (addr[31:16]) - * UDF_n_B5 (upper) [7:0] (addr[47:40]) - */ - if (mask) - offset = CORE_CFP_MASK_PORT(3); - else - offset = CORE_CFP_DATA_PORT(3); - reg = core_readl(priv, offset); - tmp |= (reg >> 24) & 0xff; - tmp |= (u32)((reg >> 8) << 16); - ip6_addr[3] = cpu_to_be32(tmp); - tmp = (u32)(reg & 0xff) << 8; - - /* UDF_n_B5 (lower) [31:24] (addr[39:32]) - * UDF_n_B4 [23:8] (addr[63:48]) - * UDF_n_B3 (upper) [7:0] (addr[79:72]) - */ - if (mask) - offset = CORE_CFP_MASK_PORT(2); - else - offset = CORE_CFP_DATA_PORT(2); - reg = core_readl(priv, offset); - tmp |= (reg >> 24) & 0xff; - tmp |= (u32)((reg >> 8) << 16); - ip6_addr[2] = cpu_to_be32(tmp); - tmp = (u32)(reg & 0xff) << 8; - /* UDF_n_B3 (lower) [31:24] (addr[71:64]) - * UDF_n_B2 [23:8] (addr[95:80]) - * UDF_n_B1 (upper) [7:0] (addr[111:104]) - */ - if (mask) - offset = CORE_CFP_MASK_PORT(1); - else - offset = CORE_CFP_DATA_PORT(1); - reg = core_readl(priv, offset); - tmp |= (reg >> 24) & 0xff; - tmp |= (u32)((reg >> 8) << 16); - ip6_addr[1] = cpu_to_be32(tmp); - tmp = (u32)(reg & 0xff) << 8; + ret = bcm_sf2_cfp_rule_remove(priv, port, loc); - /* UDF_n_B1 (lower) [31:24] (addr[103:96]) - * UDF_n_B0 [23:8] (addr[127:112]) - * Reserved [7:4] - * Slice ID [3:2] - * Slice valid [1:0] - */ - if (mask) - offset = CORE_CFP_MASK_PORT(0); - else - offset = CORE_CFP_DATA_PORT(0); - reg = core_readl(priv, offset); - tmp |= (reg >> 24) & 0xff; - tmp |= (u32)((reg >> 8) << 16); - ip6_addr[0] = cpu_to_be32(tmp); + list_del(&rule->next); + kfree(rule); - if (!mask && !(reg & SLICE_VALID)) - return -EINVAL; - - return 0; + return ret; } -static int bcm_sf2_cfp_ipv6_rule_get(struct bcm_sf2_priv *priv, int port, - struct ethtool_rx_flow_spec *fs, - u32 next_loc) +static void bcm_sf2_invert_masks(struct ethtool_rx_flow_spec *flow) { - struct ethtool_tcpip6_spec *v6_spec = NULL, *v6_m_spec = NULL; - 
u32 reg; - int ret; - - /* UDPv6 and TCPv6 both use ethtool_tcpip6_spec so we are fine - * assuming tcp_ip6_spec here being an union. - */ - v6_spec = &fs->h_u.tcp_ip6_spec; - v6_m_spec = &fs->m_u.tcp_ip6_spec; - - /* Read the second half first */ - ret = bcm_sf2_cfp_unslice_ipv6(priv, v6_spec->ip6dst, &v6_spec->pdst, - false); - if (ret) - return ret; - - ret = bcm_sf2_cfp_unslice_ipv6(priv, v6_m_spec->ip6dst, - &v6_m_spec->pdst, true); - if (ret) - return ret; - - /* Read last to avoid next entry clobbering the results during search - * operations. We would not have the port enabled for this rule, so - * don't bother checking it. - */ - (void)core_readl(priv, CORE_CFP_DATA_PORT(7)); - - /* The slice number is valid, so read the rule we are chained from now - * which is our first half. - */ - bcm_sf2_cfp_rule_addr_set(priv, next_loc); - ret = bcm_sf2_cfp_op(priv, OP_SEL_READ | TCAM_SEL); - if (ret) - return ret; - - reg = core_readl(priv, CORE_CFP_DATA_PORT(6)); - - switch ((reg & IPPROTO_MASK) >> IPPROTO_SHIFT) { - case IPPROTO_TCP: - fs->flow_type = TCP_V6_FLOW; - break; - case IPPROTO_UDP: - fs->flow_type = UDP_V6_FLOW; - break; - default: - return -EINVAL; - } + unsigned int i; - ret = bcm_sf2_cfp_unslice_ipv6(priv, v6_spec->ip6src, &v6_spec->psrc, - false); - if (ret) - return ret; + for (i = 0; i < sizeof(flow->m_u); i++) + flow->m_u.hdata[i] ^= 0xff; - return bcm_sf2_cfp_unslice_ipv6(priv, v6_m_spec->ip6src, - &v6_m_spec->psrc, true); + flow->m_ext.vlan_etype ^= cpu_to_be16(~0); + flow->m_ext.vlan_tci ^= cpu_to_be16(~0); + flow->m_ext.data[0] ^= cpu_to_be32(~0); + flow->m_ext.data[1] ^= cpu_to_be32(~0); } static int bcm_sf2_cfp_rule_get(struct bcm_sf2_priv *priv, int port, struct ethtool_rxnfc *nfc) { - u32 reg, ipv4_or_chain_id; - unsigned int queue_num; - int ret; - - bcm_sf2_cfp_rule_addr_set(priv, nfc->fs.location); - - ret = bcm_sf2_cfp_op(priv, OP_SEL_READ | ACT_POL_RAM); - if (ret) - return ret; - - reg = core_readl(priv, CORE_ACT_POL_DATA0); - - ret = bcm_sf2_cfp_op(priv, OP_SEL_READ | TCAM_SEL); - if (ret) - return ret; - - /* Extract the destination port */ - nfc->fs.ring_cookie = fls((reg >> DST_MAP_IB_SHIFT) & - DST_MAP_IB_MASK) - 1; - - /* There is no Port 6, so we compensate for that here */ - if (nfc->fs.ring_cookie >= 6) - nfc->fs.ring_cookie++; - nfc->fs.ring_cookie *= SF2_NUM_EGRESS_QUEUES; - - /* Extract the destination queue */ - queue_num = (reg >> NEW_TC_SHIFT) & NEW_TC_MASK; - nfc->fs.ring_cookie += queue_num; - - /* Extract the L3_FRAMING or CHAIN_ID */ - reg = core_readl(priv, CORE_CFP_DATA_PORT(6)); + struct cfp_rule *rule; - /* With IPv6 rules this would contain a non-zero chain ID since - * we reserve entry 0 and it cannot be used. So if we read 0 here - * this means an IPv4 rule. 
- */ - ipv4_or_chain_id = (reg >> L3_FRAMING_SHIFT) & 0xff; - if (ipv4_or_chain_id == 0) - ret = bcm_sf2_cfp_ipv4_rule_get(priv, port, &nfc->fs); - else - ret = bcm_sf2_cfp_ipv6_rule_get(priv, port, &nfc->fs, - ipv4_or_chain_id); - if (ret) - return ret; - - /* Read last to avoid next entry clobbering the results during search - * operations - */ - reg = core_readl(priv, CORE_CFP_DATA_PORT(7)); - if (!(reg & 1 << port)) + rule = bcm_sf2_cfp_rule_find(priv, port, nfc->fs.location); + if (!rule) return -EINVAL; + memcpy(&nfc->fs, &rule->fs, sizeof(rule->fs)); + bcm_sf2_invert_masks(&nfc->fs); /* Put the TCAM size here */ @@ -1302,3 +1121,51 @@ int bcm_sf2_cfp_rst(struct bcm_sf2_priv *priv) return 0; } + +void bcm_sf2_cfp_exit(struct dsa_switch *ds) +{ + struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); + struct cfp_rule *rule, *n; + + if (list_empty(&priv->cfp.rules_list)) + return; + + list_for_each_entry_safe_reverse(rule, n, &priv->cfp.rules_list, next) + bcm_sf2_cfp_rule_del(priv, rule->port, rule->fs.location); +} + +int bcm_sf2_cfp_resume(struct dsa_switch *ds) +{ + struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); + struct cfp_rule *rule; + int ret = 0; + u32 reg; + + if (list_empty(&priv->cfp.rules_list)) + return ret; + + reg = core_readl(priv, CORE_CFP_CTL_REG); + reg &= ~CFP_EN_MAP_MASK; + core_writel(priv, reg, CORE_CFP_CTL_REG); + + ret = bcm_sf2_cfp_rst(priv); + if (ret) + return ret; + + list_for_each_entry(rule, &priv->cfp.rules_list, next) { + ret = bcm_sf2_cfp_rule_remove(priv, rule->port, + rule->fs.location); + if (ret) { + dev_err(ds->dev, "failed to remove rule\n"); + return ret; + } + + ret = bcm_sf2_cfp_rule_insert(ds, rule->port, &rule->fs); + if (ret) { + dev_err(ds->dev, "failed to restore rule\n"); + return ret; + } + } + + return ret; +} diff --git a/drivers/net/dsa/microchip/Kconfig b/drivers/net/dsa/microchip/Kconfig index a8b8f59099ce..bea29fde9f3d 100644 --- a/drivers/net/dsa/microchip/Kconfig +++ b/drivers/net/dsa/microchip/Kconfig @@ -1,12 +1,16 @@ -menuconfig MICROCHIP_KSZ - tristate "Microchip KSZ series switch support" +config NET_DSA_MICROCHIP_KSZ_COMMON + tristate + +menuconfig NET_DSA_MICROCHIP_KSZ9477 + tristate "Microchip KSZ9477 series switch support" depends on NET_DSA - select NET_DSA_TAG_KSZ + select NET_DSA_TAG_KSZ9477 + select NET_DSA_MICROCHIP_KSZ_COMMON help - This driver adds support for Microchip KSZ switch chips. + This driver adds support for Microchip KSZ9477 switch chips. -config MICROCHIP_KSZ_SPI_DRIVER - tristate "KSZ series SPI connected switch driver" - depends on MICROCHIP_KSZ && SPI +config NET_DSA_MICROCHIP_KSZ9477_SPI + tristate "KSZ9477 series SPI connected switch driver" + depends on NET_DSA_MICROCHIP_KSZ9477 && SPI help Select to enable support for registering switches configured through SPI. 
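With this rename the old MICROCHIP_KSZ and MICROCHIP_KSZ_SPI_DRIVER symbols are gone, so existing configurations have to be updated. As an illustrative fragment only (symbol names taken from the hunk above; the common part and the tagger are pulled in via select), an SPI-connected KSZ9477 built as modules would use:

    CONFIG_NET_DSA=y
    CONFIG_NET_DSA_MICROCHIP_KSZ9477=m
    CONFIG_NET_DSA_MICROCHIP_KSZ9477_SPI=m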
diff --git a/drivers/net/dsa/microchip/Makefile b/drivers/net/dsa/microchip/Makefile index ed335e29fae8..3142c18b8f57 100644 --- a/drivers/net/dsa/microchip/Makefile +++ b/drivers/net/dsa/microchip/Makefile @@ -1,2 +1,3 @@ -obj-$(CONFIG_MICROCHIP_KSZ) += ksz_common.o -obj-$(CONFIG_MICROCHIP_KSZ_SPI_DRIVER) += ksz_spi.o +obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ_COMMON) += ksz_common.o +obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ9477) += ksz9477.o +obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ9477_SPI) += ksz9477_spi.o diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c new file mode 100644 index 000000000000..89ed059bb576 --- /dev/null +++ b/drivers/net/dsa/microchip/ksz9477.c @@ -0,0 +1,1316 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Microchip KSZ9477 switch driver main logic + * + * Copyright (C) 2017-2018 Microchip Technology Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ksz_priv.h" +#include "ksz_common.h" +#include "ksz9477_reg.h" + +static const struct { + int index; + char string[ETH_GSTRING_LEN]; +} ksz9477_mib_names[TOTAL_SWITCH_COUNTER_NUM] = { + { 0x00, "rx_hi" }, + { 0x01, "rx_undersize" }, + { 0x02, "rx_fragments" }, + { 0x03, "rx_oversize" }, + { 0x04, "rx_jabbers" }, + { 0x05, "rx_symbol_err" }, + { 0x06, "rx_crc_err" }, + { 0x07, "rx_align_err" }, + { 0x08, "rx_mac_ctrl" }, + { 0x09, "rx_pause" }, + { 0x0A, "rx_bcast" }, + { 0x0B, "rx_mcast" }, + { 0x0C, "rx_ucast" }, + { 0x0D, "rx_64_or_less" }, + { 0x0E, "rx_65_127" }, + { 0x0F, "rx_128_255" }, + { 0x10, "rx_256_511" }, + { 0x11, "rx_512_1023" }, + { 0x12, "rx_1024_1522" }, + { 0x13, "rx_1523_2000" }, + { 0x14, "rx_2001" }, + { 0x15, "tx_hi" }, + { 0x16, "tx_late_col" }, + { 0x17, "tx_pause" }, + { 0x18, "tx_bcast" }, + { 0x19, "tx_mcast" }, + { 0x1A, "tx_ucast" }, + { 0x1B, "tx_deferred" }, + { 0x1C, "tx_total_col" }, + { 0x1D, "tx_exc_col" }, + { 0x1E, "tx_single_col" }, + { 0x1F, "tx_mult_col" }, + { 0x80, "rx_total" }, + { 0x81, "tx_total" }, + { 0x82, "rx_discards" }, + { 0x83, "tx_discards" }, +}; + +static void ksz9477_cfg32(struct ksz_device *dev, u32 addr, u32 bits, bool set) +{ + u32 data; + + ksz_read32(dev, addr, &data); + if (set) + data |= bits; + else + data &= ~bits; + ksz_write32(dev, addr, data); +} + +static void ksz9477_port_cfg32(struct ksz_device *dev, int port, int offset, + u32 bits, bool set) +{ + u32 addr; + u32 data; + + addr = PORT_CTRL_ADDR(port, offset); + ksz_read32(dev, addr, &data); + + if (set) + data |= bits; + else + data &= ~bits; + + ksz_write32(dev, addr, data); +} + +static int ksz9477_wait_vlan_ctrl_ready(struct ksz_device *dev, u32 waiton, + int timeout) +{ + u8 data; + + do { + ksz_read8(dev, REG_SW_VLAN_CTRL, &data); + if (!(data & waiton)) + break; + usleep_range(1, 10); + } while (timeout-- > 0); + + if (timeout <= 0) + return -ETIMEDOUT; + + return 0; +} + +static int ksz9477_get_vlan_table(struct ksz_device *dev, u16 vid, + u32 *vlan_table) +{ + int ret; + + mutex_lock(&dev->vlan_mutex); + + ksz_write16(dev, REG_SW_VLAN_ENTRY_INDEX__2, vid & VLAN_INDEX_M); + ksz_write8(dev, REG_SW_VLAN_CTRL, VLAN_READ | VLAN_START); + + /* wait to be cleared */ + ret = ksz9477_wait_vlan_ctrl_ready(dev, VLAN_START, 1000); + if (ret < 0) { + dev_dbg(dev->dev, "Failed to read vlan table\n"); + goto exit; + } + + ksz_read32(dev, REG_SW_VLAN_ENTRY__4, &vlan_table[0]); + ksz_read32(dev, REG_SW_VLAN_ENTRY_UNTAG__4, &vlan_table[1]); + ksz_read32(dev, REG_SW_VLAN_ENTRY_PORTS__4, &vlan_table[2]); + + 
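	/* Access complete: clear the VLAN table control register before
+	 * releasing the lock.
+	 */
+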
ksz_write8(dev, REG_SW_VLAN_CTRL, 0); + +exit: + mutex_unlock(&dev->vlan_mutex); + + return ret; +} + +static int ksz9477_set_vlan_table(struct ksz_device *dev, u16 vid, + u32 *vlan_table) +{ + int ret; + + mutex_lock(&dev->vlan_mutex); + + ksz_write32(dev, REG_SW_VLAN_ENTRY__4, vlan_table[0]); + ksz_write32(dev, REG_SW_VLAN_ENTRY_UNTAG__4, vlan_table[1]); + ksz_write32(dev, REG_SW_VLAN_ENTRY_PORTS__4, vlan_table[2]); + + ksz_write16(dev, REG_SW_VLAN_ENTRY_INDEX__2, vid & VLAN_INDEX_M); + ksz_write8(dev, REG_SW_VLAN_CTRL, VLAN_START | VLAN_WRITE); + + /* wait to be cleared */ + ret = ksz9477_wait_vlan_ctrl_ready(dev, VLAN_START, 1000); + if (ret < 0) { + dev_dbg(dev->dev, "Failed to write vlan table\n"); + goto exit; + } + + ksz_write8(dev, REG_SW_VLAN_CTRL, 0); + + /* update vlan cache table */ + dev->vlan_cache[vid].table[0] = vlan_table[0]; + dev->vlan_cache[vid].table[1] = vlan_table[1]; + dev->vlan_cache[vid].table[2] = vlan_table[2]; + +exit: + mutex_unlock(&dev->vlan_mutex); + + return ret; +} + +static void ksz9477_read_table(struct ksz_device *dev, u32 *table) +{ + ksz_read32(dev, REG_SW_ALU_VAL_A, &table[0]); + ksz_read32(dev, REG_SW_ALU_VAL_B, &table[1]); + ksz_read32(dev, REG_SW_ALU_VAL_C, &table[2]); + ksz_read32(dev, REG_SW_ALU_VAL_D, &table[3]); +} + +static void ksz9477_write_table(struct ksz_device *dev, u32 *table) +{ + ksz_write32(dev, REG_SW_ALU_VAL_A, table[0]); + ksz_write32(dev, REG_SW_ALU_VAL_B, table[1]); + ksz_write32(dev, REG_SW_ALU_VAL_C, table[2]); + ksz_write32(dev, REG_SW_ALU_VAL_D, table[3]); +} + +static int ksz9477_wait_alu_ready(struct ksz_device *dev, u32 waiton, + int timeout) +{ + u32 data; + + do { + ksz_read32(dev, REG_SW_ALU_CTRL__4, &data); + if (!(data & waiton)) + break; + usleep_range(1, 10); + } while (timeout-- > 0); + + if (timeout <= 0) + return -ETIMEDOUT; + + return 0; +} + +static int ksz9477_wait_alu_sta_ready(struct ksz_device *dev, u32 waiton, + int timeout) +{ + u32 data; + + do { + ksz_read32(dev, REG_SW_ALU_STAT_CTRL__4, &data); + if (!(data & waiton)) + break; + usleep_range(1, 10); + } while (timeout-- > 0); + + if (timeout <= 0) + return -ETIMEDOUT; + + return 0; +} + +static int ksz9477_reset_switch(struct ksz_device *dev) +{ + u8 data8; + u16 data16; + u32 data32; + + /* reset switch */ + ksz_cfg(dev, REG_SW_OPERATION, SW_RESET, true); + + /* turn off SPI DO Edge select */ + ksz_read8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, &data8); + data8 &= ~SPI_AUTO_EDGE_DETECTION; + ksz_write8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, data8); + + /* default configuration */ + ksz_read8(dev, REG_SW_LUE_CTRL_1, &data8); + data8 = SW_AGING_ENABLE | SW_LINK_AUTO_AGING | + SW_SRC_ADDR_FILTER | SW_FLUSH_STP_TABLE | SW_FLUSH_MSTP_TABLE; + ksz_write8(dev, REG_SW_LUE_CTRL_1, data8); + + /* disable interrupts */ + ksz_write32(dev, REG_SW_INT_MASK__4, SWITCH_INT_MASK); + ksz_write32(dev, REG_SW_PORT_INT_MASK__4, 0x7F); + ksz_read32(dev, REG_SW_PORT_INT_STATUS__4, &data32); + + /* set broadcast storm protection 10% rate */ + ksz_read16(dev, REG_SW_MAC_CTRL_2, &data16); + data16 &= ~BROADCAST_STORM_RATE; + data16 |= (BROADCAST_STORM_VALUE * BROADCAST_STORM_PROT_RATE) / 100; + ksz_write16(dev, REG_SW_MAC_CTRL_2, data16); + + return 0; +} + +static enum dsa_tag_protocol ksz9477_get_tag_protocol(struct dsa_switch *ds, + int port) +{ + return DSA_TAG_PROTO_KSZ9477; +} + +static int ksz9477_phy_read16(struct dsa_switch *ds, int addr, int reg) +{ + struct ksz_device *dev = ds->priv; + u16 val = 0xffff; + + /* No real PHY after this. Simulate the PHY. 
+ * A fixed PHY can be setup in the device tree, but this function is + * still called for that port during initialization. + * For RGMII PHY there is no way to access it so the fixed PHY should + * be used. For SGMII PHY the supporting code will be added later. + */ + if (addr >= dev->phy_port_cnt) { + struct ksz_port *p = &dev->ports[addr]; + + switch (reg) { + case MII_BMCR: + val = 0x1140; + break; + case MII_BMSR: + val = 0x796d; + break; + case MII_PHYSID1: + val = 0x0022; + break; + case MII_PHYSID2: + val = 0x1631; + break; + case MII_ADVERTISE: + val = 0x05e1; + break; + case MII_LPA: + val = 0xc5e1; + break; + case MII_CTRL1000: + val = 0x0700; + break; + case MII_STAT1000: + if (p->phydev.speed == SPEED_1000) + val = 0x3800; + else + val = 0; + break; + } + } else { + ksz_pread16(dev, addr, 0x100 + (reg << 1), &val); + } + + return val; +} + +static int ksz9477_phy_write16(struct dsa_switch *ds, int addr, int reg, + u16 val) +{ + struct ksz_device *dev = ds->priv; + + /* No real PHY after this. */ + if (addr >= dev->phy_port_cnt) + return 0; + ksz_pwrite16(dev, addr, 0x100 + (reg << 1), val); + + return 0; +} + +static void ksz9477_get_strings(struct dsa_switch *ds, int port, + u32 stringset, uint8_t *buf) +{ + int i; + + if (stringset != ETH_SS_STATS) + return; + + for (i = 0; i < TOTAL_SWITCH_COUNTER_NUM; i++) { + memcpy(buf + i * ETH_GSTRING_LEN, ksz9477_mib_names[i].string, + ETH_GSTRING_LEN); + } +} + +static void ksz_get_ethtool_stats(struct dsa_switch *ds, int port, + uint64_t *buf) +{ + struct ksz_device *dev = ds->priv; + int i; + u32 data; + int timeout; + + mutex_lock(&dev->stats_mutex); + + for (i = 0; i < TOTAL_SWITCH_COUNTER_NUM; i++) { + data = MIB_COUNTER_READ; + data |= ((ksz9477_mib_names[i].index & 0xFF) << + MIB_COUNTER_INDEX_S); + ksz_pwrite32(dev, port, REG_PORT_MIB_CTRL_STAT__4, data); + + timeout = 1000; + do { + ksz_pread32(dev, port, REG_PORT_MIB_CTRL_STAT__4, + &data); + usleep_range(1, 10); + if (!(data & MIB_COUNTER_READ)) + break; + } while (timeout-- > 0); + + /* failed to read MIB. get out of loop */ + if (!timeout) { + dev_dbg(dev->dev, "Failed to get MIB\n"); + break; + } + + /* count resets upon read */ + ksz_pread32(dev, port, REG_PORT_MIB_DATA, &data); + + dev->mib_value[i] += (uint64_t)data; + buf[i] = dev->mib_value[i]; + } + + mutex_unlock(&dev->stats_mutex); +} + +static void ksz9477_cfg_port_member(struct ksz_device *dev, int port, + u8 member) +{ + ksz_pwrite32(dev, port, REG_PORT_VLAN_MEMBERSHIP__4, member); + dev->ports[port].member = member; +} + +static void ksz9477_port_stp_state_set(struct dsa_switch *ds, int port, + u8 state) +{ + struct ksz_device *dev = ds->priv; + struct ksz_port *p = &dev->ports[port]; + u8 data; + int member = -1; + + ksz_pread8(dev, port, P_STP_CTRL, &data); + data &= ~(PORT_TX_ENABLE | PORT_RX_ENABLE | PORT_LEARN_DISABLE); + + switch (state) { + case BR_STATE_DISABLED: + data |= PORT_LEARN_DISABLE; + if (port != dev->cpu_port) + member = 0; + break; + case BR_STATE_LISTENING: + data |= (PORT_RX_ENABLE | PORT_LEARN_DISABLE); + if (port != dev->cpu_port && + p->stp_state == BR_STATE_DISABLED) + member = dev->host_mask | p->vid_member; + break; + case BR_STATE_LEARNING: + data |= PORT_RX_ENABLE; + break; + case BR_STATE_FORWARDING: + data |= (PORT_TX_ENABLE | PORT_RX_ENABLE); + + /* This function is also used internally. */ + if (port == dev->cpu_port) + break; + + member = dev->host_mask | p->vid_member; + + /* Port is a member of a bridge. 
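+		 * It should forward together with the other bridge
+		 * members, so switch to the accumulated bridge
+		 * membership mask.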
*/ + if (dev->br_member & (1 << port)) { + dev->member |= (1 << port); + member = dev->member; + } + break; + case BR_STATE_BLOCKING: + data |= PORT_LEARN_DISABLE; + if (port != dev->cpu_port && + p->stp_state == BR_STATE_DISABLED) + member = dev->host_mask | p->vid_member; + break; + default: + dev_err(ds->dev, "invalid STP state: %d\n", state); + return; + } + + ksz_pwrite8(dev, port, P_STP_CTRL, data); + p->stp_state = state; + if (data & PORT_RX_ENABLE) + dev->rx_ports |= (1 << port); + else + dev->rx_ports &= ~(1 << port); + if (data & PORT_TX_ENABLE) + dev->tx_ports |= (1 << port); + else + dev->tx_ports &= ~(1 << port); + + /* Port membership may share register with STP state. */ + if (member >= 0 && member != p->member) + ksz9477_cfg_port_member(dev, port, (u8)member); + + /* Check if forwarding needs to be updated. */ + if (state != BR_STATE_FORWARDING) { + if (dev->br_member & (1 << port)) + dev->member &= ~(1 << port); + } + + /* When topology has changed the function ksz_update_port_member + * should be called to modify port forwarding behavior. However + * as the offload_fwd_mark indication cannot be reported here + * the switch forwarding function is not enabled. + */ +} + +static void ksz9477_flush_dyn_mac_table(struct ksz_device *dev, int port) +{ + u8 data; + + ksz_read8(dev, REG_SW_LUE_CTRL_2, &data); + data &= ~(SW_FLUSH_OPTION_M << SW_FLUSH_OPTION_S); + data |= (SW_FLUSH_OPTION_DYN_MAC << SW_FLUSH_OPTION_S); + ksz_write8(dev, REG_SW_LUE_CTRL_2, data); + if (port < dev->mib_port_cnt) { + /* flush individual port */ + ksz_pread8(dev, port, P_STP_CTRL, &data); + if (!(data & PORT_LEARN_DISABLE)) + ksz_pwrite8(dev, port, P_STP_CTRL, + data | PORT_LEARN_DISABLE); + ksz_cfg(dev, S_FLUSH_TABLE_CTRL, SW_FLUSH_DYN_MAC_TABLE, true); + ksz_pwrite8(dev, port, P_STP_CTRL, data); + } else { + /* flush all */ + ksz_cfg(dev, S_FLUSH_TABLE_CTRL, SW_FLUSH_STP_TABLE, true); + } +} + +static int ksz9477_port_vlan_filtering(struct dsa_switch *ds, int port, + bool flag) +{ + struct ksz_device *dev = ds->priv; + + if (flag) { + ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, + PORT_VLAN_LOOKUP_VID_0, true); + ksz_cfg(dev, REG_SW_LUE_CTRL_0, SW_VLAN_ENABLE, true); + } else { + ksz_cfg(dev, REG_SW_LUE_CTRL_0, SW_VLAN_ENABLE, false); + ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, + PORT_VLAN_LOOKUP_VID_0, false); + } + + return 0; +} + +static void ksz9477_port_vlan_add(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_vlan *vlan) +{ + struct ksz_device *dev = ds->priv; + u32 vlan_table[3]; + u16 vid; + bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; + + for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) { + if (ksz9477_get_vlan_table(dev, vid, vlan_table)) { + dev_dbg(dev->dev, "Failed to get vlan table\n"); + return; + } + + vlan_table[0] = VLAN_VALID | (vid & VLAN_FID_M); + if (untagged) + vlan_table[1] |= BIT(port); + else + vlan_table[1] &= ~BIT(port); + vlan_table[1] &= ~(BIT(dev->cpu_port)); + + vlan_table[2] |= BIT(port) | BIT(dev->cpu_port); + + if (ksz9477_set_vlan_table(dev, vid, vlan_table)) { + dev_dbg(dev->dev, "Failed to set vlan table\n"); + return; + } + + /* change PVID */ + if (vlan->flags & BRIDGE_VLAN_INFO_PVID) + ksz_pwrite16(dev, port, REG_PORT_DEFAULT_VID, vid); + } +} + +static int ksz9477_port_vlan_del(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_vlan *vlan) +{ + struct ksz_device *dev = ds->priv; + bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; + u32 vlan_table[3]; + u16 vid; + u16 pvid; + + ksz_pread16(dev, 
port, REG_PORT_DEFAULT_VID, &pvid); + pvid = pvid & 0xFFF; + + for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) { + if (ksz9477_get_vlan_table(dev, vid, vlan_table)) { + dev_dbg(dev->dev, "Failed to get vlan table\n"); + return -ETIMEDOUT; + } + + vlan_table[2] &= ~BIT(port); + + if (pvid == vid) + pvid = 1; + + if (untagged) + vlan_table[1] &= ~BIT(port); + + if (ksz9477_set_vlan_table(dev, vid, vlan_table)) { + dev_dbg(dev->dev, "Failed to set vlan table\n"); + return -ETIMEDOUT; + } + } + + ksz_pwrite16(dev, port, REG_PORT_DEFAULT_VID, pvid); + + return 0; +} + +static int ksz9477_port_fdb_add(struct dsa_switch *ds, int port, + const unsigned char *addr, u16 vid) +{ + struct ksz_device *dev = ds->priv; + u32 alu_table[4]; + u32 data; + int ret = 0; + + mutex_lock(&dev->alu_mutex); + + /* find any entry with mac & vid */ + data = vid << ALU_FID_INDEX_S; + data |= ((addr[0] << 8) | addr[1]); + ksz_write32(dev, REG_SW_ALU_INDEX_0, data); + + data = ((addr[2] << 24) | (addr[3] << 16)); + data |= ((addr[4] << 8) | addr[5]); + ksz_write32(dev, REG_SW_ALU_INDEX_1, data); + + /* start read operation */ + ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_READ | ALU_START); + + /* wait to be finished */ + ret = ksz9477_wait_alu_ready(dev, ALU_START, 1000); + if (ret < 0) { + dev_dbg(dev->dev, "Failed to read ALU\n"); + goto exit; + } + + /* read ALU entry */ + ksz9477_read_table(dev, alu_table); + + /* update ALU entry */ + alu_table[0] = ALU_V_STATIC_VALID; + alu_table[1] |= BIT(port); + if (vid) + alu_table[1] |= ALU_V_USE_FID; + alu_table[2] = (vid << ALU_V_FID_S); + alu_table[2] |= ((addr[0] << 8) | addr[1]); + alu_table[3] = ((addr[2] << 24) | (addr[3] << 16)); + alu_table[3] |= ((addr[4] << 8) | addr[5]); + + ksz9477_write_table(dev, alu_table); + + ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_WRITE | ALU_START); + + /* wait to be finished */ + ret = ksz9477_wait_alu_ready(dev, ALU_START, 1000); + if (ret < 0) + dev_dbg(dev->dev, "Failed to write ALU\n"); + +exit: + mutex_unlock(&dev->alu_mutex); + + return ret; +} + +static int ksz9477_port_fdb_del(struct dsa_switch *ds, int port, + const unsigned char *addr, u16 vid) +{ + struct ksz_device *dev = ds->priv; + u32 alu_table[4]; + u32 data; + int ret = 0; + + mutex_lock(&dev->alu_mutex); + + /* read any entry with mac & vid */ + data = vid << ALU_FID_INDEX_S; + data |= ((addr[0] << 8) | addr[1]); + ksz_write32(dev, REG_SW_ALU_INDEX_0, data); + + data = ((addr[2] << 24) | (addr[3] << 16)); + data |= ((addr[4] << 8) | addr[5]); + ksz_write32(dev, REG_SW_ALU_INDEX_1, data); + + /* start read operation */ + ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_READ | ALU_START); + + /* wait to be finished */ + ret = ksz9477_wait_alu_ready(dev, ALU_START, 1000); + if (ret < 0) { + dev_dbg(dev->dev, "Failed to read ALU\n"); + goto exit; + } + + ksz_read32(dev, REG_SW_ALU_VAL_A, &alu_table[0]); + if (alu_table[0] & ALU_V_STATIC_VALID) { + ksz_read32(dev, REG_SW_ALU_VAL_B, &alu_table[1]); + ksz_read32(dev, REG_SW_ALU_VAL_C, &alu_table[2]); + ksz_read32(dev, REG_SW_ALU_VAL_D, &alu_table[3]); + + /* clear forwarding port */ + alu_table[2] &= ~BIT(port); + + /* if there is no port to forward, clear table */ + if ((alu_table[2] & ALU_V_PORT_MAP) == 0) { + alu_table[0] = 0; + alu_table[1] = 0; + alu_table[2] = 0; + alu_table[3] = 0; + } + } else { + alu_table[0] = 0; + alu_table[1] = 0; + alu_table[2] = 0; + alu_table[3] = 0; + } + + ksz9477_write_table(dev, alu_table); + + ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_WRITE | ALU_START); + + /* wait to be finished */ + ret = 
ksz9477_wait_alu_ready(dev, ALU_START, 1000); + if (ret < 0) + dev_dbg(dev->dev, "Failed to write ALU\n"); + +exit: + mutex_unlock(&dev->alu_mutex); + + return ret; +} + +static void ksz9477_convert_alu(struct alu_struct *alu, u32 *alu_table) +{ + alu->is_static = !!(alu_table[0] & ALU_V_STATIC_VALID); + alu->is_src_filter = !!(alu_table[0] & ALU_V_SRC_FILTER); + alu->is_dst_filter = !!(alu_table[0] & ALU_V_DST_FILTER); + alu->prio_age = (alu_table[0] >> ALU_V_PRIO_AGE_CNT_S) & + ALU_V_PRIO_AGE_CNT_M; + alu->mstp = alu_table[0] & ALU_V_MSTP_M; + + alu->is_override = !!(alu_table[1] & ALU_V_OVERRIDE); + alu->is_use_fid = !!(alu_table[1] & ALU_V_USE_FID); + alu->port_forward = alu_table[1] & ALU_V_PORT_MAP; + + alu->fid = (alu_table[2] >> ALU_V_FID_S) & ALU_V_FID_M; + + alu->mac[0] = (alu_table[2] >> 8) & 0xFF; + alu->mac[1] = alu_table[2] & 0xFF; + alu->mac[2] = (alu_table[3] >> 24) & 0xFF; + alu->mac[3] = (alu_table[3] >> 16) & 0xFF; + alu->mac[4] = (alu_table[3] >> 8) & 0xFF; + alu->mac[5] = alu_table[3] & 0xFF; +} + +static int ksz9477_port_fdb_dump(struct dsa_switch *ds, int port, + dsa_fdb_dump_cb_t *cb, void *data) +{ + struct ksz_device *dev = ds->priv; + int ret = 0; + u32 ksz_data; + u32 alu_table[4]; + struct alu_struct alu; + int timeout; + + mutex_lock(&dev->alu_mutex); + + /* start ALU search */ + ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_START | ALU_SEARCH); + + do { + timeout = 1000; + do { + ksz_read32(dev, REG_SW_ALU_CTRL__4, &ksz_data); + if ((ksz_data & ALU_VALID) || !(ksz_data & ALU_START)) + break; + usleep_range(1, 10); + } while (timeout-- > 0); + + if (!timeout) { + dev_dbg(dev->dev, "Failed to search ALU\n"); + ret = -ETIMEDOUT; + goto exit; + } + + /* read ALU table */ + ksz9477_read_table(dev, alu_table); + + ksz9477_convert_alu(&alu, alu_table); + + if (alu.port_forward & BIT(port)) { + ret = cb(alu.mac, alu.fid, alu.is_static, data); + if (ret) + goto exit; + } + } while (ksz_data & ALU_START); + +exit: + + /* stop ALU search */ + ksz_write32(dev, REG_SW_ALU_CTRL__4, 0); + + mutex_unlock(&dev->alu_mutex); + + return ret; +} + +static void ksz9477_port_mdb_add(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb) +{ + struct ksz_device *dev = ds->priv; + u32 static_table[4]; + u32 data; + int index; + u32 mac_hi, mac_lo; + + mac_hi = ((mdb->addr[0] << 8) | mdb->addr[1]); + mac_lo = ((mdb->addr[2] << 24) | (mdb->addr[3] << 16)); + mac_lo |= ((mdb->addr[4] << 8) | mdb->addr[5]); + + mutex_lock(&dev->alu_mutex); + + for (index = 0; index < dev->num_statics; index++) { + /* find empty slot first */ + data = (index << ALU_STAT_INDEX_S) | + ALU_STAT_READ | ALU_STAT_START; + ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); + + /* wait to be finished */ + if (ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000) < 0) { + dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); + goto exit; + } + + /* read ALU static table */ + ksz9477_read_table(dev, static_table); + + if (static_table[0] & ALU_V_STATIC_VALID) { + /* check this has same vid & mac address */ + if (((static_table[2] >> ALU_V_FID_S) == mdb->vid) && + ((static_table[2] & ALU_V_MAC_ADDR_HI) == mac_hi) && + static_table[3] == mac_lo) { + /* found matching one */ + break; + } + } else { + /* found empty one */ + break; + } + } + + /* no available entry */ + if (index == dev->num_statics) + goto exit; + + /* add entry */ + static_table[0] = ALU_V_STATIC_VALID; + static_table[1] |= BIT(port); + if (mdb->vid) + static_table[1] |= ALU_V_USE_FID; + static_table[2] = (mdb->vid << ALU_V_FID_S); + 
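	/* Word 2 of a static ALU entry carries the FID (the VID here)
+	 * plus the top two MAC bytes; word 3 carries the remaining
+	 * four bytes.
+	 */
+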
static_table[2] |= mac_hi; + static_table[3] = mac_lo; + + ksz9477_write_table(dev, static_table); + + data = (index << ALU_STAT_INDEX_S) | ALU_STAT_START; + ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); + + /* wait to be finished */ + if (ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000) < 0) + dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); + +exit: + mutex_unlock(&dev->alu_mutex); +} + +static int ksz9477_port_mdb_del(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb) +{ + struct ksz_device *dev = ds->priv; + u32 static_table[4]; + u32 data; + int index; + int ret = 0; + u32 mac_hi, mac_lo; + + mac_hi = ((mdb->addr[0] << 8) | mdb->addr[1]); + mac_lo = ((mdb->addr[2] << 24) | (mdb->addr[3] << 16)); + mac_lo |= ((mdb->addr[4] << 8) | mdb->addr[5]); + + mutex_lock(&dev->alu_mutex); + + for (index = 0; index < dev->num_statics; index++) { + /* find empty slot first */ + data = (index << ALU_STAT_INDEX_S) | + ALU_STAT_READ | ALU_STAT_START; + ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); + + /* wait to be finished */ + ret = ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000); + if (ret < 0) { + dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); + goto exit; + } + + /* read ALU static table */ + ksz9477_read_table(dev, static_table); + + if (static_table[0] & ALU_V_STATIC_VALID) { + /* check this has same vid & mac address */ + + if (((static_table[2] >> ALU_V_FID_S) == mdb->vid) && + ((static_table[2] & ALU_V_MAC_ADDR_HI) == mac_hi) && + static_table[3] == mac_lo) { + /* found matching one */ + break; + } + } + } + + /* no available entry */ + if (index == dev->num_statics) + goto exit; + + /* clear port */ + static_table[1] &= ~BIT(port); + + if ((static_table[1] & ALU_V_PORT_MAP) == 0) { + /* delete entry */ + static_table[0] = 0; + static_table[1] = 0; + static_table[2] = 0; + static_table[3] = 0; + } + + ksz9477_write_table(dev, static_table); + + data = (index << ALU_STAT_INDEX_S) | ALU_STAT_START; + ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); + + /* wait to be finished */ + ret = ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000); + if (ret < 0) + dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); + +exit: + mutex_unlock(&dev->alu_mutex); + + return ret; +} + +static int ksz9477_port_mirror_add(struct dsa_switch *ds, int port, + struct dsa_mall_mirror_tc_entry *mirror, + bool ingress) +{ + struct ksz_device *dev = ds->priv; + + if (ingress) + ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_RX, true); + else + ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_TX, true); + + ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_SNIFFER, false); + + /* configure mirror port */ + ksz_port_cfg(dev, mirror->to_local_port, P_MIRROR_CTRL, + PORT_MIRROR_SNIFFER, true); + + ksz_cfg(dev, S_MIRROR_CTRL, SW_MIRROR_RX_TX, false); + + return 0; +} + +static void ksz9477_port_mirror_del(struct dsa_switch *ds, int port, + struct dsa_mall_mirror_tc_entry *mirror) +{ + struct ksz_device *dev = ds->priv; + u8 data; + + if (mirror->ingress) + ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_RX, false); + else + ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_TX, false); + + ksz_pread8(dev, port, P_MIRROR_CTRL, &data); + + if (!(data & (PORT_MIRROR_RX | PORT_MIRROR_TX))) + ksz_port_cfg(dev, mirror->to_local_port, P_MIRROR_CTRL, + PORT_MIRROR_SNIFFER, false); +} + +static void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port) +{ + u8 data8; + u8 member; + u16 data16; + struct ksz_port *p = &dev->ports[port]; + + /* enable tag tail for 
host port */ + if (cpu_port) + ksz_port_cfg(dev, port, REG_PORT_CTRL_0, PORT_TAIL_TAG_ENABLE, + true); + + ksz_port_cfg(dev, port, REG_PORT_CTRL_0, PORT_MAC_LOOPBACK, false); + + /* set back pressure */ + ksz_port_cfg(dev, port, REG_PORT_MAC_CTRL_1, PORT_BACK_PRESSURE, true); + + /* enable broadcast storm limit */ + ksz_port_cfg(dev, port, P_BCAST_STORM_CTRL, PORT_BROADCAST_STORM, true); + + /* disable DiffServ priority */ + ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_DIFFSERV_PRIO_ENABLE, false); + + /* replace priority */ + ksz_port_cfg(dev, port, REG_PORT_MRI_MAC_CTRL, PORT_USER_PRIO_CEILING, + false); + ksz9477_port_cfg32(dev, port, REG_PORT_MTI_QUEUE_CTRL_0__4, + MTI_PVID_REPLACE, false); + + /* enable 802.1p priority */ + ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_PRIO_ENABLE, true); + + if (port < dev->phy_port_cnt) { + /* do not force flow control */ + ksz_port_cfg(dev, port, REG_PORT_CTRL_0, + PORT_FORCE_TX_FLOW_CTRL | PORT_FORCE_RX_FLOW_CTRL, + false); + + } else { + /* force flow control */ + ksz_port_cfg(dev, port, REG_PORT_CTRL_0, + PORT_FORCE_TX_FLOW_CTRL | PORT_FORCE_RX_FLOW_CTRL, + true); + + /* configure MAC to 1G & RGMII mode */ + ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8); + data8 &= ~PORT_MII_NOT_1GBIT; + data8 &= ~PORT_MII_SEL_M; + switch (dev->interface) { + case PHY_INTERFACE_MODE_MII: + data8 |= PORT_MII_NOT_1GBIT; + data8 |= PORT_MII_SEL; + p->phydev.speed = SPEED_100; + break; + case PHY_INTERFACE_MODE_RMII: + data8 |= PORT_MII_NOT_1GBIT; + data8 |= PORT_RMII_SEL; + p->phydev.speed = SPEED_100; + break; + case PHY_INTERFACE_MODE_GMII: + data8 |= PORT_GMII_SEL; + p->phydev.speed = SPEED_1000; + break; + default: + data8 &= ~PORT_RGMII_ID_IG_ENABLE; + data8 &= ~PORT_RGMII_ID_EG_ENABLE; + if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || + dev->interface == PHY_INTERFACE_MODE_RGMII_RXID) + data8 |= PORT_RGMII_ID_IG_ENABLE; + if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || + dev->interface == PHY_INTERFACE_MODE_RGMII_TXID) + data8 |= PORT_RGMII_ID_EG_ENABLE; + data8 |= PORT_RGMII_SEL; + p->phydev.speed = SPEED_1000; + break; + } + ksz_pwrite8(dev, port, REG_PORT_XMII_CTRL_1, data8); + p->phydev.duplex = 1; + } + if (cpu_port) { + member = dev->port_mask; + dev->on_ports = dev->host_mask; + dev->live_ports = dev->host_mask; + } else { + member = dev->host_mask | p->vid_member; + dev->on_ports |= (1 << port); + + /* Link was detected before port is enabled. */ + if (p->phydev.link) + dev->live_ports |= (1 << port); + } + ksz9477_cfg_port_member(dev, port, member); + + /* clear pending interrupts */ + if (port < dev->phy_port_cnt) + ksz_pread16(dev, port, REG_PORT_PHY_INT_ENABLE, &data16); +} + +static void ksz9477_config_cpu_port(struct dsa_switch *ds) +{ + struct ksz_device *dev = ds->priv; + struct ksz_port *p; + int i; + + ds->num_ports = dev->port_cnt; + + for (i = 0; i < dev->port_cnt; i++) { + if (dsa_is_cpu_port(ds, i) && (dev->cpu_ports & (1 << i))) { + dev->cpu_port = i; + dev->host_mask = (1 << dev->cpu_port); + dev->port_mask |= dev->host_mask; + + /* enable cpu port */ + ksz9477_port_setup(dev, i, true); + p = &dev->ports[dev->cpu_port]; + p->vid_member = dev->port_mask; + p->on = 1; + } + } + + dev->member = dev->host_mask; + + for (i = 0; i < dev->mib_port_cnt; i++) { + if (i == dev->cpu_port) + continue; + p = &dev->ports[i]; + + /* Initialize to non-zero so that ksz_cfg_port_member() will + * be called. 
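+		 * (ksz9477_port_stp_state_set() only writes the
+		 * membership register when the new mask differs from
+		 * the cached p->member value.)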
+		 */
+		p->vid_member = (1 << i);
+		p->member = dev->port_mask;
+		ksz9477_port_stp_state_set(ds, i, BR_STATE_DISABLED);
+		p->on = 1;
+		if (i < dev->phy_port_cnt)
+			p->phy = 1;
+		if (dev->chip_id == 0x00947700 && i == 6) {
+			p->sgmii = 1;
+
+			/* SGMII PHY detection code is not implemented yet. */
+			p->phy = 0;
+		}
+	}
+}
+
+static int ksz9477_setup(struct dsa_switch *ds)
+{
+	struct ksz_device *dev = ds->priv;
+	int ret = 0;
+
+	dev->vlan_cache = devm_kcalloc(dev->dev, sizeof(struct vlan_table),
+				       dev->num_vlans, GFP_KERNEL);
+	if (!dev->vlan_cache)
+		return -ENOMEM;
+
+	ret = ksz9477_reset_switch(dev);
+	if (ret) {
+		dev_err(ds->dev, "failed to reset switch\n");
+		return ret;
+	}
+
+	/* Required for port partitioning. */
+	ksz9477_cfg32(dev, REG_SW_QM_CTRL__4, UNICAST_VLAN_BOUNDARY,
+		      true);
+
+	/* accept packets up to 2000 bytes */
+	ksz_cfg(dev, REG_SW_MAC_CTRL_1, SW_LEGAL_PACKET_DISABLE, true);
+
+	ksz9477_config_cpu_port(ds);
+
+	ksz_cfg(dev, REG_SW_MAC_CTRL_1, MULTICAST_STORM_DISABLE, true);
+
+	/* queue-based egress rate limit */
+	ksz_cfg(dev, REG_SW_MAC_CTRL_5, SW_OUT_RATE_LIMIT_QUEUE_BASED, true);
+
+	/* start switch */
+	ksz_cfg(dev, REG_SW_OPERATION, SW_START, true);
+
+	return 0;
+}
+
+static const struct dsa_switch_ops ksz9477_switch_ops = {
+	.get_tag_protocol	= ksz9477_get_tag_protocol,
+	.setup			= ksz9477_setup,
+	.phy_read		= ksz9477_phy_read16,
+	.phy_write		= ksz9477_phy_write16,
+	.port_enable		= ksz_enable_port,
+	.port_disable		= ksz_disable_port,
+	.get_strings		= ksz9477_get_strings,
+	.get_ethtool_stats	= ksz_get_ethtool_stats,
+	.get_sset_count		= ksz_sset_count,
+	.port_bridge_join	= ksz_port_bridge_join,
+	.port_bridge_leave	= ksz_port_bridge_leave,
+	.port_stp_state_set	= ksz9477_port_stp_state_set,
+	.port_fast_age		= ksz_port_fast_age,
+	.port_vlan_filtering	= ksz9477_port_vlan_filtering,
+	.port_vlan_prepare	= ksz_port_vlan_prepare,
+	.port_vlan_add		= ksz9477_port_vlan_add,
+	.port_vlan_del		= ksz9477_port_vlan_del,
+	.port_fdb_dump		= ksz9477_port_fdb_dump,
+	.port_fdb_add		= ksz9477_port_fdb_add,
+	.port_fdb_del		= ksz9477_port_fdb_del,
+	.port_mdb_prepare	= ksz_port_mdb_prepare,
+	.port_mdb_add		= ksz9477_port_mdb_add,
+	.port_mdb_del		= ksz9477_port_mdb_del,
+	.port_mirror_add	= ksz9477_port_mirror_add,
+	.port_mirror_del	= ksz9477_port_mirror_del,
+};
+
+static u32 ksz9477_get_port_addr(int port, int offset)
+{
+	return PORT_CTRL_ADDR(port, offset);
+}
+
+static int ksz9477_switch_detect(struct ksz_device *dev)
+{
+	u8 data8;
+	u32 id32;
+	int ret;
+
+	/* turn off SPI DO Edge select */
+	ret = ksz_read8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, &data8);
+	if (ret)
+		return ret;
+
+	data8 &= ~SPI_AUTO_EDGE_DETECTION;
+	ret = ksz_write8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, data8);
+	if (ret)
+		return ret;
+
+	/* read chip id */
+	ret = ksz_read32(dev, REG_CHIP_ID0__1, &id32);
+	if (ret)
+		return ret;
+
+	/* Number of ports can be reduced depending on chip.
*/ + dev->mib_port_cnt = TOTAL_PORT_NUM; + dev->phy_port_cnt = 5; + + dev->chip_id = id32; + + return 0; +} + +struct ksz_chip_data { + u32 chip_id; + const char *dev_name; + int num_vlans; + int num_alus; + int num_statics; + int cpu_ports; + int port_cnt; +}; + +static const struct ksz_chip_data ksz9477_switch_chips[] = { + { + .chip_id = 0x00947700, + .dev_name = "KSZ9477", + .num_vlans = 4096, + .num_alus = 4096, + .num_statics = 16, + .cpu_ports = 0x7F, /* can be configured as cpu port */ + .port_cnt = 7, /* total physical port count */ + }, + { + .chip_id = 0x00989700, + .dev_name = "KSZ9897", + .num_vlans = 4096, + .num_alus = 4096, + .num_statics = 16, + .cpu_ports = 0x7F, /* can be configured as cpu port */ + .port_cnt = 7, /* total physical port count */ + }, +}; + +static int ksz9477_switch_init(struct ksz_device *dev) +{ + int i; + + dev->ds->ops = &ksz9477_switch_ops; + + for (i = 0; i < ARRAY_SIZE(ksz9477_switch_chips); i++) { + const struct ksz_chip_data *chip = &ksz9477_switch_chips[i]; + + if (dev->chip_id == chip->chip_id) { + dev->name = chip->dev_name; + dev->num_vlans = chip->num_vlans; + dev->num_alus = chip->num_alus; + dev->num_statics = chip->num_statics; + dev->port_cnt = chip->port_cnt; + dev->cpu_ports = chip->cpu_ports; + + break; + } + } + + /* no switch found */ + if (!dev->port_cnt) + return -ENODEV; + + dev->port_mask = (1 << dev->port_cnt) - 1; + + dev->reg_mib_cnt = SWITCH_COUNTER_NUM; + dev->mib_cnt = TOTAL_SWITCH_COUNTER_NUM; + + i = dev->mib_port_cnt; + dev->ports = devm_kzalloc(dev->dev, sizeof(struct ksz_port) * i, + GFP_KERNEL); + if (!dev->ports) + return -ENOMEM; + for (i = 0; i < dev->mib_port_cnt; i++) { + dev->ports[i].mib.counters = + devm_kzalloc(dev->dev, + sizeof(u64) * + (TOTAL_SWITCH_COUNTER_NUM + 1), + GFP_KERNEL); + if (!dev->ports[i].mib.counters) + return -ENOMEM; + } + dev->interface = PHY_INTERFACE_MODE_RGMII_TXID; + + return 0; +} + +static void ksz9477_switch_exit(struct ksz_device *dev) +{ + ksz9477_reset_switch(dev); +} + +static const struct ksz_dev_ops ksz9477_dev_ops = { + .get_port_addr = ksz9477_get_port_addr, + .cfg_port_member = ksz9477_cfg_port_member, + .flush_dyn_mac_table = ksz9477_flush_dyn_mac_table, + .port_setup = ksz9477_port_setup, + .shutdown = ksz9477_reset_switch, + .detect = ksz9477_switch_detect, + .init = ksz9477_switch_init, + .exit = ksz9477_switch_exit, +}; + +int ksz9477_switch_register(struct ksz_device *dev) +{ + return ksz_switch_register(dev, &ksz9477_dev_ops); +} +EXPORT_SYMBOL(ksz9477_switch_register); + +MODULE_AUTHOR("Woojung Huh "); +MODULE_DESCRIPTION("Microchip KSZ9477 Series Switch DSA Driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/dsa/microchip/ksz_9477_reg.h b/drivers/net/dsa/microchip/ksz9477_reg.h similarity index 98% rename from drivers/net/dsa/microchip/ksz_9477_reg.h rename to drivers/net/dsa/microchip/ksz9477_reg.h index 6aa6752035a1..2938e892b631 100644 --- a/drivers/net/dsa/microchip/ksz_9477_reg.h +++ b/drivers/net/dsa/microchip/ksz9477_reg.h @@ -1,19 +1,8 @@ -/* - * Microchip KSZ9477 register definitions - * - * Copyright (C) 2017 +/* SPDX-License-Identifier: GPL-2.0 * - * Permission to use, copy, modify, and/or distribute this software for any - * purpose with or without fee is hereby granted, provided that the above - * copyright notice and this permission notice appear in all copies. 
+ * Microchip KSZ9477 register definitions * - * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR - * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES - * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + * Copyright (C) 2017-2018 Microchip Technology Inc. */ #ifndef __KSZ9477_REGS_H diff --git a/drivers/net/dsa/microchip/ksz9477_spi.c b/drivers/net/dsa/microchip/ksz9477_spi.c new file mode 100644 index 000000000000..d757ba151cb1 --- /dev/null +++ b/drivers/net/dsa/microchip/ksz9477_spi.c @@ -0,0 +1,177 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Microchip KSZ9477 series register access through SPI + * + * Copyright (C) 2017-2018 Microchip Technology Inc. + */ + +#include + +#include +#include +#include +#include + +#include "ksz_priv.h" +#include "ksz_spi.h" + +/* SPI frame opcodes */ +#define KS_SPIOP_RD 3 +#define KS_SPIOP_WR 2 + +#define SPI_ADDR_SHIFT 24 +#define SPI_ADDR_MASK (BIT(SPI_ADDR_SHIFT) - 1) +#define SPI_TURNAROUND_SHIFT 5 + +/* Enough to read all switch port registers. */ +#define SPI_TX_BUF_LEN 0x100 + +static int ksz9477_spi_read_reg(struct spi_device *spi, u32 reg, u8 *val, + unsigned int len) +{ + u32 txbuf; + int ret; + + txbuf = reg & SPI_ADDR_MASK; + txbuf |= KS_SPIOP_RD << SPI_ADDR_SHIFT; + txbuf <<= SPI_TURNAROUND_SHIFT; + txbuf = cpu_to_be32(txbuf); + + ret = spi_write_then_read(spi, &txbuf, 4, val, len); + return ret; +} + +static int ksz9477_spi_write_reg(struct spi_device *spi, u32 reg, u8 *val, + unsigned int len) +{ + u32 *txbuf = (u32 *)val; + + *txbuf = reg & SPI_ADDR_MASK; + *txbuf |= (KS_SPIOP_WR << SPI_ADDR_SHIFT); + *txbuf <<= SPI_TURNAROUND_SHIFT; + *txbuf = cpu_to_be32(*txbuf); + + return spi_write(spi, txbuf, 4 + len); +} + +static int ksz_spi_read(struct ksz_device *dev, u32 reg, u8 *data, + unsigned int len) +{ + struct spi_device *spi = dev->priv; + + return ksz9477_spi_read_reg(spi, reg, data, len); +} + +static int ksz_spi_write(struct ksz_device *dev, u32 reg, void *data, + unsigned int len) +{ + struct spi_device *spi = dev->priv; + + if (len > SPI_TX_BUF_LEN) + len = SPI_TX_BUF_LEN; + memcpy(&dev->txbuf[4], data, len); + return ksz9477_spi_write_reg(spi, reg, dev->txbuf, len); +} + +static int ksz_spi_read24(struct ksz_device *dev, u32 reg, u32 *val) +{ + int ret; + + *val = 0; + ret = ksz_spi_read(dev, reg, (u8 *)val, 3); + if (!ret) { + *val = be32_to_cpu(*val); + /* convert to 24bit */ + *val >>= 8; + } + + return ret; +} + +static int ksz_spi_write24(struct ksz_device *dev, u32 reg, u32 value) +{ + /* make it to big endian 24bit from MSB */ + value <<= 8; + value = cpu_to_be32(value); + return ksz_spi_write(dev, reg, &value, 3); +} + +static const struct ksz_io_ops ksz9477_spi_ops = { + .read8 = ksz_spi_read8, + .read16 = ksz_spi_read16, + .read24 = ksz_spi_read24, + .read32 = ksz_spi_read32, + .write8 = ksz_spi_write8, + .write16 = ksz_spi_write16, + .write24 = ksz_spi_write24, + .write32 = ksz_spi_write32, + .get = ksz_spi_get, + .set = ksz_spi_set, +}; + +static int ksz9477_spi_probe(struct spi_device *spi) +{ + struct ksz_device *dev; + int ret; + + dev = ksz_switch_alloc(&spi->dev, &ksz9477_spi_ops, spi); + if (!dev) + return -ENOMEM; + + if (spi->dev.platform_data) + dev->pdata 
= spi->dev.platform_data; + + dev->txbuf = devm_kzalloc(dev->dev, 4 + SPI_TX_BUF_LEN, GFP_KERNEL); + + ret = ksz9477_switch_register(dev); + + /* Main DSA driver may not be started yet. */ + if (ret) + return ret; + + spi_set_drvdata(spi, dev); + + return 0; +} + +static int ksz9477_spi_remove(struct spi_device *spi) +{ + struct ksz_device *dev = spi_get_drvdata(spi); + + if (dev) + ksz_switch_remove(dev); + + return 0; +} + +static void ksz9477_spi_shutdown(struct spi_device *spi) +{ + struct ksz_device *dev = spi_get_drvdata(spi); + + if (dev && dev->dev_ops->shutdown) + dev->dev_ops->shutdown(dev); +} + +static const struct of_device_id ksz9477_dt_ids[] = { + { .compatible = "microchip,ksz9477" }, + { .compatible = "microchip,ksz9897" }, + {}, +}; +MODULE_DEVICE_TABLE(of, ksz9477_dt_ids); + +static struct spi_driver ksz9477_spi_driver = { + .driver = { + .name = "ksz9477-switch", + .owner = THIS_MODULE, + .of_match_table = of_match_ptr(ksz9477_dt_ids), + }, + .probe = ksz9477_spi_probe, + .remove = ksz9477_spi_remove, + .shutdown = ksz9477_spi_shutdown, +}; + +module_spi_driver(ksz9477_spi_driver); + +MODULE_AUTHOR("Woojung Huh "); +MODULE_DESCRIPTION("Microchip KSZ9477 Series Switch SPI access Driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c index 86b6464b4525..3b12e2dcff31 100644 --- a/drivers/net/dsa/microchip/ksz_common.c +++ b/drivers/net/dsa/microchip/ksz_common.c @@ -1,1145 +1,266 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Microchip switch driver main logic * - * Copyright (C) 2017 - * - * Permission to use, copy, modify, and/or distribute this software for any - * purpose with or without fee is hereby granted, provided that the above - * copyright notice and this permission notice appear in all copies. - * - * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR - * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES - * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + * Copyright (C) 2017-2018 Microchip Technology Inc. 
*/ #include #include #include +#include #include #include #include #include #include #include +#include +#include #include #include #include "ksz_priv.h" -static const struct { - int index; - char string[ETH_GSTRING_LEN]; -} mib_names[TOTAL_SWITCH_COUNTER_NUM] = { - { 0x00, "rx_hi" }, - { 0x01, "rx_undersize" }, - { 0x02, "rx_fragments" }, - { 0x03, "rx_oversize" }, - { 0x04, "rx_jabbers" }, - { 0x05, "rx_symbol_err" }, - { 0x06, "rx_crc_err" }, - { 0x07, "rx_align_err" }, - { 0x08, "rx_mac_ctrl" }, - { 0x09, "rx_pause" }, - { 0x0A, "rx_bcast" }, - { 0x0B, "rx_mcast" }, - { 0x0C, "rx_ucast" }, - { 0x0D, "rx_64_or_less" }, - { 0x0E, "rx_65_127" }, - { 0x0F, "rx_128_255" }, - { 0x10, "rx_256_511" }, - { 0x11, "rx_512_1023" }, - { 0x12, "rx_1024_1522" }, - { 0x13, "rx_1523_2000" }, - { 0x14, "rx_2001" }, - { 0x15, "tx_hi" }, - { 0x16, "tx_late_col" }, - { 0x17, "tx_pause" }, - { 0x18, "tx_bcast" }, - { 0x19, "tx_mcast" }, - { 0x1A, "tx_ucast" }, - { 0x1B, "tx_deferred" }, - { 0x1C, "tx_total_col" }, - { 0x1D, "tx_exc_col" }, - { 0x1E, "tx_single_col" }, - { 0x1F, "tx_mult_col" }, - { 0x80, "rx_total" }, - { 0x81, "tx_total" }, - { 0x82, "rx_discards" }, - { 0x83, "tx_discards" }, -}; - -static void ksz_cfg(struct ksz_device *dev, u32 addr, u8 bits, bool set) -{ - u8 data; - - ksz_read8(dev, addr, &data); - if (set) - data |= bits; - else - data &= ~bits; - ksz_write8(dev, addr, data); -} - -static void ksz_cfg32(struct ksz_device *dev, u32 addr, u32 bits, bool set) -{ - u32 data; - - ksz_read32(dev, addr, &data); - if (set) - data |= bits; - else - data &= ~bits; - ksz_write32(dev, addr, data); -} - -static void ksz_port_cfg(struct ksz_device *dev, int port, int offset, u8 bits, - bool set) -{ - u32 addr; - u8 data; - - addr = PORT_CTRL_ADDR(port, offset); - ksz_read8(dev, addr, &data); - - if (set) - data |= bits; - else - data &= ~bits; - - ksz_write8(dev, addr, data); -} - -static void ksz_port_cfg32(struct ksz_device *dev, int port, int offset, - u32 bits, bool set) -{ - u32 addr; - u32 data; - - addr = PORT_CTRL_ADDR(port, offset); - ksz_read32(dev, addr, &data); - - if (set) - data |= bits; - else - data &= ~bits; - - ksz_write32(dev, addr, data); -} - -static int wait_vlan_ctrl_ready(struct ksz_device *dev, u32 waiton, int timeout) -{ - u8 data; - - do { - ksz_read8(dev, REG_SW_VLAN_CTRL, &data); - if (!(data & waiton)) - break; - usleep_range(1, 10); - } while (timeout-- > 0); - - if (timeout <= 0) - return -ETIMEDOUT; - - return 0; -} - -static int get_vlan_table(struct dsa_switch *ds, u16 vid, u32 *vlan_table) -{ - struct ksz_device *dev = ds->priv; - int ret; - - mutex_lock(&dev->vlan_mutex); - - ksz_write16(dev, REG_SW_VLAN_ENTRY_INDEX__2, vid & VLAN_INDEX_M); - ksz_write8(dev, REG_SW_VLAN_CTRL, VLAN_READ | VLAN_START); - - /* wait to be cleared */ - ret = wait_vlan_ctrl_ready(dev, VLAN_START, 1000); - if (ret < 0) { - dev_dbg(dev->dev, "Failed to read vlan table\n"); - goto exit; - } - - ksz_read32(dev, REG_SW_VLAN_ENTRY__4, &vlan_table[0]); - ksz_read32(dev, REG_SW_VLAN_ENTRY_UNTAG__4, &vlan_table[1]); - ksz_read32(dev, REG_SW_VLAN_ENTRY_PORTS__4, &vlan_table[2]); - - ksz_write8(dev, REG_SW_VLAN_CTRL, 0); - -exit: - mutex_unlock(&dev->vlan_mutex); - - return ret; -} - -static int set_vlan_table(struct dsa_switch *ds, u16 vid, u32 *vlan_table) -{ - struct ksz_device *dev = ds->priv; - int ret; - - mutex_lock(&dev->vlan_mutex); - - ksz_write32(dev, REG_SW_VLAN_ENTRY__4, vlan_table[0]); - ksz_write32(dev, REG_SW_VLAN_ENTRY_UNTAG__4, vlan_table[1]); - ksz_write32(dev, 
REG_SW_VLAN_ENTRY_PORTS__4, vlan_table[2]); - - ksz_write16(dev, REG_SW_VLAN_ENTRY_INDEX__2, vid & VLAN_INDEX_M); - ksz_write8(dev, REG_SW_VLAN_CTRL, VLAN_START | VLAN_WRITE); - - /* wait to be cleared */ - ret = wait_vlan_ctrl_ready(dev, VLAN_START, 1000); - if (ret < 0) { - dev_dbg(dev->dev, "Failed to write vlan table\n"); - goto exit; - } - - ksz_write8(dev, REG_SW_VLAN_CTRL, 0); - - /* update vlan cache table */ - dev->vlan_cache[vid].table[0] = vlan_table[0]; - dev->vlan_cache[vid].table[1] = vlan_table[1]; - dev->vlan_cache[vid].table[2] = vlan_table[2]; - -exit: - mutex_unlock(&dev->vlan_mutex); - - return ret; -} - -static void read_table(struct dsa_switch *ds, u32 *table) -{ - struct ksz_device *dev = ds->priv; - - ksz_read32(dev, REG_SW_ALU_VAL_A, &table[0]); - ksz_read32(dev, REG_SW_ALU_VAL_B, &table[1]); - ksz_read32(dev, REG_SW_ALU_VAL_C, &table[2]); - ksz_read32(dev, REG_SW_ALU_VAL_D, &table[3]); -} - -static void write_table(struct dsa_switch *ds, u32 *table) -{ - struct ksz_device *dev = ds->priv; - - ksz_write32(dev, REG_SW_ALU_VAL_A, table[0]); - ksz_write32(dev, REG_SW_ALU_VAL_B, table[1]); - ksz_write32(dev, REG_SW_ALU_VAL_C, table[2]); - ksz_write32(dev, REG_SW_ALU_VAL_D, table[3]); -} - -static int wait_alu_ready(struct ksz_device *dev, u32 waiton, int timeout) -{ - u32 data; - - do { - ksz_read32(dev, REG_SW_ALU_CTRL__4, &data); - if (!(data & waiton)) - break; - usleep_range(1, 10); - } while (timeout-- > 0); - - if (timeout <= 0) - return -ETIMEDOUT; - - return 0; -} - -static int wait_alu_sta_ready(struct ksz_device *dev, u32 waiton, int timeout) -{ - u32 data; - - do { - ksz_read32(dev, REG_SW_ALU_STAT_CTRL__4, &data); - if (!(data & waiton)) - break; - usleep_range(1, 10); - } while (timeout-- > 0); - - if (timeout <= 0) - return -ETIMEDOUT; - - return 0; -} - -static int ksz_reset_switch(struct dsa_switch *ds) +void ksz_update_port_member(struct ksz_device *dev, int port) { - struct ksz_device *dev = ds->priv; - u8 data8; - u16 data16; - u32 data32; - - /* reset switch */ - ksz_cfg(dev, REG_SW_OPERATION, SW_RESET, true); - - /* turn off SPI DO Edge select */ - ksz_read8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, &data8); - data8 &= ~SPI_AUTO_EDGE_DETECTION; - ksz_write8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, data8); - - /* default configuration */ - ksz_read8(dev, REG_SW_LUE_CTRL_1, &data8); - data8 = SW_AGING_ENABLE | SW_LINK_AUTO_AGING | - SW_SRC_ADDR_FILTER | SW_FLUSH_STP_TABLE | SW_FLUSH_MSTP_TABLE; - ksz_write8(dev, REG_SW_LUE_CTRL_1, data8); - - /* disable interrupts */ - ksz_write32(dev, REG_SW_INT_MASK__4, SWITCH_INT_MASK); - ksz_write32(dev, REG_SW_PORT_INT_MASK__4, 0x7F); - ksz_read32(dev, REG_SW_PORT_INT_STATUS__4, &data32); - - /* set broadcast storm protection 10% rate */ - ksz_read16(dev, REG_SW_MAC_CTRL_2, &data16); - data16 &= ~BROADCAST_STORM_RATE; - data16 |= (BROADCAST_STORM_VALUE * BROADCAST_STORM_PROT_RATE) / 100; - ksz_write16(dev, REG_SW_MAC_CTRL_2, data16); - - return 0; -} - -static void port_setup(struct ksz_device *dev, int port, bool cpu_port) -{ - u8 data8; - u16 data16; - - /* enable tag tail for host port */ - if (cpu_port) - ksz_port_cfg(dev, port, REG_PORT_CTRL_0, PORT_TAIL_TAG_ENABLE, - true); - - ksz_port_cfg(dev, port, REG_PORT_CTRL_0, PORT_MAC_LOOPBACK, false); - - /* set back pressure */ - ksz_port_cfg(dev, port, REG_PORT_MAC_CTRL_1, PORT_BACK_PRESSURE, true); - - /* set flow control */ - ksz_port_cfg(dev, port, REG_PORT_CTRL_0, - PORT_FORCE_TX_FLOW_CTRL | PORT_FORCE_RX_FLOW_CTRL, true); - - /* enable broadcast storm limit */ - 
ksz_port_cfg(dev, port, P_BCAST_STORM_CTRL, PORT_BROADCAST_STORM, true); - - /* disable DiffServ priority */ - ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_DIFFSERV_PRIO_ENABLE, false); - - /* replace priority */ - ksz_port_cfg(dev, port, REG_PORT_MRI_MAC_CTRL, PORT_USER_PRIO_CEILING, - false); - ksz_port_cfg32(dev, port, REG_PORT_MTI_QUEUE_CTRL_0__4, - MTI_PVID_REPLACE, false); - - /* enable 802.1p priority */ - ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_PRIO_ENABLE, true); - - /* configure MAC to 1G & RGMII mode */ - ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8); - data8 |= PORT_RGMII_ID_EG_ENABLE; - data8 &= ~PORT_MII_NOT_1GBIT; - data8 &= ~PORT_MII_SEL_M; - data8 |= PORT_RGMII_SEL; - ksz_pwrite8(dev, port, REG_PORT_XMII_CTRL_1, data8); - - /* clear pending interrupts */ - ksz_pread16(dev, port, REG_PORT_PHY_INT_ENABLE, &data16); -} - -static void ksz_config_cpu_port(struct dsa_switch *ds) -{ - struct ksz_device *dev = ds->priv; + struct ksz_port *p; int i; - ds->num_ports = dev->port_cnt; - - for (i = 0; i < ds->num_ports; i++) { - if (dsa_is_cpu_port(ds, i) && (dev->cpu_ports & (1 << i))) { - dev->cpu_port = i; - - /* enable cpu port */ - port_setup(dev, i, true); - } - } -} - -static int ksz_setup(struct dsa_switch *ds) -{ - struct ksz_device *dev = ds->priv; - int ret = 0; - - dev->vlan_cache = devm_kcalloc(dev->dev, sizeof(struct vlan_table), - dev->num_vlans, GFP_KERNEL); - if (!dev->vlan_cache) - return -ENOMEM; - - ret = ksz_reset_switch(ds); - if (ret) { - dev_err(ds->dev, "failed to reset switch\n"); - return ret; + for (i = 0; i < dev->port_cnt; i++) { + if (i == port || i == dev->cpu_port) + continue; + p = &dev->ports[i]; + if (!(dev->member & (1 << i))) + continue; + + /* Port is a member of the bridge and is forwarding. */ + if (p->stp_state == BR_STATE_FORWARDING && + p->member != dev->member) + dev->dev_ops->cfg_port_member(dev, i, dev->member); } - - /* accept packet up to 2000bytes */ - ksz_cfg(dev, REG_SW_MAC_CTRL_1, SW_LEGAL_PACKET_DISABLE, true); - - ksz_config_cpu_port(ds); - - ksz_cfg(dev, REG_SW_MAC_CTRL_1, MULTICAST_STORM_DISABLE, true); - - /* queue based egress rate limit */ - ksz_cfg(dev, REG_SW_MAC_CTRL_5, SW_OUT_RATE_LIMIT_QUEUE_BASED, true); - - /* start switch */ - ksz_cfg(dev, REG_SW_OPERATION, SW_START, true); - - return 0; } +EXPORT_SYMBOL_GPL(ksz_update_port_member); -static enum dsa_tag_protocol ksz_get_tag_protocol(struct dsa_switch *ds, - int port) -{ - return DSA_TAG_PROTO_KSZ; -} - -static int ksz_phy_read16(struct dsa_switch *ds, int addr, int reg) +int ksz_phy_read16(struct dsa_switch *ds, int addr, int reg) { struct ksz_device *dev = ds->priv; - u16 val = 0; + u16 val = 0xffff; - ksz_pread16(dev, addr, 0x100 + (reg << 1), &val); + dev->dev_ops->r_phy(dev, addr, reg, &val); return val; } +EXPORT_SYMBOL_GPL(ksz_phy_read16); -static int ksz_phy_write16(struct dsa_switch *ds, int addr, int reg, u16 val) +int ksz_phy_write16(struct dsa_switch *ds, int addr, int reg, u16 val) { struct ksz_device *dev = ds->priv; - ksz_pwrite16(dev, addr, 0x100 + (reg << 1), val); + dev->dev_ops->w_phy(dev, addr, reg, val); return 0; } +EXPORT_SYMBOL_GPL(ksz_phy_write16); -static int ksz_enable_port(struct dsa_switch *ds, int port, - struct phy_device *phy) +int ksz_sset_count(struct dsa_switch *ds, int port, int sset) { struct ksz_device *dev = ds->priv; - /* setup slave port */ - port_setup(dev, port, false); - - return 0; -} - -static void ksz_disable_port(struct dsa_switch *ds, int port, - struct phy_device *phy) -{ - struct ksz_device *dev = ds->priv; 
- - /* there is no port disable */ - ksz_port_cfg(dev, port, REG_PORT_CTRL_0, PORT_MAC_LOOPBACK, true); -} - -static int ksz_sset_count(struct dsa_switch *ds, int port, int sset) -{ if (sset != ETH_SS_STATS) return 0; - return TOTAL_SWITCH_COUNTER_NUM; + return dev->mib_cnt; } +EXPORT_SYMBOL_GPL(ksz_sset_count); -static void ksz_get_strings(struct dsa_switch *ds, int port, - u32 stringset, uint8_t *buf) -{ - int i; - - if (stringset != ETH_SS_STATS) - return; - - for (i = 0; i < TOTAL_SWITCH_COUNTER_NUM; i++) { - memcpy(buf + i * ETH_GSTRING_LEN, mib_names[i].string, - ETH_GSTRING_LEN); - } -} - -static void ksz_get_ethtool_stats(struct dsa_switch *ds, int port, - uint64_t *buf) +int ksz_port_bridge_join(struct dsa_switch *ds, int port, + struct net_device *br) { struct ksz_device *dev = ds->priv; - int i; - u32 data; - int timeout; - - mutex_lock(&dev->stats_mutex); - - for (i = 0; i < TOTAL_SWITCH_COUNTER_NUM; i++) { - data = MIB_COUNTER_READ; - data |= ((mib_names[i].index & 0xFF) << MIB_COUNTER_INDEX_S); - ksz_pwrite32(dev, port, REG_PORT_MIB_CTRL_STAT__4, data); - - timeout = 1000; - do { - ksz_pread32(dev, port, REG_PORT_MIB_CTRL_STAT__4, - &data); - usleep_range(1, 10); - if (!(data & MIB_COUNTER_READ)) - break; - } while (timeout-- > 0); - - /* failed to read MIB. get out of loop */ - if (!timeout) { - dev_dbg(dev->dev, "Failed to get MIB\n"); - break; - } - /* count resets upon read */ - ksz_pread32(dev, port, REG_PORT_MIB_DATA, &data); + dev->br_member |= (1 << port); - dev->mib_value[i] += (uint64_t)data; - buf[i] = dev->mib_value[i]; - } - - mutex_unlock(&dev->stats_mutex); -} - -static void ksz_port_stp_state_set(struct dsa_switch *ds, int port, u8 state) -{ - struct ksz_device *dev = ds->priv; - u8 data; - - ksz_pread8(dev, port, P_STP_CTRL, &data); - data &= ~(PORT_TX_ENABLE | PORT_RX_ENABLE | PORT_LEARN_DISABLE); - - switch (state) { - case BR_STATE_DISABLED: - data |= PORT_LEARN_DISABLE; - break; - case BR_STATE_LISTENING: - data |= (PORT_RX_ENABLE | PORT_LEARN_DISABLE); - break; - case BR_STATE_LEARNING: - data |= PORT_RX_ENABLE; - break; - case BR_STATE_FORWARDING: - data |= (PORT_TX_ENABLE | PORT_RX_ENABLE); - break; - case BR_STATE_BLOCKING: - data |= PORT_LEARN_DISABLE; - break; - default: - dev_err(ds->dev, "invalid STP state: %d\n", state); - return; - } + /* port_stp_state_set() will be called after to put the port in + * appropriate state so there is no need to do anything. + */ - ksz_pwrite8(dev, port, P_STP_CTRL, data); + return 0; } +EXPORT_SYMBOL_GPL(ksz_port_bridge_join); -static void ksz_port_fast_age(struct dsa_switch *ds, int port) +void ksz_port_bridge_leave(struct dsa_switch *ds, int port, + struct net_device *br) { struct ksz_device *dev = ds->priv; - u8 data8; - ksz_read8(dev, REG_SW_LUE_CTRL_1, &data8); - data8 |= SW_FAST_AGING; - ksz_write8(dev, REG_SW_LUE_CTRL_1, data8); + dev->br_member &= ~(1 << port); + dev->member &= ~(1 << port); - data8 &= ~SW_FAST_AGING; - ksz_write8(dev, REG_SW_LUE_CTRL_1, data8); + /* port_stp_state_set() will be called after to put the port in + * forwarding state so there is no need to do anything. 
+ */ } +EXPORT_SYMBOL_GPL(ksz_port_bridge_leave); -static int ksz_port_vlan_filtering(struct dsa_switch *ds, int port, bool flag) +void ksz_port_fast_age(struct dsa_switch *ds, int port) { struct ksz_device *dev = ds->priv; - if (flag) { - ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, - PORT_VLAN_LOOKUP_VID_0, true); - ksz_cfg32(dev, REG_SW_QM_CTRL__4, UNICAST_VLAN_BOUNDARY, true); - ksz_cfg(dev, REG_SW_LUE_CTRL_0, SW_VLAN_ENABLE, true); - } else { - ksz_cfg(dev, REG_SW_LUE_CTRL_0, SW_VLAN_ENABLE, false); - ksz_cfg32(dev, REG_SW_QM_CTRL__4, UNICAST_VLAN_BOUNDARY, false); - ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, - PORT_VLAN_LOOKUP_VID_0, false); - } - - return 0; + dev->dev_ops->flush_dyn_mac_table(dev, port); } +EXPORT_SYMBOL_GPL(ksz_port_fast_age); -static int ksz_port_vlan_prepare(struct dsa_switch *ds, int port, - const struct switchdev_obj_port_vlan *vlan) +int ksz_port_vlan_prepare(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_vlan *vlan) { /* nothing needed */ return 0; } +EXPORT_SYMBOL_GPL(ksz_port_vlan_prepare); -static void ksz_port_vlan_add(struct dsa_switch *ds, int port, - const struct switchdev_obj_port_vlan *vlan) -{ - struct ksz_device *dev = ds->priv; - u32 vlan_table[3]; - u16 vid; - bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; - - for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) { - if (get_vlan_table(ds, vid, vlan_table)) { - dev_dbg(dev->dev, "Failed to get vlan table\n"); - return; - } - - vlan_table[0] = VLAN_VALID | (vid & VLAN_FID_M); - if (untagged) - vlan_table[1] |= BIT(port); - else - vlan_table[1] &= ~BIT(port); - vlan_table[1] &= ~(BIT(dev->cpu_port)); - - vlan_table[2] |= BIT(port) | BIT(dev->cpu_port); - - if (set_vlan_table(ds, vid, vlan_table)) { - dev_dbg(dev->dev, "Failed to set vlan table\n"); - return; - } - - /* change PVID */ - if (vlan->flags & BRIDGE_VLAN_INFO_PVID) - ksz_pwrite16(dev, port, REG_PORT_DEFAULT_VID, vid); - } -} - -static int ksz_port_vlan_del(struct dsa_switch *ds, int port, - const struct switchdev_obj_port_vlan *vlan) -{ - struct ksz_device *dev = ds->priv; - bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; - u32 vlan_table[3]; - u16 vid; - u16 pvid; - - ksz_pread16(dev, port, REG_PORT_DEFAULT_VID, &pvid); - pvid = pvid & 0xFFF; - - for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) { - if (get_vlan_table(ds, vid, vlan_table)) { - dev_dbg(dev->dev, "Failed to get vlan table\n"); - return -ETIMEDOUT; - } - - vlan_table[2] &= ~BIT(port); - - if (pvid == vid) - pvid = 1; - - if (untagged) - vlan_table[1] &= ~BIT(port); - - if (set_vlan_table(ds, vid, vlan_table)) { - dev_dbg(dev->dev, "Failed to set vlan table\n"); - return -ETIMEDOUT; - } - } - - ksz_pwrite16(dev, port, REG_PORT_DEFAULT_VID, pvid); - - return 0; -} - -struct alu_struct { - /* entry 1 */ - u8 is_static:1; - u8 is_src_filter:1; - u8 is_dst_filter:1; - u8 prio_age:3; - u32 _reserv_0_1:23; - u8 mstp:3; - /* entry 2 */ - u8 is_override:1; - u8 is_use_fid:1; - u32 _reserv_1_1:23; - u8 port_forward:7; - /* entry 3 & 4*/ - u32 _reserv_2_1:9; - u8 fid:7; - u8 mac[ETH_ALEN]; -}; - -static int ksz_port_fdb_add(struct dsa_switch *ds, int port, - const unsigned char *addr, u16 vid) -{ - struct ksz_device *dev = ds->priv; - u32 alu_table[4]; - u32 data; - int ret = 0; - - mutex_lock(&dev->alu_mutex); - - /* find any entry with mac & vid */ - data = vid << ALU_FID_INDEX_S; - data |= ((addr[0] << 8) | addr[1]); - ksz_write32(dev, REG_SW_ALU_INDEX_0, data); - - data = ((addr[2] << 24) | (addr[3] << 16)); - data |= ((addr[4] << 
8) | addr[5]); - ksz_write32(dev, REG_SW_ALU_INDEX_1, data); - - /* start read operation */ - ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_READ | ALU_START); - - /* wait to be finished */ - ret = wait_alu_ready(dev, ALU_START, 1000); - if (ret < 0) { - dev_dbg(dev->dev, "Failed to read ALU\n"); - goto exit; - } - - /* read ALU entry */ - read_table(ds, alu_table); - - /* update ALU entry */ - alu_table[0] = ALU_V_STATIC_VALID; - alu_table[1] |= BIT(port); - if (vid) - alu_table[1] |= ALU_V_USE_FID; - alu_table[2] = (vid << ALU_V_FID_S); - alu_table[2] |= ((addr[0] << 8) | addr[1]); - alu_table[3] = ((addr[2] << 24) | (addr[3] << 16)); - alu_table[3] |= ((addr[4] << 8) | addr[5]); - - write_table(ds, alu_table); - - ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_WRITE | ALU_START); - - /* wait to be finished */ - ret = wait_alu_ready(dev, ALU_START, 1000); - if (ret < 0) - dev_dbg(dev->dev, "Failed to write ALU\n"); - -exit: - mutex_unlock(&dev->alu_mutex); - - return ret; -} - -static int ksz_port_fdb_del(struct dsa_switch *ds, int port, - const unsigned char *addr, u16 vid) +int ksz_port_fdb_dump(struct dsa_switch *ds, int port, dsa_fdb_dump_cb_t *cb, + void *data) { struct ksz_device *dev = ds->priv; - u32 alu_table[4]; - u32 data; int ret = 0; - - mutex_lock(&dev->alu_mutex); - - /* read any entry with mac & vid */ - data = vid << ALU_FID_INDEX_S; - data |= ((addr[0] << 8) | addr[1]); - ksz_write32(dev, REG_SW_ALU_INDEX_0, data); - - data = ((addr[2] << 24) | (addr[3] << 16)); - data |= ((addr[4] << 8) | addr[5]); - ksz_write32(dev, REG_SW_ALU_INDEX_1, data); - - /* start read operation */ - ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_READ | ALU_START); - - /* wait to be finished */ - ret = wait_alu_ready(dev, ALU_START, 1000); - if (ret < 0) { - dev_dbg(dev->dev, "Failed to read ALU\n"); - goto exit; - } - - ksz_read32(dev, REG_SW_ALU_VAL_A, &alu_table[0]); - if (alu_table[0] & ALU_V_STATIC_VALID) { - ksz_read32(dev, REG_SW_ALU_VAL_B, &alu_table[1]); - ksz_read32(dev, REG_SW_ALU_VAL_C, &alu_table[2]); - ksz_read32(dev, REG_SW_ALU_VAL_D, &alu_table[3]); - - /* clear forwarding port */ - alu_table[2] &= ~BIT(port); - - /* if there is no port to forward, clear table */ - if ((alu_table[2] & ALU_V_PORT_MAP) == 0) { - alu_table[0] = 0; - alu_table[1] = 0; - alu_table[2] = 0; - alu_table[3] = 0; - } - } else { - alu_table[0] = 0; - alu_table[1] = 0; - alu_table[2] = 0; - alu_table[3] = 0; - } - - write_table(ds, alu_table); - - ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_WRITE | ALU_START); - - /* wait to be finished */ - ret = wait_alu_ready(dev, ALU_START, 1000); - if (ret < 0) - dev_dbg(dev->dev, "Failed to write ALU\n"); - -exit: - mutex_unlock(&dev->alu_mutex); - - return ret; -} - -static void convert_alu(struct alu_struct *alu, u32 *alu_table) -{ - alu->is_static = !!(alu_table[0] & ALU_V_STATIC_VALID); - alu->is_src_filter = !!(alu_table[0] & ALU_V_SRC_FILTER); - alu->is_dst_filter = !!(alu_table[0] & ALU_V_DST_FILTER); - alu->prio_age = (alu_table[0] >> ALU_V_PRIO_AGE_CNT_S) & - ALU_V_PRIO_AGE_CNT_M; - alu->mstp = alu_table[0] & ALU_V_MSTP_M; - - alu->is_override = !!(alu_table[1] & ALU_V_OVERRIDE); - alu->is_use_fid = !!(alu_table[1] & ALU_V_USE_FID); - alu->port_forward = alu_table[1] & ALU_V_PORT_MAP; - - alu->fid = (alu_table[2] >> ALU_V_FID_S) & ALU_V_FID_M; - - alu->mac[0] = (alu_table[2] >> 8) & 0xFF; - alu->mac[1] = alu_table[2] & 0xFF; - alu->mac[2] = (alu_table[3] >> 24) & 0xFF; - alu->mac[3] = (alu_table[3] >> 16) & 0xFF; - alu->mac[4] = (alu_table[3] >> 8) & 0xFF; - alu->mac[5] = 
alu_table[3] & 0xFF; -} - -static int ksz_port_fdb_dump(struct dsa_switch *ds, int port, - dsa_fdb_dump_cb_t *cb, void *data) -{ - struct ksz_device *dev = ds->priv; - int ret = 0; - u32 ksz_data; - u32 alu_table[4]; + u16 i = 0; + u16 entries = 0; + u8 timestamp = 0; + u8 fid; + u8 member; struct alu_struct alu; - int timeout; - - mutex_lock(&dev->alu_mutex); - - /* start ALU search */ - ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_START | ALU_SEARCH); do { - timeout = 1000; - do { - ksz_read32(dev, REG_SW_ALU_CTRL__4, &ksz_data); - if ((ksz_data & ALU_VALID) || !(ksz_data & ALU_START)) - break; - usleep_range(1, 10); - } while (timeout-- > 0); - - if (!timeout) { - dev_dbg(dev->dev, "Failed to search ALU\n"); - ret = -ETIMEDOUT; - goto exit; - } - - /* read ALU table */ - read_table(ds, alu_table); - - convert_alu(&alu, alu_table); - - if (alu.port_forward & BIT(port)) { + alu.is_static = false; + ret = dev->dev_ops->r_dyn_mac_table(dev, i, alu.mac, &fid, + &member, ×tamp, + &entries); + if (!ret && (member & BIT(port))) { ret = cb(alu.mac, alu.fid, alu.is_static, data); if (ret) - goto exit; + break; } - } while (ksz_data & ALU_START); - -exit: - - /* stop ALU search */ - ksz_write32(dev, REG_SW_ALU_CTRL__4, 0); - - mutex_unlock(&dev->alu_mutex); + i++; + } while (i < entries); + if (i >= entries) + ret = 0; return ret; } +EXPORT_SYMBOL_GPL(ksz_port_fdb_dump); -static int ksz_port_mdb_prepare(struct dsa_switch *ds, int port, - const struct switchdev_obj_port_mdb *mdb) +int ksz_port_mdb_prepare(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb) { /* nothing to do */ return 0; } +EXPORT_SYMBOL_GPL(ksz_port_mdb_prepare); -static void ksz_port_mdb_add(struct dsa_switch *ds, int port, - const struct switchdev_obj_port_mdb *mdb) +void ksz_port_mdb_add(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb) { struct ksz_device *dev = ds->priv; - u32 static_table[4]; - u32 data; + struct alu_struct alu; int index; - u32 mac_hi, mac_lo; - - mac_hi = ((mdb->addr[0] << 8) | mdb->addr[1]); - mac_lo = ((mdb->addr[2] << 24) | (mdb->addr[3] << 16)); - mac_lo |= ((mdb->addr[4] << 8) | mdb->addr[5]); - - mutex_lock(&dev->alu_mutex); + int empty = 0; + alu.port_forward = 0; for (index = 0; index < dev->num_statics; index++) { - /* find empty slot first */ - data = (index << ALU_STAT_INDEX_S) | - ALU_STAT_READ | ALU_STAT_START; - ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); - - /* wait to be finished */ - if (wait_alu_sta_ready(dev, ALU_STAT_START, 1000) < 0) { - dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); - goto exit; - } - - /* read ALU static table */ - read_table(ds, static_table); - - if (static_table[0] & ALU_V_STATIC_VALID) { - /* check this has same vid & mac address */ - if (((static_table[2] >> ALU_V_FID_S) == (mdb->vid)) && - ((static_table[2] & ALU_V_MAC_ADDR_HI) == mac_hi) && - (static_table[3] == mac_lo)) { - /* found matching one */ + if (!dev->dev_ops->r_sta_mac_table(dev, index, &alu)) { + /* Found one already in static MAC table. */ + if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) && + alu.fid == mdb->vid) break; - } - } else { - /* found empty one */ - break; + /* Remember the first empty entry. 
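+		 * (Editor's note: the free slot is recorded one-based so
+		 * that "empty == 0" can mean "no free slot found"; it is
+		 * turned back into an index with "empty - 1" further down.)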
*/ + } else if (!empty) { + empty = index + 1; } } /* no available entry */ - if (index == dev->num_statics) - goto exit; + if (index == dev->num_statics && !empty) + return; /* add entry */ - static_table[0] = ALU_V_STATIC_VALID; - static_table[1] |= BIT(port); - if (mdb->vid) - static_table[1] |= ALU_V_USE_FID; - static_table[2] = (mdb->vid << ALU_V_FID_S); - static_table[2] |= mac_hi; - static_table[3] = mac_lo; - - write_table(ds, static_table); - - data = (index << ALU_STAT_INDEX_S) | ALU_STAT_START; - ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); - - /* wait to be finished */ - if (wait_alu_sta_ready(dev, ALU_STAT_START, 1000) < 0) - dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); + if (index == dev->num_statics) { + index = empty - 1; + memset(&alu, 0, sizeof(alu)); + memcpy(alu.mac, mdb->addr, ETH_ALEN); + alu.is_static = true; + } + alu.port_forward |= BIT(port); + if (mdb->vid) { + alu.is_use_fid = true; -exit: - mutex_unlock(&dev->alu_mutex); + /* Need a way to map VID to FID. */ + alu.fid = mdb->vid; + } + dev->dev_ops->w_sta_mac_table(dev, index, &alu); } +EXPORT_SYMBOL_GPL(ksz_port_mdb_add); -static int ksz_port_mdb_del(struct dsa_switch *ds, int port, - const struct switchdev_obj_port_mdb *mdb) +int ksz_port_mdb_del(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb) { struct ksz_device *dev = ds->priv; - u32 static_table[4]; - u32 data; + struct alu_struct alu; int index; int ret = 0; - u32 mac_hi, mac_lo; - - mac_hi = ((mdb->addr[0] << 8) | mdb->addr[1]); - mac_lo = ((mdb->addr[2] << 24) | (mdb->addr[3] << 16)); - mac_lo |= ((mdb->addr[4] << 8) | mdb->addr[5]); - - mutex_lock(&dev->alu_mutex); for (index = 0; index < dev->num_statics; index++) { - /* find empty slot first */ - data = (index << ALU_STAT_INDEX_S) | - ALU_STAT_READ | ALU_STAT_START; - ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); - - /* wait to be finished */ - ret = wait_alu_sta_ready(dev, ALU_STAT_START, 1000); - if (ret < 0) { - dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); - goto exit; - } - - /* read ALU static table */ - read_table(ds, static_table); - - if (static_table[0] & ALU_V_STATIC_VALID) { - /* check this has same vid & mac address */ - - if (((static_table[2] >> ALU_V_FID_S) == (mdb->vid)) && - ((static_table[2] & ALU_V_MAC_ADDR_HI) == mac_hi) && - (static_table[3] == mac_lo)) { - /* found matching one */ + if (!dev->dev_ops->r_sta_mac_table(dev, index, &alu)) { + /* Found one already in static MAC table. 
*/ + if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) && + alu.fid == mdb->vid) break; - } } } /* no available entry */ - if (index == dev->num_statics) { - ret = -EINVAL; + if (index == dev->num_statics) goto exit; - } /* clear port */ - static_table[1] &= ~BIT(port); - - if ((static_table[1] & ALU_V_PORT_MAP) == 0) { - /* delete entry */ - static_table[0] = 0; - static_table[1] = 0; - static_table[2] = 0; - static_table[3] = 0; - } - - write_table(ds, static_table); - - data = (index << ALU_STAT_INDEX_S) | ALU_STAT_START; - ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); - - /* wait to be finished */ - ret = wait_alu_sta_ready(dev, ALU_STAT_START, 1000); - if (ret < 0) - dev_dbg(dev->dev, "Failed to read ALU STATIC\n"); + alu.port_forward &= ~BIT(port); + if (!alu.port_forward) + alu.is_static = false; + dev->dev_ops->w_sta_mac_table(dev, index, &alu); exit: - mutex_unlock(&dev->alu_mutex); - return ret; } +EXPORT_SYMBOL_GPL(ksz_port_mdb_del); -static int ksz_port_mirror_add(struct dsa_switch *ds, int port, - struct dsa_mall_mirror_tc_entry *mirror, - bool ingress) +int ksz_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy) { struct ksz_device *dev = ds->priv; - if (ingress) - ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_RX, true); - else - ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_TX, true); - - ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_SNIFFER, false); - - /* configure mirror port */ - ksz_port_cfg(dev, mirror->to_local_port, P_MIRROR_CTRL, - PORT_MIRROR_SNIFFER, true); + /* setup slave port */ + dev->dev_ops->port_setup(dev, port, false); - ksz_cfg(dev, S_MIRROR_CTRL, SW_MIRROR_RX_TX, false); + /* port_stp_state_set() will be called after to enable the port so + * there is no need to do anything. + */ return 0; } +EXPORT_SYMBOL_GPL(ksz_enable_port); -static void ksz_port_mirror_del(struct dsa_switch *ds, int port, - struct dsa_mall_mirror_tc_entry *mirror) +void ksz_disable_port(struct dsa_switch *ds, int port, struct phy_device *phy) { struct ksz_device *dev = ds->priv; - u8 data; - - if (mirror->ingress) - ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_RX, false); - else - ksz_port_cfg(dev, port, P_MIRROR_CTRL, PORT_MIRROR_TX, false); - ksz_pread8(dev, port, P_MIRROR_CTRL, &data); + dev->on_ports &= ~(1 << port); + dev->live_ports &= ~(1 << port); - if (!(data & (PORT_MIRROR_RX | PORT_MIRROR_TX))) - ksz_port_cfg(dev, mirror->to_local_port, P_MIRROR_CTRL, - PORT_MIRROR_SNIFFER, false); -} - -static const struct dsa_switch_ops ksz_switch_ops = { - .get_tag_protocol = ksz_get_tag_protocol, - .setup = ksz_setup, - .phy_read = ksz_phy_read16, - .phy_write = ksz_phy_write16, - .port_enable = ksz_enable_port, - .port_disable = ksz_disable_port, - .get_strings = ksz_get_strings, - .get_ethtool_stats = ksz_get_ethtool_stats, - .get_sset_count = ksz_sset_count, - .port_stp_state_set = ksz_port_stp_state_set, - .port_fast_age = ksz_port_fast_age, - .port_vlan_filtering = ksz_port_vlan_filtering, - .port_vlan_prepare = ksz_port_vlan_prepare, - .port_vlan_add = ksz_port_vlan_add, - .port_vlan_del = ksz_port_vlan_del, - .port_fdb_dump = ksz_port_fdb_dump, - .port_fdb_add = ksz_port_fdb_add, - .port_fdb_del = ksz_port_fdb_del, - .port_mdb_prepare = ksz_port_mdb_prepare, - .port_mdb_add = ksz_port_mdb_add, - .port_mdb_del = ksz_port_mdb_del, - .port_mirror_add = ksz_port_mirror_add, - .port_mirror_del = ksz_port_mirror_del, -}; - -struct ksz_chip_data { - u32 chip_id; - const char *dev_name; - int num_vlans; - int num_alus; - int num_statics; - int cpu_ports; 
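/* Editor's note: this removed per-chip table does not disappear; the
 * equivalent chip data moves into the chip-specific driver reached through
 * the ksz9477_switch_register() declaration added to ksz_priv.h below.
 */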
- int port_cnt; -}; - -static const struct ksz_chip_data ksz_switch_chips[] = { - { - .chip_id = 0x00947700, - .dev_name = "KSZ9477", - .num_vlans = 4096, - .num_alus = 4096, - .num_statics = 16, - .cpu_ports = 0x7F, /* can be configured as cpu port */ - .port_cnt = 7, /* total physical port count */ - }, - { - .chip_id = 0x00989700, - .dev_name = "KSZ9897", - .num_vlans = 4096, - .num_alus = 4096, - .num_statics = 16, - .cpu_ports = 0x7F, /* can be configured as cpu port */ - .port_cnt = 7, /* total physical port count */ - }, -}; - -static int ksz_switch_init(struct ksz_device *dev) -{ - int i; - - dev->ds->ops = &ksz_switch_ops; - - for (i = 0; i < ARRAY_SIZE(ksz_switch_chips); i++) { - const struct ksz_chip_data *chip = &ksz_switch_chips[i]; - - if (dev->chip_id == chip->chip_id) { - dev->name = chip->dev_name; - dev->num_vlans = chip->num_vlans; - dev->num_alus = chip->num_alus; - dev->num_statics = chip->num_statics; - dev->port_cnt = chip->port_cnt; - dev->cpu_ports = chip->cpu_ports; - - break; - } - } - - /* no switch found */ - if (!dev->port_cnt) - return -ENODEV; - - return 0; + /* port_stp_state_set() will be called after to disable the port so + * there is no need to do anything. + */ } +EXPORT_SYMBOL_GPL(ksz_disable_port); struct ksz_device *ksz_switch_alloc(struct device *base, const struct ksz_io_ops *ops, @@ -1167,59 +288,64 @@ struct ksz_device *ksz_switch_alloc(struct device *base, } EXPORT_SYMBOL(ksz_switch_alloc); -int ksz_switch_detect(struct ksz_device *dev) -{ - u8 data8; - u32 id32; - int ret; - - /* turn off SPI DO Edge select */ - ret = ksz_read8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, &data8); - if (ret) - return ret; - - data8 &= ~SPI_AUTO_EDGE_DETECTION; - ret = ksz_write8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, data8); - if (ret) - return ret; - - /* read chip id */ - ret = ksz_read32(dev, REG_CHIP_ID0__1, &id32); - if (ret) - return ret; - - dev->chip_id = id32; - - return 0; -} -EXPORT_SYMBOL(ksz_switch_detect); - -int ksz_switch_register(struct ksz_device *dev) +int ksz_switch_register(struct ksz_device *dev, + const struct ksz_dev_ops *ops) { int ret; if (dev->pdata) dev->chip_id = dev->pdata->chip_id; + dev->reset_gpio = devm_gpiod_get_optional(dev->dev, "reset", + GPIOD_OUT_LOW); + if (IS_ERR(dev->reset_gpio)) + return PTR_ERR(dev->reset_gpio); + + if (dev->reset_gpio) { + gpiod_set_value(dev->reset_gpio, 1); + mdelay(10); + gpiod_set_value(dev->reset_gpio, 0); + } + mutex_init(&dev->reg_mutex); mutex_init(&dev->stats_mutex); mutex_init(&dev->alu_mutex); mutex_init(&dev->vlan_mutex); - if (ksz_switch_detect(dev)) + dev->dev_ops = ops; + + if (dev->dev_ops->detect(dev)) return -EINVAL; - ret = ksz_switch_init(dev); + ret = dev->dev_ops->init(dev); if (ret) return ret; - return dsa_register_switch(dev->ds); + dev->interface = PHY_INTERFACE_MODE_MII; + if (dev->dev->of_node) { + ret = of_get_phy_mode(dev->dev->of_node); + if (ret >= 0) + dev->interface = ret; + } + + ret = dsa_register_switch(dev->ds); + if (ret) { + dev->dev_ops->exit(dev); + return ret; + } + + return 0; } EXPORT_SYMBOL(ksz_switch_register); void ksz_switch_remove(struct ksz_device *dev) { + dev->dev_ops->exit(dev); dsa_unregister_switch(dev->ds); + + if (dev->reset_gpio) + gpiod_set_value(dev->reset_gpio, 1); + } EXPORT_SYMBOL(ksz_switch_remove); diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h new file mode 100644 index 000000000000..2dd832de0d52 --- /dev/null +++ b/drivers/net/dsa/microchip/ksz_common.h @@ -0,0 +1,214 @@ +/* SPDX-License-Identifier: 
GPL-2.0 + * Microchip switch driver common header + * + * Copyright (C) 2017-2018 Microchip Technology Inc. + */ + +#ifndef __KSZ_COMMON_H +#define __KSZ_COMMON_H + +void ksz_update_port_member(struct ksz_device *dev, int port); + +/* Common DSA access functions */ + +int ksz_phy_read16(struct dsa_switch *ds, int addr, int reg); +int ksz_phy_write16(struct dsa_switch *ds, int addr, int reg, u16 val); +int ksz_sset_count(struct dsa_switch *ds, int port, int sset); +int ksz_port_bridge_join(struct dsa_switch *ds, int port, + struct net_device *br); +void ksz_port_bridge_leave(struct dsa_switch *ds, int port, + struct net_device *br); +void ksz_port_fast_age(struct dsa_switch *ds, int port); +int ksz_port_vlan_prepare(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_vlan *vlan); +int ksz_port_fdb_dump(struct dsa_switch *ds, int port, dsa_fdb_dump_cb_t *cb, + void *data); +int ksz_port_mdb_prepare(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb); +void ksz_port_mdb_add(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb); +int ksz_port_mdb_del(struct dsa_switch *ds, int port, + const struct switchdev_obj_port_mdb *mdb); +int ksz_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy); +void ksz_disable_port(struct dsa_switch *ds, int port, struct phy_device *phy); + +/* Common register access functions */ + +static inline int ksz_read8(struct ksz_device *dev, u32 reg, u8 *val) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->read8(dev, reg, val); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_read16(struct ksz_device *dev, u32 reg, u16 *val) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->read16(dev, reg, val); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_read24(struct ksz_device *dev, u32 reg, u32 *val) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->read24(dev, reg, val); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_read32(struct ksz_device *dev, u32 reg, u32 *val) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->read32(dev, reg, val); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_write8(struct ksz_device *dev, u32 reg, u8 value) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->write8(dev, reg, value); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_write16(struct ksz_device *dev, u32 reg, u16 value) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->write16(dev, reg, value); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_write24(struct ksz_device *dev, u32 reg, u32 value) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->write24(dev, reg, value); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_write32(struct ksz_device *dev, u32 reg, u32 value) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->write32(dev, reg, value); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_get(struct ksz_device *dev, u32 reg, void *data, + size_t len) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->get(dev, reg, data, len); + mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline int ksz_set(struct ksz_device *dev, u32 reg, void *data, + size_t len) +{ + int ret; + + mutex_lock(&dev->reg_mutex); + ret = dev->ops->set(dev, reg, data, len); + 
mutex_unlock(&dev->reg_mutex); + + return ret; +} + +static inline void ksz_pread8(struct ksz_device *dev, int port, int offset, + u8 *data) +{ + ksz_read8(dev, dev->dev_ops->get_port_addr(port, offset), data); +} + +static inline void ksz_pread16(struct ksz_device *dev, int port, int offset, + u16 *data) +{ + ksz_read16(dev, dev->dev_ops->get_port_addr(port, offset), data); +} + +static inline void ksz_pread32(struct ksz_device *dev, int port, int offset, + u32 *data) +{ + ksz_read32(dev, dev->dev_ops->get_port_addr(port, offset), data); +} + +static inline void ksz_pwrite8(struct ksz_device *dev, int port, int offset, + u8 data) +{ + ksz_write8(dev, dev->dev_ops->get_port_addr(port, offset), data); +} + +static inline void ksz_pwrite16(struct ksz_device *dev, int port, int offset, + u16 data) +{ + ksz_write16(dev, dev->dev_ops->get_port_addr(port, offset), data); +} + +static inline void ksz_pwrite32(struct ksz_device *dev, int port, int offset, + u32 data) +{ + ksz_write32(dev, dev->dev_ops->get_port_addr(port, offset), data); +} + +static void ksz_cfg(struct ksz_device *dev, u32 addr, u8 bits, bool set) +{ + u8 data; + + ksz_read8(dev, addr, &data); + if (set) + data |= bits; + else + data &= ~bits; + ksz_write8(dev, addr, data); +} + +static void ksz_port_cfg(struct ksz_device *dev, int port, int offset, u8 bits, + bool set) +{ + u32 addr; + u8 data; + + addr = dev->dev_ops->get_port_addr(port, offset); + ksz_read8(dev, addr, &data); + + if (set) + data |= bits; + else + data &= ~bits; + + ksz_write8(dev, addr, data); +} + +#endif diff --git a/drivers/net/dsa/microchip/ksz_priv.h b/drivers/net/dsa/microchip/ksz_priv.h index 2a98dbd51456..60b49010904b 100644 --- a/drivers/net/dsa/microchip/ksz_priv.h +++ b/drivers/net/dsa/microchip/ksz_priv.h @@ -1,19 +1,8 @@ -/* - * Microchip KSZ series switch common definitions - * - * Copyright (C) 2017 +/* SPDX-License-Identifier: GPL-2.0 * - * Permission to use, copy, modify, and/or distribute this software for any - * purpose with or without fee is hereby granted, provided that the above - * copyright notice and this permission notice appear in all copies. + * Microchip KSZ series switch common definitions * - * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR - * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES - * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + * Copyright (C) 2017-2018 Microchip Technology Inc. 
*/ #ifndef __KSZ_PRIV_H @@ -25,7 +14,7 @@ #include #include -#include "ksz_9477_reg.h" +#include "ksz9477_reg.h" struct ksz_io_ops; @@ -33,6 +22,27 @@ struct vlan_table { u32 table[3]; }; +struct ksz_port_mib { + u8 cnt_ptr; + u64 *counters; +}; + +struct ksz_port { + u16 member; + u16 vid_member; + int stp_state; + struct phy_device phydev; + + u32 on:1; /* port is not disabled by hardware */ + u32 phy:1; /* port has a PHY */ + u32 fiber:1; /* port is fiber */ + u32 sgmii:1; /* port is SGMII */ + u32 force:1; + u32 link_just_down:1; /* link just goes down */ + + struct ksz_port_mib mib; +}; + struct ksz_device { struct dsa_switch *ds; struct ksz_platform_data *pdata; @@ -43,11 +53,14 @@ struct ksz_device { struct mutex alu_mutex; /* ALU access */ struct mutex vlan_mutex; /* vlan access */ const struct ksz_io_ops *ops; + const struct ksz_dev_ops *dev_ops; struct device *dev; void *priv; + struct gpio_desc *reset_gpio; /* Optional reset GPIO */ + /* chip specific data */ u32 chip_id; int num_vlans; @@ -55,11 +68,37 @@ struct ksz_device { int num_statics; int cpu_port; /* port connected to CPU */ int cpu_ports; /* port bitmap can be cpu port */ + int phy_port_cnt; int port_cnt; + int reg_mib_cnt; + int mib_cnt; + int mib_port_cnt; + int last_port; /* ports after that not used */ + phy_interface_t interface; + u32 regs_size; struct vlan_table *vlan_cache; u64 mib_value[TOTAL_SWITCH_COUNTER_NUM]; + + u8 *txbuf; + + struct ksz_port *ports; + struct timer_list mib_read_timer; + struct work_struct mib_read; + unsigned long mib_read_interval; + u16 br_member; + u16 member; + u16 live_ports; + u16 on_ports; /* ports enabled by DSA */ + u16 rx_ports; + u16 tx_ports; + u16 mirror_rx; + u16 mirror_tx; + u32 features; /* chip specific features */ + u32 overrides; /* chip functions set by user */ + u16 host_mask; + u16 port_mask; }; struct ksz_io_ops { @@ -71,140 +110,60 @@ struct ksz_io_ops { int (*write16)(struct ksz_device *dev, u32 reg, u16 value); int (*write24)(struct ksz_device *dev, u32 reg, u32 value); int (*write32)(struct ksz_device *dev, u32 reg, u32 value); - int (*phy_read16)(struct ksz_device *dev, int addr, int reg, - u16 *value); - int (*phy_write16)(struct ksz_device *dev, int addr, int reg, - u16 value); + int (*get)(struct ksz_device *dev, u32 reg, void *data, size_t len); + int (*set)(struct ksz_device *dev, u32 reg, void *data, size_t len); +}; + +struct alu_struct { + /* entry 1 */ + u8 is_static:1; + u8 is_src_filter:1; + u8 is_dst_filter:1; + u8 prio_age:3; + u32 _reserv_0_1:23; + u8 mstp:3; + /* entry 2 */ + u8 is_override:1; + u8 is_use_fid:1; + u32 _reserv_1_1:23; + u8 port_forward:7; + /* entry 3 & 4*/ + u32 _reserv_2_1:9; + u8 fid:7; + u8 mac[ETH_ALEN]; +}; + +struct ksz_dev_ops { + u32 (*get_port_addr)(int port, int offset); + void (*cfg_port_member)(struct ksz_device *dev, int port, u8 member); + void (*flush_dyn_mac_table)(struct ksz_device *dev, int port); + void (*port_setup)(struct ksz_device *dev, int port, bool cpu_port); + void (*r_phy)(struct ksz_device *dev, u16 phy, u16 reg, u16 *val); + void (*w_phy)(struct ksz_device *dev, u16 phy, u16 reg, u16 val); + int (*r_dyn_mac_table)(struct ksz_device *dev, u16 addr, u8 *mac_addr, + u8 *fid, u8 *src_port, u8 *timestamp, + u16 *entries); + int (*r_sta_mac_table)(struct ksz_device *dev, u16 addr, + struct alu_struct *alu); + void (*w_sta_mac_table)(struct ksz_device *dev, u16 addr, + struct alu_struct *alu); + void (*r_mib_cnt)(struct ksz_device *dev, int port, u16 addr, + u64 *cnt); + void (*r_mib_pkt)(struct ksz_device 
*dev, int port, u16 addr, + u64 *dropped, u64 *cnt); + void (*port_init_cnt)(struct ksz_device *dev, int port); + int (*shutdown)(struct ksz_device *dev); + int (*detect)(struct ksz_device *dev); + int (*init)(struct ksz_device *dev); + void (*exit)(struct ksz_device *dev); }; struct ksz_device *ksz_switch_alloc(struct device *base, const struct ksz_io_ops *ops, void *priv); -int ksz_switch_detect(struct ksz_device *dev); -int ksz_switch_register(struct ksz_device *dev); +int ksz_switch_register(struct ksz_device *dev, + const struct ksz_dev_ops *ops); void ksz_switch_remove(struct ksz_device *dev); -static inline int ksz_read8(struct ksz_device *dev, u32 reg, u8 *val) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->read8(dev, reg, val); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline int ksz_read16(struct ksz_device *dev, u32 reg, u16 *val) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->read16(dev, reg, val); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline int ksz_read24(struct ksz_device *dev, u32 reg, u32 *val) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->read24(dev, reg, val); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline int ksz_read32(struct ksz_device *dev, u32 reg, u32 *val) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->read32(dev, reg, val); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline int ksz_write8(struct ksz_device *dev, u32 reg, u8 value) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->write8(dev, reg, value); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline int ksz_write16(struct ksz_device *dev, u32 reg, u16 value) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->write16(dev, reg, value); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline int ksz_write24(struct ksz_device *dev, u32 reg, u32 value) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->write24(dev, reg, value); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline int ksz_write32(struct ksz_device *dev, u32 reg, u32 value) -{ - int ret; - - mutex_lock(&dev->reg_mutex); - ret = dev->ops->write32(dev, reg, value); - mutex_unlock(&dev->reg_mutex); - - return ret; -} - -static inline void ksz_pread8(struct ksz_device *dev, int port, int offset, - u8 *data) -{ - ksz_read8(dev, PORT_CTRL_ADDR(port, offset), data); -} - -static inline void ksz_pread16(struct ksz_device *dev, int port, int offset, - u16 *data) -{ - ksz_read16(dev, PORT_CTRL_ADDR(port, offset), data); -} - -static inline void ksz_pread32(struct ksz_device *dev, int port, int offset, - u32 *data) -{ - ksz_read32(dev, PORT_CTRL_ADDR(port, offset), data); -} - -static inline void ksz_pwrite8(struct ksz_device *dev, int port, int offset, - u8 data) -{ - ksz_write8(dev, PORT_CTRL_ADDR(port, offset), data); -} - -static inline void ksz_pwrite16(struct ksz_device *dev, int port, int offset, - u16 data) -{ - ksz_write16(dev, PORT_CTRL_ADDR(port, offset), data); -} - -static inline void ksz_pwrite32(struct ksz_device *dev, int port, int offset, - u32 data) -{ - ksz_write32(dev, PORT_CTRL_ADDR(port, offset), data); -} +int ksz9477_switch_register(struct ksz_device *dev); #endif diff --git a/drivers/net/dsa/microchip/ksz_spi.c b/drivers/net/dsa/microchip/ksz_spi.c deleted file mode 100644 index 8c1778b42701..000000000000 --- a/drivers/net/dsa/microchip/ksz_spi.c +++ /dev/null @@ -1,217 +0,0 @@ -/* - * 
Microchip KSZ series register access through SPI - * - * Copyright (C) 2017 - * - * Permission to use, copy, modify, and/or distribute this software for any - * purpose with or without fee is hereby granted, provided that the above - * copyright notice and this permission notice appear in all copies. - * - * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR - * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES - * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - */ - -#include - -#include -#include -#include -#include - -#include "ksz_priv.h" - -/* SPI frame opcodes */ -#define KS_SPIOP_RD 3 -#define KS_SPIOP_WR 2 - -#define SPI_ADDR_SHIFT 24 -#define SPI_ADDR_MASK (BIT(SPI_ADDR_SHIFT) - 1) -#define SPI_TURNAROUND_SHIFT 5 - -static int ksz_spi_read_reg(struct spi_device *spi, u32 reg, u8 *val, - unsigned int len) -{ - u32 txbuf; - int ret; - - txbuf = reg & SPI_ADDR_MASK; - txbuf |= KS_SPIOP_RD << SPI_ADDR_SHIFT; - txbuf <<= SPI_TURNAROUND_SHIFT; - txbuf = cpu_to_be32(txbuf); - - ret = spi_write_then_read(spi, &txbuf, 4, val, len); - return ret; -} - -static int ksz_spi_read(struct ksz_device *dev, u32 reg, u8 *data, - unsigned int len) -{ - struct spi_device *spi = dev->priv; - - return ksz_spi_read_reg(spi, reg, data, len); -} - -static int ksz_spi_read8(struct ksz_device *dev, u32 reg, u8 *val) -{ - return ksz_spi_read(dev, reg, val, 1); -} - -static int ksz_spi_read16(struct ksz_device *dev, u32 reg, u16 *val) -{ - int ret = ksz_spi_read(dev, reg, (u8 *)val, 2); - - if (!ret) - *val = be16_to_cpu(*val); - - return ret; -} - -static int ksz_spi_read24(struct ksz_device *dev, u32 reg, u32 *val) -{ - int ret; - - *val = 0; - ret = ksz_spi_read(dev, reg, (u8 *)val, 3); - if (!ret) { - *val = be32_to_cpu(*val); - /* convert to 24bit */ - *val >>= 8; - } - - return ret; -} - -static int ksz_spi_read32(struct ksz_device *dev, u32 reg, u32 *val) -{ - int ret = ksz_spi_read(dev, reg, (u8 *)val, 4); - - if (!ret) - *val = be32_to_cpu(*val); - - return ret; -} - -static int ksz_spi_write_reg(struct spi_device *spi, u32 reg, u8 *val, - unsigned int len) -{ - u32 txbuf; - u8 data[12]; - int i; - - txbuf = reg & SPI_ADDR_MASK; - txbuf |= (KS_SPIOP_WR << SPI_ADDR_SHIFT); - txbuf <<= SPI_TURNAROUND_SHIFT; - txbuf = cpu_to_be32(txbuf); - - data[0] = txbuf & 0xFF; - data[1] = (txbuf & 0xFF00) >> 8; - data[2] = (txbuf & 0xFF0000) >> 16; - data[3] = (txbuf & 0xFF000000) >> 24; - for (i = 0; i < len; i++) - data[i + 4] = val[i]; - - return spi_write(spi, &data, 4 + len); -} - -static int ksz_spi_write8(struct ksz_device *dev, u32 reg, u8 value) -{ - struct spi_device *spi = dev->priv; - - return ksz_spi_write_reg(spi, reg, &value, 1); -} - -static int ksz_spi_write16(struct ksz_device *dev, u32 reg, u16 value) -{ - struct spi_device *spi = dev->priv; - - value = cpu_to_be16(value); - return ksz_spi_write_reg(spi, reg, (u8 *)&value, 2); -} - -static int ksz_spi_write24(struct ksz_device *dev, u32 reg, u32 value) -{ - struct spi_device *spi = dev->priv; - - /* make it to big endian 24bit from MSB */ - value <<= 8; - value = cpu_to_be32(value); - return ksz_spi_write_reg(spi, reg, (u8 *)&value, 3); -} - -static int ksz_spi_write32(struct ksz_device *dev, u32 
reg, u32 value) -{ - struct spi_device *spi = dev->priv; - - value = cpu_to_be32(value); - return ksz_spi_write_reg(spi, reg, (u8 *)&value, 4); -} - -static const struct ksz_io_ops ksz_spi_ops = { - .read8 = ksz_spi_read8, - .read16 = ksz_spi_read16, - .read24 = ksz_spi_read24, - .read32 = ksz_spi_read32, - .write8 = ksz_spi_write8, - .write16 = ksz_spi_write16, - .write24 = ksz_spi_write24, - .write32 = ksz_spi_write32, -}; - -static int ksz_spi_probe(struct spi_device *spi) -{ - struct ksz_device *dev; - int ret; - - dev = ksz_switch_alloc(&spi->dev, &ksz_spi_ops, spi); - if (!dev) - return -ENOMEM; - - if (spi->dev.platform_data) - dev->pdata = spi->dev.platform_data; - - ret = ksz_switch_register(dev); - if (ret) - return ret; - - spi_set_drvdata(spi, dev); - - return 0; -} - -static int ksz_spi_remove(struct spi_device *spi) -{ - struct ksz_device *dev = spi_get_drvdata(spi); - - if (dev) - ksz_switch_remove(dev); - - return 0; -} - -static const struct of_device_id ksz_dt_ids[] = { - { .compatible = "microchip,ksz9477" }, - { .compatible = "microchip,ksz9897" }, - {}, -}; -MODULE_DEVICE_TABLE(of, ksz_dt_ids); - -static struct spi_driver ksz_spi_driver = { - .driver = { - .name = "ksz9477-switch", - .owner = THIS_MODULE, - .of_match_table = of_match_ptr(ksz_dt_ids), - }, - .probe = ksz_spi_probe, - .remove = ksz_spi_remove, -}; - -module_spi_driver(ksz_spi_driver); - -MODULE_AUTHOR("Woojung Huh "); -MODULE_DESCRIPTION("Microchip KSZ Series Switch SPI access Driver"); -MODULE_LICENSE("GPL"); diff --git a/drivers/net/dsa/microchip/ksz_spi.h b/drivers/net/dsa/microchip/ksz_spi.h new file mode 100644 index 000000000000..427811bd60b3 --- /dev/null +++ b/drivers/net/dsa/microchip/ksz_spi.h @@ -0,0 +1,69 @@ +/* SPDX-License-Identifier: GPL-2.0 + * Microchip KSZ series SPI access common header + * + * Copyright (C) 2017-2018 Microchip Technology Inc. 
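+ *
+ * (Editor's note, worked example: in the deleted ksz_spi.c above a read
+ * command word was built as
+ *	((KS_SPIOP_RD << SPI_ADDR_SHIFT) | (reg & SPI_ADDR_MASK))
+ *		<< SPI_TURNAROUND_SHIFT
+ * so reading register 0x0107 yields (0x03000107 << 5) = 0x600020E0,
+ * transmitted big-endian: a 2-bit opcode, a 24-bit address, then 5
+ * turnaround bits.)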
+ * Tristram Ha + */ + +#ifndef __KSZ_SPI_H +#define __KSZ_SPI_H + +/* Chip dependent SPI access */ +static int ksz_spi_read(struct ksz_device *dev, u32 reg, u8 *data, + unsigned int len); +static int ksz_spi_write(struct ksz_device *dev, u32 reg, void *data, + unsigned int len); + +static int ksz_spi_read8(struct ksz_device *dev, u32 reg, u8 *val) +{ + return ksz_spi_read(dev, reg, val, 1); +} + +static int ksz_spi_read16(struct ksz_device *dev, u32 reg, u16 *val) +{ + int ret = ksz_spi_read(dev, reg, (u8 *)val, 2); + + if (!ret) + *val = be16_to_cpu(*val); + + return ret; +} + +static int ksz_spi_read32(struct ksz_device *dev, u32 reg, u32 *val) +{ + int ret = ksz_spi_read(dev, reg, (u8 *)val, 4); + + if (!ret) + *val = be32_to_cpu(*val); + + return ret; +} + +static int ksz_spi_write8(struct ksz_device *dev, u32 reg, u8 value) +{ + return ksz_spi_write(dev, reg, &value, 1); +} + +static int ksz_spi_write16(struct ksz_device *dev, u32 reg, u16 value) +{ + value = cpu_to_be16(value); + return ksz_spi_write(dev, reg, &value, 2); +} + +static int ksz_spi_write32(struct ksz_device *dev, u32 reg, u32 value) +{ + value = cpu_to_be32(value); + return ksz_spi_write(dev, reg, &value, 4); +} + +static int ksz_spi_get(struct ksz_device *dev, u32 reg, void *data, size_t len) +{ + return ksz_spi_read(dev, reg, data, len); +} + +static int ksz_spi_set(struct ksz_device *dev, u32 reg, void *data, size_t len) +{ + return ksz_spi_write(dev, reg, data, len); +} + +#endif diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index a5de9bffe5be..74547f43b938 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -658,7 +658,8 @@ static void mt7530_adjust_link(struct dsa_switch *ds, int port, if (phydev->asym_pause) rmt_adv |= LPA_PAUSE_ASYM; - lcl_adv = ethtool_adv_to_lcl_adv_t(phydev->advertising); + lcl_adv = linkmode_adv_to_lcl_adv_t( + phydev->advertising); flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv); if (flowctrl & FLOW_CTRL_TX) diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c index 24fb6a685039..8a517d8fb9d1 100644 --- a/drivers/net/dsa/mv88e6xxx/chip.c +++ b/drivers/net/dsa/mv88e6xxx/chip.c @@ -2524,11 +2524,22 @@ static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg) mutex_unlock(&chip->reg_lock); if (reg == MII_PHYSID2) { - /* Some internal PHYS don't have a model number. Use - * the mv88e6390 family model number instead. - */ - if (!(val & 0x3f0)) - val |= MV88E6XXX_PORT_SWITCH_ID_PROD_6390 >> 4; + /* Some internal PHYs don't have a model number. */ + if (chip->info->family != MV88E6XXX_FAMILY_6165) + /* Then there is the 6165 family. It gets its + * PHYs correct. But it can also have two + * SERDES interfaces in the PHY address + * space. And these don't have a model + * number. But they are not PHYs, so we don't + * want to give them something a PHY driver + * will recognise. + * + * Use the mv88e6390 family model number + * instead, for anything which really could be + * a PHY. + */ + if (!(val & 0x3f0)) + val |= MV88E6XXX_PORT_SWITCH_ID_PROD_6390 >> 4; } return err ?
err : val; @@ -3234,6 +3245,7 @@ static const struct mv88e6xxx_ops mv88e6190_ops = { .port_disable_pri_override = mv88e6xxx_port_disable_pri_override, .port_link_state = mv88e6352_port_link_state, .port_get_cmode = mv88e6352_port_get_cmode, + .port_set_cmode = mv88e6390_port_set_cmode, .stats_snapshot = mv88e6390_g1_stats_snapshot, .stats_set_histogram = mv88e6390_g1_stats_set_histogram, .stats_get_sset_count = mv88e6320_stats_get_sset_count, @@ -3276,6 +3288,7 @@ static const struct mv88e6xxx_ops mv88e6190x_ops = { .port_disable_pri_override = mv88e6xxx_port_disable_pri_override, .port_link_state = mv88e6352_port_link_state, .port_get_cmode = mv88e6352_port_get_cmode, + .port_set_cmode = mv88e6390x_port_set_cmode, .stats_snapshot = mv88e6390_g1_stats_snapshot, .stats_set_histogram = mv88e6390_g1_stats_set_histogram, .stats_get_sset_count = mv88e6320_stats_get_sset_count, @@ -3291,8 +3304,8 @@ static const struct mv88e6xxx_ops mv88e6190x_ops = { .vtu_getnext = mv88e6390_g1_vtu_getnext, .vtu_loadpurge = mv88e6390_g1_vtu_loadpurge, .serdes_power = mv88e6390x_serdes_power, - .serdes_irq_setup = mv88e6390_serdes_irq_setup, - .serdes_irq_free = mv88e6390_serdes_irq_free, + .serdes_irq_setup = mv88e6390x_serdes_irq_setup, + .serdes_irq_free = mv88e6390x_serdes_irq_free, .gpio_ops = &mv88e6352_gpio_ops, .phylink_validate = mv88e6390x_phylink_validate, }; @@ -3318,6 +3331,7 @@ static const struct mv88e6xxx_ops mv88e6191_ops = { .port_disable_pri_override = mv88e6xxx_port_disable_pri_override, .port_link_state = mv88e6352_port_link_state, .port_get_cmode = mv88e6352_port_get_cmode, + .port_set_cmode = mv88e6390_port_set_cmode, .stats_snapshot = mv88e6390_g1_stats_snapshot, .stats_set_histogram = mv88e6390_g1_stats_set_histogram, .stats_get_sset_count = mv88e6320_stats_get_sset_count, @@ -3405,11 +3419,11 @@ static const struct mv88e6xxx_ops mv88e6290_ops = { .port_set_egress_floods = mv88e6352_port_set_egress_floods, .port_set_ether_type = mv88e6351_port_set_ether_type, .port_pause_limit = mv88e6390_port_pause_limit, - .port_set_cmode = mv88e6390x_port_set_cmode, .port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit, .port_disable_pri_override = mv88e6xxx_port_disable_pri_override, .port_link_state = mv88e6352_port_link_state, .port_get_cmode = mv88e6352_port_get_cmode, + .port_set_cmode = mv88e6390_port_set_cmode, .stats_snapshot = mv88e6390_g1_stats_snapshot, .stats_set_histogram = mv88e6390_g1_stats_set_histogram, .stats_get_sset_count = mv88e6320_stats_get_sset_count, @@ -3464,6 +3478,7 @@ static const struct mv88e6xxx_ops mv88e6320_ops = { .stats_get_stats = mv88e6320_stats_get_stats, .set_cpu_port = mv88e6095_g1_set_cpu_port, .set_egress_port = mv88e6095_g1_set_egress_port, + .watchdog_ops = &mv88e6390_watchdog_ops, .mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu, .pot_clear = mv88e6xxx_g2_pot_clear, .reset = mv88e6352_g1_reset, @@ -3506,6 +3521,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = { .stats_get_stats = mv88e6320_stats_get_stats, .set_cpu_port = mv88e6095_g1_set_cpu_port, .set_egress_port = mv88e6095_g1_set_egress_port, + .watchdog_ops = &mv88e6390_watchdog_ops, .reset = mv88e6352_g1_reset, .vtu_getnext = mv88e6185_g1_vtu_getnext, .vtu_loadpurge = mv88e6185_g1_vtu_loadpurge, @@ -3710,11 +3726,11 @@ static const struct mv88e6xxx_ops mv88e6390_ops = { .port_set_jumbo_size = mv88e6165_port_set_jumbo_size, .port_egress_rate_limiting = mv88e6097_port_egress_rate_limiting, .port_pause_limit = mv88e6390_port_pause_limit, - .port_set_cmode = mv88e6390x_port_set_cmode, 
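/* Editor's note: across these ops tables the ".port_set_cmode" entries are
 * being split per variant: plain 6390 entries gain mv88e6390_port_set_cmode(),
 * a wrapper added further down that rejects the XGMII/XAUI/RXAUI modes only
 * the 6390X variant supports, while 6390X entries keep
 * mv88e6390x_port_set_cmode().
 */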
.port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit, .port_disable_pri_override = mv88e6xxx_port_disable_pri_override, .port_link_state = mv88e6352_port_link_state, .port_get_cmode = mv88e6352_port_get_cmode, + .port_set_cmode = mv88e6390_port_set_cmode, .stats_snapshot = mv88e6390_g1_stats_snapshot, .stats_set_histogram = mv88e6390_g1_stats_set_histogram, .stats_get_sset_count = mv88e6320_stats_get_sset_count, @@ -3757,11 +3773,11 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = { .port_set_jumbo_size = mv88e6165_port_set_jumbo_size, .port_egress_rate_limiting = mv88e6097_port_egress_rate_limiting, .port_pause_limit = mv88e6390_port_pause_limit, - .port_set_cmode = mv88e6390x_port_set_cmode, .port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit, .port_disable_pri_override = mv88e6xxx_port_disable_pri_override, .port_link_state = mv88e6352_port_link_state, .port_get_cmode = mv88e6352_port_get_cmode, + .port_set_cmode = mv88e6390x_port_set_cmode, .stats_snapshot = mv88e6390_g1_stats_snapshot, .stats_set_histogram = mv88e6390_g1_stats_set_histogram, .stats_get_sset_count = mv88e6320_stats_get_sset_count, @@ -3777,8 +3793,8 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = { .vtu_getnext = mv88e6390_g1_vtu_getnext, .vtu_loadpurge = mv88e6390_g1_vtu_loadpurge, .serdes_power = mv88e6390x_serdes_power, - .serdes_irq_setup = mv88e6390_serdes_irq_setup, - .serdes_irq_free = mv88e6390_serdes_irq_free, + .serdes_irq_setup = mv88e6390x_serdes_irq_setup, + .serdes_irq_free = mv88e6390x_serdes_irq_free, .gpio_ops = &mv88e6352_gpio_ops, .avb_ops = &mv88e6390_avb_ops, .ptp_ops = &mv88e6352_ptp_ops, diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c index cd7db60a508b..ebd26b6a93e6 100644 --- a/drivers/net/dsa/mv88e6xxx/port.c +++ b/drivers/net/dsa/mv88e6xxx/port.c @@ -368,12 +368,15 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port, u16 reg; int err; - if (mode == PHY_INTERFACE_MODE_NA) - return 0; - if (port != 9 && port != 10) return -EOPNOTSUPP; + /* Default to a slow mode, so freeing up SERDES interfaces for + * other ports which might use them for SFPs. 
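+	 * (Editor's note: the assumption here is that 1000BASE-X occupies
+	 * a single SERDES lane, while the faster XAUI/RXAUI modes would
+	 * also claim neighbouring lanes; a slow default therefore leaves
+	 * lanes available.)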
+ */ + if (mode == PHY_INTERFACE_MODE_NA) + mode = PHY_INTERFACE_MODE_1000BASEX; + switch (mode) { case PHY_INTERFACE_MODE_1000BASEX: cmode = MV88E6XXX_PORT_STS_CMODE_1000BASE_X; @@ -437,6 +440,21 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port, return 0; } +int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port, + phy_interface_t mode) +{ + switch (mode) { + case PHY_INTERFACE_MODE_XGMII: + case PHY_INTERFACE_MODE_XAUI: + case PHY_INTERFACE_MODE_RXAUI: + return -EINVAL; + default: + break; + } + + return mv88e6390x_port_set_cmode(chip, port, mode); +} + int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode) { int err; diff --git a/drivers/net/dsa/mv88e6xxx/port.h b/drivers/net/dsa/mv88e6xxx/port.h index 36904c9bf955..0d81866d0e4a 100644 --- a/drivers/net/dsa/mv88e6xxx/port.h +++ b/drivers/net/dsa/mv88e6xxx/port.h @@ -310,6 +310,8 @@ int mv88e6097_port_pause_limit(struct mv88e6xxx_chip *chip, int port, u8 in, u8 out); int mv88e6390_port_pause_limit(struct mv88e6xxx_chip *chip, int port, u8 in, u8 out); +int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port, + phy_interface_t mode); int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port, phy_interface_t mode); int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode); diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c index bb69650ff772..2caa8c8b4b55 100644 --- a/drivers/net/dsa/mv88e6xxx/serdes.c +++ b/drivers/net/dsa/mv88e6xxx/serdes.c @@ -619,15 +619,11 @@ out: return ret; } -int mv88e6390_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port) +int mv88e6390x_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port) { int lane; int err; - /* Only support ports 9 and 10 at the moment */ - if (port < 9) - return 0; - lane = mv88e6390x_serdes_get_lane(chip, port); if (lane == -ENODEV) @@ -663,11 +659,19 @@ int mv88e6390_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port) return mv88e6390_serdes_irq_enable(chip, port, lane); } -void mv88e6390_serdes_irq_free(struct mv88e6xxx_chip *chip, int port) +int mv88e6390_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port) +{ + if (port < 9) + return 0; + + return mv88e6390_serdes_irq_setup(chip, port); +} + +void mv88e6390x_serdes_irq_free(struct mv88e6xxx_chip *chip, int port) { int lane = mv88e6390x_serdes_get_lane(chip, port); - if (port < 9) + if (lane == -ENODEV) return; if (lane < 0) @@ -685,6 +689,14 @@ void mv88e6390_serdes_irq_free(struct mv88e6xxx_chip *chip, int port) chip->ports[port].serdes_irq = 0; } +void mv88e6390_serdes_irq_free(struct mv88e6xxx_chip *chip, int port) +{ + if (port < 9) + return; + + mv88e6390x_serdes_irq_free(chip, port); +} + int mv88e6341_serdes_power(struct mv88e6xxx_chip *chip, int port, bool on) { u8 cmode = chip->ports[port].cmode; diff --git a/drivers/net/dsa/mv88e6xxx/serdes.h b/drivers/net/dsa/mv88e6xxx/serdes.h index 7870c5a9ef12..573dce8b1eb4 100644 --- a/drivers/net/dsa/mv88e6xxx/serdes.h +++ b/drivers/net/dsa/mv88e6xxx/serdes.h @@ -77,6 +77,8 @@ int mv88e6390_serdes_power(struct mv88e6xxx_chip *chip, int port, bool on); int mv88e6390x_serdes_power(struct mv88e6xxx_chip *chip, int port, bool on); int mv88e6390_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port); void mv88e6390_serdes_irq_free(struct mv88e6xxx_chip *chip, int port); +int mv88e6390x_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port); +void mv88e6390x_serdes_irq_free(struct mv88e6xxx_chip *chip, int port); int 
mv88e6352_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port); int mv88e6352_serdes_get_strings(struct mv88e6xxx_chip *chip, int port, uint8_t *data); diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c index 5bc168314ea2..40f421dbdf57 100644 --- a/drivers/net/ethernet/3com/3c59x.c +++ b/drivers/net/ethernet/3com/3c59x.c @@ -1151,7 +1151,7 @@ static int vortex_probe1(struct device *gendev, void __iomem *ioaddr, int irq, print_info = (vortex_debug > 1); if (print_info) - pr_info("See Documentation/networking/vortex.txt\n"); + pr_info("See Documentation/networking/device_drivers/3com/vortex.txt\n"); pr_info("%s: 3Com %s %s at %p.\n", print_name, @@ -1956,7 +1956,7 @@ vortex_error(struct net_device *dev, int status) dev->name, tx_status); if (tx_status == 0x82) { pr_err("Probably a duplex mismatch. See " - "Documentation/networking/vortex.txt\n"); + "Documentation/networking/device_drivers/3com/vortex.txt\n"); } dump_tx_ring(dev); } diff --git a/drivers/net/ethernet/3com/Kconfig b/drivers/net/ethernet/3com/Kconfig index 5c3ef9fc8207..0ac44ef1f7a9 100644 --- a/drivers/net/ethernet/3com/Kconfig +++ b/drivers/net/ethernet/3com/Kconfig @@ -75,8 +75,9 @@ config VORTEX "Hurricane" (3c555/3cSOHO) PCI If you have such a card, say Y here. More specific information is in - and in the comments at - the beginning of . + and + in the comments at the beginning of + . To compile this support as a module, choose M here. diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c index 7c9348a26cbb..91fc64c1145e 100644 --- a/drivers/net/ethernet/aeroflex/greth.c +++ b/drivers/net/ethernet/aeroflex/greth.c @@ -1283,7 +1283,7 @@ static int greth_mdio_probe(struct net_device *dev) else phy_set_max_speed(phy, SPEED_100); - phy->advertising = phy->supported; + linkmode_copy(phy->advertising, phy->supported); greth->link = 0; greth->speed = 0; diff --git a/drivers/net/ethernet/amd/au1000_eth.c b/drivers/net/ethernet/amd/au1000_eth.c index 7c1eb304c27e..e833d1b3fe18 100644 --- a/drivers/net/ethernet/amd/au1000_eth.c +++ b/drivers/net/ethernet/amd/au1000_eth.c @@ -940,11 +940,8 @@ static int au1000_open(struct net_device *dev) return retval; } - if (dev->phydev) { - /* cause the PHY state machine to schedule a link state check */ - dev->phydev->state = PHY_CHANGELINK; + if (dev->phydev) phy_start(dev->phydev); - } netif_start_queue(dev); diff --git a/drivers/net/ethernet/amd/sunlance.c b/drivers/net/ethernet/amd/sunlance.c index 9d4899826823..bd6589de93d9 100644 --- a/drivers/net/ethernet/amd/sunlance.c +++ b/drivers/net/ethernet/amd/sunlance.c @@ -1488,9 +1488,9 @@ static int sunlance_sbus_probe(struct platform_device *op) struct device_node *parent_dp = parent->dev.of_node; int err; - if (!strcmp(parent_dp->name, "ledma")) { + if (of_node_name_eq(parent_dp, "ledma")) { err = sparc_lance_probe_one(op, parent, NULL); - } else if (!strcmp(parent_dp->name, "lebuffer")) { + } else if (of_node_name_eq(parent_dp, "lebuffer")) { err = sparc_lance_probe_one(op, NULL, parent); } else err = sparc_lance_probe_one(op, NULL, NULL); diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c index 151bdb629e8a..128cd648ba99 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c @@ -857,6 +857,7 @@ static void xgbe_phy_free_phy_device(struct xgbe_prv_data *pdata) static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata) { + 
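/* Editor's note: this and the following xgbe hunks are part of the phylib
 * conversion from u32 PHY_*_FEATURES masks to ETHTOOL_LINK_MODE_* bitmaps;
 * the linkmode_set_bit_array() calls below rebuild the same feature set in
 * bitmap form before it is copied into phydev->supported.
 */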
__ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, }; struct xgbe_phy_data *phy_data = pdata->phy_data; unsigned int phy_id = phy_data->phydev->phy_id; @@ -878,9 +879,15 @@ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata) phy_write(phy_data->phydev, 0x04, 0x0d01); phy_write(phy_data->phydev, 0x00, 0x9140); - phy_data->phydev->supported = PHY_10BT_FEATURES | - PHY_100BT_FEATURES | - PHY_1000BT_FEATURES; + linkmode_set_bit_array(phy_10_100_features_array, + ARRAY_SIZE(phy_10_100_features_array), + supported); + linkmode_set_bit_array(phy_gbit_features_array, + ARRAY_SIZE(phy_gbit_features_array), + supported); + + linkmode_copy(phy_data->phydev->supported, supported); + phy_support_asym_pause(phy_data->phydev); netif_dbg(pdata, drv, pdata->netdev, @@ -891,6 +898,7 @@ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata) static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, }; struct xgbe_phy_data *phy_data = pdata->phy_data; struct xgbe_sfp_eeprom *sfp_eeprom = &phy_data->sfp_eeprom; unsigned int phy_id = phy_data->phydev->phy_id; @@ -951,9 +959,13 @@ static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata) reg = phy_read(phy_data->phydev, 0x00); phy_write(phy_data->phydev, 0x00, reg & ~0x00800); - phy_data->phydev->supported = (PHY_10BT_FEATURES | - PHY_100BT_FEATURES | - PHY_1000BT_FEATURES); + linkmode_set_bit_array(phy_10_100_features_array, + ARRAY_SIZE(phy_10_100_features_array), + supported); + linkmode_set_bit_array(phy_gbit_features_array, + ARRAY_SIZE(phy_gbit_features_array), + supported); + linkmode_copy(phy_data->phydev->supported, supported); phy_support_asym_pause(phy_data->phydev); netif_dbg(pdata, drv, pdata->netdev, @@ -976,7 +988,6 @@ static int xgbe_phy_find_phy_device(struct xgbe_prv_data *pdata) struct ethtool_link_ksettings *lks = &pdata->phy.lks; struct xgbe_phy_data *phy_data = pdata->phy_data; struct phy_device *phydev; - u32 advertising; int ret; /* If we already have a PHY, just return */ @@ -1036,9 +1047,8 @@ static int xgbe_phy_find_phy_device(struct xgbe_prv_data *pdata) xgbe_phy_external_phy_quirks(pdata); - ethtool_convert_link_mode_to_legacy_u32(&advertising, - lks->link_modes.advertising); - phydev->advertising &= advertising; + linkmode_and(phydev->advertising, phydev->advertising, + lks->link_modes.advertising); phy_start_aneg(phy_data->phydev); @@ -1497,7 +1507,7 @@ static void xgbe_phy_phydev_flowctrl(struct xgbe_prv_data *pdata) if (!phy_data->phydev) return; - lcl_adv = ethtool_adv_to_lcl_adv_t(phy_data->phydev->advertising); + lcl_adv = linkmode_adv_to_lcl_adv_t(phy_data->phydev->advertising); if (phy_data->phydev->pause) { XGBE_SET_LP_ADV(lks, Pause); @@ -1815,7 +1825,6 @@ static int xgbe_phy_an_config(struct xgbe_prv_data *pdata) { struct ethtool_link_ksettings *lks = &pdata->phy.lks; struct xgbe_phy_data *phy_data = pdata->phy_data; - u32 advertising; int ret; ret = xgbe_phy_find_phy_device(pdata); @@ -1825,12 +1834,10 @@ static int xgbe_phy_an_config(struct xgbe_prv_data *pdata) if (!phy_data->phydev) return 0; - ethtool_convert_link_mode_to_legacy_u32(&advertising, - lks->link_modes.advertising); - phy_data->phydev->autoneg = pdata->phy.autoneg; - phy_data->phydev->advertising = phy_data->phydev->supported & - advertising; + linkmode_and(phy_data->phydev->advertising, + phy_data->phydev->supported, + lks->link_modes.advertising); if (pdata->phy.autoneg != AUTONEG_ENABLE) { phy_data->phydev->speed = pdata->phy.speed; diff --git 
a/drivers/net/ethernet/apm/xgene-v2/mdio.c b/drivers/net/ethernet/apm/xgene-v2/mdio.c index f5fe3bb2e59d..53529cd85162 100644 --- a/drivers/net/ethernet/apm/xgene-v2/mdio.c +++ b/drivers/net/ethernet/apm/xgene-v2/mdio.c @@ -109,6 +109,7 @@ void xge_mdio_remove(struct net_device *ndev) int xge_mdio_config(struct net_device *ndev) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; struct xge_pdata *pdata = netdev_priv(ndev); struct device *dev = &pdata->pdev->dev; struct mii_bus *mdio_bus; @@ -148,16 +149,17 @@ int xge_mdio_config(struct net_device *ndev) goto err; } - phydev->supported &= ~(SUPPORTED_10baseT_Half | - SUPPORTED_10baseT_Full | - SUPPORTED_100baseT_Half | - SUPPORTED_100baseT_Full | - SUPPORTED_1000baseT_Half | - SUPPORTED_AUI | - SUPPORTED_MII | - SUPPORTED_FIBRE | - SUPPORTED_BNC); - phydev->advertising = phydev->supported; + linkmode_set_bit_array(phy_10_100_features_array, + ARRAY_SIZE(phy_10_100_features_array), + mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_AUI_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_MII_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_BNC_BIT, mask); + + linkmode_andnot(phydev->supported, phydev->supported, mask); + linkmode_copy(phydev->advertising, phydev->supported); pdata->phy_speed = SPEED_UNKNOWN; return 0; diff --git a/drivers/net/ethernet/aquantia/atlantic/Makefile b/drivers/net/ethernet/aquantia/atlantic/Makefile index 686f6d8c9e79..4556630ee286 100644 --- a/drivers/net/ethernet/aquantia/atlantic/Makefile +++ b/drivers/net/ethernet/aquantia/atlantic/Makefile @@ -36,6 +36,7 @@ atlantic-objs := aq_main.o \ aq_ring.o \ aq_hw_utils.o \ aq_ethtool.o \ + aq_filters.o \ hw_atl/hw_atl_a0.o \ hw_atl/hw_atl_b0.o \ hw_atl/hw_atl_utils.o \ diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h b/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h index 91eb8910b1c9..3944ce7f0870 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h +++ b/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h @@ -12,7 +12,7 @@ #ifndef AQ_CFG_H #define AQ_CFG_H -#define AQ_CFG_VECS_DEF 4U +#define AQ_CFG_VECS_DEF 8U #define AQ_CFG_TCS_DEF 1U #define AQ_CFG_TXDS_DEF 4096U @@ -42,8 +42,8 @@ #define AQ_CFG_IS_LRO_DEF 1U /* RSS */ -#define AQ_CFG_RSS_INDIRECTION_TABLE_MAX 128U -#define AQ_CFG_RSS_HASHKEY_SIZE 320U +#define AQ_CFG_RSS_INDIRECTION_TABLE_MAX 64U +#define AQ_CFG_RSS_HASHKEY_SIZE 40U #define AQ_CFG_IS_RSS_DEF 1U #define AQ_CFG_NUM_RSS_QUEUES_DEF AQ_CFG_VECS_DEF diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_common.h b/drivers/net/ethernet/aquantia/atlantic/aq_common.h index becb578211ed..6b6d1724676e 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_common.h +++ b/drivers/net/ethernet/aquantia/atlantic/aq_common.h @@ -14,7 +14,7 @@ #include #include - +#include #include "ver.h" #include "aq_cfg.h" #include "aq_utils.h" diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c index 99ef1daaa4d8..38e87eed76b9 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c @@ -12,6 +12,7 @@ #include "aq_ethtool.h" #include "aq_nic.h" #include "aq_vec.h" +#include "aq_filters.h" static void aq_ethtool_get_regs(struct net_device *ndev, struct ethtool_regs *regs, void *p) @@ -201,6 +202,41 @@ static int aq_ethtool_get_rss(struct net_device *ndev, u32 *indir, u8 *key, return 0; } +static int 
aq_ethtool_set_rss(struct net_device *netdev, const u32 *indir, + const u8 *key, const u8 hfunc) +{ + struct aq_nic_s *aq_nic = netdev_priv(netdev); + struct aq_nic_cfg_s *cfg; + unsigned int i = 0U; + u32 rss_entries; + int err = 0; + + cfg = aq_nic_get_cfg(aq_nic); + rss_entries = cfg->aq_rss.indirection_table_size; + + /* We do not allow change in unsupported parameters */ + if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) + return -EOPNOTSUPP; + /* Fill out the redirection table */ + if (indir) + for (i = 0; i < rss_entries; i++) + cfg->aq_rss.indirection_table[i] = indir[i]; + + /* Fill out the rss hash key */ + if (key) { + memcpy(cfg->aq_rss.hash_secret_key, key, + sizeof(cfg->aq_rss.hash_secret_key)); + err = aq_nic->aq_hw_ops->hw_rss_hash_set(aq_nic->aq_hw, + &cfg->aq_rss); + if (err) + return err; + } + + err = aq_nic->aq_hw_ops->hw_rss_set(aq_nic->aq_hw, &cfg->aq_rss); + + return err; +} + static int aq_ethtool_get_rxnfc(struct net_device *ndev, struct ethtool_rxnfc *cmd, u32 *rule_locs) @@ -213,7 +249,36 @@ static int aq_ethtool_get_rxnfc(struct net_device *ndev, case ETHTOOL_GRXRINGS: cmd->data = cfg->vecs; break; + case ETHTOOL_GRXCLSRLCNT: + cmd->rule_cnt = aq_get_rxnfc_count_all_rules(aq_nic); + break; + case ETHTOOL_GRXCLSRULE: + err = aq_get_rxnfc_rule(aq_nic, cmd); + break; + case ETHTOOL_GRXCLSRLALL: + err = aq_get_rxnfc_all_rules(aq_nic, cmd, rule_locs); + break; + default: + err = -EOPNOTSUPP; + break; + } + + return err; +} + +static int aq_ethtool_set_rxnfc(struct net_device *ndev, + struct ethtool_rxnfc *cmd) +{ + int err = 0; + struct aq_nic_s *aq_nic = netdev_priv(ndev); + switch (cmd->cmd) { + case ETHTOOL_SRXCLSRLINS: + err = aq_add_rxnfc_rule(aq_nic, cmd); + break; + case ETHTOOL_SRXCLSRLDEL: + err = aq_del_rxnfc_rule(aq_nic, cmd); + break; default: err = -EOPNOTSUPP; break; @@ -495,7 +560,7 @@ static int aq_set_ringparam(struct net_device *ndev, } } if (ndev_running) - err = dev_open(ndev); + err = dev_open(ndev, NULL); err_exit: return err; @@ -519,7 +584,9 @@ const struct ethtool_ops aq_ethtool_ops = { .set_pauseparam = aq_ethtool_set_pauseparam, .get_rxfh_key_size = aq_ethtool_get_rss_key_size, .get_rxfh = aq_ethtool_get_rss, + .set_rxfh = aq_ethtool_set_rss, .get_rxnfc = aq_ethtool_get_rxnfc, + .set_rxnfc = aq_ethtool_set_rxnfc, .get_sset_count = aq_ethtool_get_sset_count, .get_ethtool_stats = aq_ethtool_stats, .get_link_ksettings = aq_ethtool_get_link_ksettings, diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_filters.c b/drivers/net/ethernet/aquantia/atlantic/aq_filters.c new file mode 100644 index 000000000000..18bc035da850 --- /dev/null +++ b/drivers/net/ethernet/aquantia/atlantic/aq_filters.c @@ -0,0 +1,876 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright (C) 2014-2017 aQuantia Corporation. */ + +/* File aq_filters.c: RX filters related functions. 
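
The new get_rxnfc/set_rxnfc hooks plug the driver into ethtool's ntuple interface, driven from userspace with "ethtool -K <dev> ntuple on" followed by "ethtool -N <dev> flow-type ... action <queue> loc <slot>". A minimal userspace sketch of the same ETHTOOL_SRXCLSRLINS path these handlers serve; the interface name, port, queue and slot values are chosen purely for illustration:

#include <arpa/inet.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

/* Insert a UDP/IPv4 rule steering dst-port 4242 to RX queue 3. */
static int add_udp4_steering_rule(const char *ifname)
{
	struct ethtool_rxnfc nfc = { .cmd = ETHTOOL_SRXCLSRLINS };
	struct ifreq ifr = { 0 };
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return -1;

	nfc.fs.flow_type = UDP_V4_FLOW;
	nfc.fs.h_u.udp_ip4_spec.pdst = htons(4242);
	nfc.fs.ring_cookie = 3;	/* target RX queue */
	nfc.fs.location = 36;	/* an L3/L4 slot on this hardware */

	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&nfc;
	return ioctl(fd, SIOCETHTOOL, &ifr);
}
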
*/ + +#include "aq_filters.h" + +static bool __must_check +aq_rule_is_approve(struct ethtool_rx_flow_spec *fsp) +{ + if (fsp->flow_type & FLOW_MAC_EXT) + return false; + + switch (fsp->flow_type & ~FLOW_EXT) { + case ETHER_FLOW: + case TCP_V4_FLOW: + case UDP_V4_FLOW: + case SCTP_V4_FLOW: + case TCP_V6_FLOW: + case UDP_V6_FLOW: + case SCTP_V6_FLOW: + case IPV4_FLOW: + case IPV6_FLOW: + return true; + case IP_USER_FLOW: + switch (fsp->h_u.usr_ip4_spec.proto) { + case IPPROTO_TCP: + case IPPROTO_UDP: + case IPPROTO_SCTP: + case IPPROTO_IP: + return true; + default: + return false; + } + case IPV6_USER_FLOW: + switch (fsp->h_u.usr_ip6_spec.l4_proto) { + case IPPROTO_TCP: + case IPPROTO_UDP: + case IPPROTO_SCTP: + case IPPROTO_IP: + return true; + default: + return false; + } + default: + return false; + } + + return false; +} + +static bool __must_check +aq_match_filter(struct ethtool_rx_flow_spec *fsp1, + struct ethtool_rx_flow_spec *fsp2) +{ + if (fsp1->flow_type != fsp2->flow_type || + memcmp(&fsp1->h_u, &fsp2->h_u, sizeof(fsp2->h_u)) || + memcmp(&fsp1->h_ext, &fsp2->h_ext, sizeof(fsp2->h_ext)) || + memcmp(&fsp1->m_u, &fsp2->m_u, sizeof(fsp2->m_u)) || + memcmp(&fsp1->m_ext, &fsp2->m_ext, sizeof(fsp2->m_ext))) + return false; + + return true; +} + +static bool __must_check +aq_rule_already_exists(struct aq_nic_s *aq_nic, + struct ethtool_rx_flow_spec *fsp) +{ + struct aq_rx_filter *rule; + struct hlist_node *aq_node2; + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) { + if (rule->aq_fsp.location == fsp->location) + continue; + if (aq_match_filter(&rule->aq_fsp, fsp)) { + netdev_err(aq_nic->ndev, + "ethtool: This filter is already set\n"); + return true; + } + } + + return false; +} + +static int aq_check_approve_fl3l4(struct aq_nic_s *aq_nic, + struct aq_hw_rx_fltrs_s *rx_fltrs, + struct ethtool_rx_flow_spec *fsp) +{ + if (fsp->location < AQ_RX_FIRST_LOC_FL3L4 || + fsp->location > AQ_RX_LAST_LOC_FL3L4) { + netdev_err(aq_nic->ndev, + "ethtool: location must be in range [%d, %d]", + AQ_RX_FIRST_LOC_FL3L4, + AQ_RX_LAST_LOC_FL3L4); + return -EINVAL; + } + if (rx_fltrs->fl3l4.is_ipv6 && rx_fltrs->fl3l4.active_ipv4) { + rx_fltrs->fl3l4.is_ipv6 = false; + netdev_err(aq_nic->ndev, + "ethtool: mixing ipv4 and ipv6 is not allowed"); + return -EINVAL; + } else if (!rx_fltrs->fl3l4.is_ipv6 && rx_fltrs->fl3l4.active_ipv6) { + rx_fltrs->fl3l4.is_ipv6 = true; + netdev_err(aq_nic->ndev, + "ethtool: mixing ipv4 and ipv6 is not allowed"); + return -EINVAL; + } else if (rx_fltrs->fl3l4.is_ipv6 && + fsp->location != AQ_RX_FIRST_LOC_FL3L4 + 4 && + fsp->location != AQ_RX_FIRST_LOC_FL3L4) { + netdev_err(aq_nic->ndev, + "ethtool: The specified location for ipv6 must be %d or %d", + AQ_RX_FIRST_LOC_FL3L4, AQ_RX_FIRST_LOC_FL3L4 + 4); + return -EINVAL; + } + + return 0; +} + +static int __must_check +aq_check_approve_fl2(struct aq_nic_s *aq_nic, + struct aq_hw_rx_fltrs_s *rx_fltrs, + struct ethtool_rx_flow_spec *fsp) +{ + if (fsp->location < AQ_RX_FIRST_LOC_FETHERT || + fsp->location > AQ_RX_LAST_LOC_FETHERT) { + netdev_err(aq_nic->ndev, + "ethtool: location must be in range [%d, %d]", + AQ_RX_FIRST_LOC_FETHERT, + AQ_RX_LAST_LOC_FETHERT); + return -EINVAL; + } + + if (be16_to_cpu(fsp->m_ext.vlan_tci) == VLAN_PRIO_MASK && + fsp->m_u.ether_spec.h_proto == 0U) { + netdev_err(aq_nic->ndev, + "ethtool: proto (ether_type) parameter must be specified"); + return -EINVAL; + } + + return 0; +} + +static int __must_check 
+aq_check_approve_fvlan(struct aq_nic_s *aq_nic, + struct aq_hw_rx_fltrs_s *rx_fltrs, + struct ethtool_rx_flow_spec *fsp) +{ + if (fsp->location < AQ_RX_FIRST_LOC_FVLANID || + fsp->location > AQ_RX_LAST_LOC_FVLANID) { + netdev_err(aq_nic->ndev, + "ethtool: location must be in range [%d, %d]", + AQ_RX_FIRST_LOC_FVLANID, + AQ_RX_LAST_LOC_FVLANID); + return -EINVAL; + } + + if ((aq_nic->ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER) && + (!test_bit(be16_to_cpu(fsp->h_ext.vlan_tci), + aq_nic->active_vlans))) { + netdev_err(aq_nic->ndev, + "ethtool: unknown vlan-id specified"); + return -EINVAL; + } + + if (fsp->ring_cookie > aq_nic->aq_nic_cfg.num_rss_queues) { + netdev_err(aq_nic->ndev, + "ethtool: queue number must be in range [0, %d]", + aq_nic->aq_nic_cfg.num_rss_queues - 1); + return -EINVAL; + } + return 0; +} + +static int __must_check +aq_check_filter(struct aq_nic_s *aq_nic, + struct ethtool_rx_flow_spec *fsp) +{ + int err = 0; + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + + if (fsp->flow_type & FLOW_EXT) { + if (be16_to_cpu(fsp->m_ext.vlan_tci) == VLAN_VID_MASK) { + err = aq_check_approve_fvlan(aq_nic, rx_fltrs, fsp); + } else if (be16_to_cpu(fsp->m_ext.vlan_tci) == VLAN_PRIO_MASK) { + err = aq_check_approve_fl2(aq_nic, rx_fltrs, fsp); + } else { + netdev_err(aq_nic->ndev, + "ethtool: invalid vlan mask 0x%x specified", + be16_to_cpu(fsp->m_ext.vlan_tci)); + err = -EINVAL; + } + } else { + switch (fsp->flow_type & ~FLOW_EXT) { + case ETHER_FLOW: + err = aq_check_approve_fl2(aq_nic, rx_fltrs, fsp); + break; + case TCP_V4_FLOW: + case UDP_V4_FLOW: + case SCTP_V4_FLOW: + case IPV4_FLOW: + case IP_USER_FLOW: + rx_fltrs->fl3l4.is_ipv6 = false; + err = aq_check_approve_fl3l4(aq_nic, rx_fltrs, fsp); + break; + case TCP_V6_FLOW: + case UDP_V6_FLOW: + case SCTP_V6_FLOW: + case IPV6_FLOW: + case IPV6_USER_FLOW: + rx_fltrs->fl3l4.is_ipv6 = true; + err = aq_check_approve_fl3l4(aq_nic, rx_fltrs, fsp); + break; + default: + netdev_err(aq_nic->ndev, + "ethtool: unknown flow-type specified"); + err = -EINVAL; + } + } + + return err; +} + +static bool __must_check +aq_rule_is_not_support(struct aq_nic_s *aq_nic, + struct ethtool_rx_flow_spec *fsp) +{ + bool rule_is_not_support = false; + + if (!(aq_nic->ndev->features & NETIF_F_NTUPLE)) { + netdev_err(aq_nic->ndev, + "ethtool: Please, to enable the RX flow control:\n" + "ethtool -K %s ntuple on\n", aq_nic->ndev->name); + rule_is_not_support = true; + } else if (!aq_rule_is_approve(fsp)) { + netdev_err(aq_nic->ndev, + "ethtool: The specified flow type is not supported\n"); + rule_is_not_support = true; + } else if ((fsp->flow_type & ~FLOW_EXT) != ETHER_FLOW && + (fsp->h_u.tcp_ip4_spec.tos || + fsp->h_u.tcp_ip6_spec.tclass)) { + netdev_err(aq_nic->ndev, + "ethtool: The specified tos tclass are not supported\n"); + rule_is_not_support = true; + } else if (fsp->flow_type & FLOW_MAC_EXT) { + netdev_err(aq_nic->ndev, + "ethtool: MAC_EXT is not supported"); + rule_is_not_support = true; + } + + return rule_is_not_support; +} + +static bool __must_check +aq_rule_is_not_correct(struct aq_nic_s *aq_nic, + struct ethtool_rx_flow_spec *fsp) +{ + bool rule_is_not_correct = false; + + if (!aq_nic) { + rule_is_not_correct = true; + } else if (fsp->location > AQ_RX_MAX_RXNFC_LOC) { + netdev_err(aq_nic->ndev, + "ethtool: The specified number %u rule is invalid\n", + fsp->location); + rule_is_not_correct = true; + } else if (aq_check_filter(aq_nic, fsp)) { + rule_is_not_correct = true; + } else if (fsp->ring_cookie != RX_CLS_FLOW_DISC) { + if 
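
The three validators above enforce fixed, non-overlapping location windows, spelled out by the AQ_RX_*_LOC_* constants this patch adds to aq_hw.h; an IPv6 rule occupies four consecutive L3/L4 slots, so only locations 32 and 36 are accepted for it. A sketch of the resulting layout, using a hypothetical helper with the bounds taken from those defines:

#include <linux/types.h>

static const char *aq_rxnfc_loc_bank(u32 loc)
{
	if (loc <= 15U)
		return "vlan filter";		/* AQ_RX_*_LOC_FVLANID: 0..15 */
	if (loc <= 31U)
		return "ethertype filter";	/* AQ_RX_*_LOC_FETHERT: 16..31 */
	if (loc <= 39U)
		return "L3/L4 filter";		/* AQ_RX_*_LOC_FL3L4: 32..39 */
	return "out of range";
}
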
(fsp->ring_cookie >= aq_nic->aq_nic_cfg.num_rss_queues) { + netdev_err(aq_nic->ndev, + "ethtool: The specified action is invalid.\n" + "Maximum allowable value action is %u.\n", + aq_nic->aq_nic_cfg.num_rss_queues - 1); + rule_is_not_correct = true; + } + } + + return rule_is_not_correct; +} + +static int __must_check +aq_check_rule(struct aq_nic_s *aq_nic, + struct ethtool_rx_flow_spec *fsp) +{ + int err = 0; + + if (aq_rule_is_not_correct(aq_nic, fsp)) + err = -EINVAL; + else if (aq_rule_is_not_support(aq_nic, fsp)) + err = -EOPNOTSUPP; + else if (aq_rule_already_exists(aq_nic, fsp)) + err = -EEXIST; + + return err; +} + +static void aq_set_data_fl2(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, + struct aq_rx_filter_l2 *data, bool add) +{ + const struct ethtool_rx_flow_spec *fsp = &aq_rx_fltr->aq_fsp; + + memset(data, 0, sizeof(*data)); + + data->location = fsp->location - AQ_RX_FIRST_LOC_FETHERT; + + if (fsp->ring_cookie != RX_CLS_FLOW_DISC) + data->queue = fsp->ring_cookie; + else + data->queue = -1; + + data->ethertype = be16_to_cpu(fsp->h_u.ether_spec.h_proto); + data->user_priority_en = be16_to_cpu(fsp->m_ext.vlan_tci) + == VLAN_PRIO_MASK; + data->user_priority = (be16_to_cpu(fsp->h_ext.vlan_tci) + & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; +} + +static int aq_add_del_fether(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, bool add) +{ + struct aq_rx_filter_l2 data; + struct aq_hw_s *aq_hw = aq_nic->aq_hw; + const struct aq_hw_ops *aq_hw_ops = aq_nic->aq_hw_ops; + + aq_set_data_fl2(aq_nic, aq_rx_fltr, &data, add); + + if (unlikely(!aq_hw_ops->hw_filter_l2_set)) + return -EOPNOTSUPP; + if (unlikely(!aq_hw_ops->hw_filter_l2_clear)) + return -EOPNOTSUPP; + + if (add) + return aq_hw_ops->hw_filter_l2_set(aq_hw, &data); + else + return aq_hw_ops->hw_filter_l2_clear(aq_hw, &data); +} + +static bool aq_fvlan_is_busy(struct aq_rx_filter_vlan *aq_vlans, int vlan) +{ + int i; + + for (i = 0; i < AQ_VLAN_MAX_FILTERS; ++i) { + if (aq_vlans[i].enable && + aq_vlans[i].queue != AQ_RX_QUEUE_NOT_ASSIGNED && + aq_vlans[i].vlan_id == vlan) { + return true; + } + } + + return false; +} + +/* Function rebuilds the array of vlan filters so that filters with an assigned + * queue take precedence over plain vlan entries on the interface. 
+ */ +static void aq_fvlan_rebuild(struct aq_nic_s *aq_nic, + unsigned long *active_vlans, + struct aq_rx_filter_vlan *aq_vlans) +{ + bool vlan_busy = false; + int vlan = -1; + int i; + + for (i = 0; i < AQ_VLAN_MAX_FILTERS; ++i) { + if (aq_vlans[i].enable && + aq_vlans[i].queue != AQ_RX_QUEUE_NOT_ASSIGNED) + continue; + do { + vlan = find_next_bit(active_vlans, + VLAN_N_VID, + vlan + 1); + if (vlan == VLAN_N_VID) { + aq_vlans[i].enable = 0U; + aq_vlans[i].queue = AQ_RX_QUEUE_NOT_ASSIGNED; + aq_vlans[i].vlan_id = 0; + continue; + } + + vlan_busy = aq_fvlan_is_busy(aq_vlans, vlan); + if (!vlan_busy) { + aq_vlans[i].enable = 1U; + aq_vlans[i].queue = AQ_RX_QUEUE_NOT_ASSIGNED; + aq_vlans[i].vlan_id = vlan; + } + } while (vlan_busy && vlan != VLAN_N_VID); + } +} + +static int aq_set_data_fvlan(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, + struct aq_rx_filter_vlan *aq_vlans, bool add) +{ + const struct ethtool_rx_flow_spec *fsp = &aq_rx_fltr->aq_fsp; + int location = fsp->location - AQ_RX_FIRST_LOC_FVLANID; + int i; + + memset(&aq_vlans[location], 0, sizeof(aq_vlans[location])); + + if (!add) + return 0; + + /* remove vlan if it was in table without queue assignment */ + for (i = 0; i < AQ_VLAN_MAX_FILTERS; ++i) { + if (aq_vlans[i].vlan_id == + (be16_to_cpu(fsp->h_ext.vlan_tci) & VLAN_VID_MASK)) { + aq_vlans[i].enable = false; + } + } + + aq_vlans[location].location = location; + aq_vlans[location].vlan_id = be16_to_cpu(fsp->h_ext.vlan_tci) + & VLAN_VID_MASK; + aq_vlans[location].queue = fsp->ring_cookie & 0x1FU; + aq_vlans[location].enable = 1U; + + return 0; +} + +int aq_del_fvlan_by_vlan(struct aq_nic_s *aq_nic, u16 vlan_id) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct aq_rx_filter *rule = NULL; + struct hlist_node *aq_node2; + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) { + if (be16_to_cpu(rule->aq_fsp.h_ext.vlan_tci) == vlan_id) + break; + } + if (rule && be16_to_cpu(rule->aq_fsp.h_ext.vlan_tci) == vlan_id) { + struct ethtool_rxnfc cmd; + + cmd.fs.location = rule->aq_fsp.location; + return aq_del_rxnfc_rule(aq_nic, &cmd); + } + + return -ENOENT; +} + +static int aq_add_del_fvlan(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, bool add) +{ + const struct aq_hw_ops *aq_hw_ops = aq_nic->aq_hw_ops; + + if (unlikely(!aq_hw_ops->hw_filter_vlan_set)) + return -EOPNOTSUPP; + + aq_set_data_fvlan(aq_nic, + aq_rx_fltr, + aq_nic->aq_hw_rx_fltrs.fl2.aq_vlans, + add); + + return aq_filters_vlans_update(aq_nic); +} + +static int aq_set_data_fl3l4(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, + struct aq_rx_filter_l3l4 *data, bool add) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + const struct ethtool_rx_flow_spec *fsp = &aq_rx_fltr->aq_fsp; + + memset(data, 0, sizeof(*data)); + + data->is_ipv6 = rx_fltrs->fl3l4.is_ipv6; + data->location = HW_ATL_GET_REG_LOCATION_FL3L4(fsp->location); + + if (!add) { + if (!data->is_ipv6) + rx_fltrs->fl3l4.active_ipv4 &= ~BIT(data->location); + else + rx_fltrs->fl3l4.active_ipv6 &= + ~BIT((data->location) / 4); + + return 0; + } + + data->cmd |= HW_ATL_RX_ENABLE_FLTR_L3L4; + + switch (fsp->flow_type) { + case TCP_V4_FLOW: + case TCP_V6_FLOW: + data->cmd |= HW_ATL_RX_ENABLE_CMP_PROT_L4; + break; + case UDP_V4_FLOW: + case UDP_V6_FLOW: + data->cmd |= HW_ATL_RX_UDP; + data->cmd |= HW_ATL_RX_ENABLE_CMP_PROT_L4; + break; + case SCTP_V4_FLOW: + case SCTP_V6_FLOW: + data->cmd |= HW_ATL_RX_SCTP; + data->cmd |= HW_ATL_RX_ENABLE_CMP_PROT_L4; + 
break; + default: + break; + } + + if (!data->is_ipv6) { + data->ip_src[0] = + ntohl(fsp->h_u.tcp_ip4_spec.ip4src); + data->ip_dst[0] = + ntohl(fsp->h_u.tcp_ip4_spec.ip4dst); + rx_fltrs->fl3l4.active_ipv4 |= BIT(data->location); + } else { + int i; + + rx_fltrs->fl3l4.active_ipv6 |= BIT((data->location) / 4); + for (i = 0; i < HW_ATL_RX_CNT_REG_ADDR_IPV6; ++i) { + data->ip_dst[i] = + ntohl(fsp->h_u.tcp_ip6_spec.ip6dst[i]); + data->ip_src[i] = + ntohl(fsp->h_u.tcp_ip6_spec.ip6src[i]); + } + data->cmd |= HW_ATL_RX_ENABLE_L3_IPV6; + } + if (fsp->flow_type != IP_USER_FLOW && + fsp->flow_type != IPV6_USER_FLOW) { + if (!data->is_ipv6) { + data->p_dst = + ntohs(fsp->h_u.tcp_ip4_spec.pdst); + data->p_src = + ntohs(fsp->h_u.tcp_ip4_spec.psrc); + } else { + data->p_dst = + ntohs(fsp->h_u.tcp_ip6_spec.pdst); + data->p_src = + ntohs(fsp->h_u.tcp_ip6_spec.psrc); + } + } + if (data->ip_src[0] && !data->is_ipv6) + data->cmd |= HW_ATL_RX_ENABLE_CMP_SRC_ADDR_L3; + if (data->ip_dst[0] && !data->is_ipv6) + data->cmd |= HW_ATL_RX_ENABLE_CMP_DEST_ADDR_L3; + if (data->p_dst) + data->cmd |= HW_ATL_RX_ENABLE_CMP_DEST_PORT_L4; + if (data->p_src) + data->cmd |= HW_ATL_RX_ENABLE_CMP_SRC_PORT_L4; + if (fsp->ring_cookie != RX_CLS_FLOW_DISC) { + data->cmd |= HW_ATL_RX_HOST << HW_ATL_RX_ACTION_FL3F4_SHIFT; + data->cmd |= fsp->ring_cookie << HW_ATL_RX_QUEUE_FL3L4_SHIFT; + data->cmd |= HW_ATL_RX_ENABLE_QUEUE_L3L4; + } else { + data->cmd |= HW_ATL_RX_DISCARD << HW_ATL_RX_ACTION_FL3F4_SHIFT; + } + + return 0; +} + +static int aq_set_fl3l4(struct aq_hw_s *aq_hw, + const struct aq_hw_ops *aq_hw_ops, + struct aq_rx_filter_l3l4 *data) +{ + if (unlikely(!aq_hw_ops->hw_filter_l3l4_set)) + return -EOPNOTSUPP; + + return aq_hw_ops->hw_filter_l3l4_set(aq_hw, data); +} + +static int aq_add_del_fl3l4(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, bool add) +{ + const struct aq_hw_ops *aq_hw_ops = aq_nic->aq_hw_ops; + struct aq_hw_s *aq_hw = aq_nic->aq_hw; + struct aq_rx_filter_l3l4 data; + + if (unlikely(aq_rx_fltr->aq_fsp.location < AQ_RX_FIRST_LOC_FL3L4 || + aq_rx_fltr->aq_fsp.location > AQ_RX_LAST_LOC_FL3L4 || + aq_set_data_fl3l4(aq_nic, aq_rx_fltr, &data, add))) + return -EINVAL; + + return aq_set_fl3l4(aq_hw, aq_hw_ops, &data); +} + +static int aq_add_del_rule(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, bool add) +{ + int err = -EINVAL; + + if (aq_rx_fltr->aq_fsp.flow_type & FLOW_EXT) { + if (be16_to_cpu(aq_rx_fltr->aq_fsp.m_ext.vlan_tci) + == VLAN_VID_MASK) { + aq_rx_fltr->type = aq_rx_filter_vlan; + err = aq_add_del_fvlan(aq_nic, aq_rx_fltr, add); + } else if (be16_to_cpu(aq_rx_fltr->aq_fsp.m_ext.vlan_tci) + == VLAN_PRIO_MASK) { + aq_rx_fltr->type = aq_rx_filter_ethertype; + err = aq_add_del_fether(aq_nic, aq_rx_fltr, add); + } + } else { + switch (aq_rx_fltr->aq_fsp.flow_type & ~FLOW_EXT) { + case ETHER_FLOW: + aq_rx_fltr->type = aq_rx_filter_ethertype; + err = aq_add_del_fether(aq_nic, aq_rx_fltr, add); + break; + case TCP_V4_FLOW: + case UDP_V4_FLOW: + case SCTP_V4_FLOW: + case IP_USER_FLOW: + case TCP_V6_FLOW: + case UDP_V6_FLOW: + case SCTP_V6_FLOW: + case IPV6_USER_FLOW: + aq_rx_fltr->type = aq_rx_filter_l3l4; + err = aq_add_del_fl3l4(aq_nic, aq_rx_fltr, add); + break; + default: + err = -EINVAL; + break; + } + } + + return err; +} + +static int aq_update_table_filters(struct aq_nic_s *aq_nic, + struct aq_rx_filter *aq_rx_fltr, u16 index, + struct ethtool_rxnfc *cmd) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct aq_rx_filter *rule = NULL, *parent = NULL; + 
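
aq_set_data_fl3l4() above packs everything the hardware needs into a single 32-bit command word. As a worked example, a UDP-over-IPv4 rule that matches only the destination port and steers to queue 3 would come out roughly as below; the bit and shift names are the ones added to hw_atl_utils.h later in this patch, and the wrapper function is hypothetical:

#include <linux/types.h>
#include "hw_atl/hw_atl_utils.h"

static u32 example_udp4_dst_port_cmd(void)
{
	return HW_ATL_RX_ENABLE_FLTR_L3L4 |		/* filter active */
	       HW_ATL_RX_UDP |				/* protocol value */
	       HW_ATL_RX_ENABLE_CMP_PROT_L4 |		/* compare protocol */
	       HW_ATL_RX_ENABLE_CMP_DEST_PORT_L4 |	/* compare dst port */
	       (HW_ATL_RX_HOST << HW_ATL_RX_ACTION_FL3F4_SHIFT) |
	       (3U << HW_ATL_RX_QUEUE_FL3L4_SHIFT) |	/* steer to queue 3 */
	       HW_ATL_RX_ENABLE_QUEUE_L3L4;
}
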
struct hlist_node *aq_node2; + int err = -EINVAL; + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) { + if (rule->aq_fsp.location >= index) + break; + parent = rule; + } + + if (rule && rule->aq_fsp.location == index) { + err = aq_add_del_rule(aq_nic, rule, false); + hlist_del(&rule->aq_node); + kfree(rule); + --rx_fltrs->active_filters; + } + + if (unlikely(!aq_rx_fltr)) + return err; + + INIT_HLIST_NODE(&aq_rx_fltr->aq_node); + + if (parent) + hlist_add_behind(&aq_rx_fltr->aq_node, &parent->aq_node); + else + hlist_add_head(&aq_rx_fltr->aq_node, &rx_fltrs->filter_list); + + ++rx_fltrs->active_filters; + + return 0; +} + +u16 aq_get_rxnfc_count_all_rules(struct aq_nic_s *aq_nic) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + + return rx_fltrs->active_filters; +} + +struct aq_hw_rx_fltrs_s *aq_get_hw_rx_fltrs(struct aq_nic_s *aq_nic) +{ + return &aq_nic->aq_hw_rx_fltrs; +} + +int aq_add_rxnfc_rule(struct aq_nic_s *aq_nic, const struct ethtool_rxnfc *cmd) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct ethtool_rx_flow_spec *fsp = + (struct ethtool_rx_flow_spec *)&cmd->fs; + struct aq_rx_filter *aq_rx_fltr; + int err = 0; + + err = aq_check_rule(aq_nic, fsp); + if (err) + goto err_exit; + + aq_rx_fltr = kzalloc(sizeof(*aq_rx_fltr), GFP_KERNEL); + if (unlikely(!aq_rx_fltr)) { + err = -ENOMEM; + goto err_exit; + } + + memcpy(&aq_rx_fltr->aq_fsp, fsp, sizeof(*fsp)); + + err = aq_update_table_filters(aq_nic, aq_rx_fltr, fsp->location, NULL); + if (unlikely(err)) + goto err_free; + + err = aq_add_del_rule(aq_nic, aq_rx_fltr, true); + if (unlikely(err)) { + hlist_del(&aq_rx_fltr->aq_node); + --rx_fltrs->active_filters; + goto err_free; + } + + return 0; + +err_free: + kfree(aq_rx_fltr); +err_exit: + return err; +} + +int aq_del_rxnfc_rule(struct aq_nic_s *aq_nic, const struct ethtool_rxnfc *cmd) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct aq_rx_filter *rule = NULL; + struct hlist_node *aq_node2; + int err = -EINVAL; + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) { + if (rule->aq_fsp.location == cmd->fs.location) + break; + } + + if (rule && rule->aq_fsp.location == cmd->fs.location) { + err = aq_add_del_rule(aq_nic, rule, false); + hlist_del(&rule->aq_node); + kfree(rule); + --rx_fltrs->active_filters; + } + return err; +} + +int aq_get_rxnfc_rule(struct aq_nic_s *aq_nic, struct ethtool_rxnfc *cmd) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct ethtool_rx_flow_spec *fsp = + (struct ethtool_rx_flow_spec *)&cmd->fs; + struct aq_rx_filter *rule = NULL; + struct hlist_node *aq_node2; + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) + if (fsp->location <= rule->aq_fsp.location) + break; + + if (unlikely(!rule || fsp->location != rule->aq_fsp.location)) + return -EINVAL; + + memcpy(fsp, &rule->aq_fsp, sizeof(*fsp)); + + return 0; +} + +int aq_get_rxnfc_all_rules(struct aq_nic_s *aq_nic, struct ethtool_rxnfc *cmd, + u32 *rule_locs) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct hlist_node *aq_node2; + struct aq_rx_filter *rule; + int count = 0; + + cmd->data = aq_get_rxnfc_count_all_rules(aq_nic); + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) { + if (unlikely(count == cmd->rule_cnt)) + return -EMSGSIZE; + + rule_locs[count++] = rule->aq_fsp.location; + } + + cmd->rule_cnt = count; + + return 0; +} + +int 
aq_clear_rxnfc_all_rules(struct aq_nic_s *aq_nic) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct hlist_node *aq_node2; + struct aq_rx_filter *rule; + int err = 0; + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) { + err = aq_add_del_rule(aq_nic, rule, false); + if (err) + goto err_exit; + hlist_del(&rule->aq_node); + kfree(rule); + --rx_fltrs->active_filters; + } + +err_exit: + return err; +} + +int aq_reapply_rxnfc_all_rules(struct aq_nic_s *aq_nic) +{ + struct aq_hw_rx_fltrs_s *rx_fltrs = aq_get_hw_rx_fltrs(aq_nic); + struct hlist_node *aq_node2; + struct aq_rx_filter *rule; + int err = 0; + + hlist_for_each_entry_safe(rule, aq_node2, + &rx_fltrs->filter_list, aq_node) { + err = aq_add_del_rule(aq_nic, rule, true); + if (err) + goto err_exit; + } + +err_exit: + return err; +} + +int aq_filters_vlans_update(struct aq_nic_s *aq_nic) +{ + const struct aq_hw_ops *aq_hw_ops = aq_nic->aq_hw_ops; + struct aq_hw_s *aq_hw = aq_nic->aq_hw; + int hweight = 0; + int err = 0; + int i; + + if (unlikely(!aq_hw_ops->hw_filter_vlan_set)) + return -EOPNOTSUPP; + if (unlikely(!aq_hw_ops->hw_filter_vlan_ctrl)) + return -EOPNOTSUPP; + + aq_fvlan_rebuild(aq_nic, aq_nic->active_vlans, + aq_nic->aq_hw_rx_fltrs.fl2.aq_vlans); + + if (aq_nic->ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER) { + for (i = 0; i < BITS_TO_LONGS(VLAN_N_VID); i++) + hweight += hweight_long(aq_nic->active_vlans[i]); + + err = aq_hw_ops->hw_filter_vlan_ctrl(aq_hw, false); + if (err) + return err; + } + + err = aq_hw_ops->hw_filter_vlan_set(aq_hw, + aq_nic->aq_hw_rx_fltrs.fl2.aq_vlans + ); + if (err) + return err; + + if (aq_nic->ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER) { + if (hweight < AQ_VLAN_MAX_FILTERS) + err = aq_hw_ops->hw_filter_vlan_ctrl(aq_hw, true); + /* otherwise left in promiscuous mode */ + } + + return err; +} + +int aq_filters_vlan_offload_off(struct aq_nic_s *aq_nic) +{ + const struct aq_hw_ops *aq_hw_ops = aq_nic->aq_hw_ops; + struct aq_hw_s *aq_hw = aq_nic->aq_hw; + int err = 0; + + memset(aq_nic->active_vlans, 0, sizeof(aq_nic->active_vlans)); + aq_fvlan_rebuild(aq_nic, aq_nic->active_vlans, + aq_nic->aq_hw_rx_fltrs.fl2.aq_vlans); + + if (unlikely(!aq_hw_ops->hw_filter_vlan_set)) + return -EOPNOTSUPP; + if (unlikely(!aq_hw_ops->hw_filter_vlan_ctrl)) + return -EOPNOTSUPP; + + err = aq_hw_ops->hw_filter_vlan_ctrl(aq_hw, false); + if (err) + return err; + err = aq_hw_ops->hw_filter_vlan_set(aq_hw, + aq_nic->aq_hw_rx_fltrs.fl2.aq_vlans + ); + return err; +} diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_filters.h b/drivers/net/ethernet/aquantia/atlantic/aq_filters.h new file mode 100644 index 000000000000..c6a08c6585d5 --- /dev/null +++ b/drivers/net/ethernet/aquantia/atlantic/aq_filters.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* Copyright (C) 2014-2017 aQuantia Corporation. */ + +/* File aq_filters.h: RX filters related functions. 
*/ + +#ifndef AQ_FILTERS_H +#define AQ_FILTERS_H + +#include "aq_nic.h" + +enum aq_rx_filter_type { + aq_rx_filter_ethertype, + aq_rx_filter_vlan, + aq_rx_filter_l3l4 +}; + +struct aq_rx_filter { + struct hlist_node aq_node; + enum aq_rx_filter_type type; + struct ethtool_rx_flow_spec aq_fsp; +}; + +u16 aq_get_rxnfc_count_all_rules(struct aq_nic_s *aq_nic); +struct aq_hw_rx_fltrs_s *aq_get_hw_rx_fltrs(struct aq_nic_s *aq_nic); +int aq_add_rxnfc_rule(struct aq_nic_s *aq_nic, const struct ethtool_rxnfc *cmd); +int aq_del_rxnfc_rule(struct aq_nic_s *aq_nic, const struct ethtool_rxnfc *cmd); +int aq_get_rxnfc_rule(struct aq_nic_s *aq_nic, struct ethtool_rxnfc *cmd); +int aq_get_rxnfc_all_rules(struct aq_nic_s *aq_nic, struct ethtool_rxnfc *cmd, + u32 *rule_locs); +int aq_del_fvlan_by_vlan(struct aq_nic_s *aq_nic, u16 vlan_id); +int aq_clear_rxnfc_all_rules(struct aq_nic_s *aq_nic); +int aq_reapply_rxnfc_all_rules(struct aq_nic_s *aq_nic); +int aq_filters_vlans_update(struct aq_nic_s *aq_nic); +int aq_filters_vlan_offload_off(struct aq_nic_s *aq_nic); + +#endif /* AQ_FILTERS_H */ diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h index a1e70da358ca..81aab73dc22f 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h +++ b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h @@ -18,6 +18,17 @@ #include "aq_rss.h" #include "hw_atl/hw_atl_utils.h" +#define AQ_RX_FIRST_LOC_FVLANID 0U +#define AQ_RX_LAST_LOC_FVLANID 15U +#define AQ_RX_FIRST_LOC_FETHERT 16U +#define AQ_RX_LAST_LOC_FETHERT 31U +#define AQ_RX_FIRST_LOC_FL3L4 32U +#define AQ_RX_LAST_LOC_FL3L4 39U +#define AQ_RX_MAX_RXNFC_LOC AQ_RX_LAST_LOC_FL3L4 +#define AQ_VLAN_MAX_FILTERS \ + (AQ_RX_LAST_LOC_FVLANID - AQ_RX_FIRST_LOC_FVLANID + 1U) +#define AQ_RX_QUEUE_NOT_ASSIGNED 0xFFU + /* NIC H/W capabilities */ struct aq_hw_caps_s { u64 hw_features; @@ -130,6 +141,7 @@ struct aq_hw_s { struct aq_ring_s; struct aq_ring_param_s; struct sk_buff; +struct aq_rx_filter_l3l4; struct aq_hw_ops { @@ -183,6 +195,23 @@ struct aq_hw_ops { int (*hw_packet_filter_set)(struct aq_hw_s *self, unsigned int packet_filter); + int (*hw_filter_l3l4_set)(struct aq_hw_s *self, + struct aq_rx_filter_l3l4 *data); + + int (*hw_filter_l3l4_clear)(struct aq_hw_s *self, + struct aq_rx_filter_l3l4 *data); + + int (*hw_filter_l2_set)(struct aq_hw_s *self, + struct aq_rx_filter_l2 *data); + + int (*hw_filter_l2_clear)(struct aq_hw_s *self, + struct aq_rx_filter_l2 *data); + + int (*hw_filter_vlan_set)(struct aq_hw_s *self, + struct aq_rx_filter_vlan *aq_vlans); + + int (*hw_filter_vlan_ctrl)(struct aq_hw_s *self, bool enable); + int (*hw_multicast_list_set)(struct aq_hw_s *self, u8 ar_mac[AQ_HW_MULTICAST_ADDRESS_MAX] [ETH_ALEN], diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c index 7c07eef275eb..2a11c1eefd8f 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c @@ -13,6 +13,7 @@ #include "aq_nic.h" #include "aq_pci_func.h" #include "aq_ethtool.h" +#include "aq_filters.h" #include #include @@ -49,6 +50,11 @@ static int aq_ndev_open(struct net_device *ndev) err = aq_nic_init(aq_nic); if (err < 0) goto err_exit; + + err = aq_reapply_rxnfc_all_rules(aq_nic); + if (err < 0) + goto err_exit; + err = aq_nic_start(aq_nic); if (err < 0) goto err_exit; @@ -101,6 +107,21 @@ static int aq_ndev_set_features(struct net_device *ndev, bool is_lro = false; int err = 0; + if (!(features & NETIF_F_NTUPLE)) { + if 
(aq_nic->ndev->features & NETIF_F_NTUPLE) { + err = aq_clear_rxnfc_all_rules(aq_nic); + if (unlikely(err)) + goto err_exit; + } + } + if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER)) { + if (aq_nic->ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER) { + err = aq_filters_vlan_offload_off(aq_nic); + if (unlikely(err)) + goto err_exit; + } + } + aq_cfg->features = features; if (aq_cfg->aq_hw_caps->hw_features & NETIF_F_LRO) { @@ -119,6 +140,7 @@ static int aq_ndev_set_features(struct net_device *ndev, err = aq_nic->aq_hw_ops->hw_set_offload(aq_nic->aq_hw, aq_cfg); +err_exit: return err; } @@ -147,6 +169,35 @@ static void aq_ndev_set_multicast_settings(struct net_device *ndev) aq_nic_set_multicast_list(aq_nic, ndev); } +static int aq_ndo_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, + u16 vid) +{ + struct aq_nic_s *aq_nic = netdev_priv(ndev); + + if (!aq_nic->aq_hw_ops->hw_filter_vlan_set) + return -EOPNOTSUPP; + + set_bit(vid, aq_nic->active_vlans); + + return aq_filters_vlans_update(aq_nic); +} + +static int aq_ndo_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, + u16 vid) +{ + struct aq_nic_s *aq_nic = netdev_priv(ndev); + + if (!aq_nic->aq_hw_ops->hw_filter_vlan_set) + return -EOPNOTSUPP; + + clear_bit(vid, aq_nic->active_vlans); + + if (-ENOENT == aq_del_fvlan_by_vlan(aq_nic, vid)) + return aq_filters_vlans_update(aq_nic); + + return 0; +} + static const struct net_device_ops aq_ndev_ops = { .ndo_open = aq_ndev_open, .ndo_stop = aq_ndev_close, @@ -154,5 +205,7 @@ static const struct net_device_ops aq_ndev_ops = { .ndo_set_rx_mode = aq_ndev_set_multicast_settings, .ndo_change_mtu = aq_ndev_change_mtu, .ndo_set_mac_address = aq_ndev_set_mac_address, - .ndo_set_features = aq_ndev_set_features + .ndo_set_features = aq_ndev_set_features, + .ndo_vlan_rx_add_vid = aq_ndo_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = aq_ndo_vlan_rx_kill_vid, }; diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c index 7abdc0952425..0147c037ca96 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c @@ -44,7 +44,7 @@ static void aq_nic_rss_init(struct aq_nic_s *self, unsigned int num_rss_queues) struct aq_rss_parameters *rss_params = &cfg->aq_rss; int i = 0; - static u8 rss_key[40] = { + static u8 rss_key[AQ_CFG_RSS_HASHKEY_SIZE] = { 0x1e, 0xad, 0x71, 0x87, 0x65, 0xfc, 0x26, 0x7d, 0x0d, 0x45, 0x67, 0x74, 0xcd, 0x06, 0x1a, 0x18, 0xb6, 0xc1, 0xf0, 0xc7, 0xbb, 0x18, 0xbe, 0xf8, @@ -84,10 +84,6 @@ void aq_nic_cfg_start(struct aq_nic_s *self) cfg->is_lro = AQ_CFG_IS_LRO_DEF; - cfg->vlan_id = 0U; - - aq_nic_rss_init(self, cfg->num_rss_queues); - /*descriptors */ cfg->rxds = min(cfg->aq_hw_caps->rxds_max, AQ_CFG_RXDS_DEF); cfg->txds = min(cfg->aq_hw_caps->txds_max, AQ_CFG_TXDS_DEF); @@ -108,6 +104,8 @@ void aq_nic_cfg_start(struct aq_nic_s *self) cfg->num_rss_queues = min(cfg->vecs, AQ_CFG_NUM_RSS_QUEUES_DEF); + aq_nic_rss_init(self, cfg->num_rss_queues); + cfg->irq_type = aq_pci_func_get_irq_type(self); if ((cfg->irq_type == AQ_HW_IRQ_LEGACY) || diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h index 44ec47a3d60a..8e34c1e49bf2 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h +++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h @@ -35,7 +35,6 @@ struct aq_nic_cfg_s { u32 mtu; u32 flow_control; u32 link_speed_msk; - u32 vlan_id; u32 wol; u16 is_mc_list_enabled; u16 mc_list_count; @@ -61,6 +60,23 @@ struct aq_nic_cfg_s { 
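
The ndo_vlan_rx_add_vid()/ndo_vlan_rx_kill_vid() handlers above only maintain the active_vlans bitmap and then resynchronize the sixteen hardware VLAN slots via aq_filters_vlans_update(); when more vids are active than there are slots, the update path deliberately leaves the chip in VLAN promiscuous mode. A small sketch of that capacity check, using a hypothetical helper that mirrors the hweight accounting (AQ_VLAN_MAX_FILTERS comes from aq_hw.h):

#include <linux/bitmap.h>
#include <linux/if_vlan.h>
#include "aq_hw.h"

static bool aq_vlan_hw_can_filter(const unsigned long *active_vlans)
{
	/* strictly fewer active vids than the 16 hardware slots,
	 * matching the "hweight < AQ_VLAN_MAX_FILTERS" test above */
	return bitmap_weight(active_vlans, VLAN_N_VID) < AQ_VLAN_MAX_FILTERS;
}
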
#define AQ_NIC_TCVEC2RING(_NIC_, _TC_, _VEC_) \ ((_TC_) * AQ_CFG_TCS_MAX + (_VEC_)) +struct aq_hw_rx_fl2 { + struct aq_rx_filter_vlan aq_vlans[AQ_VLAN_MAX_FILTERS]; +}; + +struct aq_hw_rx_fl3l4 { + u8 active_ipv4; + u8 active_ipv6:2; + u8 is_ipv6; +}; + +struct aq_hw_rx_fltrs_s { + struct hlist_head filter_list; + u16 active_filters; + struct aq_hw_rx_fl2 fl2; + struct aq_hw_rx_fl3l4 fl3l4; +}; + struct aq_nic_s { atomic_t flags; struct aq_vec_s *aq_vec[AQ_CFG_VECS_MAX]; @@ -81,10 +97,13 @@ struct aq_nic_s { u32 count; u8 ar[AQ_HW_MULTICAST_ADDRESS_MAX][ETH_ALEN]; } mc_list; + /* Bitmask of currently assigned vlans from linux */ + unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)]; struct pci_dev *pdev; unsigned int msix_entry_mask; u32 irqvecs; + struct aq_hw_rx_fltrs_s aq_hw_rx_fltrs; }; static inline struct device *aq_nic_get_dev(struct aq_nic_s *self) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c index 1d5d6b8df855..c8b44cdb91c1 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c @@ -19,6 +19,7 @@ #include "aq_pci_func.h" #include "hw_atl/hw_atl_a0.h" #include "hw_atl/hw_atl_b0.h" +#include "aq_filters.h" static const struct pci_device_id aq_pci_tbl[] = { { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_0001), }, @@ -309,6 +310,7 @@ static void aq_pci_remove(struct pci_dev *pdev) struct aq_nic_s *self = pci_get_drvdata(pdev); if (self->ndev) { + aq_clear_rxnfc_all_rules(self); if (self->ndev->reg_state == NETREG_REGISTERED) unregister_netdev(self->ndev); aq_nic_free_vectors(self); diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c index a7e853fa43c2..b58ca7cb8e9d 100644 --- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c +++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c @@ -21,7 +21,7 @@ #define DEFAULT_B0_BOARD_BASIC_CAPABILITIES \ .is_64_dma = true, \ - .msix_irqs = 4U, \ + .msix_irqs = 8U, \ .irq_mask = ~0U, \ .vecs = HW_ATL_B0_RSS_MAX, \ .tcs = HW_ATL_B0_TC_MAX, \ @@ -41,7 +41,9 @@ NETIF_F_RXHASH | \ NETIF_F_SG | \ NETIF_F_TSO | \ - NETIF_F_LRO, \ + NETIF_F_LRO | \ + NETIF_F_NTUPLE | \ + NETIF_F_HW_VLAN_CTAG_FILTER, \ .hw_priv_flags = IFF_UNICAST_FLT, \ .flow_control = true, \ .mtu = HW_ATL_B0_MTU_JUMBO, \ @@ -319,20 +321,11 @@ static int hw_atl_b0_hw_init_rx_path(struct aq_hw_s *self) hw_atl_rpf_vlan_outer_etht_set(self, 0x88A8U); hw_atl_rpf_vlan_inner_etht_set(self, 0x8100U); - if (cfg->vlan_id) { - hw_atl_rpf_vlan_flr_act_set(self, 1U, 0U); - hw_atl_rpf_vlan_id_flr_set(self, 0U, 0U); - hw_atl_rpf_vlan_flr_en_set(self, 0U, 0U); + hw_atl_rpf_vlan_prom_mode_en_set(self, 1); - hw_atl_rpf_vlan_accept_untagged_packets_set(self, 1U); - hw_atl_rpf_vlan_untagged_act_set(self, 1U); - - hw_atl_rpf_vlan_flr_act_set(self, 1U, 1U); - hw_atl_rpf_vlan_id_flr_set(self, cfg->vlan_id, 0U); - hw_atl_rpf_vlan_flr_en_set(self, 1U, 1U); - } else { - hw_atl_rpf_vlan_prom_mode_en_set(self, 1); - } + // Always accept untagged packets + hw_atl_rpf_vlan_accept_untagged_packets_set(self, 1U); + hw_atl_rpf_vlan_untagged_act_set(self, 1U); /* Rx Interrupts */ hw_atl_rdm_rx_desc_wr_wb_irq_en_set(self, 1U); @@ -945,6 +938,142 @@ static int hw_atl_b0_hw_ring_rx_stop(struct aq_hw_s *self, return aq_hw_err_from_flags(self); } +static int hw_atl_b0_hw_fl3l4_clear(struct aq_hw_s *self, + struct aq_rx_filter_l3l4 *data) +{ + u8 location = data->location; + + if 
(!data->is_ipv6) { + hw_atl_rpfl3l4_cmd_clear(self, location); + hw_atl_rpf_l4_spd_set(self, 0U, location); + hw_atl_rpf_l4_dpd_set(self, 0U, location); + hw_atl_rpfl3l4_ipv4_src_addr_clear(self, location); + hw_atl_rpfl3l4_ipv4_dest_addr_clear(self, location); + } else { + int i; + + for (i = 0; i < HW_ATL_RX_CNT_REG_ADDR_IPV6; ++i) { + hw_atl_rpfl3l4_cmd_clear(self, location + i); + hw_atl_rpf_l4_spd_set(self, 0U, location + i); + hw_atl_rpf_l4_dpd_set(self, 0U, location + i); + } + hw_atl_rpfl3l4_ipv6_src_addr_clear(self, location); + hw_atl_rpfl3l4_ipv6_dest_addr_clear(self, location); + } + + return aq_hw_err_from_flags(self); +} + +static int hw_atl_b0_hw_fl3l4_set(struct aq_hw_s *self, + struct aq_rx_filter_l3l4 *data) +{ + u8 location = data->location; + + hw_atl_b0_hw_fl3l4_clear(self, data); + + if (data->cmd) { + if (!data->is_ipv6) { + hw_atl_rpfl3l4_ipv4_dest_addr_set(self, + location, + data->ip_dst[0]); + hw_atl_rpfl3l4_ipv4_src_addr_set(self, + location, + data->ip_src[0]); + } else { + hw_atl_rpfl3l4_ipv6_dest_addr_set(self, + location, + data->ip_dst); + hw_atl_rpfl3l4_ipv6_src_addr_set(self, + location, + data->ip_src); + } + } + hw_atl_rpf_l4_dpd_set(self, data->p_dst, location); + hw_atl_rpf_l4_spd_set(self, data->p_src, location); + hw_atl_rpfl3l4_cmd_set(self, location, data->cmd); + + return aq_hw_err_from_flags(self); +} + +static int hw_atl_b0_hw_fl2_set(struct aq_hw_s *self, + struct aq_rx_filter_l2 *data) +{ + hw_atl_rpf_etht_flr_en_set(self, 1U, data->location); + hw_atl_rpf_etht_flr_set(self, data->ethertype, data->location); + hw_atl_rpf_etht_user_priority_en_set(self, + !!data->user_priority_en, + data->location); + if (data->user_priority_en) + hw_atl_rpf_etht_user_priority_set(self, + data->user_priority, + data->location); + + if (data->queue < 0) { + hw_atl_rpf_etht_flr_act_set(self, 0U, data->location); + hw_atl_rpf_etht_rx_queue_en_set(self, 0U, data->location); + } else { + hw_atl_rpf_etht_flr_act_set(self, 1U, data->location); + hw_atl_rpf_etht_rx_queue_en_set(self, 1U, data->location); + hw_atl_rpf_etht_rx_queue_set(self, data->queue, data->location); + } + + return aq_hw_err_from_flags(self); +} + +static int hw_atl_b0_hw_fl2_clear(struct aq_hw_s *self, + struct aq_rx_filter_l2 *data) +{ + hw_atl_rpf_etht_flr_en_set(self, 0U, data->location); + hw_atl_rpf_etht_flr_set(self, 0U, data->location); + hw_atl_rpf_etht_user_priority_en_set(self, 0U, data->location); + + return aq_hw_err_from_flags(self); +} + +/** + * @brief Set VLAN filter table + * @details Configure VLAN filter table to accept (and assign the queue) traffic + * for the particular vlan ids. 
+ * Note: use this function under vlan promisc mode so as not to lose traffic + * + * @param aq_hw_s + * @param aq_rx_filter_vlan VLAN filter configuration + * @return 0 - OK, <0 - error + */ +static int hw_atl_b0_hw_vlan_set(struct aq_hw_s *self, + struct aq_rx_filter_vlan *aq_vlans) +{ + int i; + + for (i = 0; i < AQ_VLAN_MAX_FILTERS; i++) { + hw_atl_rpf_vlan_flr_en_set(self, 0U, i); + hw_atl_rpf_vlan_rxq_en_flr_set(self, 0U, i); + if (aq_vlans[i].enable) { + hw_atl_rpf_vlan_id_flr_set(self, + aq_vlans[i].vlan_id, + i); + hw_atl_rpf_vlan_flr_act_set(self, 1U, i); + hw_atl_rpf_vlan_flr_en_set(self, 1U, i); + if (aq_vlans[i].queue != 0xFF) { + hw_atl_rpf_vlan_rxq_flr_set(self, + aq_vlans[i].queue, + i); + hw_atl_rpf_vlan_rxq_en_flr_set(self, 1U, i); + } + } + } + + return aq_hw_err_from_flags(self); +} + +static int hw_atl_b0_hw_vlan_ctrl(struct aq_hw_s *self, bool enable) +{ + /* set promisc in case of disabling the vlan filter */ + hw_atl_rpf_vlan_prom_mode_en_set(self, !enable); + + return aq_hw_err_from_flags(self); +} + const struct aq_hw_ops hw_atl_ops_b0 = { .hw_set_mac_address = hw_atl_b0_hw_mac_addr_set, .hw_init = hw_atl_b0_hw_init, @@ -969,6 +1098,11 @@ const struct aq_hw_ops hw_atl_ops_b0 = { .hw_ring_rx_init = hw_atl_b0_hw_ring_rx_init, .hw_ring_tx_init = hw_atl_b0_hw_ring_tx_init, .hw_packet_filter_set = hw_atl_b0_hw_packet_filter_set, + .hw_filter_l2_set = hw_atl_b0_hw_fl2_set, + .hw_filter_l2_clear = hw_atl_b0_hw_fl2_clear, + .hw_filter_l3l4_set = hw_atl_b0_hw_fl3l4_set, + .hw_filter_vlan_set = hw_atl_b0_hw_vlan_set, + .hw_filter_vlan_ctrl = hw_atl_b0_hw_vlan_ctrl, .hw_multicast_list_set = hw_atl_b0_hw_multicast_list_set, .hw_interrupt_moderation_set = hw_atl_b0_hw_interrupt_moderation_set, .hw_rss_set = hw_atl_b0_hw_rss_set, diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c index 5502ec5f0f69..939f77e2e117 100644 --- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c +++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c @@ -898,6 +898,24 @@ void hw_atl_rpf_vlan_id_flr_set(struct aq_hw_s *aq_hw, u32 vlan_id_flr, vlan_id_flr); } +void hw_atl_rpf_vlan_rxq_en_flr_set(struct aq_hw_s *aq_hw, u32 vlan_rxq_en, + u32 filter) +{ + aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_RXQ_EN_F_ADR(filter), + HW_ATL_RPF_VL_RXQ_EN_F_MSK, + HW_ATL_RPF_VL_RXQ_EN_F_SHIFT, + vlan_rxq_en); +} + +void hw_atl_rpf_vlan_rxq_flr_set(struct aq_hw_s *aq_hw, u32 vlan_rxq, + u32 filter) +{ + aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_VL_RXQ_F_ADR(filter), + HW_ATL_RPF_VL_RXQ_F_MSK, + HW_ATL_RPF_VL_RXQ_F_SHIFT, + vlan_rxq); +} + void hw_atl_rpf_etht_flr_en_set(struct aq_hw_s *aq_hw, u32 etht_flr_en, u32 filter) { @@ -965,6 +983,20 @@ void hw_atl_rpf_etht_flr_set(struct aq_hw_s *aq_hw, u32 etht_flr, u32 filter) HW_ATL_RPF_ET_VALF_SHIFT, etht_flr); } +void hw_atl_rpf_l4_spd_set(struct aq_hw_s *aq_hw, u32 val, u32 filter) +{ + aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_L4_SPD_ADR(filter), + HW_ATL_RPF_L4_SPD_MSK, + HW_ATL_RPF_L4_SPD_SHIFT, val); +} + +void hw_atl_rpf_l4_dpd_set(struct aq_hw_s *aq_hw, u32 val, u32 filter) +{ + aq_hw_write_reg_bit(aq_hw, HW_ATL_RPF_L4_DPD_ADR(filter), + HW_ATL_RPF_L4_DPD_MSK, + HW_ATL_RPF_L4_DPD_SHIFT, val); +} + /* RPO: rx packet offload */ void hw_atl_rpo_ipv4header_crc_offload_en_set(struct aq_hw_s *aq_hw, u32 ipv4header_crc_offload_en) @@ -1476,3 +1508,80 @@ void hw_atl_mcp_up_force_intr_set(struct aq_hw_s *aq_hw, u32 up_force_intr) HW_ATL_MCP_UP_FORCE_INTERRUPT_SHIFT, up_force_intr); } 
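
All of the new llh accessors follow one pattern: an (address, mask, shift) triple from hw_atl_llh_internal.h fed to aq_hw_write_reg_bit(). Assuming that helper performs the usual read-modify-write, as it does elsewhere in this driver, a call such as hw_atl_rpf_vlan_rxq_flr_set() boils down to the following sketch (the expanded function is hypothetical):

#include "aq_hw_utils.h"
#include "hw_atl_llh_internal.h"

static void example_vlan_rxq_rmw(struct aq_hw_s *aq_hw, u32 vlan_rxq,
				 u32 filter)
{
	u32 reg = aq_hw_read_reg(aq_hw, HW_ATL_RPF_VL_RXQ_F_ADR(filter));

	reg &= ~HW_ATL_RPF_VL_RXQ_F_MSK;		/* clear vl_rxq[4:0] */
	reg |= (vlan_rxq << HW_ATL_RPF_VL_RXQ_F_SHIFT) &
	       HW_ATL_RPF_VL_RXQ_F_MSK;			/* place queue number */
	aq_hw_write_reg(aq_hw, HW_ATL_RPF_VL_RXQ_F_ADR(filter), reg);
}
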
+ +void hw_atl_rpfl3l4_ipv4_dest_addr_clear(struct aq_hw_s *aq_hw, u8 location) +{ + aq_hw_write_reg(aq_hw, HW_ATL_RPF_L3_DSTA_ADR(location), 0U); +} + +void hw_atl_rpfl3l4_ipv4_src_addr_clear(struct aq_hw_s *aq_hw, u8 location) +{ + aq_hw_write_reg(aq_hw, HW_ATL_RPF_L3_SRCA_ADR(location), 0U); +} + +void hw_atl_rpfl3l4_cmd_clear(struct aq_hw_s *aq_hw, u8 location) +{ + aq_hw_write_reg(aq_hw, HW_ATL_RPF_L3_REG_CTRL_ADR(location), 0U); +} + +void hw_atl_rpfl3l4_ipv6_dest_addr_clear(struct aq_hw_s *aq_hw, u8 location) +{ + int i; + + for (i = 0; i < 4; ++i) + aq_hw_write_reg(aq_hw, + HW_ATL_RPF_L3_DSTA_ADR(location + i), + 0U); +} + +void hw_atl_rpfl3l4_ipv6_src_addr_clear(struct aq_hw_s *aq_hw, u8 location) +{ + int i; + + for (i = 0; i < 4; ++i) + aq_hw_write_reg(aq_hw, + HW_ATL_RPF_L3_SRCA_ADR(location + i), + 0U); +} + +void hw_atl_rpfl3l4_ipv4_dest_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 ipv4_dest) +{ + aq_hw_write_reg(aq_hw, HW_ATL_RPF_L3_DSTA_ADR(location), + ipv4_dest); +} + +void hw_atl_rpfl3l4_ipv4_src_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 ipv4_src) +{ + aq_hw_write_reg(aq_hw, + HW_ATL_RPF_L3_SRCA_ADR(location), + ipv4_src); +} + +void hw_atl_rpfl3l4_cmd_set(struct aq_hw_s *aq_hw, u8 location, u32 cmd) +{ + aq_hw_write_reg(aq_hw, HW_ATL_RPF_L3_REG_CTRL_ADR(location), cmd); +} + +void hw_atl_rpfl3l4_ipv6_src_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 *ipv6_src) +{ + int i; + + for (i = 0; i < 4; ++i) + aq_hw_write_reg(aq_hw, + HW_ATL_RPF_L3_SRCA_ADR(location + i), + ipv6_src[i]); +} + +void hw_atl_rpfl3l4_ipv6_dest_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 *ipv6_dest) +{ + int i; + + for (i = 0; i < 4; ++i) + aq_hw_write_reg(aq_hw, + HW_ATL_RPF_L3_DSTA_ADR(location + i), + ipv6_dest[i]); +} diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h index 41f239928c15..03c570d115fe 100644 --- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h +++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h @@ -441,6 +441,14 @@ void hw_atl_rpf_vlan_flr_act_set(struct aq_hw_s *aq_hw, u32 vlan_filter_act, void hw_atl_rpf_vlan_id_flr_set(struct aq_hw_s *aq_hw, u32 vlan_id_flr, u32 filter); +/* Set VLAN RX queue assignment enable */ +void hw_atl_rpf_vlan_rxq_en_flr_set(struct aq_hw_s *aq_hw, u32 vlan_rxq_en, + u32 filter); + +/* Set VLAN RX queue */ +void hw_atl_rpf_vlan_rxq_flr_set(struct aq_hw_s *aq_hw, u32 vlan_rxq, + u32 filter); + /* set ethertype filter enable */ void hw_atl_rpf_etht_flr_en_set(struct aq_hw_s *aq_hw, u32 etht_flr_en, u32 filter); @@ -475,6 +483,12 @@ void hw_atl_rpf_etht_flr_act_set(struct aq_hw_s *aq_hw, u32 etht_flr_act, /* set ethertype filter */ void hw_atl_rpf_etht_flr_set(struct aq_hw_s *aq_hw, u32 etht_flr, u32 filter); +/* set L4 source port */ +void hw_atl_rpf_l4_spd_set(struct aq_hw_s *aq_hw, u32 val, u32 filter); + +/* set L4 destination port */ +void hw_atl_rpf_l4_dpd_set(struct aq_hw_s *aq_hw, u32 val, u32 filter); + /* rpo */ /* set ipv4 header checksum offload enable */ @@ -704,4 +718,38 @@ void hw_atl_pci_pci_reg_res_dis_set(struct aq_hw_s *aq_hw, u32 pci_reg_res_dis); /* set uP Force Interrupt */ void hw_atl_mcp_up_force_intr_set(struct aq_hw_s *aq_hw, u32 up_force_intr); +/* clear ipv4 filter destination address */ +void hw_atl_rpfl3l4_ipv4_dest_addr_clear(struct aq_hw_s *aq_hw, u8 location); + +/* clear ipv4 filter source address */ +void hw_atl_rpfl3l4_ipv4_src_addr_clear(struct aq_hw_s *aq_hw, u8 location); + +/* clear 
command for filter l3-l4 */ +void hw_atl_rpfl3l4_cmd_clear(struct aq_hw_s *aq_hw, u8 location); + +/* clear ipv6 filter destination address */ +void hw_atl_rpfl3l4_ipv6_dest_addr_clear(struct aq_hw_s *aq_hw, u8 location); + +/* clear ipv6 filter source address */ +void hw_atl_rpfl3l4_ipv6_src_addr_clear(struct aq_hw_s *aq_hw, u8 location); + +/* set ipv4 filter destination address */ +void hw_atl_rpfl3l4_ipv4_dest_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 ipv4_dest); + +/* set ipv4 filter source address */ +void hw_atl_rpfl3l4_ipv4_src_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 ipv4_src); + +/* set command for filter l3-l4 */ +void hw_atl_rpfl3l4_cmd_set(struct aq_hw_s *aq_hw, u8 location, u32 cmd); + +/* set ipv6 filter source address */ +void hw_atl_rpfl3l4_ipv6_src_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 *ipv6_src); + +/* set ipv6 filter destination address */ +void hw_atl_rpfl3l4_ipv6_dest_addr_set(struct aq_hw_s *aq_hw, u8 location, + u32 *ipv6_dest); + #endif /* HW_ATL_LLH_H */ diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h index a715fa317b1c..8470d92db812 100644 --- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h +++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h @@ -1092,24 +1092,43 @@ /* Default value of bitfield vl_id{F}[B:0] */ #define HW_ATL_RPF_VL_ID_F_DEFAULT 0x0 -/* RX et_en{F} Bitfield Definitions - * Preprocessor definitions for the bitfield "et_en{F}". +/* RX vl_rxq_en{F} Bitfield Definitions + * Preprocessor definitions for the bitfield "vl_rxq{F}". * Parameter: filter {F} | stride size 0x4 | range [0, 15] - * PORT="pif_rpf_et_en_i[0]" - */ - -/* Register address for bitfield et_en{F} */ -#define HW_ATL_RPF_ET_EN_F_ADR(filter) (0x00005300 + (filter) * 0x4) -/* Bitmask for bitfield et_en{F} */ -#define HW_ATL_RPF_ET_EN_F_MSK 0x80000000 -/* Inverted bitmask for bitfield et_en{F} */ -#define HW_ATL_RPF_ET_EN_F_MSKN 0x7FFFFFFF -/* Lower bit position of bitfield et_en{F} */ -#define HW_ATL_RPF_ET_EN_F_SHIFT 31 -/* Width of bitfield et_en{F} */ -#define HW_ATL_RPF_ET_EN_F_WIDTH 1 -/* Default value of bitfield et_en{F} */ -#define HW_ATL_RPF_ET_EN_F_DEFAULT 0x0 + * PORT="pif_rpf_vl_rxq_en_i" + */ + +/* Register address for bitfield vl_rxq_en{F} */ +#define HW_ATL_RPF_VL_RXQ_EN_F_ADR(filter) (0x00005290 + (filter) * 0x4) +/* Bitmask for bitfield vl_rxq_en{F} */ +#define HW_ATL_RPF_VL_RXQ_EN_F_MSK 0x10000000 +/* Inverted bitmask for bitfield vl_rxq_en{F}[ */ +#define HW_ATL_RPF_VL_RXQ_EN_F_MSKN 0xEFFFFFFF +/* Lower bit position of bitfield vl_rxq_en{F} */ +#define HW_ATL_RPF_VL_RXQ_EN_F_SHIFT 28 +/* Width of bitfield vl_rxq_en{F} */ +#define HW_ATL_RPF_VL_RXQ_EN_F_WIDTH 1 +/* Default value of bitfield vl_rxq_en{F} */ +#define HW_ATL_RPF_VL_RXQ_EN_F_DEFAULT 0x0 + +/* RX vl_rxq{F}[4:0] Bitfield Definitions + * Preprocessor definitions for the bitfield "vl_rxq{F}[4:0]". 
+ * Parameter: filter {F} | stride size 0x4 | range [0, 15] + * PORT="pif_rpf_vl_rxq0_i[4:0]" + */ + +/* Register address for bitfield vl_rxq{F}[4:0] */ +#define HW_ATL_RPF_VL_RXQ_F_ADR(filter) (0x00005290 + (filter) * 0x4) +/* Bitmask for bitfield vl_rxq{F}[4:0] */ +#define HW_ATL_RPF_VL_RXQ_F_MSK 0x01F00000 +/* Inverted bitmask for bitfield vl_rxq{F}[4:0] */ +#define HW_ATL_RPF_VL_RXQ_F_MSKN 0xFE0FFFFF +/* Lower bit position of bitfield vl_rxq{F}[4:0] */ +#define HW_ATL_RPF_VL_RXQ_F_SHIFT 20 +/* Width of bitfield vl_rxw{F}[4:0] */ +#define HW_ATL_RPF_VL_RXQ_F_WIDTH 5 +/* Default value of bitfield vl_rxq{F}[4:0] */ +#define HW_ATL_RPF_VL_RXQ_F_DEFAULT 0x0 /* rx et_en{f} bitfield definitions * preprocessor definitions for the bitfield "et_en{f}". @@ -1263,6 +1282,44 @@ /* default value of bitfield et_val{f}[f:0] */ #define HW_ATL_RPF_ET_VALF_DEFAULT 0x0 +/* RX l4_sp{D}[F:0] Bitfield Definitions + * Preprocessor definitions for the bitfield "l4_sp{D}[F:0]". + * Parameter: srcport {D} | stride size 0x4 | range [0, 7] + * PORT="pif_rpf_l4_sp0_i[15:0]" + */ + +/* Register address for bitfield l4_sp{D}[F:0] */ +#define HW_ATL_RPF_L4_SPD_ADR(srcport) (0x00005400u + (srcport) * 0x4) +/* Bitmask for bitfield l4_sp{D}[F:0] */ +#define HW_ATL_RPF_L4_SPD_MSK 0x0000FFFFu +/* Inverted bitmask for bitfield l4_sp{D}[F:0] */ +#define HW_ATL_RPF_L4_SPD_MSKN 0xFFFF0000u +/* Lower bit position of bitfield l4_sp{D}[F:0] */ +#define HW_ATL_RPF_L4_SPD_SHIFT 0 +/* Width of bitfield l4_sp{D}[F:0] */ +#define HW_ATL_RPF_L4_SPD_WIDTH 16 +/* Default value of bitfield l4_sp{D}[F:0] */ +#define HW_ATL_RPF_L4_SPD_DEFAULT 0x0 + +/* RX l4_dp{D}[F:0] Bitfield Definitions + * Preprocessor definitions for the bitfield "l4_dp{D}[F:0]". + * Parameter: destport {D} | stride size 0x4 | range [0, 7] + * PORT="pif_rpf_l4_dp0_i[15:0]" + */ + +/* Register address for bitfield l4_dp{D}[F:0] */ +#define HW_ATL_RPF_L4_DPD_ADR(destport) (0x00005420u + (destport) * 0x4) +/* Bitmask for bitfield l4_dp{D}[F:0] */ +#define HW_ATL_RPF_L4_DPD_MSK 0x0000FFFFu +/* Inverted bitmask for bitfield l4_dp{D}[F:0] */ +#define HW_ATL_RPF_L4_DPD_MSKN 0xFFFF0000u +/* Lower bit position of bitfield l4_dp{D}[F:0] */ +#define HW_ATL_RPF_L4_DPD_SHIFT 0 +/* Width of bitfield l4_dp{D}[F:0] */ +#define HW_ATL_RPF_L4_DPD_WIDTH 16 +/* Default value of bitfield l4_dp{D}[F:0] */ +#define HW_ATL_RPF_L4_DPD_DEFAULT 0x0 + /* rx ipv4_chk_en bitfield definitions * preprocessor definitions for the bitfield "ipv4_chk_en". * port="pif_rpo_ipv4_chk_en_i" @@ -2418,4 +2475,48 @@ /* default value of bitfield uP Force Interrupt */ #define HW_ATL_MCP_UP_FORCE_INTERRUPT_DEFAULT 0x0 +#define HW_ATL_RX_CTRL_ADDR_BEGIN_FL3L4 0x00005380 +#define HW_ATL_RX_SRCA_ADDR_BEGIN_FL3L4 0x000053B0 +#define HW_ATL_RX_DESTA_ADDR_BEGIN_FL3L4 0x000053D0 + +#define HW_ATL_RPF_L3_REG_CTRL_ADR(location) (0x00005380 + (location) * 0x4) + +/* RX rpf_l3_sa{D}[1F:0] Bitfield Definitions + * Preprocessor definitions for the bitfield "l3_sa{D}[1F:0]". 
+ * Parameter: location {D} | stride size 0x4 | range [0, 7] + * PORT="pif_rpf_l3_sa0_i[31:0]" + */ + +/* Register address for bitfield pif_rpf_l3_sa0_i[31:0] */ +#define HW_ATL_RPF_L3_SRCA_ADR(location) (0x000053B0 + (location) * 0x4) +/* Bitmask for bitfield l3_sa0[1F:0] */ +#define HW_ATL_RPF_L3_SRCA_MSK 0xFFFFFFFFu +/* Inverted bitmask for bitfield l3_sa0[1F:0] */ +#define HW_ATL_RPF_L3_SRCA_MSKN 0xFFFFFFFFu +/* Lower bit position of bitfield l3_sa0[1F:0] */ +#define HW_ATL_RPF_L3_SRCA_SHIFT 0 +/* Width of bitfield l3_sa0[1F:0] */ +#define HW_ATL_RPF_L3_SRCA_WIDTH 32 +/* Default value of bitfield l3_sa0[1F:0] */ +#define HW_ATL_RPF_L3_SRCA_DEFAULT 0x0 + +/* RX rpf_l3_da{D}[1F:0] Bitfield Definitions + * Preprocessor definitions for the bitfield "l3_da{D}[1F:0]". + * Parameter: location {D} | stride size 0x4 | range [0, 7] + * PORT="pif_rpf_l3_da0_i[31:0]" + */ + + /* Register address for bitfield pif_rpf_l3_da0_i[31:0] */ +#define HW_ATL_RPF_L3_DSTA_ADR(location) (0x000053D0 + (location) * 0x4) +/* Bitmask for bitfield l3_da0[1F:0] */ +#define HW_ATL_RPF_L3_DSTA_MSK 0xFFFFFFFFu +/* Inverted bitmask for bitfield l3_da0[1F:0] */ +#define HW_ATL_RPF_L3_DSTA_MSKN 0xFFFFFFFFu +/* Lower bit position of bitfield l3_da0[1F:0] */ +#define HW_ATL_RPF_L3_DSTA_SHIFT 0 +/* Width of bitfield l3_da0[1F:0] */ +#define HW_ATL_RPF_L3_DSTA_WIDTH 32 +/* Default value of bitfield l3_da0[1F:0] */ +#define HW_ATL_RPF_L3_DSTA_DEFAULT 0x0 + #endif /* HW_ATL_LLH_INTERNAL_H */ diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c index 7def1cb8ab9d..9b74a3197d7f 100644 --- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c +++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c @@ -263,6 +263,8 @@ int hw_atl_utils_soft_reset(struct aq_hw_s *self) AQ_HW_WAIT_FOR((aq_hw_read_reg(self, HW_ATL_MPI_STATE_ADR) & HW_ATL_MPI_STATE_MSK) == MPI_DEINIT, 10, 1000U); + if (err) + return err; } if (self->rbl_enabled) @@ -454,8 +456,6 @@ int hw_atl_utils_fw_rpc_wait(struct aq_hw_s *self, (fw.val = aq_hw_read_reg(self, HW_ATL_RPC_STATE_ADR), fw.tid), 1000U, 100U); - if (err < 0) - goto err_exit; if (fw.len == 0xFFFFU) { err = hw_atl_utils_fw_rpc_call(self, sw.len); @@ -463,8 +463,6 @@ goto err_exit; } } while (sw.tid != fw.tid || 0xFFFFU == fw.len); - if (err < 0) - goto err_exit; if (rpc) { if (fw.len) { diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.h index 3613fca64b58..48278e333462 100644 --- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.h +++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.h @@ -240,6 +240,64 @@ struct __packed offload_info { u8 buf[0]; }; +enum hw_atl_rx_action_with_traffic { + HW_ATL_RX_DISCARD, + HW_ATL_RX_HOST, +}; + +struct aq_rx_filter_vlan { + u8 enable; + u8 location; + u16 vlan_id; + u8 queue; +}; + +struct aq_rx_filter_l2 { + s8 queue; + u8 location; + u8 user_priority_en; + u8 user_priority; + u16 ethertype; +}; + +struct aq_rx_filter_l3l4 { + u32 cmd; + u8 location; + u32 ip_dst[4]; + u32 ip_src[4]; + u16 p_dst; + u16 p_src; + u8 is_ipv6; +}; + +enum hw_atl_rx_protocol_value_l3l4 { + HW_ATL_RX_TCP, + HW_ATL_RX_UDP, + HW_ATL_RX_SCTP, + HW_ATL_RX_ICMP +}; + +enum hw_atl_rx_ctrl_registers_l3l4 { + HW_ATL_RX_ENABLE_MNGMNT_QUEUE_L3L4 = BIT(22), + HW_ATL_RX_ENABLE_QUEUE_L3L4 = BIT(23), + HW_ATL_RX_ENABLE_ARP_FLTR_L3 = BIT(24), + 
HW_ATL_RX_ENABLE_CMP_PROT_L4 = BIT(25), + HW_ATL_RX_ENABLE_CMP_DEST_PORT_L4 = BIT(26), + HW_ATL_RX_ENABLE_CMP_SRC_PORT_L4 = BIT(27), + HW_ATL_RX_ENABLE_CMP_DEST_ADDR_L3 = BIT(28), + HW_ATL_RX_ENABLE_CMP_SRC_ADDR_L3 = BIT(29), + HW_ATL_RX_ENABLE_L3_IPV6 = BIT(30), + HW_ATL_RX_ENABLE_FLTR_L3L4 = BIT(31) +}; + +#define HW_ATL_RX_QUEUE_FL3L4_SHIFT 8U +#define HW_ATL_RX_ACTION_FL3F4_SHIFT 16U + +#define HW_ATL_RX_CNT_REG_ADDR_IPV6 4U + +#define HW_ATL_GET_REG_LOCATION_FL3L4(location) \ + ((location) - AQ_RX_FIRST_LOC_FL3L4) + #define HAL_ATLANTIC_UTILS_CHIP_MIPS 0x00000001U #define HAL_ATLANTIC_UTILS_CHIP_TPO2 0x00000002U #define HAL_ATLANTIC_UTILS_CHIP_RPF2 0x00000004U diff --git a/drivers/net/ethernet/arc/emac_main.c b/drivers/net/ethernet/arc/emac_main.c index bd277b0dc615..4406325fdd9f 100644 --- a/drivers/net/ethernet/arc/emac_main.c +++ b/drivers/net/ethernet/arc/emac_main.c @@ -432,7 +432,8 @@ static int arc_emac_open(struct net_device *ndev) phy_dev->autoneg = AUTONEG_ENABLE; phy_dev->speed = 0; phy_dev->duplex = 0; - phy_dev->advertising &= phy_dev->supported; + linkmode_and(phy_dev->advertising, phy_dev->advertising, + phy_dev->supported); priv->last_rx_bd = 0; diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c index e445ab724827..f44808959ff3 100644 --- a/drivers/net/ethernet/broadcom/b44.c +++ b/drivers/net/ethernet/broadcom/b44.c @@ -2248,6 +2248,7 @@ static void b44_adjust_link(struct net_device *dev) static int b44_register_phy_one(struct b44 *bp) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; struct mii_bus *mii_bus; struct ssb_device *sdev = bp->sdev; struct phy_device *phydev; @@ -2303,11 +2304,12 @@ static int b44_register_phy_one(struct b44 *bp) } /* mask with MAC supported features */ - phydev->supported &= (SUPPORTED_100baseT_Half | - SUPPORTED_100baseT_Full | - SUPPORTED_Autoneg | - SUPPORTED_MII); - phydev->advertising = phydev->supported; + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_MII_BIT, mask); + linkmode_and(phydev->supported, phydev->supported, mask); + linkmode_copy(phydev->advertising, phydev->supported); bp->old_link = 0; bp->phy_addr = phydev->mdio.addr; diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c index 0e2d99c737e3..4574275ef445 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.c +++ b/drivers/net/ethernet/broadcom/bcmsysport.c @@ -1068,6 +1068,7 @@ static void mpd_enable_set(struct bcm_sysport_priv *priv, bool enable) static void bcm_sysport_resume_from_wol(struct bcm_sysport_priv *priv) { + unsigned int index; u32 reg; /* Disable RXCHK, active filters and Broadcom tag matching */ @@ -1076,6 +1077,15 @@ static void bcm_sysport_resume_from_wol(struct bcm_sysport_priv *priv) RXCHK_BRCM_TAG_MATCH_SHIFT | RXCHK_EN | RXCHK_BRCM_TAG_EN); rxchk_writel(priv, reg, RXCHK_CONTROL); + /* Make sure we restore correct CID index in case HW lost + * its context during deep idle state + */ + for_each_set_bit(index, priv->filters, RXCHK_BRCM_TAG_MAX) { + rxchk_writel(priv, priv->filters_loc[index] << + RXCHK_BRCM_TAG_CID_SHIFT, RXCHK_BRCM_TAG(index)); + rxchk_writel(priv, 0xff00ffff, RXCHK_BRCM_TAG_MASK(index)); + } + /* Clear the MagicPacket detection logic */ mpd_enable_set(priv, false); @@ -2189,6 +2199,7 @@ static int bcm_sysport_rule_set(struct bcm_sysport_priv *priv, rxchk_writel(priv, reg, 
RXCHK_BRCM_TAG(index)); rxchk_writel(priv, 0xff00ffff, RXCHK_BRCM_TAG_MASK(index)); + priv->filters_loc[index] = nfc->fs.location; set_bit(index, priv->filters); return 0; @@ -2208,6 +2219,7 @@ static int bcm_sysport_rule_del(struct bcm_sysport_priv *priv, * be taken care of during suspend time by bcm_sysport_suspend_to_wol */ clear_bit(index, priv->filters); + priv->filters_loc[index] = 0; return 0; } @@ -2312,7 +2324,7 @@ static int bcm_sysport_map_queues(struct notifier_block *nb, struct bcm_sysport_priv *priv; struct net_device *slave_dev; unsigned int num_tx_queues; - unsigned int q, start, port; + unsigned int q, qp, port; struct net_device *dev; priv = container_of(nb, struct bcm_sysport_priv, dsa_notifier); @@ -2351,20 +2363,61 @@ static int bcm_sysport_map_queues(struct notifier_block *nb, priv->per_port_num_tx_queues = num_tx_queues; - start = find_first_zero_bit(&priv->queue_bitmap, dev->num_tx_queues); - for (q = 0; q < num_tx_queues; q++) { - ring = &priv->tx_rings[q + start]; + for (q = 0, qp = 0; q < dev->num_tx_queues && qp < num_tx_queues; + q++) { + ring = &priv->tx_rings[q]; + + if (ring->inspect) + continue; /* Just remember the mapping actual programming done * during bcm_sysport_init_tx_ring */ - ring->switch_queue = q; + ring->switch_queue = qp; ring->switch_port = port; ring->inspect = true; priv->ring_map[q + port * num_tx_queues] = ring; + qp++; + } + + return 0; +} + +static int bcm_sysport_unmap_queues(struct notifier_block *nb, + struct dsa_notifier_register_info *info) +{ + struct bcm_sysport_tx_ring *ring; + struct bcm_sysport_priv *priv; + struct net_device *slave_dev; + unsigned int num_tx_queues; + struct net_device *dev; + unsigned int q, port; + + priv = container_of(nb, struct bcm_sysport_priv, dsa_notifier); + if (priv->netdev != info->master) + return 0; + + dev = info->master; + + if (dev->netdev_ops != &bcm_sysport_netdev_ops) + return 0; + + port = info->port_number; + slave_dev = info->info.dev; + + num_tx_queues = slave_dev->real_num_tx_queues; + + for (q = 0; q < dev->num_tx_queues; q++) { + ring = &priv->tx_rings[q]; - /* Set all queues as being used now */ - set_bit(q + start, &priv->queue_bitmap); + if (ring->switch_port != port) + continue; + + if (!ring->inspect) + continue; + + ring->inspect = false; + priv->ring_map[q + port * num_tx_queues] = NULL; } return 0; @@ -2373,14 +2426,18 @@ static int bcm_sysport_map_queues(struct notifier_block *nb, static int bcm_sysport_dsa_notifier(struct notifier_block *nb, unsigned long event, void *ptr) { - struct dsa_notifier_register_info *info; - - if (event != DSA_PORT_REGISTER) - return NOTIFY_DONE; + int ret = NOTIFY_DONE; - info = ptr; + switch (event) { + case DSA_PORT_REGISTER: + ret = bcm_sysport_map_queues(nb, ptr); + break; + case DSA_PORT_UNREGISTER: + ret = bcm_sysport_unmap_queues(nb, ptr); + break; + } - return notifier_from_errno(bcm_sysport_map_queues(nb, info)); + return notifier_from_errno(ret); } #define REV_FMT "v%2x.%02x" diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h index a7a230884a87..0887e6356649 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.h +++ b/drivers/net/ethernet/broadcom/bcmsysport.h @@ -786,6 +786,7 @@ struct bcm_sysport_priv { /* Ethtool */ u32 msg_enable; DECLARE_BITMAP(filters, RXCHK_BRCM_TAG_MAX); + u32 filters_loc[RXCHK_BRCM_TAG_MAX]; struct bcm_sysport_stats64 stats64; @@ -795,7 +796,6 @@ struct bcm_sysport_priv { /* map information between switch port queues and local queues */ struct notifier_block 
dsa_notifier; unsigned int per_port_num_tx_queues; - unsigned long queue_bitmap; struct bcm_sysport_tx_ring *ring_map[DSA_MAX_PORTS * 8]; }; diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c index a4a90b6cdb46..749d0ef44371 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c @@ -1105,11 +1105,39 @@ static void bnx2x_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info) { struct bnx2x *bp = netdev_priv(dev); + char version[ETHTOOL_FWVERS_LEN]; + int ext_dev_info_offset; + u32 mbi; strlcpy(info->driver, DRV_MODULE_NAME, sizeof(info->driver)); strlcpy(info->version, DRV_MODULE_VERSION, sizeof(info->version)); - bnx2x_fill_fw_str(bp, info->fw_version, sizeof(info->fw_version)); + memset(version, 0, sizeof(version)); + snprintf(version, ETHTOOL_FWVERS_LEN, " storm %d.%d.%d.%d", + BCM_5710_FW_MAJOR_VERSION, BCM_5710_FW_MINOR_VERSION, + BCM_5710_FW_REVISION_VERSION, BCM_5710_FW_ENGINEERING_VERSION); + strlcat(info->version, version, sizeof(info->version)); + + if (SHMEM2_HAS(bp, extended_dev_info_shared_addr)) { + ext_dev_info_offset = SHMEM2_RD(bp, + extended_dev_info_shared_addr); + mbi = REG_RD(bp, ext_dev_info_offset + + offsetof(struct extended_dev_info_shared_cfg, + mbi_version)); + if (mbi) { + memset(version, 0, sizeof(version)); + snprintf(version, ETHTOOL_FWVERS_LEN, "mbi %d.%d.%d ", + (mbi & 0xff000000) >> 24, + (mbi & 0x00ff0000) >> 16, + (mbi & 0x0000ff00) >> 8); + strlcpy(info->fw_version, version, + sizeof(info->fw_version)); + } + } + + memset(version, 0, sizeof(version)); + bnx2x_fill_fw_str(bp, version, ETHTOOL_FWVERS_LEN); + strlcat(info->fw_version, version, sizeof(info->fw_version)); strlcpy(info->bus_info, pci_name(bp->pdev), sizeof(info->bus_info)); } diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h index f8b810313094..d9057c8bbeef 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h @@ -1140,6 +1140,11 @@ struct shm_dev_info { /* size */ }; +struct extended_dev_info_shared_cfg { + u32 reserved[18]; + u32 mbi_version; + u32 mbi_date; +}; #if !defined(__LITTLE_ENDIAN) && !defined(__BIG_ENDIAN) #error "Missing either LITTLE_ENDIAN or BIG_ENDIAN definition." diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c index b164f705709d..3b5b47e98c73 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -9360,10 +9360,16 @@ void bnx2x_chip_cleanup(struct bnx2x *bp, int unload_mode, bool keep_link) BNX2X_ERR("Failed to schedule DEL commands for UC MACs list: %d\n", rc); - /* Remove all currently configured VLANs */ - rc = bnx2x_del_all_vlans(bp); - if (rc < 0) - BNX2X_ERR("Failed to delete all VLANs\n"); + /* The whole *vlan_obj structure may not be initialized if VLAN + * filtering offload is not supported by hardware. Currently this is + * true for all hardware covered by CHIP_IS_E1x(). 
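+ * Calling bnx2x_del_all_vlans() in that case would operate on an
+ * uninitialized object, hence the CHIP_IS_E1x() guard below.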
+ */ + if (!CHIP_IS_E1x(bp)) { + /* Remove all currently configured VLANs */ + rc = bnx2x_del_all_vlans(bp); + if (rc < 0) + BNX2X_ERR("Failed to delete all VLANs\n"); + } /* Disable LLH */ if (!CHIP_IS_E1(bp)) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 5d21c14853ac..3aa80da973d7 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -118,6 +118,7 @@ enum board_idx { NETXTREME_E_VF, NETXTREME_C_VF, NETXTREME_S_VF, + NETXTREME_E_P5_VF, }; /* indexed by enum above */ @@ -160,6 +161,7 @@ static const struct { [NETXTREME_E_VF] = { "Broadcom NetXtreme-E Ethernet Virtual Function" }, [NETXTREME_C_VF] = { "Broadcom NetXtreme-C Ethernet Virtual Function" }, [NETXTREME_S_VF] = { "Broadcom NetXtreme-S Ethernet Virtual Function" }, + [NETXTREME_E_P5_VF] = { "Broadcom BCM5750X NetXtreme-E Ethernet Virtual Function" }, }; static const struct pci_device_id bnxt_pci_tbl[] = { @@ -210,6 +212,7 @@ static const struct pci_device_id bnxt_pci_tbl[] = { { PCI_VDEVICE(BROADCOM, 0x16dc), .driver_data = NETXTREME_E_VF }, { PCI_VDEVICE(BROADCOM, 0x16e1), .driver_data = NETXTREME_C_VF }, { PCI_VDEVICE(BROADCOM, 0x16e5), .driver_data = NETXTREME_C_VF }, + { PCI_VDEVICE(BROADCOM, 0x1807), .driver_data = NETXTREME_E_P5_VF }, { PCI_VDEVICE(BROADCOM, 0xd800), .driver_data = NETXTREME_S_VF }, #endif { 0 } @@ -237,7 +240,7 @@ static struct workqueue_struct *bnxt_pf_wq; static bool bnxt_vf_pciid(enum board_idx idx) { return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF || - idx == NETXTREME_S_VF); + idx == NETXTREME_S_VF || idx == NETXTREME_E_P5_VF); } #define DB_CP_REARM_FLAGS (DB_KEY_CP | DB_IDX_VALID) @@ -1809,7 +1812,7 @@ static int bnxt_hwrm_handler(struct bnxt *bp, struct tx_cmp *txcmp) case CMPL_BASE_TYPE_HWRM_DONE: seq_id = le16_to_cpu(h_cmpl->sequence_id); if (seq_id == bp->hwrm_intr_seq_id) - bp->hwrm_intr_seq_id = HWRM_SEQ_ID_INVALID; + bp->hwrm_intr_seq_id = (u16)~bp->hwrm_intr_seq_id; else netdev_err(bp->dev, "Invalid hwrm seq id %d\n", seq_id); break; @@ -2372,7 +2375,11 @@ static void bnxt_free_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem) rmem->pg_arr[i] = NULL; } if (rmem->pg_tbl) { - dma_free_coherent(&pdev->dev, rmem->nr_pages * 8, + size_t pg_tbl_size = rmem->nr_pages * 8; + + if (rmem->flags & BNXT_RMEM_USE_FULL_PAGE_FLAG) + pg_tbl_size = rmem->page_size; + dma_free_coherent(&pdev->dev, pg_tbl_size, rmem->pg_tbl, rmem->pg_tbl_map); rmem->pg_tbl = NULL; } @@ -2390,9 +2397,12 @@ static int bnxt_alloc_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem) if (rmem->flags & (BNXT_RMEM_VALID_PTE_FLAG | BNXT_RMEM_RING_PTE_FLAG)) valid_bit = PTU_PTE_VALID; - if (rmem->nr_pages > 1) { - rmem->pg_tbl = dma_alloc_coherent(&pdev->dev, - rmem->nr_pages * 8, + if ((rmem->nr_pages > 1 || rmem->depth > 0) && !rmem->pg_tbl) { + size_t pg_tbl_size = rmem->nr_pages * 8; + + if (rmem->flags & BNXT_RMEM_USE_FULL_PAGE_FLAG) + pg_tbl_size = rmem->page_size; + rmem->pg_tbl = dma_alloc_coherent(&pdev->dev, pg_tbl_size, &rmem->pg_tbl_map, GFP_KERNEL); if (!rmem->pg_tbl) @@ -2409,7 +2419,7 @@ static int bnxt_alloc_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem) if (!rmem->pg_arr[i]) return -ENOMEM; - if (rmem->nr_pages > 1) { + if (rmem->nr_pages > 1 || rmem->depth > 0) { if (i == rmem->nr_pages - 2 && (rmem->flags & BNXT_RMEM_RING_PTE_FLAG)) extra_bits |= PTU_PTE_NEXT_TO_LAST; @@ -3276,6 +3286,27 @@ static void bnxt_free_hwrm_resources(struct bnxt *bp) bp->hwrm_cmd_resp_dma_addr); 
bp->hwrm_cmd_resp_addr = NULL; } + + if (bp->hwrm_cmd_kong_resp_addr) { + dma_free_coherent(&pdev->dev, PAGE_SIZE, + bp->hwrm_cmd_kong_resp_addr, + bp->hwrm_cmd_kong_resp_dma_addr); + bp->hwrm_cmd_kong_resp_addr = NULL; + } +} + +static int bnxt_alloc_kong_hwrm_resources(struct bnxt *bp) +{ + struct pci_dev *pdev = bp->pdev; + + bp->hwrm_cmd_kong_resp_addr = + dma_alloc_coherent(&pdev->dev, PAGE_SIZE, + &bp->hwrm_cmd_kong_resp_dma_addr, + GFP_KERNEL); + if (!bp->hwrm_cmd_kong_resp_addr) + return -ENOMEM; + + return 0; } static int bnxt_alloc_hwrm_resources(struct bnxt *bp) @@ -3317,9 +3348,8 @@ static int bnxt_alloc_hwrm_short_cmd_req(struct bnxt *bp) return 0; } -static void bnxt_free_stats(struct bnxt *bp) +static void bnxt_free_port_stats(struct bnxt *bp) { - u32 size, i; struct pci_dev *pdev = bp->pdev; bp->flags &= ~BNXT_FLAG_PORT_STATS; @@ -3345,6 +3375,12 @@ static void bnxt_free_stats(struct bnxt *bp) bp->hw_rx_port_stats_ext_map); bp->hw_rx_port_stats_ext = NULL; } +} + +static void bnxt_free_ring_stats(struct bnxt *bp) +{ + struct pci_dev *pdev = bp->pdev; + int size, i; if (!bp->bnapi) return; @@ -3384,6 +3420,9 @@ static int bnxt_alloc_stats(struct bnxt *bp) } if (BNXT_PF(bp) && bp->chip_num != CHIP_NUM_58700) { + if (bp->hw_rx_port_stats) + goto alloc_ext_stats; + bp->hw_port_stats_size = sizeof(struct rx_port_stats) + sizeof(struct tx_port_stats) + 1024; @@ -3400,11 +3439,15 @@ static int bnxt_alloc_stats(struct bnxt *bp) sizeof(struct rx_port_stats) + 512; bp->flags |= BNXT_FLAG_PORT_STATS; +alloc_ext_stats: /* Display extended statistics only if FW supports it */ if (bp->hwrm_spec_code < 0x10804 || bp->hwrm_spec_code == 0x10900) return 0; + if (bp->hw_rx_port_stats_ext) + goto alloc_tx_ext_stats; + bp->hw_rx_port_stats_ext = dma_zalloc_coherent(&pdev->dev, sizeof(struct rx_port_stats_ext), @@ -3413,6 +3456,10 @@ static int bnxt_alloc_stats(struct bnxt *bp) if (!bp->hw_rx_port_stats_ext) return 0; +alloc_tx_ext_stats: + if (bp->hw_tx_port_stats_ext) + return 0; + if (bp->hwrm_spec_code >= 0x10902) { bp->hw_tx_port_stats_ext = dma_zalloc_coherent(&pdev->dev, @@ -3520,7 +3567,7 @@ static void bnxt_free_mem(struct bnxt *bp, bool irq_re_init) bnxt_free_cp_rings(bp); bnxt_free_ntp_fltrs(bp, irq_re_init); if (irq_re_init) { - bnxt_free_stats(bp); + bnxt_free_ring_stats(bp); bnxt_free_ring_grps(bp); bnxt_free_vnics(bp); kfree(bp->tx_ring_map); @@ -3721,7 +3768,10 @@ void bnxt_hwrm_cmd_hdr_init(struct bnxt *bp, void *request, u16 req_type, req->req_type = cpu_to_le16(req_type); req->cmpl_ring = cpu_to_le16(cmpl_ring); req->target_id = cpu_to_le16(target_id); - req->resp_addr = cpu_to_le64(bp->hwrm_cmd_resp_dma_addr); + if (bnxt_kong_hwrm_message(bp, req)) + req->resp_addr = cpu_to_le64(bp->hwrm_cmd_kong_resp_dma_addr); + else + req->resp_addr = cpu_to_le64(bp->hwrm_cmd_resp_dma_addr); } static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, @@ -3736,11 +3786,10 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, struct hwrm_err_output *resp = bp->hwrm_cmd_resp_addr; u16 max_req_len = BNXT_HWRM_MAX_REQ_LEN; struct hwrm_short_input short_input = {0}; - - req->seq_id = cpu_to_le16(bp->hwrm_cmd_seq++); - memset(resp, 0, PAGE_SIZE); - cp_ring_id = le16_to_cpu(req->cmpl_ring); - intr_process = (cp_ring_id == INVALID_HW_RING_ID) ? 
0 : 1; + u32 doorbell_offset = BNXT_GRCPF_REG_CHIMP_COMM_TRIGGER; + u8 *resp_addr = (u8 *)bp->hwrm_cmd_resp_addr; + u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM; + u16 dst = BNXT_HWRM_CHNL_CHIMP; if (msg_len > BNXT_HWRM_MAX_REQ_LEN) { if (msg_len > bp->hwrm_max_ext_req_len || @@ -3748,6 +3797,23 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, return -EINVAL; } + if (bnxt_hwrm_kong_chnl(bp, req)) { + dst = BNXT_HWRM_CHNL_KONG; + bar_offset = BNXT_GRCPF_REG_KONG_COMM; + doorbell_offset = BNXT_GRCPF_REG_KONG_COMM_TRIGGER; + resp = bp->hwrm_cmd_kong_resp_addr; + resp_addr = (u8 *)bp->hwrm_cmd_kong_resp_addr; + } + + memset(resp, 0, PAGE_SIZE); + cp_ring_id = le16_to_cpu(req->cmpl_ring); + intr_process = (cp_ring_id == INVALID_HW_RING_ID) ? 0 : 1; + + req->seq_id = cpu_to_le16(bnxt_get_hwrm_seq_id(bp, dst)); + /* currently supports only one outstanding message */ + if (intr_process) + bp->hwrm_intr_seq_id = le16_to_cpu(req->seq_id); + if ((bp->fw_cap & BNXT_FW_CAP_SHORT_CMD) || msg_len > BNXT_HWRM_MAX_REQ_LEN) { void *short_cmd_req = bp->hwrm_short_cmd_req_addr; @@ -3781,17 +3847,13 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, } /* Write request msg to hwrm channel */ - __iowrite32_copy(bp->bar0, data, msg_len / 4); + __iowrite32_copy(bp->bar0 + bar_offset, data, msg_len / 4); for (i = msg_len; i < max_req_len; i += 4) - writel(0, bp->bar0 + i); - - /* currently supports only one outstanding message */ - if (intr_process) - bp->hwrm_intr_seq_id = le16_to_cpu(req->seq_id); + writel(0, bp->bar0 + bar_offset + i); /* Ring channel doorbell */ - writel(1, bp->bar0 + 0x100); + writel(1, bp->bar0 + doorbell_offset); if (!timeout) timeout = DFLT_HWRM_CMD_TIMEOUT; @@ -3806,10 +3868,13 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, tmo_count = HWRM_SHORT_TIMEOUT_COUNTER; timeout = timeout - HWRM_SHORT_MIN_TIMEOUT * HWRM_SHORT_TIMEOUT_COUNTER; tmo_count += DIV_ROUND_UP(timeout, HWRM_MIN_TIMEOUT); - resp_len = bp->hwrm_cmd_resp_addr + HWRM_RESP_LEN_OFFSET; + resp_len = (__le32 *)(resp_addr + HWRM_RESP_LEN_OFFSET); + if (intr_process) { + u16 seq_id = bp->hwrm_intr_seq_id; + /* Wait until hwrm response cmpl interrupt is processed */ - while (bp->hwrm_intr_seq_id != HWRM_SEQ_ID_INVALID && + while (bp->hwrm_intr_seq_id != (u16)~seq_id && i++ < tmo_count) { /* on first few passes, just barely sleep */ if (i < HWRM_SHORT_TIMEOUT_COUNTER) @@ -3820,14 +3885,14 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, HWRM_MAX_TIMEOUT); } - if (bp->hwrm_intr_seq_id != HWRM_SEQ_ID_INVALID) { + if (bp->hwrm_intr_seq_id != (u16)~seq_id) { netdev_err(bp->dev, "Resp cmpl intr err msg: 0x%x\n", le16_to_cpu(req->req_type)); return -1; } len = (le32_to_cpu(*resp_len) & HWRM_RESP_LEN_MASK) >> HWRM_RESP_LEN_SFT; - valid = bp->hwrm_cmd_resp_addr + len - 1; + valid = resp_addr + len - 1; } else { int j; @@ -3855,7 +3920,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len, } /* Last byte of resp contains valid bit */ - valid = bp->hwrm_cmd_resp_addr + len - 1; + valid = resp_addr + len - 1; for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; j++) { /* make sure we read from updated DMA memory */ dma_rmb(); @@ -3990,6 +4055,10 @@ static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp) cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_VF_REQ_FWD); } + if (bp->fw_cap & BNXT_FW_CAP_OVS_64BIT_HANDLE) + req.flags |= cpu_to_le32( + FUNC_DRV_RGTR_REQ_FLAGS_FLOW_HANDLE_64BIT_MODE); + mutex_lock(&bp->hwrm_cmd_lock); rc = 
_hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); if (rc) @@ -4118,12 +4187,11 @@ static int bnxt_hwrm_cfa_ntuple_filter_free(struct bnxt *bp, static int bnxt_hwrm_cfa_ntuple_filter_alloc(struct bnxt *bp, struct bnxt_ntuple_filter *fltr) { - int rc = 0; + struct bnxt_vnic_info *vnic = &bp->vnic_info[fltr->rxq + 1]; struct hwrm_cfa_ntuple_filter_alloc_input req = {0}; - struct hwrm_cfa_ntuple_filter_alloc_output *resp = - bp->hwrm_cmd_resp_addr; + struct hwrm_cfa_ntuple_filter_alloc_output *resp; struct flow_keys *keys = &fltr->fkeys; - struct bnxt_vnic_info *vnic = &bp->vnic_info[fltr->rxq + 1]; + int rc = 0; bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_CFA_NTUPLE_FILTER_ALLOC, -1, -1); req.l2_filter_id = bp->vnic_info[0].fw_l2_filter_id[fltr->l2_fltr_idx]; @@ -4169,8 +4237,10 @@ static int bnxt_hwrm_cfa_ntuple_filter_alloc(struct bnxt *bp, req.dst_id = cpu_to_le16(vnic->fw_vnic_id); mutex_lock(&bp->hwrm_cmd_lock); rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); - if (!rc) + if (!rc) { + resp = bnxt_get_hwrm_resp_addr(bp, &req); fltr->filter_id = resp->ntuple_filter_id; + } mutex_unlock(&bp->hwrm_cmd_lock); return rc; } @@ -5161,7 +5231,6 @@ static int bnxt_hwrm_get_rings(struct bnxt *bp) hw_resc->resv_vnics = le16_to_cpu(resp->alloc_vnics); cp = le16_to_cpu(resp->alloc_cmpl_rings); stats = le16_to_cpu(resp->alloc_stat_ctx); - cp = min_t(u16, cp, stats); hw_resc->resv_irqs = cp; if (bp->flags & BNXT_FLAG_CHIP_P5) { int rx = hw_resc->resv_rx_rings; @@ -5180,6 +5249,7 @@ static int bnxt_hwrm_get_rings(struct bnxt *bp) hw_resc->resv_hw_ring_grps = rx; } hw_resc->resv_cp_rings = cp; + hw_resc->resv_stat_ctxs = stats; } mutex_unlock(&bp->hwrm_cmd_lock); return 0; @@ -5209,7 +5279,7 @@ static bool bnxt_rfs_supported(struct bnxt *bp); static void __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req, int tx_rings, int rx_rings, int ring_grps, - int cp_rings, int vnics) + int cp_rings, int stats, int vnics) { u32 enables = 0; @@ -5251,7 +5321,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req, req->num_rsscos_ctxs = cpu_to_le16(ring_grps + 1); } - req->num_stat_ctxs = req->num_cmpl_rings; + req->num_stat_ctxs = cpu_to_le16(stats); req->num_vnics = cpu_to_le16(vnics); } req->enables = cpu_to_le32(enables); @@ -5261,7 +5331,7 @@ static void __bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, struct hwrm_func_vf_cfg_input *req, int tx_rings, int rx_rings, int ring_grps, int cp_rings, - int vnics) + int stats, int vnics) { u32 enables = 0; @@ -5294,7 +5364,7 @@ __bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, req->num_hw_ring_grps = cpu_to_le16(ring_grps); req->num_rsscos_ctxs = cpu_to_le16(BNXT_VF_MAX_RSS_CTX); } - req->num_stat_ctxs = req->num_cmpl_rings; + req->num_stat_ctxs = cpu_to_le16(stats); req->num_vnics = cpu_to_le16(vnics); req->enables = cpu_to_le32(enables); @@ -5302,13 +5372,13 @@ __bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, static int bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings, - int ring_grps, int cp_rings, int vnics) + int ring_grps, int cp_rings, int stats, int vnics) { struct hwrm_func_cfg_input req = {0}; int rc; __bnxt_hwrm_reserve_pf_rings(bp, &req, tx_rings, rx_rings, ring_grps, - cp_rings, vnics); + cp_rings, stats, vnics); if (!req.enables) return 0; @@ -5325,7 +5395,7 @@ bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings, static int bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, - int ring_grps, int cp_rings, int vnics) + int 
ring_grps, int cp_rings, int stats, int vnics) { struct hwrm_func_vf_cfg_input req = {0}; int rc; @@ -5336,7 +5406,7 @@ bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, } __bnxt_hwrm_reserve_vf_rings(bp, &req, tx_rings, rx_rings, ring_grps, - cp_rings, vnics); + cp_rings, stats, vnics); rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); if (rc) return -ENOMEM; @@ -5346,15 +5416,17 @@ bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, } static int bnxt_hwrm_reserve_rings(struct bnxt *bp, int tx, int rx, int grp, - int cp, int vnic) + int cp, int stat, int vnic) { if (BNXT_PF(bp)) - return bnxt_hwrm_reserve_pf_rings(bp, tx, rx, grp, cp, vnic); + return bnxt_hwrm_reserve_pf_rings(bp, tx, rx, grp, cp, stat, + vnic); else - return bnxt_hwrm_reserve_vf_rings(bp, tx, rx, grp, cp, vnic); + return bnxt_hwrm_reserve_vf_rings(bp, tx, rx, grp, cp, stat, + vnic); } -static int bnxt_nq_rings_in_use(struct bnxt *bp) +int bnxt_nq_rings_in_use(struct bnxt *bp) { int cp = bp->cp_nr_rings; int ulp_msix, ulp_base; @@ -5380,12 +5452,17 @@ static int bnxt_cp_rings_in_use(struct bnxt *bp) return cp; } +static int bnxt_get_func_stat_ctxs(struct bnxt *bp) +{ + return bp->cp_nr_rings + bnxt_get_ulp_stat_ctxs(bp); +} + static bool bnxt_need_reserve_rings(struct bnxt *bp) { struct bnxt_hw_resc *hw_resc = &bp->hw_resc; int cp = bnxt_cp_rings_in_use(bp); int nq = bnxt_nq_rings_in_use(bp); - int rx = bp->rx_nr_rings; + int rx = bp->rx_nr_rings, stat; int vnic = 1, grp = rx; if (bp->hwrm_spec_code < 0x10601) @@ -5398,9 +5475,11 @@ static bool bnxt_need_reserve_rings(struct bnxt *bp) vnic = rx + 1; if (bp->flags & BNXT_FLAG_AGG_RINGS) rx <<= 1; + stat = bnxt_get_func_stat_ctxs(bp); if (BNXT_NEW_RM(bp) && (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp || hw_resc->resv_irqs < nq || hw_resc->resv_vnics != vnic || + hw_resc->resv_stat_ctxs != stat || (hw_resc->resv_hw_ring_grps != grp && !(bp->flags & BNXT_FLAG_CHIP_P5)))) return true; @@ -5414,8 +5493,8 @@ static int __bnxt_reserve_rings(struct bnxt *bp) int tx = bp->tx_nr_rings; int rx = bp->rx_nr_rings; int grp, rx_rings, rc; + int vnic = 1, stat; bool sh = false; - int vnic = 1; if (!bnxt_need_reserve_rings(bp)) return 0; @@ -5427,8 +5506,9 @@ static int __bnxt_reserve_rings(struct bnxt *bp) if (bp->flags & BNXT_FLAG_AGG_RINGS) rx <<= 1; grp = bp->rx_nr_rings; + stat = bnxt_get_func_stat_ctxs(bp); - rc = bnxt_hwrm_reserve_rings(bp, tx, rx, grp, cp, vnic); + rc = bnxt_hwrm_reserve_rings(bp, tx, rx, grp, cp, stat, vnic); if (rc) return rc; @@ -5438,6 +5518,7 @@ static int __bnxt_reserve_rings(struct bnxt *bp) cp = hw_resc->resv_irqs; grp = hw_resc->resv_hw_ring_grps; vnic = hw_resc->resv_vnics; + stat = hw_resc->resv_stat_ctxs; } rx_rings = rx; @@ -5456,6 +5537,10 @@ static int __bnxt_reserve_rings(struct bnxt *bp) } } rx_rings = min_t(int, rx_rings, grp); + cp = min_t(int, cp, bp->cp_nr_rings); + if (stat > bnxt_get_ulp_stat_ctxs(bp)) + stat -= bnxt_get_ulp_stat_ctxs(bp); + cp = min_t(int, cp, stat); rc = bnxt_trim_rings(bp, &rx_rings, &tx, cp, sh); if (bp->flags & BNXT_FLAG_AGG_RINGS) rx = rx_rings << 1; @@ -5464,14 +5549,15 @@ static int __bnxt_reserve_rings(struct bnxt *bp) bp->rx_nr_rings = rx_rings; bp->cp_nr_rings = cp; - if (!tx || !rx || !cp || !grp || !vnic) + if (!tx || !rx || !cp || !grp || !vnic || !stat) return -ENOMEM; return rc; } static int bnxt_hwrm_check_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, - int ring_grps, int cp_rings, int vnics) + int ring_grps, int cp_rings, int 
stats, + int vnics) { struct hwrm_func_vf_cfg_input req = {0}; u32 flags; @@ -5481,7 +5567,7 @@ static int bnxt_hwrm_check_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, return 0; __bnxt_hwrm_reserve_vf_rings(bp, &req, tx_rings, rx_rings, ring_grps, - cp_rings, vnics); + cp_rings, stats, vnics); flags = FUNC_VF_CFG_REQ_FLAGS_TX_ASSETS_TEST | FUNC_VF_CFG_REQ_FLAGS_RX_ASSETS_TEST | FUNC_VF_CFG_REQ_FLAGS_CMPL_ASSETS_TEST | @@ -5499,14 +5585,15 @@ static int bnxt_hwrm_check_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, } static int bnxt_hwrm_check_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings, - int ring_grps, int cp_rings, int vnics) + int ring_grps, int cp_rings, int stats, + int vnics) { struct hwrm_func_cfg_input req = {0}; u32 flags; int rc; __bnxt_hwrm_reserve_pf_rings(bp, &req, tx_rings, rx_rings, ring_grps, - cp_rings, vnics); + cp_rings, stats, vnics); flags = FUNC_CFG_REQ_FLAGS_TX_ASSETS_TEST; if (BNXT_NEW_RM(bp)) { flags |= FUNC_CFG_REQ_FLAGS_RX_ASSETS_TEST | @@ -5527,17 +5614,19 @@ static int bnxt_hwrm_check_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings, } static int bnxt_hwrm_check_rings(struct bnxt *bp, int tx_rings, int rx_rings, - int ring_grps, int cp_rings, int vnics) + int ring_grps, int cp_rings, int stats, + int vnics) { if (bp->hwrm_spec_code < 0x10801) return 0; if (BNXT_PF(bp)) return bnxt_hwrm_check_pf_rings(bp, tx_rings, rx_rings, - ring_grps, cp_rings, vnics); + ring_grps, cp_rings, stats, + vnics); return bnxt_hwrm_check_vf_rings(bp, tx_rings, rx_rings, ring_grps, - cp_rings, vnics); + cp_rings, stats, vnics); } static void bnxt_hwrm_coal_params_qcaps(struct bnxt *bp) @@ -5962,8 +6051,11 @@ static void bnxt_hwrm_set_pg_attr(struct bnxt_ring_mem_info *rmem, u8 *pg_attr, pg_size = 2 << 4; *pg_attr = pg_size; - if (rmem->nr_pages > 1) { - *pg_attr |= 1; + if (rmem->depth >= 1) { + if (rmem->depth == 2) + *pg_attr |= 2; + else + *pg_attr |= 1; *pg_dir = cpu_to_le64(rmem->pg_tbl_map); } else { *pg_dir = cpu_to_le64(rmem->dma_arr[0]); @@ -6040,6 +6132,22 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables) &req.stat_pg_size_stat_lvl, &req.stat_page_dir); } + if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV) { + ctx_pg = &ctx->mrav_mem; + req.mrav_num_entries = cpu_to_le32(ctx_pg->entries); + req.mrav_entry_size = cpu_to_le16(ctx->mrav_entry_size); + bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem, + &req.mrav_pg_size_mrav_lvl, + &req.mrav_page_dir); + } + if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM) { + ctx_pg = &ctx->tim_mem; + req.tim_num_entries = cpu_to_le32(ctx_pg->entries); + req.tim_entry_size = cpu_to_le16(ctx->tim_entry_size); + bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem, + &req.tim_pg_size_tim_lvl, + &req.tim_page_dir); + } for (i = 0, num_entries = &req.tqm_sp_num_entries, pg_attr = &req.tqm_sp_pg_size_tqm_sp_lvl, pg_dir = &req.tqm_sp_page_dir, @@ -6060,25 +6168,104 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables) } static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, - struct bnxt_ctx_pg_info *ctx_pg, u32 mem_size) + struct bnxt_ctx_pg_info *ctx_pg) { struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem; - if (!mem_size) - return 0; - - rmem->nr_pages = DIV_ROUND_UP(mem_size, BNXT_PAGE_SIZE); - if (rmem->nr_pages > MAX_CTX_PAGES) { - rmem->nr_pages = 0; - return -EINVAL; - } rmem->page_size = BNXT_PAGE_SIZE; rmem->pg_arr = ctx_pg->ctx_pg_arr; rmem->dma_arr = ctx_pg->ctx_dma_arr; rmem->flags = BNXT_RMEM_VALID_PTE_FLAG; + if (rmem->depth >= 1) + rmem->flags |= 
BNXT_RMEM_USE_FULL_PAGE_FLAG; return bnxt_alloc_ring(bp, rmem); } +static int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp, + struct bnxt_ctx_pg_info *ctx_pg, u32 mem_size, + u8 depth) +{ + struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem; + int rc; + + if (!mem_size) + return 0; + + ctx_pg->nr_pages = DIV_ROUND_UP(mem_size, BNXT_PAGE_SIZE); + if (ctx_pg->nr_pages > MAX_CTX_TOTAL_PAGES) { + ctx_pg->nr_pages = 0; + return -EINVAL; + } + if (ctx_pg->nr_pages > MAX_CTX_PAGES || depth > 1) { + int nr_tbls, i; + + rmem->depth = 2; + ctx_pg->ctx_pg_tbl = kcalloc(MAX_CTX_PAGES, sizeof(ctx_pg), + GFP_KERNEL); + if (!ctx_pg->ctx_pg_tbl) + return -ENOMEM; + nr_tbls = DIV_ROUND_UP(ctx_pg->nr_pages, MAX_CTX_PAGES); + rmem->nr_pages = nr_tbls; + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg); + if (rc) + return rc; + for (i = 0; i < nr_tbls; i++) { + struct bnxt_ctx_pg_info *pg_tbl; + + pg_tbl = kzalloc(sizeof(*pg_tbl), GFP_KERNEL); + if (!pg_tbl) + return -ENOMEM; + ctx_pg->ctx_pg_tbl[i] = pg_tbl; + rmem = &pg_tbl->ring_mem; + rmem->pg_tbl = ctx_pg->ctx_pg_arr[i]; + rmem->pg_tbl_map = ctx_pg->ctx_dma_arr[i]; + rmem->depth = 1; + rmem->nr_pages = MAX_CTX_PAGES; + if (i == (nr_tbls - 1)) + rmem->nr_pages = ctx_pg->nr_pages % + MAX_CTX_PAGES; + rc = bnxt_alloc_ctx_mem_blk(bp, pg_tbl); + if (rc) + break; + } + } else { + rmem->nr_pages = DIV_ROUND_UP(mem_size, BNXT_PAGE_SIZE); + if (rmem->nr_pages > 1 || depth) + rmem->depth = 1; + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg); + } + return rc; +} + +static void bnxt_free_ctx_pg_tbls(struct bnxt *bp, + struct bnxt_ctx_pg_info *ctx_pg) +{ + struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem; + + if (rmem->depth > 1 || ctx_pg->nr_pages > MAX_CTX_PAGES || + ctx_pg->ctx_pg_tbl) { + int i, nr_tbls = rmem->nr_pages; + + for (i = 0; i < nr_tbls; i++) { + struct bnxt_ctx_pg_info *pg_tbl; + struct bnxt_ring_mem_info *rmem2; + + pg_tbl = ctx_pg->ctx_pg_tbl[i]; + if (!pg_tbl) + continue; + rmem2 = &pg_tbl->ring_mem; + bnxt_free_ring(bp, rmem2); + ctx_pg->ctx_pg_arr[i] = NULL; + kfree(pg_tbl); + ctx_pg->ctx_pg_tbl[i] = NULL; + } + kfree(ctx_pg->ctx_pg_tbl); + ctx_pg->ctx_pg_tbl = NULL; + } + bnxt_free_ring(bp, rmem); + ctx_pg->nr_pages = 0; +} + static void bnxt_free_ctx_mem(struct bnxt *bp) { struct bnxt_ctx_mem_info *ctx = bp->ctx; @@ -6089,16 +6276,18 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) if (ctx->tqm_mem[0]) { for (i = 0; i < bp->max_q + 1; i++) - bnxt_free_ring(bp, &ctx->tqm_mem[i]->ring_mem); + bnxt_free_ctx_pg_tbls(bp, ctx->tqm_mem[i]); kfree(ctx->tqm_mem[0]); ctx->tqm_mem[0] = NULL; } - bnxt_free_ring(bp, &ctx->stat_mem.ring_mem); - bnxt_free_ring(bp, &ctx->vnic_mem.ring_mem); - bnxt_free_ring(bp, &ctx->cq_mem.ring_mem); - bnxt_free_ring(bp, &ctx->srq_mem.ring_mem); - bnxt_free_ring(bp, &ctx->qp_mem.ring_mem); + bnxt_free_ctx_pg_tbls(bp, &ctx->tim_mem); + bnxt_free_ctx_pg_tbls(bp, &ctx->mrav_mem); + bnxt_free_ctx_pg_tbls(bp, &ctx->stat_mem); + bnxt_free_ctx_pg_tbls(bp, &ctx->vnic_mem); + bnxt_free_ctx_pg_tbls(bp, &ctx->cq_mem); + bnxt_free_ctx_pg_tbls(bp, &ctx->srq_mem); + bnxt_free_ctx_pg_tbls(bp, &ctx->qp_mem); ctx->flags &= ~BNXT_CTX_FLAG_INITED; } @@ -6107,6 +6296,9 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp) struct bnxt_ctx_pg_info *ctx_pg; struct bnxt_ctx_mem_info *ctx; u32 mem_size, ena, entries; + u32 extra_srqs = 0; + u32 extra_qps = 0; + u8 pg_lvl = 1; int i, rc; rc = bnxt_hwrm_func_backing_store_qcaps(bp); @@ -6119,24 +6311,31 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp) if (!ctx || (ctx->flags & BNXT_CTX_FLAG_INITED)) return 0; + if 
(bp->flags & BNXT_FLAG_ROCE_CAP) { + pg_lvl = 2; + extra_qps = 65536; + extra_srqs = 8192; + } + ctx_pg = &ctx->qp_mem; - ctx_pg->entries = ctx->qp_min_qp1_entries + ctx->qp_max_l2_entries; + ctx_pg->entries = ctx->qp_min_qp1_entries + ctx->qp_max_l2_entries + + extra_qps; mem_size = ctx->qp_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size); + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl); if (rc) return rc; ctx_pg = &ctx->srq_mem; - ctx_pg->entries = ctx->srq_max_l2_entries; + ctx_pg->entries = ctx->srq_max_l2_entries + extra_srqs; mem_size = ctx->srq_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size); + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl); if (rc) return rc; ctx_pg = &ctx->cq_mem; - ctx_pg->entries = ctx->cq_max_l2_entries; + ctx_pg->entries = ctx->cq_max_l2_entries + extra_qps * 2; mem_size = ctx->cq_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size); + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl); if (rc) return rc; @@ -6144,26 +6343,47 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = ctx->vnic_max_vnic_entries + ctx->vnic_max_ring_table_entries; mem_size = ctx->vnic_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size); + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1); if (rc) return rc; ctx_pg = &ctx->stat_mem; ctx_pg->entries = ctx->stat_max_entries; mem_size = ctx->stat_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size); + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1); + if (rc) + return rc; + + ena = 0; + if (!(bp->flags & BNXT_FLAG_ROCE_CAP)) + goto skip_rdma; + + ctx_pg = &ctx->mrav_mem; + ctx_pg->entries = extra_qps * 4; + mem_size = ctx->mrav_entry_size * ctx_pg->entries; + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 2); if (rc) return rc; + ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV; - entries = ctx->qp_max_l2_entries; + ctx_pg = &ctx->tim_mem; + ctx_pg->entries = ctx->qp_mem.entries; + mem_size = ctx->tim_entry_size * ctx_pg->entries; + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1); + if (rc) + return rc; + ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM; + +skip_rdma: + entries = ctx->qp_max_l2_entries + extra_qps; entries = roundup(entries, ctx->tqm_entries_multiple); entries = clamp_t(u32, entries, ctx->tqm_min_entries_per_ring, ctx->tqm_max_entries_per_ring); - for (i = 0, ena = 0; i < bp->max_q + 1; i++) { + for (i = 0; i < bp->max_q + 1; i++) { ctx_pg = ctx->tqm_mem[i]; ctx_pg->entries = entries; mem_size = ctx->tqm_entry_size * entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size); + rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1); if (rc) return rc; ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP << i; @@ -6190,7 +6410,8 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp, bool all) req.fid = cpu_to_le16(0xffff); mutex_lock(&bp->hwrm_cmd_lock); - rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); + rc = _hwrm_send_message_silent(bp, &req, sizeof(req), + HWRM_CMD_TIMEOUT); if (rc) { rc = -EIO; goto hwrm_func_resc_qcaps_exit; @@ -6220,7 +6441,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp, bool all) if (bp->flags & BNXT_FLAG_CHIP_P5) { u16 max_msix = le16_to_cpu(resp->max_msix); - hw_resc->max_irqs = min_t(u16, hw_resc->max_irqs, max_msix); + hw_resc->max_nqs = max_msix; hw_resc->max_hw_ring_grps = hw_resc->max_rx_rings; } @@ -6442,6 +6663,13 @@ static int bnxt_hwrm_ver_get(struct bnxt *bp) 
(dev_caps_cfg & VER_GET_RESP_DEV_CAPS_CFG_SHORT_CMD_REQUIRED)) bp->fw_cap |= BNXT_FW_CAP_SHORT_CMD; + if (dev_caps_cfg & VER_GET_RESP_DEV_CAPS_CFG_KONG_MB_CHNL_SUPPORTED) + bp->fw_cap |= BNXT_FW_CAP_KONG_MB_CHNL; + + if (dev_caps_cfg & + VER_GET_RESP_DEV_CAPS_CFG_FLOW_HANDLE_64BIT_SUPPORTED) + bp->fw_cap |= BNXT_FW_CAP_OVS_64BIT_HANDLE; + hwrm_ver_get_exit: mutex_unlock(&bp->hwrm_cmd_lock); return rc; @@ -6488,6 +6716,7 @@ static int bnxt_hwrm_port_qstats(struct bnxt *bp) static int bnxt_hwrm_port_qstats_ext(struct bnxt *bp) { struct hwrm_port_qstats_ext_output *resp = bp->hwrm_cmd_resp_addr; + struct hwrm_queue_pri2cos_qcfg_input req2 = {0}; struct hwrm_port_qstats_ext_input req = {0}; struct bnxt_pf_info *pf = &bp->pf; int rc; @@ -6510,6 +6739,34 @@ static int bnxt_hwrm_port_qstats_ext(struct bnxt *bp) bp->fw_rx_stats_ext_size = 0; bp->fw_tx_stats_ext_size = 0; } + if (bp->fw_tx_stats_ext_size <= + offsetof(struct tx_port_stats_ext, pfc_pri0_tx_duration_us) / 8) { + mutex_unlock(&bp->hwrm_cmd_lock); + bp->pri2cos_valid = 0; + return rc; + } + + bnxt_hwrm_cmd_hdr_init(bp, &req2, HWRM_QUEUE_PRI2COS_QCFG, -1, -1); + req2.flags = cpu_to_le32(QUEUE_PRI2COS_QCFG_REQ_FLAGS_IVLAN); + + rc = _hwrm_send_message(bp, &req2, sizeof(req2), HWRM_CMD_TIMEOUT); + if (!rc) { + struct hwrm_queue_pri2cos_qcfg_output *resp2; + u8 *pri2cos; + int i, j; + + resp2 = bp->hwrm_cmd_resp_addr; + pri2cos = &resp2->pri0_cos_queue_id; + for (i = 0; i < 8; i++) { + u8 queue_id = pri2cos[i]; + + for (j = 0; j < bp->max_q; j++) { + if (bp->q_ids[j] == queue_id) + bp->pri2cos[i] = j; + } + } + bp->pri2cos_valid = 1; + } mutex_unlock(&bp->hwrm_cmd_lock); return rc; } @@ -7034,17 +7291,12 @@ unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp) return bp->hw_resc.max_stat_ctxs; } -void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max) -{ - bp->hw_resc.max_stat_ctxs = max; -} - unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp) { return bp->hw_resc.max_cp_rings; } -unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp) +static unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp) { unsigned int cp = bp->hw_resc.max_cp_rings; @@ -7058,6 +7310,9 @@ static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) { struct bnxt_hw_resc *hw_resc = &bp->hw_resc; + if (bp->flags & BNXT_FLAG_CHIP_P5) + return min_t(unsigned int, hw_resc->max_irqs, hw_resc->max_nqs); + return min_t(unsigned int, hw_resc->max_irqs, hw_resc->max_cp_rings); } @@ -7066,6 +7321,26 @@ static void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs) bp->hw_resc.max_irqs = max_irqs; } +unsigned int bnxt_get_avail_cp_rings_for_en(struct bnxt *bp) +{ + unsigned int cp; + + cp = bnxt_get_max_func_cp_rings_for_en(bp); + if (bp->flags & BNXT_FLAG_CHIP_P5) + return cp - bp->rx_nr_rings - bp->tx_nr_rings; + else + return cp - bp->cp_nr_rings; +} + +unsigned int bnxt_get_avail_stat_ctxs_for_en(struct bnxt *bp) +{ + unsigned int stat; + + stat = bnxt_get_max_func_stat_ctxs(bp) - bnxt_get_ulp_stat_ctxs(bp); + stat -= bp->cp_nr_rings; + return stat; +} + int bnxt_get_avail_msix(struct bnxt *bp, int num) { int max_cp = bnxt_get_max_func_cp_rings(bp); @@ -7203,23 +7478,26 @@ static void bnxt_clear_int_mode(struct bnxt *bp) int bnxt_reserve_rings(struct bnxt *bp) { int tcs = netdev_get_num_tc(bp->dev); + bool reinit_irq = false; int rc; if (!bnxt_need_reserve_rings(bp)) return 0; - rc = __bnxt_reserve_rings(bp); - if (rc) { - netdev_err(bp->dev, "ring reservation failure rc: %d\n", rc); - return rc; - } if (BNXT_NEW_RM(bp) && 
(bnxt_get_num_msix(bp) != bp->total_irqs)) { bnxt_ulp_irq_stop(bp); bnxt_clear_int_mode(bp); - rc = bnxt_init_int_mode(bp); + reinit_irq = true; + } + rc = __bnxt_reserve_rings(bp); + if (reinit_irq) { + if (!rc) + rc = bnxt_init_int_mode(bp); bnxt_ulp_irq_restart(bp, rc); - if (rc) - return rc; + } + if (rc) { + netdev_err(bp->dev, "ring reservation/IRQ init failure rc: %d\n", rc); + return rc; } if (tcs && (bp->tx_nr_rings_per_tc * tcs != bp->tx_nr_rings)) { netdev_err(bp->dev, "tx ring reservation failure\n"); @@ -7227,7 +7505,6 @@ int bnxt_reserve_rings(struct bnxt *bp) bp->tx_nr_rings_per_tc = bp->tx_nr_rings; return -ENOMEM; } - bp->num_stat_ctxs = bp->cp_nr_rings; return 0; } @@ -7821,6 +8098,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up) rc = bnxt_hwrm_func_resc_qcaps(bp, true); hw_resc->resv_cp_rings = 0; + hw_resc->resv_stat_ctxs = 0; hw_resc->resv_irqs = 0; hw_resc->resv_tx_rings = 0; hw_resc->resv_rx_rings = 0; @@ -8260,6 +8538,9 @@ static bool bnxt_drv_busy(struct bnxt *bp) test_bit(BNXT_STATE_READ_STATS, &bp->state)); } +static void bnxt_get_ring_stats(struct bnxt *bp, + struct rtnl_link_stats64 *stats); + static void __bnxt_close_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init) { @@ -8285,6 +8566,9 @@ static void __bnxt_close_nic(struct bnxt *bp, bool irq_re_init, del_timer_sync(&bp->timer); bnxt_free_skbs(bp); + /* Save ring stats before shutdown */ + if (bp->bnapi) + bnxt_get_ring_stats(bp, &bp->net_stats_prev); if (irq_re_init) { bnxt_free_irq(bp); bnxt_del_napi(bp); @@ -8346,23 +8630,12 @@ static int bnxt_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) return -EOPNOTSUPP; } -static void -bnxt_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats) +static void bnxt_get_ring_stats(struct bnxt *bp, + struct rtnl_link_stats64 *stats) { - u32 i; - struct bnxt *bp = netdev_priv(dev); + int i; - set_bit(BNXT_STATE_READ_STATS, &bp->state); - /* Make sure bnxt_close_nic() sees that we are reading stats before - * we check the BNXT_STATE_OPEN flag. - */ - smp_mb__after_atomic(); - if (!test_bit(BNXT_STATE_OPEN, &bp->state)) { - clear_bit(BNXT_STATE_READ_STATS, &bp->state); - return; - } - /* TODO check if we need to synchronize with bnxt_close path */ for (i = 0; i < bp->cp_nr_rings; i++) { struct bnxt_napi *bnapi = bp->bnapi[i]; struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring; @@ -8391,6 +8664,40 @@ bnxt_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats) stats->tx_dropped += le64_to_cpu(hw_stats->tx_drop_pkts); } +} + +static void bnxt_add_prev_stats(struct bnxt *bp, + struct rtnl_link_stats64 *stats) +{ + struct rtnl_link_stats64 *prev_stats = &bp->net_stats_prev; + + stats->rx_packets += prev_stats->rx_packets; + stats->tx_packets += prev_stats->tx_packets; + stats->rx_bytes += prev_stats->rx_bytes; + stats->tx_bytes += prev_stats->tx_bytes; + stats->rx_missed_errors += prev_stats->rx_missed_errors; + stats->multicast += prev_stats->multicast; + stats->tx_dropped += prev_stats->tx_dropped; +} + +static void +bnxt_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats) +{ + struct bnxt *bp = netdev_priv(dev); + + set_bit(BNXT_STATE_READ_STATS, &bp->state); + /* Make sure bnxt_close_nic() sees that we are reading stats before + * we check the BNXT_STATE_OPEN flag. 
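+ * (The counterpart is the close path, which clears BNXT_STATE_OPEN and
+ * then waits for bnxt_drv_busy(), the helper above that tests
+ * BNXT_STATE_READ_STATS, to go idle; that ordering guarantees at least
+ * one side observes the other's update.)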
+ */ + smp_mb__after_atomic(); + if (!test_bit(BNXT_STATE_OPEN, &bp->state)) { + clear_bit(BNXT_STATE_READ_STATS, &bp->state); + *stats = bp->net_stats_prev; + return; + } + + bnxt_get_ring_stats(bp, stats); + bnxt_add_prev_stats(bp, stats); if (bp->flags & BNXT_FLAG_PORT_STATS) { struct rx_port_stats *rx = bp->hw_rx_port_stats; @@ -8626,12 +8933,12 @@ static bool bnxt_rfs_capable(struct bnxt *bp) if (vnics == bp->hw_resc.resv_vnics) return true; - bnxt_hwrm_reserve_rings(bp, 0, 0, 0, 0, vnics); + bnxt_hwrm_reserve_rings(bp, 0, 0, 0, 0, 0, vnics); if (vnics <= bp->hw_resc.resv_vnics) return true; netdev_warn(bp->dev, "Unable to reserve resources to support NTUPLE filters.\n"); - bnxt_hwrm_reserve_rings(bp, 0, 0, 0, 0, 1); + bnxt_hwrm_reserve_rings(bp, 0, 0, 0, 0, 0, 1); return false; #else return false; @@ -9042,7 +9349,7 @@ int bnxt_check_rings(struct bnxt *bp, int tx, int rx, bool sh, int tcs, int tx_xdp) { int max_rx, max_tx, tx_sets = 1; - int tx_rings_needed; + int tx_rings_needed, stats; int rx_rings = rx; int cp, vnics, rc; @@ -9067,10 +9374,13 @@ int bnxt_check_rings(struct bnxt *bp, int tx, int rx, bool sh, int tcs, if (bp->flags & BNXT_FLAG_AGG_RINGS) rx_rings <<= 1; cp = sh ? max_t(int, tx_rings_needed, rx) : tx_rings_needed + rx; - if (BNXT_NEW_RM(bp)) + stats = cp; + if (BNXT_NEW_RM(bp)) { cp += bnxt_get_ulp_msix_num(bp); + stats += bnxt_get_ulp_stat_ctxs(bp); + } return bnxt_hwrm_check_rings(bp, tx_rings_needed, rx_rings, rx, cp, - vnics); + stats, vnics); } static void bnxt_unmap_bars(struct bnxt *bp, struct pci_dev *pdev) @@ -9106,7 +9416,7 @@ static void bnxt_init_dflt_coal(struct bnxt *bp) * 1 coal_buf x bufs_per_record = 1 completion record. */ coal = &bp->rx_coal; - coal->coal_ticks = 14; + coal->coal_ticks = 10; coal->coal_bufs = 30; coal->coal_ticks_irq = 1; coal->coal_bufs_irq = 2; @@ -9294,7 +9604,6 @@ int bnxt_setup_mq_tc(struct net_device *dev, u8 tc) bp->tx_nr_rings += bp->tx_nr_rings_xdp; bp->cp_nr_rings = sh ? 
max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) : bp->tx_nr_rings + bp->rx_nr_rings; - bp->num_stat_ctxs = bp->cp_nr_rings; if (netif_running(bp->dev)) return bnxt_open_nic(bp, true, false); @@ -9617,7 +9926,7 @@ static int bnxt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, } static int bnxt_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh, - u16 flags) + u16 flags, struct netlink_ext_ack *extack) { struct bnxt *bp = netdev_priv(dev); struct nlattr *attr, *br_spec; @@ -9760,6 +10069,7 @@ static void bnxt_remove_one(struct pci_dev *pdev) kfree(bp->ctx); bp->ctx = NULL; bnxt_cleanup_pci(bp); + bnxt_free_port_stats(bp); free_netdev(dev); } @@ -9834,7 +10144,7 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx, *max_cp = bnxt_get_max_func_cp_rings_for_en(bp); max_irq = min_t(int, bnxt_get_max_func_irqs(bp) - bnxt_get_ulp_msix_num(bp), - bnxt_get_max_func_stat_ctxs(bp)); + hw_resc->max_stat_ctxs - bnxt_get_ulp_stat_ctxs(bp)); if (!(bp->flags & BNXT_FLAG_CHIP_P5)) *max_cp = min_t(int, *max_cp, max_irq); max_ring_grps = hw_resc->max_hw_ring_grps; @@ -9965,7 +10275,6 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh) netdev_warn(bp->dev, "2nd rings reservation failed.\n"); bp->tx_nr_rings_per_tc = bp->tx_nr_rings; } - bp->num_stat_ctxs = bp->cp_nr_rings; if (BNXT_CHIP_TYPE_NITRO_A0(bp)) { bp->rx_nr_rings++; bp->cp_nr_rings++; @@ -10099,6 +10408,12 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) goto init_err_pci_clean; + if (bp->fw_cap & BNXT_FW_CAP_KONG_MB_CHNL) { + rc = bnxt_alloc_kong_hwrm_resources(bp); + if (rc) + bp->fw_cap &= ~BNXT_FW_CAP_KONG_MB_CHNL; + } + if ((bp->fw_cap & BNXT_FW_CAP_SHORT_CMD) || bp->hwrm_max_ext_req_len > BNXT_HWRM_MAX_REQ_LEN) { rc = bnxt_alloc_hwrm_short_cmd_req(bp); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 3030931ccaf8..a451796deefe 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -567,7 +567,6 @@ struct nqe_cn { #define HWRM_RESP_LEN_MASK 0xffff0000 #define HWRM_RESP_LEN_SFT 16 #define HWRM_RESP_VALID_MASK 0xff000000 -#define HWRM_SEQ_ID_INVALID -1 #define BNXT_HWRM_REQ_MAX_SIZE 128 #define BNXT_HWRM_REQS_PER_PAGE (BNXT_PAGE_SIZE / \ BNXT_HWRM_REQ_MAX_SIZE) @@ -585,6 +584,9 @@ struct nqe_cn { #define HWRM_VALID_BIT_DELAY_USEC 20 +#define BNXT_HWRM_CHNL_CHIMP 0 +#define BNXT_HWRM_CHNL_KONG 1 + #define BNXT_RX_EVENT 1 #define BNXT_AGG_EVENT 2 #define BNXT_TX_EVENT 4 @@ -615,9 +617,12 @@ struct bnxt_sw_rx_agg_bd { struct bnxt_ring_mem_info { int nr_pages; int page_size; - u32 flags; + u16 flags; #define BNXT_RMEM_VALID_PTE_FLAG 1 #define BNXT_RMEM_RING_PTE_FLAG 2 +#define BNXT_RMEM_USE_FULL_PAGE_FLAG 4 + + u16 depth; void **pg_arr; dma_addr_t *dma_arr; @@ -927,6 +932,8 @@ struct bnxt_hw_resc { u16 resv_vnics; u16 min_stat_ctxs; u16 max_stat_ctxs; + u16 resv_stat_ctxs; + u16 max_nqs; u16 max_irqs; u16 resv_irqs; }; @@ -1111,9 +1118,14 @@ struct bnxt_test_info { char string[BNXT_MAX_TEST][ETH_GSTRING_LEN]; }; -#define BNXT_GRCPF_REG_WINDOW_BASE_OUT 0x400 -#define BNXT_CAG_REG_LEGACY_INT_STATUS 0x4014 -#define BNXT_CAG_REG_BASE 0x300000 +#define BNXT_GRCPF_REG_CHIMP_COMM 0x0 +#define BNXT_GRCPF_REG_CHIMP_COMM_TRIGGER 0x100 +#define BNXT_GRCPF_REG_WINDOW_BASE_OUT 0x400 +#define BNXT_CAG_REG_LEGACY_INT_STATUS 0x4014 +#define BNXT_CAG_REG_BASE 0x300000 + +#define BNXT_GRCPF_REG_KONG_COMM 0xA00 +#define BNXT_GRCPF_REG_KONG_COMM_TRIGGER 0xB00 struct 
bnxt_tc_flow_stats { u64 packets; @@ -1181,12 +1193,15 @@ struct bnxt_vf_rep { #define PTU_PTE_NEXT_TO_LAST 0x4UL #define MAX_CTX_PAGES (BNXT_PAGE_SIZE / 8) +#define MAX_CTX_TOTAL_PAGES (MAX_CTX_PAGES * MAX_CTX_PAGES) struct bnxt_ctx_pg_info { u32 entries; + u32 nr_pages; void *ctx_pg_arr[MAX_CTX_PAGES]; dma_addr_t ctx_dma_arr[MAX_CTX_PAGES]; struct bnxt_ring_mem_info ring_mem; + struct bnxt_ctx_pg_info **ctx_pg_tbl; }; struct bnxt_ctx_mem_info { @@ -1222,6 +1237,8 @@ struct bnxt_ctx_mem_info { struct bnxt_ctx_pg_info cq_mem; struct bnxt_ctx_pg_info vnic_mem; struct bnxt_ctx_pg_info stat_mem; + struct bnxt_ctx_pg_info mrav_mem; + struct bnxt_ctx_pg_info tim_mem; struct bnxt_ctx_pg_info *tqm_mem[9]; }; @@ -1416,8 +1433,6 @@ struct bnxt { int cp_nr_pages; int cp_nr_rings; - int num_stat_ctxs; - /* grp_info indexed by completion ring index */ struct bnxt_ring_grp_info *grp_info; struct bnxt_vnic_info *vnic_info; @@ -1457,21 +1472,27 @@ struct bnxt { u32 msg_enable; u32 fw_cap; - #define BNXT_FW_CAP_SHORT_CMD 0x00000001 - #define BNXT_FW_CAP_LLDP_AGENT 0x00000002 - #define BNXT_FW_CAP_DCBX_AGENT 0x00000004 - #define BNXT_FW_CAP_NEW_RM 0x00000008 - #define BNXT_FW_CAP_IF_CHANGE 0x00000010 + #define BNXT_FW_CAP_SHORT_CMD 0x00000001 + #define BNXT_FW_CAP_LLDP_AGENT 0x00000002 + #define BNXT_FW_CAP_DCBX_AGENT 0x00000004 + #define BNXT_FW_CAP_NEW_RM 0x00000008 + #define BNXT_FW_CAP_IF_CHANGE 0x00000010 + #define BNXT_FW_CAP_KONG_MB_CHNL 0x00000080 + #define BNXT_FW_CAP_OVS_64BIT_HANDLE 0x00000400 #define BNXT_NEW_RM(bp) ((bp)->fw_cap & BNXT_FW_CAP_NEW_RM) u32 hwrm_spec_code; u16 hwrm_cmd_seq; - u32 hwrm_intr_seq_id; + u16 hwrm_cmd_kong_seq; + u16 hwrm_intr_seq_id; void *hwrm_short_cmd_req_addr; dma_addr_t hwrm_short_cmd_req_dma_addr; void *hwrm_cmd_resp_addr; dma_addr_t hwrm_cmd_resp_dma_addr; + void *hwrm_cmd_kong_resp_addr; + dma_addr_t hwrm_cmd_kong_resp_dma_addr; + struct rtnl_link_stats64 net_stats_prev; struct rx_port_stats *hw_rx_port_stats; struct tx_port_stats *hw_tx_port_stats; struct rx_port_stats_ext *hw_rx_port_stats_ext; @@ -1483,6 +1504,8 @@ struct bnxt { int hw_port_stats_size; u16 fw_rx_stats_ext_size; u16 fw_tx_stats_ext_size; + u8 pri2cos[8]; + u8 pri2cos_valid; u16 hwrm_max_req_len; u16 hwrm_max_ext_req_len; @@ -1669,6 +1692,66 @@ static inline void bnxt_db_write(struct bnxt *bp, struct bnxt_db_info *db, } } +static inline bool bnxt_cfa_hwrm_message(u16 req_type) +{ + switch (req_type) { + case HWRM_CFA_ENCAP_RECORD_ALLOC: + case HWRM_CFA_ENCAP_RECORD_FREE: + case HWRM_CFA_DECAP_FILTER_ALLOC: + case HWRM_CFA_DECAP_FILTER_FREE: + case HWRM_CFA_NTUPLE_FILTER_ALLOC: + case HWRM_CFA_NTUPLE_FILTER_FREE: + case HWRM_CFA_NTUPLE_FILTER_CFG: + case HWRM_CFA_EM_FLOW_ALLOC: + case HWRM_CFA_EM_FLOW_FREE: + case HWRM_CFA_EM_FLOW_CFG: + case HWRM_CFA_FLOW_ALLOC: + case HWRM_CFA_FLOW_FREE: + case HWRM_CFA_FLOW_INFO: + case HWRM_CFA_FLOW_FLUSH: + case HWRM_CFA_FLOW_STATS: + case HWRM_CFA_METER_PROFILE_ALLOC: + case HWRM_CFA_METER_PROFILE_FREE: + case HWRM_CFA_METER_PROFILE_CFG: + case HWRM_CFA_METER_INSTANCE_ALLOC: + case HWRM_CFA_METER_INSTANCE_FREE: + return true; + default: + return false; + } +} + +static inline bool bnxt_kong_hwrm_message(struct bnxt *bp, struct input *req) +{ + return (bp->fw_cap & BNXT_FW_CAP_KONG_MB_CHNL && + bnxt_cfa_hwrm_message(le16_to_cpu(req->req_type))); +} + +static inline bool bnxt_hwrm_kong_chnl(struct bnxt *bp, struct input *req) +{ + return (bp->fw_cap & BNXT_FW_CAP_KONG_MB_CHNL && + req->resp_addr == cpu_to_le64(bp->hwrm_cmd_kong_resp_dma_addr)); +} + 
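+/* Editorial sketch (not part of the patch) of how these helpers compose:
+ * bnxt_hwrm_cmd_hdr_init() stamps the KONG response address into CFA
+ * requests matched by bnxt_kong_hwrm_message(), and the send path then
+ * recovers the channel from that address via bnxt_hwrm_kong_chnl().
+ * Assuming KONG-capable firmware:
+ *
+ *   struct hwrm_cfa_flow_alloc_input req = {0};
+ *
+ *   bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_CFA_FLOW_ALLOC, -1, -1);
+ *   // req.resp_addr now holds hwrm_cmd_kong_resp_dma_addr, so
+ *   // bnxt_hwrm_do_send_msg() selects BNXT_HWRM_CHNL_KONG with its own
+ *   // doorbell offset and sequence-ID space
+ *   rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ */
+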
+static inline void *bnxt_get_hwrm_resp_addr(struct bnxt *bp, void *req) +{ + if (bnxt_hwrm_kong_chnl(bp, (struct input *)req)) + return bp->hwrm_cmd_kong_resp_addr; + else + return bp->hwrm_cmd_resp_addr; +} + +static inline u16 bnxt_get_hwrm_seq_id(struct bnxt *bp, u16 dst) +{ + u16 seq_id; + + if (dst == BNXT_HWRM_CHNL_CHIMP) + seq_id = bp->hwrm_cmd_seq++; + else + seq_id = bp->hwrm_cmd_kong_seq++; + return seq_id; +} + extern const u16 bnxt_lhint_arr[]; int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, @@ -1686,11 +1769,12 @@ int bnxt_hwrm_func_rgtr_async_events(struct bnxt *bp, unsigned long *bmap, int bmap_size); int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id); int __bnxt_hwrm_get_tx_rings(struct bnxt *bp, u16 fid, int *tx_rings); +int bnxt_nq_rings_in_use(struct bnxt *bp); int bnxt_hwrm_set_coal(struct bnxt *); unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp); -void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max); +unsigned int bnxt_get_avail_stat_ctxs_for_en(struct bnxt *bp); unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp); -unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp); +unsigned int bnxt_get_avail_cp_rings_for_en(struct bnxt *bp); int bnxt_get_avail_msix(struct bnxt *bp, int num); int bnxt_reserve_rings(struct bnxt *bp); void bnxt_tx_disable(struct bnxt *bp); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c index a85d2be986af..15c7041e937b 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c @@ -471,7 +471,10 @@ static int bnxt_ets_validate(struct bnxt *bp, struct ieee_ets *ets, u8 *tc) if (total_ets_bw > 100) return -EINVAL; - *tc = max_tc + 1; + if (max_tc >= bp->max_tc) + *tc = bp->max_tc; + else + *tc = max_tc + 1; return 0; } diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index 6b51f4de6017..adabbe94a259 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -207,6 +207,34 @@ reset_coalesce: BNXT_TX_STATS_EXT_COS_ENTRY(6), \ BNXT_TX_STATS_EXT_COS_ENTRY(7) \ +#define BNXT_RX_STATS_PRI_ENTRY(counter, n) \ + { BNXT_RX_STATS_EXT_OFFSET(counter##_cos0), \ + __stringify(counter##_pri##n) } + +#define BNXT_TX_STATS_PRI_ENTRY(counter, n) \ + { BNXT_TX_STATS_EXT_OFFSET(counter##_cos0), \ + __stringify(counter##_pri##n) } + +#define BNXT_RX_STATS_PRI_ENTRIES(counter) \ + BNXT_RX_STATS_PRI_ENTRY(counter, 0), \ + BNXT_RX_STATS_PRI_ENTRY(counter, 1), \ + BNXT_RX_STATS_PRI_ENTRY(counter, 2), \ + BNXT_RX_STATS_PRI_ENTRY(counter, 3), \ + BNXT_RX_STATS_PRI_ENTRY(counter, 4), \ + BNXT_RX_STATS_PRI_ENTRY(counter, 5), \ + BNXT_RX_STATS_PRI_ENTRY(counter, 6), \ + BNXT_RX_STATS_PRI_ENTRY(counter, 7) + +#define BNXT_TX_STATS_PRI_ENTRIES(counter) \ + BNXT_TX_STATS_PRI_ENTRY(counter, 0), \ + BNXT_TX_STATS_PRI_ENTRY(counter, 1), \ + BNXT_TX_STATS_PRI_ENTRY(counter, 2), \ + BNXT_TX_STATS_PRI_ENTRY(counter, 3), \ + BNXT_TX_STATS_PRI_ENTRY(counter, 4), \ + BNXT_TX_STATS_PRI_ENTRY(counter, 5), \ + BNXT_TX_STATS_PRI_ENTRY(counter, 6), \ + BNXT_TX_STATS_PRI_ENTRY(counter, 7) + enum { RX_TOTAL_DISCARDS, TX_TOTAL_DISCARDS, @@ -327,8 +355,41 @@ static const struct { BNXT_TX_STATS_EXT_PFC_ENTRIES, }; +static const struct { + long base_off; + char string[ETH_GSTRING_LEN]; +} bnxt_rx_bytes_pri_arr[] = { + BNXT_RX_STATS_PRI_ENTRIES(rx_bytes), +}; + +static const struct { + long base_off; + 
char string[ETH_GSTRING_LEN]; +} bnxt_rx_pkts_pri_arr[] = { + BNXT_RX_STATS_PRI_ENTRIES(rx_packets), +}; + +static const struct { + long base_off; + char string[ETH_GSTRING_LEN]; +} bnxt_tx_bytes_pri_arr[] = { + BNXT_TX_STATS_PRI_ENTRIES(tx_bytes), +}; + +static const struct { + long base_off; + char string[ETH_GSTRING_LEN]; +} bnxt_tx_pkts_pri_arr[] = { + BNXT_TX_STATS_PRI_ENTRIES(tx_packets), +}; + #define BNXT_NUM_SW_FUNC_STATS ARRAY_SIZE(bnxt_sw_func_stats) #define BNXT_NUM_PORT_STATS ARRAY_SIZE(bnxt_port_stats_arr) +#define BNXT_NUM_STATS_PRI \ + (ARRAY_SIZE(bnxt_rx_bytes_pri_arr) + \ + ARRAY_SIZE(bnxt_rx_pkts_pri_arr) + \ + ARRAY_SIZE(bnxt_tx_bytes_pri_arr) + \ + ARRAY_SIZE(bnxt_tx_pkts_pri_arr)) static int bnxt_get_num_stats(struct bnxt *bp) { @@ -339,9 +400,12 @@ static int bnxt_get_num_stats(struct bnxt *bp) if (bp->flags & BNXT_FLAG_PORT_STATS) num_stats += BNXT_NUM_PORT_STATS; - if (bp->flags & BNXT_FLAG_PORT_STATS_EXT) + if (bp->flags & BNXT_FLAG_PORT_STATS_EXT) { num_stats += bp->fw_rx_stats_ext_size + bp->fw_tx_stats_ext_size; + if (bp->pri2cos_valid) + num_stats += BNXT_NUM_STATS_PRI; + } return num_stats; } @@ -369,8 +433,10 @@ static void bnxt_get_ethtool_stats(struct net_device *dev, struct bnxt *bp = netdev_priv(dev); u32 stat_fields = sizeof(struct ctx_hw_stats) / 8; - if (!bp->bnapi) - return; + if (!bp->bnapi) { + j += BNXT_NUM_STATS * bp->cp_nr_rings + BNXT_NUM_SW_FUNC_STATS; + goto skip_ring_stats; + } for (i = 0; i < BNXT_NUM_SW_FUNC_STATS; i++) bnxt_sw_func_stats[i].counter = 0; @@ -395,6 +461,7 @@ static void bnxt_get_ethtool_stats(struct net_device *dev, for (i = 0; i < BNXT_NUM_SW_FUNC_STATS; i++, j++) buf[j] = bnxt_sw_func_stats[i].counter; +skip_ring_stats: if (bp->flags & BNXT_FLAG_PORT_STATS) { __le64 *port_stats = (__le64 *)bp->hw_rx_port_stats; @@ -415,6 +482,32 @@ static void bnxt_get_ethtool_stats(struct net_device *dev, buf[j] = le64_to_cpu(*(tx_port_stats_ext + bnxt_tx_port_stats_ext_arr[i].offset)); } + if (bp->pri2cos_valid) { + for (i = 0; i < 8; i++, j++) { + long n = bnxt_rx_bytes_pri_arr[i].base_off + + bp->pri2cos[i]; + + buf[j] = le64_to_cpu(*(rx_port_stats_ext + n)); + } + for (i = 0; i < 8; i++, j++) { + long n = bnxt_rx_pkts_pri_arr[i].base_off + + bp->pri2cos[i]; + + buf[j] = le64_to_cpu(*(rx_port_stats_ext + n)); + } + for (i = 0; i < 8; i++, j++) { + long n = bnxt_tx_bytes_pri_arr[i].base_off + + bp->pri2cos[i]; + + buf[j] = le64_to_cpu(*(tx_port_stats_ext + n)); + } + for (i = 0; i < 8; i++, j++) { + long n = bnxt_tx_pkts_pri_arr[i].base_off + + bp->pri2cos[i]; + + buf[j] = le64_to_cpu(*(tx_port_stats_ext + n)); + } + } } } @@ -493,6 +586,28 @@ static void bnxt_get_strings(struct net_device *dev, u32 stringset, u8 *buf) bnxt_tx_port_stats_ext_arr[i].string); buf += ETH_GSTRING_LEN; } + if (bp->pri2cos_valid) { + for (i = 0; i < 8; i++) { + strcpy(buf, + bnxt_rx_bytes_pri_arr[i].string); + buf += ETH_GSTRING_LEN; + } + for (i = 0; i < 8; i++) { + strcpy(buf, + bnxt_rx_pkts_pri_arr[i].string); + buf += ETH_GSTRING_LEN; + } + for (i = 0; i < 8; i++) { + strcpy(buf, + bnxt_tx_bytes_pri_arr[i].string); + buf += ETH_GSTRING_LEN; + } + for (i = 0; i < 8; i++) { + strcpy(buf, + bnxt_tx_pkts_pri_arr[i].string); + buf += ETH_GSTRING_LEN; + } + } } break; case ETH_SS_TEST: @@ -663,8 +778,6 @@ static int bnxt_set_channels(struct net_device *dev, bp->cp_nr_rings = sh ? 
max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) : bp->tx_nr_rings + bp->rx_nr_rings; - bp->num_stat_ctxs = bp->cp_nr_rings; - /* After changing number of rx channels, update NTUPLE feature. */ netdev_update_features(dev); if (netif_running(dev)) { @@ -1526,14 +1639,22 @@ static int bnxt_flash_nvram(struct net_device *dev, rc = hwrm_send_message(bp, &req, sizeof(req), FLASH_NVRAM_TIMEOUT); dma_free_coherent(&bp->pdev->dev, data_len, kmem, dma_handle); + if (rc == HWRM_ERR_CODE_RESOURCE_ACCESS_DENIED) { + netdev_info(dev, + "PF does not have admin privileges to flash the device\n"); + rc = -EACCES; + } else if (rc) { + rc = -EIO; + } return rc; } static int bnxt_firmware_reset(struct net_device *dev, u16 dir_type) { - struct bnxt *bp = netdev_priv(dev); struct hwrm_fw_reset_input req = {0}; + struct bnxt *bp = netdev_priv(dev); + int rc; bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FW_RESET, -1, -1); @@ -1573,7 +1694,15 @@ static int bnxt_firmware_reset(struct net_device *dev, return -EINVAL; } - return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); + rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); + if (rc == HWRM_ERR_CODE_RESOURCE_ACCESS_DENIED) { + netdev_info(dev, + "PF does not have admin privileges to reset the device\n"); + rc = -EACCES; + } else if (rc) { + rc = -EIO; + } + return rc; } static int bnxt_flash_firmware(struct net_device *dev, @@ -1780,9 +1909,9 @@ static int bnxt_flash_package_from_file(struct net_device *dev, struct hwrm_nvm_install_update_output *resp = bp->hwrm_cmd_resp_addr; struct hwrm_nvm_install_update_input install = {0}; const struct firmware *fw; + int rc, hwrm_err = 0; u32 item_len; u16 index; - int rc; bnxt_hwrm_fw_set_time(bp); @@ -1825,15 +1954,16 @@ static int bnxt_flash_package_from_file(struct net_device *dev, memcpy(kmem, fw->data, fw->size); modify.host_src_addr = cpu_to_le64(dma_handle); - rc = hwrm_send_message(bp, &modify, sizeof(modify), - FLASH_PACKAGE_TIMEOUT); + hwrm_err = hwrm_send_message(bp, &modify, + sizeof(modify), + FLASH_PACKAGE_TIMEOUT); dma_free_coherent(&bp->pdev->dev, fw->size, kmem, dma_handle); } } release_firmware(fw); - if (rc) - return rc; + if (rc || hwrm_err) + goto err_exit; if ((install_type & 0xffff) == 0) install_type >>= 16; @@ -1841,12 +1971,10 @@ static int bnxt_flash_package_from_file(struct net_device *dev, install.install_type = cpu_to_le32(install_type); mutex_lock(&bp->hwrm_cmd_lock); - rc = _hwrm_send_message(bp, &install, sizeof(install), - INSTALL_PACKAGE_TIMEOUT); - if (rc) { - rc = -EOPNOTSUPP; + hwrm_err = _hwrm_send_message(bp, &install, sizeof(install), + INSTALL_PACKAGE_TIMEOUT); + if (hwrm_err) goto flash_pkg_exit; - } if (resp->error_code) { u8 error_code = ((struct hwrm_err_output *)resp)->cmd_err; @@ -1854,12 +1982,11 @@ static int bnxt_flash_package_from_file(struct net_device *dev, if (error_code == NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) { install.flags |= cpu_to_le16( NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG); - rc = _hwrm_send_message(bp, &install, sizeof(install), - INSTALL_PACKAGE_TIMEOUT); - if (rc) { - rc = -EOPNOTSUPP; + hwrm_err = _hwrm_send_message(bp, &install, + sizeof(install), + INSTALL_PACKAGE_TIMEOUT); + if (hwrm_err) goto flash_pkg_exit; - } } } @@ -1870,6 +1997,14 @@ static int bnxt_flash_package_from_file(struct net_device *dev, } flash_pkg_exit: mutex_unlock(&bp->hwrm_cmd_lock); +err_exit: + if (hwrm_err == HWRM_ERR_CODE_RESOURCE_ACCESS_DENIED) { + netdev_info(dev, + "PF does not have admin privileges to flash the device\n"); + rc = -EACCES; + } else 
if (hwrm_err) { + rc = -EOPNOTSUPP; + } return rc; } @@ -2450,17 +2585,37 @@ static int bnxt_hwrm_mac_loopback(struct bnxt *bp, bool enable) return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); } +static int bnxt_query_force_speeds(struct bnxt *bp, u16 *force_speeds) +{ + struct hwrm_port_phy_qcaps_output *resp = bp->hwrm_cmd_resp_addr; + struct hwrm_port_phy_qcaps_input req = {0}; + int rc; + + bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_PORT_PHY_QCAPS, -1, -1); + mutex_lock(&bp->hwrm_cmd_lock); + rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); + if (!rc) + *force_speeds = le16_to_cpu(resp->supported_speeds_force_mode); + + mutex_unlock(&bp->hwrm_cmd_lock); + return rc; +} + static int bnxt_disable_an_for_lpbk(struct bnxt *bp, struct hwrm_port_phy_cfg_input *req) { struct bnxt_link_info *link_info = &bp->link_info; - u16 fw_advertising = link_info->advertising; + u16 fw_advertising; u16 fw_speed; int rc; if (!link_info->autoneg) return 0; + rc = bnxt_query_force_speeds(bp, &fw_advertising); + if (rc) + return rc; + fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_1GB; if (netif_carrier_ok(bp->dev)) fw_speed = bp->link_info.link_speed; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h index 5dd086059568..f1aaac8e6268 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h @@ -194,6 +194,8 @@ struct cmd_nums { #define HWRM_STAT_CTX_QUERY 0xb2UL #define HWRM_STAT_CTX_CLR_STATS 0xb3UL #define HWRM_PORT_QSTATS_EXT 0xb4UL + #define HWRM_PORT_PHY_MDIO_WRITE 0xb5UL + #define HWRM_PORT_PHY_MDIO_READ 0xb6UL #define HWRM_FW_RESET 0xc0UL #define HWRM_FW_QSTATUS 0xc1UL #define HWRM_FW_HEALTH_CHECK 0xc2UL @@ -213,6 +215,7 @@ struct cmd_nums { #define HWRM_WOL_FILTER_FREE 0xf1UL #define HWRM_WOL_FILTER_QCFG 0xf2UL #define HWRM_WOL_REASON_QCFG 0xf3UL + #define HWRM_CFA_METER_QCAPS 0xf4UL #define HWRM_CFA_METER_PROFILE_ALLOC 0xf5UL #define HWRM_CFA_METER_PROFILE_FREE 0xf6UL #define HWRM_CFA_METER_PROFILE_CFG 0xf7UL @@ -239,6 +242,24 @@ struct cmd_nums { #define HWRM_FW_IPC_MSG 0x110UL #define HWRM_CFA_REDIRECT_TUNNEL_TYPE_INFO 0x111UL #define HWRM_CFA_REDIRECT_QUERY_TUNNEL_TYPE 0x112UL + #define HWRM_CFA_FLOW_AGING_TIMER_RESET 0x113UL + #define HWRM_CFA_FLOW_AGING_CFG 0x114UL + #define HWRM_CFA_FLOW_AGING_QCFG 0x115UL + #define HWRM_CFA_FLOW_AGING_QCAPS 0x116UL + #define HWRM_CFA_CTX_MEM_RGTR 0x117UL + #define HWRM_CFA_CTX_MEM_UNRGTR 0x118UL + #define HWRM_CFA_CTX_MEM_QCTX 0x119UL + #define HWRM_CFA_CTX_MEM_QCAPS 0x11aUL + #define HWRM_CFA_COUNTER_QCAPS 0x11bUL + #define HWRM_CFA_COUNTER_CFG 0x11cUL + #define HWRM_CFA_COUNTER_QCFG 0x11dUL + #define HWRM_CFA_COUNTER_QSTATS 0x11eUL + #define HWRM_CFA_TCP_FLAG_PROCESS_QCFG 0x11fUL + #define HWRM_CFA_EEM_QCAPS 0x120UL + #define HWRM_CFA_EEM_CFG 0x121UL + #define HWRM_CFA_EEM_QCFG 0x122UL + #define HWRM_CFA_EEM_OP 0x123UL + #define HWRM_CFA_ADV_FLOW_MGNT_QCAPS 0x124UL #define HWRM_ENGINE_CKV_HELLO 0x12dUL #define HWRM_ENGINE_CKV_STATUS 0x12eUL #define HWRM_ENGINE_CKV_CKEK_ADD 0x12fUL @@ -335,6 +356,8 @@ struct ret_codes { #define HWRM_ERR_CODE_UNSUPPORTED_TLV 0x7UL #define HWRM_ERR_CODE_NO_BUFFER 0x8UL #define HWRM_ERR_CODE_UNSUPPORTED_OPTION_ERR 0x9UL + #define HWRM_ERR_CODE_HOT_RESET_PROGRESS 0xaUL + #define HWRM_ERR_CODE_HOT_RESET_FAIL 0xbUL #define HWRM_ERR_CODE_HWRM_ERROR 0xfUL #define HWRM_ERR_CODE_TLV_ENCAPSULATED_RESPONSE 0x8000UL #define HWRM_ERR_CODE_UNKNOWN_ERR 0xfffeUL @@ -363,8 +386,8 @@ struct hwrm_err_output { 
#define HWRM_VERSION_MAJOR 1 #define HWRM_VERSION_MINOR 10 #define HWRM_VERSION_UPDATE 0 -#define HWRM_VERSION_RSVD 3 -#define HWRM_VERSION_STR "1.10.0.3" +#define HWRM_VERSION_RSVD 33 +#define HWRM_VERSION_STR "1.10.0.33" /* hwrm_ver_get_input (size:192b/24B) */ struct hwrm_ver_get_input { @@ -411,6 +434,10 @@ struct hwrm_ver_get_output { #define VER_GET_RESP_DEV_CAPS_CFG_L2_FILTER_TYPES_ROCE_OR_L2_SUPPORTED 0x40UL #define VER_GET_RESP_DEV_CAPS_CFG_VIRTIO_VSWITCH_OFFLOAD_SUPPORTED 0x80UL #define VER_GET_RESP_DEV_CAPS_CFG_TRUSTED_VF_SUPPORTED 0x100UL + #define VER_GET_RESP_DEV_CAPS_CFG_FLOW_AGING_SUPPORTED 0x200UL + #define VER_GET_RESP_DEV_CAPS_CFG_ADV_FLOW_COUNTERS_SUPPORTED 0x400UL + #define VER_GET_RESP_DEV_CAPS_CFG_CFA_EEM_SUPPORTED 0x800UL + #define VER_GET_RESP_DEV_CAPS_CFG_CFA_ADV_FLOW_MGNT_SUPPORTED 0x1000UL u8 roce_fw_maj_8b; u8 roce_fw_min_8b; u8 roce_fw_bld_8b; @@ -465,14 +492,27 @@ struct hwrm_ver_get_output { /* eject_cmpl (size:128b/16B) */ struct eject_cmpl { __le16 type; - #define EJECT_CMPL_TYPE_MASK 0x3fUL - #define EJECT_CMPL_TYPE_SFT 0 - #define EJECT_CMPL_TYPE_STAT_EJECT 0x1aUL - #define EJECT_CMPL_TYPE_LAST EJECT_CMPL_TYPE_STAT_EJECT + #define EJECT_CMPL_TYPE_MASK 0x3fUL + #define EJECT_CMPL_TYPE_SFT 0 + #define EJECT_CMPL_TYPE_STAT_EJECT 0x1aUL + #define EJECT_CMPL_TYPE_LAST EJECT_CMPL_TYPE_STAT_EJECT + #define EJECT_CMPL_FLAGS_MASK 0xffc0UL + #define EJECT_CMPL_FLAGS_SFT 6 + #define EJECT_CMPL_FLAGS_ERROR 0x40UL __le16 len; __le32 opaque; - __le32 v; - #define EJECT_CMPL_V 0x1UL + __le16 v; + #define EJECT_CMPL_V 0x1UL + #define EJECT_CMPL_ERRORS_MASK 0xfffeUL + #define EJECT_CMPL_ERRORS_SFT 1 + #define EJECT_CMPL_ERRORS_BUFFER_ERROR_MASK 0xeUL + #define EJECT_CMPL_ERRORS_BUFFER_ERROR_SFT 1 + #define EJECT_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER (0x0UL << 1) + #define EJECT_CMPL_ERRORS_BUFFER_ERROR_DID_NOT_FIT (0x1UL << 1) + #define EJECT_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT (0x3UL << 1) + #define EJECT_CMPL_ERRORS_BUFFER_ERROR_FLUSH (0x5UL << 1) + #define EJECT_CMPL_ERRORS_BUFFER_ERROR_LAST EJECT_CMPL_ERRORS_BUFFER_ERROR_FLUSH + __le16 reserved16; __le32 unused_2; }; @@ -552,6 +592,10 @@ struct hwrm_async_event_cmpl { #define ASYNC_EVENT_CMPL_EVENT_ID_LLFC_PFC_CHANGE 0x34UL #define ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE 0x35UL #define ASYNC_EVENT_CMPL_EVENT_ID_HW_FLOW_AGED 0x36UL + #define ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION 0x37UL + #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CACHE_FLUSH_REQ 0x38UL + #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CACHE_FLUSH_DONE 0x39UL + #define ASYNC_EVENT_CMPL_EVENT_ID_FW_TRACE_MSG 0xfeUL #define ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR 0xffUL #define ASYNC_EVENT_CMPL_EVENT_ID_LAST ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR __le32 event_data2; @@ -647,6 +691,39 @@ struct hwrm_async_event_cmpl_link_speed_cfg_change { #define ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_EVENT_DATA1_ILLEGAL_LINK_SPEED_CFG 0x20000UL }; +/* hwrm_async_event_cmpl_reset_notify (size:128b/16B) */ +struct hwrm_async_event_cmpl_reset_notify { + __le16 type; + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_TYPE_MASK 0x3fUL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_TYPE_SFT 0 + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_TYPE_HWRM_ASYNC_EVENT 0x2eUL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_TYPE_LAST ASYNC_EVENT_CMPL_RESET_NOTIFY_TYPE_HWRM_ASYNC_EVENT + __le16 event_id; + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_ID_RESET_NOTIFY 0x8UL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_ID_LAST ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_ID_RESET_NOTIFY + __le32 event_data2; + u8 opaque_v; + #define 
ASYNC_EVENT_CMPL_RESET_NOTIFY_V 0x1UL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_OPAQUE_MASK 0xfeUL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_OPAQUE_SFT 1 + u8 timestamp_lo; + __le16 timestamp_hi; + __le32 event_data1; + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DRIVER_ACTION_MASK 0xffUL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DRIVER_ACTION_SFT 0 + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DRIVER_ACTION_DRIVER_STOP_TX_QUEUE 0x1UL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DRIVER_ACTION_DRIVER_IFDOWN 0x2UL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DRIVER_ACTION_LAST ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DRIVER_ACTION_DRIVER_IFDOWN + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_MASK 0xff00UL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_SFT 8 + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_MANAGEMENT_RESET_REQUEST (0x1UL << 8) + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_EXCEPTION_FATAL (0x2UL << 8) + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_EXCEPTION_NON_FATAL (0x3UL << 8) + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_LAST ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_EXCEPTION_NON_FATAL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DELAY_IN_100MS_TICKS_MASK 0xffff0000UL + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DELAY_IN_100MS_TICKS_SFT 16 +}; + /* hwrm_async_event_cmpl_vf_cfg_change (size:128b/16B) */ struct hwrm_async_event_cmpl_vf_cfg_change { __le16 type; @@ -672,6 +749,74 @@ struct hwrm_async_event_cmpl_vf_cfg_change { #define ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_TRUSTED_VF_CFG_CHANGE 0x10UL }; +/* hwrm_async_event_cmpl_hw_flow_aged (size:128b/16B) */ +struct hwrm_async_event_cmpl_hw_flow_aged { + __le16 type; + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_TYPE_MASK 0x3fUL + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_TYPE_SFT 0 + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_TYPE_HWRM_ASYNC_EVENT 0x2eUL + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_TYPE_LAST ASYNC_EVENT_CMPL_HW_FLOW_AGED_TYPE_HWRM_ASYNC_EVENT + __le16 event_id; + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_ID_HW_FLOW_AGED 0x36UL + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_ID_LAST ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_ID_HW_FLOW_AGED + __le32 event_data2; + u8 opaque_v; + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_V 0x1UL + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_OPAQUE_MASK 0xfeUL + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_OPAQUE_SFT 1 + u8 timestamp_lo; + __le16 timestamp_hi; + __le32 event_data1; + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_ID_MASK 0x7fffffffUL + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_ID_SFT 0 + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_DIRECTION 0x80000000UL + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_DIRECTION_RX (0x0UL << 31) + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_DIRECTION_TX (0x1UL << 31) + #define ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_DIRECTION_LAST ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_DIRECTION_TX +}; + +/* hwrm_async_event_cmpl_eem_cache_flush_req (size:128b/16B) */ +struct hwrm_async_event_cmpl_eem_cache_flush_req { + __le16 type; + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_TYPE_MASK 0x3fUL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_TYPE_SFT 0 + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_TYPE_HWRM_ASYNC_EVENT 0x2eUL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_TYPE_LAST ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_TYPE_HWRM_ASYNC_EVENT + __le16 
event_id; + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_EVENT_ID_EEM_CACHE_FLUSH_REQ 0x38UL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_EVENT_ID_LAST ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_EVENT_ID_EEM_CACHE_FLUSH_REQ + __le32 event_data2; + u8 opaque_v; + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_V 0x1UL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_OPAQUE_MASK 0xfeUL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_REQ_OPAQUE_SFT 1 + u8 timestamp_lo; + __le16 timestamp_hi; + __le32 event_data1; +}; + +/* hwrm_async_event_cmpl_eem_cache_flush_done (size:128b/16B) */ +struct hwrm_async_event_cmpl_eem_cache_flush_done { + __le16 type; + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_TYPE_MASK 0x3fUL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_TYPE_SFT 0 + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_TYPE_HWRM_ASYNC_EVENT 0x2eUL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_TYPE_LAST ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_TYPE_HWRM_ASYNC_EVENT + __le16 event_id; + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_EVENT_ID_EEM_CACHE_FLUSH_DONE 0x39UL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_EVENT_ID_LAST ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_EVENT_ID_EEM_CACHE_FLUSH_DONE + __le32 event_data2; + u8 opaque_v; + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_V 0x1UL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_OPAQUE_MASK 0xfeUL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_OPAQUE_SFT 1 + u8 timestamp_lo; + __le16 timestamp_hi; + __le32 event_data1; + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_EVENT_DATA1_FID_MASK 0xffffUL + #define ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_EVENT_DATA1_FID_SFT 0 +}; + /* hwrm_func_reset_input (size:192b/24B) */ struct hwrm_func_reset_input { __le16 req_type; @@ -867,6 +1012,8 @@ struct hwrm_func_qcaps_output { #define FUNC_QCAPS_RESP_FLAGS_ADMIN_PF_SUPPORTED 0x40000UL #define FUNC_QCAPS_RESP_FLAGS_LINK_ADMIN_STATUS_SUPPORTED 0x80000UL #define FUNC_QCAPS_RESP_FLAGS_WCB_PUSH_MODE 0x100000UL + #define FUNC_QCAPS_RESP_FLAGS_DYNAMIC_TX_RING_ALLOC 0x200000UL + #define FUNC_QCAPS_RESP_FLAGS_HOT_RESET_CAPABLE 0x400000UL u8 mac_address[6]; __le16 max_rsscos_ctx; __le16 max_cmpl_rings; @@ -902,7 +1049,7 @@ struct hwrm_func_qcfg_input { u8 unused_0[6]; }; -/* hwrm_func_qcfg_output (size:640b/80B) */ +/* hwrm_func_qcfg_output (size:704b/88B) */ struct hwrm_func_qcfg_output { __le16 error_code; __le16 req_type; @@ -919,6 +1066,7 @@ struct hwrm_func_qcfg_output { #define FUNC_QCFG_RESP_FLAGS_FW_LLDP_AGENT_ENABLED 0x10UL #define FUNC_QCFG_RESP_FLAGS_MULTI_HOST 0x20UL #define FUNC_QCFG_RESP_FLAGS_TRUSTED_VF 0x40UL + #define FUNC_QCFG_RESP_FLAGS_SECURE_MODE_ENABLED 0x80UL u8 mac_address[6]; __le16 pci_id; __le16 alloc_rsscos_ctx; @@ -1000,7 +1148,11 @@ struct hwrm_func_qcfg_output { __le16 alloc_sp_tx_rings; __le16 alloc_stat_ctx; __le16 alloc_msix; - u8 unused_2[5]; + __le16 registered_vfs; + u8 unused_1[3]; + u8 always_1; + __le32 reset_addr_poll; + u8 unused_2[3]; u8 valid; }; @@ -1031,6 +1183,7 @@ struct hwrm_func_cfg_input { #define FUNC_CFG_REQ_FLAGS_VNIC_ASSETS_TEST 0x80000UL #define FUNC_CFG_REQ_FLAGS_L2_CTX_ASSETS_TEST 0x100000UL #define FUNC_CFG_REQ_FLAGS_TRUSTED_VF_ENABLE 0x200000UL + #define FUNC_CFG_REQ_FLAGS_DYNAMIC_TX_RING_ALLOC 0x400000UL __le32 enables; #define FUNC_CFG_REQ_ENABLES_MTU 0x1UL #define FUNC_CFG_REQ_ENABLES_MRU 0x2UL @@ -1235,6 +1388,7 @@ struct hwrm_func_drv_rgtr_input { #define FUNC_DRV_RGTR_REQ_FLAGS_FWD_NONE_MODE 0x2UL #define FUNC_DRV_RGTR_REQ_FLAGS_16BIT_VER_MODE 0x4UL #define FUNC_DRV_RGTR_REQ_FLAGS_FLOW_HANDLE_64BIT_MODE 0x8UL + 
#define FUNC_DRV_RGTR_REQ_FLAGS_HOT_RESET_SUPPORT 0x10UL __le32 enables; #define FUNC_DRV_RGTR_REQ_ENABLES_OS_TYPE 0x1UL #define FUNC_DRV_RGTR_REQ_ENABLES_VER 0x2UL @@ -1888,7 +2042,8 @@ struct hwrm_func_drv_if_change_output { __le16 seq_id; __le16 resp_len; __le32 flags; - #define FUNC_DRV_IF_CHANGE_RESP_FLAGS_RESC_CHANGE 0x1UL + #define FUNC_DRV_IF_CHANGE_RESP_FLAGS_RESC_CHANGE 0x1UL + #define FUNC_DRV_IF_CHANGE_RESP_FLAGS_HOT_FW_RESET_DONE 0x2UL u8 unused_0[3]; u8 valid; }; @@ -2864,6 +3019,60 @@ struct hwrm_port_phy_i2c_read_output { u8 valid; }; +/* hwrm_port_phy_mdio_write_input (size:320b/40B) */ +struct hwrm_port_phy_mdio_write_input { + __le16 req_type; + __le16 cmpl_ring; + __le16 seq_id; + __le16 target_id; + __le64 resp_addr; + __le32 unused_0[2]; + __le16 port_id; + u8 phy_addr; + u8 dev_addr; + __le16 reg_addr; + __le16 reg_data; + u8 cl45_mdio; + u8 unused_1[7]; +}; + +/* hwrm_port_phy_mdio_write_output (size:128b/16B) */ +struct hwrm_port_phy_mdio_write_output { + __le16 error_code; + __le16 req_type; + __le16 seq_id; + __le16 resp_len; + u8 unused_0[7]; + u8 valid; +}; + +/* hwrm_port_phy_mdio_read_input (size:256b/32B) */ +struct hwrm_port_phy_mdio_read_input { + __le16 req_type; + __le16 cmpl_ring; + __le16 seq_id; + __le16 target_id; + __le64 resp_addr; + __le32 unused_0[2]; + __le16 port_id; + u8 phy_addr; + u8 dev_addr; + __le16 reg_addr; + u8 cl45_mdio; + u8 unused_1; +}; + +/* hwrm_port_phy_mdio_read_output (size:128b/16B) */ +struct hwrm_port_phy_mdio_read_output { + __le16 error_code; + __le16 req_type; + __le16 seq_id; + __le16 resp_len; + __le16 reg_data; + u8 unused_0[5]; + u8 valid; +}; + /* hwrm_port_led_cfg_input (size:512b/64B) */ struct hwrm_port_led_cfg_input { __le16 req_type; @@ -4869,6 +5078,10 @@ struct hwrm_ring_grp_free_output { u8 unused_0[7]; u8 valid; }; +#define DEFAULT_FLOW_ID 0xFFFFFFFFUL +#define ROCEV1_FLOW_ID 0xFFFFFFFEUL +#define ROCEV2_FLOW_ID 0xFFFFFFFDUL +#define ROCEV2_CNP_FLOW_ID 0xFFFFFFFCUL /* hwrm_cfa_l2_filter_alloc_input (size:768b/96B) */ struct hwrm_cfa_l2_filter_alloc_input { @@ -4937,20 +5150,21 @@ struct hwrm_cfa_l2_filter_alloc_input { u8 unused_3; __le32 src_id; u8 tunnel_type; - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL - #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL + #define 
CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL + #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL u8 unused_4; __le16 dst_id; __le16 mirror_vnic_id; @@ -5108,20 +5322,21 @@ struct hwrm_cfa_tunnel_filter_alloc_input { u8 l3_addr_type; u8 t_l3_addr_type; u8 tunnel_type; - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL - #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL + #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL u8 tunnel_flags; #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_FLAGS_TUN_FLAGS_OAM_CHECKSUM_EXPLHDR 0x1UL #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_FLAGS_TUN_FLAGS_CRITICAL_OPT_S1 0x2UL @@ -5326,20 +5541,21 @@ struct hwrm_cfa_ntuple_filter_alloc_input { __le16 dst_id; __le16 mirror_vnic_id; u8 tunnel_type; - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define 
CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL - #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL + #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL u8 pri_hint; #define CFA_NTUPLE_FILTER_ALLOC_REQ_PRI_HINT_NO_PREFER 0x0UL #define CFA_NTUPLE_FILTER_ALLOC_REQ_PRI_HINT_ABOVE 0x1UL @@ -5459,20 +5675,21 @@ struct hwrm_cfa_decap_filter_alloc_input { #define CFA_DECAP_FILTER_ALLOC_REQ_ENABLES_MIRROR_VNIC_ID 0x10000UL __be32 tunnel_id; u8 tunnel_type; - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL - #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL + #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL u8 unused_0; __le16 unused_1; u8 src_macaddr[6]; @@ -5559,20 +5776,23 @@ struct hwrm_cfa_flow_alloc_input { #define 
CFA_FLOW_ALLOC_REQ_FLAGS_PATH_TX 0x40UL #define CFA_FLOW_ALLOC_REQ_FLAGS_PATH_RX 0x80UL #define CFA_FLOW_ALLOC_REQ_FLAGS_MATCH_VXLAN_IP_VNI 0x100UL + #define CFA_FLOW_ALLOC_REQ_FLAGS_VHOST_ID_USE_VLAN 0x200UL __le16 src_fid; __le32 tunnel_handle; __le16 action_flags; - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_FWD 0x1UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_RECYCLE 0x2UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_DROP 0x4UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_METER 0x8UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_TUNNEL 0x10UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_NAT_SRC 0x20UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_NAT_DEST 0x40UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_NAT_IPV4_ADDRESS 0x80UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_L2_HEADER_REWRITE 0x100UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_TTL_DECREMENT 0x200UL - #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_TUNNEL_IP 0x400UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_FWD 0x1UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_RECYCLE 0x2UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_DROP 0x4UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_METER 0x8UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_TUNNEL 0x10UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_NAT_SRC 0x20UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_NAT_DEST 0x40UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_NAT_IPV4_ADDRESS 0x80UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_L2_HEADER_REWRITE 0x100UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_TTL_DECREMENT 0x200UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_TUNNEL_IP 0x400UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_FLOW_AGING_ENABLED 0x800UL + #define CFA_FLOW_ALLOC_REQ_ACTION_FLAGS_PRI_HINT 0x1000UL __le16 dst_fid; __be16 l2_rewrite_vlan_tpid; __be16 l2_rewrite_vlan_tci; @@ -5597,20 +5817,21 @@ struct hwrm_cfa_flow_alloc_input { __be16 l2_rewrite_smac[3]; u8 ip_proto; u8 tunnel_type; - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL - #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_NONTUNNEL 0x0UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_NVGRE 0x2UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_L2GRE 0x3UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_IPIP 0x4UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_MPLS 0x6UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_STT 0x7UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_IPGRE 0x8UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL + #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL }; /* hwrm_cfa_flow_alloc_output 
(size:256b/32B) */ @@ -5623,7 +5844,8 @@ struct hwrm_cfa_flow_alloc_output { u8 unused_0[2]; __le32 flow_id; __le64 ext_flow_handle; - u8 unused_1[7]; + __le32 flow_counter_id; + u8 unused_1[3]; u8 valid; }; @@ -5651,6 +5873,46 @@ struct hwrm_cfa_flow_free_output { u8 valid; }; +/* hwrm_cfa_flow_info_input (size:256b/32B) */ +struct hwrm_cfa_flow_info_input { + __le16 req_type; + __le16 cmpl_ring; + __le16 seq_id; + __le16 target_id; + __le64 resp_addr; + __le16 flow_handle; + #define CFA_FLOW_INFO_REQ_FLOW_HANDLE_MAX_MASK 0xfffUL + #define CFA_FLOW_INFO_REQ_FLOW_HANDLE_MAX_SFT 0 + #define CFA_FLOW_INFO_REQ_FLOW_HANDLE_CNP_CNT 0x1000UL + #define CFA_FLOW_INFO_REQ_FLOW_HANDLE_ROCEV1_CNT 0x2000UL + #define CFA_FLOW_INFO_REQ_FLOW_HANDLE_ROCEV2_CNT 0x4000UL + #define CFA_FLOW_INFO_REQ_FLOW_HANDLE_DIR_RX 0x8000UL + u8 unused_0[6]; + __le64 ext_flow_handle; +}; + +/* hwrm_cfa_flow_info_output (size:448b/56B) */ +struct hwrm_cfa_flow_info_output { + __le16 error_code; + __le16 req_type; + __le16 seq_id; + __le16 resp_len; + u8 flags; + u8 profile; + __le16 src_fid; + __le16 dst_fid; + __le16 l2_ctxt_id; + __le64 em_info; + __le64 tcam_info; + __le64 vfp_tcam_info; + __le16 ar_id; + __le16 flow_handle; + __le32 tunnel_handle; + __le16 flow_timer; + u8 unused_0[5]; + u8 valid; +}; + /* hwrm_cfa_flow_stats_input (size:640b/80B) */ struct hwrm_cfa_flow_stats_input { __le16 req_type; @@ -5757,6 +6019,128 @@ struct hwrm_cfa_vfr_free_output { u8 valid; }; +/* hwrm_cfa_eem_qcaps_input (size:192b/24B) */ +struct hwrm_cfa_eem_qcaps_input { + __le16 req_type; + __le16 cmpl_ring; + __le16 seq_id; + __le16 target_id; + __le64 resp_addr; + __le32 flags; + #define CFA_EEM_QCAPS_REQ_FLAGS_PATH_TX 0x1UL + #define CFA_EEM_QCAPS_REQ_FLAGS_PATH_RX 0x2UL + #define CFA_EEM_QCAPS_REQ_FLAGS_PREFERRED_OFFLOAD 0x4UL + __le32 unused_0; +}; + +/* hwrm_cfa_eem_qcaps_output (size:256b/32B) */ +struct hwrm_cfa_eem_qcaps_output { + __le16 error_code; + __le16 req_type; + __le16 seq_id; + __le16 resp_len; + __le32 flags; + #define CFA_EEM_QCAPS_RESP_FLAGS_PATH_TX 0x1UL + #define CFA_EEM_QCAPS_RESP_FLAGS_PATH_RX 0x2UL + __le32 unused_0; + __le32 supported; + #define CFA_EEM_QCAPS_RESP_SUPPORTED_KEY0_TABLE 0x1UL + #define CFA_EEM_QCAPS_RESP_SUPPORTED_KEY1_TABLE 0x2UL + #define CFA_EEM_QCAPS_RESP_SUPPORTED_EXTERNAL_RECORD_TABLE 0x4UL + #define CFA_EEM_QCAPS_RESP_SUPPORTED_EXTERNAL_FLOW_COUNTERS_TABLE 0x8UL + __le32 max_entries_supported; + __le16 key_entry_size; + __le16 record_entry_size; + __le16 efc_entry_size; + u8 unused_1; + u8 valid; +}; + +/* hwrm_cfa_eem_cfg_input (size:320b/40B) */ +struct hwrm_cfa_eem_cfg_input { + __le16 req_type; + __le16 cmpl_ring; + __le16 seq_id; + __le16 target_id; + __le64 resp_addr; + __le32 flags; + #define CFA_EEM_CFG_REQ_FLAGS_PATH_TX 0x1UL + #define CFA_EEM_CFG_REQ_FLAGS_PATH_RX 0x2UL + #define CFA_EEM_CFG_REQ_FLAGS_PREFERRED_OFFLOAD 0x4UL + __le32 unused_0; + __le32 num_entries; + __le32 unused_1; + __le16 key0_ctx_id; + __le16 key1_ctx_id; + __le16 record_ctx_id; + __le16 efc_ctx_id; +}; + +/* hwrm_cfa_eem_cfg_output (size:128b/16B) */ +struct hwrm_cfa_eem_cfg_output { + __le16 error_code; + __le16 req_type; + __le16 seq_id; + __le16 resp_len; + u8 unused_0[7]; + u8 valid; +}; + +/* hwrm_cfa_eem_qcfg_input (size:192b/24B) */ +struct hwrm_cfa_eem_qcfg_input { + __le16 req_type; + __le16 cmpl_ring; + __le16 seq_id; + __le16 target_id; + __le64 resp_addr; + __le32 flags; + #define CFA_EEM_QCFG_REQ_FLAGS_PATH_TX 0x1UL + #define CFA_EEM_QCFG_REQ_FLAGS_PATH_RX 0x2UL + __le32 unused_0; +}; + +/* 
hwrm_cfa_eem_qcfg_output (size:128b/16B) */ +struct hwrm_cfa_eem_qcfg_output { + __le16 error_code; + __le16 req_type; + __le16 seq_id; + __le16 resp_len; + __le32 flags; + #define CFA_EEM_QCFG_RESP_FLAGS_PATH_TX 0x1UL + #define CFA_EEM_QCFG_RESP_FLAGS_PATH_RX 0x2UL + #define CFA_EEM_QCFG_RESP_FLAGS_PREFERRED_OFFLOAD 0x4UL + __le32 num_entries; +}; + +/* hwrm_cfa_eem_op_input (size:192b/24B) */ +struct hwrm_cfa_eem_op_input { + __le16 req_type; + __le16 cmpl_ring; + __le16 seq_id; + __le16 target_id; + __le64 resp_addr; + __le32 flags; + #define CFA_EEM_OP_REQ_FLAGS_PATH_TX 0x1UL + #define CFA_EEM_OP_REQ_FLAGS_PATH_RX 0x2UL + __le16 unused_0; + __le16 op; + #define CFA_EEM_OP_REQ_OP_RESERVED 0x0UL + #define CFA_EEM_OP_REQ_OP_EEM_DISABLE 0x1UL + #define CFA_EEM_OP_REQ_OP_EEM_ENABLE 0x2UL + #define CFA_EEM_OP_REQ_OP_EEM_CLEANUP 0x3UL + #define CFA_EEM_OP_REQ_OP_LAST CFA_EEM_OP_REQ_OP_EEM_CLEANUP +}; + +/* hwrm_cfa_eem_op_output (size:128b/16B) */ +struct hwrm_cfa_eem_op_output { + __le16 error_code; + __le16 req_type; + __le16 seq_id; + __le16 resp_len; + u8 unused_0[7]; + u8 valid; +}; + /* hwrm_tunnel_dst_port_query_input (size:192b/24B) */ struct hwrm_tunnel_dst_port_query_input { __le16 req_type; @@ -5765,12 +6149,13 @@ struct hwrm_tunnel_dst_port_query_input { __le16 target_id; __le64 resp_addr; u8 tunnel_type; - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_GENEVE 0x5UL - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_L2_ETYPE + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 u8 unused_0[7]; }; @@ -5794,12 +6179,13 @@ struct hwrm_tunnel_dst_port_alloc_input { __le16 target_id; __le64 resp_addr; u8 tunnel_type; - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 u8 unused_0; __be16 tunnel_dst_port_val; u8 unused_1[4]; @@ -5824,12 +6210,13 @@ struct hwrm_tunnel_dst_port_free_input { __le16 target_id; __le64 resp_addr; u8 tunnel_type; - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN 0x1UL - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE 0x5UL - 
#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_L2_ETYPE + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN 0x1UL + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE 0x5UL + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 u8 unused_0; __le16 tunnel_dst_port_id; u8 unused_1[4]; @@ -6040,7 +6427,9 @@ struct hwrm_fw_reset_input { #define FW_RESET_REQ_SELFRST_STATUS_SELFRSTIMMEDIATE 0x3UL #define FW_RESET_REQ_SELFRST_STATUS_LAST FW_RESET_REQ_SELFRST_STATUS_SELFRSTIMMEDIATE u8 host_idx; - u8 unused_0[5]; + u8 flags; + #define FW_RESET_REQ_FLAGS_RESET_GRACEFUL 0x1UL + u8 unused_0[4]; }; /* hwrm_fw_reset_output (size:128b/16B) */ @@ -6137,6 +6526,7 @@ struct hwrm_struct_hdr { #define STRUCT_HDR_STRUCT_ID_DCBX_FEATURE_STATE 0x422UL #define STRUCT_HDR_STRUCT_ID_LLDP_GENERIC 0x424UL #define STRUCT_HDR_STRUCT_ID_LLDP_DEVICE 0x426UL + #define STRUCT_HDR_STRUCT_ID_POWER_BKUP 0x427UL #define STRUCT_HDR_STRUCT_ID_AFM_OPAQUE 0x1UL #define STRUCT_HDR_STRUCT_ID_PORT_DESCRIPTION 0xaUL #define STRUCT_HDR_STRUCT_ID_RSS_V2 0x64UL diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c index 3962f6fd543c..d80f5c981d90 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c @@ -448,16 +448,22 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs) u16 vf_stat_ctx, vf_vnics, vf_ring_grps; struct bnxt_pf_info *pf = &bp->pf; int i, rc = 0, min = 1; + u16 vf_msix = 0; bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_RESOURCE_CFG, -1, -1); - vf_cp_rings = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings; - vf_stat_ctx = hw_resc->max_stat_ctxs - bp->num_stat_ctxs; + if (bp->flags & BNXT_FLAG_CHIP_P5) { + vf_msix = hw_resc->max_nqs - bnxt_nq_rings_in_use(bp); + vf_ring_grps = 0; + } else { + vf_ring_grps = hw_resc->max_hw_ring_grps - bp->rx_nr_rings; + } + vf_cp_rings = bnxt_get_avail_cp_rings_for_en(bp); + vf_stat_ctx = bnxt_get_avail_stat_ctxs_for_en(bp); if (bp->flags & BNXT_FLAG_AGG_RINGS) vf_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings * 2; else vf_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings; - vf_ring_grps = hw_resc->max_hw_ring_grps - bp->rx_nr_rings; vf_tx_rings = hw_resc->max_tx_rings - bp->tx_nr_rings; vf_vnics = hw_resc->max_vnics - bp->nr_vnics; vf_vnics = min_t(u16, vf_vnics, vf_rx_rings); @@ -476,7 +482,8 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs) req.min_l2_ctxs = cpu_to_le16(min); req.min_vnics = cpu_to_le16(min); req.min_stat_ctx = cpu_to_le16(min); - req.min_hw_ring_grps = cpu_to_le16(min); + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + req.min_hw_ring_grps = cpu_to_le16(min); } else { vf_cp_rings /= num_vfs; vf_tx_rings /= num_vfs; @@ -500,6 +507,8 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs) req.max_vnics = cpu_to_le16(vf_vnics); req.max_stat_ctx = cpu_to_le16(vf_stat_ctx); req.max_hw_ring_grps = cpu_to_le16(vf_ring_grps); + if (bp->flags & BNXT_FLAG_CHIP_P5) + req.max_msix = 
cpu_to_le16(vf_msix / num_vfs); mutex_lock(&bp->hwrm_cmd_lock); for (i = 0; i < num_vfs; i++) { @@ -525,6 +534,8 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs) hw_resc->max_rsscos_ctxs -= pf->active_vfs; hw_resc->max_stat_ctxs -= le16_to_cpu(req.min_stat_ctx) * n; hw_resc->max_vnics -= le16_to_cpu(req.min_vnics) * n; + if (bp->flags & BNXT_FLAG_CHIP_P5) + hw_resc->max_irqs -= vf_msix * n; rc = pf->active_vfs; } @@ -539,19 +550,16 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs) u32 rc = 0, mtu, i; u16 vf_tx_rings, vf_rx_rings, vf_cp_rings, vf_stat_ctx, vf_vnics; struct bnxt_hw_resc *hw_resc = &bp->hw_resc; - u16 vf_ring_grps, max_stat_ctxs; struct hwrm_func_cfg_input req = {0}; struct bnxt_pf_info *pf = &bp->pf; int total_vf_tx_rings = 0; + u16 vf_ring_grps; bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1); - max_stat_ctxs = hw_resc->max_stat_ctxs; - /* Remaining rings are distributed equally amongs VF's for now */ - vf_cp_rings = (bnxt_get_max_func_cp_rings_for_en(bp) - - bp->cp_nr_rings) / num_vfs; - vf_stat_ctx = (max_stat_ctxs - bp->num_stat_ctxs) / num_vfs; + vf_cp_rings = bnxt_get_avail_cp_rings_for_en(bp) / num_vfs; + vf_stat_ctx = bnxt_get_avail_stat_ctxs_for_en(bp) / num_vfs; if (bp->flags & BNXT_FLAG_AGG_RINGS) vf_rx_rings = (hw_resc->max_rx_rings - bp->rx_nr_rings * 2) / num_vfs; @@ -644,8 +652,8 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs) */ vfs_supported = *num_vfs; - avail_cp = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings; - avail_stat = hw_resc->max_stat_ctxs - bp->num_stat_ctxs; + avail_cp = bnxt_get_avail_cp_rings_for_en(bp); + avail_stat = bnxt_get_avail_stat_ctxs_for_en(bp); avail_cp = min_t(int, avail_cp, avail_stat); while (vfs_supported) { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c index 749f63beddd8..c683b5e96b1d 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c @@ -337,18 +337,21 @@ static int bnxt_tc_parse_flow(struct bnxt *bp, return bnxt_tc_parse_actions(bp, &flow->actions, tc_flow_cmd->exts); } -static int bnxt_hwrm_cfa_flow_free(struct bnxt *bp, __le16 flow_handle) +static int bnxt_hwrm_cfa_flow_free(struct bnxt *bp, + struct bnxt_tc_flow_node *flow_node) { struct hwrm_cfa_flow_free_input req = { 0 }; int rc; bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_CFA_FLOW_FREE, -1, -1); - req.flow_handle = flow_handle; + if (bp->fw_cap & BNXT_FW_CAP_OVS_64BIT_HANDLE) + req.ext_flow_handle = flow_node->ext_flow_handle; + else + req.flow_handle = flow_node->flow_handle; rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); if (rc) - netdev_info(bp->dev, "Error: %s: flow_handle=0x%x rc=%d", - __func__, flow_handle, rc); + netdev_info(bp->dev, "%s: Error rc=%d", __func__, rc); if (rc) rc = -EIO; @@ -418,13 +421,14 @@ static bool bits_set(void *key, int len) static int bnxt_hwrm_cfa_flow_alloc(struct bnxt *bp, struct bnxt_tc_flow *flow, __le16 ref_flow_handle, - __le32 tunnel_handle, __le16 *flow_handle) + __le32 tunnel_handle, + struct bnxt_tc_flow_node *flow_node) { - struct hwrm_cfa_flow_alloc_output *resp = bp->hwrm_cmd_resp_addr; struct bnxt_tc_actions *actions = &flow->actions; struct bnxt_tc_l3_key *l3_mask = &flow->l3_mask; struct bnxt_tc_l3_key *l3_key = &flow->l3_key; struct hwrm_cfa_flow_alloc_input req = { 0 }; + struct hwrm_cfa_flow_alloc_output *resp; u16 flow_flags = 0, action_flags = 0; int rc; @@ -527,8 +531,23 @@ static int bnxt_hwrm_cfa_flow_alloc(struct bnxt 
*bp, struct bnxt_tc_flow *flow, mutex_lock(&bp->hwrm_cmd_lock); rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); - if (!rc) - *flow_handle = resp->flow_handle; + if (!rc) { + resp = bnxt_get_hwrm_resp_addr(bp, &req); + /* CFA_FLOW_ALLOC response interpretation: + * fw with fw with + * 16-bit 64-bit + * flow handle flow handle + * =========== =========== + * flow_handle flow handle flow context id + * ext_flow_handle INVALID flow handle + * flow_id INVALID flow counter id + */ + flow_node->flow_handle = resp->flow_handle; + if (bp->fw_cap & BNXT_FW_CAP_OVS_64BIT_HANDLE) { + flow_node->ext_flow_handle = resp->ext_flow_handle; + flow_node->flow_id = resp->flow_id; + } + } mutex_unlock(&bp->hwrm_cmd_lock); if (rc == HWRM_ERR_CODE_RESOURCE_ALLOC_ERROR) @@ -544,9 +563,8 @@ static int hwrm_cfa_decap_filter_alloc(struct bnxt *bp, __le32 ref_decap_handle, __le32 *decap_filter_handle) { - struct hwrm_cfa_decap_filter_alloc_output *resp = - bp->hwrm_cmd_resp_addr; struct hwrm_cfa_decap_filter_alloc_input req = { 0 }; + struct hwrm_cfa_decap_filter_alloc_output *resp; struct ip_tunnel_key *tun_key = &flow->tun_key; u32 enables = 0; int rc; @@ -599,10 +617,12 @@ static int hwrm_cfa_decap_filter_alloc(struct bnxt *bp, mutex_lock(&bp->hwrm_cmd_lock); rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); - if (!rc) + if (!rc) { + resp = bnxt_get_hwrm_resp_addr(bp, &req); *decap_filter_handle = resp->decap_filter_id; - else + } else { netdev_info(bp->dev, "%s: Error rc=%d", __func__, rc); + } mutex_unlock(&bp->hwrm_cmd_lock); if (rc) @@ -633,9 +653,8 @@ static int hwrm_cfa_encap_record_alloc(struct bnxt *bp, struct bnxt_tc_l2_key *l2_info, __le32 *encap_record_handle) { - struct hwrm_cfa_encap_record_alloc_output *resp = - bp->hwrm_cmd_resp_addr; struct hwrm_cfa_encap_record_alloc_input req = { 0 }; + struct hwrm_cfa_encap_record_alloc_output *resp; struct hwrm_cfa_encap_data_vxlan *encap = (struct hwrm_cfa_encap_data_vxlan *)&req.encap_data; struct hwrm_vxlan_ipv4_hdr *encap_ipv4 = @@ -667,10 +686,12 @@ static int hwrm_cfa_encap_record_alloc(struct bnxt *bp, mutex_lock(&bp->hwrm_cmd_lock); rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); - if (!rc) + if (!rc) { + resp = bnxt_get_hwrm_resp_addr(bp, &req); *encap_record_handle = resp->encap_record_id; - else + } else { netdev_info(bp->dev, "%s: Error rc=%d", __func__, rc); + } mutex_unlock(&bp->hwrm_cmd_lock); if (rc) @@ -1224,7 +1245,7 @@ static int __bnxt_tc_del_flow(struct bnxt *bp, int rc; /* send HWRM cmd to free the flow-id */ - bnxt_hwrm_cfa_flow_free(bp, flow_node->flow_handle); + bnxt_hwrm_cfa_flow_free(bp, flow_node); mutex_lock(&tc_info->lock); @@ -1246,6 +1267,12 @@ static int __bnxt_tc_del_flow(struct bnxt *bp, return 0; } +static void bnxt_tc_set_flow_dir(struct bnxt *bp, struct bnxt_tc_flow *flow, + u16 src_fid) +{ + flow->dir = (bp->pf.fw_fid == src_fid) ? 
BNXT_DIR_RX : BNXT_DIR_TX; +} + static void bnxt_tc_set_src_fid(struct bnxt *bp, struct bnxt_tc_flow *flow, u16 src_fid) { @@ -1293,6 +1320,9 @@ static int bnxt_tc_add_flow(struct bnxt *bp, u16 src_fid, bnxt_tc_set_src_fid(bp, flow, src_fid); + if (bp->fw_cap & BNXT_FW_CAP_OVS_64BIT_HANDLE) + bnxt_tc_set_flow_dir(bp, flow, src_fid); + if (!bnxt_tc_can_offload(bp, flow)) { rc = -ENOSPC; goto free_node; @@ -1320,7 +1350,7 @@ static int bnxt_tc_add_flow(struct bnxt *bp, u16 src_fid, /* send HWRM cmd to alloc the flow */ rc = bnxt_hwrm_cfa_flow_alloc(bp, flow, ref_flow_handle, - tunnel_handle, &new_node->flow_handle); + tunnel_handle, new_node); if (rc) goto put_tunnel; @@ -1336,7 +1366,7 @@ static int bnxt_tc_add_flow(struct bnxt *bp, u16 src_fid, return 0; hwrm_flow_free: - bnxt_hwrm_cfa_flow_free(bp, new_node->flow_handle); + bnxt_hwrm_cfa_flow_free(bp, new_node); put_tunnel: bnxt_tc_put_tunnel_handle(bp, flow, new_node); put_l2: @@ -1397,13 +1427,40 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp, return 0; } +static void bnxt_fill_cfa_stats_req(struct bnxt *bp, + struct bnxt_tc_flow_node *flow_node, + __le16 *flow_handle, __le32 *flow_id) +{ + u16 handle; + + if (bp->fw_cap & BNXT_FW_CAP_OVS_64BIT_HANDLE) { + *flow_id = flow_node->flow_id; + + /* If flow_id is used to fetch flow stats then: + * 1. lower 12 bits of flow_handle must be set to all 1s. + * 2. 15th bit of flow_handle must specify the flow + * direction (TX/RX). + */ + if (flow_node->flow.dir == BNXT_DIR_RX) + handle = CFA_FLOW_INFO_REQ_FLOW_HANDLE_DIR_RX | + CFA_FLOW_INFO_REQ_FLOW_HANDLE_MAX_MASK; + else + handle = CFA_FLOW_INFO_REQ_FLOW_HANDLE_MAX_MASK; + + *flow_handle = cpu_to_le16(handle); + } else { + *flow_handle = flow_node->flow_handle; + } +} + static int bnxt_hwrm_cfa_flow_stats_get(struct bnxt *bp, int num_flows, struct bnxt_tc_stats_batch stats_batch[]) { - struct hwrm_cfa_flow_stats_output *resp = bp->hwrm_cmd_resp_addr; struct hwrm_cfa_flow_stats_input req = { 0 }; + struct hwrm_cfa_flow_stats_output *resp; __le16 *req_flow_handles = &req.flow_handle_0; + __le32 *req_flow_ids = &req.flow_id_0; int rc, i; bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_CFA_FLOW_STATS, -1, -1); @@ -1411,14 +1468,19 @@ bnxt_hwrm_cfa_flow_stats_get(struct bnxt *bp, int num_flows, for (i = 0; i < num_flows; i++) { struct bnxt_tc_flow_node *flow_node = stats_batch[i].flow_node; - req_flow_handles[i] = flow_node->flow_handle; + bnxt_fill_cfa_stats_req(bp, flow_node, + &req_flow_handles[i], &req_flow_ids[i]); } mutex_lock(&bp->hwrm_cmd_lock); rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); if (!rc) { - __le64 *resp_packets = &resp->packet_0; - __le64 *resp_bytes = &resp->byte_0; + __le64 *resp_packets; + __le64 *resp_bytes; + + resp = bnxt_get_hwrm_resp_addr(bp, &req); + resp_packets = &resp->packet_0; + resp_bytes = &resp->byte_0; for (i = 0; i < num_flows; i++) { stats_batch[i].hw_stats.packets = diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h index 97e09a880693..8a0968967bc5 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h @@ -98,6 +98,9 @@ struct bnxt_tc_flow { /* flow applicable to pkts ingressing on this fid */ u16 src_fid; + u8 dir; +#define BNXT_DIR_RX 1 +#define BNXT_DIR_TX 0 struct bnxt_tc_l2_key l2_key; struct bnxt_tc_l2_key l2_mask; struct bnxt_tc_l3_key l3_key; @@ -170,7 +173,9 @@ struct bnxt_tc_flow_node { struct bnxt_tc_flow flow; + __le64 ext_flow_handle; __le16 flow_handle; + __le32 flow_id; 
/* L2 node in l2 hashtable that shares flow's l2 key */ struct bnxt_tc_l2_node *l2_node; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c index 0a3097baafde..ea45a9b8179e 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c @@ -48,10 +48,8 @@ static int bnxt_register_dev(struct bnxt_en_dev *edev, int ulp_id, max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS || - bp->num_stat_ctxs == max_stat_ctxs) + bp->cp_nr_rings == max_stat_ctxs) return -ENOMEM; - bnxt_set_max_func_stat_ctxs(bp, max_stat_ctxs - - BNXT_MIN_ROCE_STAT_CTXS); } atomic_set(&ulp->ref_count, 0); @@ -82,14 +80,9 @@ static int bnxt_unregister_dev(struct bnxt_en_dev *edev, int ulp_id) netdev_err(bp->dev, "ulp id %d not registered\n", ulp_id); return -EINVAL; } - if (ulp_id == BNXT_ROCE_ULP) { - unsigned int max_stat_ctxs; + if (ulp_id == BNXT_ROCE_ULP && ulp->msix_requested) + edev->en_ops->bnxt_free_msix(edev, ulp_id); - max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); - bnxt_set_max_func_stat_ctxs(bp, max_stat_ctxs + 1); - if (ulp->msix_requested) - edev->en_ops->bnxt_free_msix(edev, ulp_id); - } if (ulp->max_async_event_id) bnxt_hwrm_func_rgtr_async_events(bp, NULL, 0); @@ -218,6 +211,14 @@ int bnxt_get_ulp_msix_base(struct bnxt *bp) return 0; } +int bnxt_get_ulp_stat_ctxs(struct bnxt *bp) +{ + if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) + return BNXT_MIN_ROCE_STAT_CTXS; + + return 0; +} + static int bnxt_send_msg(struct bnxt_en_dev *edev, int ulp_id, struct bnxt_fw_msg *fw_msg) { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h index d9bea37cd211..cd78453d0bf0 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h @@ -90,6 +90,7 @@ static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id) int bnxt_get_ulp_msix_num(struct bnxt *bp); int bnxt_get_ulp_msix_base(struct bnxt *bp); +int bnxt_get_ulp_stat_ctxs(struct bnxt *bp); void bnxt_ulp_stop(struct bnxt *bp); void bnxt_ulp_start(struct bnxt *bp); void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index bf6de02be396..0184ef6f05a7 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -199,7 +199,6 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog) bp->tx_nr_rings_xdp = tx_xdp; bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tc + tx_xdp; bp->cp_nr_rings = max_t(int, bp->tx_nr_rings, bp->rx_nr_rings); - bp->num_stat_ctxs = bp->cp_nr_rings; bnxt_set_tpa_flags(bp); bnxt_set_ring_params(bp); diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c index d83233ae4a15..510dfc1c236b 100644 --- a/drivers/net/ethernet/broadcom/cnic.c +++ b/drivers/net/ethernet/broadcom/cnic.c @@ -5731,7 +5731,7 @@ static int cnic_netdev_event(struct notifier_block *this, unsigned long event, if (realdev) { dev = cnic_from_netdev(realdev); if (dev) { - vid |= VLAN_TAG_PRESENT; + vid |= VLAN_CFI_MASK; /* make non-zero */ cnic_rcv_netevent(dev->cnic_priv, event, vid); cnic_put(dev); } diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index 2d6f090bf644..983245c0867c 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ 
b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -1169,7 +1169,7 @@ static int bcmgenet_power_down(struct bcmgenet_priv *priv, break; } - return 0; + return ret; } static void bcmgenet_power_up(struct bcmgenet_priv *priv, @@ -3612,36 +3612,6 @@ static int bcmgenet_remove(struct platform_device *pdev) } #ifdef CONFIG_PM_SLEEP -static int bcmgenet_suspend(struct device *d) -{ - struct net_device *dev = dev_get_drvdata(d); - struct bcmgenet_priv *priv = netdev_priv(dev); - int ret = 0; - - if (!netif_running(dev)) - return 0; - - netif_device_detach(dev); - - bcmgenet_netif_stop(dev); - - if (!device_may_wakeup(d)) - phy_suspend(dev->phydev); - - /* Prepare the device for Wake-on-LAN and switch to the slow clock */ - if (device_may_wakeup(d) && priv->wolopts) { - ret = bcmgenet_power_down(priv, GENET_POWER_WOL_MAGIC); - clk_prepare_enable(priv->clk_wol); - } else if (priv->internal_phy) { - ret = bcmgenet_power_down(priv, GENET_POWER_PASSIVE); - } - - /* Turn off the clocks */ - clk_disable_unprepare(priv->clk); - - return ret; -} - static int bcmgenet_resume(struct device *d) { struct net_device *dev = dev_get_drvdata(d); @@ -3719,6 +3689,39 @@ out_clk_disable: clk_disable_unprepare(priv->clk); return ret; } + +static int bcmgenet_suspend(struct device *d) +{ + struct net_device *dev = dev_get_drvdata(d); + struct bcmgenet_priv *priv = netdev_priv(dev); + int ret = 0; + + if (!netif_running(dev)) + return 0; + + netif_device_detach(dev); + + bcmgenet_netif_stop(dev); + + if (!device_may_wakeup(d)) + phy_suspend(dev->phydev); + + /* Prepare the device for Wake-on-LAN and switch to the slow clock */ + if (device_may_wakeup(d) && priv->wolopts) { + ret = bcmgenet_power_down(priv, GENET_POWER_WOL_MAGIC); + clk_prepare_enable(priv->clk_wol); + } else if (priv->internal_phy) { + ret = bcmgenet_power_down(priv, GENET_POWER_PASSIVE); + } + + /* Turn off the clocks */ + clk_disable_unprepare(priv->clk); + + if (ret) + bcmgenet_resume(d); + + return ret; +} #endif /* CONFIG_PM_SLEEP */ static SIMPLE_DEV_PM_OPS(bcmgenet_pm_ops, bcmgenet_suspend, bcmgenet_resume); diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c index 2fbd027f0148..57582efa362d 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c @@ -186,6 +186,8 @@ void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv, } reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); + if (!(reg & MPD_EN)) + return; /* already powered up so skip the rest */ reg &= ~MPD_EN; bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c index a6cbaca37e94..aceb9b7b55bd 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmmii.c +++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c @@ -226,7 +226,8 @@ int bcmgenet_mii_config(struct net_device *dev, bool init) * capabilities, use that knowledge to also configure the * Reverse MII interface correctly. */ - if (dev->phydev->supported & PHY_1000BT_FEATURES) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + dev->phydev->supported)) port_ctrl = PORT_MODE_EXT_RVMII_50; else port_ctrl = PORT_MODE_EXT_RVMII_25; @@ -317,7 +318,7 @@ int bcmgenet_mii_probe(struct net_device *dev) return ret; } - phydev->advertising = phydev->supported; + linkmode_copy(phydev->advertising, phydev->supported); /* The internal PHY has its link interrupts routed to the * Ethernet MAC ISRs. 
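Two related bcmgenet fixes above are worth connecting: bcmgenet_power_down now propagates its error, and bcmgenet_suspend is moved below bcmgenet_resume so the suspend path can call the resume path to unwind a failed power-down. The shape of that pattern, reduced to its essentials (all helper names here are hypothetical):

    /* Illustrative: if the final step of suspend fails, run the resume
     * path so the device is left in a consistent, usable state.
     */
    static int example_suspend(struct device *d)
    {
            int ret;

            example_quiesce(d);             /* detach, stop queues, etc. */
            ret = example_power_down(d);    /* may fail */
            if (ret)
                    example_resume(d);      /* best-effort unwind */
            return ret;
    }
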
On GENETv5 there is a hardware issue diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index 432c3b867084..3b1397af81f7 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -66,11 +66,6 @@ #include #include -#ifdef CONFIG_SPARC -#include -#include -#endif - #define BAR_0 0 #define BAR_2 2 @@ -2157,7 +2152,8 @@ static void tg3_phy_start(struct tg3 *tp) phydev->speed = tp->link_config.speed; phydev->duplex = tp->link_config.duplex; phydev->autoneg = tp->link_config.autoneg; - phydev->advertising = tp->link_config.advertising; + ethtool_convert_legacy_u32_to_link_mode( + phydev->advertising, tp->link_config.advertising); } phy_start(phydev); @@ -4057,8 +4053,9 @@ static int tg3_power_down_prepare(struct tg3 *tp) do_low_power = false; if ((tp->phy_flags & TG3_PHYFLG_IS_CONNECTED) && !(tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising) = { 0, }; struct phy_device *phydev; - u32 phyid, advertising; + u32 phyid; phydev = mdiobus_get_phy(tp->mdio_bus, tp->phy_addr); @@ -4067,25 +4064,33 @@ static int tg3_power_down_prepare(struct tg3 *tp) tp->link_config.speed = phydev->speed; tp->link_config.duplex = phydev->duplex; tp->link_config.autoneg = phydev->autoneg; - tp->link_config.advertising = phydev->advertising; - - advertising = ADVERTISED_TP | - ADVERTISED_Pause | - ADVERTISED_Autoneg | - ADVERTISED_10baseT_Half; + ethtool_convert_link_mode_to_legacy_u32( + &tp->link_config.advertising, + phydev->advertising); + + linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, advertising); + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, + advertising); + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + advertising); + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, + advertising); if (tg3_flag(tp, ENABLE_ASF) || device_should_wake) { - if (tg3_flag(tp, WOL_SPEED_100MB)) - advertising |= - ADVERTISED_100baseT_Half | - ADVERTISED_100baseT_Full | - ADVERTISED_10baseT_Full; - else - advertising |= ADVERTISED_10baseT_Full; + if (tg3_flag(tp, WOL_SPEED_100MB)) { + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, + advertising); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + advertising); + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, + advertising); + } else { + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, + advertising); + } } - phydev->advertising = advertising; - + linkmode_copy(phydev->advertising, advertising); phy_start_aneg(phydev); phyid = phydev->drv->phy_id & phydev->drv->phy_id_mask; @@ -6135,10 +6140,16 @@ static int tg3_setup_phy(struct tg3 *tp, bool force_reset) } /* tp->lock must be held */ -static u64 tg3_refclk_read(struct tg3 *tp) +static u64 tg3_refclk_read(struct tg3 *tp, struct ptp_system_timestamp *sts) { - u64 stamp = tr32(TG3_EAV_REF_CLCK_LSB); - return stamp | (u64)tr32(TG3_EAV_REF_CLCK_MSB) << 32; + u64 stamp; + + ptp_read_system_prets(sts); + stamp = tr32(TG3_EAV_REF_CLCK_LSB); + ptp_read_system_postts(sts); + stamp |= (u64)tr32(TG3_EAV_REF_CLCK_MSB) << 32; + + return stamp; } /* tp->lock must be held */ @@ -6229,13 +6240,14 @@ static int tg3_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) return 0; } -static int tg3_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts) +static int tg3_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts, + struct ptp_system_timestamp *sts) { u64 ns; struct tg3 *tp = container_of(ptp, struct tg3, ptp_info); tg3_full_lock(tp, 0); - ns = tg3_refclk_read(tp); + ns = tg3_refclk_read(tp, 
sts); ns += tp->ptp_adjust; tg3_full_unlock(tp); @@ -6330,7 +6342,7 @@ static const struct ptp_clock_info tg3_ptp_caps = { .pps = 0, .adjfreq = tg3_ptp_adjfreq, .adjtime = tg3_ptp_adjtime, - .gettime64 = tg3_ptp_gettime, + .gettimex64 = tg3_ptp_gettimex, .settime64 = tg3_ptp_settime, .enable = tg3_ptp_enable, }; @@ -16973,32 +16985,6 @@ static int tg3_get_invariants(struct tg3 *tp, const struct pci_device_id *ent) return err; } -#ifdef CONFIG_SPARC -static int tg3_get_macaddr_sparc(struct tg3 *tp) -{ - struct net_device *dev = tp->dev; - struct pci_dev *pdev = tp->pdev; - struct device_node *dp = pci_device_to_OF_node(pdev); - const unsigned char *addr; - int len; - - addr = of_get_property(dp, "local-mac-address", &len); - if (addr && len == ETH_ALEN) { - memcpy(dev->dev_addr, addr, ETH_ALEN); - return 0; - } - return -ENODEV; -} - -static int tg3_get_default_macaddr_sparc(struct tg3 *tp) -{ - struct net_device *dev = tp->dev; - - memcpy(dev->dev_addr, idprom->id_ethaddr, ETH_ALEN); - return 0; -} -#endif - static int tg3_get_device_address(struct tg3 *tp) { struct net_device *dev = tp->dev; @@ -17006,10 +16992,8 @@ static int tg3_get_device_address(struct tg3 *tp) int addr_ok = 0; int err; -#ifdef CONFIG_SPARC - if (!tg3_get_macaddr_sparc(tp)) + if (!eth_platform_get_mac_address(&tp->pdev->dev, dev->dev_addr)) return 0; -#endif if (tg3_flag(tp, IS_SSB_CORE)) { err = ssb_gige_get_macaddr(tp->pdev, &dev->dev_addr[0]); @@ -17071,13 +17055,8 @@ static int tg3_get_device_address(struct tg3 *tp) } } - if (!is_valid_ether_addr(&dev->dev_addr[0])) { -#ifdef CONFIG_SPARC - if (!tg3_get_default_macaddr_sparc(tp)) - return 0; -#endif + if (!is_valid_ether_addr(&dev->dev_addr[0])) return -EINVAL; - } return 0; } diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c index 4c816e5a841f..b126926ef7f5 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -4091,7 +4091,7 @@ static int macb_probe(struct platform_device *pdev) if (mac) { ether_addr_copy(bp->dev->dev_addr, mac); } else { - err = of_get_nvmem_mac_address(np, bp->dev->dev_addr); + err = nvmem_get_mac_address(&pdev->dev, bp->dev->dev_addr); if (err) { if (err == -EPROBE_DEFER) goto err_out_free_netdev; diff --git a/drivers/net/ethernet/cavium/common/cavium_ptp.c b/drivers/net/ethernet/cavium/common/cavium_ptp.c index 6aeb1045c302..73632b843749 100644 --- a/drivers/net/ethernet/cavium/common/cavium_ptp.c +++ b/drivers/net/ethernet/cavium/common/cavium_ptp.c @@ -277,10 +277,6 @@ static int cavium_ptp_probe(struct pci_dev *pdev, writeq(clock_comp, clock->reg_base + PTP_CLOCK_COMP); clock->ptp_clock = ptp_clock_register(&clock->ptp_info, dev); - if (!clock->ptp_clock) { - err = -ENODEV; - goto error_stop; - } if (IS_ERR(clock->ptp_clock)) { err = PTR_ERR(clock->ptp_clock); goto error_stop; diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c index 4b3aecf98f2a..5359c1021f42 100644 --- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c +++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c @@ -1080,8 +1080,11 @@ static int octeon_mgmt_open(struct net_device *netdev) /* Set the mode of the interface, RGMII/MII. 
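The tg3 PTP change above switches from .gettime64 to .gettimex64, which lets the driver bracket the device clock read with system timestamps via ptp_read_system_prets()/ptp_read_system_postts() so the core can correlate the two clocks precisely. The skeleton of such a callback (the clock-read helper is a hypothetical stand-in for tg3_refclk_read):

    static int example_ptp_gettimex(struct ptp_clock_info *ptp,
                                    struct timespec64 *ts,
                                    struct ptp_system_timestamp *sts)
    {
            u64 ns;

            /* sample system time immediately around the HW register read */
            ptp_read_system_prets(sts);
            ns = example_read_device_clock();       /* hypothetical */
            ptp_read_system_postts(sts);

            *ts = ns_to_timespec64(ns);
            return 0;
    }
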
*/ if (OCTEON_IS_MODEL(OCTEON_CN6XXX) && netdev->phydev) { union cvmx_agl_prtx_ctl agl_prtx_ctl; - int rgmii_mode = (netdev->phydev->supported & - (SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full)) != 0; + int rgmii_mode = + (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + netdev->phydev->supported) | + linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + netdev->phydev->supported)) != 0; agl_prtx_ctl.u64 = cvmx_read_csr(p->agl_prt_ctl); agl_prtx_ctl.s.mode = rgmii_mode ? 0 : 1; diff --git a/drivers/net/ethernet/chelsio/Kconfig b/drivers/net/ethernet/chelsio/Kconfig index e2cdfa75673f..e8001e974411 100644 --- a/drivers/net/ethernet/chelsio/Kconfig +++ b/drivers/net/ethernet/chelsio/Kconfig @@ -24,7 +24,8 @@ config CHELSIO_T1 ---help--- This driver supports Chelsio gigabit and 10-gigabit Ethernet cards. More information about adapter features and - performance tuning is in <file:Documentation/networking/cxgb.txt>. + performance tuning is in + <file:Documentation/networking/device_drivers/chelsio/cxgb.txt>. For general information about Chelsio and our products, visit our website at <http://www.chelsio.com>. diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h index b16f4b3ef4c5..2d1ca920601e 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h @@ -404,6 +404,7 @@ struct adapter_params { bool fr_nsmr_tpte_wr_support; /* FW support for FR_NSMR_TPTE_WR */ u8 fw_caps_support; /* 32-bit Port Capabilities */ bool filter2_wr_support; /* FW support for FILTER2_WR */ + unsigned int viid_smt_extn_support:1; /* FW returns vin and smt index */ /* MPS Buffer Group Map[per Port]. Bit i is set if buffer group i is * used by the Port @@ -592,6 +593,13 @@ struct port_info { bool ptp_enable; struct sched_table *sched_tbl; u32 eth_flags; + + /* viid and smt fields either returned by fw + * or decoded by the driver by parsing the viid.
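Several hunks in this section (bcmgenet and tg3 above, octeon_mgmt here, dpaa further down) belong to the same tree-wide conversion from legacy u32 link-mode masks to phylib's linkmode bitmaps. The recurring idioms, collected into one sketch using the in-tree helpers seen in these hunks:

    static void example_linkmode_idioms(struct phy_device *phydev,
                                        u32 legacy_u32)
    {
            __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };

            /* legacy u32 mask -> linkmode bitmap */
            ethtool_convert_legacy_u32_to_link_mode(mask, legacy_u32);

            /* restrict advertised modes to what the MAC supports */
            linkmode_and(phydev->advertising, phydev->supported, mask);

            /* copy a whole bitmap, or test a single mode */
            linkmode_copy(phydev->advertising, phydev->supported);
            if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
                                  phydev->supported))
                    ;       /* gigabit-capable */
    }
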
+ */ + u8 vin; + u8 vivld; + u8 smt_idx; }; struct dentry; @@ -1757,7 +1765,7 @@ int t4_cfg_pfvf(struct adapter *adap, unsigned int mbox, unsigned int pf, unsigned int nexact, unsigned int rcaps, unsigned int wxcaps); int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port, unsigned int pf, unsigned int vf, unsigned int nmac, u8 *mac, - unsigned int *rss_size); + unsigned int *rss_size, u8 *vivld, u8 *vin); int t4_free_vi(struct adapter *adap, unsigned int mbox, unsigned int pf, unsigned int vf, unsigned int viid); @@ -1783,7 +1791,7 @@ int t4_free_mac_filt(struct adapter *adap, unsigned int mbox, unsigned int viid, unsigned int naddr, const u8 **addr, bool sleep_ok); int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid, - int idx, const u8 *addr, bool persist, bool add_smt); + int idx, const u8 *addr, bool persist, u8 *smt_idx); int t4_set_addr_hash(struct adapter *adap, unsigned int mbox, unsigned int viid, bool ucast, u64 vec, bool sleep_ok); int t4_enable_vi_params(struct adapter *adap, unsigned int mbox, diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c index cab492ec8f59..b0ff9fa183f4 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c @@ -378,19 +378,7 @@ static int cim_qcfg_show(struct seq_file *seq, void *v) QUEREMFLITS_G(p[2]) * 16); return 0; } - -static int cim_qcfg_open(struct inode *inode, struct file *file) -{ - return single_open(file, cim_qcfg_show, inode->i_private); -} - -static const struct file_operations cim_qcfg_fops = { - .owner = THIS_MODULE, - .open = cim_qcfg_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(cim_qcfg); static int cimq_show(struct seq_file *seq, void *v, int idx) { @@ -860,8 +848,7 @@ static int tx_rate_show(struct seq_file *seq, void *v) } return 0; } - -DEFINE_SIMPLE_DEBUGFS_FILE(tx_rate); +DEFINE_SHOW_ATTRIBUTE(tx_rate); static int cctrl_tbl_show(struct seq_file *seq, void *v) { @@ -893,8 +880,7 @@ static int cctrl_tbl_show(struct seq_file *seq, void *v) kfree(incr); return 0; } - -DEFINE_SIMPLE_DEBUGFS_FILE(cctrl_tbl); +DEFINE_SHOW_ATTRIBUTE(cctrl_tbl); /* Format a value in a unit that differs from the value's native unit by the * given factor. @@ -955,8 +941,7 @@ static int clk_show(struct seq_file *seq, void *v) return 0; } - -DEFINE_SIMPLE_DEBUGFS_FILE(clk); +DEFINE_SHOW_ATTRIBUTE(clk); /* Firmware Device Log dump. */ static const char * const devlog_level_strings[] = { @@ -1990,22 +1975,10 @@ static int sensors_show(struct seq_file *seq, void *v) return 0; } - -DEFINE_SIMPLE_DEBUGFS_FILE(sensors); +DEFINE_SHOW_ATTRIBUTE(sensors); #if IS_ENABLED(CONFIG_IPV6) -static int clip_tbl_open(struct inode *inode, struct file *file) -{ - return single_open(file, clip_tbl_show, inode->i_private); -} - -static const struct file_operations clip_tbl_debugfs_fops = { - .owner = THIS_MODULE, - .open = clip_tbl_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release -}; +DEFINE_SHOW_ATTRIBUTE(clip_tbl); #endif /*RSS Table. @@ -2208,8 +2181,7 @@ static int rss_config_show(struct seq_file *seq, void *v) return 0; } - -DEFINE_SIMPLE_DEBUGFS_FILE(rss_config); +DEFINE_SHOW_ATTRIBUTE(rss_config); /* RSS Secret Key. 
*/ @@ -2628,19 +2600,7 @@ static int resources_show(struct seq_file *seq, void *v) return 0; } - -static int resources_open(struct inode *inode, struct file *file) -{ - return single_open(file, resources_show, inode->i_private); -} - -static const struct file_operations resources_debugfs_fops = { - .owner = THIS_MODULE, - .open = resources_open, - .read = seq_read, - .llseek = seq_lseek, - .release = seq_release, -}; +DEFINE_SHOW_ATTRIBUTE(resources); /** * ethqset2pinfo - return port_info of an Ethernet Queue Set @@ -3233,8 +3193,7 @@ static int tid_info_show(struct seq_file *seq, void *v) t4_read_reg(adap, LE_DB_ACT_CNT_IPV6_A)); return 0; } - -DEFINE_SIMPLE_DEBUGFS_FILE(tid_info); +DEFINE_SHOW_ATTRIBUTE(tid_info); static void add_debugfs_mem(struct adapter *adap, const char *name, unsigned int idx, unsigned int size_mb) @@ -3364,21 +3323,9 @@ static int meminfo_show(struct seq_file *seq, void *v) return 0; } +DEFINE_SHOW_ATTRIBUTE(meminfo); -static int meminfo_open(struct inode *inode, struct file *file) -{ - return single_open(file, meminfo_show, inode->i_private); -} - -static const struct file_operations meminfo_fops = { - .owner = THIS_MODULE, - .open = meminfo_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; - -static int chcr_show(struct seq_file *seq, void *v) +static int chcr_stats_show(struct seq_file *seq, void *v) { struct adapter *adap = seq->private; @@ -3399,20 +3346,7 @@ static int chcr_show(struct seq_file *seq, void *v) atomic_read(&adap->chcr_stats.ipsec_cnt)); return 0; } - - -static int chcr_stats_open(struct inode *inode, struct file *file) -{ - return single_open(file, chcr_show, inode->i_private); -} - -static const struct file_operations chcr_stats_debugfs_fops = { - .owner = THIS_MODULE, - .open = chcr_stats_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(chcr_stats); #define PRINT_ADAP_STATS(string, value) \ seq_printf(seq, "%-25s %-20llu\n", (string), \ @@ -3573,8 +3507,7 @@ static int tp_stats_show(struct seq_file *seq, void *v) return 0; } - -DEFINE_SIMPLE_DEBUGFS_FILE(tp_stats); +DEFINE_SHOW_ATTRIBUTE(tp_stats); /* Add an array of Debug FS files. 
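Note the rename of chcr_show to chcr_stats_show in the hunk above: DEFINE_SHOW_ATTRIBUTE(name) token-pastes name##_show and name##_fops, so the seq_file show routine must follow the _show naming convention for the macro to resolve. In miniature:

    static int foo_show(struct seq_file *seq, void *v)
    {
            seq_puts(seq, "hello\n");       /* emit the file contents */
            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(foo);     /* generates foo_open() and foo_fops */
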
*/ @@ -3603,7 +3536,7 @@ int t4_setup_debugfs(struct adapter *adap) { "cim_pif_la", &cim_pif_la_fops, 0400, 0 }, { "cim_ma_la", &cim_ma_la_fops, 0400, 0 }, { "cim_qcfg", &cim_qcfg_fops, 0400, 0 }, - { "clk", &clk_debugfs_fops, 0400, 0 }, + { "clk", &clk_fops, 0400, 0 }, { "devlog", &devlog_fops, 0400, 0 }, { "mboxlog", &mboxlog_fops, 0400, 0 }, { "mbox0", &mbox_debugfs_fops, 0600, 0 }, @@ -3621,11 +3554,11 @@ int t4_setup_debugfs(struct adapter *adap) { "l2t", &t4_l2t_fops, 0400, 0}, { "mps_tcam", &mps_tcam_debugfs_fops, 0400, 0 }, { "rss", &rss_debugfs_fops, 0400, 0 }, - { "rss_config", &rss_config_debugfs_fops, 0400, 0 }, + { "rss_config", &rss_config_fops, 0400, 0 }, { "rss_key", &rss_key_debugfs_fops, 0400, 0 }, { "rss_pf_config", &rss_pf_config_debugfs_fops, 0400, 0 }, { "rss_vf_config", &rss_vf_config_debugfs_fops, 0400, 0 }, - { "resources", &resources_debugfs_fops, 0400, 0 }, + { "resources", &resources_fops, 0400, 0 }, #ifdef CONFIG_CHELSIO_T4_DCB { "dcb_info", &dcb_info_debugfs_fops, 0400, 0 }, #endif @@ -3644,18 +3577,18 @@ int t4_setup_debugfs(struct adapter *adap) { "obq_ncsi", &cim_obq_fops, 0400, 5 }, { "tp_la", &tp_la_fops, 0400, 0 }, { "ulprx_la", &ulprx_la_fops, 0400, 0 }, - { "sensors", &sensors_debugfs_fops, 0400, 0 }, + { "sensors", &sensors_fops, 0400, 0 }, { "pm_stats", &pm_stats_debugfs_fops, 0400, 0 }, - { "tx_rate", &tx_rate_debugfs_fops, 0400, 0 }, - { "cctrl", &cctrl_tbl_debugfs_fops, 0400, 0 }, + { "tx_rate", &tx_rate_fops, 0400, 0 }, + { "cctrl", &cctrl_tbl_fops, 0400, 0 }, #if IS_ENABLED(CONFIG_IPV6) - { "clip_tbl", &clip_tbl_debugfs_fops, 0400, 0 }, + { "clip_tbl", &clip_tbl_fops, 0400, 0 }, #endif - { "tids", &tid_info_debugfs_fops, 0400, 0}, + { "tids", &tid_info_fops, 0400, 0}, { "blocked_fl", &blocked_fl_fops, 0600, 0 }, { "meminfo", &meminfo_fops, 0400, 0 }, - { "crypto", &chcr_stats_debugfs_fops, 0400, 0 }, - { "tp_stats", &tp_stats_debugfs_fops, 0400, 0 }, + { "crypto", &chcr_stats_fops, 0400, 0 }, + { "tp_stats", &tp_stats_fops, 0400, 0 }, }; /* Debug FS nodes common to all T5 and later adapters. 
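For reference, the generic macro replacing DEFINE_SIMPLE_DEBUGFS_FILE throughout this file lives in <linux/seq_file.h> and expands to essentially the boilerplate being deleted (reproduced here from kernels of this era; check your tree for the exact form):

    #define DEFINE_SHOW_ATTRIBUTE(__name)                                   \
    static int __name ## _open(struct inode *inode, struct file *file)     \
    {                                                                       \
            return single_open(file, __name ## _show, inode->i_private);   \
    }                                                                       \
                                                                            \
    static const struct file_operations __name ## _fops = {                \
            .owner          = THIS_MODULE,                                  \
            .open           = __name ## _open,                              \
            .read           = seq_read,                                    \
            .llseek         = seq_lseek,                                   \
            .release        = single_release,                              \
    }
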
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.h index 23f43a0f8950..ba95e13d52da 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.h +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.h @@ -37,19 +37,6 @@ #include -#define DEFINE_SIMPLE_DEBUGFS_FILE(name) \ -static int name##_open(struct inode *inode, struct file *file) \ -{ \ - return single_open(file, name##_show, inode->i_private); \ -} \ -static const struct file_operations name##_debugfs_fops = { \ - .owner = THIS_MODULE, \ - .open = name##_open, \ - .read = seq_read, \ - .llseek = seq_lseek, \ - .release = single_release \ -} - struct t4_debugfs_entry { const char *name; const struct file_operations *ops; diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index d49db46254cd..6ba9099ca7fe 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -453,7 +453,7 @@ static int link_start(struct net_device *dev) if (ret == 0) { ret = t4_change_mac(pi->adapter, mb, pi->viid, pi->xact_addr_filt, dev->dev_addr, true, - true); + &pi->smt_idx); if (ret >= 0) { pi->xact_addr_filt = ret; ret = 0; @@ -1584,28 +1584,6 @@ unsigned int cxgb4_best_aligned_mtu(const unsigned short *mtus, } EXPORT_SYMBOL(cxgb4_best_aligned_mtu); -/** - * cxgb4_tp_smt_idx - Get the Source Mac Table index for this VI - * @chip: chip type - * @viid: VI id of the given port - * - * Return the SMT index for this VI. - */ -unsigned int cxgb4_tp_smt_idx(enum chip_type chip, unsigned int viid) -{ - /* In T4/T5, SMT contains 256 SMAC entries organized in - * 128 rows of 2 entries each. - * In T6, SMT contains 256 SMAC entries in 256 rows. - * TODO: The below code needs to be updated when we add support - * for 256 VFs. - */ - if (CHELSIO_CHIP_VERSION(chip) <= CHELSIO_T5) - return ((viid & 0x7f) << 1); - else - return (viid & 0x7f); -} -EXPORT_SYMBOL(cxgb4_tp_smt_idx); - /** * cxgb4_port_chan - get the HW channel of a port * @dev: the net device for the port @@ -2280,8 +2258,6 @@ static int cxgb_up(struct adapter *adap) #if IS_ENABLED(CONFIG_IPV6) update_clip(adap); #endif - /* Initialize hash mac addr list*/ - INIT_LIST_HEAD(&adap->mac_hlist); return err; irq_err: @@ -2303,6 +2279,7 @@ static void cxgb_down(struct adapter *adapter) t4_sge_stop(adapter); t4_free_sge_resources(adapter); + adapter->flags &= ~FULL_INIT_DONE; } @@ -2669,7 +2646,7 @@ static void cxgb4_mgmt_fill_vf_station_mac_addr(struct adapter *adap) for (vf = 0, nvfs = pci_sriov_get_totalvfs(adap->pdev); vf < nvfs; vf++) { - macaddr[5] = adap->pf * 16 + vf; + macaddr[5] = adap->pf * nvfs + vf; ether_addr_copy(adap->vfinfo[vf].vf_mac_addr, macaddr); } } @@ -2863,7 +2840,8 @@ static int cxgb_set_mac_addr(struct net_device *dev, void *p) return -EADDRNOTAVAIL; ret = t4_change_mac(pi->adapter, pi->adapter->pf, pi->viid, - pi->xact_addr_filt, addr->sa_data, true, true); + pi->xact_addr_filt, addr->sa_data, true, + &pi->smt_idx); if (ret < 0) return ret; @@ -4467,6 +4445,15 @@ static int adap_init0(struct adapter *adap) adap->params.filter2_wr_support = (ret == 0 && val[0] != 0); } + /* Check if FW supports returning vin and smt index. + * If this is not supported, driver will interpret + * these values from viid. 
+ */ + params[0] = FW_PARAM_DEV(OPAQUE_VIID_SMT_EXTN); + ret = t4_query_params(adap, adap->mbox, adap->pf, 0, + 1, params, val); + adap->params.viid_smt_extn_support = (ret == 0 && val[0] != 0); + /* * Get device capabilities so we can determine what resources we need * to manage. @@ -4777,14 +4764,26 @@ static pci_ers_result_t eeh_slot_reset(struct pci_dev *pdev) return PCI_ERS_RESULT_DISCONNECT; for_each_port(adap, i) { - struct port_info *p = adap2pinfo(adap, i); + struct port_info *pi = adap2pinfo(adap, i); + u8 vivld = 0, vin = 0; - ret = t4_alloc_vi(adap, adap->mbox, p->tx_chan, adap->pf, 0, 1, - NULL, NULL); + ret = t4_alloc_vi(adap, adap->mbox, pi->tx_chan, adap->pf, 0, 1, + NULL, NULL, &vivld, &vin); if (ret < 0) return PCI_ERS_RESULT_DISCONNECT; - p->viid = ret; - p->xact_addr_filt = -1; + pi->viid = ret; + pi->xact_addr_filt = -1; + /* If fw supports returning the VIN as part of FW_VI_CMD, + * save the returned values. + */ + if (adap->params.viid_smt_extn_support) { + pi->vivld = vivld; + pi->vin = vin; + } else { + /* Retrieve the values from VIID */ + pi->vivld = FW_VIID_VIVLD_G(pi->viid); + pi->vin = FW_VIID_VIN_G(pi->viid); + } } t4_load_mtus(adap, adap->params.mtus, adap->params.a_wnd, @@ -5621,6 +5620,9 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent) (is_t5(adapter->params.chip) ? STATMODE_V(0) : T6_STATMODE_V(0))); + /* Initialize hash mac addr list */ + INIT_LIST_HEAD(&adapter->mac_hlist); + for_each_port(adapter, i) { netdev = alloc_etherdev_mq(sizeof(struct port_info), MAX_ETH_QSETS); @@ -5899,6 +5901,7 @@ fw_attach_fail: static void remove_one(struct pci_dev *pdev) { struct adapter *adapter = pci_get_drvdata(pdev); + struct hash_mac_addr *entry, *tmp; if (!adapter) { pci_release_regions(pdev); @@ -5948,6 +5951,12 @@ static void remove_one(struct pci_dev *pdev) if (adapter->num_uld || adapter->num_ofld_uld) t4_uld_mem_free(adapter); free_some_resources(adapter); + list_for_each_entry_safe(entry, tmp, &adapter->mac_hlist, + list) { + list_del(&entry->list); + kfree(entry); + } + #if IS_ENABLED(CONFIG_IPV6) t4_cleanup_clip_tbl(adapter); #endif diff --git a/drivers/net/ethernet/chelsio/cxgb4/l2t.c b/drivers/net/ethernet/chelsio/cxgb4/l2t.c index 99022c0898b5..4852febbfec3 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/l2t.c +++ b/drivers/net/ethernet/chelsio/cxgb4/l2t.c @@ -495,14 +495,11 @@ u64 cxgb4_select_ntuple(struct net_device *dev, ntuple |= (u64)IPPROTO_TCP << tp->protocol_shift; if (tp->vnic_shift >= 0 && (tp->ingress_config & VNIC_F)) { - u32 viid = cxgb4_port_viid(dev); - u32 vf = FW_VIID_VIN_G(viid); - u32 pf = FW_VIID_PFN_G(viid); - u32 vld = FW_VIID_VIVLD_G(viid); - - ntuple |= (u64)(FT_VNID_ID_VF_V(vf) | - FT_VNID_ID_PF_V(pf) | - FT_VNID_ID_VLD_V(vld)) << tp->vnic_shift; + struct port_info *pi = (struct port_info *)netdev_priv(dev); + + ntuple |= (u64)(FT_VNID_ID_VF_V(pi->vin) | + FT_VNID_ID_PF_V(adap->pf) | + FT_VNID_ID_VLD_V(pi->vivld)) << tp->vnic_shift; } return ntuple; diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c index cb523949c812..e8c34292a0ec 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c +++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c @@ -5880,7 +5880,6 @@ int t4_set_trace_filter(struct adapter *adap, const struct trace_params *tp, { int i, ofst = idx * 4; u32 data_reg, mask_reg, cfg; - u32 multitrc = TRCMULTIFILTER_F; if (!enable) { t4_write_reg(adap, MPS_TRC_FILTER_MATCH_CTL_A_A + ofst, 0); @@ -5900,7 +5899,6 @@ int t4_set_trace_filter(struct adapter 
*adap, const struct trace_params *tp, * maximum packet capture size of 9600 bytes is recommended. * Also in this mode, only trace0 can be enabled and running. */ - multitrc = 0; if (tp->snap_len > 9600 || idx) return -EINVAL; } @@ -7141,21 +7139,10 @@ int t4_fixup_host_params(struct adapter *adap, unsigned int page_size, unsigned int cache_line_size) { unsigned int page_shift = fls(page_size) - 1; - unsigned int sge_hps = page_shift - 10; unsigned int stat_len = cache_line_size > 64 ? 128 : 64; unsigned int fl_align = cache_line_size < 32 ? 32 : cache_line_size; unsigned int fl_align_log = fls(fl_align) - 1; - t4_write_reg(adap, SGE_HOST_PAGE_SIZE_A, - HOSTPAGESIZEPF0_V(sge_hps) | - HOSTPAGESIZEPF1_V(sge_hps) | - HOSTPAGESIZEPF2_V(sge_hps) | - HOSTPAGESIZEPF3_V(sge_hps) | - HOSTPAGESIZEPF4_V(sge_hps) | - HOSTPAGESIZEPF5_V(sge_hps) | - HOSTPAGESIZEPF6_V(sge_hps) | - HOSTPAGESIZEPF7_V(sge_hps)); - if (is_t4(adap->params.chip)) { t4_set_reg_field(adap, SGE_CONTROL_A, INGPADBOUNDARY_V(INGPADBOUNDARY_M) | @@ -7488,7 +7475,7 @@ int t4_cfg_pfvf(struct adapter *adap, unsigned int mbox, unsigned int pf, */ int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port, unsigned int pf, unsigned int vf, unsigned int nmac, u8 *mac, - unsigned int *rss_size) + unsigned int *rss_size, u8 *vivld, u8 *vin) { int ret; struct fw_vi_cmd c; @@ -7523,6 +7510,13 @@ int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port, } if (rss_size) *rss_size = FW_VI_CMD_RSSSIZE_G(be16_to_cpu(c.rsssize_pkd)); + + if (vivld) + *vivld = FW_VI_CMD_VFVLD_G(be32_to_cpu(c.alloc_to_len16)); + + if (vin) + *vin = FW_VI_CMD_VIN_G(be32_to_cpu(c.alloc_to_len16)); + return FW_VI_CMD_VIID_G(be16_to_cpu(c.type_viid)); } @@ -7980,7 +7974,7 @@ int t4_free_mac_filt(struct adapter *adap, unsigned int mbox, * MAC value. */ int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid, - int idx, const u8 *addr, bool persist, bool add_smt) + int idx, const u8 *addr, bool persist, u8 *smt_idx) { int ret, mode; struct fw_vi_mac_cmd c; @@ -7989,7 +7983,7 @@ int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid, if (idx < 0) /* new allocation */ idx = persist ? FW_VI_MAC_ADD_PERSIST_MAC : FW_VI_MAC_ADD_MAC; - mode = add_smt ? FW_VI_MAC_SMT_AND_MPSTCAM : FW_VI_MAC_MPS_TCAM_ENTRY; + mode = smt_idx ? FW_VI_MAC_SMT_AND_MPSTCAM : FW_VI_MAC_MPS_TCAM_ENTRY; memset(&c, 0, sizeof(c)); c.op_to_viid = cpu_to_be32(FW_CMD_OP_V(FW_VI_MAC_CMD) | @@ -8006,6 +8000,23 @@ int t4_change_mac(struct adapter *adap, unsigned int mbox, unsigned int viid, ret = FW_VI_MAC_CMD_IDX_G(be16_to_cpu(p->valid_to_idx)); if (ret >= max_mac_addr) ret = -ENOMEM; + if (smt_idx) { + if (adap->params.viid_smt_extn_support) { + *smt_idx = FW_VI_MAC_CMD_SMTID_G + (be32_to_cpu(c.op_to_viid)); + } else { + /* In T4/T5, SMT contains 256 SMAC entries + * organized in 128 rows of 2 entries each. + * In T6, SMT contains 256 SMAC entries in + * 256 rows. 
+ */ + if (CHELSIO_CHIP_VERSION(adap->params.chip) <= + CHELSIO_T5) + *smt_idx = (viid & FW_VIID_VIN_M) << 1; + else + *smt_idx = (viid & FW_VIID_VIN_M); + } + } } return ret; } @@ -8593,7 +8604,7 @@ int t4_get_link_params(struct port_info *pi, unsigned int *link_okp, { unsigned int fw_caps = pi->adapter->params.fw_caps_support; struct fw_port_cmd port_cmd; - unsigned int action, link_ok, speed, mtu; + unsigned int action, link_ok, mtu; fw_port_cap32_t linkattr; int ret; @@ -8627,7 +8638,6 @@ int t4_get_link_params(struct port_info *pi, unsigned int *link_okp, mtu = FW_PORT_CMD_MTU32_G( be32_to_cpu(port_cmd.u.info32.auxlinfo32_mtu32)); } - speed = fwcap_to_speed(linkattr); *link_okp = link_ok; *speedp = fwcap_to_speed(linkattr); @@ -9374,6 +9384,7 @@ int t4_init_portinfo(struct port_info *pi, int mbox, enum fw_port_type port_type; int mdio_addr; fw_port_cap32_t pcaps, acaps; + u8 vivld = 0, vin = 0; int ret; /* If we haven't yet determined whether we're talking to Firmware @@ -9428,7 +9439,8 @@ int t4_init_portinfo(struct port_info *pi, int mbox, acaps = be32_to_cpu(cmd.u.info32.acaps32); } - ret = t4_alloc_vi(pi->adapter, mbox, port, pf, vf, 1, mac, &rss_size); + ret = t4_alloc_vi(pi->adapter, mbox, port, pf, vf, 1, mac, &rss_size, + &vivld, &vin); if (ret < 0) return ret; @@ -9437,6 +9449,18 @@ int t4_init_portinfo(struct port_info *pi, int mbox, pi->lport = port; pi->rss_size = rss_size; + /* If fw supports returning the VIN as part of FW_VI_CMD, + * save the returned values. + */ + if (adapter->params.viid_smt_extn_support) { + pi->vivld = vivld; + pi->vin = vin; + } else { + /* Retrieve the values from VIID */ + pi->vivld = FW_VIID_VIVLD_G(pi->viid); + pi->vin = FW_VIID_VIN_G(pi->viid); + } + pi->port_type = port_type; pi->mdio_addr = mdio_addr; pi->mod_type = FW_PORT_MOD_TYPE_NA; diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h b/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h index 60df66f4d21c..bf7325f6d553 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h +++ b/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h @@ -217,6 +217,7 @@ CH_PCI_DEVICE_ID_TABLE_DEFINE_BEGIN CH_PCI_ID_TABLE_FENTRY(0x6087), /* Custom T6225-CR */ CH_PCI_ID_TABLE_FENTRY(0x6088), /* Custom T62100-CR */ CH_PCI_ID_TABLE_FENTRY(0x6089), /* Custom T62100-KR */ + CH_PCI_ID_TABLE_FENTRY(0x608a), /* Custom T62100-CR */ CH_PCI_DEVICE_ID_TABLE_DEFINE_END; #endif /* __T4_PCI_ID_TBL_H__ */ diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h index 57584ab32043..1d9b3e1e5f94 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h +++ b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h @@ -1253,6 +1253,7 @@ enum fw_params_param_dev { FW_PARAMS_PARAM_DEV_HMA_SIZE = 0x20, FW_PARAMS_PARAM_DEV_RDMA_WRITE_WITH_IMM = 0x21, FW_PARAMS_PARAM_DEV_RI_WRITE_CMPL_WR = 0x24, + FW_PARAMS_PARAM_DEV_OPAQUE_VIID_SMT_EXTN = 0x27, }; /* @@ -2109,6 +2110,19 @@ struct fw_vi_cmd { #define FW_VI_CMD_FREE_V(x) ((x) << FW_VI_CMD_FREE_S) #define FW_VI_CMD_FREE_F FW_VI_CMD_FREE_V(1U) +#define FW_VI_CMD_VFVLD_S 24 +#define FW_VI_CMD_VFVLD_M 0x1 +#define FW_VI_CMD_VFVLD_V(x) ((x) << FW_VI_CMD_VFVLD_S) +#define FW_VI_CMD_VFVLD_G(x) \ + (((x) >> FW_VI_CMD_VFVLD_S) & FW_VI_CMD_VFVLD_M) +#define FW_VI_CMD_VFVLD_F FW_VI_CMD_VFVLD_V(1U) + +#define FW_VI_CMD_VIN_S 16 +#define FW_VI_CMD_VIN_M 0xff +#define FW_VI_CMD_VIN_V(x) ((x) << FW_VI_CMD_VIN_S) +#define FW_VI_CMD_VIN_G(x) \ + (((x) >> FW_VI_CMD_VIN_S) & FW_VI_CMD_VIN_M) + #define FW_VI_CMD_VIID_S 0 #define 
FW_VI_CMD_VIID_M 0xfff #define FW_VI_CMD_VIID_V(x) ((x) << FW_VI_CMD_VIID_S) @@ -2182,6 +2196,12 @@ struct fw_vi_mac_cmd { } u; }; +#define FW_VI_MAC_CMD_SMTID_S 12 +#define FW_VI_MAC_CMD_SMTID_M 0xff +#define FW_VI_MAC_CMD_SMTID_V(x) ((x) << FW_VI_MAC_CMD_SMTID_S) +#define FW_VI_MAC_CMD_SMTID_G(x) \ + (((x) >> FW_VI_MAC_CMD_SMTID_S) & FW_VI_MAC_CMD_SMTID_M) + #define FW_VI_MAC_CMD_VIID_S 0 #define FW_VI_MAC_CMD_VIID_V(x) ((x) << FW_VI_MAC_CMD_VIID_S) diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c index ff84791a0ff8..2fab87e86561 100644 --- a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c @@ -722,6 +722,7 @@ static int adapter_up(struct adapter *adapter) if (adapter->flags & USING_MSIX) name_msix_vecs(adapter); + adapter->flags |= FULL_INIT_DONE; } @@ -747,8 +748,6 @@ static int adapter_up(struct adapter *adapter) enable_rx(adapter); t4vf_sge_start(adapter); - /* Initialize hash mac addr list*/ - INIT_LIST_HEAD(&adapter->mac_hlist); return 0; } @@ -2324,19 +2323,7 @@ static int resources_show(struct seq_file *seq, void *v) return 0; } - -static int resources_open(struct inode *inode, struct file *file) -{ - return single_open(file, resources_show, inode->i_private); -} - -static const struct file_operations resources_proc_fops = { - .owner = THIS_MODULE, - .open = resources_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(resources); /* * Show Virtual Interfaces. @@ -2420,7 +2407,7 @@ static struct cxgb4vf_debugfs_entry debugfs_files[] = { { "mboxlog", 0444, &mboxlog_fops }, { "sge_qinfo", 0444, &sge_qinfo_debugfs_fops }, { "sge_qstats", 0444, &sge_qstats_proc_fops }, - { "resources", 0444, &resources_proc_fops }, + { "resources", 0444, &resources_fops }, { "interfaces", 0444, &interfaces_proc_fops }, }; @@ -3036,6 +3023,9 @@ static int cxgb4vf_pci_probe(struct pci_dev *pdev, if (err) goto err_unmap_bar; + /* Initialize hash mac addr list */ + INIT_LIST_HEAD(&adapter->mac_hlist); + /* * Allocate our "adapter ports" and stitch everything together. */ @@ -3287,6 +3277,7 @@ err_disable_device: static void cxgb4vf_pci_remove(struct pci_dev *pdev) { struct adapter *adapter = pci_get_drvdata(pdev); + struct hash_mac_addr *entry, *tmp; /* * Tear down driver state associated with device. @@ -3337,6 +3328,11 @@ static void cxgb4vf_pci_remove(struct pci_dev *pdev) if (!is_t4(adapter->params.chip)) iounmap(adapter->bar2); kfree(adapter->mbox_log); + list_for_each_entry_safe(entry, tmp, &adapter->mac_hlist, + list) { + list_del(&entry->list); + kfree(entry); + } kfree(adapter); } diff --git a/drivers/net/ethernet/cirrus/Kconfig b/drivers/net/ethernet/cirrus/Kconfig index ec0b545197e2..e9a0213b08c4 100644 --- a/drivers/net/ethernet/cirrus/Kconfig +++ b/drivers/net/ethernet/cirrus/Kconfig @@ -23,7 +23,7 @@ config CS89x0 ---help--- Support for CS89x0 chipset based Ethernet cards. If you have a network (Ethernet) card of this type, say Y and read the file - . + . To compile this driver as a module, choose M here. The module will be called cs89x0. 
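Tying together the t4_change_mac and t4_init_portinfo hunks above: when the firmware lacks the OPAQUE_VIID_SMT_EXTN extension, the SMT index is derived from the VIN bits of the viid (its low seven bits, FW_VIID_VIN_M). A worked example of that fallback (the concrete values and the chip-version flag are illustrative only):

    u16 viid = 0x0a05;                      /* example value */
    u8 vin = viid & FW_VIID_VIN_M;          /* vin = 0x05 (low 7 bits) */
    /* T4/T5 pack two SMAC entries per SMT row, T6 one per row */
    u8 smt_idx = is_t5_or_older ? vin << 1 : vin;   /* hypothetical flag */
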
diff --git a/drivers/net/ethernet/cisco/enic/enic_ethtool.c b/drivers/net/ethernet/cisco/enic/enic_ethtool.c index f42f7a6e1559..ebd5c2cf1efe 100644 --- a/drivers/net/ethernet/cisco/enic/enic_ethtool.c +++ b/drivers/net/ethernet/cisco/enic/enic_ethtool.c @@ -241,7 +241,7 @@ static int enic_set_ringparam(struct net_device *netdev, } enic_init_vnic_resources(enic); if (running) { - err = dev_open(netdev); + err = dev_open(netdev, NULL); if (err) goto err_out; } diff --git a/drivers/net/ethernet/dec/tulip/Kconfig b/drivers/net/ethernet/dec/tulip/Kconfig index 1003201b5d80..264e9b413e94 100644 --- a/drivers/net/ethernet/dec/tulip/Kconfig +++ b/drivers/net/ethernet/dec/tulip/Kconfig @@ -113,7 +113,7 @@ config DE4X5 These include the DE425, DE434, DE435, DE450 and DE500 models. If you have a network card of this type, say Y. More specific information is contained in - . + . To compile this driver as a module, choose M here. The module will be called de4x5. @@ -137,7 +137,7 @@ config DM9102 This driver is for DM9102(A)/DM9132/DM9801 compatible PCI cards from Davicom (). If you have such a network (Ethernet) card, say Y. Some information is contained in the file - . + . To compile this driver as a module, choose M here. The module will be called dmfe. diff --git a/drivers/net/ethernet/dlink/dl2k.c b/drivers/net/ethernet/dlink/dl2k.c index f0536b16b3c3..d8d423f22c4f 100644 --- a/drivers/net/ethernet/dlink/dl2k.c +++ b/drivers/net/ethernet/dlink/dl2k.c @@ -1881,7 +1881,7 @@ Compile command: gcc -D__KERNEL__ -DMODULE -I/usr/src/linux/include -Wall -Wstrict-prototypes -O2 -c dl2k.c -Read Documentation/networking/dl2k.txt for details. +Read Documentation/networking/device_drivers/dlink/dl2k.txt for details. */ diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index c5ad7a4f4d83..852f5bfe5f6d 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -796,7 +796,7 @@ static inline u16 be_get_tx_vlan_tag(struct be_adapter *adapter, u16 vlan_tag; vlan_tag = skb_vlan_tag_get(skb); - vlan_prio = (vlan_tag & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + vlan_prio = skb_vlan_tag_get_prio(skb); /* If vlan priority provided by OS is NOT in available bmap */ if (!(adapter->vlan_prio_bmap & (1 << vlan_prio))) vlan_tag = (vlan_tag & ~VLAN_PRIO_MASK) | @@ -1049,30 +1049,35 @@ static struct sk_buff *be_insert_vlan_in_pkt(struct be_adapter *adapter, struct be_wrb_params *wrb_params) { + bool insert_vlan = false; u16 vlan_tag = 0; skb = skb_share_check(skb, GFP_ATOMIC); if (unlikely(!skb)) return skb; - if (skb_vlan_tag_present(skb)) + if (skb_vlan_tag_present(skb)) { vlan_tag = be_get_tx_vlan_tag(adapter, skb); + insert_vlan = true; + } if (qnq_async_evt_rcvd(adapter) && adapter->pvid) { - if (!vlan_tag) + if (!insert_vlan) { vlan_tag = adapter->pvid; + insert_vlan = true; + } /* f/w workaround to set skip_hw_vlan = 1, informs the F/W to * skip VLAN insertion */ BE_WRB_F_SET(wrb_params->features, VLAN_SKIP_HW, 1); } - if (vlan_tag) { + if (insert_vlan) { skb = vlan_insert_tag_set_proto(skb, htons(ETH_P_8021Q), vlan_tag); if (unlikely(!skb)) return skb; - skb->vlan_tci = 0; + __vlan_hwaccel_clear_tag(skb); } /* Insert the outer VLAN, if any */ @@ -4950,7 +4955,7 @@ fw_exit: } static int be_ndo_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh, - u16 flags) + u16 flags, struct netlink_ext_ack *extack) { struct be_adapter *adapter = netdev_priv(dev); struct nlattr *attr, *br_spec; diff --git 
a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 6e0f47f2c8a3..f53090cde041 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -51,9 +51,9 @@ #include #include #include +#include #include #include - #include "fman.h" #include "fman_port.h" #include "mac.h" @@ -2475,6 +2475,7 @@ static void dpaa_adjust_link(struct net_device *net_dev) static int dpaa_phy_init(struct net_device *net_dev) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; struct mac_device *mac_dev; struct phy_device *phy_dev; struct dpaa_priv *priv; @@ -2491,7 +2492,9 @@ static int dpaa_phy_init(struct net_device *net_dev) } /* Remove any features not supported by the controller */ - phy_dev->supported &= mac_dev->if_support; + ethtool_convert_legacy_u32_to_link_mode(mask, mac_dev->if_support); + linkmode_and(phy_dev->supported, phy_dev->supported, mask); + phy_support_asym_pause(phy_dev); mac_dev->phy_dev = phy_dev; @@ -2613,6 +2616,7 @@ static const struct net_device_ops dpaa_ops = { .ndo_stop = dpaa_eth_stop, .ndo_tx_timeout = dpaa_tx_timeout, .ndo_get_stats64 = dpaa_get_stats64, + .ndo_change_carrier = fixed_phy_change_carrier, .ndo_set_mac_address = dpaa_set_mac_address, .ndo_validate_addr = eth_validate_addr, .ndo_set_rx_mode = dpaa_set_rx_mode, diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c index 13d6e2272ece..62497119c85f 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c @@ -529,6 +529,75 @@ static int dpaa_get_ts_info(struct net_device *net_dev, return 0; } +static int dpaa_get_coalesce(struct net_device *dev, + struct ethtool_coalesce *c) +{ + struct qman_portal *portal; + u32 period; + u8 thresh; + + portal = qman_get_affine_portal(smp_processor_id()); + qman_portal_get_iperiod(portal, &period); + qman_dqrr_get_ithresh(portal, &thresh); + + c->rx_coalesce_usecs = period; + c->rx_max_coalesced_frames = thresh; + c->use_adaptive_rx_coalesce = false; + + return 0; +} + +static int dpaa_set_coalesce(struct net_device *dev, + struct ethtool_coalesce *c) +{ + const cpumask_t *cpus = qman_affine_cpus(); + bool needs_revert[NR_CPUS] = {false}; + struct qman_portal *portal; + u32 period, prev_period; + u8 thresh, prev_thresh; + int cpu, res; + + if (c->use_adaptive_rx_coalesce) + return -EINVAL; + + period = c->rx_coalesce_usecs; + thresh = c->rx_max_coalesced_frames; + + /* save previous values */ + portal = qman_get_affine_portal(smp_processor_id()); + qman_portal_get_iperiod(portal, &prev_period); + qman_dqrr_get_ithresh(portal, &prev_thresh); + + /* set new values */ + for_each_cpu(cpu, cpus) { + portal = qman_get_affine_portal(cpu); + res = qman_portal_set_iperiod(portal, period); + if (res) + goto revert_values; + res = qman_dqrr_set_ithresh(portal, thresh); + if (res) { + qman_portal_set_iperiod(portal, prev_period); + goto revert_values; + } + needs_revert[cpu] = true; + } + + return 0; + +revert_values: + /* restore previous values */ + for_each_cpu(cpu, cpus) { + if (!needs_revert[cpu]) + continue; + portal = qman_get_affine_portal(cpu); + /* previous values will not fail, ignore return value */ + qman_portal_set_iperiod(portal, prev_period); + qman_dqrr_set_ithresh(portal, prev_thresh); + } + + return res; +} + const struct ethtool_ops dpaa_ethtool_ops = { .get_drvinfo = dpaa_get_drvinfo, .get_msglevel = dpaa_get_msglevel, @@ -545,4 +614,6 @@ const 
struct ethtool_ops dpaa_ethtool_ops = { .get_rxnfc = dpaa_get_rxnfc, .set_rxnfc = dpaa_set_rxnfc, .get_ts_info = dpaa_get_ts_info, + .get_coalesce = dpaa_get_coalesce, + .set_coalesce = dpaa_set_coalesce, }; diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c index 88f7acce38dc..1ca9a18139ec 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c @@ -13,7 +13,8 @@ #include #include #include - +#include +#include #include #include "dpaa2-eth.h" @@ -86,7 +87,7 @@ static void free_rx_fd(struct dpaa2_eth_priv *priv, addr = dpaa2_sg_get_addr(&sgt[i]); sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr); dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, - DMA_FROM_DEVICE); + DMA_BIDIRECTIONAL); skb_free_frag(sg_vaddr); if (dpaa2_sg_is_final(&sgt[i])) @@ -144,7 +145,7 @@ static struct sk_buff *build_frag_skb(struct dpaa2_eth_priv *priv, sg_addr = dpaa2_sg_get_addr(sge); sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, sg_addr); dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUF_SIZE, - DMA_FROM_DEVICE); + DMA_BIDIRECTIONAL); sg_length = dpaa2_sg_get_len(sge); @@ -199,12 +200,148 @@ static struct sk_buff *build_frag_skb(struct dpaa2_eth_priv *priv, return skb; } +/* Free buffers acquired from the buffer pool or which were meant to + * be released in the pool + */ +static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count) +{ + struct device *dev = priv->net_dev->dev.parent; + void *vaddr; + int i; + + for (i = 0; i < count; i++) { + vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]); + dma_unmap_single(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE, + DMA_BIDIRECTIONAL); + skb_free_frag(vaddr); + } +} + +static void xdp_release_buf(struct dpaa2_eth_priv *priv, + struct dpaa2_eth_channel *ch, + dma_addr_t addr) +{ + int err; + + ch->xdp.drop_bufs[ch->xdp.drop_cnt++] = addr; + if (ch->xdp.drop_cnt < DPAA2_ETH_BUFS_PER_CMD) + return; + + while ((err = dpaa2_io_service_release(ch->dpio, priv->bpid, + ch->xdp.drop_bufs, + ch->xdp.drop_cnt)) == -EBUSY) + cpu_relax(); + + if (err) { + free_bufs(priv, ch->xdp.drop_bufs, ch->xdp.drop_cnt); + ch->buf_count -= ch->xdp.drop_cnt; + } + + ch->xdp.drop_cnt = 0; +} + +static int xdp_enqueue(struct dpaa2_eth_priv *priv, struct dpaa2_fd *fd, + void *buf_start, u16 queue_id) +{ + struct dpaa2_eth_fq *fq; + struct dpaa2_faead *faead; + u32 ctrl, frc; + int i, err; + + /* Mark the egress frame hardware annotation area as valid */ + frc = dpaa2_fd_get_frc(fd); + dpaa2_fd_set_frc(fd, frc | DPAA2_FD_FRC_FAEADV); + dpaa2_fd_set_ctrl(fd, DPAA2_FD_CTRL_ASAL); + + /* Instruct hardware to release the FD buffer directly into + * the buffer pool once transmission is completed, instead of + * sending a Tx confirmation frame to us + */ + ctrl = DPAA2_FAEAD_A4V | DPAA2_FAEAD_A2V | DPAA2_FAEAD_EBDDV; + faead = dpaa2_get_faead(buf_start, false); + faead->ctrl = cpu_to_le32(ctrl); + faead->conf_fqid = 0; + + fq = &priv->fq[queue_id]; + for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) { + err = dpaa2_io_service_enqueue_qd(fq->channel->dpio, + priv->tx_qdid, 0, + fq->tx_qdbin, fd); + if (err != -EBUSY) + break; + } + + return err; +} + +static u32 run_xdp(struct dpaa2_eth_priv *priv, + struct dpaa2_eth_channel *ch, + struct dpaa2_eth_fq *rx_fq, + struct dpaa2_fd *fd, void *vaddr) +{ + dma_addr_t addr = dpaa2_fd_get_addr(fd); + struct rtnl_link_stats64 *percpu_stats; + struct bpf_prog *xdp_prog; + struct xdp_buff xdp; + u32 xdp_act = 
XDP_PASS; + int err; + + percpu_stats = this_cpu_ptr(priv->percpu_stats); + + rcu_read_lock(); + + xdp_prog = READ_ONCE(ch->xdp.prog); + if (!xdp_prog) + goto out; + + xdp.data = vaddr + dpaa2_fd_get_offset(fd); + xdp.data_end = xdp.data + dpaa2_fd_get_len(fd); + xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM; + xdp_set_data_meta_invalid(&xdp); + + xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp); + + /* xdp.data pointer may have changed */ + dpaa2_fd_set_offset(fd, xdp.data - vaddr); + dpaa2_fd_set_len(fd, xdp.data_end - xdp.data); + + switch (xdp_act) { + case XDP_PASS: + break; + case XDP_TX: + err = xdp_enqueue(priv, fd, vaddr, rx_fq->flowid); + if (err) { + xdp_release_buf(priv, ch, addr); + percpu_stats->tx_errors++; + ch->stats.xdp_tx_err++; + } else { + percpu_stats->tx_packets++; + percpu_stats->tx_bytes += dpaa2_fd_get_len(fd); + ch->stats.xdp_tx++; + } + break; + default: + bpf_warn_invalid_xdp_action(xdp_act); + /* fall through */ + case XDP_ABORTED: + trace_xdp_exception(priv->net_dev, xdp_prog, xdp_act); + /* fall through */ + case XDP_DROP: + xdp_release_buf(priv, ch, addr); + ch->stats.xdp_drop++; + break; + } + +out: + rcu_read_unlock(); + return xdp_act; +} + /* Main Rx frame processing routine */ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv, struct dpaa2_eth_channel *ch, const struct dpaa2_fd *fd, - struct napi_struct *napi, - u16 queue_id) + struct dpaa2_eth_fq *fq) { dma_addr_t addr = dpaa2_fd_get_addr(fd); u8 fd_format = dpaa2_fd_get_format(fd); @@ -216,12 +353,14 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv, struct dpaa2_fas *fas; void *buf_data; u32 status = 0; + u32 xdp_act; /* Tracing point */ trace_dpaa2_rx_fd(priv->net_dev, fd); vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr); - dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, DMA_FROM_DEVICE); + dma_sync_single_for_cpu(dev, addr, DPAA2_ETH_RX_BUF_SIZE, + DMA_BIDIRECTIONAL); fas = dpaa2_get_fas(vaddr, false); prefetch(fas); @@ -232,8 +371,21 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv, percpu_extras = this_cpu_ptr(priv->percpu_extras); if (fd_format == dpaa2_fd_single) { + xdp_act = run_xdp(priv, ch, fq, (struct dpaa2_fd *)fd, vaddr); + if (xdp_act != XDP_PASS) { + percpu_stats->rx_packets++; + percpu_stats->rx_bytes += dpaa2_fd_get_len(fd); + return; + } + + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, + DMA_BIDIRECTIONAL); skb = build_linear_skb(ch, fd, vaddr); } else if (fd_format == dpaa2_fd_sg) { + WARN_ON(priv->xdp_prog); + + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, + DMA_BIDIRECTIONAL); skb = build_frag_skb(priv, ch, buf_data); skb_free_frag(vaddr); percpu_extras->rx_sg_frames++; @@ -267,12 +419,12 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv, } skb->protocol = eth_type_trans(skb, priv->net_dev); - skb_record_rx_queue(skb, queue_id); + skb_record_rx_queue(skb, fq->flowid); percpu_stats->rx_packets++; percpu_stats->rx_bytes += dpaa2_fd_get_len(fd); - napi_gro_receive(napi, skb); + napi_gro_receive(&ch->napi, skb); return; @@ -289,7 +441,7 @@ err_frame_format: * Observance of NAPI budget is not our concern, leaving that to the caller. 
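run_xdp above follows the canonical XDP verdict handling: execute the program under rcu_read_lock(), then let unknown actions fall through to XDP_ABORTED and on to XDP_DROP. The skeleton, stripped of the dpaa2-specific buffer plumbing (illustrative):

    u32 act = bpf_prog_run_xdp(prog, &xdp);

    switch (act) {
    case XDP_PASS:
            break;                          /* hand the frame to the stack */
    case XDP_TX:
            /* driver-specific reflection back out the same port */
            break;
    default:
            bpf_warn_invalid_xdp_action(act);
            /* fall through */
    case XDP_ABORTED:
            trace_xdp_exception(netdev, prog, act);
            /* fall through */
    case XDP_DROP:
            /* recycle the buffer back to the pool */
            break;
    }
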
*/ static int consume_frames(struct dpaa2_eth_channel *ch, - enum dpaa2_eth_fq_type *type) + struct dpaa2_eth_fq **src) { struct dpaa2_eth_priv *priv = ch->priv; struct dpaa2_eth_fq *fq = NULL; @@ -312,7 +464,7 @@ static int consume_frames(struct dpaa2_eth_channel *ch, fd = dpaa2_dq_fd(dq); fq = (struct dpaa2_eth_fq *)(uintptr_t)dpaa2_dq_fqd_ctx(dq); - fq->consume(priv, ch, fd, &ch->napi, fq->flowid); + fq->consume(priv, ch, fd, fq); cleaned++; } while (!is_last); @@ -320,13 +472,12 @@ static int consume_frames(struct dpaa2_eth_channel *ch, return 0; fq->stats.frames += cleaned; - ch->stats.frames += cleaned; /* A dequeue operation only pulls frames from a single queue - * into the store. Return the frame queue type as an out param. + * into the store. Return the frame queue as an out param. */ - if (type) - *type = fq->type; + if (src) + *src = fq; return cleaned; } @@ -571,8 +722,10 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev) struct rtnl_link_stats64 *percpu_stats; struct dpaa2_eth_drv_stats *percpu_extras; struct dpaa2_eth_fq *fq; + struct netdev_queue *nq; u16 queue_mapping; unsigned int needed_headroom; + u32 fd_len; int err, i; percpu_stats = this_cpu_ptr(priv->percpu_stats); @@ -644,8 +797,12 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev) /* Clean up everything, including freeing the skb */ free_tx_fd(priv, &fd); } else { + fd_len = dpaa2_fd_get_len(&fd); percpu_stats->tx_packets++; - percpu_stats->tx_bytes += dpaa2_fd_get_len(&fd); + percpu_stats->tx_bytes += fd_len; + + nq = netdev_get_tx_queue(net_dev, queue_mapping); + netdev_tx_sent_queue(nq, fd_len); } return NETDEV_TX_OK; @@ -661,11 +818,11 @@ err_alloc_headroom: static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv, struct dpaa2_eth_channel *ch __always_unused, const struct dpaa2_fd *fd, - struct napi_struct *napi __always_unused, - u16 queue_id __always_unused) + struct dpaa2_eth_fq *fq) { struct rtnl_link_stats64 *percpu_stats; struct dpaa2_eth_drv_stats *percpu_extras; + u32 fd_len = dpaa2_fd_get_len(fd); u32 fd_errors; /* Tracing point */ @@ -673,7 +830,10 @@ static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv, percpu_extras = this_cpu_ptr(priv->percpu_extras); percpu_extras->tx_conf_frames++; - percpu_extras->tx_conf_bytes += dpaa2_fd_get_len(fd); + percpu_extras->tx_conf_bytes += fd_len; + + fq->dq_frames++; + fq->dq_bytes += fd_len; /* Check frame errors in the FD field */ fd_errors = dpaa2_fd_get_ctrl(fd) & DPAA2_FD_TX_ERR_MASK; @@ -735,23 +895,6 @@ static int set_tx_csum(struct dpaa2_eth_priv *priv, bool enable) return 0; } -/* Free buffers acquired from the buffer pool or which were meant to - * be released in the pool - */ -static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count) -{ - struct device *dev = priv->net_dev->dev.parent; - void *vaddr; - int i; - - for (i = 0; i < count; i++) { - vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]); - dma_unmap_single(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE, - DMA_FROM_DEVICE); - skb_free_frag(vaddr); - } -} - /* Perform a single release command to add buffers * to the specified buffer pool */ @@ -775,7 +918,7 @@ static int add_bufs(struct dpaa2_eth_priv *priv, buf = PTR_ALIGN(buf, priv->rx_buf_align); addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUF_SIZE, - DMA_FROM_DEVICE); + DMA_BIDIRECTIONAL); if (unlikely(dma_mapping_error(dev, addr))) goto err_map; @@ -934,8 +1077,9 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget) struct 
dpaa2_eth_channel *ch; struct dpaa2_eth_priv *priv; int rx_cleaned = 0, txconf_cleaned = 0; - enum dpaa2_eth_fq_type type = 0; - int store_cleaned; + struct dpaa2_eth_fq *fq, *txc_fq = NULL; + struct netdev_queue *nq; + int store_cleaned, work_done; int err; ch = container_of(napi, struct dpaa2_eth_channel, napi); @@ -949,18 +1093,25 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget) /* Refill pool if appropriate */ refill_pool(priv, ch, priv->bpid); - store_cleaned = consume_frames(ch, &type); - if (type == DPAA2_RX_FQ) + store_cleaned = consume_frames(ch, &fq); + if (!store_cleaned) + break; + if (fq->type == DPAA2_RX_FQ) { rx_cleaned += store_cleaned; - else + } else { txconf_cleaned += store_cleaned; + /* We have a single Tx conf FQ on this channel */ + txc_fq = fq; + } /* If we either consumed the whole NAPI budget with Rx frames * or we reached the Tx confirmations threshold, we're done. */ if (rx_cleaned >= budget || - txconf_cleaned >= DPAA2_ETH_TXCONF_PER_NAPI) - return budget; + txconf_cleaned >= DPAA2_ETH_TXCONF_PER_NAPI) { + work_done = budget; + goto out; + } } while (store_cleaned); /* We didn't consume the entire budget, so finish napi and @@ -974,7 +1125,18 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget) WARN_ONCE(err, "CDAN notifications rearm failed on core %d", ch->nctx.desired_cpu); - return max(rx_cleaned, 1); + work_done = max(rx_cleaned, 1); + +out: + if (txc_fq) { + nq = netdev_get_tx_queue(priv->net_dev, txc_fq->flowid); + netdev_tx_completed_queue(nq, txc_fq->dq_frames, + txc_fq->dq_bytes); + txc_fq->dq_frames = 0; + txc_fq->dq_bytes = 0; + } + + return work_done; } static void enable_ch_napi(struct dpaa2_eth_priv *priv) @@ -1400,6 +1562,174 @@ static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) return -EINVAL; } +static bool xdp_mtu_valid(struct dpaa2_eth_priv *priv, int mtu) +{ + int mfl, linear_mfl; + + mfl = DPAA2_ETH_L2_MAX_FRM(mtu); + linear_mfl = DPAA2_ETH_RX_BUF_SIZE - DPAA2_ETH_RX_HWA_SIZE - + dpaa2_eth_rx_head_room(priv) - XDP_PACKET_HEADROOM; + + if (mfl > linear_mfl) { + netdev_warn(priv->net_dev, "Maximum MTU for XDP is %d\n", + linear_mfl - VLAN_ETH_HLEN); + return false; + } + + return true; +} + +static int set_rx_mfl(struct dpaa2_eth_priv *priv, int mtu, bool has_xdp) +{ + int mfl, err; + + /* We enforce a maximum Rx frame length based on MTU only if we have + * an XDP program attached (in order to avoid Rx S/G frames). 
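The Tx changes above wire dpaa2 into byte queue limits (BQL): every netdev_tx_sent_queue() on the transmit path must be balanced by a netdev_tx_completed_queue() on the confirmation path, per Tx queue. The pairing in miniature (illustrative):

    struct netdev_queue *nq = netdev_get_tx_queue(net_dev, queue);

    /* on transmit, after a successful enqueue */
    netdev_tx_sent_queue(nq, fd_len);

    /* on completion, once per NAPI poll, with accumulated totals */
    netdev_tx_completed_queue(nq, dq_frames, dq_bytes);
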
+ * Otherwise, we accept all incoming frames as long as they are not + * larger than maximum size supported in hardware + */ + if (has_xdp) + mfl = DPAA2_ETH_L2_MAX_FRM(mtu); + else + mfl = DPAA2_ETH_MFL; + + err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token, mfl); + if (err) { + netdev_err(priv->net_dev, "dpni_set_max_frame_length failed\n"); + return err; + } + + return 0; +} + +static int dpaa2_eth_change_mtu(struct net_device *dev, int new_mtu) +{ + struct dpaa2_eth_priv *priv = netdev_priv(dev); + int err; + + if (!priv->xdp_prog) + goto out; + + if (!xdp_mtu_valid(priv, new_mtu)) + return -EINVAL; + + err = set_rx_mfl(priv, new_mtu, true); + if (err) + return err; + +out: + dev->mtu = new_mtu; + return 0; +} + +static int update_rx_buffer_headroom(struct dpaa2_eth_priv *priv, bool has_xdp) +{ + struct dpni_buffer_layout buf_layout = {0}; + int err; + + err = dpni_get_buffer_layout(priv->mc_io, 0, priv->mc_token, + DPNI_QUEUE_RX, &buf_layout); + if (err) { + netdev_err(priv->net_dev, "dpni_get_buffer_layout failed\n"); + return err; + } + + /* Reserve extra headroom for XDP header size changes */ + buf_layout.data_head_room = dpaa2_eth_rx_head_room(priv) + + (has_xdp ? XDP_PACKET_HEADROOM : 0); + buf_layout.options = DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM; + err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token, + DPNI_QUEUE_RX, &buf_layout); + if (err) { + netdev_err(priv->net_dev, "dpni_set_buffer_layout failed\n"); + return err; + } + + return 0; +} + +static int setup_xdp(struct net_device *dev, struct bpf_prog *prog) +{ + struct dpaa2_eth_priv *priv = netdev_priv(dev); + struct dpaa2_eth_channel *ch; + struct bpf_prog *old; + bool up, need_update; + int i, err; + + if (prog && !xdp_mtu_valid(priv, dev->mtu)) + return -EINVAL; + + if (prog) { + prog = bpf_prog_add(prog, priv->num_channels); + if (IS_ERR(prog)) + return PTR_ERR(prog); + } + + up = netif_running(dev); + need_update = (!!priv->xdp_prog != !!prog); + + if (up) + dpaa2_eth_stop(dev); + + /* While in xdp mode, enforce a maximum Rx frame size based on MTU. + * Also, when switching between xdp/non-xdp modes we need to reconfigure + * our Rx buffer layout. Buffer pool was drained on dpaa2_eth_stop, + * so we are sure no old format buffers will be used from now on. + */ + if (need_update) { + err = set_rx_mfl(priv, dev->mtu, !!prog); + if (err) + goto out_err; + err = update_rx_buffer_headroom(priv, !!prog); + if (err) + goto out_err; + } + + old = xchg(&priv->xdp_prog, prog); + if (old) + bpf_prog_put(old); + + for (i = 0; i < priv->num_channels; i++) { + ch = priv->channel[i]; + old = xchg(&ch->xdp.prog, prog); + if (old) + bpf_prog_put(old); + } + + if (up) { + err = dpaa2_eth_open(dev); + if (err) + return err; + } + + return 0; + +out_err: + if (prog) + bpf_prog_sub(prog, priv->num_channels); + if (up) + dpaa2_eth_open(dev); + + return err; +} + +static int dpaa2_eth_xdp(struct net_device *dev, struct netdev_bpf *xdp) +{ + struct dpaa2_eth_priv *priv = netdev_priv(dev); + + switch (xdp->command) { + case XDP_SETUP_PROG: + return setup_xdp(dev, xdp->prog); + case XDP_QUERY_PROG: + xdp->prog_id = priv->xdp_prog ? 
priv->xdp_prog->aux->id : 0; + break; + default: + return -EINVAL; + } + + return 0; +} + static const struct net_device_ops dpaa2_eth_ops = { .ndo_open = dpaa2_eth_open, .ndo_start_xmit = dpaa2_eth_tx, @@ -1409,6 +1739,8 @@ static const struct net_device_ops dpaa2_eth_ops = { .ndo_set_rx_mode = dpaa2_eth_set_rx_mode, .ndo_set_features = dpaa2_eth_set_features, .ndo_do_ioctl = dpaa2_eth_ioctl, + .ndo_change_mtu = dpaa2_eth_change_mtu, + .ndo_bpf = dpaa2_eth_xdp, }; static void cdan_cb(struct dpaa2_io_notification_ctx *ctx) @@ -1434,8 +1766,11 @@ static struct fsl_mc_device *setup_dpcon(struct dpaa2_eth_priv *priv) err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPCON, &dpcon); if (err) { - dev_info(dev, "Not enough DPCONs, will go on as-is\n"); - return NULL; + if (err == -ENXIO) + err = -EPROBE_DEFER; + else + dev_info(dev, "Not enough DPCONs, will go on as-is\n"); + return ERR_PTR(err); } err = dpcon_open(priv->mc_io, 0, dpcon->obj_desc.id, &dpcon->mc_handle); @@ -1493,8 +1828,10 @@ alloc_channel(struct dpaa2_eth_priv *priv) return NULL; channel->dpcon = setup_dpcon(priv); - if (!channel->dpcon) + if (IS_ERR_OR_NULL(channel->dpcon)) { + err = PTR_ERR(channel->dpcon); goto err_setup; + } err = dpcon_get_attributes(priv->mc_io, 0, channel->dpcon->mc_handle, &attr); @@ -1513,7 +1850,7 @@ err_get_attr: free_dpcon(priv, channel->dpcon); err_setup: kfree(channel); - return NULL; + return ERR_PTR(err); } static void free_channel(struct dpaa2_eth_priv *priv, @@ -1547,10 +1884,11 @@ static int setup_dpio(struct dpaa2_eth_priv *priv) for_each_online_cpu(i) { /* Try to allocate a channel */ channel = alloc_channel(priv); - if (!channel) { - dev_info(dev, - "No affine channel for cpu %d and above\n", i); - err = -ENODEV; + if (IS_ERR_OR_NULL(channel)) { + err = PTR_ERR(channel); + if (err != -EPROBE_DEFER) + dev_info(dev, + "No affine channel for cpu %d and above\n", i); goto err_alloc_ch; } @@ -1597,7 +1935,7 @@ static int setup_dpio(struct dpaa2_eth_priv *priv) /* Stop if we already have enough channels to accommodate all * RX and TX conf queues */ - if (priv->num_channels == dpaa2_eth_queue_count(priv)) + if (priv->num_channels == priv->dpni_attrs.num_queues) break; } @@ -1608,9 +1946,12 @@ err_set_cdan: err_service_reg: free_channel(priv, channel); err_alloc_ch: + if (err == -EPROBE_DEFER) + return err; + if (cpumask_empty(&priv->dpio_cpumask)) { dev_err(dev, "No cpu with an affine DPIO/DPCON\n"); - return err; + return -ENODEV; } dev_info(dev, "Cores %*pbl available for processing ingress traffic\n", @@ -1732,7 +2073,10 @@ static int setup_dpbp(struct dpaa2_eth_priv *priv) err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP, &dpbp_dev); if (err) { - dev_err(dev, "DPBP device allocation failed\n"); + if (err == -ENXIO) + err = -EPROBE_DEFER; + else + dev_err(dev, "DPBP device allocation failed\n"); return err; } diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h index 452a8e9c4f0e..69c965de192b 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h @@ -139,7 +139,9 @@ struct dpaa2_faead { }; #define DPAA2_FAEAD_A2V 0x20000000 +#define DPAA2_FAEAD_A4V 0x08000000 #define DPAA2_FAEAD_UPDV 0x00001000 +#define DPAA2_FAEAD_EBDDV 0x00002000 #define DPAA2_FAEAD_UPD 0x00000010 /* Accessors for the hardware annotation fields that we use */ @@ -243,12 +245,14 @@ struct dpaa2_eth_fq_stats { struct dpaa2_eth_ch_stats { /* Volatile dequeues retried due to 
portal busy */ __u64 dequeue_portal_busy; - /* Number of CDANs; useful to estimate avg NAPI len */ - __u64 cdan; - /* Number of frames received on queues from this channel */ - __u64 frames; /* Pull errors */ __u64 pull_err; + /* Number of CDANs; useful to estimate avg NAPI len */ + __u64 cdan; + /* XDP counters */ + __u64 xdp_drop; + __u64 xdp_tx; + __u64 xdp_tx_err; }; /* Maximum number of queues associated with a DPNI */ @@ -271,17 +275,24 @@ struct dpaa2_eth_fq { u32 tx_qdbin; u16 flowid; int target_cpu; + u32 dq_frames; + u32 dq_bytes; struct dpaa2_eth_channel *channel; enum dpaa2_eth_fq_type type; void (*consume)(struct dpaa2_eth_priv *priv, struct dpaa2_eth_channel *ch, const struct dpaa2_fd *fd, - struct napi_struct *napi, - u16 queue_id); + struct dpaa2_eth_fq *fq); struct dpaa2_eth_fq_stats stats; }; +struct dpaa2_eth_ch_xdp { + struct bpf_prog *prog; + u64 drop_bufs[DPAA2_ETH_BUFS_PER_CMD]; + int drop_cnt; +}; + struct dpaa2_eth_channel { struct dpaa2_io_notification_ctx nctx; struct fsl_mc_device *dpcon; @@ -293,6 +304,7 @@ struct dpaa2_eth_channel { struct dpaa2_eth_priv *priv; int buf_count; struct dpaa2_eth_ch_stats stats; + struct dpaa2_eth_ch_xdp xdp; }; struct dpaa2_eth_dist_fields { @@ -352,6 +364,7 @@ struct dpaa2_eth_priv { u64 rx_hash_fields; struct dpaa2_eth_cls_rule *cls_rules; u8 rx_cls_enabled; + struct bpf_prog *xdp_prog; }; #define DPAA2_RXH_SUPPORTED (RXH_L2DA | RXH_VLAN | RXH_L3_PROTO \ @@ -434,9 +447,10 @@ static inline unsigned int dpaa2_eth_rx_head_room(struct dpaa2_eth_priv *priv) DPAA2_ETH_RX_HWA_SIZE; } +/* We have exactly one {Rx, Tx conf} queue per channel */ static int dpaa2_eth_queue_count(struct dpaa2_eth_priv *priv) { - return priv->dpni_attrs.num_queues; + return priv->num_channels; } int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c index 26bd5a2bd8ed..a7389e722c49 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c @@ -45,6 +45,15 @@ static char dpaa2_ethtool_extras[][ETH_GSTRING_LEN] = { "[drv] dequeue portal busy", "[drv] channel pull errors", "[drv] cdan", + "[drv] xdp drop", + "[drv] xdp tx", + "[drv] xdp tx errors", + /* FQ stats */ + "[qbman] rx pending frames", + "[qbman] rx pending bytes", + "[qbman] tx conf pending frames", + "[qbman] tx conf pending bytes", + "[qbman] buffer count", }; #define DPAA2_ETH_NUM_EXTRA_STATS ARRAY_SIZE(dpaa2_ethtool_extras) @@ -174,8 +183,10 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev, int j, k, err; int num_cnt; union dpni_statistics dpni_stats; - u64 cdan = 0; - u64 portal_busy = 0, pull_err = 0; + u32 fcnt, bcnt; + u32 fcnt_rx_total = 0, fcnt_tx_total = 0; + u32 bcnt_rx_total = 0, bcnt_tx_total = 0; + u32 buf_cnt; struct dpaa2_eth_priv *priv = netdev_priv(net_dev); struct dpaa2_eth_drv_stats *extras; struct dpaa2_eth_ch_stats *ch_stats; @@ -212,16 +223,43 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev, } i += j; - for (j = 0; j < priv->num_channels; j++) { - ch_stats = &priv->channel[j]->stats; - cdan += ch_stats->cdan; - portal_busy += ch_stats->dequeue_portal_busy; - pull_err += ch_stats->pull_err; + /* Per-channel stats */ + for (k = 0; k < priv->num_channels; k++) { + ch_stats = &priv->channel[k]->stats; + for (j = 0; j < sizeof(*ch_stats) / sizeof(__u64); j++) + *((__u64 *)data + i + j) += *((__u64 *)ch_stats + j); } + i += j; + + for (j = 0; j < 
priv->num_fqs; j++) { + /* Print FQ instantaneous counts */ + err = dpaa2_io_query_fq_count(NULL, priv->fq[j].fqid, + &fcnt, &bcnt); + if (err) { + netdev_warn(net_dev, "FQ query error %d", err); + return; + } - *(data + i++) = portal_busy; - *(data + i++) = pull_err; - *(data + i++) = cdan; + if (priv->fq[j].type == DPAA2_TX_CONF_FQ) { + fcnt_tx_total += fcnt; + bcnt_tx_total += bcnt; + } else { + fcnt_rx_total += fcnt; + bcnt_rx_total += bcnt; + } + } + + *(data + i++) = fcnt_rx_total; + *(data + i++) = bcnt_rx_total; + *(data + i++) = fcnt_tx_total; + *(data + i++) = bcnt_tx_total; + + err = dpaa2_io_query_bp_count(NULL, priv->bpid, &buf_cnt); + if (err) { + netdev_warn(net_dev, "Buffer count query error %d\n", err); + return; + } + *(data + i++) = buf_cnt; } static int prep_eth_rule(struct ethhdr *eth_value, struct ethhdr *eth_mask, diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c index 84b942b1eccc..9b150db3b510 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c @@ -140,7 +140,10 @@ static int dpaa2_ptp_probe(struct fsl_mc_device *mc_dev) err = fsl_mc_portal_allocate(mc_dev, 0, &mc_dev->mc_io); if (err) { - dev_err(dev, "fsl_mc_portal_allocate err %d\n", err); + if (err == -ENXIO) + err = -EPROBE_DEFER; + else + dev_err(dev, "fsl_mc_portal_allocate err %d\n", err); goto err_exit; } diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h index bf80855dd0dd..f79e57f735b3 100644 --- a/drivers/net/ethernet/freescale/fec.h +++ b/drivers/net/ethernet/freescale/fec.h @@ -531,7 +531,6 @@ struct fec_enet_private { /* Phylib and MDIO interface */ struct mii_bus *mii_bus; - int mii_timeout; uint phy_speed; phy_interface_t phy_interface; struct device_node *phy_node; diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 6db69ba30dcd..ae0f88bce9aa 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -1714,12 +1714,6 @@ static void fec_enet_adjust_link(struct net_device *ndev) struct phy_device *phy_dev = ndev->phydev; int status_change = 0; - /* Prevent a state halted on mii error */ - if (fep->mii_timeout && phy_dev->state == PHY_HALTED) { - phy_dev->state = PHY_RESUMING; - return; - } - /* * If the netdev is down, or is going down, we're not interested * in link state events, so just mark our idea of the link as down @@ -1779,7 +1773,6 @@ static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) if (ret < 0) return ret; - fep->mii_timeout = 0; reinit_completion(&fep->mdio_done); /* start a read op */ @@ -1791,7 +1784,6 @@ static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) time_left = wait_for_completion_timeout(&fep->mdio_done, usecs_to_jiffies(FEC_MII_TIMEOUT)); if (time_left == 0) { - fep->mii_timeout = 1; netdev_err(fep->netdev, "MDIO read timeout\n"); ret = -ETIMEDOUT; goto out; @@ -1820,7 +1812,6 @@ static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, else ret = 0; - fep->mii_timeout = 0; reinit_completion(&fep->mdio_done); /* start a write op */ @@ -1833,7 +1824,6 @@ static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, time_left = wait_for_completion_timeout(&fep->mdio_done, usecs_to_jiffies(FEC_MII_TIMEOUT)); if (time_left == 0) { - fep->mii_timeout = 1; netdev_err(fep->netdev, "MDIO write timeout\n"); ret = -ETIMEDOUT; } @@ -2001,8 +1991,6 
@@ static int fec_enet_mii_init(struct platform_device *pdev) return -ENOENT; } - fep->mii_timeout = 0; - /* * Set MII speed to 2.5 MHz (= clk_get_rate() / 2 * phy_speed) * diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c index d79e4e009d63..71f4205f14e7 100644 --- a/drivers/net/ethernet/freescale/fman/mac.c +++ b/drivers/net/ethernet/freescale/fman/mac.c @@ -393,7 +393,7 @@ void fman_get_pause_cfg(struct mac_device *mac_dev, bool *rx_pause, */ /* get local capabilities */ - lcl_adv = ethtool_adv_to_lcl_adv_t(phy_dev->advertising); + lcl_adv = linkmode_adv_to_lcl_adv_t(phy_dev->advertising); /* get link partner capabilities */ rmt_adv = 0; diff --git a/drivers/net/ethernet/freescale/fsl_pq_mdio.c b/drivers/net/ethernet/freescale/fsl_pq_mdio.c index 82722d05fedb..88a396fd242f 100644 --- a/drivers/net/ethernet/freescale/fsl_pq_mdio.c +++ b/drivers/net/ethernet/freescale/fsl_pq_mdio.c @@ -473,7 +473,7 @@ static int fsl_pq_mdio_probe(struct platform_device *pdev) if (data->get_tbipa) { for_each_child_of_node(np, tbi) { - if (strcmp(tbi->type, "tbi-phy") == 0) { + if (of_node_is_type(tbi, "tbi-phy")) { dev_dbg(&pdev->dev, "found TBI PHY node %pOFP\n", tbi); break; diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c index 3c8da1a18ba0..45fcc96be90e 100644 --- a/drivers/net/ethernet/freescale/gianfar.c +++ b/drivers/net/ethernet/freescale/gianfar.c @@ -500,6 +500,7 @@ static const struct net_device_ops gfar_netdev_ops = { .ndo_tx_timeout = gfar_timeout, .ndo_do_ioctl = gfar_ioctl, .ndo_get_stats = gfar_get_stats, + .ndo_change_carrier = fixed_phy_change_carrier, .ndo_set_mac_address = gfar_set_mac_addr, .ndo_validate_addr = eth_validate_addr, #ifdef CONFIG_NET_POLL_CONTROLLER @@ -720,7 +721,7 @@ static int gfar_of_group_count(struct device_node *np) int num = 0; for_each_available_child_of_node(np, child) - if (!of_node_cmp(child->name, "queue-group")) + if (of_node_name_eq(child, "queue-group")) num++; return num; @@ -838,7 +839,7 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev) /* Parse and initialize group specific information */ if (priv->mode == MQ_MG_MODE) { for_each_available_child_of_node(np, child) { - if (of_node_cmp(child->name, "queue-group")) + if (!of_node_name_eq(child, "queue-group")) continue; err = gfar_parse_group(child, priv, model); @@ -1784,14 +1785,20 @@ static phy_interface_t gfar_get_interface(struct net_device *dev) */ static int init_phy(struct net_device *dev) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; struct gfar_private *priv = netdev_priv(dev); - uint gigabit_support = - priv->device_flags & FSL_GIANFAR_DEV_HAS_GIGABIT ? 
- GFAR_SUPPORTED_GBIT : 0; phy_interface_t interface; struct phy_device *phydev; struct ethtool_eee edata; + linkmode_set_bit_array(phy_10_100_features_array, + ARRAY_SIZE(phy_10_100_features_array), + mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_MII_BIT, mask); + if (priv->device_flags & FSL_GIANFAR_DEV_HAS_GIGABIT) + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, mask); + priv->oldlink = 0; priv->oldspeed = 0; priv->oldduplex = -1; @@ -1809,8 +1816,8 @@ static int init_phy(struct net_device *dev) gfar_configure_serdes(dev); /* Remove any features not supported by the controller */ - phydev->supported &= (GFAR_SUPPORTED | gigabit_support); - phydev->advertising = phydev->supported; + linkmode_and(phydev->supported, phydev->supported, mask); + linkmode_copy(phydev->advertising, phydev->supported); /* Add support for flow control */ phy_support_asym_pause(phydev); @@ -3656,7 +3663,7 @@ static u32 gfar_get_flowctrl_cfg(struct gfar_private *priv) if (phydev->asym_pause) rmt_adv |= LPA_PAUSE_ASYM; - lcl_adv = ethtool_adv_to_lcl_adv_t(phydev->advertising); + lcl_adv = linkmode_adv_to_lcl_adv_t(phydev->advertising); flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv); if (flowctrl & FLOW_CTRL_TX) val |= MACCFG1_TX_FLOW; diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c index 0d76e15cd6dd..241325c35cb4 100644 --- a/drivers/net/ethernet/freescale/gianfar_ethtool.c +++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c @@ -1134,11 +1134,9 @@ static int gfar_convert_to_filer(struct ethtool_rx_flow_spec *rule, prio = vlan_tci_prio(rule); prio_mask = vlan_tci_priom(rule); - if (cfi == VLAN_TAG_PRESENT && cfi_mask == VLAN_TAG_PRESENT) { - vlan |= RQFPR_CFI; - vlan_mask |= RQFPR_CFI; - } else if (cfi != VLAN_TAG_PRESENT && - cfi_mask == VLAN_TAG_PRESENT) { + if (cfi_mask) { + if (cfi) + vlan |= RQFPR_CFI; vlan_mask |= RQFPR_CFI; } } diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c index 32e02700feaa..c3d539e209ed 100644 --- a/drivers/net/ethernet/freescale/ucc_geth.c +++ b/drivers/net/ethernet/freescale/ucc_geth.c @@ -30,6 +30,7 @@ #include #include #include +#include #include #include #include @@ -1742,12 +1743,7 @@ static int init_phy(struct net_device *dev) if (priv->phy_interface == PHY_INTERFACE_MODE_SGMII) uec_configure_serdes(dev); - phy_set_max_speed(phydev, SPEED_100); - - if (priv->max_speed == SPEED_1000) - phydev->supported |= ADVERTISED_1000baseT_Full; - - phydev->advertising = phydev->supported; + phy_set_max_speed(phydev, priv->max_speed); priv->phydev = phydev; @@ -3681,6 +3677,7 @@ static const struct net_device_ops ucc_geth_netdev_ops = { .ndo_stop = ucc_geth_close, .ndo_start_xmit = ucc_geth_start_xmit, .ndo_validate_addr = eth_validate_addr, + .ndo_change_carrier = fixed_phy_change_carrier, .ndo_set_mac_address = ucc_geth_set_mac_addr, .ndo_set_rx_mode = ucc_geth_set_multi, .ndo_tx_timeout = ucc_geth_timeout, diff --git a/drivers/net/ethernet/hisilicon/Kconfig b/drivers/net/ethernet/hisilicon/Kconfig index 25152715396b..fee4664c9189 100644 --- a/drivers/net/ethernet/hisilicon/Kconfig +++ b/drivers/net/ethernet/hisilicon/Kconfig @@ -118,6 +118,7 @@ config HNS3_ENET tristate "Hisilicon HNS3 Ethernet Device Support" default m depends on 64BIT && PCI + depends on INET ---help--- This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for hip08 family of SoCs. 
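
Several hunks in this series (gianfar's init_phy() above, hns_nic_init_phy() below) move from the legacy u32 supported/advertising masks to linkmode bitmaps. A minimal sketch of the pattern, assuming a made-up helper name; only the __ETHTOOL_DECLARE_LINK_MODE_MASK/linkmode_* calls are the real API:

#include <linux/linkmode.h>
#include <linux/phy.h>

/* Hypothetical helper: clamp a PHY to autoneg + 10/100 full duplex by
 * intersecting its supported modes with an explicit allow-list.
 */
static void clamp_phy_to_fast_ethernet(struct phy_device *phydev)
{
	__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };

	linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, mask);
	linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, mask);
	linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, mask);

	linkmode_and(phydev->supported, phydev->supported, mask);
	linkmode_copy(phydev->advertising, phydev->supported);
}

The bitmap form removes the 32-link-mode ceiling of the old u32 fields, which is why these conversions land together across the freescale and hisilicon drivers in this merge.
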
This module depends upon HNAE3 driver to access the HNAE3 diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c index 6242249c9f4c..5748d3f722f6 100644 --- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c +++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c @@ -1163,6 +1163,7 @@ static void hns_nic_adjust_link(struct net_device *ndev) */ int hns_nic_init_phy(struct net_device *ndev, struct hnae_handle *h) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, }; struct phy_device *phy_dev = h->phy_dev; int ret; @@ -1180,8 +1181,9 @@ int hns_nic_init_phy(struct net_device *ndev, struct hnae_handle *h) if (unlikely(ret)) return -ENODEV; - phy_dev->supported &= h->if_support; - phy_dev->advertising = phy_dev->supported; + ethtool_convert_legacy_u32_to_link_mode(supported, h->if_support); + linkmode_and(phy_dev->supported, phy_dev->supported, supported); + linkmode_copy(phy_dev->advertising, phy_dev->supported); if (h->phy_if == PHY_INTERFACE_MODE_XGMII) phy_dev->autoneg = false; diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c index 774beda040a1..8e9b95871d30 100644 --- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c +++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c @@ -624,7 +624,7 @@ static void hns_nic_self_test(struct net_device *ndev, clear_bit(NIC_STATE_TESTING, &priv->state); if (if_running) - (void)dev_open(ndev); + (void)dev_open(ndev, NULL); } /* Online tests aren't run; pass by default */ diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile index 002534f12b66..d01bf536eb86 100644 --- a/drivers/net/ethernet/hisilicon/hns3/Makefile +++ b/drivers/net/ethernet/hisilicon/hns3/Makefile @@ -9,6 +9,6 @@ obj-$(CONFIG_HNS3) += hns3vf/ obj-$(CONFIG_HNS3) += hnae3.o obj-$(CONFIG_HNS3_ENET) += hns3.o -hns3-objs = hns3_enet.o hns3_ethtool.o +hns3-objs = hns3_enet.o hns3_ethtool.o hns3_debugfs.o hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h index 038326cfda93..691d12174902 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h @@ -36,6 +36,10 @@ enum HCLGE_MBX_OPCODE { HCLGE_MBX_BIND_FUNC_QUEUE, /* (VF -> PF) bind function and queue */ HCLGE_MBX_GET_LINK_STATUS, /* (VF -> PF) get link status */ HCLGE_MBX_QUEUE_RESET, /* (VF -> PF) reset queue */ + HCLGE_MBX_KEEP_ALIVE, /* (VF -> PF) send keep alive cmd */ + HCLGE_MBX_SET_ALIVE, /* (VF -> PF) set alive state */ + HCLGE_MBX_SET_MTU, /* (VF -> PF) set mtu */ + HCLGE_MBX_GET_QID_IN_PF, /* (VF -> PF) get queue id in pf */ }; /* below are per-VF mac-vlan subcodes */ @@ -85,6 +89,12 @@ struct hclge_mbx_pf_to_vf_cmd { u16 msg[8]; }; +struct hclge_vf_rst_cmd { + u8 dest_vfid; + u8 vf_rst; + u8 rsv[22]; +}; + /* used by VF to store the received Async responses from PF */ struct hclgevf_mbx_arq_ring { #define HCLGE_MBX_MAX_ARQ_MSG_SIZE 8 diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h index 055b40606dbc..36eab37d8a40 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h +++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h @@ -52,6 +52,7 @@ #define HNAE3_UNIC_CLIENT_INITED_B 0x4 #define HNAE3_ROCE_CLIENT_INITED_B 0x5 #define HNAE3_DEV_SUPPORT_FD_B 0x6 +#define HNAE3_DEV_SUPPORT_GRO_B 0x7 #define HNAE3_DEV_SUPPORT_ROCE_DCB_BITS (BIT(HNAE3_DEV_SUPPORT_DCB_B) |\ 
BIT(HNAE3_DEV_SUPPORT_ROCE_B)) @@ -65,6 +66,9 @@ #define hnae3_dev_fd_supported(hdev) \ hnae3_get_bit((hdev)->ae_dev->flag, HNAE3_DEV_SUPPORT_FD_B) +#define hnae3_dev_gro_supported(hdev) \ + hnae3_get_bit((hdev)->ae_dev->flag, HNAE3_DEV_SUPPORT_GRO_B) + #define ring_ptr_move_fw(ring, p) \ ((ring)->p = ((ring)->p + 1) % (ring)->desc_num) #define ring_ptr_move_bw(ring, p) \ @@ -124,14 +128,23 @@ enum hnae3_reset_notify_type { enum hnae3_reset_type { HNAE3_VF_RESET, + HNAE3_VF_FUNC_RESET, + HNAE3_VF_PF_FUNC_RESET, HNAE3_VF_FULL_RESET, + HNAE3_FLR_RESET, HNAE3_FUNC_RESET, HNAE3_CORE_RESET, HNAE3_GLOBAL_RESET, HNAE3_IMP_RESET, + HNAE3_UNKNOWN_RESET, HNAE3_NONE_RESET, }; +enum hnae3_flr_state { + HNAE3_FLR_DOWN, + HNAE3_FLR_DONE, +}; + struct hnae3_vector_info { u8 __iomem *io_addr; int vector; @@ -162,6 +175,7 @@ struct hnae3_client_ops { int (*setup_tc)(struct hnae3_handle *handle, u8 tc); int (*reset_notify)(struct hnae3_handle *handle, enum hnae3_reset_notify_type type); + enum hnae3_reset_type (*process_hw_error)(struct hnae3_handle *handle); }; #define HNAE3_CLIENT_NAME_LENGTH 16 @@ -197,6 +211,10 @@ struct hnae3_ae_dev { * Enable the hardware * stop() * Disable the hardware + * start_client() + * Inform the hclge that client has been started + * stop_client() + * Inform the hclge that client has been stopped * get_status() * Get the carrier state of the back channel of the handle, 1 for ok, 0 for * non-ok @@ -292,17 +310,22 @@ struct hnae3_ae_dev { * Set vlan filter config of vf * enable_hw_strip_rxvtag() * Enable/disable hardware strip vlan tag of packets received + * set_gro_en + * Enable/disable HW GRO */ struct hnae3_ae_ops { int (*init_ae_dev)(struct hnae3_ae_dev *ae_dev); void (*uninit_ae_dev)(struct hnae3_ae_dev *ae_dev); - + void (*flr_prepare)(struct hnae3_ae_dev *ae_dev); + void (*flr_done)(struct hnae3_ae_dev *ae_dev); int (*init_client_instance)(struct hnae3_client *client, struct hnae3_ae_dev *ae_dev); void (*uninit_client_instance)(struct hnae3_client *client, struct hnae3_ae_dev *ae_dev); int (*start)(struct hnae3_handle *handle); void (*stop)(struct hnae3_handle *handle); + int (*client_start)(struct hnae3_handle *handle); + void (*client_stop)(struct hnae3_handle *handle); int (*get_status)(struct hnae3_handle *handle); void (*get_ksettings_an_result)(struct hnae3_handle *handle, u8 *auto_neg, u32 *speed, u8 *duplex); @@ -403,6 +426,8 @@ struct hnae3_ae_ops { u16 vlan, u8 qos, __be16 proto); int (*enable_hw_strip_rxvtag)(struct hnae3_handle *handle, bool enable); void (*reset_event)(struct pci_dev *pdev, struct hnae3_handle *handle); + void (*set_default_reset_request)(struct hnae3_ae_dev *ae_dev, + enum hnae3_reset_type rst_type); void (*get_channels)(struct hnae3_handle *handle, struct ethtool_channels *ch); void (*get_tqps_and_rss_info)(struct hnae3_handle *h, @@ -429,7 +454,14 @@ struct hnae3_ae_ops { struct ethtool_rxnfc *cmd, u32 *rule_locs); int (*restore_fd_rules)(struct hnae3_handle *handle); void (*enable_fd)(struct hnae3_handle *handle, bool enable); - pci_ers_result_t (*process_hw_error)(struct hnae3_ae_dev *ae_dev); + int (*dbg_run_cmd)(struct hnae3_handle *handle, char *cmd_buf); + pci_ers_result_t (*handle_hw_ras_error)(struct hnae3_ae_dev *ae_dev); + bool (*get_hw_reset_stat)(struct hnae3_handle *handle); + bool (*ae_dev_resetting)(struct hnae3_handle *handle); + unsigned long (*ae_dev_reset_cnt)(struct hnae3_handle *handle); + int (*set_gro_en)(struct hnae3_handle *handle, int enable); + u16 (*get_global_queue_id)(struct hnae3_handle *handle, u16 
queue_id); + void (*set_timer_task)(struct hnae3_handle *handle, bool enable); }; struct hnae3_dcb_ops { @@ -488,6 +520,14 @@ struct hnae3_roce_private_info { void __iomem *roce_io_base; int base_vector; int num_vectors; + + /* The below attributes defined for RoCE client, hnae3 gives + * initial values to them, and RoCE client can modify and use + * them. + */ + unsigned long reset_state; + unsigned long instance_state; + unsigned long state; }; struct hnae3_unic_private_info { @@ -520,9 +560,6 @@ struct hnae3_handle { struct hnae3_ae_algo *ae_algo; /* the class who provides this handle */ u64 flags; /* Indicate the capabilities for this handle*/ - unsigned long last_reset_time; - enum hnae3_reset_type reset_level; - union { struct net_device *netdev; /* first member */ struct hnae3_knic_private_info kinfo; @@ -533,6 +570,7 @@ struct hnae3_handle { u32 numa_node_mask; /* for multi-chip support */ u8 netdev_flags; + struct dentry *hnae3_dbgfs; }; #define hnae3_set_field(origin, mask, shift, val) \ diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c index ea5f8a84070d..b6fabbbdfd5b 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c @@ -9,6 +9,9 @@ int hns3_dcbnl_ieee_getets(struct net_device *ndev, struct ieee_ets *ets) { struct hnae3_handle *h = hns3_get_handle(ndev); + if (hns3_nic_resetting(ndev)) + return -EBUSY; + if (h->kinfo.dcb_ops->ieee_getets) return h->kinfo.dcb_ops->ieee_getets(h, ets); @@ -20,6 +23,9 @@ int hns3_dcbnl_ieee_setets(struct net_device *ndev, struct ieee_ets *ets) { struct hnae3_handle *h = hns3_get_handle(ndev); + if (hns3_nic_resetting(ndev)) + return -EBUSY; + if (h->kinfo.dcb_ops->ieee_setets) return h->kinfo.dcb_ops->ieee_setets(h, ets); @@ -31,6 +37,9 @@ int hns3_dcbnl_ieee_getpfc(struct net_device *ndev, struct ieee_pfc *pfc) { struct hnae3_handle *h = hns3_get_handle(ndev); + if (hns3_nic_resetting(ndev)) + return -EBUSY; + if (h->kinfo.dcb_ops->ieee_getpfc) return h->kinfo.dcb_ops->ieee_getpfc(h, pfc); @@ -42,6 +51,9 @@ int hns3_dcbnl_ieee_setpfc(struct net_device *ndev, struct ieee_pfc *pfc) { struct hnae3_handle *h = hns3_get_handle(ndev); + if (hns3_nic_resetting(ndev)) + return -EBUSY; + if (h->kinfo.dcb_ops->ieee_setpfc) return h->kinfo.dcb_ops->ieee_setpfc(h, pfc); diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c new file mode 100644 index 000000000000..0de543faa5b1 --- /dev/null +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c @@ -0,0 +1,399 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Copyright (c) 2018-2019 Hisilicon Limited. 
*/ + +#include +#include + +#include "hnae3.h" +#include "hns3_enet.h" + +#define HNS3_DBG_READ_LEN 256 + +static struct dentry *hns3_dbgfs_root; + +static int hns3_dbg_queue_info(struct hnae3_handle *h, char *cmd_buf) +{ + struct hns3_nic_priv *priv = h->priv; + struct hns3_nic_ring_data *ring_data; + struct hns3_enet_ring *ring; + u32 base_add_l, base_add_h; + u32 queue_num, queue_max; + u32 value, i = 0; + int cnt; + + if (!priv->ring_data) { + dev_err(&h->pdev->dev, "ring_data is NULL\n"); + return -EFAULT; + } + + queue_max = h->kinfo.num_tqps; + cnt = kstrtouint(&cmd_buf[11], 0, &queue_num); + if (cnt) + queue_num = 0; + else + queue_max = queue_num + 1; + + dev_info(&h->pdev->dev, "queue info\n"); + + if (queue_num >= h->kinfo.num_tqps) { + dev_err(&h->pdev->dev, + "Queue number(%u) is out of range(%u)\n", queue_num, + h->kinfo.num_tqps - 1); + return -EINVAL; + } + + ring_data = priv->ring_data; + for (i = queue_num; i < queue_max; i++) { + /* Each cycle needs to determine whether the instance is reset, + * to prevent reference to invalid memory. And need to ensure + * that the following code is executed within 100ms. + */ + if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) || + test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) + return -EPERM; + + ring = ring_data[(u32)(i + h->kinfo.num_tqps)].ring; + base_add_h = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_BASEADDR_H_REG); + base_add_l = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_BASEADDR_L_REG); + dev_info(&h->pdev->dev, "RX(%d) BASE ADD: 0x%08x%08x\n", i, + base_add_h, base_add_l); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_BD_NUM_REG); + dev_info(&h->pdev->dev, "RX(%d) RING BD NUM: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_BD_LEN_REG); + dev_info(&h->pdev->dev, "RX(%d) RING BD LEN: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_TAIL_REG); + dev_info(&h->pdev->dev, "RX(%d) RING TAIL: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_HEAD_REG); + dev_info(&h->pdev->dev, "RX(%d) RING HEAD: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_FBDNUM_REG); + dev_info(&h->pdev->dev, "RX(%d) RING FBDNUM: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_RX_RING_PKTNUM_RECORD_REG); + dev_info(&h->pdev->dev, "RX(%d) RING PKTNUM: %u\n", i, value); + + ring = ring_data[i].ring; + base_add_h = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_BASEADDR_H_REG); + base_add_l = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_BASEADDR_L_REG); + dev_info(&h->pdev->dev, "TX(%d) BASE ADD: 0x%08x%08x\n", i, + base_add_h, base_add_l); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_BD_NUM_REG); + dev_info(&h->pdev->dev, "TX(%d) RING BD NUM: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_TC_REG); + dev_info(&h->pdev->dev, "TX(%d) RING TC: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_TAIL_REG); + dev_info(&h->pdev->dev, "TX(%d) RING TAIL: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_HEAD_REG); + dev_info(&h->pdev->dev, "TX(%d) RING HEAD: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_FBDNUM_REG); + dev_info(&h->pdev->dev, "TX(%d) RING FBDNUM: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_OFFSET_REG); + 
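
hns3_dbg_queue_info() above treats everything after the fixed "queue info " prefix (11 bytes, &cmd_buf[11]) as an optional queue number. A condensed sketch of that parse with illustrative names; kstrtouint() is the real helper:

#include <linux/kernel.h>

/* Parse the optional trailing number of a fixed-prefix debugfs command;
 * returns true and stores the value only if a valid number was given.
 */
static bool parse_optional_queue(const char *cmd_buf, size_t prefix_len,
				 unsigned int *qnum)
{
	return kstrtouint(cmd_buf + prefix_len, 0, qnum) == 0;
}

When no valid number follows the prefix, the code above falls back to dumping every queue (queue_num = 0, queue_max = num_tqps); with a valid number it narrows the loop to that single queue.
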
dev_info(&h->pdev->dev, "TX(%d) RING OFFSET: %u\n", i, value); + + value = readl_relaxed(ring->tqp->io_base + + HNS3_RING_TX_RING_PKTNUM_RECORD_REG); + dev_info(&h->pdev->dev, "TX(%d) RING PKTNUM: %u\n\n", i, + value); + } + + return 0; +} + +static int hns3_dbg_queue_map(struct hnae3_handle *h) +{ + struct hns3_nic_priv *priv = h->priv; + struct hns3_nic_ring_data *ring_data; + int i; + + if (!h->ae_algo->ops->get_global_queue_id) + return -EOPNOTSUPP; + + dev_info(&h->pdev->dev, "map info for queue id and vector id\n"); + dev_info(&h->pdev->dev, + "local queue id | global queue id | vector id\n"); + for (i = 0; i < h->kinfo.num_tqps; i++) { + u16 global_qid; + + global_qid = h->ae_algo->ops->get_global_queue_id(h, i); + ring_data = &priv->ring_data[i]; + if (!ring_data || !ring_data->ring || + !ring_data->ring->tqp_vector) + continue; + + dev_info(&h->pdev->dev, + " %4d %4d %4d\n", + i, global_qid, + ring_data->ring->tqp_vector->vector_irq); + } + + return 0; +} + +static int hns3_dbg_bd_info(struct hnae3_handle *h, char *cmd_buf) +{ + struct hns3_nic_priv *priv = h->priv; + struct hns3_nic_ring_data *ring_data; + struct hns3_desc *rx_desc, *tx_desc; + struct device *dev = &h->pdev->dev; + struct hns3_enet_ring *ring; + u32 tx_index, rx_index; + u32 q_num, value; + int cnt; + + cnt = sscanf(&cmd_buf[8], "%u %u", &q_num, &tx_index); + if (cnt == 2) { + rx_index = tx_index; + } else if (cnt != 1) { + dev_err(dev, "bd info: bad command string, cnt=%d\n", cnt); + return -EINVAL; + } + + if (q_num >= h->kinfo.num_tqps) { + dev_err(dev, "Queue number(%u) is out of range(%u)\n", q_num, + h->kinfo.num_tqps - 1); + return -EINVAL; + } + + ring_data = priv->ring_data; + ring = ring_data[q_num].ring; + value = readl_relaxed(ring->tqp->io_base + HNS3_RING_TX_RING_TAIL_REG); + tx_index = (cnt == 1) ? value : tx_index; + + if (tx_index >= ring->desc_num) { + dev_err(dev, "bd index (%u) is out of range(%u)\n", tx_index, + ring->desc_num - 1); + return -EINVAL; + } + + tx_desc = &ring->desc[tx_index]; + dev_info(dev, "TX Queue Num: %u, BD Index: %u\n", q_num, tx_index); + dev_info(dev, "(TX) addr: 0x%llx\n", tx_desc->addr); + dev_info(dev, "(TX)vlan_tag: %u\n", tx_desc->tx.vlan_tag); + dev_info(dev, "(TX)send_size: %u\n", tx_desc->tx.send_size); + dev_info(dev, "(TX)vlan_tso: %u\n", tx_desc->tx.type_cs_vlan_tso); + dev_info(dev, "(TX)l2_len: %u\n", tx_desc->tx.l2_len); + dev_info(dev, "(TX)l3_len: %u\n", tx_desc->tx.l3_len); + dev_info(dev, "(TX)l4_len: %u\n", tx_desc->tx.l4_len); + dev_info(dev, "(TX)vlan_tag: %u\n", tx_desc->tx.outer_vlan_tag); + dev_info(dev, "(TX)tv: %u\n", tx_desc->tx.tv); + dev_info(dev, "(TX)vlan_msec: %u\n", tx_desc->tx.ol_type_vlan_msec); + dev_info(dev, "(TX)ol2_len: %u\n", tx_desc->tx.ol2_len); + dev_info(dev, "(TX)ol3_len: %u\n", tx_desc->tx.ol3_len); + dev_info(dev, "(TX)ol4_len: %u\n", tx_desc->tx.ol4_len); + dev_info(dev, "(TX)paylen: %u\n", tx_desc->tx.paylen); + dev_info(dev, "(TX)vld_ra_ri: %u\n", tx_desc->tx.bdtp_fe_sc_vld_ra_ri); + dev_info(dev, "(TX)mss: %u\n", tx_desc->tx.mss); + + ring = ring_data[q_num + h->kinfo.num_tqps].ring; + value = readl_relaxed(ring->tqp->io_base + HNS3_RING_RX_RING_TAIL_REG); + rx_index = (cnt == 1) ? 
value : tx_index; + rx_desc = &ring->desc[rx_index]; + + dev_info(dev, "RX Queue Num: %u, BD Index: %u\n", q_num, rx_index); + dev_info(dev, "(RX)addr: 0x%llx\n", rx_desc->addr); + dev_info(dev, "(RX)pkt_len: %u\n", rx_desc->rx.pkt_len); + dev_info(dev, "(RX)size: %u\n", rx_desc->rx.size); + dev_info(dev, "(RX)rss_hash: %u\n", rx_desc->rx.rss_hash); + dev_info(dev, "(RX)fd_id: %u\n", rx_desc->rx.fd_id); + dev_info(dev, "(RX)vlan_tag: %u\n", rx_desc->rx.vlan_tag); + dev_info(dev, "(RX)o_dm_vlan_id_fb: %u\n", rx_desc->rx.o_dm_vlan_id_fb); + dev_info(dev, "(RX)ot_vlan_tag: %u\n", rx_desc->rx.ot_vlan_tag); + dev_info(dev, "(RX)bd_base_info: %u\n", rx_desc->rx.bd_base_info); + + return 0; +} + +static void hns3_dbg_help(struct hnae3_handle *h) +{ +#define HNS3_DBG_BUF_LEN 256 + + char printf_buf[HNS3_DBG_BUF_LEN]; + + dev_info(&h->pdev->dev, "available commands\n"); + dev_info(&h->pdev->dev, "queue info [number]\n"); + dev_info(&h->pdev->dev, "queue map\n"); + dev_info(&h->pdev->dev, "bd info [q_num] <bd index>\n"); + dev_info(&h->pdev->dev, "dump fd tcam\n"); + dev_info(&h->pdev->dev, "dump tc\n"); + dev_info(&h->pdev->dev, "dump tm map [q_num]\n"); + dev_info(&h->pdev->dev, "dump tm\n"); + dev_info(&h->pdev->dev, "dump qos pause cfg\n"); + dev_info(&h->pdev->dev, "dump qos pri map\n"); + dev_info(&h->pdev->dev, "dump qos buf cfg\n"); + dev_info(&h->pdev->dev, "dump mng tbl\n"); + + memset(printf_buf, 0, HNS3_DBG_BUF_LEN); + strncat(printf_buf, "dump reg [[bios common] [ssu <prt_id>]", + HNS3_DBG_BUF_LEN - 1); + strncat(printf_buf + strlen(printf_buf), + " [igu egu <prt_id>] [rpu <tc_queue_num>]", + HNS3_DBG_BUF_LEN - strlen(printf_buf) - 1); + strncat(printf_buf + strlen(printf_buf), + " [rtc] [ppp] [rcb] [tqp <q_num>]]\n", + HNS3_DBG_BUF_LEN - strlen(printf_buf) - 1); + dev_info(&h->pdev->dev, "%s", printf_buf); + + memset(printf_buf, 0, HNS3_DBG_BUF_LEN); + strncat(printf_buf, "dump reg dcb [port_id] [pri_id] [pg_id]", + HNS3_DBG_BUF_LEN - 1); + strncat(printf_buf + strlen(printf_buf), " [rq_id] [nq_id] [qset_id]\n", + HNS3_DBG_BUF_LEN - strlen(printf_buf) - 1); + dev_info(&h->pdev->dev, "%s", printf_buf); +} + +static ssize_t hns3_dbg_cmd_read(struct file *filp, char __user *buffer, + size_t count, loff_t *ppos) +{ + int uncopy_bytes; + char *buf; + int len; + + if (*ppos != 0) + return 0; + + if (count < HNS3_DBG_READ_LEN) + return -ENOSPC; + + buf = kzalloc(HNS3_DBG_READ_LEN, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + len = snprintf(buf, HNS3_DBG_READ_LEN, "%s\n", + "Please echo help to cmd to get help information"); + uncopy_bytes = copy_to_user(buffer, buf, len); + + kfree(buf); + + if (uncopy_bytes) + return -EFAULT; + + return (*ppos = len); +} + +static ssize_t hns3_dbg_cmd_write(struct file *filp, const char __user *buffer, + size_t count, loff_t *ppos) +{ + struct hnae3_handle *handle = filp->private_data; + struct hns3_nic_priv *priv = handle->priv; + char *cmd_buf, *cmd_buf_tmp; + int uncopied_bytes; + int ret = 0; + + if (*ppos != 0) + return 0; + + /* Judge if the instance is being reset.
*/ + if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) || + test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) + return 0; + + cmd_buf = kzalloc(count + 1, GFP_KERNEL); + if (!cmd_buf) + return count; + + uncopied_bytes = copy_from_user(cmd_buf, buffer, count); + if (uncopied_bytes) { + kfree(cmd_buf); + return -EFAULT; + } + + cmd_buf[count] = '\0'; + + cmd_buf_tmp = strchr(cmd_buf, '\n'); + if (cmd_buf_tmp) { + *cmd_buf_tmp = '\0'; + count = cmd_buf_tmp - cmd_buf + 1; + } + + if (strncmp(cmd_buf, "help", 4) == 0) + hns3_dbg_help(handle); + else if (strncmp(cmd_buf, "queue info", 10) == 0) + ret = hns3_dbg_queue_info(handle, cmd_buf); + else if (strncmp(cmd_buf, "queue map", 9) == 0) + ret = hns3_dbg_queue_map(handle); + else if (strncmp(cmd_buf, "bd info", 7) == 0) + ret = hns3_dbg_bd_info(handle, cmd_buf); + else if (handle->ae_algo->ops->dbg_run_cmd) + ret = handle->ae_algo->ops->dbg_run_cmd(handle, cmd_buf); + + if (ret) + hns3_dbg_help(handle); + + kfree(cmd_buf); + cmd_buf = NULL; + + return count; +} + +static const struct file_operations hns3_dbg_cmd_fops = { + .owner = THIS_MODULE, + .open = simple_open, + .read = hns3_dbg_cmd_read, + .write = hns3_dbg_cmd_write, +}; + +void hns3_dbg_init(struct hnae3_handle *handle) +{ + const char *name = pci_name(handle->pdev); + struct dentry *pfile; + + handle->hnae3_dbgfs = debugfs_create_dir(name, hns3_dbgfs_root); + if (!handle->hnae3_dbgfs) + return; + + pfile = debugfs_create_file("cmd", 0600, handle->hnae3_dbgfs, handle, + &hns3_dbg_cmd_fops); + if (!pfile) { + debugfs_remove_recursive(handle->hnae3_dbgfs); + handle->hnae3_dbgfs = NULL; + dev_warn(&handle->pdev->dev, "create file for %s fail\n", + name); + } +} + +void hns3_dbg_uninit(struct hnae3_handle *handle) +{ + debugfs_remove_recursive(handle->hnae3_dbgfs); + handle->hnae3_dbgfs = NULL; +} + +void hns3_dbg_register_debugfs(const char *debugfs_dir_name) +{ + hns3_dbgfs_root = debugfs_create_dir(debugfs_dir_name, NULL); + if (!hns3_dbgfs_root) { + pr_warn("Register debugfs for %s fail\n", debugfs_dir_name); + return; + } +} + +void hns3_dbg_unregister_debugfs(void) +{ + debugfs_remove_recursive(hns3_dbgfs_root); + hns3_dbgfs_root = NULL; +} diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index 20fcf0d1c2ce..d3b9aaf96c1c 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include "hnae3.h" @@ -239,7 +240,6 @@ static void hns3_vector_gl_rl_init(struct hns3_enet_tqp_vector *tqp_vector, tqp_vector->tx_group.coal.int_gl = HNS3_INT_GL_50K; tqp_vector->rx_group.coal.int_gl = HNS3_INT_GL_50K; - tqp_vector->int_adapt_down = HNS3_INT_ADAPT_DOWN_START; tqp_vector->rx_group.coal.flow_level = HNS3_FLOW_LOW; tqp_vector->tx_group.coal.flow_level = HNS3_FLOW_LOW; } @@ -312,6 +312,24 @@ static u16 hns3_get_max_available_channels(struct hnae3_handle *h) return min_t(u16, rss_size, max_rss_size); } +static void hns3_tqp_enable(struct hnae3_queue *tqp) +{ + u32 rcb_reg; + + rcb_reg = hns3_read_dev(tqp, HNS3_RING_EN_REG); + rcb_reg |= BIT(HNS3_RING_EN_B); + hns3_write_dev(tqp, HNS3_RING_EN_REG, rcb_reg); +} + +static void hns3_tqp_disable(struct hnae3_queue *tqp) +{ + u32 rcb_reg; + + rcb_reg = hns3_read_dev(tqp, HNS3_RING_EN_REG); + rcb_reg &= ~BIT(HNS3_RING_EN_B); + hns3_write_dev(tqp, HNS3_RING_EN_REG, rcb_reg); +} + static int hns3_nic_net_up(struct net_device *netdev) { struct hns3_nic_priv *priv = 
netdev_priv(netdev); @@ -334,6 +352,10 @@ static int hns3_nic_net_up(struct net_device *netdev) for (i = 0; i < priv->vector_num; i++) hns3_vector_enable(&priv->tqp_vector[i]); + /* enable rcb */ + for (j = 0; j < h->kinfo.num_tqps; j++) + hns3_tqp_enable(h->kinfo.tqp[j]); + /* start the ae_dev */ ret = h->ae_algo->ops->start ? h->ae_algo->ops->start(h) : 0; if (ret) @@ -344,6 +366,9 @@ static int hns3_nic_net_up(struct net_device *netdev) return 0; out_start_err: + while (j--) + hns3_tqp_disable(h->kinfo.tqp[j]); + for (j = i - 1; j >= 0; j--) hns3_vector_disable(&priv->tqp_vector[j]); @@ -359,6 +384,9 @@ static int hns3_nic_net_open(struct net_device *netdev) struct hnae3_knic_private_info *kinfo; int i, ret; + if (hns3_nic_resetting(netdev)) + return -EBUSY; + netif_carrier_off(netdev); ret = hns3_nic_set_real_num_queue(netdev); @@ -378,23 +406,27 @@ static int hns3_nic_net_open(struct net_device *netdev) kinfo->prio_tc[i]); } - priv->ae_handle->last_reset_time = jiffies; + if (h->ae_algo->ops->set_timer_task) + h->ae_algo->ops->set_timer_task(priv->ae_handle, true); + return 0; } static void hns3_nic_net_down(struct net_device *netdev) { struct hns3_nic_priv *priv = netdev_priv(netdev); + struct hnae3_handle *h = hns3_get_handle(netdev); const struct hnae3_ae_ops *ops; int i; - if (test_and_set_bit(HNS3_NIC_STATE_DOWN, &priv->state)) - return; - /* disable vectors */ for (i = 0; i < priv->vector_num; i++) hns3_vector_disable(&priv->tqp_vector[i]); + /* disable rcb */ + for (i = 0; i < h->kinfo.num_tqps; i++) + hns3_tqp_disable(h->kinfo.tqp[i]); + /* stop ae_dev */ ops = priv->ae_handle->ae_algo->ops; if (ops->stop) @@ -408,6 +440,15 @@ static void hns3_nic_net_down(struct net_device *netdev) static int hns3_nic_net_stop(struct net_device *netdev) { + struct hns3_nic_priv *priv = netdev_priv(netdev); + struct hnae3_handle *h = hns3_get_handle(netdev); + + if (test_and_set_bit(HNS3_NIC_STATE_DOWN, &priv->state)) + return 0; + + if (h->ae_algo->ops->set_timer_task) + h->ae_algo->ops->set_timer_task(priv->ae_handle, false); + netif_tx_stop_all_queues(netdev); netif_carrier_off(netdev); @@ -1312,6 +1353,15 @@ static int hns3_nic_set_features(struct net_device *netdev, priv->ops.maybe_stop_tx = hns3_nic_maybe_stop_tx; } + if (changed & (NETIF_F_GRO_HW) && h->ae_algo->ops->set_gro_en) { + if (features & NETIF_F_GRO_HW) + ret = h->ae_algo->ops->set_gro_en(h, true); + else + ret = h->ae_algo->ops->set_gro_en(h, false); + if (ret) + return ret; + } + if ((changed & NETIF_F_HW_VLAN_CTAG_FILTER) && h->ae_algo->ops->enable_vlan_filter) { if (features & NETIF_F_HW_VLAN_CTAG_FILTER) @@ -1530,18 +1580,11 @@ static int hns3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, static int hns3_nic_change_mtu(struct net_device *netdev, int new_mtu) { struct hnae3_handle *h = hns3_get_handle(netdev); - bool if_running = netif_running(netdev); int ret; if (!h->ae_algo->ops->set_mtu) return -EOPNOTSUPP; - /* if this was called with netdev up then bring netdevice down */ - if (if_running) { - (void)hns3_nic_net_stop(netdev); - msleep(100); - } - ret = h->ae_algo->ops->set_mtu(h, new_mtu); if (ret) netdev_err(netdev, "failed to change MTU in hardware %d\n", @@ -1549,10 +1592,6 @@ static int hns3_nic_change_mtu(struct net_device *netdev, int new_mtu) else netdev->mtu = new_mtu; - /* if the netdev was running earlier, bring it up again */ - if (if_running && hns3_nic_net_open(netdev)) - ret = -EINVAL; - return ret; } @@ -1615,10 +1654,9 @@ static void hns3_nic_net_timeout(struct net_device *ndev) 
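
hns3_nic_set_features() above only touches hardware for feature bits that actually flipped, and aborts the transition on error so the core keeps the old feature set. A stripped-down sketch of that contract for the new NETIF_F_GRO_HW bit; set_hw_gro() is a hypothetical device callback, not part of the driver:

#include <linux/netdevice.h>

static int set_hw_gro(struct net_device *ndev, bool enable); /* hypothetical */

/* ndo_set_features sketch: act only on changed bits, fail atomically */
static int example_set_features(struct net_device *ndev,
				netdev_features_t features)
{
	netdev_features_t changed = ndev->features ^ features;

	if (changed & NETIF_F_GRO_HW) {
		int err = set_hw_gro(ndev, !!(features & NETIF_F_GRO_HW));

		if (err)
			return err; /* core then leaves ndev->features as-is */
	}

	return 0;
}
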
priv->tx_timeout_count++; - if (time_before(jiffies, (h->last_reset_time + ndev->watchdog_timeo))) - return; - - /* request the reset */ + /* request the reset, and let the hclge to determine + * which reset level should be done + */ if (h->ae_algo->ops->reset_event) h->ae_algo->ops->reset_event(h->pdev, h); } @@ -1682,8 +1720,10 @@ static void hns3_disable_sriov(struct pci_dev *pdev) static void hns3_get_dev_capability(struct pci_dev *pdev, struct hnae3_ae_dev *ae_dev) { - if (pdev->revision >= 0x21) + if (pdev->revision >= 0x21) { hnae3_set_bit(ae_dev->flag, HNAE3_DEV_SUPPORT_FD_B, 1); + hnae3_set_bit(ae_dev->flag, HNAE3_DEV_SUPPORT_GRO_B, 1); + } } /* hns3_probe - Device initialization routine @@ -1795,8 +1835,8 @@ static pci_ers_result_t hns3_error_detected(struct pci_dev *pdev, return PCI_ERS_RESULT_NONE; } - if (ae_dev->ops->process_hw_error) - ret = ae_dev->ops->process_hw_error(ae_dev); + if (ae_dev->ops->handle_hw_ras_error) + ret = ae_dev->ops->handle_hw_ras_error(ae_dev); else return PCI_ERS_RESULT_NONE; @@ -1819,9 +1859,29 @@ static pci_ers_result_t hns3_slot_reset(struct pci_dev *pdev) return PCI_ERS_RESULT_DISCONNECT; } +static void hns3_reset_prepare(struct pci_dev *pdev) +{ + struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev); + + dev_info(&pdev->dev, "hns3 flr prepare\n"); + if (ae_dev && ae_dev->ops && ae_dev->ops->flr_prepare) + ae_dev->ops->flr_prepare(ae_dev); +} + +static void hns3_reset_done(struct pci_dev *pdev) +{ + struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev); + + dev_info(&pdev->dev, "hns3 flr done\n"); + if (ae_dev && ae_dev->ops && ae_dev->ops->flr_done) + ae_dev->ops->flr_done(ae_dev); +} + static const struct pci_error_handlers hns3_err_handler = { .error_detected = hns3_error_detected, .slot_reset = hns3_slot_reset, + .reset_prepare = hns3_reset_prepare, + .reset_done = hns3_reset_done, }; static struct pci_driver hns3_driver = { @@ -1875,7 +1935,9 @@ static void hns3_set_default_feature(struct net_device *netdev) NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_SCTP_CRC; if (pdev->revision >= 0x21) { - netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER; + netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER | + NETIF_F_GRO_HW; + netdev->features |= NETIF_F_GRO_HW; if (!(h->flags & HNAE3_SUPPORT_VF)) { netdev->hw_features |= NETIF_F_NTUPLE; @@ -2253,6 +2315,12 @@ static void hns3_rx_checksum(struct hns3_enet_ring *ring, struct sk_buff *skb, if (!(netdev->features & NETIF_F_RXCSUM)) return; + /* We MUST enable hardware checksum before enabling hardware GRO */ + if (skb_shinfo(skb)->gso_size) { + skb->ip_summed = CHECKSUM_UNNECESSARY; + return; + } + /* check if hardware has done checksum */ if (!hnae3_get_bit(bd_base_info, HNS3_RXD_L3L4P_B)) return; @@ -2296,6 +2364,9 @@ static void hns3_rx_checksum(struct hns3_enet_ring *ring, struct sk_buff *skb, static void hns3_rx_skb(struct hns3_enet_ring *ring, struct sk_buff *skb) { + if (skb_has_frag_list(skb)) + napi_gro_flush(&ring->tqp_vector->napi, false); + napi_gro_receive(&ring->tqp_vector->napi, skb); } @@ -2329,12 +2400,166 @@ static bool hns3_parse_vlan_tag(struct hns3_enet_ring *ring, } } +static int hns3_alloc_skb(struct hns3_enet_ring *ring, int length, + unsigned char *va) +{ +#define HNS3_NEED_ADD_FRAG 1 + struct hns3_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_clean]; + struct net_device *netdev = ring->tqp->handle->kinfo.netdev; + struct sk_buff *skb; + + ring->skb = napi_alloc_skb(&ring->tqp_vector->napi, HNS3_RX_HEAD_SIZE); + skb = ring->skb; + if (unlikely(!skb)) { + netdev_err(netdev, "alloc rx skb 
fail\n"); + + u64_stats_update_begin(&ring->syncp); + ring->stats.sw_err_cnt++; + u64_stats_update_end(&ring->syncp); + + return -ENOMEM; + } + + prefetchw(skb->data); + + ring->pending_buf = 1; + ring->frag_num = 0; + ring->tail_skb = NULL; + if (length <= HNS3_RX_HEAD_SIZE) { + memcpy(__skb_put(skb, length), va, ALIGN(length, sizeof(long))); + + /* We can reuse buffer as-is, just make sure it is local */ + if (likely(page_to_nid(desc_cb->priv) == numa_node_id())) + desc_cb->reuse_flag = 1; + else /* This page cannot be reused so discard it */ + put_page(desc_cb->priv); + + ring_ptr_move_fw(ring, next_to_clean); + return 0; + } + u64_stats_update_begin(&ring->syncp); + ring->stats.seg_pkt_cnt++; + u64_stats_update_end(&ring->syncp); + + ring->pull_len = eth_get_headlen(va, HNS3_RX_HEAD_SIZE); + __skb_put(skb, ring->pull_len); + hns3_nic_reuse_page(skb, ring->frag_num++, ring, ring->pull_len, + desc_cb); + ring_ptr_move_fw(ring, next_to_clean); + + return HNS3_NEED_ADD_FRAG; +} + +static int hns3_add_frag(struct hns3_enet_ring *ring, struct hns3_desc *desc, + struct sk_buff **out_skb, bool pending) +{ + struct sk_buff *skb = *out_skb; + struct sk_buff *head_skb = *out_skb; + struct sk_buff *new_skb; + struct hns3_desc_cb *desc_cb; + struct hns3_desc *pre_desc; + u32 bd_base_info; + int pre_bd; + + /* if there is pending bd, the SW param next_to_clean has moved + * to next and the next is NULL + */ + if (pending) { + pre_bd = (ring->next_to_clean - 1 + ring->desc_num) % + ring->desc_num; + pre_desc = &ring->desc[pre_bd]; + bd_base_info = le32_to_cpu(pre_desc->rx.bd_base_info); + } else { + bd_base_info = le32_to_cpu(desc->rx.bd_base_info); + } + + while (!hnae3_get_bit(bd_base_info, HNS3_RXD_FE_B)) { + desc = &ring->desc[ring->next_to_clean]; + desc_cb = &ring->desc_cb[ring->next_to_clean]; + bd_base_info = le32_to_cpu(desc->rx.bd_base_info); + if (!hnae3_get_bit(bd_base_info, HNS3_RXD_VLD_B)) + return -ENXIO; + + if (unlikely(ring->frag_num >= MAX_SKB_FRAGS)) { + new_skb = napi_alloc_skb(&ring->tqp_vector->napi, + HNS3_RX_HEAD_SIZE); + if (unlikely(!new_skb)) { + netdev_err(ring->tqp->handle->kinfo.netdev, + "alloc rx skb frag fail\n"); + return -ENXIO; + } + ring->frag_num = 0; + + if (ring->tail_skb) { + ring->tail_skb->next = new_skb; + ring->tail_skb = new_skb; + } else { + skb_shinfo(skb)->frag_list = new_skb; + ring->tail_skb = new_skb; + } + } + + if (ring->tail_skb) { + head_skb->truesize += hnae3_buf_size(ring); + head_skb->data_len += le16_to_cpu(desc->rx.size); + head_skb->len += le16_to_cpu(desc->rx.size); + skb = ring->tail_skb; + } + + hns3_nic_reuse_page(skb, ring->frag_num++, ring, 0, desc_cb); + ring_ptr_move_fw(ring, next_to_clean); + ring->pending_buf++; + } + + return 0; +} + +static void hns3_set_gro_param(struct sk_buff *skb, u32 l234info, + u32 bd_base_info) +{ + u16 gro_count; + u32 l3_type; + + gro_count = hnae3_get_field(l234info, HNS3_RXD_GRO_COUNT_M, + HNS3_RXD_GRO_COUNT_S); + /* if there is no HW GRO, do not set gro params */ + if (!gro_count) + return; + + /* tcp_gro_complete() will copy NAPI_GRO_CB(skb)->count + * to skb_shinfo(skb)->gso_segs + */ + NAPI_GRO_CB(skb)->count = gro_count; + + l3_type = hnae3_get_field(l234info, HNS3_RXD_L3ID_M, + HNS3_RXD_L3ID_S); + if (l3_type == HNS3_L3_TYPE_IPV4) + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; + else if (l3_type == HNS3_L3_TYPE_IPV6) + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + else + return; + + skb_shinfo(skb)->gso_size = hnae3_get_field(bd_base_info, + HNS3_RXD_GRO_SIZE_M, + HNS3_RXD_GRO_SIZE_S); + if 
(skb_shinfo(skb)->gso_size) + tcp_gro_complete(skb); +} + static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring, struct sk_buff *skb) { - struct hns3_desc *desc = &ring->desc[ring->next_to_clean]; struct hnae3_handle *handle = ring->tqp->handle; enum pkt_hash_types rss_type; + struct hns3_desc *desc; + int last_bd; + + /* When driver handle the rss type, ring->next_to_clean indicates the + * first descriptor of next packet, need -1 here. + */ + last_bd = (ring->next_to_clean - 1 + ring->desc_num) % ring->desc_num; + desc = &ring->desc[last_bd]; if (le32_to_cpu(desc->rx.rss_hash)) rss_type = handle->kinfo.rss_type; @@ -2345,18 +2570,16 @@ static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring, } static int hns3_handle_rx_bd(struct hns3_enet_ring *ring, - struct sk_buff **out_skb, int *out_bnum) + struct sk_buff **out_skb) { struct net_device *netdev = ring->tqp->handle->kinfo.netdev; + struct sk_buff *skb = ring->skb; struct hns3_desc_cb *desc_cb; struct hns3_desc *desc; - struct sk_buff *skb; - unsigned char *va; u32 bd_base_info; - int pull_len; u32 l234info; int length; - int bnum; + int ret; desc = &ring->desc[ring->next_to_clean]; desc_cb = &ring->desc_cb[ring->next_to_clean]; @@ -2368,9 +2591,10 @@ static int hns3_handle_rx_bd(struct hns3_enet_ring *ring, /* Check valid BD */ if (unlikely(!hnae3_get_bit(bd_base_info, HNS3_RXD_VLD_B))) - return -EFAULT; + return -ENXIO; - va = (unsigned char *)desc_cb->buf + desc_cb->page_offset; + if (!skb) + ring->va = (unsigned char *)desc_cb->buf + desc_cb->page_offset; /* Prefetch first cache line of first page * Idea is to cache few bytes of the header of the packet. Our L1 Cache @@ -2379,62 +2603,42 @@ static int hns3_handle_rx_bd(struct hns3_enet_ring *ring, * lines. In such a case, single fetch would suffice to cache in the * relevant part of the header. 
*/ - prefetch(va); + prefetch(ring->va); #if L1_CACHE_BYTES < 128 - prefetch(va + L1_CACHE_BYTES); + prefetch(ring->va + L1_CACHE_BYTES); #endif - skb = *out_skb = napi_alloc_skb(&ring->tqp_vector->napi, - HNS3_RX_HEAD_SIZE); - if (unlikely(!skb)) { - netdev_err(netdev, "alloc rx skb fail\n"); - - u64_stats_update_begin(&ring->syncp); - ring->stats.sw_err_cnt++; - u64_stats_update_end(&ring->syncp); - - return -ENOMEM; - } - - prefetchw(skb->data); - - bnum = 1; - if (length <= HNS3_RX_HEAD_SIZE) { - memcpy(__skb_put(skb, length), va, ALIGN(length, sizeof(long))); + if (!skb) { + ret = hns3_alloc_skb(ring, length, ring->va); + *out_skb = skb = ring->skb; - /* We can reuse buffer as-is, just make sure it is local */ - if (likely(page_to_nid(desc_cb->priv) == numa_node_id())) - desc_cb->reuse_flag = 1; - else /* This page cannot be reused so discard it */ - put_page(desc_cb->priv); + if (ret < 0) /* alloc buffer fail */ + return ret; + if (ret > 0) { /* need add frag */ + ret = hns3_add_frag(ring, desc, &skb, false); + if (ret) + return ret; - ring_ptr_move_fw(ring, next_to_clean); + /* As the head data may be changed when GRO enable, copy + * the head data in after other data rx completed + */ + memcpy(skb->data, ring->va, + ALIGN(ring->pull_len, sizeof(long))); + } } else { - u64_stats_update_begin(&ring->syncp); - ring->stats.seg_pkt_cnt++; - u64_stats_update_end(&ring->syncp); - - pull_len = eth_get_headlen(va, HNS3_RX_HEAD_SIZE); - - memcpy(__skb_put(skb, pull_len), va, - ALIGN(pull_len, sizeof(long))); - - hns3_nic_reuse_page(skb, 0, ring, pull_len, desc_cb); - ring_ptr_move_fw(ring, next_to_clean); + ret = hns3_add_frag(ring, desc, &skb, true); + if (ret) + return ret; - while (!hnae3_get_bit(bd_base_info, HNS3_RXD_FE_B)) { - desc = &ring->desc[ring->next_to_clean]; - desc_cb = &ring->desc_cb[ring->next_to_clean]; - bd_base_info = le32_to_cpu(desc->rx.bd_base_info); - hns3_nic_reuse_page(skb, bnum, ring, 0, desc_cb); - ring_ptr_move_fw(ring, next_to_clean); - bnum++; - } + /* As the head data may be changed when GRO enable, copy + * the head data in after other data rx completed + */ + memcpy(skb->data, ring->va, + ALIGN(ring->pull_len, sizeof(long))); } - *out_bnum = bnum; - l234info = le32_to_cpu(desc->rx.l234_info); + bd_base_info = le32_to_cpu(desc->rx.bd_base_info); /* Based on hw strategy, the tag offloaded will be stored at * ot_vlan_tag in two layer tag case, and stored at vlan_tag @@ -2484,7 +2688,11 @@ static int hns3_handle_rx_bd(struct hns3_enet_ring *ring, ring->tqp_vector->rx_group.total_bytes += skb->len; + /* This is needed in order to enable forwarding support */ + hns3_set_gro_param(skb, l234info, bd_base_info); + hns3_rx_checksum(ring, skb, desc); + *out_skb = skb; hns3_set_rx_skb_rss_type(ring, skb); return 0; @@ -2497,9 +2705,9 @@ int hns3_clean_rx_ring( #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16 struct net_device *netdev = ring->tqp->handle->kinfo.netdev; int recv_pkts, recv_bds, clean_count, err; - int unused_count = hns3_desc_unused(ring); - struct sk_buff *skb = NULL; - int num, bnum = 0; + int unused_count = hns3_desc_unused(ring) - ring->pending_buf; + struct sk_buff *skb = ring->skb; + int num; num = readl_relaxed(ring->tqp->io_base + HNS3_RING_RX_RING_FBDNUM_REG); rmb(); /* Make sure num taken effect before the other data is touched */ @@ -2513,24 +2721,32 @@ int hns3_clean_rx_ring( hns3_nic_alloc_rx_buffers(ring, clean_count + unused_count); clean_count = 0; - unused_count = hns3_desc_unused(ring); + unused_count = hns3_desc_unused(ring) - + ring->pending_buf; 
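
The Rx rework in these hunks repeatedly needs "the descriptor before next_to_clean" (pre_bd in hns3_add_frag(), last_bd in hns3_set_rx_skb_rss_type()). As a sketch with illustrative names, the wraparound step adds desc_num before subtracting so an unsigned index of 0 cannot underflow:

/* Previous slot in a circular descriptor ring of desc_num entries */
static inline unsigned int ring_prev_idx(unsigned int idx,
					 unsigned int desc_num)
{
	return (idx + desc_num - 1) % desc_num;
}

/* e.g.: last_bd = ring_prev_idx(ring->next_to_clean, ring->desc_num); */
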
} /* Poll one pkt */ - err = hns3_handle_rx_bd(ring, &skb, &bnum); + err = hns3_handle_rx_bd(ring, &skb); if (unlikely(!skb)) /* This fault cannot be repaired */ goto out; - recv_bds += bnum; - clean_count += bnum; - if (unlikely(err)) { /* Do jump the err */ - recv_pkts++; + if (err == -ENXIO) { /* Do not get FE for the packet */ + goto out; + } else if (unlikely(err)) { /* Do jump the err */ + recv_bds += ring->pending_buf; + clean_count += ring->pending_buf; + ring->skb = NULL; + ring->pending_buf = 0; continue; } /* Do update ip stack process */ skb->protocol = eth_type_trans(skb, netdev); rx_fn(ring, skb); + recv_bds += ring->pending_buf; + clean_count += ring->pending_buf; + ring->skb = NULL; + ring->pending_buf = 0; recv_pkts++; } @@ -2644,10 +2860,10 @@ static void hns3_update_new_int_gl(struct hns3_enet_tqp_vector *tqp_vector) struct hns3_enet_ring_group *tx_group = &tqp_vector->tx_group; bool rx_update, tx_update; - if (tqp_vector->int_adapt_down > 0) { - tqp_vector->int_adapt_down--; + /* update param every 1000ms */ + if (time_before(jiffies, + tqp_vector->last_jiffies + msecs_to_jiffies(1000))) return; - } if (rx_group->coal.gl_adapt_enable) { rx_update = hns3_get_new_int_gl(rx_group); @@ -2664,11 +2880,11 @@ static void hns3_update_new_int_gl(struct hns3_enet_tqp_vector *tqp_vector) } tqp_vector->last_jiffies = jiffies; - tqp_vector->int_adapt_down = HNS3_INT_ADAPT_DOWN_START; } static int hns3_nic_common_poll(struct napi_struct *napi, int budget) { + struct hns3_nic_priv *priv = netdev_priv(napi->dev); struct hns3_enet_ring *ring; int rx_pkt_total = 0; @@ -2677,6 +2893,11 @@ static int hns3_nic_common_poll(struct napi_struct *napi, int budget) bool clean_complete = true; int rx_budget; + if (unlikely(test_bit(HNS3_NIC_STATE_DOWN, &priv->state))) { + napi_complete(napi); + return 0; + } + /* Since the actual Tx work is minimal, we can give the Tx a larger * budget and be more aggressive about cleaning up the Tx descriptors. 
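
The poll-completion change a few hunks below keys the interrupt re-arm on napi_complete()'s return value, so a NAPI instance that was rescheduled in the meantime (or a device going down) never unmasks its vector. A sketch of that tail, assuming hypothetical helpers for the coalesce update and IRQ unmask:

#include <linux/netdevice.h>

static void update_int_coalesce(struct napi_struct *napi); /* hypothetical */
static void unmask_vector_irq(struct napi_struct *napi);   /* hypothetical */

/* Re-arm the vector only if NAPI really completed and the device is
 * not being torn down; napi_complete() returns false when NAPI was
 * rescheduled before completion.
 */
static void example_poll_tail(struct napi_struct *napi,
			      unsigned long *state_bits)
{
	if (napi_complete(napi) &&
	    likely(!test_bit(0 /* STATE_DOWN bit */, state_bits))) {
		update_int_coalesce(napi);
		unmask_vector_irq(napi);
	}
}
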
*/ @@ -2701,9 +2922,11 @@ static int hns3_nic_common_poll(struct napi_struct *napi, int budget) if (!clean_complete) return budget; - napi_complete(napi); - hns3_update_new_int_gl(tqp_vector); - hns3_mask_vector_irq(tqp_vector, 1); + if (napi_complete(napi) && + likely(!test_bit(HNS3_NIC_STATE_DOWN, &priv->state))) { + hns3_update_new_int_gl(tqp_vector); + hns3_mask_vector_irq(tqp_vector, 1); + } return rx_pkt_total; } @@ -2783,9 +3006,10 @@ err_free_chain: cur_chain = head->next; while (cur_chain) { chain = cur_chain->next; - devm_kfree(&pdev->dev, chain); + devm_kfree(&pdev->dev, cur_chain); cur_chain = chain; } + head->next = NULL; return -ENOMEM; } @@ -2876,7 +3100,7 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv) ret = hns3_get_vector_ring_chain(tqp_vector, &vector_ring_chain); if (ret) - return ret; + goto map_ring_fail; ret = h->ae_algo->ops->map_ring_to_vector(h, tqp_vector->vector_irq, &vector_ring_chain); @@ -2901,6 +3125,8 @@ map_ring_fail: static int hns3_nic_alloc_vector_data(struct hns3_nic_priv *priv) { +#define HNS3_VECTOR_PF_MAX_NUM 64 + struct hnae3_handle *h = priv->ae_handle; struct hns3_enet_tqp_vector *tqp_vector; struct hnae3_vector_info *vector; @@ -2913,6 +3139,8 @@ static int hns3_nic_alloc_vector_data(struct hns3_nic_priv *priv) /* RSS size, cpu online and vector_num should be the same */ /* Should consider 2p/4p later */ vector_num = min_t(u16, num_online_cpus(), tqp_num); + vector_num = min_t(u16, vector_num, HNS3_VECTOR_PF_MAX_NUM); + vector = devm_kcalloc(&pdev->dev, vector_num, sizeof(*vector), GFP_KERNEL); if (!vector) @@ -2970,12 +3198,12 @@ static int hns3_nic_uninit_vector_data(struct hns3_nic_priv *priv) hns3_free_vector_ring_chain(tqp_vector, &vector_ring_chain); - if (priv->tqp_vector[i].irq_init_flag == HNS3_VECTOR_INITED) { - (void)irq_set_affinity_hint( - priv->tqp_vector[i].vector_irq, - NULL); - free_irq(priv->tqp_vector[i].vector_irq, - &priv->tqp_vector[i]); + if (tqp_vector->irq_init_flag == HNS3_VECTOR_INITED) { + irq_set_affinity_notifier(tqp_vector->vector_irq, + NULL); + irq_set_affinity_hint(tqp_vector->vector_irq, NULL); + free_irq(tqp_vector->vector_irq, tqp_vector); + tqp_vector->irq_init_flag = HNS3_VECTOR_NOT_INITED; } priv->ring_data[i].ring->irq_init_flag = HNS3_VECTOR_NOT_INITED; @@ -3319,6 +3547,22 @@ static void hns3_nic_set_priv_ops(struct net_device *netdev) priv->ops.maybe_stop_tx = hns3_nic_maybe_stop_tx; } +static int hns3_client_start(struct hnae3_handle *handle) +{ + if (!handle->ae_algo->ops->client_start) + return 0; + + return handle->ae_algo->ops->client_start(handle); +} + +static void hns3_client_stop(struct hnae3_handle *handle) +{ + if (!handle->ae_algo->ops->client_stop) + return; + + handle->ae_algo->ops->client_stop(handle); +} + static int hns3_client_init(struct hnae3_handle *handle) { struct pci_dev *pdev = handle->pdev; @@ -3337,7 +3581,6 @@ static int hns3_client_init(struct hnae3_handle *handle) priv->dev = &pdev->dev; priv->netdev = netdev; priv->ae_handle = handle; - priv->ae_handle->last_reset_time = jiffies; priv->tx_timeout_count = 0; handle->kinfo.netdev = netdev; @@ -3357,11 +3600,6 @@ static int hns3_client_init(struct hnae3_handle *handle) /* Carrier off reporting is important to ethtool even BEFORE open */ netif_carrier_off(netdev); - if (handle->flags & HNAE3_SUPPORT_VF) - handle->reset_level = HNAE3_VF_RESET; - else - handle->reset_level = HNAE3_FUNC_RESET; - ret = hns3_get_ring_config(priv); if (ret) { ret = -ENOMEM; @@ -3392,10 +3630,20 @@ static int 
hns3_client_init(struct hnae3_handle *handle) goto out_reg_netdev_fail; } + ret = hns3_client_start(handle); + if (ret) { + dev_err(priv->dev, "hns3_client_start fail! ret=%d\n", ret); + goto out_reg_netdev_fail; + } + hns3_dcbnl_setup(handle); - /* MTU range: (ETH_MIN_MTU(kernel default) - 9706) */ - netdev->max_mtu = HNS3_MAX_MTU - (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN); + hns3_dbg_init(handle); + + /* MTU range: (ETH_MIN_MTU(kernel default) - 9702) */ + netdev->max_mtu = HNS3_MAX_MTU; + + set_bit(HNS3_NIC_STATE_INITED, &priv->state); return ret; @@ -3418,11 +3666,18 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset) struct hns3_nic_priv *priv = netdev_priv(netdev); int ret; + hns3_client_stop(handle); + hns3_remove_hw_addr(netdev); if (netdev->reg_state != NETREG_UNINITIALIZED) unregister_netdev(netdev); + if (!test_and_clear_bit(HNS3_NIC_STATE_INITED, &priv->state)) { + netdev_warn(netdev, "already uninitialized\n"); + goto out_netdev_free; + } + hns3_del_all_fd_rules(netdev, true); hns3_force_clear_all_rx_ring(handle); @@ -3441,8 +3696,11 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset) hns3_put_ring_config(priv); + hns3_dbg_uninit(handle); + priv->ring_data = NULL; +out_netdev_free: free_netdev(netdev); } @@ -3708,8 +3966,22 @@ static void hns3_restore_coal(struct hns3_nic_priv *priv) static int hns3_reset_notify_down_enet(struct hnae3_handle *handle) { + struct hnae3_ae_dev *ae_dev = pci_get_drvdata(handle->pdev); struct hnae3_knic_private_info *kinfo = &handle->kinfo; struct net_device *ndev = kinfo->netdev; + struct hns3_nic_priv *priv = netdev_priv(ndev); + + if (test_and_set_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) + return 0; + + /* it is cumbersome for hardware to pick-and-choose entries for deletion + * from table space. 
Hence, for function reset software intervention is + * required to delete the entries + */ + if (hns3_dev_ongoing_func_reset(ae_dev)) { + hns3_remove_hw_addr(ndev); + hns3_del_all_fd_rules(ndev, false); + } if (!netif_running(ndev)) return 0; @@ -3720,6 +3992,7 @@ static int hns3_reset_notify_down_enet(struct hnae3_handle *handle) static int hns3_reset_notify_up_enet(struct hnae3_handle *handle) { struct hnae3_knic_private_info *kinfo = &handle->kinfo; + struct hns3_nic_priv *priv = netdev_priv(kinfo->netdev); int ret = 0; if (netif_running(kinfo->netdev)) { @@ -3729,9 +4002,10 @@ static int hns3_reset_notify_up_enet(struct hnae3_handle *handle) "hns net up fail, ret=%d!\n", ret); return ret; } - handle->last_reset_time = jiffies; } + clear_bit(HNS3_NIC_STATE_RESETTING, &priv->state); + return ret; } @@ -3771,28 +4045,44 @@ static int hns3_reset_notify_init_enet(struct hnae3_handle *handle) /* Carrier off reporting is important to ethtool even BEFORE open */ netif_carrier_off(netdev); + ret = hns3_nic_alloc_vector_data(priv); + if (ret) + return ret; + hns3_restore_coal(priv); ret = hns3_nic_init_vector_data(priv); if (ret) - return ret; + goto err_dealloc_vector; ret = hns3_init_all_ring(priv); - if (ret) { - hns3_nic_uninit_vector_data(priv); - priv->ring_data = NULL; - } + if (ret) + goto err_uninit_vector; + + set_bit(HNS3_NIC_STATE_INITED, &priv->state); + + return ret; + +err_uninit_vector: + hns3_nic_uninit_vector_data(priv); + priv->ring_data = NULL; +err_dealloc_vector: + hns3_nic_dealloc_vector_data(priv); return ret; } static int hns3_reset_notify_uninit_enet(struct hnae3_handle *handle) { - struct hnae3_ae_dev *ae_dev = pci_get_drvdata(handle->pdev); struct net_device *netdev = handle->kinfo.netdev; struct hns3_nic_priv *priv = netdev_priv(netdev); int ret; + if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state)) { + netdev_warn(netdev, "already uninitialized\n"); + return 0; + } + hns3_force_clear_all_rx_ring(handle); ret = hns3_nic_uninit_vector_data(priv); @@ -3803,18 +4093,15 @@ static int hns3_reset_notify_uninit_enet(struct hnae3_handle *handle) hns3_store_coal(priv); + ret = hns3_nic_dealloc_vector_data(priv); + if (ret) + netdev_err(netdev, "dealloc vector error\n"); + ret = hns3_uninit_all_ring(priv); if (ret) netdev_err(netdev, "uninit ring error\n"); - /* it is cumbersome for hardware to pick-and-choose entries for deletion - * from table space. 
Hence, for function reset software intervention is - * required to delete the entries - */ - if (hns3_dev_ongoing_func_reset(ae_dev)) { - hns3_remove_hw_addr(netdev); - hns3_del_all_fd_rules(netdev, false); - } + clear_bit(HNS3_NIC_STATE_INITED, &priv->state); return ret; } @@ -3980,15 +4267,23 @@ static int __init hns3_init_module(void) INIT_LIST_HEAD(&client.node); + hns3_dbg_register_debugfs(hns3_driver_name); + ret = hnae3_register_client(&client); if (ret) - return ret; + goto err_reg_client; ret = pci_register_driver(&hns3_driver); if (ret) - hnae3_unregister_client(&client); + goto err_reg_driver; return ret; + +err_reg_driver: + hnae3_unregister_client(&client); +err_reg_client: + hns3_dbg_unregister_debugfs(); + return ret; } module_init(hns3_init_module); @@ -4000,6 +4295,7 @@ static void __exit hns3_exit_module(void) { pci_unregister_driver(&hns3_driver); hnae3_unregister_client(&client); + hns3_dbg_unregister_debugfs(); } module_exit(hns3_exit_module); diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h index d3636d088aa3..e55995e93bb0 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h @@ -15,7 +15,7 @@ extern const char hns3_driver_version[]; enum hns3_nic_state { HNS3_NIC_STATE_TESTING, HNS3_NIC_STATE_RESETTING, - HNS3_NIC_STATE_REINITING, + HNS3_NIC_STATE_INITED, HNS3_NIC_STATE_DOWN, HNS3_NIC_STATE_DISABLED, HNS3_NIC_STATE_REMOVING, @@ -47,7 +47,7 @@ enum hns3_nic_state { #define HNS3_RING_PREFETCH_EN_REG 0x0007C #define HNS3_RING_CFG_VF_NUM_REG 0x00080 #define HNS3_RING_ASID_REG 0x0008C -#define HNS3_RING_RX_VM_REG 0x00090 +#define HNS3_RING_EN_REG 0x00090 #define HNS3_RING_T0_BE_RST 0x00094 #define HNS3_RING_COULD_BE_RST 0x00098 #define HNS3_RING_WRR_WEIGHT_REG 0x0009c @@ -76,7 +76,10 @@ enum hns3_nic_state { #define HNS3_RING_MAX_PENDING 32768 #define HNS3_RING_MIN_PENDING 8 #define HNS3_RING_BD_MULTIPLE 8 -#define HNS3_MAX_MTU 9728 +/* max frame size of mac */ +#define HNS3_MAC_MAX_FRAME 9728 +#define HNS3_MAX_MTU \ + (HNS3_MAC_MAX_FRAME - (ETH_HLEN + ETH_FCS_LEN + 2 * VLAN_HLEN)) #define HNS3_BD_SIZE_512_TYPE 0 #define HNS3_BD_SIZE_1024_TYPE 1 @@ -109,6 +112,10 @@ enum hns3_nic_state { #define HNS3_RXD_DOI_B 21 #define HNS3_RXD_OL3E_B 22 #define HNS3_RXD_OL4E_B 23 +#define HNS3_RXD_GRO_COUNT_S 24 +#define HNS3_RXD_GRO_COUNT_M (0x3f << HNS3_RXD_GRO_COUNT_S) +#define HNS3_RXD_GRO_FIXID_B 30 +#define HNS3_RXD_GRO_ECN_B 31 #define HNS3_RXD_ODMAC_S 0 #define HNS3_RXD_ODMAC_M (0x3 << HNS3_RXD_ODMAC_S) @@ -135,9 +142,8 @@ enum hns3_nic_state { #define HNS3_RXD_TSIND_S 12 #define HNS3_RXD_TSIND_M (0x7 << HNS3_RXD_TSIND_S) #define HNS3_RXD_LKBK_B 15 -#define HNS3_RXD_HDL_S 16 -#define HNS3_RXD_HDL_M (0x7ff << HNS3_RXD_HDL_S) -#define HNS3_RXD_HSIND_B 31 +#define HNS3_RXD_GRO_SIZE_S 16 +#define HNS3_RXD_GRO_SIZE_M (0x3ff << HNS3_RXD_GRO_SIZE_S) #define HNS3_TXD_L3T_S 0 #define HNS3_TXD_L3T_M (0x3 << HNS3_TXD_L3T_S) @@ -194,6 +200,8 @@ enum hns3_nic_state { #define HNS3_VECTOR_RL_OFFSET 0x900 #define HNS3_VECTOR_RL_EN_B 6 +#define HNS3_RING_EN_B 0 + enum hns3_pkt_l3t_type { HNS3_L3T_NONE, HNS3_L3T_IPV6, @@ -399,11 +407,19 @@ struct hns3_enet_ring { */ int next_to_clean; + int pull_len; /* head length for current packet */ + u32 frag_num; + unsigned char *va; /* first buffer address for current packet */ + u32 flag; /* ring attribute */ int irq_init_flag; int numa_node; cpumask_t affinity_mask; + + int pending_buf; + struct sk_buff *skb; + struct sk_buff 
*tail_skb; }; struct hns_queue; @@ -460,8 +476,6 @@ enum hns3_link_mode_bits { #define HNS3_INT_RL_MAX 0x00EC #define HNS3_INT_RL_ENABLE_MASK 0x40 -#define HNS3_INT_ADAPT_DOWN_START 100 - struct hns3_enet_coalesce { u16 int_gl; u8 gl_adapt_enable; @@ -496,8 +510,6 @@ struct hns3_enet_tqp_vector { char name[HNAE3_INT_NAME_LEN]; - /* when 0 should adjust interrupt coalesce parameter */ - u8 int_adapt_down; unsigned long last_jiffies; } ____cacheline_internodealigned_in_smp; @@ -577,6 +589,11 @@ static inline int is_ring_empty(struct hns3_enet_ring *ring) return ring->next_to_use == ring->next_to_clean; } +static inline u32 hns3_read_reg(void __iomem *base, u32 reg) +{ + return readl(base + reg); +} + static inline void hns3_write_reg(void __iomem *base, u32 reg, u32 value) { u8 __iomem *reg_addr = READ_ONCE(base); @@ -586,7 +603,21 @@ static inline void hns3_write_reg(void __iomem *base, u32 reg, u32 value) static inline bool hns3_dev_ongoing_func_reset(struct hnae3_ae_dev *ae_dev) { - return (ae_dev && (ae_dev->reset_type == HNAE3_FUNC_RESET)); + return (ae_dev && (ae_dev->reset_type == HNAE3_FUNC_RESET || + ae_dev->reset_type == HNAE3_FLR_RESET || + ae_dev->reset_type == HNAE3_VF_FUNC_RESET || + ae_dev->reset_type == HNAE3_VF_FULL_RESET || + ae_dev->reset_type == HNAE3_VF_PF_FUNC_RESET)); +} + +#define hns3_read_dev(a, reg) \ + hns3_read_reg((a)->io_base, (reg)) + +static inline bool hns3_nic_resetting(struct net_device *netdev) +{ + struct hns3_nic_priv *priv = netdev_priv(netdev); + + return test_bit(HNS3_NIC_STATE_RESETTING, &priv->state); } #define hns3_write_dev(a, reg, value) \ @@ -648,4 +679,8 @@ void hns3_dcbnl_setup(struct hnae3_handle *handle); static inline void hns3_dcbnl_setup(struct hnae3_handle *handle) {} #endif +void hns3_dbg_init(struct hnae3_handle *handle); +void hns3_dbg_uninit(struct hnae3_handle *handle); +void hns3_dbg_register_debugfs(const char *debugfs_dir_name); +void hns3_dbg_unregister_debugfs(void); #endif diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c index a4762c2b8ba1..e678b6939da3 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c @@ -291,6 +291,11 @@ static void hns3_self_test(struct net_device *ndev, int test_index = 0; u32 i; + if (hns3_nic_resetting(ndev)) { + netdev_err(ndev, "dev resetting!"); + return; + } + /* Only do offline selftest, or pass by default */ if (eth_test->flags != ETH_TEST_FL_OFFLINE) return; @@ -530,6 +535,11 @@ static void hns3_get_ringparam(struct net_device *netdev, struct hnae3_handle *h = priv->ae_handle; int queue_num = h->kinfo.num_tqps; + if (hns3_nic_resetting(netdev)) { + netdev_err(netdev, "dev resetting!"); + return; + } + param->tx_max_pending = HNS3_RING_MAX_PENDING; param->rx_max_pending = HNS3_RING_MAX_PENDING; @@ -760,6 +770,9 @@ static int hns3_set_ringparam(struct net_device *ndev, u32 old_desc_num, new_desc_num; int ret; + if (hns3_nic_resetting(ndev)) + return -EBUSY; + if (param->rx_mini_pending || param->rx_jumbo_pending) return -EINVAL; @@ -808,7 +821,7 @@ static int hns3_set_ringparam(struct net_device *ndev, } if (if_running) - ret = dev_open(ndev); + ret = dev_open(ndev, NULL); return ret; } @@ -872,6 +885,9 @@ static int hns3_get_coalesce_per_queue(struct net_device *netdev, u32 queue, struct hnae3_handle *h = priv->ae_handle; u16 queue_num = h->kinfo.num_tqps; + if (hns3_nic_resetting(netdev)) + return -EBUSY; + if (queue >= queue_num) { netdev_err(netdev, "Invalid 
queue value %d! Queue max id=%d\n", @@ -1033,6 +1049,9 @@ static int hns3_set_coalesce(struct net_device *netdev, int ret; int i; + if (hns3_nic_resetting(netdev)) + return -EBUSY; + ret = hns3_check_coalesce_para(netdev, cmd); if (ret) return ret; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile index 580e81743681..fffe8c1c45d3 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile @@ -6,6 +6,6 @@ ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3 obj-$(CONFIG_HNS3_HCLGE) += hclge.o -hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o hclge_err.o +hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o hclge_err.o hclge_debugfs.o hclge-$(CONFIG_HNS3_DCB) += hclge_dcb.o diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c index 690f62ed87dc..8af0cef5609b 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c @@ -350,11 +350,20 @@ int hclge_cmd_init(struct hclge_dev *hdev) hdev->hw.cmq.crq.next_to_use = 0; hclge_cmd_init_regs(&hdev->hw); - clear_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); spin_unlock_bh(&hdev->hw.cmq.crq.lock); spin_unlock_bh(&hdev->hw.cmq.csq.lock); + clear_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); + + /* Check if there is new reset pending, because the higher level + * reset may happen when lower level reset is being processed. + */ + if ((hclge_is_reset_pending(hdev))) { + set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); + return -EBUSY; + } + ret = hclge_cmd_query_firmware_version(&hdev->hw, &version); if (ret) { dev_err(&hdev->pdev->dev, diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h index 872cd4bdd70d..f23042b24c09 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h @@ -86,11 +86,24 @@ enum hclge_opcode_type { HCLGE_OPC_QUERY_REG_NUM = 0x0040, HCLGE_OPC_QUERY_32_BIT_REG = 0x0041, HCLGE_OPC_QUERY_64_BIT_REG = 0x0042, + HCLGE_OPC_DFX_BD_NUM = 0x0043, + HCLGE_OPC_DFX_BIOS_COMMON_REG = 0x0044, + HCLGE_OPC_DFX_SSU_REG_0 = 0x0045, + HCLGE_OPC_DFX_SSU_REG_1 = 0x0046, + HCLGE_OPC_DFX_IGU_EGU_REG = 0x0047, + HCLGE_OPC_DFX_RPU_REG_0 = 0x0048, + HCLGE_OPC_DFX_RPU_REG_1 = 0x0049, + HCLGE_OPC_DFX_NCSI_REG = 0x004A, + HCLGE_OPC_DFX_RTC_REG = 0x004B, + HCLGE_OPC_DFX_PPP_REG = 0x004C, + HCLGE_OPC_DFX_RCB_REG = 0x004D, + HCLGE_OPC_DFX_TQP_REG = 0x004E, + HCLGE_OPC_DFX_SSU_REG_2 = 0x004F, + HCLGE_OPC_DFX_QUERY_CHIP_CAP = 0x0050, /* MAC command */ HCLGE_OPC_CONFIG_MAC_MODE = 0x0301, HCLGE_OPC_CONFIG_AN_MODE = 0x0304, - HCLGE_OPC_QUERY_AN_RESULT = 0x0306, HCLGE_OPC_QUERY_LINK_STATUS = 0x0307, HCLGE_OPC_CONFIG_MAX_FRM_SIZE = 0x0308, HCLGE_OPC_CONFIG_SPEED_DUP = 0x0309, @@ -126,6 +139,16 @@ enum hclge_opcode_type { HCLGE_OPC_TM_PRI_SCH_MODE_CFG = 0x0813, HCLGE_OPC_TM_QS_SCH_MODE_CFG = 0x0814, HCLGE_OPC_TM_BP_TO_QSET_MAPPING = 0x0815, + HCLGE_OPC_ETS_TC_WEIGHT = 0x0843, + HCLGE_OPC_QSET_DFX_STS = 0x0844, + HCLGE_OPC_PRI_DFX_STS = 0x0845, + HCLGE_OPC_PG_DFX_STS = 0x0846, + HCLGE_OPC_PORT_DFX_STS = 0x0847, + HCLGE_OPC_SCH_NQ_CNT = 0x0848, + HCLGE_OPC_SCH_RQ_CNT = 0x0849, + HCLGE_OPC_TM_INTERNAL_STS = 0x0850, + HCLGE_OPC_TM_INTERNAL_CNT = 0x0851, + HCLGE_OPC_TM_INTERNAL_STS_1 = 0x0852, /* Packet buffer allocate commands */ 
HCLGE_OPC_TX_BUFF_ALLOC = 0x0901, @@ -142,6 +165,7 @@ enum hclge_opcode_type { HCLGE_OPC_CFG_TX_QUEUE = 0x0B01, HCLGE_OPC_QUERY_TX_POINTER = 0x0B02, HCLGE_OPC_QUERY_TX_STATUS = 0x0B03, + HCLGE_OPC_TQP_TX_QUEUE_TC = 0x0B04, HCLGE_OPC_CFG_RX_QUEUE = 0x0B11, HCLGE_OPC_QUERY_RX_POINTER = 0x0B12, HCLGE_OPC_QUERY_RX_STATUS = 0x0B13, @@ -152,6 +176,7 @@ enum hclge_opcode_type { /* TSO command */ HCLGE_OPC_TSO_GENERIC_CONFIG = 0x0C01, + HCLGE_OPC_GRO_GENERIC_CONFIG = 0x0C10, /* RSS commands */ HCLGE_OPC_RSS_GENERIC_CONFIG = 0x0D01, @@ -210,27 +235,34 @@ enum hclge_opcode_type { /* Led command */ HCLGE_OPC_LED_STATUS_CFG = 0xB000, + /* SFP command */ + HCLGE_OPC_SFP_GET_SPEED = 0x7104, + /* Error INT commands */ + HCLGE_MAC_COMMON_INT_EN = 0x030E, HCLGE_TM_SCH_ECC_INT_EN = 0x0829, - HCLGE_TM_SCH_ECC_ERR_RINT_CMD = 0x082d, - HCLGE_TM_SCH_ECC_ERR_RINT_CE = 0x082f, - HCLGE_TM_SCH_ECC_ERR_RINT_NFE = 0x0830, - HCLGE_TM_SCH_ECC_ERR_RINT_FE = 0x0831, - HCLGE_TM_SCH_MBIT_ECC_INFO_CMD = 0x0833, + HCLGE_SSU_ECC_INT_CMD = 0x0989, + HCLGE_SSU_COMMON_INT_CMD = 0x098C, + HCLGE_PPU_MPF_ECC_INT_CMD = 0x0B40, + HCLGE_PPU_MPF_OTHER_INT_CMD = 0x0B41, + HCLGE_PPU_PF_OTHER_INT_CMD = 0x0B42, HCLGE_COMMON_ECC_INT_CFG = 0x1505, - HCLGE_IGU_EGU_TNL_INT_QUERY = 0x1802, + HCLGE_QUERY_RAS_INT_STS_BD_NUM = 0x1510, + HCLGE_QUERY_CLEAR_MPF_RAS_INT = 0x1511, + HCLGE_QUERY_CLEAR_PF_RAS_INT = 0x1512, + HCLGE_QUERY_MSIX_INT_STS_BD_NUM = 0x1513, + HCLGE_QUERY_CLEAR_ALL_MPF_MSIX_INT = 0x1514, + HCLGE_QUERY_CLEAR_ALL_PF_MSIX_INT = 0x1515, + HCLGE_CONFIG_ROCEE_RAS_INT_EN = 0x1580, + HCLGE_QUERY_CLEAR_ROCEE_RAS_INT = 0x1581, + HCLGE_ROCEE_PF_RAS_INT_CMD = 0x1584, HCLGE_IGU_EGU_TNL_INT_EN = 0x1803, - HCLGE_IGU_EGU_TNL_INT_CLR = 0x1804, - HCLGE_IGU_COMMON_INT_QUERY = 0x1805, HCLGE_IGU_COMMON_INT_EN = 0x1806, - HCLGE_IGU_COMMON_INT_CLR = 0x1807, HCLGE_TM_QCN_MEM_INT_CFG = 0x1A14, - HCLGE_TM_QCN_MEM_INT_INFO_CMD = 0x1A17, HCLGE_PPP_CMD0_INT_CMD = 0x2100, HCLGE_PPP_CMD1_INT_CMD = 0x2101, - HCLGE_NCSI_INT_QUERY = 0x2400, + HCLGE_MAC_ETHERTYPE_IDX_RD = 0x2105, HCLGE_NCSI_INT_EN = 0x2401, - HCLGE_NCSI_INT_CLR = 0x2402, }; #define HCLGE_TQP_REG_OFFSET 0x80000 @@ -388,7 +420,9 @@ struct hclge_pf_res_cmd { #define HCLGE_PF_VEC_NUM_M GENMASK(7, 0) __le16 pf_intr_vector_number; __le16 pf_own_fun_number; - __le32 rsv[3]; + __le16 tx_buf_size; + __le16 dv_buf_size; + __le32 rsv[2]; }; #define HCLGE_CFG_OFFSET_S 0 @@ -542,20 +576,6 @@ struct hclge_config_mac_speed_dup_cmd { u8 rsv[22]; }; -#define HCLGE_QUERY_SPEED_S 3 -#define HCLGE_QUERY_AN_B 0 -#define HCLGE_QUERY_DUPLEX_B 2 - -#define HCLGE_QUERY_SPEED_M GENMASK(4, 0) -#define HCLGE_QUERY_AN_M BIT(HCLGE_QUERY_AN_B) -#define HCLGE_QUERY_DUPLEX_M BIT(HCLGE_QUERY_DUPLEX_B) - -struct hclge_query_an_speed_dup_cmd { - u8 an_syn_dup_speed; - u8 pause; - u8 rsv[23]; -}; - #define HCLGE_RING_ID_MASK GENMASK(9, 0) #define HCLGE_TQP_ENABLE_B 0 @@ -572,6 +592,11 @@ struct hclge_config_auto_neg_cmd { u8 rsv[20]; }; +struct hclge_sfp_speed_cmd { + __le32 sfp_speed; + u32 rsv[5]; +}; + #define HCLGE_MAC_UPLINK_PORT 0x100 struct hclge_config_max_frm_size_cmd { @@ -746,6 +771,24 @@ struct hclge_cfg_tx_queue_pointer_cmd { u8 rsv[14]; }; +#pragma pack(1) +struct hclge_mac_ethertype_idx_rd_cmd { + u8 flags; + u8 resp_code; + __le16 vlan_tag; + u8 mac_add[6]; + __le16 index; + __le16 ethter_type; + __le16 egress_port; + __le16 egress_queue; + __le16 rev0; + u8 i_port_bitmap; + u8 i_port_direction; + u8 rev1[2]; +}; + +#pragma pack() + #define HCLGE_TSO_MSS_MIN_S 0 #define HCLGE_TSO_MSS_MIN_M GENMASK(13, 0) @@ -758,6 
+801,12 @@ struct hclge_cfg_tso_status_cmd { u8 rsv[20]; }; +#define HCLGE_GRO_EN_B 0 +struct hclge_cfg_gro_status_cmd { + __le16 gro_en; + u8 rsv[22]; +}; + #define HCLGE_TSO_MSS_MIN 256 #define HCLGE_TSO_MSS_MAX 9668 @@ -792,6 +841,7 @@ struct hclge_serdes_lb_cmd { #define HCLGE_TOTAL_PKT_BUF 0x108000 /* 1.03125M bytes */ #define HCLGE_DEFAULT_DV 0xA000 /* 40k byte */ #define HCLGE_DEFAULT_NON_DCB_DV 0x7800 /* 30K byte */ +#define HCLGE_NON_DCB_ADDITIONAL_BUF 0x200 /* 512 byte */ #define HCLGE_TYPE_CRQ 0 #define HCLGE_TYPE_CSQ 1 diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c index e72f724123d7..f6323b2501dc 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c @@ -35,7 +35,9 @@ static int hclge_ieee_ets_to_tm_info(struct hclge_dev *hdev, } } - return hclge_tm_prio_tc_info_update(hdev, ets->prio_tc); + hclge_tm_prio_tc_info_update(hdev, ets->prio_tc); + + return 0; } static void hclge_tm_info_to_ieee_ets(struct hclge_dev *hdev, @@ -70,25 +72,61 @@ static int hclge_ieee_getets(struct hnae3_handle *h, struct ieee_ets *ets) return 0; } +static int hclge_dcb_common_validate(struct hclge_dev *hdev, u8 num_tc, + u8 *prio_tc) +{ + int i; + + if (num_tc > hdev->tc_max) { + dev_err(&hdev->pdev->dev, + "tc num checking failed, %u > tc_max(%u)\n", + num_tc, hdev->tc_max); + return -EINVAL; + } + + for (i = 0; i < HNAE3_MAX_USER_PRIO; i++) { + if (prio_tc[i] >= num_tc) { + dev_err(&hdev->pdev->dev, + "prio_tc[%u] checking failed, %u >= num_tc(%u)\n", + i, prio_tc[i], num_tc); + return -EINVAL; + } + } + + for (i = 0; i < hdev->num_alloc_vport; i++) { + if (num_tc > hdev->vport[i].alloc_tqps) { + dev_err(&hdev->pdev->dev, + "allocated tqp(%u) checking failed, %u > tqp(%u)\n", + i, num_tc, hdev->vport[i].alloc_tqps); + return -EINVAL; + } + } + + return 0; +} + static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets, u8 *tc, bool *changed) { bool has_ets_tc = false; u32 total_ets_bw = 0; u8 max_tc = 0; + int ret; u8 i; - for (i = 0; i < HNAE3_MAX_TC; i++) { - if (ets->prio_tc[i] >= hdev->tc_max || - i >= hdev->tc_max) - return -EINVAL; - + for (i = 0; i < HNAE3_MAX_USER_PRIO; i++) { if (ets->prio_tc[i] != hdev->tm_info.prio_tc[i]) *changed = true; if (ets->prio_tc[i] > max_tc) max_tc = ets->prio_tc[i]; + } + ret = hclge_dcb_common_validate(hdev, max_tc + 1, ets->prio_tc); + if (ret) + return ret; + + for (i = 0; i < HNAE3_MAX_TC; i++) { switch (ets->tc_tsa[i]) { case IEEE_8021QAZ_TSA_STRICT: if (hdev->tm_info.tc_info[i].tc_sch_mode != @@ -184,9 +222,7 @@ static int hclge_ieee_setets(struct hnae3_handle *h, struct ieee_ets *ets) if (ret) return ret; - ret = hclge_tm_schd_info_update(hdev, num_tc); - if (ret) - return ret; + hclge_tm_schd_info_update(hdev, num_tc); ret = hclge_ieee_ets_to_tm_info(hdev, ets); if (ret) @@ -305,20 +341,12 @@ static int hclge_setup_tc(struct hnae3_handle *h, u8 tc, u8 *prio_tc) if (hdev->flag & HCLGE_FLAG_DCB_ENABLE) return -EINVAL; - if (tc > hdev->tc_max) { - dev_err(&hdev->pdev->dev, - "setup tc failed, tc(%u) > tc_max(%u)\n", - tc, hdev->tc_max); - return -EINVAL; - } - - ret = hclge_tm_schd_info_update(hdev, tc); + ret = hclge_dcb_common_validate(hdev, tc, prio_tc); if (ret) - return ret; + return -EINVAL; - ret = hclge_tm_prio_tc_info_update(hdev, prio_tc); - if (ret) - return ret; + hclge_tm_schd_info_update(hdev, tc); + hclge_tm_prio_tc_info_update(hdev, prio_tc); ret = hclge_tm_init_hw(hdev); if 
(ret) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c new file mode 100644 index 000000000000..26d80504c730 --- /dev/null +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c @@ -0,0 +1,933 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Copyright (c) 2018-2019 Hisilicon Limited. */ + +#include <linux/device.h> + +#include "hclge_debugfs.h" +#include "hclge_cmd.h" +#include "hclge_main.h" +#include "hclge_tm.h" +#include "hnae3.h" + +static int hclge_dbg_get_dfx_bd_num(struct hclge_dev *hdev, int offset) +{ + struct hclge_desc desc[4]; + int ret; + + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_DFX_BD_NUM, true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[1], HCLGE_OPC_DFX_BD_NUM, true); + desc[1].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[2], HCLGE_OPC_DFX_BD_NUM, true); + desc[2].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[3], HCLGE_OPC_DFX_BD_NUM, true); + + ret = hclge_cmd_send(&hdev->hw, desc, 4); + if (ret != HCLGE_CMD_EXEC_SUCCESS) { + dev_err(&hdev->pdev->dev, + "get dfx bdnum fail, status is %d.\n", ret); + return ret; + } + + return (int)desc[offset / 6].data[offset % 6]; +} + +static int hclge_dbg_cmd_send(struct hclge_dev *hdev, + struct hclge_desc *desc_src, + int index, int bd_num, + enum hclge_opcode_type cmd) +{ + struct hclge_desc *desc = desc_src; + int ret, i; + + hclge_cmd_setup_basic_desc(desc, cmd, true); + desc->data[0] = cpu_to_le32(index); + + for (i = 1; i < bd_num; i++) { + desc->flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + desc++; + hclge_cmd_setup_basic_desc(desc, cmd, true); + } + + ret = hclge_cmd_send(&hdev->hw, desc_src, bd_num); + if (ret) { + dev_err(&hdev->pdev->dev, + "read reg cmd send fail, status is %d.\n", ret); + return ret; + } + + return ret; +} + +static void hclge_dbg_dump_reg_common(struct hclge_dev *hdev, + struct hclge_dbg_dfx_message *dfx_message, + char *cmd_buf, int msg_num, int offset, + enum hclge_opcode_type cmd) +{ + struct hclge_desc *desc_src; + struct hclge_desc *desc; + int bd_num, buf_len; + int ret, i; + int index; + int max; + + ret = kstrtouint(cmd_buf, 10, &index); + index = (ret != 0) ? 0 : index; + + bd_num = hclge_dbg_get_dfx_bd_num(hdev, offset); + if (bd_num <= 0) + return; + + buf_len = sizeof(struct hclge_desc) * bd_num; + desc_src = kzalloc(buf_len, GFP_KERNEL); + if (!desc_src) { + dev_err(&hdev->pdev->dev, "call kzalloc failed\n"); + return; + } + + desc = desc_src; + ret = hclge_dbg_cmd_send(hdev, desc, index, bd_num, cmd); + if (ret != HCLGE_CMD_EXEC_SUCCESS) { + kfree(desc_src); + return; + } + + max = (bd_num * 6) <= msg_num ? (bd_num * 6) : msg_num; + + desc = desc_src; + for (i = 0; i < max; i++) { + (((i / 6) > 0) && ((i % 6) == 0)) ?
desc++ : desc; + if (dfx_message->flag) + dev_info(&hdev->pdev->dev, "%s: 0x%x\n", + dfx_message->message, desc->data[i % 6]); + + dfx_message++; + } + + kfree(desc_src); +} + +static void hclge_dbg_dump_dcb(struct hclge_dev *hdev, char *cmd_buf) +{ + struct device *dev = &hdev->pdev->dev; + struct hclge_dbg_bitmap_cmd *bitmap; + int rq_id, pri_id, qset_id; + int port_id, nq_id, pg_id; + struct hclge_desc desc[2]; + + int cnt, ret; + + cnt = sscanf(cmd_buf, "%i %i %i %i %i %i", + &port_id, &pri_id, &pg_id, &rq_id, &nq_id, &qset_id); + if (cnt != 6) { + dev_err(&hdev->pdev->dev, + "dump dcb: bad command parameter, cnt=%d\n", cnt); + return; + } + + ret = hclge_dbg_cmd_send(hdev, desc, qset_id, 1, + HCLGE_OPC_QSET_DFX_STS); + if (ret) + return; + + bitmap = (struct hclge_dbg_bitmap_cmd *)&desc[0].data[1]; + dev_info(dev, "roce_qset_mask: 0x%x\n", bitmap->bit0); + dev_info(dev, "nic_qs_mask: 0x%x\n", bitmap->bit1); + dev_info(dev, "qs_shaping_pass: 0x%x\n", bitmap->bit2); + dev_info(dev, "qs_bp_sts: 0x%x\n", bitmap->bit3); + + ret = hclge_dbg_cmd_send(hdev, desc, pri_id, 1, HCLGE_OPC_PRI_DFX_STS); + if (ret) + return; + + bitmap = (struct hclge_dbg_bitmap_cmd *)&desc[0].data[1]; + dev_info(dev, "pri_mask: 0x%x\n", bitmap->bit0); + dev_info(dev, "pri_cshaping_pass: 0x%x\n", bitmap->bit1); + dev_info(dev, "pri_pshaping_pass: 0x%x\n", bitmap->bit2); + + ret = hclge_dbg_cmd_send(hdev, desc, pg_id, 1, HCLGE_OPC_PG_DFX_STS); + if (ret) + return; + + bitmap = (struct hclge_dbg_bitmap_cmd *)&desc[0].data[1]; + dev_info(dev, "pg_mask: 0x%x\n", bitmap->bit0); + dev_info(dev, "pg_cshaping_pass: 0x%x\n", bitmap->bit1); + dev_info(dev, "pg_pshaping_pass: 0x%x\n", bitmap->bit2); + + ret = hclge_dbg_cmd_send(hdev, desc, port_id, 1, + HCLGE_OPC_PORT_DFX_STS); + if (ret) + return; + + bitmap = (struct hclge_dbg_bitmap_cmd *)&desc[0].data[1]; + dev_info(dev, "port_mask: 0x%x\n", bitmap->bit0); + dev_info(dev, "port_shaping_pass: 0x%x\n", bitmap->bit1); + + ret = hclge_dbg_cmd_send(hdev, desc, nq_id, 1, HCLGE_OPC_SCH_NQ_CNT); + if (ret) + return; + + dev_info(dev, "sch_nq_cnt: 0x%x\n", desc[0].data[1]); + + ret = hclge_dbg_cmd_send(hdev, desc, nq_id, 1, HCLGE_OPC_SCH_RQ_CNT); + if (ret) + return; + + dev_info(dev, "sch_rq_cnt: 0x%x\n", desc[0].data[1]); + + ret = hclge_dbg_cmd_send(hdev, desc, 0, 2, HCLGE_OPC_TM_INTERNAL_STS); + if (ret) + return; + + dev_info(dev, "pri_bp: 0x%x\n", desc[0].data[1]); + dev_info(dev, "fifo_dfx_info: 0x%x\n", desc[0].data[2]); + dev_info(dev, "sch_roce_fifo_afull_gap: 0x%x\n", desc[0].data[3]); + dev_info(dev, "tx_private_waterline: 0x%x\n", desc[0].data[4]); + dev_info(dev, "tm_bypass_en: 0x%x\n", desc[0].data[5]); + dev_info(dev, "SSU_TM_BYPASS_EN: 0x%x\n", desc[1].data[0]); + dev_info(dev, "SSU_RESERVE_CFG: 0x%x\n", desc[1].data[1]); + + ret = hclge_dbg_cmd_send(hdev, desc, port_id, 1, + HCLGE_OPC_TM_INTERNAL_CNT); + if (ret) + return; + + dev_info(dev, "SCH_NIC_NUM: 0x%x\n", desc[0].data[1]); + dev_info(dev, "SCH_ROCE_NUM: 0x%x\n", desc[0].data[2]); + + ret = hclge_dbg_cmd_send(hdev, desc, port_id, 1, + HCLGE_OPC_TM_INTERNAL_STS_1); + if (ret) + return; + + dev_info(dev, "TC_MAP_SEL: 0x%x\n", desc[0].data[1]); + dev_info(dev, "IGU_PFC_PRI_EN: 0x%x\n", desc[0].data[2]); + dev_info(dev, "MAC_PFC_PRI_EN: 0x%x\n", desc[0].data[3]); + dev_info(dev, "IGU_PRI_MAP_TC_CFG: 0x%x\n", desc[0].data[4]); + dev_info(dev, "IGU_TX_PRI_MAP_TC_CFG: 0x%x\n", desc[0].data[5]); +} + +static void hclge_dbg_dump_reg_cmd(struct hclge_dev *hdev, char *cmd_buf) +{ + int msg_num; + + if 
(strncmp(&cmd_buf[9], "bios common", 11) == 0) { + msg_num = sizeof(hclge_dbg_bios_common_reg) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_bios_common_reg, + &cmd_buf[21], msg_num, + HCLGE_DBG_DFX_BIOS_OFFSET, + HCLGE_OPC_DFX_BIOS_COMMON_REG); + } else if (strncmp(&cmd_buf[9], "ssu", 3) == 0) { + msg_num = sizeof(hclge_dbg_ssu_reg_0) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_ssu_reg_0, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_SSU_0_OFFSET, + HCLGE_OPC_DFX_SSU_REG_0); + + msg_num = sizeof(hclge_dbg_ssu_reg_1) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_ssu_reg_1, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_SSU_1_OFFSET, + HCLGE_OPC_DFX_SSU_REG_1); + + msg_num = sizeof(hclge_dbg_ssu_reg_2) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_ssu_reg_2, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_SSU_2_OFFSET, + HCLGE_OPC_DFX_SSU_REG_2); + } else if (strncmp(&cmd_buf[9], "igu egu", 7) == 0) { + msg_num = sizeof(hclge_dbg_igu_egu_reg) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_igu_egu_reg, + &cmd_buf[17], msg_num, + HCLGE_DBG_DFX_IGU_OFFSET, + HCLGE_OPC_DFX_IGU_EGU_REG); + } else if (strncmp(&cmd_buf[9], "rpu", 3) == 0) { + msg_num = sizeof(hclge_dbg_rpu_reg_0) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_rpu_reg_0, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_RPU_0_OFFSET, + HCLGE_OPC_DFX_RPU_REG_0); + + msg_num = sizeof(hclge_dbg_rpu_reg_1) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_rpu_reg_1, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_RPU_1_OFFSET, + HCLGE_OPC_DFX_RPU_REG_1); + } else if (strncmp(&cmd_buf[9], "ncsi", 4) == 0) { + msg_num = sizeof(hclge_dbg_ncsi_reg) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_ncsi_reg, + &cmd_buf[14], msg_num, + HCLGE_DBG_DFX_NCSI_OFFSET, + HCLGE_OPC_DFX_NCSI_REG); + } else if (strncmp(&cmd_buf[9], "rtc", 3) == 0) { + msg_num = sizeof(hclge_dbg_rtc_reg) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_rtc_reg, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_RTC_OFFSET, + HCLGE_OPC_DFX_RTC_REG); + } else if (strncmp(&cmd_buf[9], "ppp", 3) == 0) { + msg_num = sizeof(hclge_dbg_ppp_reg) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_ppp_reg, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_PPP_OFFSET, + HCLGE_OPC_DFX_PPP_REG); + } else if (strncmp(&cmd_buf[9], "rcb", 3) == 0) { + msg_num = sizeof(hclge_dbg_rcb_reg) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_rcb_reg, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_RCB_OFFSET, + HCLGE_OPC_DFX_RCB_REG); + } else if (strncmp(&cmd_buf[9], "tqp", 3) == 0) { + msg_num = sizeof(hclge_dbg_tqp_reg) / + sizeof(struct hclge_dbg_dfx_message); + hclge_dbg_dump_reg_common(hdev, hclge_dbg_tqp_reg, + &cmd_buf[13], msg_num, + HCLGE_DBG_DFX_TQP_OFFSET, + HCLGE_OPC_DFX_TQP_REG); + } else if (strncmp(&cmd_buf[9], "dcb", 3) == 0) { + hclge_dbg_dump_dcb(hdev, &cmd_buf[13]); + } else { + dev_info(&hdev->pdev->dev, "unknown command\n"); + return; + } +} + +static void hclge_title_idx_print(struct hclge_dev *hdev, bool flag, int index, + char *title_buf, char *true_buf, + char *false_buf) +{ + if (flag) + dev_info(&hdev->pdev->dev, "%s(%d): %s\n", title_buf, index, + true_buf); + else + dev_info(&hdev->pdev->dev, "%s(%d): %s\n", 
title_buf, index, + false_buf); +} + +static void hclge_dbg_dump_tc(struct hclge_dev *hdev) +{ + struct hclge_ets_tc_weight_cmd *ets_weight; + struct hclge_desc desc; + int i, ret; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_ETS_TC_WEIGHT, true); + + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) { + dev_err(&hdev->pdev->dev, "dump tc fail, status is %d.\n", ret); + return; + } + + ets_weight = (struct hclge_ets_tc_weight_cmd *)desc.data; + + dev_info(&hdev->pdev->dev, "dump tc\n"); + dev_info(&hdev->pdev->dev, "weight_offset: %u\n", + ets_weight->weight_offset); + + for (i = 0; i < HNAE3_MAX_TC; i++) + hclge_title_idx_print(hdev, ets_weight->tc_weight[i], i, + "tc", "no sp mode", "sp mode"); +} + +static void hclge_dbg_dump_tm_pg(struct hclge_dev *hdev) +{ + struct hclge_port_shapping_cmd *port_shap_cfg_cmd; + struct hclge_bp_to_qs_map_cmd *bp_to_qs_map_cmd; + struct hclge_pg_shapping_cmd *pg_shap_cfg_cmd; + enum hclge_opcode_type cmd; + struct hclge_desc desc; + int ret; + + cmd = HCLGE_OPC_TM_PG_C_SHAPPING; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_pg_cmd_send; + + pg_shap_cfg_cmd = (struct hclge_pg_shapping_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "PG_C pg_id: %u\n", pg_shap_cfg_cmd->pg_id); + dev_info(&hdev->pdev->dev, "PG_C pg_shapping: 0x%x\n", + pg_shap_cfg_cmd->pg_shapping_para); + + cmd = HCLGE_OPC_TM_PG_P_SHAPPING; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_pg_cmd_send; + + pg_shap_cfg_cmd = (struct hclge_pg_shapping_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "PG_P pg_id: %u\n", pg_shap_cfg_cmd->pg_id); + dev_info(&hdev->pdev->dev, "PG_P pg_shapping: 0x%x\n", + pg_shap_cfg_cmd->pg_shapping_para); + + cmd = HCLGE_OPC_TM_PORT_SHAPPING; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_pg_cmd_send; + + port_shap_cfg_cmd = (struct hclge_port_shapping_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "PORT port_shapping: 0x%x\n", + port_shap_cfg_cmd->port_shapping_para); + + cmd = HCLGE_OPC_TM_PG_SCH_MODE_CFG; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_pg_cmd_send; + + dev_info(&hdev->pdev->dev, "PG_SCH pg_id: %u\n", desc.data[0]); + + cmd = HCLGE_OPC_TM_PRI_SCH_MODE_CFG; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_pg_cmd_send; + + dev_info(&hdev->pdev->dev, "PRI_SCH pg_id: %u\n", desc.data[0]); + + cmd = HCLGE_OPC_TM_QS_SCH_MODE_CFG; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_pg_cmd_send; + + dev_info(&hdev->pdev->dev, "QS_SCH pg_id: %u\n", desc.data[0]); + + cmd = HCLGE_OPC_TM_BP_TO_QSET_MAPPING; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_pg_cmd_send; + + bp_to_qs_map_cmd = (struct hclge_bp_to_qs_map_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "BP_TO_QSET pg_id: %u\n", + bp_to_qs_map_cmd->tc_id); + dev_info(&hdev->pdev->dev, "BP_TO_QSET pg_shapping: 0x%x\n", + bp_to_qs_map_cmd->qs_group_id); + dev_info(&hdev->pdev->dev, "BP_TO_QSET qs_bit_map: 0x%x\n", + bp_to_qs_map_cmd->qs_bit_map); + return; + +err_tm_pg_cmd_send: + dev_err(&hdev->pdev->dev, "dump tm_pg fail(0x%x), status is %d\n", + cmd, ret); +} + +static void hclge_dbg_dump_tm(struct hclge_dev *hdev) 
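The helper below dumps one level of the transmit scheduler per firmware query. Inferred from the opcodes it issues (NQ_TO_QS_LINK, QS_TO_PRI_LINK, PG_TO_PRI_LINK, plus the per-level weight and shaping commands), the mapping chain is, informally, queue (NQ) -> queue set (QS) -> priority (PRI) -> port group (PG), with DWRR weights and C-/P-shaping at each level. A reading sketch using field names from this patch:

	/* sketch of the chain walked below; each line is one firmware query */
	qset_id  = nq_to_qs_map->qset_id;       /* HCLGE_OPC_TM_NQ_TO_QS_LINK */
	pri_id   = qs_to_pri_map->priority;     /* HCLGE_OPC_TM_QS_TO_PRI_LINK */
	pri_bits = pg_to_pri_map->pri_bit_map;  /* HCLGE_OPC_TM_PG_TO_PRI_LINK */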
+{ + struct hclge_priority_weight_cmd *priority_weight; + struct hclge_pg_to_pri_link_cmd *pg_to_pri_map; + struct hclge_qs_to_pri_link_cmd *qs_to_pri_map; + struct hclge_nq_to_qs_link_cmd *nq_to_qs_map; + struct hclge_pri_shapping_cmd *shap_cfg_cmd; + struct hclge_pg_weight_cmd *pg_weight; + struct hclge_qs_weight_cmd *qs_weight; + enum hclge_opcode_type cmd; + struct hclge_desc desc; + int ret; + + cmd = HCLGE_OPC_TM_PG_TO_PRI_LINK; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + pg_to_pri_map = (struct hclge_pg_to_pri_link_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "dump tm\n"); + dev_info(&hdev->pdev->dev, "PG_TO_PRI gp_id: %u\n", + pg_to_pri_map->pg_id); + dev_info(&hdev->pdev->dev, "PG_TO_PRI map: 0x%x\n", + pg_to_pri_map->pri_bit_map); + + cmd = HCLGE_OPC_TM_QS_TO_PRI_LINK; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + qs_to_pri_map = (struct hclge_qs_to_pri_link_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "QS_TO_PRI qs_id: %u\n", + qs_to_pri_map->qs_id); + dev_info(&hdev->pdev->dev, "QS_TO_PRI priority: %u\n", + qs_to_pri_map->priority); + dev_info(&hdev->pdev->dev, "QS_TO_PRI link_vld: %u\n", + qs_to_pri_map->link_vld); + + cmd = HCLGE_OPC_TM_NQ_TO_QS_LINK; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + nq_to_qs_map = (struct hclge_nq_to_qs_link_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "NQ_TO_QS nq_id: %u\n", nq_to_qs_map->nq_id); + dev_info(&hdev->pdev->dev, "NQ_TO_QS qset_id: %u\n", + nq_to_qs_map->qset_id); + + cmd = HCLGE_OPC_TM_PG_WEIGHT; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + pg_weight = (struct hclge_pg_weight_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "PG pg_id: %u\n", pg_weight->pg_id); + dev_info(&hdev->pdev->dev, "PG dwrr: %u\n", pg_weight->dwrr); + + cmd = HCLGE_OPC_TM_QS_WEIGHT; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + qs_weight = (struct hclge_qs_weight_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "QS qs_id: %u\n", qs_weight->qs_id); + dev_info(&hdev->pdev->dev, "QS dwrr: %u\n", qs_weight->dwrr); + + cmd = HCLGE_OPC_TM_PRI_WEIGHT; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + priority_weight = (struct hclge_priority_weight_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "PRI pri_id: %u\n", priority_weight->pri_id); + dev_info(&hdev->pdev->dev, "PRI dwrr: %u\n", priority_weight->dwrr); + + cmd = HCLGE_OPC_TM_PRI_C_SHAPPING; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + shap_cfg_cmd = (struct hclge_pri_shapping_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "PRI_C pri_id: %u\n", shap_cfg_cmd->pri_id); + dev_info(&hdev->pdev->dev, "PRI_C pri_shapping: 0x%x\n", + shap_cfg_cmd->pri_shapping_para); + + cmd = HCLGE_OPC_TM_PRI_P_SHAPPING; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_cmd_send; + + shap_cfg_cmd = (struct hclge_pri_shapping_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "PRI_P pri_id: %u\n", shap_cfg_cmd->pri_id); + dev_info(&hdev->pdev->dev, "PRI_P pri_shapping: 0x%x\n", + 
shap_cfg_cmd->pri_shapping_para); + + hclge_dbg_dump_tm_pg(hdev); + + return; + +err_tm_cmd_send: + dev_err(&hdev->pdev->dev, "dump tm fail(0x%x), status is %d\n", + cmd, ret); +} + +static void hclge_dbg_dump_tm_map(struct hclge_dev *hdev, char *cmd_buf) +{ + struct hclge_bp_to_qs_map_cmd *bp_to_qs_map_cmd; + struct hclge_nq_to_qs_link_cmd *nq_to_qs_map; + struct hclge_qs_to_pri_link_cmd *map; + struct hclge_tqp_tx_queue_tc_cmd *tc; + enum hclge_opcode_type cmd; + struct hclge_desc desc; + int queue_id, group_id; + u32 qset_mapping[32]; + int tc_id, qset_id; + int pri_id, ret; + u32 i; + + ret = kstrtouint(&cmd_buf[12], 10, &queue_id); + queue_id = (ret != 0) ? 0 : queue_id; + + cmd = HCLGE_OPC_TM_NQ_TO_QS_LINK; + nq_to_qs_map = (struct hclge_nq_to_qs_link_cmd *)desc.data; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + nq_to_qs_map->nq_id = cpu_to_le16(queue_id); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_map_cmd_send; + qset_id = nq_to_qs_map->qset_id & 0x3FF; + + cmd = HCLGE_OPC_TM_QS_TO_PRI_LINK; + map = (struct hclge_qs_to_pri_link_cmd *)desc.data; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + map->qs_id = cpu_to_le16(qset_id); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_map_cmd_send; + pri_id = map->priority; + + cmd = HCLGE_OPC_TQP_TX_QUEUE_TC; + tc = (struct hclge_tqp_tx_queue_tc_cmd *)desc.data; + hclge_cmd_setup_basic_desc(&desc, cmd, true); + tc->queue_id = cpu_to_le16(queue_id); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_map_cmd_send; + tc_id = tc->tc_id & 0x7; + + dev_info(&hdev->pdev->dev, "queue_id | qset_id | pri_id | tc_id\n"); + dev_info(&hdev->pdev->dev, "%04d | %04d | %02d | %02d\n", + queue_id, qset_id, pri_id, tc_id); + + cmd = HCLGE_OPC_TM_BP_TO_QSET_MAPPING; + bp_to_qs_map_cmd = (struct hclge_bp_to_qs_map_cmd *)desc.data; + for (group_id = 0; group_id < 32; group_id++) { + hclge_cmd_setup_basic_desc(&desc, cmd, true); + bp_to_qs_map_cmd->tc_id = tc_id; + bp_to_qs_map_cmd->qs_group_id = group_id; + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + goto err_tm_map_cmd_send; + + qset_mapping[group_id] = bp_to_qs_map_cmd->qs_bit_map; + } + + dev_info(&hdev->pdev->dev, "index | tm bp qset mapping:\n"); + + i = 0; + for (group_id = 0; group_id < 4; group_id++) { + dev_info(&hdev->pdev->dev, + "%04d | %08x:%08x:%08x:%08x:%08x:%08x:%08x:%08x\n", + group_id * 256, qset_mapping[(u32)(i + 7)], + qset_mapping[(u32)(i + 6)], qset_mapping[(u32)(i + 5)], + qset_mapping[(u32)(i + 4)], qset_mapping[(u32)(i + 3)], + qset_mapping[(u32)(i + 2)], qset_mapping[(u32)(i + 1)], + qset_mapping[i]); + i += 8; + } + + return; + +err_tm_map_cmd_send: + dev_err(&hdev->pdev->dev, "dump tqp map fail(0x%x), status is %d\n", + cmd, ret); +} + +static void hclge_dbg_dump_qos_pause_cfg(struct hclge_dev *hdev) +{ + struct hclge_cfg_pause_param_cmd *pause_param; + struct hclge_desc desc; + int ret; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CFG_MAC_PARA, true); + + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) { + dev_err(&hdev->pdev->dev, "dump qos pause cfg fail, status is %d.\n", + ret); + return; + } + + pause_param = (struct hclge_cfg_pause_param_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "dump qos pause cfg\n"); + dev_info(&hdev->pdev->dev, "pause_trans_gap: 0x%x\n", + pause_param->pause_trans_gap); + dev_info(&hdev->pdev->dev, "pause_trans_time: 0x%x\n", + pause_param->pause_trans_time); +} + +static void hclge_dbg_dump_qos_pri_map(struct hclge_dev *hdev) +{ + struct hclge_qos_pri_map_cmd *pri_map;
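The reply decoded below is the nibble-packed priority-to-TC table (struct hclge_qos_pri_map_cmd, added to hclge_debugfs.h by this patch): each byte carries two 4-bit TC values. A sketch of the equivalent manual unpacking, assuming the common little-endian bitfield layout (bitfield order is compiler-dependent, which is why the driver goes through the struct instead):

	/* sketch: unpacking the first two priorities from the raw bytes */
	u8 *raw = (u8 *)desc.data;
	u8 pri0_tc = raw[0] & 0xf;        /* pri_map->pri0_tc */
	u8 pri1_tc = (raw[0] >> 4) & 0xf; /* pri_map->pri1_tc */
+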
struct hclge_desc desc; + int ret; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_PRI_TO_TC_MAPPING, true); + + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) { + dev_err(&hdev->pdev->dev, + "dump qos pri map fail, status is %d.\n", ret); + return; + } + + pri_map = (struct hclge_qos_pri_map_cmd *)desc.data; + dev_info(&hdev->pdev->dev, "dump qos pri map\n"); + dev_info(&hdev->pdev->dev, "vlan_to_pri: 0x%x\n", pri_map->vlan_pri); + dev_info(&hdev->pdev->dev, "pri_0_to_tc: 0x%x\n", pri_map->pri0_tc); + dev_info(&hdev->pdev->dev, "pri_1_to_tc: 0x%x\n", pri_map->pri1_tc); + dev_info(&hdev->pdev->dev, "pri_2_to_tc: 0x%x\n", pri_map->pri2_tc); + dev_info(&hdev->pdev->dev, "pri_3_to_tc: 0x%x\n", pri_map->pri3_tc); + dev_info(&hdev->pdev->dev, "pri_4_to_tc: 0x%x\n", pri_map->pri4_tc); + dev_info(&hdev->pdev->dev, "pri_5_to_tc: 0x%x\n", pri_map->pri5_tc); + dev_info(&hdev->pdev->dev, "pri_6_to_tc: 0x%x\n", pri_map->pri6_tc); + dev_info(&hdev->pdev->dev, "pri_7_to_tc: 0x%x\n", pri_map->pri7_tc); +} + +static void hclge_dbg_dump_qos_buf_cfg(struct hclge_dev *hdev) +{ + struct hclge_tx_buff_alloc_cmd *tx_buf_cmd; + struct hclge_rx_priv_buff_cmd *rx_buf_cmd; + struct hclge_rx_priv_wl_buf *rx_priv_wl; + struct hclge_rx_com_wl *rx_packet_cnt; + struct hclge_rx_com_thrd *rx_com_thrd; + struct hclge_rx_com_wl *rx_com_wl; + enum hclge_opcode_type cmd; + struct hclge_desc desc[2]; + int i, ret; + + cmd = HCLGE_OPC_TX_BUFF_ALLOC; + hclge_cmd_setup_basic_desc(desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, desc, 1); + if (ret) + goto err_qos_cmd_send; + + dev_info(&hdev->pdev->dev, "dump qos buf cfg\n"); + + tx_buf_cmd = (struct hclge_tx_buff_alloc_cmd *)desc[0].data; + for (i = 0; i < HCLGE_TC_NUM; i++) + dev_info(&hdev->pdev->dev, "tx_packet_buf_tc_%d: 0x%x\n", i, + tx_buf_cmd->tx_pkt_buff[i]); + + cmd = HCLGE_OPC_RX_PRIV_BUFF_ALLOC; + hclge_cmd_setup_basic_desc(desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, desc, 1); + if (ret) + goto err_qos_cmd_send; + + dev_info(&hdev->pdev->dev, "\n"); + rx_buf_cmd = (struct hclge_rx_priv_buff_cmd *)desc[0].data; + for (i = 0; i < HCLGE_TC_NUM; i++) + dev_info(&hdev->pdev->dev, "rx_packet_buf_tc_%d: 0x%x\n", i, + rx_buf_cmd->buf_num[i]); + + dev_info(&hdev->pdev->dev, "rx_share_buf: 0x%x\n", + rx_buf_cmd->shared_buf); + + cmd = HCLGE_OPC_RX_PRIV_WL_ALLOC; + hclge_cmd_setup_basic_desc(&desc[0], cmd, true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[1], cmd, true); + ret = hclge_cmd_send(&hdev->hw, desc, 2); + if (ret) + goto err_qos_cmd_send; + + dev_info(&hdev->pdev->dev, "\n"); + rx_priv_wl = (struct hclge_rx_priv_wl_buf *)desc[0].data; + for (i = 0; i < HCLGE_TC_NUM_ONE_DESC; i++) + dev_info(&hdev->pdev->dev, + "rx_priv_wl_tc_%d: high: 0x%x, low: 0x%x\n", i, + rx_priv_wl->tc_wl[i].high, rx_priv_wl->tc_wl[i].low); + + rx_priv_wl = (struct hclge_rx_priv_wl_buf *)desc[1].data; + for (i = 0; i < HCLGE_TC_NUM_ONE_DESC; i++) + dev_info(&hdev->pdev->dev, + "rx_priv_wl_tc_%d: high: 0x%x, low: 0x%x\n", i + 4, + rx_priv_wl->tc_wl[i].high, rx_priv_wl->tc_wl[i].low); + + cmd = HCLGE_OPC_RX_COM_THRD_ALLOC; + hclge_cmd_setup_basic_desc(&desc[0], cmd, true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[1], cmd, true); + ret = hclge_cmd_send(&hdev->hw, desc, 2); + if (ret) + goto err_qos_cmd_send; + + dev_info(&hdev->pdev->dev, "\n"); + rx_com_thrd = (struct hclge_rx_com_thrd *)desc[0].data; + for (i = 0; i < HCLGE_TC_NUM_ONE_DESC; i++) + dev_info(&hdev->pdev->dev, + 
"rx_com_thrd_tc_%d: high: 0x%x, low: 0x%x\n", i, + rx_com_thrd->com_thrd[i].high, + rx_com_thrd->com_thrd[i].low); + + rx_com_thrd = (struct hclge_rx_com_thrd *)desc[1].data; + for (i = 0; i < HCLGE_TC_NUM_ONE_DESC; i++) + dev_info(&hdev->pdev->dev, + "rx_com_thrd_tc_%d: high: 0x%x, low: 0x%x\n", i + 4, + rx_com_thrd->com_thrd[i].high, + rx_com_thrd->com_thrd[i].low); + + cmd = HCLGE_OPC_RX_COM_WL_ALLOC; + hclge_cmd_setup_basic_desc(desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, desc, 1); + if (ret) + goto err_qos_cmd_send; + + rx_com_wl = (struct hclge_rx_com_wl *)desc[0].data; + dev_info(&hdev->pdev->dev, "\n"); + dev_info(&hdev->pdev->dev, "rx_com_wl: high: 0x%x, low: 0x%x\n", + rx_com_wl->com_wl.high, rx_com_wl->com_wl.low); + + cmd = HCLGE_OPC_RX_GBL_PKT_CNT; + hclge_cmd_setup_basic_desc(desc, cmd, true); + ret = hclge_cmd_send(&hdev->hw, desc, 1); + if (ret) + goto err_qos_cmd_send; + + rx_packet_cnt = (struct hclge_rx_com_wl *)desc[0].data; + dev_info(&hdev->pdev->dev, + "rx_global_packet_cnt: high: 0x%x, low: 0x%x\n", + rx_packet_cnt->com_wl.high, rx_packet_cnt->com_wl.low); + + return; + +err_qos_cmd_send: + dev_err(&hdev->pdev->dev, + "dump qos buf cfg fail(0x%x), status is %d\n", cmd, ret); +} + +static void hclge_dbg_dump_mng_table(struct hclge_dev *hdev) +{ + struct hclge_mac_ethertype_idx_rd_cmd *req0; + char printf_buf[HCLGE_DBG_BUF_LEN]; + struct hclge_desc desc; + int ret, i; + + dev_info(&hdev->pdev->dev, "mng tab:\n"); + memset(printf_buf, 0, HCLGE_DBG_BUF_LEN); + strncat(printf_buf, + "entry|mac_addr |mask|ether|mask|vlan|mask", + HCLGE_DBG_BUF_LEN - 1); + strncat(printf_buf + strlen(printf_buf), + "|i_map|i_dir|e_type|pf_id|vf_id|q_id|drop\n", + HCLGE_DBG_BUF_LEN - strlen(printf_buf) - 1); + + dev_info(&hdev->pdev->dev, "%s", printf_buf); + + for (i = 0; i < HCLGE_DBG_MNG_TBL_MAX; i++) { + hclge_cmd_setup_basic_desc(&desc, HCLGE_MAC_ETHERTYPE_IDX_RD, + true); + req0 = (struct hclge_mac_ethertype_idx_rd_cmd *)&desc.data; + req0->index = cpu_to_le16(i); + + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) { + dev_err(&hdev->pdev->dev, + "call hclge_cmd_send fail, ret = %d\n", ret); + return; + } + + if (!req0->resp_code) + continue; + + memset(printf_buf, 0, HCLGE_DBG_BUF_LEN); + snprintf(printf_buf, HCLGE_DBG_BUF_LEN, + "%02u |%02x:%02x:%02x:%02x:%02x:%02x|", + req0->index, req0->mac_add[0], req0->mac_add[1], + req0->mac_add[2], req0->mac_add[3], req0->mac_add[4], + req0->mac_add[5]); + + snprintf(printf_buf + strlen(printf_buf), + HCLGE_DBG_BUF_LEN - strlen(printf_buf), + "%x |%04x |%x |%04x|%x |%02x |%02x |", + !!(req0->flags & HCLGE_DBG_MNG_MAC_MASK_B), + req0->ethter_type, + !!(req0->flags & HCLGE_DBG_MNG_ETHER_MASK_B), + req0->vlan_tag & HCLGE_DBG_MNG_VLAN_TAG, + !!(req0->flags & HCLGE_DBG_MNG_VLAN_MASK_B), + req0->i_port_bitmap, req0->i_port_direction); + + snprintf(printf_buf + strlen(printf_buf), + HCLGE_DBG_BUF_LEN - strlen(printf_buf), + "%d |%d |%02d |%04d|%x\n", + !!(req0->egress_port & HCLGE_DBG_MNG_E_TYPE_B), + req0->egress_port & HCLGE_DBG_MNG_PF_ID, + (req0->egress_port >> 3) & HCLGE_DBG_MNG_VF_ID, + req0->egress_queue, + !!(req0->egress_port & HCLGE_DBG_MNG_DROP_B)); + + dev_info(&hdev->pdev->dev, "%s", printf_buf); + } +} + +static void hclge_dbg_fd_tcam_read(struct hclge_dev *hdev, u8 stage, + bool sel_x, u32 loc) +{ + struct hclge_fd_tcam_config_1_cmd *req1; + struct hclge_fd_tcam_config_2_cmd *req2; + struct hclge_fd_tcam_config_3_cmd *req3; + struct hclge_desc desc[3]; + int ret, i; + u32 *req; + + hclge_cmd_setup_basic_desc(&desc[0], 
HCLGE_OPC_FD_TCAM_OP, true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[1], HCLGE_OPC_FD_TCAM_OP, true); + desc[1].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[2], HCLGE_OPC_FD_TCAM_OP, true); + + req1 = (struct hclge_fd_tcam_config_1_cmd *)desc[0].data; + req2 = (struct hclge_fd_tcam_config_2_cmd *)desc[1].data; + req3 = (struct hclge_fd_tcam_config_3_cmd *)desc[2].data; + + req1->stage = stage; + req1->xy_sel = sel_x ? 1 : 0; + req1->index = cpu_to_le32(loc); + + ret = hclge_cmd_send(&hdev->hw, desc, 3); + if (ret) + return; + + dev_info(&hdev->pdev->dev, " read result tcam key %s(%u):\n", + sel_x ? "x" : "y", loc); + + req = (u32 *)req1->tcam_data; + for (i = 0; i < 2; i++) + dev_info(&hdev->pdev->dev, "%08x\n", *req++); + + req = (u32 *)req2->tcam_data; + for (i = 0; i < 6; i++) + dev_info(&hdev->pdev->dev, "%08x\n", *req++); + + req = (u32 *)req3->tcam_data; + for (i = 0; i < 5; i++) + dev_info(&hdev->pdev->dev, "%08x\n", *req++); +} + +static void hclge_dbg_fd_tcam(struct hclge_dev *hdev) +{ + u32 i; + + for (i = 0; i < hdev->fd_cfg.rule_num[0]; i++) { + hclge_dbg_fd_tcam_read(hdev, 0, true, i); + hclge_dbg_fd_tcam_read(hdev, 0, false, i); + } +} + +int hclge_dbg_run_cmd(struct hnae3_handle *handle, char *cmd_buf) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; + + if (strncmp(cmd_buf, "dump fd tcam", 12) == 0) { + hclge_dbg_fd_tcam(hdev); + } else if (strncmp(cmd_buf, "dump tc", 7) == 0) { + hclge_dbg_dump_tc(hdev); + } else if (strncmp(cmd_buf, "dump tm map", 11) == 0) { + hclge_dbg_dump_tm_map(hdev, cmd_buf); + } else if (strncmp(cmd_buf, "dump tm", 7) == 0) { + hclge_dbg_dump_tm(hdev); + } else if (strncmp(cmd_buf, "dump qos pause cfg", 18) == 0) { + hclge_dbg_dump_qos_pause_cfg(hdev); + } else if (strncmp(cmd_buf, "dump qos pri map", 16) == 0) { + hclge_dbg_dump_qos_pri_map(hdev); + } else if (strncmp(cmd_buf, "dump qos buf cfg", 16) == 0) { + hclge_dbg_dump_qos_buf_cfg(hdev); + } else if (strncmp(cmd_buf, "dump mng tbl", 12) == 0) { + hclge_dbg_dump_mng_table(hdev); + } else if (strncmp(cmd_buf, "dump reg", 8) == 0) { + hclge_dbg_dump_reg_cmd(hdev, cmd_buf); + } else { + dev_info(&hdev->pdev->dev, "unknown command\n"); + return -EINVAL; + } + + return 0; +} diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.h new file mode 100644 index 000000000000..d055fda41775 --- /dev/null +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.h @@ -0,0 +1,713 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* Copyright (c) 2018-2019 Hisilicon Limited. 
*/ + +#ifndef __HCLGE_DEBUGFS_H +#define __HCLGE_DEBUGFS_H + +#define HCLGE_DBG_BUF_LEN 256 +#define HCLGE_DBG_MNG_TBL_MAX 64 + +#define HCLGE_DBG_MNG_VLAN_MASK_B BIT(0) +#define HCLGE_DBG_MNG_MAC_MASK_B BIT(1) +#define HCLGE_DBG_MNG_ETHER_MASK_B BIT(2) +#define HCLGE_DBG_MNG_E_TYPE_B BIT(11) +#define HCLGE_DBG_MNG_DROP_B BIT(13) +#define HCLGE_DBG_MNG_VLAN_TAG 0x0FFF +#define HCLGE_DBG_MNG_PF_ID 0x0007 +#define HCLGE_DBG_MNG_VF_ID 0x00FF + +/* Get DFX BD number offset */ +#define HCLGE_DBG_DFX_BIOS_OFFSET 1 +#define HCLGE_DBG_DFX_SSU_0_OFFSET 2 +#define HCLGE_DBG_DFX_SSU_1_OFFSET 3 +#define HCLGE_DBG_DFX_IGU_OFFSET 4 +#define HCLGE_DBG_DFX_RPU_0_OFFSET 5 + +#define HCLGE_DBG_DFX_RPU_1_OFFSET 6 +#define HCLGE_DBG_DFX_NCSI_OFFSET 7 +#define HCLGE_DBG_DFX_RTC_OFFSET 8 +#define HCLGE_DBG_DFX_PPP_OFFSET 9 +#define HCLGE_DBG_DFX_RCB_OFFSET 10 +#define HCLGE_DBG_DFX_TQP_OFFSET 11 + +#define HCLGE_DBG_DFX_SSU_2_OFFSET 12 + +#pragma pack(1) + +struct hclge_qos_pri_map_cmd { + u8 pri0_tc : 4, + pri1_tc : 4; + u8 pri2_tc : 4, + pri3_tc : 4; + u8 pri4_tc : 4, + pri5_tc : 4; + u8 pri6_tc : 4, + pri7_tc : 4; + u8 vlan_pri : 4, + rev : 4; +}; + +struct hclge_dbg_bitmap_cmd { + union { + u8 bitmap; + struct { + u8 bit0 : 1, + bit1 : 1, + bit2 : 1, + bit3 : 1, + bit4 : 1, + bit5 : 1, + bit6 : 1, + bit7 : 1; + }; + }; +}; + +struct hclge_dbg_dfx_message { + int flag; + char message[60]; +}; + +#pragma pack() + +static struct hclge_dbg_dfx_message hclge_dbg_bios_common_reg[] = { + {false, "Reserved"}, + {true, "BP_CPU_STATE"}, + {true, "DFX_MSIX_INFO_NIC_0"}, + {true, "DFX_MSIX_INFO_NIC_1"}, + {true, "DFX_MSIX_INFO_NIC_2"}, + {true, "DFX_MSIX_INFO_NIC_3"}, + + {true, "DFX_MSIX_INFO_ROC_0"}, + {true, "DFX_MSIX_INFO_ROC_1"}, + {true, "DFX_MSIX_INFO_ROC_2"}, + {true, "DFX_MSIX_INFO_ROC_3"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_ssu_reg_0[] = { + {false, "Reserved"}, + {true, "SSU_ETS_PORT_STATUS"}, + {true, "SSU_ETS_TCG_STATUS"}, + {false, "Reserved"}, + {false, "Reserved"}, + {true, "SSU_BP_STATUS_0"}, + + {true, "SSU_BP_STATUS_1"}, + {true, "SSU_BP_STATUS_2"}, + {true, "SSU_BP_STATUS_3"}, + {true, "SSU_BP_STATUS_4"}, + {true, "SSU_BP_STATUS_5"}, + {true, "SSU_MAC_TX_PFC_IND"}, + + {true, "MAC_SSU_RX_PFC_IND"}, + {true, "BTMP_AGEING_ST_B0"}, + {true, "BTMP_AGEING_ST_B1"}, + {true, "BTMP_AGEING_ST_B2"}, + {false, "Reserved"}, + {false, "Reserved"}, + + {true, "FULL_DROP_NUM"}, + {true, "PART_DROP_NUM"}, + {true, "PPP_KEY_DROP_NUM"}, + {true, "PPP_RLT_DROP_NUM"}, + {true, "LO_PRI_UNICAST_RLT_DROP_NUM"}, + {true, "HI_PRI_MULTICAST_RLT_DROP_NUM"}, + + {true, "LO_PRI_MULTICAST_RLT_DROP_NUM"}, + {true, "NCSI_PACKET_CURR_BUFFER_CNT"}, + {true, "BTMP_AGEING_RLS_CNT_BANK0"}, + {true, "BTMP_AGEING_RLS_CNT_BANK1"}, + {true, "BTMP_AGEING_RLS_CNT_BANK2"}, + {true, "SSU_MB_RD_RLT_DROP_CNT"}, + + {true, "SSU_PPP_MAC_KEY_NUM_L"}, + {true, "SSU_PPP_MAC_KEY_NUM_H"}, + {true, "SSU_PPP_HOST_KEY_NUM_L"}, + {true, "SSU_PPP_HOST_KEY_NUM_H"}, + {true, "PPP_SSU_MAC_RLT_NUM_L"}, + {true, "PPP_SSU_MAC_RLT_NUM_H"}, + + {true, "PPP_SSU_HOST_RLT_NUM_L"}, + {true, "PPP_SSU_HOST_RLT_NUM_H"}, + {true, "NCSI_RX_PACKET_IN_CNT_L"}, + {true, "NCSI_RX_PACKET_IN_CNT_H"}, + {true, "NCSI_TX_PACKET_OUT_CNT_L"}, + {true, "NCSI_TX_PACKET_OUT_CNT_H"}, + + {true, "SSU_KEY_DROP_NUM"}, + {true, "MB_UNCOPY_NUM"}, + {true, "RX_OQ_DROP_PKT_CNT"}, + {true, "TX_OQ_DROP_PKT_CNT"}, + {true, "BANK_UNBALANCE_DROP_CNT"}, + {true, "BANK_UNBALANCE_RX_DROP_CNT"}, + + {true, "NIC_L2_ERR_DROP_PKT_CNT"}, + 
{true, "ROC_L2_ERR_DROP_PKT_CNT"}, + {true, "NIC_L2_ERR_DROP_PKT_CNT_RX"}, + {true, "ROC_L2_ERR_DROP_PKT_CNT_RX"}, + {true, "RX_OQ_GLB_DROP_PKT_CNT"}, + {false, "Reserved"}, + + {true, "LO_PRI_UNICAST_CUR_CNT"}, + {true, "HI_PRI_MULTICAST_CUR_CNT"}, + {true, "LO_PRI_MULTICAST_CUR_CNT"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_ssu_reg_1[] = { + {true, "prt_id"}, + {true, "PACKET_TC_CURR_BUFFER_CNT_0"}, + {true, "PACKET_TC_CURR_BUFFER_CNT_1"}, + {true, "PACKET_TC_CURR_BUFFER_CNT_2"}, + {true, "PACKET_TC_CURR_BUFFER_CNT_3"}, + {true, "PACKET_TC_CURR_BUFFER_CNT_4"}, + + {true, "PACKET_TC_CURR_BUFFER_CNT_5"}, + {true, "PACKET_TC_CURR_BUFFER_CNT_6"}, + {true, "PACKET_TC_CURR_BUFFER_CNT_7"}, + {true, "PACKET_CURR_BUFFER_CNT"}, + {false, "Reserved"}, + {false, "Reserved"}, + + {true, "RX_PACKET_IN_CNT_L"}, + {true, "RX_PACKET_IN_CNT_H"}, + {true, "RX_PACKET_OUT_CNT_L"}, + {true, "RX_PACKET_OUT_CNT_H"}, + {true, "TX_PACKET_IN_CNT_L"}, + {true, "TX_PACKET_IN_CNT_H"}, + + {true, "TX_PACKET_OUT_CNT_L"}, + {true, "TX_PACKET_OUT_CNT_H"}, + {true, "ROC_RX_PACKET_IN_CNT_L"}, + {true, "ROC_RX_PACKET_IN_CNT_H"}, + {true, "ROC_TX_PACKET_OUT_CNT_L"}, + {true, "ROC_TX_PACKET_OUT_CNT_H"}, + + {true, "RX_PACKET_TC_IN_CNT_0_L"}, + {true, "RX_PACKET_TC_IN_CNT_0_H"}, + {true, "RX_PACKET_TC_IN_CNT_1_L"}, + {true, "RX_PACKET_TC_IN_CNT_1_H"}, + {true, "RX_PACKET_TC_IN_CNT_2_L"}, + {true, "RX_PACKET_TC_IN_CNT_2_H"}, + + {true, "RX_PACKET_TC_IN_CNT_3_L"}, + {true, "RX_PACKET_TC_IN_CNT_3_H"}, + {true, "RX_PACKET_TC_IN_CNT_4_L"}, + {true, "RX_PACKET_TC_IN_CNT_4_H"}, + {true, "RX_PACKET_TC_IN_CNT_5_L"}, + {true, "RX_PACKET_TC_IN_CNT_5_H"}, + + {true, "RX_PACKET_TC_IN_CNT_6_L"}, + {true, "RX_PACKET_TC_IN_CNT_6_H"}, + {true, "RX_PACKET_TC_IN_CNT_7_L"}, + {true, "RX_PACKET_TC_IN_CNT_7_H"}, + {true, "RX_PACKET_TC_OUT_CNT_0_L"}, + {true, "RX_PACKET_TC_OUT_CNT_0_H"}, + + {true, "RX_PACKET_TC_OUT_CNT_1_L"}, + {true, "RX_PACKET_TC_OUT_CNT_1_H"}, + {true, "RX_PACKET_TC_OUT_CNT_2_L"}, + {true, "RX_PACKET_TC_OUT_CNT_2_H"}, + {true, "RX_PACKET_TC_OUT_CNT_3_L"}, + {true, "RX_PACKET_TC_OUT_CNT_3_H"}, + + {true, "RX_PACKET_TC_OUT_CNT_4_L"}, + {true, "RX_PACKET_TC_OUT_CNT_4_H"}, + {true, "RX_PACKET_TC_OUT_CNT_5_L"}, + {true, "RX_PACKET_TC_OUT_CNT_5_H"}, + {true, "RX_PACKET_TC_OUT_CNT_6_L"}, + {true, "RX_PACKET_TC_OUT_CNT_6_H"}, + + {true, "RX_PACKET_TC_OUT_CNT_7_L"}, + {true, "RX_PACKET_TC_OUT_CNT_7_H"}, + {true, "TX_PACKET_TC_IN_CNT_0_L"}, + {true, "TX_PACKET_TC_IN_CNT_0_H"}, + {true, "TX_PACKET_TC_IN_CNT_1_L"}, + {true, "TX_PACKET_TC_IN_CNT_1_H"}, + + {true, "TX_PACKET_TC_IN_CNT_2_L"}, + {true, "TX_PACKET_TC_IN_CNT_2_H"}, + {true, "TX_PACKET_TC_IN_CNT_3_L"}, + {true, "TX_PACKET_TC_IN_CNT_3_H"}, + {true, "TX_PACKET_TC_IN_CNT_4_L"}, + {true, "TX_PACKET_TC_IN_CNT_4_H"}, + + {true, "TX_PACKET_TC_IN_CNT_5_L"}, + {true, "TX_PACKET_TC_IN_CNT_5_H"}, + {true, "TX_PACKET_TC_IN_CNT_6_L"}, + {true, "TX_PACKET_TC_IN_CNT_6_H"}, + {true, "TX_PACKET_TC_IN_CNT_7_L"}, + {true, "TX_PACKET_TC_IN_CNT_7_H"}, + + {true, "TX_PACKET_TC_OUT_CNT_0_L"}, + {true, "TX_PACKET_TC_OUT_CNT_0_H"}, + {true, "TX_PACKET_TC_OUT_CNT_1_L"}, + {true, "TX_PACKET_TC_OUT_CNT_1_H"}, + {true, "TX_PACKET_TC_OUT_CNT_2_L"}, + {true, "TX_PACKET_TC_OUT_CNT_2_H"}, + + {true, "TX_PACKET_TC_OUT_CNT_3_L"}, + {true, "TX_PACKET_TC_OUT_CNT_3_H"}, + {true, "TX_PACKET_TC_OUT_CNT_4_L"}, + {true, "TX_PACKET_TC_OUT_CNT_4_H"}, + {true, "TX_PACKET_TC_OUT_CNT_5_L"}, + {true, "TX_PACKET_TC_OUT_CNT_5_H"}, + + {true, 
"TX_PACKET_TC_OUT_CNT_6_L"}, + {true, "TX_PACKET_TC_OUT_CNT_6_H"}, + {true, "TX_PACKET_TC_OUT_CNT_7_L"}, + {true, "TX_PACKET_TC_OUT_CNT_7_H"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_ssu_reg_2[] = { + {true, "OQ_INDEX"}, + {true, "QUEUE_CNT"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_igu_egu_reg[] = { + {true, "prt_id"}, + {true, "IGU_RX_ERR_PKT"}, + {true, "IGU_RX_NO_SOF_PKT"}, + {true, "EGU_TX_1588_SHORT_PKT"}, + {true, "EGU_TX_1588_PKT"}, + {true, "EGU_TX_ERR_PKT"}, + + {true, "IGU_RX_OUT_L2_PKT"}, + {true, "IGU_RX_OUT_L3_PKT"}, + {true, "IGU_RX_OUT_L4_PKT"}, + {true, "IGU_RX_IN_L2_PKT"}, + {true, "IGU_RX_IN_L3_PKT"}, + {true, "IGU_RX_IN_L4_PKT"}, + + {true, "IGU_RX_EL3E_PKT"}, + {true, "IGU_RX_EL4E_PKT"}, + {true, "IGU_RX_L3E_PKT"}, + {true, "IGU_RX_L4E_PKT"}, + {true, "IGU_RX_ROCEE_PKT"}, + {true, "IGU_RX_OUT_UDP0_PKT"}, + + {true, "IGU_RX_IN_UDP0_PKT"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + + {true, "IGU_RX_OVERSIZE_PKT_L"}, + {true, "IGU_RX_OVERSIZE_PKT_H"}, + {true, "IGU_RX_UNDERSIZE_PKT_L"}, + {true, "IGU_RX_UNDERSIZE_PKT_H"}, + {true, "IGU_RX_OUT_ALL_PKT_L"}, + {true, "IGU_RX_OUT_ALL_PKT_H"}, + + {true, "IGU_TX_OUT_ALL_PKT_L"}, + {true, "IGU_TX_OUT_ALL_PKT_H"}, + {true, "IGU_RX_UNI_PKT_L"}, + {true, "IGU_RX_UNI_PKT_H"}, + {true, "IGU_RX_MULTI_PKT_L"}, + {true, "IGU_RX_MULTI_PKT_H"}, + + {true, "IGU_RX_BROAD_PKT_L"}, + {true, "IGU_RX_BROAD_PKT_H"}, + {true, "EGU_TX_OUT_ALL_PKT_L"}, + {true, "EGU_TX_OUT_ALL_PKT_H"}, + {true, "EGU_TX_UNI_PKT_L"}, + {true, "EGU_TX_UNI_PKT_H"}, + + {true, "EGU_TX_MULTI_PKT_L"}, + {true, "EGU_TX_MULTI_PKT_H"}, + {true, "EGU_TX_BROAD_PKT_L"}, + {true, "EGU_TX_BROAD_PKT_H"}, + {true, "IGU_TX_KEY_NUM_L"}, + {true, "IGU_TX_KEY_NUM_H"}, + + {true, "IGU_RX_NON_TUN_PKT_L"}, + {true, "IGU_RX_NON_TUN_PKT_H"}, + {true, "IGU_RX_TUN_PKT_L"}, + {true, "IGU_RX_TUN_PKT_H"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_rpu_reg_0[] = { + {true, "tc_queue_num"}, + {true, "FSM_DFX_ST0"}, + {true, "FSM_DFX_ST1"}, + {true, "RPU_RX_PKT_DROP_CNT"}, + {true, "BUF_WAIT_TIMEOUT"}, + {true, "BUF_WAIT_TIMEOUT_QID"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_rpu_reg_1[] = { + {false, "Reserved"}, + {true, "FIFO_DFX_ST0"}, + {true, "FIFO_DFX_ST1"}, + {true, "FIFO_DFX_ST2"}, + {true, "FIFO_DFX_ST3"}, + {true, "FIFO_DFX_ST4"}, + + {true, "FIFO_DFX_ST5"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_ncsi_reg[] = { + {false, "Reserved"}, + {true, "NCSI_EGU_TX_FIFO_STS"}, + {true, "NCSI_PAUSE_STATUS"}, + {true, "NCSI_RX_CTRL_DMAC_ERR_CNT"}, + {true, "NCSI_RX_CTRL_SMAC_ERR_CNT"}, + {true, "NCSI_RX_CTRL_CKS_ERR_CNT"}, + + {true, "NCSI_RX_CTRL_PKT_CNT"}, + {true, "NCSI_RX_PT_DMAC_ERR_CNT"}, + {true, "NCSI_RX_PT_SMAC_ERR_CNT"}, + {true, "NCSI_RX_PT_PKT_CNT"}, + {true, "NCSI_RX_FCS_ERR_CNT"}, + {true, "NCSI_TX_CTRL_DMAC_ERR_CNT"}, + + {true, "NCSI_TX_CTRL_SMAC_ERR_CNT"}, + {true, "NCSI_TX_CTRL_PKT_CNT"}, + {true, "NCSI_TX_PT_DMAC_ERR_CNT"}, + {true, "NCSI_TX_PT_SMAC_ERR_CNT"}, + {true, "NCSI_TX_PT_PKT_CNT"}, + {true, "NCSI_TX_PT_PKT_TRUNC_CNT"}, + + {true, "NCSI_TX_PT_PKT_ERR_CNT"}, + {true, "NCSI_TX_CTRL_PKT_ERR_CNT"}, + {true, "NCSI_RX_CTRL_PKT_TRUNC_CNT"}, + {true, 
"NCSI_RX_CTRL_PKT_CFLIT_CNT"}, + {false, "Reserved"}, + {false, "Reserved"}, + + {true, "NCSI_MAC_RX_OCTETS_OK"}, + {true, "NCSI_MAC_RX_OCTETS_BAD"}, + {true, "NCSI_MAC_RX_UC_PKTS"}, + {true, "NCSI_MAC_RX_MC_PKTS"}, + {true, "NCSI_MAC_RX_BC_PKTS"}, + {true, "NCSI_MAC_RX_PKTS_64OCTETS"}, + + {true, "NCSI_MAC_RX_PKTS_65TO127OCTETS"}, + {true, "NCSI_MAC_RX_PKTS_128TO255OCTETS"}, + {true, "NCSI_MAC_RX_PKTS_255TO511OCTETS"}, + {true, "NCSI_MAC_RX_PKTS_512TO1023OCTETS"}, + {true, "NCSI_MAC_RX_PKTS_1024TO1518OCTETS"}, + {true, "NCSI_MAC_RX_PKTS_1519TOMAXOCTETS"}, + + {true, "NCSI_MAC_RX_FCS_ERRORS"}, + {true, "NCSI_MAC_RX_LONG_ERRORS"}, + {true, "NCSI_MAC_RX_JABBER_ERRORS"}, + {true, "NCSI_MAC_RX_RUNT_ERR_CNT"}, + {true, "NCSI_MAC_RX_SHORT_ERR_CNT"}, + {true, "NCSI_MAC_RX_FILT_PKT_CNT"}, + + {true, "NCSI_MAC_RX_OCTETS_TOTAL_FILT"}, + {true, "NCSI_MAC_TX_OCTETS_OK"}, + {true, "NCSI_MAC_TX_OCTETS_BAD"}, + {true, "NCSI_MAC_TX_UC_PKTS"}, + {true, "NCSI_MAC_TX_MC_PKTS"}, + {true, "NCSI_MAC_TX_BC_PKTS"}, + + {true, "NCSI_MAC_TX_PKTS_64OCTETS"}, + {true, "NCSI_MAC_TX_PKTS_65TO127OCTETS"}, + {true, "NCSI_MAC_TX_PKTS_128TO255OCTETS"}, + {true, "NCSI_MAC_TX_PKTS_256TO511OCTETS"}, + {true, "NCSI_MAC_TX_PKTS_512TO1023OCTETS"}, + {true, "NCSI_MAC_TX_PKTS_1024TO1518OCTETS"}, + + {true, "NCSI_MAC_TX_PKTS_1519TOMAXOCTETS"}, + {true, "NCSI_MAC_TX_UNDERRUN"}, + {true, "NCSI_MAC_TX_CRC_ERROR"}, + {true, "NCSI_MAC_TX_PAUSE_FRAMES"}, + {true, "NCSI_MAC_RX_PAD_PKTS"}, + {true, "NCSI_MAC_RX_PAUSE_FRAMES"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_rtc_reg[] = { + {false, "Reserved"}, + {true, "LGE_IGU_AFIFO_DFX_0"}, + {true, "LGE_IGU_AFIFO_DFX_1"}, + {true, "LGE_IGU_AFIFO_DFX_2"}, + {true, "LGE_IGU_AFIFO_DFX_3"}, + {true, "LGE_IGU_AFIFO_DFX_4"}, + + {true, "LGE_IGU_AFIFO_DFX_5"}, + {true, "LGE_IGU_AFIFO_DFX_6"}, + {true, "LGE_IGU_AFIFO_DFX_7"}, + {true, "LGE_EGU_AFIFO_DFX_0"}, + {true, "LGE_EGU_AFIFO_DFX_1"}, + {true, "LGE_EGU_AFIFO_DFX_2"}, + + {true, "LGE_EGU_AFIFO_DFX_3"}, + {true, "LGE_EGU_AFIFO_DFX_4"}, + {true, "LGE_EGU_AFIFO_DFX_5"}, + {true, "LGE_EGU_AFIFO_DFX_6"}, + {true, "LGE_EGU_AFIFO_DFX_7"}, + {true, "CGE_IGU_AFIFO_DFX_0"}, + + {true, "CGE_IGU_AFIFO_DFX_1"}, + {true, "CGE_EGU_AFIFO_DFX_0"}, + {true, "CGE_EGU_AFIFO_DFX_1"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_ppp_reg[] = { + {false, "Reserved"}, + {true, "DROP_FROM_PRT_PKT_CNT"}, + {true, "DROP_FROM_HOST_PKT_CNT"}, + {true, "DROP_TX_VLAN_PROC_CNT"}, + {true, "DROP_MNG_CNT"}, + {true, "DROP_FD_CNT"}, + + {true, "DROP_NO_DST_CNT"}, + {true, "DROP_MC_MBID_FULL_CNT"}, + {true, "DROP_SC_FILTERED"}, + {true, "PPP_MC_DROP_PKT_CNT"}, + {true, "DROP_PT_CNT"}, + {true, "DROP_MAC_ANTI_SPOOF_CNT"}, + + {true, "DROP_IG_VFV_CNT"}, + {true, "DROP_IG_PRTV_CNT"}, + {true, "DROP_CNM_PFC_PAUSE_CNT"}, + {true, "DROP_TORUS_TC_CNT"}, + {true, "DROP_TORUS_LPBK_CNT"}, + {true, "PPP_HFS_STS"}, + + {true, "PPP_MC_RSLT_STS"}, + {true, "PPP_P3U_STS"}, + {true, "PPP_RSLT_DESCR_STS"}, + {true, "PPP_UMV_STS_0"}, + {true, "PPP_UMV_STS_1"}, + {true, "PPP_VFV_STS"}, + + {true, "PPP_GRO_KEY_CNT"}, + {true, "PPP_GRO_INFO_CNT"}, + {true, "PPP_GRO_DROP_CNT"}, + {true, "PPP_GRO_OUT_CNT"}, + {true, "PPP_GRO_KEY_MATCH_DATA_CNT"}, + {true, "PPP_GRO_KEY_MATCH_TCAM_CNT"}, + + {true, "PPP_GRO_INFO_MATCH_CNT"}, + {true, "PPP_GRO_FREE_ENTRY_CNT"}, + {true, "PPP_GRO_INNER_DFX_SIGNAL"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + + {true, "GET_RX_PKT_CNT_L"}, + {true, 
"GET_RX_PKT_CNT_H"}, + {true, "GET_TX_PKT_CNT_L"}, + {true, "GET_TX_PKT_CNT_H"}, + {true, "SEND_UC_PRT2HOST_PKT_CNT_L"}, + {true, "SEND_UC_PRT2HOST_PKT_CNT_H"}, + + {true, "SEND_UC_PRT2PRT_PKT_CNT_L"}, + {true, "SEND_UC_PRT2PRT_PKT_CNT_H"}, + {true, "SEND_UC_HOST2HOST_PKT_CNT_L"}, + {true, "SEND_UC_HOST2HOST_PKT_CNT_H"}, + {true, "SEND_UC_HOST2PRT_PKT_CNT_L"}, + {true, "SEND_UC_HOST2PRT_PKT_CNT_H"}, + + {true, "SEND_MC_FROM_PRT_CNT_L"}, + {true, "SEND_MC_FROM_PRT_CNT_H"}, + {true, "SEND_MC_FROM_HOST_CNT_L"}, + {true, "SEND_MC_FROM_HOST_CNT_H"}, + {true, "SSU_MC_RD_CNT_L"}, + {true, "SSU_MC_RD_CNT_H"}, + + {true, "SSU_MC_DROP_CNT_L"}, + {true, "SSU_MC_DROP_CNT_H"}, + {true, "SSU_MC_RD_PKT_CNT_L"}, + {true, "SSU_MC_RD_PKT_CNT_H"}, + {true, "PPP_MC_2HOST_PKT_CNT_L"}, + {true, "PPP_MC_2HOST_PKT_CNT_H"}, + + {true, "PPP_MC_2PRT_PKT_CNT_L"}, + {true, "PPP_MC_2PRT_PKT_CNT_H"}, + {true, "NTSNOS_PKT_CNT_L"}, + {true, "NTSNOS_PKT_CNT_H"}, + {true, "NTUP_PKT_CNT_L"}, + {true, "NTUP_PKT_CNT_H"}, + + {true, "NTLCL_PKT_CNT_L"}, + {true, "NTLCL_PKT_CNT_H"}, + {true, "NTTGT_PKT_CNT_L"}, + {true, "NTTGT_PKT_CNT_H"}, + {true, "RTNS_PKT_CNT_L"}, + {true, "RTNS_PKT_CNT_H"}, + + {true, "RTLPBK_PKT_CNT_L"}, + {true, "RTLPBK_PKT_CNT_H"}, + {true, "NR_PKT_CNT_L"}, + {true, "NR_PKT_CNT_H"}, + {true, "RR_PKT_CNT_L"}, + {true, "RR_PKT_CNT_H"}, + + {true, "MNG_TBL_HIT_CNT_L"}, + {true, "MNG_TBL_HIT_CNT_H"}, + {true, "FD_TBL_HIT_CNT_L"}, + {true, "FD_TBL_HIT_CNT_H"}, + {true, "FD_LKUP_CNT_L"}, + {true, "FD_LKUP_CNT_H"}, + + {true, "BC_HIT_CNT_L"}, + {true, "BC_HIT_CNT_H"}, + {true, "UM_TBL_UC_HIT_CNT_L"}, + {true, "UM_TBL_UC_HIT_CNT_H"}, + {true, "UM_TBL_MC_HIT_CNT_L"}, + {true, "UM_TBL_MC_HIT_CNT_H"}, + + {true, "UM_TBL_VMDQ1_HIT_CNT_L"}, + {true, "UM_TBL_VMDQ1_HIT_CNT_H"}, + {true, "MTA_TBL_HIT_CNT_L"}, + {true, "MTA_TBL_HIT_CNT_H"}, + {true, "FWD_BONDING_HIT_CNT_L"}, + {true, "FWD_BONDING_HIT_CNT_H"}, + + {true, "PROMIS_TBL_HIT_CNT_L"}, + {true, "PROMIS_TBL_HIT_CNT_H"}, + {true, "GET_TUNL_PKT_CNT_L"}, + {true, "GET_TUNL_PKT_CNT_H"}, + {true, "GET_BMC_PKT_CNT_L"}, + {true, "GET_BMC_PKT_CNT_H"}, + + {true, "SEND_UC_PRT2BMC_PKT_CNT_L"}, + {true, "SEND_UC_PRT2BMC_PKT_CNT_H"}, + {true, "SEND_UC_HOST2BMC_PKT_CNT_L"}, + {true, "SEND_UC_HOST2BMC_PKT_CNT_H"}, + {true, "SEND_UC_BMC2HOST_PKT_CNT_L"}, + {true, "SEND_UC_BMC2HOST_PKT_CNT_H"}, + + {true, "SEND_UC_BMC2PRT_PKT_CNT_L"}, + {true, "SEND_UC_BMC2PRT_PKT_CNT_H"}, + {true, "PPP_MC_2BMC_PKT_CNT_L"}, + {true, "PPP_MC_2BMC_PKT_CNT_H"}, + {true, "VLAN_MIRR_CNT_L"}, + {true, "VLAN_MIRR_CNT_H"}, + + {true, "IG_MIRR_CNT_L"}, + {true, "IG_MIRR_CNT_H"}, + {true, "EG_MIRR_CNT_L"}, + {true, "EG_MIRR_CNT_H"}, + {true, "RX_DEFAULT_HOST_HIT_CNT_L"}, + {true, "RX_DEFAULT_HOST_HIT_CNT_H"}, + + {true, "LAN_PAIR_CNT_L"}, + {true, "LAN_PAIR_CNT_H"}, + {true, "UM_TBL_MC_HIT_PKT_CNT_L"}, + {true, "UM_TBL_MC_HIT_PKT_CNT_H"}, + {true, "MTA_TBL_HIT_PKT_CNT_L"}, + {true, "MTA_TBL_HIT_PKT_CNT_H"}, + + {true, "PROMIS_TBL_HIT_PKT_CNT_L"}, + {true, "PROMIS_TBL_HIT_PKT_CNT_H"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_rcb_reg[] = { + {false, "Reserved"}, + {true, "FSM_DFX_ST0"}, + {true, "FSM_DFX_ST1"}, + {true, "FSM_DFX_ST2"}, + {true, "FIFO_DFX_ST0"}, + {true, "FIFO_DFX_ST1"}, + + {true, "FIFO_DFX_ST2"}, + {true, "FIFO_DFX_ST3"}, + {true, "FIFO_DFX_ST4"}, + {true, "FIFO_DFX_ST5"}, + {true, "FIFO_DFX_ST6"}, + {true, "FIFO_DFX_ST7"}, + + {true, "FIFO_DFX_ST8"}, + {true, "FIFO_DFX_ST9"}, + {true, 
"FIFO_DFX_ST10"}, + {true, "FIFO_DFX_ST11"}, + {true, "Q_CREDIT_VLD_0"}, + {true, "Q_CREDIT_VLD_1"}, + + {true, "Q_CREDIT_VLD_2"}, + {true, "Q_CREDIT_VLD_3"}, + {true, "Q_CREDIT_VLD_4"}, + {true, "Q_CREDIT_VLD_5"}, + {true, "Q_CREDIT_VLD_6"}, + {true, "Q_CREDIT_VLD_7"}, + + {true, "Q_CREDIT_VLD_8"}, + {true, "Q_CREDIT_VLD_9"}, + {true, "Q_CREDIT_VLD_10"}, + {true, "Q_CREDIT_VLD_11"}, + {true, "Q_CREDIT_VLD_12"}, + {true, "Q_CREDIT_VLD_13"}, + + {true, "Q_CREDIT_VLD_14"}, + {true, "Q_CREDIT_VLD_15"}, + {true, "Q_CREDIT_VLD_16"}, + {true, "Q_CREDIT_VLD_17"}, + {true, "Q_CREDIT_VLD_18"}, + {true, "Q_CREDIT_VLD_19"}, + + {true, "Q_CREDIT_VLD_20"}, + {true, "Q_CREDIT_VLD_21"}, + {true, "Q_CREDIT_VLD_22"}, + {true, "Q_CREDIT_VLD_23"}, + {true, "Q_CREDIT_VLD_24"}, + {true, "Q_CREDIT_VLD_25"}, + + {true, "Q_CREDIT_VLD_26"}, + {true, "Q_CREDIT_VLD_27"}, + {true, "Q_CREDIT_VLD_28"}, + {true, "Q_CREDIT_VLD_29"}, + {true, "Q_CREDIT_VLD_30"}, + {true, "Q_CREDIT_VLD_31"}, + + {true, "GRO_BD_SERR_CNT"}, + {true, "GRO_CONTEXT_SERR_CNT"}, + {true, "RX_STASH_CFG_SERR_CNT"}, + {true, "AXI_RD_FBD_SERR_CNT"}, + {true, "GRO_BD_MERR_CNT"}, + {true, "GRO_CONTEXT_MERR_CNT"}, + + {true, "RX_STASH_CFG_MERR_CNT"}, + {true, "AXI_RD_FBD_MERR_CNT"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, + {false, "Reserved"}, +}; + +static struct hclge_dbg_dfx_message hclge_dbg_tqp_reg[] = { + {true, "q_num"}, + {true, "RCB_CFG_RX_RING_TAIL"}, + {true, "RCB_CFG_RX_RING_HEAD"}, + {true, "RCB_CFG_RX_RING_FBDNUM"}, + {true, "RCB_CFG_RX_RING_OFFSET"}, + {true, "RCB_CFG_RX_RING_FBDOFFSET"}, + + {true, "RCB_CFG_RX_RING_PKTNUM_RECORD"}, + {true, "RCB_CFG_TX_RING_TAIL"}, + {true, "RCB_CFG_TX_RING_HEAD"}, + {true, "RCB_CFG_TX_RING_FBDNUM"}, + {true, "RCB_CFG_TX_RING_OFFSET"}, + {true, "RCB_CFG_TX_RING_EBDNUM"}, +}; + +#endif diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c index 123c37e653f3..d0f654123b9b 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c @@ -4,78 +4,39 @@ #include "hclge_err.h" static const struct hclge_hw_error hclge_imp_tcm_ecc_int[] = { - { .int_msk = BIT(0), .msg = "imp_itcm0_ecc_1bit_err" }, { .int_msk = BIT(1), .msg = "imp_itcm0_ecc_mbit_err" }, - { .int_msk = BIT(2), .msg = "imp_itcm1_ecc_1bit_err" }, { .int_msk = BIT(3), .msg = "imp_itcm1_ecc_mbit_err" }, - { .int_msk = BIT(4), .msg = "imp_itcm2_ecc_1bit_err" }, { .int_msk = BIT(5), .msg = "imp_itcm2_ecc_mbit_err" }, - { .int_msk = BIT(6), .msg = "imp_itcm3_ecc_1bit_err" }, { .int_msk = BIT(7), .msg = "imp_itcm3_ecc_mbit_err" }, - { .int_msk = BIT(8), .msg = "imp_dtcm0_mem0_ecc_1bit_err" }, { .int_msk = BIT(9), .msg = "imp_dtcm0_mem0_ecc_mbit_err" }, - { .int_msk = BIT(10), .msg = "imp_dtcm0_mem1_ecc_1bit_err" }, { .int_msk = BIT(11), .msg = "imp_dtcm0_mem1_ecc_mbit_err" }, - { .int_msk = BIT(12), .msg = "imp_dtcm1_mem0_ecc_1bit_err" }, { .int_msk = BIT(13), .msg = "imp_dtcm1_mem0_ecc_mbit_err" }, - { .int_msk = BIT(14), .msg = "imp_dtcm1_mem1_ecc_1bit_err" }, { .int_msk = BIT(15), .msg = "imp_dtcm1_mem1_ecc_mbit_err" }, - { /* sentinel */ } -}; - -static const struct hclge_hw_error hclge_imp_itcm4_ecc_int[] = { - { .int_msk = BIT(0), .msg = "imp_itcm4_ecc_1bit_err" }, - { .int_msk = BIT(1), .msg = "imp_itcm4_ecc_mbit_err" }, + { .int_msk = BIT(17), .msg = "imp_itcm4_ecc_mbit_err" }, { /* sentinel */ } }; static const struct hclge_hw_error hclge_cmdq_nic_mem_ecc_int[] = { - { .int_msk = 
BIT(0), .msg = "cmdq_nic_rx_depth_ecc_1bit_err" }, { .int_msk = BIT(1), .msg = "cmdq_nic_rx_depth_ecc_mbit_err" }, - { .int_msk = BIT(2), .msg = "cmdq_nic_tx_depth_ecc_1bit_err" }, { .int_msk = BIT(3), .msg = "cmdq_nic_tx_depth_ecc_mbit_err" }, - { .int_msk = BIT(4), .msg = "cmdq_nic_rx_tail_ecc_1bit_err" }, { .int_msk = BIT(5), .msg = "cmdq_nic_rx_tail_ecc_mbit_err" }, - { .int_msk = BIT(6), .msg = "cmdq_nic_tx_tail_ecc_1bit_err" }, { .int_msk = BIT(7), .msg = "cmdq_nic_tx_tail_ecc_mbit_err" }, - { .int_msk = BIT(8), .msg = "cmdq_nic_rx_head_ecc_1bit_err" }, { .int_msk = BIT(9), .msg = "cmdq_nic_rx_head_ecc_mbit_err" }, - { .int_msk = BIT(10), .msg = "cmdq_nic_tx_head_ecc_1bit_err" }, { .int_msk = BIT(11), .msg = "cmdq_nic_tx_head_ecc_mbit_err" }, - { .int_msk = BIT(12), .msg = "cmdq_nic_rx_addr_ecc_1bit_err" }, { .int_msk = BIT(13), .msg = "cmdq_nic_rx_addr_ecc_mbit_err" }, - { .int_msk = BIT(14), .msg = "cmdq_nic_tx_addr_ecc_1bit_err" }, { .int_msk = BIT(15), .msg = "cmdq_nic_tx_addr_ecc_mbit_err" }, - { /* sentinel */ } -}; - -static const struct hclge_hw_error hclge_cmdq_rocee_mem_ecc_int[] = { - { .int_msk = BIT(0), .msg = "cmdq_rocee_rx_depth_ecc_1bit_err" }, - { .int_msk = BIT(1), .msg = "cmdq_rocee_rx_depth_ecc_mbit_err" }, - { .int_msk = BIT(2), .msg = "cmdq_rocee_tx_depth_ecc_1bit_err" }, - { .int_msk = BIT(3), .msg = "cmdq_rocee_tx_depth_ecc_mbit_err" }, - { .int_msk = BIT(4), .msg = "cmdq_rocee_rx_tail_ecc_1bit_err" }, - { .int_msk = BIT(5), .msg = "cmdq_rocee_rx_tail_ecc_mbit_err" }, - { .int_msk = BIT(6), .msg = "cmdq_rocee_tx_tail_ecc_1bit_err" }, - { .int_msk = BIT(7), .msg = "cmdq_rocee_tx_tail_ecc_mbit_err" }, - { .int_msk = BIT(8), .msg = "cmdq_rocee_rx_head_ecc_1bit_err" }, - { .int_msk = BIT(9), .msg = "cmdq_rocee_rx_head_ecc_mbit_err" }, - { .int_msk = BIT(10), .msg = "cmdq_rocee_tx_head_ecc_1bit_err" }, - { .int_msk = BIT(11), .msg = "cmdq_rocee_tx_head_ecc_mbit_err" }, - { .int_msk = BIT(12), .msg = "cmdq_rocee_rx_addr_ecc_1bit_err" }, - { .int_msk = BIT(13), .msg = "cmdq_rocee_rx_addr_ecc_mbit_err" }, - { .int_msk = BIT(14), .msg = "cmdq_rocee_tx_addr_ecc_1bit_err" }, - { .int_msk = BIT(15), .msg = "cmdq_rocee_tx_addr_ecc_mbit_err" }, + { .int_msk = BIT(17), .msg = "cmdq_rocee_rx_depth_ecc_mbit_err" }, + { .int_msk = BIT(19), .msg = "cmdq_rocee_tx_depth_ecc_mbit_err" }, + { .int_msk = BIT(21), .msg = "cmdq_rocee_rx_tail_ecc_mbit_err" }, + { .int_msk = BIT(23), .msg = "cmdq_rocee_tx_tail_ecc_mbit_err" }, + { .int_msk = BIT(25), .msg = "cmdq_rocee_rx_head_ecc_mbit_err" }, + { .int_msk = BIT(27), .msg = "cmdq_rocee_tx_head_ecc_mbit_err" }, + { .int_msk = BIT(29), .msg = "cmdq_rocee_rx_addr_ecc_mbit_err" }, + { .int_msk = BIT(31), .msg = "cmdq_rocee_tx_addr_ecc_mbit_err" }, { /* sentinel */ } }; static const struct hclge_hw_error hclge_tqp_int_ecc_int[] = { - { .int_msk = BIT(0), .msg = "tqp_int_cfg_even_ecc_1bit_err" }, - { .int_msk = BIT(1), .msg = "tqp_int_cfg_odd_ecc_1bit_err" }, - { .int_msk = BIT(2), .msg = "tqp_int_ctrl_even_ecc_1bit_err" }, - { .int_msk = BIT(3), .msg = "tqp_int_ctrl_odd_ecc_1bit_err" }, - { .int_msk = BIT(4), .msg = "tx_que_scan_int_ecc_1bit_err" }, - { .int_msk = BIT(5), .msg = "rx_que_scan_int_ecc_1bit_err" }, { .int_msk = BIT(6), .msg = "tqp_int_cfg_even_ecc_mbit_err" }, { .int_msk = BIT(7), .msg = "tqp_int_cfg_odd_ecc_mbit_err" }, { .int_msk = BIT(8), .msg = "tqp_int_ctrl_even_ecc_mbit_err" }, @@ -85,15 +46,19 @@ static const struct hclge_hw_error hclge_tqp_int_ecc_int[] = { { /* sentinel */ } }; -static const struct hclge_hw_error 
hclge_igu_com_err_int[] = { +static const struct hclge_hw_error hclge_msix_sram_ecc_int[] = { + { .int_msk = BIT(1), .msg = "msix_nic_ecc_mbit_err" }, + { .int_msk = BIT(3), .msg = "msix_rocee_ecc_mbit_err" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_igu_int[] = { { .int_msk = BIT(0), .msg = "igu_rx_buf0_ecc_mbit_err" }, - { .int_msk = BIT(1), .msg = "igu_rx_buf0_ecc_1bit_err" }, { .int_msk = BIT(2), .msg = "igu_rx_buf1_ecc_mbit_err" }, - { .int_msk = BIT(3), .msg = "igu_rx_buf1_ecc_1bit_err" }, { /* sentinel */ } }; -static const struct hclge_hw_error hclge_igu_egu_tnl_err_int[] = { +static const struct hclge_hw_error hclge_igu_egu_tnl_int[] = { { .int_msk = BIT(0), .msg = "rx_buf_overflow" }, { .int_msk = BIT(1), .msg = "rx_stp_fifo_overflow" }, { .int_msk = BIT(2), .msg = "rx_stp_fifo_undeflow" }, @@ -104,51 +69,11 @@ static const struct hclge_hw_error hclge_igu_egu_tnl_err_int[] = { }; static const struct hclge_hw_error hclge_ncsi_err_int[] = { - { .int_msk = BIT(0), .msg = "ncsi_tx_ecc_1bit_err" }, { .int_msk = BIT(1), .msg = "ncsi_tx_ecc_mbit_err" }, { /* sentinel */ } }; -static const struct hclge_hw_error hclge_ppp_mpf_int0[] = { - { .int_msk = BIT(0), .msg = "vf_vlan_ad_mem_ecc_1bit_err" }, - { .int_msk = BIT(1), .msg = "umv_mcast_group_mem_ecc_1bit_err" }, - { .int_msk = BIT(2), .msg = "umv_key_mem0_ecc_1bit_err" }, - { .int_msk = BIT(3), .msg = "umv_key_mem1_ecc_1bit_err" }, - { .int_msk = BIT(4), .msg = "umv_key_mem2_ecc_1bit_err" }, - { .int_msk = BIT(5), .msg = "umv_key_mem3_ecc_1bit_err" }, - { .int_msk = BIT(6), .msg = "umv_ad_mem_ecc_1bit_err" }, - { .int_msk = BIT(7), .msg = "rss_tc_mode_mem_ecc_1bit_err" }, - { .int_msk = BIT(8), .msg = "rss_idt_mem0_ecc_1bit_err" }, - { .int_msk = BIT(9), .msg = "rss_idt_mem1_ecc_1bit_err" }, - { .int_msk = BIT(10), .msg = "rss_idt_mem2_ecc_1bit_err" }, - { .int_msk = BIT(11), .msg = "rss_idt_mem3_ecc_1bit_err" }, - { .int_msk = BIT(12), .msg = "rss_idt_mem4_ecc_1bit_err" }, - { .int_msk = BIT(13), .msg = "rss_idt_mem5_ecc_1bit_err" }, - { .int_msk = BIT(14), .msg = "rss_idt_mem6_ecc_1bit_err" }, - { .int_msk = BIT(15), .msg = "rss_idt_mem7_ecc_1bit_err" }, - { .int_msk = BIT(16), .msg = "rss_idt_mem8_ecc_1bit_err" }, - { .int_msk = BIT(17), .msg = "rss_idt_mem9_ecc_1bit_err" }, - { .int_msk = BIT(18), .msg = "rss_idt_mem10_ecc_1bit_err" }, - { .int_msk = BIT(19), .msg = "rss_idt_mem11_ecc_1bit_err" }, - { .int_msk = BIT(20), .msg = "rss_idt_mem12_ecc_1bit_err" }, - { .int_msk = BIT(21), .msg = "rss_idt_mem13_ecc_1bit_err" }, - { .int_msk = BIT(22), .msg = "rss_idt_mem14_ecc_1bit_err" }, - { .int_msk = BIT(23), .msg = "rss_idt_mem15_ecc_1bit_err" }, - { .int_msk = BIT(24), .msg = "port_vlan_mem_ecc_1bit_err" }, - { .int_msk = BIT(25), .msg = "mcast_linear_table_mem_ecc_1bit_err" }, - { .int_msk = BIT(26), .msg = "mcast_result_mem_ecc_1bit_err" }, - { .int_msk = BIT(27), - .msg = "flow_director_ad_mem0_ecc_1bit_err" }, - { .int_msk = BIT(28), - .msg = "flow_director_ad_mem1_ecc_1bit_err" }, - { .int_msk = BIT(29), - .msg = "rx_vlan_tag_memory_ecc_1bit_err" }, - { .int_msk = BIT(30), - .msg = "Tx_UP_mapping_config_mem_ecc_1bit_err" }, - { /* sentinel */ } -}; - -static const struct hclge_hw_error hclge_ppp_mpf_int1[] = { +static const struct hclge_hw_error hclge_ppp_mpf_abnormal_int_st1[] = { { .int_msk = BIT(0), .msg = "vf_vlan_ad_mem_ecc_mbit_err" }, { .int_msk = BIT(1), .msg = "umv_mcast_group_mem_ecc_mbit_err" }, { .int_msk = BIT(2), .msg = "umv_key_mem0_ecc_mbit_err" }, @@ -187,23 +112,13 @@ static const 
struct hclge_hw_error hclge_ppp_mpf_int1[] = { { /* sentinel */ } }; -static const struct hclge_hw_error hclge_ppp_pf_int[] = { - { .int_msk = BIT(0), .msg = "Tx_vlan_tag_err" }, +static const struct hclge_hw_error hclge_ppp_pf_abnormal_int[] = { + { .int_msk = BIT(0), .msg = "tx_vlan_tag_err" }, { .int_msk = BIT(1), .msg = "rss_list_tc_unassigned_queue_err" }, { /* sentinel */ } }; -static const struct hclge_hw_error hclge_ppp_mpf_int2[] = { - { .int_msk = BIT(0), .msg = "hfs_fifo_mem_ecc_1bit_err" }, - { .int_msk = BIT(1), .msg = "rslt_descr_fifo_mem_ecc_1bit_err" }, - { .int_msk = BIT(2), .msg = "tx_vlan_tag_mem_ecc_1bit_err" }, - { .int_msk = BIT(3), .msg = "FD_CN0_memory_ecc_1bit_err" }, - { .int_msk = BIT(4), .msg = "FD_CN1_memory_ecc_1bit_err" }, - { .int_msk = BIT(5), .msg = "GRO_AD_memory_ecc_1bit_err" }, - { /* sentinel */ } -}; - -static const struct hclge_hw_error hclge_ppp_mpf_int3[] = { +static const struct hclge_hw_error hclge_ppp_mpf_abnormal_int_st3[] = { { .int_msk = BIT(0), .msg = "hfs_fifo_mem_ecc_mbit_err" }, { .int_msk = BIT(1), .msg = "rslt_descr_fifo_mem_ecc_mbit_err" }, { .int_msk = BIT(2), .msg = "tx_vlan_tag_mem_ecc_mbit_err" }, @@ -213,145 +128,248 @@ static const struct hclge_hw_error hclge_ppp_mpf_int3[] = { { /* sentinel */ } }; -struct hclge_tm_sch_ecc_info { - const char *name; -}; - -static const struct hclge_tm_sch_ecc_info hclge_tm_sch_ecc_err[7][15] = { - { - { .name = "QSET_QUEUE_CTRL:PRI_LEN TAB" }, - { .name = "QSET_QUEUE_CTRL:SPA_LEN TAB" }, - { .name = "QSET_QUEUE_CTRL:SPB_LEN TAB" }, - { .name = "QSET_QUEUE_CTRL:WRRA_LEN TAB" }, - { .name = "QSET_QUEUE_CTRL:WRRB_LEN TAB" }, - { .name = "QSET_QUEUE_CTRL:SPA_HPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:SPB_HPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:WRRA_HPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:WRRB_HPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:QS_LINKLIST TAB" }, - { .name = "QSET_QUEUE_CTRL:SPA_TPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:SPB_TPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:WRRA_TPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:WRRB_TPTR TAB" }, - { .name = "QSET_QUEUE_CTRL:QS_DEFICITCNT TAB" }, - }, - { - { .name = "ROCE_QUEUE_CTRL:QS_LEN TAB" }, - { .name = "ROCE_QUEUE_CTRL:QS_TPTR TAB" }, - { .name = "ROCE_QUEUE_CTRL:QS_HPTR TAB" }, - { .name = "ROCE_QUEUE_CTRL:QLINKLIST TAB" }, - { .name = "ROCE_QUEUE_CTRL:QCLEN TAB" }, - }, - { - { .name = "NIC_QUEUE_CTRL:QS_LEN TAB" }, - { .name = "NIC_QUEUE_CTRL:QS_TPTR TAB" }, - { .name = "NIC_QUEUE_CTRL:QS_HPTR TAB" }, - { .name = "NIC_QUEUE_CTRL:QLINKLIST TAB" }, - { .name = "NIC_QUEUE_CTRL:QCLEN TAB" }, - }, - { - { .name = "RAM_CFG_CTRL:CSHAP TAB" }, - { .name = "RAM_CFG_CTRL:PSHAP TAB" }, - }, - { - { .name = "SHAPER_CTRL:PSHAP TAB" }, - }, - { - { .name = "MSCH_CTRL" }, - }, - { - { .name = "TOP_CTRL" }, - }, -}; - -static const struct hclge_hw_error hclge_tm_sch_err_int[] = { - { .int_msk = BIT(0), .msg = "tm_sch_ecc_1bit_err" }, +static const struct hclge_hw_error hclge_tm_sch_rint[] = { { .int_msk = BIT(1), .msg = "tm_sch_ecc_mbit_err" }, - { .int_msk = BIT(2), .msg = "tm_sch_port_shap_sub_fifo_wr_full_err" }, - { .int_msk = BIT(3), .msg = "tm_sch_port_shap_sub_fifo_rd_empty_err" }, - { .int_msk = BIT(4), .msg = "tm_sch_pg_pshap_sub_fifo_wr_full_err" }, - { .int_msk = BIT(5), .msg = "tm_sch_pg_pshap_sub_fifo_rd_empty_err" }, - { .int_msk = BIT(6), .msg = "tm_sch_pg_cshap_sub_fifo_wr_full_err" }, - { .int_msk = BIT(7), .msg = "tm_sch_pg_cshap_sub_fifo_rd_empty_err" }, - { .int_msk = BIT(8), .msg = "tm_sch_pri_pshap_sub_fifo_wr_full_err" }, - { .int_msk = 
BIT(9), .msg = "tm_sch_pri_pshap_sub_fifo_rd_empty_err" }, - { .int_msk = BIT(10), .msg = "tm_sch_pri_cshap_sub_fifo_wr_full_err" }, - { .int_msk = BIT(11), .msg = "tm_sch_pri_cshap_sub_fifo_rd_empty_err" }, + { .int_msk = BIT(2), .msg = "tm_sch_port_shap_sub_fifo_wr_err" }, + { .int_msk = BIT(3), .msg = "tm_sch_port_shap_sub_fifo_rd_err" }, + { .int_msk = BIT(4), .msg = "tm_sch_pg_pshap_sub_fifo_wr_err" }, + { .int_msk = BIT(5), .msg = "tm_sch_pg_pshap_sub_fifo_rd_err" }, + { .int_msk = BIT(6), .msg = "tm_sch_pg_cshap_sub_fifo_wr_err" }, + { .int_msk = BIT(7), .msg = "tm_sch_pg_cshap_sub_fifo_rd_err" }, + { .int_msk = BIT(8), .msg = "tm_sch_pri_pshap_sub_fifo_wr_err" }, + { .int_msk = BIT(9), .msg = "tm_sch_pri_pshap_sub_fifo_rd_err" }, + { .int_msk = BIT(10), .msg = "tm_sch_pri_cshap_sub_fifo_wr_err" }, + { .int_msk = BIT(11), .msg = "tm_sch_pri_cshap_sub_fifo_rd_err" }, { .int_msk = BIT(12), - .msg = "tm_sch_port_shap_offset_fifo_wr_full_err" }, + .msg = "tm_sch_port_shap_offset_fifo_wr_err" }, { .int_msk = BIT(13), - .msg = "tm_sch_port_shap_offset_fifo_rd_empty_err" }, + .msg = "tm_sch_port_shap_offset_fifo_rd_err" }, { .int_msk = BIT(14), - .msg = "tm_sch_pg_pshap_offset_fifo_wr_full_err" }, + .msg = "tm_sch_pg_pshap_offset_fifo_wr_err" }, { .int_msk = BIT(15), - .msg = "tm_sch_pg_pshap_offset_fifo_rd_empty_err" }, + .msg = "tm_sch_pg_pshap_offset_fifo_rd_err" }, { .int_msk = BIT(16), - .msg = "tm_sch_pg_cshap_offset_fifo_wr_full_err" }, + .msg = "tm_sch_pg_cshap_offset_fifo_wr_err" }, { .int_msk = BIT(17), - .msg = "tm_sch_pg_cshap_offset_fifo_rd_empty_err" }, + .msg = "tm_sch_pg_cshap_offset_fifo_rd_err" }, { .int_msk = BIT(18), - .msg = "tm_sch_pri_pshap_offset_fifo_wr_full_err" }, + .msg = "tm_sch_pri_pshap_offset_fifo_wr_err" }, { .int_msk = BIT(19), - .msg = "tm_sch_pri_pshap_offset_fifo_rd_empty_err" }, + .msg = "tm_sch_pri_pshap_offset_fifo_rd_err" }, { .int_msk = BIT(20), - .msg = "tm_sch_pri_cshap_offset_fifo_wr_full_err" }, + .msg = "tm_sch_pri_cshap_offset_fifo_wr_err" }, { .int_msk = BIT(21), - .msg = "tm_sch_pri_cshap_offset_fifo_rd_empty_err" }, - { .int_msk = BIT(22), .msg = "tm_sch_rq_fifo_wr_full_err" }, - { .int_msk = BIT(23), .msg = "tm_sch_rq_fifo_rd_empty_err" }, - { .int_msk = BIT(24), .msg = "tm_sch_nq_fifo_wr_full_err" }, - { .int_msk = BIT(25), .msg = "tm_sch_nq_fifo_rd_empty_err" }, - { .int_msk = BIT(26), .msg = "tm_sch_roce_up_fifo_wr_full_err" }, - { .int_msk = BIT(27), .msg = "tm_sch_roce_up_fifo_rd_empty_err" }, - { .int_msk = BIT(28), .msg = "tm_sch_rcb_byte_fifo_wr_full_err" }, - { .int_msk = BIT(29), .msg = "tm_sch_rcb_byte_fifo_rd_empty_err" }, - { .int_msk = BIT(30), .msg = "tm_sch_ssu_byte_fifo_wr_full_err" }, - { .int_msk = BIT(31), .msg = "tm_sch_ssu_byte_fifo_rd_empty_err" }, + .msg = "tm_sch_pri_cshap_offset_fifo_rd_err" }, + { .int_msk = BIT(22), .msg = "tm_sch_rq_fifo_wr_err" }, + { .int_msk = BIT(23), .msg = "tm_sch_rq_fifo_rd_err" }, + { .int_msk = BIT(24), .msg = "tm_sch_nq_fifo_wr_err" }, + { .int_msk = BIT(25), .msg = "tm_sch_nq_fifo_rd_err" }, + { .int_msk = BIT(26), .msg = "tm_sch_roce_up_fifo_wr_err" }, + { .int_msk = BIT(27), .msg = "tm_sch_roce_up_fifo_rd_err" }, + { .int_msk = BIT(28), .msg = "tm_sch_rcb_byte_fifo_wr_err" }, + { .int_msk = BIT(29), .msg = "tm_sch_rcb_byte_fifo_rd_err" }, + { .int_msk = BIT(30), .msg = "tm_sch_ssu_byte_fifo_wr_err" }, + { .int_msk = BIT(31), .msg = "tm_sch_ssu_byte_fifo_rd_err" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_qcn_fifo_rint[] = { + { .int_msk = BIT(0), .msg = 
"qcn_shap_gp0_sch_fifo_rd_err" }, + { .int_msk = BIT(1), .msg = "qcn_shap_gp0_sch_fifo_wr_err" }, + { .int_msk = BIT(2), .msg = "qcn_shap_gp1_sch_fifo_rd_err" }, + { .int_msk = BIT(3), .msg = "qcn_shap_gp1_sch_fifo_wr_err" }, + { .int_msk = BIT(4), .msg = "qcn_shap_gp2_sch_fifo_rd_err" }, + { .int_msk = BIT(5), .msg = "qcn_shap_gp2_sch_fifo_wr_err" }, + { .int_msk = BIT(6), .msg = "qcn_shap_gp3_sch_fifo_rd_err" }, + { .int_msk = BIT(7), .msg = "qcn_shap_gp3_sch_fifo_wr_err" }, + { .int_msk = BIT(8), .msg = "qcn_shap_gp0_offset_fifo_rd_err" }, + { .int_msk = BIT(9), .msg = "qcn_shap_gp0_offset_fifo_wr_err" }, + { .int_msk = BIT(10), .msg = "qcn_shap_gp1_offset_fifo_rd_err" }, + { .int_msk = BIT(11), .msg = "qcn_shap_gp1_offset_fifo_wr_err" }, + { .int_msk = BIT(12), .msg = "qcn_shap_gp2_offset_fifo_rd_err" }, + { .int_msk = BIT(13), .msg = "qcn_shap_gp2_offset_fifo_wr_err" }, + { .int_msk = BIT(14), .msg = "qcn_shap_gp3_offset_fifo_rd_err" }, + { .int_msk = BIT(15), .msg = "qcn_shap_gp3_offset_fifo_wr_err" }, + { .int_msk = BIT(16), .msg = "qcn_byte_info_fifo_rd_err" }, + { .int_msk = BIT(17), .msg = "qcn_byte_info_fifo_wr_err" }, { /* sentinel */ } }; -static const struct hclge_hw_error hclge_qcn_ecc_err_int[] = { - { .int_msk = BIT(0), .msg = "qcn_byte_mem_ecc_1bit_err" }, +static const struct hclge_hw_error hclge_qcn_ecc_rint[] = { { .int_msk = BIT(1), .msg = "qcn_byte_mem_ecc_mbit_err" }, - { .int_msk = BIT(2), .msg = "qcn_time_mem_ecc_1bit_err" }, { .int_msk = BIT(3), .msg = "qcn_time_mem_ecc_mbit_err" }, - { .int_msk = BIT(4), .msg = "qcn_fb_mem_ecc_1bit_err" }, { .int_msk = BIT(5), .msg = "qcn_fb_mem_ecc_mbit_err" }, - { .int_msk = BIT(6), .msg = "qcn_link_mem_ecc_1bit_err" }, { .int_msk = BIT(7), .msg = "qcn_link_mem_ecc_mbit_err" }, - { .int_msk = BIT(8), .msg = "qcn_rate_mem_ecc_1bit_err" }, { .int_msk = BIT(9), .msg = "qcn_rate_mem_ecc_mbit_err" }, - { .int_msk = BIT(10), .msg = "qcn_tmplt_mem_ecc_1bit_err" }, { .int_msk = BIT(11), .msg = "qcn_tmplt_mem_ecc_mbit_err" }, - { .int_msk = BIT(12), .msg = "qcn_shap_cfg_mem_ecc_1bit_err" }, { .int_msk = BIT(13), .msg = "qcn_shap_cfg_mem_ecc_mbit_err" }, - { .int_msk = BIT(14), .msg = "qcn_gp0_barrel_mem_ecc_1bit_err" }, { .int_msk = BIT(15), .msg = "qcn_gp0_barrel_mem_ecc_mbit_err" }, - { .int_msk = BIT(16), .msg = "qcn_gp1_barrel_mem_ecc_1bit_err" }, { .int_msk = BIT(17), .msg = "qcn_gp1_barrel_mem_ecc_mbit_err" }, - { .int_msk = BIT(18), .msg = "qcn_gp2_barrel_mem_ecc_1bit_err" }, { .int_msk = BIT(19), .msg = "qcn_gp2_barrel_mem_ecc_mbit_err" }, - { .int_msk = BIT(20), .msg = "qcn_gp3_barral_mem_ecc_1bit_err" }, { .int_msk = BIT(21), .msg = "qcn_gp3_barral_mem_ecc_mbit_err" }, { /* sentinel */ } }; -static void hclge_log_error(struct device *dev, - const struct hclge_hw_error *err_list, +static const struct hclge_hw_error hclge_mac_afifo_tnl_int[] = { + { .int_msk = BIT(0), .msg = "egu_cge_afifo_ecc_1bit_err" }, + { .int_msk = BIT(1), .msg = "egu_cge_afifo_ecc_mbit_err" }, + { .int_msk = BIT(2), .msg = "egu_lge_afifo_ecc_1bit_err" }, + { .int_msk = BIT(3), .msg = "egu_lge_afifo_ecc_mbit_err" }, + { .int_msk = BIT(4), .msg = "cge_igu_afifo_ecc_1bit_err" }, + { .int_msk = BIT(5), .msg = "cge_igu_afifo_ecc_mbit_err" }, + { .int_msk = BIT(6), .msg = "lge_igu_afifo_ecc_1bit_err" }, + { .int_msk = BIT(7), .msg = "lge_igu_afifo_ecc_mbit_err" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ppu_mpf_abnormal_int_st2[] = { + { .int_msk = BIT(13), .msg = "rpu_rx_pkt_bit32_ecc_mbit_err" }, + { .int_msk = BIT(14), .msg = 
"rpu_rx_pkt_bit33_ecc_mbit_err" }, + { .int_msk = BIT(15), .msg = "rpu_rx_pkt_bit34_ecc_mbit_err" }, + { .int_msk = BIT(16), .msg = "rpu_rx_pkt_bit35_ecc_mbit_err" }, + { .int_msk = BIT(17), .msg = "rcb_tx_ring_ecc_mbit_err" }, + { .int_msk = BIT(18), .msg = "rcb_rx_ring_ecc_mbit_err" }, + { .int_msk = BIT(19), .msg = "rcb_tx_fbd_ecc_mbit_err" }, + { .int_msk = BIT(20), .msg = "rcb_rx_ebd_ecc_mbit_err" }, + { .int_msk = BIT(21), .msg = "rcb_tso_info_ecc_mbit_err" }, + { .int_msk = BIT(22), .msg = "rcb_tx_int_info_ecc_mbit_err" }, + { .int_msk = BIT(23), .msg = "rcb_rx_int_info_ecc_mbit_err" }, + { .int_msk = BIT(24), .msg = "tpu_tx_pkt_0_ecc_mbit_err" }, + { .int_msk = BIT(25), .msg = "tpu_tx_pkt_1_ecc_mbit_err" }, + { .int_msk = BIT(26), .msg = "rd_bus_err" }, + { .int_msk = BIT(27), .msg = "wr_bus_err" }, + { .int_msk = BIT(28), .msg = "reg_search_miss" }, + { .int_msk = BIT(29), .msg = "rx_q_search_miss" }, + { .int_msk = BIT(30), .msg = "ooo_ecc_err_detect" }, + { .int_msk = BIT(31), .msg = "ooo_ecc_err_multpl" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ppu_mpf_abnormal_int_st3[] = { + { .int_msk = BIT(4), .msg = "gro_bd_ecc_mbit_err" }, + { .int_msk = BIT(5), .msg = "gro_context_ecc_mbit_err" }, + { .int_msk = BIT(6), .msg = "rx_stash_cfg_ecc_mbit_err" }, + { .int_msk = BIT(7), .msg = "axi_rd_fbd_ecc_mbit_err" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ppu_pf_abnormal_int[] = { + { .int_msk = BIT(0), .msg = "over_8bd_no_fe" }, + { .int_msk = BIT(1), .msg = "tso_mss_cmp_min_err" }, + { .int_msk = BIT(2), .msg = "tso_mss_cmp_max_err" }, + { .int_msk = BIT(3), .msg = "tx_rd_fbd_poison" }, + { .int_msk = BIT(4), .msg = "rx_rd_ebd_poison" }, + { .int_msk = BIT(5), .msg = "buf_wait_timeout" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ssu_com_err_int[] = { + { .int_msk = BIT(0), .msg = "buf_sum_err" }, + { .int_msk = BIT(1), .msg = "ppp_mb_num_err" }, + { .int_msk = BIT(2), .msg = "ppp_mbid_err" }, + { .int_msk = BIT(3), .msg = "ppp_rlt_mac_err" }, + { .int_msk = BIT(4), .msg = "ppp_rlt_host_err" }, + { .int_msk = BIT(5), .msg = "cks_edit_position_err" }, + { .int_msk = BIT(6), .msg = "cks_edit_condition_err" }, + { .int_msk = BIT(7), .msg = "vlan_edit_condition_err" }, + { .int_msk = BIT(8), .msg = "vlan_num_ot_err" }, + { .int_msk = BIT(9), .msg = "vlan_num_in_err" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ssu_port_based_err_int[] = { + { .int_msk = BIT(0), .msg = "roc_pkt_without_key_port" }, + { .int_msk = BIT(1), .msg = "tpu_pkt_without_key_port" }, + { .int_msk = BIT(2), .msg = "igu_pkt_without_key_port" }, + { .int_msk = BIT(3), .msg = "roc_eof_mis_match_port" }, + { .int_msk = BIT(4), .msg = "tpu_eof_mis_match_port" }, + { .int_msk = BIT(5), .msg = "igu_eof_mis_match_port" }, + { .int_msk = BIT(6), .msg = "roc_sof_mis_match_port" }, + { .int_msk = BIT(7), .msg = "tpu_sof_mis_match_port" }, + { .int_msk = BIT(8), .msg = "igu_sof_mis_match_port" }, + { .int_msk = BIT(11), .msg = "ets_rd_int_rx_port" }, + { .int_msk = BIT(12), .msg = "ets_wr_int_rx_port" }, + { .int_msk = BIT(13), .msg = "ets_rd_int_tx_port" }, + { .int_msk = BIT(14), .msg = "ets_wr_int_tx_port" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ssu_fifo_overflow_int[] = { + { .int_msk = BIT(0), .msg = "ig_mac_inf_int" }, + { .int_msk = BIT(1), .msg = "ig_host_inf_int" }, + { .int_msk = BIT(2), .msg = "ig_roc_buf_int" }, + { .int_msk = BIT(3), .msg = "ig_host_data_fifo_int" }, + { 
.int_msk = BIT(4), .msg = "ig_host_key_fifo_int" }, + { .int_msk = BIT(5), .msg = "tx_qcn_fifo_int" }, + { .int_msk = BIT(6), .msg = "rx_qcn_fifo_int" }, + { .int_msk = BIT(7), .msg = "tx_pf_rd_fifo_int" }, + { .int_msk = BIT(8), .msg = "rx_pf_rd_fifo_int" }, + { .int_msk = BIT(9), .msg = "qm_eof_fifo_int" }, + { .int_msk = BIT(10), .msg = "mb_rlt_fifo_int" }, + { .int_msk = BIT(11), .msg = "dup_uncopy_fifo_int" }, + { .int_msk = BIT(12), .msg = "dup_cnt_rd_fifo_int" }, + { .int_msk = BIT(13), .msg = "dup_cnt_drop_fifo_int" }, + { .int_msk = BIT(14), .msg = "dup_cnt_wrb_fifo_int" }, + { .int_msk = BIT(15), .msg = "host_cmd_fifo_int" }, + { .int_msk = BIT(16), .msg = "mac_cmd_fifo_int" }, + { .int_msk = BIT(17), .msg = "host_cmd_bitmap_empty_int" }, + { .int_msk = BIT(18), .msg = "mac_cmd_bitmap_empty_int" }, + { .int_msk = BIT(19), .msg = "dup_bitmap_empty_int" }, + { .int_msk = BIT(20), .msg = "out_queue_bitmap_empty_int" }, + { .int_msk = BIT(21), .msg = "bank2_bitmap_empty_int" }, + { .int_msk = BIT(22), .msg = "bank1_bitmap_empty_int" }, + { .int_msk = BIT(23), .msg = "bank0_bitmap_empty_int" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ssu_ets_tcg_int[] = { + { .int_msk = BIT(0), .msg = "ets_rd_int_rx_tcg" }, + { .int_msk = BIT(1), .msg = "ets_wr_int_rx_tcg" }, + { .int_msk = BIT(2), .msg = "ets_rd_int_tx_tcg" }, + { .int_msk = BIT(3), .msg = "ets_wr_int_tx_tcg" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_ssu_port_based_pf_int[] = { + { .int_msk = BIT(0), .msg = "roc_pkt_without_key_port" }, + { .int_msk = BIT(9), .msg = "low_water_line_err_port" }, + { .int_msk = BIT(10), .msg = "hi_water_line_err_port" }, + { /* sentinel */ } +}; + +static const struct hclge_hw_error hclge_rocee_qmm_ovf_err_int[] = { + { .int_msk = 0, .msg = "rocee qmm ovf: sgid invalid err" }, + { .int_msk = 0x4, .msg = "rocee qmm ovf: sgid ovf err" }, + { .int_msk = 0x8, .msg = "rocee qmm ovf: smac invalid err" }, + { .int_msk = 0xC, .msg = "rocee qmm ovf: smac ovf err" }, + { .int_msk = 0x10, .msg = "rocee qmm ovf: cqc invalid err" }, + { .int_msk = 0x11, .msg = "rocee qmm ovf: cqc ovf err" }, + { .int_msk = 0x12, .msg = "rocee qmm ovf: cqc hopnum err" }, + { .int_msk = 0x13, .msg = "rocee qmm ovf: cqc ba0 err" }, + { .int_msk = 0x14, .msg = "rocee qmm ovf: srqc invalid err" }, + { .int_msk = 0x15, .msg = "rocee qmm ovf: srqc ovf err" }, + { .int_msk = 0x16, .msg = "rocee qmm ovf: srqc hopnum err" }, + { .int_msk = 0x17, .msg = "rocee qmm ovf: srqc ba0 err" }, + { .int_msk = 0x18, .msg = "rocee qmm ovf: mpt invalid err" }, + { .int_msk = 0x19, .msg = "rocee qmm ovf: mpt ovf err" }, + { .int_msk = 0x1A, .msg = "rocee qmm ovf: mpt hopnum err" }, + { .int_msk = 0x1B, .msg = "rocee qmm ovf: mpt ba0 err" }, + { .int_msk = 0x1C, .msg = "rocee qmm ovf: qpc invalid err" }, + { .int_msk = 0x1D, .msg = "rocee qmm ovf: qpc ovf err" }, + { .int_msk = 0x1E, .msg = "rocee qmm ovf: qpc hopnum err" }, + { .int_msk = 0x1F, .msg = "rocee qmm ovf: qpc ba0 err" }, + { /* sentinel */ } +}; + +static void hclge_log_error(struct device *dev, char *reg, + const struct hclge_hw_error *err, u32 err_sts) { - const struct hclge_hw_error *err; - int i = 0; - - while (err_list[i].msg) { - err = &err_list[i]; - if (!(err->int_msk & err_sts)) { - i++; - continue; - } - dev_warn(dev, "%s [error status=0x%x] found\n", - err->msg, err_sts); - i++; + while (err->msg) { + if (err->int_msk & err_sts) + dev_warn(dev, "%s %s found [error status=0x%x]\n", + reg, err->msg, err_sts); + err++; } } @@ 
-391,96 +409,44 @@ static int hclge_cmd_query_error(struct hclge_dev *hdev, return ret; } -/* hclge_cmd_clear_error: clear the error status - * @hdev: pointer to struct hclge_dev - * @desc: descriptor for describing the command - * @desc_src: prefilled descriptor from the previous command for reusing - * @cmd: command opcode - * @flag: flag for extended command structure - * - * This function clear the error status in the hw register/s using command - */ -static int hclge_cmd_clear_error(struct hclge_dev *hdev, - struct hclge_desc *desc, - struct hclge_desc *desc_src, - u32 cmd, u16 flag) -{ - struct device *dev = &hdev->pdev->dev; - int num = 1; - int ret, i; - - if (cmd) { - hclge_cmd_setup_basic_desc(&desc[0], cmd, false); - if (flag) { - desc[0].flag |= cpu_to_le16(flag); - hclge_cmd_setup_basic_desc(&desc[1], cmd, false); - num = 2; - } - if (desc_src) { - for (i = 0; i < 6; i++) { - desc[0].data[i] = desc_src[0].data[i]; - if (flag) - desc[1].data[i] = desc_src[1].data[i]; - } - } - } else { - hclge_cmd_reuse_desc(&desc[0], false); - if (flag) { - desc[0].flag |= cpu_to_le16(flag); - hclge_cmd_reuse_desc(&desc[1], false); - num = 2; - } - } - ret = hclge_cmd_send(&hdev->hw, &desc[0], num); - if (ret) - dev_err(dev, "clear error cmd failed (%d)\n", ret); - - return ret; -} - -static int hclge_enable_common_error(struct hclge_dev *hdev, bool en) +static int hclge_config_common_hw_err_int(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; struct hclge_desc desc[2]; int ret; + /* configure common error interrupts */ hclge_cmd_setup_basic_desc(&desc[0], HCLGE_COMMON_ECC_INT_CFG, false); desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); hclge_cmd_setup_basic_desc(&desc[1], HCLGE_COMMON_ECC_INT_CFG, false); if (en) { - /* enable COMMON error interrupts */ desc[0].data[0] = cpu_to_le32(HCLGE_IMP_TCM_ECC_ERR_INT_EN); desc[0].data[2] = cpu_to_le32(HCLGE_CMDQ_NIC_ECC_ERR_INT_EN | HCLGE_CMDQ_ROCEE_ECC_ERR_INT_EN); desc[0].data[3] = cpu_to_le32(HCLGE_IMP_RD_POISON_ERR_INT_EN); - desc[0].data[4] = cpu_to_le32(HCLGE_TQP_ECC_ERR_INT_EN); + desc[0].data[4] = cpu_to_le32(HCLGE_TQP_ECC_ERR_INT_EN | + HCLGE_MSIX_SRAM_ECC_ERR_INT_EN); desc[0].data[5] = cpu_to_le32(HCLGE_IMP_ITCM4_ECC_ERR_INT_EN); - } else { - /* disable COMMON error interrupts */ - desc[0].data[0] = 0; - desc[0].data[2] = 0; - desc[0].data[3] = 0; - desc[0].data[4] = 0; - desc[0].data[5] = 0; } + desc[1].data[0] = cpu_to_le32(HCLGE_IMP_TCM_ECC_ERR_INT_EN_MASK); desc[1].data[2] = cpu_to_le32(HCLGE_CMDQ_NIC_ECC_ERR_INT_EN_MASK | HCLGE_CMDQ_ROCEE_ECC_ERR_INT_EN_MASK); desc[1].data[3] = cpu_to_le32(HCLGE_IMP_RD_POISON_ERR_INT_EN_MASK); - desc[1].data[4] = cpu_to_le32(HCLGE_TQP_ECC_ERR_INT_EN_MASK); + desc[1].data[4] = cpu_to_le32(HCLGE_TQP_ECC_ERR_INT_EN_MASK | + HCLGE_MSIX_SRAM_ECC_ERR_INT_EN_MASK); desc[1].data[5] = cpu_to_le32(HCLGE_IMP_ITCM4_ECC_ERR_INT_EN_MASK); ret = hclge_cmd_send(&hdev->hw, &desc[0], 2); if (ret) dev_err(dev, - "failed(%d) to enable/disable COMMON err interrupts\n", - ret); + "fail(%d) to configure common err interrupts\n", ret); return ret; } -static int hclge_enable_ncsi_error(struct hclge_dev *hdev, bool en) +static int hclge_config_ncsi_hw_err_int(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; struct hclge_desc desc; @@ -489,74 +455,65 @@ static int hclge_enable_ncsi_error(struct hclge_dev *hdev, bool en) if (hdev->pdev->revision < 0x21) return 0; - /* enable/disable NCSI error interrupts */ + /* configure NCSI error interrupts */ hclge_cmd_setup_basic_desc(&desc, 
HCLGE_NCSI_INT_EN, false); if (en) desc.data[0] = cpu_to_le32(HCLGE_NCSI_ERR_INT_EN); - else - desc.data[0] = 0; ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) dev_err(dev, - "failed(%d) to enable/disable NCSI error interrupts\n", - ret); + "fail(%d) to configure NCSI error interrupts\n", ret); return ret; } -static int hclge_enable_igu_egu_error(struct hclge_dev *hdev, bool en) +static int hclge_config_igu_egu_hw_err_int(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; struct hclge_desc desc; int ret; - /* enable/disable error interrupts */ + /* configure IGU,EGU error interrupts */ hclge_cmd_setup_basic_desc(&desc, HCLGE_IGU_COMMON_INT_EN, false); if (en) desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN); - else - desc.data[0] = 0; + desc.data[1] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN_MASK); ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) { dev_err(dev, - "failed(%d) to enable/disable IGU common interrupts\n", - ret); + "fail(%d) to configure IGU common interrupts\n", ret); return ret; } hclge_cmd_setup_basic_desc(&desc, HCLGE_IGU_EGU_TNL_INT_EN, false); if (en) desc.data[0] = cpu_to_le32(HCLGE_IGU_TNL_ERR_INT_EN); - else - desc.data[0] = 0; + desc.data[1] = cpu_to_le32(HCLGE_IGU_TNL_ERR_INT_EN_MASK); ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) { dev_err(dev, - "failed(%d) to enable/disable IGU-EGU TNL interrupts\n", - ret); + "fail(%d) to configure IGU-EGU TNL interrupts\n", ret); return ret; } - ret = hclge_enable_ncsi_error(hdev, en); - if (ret) - dev_err(dev, "fail(%d) to en/disable err int\n", ret); + ret = hclge_config_ncsi_hw_err_int(hdev, en); return ret; } -static int hclge_enable_ppp_error_interrupt(struct hclge_dev *hdev, u32 cmd, +static int hclge_config_ppp_error_interrupt(struct hclge_dev *hdev, u32 cmd, bool en) { struct device *dev = &hdev->pdev->dev; struct hclge_desc desc[2]; int ret; - /* enable/disable PPP error interrupts */ + /* configure PPP error interrupts */ hclge_cmd_setup_basic_desc(&desc[0], cmd, false); desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); hclge_cmd_setup_basic_desc(&desc[1], cmd, false); @@ -567,24 +524,24 @@ static int hclge_enable_ppp_error_interrupt(struct hclge_dev *hdev, u32 cmd, cpu_to_le32(HCLGE_PPP_MPF_ECC_ERR_INT0_EN); desc[0].data[1] = cpu_to_le32(HCLGE_PPP_MPF_ECC_ERR_INT1_EN); - } else { - desc[0].data[0] = 0; - desc[0].data[1] = 0; + desc[0].data[4] = cpu_to_le32(HCLGE_PPP_PF_ERR_INT_EN); } + desc[1].data[0] = cpu_to_le32(HCLGE_PPP_MPF_ECC_ERR_INT0_EN_MASK); desc[1].data[1] = cpu_to_le32(HCLGE_PPP_MPF_ECC_ERR_INT1_EN_MASK); + if (hdev->pdev->revision >= 0x21) + desc[1].data[2] = + cpu_to_le32(HCLGE_PPP_PF_ERR_INT_EN_MASK); } else if (cmd == HCLGE_PPP_CMD1_INT_CMD) { if (en) { desc[0].data[0] = cpu_to_le32(HCLGE_PPP_MPF_ECC_ERR_INT2_EN); desc[0].data[1] = cpu_to_le32(HCLGE_PPP_MPF_ECC_ERR_INT3_EN); - } else { - desc[0].data[0] = 0; - desc[0].data[1] = 0; } + desc[1].data[0] = cpu_to_le32(HCLGE_PPP_MPF_ECC_ERR_INT2_EN_MASK); desc[1].data[1] = @@ -593,498 +550,863 @@ static int hclge_enable_ppp_error_interrupt(struct hclge_dev *hdev, u32 cmd, ret = hclge_cmd_send(&hdev->hw, &desc[0], 2); if (ret) - dev_err(dev, - "failed(%d) to enable/disable PPP error interrupts\n", - ret); + dev_err(dev, "fail(%d) to configure PPP error intr\n", ret); return ret; } -static int hclge_enable_ppp_error(struct hclge_dev *hdev, bool en) +static int hclge_config_ppp_hw_err_int(struct hclge_dev *hdev, bool en) { - struct device *dev = &hdev->pdev->dev; int ret; - ret = hclge_enable_ppp_error_interrupt(hdev, 
HCLGE_PPP_CMD0_INT_CMD, + ret = hclge_config_ppp_error_interrupt(hdev, HCLGE_PPP_CMD0_INT_CMD, en); - if (ret) { - dev_err(dev, - "failed(%d) to enable/disable PPP error intr 0,1\n", - ret); + if (ret) return ret; - } - ret = hclge_enable_ppp_error_interrupt(hdev, HCLGE_PPP_CMD1_INT_CMD, + ret = hclge_config_ppp_error_interrupt(hdev, HCLGE_PPP_CMD1_INT_CMD, en); - if (ret) - dev_err(dev, - "failed(%d) to enable/disable PPP error intr 2,3\n", - ret); return ret; } -int hclge_enable_tm_hw_error(struct hclge_dev *hdev, bool en) +static int hclge_config_tm_hw_err_int(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; struct hclge_desc desc; int ret; - /* enable TM SCH hw errors */ + /* configure TM SCH hw errors */ hclge_cmd_setup_basic_desc(&desc, HCLGE_TM_SCH_ECC_INT_EN, false); if (en) desc.data[0] = cpu_to_le32(HCLGE_TM_SCH_ECC_ERR_INT_EN); - else - desc.data[0] = 0; ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) { - dev_err(dev, "failed(%d) to configure TM SCH errors\n", ret); + dev_err(dev, "fail(%d) to configure TM SCH errors\n", ret); return ret; } - /* enable TM QCN hw errors */ + /* configure TM QCN hw errors */ ret = hclge_cmd_query_error(hdev, &desc, HCLGE_TM_QCN_MEM_INT_CFG, 0, 0, 0); if (ret) { - dev_err(dev, "failed(%d) to read TM QCN CFG status\n", ret); + dev_err(dev, "fail(%d) to read TM QCN CFG status\n", ret); return ret; } hclge_cmd_reuse_desc(&desc, false); if (en) desc.data[1] = cpu_to_le32(HCLGE_TM_QCN_MEM_ERR_INT_EN); - else - desc.data[1] = 0; ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) dev_err(dev, - "failed(%d) to configure TM QCN mem errors\n", ret); + "fail(%d) to configure TM QCN mem errors\n", ret); return ret; } -static void hclge_process_common_error(struct hclge_dev *hdev, - enum hclge_err_int_type type) +static int hclge_config_mac_err_int(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; - struct hclge_desc desc[2]; - u32 err_sts; + struct hclge_desc desc; int ret; - /* read err sts */ - ret = hclge_cmd_query_error(hdev, &desc[0], - HCLGE_COMMON_ECC_INT_CFG, - HCLGE_CMD_FLAG_NEXT, 0, 0); - if (ret) { + /* configure MAC common error interrupts */ + hclge_cmd_setup_basic_desc(&desc, HCLGE_MAC_COMMON_INT_EN, false); + if (en) + desc.data[0] = cpu_to_le32(HCLGE_MAC_COMMON_ERR_INT_EN); + + desc.data[1] = cpu_to_le32(HCLGE_MAC_COMMON_ERR_INT_EN_MASK); + + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) dev_err(dev, - "failed(=%d) to query COMMON error interrupt status\n", - ret); - return; - } + "fail(%d) to configure MAC COMMON error intr\n", ret); - /* log err */ - err_sts = (le32_to_cpu(desc[0].data[0])) & HCLGE_IMP_TCM_ECC_INT_MASK; - hclge_log_error(dev, &hclge_imp_tcm_ecc_int[0], err_sts); + return ret; +} - err_sts = (le32_to_cpu(desc[0].data[1])) & HCLGE_CMDQ_ECC_INT_MASK; - hclge_log_error(dev, &hclge_cmdq_nic_mem_ecc_int[0], err_sts); +static int hclge_config_ppu_error_interrupts(struct hclge_dev *hdev, u32 cmd, + bool en) +{ + struct device *dev = &hdev->pdev->dev; + struct hclge_desc desc[2]; + int num = 1; + int ret; - err_sts = (le32_to_cpu(desc[0].data[1]) >> HCLGE_CMDQ_ROC_ECC_INT_SHIFT) - & HCLGE_CMDQ_ECC_INT_MASK; - hclge_log_error(dev, &hclge_cmdq_rocee_mem_ecc_int[0], err_sts); + /* configure PPU error interrupts */ + if (cmd == HCLGE_PPU_MPF_ECC_INT_CMD) { + hclge_cmd_setup_basic_desc(&desc[0], cmd, false); + desc[0].flag |= HCLGE_CMD_FLAG_NEXT; + hclge_cmd_setup_basic_desc(&desc[1], cmd, false); + if (en) { + desc[0].data[0] = HCLGE_PPU_MPF_ABNORMAL_INT0_EN; + desc[0].data[1] 
= HCLGE_PPU_MPF_ABNORMAL_INT1_EN; + desc[1].data[3] = HCLGE_PPU_MPF_ABNORMAL_INT3_EN; + desc[1].data[4] = HCLGE_PPU_MPF_ABNORMAL_INT2_EN; + } - if ((le32_to_cpu(desc[0].data[3])) & BIT(0)) - dev_warn(dev, "imp_rd_data_poison_err found\n"); + desc[1].data[0] = HCLGE_PPU_MPF_ABNORMAL_INT0_EN_MASK; + desc[1].data[1] = HCLGE_PPU_MPF_ABNORMAL_INT1_EN_MASK; + desc[1].data[2] = HCLGE_PPU_MPF_ABNORMAL_INT2_EN_MASK; + desc[1].data[3] |= HCLGE_PPU_MPF_ABNORMAL_INT3_EN_MASK; + num = 2; + } else if (cmd == HCLGE_PPU_MPF_OTHER_INT_CMD) { + hclge_cmd_setup_basic_desc(&desc[0], cmd, false); + if (en) + desc[0].data[0] = HCLGE_PPU_MPF_ABNORMAL_INT2_EN2; - err_sts = (le32_to_cpu(desc[0].data[3]) >> HCLGE_TQP_ECC_INT_SHIFT) & - HCLGE_TQP_ECC_INT_MASK; - hclge_log_error(dev, &hclge_tqp_int_ecc_int[0], err_sts); + desc[0].data[2] = HCLGE_PPU_MPF_ABNORMAL_INT2_EN2_MASK; + } else if (cmd == HCLGE_PPU_PF_OTHER_INT_CMD) { + hclge_cmd_setup_basic_desc(&desc[0], cmd, false); + if (en) + desc[0].data[0] = HCLGE_PPU_PF_ABNORMAL_INT_EN; - err_sts = (le32_to_cpu(desc[0].data[5])) & - HCLGE_IMP_ITCM4_ECC_INT_MASK; - hclge_log_error(dev, &hclge_imp_itcm4_ecc_int[0], err_sts); + desc[0].data[2] = HCLGE_PPU_PF_ABNORMAL_INT_EN_MASK; + } else { + dev_err(dev, "Invalid cmd to configure PPU error interrupts\n"); + return -EINVAL; + } - /* clear error interrupts */ - desc[1].data[0] = cpu_to_le32(HCLGE_IMP_TCM_ECC_CLR_MASK); - desc[1].data[1] = cpu_to_le32(HCLGE_CMDQ_NIC_ECC_CLR_MASK | - HCLGE_CMDQ_ROCEE_ECC_CLR_MASK); - desc[1].data[3] = cpu_to_le32(HCLGE_TQP_IMP_ERR_CLR_MASK); - desc[1].data[5] = cpu_to_le32(HCLGE_IMP_ITCM4_ECC_CLR_MASK); + ret = hclge_cmd_send(&hdev->hw, &desc[0], num); - ret = hclge_cmd_clear_error(hdev, &desc[0], NULL, 0, - HCLGE_CMD_FLAG_NEXT); - if (ret) - dev_err(dev, - "failed(%d) to clear COMMON error interrupt status\n", - ret); + return ret; } -static void hclge_process_ncsi_error(struct hclge_dev *hdev, - enum hclge_err_int_type type) +static int hclge_config_ppu_hw_err_int(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; - struct hclge_desc desc_rd; - struct hclge_desc desc_wr; - u32 err_sts; int ret; - if (hdev->pdev->revision < 0x21) - return; - - /* read NCSI error status */ - ret = hclge_cmd_query_error(hdev, &desc_rd, HCLGE_NCSI_INT_QUERY, - 0, 1, HCLGE_NCSI_ERR_INT_TYPE); + ret = hclge_config_ppu_error_interrupts(hdev, HCLGE_PPU_MPF_ECC_INT_CMD, + en); if (ret) { - dev_err(dev, - "failed(=%d) to query NCSI error interrupt status\n", + dev_err(dev, "fail(%d) to configure PPU MPF ECC error intr\n", ret); - return; + return ret; } - /* log err */ - err_sts = le32_to_cpu(desc_rd.data[0]); - hclge_log_error(dev, &hclge_ncsi_err_int[0], err_sts); + ret = hclge_config_ppu_error_interrupts(hdev, + HCLGE_PPU_MPF_OTHER_INT_CMD, + en); + if (ret) { + dev_err(dev, "fail(%d) to configure PPU MPF other intr\n", ret); + return ret; + } - /* clear err int */ - ret = hclge_cmd_clear_error(hdev, &desc_wr, &desc_rd, - HCLGE_NCSI_INT_CLR, 0); + ret = hclge_config_ppu_error_interrupts(hdev, + HCLGE_PPU_PF_OTHER_INT_CMD, en); if (ret) - dev_err(dev, "failed(=%d) to clear NCSI interrupt status\n", + dev_err(dev, "fail(%d) to configure PPU PF error interrupts\n", ret); + return ret; } -static void hclge_process_igu_egu_error(struct hclge_dev *hdev, - enum hclge_err_int_type int_type) +static int hclge_config_ssu_hw_err_int(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; - struct hclge_desc desc_rd; - struct hclge_desc desc_wr; - u32 err_sts; + struct hclge_desc 
desc[2]; int ret; - /* read IGU common err sts */ - ret = hclge_cmd_query_error(hdev, &desc_rd, - HCLGE_IGU_COMMON_INT_QUERY, - 0, 1, int_type); - if (ret) { - dev_err(dev, "failed(=%d) to query IGU common int status\n", - ret); - return; + /* configure SSU ecc error interrupts */ + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_SSU_ECC_INT_CMD, false); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[1], HCLGE_SSU_ECC_INT_CMD, false); + if (en) { + desc[0].data[0] = cpu_to_le32(HCLGE_SSU_1BIT_ECC_ERR_INT_EN); + desc[0].data[1] = + cpu_to_le32(HCLGE_SSU_MULTI_BIT_ECC_ERR_INT_EN); + desc[0].data[4] = cpu_to_le32(HCLGE_SSU_BIT32_ECC_ERR_INT_EN); } - /* log err */ - err_sts = le32_to_cpu(desc_rd.data[0]) & - HCLGE_IGU_COM_INT_MASK; - hclge_log_error(dev, &hclge_igu_com_err_int[0], err_sts); + desc[1].data[0] = cpu_to_le32(HCLGE_SSU_1BIT_ECC_ERR_INT_EN_MASK); + desc[1].data[1] = cpu_to_le32(HCLGE_SSU_MULTI_BIT_ECC_ERR_INT_EN_MASK); + desc[1].data[2] = cpu_to_le32(HCLGE_SSU_BIT32_ECC_ERR_INT_EN_MASK); - /* clear err int */ - ret = hclge_cmd_clear_error(hdev, &desc_wr, &desc_rd, - HCLGE_IGU_COMMON_INT_CLR, 0); + ret = hclge_cmd_send(&hdev->hw, &desc[0], 2); if (ret) { - dev_err(dev, "failed(=%d) to clear IGU common int status\n", - ret); - return; + dev_err(dev, + "fail(%d) to configure SSU ECC error interrupt\n", ret); + return ret; } - /* read IGU-EGU TNL err sts */ - ret = hclge_cmd_query_error(hdev, &desc_rd, - HCLGE_IGU_EGU_TNL_INT_QUERY, - 0, 1, int_type); - if (ret) { - dev_err(dev, "failed(=%d) to query IGU-EGU TNL int status\n", - ret); - return; + /* configure SSU common error interrupts */ + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_SSU_COMMON_INT_CMD, false); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + hclge_cmd_setup_basic_desc(&desc[1], HCLGE_SSU_COMMON_INT_CMD, false); + + if (en) { + if (hdev->pdev->revision >= 0x21) + desc[0].data[0] = + cpu_to_le32(HCLGE_SSU_COMMON_INT_EN); + else + desc[0].data[0] = + cpu_to_le32(HCLGE_SSU_COMMON_INT_EN & ~BIT(5)); + desc[0].data[1] = cpu_to_le32(HCLGE_SSU_PORT_BASED_ERR_INT_EN); + desc[0].data[2] = + cpu_to_le32(HCLGE_SSU_FIFO_OVERFLOW_ERR_INT_EN); } - /* log err */ - err_sts = le32_to_cpu(desc_rd.data[0]) & - HCLGE_IGU_EGU_TNL_INT_MASK; - hclge_log_error(dev, &hclge_igu_egu_tnl_err_int[0], err_sts); + desc[1].data[0] = cpu_to_le32(HCLGE_SSU_COMMON_INT_EN_MASK | + HCLGE_SSU_PORT_BASED_ERR_INT_EN_MASK); + desc[1].data[1] = cpu_to_le32(HCLGE_SSU_FIFO_OVERFLOW_ERR_INT_EN_MASK); - /* clear err int */ - ret = hclge_cmd_clear_error(hdev, &desc_wr, &desc_rd, - HCLGE_IGU_EGU_TNL_INT_CLR, 0); - if (ret) { - dev_err(dev, "failed(=%d) to clear IGU-EGU TNL int status\n", - ret); - return; - } + ret = hclge_cmd_send(&hdev->hw, &desc[0], 2); + if (ret) + dev_err(dev, + "fail(%d) to configure SSU COMMON error intr\n", ret); - hclge_process_ncsi_error(hdev, HCLGE_ERR_INT_RAS_NFE); + return ret; } -static int hclge_log_and_clear_ppp_error(struct hclge_dev *hdev, u32 cmd, - enum hclge_err_int_type int_type) +#define HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type) \ + do { \ + if (ae_dev->ops->set_default_reset_request) \ + ae_dev->ops->set_default_reset_request(ae_dev, \ + reset_type); \ + } while (0) + +/* hclge_handle_mpf_ras_error: handle all main PF RAS errors + * @hdev: pointer to struct hclge_dev + * @desc: descriptor for describing the command + * @num: number of extended command structures + * + * This function handles all the main PF RAS errors in the + * hw register/s using command. 
+ */ +static int hclge_handle_mpf_ras_error(struct hclge_dev *hdev, + struct hclge_desc *desc, + int num) { - enum hnae3_reset_type reset_level = HNAE3_NONE_RESET; + struct hnae3_ae_dev *ae_dev = hdev->ae_dev; struct device *dev = &hdev->pdev->dev; - const struct hclge_hw_error *hw_err_lst1, *hw_err_lst2, *hw_err_lst3; - struct hclge_desc desc[2]; - u32 err_sts; + __le32 *desc_data; + u32 status; int ret; - /* read PPP INT sts */ - ret = hclge_cmd_query_error(hdev, &desc[0], cmd, - HCLGE_CMD_FLAG_NEXT, 5, int_type); + /* query all main PF RAS errors */ + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_MPF_RAS_INT, + true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], num); if (ret) { - dev_err(dev, "failed(=%d) to query PPP interrupt status\n", - ret); - return -EIO; + dev_err(dev, "query all mpf ras int cmd failed (%d)\n", ret); + return ret; } - /* log error */ - if (cmd == HCLGE_PPP_CMD0_INT_CMD) { - hw_err_lst1 = &hclge_ppp_mpf_int0[0]; - hw_err_lst2 = &hclge_ppp_mpf_int1[0]; - hw_err_lst3 = &hclge_ppp_pf_int[0]; - } else if (cmd == HCLGE_PPP_CMD1_INT_CMD) { - hw_err_lst1 = &hclge_ppp_mpf_int2[0]; - hw_err_lst2 = &hclge_ppp_mpf_int3[0]; - } else { - dev_err(dev, "invalid command(=%d)\n", cmd); - return -EINVAL; + /* log HNS common errors */ + status = le32_to_cpu(desc[0].data[0]); + if (status) { + hclge_log_error(dev, "IMP_TCM_ECC_INT_STS", + &hclge_imp_tcm_ecc_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); } - err_sts = le32_to_cpu(desc[0].data[2]); - if (err_sts) { - hclge_log_error(dev, hw_err_lst1, err_sts); - reset_level = HNAE3_FUNC_RESET; + status = le32_to_cpu(desc[0].data[1]); + if (status) { + hclge_log_error(dev, "CMDQ_MEM_ECC_INT_STS", + &hclge_cmdq_nic_mem_ecc_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); } - err_sts = le32_to_cpu(desc[0].data[3]); - if (err_sts) { - hclge_log_error(dev, hw_err_lst2, err_sts); - reset_level = HNAE3_FUNC_RESET; + if ((le32_to_cpu(desc[0].data[2])) & BIT(0)) { + dev_warn(dev, "imp_rd_data_poison_err found\n"); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); } - if (cmd == HCLGE_PPP_CMD0_INT_CMD) { - err_sts = (le32_to_cpu(desc[0].data[4]) >> 8) & 0x3; - if (err_sts) { - hclge_log_error(dev, hw_err_lst3, err_sts); - reset_level = HNAE3_FUNC_RESET; - } + status = le32_to_cpu(desc[0].data[3]); + if (status) { + hclge_log_error(dev, "TQP_INT_ECC_INT_STS", + &hclge_tqp_int_ecc_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); } - /* clear PPP INT */ - ret = hclge_cmd_clear_error(hdev, &desc[0], NULL, 0, - HCLGE_CMD_FLAG_NEXT); - if (ret) { - dev_err(dev, "failed(=%d) to clear PPP interrupt status\n", - ret); - return -EIO; + status = le32_to_cpu(desc[0].data[4]); + if (status) { + hclge_log_error(dev, "MSIX_ECC_INT_STS", + &hclge_msix_sram_ecc_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); } - return 0; + /* log SSU(Storage Switch Unit) errors */ + desc_data = (__le32 *)&desc[2]; + status = le32_to_cpu(*(desc_data + 2)); + if (status) { + dev_warn(dev, "SSU_ECC_MULTI_BIT_INT_0 ssu_ecc_mbit_int[31:0]\n"); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + status = le32_to_cpu(*(desc_data + 3)) & BIT(0); + if (status) { + dev_warn(dev, "SSU_ECC_MULTI_BIT_INT_1 ssu_ecc_mbit_int[32]\n"); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + status = le32_to_cpu(*(desc_data + 4)) & HCLGE_SSU_COMMON_ERR_INT_MASK; + if (status) { + hclge_log_error(dev, "SSU_COMMON_ERR_INT", + 
&hclge_ssu_com_err_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); + } + + /* log IGU(Ingress Unit) errors */ + desc_data = (__le32 *)&desc[3]; + status = le32_to_cpu(*desc_data) & HCLGE_IGU_INT_MASK; + if (status) + hclge_log_error(dev, "IGU_INT_STS", + &hclge_igu_int[0], status); + + /* log PPP(Programmable Packet Process) errors */ + desc_data = (__le32 *)&desc[4]; + status = le32_to_cpu(*(desc_data + 1)); + if (status) + hclge_log_error(dev, "PPP_MPF_ABNORMAL_INT_ST1", + &hclge_ppp_mpf_abnormal_int_st1[0], status); + + status = le32_to_cpu(*(desc_data + 3)) & HCLGE_PPP_MPF_INT_ST3_MASK; + if (status) + hclge_log_error(dev, "PPP_MPF_ABNORMAL_INT_ST3", + &hclge_ppp_mpf_abnormal_int_st3[0], status); + + /* log PPU(RCB) errors */ + desc_data = (__le32 *)&desc[5]; + status = le32_to_cpu(*(desc_data + 1)); + if (status) { + dev_warn(dev, "PPU_MPF_ABNORMAL_INT_ST1 %s found\n", + "rpu_rx_pkt_ecc_mbit_err"); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + status = le32_to_cpu(*(desc_data + 2)); + if (status) { + hclge_log_error(dev, "PPU_MPF_ABNORMAL_INT_ST2", + &hclge_ppu_mpf_abnormal_int_st2[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + status = le32_to_cpu(*(desc_data + 3)) & HCLGE_PPU_MPF_INT_ST3_MASK; + if (status) { + hclge_log_error(dev, "PPU_MPF_ABNORMAL_INT_ST3", + &hclge_ppu_mpf_abnormal_int_st3[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + /* log TM(Traffic Manager) errors */ + desc_data = (__le32 *)&desc[6]; + status = le32_to_cpu(*desc_data); + if (status) { + hclge_log_error(dev, "TM_SCH_RINT", + &hclge_tm_sch_rint[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + /* log QCN(Quantized Congestion Control) errors */ + desc_data = (__le32 *)&desc[7]; + status = le32_to_cpu(*desc_data) & HCLGE_QCN_FIFO_INT_MASK; + if (status) { + hclge_log_error(dev, "QCN_FIFO_RINT", + &hclge_qcn_fifo_rint[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + status = le32_to_cpu(*(desc_data + 1)) & HCLGE_QCN_ECC_INT_MASK; + if (status) { + hclge_log_error(dev, "QCN_ECC_RINT", + &hclge_qcn_ecc_rint[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + /* log NCSI errors */ + desc_data = (__le32 *)&desc[9]; + status = le32_to_cpu(*desc_data) & HCLGE_NCSI_ECC_INT_MASK; + if (status) { + hclge_log_error(dev, "NCSI_ECC_INT_RPT", + &hclge_ncsi_err_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_CORE_RESET); + } + + /* clear all main PF RAS errors */ + hclge_cmd_reuse_desc(&desc[0], false); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], num); + if (ret) + dev_err(dev, "clear all mpf ras int cmd failed (%d)\n", ret); + + return ret; } -static void hclge_process_ppp_error(struct hclge_dev *hdev, - enum hclge_err_int_type int_type) +/* hclge_handle_pf_ras_error: handle all PF RAS errors + * @hdev: pointer to struct hclge_dev + * @desc: descriptor for describing the command + * @num: number of extended command structures + * + * This function handles all the PF RAS errors in the + * hw register/s using command. 
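+ *
+ * It uses the same query-then-clear descriptor pattern as the MPF
+ * path above; a condensed sketch of the body below, nothing extra:
+ *
+ *   hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_PF_RAS_INT, true);
+ *   hclge_cmd_send(&hdev->hw, &desc[0], num);   <- read the error status
+ *   ... log each non-zero status word ...
+ *   hclge_cmd_reuse_desc(&desc[0], false);      <- reuse descriptor as a write
+ *   hclge_cmd_send(&hdev->hw, &desc[0], num);   <- clear the error status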
+ */ +static int hclge_handle_pf_ras_error(struct hclge_dev *hdev, + struct hclge_desc *desc, + int num) { + struct hnae3_ae_dev *ae_dev = hdev->ae_dev; struct device *dev = &hdev->pdev->dev; + __le32 *desc_data; + u32 status; int ret; - /* read PPP INT0,1 sts */ - ret = hclge_log_and_clear_ppp_error(hdev, HCLGE_PPP_CMD0_INT_CMD, - int_type); - if (ret < 0) { - dev_err(dev, "failed(=%d) to clear PPP interrupt 0,1 status\n", - ret); - return; + /* query all PF RAS errors */ + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_PF_RAS_INT, + true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], num); + if (ret) { + dev_err(dev, "query all pf ras int cmd failed (%d)\n", ret); + return ret; } - /* read err PPP INT2,3 sts */ - ret = hclge_log_and_clear_ppp_error(hdev, HCLGE_PPP_CMD1_INT_CMD, - int_type); - if (ret < 0) - dev_err(dev, "failed(=%d) to clear PPP interrupt 2,3 status\n", - ret); + /* log SSU(Storage Switch Unit) errors */ + status = le32_to_cpu(desc[0].data[0]); + if (status) { + hclge_log_error(dev, "SSU_PORT_BASED_ERR_INT", + &hclge_ssu_port_based_err_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); + } + + status = le32_to_cpu(desc[0].data[1]); + if (status) { + hclge_log_error(dev, "SSU_FIFO_OVERFLOW_INT", + &hclge_ssu_fifo_overflow_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); + } + + status = le32_to_cpu(desc[0].data[2]); + if (status) { + hclge_log_error(dev, "SSU_ETS_TCG_INT", + &hclge_ssu_ets_tcg_int[0], status); + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); + } + + /* log IGU(Ingress Unit) EGU(Egress Unit) TNL errors */ + desc_data = (__le32 *)&desc[1]; + status = le32_to_cpu(*desc_data) & HCLGE_IGU_EGU_TNL_INT_MASK; + if (status) + hclge_log_error(dev, "IGU_EGU_TNL_INT_STS", + &hclge_igu_egu_tnl_int[0], status); + + /* clear all PF RAS errors */ + hclge_cmd_reuse_desc(&desc[0], false); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], num); + if (ret) + dev_err(dev, "clear all pf ras int cmd failed (%d)\n", ret); + + return ret; } -static void hclge_process_tm_sch_error(struct hclge_dev *hdev) +static int hclge_handle_all_ras_errors(struct hclge_dev *hdev) { struct device *dev = &hdev->pdev->dev; - const struct hclge_tm_sch_ecc_info *tm_sch_ecc_info; - struct hclge_desc desc; - u32 ecc_info; - u8 module_no; - u8 ram_no; + u32 mpf_bd_num, pf_bd_num, bd_num; + struct hclge_desc desc_bd; + struct hclge_desc *desc; int ret; - /* read TM scheduler errors */ - ret = hclge_cmd_query_error(hdev, &desc, - HCLGE_TM_SCH_MBIT_ECC_INFO_CMD, 0, 0, 0); + /* query the number of registers in the RAS int status */ + hclge_cmd_setup_basic_desc(&desc_bd, HCLGE_QUERY_RAS_INT_STS_BD_NUM, + true); + ret = hclge_cmd_send(&hdev->hw, &desc_bd, 1); if (ret) { - dev_err(dev, "failed(%d) to read SCH mbit ECC err info\n", ret); - return; + dev_err(dev, "fail(%d) to query ras int status bd num\n", ret); + return ret; } - ecc_info = le32_to_cpu(desc.data[0]); + mpf_bd_num = le32_to_cpu(desc_bd.data[0]); + pf_bd_num = le32_to_cpu(desc_bd.data[1]); + bd_num = max_t(u32, mpf_bd_num, pf_bd_num); - ret = hclge_cmd_query_error(hdev, &desc, - HCLGE_TM_SCH_ECC_ERR_RINT_CMD, 0, 0, 0); + desc = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL); + if (!desc) + return -ENOMEM; + + /* handle all main PF RAS errors */ + ret = hclge_handle_mpf_ras_error(hdev, desc, mpf_bd_num); if (ret) { - dev_err(dev, "failed(%d) to read SCH ECC err status\n", ret); - 
return; + kfree(desc); + return ret; } + memset(desc, 0, bd_num * sizeof(struct hclge_desc)); + + /* handle all PF RAS errors */ + ret = hclge_handle_pf_ras_error(hdev, desc, pf_bd_num); + kfree(desc); - /* log TM scheduler errors */ - if (le32_to_cpu(desc.data[0])) { - hclge_log_error(dev, &hclge_tm_sch_err_int[0], - le32_to_cpu(desc.data[0])); - if (le32_to_cpu(desc.data[0]) & 0x2) { - module_no = (ecc_info >> 20) & 0xF; - ram_no = (ecc_info >> 16) & 0xF; - tm_sch_ecc_info = - &hclge_tm_sch_ecc_err[module_no][ram_no]; - dev_warn(dev, "ecc err module:ram=%s\n", - tm_sch_ecc_info->name); - dev_warn(dev, "ecc memory address = 0x%x\n", - ecc_info & 0xFFFF); + return ret; +} + +static int hclge_log_rocee_ovf_error(struct hclge_dev *hdev) +{ + struct device *dev = &hdev->pdev->dev; + struct hclge_desc desc[2]; + int ret; + + /* read overflow error status */ + ret = hclge_cmd_query_error(hdev, &desc[0], + HCLGE_ROCEE_PF_RAS_INT_CMD, + 0, 0, 0); + if (ret) { + dev_err(dev, "failed(%d) to query ROCEE OVF error sts\n", ret); + return ret; + } + + /* log overflow error */ + if (le32_to_cpu(desc[0].data[0]) & HCLGE_ROCEE_OVF_ERR_INT_MASK) { + const struct hclge_hw_error *err; + u32 err_sts; + + err = &hclge_rocee_qmm_ovf_err_int[0]; + err_sts = HCLGE_ROCEE_OVF_ERR_TYPE_MASK & + le32_to_cpu(desc[0].data[0]); + while (err->msg) { + if (err->int_msk == err_sts) { + dev_warn(dev, "%s [error status=0x%x] found\n", + err->msg, + le32_to_cpu(desc[0].data[0])); + break; + } + err++; } } - /* clear TM scheduler errors */ - ret = hclge_cmd_clear_error(hdev, &desc, NULL, 0, 0); - if (ret) { - dev_err(dev, "failed(%d) to clear TM SCH error status\n", ret); - return; + if (le32_to_cpu(desc[0].data[1]) & HCLGE_ROCEE_OVF_ERR_INT_MASK) { + dev_warn(dev, "ROCEE TSP OVF [error status=0x%x] found\n", + le32_to_cpu(desc[0].data[1])); } - ret = hclge_cmd_query_error(hdev, &desc, - HCLGE_TM_SCH_ECC_ERR_RINT_CE, 0, 0, 0); - if (ret) { - dev_err(dev, "failed(%d) to read SCH CE status\n", ret); - return; + if (le32_to_cpu(desc[0].data[2]) & HCLGE_ROCEE_OVF_ERR_INT_MASK) { + dev_warn(dev, "ROCEE SCC OVF [error status=0x%x] found\n", + le32_to_cpu(desc[0].data[2])); } - ret = hclge_cmd_clear_error(hdev, &desc, NULL, 0, 0); + return 0; +} + +static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev) +{ + enum hnae3_reset_type reset_type = HNAE3_FUNC_RESET; + struct hnae3_ae_dev *ae_dev = hdev->ae_dev; + struct device *dev = &hdev->pdev->dev; + struct hclge_desc desc[2]; + unsigned int status; + int ret; + + /* read RAS error interrupt status */ + ret = hclge_cmd_query_error(hdev, &desc[0], + HCLGE_QUERY_CLEAR_ROCEE_RAS_INT, + 0, 0, 0); if (ret) { - dev_err(dev, "failed(%d) to clear TM SCH CE status\n", ret); - return; + dev_err(dev, "failed(%d) to query ROCEE RAS INT SRC\n", ret); + /* reset everything for now */ + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); + return ret; } - ret = hclge_cmd_query_error(hdev, &desc, - HCLGE_TM_SCH_ECC_ERR_RINT_NFE, 0, 0, 0); - if (ret) { - dev_err(dev, "failed(%d) to read SCH NFE status\n", ret); - return; + status = le32_to_cpu(desc[0].data[0]); + + if (status & HCLGE_ROCEE_RERR_INT_MASK) + dev_warn(dev, "ROCEE RAS AXI rresp error\n"); + + if (status & HCLGE_ROCEE_BERR_INT_MASK) + dev_warn(dev, "ROCEE RAS AXI bresp error\n"); + + if (status & HCLGE_ROCEE_ECC_INT_MASK) { + dev_warn(dev, "ROCEE RAS 2bit ECC error\n"); + reset_type = HNAE3_GLOBAL_RESET; } - ret = hclge_cmd_clear_error(hdev, &desc, NULL, 0, 0); - if (ret) { - dev_err(dev, "failed(%d) to clear TM SCH NFE 
status\n", ret); - return; + if (status & HCLGE_ROCEE_OVF_INT_MASK) { + ret = hclge_log_rocee_ovf_error(hdev); + if (ret) { + dev_err(dev, "failed(%d) to process ovf error\n", ret); + /* reset everything for now */ + HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET); + return ret; + } } - ret = hclge_cmd_query_error(hdev, &desc, - HCLGE_TM_SCH_ECC_ERR_RINT_FE, 0, 0, 0); + /* clear error status */ + hclge_cmd_reuse_desc(&desc[0], false); + ret = hclge_cmd_send(&hdev->hw, &desc[0], 1); if (ret) { - dev_err(dev, "failed(%d) to read SCH FE status\n", ret); - return; + dev_err(dev, "failed(%d) to clear ROCEE RAS error\n", ret); + /* reset everything for now */ + reset_type = HNAE3_GLOBAL_RESET; } - ret = hclge_cmd_clear_error(hdev, &desc, NULL, 0, 0); - if (ret) - dev_err(dev, "failed(%d) to clear TM SCH FE status\n", ret); + HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type); + + return ret; } -static void hclge_process_tm_qcn_error(struct hclge_dev *hdev) +static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en) { struct device *dev = &hdev->pdev->dev; struct hclge_desc desc; int ret; - /* read QCN errors */ - ret = hclge_cmd_query_error(hdev, &desc, - HCLGE_TM_QCN_MEM_INT_INFO_CMD, 0, 0, 0); - if (ret) { - dev_err(dev, "failed(%d) to read QCN ECC err status\n", ret); - return; - } + if (hdev->pdev->revision < 0x21 || !hnae3_dev_roce_supported(hdev)) + return 0; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_CONFIG_ROCEE_RAS_INT_EN, false); + if (en) { + /* enable ROCEE hw error interrupts */ + desc.data[0] = cpu_to_le32(HCLGE_ROCEE_RAS_NFE_INT_EN); + desc.data[1] = cpu_to_le32(HCLGE_ROCEE_RAS_CE_INT_EN); - /* log QCN errors */ - if (le32_to_cpu(desc.data[0])) - hclge_log_error(dev, &hclge_qcn_ecc_err_int[0], - le32_to_cpu(desc.data[0])); + hclge_log_and_clear_rocee_ras_error(hdev); + } + desc.data[2] = cpu_to_le32(HCLGE_ROCEE_RAS_NFE_INT_EN_MASK); + desc.data[3] = cpu_to_le32(HCLGE_ROCEE_RAS_CE_INT_EN_MASK); - /* clear QCN errors */ - ret = hclge_cmd_clear_error(hdev, &desc, NULL, 0, 0); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) - dev_err(dev, "failed(%d) to clear QCN error status\n", ret); + dev_err(dev, "failed(%d) to config ROCEE RAS interrupt\n", ret); + + return ret; } -static void hclge_process_tm_error(struct hclge_dev *hdev, - enum hclge_err_int_type type) +static int hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev) { - hclge_process_tm_sch_error(hdev); - hclge_process_tm_qcn_error(hdev); + struct hclge_dev *hdev = ae_dev->priv; + + if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) || + hdev->pdev->revision < 0x21) + return HNAE3_NONE_RESET; + + return hclge_log_and_clear_rocee_ras_error(hdev); } static const struct hclge_hw_blk hw_blk[] = { - { .msk = BIT(0), .name = "IGU_EGU", - .enable_error = hclge_enable_igu_egu_error, - .process_error = hclge_process_igu_egu_error, }, - { .msk = BIT(5), .name = "COMMON", - .enable_error = hclge_enable_common_error, - .process_error = hclge_process_common_error, }, - { .msk = BIT(4), .name = "TM", - .enable_error = hclge_enable_tm_hw_error, - .process_error = hclge_process_tm_error, }, - { .msk = BIT(1), .name = "PPP", - .enable_error = hclge_enable_ppp_error, - .process_error = hclge_process_ppp_error, }, + { + .msk = BIT(0), .name = "IGU_EGU", + .config_err_int = hclge_config_igu_egu_hw_err_int, + }, + { + .msk = BIT(1), .name = "PPP", + .config_err_int = hclge_config_ppp_hw_err_int, + }, + { + .msk = BIT(2), .name = "SSU", + .config_err_int = hclge_config_ssu_hw_err_int, + }, + { + .msk = BIT(3), .name = 
"PPU", + .config_err_int = hclge_config_ppu_hw_err_int, + }, + { + .msk = BIT(4), .name = "TM", + .config_err_int = hclge_config_tm_hw_err_int, + }, + { + .msk = BIT(5), .name = "COMMON", + .config_err_int = hclge_config_common_hw_err_int, + }, + { + .msk = BIT(8), .name = "MAC", + .config_err_int = hclge_config_mac_err_int, + }, { /* sentinel */ } }; int hclge_hw_error_set_state(struct hclge_dev *hdev, bool state) { + const struct hclge_hw_blk *module = hw_blk; struct device *dev = &hdev->pdev->dev; int ret = 0; - int i = 0; - while (hw_blk[i].name) { - if (!hw_blk[i].enable_error) { - i++; - continue; + while (module->name) { + if (module->config_err_int) { + ret = module->config_err_int(hdev, state); + if (ret) + return ret; } - ret = hw_blk[i].enable_error(hdev, state); - if (ret) { - dev_err(dev, "fail(%d) to en/disable err int\n", ret); - return ret; - } - i++; + module++; } + ret = hclge_config_rocee_ras_interrupt(hdev, state); + if (ret) + dev_err(dev, "fail(%d) to configure ROCEE err int\n", ret); + return ret; } -pci_ers_result_t hclge_process_ras_hw_error(struct hnae3_ae_dev *ae_dev) +pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev) { struct hclge_dev *hdev = ae_dev->priv; struct device *dev = &hdev->pdev->dev; - u32 sts, val; - int i = 0; - - sts = hclge_read_dev(&hdev->hw, HCLGE_RAS_PF_OTHER_INT_STS_REG); - - /* Processing Non-fatal errors */ - if (sts & HCLGE_RAS_REG_NFE_MASK) { - val = (sts >> HCLGE_RAS_REG_NFE_SHIFT) & 0xFF; - i = 0; - while (hw_blk[i].name) { - if (!(hw_blk[i].msk & val)) { - i++; - continue; - } - dev_warn(dev, "%s ras non-fatal error identified\n", - hw_blk[i].name); - if (hw_blk[i].process_error) - hw_blk[i].process_error(hdev, - HCLGE_ERR_INT_RAS_NFE); - i++; - } + u32 status; + + status = hclge_read_dev(&hdev->hw, HCLGE_RAS_PF_OTHER_INT_STS_REG); + + /* Handling Non-fatal HNS RAS errors */ + if (status & HCLGE_RAS_REG_NFE_MASK) { + dev_warn(dev, + "HNS Non-Fatal RAS error(status=0x%x) identified\n", + status); + hclge_handle_all_ras_errors(hdev); + } else { + if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) || + hdev->pdev->revision < 0x21) + return PCI_ERS_RESULT_RECOVERED; + } + + if (status & HCLGE_RAS_REG_ROCEE_ERR_MASK) { + dev_warn(dev, "ROCEE uncorrected RAS error identified\n"); + hclge_handle_rocee_ras_error(ae_dev); + } + + if (status & HCLGE_RAS_REG_NFE_MASK || + status & HCLGE_RAS_REG_ROCEE_ERR_MASK) + return PCI_ERS_RESULT_NEED_RESET; + + return PCI_ERS_RESULT_RECOVERED; +} + +int hclge_handle_hw_msix_error(struct hclge_dev *hdev, + unsigned long *reset_requests) +{ + struct device *dev = &hdev->pdev->dev; + u32 mpf_bd_num, pf_bd_num, bd_num; + struct hclge_desc desc_bd; + struct hclge_desc *desc; + __le32 *desc_data; + int ret = 0; + u32 status; + + /* set default handling */ + set_bit(HNAE3_FUNC_RESET, reset_requests); + + /* query the number of bds for the MSIx int status */ + hclge_cmd_setup_basic_desc(&desc_bd, HCLGE_QUERY_MSIX_INT_STS_BD_NUM, + true); + ret = hclge_cmd_send(&hdev->hw, &desc_bd, 1); + if (ret) { + dev_err(dev, "fail(%d) to query msix int status bd num\n", + ret); + /* reset everything for now */ + set_bit(HNAE3_GLOBAL_RESET, reset_requests); + return ret; + } + + mpf_bd_num = le32_to_cpu(desc_bd.data[0]); + pf_bd_num = le32_to_cpu(desc_bd.data[1]); + bd_num = max_t(u32, mpf_bd_num, pf_bd_num); + + desc = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL); + if (!desc) + goto out; + + /* query all main PF MSIx errors */ + hclge_cmd_setup_basic_desc(&desc[0], 
HCLGE_QUERY_CLEAR_ALL_MPF_MSIX_INT, + true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], mpf_bd_num); + if (ret) { + dev_err(dev, "query all mpf msix int cmd failed (%d)\n", + ret); + /* reset everything for now */ + set_bit(HNAE3_GLOBAL_RESET, reset_requests); + goto msi_error; + } + + /* log MAC errors */ + desc_data = (__le32 *)&desc[1]; + status = le32_to_cpu(*desc_data); + if (status) { + hclge_log_error(dev, "MAC_AFIFO_TNL_INT_R", + &hclge_mac_afifo_tnl_int[0], status); + set_bit(HNAE3_GLOBAL_RESET, reset_requests); } - return PCI_ERS_RESULT_NEED_RESET; + /* log PPU(RCB) errors */ + desc_data = (__le32 *)&desc[5]; + status = le32_to_cpu(*(desc_data + 2)) & + HCLGE_PPU_MPF_INT_ST2_MSIX_MASK; + if (status) { + dev_warn(dev, + "PPU_MPF_ABNORMAL_INT_ST2[28:29], err_status(0x%x)\n", + status); + set_bit(HNAE3_CORE_RESET, reset_requests); + } + + /* clear all main PF MSIx errors */ + hclge_cmd_reuse_desc(&desc[0], false); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], mpf_bd_num); + if (ret) { + dev_err(dev, "clear all mpf msix int cmd failed (%d)\n", + ret); + /* reset everything for now */ + set_bit(HNAE3_GLOBAL_RESET, reset_requests); + goto msi_error; + } + + /* query all PF MSIx errors */ + memset(desc, 0, bd_num * sizeof(struct hclge_desc)); + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_ALL_PF_MSIX_INT, + true); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], pf_bd_num); + if (ret) { + dev_err(dev, "query all pf msix int cmd failed (%d)\n", + ret); + /* reset everything for now */ + set_bit(HNAE3_GLOBAL_RESET, reset_requests); + goto msi_error; + } + + /* log SSU PF errors */ + status = le32_to_cpu(desc[0].data[0]) & HCLGE_SSU_PORT_INT_MSIX_MASK; + if (status) { + hclge_log_error(dev, "SSU_PORT_BASED_ERR_INT", + &hclge_ssu_port_based_pf_int[0], status); + set_bit(HNAE3_GLOBAL_RESET, reset_requests); + } + + /* read and log PPP PF errors */ + desc_data = (__le32 *)&desc[2]; + status = le32_to_cpu(*desc_data); + if (status) + hclge_log_error(dev, "PPP_PF_ABNORMAL_INT_ST0", + &hclge_ppp_pf_abnormal_int[0], status); + + /* PPU(RCB) PF errors */ + desc_data = (__le32 *)&desc[3]; + status = le32_to_cpu(*desc_data) & HCLGE_PPU_PF_INT_MSIX_MASK; + if (status) + hclge_log_error(dev, "PPU_PF_ABNORMAL_INT_ST", + &hclge_ppu_pf_abnormal_int[0], status); + + /* clear all PF MSIx errors */ + hclge_cmd_reuse_desc(&desc[0], false); + desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); + + ret = hclge_cmd_send(&hdev->hw, &desc[0], pf_bd_num); + if (ret) { + dev_err(dev, "clear all pf msix int cmd failed (%d)\n", + ret); + /* reset everything for now */ + set_bit(HNAE3_GLOBAL_RESET, reset_requests); + } + +msi_error: + kfree(desc); +out: + return ret; } diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h index e0e3b5861495..51a7d4eb066a 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h @@ -7,9 +7,11 @@ #include "hclge_main.h" #define HCLGE_RAS_PF_OTHER_INT_STS_REG 0x20B00 -#define HCLGE_RAS_REG_FE_MASK 0xFF #define HCLGE_RAS_REG_NFE_MASK 0xFF00 -#define HCLGE_RAS_REG_NFE_SHIFT 8 +#define HCLGE_RAS_REG_ROCEE_ERR_MASK 0x3000000 + +#define HCLGE_VECTOR0_PF_OTHER_INT_STS_REG 0x20800 +#define HCLGE_VECTOR0_REG_MSIX_MASK 0x1FF00 #define HCLGE_IMP_TCM_ECC_ERR_INT_EN 0xFFFF0000 #define 
HCLGE_IMP_TCM_ECC_ERR_INT_EN_MASK 0xFFFF0000 @@ -23,6 +25,8 @@ #define HCLGE_IMP_RD_POISON_ERR_INT_EN_MASK 0x0100 #define HCLGE_TQP_ECC_ERR_INT_EN 0x0FFF #define HCLGE_TQP_ECC_ERR_INT_EN_MASK 0x0FFF +#define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN_MASK 0x0F000000 +#define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN 0x0F000000 #define HCLGE_IGU_ERR_INT_EN 0x0000066F #define HCLGE_IGU_ERR_INT_EN_MASK 0x000F #define HCLGE_IGU_TNL_ERR_INT_EN 0x0002AABF @@ -41,21 +45,55 @@ #define HCLGE_TM_QCN_MEM_ERR_INT_EN 0xFFFFFF #define HCLGE_NCSI_ERR_INT_EN 0x3 #define HCLGE_NCSI_ERR_INT_TYPE 0x9 +#define HCLGE_MAC_COMMON_ERR_INT_EN GENMASK(7, 0) +#define HCLGE_MAC_COMMON_ERR_INT_EN_MASK GENMASK(7, 0) +#define HCLGE_PPU_MPF_ABNORMAL_INT0_EN GENMASK(31, 0) +#define HCLGE_PPU_MPF_ABNORMAL_INT0_EN_MASK GENMASK(31, 0) +#define HCLGE_PPU_MPF_ABNORMAL_INT1_EN GENMASK(31, 0) +#define HCLGE_PPU_MPF_ABNORMAL_INT1_EN_MASK GENMASK(31, 0) +#define HCLGE_PPU_MPF_ABNORMAL_INT2_EN 0x3FFF3FFF +#define HCLGE_PPU_MPF_ABNORMAL_INT2_EN_MASK 0x3FFF3FFF +#define HCLGE_PPU_MPF_ABNORMAL_INT2_EN2 0xB +#define HCLGE_PPU_MPF_ABNORMAL_INT2_EN2_MASK 0xB +#define HCLGE_PPU_MPF_ABNORMAL_INT3_EN GENMASK(7, 0) +#define HCLGE_PPU_MPF_ABNORMAL_INT3_EN_MASK GENMASK(23, 16) +#define HCLGE_PPU_PF_ABNORMAL_INT_EN GENMASK(5, 0) +#define HCLGE_PPU_PF_ABNORMAL_INT_EN_MASK GENMASK(5, 0) +#define HCLGE_SSU_1BIT_ECC_ERR_INT_EN GENMASK(31, 0) +#define HCLGE_SSU_1BIT_ECC_ERR_INT_EN_MASK GENMASK(31, 0) +#define HCLGE_SSU_MULTI_BIT_ECC_ERR_INT_EN GENMASK(31, 0) +#define HCLGE_SSU_MULTI_BIT_ECC_ERR_INT_EN_MASK GENMASK(31, 0) +#define HCLGE_SSU_BIT32_ECC_ERR_INT_EN 0x0101 +#define HCLGE_SSU_BIT32_ECC_ERR_INT_EN_MASK 0x0101 +#define HCLGE_SSU_COMMON_INT_EN GENMASK(9, 0) +#define HCLGE_SSU_COMMON_INT_EN_MASK GENMASK(9, 0) +#define HCLGE_SSU_PORT_BASED_ERR_INT_EN 0x0BFF +#define HCLGE_SSU_PORT_BASED_ERR_INT_EN_MASK 0x0BFF0000 +#define HCLGE_SSU_FIFO_OVERFLOW_ERR_INT_EN GENMASK(23, 0) +#define HCLGE_SSU_FIFO_OVERFLOW_ERR_INT_EN_MASK GENMASK(23, 0) + +#define HCLGE_SSU_COMMON_ERR_INT_MASK GENMASK(9, 0) +#define HCLGE_SSU_PORT_INT_MSIX_MASK 0x7BFF +#define HCLGE_IGU_INT_MASK GENMASK(3, 0) +#define HCLGE_IGU_EGU_TNL_INT_MASK GENMASK(5, 0) +#define HCLGE_PPP_MPF_INT_ST3_MASK GENMASK(5, 0) +#define HCLGE_PPU_MPF_INT_ST3_MASK GENMASK(7, 0) +#define HCLGE_PPU_MPF_INT_ST2_MSIX_MASK GENMASK(29, 28) +#define HCLGE_PPU_PF_INT_MSIX_MASK 0x27 +#define HCLGE_QCN_FIFO_INT_MASK GENMASK(17, 0) +#define HCLGE_QCN_ECC_INT_MASK GENMASK(21, 0) +#define HCLGE_NCSI_ECC_INT_MASK GENMASK(1, 0) -#define HCLGE_IMP_TCM_ECC_INT_MASK 0xFFFF -#define HCLGE_IMP_ITCM4_ECC_INT_MASK 0x3 -#define HCLGE_CMDQ_ECC_INT_MASK 0xFFFF -#define HCLGE_CMDQ_ROC_ECC_INT_SHIFT 16 -#define HCLGE_TQP_ECC_INT_MASK 0xFFF -#define HCLGE_TQP_ECC_INT_SHIFT 16 -#define HCLGE_IMP_TCM_ECC_CLR_MASK 0xFFFF -#define HCLGE_IMP_ITCM4_ECC_CLR_MASK 0x3 -#define HCLGE_CMDQ_NIC_ECC_CLR_MASK 0xFFFF -#define HCLGE_CMDQ_ROCEE_ECC_CLR_MASK 0xFFFF0000 -#define HCLGE_TQP_IMP_ERR_CLR_MASK 0x0FFF0001 -#define HCLGE_IGU_COM_INT_MASK 0xF -#define HCLGE_IGU_EGU_TNL_INT_MASK 0x3F -#define HCLGE_PPP_PF_INT_MASK 0x100 +#define HCLGE_ROCEE_RAS_NFE_INT_EN 0xF +#define HCLGE_ROCEE_RAS_CE_INT_EN 0x1 +#define HCLGE_ROCEE_RAS_NFE_INT_EN_MASK 0xF +#define HCLGE_ROCEE_RAS_CE_INT_EN_MASK 0x1 +#define HCLGE_ROCEE_RERR_INT_MASK BIT(0) +#define HCLGE_ROCEE_BERR_INT_MASK BIT(1) +#define HCLGE_ROCEE_ECC_INT_MASK BIT(2) +#define HCLGE_ROCEE_OVF_INT_MASK BIT(3) +#define HCLGE_ROCEE_OVF_ERR_INT_MASK 0x10000 +#define HCLGE_ROCEE_OVF_ERR_TYPE_MASK 0x3F enum hclge_err_int_type { 
HCLGE_ERR_INT_MSIX = 0, @@ -67,9 +105,7 @@ enum hclge_err_int_type { struct hclge_hw_blk { u32 msk; const char *name; - int (*enable_error)(struct hclge_dev *hdev, bool en); - void (*process_error)(struct hclge_dev *hdev, - enum hclge_err_int_type type); + int (*config_err_int)(struct hclge_dev *hdev, bool en); }; struct hclge_hw_error { @@ -78,6 +114,7 @@ struct hclge_hw_error { }; int hclge_hw_error_set_state(struct hclge_dev *hdev, bool state); -int hclge_enable_tm_hw_error(struct hclge_dev *hdev, bool en); -pci_ers_result_t hclge_process_ras_hw_error(struct hnae3_ae_dev *ae_dev); +pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev); +int hclge_handle_hw_msix_error(struct hclge_dev *hdev, + unsigned long *reset_requests); #endif diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index ffdd96020860..f7637c08bb3a 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -26,7 +26,9 @@ #define HCLGE_STATS_READ(p, offset) (*((u64 *)((u8 *)(p) + (offset)))) #define HCLGE_MAC_STATS_FIELD_OFF(f) (offsetof(struct hclge_mac_stats, f)) -static int hclge_set_mtu(struct hnae3_handle *handle, int new_mtu); +#define HCLGE_BUF_SIZE_UNIT 256 + +static int hclge_set_mac_mtu(struct hclge_dev *hdev, int new_mps); static int hclge_init_vlan_config(struct hclge_dev *hdev); static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev); static int hclge_set_umv_space(struct hclge_dev *hdev, u16 space_size, @@ -48,6 +50,62 @@ static const struct pci_device_id ae_algo_pci_tbl[] = { MODULE_DEVICE_TABLE(pci, ae_algo_pci_tbl); +static const u32 cmdq_reg_addr_list[] = {HCLGE_CMDQ_TX_ADDR_L_REG, + HCLGE_CMDQ_TX_ADDR_H_REG, + HCLGE_CMDQ_TX_DEPTH_REG, + HCLGE_CMDQ_TX_TAIL_REG, + HCLGE_CMDQ_TX_HEAD_REG, + HCLGE_CMDQ_RX_ADDR_L_REG, + HCLGE_CMDQ_RX_ADDR_H_REG, + HCLGE_CMDQ_RX_DEPTH_REG, + HCLGE_CMDQ_RX_TAIL_REG, + HCLGE_CMDQ_RX_HEAD_REG, + HCLGE_VECTOR0_CMDQ_SRC_REG, + HCLGE_CMDQ_INTR_STS_REG, + HCLGE_CMDQ_INTR_EN_REG, + HCLGE_CMDQ_INTR_GEN_REG}; + +static const u32 common_reg_addr_list[] = {HCLGE_MISC_VECTOR_REG_BASE, + HCLGE_VECTOR0_OTER_EN_REG, + HCLGE_MISC_RESET_STS_REG, + HCLGE_MISC_VECTOR_INT_STS, + HCLGE_GLOBAL_RESET_REG, + HCLGE_FUN_RST_ING, + HCLGE_GRO_EN_REG}; + +static const u32 ring_reg_addr_list[] = {HCLGE_RING_RX_ADDR_L_REG, + HCLGE_RING_RX_ADDR_H_REG, + HCLGE_RING_RX_BD_NUM_REG, + HCLGE_RING_RX_BD_LENGTH_REG, + HCLGE_RING_RX_MERGE_EN_REG, + HCLGE_RING_RX_TAIL_REG, + HCLGE_RING_RX_HEAD_REG, + HCLGE_RING_RX_FBD_NUM_REG, + HCLGE_RING_RX_OFFSET_REG, + HCLGE_RING_RX_FBD_OFFSET_REG, + HCLGE_RING_RX_STASH_REG, + HCLGE_RING_RX_BD_ERR_REG, + HCLGE_RING_TX_ADDR_L_REG, + HCLGE_RING_TX_ADDR_H_REG, + HCLGE_RING_TX_BD_NUM_REG, + HCLGE_RING_TX_PRIORITY_REG, + HCLGE_RING_TX_TC_REG, + HCLGE_RING_TX_MERGE_EN_REG, + HCLGE_RING_TX_TAIL_REG, + HCLGE_RING_TX_HEAD_REG, + HCLGE_RING_TX_FBD_NUM_REG, + HCLGE_RING_TX_OFFSET_REG, + HCLGE_RING_TX_EBD_NUM_REG, + HCLGE_RING_TX_EBD_OFFSET_REG, + HCLGE_RING_TX_BD_ERR_REG, + HCLGE_RING_EN_REG}; + +static const u32 tqp_intr_reg_addr_list[] = {HCLGE_TQP_INTR_CTRL_REG, + HCLGE_TQP_INTR_GL0_REG, + HCLGE_TQP_INTR_GL1_REG, + HCLGE_TQP_INTR_GL2_REG, + HCLGE_TQP_INTR_RL_REG}; + static const char hns3_nic_test_strs[][ETH_GSTRING_LEN] = { "App Loopback test", "Serdes serial Loopback test", @@ -631,6 +689,22 @@ static int hclge_query_pf_resource(struct hclge_dev *hdev) hdev->num_tqps = __le16_to_cpu(req->tqp_num); hdev->pkt_buf_size = 
__le16_to_cpu(req->buf_size) << HCLGE_BUF_UNIT_S; + if (req->tx_buf_size) + hdev->tx_buf_size = + __le16_to_cpu(req->tx_buf_size) << HCLGE_BUF_UNIT_S; + else + hdev->tx_buf_size = HCLGE_DEFAULT_TX_BUF; + + hdev->tx_buf_size = roundup(hdev->tx_buf_size, HCLGE_BUF_SIZE_UNIT); + + if (req->dv_buf_size) + hdev->dv_buf_size = + __le16_to_cpu(req->dv_buf_size) << HCLGE_BUF_UNIT_S; + else + hdev->dv_buf_size = HCLGE_DEFAULT_DV; + + hdev->dv_buf_size = roundup(hdev->dv_buf_size, HCLGE_BUF_SIZE_UNIT); + if (hnae3_dev_roce_supported(hdev)) { hdev->roce_base_msix_offset = hnae3_get_field(__le16_to_cpu(req->msixcap_localid_ba_rocee), @@ -886,7 +960,7 @@ static int hclge_configure(struct hclge_dev *hdev) hdev->pfc_max = hdev->tc_max; } - hdev->tm_info.num_tc = hdev->tc_max; + hdev->tm_info.num_tc = 1; /* Currently does not support uncontiguous tc */ for (i = 0; i < hdev->tm_info.num_tc; i++) @@ -921,6 +995,28 @@ static int hclge_config_tso(struct hclge_dev *hdev, int tso_mss_min, return hclge_cmd_send(&hdev->hw, &desc, 1); } +static int hclge_config_gro(struct hclge_dev *hdev, bool en) +{ + struct hclge_cfg_gro_status_cmd *req; + struct hclge_desc desc; + int ret; + + if (!hnae3_dev_gro_supported(hdev)) + return 0; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_GRO_GENERIC_CONFIG, false); + req = (struct hclge_cfg_gro_status_cmd *)desc.data; + + req->gro_en = cpu_to_le16(en ? 1 : 0); + + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret) + dev_err(&hdev->pdev->dev, + "GRO hardware config cmd failed, ret = %d\n", ret); + + return ret; +} + static int hclge_alloc_tqps(struct hclge_dev *hdev) { struct hclge_tqp *tqp; @@ -1144,6 +1240,7 @@ static int hclge_alloc_vport(struct hclge_dev *hdev) for (i = 0; i < num_vport; i++) { vport->back = hdev; vport->vport_id = i; + vport->mps = HCLGE_MAC_DEFAULT_FRAME; if (i == 0) ret = hclge_vport_setup(vport, tqp_main_vport); @@ -1289,40 +1386,51 @@ static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev, { u32 shared_buf_min, shared_buf_tc, shared_std; int tc_num, pfc_enable_num; - u32 shared_buf; + u32 shared_buf, aligned_mps; u32 rx_priv; int i; tc_num = hclge_get_tc_num(hdev); pfc_enable_num = hclge_get_pfc_enalbe_num(hdev); + aligned_mps = roundup(hdev->mps, HCLGE_BUF_SIZE_UNIT); if (hnae3_dev_dcb_supported(hdev)) - shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_DV; + shared_buf_min = 2 * aligned_mps + hdev->dv_buf_size; else - shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_NON_DCB_DV; + shared_buf_min = aligned_mps + HCLGE_NON_DCB_ADDITIONAL_BUF + + hdev->dv_buf_size; - shared_buf_tc = pfc_enable_num * hdev->mps + - (tc_num - pfc_enable_num) * hdev->mps / 2 + - hdev->mps; - shared_std = max_t(u32, shared_buf_min, shared_buf_tc); + shared_buf_tc = pfc_enable_num * aligned_mps + + (tc_num - pfc_enable_num) * aligned_mps / 2 + + aligned_mps; + shared_std = roundup(max_t(u32, shared_buf_min, shared_buf_tc), + HCLGE_BUF_SIZE_UNIT); rx_priv = hclge_get_rx_priv_buff_alloced(buf_alloc); - if (rx_all <= rx_priv + shared_std) + if (rx_all < rx_priv + shared_std) return false; - shared_buf = rx_all - rx_priv; + shared_buf = rounddown(rx_all - rx_priv, HCLGE_BUF_SIZE_UNIT); buf_alloc->s_buf.buf_size = shared_buf; - buf_alloc->s_buf.self.high = shared_buf; - buf_alloc->s_buf.self.low = 2 * hdev->mps; + if (hnae3_dev_dcb_supported(hdev)) { + buf_alloc->s_buf.self.high = shared_buf - hdev->dv_buf_size; + buf_alloc->s_buf.self.low = buf_alloc->s_buf.self.high + - roundup(aligned_mps / 2, HCLGE_BUF_SIZE_UNIT); + } else { + buf_alloc->s_buf.self.high = aligned_mps + +
HCLGE_NON_DCB_ADDITIONAL_BUF; + buf_alloc->s_buf.self.low = + roundup(aligned_mps / 2, HCLGE_BUF_SIZE_UNIT); + } for (i = 0; i < HCLGE_MAX_TC_NUM; i++) { if ((hdev->hw_tc_map & BIT(i)) && (hdev->tm_info.hw_pfc_map & BIT(i))) { - buf_alloc->s_buf.tc_thrd[i].low = hdev->mps; - buf_alloc->s_buf.tc_thrd[i].high = 2 * hdev->mps; + buf_alloc->s_buf.tc_thrd[i].low = aligned_mps; + buf_alloc->s_buf.tc_thrd[i].high = 2 * aligned_mps; } else { buf_alloc->s_buf.tc_thrd[i].low = 0; - buf_alloc->s_buf.tc_thrd[i].high = hdev->mps; + buf_alloc->s_buf.tc_thrd[i].high = aligned_mps; } } @@ -1340,11 +1448,11 @@ static int hclge_tx_buffer_calc(struct hclge_dev *hdev, for (i = 0; i < HCLGE_MAX_TC_NUM; i++) { struct hclge_priv_buf *priv = &buf_alloc->priv_buf[i]; - if (total_size < HCLGE_DEFAULT_TX_BUF) + if (total_size < hdev->tx_buf_size) return -ENOMEM; if (hdev->hw_tc_map & BIT(i)) - priv->tx_buf_size = HCLGE_DEFAULT_TX_BUF; + priv->tx_buf_size = hdev->tx_buf_size; else priv->tx_buf_size = 0; @@ -1362,7 +1470,6 @@ static int hclge_tx_buffer_calc(struct hclge_dev *hdev, static int hclge_rx_buffer_calc(struct hclge_dev *hdev, struct hclge_pkt_buf_alloc *buf_alloc) { -#define HCLGE_BUF_SIZE_UNIT 128 u32 rx_all = hdev->pkt_buf_size, aligned_mps; int no_pfc_priv_num, pfc_priv_num; struct hclge_priv_buf *priv; @@ -1388,13 +1495,16 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev, priv->enable = 1; if (hdev->tm_info.hw_pfc_map & BIT(i)) { priv->wl.low = aligned_mps; - priv->wl.high = priv->wl.low + aligned_mps; + priv->wl.high = + roundup(priv->wl.low + aligned_mps, + HCLGE_BUF_SIZE_UNIT); priv->buf_size = priv->wl.high + - HCLGE_DEFAULT_DV; + hdev->dv_buf_size; } else { priv->wl.low = 0; priv->wl.high = 2 * aligned_mps; - priv->buf_size = priv->wl.high; + priv->buf_size = priv->wl.high + + hdev->dv_buf_size; } } else { priv->enable = 0; @@ -1424,13 +1534,13 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev, priv->enable = 1; if (hdev->tm_info.hw_pfc_map & BIT(i)) { - priv->wl.low = 128; + priv->wl.low = 256; priv->wl.high = priv->wl.low + aligned_mps; - priv->buf_size = priv->wl.high + HCLGE_DEFAULT_DV; + priv->buf_size = priv->wl.high + hdev->dv_buf_size; } else { priv->wl.low = 0; priv->wl.high = aligned_mps; - priv->buf_size = priv->wl.high; + priv->buf_size = priv->wl.high + hdev->dv_buf_size; } } @@ -1873,37 +1983,6 @@ static int hclge_cfg_mac_speed_dup_h(struct hnae3_handle *handle, int speed, return hclge_cfg_mac_speed_dup(hdev, speed, duplex); } -static int hclge_query_mac_an_speed_dup(struct hclge_dev *hdev, int *speed, - u8 *duplex) -{ - struct hclge_query_an_speed_dup_cmd *req; - struct hclge_desc desc; - int speed_tmp; - int ret; - - req = (struct hclge_query_an_speed_dup_cmd *)desc.data; - - hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_AN_RESULT, true); - ret = hclge_cmd_send(&hdev->hw, &desc, 1); - if (ret) { - dev_err(&hdev->pdev->dev, - "mac speed/autoneg/duplex query cmd failed %d\n", - ret); - return ret; - } - - *duplex = hnae3_get_bit(req->an_syn_dup_speed, HCLGE_QUERY_DUPLEX_B); - speed_tmp = hnae3_get_field(req->an_syn_dup_speed, HCLGE_QUERY_SPEED_M, - HCLGE_QUERY_SPEED_S); - - ret = hclge_parse_speed(speed_tmp, speed); - if (ret) - dev_err(&hdev->pdev->dev, - "could not parse speed(=%d), %d\n", speed_tmp, ret); - - return ret; -} - static int hclge_set_autoneg_en(struct hclge_dev *hdev, bool enable) { struct hclge_config_auto_neg_cmd *req; @@ -1947,12 +2026,10 @@ static int hclge_get_autoneg(struct hnae3_handle *handle) static int hclge_mac_init(struct hclge_dev 
*hdev) { - struct hnae3_handle *handle = &hdev->vport[0].nic; - struct net_device *netdev = handle->kinfo.netdev; struct hclge_mac *mac = &hdev->hw.mac; - int mtu; int ret; + hdev->support_sfp_query = true; hdev->hw.mac.duplex = HCLGE_MAC_FULL; ret = hclge_cfg_mac_speed_dup_hw(hdev, hdev->hw.mac.speed, hdev->hw.mac.duplex); @@ -1964,15 +2041,16 @@ static int hclge_mac_init(struct hclge_dev *hdev) mac->link = 0; - if (netdev) - mtu = netdev->mtu; - else - mtu = ETH_DATA_LEN; + ret = hclge_set_mac_mtu(hdev, hdev->mps); + if (ret) { + dev_err(&hdev->pdev->dev, "set mtu failed ret=%d\n", ret); + return ret; + } - ret = hclge_set_mtu(handle, mtu); + ret = hclge_buffer_alloc(hdev); if (ret) dev_err(&hdev->pdev->dev, - "set mtu failed ret=%d\n", ret); + "allocate buffer fail, ret=%d\n", ret); return ret; } @@ -2061,34 +2139,58 @@ static void hclge_update_link_status(struct hclge_dev *hdev) } } +static int hclge_get_sfp_speed(struct hclge_dev *hdev, u32 *speed) +{ + struct hclge_sfp_speed_cmd *resp = NULL; + struct hclge_desc desc; + int ret; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_SFP_GET_SPEED, true); + resp = (struct hclge_sfp_speed_cmd *)desc.data; + ret = hclge_cmd_send(&hdev->hw, &desc, 1); + if (ret == -EOPNOTSUPP) { + dev_warn(&hdev->pdev->dev, + "IMP does not support get SFP speed %d\n", ret); + return ret; + } else if (ret) { + dev_err(&hdev->pdev->dev, "get sfp speed failed %d\n", ret); + return ret; + } + + *speed = resp->sfp_speed; + + return 0; +} + static int hclge_update_speed_duplex(struct hclge_dev *hdev) { struct hclge_mac mac = hdev->hw.mac; - u8 duplex; int speed; int ret; - /* get the speed and duplex as autoneg'result from mac cmd when phy + /* get the speed from SFP cmd when phy * doesn't exist. */ - if (mac.phydev || !mac.autoneg) + if (mac.phydev) return 0; - ret = hclge_query_mac_an_speed_dup(hdev, &speed, &duplex); - if (ret) { - dev_err(&hdev->pdev->dev, - "mac autoneg/speed/duplex query failed %d\n", ret); - return ret; - } + /* if IMP does not support get SFP/qSFP speed, return directly */ + if (!hdev->support_sfp_query) + return 0; - ret = hclge_cfg_mac_speed_dup(hdev, speed, duplex); - if (ret) { - dev_err(&hdev->pdev->dev, - "mac speed/duplex config failed %d\n", ret); + ret = hclge_get_sfp_speed(hdev, &speed); + if (ret == -EOPNOTSUPP) { + hdev->support_sfp_query = false; + return ret; + } else if (ret) { return ret; } - return 0; + if (speed == HCLGE_MAC_SPEED_UNKNOWN) + return 0; /* do nothing if no SFP */ + + /* must config full duplex for SFP */ + return hclge_cfg_mac_speed_dup(hdev, speed, HCLGE_MAC_FULL); } static int hclge_update_speed_duplex_h(struct hnae3_handle *handle) @@ -2129,12 +2231,13 @@ static void hclge_service_complete(struct hclge_dev *hdev) static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) { - u32 rst_src_reg; - u32 cmdq_src_reg; + u32 rst_src_reg, cmdq_src_reg, msix_src_reg; /* fetch the events from their corresponding regs */ rst_src_reg = hclge_read_dev(&hdev->hw, HCLGE_MISC_VECTOR_INT_STS); cmdq_src_reg = hclge_read_dev(&hdev->hw, HCLGE_VECTOR0_CMDQ_SRC_REG); + msix_src_reg = hclge_read_dev(&hdev->hw, + HCLGE_VECTOR0_PF_OTHER_INT_STS_REG); /* Assumption: If by any chance reset and mailbox events are reported * together then we will only process reset event in this go and will @@ -2144,7 +2247,16 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) */ /* check for vector0 reset event sources */ + if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & rst_src_reg) { + dev_info(&hdev->pdev->dev, "IMP
reset interrupt\n"); + set_bit(HNAE3_IMP_RESET, &hdev->reset_pending); + set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); + *clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B); + return HCLGE_VECTOR0_EVENT_RST; + } + if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & rst_src_reg) { + dev_info(&hdev->pdev->dev, "global reset interrupt\n"); set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending); *clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B); @@ -2152,17 +2264,16 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) } if (BIT(HCLGE_VECTOR0_CORERESET_INT_B) & rst_src_reg) { + dev_info(&hdev->pdev->dev, "core reset interrupt\n"); set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); set_bit(HNAE3_CORE_RESET, &hdev->reset_pending); *clearval = BIT(HCLGE_VECTOR0_CORERESET_INT_B); return HCLGE_VECTOR0_EVENT_RST; } - if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & rst_src_reg) { - set_bit(HNAE3_IMP_RESET, &hdev->reset_pending); - *clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B); - return HCLGE_VECTOR0_EVENT_RST; - } + /* check for vector0 msix event source */ + if (msix_src_reg & HCLGE_VECTOR0_REG_MSIX_MASK) + return HCLGE_VECTOR0_EVENT_ERR; /* check for vector0 mailbox(=CMDQ RX) event source */ if (BIT(HCLGE_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_reg) { @@ -2214,6 +2325,19 @@ static irqreturn_t hclge_misc_irq_handle(int irq, void *data) /* vector 0 interrupt is shared with reset and mailbox source events.*/ switch (event_cause) { + case HCLGE_VECTOR0_EVENT_ERR: + /* we do not know what type of reset is required now. This could + * only be decided after we fetch the type of errors which + * caused this event. Therefore, we will do below for now: + * 1. Assert HNAE3_UNKNOWN_RESET type of reset. This means we + * have defered type of reset to be used. + * 2. Schedule the reset serivce task. + * 3. When service task receives HNAE3_UNKNOWN_RESET type it + * will fetch the correct type of reset. This would be done + * by first decoding the types of errors. 
+ */ + set_bit(HNAE3_UNKNOWN_RESET, &hdev->reset_request); + /* fall through */ case HCLGE_VECTOR0_EVENT_RST: hclge_reset_task_schedule(hdev); break; @@ -2308,21 +2432,56 @@ static int hclge_notify_client(struct hclge_dev *hdev, int ret; ret = client->ops->reset_notify(handle, type); - if (ret) + if (ret) { + dev_err(&hdev->pdev->dev, + "notify nic client failed %d(%d)\n", type, ret); return ret; + } } return 0; } +static int hclge_notify_roce_client(struct hclge_dev *hdev, + enum hnae3_reset_notify_type type) +{ + struct hnae3_client *client = hdev->roce_client; + int ret = 0; + u16 i; + + if (!client) + return 0; + + if (!client->ops->reset_notify) + return -EOPNOTSUPP; + + for (i = 0; i < hdev->num_vmdq_vport + 1; i++) { + struct hnae3_handle *handle = &hdev->vport[i].roce; + + ret = client->ops->reset_notify(handle, type); + if (ret) { + dev_err(&hdev->pdev->dev, + "notify roce client failed %d(%d)", + type, ret); + return ret; + } + } + + return ret; +} + static int hclge_reset_wait(struct hclge_dev *hdev) { #define HCLGE_RESET_WATI_MS 100 -#define HCLGE_RESET_WAIT_CNT 5 +#define HCLGE_RESET_WAIT_CNT 200 u32 val, reg, reg_bit; u32 cnt = 0; switch (hdev->reset_type) { + case HNAE3_IMP_RESET: + reg = HCLGE_GLOBAL_RESET_REG; + reg_bit = HCLGE_IMP_RESET_BIT; + break; case HNAE3_GLOBAL_RESET: reg = HCLGE_GLOBAL_RESET_REG; reg_bit = HCLGE_GLOBAL_RESET_BIT; @@ -2335,6 +2494,8 @@ static int hclge_reset_wait(struct hclge_dev *hdev) reg = HCLGE_FUN_RST_ING; reg_bit = HCLGE_FUN_RST_ING_B; break; + case HNAE3_FLR_RESET: + break; default: dev_err(&hdev->pdev->dev, "Wait for unsupported reset type: %d\n", @@ -2342,6 +2503,20 @@ static int hclge_reset_wait(struct hclge_dev *hdev) return -EINVAL; } + if (hdev->reset_type == HNAE3_FLR_RESET) { + while (!test_bit(HNAE3_FLR_DONE, &hdev->flr_state) && + cnt++ < HCLGE_RESET_WAIT_CNT) + msleep(HCLGE_RESET_WATI_MS); + + if (!test_bit(HNAE3_FLR_DONE, &hdev->flr_state)) { + dev_err(&hdev->pdev->dev, + "flr wait timeout: %d\n", cnt); + return -EBUSY; + } + + return 0; + } + val = hclge_read_dev(&hdev->hw, reg); while (hnae3_get_bit(val, reg_bit) && cnt < HCLGE_RESET_WAIT_CNT) { msleep(HCLGE_RESET_WATI_MS); @@ -2358,6 +2533,55 @@ static int hclge_reset_wait(struct hclge_dev *hdev) return 0; } +static int hclge_set_vf_rst(struct hclge_dev *hdev, int func_id, bool reset) +{ + struct hclge_vf_rst_cmd *req; + struct hclge_desc desc; + + req = (struct hclge_vf_rst_cmd *)desc.data; + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_GBL_RST_STATUS, false); + req->dest_vfid = func_id; + + if (reset) + req->vf_rst = 0x1; + + return hclge_cmd_send(&hdev->hw, &desc, 1); +} + +int hclge_set_all_vf_rst(struct hclge_dev *hdev, bool reset) +{ + int i; + + for (i = hdev->num_vmdq_vport + 1; i < hdev->num_alloc_vport; i++) { + struct hclge_vport *vport = &hdev->vport[i]; + int ret; + + /* Send cmd to set/clear VF's FUNC_RST_ING */ + ret = hclge_set_vf_rst(hdev, vport->vport_id, reset); + if (ret) { + dev_err(&hdev->pdev->dev, + "set vf(%d) rst failed %d!\n", + vport->vport_id, ret); + return ret; + } + + if (!reset) + continue; + + /* Inform VF to process the reset. + * hclge_inform_reset_assert_to_vf may fail if VF + * driver is not loaded. 
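+ * In that case the reset still proceeds: the failure is only
+ * reported with dev_warn() below instead of aborting the loop.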
+ */ + ret = hclge_inform_reset_assert_to_vf(vport); + if (ret) + dev_warn(&hdev->pdev->dev, + "inform reset to vf(%d) failed %d!\n", + vport->vport_id, ret); + } + + return 0; +} + int hclge_func_reset_cmd(struct hclge_dev *hdev, int func_id) { struct hclge_desc desc; @@ -2396,11 +2620,16 @@ static void hclge_do_reset(struct hclge_dev *hdev) break; case HNAE3_FUNC_RESET: dev_info(&pdev->dev, "PF Reset requested\n"); - hclge_func_reset_cmd(hdev, 0); /* schedule again to check later */ set_bit(HNAE3_FUNC_RESET, &hdev->reset_pending); hclge_reset_task_schedule(hdev); break; + case HNAE3_FLR_RESET: + dev_info(&pdev->dev, "FLR requested\n"); + /* schedule again to check later */ + set_bit(HNAE3_FLR_RESET, &hdev->reset_pending); + hclge_reset_task_schedule(hdev); + break; default: dev_warn(&pdev->dev, "Unsupported reset type: %d\n", hdev->reset_type); @@ -2413,21 +2642,46 @@ static enum hnae3_reset_type hclge_get_reset_level(struct hclge_dev *hdev, { enum hnae3_reset_type rst_level = HNAE3_NONE_RESET; + /* first, resolve any unknown reset type to the known type(s) */ + if (test_bit(HNAE3_UNKNOWN_RESET, addr)) { + /* we will intentionally ignore any errors from this function + * as we will end up in *some* reset request in any case + */ + hclge_handle_hw_msix_error(hdev, addr); + clear_bit(HNAE3_UNKNOWN_RESET, addr); + /* We deferred the clearing of the error event which caused + * the interrupt since it was not possible to do that in + * interrupt context (and this is the reason we introduced + * the new UNKNOWN reset type). Now that the errors have been + * handled and cleared in hardware, we can safely enable + * interrupts. This is an exception to the norm. + */ + hclge_enable_vector(&hdev->misc_vector, true); + } + /* return the highest priority reset level amongst all */ - if (test_bit(HNAE3_GLOBAL_RESET, addr)) + if (test_bit(HNAE3_IMP_RESET, addr)) { + rst_level = HNAE3_IMP_RESET; + clear_bit(HNAE3_IMP_RESET, addr); + clear_bit(HNAE3_GLOBAL_RESET, addr); + clear_bit(HNAE3_CORE_RESET, addr); + clear_bit(HNAE3_FUNC_RESET, addr); + } else if (test_bit(HNAE3_GLOBAL_RESET, addr)) { rst_level = HNAE3_GLOBAL_RESET; - else if (test_bit(HNAE3_CORE_RESET, addr)) + clear_bit(HNAE3_GLOBAL_RESET, addr); + clear_bit(HNAE3_CORE_RESET, addr); + clear_bit(HNAE3_FUNC_RESET, addr); + } else if (test_bit(HNAE3_CORE_RESET, addr)) { rst_level = HNAE3_CORE_RESET; - else if (test_bit(HNAE3_IMP_RESET, addr)) - rst_level = HNAE3_IMP_RESET; - else if (test_bit(HNAE3_FUNC_RESET, addr)) + clear_bit(HNAE3_CORE_RESET, addr); + clear_bit(HNAE3_FUNC_RESET, addr); + } else if (test_bit(HNAE3_FUNC_RESET, addr)) { rst_level = HNAE3_FUNC_RESET; - - /* now, clear all other resets */ - clear_bit(HNAE3_GLOBAL_RESET, addr); - clear_bit(HNAE3_CORE_RESET, addr); - clear_bit(HNAE3_IMP_RESET, addr); - clear_bit(HNAE3_FUNC_RESET, addr); + clear_bit(HNAE3_FUNC_RESET, addr); + } else if (test_bit(HNAE3_FLR_RESET, addr)) { + rst_level = HNAE3_FLR_RESET; + clear_bit(HNAE3_FLR_RESET, addr); + } return rst_level; } @@ -2457,39 +2711,209 @@ static void hclge_clear_reset_cause(struct hclge_dev *hdev) hclge_enable_vector(&hdev->misc_vector, true); } +static int hclge_reset_prepare_down(struct hclge_dev *hdev) +{ + int ret = 0; + + switch (hdev->reset_type) { + case HNAE3_FUNC_RESET: + /* fall through */ + case HNAE3_FLR_RESET: + ret = hclge_set_all_vf_rst(hdev, true); + break; + default: + break; + } + + return ret; +} + +static int hclge_reset_prepare_wait(struct hclge_dev *hdev) +{ + u32 reg_val; + int ret = 0; + + switch (hdev->reset_type) { + case
HNAE3_FUNC_RESET: + /* There is no mechanism for PF to know if VF has stopped IO; + * for now, just wait 100 ms for VF to stop IO + */ + msleep(100); + ret = hclge_func_reset_cmd(hdev, 0); + if (ret) { + dev_err(&hdev->pdev->dev, + "asserting function reset fail %d!\n", ret); + return ret; + } + + /* After performing pf reset, it is not necessary to do the + * mailbox handling or send any command to firmware, because + * any mailbox handling or command to firmware is only valid + * after hclge_cmd_init is called. + */ + set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); + break; + case HNAE3_FLR_RESET: + /* There is no mechanism for PF to know if VF has stopped IO; + * for now, just wait 100 ms for VF to stop IO + */ + msleep(100); + set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); + set_bit(HNAE3_FLR_DOWN, &hdev->flr_state); + break; + case HNAE3_IMP_RESET: + reg_val = hclge_read_dev(&hdev->hw, HCLGE_PF_OTHER_INT_REG); + hclge_write_dev(&hdev->hw, HCLGE_PF_OTHER_INT_REG, + BIT(HCLGE_VECTOR0_IMP_RESET_INT_B) | reg_val); + break; + default: + break; + } + + dev_info(&hdev->pdev->dev, "prepare wait ok\n"); + + return ret; +} + +static bool hclge_reset_err_handle(struct hclge_dev *hdev, bool is_timeout) +{ +#define MAX_RESET_FAIL_CNT 5 +#define RESET_UPGRADE_DELAY_SEC 10 + + if (hdev->reset_pending) { + dev_info(&hdev->pdev->dev, "Reset pending %lu\n", + hdev->reset_pending); + return true; + } else if ((hdev->reset_type != HNAE3_IMP_RESET) && + (hclge_read_dev(&hdev->hw, HCLGE_GLOBAL_RESET_REG) & + BIT(HCLGE_IMP_RESET_BIT))) { + dev_info(&hdev->pdev->dev, + "reset failed because IMP Reset is pending\n"); + hclge_clear_reset_cause(hdev); + return false; + } else if (hdev->reset_fail_cnt < MAX_RESET_FAIL_CNT) { + hdev->reset_fail_cnt++; + if (is_timeout) { + set_bit(hdev->reset_type, &hdev->reset_pending); + dev_info(&hdev->pdev->dev, + "re-schedule to wait for hw reset done\n"); + return true; + } + + dev_info(&hdev->pdev->dev, "Upgrade reset level\n"); + hclge_clear_reset_cause(hdev); + mod_timer(&hdev->reset_timer, + jiffies + RESET_UPGRADE_DELAY_SEC * HZ); + + return false; + } + + hclge_clear_reset_cause(hdev); + dev_err(&hdev->pdev->dev, "Reset fail!\n"); + return false; +} + +static int hclge_reset_prepare_up(struct hclge_dev *hdev) +{ + int ret = 0; + + switch (hdev->reset_type) { + case HNAE3_FUNC_RESET: + /* fall through */ + case HNAE3_FLR_RESET: + ret = hclge_set_all_vf_rst(hdev, false); + break; + default: + break; + } + + return ret; +} + static void hclge_reset(struct hclge_dev *hdev) { struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev); - struct hnae3_handle *handle; + bool is_timeout = false; + int ret; /* Initialize ae_dev reset status as well, in case enet layer wants to * know if device is undergoing reset */ ae_dev->reset_type = hdev->reset_type; + hdev->reset_count++; /* perform reset of the stack & ae device for a client */ - handle = &hdev->vport[0].nic; + ret = hclge_notify_roce_client(hdev, HNAE3_DOWN_CLIENT); + if (ret) + goto err_reset; + + ret = hclge_reset_prepare_down(hdev); + if (ret) + goto err_reset; + rtnl_lock(); - hclge_notify_client(hdev, HNAE3_DOWN_CLIENT); + ret = hclge_notify_client(hdev, HNAE3_DOWN_CLIENT); + if (ret) + goto err_reset_lock; + rtnl_unlock(); - if (!hclge_reset_wait(hdev)) { - rtnl_lock(); - hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT); - hclge_reset_ae_dev(hdev->ae_dev); - hclge_notify_client(hdev, HNAE3_INIT_CLIENT); + ret = hclge_reset_prepare_wait(hdev); + if (ret) + goto err_reset; - hclge_clear_reset_cause(hdev); - } else { -
rtnl_lock(); - /* schedule again to check pending resets later */ - set_bit(hdev->reset_type, &hdev->reset_pending); - hclge_reset_task_schedule(hdev); + if (hclge_reset_wait(hdev)) { + is_timeout = true; + goto err_reset; } - hclge_notify_client(hdev, HNAE3_UP_CLIENT); - handle->last_reset_time = jiffies; + ret = hclge_notify_roce_client(hdev, HNAE3_UNINIT_CLIENT); + if (ret) + goto err_reset; + + rtnl_lock(); + ret = hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT); + if (ret) + goto err_reset_lock; + + ret = hclge_reset_ae_dev(hdev->ae_dev); + if (ret) + goto err_reset_lock; + + ret = hclge_notify_client(hdev, HNAE3_INIT_CLIENT); + if (ret) + goto err_reset_lock; + + hclge_clear_reset_cause(hdev); + + ret = hclge_reset_prepare_up(hdev); + if (ret) + goto err_reset_lock; + + ret = hclge_notify_client(hdev, HNAE3_UP_CLIENT); + if (ret) + goto err_reset_lock; + + rtnl_unlock(); + + ret = hclge_notify_roce_client(hdev, HNAE3_INIT_CLIENT); + if (ret) + goto err_reset; + + ret = hclge_notify_roce_client(hdev, HNAE3_UP_CLIENT); + if (ret) + goto err_reset; + + hdev->last_reset_time = jiffies; + hdev->reset_fail_cnt = 0; ae_dev->reset_type = HNAE3_NONE_RESET; + + return; + +err_reset_lock: + rtnl_unlock(); +err_reset: + if (hclge_reset_err_handle(hdev, is_timeout)) + hclge_reset_task_schedule(hdev); } static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle) @@ -2515,20 +2939,42 @@ static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle) if (!handle) handle = &hdev->vport[0].nic; - if (time_before(jiffies, (handle->last_reset_time + 3 * HZ))) + if (time_before(jiffies, (hdev->last_reset_time + 3 * HZ))) return; - else if (time_after(jiffies, (handle->last_reset_time + 4 * 5 * HZ))) - handle->reset_level = HNAE3_FUNC_RESET; + else if (hdev->default_reset_request) + hdev->reset_level = + hclge_get_reset_level(hdev, + &hdev->default_reset_request); + else if (time_after(jiffies, (hdev->last_reset_time + 4 * 5 * HZ))) + hdev->reset_level = HNAE3_FUNC_RESET; dev_info(&hdev->pdev->dev, "received reset event, reset type is %d", - handle->reset_level); + hdev->reset_level); /* request reset & schedule reset task */ - set_bit(handle->reset_level, &hdev->reset_request); + set_bit(hdev->reset_level, &hdev->reset_request); hclge_reset_task_schedule(hdev); - if (handle->reset_level < HNAE3_GLOBAL_RESET) - handle->reset_level++; + if (hdev->reset_level < HNAE3_GLOBAL_RESET) + hdev->reset_level++; +} + +static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev, + enum hnae3_reset_type rst_type) +{ + struct hclge_dev *hdev = ae_dev->priv; + + set_bit(rst_type, &hdev->default_reset_request); +} + +static void hclge_reset_timer(struct timer_list *t) +{ + struct hclge_dev *hdev = from_timer(hdev, t, reset_timer); + + dev_info(&hdev->pdev->dev, + "triggering global reset in reset timer\n"); + set_bit(HNAE3_GLOBAL_RESET, &hdev->default_reset_request); + hclge_reset_event(hdev->pdev, NULL); } static void hclge_reset_subtask(struct hclge_dev *hdev) @@ -2542,6 +2988,7 @@ static void hclge_reset_subtask(struct hclge_dev *hdev) * b. else, we can come back later to check this status so re-sched * now.
*/ + hdev->last_reset_time = jiffies; hdev->reset_type = hclge_get_reset_level(hdev, &hdev->reset_pending); if (hdev->reset_type != HNAE3_NONE_RESET) hclge_reset(hdev); @@ -2584,6 +3031,23 @@ static void hclge_mailbox_service_task(struct work_struct *work) clear_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state); } +static void hclge_update_vport_alive(struct hclge_dev *hdev) +{ + int i; + + /* start from vport 1 for PF is always alive */ + for (i = 1; i < hdev->num_alloc_vport; i++) { + struct hclge_vport *vport = &hdev->vport[i]; + + if (time_after(jiffies, vport->last_active_jiffies + 8 * HZ)) + clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state); + + /* If vf is not alive, set to default value */ + if (!test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state)) + vport->mps = HCLGE_MAC_DEFAULT_FRAME; + } +} + static void hclge_service_task(struct work_struct *work) { struct hclge_dev *hdev = @@ -2596,6 +3060,7 @@ static void hclge_service_task(struct work_struct *work) hclge_update_speed_duplex(hdev); hclge_update_link_status(hdev); + hclge_update_vport_alive(hdev); hclge_service_complete(hdev); } @@ -4212,6 +4677,13 @@ static int hclge_add_fd_entry(struct hnae3_handle *handle, u8 vf = ethtool_get_flow_spec_ring_vf(fs->ring_cookie); u16 tqps; + if (vf > hdev->num_req_vfs) { + dev_err(&hdev->pdev->dev, + "Error: vf id (%d) > max vf num (%d)\n", + vf, hdev->num_req_vfs); + return -EINVAL; + } + dst_vport_id = vf ? hdev->vport[vf].vport_id : vport->vport_id; tqps = vf ? hdev->vport[vf].alloc_tqps : vport->alloc_tqps; @@ -4222,13 +4694,6 @@ static int hclge_add_fd_entry(struct hnae3_handle *handle, return -EINVAL; } - if (vf > hdev->num_req_vfs) { - dev_err(&hdev->pdev->dev, - "Error: vf id (%d) > max vf num (%d)\n", - vf, hdev->num_req_vfs); - return -EINVAL; - } - action = HCLGE_FD_ACTION_ACCEPT_PACKET; q_index = ring; } @@ -4336,8 +4801,16 @@ static int hclge_restore_fd_entries(struct hnae3_handle *handle) struct hlist_node *node; int ret; + /* Return ok here, because reset error handling will check this + * return value. If error is returned here, the reset process will + * fail. 
+ */ if (!hnae3_dev_fd_supported(hdev)) - return -EOPNOTSUPP; + return 0; + + /* if fd is disabled, should not restore it when reset */ + if (!hdev->fd_cfg.fd_en) + return 0; hlist_for_each_entry_safe(rule, node, &hdev->fd_rule_list, rule_node) { ret = hclge_config_action(hdev, HCLGE_FD_STAGE_1, rule); @@ -4592,6 +5065,31 @@ static int hclge_get_all_rules(struct hnae3_handle *handle, return 0; } +static bool hclge_get_hw_reset_stat(struct hnae3_handle *handle) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; + + return hclge_read_dev(&hdev->hw, HCLGE_GLOBAL_RESET_REG) || + hclge_read_dev(&hdev->hw, HCLGE_FUN_RST_ING); +} + +static bool hclge_ae_dev_resetting(struct hnae3_handle *handle) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; + + return test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state); +} + +static unsigned long hclge_ae_dev_reset_cnt(struct hnae3_handle *handle) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; + + return hdev->reset_count; +} + static void hclge_enable_fd(struct hnae3_handle *handle, bool enable) { struct hclge_vport *vport = hclge_get_vport(handle); @@ -4801,19 +5299,28 @@ static void hclge_reset_tqp_stats(struct hnae3_handle *handle) } } -static int hclge_ae_start(struct hnae3_handle *handle) +static void hclge_set_timer_task(struct hnae3_handle *handle, bool enable) { struct hclge_vport *vport = hclge_get_vport(handle); struct hclge_dev *hdev = vport->back; - int i; - for (i = 0; i < vport->alloc_tqps; i++) - hclge_tqp_enable(hdev, i, 0, true); + if (enable) { + mod_timer(&hdev->service_timer, jiffies + HZ); + } else { + del_timer_sync(&hdev->service_timer); + cancel_work_sync(&hdev->service_task); + clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state); + } +} + +static int hclge_ae_start(struct hnae3_handle *handle) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; /* mac enable */ hclge_cfg_mac_mode(hdev, true); clear_bit(HCLGE_STATE_DOWN, &hdev->state); - mod_timer(&hdev->service_timer, jiffies + HZ); hdev->hw.mac.link = 0; /* reset tqp stats */ @@ -4832,17 +5339,17 @@ static void hclge_ae_stop(struct hnae3_handle *handle) set_bit(HCLGE_STATE_DOWN, &hdev->state); - del_timer_sync(&hdev->service_timer); - cancel_work_sync(&hdev->service_task); - clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state); - - if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) { + /* If it is not PF reset, the firmware will disable the MAC, + * so it only need to stop phy here. 
+ */ + if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) && + hdev->reset_type != HNAE3_FUNC_RESET) { + hclge_mac_stop_phy(hdev); + return; + } - for (i = 0; i < vport->alloc_tqps; i++) - hclge_tqp_enable(hdev, i, 0, false); + for (i = 0; i < handle->kinfo.num_tqps; i++) + hclge_reset_tqp(handle, i); /* Mac disable */ hclge_cfg_mac_mode(hdev, false); @@ -4851,11 +5358,35 @@ /* reset tqp stats */ hclge_reset_tqp_stats(handle); - del_timer_sync(&hdev->service_timer); - cancel_work_sync(&hdev->service_task); hclge_update_link_status(hdev); } +int hclge_vport_start(struct hclge_vport *vport) +{ + set_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state); + vport->last_active_jiffies = jiffies; + return 0; +} + +void hclge_vport_stop(struct hclge_vport *vport) +{ + clear_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state); +} + +static int hclge_client_start(struct hnae3_handle *handle) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + + return hclge_vport_start(vport); +} + +static void hclge_client_stop(struct hnae3_handle *handle) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + + hclge_vport_stop(vport); +} + static int hclge_get_mac_vlan_cmd_status(struct hclge_vport *vport, u16 cmdq_resp, u8 resp_code, enum hclge_mac_vlan_tbl_opcode op) @@ -6003,54 +6534,76 @@ int hclge_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable) return hclge_set_vlan_rx_offload_cfg(vport); } -static int hclge_set_mac_mtu(struct hclge_dev *hdev, int new_mtu) +static int hclge_set_mac_mtu(struct hclge_dev *hdev, int new_mps) { struct hclge_config_max_frm_size_cmd *req; struct hclge_desc desc; - int max_frm_size; - int ret; - - max_frm_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN; - - if (max_frm_size < HCLGE_MAC_MIN_FRAME || - max_frm_size > HCLGE_MAC_MAX_FRAME) - return -EINVAL; - - max_frm_size = max(max_frm_size, HCLGE_MAC_DEFAULT_FRAME); hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CONFIG_MAX_FRM_SIZE, false); req = (struct hclge_config_max_frm_size_cmd *)desc.data; - req->max_frm_size = cpu_to_le16(max_frm_size); + req->max_frm_size = cpu_to_le16(new_mps); req->min_frm_size = HCLGE_MAC_MIN_FRAME; - ret = hclge_cmd_send(&hdev->hw, &desc, 1); - if (ret) - dev_err(&hdev->pdev->dev, "set mtu fail, ret =%d.\n", ret); - else - hdev->mps = max_frm_size; - - return ret; + return hclge_cmd_send(&hdev->hw, &desc, 1); } static int hclge_set_mtu(struct hnae3_handle *handle, int new_mtu) { struct hclge_vport *vport = hclge_get_vport(handle); + + return hclge_set_vport_mtu(vport, new_mtu); +} + +int hclge_set_vport_mtu(struct hclge_vport *vport, int new_mtu) +{ struct hclge_dev *hdev = vport->back; - int ret; + int i, max_frm_size, ret = 0; + + max_frm_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + 2 * VLAN_HLEN; + if (max_frm_size < HCLGE_MAC_MIN_FRAME || + max_frm_size > HCLGE_MAC_MAX_FRAME) + return -EINVAL; + + max_frm_size = max(max_frm_size, HCLGE_MAC_DEFAULT_FRAME); + mutex_lock(&hdev->vport_lock); + /* VF's mps must fit within hdev->mps */ + if (vport->vport_id && max_frm_size > hdev->mps) { + mutex_unlock(&hdev->vport_lock); + return -EINVAL; + } else if (vport->vport_id) { + vport->mps = max_frm_size; + mutex_unlock(&hdev->vport_lock); + return 0; + } + + /* PF's mps must be greater than or equal to any VF's mps */ + for (i = 1; i < hdev->num_alloc_vport; i++) + if (max_frm_size < hdev->vport[i].mps) { + mutex_unlock(&hdev->vport_lock); + return -EINVAL; + } + + hclge_notify_client(hdev, HNAE3_DOWN_CLIENT); - ret = hclge_set_mac_mtu(hdev, new_mtu); + ret =
hclge_set_mac_mtu(hdev, max_frm_size); if (ret) { dev_err(&hdev->pdev->dev, "Change mtu fail, ret =%d\n", ret); - return ret; + goto out; } + hdev->mps = max_frm_size; + vport->mps = max_frm_size; + ret = hclge_buffer_alloc(hdev); if (ret) dev_err(&hdev->pdev->dev, "Allocate buffer fail, ret =%d\n", ret); +out: + hclge_notify_client(hdev, HNAE3_UP_CLIENT); + mutex_unlock(&hdev->vport_lock); return ret; } @@ -6098,8 +6651,7 @@ static int hclge_get_reset_status(struct hclge_dev *hdev, u16 queue_id) return hnae3_get_bit(req->ready_to_reset, HCLGE_TQP_RESET_B); } -static u16 hclge_covert_handle_qid_global(struct hnae3_handle *handle, - u16 queue_id) +u16 hclge_covert_handle_qid_global(struct hnae3_handle *handle, u16 queue_id) { struct hnae3_queue *queue; struct hclge_tqp *tqp; @@ -6250,7 +6802,7 @@ int hclge_cfg_flowctrl(struct hclge_dev *hdev) if (!phydev->link || !phydev->autoneg) return 0; - local_advertising = ethtool_adv_to_lcl_adv_t(phydev->advertising); + local_advertising = linkmode_adv_to_lcl_adv_t(phydev->advertising); if (phydev->pause) remote_advertising = LPA_PAUSE_CAP; @@ -6612,6 +7164,8 @@ static void hclge_state_uninit(struct hclge_dev *hdev) if (hdev->service_timer.function) del_timer_sync(&hdev->service_timer); + if (hdev->reset_timer.function) + del_timer_sync(&hdev->reset_timer); if (hdev->service_task.func) cancel_work_sync(&hdev->service_task); if (hdev->rst_service_task.func) @@ -6620,6 +7174,34 @@ static void hclge_state_uninit(struct hclge_dev *hdev) cancel_work_sync(&hdev->mbx_service_task); } +static void hclge_flr_prepare(struct hnae3_ae_dev *ae_dev) +{ +#define HCLGE_FLR_WAIT_MS 100 +#define HCLGE_FLR_WAIT_CNT 50 + struct hclge_dev *hdev = ae_dev->priv; + int cnt = 0; + + clear_bit(HNAE3_FLR_DOWN, &hdev->flr_state); + clear_bit(HNAE3_FLR_DONE, &hdev->flr_state); + set_bit(HNAE3_FLR_RESET, &hdev->default_reset_request); + hclge_reset_event(hdev->pdev, NULL); + + while (!test_bit(HNAE3_FLR_DOWN, &hdev->flr_state) && + cnt++ < HCLGE_FLR_WAIT_CNT) + msleep(HCLGE_FLR_WAIT_MS); + + if (!test_bit(HNAE3_FLR_DOWN, &hdev->flr_state)) + dev_err(&hdev->pdev->dev, + "flr wait down timeout: %d\n", cnt); +} + +static void hclge_flr_done(struct hnae3_ae_dev *ae_dev) +{ + struct hclge_dev *hdev = ae_dev->priv; + + set_bit(HNAE3_FLR_DONE, &hdev->flr_state); +} + static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) { struct pci_dev *pdev = ae_dev->pdev; @@ -6635,7 +7217,11 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) hdev->pdev = pdev; hdev->ae_dev = ae_dev; hdev->reset_type = HNAE3_NONE_RESET; + hdev->reset_level = HNAE3_FUNC_RESET; ae_dev->priv = hdev; + hdev->mps = ETH_FRAME_LEN + ETH_FCS_LEN + 2 * VLAN_HLEN; + + mutex_init(&hdev->vport_lock); ret = hclge_pci_init(hdev); if (ret) { @@ -6727,6 +7313,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) goto err_mdiobus_unreg; } + ret = hclge_config_gro(hdev, true); + if (ret) + goto err_mdiobus_unreg; + ret = hclge_init_vlan_config(hdev); if (ret) { dev_err(&pdev->dev, "VLAN init fail, ret =%d\n", ret); @@ -6762,13 +7352,14 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) ret = hclge_hw_error_set_state(hdev, true); if (ret) { dev_err(&pdev->dev, - "hw error interrupts enable failed, ret =%d\n", ret); + "fail(%d) to enable hw error interrupts\n", ret); goto err_mdiobus_unreg; } hclge_dcb_ops_set(hdev); timer_setup(&hdev->service_timer, hclge_service_timer, 0); + timer_setup(&hdev->reset_timer, hclge_reset_timer, 0); INIT_WORK(&hdev->service_task, hclge_service_task); 
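/* The timer_setup()/from_timer() pair registered just above is the kernel's
 * current timer idiom, and hclge_reset_timer() uses it to escalate a failed
 * reset to a global reset after a delay. A minimal, hypothetical sketch of
 * that pattern follows; the my_* names are illustrative only and are not
 * part of this driver.
 */
#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/bitops.h>

#define MY_GLOBAL_RESET	0	/* illustrative reset-level bit */

struct my_dev {
	struct timer_list reset_timer;
	unsigned long reset_request;	/* bitmap of requested reset levels */
};

static void my_schedule_reset_task(struct my_dev *dev)
{
	/* stand-in for queueing the driver's reset service work */
}

static void my_reset_timer(struct timer_list *t)
{
	/* from_timer() is a container_of() wrapper that recovers the
	 * structure embedding the timer_list
	 */
	struct my_dev *dev = from_timer(dev, t, reset_timer);

	set_bit(MY_GLOBAL_RESET, &dev->reset_request);
	my_schedule_reset_task(dev);
}

static void my_reset_err_handle(struct my_dev *dev)
{
	/* a failed lower-level reset re-arms the timer; the callback
	 * fires later and requests the stronger reset level
	 */
	mod_timer(&dev->reset_timer, jiffies + 10 * HZ);
}

static void my_init(struct my_dev *dev)
{
	timer_setup(&dev->reset_timer, my_reset_timer, 0);
}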
INIT_WORK(&hdev->rst_service_task, hclge_reset_service_task); INIT_WORK(&hdev->mbx_service_task, hclge_mailbox_service_task); @@ -6779,6 +7370,7 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) hclge_enable_vector(&hdev->misc_vector, true); hclge_state_init(hdev); + hdev->last_reset_time = jiffies; pr_info("%s driver initialization finished.\n", HCLGE_DRIVER_NAME); return 0; @@ -6806,6 +7398,17 @@ static void hclge_stats_clear(struct hclge_dev *hdev) memset(&hdev->hw_stats, 0, sizeof(hdev->hw_stats)); } +static void hclge_reset_vport_state(struct hclge_dev *hdev) +{ + struct hclge_vport *vport = hdev->vport; + int i; + + for (i = 0; i < hdev->num_alloc_vport; i++) { + hclge_vport_start(vport); + vport++; + } +} + static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev) { struct hclge_dev *hdev = ae_dev->priv; @@ -6823,19 +7426,6 @@ static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev) return ret; } - ret = hclge_get_cap(hdev); - if (ret) { - dev_err(&pdev->dev, "get hw capability error, ret = %d.\n", - ret); - return ret; - } - - ret = hclge_configure(hdev); - if (ret) { - dev_err(&pdev->dev, "Configure dev error, ret = %d.\n", ret); - return ret; - } - ret = hclge_map_tqp(hdev); if (ret) { dev_err(&pdev->dev, "Map tqp error, ret = %d.\n", ret); @@ -6856,6 +7446,10 @@ static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev) return ret; } + ret = hclge_config_gro(hdev, true); + if (ret) + return ret; + ret = hclge_init_vlan_config(hdev); if (ret) { dev_err(&pdev->dev, "VLAN init fail, ret =%d\n", ret); @@ -6881,11 +7475,17 @@ static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev) return ret; } - /* Re-enable the TM hw error interrupts because - * they get disabled on core/global reset. + /* Re-enable the hw error interrupts because + * the interrupts get disabled on core/global reset. 
*/ - if (hclge_enable_tm_hw_error(hdev, true)) - dev_err(&pdev->dev, "failed to enable TM hw error interrupts\n"); + ret = hclge_hw_error_set_state(hdev, true); + if (ret) { + dev_err(&pdev->dev, + "fail(%d) to re-enable HNS hw error interrupts\n", ret); + return ret; + } + + hclge_reset_vport_state(hdev); dev_info(&pdev->dev, "Reset done, %s driver initialization finished.\n", HCLGE_DRIVER_NAME); @@ -6913,6 +7513,7 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev) hclge_destroy_cmd_queue(&hdev->hw); hclge_misc_irq_uninit(hdev); hclge_pci_uninit(hdev); + mutex_destroy(&hdev->vport_lock); ae_dev->priv = NULL; } @@ -7166,8 +7767,15 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num, return 0; } +#define MAX_SEPARATE_NUM 4 +#define SEPARATOR_VALUE 0xFFFFFFFF +#define REG_NUM_PER_LINE 4 +#define REG_LEN_PER_LINE (REG_NUM_PER_LINE * sizeof(u32)) + static int hclge_get_regs_len(struct hnae3_handle *handle) { + int cmdq_lines, common_lines, ring_lines, tqp_intr_lines; + struct hnae3_knic_private_info *kinfo = &handle->kinfo; struct hclge_vport *vport = hclge_get_vport(handle); struct hclge_dev *hdev = vport->back; u32 regs_num_32_bit, regs_num_64_bit; @@ -7180,15 +7788,25 @@ static int hclge_get_regs_len(struct hnae3_handle *handle) return -EOPNOTSUPP; } - return regs_num_32_bit * sizeof(u32) + regs_num_64_bit * sizeof(u64); + cmdq_lines = sizeof(cmdq_reg_addr_list) / REG_LEN_PER_LINE + 1; + common_lines = sizeof(common_reg_addr_list) / REG_LEN_PER_LINE + 1; + ring_lines = sizeof(ring_reg_addr_list) / REG_LEN_PER_LINE + 1; + tqp_intr_lines = sizeof(tqp_intr_reg_addr_list) / REG_LEN_PER_LINE + 1; + + return (cmdq_lines + common_lines + ring_lines * kinfo->num_tqps + + tqp_intr_lines * (hdev->num_msi_used - 1)) * REG_LEN_PER_LINE + + regs_num_32_bit * sizeof(u32) + regs_num_64_bit * sizeof(u64); } static void hclge_get_regs(struct hnae3_handle *handle, u32 *version, void *data) { + struct hnae3_knic_private_info *kinfo = &handle->kinfo; struct hclge_vport *vport = hclge_get_vport(handle); struct hclge_dev *hdev = vport->back; u32 regs_num_32_bit, regs_num_64_bit; + int i, j, reg_um, separator_num; + u32 *reg = data; int ret; *version = hdev->fw_version; @@ -7200,16 +7818,53 @@ static void hclge_get_regs(struct hnae3_handle *handle, u32 *version, return; } - ret = hclge_get_32_bit_regs(hdev, regs_num_32_bit, data); + /* fetching per-PF registers valus from PF PCIe register space */ + reg_um = sizeof(cmdq_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (i = 0; i < reg_um; i++) + *reg++ = hclge_read_dev(&hdev->hw, cmdq_reg_addr_list[i]); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + + reg_um = sizeof(common_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (i = 0; i < reg_um; i++) + *reg++ = hclge_read_dev(&hdev->hw, common_reg_addr_list[i]); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + + reg_um = sizeof(ring_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (j = 0; j < kinfo->num_tqps; j++) { + for (i = 0; i < reg_um; i++) + *reg++ = hclge_read_dev(&hdev->hw, + ring_reg_addr_list[i] + + 0x200 * j); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + } + + reg_um = sizeof(tqp_intr_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (j = 0; j < hdev->num_msi_used - 1; j++) { + for (i = 0; i < reg_um; i++) + 
*reg++ = hclge_read_dev(&hdev->hw, + tqp_intr_reg_addr_list[i] + + 4 * j); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + } + + /* fetching PF common registers values from firmware */ + ret = hclge_get_32_bit_regs(hdev, regs_num_32_bit, reg); if (ret) { dev_err(&hdev->pdev->dev, "Get 32 bit register failed, ret = %d.\n", ret); return; } - data = (u32 *)data + regs_num_32_bit; - ret = hclge_get_64_bit_regs(hdev, regs_num_64_bit, - data); + reg += regs_num_32_bit; + ret = hclge_get_64_bit_regs(hdev, regs_num_64_bit, reg); if (ret) dev_err(&hdev->pdev->dev, "Get 64 bit register failed, ret = %d.\n", ret); @@ -7272,9 +7927,19 @@ static void hclge_get_link_mode(struct hnae3_handle *handle, } } +static int hclge_gro_en(struct hnae3_handle *handle, int enable) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; + + return hclge_config_gro(hdev, enable); +} + static const struct hnae3_ae_ops hclge_ops = { .init_ae_dev = hclge_init_ae_dev, .uninit_ae_dev = hclge_uninit_ae_dev, + .flr_prepare = hclge_flr_prepare, + .flr_done = hclge_flr_done, .init_client_instance = hclge_init_client_instance, .uninit_client_instance = hclge_uninit_client_instance, .map_ring_to_vector = hclge_map_ring_to_vector, @@ -7285,6 +7950,8 @@ static const struct hnae3_ae_ops hclge_ops = { .set_loopback = hclge_set_loopback, .start = hclge_ae_start, .stop = hclge_ae_stop, + .client_start = hclge_client_start, + .client_stop = hclge_client_stop, .get_status = hclge_get_status, .get_ksettings_an_result = hclge_get_ksettings_an_result, .update_speed_duplex_h = hclge_update_speed_duplex_h, @@ -7321,6 +7988,7 @@ static const struct hnae3_ae_ops hclge_ops = { .set_vf_vlan_filter = hclge_set_vf_vlan_filter, .enable_hw_strip_rxvtag = hclge_en_hw_strip_rxvtag, .reset_event = hclge_reset_event, + .set_default_reset_request = hclge_set_def_reset_request, .get_tqps_and_rss_info = hclge_get_tqps_and_rss_info, .set_channels = hclge_set_channels, .get_channels = hclge_get_channels, @@ -7336,7 +8004,14 @@ static const struct hnae3_ae_ops hclge_ops = { .get_fd_all_rules = hclge_get_all_rules, .restore_fd_rules = hclge_restore_fd_entries, .enable_fd = hclge_enable_fd, - .process_hw_error = hclge_process_ras_hw_error, + .dbg_run_cmd = hclge_dbg_run_cmd, + .handle_hw_ras_error = hclge_handle_hw_ras_error, + .get_hw_reset_stat = hclge_get_hw_reset_stat, + .ae_dev_resetting = hclge_ae_dev_resetting, + .ae_dev_reset_cnt = hclge_ae_dev_reset_cnt, + .set_gro_en = hclge_gro_en, + .get_global_queue_id = hclge_covert_handle_qid_global, + .set_timer_task = hclge_set_timer_task, }; static struct hnae3_ae_algo ae_algo = { diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h index 0d9215404269..6615b85a1c52 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h @@ -28,6 +28,62 @@ #define HCLGE_VECTOR_REG_OFFSET 0x4 #define HCLGE_VECTOR_VF_OFFSET 0x100000 +#define HCLGE_CMDQ_TX_ADDR_L_REG 0x27000 +#define HCLGE_CMDQ_TX_ADDR_H_REG 0x27004 +#define HCLGE_CMDQ_TX_DEPTH_REG 0x27008 +#define HCLGE_CMDQ_TX_TAIL_REG 0x27010 +#define HCLGE_CMDQ_TX_HEAD_REG 0x27014 +#define HCLGE_CMDQ_RX_ADDR_L_REG 0x27018 +#define HCLGE_CMDQ_RX_ADDR_H_REG 0x2701C +#define HCLGE_CMDQ_RX_DEPTH_REG 0x27020 +#define HCLGE_CMDQ_RX_TAIL_REG 0x27024 +#define HCLGE_CMDQ_RX_HEAD_REG 0x27028 +#define HCLGE_CMDQ_INTR_SRC_REG 0x27100 +#define HCLGE_CMDQ_INTR_STS_REG 0x27104 
+#define HCLGE_CMDQ_INTR_EN_REG 0x27108 +#define HCLGE_CMDQ_INTR_GEN_REG 0x2710C + +/* bar registers for common func */ +#define HCLGE_VECTOR0_OTER_EN_REG 0x20600 +#define HCLGE_RAS_OTHER_STS_REG 0x20B00 +#define HCLGE_FUNC_RESET_STS_REG 0x20C00 +#define HCLGE_GRO_EN_REG 0x28000 + +/* bar registers for rcb */ +#define HCLGE_RING_RX_ADDR_L_REG 0x80000 +#define HCLGE_RING_RX_ADDR_H_REG 0x80004 +#define HCLGE_RING_RX_BD_NUM_REG 0x80008 +#define HCLGE_RING_RX_BD_LENGTH_REG 0x8000C +#define HCLGE_RING_RX_MERGE_EN_REG 0x80014 +#define HCLGE_RING_RX_TAIL_REG 0x80018 +#define HCLGE_RING_RX_HEAD_REG 0x8001C +#define HCLGE_RING_RX_FBD_NUM_REG 0x80020 +#define HCLGE_RING_RX_OFFSET_REG 0x80024 +#define HCLGE_RING_RX_FBD_OFFSET_REG 0x80028 +#define HCLGE_RING_RX_STASH_REG 0x80030 +#define HCLGE_RING_RX_BD_ERR_REG 0x80034 +#define HCLGE_RING_TX_ADDR_L_REG 0x80040 +#define HCLGE_RING_TX_ADDR_H_REG 0x80044 +#define HCLGE_RING_TX_BD_NUM_REG 0x80048 +#define HCLGE_RING_TX_PRIORITY_REG 0x8004C +#define HCLGE_RING_TX_TC_REG 0x80050 +#define HCLGE_RING_TX_MERGE_EN_REG 0x80054 +#define HCLGE_RING_TX_TAIL_REG 0x80058 +#define HCLGE_RING_TX_HEAD_REG 0x8005C +#define HCLGE_RING_TX_FBD_NUM_REG 0x80060 +#define HCLGE_RING_TX_OFFSET_REG 0x80064 +#define HCLGE_RING_TX_EBD_NUM_REG 0x80068 +#define HCLGE_RING_TX_EBD_OFFSET_REG 0x80070 +#define HCLGE_RING_TX_BD_ERR_REG 0x80074 +#define HCLGE_RING_EN_REG 0x80090 + +/* bar registers for tqp interrupt */ +#define HCLGE_TQP_INTR_CTRL_REG 0x20000 +#define HCLGE_TQP_INTR_GL0_REG 0x20100 +#define HCLGE_TQP_INTR_GL1_REG 0x20200 +#define HCLGE_TQP_INTR_GL2_REG 0x20300 +#define HCLGE_TQP_INTR_RL_REG 0x20900 + #define HCLGE_RSS_IND_TBL_SIZE 512 #define HCLGE_RSS_SET_BITMAP_MSK GENMASK(15, 0) #define HCLGE_RSS_KEY_SIZE 40 @@ -97,11 +153,13 @@ enum HLCGE_PORT_TYPE { #define HCLGE_NETWORK_PORT_ID_M GENMASK(3, 0) /* Reset related Registers */ +#define HCLGE_PF_OTHER_INT_REG 0x20600 #define HCLGE_MISC_RESET_STS_REG 0x20700 #define HCLGE_MISC_VECTOR_INT_STS 0x20800 #define HCLGE_GLOBAL_RESET_REG 0x20A00 #define HCLGE_GLOBAL_RESET_BIT 0 #define HCLGE_CORE_RESET_BIT 1 +#define HCLGE_IMP_RESET_BIT 2 #define HCLGE_FUN_RST_ING 0x20C00 #define HCLGE_FUN_RST_ING_B 0 @@ -115,8 +173,10 @@ enum HLCGE_PORT_TYPE { /* CMDQ register bits for RX event(=MBX event) */ #define HCLGE_VECTOR0_RX_CMDQ_INT_B 1 +#define HCLGE_VECTOR0_IMP_RESET_INT_B 1 + #define HCLGE_MAC_DEFAULT_FRAME \ - (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN + ETH_DATA_LEN) + (ETH_HLEN + ETH_FCS_LEN + 2 * VLAN_HLEN + ETH_DATA_LEN) #define HCLGE_MAC_MIN_FRAME 64 #define HCLGE_MAC_MAX_FRAME 9728 @@ -145,12 +205,14 @@ enum HCLGE_DEV_STATE { enum hclge_evt_cause { HCLGE_VECTOR0_EVENT_RST, HCLGE_VECTOR0_EVENT_MBX, + HCLGE_VECTOR0_EVENT_ERR, HCLGE_VECTOR0_EVENT_OTHER, }; #define HCLGE_MPF_ENBALE 1 enum HCLGE_MAC_SPEED { + HCLGE_MAC_SPEED_UNKNOWN = 0, /* unknown */ HCLGE_MAC_SPEED_10M = 10, /* 10 Mbps */ HCLGE_MAC_SPEED_100M = 100, /* 100 Mbps */ HCLGE_MAC_SPEED_1G = 1000, /* 1000 Mbps = 1 Gbps */ @@ -593,10 +655,16 @@ struct hclge_dev { struct hclge_misc_vector misc_vector; struct hclge_hw_stats hw_stats; unsigned long state; + unsigned long flr_state; + unsigned long last_reset_time; enum hnae3_reset_type reset_type; + enum hnae3_reset_type reset_level; + unsigned long default_reset_request; unsigned long reset_request; /* reset has been requested */ unsigned long reset_pending; /* client rst is pending to be served */ + unsigned long reset_count; /* the number of reset has been done */ + u32 reset_fail_cnt; u32 fw_version; u16 num_vmdq_vport; /* 
Num vmdq vport this PF has set up */ u16 num_tqps; /* Num task queue pairs of this PF */ @@ -614,6 +682,7 @@ struct hclge_dev { u8 hw_tc_map; u8 tc_num_last_time; enum hclge_fc_mode fc_mode_last_time; + u8 support_sfp_query; #define HCLGE_FLAG_TC_BASE_SCH_MODE 1 #define HCLGE_FLAG_VNET_BASE_SCH_MODE 2 @@ -644,6 +713,7 @@ struct hclge_dev { unsigned long service_timer_period; unsigned long service_timer_previous; struct timer_list service_timer; + struct timer_list reset_timer; struct work_struct service_task; struct work_struct rst_service_task; struct work_struct mbx_service_task; @@ -666,7 +736,12 @@ struct hclge_dev { u32 flag; u32 pkt_buf_size; /* Total pf buf size for tx/rx */ + u32 tx_buf_size; /* Tx buffer size for each TC */ + u32 dv_buf_size; /* Dv buffer size for each TC */ + u32 mps; /* Max packet size */ + /* vport_lock protect resource shared by vports */ + struct mutex vport_lock; struct hclge_vlan_type_cfg vlan_type_cfg; @@ -717,6 +792,11 @@ struct hclge_rss_tuple_cfg { u8 ipv6_fragment_en; }; +enum HCLGE_VPORT_STATE { + HCLGE_VPORT_STATE_ALIVE, + HCLGE_VPORT_STATE_MAX +}; + struct hclge_vport { u16 alloc_tqps; /* Allocated Tx/Rx queues */ @@ -742,6 +822,10 @@ struct hclge_vport { struct hclge_dev *back; /* Back reference to associated dev */ struct hnae3_handle nic; struct hnae3_handle roce; + + unsigned long state; + unsigned long last_active_jiffies; + u32 mps; /* Max packet size */ }; void hclge_promisc_param_init(struct hclge_promisc_param *param, bool en_uc, @@ -768,6 +852,12 @@ static inline int hclge_get_queue_id(struct hnae3_queue *queue) return tqp->index; } +static inline bool hclge_is_reset_pending(struct hclge_dev *hdev) +{ + return !!hdev->reset_pending; +} + +int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport); int hclge_cfg_mac_speed_dup(struct hclge_dev *hdev, int speed, u8 duplex); int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto, u16 vlan_id, bool is_kill); @@ -777,9 +867,15 @@ int hclge_buffer_alloc(struct hclge_dev *hdev); int hclge_rss_init_hw(struct hclge_dev *hdev); void hclge_rss_indir_init_cfg(struct hclge_dev *hdev); +int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport); void hclge_mbx_handler(struct hclge_dev *hdev); int hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id); void hclge_reset_vf_queue(struct hclge_vport *vport, u16 queue_id); int hclge_cfg_flowctrl(struct hclge_dev *hdev); int hclge_func_reset_cmd(struct hclge_dev *hdev, int func_id); +int hclge_vport_start(struct hclge_vport *vport); +void hclge_vport_stop(struct hclge_vport *vport); +int hclge_set_vport_mtu(struct hclge_vport *vport, int new_mtu); +int hclge_dbg_run_cmd(struct hnae3_handle *handle, char *cmd_buf); +u16 hclge_covert_handle_qid_global(struct hnae3_handle *handle, u16 queue_id); #endif diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c index f890022938d9..a1de451a85df 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c @@ -79,15 +79,26 @@ static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len, return status; } -static int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport) +int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport) { + struct hclge_dev *hdev = vport->back; + enum hnae3_reset_type reset_type; u8 msg_data[2]; u8 dest_vfid; dest_vfid = (u8)vport->vport_id; + if (hdev->reset_type == HNAE3_FUNC_RESET) + reset_type = 
HNAE3_VF_PF_FUNC_RESET; + else if (hdev->reset_type == HNAE3_FLR_RESET) + reset_type = HNAE3_VF_FULL_RESET; + else + return -EINVAL; + + memcpy(&msg_data[0], &reset_type, sizeof(u16)); + /* send this requested info to VF */ - return hclge_send_mbx_msg(vport, msg_data, sizeof(u8), + return hclge_send_mbx_msg(vport, msg_data, sizeof(msg_data), HCLGE_MBX_ASSERTING_RESET, dest_vfid); } @@ -290,6 +301,21 @@ static int hclge_set_vf_vlan_cfg(struct hclge_vport *vport, return status; } +static int hclge_set_vf_alive(struct hclge_vport *vport, + struct hclge_mbx_vf_to_pf_cmd *mbx_req, + bool gen_resp) +{ + bool alive = !!mbx_req->msg[2]; + int ret = 0; + + if (alive) + ret = hclge_vport_start(vport); + else + hclge_vport_stop(vport); + + return ret; +} + static int hclge_get_vf_tcinfo(struct hclge_vport *vport, struct hclge_mbx_vf_to_pf_cmd *mbx_req, bool gen_resp) @@ -363,24 +389,41 @@ static void hclge_reset_vf(struct hclge_vport *vport, int ret; dev_warn(&hdev->pdev->dev, "PF received VF reset request from VF %d!", - mbx_req->mbx_src_vfid); - - /* Acknowledge VF that PF is now about to assert the reset for the VF. - * On receiving this message VF will get into pending state and will - * start polling for the hardware reset completion status. - */ - ret = hclge_inform_reset_assert_to_vf(vport); - if (ret) { - dev_err(&hdev->pdev->dev, - "PF fail(%d) to inform VF(%d)of reset, reset failed!\n", - ret, vport->vport_id); - return; - } + vport->vport_id); - dev_warn(&hdev->pdev->dev, "PF is now resetting VF %d.\n", - mbx_req->mbx_src_vfid); - /* reset this virtual function */ - hclge_func_reset_cmd(hdev, mbx_req->mbx_src_vfid); + ret = hclge_func_reset_cmd(hdev, vport->vport_id); + hclge_gen_resp_to_vf(vport, mbx_req, ret, NULL, 0); +} + +static void hclge_vf_keep_alive(struct hclge_vport *vport, + struct hclge_mbx_vf_to_pf_cmd *mbx_req) +{ + vport->last_active_jiffies = jiffies; +} + +static int hclge_set_vf_mtu(struct hclge_vport *vport, + struct hclge_mbx_vf_to_pf_cmd *mbx_req) +{ + int ret; + u32 mtu; + + memcpy(&mtu, &mbx_req->msg[2], sizeof(mtu)); + ret = hclge_set_vport_mtu(vport, mtu); + + return hclge_gen_resp_to_vf(vport, mbx_req, ret, NULL, 0); +} + +static int hclge_get_queue_id_in_pf(struct hclge_vport *vport, + struct hclge_mbx_vf_to_pf_cmd *mbx_req) +{ + u16 queue_id, qid_in_pf; + u8 resp_data[2]; + + memcpy(&queue_id, &mbx_req->msg[2], sizeof(queue_id)); + qid_in_pf = hclge_covert_handle_qid_global(&vport->nic, queue_id); + memcpy(resp_data, &qid_in_pf, sizeof(qid_in_pf)); + + return hclge_gen_resp_to_vf(vport, mbx_req, 0, resp_data, 2); } static bool hclge_cmd_crq_empty(struct hclge_hw *hw) @@ -460,6 +503,13 @@ void hclge_mbx_handler(struct hclge_dev *hdev) "PF failed(%d) to config VF's VLAN\n", ret); break; + case HCLGE_MBX_SET_ALIVE: + ret = hclge_set_vf_alive(vport, req, false); + if (ret) + dev_err(&hdev->pdev->dev, + "PF failed(%d) to set VF's ALIVE\n", + ret); + break; case HCLGE_MBX_GET_QINFO: ret = hclge_get_vf_queue_info(vport, req, true); if (ret) @@ -487,6 +537,22 @@ void hclge_mbx_handler(struct hclge_dev *hdev) case HCLGE_MBX_RESET: hclge_reset_vf(vport, req); break; + case HCLGE_MBX_KEEP_ALIVE: + hclge_vf_keep_alive(vport, req); + break; + case HCLGE_MBX_SET_MTU: + ret = hclge_set_vf_mtu(vport, req); + if (ret) + dev_err(&hdev->pdev->dev, + "VF fail(%d) to set mtu\n", ret); + break; + case HCLGE_MBX_GET_QID_IN_PF: + ret = hclge_get_queue_id_in_pf(vport, req); + if (ret) + dev_err(&hdev->pdev->dev, + "PF failed(%d) to get qid for VF\n", + ret); + break; default: 
dev_err(&hdev->pdev->dev, "un-supported mailbox message, code = %d\n", diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c index 03018638f701..dabb8437f8dc 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c @@ -12,7 +12,7 @@ SUPPORTED_TP | \ PHY_10BT_FEATURES | \ PHY_100BT_FEATURES | \ - PHY_1000BT_FEATURES) + SUPPORTED_1000baseT_Full) enum hclge_mdio_c22_op_seq { HCLGE_MDIO_C22_WRITE = 1, @@ -179,6 +179,10 @@ static void hclge_mac_adjust_link(struct net_device *netdev) int duplex, speed; int ret; + /* When phy link down, do nothing */ + if (netdev->phydev->link == 0) + return; + speed = netdev->phydev->speed; duplex = netdev->phydev->duplex; @@ -195,12 +199,13 @@ int hclge_mac_connect_phy(struct hclge_dev *hdev) { struct net_device *netdev = hdev->vport[0].nic.netdev; struct phy_device *phydev = hdev->hw.mac.phydev; + __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; int ret; if (!phydev) return 0; - phydev->supported &= ~SUPPORTED_FIBRE; + linkmode_clear_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, phydev->supported); ret = phy_connect_direct(netdev, phydev, hclge_mac_adjust_link, @@ -210,7 +215,15 @@ int hclge_mac_connect_phy(struct hclge_dev *hdev) return ret; } - phydev->supported &= HCLGE_PHY_SUPPORTED_FEATURES; + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, mask); + linkmode_set_bit_array(phy_10_100_features_array, + ARRAY_SIZE(phy_10_100_features_array), + mask); + linkmode_set_bit_array(phy_gbit_features_array, + ARRAY_SIZE(phy_gbit_features_array), + mask); + linkmode_and(phydev->supported, phydev->supported, mask); phy_support_asym_pause(phydev); return 0; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c index 494e562fe8c7..00458da67503 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c @@ -1259,15 +1259,13 @@ int hclge_pause_setup_hw(struct hclge_dev *hdev) return 0; } -int hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc) +void hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc) { struct hclge_vport *vport = hdev->vport; struct hnae3_knic_private_info *kinfo; u32 i, k; for (i = 0; i < HNAE3_MAX_USER_PRIO; i++) { - if (prio_tc[i] >= hdev->tm_info.num_tc) - return -EINVAL; hdev->tm_info.prio_tc[i] = prio_tc[i]; for (k = 0; k < hdev->num_alloc_vport; k++) { @@ -1275,18 +1273,12 @@ int hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc) kinfo->prio_tc[i] = prio_tc[i]; } } - return 0; } -int hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc) +void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc) { u8 i, bit_map = 0; - for (i = 0; i < hdev->num_alloc_vport; i++) { - if (num_tc > hdev->vport[i].alloc_tqps) - return -EINVAL; - } - hdev->tm_info.num_tc = num_tc; for (i = 0; i < hdev->tm_info.num_tc; i++) @@ -1300,8 +1292,6 @@ int hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc) hdev->hw_tc_map = bit_map; hclge_tm_schd_info_init(hdev); - - return 0; } int hclge_tm_init_hw(struct hclge_dev *hdev) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h index 25eef13a3e14..b6496a439304 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h +++ 
b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h @@ -40,6 +40,13 @@ struct hclge_nq_to_qs_link_cmd { __le16 qset_id; }; +struct hclge_tqp_tx_queue_tc_cmd { + __le16 queue_id; + __le16 rsvd; + u8 tc_id; + u8 rev[3]; +}; + struct hclge_pg_weight_cmd { u8 pg_id; u8 dwrr; @@ -55,6 +62,12 @@ struct hclge_qs_weight_cmd { u8 dwrr; }; +struct hclge_ets_tc_weight_cmd { + u8 tc_weight[HNAE3_MAX_TC]; + u8 weight_offset; + u8 rsvd[15]; +}; + #define HCLGE_TM_SHAP_IR_B_MSK GENMASK(7, 0) #define HCLGE_TM_SHAP_IR_B_LSH 0 #define HCLGE_TM_SHAP_IR_U_MSK GENMASK(11, 8) @@ -131,8 +144,8 @@ struct hclge_port_shapping_cmd { int hclge_tm_schd_init(struct hclge_dev *hdev); int hclge_pause_setup_hw(struct hclge_dev *hdev); int hclge_tm_schd_mode_hw(struct hclge_dev *hdev); -int hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc); -int hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc); +void hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc); +void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc); int hclge_tm_dwrr_cfg(struct hclge_dev *hdev); int hclge_tm_map_cfg(struct hclge_dev *hdev); int hclge_tm_init_hw(struct hclge_dev *hdev); diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c index 0d3b445f6799..d5765c8cf3a3 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c @@ -72,6 +72,45 @@ static bool hclgevf_is_special_opcode(u16 opcode) return false; } +static void hclgevf_cmd_config_regs(struct hclgevf_cmq_ring *ring) +{ + struct hclgevf_dev *hdev = ring->dev; + struct hclgevf_hw *hw = &hdev->hw; + u32 reg_val; + + if (ring->flag == HCLGEVF_TYPE_CSQ) { + reg_val = (u32)ring->desc_dma_addr; + hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_L_REG, reg_val); + reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1); + hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_H_REG, reg_val); + + reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S); + reg_val |= HCLGEVF_NIC_CMQ_ENABLE; + hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_DEPTH_REG, reg_val); + + hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG, 0); + hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG, 0); + } else { + reg_val = (u32)ring->desc_dma_addr; + hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_L_REG, reg_val); + reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1); + hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_H_REG, reg_val); + + reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S); + reg_val |= HCLGEVF_NIC_CMQ_ENABLE; + hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_DEPTH_REG, reg_val); + + hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_HEAD_REG, 0); + hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_TAIL_REG, 0); + } +} + +static void hclgevf_cmd_init_regs(struct hclgevf_hw *hw) +{ + hclgevf_cmd_config_regs(&hw->cmq.csq); + hclgevf_cmd_config_regs(&hw->cmq.crq); +} + static int hclgevf_alloc_cmd_desc(struct hclgevf_cmq_ring *ring) { int size = ring->desc_num * sizeof(struct hclgevf_desc); @@ -96,61 +135,23 @@ static void hclgevf_free_cmd_desc(struct hclgevf_cmq_ring *ring) } } -static int hclgevf_init_cmd_queue(struct hclgevf_dev *hdev, - struct hclgevf_cmq_ring *ring) +static int hclgevf_alloc_cmd_queue(struct hclgevf_dev *hdev, int ring_type) { struct hclgevf_hw *hw = &hdev->hw; - int ring_type = ring->flag; - u32 reg_val; + struct hclgevf_cmq_ring *ring = + (ring_type == HCLGEVF_TYPE_CSQ) ? 
&hw->cmq.csq : &hw->cmq.crq; int ret; - ring->desc_num = HCLGEVF_NIC_CMQ_DESC_NUM; - spin_lock_init(&ring->lock); - ring->next_to_clean = 0; - ring->next_to_use = 0; ring->dev = hdev; + ring->flag = ring_type; /* allocate CSQ/CRQ descriptor */ ret = hclgevf_alloc_cmd_desc(ring); - if (ret) { + if (ret) dev_err(&hdev->pdev->dev, "failed(%d) to alloc %s desc\n", ret, (ring_type == HCLGEVF_TYPE_CSQ) ? "CSQ" : "CRQ"); - return ret; - } - /* initialize the hardware registers with csq/crq dma-address, - * descriptor number, head & tail pointers - */ - switch (ring_type) { - case HCLGEVF_TYPE_CSQ: - reg_val = (u32)ring->desc_dma_addr; - hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_L_REG, reg_val); - reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1); - hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_H_REG, reg_val); - - reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S); - reg_val |= HCLGEVF_NIC_CMQ_ENABLE; - hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_DEPTH_REG, reg_val); - - hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG, 0); - hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG, 0); - return 0; - case HCLGEVF_TYPE_CRQ: - reg_val = (u32)ring->desc_dma_addr; - hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_L_REG, reg_val); - reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1); - hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_H_REG, reg_val); - - reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S); - reg_val |= HCLGEVF_NIC_CMQ_ENABLE; - hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_DEPTH_REG, reg_val); - - hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_HEAD_REG, 0); - hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_TAIL_REG, 0); - return 0; - default: - return -EINVAL; - } + return ret; } void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc, @@ -188,7 +189,8 @@ int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num) spin_lock_bh(&hw->cmq.csq.lock); - if (num > hclgevf_ring_space(&hw->cmq.csq)) { + if (num > hclgevf_ring_space(&hw->cmq.csq) || + test_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state)) { spin_unlock_bh(&hw->cmq.csq.lock); return -EBUSY; } @@ -282,55 +284,83 @@ static int hclgevf_cmd_query_firmware_version(struct hclgevf_hw *hw, return status; } -int hclgevf_cmd_init(struct hclgevf_dev *hdev) +int hclgevf_cmd_queue_init(struct hclgevf_dev *hdev) { - u32 version; int ret; - /* setup Tx write back timeout */ + /* Setup the lock for command queue */ + spin_lock_init(&hdev->hw.cmq.csq.lock); + spin_lock_init(&hdev->hw.cmq.crq.lock); + hdev->hw.cmq.tx_timeout = HCLGEVF_CMDQ_TX_TIMEOUT; + hdev->hw.cmq.csq.desc_num = HCLGEVF_NIC_CMQ_DESC_NUM; + hdev->hw.cmq.crq.desc_num = HCLGEVF_NIC_CMQ_DESC_NUM; - /* setup queue CSQ/CRQ rings */ - hdev->hw.cmq.csq.flag = HCLGEVF_TYPE_CSQ; - ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.csq); + ret = hclgevf_alloc_cmd_queue(hdev, HCLGEVF_TYPE_CSQ); if (ret) { dev_err(&hdev->pdev->dev, - "failed(%d) to initialize CSQ ring\n", ret); + "CSQ ring setup error %d\n", ret); return ret; } - hdev->hw.cmq.crq.flag = HCLGEVF_TYPE_CRQ; - ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.crq); + ret = hclgevf_alloc_cmd_queue(hdev, HCLGEVF_TYPE_CRQ); if (ret) { dev_err(&hdev->pdev->dev, - "failed(%d) to initialize CRQ ring\n", ret); + "CRQ ring setup error %d\n", ret); goto err_csq; } + return 0; +err_csq: + hclgevf_free_cmd_desc(&hdev->hw.cmq.csq); + return ret; +} + +int hclgevf_cmd_init(struct hclgevf_dev *hdev) +{ + u32 version; + int ret; + + spin_lock_bh(&hdev->hw.cmq.csq.lock); + spin_lock_bh(&hdev->hw.cmq.crq.lock); + /* initialize the pointers of async rx queue 
of mailbox */ hdev->arq.hdev = hdev; hdev->arq.head = 0; hdev->arq.tail = 0; hdev->arq.count = 0; + hdev->hw.cmq.csq.next_to_clean = 0; + hdev->hw.cmq.csq.next_to_use = 0; + hdev->hw.cmq.crq.next_to_clean = 0; + hdev->hw.cmq.crq.next_to_use = 0; + + hclgevf_cmd_init_regs(&hdev->hw); + + spin_unlock_bh(&hdev->hw.cmq.crq.lock); + spin_unlock_bh(&hdev->hw.cmq.csq.lock); + + clear_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state); + + /* Check if there is new reset pending, because the higher level + * reset may happen when lower level reset is being processed. + */ + if (hclgevf_is_reset_pending(hdev)) { + set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state); + return -EBUSY; + } /* get firmware version */ ret = hclgevf_cmd_query_firmware_version(&hdev->hw, &version); if (ret) { dev_err(&hdev->pdev->dev, "failed(%d) to query firmware version\n", ret); - goto err_crq; + return ret; } hdev->fw_version = version; dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version); return 0; -err_crq: - hclgevf_free_cmd_desc(&hdev->hw.cmq.crq); -err_csq: - hclgevf_free_cmd_desc(&hdev->hw.cmq.csq); - - return ret; } void hclgevf_cmd_uninit(struct hclgevf_dev *hdev) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h index bc294b0c8b62..47030b42341f 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h @@ -87,6 +87,8 @@ enum hclgevf_opcode_type { HCLGEVF_OPC_QUERY_TX_STATUS = 0x0B03, HCLGEVF_OPC_QUERY_RX_STATUS = 0x0B13, HCLGEVF_OPC_CFG_COM_TQP_QUEUE = 0x0B20, + /* GRO command */ + HCLGEVF_OPC_GRO_GENERIC_CONFIG = 0x0C10, /* RSS cmd */ HCLGEVF_OPC_RSS_GENERIC_CONFIG = 0x0D01, HCLGEVF_OPC_RSS_INPUT_TUPLE = 0x0D02, @@ -149,6 +151,12 @@ struct hclgevf_query_res_cmd { __le16 rsv[7]; }; +#define HCLGEVF_GRO_EN_B 0 +struct hclgevf_cfg_gro_status_cmd { + __le16 gro_en; + u8 rsv[22]; +}; + #define HCLGEVF_RSS_DEFAULT_OUTPORT_B 4 #define HCLGEVF_RSS_HASH_KEY_OFFSET_B 4 #define HCLGEVF_RSS_HASH_KEY_NUM 16 @@ -256,6 +264,7 @@ static inline u32 hclgevf_read_reg(u8 __iomem *base, u32 reg) int hclgevf_cmd_init(struct hclgevf_dev *hdev); void hclgevf_cmd_uninit(struct hclgevf_dev *hdev); +int hclgevf_cmd_queue_init(struct hclgevf_dev *hdev); int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num); void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc, diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c index 085edb945389..82103d5fa815 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c @@ -2,6 +2,7 @@ // Copyright (c) 2016-2017 Hisilicon Limited. 
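/* The command-queue rework above gates every send on a CMD_DISABLE state bit
 * so that no command is posted to firmware while a reset is in flight, and
 * re-initialization clears the bit only when no further reset is pending.
 * A minimal, hypothetical sketch of that gating pattern is shown below; the
 * my_* names are illustrative and not driver API.
 */
#include <linux/spinlock.h>
#include <linux/bitops.h>
#include <linux/errno.h>

enum { MY_STATE_CMD_DISABLE };

struct my_hw {
	spinlock_t csq_lock;		/* serializes command submission */
	unsigned long state;
	unsigned long reset_pending;	/* bitmap of pending reset levels */
};

static int my_cmd_send(struct my_hw *hw)
{
	spin_lock_bh(&hw->csq_lock);
	if (test_bit(MY_STATE_CMD_DISABLE, &hw->state)) {
		spin_unlock_bh(&hw->csq_lock);
		return -EBUSY;	/* queue is quiesced for a reset */
	}
	/* ... build descriptors and ring the CSQ doorbell here ... */
	spin_unlock_bh(&hw->csq_lock);
	return 0;
}

static int my_cmd_reinit(struct my_hw *hw)
{
	clear_bit(MY_STATE_CMD_DISABLE, &hw->state);
	/* a higher-level reset may have been requested while the lower
	 * level one was being handled; keep the queue disabled then
	 */
	if (hw->reset_pending) {
		set_bit(MY_STATE_CMD_DISABLE, &hw->state);
		return -EBUSY;
	}
	return 0;
}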
#include +#include #include #include "hclgevf_cmd.h" #include "hclgevf_main.h" @@ -10,8 +11,7 @@ #define HCLGEVF_NAME "hclgevf" -static int hclgevf_init_hdev(struct hclgevf_dev *hdev); -static void hclgevf_uninit_hdev(struct hclgevf_dev *hdev); +static int hclgevf_reset_hdev(struct hclgevf_dev *hdev); static struct hnae3_ae_algo ae_algovf; static const struct pci_device_id ae_algovf_pci_tbl[] = { @@ -23,6 +23,58 @@ static const struct pci_device_id ae_algovf_pci_tbl[] = { MODULE_DEVICE_TABLE(pci, ae_algovf_pci_tbl); +static const u32 cmdq_reg_addr_list[] = {HCLGEVF_CMDQ_TX_ADDR_L_REG, + HCLGEVF_CMDQ_TX_ADDR_H_REG, + HCLGEVF_CMDQ_TX_DEPTH_REG, + HCLGEVF_CMDQ_TX_TAIL_REG, + HCLGEVF_CMDQ_TX_HEAD_REG, + HCLGEVF_CMDQ_RX_ADDR_L_REG, + HCLGEVF_CMDQ_RX_ADDR_H_REG, + HCLGEVF_CMDQ_RX_DEPTH_REG, + HCLGEVF_CMDQ_RX_TAIL_REG, + HCLGEVF_CMDQ_RX_HEAD_REG, + HCLGEVF_VECTOR0_CMDQ_SRC_REG, + HCLGEVF_CMDQ_INTR_STS_REG, + HCLGEVF_CMDQ_INTR_EN_REG, + HCLGEVF_CMDQ_INTR_GEN_REG}; + +static const u32 common_reg_addr_list[] = {HCLGEVF_MISC_VECTOR_REG_BASE, + HCLGEVF_RST_ING, + HCLGEVF_GRO_EN_REG}; + +static const u32 ring_reg_addr_list[] = {HCLGEVF_RING_RX_ADDR_L_REG, + HCLGEVF_RING_RX_ADDR_H_REG, + HCLGEVF_RING_RX_BD_NUM_REG, + HCLGEVF_RING_RX_BD_LENGTH_REG, + HCLGEVF_RING_RX_MERGE_EN_REG, + HCLGEVF_RING_RX_TAIL_REG, + HCLGEVF_RING_RX_HEAD_REG, + HCLGEVF_RING_RX_FBD_NUM_REG, + HCLGEVF_RING_RX_OFFSET_REG, + HCLGEVF_RING_RX_FBD_OFFSET_REG, + HCLGEVF_RING_RX_STASH_REG, + HCLGEVF_RING_RX_BD_ERR_REG, + HCLGEVF_RING_TX_ADDR_L_REG, + HCLGEVF_RING_TX_ADDR_H_REG, + HCLGEVF_RING_TX_BD_NUM_REG, + HCLGEVF_RING_TX_PRIORITY_REG, + HCLGEVF_RING_TX_TC_REG, + HCLGEVF_RING_TX_MERGE_EN_REG, + HCLGEVF_RING_TX_TAIL_REG, + HCLGEVF_RING_TX_HEAD_REG, + HCLGEVF_RING_TX_FBD_NUM_REG, + HCLGEVF_RING_TX_OFFSET_REG, + HCLGEVF_RING_TX_EBD_NUM_REG, + HCLGEVF_RING_TX_EBD_OFFSET_REG, + HCLGEVF_RING_TX_BD_ERR_REG, + HCLGEVF_RING_EN_REG}; + +static const u32 tqp_intr_reg_addr_list[] = {HCLGEVF_TQP_INTR_CTRL_REG, + HCLGEVF_TQP_INTR_GL0_REG, + HCLGEVF_TQP_INTR_GL1_REG, + HCLGEVF_TQP_INTR_GL2_REG, + HCLGEVF_TQP_INTR_RL_REG}; + static inline struct hclgevf_dev *hclgevf_ae_get_hdev( struct hnae3_handle *handle) { @@ -204,17 +256,28 @@ static int hclgevf_get_queue_info(struct hclgevf_dev *hdev) return 0; } +static u16 hclgevf_get_qid_global(struct hnae3_handle *handle, u16 queue_id) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + u8 msg_data[2], resp_data[2]; + u16 qid_in_pf = 0; + int ret; + + memcpy(&msg_data[0], &queue_id, sizeof(queue_id)); + + ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_QID_IN_PF, 0, msg_data, + 2, true, resp_data, 2); + if (!ret) + qid_in_pf = *(u16 *)resp_data; + + return qid_in_pf; +} + static int hclgevf_alloc_tqps(struct hclgevf_dev *hdev) { struct hclgevf_tqp *tqp; int i; - /* if this is on going reset then we need to re-allocate the TPQs - * since we cannot assume we would get same number of TPQs back from PF - */ - if (hclgevf_dev_ongoing_reset(hdev)) - devm_kfree(&hdev->pdev->dev, hdev->htqp); - hdev->htqp = devm_kcalloc(&hdev->pdev->dev, hdev->num_tqps, sizeof(struct hclgevf_tqp), GFP_KERNEL); if (!hdev->htqp) @@ -258,12 +321,6 @@ static int hclgevf_knic_setup(struct hclgevf_dev *hdev) new_tqps = kinfo->rss_size * kinfo->num_tc; kinfo->num_tqps = min(new_tqps, hdev->num_tqps); - /* if this is on going reset then we need to re-allocate the hnae queues - * as well since number of TPQs from PF might have changed. 
- */ - if (hclgevf_dev_ongoing_reset(hdev)) - devm_kfree(&hdev->pdev->dev, kinfo->tqp); - kinfo->tqp = devm_kcalloc(&hdev->pdev->dev, kinfo->num_tqps, sizeof(struct hnae3_queue *), GFP_KERNEL); if (!kinfo->tqp) @@ -868,6 +925,9 @@ static int hclgevf_unmap_ring_from_vector( struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); int ret, vector_id; + if (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state)) + return 0; + vector_id = hclgevf_get_vector_index(hdev, vector); if (vector_id < 0) { dev_err(&handle->pdev->dev, @@ -956,13 +1016,6 @@ static int hclgevf_tqp_enable(struct hclgevf_dev *hdev, int tqp_id, return status; } -static int hclgevf_get_queue_id(struct hnae3_queue *queue) -{ - struct hclgevf_tqp *tqp = container_of(queue, struct hclgevf_tqp, q); - - return tqp->index; -} - static void hclgevf_reset_tqp_stats(struct hnae3_handle *handle) { struct hnae3_knic_private_info *kinfo = &handle->kinfo; @@ -1097,38 +1150,87 @@ static int hclgevf_reset_tqp(struct hnae3_handle *handle, u16 queue_id) 2, true, NULL, 0); } +static int hclgevf_set_mtu(struct hnae3_handle *handle, int new_mtu) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MTU, 0, (u8 *)&new_mtu, + sizeof(new_mtu), true, NULL, 0); +} + static int hclgevf_notify_client(struct hclgevf_dev *hdev, enum hnae3_reset_notify_type type) { struct hnae3_client *client = hdev->nic_client; struct hnae3_handle *handle = &hdev->nic; + int ret; if (!client->ops->reset_notify) return -EOPNOTSUPP; - return client->ops->reset_notify(handle, type); + ret = client->ops->reset_notify(handle, type); + if (ret) + dev_err(&hdev->pdev->dev, "notify nic client failed %d(%d)\n", + type, ret); + + return ret; +} + +static void hclgevf_flr_done(struct hnae3_ae_dev *ae_dev) +{ + struct hclgevf_dev *hdev = ae_dev->priv; + + set_bit(HNAE3_FLR_DONE, &hdev->flr_state); +} + +static int hclgevf_flr_poll_timeout(struct hclgevf_dev *hdev, + unsigned long delay_us, + unsigned long wait_cnt) +{ + unsigned long cnt = 0; + + while (!test_bit(HNAE3_FLR_DONE, &hdev->flr_state) && + cnt++ < wait_cnt) + usleep_range(delay_us, delay_us * 2); + + if (!test_bit(HNAE3_FLR_DONE, &hdev->flr_state)) { + dev_err(&hdev->pdev->dev, + "flr wait timeout\n"); + return -ETIMEDOUT; + } + + return 0; } static int hclgevf_reset_wait(struct hclgevf_dev *hdev) { -#define HCLGEVF_RESET_WAIT_MS 500 -#define HCLGEVF_RESET_WAIT_CNT 20 - u32 val, cnt = 0; +#define HCLGEVF_RESET_WAIT_US 20000 +#define HCLGEVF_RESET_WAIT_CNT 2000 +#define HCLGEVF_RESET_WAIT_TIMEOUT_US \ + (HCLGEVF_RESET_WAIT_US * HCLGEVF_RESET_WAIT_CNT) + + u32 val; + int ret; /* wait to check the hardware reset completion status */ - val = hclgevf_read_dev(&hdev->hw, HCLGEVF_FUN_RST_ING); - while (hnae3_get_bit(val, HCLGEVF_FUN_RST_ING_B) && - (cnt < HCLGEVF_RESET_WAIT_CNT)) { - msleep(HCLGEVF_RESET_WAIT_MS); - val = hclgevf_read_dev(&hdev->hw, HCLGEVF_FUN_RST_ING); - cnt++; - } + val = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING); + dev_info(&hdev->pdev->dev, "checking vf resetting status: %x\n", val); + + if (hdev->reset_type == HNAE3_FLR_RESET) + return hclgevf_flr_poll_timeout(hdev, + HCLGEVF_RESET_WAIT_US, + HCLGEVF_RESET_WAIT_CNT); + + ret = readl_poll_timeout(hdev->hw.io_base + HCLGEVF_RST_ING, val, + !(val & HCLGEVF_RST_ING_BITS), + HCLGEVF_RESET_WAIT_US, + HCLGEVF_RESET_WAIT_TIMEOUT_US); /* hardware completion status should be available by this time */ - if (cnt >= HCLGEVF_RESET_WAIT_CNT) { - dev_warn(&hdev->pdev->dev, - "could'nt get reset done status from 
h/w, timeout!\n"); - return -EBUSY; + if (ret) { + dev_err(&hdev->pdev->dev, + "could'nt get reset done status from h/w, timeout!\n"); + return ret; } /* we will wait a bit more to let reset of the stack to complete. This @@ -1145,10 +1247,12 @@ static int hclgevf_reset_stack(struct hclgevf_dev *hdev) int ret; /* uninitialize the nic client */ - hclgevf_notify_client(hdev, HNAE3_UNINIT_CLIENT); + ret = hclgevf_notify_client(hdev, HNAE3_UNINIT_CLIENT); + if (ret) + return ret; /* re-initialize the hclge device */ - ret = hclgevf_init_hdev(hdev); + ret = hclgevf_reset_hdev(hdev); if (ret) { dev_err(&hdev->pdev->dev, "hclge device re-init failed, VF is disabled!\n"); @@ -1156,22 +1260,60 @@ static int hclgevf_reset_stack(struct hclgevf_dev *hdev) } /* bring up the nic client again */ - hclgevf_notify_client(hdev, HNAE3_INIT_CLIENT); + ret = hclgevf_notify_client(hdev, HNAE3_INIT_CLIENT); + if (ret) + return ret; return 0; } +static int hclgevf_reset_prepare_wait(struct hclgevf_dev *hdev) +{ + int ret = 0; + + switch (hdev->reset_type) { + case HNAE3_VF_FUNC_RESET: + ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_RESET, 0, NULL, + 0, true, NULL, sizeof(u8)); + break; + case HNAE3_FLR_RESET: + set_bit(HNAE3_FLR_DOWN, &hdev->flr_state); + break; + default: + break; + } + + set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state); + + dev_info(&hdev->pdev->dev, "prepare reset(%d) wait done, ret:%d\n", + hdev->reset_type, ret); + + return ret; +} + static int hclgevf_reset(struct hclgevf_dev *hdev) { + struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev); int ret; + /* Initialize ae_dev reset status as well, in case enet layer wants to + * know if device is undergoing reset + */ + ae_dev->reset_type = hdev->reset_type; + hdev->reset_count++; rtnl_lock(); /* bring down the nic to stop any ongoing TX/RX */ - hclgevf_notify_client(hdev, HNAE3_DOWN_CLIENT); + ret = hclgevf_notify_client(hdev, HNAE3_DOWN_CLIENT); + if (ret) + goto err_reset_lock; rtnl_unlock(); + ret = hclgevf_reset_prepare_wait(hdev); + if (ret) + goto err_reset; + /* check if VF could successfully fetch the hardware reset completion * status from the hardware */ @@ -1181,58 +1323,121 @@ static int hclgevf_reset(struct hclgevf_dev *hdev) dev_err(&hdev->pdev->dev, "VF failed(=%d) to fetch H/W reset completion status\n", ret); - - dev_warn(&hdev->pdev->dev, "VF reset failed, disabling VF!\n"); - rtnl_lock(); - hclgevf_notify_client(hdev, HNAE3_UNINIT_CLIENT); - - rtnl_unlock(); - return ret; + goto err_reset; } rtnl_lock(); /* now, re-initialize the nic client and ae device*/ ret = hclgevf_reset_stack(hdev); - if (ret) + if (ret) { dev_err(&hdev->pdev->dev, "failed to reset VF stack\n"); + goto err_reset_lock; + } /* bring up the nic to enable TX/RX again */ - hclgevf_notify_client(hdev, HNAE3_UP_CLIENT); + ret = hclgevf_notify_client(hdev, HNAE3_UP_CLIENT); + if (ret) + goto err_reset_lock; rtnl_unlock(); + hdev->last_reset_time = jiffies; + ae_dev->reset_type = HNAE3_NONE_RESET; + return ret; -} +err_reset_lock: + rtnl_unlock(); +err_reset: + /* When VF reset failed, only the higher level reset asserted by PF + * can restore it, so re-initialize the command queue to receive + * this higher reset event. 
+ */ + hclgevf_cmd_init(hdev); + dev_err(&hdev->pdev->dev, "failed to reset VF\n"); -static int hclgevf_do_reset(struct hclgevf_dev *hdev) -{ - int status; - u8 respmsg; + return ret; +} - status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_RESET, 0, NULL, - 0, false, &respmsg, sizeof(u8)); - if (status) - dev_err(&hdev->pdev->dev, - "VF reset request to PF failed(=%d)\n", status); +static enum hnae3_reset_type hclgevf_get_reset_level(struct hclgevf_dev *hdev, + unsigned long *addr) +{ + enum hnae3_reset_type rst_level = HNAE3_NONE_RESET; + + /* return the highest priority reset level amongst all */ + if (test_bit(HNAE3_VF_RESET, addr)) { + rst_level = HNAE3_VF_RESET; + clear_bit(HNAE3_VF_RESET, addr); + clear_bit(HNAE3_VF_PF_FUNC_RESET, addr); + clear_bit(HNAE3_VF_FUNC_RESET, addr); + } else if (test_bit(HNAE3_VF_FULL_RESET, addr)) { + rst_level = HNAE3_VF_FULL_RESET; + clear_bit(HNAE3_VF_FULL_RESET, addr); + clear_bit(HNAE3_VF_FUNC_RESET, addr); + } else if (test_bit(HNAE3_VF_PF_FUNC_RESET, addr)) { + rst_level = HNAE3_VF_PF_FUNC_RESET; + clear_bit(HNAE3_VF_PF_FUNC_RESET, addr); + clear_bit(HNAE3_VF_FUNC_RESET, addr); + } else if (test_bit(HNAE3_VF_FUNC_RESET, addr)) { + rst_level = HNAE3_VF_FUNC_RESET; + clear_bit(HNAE3_VF_FUNC_RESET, addr); + } else if (test_bit(HNAE3_FLR_RESET, addr)) { + rst_level = HNAE3_FLR_RESET; + clear_bit(HNAE3_FLR_RESET, addr); + } - return status; + return rst_level; } static void hclgevf_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle) { - struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev); + struct hclgevf_dev *hdev = ae_dev->priv; dev_info(&hdev->pdev->dev, "received reset request from VF enet\n"); - handle->reset_level = HNAE3_VF_RESET; + if (hdev->default_reset_request) + hdev->reset_level = + hclgevf_get_reset_level(hdev, + &hdev->default_reset_request); + else + hdev->reset_level = HNAE3_VF_FUNC_RESET; /* reset of this VF requested */ set_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state); hclgevf_reset_task_schedule(hdev); - handle->last_reset_time = jiffies; + hdev->last_reset_time = jiffies; +} + +static void hclgevf_set_def_reset_request(struct hnae3_ae_dev *ae_dev, + enum hnae3_reset_type rst_type) +{ + struct hclgevf_dev *hdev = ae_dev->priv; + + set_bit(rst_type, &hdev->default_reset_request); +} + +static void hclgevf_flr_prepare(struct hnae3_ae_dev *ae_dev) +{ +#define HCLGEVF_FLR_WAIT_MS 100 +#define HCLGEVF_FLR_WAIT_CNT 50 + struct hclgevf_dev *hdev = ae_dev->priv; + int cnt = 0; + + clear_bit(HNAE3_FLR_DOWN, &hdev->flr_state); + clear_bit(HNAE3_FLR_DONE, &hdev->flr_state); + set_bit(HNAE3_FLR_RESET, &hdev->default_reset_request); + hclgevf_reset_event(hdev->pdev, NULL); + + while (!test_bit(HNAE3_FLR_DOWN, &hdev->flr_state) && + cnt++ < HCLGEVF_FLR_WAIT_CNT) + msleep(HCLGEVF_FLR_WAIT_MS); + + if (!test_bit(HNAE3_FLR_DOWN, &hdev->flr_state)) + dev_err(&hdev->pdev->dev, + "flr wait down timeout: %d\n", cnt); } static u32 hclgevf_get_fw_version(struct hnae3_handle *handle) @@ -1321,9 +1526,15 @@ static void hclgevf_reset_service_task(struct work_struct *work) */ hdev->reset_attempts = 0; - ret = hclgevf_reset(hdev); - if (ret) - dev_err(&hdev->pdev->dev, "VF stack reset failed.\n"); + hdev->last_reset_time = jiffies; + while ((hdev->reset_type = + hclgevf_get_reset_level(hdev, &hdev->reset_pending)) + != HNAE3_NONE_RESET) { + ret = hclgevf_reset(hdev); + if (ret) + dev_err(&hdev->pdev->dev, + "VF stack reset failed %d.\n", ret); + } } else if 
(test_and_clear_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state)) { /* we could be here when either of below happens: @@ -1352,19 +1563,17 @@ static void hclgevf_reset_service_task(struct work_struct *work) */ if (hdev->reset_attempts > 3) { /* prepare for full reset of stack + pcie interface */ - hdev->nic.reset_level = HNAE3_VF_FULL_RESET; + set_bit(HNAE3_VF_FULL_RESET, &hdev->reset_pending); /* "defer" schedule the reset task again */ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); } else { hdev->reset_attempts++; - /* request PF for resetting this VF via mailbox */ - ret = hclgevf_do_reset(hdev); - if (ret) - dev_warn(&hdev->pdev->dev, - "VF rst fail, stack will call\n"); + set_bit(hdev->reset_level, &hdev->reset_pending); + set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); } + hclgevf_reset_task_schedule(hdev); } clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state); @@ -1386,6 +1595,28 @@ static void hclgevf_mailbox_service_task(struct work_struct *work) clear_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state); } +static void hclgevf_keep_alive_timer(struct timer_list *t) +{ + struct hclgevf_dev *hdev = from_timer(hdev, t, keep_alive_timer); + + schedule_work(&hdev->keep_alive_task); + mod_timer(&hdev->keep_alive_timer, jiffies + 2 * HZ); +} + +static void hclgevf_keep_alive_task(struct work_struct *work) +{ + struct hclgevf_dev *hdev; + u8 respmsg; + int ret; + + hdev = container_of(work, struct hclgevf_dev, keep_alive_task); + ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_KEEP_ALIVE, 0, NULL, + 0, false, &respmsg, sizeof(u8)); + if (ret) + dev_err(&hdev->pdev->dev, + "VF sends keep alive cmd failed(=%d)\n", ret); +} + static void hclgevf_service_task(struct work_struct *work) { struct hclgevf_dev *hdev; @@ -1407,24 +1638,37 @@ static void hclgevf_clear_event_cause(struct hclgevf_dev *hdev, u32 regclr) hclgevf_write_dev(&hdev->hw, HCLGEVF_VECTOR0_CMDQ_SRC_REG, regclr); } -static bool hclgevf_check_event_cause(struct hclgevf_dev *hdev, u32 *clearval) +static enum hclgevf_evt_cause hclgevf_check_evt_cause(struct hclgevf_dev *hdev, + u32 *clearval) { - u32 cmdq_src_reg; + u32 cmdq_src_reg, rst_ing_reg; /* fetch the events from their corresponding regs */ cmdq_src_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_VECTOR0_CMDQ_SRC_REG); + if (BIT(HCLGEVF_VECTOR0_RST_INT_B) & cmdq_src_reg) { + rst_ing_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING); + dev_info(&hdev->pdev->dev, + "receive reset interrupt 0x%x!\n", rst_ing_reg); + set_bit(HNAE3_VF_RESET, &hdev->reset_pending); + set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); + set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state); + cmdq_src_reg &= ~BIT(HCLGEVF_VECTOR0_RST_INT_B); + *clearval = cmdq_src_reg; + return HCLGEVF_VECTOR0_EVENT_RST; + } + /* check for vector0 mailbox(=CMDQ RX) event source */ if (BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_reg) { cmdq_src_reg &= ~BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B); *clearval = cmdq_src_reg; - return true; + return HCLGEVF_VECTOR0_EVENT_MBX; } dev_dbg(&hdev->pdev->dev, "vector 0 interrupt from unknown source\n"); - return false; + return HCLGEVF_VECTOR0_EVENT_OTHER; } static void hclgevf_enable_vector(struct hclgevf_misc_vector *vector, bool en) @@ -1434,19 +1678,28 @@ static void hclgevf_enable_vector(struct hclgevf_misc_vector *vector, bool en) static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data) { + enum hclgevf_evt_cause event_cause; struct hclgevf_dev *hdev = data; u32 clearval; hclgevf_enable_vector(&hdev->misc_vector, false); - if (!hclgevf_check_event_cause(hdev, &clearval)) - goto 
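
/* The keep-alive machinery above is the standard "timer re-arms itself, real
 * work runs in process context" pattern: timer callbacks execute in softirq
 * context where sleeping (for example, waiting on a mailbox command) is not
 * allowed, so the callback only schedules a work item and re-arms itself. A
 * condensed sketch with hypothetical names:
 */
#include <linux/jiffies.h>
#include <linux/timer.h>
#include <linux/workqueue.h>

struct ka_dev {
        struct timer_list keep_alive_timer;
        struct work_struct keep_alive_task;
};

static void ka_timer_fn(struct timer_list *t)
{
        struct ka_dev *kd = from_timer(kd, t, keep_alive_timer);

        schedule_work(&kd->keep_alive_task);    /* defer to process context */
        mod_timer(&kd->keep_alive_timer, jiffies + 2 * HZ);     /* re-arm */
}

static void ka_setup(struct ka_dev *kd, void (*task)(struct work_struct *))
{
        timer_setup(&kd->keep_alive_timer, ka_timer_fn, 0);
        INIT_WORK(&kd->keep_alive_task, task);
        mod_timer(&kd->keep_alive_timer, jiffies + 2 * HZ);
}

static void ka_teardown(struct ka_dev *kd)
{
        /* order matters: stop the source of new work before flushing it,
         * mirroring hclgevf_client_stop() above */
        del_timer_sync(&kd->keep_alive_timer);
        cancel_work_sync(&kd->keep_alive_task);
}
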
skip_sched; - - hclgevf_mbx_handler(hdev); + event_cause = hclgevf_check_evt_cause(hdev, &clearval); - hclgevf_clear_event_cause(hdev, clearval); + switch (event_cause) { + case HCLGEVF_VECTOR0_EVENT_RST: + hclgevf_reset_task_schedule(hdev); + break; + case HCLGEVF_VECTOR0_EVENT_MBX: + hclgevf_mbx_handler(hdev); + break; + default: + break; + } -skip_sched: - hclgevf_enable_vector(&hdev->misc_vector, true); + if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER) { + hclgevf_clear_event_cause(hdev, clearval); + hclgevf_enable_vector(&hdev->misc_vector, true); + } return IRQ_HANDLED; } @@ -1468,7 +1721,7 @@ static int hclgevf_configure(struct hclgevf_dev *hdev) static int hclgevf_alloc_hdev(struct hnae3_ae_dev *ae_dev) { struct pci_dev *pdev = ae_dev->pdev; - struct hclgevf_dev *hdev = ae_dev->priv; + struct hclgevf_dev *hdev; hdev = devm_kzalloc(&pdev->dev, sizeof(*hdev), GFP_KERNEL); if (!hdev) @@ -1504,6 +1757,29 @@ static int hclgevf_init_roce_base_info(struct hclgevf_dev *hdev) return 0; } +static int hclgevf_config_gro(struct hclgevf_dev *hdev, bool en) +{ + struct hclgevf_cfg_gro_status_cmd *req; + struct hclgevf_desc desc; + int ret; + + if (!hnae3_dev_gro_supported(hdev)) + return 0; + + hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_GRO_GENERIC_CONFIG, + false); + req = (struct hclgevf_cfg_gro_status_cmd *)desc.data; + + req->gro_en = cpu_to_le16(en ? 1 : 0); + + ret = hclgevf_cmd_send(&hdev->hw, &desc, 1); + if (ret) + dev_err(&hdev->pdev->dev, + "VF GRO hardware config cmd failed, ret = %d.\n", ret); + + return ret; +} + static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev) { struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; @@ -1564,23 +1840,22 @@ static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev) false); } -static int hclgevf_ae_start(struct hnae3_handle *handle) +static void hclgevf_set_timer_task(struct hnae3_handle *handle, bool enable) { - struct hnae3_knic_private_info *kinfo = &handle->kinfo; struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); - int i, queue_id; - for (i = 0; i < kinfo->num_tqps; i++) { - /* ring enable */ - queue_id = hclgevf_get_queue_id(kinfo->tqp[i]); - if (queue_id < 0) { - dev_warn(&hdev->pdev->dev, - "Get invalid queue id, ignore it\n"); - continue; - } - - hclgevf_tqp_enable(hdev, queue_id, 0, true); + if (enable) { + mod_timer(&hdev->service_timer, jiffies + HZ); + } else { + del_timer_sync(&hdev->service_timer); + cancel_work_sync(&hdev->service_task); + clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state); } +} + +static int hclgevf_ae_start(struct hnae3_handle *handle) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); /* reset tqp stats */ hclgevf_reset_tqp_stats(handle); @@ -1588,45 +1863,59 @@ static int hclgevf_ae_start(struct hnae3_handle *handle) hclgevf_request_link_info(hdev); clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); - mod_timer(&hdev->service_timer, jiffies + HZ); return 0; } static void hclgevf_ae_stop(struct hnae3_handle *handle) { - struct hnae3_knic_private_info *kinfo = &handle->kinfo; struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); - int i, queue_id; + int i; set_bit(HCLGEVF_STATE_DOWN, &hdev->state); - for (i = 0; i < kinfo->num_tqps; i++) { - /* Ring disable */ - queue_id = hclgevf_get_queue_id(kinfo->tqp[i]); - if (queue_id < 0) { - dev_warn(&hdev->pdev->dev, - "Get invalid queue id, ignore it\n"); - continue; - } - - hclgevf_tqp_enable(hdev, queue_id, 0, false); - } + for (i = 0; i < handle->kinfo.num_tqps; i++) + hclgevf_reset_tqp(handle, i); /* reset tqp stats */ 
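
/* hclgevf_check_evt_cause() above folds "which event fired" and "which bits
 * to write back to the source register" into one pass, and the IRQ handler
 * only acks and re-enables the vector for causes it recognized. A userspace
 * model of that decode step (bit positions are illustrative):
 */
#include <assert.h>
#include <stdint.h>

enum evt { EVT_RST, EVT_MBX, EVT_OTHER };
#define BIT_RST (1u << 2)
#define BIT_MBX (1u << 1)

static enum evt decode(uint32_t src, uint32_t *clearval)
{
        /* reset outranks mailbox, so it is checked first */
        if (src & BIT_RST) {
                *clearval = src & ~BIT_RST;
                return EVT_RST;
        }
        if (src & BIT_MBX) {
                *clearval = src & ~BIT_MBX;
                return EVT_MBX;
        }
        return EVT_OTHER;       /* unknown source: caller must not ack blindly */
}

int main(void)
{
        uint32_t clr;

        assert(decode(BIT_RST | BIT_MBX, &clr) == EVT_RST && clr == BIT_MBX);
        assert(decode(BIT_MBX, &clr) == EVT_MBX && clr == 0);
        assert(decode(0, &clr) == EVT_OTHER);
        return 0;
}
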
hclgevf_reset_tqp_stats(handle); - del_timer_sync(&hdev->service_timer); - cancel_work_sync(&hdev->service_task); - clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state); hclgevf_update_link_status(hdev, 0); } -static void hclgevf_state_init(struct hclgevf_dev *hdev) +static int hclgevf_set_alive(struct hnae3_handle *handle, bool alive) { - /* if this is on going reset then skip this initialization */ - if (hclgevf_dev_ongoing_reset(hdev)) - return; + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + u8 msg_data; + + msg_data = alive ? 1 : 0; + return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_ALIVE, + 0, &msg_data, 1, false, NULL, 0); +} + +static int hclgevf_client_start(struct hnae3_handle *handle) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + + mod_timer(&hdev->keep_alive_timer, jiffies + 2 * HZ); + return hclgevf_set_alive(handle, true); +} + +static void hclgevf_client_stop(struct hnae3_handle *handle) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + int ret; + + ret = hclgevf_set_alive(handle, false); + if (ret) + dev_warn(&hdev->pdev->dev, + "%s failed %d\n", __func__, ret); + del_timer_sync(&hdev->keep_alive_timer); + cancel_work_sync(&hdev->keep_alive_task); +} + +static void hclgevf_state_init(struct hclgevf_dev *hdev) +{ /* setup tasks for the MBX */ INIT_WORK(&hdev->mbx_service_task, hclgevf_mailbox_service_task); clear_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state); @@ -1668,10 +1957,6 @@ static int hclgevf_init_msi(struct hclgevf_dev *hdev) int vectors; int i; - /* if this is on going reset then skip this initialization */ - if (hclgevf_dev_ongoing_reset(hdev)) - return 0; - if (hnae3_get_bit(hdev->ae_dev->flag, HNAE3_DEV_SUPPORT_ROCE_B)) vectors = pci_alloc_irq_vectors(pdev, hdev->roce_base_msix_offset + 1, @@ -1710,6 +1995,7 @@ static int hclgevf_init_msi(struct hclgevf_dev *hdev) hdev->vector_irq = devm_kcalloc(&pdev->dev, hdev->num_msi, sizeof(int), GFP_KERNEL); if (!hdev->vector_irq) { + devm_kfree(&pdev->dev, hdev->vector_status); pci_free_irq_vectors(pdev); return -ENOMEM; } @@ -1721,6 +2007,8 @@ static void hclgevf_uninit_msi(struct hclgevf_dev *hdev) { struct pci_dev *pdev = hdev->pdev; + devm_kfree(&pdev->dev, hdev->vector_status); + devm_kfree(&pdev->dev, hdev->vector_irq); pci_free_irq_vectors(pdev); } @@ -1728,10 +2016,6 @@ static int hclgevf_misc_irq_init(struct hclgevf_dev *hdev) { int ret = 0; - /* if this is on going reset then skip this initialization */ - if (hclgevf_dev_ongoing_reset(hdev)) - return 0; - hclgevf_get_misc_vector(hdev); ret = request_irq(hdev->misc_vector.vector_irq, hclgevf_misc_irq_handle, @@ -1861,14 +2145,6 @@ static int hclgevf_pci_init(struct hclgevf_dev *hdev) struct hclgevf_hw *hw; int ret; - /* check if we need to skip initialization of pci. This will happen if - * device is undergoing VF reset. Otherwise, we would need to - * re-initialize pci interface again i.e. when device is not going - * through *any* reset or actually undergoing full reset. 
- */ - if (hclgevf_dev_ongoing_reset(hdev)) - return 0; - ret = pci_enable_device(pdev); if (ret) { dev_err(&pdev->dev, "failed to enable PCI device\n"); @@ -1957,23 +2233,98 @@ static int hclgevf_query_vf_resource(struct hclgevf_dev *hdev) return 0; } -static int hclgevf_init_hdev(struct hclgevf_dev *hdev) +static int hclgevf_pci_reset(struct hclgevf_dev *hdev) +{ + struct pci_dev *pdev = hdev->pdev; + int ret = 0; + + if (hdev->reset_type == HNAE3_VF_FULL_RESET && + test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) { + hclgevf_misc_irq_uninit(hdev); + hclgevf_uninit_msi(hdev); + clear_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state); + } + + if (!test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) { + pci_set_master(pdev); + ret = hclgevf_init_msi(hdev); + if (ret) { + dev_err(&pdev->dev, + "failed(%d) to init MSI/MSI-X\n", ret); + return ret; + } + + ret = hclgevf_misc_irq_init(hdev); + if (ret) { + hclgevf_uninit_msi(hdev); + dev_err(&pdev->dev, "failed(%d) to init Misc IRQ(vector0)\n", + ret); + return ret; + } + + set_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state); + } + + return ret; +} + +static int hclgevf_reset_hdev(struct hclgevf_dev *hdev) { struct pci_dev *pdev = hdev->pdev; int ret; - /* check if device is on-going full reset(i.e. pcie as well) */ - if (hclgevf_dev_ongoing_full_reset(hdev)) { - dev_warn(&pdev->dev, "device is going full reset\n"); - hclgevf_uninit_hdev(hdev); + ret = hclgevf_pci_reset(hdev); + if (ret) { + dev_err(&pdev->dev, "pci reset failed %d\n", ret); + return ret; + } + + ret = hclgevf_cmd_init(hdev); + if (ret) { + dev_err(&pdev->dev, "cmd failed %d\n", ret); + return ret; } + ret = hclgevf_rss_init_hw(hdev); + if (ret) { + dev_err(&hdev->pdev->dev, + "failed(%d) to initialize RSS\n", ret); + return ret; + } + + ret = hclgevf_config_gro(hdev, true); + if (ret) + return ret; + + ret = hclgevf_init_vlan_config(hdev); + if (ret) { + dev_err(&hdev->pdev->dev, + "failed(%d) to initialize VLAN config\n", ret); + return ret; + } + + dev_info(&hdev->pdev->dev, "Reset done\n"); + + return 0; +} + +static int hclgevf_init_hdev(struct hclgevf_dev *hdev) +{ + struct pci_dev *pdev = hdev->pdev; + int ret; + ret = hclgevf_pci_init(hdev); if (ret) { dev_err(&pdev->dev, "PCI initialization failed\n"); return ret; } + ret = hclgevf_cmd_queue_init(hdev); + if (ret) { + dev_err(&pdev->dev, "Cmd queue init failed: %d\n", ret); + goto err_cmd_queue_init; + } + ret = hclgevf_cmd_init(hdev); if (ret) goto err_cmd_init; @@ -1983,16 +2334,17 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev) if (ret) { dev_err(&hdev->pdev->dev, "Query vf status error, ret = %d.\n", ret); - goto err_query_vf; + goto err_cmd_init; } ret = hclgevf_init_msi(hdev); if (ret) { dev_err(&pdev->dev, "failed(%d) to init MSI/MSI-X\n", ret); - goto err_query_vf; + goto err_cmd_init; } hclgevf_state_init(hdev); + hdev->reset_level = HNAE3_VF_FUNC_RESET; ret = hclgevf_misc_irq_init(hdev); if (ret) { @@ -2001,6 +2353,8 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev) goto err_misc_irq_init; } + set_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state); + ret = hclgevf_configure(hdev); if (ret) { dev_err(&pdev->dev, "failed(%d) to fetch configuration\n", ret); @@ -2019,6 +2373,10 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev) goto err_config; } + ret = hclgevf_config_gro(hdev, true); + if (ret) + goto err_config; + /* Initialize RSS for this VF */ ret = hclgevf_rss_init_hw(hdev); if (ret) { @@ -2034,6 +2392,7 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev) goto err_config; } + 
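
/* hclgevf_pci_reset() above keys all vector setup and teardown off a single
 * IRQ_INITED state bit, so a full VF reset re-creates the vectors while a
 * lighter reset leaves them alone, and uninit can never double-free. A
 * compact model of that idempotent init/uninit guard:
 */
#include <stdbool.h>
#include <stdio.h>

struct dev_state {
        bool irq_inited;
};

static int irq_init_once(struct dev_state *s)
{
        if (s->irq_inited)
                return 0;               /* already up: nothing to do */
        /* ... request vectors here ... */
        s->irq_inited = true;
        return 0;
}

static void irq_uninit_if_inited(struct dev_state *s)
{
        if (!s->irq_inited)
                return;                 /* never set up, or already torn down */
        /* ... free vectors here ... */
        s->irq_inited = false;
}

int main(void)
{
        struct dev_state s = { 0 };

        irq_init_once(&s);
        irq_init_once(&s);              /* safe: second call is a no-op */
        irq_uninit_if_inited(&s);
        irq_uninit_if_inited(&s);       /* safe: guard prevents double free */
        printf("inited=%d\n", s.irq_inited);
        return 0;
}
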
hdev->last_reset_time = jiffies; pr_info("finished initializing %s driver\n", HCLGEVF_DRIVER_NAME); return 0; @@ -2043,25 +2402,31 @@ err_config: err_misc_irq_init: hclgevf_state_uninit(hdev); hclgevf_uninit_msi(hdev); -err_query_vf: - hclgevf_cmd_uninit(hdev); err_cmd_init: + hclgevf_cmd_uninit(hdev); +err_cmd_queue_init: hclgevf_pci_uninit(hdev); + clear_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state); return ret; } static void hclgevf_uninit_hdev(struct hclgevf_dev *hdev) { hclgevf_state_uninit(hdev); - hclgevf_misc_irq_uninit(hdev); - hclgevf_cmd_uninit(hdev); - hclgevf_uninit_msi(hdev); + + if (test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) { + hclgevf_misc_irq_uninit(hdev); + hclgevf_uninit_msi(hdev); + } + hclgevf_pci_uninit(hdev); + hclgevf_cmd_uninit(hdev); } static int hclgevf_init_ae_dev(struct hnae3_ae_dev *ae_dev) { struct pci_dev *pdev = ae_dev->pdev; + struct hclgevf_dev *hdev; int ret; ret = hclgevf_alloc_hdev(ae_dev); @@ -2071,10 +2436,16 @@ static int hclgevf_init_ae_dev(struct hnae3_ae_dev *ae_dev) } ret = hclgevf_init_hdev(ae_dev->priv); - if (ret) + if (ret) { dev_err(&pdev->dev, "hclge device initialization failed\n"); + return ret; + } - return ret; + hdev = ae_dev->priv; + timer_setup(&hdev->keep_alive_timer, hclgevf_keep_alive_timer, 0); + INIT_WORK(&hdev->keep_alive_task, hclgevf_keep_alive_task); + + return 0; } static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev) @@ -2151,6 +2522,13 @@ void hclgevf_update_speed_duplex(struct hclgevf_dev *hdev, u32 speed, hdev->hw.mac.duplex = duplex; } +static int hclgevf_gro_en(struct hnae3_handle *handle, int enable) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + + return hclgevf_config_gro(hdev, enable); +} + static void hclgevf_get_media_type(struct hnae3_handle *handle, u8 *media_type) { @@ -2159,13 +2537,104 @@ static void hclgevf_get_media_type(struct hnae3_handle *handle, *media_type = hdev->hw.mac.media_type; } +static bool hclgevf_get_hw_reset_stat(struct hnae3_handle *handle) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + + return !!hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING); +} + +static bool hclgevf_ae_dev_resetting(struct hnae3_handle *handle) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + + return test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state); +} + +static unsigned long hclgevf_ae_dev_reset_cnt(struct hnae3_handle *handle) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + + return hdev->reset_count; +} + +#define MAX_SEPARATE_NUM 4 +#define SEPARATOR_VALUE 0xFFFFFFFF +#define REG_NUM_PER_LINE 4 +#define REG_LEN_PER_LINE (REG_NUM_PER_LINE * sizeof(u32)) + +static int hclgevf_get_regs_len(struct hnae3_handle *handle) +{ + int cmdq_lines, common_lines, ring_lines, tqp_intr_lines; + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + + cmdq_lines = sizeof(cmdq_reg_addr_list) / REG_LEN_PER_LINE + 1; + common_lines = sizeof(common_reg_addr_list) / REG_LEN_PER_LINE + 1; + ring_lines = sizeof(ring_reg_addr_list) / REG_LEN_PER_LINE + 1; + tqp_intr_lines = sizeof(tqp_intr_reg_addr_list) / REG_LEN_PER_LINE + 1; + + return (cmdq_lines + common_lines + ring_lines * hdev->num_tqps + + tqp_intr_lines * (hdev->num_msi_used - 1)) * REG_LEN_PER_LINE; +} + +static void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version, + void *data) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + int i, j, reg_um, separator_num; + u32 *reg = data; + + *version = hdev->fw_version; + + /* fetching per-VF registers values from VF PCIe register 
space */ + reg_um = sizeof(cmdq_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (i = 0; i < reg_um; i++) + *reg++ = hclgevf_read_dev(&hdev->hw, cmdq_reg_addr_list[i]); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + + reg_um = sizeof(common_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (i = 0; i < reg_um; i++) + *reg++ = hclgevf_read_dev(&hdev->hw, common_reg_addr_list[i]); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + + reg_um = sizeof(ring_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (j = 0; j < hdev->num_tqps; j++) { + for (i = 0; i < reg_um; i++) + *reg++ = hclgevf_read_dev(&hdev->hw, + ring_reg_addr_list[i] + + 0x200 * j); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + } + + reg_um = sizeof(tqp_intr_reg_addr_list) / sizeof(u32); + separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE; + for (j = 0; j < hdev->num_msi_used - 1; j++) { + for (i = 0; i < reg_um; i++) + *reg++ = hclgevf_read_dev(&hdev->hw, + tqp_intr_reg_addr_list[i] + + 4 * j); + for (i = 0; i < separator_num; i++) + *reg++ = SEPARATOR_VALUE; + } +} + static const struct hnae3_ae_ops hclgevf_ops = { .init_ae_dev = hclgevf_init_ae_dev, .uninit_ae_dev = hclgevf_uninit_ae_dev, + .flr_prepare = hclgevf_flr_prepare, + .flr_done = hclgevf_flr_done, .init_client_instance = hclgevf_init_client_instance, .uninit_client_instance = hclgevf_uninit_client_instance, .start = hclgevf_ae_start, .stop = hclgevf_ae_stop, + .client_start = hclgevf_client_start, + .client_stop = hclgevf_client_stop, .map_ring_to_vector = hclgevf_map_ring_to_vector, .unmap_ring_from_vector = hclgevf_unmap_ring_from_vector, .get_vector = hclgevf_get_vector, @@ -2193,11 +2662,21 @@ static const struct hnae3_ae_ops hclgevf_ops = { .set_vlan_filter = hclgevf_set_vlan_filter, .enable_hw_strip_rxvtag = hclgevf_en_hw_strip_rxvtag, .reset_event = hclgevf_reset_event, + .set_default_reset_request = hclgevf_set_def_reset_request, .get_channels = hclgevf_get_channels, .get_tqps_and_rss_info = hclgevf_get_tqps_and_rss_info, + .get_regs_len = hclgevf_get_regs_len, + .get_regs = hclgevf_get_regs, .get_status = hclgevf_get_status, .get_ksettings_an_result = hclgevf_get_ksettings_an_result, .get_media_type = hclgevf_get_media_type, + .get_hw_reset_stat = hclgevf_get_hw_reset_stat, + .ae_dev_resetting = hclgevf_ae_dev_resetting, + .ae_dev_reset_cnt = hclgevf_ae_dev_reset_cnt, + .set_gro_en = hclgevf_gro_en, + .set_mtu = hclgevf_set_mtu, + .get_global_queue_id = hclgevf_get_qid_global, + .set_timer_task = hclgevf_set_timer_task, }; static struct hnae3_ae_algo ae_algovf = { diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h index aed241e8ffab..787bc06944e5 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h @@ -27,15 +27,77 @@ #define HCLGEVF_VECTOR_REG_OFFSET 0x4 #define HCLGEVF_VECTOR_VF_OFFSET 0x100000 +/* bar registers for cmdq */ +#define HCLGEVF_CMDQ_TX_ADDR_L_REG 0x27000 +#define HCLGEVF_CMDQ_TX_ADDR_H_REG 0x27004 +#define HCLGEVF_CMDQ_TX_DEPTH_REG 0x27008 +#define HCLGEVF_CMDQ_TX_TAIL_REG 0x27010 +#define HCLGEVF_CMDQ_TX_HEAD_REG 0x27014 +#define HCLGEVF_CMDQ_RX_ADDR_L_REG 0x27018 +#define HCLGEVF_CMDQ_RX_ADDR_H_REG 0x2701C +#define HCLGEVF_CMDQ_RX_DEPTH_REG 0x27020 +#define 
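
/* The dump sizing in hclgevf_get_regs_len()/hclgevf_get_regs() earlier in
 * this hunk relies on a small invariant: each address list is padded with
 * SEPARATOR_VALUE words up to the next 4-word line, so budgeting
 * "sizeof(list)/REG_LEN_PER_LINE + 1" lines per list is always exactly
 * enough. A worked check of that arithmetic (the list lengths are made up):
 */
#include <stdio.h>

#define REG_NUM_PER_LINE 4
#define MAX_SEPARATE_NUM 4

int main(void)
{
        int lens[] = { 7, 8, 14 };      /* hypothetical list lengths in u32s */

        for (int i = 0; i < 3; i++) {
                int reg_um = lens[i];
                int sep = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
                int budget = (reg_um / REG_NUM_PER_LINE + 1) * REG_NUM_PER_LINE;

                /* emitted words = registers + separator padding */
                printf("%d regs + %d seps = %d words, budget %d\n",
                       reg_um, sep, reg_um + sep, budget);
        }
        return 0;
        /* prints:  7 regs + 1 seps = 8 words, budget 8
         *          8 regs + 4 seps = 12 words, budget 12
         *         14 regs + 2 seps = 16 words, budget 16 */
}
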
HCLGEVF_CMDQ_RX_TAIL_REG 0x27024 +#define HCLGEVF_CMDQ_RX_HEAD_REG 0x27028 +#define HCLGEVF_CMDQ_INTR_SRC_REG 0x27100 +#define HCLGEVF_CMDQ_INTR_STS_REG 0x27104 +#define HCLGEVF_CMDQ_INTR_EN_REG 0x27108 +#define HCLGEVF_CMDQ_INTR_GEN_REG 0x2710C + +/* bar registers for common func */ +#define HCLGEVF_GRO_EN_REG 0x28000 + +/* bar registers for rcb */ +#define HCLGEVF_RING_RX_ADDR_L_REG 0x80000 +#define HCLGEVF_RING_RX_ADDR_H_REG 0x80004 +#define HCLGEVF_RING_RX_BD_NUM_REG 0x80008 +#define HCLGEVF_RING_RX_BD_LENGTH_REG 0x8000C +#define HCLGEVF_RING_RX_MERGE_EN_REG 0x80014 +#define HCLGEVF_RING_RX_TAIL_REG 0x80018 +#define HCLGEVF_RING_RX_HEAD_REG 0x8001C +#define HCLGEVF_RING_RX_FBD_NUM_REG 0x80020 +#define HCLGEVF_RING_RX_OFFSET_REG 0x80024 +#define HCLGEVF_RING_RX_FBD_OFFSET_REG 0x80028 +#define HCLGEVF_RING_RX_STASH_REG 0x80030 +#define HCLGEVF_RING_RX_BD_ERR_REG 0x80034 +#define HCLGEVF_RING_TX_ADDR_L_REG 0x80040 +#define HCLGEVF_RING_TX_ADDR_H_REG 0x80044 +#define HCLGEVF_RING_TX_BD_NUM_REG 0x80048 +#define HCLGEVF_RING_TX_PRIORITY_REG 0x8004C +#define HCLGEVF_RING_TX_TC_REG 0x80050 +#define HCLGEVF_RING_TX_MERGE_EN_REG 0x80054 +#define HCLGEVF_RING_TX_TAIL_REG 0x80058 +#define HCLGEVF_RING_TX_HEAD_REG 0x8005C +#define HCLGEVF_RING_TX_FBD_NUM_REG 0x80060 +#define HCLGEVF_RING_TX_OFFSET_REG 0x80064 +#define HCLGEVF_RING_TX_EBD_NUM_REG 0x80068 +#define HCLGEVF_RING_TX_EBD_OFFSET_REG 0x80070 +#define HCLGEVF_RING_TX_BD_ERR_REG 0x80074 +#define HCLGEVF_RING_EN_REG 0x80090 + +/* bar registers for tqp interrupt */ +#define HCLGEVF_TQP_INTR_CTRL_REG 0x20000 +#define HCLGEVF_TQP_INTR_GL0_REG 0x20100 +#define HCLGEVF_TQP_INTR_GL1_REG 0x20200 +#define HCLGEVF_TQP_INTR_GL2_REG 0x20300 +#define HCLGEVF_TQP_INTR_RL_REG 0x20900 + /* Vector0 interrupt CMDQ event source register(RW) */ #define HCLGEVF_VECTOR0_CMDQ_SRC_REG 0x27100 /* CMDQ register bits for RX event(=MBX event) */ #define HCLGEVF_VECTOR0_RX_CMDQ_INT_B 1 +/* RST register bits for RESET event */ +#define HCLGEVF_VECTOR0_RST_INT_B 2 #define HCLGEVF_TQP_RESET_TRY_TIMES 10 /* Reset related Registers */ -#define HCLGEVF_FUN_RST_ING 0x20C00 -#define HCLGEVF_FUN_RST_ING_B 0 +#define HCLGEVF_RST_ING 0x20C00 +#define HCLGEVF_FUN_RST_ING_BIT BIT(0) +#define HCLGEVF_GLOBAL_RST_ING_BIT BIT(5) +#define HCLGEVF_CORE_RST_ING_BIT BIT(6) +#define HCLGEVF_IMP_RST_ING_BIT BIT(7) +#define HCLGEVF_RST_ING_BITS \ + (HCLGEVF_FUN_RST_ING_BIT | HCLGEVF_GLOBAL_RST_ING_BIT | \ + HCLGEVF_CORE_RST_ING_BIT | HCLGEVF_IMP_RST_ING_BIT) #define HCLGEVF_RSS_IND_TBL_SIZE 512 #define HCLGEVF_RSS_SET_BITMAP_MSK 0xffff @@ -54,17 +116,25 @@ #define HCLGEVF_S_IP_BIT BIT(3) #define HCLGEVF_V_TAG_BIT BIT(4) +enum hclgevf_evt_cause { + HCLGEVF_VECTOR0_EVENT_RST, + HCLGEVF_VECTOR0_EVENT_MBX, + HCLGEVF_VECTOR0_EVENT_OTHER, +}; + /* states of hclgevf device & tasks */ enum hclgevf_states { /* device states */ HCLGEVF_STATE_DOWN, HCLGEVF_STATE_DISABLED, + HCLGEVF_STATE_IRQ_INITED, /* task states */ HCLGEVF_STATE_SERVICE_SCHED, HCLGEVF_STATE_RST_SERVICE_SCHED, HCLGEVF_STATE_RST_HANDLING, HCLGEVF_STATE_MBX_SERVICE_SCHED, HCLGEVF_STATE_MBX_HANDLING, + HCLGEVF_STATE_CMD_DISABLE, }; #define HCLGEVF_MPF_ENBALE 1 @@ -145,10 +215,17 @@ struct hclgevf_dev { struct hclgevf_misc_vector misc_vector; struct hclgevf_rss_cfg rss_cfg; unsigned long state; + unsigned long flr_state; + unsigned long default_reset_request; + unsigned long last_reset_time; + enum hnae3_reset_type reset_level; + unsigned long reset_pending; + enum hnae3_reset_type reset_type; #define HCLGEVF_RESET_REQUESTED 0 #define 
HCLGEVF_RESET_PENDING 1 unsigned long reset_state; /* requested, pending */ + unsigned long reset_count; /* the number of reset has been done */ u32 reset_attempts; u32 fw_version; @@ -178,7 +255,9 @@ struct hclgevf_dev { struct hclgevf_mbx_arq_ring arq; /* mailbox async rx queue */ struct timer_list service_timer; + struct timer_list keep_alive_timer; struct work_struct service_task; + struct work_struct keep_alive_task; struct work_struct rst_service_task; struct work_struct mbx_service_task; @@ -192,18 +271,9 @@ struct hclgevf_dev { u32 flag; }; -static inline bool hclgevf_dev_ongoing_reset(struct hclgevf_dev *hdev) -{ - return (hdev && - (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state)) && - (hdev->nic.reset_level == HNAE3_VF_RESET)); -} - -static inline bool hclgevf_dev_ongoing_full_reset(struct hclgevf_dev *hdev) +static inline bool hclgevf_is_reset_pending(struct hclgevf_dev *hdev) { - return (hdev && - (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state)) && - (hdev->nic.reset_level == HNAE3_VF_FULL_RESET)); + return !!hdev->reset_pending; } int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode, diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c index e9d5a4f96304..84653f58b2d1 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c @@ -26,7 +26,7 @@ static int hclgevf_get_mbx_resp(struct hclgevf_dev *hdev, u16 code0, u16 code1, u8 *resp_data, u16 resp_len) { #define HCLGEVF_MAX_TRY_TIMES 500 -#define HCLGEVF_SLEEP_USCOEND 1000 +#define HCLGEVF_SLEEP_USECOND 1000 struct hclgevf_mbx_resp_status *mbx_resp; u16 r_code0, r_code1; int i = 0; @@ -40,7 +40,10 @@ static int hclgevf_get_mbx_resp(struct hclgevf_dev *hdev, u16 code0, u16 code1, } while ((!hdev->mbx_resp.received_resp) && (i < HCLGEVF_MAX_TRY_TIMES)) { - udelay(HCLGEVF_SLEEP_USCOEND); + if (test_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state)) + return -EIO; + + usleep_range(HCLGEVF_SLEEP_USECOND, HCLGEVF_SLEEP_USECOND * 2); i++; } @@ -148,6 +151,11 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev) crq = &hdev->hw.cmq.crq; while (!hclgevf_cmd_crq_empty(&hdev->hw)) { + if (test_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state)) { + dev_info(&hdev->pdev->dev, "vf crq need init\n"); + return; + } + desc = &crq->desc[crq->next_to_use]; req = (struct hclge_mbx_pf_to_vf_cmd *)desc->data; @@ -233,6 +241,7 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev) void hclgevf_mbx_async_handler(struct hclgevf_dev *hdev) { + enum hnae3_reset_type reset_type; u16 link_status; u16 *msg_q; u8 duplex; @@ -248,6 +257,12 @@ void hclgevf_mbx_async_handler(struct hclgevf_dev *hdev) /* process all the async queue messages */ while (tail != hdev->arq.head) { + if (test_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state)) { + dev_info(&hdev->pdev->dev, + "vf crq need init in async\n"); + return; + } + msg_q = hdev->arq.msg_q[hdev->arq.head]; switch (msg_q[0]) { @@ -267,7 +282,8 @@ void hclgevf_mbx_async_handler(struct hclgevf_dev *hdev) * has been completely reset. After this stack should * eventually be re-initialized. 
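
/* The mailbox wait loop in hclgevf_get_mbx_resp() below makes two changes
 * worth noting: udelay() (a busy-wait) becomes usleep_range() (the CPU is
 * yielded, acceptable because the caller may sleep), and every iteration
 * first checks a CMD_DISABLE state bit so an in-progress reset aborts the
 * wait instead of timing out 500 times. A condensed sketch, names
 * hypothetical:
 */
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EX_STATE_CMD_DISABLE 0
#define WAIT_TRIES 500
#define WAIT_MIN_US 1000

struct ex_dev {
        unsigned long state;                    /* driver state bits */
};

bool ex_reply_received(struct ex_dev *dev);     /* assumed helper */

static int ex_wait_for_reply(struct ex_dev *dev)
{
        int i = 0;

        while (!ex_reply_received(dev) && i < WAIT_TRIES) {
                if (test_bit(EX_STATE_CMD_DISABLE, &dev->state))
                        return -EIO;    /* queue being reset: bail out */

                usleep_range(WAIT_MIN_US, WAIT_MIN_US * 2);
                i++;
        }

        return ex_reply_received(dev) ? 0 : -ETIMEDOUT;
}
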
*/ - hdev->nic.reset_level = HNAE3_VF_RESET; + reset_type = le16_to_cpu(msg_q[1]); + set_bit(reset_type, &hdev->reset_pending); set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); hclgevf_reset_task_schedule(hdev); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h index 097b5502603f..d1a7d2522d82 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h @@ -50,6 +50,8 @@ enum hinic_port_cmd { HINIC_PORT_CMD_GET_LINK_STATE = 24, + HINIC_PORT_CMD_SET_RX_CSUM = 26, + HINIC_PORT_CMD_SET_PORT_STATE = 41, HINIC_PORT_CMD_FWCTXT_INIT = 69, diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c index f92f1bf3901a..1dfa7eb05c10 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c @@ -74,12 +74,6 @@ ((void *)((cmdq_pages)->shadow_page_vaddr) \ + (wq)->block_idx * CMDQ_BLOCK_SIZE) -#define WQE_PAGE_OFF(wq, idx) (((idx) & ((wq)->num_wqebbs_per_page - 1)) * \ - (wq)->wqebb_size) - -#define WQE_PAGE_NUM(wq, idx) (((idx) / ((wq)->num_wqebbs_per_page)) \ - & ((wq)->num_q_pages - 1)) - #define WQ_PAGE_ADDR(wq, idx) \ ((wq)->shadow_block_vaddr[WQE_PAGE_NUM(wq, idx)]) @@ -93,6 +87,17 @@ (((unsigned long)(wqe) - (unsigned long)(wq)->shadow_wqe) \ / (wq)->max_wqe_size) +static inline int WQE_PAGE_OFF(struct hinic_wq *wq, u16 idx) +{ + return (((idx) & ((wq)->num_wqebbs_per_page - 1)) + << (wq)->wqebb_size_shift); +} + +static inline int WQE_PAGE_NUM(struct hinic_wq *wq, u16 idx) +{ + return (((idx) >> ((wq)->wqebbs_per_page_shift)) + & ((wq)->num_q_pages - 1)); +} /** * queue_alloc_page - allocate page for Queue * @hwif: HW interface for allocating DMA @@ -513,10 +518,11 @@ int hinic_wq_allocate(struct hinic_wqs *wqs, struct hinic_wq *wq, struct hinic_hwif *hwif = wqs->hwif; struct pci_dev *pdev = hwif->pdev; u16 num_wqebbs_per_page; + u16 wqebb_size_shift; int err; - if (wqebb_size == 0) { - dev_err(&pdev->dev, "wqebb_size must be > 0\n"); + if (!is_power_of_2(wqebb_size)) { + dev_err(&pdev->dev, "wqebb_size must be power of 2\n"); return -EINVAL; } @@ -530,9 +536,11 @@ int hinic_wq_allocate(struct hinic_wqs *wqs, struct hinic_wq *wq, return -EINVAL; } - num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) / wqebb_size; + wqebb_size_shift = ilog2(wqebb_size); + num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) + >> wqebb_size_shift; - if (num_wqebbs_per_page & (num_wqebbs_per_page - 1)) { + if (!is_power_of_2(num_wqebbs_per_page)) { dev_err(&pdev->dev, "num wqebbs per page must be power of 2\n"); return -EINVAL; } @@ -550,7 +558,8 @@ int hinic_wq_allocate(struct hinic_wqs *wqs, struct hinic_wq *wq, wq->q_depth = q_depth; wq->max_wqe_size = max_wqe_size; wq->num_wqebbs_per_page = num_wqebbs_per_page; - + wq->wqebbs_per_page_shift = ilog2(num_wqebbs_per_page); + wq->wqebb_size_shift = wqebb_size_shift; wq->block_vaddr = WQ_BASE_VADDR(wqs, wq); wq->shadow_block_vaddr = WQ_BASE_ADDR(wqs, wq); wq->block_paddr = WQ_BASE_PADDR(wqs, wq); @@ -604,11 +613,13 @@ int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages, u16 q_depth, u16 max_wqe_size) { struct pci_dev *pdev = hwif->pdev; + u16 num_wqebbs_per_page_shift; u16 num_wqebbs_per_page; + u16 wqebb_size_shift; int i, j, err = -ENOMEM; - if (wqebb_size == 0) { - dev_err(&pdev->dev, "wqebb_size must be > 0\n"); + if (!is_power_of_2(wqebb_size)) { + dev_err(&pdev->dev, "wqebb_size must be power of 2\n"); return -EINVAL; } 
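
/* The hinic rework above replaces divide/modulo on WQEBB sizes with
 * shift/mask, which is only valid because hinic_wq_allocate() now rejects
 * sizes that are not powers of two (is_power_of_2()/ilog2()). A standalone
 * userspace check of the equivalences being relied on:
 */
#include <assert.h>
#include <stdint.h>

static int ilog2_u32(uint32_t v)        /* stand-in for the kernel's ilog2() */
{
        int n = -1;

        while (v) {
                v >>= 1;
                n++;
        }
        return n;
}

int main(void)
{
        uint32_t wqebb_size = 64;               /* power of two, e.g. 64B */
        uint32_t per_page = 256;                /* WQEBBs per page */
        int size_shift = ilog2_u32(wqebb_size); /* 6 */
        int page_shift = ilog2_u32(per_page);   /* 8 */

        for (uint32_t idx = 0; idx < 4096; idx += 37) {
                /* offset within page: modulo == mask, then scale == shift */
                assert((idx & (per_page - 1)) * wqebb_size ==
                       ((idx & (per_page - 1)) << size_shift));
                /* page number: divide == shift */
                assert(idx / per_page == idx >> page_shift);
        }
        return 0;
}
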
@@ -622,9 +633,11 @@ int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages, return -EINVAL; } - num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) / wqebb_size; + wqebb_size_shift = ilog2(wqebb_size); + num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) + >> wqebb_size_shift; - if (num_wqebbs_per_page & (num_wqebbs_per_page - 1)) { + if (!is_power_of_2(num_wqebbs_per_page)) { dev_err(&pdev->dev, "num wqebbs per page must be power of 2\n"); return -EINVAL; } @@ -636,6 +649,7 @@ int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages, dev_err(&pdev->dev, "Failed to allocate CMDQ page\n"); return err; } + num_wqebbs_per_page_shift = ilog2(num_wqebbs_per_page); for (i = 0; i < cmdq_blocks; i++) { wq[i].hwif = hwif; @@ -647,7 +661,8 @@ int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages, wq[i].q_depth = q_depth; wq[i].max_wqe_size = max_wqe_size; wq[i].num_wqebbs_per_page = num_wqebbs_per_page; - + wq[i].wqebbs_per_page_shift = num_wqebbs_per_page_shift; + wq[i].wqebb_size_shift = wqebb_size_shift; wq[i].block_vaddr = CMDQ_BASE_VADDR(cmdq_pages, &wq[i]); wq[i].shadow_block_vaddr = CMDQ_BASE_ADDR(cmdq_pages, &wq[i]); wq[i].block_paddr = CMDQ_BASE_PADDR(cmdq_pages, &wq[i]); @@ -741,7 +756,7 @@ struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size, *prod_idx = MASKED_WQE_IDX(wq, atomic_read(&wq->prod_idx)); - num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size; + num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) >> wq->wqebb_size_shift; if (atomic_sub_return(num_wqebbs, &wq->delta) <= 0) { atomic_add(num_wqebbs, &wq->delta); @@ -795,7 +810,8 @@ void hinic_return_wqe(struct hinic_wq *wq, unsigned int wqe_size) **/ void hinic_put_wqe(struct hinic_wq *wq, unsigned int wqe_size) { - int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size; + int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) + >> wq->wqebb_size_shift; atomic_add(num_wqebbs, &wq->cons_idx); @@ -813,7 +829,8 @@ void hinic_put_wqe(struct hinic_wq *wq, unsigned int wqe_size) struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size, u16 *cons_idx) { - int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size; + int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) + >> wq->wqebb_size_shift; u16 curr_cons_idx, end_cons_idx; int curr_pg, end_pg; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h index 9b66545ba563..0a936cd6709b 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h @@ -39,7 +39,8 @@ struct hinic_wq { u16 q_depth; u16 max_wqe_size; u16 num_wqebbs_per_page; - + u16 wqebbs_per_page_shift; + u16 wqebb_size_shift; /* The addresses are 64 bit in the HW */ u64 block_paddr; void **shadow_block_vaddr; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h index 9754d6ed5f4a..138941527872 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h @@ -170,6 +170,10 @@ #define HINIC_RQ_CQE_STATUS_RXDONE_MASK 0x1 +#define HINIC_RQ_CQE_STATUS_CSUM_ERR_SHIFT 0 + +#define HINIC_RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU + #define HINIC_RQ_CQE_STATUS_GET(val, member) \ (((val) >> HINIC_RQ_CQE_STATUS_##member##_SHIFT) & \ HINIC_RQ_CQE_STATUS_##member##_MASK) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c index fdf2bdb6b0d0..6d48dc62a44b 100644 --- 
a/drivers/net/ethernet/huawei/hinic/hinic_main.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c @@ -600,9 +600,6 @@ static int add_mac_addr(struct net_device *netdev, const u8 *addr) u16 vid = 0; int err; - if (!is_valid_ether_addr(addr)) - return -EADDRNOTAVAIL; - netif_info(nic_dev, drv, netdev, "set mac addr = %02x %02x %02x %02x %02x %02x\n", addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]); @@ -726,6 +723,7 @@ static void set_rx_mode(struct work_struct *work) { struct hinic_rx_mode_work *rx_mode_work = work_to_rx_mode_work(work); struct hinic_dev *nic_dev = rx_mode_work_to_nic_dev(rx_mode_work); + struct netdev_hw_addr *ha; netif_info(nic_dev, drv, nic_dev->netdev, "set rx mode work\n"); @@ -733,6 +731,9 @@ static void set_rx_mode(struct work_struct *work) __dev_uc_sync(nic_dev->netdev, add_mac_addr, remove_mac_addr); __dev_mc_sync(nic_dev->netdev, add_mac_addr, remove_mac_addr); + + netdev_for_each_mc_addr(ha, nic_dev->netdev) + add_mac_addr(nic_dev->netdev, ha->addr); } static void hinic_set_rx_mode(struct net_device *netdev) @@ -806,7 +807,8 @@ static const struct net_device_ops hinic_netdev_ops = { static void netdev_features_init(struct net_device *netdev) { netdev->hw_features = NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_IP_CSUM | - NETIF_F_IPV6_CSUM | NETIF_F_TSO | NETIF_F_TSO6; + NETIF_F_IPV6_CSUM | NETIF_F_TSO | NETIF_F_TSO6 | + NETIF_F_RXCSUM; netdev->vlan_features = netdev->hw_features; @@ -869,12 +871,16 @@ static int set_features(struct hinic_dev *nic_dev, netdev_features_t features, bool force_change) { netdev_features_t changed = force_change ? ~0 : pre_features ^ features; + u32 csum_en = HINIC_RX_CSUM_OFFLOAD_EN; int err = 0; if (changed & NETIF_F_TSO) err = hinic_port_set_tso(nic_dev, (features & NETIF_F_TSO) ? HINIC_TSO_ENABLE : HINIC_TSO_DISABLE); + if (changed & NETIF_F_RXCSUM) + err = hinic_set_rx_csum_offload(nic_dev, csum_en); + return err; } diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.c b/drivers/net/ethernet/huawei/hinic/hinic_port.c index 7575a7d3bd9f..122c93597268 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_port.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_port.c @@ -409,3 +409,33 @@ int hinic_port_set_tso(struct hinic_dev *nic_dev, enum hinic_tso_state state) return 0; } + +int hinic_set_rx_csum_offload(struct hinic_dev *nic_dev, u32 en) +{ + struct hinic_checksum_offload rx_csum_cfg = {0}; + struct hinic_hwdev *hwdev = nic_dev->hwdev; + struct hinic_hwif *hwif; + struct pci_dev *pdev; + u16 out_size; + int err; + + if (!hwdev) + return -EINVAL; + + hwif = hwdev->hwif; + pdev = hwif->pdev; + rx_csum_cfg.func_id = HINIC_HWIF_FUNC_IDX(hwif); + rx_csum_cfg.rx_csum_offload = en; + + err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_RX_CSUM, + &rx_csum_cfg, sizeof(rx_csum_cfg), + &rx_csum_cfg, &out_size); + if (err || !out_size || rx_csum_cfg.status) { + dev_err(&pdev->dev, + "Failed to set rx csum offload, ret = %d\n", + rx_csum_cfg.status); + return -EINVAL; + } + + return 0; +} diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.h b/drivers/net/ethernet/huawei/hinic/hinic_port.h index f6e3220fe28f..02d896eed455 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_port.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_port.h @@ -183,6 +183,15 @@ struct hinic_tso_config { u8 resv2[3]; }; +struct hinic_checksum_offload { + u8 status; + u8 version; + u8 rsvd0[6]; + + u16 func_id; + u16 rsvd1; + u32 rx_csum_offload; +}; int hinic_port_add_mac(struct hinic_dev *nic_dev, const u8 *addr, u16 vlan_id); @@ -213,4 +222,5 @@ int 
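
/* set_features() above computes "which feature bits flipped" with a plain
 * XOR (pre_features ^ features), or ~0 when a full reapply is forced. A tiny
 * demonstration of why XOR is the right operator for change detection:
 */
#include <assert.h>
#include <stdint.h>

#define F_TSO    (1u << 0)
#define F_RXCSUM (1u << 1)

int main(void)
{
        uint64_t pre = F_TSO;                   /* TSO on, RXCSUM off */
        uint64_t now = F_RXCSUM;                /* TSO off, RXCSUM on */
        uint64_t changed = pre ^ now;           /* both bits flipped */

        assert(changed & F_TSO);
        assert(changed & F_RXCSUM);

        changed = pre ^ pre;                    /* nothing changed */
        assert(changed == 0);

        changed = ~0ULL;                        /* force_change reapplies all */
        assert((changed & F_TSO) && (changed & F_RXCSUM));
        return 0;
}
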
hinic_port_get_cap(struct hinic_dev *nic_dev, int hinic_port_set_tso(struct hinic_dev *nic_dev, enum hinic_tso_state state); +int hinic_set_rx_csum_offload(struct hinic_dev *nic_dev, u32 en); #endif diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c index 4c0f7eda1166..0098b206e7e9 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c @@ -43,6 +43,7 @@ #define RX_IRQ_NO_LLI_TIMER 0 #define RX_IRQ_NO_CREDIT 0 #define RX_IRQ_NO_RESEND_TIMER 0 +#define HINIC_RX_BUFFER_WRITE 16 /** * hinic_rxq_clean_stats - Clean the statistics of specific queue @@ -89,6 +90,28 @@ static void rxq_stats_init(struct hinic_rxq *rxq) hinic_rxq_clean_stats(rxq); } +static void rx_csum(struct hinic_rxq *rxq, u16 cons_idx, + struct sk_buff *skb) +{ + struct net_device *netdev = rxq->netdev; + struct hinic_rq_cqe *cqe; + struct hinic_rq *rq; + u32 csum_err; + u32 status; + + rq = rxq->rq; + cqe = rq->cqe[cons_idx]; + status = be32_to_cpu(cqe->status); + csum_err = HINIC_RQ_CQE_STATUS_GET(status, CSUM_ERR); + + if (!(netdev->features & NETIF_F_RXCSUM)) + return; + + if (!csum_err) + skb->ip_summed = CHECKSUM_UNNECESSARY; + else + skb->ip_summed = CHECKSUM_NONE; +} /** * rx_alloc_skb - allocate skb and map it to dma address * @rxq: rx queue @@ -209,7 +232,6 @@ skb_out: hinic_rq_update(rxq->rq, prod_idx); } - tasklet_schedule(&rxq->rx_task); return i; } @@ -236,17 +258,6 @@ static void free_all_rx_skbs(struct hinic_rxq *rxq) } } -/** - * rx_alloc_task - tasklet for queue allocation - * @data: rx queue - **/ -static void rx_alloc_task(unsigned long data) -{ - struct hinic_rxq *rxq = (struct hinic_rxq *)data; - - (void)rx_alloc_pkts(rxq); -} - /** * rx_recv_jumbo_pkt - Rx handler for jumbo pkt * @rxq: rx queue @@ -311,6 +322,7 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget) struct hinic_qp *qp = container_of(rxq->rq, struct hinic_qp, rq); u64 pkt_len = 0, rx_bytes = 0; struct hinic_rq_wqe *rq_wqe; + unsigned int free_wqebbs; int num_wqes, pkts = 0; struct hinic_sge sge; struct sk_buff *skb; @@ -328,6 +340,8 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget) rx_unmap_skb(rxq, hinic_sge_to_dma(&sge)); + rx_csum(rxq, ci, skb); + prefetch(skb->data); pkt_len = sge.len; @@ -352,8 +366,9 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget) rx_bytes += pkt_len; } - if (pkts) - tasklet_schedule(&rxq->rx_task); /* rx_alloc_pkts */ + free_wqebbs = hinic_get_rq_free_wqebbs(rxq->rq); + if (free_wqebbs > HINIC_RX_BUFFER_WRITE) + rx_alloc_pkts(rxq); u64_stats_update_begin(&rxq->rxq_stats.syncp); rxq->rxq_stats.pkts += pkts; @@ -470,8 +485,6 @@ int hinic_init_rxq(struct hinic_rxq *rxq, struct hinic_rq *rq, sprintf(rxq->irq_name, "hinic_rxq%d", qp->q_id); - tasklet_init(&rxq->rx_task, rx_alloc_task, (unsigned long)rxq); - pkts = rx_alloc_pkts(rxq); if (!pkts) { err = -ENOMEM; @@ -488,7 +501,6 @@ int hinic_init_rxq(struct hinic_rxq *rxq, struct hinic_rq *rq, err_req_rx_irq: err_rx_pkts: - tasklet_kill(&rxq->rx_task); free_all_rx_skbs(rxq); devm_kfree(&netdev->dev, rxq->irq_name); return err; @@ -504,7 +516,6 @@ void hinic_clean_rxq(struct hinic_rxq *rxq) rx_free_irq(rxq); - tasklet_kill(&rxq->rx_task); free_all_rx_skbs(rxq); devm_kfree(&netdev->dev, rxq->irq_name); } diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.h b/drivers/net/ethernet/huawei/hinic/hinic_rx.h index 27c9af4b1c12..f8ed3fa6c8ee 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_rx.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.h 
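
/* rx_csum() above follows the usual RX checksum-offload contract: only honor
 * the hardware verdict when the netdev currently has NETIF_F_RXCSUM enabled,
 * mark CHECKSUM_UNNECESSARY when hardware reported no error, and fall back
 * to CHECKSUM_NONE (the stack verifies in software) otherwise. A condensed
 * sketch of that decision:
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void ex_rx_csum(struct net_device *netdev, struct sk_buff *skb,
                       u32 hw_csum_err)         /* error bits from the RX CQE */
{
        if (!(netdev->features & NETIF_F_RXCSUM))
                return;         /* offload disabled: leave skb untouched */

        skb->ip_summed = hw_csum_err ? CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
}
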
@@ -23,6 +23,10 @@ #include "hinic_hw_qp.h" +#define HINIC_RX_CSUM_OFFLOAD_EN 0xFFF +#define HINIC_RX_CSUM_HW_CHECK_NONE BIT(7) +#define HINIC_RX_CSUM_IPSU_OTHER_ERR BIT(8) + struct hinic_rxq_stats { u64 pkts; u64 bytes; @@ -38,8 +42,6 @@ struct hinic_rxq { char *irq_name; - struct tasklet_struct rx_task; - struct napi_struct napi; }; diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c index 760b2ad8e295..209255495bc9 100644 --- a/drivers/net/ethernet/ibm/emac/core.c +++ b/drivers/net/ethernet/ibm/emac/core.c @@ -2455,7 +2455,8 @@ static void emac_adjust_link(struct net_device *ndev) dev->phy.duplex = phy->duplex; dev->phy.pause = phy->pause; dev->phy.asym_pause = phy->asym_pause; - dev->phy.advertising = phy->advertising; + ethtool_convert_link_mode_to_legacy_u32(&dev->phy.advertising, + phy->advertising); } static int emac_mii_bus_read(struct mii_bus *bus, int addr, int regnum) @@ -2490,7 +2491,8 @@ static int emac_mdio_phy_start_aneg(struct mii_phy *phy, phy_dev->autoneg = phy->autoneg; phy_dev->speed = phy->speed; phy_dev->duplex = phy->duplex; - phy_dev->advertising = phy->advertising; + ethtool_convert_legacy_u32_to_link_mode(phy_dev->advertising, + phy->advertising); return phy_start_aneg(phy_dev); } @@ -2624,7 +2626,8 @@ static int emac_dt_phy_connect(struct emac_instance *dev, dev->phy.def->phy_id_mask = dev->phy_dev->drv->phy_id_mask; dev->phy.def->name = dev->phy_dev->drv->name; dev->phy.def->ops = &emac_dt_mdio_phy_ops; - dev->phy.features = dev->phy_dev->supported; + ethtool_convert_link_mode_to_legacy_u32(&dev->phy.features, + dev->phy_dev->supported); dev->phy.address = dev->phy_dev->mdio.addr; dev->phy.mode = dev->phy_dev->interface; return 0; diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index 67cc6d9c8fd7..5ecbb1adcf3b 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -773,11 +773,8 @@ static void release_napi(struct ibmvnic_adapter *adapter) return; for (i = 0; i < adapter->num_active_rx_napi; i++) { - if (&adapter->napi[i]) { - netdev_dbg(adapter->netdev, - "Releasing napi[%d]\n", i); - netif_napi_del(&adapter->napi[i]); - } + netdev_dbg(adapter->netdev, "Releasing napi[%d]\n", i); + netif_napi_del(&adapter->napi[i]); } kfree(adapter->napi); diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 59e1bc0f609e..31fb76ee9d82 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -33,7 +33,7 @@ config E100 to identify the adapter. More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called e100. @@ -49,7 +49,7 @@ config E1000 More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called e1000. @@ -69,7 +69,7 @@ config E1000E More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called e1000e. @@ -97,7 +97,7 @@ config IGB More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called igb. @@ -133,7 +133,7 @@ config IGBVF More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called igbvf. @@ -150,7 +150,7 @@ config IXGB More specific information on configuring the driver is in - . + . 
To compile this driver as a module, choose M here. The module will be called ixgb. @@ -159,6 +159,7 @@ config IXGBE tristate "Intel(R) 10GbE PCI Express adapters support" depends on PCI select MDIO + select MDIO_DEVICE imply PTP_1588_CLOCK ---help--- This driver supports Intel(R) 10GbE PCI Express family of @@ -168,7 +169,7 @@ config IXGBE More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called ixgbe. @@ -220,7 +221,7 @@ config IXGBEVF More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called ixgbevf. MSI-X interrupt support is required @@ -247,7 +248,7 @@ config I40E More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called i40e. @@ -282,7 +283,7 @@ config I40EVF This driver was formerly named i40evf. More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called iavf. MSI-X interrupt support is required @@ -300,7 +301,7 @@ config ICE More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called ice. @@ -318,7 +319,7 @@ config FM10K More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called fm10k. MSI-X interrupt support is required diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c index 7c4b55482f72..0fd268070fb4 100644 --- a/drivers/net/ethernet/intel/e100.c +++ b/drivers/net/ethernet/intel/e100.c @@ -1345,8 +1345,8 @@ static inline int e100_load_ucode_wait(struct nic *nic) fw = e100_request_firmware(nic); /* If it's NULL, then no ucode is required */ - if (!fw || IS_ERR(fw)) - return PTR_ERR(fw); + if (IS_ERR_OR_NULL(fw)) + return PTR_ERR_OR_ZERO(fw); if ((err = e100_exec_cb(nic, (void *)fw, e100_setup_ucode))) netif_err(nic, probe, nic->netdev, @@ -2225,11 +2225,13 @@ static int e100_poll(struct napi_struct *napi, int budget) e100_rx_clean(nic, &work_done, budget); e100_tx_clean(nic); - /* If budget not fully consumed, exit the polling mode */ - if (work_done < budget) { - napi_complete_done(napi, work_done); + /* If budget fully consumed, continue polling */ + if (work_done == budget) + return budget; + + /* only re-enable interrupt if stack agrees polling is really done */ + if (likely(napi_complete_done(napi, work_done))) e100_enable_irq(nic); - } return work_done; } diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c index 43b6d3cec3b3..8fe9af0e2ab7 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_main.c +++ b/drivers/net/ethernet/intel/e1000/e1000_main.c @@ -3803,14 +3803,15 @@ static int e1000_clean(struct napi_struct *napi, int budget) adapter->clean_rx(adapter, &adapter->rx_ring[0], &work_done, budget); - if (!tx_clean_complete) - work_done = budget; + if (!tx_clean_complete || work_done == budget) + return budget; - /* If budget not fully consumed, exit the polling mode */ - if (work_done < budget) { + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) { if (likely(adapter->itr_setting & 3)) e1000_set_itr(adapter); - napi_complete_done(napi, work_done); if (!test_bit(__E1000_DOWN, 
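
/* The e100/e1000 poll rework above, and the matching e1000e and fm10k
 * changes below, converge on the same NAPI exit discipline: return the full
 * budget when it was consumed (stay in polling mode, no IRQ), and otherwise
 * only re-enable interrupts when napi_complete_done() returns true, since it
 * returns false while busy-polling still owns the context. The canonical
 * shape, with assumed ex_* helpers:
 */
#include <linux/netdevice.h>

int ex_clean_rx(struct napi_struct *napi, int budget);  /* assumed helpers */
void ex_enable_irq(struct napi_struct *napi);

static int ex_poll(struct napi_struct *napi, int budget)
{
        int work_done = ex_clean_rx(napi, budget);

        if (work_done == budget)
                return budget;          /* keep polling, IRQs stay off */

        if (likely(napi_complete_done(napi, work_done)))
                ex_enable_irq(napi);    /* stack agrees polling is done */

        return work_done;
}
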
&adapter->flags)) e1000_irq_enable(adapter); } diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h index c760dc72c520..be13227f1697 100644 --- a/drivers/net/ethernet/intel/e1000e/e1000.h +++ b/drivers/net/ethernet/intel/e1000e/e1000.h @@ -505,6 +505,9 @@ extern const struct e1000_info e1000_es2_info; void e1000e_ptp_init(struct e1000_adapter *adapter); void e1000e_ptp_remove(struct e1000_adapter *adapter); +u64 e1000e_read_systim(struct e1000_adapter *adapter, + struct ptp_system_timestamp *sts); + static inline s32 e1000_phy_hw_reset(struct e1000_hw *hw) { return hw->phy.ops.reset(hw); diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index 16a73bd9f4cb..308c006cb41d 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -2651,9 +2651,9 @@ err: /** * e1000e_poll - NAPI Rx polling callback * @napi: struct associated with this polling callback - * @weight: number of packets driver is allowed to process this poll + * @budget: number of packets driver is allowed to process this poll **/ -static int e1000e_poll(struct napi_struct *napi, int weight) +static int e1000e_poll(struct napi_struct *napi, int budget) { struct e1000_adapter *adapter = container_of(napi, struct e1000_adapter, napi); @@ -2667,16 +2667,17 @@ static int e1000e_poll(struct napi_struct *napi, int weight) (adapter->rx_ring->ims_val & adapter->tx_ring->ims_val)) tx_cleaned = e1000_clean_tx_irq(adapter->tx_ring); - adapter->clean_rx(adapter->rx_ring, &work_done, weight); + adapter->clean_rx(adapter->rx_ring, &work_done, budget); - if (!tx_cleaned) - work_done = weight; + if (!tx_cleaned || work_done == budget) + return budget; - /* If weight not fully consumed, exit the polling mode */ - if (work_done < weight) { + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) { if (adapter->itr_setting & 3) e1000_set_itr(adapter); - napi_complete_done(napi, work_done); if (!test_bit(__E1000_DOWN, &adapter->state)) { if (adapter->msix_entries) ew32(IMS, adapter->rx_ring->ims_val); @@ -4319,13 +4320,16 @@ void e1000e_reinit_locked(struct e1000_adapter *adapter) /** * e1000e_sanitize_systim - sanitize raw cycle counter reads * @hw: pointer to the HW structure - * @systim: time value read, sanitized and returned + * @systim: PHC time value read, sanitized and returned + * @sts: structure to hold system time before and after reading SYSTIML, + * may be NULL * * Errata for 82574/82583 possible bad bits read from SYSTIMH/L: * check to see that the time is incrementing at a reasonable * rate and is a multiple of incvalue. 
**/ -static u64 e1000e_sanitize_systim(struct e1000_hw *hw, u64 systim) +static u64 e1000e_sanitize_systim(struct e1000_hw *hw, u64 systim, + struct ptp_system_timestamp *sts) { u64 time_delta, rem, temp; u64 systim_next; @@ -4335,7 +4339,9 @@ static u64 e1000e_sanitize_systim(struct e1000_hw *hw, u64 systim) incvalue = er32(TIMINCA) & E1000_TIMINCA_INCVALUE_MASK; for (i = 0; i < E1000_MAX_82574_SYSTIM_REREADS; i++) { /* latch SYSTIMH on read of SYSTIML */ + ptp_read_system_prets(sts); systim_next = (u64)er32(SYSTIML); + ptp_read_system_postts(sts); systim_next |= (u64)er32(SYSTIMH) << 32; time_delta = systim_next - systim; @@ -4353,15 +4359,16 @@ static u64 e1000e_sanitize_systim(struct e1000_hw *hw, u64 systim) } /** - * e1000e_cyclecounter_read - read raw cycle counter (used by time counter) - * @cc: cyclecounter structure + * e1000e_read_systim - read SYSTIM register + * @adapter: board private structure + * @sts: structure which will contain system time before and after reading + * SYSTIML, may be NULL **/ -static u64 e1000e_cyclecounter_read(const struct cyclecounter *cc) +u64 e1000e_read_systim(struct e1000_adapter *adapter, + struct ptp_system_timestamp *sts) { - struct e1000_adapter *adapter = container_of(cc, struct e1000_adapter, - cc); struct e1000_hw *hw = &adapter->hw; - u32 systimel, systimeh; + u32 systimel, systimel_2, systimeh; u64 systim; /* SYSTIMH latching upon SYSTIML read does not work well. * This means that if SYSTIML overflows after we read it but before @@ -4369,11 +4376,15 @@ static u64 e1000e_cyclecounter_read(const struct cyclecounter *cc) * will experience a huge non linear increment in the systime value * to fix that we test for overflow and if true, we re-read systime. */ + ptp_read_system_prets(sts); systimel = er32(SYSTIML); + ptp_read_system_postts(sts); systimeh = er32(SYSTIMH); /* Is systimel is so large that overflow is possible? 
*/ if (systimel >= (u32)0xffffffff - E1000_TIMINCA_INCVALUE_MASK) { - u32 systimel_2 = er32(SYSTIML); + ptp_read_system_prets(sts); + systimel_2 = er32(SYSTIML); + ptp_read_system_postts(sts); if (systimel > systimel_2) { /* There was an overflow, read again SYSTIMH, and use * systimel_2 @@ -4386,11 +4397,23 @@ static u64 e1000e_cyclecounter_read(const struct cyclecounter *cc) systim |= (u64)systimeh << 32; if (adapter->flags2 & FLAG2_CHECK_SYSTIM_OVERFLOW) - systim = e1000e_sanitize_systim(hw, systim); + systim = e1000e_sanitize_systim(hw, systim, sts); return systim; } +/** + * e1000e_cyclecounter_read - read raw cycle counter (used by time counter) + * @cc: cyclecounter structure + **/ +static u64 e1000e_cyclecounter_read(const struct cyclecounter *cc) +{ + struct e1000_adapter *adapter = container_of(cc, struct e1000_adapter, + cc); + + return e1000e_read_systim(adapter, NULL); +} + /** * e1000_sw_init - Initialize general software structures (struct e1000_adapter) * @adapter: board private structure to initialize diff --git a/drivers/net/ethernet/intel/e1000e/ptp.c b/drivers/net/ethernet/intel/e1000e/ptp.c index 37c76945ad9b..1a4c65d9feb4 100644 --- a/drivers/net/ethernet/intel/e1000e/ptp.c +++ b/drivers/net/ethernet/intel/e1000e/ptp.c @@ -161,22 +161,30 @@ static int e1000e_phc_getcrosststamp(struct ptp_clock_info *ptp, #endif/*CONFIG_E1000E_HWTS*/ /** - * e1000e_phc_gettime - Reads the current time from the hardware clock + * e1000e_phc_gettimex - Reads the current time from the hardware clock and + * system clock * @ptp: ptp clock structure - * @ts: timespec structure to hold the current time value + * @ts: timespec structure to hold the current PHC time + * @sts: structure to hold the current system time * * Read the timecounter and return the correct value in ns after converting * it into a struct timespec. 
**/ -static int e1000e_phc_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts) +static int e1000e_phc_gettimex(struct ptp_clock_info *ptp, + struct timespec64 *ts, + struct ptp_system_timestamp *sts) { struct e1000_adapter *adapter = container_of(ptp, struct e1000_adapter, ptp_clock_info); unsigned long flags; - u64 ns; + u64 cycles, ns; spin_lock_irqsave(&adapter->systim_lock, flags); - ns = timecounter_read(&adapter->tc); + + /* NOTE: Non-monotonic SYSTIM readings may be returned */ + cycles = e1000e_read_systim(adapter, sts); + ns = timecounter_cyc2time(&adapter->tc, cycles); + spin_unlock_irqrestore(&adapter->systim_lock, flags); *ts = ns_to_timespec64(ns); @@ -232,9 +240,12 @@ static void e1000e_systim_overflow_work(struct work_struct *work) systim_overflow_work.work); struct e1000_hw *hw = &adapter->hw; struct timespec64 ts; + u64 ns; - adapter->ptp_clock_info.gettime64(&adapter->ptp_clock_info, &ts); + /* Update the timecounter */ + ns = timecounter_read(&adapter->tc); + ts = ns_to_timespec64(ns); e_dbg("SYSTIM overflow check at %lld.%09lu\n", (long long) ts.tv_sec, ts.tv_nsec); @@ -251,7 +262,7 @@ static const struct ptp_clock_info e1000e_ptp_clock_info = { .pps = 0, .adjfreq = e1000e_phc_adjfreq, .adjtime = e1000e_phc_adjtime, - .gettime64 = e1000e_phc_gettime, + .gettimex64 = e1000e_phc_gettimex, .settime64 = e1000e_phc_settime, .enable = e1000e_phc_enable, }; diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c index 5b2a50e5798f..6fd15a734324 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c @@ -1465,11 +1465,11 @@ static int fm10k_poll(struct napi_struct *napi, int budget) if (!clean_complete) return budget; - /* all work done, exit the polling mode */ - napi_complete_done(napi, work_done); - - /* re-enable the q_vector */ - fm10k_qv_enable(q_vector); + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + fm10k_qv_enable(q_vector); return min(work_done, budget - 1); } diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h index 876cac317e79..8de9085bba9e 100644 --- a/drivers/net/ethernet/intel/i40e/i40e.h +++ b/drivers/net/ethernet/intel/i40e/i40e.h @@ -122,6 +122,7 @@ enum i40e_state_t { __I40E_MDD_EVENT_PENDING, __I40E_VFLR_EVENT_PENDING, __I40E_RESET_RECOVERY_PENDING, + __I40E_TIMEOUT_RECOVERY_PENDING, __I40E_MISC_IRQ_REQUESTED, __I40E_RESET_INTR_RECEIVED, __I40E_REINIT_REQUESTED, @@ -146,6 +147,7 @@ enum i40e_state_t { __I40E_CLIENT_SERVICE_REQUESTED, __I40E_CLIENT_L2_CHANGE, __I40E_CLIENT_RESET, + __I40E_VIRTCHNL_OP_PENDING, /* This must be last as it determines the size of the BITMAP */ __I40E_STATE_SIZE__, }; @@ -494,7 +496,6 @@ struct i40e_pf { #define I40E_HW_STOP_FW_LLDP BIT(16) #define I40E_HW_PORT_ID_VALID BIT(17) #define I40E_HW_RESTART_AUTONEG BIT(18) -#define I40E_HW_STOPPABLE_FW_LLDP BIT(19) u32 flags; #define I40E_FLAG_RX_CSUM_ENABLED BIT(0) diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c index 501ee718177f..7ab61f6ebb5f 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c @@ -588,6 +588,12 @@ i40e_status i40e_init_adminq(struct i40e_hw *hw) hw->aq.api_maj_ver == I40E_FW_API_VERSION_MAJOR && hw->aq.api_min_ver >= I40E_MINOR_VER_GET_LINK_INFO_XL710) { hw->flags |= 
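
/* The SYSTIML/SYSTIMH handling above guards a classic race: the low 32 bits
 * can wrap between the two register reads, which would glue a new high half
 * onto a pre-wrap low half, and the ptp_read_system_prets()/postts() calls
 * bracket only the SYSTIML read so the PHC sample can be correlated with
 * system time. The driver re-reads SYSTIML only when the first sample is
 * near wrap; this userspace model always takes a second sample, for
 * simplicity:
 */
#include <assert.h>
#include <stdint.h>

static uint64_t widen(uint32_t lo1, uint32_t hi, uint32_t lo2)
{
        /* read order was lo1, hi, lo2: if lo wrapped between the two low
         * samples, hi was latched after the wrap and pairs with lo2 */
        if (lo1 > lo2)
                lo1 = lo2;      /* overflow detected: prefer the re-read */
        return ((uint64_t)hi << 32) | lo1;
}

int main(void)
{
        /* no wrap: both low samples sit in the same epoch of hi */
        assert(widen(0xFFFFFF00u, 5, 0xFFFFFF10u) ==
               ((5ULL << 32) | 0xFFFFFF00u));
        /* wrap between reads: use the post-wrap low half */
        assert(widen(0xFFFFFFF0u, 6, 0x00000004u) == ((6ULL << 32) | 4));
        return 0;
}
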
I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE; + hw->flags |= I40E_HW_FLAG_FW_LLDP_STOPPABLE; + } + if (hw->mac.type == I40E_MAC_X722 && + hw->aq.api_maj_ver == I40E_FW_API_VERSION_MAJOR && + hw->aq.api_min_ver >= I40E_MINOR_VER_FW_LLDP_STOPPABLE_X722) { + hw->flags |= I40E_HW_FLAG_FW_LLDP_STOPPABLE; } /* Newer versions of firmware require lock when reading the NVM */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h index 80e3eec6134e..11506102471c 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h @@ -11,7 +11,7 @@ */ #define I40E_FW_API_VERSION_MAJOR 0x0001 -#define I40E_FW_API_VERSION_MINOR_X722 0x0005 +#define I40E_FW_API_VERSION_MINOR_X722 0x0006 #define I40E_FW_API_VERSION_MINOR_X710 0x0007 #define I40E_FW_MINOR_VERSION(_h) ((_h)->mac.type == I40E_MAC_XL710 ? \ @@ -20,6 +20,8 @@ /* API version 1.7 implements additional link and PHY-specific APIs */ #define I40E_MINOR_VER_GET_LINK_INFO_XL710 0x0007 +/* API version 1.6 for X722 devices adds ability to stop FW LLDP agent */ +#define I40E_MINOR_VER_FW_LLDP_STOPPABLE_X722 0x0006 struct i40e_aq_desc { __le16 flags; diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index 85f75b5978fc..97a9b1fb4763 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -3723,6 +3723,9 @@ i40e_aq_set_dcb_parameters(struct i40e_hw *hw, bool dcb_enable, (struct i40e_aqc_set_dcb_parameters *)&desc.params.raw; i40e_status status; + if (!(hw->flags & I40E_HW_FLAG_FW_LLDP_STOPPABLE)) + return I40E_ERR_DEVICE_NOT_SUPPORTED; + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_dcb_parameters); diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index 9f8464f80783..a6bc7847346b 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -906,6 +906,7 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw, ks->base.speed = SPEED_100; break; default: + ks->base.speed = SPEED_UNKNOWN; break; } ks->base.duplex = DUPLEX_FULL; @@ -1335,6 +1336,7 @@ static int i40e_set_pauseparam(struct net_device *netdev, i40e_status status; u8 aq_failures; int err = 0; + u32 is_an; /* Changing the port's flow control is not supported if this isn't the * port's controlling PF @@ -1347,15 +1349,14 @@ static int i40e_set_pauseparam(struct net_device *netdev, if (vsi != pf->vsi[pf->lan_vsi]) return -EOPNOTSUPP; - if (pause->autoneg != ((hw_link_info->an_info & I40E_AQ_AN_COMPLETED) ? 
- AUTONEG_ENABLE : AUTONEG_DISABLE)) { + is_an = hw_link_info->an_info & I40E_AQ_AN_COMPLETED; + if (pause->autoneg != is_an) { netdev_info(netdev, "To change autoneg please use: ethtool -s <dev> autoneg <on|off>\n"); return -EOPNOTSUPP; } /* If we have link and don't have autoneg */ - if (!test_bit(__I40E_DOWN, pf->state) && - !(hw_link_info->an_info & I40E_AQ_AN_COMPLETED)) { + if (!test_bit(__I40E_DOWN, pf->state) && !is_an) { /* Send message that it might not necessarily work*/ netdev_info(netdev, "Autoneg did not complete so changing settings may not result in an actual change.\n"); } @@ -1406,7 +1407,7 @@ static int i40e_set_pauseparam(struct net_device *netdev, err = -EAGAIN; } - if (!test_bit(__I40E_DOWN, pf->state)) { + if (!test_bit(__I40E_DOWN, pf->state) && is_an) { /* Give it a little more time to try to come back */ msleep(75); if (!test_bit(__I40E_DOWN, pf->state)) @@ -2377,7 +2378,8 @@ static int i40e_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) return -EOPNOTSUPP; /* only magic packet is supported */ - if (wol->wolopts && (wol->wolopts != WAKE_MAGIC)) + if (wol->wolopts && (wol->wolopts != WAKE_MAGIC) + && (wol->wolopts != WAKE_FILTER)) return -EOPNOTSUPP; /* is this a new value? */ @@ -4659,14 +4661,15 @@ flags_complete: return -EOPNOTSUPP; /* If the driver detected FW LLDP was disabled on init, this flag could - * be set, however we do not support _changing_ the flag if NPAR is - * enabled or FW API version < 1.7. There are situations where older - * FW versions/NPAR enabled PFs could disable LLDP, however we _must_ - * not allow the user to enable/disable LLDP with this flag on - * unsupported FW versions. + * be set, however we do not support _changing_ the flag: + * - on XL710 if NPAR is enabled or FW API version < 1.7 + * - on X722 with FW API version < 1.6 + * There are situations where older FW versions/NPAR enabled PFs could + * disable LLDP, however we _must_ not allow the user to enable/disable + * LLDP with this flag on unsupported FW versions. */ if (changed_flags & I40E_FLAG_DISABLE_FW_LLDP) { - if (!(pf->hw_features & I40E_HW_STOPPABLE_FW_LLDP)) { + if (!(pf->hw.flags & I40E_HW_FLAG_FW_LLDP_STOPPABLE)) { dev_warn(&pf->pdev->dev, "Device does not support changing FW LLDP\n"); return -EOPNOTSUPP; diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 0e5dc74b4ef2..4d40878e395a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -26,8 +26,8 @@ static const char i40e_driver_string[] = #define DRV_KERN "-k" #define DRV_VERSION_MAJOR 2 -#define DRV_VERSION_MINOR 3 -#define DRV_VERSION_BUILD 2 +#define DRV_VERSION_MINOR 7 +#define DRV_VERSION_BUILD 6 #define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \ __stringify(DRV_VERSION_MINOR) "." 
\ __stringify(DRV_VERSION_BUILD) DRV_KERN @@ -338,6 +338,10 @@ static void i40e_tx_timeout(struct net_device *netdev) (pf->tx_timeout_last_recovery + netdev->watchdog_timeo))) return; /* don't do any new action before the next timeout */ + /* don't kick off another recovery if one is already pending */ + if (test_and_set_bit(__I40E_TIMEOUT_RECOVERY_PENDING, pf->state)) + return; + if (tx_ring) { head = i40e_get_head(tx_ring); /* Read interrupt register */ @@ -1493,8 +1497,7 @@ int i40e_del_mac_filter(struct i40e_vsi *vsi, const u8 *macaddr) bool found = false; int bkt; - WARN(!spin_is_locked(&vsi->mac_filter_hash_lock), - "Missing mac_filter_hash_lock\n"); + lockdep_assert_held(&vsi->mac_filter_hash_lock); hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) { if (ether_addr_equal(macaddr, f->macaddr)) { __i40e_del_filter(vsi, f); @@ -9632,6 +9635,7 @@ end_core_reset: clear_bit(__I40E_RESET_FAILED, pf->state); clear_recovery: clear_bit(__I40E_RESET_RECOVERY_PENDING, pf->state); + clear_bit(__I40E_TIMEOUT_RECOVERY_PENDING, pf->state); } /** @@ -11332,16 +11336,15 @@ static int i40e_sw_init(struct i40e_pf *pf) /* IWARP needs one extra vector for CQP just like MISC.*/ pf->num_iwarp_msix = (int)num_online_cpus() + 1; } - /* Stopping the FW LLDP engine is only supported on the - * XL710 with a FW ver >= 1.7. Also, stopping FW LLDP - * engine is not supported if NPAR is functioning on this - * part + /* Stopping FW LLDP engine is supported on XL710 and X722 + * starting from FW versions determined in i40e_init_adminq. + * Stopping the FW LLDP engine is not supported on XL710 + * if NPAR is functioning so unset this hw flag in this case. */ if (pf->hw.mac.type == I40E_MAC_XL710 && - !pf->hw.func_caps.npar_enable && - (pf->hw.aq.api_maj_ver > 1 || - (pf->hw.aq.api_maj_ver == 1 && pf->hw.aq.api_min_ver > 6))) - pf->hw_features |= I40E_HW_STOPPABLE_FW_LLDP; + pf->hw.func_caps.npar_enable && + (pf->hw.flags & I40E_HW_FLAG_FW_LLDP_STOPPABLE)) + pf->hw.flags &= ~I40E_HW_FLAG_FW_LLDP_STOPPABLE; #ifdef CONFIG_PCI_IOV if (pf->hw.func_caps.num_vfs && pf->hw.partition_id == 1) { @@ -11682,6 +11685,7 @@ static int i40e_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], * @dev: the netdev being configured * @nlh: RTNL message * @flags: bridge flags + * @extack: netlink extended ack * * Inserts a new hardware bridge if not already created and * enables the bridging mode requested (VEB or VEPA). 
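The WARN(!spin_is_locked(...)) -> lockdep_assert_held() change in i40e_del_mac_filter() above is worth noting: spin_is_locked() only reports that someone holds the lock, and is not meaningful on uniprocessor builds, whereas lockdep_assert_held() asserts that the current context holds it and compiles to nothing without CONFIG_LOCKDEP. A small sketch of the idiom, with illustrative names:

	#include <linux/lockdep.h>
	#include <linux/list.h>
	#include <linux/spinlock.h>

	struct example_table {
		spinlock_t lock;
		struct list_head entries;
	};

	/* Caller must hold tbl->lock. */
	static void example_del_entry(struct example_table *tbl,
				      struct list_head *node)
	{
		lockdep_assert_held(&tbl->lock); /* free on non-lockdep builds */
		list_del(node);
	}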
If the @@ -11694,7 +11698,8 @@ static int i40e_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], **/ static int i40e_ndo_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh, - u16 flags) + u16 flags, + struct netlink_ext_ack *extack) { struct i40e_netdev_priv *np = netdev_priv(dev); struct i40e_vsi *vsi = np->vsi; @@ -12334,6 +12339,9 @@ static int i40e_config_netdev(struct i40e_vsi *vsi) ether_addr_copy(netdev->dev_addr, mac_addr); ether_addr_copy(netdev->perm_addr, mac_addr); + /* i40iw_net_event() reads 16 bytes from neigh->primary_key */ + netdev->neigh_priv_len = sizeof(u32) * 4; + netdev->priv_flags |= IFF_UNICAST_FLT; netdev->priv_flags |= IFF_SUPP_NOFCS; /* Setup netdev TC information */ @@ -14302,23 +14310,23 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) switch (hw->bus.speed) { case i40e_bus_speed_8000: - strncpy(speed, "8.0", PCI_SPEED_SIZE); break; + strlcpy(speed, "8.0", PCI_SPEED_SIZE); break; case i40e_bus_speed_5000: - strncpy(speed, "5.0", PCI_SPEED_SIZE); break; + strlcpy(speed, "5.0", PCI_SPEED_SIZE); break; case i40e_bus_speed_2500: - strncpy(speed, "2.5", PCI_SPEED_SIZE); break; + strlcpy(speed, "2.5", PCI_SPEED_SIZE); break; default: break; } switch (hw->bus.width) { case i40e_bus_width_pcie_x8: - strncpy(width, "8", PCI_WIDTH_SIZE); break; + strlcpy(width, "8", PCI_WIDTH_SIZE); break; case i40e_bus_width_pcie_x4: - strncpy(width, "4", PCI_WIDTH_SIZE); break; + strlcpy(width, "4", PCI_WIDTH_SIZE); break; case i40e_bus_width_pcie_x2: - strncpy(width, "2", PCI_WIDTH_SIZE); break; + strlcpy(width, "2", PCI_WIDTH_SIZE); break; case i40e_bus_width_pcie_x1: - strncpy(width, "1", PCI_WIDTH_SIZE); break; + strlcpy(width, "1", PCI_WIDTH_SIZE); break; default: break; } diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c index 1199f0502d6d..5fb4353c742b 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c @@ -28,19 +28,23 @@ * i40e_ptp_read - Read the PHC time from the device * @pf: Board private structure * @ts: timespec structure to hold the current time value + * @sts: structure to hold the system time before and after reading the PHC * * This function reads the PRTTSYN_TIME registers and stores them in a * timespec. However, since the registers are 64 bits of nanoseconds, we must * convert the result to a timespec before we can return. **/ -static void i40e_ptp_read(struct i40e_pf *pf, struct timespec64 *ts) +static void i40e_ptp_read(struct i40e_pf *pf, struct timespec64 *ts, + struct ptp_system_timestamp *sts) { struct i40e_hw *hw = &pf->hw; u32 hi, lo; u64 ns; /* The timer latches on the lowest register read. 
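The strncpy() -> strlcpy() conversions in the probe path above fix a quiet hazard: strncpy() leaves the destination unterminated whenever the source fills the buffer, while strlcpy() always NUL-terminates and returns the length it tried to create. A userspace approximation to make the difference concrete; my_strlcpy() is a simplified stand-in for the kernel helper, not its implementation:

	#include <stdio.h>
	#include <string.h>

	/* Simplified stand-in for the kernel's strlcpy(). */
	static size_t my_strlcpy(char *dst, const char *src, size_t size)
	{
		size_t len = strlen(src);

		if (size) {
			size_t n = len < size - 1 ? len : size - 1;

			memcpy(dst, src, n);
			dst[n] = '\0';	/* always terminated */
		}
		return len;	/* length the caller tried to create */
	}

	int main(void)
	{
		char a[4], b[4];

		strncpy(a, "8.0Gb", sizeof(a));		/* a is NOT terminated */
		my_strlcpy(b, "8.0Gb", sizeof(b));	/* b == "8.0", terminated */
		printf("%.4s / %s\n", a, b);
		return 0;
	}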
*/ + ptp_read_system_prets(sts); lo = rd32(hw, I40E_PRTTSYN_TIME_L); + ptp_read_system_postts(sts); hi = rd32(hw, I40E_PRTTSYN_TIME_H); ns = (((u64)hi) << 32) | lo; @@ -146,7 +150,7 @@ static int i40e_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) mutex_lock(&pf->tmreg_lock); - i40e_ptp_read(pf, &now); + i40e_ptp_read(pf, &now, NULL); timespec64_add_ns(&now, delta); i40e_ptp_write(pf, (const struct timespec64 *)&now); @@ -156,19 +160,21 @@ static int i40e_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) } /** - * i40e_ptp_gettime - Get the time of the PHC + * i40e_ptp_gettimex - Get the time of the PHC * @ptp: The PTP clock structure * @ts: timespec structure to hold the current time value + * @sts: structure to hold the system time before and after reading the PHC * * Read the device clock and return the correct value in ns, after converting it * into a timespec struct. **/ -static int i40e_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts) +static int i40e_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts, + struct ptp_system_timestamp *sts) { struct i40e_pf *pf = container_of(ptp, struct i40e_pf, ptp_caps); mutex_lock(&pf->tmreg_lock); - i40e_ptp_read(pf, ts); + i40e_ptp_read(pf, ts, sts); mutex_unlock(&pf->tmreg_lock); return 0; @@ -694,7 +700,7 @@ static long i40e_ptp_create_clock(struct i40e_pf *pf) if (!IS_ERR_OR_NULL(pf->ptp_clock)) return 0; - strncpy(pf->ptp_caps.name, i40e_driver_name, + strlcpy(pf->ptp_caps.name, i40e_driver_name, sizeof(pf->ptp_caps.name) - 1); pf->ptp_caps.owner = THIS_MODULE; pf->ptp_caps.max_adj = 999999999; @@ -702,7 +708,7 @@ static long i40e_ptp_create_clock(struct i40e_pf *pf) pf->ptp_caps.pps = 0; pf->ptp_caps.adjfreq = i40e_ptp_adjfreq; pf->ptp_caps.adjtime = i40e_ptp_adjtime; - pf->ptp_caps.gettime64 = i40e_ptp_gettime; + pf->ptp_caps.gettimex64 = i40e_ptp_gettimex; pf->ptp_caps.settime64 = i40e_ptp_settime; pf->ptp_caps.enable = i40e_ptp_feature_enable; diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index d0a95424ce58..a7e14e98889f 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -2648,10 +2648,11 @@ tx_only: if (vsi->back->flags & I40E_TXR_FLAGS_WB_ON_ITR) q_vector->arm_wb_state = false; - /* Work is done so exit the polling mode and re-enable the interrupt */ - napi_complete_done(napi, work_done); - - i40e_update_enable_itr(vsi, q_vector); + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + i40e_update_enable_itr(vsi, q_vector); return min(work_done, budget - 1); } @@ -3454,6 +3455,8 @@ static inline int i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb, tx_desc->cmd_type_offset_bsz = build_ctob(td_cmd, td_offset, size, td_tag); + skb_tx_timestamp(skb); + /* Force memory writes to complete before letting h/w know there * are new descriptors to fetch. 
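The same poll() rewrite lands in fm10k, i40e and iavf in this series: napi_complete_done() returns false when the NAPI instance stays in polling mode because a busy-polling socket owns it, and re-arming the interrupt in that window would fight with the busy-poll loop. A sketch of the resulting tail of a poll routine; example_clean_rings() and example_irq_enable() are placeholders for the per-driver helpers:

	#include <linux/netdevice.h>

	/* Placeholders for per-driver budget accounting and IRQ re-arm. */
	static int example_clean_rings(struct napi_struct *napi, int budget);
	static void example_irq_enable(struct napi_struct *napi);

	static int example_poll(struct napi_struct *napi, int budget)
	{
		int work_done = example_clean_rings(napi, budget);

		if (work_done == budget)
			return budget;	/* stay in polling mode */

		/* Only re-arm the interrupt once the stack has really let
		 * go; napi_complete_done() returns false while a
		 * busy-polling socket owns this NAPI instance.
		 */
		if (likely(napi_complete_done(napi, work_done)))
			example_irq_enable(napi);

		return min(work_done, budget - 1);
	}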
* @@ -3507,6 +3510,7 @@ static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf, u16 i = xdp_ring->next_to_use; struct i40e_tx_buffer *tx_bi; struct i40e_tx_desc *tx_desc; + void *data = xdpf->data; u32 size = xdpf->len; dma_addr_t dma; @@ -3514,8 +3518,7 @@ static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf, xdp_ring->tx_stats.tx_busy++; return I40E_XDP_CONSUMED; } - - dma = dma_map_single(xdp_ring->dev, xdpf->data, size, DMA_TO_DEVICE); + dma = dma_map_single(xdp_ring->dev, data, size, DMA_TO_DEVICE); if (dma_mapping_error(xdp_ring->dev, dma)) return I40E_XDP_CONSUMED; @@ -3633,8 +3636,6 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb, if (tsyn) tx_flags |= I40E_TX_FLAGS_TSYN; - skb_tx_timestamp(skb); - /* always enable CRC insertion offload */ td_cmd |= I40E_TX_DESC_CMD_ICRC; diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h index 7df969c59855..2781ab91ca82 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_type.h +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h @@ -615,6 +615,7 @@ struct i40e_hw { #define I40E_HW_FLAG_802_1AD_CAPABLE BIT_ULL(1) #define I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE BIT_ULL(2) #define I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK BIT_ULL(3) +#define I40E_HW_FLAG_FW_LLDP_STOPPABLE BIT_ULL(4) u64 flags; /* Used in set switch config AQ command */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index ac5698ed0b11..2ac23ebfbf31 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -1112,7 +1112,8 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf, if (!i40e_vc_isvalid_vsi_id(vf, vsi_id) || !vsi) return I40E_ERR_PARAM; - if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) { + if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) && + (allmulti || alluni)) { dev_err(&pf->pdev->dev, "Unprivileged VF %d is attempting to configure promiscuous mode\n", vf->vf_id); @@ -1675,13 +1676,20 @@ err_out: int i40e_pci_sriov_configure(struct pci_dev *pdev, int num_vfs) { struct i40e_pf *pf = pci_get_drvdata(pdev); + int ret = 0; + + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } if (num_vfs) { if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) { pf->flags |= I40E_FLAG_VEB_MODE_ENABLED; i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG); } - return i40e_pci_sriov_enable(pdev, num_vfs); + ret = i40e_pci_sriov_enable(pdev, num_vfs); + goto sriov_configure_out; } if (!pci_vfs_assigned(pf->pdev)) { @@ -1690,9 +1698,12 @@ int i40e_pci_sriov_configure(struct pci_dev *pdev, int num_vfs) i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG); } else { dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs.\n"); - return -EINVAL; + ret = -EINVAL; + goto sriov_configure_out; } - return 0; +sriov_configure_out: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); + return ret; } /***********************virtual channel routines******************/ @@ -3893,6 +3904,11 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac) goto error_param; } + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } + if (is_multicast_ether_addr(mac)) { dev_err(&pf->pdev->dev, "Invalid Ethernet address %pM for VF %d\n", mac, vf_id); @@ -3941,6 
+3957,7 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac) dev_info(&pf->pdev->dev, "Bring down and up the VF interface to make this change effective.\n"); error_param: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); return ret; } @@ -3992,6 +4009,11 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id, struct i40e_vf *vf; int ret = 0; + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } + /* validate the request */ ret = i40e_validate_vf(pf, vf_id); if (ret) @@ -4107,6 +4129,7 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id, ret = 0; error_pvid: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); return ret; } @@ -4128,6 +4151,11 @@ int i40e_ndo_set_vf_bw(struct net_device *netdev, int vf_id, int min_tx_rate, struct i40e_vf *vf; int ret = 0; + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } + /* validate the request */ ret = i40e_validate_vf(pf, vf_id); if (ret) @@ -4154,6 +4182,7 @@ int i40e_ndo_set_vf_bw(struct net_device *netdev, int vf_id, int min_tx_rate, vf->tx_rate = max_tx_rate; error: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); return ret; } @@ -4174,6 +4203,11 @@ int i40e_ndo_get_vf_config(struct net_device *netdev, struct i40e_vf *vf; int ret = 0; + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } + /* validate the request */ ret = i40e_validate_vf(pf, vf_id); if (ret) @@ -4209,6 +4243,7 @@ int i40e_ndo_get_vf_config(struct net_device *netdev, ret = 0; error_param: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); return ret; } @@ -4230,6 +4265,11 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link) int abs_vf_id; int ret = 0; + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } + /* validate the request */ if (vf_id >= pf->num_alloc_vfs) { dev_err(&pf->pdev->dev, "Invalid VF Identifier %d\n", vf_id); @@ -4273,6 +4313,7 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link) 0, (u8 *)&pfe, sizeof(pfe), NULL); error_out: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); return ret; } @@ -4294,6 +4335,11 @@ int i40e_ndo_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool enable) struct i40e_vf *vf; int ret = 0; + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } + /* validate the request */ if (vf_id >= pf->num_alloc_vfs) { dev_err(&pf->pdev->dev, "Invalid VF Identifier %d\n", vf_id); @@ -4327,6 +4373,7 @@ int i40e_ndo_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool enable) ret = -EIO; } out: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); return ret; } @@ -4345,15 +4392,22 @@ int i40e_ndo_set_vf_trust(struct net_device *netdev, int vf_id, bool setting) struct i40e_vf *vf; int ret = 0; + if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) { + dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n"); + return -EAGAIN; + } + /* validate the request */ if (vf_id >= pf->num_alloc_vfs) { 
dev_err(&pf->pdev->dev, "Invalid VF Identifier %d\n", vf_id); - return -EINVAL; + ret = -EINVAL; + goto out; } if (pf->flags & I40E_FLAG_MFP_ENABLED) { dev_err(&pf->pdev->dev, "Trusted VF not supported in MFP mode.\n"); - return -EINVAL; + ret = -EINVAL; + goto out; } vf = &pf->vf[vf_id]; @@ -4376,5 +4430,6 @@ int i40e_ndo_set_vf_trust(struct net_device *netdev, int vf_id, bool setting) } out: + clear_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state); return ret; } diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h index bf67d62e2b5f..f9621026beef 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h @@ -13,9 +13,9 @@ #define I40E_DEFAULT_NUM_MDD_EVENTS_ALLOWED 3 #define I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED 10 -#define I40E_VLAN_PRIORITY_SHIFT 12 +#define I40E_VLAN_PRIORITY_SHIFT 13 #define I40E_VLAN_MASK 0xFFF -#define I40E_PRIORITY_MASK 0x7000 +#define I40E_PRIORITY_MASK 0xE000 /* Various queue ctrls */ enum i40e_queue_ctrl { diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index fb9bfad96daf..9b4d7cec2e18 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -1761,10 +1761,11 @@ tx_only: if (vsi->back->flags & IAVF_TXR_FLAGS_WB_ON_ITR) q_vector->arm_wb_state = false; - /* Work is done so exit the polling mode and re-enable the interrupt */ - napi_complete_done(napi, work_done); - - iavf_update_enable_itr(vsi, q_vector); + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + iavf_update_enable_itr(vsi, q_vector); return min(work_done, budget - 1); } @@ -2343,6 +2344,8 @@ static inline void iavf_tx_map(struct iavf_ring *tx_ring, struct sk_buff *skb, tx_desc->cmd_type_offset_bsz = build_ctob(td_cmd, td_offset, size, td_tag); + skb_tx_timestamp(skb); + /* Force memory writes to complete before letting h/w know there * are new descriptors to fetch. 
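All of the .ndo VF callbacks in the virtchnl hunks above now share one guard: test_and_set_bit() on __I40E_VIRTCHNL_OP_PENDING acts as a try-lock, and clear_bit() on every exit path releases it, so concurrent VF reconfiguration attempts fail fast with -EAGAIN instead of racing. The shape, reduced to its essentials (names illustrative):

	#include <linux/bitmap.h>
	#include <linux/bitops.h>
	#include <linux/errno.h>

	enum { EXAMPLE_OP_PENDING, EXAMPLE_STATE_SIZE };
	static DECLARE_BITMAP(example_state, EXAMPLE_STATE_SIZE);

	static int example_vf_op(void)
	{
		int ret = 0;

		/* try-lock: fail fast if another VF op is in flight */
		if (test_and_set_bit(EXAMPLE_OP_PENDING, example_state))
			return -EAGAIN;

		/* ... reconfigure the VF; every path must reach the clear ... */

		clear_bit(EXAMPLE_OP_PENDING, example_state);
		return ret;
	}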
* @@ -2461,8 +2464,6 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb, if (tso < 0) goto out_drop; - skb_tx_timestamp(skb); - /* always enable CRC insertion offload */ td_cmd |= IAVF_TX_DESC_CMD_ICRC; diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index b8548370f1c7..a385575600f6 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -52,7 +52,6 @@ extern const char ice_drv_ver[]; #define ICE_MBXQ_LEN 64 #define ICE_MIN_MSIX 2 #define ICE_NO_VSI 0xffff -#define ICE_MAX_VSI_ALLOC 130 #define ICE_MAX_TXQS 2048 #define ICE_MAX_RXQS 2048 #define ICE_VSI_MAP_CONTIG 0 @@ -97,14 +96,14 @@ extern const char ice_drv_ver[]; #define ice_for_each_vsi(pf, i) \ for ((i) = 0; (i) < (pf)->num_alloc_vsi; (i)++) -/* Macros for each tx/rx ring in a VSI */ +/* Macros for each Tx/Rx ring in a VSI */ #define ice_for_each_txq(vsi, i) \ for ((i) = 0; (i) < (vsi)->num_txq; (i)++) #define ice_for_each_rxq(vsi, i) \ for ((i) = 0; (i) < (vsi)->num_rxq; (i)++) -/* Macros for each allocated tx/rx ring whether used or not in a VSI */ +/* Macros for each allocated Tx/Rx ring whether used or not in a VSI */ #define ice_for_each_alloc_txq(vsi, i) \ for ((i) = 0; (i) < (vsi)->alloc_txq; (i)++) @@ -113,7 +112,9 @@ extern const char ice_drv_ver[]; struct ice_tc_info { u16 qoffset; - u16 qcount; + u16 qcount_tx; + u16 qcount_rx; + u8 netdev_tc; }; struct ice_tc_cfg { @@ -149,10 +150,10 @@ enum ice_state { __ICE_RESET_FAILED, /* set by reset/rebuild */ /* When checking for the PF to be in a nominal operating state, the * bits that are grouped at the beginning of the list need to be - * checked. Bits occurring before __ICE_STATE_NOMINAL_CHECK_BITS will - * be checked. If you need to add a bit into consideration for nominal + * checked. Bits occurring before __ICE_STATE_NOMINAL_CHECK_BITS will + * be checked. If you need to add a bit into consideration for nominal * operating state, it must be added before - * __ICE_STATE_NOMINAL_CHECK_BITS. Do not move this entry's position + * __ICE_STATE_NOMINAL_CHECK_BITS. Do not move this entry's position * without appropriate consideration. */ __ICE_STATE_NOMINAL_CHECK_BITS, @@ -182,8 +183,8 @@ struct ice_vsi { struct ice_sw *vsw; /* switch this VSI is on */ struct ice_pf *back; /* back pointer to PF */ struct ice_port_info *port_info; /* back pointer to port_info */ - struct ice_ring **rx_rings; /* rx ring array */ - struct ice_ring **tx_rings; /* tx ring array */ + struct ice_ring **rx_rings; /* Rx ring array */ + struct ice_ring **tx_rings; /* Tx ring array */ struct ice_q_vector **q_vectors; /* q_vector array */ irqreturn_t (*irq_handler)(int irq, void *data); @@ -200,8 +201,8 @@ struct ice_vsi { int sw_base_vector; /* Irq base for OS reserved vectors */ int hw_base_vector; /* HW (absolute) index of a vector */ enum ice_vsi_type type; - u16 vsi_num; /* HW (absolute) index of this VSI */ - u16 idx; /* software index in pf->vsi[] */ + u16 vsi_num; /* HW (absolute) index of this VSI */ + u16 idx; /* software index in pf->vsi[] */ /* Interrupt thresholds */ u16 work_lmt; @@ -254,8 +255,8 @@ struct ice_q_vector { struct ice_ring_container tx; struct irq_affinity_notify affinity_notify; u16 v_idx; /* index in the vsi->q_vector array. 
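Both the i40e and iavf hunks above move skb_tx_timestamp() from the top of the xmit routine into the map function, right after the last descriptor write and before the barrier and doorbell, presumably so the software timestamp is taken as close as possible to the actual hand-off to hardware and cannot be taken after the completion path could already free the skb. A sketch of that ordering; the ring layout and doorbell target are illustrative:

	#include <linux/io.h>
	#include <linux/skbuff.h>

	struct example_ring {
		u16 next_to_use;
		void __iomem *tail;	/* queue doorbell register */
	};

	static void example_tx_doorbell(struct example_ring *ring,
					struct sk_buff *skb)
	{
		/* ... all descriptors for this skb have been written ... */

		/* Software TX timestamp, as late as possible before hand-off. */
		skb_tx_timestamp(skb);

		/* Force memory writes to complete before letting hardware
		 * know there are new descriptors to fetch.
		 */
		wmb();

		writel(ring->next_to_use, ring->tail);
	}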
*/ - u8 num_ring_tx; /* total number of tx rings in vector */ - u8 num_ring_rx; /* total number of rx rings in vector */ + u8 num_ring_tx; /* total number of Tx rings in vector */ + u8 num_ring_rx; /* total number of Rx rings in vector */ char name[ICE_INT_NAME_STR_LEN]; /* in usecs, need to use ice_intrl_to_usecs_reg() before writing this * value to the device @@ -307,10 +308,10 @@ struct ice_pf { u32 hw_oicr_idx; /* Other interrupt cause vector HW index */ u32 num_avail_hw_msix; /* remaining HW MSIX vectors left unclaimed */ u32 num_lan_msix; /* Total MSIX vectors for base driver */ - u16 num_lan_tx; /* num lan tx queues setup */ - u16 num_lan_rx; /* num lan rx queues setup */ - u16 q_left_tx; /* remaining num tx queues left unclaimed */ - u16 q_left_rx; /* remaining num rx queues left unclaimed */ + u16 num_lan_tx; /* num lan Tx queues setup */ + u16 num_lan_rx; /* num lan Rx queues setup */ + u16 q_left_tx; /* remaining num Tx queues left unclaimed */ + u16 q_left_rx; /* remaining num Rx queues left unclaimed */ u16 next_vsi; /* Next free slot in pf->vsi[] - 0-based! */ u16 num_alloc_vsi; u16 corer_count; /* Core reset count */ diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 6653555f55dd..fcdcd80b18e7 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -5,7 +5,7 @@ #define _ICE_ADMINQ_CMD_H_ /* This header file defines the Admin Queue commands, error codes and - * descriptor format. It is shared between Firmware and Software. + * descriptor format. It is shared between Firmware and Software. */ #define ICE_MAX_VSI 768 @@ -87,6 +87,7 @@ struct ice_aqc_list_caps { /* Device/Function buffer entry, repeated per reported capability */ struct ice_aqc_list_caps_elem { __le16 cap; +#define ICE_AQC_CAPS_VALID_FUNCTIONS 0x0005 #define ICE_AQC_CAPS_SRIOV 0x0012 #define ICE_AQC_CAPS_VF 0x0013 #define ICE_AQC_CAPS_VSI 0x0017 @@ -462,7 +463,7 @@ struct ice_aqc_sw_rules { }; /* Add/Update/Get/Remove lookup Rx/Tx command/response entry - * This structures describes the lookup rules and associated actions. "index" + * This structures describes the lookup rules and associated actions. "index" * is returned as part of a response to a successful Add command, and can be * used to identify the rule for Update/Get/Remove commands. 
*/ @@ -1065,10 +1066,10 @@ struct ice_aqc_nvm { #define ICE_AQC_NVM_LAST_CMD BIT(0) #define ICE_AQC_NVM_PCIR_REQ BIT(0) /* Used by NVM Update reply */ #define ICE_AQC_NVM_PRESERVATION_S 1 -#define ICE_AQC_NVM_PRESERVATION_M (3 << CSR_AQ_NVM_PRESERVATION_S) -#define ICE_AQC_NVM_NO_PRESERVATION (0 << CSR_AQ_NVM_PRESERVATION_S) +#define ICE_AQC_NVM_PRESERVATION_M (3 << ICE_AQC_NVM_PRESERVATION_S) +#define ICE_AQC_NVM_NO_PRESERVATION (0 << ICE_AQC_NVM_PRESERVATION_S) #define ICE_AQC_NVM_PRESERVE_ALL BIT(1) -#define ICE_AQC_NVM_PRESERVE_SELECTED (3 << CSR_AQ_NVM_PRESERVATION_S) +#define ICE_AQC_NVM_PRESERVE_SELECTED (3 << ICE_AQC_NVM_PRESERVATION_S) #define ICE_AQC_NVM_FLASH_ONLY BIT(7) __le16 module_typeid; __le16 length; @@ -1110,7 +1111,7 @@ struct ice_aqc_get_set_rss_keys { }; /* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */ -struct ice_aqc_get_set_rss_lut { +struct ice_aqc_get_set_rss_lut { #define ICE_AQC_GSET_RSS_LUT_VSI_VALID BIT(15) #define ICE_AQC_GSET_RSS_LUT_VSI_ID_S 0 #define ICE_AQC_GSET_RSS_LUT_VSI_ID_M (0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S) @@ -1314,10 +1315,10 @@ struct ice_aqc_get_clear_fw_log { * @params: command-specific parameters * * Descriptor format for commands the driver posts on the Admin Transmit Queue - * (ATQ). The firmware writes back onto the command descriptor and returns - * the result of the command. Asynchronous events that are not an immediate + * (ATQ). The firmware writes back onto the command descriptor and returns + * the result of the command. Asynchronous events that are not an immediate * result of the command are written to the Admin Receive Queue (ARQ) using - * the same descriptor format. Descriptors are in little-endian notation with + * the same descriptor format. Descriptors are in little-endian notation with * 32-bit words. 
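Given the descriptor format described here, posting a direct (non-indirect) admin command reduces to filling one descriptor and handing it to the send routine. A hedged sketch, assuming the generic params layout; real commands overlay their own parameter structs and opcodes:

	#include "ice_common.h"

	static enum ice_status
	example_send_direct_cmd(struct ice_hw *hw, u16 opcode, u32 param)
	{
		struct ice_aq_desc desc;

		ice_fill_dflt_direct_cmd_desc(&desc, opcode);
		/* assumes the generic param0 field; command-specific
		 * layouts replace this
		 */
		desc.params.generic.param0 = cpu_to_le32(param);

		/* NULL buffer, zero length: everything travels in the
		 * descriptor; firmware status lands in
		 * hw->adminq.sq_last_status.
		 */
		return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
	}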
*/ struct ice_aq_desc { @@ -1379,10 +1380,10 @@ struct ice_aq_desc { /* error codes */ enum ice_aq_err { - ICE_AQ_RC_OK = 0, /* success */ + ICE_AQ_RC_OK = 0, /* Success */ ICE_AQ_RC_ENOMEM = 9, /* Out of memory */ ICE_AQ_RC_EBUSY = 12, /* Device or resource busy */ - ICE_AQ_RC_EEXIST = 13, /* object already exists */ + ICE_AQ_RC_EEXIST = 13, /* Object already exists */ ICE_AQ_RC_ENOSPC = 16, /* No space left or allocation failure */ }; diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 554fd707a6d6..4c1d35da940d 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -405,9 +405,7 @@ static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw) INIT_LIST_HEAD(&sw->vsi_list_map_head); - ice_init_def_sw_recp(hw); - - return 0; + return ice_init_def_sw_recp(hw); } /** @@ -715,7 +713,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw) hw->evb_veb = true; - /* Query the allocated resources for tx scheduler */ + /* Query the allocated resources for Tx scheduler */ status = ice_sched_query_res_alloc(hw); if (status) { ice_debug(hw, ICE_DBG_SCHED, @@ -958,7 +956,7 @@ enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req) * ice_copy_rxq_ctx_to_hw * @hw: pointer to the hardware structure * @ice_rxq_ctx: pointer to the rxq context - * @rxq_index: the index of the rx queue + * @rxq_index: the index of the Rx queue * * Copies rxq context from dense structure to hw register space */ @@ -1014,7 +1012,7 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = { * ice_write_rxq_ctx * @hw: pointer to the hardware structure * @rlan_ctx: pointer to the rxq context - * @rxq_index: the index of the rx queue + * @rxq_index: the index of the Rx queue * * Converts rxq context from sparse to dense structure and then writes * it to hw register space @@ -1386,6 +1384,27 @@ void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res) } } +/** + * ice_get_guar_num_vsi - determine number of guar VSI for a PF + * @hw: pointer to the hw structure + * + * Determine the number of valid functions by going through the bitmap returned + * from parsing capabilities and use this to calculate the number of VSI per PF. 
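The helper added below divides the device-wide VSI budget evenly across however many functions the valid_functions capability bitmap reports. The arithmetic is easy to sanity-check in isolation; a standalone demo, with ICE_MAX_VSI set to 768 as in the adminq header above:

	#include <stdint.h>
	#include <stdio.h>

	#define ICE_MAX_VSI 768

	/* popcount of the low 8 bits, like the kernel's hweight8() */
	static unsigned int hweight8(uint8_t v)
	{
		return __builtin_popcount(v);
	}

	int main(void)
	{
		/* e.g. firmware reports functions 0, 2, 4 and 6 as valid */
		uint8_t valid_functions = 0x55;
		unsigned int funcs = hweight8(valid_functions);

		if (funcs)
			printf("%u VSIs guaranteed per PF\n",
			       ICE_MAX_VSI / funcs);	/* 768 / 4 = 192 */
		return 0;
	}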
+ */ +static u32 ice_get_guar_num_vsi(struct ice_hw *hw) +{ + u8 funcs; + +#define ICE_CAPS_VALID_FUNCS_M 0xFF + funcs = hweight8(hw->dev_caps.common_cap.valid_functions & + ICE_CAPS_VALID_FUNCS_M); + + if (!funcs) + return 0; + + return ICE_MAX_VSI / funcs; +} + /** * ice_parse_caps - parse function/device capabilities * @hw: pointer to the hw struct @@ -1428,6 +1447,12 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count, u16 cap = le16_to_cpu(cap_resp->cap); switch (cap) { + case ICE_AQC_CAPS_VALID_FUNCTIONS: + caps->valid_functions = number; + ice_debug(hw, ICE_DBG_INIT, + "HW caps: Valid Functions = %d\n", + caps->valid_functions); + break; case ICE_AQC_CAPS_SRIOV: caps->sr_iov_1_1 = (number == 1); ice_debug(hw, ICE_DBG_INIT, @@ -1457,10 +1482,10 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count, "HW caps: Dev.VSI cnt = %d\n", dev_p->num_vsi_allocd_to_host); } else if (func_p) { - func_p->guaranteed_num_vsi = number; + func_p->guar_num_vsi = ice_get_guar_num_vsi(hw); ice_debug(hw, ICE_DBG_INIT, "HW caps: Func.VSI cnt = %d\n", - func_p->guaranteed_num_vsi); + number); } break; case ICE_AQC_CAPS_RSS: @@ -1688,8 +1713,7 @@ void ice_clear_pxe_mode(struct ice_hw *hw) * If no bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned * If more than one bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned */ -static u16 -ice_get_link_speed_based_on_phy_type(u64 phy_type_low) +static u16 ice_get_link_speed_based_on_phy_type(u64 phy_type_low) { u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN; diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c index 84c967294eaf..2bf5e11f559a 100644 --- a/drivers/net/ethernet/intel/ice/ice_controlq.c +++ b/drivers/net/ethernet/intel/ice/ice_controlq.c @@ -3,6 +3,26 @@ #include "ice_common.h" +#define ICE_CQ_INIT_REGS(qinfo, prefix) \ +do { \ + (qinfo)->sq.head = prefix##_ATQH; \ + (qinfo)->sq.tail = prefix##_ATQT; \ + (qinfo)->sq.len = prefix##_ATQLEN; \ + (qinfo)->sq.bah = prefix##_ATQBAH; \ + (qinfo)->sq.bal = prefix##_ATQBAL; \ + (qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M; \ + (qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M; \ + (qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M; \ + (qinfo)->rq.head = prefix##_ARQH; \ + (qinfo)->rq.tail = prefix##_ARQT; \ + (qinfo)->rq.len = prefix##_ARQLEN; \ + (qinfo)->rq.bah = prefix##_ARQBAH; \ + (qinfo)->rq.bal = prefix##_ARQBAL; \ + (qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M; \ + (qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M; \ + (qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M; \ +} while (0) + /** * ice_adminq_init_regs - Initialize AdminQ registers * @hw: pointer to the hardware structure @@ -13,23 +33,7 @@ static void ice_adminq_init_regs(struct ice_hw *hw) { struct ice_ctl_q_info *cq = &hw->adminq; - cq->sq.head = PF_FW_ATQH; - cq->sq.tail = PF_FW_ATQT; - cq->sq.len = PF_FW_ATQLEN; - cq->sq.bah = PF_FW_ATQBAH; - cq->sq.bal = PF_FW_ATQBAL; - cq->sq.len_mask = PF_FW_ATQLEN_ATQLEN_M; - cq->sq.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M; - cq->sq.head_mask = PF_FW_ATQH_ATQH_M; - - cq->rq.head = PF_FW_ARQH; - cq->rq.tail = PF_FW_ARQT; - cq->rq.len = PF_FW_ARQLEN; - cq->rq.bah = PF_FW_ARQBAH; - cq->rq.bal = PF_FW_ARQBAL; - cq->rq.len_mask = PF_FW_ARQLEN_ARQLEN_M; - cq->rq.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M; - cq->rq.head_mask = PF_FW_ARQH_ARQH_M; + ICE_CQ_INIT_REGS(cq, PF_FW); } /** @@ -42,24 +46,7 @@ static void ice_mailbox_init_regs(struct ice_hw *hw) { struct ice_ctl_q_info *cq = &hw->mailboxq; - /* set head and tail 
registers in our local struct */ - cq->sq.head = PF_MBX_ATQH; - cq->sq.tail = PF_MBX_ATQT; - cq->sq.len = PF_MBX_ATQLEN; - cq->sq.bah = PF_MBX_ATQBAH; - cq->sq.bal = PF_MBX_ATQBAL; - cq->sq.len_mask = PF_MBX_ATQLEN_ATQLEN_M; - cq->sq.len_ena_mask = PF_MBX_ATQLEN_ATQENABLE_M; - cq->sq.head_mask = PF_MBX_ATQH_ATQH_M; - - cq->rq.head = PF_MBX_ARQH; - cq->rq.tail = PF_MBX_ARQT; - cq->rq.len = PF_MBX_ARQLEN; - cq->rq.bah = PF_MBX_ARQBAH; - cq->rq.bal = PF_MBX_ARQBAL; - cq->rq.len_mask = PF_MBX_ARQLEN_ARQLEN_M; - cq->rq.len_ena_mask = PF_MBX_ARQLEN_ARQENABLE_M; - cq->rq.head_mask = PF_MBX_ARQH_ARQH_M; + ICE_CQ_INIT_REGS(cq, PF_MBX); } /** @@ -131,37 +118,20 @@ ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) } /** - * ice_free_ctrlq_sq_ring - Free Control Transmit Queue (ATQ) rings + * ice_free_cq_ring - Free control queue ring * @hw: pointer to the hardware structure - * @cq: pointer to the specific Control queue + * @ring: pointer to the specific control queue ring * - * This assumes the posted send buffers have already been cleaned + * This assumes the posted buffers have already been cleaned * and de-allocated */ -static void ice_free_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) +static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring) { - dmam_free_coherent(ice_hw_to_dev(hw), cq->sq.desc_buf.size, - cq->sq.desc_buf.va, cq->sq.desc_buf.pa); - cq->sq.desc_buf.va = NULL; - cq->sq.desc_buf.pa = 0; - cq->sq.desc_buf.size = 0; -} - -/** - * ice_free_ctrlq_rq_ring - Free Control Receive Queue (ARQ) rings - * @hw: pointer to the hardware structure - * @cq: pointer to the specific Control queue - * - * This assumes the posted receive buffers have already been cleaned - * and de-allocated - */ -static void ice_free_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) -{ - dmam_free_coherent(ice_hw_to_dev(hw), cq->rq.desc_buf.size, - cq->rq.desc_buf.va, cq->rq.desc_buf.pa); - cq->rq.desc_buf.va = NULL; - cq->rq.desc_buf.pa = 0; - cq->rq.desc_buf.size = 0; + dmam_free_coherent(ice_hw_to_dev(hw), ring->desc_buf.size, + ring->desc_buf.va, ring->desc_buf.pa); + ring->desc_buf.va = NULL; + ring->desc_buf.pa = 0; + ring->desc_buf.size = 0; } /** @@ -280,54 +250,23 @@ unwind_alloc_sq_bufs: return ICE_ERR_NO_MEMORY; } -/** - * ice_free_rq_bufs - Free ARQ buffer info elements - * @hw: pointer to the hardware structure - * @cq: pointer to the specific Control queue - */ -static void ice_free_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) -{ - int i; - - /* free descriptors */ - for (i = 0; i < cq->num_rq_entries; i++) { - dmam_free_coherent(ice_hw_to_dev(hw), cq->rq.r.rq_bi[i].size, - cq->rq.r.rq_bi[i].va, cq->rq.r.rq_bi[i].pa); - cq->rq.r.rq_bi[i].va = NULL; - cq->rq.r.rq_bi[i].pa = 0; - cq->rq.r.rq_bi[i].size = 0; - } - - /* free the dma header */ - devm_kfree(ice_hw_to_dev(hw), cq->rq.dma_head); -} - -/** - * ice_free_sq_bufs - Free ATQ buffer info elements - * @hw: pointer to the hardware structure - * @cq: pointer to the specific Control queue - */ -static void ice_free_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) +static enum ice_status +ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries) { - int i; + /* Clear Head and Tail */ + wr32(hw, ring->head, 0); + wr32(hw, ring->tail, 0); - /* only unmap if the address is non-NULL */ - for (i = 0; i < cq->num_sq_entries; i++) - if (cq->sq.r.sq_bi[i].pa) { - dmam_free_coherent(ice_hw_to_dev(hw), - cq->sq.r.sq_bi[i].size, - cq->sq.r.sq_bi[i].va, - 
cq->sq.r.sq_bi[i].pa); - cq->sq.r.sq_bi[i].va = NULL; - cq->sq.r.sq_bi[i].pa = 0; - cq->sq.r.sq_bi[i].size = 0; - } + /* set starting point */ + wr32(hw, ring->len, (num_entries | ring->len_ena_mask)); + wr32(hw, ring->bal, lower_32_bits(ring->desc_buf.pa)); + wr32(hw, ring->bah, upper_32_bits(ring->desc_buf.pa)); - /* free the buffer info list */ - devm_kfree(ice_hw_to_dev(hw), cq->sq.cmd_buf); + /* Check one register to verify that config was applied */ + if (rd32(hw, ring->bal) != lower_32_bits(ring->desc_buf.pa)) + return ICE_ERR_AQ_ERROR; - /* free the dma header */ - devm_kfree(ice_hw_to_dev(hw), cq->sq.dma_head); + return 0; } /** @@ -340,23 +279,7 @@ static void ice_free_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) static enum ice_status ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - u32 reg = 0; - - /* Clear Head and Tail */ - wr32(hw, cq->sq.head, 0); - wr32(hw, cq->sq.tail, 0); - - /* set starting point */ - wr32(hw, cq->sq.len, (cq->num_sq_entries | cq->sq.len_ena_mask)); - wr32(hw, cq->sq.bal, lower_32_bits(cq->sq.desc_buf.pa)); - wr32(hw, cq->sq.bah, upper_32_bits(cq->sq.desc_buf.pa)); - - /* Check one register to verify that config was applied */ - reg = rd32(hw, cq->sq.bal); - if (reg != lower_32_bits(cq->sq.desc_buf.pa)) - return ICE_ERR_AQ_ERROR; - - return 0; + return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries); } /** @@ -369,25 +292,15 @@ ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq) static enum ice_status ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - u32 reg = 0; - - /* Clear Head and Tail */ - wr32(hw, cq->rq.head, 0); - wr32(hw, cq->rq.tail, 0); + enum ice_status status; - /* set starting point */ - wr32(hw, cq->rq.len, (cq->num_rq_entries | cq->rq.len_ena_mask)); - wr32(hw, cq->rq.bal, lower_32_bits(cq->rq.desc_buf.pa)); - wr32(hw, cq->rq.bah, upper_32_bits(cq->rq.desc_buf.pa)); + status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries); + if (status) + return status; /* Update tail in the HW to post pre-allocated buffers */ wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1)); - /* Check one register to verify that config was applied */ - reg = rd32(hw, cq->rq.bal); - if (reg != lower_32_bits(cq->rq.desc_buf.pa)) - return ICE_ERR_AQ_ERROR; - return 0; } @@ -444,7 +357,7 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) goto init_ctrlq_exit; init_ctrlq_free_rings: - ice_free_ctrlq_sq_ring(hw, cq); + ice_free_cq_ring(hw, &cq->sq); init_ctrlq_exit: return ret_code; @@ -503,12 +416,33 @@ static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq) goto init_ctrlq_exit; init_ctrlq_free_rings: - ice_free_ctrlq_rq_ring(hw, cq); + ice_free_cq_ring(hw, &cq->rq); init_ctrlq_exit: return ret_code; } +#define ICE_FREE_CQ_BUFS(hw, qi, ring) \ +do { \ + int i; \ + /* free descriptors */ \ + for (i = 0; i < (qi)->num_##ring##_entries; i++) \ + if ((qi)->ring.r.ring##_bi[i].pa) { \ + dmam_free_coherent(ice_hw_to_dev(hw), \ + (qi)->ring.r.ring##_bi[i].size,\ + (qi)->ring.r.ring##_bi[i].va,\ + (qi)->ring.r.ring##_bi[i].pa);\ + (qi)->ring.r.ring##_bi[i].va = NULL; \ + (qi)->ring.r.ring##_bi[i].pa = 0; \ + (qi)->ring.r.ring##_bi[i].size = 0; \ + } \ + /* free the buffer info list */ \ + if ((qi)->ring.cmd_buf) \ + devm_kfree(ice_hw_to_dev(hw), (qi)->ring.cmd_buf); \ + /* free dma head */ \ + devm_kfree(ice_hw_to_dev(hw), (qi)->ring.dma_head); \ +} while (0) + /** * ice_shutdown_sq - shutdown the Control ATQ * @hw: pointer to the hardware structure @@ -538,8 
+472,8 @@ ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) cq->sq.count = 0; /* to indicate uninitialized queue */ /* free ring buffers and the ring itself */ - ice_free_sq_bufs(hw, cq); - ice_free_ctrlq_sq_ring(hw, cq); + ICE_FREE_CQ_BUFS(hw, cq, sq); + ice_free_cq_ring(hw, &cq->sq); shutdown_sq_out: mutex_unlock(&cq->sq_lock); @@ -606,8 +540,8 @@ ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq) cq->rq.count = 0; /* free ring buffers and the ring itself */ - ice_free_rq_bufs(hw, cq); - ice_free_ctrlq_rq_ring(hw, cq); + ICE_FREE_CQ_BUFS(hw, cq, rq); + ice_free_cq_ring(hw, &cq->rq); shutdown_rq_out: mutex_unlock(&cq->rq_lock); @@ -657,7 +591,6 @@ init_ctrlq_free_rq: * - cq->num_rq_entries * - cq->rq_buf_size * - cq->sq_buf_size - * */ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type) { @@ -841,7 +774,7 @@ static bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq) * @buf_size: size of buffer for indirect commands (or 0 for direct commands) * @cd: pointer to command details structure * - * This is the main send command routine for the ATQ. It runs the q, + * This is the main send command routine for the ATQ. It runs the queue, * cleans the queue, etc. */ enum ice_status @@ -1035,7 +968,7 @@ void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode) * @pending: number of events that could be left to process * * This function cleans one Admin Receive Queue element and returns - * the contents through e. It can also return how many events are + * the contents through e. It can also return how many events are * left to process through 'pending'. */ enum ice_status diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 648acdb4c644..3b6e387f5440 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -62,7 +62,7 @@ static const struct ice_stats ice_gstrings_vsi_stats[] = { * The PF_STATs are appended to the netdev stats only when ethtool -S * is queried on the base PF netdev. */ -static struct ice_stats ice_gstrings_pf_stats[] = { +static const struct ice_stats ice_gstrings_pf_stats[] = { ICE_PF_STAT("tx_bytes", stats.eth.tx_bytes), ICE_PF_STAT("rx_bytes", stats.eth.rx_bytes), ICE_PF_STAT("tx_unicast", stats.eth.tx_unicast), @@ -104,7 +104,7 @@ static struct ice_stats ice_gstrings_pf_stats[] = { ICE_PF_STAT("mac_remote_faults", stats.mac_remote_faults), }; -static u32 ice_regs_dump_list[] = { +static const u32 ice_regs_dump_list[] = { PFGEN_STATE, PRTGEN_STATUS, QRX_CTRL(0), @@ -260,10 +260,10 @@ static int ice_get_sset_count(struct net_device *netdev, int sset) * a private ethtool flag). This is due to the nature of the * ethtool stats API. * - * User space programs such as ethtool must make 3 separate + * Userspace programs such as ethtool must make 3 separate * ioctl requests, one for size, one for the strings, and * finally one for the stats. Since these cross into - * user space, changes to the number or size could result in + * userspace, changes to the number or size could result in * undefined memory access or incorrect string<->value * correlations for statistics. 
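Constifying the stats tables above is safe because each entry carries a name plus a structure offset, and the count, the strings, and the values served to those three ethtool ioctls are all derived from that one table, so they cannot drift apart. A standalone illustration of the name-plus-offset pattern; it mirrors ICE_PF_STAT in spirit, not in detail:

	#include <stddef.h>
	#include <stdio.h>

	struct example_stats {
		long rx_bytes;
		long tx_bytes;
	};

	struct stat_desc {
		const char *name;
		size_t offset;
	};

	/* One table drives both the strings and the values. */
	#define STAT(m) { #m, offsetof(struct example_stats, m) }

	static const struct stat_desc descs[] = {
		STAT(rx_bytes),
		STAT(tx_bytes),
	};

	int main(void)
	{
		struct example_stats s = { .rx_bytes = 1500, .tx_bytes = 900 };
		size_t i;

		for (i = 0; i < sizeof(descs) / sizeof(descs[0]); i++)
			printf("%s: %ld\n", descs[i].name,
			       *(long *)((char *)&s + descs[i].offset));
		return 0;
	}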
* @@ -1392,17 +1392,17 @@ static int ice_nway_reset(struct net_device *netdev) { /* restart autonegotiation */ struct ice_netdev_priv *np = netdev_priv(netdev); - struct ice_link_status *hw_link_info; struct ice_vsi *vsi = np->vsi; struct ice_port_info *pi; enum ice_status status; - bool link_up; pi = vsi->port_info; - hw_link_info = &pi->phy.link_info; - link_up = hw_link_info->link_info & ICE_AQ_LINK_UP; + /* If VSI state is up, then restart autoneg with link up */ + if (!test_bit(__ICE_DOWN, vsi->back->state)) + status = ice_aq_set_link_restart_an(pi, true, NULL); + else + status = ice_aq_set_link_restart_an(pi, false, NULL); - status = ice_aq_set_link_restart_an(pi, link_up, NULL); if (status) { netdev_info(netdev, "link restart failed, err %d aq_err %d\n", status, pi->hw->adminq.sq_last_status); @@ -1441,7 +1441,7 @@ ice_get_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause) /** * ice_set_pauseparam - Set Flow Control parameter * @netdev: network interface device structure - * @pause: return tx/rx flow control status + * @pause: return Tx/Rx flow control status */ static int ice_set_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *pause) @@ -1543,7 +1543,7 @@ static u32 ice_get_rxfh_key_size(struct net_device __always_unused *netdev) } /** - * ice_get_rxfh_indir_size - get the rx flow hash indirection table size + * ice_get_rxfh_indir_size - get the Rx flow hash indirection table size * @netdev: network interface device structure * * Returns the table size. @@ -1556,7 +1556,7 @@ static u32 ice_get_rxfh_indir_size(struct net_device *netdev) } /** - * ice_get_rxfh - get the rx flow hash indirection table + * ice_get_rxfh - get the Rx flow hash indirection table * @netdev: network interface device structure * @indir: indirection table * @key: hash key @@ -1603,7 +1603,7 @@ out: } /** - * ice_set_rxfh - set the rx flow hash indirection table + * ice_set_rxfh - set the Rx flow hash indirection table * @netdev: network interface device structure * @indir: indirection table * @key: hash key diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h index 596b9fb1c510..5507928c8fbe 100644 --- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h +++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h @@ -7,6 +7,9 @@ #define _ICE_HW_AUTOGEN_H_ #define QTX_COMM_DBELL(_DBQM) (0x002C0000 + ((_DBQM) * 4)) +#define QTX_COMM_HEAD(_DBQM) (0x000E0000 + ((_DBQM) * 4)) +#define QTX_COMM_HEAD_HEAD_S 0 +#define QTX_COMM_HEAD_HEAD_M ICE_M(0x1FFF, 0) #define PF_FW_ARQBAH 0x00080180 #define PF_FW_ARQBAL 0x00080080 #define PF_FW_ARQH 0x00080380 diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index 7d2a66739e3f..bb51dd7defb5 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -6,11 +6,11 @@ union ice_32byte_rx_desc { struct { - __le64 pkt_addr; /* Packet buffer address */ - __le64 hdr_addr; /* Header buffer address */ + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ /* bit 0 of hdr_addr is DD bit */ - __le64 rsvd1; - __le64 rsvd2; + __le64 rsvd1; + __le64 rsvd2; } read; struct { struct { @@ -105,11 +105,11 @@ enum ice_rx_ptype_payload_layer { */ union ice_32b_rx_flex_desc { struct { - __le64 pkt_addr; /* Packet buffer address */ - __le64 hdr_addr; /* Header buffer address */ - /* bit 0 of hdr_addr is DD bit */ - __le64 rsvd1; - __le64 rsvd2; + __le64 pkt_addr; 
/* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + /* bit 0 of hdr_addr is DD bit */ + __le64 rsvd1; + __le64 rsvd2; } read; struct { /* Qword 0 */ @@ -256,6 +256,9 @@ enum ice_rx_flex_desc_status_error_0_bits { #define ICE_RXQ_CTX_SIZE_DWORDS 8 #define ICE_RXQ_CTX_SZ (ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32)) +#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS 22 +#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS 5 +#define GLTCLAN_CQ_CNTX(i, CQ) (GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800)) /* RLAN Rx queue context data * @@ -274,18 +277,18 @@ struct ice_rlan_ctx { u16 dbuf; /* bigger than needed, see above for reason */ #define ICE_RLAN_CTX_HBUF_S 6 u16 hbuf; /* bigger than needed, see above for reason */ - u8 dtype; - u8 dsize; - u8 crcstrip; - u8 l2tsel; - u8 hsplit_0; - u8 hsplit_1; - u8 showiv; + u8 dtype; + u8 dsize; + u8 crcstrip; + u8 l2tsel; + u8 hsplit_0; + u8 hsplit_1; + u8 showiv; u32 rxmax; /* bigger than needed, see above for reason */ - u8 tphrdesc_ena; - u8 tphwdesc_ena; - u8 tphdata_ena; - u8 tphhead_ena; + u8 tphrdesc_ena; + u8 tphwdesc_ena; + u8 tphdata_ena; + u8 tphhead_ena; u16 lrxqthresh; /* bigger than needed, see above for reason */ }; @@ -413,35 +416,35 @@ enum ice_tx_ctx_desc_cmd_bits { struct ice_tlan_ctx { #define ICE_TLAN_CTX_BASE_S 7 u64 base; /* base is defined in 128-byte units */ - u8 port_num; + u8 port_num; u16 cgd_num; /* bigger than needed, see above for reason */ - u8 pf_num; + u8 pf_num; u16 vmvf_num; - u8 vmvf_type; + u8 vmvf_type; #define ICE_TLAN_CTX_VMVF_TYPE_VF 0 #define ICE_TLAN_CTX_VMVF_TYPE_VMQ 1 #define ICE_TLAN_CTX_VMVF_TYPE_PF 2 u16 src_vsi; - u8 tsyn_ena; - u8 alt_vlan; + u8 tsyn_ena; + u8 alt_vlan; u16 cpuid; /* bigger than needed, see above for reason */ - u8 wb_mode; - u8 tphrd_desc; - u8 tphrd; - u8 tphwr_desc; + u8 wb_mode; + u8 tphrd_desc; + u8 tphrd; + u8 tphwr_desc; u16 cmpq_id; u16 qnum_in_func; - u8 itr_notification_mode; - u8 adjust_prof_id; + u8 itr_notification_mode; + u8 adjust_prof_id; u32 qlen; /* bigger than needed, see above for reason */ - u8 quanta_prof_idx; - u8 tso_ena; + u8 quanta_prof_idx; + u8 tso_ena; u16 tso_qnum; - u8 legacy_int; - u8 drop_ena; - u8 cache_prof_idx; - u8 pkt_shaper_prof_idx; - u8 int_q_state; /* width not needed - internal do not write */ + u8 legacy_int; + u8 drop_ena; + u8 cache_prof_idx; + u8 pkt_shaper_prof_idx; + u8 int_q_state; /* width not needed - internal do not write */ }; /* macro to make the table lines short */ diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 1041fa2a7767..29b1dcfd4331 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -20,7 +20,7 @@ static int ice_setup_rx_ctx(struct ice_ring *ring) u16 pf_q; int err; - /* what is RX queue number in global space of 2K Rx queues */ + /* what is Rx queue number in global space of 2K Rx queues */ pf_q = vsi->rxq_map[ring->q_index]; /* clear the context structure first */ @@ -174,15 +174,15 @@ static int ice_pf_rxq_wait(struct ice_pf *pf, int pf_q, bool ena) { int i; - for (i = 0; i < ICE_Q_WAIT_RETRY_LIMIT; i++) { + for (i = 0; i < ICE_Q_WAIT_MAX_RETRY; i++) { u32 rx_reg = rd32(&pf->hw, QRX_CTRL(pf_q)); if (ena == !!(rx_reg & QRX_CTRL_QENA_STAT_M)) break; - usleep_range(10, 20); + usleep_range(20, 40); } - if (i >= ICE_Q_WAIT_RETRY_LIMIT) + if (i >= ICE_Q_WAIT_MAX_RETRY) return -ETIMEDOUT; return 0; @@ -774,11 +774,13 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) */ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, 
struct ice_vsi_ctx *ctxt) { - u16 offset = 0, qmap = 0, numq_tc; - u16 pow = 0, max_rss = 0, qcount; + u16 offset = 0, qmap = 0, tx_count = 0; u16 qcount_tx = vsi->alloc_txq; u16 qcount_rx = vsi->alloc_rxq; + u16 tx_numq_tc, rx_numq_tc; + u16 pow = 0, max_rss = 0; bool ena_tc0 = false; + u8 netdev_tc = 0; int i; /* at least TC0 should be enabled by default */ @@ -794,7 +796,12 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) vsi->tc_cfg.ena_tc |= 1; } - numq_tc = qcount_rx / vsi->tc_cfg.numtc; + rx_numq_tc = qcount_rx / vsi->tc_cfg.numtc; + if (!rx_numq_tc) + rx_numq_tc = 1; + tx_numq_tc = qcount_tx / vsi->tc_cfg.numtc; + if (!tx_numq_tc) + tx_numq_tc = 1; /* TC mapping is a function of the number of Rx queues assigned to the * VSI for each traffic class and the offset of these queues. @@ -808,7 +815,8 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) * Setup number and offset of Rx queues for all TCs for the VSI */ - qcount = numq_tc; + qcount_rx = rx_numq_tc; + /* qcount will change if RSS is enabled */ if (test_bit(ICE_FLAG_RSS_ENA, vsi->back->flags)) { if (vsi->type == ICE_VSI_PF || vsi->type == ICE_VSI_VF) { @@ -816,37 +824,41 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) max_rss = ICE_MAX_LG_RSS_QS; else max_rss = ICE_MAX_SMALL_RSS_QS; - qcount = min_t(int, numq_tc, max_rss); - qcount = min_t(int, qcount, vsi->rss_size); + qcount_rx = min_t(int, rx_numq_tc, max_rss); + qcount_rx = min_t(int, qcount_rx, vsi->rss_size); } } /* find the (rounded up) power-of-2 of qcount */ - pow = order_base_2(qcount); + pow = order_base_2(qcount_rx); for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) { if (!(vsi->tc_cfg.ena_tc & BIT(i))) { /* TC is not enabled */ vsi->tc_cfg.tc_info[i].qoffset = 0; - vsi->tc_cfg.tc_info[i].qcount = 1; + vsi->tc_cfg.tc_info[i].qcount_rx = 1; + vsi->tc_cfg.tc_info[i].qcount_tx = 1; + vsi->tc_cfg.tc_info[i].netdev_tc = 0; ctxt->info.tc_mapping[i] = 0; continue; } /* TC is enabled */ vsi->tc_cfg.tc_info[i].qoffset = offset; - vsi->tc_cfg.tc_info[i].qcount = qcount; + vsi->tc_cfg.tc_info[i].qcount_rx = qcount_rx; + vsi->tc_cfg.tc_info[i].qcount_tx = tx_numq_tc; + vsi->tc_cfg.tc_info[i].netdev_tc = netdev_tc++; qmap = ((offset << ICE_AQ_VSI_TC_Q_OFFSET_S) & ICE_AQ_VSI_TC_Q_OFFSET_M) | ((pow << ICE_AQ_VSI_TC_Q_NUM_S) & ICE_AQ_VSI_TC_Q_NUM_M); - offset += qcount; + offset += qcount_rx; + tx_count += tx_numq_tc; ctxt->info.tc_mapping[i] = cpu_to_le16(qmap); } - - vsi->num_txq = qcount_tx; vsi->num_rxq = offset; + vsi->num_txq = tx_count; if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) { dev_dbg(&vsi->back->pdev->dev, "VF VSI should have same number of Tx and Rx queues. Hence making them equal\n"); @@ -1000,7 +1012,7 @@ void ice_vsi_free_q_vectors(struct ice_vsi *vsi) * @vsi: the VSI being configured * @v_idx: index of the vector in the VSI struct * - * We allocate one q_vector. If allocation fails we return -ENOMEM. + * We allocate one q_vector. If allocation fails we return -ENOMEM. */ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, int v_idx) { @@ -1039,7 +1051,7 @@ out: * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors * @vsi: the VSI being configured * - * We allocate one q_vector per queue interrupt. If allocation fails we + * We allocate one q_vector per queue interrupt. If allocation fails we * return -ENOMEM. 
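In ice_vsi_setup_q_map() above, the per-TC queue count is stored in the qmap as a power-of-2 exponent (the ICE_AQ_VSI_TC_Q_NUM field), so the driver rounds up with order_base_2(): a TC given, say, 6 Rx queues advertises 8. A standalone check of that rounding, using a local reimplementation of the kernel macro for n >= 1:

	#include <stdio.h>

	/* rounded-up log2, matching the kernel's order_base_2() for n >= 1 */
	static unsigned int order_base_2(unsigned int n)
	{
		unsigned int order = 0;

		while ((1u << order) < n)
			order++;
		return order;
	}

	int main(void)
	{
		unsigned int q;

		for (q = 1; q <= 12; q++)
			printf("qcount_rx=%2u -> exponent %u (covers %2u queues)\n",
			       q, order_base_2(q), 1u << order_base_2(q));
		return 0;
	}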
*/ static int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi) @@ -1188,7 +1200,7 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) struct ice_pf *pf = vsi->back; int i; - /* Allocate tx_rings */ + /* Allocate Tx rings */ for (i = 0; i < vsi->alloc_txq; i++) { struct ice_ring *ring; @@ -1207,7 +1219,7 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) vsi->tx_rings[i] = ring; } - /* Allocate rx_rings */ + /* Allocate Rx rings */ for (i = 0; i < vsi->alloc_rxq; i++) { struct ice_ring *ring; @@ -1611,55 +1623,62 @@ int ice_vsi_cfg_txqs(struct ice_vsi *vsi) struct ice_aqc_add_tx_qgrp *qg_buf; struct ice_aqc_add_txqs_perq *txq; struct ice_pf *pf = vsi->back; + u8 num_q_grps, q_idx = 0; enum ice_status status; u16 buf_len, i, pf_q; int err = 0, tc = 0; - u8 num_q_grps; buf_len = sizeof(struct ice_aqc_add_tx_qgrp); qg_buf = devm_kzalloc(&pf->pdev->dev, buf_len, GFP_KERNEL); if (!qg_buf) return -ENOMEM; - if (vsi->num_txq > ICE_MAX_TXQ_PER_TXQG) { - err = -EINVAL; - goto err_cfg_txqs; - } qg_buf->num_txqs = 1; num_q_grps = 1; - /* set up and configure the Tx queues */ - ice_for_each_txq(vsi, i) { - struct ice_tlan_ctx tlan_ctx = { 0 }; + /* set up and configure the Tx queues for each enabled TC */ + for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) { + if (!(vsi->tc_cfg.ena_tc & BIT(tc))) + break; - pf_q = vsi->txq_map[i]; - ice_setup_tx_ctx(vsi->tx_rings[i], &tlan_ctx, pf_q); - /* copy context contents into the qg_buf */ - qg_buf->txqs[0].txq_id = cpu_to_le16(pf_q); - ice_set_ctx((u8 *)&tlan_ctx, qg_buf->txqs[0].txq_ctx, - ice_tlan_ctx_info); + for (i = 0; i < vsi->tc_cfg.tc_info[tc].qcount_tx; i++) { + struct ice_tlan_ctx tlan_ctx = { 0 }; + + pf_q = vsi->txq_map[q_idx]; + ice_setup_tx_ctx(vsi->tx_rings[q_idx], &tlan_ctx, + pf_q); + /* copy context contents into the qg_buf */ + qg_buf->txqs[0].txq_id = cpu_to_le16(pf_q); + ice_set_ctx((u8 *)&tlan_ctx, qg_buf->txqs[0].txq_ctx, + ice_tlan_ctx_info); + + /* init queue specific tail reg. It is referred as + * transmit comm scheduler queue doorbell. + */ + vsi->tx_rings[q_idx]->tail = + pf->hw.hw_addr + QTX_COMM_DBELL(pf_q); + status = ice_ena_vsi_txq(vsi->port_info, vsi->idx, tc, + num_q_grps, qg_buf, buf_len, + NULL); + if (status) { + dev_err(&vsi->back->pdev->dev, + "Failed to set LAN Tx queue context, error: %d\n", + status); + err = -ENODEV; + goto err_cfg_txqs; + } - /* init queue specific tail reg. It is referred as transmit - * comm scheduler queue doorbell. - */ - vsi->tx_rings[i]->tail = pf->hw.hw_addr + QTX_COMM_DBELL(pf_q); - status = ice_ena_vsi_txq(vsi->port_info, vsi->idx, tc, - num_q_grps, qg_buf, buf_len, NULL); - if (status) { - dev_err(&vsi->back->pdev->dev, - "Failed to set LAN Tx queue context, error: %d\n", - status); - err = -ENODEV; - goto err_cfg_txqs; - } + /* Add Tx Queue TEID into the VSI Tx ring from the + * response. This will complete configuring and + * enabling the queue. + */ + txq = &qg_buf->txqs[0]; + if (pf_q == le16_to_cpu(txq->txq_id)) + vsi->tx_rings[q_idx]->txq_teid = + le32_to_cpu(txq->q_teid); - /* Add Tx Queue TEID into the VSI Tx ring from the response - * This will complete configuring and enabling the queue. 
- */ - txq = &qg_buf->txqs[0]; - if (pf_q == le16_to_cpu(txq->txq_id)) - vsi->tx_rings[i]->txq_teid = - le32_to_cpu(txq->q_teid); + q_idx++; + } } err_cfg_txqs: devm_kfree(&pf->pdev->dev, qg_buf); @@ -1908,7 +1927,8 @@ int ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src, ice_for_each_txq(vsi, i) { u16 v_idx; - if (!vsi->tx_rings || !vsi->tx_rings[i]) { + if (!vsi->tx_rings || !vsi->tx_rings[i] || + !vsi->tx_rings[i]->q_vector) { err = -EINVAL; goto err_out; } @@ -2056,6 +2076,9 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, /* set RSS capabilities */ ice_vsi_set_rss_params(vsi); + /* set tc configuration */ + ice_vsi_set_tc_cfg(vsi); + /* create the VSI */ ret = ice_vsi_init(vsi); if (ret) @@ -2113,17 +2136,13 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, pf->q_left_rx -= vsi->alloc_rxq; break; default: - /* if VSI type is not recognized, clean up the resources and - * exit - */ + /* clean up the resources and exit */ goto unroll_vsi_init; } - ice_vsi_set_tc_cfg(vsi); - /* configure VSI nodes based on number of queues and TC's */ for (i = 0; i < vsi->tc_cfg.numtc; i++) - max_txqs[i] = vsi->num_txq; + max_txqs[i] = pf->num_lan_tx; ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc, max_txqs); @@ -2314,7 +2333,7 @@ static int ice_search_res(struct ice_res_tracker *res, u16 needed, u16 id) int start = res->search_hint; int end = start; - if ((start + needed) > res->num_entries) + if ((start + needed) > res->num_entries) return -ENOMEM; id |= ICE_RES_VALID_BIT; @@ -2491,6 +2510,7 @@ int ice_vsi_release(struct ice_vsi *vsi) } ice_remove_vsi_fltr(&pf->hw, vsi->idx); + ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); ice_vsi_delete(vsi); ice_vsi_free_q_vectors(vsi); ice_vsi_clear_rings(vsi); @@ -2518,11 +2538,14 @@ int ice_vsi_release(struct ice_vsi *vsi) int ice_vsi_rebuild(struct ice_vsi *vsi) { u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 }; + struct ice_pf *pf; int ret, i; if (!vsi) return -EINVAL; + pf = vsi->back; + ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); ice_vsi_free_q_vectors(vsi); ice_free_res(vsi->back->sw_irq_tracker, vsi->sw_base_vector, vsi->idx); ice_free_res(vsi->back->hw_irq_tracker, vsi->hw_base_vector, vsi->idx); @@ -2532,6 +2555,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi) ice_vsi_free_arrays(vsi, false); ice_dev_onetime_setup(&vsi->back->hw); ice_vsi_set_num_qs(vsi); + ice_vsi_set_tc_cfg(vsi); /* Initialize VSI struct elements and create VSI in FW */ ret = ice_vsi_init(vsi); @@ -2578,11 +2602,9 @@ int ice_vsi_rebuild(struct ice_vsi *vsi) break; } - ice_vsi_set_tc_cfg(vsi); - /* configure VSI nodes based on number of queues and TC's */ for (i = 0; i < vsi->tc_cfg.numtc; i++) - max_txqs[i] = vsi->num_txq; + max_txqs[i] = pf->num_lan_tx; ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc, max_txqs); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 333312a1d595..8725569d11f0 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -349,6 +349,9 @@ ice_prepare_for_reset(struct ice_pf *pf) /* disable the VSIs and their queues that are not already DOWN */ ice_pf_dis_all_vsi(pf); + if (hw->port_info) + ice_sched_clear_port(hw->port_info); + ice_shutdown_all_ctrlq(hw); set_bit(__ICE_PREPARED_FOR_RESET, pf->state); @@ -405,7 +408,7 @@ static void ice_reset_subtask(struct ice_pf *pf) /* When a CORER/GLOBR/EMPR is about to happen, the hardware triggers an * OICR interrupt. 
The OICR handler (ice_misc_intr) determines what type * of reset is pending and sets bits in pf->state indicating the reset - * type and __ICE_RESET_OICR_RECV. So, if the latter bit is set + * type and __ICE_RESET_OICR_RECV. So, if the latter bit is set * prepare for pending reset if not already (for PF software-initiated * global resets the software should already be prepared for it as * indicated by __ICE_PREPARED_FOR_RESET; for global resets initiated @@ -1379,7 +1382,7 @@ static void ice_free_irq_msix_misc(struct ice_pf *pf) * @pf: board private structure * * This sets up the handler for MSIX 0, which is used to manage the - * non-queue interrupts, e.g. AdminQ and errors. This is not used + * non-queue interrupts, e.g. AdminQ and errors. This is not used * when in MSI or Legacy interrupt mode. */ static int ice_req_irq_msix_misc(struct ice_pf *pf) @@ -1783,7 +1786,7 @@ static void ice_determine_q_usage(struct ice_pf *pf) pf->num_lan_tx = min_t(int, q_left_tx, num_online_cpus()); - /* only 1 rx queue unless RSS is enabled */ + /* only 1 Rx queue unless RSS is enabled */ if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) pf->num_lan_rx = 1; else @@ -2091,8 +2094,7 @@ static int ice_probe(struct pci_dev *pdev, ice_determine_q_usage(pf); - pf->num_alloc_vsi = min_t(u16, ICE_MAX_VSI_ALLOC, - hw->func_caps.guaranteed_num_vsi); + pf->num_alloc_vsi = hw->func_caps.guar_num_vsi; if (!pf->num_alloc_vsi) { err = -EIO; goto err_init_pf_unroll; @@ -2544,7 +2546,6 @@ static int ice_vsi_cfg(struct ice_vsi *vsi) if (err) return err; } - err = ice_vsi_cfg_txqs(vsi); if (!err) err = ice_vsi_cfg_rxqs(vsi); @@ -2563,8 +2564,12 @@ static void ice_napi_enable_all(struct ice_vsi *vsi) if (!vsi->netdev) return; - for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++) - napi_enable(&vsi->q_vectors[q_idx]->napi); + for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++) { + struct ice_q_vector *q_vector = vsi->q_vectors[q_idx]; + + if (q_vector->rx.ring || q_vector->tx.ring) + napi_enable(&q_vector->napi); + } } /** @@ -2931,8 +2936,12 @@ static void ice_napi_disable_all(struct ice_vsi *vsi) if (!vsi->netdev) return; - for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++) - napi_disable(&vsi->q_vectors[q_idx]->napi); + for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++) { + struct ice_q_vector *q_vector = vsi->q_vectors[q_idx]; + + if (q_vector->rx.ring || q_vector->tx.ring) + napi_disable(&q_vector->napi); + } } /** @@ -3138,8 +3147,9 @@ static void ice_vsi_release_all(struct ice_pf *pf) /** * ice_dis_vsi - pause a VSI * @vsi: the VSI being paused + * @locked: is the rtnl_lock already held */ -static void ice_dis_vsi(struct ice_vsi *vsi) +static void ice_dis_vsi(struct ice_vsi *vsi, bool locked) { if (test_bit(__ICE_DOWN, vsi->state)) return; @@ -3148,9 +3158,13 @@ static void ice_dis_vsi(struct ice_vsi *vsi) if (vsi->type == ICE_VSI_PF && vsi->netdev) { if (netif_running(vsi->netdev)) { - rtnl_lock(); - vsi->netdev->netdev_ops->ndo_stop(vsi->netdev); - rtnl_unlock(); + if (!locked) { + rtnl_lock(); + vsi->netdev->netdev_ops->ndo_stop(vsi->netdev); + rtnl_unlock(); + } else { + vsi->netdev->netdev_ops->ndo_stop(vsi->netdev); + } } else { ice_vsi_close(vsi); } @@ -3189,7 +3203,7 @@ static void ice_pf_dis_all_vsi(struct ice_pf *pf) ice_for_each_vsi(pf, v) if (pf->vsi[v]) - ice_dis_vsi(pf->vsi[v]); + ice_dis_vsi(pf->vsi[v], false); } /** @@ -3618,6 +3632,7 @@ static int ice_vsi_update_bridge_mode(struct ice_vsi *vsi, u16 bmode) * @dev: the netdev being configured * @nlh: RTNL message * @flags: bridge setlink flags + * 
@extack: netlink extended ack * * Sets the bridge mode (VEB/VEPA) of the switch to which the netdev (VSI) is * hooked up to. Iterates through the PF VSI list and sets the loopback mode (if @@ -3626,7 +3641,7 @@ static int ice_vsi_update_bridge_mode(struct ice_vsi *vsi, u16 bmode) */ static int ice_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh, - u16 __always_unused flags) + u16 __always_unused flags, struct netlink_ext_ack *extack) { struct ice_netdev_priv *np = netdev_priv(dev); struct ice_pf *pf = np->vsi->back; @@ -3668,7 +3683,7 @@ ice_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh, */ status = ice_update_sw_rule_bridge_mode(hw); if (status) { - netdev_err(dev, "update SW_RULE for bridge mode failed, = %d err %d aq_err %d\n", + netdev_err(dev, "switch rule update failed, mode = %d err %d aq_err %d\n", mode, status, hw->adminq.sq_last_status); /* revert hw->evb_veb */ hw->evb_veb = (pf_sw->bridge_mode == BRIDGE_MODE_VEB); @@ -3691,40 +3706,36 @@ static void ice_tx_timeout(struct net_device *netdev) struct ice_ring *tx_ring = NULL; struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; - u32 head, val = 0, i; int hung_queue = -1; + u32 i; pf->tx_timeout_count++; - /* find the stopped queue the same way the stack does */ + /* find the stopped queue the same way dev_watchdog() does */ for (i = 0; i < netdev->num_tx_queues; i++) { - struct netdev_queue *q; unsigned long trans_start; + struct netdev_queue *q; q = netdev_get_tx_queue(netdev, i); trans_start = q->trans_start; if (netif_xmit_stopped(q) && time_after(jiffies, - (trans_start + netdev->watchdog_timeo))) { + trans_start + netdev->watchdog_timeo)) { hung_queue = i; break; } } - if (i == netdev->num_tx_queues) { + if (i == netdev->num_tx_queues) netdev_info(netdev, "tx_timeout: no netdev hung queue found\n"); - } else { + else /* now that we have an index, find the tx_ring struct */ - for (i = 0; i < vsi->num_txq; i++) { - if (vsi->tx_rings[i] && vsi->tx_rings[i]->desc) { - if (hung_queue == - vsi->tx_rings[i]->q_index) { + for (i = 0; i < vsi->num_txq; i++) + if (vsi->tx_rings[i] && vsi->tx_rings[i]->desc) + if (hung_queue == vsi->tx_rings[i]->q_index) { tx_ring = vsi->tx_rings[i]; break; } - } - } - } /* Reset recovery level if enough time has elapsed after last timeout. * Also ensure no new reset action happens before next timeout period. @@ -3736,17 +3747,20 @@ static void ice_tx_timeout(struct net_device *netdev) return; if (tx_ring) { - head = tx_ring->next_to_clean; + struct ice_hw *hw = &pf->hw; + u32 head, val = 0; + + head = (rd32(hw, QTX_COMM_HEAD(vsi->txq_map[hung_queue])) & + QTX_COMM_HEAD_HEAD_M) >> QTX_COMM_HEAD_HEAD_S; /* Read interrupt register */ if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) - val = rd32(&pf->hw, + val = rd32(hw, GLINT_DYN_CTL(tx_ring->q_vector->v_idx + - tx_ring->vsi->hw_base_vector)); + tx_ring->vsi->hw_base_vector)); - netdev_info(netdev, "tx_timeout: VSI_num: %d, Q %d, NTC: 0x%x, HWB: 0x%x, NTU: 0x%x, TAIL: 0x%x, INT: 0x%x\n", + netdev_info(netdev, "tx_timeout: VSI_num: %d, Q %d, NTC: 0x%x, HW_HEAD: 0x%x, NTU: 0x%x, INT: 0x%x\n", vsi->vsi_num, hung_queue, tx_ring->next_to_clean, - head, tx_ring->next_to_use, - readl(tx_ring->tail), val); + head, tx_ring->next_to_use, val); } pf->tx_timeout_last_recovery = jiffies; @@ -3780,7 +3794,7 @@ static void ice_tx_timeout(struct net_device *netdev) * @netdev: network interface device structure * * The open entry point is called when a network interface is made - * active by the system (IFF_UP). 
At this point all resources needed + * active by the system (IFF_UP). At this point all resources needed * for transmit and receive operations are allocated, the interrupt * handler is registered with the OS, the netdev watchdog is enabled, * and the stack is notified that the interface is ready. @@ -3813,7 +3827,7 @@ static int ice_open(struct net_device *netdev) * @netdev: network interface device structure * * The stop entry point is called when an interface is de-activated by the OS, - * and the netdevice enters the DOWN state. The hardware is still under the + * and the netdevice enters the DOWN state. The hardware is still under the * driver's control, but the netdev interface is disabled. * * Returns success only - not allowed to fail @@ -3842,14 +3856,14 @@ ice_features_check(struct sk_buff *skb, size_t len; /* No point in doing any of this if neither checksum nor GSO are - * being requested for this frame. We can rule out both by just + * being requested for this frame. We can rule out both by just * checking for CHECKSUM_PARTIAL */ if (skb->ip_summed != CHECKSUM_PARTIAL) return features; /* We cannot support GSO if the MSS is going to be less than - * 64 bytes. If it is then we need to drop support for GSO. + * 64 bytes. If it is then we need to drop support for GSO. */ if (skb_is_gso(skb) && (skb_shinfo(skb)->gso_size < 64)) features &= ~NETIF_F_GSO_MASK; diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c index 7cc8aa18a22b..a1681853df2e 100644 --- a/drivers/net/ethernet/intel/ice/ice_sched.c +++ b/drivers/net/ethernet/intel/ice/ice_sched.c @@ -630,7 +630,7 @@ static void ice_sched_clear_tx_topo(struct ice_port_info *pi) * * Cleanup scheduling elements from SW DB */ -static void ice_sched_clear_port(struct ice_port_info *pi) +void ice_sched_clear_port(struct ice_port_info *pi) { if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY) return; @@ -894,8 +894,7 @@ static u8 ice_sched_get_vsi_layer(struct ice_hw *hw) * This function removes the leaf node that was created by the FW * during initialization */ -static void -ice_rm_dflt_leaf_node(struct ice_port_info *pi) +static void ice_rm_dflt_leaf_node(struct ice_port_info *pi) { struct ice_sched_node *node; @@ -923,8 +922,7 @@ ice_rm_dflt_leaf_node(struct ice_port_info *pi) * This function frees all the nodes except root and TC that were created by * the FW during initialization */ -static void -ice_sched_rm_dflt_nodes(struct ice_port_info *pi) +static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi) { struct ice_sched_node *node; @@ -1339,7 +1337,7 @@ ice_sched_rm_vsi_child_nodes(struct ice_port_info *pi, * @num_nodes: pointer to num nodes array * * This function calculates the number of supported nodes needed to add this - * VSI into tx tree including the VSI, parent and intermediate nodes in below + * VSI into Tx tree including the VSI, parent and intermediate nodes in below * layers */ static void @@ -1376,13 +1374,13 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw, } /** - * ice_sched_add_vsi_support_nodes - add VSI supported nodes into tx tree + * ice_sched_add_vsi_support_nodes - add VSI supported nodes into Tx tree * @pi: port information structure * @vsi_handle: software VSI handle * @tc_node: pointer to TC node * @num_nodes: pointer to num nodes array * - * This function adds the VSI supported nodes into tx tree including the + * This function adds the VSI supported nodes into Tx tree including the * VSI, its parent and intermediate nodes in below layers */ 
static enum ice_status @@ -1527,7 +1525,7 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, } /** - * ice_sched_cfg_vsi - configure the new/exisiting VSI + * ice_sched_cfg_vsi - configure the new/existing VSI * @pi: port information structure * @vsi_handle: software VSI handle * @tc: TC number @@ -1605,3 +1603,109 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs, return status; } + +/** + * ice_sched_rm_agg_vsi_entry - remove agg related VSI info entry + * @pi: port information structure + * @vsi_handle: software VSI handle + * + * This function removes single aggregator VSI info entry from + * aggregator list. + */ +static void +ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle) +{ + struct ice_sched_agg_info *agg_info; + struct ice_sched_agg_info *atmp; + + list_for_each_entry_safe(agg_info, atmp, &pi->agg_list, list_entry) { + struct ice_sched_agg_vsi_info *agg_vsi_info; + struct ice_sched_agg_vsi_info *vtmp; + + list_for_each_entry_safe(agg_vsi_info, vtmp, + &agg_info->agg_vsi_list, list_entry) + if (agg_vsi_info->vsi_handle == vsi_handle) { + list_del(&agg_vsi_info->list_entry); + devm_kfree(ice_hw_to_dev(pi->hw), + agg_vsi_info); + return; + } + } +} + +/** + * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes + * @pi: port information structure + * @vsi_handle: software VSI handle + * @owner: LAN or RDMA + * + * This function removes the VSI and its LAN or RDMA children nodes from the + * scheduler tree. + */ +static enum ice_status +ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner) +{ + enum ice_status status = ICE_ERR_PARAM; + struct ice_vsi_ctx *vsi_ctx; + u8 i, j = 0; + + if (!ice_is_vsi_valid(pi->hw, vsi_handle)) + return status; + mutex_lock(&pi->sched_lock); + vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle); + if (!vsi_ctx) + goto exit_sched_rm_vsi_cfg; + + for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) { + struct ice_sched_node *vsi_node, *tc_node; + + tc_node = ice_sched_get_tc_node(pi, i); + if (!tc_node) + continue; + + vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle); + if (!vsi_node) + continue; + + while (j < vsi_node->num_children) { + if (vsi_node->children[j]->owner == owner) { + ice_free_sched_node(pi, vsi_node->children[j]); + + /* reset the counter again since the num + * children will be updated after node removal + */ + j = 0; + } else { + j++; + } + } + /* remove the VSI if it has no children */ + if (!vsi_node->num_children) { + ice_free_sched_node(pi, vsi_node); + vsi_ctx->sched.vsi_node[i] = NULL; + + /* clean up agg related vsi info if any */ + ice_sched_rm_agg_vsi_info(pi, vsi_handle); + } + if (owner == ICE_SCHED_NODE_OWNER_LAN) + vsi_ctx->sched.max_lanq[i] = 0; + } + status = 0; + +exit_sched_rm_vsi_cfg: + mutex_unlock(&pi->sched_lock); + return status; +} + +/** + * ice_rm_vsi_lan_cfg - remove VSI and its LAN children nodes + * @pi: port information structure + * @vsi_handle: software VSI handle + * + * This function clears the VSI and its LAN children nodes from scheduler tree + * for all TCs. 
+ */ +enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle) +{ + return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN); +} diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h index 5dc9cfa04c58..da5b4c166da8 100644 --- a/drivers/net/ethernet/intel/ice/ice_sched.h +++ b/drivers/net/ethernet/intel/ice/ice_sched.h @@ -12,6 +12,7 @@ struct ice_sched_agg_vsi_info { struct list_head list_entry; DECLARE_BITMAP(tc_bitmap, ICE_MAX_TRAFFIC_CLASS); + u16 vsi_handle; }; struct ice_sched_agg_info { @@ -25,6 +26,7 @@ struct ice_sched_agg_info { /* FW AQ command calls */ enum ice_status ice_sched_init_port(struct ice_port_info *pi); enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw); +void ice_sched_clear_port(struct ice_port_info *pi); void ice_sched_cleanup_all(struct ice_hw *hw); struct ice_sched_node * ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid); @@ -39,4 +41,5 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_status ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs, u8 owner, bool enable); +enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle); #endif /* _ICE_SCHED_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c index 027eba4e13f8..533b989a23e1 100644 --- a/drivers/net/ethernet/intel/ice/ice_sriov.c +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c @@ -46,7 +46,7 @@ ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval, * @link_speed: variable containing the link_speed to be converted * * Convert link speed supported by HW to link speed supported by virtchnl. - * If adv_link_support is true, then return link speed in Mbps. Else return + * If adv_link_support is true, then return link speed in Mbps. Else return * link speed as a VIRTCHNL_LINK_SPEED_* casted to a u32. Note that the caller * needs to cast back to an enum virtchnl_link_speed in the case where * adv_link_support is false, but when adv_link_support is true the caller can diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index 40c9c6558956..2e5693107fa4 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -92,8 +92,7 @@ ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries, * Allocate memory for the entire recipe table and initialize the structures/ * entries corresponding to basic recipes. */ -enum ice_status -ice_init_def_sw_recp(struct ice_hw *hw) +enum ice_status ice_init_def_sw_recp(struct ice_hw *hw) { struct ice_sw_recipe *recps; u8 i; @@ -130,7 +129,7 @@ ice_init_def_sw_recp(struct ice_hw *hw) * * NOTE: *req_desc is both an input/output parameter. * The caller of this function first calls this function with *request_desc set - * to 0. If the response from f/w has *req_desc set to 0, all the switch + * to 0. 
If the response from f/w has *req_desc set to 0, all the switch * configuration information has been returned; if non-zero (meaning not all * the information was returned), the caller should call this function again * with *req_desc set to the previous value returned by f/w to get the @@ -629,25 +628,36 @@ enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw) /** * ice_fill_sw_info - Helper function to populate lb_en and lan_en * @hw: pointer to the hardware structure - * @f_info: filter info structure to fill/update + * @fi: filter info structure to fill/update * * This helper function populates the lb_en and lan_en elements of the provided * ice_fltr_info struct using the switch's type and characteristics of the * switch rule being configured. */ -static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *f_info) +static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi) { - f_info->lb_en = false; - f_info->lan_en = false; - if ((f_info->flag & ICE_FLTR_TX) && - (f_info->fltr_act == ICE_FWD_TO_VSI || - f_info->fltr_act == ICE_FWD_TO_VSI_LIST || - f_info->fltr_act == ICE_FWD_TO_Q || - f_info->fltr_act == ICE_FWD_TO_QGRP)) { - f_info->lb_en = true; - if (!(hw->evb_veb && f_info->lkup_type == ICE_SW_LKUP_MAC && - is_unicast_ether_addr(f_info->l_data.mac.mac_addr))) - f_info->lan_en = true; + fi->lb_en = false; + fi->lan_en = false; + if ((fi->flag & ICE_FLTR_TX) && + (fi->fltr_act == ICE_FWD_TO_VSI || + fi->fltr_act == ICE_FWD_TO_VSI_LIST || + fi->fltr_act == ICE_FWD_TO_Q || + fi->fltr_act == ICE_FWD_TO_QGRP)) { + fi->lb_en = true; + /* Do not set lan_en to TRUE if + * 1. The switch is a VEB AND + * 2 + * 2.1 The lookup is MAC with unicast addr for MAC, OR + * 2.2 The lookup is MAC_VLAN with unicast addr for MAC + * + * In all other cases, the LAN enable has to be set to true. + */ + if (!(hw->evb_veb && + ((fi->lkup_type == ICE_SW_LKUP_MAC && + is_unicast_ether_addr(fi->l_data.mac.mac_addr)) || + (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN && + is_unicast_ether_addr(fi->l_data.mac_vlan.mac_addr))))) + fi->lan_en = true; } } @@ -817,7 +827,7 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, /* Create two back-to-back switch rules and submit them to the HW using * one memory buffer: * 1. Large Action - * 2. Look up tx rx + * 2. Look up Tx Rx */ lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts); rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE; @@ -861,7 +871,7 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, lg_act->pdata.lg_act.act[2] = cpu_to_le32(act); - /* call the fill switch rule to fill the lookup tx rx structure */ + /* call the fill switch rule to fill the lookup Tx Rx structure */ ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx, ice_aqc_opc_update_sw_rules); @@ -1158,8 +1168,8 @@ enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw) * Call AQ command to add or update previously created VSI list with new VSI. * * Helper function to do book keeping associated with adding filter information - * The algorithm to do the booking keeping is described below : - * When a VSI needs to subscribe to a given filter( MAC/VLAN/Ethtype etc.) + * The algorithm to do the book keeping is described below : + * When a VSI needs to subscribe to a given filter (MAC/VLAN/Ethtype etc.) 
* if only one VSI has been added till now * Allocate a new VSI list and add two VSIs * to this list using switch rule command @@ -1237,6 +1247,9 @@ ice_add_update_vsi_list(struct ice_hw *hw, u16 vsi_handle = new_fltr->vsi_handle; enum ice_adminq_opc opcode; + if (!m_entry->vsi_list_info) + return ICE_ERR_CFG; + /* A rule already exists with the new VSI being added */ if (test_bit(vsi_handle, m_entry->vsi_list_info->vsi_map)) return 0; @@ -1853,7 +1866,7 @@ ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry) tmp_fltr.fwd_id.vsi_list_id = vsi_list_id; tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST; /* Update the previous switch rule to a new VSI list which - * includes current VSI thats requested + * includes current VSI that is requested */ status = ice_update_pkt_fwd_rule(hw, &tmp_fltr); if (status) diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index fe5bbabbb41e..49fc38094185 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -219,7 +219,7 @@ static bool ice_clean_tx_irq(struct ice_vsi *vsi, struct ice_ring *tx_ring, /** * ice_setup_tx_ring - Allocate the Tx descriptors - * @tx_ring: the tx ring to set up + * @tx_ring: the Tx ring to set up * * Return 0 on success, negative on error */ @@ -324,7 +324,7 @@ void ice_free_rx_ring(struct ice_ring *rx_ring) /** * ice_setup_rx_ring - Allocate the Rx descriptors - * @rx_ring: the rx ring to set up + * @rx_ring: the Rx ring to set up * * Return 0 on success, negative on error */ @@ -377,7 +377,7 @@ static void ice_release_rx_desc(struct ice_ring *rx_ring, u32 val) rx_ring->next_to_alloc = val; /* Force memory writes to complete before letting h/w - * know there are new descriptors to fetch. (Only + * know there are new descriptors to fetch. (Only * applicable for weak-ordered memory model archs, * such as IA-64). */ @@ -586,7 +586,7 @@ static bool ice_add_rx_frag(struct ice_rx_buf *rx_buf, /** * ice_reuse_rx_page - page flip buffer and store it back on the ring - * @rx_ring: rx descriptor ring to store buffers on + * @rx_ring: Rx descriptor ring to store buffers on * @old_buf: donor buffer to have page reused * * Synchronizes page for reuse by the adapter @@ -609,7 +609,7 @@ static void ice_reuse_rx_page(struct ice_ring *rx_ring, /** * ice_fetch_rx_buf - Allocate skb and populate it - * @rx_ring: rx descriptor ring to transact packets on + * @rx_ring: Rx descriptor ring to transact packets on * @rx_desc: descriptor containing info written by hardware * * This function allocates an skb on the fly, and populates it with the page @@ -686,7 +686,7 @@ static struct sk_buff *ice_fetch_rx_buf(struct ice_ring *rx_ring, * ice_pull_tail - ice specific version of skb_pull_tail * @skb: pointer to current skb being adjusted * - * This function is an ice specific version of __pskb_pull_tail. The + * This function is an ice specific version of __pskb_pull_tail. The * main difference between this version and the original function is that * this function can make several assumptions about the state of things * that allow for significant optimizations versus the standard function. @@ -768,7 +768,7 @@ static bool ice_test_staterr(union ice_32b_rx_flex_desc *rx_desc, * @rx_desc: Rx descriptor for current buffer * @skb: Current socket buffer containing buffer in progress * - * This function updates next to clean. If the buffer is an EOP buffer + * This function updates next to clean. 
If the buffer is an EOP buffer * this function exits returning false, otherwise it will place the * sk_buff in the next buffer to be chained and return true indicating * that this is in fact a non-EOP buffer. @@ -904,7 +904,7 @@ checksum_fail: /** * ice_process_skb_fields - Populate skb header fields from Rx descriptor - * @rx_ring: rx descriptor ring packet is being transacted on + * @rx_ring: Rx descriptor ring packet is being transacted on * @rx_desc: pointer to the EOP Rx descriptor * @skb: pointer to current skb being populated * @ptype: the packet type decoded by hardware @@ -927,7 +927,7 @@ static void ice_process_skb_fields(struct ice_ring *rx_ring, /** * ice_receive_skb - Send a completed packet up the stack - * @rx_ring: rx ring in play + * @rx_ring: Rx ring in play * @skb: packet to send up * @vlan_tag: vlan tag for packet * @@ -946,11 +946,11 @@ static void ice_receive_skb(struct ice_ring *rx_ring, struct sk_buff *skb, /** * ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf - * @rx_ring: rx descriptor ring to transact packets on + * @rx_ring: Rx descriptor ring to transact packets on * @budget: Total limit on number of packets to process * * This function provides a "bounce buffer" approach to Rx interrupt - * processing. The advantage to this is that on systems that have + * processing. The advantage to this is that on systems that have * expensive overhead for IOMMU access this provides a means of avoiding * it by maintaining the mapping of the page to the system. * @@ -1103,11 +1103,14 @@ int ice_napi_poll(struct napi_struct *napi, int budget) if (!clean_complete) return budget; - /* Work is done so exit the polling mode and re-enable the interrupt */ - napi_complete_done(napi, work_done); - if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) - ice_irq_dynamic_ena(&vsi->back->hw, vsi, q_vector); - return 0; + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) + ice_irq_dynamic_ena(&vsi->back->hw, vsi, q_vector); + + return min(work_done, budget - 1); } /* helper function for building cmd/type/offset */ @@ -1122,7 +1125,7 @@ build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag) } /** - * __ice_maybe_stop_tx - 2nd level check for tx stop conditions + * __ice_maybe_stop_tx - 2nd level check for Tx stop conditions * @tx_ring: the ring to be checked * @size: the size buffer we want to assure is available * @@ -1145,7 +1148,7 @@ static int __ice_maybe_stop_tx(struct ice_ring *tx_ring, unsigned int size) } /** - * ice_maybe_stop_tx - 1st level check for tx stop conditions + * ice_maybe_stop_tx - 1st level check for Tx stop conditions * @tx_ring: the ring to be checked * @size: the size buffer we want to assure is available * @@ -1155,6 +1158,7 @@ static int ice_maybe_stop_tx(struct ice_ring *tx_ring, unsigned int size) { if (likely(ICE_DESC_UNUSED(tx_ring) >= size)) return 0; + return __ice_maybe_stop_tx(tx_ring, size); } @@ -1552,7 +1556,7 @@ int ice_tso(struct ice_tx_buf *first, struct ice_tx_offload_params *off) * Finally, we add one to round up. Because 256 isn't an exact multiple of * 3, we'll underestimate near each multiple of 12K. This is actually more * accurate as we have 4K - 1 of wiggle room that we can fit into the last - * segment. For our purposes this is accurate out to 1M which is orders of + * segment. 
For our purposes this is accurate out to 1M which is orders of * magnitude greater than our largest possible GSO size. * * This would then be implemented as: @@ -1568,7 +1572,7 @@ static unsigned int ice_txd_use_count(unsigned int size) } /** - * ice_xmit_desc_count - calculate number of tx descriptors needed + * ice_xmit_desc_count - calculate number of Tx descriptors needed * @skb: send buffer * * Returns number of data descriptors needed for this skb. @@ -1620,7 +1624,7 @@ static bool __ice_chk_linearize(struct sk_buff *skb) nr_frags -= ICE_MAX_BUF_TXD - 2; frag = &skb_shinfo(skb)->frags[0]; - /* Initialize size to the negative value of gso_size minus 1. We + /* Initialize size to the negative value of gso_size minus 1. We * use this as the worst case scenerio in which the frag ahead * of us only provides one byte which is why we are limited to 6 * descriptors for a single transmit as the header and previous diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index f4dbc81c1988..0ea428104215 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -124,6 +124,8 @@ struct ice_phy_info { /* Common HW capabilities for SW use */ struct ice_hw_common_caps { + u32 valid_functions; + /* TX/RX queues */ u16 num_rxq; /* Number/Total RX queues */ u16 rxq_first_id; /* First queue ID for RX queues */ @@ -150,7 +152,7 @@ struct ice_hw_func_caps { struct ice_hw_common_caps common_cap; u32 num_allocd_vfs; /* Number of allocated VFs */ u32 vf_base_id; /* Logical ID of the first VF */ - u32 guaranteed_num_vsi; + u32 guar_num_vsi; }; /* Device wide capabilities */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index e71065f9d391..05ff4f910649 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -156,8 +156,6 @@ static void ice_free_vf_res(struct ice_vf *vf) clear_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states); } -/***********************enable_vf routines*****************************/ - /** * ice_dis_vf_mappings * @vf: pointer to the VF structure @@ -215,6 +213,15 @@ void ice_free_vfs(struct ice_pf *pf) while (test_and_set_bit(__ICE_VF_DIS, pf->state)) usleep_range(1000, 2000); + /* Disable IOV before freeing resources. This lets any VF drivers + * running in the host get themselves cleaned up before we yank + * the carpet out from underneath their feet. + */ + if (!pci_vfs_assigned(pf->pdev)) + pci_disable_sriov(pf->pdev); + else + dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n"); + /* Avoid wait time by stopping all VFs at the same time */ for (i = 0; i < pf->num_alloc_vfs; i++) { if (!test_bit(ICE_VF_STATE_ENA, pf->vf[i].vf_states)) @@ -228,15 +235,6 @@ void ice_free_vfs(struct ice_pf *pf) clear_bit(ICE_VF_STATE_ENA, pf->vf[i].vf_states); } - /* Disable IOV before freeing resources. This lets any VF drivers - * running in the host get themselves cleaned up before we yank - * the carpet out from underneath their feet. - */ - if (!pci_vfs_assigned(pf->pdev)) - pci_disable_sriov(pf->pdev); - else - dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n"); - tmp = pf->num_alloc_vfs; pf->num_vf_qps = 0; pf->num_alloc_vfs = 0; @@ -454,7 +452,7 @@ static int ice_alloc_vsi_res(struct ice_vf *vf) /* Clear this bit after VF initialization since we shouldn't reclaim * and reassign interrupts for synchronous or asynchronous VFR events. 
- * We don't want to reconfigure interrupts since AVF driver doesn't + * We dont want to reconfigure interrupts since AVF driver doesn't * expect vector assignment to be changed unless there is a request for * more vectors. */ @@ -1105,7 +1103,7 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs) * ice_process_vflr_event - Free VF resources via IRQ calls * @pf: pointer to the PF structure * - * called from the VLFR IRQ handler to + * called from the VFLR IRQ handler to * free up VF resources and state variables */ void ice_process_vflr_event(struct ice_pf *pf) @@ -1764,7 +1762,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) /* copy Tx queue info from VF into VSI */ vsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr; vsi->tx_rings[i]->count = qpi->txq.ring_len; - /* copy Rx queue info from VF into vsi */ + /* copy Rx queue info from VF into VSI */ vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr; vsi->rx_rings[i]->count = qpi->rxq.ring_len; if (qpi->rxq.databuffer_size > ((16 * 1024) - 128)) { @@ -1830,7 +1828,7 @@ static bool ice_can_vf_change_mac(struct ice_vf *vf) * @msg: pointer to the msg buffer * @set: true if mac filters are being set, false otherwise * - * add guest mac address filter + * add guest MAC address filter */ static int ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set) @@ -1968,9 +1966,9 @@ static int ice_vc_del_mac_addr_msg(struct ice_vf *vf, u8 *msg) * @msg: pointer to the msg buffer * * VFs get a default number of queues but can use this message to request a - * different number. If the request is successful, PF will reset the VF and + * different number. If the request is successful, PF will reset the VF and * return 0. If unsuccessful, PF will send message informing VF of number of - * available queue pairs via virtchnl message response to VF. + * available queue pairs via virtchnl message response to vf. */ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg) { @@ -1991,7 +1989,7 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg) tx_rx_queue_left = min_t(int, pf->q_left_tx, pf->q_left_rx); if (req_queues <= 0) { dev_err(&pf->pdev->dev, - "VF %d tried to request %d queues. Ignoring.\n", + "VF %d tried to request %d queues. 
Ignoring.\n", vf->vf_id, req_queues); } else if (req_queues > ICE_MAX_QS_PER_VF) { dev_err(&pf->pdev->dev, diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 10131e0180f9..01470a8ee03a 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -70,7 +70,7 @@ struct ice_vf { u8 spoofchk; u16 num_mac; u16 num_vlan; - u8 num_req_qs; /* num of queue pairs requested by VF */ + u8 num_req_qs; /* num of queue pairs requested by VF */ }; #ifdef CONFIG_PCI_IOV diff --git a/drivers/net/ethernet/intel/igb/e1000_defines.h b/drivers/net/ethernet/intel/igb/e1000_defines.h index 8a28f3388f69..01fcfc6f3415 100644 --- a/drivers/net/ethernet/intel/igb/e1000_defines.h +++ b/drivers/net/ethernet/intel/igb/e1000_defines.h @@ -334,6 +334,7 @@ #define I210_RXPBSIZE_DEFAULT 0x000000A2 /* RXPBSIZE default */ #define I210_RXPBSIZE_MASK 0x0000003F +#define I210_RXPBSIZE_PB_30KB 0x0000001E #define I210_RXPBSIZE_PB_32KB 0x00000020 #define I210_TXPBSIZE_DEFAULT 0x04000014 /* TXPBSIZE default */ #define I210_TXPBSIZE_MASK 0xC0FFFFFF diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h index ca54e268d157..fe1592ae8769 100644 --- a/drivers/net/ethernet/intel/igb/igb.h +++ b/drivers/net/ethernet/intel/igb/igb.h @@ -515,7 +515,7 @@ struct igb_adapter { /* OS defined structs */ struct pci_dev *pdev; - spinlock_t stats64_lock; + struct mutex stats64_lock; struct rtnl_link_stats64 stats64; /* structs defined in e1000_hw.h */ diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c index 5acf3b743876..7426060b678f 100644 --- a/drivers/net/ethernet/intel/igb/igb_ethtool.c +++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c @@ -2113,7 +2113,7 @@ static int igb_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) { struct igb_adapter *adapter = netdev_priv(netdev); - if (wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE)) + if (wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE | WAKE_FILTER)) return -EOPNOTSUPP; if (!(adapter->flags & IGB_FLAG_WOL_SUPPORTED)) @@ -2295,7 +2295,7 @@ static void igb_get_ethtool_stats(struct net_device *netdev, int i, j; char *p; - spin_lock(&adapter->stats64_lock); + mutex_lock(&adapter->stats64_lock); igb_update_stats(adapter); for (i = 0; i < IGB_GLOBAL_STATS_LEN; i++) { @@ -2338,7 +2338,7 @@ static void igb_get_ethtool_stats(struct net_device *netdev, } while (u64_stats_fetch_retry_irq(&ring->rx_syncp, start)); i += IGB_RX_QUEUE_STATS_LEN; } - spin_unlock(&adapter->stats64_lock); + mutex_unlock(&adapter->stats64_lock); } static void igb_get_strings(struct net_device *netdev, u32 stringset, u8 *data) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 5df88ad8ac81..87bdf1604ae2 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -1850,13 +1850,12 @@ static void igb_config_tx_modes(struct igb_adapter *adapter, int queue) * configuration' in respect to these parameters. */ - netdev_dbg(netdev, "Qav Tx mode: cbs %s, launchtime %s, queue %d \ - idleslope %d sendslope %d hiCredit %d \ - locredit %d\n", - (ring->cbs_enable) ? "enabled" : "disabled", - (ring->launchtime_enable) ? 
"enabled" : "disabled", queue, - ring->idleslope, ring->sendslope, ring->hicredit, - ring->locredit); + netdev_dbg(netdev, "Qav Tx mode: cbs %s, launchtime %s, queue %d idleslope %d sendslope %d hiCredit %d locredit %d\n", + ring->cbs_enable ? "enabled" : "disabled", + ring->launchtime_enable ? "enabled" : "disabled", + queue, + ring->idleslope, ring->sendslope, + ring->hicredit, ring->locredit); } static int igb_save_txtime_params(struct igb_adapter *adapter, int queue, @@ -1935,7 +1934,7 @@ static void igb_setup_tx_mode(struct igb_adapter *adapter) val = rd32(E1000_RXPBS); val &= ~I210_RXPBSIZE_MASK; - val |= I210_RXPBSIZE_PB_32KB; + val |= I210_RXPBSIZE_PB_30KB; wr32(E1000_RXPBS, val); /* Section 8.12.9 states that MAX_TPKT_SIZE from DTXMXPKTSZ @@ -2204,9 +2203,9 @@ void igb_down(struct igb_adapter *adapter) del_timer_sync(&adapter->phy_info_timer); /* record the stats before reset*/ - spin_lock(&adapter->stats64_lock); + mutex_lock(&adapter->stats64_lock); igb_update_stats(adapter); - spin_unlock(&adapter->stats64_lock); + mutex_unlock(&adapter->stats64_lock); adapter->link_speed = 0; adapter->link_duplex = 0; @@ -3841,7 +3840,7 @@ static int igb_sw_init(struct igb_adapter *adapter) adapter->min_frame_size = ETH_ZLEN + ETH_FCS_LEN; spin_lock_init(&adapter->nfc_lock); - spin_lock_init(&adapter->stats64_lock); + mutex_init(&adapter->stats64_lock); #ifdef CONFIG_PCI_IOV switch (hw->mac.type) { case e1000_82576: @@ -5407,9 +5406,9 @@ no_wait: } } - spin_lock(&adapter->stats64_lock); + mutex_lock(&adapter->stats64_lock); igb_update_stats(adapter); - spin_unlock(&adapter->stats64_lock); + mutex_unlock(&adapter->stats64_lock); for (i = 0; i < adapter->num_tx_queues; i++) { struct igb_ring *tx_ring = adapter->tx_ring[i]; @@ -6019,6 +6018,8 @@ static int igb_tx_map(struct igb_ring *tx_ring, /* set the timestamp */ first->time_stamp = jiffies; + skb_tx_timestamp(skb); + /* Force memory writes to complete before letting h/w know there * are new descriptors to fetch. (Only applicable for weak-ordered * memory model archs, such as IA-64). 
@@ -6147,8 +6148,6 @@ netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb, else if (!tso) igb_tx_csum(tx_ring, first); - skb_tx_timestamp(skb); - if (igb_tx_map(tx_ring, first, hdr_len)) goto cleanup_tx_tstamp; @@ -6236,10 +6235,10 @@ static void igb_get_stats64(struct net_device *netdev, { struct igb_adapter *adapter = netdev_priv(netdev); - spin_lock(&adapter->stats64_lock); + mutex_lock(&adapter->stats64_lock); igb_update_stats(adapter); memcpy(stats, &adapter->stats64, sizeof(*stats)); - spin_unlock(&adapter->stats64_lock); + mutex_unlock(&adapter->stats64_lock); } /** @@ -7753,11 +7752,13 @@ static int igb_poll(struct napi_struct *napi, int budget) if (!clean_complete) return budget; - /* If not enough Rx work done, exit the polling mode */ - napi_complete_done(napi, work_done); - igb_ring_irq_enable(q_vector); + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + igb_ring_irq_enable(q_vector); - return 0; + return min(work_done, budget - 1); } /** @@ -8770,9 +8771,11 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake, rtnl_unlock(); #ifdef CONFIG_PM - retval = pci_save_state(pdev); - if (retval) - return retval; + if (!runtime) { + retval = pci_save_state(pdev); + if (retval) + return retval; + } #endif status = rd32(E1000_STATUS); diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c index 2b95dc9c7a6a..fd3071f55bd3 100644 --- a/drivers/net/ethernet/intel/igb/igb_ptp.c +++ b/drivers/net/ethernet/intel/igb/igb_ptp.c @@ -277,17 +277,53 @@ static int igb_ptp_adjtime_i210(struct ptp_clock_info *ptp, s64 delta) return 0; } -static int igb_ptp_gettime_82576(struct ptp_clock_info *ptp, - struct timespec64 *ts) +static int igb_ptp_gettimex_82576(struct ptp_clock_info *ptp, + struct timespec64 *ts, + struct ptp_system_timestamp *sts) { struct igb_adapter *igb = container_of(ptp, struct igb_adapter, ptp_caps); + struct e1000_hw *hw = &igb->hw; unsigned long flags; + u32 lo, hi; u64 ns; spin_lock_irqsave(&igb->tmreg_lock, flags); - ns = timecounter_read(&igb->tc); + ptp_read_system_prets(sts); + lo = rd32(E1000_SYSTIML); + ptp_read_system_postts(sts); + hi = rd32(E1000_SYSTIMH); + + ns = timecounter_cyc2time(&igb->tc, ((u64)hi << 32) | lo); + + spin_unlock_irqrestore(&igb->tmreg_lock, flags); + + *ts = ns_to_timespec64(ns); + + return 0; +} + +static int igb_ptp_gettimex_82580(struct ptp_clock_info *ptp, + struct timespec64 *ts, + struct ptp_system_timestamp *sts) +{ + struct igb_adapter *igb = container_of(ptp, struct igb_adapter, + ptp_caps); + struct e1000_hw *hw = &igb->hw; + unsigned long flags; + u32 lo, hi; + u64 ns; + + spin_lock_irqsave(&igb->tmreg_lock, flags); + + ptp_read_system_prets(sts); + rd32(E1000_SYSTIMR); + ptp_read_system_postts(sts); + lo = rd32(E1000_SYSTIML); + hi = rd32(E1000_SYSTIMH); + + ns = timecounter_cyc2time(&igb->tc, ((u64)hi << 32) | lo); spin_unlock_irqrestore(&igb->tmreg_lock, flags); @@ -296,16 +332,22 @@ static int igb_ptp_gettime_82576(struct ptp_clock_info *ptp, return 0; } -static int igb_ptp_gettime_i210(struct ptp_clock_info *ptp, - struct timespec64 *ts) +static int igb_ptp_gettimex_i210(struct ptp_clock_info *ptp, + struct timespec64 *ts, + struct ptp_system_timestamp *sts) { struct igb_adapter *igb = container_of(ptp, struct igb_adapter, ptp_caps); + struct e1000_hw *hw = &igb->hw; unsigned long flags; spin_lock_irqsave(&igb->tmreg_lock, flags); - igb_ptp_read_i210(igb, ts); + 
ptp_read_system_prets(sts); + rd32(E1000_SYSTIMR); + ptp_read_system_postts(sts); + ts->tv_nsec = rd32(E1000_SYSTIML); + ts->tv_sec = rd32(E1000_SYSTIMH); spin_unlock_irqrestore(&igb->tmreg_lock, flags); @@ -658,9 +700,12 @@ static void igb_ptp_overflow_check(struct work_struct *work) struct igb_adapter *igb = container_of(work, struct igb_adapter, ptp_overflow_work.work); struct timespec64 ts; + u64 ns; - igb->ptp_caps.gettime64(&igb->ptp_caps, &ts); + /* Update the timecounter */ + ns = timecounter_read(&igb->tc); + ts = ns_to_timespec64(ns); pr_debug("igb overflow check at %lld.%09lu\n", (long long) ts.tv_sec, ts.tv_nsec); @@ -1126,7 +1171,7 @@ void igb_ptp_init(struct igb_adapter *adapter) adapter->ptp_caps.pps = 0; adapter->ptp_caps.adjfreq = igb_ptp_adjfreq_82576; adapter->ptp_caps.adjtime = igb_ptp_adjtime_82576; - adapter->ptp_caps.gettime64 = igb_ptp_gettime_82576; + adapter->ptp_caps.gettimex64 = igb_ptp_gettimex_82576; adapter->ptp_caps.settime64 = igb_ptp_settime_82576; adapter->ptp_caps.enable = igb_ptp_feature_enable; adapter->cc.read = igb_ptp_read_82576; @@ -1145,7 +1190,7 @@ void igb_ptp_init(struct igb_adapter *adapter) adapter->ptp_caps.pps = 0; adapter->ptp_caps.adjfine = igb_ptp_adjfine_82580; adapter->ptp_caps.adjtime = igb_ptp_adjtime_82576; - adapter->ptp_caps.gettime64 = igb_ptp_gettime_82576; + adapter->ptp_caps.gettimex64 = igb_ptp_gettimex_82580; adapter->ptp_caps.settime64 = igb_ptp_settime_82576; adapter->ptp_caps.enable = igb_ptp_feature_enable; adapter->cc.read = igb_ptp_read_82580; @@ -1173,7 +1218,7 @@ void igb_ptp_init(struct igb_adapter *adapter) adapter->ptp_caps.pin_config = adapter->sdp_config; adapter->ptp_caps.adjfine = igb_ptp_adjfine_82580; adapter->ptp_caps.adjtime = igb_ptp_adjtime_i210; - adapter->ptp_caps.gettime64 = igb_ptp_gettime_i210; + adapter->ptp_caps.gettimex64 = igb_ptp_gettimex_i210; adapter->ptp_caps.settime64 = igb_ptp_settime_i210; adapter->ptp_caps.enable = igb_ptp_feature_enable_i210; adapter->ptp_caps.verify = igb_ptp_verify_pin; diff --git a/drivers/net/ethernet/intel/igbvf/mbx.c b/drivers/net/ethernet/intel/igbvf/mbx.c index 163e5838f7c2..a3cd7ac48d4b 100644 --- a/drivers/net/ethernet/intel/igbvf/mbx.c +++ b/drivers/net/ethernet/intel/igbvf/mbx.c @@ -241,7 +241,7 @@ static s32 e1000_write_mbx_vf(struct e1000_hw *hw, u32 *msg, u16 size) s32 err; u16 i; - WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock)); + lockdep_assert_held(&hw->mbx_lock); /* lock the mailbox to prevent pf/vf race condition */ err = e1000_obtain_mbx_lock_vf(hw); @@ -279,7 +279,7 @@ static s32 e1000_read_mbx_vf(struct e1000_hw *hw, u32 *msg, u16 size) s32 err; u16 i; - WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock)); + lockdep_assert_held(&hw->mbx_lock); /* lock the mailbox to prevent pf/vf race condition */ err = e1000_obtain_mbx_lock_vf(hw); diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c index 820d49eb41ab..4eab83faec62 100644 --- a/drivers/net/ethernet/intel/igbvf/netdev.c +++ b/drivers/net/ethernet/intel/igbvf/netdev.c @@ -1186,10 +1186,13 @@ static int igbvf_poll(struct napi_struct *napi, int budget) igbvf_clean_rx_irq(adapter, &work_done, budget); - /* If not enough Rx work done, exit the polling mode */ - if (work_done < budget) { - napi_complete_done(napi, work_done); + if (work_done == budget) + return budget; + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) { if (adapter->requested_itr & 3) 
igbvf_set_itr(adapter); diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h index cdf18a5d9e08..b1039dd3dd13 100644 --- a/drivers/net/ethernet/intel/igc/igc.h +++ b/drivers/net/ethernet/intel/igc/igc.h @@ -5,23 +5,12 @@ #define _IGC_H_ #include - #include #include #include - #include - #include -#define IGC_ERR(args...) pr_err("igc: " args) - -#define PFX "igc: " - -#include -#include -#include - #include "igc_hw.h" /* main */ diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c index 832da609d9a7..df40af759542 100644 --- a/drivers/net/ethernet/intel/igc/igc_base.c +++ b/drivers/net/ethernet/intel/igc/igc_base.c @@ -237,7 +237,6 @@ static s32 igc_init_phy_params_base(struct igc_hw *hw) { struct igc_phy_info *phy = &hw->phy; s32 ret_val = 0; - u32 ctrl_ext; if (hw->phy.media_type != igc_media_type_copper) { phy->type = igc_phy_none; @@ -247,8 +246,6 @@ static s32 igc_init_phy_params_base(struct igc_hw *hw) phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT_2500; phy->reset_delay_us = 100; - ctrl_ext = rd32(IGC_CTRL_EXT); - /* set lan id */ hw->bus.func = (rd32(IGC_STATUS) & IGC_STATUS_FUNC_MASK) >> IGC_STATUS_FUNC_SHIFT; @@ -287,8 +284,6 @@ out: static s32 igc_get_invariants_base(struct igc_hw *hw) { struct igc_mac_info *mac = &hw->mac; - u32 link_mode = 0; - u32 ctrl_ext = 0; s32 ret_val = 0; switch (hw->device_id) { @@ -302,9 +297,6 @@ static s32 igc_get_invariants_base(struct igc_hw *hw) hw->phy.media_type = igc_media_type_copper; - ctrl_ext = rd32(IGC_CTRL_EXT); - link_mode = ctrl_ext & IGC_CTRL_EXT_LINK_MODE_MASK; - /* mac initialization and operations */ ret_val = igc_init_mac_params_base(hw); if (ret_val) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 9d85707e8a81..f20183037fb2 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -865,6 +865,8 @@ static int igc_tx_map(struct igc_ring *tx_ring, /* set the timestamp */ first->time_stamp = jiffies; + skb_tx_timestamp(skb); + /* Force memory writes to complete before letting h/w know there * are new descriptors to fetch. (Only applicable for weak-ordered * memory model archs, such as IA-64). 
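
igc repeats the same skb_tx_timestamp() relocation in the two hunks around this point. Separately, the ice, igb, igbvf and igc poll routines in this series all move to the busy-poll-aware NAPI completion pattern: napi_complete_done() returns false when the stack has claimed the context for busy polling, in which case the interrupt must stay masked, and a completed poll must never report the full budget back to the core. A hedged sketch of the pattern (my_clean_rx() and my_irq_enable() are illustrative placeholders, not functions from these drivers):

	/* sketch: busy-poll-aware NAPI poll routine */
	static int my_poll_sketch(struct napi_struct *napi, int budget)
	{
		int work_done = my_clean_rx(napi, budget);

		/* budget exhausted: stay in polling mode */
		if (work_done == budget)
			return budget;

		/* re-enable the interrupt only if the core really took us
		 * out of polling; under busy-poll this returns false
		 */
		if (likely(napi_complete_done(napi, work_done)))
			my_irq_enable(napi);

		/* cap the return so the core never sees a full budget */
		return min(work_done, budget - 1);
	}
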
@@ -959,8 +961,6 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb, first->bytecount = skb->len; first->gso_segs = 1; - skb_tx_timestamp(skb); - /* record initial flags and protocol */ first->tx_flags = tx_flags; first->protocol = protocol; @@ -1108,7 +1108,7 @@ static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring, /* update pointers within the skb to store the data */ skb_reserve(skb, IGC_SKB_PAD); - __skb_put(skb, size); + __skb_put(skb, size); /* update buffer offset */ #if (PAGE_SIZE < 8192) @@ -1160,9 +1160,9 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, (va + headlen) - page_address(rx_buffer->page), size, truesize); #if (PAGE_SIZE < 8192) - rx_buffer->page_offset ^= truesize; + rx_buffer->page_offset ^= truesize; #else - rx_buffer->page_offset += truesize; + rx_buffer->page_offset += truesize; #endif } else { rx_buffer->pagecnt_bias++; @@ -1668,8 +1668,8 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget) tx_buffer->next_to_watch, jiffies, tx_buffer->next_to_watch->wb.status); - netif_stop_subqueue(tx_ring->netdev, - tx_ring->queue_index); + netif_stop_subqueue(tx_ring->netdev, + tx_ring->queue_index); /* we are about to reset, no point in enabling stuff */ return true; @@ -1699,20 +1699,6 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget) return !!budget; } -/** - * igc_ioctl - I/O control method - * @netdev: network interface device structure - * @ifreq: frequency - * @cmd: command - */ -static int igc_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) -{ - switch (cmd) { - default: - return -EOPNOTSUPP; - } -} - /** * igc_up - Open the interface and prepare it to handle traffic * @adapter: board private structure @@ -2866,11 +2852,13 @@ static int igc_poll(struct napi_struct *napi, int budget) if (!clean_complete) return budget; - /* If not enough Rx work done, exit the polling mode */ - napi_complete_done(napi, work_done); - igc_ring_irq_enable(q_vector); + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + igc_ring_irq_enable(q_vector); - return 0; + return min(work_done, budget - 1); } /** @@ -3358,7 +3346,7 @@ static int __igc_open(struct net_device *netdev, bool resuming) goto err_req_irq; /* Notify the stack of the actual queue counts. 
*/ - netif_set_real_num_tx_queues(netdev, adapter->num_tx_queues); + err = netif_set_real_num_tx_queues(netdev, adapter->num_tx_queues); if (err) goto err_set_queues; @@ -3445,7 +3433,6 @@ static const struct net_device_ops igc_netdev_ops = { .ndo_set_mac_address = igc_set_mac, .ndo_change_mtu = igc_change_mtu, .ndo_get_stats = igc_get_stats, - .ndo_do_ioctl = igc_ioctl, }; /* PCIe configuration access */ @@ -3532,26 +3519,23 @@ static int igc_probe(struct pci_dev *pdev, struct net_device *netdev; struct igc_hw *hw; const struct igc_info *ei = igc_info_tbl[ent->driver_data]; - int err, pci_using_dac; + int err; err = pci_enable_device_mem(pdev); if (err) return err; - pci_using_dac = 0; err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); if (!err) { err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); - if (!err) - pci_using_dac = 1; } else { err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); if (err) { err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); if (err) { - IGC_ERR("Wrong DMA configuration, aborting\n"); + dev_err(&pdev->dev, "igc: Wrong DMA config\n"); goto err_dma; } } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h index 143bdd5ee2a0..08d85e336bd4 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -561,6 +562,7 @@ struct ixgbe_adapter { struct net_device *netdev; struct bpf_prog *xdp_prog; struct pci_dev *pdev; + struct mii_bus *mii_bus; unsigned long state; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c index 732b1e6ecc43..acba067cc15a 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c @@ -2206,7 +2206,8 @@ static int ixgbe_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) { struct ixgbe_adapter *adapter = netdev_priv(netdev); - if (wol->wolopts & (WAKE_PHY | WAKE_ARP | WAKE_MAGICSECURE)) + if (wol->wolopts & (WAKE_PHY | WAKE_ARP | WAKE_MAGICSECURE | + WAKE_FILTER)) return -EOPNOTSUPP; if (ixgbe_wol_exclusion(adapter, wol)) diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c index fd1b0546fd67..ff85ce5791a3 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c @@ -4,6 +4,7 @@ #include "ixgbe.h" #include #include +#include #define IXGBE_IPSEC_KEY_BITS 160 static const char aes_gcm_name[] = "rfc4106(gcm(aes))"; @@ -693,7 +694,8 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs) } else { struct tx_sa tsa; - if (adapter->num_vfs) + if (adapter->num_vfs && + adapter->bridge_mode != BRIDGE_MODE_VEPA) return -EOPNOTSUPP; /* find the first unused index */ @@ -1063,11 +1065,13 @@ int ixgbe_ipsec_tx(struct ixgbe_ring *tx_ring, struct ixgbe_adapter *adapter = netdev_priv(tx_ring->netdev); struct ixgbe_ipsec *ipsec = adapter->ipsec; struct xfrm_state *xs; + struct sec_path *sp; struct tx_sa *tsa; - if (unlikely(!first->skb->sp->len)) { + sp = skb_sec_path(first->skb); + if (unlikely(!sp->len)) { netdev_err(tx_ring->netdev, "%s: no xfrm state len = %d\n", - __func__, first->skb->sp->len); + __func__, sp->len); return 0; } @@ -1157,6 +1161,7 @@ void ixgbe_ipsec_rx(struct ixgbe_ring *rx_ring, struct xfrm_state *xs = NULL; struct ipv6hdr *ip6 = NULL; struct iphdr *ip4 = NULL; + struct sec_path *sp; void *daddr; __be32 spi; u8 *c_hdr; @@ -1196,12 
+1201,12 @@ void ixgbe_ipsec_rx(struct ixgbe_ring *rx_ring, if (unlikely(!xs)) return; - skb->sp = secpath_dup(skb->sp); - if (unlikely(!skb->sp)) + sp = secpath_set(skb); + if (unlikely(!sp)) return; - skb->sp->xvec[skb->sp->len++] = xs; - skb->sp->olen++; + sp->xvec[sp->len++] = xs; + sp->olen++; xo = xfrm_offload(skb); xo->flags = CRYPTO_DONE; xo->status = CRYPTO_SUCCESS; diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 113b38e0defb..daff8183534b 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -39,6 +39,7 @@ #include "ixgbe.h" #include "ixgbe_common.h" #include "ixgbe_dcb_82599.h" +#include "ixgbe_phy.h" #include "ixgbe_sriov.h" #include "ixgbe_model.h" #include "ixgbe_txrx_common.h" @@ -6077,9 +6078,9 @@ void ixgbe_down(struct ixgbe_adapter *adapter) /* Disable Rx */ ixgbe_disable_rx(adapter); - /* synchronize_sched() needed for pending XDP buffers to drain */ + /* synchronize_rcu() needed for pending XDP buffers to drain */ if (adapter->xdp_ring[0]) - synchronize_sched(); + synchronize_rcu(); ixgbe_irq_disable(adapter); @@ -8269,6 +8270,8 @@ static int ixgbe_tx_map(struct ixgbe_ring *tx_ring, /* set the timestamp */ first->time_stamp = jiffies; + skb_tx_timestamp(skb); + /* * Force memory writes to complete before letting h/w know there * are new descriptors to fetch. (Only applicable for weak-ordered @@ -8646,8 +8649,6 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb, } } - skb_tx_timestamp(skb); - #ifdef CONFIG_PCI_IOV /* * Use the l2switch_enable flag - would be false if the DMA @@ -8695,7 +8696,8 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb, #endif /* IXGBE_FCOE */ #ifdef CONFIG_IXGBE_IPSEC - if (skb->sp && !ixgbe_ipsec_tx(tx_ring, first, &ipsec_tx)) + if (secpath_exists(skb) && + !ixgbe_ipsec_tx(tx_ring, first, &ipsec_tx)) goto out_drop; #endif tso = ixgbe_tso(tx_ring, first, &hdr_len, &ipsec_tx); @@ -8789,6 +8791,15 @@ ixgbe_mdio_read(struct net_device *netdev, int prtad, int devad, u16 addr) u16 value; int rc; + if (adapter->mii_bus) { + int regnum = addr; + + if (devad != MDIO_DEVAD_NONE) + regnum |= (devad << 16) | MII_ADDR_C45; + + return mdiobus_read(adapter->mii_bus, prtad, regnum); + } + if (prtad != hw->phy.mdio.prtad) return -EINVAL; rc = hw->phy.ops.read_reg(hw, addr, devad, &value); @@ -8803,6 +8814,15 @@ static int ixgbe_mdio_write(struct net_device *netdev, int prtad, int devad, struct ixgbe_adapter *adapter = netdev_priv(netdev); struct ixgbe_hw *hw = &adapter->hw; + if (adapter->mii_bus) { + int regnum = addr; + + if (devad != MDIO_DEVAD_NONE) + regnum |= (devad << 16) | MII_ADDR_C45; + + return mdiobus_write(adapter->mii_bus, prtad, regnum, value); + } + if (prtad != hw->phy.mdio.prtad) return -EINVAL; return hw->phy.ops.write_reg(hw, addr, devad, value); @@ -9979,7 +9999,8 @@ static int ixgbe_configure_bridge_mode(struct ixgbe_adapter *adapter, } static int ixgbe_ndo_bridge_setlink(struct net_device *dev, - struct nlmsghdr *nlh, u16 flags) + struct nlmsghdr *nlh, u16 flags, + struct netlink_ext_ack *extack) { struct ixgbe_adapter *adapter = netdev_priv(dev); struct nlattr *attr, *br_spec; @@ -10191,7 +10212,7 @@ ixgbe_features_check(struct sk_buff *skb, struct net_device *dev, */ if (skb->encapsulation && !(features & NETIF_F_TSO_MANGLEID)) { #ifdef CONFIG_IXGBE_IPSEC - if (!skb->sp) + if (!secpath_exists(skb)) #endif features &= ~NETIF_F_TSO; } @@ -10476,7 +10497,7 @@ void ixgbe_txrx_ring_disable(struct 
ixgbe_adapter *adapter, int ring) ixgbe_disable_rxr_hw(adapter, rx_ring); if (xdp_ring) - synchronize_sched(); + synchronize_rcu(); /* Rx/Tx/XDP Tx share the same napi context. */ napi_disable(&rx_ring->q_vector->napi); @@ -10517,7 +10538,8 @@ void ixgbe_txrx_ring_enable(struct ixgbe_adapter *adapter, int ring) ixgbe_configure_rx_ring(adapter, rx_ring); clear_bit(__IXGBE_TX_DISABLED, &tx_ring->state); - clear_bit(__IXGBE_TX_DISABLED, &xdp_ring->state); + if (xdp_ring) + clear_bit(__IXGBE_TX_DISABLED, &xdp_ring->state); } /** @@ -11119,6 +11141,8 @@ skip_sriov: IXGBE_LINK_SPEED_10GB_FULL | IXGBE_LINK_SPEED_1GB_FULL, true); + ixgbe_mii_bus_init(hw); + return 0; err_register: @@ -11169,6 +11193,8 @@ static void ixgbe_remove(struct pci_dev *pdev) set_bit(__IXGBE_REMOVING, &adapter->state); cancel_work_sync(&adapter->service_task); + if (adapter->mii_bus) + mdiobus_unregister(adapter->mii_bus); #ifdef CONFIG_IXGBE_DCA if (adapter->flags & IXGBE_FLAG_DCA_ENABLED) { diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c index 919a7af84b42..cc4907f9ff02 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c @@ -3,6 +3,7 @@ #include <linux/pci.h> #include <linux/delay.h> +#include <linux/iopoll.h> #include <linux/sched.h> #include "ixgbe.h" @@ -658,6 +659,304 @@ s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, return status; } +#define IXGBE_HW_READ_REG(addr) IXGBE_READ_REG(hw, addr) + +/** + * ixgbe_msca_cmd - Write the command register and poll for completion/timeout + * @hw: pointer to hardware structure + * @cmd: command register value to write + **/ +static s32 ixgbe_msca_cmd(struct ixgbe_hw *hw, u32 cmd) +{ + IXGBE_WRITE_REG(hw, IXGBE_MSCA, cmd); + + return readx_poll_timeout(IXGBE_HW_READ_REG, IXGBE_MSCA, cmd, + !(cmd & IXGBE_MSCA_MDI_COMMAND), 10, + 10 * IXGBE_MDIO_COMMAND_TIMEOUT); +} + +/** + * ixgbe_mii_bus_read_generic - Read a clause 22/45 register with gssr flags + * @hw: pointer to hardware structure + * @addr: address + * @regnum: register number + * @gssr: semaphore flags to acquire + **/ +static s32 ixgbe_mii_bus_read_generic(struct ixgbe_hw *hw, int addr, + int regnum, u32 gssr) +{ + u32 hwaddr, cmd; + s32 data; + + if (hw->mac.ops.acquire_swfw_sync(hw, gssr)) + return -EBUSY; + + hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT; + if (regnum & MII_ADDR_C45) { + hwaddr |= regnum & GENMASK(21, 0); + cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND; + } else { + hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT; + cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | + IXGBE_MSCA_READ_AUTOINC | IXGBE_MSCA_MDI_COMMAND; + } + + data = ixgbe_msca_cmd(hw, cmd); + if (data < 0) + goto mii_bus_read_done; + + /* For a clause 45 access the address cycle just completed, we still + * need to do the read command, otherwise just get the data + */ + if (!(regnum & MII_ADDR_C45)) + goto do_mii_bus_read; + + cmd = hwaddr | IXGBE_MSCA_READ | IXGBE_MSCA_MDI_COMMAND; + data = ixgbe_msca_cmd(hw, cmd); + if (data < 0) + goto mii_bus_read_done; + +do_mii_bus_read: + data = IXGBE_READ_REG(hw, IXGBE_MSRWD); + data = (data >> IXGBE_MSRWD_READ_DATA_SHIFT) & GENMASK(16, 0); + +mii_bus_read_done: + hw->mac.ops.release_swfw_sync(hw, gssr); + return data; +} + +/** + * ixgbe_mii_bus_write_generic - Write a clause 22/45 register with gssr flags + * @hw: pointer to hardware structure + * @addr: address + * @regnum: register number + * @val: value to write + * @gssr: semaphore flags to acquire + **/ +static s32 ixgbe_mii_bus_write_generic(struct 
ixgbe_hw *hw, int addr, + int regnum, u16 val, u32 gssr) +{ + u32 hwaddr, cmd; + s32 err; + + if (hw->mac.ops.acquire_swfw_sync(hw, gssr)) + return -EBUSY; + + IXGBE_WRITE_REG(hw, IXGBE_MSRWD, (u32)val); + + hwaddr = addr << IXGBE_MSCA_PHY_ADDR_SHIFT; + if (regnum & MII_ADDR_C45) { + hwaddr |= regnum & GENMASK(21, 0); + cmd = hwaddr | IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND; + } else { + hwaddr |= (regnum & GENMASK(5, 0)) << IXGBE_MSCA_DEV_TYPE_SHIFT; + cmd = hwaddr | IXGBE_MSCA_OLD_PROTOCOL | IXGBE_MSCA_WRITE + IXGBE_MSCA_MDI_COMMAND; + } + + /* For clause 45 this is an address cycle, for clause 22 this is the + * entire transaction + */ + err = ixgbe_msca_cmd(hw, cmd); + if (err < 0 || !(regnum & MII_ADDR_C45)) + goto mii_bus_write_done; + + cmd = hwaddr | IXGBE_MSCA_WRITE | IXGBE_MSCA_MDI_COMMAND; + err = ixgbe_msca_cmd(hw, cmd); + +mii_bus_write_done: + hw->mac.ops.release_swfw_sync(hw, gssr); + return err; +} + +/** + * ixgbe_mii_bus_read - Read a clause 22/45 register + * @bus: pointer to mii_bus structure + * @addr: address + * @regnum: register number + **/ +static s32 ixgbe_mii_bus_read(struct mii_bus *bus, int addr, int regnum) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + return ixgbe_mii_bus_read_generic(hw, addr, regnum, gssr); +} + +/** + * ixgbe_mii_bus_write - Write a clause 22/45 register + * @bus: pointer to mii_bus structure + * @addr: address + * @regnum: register number + * @val: value to write + **/ +static s32 ixgbe_mii_bus_write(struct mii_bus *bus, int addr, int regnum, + u16 val) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + return ixgbe_mii_bus_write_generic(hw, addr, regnum, val, gssr); +} + +/** + * ixgbe_x550em_a_mii_bus_read - Read a clause 22/45 register on x550em_a + * @bus: pointer to mii_bus structure + * @addr: address + * @regnum: register number + **/ +static s32 ixgbe_x550em_a_mii_bus_read(struct mii_bus *bus, int addr, + int regnum) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM; + return ixgbe_mii_bus_read_generic(hw, addr, regnum, gssr); +} + +/** + * ixgbe_x550em_a_mii_bus_write - Write a clause 22/45 register on x550em_a + * @bus: pointer to mii_bus structure + * @addr: address + * @regnum: register number + * @val: value to write + **/ +static s32 ixgbe_x550em_a_mii_bus_write(struct mii_bus *bus, int addr, + int regnum, u16 val) +{ + struct ixgbe_adapter *adapter = bus->priv; + struct ixgbe_hw *hw = &adapter->hw; + u32 gssr = hw->phy.phy_semaphore_mask; + + gssr |= IXGBE_GSSR_TOKEN_SM | IXGBE_GSSR_PHY0_SM; + return ixgbe_mii_bus_write_generic(hw, addr, regnum, val, gssr); +} + +/** + * ixgbe_get_first_secondary_devfn - get first device downstream of root port + * @devfn: PCI_DEVFN of root port on domain 0, bus 0 + * + * Returns pci_dev pointer to PCI_DEVFN(0, 0) on subordinate side of root + * on domain 0, bus 0, devfn = 'devfn' + **/ +static struct pci_dev *ixgbe_get_first_secondary_devfn(unsigned int devfn) +{ + struct pci_dev *rp_pdev; + int bus; + + rp_pdev = pci_get_domain_bus_and_slot(0, 0, devfn); + if (rp_pdev && rp_pdev->subordinate) { + bus = rp_pdev->subordinate->number; + return pci_get_domain_bus_and_slot(0, bus, 0); + } + + return NULL; +} + +/** + * ixgbe_x550em_a_has_mii - is this the first ixgbe x550em_a PCI function? 
+ * @hw: pointer to hardware structure + * + * Returns true if hw points to the lowest numbered PCI B:D.F x550_em_a device in + * the SoC. There are up to 4 MACs sharing a single MDIO bus on the x550em_a, + * but we only want to register one MDIO bus. + **/ +static bool ixgbe_x550em_a_has_mii(struct ixgbe_hw *hw) +{ + struct ixgbe_adapter *adapter = hw->back; + struct pci_dev *pdev = adapter->pdev; + struct pci_dev *func0_pdev; + + /* For the C3000 family of SoCs (x550em_a) the internal ixgbe devices + * are always downstream of root ports @ 0000:00:16.0 & 0000:00:17.0. + * It's not valid for function 0 to be disabled while function 1 is up, + * so the lowest numbered ixgbe dev will be device 0 function 0 on one + * of those two root ports. + */ + func0_pdev = ixgbe_get_first_secondary_devfn(PCI_DEVFN(0x16, 0)); + if (func0_pdev) { + if (func0_pdev == pdev) + return true; + else + return false; + } + func0_pdev = ixgbe_get_first_secondary_devfn(PCI_DEVFN(0x17, 0)); + if (func0_pdev == pdev) + return true; + + return false; +} + +/** + * ixgbe_mii_bus_init - mii_bus structure setup + * @hw: pointer to hardware structure + * + * Returns 0 on success, negative on failure + * + * ixgbe_mii_bus_init initializes a mii_bus structure in the adapter + **/ +s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw) +{ + struct ixgbe_adapter *adapter = hw->back; + struct pci_dev *pdev = adapter->pdev; + struct device *dev = &adapter->netdev->dev; + struct mii_bus *bus; + + adapter->mii_bus = devm_mdiobus_alloc(dev); + if (!adapter->mii_bus) + return -ENOMEM; + + bus = adapter->mii_bus; + + switch (hw->device_id) { + /* C3000 SoCs */ + case IXGBE_DEV_ID_X550EM_A_KR: + case IXGBE_DEV_ID_X550EM_A_KR_L: + case IXGBE_DEV_ID_X550EM_A_SFP_N: + case IXGBE_DEV_ID_X550EM_A_SGMII: + case IXGBE_DEV_ID_X550EM_A_SGMII_L: + case IXGBE_DEV_ID_X550EM_A_10G_T: + case IXGBE_DEV_ID_X550EM_A_SFP: + case IXGBE_DEV_ID_X550EM_A_1G_T: + case IXGBE_DEV_ID_X550EM_A_1G_T_L: + if (!ixgbe_x550em_a_has_mii(hw)) + goto ixgbe_no_mii_bus; + bus->read = &ixgbe_x550em_a_mii_bus_read; + bus->write = &ixgbe_x550em_a_mii_bus_write; + break; + default: + bus->read = &ixgbe_mii_bus_read; + bus->write = &ixgbe_mii_bus_write; + break; + } + + /* Use the position of the device in the PCI hierarchy as the id */ + snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mdio-%s", ixgbe_driver_name, + pci_name(pdev)); + + bus->name = "ixgbe-mdio"; + bus->priv = adapter; + bus->parent = dev; + bus->phy_mask = GENMASK(31, 0); + + /* Support clause 22/45 natively. ixgbe_probe() sets MDIO_EMULATE_C22; + * unfortunately that causes some clause 22 frames to be sent with + * clause 45 addressing. We don't want that. 
+ */ + hw->phy.mdio.mode_support = MDIO_SUPPORTS_C45 | MDIO_SUPPORTS_C22; + + return mdiobus_register(bus); + +ixgbe_no_mii_bus: + devm_mdiobus_free(dev, bus); + adapter->mii_bus = NULL; + return -ENODEV; +} + /** * ixgbe_setup_phy_link_generic - Set and restart autoneg * @hw: pointer to hardware structure diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h index 64e44e01c973..214b01085718 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h @@ -120,6 +120,8 @@ /* SFP+ SFF-8472 Compliance code */ #define IXGBE_SFF_SFF_8472_UNSUP 0x00 +s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw); + s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw); s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw); s32 ixgbe_read_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c index b3e0d8bb5cbd..d81a50dc9535 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c @@ -443,22 +443,52 @@ static int ixgbe_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) } /** - * ixgbe_ptp_gettime + * ixgbe_ptp_gettimex * @ptp: the ptp clock structure - * @ts: timespec structure to hold the current time value + * @ts: timespec to hold the PHC timestamp + * @sts: structure to hold the system time before and after reading the PHC * * read the timecounter and return the correct value in ns, * after converting it into a struct timespec64. */ -static int ixgbe_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts) +static int ixgbe_ptp_gettimex(struct ptp_clock_info *ptp, + struct timespec64 *ts, + struct ptp_system_timestamp *sts) { struct ixgbe_adapter *adapter = container_of(ptp, struct ixgbe_adapter, ptp_caps); + struct ixgbe_hw *hw = &adapter->hw; unsigned long flags; - u64 ns; + u64 ns, stamp; spin_lock_irqsave(&adapter->tmreg_lock, flags); - ns = timecounter_read(&adapter->hw_tc); + + switch (adapter->hw.mac.type) { + case ixgbe_mac_X550: + case ixgbe_mac_X550EM_x: + case ixgbe_mac_x550em_a: + /* Upper 32 bits represent billions of cycles, lower 32 bits + * represent cycles. However, we use timespec64_to_ns for the + * correct math even though the units haven't been corrected + * yet. 
+ */ + ptp_read_system_prets(sts); + IXGBE_READ_REG(hw, IXGBE_SYSTIMR); + ptp_read_system_postts(sts); + ts->tv_nsec = IXGBE_READ_REG(hw, IXGBE_SYSTIML); + ts->tv_sec = IXGBE_READ_REG(hw, IXGBE_SYSTIMH); + stamp = timespec64_to_ns(ts); + break; + default: + ptp_read_system_prets(sts); + stamp = IXGBE_READ_REG(hw, IXGBE_SYSTIML); + ptp_read_system_postts(sts); + stamp |= (u64)IXGBE_READ_REG(hw, IXGBE_SYSTIMH) << 32; + break; + } + + ns = timecounter_cyc2time(&adapter->hw_tc, stamp); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); *ts = ns_to_timespec64(ns); @@ -567,10 +597,14 @@ void ixgbe_ptp_overflow_check(struct ixgbe_adapter *adapter) { bool timeout = time_is_before_jiffies(adapter->last_overflow_check + IXGBE_OVERFLOW_PERIOD); - struct timespec64 ts; + unsigned long flags; if (timeout) { - ixgbe_ptp_gettime(&adapter->ptp_caps, &ts); + /* Update the timecounter */ + spin_lock_irqsave(&adapter->tmreg_lock, flags); + timecounter_read(&adapter->hw_tc); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); + adapter->last_overflow_check = jiffies; } } @@ -1216,7 +1250,7 @@ static long ixgbe_ptp_create_clock(struct ixgbe_adapter *adapter) adapter->ptp_caps.pps = 1; adapter->ptp_caps.adjfreq = ixgbe_ptp_adjfreq_82599; adapter->ptp_caps.adjtime = ixgbe_ptp_adjtime; - adapter->ptp_caps.gettime64 = ixgbe_ptp_gettime; + adapter->ptp_caps.gettimex64 = ixgbe_ptp_gettimex; adapter->ptp_caps.settime64 = ixgbe_ptp_settime; adapter->ptp_caps.enable = ixgbe_ptp_feature_enable; adapter->ptp_setup_sdp = ixgbe_ptp_setup_sdp_x540; @@ -1233,7 +1267,7 @@ static long ixgbe_ptp_create_clock(struct ixgbe_adapter *adapter) adapter->ptp_caps.pps = 0; adapter->ptp_caps.adjfreq = ixgbe_ptp_adjfreq_82599; adapter->ptp_caps.adjtime = ixgbe_ptp_adjtime; - adapter->ptp_caps.gettime64 = ixgbe_ptp_gettime; + adapter->ptp_caps.gettimex64 = ixgbe_ptp_gettimex; adapter->ptp_caps.settime64 = ixgbe_ptp_settime; adapter->ptp_caps.enable = ixgbe_ptp_feature_enable; break; @@ -1249,7 +1283,7 @@ static long ixgbe_ptp_create_clock(struct ixgbe_adapter *adapter) adapter->ptp_caps.pps = 0; adapter->ptp_caps.adjfreq = ixgbe_ptp_adjfreq_X550; adapter->ptp_caps.adjtime = ixgbe_ptp_adjtime; - adapter->ptp_caps.gettime64 = ixgbe_ptp_gettime; + adapter->ptp_caps.gettimex64 = ixgbe_ptp_gettimex; adapter->ptp_caps.settime64 = ixgbe_ptp_settime; adapter->ptp_caps.enable = ixgbe_ptp_feature_enable; adapter->ptp_setup_sdp = NULL; diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c index e8a3231be0bf..5170dd9d8705 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c +++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c @@ -450,12 +450,14 @@ int ixgbevf_ipsec_tx(struct ixgbevf_ring *tx_ring, struct ixgbevf_adapter *adapter = netdev_priv(tx_ring->netdev); struct ixgbevf_ipsec *ipsec = adapter->ipsec; struct xfrm_state *xs; + struct sec_path *sp; struct tx_sa *tsa; u16 sa_idx; - if (unlikely(!first->skb->sp->len)) { + sp = skb_sec_path(first->skb); + if (unlikely(!sp->len)) { netdev_err(tx_ring->netdev, "%s: no xfrm state len = %d\n", - __func__, first->skb->sp->len); + __func__, sp->len); return 0; } @@ -546,6 +548,7 @@ void ixgbevf_ipsec_rx(struct ixgbevf_ring *rx_ring, struct xfrm_state *xs = NULL; struct ipv6hdr *ip6 = NULL; struct iphdr *ip4 = NULL; + struct sec_path *sp; void *daddr; __be32 spi; u8 *c_hdr; @@ -585,12 +588,12 @@ void ixgbevf_ipsec_rx(struct ixgbevf_ring *rx_ring, if (unlikely(!xs)) return; - skb->sp = secpath_dup(skb->sp); - if (unlikely(!skb->sp)) + sp = secpath_set(skb); 
+ if (unlikely(!sp)) return; - skb->sp->xvec[skb->sp->len++] = xs; - skb->sp->olen++; + sp->xvec[sp->len++] = xs; + sp->olen++; xo = xfrm_offload(skb); xo->flags = CRYPTO_DONE; xo->status = CRYPTO_SUCCESS; diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 5e47ede7e832..49e23afa05a2 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -1293,16 +1293,20 @@ static int ixgbevf_poll(struct napi_struct *napi, int budget) /* If all work not completed, return budget and keep polling */ if (!clean_complete) return budget; - /* all work done, exit the polling mode */ - napi_complete_done(napi, work_done); - if (adapter->rx_itr_setting == 1) - ixgbevf_set_itr(q_vector); - if (!test_bit(__IXGBEVF_DOWN, &adapter->state) && - !test_bit(__IXGBEVF_REMOVING, &adapter->state)) - ixgbevf_irq_enable_queues(adapter, - BIT(q_vector->v_idx)); - return 0; + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) { + if (adapter->rx_itr_setting == 1) + ixgbevf_set_itr(q_vector); + if (!test_bit(__IXGBEVF_DOWN, &adapter->state) && + !test_bit(__IXGBEVF_REMOVING, &adapter->state)) + ixgbevf_irq_enable_queues(adapter, + BIT(q_vector->v_idx)); + } + + return min(work_done, budget - 1); } /** @@ -4016,6 +4020,8 @@ static void ixgbevf_tx_map(struct ixgbevf_ring *tx_ring, /* set the timestamp */ first->time_stamp = jiffies; + skb_tx_timestamp(skb); + /* Force memory writes to complete before letting h/w know there * are new descriptors to fetch. (Only applicable for weak-ordered * memory model archs, such as IA-64). @@ -4151,7 +4157,7 @@ static int ixgbevf_xmit_frame_ring(struct sk_buff *skb, first->protocol = vlan_get_protocol(skb); #ifdef CONFIG_IXGBEVF_IPSEC - if (skb->sp && !ixgbevf_ipsec_tx(tx_ring, first, &ipsec_tx)) + if (secpath_exists(skb) && !ixgbevf_ipsec_tx(tx_ring, first, &ipsec_tx)) goto out_drop; #endif tso = ixgbevf_tso(tx_ring, first, &hdr_len, &ipsec_tx); diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c index 1e9bcbdc6a90..2f427271a793 100644 --- a/drivers/net/ethernet/marvell/mv643xx_eth.c +++ b/drivers/net/ethernet/marvell/mv643xx_eth.c @@ -1499,23 +1499,16 @@ mv643xx_eth_get_link_ksettings_phy(struct mv643xx_eth_private *mp, struct ethtool_link_ksettings *cmd) { struct net_device *dev = mp->dev; - u32 supported, advertising; phy_ethtool_ksettings_get(dev->phydev, cmd); /* * The MAC does not support 1000baseT_Half. 
*/ - ethtool_convert_link_mode_to_legacy_u32(&supported, - cmd->link_modes.supported); - ethtool_convert_link_mode_to_legacy_u32(&advertising, - cmd->link_modes.advertising); - supported &= ~SUPPORTED_1000baseT_Half; - advertising &= ~ADVERTISED_1000baseT_Half; - ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported, - supported); - ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising, - advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + cmd->link_modes.supported); + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + cmd->link_modes.advertising); return 0; } @@ -3031,10 +3024,12 @@ static void phy_init(struct mv643xx_eth_private *mp, int speed, int duplex) phy->autoneg = AUTONEG_ENABLE; phy->speed = 0; phy->duplex = 0; - phy->advertising = phy->supported | ADVERTISED_Autoneg; + linkmode_copy(phy->advertising, phy->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + phy->advertising); } else { phy->autoneg = AUTONEG_DISABLE; - phy->advertising = 0; + linkmode_zero(phy->advertising); phy->speed = speed; phy->duplex = duplex; } diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index 61b23497f836..9d4568eb2297 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -4248,8 +4248,7 @@ static int mvneta_ethtool_set_eee(struct net_device *dev, /* The Armada 37x documents do not give limits for this other than * it being an 8-bit register. */ - if (eee->tx_lpi_enabled && - (eee->tx_lpi_timer < 0 || eee->tx_lpi_timer > 255)) + if (eee->tx_lpi_enabled && eee->tx_lpi_timer > 255) return -EINVAL; lpi_ctl0 = mvreg_read(pp, MVNETA_LPI_CTRL_0); diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index f1dab0b55769..6a059d6ee03f 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -1165,28 +1165,13 @@ static void mvpp22_gop_setup_irq(struct mvpp2_port *port) */ static int mvpp22_comphy_init(struct mvpp2_port *port) { - enum phy_mode mode; int ret; if (!port->comphy) return 0; - switch (port->phy_interface) { - case PHY_INTERFACE_MODE_SGMII: - case PHY_INTERFACE_MODE_1000BASEX: - mode = PHY_MODE_SGMII; - break; - case PHY_INTERFACE_MODE_2500BASEX: - mode = PHY_MODE_2500SGMII; - break; - case PHY_INTERFACE_MODE_10GKR: - mode = PHY_MODE_10GKR; - break; - default: - return -EINVAL; - } - - ret = phy_set_mode(port->comphy, mode); + ret = phy_set_mode_ext(port->comphy, PHY_MODE_ETHERNET, + port->phy_interface); if (ret) return ret; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 12db256c8c9f..742f0c1f60df 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -31,6 +31,7 @@ * @resp: command response * @link_info: link related information * @event_cb: callback for linkchange events + * @event_cb_lock: lock for serializing callback with unregister * @cmd_pend: flag set before new command is started * flag cleared after command response is received * @cgx: parent cgx port @@ -43,6 +44,7 @@ struct lmac { u64 resp; struct cgx_link_user_info link_info; struct cgx_event_cb event_cb; + spinlock_t event_cb_lock; bool cmd_pend; struct cgx *cgx; u8 lmac_id; @@ -55,6 +57,8 @@ struct cgx { u8 cgx_id; u8 lmac_count; struct lmac *lmac_idmap[MAX_LMAC_PER_CGX]; + struct work_struct cgx_cmd_work; + struct workqueue_struct 
*cgx_cmd_workq; struct list_head cgx_list; }; @@ -66,6 +70,9 @@ static u32 cgx_speed_mbps[CGX_LINK_SPEED_MAX]; /* Convert firmware lmac type encoding to string */ static char *cgx_lmactype_string[LMAC_MODE_MAX]; +/* CGX PHY management internal APIs */ +static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool en); + /* Supported devices */ static const struct pci_device_id cgx_id_table[] = { { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_CGX) }, @@ -92,17 +99,21 @@ static inline struct lmac *lmac_pdata(u8 lmac_id, struct cgx *cgx) return cgx->lmac_idmap[lmac_id]; } -int cgx_get_cgx_cnt(void) +int cgx_get_cgxcnt_max(void) { struct cgx *cgx_dev; - int count = 0; + int idmax = -ENODEV; list_for_each_entry(cgx_dev, &cgx_list, cgx_list) - count++; + if (cgx_dev->cgx_id > idmax) + idmax = cgx_dev->cgx_id; + + if (idmax < 0) + return 0; - return count; + return idmax + 1; } -EXPORT_SYMBOL(cgx_get_cgx_cnt); +EXPORT_SYMBOL(cgx_get_cgxcnt_max); int cgx_get_lmac_cnt(void *cgxd) { @@ -445,6 +456,9 @@ static inline void cgx_link_change_handler(u64 lstat, lmac->link_info = event.link_uinfo; linfo = &lmac->link_info; + /* Ensure callback doesn't get unregistered until we finish it */ + spin_lock(&lmac->event_cb_lock); + if (!lmac->event_cb.notify_link_chg) { dev_dbg(dev, "cgx port %d:%d Link change handler null", cgx->cgx_id, lmac->lmac_id); @@ -455,11 +469,13 @@ static inline void cgx_link_change_handler(u64 lstat, dev_info(dev, "cgx port %d:%d Link is %s %d Mbps\n", cgx->cgx_id, lmac->lmac_id, linfo->link_up ? "UP" : "DOWN", linfo->speed); - return; + goto err; } if (lmac->event_cb.notify_link_chg(&event, lmac->event_cb.data)) dev_err(dev, "event notification failure\n"); +err: + spin_unlock(&lmac->event_cb_lock); } static inline bool cgx_cmdresp_is_linkevent(u64 event) @@ -482,6 +498,60 @@ static inline bool cgx_event_is_linkevent(u64 event) return false; } +static inline int cgx_fwi_get_mkex_prfl_sz(u64 *prfl_sz, + struct cgx *cgx) +{ + u64 req = 0; + u64 resp; + int err; + + req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_MKEX_PRFL_SIZE, req); + err = cgx_fwi_cmd_generic(req, &resp, cgx, 0); + if (!err) + *prfl_sz = FIELD_GET(RESP_MKEX_PRFL_SIZE, resp); + + return err; +} + +static inline int cgx_fwi_get_mkex_prfl_addr(u64 *prfl_addr, + struct cgx *cgx) +{ + u64 req = 0; + u64 resp; + int err; + + req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_MKEX_PRFL_ADDR, req); + err = cgx_fwi_cmd_generic(req, &resp, cgx, 0); + if (!err) + *prfl_addr = FIELD_GET(RESP_MKEX_PRFL_ADDR, resp); + + return err; +} + +int cgx_get_mkex_prfl_info(u64 *addr, u64 *size) +{ + struct cgx *cgx_dev; + int err; + + if (!addr || !size) + return -EINVAL; + + cgx_dev = list_first_entry(&cgx_list, struct cgx, cgx_list); + if (!cgx_dev) + return -ENXIO; + + err = cgx_fwi_get_mkex_prfl_sz(size, cgx_dev); + if (err) + return -EIO; + + err = cgx_fwi_get_mkex_prfl_addr(addr, cgx_dev); + if (err) + return -EIO; + + return 0; +} +EXPORT_SYMBOL(cgx_get_mkex_prfl_info); + static irqreturn_t cgx_fwi_event_handler(int irq, void *data) { struct lmac *lmac = data; @@ -548,6 +618,38 @@ int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id) } EXPORT_SYMBOL(cgx_lmac_evh_register); +int cgx_lmac_evh_unregister(void *cgxd, int lmac_id) +{ + struct lmac *lmac; + unsigned long flags; + struct cgx *cgx = cgxd; + + lmac = lmac_pdata(lmac_id, cgx); + if (!lmac) + return -ENODEV; + + spin_lock_irqsave(&lmac->event_cb_lock, flags); + lmac->event_cb.notify_link_chg = NULL; + lmac->event_cb.data = NULL; + 
spin_unlock_irqrestore(&lmac->event_cb_lock, flags); + + return 0; +} +EXPORT_SYMBOL(cgx_lmac_evh_unregister); + +static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool enable) +{ + u64 req = 0; + u64 resp; + + if (enable) + req = FIELD_SET(CMDREG_ID, CGX_CMD_LINK_BRING_UP, req); + else + req = FIELD_SET(CMDREG_ID, CGX_CMD_LINK_BRING_DOWN, req); + + return cgx_fwi_cmd_generic(req, &resp, cgx, lmac_id); +} + static inline int cgx_fwi_read_version(u64 *resp, struct cgx *cgx) { u64 req = 0; @@ -581,6 +683,34 @@ static int cgx_lmac_verify_fwi_version(struct cgx *cgx) return 0; } +static void cgx_lmac_linkup_work(struct work_struct *work) +{ + struct cgx *cgx = container_of(work, struct cgx, cgx_cmd_work); + struct device *dev = &cgx->pdev->dev; + int i, err; + + /* Do Link up for all the lmacs */ + for (i = 0; i < cgx->lmac_count; i++) { + err = cgx_fwi_link_change(cgx, i, true); + if (err) + dev_info(dev, "cgx port %d:%d Link up command failed\n", + cgx->cgx_id, i); + } +} + +int cgx_lmac_linkup_start(void *cgxd) +{ + struct cgx *cgx = cgxd; + + if (!cgx) + return -ENODEV; + + queue_work(cgx->cgx_cmd_workq, &cgx->cgx_cmd_work); + + return 0; +} +EXPORT_SYMBOL(cgx_lmac_linkup_start); + static int cgx_lmac_init(struct cgx *cgx) { struct lmac *lmac; @@ -602,6 +732,7 @@ static int cgx_lmac_init(struct cgx *cgx) lmac->cgx = cgx; init_waitqueue_head(&lmac->wq_cmd_cmplt); mutex_init(&lmac->cmd_lock); + spin_lock_init(&lmac->event_cb_lock); err = request_irq(pci_irq_vector(cgx->pdev, CGX_LMAC_FWI + i * 9), cgx_fwi_event_handler, 0, lmac->name, lmac); @@ -624,6 +755,12 @@ static int cgx_lmac_exit(struct cgx *cgx) struct lmac *lmac; int i; + if (cgx->cgx_cmd_workq) { + flush_workqueue(cgx->cgx_cmd_workq); + destroy_workqueue(cgx->cgx_cmd_workq); + cgx->cgx_cmd_workq = NULL; + } + /* Free all lmac related resources */ for (i = 0; i < cgx->lmac_count; i++) { lmac = cgx->lmac_idmap[i]; @@ -679,8 +816,19 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto err_release_regions; } + cgx->cgx_id = (pci_resource_start(pdev, PCI_CFG_REG_BAR_NUM) >> 24) + & CGX_ID_MASK; + + /* init wq for processing linkup requests */ + INIT_WORK(&cgx->cgx_cmd_work, cgx_lmac_linkup_work); + cgx->cgx_cmd_workq = alloc_workqueue("cgx_cmd_workq", 0, 0); + if (!cgx->cgx_cmd_workq) { + dev_err(dev, "alloc workqueue failed for cgx cmd"); + err = -ENOMEM; + goto err_release_regions; + } + list_add(&cgx->cgx_list, &cgx_list); - cgx->cgx_id = cgx_get_cgx_cnt() - 1; cgx_link_usertable_init(); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index 0a66d2717442..206dc5dc1df8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -20,40 +20,41 @@ /* PCI BAR nos */ #define PCI_CFG_REG_BAR_NUM 0 -#define MAX_CGX 3 +#define CGX_ID_MASK 0x7 #define MAX_LMAC_PER_CGX 4 +#define CGX_FIFO_LEN 65536 /* 64K for both Rx & Tx */ #define CGX_OFFSET(x) ((x) * MAX_LMAC_PER_CGX) /* Registers */ #define CGXX_CMRX_CFG 0x00 -#define CMR_EN BIT_ULL(55) -#define DATA_PKT_TX_EN BIT_ULL(53) -#define DATA_PKT_RX_EN BIT_ULL(54) -#define CGX_LMAC_TYPE_SHIFT 40 -#define CGX_LMAC_TYPE_MASK 0xF +#define CMR_EN BIT_ULL(55) +#define DATA_PKT_TX_EN BIT_ULL(53) +#define DATA_PKT_RX_EN BIT_ULL(54) +#define CGX_LMAC_TYPE_SHIFT 40 +#define CGX_LMAC_TYPE_MASK 0xF #define CGXX_CMRX_INT 0x040 -#define FW_CGX_INT BIT_ULL(1) +#define FW_CGX_INT BIT_ULL(1) #define CGXX_CMRX_INT_ENA_W1S 0x058 #define CGXX_CMRX_RX_ID_MAP 
0x060 #define CGXX_CMRX_RX_STAT0 0x070 #define CGXX_CMRX_RX_LMACS 0x128 #define CGXX_CMRX_RX_DMAC_CTL0 0x1F8 -#define CGX_DMAC_CTL0_CAM_ENABLE BIT_ULL(3) -#define CGX_DMAC_CAM_ACCEPT BIT_ULL(3) -#define CGX_DMAC_MCAST_MODE BIT_ULL(1) -#define CGX_DMAC_BCAST_MODE BIT_ULL(0) +#define CGX_DMAC_CTL0_CAM_ENABLE BIT_ULL(3) +#define CGX_DMAC_CAM_ACCEPT BIT_ULL(3) +#define CGX_DMAC_MCAST_MODE BIT_ULL(1) +#define CGX_DMAC_BCAST_MODE BIT_ULL(0) #define CGXX_CMRX_RX_DMAC_CAM0 0x200 -#define CGX_DMAC_CAM_ADDR_ENABLE BIT_ULL(48) +#define CGX_DMAC_CAM_ADDR_ENABLE BIT_ULL(48) #define CGXX_CMRX_RX_DMAC_CAM1 0x400 -#define CGX_RX_DMAC_ADR_MASK GENMASK_ULL(47, 0) +#define CGX_RX_DMAC_ADR_MASK GENMASK_ULL(47, 0) #define CGXX_CMRX_TX_STAT0 0x700 #define CGXX_SCRATCH0_REG 0x1050 #define CGXX_SCRATCH1_REG 0x1058 #define CGX_CONST 0x2000 #define CGXX_SPUX_CONTROL1 0x10000 -#define CGXX_SPUX_CONTROL1_LBK BIT_ULL(14) +#define CGXX_SPUX_CONTROL1_LBK BIT_ULL(14) #define CGXX_GMP_PCS_MRX_CTL 0x30000 -#define CGXX_GMP_PCS_MRX_CTL_LBK BIT_ULL(14) +#define CGXX_GMP_PCS_MRX_CTL_LBK BIT_ULL(14) #define CGX_COMMAND_REG CGXX_SCRATCH1_REG #define CGX_EVENT_REG CGXX_SCRATCH0_REG @@ -94,11 +95,12 @@ struct cgx_event_cb { extern struct pci_driver cgx_driver; -int cgx_get_cgx_cnt(void); +int cgx_get_cgxcnt_max(void); int cgx_get_lmac_cnt(void *cgxd); void *cgx_get_pdata(int cgx_id); int cgx_set_pkind(void *cgxd, u8 lmac_id, int pkind); int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id); +int cgx_lmac_evh_unregister(void *cgxd, int lmac_id); int cgx_get_tx_stats(void *cgxd, int lmac_id, int idx, u64 *tx_stat); int cgx_get_rx_stats(void *cgxd, int lmac_id, int idx, u64 *rx_stat); int cgx_lmac_rx_tx_enable(void *cgxd, int lmac_id, bool enable); @@ -108,4 +110,6 @@ void cgx_lmac_promisc_config(int cgx_id, int lmac_id, bool enable); int cgx_lmac_internal_loopback(void *cgxd, int lmac_id, bool enable); int cgx_get_link_info(void *cgxd, int lmac_id, struct cgx_link_user_info *linfo); +int cgx_lmac_linkup_start(void *cgxd); +int cgx_get_mkex_prfl_info(u64 *addr, u64 *size); #endif /* CGX_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h index fa17af3f4ba7..fb3ba4968a9b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h @@ -78,8 +78,8 @@ enum cgx_cmd_id { CGX_CMD_LINK_STATE_CHANGE, CGX_CMD_MODE_CHANGE, /* hot plug support */ CGX_CMD_INTF_SHUTDOWN, - CGX_CMD_IRQ_ENABLE, - CGX_CMD_IRQ_DISABLE, + CGX_CMD_GET_MKEX_PRFL_SIZE, + CGX_CMD_GET_MKEX_PRFL_ADDR }; /* async event ids */ @@ -139,6 +139,16 @@ enum cgx_cmd_own { */ #define RESP_MAC_ADDR GENMASK_ULL(56, 9) +/* Response to cmd ID as CGX_CMD_GET_MKEX_PRFL_SIZE with cmd status as + * CGX_STAT_SUCCESS + */ +#define RESP_MKEX_PRFL_SIZE GENMASK_ULL(63, 9) + +/* Response to cmd ID as CGX_CMD_GET_MKEX_PRFL_ADDR with cmd status as + * CGX_STAT_SUCCESS + */ +#define RESP_MKEX_PRFL_ADDR GENMASK_ULL(63, 9) + /* Response to cmd ID - CGX_CMD_LINK_BRING_UP/DOWN, event ID CGX_EVT_LINK_CHANGE * status can be either CGX_STAT_FAIL or CGX_STAT_SUCCESS * diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h index d39ada404c8f..ec50a21c5aaf 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h @@ -143,6 +143,14 @@ enum nix_scheduler { NIX_TXSCH_LVL_CNT = 0x5, }; +#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 
1) +#define TXSCH_TL1_DFLT_RR_PRIO (0x1ull) + +/* Min/Max packet sizes, excluding FCS */ +#define NIC_HW_MIN_FRS 40 +#define NIC_HW_MAX_FRS 9212 +#define SDP_HW_MAX_FRS 65535 + /* NIX RX action operation*/ #define NIX_RX_ACTIONOP_DROP (0x0ull) #define NIX_RX_ACTIONOP_UCAST (0x1ull) @@ -169,7 +177,9 @@ enum nix_scheduler { #define MAX_LMAC_PKIND 12 #define NIX_LINK_CGX_LMAC(a, b) (0 + 4 * (a) + (b)) +#define NIX_LINK_LBK(a) (12 + (a)) #define NIX_CHAN_CGX_LMAC_CHX(a, b, c) (0x800 + 0x100 * (a) + 0x10 * (b) + (c)) +#define NIX_CHAN_LBK_CHX(a, b) (0 + 0x100 * (a) + (b)) /* NIX LSO format indices. * As of now TSO is the only one using, so statically assigning indices. @@ -186,26 +196,4 @@ enum nix_scheduler { #define DEFAULT_RSS_CONTEXT_GROUP 0 #define MAX_RSS_INDIR_TBL_SIZE 256 /* 1 << Max adder bits */ -/* NIX flow tag, key type flags */ -#define FLOW_KEY_TYPE_PORT BIT(0) -#define FLOW_KEY_TYPE_IPV4 BIT(1) -#define FLOW_KEY_TYPE_IPV6 BIT(2) -#define FLOW_KEY_TYPE_TCP BIT(3) -#define FLOW_KEY_TYPE_UDP BIT(4) -#define FLOW_KEY_TYPE_SCTP BIT(5) - -/* NIX flow tag algorithm indices, max is 31 */ -enum { - FLOW_KEY_ALG_PORT, - FLOW_KEY_ALG_IP, - FLOW_KEY_ALG_TCP, - FLOW_KEY_ALG_UDP, - FLOW_KEY_ALG_SCTP, - FLOW_KEY_ALG_TCP_UDP, - FLOW_KEY_ALG_TCP_SCTP, - FLOW_KEY_ALG_UDP_SCTP, - FLOW_KEY_ALG_TCP_UDP_SCTP, - FLOW_KEY_ALG_MAX, -}; - #endif /* COMMON_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c index 85ba24a05774..d6f9ed8ea966 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c @@ -290,7 +290,7 @@ EXPORT_SYMBOL(otx2_mbox_nonempty); const char *otx2_mbox_id2name(u16 id) { switch (id) { -#define M(_name, _id, _1, _2) case _id: return # _name; +#define M(_name, _id, _1, _2, _3) case _id: return # _name; MBOX_MESSAGES #undef M default: diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index a15a59c9a239..76a4575d18ff 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -120,54 +120,101 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox, #define MBOX_MESSAGES \ /* Generic mbox IDs (range 0x000 - 0x1FF) */ \ -M(READY, 0x001, msg_req, ready_msg_rsp) \ -M(ATTACH_RESOURCES, 0x002, rsrc_attach, msg_rsp) \ -M(DETACH_RESOURCES, 0x003, rsrc_detach, msg_rsp) \ -M(MSIX_OFFSET, 0x004, msg_req, msix_offset_rsp) \ +M(READY, 0x001, ready, msg_req, ready_msg_rsp) \ +M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach, msg_rsp) \ +M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach, msg_rsp) \ +M(MSIX_OFFSET, 0x004, msix_offset, msg_req, msix_offset_rsp) \ +M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \ /* CGX mbox IDs (range 0x200 - 0x3FF) */ \ -M(CGX_START_RXTX, 0x200, msg_req, msg_rsp) \ -M(CGX_STOP_RXTX, 0x201, msg_req, msg_rsp) \ -M(CGX_STATS, 0x202, msg_req, cgx_stats_rsp) \ -M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set_or_get, \ +M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ +M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ +M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \ +M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get, \ cgx_mac_addr_set_or_get) \ -M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_set_or_get, \ +M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get, \ cgx_mac_addr_set_or_get) \ -M(CGX_PROMISC_ENABLE, 0x205, msg_req, msg_rsp) \ 
-M(CGX_PROMISC_DISABLE, 0x206, msg_req, msg_rsp) \ -M(CGX_START_LINKEVENTS, 0x207, msg_req, msg_rsp) \ -M(CGX_STOP_LINKEVENTS, 0x208, msg_req, msg_rsp) \ -M(CGX_GET_LINKINFO, 0x209, msg_req, cgx_link_info_msg) \ -M(CGX_INTLBK_ENABLE, 0x20A, msg_req, msg_rsp) \ -M(CGX_INTLBK_DISABLE, 0x20B, msg_req, msg_rsp) \ +M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \ +M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \ +M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \ +M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \ +M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, cgx_link_info_msg) \ +M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \ +M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \ /* NPA mbox IDs (range 0x400 - 0x5FF) */ \ -M(NPA_LF_ALLOC, 0x400, npa_lf_alloc_req, npa_lf_alloc_rsp) \ -M(NPA_LF_FREE, 0x401, msg_req, msg_rsp) \ -M(NPA_AQ_ENQ, 0x402, npa_aq_enq_req, npa_aq_enq_rsp) \ -M(NPA_HWCTX_DISABLE, 0x403, hwctx_disable_req, msg_rsp) \ +M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, \ + npa_lf_alloc_req, npa_lf_alloc_rsp) \ +M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \ +M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp) \ +M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\ /* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \ /* TIM mbox IDs (range 0x800 - 0x9FF) */ \ /* CPT mbox IDs (range 0xA00 - 0xBFF) */ \ /* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \ +M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, npc_mcam_alloc_entry_req,\ + npc_mcam_alloc_entry_rsp) \ +M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \ + npc_mcam_free_entry_req, msg_rsp) \ +M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \ + npc_mcam_write_entry_req, msg_rsp) \ +M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \ + npc_mcam_ena_dis_entry_req, msg_rsp) \ +M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \ + npc_mcam_ena_dis_entry_req, msg_rsp) \ +M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, npc_mcam_shift_entry_req,\ + npc_mcam_shift_entry_rsp) \ +M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \ + npc_mcam_alloc_counter_req, \ + npc_mcam_alloc_counter_rsp) \ +M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \ + npc_mcam_oper_counter_req, msg_rsp) \ +M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \ + npc_mcam_unmap_counter_req, msg_rsp) \ +M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \ + npc_mcam_oper_counter_req, msg_rsp) \ +M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \ + npc_mcam_oper_counter_req, \ + npc_mcam_oper_counter_rsp) \ +M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry, \ + npc_mcam_alloc_and_write_entry_req, \ + npc_mcam_alloc_and_write_entry_rsp) \ +M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, \ + msg_req, npc_get_kex_cfg_rsp) \ /* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \ -M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc_req, nix_lf_alloc_rsp) \ -M(NIX_LF_FREE, 0x8001, msg_req, msg_rsp) \ -M(NIX_AQ_ENQ, 0x8002, nix_aq_enq_req, nix_aq_enq_rsp) \ -M(NIX_HWCTX_DISABLE, 0x8003, hwctx_disable_req, msg_rsp) \ -M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc_req, nix_txsch_alloc_rsp) \ -M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free_req, msg_rsp) \ -M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_config, msg_rsp) \ -M(NIX_STATS_RST, 0x8007, msg_req, msg_rsp) \ -M(NIX_VTAG_CFG, 0x8008, nix_vtag_config, msg_rsp) \ -M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, msg_rsp) \ 
-M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, msg_rsp) \ -M(NIX_SET_RX_MODE, 0x800b, nix_rx_mode, msg_rsp) +M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, \ + nix_lf_alloc_req, nix_lf_alloc_rsp) \ +M(NIX_LF_FREE, 0x8001, nix_lf_free, msg_req, msg_rsp) \ +M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, nix_aq_enq_rsp) \ +M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, \ + hwctx_disable_req, msg_rsp) \ +M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, \ + nix_txsch_alloc_req, nix_txsch_alloc_rsp) \ +M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, msg_rsp) \ +M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, msg_rsp) \ +M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \ +M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \ +M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \ + nix_rss_flowkey_cfg, \ + nix_rss_flowkey_cfg_rsp) \ +M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, msg_rsp) \ +M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \ +M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \ +M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \ +M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \ +M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \ + nix_mark_format_cfg, \ + nix_mark_format_cfg_rsp) \ +M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \ +M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, \ + nix_lso_format_cfg, \ + nix_lso_format_cfg_rsp) \ +M(NIX_RXVLAN_ALLOC, 0x8012, nix_rxvlan_alloc, msg_req, msg_rsp) /* Messages initiated by AF (range 0xC00 - 0xDFF) */ #define MBOX_UP_CGX_MESSAGES \ -M(CGX_LINK_EVENT, 0xC00, cgx_link_info_msg, msg_rsp) +M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp) enum { -#define M(_name, _id, _1, _2) MBOX_MSG_ ## _name = _id, +#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id, MBOX_MESSAGES MBOX_UP_CGX_MESSAGES #undef M @@ -191,6 +238,13 @@ struct msg_rsp { struct mbox_msghdr hdr; }; +/* RVU mailbox error codes + * Range 256 - 300. + */ +enum rvu_af_status { + RVU_INVALID_VF_ID = -256, +}; + struct ready_msg_rsp { struct mbox_msghdr hdr; u16 sclk_feq; /* SCLK frequency */ @@ -347,6 +401,8 @@ struct hwctx_disable_req { u8 ctype; }; +/* NIX mbox message formats */ + /* NIX mailbox error codes * Range 401 - 500. 
*/ @@ -365,6 +421,12 @@ enum nix_af_status { NIX_AF_INVAL_TXSCHQ_CFG = -412, NIX_AF_SMQ_FLUSH_FAILED = -413, NIX_AF_ERR_LF_RESET = -414, + NIX_AF_ERR_RSS_NOSPC_FIELD = -415, + NIX_AF_ERR_RSS_NOSPC_ALGO = -416, + NIX_AF_ERR_MARK_CFG_FAIL = -417, + NIX_AF_ERR_LSO_CFG_FAIL = -418, + NIX_AF_INVAL_NPA_PF_FUNC = -419, + NIX_AF_INVAL_SSO_PF_FUNC = -420, }; /* For NIX LF context alloc and init */ @@ -392,6 +454,10 @@ struct nix_lf_alloc_rsp { u8 lso_tsov4_idx; u8 lso_tsov6_idx; u8 mac_addr[ETH_ALEN]; + u8 lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */ + u8 lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */ + u16 cints; /* NIX_AF_CONST2::CINTS */ + u16 qints; /* NIX_AF_CONST2::QINTS */ }; /* NIX AQ enqueue msg */ @@ -472,6 +538,7 @@ struct nix_txschq_config { struct nix_vtag_config { struct mbox_msghdr hdr; + /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */ u8 vtag_size; /* cfg_type is '0' for tx vlan cfg * cfg_type is '1' for rx vlan cfg @@ -492,7 +559,7 @@ struct nix_vtag_config { /* valid when cfg_type is '1' */ struct { - /* rx vtag type index */ + /* rx vtag type index, valid values are in 0..7 range */ u8 vtag_type; /* rx vtag strip */ u8 strip_vtag :1; @@ -505,15 +572,40 @@ struct nix_vtag_config { struct nix_rss_flowkey_cfg { struct mbox_msghdr hdr; int mcam_index; /* MCAM entry index to modify */ +#define NIX_FLOW_KEY_TYPE_PORT BIT(0) +#define NIX_FLOW_KEY_TYPE_IPV4 BIT(1) +#define NIX_FLOW_KEY_TYPE_IPV6 BIT(2) +#define NIX_FLOW_KEY_TYPE_TCP BIT(3) +#define NIX_FLOW_KEY_TYPE_UDP BIT(4) +#define NIX_FLOW_KEY_TYPE_SCTP BIT(5) u32 flowkey_cfg; /* Flowkey types selected */ u8 group; /* RSS context or group */ }; +struct nix_rss_flowkey_cfg_rsp { + struct mbox_msghdr hdr; + u8 alg_idx; /* Selected algo index */ +}; + struct nix_set_mac_addr { struct mbox_msghdr hdr; u8 mac_addr[ETH_ALEN]; /* MAC address to be set for this pcifunc */ }; +struct nix_mark_format_cfg { + struct mbox_msghdr hdr; + u8 offset; + u8 y_mask; + u8 y_val; + u8 r_mask; + u8 r_val; +}; + +struct nix_mark_format_cfg_rsp { + struct mbox_msghdr hdr; + u8 mark_format_idx; +}; + struct nix_rx_mode { struct mbox_msghdr hdr; #define NIX_RX_MODE_UCAST BIT(0) @@ -522,4 +614,182 @@ struct nix_rx_mode { u16 mode; }; +struct nix_rx_cfg { + struct mbox_msghdr hdr; +#define NIX_RX_OL3_VERIFY BIT(0) +#define NIX_RX_OL4_VERIFY BIT(1) + u8 len_verify; /* Outer L3/L4 len check */ +#define NIX_RX_CSUM_OL4_VERIFY BIT(0) + u8 csum_verify; /* Outer L4 checksum verification */ +}; + +struct nix_frs_cfg { + struct mbox_msghdr hdr; + u8 update_smq; /* Update SMQ's min/max lens */ + u8 update_minlen; /* Set minlen also */ + u8 sdp_link; /* Set SDP RX link */ + u16 maxlen; + u16 minlen; +}; + +struct nix_lso_format_cfg { + struct mbox_msghdr hdr; + u64 field_mask; +#define NIX_LSO_FIELD_MAX 8 + u64 fields[NIX_LSO_FIELD_MAX]; +}; + +struct nix_lso_format_cfg_rsp { + struct mbox_msghdr hdr; + u8 lso_format_idx; +}; + +/* NPC mbox message structs */ + +#define NPC_MCAM_ENTRY_INVALID 0xFFFF +#define NPC_MCAM_INVALID_MAP 0xFFFF + +/* NPC mailbox error codes + * Range 701 - 800. + */ +enum npc_af_status { + NPC_MCAM_INVALID_REQ = -701, + NPC_MCAM_ALLOC_DENIED = -702, + NPC_MCAM_ALLOC_FAILED = -703, + NPC_MCAM_PERM_DENIED = -704, +}; + +struct npc_mcam_alloc_entry_req { + struct mbox_msghdr hdr; +#define NPC_MAX_NONCONTIG_ENTRIES 256 + u8 contig; /* Contiguous entries ? 
*/ +#define NPC_MCAM_ANY_PRIO 0 +#define NPC_MCAM_LOWER_PRIO 1 +#define NPC_MCAM_HIGHER_PRIO 2 + u8 priority; /* Lower or higher w.r.t ref_entry */ + u16 ref_entry; + u16 count; /* Number of entries requested */ +}; + +struct npc_mcam_alloc_entry_rsp { + struct mbox_msghdr hdr; + u16 entry; /* Entry allocated or start index if contiguous. + * Invalid in case of non-contiguous. + */ + u16 count; /* Number of entries allocated */ + u16 free_count; /* Number of entries available */ + u16 entry_list[NPC_MAX_NONCONTIG_ENTRIES]; +}; + +struct npc_mcam_free_entry_req { + struct mbox_msghdr hdr; + u16 entry; /* Entry index to be freed */ + u8 all; /* If all entries allocated to this PFVF are to be freed */ +}; + +struct mcam_entry { +#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max keywidth */ + u64 kw[NPC_MAX_KWS_IN_KEY]; + u64 kw_mask[NPC_MAX_KWS_IN_KEY]; + u64 action; + u64 vtag_action; +}; + +struct npc_mcam_write_entry_req { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; + u16 entry; /* MCAM entry to write this match key */ + u16 cntr; /* Counter for this MCAM entry */ + u8 intf; /* Rx or Tx interface */ + u8 enable_entry;/* Enable this MCAM entry ? */ + u8 set_cntr; /* Set counter for this entry ? */ +}; + +/* Enable/Disable a given entry */ +struct npc_mcam_ena_dis_entry_req { + struct mbox_msghdr hdr; + u16 entry; +}; + +struct npc_mcam_shift_entry_req { + struct mbox_msghdr hdr; +#define NPC_MCAM_MAX_SHIFTS 64 + u16 curr_entry[NPC_MCAM_MAX_SHIFTS]; + u16 new_entry[NPC_MCAM_MAX_SHIFTS]; + u16 shift_count; /* Number of entries to shift */ +}; + +struct npc_mcam_shift_entry_rsp { + struct mbox_msghdr hdr; + u16 failed_entry_idx; /* Index in 'curr_entry', not entry itself */ +}; + +struct npc_mcam_alloc_counter_req { + struct mbox_msghdr hdr; + u8 contig; /* Contiguous counters ? */ +#define NPC_MAX_NONCONTIG_COUNTERS 64 + u16 count; /* Number of counters requested */ +}; + +struct npc_mcam_alloc_counter_rsp { + struct mbox_msghdr hdr; + u16 cntr; /* Counter allocated or start index if contiguous. + * Invalid in case of non-contiguous. + */ + u16 count; /* Number of counters allocated */ + u16 cntr_list[NPC_MAX_NONCONTIG_COUNTERS]; +}; + +struct npc_mcam_oper_counter_req { + struct mbox_msghdr hdr; + u16 cntr; /* Free a counter or clear/fetch its stats */ +}; + +struct npc_mcam_oper_counter_rsp { + struct mbox_msghdr hdr; + u64 stat; /* valid only while fetching counter's stats */ +}; + +struct npc_mcam_unmap_counter_req { + struct mbox_msghdr hdr; + u16 cntr; + u16 entry; /* Entry and counter to be unmapped */ + u8 all; /* Unmap all entries using this counter ? */ +}; + +struct npc_mcam_alloc_and_write_entry_req { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; + u16 ref_entry; + u8 priority; /* Lower or higher w.r.t ref_entry */ + u8 intf; /* Rx or Tx interface */ + u8 enable_entry;/* Enable this MCAM entry ? */ + u8 alloc_cntr; /* Allocate counter and map ? 
*/ +}; + +struct npc_mcam_alloc_and_write_entry_rsp { + struct mbox_msghdr hdr; + u16 entry; + u16 cntr; +}; + +struct npc_get_kex_cfg_rsp { + struct mbox_msghdr hdr; + u64 rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */ + u64 tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */ +#define NPC_MAX_INTF 2 +#define NPC_MAX_LID 8 +#define NPC_MAX_LT 16 +#define NPC_MAX_LD 2 +#define NPC_MAX_LFL 16 + /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ + u64 kex_ld_flags[NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */ + u64 intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */ + u64 intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL]; +#define MKEX_NAME_LEN 128 + u8 mkex_pfl_name[MKEX_NAME_LEN]; +}; + #endif /* MBOX_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h index f98b0113def3..8d6d90fdfb73 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h @@ -259,4 +259,28 @@ struct nix_rx_action { #endif }; +/* NIX Receive Vtag Action Structure */ +#define VTAG0_VALID_BIT BIT_ULL(15) +#define VTAG0_TYPE_MASK GENMASK_ULL(14, 12) +#define VTAG0_LID_MASK GENMASK_ULL(10, 8) +#define VTAG0_RELPTR_MASK GENMASK_ULL(7, 0) + +struct npc_mcam_kex { + /* MKEX Profile Header */ + u64 mkex_sign; /* "mcam-kex-profile" (8 bytes/ASCII characters) */ + u8 name[MKEX_NAME_LEN]; /* MKEX Profile name */ + u64 cpu_model; /* Format as profiled by CPU hardware */ + u64 kpu_version; /* KPU firmware/profile version */ + u64 reserved; /* Reserved for extension */ + + /* MKEX Profile Data */ + u64 keyx_cfg[NPC_MAX_INTF]; /* NPC_AF_INTF(0..1)_KEX_CFG */ + /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ + u64 kex_ld_flags[NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */ + u64 intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */ + u64 intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL]; +} __packed; + #endif /* NPC_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index dc28fa2b9481..e581091c09c4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -29,6 +29,16 @@ static void rvu_set_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf, struct rvu_block *block, int lf); static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf, struct rvu_block *block, int lf); +static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc); + +static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, + int type, int num, + void (mbox_handler)(struct work_struct *), + void (mbox_up_handler)(struct work_struct *)); +enum { + TYPE_AFVF, + TYPE_AFPF, +}; /* Supported devices */ static const struct pci_device_id rvu_id_table[] = { @@ -42,6 +52,10 @@ MODULE_LICENSE("GPL v2"); MODULE_VERSION(DRV_VERSION); MODULE_DEVICE_TABLE(pci, rvu_id_table); +static char *mkex_profile; /* MKEX profile name */ +module_param(mkex_profile, charp, 0000); +MODULE_PARM_DESC(mkex_profile, "MKEX profile name string"); + /* Poll a RVU block's register 'offset', for a 'zero' * or 'nonzero' at bits specified by 'mask' */ @@ -153,17 +167,17 @@ int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot) u16 match = 0; int lf; - spin_lock(&rvu->rsrc_lock); + mutex_lock(&rvu->rsrc_lock); for (lf = 0; lf < block->lf.max; lf++) { if 
(block->fn_map[lf] == pcifunc) { if (slot == match) { - spin_unlock(&rvu->rsrc_lock); + mutex_unlock(&rvu->rsrc_lock); return lf; } match++; } } - spin_unlock(&rvu->rsrc_lock); + mutex_unlock(&rvu->rsrc_lock); return -ENODEV; } @@ -337,6 +351,28 @@ struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc) return &rvu->pf[rvu_get_pf(pcifunc)]; } +static bool is_pf_func_valid(struct rvu *rvu, u16 pcifunc) +{ + int pf, vf, nvfs; + u64 cfg; + + pf = rvu_get_pf(pcifunc); + if (pf >= rvu->hw->total_pfs) + return false; + + if (!(pcifunc & RVU_PFVF_FUNC_MASK)) + return true; + + /* Check if VF is within number of VFs attached to this PF */ + vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1; + cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf)); + nvfs = (cfg >> 12) & 0xFF; + if (vf >= nvfs) + return false; + + return true; +} + bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr) { struct rvu_block *block; @@ -597,6 +633,8 @@ static void rvu_free_hw_resources(struct rvu *rvu) dma_unmap_resource(rvu->dev, rvu->msix_base_iova, max_msix * PCI_MSIX_ENTRY_SIZE, DMA_BIDIRECTIONAL, 0); + + mutex_destroy(&rvu->rsrc_lock); } static int rvu_setup_hw_resources(struct rvu *rvu) @@ -752,7 +790,7 @@ init: if (!rvu->hwvf) return -ENOMEM; - spin_lock_init(&rvu->rsrc_lock); + mutex_init(&rvu->rsrc_lock); err = rvu_setup_msix_resources(rvu); if (err) @@ -777,17 +815,26 @@ init: err = rvu_npc_init(rvu); if (err) - return err; + goto exit; + + err = rvu_cgx_init(rvu); + if (err) + goto exit; err = rvu_npa_init(rvu); if (err) - return err; + goto cgx_err; err = rvu_nix_init(rvu); if (err) - return err; + goto cgx_err; return 0; + +cgx_err: + rvu_cgx_exit(rvu); +exit: + return err; } /* NPA and NIX admin queue APIs */ @@ -830,7 +877,7 @@ int rvu_aq_alloc(struct rvu *rvu, struct admin_queue **ad_queue, return 0; } -static int rvu_mbox_handler_READY(struct rvu *rvu, struct msg_req *req, +static int rvu_mbox_handler_ready(struct rvu *rvu, struct msg_req *req, struct ready_msg_rsp *rsp) { return 0; @@ -858,6 +905,22 @@ static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype) return 0; } +bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype) +{ + struct rvu_pfvf *pfvf; + + if (!is_pf_func_valid(rvu, pcifunc)) + return false; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + + /* Check if this PFFUNC has a LF of type blktype attached */ + if (!rvu_get_rsrc_mapcount(pfvf, blktype)) + return false; + + return true; +} + static int rvu_lookup_rsrc(struct rvu *rvu, struct rvu_block *block, int pcifunc, int slot) { @@ -926,7 +989,7 @@ static int rvu_detach_rsrcs(struct rvu *rvu, struct rsrc_detach *detach, struct rvu_block *block; int blkid; - spin_lock(&rvu->rsrc_lock); + mutex_lock(&rvu->rsrc_lock); /* Check for partial resource detach */ if (detach && detach->partial) @@ -956,11 +1019,11 @@ static int rvu_detach_rsrcs(struct rvu *rvu, struct rsrc_detach *detach, rvu_detach_block(rvu, pcifunc, block->type); } - spin_unlock(&rvu->rsrc_lock); + mutex_unlock(&rvu->rsrc_lock); return 0; } -static int rvu_mbox_handler_DETACH_RESOURCES(struct rvu *rvu, +static int rvu_mbox_handler_detach_resources(struct rvu *rvu, struct rsrc_detach *detach, struct msg_rsp *rsp) { @@ -1108,7 +1171,7 @@ fail: return -ENOSPC; } -static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu, +static int rvu_mbox_handler_attach_resources(struct rvu *rvu, struct rsrc_attach *attach, struct msg_rsp *rsp) { @@ -1119,7 +1182,7 @@ static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu, if (!attach->modify) rvu_detach_rsrcs(rvu, 
NULL, pcifunc); - spin_lock(&rvu->rsrc_lock); + mutex_lock(&rvu->rsrc_lock); /* Check if the request can be accommodated */ err = rvu_check_rsrc_availability(rvu, attach, pcifunc); @@ -1163,7 +1226,7 @@ static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu, } exit: - spin_unlock(&rvu->rsrc_lock); + mutex_unlock(&rvu->rsrc_lock); return err; } @@ -1231,7 +1294,7 @@ static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf, rvu_free_rsrc_contig(&pfvf->msix, nvecs, offset); } -static int rvu_mbox_handler_MSIX_OFFSET(struct rvu *rvu, struct msg_req *req, +static int rvu_mbox_handler_msix_offset(struct rvu *rvu, struct msg_req *req, struct msix_offset_rsp *rsp) { struct rvu_hwinfo *hw = rvu->hw; @@ -1280,22 +1343,51 @@ static int rvu_mbox_handler_MSIX_OFFSET(struct rvu *rvu, struct msg_req *req, return 0; } -static int rvu_process_mbox_msg(struct rvu *rvu, int devid, +static int rvu_mbox_handler_vf_flr(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + u16 vf, numvfs; + u64 cfg; + + vf = pcifunc & RVU_PFVF_FUNC_MASK; + cfg = rvu_read64(rvu, BLKADDR_RVUM, + RVU_PRIV_PFX_CFG(rvu_get_pf(pcifunc))); + numvfs = (cfg >> 12) & 0xFF; + + if (vf && vf <= numvfs) + __rvu_flr_handler(rvu, pcifunc); + else + return RVU_INVALID_VF_ID; + + return 0; +} + +static int rvu_process_mbox_msg(struct otx2_mbox *mbox, int devid, struct mbox_msghdr *req) { + struct rvu *rvu = pci_get_drvdata(mbox->pdev); + /* Check if valid, if not reply with an invalid msg */ if (req->sig != OTX2_MBOX_REQ_SIG) goto bad_message; switch (req->id) { -#define M(_name, _id, _req_type, _rsp_type) \ +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ case _id: { \ struct _rsp_type *rsp; \ int err; \ \ rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \ - &rvu->mbox, devid, \ + mbox, devid, \ sizeof(struct _rsp_type)); \ + /* some handlers should complete even if a reply */ \ + /* could not be allocated */ \ + if (!rsp && \ + _id != MBOX_MSG_DETACH_RESOURCES && \ + _id != MBOX_MSG_NIX_TXSCH_FREE && \ + _id != MBOX_MSG_VF_FLR) \ + return -ENOMEM; \ if (rsp) { \ rsp->hdr.id = _id; \ rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \ @@ -1303,9 +1395,9 @@ static int rvu_process_mbox_msg(struct rvu *rvu, int devid, rsp->hdr.rc = 0; \ } \ \ - err = rvu_mbox_handler_ ## _name(rvu, \ - (struct _req_type *)req, \ - rsp); \ + err = rvu_mbox_handler_ ## _fn_name(rvu, \ + (struct _req_type *)req, \ + rsp); \ if (rsp && err) \ rsp->hdr.rc = err; \ \ @@ -1313,29 +1405,38 @@ static int rvu_process_mbox_msg(struct rvu *rvu, int devid, } MBOX_MESSAGES #undef M - break; + bad_message: default: - otx2_reply_invalid_msg(&rvu->mbox, devid, req->pcifunc, - req->id); + otx2_reply_invalid_msg(mbox, devid, req->pcifunc, req->id); return -ENODEV; } } -static void rvu_mbox_handler(struct work_struct *work) +static void __rvu_mbox_handler(struct rvu_work *mwork, int type) { - struct rvu_work *mwork = container_of(work, struct rvu_work, work); struct rvu *rvu = mwork->rvu; + int offset, err, id, devid; struct otx2_mbox_dev *mdev; struct mbox_hdr *req_hdr; struct mbox_msghdr *msg; + struct mbox_wq_info *mw; struct otx2_mbox *mbox; - int offset, id, err; - u16 pf; - mbox = &rvu->mbox; - pf = mwork - rvu->mbox_wrk; - mdev = &mbox->dev[pf]; + switch (type) { + case TYPE_AFPF: + mw = &rvu->afpf_wq_info; + break; + case TYPE_AFVF: + mw = &rvu->afvf_wq_info; + break; + default: + return; + } + + devid = mwork - mw->mbox_wrk; + mbox = &mw->mbox; + mdev = &mbox->dev[devid]; /* Process received mbox messages */ req_hdr = 
mdev->mbase + mbox->rx_start; @@ -1347,10 +1448,21 @@ static void rvu_mbox_handler(struct work_struct *work) for (id = 0; id < req_hdr->num_msgs; id++) { msg = mdev->mbase + offset; - /* Set which PF sent this message based on mbox IRQ */ - msg->pcifunc &= ~(RVU_PFVF_PF_MASK << RVU_PFVF_PF_SHIFT); - msg->pcifunc |= (pf << RVU_PFVF_PF_SHIFT); - err = rvu_process_mbox_msg(rvu, pf, msg); + /* Set which PF/VF sent this message based on mbox IRQ */ + switch (type) { + case TYPE_AFPF: + msg->pcifunc &= + ~(RVU_PFVF_PF_MASK << RVU_PFVF_PF_SHIFT); + msg->pcifunc |= (devid << RVU_PFVF_PF_SHIFT); + break; + case TYPE_AFVF: + msg->pcifunc &= + ~(RVU_PFVF_FUNC_MASK << RVU_PFVF_FUNC_SHIFT); + msg->pcifunc |= (devid << RVU_PFVF_FUNC_SHIFT) + 1; + break; + } + + err = rvu_process_mbox_msg(mbox, devid, msg); if (!err) { offset = mbox->rx_start + msg->next_msgoff; continue; @@ -1358,31 +1470,57 @@ static void rvu_mbox_handler(struct work_struct *work) if (msg->pcifunc & RVU_PFVF_FUNC_MASK) dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d:VF%d\n", - err, otx2_mbox_id2name(msg->id), msg->id, pf, + err, otx2_mbox_id2name(msg->id), + msg->id, devid, (msg->pcifunc & RVU_PFVF_FUNC_MASK) - 1); else dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d\n", - err, otx2_mbox_id2name(msg->id), msg->id, pf); + err, otx2_mbox_id2name(msg->id), + msg->id, devid); } - /* Send mbox responses to PF */ - otx2_mbox_msg_send(mbox, pf); + /* Send mbox responses to VF/PF */ + otx2_mbox_msg_send(mbox, devid); +} + +static inline void rvu_afpf_mbox_handler(struct work_struct *work) +{ + struct rvu_work *mwork = container_of(work, struct rvu_work, work); + + __rvu_mbox_handler(mwork, TYPE_AFPF); } -static void rvu_mbox_up_handler(struct work_struct *work) +static inline void rvu_afvf_mbox_handler(struct work_struct *work) { struct rvu_work *mwork = container_of(work, struct rvu_work, work); + + __rvu_mbox_handler(mwork, TYPE_AFVF); +} + +static void __rvu_mbox_up_handler(struct rvu_work *mwork, int type) +{ struct rvu *rvu = mwork->rvu; struct otx2_mbox_dev *mdev; struct mbox_hdr *rsp_hdr; struct mbox_msghdr *msg; + struct mbox_wq_info *mw; struct otx2_mbox *mbox; - int offset, id; - u16 pf; + int offset, id, devid; + + switch (type) { + case TYPE_AFPF: + mw = &rvu->afpf_wq_info; + break; + case TYPE_AFVF: + mw = &rvu->afvf_wq_info; + break; + default: + return; + } - mbox = &rvu->mbox_up; - pf = mwork - rvu->mbox_wrk_up; - mdev = &mbox->dev[pf]; + devid = mwork - mw->mbox_wrk_up; + mbox = &mw->mbox_up; + mdev = &mbox->dev[devid]; rsp_hdr = mdev->mbase + mbox->rx_start; if (rsp_hdr->num_msgs == 0) { @@ -1423,128 +1561,182 @@ end: mdev->msgs_acked++; } - otx2_mbox_reset(mbox, 0); + otx2_mbox_reset(mbox, devid); } -static int rvu_mbox_init(struct rvu *rvu) +static inline void rvu_afpf_mbox_up_handler(struct work_struct *work) { - struct rvu_hwinfo *hw = rvu->hw; - void __iomem *hwbase = NULL; + struct rvu_work *mwork = container_of(work, struct rvu_work, work); + + __rvu_mbox_up_handler(mwork, TYPE_AFPF); +} + +static inline void rvu_afvf_mbox_up_handler(struct work_struct *work) +{ + struct rvu_work *mwork = container_of(work, struct rvu_work, work); + + __rvu_mbox_up_handler(mwork, TYPE_AFVF); +} + +static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, + int type, int num, + void (mbox_handler)(struct work_struct *), + void (mbox_up_handler)(struct work_struct *)) +{ + void __iomem *hwbase = NULL, *reg_base; + int err, i, dir, dir_up; struct rvu_work *mwork; + const char 
*name; u64 bar4_addr; - int err, pf; - rvu->mbox_wq = alloc_workqueue("rvu_afpf_mailbox", - WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM, - hw->total_pfs); - if (!rvu->mbox_wq) + switch (type) { + case TYPE_AFPF: + name = "rvu_afpf_mailbox"; + bar4_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR); + dir = MBOX_DIR_AFPF; + dir_up = MBOX_DIR_AFPF_UP; + reg_base = rvu->afreg_base; + break; + case TYPE_AFVF: + name = "rvu_afvf_mailbox"; + bar4_addr = rvupf_read64(rvu, RVU_PF_VF_BAR4_ADDR); + dir = MBOX_DIR_PFVF; + dir_up = MBOX_DIR_PFVF_UP; + reg_base = rvu->pfreg_base; + break; + default: + return -EINVAL; + } + + mw->mbox_wq = alloc_workqueue(name, + WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM, + num); + if (!mw->mbox_wq) return -ENOMEM; - rvu->mbox_wrk = devm_kcalloc(rvu->dev, hw->total_pfs, - sizeof(struct rvu_work), GFP_KERNEL); - if (!rvu->mbox_wrk) { + mw->mbox_wrk = devm_kcalloc(rvu->dev, num, + sizeof(struct rvu_work), GFP_KERNEL); + if (!mw->mbox_wrk) { err = -ENOMEM; goto exit; } - rvu->mbox_wrk_up = devm_kcalloc(rvu->dev, hw->total_pfs, - sizeof(struct rvu_work), GFP_KERNEL); - if (!rvu->mbox_wrk_up) { + mw->mbox_wrk_up = devm_kcalloc(rvu->dev, num, + sizeof(struct rvu_work), GFP_KERNEL); + if (!mw->mbox_wrk_up) { err = -ENOMEM; goto exit; } - /* Map mbox region shared with PFs */ - bar4_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR); /* Mailbox is a reserved memory (in RAM) region shared between * RVU devices, shouldn't be mapped as device memory to allow * unaligned accesses. */ - hwbase = ioremap_wc(bar4_addr, MBOX_SIZE * hw->total_pfs); + hwbase = ioremap_wc(bar4_addr, MBOX_SIZE * num); if (!hwbase) { dev_err(rvu->dev, "Unable to map mailbox region\n"); err = -ENOMEM; goto exit; } - err = otx2_mbox_init(&rvu->mbox, hwbase, rvu->pdev, rvu->afreg_base, - MBOX_DIR_AFPF, hw->total_pfs); + err = otx2_mbox_init(&mw->mbox, hwbase, rvu->pdev, reg_base, dir, num); if (err) goto exit; - err = otx2_mbox_init(&rvu->mbox_up, hwbase, rvu->pdev, rvu->afreg_base, - MBOX_DIR_AFPF_UP, hw->total_pfs); + err = otx2_mbox_init(&mw->mbox_up, hwbase, rvu->pdev, + reg_base, dir_up, num); if (err) goto exit; - for (pf = 0; pf < hw->total_pfs; pf++) { - mwork = &rvu->mbox_wrk[pf]; + for (i = 0; i < num; i++) { + mwork = &mw->mbox_wrk[i]; mwork->rvu = rvu; - INIT_WORK(&mwork->work, rvu_mbox_handler); - } + INIT_WORK(&mwork->work, mbox_handler); - for (pf = 0; pf < hw->total_pfs; pf++) { - mwork = &rvu->mbox_wrk_up[pf]; + mwork = &mw->mbox_wrk_up[i]; mwork->rvu = rvu; - INIT_WORK(&mwork->work, rvu_mbox_up_handler); + INIT_WORK(&mwork->work, mbox_up_handler); } return 0; exit: if (hwbase) iounmap((void __iomem *)hwbase); - destroy_workqueue(rvu->mbox_wq); + destroy_workqueue(mw->mbox_wq); return err; } -static void rvu_mbox_destroy(struct rvu *rvu) +static void rvu_mbox_destroy(struct mbox_wq_info *mw) { - if (rvu->mbox_wq) { - flush_workqueue(rvu->mbox_wq); - destroy_workqueue(rvu->mbox_wq); - rvu->mbox_wq = NULL; + if (mw->mbox_wq) { + flush_workqueue(mw->mbox_wq); + destroy_workqueue(mw->mbox_wq); + mw->mbox_wq = NULL; } - if (rvu->mbox.hwbase) - iounmap((void __iomem *)rvu->mbox.hwbase); + if (mw->mbox.hwbase) + iounmap((void __iomem *)mw->mbox.hwbase); - otx2_mbox_destroy(&rvu->mbox); - otx2_mbox_destroy(&rvu->mbox_up); + otx2_mbox_destroy(&mw->mbox); + otx2_mbox_destroy(&mw->mbox_up); } -static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq) +static void rvu_queue_work(struct mbox_wq_info *mw, int first, + int mdevs, u64 intr) { - struct rvu *rvu = (struct rvu *)rvu_irq; struct 
otx2_mbox_dev *mdev; struct otx2_mbox *mbox; struct mbox_hdr *hdr; + int i; + + for (i = first; i < mdevs; i++) { + /* start from 0 */ + if (!(intr & BIT_ULL(i - first))) + continue; + + mbox = &mw->mbox; + mdev = &mbox->dev[i]; + hdr = mdev->mbase + mbox->rx_start; + if (hdr->num_msgs) + queue_work(mw->mbox_wq, &mw->mbox_wrk[i].work); + + mbox = &mw->mbox_up; + mdev = &mbox->dev[i]; + hdr = mdev->mbase + mbox->rx_start; + if (hdr->num_msgs) + queue_work(mw->mbox_wq, &mw->mbox_wrk_up[i].work); + } +} + +static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq) +{ + struct rvu *rvu = (struct rvu *)rvu_irq; + int vfs = rvu->vfs; u64 intr; - u8 pf; intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT); /* Clear interrupts */ rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT, intr); /* Sync with mbox memory region */ - smp_wmb(); + rmb(); - for (pf = 0; pf < rvu->hw->total_pfs; pf++) { - if (intr & (1ULL << pf)) { - mbox = &rvu->mbox; - mdev = &mbox->dev[pf]; - hdr = mdev->mbase + mbox->rx_start; - if (hdr->num_msgs) - queue_work(rvu->mbox_wq, - &rvu->mbox_wrk[pf].work); - mbox = &rvu->mbox_up; - mdev = &mbox->dev[pf]; - hdr = mdev->mbase + mbox->rx_start; - if (hdr->num_msgs) - queue_work(rvu->mbox_wq, - &rvu->mbox_wrk_up[pf].work); - } + rvu_queue_work(&rvu->afpf_wq_info, 0, rvu->hw->total_pfs, intr); + + /* Handle VF interrupts */ + if (vfs > 64) { + intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(1)); + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), intr); + + rvu_queue_work(&rvu->afvf_wq_info, 64, vfs, intr); + vfs -= 64; } + intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(0)); + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(0), intr); + + rvu_queue_work(&rvu->afvf_wq_info, 0, vfs, intr); + return IRQ_HANDLED; } @@ -1561,6 +1753,216 @@ static void rvu_enable_mbox_intr(struct rvu *rvu) INTR_MASK(hw->total_pfs) & ~1ULL); } +static void rvu_blklf_teardown(struct rvu *rvu, u16 pcifunc, u8 blkaddr) +{ + struct rvu_block *block; + int slot, lf, num_lfs; + int err; + + block = &rvu->hw->block[blkaddr]; + num_lfs = rvu_get_rsrc_mapcount(rvu_get_pfvf(rvu, pcifunc), + block->type); + if (!num_lfs) + return; + for (slot = 0; slot < num_lfs; slot++) { + lf = rvu_get_lf(rvu, block, pcifunc, slot); + if (lf < 0) + continue; + + /* Cleanup LF and reset it */ + if (block->addr == BLKADDR_NIX0) + rvu_nix_lf_teardown(rvu, pcifunc, block->addr, lf); + else if (block->addr == BLKADDR_NPA) + rvu_npa_lf_teardown(rvu, pcifunc, lf); + + err = rvu_lf_reset(rvu, block, lf); + if (err) { + dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n", + block->addr, lf); + } + } +} + +static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc) +{ + mutex_lock(&rvu->flr_lock); + /* Reset order should reflect inter-block dependencies: + * 1. Reset any packet/work sources (NIX, CPT, TIM) + * 2. Flush and reset SSO/SSOW + * 3. 
Cleanup pools (NPA) + */ + rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NIX0); + rvu_blklf_teardown(rvu, pcifunc, BLKADDR_CPT0); + rvu_blklf_teardown(rvu, pcifunc, BLKADDR_TIM); + rvu_blklf_teardown(rvu, pcifunc, BLKADDR_SSOW); + rvu_blklf_teardown(rvu, pcifunc, BLKADDR_SSO); + rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA); + rvu_detach_rsrcs(rvu, NULL, pcifunc); + mutex_unlock(&rvu->flr_lock); +} + +static void rvu_afvf_flr_handler(struct rvu *rvu, int vf) +{ + int reg = 0; + + /* pcifunc = 0(PF0) | (vf + 1) */ + __rvu_flr_handler(rvu, vf + 1); + + if (vf >= 64) { + reg = 1; + vf = vf - 64; + } + + /* Signal FLR finish and enable IRQ */ + rvupf_write64(rvu, RVU_PF_VFTRPENDX(reg), BIT_ULL(vf)); + rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(reg), BIT_ULL(vf)); +} + +static void rvu_flr_handler(struct work_struct *work) +{ + struct rvu_work *flrwork = container_of(work, struct rvu_work, work); + struct rvu *rvu = flrwork->rvu; + u16 pcifunc, numvfs, vf; + u64 cfg; + int pf; + + pf = flrwork - rvu->flr_wrk; + if (pf >= rvu->hw->total_pfs) { + rvu_afvf_flr_handler(rvu, pf - rvu->hw->total_pfs); + return; + } + + cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf)); + numvfs = (cfg >> 12) & 0xFF; + pcifunc = pf << RVU_PFVF_PF_SHIFT; + + for (vf = 0; vf < numvfs; vf++) + __rvu_flr_handler(rvu, (pcifunc | (vf + 1))); + + __rvu_flr_handler(rvu, pcifunc); + + /* Signal FLR finish */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFTRPEND, BIT_ULL(pf)); + + /* Enable interrupt */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1S, BIT_ULL(pf)); +} + +static void rvu_afvf_queue_flr_work(struct rvu *rvu, int start_vf, int numvfs) +{ + int dev, vf, reg = 0; + u64 intr; + + if (start_vf >= 64) + reg = 1; + + intr = rvupf_read64(rvu, RVU_PF_VFFLR_INTX(reg)); + if (!intr) + return; + + for (vf = 0; vf < numvfs; vf++) { + if (!(intr & BIT_ULL(vf))) + continue; + dev = vf + start_vf + rvu->hw->total_pfs; + queue_work(rvu->flr_wq, &rvu->flr_wrk[dev].work); + /* Clear and disable the interrupt */ + rvupf_write64(rvu, RVU_PF_VFFLR_INTX(reg), BIT_ULL(vf)); + rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(reg), BIT_ULL(vf)); + } +} + +static irqreturn_t rvu_flr_intr_handler(int irq, void *rvu_irq) +{ + struct rvu *rvu = (struct rvu *)rvu_irq; + u64 intr; + u8 pf; + + intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT); + if (!intr) + goto afvf_flr; + + for (pf = 0; pf < rvu->hw->total_pfs; pf++) { + if (intr & (1ULL << pf)) { + /* PF is already dead, do only AF-related operations */ + queue_work(rvu->flr_wq, &rvu->flr_wrk[pf].work); + /* clear interrupt */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT, + BIT_ULL(pf)); + /* Disable the interrupt */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1C, + BIT_ULL(pf)); + } + } + +afvf_flr: + rvu_afvf_queue_flr_work(rvu, 0, 64); + if (rvu->vfs > 64) + rvu_afvf_queue_flr_work(rvu, 64, rvu->vfs - 64); + + return IRQ_HANDLED; +} + +static void rvu_me_handle_vfset(struct rvu *rvu, int idx, u64 intr) +{ + int vf; + + /* Nothing to be done here other than clearing the + * TRPEND bit. 
+ */ + for (vf = 0; vf < 64; vf++) { + if (intr & (1ULL << vf)) { + /* clear the trpend due to ME (master enable) */ + rvupf_write64(rvu, RVU_PF_VFTRPENDX(idx), BIT_ULL(vf)); + /* clear interrupt */ + rvupf_write64(rvu, RVU_PF_VFME_INTX(idx), BIT_ULL(vf)); + } + } +} + +/* Handles ME interrupts from VFs of AF */ +static irqreturn_t rvu_me_vf_intr_handler(int irq, void *rvu_irq) +{ + struct rvu *rvu = (struct rvu *)rvu_irq; + int vfset; + u64 intr; + + intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT); + + for (vfset = 0; vfset <= 1; vfset++) { + intr = rvupf_read64(rvu, RVU_PF_VFME_INTX(vfset)); + if (intr) + rvu_me_handle_vfset(rvu, vfset, intr); + } + + return IRQ_HANDLED; +} + +/* Handles ME interrupts from PFs */ +static irqreturn_t rvu_me_pf_intr_handler(int irq, void *rvu_irq) +{ + struct rvu *rvu = (struct rvu *)rvu_irq; + u64 intr; + u8 pf; + + intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT); + + /* Nothing to be done here other than clearing the + * TRPEND bit. + */ + for (pf = 0; pf < rvu->hw->total_pfs; pf++) { + if (intr & (1ULL << pf)) { + /* clear the trpend due to ME (master enable) */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFTRPEND, + BIT_ULL(pf)); + /* clear interrupt */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT, + BIT_ULL(pf)); + } + } + + return IRQ_HANDLED; +} + static void rvu_unregister_interrupts(struct rvu *rvu) { int irq; @@ -1569,6 +1971,14 @@ static void rvu_unregister_interrupts(struct rvu *rvu) rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT_ENA_W1C, INTR_MASK(rvu->hw->total_pfs) & ~1ULL); + /* Disable the PF FLR interrupt */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1C, + INTR_MASK(rvu->hw->total_pfs) & ~1ULL); + + /* Disable the PF ME interrupt */ + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT_ENA_W1C, + INTR_MASK(rvu->hw->total_pfs) & ~1ULL); + for (irq = 0; irq < rvu->num_vec; irq++) { if (rvu->irq_allocated[irq]) free_irq(pci_irq_vector(rvu->pdev, irq), rvu); @@ -1578,9 +1988,25 @@ static void rvu_unregister_interrupts(struct rvu *rvu) rvu->num_vec = 0; } +static int rvu_afvf_msix_vectors_num_ok(struct rvu *rvu) +{ + struct rvu_pfvf *pfvf = &rvu->pf[0]; + int offset; + + pfvf = &rvu->pf[0]; + offset = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_INT_CFG(0)) & 0x3ff; + + /* Make sure there are enough MSIX vectors configured so that + * VF interrupts can be handled. An offset of zero means that PF + * vectors are not configured and would overlap the AF vectors. 
+ */ + return (pfvf->msix.max >= RVU_AF_INT_VEC_CNT + RVU_PF_INT_VEC_CNT) && + offset; +} + static int rvu_register_interrupts(struct rvu *rvu) { - int ret; + int ret, offset, pf_vec_start; rvu->num_vec = pci_msix_vec_count(rvu->pdev); @@ -1620,13 +2046,331 @@ static int rvu_register_interrupts(struct rvu *rvu) /* Enable mailbox interrupts from all PFs */ rvu_enable_mbox_intr(rvu); + /* Register FLR interrupt handler */ + sprintf(&rvu->irq_name[RVU_AF_INT_VEC_PFFLR * NAME_SIZE], + "RVUAF FLR"); + ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_PFFLR), + rvu_flr_intr_handler, 0, + &rvu->irq_name[RVU_AF_INT_VEC_PFFLR * NAME_SIZE], + rvu); + if (ret) { + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for FLR\n"); + goto fail; + } + rvu->irq_allocated[RVU_AF_INT_VEC_PFFLR] = true; + + /* Enable FLR interrupt for all PFs*/ + rvu_write64(rvu, BLKADDR_RVUM, + RVU_AF_PFFLR_INT, INTR_MASK(rvu->hw->total_pfs)); + + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1S, + INTR_MASK(rvu->hw->total_pfs) & ~1ULL); + + /* Register ME interrupt handler */ + sprintf(&rvu->irq_name[RVU_AF_INT_VEC_PFME * NAME_SIZE], + "RVUAF ME"); + ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_PFME), + rvu_me_pf_intr_handler, 0, + &rvu->irq_name[RVU_AF_INT_VEC_PFME * NAME_SIZE], + rvu); + if (ret) { + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for ME\n"); + } + rvu->irq_allocated[RVU_AF_INT_VEC_PFME] = true; + + /* Enable ME interrupt for all PFs*/ + rvu_write64(rvu, BLKADDR_RVUM, + RVU_AF_PFME_INT, INTR_MASK(rvu->hw->total_pfs)); + + rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT_ENA_W1S, + INTR_MASK(rvu->hw->total_pfs) & ~1ULL); + + if (!rvu_afvf_msix_vectors_num_ok(rvu)) + return 0; + + /* Get PF MSIX vectors offset. */ + pf_vec_start = rvu_read64(rvu, BLKADDR_RVUM, + RVU_PRIV_PFX_INT_CFG(0)) & 0x3ff; + + /* Register MBOX0 interrupt. */ + offset = pf_vec_start + RVU_PF_INT_VEC_VFPF_MBOX0; + sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF Mbox0"); + ret = request_irq(pci_irq_vector(rvu->pdev, offset), + rvu_mbox_intr_handler, 0, + &rvu->irq_name[offset * NAME_SIZE], + rvu); + if (ret) + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for Mbox0\n"); + + rvu->irq_allocated[offset] = true; + + /* Register MBOX1 interrupt. MBOX1 IRQ number follows MBOX0 so + * simply increment current offset by 1. 
+ */ + offset = pf_vec_start + RVU_PF_INT_VEC_VFPF_MBOX1; + sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF Mbox1"); + ret = request_irq(pci_irq_vector(rvu->pdev, offset), + rvu_mbox_intr_handler, 0, + &rvu->irq_name[offset * NAME_SIZE], + rvu); + if (ret) + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for Mbox1\n"); + + rvu->irq_allocated[offset] = true; + + /* Register FLR interrupt handler for AF's VFs */ + offset = pf_vec_start + RVU_PF_INT_VEC_VFFLR0; + sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF FLR0"); + ret = request_irq(pci_irq_vector(rvu->pdev, offset), + rvu_flr_intr_handler, 0, + &rvu->irq_name[offset * NAME_SIZE], rvu); + if (ret) { + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for RVUAFVF FLR0\n"); + goto fail; + } + rvu->irq_allocated[offset] = true; + + offset = pf_vec_start + RVU_PF_INT_VEC_VFFLR1; + sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF FLR1"); + ret = request_irq(pci_irq_vector(rvu->pdev, offset), + rvu_flr_intr_handler, 0, + &rvu->irq_name[offset * NAME_SIZE], rvu); + if (ret) { + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for RVUAFVF FLR1\n"); + goto fail; + } + rvu->irq_allocated[offset] = true; + + /* Register ME interrupt handler for AF's VFs */ + offset = pf_vec_start + RVU_PF_INT_VEC_VFME0; + sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF ME0"); + ret = request_irq(pci_irq_vector(rvu->pdev, offset), + rvu_me_vf_intr_handler, 0, + &rvu->irq_name[offset * NAME_SIZE], rvu); + if (ret) { + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for RVUAFVF ME0\n"); + goto fail; + } + rvu->irq_allocated[offset] = true; + + offset = pf_vec_start + RVU_PF_INT_VEC_VFME1; + sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF ME1"); + ret = request_irq(pci_irq_vector(rvu->pdev, offset), + rvu_me_vf_intr_handler, 0, + &rvu->irq_name[offset * NAME_SIZE], rvu); + if (ret) { + dev_err(rvu->dev, + "RVUAF: IRQ registration failed for RVUAFVF ME1\n"); + goto fail; + } + rvu->irq_allocated[offset] = true; return 0; fail: - pci_free_irq_vectors(rvu->pdev); + rvu_unregister_interrupts(rvu); + return ret; +} + +static void rvu_flr_wq_destroy(struct rvu *rvu) +{ + if (rvu->flr_wq) { + flush_workqueue(rvu->flr_wq); + destroy_workqueue(rvu->flr_wq); + rvu->flr_wq = NULL; + } +} + +static int rvu_flr_init(struct rvu *rvu) +{ + int dev, num_devs; + u64 cfg; + int pf; + + /* Enable FLR for all PFs*/ + for (pf = 0; pf < rvu->hw->total_pfs; pf++) { + cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf)); + rvu_write64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf), + cfg | BIT_ULL(22)); + } + + rvu->flr_wq = alloc_workqueue("rvu_afpf_flr", + WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM, + 1); + if (!rvu->flr_wq) + return -ENOMEM; + + num_devs = rvu->hw->total_pfs + pci_sriov_get_totalvfs(rvu->pdev); + rvu->flr_wrk = devm_kcalloc(rvu->dev, num_devs, + sizeof(struct rvu_work), GFP_KERNEL); + if (!rvu->flr_wrk) { + destroy_workqueue(rvu->flr_wq); + return -ENOMEM; + } + + for (dev = 0; dev < num_devs; dev++) { + rvu->flr_wrk[dev].rvu = rvu; + INIT_WORK(&rvu->flr_wrk[dev].work, rvu_flr_handler); + } + + mutex_init(&rvu->flr_lock); + + return 0; +} + +static void rvu_disable_afvf_intr(struct rvu *rvu) +{ + int vfs = rvu->vfs; + + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(0), INTR_MASK(vfs)); + rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(0), INTR_MASK(vfs)); + rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1CX(0), INTR_MASK(vfs)); + if (vfs <= 64) + return; + + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1), + INTR_MASK(vfs - 64)); + 
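/* Aside: the enable/disable paths above and below split every per-VF
 * interrupt resource across two registers once more than 64 VFs exist;
 * register index 0 covers VFs 0-63 and index 1 covers VFs 64-127. A
 * minimal illustrative sketch of the per-register mask derivation (the
 * helper name is hypothetical, not from this patch), assuming INTR_MASK(n)
 * expands to a low-order bitmask of width n that saturates at 64 bits,
 * e.g. ((n) < 64 ? BIT_ULL(n) - 1 : ~0ULL):
 *
 *	static inline u64 afvf_intr_mask(int vfs, int reg)
 *	{
 *		if (reg == 0)		// VFs 0..63
 *			return INTR_MASK(vfs > 64 ? 64 : vfs);
 *		// VFs 64..127, if any
 *		return vfs > 64 ? INTR_MASK(vfs - 64) : 0;
 *	}
 */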
rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(1), INTR_MASK(vfs - 64)); + rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1CX(1), INTR_MASK(vfs - 64)); +} + +static void rvu_enable_afvf_intr(struct rvu *rvu) +{ + int vfs = rvu->vfs; + + /* Clear any pending interrupts and enable AF VF interrupts for + * the first 64 VFs. + */ + /* Mbox */ + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(0), INTR_MASK(vfs)); + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1SX(0), INTR_MASK(vfs)); + + /* FLR */ + rvupf_write64(rvu, RVU_PF_VFFLR_INTX(0), INTR_MASK(vfs)); + rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(0), INTR_MASK(vfs)); + rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1SX(0), INTR_MASK(vfs)); + + /* Same for remaining VFs, if any. */ + if (vfs <= 64) + return; + + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), INTR_MASK(vfs - 64)); + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1SX(1), + INTR_MASK(vfs - 64)); + + rvupf_write64(rvu, RVU_PF_VFFLR_INTX(1), INTR_MASK(vfs - 64)); + rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(1), INTR_MASK(vfs - 64)); + rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1SX(1), INTR_MASK(vfs - 64)); +} + +#define PCI_DEVID_OCTEONTX2_LBK 0xA061 + +static int lbk_get_num_chans(void) +{ + struct pci_dev *pdev; + void __iomem *base; + int ret = -EIO; + + pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_LBK, + NULL); + if (!pdev) + goto err; + + base = pci_ioremap_bar(pdev, 0); + if (!base) + goto err_put; + + /* Read number of available LBK channels from LBK(0)_CONST register. */ + ret = (readq(base + 0x10) >> 32) & 0xffff; + iounmap(base); +err_put: + pci_dev_put(pdev); +err: return ret; } +static int rvu_enable_sriov(struct rvu *rvu) +{ + struct pci_dev *pdev = rvu->pdev; + int err, chans, vfs; + + if (!rvu_afvf_msix_vectors_num_ok(rvu)) { + dev_warn(&pdev->dev, + "Skipping SRIOV enablement since not enough IRQs are available\n"); + return 0; + } + + chans = lbk_get_num_chans(); + if (chans < 0) + return chans; + + vfs = pci_sriov_get_totalvfs(pdev); + + /* Limit VFs in case we have more VFs than LBK channels available. */ + if (vfs > chans) + vfs = chans; + + /* AF's VFs work in pairs and talk over consecutive loopback channels. + * Thus we want to enable maximum even number of VFs. In case + * odd number of VFs are available then the last VF on the list + * remains disabled. + */ + if (vfs & 0x1) { + dev_warn(&pdev->dev, + "Number of VFs should be even. Enabling %d out of %d.\n", + vfs - 1, vfs); + vfs--; + } + + if (!vfs) + return 0; + + /* Save VFs number for reference in VF interrupts handlers. + * Since interrupts might start arriving during SRIOV enablement + * ordinary API cannot be used to get number of enabled VFs. + */ + rvu->vfs = vfs; + + err = rvu_mbox_init(rvu, &rvu->afvf_wq_info, TYPE_AFVF, vfs, + rvu_afvf_mbox_handler, rvu_afvf_mbox_up_handler); + if (err) + return err; + + rvu_enable_afvf_intr(rvu); + /* Make sure IRQs are enabled before SRIOV. */ + mb(); + + err = pci_enable_sriov(pdev, vfs); + if (err) { + rvu_disable_afvf_intr(rvu); + rvu_mbox_destroy(&rvu->afvf_wq_info); + return err; + } + + return 0; +} + +static void rvu_disable_sriov(struct rvu *rvu) +{ + rvu_disable_afvf_intr(rvu); + rvu_mbox_destroy(&rvu->afvf_wq_info); + pci_disable_sriov(rvu->pdev); +} + +static void rvu_update_module_params(struct rvu *rvu) +{ + const char *default_pfl_name = "default"; + + strscpy(rvu->mkex_pfl_name, + mkex_profile ? 
mkex_profile : default_pfl_name, MKEX_NAME_LEN); +} + static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev = &pdev->dev; @@ -1680,6 +2424,9 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto err_release_regions; } + /* Store module params in rvu structure */ + rvu_update_module_params(rvu); + /* Check which blocks the HW supports */ rvu_check_block_implemented(rvu); @@ -1689,24 +2436,35 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (err) goto err_release_regions; - err = rvu_mbox_init(rvu); + /* Init mailbox btw AF and PFs */ + err = rvu_mbox_init(rvu, &rvu->afpf_wq_info, TYPE_AFPF, + rvu->hw->total_pfs, rvu_afpf_mbox_handler, + rvu_afpf_mbox_up_handler); if (err) goto err_hwsetup; - err = rvu_cgx_probe(rvu); + err = rvu_flr_init(rvu); if (err) goto err_mbox; err = rvu_register_interrupts(rvu); if (err) - goto err_cgx; + goto err_flr; + + /* Enable AF's VFs (if any) */ + err = rvu_enable_sriov(rvu); + if (err) + goto err_irq; return 0; -err_cgx: - rvu_cgx_wq_destroy(rvu); +err_irq: + rvu_unregister_interrupts(rvu); +err_flr: + rvu_flr_wq_destroy(rvu); err_mbox: - rvu_mbox_destroy(rvu); + rvu_mbox_destroy(&rvu->afpf_wq_info); err_hwsetup: + rvu_cgx_exit(rvu); rvu_reset_all_blocks(rvu); rvu_free_hw_resources(rvu); err_release_regions: @@ -1725,8 +2483,10 @@ static void rvu_remove(struct pci_dev *pdev) struct rvu *rvu = pci_get_drvdata(pdev); rvu_unregister_interrupts(rvu); - rvu_cgx_wq_destroy(rvu); - rvu_mbox_destroy(rvu); + rvu_flr_wq_destroy(rvu); + rvu_cgx_exit(rvu); + rvu_mbox_destroy(&rvu->afpf_wq_info); + rvu_disable_sriov(rvu); rvu_reset_all_blocks(rvu); rvu_free_hw_resources(rvu); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 2c0580cd2807..c9d60b0554c0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -11,6 +11,7 @@ #ifndef RVU_H #define RVU_H +#include #include "rvu_struct.h" #include "common.h" #include "mbox.h" @@ -18,6 +19,9 @@ /* PCI device IDs */ #define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065 +/* Subsystem Device ID */ +#define PCI_SUBSYS_DEVID_96XX 0xB200 + /* PCI BAR nos */ #define PCI_AF_REG_BAR_NUM 0 #define PCI_PF_REG_BAR_NUM 2 @@ -64,7 +68,7 @@ struct nix_mcast { struct qmem *mcast_buf; int replay_pkind; int next_free_mce; - spinlock_t mce_lock; /* Serialize MCE updates */ + struct mutex mce_lock; /* Serialize MCE updates */ }; struct nix_mce_list { @@ -74,15 +78,27 @@ struct nix_mce_list { }; struct npc_mcam { - spinlock_t lock; /* MCAM entries and counters update lock */ + struct rsrc_bmap counters; + struct mutex lock; /* MCAM entries and counters update lock */ + unsigned long *bmap; /* bitmap, 0 => bmap_entries */ + unsigned long *bmap_reverse; /* Reverse bitmap, bmap_entries => 0 */ + u16 bmap_entries; /* Number of unreserved MCAM entries */ + u16 bmap_fcnt; /* MCAM entries free count */ + u16 *entry2pfvf_map; + u16 *entry2cntr_map; + u16 *cntr2pfvf_map; + u16 *cntr_refcnt; u8 keysize; /* MCAM keysize 112/224/448 bits */ u8 banks; /* Number of MCAM banks */ u8 banks_per_entry;/* Number of keywords in key */ u16 banksize; /* Number of MCAM entries in each bank */ u16 total_entries; /* Total number of MCAM entries */ - u16 entries; /* Total minus reserved for NIX LFs */ u16 nixlf_offset; /* Offset of nixlf rsvd uncast entries */ u16 pf_offset; /* Offset of PF's rsvd bcast, promisc entries */ + u16 lprio_count; + u16 
lprio_start; + u16 hprio_count; + u16 hprio_end; }; /* Structure for per RVU func info ie PF/VF */ @@ -122,18 +138,35 @@ struct rvu_pfvf { u16 tx_chan_base; u8 rx_chan_cnt; /* total number of RX channels */ u8 tx_chan_cnt; /* total number of TX channels */ + u16 maxlen; + u16 minlen; u8 mac_addr[ETH_ALEN]; /* MAC address of this PF/VF */ /* Broadcast pkt replication info */ u16 bcast_mce_idx; struct nix_mce_list bcast_mce_list; + + /* VLAN offload */ + struct mcam_entry entry; + int rxvlan_index; + bool rxvlan; }; struct nix_txsch { struct rsrc_bmap schq; u8 lvl; - u16 *pfvf_map; +#define NIX_TXSCHQ_TL1_CFG_DONE BIT_ULL(0) +#define TXSCH_MAP_FUNC(__pfvf_map) ((__pfvf_map) & 0xFFFF) +#define TXSCH_MAP_FLAGS(__pfvf_map) ((__pfvf_map) >> 16) +#define TXSCH_MAP(__func, __flags) (((__func) & 0xFFFF) | ((__flags) << 16)) + u32 *pfvf_map; +}; + +struct nix_mark_format { + u8 total; + u8 in_use; + u32 *cfg; }; struct npc_pkind { @@ -141,9 +174,23 @@ struct npc_pkind { u32 *pfchan_map; }; +struct nix_flowkey { +#define NIX_FLOW_KEY_ALG_MAX 32 + u32 flowkey[NIX_FLOW_KEY_ALG_MAX]; + int in_use; +}; + +struct nix_lso { + u8 total; + u8 in_use; +}; + struct nix_hw { struct nix_txsch txsch[NIX_TXSCH_LVL_CNT]; /* Tx schedulers */ struct nix_mcast mcast; + struct nix_flowkey flowkey; + struct nix_mark_format mark_format; + struct nix_lso lso; }; struct rvu_hwinfo { @@ -164,6 +211,16 @@ struct rvu_hwinfo { struct npc_mcam mcam; }; +struct mbox_wq_info { + struct otx2_mbox mbox; + struct rvu_work *mbox_wrk; + + struct otx2_mbox mbox_up; + struct rvu_work *mbox_wrk_up; + + struct workqueue_struct *mbox_wq; +}; + struct rvu { void __iomem *afreg_base; void __iomem *pfreg_base; @@ -172,14 +229,17 @@ struct rvu { struct rvu_hwinfo *hw; struct rvu_pfvf *pf; struct rvu_pfvf *hwvf; - spinlock_t rsrc_lock; /* Serialize resource alloc/free */ + struct mutex rsrc_lock; /* Serialize resource alloc/free */ + int vfs; /* Number of VFs attached to RVU */ /* Mbox */ - struct otx2_mbox mbox; - struct rvu_work *mbox_wrk; - struct otx2_mbox mbox_up; - struct rvu_work *mbox_wrk_up; - struct workqueue_struct *mbox_wq; + struct mbox_wq_info afpf_wq_info; + struct mbox_wq_info afvf_wq_info; + + /* PF FLR */ + struct rvu_work *flr_wrk; + struct workqueue_struct *flr_wq; + struct mutex flr_lock; /* Serialize FLRs */ /* MSI-X */ u16 num_vec; @@ -190,7 +250,7 @@ struct rvu { /* CGX */ #define PF_CGXMAP_BASE 1 /* PF 0 is reserved for RVU PF */ u8 cgx_mapped_pfs; - u8 cgx_cnt; /* available cgx ports */ + u8 cgx_cnt_max; /* CGX port count max */ u8 *pf2cgxlmac_map; /* pf to cgx_lmac map */ u16 *cgxlmac2pf_map; /* bitmap of mapped pfs for * every cgx lmac port @@ -201,6 +261,8 @@ struct rvu { struct workqueue_struct *cgx_evh_wq; spinlock_t cgx_evq_lock; /* cgx event queue lock */ struct list_head cgx_evq_head; /* cgx event queue head */ + + char mkex_pfl_name[MKEX_NAME_LEN]; /* Configured MKEX profile name */ }; static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val) @@ -223,9 +285,22 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset) return readq(rvu->pfreg_base + offset); } +static inline bool is_rvu_9xxx_A0(struct rvu *rvu) +{ + struct pci_dev *pdev = rvu->pdev; + + return (pdev->revision == 0x00) && + (pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX); +} + /* Function Prototypes * RVU */ +static inline int is_afvf(u16 pcifunc) +{ + return !(pcifunc & ~RVU_PFVF_FUNC_MASK); +} + int rvu_alloc_bitmap(struct rsrc_bmap *rsrc); int rvu_alloc_rsrc(struct rsrc_bmap *rsrc); void rvu_free_rsrc(struct 
rsrc_bmap *rsrc, int id); @@ -236,6 +311,7 @@ int rvu_get_pf(u16 pcifunc); struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc); void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf); bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr); +bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype); int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot); int rvu_lf_reset(struct rvu *rvu, struct rvu_block *block, int lf); int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc); @@ -266,89 +342,110 @@ static inline void rvu_get_cgx_lmac_id(u8 map, u8 *cgx_id, u8 *lmac_id) *lmac_id = (map & 0xF); } -int rvu_cgx_probe(struct rvu *rvu); -void rvu_cgx_wq_destroy(struct rvu *rvu); +int rvu_cgx_init(struct rvu *rvu); +int rvu_cgx_exit(struct rvu *rvu); void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu); int rvu_cgx_config_rxtx(struct rvu *rvu, u16 pcifunc, bool start); -int rvu_mbox_handler_CGX_START_RXTX(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_start_rxtx(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_CGX_STOP_RXTX(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_stop_rxtx(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_CGX_STATS(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_stats(struct rvu *rvu, struct msg_req *req, struct cgx_stats_rsp *rsp); -int rvu_mbox_handler_CGX_MAC_ADDR_SET(struct rvu *rvu, +int rvu_mbox_handler_cgx_mac_addr_set(struct rvu *rvu, struct cgx_mac_addr_set_or_get *req, struct cgx_mac_addr_set_or_get *rsp); -int rvu_mbox_handler_CGX_MAC_ADDR_GET(struct rvu *rvu, +int rvu_mbox_handler_cgx_mac_addr_get(struct rvu *rvu, struct cgx_mac_addr_set_or_get *req, struct cgx_mac_addr_set_or_get *rsp); -int rvu_mbox_handler_CGX_PROMISC_ENABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_promisc_enable(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_CGX_PROMISC_DISABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_promisc_disable(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_CGX_START_LINKEVENTS(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_start_linkevents(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_CGX_STOP_LINKEVENTS(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_stop_linkevents(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_CGX_GET_LINKINFO(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_get_linkinfo(struct rvu *rvu, struct msg_req *req, struct cgx_link_info_msg *rsp); -int rvu_mbox_handler_CGX_INTLBK_ENABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_intlbk_enable(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_CGX_INTLBK_DISABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_intlbk_disable(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); /* NPA APIs */ int rvu_npa_init(struct rvu *rvu); void rvu_npa_freemem(struct rvu *rvu); -int rvu_mbox_handler_NPA_AQ_ENQ(struct rvu *rvu, +void rvu_npa_lf_teardown(struct rvu *rvu, u16 pcifunc, int npalf); +int rvu_mbox_handler_npa_aq_enq(struct rvu *rvu, struct npa_aq_enq_req *req, struct npa_aq_enq_rsp *rsp); -int rvu_mbox_handler_NPA_HWCTX_DISABLE(struct rvu *rvu, +int rvu_mbox_handler_npa_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req 
*req, struct msg_rsp *rsp); -int rvu_mbox_handler_NPA_LF_ALLOC(struct rvu *rvu, +int rvu_mbox_handler_npa_lf_alloc(struct rvu *rvu, struct npa_lf_alloc_req *req, struct npa_lf_alloc_rsp *rsp); -int rvu_mbox_handler_NPA_LF_FREE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_npa_lf_free(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); /* NIX APIs */ +bool is_nixlf_attached(struct rvu *rvu, u16 pcifunc); int rvu_nix_init(struct rvu *rvu); +int rvu_nix_reserve_mark_format(struct rvu *rvu, struct nix_hw *nix_hw, + int blkaddr, u32 cfg); void rvu_nix_freemem(struct rvu *rvu); int rvu_get_nixlf_count(struct rvu *rvu); -int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu, +void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int npalf); +int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu, struct nix_lf_alloc_req *req, struct nix_lf_alloc_rsp *rsp); -int rvu_mbox_handler_NIX_LF_FREE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_AQ_ENQ(struct rvu *rvu, +int rvu_mbox_handler_nix_aq_enq(struct rvu *rvu, struct nix_aq_enq_req *req, struct nix_aq_enq_rsp *rsp); -int rvu_mbox_handler_NIX_HWCTX_DISABLE(struct rvu *rvu, +int rvu_mbox_handler_nix_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu, +int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu, struct nix_txsch_alloc_req *req, struct nix_txsch_alloc_rsp *rsp); -int rvu_mbox_handler_NIX_TXSCH_FREE(struct rvu *rvu, +int rvu_mbox_handler_nix_txsch_free(struct rvu *rvu, struct nix_txsch_free_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu, +int rvu_mbox_handler_nix_txschq_cfg(struct rvu *rvu, struct nix_txschq_config *req, struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_STATS_RST(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_nix_stats_rst(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_VTAG_CFG(struct rvu *rvu, +int rvu_mbox_handler_nix_vtag_cfg(struct rvu *rvu, struct nix_vtag_config *req, struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_RSS_FLOWKEY_CFG(struct rvu *rvu, +int rvu_mbox_handler_nix_rxvlan_alloc(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_nix_rss_flowkey_cfg(struct rvu *rvu, struct nix_rss_flowkey_cfg *req, - struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_SET_MAC_ADDR(struct rvu *rvu, + struct nix_rss_flowkey_cfg_rsp *rsp); +int rvu_mbox_handler_nix_set_mac_addr(struct rvu *rvu, struct nix_set_mac_addr *req, struct msg_rsp *rsp); -int rvu_mbox_handler_NIX_SET_RX_MODE(struct rvu *rvu, struct nix_rx_mode *req, +int rvu_mbox_handler_nix_set_rx_mode(struct rvu *rvu, struct nix_rx_mode *req, struct msg_rsp *rsp); +int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_nix_lf_start_rx(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_nix_mark_format_cfg(struct rvu *rvu, + struct nix_mark_format_cfg *req, + struct nix_mark_format_cfg_rsp *rsp); +int rvu_mbox_handler_nix_set_rx_cfg(struct rvu *rvu, struct nix_rx_cfg *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_nix_lso_format_cfg(struct rvu *rvu, + struct nix_lso_format_cfg *req, + struct nix_lso_format_cfg_rsp *rsp); /* NPC APIs */ int 
rvu_npc_init(struct rvu *rvu); @@ -360,9 +457,48 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc, void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf, u64 chan, bool allmulti); void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf); +void rvu_npc_enable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf); void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc, int nixlf, u64 chan); +int rvu_npc_update_rxvlan(struct rvu *rvu, u16 pcifunc, int nixlf); void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf); +void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf); +void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf); void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf, int group, int alg_idx, int mcam_index); +int rvu_mbox_handler_npc_mcam_alloc_entry(struct rvu *rvu, + struct npc_mcam_alloc_entry_req *req, + struct npc_mcam_alloc_entry_rsp *rsp); +int rvu_mbox_handler_npc_mcam_free_entry(struct rvu *rvu, + struct npc_mcam_free_entry_req *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_npc_mcam_write_entry(struct rvu *rvu, + struct npc_mcam_write_entry_req *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_npc_mcam_ena_entry(struct rvu *rvu, + struct npc_mcam_ena_dis_entry_req *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_npc_mcam_dis_entry(struct rvu *rvu, + struct npc_mcam_ena_dis_entry_req *req, + struct msg_rsp *rsp); +int rvu_mbox_handler_npc_mcam_shift_entry(struct rvu *rvu, + struct npc_mcam_shift_entry_req *req, + struct npc_mcam_shift_entry_rsp *rsp); +int rvu_mbox_handler_npc_mcam_alloc_counter(struct rvu *rvu, + struct npc_mcam_alloc_counter_req *req, + struct npc_mcam_alloc_counter_rsp *rsp); +int rvu_mbox_handler_npc_mcam_free_counter(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp); +int rvu_mbox_handler_npc_mcam_clear_counter(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp); +int rvu_mbox_handler_npc_mcam_unmap_counter(struct rvu *rvu, + struct npc_mcam_unmap_counter_req *req, struct msg_rsp *rsp); +int rvu_mbox_handler_npc_mcam_counter_stats(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, + struct npc_mcam_oper_counter_rsp *rsp); +int rvu_mbox_handler_npc_mcam_alloc_and_write_entry(struct rvu *rvu, + struct npc_mcam_alloc_and_write_entry_req *req, + struct npc_mcam_alloc_and_write_entry_rsp *rsp); +int rvu_mbox_handler_npc_get_kex_cfg(struct rvu *rvu, struct msg_req *req, + struct npc_get_kex_cfg_rsp *rsp); #endif /* RVU_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 188185c15b4a..7d7133c5f799 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -20,14 +20,14 @@ struct cgx_evq_entry { struct cgx_link_event link_event; }; -#define M(_name, _id, _req_type, _rsp_type) \ +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ static struct _req_type __maybe_unused \ -*otx2_mbox_alloc_msg_ ## _name(struct rvu *rvu, int devid) \ +*otx2_mbox_alloc_msg_ ## _fn_name(struct rvu *rvu, int devid) \ { \ struct _req_type *req; \ \ req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \ - &rvu->mbox_up, devid, sizeof(struct _req_type), \ + &rvu->afpf_wq_info.mbox_up, devid, sizeof(struct _req_type), \ sizeof(struct _rsp_type)); \ if (!req) \ return NULL; \ @@ -52,7 +52,7 @@ static inline u8 
cgxlmac_id_to_bmap(u8 cgx_id, u8 lmac_id) void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu) { - if (cgx_id >= rvu->cgx_cnt) + if (cgx_id >= rvu->cgx_cnt_max) return NULL; return rvu->cgx_idmap[cgx_id]; @@ -61,38 +61,40 @@ void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu) static int rvu_map_cgx_lmac_pf(struct rvu *rvu) { struct npc_pkind *pkind = &rvu->hw->pkind; - int cgx_cnt = rvu->cgx_cnt; + int cgx_cnt_max = rvu->cgx_cnt_max; int cgx, lmac_cnt, lmac; int pf = PF_CGXMAP_BASE; int size, free_pkind; - if (!cgx_cnt) + if (!cgx_cnt_max) return 0; - if (cgx_cnt > 0xF || MAX_LMAC_PER_CGX > 0xF) + if (cgx_cnt_max > 0xF || MAX_LMAC_PER_CGX > 0xF) return -EINVAL; /* Alloc map table * An additional entry is required since PF id starts from 1 and * hence entry at offset 0 is invalid. */ - size = (cgx_cnt * MAX_LMAC_PER_CGX + 1) * sizeof(u8); - rvu->pf2cgxlmac_map = devm_kzalloc(rvu->dev, size, GFP_KERNEL); + size = (cgx_cnt_max * MAX_LMAC_PER_CGX + 1) * sizeof(u8); + rvu->pf2cgxlmac_map = devm_kmalloc(rvu->dev, size, GFP_KERNEL); if (!rvu->pf2cgxlmac_map) return -ENOMEM; - /* Initialize offset 0 with an invalid cgx and lmac id */ - rvu->pf2cgxlmac_map[0] = 0xFF; + /* Initialize all entries with an invalid cgx and lmac id */ + memset(rvu->pf2cgxlmac_map, 0xFF, size); /* Reverse map table */ rvu->cgxlmac2pf_map = devm_kzalloc(rvu->dev, - cgx_cnt * MAX_LMAC_PER_CGX * sizeof(u16), + cgx_cnt_max * MAX_LMAC_PER_CGX * sizeof(u16), GFP_KERNEL); if (!rvu->cgxlmac2pf_map) return -ENOMEM; rvu->cgx_mapped_pfs = 0; - for (cgx = 0; cgx < cgx_cnt; cgx++) { + for (cgx = 0; cgx < cgx_cnt_max; cgx++) { + if (!rvu_cgx_pdata(cgx, rvu)) + continue; lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu)); for (lmac = 0; lmac < lmac_cnt; lmac++, pf++) { rvu->pf2cgxlmac_map[pf] = cgxlmac_id_to_bmap(cgx, lmac); @@ -177,12 +179,12 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu) } /* Send mbox message to PF */ - msg = otx2_mbox_alloc_msg_CGX_LINK_EVENT(rvu, pfid); + msg = otx2_mbox_alloc_msg_cgx_link_event(rvu, pfid); if (!msg) continue; msg->link_info = *linfo; - otx2_mbox_msg_send(&rvu->mbox_up, pfid); - err = otx2_mbox_wait_for_rsp(&rvu->mbox_up, pfid); + otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pfid); + err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid); if (err) dev_warn(rvu->dev, "notification to pf %d failed\n", pfid); @@ -216,7 +218,7 @@ static void cgx_evhandler_task(struct work_struct *work) } while (1); } -static void cgx_lmac_event_handler_init(struct rvu *rvu) +static int cgx_lmac_event_handler_init(struct rvu *rvu) { struct cgx_event_cb cb; int cgx, lmac, err; @@ -228,14 +230,16 @@ static void cgx_lmac_event_handler_init(struct rvu *rvu) rvu->cgx_evh_wq = alloc_workqueue("rvu_evh_wq", 0, 0); if (!rvu->cgx_evh_wq) { dev_err(rvu->dev, "alloc workqueue failed"); - return; + return -ENOMEM; } cb.notify_link_chg = cgx_lmac_postevent; /* link change call back */ cb.data = rvu; - for (cgx = 0; cgx < rvu->cgx_cnt; cgx++) { + for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) { cgxd = rvu_cgx_pdata(cgx, rvu); + if (!cgxd) + continue; for (lmac = 0; lmac < cgx_get_lmac_cnt(cgxd); lmac++) { err = cgx_lmac_evh_register(&cb, cgxd, lmac); if (err) @@ -244,9 +248,11 @@ static void cgx_lmac_event_handler_init(struct rvu *rvu) cgx, lmac); } } + + return 0; } -void rvu_cgx_wq_destroy(struct rvu *rvu) +static void rvu_cgx_wq_destroy(struct rvu *rvu) { if (rvu->cgx_evh_wq) { flush_workqueue(rvu->cgx_evh_wq); @@ -255,25 +261,28 @@ void rvu_cgx_wq_destroy(struct rvu *rvu) } } -int 
rvu_cgx_probe(struct rvu *rvu) +int rvu_cgx_init(struct rvu *rvu) { - int i, err; + int cgx, err; + void *cgxd; - /* find available cgx ports */ - rvu->cgx_cnt = cgx_get_cgx_cnt(); - if (!rvu->cgx_cnt) { + /* CGX port id starts from 0 and are not necessarily contiguous + * Hence we allocate resources based on the maximum port id value. + */ + rvu->cgx_cnt_max = cgx_get_cgxcnt_max(); + if (!rvu->cgx_cnt_max) { dev_info(rvu->dev, "No CGX devices found!\n"); return -ENODEV; } - rvu->cgx_idmap = devm_kzalloc(rvu->dev, rvu->cgx_cnt * sizeof(void *), - GFP_KERNEL); + rvu->cgx_idmap = devm_kzalloc(rvu->dev, rvu->cgx_cnt_max * + sizeof(void *), GFP_KERNEL); if (!rvu->cgx_idmap) return -ENOMEM; /* Initialize the cgxdata table */ - for (i = 0; i < rvu->cgx_cnt; i++) - rvu->cgx_idmap[i] = cgx_get_pdata(i); + for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) + rvu->cgx_idmap[cgx] = cgx_get_pdata(cgx); /* Map CGX LMAC interfaces to RVU PFs */ err = rvu_map_cgx_lmac_pf(rvu); @@ -281,7 +290,47 @@ int rvu_cgx_probe(struct rvu *rvu) return err; /* Register for CGX events */ - cgx_lmac_event_handler_init(rvu); + err = cgx_lmac_event_handler_init(rvu); + if (err) + return err; + + /* Ensure event handler registration is completed, before + * we turn on the links + */ + mb(); + + /* Do link up for all CGX ports */ + for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) { + cgxd = rvu_cgx_pdata(cgx, rvu); + if (!cgxd) + continue; + err = cgx_lmac_linkup_start(cgxd); + if (err) + dev_err(rvu->dev, + "Link up process failed to start on cgx %d\n", + cgx); + } + + return 0; +} + +int rvu_cgx_exit(struct rvu *rvu) +{ + int cgx, lmac; + void *cgxd; + + for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) { + cgxd = rvu_cgx_pdata(cgx, rvu); + if (!cgxd) + continue; + for (lmac = 0; lmac < cgx_get_lmac_cnt(cgxd); lmac++) + cgx_lmac_evh_unregister(cgxd, lmac); + } + + /* Ensure event handler unregister is completed */ + mb(); + + rvu_cgx_wq_destroy(rvu); return 0; } @@ -303,21 +352,21 @@ int rvu_cgx_config_rxtx(struct rvu *rvu, u16 pcifunc, bool start) return 0; } -int rvu_mbox_handler_CGX_START_RXTX(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_start_rxtx(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { rvu_cgx_config_rxtx(rvu, req->hdr.pcifunc, true); return 0; } -int rvu_mbox_handler_CGX_STOP_RXTX(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_stop_rxtx(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { rvu_cgx_config_rxtx(rvu, req->hdr.pcifunc, false); return 0; } -int rvu_mbox_handler_CGX_STATS(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_stats(struct rvu *rvu, struct msg_req *req, struct cgx_stats_rsp *rsp) { int pf = rvu_get_pf(req->hdr.pcifunc); @@ -354,7 +403,7 @@ int rvu_mbox_handler_CGX_STATS(struct rvu *rvu, struct msg_req *req, return 0; } -int rvu_mbox_handler_CGX_MAC_ADDR_SET(struct rvu *rvu, +int rvu_mbox_handler_cgx_mac_addr_set(struct rvu *rvu, struct cgx_mac_addr_set_or_get *req, struct cgx_mac_addr_set_or_get *rsp) { @@ -368,7 +417,7 @@ int rvu_mbox_handler_CGX_MAC_ADDR_SET(struct rvu *rvu, return 0; } -int rvu_mbox_handler_CGX_MAC_ADDR_GET(struct rvu *rvu, +int rvu_mbox_handler_cgx_mac_addr_get(struct rvu *rvu, struct cgx_mac_addr_set_or_get *req, struct cgx_mac_addr_set_or_get *rsp) { @@ -387,7 +436,7 @@ int rvu_mbox_handler_CGX_MAC_ADDR_GET(struct rvu *rvu, return 0; } -int rvu_mbox_handler_CGX_PROMISC_ENABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_promisc_enable(struct rvu *rvu, struct msg_req *req, struct msg_rsp 
*rsp) { u16 pcifunc = req->hdr.pcifunc; @@ -407,7 +456,7 @@ int rvu_mbox_handler_CGX_PROMISC_ENABLE(struct rvu *rvu, struct msg_req *req, return 0; } -int rvu_mbox_handler_CGX_PROMISC_DISABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_promisc_disable(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { u16 pcifunc = req->hdr.pcifunc; @@ -451,21 +500,21 @@ static int rvu_cgx_config_linkevents(struct rvu *rvu, u16 pcifunc, bool en) return 0; } -int rvu_mbox_handler_CGX_START_LINKEVENTS(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_start_linkevents(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { rvu_cgx_config_linkevents(rvu, req->hdr.pcifunc, true); return 0; } -int rvu_mbox_handler_CGX_STOP_LINKEVENTS(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_stop_linkevents(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { rvu_cgx_config_linkevents(rvu, req->hdr.pcifunc, false); return 0; } -int rvu_mbox_handler_CGX_GET_LINKINFO(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_get_linkinfo(struct rvu *rvu, struct msg_req *req, struct cgx_link_info_msg *rsp) { u8 cgx_id, lmac_id; @@ -500,14 +549,14 @@ static int rvu_cgx_config_intlbk(struct rvu *rvu, u16 pcifunc, bool en) lmac_id, en); } -int rvu_mbox_handler_CGX_INTLBK_ENABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_intlbk_enable(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { rvu_cgx_config_intlbk(rvu, req->hdr.pcifunc, true); return 0; } -int rvu_mbox_handler_CGX_INTLBK_DISABLE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_cgx_intlbk_disable(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { rvu_cgx_config_intlbk(rvu, req->hdr.pcifunc, false); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index a5ab7eff2301..4a7609fd6dd0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -43,6 +43,19 @@ enum mc_buf_cnt { MC_BUF_CNT_2048, }; +enum nix_makr_fmt_indexes { + NIX_MARK_CFG_IP_DSCP_RED, + NIX_MARK_CFG_IP_DSCP_YELLOW, + NIX_MARK_CFG_IP_DSCP_YELLOW_RED, + NIX_MARK_CFG_IP_ECN_RED, + NIX_MARK_CFG_IP_ECN_YELLOW, + NIX_MARK_CFG_IP_ECN_YELLOW_RED, + NIX_MARK_CFG_VLAN_DEI_RED, + NIX_MARK_CFG_VLAN_DEI_YELLOW, + NIX_MARK_CFG_VLAN_DEI_YELLOW_RED, + NIX_MARK_CFG_MAX, +}; + /* For now considering MC resources needed for broadcast * pkt replication only. i.e 256 HWVFs + 12 PFs. */ @@ -55,6 +68,17 @@ struct mce { u16 pcifunc; }; +bool is_nixlf_attached(struct rvu *rvu, u16 pcifunc) +{ + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (!pfvf->nixlf || blkaddr < 0) + return false; + return true; +} + int rvu_get_nixlf_count(struct rvu *rvu) { struct rvu_block *block; @@ -94,11 +118,29 @@ static inline struct nix_hw *get_nix_hw(struct rvu_hwinfo *hw, int blkaddr) return NULL; } +static void nix_rx_sync(struct rvu *rvu, int blkaddr) +{ + int err; + + /*Sync all in flight RX packets to LLC/DRAM */ + rvu_write64(rvu, blkaddr, NIX_AF_RX_SW_SYNC, BIT_ULL(0)); + err = rvu_poll_reg(rvu, blkaddr, NIX_AF_RX_SW_SYNC, BIT_ULL(0), true); + if (err) + dev_err(rvu->dev, "NIX RX software sync failed\n"); + + /* As per a HW errata in 9xxx A0 silicon, HW may clear SW_SYNC[ENA] + * bit too early. Hence wait for 50us more. 
+ */ + if (is_rvu_9xxx_A0(rvu)) + usleep_range(50, 60); +} + static bool is_valid_txschq(struct rvu *rvu, int blkaddr, int lvl, u16 pcifunc, u16 schq) { struct nix_txsch *txsch; struct nix_hw *nix_hw; + u16 map_func; nix_hw = get_nix_hw(rvu->hw, blkaddr); if (!nix_hw) @@ -109,12 +151,19 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr, if (schq >= txsch->schq.max) return false; - spin_lock(&rvu->rsrc_lock); - if (txsch->pfvf_map[schq] != pcifunc) { - spin_unlock(&rvu->rsrc_lock); + mutex_lock(&rvu->rsrc_lock); + map_func = TXSCH_MAP_FUNC(txsch->pfvf_map[schq]); + mutex_unlock(&rvu->rsrc_lock); + + /* For TL1 schq, sharing across VF's of same PF is ok */ + if (lvl == NIX_TXSCH_LVL_TL1 && + rvu_get_pf(map_func) != rvu_get_pf(pcifunc)) return false; - } - spin_unlock(&rvu->rsrc_lock); + + if (lvl != NIX_TXSCH_LVL_TL1 && + map_func != pcifunc) + return false; + return true; } @@ -122,7 +171,7 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) { struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); u8 cgx_id, lmac_id; - int pkind, pf; + int pkind, pf, vf; int err; pf = rvu_get_pf(pcifunc); @@ -148,6 +197,14 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) rvu_npc_set_pkind(rvu, pkind, pfvf); break; case NIX_INTF_TYPE_LBK: + vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1; + pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(0, vf); + pfvf->tx_chan_base = vf & 0x1 ? NIX_CHAN_LBK_CHX(0, vf - 1) : + NIX_CHAN_LBK_CHX(0, vf + 1); + pfvf->rx_chan_cnt = 1; + pfvf->tx_chan_cnt = 1; + rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf, + pfvf->rx_chan_base, false); break; } @@ -168,14 +225,21 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) rvu_npc_install_bcast_match_entry(rvu, pcifunc, nixlf, pfvf->rx_chan_base); + pfvf->maxlen = NIC_HW_MIN_FRS; + pfvf->minlen = NIC_HW_MIN_FRS; return 0; } static void nix_interface_deinit(struct rvu *rvu, u16 pcifunc, u8 nixlf) { + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); int err; + pfvf->maxlen = 0; + pfvf->minlen = 0; + pfvf->rxvlan = false; + /* Remove this PF_FUNC from bcast pkt replication list */ err = nix_update_bcast_mce_list(rvu, pcifunc, false); if (err) { @@ -234,17 +298,21 @@ static void nix_setup_lso_tso_l4(struct rvu *rvu, int blkaddr, /* TCP's flags field */ field.layer = NIX_TXLAYER_OL4; field.offset = 12; - field.sizem1 = 0; /* not needed */ + field.sizem1 = 1; /* 2 bytes */ field.alg = NIX_LSOALG_TCP_FLAGS; rvu_write64(rvu, blkaddr, NIX_AF_LSO_FORMATX_FIELDX(format, (*fidx)++), *(u64 *)&field); } -static void nix_setup_lso(struct rvu *rvu, int blkaddr) +static void nix_setup_lso(struct rvu *rvu, struct nix_hw *nix_hw, int blkaddr) { u64 cfg, idx, fidx = 0; + /* Get max HW supported format indices */ + cfg = (rvu_read64(rvu, blkaddr, NIX_AF_CONST1) >> 48) & 0xFF; + nix_hw->lso.total = cfg; + /* Enable LSO */ cfg = rvu_read64(rvu, blkaddr, NIX_AF_LSO_CFG); /* For TSO, set first and middle segment flags to @@ -254,7 +322,10 @@ static void nix_setup_lso(struct rvu *rvu, int blkaddr) cfg |= (0xFFF2ULL << 32) | (0xFFF2ULL << 16); rvu_write64(rvu, blkaddr, NIX_AF_LSO_CFG, cfg | BIT_ULL(63)); - /* Configure format fields for TCPv4 segmentation offload */ + /* Setup default static LSO formats + * + * Configure format fields for TCPv4 segmentation offload + */ idx = NIX_LSO_FORMAT_IDX_TSOV4; nix_setup_lso_tso_l3(rvu, blkaddr, idx, true, &fidx); nix_setup_lso_tso_l4(rvu, blkaddr, idx, &fidx); @@ -264,6 +335,7 @@ static void nix_setup_lso(struct rvu *rvu, int 
blkaddr) rvu_write64(rvu, blkaddr, NIX_AF_LSO_FORMATX_FIELDX(idx, fidx), 0x0ULL); } + nix_hw->lso.in_use++; /* Configure format fields for TCPv6 segmentation offload */ idx = NIX_LSO_FORMAT_IDX_TSOV6; @@ -276,6 +348,7 @@ static void nix_setup_lso(struct rvu *rvu, int blkaddr) rvu_write64(rvu, blkaddr, NIX_AF_LSO_FORMATX_FIELDX(idx, fidx), 0x0ULL); } + nix_hw->lso.in_use++; } static void nix_ctx_free(struct rvu *rvu, struct rvu_pfvf *pfvf) @@ -388,9 +461,8 @@ static int rvu_nix_aq_enq_inst(struct rvu *rvu, struct nix_aq_enq_req *req, bool ena; u64 cfg; - pfvf = rvu_get_pfvf(rvu, pcifunc); blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); - if (!pfvf->nixlf || blkaddr < 0) + if (blkaddr < 0) return NIX_AF_ERR_AF_LF_INVALID; block = &hw->block[blkaddr]; @@ -400,9 +472,14 @@ static int rvu_nix_aq_enq_inst(struct rvu *rvu, struct nix_aq_enq_req *req, return NIX_AF_ERR_AQ_ENQUEUE; } + pfvf = rvu_get_pfvf(rvu, pcifunc); nixlf = rvu_get_lf(rvu, block, pcifunc, 0); - if (nixlf < 0) - return NIX_AF_ERR_AF_LF_INVALID; + + /* Skip NIXLF check for broadcast MCE entry init */ + if (!(!rsp && req->ctype == NIX_AQ_CTYPE_MCE)) { + if (!pfvf->nixlf || nixlf < 0) + return NIX_AF_ERR_AF_LF_INVALID; + } switch (req->ctype) { case NIX_AQ_CTYPE_RQ: @@ -447,7 +524,9 @@ static int rvu_nix_aq_enq_inst(struct rvu *rvu, struct nix_aq_enq_req *req, /* Check if SQ pointed SMQ belongs to this PF/VF or not */ if (req->ctype == NIX_AQ_CTYPE_SQ && - req->op != NIX_AQ_INSTOP_WRITE) { + ((req->op == NIX_AQ_INSTOP_INIT && req->sq.ena) || + (req->op == NIX_AQ_INSTOP_WRITE && + req->sq_mask.ena && req->sq_mask.smq && req->sq.ena))) { if (!is_valid_txschq(rvu, blkaddr, NIX_TXSCH_LVL_SMQ, pcifunc, req->sq.smq)) return NIX_AF_ERR_AQ_ENQUEUE; @@ -637,25 +716,25 @@ static int nix_lf_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req *req) return err; } -int rvu_mbox_handler_NIX_AQ_ENQ(struct rvu *rvu, +int rvu_mbox_handler_nix_aq_enq(struct rvu *rvu, struct nix_aq_enq_req *req, struct nix_aq_enq_rsp *rsp) { return rvu_nix_aq_enq_inst(rvu, req, rsp); } -int rvu_mbox_handler_NIX_HWCTX_DISABLE(struct rvu *rvu, +int rvu_mbox_handler_nix_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req *req, struct msg_rsp *rsp) { return nix_lf_hwctx_disable(rvu, req); } -int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu, +int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu, struct nix_lf_alloc_req *req, struct nix_lf_alloc_rsp *rsp) { - int nixlf, qints, hwctx_size, err, rc = 0; + int nixlf, qints, hwctx_size, intf, err, rc = 0; struct rvu_hwinfo *hw = rvu->hw; u16 pcifunc = req->hdr.pcifunc; struct rvu_block *block; @@ -676,6 +755,24 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu, if (nixlf < 0) return NIX_AF_ERR_AF_LF_INVALID; + /* Check if requested 'NIXLF <=> NPALF' mapping is valid */ + if (req->npa_func) { + /* If default, use 'this' NIXLF's PFFUNC */ + if (req->npa_func == RVU_DEFAULT_PF_FUNC) + req->npa_func = pcifunc; + if (!is_pffunc_map_valid(rvu, req->npa_func, BLKTYPE_NPA)) + return NIX_AF_INVAL_NPA_PF_FUNC; + } + + /* Check if requested 'NIXLF <=> SSOLF' mapping is valid */ + if (req->sso_func) { + /* If default, use 'this' NIXLF's PFFUNC */ + if (req->sso_func == RVU_DEFAULT_PF_FUNC) + req->sso_func = pcifunc; + if (!is_pffunc_map_valid(rvu, req->sso_func, BLKTYPE_SSO)) + return NIX_AF_INVAL_SSO_PF_FUNC; + } + /* If RSS is being enabled, check if requested config is valid. 
* RSS table size should be power of two, otherwise * RSS_GRP::OFFSET + adder might go beyond that group or @@ -777,21 +874,20 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu, (u64)pfvf->nix_qints_ctx->iova); rvu_write64(rvu, blkaddr, NIX_AF_LFX_QINTS_CFG(nixlf), BIT_ULL(36)); + /* Setup VLANX TPID's. + * Use VLAN1 for 802.1Q + * and VLAN0 for 802.1AD. + */ + cfg = (0x8100ULL << 16) | 0x88A8ULL; + rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_CFG(nixlf), cfg); + /* Enable LMTST for this NIX LF */ rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_CFG2(nixlf), BIT_ULL(0)); - /* Set CQE/WQE size, NPA_PF_FUNC for SQBs and also SSO_PF_FUNC - * If requester has sent a 'RVU_DEFAULT_PF_FUNC' use this NIX LF's - * PCIFUNC itself. - */ - if (req->npa_func == RVU_DEFAULT_PF_FUNC) - cfg = pcifunc; - else + /* Set CQE/WQE size, NPA_PF_FUNC for SQBs and also SSO_PF_FUNC */ + if (req->npa_func) cfg = req->npa_func; - - if (req->sso_func == RVU_DEFAULT_PF_FUNC) - cfg |= (u64)pcifunc << 16; - else + if (req->sso_func) cfg |= (u64)req->sso_func << 16; cfg |= (u64)req->xqe_sz << 33; @@ -800,10 +896,14 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu, /* Config Rx pkt length, csum checks and apad enable / disable */ rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_CFG(nixlf), req->rx_cfg); - err = nix_interface_init(rvu, pcifunc, NIX_INTF_TYPE_CGX, nixlf); + intf = is_afvf(pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX; + err = nix_interface_init(rvu, pcifunc, intf, nixlf); if (err) goto free_mem; + /* Disable NPC entries as NIXLF's contexts are not initialized yet */ + rvu_npc_disable_default_entries(rvu, pcifunc, nixlf); + goto exit; free_mem: @@ -823,10 +923,18 @@ exit: rsp->tx_chan_cnt = pfvf->tx_chan_cnt; rsp->lso_tsov4_idx = NIX_LSO_FORMAT_IDX_TSOV4; rsp->lso_tsov6_idx = NIX_LSO_FORMAT_IDX_TSOV6; + /* Get HW supported stat count */ + cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1); + rsp->lf_rx_stats = ((cfg >> 32) & 0xFF); + rsp->lf_tx_stats = ((cfg >> 24) & 0xFF); + /* Get count of CQ IRQs and error IRQs supported per LF */ + cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST2); + rsp->qints = ((cfg >> 12) & 0xFFF); + rsp->cints = ((cfg >> 24) & 0xFFF); return rc; } -int rvu_mbox_handler_NIX_LF_FREE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { struct rvu_hwinfo *hw = rvu->hw; @@ -860,6 +968,41 @@ int rvu_mbox_handler_NIX_LF_FREE(struct rvu *rvu, struct msg_req *req, return 0; } +int rvu_mbox_handler_nix_mark_format_cfg(struct rvu *rvu, + struct nix_mark_format_cfg *req, + struct nix_mark_format_cfg_rsp *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + struct nix_hw *nix_hw; + struct rvu_pfvf *pfvf; + int blkaddr, rc; + u32 cfg; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (!pfvf->nixlf || blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return -EINVAL; + + cfg = (((u32)req->offset & 0x7) << 16) | + (((u32)req->y_mask & 0xF) << 12) | + (((u32)req->y_val & 0xF) << 8) | + (((u32)req->r_mask & 0xF) << 4) | ((u32)req->r_val & 0xF); + + rc = rvu_nix_reserve_mark_format(rvu, nix_hw, blkaddr, cfg); + if (rc < 0) { + dev_err(rvu->dev, "No mark_format_ctl for (pf:%d, vf:%d)", + rvu_get_pf(pcifunc), pcifunc & RVU_PFVF_FUNC_MASK); + return NIX_AF_ERR_MARK_CFG_FAIL; + } + + rsp->mark_format_idx = rc; + return 0; +} + /* Disable shaping of pkts by a scheduler queue * at a given scheduler level. 
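* (Editor's note: judging by nix_tl1_default_cfg() further below, which
* zeroes NIX_AF_TL1X_CIR for TL1, this presumably amounts to clearing
* the queue's committed/peak rate (CIR/PIR) shaping registers.)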
*/ @@ -918,7 +1061,74 @@ static void nix_reset_tx_linkcfg(struct rvu *rvu, int blkaddr, NIX_AF_TL3_TL2X_LINKX_CFG(schq, link), 0x00); } -int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu, +static int +rvu_get_tl1_schqs(struct rvu *rvu, int blkaddr, u16 pcifunc, + u16 *schq_list, u16 *schq_cnt) +{ + struct nix_txsch *txsch; + struct nix_hw *nix_hw; + struct rvu_pfvf *pfvf; + u8 cgx_id, lmac_id; + u16 schq_base; + u32 *pfvf_map; + int pf, intf; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return -ENODEV; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + txsch = &nix_hw->txsch[NIX_TXSCH_LVL_TL1]; + pfvf_map = txsch->pfvf_map; + pf = rvu_get_pf(pcifunc); + + /* static allocation as two TL1s per link */ + intf = is_afvf(pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX; + + switch (intf) { + case NIX_INTF_TYPE_CGX: + rvu_get_cgx_lmac_id(pfvf->cgx_lmac, &cgx_id, &lmac_id); + schq_base = (cgx_id * MAX_LMAC_PER_CGX + lmac_id) * 2; + break; + case NIX_INTF_TYPE_LBK: + schq_base = rvu->cgx_cnt_max * MAX_LMAC_PER_CGX * 2; + break; + default: + return -ENODEV; + } + + if (schq_base + 1 > txsch->schq.max) + return -ENODEV; + + /* init pfvf_map as we store flags */ + if (pfvf_map[schq_base] == U32_MAX) { + pfvf_map[schq_base] = + TXSCH_MAP((pf << RVU_PFVF_PF_SHIFT), 0); + pfvf_map[schq_base + 1] = + TXSCH_MAP((pf << RVU_PFVF_PF_SHIFT), 0); + + /* One-time reset for TL1 */ + nix_reset_tx_linkcfg(rvu, blkaddr, + NIX_TXSCH_LVL_TL1, schq_base); + nix_reset_tx_shaping(rvu, blkaddr, + NIX_TXSCH_LVL_TL1, schq_base); + + nix_reset_tx_linkcfg(rvu, blkaddr, + NIX_TXSCH_LVL_TL1, schq_base + 1); + nix_reset_tx_shaping(rvu, blkaddr, + NIX_TXSCH_LVL_TL1, schq_base + 1); + } + + if (schq_list && schq_cnt) { + schq_list[0] = schq_base; + schq_list[1] = schq_base + 1; + *schq_cnt = 2; + } + + return 0; +} + +int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu, struct nix_txsch_alloc_req *req, struct nix_txsch_alloc_rsp *rsp) { @@ -928,6 +1138,7 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu, struct rvu_pfvf *pfvf; struct nix_hw *nix_hw; int blkaddr, rc = 0; + u32 *pfvf_map; u16 schq; pfvf = rvu_get_pfvf(rvu, pcifunc); @@ -939,17 +1150,27 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu, if (!nix_hw) return -EINVAL; - spin_lock(&rvu->rsrc_lock); + mutex_lock(&rvu->rsrc_lock); for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { txsch = &nix_hw->txsch[lvl]; req_schq = req->schq_contig[lvl] + req->schq[lvl]; + pfvf_map = txsch->pfvf_map; + + if (!req_schq) + continue; /* There are only 28 TL1s */ - if (lvl == NIX_TXSCH_LVL_TL1 && req_schq > txsch->schq.max) - goto err; + if (lvl == NIX_TXSCH_LVL_TL1) { + if (req->schq_contig[lvl] || + req->schq[lvl] > 2 || + rvu_get_tl1_schqs(rvu, blkaddr, + pcifunc, NULL, NULL)) + goto err; + continue; + } /* Check if request is valid */ - if (!req_schq || req_schq > MAX_TXSCHQ_PER_FUNC) + if (req_schq > MAX_TXSCHQ_PER_FUNC) goto err; /* If contiguous queues are needed, check for availability */ @@ -965,16 +1186,32 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu, for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { txsch = &nix_hw->txsch[lvl]; rsp->schq_contig[lvl] = req->schq_contig[lvl]; + pfvf_map = txsch->pfvf_map; rsp->schq[lvl] = req->schq[lvl]; - schq = 0; + if (!req->schq[lvl] && !req->schq_contig[lvl]) + continue; + + /* Handle TL1 specially as its + * allocation is restricted to 2 TL1s + * per link + */ + + if (lvl == NIX_TXSCH_LVL_TL1) { + rsp->schq_contig[lvl] = 0; + rvu_get_tl1_schqs(rvu, blkaddr, pcifunc, + &rsp->schq_list[lvl][0], +
&rsp->schq[lvl]); + continue; + } + /* Alloc contiguous queues first */ if (req->schq_contig[lvl]) { schq = rvu_alloc_rsrc_contig(&txsch->schq, req->schq_contig[lvl]); for (idx = 0; idx < req->schq_contig[lvl]; idx++) { - txsch->pfvf_map[schq] = pcifunc; + pfvf_map[schq] = TXSCH_MAP(pcifunc, 0); nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq); nix_reset_tx_shaping(rvu, blkaddr, lvl, schq); rsp->schq_contig_list[lvl][idx] = schq; @@ -985,7 +1222,7 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu, /* Alloc non-contiguous queues */ for (idx = 0; idx < req->schq[lvl]; idx++) { schq = rvu_alloc_rsrc(&txsch->schq); - txsch->pfvf_map[schq] = pcifunc; + pfvf_map[schq] = TXSCH_MAP(pcifunc, 0); nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq); nix_reset_tx_shaping(rvu, blkaddr, lvl, schq); rsp->schq_list[lvl][idx] = schq; @@ -995,7 +1232,7 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu, err: rc = NIX_AF_ERR_TLX_ALLOC_FAIL; exit: - spin_unlock(&rvu->rsrc_lock); + mutex_unlock(&rvu->rsrc_lock); return rc; } @@ -1020,14 +1257,14 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc) return NIX_AF_ERR_AF_LF_INVALID; /* Disable TL2/3 queue links before SMQ flush*/ - spin_lock(&rvu->rsrc_lock); + mutex_lock(&rvu->rsrc_lock); for (lvl = NIX_TXSCH_LVL_TL4; lvl < NIX_TXSCH_LVL_CNT; lvl++) { if (lvl != NIX_TXSCH_LVL_TL2 && lvl != NIX_TXSCH_LVL_TL4) continue; txsch = &nix_hw->txsch[lvl]; for (schq = 0; schq < txsch->schq.max; schq++) { - if (txsch->pfvf_map[schq] != pcifunc) + if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc) continue; nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq); } @@ -1036,7 +1273,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc) /* Flush SMQs */ txsch = &nix_hw->txsch[NIX_TXSCH_LVL_SMQ]; for (schq = 0; schq < txsch->schq.max; schq++) { - if (txsch->pfvf_map[schq] != pcifunc) + if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc) continue; cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq)); /* Do SMQ flush and set enqueue xoff */ @@ -1054,15 +1291,21 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc) /* Now free scheduler queues to free pool */ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + /* Free all SCHQ's except TL1 as + * TL1 is shared across all VF's for a RVU PF + */ + if (lvl == NIX_TXSCH_LVL_TL1) + continue; + txsch = &nix_hw->txsch[lvl]; for (schq = 0; schq < txsch->schq.max; schq++) { - if (txsch->pfvf_map[schq] != pcifunc) + if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc) continue; rvu_free_rsrc(&txsch->schq, schq); txsch->pfvf_map[schq] = 0; } } - spin_unlock(&rvu->rsrc_lock); + mutex_unlock(&rvu->rsrc_lock); /* Sync cached info for this LF in NDC-TX to LLC/DRAM */ rvu_write64(rvu, blkaddr, NIX_AF_NDC_TX_SYNC, BIT_ULL(12) | nixlf); @@ -1073,11 +1316,81 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc) return 0; } -int rvu_mbox_handler_NIX_TXSCH_FREE(struct rvu *rvu, +static int nix_txschq_free_one(struct rvu *rvu, + struct nix_txsch_free_req *req) +{ + int lvl, schq, nixlf, blkaddr, rc; + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc = req->hdr.pcifunc; + struct nix_txsch *txsch; + struct nix_hw *nix_hw; + u32 *pfvf_map; + u64 cfg; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return -EINVAL; + + nixlf = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0); + if (nixlf < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + lvl = req->schq_lvl; + schq = req->schq; + txsch = &nix_hw->txsch[lvl]; + + /* 
Don't allow freeing TL1 */ + if (lvl > NIX_TXSCH_LVL_TL2 || + schq >= txsch->schq.max) + goto err; + + pfvf_map = txsch->pfvf_map; + mutex_lock(&rvu->rsrc_lock); + + if (TXSCH_MAP_FUNC(pfvf_map[schq]) != pcifunc) { + mutex_unlock(&rvu->rsrc_lock); + goto err; + } + + /* Flush if it is a SMQ. Onus of disabling + * TL2/3 queue links before SMQ flush is on user + */ + if (lvl == NIX_TXSCH_LVL_SMQ) { + cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq)); + /* Do SMQ flush and set enqueue xoff */ + cfg |= BIT_ULL(50) | BIT_ULL(49); + rvu_write64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq), cfg); + + /* Wait for flush to complete */ + rc = rvu_poll_reg(rvu, blkaddr, + NIX_AF_SMQX_CFG(schq), BIT_ULL(49), true); + if (rc) { + dev_err(rvu->dev, + "NIXLF%d: SMQ%d flush failed\n", nixlf, schq); + } + } + + /* Free the resource */ + rvu_free_rsrc(&txsch->schq, schq); + txsch->pfvf_map[schq] = 0; + mutex_unlock(&rvu->rsrc_lock); + return 0; +err: + return NIX_AF_ERR_TLX_INVALID; +} + +int rvu_mbox_handler_nix_txsch_free(struct rvu *rvu, struct nix_txsch_free_req *req, struct msg_rsp *rsp) { - return nix_txschq_free(rvu, req->hdr.pcifunc); + if (req->flags & TXSCHQ_FREE_ALL) + return nix_txschq_free(rvu, req->hdr.pcifunc); + else + return nix_txschq_free_one(rvu, req); } static bool is_txschq_config_valid(struct rvu *rvu, u16 pcifunc, int blkaddr, @@ -1118,16 +1431,73 @@ static bool is_txschq_config_valid(struct rvu *rvu, u16 pcifunc, int blkaddr, return true; } -int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu, +static int +nix_tl1_default_cfg(struct rvu *rvu, u16 pcifunc) +{ + u16 schq_list[2], schq_cnt, schq; + int blkaddr, idx, err = 0; + u16 map_func, map_flags; + struct nix_hw *nix_hw; + u64 reg, regval; + u32 *pfvf_map; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return -EINVAL; + + pfvf_map = nix_hw->txsch[NIX_TXSCH_LVL_TL1].pfvf_map; + + mutex_lock(&rvu->rsrc_lock); + + err = rvu_get_tl1_schqs(rvu, blkaddr, + pcifunc, schq_list, &schq_cnt); + if (err) + goto unlock; + + for (idx = 0; idx < schq_cnt; idx++) { + schq = schq_list[idx]; + map_func = TXSCH_MAP_FUNC(pfvf_map[schq]); + map_flags = TXSCH_MAP_FLAGS(pfvf_map[schq]); + + /* check if config is already done or this is pf */ + if (map_flags & NIX_TXSCHQ_TL1_CFG_DONE) + continue; + + /* default configuration */ + reg = NIX_AF_TL1X_TOPOLOGY(schq); + regval = (TXSCH_TL1_DFLT_RR_PRIO << 1); + rvu_write64(rvu, blkaddr, reg, regval); + reg = NIX_AF_TL1X_SCHEDULE(schq); + regval = TXSCH_TL1_DFLT_RR_QTM; + rvu_write64(rvu, blkaddr, reg, regval); + reg = NIX_AF_TL1X_CIR(schq); + regval = 0; + rvu_write64(rvu, blkaddr, reg, regval); + + map_flags |= NIX_TXSCHQ_TL1_CFG_DONE; + pfvf_map[schq] = TXSCH_MAP(map_func, map_flags); + } +unlock: + mutex_unlock(&rvu->rsrc_lock); + return err; +} + +int rvu_mbox_handler_nix_txschq_cfg(struct rvu *rvu, struct nix_txschq_config *req, struct msg_rsp *rsp) { + u16 schq, pcifunc = req->hdr.pcifunc; struct rvu_hwinfo *hw = rvu->hw; - u16 pcifunc = req->hdr.pcifunc; u64 reg, regval, schq_regbase; struct nix_txsch *txsch; + u16 map_func, map_flags; struct nix_hw *nix_hw; int blkaddr, idx, err; + u32 *pfvf_map; int nixlf; if (req->lvl >= NIX_TXSCH_LVL_CNT || @@ -1147,6 +1517,16 @@ int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu, return NIX_AF_ERR_AF_LF_INVALID; txsch = &nix_hw->txsch[req->lvl]; + pfvf_map = txsch->pfvf_map; + + /* VF is only allowed to trigger + * setting default cfg on TL1 + 
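+ * (Editor's note: TL1 queues are statically assigned per link and are
+ * shared by a PF and all of its VFs (see rvu_get_tl1_schqs() above),
+ * so a VF may only ask for this default setup, never arbitrary TL1
+ * writes. The pfvf_map[] words appear to pack the owning PF_FUNC in
+ * the low bits and flags such as NIX_TXSCHQ_TL1_CFG_DONE in the upper
+ * bits; that is what TXSCH_MAP_FUNC()/TXSCH_MAP_FLAGS() extract.)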
*/ + if (pcifunc & RVU_PFVF_FUNC_MASK && + req->lvl == NIX_TXSCH_LVL_TL1) { + return nix_tl1_default_cfg(rvu, pcifunc); + } + for (idx = 0; idx < req->num_regs; idx++) { reg = req->reg[idx]; regval = req->regval[idx]; @@ -1164,6 +1544,21 @@ int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu, regval |= ((u64)nixlf << 24); } + /* Mark config as done for TL1 by PF */ + if (schq_regbase >= NIX_AF_TL1X_SCHEDULE(0) && + schq_regbase <= NIX_AF_TL1X_GREEN_BYTES(0)) { + schq = TXSCHQ_IDX(reg, TXSCHQ_IDX_SHIFT); + + mutex_lock(&rvu->rsrc_lock); + + map_func = TXSCH_MAP_FUNC(pfvf_map[schq]); + map_flags = TXSCH_MAP_FLAGS(pfvf_map[schq]); + + map_flags |= NIX_TXSCHQ_TL1_CFG_DONE; + pfvf_map[schq] = TXSCH_MAP(map_func, map_flags); + mutex_unlock(&rvu->rsrc_lock); + } + rvu_write64(rvu, blkaddr, reg, regval); /* Check for SMQ flush, if so, poll for its completion */ @@ -1181,35 +1576,22 @@ int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu, static int nix_rx_vtag_cfg(struct rvu *rvu, int nixlf, int blkaddr, struct nix_vtag_config *req) { - u64 regval = 0; + u64 regval = req->vtag_size; -#define NIX_VTAGTYPE_MAX 0x8ull -#define NIX_VTAGSIZE_MASK 0x7ull -#define NIX_VTAGSTRIP_CAP_MASK 0x30ull - - if (req->rx.vtag_type >= NIX_VTAGTYPE_MAX || - req->vtag_size > VTAGSIZE_T8) + if (req->rx.vtag_type > 7 || req->vtag_size > VTAGSIZE_T8) return -EINVAL; - regval = rvu_read64(rvu, blkaddr, - NIX_AF_LFX_RX_VTAG_TYPEX(nixlf, req->rx.vtag_type)); - - if (req->rx.strip_vtag && req->rx.capture_vtag) - regval |= BIT_ULL(4) | BIT_ULL(5); - else if (req->rx.strip_vtag) + if (req->rx.capture_vtag) + regval |= BIT_ULL(5); + if (req->rx.strip_vtag) regval |= BIT_ULL(4); - else - regval &= ~(BIT_ULL(4) | BIT_ULL(5)); - - regval &= ~NIX_VTAGSIZE_MASK; - regval |= req->vtag_size & NIX_VTAGSIZE_MASK; rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_VTAG_TYPEX(nixlf, req->rx.vtag_type), regval); return 0; } -int rvu_mbox_handler_NIX_VTAG_CFG(struct rvu *rvu, +int rvu_mbox_handler_nix_vtag_cfg(struct rvu *rvu, struct nix_vtag_config *req, struct msg_rsp *rsp) { @@ -1243,7 +1625,7 @@ static int nix_setup_mce(struct rvu *rvu, int mce, u8 op, struct nix_aq_enq_req aq_req; int err; - aq_req.hdr.pcifunc = pcifunc; + aq_req.hdr.pcifunc = 0; aq_req.ctype = NIX_AQ_CTYPE_MCE; aq_req.op = op; aq_req.qidx = mce; @@ -1294,7 +1676,7 @@ static int nix_update_mce_list(struct nix_mce_list *mce_list, return 0; /* Add a new one to the list, at the tail */ - mce = kzalloc(sizeof(*mce), GFP_ATOMIC); + mce = kzalloc(sizeof(*mce), GFP_KERNEL); if (!mce) return -ENOMEM; mce->idx = idx; @@ -1317,6 +1699,10 @@ static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add) struct rvu_pfvf *pfvf; int blkaddr; + /* Broadcast pkt replication is not needed for AF's VFs, hence skip */ + if (is_afvf(pcifunc)) + return 0; + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); if (blkaddr < 0) return 0; @@ -1340,7 +1726,7 @@ static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add) return -EINVAL; } - spin_lock(&mcast->mce_lock); + mutex_lock(&mcast->mce_lock); err = nix_update_mce_list(mce_list, pcifunc, idx, add); if (err) @@ -1370,7 +1756,7 @@ static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add) } end: - spin_unlock(&mcast->mce_lock); + mutex_unlock(&mcast->mce_lock); return err; } @@ -1455,7 +1841,7 @@ static int nix_setup_mcast(struct rvu *rvu, struct nix_hw *nix_hw, int blkaddr) BIT_ULL(63) | (mcast->replay_pkind << 24) | BIT_ULL(20) | MC_BUF_CNT); - spin_lock_init(&mcast->mce_lock); + 
mutex_init(&mcast->mce_lock); return nix_setup_bcast_tables(rvu, nix_hw); } @@ -1499,14 +1885,66 @@ static int nix_setup_txschq(struct rvu *rvu, struct nix_hw *nix_hw, int blkaddr) * PF/VF pcifunc mapping info. */ txsch->pfvf_map = devm_kcalloc(rvu->dev, txsch->schq.max, - sizeof(u16), GFP_KERNEL); + sizeof(u32), GFP_KERNEL); if (!txsch->pfvf_map) return -ENOMEM; + memset(txsch->pfvf_map, U8_MAX, txsch->schq.max * sizeof(u32)); + } + return 0; +} + +int rvu_nix_reserve_mark_format(struct rvu *rvu, struct nix_hw *nix_hw, + int blkaddr, u32 cfg) +{ + int fmt_idx; + + for (fmt_idx = 0; fmt_idx < nix_hw->mark_format.in_use; fmt_idx++) { + if (nix_hw->mark_format.cfg[fmt_idx] == cfg) + return fmt_idx; } + if (fmt_idx >= nix_hw->mark_format.total) + return -ERANGE; + + rvu_write64(rvu, blkaddr, NIX_AF_MARK_FORMATX_CTL(fmt_idx), cfg); + nix_hw->mark_format.cfg[fmt_idx] = cfg; + nix_hw->mark_format.in_use++; + return fmt_idx; +} + +static int nix_af_mark_format_setup(struct rvu *rvu, struct nix_hw *nix_hw, + int blkaddr) +{ + u64 cfgs[] = { + [NIX_MARK_CFG_IP_DSCP_RED] = 0x10003, + [NIX_MARK_CFG_IP_DSCP_YELLOW] = 0x11200, + [NIX_MARK_CFG_IP_DSCP_YELLOW_RED] = 0x11203, + [NIX_MARK_CFG_IP_ECN_RED] = 0x6000c, + [NIX_MARK_CFG_IP_ECN_YELLOW] = 0x60c00, + [NIX_MARK_CFG_IP_ECN_YELLOW_RED] = 0x60c0c, + [NIX_MARK_CFG_VLAN_DEI_RED] = 0x30008, + [NIX_MARK_CFG_VLAN_DEI_YELLOW] = 0x30800, + [NIX_MARK_CFG_VLAN_DEI_YELLOW_RED] = 0x30808, + }; + int i, rc; + u64 total; + + total = (rvu_read64(rvu, blkaddr, NIX_AF_PSE_CONST) & 0xFF00) >> 8; + nix_hw->mark_format.total = (u8)total; + nix_hw->mark_format.cfg = devm_kcalloc(rvu->dev, total, sizeof(u32), + GFP_KERNEL); + if (!nix_hw->mark_format.cfg) + return -ENOMEM; + for (i = 0; i < NIX_MARK_CFG_MAX; i++) { + rc = rvu_nix_reserve_mark_format(rvu, nix_hw, blkaddr, cfgs[i]); + if (rc < 0) + dev_err(rvu->dev, "Err %d in setup mark format %d\n", + rc, i); + } + return 0; } -int rvu_mbox_handler_NIX_STATS_RST(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_nix_stats_rst(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { struct rvu_hwinfo *hw = rvu->hw; @@ -1537,190 +1975,287 @@ int rvu_mbox_handler_NIX_STATS_RST(struct rvu *rvu, struct msg_req *req, } /* Returns the ALG index to be set into NPC_RX_ACTION */ -static int get_flowkey_alg_idx(u32 flow_cfg) -{ - u32 ip_cfg; - - flow_cfg &= ~FLOW_KEY_TYPE_PORT; - ip_cfg = FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_IPV6; - if (flow_cfg == ip_cfg) - return FLOW_KEY_ALG_IP; - else if (flow_cfg == (ip_cfg | FLOW_KEY_TYPE_TCP)) - return FLOW_KEY_ALG_TCP; - else if (flow_cfg == (ip_cfg | FLOW_KEY_TYPE_UDP)) - return FLOW_KEY_ALG_UDP; - else if (flow_cfg == (ip_cfg | FLOW_KEY_TYPE_SCTP)) - return FLOW_KEY_ALG_SCTP; - else if (flow_cfg == (ip_cfg | FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_UDP)) - return FLOW_KEY_ALG_TCP_UDP; - else if (flow_cfg == (ip_cfg | FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_SCTP)) - return FLOW_KEY_ALG_TCP_SCTP; - else if (flow_cfg == (ip_cfg | FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_SCTP)) - return FLOW_KEY_ALG_UDP_SCTP; - else if (flow_cfg == (ip_cfg | FLOW_KEY_TYPE_TCP | - FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_SCTP)) - return FLOW_KEY_ALG_TCP_UDP_SCTP; - - return FLOW_KEY_ALG_PORT; -} - -int rvu_mbox_handler_NIX_RSS_FLOWKEY_CFG(struct rvu *rvu, - struct nix_rss_flowkey_cfg *req, - struct msg_rsp *rsp) +static int get_flowkey_alg_idx(struct nix_hw *nix_hw, u32 flow_cfg) { - struct rvu_hwinfo *hw = rvu->hw; - u16 pcifunc = req->hdr.pcifunc; - int alg_idx, nixlf, blkaddr; - - blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX,
pcifunc); - if (blkaddr < 0) - return NIX_AF_ERR_AF_LF_INVALID; - - nixlf = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0); - if (nixlf < 0) - return NIX_AF_ERR_AF_LF_INVALID; + int i; - alg_idx = get_flowkey_alg_idx(req->flowkey_cfg); + /* Scan over existing algo entries to find a match */ + for (i = 0; i < nix_hw->flowkey.in_use; i++) + if (nix_hw->flowkey.flowkey[i] == flow_cfg) + return i; - rvu_npc_update_flowkey_alg_idx(rvu, pcifunc, nixlf, req->group, - alg_idx, req->mcam_index); - return 0; + return -ERANGE; } -static void set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg) +static int set_flowkey_fields(struct nix_rx_flowkey_alg *alg, u32 flow_cfg) { - struct nix_rx_flowkey_alg *field = NULL; - int idx, key_type; + int idx, nr_field, key_off, field_marker, keyoff_marker; + int max_key_off, max_bit_pos, group_member; + struct nix_rx_flowkey_alg *field; + struct nix_rx_flowkey_alg tmp; + u32 key_type, valid_key; if (!alg) - return; + return -EINVAL; - /* FIELD0: IPv4 - * FIELD1: IPv6 - * FIELD2: TCP/UDP/SCTP/ALL - * FIELD3: Unused - * FIELD4: Unused - * - * Each of the 32 possible flow key algorithm definitions should +#define FIELDS_PER_ALG 5 +#define MAX_KEY_OFF 40 + /* Clear all fields */ + memset(alg, 0, sizeof(uint64_t) * FIELDS_PER_ALG); + + /* Each of the 32 possible flow key algorithm definitions should * fall into above incremental config (except ALG0). Otherwise a * single NPC MCAM entry is not sufficient for supporting RSS. * * If a different definition or combination is needed then NPC MCAM * has to be programmed to filter such pkts and its action should * point to this definition to calculate flowtag or hash. + * + * The `for loop` goes over _all_ protocol fields and the following + * variables depict the state machine forward progress logic. + * + * keyoff_marker - Enabled when hash byte length needs to be accounted + * in field->key_offset update. + * field_marker - Enabled when a new field needs to be selected. + * group_member - Enabled when protocol is part of a group.
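+ *
+ * Worked example (editor's illustration): for flow_cfg =
+ * NIX_FLOW_KEY_TYPE_IPV4 | NIX_FLOW_KEY_TYPE_IPV6 |
+ * NIX_FLOW_KEY_TYPE_TCP | NIX_FLOW_KEY_TYPE_UDP the walk yields
+ * three fields: FIELD0 = IPv4 SIP+DIP (8 bytes) at key_offset 0;
+ * FIELD1 = IPv6 SIP+DIP (32 bytes) also at key_offset 0, because
+ * IPv4 leaves keyoff_marker off and a packet is only ever one of
+ * the two; FIELD2 = L4 Sport+Dport (4 bytes) at key_offset 32,
+ * with TCP and UDP merged into one field since field_marker stays
+ * false inside the TCP/UDP/SCTP group and their ltype matches
+ * accumulate.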
*/ - for (idx = 0; idx < 32; idx++) { - key_type = flow_cfg & BIT_ULL(idx); - if (!key_type) - continue; + + keyoff_marker = 0; max_key_off = 0; group_member = 0; + nr_field = 0; key_off = 0; field_marker = 1; + field = &tmp; max_bit_pos = fls(flow_cfg); + for (idx = 0; + idx < max_bit_pos && nr_field < FIELDS_PER_ALG && + key_off < MAX_KEY_OFF; idx++) { + key_type = BIT(idx); + valid_key = flow_cfg & key_type; + /* Found a field marker, reset the field values */ + if (field_marker) + memset(&tmp, 0, sizeof(tmp)); + switch (key_type) { - case FLOW_KEY_TYPE_PORT: - field = &alg[0]; + case NIX_FLOW_KEY_TYPE_PORT: field->sel_chan = true; /* This should be set to 1 when SEL_CHAN is set */ field->bytesm1 = 1; + field_marker = true; + keyoff_marker = true; break; - case FLOW_KEY_TYPE_IPV4: - field = &alg[0]; + case NIX_FLOW_KEY_TYPE_IPV4: field->lid = NPC_LID_LC; field->ltype_match = NPC_LT_LC_IP; field->hdr_offset = 12; /* SIP offset */ field->bytesm1 = 7; /* SIP + DIP, 8 bytes */ field->ltype_mask = 0xF; /* Match only IPv4 */ + field_marker = true; + keyoff_marker = false; break; - case FLOW_KEY_TYPE_IPV6: - field = &alg[1]; + case NIX_FLOW_KEY_TYPE_IPV6: field->lid = NPC_LID_LC; field->ltype_match = NPC_LT_LC_IP6; field->hdr_offset = 8; /* SIP offset */ field->bytesm1 = 31; /* SIP + DIP, 32 bytes */ field->ltype_mask = 0xF; /* Match only IPv6 */ + field_marker = true; + keyoff_marker = true; break; - case FLOW_KEY_TYPE_TCP: - case FLOW_KEY_TYPE_UDP: - case FLOW_KEY_TYPE_SCTP: - field = &alg[2]; + case NIX_FLOW_KEY_TYPE_TCP: + case NIX_FLOW_KEY_TYPE_UDP: + case NIX_FLOW_KEY_TYPE_SCTP: field->lid = NPC_LID_LD; field->bytesm1 = 3; /* Sport + Dport, 4 bytes */ - if (key_type == FLOW_KEY_TYPE_TCP) + if (key_type == NIX_FLOW_KEY_TYPE_TCP && valid_key) { field->ltype_match |= NPC_LT_LD_TCP; - else if (key_type == FLOW_KEY_TYPE_UDP) + group_member = true; + } else if (key_type == NIX_FLOW_KEY_TYPE_UDP && + valid_key) { field->ltype_match |= NPC_LT_LD_UDP; - else if (key_type == FLOW_KEY_TYPE_SCTP) + group_member = true; + } else if (key_type == NIX_FLOW_KEY_TYPE_SCTP && + valid_key) { field->ltype_match |= NPC_LT_LD_SCTP; - field->key_offset = 32; /* After IPv4/v6 SIP, DIP */ + group_member = true; + } field->ltype_mask = ~field->ltype_match; + if (key_type == NIX_FLOW_KEY_TYPE_SCTP) { + /* Handle the case where any of the group items + * is enabled in the group but not the final one + */ + if (group_member) { + valid_key = true; + group_member = false; + } + field_marker = true; + keyoff_marker = true; + } else { + field_marker = false; + keyoff_marker = false; + } break; } - if (field) - field->ena = 1; - field = NULL; + field->ena = 1; + + /* Found a valid flow key type */ + if (valid_key) { + field->key_offset = key_off; + memcpy(&alg[nr_field], field, sizeof(*field)); + max_key_off = max(max_key_off, field->bytesm1 + 1); + + /* Found a field marker, get the next field */ + if (field_marker) + nr_field++; + } + + /* Found a keyoff marker, update the new key_off */ + if (keyoff_marker) { + key_off += max_key_off; + max_key_off = 0; + } } + /* Processed all the flow key types */ + if (idx == max_bit_pos && key_off <= MAX_KEY_OFF) + return 0; + else + return NIX_AF_ERR_RSS_NOSPC_FIELD; } -static void nix_rx_flowkey_alg_cfg(struct rvu *rvu, int blkaddr) +static int reserve_flowkey_alg_idx(struct rvu *rvu, int blkaddr, u32 flow_cfg) { -#define FIELDS_PER_ALG 5 - u64 field[FLOW_KEY_ALG_MAX][FIELDS_PER_ALG]; - u32 flowkey_cfg, minkey_cfg; - int alg, fid; + u64 field[FIELDS_PER_ALG]; + struct nix_hw *hw;
+ int fid, rc; - memset(&field, 0, sizeof(u64) * FLOW_KEY_ALG_MAX * FIELDS_PER_ALG); + hw = get_nix_hw(rvu->hw, blkaddr); + if (!hw) + return -EINVAL; - /* Only incoming channel number */ - flowkey_cfg = FLOW_KEY_TYPE_PORT; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_PORT], flowkey_cfg); + /* No room to add a new flow hash algorithm */ + if (hw->flowkey.in_use >= NIX_FLOW_KEY_ALG_MAX) + return NIX_AF_ERR_RSS_NOSPC_ALGO; - /* For a incoming pkt if none of the fields match then flowkey - * will be zero, hence tag generated will also be zero. - * RSS entry at rsse_index = NIX_AF_LF()_RSS_GRP()[OFFSET] will - * be used to queue the packet. - */ + /* Generate algo fields for the given flow_cfg */ + rc = set_flowkey_fields((struct nix_rx_flowkey_alg *)field, flow_cfg); + if (rc) + return rc; - /* IPv4/IPv6 SIP/DIPs */ - flowkey_cfg = FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_IPV6; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_IP], flowkey_cfg); + /* Update ALGX_FIELDX register with generated fields */ + for (fid = 0; fid < FIELDS_PER_ALG; fid++) + rvu_write64(rvu, blkaddr, + NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(hw->flowkey.in_use, + fid), field[fid]); - /* TCPv4/v6 4-tuple, SIP, DIP, Sport, Dport */ + /* Store the flow_cfg for further lookup */ + rc = hw->flowkey.in_use; + hw->flowkey.flowkey[rc] = flow_cfg; + hw->flowkey.in_use++; + + return rc; +} + +int rvu_mbox_handler_nix_rss_flowkey_cfg(struct rvu *rvu, + struct nix_rss_flowkey_cfg *req, + struct nix_rss_flowkey_cfg_rsp *rsp) +{ + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc = req->hdr.pcifunc; + int alg_idx, nixlf, blkaddr; + struct nix_hw *nix_hw; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + nixlf = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0); + if (nixlf < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return -EINVAL; + + alg_idx = get_flowkey_alg_idx(nix_hw, req->flowkey_cfg); + /* Failed to get algo index from the existing list, reserve a new one */ + if (alg_idx < 0) { + alg_idx = reserve_flowkey_alg_idx(rvu, blkaddr, + req->flowkey_cfg); + if (alg_idx < 0) + return alg_idx; + } + rsp->alg_idx = alg_idx; + rvu_npc_update_flowkey_alg_idx(rvu, pcifunc, nixlf, req->group, + alg_idx, req->mcam_index); + return 0; +} + +static int nix_rx_flowkey_alg_cfg(struct rvu *rvu, int blkaddr) +{ + u32 flowkey_cfg, minkey_cfg; + int alg, fid, rc; + + /* Disable all flow key algx fieldx */ + for (alg = 0; alg < NIX_FLOW_KEY_ALG_MAX; alg++) { + for (fid = 0; fid < FIELDS_PER_ALG; fid++) + rvu_write64(rvu, blkaddr, + NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(alg, fid), + 0); + } + + /* IPv4/IPv6 SIP/DIPs */ + flowkey_cfg = NIX_FLOW_KEY_TYPE_IPV4 | NIX_FLOW_KEY_TYPE_IPV6; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; + + /* TCPv4/v6 4-tuple, SIP, DIP, Sport, Dport */ minkey_cfg = flowkey_cfg; - flowkey_cfg = minkey_cfg | FLOW_KEY_TYPE_TCP; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_TCP], flowkey_cfg); + flowkey_cfg = minkey_cfg | NIX_FLOW_KEY_TYPE_TCP; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; /* UDPv4/v6 4-tuple, SIP, DIP, Sport, Dport */ - flowkey_cfg = minkey_cfg | FLOW_KEY_TYPE_UDP; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_UDP], flowkey_cfg); + flowkey_cfg = minkey_cfg | NIX_FLOW_KEY_TYPE_UDP; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; /* SCTPv4/v6 4-tuple, SIP, DIP, Sport, Dport */ - flowkey_cfg = minkey_cfg
| FLOW_KEY_TYPE_SCTP; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_SCTP], flowkey_cfg); + flowkey_cfg = minkey_cfg | NIX_FLOW_KEY_TYPE_SCTP; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; /* TCP/UDP v4/v6 4-tuple, rest IP pkts 2-tuple */ - flowkey_cfg = minkey_cfg | FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_UDP; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_TCP_UDP], flowkey_cfg); + flowkey_cfg = minkey_cfg | NIX_FLOW_KEY_TYPE_TCP | + NIX_FLOW_KEY_TYPE_UDP; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; /* TCP/SCTP v4/v6 4-tuple, rest IP pkts 2-tuple */ - flowkey_cfg = minkey_cfg | FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_SCTP; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_TCP_SCTP], flowkey_cfg); + flowkey_cfg = minkey_cfg | NIX_FLOW_KEY_TYPE_TCP | + NIX_FLOW_KEY_TYPE_SCTP; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; /* UDP/SCTP v4/v6 4-tuple, rest IP pkts 2-tuple */ - flowkey_cfg = minkey_cfg | FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_SCTP; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_UDP_SCTP], flowkey_cfg); + flowkey_cfg = minkey_cfg | NIX_FLOW_KEY_TYPE_UDP | + NIX_FLOW_KEY_TYPE_SCTP; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; /* TCP/UDP/SCTP v4/v6 4-tuple, rest IP pkts 2-tuple */ - flowkey_cfg = minkey_cfg | FLOW_KEY_TYPE_TCP | - FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_SCTP; - set_flowkey_fields((void *)&field[FLOW_KEY_ALG_TCP_UDP_SCTP], - flowkey_cfg); + flowkey_cfg = minkey_cfg | NIX_FLOW_KEY_TYPE_TCP | + NIX_FLOW_KEY_TYPE_UDP | NIX_FLOW_KEY_TYPE_SCTP; + rc = reserve_flowkey_alg_idx(rvu, blkaddr, flowkey_cfg); + if (rc < 0) + return rc; - for (alg = 0; alg < FLOW_KEY_ALG_MAX; alg++) { - for (fid = 0; fid < FIELDS_PER_ALG; fid++) - rvu_write64(rvu, blkaddr, - NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(alg, fid), - field[alg][fid]); - } + return 0; } -int rvu_mbox_handler_NIX_SET_MAC_ADDR(struct rvu *rvu, +int rvu_mbox_handler_nix_set_mac_addr(struct rvu *rvu, struct nix_set_mac_addr *req, struct msg_rsp *rsp) { @@ -1742,10 +2277,13 @@ int rvu_mbox_handler_NIX_SET_MAC_ADDR(struct rvu *rvu, rvu_npc_install_ucast_entry(rvu, pcifunc, nixlf, pfvf->rx_chan_base, req->mac_addr); + + rvu_npc_update_rxvlan(rvu, pcifunc, nixlf); + return 0; } -int rvu_mbox_handler_NIX_SET_RX_MODE(struct rvu *rvu, struct nix_rx_mode *req, +int rvu_mbox_handler_nix_set_rx_mode(struct rvu *rvu, struct nix_rx_mode *req, struct msg_rsp *rsp) { bool allmulti = false, disable_promisc = false; @@ -1775,9 +2313,303 @@ int rvu_mbox_handler_NIX_SET_RX_MODE(struct rvu *rvu, struct nix_rx_mode *req, else rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf, pfvf->rx_chan_base, allmulti); + + rvu_npc_update_rxvlan(rvu, pcifunc, nixlf); + return 0; } +static void nix_find_link_frs(struct rvu *rvu, + struct nix_frs_cfg *req, u16 pcifunc) +{ + int pf = rvu_get_pf(pcifunc); + struct rvu_pfvf *pfvf; + int maxlen, minlen; + int numvfs, hwvf; + int vf; + + /* Update with requester's min/max lengths */ + pfvf = rvu_get_pfvf(rvu, pcifunc); + pfvf->maxlen = req->maxlen; + if (req->update_minlen) + pfvf->minlen = req->minlen; + + maxlen = req->maxlen; + minlen = req->update_minlen ? 
req->minlen : 0; + + /* Get this PF's numVFs and starting hwvf */ + rvu_get_pf_numvfs(rvu, pf, &numvfs, &hwvf); + + /* For each VF, compare requested max/minlen */ + for (vf = 0; vf < numvfs; vf++) { + pfvf = &rvu->hwvf[hwvf + vf]; + if (pfvf->maxlen > maxlen) + maxlen = pfvf->maxlen; + if (req->update_minlen && + pfvf->minlen && pfvf->minlen < minlen) + minlen = pfvf->minlen; + } + + /* Compare requested max/minlen with PF's max/minlen */ + pfvf = &rvu->pf[pf]; + if (pfvf->maxlen > maxlen) + maxlen = pfvf->maxlen; + if (req->update_minlen && + pfvf->minlen && pfvf->minlen < minlen) + minlen = pfvf->minlen; + + /* Update the request with the PF's and its VFs' max/min lengths */ + req->maxlen = maxlen; + if (req->update_minlen) + req->minlen = minlen; +} + +int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, + struct msg_rsp *rsp) +{ + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc = req->hdr.pcifunc; + int pf = rvu_get_pf(pcifunc); + int blkaddr, schq, link = -1; + struct nix_txsch *txsch; + u64 cfg, lmac_fifo_len; + struct nix_hw *nix_hw; + u8 cgx = 0, lmac = 0; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return -EINVAL; + + if (!req->sdp_link && req->maxlen > NIC_HW_MAX_FRS) + return NIX_AF_ERR_FRS_INVALID; + + if (req->update_minlen && req->minlen < NIC_HW_MIN_FRS) + return NIX_AF_ERR_FRS_INVALID; + + /* Check if requester wants to update SMQs */ + if (!req->update_smq) + goto rx_frscfg; + + /* Update min/maxlen in each of the SMQs attached to this PF/VF */ + txsch = &nix_hw->txsch[NIX_TXSCH_LVL_SMQ]; + mutex_lock(&rvu->rsrc_lock); + for (schq = 0; schq < txsch->schq.max; schq++) { + if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc) + continue; + cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq)); + cfg = (cfg & ~(0xFFFFULL << 8)) | ((u64)req->maxlen << 8); + if (req->update_minlen) + cfg = (cfg & ~0x7FULL) | ((u64)req->minlen & 0x7F); + rvu_write64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq), cfg); + } + mutex_unlock(&rvu->rsrc_lock); + +rx_frscfg: + /* Check if config is for SDP link */ + if (req->sdp_link) { + if (!hw->sdp_links) + return NIX_AF_ERR_RX_LINK_INVALID; + link = hw->cgx_links + hw->lbk_links; + goto linkcfg; + } + + /* Check if the request is from CGX mapped RVU PF */ + if (is_pf_cgxmapped(rvu, pf)) { + /* Get CGX and LMAC to which this PF is mapped and find link */ + rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx, &lmac); + link = (cgx * hw->lmac_per_cgx) + lmac; + } else if (pf == 0) { + /* For VFs of PF0 ingress is LBK port, so config LBK link */ + link = hw->cgx_links; + } + + if (link < 0) + return NIX_AF_ERR_RX_LINK_INVALID; + + nix_find_link_frs(rvu, req, pcifunc); + +linkcfg: + cfg = rvu_read64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link)); + cfg = (cfg & ~(0xFFFFULL << 16)) | ((u64)req->maxlen << 16); + if (req->update_minlen) + cfg = (cfg & ~0xFFFFULL) | req->minlen; + rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), cfg); + + if (req->sdp_link || pf == 0) + return 0; + + /* Update transmit credits for CGX links */ + lmac_fifo_len = + CGX_FIFO_LEN / cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu)); + cfg = rvu_read64(rvu, blkaddr, NIX_AF_TX_LINKX_NORM_CREDIT(link)); + cfg &= ~(0xFFFFFULL << 12); + cfg |= ((lmac_fifo_len - req->maxlen) / 16) << 12; + rvu_write64(rvu, blkaddr, NIX_AF_TX_LINKX_NORM_CREDIT(link), cfg); + rvu_write64(rvu, blkaddr, NIX_AF_TX_LINKX_EXPR_CREDIT(link), cfg); + + return 0; +} + +int
rvu_mbox_handler_nix_rxvlan_alloc(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp) +{ + struct npc_mcam_alloc_entry_req alloc_req = { }; + struct npc_mcam_alloc_entry_rsp alloc_rsp = { }; + struct npc_mcam_free_entry_req free_req = { }; + u16 pcifunc = req->hdr.pcifunc; + int blkaddr, nixlf, err; + struct rvu_pfvf *pfvf; + + /* LBK VFs do not have separate MCAM UCAST entry hence + * skip allocating rxvlan for them + */ + if (is_afvf(pcifunc)) + return 0; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + if (pfvf->rxvlan) + return 0; + + /* alloc new mcam entry */ + alloc_req.hdr.pcifunc = pcifunc; + alloc_req.count = 1; + + err = rvu_mbox_handler_npc_mcam_alloc_entry(rvu, &alloc_req, + &alloc_rsp); + if (err) + return err; + + /* update entry to enable rxvlan offload */ + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (blkaddr < 0) { + err = NIX_AF_ERR_AF_LF_INVALID; + goto free_entry; + } + + nixlf = rvu_get_lf(rvu, &rvu->hw->block[blkaddr], pcifunc, 0); + if (nixlf < 0) { + err = NIX_AF_ERR_AF_LF_INVALID; + goto free_entry; + } + + pfvf->rxvlan_index = alloc_rsp.entry_list[0]; + /* all it means is that rxvlan_index is valid */ + pfvf->rxvlan = true; + + err = rvu_npc_update_rxvlan(rvu, pcifunc, nixlf); + if (err) + goto free_entry; + + return 0; +free_entry: + free_req.hdr.pcifunc = pcifunc; + free_req.entry = alloc_rsp.entry_list[0]; + rvu_mbox_handler_npc_mcam_free_entry(rvu, &free_req, rsp); + pfvf->rxvlan = false; + return err; +} + +int rvu_mbox_handler_nix_set_rx_cfg(struct rvu *rvu, struct nix_rx_cfg *req, + struct msg_rsp *rsp) +{ + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc = req->hdr.pcifunc; + struct rvu_block *block; + struct rvu_pfvf *pfvf; + int nixlf, blkaddr; + u64 cfg; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (!pfvf->nixlf || blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + block = &hw->block[blkaddr]; + nixlf = rvu_get_lf(rvu, block, pcifunc, 0); + if (nixlf < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_CFG(nixlf)); + /* Set the interface configuration */ + if (req->len_verify & BIT(0)) + cfg |= BIT_ULL(41); + else + cfg &= ~BIT_ULL(41); + + if (req->len_verify & BIT(1)) + cfg |= BIT_ULL(40); + else + cfg &= ~BIT_ULL(40); + + if (req->csum_verify & BIT(0)) + cfg |= BIT_ULL(37); + else + cfg &= ~BIT_ULL(37); + + rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_CFG(nixlf), cfg); + + return 0; +} + +static void nix_link_config(struct rvu *rvu, int blkaddr) +{ + struct rvu_hwinfo *hw = rvu->hw; + int cgx, lmac_cnt, slink, link; + u64 tx_credits; + + /* Set default min/max packet lengths allowed on NIX Rx links. + * + * With the HW reset minlen value of 60 bytes, HW will treat ARP pkts + * as undersized and report them to SW as error pkts; hence set it + * to 40 bytes. + */ + for (link = 0; link < (hw->cgx_links + hw->lbk_links); link++) { + rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), + NIC_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS); + } + + if (hw->sdp_links) { + link = hw->cgx_links + hw->lbk_links; + rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), + SDP_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS); + } + + /* Set credits for Tx links assuming max packet length allowed. + * This will be reconfigured based on MTU set for PF/VF.
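+ *
+ * Worked example (editor's illustration, hypothetical numbers): if a
+ * CGX has 4 LMACs sharing a CGX_FIFO_LEN of 64KB, each LMAC gets a
+ * 16KB share; with a max frame of 9212 bytes the credit count
+ * programmed below, presumably in 16-byte units, is
+ * (16384 - 9212) / 16 = 448.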
+ */ + for (cgx = 0; cgx < hw->cgx; cgx++) { + lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu)); + tx_credits = ((CGX_FIFO_LEN / lmac_cnt) - NIC_HW_MAX_FRS) / 16; + /* Enable credits and set credit pkt count to max allowed */ + tx_credits = (tx_credits << 12) | (0x1FF << 2) | BIT_ULL(1); + slink = cgx * hw->lmac_per_cgx; + for (link = slink; link < (slink + lmac_cnt); link++) { + rvu_write64(rvu, blkaddr, + NIX_AF_TX_LINKX_NORM_CREDIT(link), + tx_credits); + rvu_write64(rvu, blkaddr, + NIX_AF_TX_LINKX_EXPR_CREDIT(link), + tx_credits); + } + } + + /* Set Tx credits for LBK link */ + slink = hw->cgx_links; + for (link = slink; link < (slink + hw->lbk_links); link++) { + tx_credits = 1000; /* 10 * max LBK datarate = 10 * 100Gbps */ + /* Enable credits and set credit pkt count to max allowed */ + tx_credits = (tx_credits << 12) | (0x1FF << 2) | BIT_ULL(1); + rvu_write64(rvu, blkaddr, + NIX_AF_TX_LINKX_NORM_CREDIT(link), tx_credits); + rvu_write64(rvu, blkaddr, + NIX_AF_TX_LINKX_EXPR_CREDIT(link), tx_credits); + } +} + static int nix_calibrate_x2p(struct rvu *rvu, int blkaddr) { int idx, err; @@ -1796,8 +2628,10 @@ static int nix_calibrate_x2p(struct rvu *rvu, int blkaddr) status = rvu_read64(rvu, blkaddr, NIX_AF_STATUS); /* Check if CGX devices are ready */ - for (idx = 0; idx < cgx_get_cgx_cnt(); idx++) { - if (status & (BIT_ULL(16 + idx))) + for (idx = 0; idx < rvu->cgx_cnt_max; idx++) { + /* Skip when cgx port is not available */ + if (!rvu_cgx_pdata(idx, rvu) || + (status & (BIT_ULL(16 + idx)))) continue; dev_err(rvu->dev, "CGX%d didn't respond to NIX X2P calibration\n", idx); @@ -1830,10 +2664,10 @@ static int nix_aq_init(struct rvu *rvu, struct rvu_block *block) /* Set admin queue endianness */ cfg = rvu_read64(rvu, block->addr, NIX_AF_CFG); #ifdef __BIG_ENDIAN - cfg |= BIT_ULL(1); + cfg |= BIT_ULL(8); rvu_write64(rvu, block->addr, NIX_AF_CFG, cfg); #else - cfg &= ~BIT_ULL(1); + cfg &= ~BIT_ULL(8); rvu_write64(rvu, block->addr, NIX_AF_CFG, cfg); #endif @@ -1870,6 +2704,14 @@ int rvu_nix_init(struct rvu *rvu) return 0; block = &hw->block[blkaddr]; + /* As per a HW errata in 9xxx A0 silicon, NIX may corrupt + * internal state when conditional clocks are turned off. + * Hence enable them. + */ + if (is_rvu_9xxx_A0(rvu)) + rvu_write64(rvu, blkaddr, NIX_AF_CFG, + rvu_read64(rvu, blkaddr, NIX_AF_CFG) | 0x5EULL); + /* Calibrate X2P bus to check if CGX/LBK links are fine */ err = nix_calibrate_x2p(rvu, blkaddr); if (err) @@ -1891,9 +2733,6 @@ int rvu_nix_init(struct rvu *rvu) /* Restore CINT timer delay to HW reset values */ rvu_write64(rvu, blkaddr, NIX_AF_CINT_DELAY, 0x0ULL); - /* Configure segmentation offload formats */ - nix_setup_lso(rvu, blkaddr); - if (blkaddr == BLKADDR_NIX0) { hw->nix0 = devm_kzalloc(rvu->dev, sizeof(struct nix_hw), GFP_KERNEL); @@ -1904,24 +2743,51 @@ int rvu_nix_init(struct rvu *rvu) if (err) return err; + err = nix_af_mark_format_setup(rvu, hw->nix0, blkaddr); + if (err) + return err; + err = nix_setup_mcast(rvu, hw->nix0, blkaddr); if (err) return err; - /* Config Outer L2, IP, TCP and UDP's NPC layer info. + /* Configure segmentation offload formats */ + nix_setup_lso(rvu, hw->nix0, blkaddr); + + /* Config Outer/Inner L2, IP, TCP, UDP and SCTP NPC layer info. * This helps HW protocol checker to identify headers * and validate length and checksums. 
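* (Editor's note: each NIX_AF_RX_DEF_* value below packs the layer id
* in bits [15:8] and the layer type in bits [7:4]; the low nibble 0x0F
* appears to be a 4-bit ltype mask, mirroring the
* (lid << 8) | (ltype << 4) | 0x0F expressions.)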
*/ rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OL2, (NPC_LID_LA << 8) | (NPC_LT_LA_ETHER << 4) | 0x0F); - rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OUDP, - (NPC_LID_LD << 8) | (NPC_LT_LD_UDP << 4) | 0x0F); - rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OTCP, - (NPC_LID_LD << 8) | (NPC_LT_LD_TCP << 4) | 0x0F); rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OIP4, (NPC_LID_LC << 8) | (NPC_LT_LC_IP << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_IIP4, + (NPC_LID_LF << 8) | (NPC_LT_LF_TU_IP << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OIP6, + (NPC_LID_LC << 8) | (NPC_LT_LC_IP6 << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_IIP6, + (NPC_LID_LF << 8) | (NPC_LT_LF_TU_IP6 << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OTCP, + (NPC_LID_LD << 8) | (NPC_LT_LD_TCP << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_ITCP, + (NPC_LID_LG << 8) | (NPC_LT_LG_TU_TCP << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OUDP, + (NPC_LID_LD << 8) | (NPC_LT_LD_UDP << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_IUDP, + (NPC_LID_LG << 8) | (NPC_LT_LG_TU_UDP << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_OSCTP, + (NPC_LID_LD << 8) | (NPC_LT_LD_SCTP << 4) | 0x0F); + rvu_write64(rvu, blkaddr, NIX_AF_RX_DEF_ISCTP, + (NPC_LID_LG << 8) | (NPC_LT_LG_TU_SCTP << 4) | + 0x0F); + + err = nix_rx_flowkey_alg_cfg(rvu, blkaddr); + if (err) + return err; - nix_rx_flowkey_alg_cfg(rvu, blkaddr); + /* Initialize CGX/LBK/SDP link credits, min/max pkt lengths */ + nix_link_config(rvu, blkaddr); } return 0; } @@ -1955,5 +2821,139 @@ void rvu_nix_freemem(struct rvu *rvu) mcast = &nix_hw->mcast; qmem_free(rvu->dev, mcast->mce_ctx); qmem_free(rvu->dev, mcast->mcast_buf); + mutex_destroy(&mcast->mce_lock); + } +} + +static int nix_get_nixlf(struct rvu *rvu, u16 pcifunc, int *nixlf) +{ + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + struct rvu_hwinfo *hw = rvu->hw; + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (!pfvf->nixlf || blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + *nixlf = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0); + if (*nixlf < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + return 0; +} + +int rvu_mbox_handler_nix_lf_start_rx(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + int nixlf, err; + + err = nix_get_nixlf(rvu, pcifunc, &nixlf); + if (err) + return err; + + rvu_npc_enable_default_entries(rvu, pcifunc, nixlf); + return 0; +} + +int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + int nixlf, err; + + err = nix_get_nixlf(rvu, pcifunc, &nixlf); + if (err) + return err; + + rvu_npc_disable_default_entries(rvu, pcifunc, nixlf); + return 0; +} + +void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf) +{ + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + struct hwctx_disable_req ctx_req; + int err; + + ctx_req.hdr.pcifunc = pcifunc; + + /* Cleanup NPC MCAM entries, free Tx scheduler queues being used */ + nix_interface_deinit(rvu, pcifunc, nixlf); + nix_rx_sync(rvu, blkaddr); + nix_txschq_free(rvu, pcifunc); + + if (pfvf->sq_ctx) { + ctx_req.ctype = NIX_AQ_CTYPE_SQ; + err = nix_lf_hwctx_disable(rvu, &ctx_req); + if (err) + dev_err(rvu->dev, "SQ ctx disable failed\n"); + } + + if (pfvf->rq_ctx) { + ctx_req.ctype = NIX_AQ_CTYPE_RQ; + err = nix_lf_hwctx_disable(rvu, &ctx_req); + if (err) + dev_err(rvu->dev, "RQ ctx disable failed\n"); } + + if (pfvf->cq_ctx) { + ctx_req.ctype 
= NIX_AQ_CTYPE_CQ; + err = nix_lf_hwctx_disable(rvu, &ctx_req); + if (err) + dev_err(rvu->dev, "CQ ctx disable failed\n"); + } + + nix_ctx_free(rvu, pfvf); +} + +int rvu_mbox_handler_nix_lso_format_cfg(struct rvu *rvu, + struct nix_lso_format_cfg *req, + struct nix_lso_format_cfg_rsp *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + struct nix_hw *nix_hw; + struct rvu_pfvf *pfvf; + int blkaddr, idx, f; + u64 reg; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (!pfvf->nixlf || blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return -EINVAL; + + /* Find existing matching LSO format, if any */ + for (idx = 0; idx < nix_hw->lso.in_use; idx++) { + for (f = 0; f < NIX_LSO_FIELD_MAX; f++) { + reg = rvu_read64(rvu, blkaddr, + NIX_AF_LSO_FORMATX_FIELDX(idx, f)); + if (req->fields[f] != (reg & req->field_mask)) + break; + } + + if (f == NIX_LSO_FIELD_MAX) + break; + } + + if (idx < nix_hw->lso.in_use) { + /* Match found */ + rsp->lso_format_idx = idx; + return 0; + } + + if (nix_hw->lso.in_use == nix_hw->lso.total) + return NIX_AF_ERR_LSO_CFG_FAIL; + + rsp->lso_format_idx = nix_hw->lso.in_use++; + + for (f = 0; f < NIX_LSO_FIELD_MAX; f++) + rvu_write64(rvu, blkaddr, + NIX_AF_LSO_FORMATX_FIELDX(rsp->lso_format_idx, f), + req->fields[f]); + + return 0; } diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c index 7531fdc54fa1..c0e165dfc403 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c @@ -241,14 +241,14 @@ static int npa_lf_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req *req) return err; } -int rvu_mbox_handler_NPA_AQ_ENQ(struct rvu *rvu, +int rvu_mbox_handler_npa_aq_enq(struct rvu *rvu, struct npa_aq_enq_req *req, struct npa_aq_enq_rsp *rsp) { return rvu_npa_aq_enq_inst(rvu, req, rsp); } -int rvu_mbox_handler_NPA_HWCTX_DISABLE(struct rvu *rvu, +int rvu_mbox_handler_npa_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req *req, struct msg_rsp *rsp) { @@ -273,7 +273,7 @@ static void npa_ctx_free(struct rvu *rvu, struct rvu_pfvf *pfvf) pfvf->npa_qints_ctx = NULL; } -int rvu_mbox_handler_NPA_LF_ALLOC(struct rvu *rvu, +int rvu_mbox_handler_npa_lf_alloc(struct rvu *rvu, struct npa_lf_alloc_req *req, struct npa_lf_alloc_rsp *rsp) { @@ -372,7 +372,7 @@ exit: return rc; } -int rvu_mbox_handler_NPA_LF_FREE(struct rvu *rvu, struct msg_req *req, +int rvu_mbox_handler_npa_lf_free(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { struct rvu_hwinfo *hw = rvu->hw; @@ -470,3 +470,20 @@ void rvu_npa_freemem(struct rvu *rvu) block = &hw->block[blkaddr]; rvu_aq_free(rvu, block->aq); } + +void rvu_npa_lf_teardown(struct rvu *rvu, u16 pcifunc, int npalf) +{ + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + struct hwctx_disable_req ctx_req; + + /* Disable all pools */ + ctx_req.hdr.pcifunc = pcifunc; + ctx_req.ctype = NPA_AQ_CTYPE_POOL; + npa_lf_hwctx_disable(rvu, &ctx_req); + + /* Disable all auras */ + ctx_req.ctype = NPA_AQ_CTYPE_AURA; + npa_lf_hwctx_disable(rvu, &ctx_req); + + npa_ctx_free(rvu, pfvf); +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c index 23ff47f7efc5..15f70273e29c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c @@ -8,6 +8,7 @@ * published by the Free Software Foundation. 
*/ +#include <linux/bitfield.h> #include <linux/module.h> @@ -15,6 +16,7 @@ #include "rvu_reg.h" #include "rvu.h" #include "npc.h" +#include "cgx.h" #include "npc_profile.h" #define RSVD_MCAM_ENTRIES_PER_PF 2 /* Bcast & Promisc */ @@ -26,13 +28,10 @@ #define NPC_PARSE_RESULT_DMAC_OFFSET 8 -struct mcam_entry { -#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max keywidth */ - u64 kw[NPC_MAX_KWS_IN_KEY]; - u64 kw_mask[NPC_MAX_KWS_IN_KEY]; - u64 action; - u64 vtag_action; -}; +static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam, + int blkaddr, u16 pcifunc); +static void npc_mcam_free_all_counters(struct rvu *rvu, struct npc_mcam *mcam, + u16 pcifunc); void rvu_npc_set_pkind(struct rvu *rvu, int pkind, struct rvu_pfvf *pfvf) { @@ -256,6 +255,46 @@ static void npc_config_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam, npc_enable_mcam_entry(rvu, mcam, blkaddr, actindex, false); } +static void npc_copy_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam, + int blkaddr, u16 src, u16 dest) +{ + int dbank = npc_get_bank(mcam, dest); + int sbank = npc_get_bank(mcam, src); + u64 cfg, sreg, dreg; + int bank, i; + + src &= (mcam->banksize - 1); + dest &= (mcam->banksize - 1); + + /* Copy INTF's, W0's, W1's CAM0 and CAM1 configuration */ + for (bank = 0; bank < mcam->banks_per_entry; bank++) { + sreg = NPC_AF_MCAMEX_BANKX_CAMX_INTF(src, sbank + bank, 0); + dreg = NPC_AF_MCAMEX_BANKX_CAMX_INTF(dest, dbank + bank, 0); + for (i = 0; i < 6; i++) { + cfg = rvu_read64(rvu, blkaddr, sreg + (i * 8)); + rvu_write64(rvu, blkaddr, dreg + (i * 8), cfg); + } + } + + /* Copy action */ + cfg = rvu_read64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_ACTION(src, sbank)); + rvu_write64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_ACTION(dest, dbank), cfg); + + /* Copy TAG action */ + cfg = rvu_read64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_TAG_ACT(src, sbank)); + rvu_write64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_TAG_ACT(dest, dbank), cfg); + + /* Enable or disable */ + cfg = rvu_read64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_CFG(src, sbank)); + rvu_write64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_CFG(dest, dbank), cfg); +} + static u64 npc_get_mcam_action(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr, int index) { @@ -269,12 +308,17 @@ static u64 npc_get_mcam_action(struct rvu *rvu, struct npc_mcam *mcam, void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc, int nixlf, u64 chan, u8 *mac_addr) { + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); struct npc_mcam *mcam = &rvu->hw->mcam; struct mcam_entry entry = { {0} }; struct nix_rx_action action; int blkaddr, index, kwi; u64 mac = 0; + /* AF's VFs work in promiscuous mode */ + if (is_afvf(pcifunc)) + return; + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); if (blkaddr < 0) return; @@ -308,22 +352,33 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc, entry.action = *(u64 *)&action; npc_config_mcam_entry(rvu, mcam, blkaddr, index, NIX_INTF_RX, &entry, true); + + /* add VLAN matching, setup action and save entry back for later */ + entry.kw[0] |= (NPC_LT_LB_STAG | NPC_LT_LB_CTAG) << 20; + entry.kw_mask[0] |= (NPC_LT_LB_STAG & NPC_LT_LB_CTAG) << 20; + + entry.vtag_action = VTAG0_VALID_BIT | + FIELD_PREP(VTAG0_TYPE_MASK, 0) | + FIELD_PREP(VTAG0_LID_MASK, NPC_LID_LA) | + FIELD_PREP(VTAG0_RELPTR_MASK, 12); + + memcpy(&pfvf->entry, &entry, sizeof(entry)); } void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf, u64 chan, bool allmulti) { struct npc_mcam *mcam = &rvu->hw->mcam; + int blkaddr, ucast_idx, index, kwi; struct mcam_entry entry = { {0} }; -
struct nix_rx_action action; - int blkaddr, index, kwi; + struct nix_rx_action action = { }; - blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); - if (blkaddr < 0) + /* Only PF or AF VF can add a promiscuous entry */ + if ((pcifunc & RVU_PFVF_FUNC_MASK) && !is_afvf(pcifunc)) return; - /* Only PF or AF VF can add a promiscuous entry */ - if (pcifunc & RVU_PFVF_FUNC_MASK) + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) return; index = npc_get_nixlf_mcam_index(mcam, pcifunc, @@ -338,16 +393,29 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc, entry.kw_mask[kwi] = BIT_ULL(40); } - *(u64 *)&action = 0x00; - action.op = NIX_RX_ACTIONOP_UCAST; - action.pf_func = pcifunc; + ucast_idx = npc_get_nixlf_mcam_index(mcam, pcifunc, + nixlf, NIXLF_UCAST_ENTRY); + + /* If the corresponding PF's ucast action is RSS, + * use the same action for promisc also + */ + if (is_mcam_entry_enabled(rvu, mcam, blkaddr, ucast_idx)) + *(u64 *)&action = npc_get_mcam_action(rvu, mcam, + blkaddr, ucast_idx); + + if (action.op != NIX_RX_ACTIONOP_RSS) { + *(u64 *)&action = 0x00; + action.op = NIX_RX_ACTIONOP_UCAST; + action.pf_func = pcifunc; + } entry.action = *(u64 *)&action; npc_config_mcam_entry(rvu, mcam, blkaddr, index, NIX_INTF_RX, &entry, true); } -void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf) +static void npc_enadis_promisc_entry(struct rvu *rvu, u16 pcifunc, + int nixlf, bool enable) { struct npc_mcam *mcam = &rvu->hw->mcam; int blkaddr, index; @@ -362,7 +430,17 @@ void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf) index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf, NIXLF_PROMISC_ENTRY); - npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false); + npc_enable_mcam_entry(rvu, mcam, blkaddr, index, enable); +} + +void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf) +{ + npc_enadis_promisc_entry(rvu, pcifunc, nixlf, false); +} + +void rvu_npc_enable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf) +{ + npc_enadis_promisc_entry(rvu, pcifunc, nixlf, true); } void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc, @@ -390,9 +468,28 @@ void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc, index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf, NIXLF_BCAST_ENTRY); - /* Check for L2B bit and LMAC channel */ - entry.kw[0] = BIT_ULL(25) | chan; - entry.kw_mask[0] = BIT_ULL(25) | 0xFFFULL; + /* Check for L2B bit and LMAC channel + * NOTE: The MKEX default profile (a reduced version that ignores a + * few bits to accommodate more capability) is a stop-gap approach. + * We care about L2B, which per the HRM (NPC_PARSE_KEX_S) sits at + * BIT_POS[25]; here it is moved to BIT_POS[13]. ERRCODE and ERRLEV + * are ignored so that we don't lose the capability features needed + * for CoS (from an ODP PoV), e.g. VLAN, + * DSCP.
+ * + * Reduced layout of MKEX default profile - + * It includes the following (i.e. CHAN, L2/3{B/M}, LA, LB, LC, LD): + * + * BIT_POS[31:28] : LD + * BIT_POS[27:24] : LC + * BIT_POS[23:20] : LB + * BIT_POS[19:16] : LA + * BIT_POS[15:12] : L3B, L3M, L2B, L2M + * BIT_POS[11:00] : CHAN + * + */ + entry.kw[0] = BIT_ULL(13) | chan; + entry.kw_mask[0] = BIT_ULL(13) | 0xFFFULL; *(u64 *)&action = 0x00; #ifdef MCAST_MCE @@ -454,51 +551,110 @@ void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf, rvu_write64(rvu, blkaddr, NPC_AF_MCAMEX_BANKX_ACTION(index, bank), *(u64 *)&action); + + index = npc_get_nixlf_mcam_index(mcam, pcifunc, + nixlf, NIXLF_PROMISC_ENTRY); + + /* If PF's promiscuous entry is enabled, + * set RSS action for that entry as well + */ + if (is_mcam_entry_enabled(rvu, mcam, blkaddr, index)) { + bank = npc_get_bank(mcam, index); + index &= (mcam->banksize - 1); + + rvu_write64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_ACTION(index, bank), + *(u64 *)&action); + } + + rvu_npc_update_rxvlan(rvu, pcifunc, nixlf); } -void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf) +static void npc_enadis_default_entries(struct rvu *rvu, u16 pcifunc, + int nixlf, bool enable) { struct npc_mcam *mcam = &rvu->hw->mcam; struct nix_rx_action action; - int blkaddr, index, bank; + int index, bank, blkaddr; blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); if (blkaddr < 0) return; - /* Disable ucast MCAM match entry of this PF/VF */ + /* Ucast MCAM match entry of this PF/VF */ index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf, NIXLF_UCAST_ENTRY); - npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false); + npc_enable_mcam_entry(rvu, mcam, blkaddr, index, enable); - /* For PF, disable promisc and bcast MCAM match entries */ - if (!(pcifunc & RVU_PFVF_FUNC_MASK)) { - index = npc_get_nixlf_mcam_index(mcam, pcifunc, - nixlf, NIXLF_BCAST_ENTRY); - /* For bcast, disable only if it's action is not - * packet replication, incase if action is replication - * then this PF's nixlf is removed from bcast replication - * list. - */ - bank = npc_get_bank(mcam, index); - index &= (mcam->banksize - 1); - *(u64 *)&action = rvu_read64(rvu, blkaddr, - NPC_AF_MCAMEX_BANKX_ACTION(index, bank)); - if (action.op != NIX_RX_ACTIONOP_MCAST) - npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false); + /* For PF, enable/disable promisc and bcast MCAM match entries */ + if (pcifunc & RVU_PFVF_FUNC_MASK) + return; + /* For bcast, enable/disable only if its action is not + * packet replication; in case the action is replication, + * this PF's nixlf is instead removed from the bcast replication + * list.
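Under the reduced layout just described, the broadcast match entry only has to assert the L2B flag (BIT_POS[13]) and pin the 12-bit channel. A sketch of how such a key/mask pair is built; L2B_BIT and CHAN_MASK are illustrative names for the bit positions in the table above:

#include <stdint.h>

#define L2B_BIT		(1ULL << 13)	/* L2B within the BIT_POS[15:12] group */
#define CHAN_MASK	0xFFFULL	/* BIT_POS[11:00] : CHAN */

/* Build an MCAM key/mask matching broadcast frames on one LMAC channel:
 * the mask selects the "care" bits, the key gives their required values.
 */
static void bcast_match(uint64_t chan, uint64_t *kw, uint64_t *kw_mask)
{
	*kw = L2B_BIT | (chan & CHAN_MASK);
	*kw_mask = L2B_BIT | CHAN_MASK;
}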
+ */ + index = npc_get_nixlf_mcam_index(mcam, pcifunc, + nixlf, NIXLF_BCAST_ENTRY); + bank = npc_get_bank(mcam, index); + *(u64 *)&action = rvu_read64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_ACTION(index & (mcam->banksize - 1), bank)); + if (action.op != NIX_RX_ACTIONOP_MCAST) + npc_enable_mcam_entry(rvu, mcam, + blkaddr, index, enable); + if (enable) + rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf); + else rvu_npc_disable_promisc_entry(rvu, pcifunc, nixlf); - } + + rvu_npc_update_rxvlan(rvu, pcifunc, nixlf); +} + +void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf) +{ + npc_enadis_default_entries(rvu, pcifunc, nixlf, false); +} + +void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf) +{ + npc_enadis_default_entries(rvu, pcifunc, nixlf, true); +} + +void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return; + + mutex_lock(&mcam->lock); + + /* Disable and free all MCAM entries mapped to this 'pcifunc' */ + npc_mcam_free_all_entries(rvu, mcam, blkaddr, pcifunc); + + /* Free all MCAM counters mapped to this 'pcifunc' */ + npc_mcam_free_all_counters(rvu, mcam, pcifunc); + + mutex_unlock(&mcam->lock); + + rvu_npc_disable_default_entries(rvu, pcifunc, nixlf); } -#define LDATA_EXTRACT_CONFIG(intf, lid, ltype, ld, cfg) \ +#define SET_KEX_LD(intf, lid, ltype, ld, cfg) \ rvu_write64(rvu, blkaddr, \ NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, ltype, ld), cfg) -#define LDATA_FLAGS_CONFIG(intf, ld, flags, cfg) \ +#define SET_KEX_LDFLAGS(intf, ld, flags, cfg) \ rvu_write64(rvu, blkaddr, \ NPC_AF_INTFX_LDATAX_FLAGSX_CFG(intf, ld, flags), cfg) +#define KEX_LD_CFG(bytesm1, hdr_ofs, ena, flags_ena, key_ofs) \ + (((bytesm1) << 16) | ((hdr_ofs) << 8) | ((ena) << 7) | \ + ((flags_ena) << 6) | ((key_ofs) & 0x3F)) + static void npc_config_ldata_extract(struct rvu *rvu, int blkaddr) { struct npc_mcam *mcam = &rvu->hw->mcam; @@ -514,28 +670,171 @@ static void npc_config_ldata_extract(struct rvu *rvu, int blkaddr) */ for (lid = 0; lid < lid_count; lid++) { for (ltype = 0; ltype < 16; ltype++) { - LDATA_EXTRACT_CONFIG(NIX_INTF_RX, lid, ltype, 0, 0ULL); - LDATA_EXTRACT_CONFIG(NIX_INTF_RX, lid, ltype, 1, 0ULL); - LDATA_EXTRACT_CONFIG(NIX_INTF_TX, lid, ltype, 0, 0ULL); - LDATA_EXTRACT_CONFIG(NIX_INTF_TX, lid, ltype, 1, 0ULL); - - LDATA_FLAGS_CONFIG(NIX_INTF_RX, 0, ltype, 0ULL); - LDATA_FLAGS_CONFIG(NIX_INTF_RX, 1, ltype, 0ULL); - LDATA_FLAGS_CONFIG(NIX_INTF_TX, 0, ltype, 0ULL); - LDATA_FLAGS_CONFIG(NIX_INTF_TX, 1, ltype, 0ULL); + SET_KEX_LD(NIX_INTF_RX, lid, ltype, 0, 0ULL); + SET_KEX_LD(NIX_INTF_RX, lid, ltype, 1, 0ULL); + SET_KEX_LD(NIX_INTF_TX, lid, ltype, 0, 0ULL); + SET_KEX_LD(NIX_INTF_TX, lid, ltype, 1, 0ULL); + + SET_KEX_LDFLAGS(NIX_INTF_RX, 0, ltype, 0ULL); + SET_KEX_LDFLAGS(NIX_INTF_RX, 1, ltype, 0ULL); + SET_KEX_LDFLAGS(NIX_INTF_TX, 0, ltype, 0ULL); + SET_KEX_LDFLAGS(NIX_INTF_TX, 1, ltype, 0ULL); } } - /* If we plan to extract Outer IPv4 tuple for TCP/UDP pkts - * then 112bit key is not sufficient - */ if (mcam->keysize != NPC_MCAM_KEY_X2) return; - /* Start placing extracted data/flags from 64bit onwards, for now */ - /* Extract DMAC from the packet */ - cfg = (0x05 << 16) | BIT_ULL(7) | NPC_PARSE_RESULT_DMAC_OFFSET; - LDATA_EXTRACT_CONFIG(NIX_INTF_RX, NPC_LID_LA, NPC_LT_LA_ETHER, 0, cfg); + /* Default MCAM KEX profile */ + /* Layer A: Ethernet: */ + + /* DMAC: 6 bytes, KW1[47:0] */ + cfg = KEX_LD_CFG(0x05, 
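The KEX_LD_CFG() macro above packs five extractor parameters into one register word: bytes-minus-one starting at bit 16, header offset at bits 15:8, an enable bit at 7, a flags-enable bit at 6, and a 6-bit key offset. A quick sanity check of the DMAC configuration built below; the macro body is copied from the hunk above, and the asserted value is plain arithmetic:

#include <assert.h>
#include <stdint.h>

#define KEX_LD_CFG(bytesm1, hdr_ofs, ena, flags_ena, key_ofs)	\
	(((bytesm1) << 16) | ((hdr_ofs) << 8) | ((ena) << 7) |	\
	 ((flags_ena) << 6) | ((key_ofs) & 0x3F))

int main(void)
{
	/* DMAC extractor: 6 bytes (bytesm1 = 5) from header offset 0,
	 * enabled, no flag extraction, landing at key offset 8
	 * (NPC_PARSE_RESULT_DMAC_OFFSET).
	 */
	uint64_t cfg = KEX_LD_CFG(0x05, 0x0, 0x1, 0x0, 8);

	assert(cfg == 0x50088);
	return 0;
}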
0x0, 0x1, 0x0, NPC_PARSE_RESULT_DMAC_OFFSET); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LA, NPC_LT_LA_ETHER, 0, cfg); + + /* Ethertype: 2 bytes, KW0[47:32] */ + cfg = KEX_LD_CFG(0x01, 0xc, 0x1, 0x0, 0x4); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LA, NPC_LT_LA_ETHER, 1, cfg); + + /* Layer B: Single VLAN (CTAG) */ + /* CTAG VLAN[2..3] + Ethertype, 4 bytes, KW0[63:32] */ + cfg = KEX_LD_CFG(0x03, 0x0, 0x1, 0x0, 0x4); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LB, NPC_LT_LB_CTAG, 0, cfg); + + /* Layer B: Stacked VLAN (STAG|QinQ) */ + /* CTAG VLAN[2..3] + Ethertype, 4 bytes, KW0[63:32] */ + cfg = KEX_LD_CFG(0x03, 0x4, 0x1, 0x0, 0x4); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LB, NPC_LT_LB_STAG, 0, cfg); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LB, NPC_LT_LB_QINQ, 0, cfg); + + /* Layer C: IPv4 */ + /* SIP+DIP: 8 bytes, KW2[63:0] */ + cfg = KEX_LD_CFG(0x07, 0xc, 0x1, 0x0, 0x10); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LC, NPC_LT_LC_IP, 0, cfg); + /* TOS: 1 byte, KW1[63:56] */ + cfg = KEX_LD_CFG(0x0, 0x1, 0x1, 0x0, 0xf); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LC, NPC_LT_LC_IP, 1, cfg); + + /* Layer D:UDP */ + /* SPORT: 2 bytes, KW3[15:0] */ + cfg = KEX_LD_CFG(0x1, 0x0, 0x1, 0x0, 0x18); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_UDP, 0, cfg); + /* DPORT: 2 bytes, KW3[31:16] */ + cfg = KEX_LD_CFG(0x1, 0x2, 0x1, 0x0, 0x1a); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_UDP, 1, cfg); + + /* Layer D:TCP */ + /* SPORT: 2 bytes, KW3[15:0] */ + cfg = KEX_LD_CFG(0x1, 0x0, 0x1, 0x0, 0x18); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_TCP, 0, cfg); + /* DPORT: 2 bytes, KW3[31:16] */ + cfg = KEX_LD_CFG(0x1, 0x2, 0x1, 0x0, 0x1a); + SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_TCP, 1, cfg); +} + +static void npc_program_mkex_profile(struct rvu *rvu, int blkaddr, + struct npc_mcam_kex *mkex) +{ + int lid, lt, ld, fl; + + rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_RX), + mkex->keyx_cfg[NIX_INTF_RX]); + rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_TX), + mkex->keyx_cfg[NIX_INTF_TX]); + + for (ld = 0; ld < NPC_MAX_LD; ld++) + rvu_write64(rvu, blkaddr, NPC_AF_KEX_LDATAX_FLAGS_CFG(ld), + mkex->kex_ld_flags[ld]); + + for (lid = 0; lid < NPC_MAX_LID; lid++) { + for (lt = 0; lt < NPC_MAX_LT; lt++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + SET_KEX_LD(NIX_INTF_RX, lid, lt, ld, + mkex->intf_lid_lt_ld[NIX_INTF_RX] + [lid][lt][ld]); + + SET_KEX_LD(NIX_INTF_TX, lid, lt, ld, + mkex->intf_lid_lt_ld[NIX_INTF_TX] + [lid][lt][ld]); + } + } + } + + for (ld = 0; ld < NPC_MAX_LD; ld++) { + for (fl = 0; fl < NPC_MAX_LFL; fl++) { + SET_KEX_LDFLAGS(NIX_INTF_RX, ld, fl, + mkex->intf_ld_flags[NIX_INTF_RX] + [ld][fl]); + + SET_KEX_LDFLAGS(NIX_INTF_TX, ld, fl, + mkex->intf_ld_flags[NIX_INTF_TX] + [ld][fl]); + } + } +} + +/* strtoull of "mkexprof" with base:36 */ +#define MKEX_SIGN 0x19bbfdbd15f +#define MKEX_END_SIGN 0xdeadbeef + +static void npc_load_mkex_profile(struct rvu *rvu, int blkaddr) +{ + const char *mkex_profile = rvu->mkex_pfl_name; + struct device *dev = &rvu->pdev->dev; + void __iomem *mkex_prfl_addr = NULL; + struct npc_mcam_kex *mcam_kex; + u64 prfl_addr; + u64 prfl_sz; + + /* If user not selected mkex profile */ + if (!strncmp(mkex_profile, "default", MKEX_NAME_LEN)) + goto load_default; + + if (cgx_get_mkex_prfl_info(&prfl_addr, &prfl_sz)) + goto load_default; + + if (!prfl_addr || !prfl_sz) + goto load_default; + + mkex_prfl_addr = ioremap_wc(prfl_addr, prfl_sz); + if (!mkex_prfl_addr) + goto load_default; + + mcam_kex = (struct npc_mcam_kex *)mkex_prfl_addr; + + while (((s64)prfl_sz > 0) && (mcam_kex->mkex_sign != MKEX_END_SIGN)) { + 
/* Compare with mkex mod_param name string */ + if (mcam_kex->mkex_sign == MKEX_SIGN && + !strncmp(mcam_kex->name, mkex_profile, MKEX_NAME_LEN)) { + /* Due to an errata (35786) in A0 pass silicon, + * parse nibble enable configuration has to be + * identical for both Rx and Tx interfaces. + */ + if (is_rvu_9xxx_A0(rvu) && + mcam_kex->keyx_cfg[NIX_INTF_RX] != + mcam_kex->keyx_cfg[NIX_INTF_TX]) + goto load_default; + + /* Program selected mkex profile */ + npc_program_mkex_profile(rvu, blkaddr, mcam_kex); + + goto unmap; + } + + mcam_kex++; + prfl_sz -= sizeof(struct npc_mcam_kex); + } + dev_warn(dev, "Failed to load requested profile: %s\n", + rvu->mkex_pfl_name); + +load_default: + dev_info(rvu->dev, "Using default mkex profile\n"); + /* Config packet data and flags extraction into PARSE result */ + npc_config_ldata_extract(rvu, blkaddr); + +unmap: + if (mkex_prfl_addr) + iounmap(mkex_prfl_addr); } static void npc_config_kpuaction(struct rvu *rvu, int blkaddr, @@ -690,13 +989,14 @@ static int npc_mcam_rsrcs_init(struct rvu *rvu, int blkaddr) { int nixlf_count = rvu_get_nixlf_count(rvu); struct npc_mcam *mcam = &rvu->hw->mcam; - int rsvd; + int rsvd, err; u64 cfg; /* Get HW limits */ cfg = rvu_read64(rvu, blkaddr, NPC_AF_CONST); mcam->banks = (cfg >> 44) & 0xF; mcam->banksize = (cfg >> 28) & 0xFFFF; + mcam->counters.max = (cfg >> 48) & 0xFFFF; /* Actual number of MCAM entries vary by entry size */ cfg = (rvu_read64(rvu, blkaddr, @@ -728,20 +1028,82 @@ static int npc_mcam_rsrcs_init(struct rvu *rvu, int blkaddr) return -ENOMEM; } - mcam->entries = mcam->total_entries - rsvd; - mcam->nixlf_offset = mcam->entries; + mcam->bmap_entries = mcam->total_entries - rsvd; + mcam->nixlf_offset = mcam->bmap_entries; mcam->pf_offset = mcam->nixlf_offset + nixlf_count; - spin_lock_init(&mcam->lock); + /* Allocate bitmaps for managing MCAM entries */ + mcam->bmap = devm_kcalloc(rvu->dev, BITS_TO_LONGS(mcam->bmap_entries), + sizeof(long), GFP_KERNEL); + if (!mcam->bmap) + return -ENOMEM; + + mcam->bmap_reverse = devm_kcalloc(rvu->dev, + BITS_TO_LONGS(mcam->bmap_entries), + sizeof(long), GFP_KERNEL); + if (!mcam->bmap_reverse) + return -ENOMEM; + + mcam->bmap_fcnt = mcam->bmap_entries; + + /* Alloc memory for saving entry to RVU PFFUNC allocation mapping */ + mcam->entry2pfvf_map = devm_kcalloc(rvu->dev, mcam->bmap_entries, + sizeof(u16), GFP_KERNEL); + if (!mcam->entry2pfvf_map) + return -ENOMEM; + + /* Reserve 1/8th of MCAM entries at the bottom for low priority + * allocations and another 1/8th at the top for high priority + * allocations. + */ + mcam->lprio_count = mcam->bmap_entries / 8; + if (mcam->lprio_count > BITS_PER_LONG) + mcam->lprio_count = round_down(mcam->lprio_count, + BITS_PER_LONG); + mcam->lprio_start = mcam->bmap_entries - mcam->lprio_count; + mcam->hprio_count = mcam->lprio_count; + mcam->hprio_end = mcam->hprio_count; + + /* Allocate bitmap for managing MCAM counters and memory + * for saving counter to RVU PFFUNC allocation mapping. + */ + err = rvu_alloc_bitmap(&mcam->counters); + if (err) + return err; + + mcam->cntr2pfvf_map = devm_kcalloc(rvu->dev, mcam->counters.max, + sizeof(u16), GFP_KERNEL); + if (!mcam->cntr2pfvf_map) + goto free_mem; + + /* Alloc memory for MCAM entry to counter mapping and for tracking + * counter's reference count. 
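npc_load_mkex_profile() above treats the mapped firmware region as an array of signed records: walk until the end signature appears or the remaining size runs out, and select the record whose signature and name both match. A reduced stand-alone sketch of that walk; struct prfl_rec and NAME_LEN mirror, but simplify, struct npc_mcam_kex and MKEX_NAME_LEN:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define REC_SIGN	0x19bbfdbd15fULL	/* "mkexprof" in base 36, as above */
#define REC_END_SIGN	0xdeadbeefULL
#define NAME_LEN	16			/* stand-in for MKEX_NAME_LEN */

struct prfl_rec {				/* reduced npc_mcam_kex stand-in */
	uint64_t mkex_sign;
	char name[NAME_LEN];
	/* ... profile payload ... */
};

/* Walk a mapped region of records until the end marker or size runs out */
static const struct prfl_rec *find_profile(const void *base, int64_t sz,
					   const char *want)
{
	const struct prfl_rec *rec = base;

	while (sz > 0 && rec->mkex_sign != REC_END_SIGN) {
		if (rec->mkex_sign == REC_SIGN &&
		    !strncmp(rec->name, want, NAME_LEN))
			return rec;
		rec++;
		sz -= (int64_t)sizeof(*rec);
	}
	return NULL;	/* caller falls back to the built-in default */
}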
+ */ + mcam->entry2cntr_map = devm_kcalloc(rvu->dev, mcam->bmap_entries, + sizeof(u16), GFP_KERNEL); + if (!mcam->entry2cntr_map) + goto free_mem; + + mcam->cntr_refcnt = devm_kcalloc(rvu->dev, mcam->counters.max, + sizeof(u16), GFP_KERNEL); + if (!mcam->cntr_refcnt) + goto free_mem; + + mutex_init(&mcam->lock); return 0; + +free_mem: + kfree(mcam->counters.bmap); + return -ENOMEM; } int rvu_npc_init(struct rvu *rvu) { struct npc_pkind *pkind = &rvu->hw->pkind; u64 keyz = NPC_MCAM_KEY_X2; - int blkaddr, err; + int blkaddr, entry, bank, err; + u64 cfg, nibble_ena; blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); if (blkaddr < 0) { @@ -749,6 +1111,14 @@ int rvu_npc_init(struct rvu *rvu) return -ENODEV; } + /* First disable all MCAM entries, to stop traffic towards NIXLFs */ + cfg = rvu_read64(rvu, blkaddr, NPC_AF_CONST); + for (bank = 0; bank < ((cfg >> 44) & 0xF); bank++) { + for (entry = 0; entry < ((cfg >> 28) & 0xFFFF); entry++) + rvu_write64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_CFG(entry, bank), 0); + } + /* Allocate resource bimap for pkind*/ pkind->rsrc.max = (rvu_read64(rvu, blkaddr, NPC_AF_CONST1) >> 12) & 0xFF; @@ -771,29 +1141,41 @@ int rvu_npc_init(struct rvu *rvu) rvu_write64(rvu, blkaddr, NPC_AF_PCK_DEF_OIP4, (NPC_LID_LC << 8) | (NPC_LT_LC_IP << 4) | 0x0F); + /* Config Inner IPV4 NPC layer info */ + rvu_write64(rvu, blkaddr, NPC_AF_PCK_DEF_IIP4, + (NPC_LID_LF << 8) | (NPC_LT_LF_TU_IP << 4) | 0x0F); + /* Enable below for Rx pkts. * - Outer IPv4 header checksum validation. * - Detect outer L2 broadcast address and set NPC_RESULT_S[L2M]. + * - Inner IPv4 header checksum validation. + * - Set non zero checksum error code value */ rvu_write64(rvu, blkaddr, NPC_AF_PCK_CFG, rvu_read64(rvu, blkaddr, NPC_AF_PCK_CFG) | - BIT_ULL(6) | BIT_ULL(2)); + BIT_ULL(32) | BIT_ULL(24) | BIT_ULL(6) | + BIT_ULL(2) | BIT_ULL(1)); /* Set RX and TX side MCAM search key size. - * Also enable parse key extract nibbles suchthat except - * layer E to H, rest of the key is included for MCAM search. + * LA..LD (ltype only) + Channel */ + nibble_ena = 0x49247; rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_RX), - ((keyz & 0x3) << 32) | ((1ULL << 20) - 1)); + ((keyz & 0x3) << 32) | nibble_ena); + /* Due to an errata (35786) in A0 pass silicon, parse nibble enable + * configuration has to be identical for both Rx and Tx interfaces. + */ + if (!is_rvu_9xxx_A0(rvu)) + nibble_ena = (1ULL << 19) - 1; rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_TX), - ((keyz & 0x3) << 32) | ((1ULL << 20) - 1)); + ((keyz & 0x3) << 32) | nibble_ena); err = npc_mcam_rsrcs_init(rvu, blkaddr); if (err) return err; - /* Config packet data and flags extraction into PARSE result */ - npc_config_ldata_extract(rvu, blkaddr); + /* Configure MKEX profile */ + npc_load_mkex_profile(rvu, blkaddr); /* Set TX miss action to UCAST_DEFAULT i.e * transmit the packet on NIX LF SQ's default channel. @@ -811,6 +1193,1020 @@ int rvu_npc_init(struct rvu *rvu) void rvu_npc_freemem(struct rvu *rvu) { struct npc_pkind *pkind = &rvu->hw->pkind; + struct npc_mcam *mcam = &rvu->hw->mcam; kfree(pkind->rsrc.bmap); + kfree(mcam->counters.bmap); + mutex_destroy(&mcam->lock); +} + +static int npc_mcam_verify_entry(struct npc_mcam *mcam, + u16 pcifunc, int entry) +{ + /* Verify if entry is valid and if it is indeed + * allocated to the requesting PFFUNC. 
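Both verifier helpers here share one shape: bounds-check the index, then compare the requester against an owner map that was filled in at allocation time. The same check in isolation; errno values stand in for the NPC_MCAM_* mailbox error codes:

#include <errno.h>
#include <stdint.h>

/* owner_map[i] records which PFFUNC allocated resource i */
static int verify_owner(const uint16_t *owner_map, uint16_t nents,
			uint16_t pcifunc, uint16_t idx)
{
	if (idx >= nents)
		return -EINVAL;	/* NPC_MCAM_INVALID_REQ analogue */
	if (owner_map[idx] != pcifunc)
		return -EPERM;	/* NPC_MCAM_PERM_DENIED analogue */
	return 0;
}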
+ */ + if (entry >= mcam->bmap_entries) + return NPC_MCAM_INVALID_REQ; + + if (pcifunc != mcam->entry2pfvf_map[entry]) + return NPC_MCAM_PERM_DENIED; + + return 0; +} + +static int npc_mcam_verify_counter(struct npc_mcam *mcam, + u16 pcifunc, int cntr) +{ + /* Verify if counter is valid and if it is indeed + * allocated to the requesting PFFUNC. + */ + if (cntr >= mcam->counters.max) + return NPC_MCAM_INVALID_REQ; + + if (pcifunc != mcam->cntr2pfvf_map[cntr]) + return NPC_MCAM_PERM_DENIED; + + return 0; +} + +static void npc_map_mcam_entry_and_cntr(struct rvu *rvu, struct npc_mcam *mcam, + int blkaddr, u16 entry, u16 cntr) +{ + u16 index = entry & (mcam->banksize - 1); + u16 bank = npc_get_bank(mcam, entry); + + /* Set mapping and increment counter's refcnt */ + mcam->entry2cntr_map[entry] = cntr; + mcam->cntr_refcnt[cntr]++; + /* Enable stats */ + rvu_write64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_STAT_ACT(index, bank), + BIT_ULL(9) | cntr); +} + +static void npc_unmap_mcam_entry_and_cntr(struct rvu *rvu, + struct npc_mcam *mcam, + int blkaddr, u16 entry, u16 cntr) +{ + u16 index = entry & (mcam->banksize - 1); + u16 bank = npc_get_bank(mcam, entry); + + /* Remove mapping and reduce counter's refcnt */ + mcam->entry2cntr_map[entry] = NPC_MCAM_INVALID_MAP; + mcam->cntr_refcnt[cntr]--; + /* Disable stats */ + rvu_write64(rvu, blkaddr, + NPC_AF_MCAMEX_BANKX_STAT_ACT(index, bank), 0x00); +} + +/* Sets MCAM entry in bitmap as used. Update + * reverse bitmap too. Should be called with + * 'mcam->lock' held. + */ +static void npc_mcam_set_bit(struct npc_mcam *mcam, u16 index) +{ + u16 entry, rentry; + + entry = index; + rentry = mcam->bmap_entries - index - 1; + + __set_bit(entry, mcam->bmap); + __set_bit(rentry, mcam->bmap_reverse); + mcam->bmap_fcnt--; +} + +/* Sets MCAM entry in bitmap as free. Update + * reverse bitmap too. Should be called with + * 'mcam->lock' held. + */ +static void npc_mcam_clear_bit(struct npc_mcam *mcam, u16 index) +{ + u16 entry, rentry; + + entry = index; + rentry = mcam->bmap_entries - index - 1; + + __clear_bit(entry, mcam->bmap); + __clear_bit(rentry, mcam->bmap_reverse); + mcam->bmap_fcnt++; +} + +static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam, + int blkaddr, u16 pcifunc) +{ + u16 index, cntr; + + /* Scan all MCAM entries and free the ones mapped to 'pcifunc' */ + for (index = 0; index < mcam->bmap_entries; index++) { + if (mcam->entry2pfvf_map[index] == pcifunc) { + mcam->entry2pfvf_map[index] = NPC_MCAM_INVALID_MAP; + /* Free the entry in bitmap */ + npc_mcam_clear_bit(mcam, index); + /* Disable the entry */ + npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false); + + /* Update entry2counter mapping */ + cntr = mcam->entry2cntr_map[index]; + if (cntr != NPC_MCAM_INVALID_MAP) + npc_unmap_mcam_entry_and_cntr(rvu, mcam, + blkaddr, index, + cntr); + } + } +} + +static void npc_mcam_free_all_counters(struct rvu *rvu, struct npc_mcam *mcam, + u16 pcifunc) +{ + u16 cntr; + + /* Scan all MCAM counters and free the ones mapped to 'pcifunc' */ + for (cntr = 0; cntr < mcam->counters.max; cntr++) { + if (mcam->cntr2pfvf_map[cntr] == pcifunc) { + mcam->cntr2pfvf_map[cntr] = NPC_MCAM_INVALID_MAP; + mcam->cntr_refcnt[cntr] = 0; + rvu_free_rsrc(&mcam->counters, cntr); + /* This API is expected to be called after freeing + * MCAM entries, which in turn will remove + * the 'entry to counter' mapping. + * No need to do it again. + */ + } + } +} + +/* Find area of contiguous free entries of size 'nr'.
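npc_mcam_set_bit()/npc_mcam_clear_bit() keep two bitmaps in lock-step so that "allocate from the top" can reuse the ordinary find-first-zero primitives: bit i in the forward map mirrors bit n - i - 1 in the reverse map. A self-contained sketch of the mirrored update:

#include <limits.h>

#define LONG_BITS (sizeof(unsigned long) * CHAR_BIT)

static void bit_set(unsigned long *map, unsigned int i)
{
	map[i / LONG_BITS] |= 1UL << (i % LONG_BITS);
}

/* Mark entry 'i' used in both the forward and the mirrored reverse map */
static void bmap_set_both(unsigned long *fwd, unsigned long *rev,
			  unsigned int nents, unsigned int i)
{
	bit_set(fwd, i);
	bit_set(rev, nents - i - 1);	/* same trick as npc_mcam_set_bit() */
}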
+ * If not found, return the max contiguous free entries available. + */ +static u16 npc_mcam_find_zero_area(unsigned long *map, u16 size, u16 start, + u16 nr, u16 *max_area) +{ + u16 max_area_start = 0; + u16 index, next, end; + + *max_area = 0; + +again: + index = find_next_zero_bit(map, size, start); + if (index >= size) + return max_area_start; + + end = ((index + nr) >= size) ? size : index + nr; + next = find_next_bit(map, end, index); + if (*max_area < (next - index)) { + *max_area = next - index; + max_area_start = index; + } + + if (next < end) { + start = next + 1; + goto again; + } + + return max_area_start; +} + +/* Find number of free MCAM entries available + * within the range, i.e. between 'start' and 'end'. + */ +static u16 npc_mcam_get_free_count(unsigned long *map, u16 start, u16 end) +{ + u16 index, next; + u16 fcnt = 0; + +again: + if (start >= end) + return fcnt; + + index = find_next_zero_bit(map, end, start); + if (index >= end) + return fcnt; + + next = find_next_bit(map, end, index); + if (next <= end) { + fcnt += next - index; + start = next + 1; + goto again; + } + + fcnt += end - index; + return fcnt; +} + +static void +npc_get_mcam_search_range_priority(struct npc_mcam *mcam, + struct npc_mcam_alloc_entry_req *req, + u16 *start, u16 *end, bool *reverse) +{ + u16 fcnt; + + if (req->priority == NPC_MCAM_HIGHER_PRIO) + goto hprio; + + /* For a low priority entry allocation + * - If reference entry is not in hprio zone then + * search range: ref_entry to end. + * - If reference entry is in hprio zone and if + * request can be accommodated in non-hprio zone then + * search range: 'start of middle zone' to 'end' + * - else search in reverse, so that fewer hprio + * zone entries are allocated. + */ + + *reverse = false; + *start = req->ref_entry + 1; + *end = mcam->bmap_entries; + + if (req->ref_entry >= mcam->hprio_end) + return; + + fcnt = npc_mcam_get_free_count(mcam->bmap, + mcam->hprio_end, mcam->bmap_entries); + if (fcnt > req->count) + *start = mcam->hprio_end; + else + *reverse = true; + return; + +hprio: + /* For a high priority entry allocation, search is always + * in reverse to preserve hprio zone entries. + * - If reference entry is not in lprio zone then + * search range: 0 to ref_entry. + * - If reference entry is in lprio zone and if + * request can be accommodated in middle zone then + * search range: 'hprio_end' to 'lprio_start' + */ + + *reverse = true; + *start = 0; + *end = req->ref_entry; + + if (req->ref_entry <= mcam->lprio_start) + return; + + fcnt = npc_mcam_get_free_count(mcam->bmap, + mcam->hprio_end, mcam->lprio_start); + if (fcnt < req->count) + return; + *start = mcam->hprio_end; + *end = mcam->lprio_start; +} + +static int npc_mcam_alloc_entries(struct npc_mcam *mcam, u16 pcifunc, + struct npc_mcam_alloc_entry_req *req, + struct npc_mcam_alloc_entry_rsp *rsp) +{ + u16 entry_list[NPC_MAX_NONCONTIG_ENTRIES]; + u16 fcnt, hp_fcnt, lp_fcnt; + u16 start, end, index; + int entry, next_start; + bool reverse = false; + unsigned long *bmap; + u16 max_contig; + + mutex_lock(&mcam->lock); + + /* Check if there are any free entries */ + if (!mcam->bmap_fcnt) { + mutex_unlock(&mcam->lock); + return NPC_MCAM_ALLOC_FAILED; + } + + /* MCAM entries are divided into high priority, middle and + * low priority zones. The idea is to avoid allocating the top-most + * and bottom-most entries as much as possible, to increase the + * probability of honouring priority allocation requests.
+ * + * Two bitmaps are used for mcam entry management, + * mcam->bmap for forward search i.e '0 to mcam->bmap_entries'. + * mcam->bmap_reverse for reverse search i.e 'mcam->bmap_entries to 0'. + * + * The reverse bitmap is used to allocate entries + * - when a higher priority entry is requested + * - when few free entries are available. + * Of the available free entries, lower priority ones are always + * chosen when the 'high vs low' question arises. + */ + + /* Get the search range for priority allocation request */ + if (req->priority) { + npc_get_mcam_search_range_priority(mcam, req, + &start, &end, &reverse); + goto alloc; + } + + /* Find out the search range for non-priority allocation request + * + * Get MCAM free entry count in middle zone. + */ + lp_fcnt = npc_mcam_get_free_count(mcam->bmap, + mcam->lprio_start, + mcam->bmap_entries); + hp_fcnt = npc_mcam_get_free_count(mcam->bmap, 0, mcam->hprio_end); + fcnt = mcam->bmap_fcnt - lp_fcnt - hp_fcnt; + + /* Check if request can be accommodated in the middle zone */ + if (fcnt > req->count) { + start = mcam->hprio_end; + end = mcam->lprio_start; + } else if ((fcnt + (hp_fcnt / 2) + (lp_fcnt / 2)) > req->count) { + /* Expand search zone from half of hprio zone to + * half of lprio zone. + */ + start = mcam->hprio_end / 2; + end = mcam->bmap_entries - (mcam->lprio_count / 2); + reverse = true; + } else { + /* Not enough free entries, search all entries in reverse, + * so that low priority ones will get used up. + */ + reverse = true; + start = 0; + end = mcam->bmap_entries; + } + +alloc: + if (reverse) { + bmap = mcam->bmap_reverse; + start = mcam->bmap_entries - start; + end = mcam->bmap_entries - end; + index = start; + start = end; + end = index; + } else { + bmap = mcam->bmap; + } + + if (req->contig) { + /* Allocate requested number of contiguous entries, if + * unsuccessful find max contiguous entries available. + */ + index = npc_mcam_find_zero_area(bmap, end, start, + req->count, &max_contig); + rsp->count = max_contig; + if (reverse) + rsp->entry = mcam->bmap_entries - index - max_contig; + else + rsp->entry = index; + } else { + /* Allocate requested number of non-contiguous entries, + * if unsuccessful allocate as many as possible. + */ + rsp->count = 0; + next_start = start; + for (entry = 0; entry < req->count; entry++) { + index = find_next_zero_bit(bmap, end, next_start); + if (index >= end) + break; + + next_start = start + (index - start) + 1; + + /* Save the entry's index */ + if (reverse) + index = mcam->bmap_entries - index - 1; + entry_list[entry] = index; + rsp->count++; + } + } + + /* If allocating the requested number of entries is unsuccessful, + * expand the search range to the full bitmap length and retry. + */ + if (!req->priority && (rsp->count < req->count) && + ((end - start) != mcam->bmap_entries)) { + reverse = true; + start = 0; + end = mcam->bmap_entries; + goto alloc; + } + + /* For priority entry allocation requests, if allocation + * fails then expand the search to the max possible range and retry. + */ + if (req->priority && rsp->count < req->count) { + if (req->priority == NPC_MCAM_LOWER_PRIO && + (start != (req->ref_entry + 1))) { + start = req->ref_entry + 1; + end = mcam->bmap_entries; + reverse = false; + goto alloc; + } else if ((req->priority == NPC_MCAM_HIGHER_PRIO) && + ((end - start) != req->ref_entry)) { + start = 0; + end = req->ref_entry; + reverse = true; + goto alloc; + } + } + + /* Copy MCAM entry indices into mbox response entry_list.
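The 'alloc:' block above changes coordinate systems before searching: a window [start, end) in forward coordinates becomes [n - end, n - start) on the reverse bitmap, which the code achieves by subtracting both bounds from n and then swapping them. The same transform in isolation:

#include <stdint.h>

/* Map a forward search window [start, end) onto the reverse bitmap,
 * where forward bit i lives at reverse position n - i - 1.
 */
static void window_to_reverse(uint16_t n, uint16_t *start, uint16_t *end)
{
	uint16_t rstart = n - *end;	/* the old end bounds the new start */
	uint16_t rend = n - *start;

	*start = rstart;
	*end = rend;
}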
+ * Requester always expects indices in ascending order, + * so reverse the list if the reverse bitmap is used for allocation. + */ + if (!req->contig && rsp->count) { + index = 0; + for (entry = rsp->count - 1; entry >= 0; entry--) { + if (reverse) + rsp->entry_list[index++] = entry_list[entry]; + else + rsp->entry_list[entry] = entry_list[entry]; + } + } + + /* Mark the allocated entries as used and set nixlf mapping */ + for (entry = 0; entry < rsp->count; entry++) { + index = req->contig ? + (rsp->entry + entry) : rsp->entry_list[entry]; + npc_mcam_set_bit(mcam, index); + mcam->entry2pfvf_map[index] = pcifunc; + mcam->entry2cntr_map[index] = NPC_MCAM_INVALID_MAP; + } + + /* Update available free count in mbox response */ + rsp->free_count = mcam->bmap_fcnt; + + mutex_unlock(&mcam->lock); + return 0; +} + +int rvu_mbox_handler_npc_mcam_alloc_entry(struct rvu *rvu, + struct npc_mcam_alloc_entry_req *req, + struct npc_mcam_alloc_entry_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 pcifunc = req->hdr.pcifunc; + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + rsp->entry = NPC_MCAM_ENTRY_INVALID; + rsp->free_count = 0; + + /* Check if ref_entry is within range */ + if (req->priority && req->ref_entry >= mcam->bmap_entries) + return NPC_MCAM_INVALID_REQ; + + /* ref_entry can't be '0' if requested priority is high. + * It can't be the last entry if requested priority is low. + */ + if ((!req->ref_entry && req->priority == NPC_MCAM_HIGHER_PRIO) || + ((req->ref_entry == (mcam->bmap_entries - 1)) && + req->priority == NPC_MCAM_LOWER_PRIO)) + return NPC_MCAM_INVALID_REQ; + + /* Since list of allocated indices needs to be sent to requester, + * max number of non-contiguous entries per mbox msg is limited.
+ */ + if (!req->contig && req->count > NPC_MAX_NONCONTIG_ENTRIES) + return NPC_MCAM_INVALID_REQ; + + /* Alloc request from PFFUNC with no NIXLF attached should be denied */ + if (!is_nixlf_attached(rvu, pcifunc)) + return NPC_MCAM_ALLOC_DENIED; + + return npc_mcam_alloc_entries(mcam, pcifunc, req, rsp); +} + +int rvu_mbox_handler_npc_mcam_free_entry(struct rvu *rvu, + struct npc_mcam_free_entry_req *req, + struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 pcifunc = req->hdr.pcifunc; + int blkaddr, rc = 0; + u16 cntr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + /* Free request from PFFUNC with no NIXLF attached, ignore */ + if (!is_nixlf_attached(rvu, pcifunc)) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + + if (req->all) + goto free_all; + + rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry); + if (rc) + goto exit; + + mcam->entry2pfvf_map[req->entry] = 0; + npc_mcam_clear_bit(mcam, req->entry); + npc_enable_mcam_entry(rvu, mcam, blkaddr, req->entry, false); + + /* Update entry2counter mapping */ + cntr = mcam->entry2cntr_map[req->entry]; + if (cntr != NPC_MCAM_INVALID_MAP) + npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr, + req->entry, cntr); + + goto exit; + +free_all: + /* Free up all entries allocated to requesting PFFUNC */ + npc_mcam_free_all_entries(rvu, mcam, blkaddr, pcifunc); +exit: + mutex_unlock(&mcam->lock); + return rc; +} + +int rvu_mbox_handler_npc_mcam_write_entry(struct rvu *rvu, + struct npc_mcam_write_entry_req *req, + struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 pcifunc = req->hdr.pcifunc; + int blkaddr, rc; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry); + if (rc) + goto exit; + + if (req->set_cntr && + npc_mcam_verify_counter(mcam, pcifunc, req->cntr)) { + rc = NPC_MCAM_INVALID_REQ; + goto exit; + } + + if (req->intf != NIX_INTF_RX && req->intf != NIX_INTF_TX) { + rc = NPC_MCAM_INVALID_REQ; + goto exit; + } + + npc_config_mcam_entry(rvu, mcam, blkaddr, req->entry, req->intf, + &req->entry_data, req->enable_entry); + + if (req->set_cntr) + npc_map_mcam_entry_and_cntr(rvu, mcam, blkaddr, + req->entry, req->cntr); + + rc = 0; +exit: + mutex_unlock(&mcam->lock); + return rc; +} + +int rvu_mbox_handler_npc_mcam_ena_entry(struct rvu *rvu, + struct npc_mcam_ena_dis_entry_req *req, + struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 pcifunc = req->hdr.pcifunc; + int blkaddr, rc; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry); + mutex_unlock(&mcam->lock); + if (rc) + return rc; + + npc_enable_mcam_entry(rvu, mcam, blkaddr, req->entry, true); + + return 0; +} + +int rvu_mbox_handler_npc_mcam_dis_entry(struct rvu *rvu, + struct npc_mcam_ena_dis_entry_req *req, + struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 pcifunc = req->hdr.pcifunc; + int blkaddr, rc; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry); + mutex_unlock(&mcam->lock); + if (rc) + return rc; + + npc_enable_mcam_entry(rvu, mcam, blkaddr, req->entry, false); + + return 0; +} + +int rvu_mbox_handler_npc_mcam_shift_entry(struct rvu 
*rvu, + struct npc_mcam_shift_entry_req *req, + struct npc_mcam_shift_entry_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 pcifunc = req->hdr.pcifunc; + u16 old_entry, new_entry; + u16 index, cntr; + int blkaddr, rc; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + if (req->shift_count > NPC_MCAM_MAX_SHIFTS) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + for (index = 0; index < req->shift_count; index++) { + old_entry = req->curr_entry[index]; + new_entry = req->new_entry[index]; + + /* Check if both old and new entries are valid and + * does belong to this PFFUNC or not. + */ + rc = npc_mcam_verify_entry(mcam, pcifunc, old_entry); + if (rc) + break; + + rc = npc_mcam_verify_entry(mcam, pcifunc, new_entry); + if (rc) + break; + + /* new_entry should not have a counter mapped */ + if (mcam->entry2cntr_map[new_entry] != NPC_MCAM_INVALID_MAP) { + rc = NPC_MCAM_PERM_DENIED; + break; + } + + /* Disable the new_entry */ + npc_enable_mcam_entry(rvu, mcam, blkaddr, new_entry, false); + + /* Copy rule from old entry to new entry */ + npc_copy_mcam_entry(rvu, mcam, blkaddr, old_entry, new_entry); + + /* Copy counter mapping, if any */ + cntr = mcam->entry2cntr_map[old_entry]; + if (cntr != NPC_MCAM_INVALID_MAP) { + npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr, + old_entry, cntr); + npc_map_mcam_entry_and_cntr(rvu, mcam, blkaddr, + new_entry, cntr); + } + + /* Enable new_entry and disable old_entry */ + npc_enable_mcam_entry(rvu, mcam, blkaddr, new_entry, true); + npc_enable_mcam_entry(rvu, mcam, blkaddr, old_entry, false); + } + + /* If shift has failed then report the failed index */ + if (index != req->shift_count) { + rc = NPC_MCAM_PERM_DENIED; + rsp->failed_entry_idx = index; + } + + mutex_unlock(&mcam->lock); + return rc; +} + +int rvu_mbox_handler_npc_mcam_alloc_counter(struct rvu *rvu, + struct npc_mcam_alloc_counter_req *req, + struct npc_mcam_alloc_counter_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 pcifunc = req->hdr.pcifunc; + u16 max_contig, cntr; + int blkaddr, index; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + /* If the request is from a PFFUNC with no NIXLF attached, ignore */ + if (!is_nixlf_attached(rvu, pcifunc)) + return NPC_MCAM_INVALID_REQ; + + /* Since list of allocated counter IDs needs to be sent to requester, + * max number of non-contiguous counters per mbox msg is limited. + */ + if (!req->contig && req->count > NPC_MAX_NONCONTIG_COUNTERS) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + + /* Check if unused counters are available or not */ + if (!rvu_rsrc_free_count(&mcam->counters)) { + mutex_unlock(&mcam->lock); + return NPC_MCAM_ALLOC_FAILED; + } + + rsp->count = 0; + + if (req->contig) { + /* Allocate requested number of contiguous counters, if + * unsuccessful find max contiguous entries available. + */ + index = npc_mcam_find_zero_area(mcam->counters.bmap, + mcam->counters.max, 0, + req->count, &max_contig); + rsp->count = max_contig; + rsp->cntr = index; + for (cntr = index; cntr < (index + max_contig); cntr++) { + __set_bit(cntr, mcam->counters.bmap); + mcam->cntr2pfvf_map[cntr] = pcifunc; + } + } else { + /* Allocate requested number of non-contiguous counters, + * if unsuccessful allocate as many as possible. 
+ */ + for (cntr = 0; cntr < req->count; cntr++) { + index = rvu_alloc_rsrc(&mcam->counters); + if (index < 0) + break; + rsp->cntr_list[cntr] = index; + rsp->count++; + mcam->cntr2pfvf_map[index] = pcifunc; + } + + mutex_unlock(&mcam->lock); + return 0; +} + +int rvu_mbox_handler_npc_mcam_free_counter(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 index, entry = 0; + int blkaddr, err; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr); + if (err) { + mutex_unlock(&mcam->lock); + return err; + } + + /* Mark counter as free/unused */ + mcam->cntr2pfvf_map[req->cntr] = NPC_MCAM_INVALID_MAP; + rvu_free_rsrc(&mcam->counters, req->cntr); + + /* Disable stats for all MCAM entries which are using this counter. + * Note: advance 'entry' past the found bit before the mapping test, + * otherwise a set bit mapped to another counter would make this + * loop spin forever. + */ + while (entry < mcam->bmap_entries) { + if (!mcam->cntr_refcnt[req->cntr]) + break; + + index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry); + if (index >= mcam->bmap_entries) + break; + entry = index + 1; + if (mcam->entry2cntr_map[index] != req->cntr) + continue; + + npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr, + index, req->cntr); + } + + mutex_unlock(&mcam->lock); + return 0; +} + +int rvu_mbox_handler_npc_mcam_unmap_counter(struct rvu *rvu, + struct npc_mcam_unmap_counter_req *req, struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 index, entry = 0; + int blkaddr, rc; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + rc = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr); + if (rc) + goto exit; + + /* Unmap the MCAM entry and counter */ + if (!req->all) { + rc = npc_mcam_verify_entry(mcam, req->hdr.pcifunc, req->entry); + if (rc) + goto exit; + npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr, + req->entry, req->cntr); + goto exit; + } + + /* Disable stats for all MCAM entries which are using this counter, + * advancing 'entry' before the mapping test as above. + */ + while (entry < mcam->bmap_entries) { + if (!mcam->cntr_refcnt[req->cntr]) + break; + + index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry); + if (index >= mcam->bmap_entries) + break; + entry = index + 1; + if (mcam->entry2cntr_map[index] != req->cntr) + continue; + + npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr, + index, req->cntr); + } +exit: + mutex_unlock(&mcam->lock); + return rc; +} + +int rvu_mbox_handler_npc_mcam_clear_counter(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + int blkaddr, err; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr); + mutex_unlock(&mcam->lock); + if (err) + return err; + + rvu_write64(rvu, blkaddr, NPC_AF_MATCH_STATX(req->cntr), 0x00); + + return 0; +} + +int rvu_mbox_handler_npc_mcam_counter_stats(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, + struct npc_mcam_oper_counter_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + int blkaddr, err; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + mutex_lock(&mcam->lock); + err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr); + mutex_unlock(&mcam->lock); + if (err) + return err; + + rsp->stat = rvu_read64(rvu, blkaddr,
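The counter scans above only terminate because the cursor moves past each found bit before any 'continue'; a for-loop makes that invariant structural rather than a matter of statement order. A user-space sketch, with a naive next_set() standing in for the kernel's find_next_bit():

#include <limits.h>

#define LONG_BITS (sizeof(unsigned long) * CHAR_BIT)

static unsigned int next_set(const unsigned long *map, unsigned int n,
			     unsigned int from)
{
	for (; from < n; from++)
		if (map[from / LONG_BITS] & (1UL << (from % LONG_BITS)))
			return from;
	return n;
}

/* Visit every set bit exactly once; a 'continue' in the body can never
 * loop in place because the cursor advances in the for-statement itself.
 */
static void for_each_set(const unsigned long *map, unsigned int n,
			 void (*fn)(unsigned int))
{
	unsigned int i;

	for (i = next_set(map, n, 0); i < n; i = next_set(map, n, i + 1))
		fn(i);
}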
NPC_AF_MATCH_STATX(req->cntr)); + rsp->stat &= BIT_ULL(48) - 1; + + return 0; +} + +int rvu_mbox_handler_npc_mcam_alloc_and_write_entry(struct rvu *rvu, + struct npc_mcam_alloc_and_write_entry_req *req, + struct npc_mcam_alloc_and_write_entry_rsp *rsp) +{ + struct npc_mcam_alloc_counter_req cntr_req; + struct npc_mcam_alloc_counter_rsp cntr_rsp; + struct npc_mcam_alloc_entry_req entry_req; + struct npc_mcam_alloc_entry_rsp entry_rsp; + struct npc_mcam *mcam = &rvu->hw->mcam; + u16 entry = NPC_MCAM_ENTRY_INVALID; + u16 cntr = NPC_MCAM_ENTRY_INVALID; + int blkaddr, rc; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NPC_MCAM_INVALID_REQ; + + if (req->intf != NIX_INTF_RX && req->intf != NIX_INTF_TX) + return NPC_MCAM_INVALID_REQ; + + /* Try to allocate a MCAM entry */ + entry_req.hdr.pcifunc = req->hdr.pcifunc; + entry_req.contig = true; + entry_req.priority = req->priority; + entry_req.ref_entry = req->ref_entry; + entry_req.count = 1; + + rc = rvu_mbox_handler_npc_mcam_alloc_entry(rvu, + &entry_req, &entry_rsp); + if (rc) + return rc; + + if (!entry_rsp.count) + return NPC_MCAM_ALLOC_FAILED; + + entry = entry_rsp.entry; + + if (!req->alloc_cntr) + goto write_entry; + + /* Now allocate counter */ + cntr_req.hdr.pcifunc = req->hdr.pcifunc; + cntr_req.contig = true; + cntr_req.count = 1; + + rc = rvu_mbox_handler_npc_mcam_alloc_counter(rvu, &cntr_req, &cntr_rsp); + if (rc) { + /* Free allocated MCAM entry */ + mutex_lock(&mcam->lock); + mcam->entry2pfvf_map[entry] = 0; + npc_mcam_clear_bit(mcam, entry); + mutex_unlock(&mcam->lock); + return rc; + } + + cntr = cntr_rsp.cntr; + +write_entry: + mutex_lock(&mcam->lock); + npc_config_mcam_entry(rvu, mcam, blkaddr, entry, req->intf, + &req->entry_data, req->enable_entry); + + if (req->alloc_cntr) + npc_map_mcam_entry_and_cntr(rvu, mcam, blkaddr, entry, cntr); + mutex_unlock(&mcam->lock); + + rsp->entry = entry; + rsp->cntr = cntr; + + return 0; +} + +#define GET_KEX_CFG(intf) \ + rvu_read64(rvu, BLKADDR_NPC, NPC_AF_INTFX_KEX_CFG(intf)) + +#define GET_KEX_FLAGS(ld) \ + rvu_read64(rvu, BLKADDR_NPC, NPC_AF_KEX_LDATAX_FLAGS_CFG(ld)) + +#define GET_KEX_LD(intf, lid, lt, ld) \ + rvu_read64(rvu, BLKADDR_NPC, \ + NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, lt, ld)) + +#define GET_KEX_LDFLAGS(intf, ld, fl) \ + rvu_read64(rvu, BLKADDR_NPC, \ + NPC_AF_INTFX_LDATAX_FLAGSX_CFG(intf, ld, fl)) + +int rvu_mbox_handler_npc_get_kex_cfg(struct rvu *rvu, struct msg_req *req, + struct npc_get_kex_cfg_rsp *rsp) +{ + int lid, lt, ld, fl; + + rsp->rx_keyx_cfg = GET_KEX_CFG(NIX_INTF_RX); + rsp->tx_keyx_cfg = GET_KEX_CFG(NIX_INTF_TX); + for (lid = 0; lid < NPC_MAX_LID; lid++) { + for (lt = 0; lt < NPC_MAX_LT; lt++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + rsp->intf_lid_lt_ld[NIX_INTF_RX][lid][lt][ld] = + GET_KEX_LD(NIX_INTF_RX, lid, lt, ld); + rsp->intf_lid_lt_ld[NIX_INTF_TX][lid][lt][ld] = + GET_KEX_LD(NIX_INTF_TX, lid, lt, ld); + } + } + } + for (ld = 0; ld < NPC_MAX_LD; ld++) + rsp->kex_ld_flags[ld] = GET_KEX_FLAGS(ld); + + for (ld = 0; ld < NPC_MAX_LD; ld++) { + for (fl = 0; fl < NPC_MAX_LFL; fl++) { + rsp->intf_ld_flags[NIX_INTF_RX][ld][fl] = + GET_KEX_LDFLAGS(NIX_INTF_RX, ld, fl); + rsp->intf_ld_flags[NIX_INTF_TX][ld][fl] = + GET_KEX_LDFLAGS(NIX_INTF_TX, ld, fl); + } + } + memcpy(rsp->mkex_pfl_name, rvu->mkex_pfl_name, MKEX_NAME_LEN); + return 0; +} + +int rvu_npc_update_rxvlan(struct rvu *rvu, u16 pcifunc, int nixlf) +{ + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + struct npc_mcam *mcam = &rvu->hw->mcam; + int blkaddr, index; 
+ bool enable; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + if (!pfvf->rxvlan) + return 0; + + index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf, + NIXLF_UCAST_ENTRY); + pfvf->entry.action = npc_get_mcam_action(rvu, mcam, blkaddr, index); + enable = is_mcam_entry_enabled(rvu, mcam, blkaddr, index); + npc_config_mcam_entry(rvu, mcam, blkaddr, pfvf->rxvlan_index, + NIX_INTF_RX, &pfvf->entry, enable); + + return 0; } diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c index 9c08c3650c02..04fd1f135011 100644 --- a/drivers/net/ethernet/marvell/skge.c +++ b/drivers/net/ethernet/marvell/skge.c @@ -3732,19 +3732,7 @@ static int skge_debug_show(struct seq_file *seq, void *v) return 0; } - -static int skge_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, skge_debug_show, inode->i_private); -} - -static const struct file_operations skge_debug_fops = { - .owner = THIS_MODULE, - .open = skge_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(skge_debug); /* * Use network device events to create/remove/rename diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c index 697d9b374f5e..f3a5fa84860f 100644 --- a/drivers/net/ethernet/marvell/sky2.c +++ b/drivers/net/ethernet/marvell/sky2.c @@ -2485,13 +2485,11 @@ static struct sk_buff *receive_copy(struct sky2_port *sky2, skb->ip_summed = re->skb->ip_summed; skb->csum = re->skb->csum; skb_copy_hash(skb, re->skb); - skb->vlan_proto = re->skb->vlan_proto; - skb->vlan_tci = re->skb->vlan_tci; + __vlan_hwaccel_copy_tag(skb, re->skb); pci_dma_sync_single_for_device(sky2->hw->pdev, re->data_addr, length, PCI_DMA_FROMDEVICE); - re->skb->vlan_proto = 0; - re->skb->vlan_tci = 0; + __vlan_hwaccel_clear_tag(re->skb); skb_clear_hash(re->skb); re->skb->ip_summed = CHECKSUM_NONE; skb_put(skb, length); @@ -4623,19 +4621,7 @@ static int sky2_debug_show(struct seq_file *seq, void *v) napi_enable(&hw->napi); return 0; } - -static int sky2_debug_open(struct inode *inode, struct file *file) -{ - return single_open(file, sky2_debug_show, inode->i_private); -} - -static const struct file_operations sky2_debug_fops = { - .owner = THIS_MODULE, - .open = sky2_debug_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(sky2_debug); /* * Use network device events to create/remove/rename diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 7dbfdac4067a..399f565dd85a 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -243,7 +243,7 @@ static void mtk_phy_link_adjust(struct net_device *dev) if (dev->phydev->asym_pause) rmt_adv |= LPA_PAUSE_ASYM; - lcl_adv = ethtool_adv_to_lcl_adv_t(dev->phydev->advertising); + lcl_adv = linkmode_adv_to_lcl_adv_t(dev->phydev->advertising); flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv); if (flowctrl & FLOW_CTRL_TX) @@ -353,8 +353,9 @@ static int mtk_phy_connect(struct net_device *dev) phy_set_max_speed(dev->phydev, SPEED_1000); phy_support_asym_pause(dev->phydev); - dev->phydev->advertising = dev->phydev->supported | - ADVERTISED_Autoneg; + linkmode_copy(dev->phydev->advertising, dev->phydev->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + dev->phydev->advertising); phy_start_aneg(dev->phydev); of_node_put(np); diff --git 
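The skge and sky2 hunks above collapse hand-rolled single_open() boilerplate into DEFINE_SHOW_ATTRIBUTE(). Per its definition in include/linux/seq_file.h, the macro expects a <name>_show(struct seq_file *, void *) function and generates roughly what was deleted; a sketch of the expansion (renamed here to avoid colliding with the real macro):

#include <linux/fs.h>
#include <linux/seq_file.h>

#define DEFINE_SHOW_ATTRIBUTE_SKETCH(__name)				\
static int __name ## _open(struct inode *inode, struct file *file)	\
{									\
	return single_open(file, __name ## _show, inode->i_private);	\
}									\
									\
static const struct file_operations __name ## _fops = {		\
	.owner		= THIS_MODULE,					\
	.open		= __name ## _open,				\
	.read		= seq_read,					\
	.llseek		= seq_lseek,					\
	.release	= single_release,				\
}

So the conversion is purely boilerplate removal: the driver keeps only its _show function and the macro supplies the open handler and fops table.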
a/drivers/net/ethernet/mellanox/mlx4/cq.c b/drivers/net/ethernet/mellanox/mlx4/cq.c index d8e9a323122e..db909b6069b5 100644 --- a/drivers/net/ethernet/mellanox/mlx4/cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/cq.c @@ -144,9 +144,9 @@ void mlx4_cq_event(struct mlx4_dev *dev, u32 cqn, int event_type) } static int mlx4_SW2HW_CQ(struct mlx4_dev *dev, struct mlx4_cmd_mailbox *mailbox, - int cq_num) + int cq_num, u8 opmod) { - return mlx4_cmd(dev, mailbox->dma, cq_num, 0, + return mlx4_cmd(dev, mailbox->dma, cq_num, opmod, MLX4_CMD_SW2HW_CQ, MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED); } @@ -287,11 +287,61 @@ static void mlx4_cq_free_icm(struct mlx4_dev *dev, int cqn) __mlx4_cq_free_icm(dev, cqn); } +static int mlx4_init_user_cqes(void *buf, int entries, int cqe_size) +{ + int entries_per_copy = PAGE_SIZE / cqe_size; + void *init_ents; + int err = 0; + int i; + + init_ents = kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!init_ents) + return -ENOMEM; + + /* Populate a list of CQ entries to reduce the number of + * copy_to_user calls. 0xcc is the initialization value + * required by the FW. + */ + memset(init_ents, 0xcc, PAGE_SIZE); + + if (entries_per_copy < entries) { + for (i = 0; i < entries / entries_per_copy; i++) { + err = copy_to_user(buf, init_ents, PAGE_SIZE); + if (err) + goto out; + + buf += PAGE_SIZE; + } + } else { + err = copy_to_user(buf, init_ents, entries * cqe_size); + } + +out: + kfree(init_ents); + + return err; +} + +static void mlx4_init_kernel_cqes(struct mlx4_buf *buf, + int entries, + int cqe_size) +{ + int i; + + if (buf->nbufs == 1) + memset(buf->direct.buf, 0xcc, entries * cqe_size); + else + for (i = 0; i < buf->npages; i++) + memset(buf->page_list[i].buf, 0xcc, + 1UL << buf->page_shift); +} + int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, struct mlx4_mtt *mtt, struct mlx4_uar *uar, u64 db_rec, struct mlx4_cq *cq, unsigned vector, int collapsed, - int timestamp_en) + int timestamp_en, void *buf_addr, bool user_cq) { + bool sw_cq_init = dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_SW_CQ_INIT; struct mlx4_priv *priv = mlx4_priv(dev); struct mlx4_cq_table *cq_table = &priv->cq_table; struct mlx4_cmd_mailbox *mailbox; @@ -336,7 +386,20 @@ int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, cq_context->mtt_base_addr_l = cpu_to_be32(mtt_addr & 0xffffffff); cq_context->db_rec_addr = cpu_to_be64(db_rec); - err = mlx4_SW2HW_CQ(dev, mailbox, cq->cqn); + if (sw_cq_init) { + if (user_cq) { + err = mlx4_init_user_cqes(buf_addr, nent, + dev->caps.cqe_size); + if (err) + sw_cq_init = false; + } else { + mlx4_init_kernel_cqes(buf_addr, nent, + dev->caps.cqe_size); + } + } + + err = mlx4_SW2HW_CQ(dev, mailbox, cq->cqn, sw_cq_init); + mlx4_free_cmd_mailbox(dev, mailbox); if (err) goto err_radix; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c index 1e487acb4667..74d466796b7c 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c @@ -54,11 +54,8 @@ int mlx4_en_create_cq(struct mlx4_en_priv *priv, cq = kzalloc_node(sizeof(*cq), GFP_KERNEL, node); if (!cq) { - cq = kzalloc(sizeof(*cq), GFP_KERNEL); - if (!cq) { - en_err(priv, "Failed to allocate CQ structure\n"); - return -ENOMEM; - } + en_err(priv, "Failed to allocate CQ structure\n"); + return -ENOMEM; } cq->size = entries; @@ -143,7 +140,7 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq, cq->mcq.usage = MLX4_RES_USAGE_DRIVER; err = mlx4_cq_alloc(mdev->dev, cq->size, &cq->wqres.mtt, &mdev->priv_uar, cq->wqres.db.dma, 
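mlx4_init_user_cqes() above pre-fills user-mapped CQEs with the 0xcc pattern the FW requires, one page at a time, to bound the number of copy_to_user() calls. A sketch of the chunked fill; unlike the driver's loop, this version also handles a non-page-multiple tail, which the driver avoids by construction since both the entry count and CQE size are powers of two:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

static int poison_user_cqes(void __user *buf, size_t total)
{
	void *page;
	int err = 0;

	page = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* 0xcc is the CQE initialization value required by the FW */
	memset(page, 0xcc, PAGE_SIZE);

	while (total && !err) {
		size_t chunk = min_t(size_t, total, PAGE_SIZE);

		if (copy_to_user(buf, page, chunk))
			err = -EFAULT;
		buf += chunk;
		total -= chunk;
	}

	kfree(page);
	return err;
}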
&cq->mcq, - cq->vector, 0, timestamp_en); + cq->vector, 0, timestamp_en, &cq->wqres.buf, false); if (err) goto free_eq; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c index db00bf1c23f5..9a0881cb7f51 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c @@ -271,11 +271,8 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv, ring = kzalloc_node(sizeof(*ring), GFP_KERNEL, node); if (!ring) { - ring = kzalloc(sizeof(*ring), GFP_KERNEL); - if (!ring) { - en_err(priv, "Failed to allocate RX ring structure\n"); - return -ENOMEM; - } + en_err(priv, "Failed to allocate RX ring structure\n"); + return -ENOMEM; } ring->prod = 0; @@ -875,7 +872,7 @@ csum_none: skb->data_len = length; napi_gro_frags(&cq->napi); } else { - skb->vlan_tci = 0; + __vlan_hwaccel_clear_tag(skb); skb_clear_hash(skb); } next: diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c index 6f5153afcab4..2cbd2bd7c67c 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c @@ -57,11 +57,8 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv, ring = kzalloc_node(sizeof(*ring), GFP_KERNEL, node); if (!ring) { - ring = kzalloc(sizeof(*ring), GFP_KERNEL); - if (!ring) { - en_err(priv, "Failed allocating TX ring\n"); - return -ENOMEM; - } + en_err(priv, "Failed allocating TX ring\n"); + return -ENOMEM; } ring->size = size; diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c index babcfd9c0571..7df728f1e5b5 100644 --- a/drivers/net/ethernet/mellanox/mlx4/fw.c +++ b/drivers/net/ethernet/mellanox/mlx4/fw.c @@ -166,6 +166,7 @@ static void dump_dev_cap_flags2(struct mlx4_dev *dev, u64 flags) [37] = "sl to vl mapping table change event support", [38] = "user MAC support", [39] = "Report driver version to FW support", + [40] = "SW CQ initialization support", }; int i; @@ -1098,6 +1099,8 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap) dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_FSM; if (field32 & (1 << 21)) dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_80_VFS; + if (field32 & (1 << 23)) + dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_SW_CQ_INIT; for (i = 1; i <= dev_cap->num_ports; i++) { err = mlx4_QUERY_PORT(dev, i, dev_cap->port_cap + i); diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c index 6a046030e873..bdb8dd161923 100644 --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -63,7 +63,7 @@ struct workqueue_struct *mlx4_wq; #ifdef CONFIG_MLX4_DEBUG -int mlx4_debug_level = 0; +int mlx4_debug_level; /* 0 by default */ module_param_named(debug_level, mlx4_debug_level, int, 0644); MODULE_PARM_DESC(debug_level, "Enable debug tracing if > 0"); @@ -83,7 +83,7 @@ MODULE_PARM_DESC(msi_x, "0 - don't use MSI-X, 1 - use MSI-X, >1 - limit number o static uint8_t num_vfs[3] = {0, 0, 0}; static int num_vfs_argc; -module_param_array(num_vfs, byte , &num_vfs_argc, 0444); +module_param_array(num_vfs, byte, &num_vfs_argc, 0444); MODULE_PARM_DESC(num_vfs, "enable #num_vfs functions if num_vfs > 0\n" "num_vfs=port1,port2,port1+2"); @@ -313,7 +313,7 @@ int mlx4_check_port_params(struct mlx4_dev *dev, for (i = 0; i < dev->caps.num_ports - 1; i++) { if (port_type[i] != port_type[i + 1]) { mlx4_err(dev, "Only same port types supported on this HCA, aborting\n"); - return -EINVAL; + return -EOPNOTSUPP; } } } @@ -322,7 
+322,7 @@ int mlx4_check_port_params(struct mlx4_dev *dev, if (!(port_type[i] & dev->caps.supported_type[i+1])) { mlx4_err(dev, "Requested port type for port %d is not supported on this HCA\n", i + 1); - return -EINVAL; + return -EOPNOTSUPP; } } return 0; @@ -1188,8 +1188,7 @@ static int __set_port_type(struct mlx4_port_info *info, mlx4_err(mdev, "Requested port type for port %d is not supported on this HCA\n", info->port); - err = -EINVAL; - goto err_sup; + return -EOPNOTSUPP; } mlx4_stop_sense(mdev); @@ -1211,7 +1210,7 @@ static int __set_port_type(struct mlx4_port_info *info, for (i = 1; i <= mdev->caps.num_ports; i++) { if (mdev->caps.possible_type[i] == MLX4_PORT_TYPE_AUTO) { mdev->caps.possible_type[i] = mdev->caps.port_type[i]; - err = -EINVAL; + err = -EOPNOTSUPP; } } } @@ -1237,7 +1236,7 @@ static int __set_port_type(struct mlx4_port_info *info, out: mlx4_start_sense(mdev); mutex_unlock(&priv->port_mutex); -err_sup: + return err; } @@ -3252,7 +3251,7 @@ disable_sriov: free_mem: dev->persist->num_vfs = 0; kfree(dev->dev_vfs); - dev->dev_vfs = NULL; + dev->dev_vfs = NULL; return dev_flags & ~MLX4_FLAG_MASTER; } diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c index 31bd56727022..eb13d3618162 100644 --- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c @@ -4729,7 +4729,6 @@ static void rem_slave_srqs(struct mlx4_dev *dev, int slave) struct res_srq *tmp; int state; u64 in_param; - LIST_HEAD(tlist); int srqn; int err; @@ -4795,7 +4794,6 @@ static void rem_slave_cqs(struct mlx4_dev *dev, int slave) struct res_cq *tmp; int state; u64 in_param; - LIST_HEAD(tlist); int cqn; int err; @@ -4858,7 +4856,6 @@ static void rem_slave_mrs(struct mlx4_dev *dev, int slave) struct res_mpt *tmp; int state; u64 in_param; - LIST_HEAD(tlist); int mptn; int err; @@ -4926,7 +4923,6 @@ static void rem_slave_mtts(struct mlx4_dev *dev, int slave) struct res_mtt *mtt; struct res_mtt *tmp; int state; - LIST_HEAD(tlist); int base; int err; @@ -5115,7 +5111,6 @@ static void rem_slave_eqs(struct mlx4_dev *dev, int slave) struct res_eq *tmp; int err; int state; - LIST_HEAD(tlist); int eqn; err = move_all_busy(dev, slave, RES_EQ); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile index d324a3884462..9de9abacf7f6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile +++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile @@ -12,17 +12,17 @@ obj-$(CONFIG_MLX5_CORE) += mlx5_core.o # mlx5 core basic # mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \ - health.o mcg.o cq.o srq.o alloc.o qp.o port.o mr.o pd.o \ + health.o mcg.o cq.o alloc.o qp.o port.o mr.o pd.o \ mad.o transobj.o vport.o sriov.o fs_cmd.o fs_core.o \ - fs_counters.o rl.o lag.o dev.o wq.o lib/gid.o \ - diag/fs_tracepoint.o diag/fw_tracer.o + fs_counters.o rl.o lag.o dev.o events.o wq.o lib/gid.o \ + lib/devcom.o diag/fs_tracepoint.o diag/fw_tracer.o # # Netdev basic # mlx5_core-$(CONFIG_MLX5_CORE_EN) += en_main.o en_common.o en_fs.o en_ethtool.o \ en_tx.o en_rx.o en_dim.o en_txrx.o en/xdp.o en_stats.o \ - en_selftest.o en/port.o + en_selftest.o en/port.o en/monitor_stats.o # # Netdev extra @@ -30,7 +30,7 @@ mlx5_core-$(CONFIG_MLX5_CORE_EN) += en_main.o en_common.o en_fs.o en_ethtool.o \ mlx5_core-$(CONFIG_MLX5_EN_ARFS) += en_arfs.o mlx5_core-$(CONFIG_MLX5_EN_RXNFC) += en_fs_ethtool.o mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o 
en/port_buffer.o -mlx5_core-$(CONFIG_MLX5_ESWITCH) += en_rep.o en_tc.o +mlx5_core-$(CONFIG_MLX5_ESWITCH) += en_rep.o en_tc.o en/tc_tun.o # # Core extra diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c index a5a0823e5ada..d3125cdf69db 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c @@ -40,9 +40,11 @@ #include #include #include +#include #include #include "mlx5_core.h" +#include "lib/eq.h" enum { CMD_IF_REV = 5, @@ -313,6 +315,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op, case MLX5_CMD_OP_FPGA_DESTROY_QP: case MLX5_CMD_OP_DESTROY_GENERAL_OBJECT: case MLX5_CMD_OP_DEALLOC_MEMIC: + case MLX5_CMD_OP_PAGE_FAULT_RESUME: return MLX5_CMD_STAT_OK; case MLX5_CMD_OP_QUERY_HCA_CAP: @@ -326,7 +329,6 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op, case MLX5_CMD_OP_CREATE_MKEY: case MLX5_CMD_OP_QUERY_MKEY: case MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS: - case MLX5_CMD_OP_PAGE_FAULT_RESUME: case MLX5_CMD_OP_CREATE_EQ: case MLX5_CMD_OP_QUERY_EQ: case MLX5_CMD_OP_GEN_EQE: @@ -371,6 +373,8 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op, case MLX5_CMD_OP_QUERY_VPORT_COUNTER: case MLX5_CMD_OP_ALLOC_Q_COUNTER: case MLX5_CMD_OP_QUERY_Q_COUNTER: + case MLX5_CMD_OP_SET_MONITOR_COUNTER: + case MLX5_CMD_OP_ARM_MONITOR_COUNTER: case MLX5_CMD_OP_SET_PP_RATE_LIMIT: case MLX5_CMD_OP_QUERY_RATE_LIMIT: case MLX5_CMD_OP_CREATE_SCHEDULING_ELEMENT: @@ -520,6 +524,8 @@ const char *mlx5_command_str(int command) MLX5_COMMAND_STR_CASE(ALLOC_Q_COUNTER); MLX5_COMMAND_STR_CASE(DEALLOC_Q_COUNTER); MLX5_COMMAND_STR_CASE(QUERY_Q_COUNTER); + MLX5_COMMAND_STR_CASE(SET_MONITOR_COUNTER); + MLX5_COMMAND_STR_CASE(ARM_MONITOR_COUNTER); MLX5_COMMAND_STR_CASE(SET_PP_RATE_LIMIT); MLX5_COMMAND_STR_CASE(QUERY_RATE_LIMIT); MLX5_COMMAND_STR_CASE(CREATE_SCHEDULING_ELEMENT); @@ -805,6 +811,8 @@ static u16 msg_to_opcode(struct mlx5_cmd_msg *in) return MLX5_GET(mbox_in, in->first.data, opcode); } +static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced); + static void cb_timeout_handler(struct work_struct *work) { struct delayed_work *dwork = container_of(work, struct delayed_work, @@ -1412,14 +1420,32 @@ static void mlx5_cmd_change_mod(struct mlx5_core_dev *dev, int mode) up(&cmd->sem); } +static int cmd_comp_notifier(struct notifier_block *nb, + unsigned long type, void *data) +{ + struct mlx5_core_dev *dev; + struct mlx5_cmd *cmd; + struct mlx5_eqe *eqe; + + cmd = mlx5_nb_cof(nb, struct mlx5_cmd, nb); + dev = container_of(cmd, struct mlx5_core_dev, cmd); + eqe = data; + + mlx5_cmd_comp_handler(dev, be32_to_cpu(eqe->data.cmd.vector), false); + + return NOTIFY_OK; +} void mlx5_cmd_use_events(struct mlx5_core_dev *dev) { + MLX5_NB_INIT(&dev->cmd.nb, cmd_comp_notifier, CMD); + mlx5_eq_notifier_register(dev, &dev->cmd.nb); mlx5_cmd_change_mod(dev, CMD_MODE_EVENTS); } void mlx5_cmd_use_polling(struct mlx5_core_dev *dev) { mlx5_cmd_change_mod(dev, CMD_MODE_POLLING); + mlx5_eq_notifier_unregister(dev, &dev->cmd.nb); } static void free_msg(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *msg) @@ -1435,7 +1461,7 @@ static void free_msg(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *msg) } } -void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced) +static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced) { struct mlx5_cmd *cmd = &dev->cmd; struct mlx5_cmd_work_ent *ent; @@ -1533,7 +1559,29 
@@ void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced) } } } -EXPORT_SYMBOL(mlx5_cmd_comp_handler); + +void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev) +{ + unsigned long flags; + u64 vector; + + /* wait for pending handlers to complete */ + mlx5_eq_synchronize_cmd_irq(dev); + spin_lock_irqsave(&dev->cmd.alloc_lock, flags); + vector = ~dev->cmd.bitmask & ((1ul << (1 << dev->cmd.log_sz)) - 1); + if (!vector) + goto no_trig; + + vector |= MLX5_TRIGGERED_CMD_COMP; + spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags); + + mlx5_core_dbg(dev, "vector 0x%llx\n", vector); + mlx5_cmd_comp_handler(dev, vector, true); + return; + +no_trig: + spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags); +} static int status_to_err(u8 status) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c index 4b85abb5c9f7..713a17ee3751 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c @@ -38,6 +38,7 @@ #include #include #include "mlx5_core.h" +#include "lib/eq.h" #define TASKLET_MAX_TIME 2 #define TASKLET_MAX_TIME_JIFFIES msecs_to_jiffies(TASKLET_MAX_TIME) @@ -92,10 +93,10 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq, u32 dout[MLX5_ST_SZ_DW(destroy_cq_out)]; u32 out[MLX5_ST_SZ_DW(create_cq_out)]; u32 din[MLX5_ST_SZ_DW(destroy_cq_in)]; - struct mlx5_eq *eq; + struct mlx5_eq_comp *eq; int err; - eq = mlx5_eqn2eq(dev, eqn); + eq = mlx5_eqn2comp_eq(dev, eqn); if (IS_ERR(eq)) return PTR_ERR(eq); @@ -119,12 +120,12 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq, INIT_LIST_HEAD(&cq->tasklet_ctx.list); /* Add to comp EQ CQ tree to recv comp events */ - err = mlx5_eq_add_cq(eq, cq); + err = mlx5_eq_add_cq(&eq->core, cq); if (err) goto err_cmd; /* Add to async EQ CQ tree to recv async events */ - err = mlx5_eq_add_cq(&dev->priv.eq_table.async_eq, cq); + err = mlx5_eq_add_cq(mlx5_get_async_eq(dev), cq); if (err) goto err_cq_add; @@ -139,7 +140,7 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq, return 0; err_cq_add: - mlx5_eq_del_cq(eq, cq); + mlx5_eq_del_cq(&eq->core, cq); err_cmd: memset(din, 0, sizeof(din)); memset(dout, 0, sizeof(dout)); @@ -157,11 +158,11 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq) u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {0}; int err; - err = mlx5_eq_del_cq(&dev->priv.eq_table.async_eq, cq); + err = mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq); if (err) return err; - err = mlx5_eq_del_cq(cq->eq, cq); + err = mlx5_eq_del_cq(&cq->eq->core, cq); if (err) return err; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c index 90fabd612b6c..a11e22d0b0cc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c @@ -36,6 +36,7 @@ #include #include #include "mlx5_core.h" +#include "lib/eq.h" enum { QP_PID, @@ -349,6 +350,16 @@ out: return param; } +static int mlx5_core_eq_query(struct mlx5_core_dev *dev, struct mlx5_eq *eq, + u32 *out, int outlen) +{ + u32 in[MLX5_ST_SZ_DW(query_eq_in)] = {}; + + MLX5_SET(query_eq_in, in, opcode, MLX5_CMD_OP_QUERY_EQ); + MLX5_SET(query_eq_in, in, eq_number, eq->eqn); + return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen); +} + static u64 eq_read_field(struct mlx5_core_dev *dev, struct mlx5_eq *eq, int index) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c 
b/drivers/net/ethernet/mellanox/mlx5/core/dev.c index 37ba7c78859d..ebc046fa97d3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c @@ -45,75 +45,11 @@ struct mlx5_device_context { unsigned long state; }; -struct mlx5_delayed_event { - struct list_head list; - struct mlx5_core_dev *dev; - enum mlx5_dev_event event; - unsigned long param; -}; - enum { MLX5_INTERFACE_ADDED, MLX5_INTERFACE_ATTACHED, }; -static void add_delayed_event(struct mlx5_priv *priv, - struct mlx5_core_dev *dev, - enum mlx5_dev_event event, - unsigned long param) -{ - struct mlx5_delayed_event *delayed_event; - - delayed_event = kzalloc(sizeof(*delayed_event), GFP_ATOMIC); - if (!delayed_event) { - mlx5_core_err(dev, "event %d is missed\n", event); - return; - } - - mlx5_core_dbg(dev, "Accumulating event %d\n", event); - delayed_event->dev = dev; - delayed_event->event = event; - delayed_event->param = param; - list_add_tail(&delayed_event->list, &priv->waiting_events_list); -} - -static void delayed_event_release(struct mlx5_device_context *dev_ctx, - struct mlx5_priv *priv) -{ - struct mlx5_core_dev *dev = container_of(priv, struct mlx5_core_dev, priv); - struct mlx5_delayed_event *de; - struct mlx5_delayed_event *n; - struct list_head temp; - - INIT_LIST_HEAD(&temp); - - spin_lock_irq(&priv->ctx_lock); - - priv->is_accum_events = false; - list_splice_init(&priv->waiting_events_list, &temp); - if (!dev_ctx->context) - goto out; - list_for_each_entry_safe(de, n, &temp, list) - dev_ctx->intf->event(dev, dev_ctx->context, de->event, de->param); - -out: - spin_unlock_irq(&priv->ctx_lock); - - list_for_each_entry_safe(de, n, &temp, list) { - list_del(&de->list); - kfree(de); - } -} - -/* accumulating events that can come after mlx5_ib calls to - * ib_register_device, till adding that interface to the events list. 
- */ -static void delayed_event_start(struct mlx5_priv *priv) -{ - spin_lock_irq(&priv->ctx_lock); - priv->is_accum_events = true; - spin_unlock_irq(&priv->ctx_lock); -} void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv) { @@ -129,8 +65,6 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv) dev_ctx->intf = intf; - delayed_event_start(priv); - dev_ctx->context = intf->add(dev); if (dev_ctx->context) { set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state); @@ -139,22 +73,9 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv) spin_lock_irq(&priv->ctx_lock); list_add_tail(&dev_ctx->list, &priv->ctx_list); - -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - if (dev_ctx->intf->pfault) { - if (priv->pfault) { - mlx5_core_err(dev, "multiple page fault handlers not supported"); - } else { - priv->pfault_ctx = dev_ctx->context; - priv->pfault = dev_ctx->intf->pfault; - } - } -#endif spin_unlock_irq(&priv->ctx_lock); } - delayed_event_release(dev_ctx, priv); - if (!dev_ctx->context) kfree(dev_ctx); } @@ -179,15 +100,6 @@ void mlx5_remove_device(struct mlx5_interface *intf, struct mlx5_priv *priv) if (!dev_ctx) return; -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - spin_lock_irq(&priv->ctx_lock); - if (priv->pfault == dev_ctx->intf->pfault) - priv->pfault = NULL; - spin_unlock_irq(&priv->ctx_lock); - - synchronize_srcu(&priv->pfault_srcu); -#endif - spin_lock_irq(&priv->ctx_lock); list_del(&dev_ctx->list); spin_unlock_irq(&priv->ctx_lock); @@ -207,26 +119,20 @@ static void mlx5_attach_interface(struct mlx5_interface *intf, struct mlx5_priv if (!dev_ctx) return; - delayed_event_start(priv); if (intf->attach) { if (test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state)) - goto out; + return; if (intf->attach(dev, dev_ctx->context)) - goto out; - + return; set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state); } else { if (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state)) - goto out; + return; dev_ctx->context = intf->add(dev); if (!dev_ctx->context) - goto out; - + return; set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state); } - -out: - delayed_event_release(dev_ctx, priv); } void mlx5_attach_device(struct mlx5_core_dev *dev) @@ -350,28 +256,6 @@ void mlx5_reload_interface(struct mlx5_core_dev *mdev, int protocol) mutex_unlock(&mlx5_intf_mutex); } -void *mlx5_get_protocol_dev(struct mlx5_core_dev *mdev, int protocol) -{ - struct mlx5_priv *priv = &mdev->priv; - struct mlx5_device_context *dev_ctx; - unsigned long flags; - void *result = NULL; - - spin_lock_irqsave(&priv->ctx_lock, flags); - - list_for_each_entry(dev_ctx, &mdev->priv.ctx_list, list) - if ((dev_ctx->intf->protocol == protocol) && - dev_ctx->intf->get_dev) { - result = dev_ctx->intf->get_dev(dev_ctx->context); - break; - } - - spin_unlock_irqrestore(&priv->ctx_lock, flags); - - return result; -} -EXPORT_SYMBOL(mlx5_get_protocol_dev); - /* Must be called with intf_mutex held */ void mlx5_add_dev_by_protocol(struct mlx5_core_dev *dev, int protocol) { @@ -422,44 +306,6 @@ struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev) return res; } -void mlx5_core_event(struct mlx5_core_dev *dev, enum mlx5_dev_event event, - unsigned long param) -{ - struct mlx5_priv *priv = &dev->priv; - struct mlx5_device_context *dev_ctx; - unsigned long flags; - - spin_lock_irqsave(&priv->ctx_lock, flags); - - if (priv->is_accum_events) - add_delayed_event(priv, dev, event, param); - - /* After mlx5_detach_device, the dev_ctx->intf is still set and dev_ctx is - * still in priv->ctx_list. 
In this case, only notify the dev_ctx if its - * ADDED or ATTACHED bit are set. - */ - list_for_each_entry(dev_ctx, &priv->ctx_list, list) - if (dev_ctx->intf->event && - (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state) || - test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state))) - dev_ctx->intf->event(dev, dev_ctx->context, event, param); - - spin_unlock_irqrestore(&priv->ctx_lock, flags); -} - -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING -void mlx5_core_page_fault(struct mlx5_core_dev *dev, - struct mlx5_pagefault *pfault) -{ - struct mlx5_priv *priv = &dev->priv; - int srcu_idx; - - srcu_idx = srcu_read_lock(&priv->pfault_srcu); - if (priv->pfault) - priv->pfault(dev, priv->pfault_ctx, pfault); - srcu_read_unlock(&priv->pfault_srcu, srcu_idx); -} -#endif void mlx5_dev_list_lock(void) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c index 0f11fff32a9b..424457ff9759 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.c @@ -161,10 +161,10 @@ static void print_misc_parameters_hdrs(struct trace_seq *p, PRINT_MASKED_VAL(name, p, format); \ } DECLARE_MASK_VAL(u64, gre_key) = { - .m = MLX5_GET(fte_match_set_misc, mask, gre_key_h) << 8 | - MLX5_GET(fte_match_set_misc, mask, gre_key_l), - .v = MLX5_GET(fte_match_set_misc, value, gre_key_h) << 8 | - MLX5_GET(fte_match_set_misc, value, gre_key_l)}; + .m = MLX5_GET(fte_match_set_misc, mask, gre_key.nvgre.hi) << 8 | + MLX5_GET(fte_match_set_misc, mask, gre_key.nvgre.lo), + .v = MLX5_GET(fte_match_set_misc, value, gre_key.nvgre.hi) << 8 | + MLX5_GET(fte_match_set_misc, value, gre_key.nvgre.lo)}; PRINT_MASKED_VAL(gre_key, p, "%llu"); PRINT_MASKED_VAL_MISC(u32, source_sqn, source_sqn, p, "%u"); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c index d4ec93bde4de..6999f4486e9e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c @@ -30,6 +30,7 @@ * SOFTWARE. */ #define CREATE_TRACE_POINTS +#include "lib/eq.h" #include "fw_tracer.h" #include "fw_tracer_tracepoint.h" @@ -846,9 +847,9 @@ free_tracer: return ERR_PTR(err); } -/* Create HW resources + start tracer - * must be called before Async EQ is created - */ +static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void *data); + +/* Create HW resources + start tracer */ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer) { struct mlx5_core_dev *dev; @@ -874,6 +875,9 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer) goto err_dealloc_pd; } + MLX5_NB_INIT(&tracer->nb, fw_tracer_event, DEVICE_TRACER); + mlx5_eq_notifier_register(dev, &tracer->nb); + mlx5_fw_tracer_start(tracer); return 0; @@ -883,9 +887,7 @@ err_dealloc_pd: return err; } -/* Stop tracer + Cleanup HW resources - * must be called after Async EQ is destroyed - */ +/* Stop tracer + Cleanup HW resources */ void mlx5_fw_tracer_cleanup(struct mlx5_fw_tracer *tracer) { if (IS_ERR_OR_NULL(tracer)) @@ -893,7 +895,7 @@ void mlx5_fw_tracer_cleanup(struct mlx5_fw_tracer *tracer) mlx5_core_dbg(tracer->dev, "FWTracer: Cleanup, is owner ? 
(%d)\n", tracer->owner); - + mlx5_eq_notifier_unregister(tracer->dev, &tracer->nb); cancel_work_sync(&tracer->ownership_change_work); cancel_work_sync(&tracer->handle_traces_work); @@ -922,12 +924,11 @@ void mlx5_fw_tracer_destroy(struct mlx5_fw_tracer *tracer) kfree(tracer); } -void mlx5_fw_tracer_event(struct mlx5_core_dev *dev, struct mlx5_eqe *eqe) +static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void *data) { - struct mlx5_fw_tracer *tracer = dev->tracer; - - if (!tracer) - return; + struct mlx5_fw_tracer *tracer = mlx5_nb_cof(nb, struct mlx5_fw_tracer, nb); + struct mlx5_core_dev *dev = tracer->dev; + struct mlx5_eqe *eqe = data; switch (eqe->sub_type) { case MLX5_TRACER_SUBTYPE_OWNERSHIP_CHANGE: @@ -942,6 +943,8 @@ void mlx5_fw_tracer_event(struct mlx5_core_dev *dev, struct mlx5_eqe *eqe) mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n", eqe->sub_type); } + + return NOTIFY_OK; } EXPORT_TRACEPOINT_SYMBOL(mlx5_fw); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h index 0347f2dd5cee..a8b8747f2b61 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h @@ -55,6 +55,7 @@ struct mlx5_fw_tracer { struct mlx5_core_dev *dev; + struct mlx5_nb nb; bool owner; u8 trc_ver; struct workqueue_struct *work_queue; @@ -170,6 +171,5 @@ struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev); int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer); void mlx5_fw_tracer_cleanup(struct mlx5_fw_tracer *tracer); void mlx5_fw_tracer_destroy(struct mlx5_fw_tracer *tracer); -void mlx5_fw_tracer_event(struct mlx5_core_dev *dev, struct mlx5_eqe *eqe); #endif diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 118324802926..8fa8fdd30b85 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -49,6 +49,7 @@ #include #include #include +#include #include "wq.h" #include "mlx5_core.h" #include "en_stats.h" @@ -147,9 +148,6 @@ struct page_pool; MLX5_UMR_MTT_ALIGNMENT)) #define MLX5E_UMR_WQEBBS \ (DIV_ROUND_UP(MLX5E_UMR_WQE_INLINE_SZ, MLX5_SEND_WQE_BB)) -#define MLX5E_ICOSQ_MAX_WQEBBS MLX5E_UMR_WQEBBS - -#define MLX5E_NUM_MAIN_GROUPS 9 #define MLX5E_MSG_LEVEL NETIF_MSG_LINK @@ -178,8 +176,7 @@ static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev) { return is_kdump_kernel() ? 
MLX5E_MIN_NUM_CHANNELS : - min_t(int, mdev->priv.eq_table.num_comp_vectors, - MLX5E_MAX_NUM_CHANNELS); + min_t(int, mlx5_comp_vectors_count(mdev), MLX5E_MAX_NUM_CHANNELS); } /* Use this function to get max num channels after netdev was created */ @@ -214,22 +211,24 @@ struct mlx5e_umr_wqe { extern const char mlx5e_self_tests[][ETH_GSTRING_LEN]; enum mlx5e_priv_flag { - MLX5E_PFLAG_RX_CQE_BASED_MODER = (1 << 0), - MLX5E_PFLAG_TX_CQE_BASED_MODER = (1 << 1), - MLX5E_PFLAG_RX_CQE_COMPRESS = (1 << 2), - MLX5E_PFLAG_RX_STRIDING_RQ = (1 << 3), - MLX5E_PFLAG_RX_NO_CSUM_COMPLETE = (1 << 4), + MLX5E_PFLAG_RX_CQE_BASED_MODER, + MLX5E_PFLAG_TX_CQE_BASED_MODER, + MLX5E_PFLAG_RX_CQE_COMPRESS, + MLX5E_PFLAG_RX_STRIDING_RQ, + MLX5E_PFLAG_RX_NO_CSUM_COMPLETE, + MLX5E_PFLAG_XDP_TX_MPWQE, + MLX5E_NUM_PFLAGS, /* Keep last */ }; #define MLX5E_SET_PFLAG(params, pflag, enable) \ do { \ if (enable) \ - (params)->pflags |= (pflag); \ + (params)->pflags |= BIT(pflag); \ else \ - (params)->pflags &= ~(pflag); \ + (params)->pflags &= ~(BIT(pflag)); \ } while (0) -#define MLX5E_GET_PFLAG(params, pflag) (!!((params)->pflags & (pflag))) +#define MLX5E_GET_PFLAG(params, pflag) (!!((params)->pflags & (BIT(pflag)))) #ifdef CONFIG_MLX5_CORE_EN_DCB #define MLX5E_MAX_BW_ALLOC 100 /* Max percentage of BW allocation */ @@ -247,9 +246,6 @@ struct mlx5e_params { bool lro_en; u32 lro_wqe_sz; u8 tx_min_inline_mode; - u8 rss_hfunc; - u8 toeplitz_hash_key[40]; - u32 indirection_rqt[MLX5E_INDIR_RQT_SIZE]; bool vlan_strip_disable; bool scatter_fcs_en; bool rx_dim_enabled; @@ -349,7 +345,6 @@ enum { MLX5E_SQ_STATE_IPSEC, MLX5E_SQ_STATE_AM, MLX5E_SQ_STATE_TLS, - MLX5E_SQ_STATE_REDIRECT, }; struct mlx5e_sq_wqe_info { @@ -410,24 +405,51 @@ struct mlx5e_xdp_info { struct mlx5e_dma_info di; }; +struct mlx5e_xdp_info_fifo { + struct mlx5e_xdp_info *xi; + u32 *cc; + u32 *pc; + u32 mask; +}; + +struct mlx5e_xdp_wqe_info { + u8 num_wqebbs; + u8 num_ds; +}; + +struct mlx5e_xdp_mpwqe { + /* Current MPWQE session */ + struct mlx5e_tx_wqe *wqe; + u8 ds_count; + u8 max_ds_count; +}; + +struct mlx5e_xdpsq; +typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq*, + struct mlx5e_xdp_info*); struct mlx5e_xdpsq { /* data path */ /* dirtied @completion */ + u32 xdpi_fifo_cc; u16 cc; bool redirect_flush; /* dirtied @xmit */ - u16 pc ____cacheline_aligned_in_smp; - bool doorbell; + u32 xdpi_fifo_pc ____cacheline_aligned_in_smp; + u16 pc; + struct mlx5_wqe_ctrl_seg *doorbell_cseg; + struct mlx5e_xdp_mpwqe mpwqe; struct mlx5e_cq cq; /* read only */ struct mlx5_wq_cyc wq; struct mlx5e_xdpsq_stats *stats; + mlx5e_fp_xmit_xdp_frame xmit_xdp_frame; struct { - struct mlx5e_xdp_info *xdpi; + struct mlx5e_xdp_wqe_info *wqe_info; + struct mlx5e_xdp_info_fifo xdpi_fifo; } db; void __iomem *uar_map; u32 sqn; @@ -633,7 +655,6 @@ struct mlx5e_channel_stats { } ____cacheline_aligned_in_smp; enum { - MLX5E_STATE_ASYNC_EVENTS_ENABLED, MLX5E_STATE_OPENED, MLX5E_STATE_DESTROYING, }; @@ -654,6 +675,13 @@ enum { MLX5E_NIC_PRIO }; +struct mlx5e_rss_params { + u32 indirection_rqt[MLX5E_INDIR_RQT_SIZE]; + u32 rx_hash_fields[MLX5E_NUM_INDIR_TIRS]; + u8 toeplitz_hash_key[40]; + u8 hfunc; +}; + struct mlx5e_priv { /* priv data path fields - start */ struct mlx5e_txqsq *txq2sq[MLX5E_MAX_NUM_CHANNELS * MLX5E_MAX_NUM_TC]; @@ -674,6 +702,7 @@ struct mlx5e_priv { struct mlx5e_tir indir_tir[MLX5E_NUM_INDIR_TIRS]; struct mlx5e_tir inner_indir_tir[MLX5E_NUM_INDIR_TIRS]; struct mlx5e_tir direct_tir[MLX5E_MAX_NUM_CHANNELS]; + struct mlx5e_rss_params rss_params; u32 
tx_rates[MLX5E_MAX_NUM_SQS]; struct mlx5e_flow_steering fs; @@ -683,6 +712,8 @@ struct mlx5e_priv { struct work_struct set_rx_mode_work; struct work_struct tx_timeout_work; struct work_struct update_stats_work; + struct work_struct monitor_counters_work; + struct mlx5_nb monitor_counters_nb; struct mlx5_core_dev *mdev; struct net_device *netdev; @@ -692,6 +723,8 @@ struct mlx5e_priv { struct hwtstamp_config tstamp; u16 q_counter; u16 drop_rq_q_counter; + struct notifier_block events_nb; + #ifdef CONFIG_MLX5_CORE_EN_DCB struct mlx5e_dcbx dcbx; #endif @@ -769,6 +802,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe, struct mlx5e_wqe_frag_info *wi, u32 cqe_bcnt); void mlx5e_update_stats(struct mlx5e_priv *priv); +void mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats); void mlx5e_init_l2_addr(struct mlx5e_priv *priv); int mlx5e_self_test_num(struct mlx5e_priv *priv); @@ -799,9 +833,11 @@ struct mlx5e_redirect_rqt_param { int mlx5e_redirect_rqt(struct mlx5e_priv *priv, u32 rqtn, int sz, struct mlx5e_redirect_rqt_param rrp); -void mlx5e_build_indir_tir_ctx_hash(struct mlx5e_params *params, - enum mlx5e_traffic_types tt, +void mlx5e_build_indir_tir_ctx_hash(struct mlx5e_rss_params *rss_params, + const struct mlx5e_tirc_config *ttconfig, void *tirc, bool inner); +void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen); +struct mlx5e_tirc_config mlx5e_tirc_get_default_config(enum mlx5e_traffic_types tt); int mlx5e_open_locked(struct net_device *netdev); int mlx5e_close_locked(struct net_device *netdev); @@ -931,14 +967,16 @@ int mlx5e_create_tis(struct mlx5_core_dev *mdev, int tc, void mlx5e_destroy_tis(struct mlx5_core_dev *mdev, u32 tisn); int mlx5e_create_tises(struct mlx5e_priv *priv); -void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv); +void mlx5e_update_carrier(struct mlx5e_priv *priv); int mlx5e_close(struct net_device *netdev); int mlx5e_open(struct net_device *netdev); +void mlx5e_update_ndo_stats(struct mlx5e_priv *priv); void mlx5e_queue_update_stats(struct mlx5e_priv *priv); int mlx5e_bits_invert(unsigned long a, int size); typedef int (*change_hw_mtu_cb)(struct mlx5e_priv *priv); +int mlx5e_set_dev_port_mtu(struct mlx5e_priv *priv); int mlx5e_change_mtu(struct net_device *netdev, int new_mtu, change_hw_mtu_cb set_mtu_cb); @@ -962,12 +1000,20 @@ int mlx5e_ethtool_get_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal); int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal); +int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv, + struct ethtool_link_ksettings *link_ksettings); +int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv, + const struct ethtool_link_ksettings *link_ksettings); u32 mlx5e_ethtool_get_rxfh_key_size(struct mlx5e_priv *priv); u32 mlx5e_ethtool_get_rxfh_indir_size(struct mlx5e_priv *priv); int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv, struct ethtool_ts_info *info); int mlx5e_ethtool_flash_device(struct mlx5e_priv *priv, struct ethtool_flash *flash); +void mlx5e_ethtool_get_pauseparam(struct mlx5e_priv *priv, + struct ethtool_pauseparam *pauseparam); +int mlx5e_ethtool_set_pauseparam(struct mlx5e_priv *priv, + struct ethtool_pauseparam *pauseparam); /* mlx5e generic netdev management API */ int mlx5e_netdev_init(struct net_device *netdev, @@ -983,12 +1029,26 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv); void mlx5e_detach_netdev(struct mlx5e_priv *priv); void mlx5e_destroy_netdev(struct mlx5e_priv *priv); void 
mlx5e_build_nic_params(struct mlx5_core_dev *mdev, + struct mlx5e_rss_params *rss_params, struct mlx5e_params *params, u16 max_channels, u16 mtu); void mlx5e_build_rq_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params); -void mlx5e_build_rss_params(struct mlx5e_params *params); +void mlx5e_build_rss_params(struct mlx5e_rss_params *rss_params, + u16 num_channels); u8 mlx5e_params_calculate_tx_min_inline(struct mlx5_core_dev *mdev); void mlx5e_rx_dim_work(struct work_struct *work); void mlx5e_tx_dim_work(struct work_struct *work); + +void mlx5e_add_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti); +void mlx5e_del_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti); +netdev_features_t mlx5e_features_check(struct sk_buff *skb, + struct net_device *netdev, + netdev_features_t features); +#ifdef CONFIG_MLX5_ESWITCH +int mlx5e_set_vf_mac(struct net_device *dev, int vf, u8 *mac); +int mlx5e_set_vf_rate(struct net_device *dev, int vf, int min_tx_rate, int max_tx_rate); +int mlx5e_get_vf_config(struct net_device *dev, int vf, struct ifla_vf_info *ivi); +int mlx5e_get_vf_stats(struct net_device *dev, int vf, struct ifla_vf_stats *vf_stats); +#endif #endif /* __MLX5_EN_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h index 1431232c9a09..be5961ff24cc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h @@ -73,6 +73,22 @@ enum mlx5e_traffic_types { MLX5E_NUM_INDIR_TIRS = MLX5E_TT_ANY, }; +struct mlx5e_tirc_config { + u8 l3_prot_type; + u8 l4_prot_type; + u32 rx_hash_fields; +}; + +#define MLX5_HASH_IP (MLX5_HASH_FIELD_SEL_SRC_IP |\ + MLX5_HASH_FIELD_SEL_DST_IP) +#define MLX5_HASH_IP_L4PORTS (MLX5_HASH_FIELD_SEL_SRC_IP |\ + MLX5_HASH_FIELD_SEL_DST_IP |\ + MLX5_HASH_FIELD_SEL_L4_SPORT |\ + MLX5_HASH_FIELD_SEL_L4_DPORT) +#define MLX5_HASH_IP_IPSEC_SPI (MLX5_HASH_FIELD_SEL_SRC_IP |\ + MLX5_HASH_FIELD_SEL_DST_IP |\ + MLX5_HASH_FIELD_SEL_IPSEC_SPI) + enum mlx5e_tunnel_types { MLX5E_TT_IPV4_GRE, MLX5E_TT_IPV6_GRE, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.c new file mode 100644 index 000000000000..2ce420851e77 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.c @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2018 Mellanox Technologies. 
*/ + +#include "en.h" +#include "monitor_stats.h" +#include "lib/eq.h" + +/* Driver will set the following watch counters list: + * Ppcnt.802_3: + * a_in_range_length_errors Type: 0x0, Counter: 0x0, group_id = N/A + * a_out_of_range_length_field Type: 0x0, Counter: 0x1, group_id = N/A + * a_frame_too_long_errors Type: 0x0, Counter: 0x2, group_id = N/A + * a_frame_check_sequence_errors Type: 0x0, Counter: 0x3, group_id = N/A + * a_alignment_errors Type: 0x0, Counter: 0x4, group_id = N/A + * if_out_discards Type: 0x0, Counter: 0x5, group_id = N/A + * Q_Counters: + * Q[index].rx_out_of_buffer Type: 0x1, Counter: 0x4, group_id = counter_ix + */ + +#define NUM_REQ_PPCNT_COUNTER_S1 MLX5_CMD_SET_MONITOR_NUM_PPCNT_COUNTER_SET1 +#define NUM_REQ_Q_COUNTERS_S1 MLX5_CMD_SET_MONITOR_NUM_Q_COUNTERS_SET1 + +int mlx5e_monitor_counter_supported(struct mlx5e_priv *priv) +{ + struct mlx5_core_dev *mdev = priv->mdev; + + if (!MLX5_CAP_GEN(mdev, max_num_of_monitor_counters)) + return false; + if (MLX5_CAP_PCAM_REG(mdev, ppcnt) && + MLX5_CAP_GEN(mdev, num_ppcnt_monitor_counters) < + NUM_REQ_PPCNT_COUNTER_S1) + return false; + if (MLX5_CAP_GEN(mdev, num_q_monitor_counters) < + NUM_REQ_Q_COUNTERS_S1) + return false; + return true; +} + +void mlx5e_monitor_counter_arm(struct mlx5e_priv *priv) +{ + u32 in[MLX5_ST_SZ_DW(arm_monitor_counter_in)] = {}; + u32 out[MLX5_ST_SZ_DW(arm_monitor_counter_out)] = {}; + + MLX5_SET(arm_monitor_counter_in, in, opcode, + MLX5_CMD_OP_ARM_MONITOR_COUNTER); + mlx5_cmd_exec(priv->mdev, in, sizeof(in), out, sizeof(out)); +} + +static void mlx5e_monitor_counters_work(struct work_struct *work) +{ + struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv, + monitor_counters_work); + + mutex_lock(&priv->state_lock); + mlx5e_update_ndo_stats(priv); + mutex_unlock(&priv->state_lock); + mlx5e_monitor_counter_arm(priv); +} + +static int mlx5e_monitor_event_handler(struct notifier_block *nb, + unsigned long event, void *eqe) +{ + struct mlx5e_priv *priv = mlx5_nb_cof(nb, struct mlx5e_priv, + monitor_counters_nb); + queue_work(priv->wq, &priv->monitor_counters_work); + return NOTIFY_OK; +} + +void mlx5e_monitor_counter_start(struct mlx5e_priv *priv) +{ + MLX5_NB_INIT(&priv->monitor_counters_nb, mlx5e_monitor_event_handler, + MONITOR_COUNTER); + mlx5_eq_notifier_register(priv->mdev, &priv->monitor_counters_nb); +} + +static void mlx5e_monitor_counter_stop(struct mlx5e_priv *priv) +{ + mlx5_eq_notifier_unregister(priv->mdev, &priv->monitor_counters_nb); + cancel_work_sync(&priv->monitor_counters_work); +} + +static int fill_monitor_counter_ppcnt_set1(int cnt, u32 *in) +{ + enum mlx5_monitor_counter_ppcnt ppcnt_cnt; + + for (ppcnt_cnt = 0; + ppcnt_cnt < NUM_REQ_PPCNT_COUNTER_S1; + ppcnt_cnt++, cnt++) { + MLX5_SET(set_monitor_counter_in, in, + monitor_counter[cnt].type, + MLX5_QUERY_MONITOR_CNT_TYPE_PPCNT); + MLX5_SET(set_monitor_counter_in, in, + monitor_counter[cnt].counter, + ppcnt_cnt); + } + return ppcnt_cnt; +} + +static int fill_monitor_counter_q_counter_set1(int cnt, int q_counter, u32 *in) +{ + MLX5_SET(set_monitor_counter_in, in, + monitor_counter[cnt].type, + MLX5_QUERY_MONITOR_CNT_TYPE_Q_COUNTER); + MLX5_SET(set_monitor_counter_in, in, + monitor_counter[cnt].counter, + MLX5_QUERY_MONITOR_Q_COUNTER_RX_OUT_OF_BUFFER); + MLX5_SET(set_monitor_counter_in, in, + monitor_counter[cnt].counter_group_id, + q_counter); + return 1; +} + +/* check if mlx5e_monitor_counter_supported before calling this function*/ +static void mlx5e_set_monitor_counter(struct mlx5e_priv *priv) +{ + struct 
mlx5_core_dev *mdev = priv->mdev; + int max_num_of_counters = MLX5_CAP_GEN(mdev, max_num_of_monitor_counters); + int num_q_counters = MLX5_CAP_GEN(mdev, num_q_monitor_counters); + int num_ppcnt_counters = !MLX5_CAP_PCAM_REG(mdev, ppcnt) ? 0 : + MLX5_CAP_GEN(mdev, num_ppcnt_monitor_counters); + u32 in[MLX5_ST_SZ_DW(set_monitor_counter_in)] = {}; + u32 out[MLX5_ST_SZ_DW(set_monitor_counter_out)] = {}; + int q_counter = priv->q_counter; + int cnt = 0; + + if (num_ppcnt_counters >= NUM_REQ_PPCNT_COUNTER_S1 && + max_num_of_counters >= (NUM_REQ_PPCNT_COUNTER_S1 + cnt)) + cnt += fill_monitor_counter_ppcnt_set1(cnt, in); + + if (num_q_counters >= NUM_REQ_Q_COUNTERS_S1 && + max_num_of_counters >= (NUM_REQ_Q_COUNTERS_S1 + cnt) && + q_counter) + cnt += fill_monitor_counter_q_counter_set1(cnt, q_counter, in); + + MLX5_SET(set_monitor_counter_in, in, num_of_counters, cnt); + MLX5_SET(set_monitor_counter_in, in, opcode, + MLX5_CMD_OP_SET_MONITOR_COUNTER); + + mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out)); +} + +/* check if mlx5e_monitor_counter_supported before calling this function*/ +void mlx5e_monitor_counter_init(struct mlx5e_priv *priv) +{ + INIT_WORK(&priv->monitor_counters_work, mlx5e_monitor_counters_work); + mlx5e_monitor_counter_start(priv); + mlx5e_set_monitor_counter(priv); + mlx5e_monitor_counter_arm(priv); + queue_work(priv->wq, &priv->update_stats_work); +} + +static void mlx5e_monitor_counter_disable(struct mlx5e_priv *priv) +{ + u32 in[MLX5_ST_SZ_DW(set_monitor_counter_in)] = {}; + u32 out[MLX5_ST_SZ_DW(set_monitor_counter_out)] = {}; + + MLX5_SET(set_monitor_counter_in, in, num_of_counters, 0); + MLX5_SET(set_monitor_counter_in, in, opcode, + MLX5_CMD_OP_SET_MONITOR_COUNTER); + + mlx5_cmd_exec(priv->mdev, in, sizeof(in), out, sizeof(out)); +} + +/* check if mlx5e_monitor_counter_supported before calling this function*/ +void mlx5e_monitor_counter_cleanup(struct mlx5e_priv *priv) +{ + mlx5e_monitor_counter_disable(priv); + mlx5e_monitor_counter_stop(priv); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.h new file mode 100644 index 000000000000..e1ac4b3d22fb --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2018 Mellanox Technologies. */ + +#ifndef __MLX5_MONITOR_H__ +#define __MLX5_MONITOR_H__ + +int mlx5e_monitor_counter_supported(struct mlx5e_priv *priv); +void mlx5e_monitor_counter_init(struct mlx5e_priv *priv); +void mlx5e_monitor_counter_cleanup(struct mlx5e_priv *priv); +void mlx5e_monitor_counter_arm(struct mlx5e_priv *priv); + +#endif /* __MLX5_MONITOR_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c new file mode 100644 index 000000000000..046948ead152 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c @@ -0,0 +1,634 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2018 Mellanox Technologies. 
*/ + +#include +#include +#include "lib/vxlan.h" +#include "en/tc_tun.h" + +static int get_route_and_out_devs(struct mlx5e_priv *priv, + struct net_device *dev, + struct net_device **route_dev, + struct net_device **out_dev) +{ + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct net_device *uplink_dev, *uplink_upper; + bool dst_is_lag_dev; + + uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH); + uplink_upper = netdev_master_upper_dev_get(uplink_dev); + dst_is_lag_dev = (uplink_upper && + netif_is_lag_master(uplink_upper) && + dev == uplink_upper && + mlx5_lag_is_sriov(priv->mdev)); + + /* if the egress device isn't on the same HW e-switch or + * it's a LAG device, use the uplink + */ + if (!switchdev_port_same_parent_id(priv->netdev, dev) || + dst_is_lag_dev) { + *route_dev = uplink_dev; + *out_dev = *route_dev; + } else { + *route_dev = dev; + if (is_vlan_dev(*route_dev)) + *out_dev = uplink_dev; + else if (mlx5e_eswitch_rep(dev)) + *out_dev = *route_dev; + else + return -EOPNOTSUPP; + } + + return 0; +} + +static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv, + struct net_device *mirred_dev, + struct net_device **out_dev, + struct net_device **route_dev, + struct flowi4 *fl4, + struct neighbour **out_n, + u8 *out_ttl) +{ + struct rtable *rt; + struct neighbour *n = NULL; + +#if IS_ENABLED(CONFIG_INET) + int ret; + + rt = ip_route_output_key(dev_net(mirred_dev), fl4); + ret = PTR_ERR_OR_ZERO(rt); + if (ret) + return ret; +#else + return -EOPNOTSUPP; +#endif + + ret = get_route_and_out_devs(priv, rt->dst.dev, route_dev, out_dev); + if (ret < 0) + return ret; + + if (!(*out_ttl)) + *out_ttl = ip4_dst_hoplimit(&rt->dst); + n = dst_neigh_lookup(&rt->dst, &fl4->daddr); + ip_rt_put(rt); + if (!n) + return -ENOMEM; + + *out_n = n; + return 0; +} + +static const char *mlx5e_netdev_kind(struct net_device *dev) +{ + if (dev->rtnl_link_ops) + return dev->rtnl_link_ops->kind; + else + return ""; +} + +static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv, + struct net_device *mirred_dev, + struct net_device **out_dev, + struct net_device **route_dev, + struct flowi6 *fl6, + struct neighbour **out_n, + u8 *out_ttl) +{ + struct neighbour *n = NULL; + struct dst_entry *dst; + +#if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6) + int ret; + + ret = ipv6_stub->ipv6_dst_lookup(dev_net(mirred_dev), NULL, &dst, + fl6); + if (ret < 0) + return ret; + + if (!(*out_ttl)) + *out_ttl = ip6_dst_hoplimit(dst); + + ret = get_route_and_out_devs(priv, dst->dev, route_dev, out_dev); + if (ret < 0) + return ret; +#else + return -EOPNOTSUPP; +#endif + + n = dst_neigh_lookup(dst, &fl6->daddr); + dst_release(dst); + if (!n) + return -ENOMEM; + + *out_n = n; + return 0; +} + +static int mlx5e_gen_vxlan_header(char buf[], struct ip_tunnel_key *tun_key) +{ + __be32 tun_id = tunnel_id_to_key32(tun_key->tun_id); + struct udphdr *udp = (struct udphdr *)(buf); + struct vxlanhdr *vxh = (struct vxlanhdr *) + ((char *)udp + sizeof(struct udphdr)); + + udp->dest = tun_key->tp_dst; + vxh->vx_flags = VXLAN_HF_VNI; + vxh->vx_vni = vxlan_vni_field(tun_id); + + return 0; +} + +static int mlx5e_gen_gre_header(char buf[], struct ip_tunnel_key *tun_key) +{ + __be32 tun_id = tunnel_id_to_key32(tun_key->tun_id); + int hdr_len; + struct gre_base_hdr *greh = (struct gre_base_hdr *)(buf); + + /* the HW does not calculate GRE csum or sequences */ + if (tun_key->tun_flags & (TUNNEL_CSUM | TUNNEL_SEQ)) + return -EOPNOTSUPP; + + greh->protocol = htons(ETH_P_TEB); + + /* GRE key */ + hdr_len = 
gre_calc_hlen(tun_key->tun_flags); + greh->flags = gre_tnl_flags_to_gre_flags(tun_key->tun_flags); + if (tun_key->tun_flags & TUNNEL_KEY) { + __be32 *ptr = (__be32 *)(((u8 *)greh) + hdr_len - 4); + + *ptr = tun_id; + } + + return 0; +} + +static int mlx5e_gen_ip_tunnel_header(char buf[], __u8 *ip_proto, + struct mlx5e_encap_entry *e) +{ + int err = 0; + struct ip_tunnel_key *key = &e->tun_info.key; + + if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) { + *ip_proto = IPPROTO_UDP; + err = mlx5e_gen_vxlan_header(buf, key); + } else if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP) { + *ip_proto = IPPROTO_GRE; + err = mlx5e_gen_gre_header(buf, key); + } else { + pr_warn("mlx5: Cannot generate tunnel header for tunnel type (%d)\n" + , e->tunnel_type); + err = -EOPNOTSUPP; + } + + return err; +} + +static char *gen_eth_tnl_hdr(char *buf, struct net_device *dev, + struct mlx5e_encap_entry *e, + u16 proto) +{ + struct ethhdr *eth = (struct ethhdr *)buf; + char *ip; + + ether_addr_copy(eth->h_dest, e->h_dest); + ether_addr_copy(eth->h_source, dev->dev_addr); + if (is_vlan_dev(dev)) { + struct vlan_hdr *vlan = (struct vlan_hdr *) + ((char *)eth + ETH_HLEN); + ip = (char *)vlan + VLAN_HLEN; + eth->h_proto = vlan_dev_vlan_proto(dev); + vlan->h_vlan_TCI = htons(vlan_dev_vlan_id(dev)); + vlan->h_vlan_encapsulated_proto = htons(proto); + } else { + eth->h_proto = htons(proto); + ip = (char *)eth + ETH_HLEN; + } + + return ip; +} + +int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv, + struct net_device *mirred_dev, + struct mlx5e_encap_entry *e) +{ + int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size); + struct ip_tunnel_key *tun_key = &e->tun_info.key; + struct net_device *out_dev, *route_dev; + struct neighbour *n = NULL; + struct flowi4 fl4 = {}; + int ipv4_encap_size; + char *encap_header; + u8 nud_state, ttl; + struct iphdr *ip; + int err; + + /* add the IP fields */ + fl4.flowi4_tos = tun_key->tos; + fl4.daddr = tun_key->u.ipv4.dst; + fl4.saddr = tun_key->u.ipv4.src; + ttl = tun_key->ttl; + + err = mlx5e_route_lookup_ipv4(priv, mirred_dev, &out_dev, &route_dev, + &fl4, &n, &ttl); + if (err) + return err; + + ipv4_encap_size = + (is_vlan_dev(route_dev) ? VLAN_ETH_HLEN : ETH_HLEN) + + sizeof(struct iphdr) + + e->tunnel_hlen; + + if (max_encap_size < ipv4_encap_size) { + mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n", + ipv4_encap_size, max_encap_size); + return -EOPNOTSUPP; + } + + encap_header = kzalloc(ipv4_encap_size, GFP_KERNEL); + if (!encap_header) + return -ENOMEM; + + /* used by mlx5e_detach_encap to look up a neigh hash table + * entry in the neigh hash table when a user deletes a rule + */ + e->m_neigh.dev = n->dev; + e->m_neigh.family = n->ops->family; + memcpy(&e->m_neigh.dst_ip, n->primary_key, n->tbl->key_len); + e->out_dev = out_dev; + + /* It's important to add the neigh to the hash table before checking + * the neigh validity state. So if we get a notification when the + * neigh changes its validity state, we will find the relevant neigh + * in the hash.
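 + * For example: if the neigh turns NUD_VALID between this point and + * the nud_state check below, the neigh update notification already + * finds e in the hash and can complete the encap; otherwise the + * -EAGAIN path below leaves the entry non-offloaded until that + * notification arrives.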
+ */ + err = mlx5e_rep_encap_entry_attach(netdev_priv(out_dev), e); + if (err) + goto free_encap; + + read_lock_bh(&n->lock); + nud_state = n->nud_state; + ether_addr_copy(e->h_dest, n->ha); + read_unlock_bh(&n->lock); + + /* add ethernet header */ + ip = (struct iphdr *)gen_eth_tnl_hdr(encap_header, route_dev, e, + ETH_P_IP); + + /* add ip header */ + ip->tos = tun_key->tos; + ip->version = 0x4; + ip->ihl = 0x5; + ip->ttl = ttl; + ip->daddr = fl4.daddr; + ip->saddr = fl4.saddr; + + /* add tunneling protocol header */ + err = mlx5e_gen_ip_tunnel_header((char *)ip + sizeof(struct iphdr), + &ip->protocol, e); + if (err) + goto destroy_neigh_entry; + + e->encap_size = ipv4_encap_size; + e->encap_header = encap_header; + + if (!(nud_state & NUD_VALID)) { + neigh_event_send(n, NULL); + err = -EAGAIN; + goto out; + } + + err = mlx5_packet_reformat_alloc(priv->mdev, + e->reformat_type, + ipv4_encap_size, encap_header, + MLX5_FLOW_NAMESPACE_FDB, + &e->encap_id); + if (err) + goto destroy_neigh_entry; + + e->flags |= MLX5_ENCAP_ENTRY_VALID; + mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev)); + neigh_release(n); + return err; + +destroy_neigh_entry: + mlx5e_rep_encap_entry_detach(netdev_priv(e->out_dev), e); +free_encap: + kfree(encap_header); +out: + if (n) + neigh_release(n); + return err; +} + +int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv, + struct net_device *mirred_dev, + struct mlx5e_encap_entry *e) +{ + int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size); + struct ip_tunnel_key *tun_key = &e->tun_info.key; + struct net_device *out_dev, *route_dev; + struct neighbour *n = NULL; + struct flowi6 fl6 = {}; + struct ipv6hdr *ip6h; + int ipv6_encap_size; + char *encap_header; + u8 nud_state, ttl; + int err; + + ttl = tun_key->ttl; + + fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label); + fl6.daddr = tun_key->u.ipv6.dst; + fl6.saddr = tun_key->u.ipv6.src; + + err = mlx5e_route_lookup_ipv6(priv, mirred_dev, &out_dev, &route_dev, + &fl6, &n, &ttl); + if (err) + return err; + + ipv6_encap_size = + (is_vlan_dev(route_dev) ? VLAN_ETH_HLEN : ETH_HLEN) + + sizeof(struct ipv6hdr) + + e->tunnel_hlen; + + if (max_encap_size < ipv6_encap_size) { + mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n", + ipv6_encap_size, max_encap_size); + return -EOPNOTSUPP; + } + + encap_header = kzalloc(ipv6_encap_size, GFP_KERNEL); + if (!encap_header) + return -ENOMEM; + + /* used by mlx5e_detach_encap to look up a neigh hash table + * entry in the neigh hash table when a user deletes a rule + */ + e->m_neigh.dev = n->dev; + e->m_neigh.family = n->ops->family; + memcpy(&e->m_neigh.dst_ip, n->primary_key, n->tbl->key_len); + e->out_dev = out_dev; + + /* It's important to add the neigh to the hash table before checking + * the neigh validity state. So if we get a notification when the + * neigh changes its validity state, we will find the relevant neigh + * in the hash.
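 + * The same reasoning as in the IPv4 variant applies here; note that + * n->tbl->key_len in the memcpy above is 16 bytes for the IPv6 + * neighbour table, so m_neigh.dst_ip carries the full 128-bit + * destination address.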
+ */ + err = mlx5e_rep_encap_entry_attach(netdev_priv(out_dev), e); + if (err) + goto free_encap; + + read_lock_bh(&n->lock); + nud_state = n->nud_state; + ether_addr_copy(e->h_dest, n->ha); + read_unlock_bh(&n->lock); + + /* add ethernet header */ + ip6h = (struct ipv6hdr *)gen_eth_tnl_hdr(encap_header, route_dev, e, + ETH_P_IPV6); + + /* add ip header */ + ip6_flow_hdr(ip6h, tun_key->tos, 0); + /* the HW fills up ipv6 payload len */ + ip6h->hop_limit = ttl; + ip6h->daddr = fl6.daddr; + ip6h->saddr = fl6.saddr; + + /* add tunneling protocol header */ + err = mlx5e_gen_ip_tunnel_header((char *)ip6h + sizeof(struct ipv6hdr), + &ip6h->nexthdr, e); + if (err) + goto destroy_neigh_entry; + + e->encap_size = ipv6_encap_size; + e->encap_header = encap_header; + + if (!(nud_state & NUD_VALID)) { + neigh_event_send(n, NULL); + err = -EAGAIN; + goto out; + } + + err = mlx5_packet_reformat_alloc(priv->mdev, + e->reformat_type, + ipv6_encap_size, encap_header, + MLX5_FLOW_NAMESPACE_FDB, + &e->encap_id); + if (err) + goto destroy_neigh_entry; + + e->flags |= MLX5_ENCAP_ENTRY_VALID; + mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev)); + neigh_release(n); + return err; + +destroy_neigh_entry: + mlx5e_rep_encap_entry_detach(netdev_priv(e->out_dev), e); +free_encap: + kfree(encap_header); +out: + if (n) + neigh_release(n); + return err; +} + +int mlx5e_tc_tun_get_type(struct net_device *tunnel_dev) +{ + if (netif_is_vxlan(tunnel_dev)) + return MLX5E_TC_TUNNEL_TYPE_VXLAN; + else if (netif_is_gretap(tunnel_dev) || + netif_is_ip6gretap(tunnel_dev)) + return MLX5E_TC_TUNNEL_TYPE_GRETAP; + else + return MLX5E_TC_TUNNEL_TYPE_UNKNOWN; +} + +bool mlx5e_tc_tun_device_to_offload(struct mlx5e_priv *priv, + struct net_device *netdev) +{ + int tunnel_type = mlx5e_tc_tun_get_type(netdev); + + if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN && + MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap)) + return true; + else if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP && + MLX5_CAP_ESW(priv->mdev, nvgre_encap_decap)) + return true; + else + return false; +} + +int mlx5e_tc_tun_init_encap_attr(struct net_device *tunnel_dev, + struct mlx5e_priv *priv, + struct mlx5e_encap_entry *e, + struct netlink_ext_ack *extack) +{ + e->tunnel_type = mlx5e_tc_tun_get_type(tunnel_dev); + + if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) { + int dst_port = be16_to_cpu(e->tun_info.key.tp_dst); + + if (!mlx5_vxlan_lookup_port(priv->mdev->vxlan, dst_port)) { + NL_SET_ERR_MSG_MOD(extack, + "vxlan udp dport was not registered with the HW"); + netdev_warn(priv->netdev, + "%d isn't an offloaded vxlan udp dport\n", + dst_port); + return -EOPNOTSUPP; + } + e->reformat_type = MLX5_REFORMAT_TYPE_L2_TO_VXLAN; + e->tunnel_hlen = VXLAN_HLEN; + } else if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP) { + e->reformat_type = MLX5_REFORMAT_TYPE_L2_TO_NVGRE; + e->tunnel_hlen = gre_calc_hlen(e->tun_info.key.tun_flags); + } else { + e->reformat_type = -1; + e->tunnel_hlen = -1; + return -EOPNOTSUPP; + } + return 0; +} + +static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv, + struct mlx5_flow_spec *spec, + struct tc_cls_flower_offload *f, + void *headers_c, + void *headers_v) +{ + struct netlink_ext_ack *extack = f->common.extack; + struct flow_dissector_key_ports *key = + skb_flow_dissector_target(f->dissector, + FLOW_DISSECTOR_KEY_ENC_PORTS, + f->key); + struct flow_dissector_key_ports *mask = + skb_flow_dissector_target(f->dissector, + FLOW_DISSECTOR_KEY_ENC_PORTS, + f->mask); + void *misc_c = MLX5_ADDR_OF(fte_match_param, + spec->match_criteria, + 
misc_parameters); + void *misc_v = MLX5_ADDR_OF(fte_match_param, + spec->match_value, + misc_parameters); + + /* Full udp dst port must be given */ + if (!dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS) || + memchr_inv(&mask->dst, 0xff, sizeof(mask->dst))) { + NL_SET_ERR_MSG_MOD(extack, + "VXLAN decap filter must include enc_dst_port condition"); + netdev_warn(priv->netdev, + "VXLAN decap filter must include enc_dst_port condition\n"); + return -EOPNOTSUPP; + } + + /* udp dst port must be known as a VXLAN port */ + if (!mlx5_vxlan_lookup_port(priv->mdev->vxlan, be16_to_cpu(key->dst))) { + NL_SET_ERR_MSG_MOD(extack, + "Matched UDP port is not registered as a VXLAN port"); + netdev_warn(priv->netdev, + "UDP port %d is not registered as a VXLAN port\n", + be16_to_cpu(key->dst)); + return -EOPNOTSUPP; + } + + /* dst UDP port is valid here */ + MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ip_protocol); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); + + MLX5_SET(fte_match_set_lyr_2_4, headers_c, udp_dport, ntohs(mask->dst)); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, ntohs(key->dst)); + + MLX5_SET(fte_match_set_lyr_2_4, headers_c, udp_sport, ntohs(mask->src)); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport, ntohs(key->src)); + + /* match on VNI */ + if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) { + struct flow_dissector_key_keyid *key = + skb_flow_dissector_target(f->dissector, + FLOW_DISSECTOR_KEY_ENC_KEYID, + f->key); + struct flow_dissector_key_keyid *mask = + skb_flow_dissector_target(f->dissector, + FLOW_DISSECTOR_KEY_ENC_KEYID, + f->mask); + MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni, + be32_to_cpu(mask->keyid)); + MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni, + be32_to_cpu(key->keyid)); + } + return 0; +} + +static int mlx5e_tc_tun_parse_gretap(struct mlx5e_priv *priv, + struct mlx5_flow_spec *spec, + struct tc_cls_flower_offload *f, + void *outer_headers_c, + void *outer_headers_v) +{ + void *misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, + misc_parameters); + void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, + misc_parameters); + + if (!MLX5_CAP_ESW(priv->mdev, nvgre_encap_decap)) { + NL_SET_ERR_MSG_MOD(f->common.extack, + "GRE HW offloading is not supported"); + netdev_warn(priv->netdev, "GRE HW offloading is not supported\n"); + return -EOPNOTSUPP; + } + + MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, ip_protocol); + MLX5_SET(fte_match_set_lyr_2_4, outer_headers_v, + ip_protocol, IPPROTO_GRE); + + /* gre protocol */ + MLX5_SET_TO_ONES(fte_match_set_misc, misc_c, gre_protocol); + MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, ETH_P_TEB); + + /* gre key */ + if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) { + struct flow_dissector_key_keyid *mask = NULL; + struct flow_dissector_key_keyid *key = NULL; + + mask = skb_flow_dissector_target(f->dissector, + FLOW_DISSECTOR_KEY_ENC_KEYID, + f->mask); + MLX5_SET(fte_match_set_misc, misc_c, + gre_key.key, be32_to_cpu(mask->keyid)); + + key = skb_flow_dissector_target(f->dissector, + FLOW_DISSECTOR_KEY_ENC_KEYID, + f->key); + MLX5_SET(fte_match_set_misc, misc_v, + gre_key.key, be32_to_cpu(key->keyid)); + } + + return 0; +} + +int mlx5e_tc_tun_parse(struct net_device *filter_dev, + struct mlx5e_priv *priv, + struct mlx5_flow_spec *spec, + struct tc_cls_flower_offload *f, + void *headers_c, + void *headers_v) +{ + int tunnel_type; + int err = 0; + + tunnel_type =
mlx5e_tc_tun_get_type(filter_dev); + if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) { + err = mlx5e_tc_tun_parse_vxlan(priv, spec, f, + headers_c, headers_v); + } else if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP) { + err = mlx5e_tc_tun_parse_gretap(priv, spec, f, + headers_c, headers_v); + } else { + netdev_warn(priv->netdev, + "decapsulation offload is not supported for %s net device (%d)\n", + mlx5e_netdev_kind(filter_dev), tunnel_type); + return -EOPNOTSUPP; + } + return err; +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h new file mode 100644 index 000000000000..706ce7bf15e7 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2018 Mellanox Technologies. */ + +#ifndef __MLX5_EN_TC_TUNNEL_H__ +#define __MLX5_EN_TC_TUNNEL_H__ + +#include +#include +#include +#include +#include "en.h" +#include "en_rep.h" + +enum { + MLX5E_TC_TUNNEL_TYPE_UNKNOWN, + MLX5E_TC_TUNNEL_TYPE_VXLAN, + MLX5E_TC_TUNNEL_TYPE_GRETAP +}; + +int mlx5e_tc_tun_init_encap_attr(struct net_device *tunnel_dev, + struct mlx5e_priv *priv, + struct mlx5e_encap_entry *e, + struct netlink_ext_ack *extack); + +int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv, + struct net_device *mirred_dev, + struct mlx5e_encap_entry *e); + +int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv, + struct net_device *mirred_dev, + struct mlx5e_encap_entry *e); + +int mlx5e_tc_tun_get_type(struct net_device *tunnel_dev); +bool mlx5e_tc_tun_device_to_offload(struct mlx5e_priv *priv, + struct net_device *netdev); + +int mlx5e_tc_tun_parse(struct net_device *filter_dev, + struct mlx5e_priv *priv, + struct mlx5_flow_spec *spec, + struct tc_cls_flower_offload *f, + void *headers_c, + void *headers_v); + +#endif //__MLX5_EN_TC_TUNNEL_H__ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index ad6d471d00dd..3740177eed09 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -47,7 +47,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_dma_info *di, xdpi.xdpf->len, PCI_DMA_TODEVICE); xdpi.di = *di; - return mlx5e_xmit_xdp_frame(sq, &xdpi); + return sq->xmit_xdp_frame(sq, &xdpi); } /* returns true if packet was consumed by xdp */ @@ -102,7 +102,98 @@ xdp_abort: } } -bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi) +static void mlx5e_xdp_mpwqe_session_start(struct mlx5e_xdpsq *sq) +{ + struct mlx5e_xdp_mpwqe *session = &sq->mpwqe; + struct mlx5_wq_cyc *wq = &sq->wq; + u8 wqebbs; + u16 pi; + + mlx5e_xdpsq_fetch_wqe(sq, &session->wqe); + + prefetchw(session->wqe->data); + session->ds_count = MLX5E_XDP_TX_EMPTY_DS_COUNT; + + pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); + +/* The product MLX5_SEND_WQE_MAX_WQEBBS * MLX5_SEND_WQEBB_NUM_DS + * (16 * 4 == 64) does not fit in the 6-bit DS field of Ctrl Segment. + * We use a bound lower than MLX5_SEND_WQE_MAX_WQEBBS to let a + * full-session WQE be cache-aligned.
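 + * E.g. with 64B cache lines the bound below is 16 - 1 == 15 WQEBBs, + * i.e. 15 * 4 == 60 data segments per session (a 6-bit field holds at + * most 63); with 128B lines it is 16 - 2 == 14 WQEBBs == 56 segments, + * so a full session covers whole 128B lines (14 * 64B == 7 * 128B).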
+ */ +#if L1_CACHE_BYTES < 128 +#define MLX5E_XDP_MPW_MAX_WQEBBS (MLX5_SEND_WQE_MAX_WQEBBS - 1) +#else +#define MLX5E_XDP_MPW_MAX_WQEBBS (MLX5_SEND_WQE_MAX_WQEBBS - 2) +#endif + + wqebbs = min_t(u16, mlx5_wq_cyc_get_contig_wqebbs(wq, pi), + MLX5E_XDP_MPW_MAX_WQEBBS); + + session->max_ds_count = MLX5_SEND_WQEBB_NUM_DS * wqebbs; +} + +static void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq) +{ + struct mlx5_wq_cyc *wq = &sq->wq; + struct mlx5e_xdp_mpwqe *session = &sq->mpwqe; + struct mlx5_wqe_ctrl_seg *cseg = &session->wqe->ctrl; + u16 ds_count = session->ds_count; + u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); + struct mlx5e_xdp_wqe_info *wi = &sq->db.wqe_info[pi]; + + cseg->opmod_idx_opcode = + cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_ENHANCED_MPSW); + cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_count); + + wi->num_wqebbs = DIV_ROUND_UP(ds_count, MLX5_SEND_WQEBB_NUM_DS); + wi->num_ds = ds_count - MLX5E_XDP_TX_EMPTY_DS_COUNT; + + sq->pc += wi->num_wqebbs; + + sq->doorbell_cseg = cseg; + + session->wqe = NULL; /* Close session */ +} + +static bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, + struct mlx5e_xdp_info *xdpi) +{ + struct mlx5e_xdp_mpwqe *session = &sq->mpwqe; + struct mlx5e_xdpsq_stats *stats = sq->stats; + + dma_addr_t dma_addr = xdpi->dma_addr; + struct xdp_frame *xdpf = xdpi->xdpf; + unsigned int dma_len = xdpf->len; + + if (unlikely(sq->hw_mtu < dma_len)) { + stats->err++; + return false; + } + + if (unlikely(!session->wqe)) { + if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, + MLX5_SEND_WQE_MAX_WQEBBS))) { + /* SQ is full, ring doorbell */ + mlx5e_xmit_xdp_doorbell(sq); + stats->full++; + return false; + } + + mlx5e_xdp_mpwqe_session_start(sq); + } + + mlx5e_xdp_mpwqe_add_dseg(sq, dma_addr, dma_len); + + if (unlikely(session->ds_count == session->max_ds_count)) + mlx5e_xdp_mpwqe_complete(sq); + + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi); + stats->xmit++; + return true; +} + +static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi) { struct mlx5_wq_cyc *wq = &sq->wq; u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); @@ -126,11 +217,8 @@ bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi) } if (unlikely(!mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, 1))) { - if (sq->doorbell) { - /* SQ is full, ring doorbell */ - mlx5e_xmit_xdp_doorbell(sq); - sq->doorbell = false; - } + /* SQ is full, ring doorbell */ + mlx5e_xmit_xdp_doorbell(sq); stats->full++; return false; } @@ -152,23 +240,20 @@ bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi) cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_SEND); - /* move page to reference to sq responsibility, - * and mark so it's not put back in page-cache. 
- */ - sq->db.xdpi[pi] = *xdpi; sq->pc++; - sq->doorbell = true; + sq->doorbell_cseg = cseg; + mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi); stats->xmit++; return true; } -bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq) +bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq) { + struct mlx5e_xdp_info_fifo *xdpi_fifo; struct mlx5e_xdpsq *sq; struct mlx5_cqe64 *cqe; - struct mlx5e_rq *rq; bool is_redirect; u16 sqcc; int i; @@ -182,8 +267,8 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq) if (!cqe) return false; - is_redirect = test_bit(MLX5E_SQ_STATE_REDIRECT, &sq->state); - rq = container_of(sq, struct mlx5e_rq, xdpsq); + is_redirect = !rq; + xdpi_fifo = &sq->db.xdpi_fifo; /* sq->cc must be updated only after mlx5_cqwq_update_db_record(), * otherwise a cq overrun may occur @@ -199,20 +284,33 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq) wqe_counter = be16_to_cpu(cqe->wqe_counter); + if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_REQ)) + netdev_WARN_ONCE(sq->channel->netdev, + "Bad OP in XDPSQ CQE: 0x%x\n", + get_cqe_opcode(cqe)); + do { - u16 ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc); - struct mlx5e_xdp_info *xdpi = &sq->db.xdpi[ci]; + struct mlx5e_xdp_wqe_info *wi; + u16 ci, j; last_wqe = (sqcc == wqe_counter); - sqcc++; - - if (is_redirect) { - xdp_return_frame(xdpi->xdpf); - dma_unmap_single(sq->pdev, xdpi->dma_addr, - xdpi->xdpf->len, DMA_TO_DEVICE); - } else { - /* Recycle RX page */ - mlx5e_page_release(rq, &xdpi->di, true); + ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc); + wi = &sq->db.wqe_info[ci]; + + sqcc += wi->num_wqebbs; + + for (j = 0; j < wi->num_ds; j++) { + struct mlx5e_xdp_info xdpi = + mlx5e_xdpi_fifo_pop(xdpi_fifo); + + if (is_redirect) { + xdp_return_frame(xdpi.xdpf); + dma_unmap_single(sq->pdev, xdpi.dma_addr, + xdpi.xdpf->len, DMA_TO_DEVICE); + } else { + /* Recycle RX page */ + mlx5e_page_release(rq, &xdpi.di, true); + } } } while (!last_wqe); } while ((++i < MLX5E_TX_CQ_POLL_BUDGET) && (cqe = mlx5_cqwq_get_cqe(&cq->wq))); @@ -228,27 +326,32 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq) return (i == MLX5E_TX_CQ_POLL_BUDGET); } -void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq) +void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq) { - struct mlx5e_rq *rq; - bool is_redirect; - - is_redirect = test_bit(MLX5E_SQ_STATE_REDIRECT, &sq->state); - rq = is_redirect ? 
NULL : container_of(sq, struct mlx5e_rq, xdpsq); + struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo; + bool is_redirect = !rq; while (sq->cc != sq->pc) { - u16 ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->cc); - struct mlx5e_xdp_info *xdpi = &sq->db.xdpi[ci]; - - sq->cc++; - - if (is_redirect) { - xdp_return_frame(xdpi->xdpf); - dma_unmap_single(sq->pdev, xdpi->dma_addr, - xdpi->xdpf->len, DMA_TO_DEVICE); - } else { - /* Recycle RX page */ - mlx5e_page_release(rq, &xdpi->di, false); + struct mlx5e_xdp_wqe_info *wi; + u16 ci, i; + + ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->cc); + wi = &sq->db.wqe_info[ci]; + + sq->cc += wi->num_wqebbs; + + for (i = 0; i < wi->num_ds; i++) { + struct mlx5e_xdp_info xdpi = + mlx5e_xdpi_fifo_pop(xdpi_fifo); + + if (is_redirect) { + xdp_return_frame(xdpi.xdpf); + dma_unmap_single(sq->pdev, xdpi.dma_addr, + xdpi.xdpf->len, DMA_TO_DEVICE); + } else { + /* Recycle RX page */ + mlx5e_page_release(rq, &xdpi.di, false); + } } } } @@ -292,7 +395,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, xdpi.xdpf = xdpf; - if (unlikely(!mlx5e_xmit_xdp_frame(sq, &xdpi))) { + if (unlikely(!sq->xmit_xdp_frame(sq, &xdpi))) { dma_unmap_single(sq->pdev, xdpi.dma_addr, xdpf->len, DMA_TO_DEVICE); xdp_return_frame_rx_napi(xdpf); @@ -300,8 +403,33 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, } } - if (flags & XDP_XMIT_FLUSH) + if (flags & XDP_XMIT_FLUSH) { + if (sq->mpwqe.wqe) + mlx5e_xdp_mpwqe_complete(sq); mlx5e_xmit_xdp_doorbell(sq); + } return n - drops; } + +void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq) +{ + struct mlx5e_xdpsq *xdpsq = &rq->xdpsq; + + if (xdpsq->mpwqe.wqe) + mlx5e_xdp_mpwqe_complete(xdpsq); + + mlx5e_xmit_xdp_doorbell(xdpsq); + + if (xdpsq->redirect_flush) { + xdp_do_flush_map(); + xdpsq->redirect_flush = false; + } +} + +void mlx5e_set_xmit_fp(struct mlx5e_xdpsq *sq, bool is_mpw) +{ + sq->xmit_xdp_frame = is_mpw ? 
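[Aside: the doorbell_cseg field used in these paths implements doorbell batching: posting a frame only records the last control segment, and the doorbell is rung once per poll cycle or XDP_XMIT_FLUSH rather than per packet. A schematic sketch of the pattern; the types and the stand-in for the MMIO notify are invented, not the driver's:]

#include <stdio.h>

struct sq {
	const char *doorbell_cseg;	/* last posted ctrl segment, NULL if none */
};

static void post_frame(struct sq *sq, const char *cseg)
{
	sq->doorbell_cseg = cseg;	/* per frame: just remember it */
}

static void ring_doorbell(struct sq *sq)
{
	if (sq->doorbell_cseg) {	/* per flush: one MMIO write */
		printf("doorbell for %s\n", sq->doorbell_cseg);
		sq->doorbell_cseg = NULL;
	}
}

int main(void)
{
	struct sq sq = { 0 };

	post_frame(&sq, "wqe#1");
	post_frame(&sq, "wqe#2");
	ring_doorbell(&sq);		/* rings once, covering both WQEs */
	return 0;
}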
+ mlx5e_xmit_xdp_frame_mpwqe : mlx5e_xmit_xdp_frame; +} + diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index 6dfab045925f..3a67cb3cd179 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -37,27 +37,62 @@ #define MLX5E_XDP_MAX_MTU ((int)(PAGE_SIZE - \ MLX5_SKB_FRAG_SZ(XDP_PACKET_HEADROOM))) #define MLX5E_XDP_MIN_INLINE (ETH_HLEN + VLAN_HLEN) -#define MLX5E_XDP_TX_DS_COUNT \ - ((sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS) + 1 /* SG DS */) +#define MLX5E_XDP_TX_EMPTY_DS_COUNT \ + (sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS) +#define MLX5E_XDP_TX_DS_COUNT (MLX5E_XDP_TX_EMPTY_DS_COUNT + 1 /* SG DS */) bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di, void *va, u16 *rx_headroom, u32 *len); -bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq); -void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq); - -bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi); +bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq); +void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq); +void mlx5e_set_xmit_fp(struct mlx5e_xdpsq *sq, bool is_mpw); +void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq); int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, u32 flags); static inline void mlx5e_xmit_xdp_doorbell(struct mlx5e_xdpsq *sq) +{ + if (sq->doorbell_cseg) { + mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, sq->doorbell_cseg); + sq->doorbell_cseg = NULL; + } +} + +static inline void +mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, dma_addr_t dma_addr, u16 dma_len) +{ + struct mlx5e_xdp_mpwqe *session = &sq->mpwqe; + struct mlx5_wqe_data_seg *dseg = + (struct mlx5_wqe_data_seg *)session->wqe + session->ds_count++; + + dseg->addr = cpu_to_be64(dma_addr); + dseg->byte_count = cpu_to_be32(dma_len); + dseg->lkey = sq->mkey_be; +} + +static inline void mlx5e_xdpsq_fetch_wqe(struct mlx5e_xdpsq *sq, + struct mlx5e_tx_wqe **wqe) { struct mlx5_wq_cyc *wq = &sq->wq; - struct mlx5e_tx_wqe *wqe; - u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc - 1); /* last pi */ + u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); + + *wqe = mlx5_wq_cyc_get_wqe(wq, pi); + memset(*wqe, 0, sizeof(**wqe)); +} - wqe = mlx5_wq_cyc_get_wqe(wq, pi); +static inline void +mlx5e_xdpi_fifo_push(struct mlx5e_xdp_info_fifo *fifo, + struct mlx5e_xdp_info *xi) +{ + u32 i = (*fifo->pc)++ & fifo->mask; - mlx5e_notify_hw(wq, sq->pc, sq->uar_map, &wqe->ctrl); + fifo->xi[i] = *xi; +} + +static inline struct mlx5e_xdp_info +mlx5e_xdpi_fifo_pop(struct mlx5e_xdp_info_fifo *fifo) +{ + return fifo->xi[(*fifo->cc)++ & fifo->mask]; } #endif diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c index 128a82b1dbfc..53608afd39b6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c @@ -254,11 +254,13 @@ struct sk_buff *mlx5e_ipsec_handle_tx_skb(struct net_device *netdev, struct mlx5e_ipsec_metadata *mdata; struct mlx5e_ipsec_sa_entry *sa_entry; struct xfrm_state *x; + struct sec_path *sp; if (!xo) return skb; - if (unlikely(skb->sp->len != 1)) { + sp = skb_sec_path(skb); + if (unlikely(sp->len != 1)) { atomic64_inc(&priv->ipsec->sw_stats.ipsec_tx_drop_bundle); goto drop; } @@ -305,10 +307,11 @@ mlx5e_ipsec_build_sp(struct net_device *netdev, struct sk_buff *skb, struct mlx5e_priv *priv = 
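[Aside: the xdpi FIFO introduced in xdp.h is a power-of-two ring addressed by free-running producer/consumer counters; unsigned wraparound is harmless because every access masks the counter. The same idea in miniature, with invented names:]

#include <stdint.h>
#include <stdio.h>

struct fifo {
	uint64_t buf[8];	/* entry count must be a power of two */
	uint32_t pc, cc;	/* free-running producer/consumer counters */
	uint32_t mask;		/* entries - 1 */
};

static void push(struct fifo *f, uint64_t v)
{
	f->buf[f->pc++ & f->mask] = v;	/* wraparound handled by the mask */
}

static uint64_t pop(struct fifo *f)
{
	return f->buf[f->cc++ & f->mask];
}

int main(void)
{
	struct fifo f = { .mask = 7 };	/* pc == cc == 0: empty */

	push(&f, 42);
	printf("%llu\n", (unsigned long long)pop(&f));
	return 0;
}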
netdev_priv(netdev); struct xfrm_offload *xo; struct xfrm_state *xs; + struct sec_path *sp; u32 sa_handle; - skb->sp = secpath_dup(skb->sp); - if (unlikely(!skb->sp)) { + sp = secpath_set(skb); + if (unlikely(!sp)) { atomic64_inc(&priv->ipsec->sw_stats.ipsec_rx_drop_sp_alloc); return NULL; } @@ -320,8 +323,9 @@ mlx5e_ipsec_build_sp(struct net_device *netdev, struct sk_buff *skb, return NULL; } - skb->sp->xvec[skb->sp->len++] = xs; - skb->sp->olen++; + sp = skb_sec_path(skb); + sp->xvec[sp->len++] = xs; + sp->olen++; xo = xfrm_offload(skb); xo->flags = CRYPTO_DONE; @@ -372,10 +376,11 @@ struct sk_buff *mlx5e_ipsec_handle_rx_skb(struct net_device *netdev, bool mlx5e_ipsec_feature_check(struct sk_buff *skb, struct net_device *netdev, netdev_features_t features) { + struct sec_path *sp = skb_sec_path(skb); struct xfrm_state *x; - if (skb->sp && skb->sp->len) { - x = skb->sp->xvec[0]; + if (sp && sp->len) { + x = sp->xvec[0]; if (x && x->xso.offload_handle) return true; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index f480763dcd0d..c9df08133718 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -135,14 +135,15 @@ void mlx5e_build_ptys2ethtool_map(void) ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT); } -static const char mlx5e_priv_flags[][ETH_GSTRING_LEN] = { - "rx_cqe_moder", - "tx_cqe_moder", - "rx_cqe_compress", - "rx_striding_rq", - "rx_no_csum_complete", +typedef int (*mlx5e_pflag_handler)(struct net_device *netdev, bool enable); + +struct pflag_desc { + char name[ETH_GSTRING_LEN]; + mlx5e_pflag_handler handler; }; +static const struct pflag_desc mlx5e_priv_flags[MLX5E_NUM_PFLAGS]; + int mlx5e_ethtool_get_sset_count(struct mlx5e_priv *priv, int sset) { int i, num_stats = 0; @@ -153,7 +154,7 @@ int mlx5e_ethtool_get_sset_count(struct mlx5e_priv *priv, int sset) num_stats += mlx5e_stats_grps[i].get_num_stats(priv); return num_stats; case ETH_SS_PRIV_FLAGS: - return ARRAY_SIZE(mlx5e_priv_flags); + return MLX5E_NUM_PFLAGS; case ETH_SS_TEST: return mlx5e_self_test_num(priv); /* fallthrough */ @@ -183,8 +184,9 @@ void mlx5e_ethtool_get_strings(struct mlx5e_priv *priv, u32 stringset, u8 *data) switch (stringset) { case ETH_SS_PRIV_FLAGS: - for (i = 0; i < ARRAY_SIZE(mlx5e_priv_flags); i++) - strcpy(data + i * ETH_GSTRING_LEN, mlx5e_priv_flags[i]); + for (i = 0; i < MLX5E_NUM_PFLAGS; i++) + strcpy(data + i * ETH_GSTRING_LEN, + mlx5e_priv_flags[i].name); break; case ETH_SS_TEST: @@ -353,7 +355,7 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv, new_channels.params = priv->channels.params; new_channels.params.num_channels = count; if (!netif_is_rxfh_configured(priv->netdev)) - mlx5e_build_default_indir_rqt(new_channels.params.indirection_rqt, + mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt, MLX5E_INDIR_RQT_SIZE, count); if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { @@ -785,10 +787,9 @@ static void get_lp_advertising(u32 eth_proto_lp, ptys2ethtool_adver_link(lp_advertising, eth_proto_lp); } -static int mlx5e_get_link_ksettings(struct net_device *netdev, - struct ethtool_link_ksettings *link_ksettings) +int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv, + struct ethtool_link_ksettings *link_ksettings) { - struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5_core_dev *mdev = priv->mdev; u32 out[MLX5_ST_SZ_DW(ptys_reg)] = {0}; u32 rx_pause = 0; @@ -804,7 +805,7 @@ static int mlx5e_get_link_ksettings(struct 
net_device *netdev, err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN, 1); if (err) { - netdev_err(netdev, "%s: query port ptys failed: %d\n", + netdev_err(priv->netdev, "%s: query port ptys failed: %d\n", __func__, err); goto err_query_regs; } @@ -824,7 +825,7 @@ static int mlx5e_get_link_ksettings(struct net_device *netdev, get_supported(eth_proto_cap, link_ksettings); get_advertising(eth_proto_admin, tx_pause, rx_pause, link_ksettings); - get_speed_duplex(netdev, eth_proto_oper, link_ksettings); + get_speed_duplex(priv->netdev, eth_proto_oper, link_ksettings); eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap; @@ -844,7 +845,7 @@ static int mlx5e_get_link_ksettings(struct net_device *netdev, Autoneg); if (get_fec_supported_advertised(mdev, link_ksettings)) - netdev_dbg(netdev, "%s: FEC caps query failed: %d\n", + netdev_dbg(priv->netdev, "%s: FEC caps query failed: %d\n", __func__, err); if (!an_disable_admin) @@ -855,6 +856,14 @@ err_query_regs: return err; } +static int mlx5e_get_link_ksettings(struct net_device *netdev, + struct ethtool_link_ksettings *link_ksettings) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_get_link_ksettings(priv, link_ksettings); +} + static u32 mlx5e_ethtool2ptys_adver_link(const unsigned long *link_modes) { u32 i, ptys_modes = 0; @@ -869,10 +878,9 @@ static u32 mlx5e_ethtool2ptys_adver_link(const unsigned long *link_modes) return ptys_modes; } -static int mlx5e_set_link_ksettings(struct net_device *netdev, - const struct ethtool_link_ksettings *link_ksettings) +int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv, + const struct ethtool_link_ksettings *link_ksettings) { - struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5_core_dev *mdev = priv->mdev; u32 eth_proto_cap, eth_proto_admin; bool an_changes = false; @@ -892,14 +900,14 @@ static int mlx5e_set_link_ksettings(struct net_device *netdev, err = mlx5_query_port_proto_cap(mdev, ð_proto_cap, MLX5_PTYS_EN); if (err) { - netdev_err(netdev, "%s: query port eth proto cap failed: %d\n", + netdev_err(priv->netdev, "%s: query port eth proto cap failed: %d\n", __func__, err); goto out; } link_modes = link_modes & eth_proto_cap; if (!link_modes) { - netdev_err(netdev, "%s: Not supported link mode(s) requested", + netdev_err(priv->netdev, "%s: Not supported link mode(s) requested", __func__); err = -EINVAL; goto out; @@ -907,7 +915,7 @@ static int mlx5e_set_link_ksettings(struct net_device *netdev, err = mlx5_query_port_proto_admin(mdev, ð_proto_admin, MLX5_PTYS_EN); if (err) { - netdev_err(netdev, "%s: query port eth proto admin failed: %d\n", + netdev_err(priv->netdev, "%s: query port eth proto admin failed: %d\n", __func__, err); goto out; } @@ -929,9 +937,17 @@ out: return err; } +static int mlx5e_set_link_ksettings(struct net_device *netdev, + const struct ethtool_link_ksettings *link_ksettings) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_set_link_ksettings(priv, link_ksettings); +} + u32 mlx5e_ethtool_get_rxfh_key_size(struct mlx5e_priv *priv) { - return sizeof(priv->channels.params.toeplitz_hash_key); + return sizeof(priv->rss_params.toeplitz_hash_key); } static u32 mlx5e_get_rxfh_key_size(struct net_device *netdev) @@ -957,50 +973,27 @@ static int mlx5e_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc) { struct mlx5e_priv *priv = netdev_priv(netdev); + struct mlx5e_rss_params *rss = &priv->rss_params; if (indir) - memcpy(indir, priv->channels.params.indirection_rqt, - 
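[Aside: with the RSS state moving into priv->rss_params, the indirection table itself is unchanged: a hash-to-channel map filled round-robin by default. A sketch mirroring what mlx5e_build_default_indir_rqt() does; the table size here is illustrative, not the driver's MLX5E_INDIR_RQT_SIZE:]

#include <stdio.h>

#define INDIR_TABLE_SIZE 128	/* illustrative size only */

int main(void)
{
	unsigned int indir[INDIR_TABLE_SIZE];
	unsigned int num_channels = 6, i;
	unsigned int hash = 0xdeadbeefu;	/* stand-in for the Toeplitz/XOR result */

	for (i = 0; i < INDIR_TABLE_SIZE; i++)
		indir[i] = i % num_channels;	/* default round-robin spread */

	printf("hash 0x%x -> channel %u\n", hash, indir[hash % INDIR_TABLE_SIZE]);
	return 0;
}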
sizeof(priv->channels.params.indirection_rqt)); + memcpy(indir, rss->indirection_rqt, + sizeof(rss->indirection_rqt)); if (key) - memcpy(key, priv->channels.params.toeplitz_hash_key, - sizeof(priv->channels.params.toeplitz_hash_key)); + memcpy(key, rss->toeplitz_hash_key, + sizeof(rss->toeplitz_hash_key)); if (hfunc) - *hfunc = priv->channels.params.rss_hfunc; + *hfunc = rss->hfunc; return 0; } -static void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen) -{ - void *tirc = MLX5_ADDR_OF(modify_tir_in, in, ctx); - struct mlx5_core_dev *mdev = priv->mdev; - int ctxlen = MLX5_ST_SZ_BYTES(tirc); - int tt; - - MLX5_SET(modify_tir_in, in, bitmask.hash, 1); - - for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { - memset(tirc, 0, ctxlen); - mlx5e_build_indir_tir_ctx_hash(&priv->channels.params, tt, tirc, false); - mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in, inlen); - } - - if (!mlx5e_tunnel_inner_ft_supported(priv->mdev)) - return; - - for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { - memset(tirc, 0, ctxlen); - mlx5e_build_indir_tir_ctx_hash(&priv->channels.params, tt, tirc, true); - mlx5_core_modify_tir(mdev, priv->inner_indir_tir[tt].tirn, in, inlen); - } -} - static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir, const u8 *key, const u8 hfunc) { struct mlx5e_priv *priv = netdev_priv(dev); + struct mlx5e_rss_params *rss = &priv->rss_params; int inlen = MLX5_ST_SZ_BYTES(modify_tir_in); bool hash_changed = false; void *in; @@ -1016,15 +1009,14 @@ static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir, mutex_lock(&priv->state_lock); - if (hfunc != ETH_RSS_HASH_NO_CHANGE && - hfunc != priv->channels.params.rss_hfunc) { - priv->channels.params.rss_hfunc = hfunc; + if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != rss->hfunc) { + rss->hfunc = hfunc; hash_changed = true; } if (indir) { - memcpy(priv->channels.params.indirection_rqt, indir, - sizeof(priv->channels.params.indirection_rqt)); + memcpy(rss->indirection_rqt, indir, + sizeof(rss->indirection_rqt)); if (test_bit(MLX5E_STATE_OPENED, &priv->state)) { u32 rqtn = priv->indir_rqt.rqtn; @@ -1032,7 +1024,7 @@ static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir, .is_rss = true, { .rss = { - .hfunc = priv->channels.params.rss_hfunc, + .hfunc = rss->hfunc, .channels = &priv->channels, }, }, @@ -1043,10 +1035,9 @@ static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir, } if (key) { - memcpy(priv->channels.params.toeplitz_hash_key, key, - sizeof(priv->channels.params.toeplitz_hash_key)); - hash_changed = hash_changed || - priv->channels.params.rss_hfunc == ETH_RSS_HASH_TOP; + memcpy(rss->toeplitz_hash_key, key, + sizeof(rss->toeplitz_hash_key)); + hash_changed = hash_changed || rss->hfunc == ETH_RSS_HASH_TOP; } if (hash_changed) @@ -1150,25 +1141,31 @@ static int mlx5e_set_tunable(struct net_device *dev, return err; } -static void mlx5e_get_pauseparam(struct net_device *netdev, - struct ethtool_pauseparam *pauseparam) +void mlx5e_ethtool_get_pauseparam(struct mlx5e_priv *priv, + struct ethtool_pauseparam *pauseparam) { - struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5_core_dev *mdev = priv->mdev; int err; err = mlx5_query_port_pause(mdev, &pauseparam->rx_pause, &pauseparam->tx_pause); if (err) { - netdev_err(netdev, "%s: mlx5_query_port_pause failed:0x%x\n", + netdev_err(priv->netdev, "%s: mlx5_query_port_pause failed:0x%x\n", __func__, err); } } -static int mlx5e_set_pauseparam(struct net_device *netdev, - struct ethtool_pauseparam *pauseparam) +static void 
mlx5e_get_pauseparam(struct net_device *netdev, + struct ethtool_pauseparam *pauseparam) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + mlx5e_ethtool_get_pauseparam(priv, pauseparam); +} + +int mlx5e_ethtool_set_pauseparam(struct mlx5e_priv *priv, + struct ethtool_pauseparam *pauseparam) { - struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5_core_dev *mdev = priv->mdev; int err; @@ -1179,13 +1176,21 @@ static int mlx5e_set_pauseparam(struct net_device *netdev, pauseparam->rx_pause ? 1 : 0, pauseparam->tx_pause ? 1 : 0); if (err) { - netdev_err(netdev, "%s: mlx5_set_port_pause failed:0x%x\n", + netdev_err(priv->netdev, "%s: mlx5_set_port_pause failed:0x%x\n", __func__, err); } return err; } +static int mlx5e_set_pauseparam(struct net_device *netdev, + struct ethtool_pauseparam *pauseparam) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_set_pauseparam(priv, pauseparam); +} + int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv, struct ethtool_ts_info *info) { @@ -1505,8 +1510,6 @@ static int mlx5e_get_module_eeprom(struct net_device *netdev, return 0; } -typedef int (*mlx5e_pflag_handler)(struct net_device *netdev, bool enable); - static int set_pflag_cqe_based_moder(struct net_device *netdev, bool enable, bool is_rx_cq) { @@ -1669,23 +1672,58 @@ static int set_pflag_rx_no_csum_complete(struct net_device *netdev, bool enable) return 0; } +static int set_pflag_xdp_tx_mpwqe(struct net_device *netdev, bool enable) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + struct mlx5_core_dev *mdev = priv->mdev; + struct mlx5e_channels new_channels = {}; + int err; + + if (enable && !MLX5_CAP_ETH(mdev, enhanced_multi_pkt_send_wqe)) + return -EOPNOTSUPP; + + new_channels.params = priv->channels.params; + + MLX5E_SET_PFLAG(&new_channels.params, MLX5E_PFLAG_XDP_TX_MPWQE, enable); + + if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { + priv->channels.params = new_channels.params; + return 0; + } + + err = mlx5e_open_channels(priv, &new_channels); + if (err) + return err; + + mlx5e_switch_priv_channels(priv, &new_channels, NULL); + return 0; +} + +static const struct pflag_desc mlx5e_priv_flags[MLX5E_NUM_PFLAGS] = { + { "rx_cqe_moder", set_pflag_rx_cqe_based_moder }, + { "tx_cqe_moder", set_pflag_tx_cqe_based_moder }, + { "rx_cqe_compress", set_pflag_rx_cqe_compress }, + { "rx_striding_rq", set_pflag_rx_striding_rq }, + { "rx_no_csum_complete", set_pflag_rx_no_csum_complete }, + { "xdp_tx_mpwqe", set_pflag_xdp_tx_mpwqe }, +}; + static int mlx5e_handle_pflag(struct net_device *netdev, u32 wanted_flags, - enum mlx5e_priv_flag flag, - mlx5e_pflag_handler pflag_handler) + enum mlx5e_priv_flag flag) { struct mlx5e_priv *priv = netdev_priv(netdev); - bool enable = !!(wanted_flags & flag); + bool enable = !!(wanted_flags & BIT(flag)); u32 changes = wanted_flags ^ priv->channels.params.pflags; int err; - if (!(changes & flag)) + if (!(changes & BIT(flag))) return 0; - err = pflag_handler(netdev, enable); + err = mlx5e_priv_flags[flag].handler(netdev, enable); if (err) { - netdev_err(netdev, "%s private flag 0x%x failed err %d\n", - enable ? "Enable" : "Disable", flag, err); + netdev_err(netdev, "%s private flag '%s' failed err %d\n", + enable ? 
"Enable" : "Disable", mlx5e_priv_flags[flag].name, err); return err; } @@ -1696,38 +1734,17 @@ static int mlx5e_handle_pflag(struct net_device *netdev, static int mlx5e_set_priv_flags(struct net_device *netdev, u32 pflags) { struct mlx5e_priv *priv = netdev_priv(netdev); + enum mlx5e_priv_flag pflag; int err; mutex_lock(&priv->state_lock); - err = mlx5e_handle_pflag(netdev, pflags, - MLX5E_PFLAG_RX_CQE_BASED_MODER, - set_pflag_rx_cqe_based_moder); - if (err) - goto out; - err = mlx5e_handle_pflag(netdev, pflags, - MLX5E_PFLAG_TX_CQE_BASED_MODER, - set_pflag_tx_cqe_based_moder); - if (err) - goto out; - - err = mlx5e_handle_pflag(netdev, pflags, - MLX5E_PFLAG_RX_CQE_COMPRESS, - set_pflag_rx_cqe_compress); - if (err) - goto out; - - err = mlx5e_handle_pflag(netdev, pflags, - MLX5E_PFLAG_RX_STRIDING_RQ, - set_pflag_rx_striding_rq); - if (err) - goto out; - - err = mlx5e_handle_pflag(netdev, pflags, - MLX5E_PFLAG_RX_NO_CSUM_COMPLETE, - set_pflag_rx_no_csum_complete); + for (pflag = 0; pflag < MLX5E_NUM_PFLAGS; pflag++) { + err = mlx5e_handle_pflag(netdev, pflags, pflag); + if (err) + break; + } -out: mutex_unlock(&priv->state_lock); /* Need to fix some features.. */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c index c18dcebe1462..4421c10f58ae 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c @@ -771,6 +771,112 @@ void mlx5e_ethtool_init_steering(struct mlx5e_priv *priv) INIT_LIST_HEAD(&priv->fs.ethtool.rules); } +static enum mlx5e_traffic_types flow_type_to_traffic_type(u32 flow_type) +{ + switch (flow_type) { + case TCP_V4_FLOW: + return MLX5E_TT_IPV4_TCP; + case TCP_V6_FLOW: + return MLX5E_TT_IPV6_TCP; + case UDP_V4_FLOW: + return MLX5E_TT_IPV4_UDP; + case UDP_V6_FLOW: + return MLX5E_TT_IPV6_UDP; + case AH_V4_FLOW: + return MLX5E_TT_IPV4_IPSEC_AH; + case AH_V6_FLOW: + return MLX5E_TT_IPV6_IPSEC_AH; + case ESP_V4_FLOW: + return MLX5E_TT_IPV4_IPSEC_ESP; + case ESP_V6_FLOW: + return MLX5E_TT_IPV6_IPSEC_ESP; + case IPV4_FLOW: + return MLX5E_TT_IPV4; + case IPV6_FLOW: + return MLX5E_TT_IPV6; + default: + return MLX5E_NUM_INDIR_TIRS; + } +} + +static int mlx5e_set_rss_hash_opt(struct mlx5e_priv *priv, + struct ethtool_rxnfc *nfc) +{ + int inlen = MLX5_ST_SZ_BYTES(modify_tir_in); + enum mlx5e_traffic_types tt; + u8 rx_hash_field = 0; + void *in; + + tt = flow_type_to_traffic_type(nfc->flow_type); + if (tt == MLX5E_NUM_INDIR_TIRS) + return -EINVAL; + + /* RSS does not support anything other than hashing to queues + * on src IP, dest IP, TCP/UDP src port and TCP/UDP dest + * port. 
+ */ + if (nfc->flow_type != TCP_V4_FLOW && + nfc->flow_type != TCP_V6_FLOW && + nfc->flow_type != UDP_V4_FLOW && + nfc->flow_type != UDP_V6_FLOW) + return -EOPNOTSUPP; + + if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3)) + return -EOPNOTSUPP; + + if (nfc->data & RXH_IP_SRC) + rx_hash_field |= MLX5_HASH_FIELD_SEL_SRC_IP; + if (nfc->data & RXH_IP_DST) + rx_hash_field |= MLX5_HASH_FIELD_SEL_DST_IP; + if (nfc->data & RXH_L4_B_0_1) + rx_hash_field |= MLX5_HASH_FIELD_SEL_L4_SPORT; + if (nfc->data & RXH_L4_B_2_3) + rx_hash_field |= MLX5_HASH_FIELD_SEL_L4_DPORT; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return -ENOMEM; + + mutex_lock(&priv->state_lock); + + if (rx_hash_field == priv->rss_params.rx_hash_fields[tt]) + goto out; + + priv->rss_params.rx_hash_fields[tt] = rx_hash_field; + mlx5e_modify_tirs_hash(priv, in, inlen); + +out: + mutex_unlock(&priv->state_lock); + kvfree(in); + return 0; +} + +static int mlx5e_get_rss_hash_opt(struct mlx5e_priv *priv, + struct ethtool_rxnfc *nfc) +{ + enum mlx5e_traffic_types tt; + u32 hash_field = 0; + + tt = flow_type_to_traffic_type(nfc->flow_type); + if (tt == MLX5E_NUM_INDIR_TIRS) + return -EINVAL; + + hash_field = priv->rss_params.rx_hash_fields[tt]; + nfc->data = 0; + + if (hash_field & MLX5_HASH_FIELD_SEL_SRC_IP) + nfc->data |= RXH_IP_SRC; + if (hash_field & MLX5_HASH_FIELD_SEL_DST_IP) + nfc->data |= RXH_IP_DST; + if (hash_field & MLX5_HASH_FIELD_SEL_L4_SPORT) + nfc->data |= RXH_L4_B_0_1; + if (hash_field & MLX5_HASH_FIELD_SEL_L4_DPORT) + nfc->data |= RXH_L4_B_2_3; + + return 0; +} + int mlx5e_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd) { int err = 0; @@ -783,6 +889,9 @@ int mlx5e_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd) case ETHTOOL_SRXCLSRLDEL: err = mlx5e_ethtool_flow_remove(priv, cmd->fs.location); break; + case ETHTOOL_SRXFH: + err = mlx5e_set_rss_hash_opt(priv, cmd); + break; default: err = -EOPNOTSUPP; break; @@ -810,6 +919,9 @@ int mlx5e_get_rxnfc(struct net_device *dev, case ETHTOOL_GRXCLSRLALL: err = mlx5e_ethtool_get_all_flows(priv, info, rule_locs); break; + case ETHTOOL_GRXFH: + err = mlx5e_get_rss_hash_opt(priv, info); + break; default: err = -EOPNOTSUPP; break; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index b70cb6fd164c..8cfd2ec7c0a2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -49,6 +49,8 @@ #include "lib/clock.h" #include "en/port.h" #include "en/xdp.h" +#include "lib/eq.h" +#include "en/monitor_stats.h" struct mlx5e_rq_param { u32 rqc[MLX5_ST_SZ_DW(rqc)]; @@ -59,6 +61,7 @@ struct mlx5e_rq_param { struct mlx5e_sq_param { u32 sqc[MLX5_ST_SZ_DW(sqc)]; struct mlx5_wq_param wq; + bool is_mpw; }; struct mlx5e_cq_param { @@ -228,7 +231,7 @@ void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params) MLX5_WQ_TYPE_CYCLIC; } -static void mlx5e_update_carrier(struct mlx5e_priv *priv) +void mlx5e_update_carrier(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; u8 port_state; @@ -267,7 +270,7 @@ void mlx5e_update_stats(struct mlx5e_priv *priv) mlx5e_stats_grps[i].update_stats(priv); } -static void mlx5e_update_ndo_stats(struct mlx5e_priv *priv) +void mlx5e_update_ndo_stats(struct mlx5e_priv *priv) { int i; @@ -298,33 +301,35 @@ void mlx5e_queue_update_stats(struct mlx5e_priv *priv) queue_work(priv->wq, &priv->update_stats_work); } -static void mlx5e_async_event(struct mlx5_core_dev *mdev, 
void *vpriv, - enum mlx5_dev_event event, unsigned long param) +static int async_event(struct notifier_block *nb, unsigned long event, void *data) { - struct mlx5e_priv *priv = vpriv; + struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, events_nb); + struct mlx5_eqe *eqe = data; - if (!test_bit(MLX5E_STATE_ASYNC_EVENTS_ENABLED, &priv->state)) - return; + if (event != MLX5_EVENT_TYPE_PORT_CHANGE) + return NOTIFY_DONE; - switch (event) { - case MLX5_DEV_EVENT_PORT_UP: - case MLX5_DEV_EVENT_PORT_DOWN: + switch (eqe->sub_type) { + case MLX5_PORT_CHANGE_SUBTYPE_DOWN: + case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE: queue_work(priv->wq, &priv->update_carrier_work); break; default: - break; + return NOTIFY_DONE; } + + return NOTIFY_OK; } static void mlx5e_enable_async_events(struct mlx5e_priv *priv) { - set_bit(MLX5E_STATE_ASYNC_EVENTS_ENABLED, &priv->state); + priv->events_nb.notifier_call = async_event; + mlx5_notifier_register(priv->mdev, &priv->events_nb); } static void mlx5e_disable_async_events(struct mlx5e_priv *priv) { - clear_bit(MLX5E_STATE_ASYNC_EVENTS_ENABLED, &priv->state); - synchronize_irq(pci_irq_vector(priv->mdev->pdev, MLX5_EQ_VEC_ASYNC)); + mlx5_notifier_unregister(priv->mdev, &priv->events_nb); } static inline void mlx5e_build_umr_wqe(struct mlx5e_rq *rq, @@ -988,18 +993,42 @@ static void mlx5e_close_rq(struct mlx5e_rq *rq) static void mlx5e_free_xdpsq_db(struct mlx5e_xdpsq *sq) { - kvfree(sq->db.xdpi); + kvfree(sq->db.xdpi_fifo.xi); + kvfree(sq->db.wqe_info); +} + +static int mlx5e_alloc_xdpsq_fifo(struct mlx5e_xdpsq *sq, int numa) +{ + struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo; + int wq_sz = mlx5_wq_cyc_get_size(&sq->wq); + int dsegs_per_wq = wq_sz * MLX5_SEND_WQEBB_NUM_DS; + + xdpi_fifo->xi = kvzalloc_node(sizeof(*xdpi_fifo->xi) * dsegs_per_wq, + GFP_KERNEL, numa); + if (!xdpi_fifo->xi) + return -ENOMEM; + + xdpi_fifo->pc = &sq->xdpi_fifo_pc; + xdpi_fifo->cc = &sq->xdpi_fifo_cc; + xdpi_fifo->mask = dsegs_per_wq - 1; + + return 0; } static int mlx5e_alloc_xdpsq_db(struct mlx5e_xdpsq *sq, int numa) { int wq_sz = mlx5_wq_cyc_get_size(&sq->wq); + int err; - sq->db.xdpi = kvzalloc_node(array_size(wq_sz, sizeof(*sq->db.xdpi)), - GFP_KERNEL, numa); - if (!sq->db.xdpi) { - mlx5e_free_xdpsq_db(sq); + sq->db.wqe_info = kvzalloc_node(sizeof(*sq->db.wqe_info) * wq_sz, + GFP_KERNEL, numa); + if (!sq->db.wqe_info) return -ENOMEM; + + err = mlx5e_alloc_xdpsq_fifo(sq, numa); + if (err) { + mlx5e_free_xdpsq_db(sq); + return err; } return 0; @@ -1558,11 +1587,8 @@ static int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_xdpsq *sq, bool is_redirect) { - unsigned int ds_cnt = MLX5E_XDP_TX_DS_COUNT; struct mlx5e_create_sq_param csp = {}; - unsigned int inline_hdr_sz = 0; int err; - int i; err = mlx5e_alloc_xdpsq(c, params, param, sq, is_redirect); if (err) @@ -1573,30 +1599,40 @@ static int mlx5e_open_xdpsq(struct mlx5e_channel *c, csp.cqn = sq->cq.mcq.cqn; csp.wq_ctrl = &sq->wq_ctrl; csp.min_inline_mode = sq->min_inline_mode; - if (is_redirect) - set_bit(MLX5E_SQ_STATE_REDIRECT, &sq->state); set_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); err = mlx5e_create_sq_rdy(c->mdev, param, &csp, &sq->sqn); if (err) goto err_free_xdpsq; - if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) { - inline_hdr_sz = MLX5E_XDP_MIN_INLINE; - ds_cnt++; - } + mlx5e_set_xmit_fp(sq, param->is_mpw); - /* Pre initialize fixed WQE fields */ - for (i = 0; i < mlx5_wq_cyc_get_size(&sq->wq); i++) { - struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(&sq->wq, i); - struct mlx5_wqe_ctrl_seg *cseg = 
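[Aside: the switch from the mlx5_interface .event callback to a registered notifier_block relies on the usual container_of() trick to recover the owning mlx5e_priv from the embedded notifier. A standalone illustration of the pattern, with the structures simplified:]

#include <stddef.h>
#include <stdio.h>

struct notifier_block {
	int (*notifier_call)(struct notifier_block *nb, unsigned long event,
			     void *data);
};

struct priv {
	int id;
	struct notifier_block events_nb;	/* embedded, as in mlx5e_priv */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static int async_event(struct notifier_block *nb, unsigned long event, void *data)
{
	struct priv *p = container_of(nb, struct priv, events_nb);

	(void)event;
	(void)data;
	printf("event delivered to priv %d\n", p->id);
	return 0;	/* NOTIFY_DONE / NOTIFY_OK in the kernel proper */
}

int main(void)
{
	struct priv p = { .id = 7 };

	p.events_nb.notifier_call = async_event;
	return p.events_nb.notifier_call(&p.events_nb, 0UL, NULL);
}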
&wqe->ctrl; - struct mlx5_wqe_eth_seg *eseg = &wqe->eth; - struct mlx5_wqe_data_seg *dseg; + if (!param->is_mpw) { + unsigned int ds_cnt = MLX5E_XDP_TX_DS_COUNT; + unsigned int inline_hdr_sz = 0; + int i; - cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt); - eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz); + if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) { + inline_hdr_sz = MLX5E_XDP_MIN_INLINE; + ds_cnt++; + } + + /* Pre initialize fixed WQE fields */ + for (i = 0; i < mlx5_wq_cyc_get_size(&sq->wq); i++) { + struct mlx5e_xdp_wqe_info *wi = &sq->db.wqe_info[i]; + struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(&sq->wq, i); + struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl; + struct mlx5_wqe_eth_seg *eseg = &wqe->eth; + struct mlx5_wqe_data_seg *dseg; + + cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt); + eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz); - dseg = (struct mlx5_wqe_data_seg *)cseg + (ds_cnt - 1); - dseg->lkey = sq->mkey_be; + dseg = (struct mlx5_wqe_data_seg *)cseg + (ds_cnt - 1); + dseg->lkey = sq->mkey_be; + + wi->num_wqebbs = 1; + wi->num_ds = 1; + } } return 0; @@ -1608,7 +1644,7 @@ err_free_xdpsq: return err; } -static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq) +static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq) { struct mlx5e_channel *c = sq->channel; @@ -1616,7 +1652,7 @@ static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq) napi_synchronize(&c->napi); mlx5e_destroy_sq(c->mdev, sq->sqn); - mlx5e_free_xdpsq_descs(sq); + mlx5e_free_xdpsq_descs(sq, rq); mlx5e_free_xdpsq(sq); } @@ -1769,11 +1805,6 @@ static void mlx5e_close_cq(struct mlx5e_cq *cq) mlx5e_free_cq(cq); } -static int mlx5e_get_cpu(struct mlx5e_priv *priv, int ix) -{ - return cpumask_first(priv->mdev->priv.irq_info[ix].mask); -} - static int mlx5e_open_tx_cqs(struct mlx5e_channel *c, struct mlx5e_params *params, struct mlx5e_channel_param *cparam) @@ -1924,9 +1955,9 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix, struct mlx5e_channel_param *cparam, struct mlx5e_channel **cp) { + int cpu = cpumask_first(mlx5_comp_irq_get_affinity_mask(priv->mdev, ix)); struct net_dim_cq_moder icocq_moder = {0, 0}; struct net_device *netdev = priv->netdev; - int cpu = mlx5e_get_cpu(priv, ix); struct mlx5e_channel *c; unsigned int irq; int err; @@ -2009,7 +2040,7 @@ err_close_rq: err_close_xdp_sq: if (c->xdp) - mlx5e_close_xdpsq(&c->rq.xdpsq); + mlx5e_close_xdpsq(&c->rq.xdpsq, &c->rq); err_close_sqs: mlx5e_close_sqs(c); @@ -2062,10 +2093,10 @@ static void mlx5e_deactivate_channel(struct mlx5e_channel *c) static void mlx5e_close_channel(struct mlx5e_channel *c) { - mlx5e_close_xdpsq(&c->xdpsq); + mlx5e_close_xdpsq(&c->xdpsq, NULL); mlx5e_close_rq(&c->rq); if (c->xdp) - mlx5e_close_xdpsq(&c->rq.xdpsq); + mlx5e_close_xdpsq(&c->rq.xdpsq, &c->rq); mlx5e_close_sqs(c); mlx5e_close_icosq(&c->icosq); napi_disable(&c->napi); @@ -2232,6 +2263,8 @@ static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv, void *cqc = param->cqc; MLX5_SET(cqc, cqc, uar_page, priv->mdev->priv.uar->index); + if (MLX5_CAP_GEN(priv->mdev, cqe_128_always) && cache_line_size() >= 128) + MLX5_SET(cqc, cqc, cqe_sz, CQE_STRIDE_128_PAD); } static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, @@ -2308,6 +2341,7 @@ static void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, mlx5e_build_sq_param_common(priv, param); MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); + param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE); } static void mlx5e_build_channel_param(struct mlx5e_priv *priv, @@ 
-2510,7 +2544,7 @@ static void mlx5e_fill_rqt_rqns(struct mlx5e_priv *priv, int sz, if (rrp.rss.hfunc == ETH_RSS_HASH_XOR) ix = mlx5e_bits_invert(i, ilog2(sz)); - ix = priv->channels.params.indirection_rqt[ix]; + ix = priv->rss_params.indirection_rqt[ix]; rqn = rrp.rss.channels->c[ix]->rq.rqn; } else { rqn = rrp.rqn; @@ -2593,7 +2627,7 @@ static void mlx5e_redirect_rqts_to_channels(struct mlx5e_priv *priv, { .rss = { .channels = chs, - .hfunc = chs->params.rss_hfunc, + .hfunc = priv->rss_params.hfunc, } }, }; @@ -2613,6 +2647,54 @@ static void mlx5e_redirect_rqts_to_drop(struct mlx5e_priv *priv) mlx5e_redirect_rqts(priv, drop_rrp); } +static const struct mlx5e_tirc_config tirc_default_config[MLX5E_NUM_INDIR_TIRS] = { + [MLX5E_TT_IPV4_TCP] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV4, + .l4_prot_type = MLX5_L4_PROT_TYPE_TCP, + .rx_hash_fields = MLX5_HASH_IP_L4PORTS, + }, + [MLX5E_TT_IPV6_TCP] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV6, + .l4_prot_type = MLX5_L4_PROT_TYPE_TCP, + .rx_hash_fields = MLX5_HASH_IP_L4PORTS, + }, + [MLX5E_TT_IPV4_UDP] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV4, + .l4_prot_type = MLX5_L4_PROT_TYPE_UDP, + .rx_hash_fields = MLX5_HASH_IP_L4PORTS, + }, + [MLX5E_TT_IPV6_UDP] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV6, + .l4_prot_type = MLX5_L4_PROT_TYPE_UDP, + .rx_hash_fields = MLX5_HASH_IP_L4PORTS, + }, + [MLX5E_TT_IPV4_IPSEC_AH] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV4, + .l4_prot_type = 0, + .rx_hash_fields = MLX5_HASH_IP_IPSEC_SPI, + }, + [MLX5E_TT_IPV6_IPSEC_AH] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV6, + .l4_prot_type = 0, + .rx_hash_fields = MLX5_HASH_IP_IPSEC_SPI, + }, + [MLX5E_TT_IPV4_IPSEC_ESP] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV4, + .l4_prot_type = 0, + .rx_hash_fields = MLX5_HASH_IP_IPSEC_SPI, + }, + [MLX5E_TT_IPV6_IPSEC_ESP] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV6, + .l4_prot_type = 0, + .rx_hash_fields = MLX5_HASH_IP_IPSEC_SPI, + }, + [MLX5E_TT_IPV4] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV4, + .l4_prot_type = 0, + .rx_hash_fields = MLX5_HASH_IP, + }, + [MLX5E_TT_IPV6] = { .l3_prot_type = MLX5_L3_PROT_TYPE_IPV6, + .l4_prot_type = 0, + .rx_hash_fields = MLX5_HASH_IP, + }, +}; + +struct mlx5e_tirc_config mlx5e_tirc_get_default_config(enum mlx5e_traffic_types tt) +{ + return tirc_default_config[tt]; +} + static void mlx5e_build_tir_ctx_lro(struct mlx5e_params *params, void *tirc) { if (!params->lro_en) @@ -2628,116 +2710,68 @@ static void mlx5e_build_tir_ctx_lro(struct mlx5e_params *params, void *tirc) MLX5_SET(tirc, tirc, lro_timeout_period_usecs, params->lro_timeout); } -void mlx5e_build_indir_tir_ctx_hash(struct mlx5e_params *params, - enum mlx5e_traffic_types tt, +void mlx5e_build_indir_tir_ctx_hash(struct mlx5e_rss_params *rss_params, + const struct mlx5e_tirc_config *ttconfig, void *tirc, bool inner) { void *hfso = inner ? 
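[Aside: tirc_default_config below replaces the long per-traffic-type switch with a const array using C99 designated initializers, so lookups become plain indexing. The pattern in isolation; the field values here are invented:]

#include <stdio.h>

enum tt { TT_IPV4_TCP, TT_IPV4_UDP, TT_IPV4, TT_NUM };

struct cfg {
	int l3_prot;
	int l4_prot;
	unsigned int hash_fields;
};

static const struct cfg defaults[TT_NUM] = {
	[TT_IPV4_TCP] = { .l3_prot = 4, .l4_prot = 6,  .hash_fields = 0xf },
	[TT_IPV4_UDP] = { .l3_prot = 4, .l4_prot = 17, .hash_fields = 0xf },
	[TT_IPV4]     = { .l3_prot = 4, .l4_prot = 0,  .hash_fields = 0x3 },
};

int main(void)
{
	enum tt t = TT_IPV4_UDP;	/* lookup replaces the old switch */

	printf("l4=%d fields=0x%x\n", defaults[t].l4_prot, defaults[t].hash_fields);
	return 0;
}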
MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_inner) : MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer); -#define MLX5_HASH_IP (MLX5_HASH_FIELD_SEL_SRC_IP |\ - MLX5_HASH_FIELD_SEL_DST_IP) - -#define MLX5_HASH_IP_L4PORTS (MLX5_HASH_FIELD_SEL_SRC_IP |\ - MLX5_HASH_FIELD_SEL_DST_IP |\ - MLX5_HASH_FIELD_SEL_L4_SPORT |\ - MLX5_HASH_FIELD_SEL_L4_DPORT) - -#define MLX5_HASH_IP_IPSEC_SPI (MLX5_HASH_FIELD_SEL_SRC_IP |\ - MLX5_HASH_FIELD_SEL_DST_IP |\ - MLX5_HASH_FIELD_SEL_IPSEC_SPI) - - MLX5_SET(tirc, tirc, rx_hash_fn, mlx5e_rx_hash_fn(params->rss_hfunc)); - if (params->rss_hfunc == ETH_RSS_HASH_TOP) { + MLX5_SET(tirc, tirc, rx_hash_fn, mlx5e_rx_hash_fn(rss_params->hfunc)); + if (rss_params->hfunc == ETH_RSS_HASH_TOP) { void *rss_key = MLX5_ADDR_OF(tirc, tirc, rx_hash_toeplitz_key); size_t len = MLX5_FLD_SZ_BYTES(tirc, rx_hash_toeplitz_key); MLX5_SET(tirc, tirc, rx_hash_symmetric, 1); - memcpy(rss_key, params->toeplitz_hash_key, len); + memcpy(rss_key, rss_params->toeplitz_hash_key, len); } + MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, + ttconfig->l3_prot_type); + MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, + ttconfig->l4_prot_type); + MLX5_SET(rx_hash_field_select, hfso, selected_fields, + ttconfig->rx_hash_fields); +} - switch (tt) { - case MLX5E_TT_IPV4_TCP: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV4); - MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, - MLX5_L4_PROT_TYPE_TCP); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_L4PORTS); - break; - - case MLX5E_TT_IPV6_TCP: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV6); - MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, - MLX5_L4_PROT_TYPE_TCP); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_L4PORTS); - break; - - case MLX5E_TT_IPV4_UDP: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV4); - MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, - MLX5_L4_PROT_TYPE_UDP); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_L4PORTS); - break; - - case MLX5E_TT_IPV6_UDP: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV6); - MLX5_SET(rx_hash_field_select, hfso, l4_prot_type, - MLX5_L4_PROT_TYPE_UDP); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_L4PORTS); - break; - - case MLX5E_TT_IPV4_IPSEC_AH: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV4); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_IPSEC_SPI); - break; +static void mlx5e_update_rx_hash_fields(struct mlx5e_tirc_config *ttconfig, + enum mlx5e_traffic_types tt, + u32 rx_hash_fields) +{ + *ttconfig = tirc_default_config[tt]; + ttconfig->rx_hash_fields = rx_hash_fields; +} - case MLX5E_TT_IPV6_IPSEC_AH: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV6); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_IPSEC_SPI); - break; +void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen) +{ + void *tirc = MLX5_ADDR_OF(modify_tir_in, in, ctx); + struct mlx5e_rss_params *rss = &priv->rss_params; + struct mlx5_core_dev *mdev = priv->mdev; + int ctxlen = MLX5_ST_SZ_BYTES(tirc); + struct mlx5e_tirc_config ttconfig; + int tt; - case MLX5E_TT_IPV4_IPSEC_ESP: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV4); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_IPSEC_SPI); - break; + MLX5_SET(modify_tir_in, in, 
bitmask.hash, 1); - case MLX5E_TT_IPV6_IPSEC_ESP: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV6); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP_IPSEC_SPI); - break; + for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { + memset(tirc, 0, ctxlen); + mlx5e_update_rx_hash_fields(&ttconfig, tt, + rss->rx_hash_fields[tt]); + mlx5e_build_indir_tir_ctx_hash(rss, &ttconfig, tirc, false); + mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in, inlen); + } - case MLX5E_TT_IPV4: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV4); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP); - break; + if (!mlx5e_tunnel_inner_ft_supported(priv->mdev)) + return; - case MLX5E_TT_IPV6: - MLX5_SET(rx_hash_field_select, hfso, l3_prot_type, - MLX5_L3_PROT_TYPE_IPV6); - MLX5_SET(rx_hash_field_select, hfso, selected_fields, - MLX5_HASH_IP); - break; - default: - WARN_ONCE(true, "%s: bad traffic type!\n", __func__); + for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { + memset(tirc, 0, ctxlen); + mlx5e_update_rx_hash_fields(&ttconfig, tt, + rss->rx_hash_fields[tt]); + mlx5e_build_indir_tir_ctx_hash(rss, &ttconfig, tirc, true); + mlx5_core_modify_tir(mdev, priv->inner_indir_tir[tt].tirn, in, + inlen); } } @@ -2794,7 +2828,8 @@ static void mlx5e_build_inner_indir_tir_ctx(struct mlx5e_priv *priv, MLX5_SET(tirc, tirc, indirect_table, priv->indir_rqt.rqtn); MLX5_SET(tirc, tirc, tunneled_offload_en, 0x1); - mlx5e_build_indir_tir_ctx_hash(&priv->channels.params, tt, tirc, true); + mlx5e_build_indir_tir_ctx_hash(&priv->rss_params, + &tirc_default_config[tt], tirc, true); } static int mlx5e_set_mtu(struct mlx5_core_dev *mdev, @@ -2825,7 +2860,7 @@ static void mlx5e_query_mtu(struct mlx5_core_dev *mdev, *mtu = MLX5E_HW2SW_MTU(params, hw_mtu); } -static int mlx5e_set_dev_port_mtu(struct mlx5e_priv *priv) +int mlx5e_set_dev_port_mtu(struct mlx5e_priv *priv) { struct mlx5e_params *params = &priv->channels.params; struct net_device *netdev = priv->netdev; @@ -2905,7 +2940,7 @@ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv) mlx5e_activate_channels(&priv->channels); netif_tx_start_all_queues(priv->netdev); - if (MLX5_ESWITCH_MANAGER(priv->mdev)) + if (mlx5e_is_vport_rep(priv)) mlx5e_add_sqs_fwd_rules(priv); mlx5e_wait_channels_min_rx_wqes(&priv->channels); @@ -2916,7 +2951,7 @@ void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv) { mlx5e_redirect_rqts_to_drop(priv); - if (MLX5_ESWITCH_MANAGER(priv->mdev)) + if (mlx5e_is_vport_rep(priv)) mlx5e_remove_sqs_fwd_rules(priv); /* FIXME: This is a W/A only for tx timeout watch dog false alarm when @@ -3168,7 +3203,7 @@ err_close_tises: return err; } -void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv) +static void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv) { int tc; @@ -3186,7 +3221,9 @@ static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT); MLX5_SET(tirc, tirc, indirect_table, priv->indir_rqt.rqtn); - mlx5e_build_indir_tir_ctx_hash(&priv->channels.params, tt, tirc, false); + + mlx5e_build_indir_tir_ctx_hash(&priv->rss_params, + &tirc_default_config[tt], tirc, false); } static void mlx5e_build_direct_tir_ctx(struct mlx5e_priv *priv, u32 rqtn, u32 *tirc) @@ -3391,11 +3428,14 @@ static int mlx5e_setup_tc_cls_flower(struct mlx5e_priv *priv, { switch (cls_flower->command) { case TC_CLSFLOWER_REPLACE: - return mlx5e_configure_flower(priv, cls_flower, flags); + return mlx5e_configure_flower(priv->netdev, 
priv, cls_flower, + flags); case TC_CLSFLOWER_DESTROY: - return mlx5e_delete_flower(priv, cls_flower, flags); + return mlx5e_delete_flower(priv->netdev, priv, cls_flower, + flags); case TC_CLSFLOWER_STATS: - return mlx5e_stats_flower(priv, cls_flower, flags); + return mlx5e_stats_flower(priv->netdev, priv, cls_flower, + flags); default: return -EOPNOTSUPP; } @@ -3408,7 +3448,8 @@ static int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data, switch (type) { case TC_SETUP_CLSFLOWER: - return mlx5e_setup_tc_cls_flower(priv, type_data, MLX5E_TC_INGRESS); + return mlx5e_setup_tc_cls_flower(priv, type_data, MLX5E_TC_INGRESS | + MLX5E_TC_NIC_OFFLOAD); default: return -EOPNOTSUPP; } @@ -3451,7 +3492,7 @@ static int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type, } } -static void +void mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) { struct mlx5e_priv *priv = netdev_priv(dev); @@ -3459,8 +3500,10 @@ mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) struct mlx5e_vport_stats *vstats = &priv->stats.vport; struct mlx5e_pport_stats *pstats = &priv->stats.pport; - /* update HW stats in background for next time */ - mlx5e_queue_update_stats(priv); + if (!mlx5e_monitor_counter_supported(priv)) { + /* update HW stats in background for next time */ + mlx5e_queue_update_stats(priv); + } if (mlx5e_is_uplink_rep(priv)) { stats->rx_packets = PPORT_802_3_GET(pstats, a_frames_received_ok); @@ -3593,7 +3636,7 @@ static int set_feature_tc_num_filters(struct net_device *netdev, bool enable) { struct mlx5e_priv *priv = netdev_priv(netdev); - if (!enable && mlx5e_tc_num_filters(priv)) { + if (!enable && mlx5e_tc_num_filters(priv, MLX5E_TC_NIC_OFFLOAD)) { netdev_err(netdev, "Active offloaded tc filters, can't turn hw_tc_offload off\n"); return -EINVAL; @@ -3895,7 +3938,7 @@ static int mlx5e_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) } #ifdef CONFIG_MLX5_ESWITCH -static int mlx5e_set_vf_mac(struct net_device *dev, int vf, u8 *mac) +int mlx5e_set_vf_mac(struct net_device *dev, int vf, u8 *mac) { struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5_core_dev *mdev = priv->mdev; @@ -3932,8 +3975,8 @@ static int mlx5e_set_vf_trust(struct net_device *dev, int vf, bool setting) return mlx5_eswitch_set_vport_trust(mdev->priv.eswitch, vf + 1, setting); } -static int mlx5e_set_vf_rate(struct net_device *dev, int vf, int min_tx_rate, - int max_tx_rate) +int mlx5e_set_vf_rate(struct net_device *dev, int vf, int min_tx_rate, + int max_tx_rate) { struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5_core_dev *mdev = priv->mdev; @@ -3974,8 +4017,8 @@ static int mlx5e_set_vf_link_state(struct net_device *dev, int vf, mlx5_ifla_link2vport(link_state)); } -static int mlx5e_get_vf_config(struct net_device *dev, - int vf, struct ifla_vf_info *ivi) +int mlx5e_get_vf_config(struct net_device *dev, + int vf, struct ifla_vf_info *ivi) { struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5_core_dev *mdev = priv->mdev; @@ -3988,8 +4031,8 @@ static int mlx5e_get_vf_config(struct net_device *dev, return 0; } -static int mlx5e_get_vf_stats(struct net_device *dev, - int vf, struct ifla_vf_stats *vf_stats) +int mlx5e_get_vf_stats(struct net_device *dev, + int vf, struct ifla_vf_stats *vf_stats) { struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5_core_dev *mdev = priv->mdev; @@ -4050,8 +4093,7 @@ static void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, u16 port, int add) queue_work(priv->wq, &vxlan_work->work); } -static void 
mlx5e_add_vxlan_port(struct net_device *netdev, - struct udp_tunnel_info *ti) +void mlx5e_add_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti) { struct mlx5e_priv *priv = netdev_priv(netdev); @@ -4064,8 +4106,7 @@ static void mlx5e_add_vxlan_port(struct net_device *netdev, mlx5e_vxlan_queue_work(priv, be16_to_cpu(ti->port), 1); } -static void mlx5e_del_vxlan_port(struct net_device *netdev, - struct udp_tunnel_info *ti) +void mlx5e_del_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti) { struct mlx5e_priv *priv = netdev_priv(netdev); @@ -4115,9 +4156,9 @@ out: return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); } -static netdev_features_t mlx5e_features_check(struct sk_buff *skb, - struct net_device *netdev, - netdev_features_t features) +netdev_features_t mlx5e_features_check(struct sk_buff *skb, + struct net_device *netdev, + netdev_features_t features) { struct mlx5e_priv *priv = netdev_priv(netdev); @@ -4140,17 +4181,17 @@ static netdev_features_t mlx5e_features_check(struct sk_buff *skb, static bool mlx5e_tx_timeout_eq_recover(struct net_device *dev, struct mlx5e_txqsq *sq) { - struct mlx5_eq *eq = sq->cq.mcq.eq; + struct mlx5_eq_comp *eq = sq->cq.mcq.eq; u32 eqe_count; netdev_err(dev, "EQ 0x%x: Cons = 0x%x, irqn = 0x%x\n", - eq->eqn, eq->cons_index, eq->irqn); + eq->core.eqn, eq->core.cons_index, eq->core.irqn); eqe_count = mlx5_eq_poll_irq_disabled(eq); if (!eqe_count) return false; - netdev_err(dev, "Recover %d eqes on EQ 0x%x\n", eqe_count, eq->eqn); + netdev_err(dev, "Recover %d eqes on EQ 0x%x\n", eqe_count, eq->core.eqn); sq->channel->stats->eq_rearm++; return true; } @@ -4377,8 +4418,6 @@ const struct net_device_ops mlx5e_netdev_ops = { .ndo_get_vf_config = mlx5e_get_vf_config, .ndo_set_vf_link_state = mlx5e_set_vf_link_state, .ndo_get_vf_stats = mlx5e_get_vf_stats, - .ndo_has_offload_stats = mlx5e_has_offload_stats, - .ndo_get_offload_stats = mlx5e_get_offload_stats, #endif }; @@ -4524,15 +4563,23 @@ void mlx5e_build_rq_params(struct mlx5_core_dev *mdev, mlx5e_init_rq_type_params(mdev, params); } -void mlx5e_build_rss_params(struct mlx5e_params *params) +void mlx5e_build_rss_params(struct mlx5e_rss_params *rss_params, + u16 num_channels) { - params->rss_hfunc = ETH_RSS_HASH_XOR; - netdev_rss_key_fill(params->toeplitz_hash_key, sizeof(params->toeplitz_hash_key)); - mlx5e_build_default_indir_rqt(params->indirection_rqt, - MLX5E_INDIR_RQT_SIZE, params->num_channels); + enum mlx5e_traffic_types tt; + + rss_params->hfunc = ETH_RSS_HASH_XOR; + netdev_rss_key_fill(rss_params->toeplitz_hash_key, + sizeof(rss_params->toeplitz_hash_key)); + mlx5e_build_default_indir_rqt(rss_params->indirection_rqt, + MLX5E_INDIR_RQT_SIZE, num_channels); + for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) + rss_params->rx_hash_fields[tt] = + tirc_default_config[tt].rx_hash_fields; } void mlx5e_build_nic_params(struct mlx5_core_dev *mdev, + struct mlx5e_rss_params *rss_params, struct mlx5e_params *params, u16 max_channels, u16 mtu) { @@ -4548,6 +4595,10 @@ void mlx5e_build_nic_params(struct mlx5_core_dev *mdev, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE : MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE; + /* XDP SQ */ + MLX5E_SET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE, + MLX5_CAP_ETH(mdev, enhanced_multi_pkt_send_wqe)); + /* set CQE compression */ params->rx_cqe_compress_def = false; if (MLX5_CAP_GEN(mdev, cqe_compression) && @@ -4581,7 +4632,7 @@ void mlx5e_build_nic_params(struct mlx5_core_dev *mdev, params->tx_min_inline_mode = mlx5e_params_calculate_tx_min_inline(mdev); /* RSS */ - 
mlx5e_build_rss_params(params); + mlx5e_build_rss_params(rss_params, params->num_channels); } static void mlx5e_set_netdev_dev_addr(struct net_device *netdev) @@ -4596,12 +4647,6 @@ static void mlx5e_set_netdev_dev_addr(struct net_device *netdev) } } -#if IS_ENABLED(CONFIG_MLX5_ESWITCH) -static const struct switchdev_ops mlx5e_switchdev_ops = { - .switchdev_port_attr_get = mlx5e_attr_get, -}; -#endif - static void mlx5e_build_nic_netdev(struct net_device *netdev) { struct mlx5e_priv *priv = netdev_priv(netdev); @@ -4711,12 +4756,6 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev) netdev->priv_flags |= IFF_UNICAST_FLT; mlx5e_set_netdev_dev_addr(netdev); - -#if IS_ENABLED(CONFIG_MLX5_ESWITCH) - if (MLX5_ESWITCH_MANAGER(mdev)) - netdev->switchdev_ops = &mlx5e_switchdev_ops; -#endif - mlx5e_ipsec_build_netdev(priv); mlx5e_tls_build_netdev(priv); } @@ -4754,14 +4793,16 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev, void *ppriv) { struct mlx5e_priv *priv = netdev_priv(netdev); + struct mlx5e_rss_params *rss = &priv->rss_params; int err; err = mlx5e_netdev_init(netdev, priv, mdev, profile, ppriv); if (err) return err; - mlx5e_build_nic_params(mdev, &priv->channels.params, - mlx5e_get_netdev_max_channels(netdev), netdev->mtu); + mlx5e_build_nic_params(mdev, rss, &priv->channels.params, + mlx5e_get_netdev_max_channels(netdev), + netdev->mtu); mlx5e_timestamp_init(priv); @@ -4891,9 +4932,8 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv) mlx5_lag_add(mdev, netdev); mlx5e_enable_async_events(priv); - - if (MLX5_ESWITCH_MANAGER(priv->mdev)) - mlx5e_register_vport_reps(priv); + if (mlx5e_monitor_counter_supported(priv)) + mlx5e_monitor_counter_init(priv); if (netdev->reg_state != NETREG_REGISTERED) return; @@ -4927,8 +4967,8 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv) queue_work(priv->wq, &priv->set_rx_mode_work); - if (MLX5_ESWITCH_MANAGER(priv->mdev)) - mlx5e_unregister_vport_reps(priv); + if (mlx5e_monitor_counter_supported(priv)) + mlx5e_monitor_counter_cleanup(priv); mlx5e_disable_async_events(priv); mlx5_lag_remove(mdev); @@ -4981,7 +5021,7 @@ int mlx5e_netdev_init(struct net_device *netdev, netif_carrier_off(netdev); #ifdef CONFIG_MLX5_EN_ARFS - netdev->rx_cpu_rmap = mdev->rmap; + netdev->rx_cpu_rmap = mlx5_eq_table_get_rmap(mdev); #endif return 0; @@ -5036,7 +5076,7 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv) if (priv->channels.params.num_channels > max_nch) { mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch); priv->channels.params.num_channels = max_nch; - mlx5e_build_default_indir_rqt(priv->channels.params.indirection_rqt, + mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt, MLX5E_INDIR_RQT_SIZE, max_nch); } @@ -5125,7 +5165,6 @@ static void mlx5e_detach(struct mlx5_core_dev *mdev, void *vpriv) static void *mlx5e_add(struct mlx5_core_dev *mdev) { struct net_device *netdev; - void *rpriv = NULL; void *priv; int err; int nch; @@ -5135,20 +5174,18 @@ static void *mlx5e_add(struct mlx5_core_dev *mdev) return NULL; #ifdef CONFIG_MLX5_ESWITCH - if (MLX5_ESWITCH_MANAGER(mdev)) { - rpriv = mlx5e_alloc_nic_rep_priv(mdev); - if (!rpriv) { - mlx5_core_warn(mdev, "Failed to alloc NIC rep priv data\n"); - return NULL; - } + if (MLX5_ESWITCH_MANAGER(mdev) && + mlx5_eswitch_mode(mdev->priv.eswitch) == SRIOV_OFFLOADS) { + mlx5e_rep_register_vport_reps(mdev); + return mdev; } #endif nch = mlx5e_get_max_num_channels(mdev); - netdev = mlx5e_create_netdev(mdev, &mlx5e_nic_profile, nch, rpriv); + netdev = 
mlx5e_create_netdev(mdev, &mlx5e_nic_profile, nch, NULL); if (!netdev) { mlx5_core_err(mdev, "mlx5e_create_netdev failed\n"); - goto err_free_rpriv; + return NULL; } priv = netdev_priv(netdev); @@ -5174,30 +5211,26 @@ err_detach: mlx5e_detach(mdev, priv); err_destroy_netdev: mlx5e_destroy_netdev(priv); -err_free_rpriv: - kfree(rpriv); return NULL; } static void mlx5e_remove(struct mlx5_core_dev *mdev, void *vpriv) { - struct mlx5e_priv *priv = vpriv; - void *ppriv = priv->ppriv; + struct mlx5e_priv *priv; +#ifdef CONFIG_MLX5_ESWITCH + if (MLX5_ESWITCH_MANAGER(mdev) && vpriv == mdev) { + mlx5e_rep_unregister_vport_reps(mdev); + return; + } +#endif + priv = vpriv; #ifdef CONFIG_MLX5_CORE_EN_DCB mlx5e_dcbnl_delete_app(priv); #endif unregister_netdev(priv->netdev); mlx5e_detach(mdev, vpriv); mlx5e_destroy_netdev(priv); - kfree(ppriv); -} - -static void *mlx5e_get_netdev(void *vpriv) -{ - struct mlx5e_priv *priv = vpriv; - - return priv->netdev; } static struct mlx5_interface mlx5e_interface = { @@ -5205,9 +5238,7 @@ static struct mlx5_interface mlx5e_interface = { .remove = mlx5e_remove, .attach = mlx5e_attach, .detach = mlx5e_detach, - .event = mlx5e_async_event, .protocol = MLX5_INTERFACE_PROTOCOL_ETH, - .get_dev = mlx5e_get_netdev, }; void mlx5e_init(void) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 820fe85100b0..96cc0c6a4014 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -42,14 +42,24 @@ #include "en.h" #include "en_rep.h" #include "en_tc.h" +#include "en/tc_tun.h" #include "fs_core.h" -#define MLX5E_REP_PARAMS_LOG_SQ_SIZE \ - max(0x6, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE) +#define MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE \ + max(0x7, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE) #define MLX5E_REP_PARAMS_DEF_NUM_CHANNELS 1 static const char mlx5e_rep_driver_name[] = "mlx5e_rep"; +struct mlx5e_rep_indr_block_priv { + struct net_device *netdev; + struct mlx5e_rep_priv *rpriv; + + struct list_head list; +}; + +static void mlx5e_rep_indr_unregister_block(struct net_device *netdev); + static void mlx5e_rep_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *drvinfo) { @@ -99,7 +109,7 @@ static void mlx5e_rep_get_strings(struct net_device *dev, } } -static void mlx5e_rep_update_hw_counters(struct mlx5e_priv *priv) +static void mlx5e_vf_rep_update_hw_counters(struct mlx5e_priv *priv) { struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; struct mlx5e_rep_priv *rpriv = priv->ppriv; @@ -122,6 +132,32 @@ static void mlx5e_rep_update_hw_counters(struct mlx5e_priv *priv) vport_stats->tx_bytes = vf_stats.rx_bytes; } +static void mlx5e_uplink_rep_update_hw_counters(struct mlx5e_priv *priv) +{ + struct mlx5e_pport_stats *pstats = &priv->stats.pport; + struct rtnl_link_stats64 *vport_stats; + + mlx5e_grp_802_3_update_stats(priv); + + vport_stats = &priv->stats.vf_vport; + + vport_stats->rx_packets = PPORT_802_3_GET(pstats, a_frames_received_ok); + vport_stats->rx_bytes = PPORT_802_3_GET(pstats, a_octets_received_ok); + vport_stats->tx_packets = PPORT_802_3_GET(pstats, a_frames_transmitted_ok); + vport_stats->tx_bytes = PPORT_802_3_GET(pstats, a_octets_transmitted_ok); +} + +static void mlx5e_rep_update_hw_counters(struct mlx5e_priv *priv) +{ + struct mlx5e_rep_priv *rpriv = priv->ppriv; + struct mlx5_eswitch_rep *rep = rpriv->rep; + + if (rep->vport == FDB_UPLINK_VPORT) + mlx5e_uplink_rep_update_hw_counters(priv); + else + mlx5e_vf_rep_update_hw_counters(priv); +} + static 
void mlx5e_rep_update_sw_counters(struct mlx5e_priv *priv) { struct mlx5e_sw_stats *s = &priv->stats.sw; @@ -257,6 +293,22 @@ static int mlx5e_rep_set_channels(struct net_device *dev, return 0; } +static int mlx5e_rep_get_coalesce(struct net_device *netdev, + struct ethtool_coalesce *coal) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_get_coalesce(priv, coal); +} + +static int mlx5e_rep_set_coalesce(struct net_device *netdev, + struct ethtool_coalesce *coal) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_set_coalesce(priv, coal); +} + static u32 mlx5e_rep_get_rxfh_key_size(struct net_device *netdev) { struct mlx5e_priv *priv = netdev_priv(netdev); @@ -271,7 +323,55 @@ static u32 mlx5e_rep_get_rxfh_indir_size(struct net_device *netdev) return mlx5e_ethtool_get_rxfh_indir_size(priv); } -static const struct ethtool_ops mlx5e_rep_ethtool_ops = { +static void mlx5e_uplink_rep_get_pauseparam(struct net_device *netdev, + struct ethtool_pauseparam *pauseparam) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + mlx5e_ethtool_get_pauseparam(priv, pauseparam); +} + +static int mlx5e_uplink_rep_set_pauseparam(struct net_device *netdev, + struct ethtool_pauseparam *pauseparam) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_set_pauseparam(priv, pauseparam); +} + +static int mlx5e_uplink_rep_get_link_ksettings(struct net_device *netdev, + struct ethtool_link_ksettings *link_ksettings) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_get_link_ksettings(priv, link_ksettings); +} + +static int mlx5e_uplink_rep_set_link_ksettings(struct net_device *netdev, + const struct ethtool_link_ksettings *link_ksettings) +{ + struct mlx5e_priv *priv = netdev_priv(netdev); + + return mlx5e_ethtool_set_link_ksettings(priv, link_ksettings); +} + +static const struct ethtool_ops mlx5e_vf_rep_ethtool_ops = { + .get_drvinfo = mlx5e_rep_get_drvinfo, + .get_link = ethtool_op_get_link, + .get_strings = mlx5e_rep_get_strings, + .get_sset_count = mlx5e_rep_get_sset_count, + .get_ethtool_stats = mlx5e_rep_get_ethtool_stats, + .get_ringparam = mlx5e_rep_get_ringparam, + .set_ringparam = mlx5e_rep_set_ringparam, + .get_channels = mlx5e_rep_get_channels, + .set_channels = mlx5e_rep_set_channels, + .get_coalesce = mlx5e_rep_get_coalesce, + .set_coalesce = mlx5e_rep_set_coalesce, + .get_rxfh_key_size = mlx5e_rep_get_rxfh_key_size, + .get_rxfh_indir_size = mlx5e_rep_get_rxfh_indir_size, +}; + +static const struct ethtool_ops mlx5e_uplink_rep_ethtool_ops = { .get_drvinfo = mlx5e_rep_get_drvinfo, .get_link = ethtool_op_get_link, .get_strings = mlx5e_rep_get_strings, @@ -281,24 +381,44 @@ static const struct ethtool_ops mlx5e_rep_ethtool_ops = { .set_ringparam = mlx5e_rep_set_ringparam, .get_channels = mlx5e_rep_get_channels, .set_channels = mlx5e_rep_set_channels, + .get_coalesce = mlx5e_rep_get_coalesce, + .set_coalesce = mlx5e_rep_set_coalesce, + .get_link_ksettings = mlx5e_uplink_rep_get_link_ksettings, + .set_link_ksettings = mlx5e_uplink_rep_set_link_ksettings, .get_rxfh_key_size = mlx5e_rep_get_rxfh_key_size, .get_rxfh_indir_size = mlx5e_rep_get_rxfh_indir_size, + .get_pauseparam = mlx5e_uplink_rep_get_pauseparam, + .set_pauseparam = mlx5e_uplink_rep_set_pauseparam, }; -int mlx5e_attr_get(struct net_device *dev, struct switchdev_attr *attr) +static int mlx5e_attr_get(struct net_device *dev, struct switchdev_attr *attr) { struct mlx5e_priv *priv = netdev_priv(dev); - struct mlx5e_rep_priv *rpriv = priv->ppriv; - 
struct mlx5_eswitch_rep *rep = rpriv->rep; struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct net_device *uplink_upper = NULL; + struct mlx5e_priv *uplink_priv = NULL; + struct net_device *uplink_dev; if (esw->mode == SRIOV_NONE) return -EOPNOTSUPP; + uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH); + if (uplink_dev) { + uplink_upper = netdev_master_upper_dev_get(uplink_dev); + uplink_priv = netdev_priv(uplink_dev); + } + switch (attr->id) { case SWITCHDEV_ATTR_ID_PORT_PARENT_ID: attr->u.ppid.id_len = ETH_ALEN; - ether_addr_copy(attr->u.ppid.id, rep->hw_id); + if (uplink_upper && mlx5_lag_is_sriov(uplink_priv->mdev)) { + ether_addr_copy(attr->u.ppid.id, uplink_upper->dev_addr); + } else { + struct mlx5e_rep_priv *rpriv = priv->ppriv; + struct mlx5_eswitch_rep *rep = rpriv->rep; + + ether_addr_copy(attr->u.ppid.id, rep->hw_id); + } break; default: return -EOPNOTSUPP; @@ -519,6 +639,184 @@ static void mlx5e_rep_neigh_update(struct work_struct *work) neigh_release(n); } +static struct mlx5e_rep_indr_block_priv * +mlx5e_rep_indr_block_priv_lookup(struct mlx5e_rep_priv *rpriv, + struct net_device *netdev) +{ + struct mlx5e_rep_indr_block_priv *cb_priv; + + /* All callback list access should be protected by RTNL. */ + ASSERT_RTNL(); + + list_for_each_entry(cb_priv, + &rpriv->uplink_priv.tc_indr_block_priv_list, + list) + if (cb_priv->netdev == netdev) + return cb_priv; + + return NULL; +} + +static void mlx5e_rep_indr_clean_block_privs(struct mlx5e_rep_priv *rpriv) +{ + struct mlx5e_rep_indr_block_priv *cb_priv, *temp; + struct list_head *head = &rpriv->uplink_priv.tc_indr_block_priv_list; + + list_for_each_entry_safe(cb_priv, temp, head, list) { + mlx5e_rep_indr_unregister_block(cb_priv->netdev); + kfree(cb_priv); + } +} + +static int +mlx5e_rep_indr_offload(struct net_device *netdev, + struct tc_cls_flower_offload *flower, + struct mlx5e_rep_indr_block_priv *indr_priv) +{ + struct mlx5e_priv *priv = netdev_priv(indr_priv->rpriv->netdev); + int flags = MLX5E_TC_EGRESS | MLX5E_TC_ESW_OFFLOAD; + int err = 0; + + switch (flower->command) { + case TC_CLSFLOWER_REPLACE: + err = mlx5e_configure_flower(netdev, priv, flower, flags); + break; + case TC_CLSFLOWER_DESTROY: + err = mlx5e_delete_flower(netdev, priv, flower, flags); + break; + case TC_CLSFLOWER_STATS: + err = mlx5e_stats_flower(netdev, priv, flower, flags); + break; + default: + err = -EOPNOTSUPP; + } + + return err; +} + +static int mlx5e_rep_indr_setup_block_cb(enum tc_setup_type type, + void *type_data, void *indr_priv) +{ + struct mlx5e_rep_indr_block_priv *priv = indr_priv; + + switch (type) { + case TC_SETUP_CLSFLOWER: + return mlx5e_rep_indr_offload(priv->netdev, type_data, priv); + default: + return -EOPNOTSUPP; + } +} + +static int +mlx5e_rep_indr_setup_tc_block(struct net_device *netdev, + struct mlx5e_rep_priv *rpriv, + struct tc_block_offload *f) +{ + struct mlx5e_rep_indr_block_priv *indr_priv; + int err = 0; + + if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) + return -EOPNOTSUPP; + + switch (f->command) { + case TC_BLOCK_BIND: + indr_priv = mlx5e_rep_indr_block_priv_lookup(rpriv, netdev); + if (indr_priv) + return -EEXIST; + + indr_priv = kmalloc(sizeof(*indr_priv), GFP_KERNEL); + if (!indr_priv) + return -ENOMEM; + + indr_priv->netdev = netdev; + indr_priv->rpriv = rpriv; + list_add(&indr_priv->list, + &rpriv->uplink_priv.tc_indr_block_priv_list); + + err = tcf_block_cb_register(f->block, + mlx5e_rep_indr_setup_block_cb, + netdev, indr_priv, f->extack); + if (err) { + 
list_del(&indr_priv->list); + kfree(indr_priv); + } + + return err; + case TC_BLOCK_UNBIND: + tcf_block_cb_unregister(f->block, + mlx5e_rep_indr_setup_block_cb, + netdev); + indr_priv = mlx5e_rep_indr_block_priv_lookup(rpriv, netdev); + if (indr_priv) { + list_del(&indr_priv->list); + kfree(indr_priv); + } + + return 0; + default: + return -EOPNOTSUPP; + } + return 0; +} + +static +int mlx5e_rep_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv, + enum tc_setup_type type, void *type_data) +{ + switch (type) { + case TC_SETUP_BLOCK: + return mlx5e_rep_indr_setup_tc_block(netdev, cb_priv, + type_data); + default: + return -EOPNOTSUPP; + } +} + +static int mlx5e_rep_indr_register_block(struct mlx5e_rep_priv *rpriv, + struct net_device *netdev) +{ + int err; + + err = __tc_indr_block_cb_register(netdev, rpriv, + mlx5e_rep_indr_setup_tc_cb, + netdev); + if (err) { + struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); + + mlx5_core_err(priv->mdev, "Failed to register remote block notifier for %s err=%d\n", + netdev_name(netdev), err); + } + return err; +} + +static void mlx5e_rep_indr_unregister_block(struct net_device *netdev) +{ + __tc_indr_block_cb_unregister(netdev, mlx5e_rep_indr_setup_tc_cb, + netdev); +} + +static int mlx5e_nic_rep_netdevice_event(struct notifier_block *nb, + unsigned long event, void *ptr) +{ + struct mlx5e_rep_priv *rpriv = container_of(nb, struct mlx5e_rep_priv, + uplink_priv.netdevice_nb); + struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); + struct net_device *netdev = netdev_notifier_info_to_dev(ptr); + + if (!mlx5e_tc_tun_device_to_offload(priv, netdev)) + return NOTIFY_OK; + + switch (event) { + case NETDEV_REGISTER: + mlx5e_rep_indr_register_block(rpriv, netdev); + break; + case NETDEV_UNREGISTER: + mlx5e_rep_indr_unregister_block(netdev); + break; + } + return NOTIFY_OK; +} + static struct mlx5e_neigh_hash_entry * mlx5e_rep_neigh_entry_lookup(struct mlx5e_priv *priv, struct mlx5e_neigh *m_neigh); @@ -780,7 +1078,7 @@ void mlx5e_rep_encap_entry_detach(struct mlx5e_priv *priv, mlx5e_rep_neigh_entry_destroy(priv, nhe); } -static int mlx5e_rep_open(struct net_device *dev) +static int mlx5e_vf_rep_open(struct net_device *dev) { struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5e_rep_priv *rpriv = priv->ppriv; @@ -802,7 +1100,7 @@ unlock: return err; } -static int mlx5e_rep_close(struct net_device *dev) +static int mlx5e_vf_rep_close(struct net_device *dev) { struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5e_rep_priv *rpriv = priv->ppriv; @@ -839,24 +1137,14 @@ mlx5e_rep_setup_tc_cls_flower(struct mlx5e_priv *priv, { switch (cls_flower->command) { case TC_CLSFLOWER_REPLACE: - return mlx5e_configure_flower(priv, cls_flower, flags); + return mlx5e_configure_flower(priv->netdev, priv, cls_flower, + flags); case TC_CLSFLOWER_DESTROY: - return mlx5e_delete_flower(priv, cls_flower, flags); + return mlx5e_delete_flower(priv->netdev, priv, cls_flower, + flags); case TC_CLSFLOWER_STATS: - return mlx5e_stats_flower(priv, cls_flower, flags); - default: - return -EOPNOTSUPP; - } -} - -static int mlx5e_rep_setup_tc_cb_egdev(enum tc_setup_type type, void *type_data, - void *cb_priv) -{ - struct mlx5e_priv *priv = cb_priv; - - switch (type) { - case TC_SETUP_CLSFLOWER: - return mlx5e_rep_setup_tc_cls_flower(priv, type_data, MLX5E_TC_EGRESS); + return mlx5e_stats_flower(priv->netdev, priv, cls_flower, + flags); default: return -EOPNOTSUPP; } @@ -869,7 +1157,8 @@ static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data, switch (type) { case 
TC_SETUP_CLSFLOWER: - return mlx5e_rep_setup_tc_cls_flower(priv, type_data, MLX5E_TC_INGRESS); + return mlx5e_rep_setup_tc_cls_flower(priv, type_data, MLX5E_TC_INGRESS | + MLX5E_TC_ESW_OFFLOAD); default: return -EOPNOTSUPP; } @@ -908,43 +1197,23 @@ static int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type, bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv) { - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; struct mlx5e_rep_priv *rpriv = priv->ppriv; struct mlx5_eswitch_rep *rep; if (!MLX5_ESWITCH_MANAGER(priv->mdev)) return false; - rep = rpriv->rep; - if (esw->mode == SRIOV_OFFLOADS && - rep && rep->vport == FDB_UPLINK_VPORT) - return true; - - return false; -} - -static bool mlx5e_is_vf_vport_rep(struct mlx5e_priv *priv) -{ - struct mlx5e_rep_priv *rpriv = priv->ppriv; - struct mlx5_eswitch_rep *rep; - - if (!MLX5_ESWITCH_MANAGER(priv->mdev)) + if (!rpriv) /* non vport rep mlx5e instances don't use this field */ return false; rep = rpriv->rep; - if (rep && rep->vport != FDB_UPLINK_VPORT) - return true; - - return false; + return (rep->vport == FDB_UPLINK_VPORT); } -bool mlx5e_has_offload_stats(const struct net_device *dev, int attr_id) +static bool mlx5e_rep_has_offload_stats(const struct net_device *dev, int attr_id) { - struct mlx5e_priv *priv = netdev_priv(dev); - switch (attr_id) { case IFLA_OFFLOAD_XSTATS_CPU_HIT: - if (mlx5e_is_vf_vport_rep(priv) || mlx5e_is_uplink_rep(priv)) return true; } @@ -970,8 +1239,8 @@ mlx5e_get_sw_stats64(const struct net_device *dev, return 0; } -int mlx5e_get_offload_stats(int attr_id, const struct net_device *dev, - void *sp) +static int mlx5e_rep_get_offload_stats(int attr_id, const struct net_device *dev, + void *sp) { switch (attr_id) { case IFLA_OFFLOAD_XSTATS_CPU_HIT: @@ -982,7 +1251,7 @@ int mlx5e_get_offload_stats(int attr_id, const struct net_device *dev, } static void -mlx5e_rep_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) +mlx5e_vf_rep_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) { struct mlx5e_priv *priv = netdev_priv(dev); @@ -991,37 +1260,93 @@ mlx5e_rep_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) memcpy(stats, &priv->stats.vf_vport, sizeof(*stats)); } +static int mlx5e_vf_rep_change_mtu(struct net_device *netdev, int new_mtu) +{ + return mlx5e_change_mtu(netdev, new_mtu, NULL); +} + +static int mlx5e_uplink_rep_change_mtu(struct net_device *netdev, int new_mtu) +{ + return mlx5e_change_mtu(netdev, new_mtu, mlx5e_set_dev_port_mtu); +} + +static int mlx5e_uplink_rep_set_mac(struct net_device *netdev, void *addr) +{ + struct sockaddr *saddr = addr; + + if (!is_valid_ether_addr(saddr->sa_data)) + return -EADDRNOTAVAIL; + + ether_addr_copy(netdev->dev_addr, saddr->sa_data); + return 0; +} + static const struct switchdev_ops mlx5e_rep_switchdev_ops = { .switchdev_port_attr_get = mlx5e_attr_get, }; -static int mlx5e_change_rep_mtu(struct net_device *netdev, int new_mtu) -{ - return mlx5e_change_mtu(netdev, new_mtu, NULL); -} +static const struct net_device_ops mlx5e_netdev_ops_vf_rep = { + .ndo_open = mlx5e_vf_rep_open, + .ndo_stop = mlx5e_vf_rep_close, + .ndo_start_xmit = mlx5e_xmit, + .ndo_get_phys_port_name = mlx5e_rep_get_phys_port_name, + .ndo_setup_tc = mlx5e_rep_setup_tc, + .ndo_get_stats64 = mlx5e_vf_rep_get_stats, + .ndo_has_offload_stats = mlx5e_rep_has_offload_stats, + .ndo_get_offload_stats = mlx5e_rep_get_offload_stats, + .ndo_change_mtu = mlx5e_vf_rep_change_mtu, +}; -static const struct net_device_ops mlx5e_netdev_ops_rep = { - .ndo_open = 
mlx5e_rep_open, - .ndo_stop = mlx5e_rep_close, +static const struct net_device_ops mlx5e_netdev_ops_uplink_rep = { + .ndo_open = mlx5e_open, + .ndo_stop = mlx5e_close, .ndo_start_xmit = mlx5e_xmit, + .ndo_set_mac_address = mlx5e_uplink_rep_set_mac, .ndo_get_phys_port_name = mlx5e_rep_get_phys_port_name, .ndo_setup_tc = mlx5e_rep_setup_tc, - .ndo_get_stats64 = mlx5e_rep_get_stats, - .ndo_has_offload_stats = mlx5e_has_offload_stats, - .ndo_get_offload_stats = mlx5e_get_offload_stats, - .ndo_change_mtu = mlx5e_change_rep_mtu, + .ndo_get_stats64 = mlx5e_get_stats, + .ndo_has_offload_stats = mlx5e_rep_has_offload_stats, + .ndo_get_offload_stats = mlx5e_rep_get_offload_stats, + .ndo_change_mtu = mlx5e_uplink_rep_change_mtu, + .ndo_udp_tunnel_add = mlx5e_add_vxlan_port, + .ndo_udp_tunnel_del = mlx5e_del_vxlan_port, + .ndo_features_check = mlx5e_features_check, + .ndo_set_vf_mac = mlx5e_set_vf_mac, + .ndo_set_vf_rate = mlx5e_set_vf_rate, + .ndo_get_vf_config = mlx5e_get_vf_config, + .ndo_get_vf_stats = mlx5e_get_vf_stats, }; -static void mlx5e_build_rep_params(struct mlx5_core_dev *mdev, - struct mlx5e_params *params, u16 mtu) +bool mlx5e_eswitch_rep(struct net_device *netdev) +{ + if (netdev->netdev_ops == &mlx5e_netdev_ops_vf_rep || + netdev->netdev_ops == &mlx5e_netdev_ops_uplink_rep) + return true; + + return false; +} + +static void mlx5e_build_rep_params(struct net_device *netdev) { + struct mlx5e_priv *priv = netdev_priv(netdev); + struct mlx5e_rep_priv *rpriv = priv->ppriv; + struct mlx5_eswitch_rep *rep = rpriv->rep; + struct mlx5_core_dev *mdev = priv->mdev; + struct mlx5e_params *params; + u8 cq_period_mode = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ? MLX5_CQ_PERIOD_MODE_START_FROM_CQE : MLX5_CQ_PERIOD_MODE_START_FROM_EQE; + params = &priv->channels.params; params->hard_mtu = MLX5E_ETH_HARD_MTU; - params->sw_mtu = mtu; - params->log_sq_size = MLX5E_REP_PARAMS_LOG_SQ_SIZE; + params->sw_mtu = netdev->mtu; + + /* SQ */ + if (rep->vport == FDB_UPLINK_VPORT) + params->log_sq_size = MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE; + else + params->log_sq_size = MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE; /* RQ */ mlx5e_build_rq_params(mdev, params); @@ -1035,24 +1360,38 @@ static void mlx5e_build_rep_params(struct mlx5_core_dev *mdev, mlx5_query_min_inline(mdev, ¶ms->tx_min_inline_mode); /* RSS */ - mlx5e_build_rss_params(params); + mlx5e_build_rss_params(&priv->rss_params, params->num_channels); } static void mlx5e_build_rep_netdev(struct net_device *netdev) { struct mlx5e_priv *priv = netdev_priv(netdev); + struct mlx5e_rep_priv *rpriv = priv->ppriv; + struct mlx5_eswitch_rep *rep = rpriv->rep; struct mlx5_core_dev *mdev = priv->mdev; - u16 max_mtu; - netdev->netdev_ops = &mlx5e_netdev_ops_rep; + if (rep->vport == FDB_UPLINK_VPORT) { + SET_NETDEV_DEV(netdev, &priv->mdev->pdev->dev); + netdev->netdev_ops = &mlx5e_netdev_ops_uplink_rep; + /* we want a persistent mac for the uplink rep */ + mlx5_query_nic_vport_mac_address(mdev, 0, netdev->dev_addr); + netdev->ethtool_ops = &mlx5e_uplink_rep_ethtool_ops; +#ifdef CONFIG_MLX5_CORE_EN_DCB + if (MLX5_CAP_GEN(mdev, qos)) + netdev->dcbnl_ops = &mlx5e_dcbnl_ops; +#endif + } else { + netdev->netdev_ops = &mlx5e_netdev_ops_vf_rep; + eth_hw_addr_random(netdev); + netdev->ethtool_ops = &mlx5e_vf_rep_ethtool_ops; + } netdev->watchdog_timeo = 15 * HZ; - netdev->ethtool_ops = &mlx5e_rep_ethtool_ops; netdev->switchdev_ops = &mlx5e_rep_switchdev_ops; - netdev->features |= NETIF_F_VLAN_CHALLENGED | NETIF_F_HW_TC | NETIF_F_NETNS_LOCAL; + netdev->features |= NETIF_F_HW_TC | 
NETIF_F_NETNS_LOCAL; netdev->hw_features |= NETIF_F_HW_TC; netdev->hw_features |= NETIF_F_SG; @@ -1063,13 +1402,10 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev) netdev->hw_features |= NETIF_F_TSO6; netdev->hw_features |= NETIF_F_RXCSUM; - netdev->features |= netdev->hw_features; - - eth_hw_addr_random(netdev); + if (rep->vport != FDB_UPLINK_VPORT) + netdev->features |= NETIF_F_VLAN_CHALLENGED; - netdev->min_mtu = ETH_MIN_MTU; - mlx5_query_port_max_mtu(mdev, &max_mtu, 1); - netdev->max_mtu = MLX5E_HW2SW_MTU(&priv->channels.params, max_mtu); + netdev->features |= netdev->hw_features; } static int mlx5e_init_rep(struct mlx5_core_dev *mdev, @@ -1086,7 +1422,7 @@ static int mlx5e_init_rep(struct mlx5_core_dev *mdev, priv->channels.params.num_channels = MLX5E_REP_PARAMS_DEF_NUM_CHANNELS; - mlx5e_build_rep_params(mdev, &priv->channels.params, netdev->mtu); + mlx5e_build_rep_params(netdev); mlx5e_build_rep_netdev(netdev); mlx5e_timestamp_init(priv); @@ -1209,94 +1545,173 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv) static int mlx5e_init_rep_tx(struct mlx5e_priv *priv) { - int err; + struct mlx5e_rep_priv *rpriv = priv->ppriv; + struct mlx5_rep_uplink_priv *uplink_priv; + int tc, err; err = mlx5e_create_tises(priv); if (err) { mlx5_core_warn(priv->mdev, "create tises failed, %d\n", err); return err; } - return 0; -} -static const struct mlx5e_profile mlx5e_rep_profile = { - .init = mlx5e_init_rep, - .cleanup = mlx5e_cleanup_rep, - .init_rx = mlx5e_init_rep_rx, - .cleanup_rx = mlx5e_cleanup_rep_rx, - .init_tx = mlx5e_init_rep_tx, - .cleanup_tx = mlx5e_cleanup_nic_tx, - .update_stats = mlx5e_rep_update_hw_counters, - .update_carrier = NULL, - .rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe_rep, - .rx_handlers.handle_rx_cqe_mpwqe = mlx5e_handle_rx_cqe_mpwrq, - .max_tc = 1, -}; + if (rpriv->rep->vport == FDB_UPLINK_VPORT) { + uplink_priv = &rpriv->uplink_priv; -/* e-Switch vport representors */ + /* init shared tc flow table */ + err = mlx5e_tc_esw_init(&uplink_priv->tc_ht); + if (err) + goto destroy_tises; + + /* init indirect block notifications */ + INIT_LIST_HEAD(&uplink_priv->tc_indr_block_priv_list); + uplink_priv->netdevice_nb.notifier_call = mlx5e_nic_rep_netdevice_event; + err = register_netdevice_notifier(&uplink_priv->netdevice_nb); + if (err) { + mlx5_core_err(priv->mdev, "Failed to register netdev notifier\n"); + goto tc_esw_cleanup; + } + } -static int -mlx5e_nic_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) + return 0; + +tc_esw_cleanup: + mlx5e_tc_esw_cleanup(&uplink_priv->tc_ht); +destroy_tises: + for (tc = 0; tc < priv->profile->max_tc; tc++) + mlx5e_destroy_tis(priv->mdev, priv->tisn[tc]); + return err; +} + +static void mlx5e_cleanup_rep_tx(struct mlx5e_priv *priv) { - struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep); - struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); + struct mlx5e_rep_priv *rpriv = priv->ppriv; + int tc; - int err; + for (tc = 0; tc < priv->profile->max_tc; tc++) + mlx5e_destroy_tis(priv->mdev, priv->tisn[tc]); - if (test_bit(MLX5E_STATE_OPENED, &priv->state)) { - err = mlx5e_add_sqs_fwd_rules(priv); - if (err) - return err; + if (rpriv->rep->vport == FDB_UPLINK_VPORT) { + /* clean indirect TC block notifications */ + unregister_netdevice_notifier(&rpriv->uplink_priv.netdevice_nb); + mlx5e_rep_indr_clean_block_privs(rpriv); + + /* delete shared tc flow table */ + mlx5e_tc_esw_cleanup(&rpriv->uplink_priv.tc_ht); } +} - err = mlx5e_rep_neigh_init(rpriv); - if (err) - goto err_remove_sqs; +static 
void mlx5e_vf_rep_enable(struct mlx5e_priv *priv) +{ + struct net_device *netdev = priv->netdev; + struct mlx5_core_dev *mdev = priv->mdev; + u16 max_mtu; - /* init shared tc flow table */ - err = mlx5e_tc_esw_init(&rpriv->tc_ht); - if (err) - goto err_neigh_cleanup; + netdev->min_mtu = ETH_MIN_MTU; + mlx5_query_port_max_mtu(mdev, &max_mtu, 1); + netdev->max_mtu = MLX5E_HW2SW_MTU(&priv->channels.params, max_mtu); +} - return 0; +static int uplink_rep_async_event(struct notifier_block *nb, unsigned long event, void *data) +{ + struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, events_nb); + struct mlx5_eqe *eqe = data; -err_neigh_cleanup: - mlx5e_rep_neigh_cleanup(rpriv); -err_remove_sqs: - mlx5e_remove_sqs_fwd_rules(priv); - return err; + if (event != MLX5_EVENT_TYPE_PORT_CHANGE) + return NOTIFY_DONE; + + switch (eqe->sub_type) { + case MLX5_PORT_CHANGE_SUBTYPE_DOWN: + case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE: + queue_work(priv->wq, &priv->update_carrier_work); + break; + default: + return NOTIFY_DONE; + } + + return NOTIFY_OK; } -static void -mlx5e_nic_rep_unload(struct mlx5_eswitch_rep *rep) +static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv) { - struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep); - struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); + struct net_device *netdev = priv->netdev; + struct mlx5_core_dev *mdev = priv->mdev; + u16 max_mtu; - if (test_bit(MLX5E_STATE_OPENED, &priv->state)) - mlx5e_remove_sqs_fwd_rules(priv); + netdev->min_mtu = ETH_MIN_MTU; + mlx5_query_port_max_mtu(priv->mdev, &max_mtu, 1); + netdev->max_mtu = MLX5E_HW2SW_MTU(&priv->channels.params, max_mtu); + mlx5e_set_dev_port_mtu(priv); + + mlx5_lag_add(mdev, netdev); + priv->events_nb.notifier_call = uplink_rep_async_event; + mlx5_notifier_register(mdev, &priv->events_nb); +#ifdef CONFIG_MLX5_CORE_EN_DCB + mlx5e_dcbnl_initialize(priv); + mlx5e_dcbnl_init_app(priv); +#endif +} - /* clean uplink offloaded TC rules, delete shared tc flow table */ - mlx5e_tc_esw_cleanup(&rpriv->tc_ht); +static void mlx5e_uplink_rep_disable(struct mlx5e_priv *priv) +{ + struct mlx5_core_dev *mdev = priv->mdev; - mlx5e_rep_neigh_cleanup(rpriv); +#ifdef CONFIG_MLX5_CORE_EN_DCB + mlx5e_dcbnl_delete_app(priv); +#endif + mlx5_notifier_unregister(mdev, &priv->events_nb); + mlx5_lag_remove(mdev); } +static const struct mlx5e_profile mlx5e_vf_rep_profile = { + .init = mlx5e_init_rep, + .cleanup = mlx5e_cleanup_rep, + .init_rx = mlx5e_init_rep_rx, + .cleanup_rx = mlx5e_cleanup_rep_rx, + .init_tx = mlx5e_init_rep_tx, + .cleanup_tx = mlx5e_cleanup_rep_tx, + .enable = mlx5e_vf_rep_enable, + .update_stats = mlx5e_vf_rep_update_hw_counters, + .rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe_rep, + .rx_handlers.handle_rx_cqe_mpwqe = mlx5e_handle_rx_cqe_mpwrq, + .max_tc = 1, +}; + +static const struct mlx5e_profile mlx5e_uplink_rep_profile = { + .init = mlx5e_init_rep, + .cleanup = mlx5e_cleanup_rep, + .init_rx = mlx5e_init_rep_rx, + .cleanup_rx = mlx5e_cleanup_rep_rx, + .init_tx = mlx5e_init_rep_tx, + .cleanup_tx = mlx5e_cleanup_rep_tx, + .enable = mlx5e_uplink_rep_enable, + .disable = mlx5e_uplink_rep_disable, + .update_stats = mlx5e_uplink_rep_update_hw_counters, + .update_carrier = mlx5e_update_carrier, + .rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe_rep, + .rx_handlers.handle_rx_cqe_mpwqe = mlx5e_handle_rx_cqe_mpwrq, + .max_tc = MLX5E_MAX_NUM_TC, +}; + +/* e-Switch vport representors */ static int mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) { - struct mlx5e_rep_priv 
*uplink_rpriv; + const struct mlx5e_profile *profile; struct mlx5e_rep_priv *rpriv; struct net_device *netdev; - struct mlx5e_priv *upriv; int nch, err; rpriv = kzalloc(sizeof(*rpriv), GFP_KERNEL); if (!rpriv) return -ENOMEM; + /* rpriv->rep to be looked up when profile->init() is called */ + rpriv->rep = rep; + nch = mlx5e_get_max_num_channels(dev); - netdev = mlx5e_create_netdev(dev, &mlx5e_rep_profile, nch, rpriv); + profile = (rep->vport == FDB_UPLINK_VPORT) ? &mlx5e_uplink_rep_profile : &mlx5e_vf_rep_profile; + netdev = mlx5e_create_netdev(dev, profile, nch, rpriv); if (!netdev) { pr_warn("Failed to create representor netdev for vport %d\n", rep->vport); @@ -1305,15 +1720,20 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) } rpriv->netdev = netdev; - rpriv->rep = rep; rep->rep_if[REP_ETH].priv = rpriv; INIT_LIST_HEAD(&rpriv->vport_sqs_list); + if (rep->vport == FDB_UPLINK_VPORT) { + err = mlx5e_create_mdev_resources(dev); + if (err) + goto err_destroy_netdev; + } + err = mlx5e_attach_netdev(netdev_priv(netdev)); if (err) { pr_warn("Failed to attach representor netdev for vport %d\n", rep->vport); - goto err_destroy_netdev; + goto err_destroy_mdev_resources; } err = mlx5e_rep_neigh_init(rpriv); @@ -1323,32 +1743,25 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) goto err_detach_netdev; } - uplink_rpriv = mlx5_eswitch_get_uplink_priv(dev->priv.eswitch, REP_ETH); - upriv = netdev_priv(uplink_rpriv->netdev); - err = tc_setup_cb_egdev_register(netdev, mlx5e_rep_setup_tc_cb_egdev, - upriv); - if (err) - goto err_neigh_cleanup; - err = register_netdev(netdev); if (err) { pr_warn("Failed to register representor netdev for vport %d\n", rep->vport); - goto err_egdev_cleanup; + goto err_neigh_cleanup; } return 0; -err_egdev_cleanup: - tc_setup_cb_egdev_unregister(netdev, mlx5e_rep_setup_tc_cb_egdev, - upriv); - err_neigh_cleanup: mlx5e_rep_neigh_cleanup(rpriv); err_detach_netdev: mlx5e_detach_netdev(netdev_priv(netdev)); +err_destroy_mdev_resources: + if (rep->vport == FDB_UPLINK_VPORT) + mlx5e_destroy_mdev_resources(dev); + err_destroy_netdev: mlx5e_destroy_netdev(netdev_priv(netdev)); kfree(rpriv); @@ -1361,18 +1774,13 @@ mlx5e_vport_rep_unload(struct mlx5_eswitch_rep *rep) struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep); struct net_device *netdev = rpriv->netdev; struct mlx5e_priv *priv = netdev_priv(netdev); - struct mlx5e_rep_priv *uplink_rpriv; void *ppriv = priv->ppriv; - struct mlx5e_priv *upriv; unregister_netdev(netdev); - uplink_rpriv = mlx5_eswitch_get_uplink_priv(priv->mdev->priv.eswitch, - REP_ETH); - upriv = netdev_priv(uplink_rpriv->netdev); - tc_setup_cb_egdev_unregister(netdev, mlx5e_rep_setup_tc_cb_egdev, - upriv); mlx5e_rep_neigh_cleanup(rpriv); mlx5e_detach_netdev(priv); + if (rep->vport == FDB_UPLINK_VPORT) + mlx5e_destroy_mdev_resources(priv->mdev); mlx5e_destroy_netdev(priv); kfree(ppriv); /* mlx5e_rep_priv */ } @@ -1386,14 +1794,13 @@ static void *mlx5e_vport_rep_get_proto_dev(struct mlx5_eswitch_rep *rep) return rpriv->netdev; } -static void mlx5e_rep_register_vf_vports(struct mlx5e_priv *priv) +void mlx5e_rep_register_vport_reps(struct mlx5_core_dev *mdev) { - struct mlx5_core_dev *mdev = priv->mdev; - struct mlx5_eswitch *esw = mdev->priv.eswitch; + struct mlx5_eswitch *esw = mdev->priv.eswitch; int total_vfs = MLX5_TOTAL_VPORTS(mdev); int vport; - for (vport = 1; vport < total_vfs; vport++) { + for (vport = 0; vport < total_vfs; vport++) { struct mlx5_eswitch_rep_if rep_if = {}; rep_if.load = 
mlx5e_vport_rep_load; @@ -1403,55 +1810,12 @@ static void mlx5e_rep_register_vf_vports(struct mlx5e_priv *priv) } } -static void mlx5e_rep_unregister_vf_vports(struct mlx5e_priv *priv) +void mlx5e_rep_unregister_vport_reps(struct mlx5_core_dev *mdev) { - struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_eswitch *esw = mdev->priv.eswitch; int total_vfs = MLX5_TOTAL_VPORTS(mdev); int vport; - for (vport = 1; vport < total_vfs; vport++) + for (vport = total_vfs - 1; vport >= 0; vport--) mlx5_eswitch_unregister_vport_rep(esw, vport, REP_ETH); } - -void mlx5e_register_vport_reps(struct mlx5e_priv *priv) -{ - struct mlx5_core_dev *mdev = priv->mdev; - struct mlx5_eswitch *esw = mdev->priv.eswitch; - struct mlx5_eswitch_rep_if rep_if; - struct mlx5e_rep_priv *rpriv; - - rpriv = priv->ppriv; - rpriv->netdev = priv->netdev; - - rep_if.load = mlx5e_nic_rep_load; - rep_if.unload = mlx5e_nic_rep_unload; - rep_if.get_proto_dev = mlx5e_vport_rep_get_proto_dev; - rep_if.priv = rpriv; - INIT_LIST_HEAD(&rpriv->vport_sqs_list); - mlx5_eswitch_register_vport_rep(esw, 0, &rep_if, REP_ETH); /* UPLINK PF vport*/ - - mlx5e_rep_register_vf_vports(priv); /* VFs vports */ -} - -void mlx5e_unregister_vport_reps(struct mlx5e_priv *priv) -{ - struct mlx5_core_dev *mdev = priv->mdev; - struct mlx5_eswitch *esw = mdev->priv.eswitch; - - mlx5e_rep_unregister_vf_vports(priv); /* VFs vports */ - mlx5_eswitch_unregister_vport_rep(esw, 0, REP_ETH); /* UPLINK PF*/ -} - -void *mlx5e_alloc_nic_rep_priv(struct mlx5_core_dev *mdev) -{ - struct mlx5_eswitch *esw = mdev->priv.eswitch; - struct mlx5e_rep_priv *rpriv; - - rpriv = kzalloc(sizeof(*rpriv), GFP_KERNEL); - if (!rpriv) - return NULL; - - rpriv->rep = &esw->offloads.vport_reps[0]; - return rpriv; -} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h index 844d32d5c29f..edd722824697 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h @@ -53,13 +53,33 @@ struct mlx5e_neigh_update_table { unsigned long min_interval; /* jiffies */ }; +struct mlx5_rep_uplink_priv { + /* Filters DB - instantiated by the uplink representor and shared by + * the uplink's VFs + */ + struct rhashtable tc_ht; + + /* indirect block callbacks are invoked on bind/unbind events + * on registered higher level devices (e.g. 
tunnel devices) + * + * tc_indr_block_cb_priv_list is used to lookup indirect callback + * private data + * + * netdevice_nb is the netdev events notifier - used to register + * tunnel devices for block events + * + */ + struct list_head tc_indr_block_priv_list; + struct notifier_block netdevice_nb; +}; + struct mlx5e_rep_priv { struct mlx5_eswitch_rep *rep; struct mlx5e_neigh_update_table neigh_update; struct net_device *netdev; struct mlx5_flow_handle *vport_rx_rule; struct list_head vport_sqs_list; - struct rhashtable tc_ht; /* valid for uplink rep */ + struct mlx5_rep_uplink_priv uplink_priv; /* valid for uplink rep */ }; static inline @@ -129,6 +149,8 @@ struct mlx5e_encap_entry { struct net_device *out_dev; int tunnel_type; + int tunnel_hlen; + int reformat_type; u8 flags; char *encap_header; int encap_size; @@ -140,16 +162,12 @@ struct mlx5e_rep_sq { }; void *mlx5e_alloc_nic_rep_priv(struct mlx5_core_dev *mdev); -void mlx5e_register_vport_reps(struct mlx5e_priv *priv); -void mlx5e_unregister_vport_reps(struct mlx5e_priv *priv); +void mlx5e_rep_register_vport_reps(struct mlx5_core_dev *mdev); +void mlx5e_rep_unregister_vport_reps(struct mlx5_core_dev *mdev); bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv); int mlx5e_add_sqs_fwd_rules(struct mlx5e_priv *priv); void mlx5e_remove_sqs_fwd_rules(struct mlx5e_priv *priv); -int mlx5e_get_offload_stats(int attr_id, const struct net_device *dev, void *sp); -bool mlx5e_has_offload_stats(const struct net_device *dev, int attr_id); - -int mlx5e_attr_get(struct net_device *dev, struct switchdev_attr *attr); void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe); int mlx5e_rep_encap_entry_attach(struct mlx5e_priv *priv, @@ -158,12 +176,17 @@ void mlx5e_rep_encap_entry_detach(struct mlx5e_priv *priv, struct mlx5e_encap_entry *e); void mlx5e_rep_queue_neigh_stats_work(struct mlx5e_priv *priv); + +bool mlx5e_eswitch_rep(struct net_device *netdev); + #else /* CONFIG_MLX5_ESWITCH */ -static inline void mlx5e_register_vport_reps(struct mlx5e_priv *priv) {} -static inline void mlx5e_unregister_vport_reps(struct mlx5e_priv *priv) {} static inline bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv) { return false; } static inline int mlx5e_add_sqs_fwd_rules(struct mlx5e_priv *priv) { return 0; } static inline void mlx5e_remove_sqs_fwd_rules(struct mlx5e_priv *priv) {} #endif +static inline bool mlx5e_is_vport_rep(struct mlx5e_priv *priv) +{ + return (MLX5_ESWITCH_MANAGER(priv->mdev) && priv->ppriv); +} #endif /* __MLX5E_REP_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 0b5ef6d4e815..1d0bb5ff8c26 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -554,9 +554,9 @@ static inline void mlx5e_poll_ico_single_cqe(struct mlx5e_cq *cq, mlx5_cqwq_pop(&cq->wq); - if (unlikely((cqe->op_own >> 4) != MLX5_CQE_REQ)) { + if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_REQ)) { netdev_WARN_ONCE(cq->channel->netdev, - "Bad OP in ICOSQ CQE: 0x%x\n", cqe->op_own); + "Bad OP in ICOSQ CQE: 0x%x\n", get_cqe_opcode(cqe)); return; } @@ -898,7 +898,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe, prefetchw(va); /* xdp_frame data area */ prefetch(data); - if (unlikely((cqe->op_own >> 4) != MLX5_CQE_RESP_SEND)) { + if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) { rq->stats->wqe_err++; return NULL; } @@ -930,7 +930,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 
*cqe, u16 byte_cnt = cqe_bcnt - headlen; struct sk_buff *skb; - if (unlikely((cqe->op_own >> 4) != MLX5_CQE_RESP_SEND)) { + if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) { rq->stats->wqe_err++; return NULL; } @@ -1154,7 +1154,7 @@ void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) wi->consumed_strides += cstrides; - if (unlikely((cqe->op_own >> 4) != MLX5_CQE_RESP_SEND)) { + if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) { rq->stats->wqe_err++; goto mpwrq_cqe_out; } @@ -1190,7 +1190,6 @@ mpwrq_cqe_out: int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget) { struct mlx5e_rq *rq = container_of(cq, struct mlx5e_rq, cq); - struct mlx5e_xdpsq *xdpsq = &rq->xdpsq; struct mlx5_cqe64 *cqe; int work_done = 0; @@ -1221,15 +1220,8 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget) } while ((++work_done < budget) && (cqe = mlx5_cqwq_get_cqe(&cq->wq))); out: - if (xdpsq->doorbell) { - mlx5e_xmit_xdp_doorbell(xdpsq); - xdpsq->doorbell = false; - } - - if (xdpsq->redirect_flush) { - xdp_do_flush_map(); - xdpsq->redirect_flush = false; - } + if (rq->xdp_prog) + mlx5e_xdp_rx_poll_complete(rq); mlx5_cqwq_update_db_record(&cq->wq); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c index 4337afd610d7..d3fe48ff9da9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c @@ -30,6 +30,7 @@ * SOFTWARE. */ +#include "lib/mlx5.h" #include "en.h" #include "en_accel/ipsec.h" #include "en_accel/tls.h" @@ -480,7 +481,10 @@ static int mlx5e_grp_802_3_fill_stats(struct mlx5e_priv *priv, u64 *data, return idx; } -static void mlx5e_grp_802_3_update_stats(struct mlx5e_priv *priv) +#define MLX5_BASIC_PPCNT_SUPPORTED(mdev) \ + (MLX5_CAP_GEN(mdev, pcam_reg) ? 
MLX5_CAP_PCAM_REG(mdev, ppcnt) : 1) + +void mlx5e_grp_802_3_update_stats(struct mlx5e_priv *priv) { struct mlx5e_pport_stats *pstats = &priv->stats.pport; struct mlx5_core_dev *mdev = priv->mdev; @@ -488,6 +492,9 @@ static void mlx5e_grp_802_3_update_stats(struct mlx5e_priv *priv) int sz = MLX5_ST_SZ_BYTES(ppcnt_reg); void *out; + if (!MLX5_BASIC_PPCNT_SUPPORTED(mdev)) + return; + MLX5_SET(ppcnt_reg, in, local_port, 1); out = pstats->IEEE_802_3_counters; MLX5_SET(ppcnt_reg, in, grp, MLX5_IEEE_802_3_COUNTERS_GROUP); @@ -600,6 +607,9 @@ static void mlx5e_grp_2819_update_stats(struct mlx5e_priv *priv) int sz = MLX5_ST_SZ_BYTES(ppcnt_reg); void *out; + if (!MLX5_BASIC_PPCNT_SUPPORTED(mdev)) + return; + MLX5_SET(ppcnt_reg, in, local_port, 1); out = pstats->RFC_2819_counters; MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2819_COUNTERS_GROUP); @@ -934,7 +944,7 @@ static const struct counter_desc pport_per_prio_pfc_stats_desc[] = { }; static const struct counter_desc pport_pfc_stall_stats_desc[] = { - { "tx_pause_storm_warning_events ", PPORT_PER_PRIO_OFF(device_stall_minor_watermark_cnt) }, + { "tx_pause_storm_warning_events", PPORT_PER_PRIO_OFF(device_stall_minor_watermark_cnt) }, { "tx_pause_storm_error_events", PPORT_PER_PRIO_OFF(device_stall_critical_watermark_cnt) }, }; @@ -1075,6 +1085,9 @@ static void mlx5e_grp_per_prio_update_stats(struct mlx5e_priv *priv) int prio; void *out; + if (!MLX5_BASIC_PPCNT_SUPPORTED(mdev)) + return; + MLX5_SET(ppcnt_reg, in, local_port, 1); MLX5_SET(ppcnt_reg, in, grp, MLX5_PER_PRIORITY_COUNTERS_GROUP); for (prio = 0; prio < NUM_PPORT_PRIO; prio++) { @@ -1086,13 +1099,13 @@ static void mlx5e_grp_per_prio_update_stats(struct mlx5e_priv *priv) } static const struct counter_desc mlx5e_pme_status_desc[] = { - { "module_unplug", 8 }, + { "module_unplug", sizeof(u64) * MLX5_MODULE_STATUS_UNPLUGGED }, }; static const struct counter_desc mlx5e_pme_error_desc[] = { - { "module_bus_stuck", 16 }, /* bus stuck (I2C or data shorted) */ - { "module_high_temp", 48 }, /* high temperature */ - { "module_bad_shorted", 56 }, /* bad or shorted cable/module */ + { "module_bus_stuck", sizeof(u64) * MLX5_MODULE_EVENT_ERROR_BUS_STUCK }, + { "module_high_temp", sizeof(u64) * MLX5_MODULE_EVENT_ERROR_HIGH_TEMPERATURE }, + { "module_bad_shorted", sizeof(u64) * MLX5_MODULE_EVENT_ERROR_BAD_CABLE }, }; #define NUM_PME_STATUS_STATS ARRAY_SIZE(mlx5e_pme_status_desc) @@ -1120,15 +1133,17 @@ static int mlx5e_grp_pme_fill_strings(struct mlx5e_priv *priv, u8 *data, static int mlx5e_grp_pme_fill_stats(struct mlx5e_priv *priv, u64 *data, int idx) { - struct mlx5_priv *mlx5_priv = &priv->mdev->priv; + struct mlx5_pme_stats pme_stats; int i; + mlx5_get_pme_stats(priv->mdev, &pme_stats); + for (i = 0; i < NUM_PME_STATUS_STATS; i++) - data[idx++] = MLX5E_READ_CTR64_CPU(mlx5_priv->pme_stats.status_counters, + data[idx++] = MLX5E_READ_CTR64_CPU(pme_stats.status_counters, mlx5e_pme_status_desc, i); for (i = 0; i < NUM_PME_ERR_STATS; i++) - data[idx++] = MLX5E_READ_CTR64_CPU(mlx5_priv->pme_stats.error_counters, + data[idx++] = MLX5E_READ_CTR64_CPU(pme_stats.error_counters, mlx5e_pme_error_desc, i); return idx; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h index 3ff69ddae2d3..fe91ec06e3c7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h @@ -278,5 +278,6 @@ extern const struct mlx5e_stats_grp mlx5e_stats_grps[]; extern const int mlx5e_num_stats_grps; void 
mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv);
+void mlx5e_grp_802_3_update_stats(struct mlx5e_priv *priv);
 
 #endif /* __MLX5_EN_STATS_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 9dabe9d4b279..cae6c6d48984 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -44,15 +44,15 @@
 #include <net/tc_act/tc_tunnel_key.h>
 #include <net/tc_act/tc_pedit.h>
 #include <net/tc_act/tc_csum.h>
-#include <net/vxlan.h>
 #include <net/arp.h>
 #include "en.h"
 #include "en_rep.h"
 #include "en_tc.h"
 #include "eswitch.h"
-#include "lib/vxlan.h"
 #include "fs_core.h"
 #include "en/port.h"
+#include "en/tc_tun.h"
+#include "lib/devcom.h"
 
 struct mlx5_nic_flow_attr {
 	u32 action;
@@ -69,25 +69,54 @@ struct mlx5_nic_flow_attr {
 enum {
 	MLX5E_TC_FLOW_INGRESS	= MLX5E_TC_INGRESS,
 	MLX5E_TC_FLOW_EGRESS	= MLX5E_TC_EGRESS,
-	MLX5E_TC_FLOW_ESWITCH	= BIT(MLX5E_TC_FLOW_BASE),
-	MLX5E_TC_FLOW_NIC	= BIT(MLX5E_TC_FLOW_BASE + 1),
-	MLX5E_TC_FLOW_OFFLOADED	= BIT(MLX5E_TC_FLOW_BASE + 2),
-	MLX5E_TC_FLOW_HAIRPIN	= BIT(MLX5E_TC_FLOW_BASE + 3),
-	MLX5E_TC_FLOW_HAIRPIN_RSS = BIT(MLX5E_TC_FLOW_BASE + 4),
-	MLX5E_TC_FLOW_SLOW	= BIT(MLX5E_TC_FLOW_BASE + 5),
+	MLX5E_TC_FLOW_ESWITCH	= MLX5E_TC_ESW_OFFLOAD,
+	MLX5E_TC_FLOW_NIC	= MLX5E_TC_NIC_OFFLOAD,
+	MLX5E_TC_FLOW_OFFLOADED	= BIT(MLX5E_TC_FLOW_BASE),
+	MLX5E_TC_FLOW_HAIRPIN	= BIT(MLX5E_TC_FLOW_BASE + 1),
+	MLX5E_TC_FLOW_HAIRPIN_RSS = BIT(MLX5E_TC_FLOW_BASE + 2),
+	MLX5E_TC_FLOW_SLOW	= BIT(MLX5E_TC_FLOW_BASE + 3),
+	MLX5E_TC_FLOW_DUP	= BIT(MLX5E_TC_FLOW_BASE + 4),
 };
 
 #define MLX5E_TC_MAX_SPLITS 1
 
+/* Helper struct for accessing a struct containing list_head array.
+ * Containing struct
+ *   |- Helper array
+ *      [0] Helper item 0
+ *          |- list_head item 0
+ *          |- index (0)
+ *      [1] Helper item 1
+ *          |- list_head item 1
+ *          |- index (1)
+ * To access the containing struct from one of the list_head items:
+ * 1. Get the helper item from the list_head item using
+ *    helper item =
+ *            container_of(list_head item, helper struct type, list_head field)
+ * 2. Get the containing struct from the helper item and its index in the array:
+ *    containing struct =
+ *            container_of(helper item, containing struct type, helper field[index])
+ */
+struct encap_flow_item {
+	struct list_head list;
+	int index;
+};
+
 struct mlx5e_tc_flow {
 	struct rhash_head	node;
 	struct mlx5e_priv	*priv;
 	u64			cookie;
 	u16			flags;
 	struct mlx5_flow_handle *rule[MLX5E_TC_MAX_SPLITS + 1];
-	struct list_head	encap;   /* flows sharing the same encap ID */
+	/* Flow can be associated with multiple encap IDs.
+	 * The number of encaps is bounded by the number of supported
+	 * destinations.
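+	 *
+	 * Each encaps[i] below embeds an encap_flow_item, so the owning
+	 * flow can be recovered from a list_head item exactly as the
+	 * helper comment above describes. Illustrative sketch only,
+	 * given some list_head *lh taken from an encap entry's flow
+	 * list (see mlx5e_tc_encap_flows_add() further down for the
+	 * real usage):
+	 *   efi  = container_of(lh, struct encap_flow_item, list);
+	 *   flow = container_of(efi, struct mlx5e_tc_flow,
+	 *                       encaps[efi->index]);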
+ */ + struct encap_flow_item encaps[MLX5_MAX_FLOW_FWD_VPORTS]; + struct mlx5e_tc_flow *peer_flow; struct list_head mod_hdr; /* flows sharing the same mod hdr ID */ struct list_head hairpin; /* flows sharing the same hairpin */ + struct list_head peer; /* flows with peer flow */ union { struct mlx5_esw_flow_attr esw_attr[0]; struct mlx5_nic_flow_attr nic_attr[0]; @@ -95,11 +124,12 @@ struct mlx5e_tc_flow { }; struct mlx5e_tc_flow_parse_attr { - struct ip_tunnel_info tun_info; + struct ip_tunnel_info tun_info[MLX5_MAX_FLOW_FWD_VPORTS]; + struct net_device *filter_dev; struct mlx5_flow_spec spec; int num_mod_hdr_actions; void *mod_hdr_actions; - int mirred_ifindex; + int mirred_ifindex[MLX5_MAX_FLOW_FWD_VPORTS]; }; #define MLX5E_TC_TABLE_NUM_GROUPS 4 @@ -316,7 +346,7 @@ static void mlx5e_hairpin_fill_rqt_rqns(struct mlx5e_hairpin *hp, void *rqtc) for (i = 0; i < sz; i++) { ix = i; - if (priv->channels.params.rss_hfunc == ETH_RSS_HASH_XOR) + if (priv->rss_params.hfunc == ETH_RSS_HASH_XOR) ix = mlx5e_bits_invert(i, ilog2(sz)); ix = indirection_rqt[ix]; rqn = hp->pair->rqn[ix]; @@ -360,13 +390,15 @@ static int mlx5e_hairpin_create_indirect_tirs(struct mlx5e_hairpin *hp) void *tirc; for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { + struct mlx5e_tirc_config ttconfig = mlx5e_tirc_get_default_config(tt); + memset(in, 0, MLX5_ST_SZ_BYTES(create_tir_in)); tirc = MLX5_ADDR_OF(create_tir_in, in, ctx); MLX5_SET(tirc, tirc, transport_domain, hp->tdn); MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT); MLX5_SET(tirc, tirc, indirect_table, hp->indir_rqt.rqtn); - mlx5e_build_indir_tir_ctx_hash(&priv->channels.params, tt, tirc, false); + mlx5e_build_indir_tir_ctx_hash(&priv->rss_params, &ttconfig, tirc, false); err = mlx5_core_create_tir(hp->func_mdev, in, MLX5_ST_SZ_BYTES(create_tir_in), &hp->indir_tirn[tt]); @@ -569,7 +601,7 @@ static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv, struct mlx5e_tc_flow_parse_attr *parse_attr, struct netlink_ext_ack *extack) { - int peer_ifindex = parse_attr->mirred_ifindex; + int peer_ifindex = parse_attr->mirred_ifindex[0]; struct mlx5_hairpin_params params; struct mlx5_core_dev *peer_mdev; struct mlx5e_hairpin_entry *hpe; @@ -802,7 +834,7 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv, mlx5_del_flow_rules(flow->rule[0]); mlx5_fc_destroy(priv->mdev, counter); - if (!mlx5e_tc_num_filters(priv) && priv->fs.tc.t) { + if (!mlx5e_tc_num_filters(priv, MLX5E_TC_NIC_OFFLOAD) && priv->fs.tc.t) { mlx5_destroy_flow_table(priv->fs.tc.t); priv->fs.tc.t = NULL; } @@ -815,14 +847,15 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv, } static void mlx5e_detach_encap(struct mlx5e_priv *priv, - struct mlx5e_tc_flow *flow); + struct mlx5e_tc_flow *flow, int out_index); static int mlx5e_attach_encap(struct mlx5e_priv *priv, struct ip_tunnel_info *tun_info, struct net_device *mirred_dev, struct net_device **encap_dev, struct mlx5e_tc_flow *flow, - struct netlink_ext_ack *extack); + struct netlink_ext_ack *extack, + int out_index); static struct mlx5_flow_handle * mlx5e_tc_offload_fdb_rules(struct mlx5_eswitch *esw, @@ -836,7 +869,7 @@ mlx5e_tc_offload_fdb_rules(struct mlx5_eswitch *esw, if (IS_ERR(rule)) return rule; - if (attr->mirror_count) { + if (attr->split_count) { flow->rule[1] = mlx5_eswitch_add_fwd_rule(esw, spec, attr); if (IS_ERR(flow->rule[1])) { mlx5_eswitch_del_offloaded_rule(esw, rule, attr); @@ -855,7 +888,7 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw, { flow->flags &= ~MLX5E_TC_FLOW_OFFLOADED; - if (attr->mirror_count) + 
if (attr->split_count) mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr); mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr); @@ -871,7 +904,7 @@ mlx5e_tc_offload_to_slow_path(struct mlx5_eswitch *esw, memcpy(slow_attr, flow->esw_attr, sizeof(*slow_attr)); slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; - slow_attr->mirror_count = 0; + slow_attr->split_count = 0; slow_attr->dest_chain = FDB_SLOW_PATH_CHAIN; rule = mlx5e_tc_offload_fdb_rules(esw, flow, spec, slow_attr); @@ -888,7 +921,7 @@ mlx5e_tc_unoffload_from_slow_path(struct mlx5_eswitch *esw, { memcpy(slow_attr, flow->esw_attr, sizeof(*slow_attr)); slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; - slow_attr->mirror_count = 0; + slow_attr->split_count = 0; slow_attr->dest_chain = FDB_SLOW_PATH_CHAIN; mlx5e_tc_unoffload_fdb_rules(esw, flow, slow_attr); flow->flags &= ~MLX5E_TC_FLOW_SLOW; @@ -909,6 +942,7 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, struct mlx5e_rep_priv *rpriv; struct mlx5e_priv *out_priv; int err = 0, encap_err = 0; + int out_index; if (!mlx5_eswitch_prios_supported(esw) && attr->prio != 1) { NL_SET_ERR_MSG(extack, "E-switch priorities unsupported, upgrade FW"); @@ -927,20 +961,27 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, goto err_max_prio_chain; } - if (attr->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) { + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { + int mirred_ifindex; + + if (!(attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) + continue; + + mirred_ifindex = attr->parse_attr->mirred_ifindex[out_index]; out_dev = __dev_get_by_index(dev_net(priv->netdev), - attr->parse_attr->mirred_ifindex); - encap_err = mlx5e_attach_encap(priv, &parse_attr->tun_info, - out_dev, &encap_dev, flow, - extack); - if (encap_err && encap_err != -EAGAIN) { - err = encap_err; + mirred_ifindex); + err = mlx5e_attach_encap(priv, + &parse_attr->tun_info[out_index], + out_dev, &encap_dev, flow, + extack, out_index); + if (err && err != -EAGAIN) goto err_attach_encap; - } + if (err == -EAGAIN) + encap_err = err; out_priv = netdev_priv(encap_dev); rpriv = out_priv->ppriv; - attr->out_rep[attr->out_count] = rpriv->rep; - attr->out_mdev[attr->out_count++] = out_priv->mdev; + attr->dests[out_index].rep = rpriv->rep; + attr->dests[out_index].mdev = out_priv->mdev; } err = mlx5_eswitch_add_vlan_action(esw, attr); @@ -955,7 +996,7 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, } if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) { - counter = mlx5_fc_create(esw->dev, true); + counter = mlx5_fc_create(attr->counter_dev, true); if (IS_ERR(counter)) { err = PTR_ERR(counter); goto err_create_counter; @@ -984,15 +1025,16 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, return 0; err_add_rule: - mlx5_fc_destroy(esw->dev, counter); + mlx5_fc_destroy(attr->counter_dev, counter); err_create_counter: if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) mlx5e_detach_mod_hdr(priv, flow); err_mod_hdr: mlx5_eswitch_del_vlan_action(esw, attr); err_add_vlan: - if (attr->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) - mlx5e_detach_encap(priv, flow); + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) + if (attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP) + mlx5e_detach_encap(priv, flow, out_index); err_attach_encap: err_max_prio_chain: return err; @@ -1004,6 +1046,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv, struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; struct mlx5_esw_flow_attr *attr = flow->esw_attr; struct mlx5_esw_flow_attr 
slow_attr; + int out_index; if (flow->flags & MLX5E_TC_FLOW_OFFLOADED) { if (flow->flags & MLX5E_TC_FLOW_SLOW) @@ -1014,16 +1057,16 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv, mlx5_eswitch_del_vlan_action(esw, attr); - if (attr->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) { - mlx5e_detach_encap(priv, flow); - kvfree(attr->parse_attr); - } + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) + if (attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP) + mlx5e_detach_encap(priv, flow, out_index); + kvfree(attr->parse_attr); if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) mlx5e_detach_mod_hdr(priv, flow); if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) - mlx5_fc_destroy(esw->dev, attr->counter); + mlx5_fc_destroy(attr->counter_dev, attr->counter); } void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv, @@ -1033,10 +1076,12 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv, struct mlx5_esw_flow_attr slow_attr, *esw_attr; struct mlx5_flow_handle *rule; struct mlx5_flow_spec *spec; + struct encap_flow_item *efi; struct mlx5e_tc_flow *flow; int err; - err = mlx5_packet_reformat_alloc(priv->mdev, e->tunnel_type, + err = mlx5_packet_reformat_alloc(priv->mdev, + e->reformat_type, e->encap_size, e->encap_header, MLX5_FLOW_NAMESPACE_FDB, &e->encap_id); @@ -1048,11 +1093,31 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv, e->flags |= MLX5_ENCAP_ENTRY_VALID; mlx5e_rep_queue_neigh_stats_work(priv); - list_for_each_entry(flow, &e->flows, encap) { + list_for_each_entry(efi, &e->flows, list) { + bool all_flow_encaps_valid = true; + int i; + + flow = container_of(efi, struct mlx5e_tc_flow, encaps[efi->index]); esw_attr = flow->esw_attr; - esw_attr->encap_id = e->encap_id; spec = &esw_attr->parse_attr->spec; + esw_attr->dests[efi->index].encap_id = e->encap_id; + esw_attr->dests[efi->index].flags |= MLX5_ESW_DEST_ENCAP_VALID; + /* Flow can be associated with multiple encap entries. + * Before offloading the flow verify that all of them have + * a valid neighbour. 
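+		 *
+		 * Concretely: each encap dest has MLX5_ESW_DEST_ENCAP set
+		 * in dests[i].flags and gains MLX5_ESW_DEST_ENCAP_VALID
+		 * once its neighbour resolves, so the loop below lets the
+		 * flow leave the slow path only when every encap dest
+		 * carries both flags.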
+ */ + for (i = 0; i < MLX5_MAX_FLOW_FWD_VPORTS; i++) { + if (!(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP)) + continue; + if (!(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP_VALID)) { + all_flow_encaps_valid = false; + break; + } + } + /* Do not offload flows with unresolved neighbors */ + if (!all_flow_encaps_valid) + continue; /* update from slow path rule to encap rule */ rule = mlx5e_tc_offload_fdb_rules(esw, flow, spec, esw_attr); if (IS_ERR(rule)) { @@ -1075,14 +1140,18 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv, struct mlx5_esw_flow_attr slow_attr; struct mlx5_flow_handle *rule; struct mlx5_flow_spec *spec; + struct encap_flow_item *efi; struct mlx5e_tc_flow *flow; int err; - list_for_each_entry(flow, &e->flows, encap) { + list_for_each_entry(efi, &e->flows, list) { + flow = container_of(efi, struct mlx5e_tc_flow, encaps[efi->index]); spec = &flow->esw_attr->parse_attr->spec; /* update from encap rule to slow path rule */ rule = mlx5e_tc_offload_to_slow_path(esw, flow, spec, &slow_attr); + /* mark the flow's encap dest as non-valid */ + flow->esw_attr->dests[efi->index].flags &= ~MLX5_ESW_DEST_ENCAP_VALID; if (IS_ERR(rule)) { err = PTR_ERR(rule); @@ -1130,9 +1199,12 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe) return; list_for_each_entry(e, &nhe->encap_list, encap_list) { + struct encap_flow_item *efi; if (!(e->flags & MLX5_ENCAP_ENTRY_VALID)) continue; - list_for_each_entry(flow, &e->flows, encap) { + list_for_each_entry(efi, &e->flows, list) { + flow = container_of(efi, struct mlx5e_tc_flow, + encaps[efi->index]); if (flow->flags & MLX5E_TC_FLOW_OFFLOADED) { counter = mlx5e_tc_get_counter(flow); mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse); @@ -1162,11 +1234,11 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe) } static void mlx5e_detach_encap(struct mlx5e_priv *priv, - struct mlx5e_tc_flow *flow) + struct mlx5e_tc_flow *flow, int out_index) { - struct list_head *next = flow->encap.next; + struct list_head *next = flow->encaps[out_index].list.next; - list_del(&flow->encap); + list_del(&flow->encaps[out_index].list); if (list_empty(next)) { struct mlx5e_encap_entry *e; @@ -1182,49 +1254,55 @@ static void mlx5e_detach_encap(struct mlx5e_priv *priv, } } -static void mlx5e_tc_del_flow(struct mlx5e_priv *priv, - struct mlx5e_tc_flow *flow) +static void __mlx5e_tc_del_fdb_peer_flow(struct mlx5e_tc_flow *flow) { - if (flow->flags & MLX5E_TC_FLOW_ESWITCH) - mlx5e_tc_del_fdb_flow(priv, flow); - else - mlx5e_tc_del_nic_flow(priv, flow); + struct mlx5_eswitch *esw = flow->priv->mdev->priv.eswitch; + + if (!(flow->flags & MLX5E_TC_FLOW_ESWITCH) || + !(flow->flags & MLX5E_TC_FLOW_DUP)) + return; + + mutex_lock(&esw->offloads.peer_mutex); + list_del(&flow->peer); + mutex_unlock(&esw->offloads.peer_mutex); + + flow->flags &= ~MLX5E_TC_FLOW_DUP; + + mlx5e_tc_del_fdb_flow(flow->peer_flow->priv, flow->peer_flow); + kvfree(flow->peer_flow); + flow->peer_flow = NULL; } -static void parse_vxlan_attr(struct mlx5_flow_spec *spec, - struct tc_cls_flower_offload *f) +static void mlx5e_tc_del_fdb_peer_flow(struct mlx5e_tc_flow *flow) { - void *headers_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, - outer_headers); - void *headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, - outer_headers); - void *misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, - misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, - misc_parameters); + struct mlx5_core_dev *dev = 
flow->priv->mdev; + struct mlx5_devcom *devcom = dev->priv.devcom; + struct mlx5_eswitch *peer_esw; - MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ip_protocol); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); + peer_esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); + if (!peer_esw) + return; - if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) { - struct flow_dissector_key_keyid *key = - skb_flow_dissector_target(f->dissector, - FLOW_DISSECTOR_KEY_ENC_KEYID, - f->key); - struct flow_dissector_key_keyid *mask = - skb_flow_dissector_target(f->dissector, - FLOW_DISSECTOR_KEY_ENC_KEYID, - f->mask); - MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni, - be32_to_cpu(mask->keyid)); - MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni, - be32_to_cpu(key->keyid)); + __mlx5e_tc_del_fdb_peer_flow(flow); + mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); +} + +static void mlx5e_tc_del_flow(struct mlx5e_priv *priv, + struct mlx5e_tc_flow *flow) +{ + if (flow->flags & MLX5E_TC_FLOW_ESWITCH) { + mlx5e_tc_del_fdb_peer_flow(flow); + mlx5e_tc_del_fdb_flow(priv, flow); + } else { + mlx5e_tc_del_nic_flow(priv, flow); } } + static int parse_tunnel_attr(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec, - struct tc_cls_flower_offload *f) + struct tc_cls_flower_offload *f, + struct net_device *filter_dev) { struct netlink_ext_ack *extack = f->common.extack; void *headers_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, @@ -1236,48 +1314,14 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv, skb_flow_dissector_target(f->dissector, FLOW_DISSECTOR_KEY_ENC_CONTROL, f->key); + int err = 0; - if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS)) { - struct flow_dissector_key_ports *key = - skb_flow_dissector_target(f->dissector, - FLOW_DISSECTOR_KEY_ENC_PORTS, - f->key); - struct flow_dissector_key_ports *mask = - skb_flow_dissector_target(f->dissector, - FLOW_DISSECTOR_KEY_ENC_PORTS, - f->mask); - - /* Full udp dst port must be given */ - if (memchr_inv(&mask->dst, 0xff, sizeof(mask->dst))) - goto vxlan_match_offload_err; - - if (mlx5_vxlan_lookup_port(priv->mdev->vxlan, be16_to_cpu(key->dst)) && - MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap)) - parse_vxlan_attr(spec, f); - else { - NL_SET_ERR_MSG_MOD(extack, - "port isn't an offloaded vxlan udp dport"); - netdev_warn(priv->netdev, - "%d isn't an offloaded vxlan udp dport\n", be16_to_cpu(key->dst)); - return -EOPNOTSUPP; - } - - MLX5_SET(fte_match_set_lyr_2_4, headers_c, - udp_dport, ntohs(mask->dst)); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, - udp_dport, ntohs(key->dst)); - - MLX5_SET(fte_match_set_lyr_2_4, headers_c, - udp_sport, ntohs(mask->src)); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, - udp_sport, ntohs(key->src)); - } else { /* udp dst port must be given */ -vxlan_match_offload_err: + err = mlx5e_tc_tun_parse(filter_dev, priv, spec, f, + headers_c, headers_v); + if (err) { NL_SET_ERR_MSG_MOD(extack, - "IP tunnel decap offload supported only for vxlan, must set UDP dport"); - netdev_warn(priv->netdev, - "IP tunnel decap offload supported only for vxlan, must set UDP dport\n"); - return -EOPNOTSUPP; + "failed to parse tunnel attributes"); + return err; } if (enc_control->addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { @@ -1381,6 +1425,7 @@ vxlan_match_offload_err: static int __parse_cls_flower(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec, struct tc_cls_flower_offload *f, + struct net_device *filter_dev, u8 *match_level) { struct netlink_ext_ack 
*extack = f->common.extack; @@ -1432,7 +1477,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv, switch (key->addr_type) { case FLOW_DISSECTOR_KEY_IPV4_ADDRS: case FLOW_DISSECTOR_KEY_IPV6_ADDRS: - if (parse_tunnel_attr(priv, spec, f)) + if (parse_tunnel_attr(priv, spec, f, filter_dev)) return -EOPNOTSUPP; break; default: @@ -1774,7 +1819,8 @@ static int __parse_cls_flower(struct mlx5e_priv *priv, static int parse_cls_flower(struct mlx5e_priv *priv, struct mlx5e_tc_flow *flow, struct mlx5_flow_spec *spec, - struct tc_cls_flower_offload *f) + struct tc_cls_flower_offload *f, + struct net_device *filter_dev) { struct netlink_ext_ack *extack = f->common.extack; struct mlx5_core_dev *dev = priv->mdev; @@ -1784,7 +1830,7 @@ static int parse_cls_flower(struct mlx5e_priv *priv, u8 match_level; int err; - err = __parse_cls_flower(priv, spec, f, &match_level); + err = __parse_cls_flower(priv, spec, f, filter_dev, &match_level); if (!err && (flow->flags & MLX5E_TC_FLOW_ESWITCH)) { rep = rpriv->rep; @@ -2137,7 +2183,6 @@ static bool modify_header_match_supported(struct mlx5_flow_spec *spec, { const struct tc_action *a; bool modify_ip_header; - LIST_HEAD(actions); u8 htype, ip_proto; void *headers_v; u16 ethertype; @@ -2226,7 +2271,6 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, { struct mlx5_nic_flow_attr *attr = flow->nic_attr; const struct tc_action *a; - LIST_HEAD(actions); u32 action = 0; int err, i; @@ -2269,7 +2313,7 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, if (priv->netdev->netdev_ops == peer_dev->netdev_ops && same_hw_devs(priv, netdev_priv(peer_dev))) { - parse_attr->mirred_ifindex = peer_dev->ifindex; + parse_attr->mirred_ifindex[0] = peer_dev->ifindex; flow->flags |= MLX5E_TC_FLOW_HAIRPIN; action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_COUNT; @@ -2318,45 +2362,6 @@ static inline int hash_encap_info(struct ip_tunnel_key *key) return jhash(key, sizeof(*key), 0); } -static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv, - struct net_device *mirred_dev, - struct net_device **out_dev, - struct flowi4 *fl4, - struct neighbour **out_n, - u8 *out_ttl) -{ - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; - struct mlx5e_rep_priv *uplink_rpriv; - struct rtable *rt; - struct neighbour *n = NULL; - -#if IS_ENABLED(CONFIG_INET) - int ret; - - rt = ip_route_output_key(dev_net(mirred_dev), fl4); - ret = PTR_ERR_OR_ZERO(rt); - if (ret) - return ret; -#else - return -EOPNOTSUPP; -#endif - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - /* if the egress device isn't on the same HW e-switch, we use the uplink */ - if (!switchdev_port_same_parent_id(priv->netdev, rt->dst.dev)) - *out_dev = uplink_rpriv->netdev; - else - *out_dev = rt->dst.dev; - - if (!(*out_ttl)) - *out_ttl = ip4_dst_hoplimit(&rt->dst); - n = dst_neigh_lookup(&rt->dst, &fl4->daddr); - ip_rt_put(rt); - if (!n) - return -ENOMEM; - - *out_n = n; - return 0; -} static bool is_merged_eswitch_dev(struct mlx5e_priv *priv, struct net_device *peer_netdev) @@ -2372,377 +2377,24 @@ static bool is_merged_eswitch_dev(struct mlx5e_priv *priv, (peer_priv->mdev->priv.eswitch->mode == SRIOV_OFFLOADS)); } -static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv, - struct net_device *mirred_dev, - struct net_device **out_dev, - struct flowi6 *fl6, - struct neighbour **out_n, - u8 *out_ttl) -{ - struct neighbour *n = NULL; - struct dst_entry *dst; - -#if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6) - struct mlx5e_rep_priv 
*uplink_rpriv; - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; - int ret; - - ret = ipv6_stub->ipv6_dst_lookup(dev_net(mirred_dev), NULL, &dst, - fl6); - if (ret < 0) - return ret; - - if (!(*out_ttl)) - *out_ttl = ip6_dst_hoplimit(dst); - - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - /* if the egress device isn't on the same HW e-switch, we use the uplink */ - if (!switchdev_port_same_parent_id(priv->netdev, dst->dev)) - *out_dev = uplink_rpriv->netdev; - else - *out_dev = dst->dev; -#else - return -EOPNOTSUPP; -#endif - - n = dst_neigh_lookup(dst, &fl6->daddr); - dst_release(dst); - if (!n) - return -ENOMEM; - - *out_n = n; - return 0; -} - -static void gen_vxlan_header_ipv4(struct net_device *out_dev, - char buf[], int encap_size, - unsigned char h_dest[ETH_ALEN], - u8 tos, u8 ttl, - __be32 daddr, - __be32 saddr, - __be16 udp_dst_port, - __be32 vx_vni) -{ - struct ethhdr *eth = (struct ethhdr *)buf; - struct iphdr *ip = (struct iphdr *)((char *)eth + sizeof(struct ethhdr)); - struct udphdr *udp = (struct udphdr *)((char *)ip + sizeof(struct iphdr)); - struct vxlanhdr *vxh = (struct vxlanhdr *)((char *)udp + sizeof(struct udphdr)); - - memset(buf, 0, encap_size); - - ether_addr_copy(eth->h_dest, h_dest); - ether_addr_copy(eth->h_source, out_dev->dev_addr); - eth->h_proto = htons(ETH_P_IP); - - ip->daddr = daddr; - ip->saddr = saddr; - - ip->tos = tos; - ip->ttl = ttl; - ip->protocol = IPPROTO_UDP; - ip->version = 0x4; - ip->ihl = 0x5; - - udp->dest = udp_dst_port; - vxh->vx_flags = VXLAN_HF_VNI; - vxh->vx_vni = vxlan_vni_field(vx_vni); -} - -static void gen_vxlan_header_ipv6(struct net_device *out_dev, - char buf[], int encap_size, - unsigned char h_dest[ETH_ALEN], - u8 tos, u8 ttl, - struct in6_addr *daddr, - struct in6_addr *saddr, - __be16 udp_dst_port, - __be32 vx_vni) -{ - struct ethhdr *eth = (struct ethhdr *)buf; - struct ipv6hdr *ip6h = (struct ipv6hdr *)((char *)eth + sizeof(struct ethhdr)); - struct udphdr *udp = (struct udphdr *)((char *)ip6h + sizeof(struct ipv6hdr)); - struct vxlanhdr *vxh = (struct vxlanhdr *)((char *)udp + sizeof(struct udphdr)); - - memset(buf, 0, encap_size); - - ether_addr_copy(eth->h_dest, h_dest); - ether_addr_copy(eth->h_source, out_dev->dev_addr); - eth->h_proto = htons(ETH_P_IPV6); - - ip6_flow_hdr(ip6h, tos, 0); - /* the HW fills up ipv6 payload len */ - ip6h->nexthdr = IPPROTO_UDP; - ip6h->hop_limit = ttl; - ip6h->daddr = *daddr; - ip6h->saddr = *saddr; - - udp->dest = udp_dst_port; - vxh->vx_flags = VXLAN_HF_VNI; - vxh->vx_vni = vxlan_vni_field(vx_vni); -} - -static int mlx5e_create_encap_header_ipv4(struct mlx5e_priv *priv, - struct net_device *mirred_dev, - struct mlx5e_encap_entry *e) -{ - int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size); - int ipv4_encap_size = ETH_HLEN + sizeof(struct iphdr) + VXLAN_HLEN; - struct ip_tunnel_key *tun_key = &e->tun_info.key; - struct net_device *out_dev; - struct neighbour *n = NULL; - struct flowi4 fl4 = {}; - u8 nud_state, tos, ttl; - char *encap_header; - int err; - if (max_encap_size < ipv4_encap_size) { - mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n", - ipv4_encap_size, max_encap_size); - return -EOPNOTSUPP; - } - - encap_header = kzalloc(ipv4_encap_size, GFP_KERNEL); - if (!encap_header) - return -ENOMEM; - - switch (e->tunnel_type) { - case MLX5_REFORMAT_TYPE_L2_TO_VXLAN: - fl4.flowi4_proto = IPPROTO_UDP; - fl4.fl4_dport = tun_key->tp_dst; - break; - default: - err = -EOPNOTSUPP; - goto free_encap; - } - - tos = tun_key->tos; - ttl 
= tun_key->ttl; - - fl4.flowi4_tos = tun_key->tos; - fl4.daddr = tun_key->u.ipv4.dst; - fl4.saddr = tun_key->u.ipv4.src; - - err = mlx5e_route_lookup_ipv4(priv, mirred_dev, &out_dev, - &fl4, &n, &ttl); - if (err) - goto free_encap; - - /* used by mlx5e_detach_encap to lookup a neigh hash table - * entry in the neigh hash table when a user deletes a rule - */ - e->m_neigh.dev = n->dev; - e->m_neigh.family = n->ops->family; - memcpy(&e->m_neigh.dst_ip, n->primary_key, n->tbl->key_len); - e->out_dev = out_dev; - - /* It's importent to add the neigh to the hash table before checking - * the neigh validity state. So if we'll get a notification, in case the - * neigh changes it's validity state, we would find the relevant neigh - * in the hash. - */ - err = mlx5e_rep_encap_entry_attach(netdev_priv(out_dev), e); - if (err) - goto free_encap; - - read_lock_bh(&n->lock); - nud_state = n->nud_state; - ether_addr_copy(e->h_dest, n->ha); - read_unlock_bh(&n->lock); - - switch (e->tunnel_type) { - case MLX5_REFORMAT_TYPE_L2_TO_VXLAN: - gen_vxlan_header_ipv4(out_dev, encap_header, - ipv4_encap_size, e->h_dest, tos, ttl, - fl4.daddr, - fl4.saddr, tun_key->tp_dst, - tunnel_id_to_key32(tun_key->tun_id)); - break; - default: - err = -EOPNOTSUPP; - goto destroy_neigh_entry; - } - e->encap_size = ipv4_encap_size; - e->encap_header = encap_header; - - if (!(nud_state & NUD_VALID)) { - neigh_event_send(n, NULL); - err = -EAGAIN; - goto out; - } - - err = mlx5_packet_reformat_alloc(priv->mdev, e->tunnel_type, - ipv4_encap_size, encap_header, - MLX5_FLOW_NAMESPACE_FDB, - &e->encap_id); - if (err) - goto destroy_neigh_entry; - - e->flags |= MLX5_ENCAP_ENTRY_VALID; - mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev)); - neigh_release(n); - return err; - -destroy_neigh_entry: - mlx5e_rep_encap_entry_detach(netdev_priv(e->out_dev), e); -free_encap: - kfree(encap_header); -out: - if (n) - neigh_release(n); - return err; -} - -static int mlx5e_create_encap_header_ipv6(struct mlx5e_priv *priv, - struct net_device *mirred_dev, - struct mlx5e_encap_entry *e) -{ - int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size); - int ipv6_encap_size = ETH_HLEN + sizeof(struct ipv6hdr) + VXLAN_HLEN; - struct ip_tunnel_key *tun_key = &e->tun_info.key; - struct net_device *out_dev; - struct neighbour *n = NULL; - struct flowi6 fl6 = {}; - u8 nud_state, tos, ttl; - char *encap_header; - int err; - - if (max_encap_size < ipv6_encap_size) { - mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n", - ipv6_encap_size, max_encap_size); - return -EOPNOTSUPP; - } - - encap_header = kzalloc(ipv6_encap_size, GFP_KERNEL); - if (!encap_header) - return -ENOMEM; - - switch (e->tunnel_type) { - case MLX5_REFORMAT_TYPE_L2_TO_VXLAN: - fl6.flowi6_proto = IPPROTO_UDP; - fl6.fl6_dport = tun_key->tp_dst; - break; - default: - err = -EOPNOTSUPP; - goto free_encap; - } - - tos = tun_key->tos; - ttl = tun_key->ttl; - - fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label); - fl6.daddr = tun_key->u.ipv6.dst; - fl6.saddr = tun_key->u.ipv6.src; - - err = mlx5e_route_lookup_ipv6(priv, mirred_dev, &out_dev, - &fl6, &n, &ttl); - if (err) - goto free_encap; - - /* used by mlx5e_detach_encap to lookup a neigh hash table - * entry in the neigh hash table when a user deletes a rule - */ - e->m_neigh.dev = n->dev; - e->m_neigh.family = n->ops->family; - memcpy(&e->m_neigh.dst_ip, n->primary_key, n->tbl->key_len); - e->out_dev = out_dev; - - /* It's importent to add the neigh to the hash table before checking - * 
the neigh validity state. So if we'll get a notification, in case the - * neigh changes it's validity state, we would find the relevant neigh - * in the hash. - */ - err = mlx5e_rep_encap_entry_attach(netdev_priv(out_dev), e); - if (err) - goto free_encap; - - read_lock_bh(&n->lock); - nud_state = n->nud_state; - ether_addr_copy(e->h_dest, n->ha); - read_unlock_bh(&n->lock); - - switch (e->tunnel_type) { - case MLX5_REFORMAT_TYPE_L2_TO_VXLAN: - gen_vxlan_header_ipv6(out_dev, encap_header, - ipv6_encap_size, e->h_dest, tos, ttl, - &fl6.daddr, - &fl6.saddr, tun_key->tp_dst, - tunnel_id_to_key32(tun_key->tun_id)); - break; - default: - err = -EOPNOTSUPP; - goto destroy_neigh_entry; - } - - e->encap_size = ipv6_encap_size; - e->encap_header = encap_header; - - if (!(nud_state & NUD_VALID)) { - neigh_event_send(n, NULL); - err = -EAGAIN; - goto out; - } - - err = mlx5_packet_reformat_alloc(priv->mdev, e->tunnel_type, - ipv6_encap_size, encap_header, - MLX5_FLOW_NAMESPACE_FDB, - &e->encap_id); - if (err) - goto destroy_neigh_entry; - - e->flags |= MLX5_ENCAP_ENTRY_VALID; - mlx5e_rep_queue_neigh_stats_work(netdev_priv(out_dev)); - neigh_release(n); - return err; - -destroy_neigh_entry: - mlx5e_rep_encap_entry_detach(netdev_priv(e->out_dev), e); -free_encap: - kfree(encap_header); -out: - if (n) - neigh_release(n); - return err; -} static int mlx5e_attach_encap(struct mlx5e_priv *priv, struct ip_tunnel_info *tun_info, struct net_device *mirred_dev, struct net_device **encap_dev, struct mlx5e_tc_flow *flow, - struct netlink_ext_ack *extack) + struct netlink_ext_ack *extack, + int out_index) { struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; unsigned short family = ip_tunnel_info_af(tun_info); struct mlx5_esw_flow_attr *attr = flow->esw_attr; struct ip_tunnel_key *key = &tun_info->key; struct mlx5e_encap_entry *e; - int tunnel_type, err = 0; uintptr_t hash_key; bool found = false; - - /* udp dst port must be set */ - if (!memchr_inv(&key->tp_dst, 0, sizeof(key->tp_dst))) - goto vxlan_encap_offload_err; - - /* setting udp src port isn't supported */ - if (memchr_inv(&key->tp_src, 0, sizeof(key->tp_src))) { -vxlan_encap_offload_err: - NL_SET_ERR_MSG_MOD(extack, - "must set udp dst port and not set udp src port"); - netdev_warn(priv->netdev, - "must set udp dst port and not set udp src port\n"); - return -EOPNOTSUPP; - } - - if (mlx5_vxlan_lookup_port(priv->mdev->vxlan, be16_to_cpu(key->tp_dst)) && - MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap)) { - tunnel_type = MLX5_REFORMAT_TYPE_L2_TO_VXLAN; - } else { - NL_SET_ERR_MSG_MOD(extack, - "port isn't an offloaded vxlan udp dport"); - netdev_warn(priv->netdev, - "%d isn't an offloaded vxlan udp dport\n", be16_to_cpu(key->tp_dst)); - return -EOPNOTSUPP; - } + int err = 0; hash_key = hash_encap_info(key); @@ -2763,13 +2415,16 @@ vxlan_encap_offload_err: return -ENOMEM; e->tun_info = *tun_info; - e->tunnel_type = tunnel_type; + err = mlx5e_tc_tun_init_encap_attr(mirred_dev, priv, e, extack); + if (err) + goto out_err; + INIT_LIST_HEAD(&e->flows); if (family == AF_INET) - err = mlx5e_create_encap_header_ipv4(priv, mirred_dev, e); + err = mlx5e_tc_tun_create_header_ipv4(priv, mirred_dev, e); else if (family == AF_INET6) - err = mlx5e_create_encap_header_ipv6(priv, mirred_dev, e); + err = mlx5e_tc_tun_create_header_ipv6(priv, mirred_dev, e); if (err && err != -EAGAIN) goto out_err; @@ -2777,12 +2432,15 @@ vxlan_encap_offload_err: hash_add_rcu(esw->offloads.encap_tbl, &e->encap_hlist, hash_key); attach_flow: - list_add(&flow->encap, &e->flows); + 
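+	/* A flow may now sit on several encap entries at once, one per encap
+	 * destination: encaps[out_index] embeds the list node together with
+	 * its own index, so walkers of e->flows can recover the flow via
+	 * container_of(efi, struct mlx5e_tc_flow, encaps[efi->index]), as in
+	 * mlx5e_tc_encap_flows_del() above.
+	 */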
list_add(&flow->encaps[out_index].list, &e->flows); + flow->encaps[out_index].index = out_index; *encap_dev = e->out_dev; - if (e->flags & MLX5_ENCAP_ENTRY_VALID) - attr->encap_id = e->encap_id; - else + if (e->flags & MLX5_ENCAP_ENTRY_VALID) { + attr->dests[out_index].encap_id = e->encap_id; + attr->dests[out_index].flags |= MLX5_ESW_DEST_ENCAP_VALID; + } else { err = -EAGAIN; + } return err; @@ -2851,7 +2509,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, struct mlx5e_rep_priv *rpriv = priv->ppriv; struct ip_tunnel_info *info = NULL; const struct tc_action *a; - LIST_HEAD(actions); bool encap = false; u32 action = 0; int err, i; @@ -2876,7 +2533,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, return err; action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; - attr->mirror_count = attr->out_count; + attr->split_count = attr->out_count; continue; } @@ -2894,6 +2551,13 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, struct net_device *out_dev; out_dev = tcf_mirred_dev(a); + if (!out_dev) { + /* out_dev is NULL when filters with + * non-existing mirred device are replayed to + * the driver. + */ + return -EINVAL; + } if (attr->out_count >= MLX5_MAX_FLOW_FWD_VPORTS) { NL_SET_ERR_MSG_MOD(extack, @@ -2903,23 +2567,47 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, return -EOPNOTSUPP; } + action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | + MLX5_FLOW_CONTEXT_ACTION_COUNT; if (switchdev_port_same_parent_id(priv->netdev, out_dev) || is_merged_eswitch_dev(priv, out_dev)) { - action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | - MLX5_FLOW_CONTEXT_ACTION_COUNT; + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct net_device *uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH); + struct net_device *uplink_upper = netdev_master_upper_dev_get(uplink_dev); + + if (uplink_upper && + netif_is_lag_master(uplink_upper) && + uplink_upper == out_dev) + out_dev = uplink_dev; + + if (!mlx5e_eswitch_rep(out_dev)) + return -EOPNOTSUPP; + out_priv = netdev_priv(out_dev); rpriv = out_priv->ppriv; - attr->out_rep[attr->out_count] = rpriv->rep; - attr->out_mdev[attr->out_count++] = out_priv->mdev; + attr->dests[attr->out_count].rep = rpriv->rep; + attr->dests[attr->out_count].mdev = out_priv->mdev; + attr->out_count++; } else if (encap) { - parse_attr->mirred_ifindex = out_dev->ifindex; - parse_attr->tun_info = *info; + parse_attr->mirred_ifindex[attr->out_count] = + out_dev->ifindex; + parse_attr->tun_info[attr->out_count] = *info; + encap = false; attr->parse_attr = parse_attr; - action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT | - MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | - MLX5_FLOW_CONTEXT_ACTION_COUNT; - /* attr->out_rep is resolved when we handle encap */ + attr->dests[attr->out_count].flags |= + MLX5_ESW_DEST_ENCAP; + attr->out_count++; + /* attr->dests[].rep is resolved when we + * handle encap + */ + } else if (parse_attr->filter_dev != priv->netdev) { + /* All mlx5 devices are called to configure + * high level device filters. 
Therefore, the + * *attempt* to install a filter on invalid + * eswitch should not trigger an explicit error + */ + return -EINVAL; } else { NL_SET_ERR_MSG_MOD(extack, "devices are not on same switch HW, can't offload forwarding"); @@ -2936,7 +2624,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, encap = true; else return -EOPNOTSUPP; - attr->mirror_count = attr->out_count; continue; } @@ -2946,7 +2633,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, if (err) return err; - attr->mirror_count = attr->out_count; + attr->split_count = attr->out_count; continue; } @@ -2988,7 +2675,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts, attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; } - if (attr->mirror_count > 0 && !mlx5_esw_has_fwd_fdb(priv->mdev)) { + if (attr->split_count > 0 && !mlx5_esw_has_fwd_fdb(priv->mdev)) { NL_SET_ERR_MSG_MOD(extack, "current firmware doesn't support split rule for port mirroring"); netdev_warn_once(priv->netdev, "current firmware doesn't support split rule for port mirroring\n"); @@ -3007,6 +2694,11 @@ static void get_flags(int flags, u16 *flow_flags) if (flags & MLX5E_TC_EGRESS) __flow_flags |= MLX5E_TC_FLOW_EGRESS; + if (flags & MLX5E_TC_ESW_OFFLOAD) + __flow_flags |= MLX5E_TC_FLOW_ESWITCH; + if (flags & MLX5E_TC_NIC_OFFLOAD) + __flow_flags |= MLX5E_TC_FLOW_NIC; + *flow_flags = __flow_flags; } @@ -3017,18 +2709,32 @@ static const struct rhashtable_params tc_ht_params = { .automatic_shrinking = true, }; -static struct rhashtable *get_tc_ht(struct mlx5e_priv *priv) +static struct rhashtable *get_tc_ht(struct mlx5e_priv *priv, int flags) { struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; struct mlx5e_rep_priv *uplink_rpriv; - if (MLX5_VPORT_MANAGER(priv->mdev) && esw->mode == SRIOV_OFFLOADS) { + if (flags & MLX5E_TC_ESW_OFFLOAD) { uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - return &uplink_rpriv->tc_ht; - } else + return &uplink_rpriv->uplink_priv.tc_ht; + } else /* NIC offload */ return &priv->fs.tc.ht; } +static bool is_peer_flow_needed(struct mlx5e_tc_flow *flow) +{ + struct mlx5_esw_flow_attr *attr = flow->esw_attr; + bool is_rep_ingress = attr->in_rep->vport != FDB_UPLINK_VPORT && + flow->flags & MLX5E_TC_FLOW_INGRESS; + bool act_is_encap = !!(attr->action & + MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT); + bool esw_paired = mlx5_devcom_is_paired(attr->in_mdev->priv.devcom, + MLX5_DEVCOM_ESW_OFFLOADS); + + return esw_paired && mlx5_lag_is_sriov(attr->in_mdev) && + (is_rep_ingress || act_is_encap); +} + static int mlx5e_alloc_flow(struct mlx5e_priv *priv, int attr_size, struct tc_cls_flower_offload *f, u16 flow_flags, @@ -3050,10 +2756,6 @@ mlx5e_alloc_flow(struct mlx5e_priv *priv, int attr_size, flow->flags = flow_flags; flow->priv = priv; - err = parse_cls_flower(priv, flow, &parse_attr->spec, f); - if (err) - goto err_free; - *__flow = flow; *__parse_attr = parse_attr; @@ -3066,12 +2768,16 @@ err_free: } static int -mlx5e_add_fdb_flow(struct mlx5e_priv *priv, - struct tc_cls_flower_offload *f, - u16 flow_flags, - struct mlx5e_tc_flow **__flow) +__mlx5e_add_fdb_flow(struct mlx5e_priv *priv, + struct tc_cls_flower_offload *f, + u16 flow_flags, + struct net_device *filter_dev, + struct mlx5_eswitch_rep *in_rep, + struct mlx5_core_dev *in_mdev, + struct mlx5e_tc_flow **__flow) { struct netlink_ext_ack *extack = f->common.extack; + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; struct mlx5e_tc_flow_parse_attr *parse_attr; struct 
mlx5e_tc_flow *flow; int attr_size, err; @@ -3082,6 +2788,12 @@ mlx5e_add_fdb_flow(struct mlx5e_priv *priv, &parse_attr, &flow); if (err) goto out; + parse_attr->filter_dev = filter_dev; + flow->esw_attr->parse_attr = parse_attr; + err = parse_cls_flower(flow->priv, flow, &parse_attr->spec, + f, filter_dev); + if (err) + goto err_free; flow->esw_attr->chain = f->common.chain_index; flow->esw_attr->prio = TC_H_MAJ(f->common.prio) >> 16; @@ -3089,14 +2801,19 @@ if (err) goto err_free; + flow->esw_attr->in_rep = in_rep; + flow->esw_attr->in_mdev = in_mdev; + + if (MLX5_CAP_ESW(esw->dev, counter_eswitch_affinity) == + MLX5_COUNTER_SOURCE_ESWITCH) + flow->esw_attr->counter_dev = in_mdev; + else + flow->esw_attr->counter_dev = priv->mdev; + err = mlx5e_tc_add_fdb_flow(priv, parse_attr, flow, extack); if (err) goto err_free; - if (!(flow->esw_attr->action & - MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT)) - kvfree(parse_attr); - *__flow = flow; return 0; @@ -3108,10 +2825,92 @@ out: return err; } +static int mlx5e_tc_add_fdb_peer_flow(struct tc_cls_flower_offload *f, + struct mlx5e_tc_flow *flow) +{ + struct mlx5e_priv *priv = flow->priv, *peer_priv; + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch, *peer_esw; + struct mlx5_devcom *devcom = priv->mdev->priv.devcom; + struct mlx5e_tc_flow_parse_attr *parse_attr; + struct mlx5e_rep_priv *peer_urpriv; + struct mlx5e_tc_flow *peer_flow; + struct mlx5_core_dev *in_mdev; + int err = 0; + + peer_esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); + if (!peer_esw) + return -ENODEV; + + peer_urpriv = mlx5_eswitch_get_uplink_priv(peer_esw, REP_ETH); + peer_priv = netdev_priv(peer_urpriv->netdev); + + /* in_mdev is assigned the mdev on which the packet originated. + * Packets redirected to the uplink therefore keep the mdev of the + * original flow, while packets redirected from the uplink use the + * peer mdev. 
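+	 * (Concretely, in the branch below: an in_rep on FDB_UPLINK_VPORT
+	 * selects peer_priv->mdev, any other vport keeps priv->mdev.)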
+ */ + if (flow->esw_attr->in_rep->vport == FDB_UPLINK_VPORT) + in_mdev = peer_priv->mdev; + else + in_mdev = priv->mdev; + + parse_attr = flow->esw_attr->parse_attr; + err = __mlx5e_add_fdb_flow(peer_priv, f, flow->flags, + parse_attr->filter_dev, + flow->esw_attr->in_rep, in_mdev, &peer_flow); + if (err) + goto out; + + flow->peer_flow = peer_flow; + flow->flags |= MLX5E_TC_FLOW_DUP; + mutex_lock(&esw->offloads.peer_mutex); + list_add_tail(&flow->peer, &esw->offloads.peer_flows); + mutex_unlock(&esw->offloads.peer_mutex); + +out: + mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); + return err; +} + +static int +mlx5e_add_fdb_flow(struct mlx5e_priv *priv, + struct tc_cls_flower_offload *f, + u16 flow_flags, + struct net_device *filter_dev, + struct mlx5e_tc_flow **__flow) +{ + struct mlx5e_rep_priv *rpriv = priv->ppriv; + struct mlx5_eswitch_rep *in_rep = rpriv->rep; + struct mlx5_core_dev *in_mdev = priv->mdev; + struct mlx5e_tc_flow *flow; + int err; + + err = __mlx5e_add_fdb_flow(priv, f, flow_flags, filter_dev, in_rep, + in_mdev, &flow); + if (err) + goto out; + + if (is_peer_flow_needed(flow)) { + err = mlx5e_tc_add_fdb_peer_flow(f, flow); + if (err) { + mlx5e_tc_del_fdb_flow(priv, flow); + goto out; + } + } + + *__flow = flow; + + return 0; + +out: + return err; +} + static int mlx5e_add_nic_flow(struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, u16 flow_flags, + struct net_device *filter_dev, struct mlx5e_tc_flow **__flow) { struct netlink_ext_ack *extack = f->common.extack; @@ -3130,6 +2929,12 @@ mlx5e_add_nic_flow(struct mlx5e_priv *priv, if (err) goto out; + parse_attr->filter_dev = filter_dev; + err = parse_cls_flower(flow->priv, flow, &parse_attr->spec, + f, filter_dev); + if (err) + goto err_free; + err = parse_tc_nic_actions(priv, f->exts, parse_attr, flow, extack); if (err) goto err_free; @@ -3155,6 +2960,7 @@ static int mlx5e_tc_add_flow(struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, int flags, + struct net_device *filter_dev, struct mlx5e_tc_flow **flow) { struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; @@ -3167,18 +2973,20 @@ mlx5e_tc_add_flow(struct mlx5e_priv *priv, return -EOPNOTSUPP; if (esw && esw->mode == SRIOV_OFFLOADS) - err = mlx5e_add_fdb_flow(priv, f, flow_flags, flow); + err = mlx5e_add_fdb_flow(priv, f, flow_flags, + filter_dev, flow); else - err = mlx5e_add_nic_flow(priv, f, flow_flags, flow); + err = mlx5e_add_nic_flow(priv, f, flow_flags, + filter_dev, flow); return err; } -int mlx5e_configure_flower(struct mlx5e_priv *priv, +int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, int flags) { struct netlink_ext_ack *extack = f->common.extack; - struct rhashtable *tc_ht = get_tc_ht(priv); + struct rhashtable *tc_ht = get_tc_ht(priv, flags); struct mlx5e_tc_flow *flow; int err = 0; @@ -3192,7 +3000,7 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv, goto out; } - err = mlx5e_tc_add_flow(priv, f, flags, &flow); + err = mlx5e_tc_add_flow(priv, f, flags, dev, &flow); if (err) goto out; @@ -3220,10 +3028,10 @@ static bool same_flow_direction(struct mlx5e_tc_flow *flow, int flags) return false; } -int mlx5e_delete_flower(struct mlx5e_priv *priv, +int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, int flags) { - struct rhashtable *tc_ht = get_tc_ht(priv); + struct rhashtable *tc_ht = get_tc_ht(priv, flags); struct mlx5e_tc_flow *flow; flow = rhashtable_lookup_fast(tc_ht, &f->cookie, tc_ht_params); @@ -3239,10 
+3047,12 @@ int mlx5e_delete_flower(struct mlx5e_priv *priv, return 0; } -int mlx5e_stats_flower(struct mlx5e_priv *priv, +int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, int flags) { - struct rhashtable *tc_ht = get_tc_ht(priv); + struct mlx5_devcom *devcom = priv->mdev->priv.devcom; + struct rhashtable *tc_ht = get_tc_ht(priv, flags); + struct mlx5_eswitch *peer_esw; struct mlx5e_tc_flow *flow; struct mlx5_fc *counter; u64 bytes; @@ -3262,6 +3072,27 @@ int mlx5e_stats_flower(struct mlx5e_priv *priv, mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse); + peer_esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); + if (!peer_esw) + goto out; + + if ((flow->flags & MLX5E_TC_FLOW_DUP) && + (flow->peer_flow->flags & MLX5E_TC_FLOW_OFFLOADED)) { + u64 bytes2; + u64 packets2; + u64 lastuse2; + + counter = mlx5e_tc_get_counter(flow->peer_flow); + mlx5_fc_query_cached(counter, &bytes2, &packets2, &lastuse2); + + bytes += bytes2; + packets += packets2; + lastuse = max_t(u64, lastuse, lastuse2); + } + + mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); + +out: tcf_exts_stats_update(f->exts, bytes, packets, lastuse); return 0; @@ -3350,7 +3181,7 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv) if (tc->netdevice_nb.notifier_call) unregister_netdevice_notifier(&tc->netdevice_nb); - rhashtable_free_and_destroy(&tc->ht, _mlx5e_tc_del_flow, NULL); + rhashtable_destroy(&tc->ht); if (!IS_ERR_OR_NULL(tc->t)) { mlx5_destroy_flow_table(tc->t); @@ -3368,9 +3199,17 @@ void mlx5e_tc_esw_cleanup(struct rhashtable *tc_ht) rhashtable_free_and_destroy(tc_ht, _mlx5e_tc_del_flow, NULL); } -int mlx5e_tc_num_filters(struct mlx5e_priv *priv) +int mlx5e_tc_num_filters(struct mlx5e_priv *priv, int flags) { - struct rhashtable *tc_ht = get_tc_ht(priv); + struct rhashtable *tc_ht = get_tc_ht(priv, flags); return atomic_read(&tc_ht->nelems); } + +void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw) +{ + struct mlx5e_tc_flow *flow, *tmp; + + list_for_each_entry_safe(flow, tmp, &esw->offloads.peer_flows, peer) + __mlx5e_tc_del_fdb_peer_flow(flow); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h index 49436bf3b80a..d2d87f978c06 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h @@ -42,7 +42,9 @@ enum { MLX5E_TC_INGRESS = BIT(0), MLX5E_TC_EGRESS = BIT(1), - MLX5E_TC_LAST_EXPORTED_BIT = 1, + MLX5E_TC_NIC_OFFLOAD = BIT(2), + MLX5E_TC_ESW_OFFLOAD = BIT(3), + MLX5E_TC_LAST_EXPORTED_BIT = 3, }; int mlx5e_tc_nic_init(struct mlx5e_priv *priv); @@ -51,12 +53,12 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv); int mlx5e_tc_esw_init(struct rhashtable *tc_ht); void mlx5e_tc_esw_cleanup(struct rhashtable *tc_ht); -int mlx5e_configure_flower(struct mlx5e_priv *priv, +int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, int flags); -int mlx5e_delete_flower(struct mlx5e_priv *priv, +int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, int flags); -int mlx5e_stats_flower(struct mlx5e_priv *priv, +int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv, struct tc_cls_flower_offload *f, int flags); struct mlx5e_encap_entry; @@ -68,12 +70,13 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv, struct mlx5e_neigh_hash_entry; void mlx5e_tc_update_neigh_used_value(struct 
mlx5e_neigh_hash_entry *nhe); -int mlx5e_tc_num_filters(struct mlx5e_priv *priv); +int mlx5e_tc_num_filters(struct mlx5e_priv *priv, int flags); + #else /* CONFIG_MLX5_ESWITCH */ static inline int mlx5e_tc_nic_init(struct mlx5e_priv *priv) { return 0; } static inline void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv) {} -static inline int mlx5e_tc_num_filters(struct mlx5e_priv *priv) { return 0; } +static inline int mlx5e_tc_num_filters(struct mlx5e_priv *priv, int flags) { return 0; } #endif #endif /* __MLX5_EN_TC_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index 6dacaeba2fbf..598ad7e4d5c9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -127,7 +127,7 @@ u16 mlx5e_select_queue(struct net_device *dev, struct sk_buff *skb, else #endif if (skb_vlan_tag_present(skb)) - up = skb->vlan_tci >> VLAN_PRIO_SHIFT; + up = skb_vlan_tag_get_prio(skb); /* channel_ix can be larger than num_channels since * dev->num_real_tx_queues = num_channels * num_tc @@ -459,9 +459,10 @@ static void mlx5e_dump_error_cqe(struct mlx5e_txqsq *sq, u32 ci = mlx5_cqwq_get_ci(&sq->cq.wq); netdev_err(sq->channel->netdev, - "Error cqe on cqn 0x%x, ci 0x%x, sqn 0x%x, syndrome 0x%x, vendor syndrome 0x%x\n", - sq->cq.mcq.cqn, ci, sq->sqn, err_cqe->syndrome, - err_cqe->vendor_err_synd); + "Error cqe on cqn 0x%x, ci 0x%x, sqn 0x%x, opcode 0x%x, syndrome 0x%x, vendor syndrome 0x%x\n", + sq->cq.mcq.cqn, ci, sq->sqn, + get_cqe_opcode((struct mlx5_cqe64 *)err_cqe), + err_cqe->syndrome, err_cqe->vendor_err_synd); mlx5_dump_err_cqe(sq->cq.mdev, err_cqe); } @@ -507,7 +508,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget) wqe_counter = be16_to_cpu(cqe->wqe_counter); - if (unlikely(cqe->op_own >> 4 == MLX5_CQE_REQ_ERR)) { + if (unlikely(get_cqe_opcode(cqe) == MLX5_CQE_REQ_ERR)) { if (!test_and_set_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) { mlx5e_dump_error_cqe(sq, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c index 85d517360157..b4af5e19f6ac 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c @@ -76,6 +76,7 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget) struct mlx5e_channel *c = container_of(napi, struct mlx5e_channel, napi); struct mlx5e_ch_stats *ch_stats = c->stats; + struct mlx5e_rq *rq = &c->rq; bool busy = false; int work_done = 0; int i; @@ -85,17 +86,17 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget) for (i = 0; i < c->num_tc; i++) busy |= mlx5e_poll_tx_cq(&c->sq[i].cq, budget); - busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq); + busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq, NULL); if (c->xdp) - busy |= mlx5e_poll_xdpsq_cq(&c->rq.xdpsq.cq); + busy |= mlx5e_poll_xdpsq_cq(&rq->xdpsq.cq, rq); if (likely(budget)) { /* budget=0 means: don't poll rx rings */ - work_done = mlx5e_poll_rx_cq(&c->rq.cq, budget); + work_done = mlx5e_poll_rx_cq(&rq->cq, budget); busy |= work_done == budget; } - busy |= c->rq.post_wqes(&c->rq); + busy |= c->rq.post_wqes(rq); if (busy) { if (likely(mlx5e_channel_no_affinity_change(c))) @@ -115,9 +116,9 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget) mlx5e_cq_arm(&c->sq[i].cq); } - mlx5e_handle_rx_dim(&c->rq); + mlx5e_handle_rx_dim(rq); - mlx5e_cq_arm(&c->rq.cq); + mlx5e_cq_arm(&rq->cq); mlx5e_cq_arm(&c->icosq.cq); mlx5e_cq_arm(&c->xdpsq.cq); diff --git 
a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c index c1e1a16a9b07..ee04aab65a9f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c @@ -31,20 +31,22 @@ */ #include +#include #include #include +#include #include #ifdef CONFIG_RFS_ACCEL #include #endif #include "mlx5_core.h" +#include "lib/eq.h" #include "fpga/core.h" #include "eswitch.h" #include "lib/clock.h" #include "diag/fw_tracer.h" enum { - MLX5_EQE_SIZE = sizeof(struct mlx5_eqe), MLX5_EQE_OWNER_INIT_VAL = 0x1, }; @@ -55,14 +57,32 @@ enum { }; enum { - MLX5_NUM_SPARE_EQE = 0x80, - MLX5_NUM_ASYNC_EQE = 0x1000, - MLX5_NUM_CMD_EQE = 32, - MLX5_NUM_PF_DRAIN = 64, + MLX5_EQ_DOORBEL_OFFSET = 0x40, }; -enum { - MLX5_EQ_DOORBEL_OFFSET = 0x40, +struct mlx5_irq_info { + cpumask_var_t mask; + char name[MLX5_MAX_IRQ_NAME]; + void *context; /* dev_id provided to request_irq */ +}; + +struct mlx5_eq_table { + struct list_head comp_eqs_list; + struct mlx5_eq pages_eq; + struct mlx5_eq cmd_eq; + struct mlx5_eq async_eq; + + struct atomic_notifier_head nh[MLX5_EVENT_TYPE_MAX]; + + /* Since CQ DB is stored in async_eq */ + struct mlx5_nb cq_err_nb; + + struct mutex lock; /* sync async eqs creations */ + int num_comp_vectors; + struct mlx5_irq_info *irq_info; +#ifdef CONFIG_RFS_ACCEL + struct cpu_rmap *rmap; +#endif }; #define MLX5_ASYNC_EVENT_MASK ((1ull << MLX5_EVENT_TYPE_PATH_MIG) | \ @@ -78,17 +98,6 @@ enum { (1ull << MLX5_EVENT_TYPE_SRQ_LAST_WQE) | \ (1ull << MLX5_EVENT_TYPE_SRQ_RQ_LIMIT)) -struct map_eq_in { - u64 mask; - u32 reserved; - u32 unmap_eqn; -}; - -struct cre_des_eq { - u8 reserved[15]; - u8 eqn; -}; - static int mlx5_cmd_destroy_eq(struct mlx5_core_dev *dev, u8 eqn) { u32 out[MLX5_ST_SZ_DW(destroy_eq_out)] = {0}; @@ -99,213 +108,56 @@ static int mlx5_cmd_destroy_eq(struct mlx5_core_dev *dev, u8 eqn) return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); } -static struct mlx5_eqe *get_eqe(struct mlx5_eq *eq, u32 entry) -{ - return mlx5_buf_offset(&eq->buf, entry * MLX5_EQE_SIZE); -} - -static struct mlx5_eqe *next_eqe_sw(struct mlx5_eq *eq) -{ - struct mlx5_eqe *eqe = get_eqe(eq, eq->cons_index & (eq->nent - 1)); - - return ((eqe->owner & 1) ^ !!(eq->cons_index & eq->nent)) ? 
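/* The owner bit of a valid EQE matches the consumer's current wrap
 * parity (cons_index & nent); a mismatch means the entry is still
 * hardware-owned, hence the NULL below. The same ownership test
 * reappears in mlx5_eq_get_eqe() further down in this file.
 */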
NULL : eqe; -} - -static const char *eqe_type_str(u8 type) -{ - switch (type) { - case MLX5_EVENT_TYPE_COMP: - return "MLX5_EVENT_TYPE_COMP"; - case MLX5_EVENT_TYPE_PATH_MIG: - return "MLX5_EVENT_TYPE_PATH_MIG"; - case MLX5_EVENT_TYPE_COMM_EST: - return "MLX5_EVENT_TYPE_COMM_EST"; - case MLX5_EVENT_TYPE_SQ_DRAINED: - return "MLX5_EVENT_TYPE_SQ_DRAINED"; - case MLX5_EVENT_TYPE_SRQ_LAST_WQE: - return "MLX5_EVENT_TYPE_SRQ_LAST_WQE"; - case MLX5_EVENT_TYPE_SRQ_RQ_LIMIT: - return "MLX5_EVENT_TYPE_SRQ_RQ_LIMIT"; - case MLX5_EVENT_TYPE_CQ_ERROR: - return "MLX5_EVENT_TYPE_CQ_ERROR"; - case MLX5_EVENT_TYPE_WQ_CATAS_ERROR: - return "MLX5_EVENT_TYPE_WQ_CATAS_ERROR"; - case MLX5_EVENT_TYPE_PATH_MIG_FAILED: - return "MLX5_EVENT_TYPE_PATH_MIG_FAILED"; - case MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR: - return "MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR"; - case MLX5_EVENT_TYPE_WQ_ACCESS_ERROR: - return "MLX5_EVENT_TYPE_WQ_ACCESS_ERROR"; - case MLX5_EVENT_TYPE_SRQ_CATAS_ERROR: - return "MLX5_EVENT_TYPE_SRQ_CATAS_ERROR"; - case MLX5_EVENT_TYPE_INTERNAL_ERROR: - return "MLX5_EVENT_TYPE_INTERNAL_ERROR"; - case MLX5_EVENT_TYPE_PORT_CHANGE: - return "MLX5_EVENT_TYPE_PORT_CHANGE"; - case MLX5_EVENT_TYPE_GPIO_EVENT: - return "MLX5_EVENT_TYPE_GPIO_EVENT"; - case MLX5_EVENT_TYPE_PORT_MODULE_EVENT: - return "MLX5_EVENT_TYPE_PORT_MODULE_EVENT"; - case MLX5_EVENT_TYPE_TEMP_WARN_EVENT: - return "MLX5_EVENT_TYPE_TEMP_WARN_EVENT"; - case MLX5_EVENT_TYPE_REMOTE_CONFIG: - return "MLX5_EVENT_TYPE_REMOTE_CONFIG"; - case MLX5_EVENT_TYPE_DB_BF_CONGESTION: - return "MLX5_EVENT_TYPE_DB_BF_CONGESTION"; - case MLX5_EVENT_TYPE_STALL_EVENT: - return "MLX5_EVENT_TYPE_STALL_EVENT"; - case MLX5_EVENT_TYPE_CMD: - return "MLX5_EVENT_TYPE_CMD"; - case MLX5_EVENT_TYPE_PAGE_REQUEST: - return "MLX5_EVENT_TYPE_PAGE_REQUEST"; - case MLX5_EVENT_TYPE_PAGE_FAULT: - return "MLX5_EVENT_TYPE_PAGE_FAULT"; - case MLX5_EVENT_TYPE_PPS_EVENT: - return "MLX5_EVENT_TYPE_PPS_EVENT"; - case MLX5_EVENT_TYPE_NIC_VPORT_CHANGE: - return "MLX5_EVENT_TYPE_NIC_VPORT_CHANGE"; - case MLX5_EVENT_TYPE_FPGA_ERROR: - return "MLX5_EVENT_TYPE_FPGA_ERROR"; - case MLX5_EVENT_TYPE_FPGA_QP_ERROR: - return "MLX5_EVENT_TYPE_FPGA_QP_ERROR"; - case MLX5_EVENT_TYPE_GENERAL_EVENT: - return "MLX5_EVENT_TYPE_GENERAL_EVENT"; - case MLX5_EVENT_TYPE_DEVICE_TRACER: - return "MLX5_EVENT_TYPE_DEVICE_TRACER"; - default: - return "Unrecognized event"; - } -} - -static enum mlx5_dev_event port_subtype_event(u8 subtype) -{ - switch (subtype) { - case MLX5_PORT_CHANGE_SUBTYPE_DOWN: - return MLX5_DEV_EVENT_PORT_DOWN; - case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE: - return MLX5_DEV_EVENT_PORT_UP; - case MLX5_PORT_CHANGE_SUBTYPE_INITIALIZED: - return MLX5_DEV_EVENT_PORT_INITIALIZED; - case MLX5_PORT_CHANGE_SUBTYPE_LID: - return MLX5_DEV_EVENT_LID_CHANGE; - case MLX5_PORT_CHANGE_SUBTYPE_PKEY: - return MLX5_DEV_EVENT_PKEY_CHANGE; - case MLX5_PORT_CHANGE_SUBTYPE_GUID: - return MLX5_DEV_EVENT_GUID_CHANGE; - case MLX5_PORT_CHANGE_SUBTYPE_CLIENT_REREG: - return MLX5_DEV_EVENT_CLIENT_REREG; - } - return -1; -} - -static void eq_update_ci(struct mlx5_eq *eq, int arm) +/* caller must eventually call mlx5_cq_put on the returned cq */ +static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn) { - __be32 __iomem *addr = eq->doorbell + (arm ? 
0 : 2); - u32 val = (eq->cons_index & 0xffffff) | (eq->eqn << 24); - - __raw_writel((__force u32)cpu_to_be32(val), addr); - /* We still want ordering, just not swabbing, so add a barrier */ - mb(); -} + struct mlx5_cq_table *table = &eq->cq_table; + struct mlx5_core_cq *cq = NULL; -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING -static void eqe_pf_action(struct work_struct *work) -{ - struct mlx5_pagefault *pfault = container_of(work, - struct mlx5_pagefault, - work); - struct mlx5_eq *eq = pfault->eq; + spin_lock(&table->lock); + cq = radix_tree_lookup(&table->tree, cqn); + if (likely(cq)) + mlx5_cq_hold(cq); + spin_unlock(&table->lock); - mlx5_core_page_fault(eq->dev, pfault); - mempool_free(pfault, eq->pf_ctx.pool); + return cq; } -static void eq_pf_process(struct mlx5_eq *eq) +static irqreturn_t mlx5_eq_comp_int(int irq, void *eq_ptr) { - struct mlx5_core_dev *dev = eq->dev; - struct mlx5_eqe_page_fault *pf_eqe; - struct mlx5_pagefault *pfault; + struct mlx5_eq_comp *eq_comp = eq_ptr; + struct mlx5_eq *eq = eq_ptr; struct mlx5_eqe *eqe; int set_ci = 0; + u32 cqn = -1; while ((eqe = next_eqe_sw(eq))) { - pfault = mempool_alloc(eq->pf_ctx.pool, GFP_ATOMIC); - if (!pfault) { - schedule_work(&eq->pf_ctx.work); - break; - } - + struct mlx5_core_cq *cq; + /* Make sure we read EQ entry contents after we've + * checked the ownership bit. + */ dma_rmb(); - pf_eqe = &eqe->data.page_fault; - pfault->event_subtype = eqe->sub_type; - pfault->bytes_committed = be32_to_cpu(pf_eqe->bytes_committed); - - mlx5_core_dbg(dev, - "PAGE_FAULT: subtype: 0x%02x, bytes_committed: 0x%06x\n", - eqe->sub_type, pfault->bytes_committed); - - switch (eqe->sub_type) { - case MLX5_PFAULT_SUBTYPE_RDMA: - /* RDMA based event */ - pfault->type = - be32_to_cpu(pf_eqe->rdma.pftype_token) >> 24; - pfault->token = - be32_to_cpu(pf_eqe->rdma.pftype_token) & - MLX5_24BIT_MASK; - pfault->rdma.r_key = - be32_to_cpu(pf_eqe->rdma.r_key); - pfault->rdma.packet_size = - be16_to_cpu(pf_eqe->rdma.packet_length); - pfault->rdma.rdma_op_len = - be32_to_cpu(pf_eqe->rdma.rdma_op_len); - pfault->rdma.rdma_va = - be64_to_cpu(pf_eqe->rdma.rdma_va); - mlx5_core_dbg(dev, - "PAGE_FAULT: type:0x%x, token: 0x%06x, r_key: 0x%08x\n", - pfault->type, pfault->token, - pfault->rdma.r_key); - mlx5_core_dbg(dev, - "PAGE_FAULT: rdma_op_len: 0x%08x, rdma_va: 0x%016llx\n", - pfault->rdma.rdma_op_len, - pfault->rdma.rdma_va); - break; - - case MLX5_PFAULT_SUBTYPE_WQE: - /* WQE based event */ - pfault->type = - (be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7; - pfault->token = - be32_to_cpu(pf_eqe->wqe.token); - pfault->wqe.wq_num = - be32_to_cpu(pf_eqe->wqe.pftype_wq) & - MLX5_24BIT_MASK; - pfault->wqe.wqe_index = - be16_to_cpu(pf_eqe->wqe.wqe_index); - pfault->wqe.packet_size = - be16_to_cpu(pf_eqe->wqe.packet_length); - mlx5_core_dbg(dev, - "PAGE_FAULT: type:0x%x, token: 0x%06x, wq_num: 0x%06x, wqe_index: 0x%04x\n", - pfault->type, pfault->token, - pfault->wqe.wq_num, - pfault->wqe.wqe_index); - break; - - default: - mlx5_core_warn(dev, - "Unsupported page fault event sub-type: 0x%02hhx\n", - eqe->sub_type); - /* Unsupported page faults should still be - * resolved by the page fault handler - */ + /* Assume (eqe->type) is always MLX5_EVENT_TYPE_COMP */ + cqn = be32_to_cpu(eqe->data.comp.cqn) & 0xffffff; + + cq = mlx5_eq_cq_get(eq, cqn); + if (likely(cq)) { + ++cq->arm_sn; + cq->comp(cq); + mlx5_cq_put(cq); + } else { + mlx5_core_warn(eq->dev, "Completion event for bogus CQ 0x%x\n", cqn); } - pfault->eq = eq; - INIT_WORK(&pfault->work, eqe_pf_action); - 
queue_work(eq->pf_ctx.wq, &pfault->work); - ++eq->cons_index; ++set_ci; + /* The HCA will think the queue has overflowed if we + * don't tell it we've been processing events. We + * create our EQs with MLX5_NUM_SPARE_EQE extra + * entries, so we must update our consumer index at + * least that often. + */ if (unlikely(set_ci >= MLX5_NUM_SPARE_EQE)) { eq_update_ci(eq, 0); set_ci = 0; @@ -313,165 +165,41 @@ static void eq_pf_process(struct mlx5_eq *eq) } eq_update_ci(eq, 1); -} -static irqreturn_t mlx5_eq_pf_int(int irq, void *eq_ptr) -{ - struct mlx5_eq *eq = eq_ptr; - unsigned long flags; - - if (spin_trylock_irqsave(&eq->pf_ctx.lock, flags)) { - eq_pf_process(eq); - spin_unlock_irqrestore(&eq->pf_ctx.lock, flags); - } else { - schedule_work(&eq->pf_ctx.work); - } + if (cqn != -1) + tasklet_schedule(&eq_comp->tasklet_ctx.task); return IRQ_HANDLED; } -/* mempool_refill() was proposed but unfortunately wasn't accepted - * http://lkml.iu.edu/hypermail/linux/kernel/1512.1/05073.html - * Chip workaround. +/* Some architectures don't latch interrupts when they are disabled, so using + * mlx5_eq_poll_irq_disabled could end up losing interrupts while trying to + * avoid losing them. It is not recommended to use it, unless this is the last + * resort. */ -static void mempool_refill(mempool_t *pool) -{ - while (pool->curr_nr < pool->min_nr) - mempool_free(mempool_alloc(pool, GFP_KERNEL), pool); -} - -static void eq_pf_action(struct work_struct *work) -{ - struct mlx5_eq *eq = container_of(work, struct mlx5_eq, pf_ctx.work); - - mempool_refill(eq->pf_ctx.pool); - - spin_lock_irq(&eq->pf_ctx.lock); - eq_pf_process(eq); - spin_unlock_irq(&eq->pf_ctx.lock); -} - -static int init_pf_ctx(struct mlx5_eq_pagefault *pf_ctx, const char *name) -{ - spin_lock_init(&pf_ctx->lock); - INIT_WORK(&pf_ctx->work, eq_pf_action); - - pf_ctx->wq = alloc_ordered_workqueue(name, - WQ_MEM_RECLAIM); - if (!pf_ctx->wq) - return -ENOMEM; - - pf_ctx->pool = mempool_create_kmalloc_pool - (MLX5_NUM_PF_DRAIN, sizeof(struct mlx5_pagefault)); - if (!pf_ctx->pool) - goto err_wq; - - return 0; -err_wq: - destroy_workqueue(pf_ctx->wq); - return -ENOMEM; -} - -int mlx5_core_page_fault_resume(struct mlx5_core_dev *dev, u32 token, - u32 wq_num, u8 type, int error) -{ - u32 out[MLX5_ST_SZ_DW(page_fault_resume_out)] = {0}; - u32 in[MLX5_ST_SZ_DW(page_fault_resume_in)] = {0}; - - MLX5_SET(page_fault_resume_in, in, opcode, - MLX5_CMD_OP_PAGE_FAULT_RESUME); - MLX5_SET(page_fault_resume_in, in, error, !!error); - MLX5_SET(page_fault_resume_in, in, page_fault_type, type); - MLX5_SET(page_fault_resume_in, in, wq_number, wq_num); - MLX5_SET(page_fault_resume_in, in, token, token); - - return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); -} -EXPORT_SYMBOL_GPL(mlx5_core_page_fault_resume); -#endif - -static void general_event_handler(struct mlx5_core_dev *dev, - struct mlx5_eqe *eqe) -{ - switch (eqe->sub_type) { - case MLX5_GENERAL_SUBTYPE_DELAY_DROP_TIMEOUT: - if (dev->event) - dev->event(dev, MLX5_DEV_EVENT_DELAY_DROP_TIMEOUT, 0); - break; - default: - mlx5_core_dbg(dev, "General event with unrecognized subtype: sub_type %d\n", - eqe->sub_type); - } -} - -static void mlx5_temp_warning_event(struct mlx5_core_dev *dev, - struct mlx5_eqe *eqe) -{ - u64 value_lsb; - u64 value_msb; - - value_lsb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_lsb); - value_msb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_msb); - - mlx5_core_warn(dev, - "High temperature on sensors with bit set %llx %llx", - value_msb, value_lsb); -} - -/* caller 
must eventually call mlx5_cq_put on the returned cq */ -static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn) -{ - struct mlx5_cq_table *table = &eq->cq_table; - struct mlx5_core_cq *cq = NULL; - - spin_lock(&table->lock); - cq = radix_tree_lookup(&table->tree, cqn); - if (likely(cq)) - mlx5_cq_hold(cq); - spin_unlock(&table->lock); - - return cq; -} - -static void mlx5_eq_cq_completion(struct mlx5_eq *eq, u32 cqn) -{ - struct mlx5_core_cq *cq = mlx5_eq_cq_get(eq, cqn); - - if (unlikely(!cq)) { - mlx5_core_warn(eq->dev, "Completion event for bogus CQ 0x%x\n", cqn); - return; - } - - ++cq->arm_sn; - - cq->comp(cq); - - mlx5_cq_put(cq); -} - -static void mlx5_eq_cq_event(struct mlx5_eq *eq, u32 cqn, int event_type) +u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq_comp *eq) { - struct mlx5_core_cq *cq = mlx5_eq_cq_get(eq, cqn); - - if (unlikely(!cq)) { - mlx5_core_warn(eq->dev, "Async event for bogus CQ 0x%x\n", cqn); - return; - } + u32 count_eqe; - cq->event(cq, event_type); + disable_irq(eq->core.irqn); + count_eqe = eq->core.cons_index; + mlx5_eq_comp_int(eq->core.irqn, eq); + count_eqe = eq->core.cons_index - count_eqe; + enable_irq(eq->core.irqn); - mlx5_cq_put(cq); + return count_eqe; } -static irqreturn_t mlx5_eq_int(int irq, void *eq_ptr) +static irqreturn_t mlx5_eq_async_int(int irq, void *eq_ptr) { struct mlx5_eq *eq = eq_ptr; - struct mlx5_core_dev *dev = eq->dev; + struct mlx5_eq_table *eqt; + struct mlx5_core_dev *dev; struct mlx5_eqe *eqe; int set_ci = 0; - u32 cqn = -1; - u32 rsn; - u8 port; + + dev = eq->dev; + eqt = dev->priv.eq_table; while ((eqe = next_eqe_sw(eq))) { /* @@ -480,116 +208,12 @@ static irqreturn_t mlx5_eq_int(int irq, void *eq_ptr) */ dma_rmb(); - mlx5_core_dbg(eq->dev, "eqn %d, eqe type %s\n", - eq->eqn, eqe_type_str(eqe->type)); - switch (eqe->type) { - case MLX5_EVENT_TYPE_COMP: - cqn = be32_to_cpu(eqe->data.comp.cqn) & 0xffffff; - mlx5_eq_cq_completion(eq, cqn); - break; - case MLX5_EVENT_TYPE_DCT_DRAINED: - rsn = be32_to_cpu(eqe->data.dct.dctn) & 0xffffff; - rsn |= (MLX5_RES_DCT << MLX5_USER_INDEX_LEN); - mlx5_rsc_event(dev, rsn, eqe->type); - break; - case MLX5_EVENT_TYPE_PATH_MIG: - case MLX5_EVENT_TYPE_COMM_EST: - case MLX5_EVENT_TYPE_SQ_DRAINED: - case MLX5_EVENT_TYPE_SRQ_LAST_WQE: - case MLX5_EVENT_TYPE_WQ_CATAS_ERROR: - case MLX5_EVENT_TYPE_PATH_MIG_FAILED: - case MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR: - case MLX5_EVENT_TYPE_WQ_ACCESS_ERROR: - rsn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff; - rsn |= (eqe->data.qp_srq.type << MLX5_USER_INDEX_LEN); - mlx5_core_dbg(dev, "event %s(%d) arrived on resource 0x%x\n", - eqe_type_str(eqe->type), eqe->type, rsn); - mlx5_rsc_event(dev, rsn, eqe->type); - break; - - case MLX5_EVENT_TYPE_SRQ_RQ_LIMIT: - case MLX5_EVENT_TYPE_SRQ_CATAS_ERROR: - rsn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff; - mlx5_core_dbg(dev, "SRQ event %s(%d): srqn 0x%x\n", - eqe_type_str(eqe->type), eqe->type, rsn); - mlx5_srq_event(dev, rsn, eqe->type); - break; - - case MLX5_EVENT_TYPE_CMD: - mlx5_cmd_comp_handler(dev, be32_to_cpu(eqe->data.cmd.vector), false); - break; + if (likely(eqe->type < MLX5_EVENT_TYPE_MAX)) + atomic_notifier_call_chain(&eqt->nh[eqe->type], eqe->type, eqe); + else + mlx5_core_warn_once(dev, "notifier_call_chain is not setup for eqe: %d\n", eqe->type); - case MLX5_EVENT_TYPE_PORT_CHANGE: - port = (eqe->data.port.port >> 4) & 0xf; - switch (eqe->sub_type) { - case MLX5_PORT_CHANGE_SUBTYPE_DOWN: - case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE: - case MLX5_PORT_CHANGE_SUBTYPE_LID: - case 
MLX5_PORT_CHANGE_SUBTYPE_PKEY: - case MLX5_PORT_CHANGE_SUBTYPE_GUID: - case MLX5_PORT_CHANGE_SUBTYPE_CLIENT_REREG: - case MLX5_PORT_CHANGE_SUBTYPE_INITIALIZED: - if (dev->event) - dev->event(dev, port_subtype_event(eqe->sub_type), - (unsigned long)port); - break; - default: - mlx5_core_warn(dev, "Port event with unrecognized subtype: port %d, sub_type %d\n", - port, eqe->sub_type); - } - break; - case MLX5_EVENT_TYPE_CQ_ERROR: - cqn = be32_to_cpu(eqe->data.cq_err.cqn) & 0xffffff; - mlx5_core_warn(dev, "CQ error on CQN 0x%x, syndrome 0x%x\n", - cqn, eqe->data.cq_err.syndrome); - mlx5_eq_cq_event(eq, cqn, eqe->type); - break; - - case MLX5_EVENT_TYPE_PAGE_REQUEST: - { - u16 func_id = be16_to_cpu(eqe->data.req_pages.func_id); - s32 npages = be32_to_cpu(eqe->data.req_pages.num_pages); - - mlx5_core_dbg(dev, "page request for func 0x%x, npages %d\n", - func_id, npages); - mlx5_core_req_pages_handler(dev, func_id, npages); - } - break; - - case MLX5_EVENT_TYPE_NIC_VPORT_CHANGE: - mlx5_eswitch_vport_event(dev->priv.eswitch, eqe); - break; - - case MLX5_EVENT_TYPE_PORT_MODULE_EVENT: - mlx5_port_module_event(dev, eqe); - break; - - case MLX5_EVENT_TYPE_PPS_EVENT: - mlx5_pps_event(dev, eqe); - break; - - case MLX5_EVENT_TYPE_FPGA_ERROR: - case MLX5_EVENT_TYPE_FPGA_QP_ERROR: - mlx5_fpga_event(dev, eqe->type, &eqe->data.raw); - break; - - case MLX5_EVENT_TYPE_TEMP_WARN_EVENT: - mlx5_temp_warning_event(dev, eqe); - break; - - case MLX5_EVENT_TYPE_GENERAL_EVENT: - general_event_handler(dev, eqe); - break; - - case MLX5_EVENT_TYPE_DEVICE_TRACER: - mlx5_fw_tracer_event(dev, eqe); - break; - - default: - mlx5_core_warn(dev, "Unhandled event 0x%x on EQ 0x%x\n", - eqe->type, eq->eqn); - break; - } + atomic_notifier_call_chain(&eqt->nh[MLX5_EVENT_TYPE_NOTIFY_ANY], eqe->type, eqe); ++eq->cons_index; ++set_ci; @@ -608,30 +232,9 @@ static irqreturn_t mlx5_eq_int(int irq, void *eq_ptr) eq_update_ci(eq, 1); - if (cqn != -1) - tasklet_schedule(&eq->tasklet_ctx.task); - return IRQ_HANDLED; } -/* Some architectures don't latch interrupts when they are disabled, so using - * mlx5_eq_poll_irq_disabled could end up losing interrupts while trying to - * avoid losing them. It is not recommended to use it, unless this is the last - * resort. 
- */ -u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq *eq) -{ - u32 count_eqe; - - disable_irq(eq->irqn); - count_eqe = eq->cons_index; - mlx5_eq_int(eq->irqn, eq); - count_eqe = eq->cons_index - count_eqe; - enable_irq(eq->irqn); - - return count_eqe; -} - static void init_eq_buf(struct mlx5_eq *eq) { struct mlx5_eqe *eqe; @@ -643,39 +246,35 @@ static void init_eq_buf(struct mlx5_eq *eq) } } -int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx, - int nent, u64 mask, const char *name, - enum mlx5_eq_type type) +static int +create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, const char *name, + struct mlx5_eq_param *param) { + struct mlx5_eq_table *eq_table = dev->priv.eq_table; struct mlx5_cq_table *cq_table = &eq->cq_table; u32 out[MLX5_ST_SZ_DW(create_eq_out)] = {0}; struct mlx5_priv *priv = &dev->priv; - irq_handler_t handler; + u8 vecidx = param->index; __be64 *pas; void *eqc; int inlen; u32 *in; int err; + if (eq_table->irq_info[vecidx].context) + return -EEXIST; + /* Init CQ table */ memset(cq_table, 0, sizeof(*cq_table)); spin_lock_init(&cq_table->lock); INIT_RADIX_TREE(&cq_table->tree, GFP_ATOMIC); - eq->type = type; - eq->nent = roundup_pow_of_two(nent + MLX5_NUM_SPARE_EQE); + eq->nent = roundup_pow_of_two(param->nent + MLX5_NUM_SPARE_EQE); eq->cons_index = 0; err = mlx5_buf_alloc(dev, eq->nent * MLX5_EQE_SIZE, &eq->buf); if (err) return err; -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - if (type == MLX5_EQ_TYPE_PF) - handler = mlx5_eq_pf_int; - else -#endif - handler = mlx5_eq_int; - init_eq_buf(eq); inlen = MLX5_ST_SZ_BYTES(create_eq_in) + @@ -691,7 +290,7 @@ int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx, mlx5_fill_page_array(&eq->buf, pas); MLX5_SET(create_eq_in, in, opcode, MLX5_CMD_OP_CREATE_EQ); - MLX5_SET64(create_eq_in, in, event_bitmask, mask); + MLX5_SET64(create_eq_in, in, event_bitmask, param->mask); eqc = MLX5_ADDR_OF(create_eq_in, in, eq_context_entry); MLX5_SET(eqc, eqc, log_eq_size, ilog2(eq->nent)); @@ -704,15 +303,17 @@ int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx, if (err) goto err_in; - snprintf(priv->irq_info[vecidx].name, MLX5_MAX_IRQ_NAME, "%s@pci:%s", + snprintf(eq_table->irq_info[vecidx].name, MLX5_MAX_IRQ_NAME, "%s@pci:%s", name, pci_name(dev->pdev)); + eq_table->irq_info[vecidx].context = param->context; + eq->vecidx = vecidx; eq->eqn = MLX5_GET(create_eq_out, out, eq_number); eq->irqn = pci_irq_vector(dev->pdev, vecidx); eq->dev = dev; eq->doorbell = priv->uar->map + MLX5_EQ_DOORBEL_OFFSET; - err = request_irq(eq->irqn, handler, 0, - priv->irq_info[vecidx].name, eq); + err = request_irq(eq->irqn, param->handler, 0, + eq_table->irq_info[vecidx].name, param->context); if (err) goto err_eq; @@ -720,21 +321,6 @@ int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx, if (err) goto err_irq; -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - if (type == MLX5_EQ_TYPE_PF) { - err = init_pf_ctx(&eq->pf_ctx, name); - if (err) - goto err_irq; - } else -#endif - { - INIT_LIST_HEAD(&eq->tasklet_ctx.list); - INIT_LIST_HEAD(&eq->tasklet_ctx.process_list); - spin_lock_init(&eq->tasklet_ctx.lock); - tasklet_init(&eq->tasklet_ctx.task, mlx5_cq_tasklet_cb, - (unsigned long)&eq->tasklet_ctx); - } - /* EQs are created in ARMED state */ eq_update_ci(eq, 1); @@ -756,27 +342,25 @@ err_buf: return err; } -int mlx5_destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq) +static int destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq) { + 
struct mlx5_eq_table *eq_table = dev->priv.eq_table; + struct mlx5_irq_info *irq_info; int err; + irq_info = &eq_table->irq_info[eq->vecidx]; + mlx5_debug_eq_remove(dev, eq); - free_irq(eq->irqn, eq); + + free_irq(eq->irqn, irq_info->context); + irq_info->context = NULL; + err = mlx5_cmd_destroy_eq(dev, eq->eqn); if (err) mlx5_core_warn(dev, "failed to destroy a previously created eq: eqn %d\n", eq->eqn); synchronize_irq(eq->irqn); - if (eq->type == MLX5_EQ_TYPE_COMP) { - tasklet_disable(&eq->tasklet_ctx.task); -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - } else if (eq->type == MLX5_EQ_TYPE_PF) { - cancel_work_sync(&eq->pf_ctx.work); - destroy_workqueue(eq->pf_ctx.wq); - mempool_destroy(eq->pf_ctx.pool); -#endif - } mlx5_buf_free(dev, &eq->buf); return err; @@ -816,28 +400,106 @@ int mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq) return 0; } -int mlx5_eq_init(struct mlx5_core_dev *dev) +int mlx5_eq_table_init(struct mlx5_core_dev *dev) { - int err; + struct mlx5_eq_table *eq_table; + int i, err; - spin_lock_init(&dev->priv.eq_table.lock); + eq_table = kvzalloc(sizeof(*eq_table), GFP_KERNEL); + if (!eq_table) + return -ENOMEM; + + dev->priv.eq_table = eq_table; err = mlx5_eq_debugfs_init(dev); + if (err) + goto kvfree_eq_table; + + mutex_init(&eq_table->lock); + for (i = 0; i < MLX5_EVENT_TYPE_MAX; i++) + ATOMIC_INIT_NOTIFIER_HEAD(&eq_table->nh[i]); + return 0; + +kvfree_eq_table: + kvfree(eq_table); + dev->priv.eq_table = NULL; return err; } -void mlx5_eq_cleanup(struct mlx5_core_dev *dev) +void mlx5_eq_table_cleanup(struct mlx5_core_dev *dev) { mlx5_eq_debugfs_cleanup(dev); + kvfree(dev->priv.eq_table); } -int mlx5_start_eqs(struct mlx5_core_dev *dev) +/* Async EQs */ + +static int create_async_eq(struct mlx5_core_dev *dev, const char *name, + struct mlx5_eq *eq, struct mlx5_eq_param *param) { - struct mlx5_eq_table *table = &dev->priv.eq_table; - u64 async_event_mask = MLX5_ASYNC_EVENT_MASK; + struct mlx5_eq_table *eq_table = dev->priv.eq_table; + int err; + + mutex_lock(&eq_table->lock); + if (param->index >= MLX5_EQ_MAX_ASYNC_EQS) { + err = -ENOSPC; + goto unlock; + } + + err = create_map_eq(dev, eq, name, param); +unlock: + mutex_unlock(&eq_table->lock); + return err; +} + +static int destroy_async_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq) +{ + struct mlx5_eq_table *eq_table = dev->priv.eq_table; int err; + mutex_lock(&eq_table->lock); + err = destroy_unmap_eq(dev, eq); + mutex_unlock(&eq_table->lock); + return err; +} + +static int cq_err_event_notifier(struct notifier_block *nb, + unsigned long type, void *data) +{ + struct mlx5_eq_table *eqt; + struct mlx5_core_cq *cq; + struct mlx5_eqe *eqe; + struct mlx5_eq *eq; + u32 cqn; + + /* type == MLX5_EVENT_TYPE_CQ_ERROR */ + + eqt = mlx5_nb_cof(nb, struct mlx5_eq_table, cq_err_nb); + eq = &eqt->async_eq; + eqe = data; + + cqn = be32_to_cpu(eqe->data.cq_err.cqn) & 0xffffff; + mlx5_core_warn(eq->dev, "CQ error on CQN 0x%x, syndrome 0x%x\n", + cqn, eqe->data.cq_err.syndrome); + + cq = mlx5_eq_cq_get(eq, cqn); + if (unlikely(!cq)) { + mlx5_core_warn(eq->dev, "Async event for bogus CQ 0x%x\n", cqn); + return NOTIFY_OK; + } + + cq->event(cq, type); + + mlx5_cq_put(cq); + + return NOTIFY_OK; +} + +static u64 gather_async_events_mask(struct mlx5_core_dev *dev) +{ + u64 async_event_mask = MLX5_ASYNC_EVENT_MASK; + if (MLX5_VPORT_MANAGER(dev)) async_event_mask |= (1ull << MLX5_EVENT_TYPE_NIC_VPORT_CHANGE); @@ -865,127 +527,521 @@ int mlx5_start_eqs(struct mlx5_core_dev *dev) if (MLX5_CAP_MCAM_REG(dev, tracer_registers)) 
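/* Each capability-gated bit here follows the same pattern: subscribe the
 * async EQ to an event class only when the device reports support for it,
 * so events the device cannot raise are never requested.
 */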
async_event_mask |= (1ull << MLX5_EVENT_TYPE_DEVICE_TRACER); - err = mlx5_create_map_eq(dev, &table->cmd_eq, MLX5_EQ_VEC_CMD, - MLX5_NUM_CMD_EQE, 1ull << MLX5_EVENT_TYPE_CMD, - "mlx5_cmd_eq", MLX5_EQ_TYPE_ASYNC); + if (MLX5_CAP_GEN(dev, max_num_of_monitor_counters)) + async_event_mask |= (1ull << MLX5_EVENT_TYPE_MONITOR_COUNTER); + + return async_event_mask; +} + +static int create_async_eqs(struct mlx5_core_dev *dev) +{ + struct mlx5_eq_table *table = dev->priv.eq_table; + struct mlx5_eq_param param = {}; + int err; + + MLX5_NB_INIT(&table->cq_err_nb, cq_err_event_notifier, CQ_ERROR); + mlx5_eq_notifier_register(dev, &table->cq_err_nb); + + param = (struct mlx5_eq_param) { + .index = MLX5_EQ_CMD_IDX, + .mask = 1ull << MLX5_EVENT_TYPE_CMD, + .nent = MLX5_NUM_CMD_EQE, + .context = &table->cmd_eq, + .handler = mlx5_eq_async_int, + }; + err = create_async_eq(dev, "mlx5_cmd_eq", &table->cmd_eq, &param); if (err) { mlx5_core_warn(dev, "failed to create cmd EQ %d\n", err); - return err; + goto err0; } mlx5_cmd_use_events(dev); - err = mlx5_create_map_eq(dev, &table->async_eq, MLX5_EQ_VEC_ASYNC, - MLX5_NUM_ASYNC_EQE, async_event_mask, - "mlx5_async_eq", MLX5_EQ_TYPE_ASYNC); + param = (struct mlx5_eq_param) { + .index = MLX5_EQ_ASYNC_IDX, + .mask = gather_async_events_mask(dev), + .nent = MLX5_NUM_ASYNC_EQE, + .context = &table->async_eq, + .handler = mlx5_eq_async_int, + }; + err = create_async_eq(dev, "mlx5_async_eq", &table->async_eq, &param); if (err) { mlx5_core_warn(dev, "failed to create async EQ %d\n", err); goto err1; } - err = mlx5_create_map_eq(dev, &table->pages_eq, - MLX5_EQ_VEC_PAGES, - /* TODO: sriov max_vf + */ 1, - 1 << MLX5_EVENT_TYPE_PAGE_REQUEST, "mlx5_pages_eq", - MLX5_EQ_TYPE_ASYNC); + param = (struct mlx5_eq_param) { + .index = MLX5_EQ_PAGEREQ_IDX, + .mask = 1 << MLX5_EVENT_TYPE_PAGE_REQUEST, + .nent = /* TODO: sriov max_vf + */ 1, + .context = &table->pages_eq, + .handler = mlx5_eq_async_int, + }; + err = create_async_eq(dev, "mlx5_pages_eq", &table->pages_eq, &param); if (err) { mlx5_core_warn(dev, "failed to create pages EQ %d\n", err); goto err2; } -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - if (MLX5_CAP_GEN(dev, pg)) { - err = mlx5_create_map_eq(dev, &table->pfault_eq, - MLX5_EQ_VEC_PFAULT, - MLX5_NUM_ASYNC_EQE, - 1 << MLX5_EVENT_TYPE_PAGE_FAULT, - "mlx5_page_fault_eq", - MLX5_EQ_TYPE_PF); - if (err) { - mlx5_core_warn(dev, "failed to create page fault EQ %d\n", - err); - goto err3; - } - } - - return err; -err3: - mlx5_destroy_unmap_eq(dev, &table->pages_eq); -#else return err; -#endif err2: - mlx5_destroy_unmap_eq(dev, &table->async_eq); + destroy_async_eq(dev, &table->async_eq); err1: mlx5_cmd_use_polling(dev); - mlx5_destroy_unmap_eq(dev, &table->cmd_eq); + destroy_async_eq(dev, &table->cmd_eq); +err0: + mlx5_eq_notifier_unregister(dev, &table->cq_err_nb); return err; } -void mlx5_stop_eqs(struct mlx5_core_dev *dev) +static void destroy_async_eqs(struct mlx5_core_dev *dev) { - struct mlx5_eq_table *table = &dev->priv.eq_table; + struct mlx5_eq_table *table = dev->priv.eq_table; int err; -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - if (MLX5_CAP_GEN(dev, pg)) { - err = mlx5_destroy_unmap_eq(dev, &table->pfault_eq); - if (err) - mlx5_core_err(dev, "failed to destroy page fault eq, err(%d)\n", - err); - } -#endif - - err = mlx5_destroy_unmap_eq(dev, &table->pages_eq); + err = destroy_async_eq(dev, &table->pages_eq); if (err) mlx5_core_err(dev, "failed to destroy pages eq, err(%d)\n", err); - err = mlx5_destroy_unmap_eq(dev, &table->async_eq); + err = destroy_async_eq(dev, 
&table->async_eq); if (err) mlx5_core_err(dev, "failed to destroy async eq, err(%d)\n", err); + mlx5_cmd_use_polling(dev); - err = mlx5_destroy_unmap_eq(dev, &table->cmd_eq); + err = destroy_async_eq(dev, &table->cmd_eq); if (err) mlx5_core_err(dev, "failed to destroy command eq, err(%d)\n", err); + + mlx5_eq_notifier_unregister(dev, &table->cq_err_nb); } -int mlx5_core_eq_query(struct mlx5_core_dev *dev, struct mlx5_eq *eq, - u32 *out, int outlen) +struct mlx5_eq *mlx5_get_async_eq(struct mlx5_core_dev *dev) { - u32 in[MLX5_ST_SZ_DW(query_eq_in)] = {0}; + return &dev->priv.eq_table->async_eq; +} - MLX5_SET(query_eq_in, in, opcode, MLX5_CMD_OP_QUERY_EQ); - MLX5_SET(query_eq_in, in, eq_number, eq->eqn); - return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen); +void mlx5_eq_synchronize_async_irq(struct mlx5_core_dev *dev) +{ + synchronize_irq(dev->priv.eq_table->async_eq.irqn); +} + +void mlx5_eq_synchronize_cmd_irq(struct mlx5_core_dev *dev) +{ + synchronize_irq(dev->priv.eq_table->cmd_eq.irqn); +} + +/* Generic EQ API for mlx5_core consumers + * Needed For RDMA ODP EQ for now + */ +struct mlx5_eq * +mlx5_eq_create_generic(struct mlx5_core_dev *dev, const char *name, + struct mlx5_eq_param *param) +{ + struct mlx5_eq *eq = kvzalloc(sizeof(*eq), GFP_KERNEL); + int err; + + if (!eq) + return ERR_PTR(-ENOMEM); + + err = create_async_eq(dev, name, eq, param); + if (err) { + kvfree(eq); + eq = ERR_PTR(err); + } + + return eq; +} +EXPORT_SYMBOL(mlx5_eq_create_generic); + +int mlx5_eq_destroy_generic(struct mlx5_core_dev *dev, struct mlx5_eq *eq) +{ + int err; + + if (IS_ERR(eq)) + return -EINVAL; + + err = destroy_async_eq(dev, eq); + if (err) + goto out; + + kvfree(eq); +out: + return err; +} +EXPORT_SYMBOL(mlx5_eq_destroy_generic); + +struct mlx5_eqe *mlx5_eq_get_eqe(struct mlx5_eq *eq, u32 cc) +{ + u32 ci = eq->cons_index + cc; + struct mlx5_eqe *eqe; + + eqe = get_eqe(eq, ci & (eq->nent - 1)); + eqe = ((eqe->owner & 1) ^ !!(ci & eq->nent)) ? NULL : eqe; + /* Make sure we read EQ entry contents after we've + * checked the ownership bit. + */ + if (eqe) + dma_rmb(); + + return eqe; +} +EXPORT_SYMBOL(mlx5_eq_get_eqe); + +void mlx5_eq_update_ci(struct mlx5_eq *eq, u32 cc, bool arm) +{ + __be32 __iomem *addr = eq->doorbell + (arm ? 
0 : 2); + u32 val; + + eq->cons_index += cc; + val = (eq->cons_index & 0xffffff) | (eq->eqn << 24); + + __raw_writel((__force u32)cpu_to_be32(val), addr); + /* We still want ordering, just not swabbing, so add a barrier */ + mb(); +} +EXPORT_SYMBOL(mlx5_eq_update_ci); + +/* Completion EQs */ + +static int set_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i) +{ + struct mlx5_priv *priv = &mdev->priv; + int vecidx = MLX5_EQ_VEC_COMP_BASE + i; + int irq = pci_irq_vector(mdev->pdev, vecidx); + struct mlx5_irq_info *irq_info = &priv->eq_table->irq_info[vecidx]; + + if (!zalloc_cpumask_var(&irq_info->mask, GFP_KERNEL)) { + mlx5_core_warn(mdev, "zalloc_cpumask_var failed"); + return -ENOMEM; + } + + cpumask_set_cpu(cpumask_local_spread(i, priv->numa_node), + irq_info->mask); + + if (IS_ENABLED(CONFIG_SMP) && + irq_set_affinity_hint(irq, irq_info->mask)) + mlx5_core_warn(mdev, "irq_set_affinity_hint failed, irq 0x%.4x", irq); + + return 0; +} + +static void clear_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i) +{ + int vecidx = MLX5_EQ_VEC_COMP_BASE + i; + struct mlx5_priv *priv = &mdev->priv; + int irq = pci_irq_vector(mdev->pdev, vecidx); + struct mlx5_irq_info *irq_info = &priv->eq_table->irq_info[vecidx]; + + irq_set_affinity_hint(irq, NULL); + free_cpumask_var(irq_info->mask); +} + +static int set_comp_irq_affinity_hints(struct mlx5_core_dev *mdev) +{ + int err; + int i; + + for (i = 0; i < mdev->priv.eq_table->num_comp_vectors; i++) { + err = set_comp_irq_affinity_hint(mdev, i); + if (err) + goto err_out; + } + + return 0; + +err_out: + for (i--; i >= 0; i--) + clear_comp_irq_affinity_hint(mdev, i); + + return err; +} + +static void clear_comp_irqs_affinity_hints(struct mlx5_core_dev *mdev) +{ + int i; + + for (i = 0; i < mdev->priv.eq_table->num_comp_vectors; i++) + clear_comp_irq_affinity_hint(mdev, i); +} + +static void destroy_comp_eqs(struct mlx5_core_dev *dev) +{ + struct mlx5_eq_table *table = dev->priv.eq_table; + struct mlx5_eq_comp *eq, *n; + + clear_comp_irqs_affinity_hints(dev); + +#ifdef CONFIG_RFS_ACCEL + if (table->rmap) { + free_irq_cpu_rmap(table->rmap); + table->rmap = NULL; + } +#endif + list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + list_del(&eq->list); + if (destroy_unmap_eq(dev, &eq->core)) + mlx5_core_warn(dev, "failed to destroy comp EQ 0x%x\n", + eq->core.eqn); + tasklet_disable(&eq->tasklet_ctx.task); + kfree(eq); + } +} + +static int create_comp_eqs(struct mlx5_core_dev *dev) +{ + struct mlx5_eq_table *table = dev->priv.eq_table; + char name[MLX5_MAX_IRQ_NAME]; + struct mlx5_eq_comp *eq; + int ncomp_vec; + int nent; + int err; + int i; + + INIT_LIST_HEAD(&table->comp_eqs_list); + ncomp_vec = table->num_comp_vectors; + nent = MLX5_COMP_EQ_SIZE; +#ifdef CONFIG_RFS_ACCEL + table->rmap = alloc_irq_cpu_rmap(ncomp_vec); + if (!table->rmap) + return -ENOMEM; +#endif + for (i = 0; i < ncomp_vec; i++) { + int vecidx = i + MLX5_EQ_VEC_COMP_BASE; + struct mlx5_eq_param param = {}; + + eq = kzalloc(sizeof(*eq), GFP_KERNEL); + if (!eq) { + err = -ENOMEM; + goto clean; + } + + INIT_LIST_HEAD(&eq->tasklet_ctx.list); + INIT_LIST_HEAD(&eq->tasklet_ctx.process_list); + spin_lock_init(&eq->tasklet_ctx.lock); + tasklet_init(&eq->tasklet_ctx.task, mlx5_cq_tasklet_cb, + (unsigned long)&eq->tasklet_ctx); + +#ifdef CONFIG_RFS_ACCEL + irq_cpu_rmap_add(table->rmap, pci_irq_vector(dev->pdev, vecidx)); +#endif + snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d", i); + param = (struct mlx5_eq_param) { + .index = vecidx, + .mask = 0, + .nent = nent, + .context 
= &eq->core, + .handler = mlx5_eq_comp_int + }; + err = create_map_eq(dev, &eq->core, name, &param); + if (err) { + kfree(eq); + goto clean; + } + mlx5_core_dbg(dev, "allocated completion EQN %d\n", eq->core.eqn); + /* add tail, to keep the list ordered, for mlx5_vector2eqn to work */ + list_add_tail(&eq->list, &table->comp_eqs_list); + } + + err = set_comp_irq_affinity_hints(dev); + if (err) { + mlx5_core_err(dev, "Failed to alloc affinity hint cpumask\n"); + goto clean; + } + + return 0; + +clean: + destroy_comp_eqs(dev); + return err; +} + +int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, + unsigned int *irqn) +{ + struct mlx5_eq_table *table = dev->priv.eq_table; + struct mlx5_eq_comp *eq, *n; + int err = -ENOENT; + int i = 0; + + list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { + if (i++ == vector) { + *eqn = eq->core.eqn; + *irqn = eq->core.irqn; + err = 0; + break; + } + } + + return err; +} +EXPORT_SYMBOL(mlx5_vector2eqn); + +unsigned int mlx5_comp_vectors_count(struct mlx5_core_dev *dev) +{ + return dev->priv.eq_table->num_comp_vectors; +} +EXPORT_SYMBOL(mlx5_comp_vectors_count); + +struct cpumask * +mlx5_comp_irq_get_affinity_mask(struct mlx5_core_dev *dev, int vector) +{ + /* TODO: consider irq_get_affinity_mask(irq) */ + return dev->priv.eq_table->irq_info[vector + MLX5_EQ_VEC_COMP_BASE].mask; +} +EXPORT_SYMBOL(mlx5_comp_irq_get_affinity_mask); + +struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev) +{ +#ifdef CONFIG_RFS_ACCEL + return dev->priv.eq_table->rmap; +#else + return NULL; +#endif +} + +struct mlx5_eq_comp *mlx5_eqn2comp_eq(struct mlx5_core_dev *dev, int eqn) +{ + struct mlx5_eq_table *table = dev->priv.eq_table; + struct mlx5_eq_comp *eq; + + list_for_each_entry(eq, &table->comp_eqs_list, list) { + if (eq->core.eqn == eqn) + return eq; + } + + return ERR_PTR(-ENOENT); } /* This function should only be called after mlx5_cmd_force_teardown_hca */ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev) { - struct mlx5_eq_table *table = &dev->priv.eq_table; - struct mlx5_eq *eq; + struct mlx5_eq_table *table = dev->priv.eq_table; + int i, max_eqs; + + clear_comp_irqs_affinity_hints(dev); #ifdef CONFIG_RFS_ACCEL - if (dev->rmap) { - free_irq_cpu_rmap(dev->rmap); - dev->rmap = NULL; + if (table->rmap) { + free_irq_cpu_rmap(table->rmap); + table->rmap = NULL; } #endif - list_for_each_entry(eq, &table->comp_eqs_list, list) - free_irq(eq->irqn, eq); - - free_irq(table->pages_eq.irqn, &table->pages_eq); - free_irq(table->async_eq.irqn, &table->async_eq); - free_irq(table->cmd_eq.irqn, &table->cmd_eq); -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - if (MLX5_CAP_GEN(dev, pg)) - free_irq(table->pfault_eq.irqn, &table->pfault_eq); -#endif + + mutex_lock(&table->lock); /* sync with create/destroy_async_eq */ + max_eqs = table->num_comp_vectors + MLX5_EQ_VEC_COMP_BASE; + for (i = max_eqs - 1; i >= 0; i--) { + if (!table->irq_info[i].context) + continue; + free_irq(pci_irq_vector(dev->pdev, i), table->irq_info[i].context); + table->irq_info[i].context = NULL; + } + mutex_unlock(&table->lock); + pci_free_irq_vectors(dev->pdev); +} + +static int alloc_irq_vectors(struct mlx5_core_dev *dev) +{ + struct mlx5_priv *priv = &dev->priv; + struct mlx5_eq_table *table = priv->eq_table; + int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ? 
+ MLX5_CAP_GEN(dev, max_num_eqs) : + 1 << MLX5_CAP_GEN(dev, log_max_eq); + int nvec; + int err; + + nvec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() + + MLX5_EQ_VEC_COMP_BASE; + nvec = min_t(int, nvec, num_eqs); + if (nvec <= MLX5_EQ_VEC_COMP_BASE) + return -ENOMEM; + + table->irq_info = kcalloc(nvec, sizeof(*table->irq_info), GFP_KERNEL); + if (!table->irq_info) + return -ENOMEM; + + nvec = pci_alloc_irq_vectors(dev->pdev, MLX5_EQ_VEC_COMP_BASE + 1, + nvec, PCI_IRQ_MSIX); + if (nvec < 0) { + err = nvec; + goto err_free_irq_info; + } + + table->num_comp_vectors = nvec - MLX5_EQ_VEC_COMP_BASE; + + return 0; + +err_free_irq_info: + kfree(table->irq_info); + return err; +} + +static void free_irq_vectors(struct mlx5_core_dev *dev) +{ + struct mlx5_priv *priv = &dev->priv; + pci_free_irq_vectors(dev->pdev); + kfree(priv->eq_table->irq_info); +} + +int mlx5_eq_table_create(struct mlx5_core_dev *dev) +{ + int err; + + err = alloc_irq_vectors(dev); + if (err) { + mlx5_core_err(dev, "alloc irq vectors failed\n"); + return err; + } + + err = create_async_eqs(dev); + if (err) { + mlx5_core_err(dev, "Failed to create async EQs\n"); + goto err_async_eqs; + } + + err = create_comp_eqs(dev); + if (err) { + mlx5_core_err(dev, "Failed to create completion EQs\n"); + goto err_comp_eqs; + } + + return 0; +err_comp_eqs: + destroy_async_eqs(dev); +err_async_eqs: + free_irq_vectors(dev); + return err; +} + +void mlx5_eq_table_destroy(struct mlx5_core_dev *dev) +{ + destroy_comp_eqs(dev); + destroy_async_eqs(dev); + free_irq_vectors(dev); +} + +int mlx5_eq_notifier_register(struct mlx5_core_dev *dev, struct mlx5_nb *nb) +{ + struct mlx5_eq_table *eqt = dev->priv.eq_table; + + if (nb->event_type >= MLX5_EVENT_TYPE_MAX) + return -EINVAL; + + return atomic_notifier_chain_register(&eqt->nh[nb->event_type], &nb->nb); +} + +int mlx5_eq_notifier_unregister(struct mlx5_core_dev *dev, struct mlx5_nb *nb) +{ + struct mlx5_eq_table *eqt = dev->priv.eq_table; + + if (nb->event_type >= MLX5_EVENT_TYPE_MAX) + return -EINVAL; + + return atomic_notifier_chain_unregister(&eqt->nh[nb->event_type], &nb->nb); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index d004957328f9..a44ea7b85614 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -36,6 +36,7 @@ #include #include #include "mlx5_core.h" +#include "lib/eq.h" #include "eswitch.h" #include "fs_core.h" @@ -1567,7 +1568,6 @@ static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num) /* Mark this vport as disabled to discard new events */ vport->enabled = false; - synchronize_irq(pci_irq_vector(esw->dev->pdev, MLX5_EQ_VEC_ASYNC)); /* Wait for current already scheduled events to complete */ flush_workqueue(esw->work_queue); /* Disable events from this vport */ @@ -1593,10 +1593,25 @@ static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num) mutex_unlock(&esw->state_lock); } +static int eswitch_vport_event(struct notifier_block *nb, + unsigned long type, void *data) +{ + struct mlx5_eswitch *esw = mlx5_nb_cof(nb, struct mlx5_eswitch, nb); + struct mlx5_eqe *eqe = data; + struct mlx5_vport *vport; + u16 vport_num; + + vport_num = be16_to_cpu(eqe->data.vport_change.vport_num); + vport = &esw->vports[vport_num]; + if (vport->enabled) + queue_work(esw->work_queue, &vport->vport_change_handler); + + return NOTIFY_OK; +} + /* Public E-Switch API */ #define ESW_ALLOWED(esw) ((esw) && MLX5_ESWITCH_MANAGER((esw)->dev)) - 
int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) { int err; @@ -1615,13 +1630,16 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) esw_warn(esw->dev, "E-Switch egress ACL is not supported by FW\n"); esw_info(esw->dev, "E-Switch enable SRIOV: nvfs(%d) mode (%d)\n", nvfs, mode); + esw->mode = mode; + mlx5_lag_update(esw->dev); + if (mode == SRIOV_LEGACY) { err = esw_create_legacy_fdb_table(esw); } else { + mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH); mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB); - err = esw_offloads_init(esw, nvfs + 1); } @@ -1640,6 +1658,11 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) for (i = 0; i <= nvfs; i++) esw_enable_vport(esw, i, enabled_events); + if (mode == SRIOV_LEGACY) { + MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE); + mlx5_eq_notifier_register(esw->dev, &esw->nb); + } + esw_info(esw->dev, "SRIOV enabled: active vports(%d)\n", esw->enabled_vports); return 0; @@ -1647,8 +1670,10 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) abort: esw->mode = SRIOV_NONE; - if (mode == SRIOV_OFFLOADS) + if (mode == SRIOV_OFFLOADS) { mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB); + mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH); + } return err; } @@ -1669,6 +1694,9 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw) mc_promisc = &esw->mc_promisc; nvports = esw->enabled_vports; + if (esw->mode == SRIOV_LEGACY) + mlx5_eq_notifier_unregister(esw->dev, &esw->nb); + for (i = 0; i < esw->total_vports; i++) esw_disable_vport(esw, i); @@ -1685,8 +1713,12 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw) old_mode = esw->mode; esw->mode = SRIOV_NONE; - if (old_mode == SRIOV_OFFLOADS) + mlx5_lag_update(esw->dev); + + if (old_mode == SRIOV_OFFLOADS) { mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB); + mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH); + } } int mlx5_eswitch_init(struct mlx5_core_dev *dev) @@ -1777,23 +1809,6 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) kfree(esw); } -void mlx5_eswitch_vport_event(struct mlx5_eswitch *esw, struct mlx5_eqe *eqe) -{ - struct mlx5_eqe_vport_change *vc_eqe = &eqe->data.vport_change; - u16 vport_num = be16_to_cpu(vc_eqe->vport_num); - struct mlx5_vport *vport; - - if (!esw) { - pr_warn("MLX5 E-Switch: vport %d got an event while eswitch is not initialized\n", - vport_num); - return; - } - - vport = &esw->vports[vport_num]; - if (vport->enabled) - queue_work(esw->work_queue, &vport->vport_change_handler); -} - /* Vport Administration */ #define LEGAL_VPORT(esw, vport) (vport >= 0 && vport < esw->total_vports) @@ -2219,3 +2234,14 @@ u8 mlx5_eswitch_mode(struct mlx5_eswitch *esw) return ESW_ALLOWED(esw) ? 
esw->mode : SRIOV_NONE; } EXPORT_SYMBOL_GPL(mlx5_eswitch_mode); + +bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0, struct mlx5_core_dev *dev1) +{ + if ((dev0->priv.eswitch->mode == SRIOV_NONE && + dev1->priv.eswitch->mode == SRIOV_NONE) || + (dev0->priv.eswitch->mode == SRIOV_OFFLOADS && + dev1->priv.eswitch->mode == SRIOV_OFFLOADS)) + return true; + + return false; +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index aaafc9f17115..9c89eea9b2c3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -143,6 +143,8 @@ struct mlx5_eswitch_fdb { struct offloads_fdb { struct mlx5_flow_table *slow_fdb; struct mlx5_flow_group *send_to_vport_grp; + struct mlx5_flow_group *peer_miss_grp; + struct mlx5_flow_handle **peer_miss_rules; struct mlx5_flow_group *miss_grp; struct mlx5_flow_handle *miss_rule_uni; struct mlx5_flow_handle *miss_rule_multi; @@ -165,6 +167,8 @@ struct mlx5_esw_offload { struct mlx5_flow_table *ft_offloads; struct mlx5_flow_group *vport_rx_group; struct mlx5_eswitch_rep *vport_reps; + struct list_head peer_flows; + struct mutex peer_mutex; DECLARE_HASHTABLE(encap_tbl, 8); DECLARE_HASHTABLE(mod_hdr_tbl, 8); u8 inline_mode; @@ -181,6 +185,7 @@ struct esw_mc_addr { /* SRIOV only */ struct mlx5_eswitch { struct mlx5_core_dev *dev; + struct mlx5_nb nb; struct mlx5_eswitch_fdb fdb_table; struct hlist_head mc_table[MLX5_L2_ADDR_HASH_SIZE]; struct workqueue_struct *work_queue; @@ -211,7 +216,6 @@ int esw_offloads_init_reps(struct mlx5_eswitch *esw); /* E-Switch API */ int mlx5_eswitch_init(struct mlx5_core_dev *dev); void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw); -void mlx5_eswitch_vport_event(struct mlx5_eswitch *esw, struct mlx5_eqe *eqe); int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode); void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw); int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw, @@ -281,13 +285,17 @@ enum mlx5_flow_match_level { /* current maximum for flow based vport multicasting */ #define MLX5_MAX_FLOW_FWD_VPORTS 2 +enum { + MLX5_ESW_DEST_ENCAP = BIT(0), + MLX5_ESW_DEST_ENCAP_VALID = BIT(1), +}; + struct mlx5_esw_flow_attr { struct mlx5_eswitch_rep *in_rep; - struct mlx5_eswitch_rep *out_rep[MLX5_MAX_FLOW_FWD_VPORTS]; - struct mlx5_core_dev *out_mdev[MLX5_MAX_FLOW_FWD_VPORTS]; struct mlx5_core_dev *in_mdev; + struct mlx5_core_dev *counter_dev; - int mirror_count; + int split_count; int out_count; int action; @@ -296,7 +304,12 @@ struct mlx5_esw_flow_attr { u8 vlan_prio[MLX5_FS_VLAN_DEPTH]; u8 total_vlan; bool vlan_handled; - u32 encap_id; + struct { + u32 flags; + struct mlx5_eswitch_rep *rep; + struct mlx5_core_dev *mdev; + u32 encap_id; + } dests[MLX5_MAX_FLOW_FWD_VPORTS]; u32 mod_hdr_id; u8 match_level; struct mlx5_fc *counter; @@ -338,6 +351,9 @@ static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev MLX5_CAP_ESW_FLOWTABLE_FDB(dev, push_vlan_2); } +bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0, + struct mlx5_core_dev *dev1); + #define MLX5_DEBUG_ESWITCH_MASK BIT(3) #define esw_info(dev, format, ...) 
\ @@ -352,9 +368,9 @@ static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev /* eswitch API stubs */ static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; } static inline void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) {} -static inline void mlx5_eswitch_vport_event(struct mlx5_eswitch *esw, struct mlx5_eqe *eqe) {} static inline int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) { return 0; } static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw) {} +static inline bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0, struct mlx5_core_dev *dev1) { return true; } #define FDB_MAX_CHAIN 1 #define FDB_SLOW_PATH_CHAIN (FDB_MAX_CHAIN + 1) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c index 9eac137790f5..53065b6ae593 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -39,6 +39,7 @@ #include "eswitch.h" #include "en.h" #include "fs_core.h" +#include "lib/devcom.h" enum { FDB_FAST_PATH = 0, @@ -81,7 +82,7 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw, { struct mlx5_flow_destination dest[MLX5_MAX_FLOW_FWD_VPORTS + 1] = {}; struct mlx5_flow_act flow_act = { .flags = FLOW_ACT_NO_APPEND, }; - bool mirror = !!(attr->mirror_count); + bool split = !!(attr->split_count); struct mlx5_flow_handle *rule; struct mlx5_flow_table *fdb; int j, i = 0; @@ -120,13 +121,21 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw, dest[i].ft = ft; i++; } else { - for (j = attr->mirror_count; j < attr->out_count; j++) { + for (j = attr->split_count; j < attr->out_count; j++) { dest[i].type = MLX5_FLOW_DESTINATION_TYPE_VPORT; - dest[i].vport.num = attr->out_rep[j]->vport; + dest[i].vport.num = attr->dests[j].rep->vport; dest[i].vport.vhca_id = - MLX5_CAP_GEN(attr->out_mdev[j], vhca_id); - dest[i].vport.vhca_id_valid = - !!MLX5_CAP_ESW(esw->dev, merged_eswitch); + MLX5_CAP_GEN(attr->dests[j].mdev, vhca_id); + if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) + dest[i].vport.flags |= + MLX5_FLOW_DEST_VPORT_VHCA_ID; + if (attr->dests[j].flags & MLX5_ESW_DEST_ENCAP) { + flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; + flow_act.reformat_id = attr->dests[j].encap_id; + dest[i].vport.flags |= MLX5_FLOW_DEST_VPORT_REFORMAT_ID; + dest[i].vport.reformat_id = + attr->dests[j].encap_id; + } i++; } } @@ -163,10 +172,7 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw, if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) flow_act.modify_id = attr->mod_hdr_id; - if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) - flow_act.reformat_id = attr->encap_id; - - fdb = esw_get_prio_table(esw, attr->chain, attr->prio, !!mirror); + fdb = esw_get_prio_table(esw, attr->chain, attr->prio, !!split); if (IS_ERR(fdb)) { rule = ERR_CAST(fdb); goto err_esw_get; @@ -181,7 +187,7 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw, return rule; err_add_rule: - esw_put_prio_table(esw, attr->chain, attr->prio, !!mirror); + esw_put_prio_table(esw, attr->chain, attr->prio, !!split); err_esw_get: if (attr->dest_chain) esw_put_prio_table(esw, attr->dest_chain, 1, 0); @@ -215,12 +221,17 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw, } flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; - for (i = 0; i < attr->mirror_count; i++) { + for (i = 0; i < attr->split_count; i++) { dest[i].type = MLX5_FLOW_DESTINATION_TYPE_VPORT; - 
dest[i].vport.num = attr->out_rep[i]->vport; + dest[i].vport.num = attr->dests[i].rep->vport; dest[i].vport.vhca_id = - MLX5_CAP_GEN(attr->out_mdev[i], vhca_id); - dest[i].vport.vhca_id_valid = !!MLX5_CAP_ESW(esw->dev, merged_eswitch); + MLX5_CAP_GEN(attr->dests[i].mdev, vhca_id); + if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) + dest[i].vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID; + if (attr->dests[i].flags & MLX5_ESW_DEST_ENCAP) { + dest[i].vport.flags |= MLX5_FLOW_DEST_VPORT_REFORMAT_ID; + dest[i].vport.reformat_id = attr->dests[i].encap_id; + } } dest[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; dest[i].ft = fwd_fdb, @@ -268,7 +279,7 @@ __mlx5_eswitch_del_rule(struct mlx5_eswitch *esw, struct mlx5_esw_flow_attr *attr, bool fwd_rule) { - bool mirror = (attr->mirror_count > 0); + bool split = (attr->split_count > 0); mlx5_del_flow_rules(rule); esw->offloads.num_flows--; @@ -277,7 +288,7 @@ __mlx5_eswitch_del_rule(struct mlx5_eswitch *esw, esw_put_prio_table(esw, attr->chain, attr->prio, 1); esw_put_prio_table(esw, attr->chain, attr->prio, 0); } else { - esw_put_prio_table(esw, attr->chain, attr->prio, !!mirror); + esw_put_prio_table(esw, attr->chain, attr->prio, !!split); if (attr->dest_chain) esw_put_prio_table(esw, attr->dest_chain, 1, 0); } @@ -325,7 +336,7 @@ esw_vlan_action_get_vport(struct mlx5_esw_flow_attr *attr, bool push, bool pop) struct mlx5_eswitch_rep *in_rep, *out_rep, *vport = NULL; in_rep = attr->in_rep; - out_rep = attr->out_rep[0]; + out_rep = attr->dests[0].rep; if (push) vport = in_rep; @@ -346,7 +357,7 @@ static int esw_add_vlan_action_check(struct mlx5_esw_flow_attr *attr, goto out_notsupp; in_rep = attr->in_rep; - out_rep = attr->out_rep[0]; + out_rep = attr->dests[0].rep; if (push && in_rep->vport == FDB_UPLINK_VPORT) goto out_notsupp; @@ -398,7 +409,7 @@ int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw, if (!push && !pop && fwd) { /* tracks VF --> wire rules without vlan push action */ - if (attr->out_rep[0]->vport == FDB_UPLINK_VPORT) { + if (attr->dests[0].rep->vport == FDB_UPLINK_VPORT) { vport->vlan_refcount++; attr->vlan_handled = true; } @@ -458,7 +469,7 @@ int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw, if (!push && !pop && fwd) { /* tracks VF --> wire rules without vlan push action */ - if (attr->out_rep[0]->vport == FDB_UPLINK_VPORT) + if (attr->dests[0].rep->vport == FDB_UPLINK_VPORT) vport->vlan_refcount--; return 0; @@ -531,6 +542,98 @@ void mlx5_eswitch_del_send_to_vport_rule(struct mlx5_flow_handle *rule) mlx5_del_flow_rules(rule); } +static void peer_miss_rules_setup(struct mlx5_core_dev *peer_dev, + struct mlx5_flow_spec *spec, + struct mlx5_flow_destination *dest) +{ + void *misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, + misc_parameters); + + MLX5_SET(fte_match_set_misc, misc, source_eswitch_owner_vhca_id, + MLX5_CAP_GEN(peer_dev, vhca_id)); + + spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS; + + misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, + misc_parameters); + MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port); + MLX5_SET_TO_ONES(fte_match_set_misc, misc, + source_eswitch_owner_vhca_id); + + dest->type = MLX5_FLOW_DESTINATION_TYPE_VPORT; + dest->vport.num = 0; + dest->vport.vhca_id = MLX5_CAP_GEN(peer_dev, vhca_id); + dest->vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID; +} + +static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw, + struct mlx5_core_dev *peer_dev) +{ + struct mlx5_flow_destination dest = {}; + struct mlx5_flow_act flow_act = {0}; + struct mlx5_flow_handle 
**flows; + struct mlx5_flow_handle *flow; + struct mlx5_flow_spec *spec; + /* total vports is the same for both e-switches */ + int nvports = esw->total_vports; + void *misc; + int err, i; + + spec = kvzalloc(sizeof(*spec), GFP_KERNEL); + if (!spec) + return -ENOMEM; + + peer_miss_rules_setup(peer_dev, spec, &dest); + + flows = kvzalloc(nvports * sizeof(*flows), GFP_KERNEL); + if (!flows) { + err = -ENOMEM; + goto alloc_flows_err; + } + + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; + misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, + misc_parameters); + + for (i = 1; i < nvports; i++) { + MLX5_SET(fte_match_set_misc, misc, source_port, i); + flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, + spec, &flow_act, &dest, 1); + if (IS_ERR(flow)) { + err = PTR_ERR(flow); + esw_warn(esw->dev, "FDB: Failed to add peer miss flow rule err %d\n", err); + goto add_flow_err; + } + flows[i] = flow; + } + + esw->fdb_table.offloads.peer_miss_rules = flows; + + kvfree(spec); + return 0; + +add_flow_err: + for (i--; i > 0; i--) + mlx5_del_flow_rules(flows[i]); + kvfree(flows); +alloc_flows_err: + kvfree(spec); + return err; +} + +static void esw_del_fdb_peer_miss_rules(struct mlx5_eswitch *esw) +{ + struct mlx5_flow_handle **flows; + int i; + + flows = esw->fdb_table.offloads.peer_miss_rules; + + for (i = 1; i < esw->total_vports; i++) + mlx5_del_flow_rules(flows[i]); + + kvfree(flows); +} + static int esw_add_fdb_miss_rule(struct mlx5_eswitch *esw) { struct mlx5_flow_act flow_act = {0}; @@ -801,7 +904,8 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports) esw->fdb_table.offloads.fdb_left[i] = ESW_POOLS[i] <= fdb_max ? ESW_SIZE / ESW_POOLS[i] : 0; - table_size = nvports * MAX_SQ_NVPORTS + MAX_PF_SQ + 2; + table_size = nvports * MAX_SQ_NVPORTS + MAX_PF_SQ + 2 + + esw->total_vports; /* create the slow path fdb with encap set, so further table instances * can be created at run time while VFs are probed if the FW allows that. 
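Every consumer this series converts (the CQ-error handler, the eswitch vport handler, the FPGA error handlers) subscribes to EQ events the same way: embed a struct mlx5_nb in the consumer object, bind it to a callback with MLX5_NB_INIT(), and register it for one event type with mlx5_eq_notifier_register(). A minimal consumer sketch, modeled on those handlers; the my_drv names are hypothetical and not part of this patch:

	struct my_drv {
		struct mlx5_core_dev *mdev;
		struct mlx5_nb port_nb;	/* one mlx5_nb per subscribed event type */
	};

	static int my_drv_port_event(struct notifier_block *nb, unsigned long type,
				     void *data)
	{
		/* recover the containing object from the embedded mlx5_nb */
		struct my_drv *drv = mlx5_nb_cof(nb, struct my_drv, port_nb);
		struct mlx5_eqe *eqe = data;	/* raw EQE; type == MLX5_EVENT_TYPE_PORT_CHANGE */

		/* runs from the async EQ's atomic notifier chain: no sleeping
		 * here; defer real work to a workqueue, as eswitch_vport_event()
		 * does above.
		 */
		mlx5_core_dbg(drv->mdev, "port change, subtype %d\n", eqe->sub_type);
		return NOTIFY_OK;
	}

	static void my_drv_events_start(struct my_drv *drv)
	{
		MLX5_NB_INIT(&drv->port_nb, my_drv_port_event, PORT_CHANGE);
		mlx5_eq_notifier_register(drv->mdev, &drv->port_nb);
	}

	static void my_drv_events_stop(struct my_drv *drv)
	{
		mlx5_eq_notifier_unregister(drv->mdev, &drv->port_nb);
	}

mlx5_eq_notifier_register() rejects event types at or above MLX5_EVENT_TYPE_MAX, and the unregister call must precede freeing the consumer object, mirroring the SRIOV legacy-mode register/unregister pairing in eswitch.c above.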
@@ -856,6 +960,34 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports) } esw->fdb_table.offloads.send_to_vport_grp = g; + /* create peer esw miss group */ + memset(flow_group_in, 0, inlen); + MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, + MLX5_MATCH_MISC_PARAMETERS); + + match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, + match_criteria); + + MLX5_SET_TO_ONES(fte_match_param, match_criteria, + misc_parameters.source_port); + MLX5_SET_TO_ONES(fte_match_param, match_criteria, + misc_parameters.source_eswitch_owner_vhca_id); + + MLX5_SET(create_flow_group_in, flow_group_in, + source_eswitch_owner_vhca_id_valid, 1); + MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix); + MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, + ix + esw->total_vports - 1); + ix += esw->total_vports; + + g = mlx5_create_flow_group(fdb, flow_group_in); + if (IS_ERR(g)) { + err = PTR_ERR(g); + esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err); + goto peer_miss_err; + } + esw->fdb_table.offloads.peer_miss_grp = g; + /* create miss group */ memset(flow_group_in, 0, inlen); MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, @@ -888,6 +1020,8 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports) miss_rule_err: mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp); miss_err: + mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp); +peer_miss_err: mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp); send_vport_err: esw_destroy_offloads_fast_fdb_tables(esw); @@ -907,6 +1041,7 @@ static void esw_destroy_offloads_fdb_tables(struct mlx5_eswitch *esw) mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_multi); mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_uni); mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp); + mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp); mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp); mlx5_destroy_flow_table(esw->fdb_table.offloads.slow_fdb); @@ -1163,6 +1298,105 @@ err_reps: return err; } +#define ESW_OFFLOADS_DEVCOM_PAIR (0) +#define ESW_OFFLOADS_DEVCOM_UNPAIR (1) + +static int mlx5_esw_offloads_pair(struct mlx5_eswitch *esw, + struct mlx5_eswitch *peer_esw) +{ + int err; + + err = esw_add_fdb_peer_miss_rules(esw, peer_esw->dev); + if (err) + return err; + + return 0; +} + +void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw); + +static void mlx5_esw_offloads_unpair(struct mlx5_eswitch *esw) +{ + mlx5e_tc_clean_fdb_peer_flows(esw); + esw_del_fdb_peer_miss_rules(esw); +} + +static int mlx5_esw_offloads_devcom_event(int event, + void *my_data, + void *event_data) +{ + struct mlx5_eswitch *esw = my_data; + struct mlx5_eswitch *peer_esw = event_data; + struct mlx5_devcom *devcom = esw->dev->priv.devcom; + int err; + + switch (event) { + case ESW_OFFLOADS_DEVCOM_PAIR: + err = mlx5_esw_offloads_pair(esw, peer_esw); + if (err) + goto err_out; + + err = mlx5_esw_offloads_pair(peer_esw, esw); + if (err) + goto err_pair; + + mlx5_devcom_set_paired(devcom, MLX5_DEVCOM_ESW_OFFLOADS, true); + break; + + case ESW_OFFLOADS_DEVCOM_UNPAIR: + if (!mlx5_devcom_is_paired(devcom, MLX5_DEVCOM_ESW_OFFLOADS)) + break; + + mlx5_devcom_set_paired(devcom, MLX5_DEVCOM_ESW_OFFLOADS, false); + mlx5_esw_offloads_unpair(peer_esw); + mlx5_esw_offloads_unpair(esw); + break; + } + + return 0; + +err_pair: + mlx5_esw_offloads_unpair(esw); + +err_out: + mlx5_core_err(esw->dev, "esw 
offloads devcom event failure, event %u err %d", + event, err); + return err; +} + +static void esw_offloads_devcom_init(struct mlx5_eswitch *esw) +{ + struct mlx5_devcom *devcom = esw->dev->priv.devcom; + + INIT_LIST_HEAD(&esw->offloads.peer_flows); + mutex_init(&esw->offloads.peer_mutex); + + if (!MLX5_CAP_ESW(esw->dev, merged_eswitch)) + return; + + mlx5_devcom_register_component(devcom, + MLX5_DEVCOM_ESW_OFFLOADS, + mlx5_esw_offloads_devcom_event, + esw); + + mlx5_devcom_send_event(devcom, + MLX5_DEVCOM_ESW_OFFLOADS, + ESW_OFFLOADS_DEVCOM_PAIR, esw); +} + +static void esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) +{ + struct mlx5_devcom *devcom = esw->dev->priv.devcom; + + if (!MLX5_CAP_ESW(esw->dev, merged_eswitch)) + return; + + mlx5_devcom_send_event(devcom, MLX5_DEVCOM_ESW_OFFLOADS, + ESW_OFFLOADS_DEVCOM_UNPAIR, esw); + + mlx5_devcom_unregister_component(devcom, MLX5_DEVCOM_ESW_OFFLOADS); +} + int esw_offloads_init(struct mlx5_eswitch *esw, int nvports) { int err; @@ -1185,6 +1419,7 @@ int esw_offloads_init(struct mlx5_eswitch *esw, int nvports) if (err) goto err_reps; + esw_offloads_devcom_init(esw); return 0; err_reps: @@ -1215,14 +1450,12 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw, } } - /* enable back PF RoCE */ - mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB); - return err; } void esw_offloads_cleanup(struct mlx5_eswitch *esw, int nvports) { + esw_offloads_devcom_cleanup(esw); esw_offloads_unload_reps(esw, nvports); esw_destroy_vport_rx_group(esw); esw_destroy_offloads_table(esw); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c new file mode 100644 index 000000000000..fbc42b7252a9 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c @@ -0,0 +1,325 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +// Copyright (c) 2018 Mellanox Technologies + +#include + +#include "mlx5_core.h" +#include "lib/eq.h" +#include "lib/mlx5.h" + +struct mlx5_event_nb { + struct mlx5_nb nb; + void *ctx; +}; + +/* General event handlers for the low level mlx5_core driver * * Other major feature-specific events such as * clock/eswitch/fpga/FW trace and many others, are handled elsewhere, with * separate notifier callbacks, specifically by those mlx5 components. 
+ */ +static int any_notifier(struct notifier_block *, unsigned long, void *); +static int temp_warn(struct notifier_block *, unsigned long, void *); +static int port_module(struct notifier_block *, unsigned long, void *); + +/* handler which forwards the event to events->nh, driver notifiers */ +static int forward_event(struct notifier_block *, unsigned long, void *); + +static struct mlx5_nb events_nbs_ref[] = { + /* Events to be processed by mlx5_core */ + {.nb.notifier_call = any_notifier, .event_type = MLX5_EVENT_TYPE_NOTIFY_ANY }, + {.nb.notifier_call = temp_warn, .event_type = MLX5_EVENT_TYPE_TEMP_WARN_EVENT }, + {.nb.notifier_call = port_module, .event_type = MLX5_EVENT_TYPE_PORT_MODULE_EVENT }, + + /* Events to be forwarded (as is) to mlx5 core interfaces (mlx5e/mlx5_ib) */ + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_PORT_CHANGE }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_GENERAL_EVENT }, + /* QP/WQ resource events to forward */ + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_DCT_DRAINED }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_PATH_MIG }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_COMM_EST }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_SQ_DRAINED }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_SRQ_LAST_WQE }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_WQ_CATAS_ERROR }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_PATH_MIG_FAILED }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_WQ_ACCESS_ERROR }, + /* SRQ events */ + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_SRQ_CATAS_ERROR }, + {.nb.notifier_call = forward_event, .event_type = MLX5_EVENT_TYPE_SRQ_RQ_LIMIT }, +}; + +struct mlx5_events { + struct mlx5_core_dev *dev; + struct mlx5_event_nb notifiers[ARRAY_SIZE(events_nbs_ref)]; + /* driver notifier chain */ + struct atomic_notifier_head nh; + /* port module events stats */ + struct mlx5_pme_stats pme_stats; +}; + +static const char *eqe_type_str(u8 type) +{ + switch (type) { + case MLX5_EVENT_TYPE_COMP: + return "MLX5_EVENT_TYPE_COMP"; + case MLX5_EVENT_TYPE_PATH_MIG: + return "MLX5_EVENT_TYPE_PATH_MIG"; + case MLX5_EVENT_TYPE_COMM_EST: + return "MLX5_EVENT_TYPE_COMM_EST"; + case MLX5_EVENT_TYPE_SQ_DRAINED: + return "MLX5_EVENT_TYPE_SQ_DRAINED"; + case MLX5_EVENT_TYPE_SRQ_LAST_WQE: + return "MLX5_EVENT_TYPE_SRQ_LAST_WQE"; + case MLX5_EVENT_TYPE_SRQ_RQ_LIMIT: + return "MLX5_EVENT_TYPE_SRQ_RQ_LIMIT"; + case MLX5_EVENT_TYPE_CQ_ERROR: + return "MLX5_EVENT_TYPE_CQ_ERROR"; + case MLX5_EVENT_TYPE_WQ_CATAS_ERROR: + return "MLX5_EVENT_TYPE_WQ_CATAS_ERROR"; + case MLX5_EVENT_TYPE_PATH_MIG_FAILED: + return "MLX5_EVENT_TYPE_PATH_MIG_FAILED"; + case MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + return "MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR"; + case MLX5_EVENT_TYPE_WQ_ACCESS_ERROR: + return "MLX5_EVENT_TYPE_WQ_ACCESS_ERROR"; + case MLX5_EVENT_TYPE_SRQ_CATAS_ERROR: + return "MLX5_EVENT_TYPE_SRQ_CATAS_ERROR"; + case MLX5_EVENT_TYPE_INTERNAL_ERROR: + return "MLX5_EVENT_TYPE_INTERNAL_ERROR"; + case MLX5_EVENT_TYPE_PORT_CHANGE: + return "MLX5_EVENT_TYPE_PORT_CHANGE"; + case MLX5_EVENT_TYPE_GPIO_EVENT: + return "MLX5_EVENT_TYPE_GPIO_EVENT"; + case MLX5_EVENT_TYPE_PORT_MODULE_EVENT: + return "MLX5_EVENT_TYPE_PORT_MODULE_EVENT"; + case 
MLX5_EVENT_TYPE_TEMP_WARN_EVENT: + return "MLX5_EVENT_TYPE_TEMP_WARN_EVENT"; + case MLX5_EVENT_TYPE_REMOTE_CONFIG: + return "MLX5_EVENT_TYPE_REMOTE_CONFIG"; + case MLX5_EVENT_TYPE_DB_BF_CONGESTION: + return "MLX5_EVENT_TYPE_DB_BF_CONGESTION"; + case MLX5_EVENT_TYPE_STALL_EVENT: + return "MLX5_EVENT_TYPE_STALL_EVENT"; + case MLX5_EVENT_TYPE_CMD: + return "MLX5_EVENT_TYPE_CMD"; + case MLX5_EVENT_TYPE_PAGE_REQUEST: + return "MLX5_EVENT_TYPE_PAGE_REQUEST"; + case MLX5_EVENT_TYPE_PAGE_FAULT: + return "MLX5_EVENT_TYPE_PAGE_FAULT"; + case MLX5_EVENT_TYPE_PPS_EVENT: + return "MLX5_EVENT_TYPE_PPS_EVENT"; + case MLX5_EVENT_TYPE_NIC_VPORT_CHANGE: + return "MLX5_EVENT_TYPE_NIC_VPORT_CHANGE"; + case MLX5_EVENT_TYPE_FPGA_ERROR: + return "MLX5_EVENT_TYPE_FPGA_ERROR"; + case MLX5_EVENT_TYPE_FPGA_QP_ERROR: + return "MLX5_EVENT_TYPE_FPGA_QP_ERROR"; + case MLX5_EVENT_TYPE_GENERAL_EVENT: + return "MLX5_EVENT_TYPE_GENERAL_EVENT"; + case MLX5_EVENT_TYPE_MONITOR_COUNTER: + return "MLX5_EVENT_TYPE_MONITOR_COUNTER"; + case MLX5_EVENT_TYPE_DEVICE_TRACER: + return "MLX5_EVENT_TYPE_DEVICE_TRACER"; + default: + return "Unrecognized event"; + } +} + +/* handles all FW events, type == eqe->type */ +static int any_notifier(struct notifier_block *nb, + unsigned long type, void *data) +{ + struct mlx5_event_nb *event_nb = mlx5_nb_cof(nb, struct mlx5_event_nb, nb); + struct mlx5_events *events = event_nb->ctx; + struct mlx5_eqe *eqe = data; + + mlx5_core_dbg(events->dev, "Async eqe type %s, subtype (%d)\n", + eqe_type_str(eqe->type), eqe->sub_type); + return NOTIFY_OK; +} + +/* type == MLX5_EVENT_TYPE_TEMP_WARN_EVENT */ +static int temp_warn(struct notifier_block *nb, unsigned long type, void *data) +{ + struct mlx5_event_nb *event_nb = mlx5_nb_cof(nb, struct mlx5_event_nb, nb); + struct mlx5_events *events = event_nb->ctx; + struct mlx5_eqe *eqe = data; + u64 value_lsb; + u64 value_msb; + + value_lsb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_lsb); + value_msb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_msb); + + mlx5_core_warn(events->dev, + "High temperature on sensors with bit set %llx %llx", + value_msb, value_lsb); + + return NOTIFY_OK; +} + +/* MLX5_EVENT_TYPE_PORT_MODULE_EVENT */ +static const char *mlx5_pme_status_to_string(enum port_module_event_status_type status) +{ + switch (status) { + case MLX5_MODULE_STATUS_PLUGGED: + return "Cable plugged"; + case MLX5_MODULE_STATUS_UNPLUGGED: + return "Cable unplugged"; + case MLX5_MODULE_STATUS_ERROR: + return "Cable error"; + case MLX5_MODULE_STATUS_DISABLED: + return "Cable disabled"; + default: + return "Unknown status"; + } +} + +static const char *mlx5_pme_error_to_string(enum port_module_event_error_type error) +{ + switch (error) { + case MLX5_MODULE_EVENT_ERROR_POWER_BUDGET_EXCEEDED: + return "Power budget exceeded"; + case MLX5_MODULE_EVENT_ERROR_LONG_RANGE_FOR_NON_MLNX: + return "Long Range for non MLNX cable"; + case MLX5_MODULE_EVENT_ERROR_BUS_STUCK: + return "Bus stuck (I2C or data shorted)"; + case MLX5_MODULE_EVENT_ERROR_NO_EEPROM_RETRY_TIMEOUT: + return "No EEPROM/retry timeout"; + case MLX5_MODULE_EVENT_ERROR_ENFORCE_PART_NUMBER_LIST: + return "Enforce part number list"; + case MLX5_MODULE_EVENT_ERROR_UNKNOWN_IDENTIFIER: + return "Unknown identifier"; + case MLX5_MODULE_EVENT_ERROR_HIGH_TEMPERATURE: + return "High Temperature"; + case MLX5_MODULE_EVENT_ERROR_BAD_CABLE: + return "Bad or shorted cable/module"; + case MLX5_MODULE_EVENT_ERROR_PCIE_POWER_SLOT_EXCEEDED: + return "One or more network ports have been powered down due to 
insufficient/unadvertised power on the PCIe slot"; + default: + return "Unknown error"; + } +} + +/* type == MLX5_EVENT_TYPE_PORT_MODULE_EVENT */ +static int port_module(struct notifier_block *nb, unsigned long type, void *data) +{ + struct mlx5_event_nb *event_nb = mlx5_nb_cof(nb, struct mlx5_event_nb, nb); + struct mlx5_events *events = event_nb->ctx; + struct mlx5_eqe *eqe = data; + + enum port_module_event_status_type module_status; + enum port_module_event_error_type error_type; + struct mlx5_eqe_port_module *module_event_eqe; + const char *status_str, *error_str; + u8 module_num; + + module_event_eqe = &eqe->data.port_module; + module_num = module_event_eqe->module; + module_status = module_event_eqe->module_status & + PORT_MODULE_EVENT_MODULE_STATUS_MASK; + error_type = module_event_eqe->error_type & + PORT_MODULE_EVENT_ERROR_TYPE_MASK; + + if (module_status < MLX5_MODULE_STATUS_NUM) + events->pme_stats.status_counters[module_status]++; + status_str = mlx5_pme_status_to_string(module_status); + + if (module_status == MLX5_MODULE_STATUS_ERROR) { + if (error_type < MLX5_MODULE_EVENT_ERROR_NUM) + events->pme_stats.error_counters[error_type]++; + error_str = mlx5_pme_error_to_string(error_type); + } + + if (!printk_ratelimit()) + return NOTIFY_OK; + + if (module_status == MLX5_MODULE_STATUS_ERROR) + mlx5_core_err(events->dev, + "Port module event[error]: module %u, %s, %s\n", + module_num, status_str, error_str); + else + mlx5_core_info(events->dev, + "Port module event: module %u, %s\n", + module_num, status_str); + + return NOTIFY_OK; +} + +void mlx5_get_pme_stats(struct mlx5_core_dev *dev, struct mlx5_pme_stats *stats) +{ + *stats = dev->priv.events->pme_stats; +} + +/* forward event as is to registered interfaces (mlx5e/mlx5_ib) */ +static int forward_event(struct notifier_block *nb, unsigned long event, void *data) +{ + struct mlx5_event_nb *event_nb = mlx5_nb_cof(nb, struct mlx5_event_nb, nb); + struct mlx5_events *events = event_nb->ctx; + struct mlx5_eqe *eqe = data; + + mlx5_core_dbg(events->dev, "Async eqe type %s, subtype (%d) forward to interfaces\n", + eqe_type_str(eqe->type), eqe->sub_type); + atomic_notifier_call_chain(&events->nh, event, data); + return NOTIFY_OK; +} + +int mlx5_events_init(struct mlx5_core_dev *dev) +{ + struct mlx5_events *events = kzalloc(sizeof(*events), GFP_KERNEL); + + if (!events) + return -ENOMEM; + + ATOMIC_INIT_NOTIFIER_HEAD(&events->nh); + events->dev = dev; + dev->priv.events = events; + return 0; +} + +void mlx5_events_cleanup(struct mlx5_core_dev *dev) +{ + kvfree(dev->priv.events); +} + +void mlx5_events_start(struct mlx5_core_dev *dev) +{ + struct mlx5_events *events = dev->priv.events; + int i; + + for (i = 0; i < ARRAY_SIZE(events_nbs_ref); i++) { + events->notifiers[i].nb = events_nbs_ref[i]; + events->notifiers[i].ctx = events; + mlx5_eq_notifier_register(dev, &events->notifiers[i].nb); + } +} + +void mlx5_events_stop(struct mlx5_core_dev *dev) +{ + struct mlx5_events *events = dev->priv.events; + int i; + + for (i = ARRAY_SIZE(events_nbs_ref) - 1; i >= 0 ; i--) + mlx5_eq_notifier_unregister(dev, &events->notifiers[i].nb); +} + +int mlx5_notifier_register(struct mlx5_core_dev *dev, struct notifier_block *nb) +{ + struct mlx5_events *events = dev->priv.events; + + return atomic_notifier_chain_register(&events->nh, nb); +} +EXPORT_SYMBOL(mlx5_notifier_register); + +int mlx5_notifier_unregister(struct mlx5_core_dev *dev, struct notifier_block *nb) +{ + struct mlx5_events *events = dev->priv.events; + + return 
atomic_notifier_chain_unregister(&events->nh, nb); +} +EXPORT_SYMBOL(mlx5_notifier_unregister); + +int mlx5_notifier_call_chain(struct mlx5_events *events, unsigned int event, void *data) +{ + return atomic_notifier_call_chain(&events->nh, event, data); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c index 8ca1d1949d93..873541ef4c1b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c @@ -334,7 +334,7 @@ static void mlx5_fpga_conn_handle_cqe(struct mlx5_fpga_conn *conn, { u8 opcode, status = 0; - opcode = cqe->op_own >> 4; + opcode = get_cqe_opcode(cqe); switch (opcode) { case MLX5_CQE_REQ_ERR: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c index 436a8136f26f..27c5f6c7d36a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c @@ -36,6 +36,7 @@ #include "mlx5_core.h" #include "lib/mlx5.h" +#include "lib/eq.h" #include "fpga/core.h" #include "fpga/conn.h" @@ -145,6 +146,22 @@ static int mlx5_fpga_device_brb(struct mlx5_fpga_device *fdev) return 0; } +static int mlx5_fpga_event(struct mlx5_fpga_device *, unsigned long, void *); + +static int fpga_err_event(struct notifier_block *nb, unsigned long event, void *eqe) +{ + struct mlx5_fpga_device *fdev = mlx5_nb_cof(nb, struct mlx5_fpga_device, fpga_err_nb); + + return mlx5_fpga_event(fdev, event, eqe); +} + +static int fpga_qp_err_event(struct notifier_block *nb, unsigned long event, void *eqe) +{ + struct mlx5_fpga_device *fdev = mlx5_nb_cof(nb, struct mlx5_fpga_device, fpga_qp_err_nb); + + return mlx5_fpga_event(fdev, event, eqe); +} + int mlx5_fpga_device_start(struct mlx5_core_dev *mdev) { struct mlx5_fpga_device *fdev = mdev->fpga; @@ -185,6 +202,11 @@ int mlx5_fpga_device_start(struct mlx5_core_dev *mdev) if (err) goto out; + MLX5_NB_INIT(&fdev->fpga_err_nb, fpga_err_event, FPGA_ERROR); + MLX5_NB_INIT(&fdev->fpga_qp_err_nb, fpga_qp_err_event, FPGA_QP_ERROR); + mlx5_eq_notifier_register(fdev->mdev, &fdev->fpga_err_nb); + mlx5_eq_notifier_register(fdev->mdev, &fdev->fpga_qp_err_nb); + err = mlx5_fpga_conn_device_init(fdev); if (err) goto err_rsvd_gid; @@ -201,6 +223,8 @@ err_conn_init: mlx5_fpga_conn_device_cleanup(fdev); err_rsvd_gid: + mlx5_eq_notifier_unregister(fdev->mdev, &fdev->fpga_err_nb); + mlx5_eq_notifier_unregister(fdev->mdev, &fdev->fpga_qp_err_nb); mlx5_core_unreserve_gids(mdev, max_num_qps); out: spin_lock_irqsave(&fdev->state_lock, flags); @@ -256,6 +280,9 @@ void mlx5_fpga_device_stop(struct mlx5_core_dev *mdev) } mlx5_fpga_conn_device_cleanup(fdev); + mlx5_eq_notifier_unregister(fdev->mdev, &fdev->fpga_err_nb); + mlx5_eq_notifier_unregister(fdev->mdev, &fdev->fpga_qp_err_nb); + max_num_qps = MLX5_CAP_FPGA(mdev, shell_caps.max_num_qps); mlx5_core_unreserve_gids(mdev, max_num_qps); } @@ -283,9 +310,10 @@ static const char *mlx5_fpga_qp_syndrome_to_string(u8 syndrome) return "Unknown"; } -void mlx5_fpga_event(struct mlx5_core_dev *mdev, u8 event, void *data) +static int mlx5_fpga_event(struct mlx5_fpga_device *fdev, + unsigned long event, void *eqe) { - struct mlx5_fpga_device *fdev = mdev->fpga; + void *data = ((struct mlx5_eqe *)eqe)->data.raw; const char *event_name; bool teardown = false; unsigned long flags; @@ -303,9 +331,7 @@ void mlx5_fpga_event(struct mlx5_core_dev *mdev, u8 event, void *data) fpga_qpn = MLX5_GET(fpga_qp_error_event, 
data, fpga_qpn); break; default: - mlx5_fpga_warn_ratelimited(fdev, "Unexpected event %u\n", - event); - return; + return NOTIFY_DONE; } spin_lock_irqsave(&fdev->state_lock, flags); @@ -326,4 +352,6 @@ void mlx5_fpga_event(struct mlx5_core_dev *mdev, u8 event, void *data) */ if (teardown) mlx5_trigger_health_work(fdev->mdev); + + return NOTIFY_OK; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.h b/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.h index 3e2355c8df3f..7e2e871dbf83 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.h @@ -35,11 +35,16 @@ #ifdef CONFIG_MLX5_FPGA +#include + +#include "lib/eq.h" #include "fpga/cmd.h" /* Represents an Innova device */ struct mlx5_fpga_device { struct mlx5_core_dev *mdev; + struct mlx5_nb fpga_err_nb; + struct mlx5_nb fpga_qp_err_nb; spinlock_t state_lock; /* Protects state transitions */ enum mlx5_fpga_status state; enum mlx5_fpga_image last_admin_image; @@ -82,7 +87,6 @@ int mlx5_fpga_init(struct mlx5_core_dev *mdev); void mlx5_fpga_cleanup(struct mlx5_core_dev *mdev); int mlx5_fpga_device_start(struct mlx5_core_dev *mdev); void mlx5_fpga_device_stop(struct mlx5_core_dev *mdev); -void mlx5_fpga_event(struct mlx5_core_dev *mdev, u8 event, void *data); #else @@ -104,11 +108,6 @@ static inline void mlx5_fpga_device_stop(struct mlx5_core_dev *mdev) { } -static inline void mlx5_fpga_event(struct mlx5_core_dev *mdev, u8 event, - void *data) -{ -} - #endif #endif /* __MLX5_FPGA_CORE_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c index 08a891f9aade..c44ccb67c4a3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c @@ -308,22 +308,68 @@ static int mlx5_cmd_destroy_flow_group(struct mlx5_core_dev *dev, return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); } +static int mlx5_set_extended_dest(struct mlx5_core_dev *dev, + struct fs_fte *fte, bool *extended_dest) +{ + int fw_log_max_fdb_encap_uplink = + MLX5_CAP_ESW(dev, log_max_fdb_encap_uplink); + int num_fwd_destinations = 0; + struct mlx5_flow_rule *dst; + int num_encap = 0; + + *extended_dest = false; + if (!(fte->action.action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST)) + return 0; + + list_for_each_entry(dst, &fte->node.children, node.list) { + if (dst->dest_attr.type == MLX5_FLOW_DESTINATION_TYPE_COUNTER) + continue; + if (dst->dest_attr.type == MLX5_FLOW_DESTINATION_TYPE_VPORT && + dst->dest_attr.vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID) + num_encap++; + num_fwd_destinations++; + } + if (num_fwd_destinations > 1 && num_encap > 0) + *extended_dest = true; + + if (*extended_dest && !fw_log_max_fdb_encap_uplink) { + mlx5_core_warn(dev, "FW does not support extended destination"); + return -EOPNOTSUPP; + } + if (num_encap > (1 << fw_log_max_fdb_encap_uplink)) { + mlx5_core_warn(dev, "FW does not support more than %d encaps", + 1 << fw_log_max_fdb_encap_uplink); + return -EOPNOTSUPP; + } + + return 0; +} static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev, int opmod, int modify_mask, struct mlx5_flow_table *ft, unsigned group_id, struct fs_fte *fte) { - unsigned int inlen = MLX5_ST_SZ_BYTES(set_fte_in) + - fte->dests_size * MLX5_ST_SZ_BYTES(dest_format_struct); u32 out[MLX5_ST_SZ_DW(set_fte_out)] = {0}; + bool extended_dest = false; struct mlx5_flow_rule *dst; void *in_flow_context, *vlan; void *in_match_value; + unsigned int inlen; + int dst_cnt_size; void *in_dests; u32 
*in; int err; + if (mlx5_set_extended_dest(dev, fte, &extended_dest)) + return -EOPNOTSUPP; + + if (!extended_dest) + dst_cnt_size = MLX5_ST_SZ_BYTES(dest_format_struct); + else + dst_cnt_size = MLX5_ST_SZ_BYTES(extended_dest_format); + + inlen = MLX5_ST_SZ_BYTES(set_fte_in) + fte->dests_size * dst_cnt_size; in = kvzalloc(inlen, GFP_KERNEL); if (!in) return -ENOMEM; @@ -343,9 +389,20 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev, MLX5_SET(flow_context, in_flow_context, group_id, group_id); MLX5_SET(flow_context, in_flow_context, flow_tag, fte->action.flow_tag); - MLX5_SET(flow_context, in_flow_context, action, fte->action.action); - MLX5_SET(flow_context, in_flow_context, packet_reformat_id, - fte->action.reformat_id); + MLX5_SET(flow_context, in_flow_context, extended_destination, + extended_dest); + if (extended_dest) { + u32 action; + + action = fte->action.action & + ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; + MLX5_SET(flow_context, in_flow_context, action, action); + } else { + MLX5_SET(flow_context, in_flow_context, action, + fte->action.action); + MLX5_SET(flow_context, in_flow_context, packet_reformat_id, + fte->action.reformat_id); + } MLX5_SET(flow_context, in_flow_context, modify_header_id, fte->action.modify_id); @@ -387,10 +444,20 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev, id = dst->dest_attr.vport.num; MLX5_SET(dest_format_struct, in_dests, destination_eswitch_owner_vhca_id_valid, - dst->dest_attr.vport.vhca_id_valid); + !!(dst->dest_attr.vport.flags & + MLX5_FLOW_DEST_VPORT_VHCA_ID)); MLX5_SET(dest_format_struct, in_dests, destination_eswitch_owner_vhca_id, dst->dest_attr.vport.vhca_id); + if (extended_dest) { + MLX5_SET(dest_format_struct, in_dests, + packet_reformat, + !!(dst->dest_attr.vport.flags & + MLX5_FLOW_DEST_VPORT_REFORMAT_ID)); + MLX5_SET(extended_dest_format, in_dests, + packet_reformat_id, + dst->dest_attr.vport.reformat_id); + } break; default: id = dst->dest_attr.tir_num; @@ -399,7 +466,7 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev, MLX5_SET(dest_format_struct, in_dests, destination_type, type); MLX5_SET(dest_format_struct, in_dests, destination_id, id); - in_dests += MLX5_ST_SZ_BYTES(dest_format_struct); + in_dests += dst_cnt_size; list_size++; } @@ -420,7 +487,7 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev, MLX5_SET(flow_counter_list, in_dests, flow_counter_id, dst->dest_attr.counter_id); - in_dests += MLX5_ST_SZ_BYTES(dest_format_struct); + in_dests += dst_cnt_size; list_size++; } if (list_size > max_list_size) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 08233cf44871..79f122b45def 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -1373,7 +1373,10 @@ static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1, { if (d1->type == d2->type) { if ((d1->type == MLX5_FLOW_DESTINATION_TYPE_VPORT && - d1->vport.num == d2->vport.num) || + d1->vport.num == d2->vport.num && + d1->vport.flags == d2->vport.flags && + ((d1->vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID) ? 
+ (d1->vport.reformat_id == d2->vport.reformat_id) : true)) || (d1->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE && d1->ft == d2->ft) || (d1->type == MLX5_FLOW_DESTINATION_TYPE_TIR && diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h index b51ad217da32..2dc86347af58 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h @@ -145,29 +145,6 @@ struct mlx5_flow_table { struct rhltable fgs_hash; }; -struct mlx5_fc_cache { - u64 packets; - u64 bytes; - u64 lastuse; -}; - -struct mlx5_fc { - struct list_head list; - struct llist_node addlist; - struct llist_node dellist; - - /* last{packets,bytes} members are used when calculating the delta since - * last reading - */ - u64 lastpackets; - u64 lastbytes; - - u32 id; - bool aging; - - struct mlx5_fc_cache cache ____cacheline_aligned_in_smp; -}; - struct mlx5_ft_underlay_qp { struct list_head list; u32 qpn; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c index 32accd6b041b..c6c28f56aa29 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c @@ -41,6 +41,29 @@ /* Max number of counters to query in bulk read is 32K */ #define MLX5_SW_MAX_COUNTERS_BULK BIT(15) +struct mlx5_fc_cache { + u64 packets; + u64 bytes; + u64 lastuse; +}; + +struct mlx5_fc { + struct list_head list; + struct llist_node addlist; + struct llist_node dellist; + + /* last{packets,bytes} members are used when calculating the delta since + * last reading + */ + u64 lastpackets; + u64 lastbytes; + + u32 id; + bool aging; + + struct mlx5_fc_cache cache ____cacheline_aligned_in_smp; +}; + /* locking scheme: * * It is the responsibility of the user to prevent concurrent calls or bad diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c index 43118de8ee99..196c07383082 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c @@ -38,6 +38,8 @@ #include #include #include "mlx5_core.h" +#include "lib/eq.h" +#include "lib/mlx5.h" enum { MLX5_HEALTH_POLL_INTERVAL = 2 * HZ, @@ -78,29 +80,6 @@ void mlx5_set_nic_state(struct mlx5_core_dev *dev, u8 state) &dev->iseg->cmdq_addr_l_sz); } -static void trigger_cmd_completions(struct mlx5_core_dev *dev) -{ - unsigned long flags; - u64 vector; - - /* wait for pending handlers to complete */ - synchronize_irq(pci_irq_vector(dev->pdev, MLX5_EQ_VEC_CMD)); - spin_lock_irqsave(&dev->cmd.alloc_lock, flags); - vector = ~dev->cmd.bitmask & ((1ul << (1 << dev->cmd.log_sz)) - 1); - if (!vector) - goto no_trig; - - vector |= MLX5_TRIGGERED_CMD_COMP; - spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags); - - mlx5_core_dbg(dev, "vector 0x%llx\n", vector); - mlx5_cmd_comp_handler(dev, vector, true); - return; - -no_trig: - spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags); -} - static int in_fatal(struct mlx5_core_dev *dev) { struct mlx5_core_health *health = &dev->priv.health; @@ -124,10 +103,10 @@ void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force) mlx5_core_err(dev, "start\n"); if (pci_channel_offline(dev->pdev) || in_fatal(dev) || force) { dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR; - trigger_cmd_completions(dev); + mlx5_cmd_trigger_completions(dev); } - mlx5_core_event(dev, MLX5_DEV_EVENT_SYS_ERROR, 1); + mlx5_notifier_call_chain(dev->priv.events, 
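/*
 * Hedged sketch of the new consumer model, using only calls visible in
 * this patch (mlx5_notifier_register() appears in the qp.c hunk below):
 * instead of mlx5_core_event() fanning out hard-coded callbacks, any
 * interested block registers on the events notifier chain, e.g.:
 *
 *	static int my_sys_err_cb(struct notifier_block *nb,
 *				 unsigned long event, void *data)
 *	{
 *		if (event != MLX5_DEV_EVENT_SYS_ERROR)
 *			return NOTIFY_DONE;
 *		// handle the fatal-error notification here
 *		return NOTIFY_OK;
 *	}
 *
 * followed by mlx5_notifier_register(dev, &my_nb) with
 * my_nb.notifier_call = my_sys_err_cb; the my_* names are placeholders.
 */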
MLX5_DEV_EVENT_SYS_ERROR, (void *)1); mlx5_core_err(dev, "end\n"); unlock: diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c index 11dabd62e2c7..bfc0f6581729 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c @@ -87,7 +87,7 @@ int mlx5i_init(struct mlx5_core_dev *mdev, mlx5_query_port_max_mtu(mdev, &max_mtu, 1); netdev->mtu = max_mtu; - mlx5e_build_nic_params(mdev, &priv->channels.params, + mlx5e_build_nic_params(mdev, &priv->rss_params, &priv->channels.params, mlx5e_get_netdev_max_channels(netdev), netdev->mtu); mlx5i_build_nic_params(mdev, &priv->channels.params); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c index 582b2f18010a..3a6baed722d8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c @@ -34,11 +34,15 @@ #include #include #include "mlx5_core.h" +#include "eswitch.h" enum { - MLX5_LAG_FLAG_BONDED = 1 << 0, + MLX5_LAG_FLAG_ROCE = 1 << 0, + MLX5_LAG_FLAG_SRIOV = 1 << 1, }; +#define MLX5_LAG_MODE_FLAGS (MLX5_LAG_FLAG_ROCE | MLX5_LAG_FLAG_SRIOV) + struct lag_func { struct mlx5_core_dev *dev; struct net_device *netdev; @@ -61,11 +65,6 @@ struct mlx5_lag { struct lag_tracker tracker; struct delayed_work bond_work; struct notifier_block nb; - - /* Admin state. Allow lag only if allowed is true - * even if network conditions for lag were met - */ - bool allowed; }; /* General purpose, use for short periods of time. @@ -165,9 +164,19 @@ static int mlx5_lag_dev_get_netdev_idx(struct mlx5_lag *ldev, return -1; } -static bool mlx5_lag_is_bonded(struct mlx5_lag *ldev) +static bool __mlx5_lag_is_roce(struct mlx5_lag *ldev) +{ + return !!(ldev->flags & MLX5_LAG_FLAG_ROCE); +} + +static bool __mlx5_lag_is_sriov(struct mlx5_lag *ldev) +{ + return !!(ldev->flags & MLX5_LAG_FLAG_SRIOV); +} + +static bool __mlx5_lag_is_active(struct mlx5_lag *ldev) { - return !!(ldev->flags & MLX5_LAG_FLAG_BONDED); + return !!(ldev->flags & MLX5_LAG_MODE_FLAGS); } static void mlx5_infer_tx_affinity_mapping(struct lag_tracker *tracker, @@ -186,36 +195,131 @@ static void mlx5_infer_tx_affinity_mapping(struct lag_tracker *tracker, *port2 = 1; } -static void mlx5_activate_lag(struct mlx5_lag *ldev, - struct lag_tracker *tracker) +static void mlx5_modify_lag(struct mlx5_lag *ldev, + struct lag_tracker *tracker) { struct mlx5_core_dev *dev0 = ldev->pf[0].dev; + u8 v2p_port1, v2p_port2; int err; - ldev->flags |= MLX5_LAG_FLAG_BONDED; + mlx5_infer_tx_affinity_mapping(tracker, &v2p_port1, + &v2p_port2); + + if (v2p_port1 != ldev->v2p_map[0] || + v2p_port2 != ldev->v2p_map[1]) { + ldev->v2p_map[0] = v2p_port1; + ldev->v2p_map[1] = v2p_port2; + + mlx5_core_info(dev0, "modify lag map port 1:%d port 2:%d", + ldev->v2p_map[0], ldev->v2p_map[1]); + + err = mlx5_cmd_modify_lag(dev0, v2p_port1, v2p_port2); + if (err) + mlx5_core_err(dev0, + "Failed to modify LAG (%d)\n", + err); + } +} + +static int mlx5_create_lag(struct mlx5_lag *ldev, + struct lag_tracker *tracker) +{ + struct mlx5_core_dev *dev0 = ldev->pf[0].dev; + int err; mlx5_infer_tx_affinity_mapping(tracker, &ldev->v2p_map[0], &ldev->v2p_map[1]); + mlx5_core_info(dev0, "lag map port 1:%d port 2:%d", + ldev->v2p_map[0], ldev->v2p_map[1]); + err = mlx5_cmd_create_lag(dev0, ldev->v2p_map[0], ldev->v2p_map[1]); if (err) mlx5_core_err(dev0, "Failed to create LAG (%d)\n", err); + return err; +} + +static int 
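/*
 * Summary of the flag split above: the single MLX5_LAG_FLAG_BONDED is
 * replaced by two mutually exclusive modes, MLX5_LAG_FLAG_ROCE (picked
 * in mlx5_do_bond() when neither PF has SR-IOV enabled) and
 * MLX5_LAG_FLAG_SRIOV, with __mlx5_lag_is_active() testing their union
 * through MLX5_LAG_MODE_FLAGS.
 */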
mlx5_activate_lag(struct mlx5_lag *ldev, + struct lag_tracker *tracker, + u8 flags) +{ + bool roce_lag = !!(flags & MLX5_LAG_FLAG_ROCE); + struct mlx5_core_dev *dev0 = ldev->pf[0].dev; + int err; + + err = mlx5_create_lag(ldev, tracker); + if (err) { + if (roce_lag) { + mlx5_core_err(dev0, + "Failed to activate RoCE LAG\n"); + } else { + mlx5_core_err(dev0, + "Failed to activate VF LAG\n" + "Make sure all VFs are unbound prior to VF LAG activation or deactivation\n"); + } + return err; + } + + ldev->flags |= flags; + return 0; } -static void mlx5_deactivate_lag(struct mlx5_lag *ldev) +static int mlx5_deactivate_lag(struct mlx5_lag *ldev) { struct mlx5_core_dev *dev0 = ldev->pf[0].dev; + bool roce_lag = __mlx5_lag_is_roce(ldev); int err; - ldev->flags &= ~MLX5_LAG_FLAG_BONDED; + ldev->flags &= ~MLX5_LAG_MODE_FLAGS; err = mlx5_cmd_destroy_lag(dev0); - if (err) - mlx5_core_err(dev0, - "Failed to destroy LAG (%d)\n", - err); + if (err) { + if (roce_lag) { + mlx5_core_err(dev0, + "Failed to deactivate RoCE LAG; driver restart required\n"); + } else { + mlx5_core_err(dev0, + "Failed to deactivate VF LAG; driver restart required\n" + "Make sure all VFs are unbound prior to VF LAG activation or deactivation\n"); + } + } + + return err; +} + +static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev) +{ + if (!ldev->pf[0].dev || !ldev->pf[1].dev) + return false; + +#ifdef CONFIG_MLX5_ESWITCH + return mlx5_esw_lag_prereq(ldev->pf[0].dev, ldev->pf[1].dev); +#else + return (!mlx5_sriov_is_enabled(ldev->pf[0].dev) && + !mlx5_sriov_is_enabled(ldev->pf[1].dev)); +#endif +} + +static void mlx5_lag_add_ib_devices(struct mlx5_lag *ldev) +{ + int i; + + for (i = 0; i < MLX5_MAX_PORTS; i++) + if (ldev->pf[i].dev) + mlx5_add_dev_by_protocol(ldev->pf[i].dev, + MLX5_INTERFACE_PROTOCOL_IB); +} + +static void mlx5_lag_remove_ib_devices(struct mlx5_lag *ldev) +{ + int i; + + for (i = 0; i < MLX5_MAX_PORTS; i++) + if (ldev->pf[i].dev) + mlx5_remove_dev_by_protocol(ldev->pf[i].dev, + MLX5_INTERFACE_PROTOCOL_IB); } static void mlx5_do_bond(struct mlx5_lag *ldev) @@ -223,9 +327,8 @@ static void mlx5_do_bond(struct mlx5_lag *ldev) struct mlx5_core_dev *dev0 = ldev->pf[0].dev; struct mlx5_core_dev *dev1 = ldev->pf[1].dev; struct lag_tracker tracker; - u8 v2p_port1, v2p_port2; - int i, err; - bool do_bond; + bool do_bond, roce_lag; + int err; if (!dev0 || !dev1) return; @@ -234,42 +337,45 @@ static void mlx5_do_bond(struct mlx5_lag *ldev) tracker = ldev->tracker; mutex_unlock(&lag_mutex); - do_bond = tracker.is_bonded && ldev->allowed; + do_bond = tracker.is_bonded && mlx5_lag_check_prereq(ldev); - if (do_bond && !mlx5_lag_is_bonded(ldev)) { - for (i = 0; i < MLX5_MAX_PORTS; i++) - mlx5_remove_dev_by_protocol(ldev->pf[i].dev, - MLX5_INTERFACE_PROTOCOL_IB); + if (do_bond && !__mlx5_lag_is_active(ldev)) { + roce_lag = !mlx5_sriov_is_enabled(dev0) && + !mlx5_sriov_is_enabled(dev1); - mlx5_activate_lag(ldev, &tracker); + if (roce_lag) + mlx5_lag_remove_ib_devices(ldev); - mlx5_add_dev_by_protocol(dev0, MLX5_INTERFACE_PROTOCOL_IB); - mlx5_nic_vport_enable_roce(dev1); - } else if (do_bond && mlx5_lag_is_bonded(ldev)) { - mlx5_infer_tx_affinity_mapping(&tracker, &v2p_port1, - &v2p_port2); + err = mlx5_activate_lag(ldev, &tracker, + roce_lag ? 
MLX5_LAG_FLAG_ROCE : + MLX5_LAG_FLAG_SRIOV); + if (err) { + if (roce_lag) + mlx5_lag_add_ib_devices(ldev); - if ((v2p_port1 != ldev->v2p_map[0]) || - (v2p_port2 != ldev->v2p_map[1])) { - ldev->v2p_map[0] = v2p_port1; - ldev->v2p_map[1] = v2p_port2; + return; + } - err = mlx5_cmd_modify_lag(dev0, v2p_port1, v2p_port2); - if (err) - mlx5_core_err(dev0, - "Failed to modify LAG (%d)\n", - err); + if (roce_lag) { + mlx5_add_dev_by_protocol(dev0, MLX5_INTERFACE_PROTOCOL_IB); + mlx5_nic_vport_enable_roce(dev1); + } + } else if (do_bond && __mlx5_lag_is_active(ldev)) { + mlx5_modify_lag(ldev, &tracker); + } else if (!do_bond && __mlx5_lag_is_active(ldev)) { + roce_lag = __mlx5_lag_is_roce(ldev); + + if (roce_lag) { + mlx5_remove_dev_by_protocol(dev0, MLX5_INTERFACE_PROTOCOL_IB); + mlx5_nic_vport_disable_roce(dev1); } - } else if (!do_bond && mlx5_lag_is_bonded(ldev)) { - mlx5_remove_dev_by_protocol(dev0, MLX5_INTERFACE_PROTOCOL_IB); - mlx5_nic_vport_disable_roce(dev1); - mlx5_deactivate_lag(ldev); + err = mlx5_deactivate_lag(ldev); + if (err) + return; - for (i = 0; i < MLX5_MAX_PORTS; i++) - if (ldev->pf[i].dev) - mlx5_add_dev_by_protocol(ldev->pf[i].dev, - MLX5_INTERFACE_PROTOCOL_IB); + if (roce_lag) + mlx5_lag_add_ib_devices(ldev); } } @@ -419,15 +525,6 @@ static int mlx5_lag_netdev_event(struct notifier_block *this, return NOTIFY_DONE; } -static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev) -{ - if ((ldev->pf[0].dev && mlx5_sriov_is_enabled(ldev->pf[0].dev)) || - (ldev->pf[1].dev && mlx5_sriov_is_enabled(ldev->pf[1].dev))) - return false; - else - return true; -} - static struct mlx5_lag *mlx5_lag_dev_alloc(void) { struct mlx5_lag *ldev; @@ -437,7 +534,6 @@ static struct mlx5_lag *mlx5_lag_dev_alloc(void) return NULL; INIT_DELAYED_WORK(&ldev->bond_work, mlx5_do_bond_work); - ldev->allowed = mlx5_lag_check_prereq(ldev); return ldev; } @@ -462,7 +558,6 @@ static void mlx5_lag_dev_add_pf(struct mlx5_lag *ldev, ldev->tracker.netdev_state[fn].link_up = 0; ldev->tracker.netdev_state[fn].tx_enabled = 0; - ldev->allowed = mlx5_lag_check_prereq(ldev); dev->priv.lag = ldev; mutex_unlock(&lag_mutex); @@ -484,7 +579,6 @@ static void mlx5_lag_dev_remove_pf(struct mlx5_lag *ldev, memset(&ldev->pf[i], 0, sizeof(*ldev->pf)); dev->priv.lag = NULL; - ldev->allowed = mlx5_lag_check_prereq(ldev); mutex_unlock(&lag_mutex); } @@ -532,7 +626,7 @@ void mlx5_lag_remove(struct mlx5_core_dev *dev) if (!ldev) return; - if (mlx5_lag_is_bonded(ldev)) + if (__mlx5_lag_is_active(ldev)) mlx5_deactivate_lag(ldev); mlx5_lag_dev_remove_pf(ldev, dev); @@ -549,56 +643,61 @@ void mlx5_lag_remove(struct mlx5_core_dev *dev) } } -bool mlx5_lag_is_active(struct mlx5_core_dev *dev) +bool mlx5_lag_is_roce(struct mlx5_core_dev *dev) { struct mlx5_lag *ldev; bool res; mutex_lock(&lag_mutex); ldev = mlx5_lag_dev_get(dev); - res = ldev && mlx5_lag_is_bonded(ldev); + res = ldev && __mlx5_lag_is_roce(ldev); mutex_unlock(&lag_mutex); return res; } -EXPORT_SYMBOL(mlx5_lag_is_active); +EXPORT_SYMBOL(mlx5_lag_is_roce); -static int mlx5_lag_set_state(struct mlx5_core_dev *dev, bool allow) +bool mlx5_lag_is_active(struct mlx5_core_dev *dev) { struct mlx5_lag *ldev; - int ret = 0; - bool lag_active; - - mlx5_dev_list_lock(); + bool res; + mutex_lock(&lag_mutex); ldev = mlx5_lag_dev_get(dev); - if (!ldev) { - ret = -ENODEV; - goto unlock; - } - lag_active = mlx5_lag_is_bonded(ldev); - if (!mlx5_lag_check_prereq(ldev) && allow) { - ret = -EINVAL; - goto unlock; - } - if (ldev->allowed == allow) - goto unlock; - ldev->allowed = allow; - if 
((lag_active && !allow) || allow) - mlx5_do_bond(ldev); -unlock: - mlx5_dev_list_unlock(); - return ret; + res = ldev && __mlx5_lag_is_active(ldev); + mutex_unlock(&lag_mutex); + + return res; } +EXPORT_SYMBOL(mlx5_lag_is_active); -int mlx5_lag_forbid(struct mlx5_core_dev *dev) +bool mlx5_lag_is_sriov(struct mlx5_core_dev *dev) { - return mlx5_lag_set_state(dev, false); + struct mlx5_lag *ldev; + bool res; + + mutex_lock(&lag_mutex); + ldev = mlx5_lag_dev_get(dev); + res = ldev && __mlx5_lag_is_sriov(ldev); + mutex_unlock(&lag_mutex); + + return res; } +EXPORT_SYMBOL(mlx5_lag_is_sriov); -int mlx5_lag_allow(struct mlx5_core_dev *dev) +void mlx5_lag_update(struct mlx5_core_dev *dev) { - return mlx5_lag_set_state(dev, true); + struct mlx5_lag *ldev; + + mlx5_dev_list_lock(); + ldev = mlx5_lag_dev_get(dev); + if (!ldev) + goto unlock; + + mlx5_do_bond(ldev); + +unlock: + mlx5_dev_list_unlock(); } struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev) @@ -609,7 +708,7 @@ struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev) mutex_lock(&lag_mutex); ldev = mlx5_lag_dev_get(dev); - if (!(ldev && mlx5_lag_is_bonded(ldev))) + if (!(ldev && __mlx5_lag_is_roce(ldev))) goto unlock; if (ldev->tracker.tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP) { @@ -638,7 +737,7 @@ bool mlx5_lag_intf_add(struct mlx5_interface *intf, struct mlx5_priv *priv) return true; ldev = mlx5_lag_dev_get(dev); - if (!ldev || !mlx5_lag_is_bonded(ldev) || ldev->pf[0].dev == dev) + if (!ldev || !__mlx5_lag_is_roce(ldev) || ldev->pf[0].dev == dev) return true; /* If bonded, we do not add an IB device for PF1. */ @@ -665,7 +764,7 @@ int mlx5_lag_query_cong_counters(struct mlx5_core_dev *dev, mutex_lock(&lag_mutex); ldev = mlx5_lag_dev_get(dev); - if (ldev && mlx5_lag_is_bonded(ldev)) { + if (ldev && __mlx5_lag_is_roce(ldev)) { num_ports = MLX5_MAX_PORTS; mdev[0] = ldev->pf[0].dev; mdev[1] = ldev->pf[1].dev; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c index 0d90b1b4a3d3..ca0ee9916e9e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c @@ -33,6 +33,7 @@ #include #include #include +#include "lib/eq.h" #include "en.h" #include "clock.h" @@ -71,7 +72,7 @@ static u64 read_internal_timer(const struct cyclecounter *cc) struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev, clock); - return mlx5_read_internal_timer(mdev) & cc->mask; + return mlx5_read_internal_timer(mdev, NULL) & cc->mask; } static void mlx5_update_clock_info_page(struct mlx5_core_dev *mdev) @@ -155,15 +156,19 @@ static int mlx5_ptp_settime(struct ptp_clock_info *ptp, return 0; } -static int mlx5_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts) +static int mlx5_ptp_gettimex(struct ptp_clock_info *ptp, struct timespec64 *ts, + struct ptp_system_timestamp *sts) { struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info); - u64 ns; + struct mlx5_core_dev *mdev = container_of(clock, struct mlx5_core_dev, + clock); unsigned long flags; + u64 cycles, ns; write_seqlock_irqsave(&clock->lock, flags); - ns = timecounter_read(&clock->tc); + cycles = mlx5_read_internal_timer(mdev, sts); + ns = timecounter_cyc2time(&clock->tc, cycles); write_sequnlock_irqrestore(&clock->lock, flags); *ts = ns_to_timespec64(ns); @@ -306,7 +311,7 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp, ts.tv_sec = rq->perout.start.sec; ts.tv_nsec = rq->perout.start.nsec; ns = 
timespec64_to_ns(&ts); - cycles_now = mlx5_read_internal_timer(mdev); + cycles_now = mlx5_read_internal_timer(mdev, NULL); write_seqlock_irqsave(&clock->lock, flags); nsec_now = timecounter_cyc2time(&clock->tc, cycles_now); nsec_delta = ns - nsec_now; @@ -383,7 +388,7 @@ static const struct ptp_clock_info mlx5_ptp_clock_info = { .pps = 0, .adjfreq = mlx5_ptp_adjfreq, .adjtime = mlx5_ptp_adjtime, - .gettime64 = mlx5_ptp_gettime, + .gettimex64 = mlx5_ptp_gettimex, .settime64 = mlx5_ptp_settime, .enable = NULL, .verify = NULL, @@ -439,16 +444,17 @@ static void mlx5_get_pps_caps(struct mlx5_core_dev *mdev) clock->pps_info.pin_caps[7] = MLX5_GET(mtpps_reg, out, cap_pin_7_mode); } -void mlx5_pps_event(struct mlx5_core_dev *mdev, - struct mlx5_eqe *eqe) +static int mlx5_pps_event(struct notifier_block *nb, + unsigned long type, void *data) { - struct mlx5_clock *clock = &mdev->clock; + struct mlx5_clock *clock = mlx5_nb_cof(nb, struct mlx5_clock, pps_nb); + struct mlx5_core_dev *mdev = clock->mdev; struct ptp_clock_event ptp_event; - struct timespec64 ts; - u64 nsec_now, nsec_delta; u64 cycles_now, cycles_delta; + u64 nsec_now, nsec_delta, ns; + struct mlx5_eqe *eqe = data; int pin = eqe->data.pps.pin; - s64 ns; + struct timespec64 ts; unsigned long flags; switch (clock->ptp_info.pin_config[pin].func) { @@ -463,11 +469,12 @@ void mlx5_pps_event(struct mlx5_core_dev *mdev, } else { ptp_event.type = PTP_CLOCK_EXTTS; } + /* TODO: clock->ptp can be NULL if ptp_clock_register fails */ ptp_clock_event(clock->ptp, &ptp_event); break; case PTP_PF_PEROUT: - mlx5_ptp_gettime(&clock->ptp_info, &ts); - cycles_now = mlx5_read_internal_timer(mdev); + mlx5_ptp_gettimex(&clock->ptp_info, &ts, NULL); + cycles_now = mlx5_read_internal_timer(mdev, NULL); ts.tv_sec += 1; ts.tv_nsec = 0; ns = timespec64_to_ns(&ts); @@ -481,8 +488,11 @@ void mlx5_pps_event(struct mlx5_core_dev *mdev, write_sequnlock_irqrestore(&clock->lock, flags); break; default: - mlx5_core_err(mdev, " Unhandled event\n"); + mlx5_core_err(mdev, " Unhandled clock PPS event, func %d\n", + clock->ptp_info.pin_config[pin].func); } + + return NOTIFY_OK; } void mlx5_init_clock(struct mlx5_core_dev *mdev) @@ -511,14 +521,14 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev) ktime_to_ns(ktime_get_real())); /* Calculate period in seconds to call the overflow watchdog - to make - * sure counter is checked at least once every wrap around. + * sure counter is checked at least twice every wrap around. * The period is calculated as the minimum between max HW cycles count * (The clock source mask) and max amount of cycles that can be * multiplied by clock multiplier where the result doesn't exceed * 64bits. 
*/ overflow_cycles = div64_u64(~0ULL >> 1, clock->cycles.mult); - overflow_cycles = min(overflow_cycles, clock->cycles.mask >> 1); + overflow_cycles = min(overflow_cycles, div_u64(clock->cycles.mask, 3)); ns = cyclecounter_cyc2ns(&clock->cycles, overflow_cycles, frac, &frac); @@ -567,6 +577,9 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev) PTR_ERR(clock->ptp)); clock->ptp = NULL; } + + MLX5_NB_INIT(&clock->pps_nb, mlx5_pps_event, PPS_EVENT); + mlx5_eq_notifier_register(mdev, &clock->pps_nb); } void mlx5_cleanup_clock(struct mlx5_core_dev *mdev) @@ -576,6 +589,7 @@ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev) if (!MLX5_CAP_GEN(mdev, device_frequency_khz)) return; + mlx5_eq_notifier_unregister(mdev, &clock->pps_nb); if (clock->ptp) { ptp_clock_unregister(clock->ptp); clock->ptp = NULL; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h index 263cb6e2aeee..31600924bdc3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.h @@ -36,7 +36,6 @@ #if IS_ENABLED(CONFIG_PTP_1588_CLOCK) void mlx5_init_clock(struct mlx5_core_dev *mdev); void mlx5_cleanup_clock(struct mlx5_core_dev *mdev); -void mlx5_pps_event(struct mlx5_core_dev *dev, struct mlx5_eqe *eqe); static inline int mlx5_clock_get_ptp_index(struct mlx5_core_dev *mdev) { @@ -60,8 +59,6 @@ static inline ktime_t mlx5_timecounter_cyc2time(struct mlx5_clock *clock, #else static inline void mlx5_init_clock(struct mlx5_core_dev *mdev) {} static inline void mlx5_cleanup_clock(struct mlx5_core_dev *mdev) {} -static inline void mlx5_pps_event(struct mlx5_core_dev *dev, struct mlx5_eqe *eqe) {} - static inline int mlx5_clock_get_ptp_index(struct mlx5_core_dev *mdev) { return -1; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c new file mode 100644 index 000000000000..bced2efe9bef --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c @@ -0,0 +1,255 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2018 Mellanox Technologies */ + +#include +#include "lib/devcom.h" + +static LIST_HEAD(devcom_list); + +#define devcom_for_each_component(priv, comp, iter) \ + for (iter = 0; \ + comp = &(priv)->components[iter], iter < MLX5_DEVCOM_NUM_COMPONENTS; \ + iter++) + +struct mlx5_devcom_component { + struct { + void *data; + } device[MLX5_MAX_PORTS]; + + mlx5_devcom_event_handler_t handler; + struct rw_semaphore sem; + bool paired; +}; + +struct mlx5_devcom_list { + struct list_head list; + + struct mlx5_devcom_component components[MLX5_DEVCOM_NUM_COMPONENTS]; + struct mlx5_core_dev *devs[MLX5_MAX_PORTS]; +}; + +struct mlx5_devcom { + struct mlx5_devcom_list *priv; + int idx; +}; + +static struct mlx5_devcom_list *mlx5_devcom_list_alloc(void) +{ + struct mlx5_devcom_component *comp; + struct mlx5_devcom_list *priv; + int i; + + priv = kzalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) + return NULL; + + devcom_for_each_component(priv, comp, i) + init_rwsem(&comp->sem); + + return priv; +} + +static struct mlx5_devcom *mlx5_devcom_alloc(struct mlx5_devcom_list *priv, + u8 idx) +{ + struct mlx5_devcom *devcom; + + devcom = kzalloc(sizeof(*devcom), GFP_KERNEL); + if (!devcom) + return NULL; + + devcom->priv = priv; + devcom->idx = idx; + return devcom; +} + +/* Must be called with intf_mutex held */ +struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev) +{ + struct mlx5_devcom_list *priv = 
NULL, *iter; + struct mlx5_devcom *devcom = NULL; + bool new_priv = false; + u64 sguid0, sguid1; + int idx, i; + + if (!mlx5_core_is_pf(dev)) + return NULL; + + sguid0 = mlx5_query_nic_system_image_guid(dev); + list_for_each_entry(iter, &devcom_list, list) { + struct mlx5_core_dev *tmp_dev = NULL; + + idx = -1; + for (i = 0; i < MLX5_MAX_PORTS; i++) { + if (iter->devs[i]) + tmp_dev = iter->devs[i]; + else + idx = i; + } + + if (idx == -1) + continue; + + sguid1 = mlx5_query_nic_system_image_guid(tmp_dev); + if (sguid0 != sguid1) + continue; + + priv = iter; + break; + } + + if (!priv) { + priv = mlx5_devcom_list_alloc(); + if (!priv) + return ERR_PTR(-ENOMEM); + + idx = 0; + new_priv = true; + } + + priv->devs[idx] = dev; + devcom = mlx5_devcom_alloc(priv, idx); + if (!devcom) { + kfree(priv); + return ERR_PTR(-ENOMEM); + } + + if (new_priv) + list_add(&priv->list, &devcom_list); + + return devcom; +} + +/* Must be called with intf_mutex held */ +void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom) +{ + struct mlx5_devcom_list *priv; + int i; + + if (IS_ERR_OR_NULL(devcom)) + return; + + priv = devcom->priv; + priv->devs[devcom->idx] = NULL; + + kfree(devcom); + + for (i = 0; i < MLX5_MAX_PORTS; i++) + if (priv->devs[i]) + break; + + if (i != MLX5_MAX_PORTS) + return; + + list_del(&priv->list); + kfree(priv); +} + +void mlx5_devcom_register_component(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id, + mlx5_devcom_event_handler_t handler, + void *data) +{ + struct mlx5_devcom_component *comp; + + if (IS_ERR_OR_NULL(devcom)) + return; + + WARN_ON(!data); + + comp = &devcom->priv->components[id]; + down_write(&comp->sem); + comp->handler = handler; + comp->device[devcom->idx].data = data; + up_write(&comp->sem); +} + +void mlx5_devcom_unregister_component(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id) +{ + struct mlx5_devcom_component *comp; + + if (IS_ERR_OR_NULL(devcom)) + return; + + comp = &devcom->priv->components[id]; + down_write(&comp->sem); + comp->device[devcom->idx].data = NULL; + up_write(&comp->sem); +} + +int mlx5_devcom_send_event(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id, + int event, + void *event_data) +{ + struct mlx5_devcom_component *comp; + int err = -ENODEV, i; + + if (IS_ERR_OR_NULL(devcom)) + return err; + + comp = &devcom->priv->components[id]; + down_write(&comp->sem); + for (i = 0; i < MLX5_MAX_PORTS; i++) + if (i != devcom->idx && comp->device[i].data) { + err = comp->handler(event, comp->device[i].data, + event_data); + break; + } + + up_write(&comp->sem); + return err; +} + +void mlx5_devcom_set_paired(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id, + bool paired) +{ + struct mlx5_devcom_component *comp; + + comp = &devcom->priv->components[id]; + WARN_ON(!rwsem_is_locked(&comp->sem)); + + comp->paired = paired; +} + +bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id) +{ + if (IS_ERR_OR_NULL(devcom)) + return false; + + return devcom->priv->components[id].paired; +} + +void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id) +{ + struct mlx5_devcom_component *comp; + int i; + + if (IS_ERR_OR_NULL(devcom)) + return NULL; + + comp = &devcom->priv->components[id]; + down_read(&comp->sem); + if (!comp->paired) { + up_read(&comp->sem); + return NULL; + } + + for (i = 0; i < MLX5_MAX_PORTS; i++) + if (i != devcom->idx) + break; + + return comp->device[i].data; +} + +void mlx5_devcom_release_peer_data(struct mlx5_devcom 
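/*
 * Usage sketch for the devcom API defined above; all functions exist in
 * this file, the my_/MY_ names are placeholders.  Two PFs that report
 * the same system image GUID land in one mlx5_devcom_list, and a
 * component slot is shared between them:
 *
 *	mlx5_devcom_register_component(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
 *				       my_handler, my_data);
 *	mlx5_devcom_send_event(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
 *			       MY_EVENT, my_payload);  // runs the peer handler
 *
 * Peer data access must be bracketed, because the getter returns with
 * the component rwsem read-held and the release path drops it:
 *
 *	void *peer = mlx5_devcom_get_peer_data(devcom,
 *					       MLX5_DEVCOM_ESW_OFFLOADS);
 *	if (peer)
 *		mlx5_devcom_release_peer_data(devcom,
 *					      MLX5_DEVCOM_ESW_OFFLOADS);
 */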
*devcom, + enum mlx5_devcom_components id) +{ + struct mlx5_devcom_component *comp = &devcom->priv->components[id]; + + up_read(&comp->sem); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h new file mode 100644 index 000000000000..939d5bf1581b --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2018 Mellanox Technologies */ + +#ifndef __LIB_MLX5_DEVCOM_H__ +#define __LIB_MLX5_DEVCOM_H__ + +#include + +enum mlx5_devcom_components { + MLX5_DEVCOM_ESW_OFFLOADS, + + MLX5_DEVCOM_NUM_COMPONENTS, +}; + +typedef int (*mlx5_devcom_event_handler_t)(int event, + void *my_data, + void *event_data); + +struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev); +void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom); + +void mlx5_devcom_register_component(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id, + mlx5_devcom_event_handler_t handler, + void *data); +void mlx5_devcom_unregister_component(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id); + +int mlx5_devcom_send_event(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id, + int event, + void *event_data); + +void mlx5_devcom_set_paired(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id, + bool paired); +bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id); + +void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id); +void mlx5_devcom_release_peer_data(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id); + +#endif + diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h new file mode 100644 index 000000000000..c0fb6d72b695 --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2018 Mellanox Technologies */ + +#ifndef __LIB_MLX5_EQ_H__ +#define __LIB_MLX5_EQ_H__ +#include +#include +#include + +#define MLX5_MAX_IRQ_NAME (32) +#define MLX5_EQE_SIZE (sizeof(struct mlx5_eqe)) + +struct mlx5_eq_tasklet { + struct list_head list; + struct list_head process_list; + struct tasklet_struct task; + spinlock_t lock; /* lock completion tasklet list */ +}; + +struct mlx5_cq_table { + spinlock_t lock; /* protect radix tree */ + struct radix_tree_root tree; +}; + +struct mlx5_eq { + struct mlx5_core_dev *dev; + struct mlx5_cq_table cq_table; + __be32 __iomem *doorbell; + u32 cons_index; + struct mlx5_frag_buf buf; + int size; + unsigned int vecidx; + unsigned int irqn; + u8 eqn; + int nent; + struct mlx5_rsc_debug *dbg; +}; + +struct mlx5_eq_comp { + struct mlx5_eq core; /* Must be first */ + struct mlx5_eq_tasklet tasklet_ctx; + struct list_head list; +}; + +static inline struct mlx5_eqe *get_eqe(struct mlx5_eq *eq, u32 entry) +{ + return mlx5_buf_offset(&eq->buf, entry * MLX5_EQE_SIZE); +} + +static inline struct mlx5_eqe *next_eqe_sw(struct mlx5_eq *eq) +{ + struct mlx5_eqe *eqe = get_eqe(eq, eq->cons_index & (eq->nent - 1)); + + return ((eqe->owner & 1) ^ !!(eq->cons_index & eq->nent)) ? NULL : eqe; +} + +static inline void eq_update_ci(struct mlx5_eq *eq, int arm) +{ + __be32 __iomem *addr = eq->doorbell + (arm ? 
0 : 2); + u32 val = (eq->cons_index & 0xffffff) | (eq->eqn << 24); + + __raw_writel((__force u32)cpu_to_be32(val), addr); + /* We still want ordering, just not swabbing, so add a barrier */ + mb(); +} + +int mlx5_eq_table_init(struct mlx5_core_dev *dev); +void mlx5_eq_table_cleanup(struct mlx5_core_dev *dev); +int mlx5_eq_table_create(struct mlx5_core_dev *dev); +void mlx5_eq_table_destroy(struct mlx5_core_dev *dev); + +int mlx5_eq_add_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq); +int mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq); +struct mlx5_eq_comp *mlx5_eqn2comp_eq(struct mlx5_core_dev *dev, int eqn); +struct mlx5_eq *mlx5_get_async_eq(struct mlx5_core_dev *dev); +void mlx5_cq_tasklet_cb(unsigned long data); +struct cpumask *mlx5_eq_comp_cpumask(struct mlx5_core_dev *dev, int ix); + +u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq_comp *eq); +void mlx5_eq_synchronize_async_irq(struct mlx5_core_dev *dev); +void mlx5_eq_synchronize_cmd_irq(struct mlx5_core_dev *dev); + +int mlx5_debug_eq_add(struct mlx5_core_dev *dev, struct mlx5_eq *eq); +void mlx5_debug_eq_remove(struct mlx5_core_dev *dev, struct mlx5_eq *eq); +int mlx5_eq_debugfs_init(struct mlx5_core_dev *dev); +void mlx5_eq_debugfs_cleanup(struct mlx5_core_dev *dev); + +/* This function should only be called after mlx5_cmd_force_teardown_hca */ +void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev); + +#ifdef CONFIG_RFS_ACCEL +struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev); +#endif + +int mlx5_eq_notifier_register(struct mlx5_core_dev *dev, struct mlx5_nb *nb); +int mlx5_eq_notifier_unregister(struct mlx5_core_dev *dev, struct mlx5_nb *nb); + +#endif diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h index 7550b1cc8c6a..397a2847867a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h @@ -33,6 +33,8 @@ #ifndef __LIB_MLX5_H__ #define __LIB_MLX5_H__ +#include "mlx5_core.h" + void mlx5_init_reserved_gids(struct mlx5_core_dev *dev); void mlx5_cleanup_reserved_gids(struct mlx5_core_dev *dev); int mlx5_core_reserve_gids(struct mlx5_core_dev *dev, unsigned int count); @@ -40,4 +42,38 @@ void mlx5_core_unreserve_gids(struct mlx5_core_dev *dev, unsigned int count); int mlx5_core_reserved_gid_alloc(struct mlx5_core_dev *dev, int *gid_index); void mlx5_core_reserved_gid_free(struct mlx5_core_dev *dev, int gid_index); +/* TODO move to lib/events.h */ + +#define PORT_MODULE_EVENT_MODULE_STATUS_MASK 0xF +#define PORT_MODULE_EVENT_ERROR_TYPE_MASK 0xF + +enum port_module_event_status_type { + MLX5_MODULE_STATUS_PLUGGED = 0x1, + MLX5_MODULE_STATUS_UNPLUGGED = 0x2, + MLX5_MODULE_STATUS_ERROR = 0x3, + MLX5_MODULE_STATUS_DISABLED = 0x4, + MLX5_MODULE_STATUS_NUM, +}; + +enum port_module_event_error_type { + MLX5_MODULE_EVENT_ERROR_POWER_BUDGET_EXCEEDED = 0x0, + MLX5_MODULE_EVENT_ERROR_LONG_RANGE_FOR_NON_MLNX = 0x1, + MLX5_MODULE_EVENT_ERROR_BUS_STUCK = 0x2, + MLX5_MODULE_EVENT_ERROR_NO_EEPROM_RETRY_TIMEOUT = 0x3, + MLX5_MODULE_EVENT_ERROR_ENFORCE_PART_NUMBER_LIST = 0x4, + MLX5_MODULE_EVENT_ERROR_UNKNOWN_IDENTIFIER = 0x5, + MLX5_MODULE_EVENT_ERROR_HIGH_TEMPERATURE = 0x6, + MLX5_MODULE_EVENT_ERROR_BAD_CABLE = 0x7, + MLX5_MODULE_EVENT_ERROR_PCIE_POWER_SLOT_EXCEEDED = 0xc, + MLX5_MODULE_EVENT_ERROR_NUM, +}; + +struct mlx5_pme_stats { + u64 status_counters[MLX5_MODULE_STATUS_NUM]; + u64 error_counters[MLX5_MODULE_EVENT_ERROR_NUM]; +}; + +void mlx5_get_pme_stats(struct mlx5_core_dev *dev, 
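/*
 * Decode sketch for the PME masks above, mirroring the handler that
 * this patch removes from port.c further down (eqe here stands for a
 * port-module EQE pointer):
 *
 *	u8 status = eqe->data.port_module.module_status &
 *		    PORT_MODULE_EVENT_MODULE_STATUS_MASK;
 *	u8 error  = eqe->data.port_module.error_type &
 *		    PORT_MODULE_EVENT_ERROR_TYPE_MASK;
 *
 * status values are 1-based (hence the "module_status - 1" indexing of
 * status_counters in port.c), and error is only meaningful when status
 * is MLX5_MODULE_STATUS_ERROR.
 */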
struct mlx5_pme_stats *stats); +int mlx5_notifier_call_chain(struct mlx5_events *events, unsigned int event, void *data); + #endif diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index 28132c7dc05f..be81b319b0dc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -43,7 +43,6 @@ #include #include #include -#include #include #include #include @@ -53,6 +52,7 @@ #endif #include #include "mlx5_core.h" +#include "lib/eq.h" #include "fs_core.h" #include "lib/mpfs.h" #include "eswitch.h" @@ -63,6 +63,7 @@ #include "accel/tls.h" #include "lib/clock.h" #include "lib/vxlan.h" +#include "lib/devcom.h" #include "diag/fw_tracer.h" MODULE_AUTHOR("Eli Cohen "); @@ -319,51 +320,6 @@ static void release_bar(struct pci_dev *pdev) pci_release_regions(pdev); } -static int mlx5_alloc_irq_vectors(struct mlx5_core_dev *dev) -{ - struct mlx5_priv *priv = &dev->priv; - struct mlx5_eq_table *table = &priv->eq_table; - int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ? - MLX5_CAP_GEN(dev, max_num_eqs) : - 1 << MLX5_CAP_GEN(dev, log_max_eq); - int nvec; - int err; - - nvec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() + - MLX5_EQ_VEC_COMP_BASE; - nvec = min_t(int, nvec, num_eqs); - if (nvec <= MLX5_EQ_VEC_COMP_BASE) - return -ENOMEM; - - priv->irq_info = kcalloc(nvec, sizeof(*priv->irq_info), GFP_KERNEL); - if (!priv->irq_info) - return -ENOMEM; - - nvec = pci_alloc_irq_vectors(dev->pdev, - MLX5_EQ_VEC_COMP_BASE + 1, nvec, - PCI_IRQ_MSIX); - if (nvec < 0) { - err = nvec; - goto err_free_irq_info; - } - - table->num_comp_vectors = nvec - MLX5_EQ_VEC_COMP_BASE; - - return 0; - -err_free_irq_info: - kfree(priv->irq_info); - return err; -} - -static void mlx5_free_irq_vectors(struct mlx5_core_dev *dev) -{ - struct mlx5_priv *priv = &dev->priv; - - pci_free_irq_vectors(dev->pdev); - kfree(priv->irq_info); -} - struct mlx5_reg_host_endianness { u8 he; u8 rsvd[15]; @@ -624,188 +580,24 @@ int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id) return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); } -u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev) +u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev, + struct ptp_system_timestamp *sts) { u32 timer_h, timer_h1, timer_l; timer_h = ioread32be(&dev->iseg->internal_timer_h); + ptp_read_system_prets(sts); timer_l = ioread32be(&dev->iseg->internal_timer_l); + ptp_read_system_postts(sts); timer_h1 = ioread32be(&dev->iseg->internal_timer_h); - if (timer_h != timer_h1) /* wrap around */ + if (timer_h != timer_h1) { + /* wrap around */ + ptp_read_system_prets(sts); timer_l = ioread32be(&dev->iseg->internal_timer_l); - - return (u64)timer_l | (u64)timer_h1 << 32; -} - -static int mlx5_irq_set_affinity_hint(struct mlx5_core_dev *mdev, int i) -{ - struct mlx5_priv *priv = &mdev->priv; - int irq = pci_irq_vector(mdev->pdev, MLX5_EQ_VEC_COMP_BASE + i); - - if (!zalloc_cpumask_var(&priv->irq_info[i].mask, GFP_KERNEL)) { - mlx5_core_warn(mdev, "zalloc_cpumask_var failed"); - return -ENOMEM; - } - - cpumask_set_cpu(cpumask_local_spread(i, priv->numa_node), - priv->irq_info[i].mask); - - if (IS_ENABLED(CONFIG_SMP) && - irq_set_affinity_hint(irq, priv->irq_info[i].mask)) - mlx5_core_warn(mdev, "irq_set_affinity_hint failed, irq 0x%.4x", irq); - - return 0; -} - -static void mlx5_irq_clear_affinity_hint(struct mlx5_core_dev *mdev, int i) -{ - struct mlx5_priv *priv = &mdev->priv; - int irq = pci_irq_vector(mdev->pdev, MLX5_EQ_VEC_COMP_BASE + i); - 
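/*
 * The mlx5_read_internal_timer() change above keeps the usual split
 * 64-bit MMIO counter read: sample high, then low, then high again,
 * and redo the low read when the high word moved in between (a wrap
 * during the read).  ptp_read_system_prets()/ptp_read_system_postts()
 * bracket whichever low read wins, which is what gettimex64 needs for
 * tight PHC-to-system-clock correlation.  Generic shape of the idiom,
 * for reference only:
 *
 *	do {
 *		hi = ioread32be(hi_reg);
 *		lo = ioread32be(lo_reg);
 *	} while (hi != ioread32be(hi_reg));
 *	val = ((u64)hi << 32) | lo;
 *
 * (The driver open-codes a single retry instead of a loop, which is
 * enough since the high word advances far slower than one read cycle.)
 */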
- irq_set_affinity_hint(irq, NULL); - free_cpumask_var(priv->irq_info[i].mask); -} - -static int mlx5_irq_set_affinity_hints(struct mlx5_core_dev *mdev) -{ - int err; - int i; - - for (i = 0; i < mdev->priv.eq_table.num_comp_vectors; i++) { - err = mlx5_irq_set_affinity_hint(mdev, i); - if (err) - goto err_out; - } - - return 0; - -err_out: - for (i--; i >= 0; i--) - mlx5_irq_clear_affinity_hint(mdev, i); - - return err; -} - -static void mlx5_irq_clear_affinity_hints(struct mlx5_core_dev *mdev) -{ - int i; - - for (i = 0; i < mdev->priv.eq_table.num_comp_vectors; i++) - mlx5_irq_clear_affinity_hint(mdev, i); -} - -int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, - unsigned int *irqn) -{ - struct mlx5_eq_table *table = &dev->priv.eq_table; - struct mlx5_eq *eq, *n; - int err = -ENOENT; - - spin_lock(&table->lock); - list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { - if (eq->index == vector) { - *eqn = eq->eqn; - *irqn = eq->irqn; - err = 0; - break; - } + ptp_read_system_postts(sts); } - spin_unlock(&table->lock); - - return err; -} -EXPORT_SYMBOL(mlx5_vector2eqn); -struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn) -{ - struct mlx5_eq_table *table = &dev->priv.eq_table; - struct mlx5_eq *eq; - - spin_lock(&table->lock); - list_for_each_entry(eq, &table->comp_eqs_list, list) - if (eq->eqn == eqn) { - spin_unlock(&table->lock); - return eq; - } - - spin_unlock(&table->lock); - - return ERR_PTR(-ENOENT); -} - -static void free_comp_eqs(struct mlx5_core_dev *dev) -{ - struct mlx5_eq_table *table = &dev->priv.eq_table; - struct mlx5_eq *eq, *n; - -#ifdef CONFIG_RFS_ACCEL - if (dev->rmap) { - free_irq_cpu_rmap(dev->rmap); - dev->rmap = NULL; - } -#endif - spin_lock(&table->lock); - list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { - list_del(&eq->list); - spin_unlock(&table->lock); - if (mlx5_destroy_unmap_eq(dev, eq)) - mlx5_core_warn(dev, "failed to destroy EQ 0x%x\n", - eq->eqn); - kfree(eq); - spin_lock(&table->lock); - } - spin_unlock(&table->lock); -} - -static int alloc_comp_eqs(struct mlx5_core_dev *dev) -{ - struct mlx5_eq_table *table = &dev->priv.eq_table; - char name[MLX5_MAX_IRQ_NAME]; - struct mlx5_eq *eq; - int ncomp_vec; - int nent; - int err; - int i; - - INIT_LIST_HEAD(&table->comp_eqs_list); - ncomp_vec = table->num_comp_vectors; - nent = MLX5_COMP_EQ_SIZE; -#ifdef CONFIG_RFS_ACCEL - dev->rmap = alloc_irq_cpu_rmap(ncomp_vec); - if (!dev->rmap) - return -ENOMEM; -#endif - for (i = 0; i < ncomp_vec; i++) { - eq = kzalloc(sizeof(*eq), GFP_KERNEL); - if (!eq) { - err = -ENOMEM; - goto clean; - } - -#ifdef CONFIG_RFS_ACCEL - irq_cpu_rmap_add(dev->rmap, pci_irq_vector(dev->pdev, - MLX5_EQ_VEC_COMP_BASE + i)); -#endif - snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d", i); - err = mlx5_create_map_eq(dev, eq, - i + MLX5_EQ_VEC_COMP_BASE, nent, 0, - name, MLX5_EQ_TYPE_COMP); - if (err) { - kfree(eq); - goto clean; - } - mlx5_core_dbg(dev, "allocated completion EQN %d\n", eq->eqn); - eq->index = i; - spin_lock(&table->lock); - list_add_tail(&eq->list, &table->comp_eqs_list); - spin_unlock(&table->lock); - } - - return 0; - -clean: - free_comp_eqs(dev); - return err; + return (u64)timer_l | (u64)timer_h1 << 32; } static int mlx5_core_set_issi(struct mlx5_core_dev *dev) @@ -877,11 +669,9 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct mlx5_priv *priv) priv->numa_node = dev_to_node(&dev->pdev->dev); - priv->dbg_root = debugfs_create_dir(dev_name(&pdev->dev), mlx5_debugfs_root); - if (!priv->dbg_root) { - 
dev_err(&pdev->dev, "Cannot create debugfs dir, aborting\n"); - return -ENOMEM; - } + if (mlx5_debugfs_root) + priv->dbg_root = + debugfs_create_dir(pci_name(pdev), mlx5_debugfs_root); err = mlx5_pci_enable_device(dev); if (err) { @@ -938,28 +728,37 @@ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv) struct pci_dev *pdev = dev->pdev; int err; + priv->devcom = mlx5_devcom_register_device(dev); + if (IS_ERR(priv->devcom)) + dev_err(&pdev->dev, "failed to register with devcom (0x%p)\n", + priv->devcom); + err = mlx5_query_board_id(dev); if (err) { dev_err(&pdev->dev, "query board id failed\n"); - goto out; + goto err_devcom; } - err = mlx5_eq_init(dev); + err = mlx5_eq_table_init(dev); if (err) { dev_err(&pdev->dev, "failed to initialize eq\n"); - goto out; + goto err_devcom; + } + + err = mlx5_events_init(dev); + if (err) { + dev_err(&pdev->dev, "failed to initialize events\n"); + goto err_eq_cleanup; } err = mlx5_cq_debugfs_init(dev); if (err) { dev_err(&pdev->dev, "failed to initialize cq debugfs\n"); - goto err_eq_cleanup; + goto err_events_cleanup; } mlx5_init_qp_table(dev); - mlx5_init_srq_table(dev); - mlx5_init_mkey_table(dev); mlx5_init_reserved_gids(dev); @@ -1013,14 +812,15 @@ err_rl_cleanup: err_tables_cleanup: mlx5_vxlan_destroy(dev->vxlan); mlx5_cleanup_mkey_table(dev); - mlx5_cleanup_srq_table(dev); mlx5_cleanup_qp_table(dev); mlx5_cq_debugfs_cleanup(dev); - +err_events_cleanup: + mlx5_events_cleanup(dev); err_eq_cleanup: - mlx5_eq_cleanup(dev); + mlx5_eq_table_cleanup(dev); +err_devcom: + mlx5_devcom_unregister_device(dev->priv.devcom); -out: return err; } @@ -1036,10 +836,11 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev) mlx5_cleanup_clock(dev); mlx5_cleanup_reserved_gids(dev); mlx5_cleanup_mkey_table(dev); - mlx5_cleanup_srq_table(dev); mlx5_cleanup_qp_table(dev); mlx5_cq_debugfs_cleanup(dev); - mlx5_eq_cleanup(dev); + mlx5_events_cleanup(dev); + mlx5_eq_table_cleanup(dev); + mlx5_devcom_unregister_device(dev->priv.devcom); } static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv, @@ -1131,16 +932,10 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv, goto reclaim_boot_pages; } - err = mlx5_pagealloc_start(dev); - if (err) { - dev_err(&pdev->dev, "mlx5_pagealloc_start failed\n"); - goto reclaim_boot_pages; - } - err = mlx5_cmd_init_hca(dev, sw_owner_id); if (err) { dev_err(&pdev->dev, "init hca failed\n"); - goto err_pagealloc_stop; + goto reclaim_boot_pages; } mlx5_set_driver_version(dev); @@ -1161,23 +956,20 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv, } } - err = mlx5_alloc_irq_vectors(dev); - if (err) { - dev_err(&pdev->dev, "alloc irq vectors failed\n"); - goto err_cleanup_once; - } - dev->priv.uar = mlx5_get_uars_page(dev); if (IS_ERR(dev->priv.uar)) { dev_err(&pdev->dev, "Failed allocating uar, aborting\n"); err = PTR_ERR(dev->priv.uar); - goto err_disable_msix; + goto err_get_uars; } - err = mlx5_start_eqs(dev); + mlx5_events_start(dev); + mlx5_pagealloc_start(dev); + + err = mlx5_eq_table_create(dev); if (err) { - dev_err(&pdev->dev, "Failed to start pages and async EQs\n"); - goto err_put_uars; + dev_err(&pdev->dev, "Failed to create EQs\n"); + goto err_eq_table; } err = mlx5_fw_tracer_init(dev->tracer); @@ -1186,18 +978,6 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv, goto err_fw_tracer; } - err = alloc_comp_eqs(dev); - if (err) { - dev_err(&pdev->dev, "Failed to alloc completion EQs\n"); - goto err_comp_eqs; - } 
- - err = mlx5_irq_set_affinity_hints(dev); - if (err) { - dev_err(&pdev->dev, "Failed to alloc affinity hint cpumask\n"); - goto err_affinity_hints; - } - err = mlx5_fpga_device_start(dev); if (err) { dev_err(&pdev->dev, "fpga device start failed %d\n", err); @@ -1266,24 +1046,17 @@ err_ipsec_start: mlx5_fpga_device_stop(dev); err_fpga_start: - mlx5_irq_clear_affinity_hints(dev); - -err_affinity_hints: - free_comp_eqs(dev); - -err_comp_eqs: mlx5_fw_tracer_cleanup(dev->tracer); err_fw_tracer: - mlx5_stop_eqs(dev); + mlx5_eq_table_destroy(dev); -err_put_uars: +err_eq_table: + mlx5_pagealloc_stop(dev); + mlx5_events_stop(dev); mlx5_put_uars_page(dev, priv->uar); -err_disable_msix: - mlx5_free_irq_vectors(dev); - -err_cleanup_once: +err_get_uars: if (boot) mlx5_cleanup_once(dev); @@ -1294,9 +1067,6 @@ err_stop_poll: goto out_err; } -err_pagealloc_stop: - mlx5_pagealloc_stop(dev); - reclaim_boot_pages: mlx5_reclaim_startup_pages(dev); @@ -1340,21 +1110,20 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv, mlx5_accel_ipsec_cleanup(dev); mlx5_accel_tls_cleanup(dev); mlx5_fpga_device_stop(dev); - mlx5_irq_clear_affinity_hints(dev); - free_comp_eqs(dev); mlx5_fw_tracer_cleanup(dev->tracer); - mlx5_stop_eqs(dev); + mlx5_eq_table_destroy(dev); + mlx5_pagealloc_stop(dev); + mlx5_events_stop(dev); mlx5_put_uars_page(dev, priv->uar); - mlx5_free_irq_vectors(dev); if (cleanup) mlx5_cleanup_once(dev); mlx5_stop_health_poll(dev, cleanup); + err = mlx5_cmd_teardown_hca(dev); if (err) { dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n"); goto out; } - mlx5_pagealloc_stop(dev); mlx5_reclaim_startup_pages(dev); mlx5_core_disable_hca(dev, 0); mlx5_cmd_cleanup(dev); @@ -1364,12 +1133,6 @@ out: return err; } -struct mlx5_core_event_handler { - void (*event)(struct mlx5_core_dev *dev, - enum mlx5_dev_event event, - void *data); -}; - static const struct devlink_ops mlx5_devlink_ops = { #ifdef CONFIG_MLX5_ESWITCH .eswitch_mode_set = mlx5_devlink_eswitch_mode_set, @@ -1403,7 +1166,6 @@ static int init_one(struct pci_dev *pdev, pci_set_drvdata(pdev, dev); dev->pdev = pdev; - dev->event = mlx5_core_event; dev->profile = &profile[prof_sel]; INIT_LIST_HEAD(&priv->ctx_list); @@ -1411,17 +1173,6 @@ static int init_one(struct pci_dev *pdev, mutex_init(&dev->pci_status_mutex); mutex_init(&dev->intf_state_mutex); - INIT_LIST_HEAD(&priv->waiting_events_list); - priv->is_accum_events = false; - -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - err = init_srcu_struct(&priv->pfault_srcu); - if (err) { - dev_err(&pdev->dev, "init_srcu_struct failed with error code %d\n", - err); - goto clean_dev; - } -#endif mutex_init(&priv->bfregs.reg_head.lock); mutex_init(&priv->bfregs.wc_head.lock); INIT_LIST_HEAD(&priv->bfregs.reg_head.list); @@ -1430,7 +1181,7 @@ static int init_one(struct pci_dev *pdev, err = mlx5_pci_init(dev, priv); if (err) { dev_err(&pdev->dev, "mlx5_pci_init failed with error code %d\n", err); - goto clean_srcu; + goto clean_dev; } err = mlx5_health_init(dev); @@ -1439,12 +1190,14 @@ static int init_one(struct pci_dev *pdev, goto close_pci; } - mlx5_pagealloc_init(dev); + err = mlx5_pagealloc_init(dev); + if (err) + goto err_pagealloc_init; err = mlx5_load_one(dev, priv, true); if (err) { dev_err(&pdev->dev, "mlx5_load_one failed with error code %d\n", err); - goto clean_health; + goto err_load_one; } request_module_nowait(MLX5_IB_MOD); @@ -1458,16 +1211,13 @@ static int init_one(struct pci_dev *pdev, clean_load: mlx5_unload_one(dev, priv, true); -clean_health: +err_load_one: 
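/*
 * The relabeled unwind path keeps the usual kernel goto idiom: each
 * error label undoes exactly the steps that completed before the
 * failure, in reverse order.  With the page-allocator workqueue now
 * created in mlx5_pagealloc_init(), init_one() setup reads:
 *
 *	mlx5_pci_init() -> mlx5_health_init() -> mlx5_pagealloc_init()
 *	                -> mlx5_load_one()
 *
 * so a mlx5_load_one() failure enters here at err_load_one, cleans the
 * page allocator, then falls through err_pagealloc_init (health) and
 * close_pci on the way out.
 */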
mlx5_pagealloc_cleanup(dev); +err_pagealloc_init: mlx5_health_cleanup(dev); close_pci: mlx5_pci_close(dev, priv); -clean_srcu: -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - cleanup_srcu_struct(&priv->pfault_srcu); clean_dev: -#endif devlink_free(devlink); return err; @@ -1491,9 +1241,6 @@ static void remove_one(struct pci_dev *pdev) mlx5_pagealloc_cleanup(dev); mlx5_health_cleanup(dev); mlx5_pci_close(dev, priv); -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - cleanup_srcu_struct(&priv->pfault_srcu); -#endif devlink_free(devlink); } @@ -1637,7 +1384,6 @@ succeed: * kexec. There is no need to cleanup the mlx5_core software * contexts. */ - mlx5_irq_clear_affinity_hints(dev); mlx5_core_eq_free_irqs(dev); return 0; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h index 0594d0961cb3..c68dcea5985b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h @@ -38,6 +38,7 @@ #include #include #include +#include #include #include @@ -78,6 +79,11 @@ do { \ __func__, __LINE__, current->pid, \ ##__VA_ARGS__) +#define mlx5_core_warn_once(__dev, format, ...) \ + dev_warn_once(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \ + __func__, __LINE__, current->pid, \ + ##__VA_ARGS__) + #define mlx5_core_info(__dev, format, ...) \ dev_info(&(__dev)->pdev->dev, format, ##__VA_ARGS__) @@ -97,12 +103,6 @@ int mlx5_cmd_init_hca(struct mlx5_core_dev *dev, uint32_t *sw_owner_id); int mlx5_cmd_teardown_hca(struct mlx5_core_dev *dev); int mlx5_cmd_force_teardown_hca(struct mlx5_core_dev *dev); int mlx5_cmd_fast_teardown_hca(struct mlx5_core_dev *dev); - -void mlx5_core_event(struct mlx5_core_dev *dev, enum mlx5_dev_event event, - unsigned long param); -void mlx5_core_page_fault(struct mlx5_core_dev *dev, - struct mlx5_pagefault *pfault); -void mlx5_port_module_event(struct mlx5_core_dev *dev, struct mlx5_eqe *eqe); void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force); void mlx5_disable_device(struct mlx5_core_dev *dev); void mlx5_recover_device(struct mlx5_core_dev *dev); @@ -122,30 +122,10 @@ int mlx5_modify_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy, int mlx5_destroy_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy, u32 element_id); int mlx5_wait_for_vf_pages(struct mlx5_core_dev *dev); -u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev); - -int mlx5_eq_init(struct mlx5_core_dev *dev); -void mlx5_eq_cleanup(struct mlx5_core_dev *dev); -int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx, - int nent, u64 mask, const char *name, - enum mlx5_eq_type type); -int mlx5_destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq); -int mlx5_eq_add_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq); -int mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq); -int mlx5_core_eq_query(struct mlx5_core_dev *dev, struct mlx5_eq *eq, - u32 *out, int outlen); -int mlx5_start_eqs(struct mlx5_core_dev *dev); -void mlx5_stop_eqs(struct mlx5_core_dev *dev); -/* This function should only be called after mlx5_cmd_force_teardown_hca */ -void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev); -struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn); -u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq *eq); -void mlx5_cq_tasklet_cb(unsigned long data); -void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced); -int mlx5_debug_eq_add(struct mlx5_core_dev *dev, struct mlx5_eq *eq); -void 
mlx5_debug_eq_remove(struct mlx5_core_dev *dev, struct mlx5_eq *eq); -int mlx5_eq_debugfs_init(struct mlx5_core_dev *dev); -void mlx5_eq_debugfs_cleanup(struct mlx5_core_dev *dev); +u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev, + struct ptp_system_timestamp *sts); + +void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev); int mlx5_cq_debugfs_init(struct mlx5_core_dev *dev); void mlx5_cq_debugfs_cleanup(struct mlx5_core_dev *dev); @@ -159,6 +139,11 @@ int mlx5_query_qcam_reg(struct mlx5_core_dev *mdev, u32 *qcam, void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev); void mlx5_lag_remove(struct mlx5_core_dev *dev); +int mlx5_events_init(struct mlx5_core_dev *dev); +void mlx5_events_cleanup(struct mlx5_core_dev *dev); +void mlx5_events_start(struct mlx5_core_dev *dev); +void mlx5_events_stop(struct mlx5_core_dev *dev); + void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv); void mlx5_remove_device(struct mlx5_interface *intf, struct mlx5_priv *priv); void mlx5_attach_device(struct mlx5_core_dev *dev); @@ -202,10 +187,8 @@ static inline int mlx5_lag_is_lacp_owner(struct mlx5_core_dev *dev) MLX5_CAP_GEN(dev, lag_master); } -int mlx5_lag_allow(struct mlx5_core_dev *dev); -int mlx5_lag_forbid(struct mlx5_core_dev *dev); - void mlx5_reload_interface(struct mlx5_core_dev *mdev, int protocol); +void mlx5_lag_update(struct mlx5_core_dev *dev); enum { MLX5_NIC_IFC_FULL = 0, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c index e36d3e3675f9..a83b517b0714 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c @@ -37,6 +37,7 @@ #include #include #include "mlx5_core.h" +#include "lib/eq.h" enum { MLX5_PAGES_CANT_GIVE = 0, @@ -433,15 +434,28 @@ static void pages_work_handler(struct work_struct *work) kfree(req); } -void mlx5_core_req_pages_handler(struct mlx5_core_dev *dev, u16 func_id, - s32 npages) +static int req_pages_handler(struct notifier_block *nb, + unsigned long type, void *data) { struct mlx5_pages_req *req; - + struct mlx5_core_dev *dev; + struct mlx5_priv *priv; + struct mlx5_eqe *eqe; + u16 func_id; + s32 npages; + + priv = mlx5_nb_cof(nb, struct mlx5_priv, pg_nb); + dev = container_of(priv, struct mlx5_core_dev, priv); + eqe = data; + + func_id = be16_to_cpu(eqe->data.req_pages.func_id); + npages = be32_to_cpu(eqe->data.req_pages.num_pages); + mlx5_core_dbg(dev, "page request for func 0x%x, npages %d\n", + func_id, npages); req = kzalloc(sizeof(*req), GFP_ATOMIC); if (!req) { mlx5_core_warn(dev, "failed to allocate pages request\n"); - return; + return NOTIFY_DONE; } req->dev = dev; @@ -449,6 +463,7 @@ void mlx5_core_req_pages_handler(struct mlx5_core_dev *dev, u16 func_id, req->npages = npages; INIT_WORK(&req->work, pages_work_handler); queue_work(dev->priv.pg_wq, &req->work); + return NOTIFY_OK; } int mlx5_satisfy_startup_pages(struct mlx5_core_dev *dev, int boot) @@ -524,29 +539,32 @@ int mlx5_reclaim_startup_pages(struct mlx5_core_dev *dev) return 0; } -void mlx5_pagealloc_init(struct mlx5_core_dev *dev) +int mlx5_pagealloc_init(struct mlx5_core_dev *dev) { dev->priv.page_root = RB_ROOT; INIT_LIST_HEAD(&dev->priv.free_list); + dev->priv.pg_wq = create_singlethread_workqueue("mlx5_page_allocator"); + if (!dev->priv.pg_wq) + return -ENOMEM; + + return 0; } void mlx5_pagealloc_cleanup(struct mlx5_core_dev *dev) { - /* nothing */ + destroy_workqueue(dev->priv.pg_wq); } -int 
mlx5_pagealloc_start(struct mlx5_core_dev *dev) +void mlx5_pagealloc_start(struct mlx5_core_dev *dev) { - dev->priv.pg_wq = create_singlethread_workqueue("mlx5_page_allocator"); - if (!dev->priv.pg_wq) - return -ENOMEM; - - return 0; + MLX5_NB_INIT(&dev->priv.pg_nb, req_pages_handler, PAGE_REQUEST); + mlx5_eq_notifier_register(dev, &dev->priv.pg_nb); } void mlx5_pagealloc_stop(struct mlx5_core_dev *dev) { - destroy_workqueue(dev->priv.pg_wq); + mlx5_eq_notifier_unregister(dev, &dev->priv.pg_nb); + flush_workqueue(dev->priv.pg_wq); } int mlx5_wait_for_vf_pages(struct mlx5_core_dev *dev) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c index 31a9cbd85689..2b82f35f4c35 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/port.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c @@ -915,63 +915,6 @@ void mlx5_query_port_fcs(struct mlx5_core_dev *mdev, bool *supported, *enabled = !!(MLX5_GET(pcmr_reg, out, fcs_chk)); } -static const char *mlx5_pme_status[MLX5_MODULE_STATUS_NUM] = { - "Cable plugged", /* MLX5_MODULE_STATUS_PLUGGED = 0x1 */ - "Cable unplugged", /* MLX5_MODULE_STATUS_UNPLUGGED = 0x2 */ - "Cable error", /* MLX5_MODULE_STATUS_ERROR = 0x3 */ -}; - -static const char *mlx5_pme_error[MLX5_MODULE_EVENT_ERROR_NUM] = { - "Power budget exceeded", - "Long Range for non MLNX cable", - "Bus stuck(I2C or data shorted)", - "No EEPROM/retry timeout", - "Enforce part number list", - "Unknown identifier", - "High Temperature", - "Bad or shorted cable/module", - "Unknown status", -}; - -void mlx5_port_module_event(struct mlx5_core_dev *dev, struct mlx5_eqe *eqe) -{ - enum port_module_event_status_type module_status; - enum port_module_event_error_type error_type; - struct mlx5_eqe_port_module *module_event_eqe; - struct mlx5_priv *priv = &dev->priv; - u8 module_num; - - module_event_eqe = &eqe->data.port_module; - module_num = module_event_eqe->module; - module_status = module_event_eqe->module_status & - PORT_MODULE_EVENT_MODULE_STATUS_MASK; - error_type = module_event_eqe->error_type & - PORT_MODULE_EVENT_ERROR_TYPE_MASK; - - if (module_status < MLX5_MODULE_STATUS_ERROR) { - priv->pme_stats.status_counters[module_status - 1]++; - } else if (module_status == MLX5_MODULE_STATUS_ERROR) { - if (error_type >= MLX5_MODULE_EVENT_ERROR_UNKNOWN) - /* Unknown error type */ - error_type = MLX5_MODULE_EVENT_ERROR_UNKNOWN; - priv->pme_stats.error_counters[error_type]++; - } - - if (!printk_ratelimit()) - return; - - if (module_status < MLX5_MODULE_STATUS_ERROR) - mlx5_core_info(dev, - "Port module event: module %u, %s\n", - module_num, mlx5_pme_status[module_status - 1]); - - else if (module_status == MLX5_MODULE_STATUS_ERROR) - mlx5_core_info(dev, - "Port module event[error]: module %u, %s, %s\n", - module_num, mlx5_pme_status[module_status - 1], - mlx5_pme_error[error_type]); -} - int mlx5_query_mtpps(struct mlx5_core_dev *mdev, u32 *mtpps, u32 mtpps_size) { u32 in[MLX5_ST_SZ_DW(mtpps_reg)] = {0}; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c index 91b8139a388d..388f205a497f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c @@ -38,11 +38,11 @@ #include #include "mlx5_core.h" +#include "lib/eq.h" -static struct mlx5_core_rsc_common *mlx5_get_rsc(struct mlx5_core_dev *dev, - u32 rsn) +static struct mlx5_core_rsc_common * +mlx5_get_rsc(struct mlx5_qp_table *table, u32 rsn) { - struct mlx5_qp_table *table = &dev->priv.qp_table; struct 
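/*
 * Context for the rsn handling below: resource serial numbers pack the
 * resource type above the user index,
 *
 *	rsn = res_num | (res_type << MLX5_USER_INDEX_LEN);
 *
 * so the DCT-drained case builds rsn from the dctn plus MLX5_RES_DCT,
 * the QP/SRQ cases take the type straight from the EQE, and
 * mlx5_get_rsc() looks the value up in the table's radix tree.
 */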
mlx5_core_rsc_common *common; spin_lock(&table->lock); @@ -53,11 +53,6 @@ static struct mlx5_core_rsc_common *mlx5_get_rsc(struct mlx5_core_dev *dev, spin_unlock(&table->lock); - if (!common) { - mlx5_core_warn(dev, "Async event for bogus resource 0x%x\n", - rsn); - return NULL; - } return common; } @@ -120,19 +115,57 @@ static bool is_event_type_allowed(int rsc_type, int event_type) } } -void mlx5_rsc_event(struct mlx5_core_dev *dev, u32 rsn, int event_type) +static int rsc_event_notifier(struct notifier_block *nb, + unsigned long type, void *data) { - struct mlx5_core_rsc_common *common = mlx5_get_rsc(dev, rsn); + struct mlx5_core_rsc_common *common; + struct mlx5_qp_table *table; + struct mlx5_core_dev *dev; struct mlx5_core_dct *dct; + u8 event_type = (u8)type; struct mlx5_core_qp *qp; + struct mlx5_priv *priv; + struct mlx5_eqe *eqe; + u32 rsn; + + switch (event_type) { + case MLX5_EVENT_TYPE_DCT_DRAINED: + eqe = data; + rsn = be32_to_cpu(eqe->data.dct.dctn) & 0xffffff; + rsn |= (MLX5_RES_DCT << MLX5_USER_INDEX_LEN); + break; + case MLX5_EVENT_TYPE_PATH_MIG: + case MLX5_EVENT_TYPE_COMM_EST: + case MLX5_EVENT_TYPE_SQ_DRAINED: + case MLX5_EVENT_TYPE_SRQ_LAST_WQE: + case MLX5_EVENT_TYPE_WQ_CATAS_ERROR: + case MLX5_EVENT_TYPE_PATH_MIG_FAILED: + case MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR: + case MLX5_EVENT_TYPE_WQ_ACCESS_ERROR: + eqe = data; + rsn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff; + rsn |= (eqe->data.qp_srq.type << MLX5_USER_INDEX_LEN); + break; + default: + return NOTIFY_DONE; + } + + table = container_of(nb, struct mlx5_qp_table, nb); + priv = container_of(table, struct mlx5_priv, qp_table); + dev = container_of(priv, struct mlx5_core_dev, priv); + + mlx5_core_dbg(dev, "event (%d) arrived on resource 0x%x\n", eqe->type, rsn); - if (!common) - return; + common = mlx5_get_rsc(table, rsn); + if (!common) { + mlx5_core_warn(dev, "Async event for bogus resource 0x%x\n", rsn); + return NOTIFY_OK; + } if (!is_event_type_allowed((rsn >> MLX5_USER_INDEX_LEN), event_type)) { mlx5_core_warn(dev, "event 0x%.2x is not allowed on resource 0x%.8x\n", event_type, rsn); - return; + goto out; } switch (common->res) { @@ -150,8 +183,10 @@ void mlx5_rsc_event(struct mlx5_core_dev *dev, u32 rsn, int event_type) default: mlx5_core_warn(dev, "invalid resource type for 0x%x\n", rsn); } - +out: mlx5_core_put_rsc(common); + + return NOTIFY_OK; } static int create_resource_common(struct mlx5_core_dev *dev, @@ -487,10 +522,16 @@ void mlx5_init_qp_table(struct mlx5_core_dev *dev) spin_lock_init(&table->lock); INIT_RADIX_TREE(&table->tree, GFP_ATOMIC); mlx5_qp_debugfs_init(dev); + + table->nb.notifier_call = rsc_event_notifier; + mlx5_notifier_register(dev, &table->nb); } void mlx5_cleanup_qp_table(struct mlx5_core_dev *dev) { + struct mlx5_qp_table *table = &dev->priv.qp_table; + + mlx5_notifier_unregister(dev, &table->nb); mlx5_qp_debugfs_cleanup(dev); } @@ -670,3 +711,20 @@ int mlx5_core_query_q_counter(struct mlx5_core_dev *dev, u16 counter_id, return mlx5_cmd_exec(dev, in, sizeof(in), out, out_size); } EXPORT_SYMBOL_GPL(mlx5_core_query_q_counter); + +struct mlx5_core_rsc_common *mlx5_core_res_hold(struct mlx5_core_dev *dev, + int res_num, + enum mlx5_res_type res_type) +{ + u32 rsn = res_num | (res_type << MLX5_USER_INDEX_LEN); + struct mlx5_qp_table *table = &dev->priv.qp_table; + + return mlx5_get_rsc(table, rsn); +} +EXPORT_SYMBOL_GPL(mlx5_core_res_hold); + +void mlx5_core_res_put(struct mlx5_core_rsc_common *res) +{ + mlx5_core_put_rsc(res); +} +EXPORT_SYMBOL_GPL(mlx5_core_res_put); diff 
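The qp.c changes above stop dispatching QP/SRQ/DCT async events through a direct mlx5_rsc_event() call and instead decode them in rsc_event_notifier(), where the resource type is packed into the bits above the resource number ("RSN"). A minimal userspace sketch of that packing and unpacking follows; the 24-bit width of MLX5_USER_INDEX_LEN is inferred from the 0xffffff masks in the hunk, and the type values here are hypothetical, not the driver's enum mlx5_res_type constants:

#include <stdint.h>
#include <stdio.h>

#define USER_INDEX_LEN 24 /* assumed stand-in for MLX5_USER_INDEX_LEN */

enum rsc_type { RES_QP = 1, RES_SRQ = 2, RES_DCT = 3 }; /* hypothetical values */

static uint32_t rsn_pack(uint32_t res_num, enum rsc_type type)
{
	/* low 24 bits carry the resource number, the type sits above them */
	return (res_num & 0xffffff) | ((uint32_t)type << USER_INDEX_LEN);
}

int main(void)
{
	uint32_t rsn = rsn_pack(0x1234, RES_DCT);

	/* the notifier recovers both halves the same way rsc_event_notifier()
	 * and is_event_type_allowed() do */
	printf("num=0x%x type=0x%x\n", rsn & 0xffffff, rsn >> USER_INDEX_LEN);
	return 0;
}

mlx5_core_res_hold() builds the same packed RSN (res_num | (res_type << MLX5_USER_INDEX_LEN)) before looking up the shared QP table, which is why one radix tree can serve several resource types.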
--git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c index a0674962f02c..6e178030d8fb 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c @@ -216,20 +216,10 @@ int mlx5_core_sriov_configure(struct pci_dev *pdev, int num_vfs) if (!mlx5_core_is_pf(dev)) return -EPERM; - if (num_vfs) { - int ret; - - ret = mlx5_lag_forbid(dev); - if (ret && (ret != -ENODEV)) - return ret; - } - - if (num_vfs) { + if (num_vfs) err = mlx5_sriov_enable(pdev, num_vfs); - } else { + else mlx5_sriov_disable(pdev); - mlx5_lag_allow(dev); - } return err ? err : num_vfs; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c index a1ee9a8a769e..c4d4b76096dc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c @@ -258,115 +258,6 @@ void mlx5_core_destroy_tis(struct mlx5_core_dev *dev, u32 tisn) } EXPORT_SYMBOL(mlx5_core_destroy_tis); -int mlx5_core_create_rmp(struct mlx5_core_dev *dev, u32 *in, int inlen, - u32 *rmpn) -{ - u32 out[MLX5_ST_SZ_DW(create_rmp_out)] = {0}; - int err; - - MLX5_SET(create_rmp_in, in, opcode, MLX5_CMD_OP_CREATE_RMP); - err = mlx5_cmd_exec(dev, in, inlen, out, sizeof(out)); - if (!err) - *rmpn = MLX5_GET(create_rmp_out, out, rmpn); - - return err; -} - -int mlx5_core_modify_rmp(struct mlx5_core_dev *dev, u32 *in, int inlen) -{ - u32 out[MLX5_ST_SZ_DW(modify_rmp_out)] = {0}; - - MLX5_SET(modify_rmp_in, in, opcode, MLX5_CMD_OP_MODIFY_RMP); - return mlx5_cmd_exec(dev, in, inlen, out, sizeof(out)); -} - -int mlx5_core_destroy_rmp(struct mlx5_core_dev *dev, u32 rmpn) -{ - u32 in[MLX5_ST_SZ_DW(destroy_rmp_in)] = {0}; - u32 out[MLX5_ST_SZ_DW(destroy_rmp_out)] = {0}; - - MLX5_SET(destroy_rmp_in, in, opcode, MLX5_CMD_OP_DESTROY_RMP); - MLX5_SET(destroy_rmp_in, in, rmpn, rmpn); - return mlx5_cmd_exec(dev, in, sizeof(in), out, - sizeof(out)); -} - -int mlx5_core_query_rmp(struct mlx5_core_dev *dev, u32 rmpn, u32 *out) -{ - u32 in[MLX5_ST_SZ_DW(query_rmp_in)] = {0}; - int outlen = MLX5_ST_SZ_BYTES(query_rmp_out); - - MLX5_SET(query_rmp_in, in, opcode, MLX5_CMD_OP_QUERY_RMP); - MLX5_SET(query_rmp_in, in, rmpn, rmpn); - return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen); -} - -int mlx5_core_arm_rmp(struct mlx5_core_dev *dev, u32 rmpn, u16 lwm) -{ - void *in; - void *rmpc; - void *wq; - void *bitmask; - int err; - - in = kvzalloc(MLX5_ST_SZ_BYTES(modify_rmp_in), GFP_KERNEL); - if (!in) - return -ENOMEM; - - rmpc = MLX5_ADDR_OF(modify_rmp_in, in, ctx); - bitmask = MLX5_ADDR_OF(modify_rmp_in, in, bitmask); - wq = MLX5_ADDR_OF(rmpc, rmpc, wq); - - MLX5_SET(modify_rmp_in, in, rmp_state, MLX5_RMPC_STATE_RDY); - MLX5_SET(modify_rmp_in, in, rmpn, rmpn); - MLX5_SET(wq, wq, lwm, lwm); - MLX5_SET(rmp_bitmask, bitmask, lwm, 1); - MLX5_SET(rmpc, rmpc, state, MLX5_RMPC_STATE_RDY); - - err = mlx5_core_modify_rmp(dev, in, MLX5_ST_SZ_BYTES(modify_rmp_in)); - - kvfree(in); - - return err; -} - -int mlx5_core_create_xsrq(struct mlx5_core_dev *dev, u32 *in, int inlen, - u32 *xsrqn) -{ - u32 out[MLX5_ST_SZ_DW(create_xrc_srq_out)] = {0}; - int err; - - MLX5_SET(create_xrc_srq_in, in, opcode, MLX5_CMD_OP_CREATE_XRC_SRQ); - err = mlx5_cmd_exec(dev, in, inlen, out, sizeof(out)); - if (!err) - *xsrqn = MLX5_GET(create_xrc_srq_out, out, xrc_srqn); - - return err; -} - -int mlx5_core_destroy_xsrq(struct mlx5_core_dev *dev, u32 xsrqn) -{ - u32 in[MLX5_ST_SZ_DW(destroy_xrc_srq_in)] = {0}; - 
u32 out[MLX5_ST_SZ_DW(destroy_xrc_srq_out)] = {0}; - - MLX5_SET(destroy_xrc_srq_in, in, opcode, MLX5_CMD_OP_DESTROY_XRC_SRQ); - MLX5_SET(destroy_xrc_srq_in, in, xrc_srqn, xsrqn); - return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); -} - -int mlx5_core_arm_xsrq(struct mlx5_core_dev *dev, u32 xsrqn, u16 lwm) -{ - u32 in[MLX5_ST_SZ_DW(arm_xrc_srq_in)] = {0}; - u32 out[MLX5_ST_SZ_DW(arm_xrc_srq_out)] = {0}; - - MLX5_SET(arm_xrc_srq_in, in, opcode, MLX5_CMD_OP_ARM_XRC_SRQ); - MLX5_SET(arm_xrc_srq_in, in, xrc_srqn, xsrqn); - MLX5_SET(arm_xrc_srq_in, in, lwm, lwm); - MLX5_SET(arm_xrc_srq_in, in, op_mod, - MLX5_ARM_XRC_SRQ_IN_OP_MOD_XRC_SRQ); - return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out)); -} - int mlx5_core_create_rqt(struct mlx5_core_dev *dev, u32 *in, int inlen, u32 *rqtn) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c index cfbea66b4879..9b150ce9d315 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c @@ -1204,9 +1204,19 @@ EXPORT_SYMBOL_GPL(mlx5_nic_vport_unaffiliate_multiport); u64 mlx5_query_nic_system_image_guid(struct mlx5_core_dev *mdev) { - if (!mdev->sys_image_guid) - mlx5_query_nic_vport_system_image_guid(mdev, &mdev->sys_image_guid); + int port_type_cap = MLX5_CAP_GEN(mdev, port_type); + u64 tmp = 0; - return mdev->sys_image_guid; + if (mdev->sys_image_guid) + return mdev->sys_image_guid; + + if (port_type_cap == MLX5_CAP_PORT_TYPE_ETH) + mlx5_query_nic_vport_system_image_guid(mdev, &tmp); + else + mlx5_query_hca_vport_system_image_guid(mdev, &tmp); + + mdev->sys_image_guid = tmp; + + return tmp; } EXPORT_SYMBOL_GPL(mlx5_query_nic_system_image_guid); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c index 2dcbf1ebfd6a..953cc8efba69 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c @@ -155,7 +155,8 @@ int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param, void *cqc, struct mlx5_cqwq *wq, struct mlx5_wq_ctrl *wq_ctrl) { - u8 log_wq_stride = MLX5_GET(cqc, cqc, cqe_sz) + 6; + /* CQE_STRIDE_128 and CQE_STRIDE_128_PAD both mean 128B stride */ + u8 log_wq_stride = MLX5_GET(cqc, cqc, cqe_sz) == CQE_STRIDE_64 ? 
6 : 7; u8 log_wq_sz = MLX5_GET(cqc, cqc, log_cq_size); int err; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h index b1293d153a58..ea934a48c90a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h @@ -177,9 +177,14 @@ static inline u32 mlx5_cqwq_get_ci(struct mlx5_cqwq *wq) return mlx5_cqwq_ctr2ix(wq, wq->cc); } -static inline void *mlx5_cqwq_get_wqe(struct mlx5_cqwq *wq, u32 ix) +static inline struct mlx5_cqe64 *mlx5_cqwq_get_wqe(struct mlx5_cqwq *wq, u32 ix) { - return mlx5_frag_buf_get_wqe(&wq->fbc, ix); + struct mlx5_cqe64 *cqe = mlx5_frag_buf_get_wqe(&wq->fbc, ix); + + /* For 128B CQEs the data is in the last 64B */ + cqe += wq->fbc.log_stride == 7; + + return cqe; } static inline u32 mlx5_cqwq_get_ctr_wrap_cnt(struct mlx5_cqwq *wq, u32 ctr) diff --git a/drivers/net/ethernet/mellanox/mlxsw/Kconfig b/drivers/net/ethernet/mellanox/mlxsw/Kconfig index 8a291eb36c64..080ddd1942ec 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/Kconfig +++ b/drivers/net/ethernet/mellanox/mlxsw/Kconfig @@ -80,6 +80,7 @@ config MLXSW_SPECTRUM depends on IPV6_GRE || IPV6_GRE=n select GENERIC_ALLOCATOR select PARMAN + select OBJAGG select MLXFW default m ---help--- diff --git a/drivers/net/ethernet/mellanox/mlxsw/Makefile b/drivers/net/ethernet/mellanox/mlxsw/Makefile index 1f77e97e2d7a..bbf45f10c208 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/Makefile +++ b/drivers/net/ethernet/mellanox/mlxsw/Makefile @@ -20,7 +20,7 @@ mlxsw_spectrum-objs := spectrum.o spectrum_buffers.o \ spectrum_acl_tcam.o spectrum_acl_ctcam.o \ spectrum_acl_atcam.o spectrum_acl_erp.o \ spectrum1_acl_tcam.o spectrum2_acl_tcam.o \ - spectrum_acl.o \ + spectrum_acl_bloom_filter.o spectrum_acl.o \ spectrum_flower.o spectrum_cnt.o \ spectrum_fid.o spectrum_ipip.o \ spectrum_acl_flex_actions.o \ diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c index f7154f358f27..ddedf8ab5b64 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core.c @@ -970,10 +970,11 @@ static const struct devlink_ops mlxsw_devlink_ops = { .sb_occ_tc_port_bind_get = mlxsw_devlink_sb_occ_tc_port_bind_get, }; -int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, - const struct mlxsw_bus *mlxsw_bus, - void *bus_priv, bool reload, - struct devlink *devlink) +static int +__mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, + const struct mlxsw_bus *mlxsw_bus, + void *bus_priv, bool reload, + struct devlink *devlink) { const char *device_kind = mlxsw_bus_info->device_kind; struct mlxsw_core *mlxsw_core; @@ -1040,6 +1041,12 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, goto err_devlink_register; } + if (mlxsw_driver->params_register && !reload) { + err = mlxsw_driver->params_register(mlxsw_core); + if (err) + goto err_register_params; + } + err = mlxsw_hwmon_init(mlxsw_core, mlxsw_bus_info, &mlxsw_core->hwmon); if (err) goto err_hwmon_init; @@ -1062,6 +1069,9 @@ err_driver_init: err_thermal_init: mlxsw_hwmon_fini(mlxsw_core->hwmon); err_hwmon_init: + if (mlxsw_driver->params_unregister && !reload) + mlxsw_driver->params_unregister(mlxsw_core); +err_register_params: if (!reload) devlink_unregister(devlink); err_devlink_register: @@ -1081,6 +1091,29 @@ err_bus_init: err_devlink_alloc: return err; } + +int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info, + 
const struct mlxsw_bus *mlxsw_bus, + void *bus_priv, bool reload, + struct devlink *devlink) +{ + bool called_again = false; + int err; + +again: + err = __mlxsw_core_bus_device_register(mlxsw_bus_info, mlxsw_bus, + bus_priv, reload, devlink); + /* -EAGAIN is returned in case the FW was updated. FW needs + * a reset, so let's try to call __mlxsw_core_bus_device_register() + * again. + */ + if (err == -EAGAIN && !called_again) { + called_again = true; + goto again; + } + + return err; +} EXPORT_SYMBOL(mlxsw_core_bus_device_register); void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, @@ -1102,6 +1135,8 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, mlxsw_core->driver->fini(mlxsw_core); mlxsw_thermal_fini(mlxsw_core->thermal); mlxsw_hwmon_fini(mlxsw_core->hwmon); + if (mlxsw_core->driver->params_unregister && !reload) + mlxsw_core->driver->params_unregister(mlxsw_core); if (!reload) devlink_unregister(devlink); mlxsw_emad_fini(mlxsw_core); @@ -1114,6 +1149,8 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core, return; reload_fail_deinit: + if (mlxsw_core->driver->params_unregister) + mlxsw_core->driver->params_unregister(mlxsw_core); devlink_unregister(devlink); devlink_resources_unregister(devlink, NULL); devlink_free(devlink); diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h index c4e4971764e5..4e114f35ee0d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core.h +++ b/drivers/net/ethernet/mellanox/mlxsw/core.h @@ -282,6 +282,8 @@ struct mlxsw_driver { const struct mlxsw_config_profile *profile, u64 *p_single_size, u64 *p_double_size, u64 *p_linear_size); + int (*params_register)(struct mlxsw_core *mlxsw_core); + void (*params_unregister)(struct mlxsw_core *mlxsw_core); u8 txhdr_len; const struct mlxsw_config_profile *profile; bool res_query_enabled; diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c index 785bf01fe2be..df78d23b3ec3 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c @@ -426,15 +426,17 @@ mlxsw_sp_afk_encode_one(const struct mlxsw_afk_element_inst *elinst, void mlxsw_afk_encode(struct mlxsw_afk *mlxsw_afk, struct mlxsw_afk_key_info *key_info, struct mlxsw_afk_element_values *values, - char *key, char *mask, int block_start, int block_end) + char *key, char *mask) { + unsigned int blocks_count = + mlxsw_afk_key_info_blocks_count_get(key_info); char block_mask[MLXSW_SP_AFK_KEY_BLOCK_MAX_SIZE]; char block_key[MLXSW_SP_AFK_KEY_BLOCK_MAX_SIZE]; const struct mlxsw_afk_element_inst *elinst; enum mlxsw_afk_element element; int block_index, i; - for (i = block_start; i <= block_end; i++) { + for (i = 0; i < blocks_count; i++) { memset(block_key, 0, MLXSW_SP_AFK_KEY_BLOCK_MAX_SIZE); memset(block_mask, 0, MLXSW_SP_AFK_KEY_BLOCK_MAX_SIZE); @@ -451,10 +453,18 @@ void mlxsw_afk_encode(struct mlxsw_afk *mlxsw_afk, values->storage.mask); } - if (key) - mlxsw_afk->ops->encode_block(block_key, i, key); - if (mask) - mlxsw_afk->ops->encode_block(block_mask, i, mask); + mlxsw_afk->ops->encode_block(key, i, block_key); + mlxsw_afk->ops->encode_block(mask, i, block_mask); } } EXPORT_SYMBOL(mlxsw_afk_encode); + +void mlxsw_afk_clear(struct mlxsw_afk *mlxsw_afk, char *key, + int block_start, int block_end) +{ + int i; + + for (i = block_start; i <= block_end; i++) + mlxsw_afk->ops->clear_block(key, i); +}
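mlxsw_afk_encode() above now derives the block count from the key info and always encodes every block into both the key and mask buffers, while the new mlxsw_afk_clear() zeroes an inclusive block range through the ops->clear_block() callback. A toy standalone model of that per-block layout, with an invented 16-byte block size and flat buffer purely for illustration:

#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16	/* invented per-block chunk size */
#define MAX_BLOCKS 4

/* toy stand-ins for ops->encode_block() / ops->clear_block(): each block
 * owns a fixed slice of the flat key (or mask) buffer */
static void encode_block(char *output, int block_index, const char *block)
{
	memcpy(output + block_index * BLOCK_SIZE, block, BLOCK_SIZE);
}

static void clear_block(char *output, int block_index)
{
	memset(output + block_index * BLOCK_SIZE, 0, BLOCK_SIZE);
}

int main(void)
{
	char key[MAX_BLOCKS * BLOCK_SIZE];
	char block[BLOCK_SIZE];
	int i;

	/* like mlxsw_afk_encode(): walk every block of the key */
	for (i = 0; i < MAX_BLOCKS; i++) {
		memset(block, 'A' + i, BLOCK_SIZE);
		encode_block(key, i, block);
	}
	/* like mlxsw_afk_clear(): zero an inclusive block range */
	for (i = 1; i <= 2; i++)
		clear_block(key, i);

	for (i = 0; i < MAX_BLOCKS; i++)
		printf("block %d first byte: 0x%02x\n", i,
		       (unsigned char)key[i * BLOCK_SIZE]);
	return 0;
}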
+EXPORT_SYMBOL(mlxsw_afk_clear); diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h index c29c045d826d..4a625cdf3e7c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h +++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h @@ -33,6 +33,8 @@ enum mlxsw_afk_element { MLXSW_AFK_ELEMENT_IP_TTL_, MLXSW_AFK_ELEMENT_IP_ECN, MLXSW_AFK_ELEMENT_IP_DSCP, + MLXSW_AFK_ELEMENT_VIRT_ROUTER_8_10, + MLXSW_AFK_ELEMENT_VIRT_ROUTER_0_7, MLXSW_AFK_ELEMENT_MAX, }; @@ -87,6 +89,8 @@ static const struct mlxsw_afk_element_info mlxsw_afk_element_infos[] = { MLXSW_AFK_ELEMENT_INFO_U32(IP_TTL_, 0x18, 0, 8), MLXSW_AFK_ELEMENT_INFO_U32(IP_ECN, 0x18, 9, 2), MLXSW_AFK_ELEMENT_INFO_U32(IP_DSCP, 0x18, 11, 6), + MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_8_10, 0x18, 17, 3), + MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_0_7, 0x18, 20, 8), MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_96_127, 0x20, 4), MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_64_95, 0x24, 4), MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_32_63, 0x28, 4), @@ -188,7 +192,8 @@ struct mlxsw_afk; struct mlxsw_afk_ops { const struct mlxsw_afk_block *blocks; unsigned int blocks_count; - void (*encode_block)(char *block, int block_index, char *output); + void (*encode_block)(char *output, int block_index, char *block); + void (*clear_block)(char *output, int block_index); }; struct mlxsw_afk *mlxsw_afk_create(unsigned int max_blocks, @@ -228,6 +233,8 @@ void mlxsw_afk_values_add_buf(struct mlxsw_afk_element_values *values, void mlxsw_afk_encode(struct mlxsw_afk *mlxsw_afk, struct mlxsw_afk_key_info *key_info, struct mlxsw_afk_element_values *values, - char *key, char *mask, int block_start, int block_end); + char *key, char *mask); +void mlxsw_afk_clear(struct mlxsw_afk *mlxsw_afk, char *key, + int block_start, int block_end); #endif diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c index 6d29dc428608..61f897b40f82 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c @@ -16,6 +16,15 @@ #define MLXSW_THERMAL_MAX_TEMP 110000 /* 110C */ #define MLXSW_THERMAL_MAX_STATE 10 #define MLXSW_THERMAL_MAX_DUTY 255 +/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values + * MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for + * setting fan speed dynamic minimum. For example, if value is set to 14 (40%), + * cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to + * introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60, 70, 80, 90, 100. 
+ */ +#define MLXSW_THERMAL_SPEED_MIN (MLXSW_THERMAL_MAX_STATE + 2) +#define MLXSW_THERMAL_SPEED_MAX (MLXSW_THERMAL_MAX_STATE * 2) +#define MLXSW_THERMAL_SPEED_MIN_LEVEL 2 /* 20% */ struct mlxsw_thermal_trip { int type; @@ -68,6 +77,7 @@ struct mlxsw_thermal { const struct mlxsw_bus_info *bus_info; struct thermal_zone_device *tzdev; struct thermal_cooling_device *cdevs[MLXSW_MFCR_PWMS_MAX]; + u8 cooling_levels[MLXSW_THERMAL_MAX_STATE + 1]; struct mlxsw_thermal_trip trips[MLXSW_THERMAL_NUM_TRIPS]; enum thermal_device_mode mode; }; @@ -285,12 +295,51 @@ static int mlxsw_thermal_set_cur_state(struct thermal_cooling_device *cdev, struct mlxsw_thermal *thermal = cdev->devdata; struct device *dev = thermal->bus_info->dev; char mfsc_pl[MLXSW_REG_MFSC_LEN]; - int err, idx; + unsigned long cur_state, i; + int idx; + u8 duty; + int err; idx = mlxsw_get_cooling_device_idx(thermal, cdev); if (idx < 0) return idx; + /* Verify if this request is for changing the allowed fan dynamic + * minimum. If it is, update cooling levels accordingly and update + * state, if current state is below the newly requested minimum state. + * For example, if current state is 5, and minimal state is to be + * changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed + * all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be + * overwritten. + */ + if (state >= MLXSW_THERMAL_SPEED_MIN && + state <= MLXSW_THERMAL_SPEED_MAX) { + state -= MLXSW_THERMAL_MAX_STATE; + for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++) + thermal->cooling_levels[i] = max(state, i); + + mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0); + err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl); + if (err) + return err; + + duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl); + cur_state = mlxsw_duty_to_state(duty); + + /* If current fan state is lower than the requested dynamic + * minimum, increase fan speed up to the dynamic minimum. + */ + if (state < cur_state) + return 0; + + state = cur_state; + } + + if (state > MLXSW_THERMAL_MAX_STATE) + return -EINVAL; + + /* Normalize the state to the valid speed range. */ + state = thermal->cooling_levels[state]; mlxsw_reg_mfsc_pack(mfsc_pl, idx, mlxsw_state_to_duty(state)); err = mlxsw_reg_write(thermal->core, MLXSW_REG(mfsc), mfsc_pl); if (err) { @@ -369,6 +418,11 @@ int mlxsw_thermal_init(struct mlxsw_core *core, } } + /* Initialize cooling levels per PWM state. */ + for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++) + thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL, + i); + thermal->tzdev = thermal_zone_device_register("mlxsw", MLXSW_THERMAL_NUM_TRIPS, MLXSW_THERMAL_TRIP_MASK, diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c index 5890fdfd62c3..66b8098c6fd2 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/pci.c +++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c @@ -1720,7 +1720,6 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { const char *driver_name = pdev->driver->name; struct mlxsw_pci *mlxsw_pci; - bool called_again = false; int err; mlxsw_pci = kzalloc(sizeof(*mlxsw_pci), GFP_KERNEL); @@ -1777,18 +1776,10 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) mlxsw_pci->bus_info.dev = &pdev->dev; mlxsw_pci->id = id; -again: err = mlxsw_core_bus_device_register(&mlxsw_pci->bus_info, &mlxsw_pci_bus, mlxsw_pci, false, NULL); - /* -EAGAIN is returned in case the FW was updated. FW needs - * a reset, so lets try to call mlxsw_core_bus_device_register() - * again. 
- */ - if (err == -EAGAIN && !called_again) { - called_again = true; - goto again; - } else if (err) { + if (err) { dev_err(&pdev->dev, "cannot register bus device\n"); goto err_bus_device_register; } diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h index db3d2790aeec..9b48dffc9f63 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/reg.h +++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h @@ -641,6 +641,10 @@ enum mlxsw_reg_sfn_rec_type { MLXSW_REG_SFN_REC_TYPE_AGED_OUT_MAC = 0x7, /* Aged-out MAC address on a LAG port. */ MLXSW_REG_SFN_REC_TYPE_AGED_OUT_MAC_LAG = 0x8, + /* Learned unicast tunnel record. */ + MLXSW_REG_SFN_REC_TYPE_LEARNED_UNICAST_TUNNEL = 0xD, + /* Aged-out unicast tunnel record. */ + MLXSW_REG_SFN_REC_TYPE_AGED_OUT_UNICAST_TUNNEL = 0xE, }; /* reg_sfn_rec_type @@ -704,6 +708,66 @@ static inline void mlxsw_reg_sfn_mac_lag_unpack(char *payload, int rec_index, *p_lag_id = mlxsw_reg_sfn_mac_lag_lag_id_get(payload, rec_index); } +/* reg_sfn_uc_tunnel_uip_msb + * When protocol is IPv4, the most significant byte of the underlay IPv4 + * address of the remote VTEP. + * When protocol is IPv6, reserved. + * Access: RO + */ +MLXSW_ITEM32_INDEXED(reg, sfn, uc_tunnel_uip_msb, MLXSW_REG_SFN_BASE_LEN, 24, + 8, MLXSW_REG_SFN_REC_LEN, 0x08, false); + +enum mlxsw_reg_sfn_uc_tunnel_protocol { + MLXSW_REG_SFN_UC_TUNNEL_PROTOCOL_IPV4, + MLXSW_REG_SFN_UC_TUNNEL_PROTOCOL_IPV6, +}; + +/* reg_sfn_uc_tunnel_protocol + * IP protocol. + * Access: RO + */ +MLXSW_ITEM32_INDEXED(reg, sfn, uc_tunnel_protocol, MLXSW_REG_SFN_BASE_LEN, 27, + 1, MLXSW_REG_SFN_REC_LEN, 0x0C, false); + +/* reg_sfn_uc_tunnel_uip_lsb + * When protocol is IPv4, the least significant bytes of the underlay + * IPv4 address of the remote VTEP. + * When protocol is IPv6, ipv6_id to be queried from TNIPSD. + * Access: RO + */ +MLXSW_ITEM32_INDEXED(reg, sfn, uc_tunnel_uip_lsb, MLXSW_REG_SFN_BASE_LEN, 0, + 24, MLXSW_REG_SFN_REC_LEN, 0x0C, false); + +enum mlxsw_reg_sfn_tunnel_port { + MLXSW_REG_SFN_TUNNEL_PORT_NVE, + MLXSW_REG_SFN_TUNNEL_PORT_VPLS, + MLXSW_REG_SFN_TUNNEL_FLEX_TUNNEL0, + MLXSW_REG_SFN_TUNNEL_FLEX_TUNNEL1, +}; + +/* reg_sfn_uc_tunnel_port + * Tunnel port. + * Reserved on Spectrum. + * Access: RO + */ +MLXSW_ITEM32_INDEXED(reg, sfn, tunnel_port, MLXSW_REG_SFN_BASE_LEN, 0, 4, + MLXSW_REG_SFN_REC_LEN, 0x10, false); + +static inline void +mlxsw_reg_sfn_uc_tunnel_unpack(char *payload, int rec_index, char *mac, + u16 *p_fid, u32 *p_uip, + enum mlxsw_reg_sfn_uc_tunnel_protocol *p_proto) +{ + u32 uip_msb, uip_lsb; + + mlxsw_reg_sfn_rec_mac_memcpy_from(payload, rec_index, mac); + *p_fid = mlxsw_reg_sfn_mac_fid_get(payload, rec_index); + uip_msb = mlxsw_reg_sfn_uc_tunnel_uip_msb_get(payload, rec_index); + uip_lsb = mlxsw_reg_sfn_uc_tunnel_uip_lsb_get(payload, rec_index); + *p_uip = uip_msb << 24 | uip_lsb; + *p_proto = mlxsw_reg_sfn_uc_tunnel_protocol_get(payload, rec_index); +} + /* SPMS - Switch Port MSTP/RSTP State Register * ------------------------------------------- * Configures the spanning tree state of a physical port. @@ -2431,6 +2495,43 @@ static inline void mlxsw_reg_pefa_unpack(char *payload, bool *p_a) *p_a = mlxsw_reg_pefa_a_get(payload); } +/* PEMRBT - Policy-Engine Multicast Router Binding Table Register + * -------------------------------------------------------------- + * This register is used for binding Multicast router to an ACL group + * that serves the MC router. + * This register is not supported by SwitchX/-2 and Spectrum. 
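The PEMRBT definitions that follow, like the other register helpers in this file, share one packing convention: zero the payload with MLXSW_REG_ZERO(), then write each field through a generated setter that masks it into a big-endian 32-bit word at a fixed offset. A self-contained userspace sketch of that convention, reusing the protocol/group_id offsets from the definitions below (this models, but is not, the driver's MLXSW_ITEM32 machinery):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define REG_LEN 0x14 /* MLXSW_REG_PEMRBT_LEN */

/* model of an MLXSW_ITEM32 setter: (byte offset, lsb, width) inside a
 * big-endian 32-bit word of the payload */
static void item32_set(char *payload, int offset, int lsb, int width,
		       uint32_t val)
{
	uint32_t be, word, mask = ((width < 32 ? 1u << width : 0u) - 1) << lsb;

	memcpy(&be, payload + offset, sizeof(be));
	word = (ntohl(be) & ~mask) | ((val << lsb) & mask);
	be = htonl(word);
	memcpy(payload + offset, &be, sizeof(be));
}

/* the pack-helper convention: zero the payload, then set each field */
static void pemrbt_pack(char *payload, int protocol, uint16_t group_id)
{
	memset(payload, 0, REG_LEN);
	item32_set(payload, 0x00, 0, 1, protocol);  /* reg_pemrbt_protocol */
	item32_set(payload, 0x10, 0, 16, group_id); /* reg_pemrbt_group_id */
}

int main(void)
{
	char payload[REG_LEN];
	uint32_t be;

	pemrbt_pack(payload, 1 /* IPv6 */, 42);
	memcpy(&be, payload + 0x10, sizeof(be));
	printf("group_id word: 0x%08x\n", ntohl(be)); /* 0x0000002a */
	return 0;
}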
+ */ +#define MLXSW_REG_PEMRBT_ID 0x3014 +#define MLXSW_REG_PEMRBT_LEN 0x14 + +MLXSW_REG_DEFINE(pemrbt, MLXSW_REG_PEMRBT_ID, MLXSW_REG_PEMRBT_LEN); + +enum mlxsw_reg_pemrbt_protocol { + MLXSW_REG_PEMRBT_PROTO_IPV4, + MLXSW_REG_PEMRBT_PROTO_IPV6, +}; + +/* reg_pemrbt_protocol + * Access: Index + */ +MLXSW_ITEM32(reg, pemrbt, protocol, 0x00, 0, 1); + +/* reg_pemrbt_group_id + * ACL group identifier. + * Range 0..cap_max_acl_groups-1 + * Access: RW + */ +MLXSW_ITEM32(reg, pemrbt, group_id, 0x10, 0, 16); + +static inline void +mlxsw_reg_pemrbt_pack(char *payload, enum mlxsw_reg_pemrbt_protocol protocol, + u16 group_id) +{ + MLXSW_REG_ZERO(pemrbt, payload); + mlxsw_reg_pemrbt_protocol_set(payload, protocol); + mlxsw_reg_pemrbt_group_id_set(payload, group_id); +} + /* PTCE-V2 - Policy-Engine TCAM Entry Register Version 2 * ----------------------------------------------------- * This register is used for accessing rules within a TCAM region. @@ -2642,7 +2743,7 @@ mlxsw_reg_perpt_pack(char *payload, u8 erpt_bank, u8 erpt_index, mlxsw_reg_perpt_erpt_bank_set(payload, erpt_bank); mlxsw_reg_perpt_erpt_index_set(payload, erpt_index); mlxsw_reg_perpt_key_size_set(payload, key_size); - mlxsw_reg_perpt_bf_bypass_set(payload, true); + mlxsw_reg_perpt_bf_bypass_set(payload, false); mlxsw_reg_perpt_erp_id_set(payload, erp_id); mlxsw_reg_perpt_erpt_base_bank_set(payload, erpt_base_bank); mlxsw_reg_perpt_erpt_base_index_set(payload, erpt_base_index); @@ -2834,8 +2935,9 @@ static inline void mlxsw_reg_ptce3_pack(char *payload, bool valid, u32 priority, const char *tcam_region_info, const char *key, u8 erp_id, - bool large_exists, u32 lkey_id, - u32 action_pointer) + u16 delta_start, u8 delta_mask, + u8 delta_value, bool large_exists, + u32 lkey_id, u32 action_pointer) { MLXSW_REG_ZERO(ptce3, payload); mlxsw_reg_ptce3_v_set(payload, valid); @@ -2844,6 +2946,9 @@ static inline void mlxsw_reg_ptce3_pack(char *payload, bool valid, mlxsw_reg_ptce3_tcam_region_info_memcpy_to(payload, tcam_region_info); mlxsw_reg_ptce3_flex2_key_blocks_memcpy_to(payload, key); mlxsw_reg_ptce3_erp_id_set(payload, erp_id); + mlxsw_reg_ptce3_delta_start_set(payload, delta_start); + mlxsw_reg_ptce3_delta_mask_set(payload, delta_mask); + mlxsw_reg_ptce3_delta_value_set(payload, delta_value); mlxsw_reg_ptce3_large_exists_set(payload, large_exists); mlxsw_reg_ptce3_large_entry_key_id_set(payload, lkey_id); mlxsw_reg_ptce3_action_pointer_set(payload, action_pointer); @@ -2901,7 +3006,7 @@ static inline void mlxsw_reg_percr_pack(char *payload, u16 region_id) mlxsw_reg_percr_region_id_set(payload, region_id); mlxsw_reg_percr_atcam_ignore_prune_set(payload, false); mlxsw_reg_percr_ctcam_ignore_prune_set(payload, false); - mlxsw_reg_percr_bf_bypass_set(payload, true); + mlxsw_reg_percr_bf_bypass_set(payload, false); } /* PERERP - Policy-Engine Region eRP Register @@ -2990,6 +3095,72 @@ static inline void mlxsw_reg_pererp_pack(char *payload, u16 region_id, mlxsw_reg_pererp_master_rp_id_set(payload, master_rp_id); } +/* PEABFE - Policy-Engine Algorithmic Bloom Filter Entries Register + * ---------------------------------------------------------------- + * This register configures the Bloom filter entries. 
+ */ +#define MLXSW_REG_PEABFE_ID 0x3022 +#define MLXSW_REG_PEABFE_BASE_LEN 0x10 +#define MLXSW_REG_PEABFE_BF_REC_LEN 0x4 +#define MLXSW_REG_PEABFE_BF_REC_MAX_COUNT 256 +#define MLXSW_REG_PEABFE_LEN (MLXSW_REG_PEABFE_BASE_LEN + \ + MLXSW_REG_PEABFE_BF_REC_LEN * \ + MLXSW_REG_PEABFE_BF_REC_MAX_COUNT) + +MLXSW_REG_DEFINE(peabfe, MLXSW_REG_PEABFE_ID, MLXSW_REG_PEABFE_LEN); + +/* reg_peabfe_size + * Number of BF entries to be updated. + * Range 1..256 + * Access: Op + */ +MLXSW_ITEM32(reg, peabfe, size, 0x00, 0, 9); + +/* reg_peabfe_bf_entry_state + * Bloom filter state + * 0 - Clear + * 1 - Set + * Access: RW + */ +MLXSW_ITEM32_INDEXED(reg, peabfe, bf_entry_state, + MLXSW_REG_PEABFE_BASE_LEN, 31, 1, + MLXSW_REG_PEABFE_BF_REC_LEN, 0x00, false); + +/* reg_peabfe_bf_entry_bank + * Bloom filter bank ID + * Range 0..cap_max_erp_table_banks-1 + * Access: Index + */ +MLXSW_ITEM32_INDEXED(reg, peabfe, bf_entry_bank, + MLXSW_REG_PEABFE_BASE_LEN, 24, 4, + MLXSW_REG_PEABFE_BF_REC_LEN, 0x00, false); + +/* reg_peabfe_bf_entry_index + * Bloom filter entry index + * Range 0..2^cap_max_bf_log-1 + * Access: Index + */ +MLXSW_ITEM32_INDEXED(reg, peabfe, bf_entry_index, + MLXSW_REG_PEABFE_BASE_LEN, 0, 24, + MLXSW_REG_PEABFE_BF_REC_LEN, 0x00, false); + +static inline void mlxsw_reg_peabfe_pack(char *payload) +{ + MLXSW_REG_ZERO(peabfe, payload); +} + +static inline void mlxsw_reg_peabfe_rec_pack(char *payload, int rec_index, + u8 state, u8 bank, u32 bf_index) +{ + u8 num_rec = mlxsw_reg_peabfe_size_get(payload); + + if (rec_index >= num_rec) + mlxsw_reg_peabfe_size_set(payload, rec_index + 1); + mlxsw_reg_peabfe_bf_entry_state_set(payload, rec_index, state); + mlxsw_reg_peabfe_bf_entry_bank_set(payload, rec_index, bank); + mlxsw_reg_peabfe_bf_entry_index_set(payload, rec_index, bf_index); +} + /* IEDR - Infrastructure Entry Delete Register * ---------------------------------------------------- * This register is used for deleting entries from the entry tables. 
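One detail worth noting in mlxsw_reg_peabfe_rec_pack() above: packing record N bumps the register's size field to N + 1, so a caller can append any number of Bloom filter set/clear records and then flush them to hardware as a single batched write. A small standalone model of that bookkeeping, with simplified record fields:

#include <stdio.h>

#define BF_REC_MAX 256 /* MLXSW_REG_PEABFE_BF_REC_MAX_COUNT */

/* toy model of the PEABFE payload: writing record N implies size >= N + 1,
 * which is exactly the invariant the size bump maintains */
struct peabfe {
	int size;
	struct { int state, bank; unsigned int index; } rec[BF_REC_MAX];
};

static void peabfe_rec_pack(struct peabfe *p, int rec_index,
			    int state, int bank, unsigned int bf_index)
{
	if (rec_index >= p->size)
		p->size = rec_index + 1;
	p->rec[rec_index].state = state;
	p->rec[rec_index].bank = bank;
	p->rec[rec_index].index = bf_index;
}

int main(void)
{
	struct peabfe p = { 0 };

	peabfe_rec_pack(&p, 0, 1, 2, 0x100); /* set entry 0x100 in bank 2 */
	peabfe_rec_pack(&p, 1, 0, 2, 0x200); /* clear entry 0x200 */
	printf("records to flush: %d\n", p.size); /* 2 */
	return 0;
}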
@@ -4231,8 +4402,11 @@ MLXSW_ITEM32(reg, ppcnt, pnat, 0x00, 14, 2); enum mlxsw_reg_ppcnt_grp { MLXSW_REG_PPCNT_IEEE_8023_CNT = 0x0, + MLXSW_REG_PPCNT_RFC_2863_CNT = 0x1, MLXSW_REG_PPCNT_RFC_2819_CNT = 0x2, + MLXSW_REG_PPCNT_RFC_3635_CNT = 0x3, MLXSW_REG_PPCNT_EXT_CNT = 0x5, + MLXSW_REG_PPCNT_DISCARD_CNT = 0x6, MLXSW_REG_PPCNT_PRIO_CNT = 0x10, MLXSW_REG_PPCNT_TC_CNT = 0x11, MLXSW_REG_PPCNT_TC_CONG_TC = 0x13, @@ -4247,6 +4421,7 @@ enum mlxsw_reg_ppcnt_grp { * 0x2: RFC 2819 Counters * 0x3: RFC 3635 Counters * 0x5: Ethernet Extended Counters + * 0x6: Ethernet Discard Counters * 0x8: Link Level Retransmission Counters * 0x10: Per Priority Counters * 0x11: Per Traffic Class Counters @@ -4390,8 +4565,46 @@ MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_received, MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_transmitted, MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x90, 0, 64); +/* Ethernet RFC 2863 Counter Group */ + +/* reg_ppcnt_if_in_discards + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, if_in_discards, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x10, 0, 64); + +/* reg_ppcnt_if_out_discards + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, if_out_discards, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x38, 0, 64); + +/* reg_ppcnt_if_out_errors + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, if_out_errors, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x40, 0, 64); + /* Ethernet RFC 2819 Counter Group */ +/* reg_ppcnt_ether_stats_undersize_pkts + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ether_stats_undersize_pkts, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x30, 0, 64); + +/* reg_ppcnt_ether_stats_oversize_pkts + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ether_stats_oversize_pkts, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x38, 0, 64); + +/* reg_ppcnt_ether_stats_fragments + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ether_stats_fragments, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x40, 0, 64); + /* reg_ppcnt_ether_stats_pkts64octets * Access: RO */ @@ -4452,6 +4665,32 @@ MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts4096to8191octets, MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts8192to10239octets, MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0xA0, 0, 64); +/* Ethernet RFC 3635 Counter Group */ + +/* reg_ppcnt_dot3stats_fcs_errors + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, dot3stats_fcs_errors, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x08, 0, 64); + +/* reg_ppcnt_dot3stats_symbol_errors + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, dot3stats_symbol_errors, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x60, 0, 64); + +/* reg_ppcnt_dot3control_in_unknown_opcodes + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, dot3control_in_unknown_opcodes, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x68, 0, 64); + +/* reg_ppcnt_dot3in_pause_frames + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, dot3in_pause_frames, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x70, 0, 64); + /* Ethernet Extended Counter Group Counters */ /* reg_ppcnt_ecn_marked @@ -4460,6 +4699,80 @@ MLXSW_ITEM64(reg, ppcnt, ether_stats_pkts8192to10239octets, MLXSW_ITEM64(reg, ppcnt, ecn_marked, MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x08, 0, 64); +/* Ethernet Discard Counter Group Counters */ + +/* reg_ppcnt_ingress_general + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ingress_general, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x00, 0, 64); + +/* reg_ppcnt_ingress_policy_engine + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ingress_policy_engine, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x08, 0, 64); + +/* reg_ppcnt_ingress_vlan_membership + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ingress_vlan_membership, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x10, 0, 64); + 
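All of the PPCNT counter groups added in this hunk read from one shared counter area; the group selected in the query decides what each offset means, and every MLXSW_ITEM64 line pins one counter name to one offset (note that if_out_discards and ether_stats_oversize_pkts both sit at offset 0x38, in different groups). A rough userspace model of such a big-endian 64-bit getter; the group selection in the actual PPCNT request is not modeled here:

#include <stdint.h>
#include <stdio.h>

/* model of an MLXSW_ITEM64 getter: counters are big-endian u64s at fixed
 * offsets inside the counter area that follows the PPCNT header */
static uint64_t item64_get(const unsigned char *counters, int offset)
{
	uint64_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v = (v << 8) | counters[offset + i];
	return v;
}

int main(void)
{
	unsigned char counters[0xA8] = {0};

	counters[0x38 + 7] = 5; /* pretend the device reported 5 */
	printf("0x38 as RFC 2863 if_out_discards:      %llu\n",
	       (unsigned long long)item64_get(counters, 0x38));
	printf("0x38 as RFC 2819 oversize_pkts:        %llu\n",
	       (unsigned long long)item64_get(counters, 0x38));
	return 0;
}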
+/* reg_ppcnt_ingress_tag_frame_type + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ingress_tag_frame_type, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x18, 0, 64); + +/* reg_ppcnt_egress_vlan_membership + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, egress_vlan_membership, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x20, 0, 64); + +/* reg_ppcnt_loopback_filter + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, loopback_filter, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x28, 0, 64); + +/* reg_ppcnt_egress_general + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, egress_general, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x30, 0, 64); + +/* reg_ppcnt_egress_hoq + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, egress_hoq, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x40, 0, 64); + +/* reg_ppcnt_egress_policy_engine + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, egress_policy_engine, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x50, 0, 64); + +/* reg_ppcnt_ingress_tx_link_down + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, ingress_tx_link_down, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x58, 0, 64); + +/* reg_ppcnt_egress_stp_filter + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, egress_stp_filter, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x60, 0, 64); + +/* reg_ppcnt_egress_sll + * Access: RO + */ +MLXSW_ITEM64(reg, ppcnt, egress_sll, + MLXSW_REG_PPCNT_COUNTERS_OFFSET + 0x70, 0, 64); + /* Ethernet Per Priority Group Counters */ /* reg_ppcnt_rx_octets @@ -4862,6 +5175,7 @@ enum mlxsw_reg_htgt_trap_group { MLXSW_REG_HTGT_TRAP_GROUP_SP_EVENT, MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6_MLD, MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6_ND, + MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR, }; /* reg_htgt_trap_group @@ -9357,8 +9671,10 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = { MLXSW_REG(ppbs), MLXSW_REG(prcr), MLXSW_REG(pefa), + MLXSW_REG(pemrbt), MLXSW_REG(ptce2), MLXSW_REG(perpt), + MLXSW_REG(peabfe), MLXSW_REG(perar), MLXSW_REG(ptce3), MLXSW_REG(percr), diff --git a/drivers/net/ethernet/mellanox/mlxsw/resources.h b/drivers/net/ethernet/mellanox/mlxsw/resources.h index 99b341539870..b8b3a01c2a9e 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/resources.h +++ b/drivers/net/ethernet/mellanox/mlxsw/resources.h @@ -41,6 +41,7 @@ enum mlxsw_res_id { MLXSW_RES_ID_ACL_ERPT_ENTRIES_4KB, MLXSW_RES_ID_ACL_ERPT_ENTRIES_8KB, MLXSW_RES_ID_ACL_ERPT_ENTRIES_12KB, + MLXSW_RES_ID_ACL_MAX_BF_LOG, MLXSW_RES_ID_MAX_CPU_POLICERS, MLXSW_RES_ID_MAX_VRS, MLXSW_RES_ID_MAX_RIFS, @@ -93,6 +94,7 @@ static u16 mlxsw_res_ids[] = { [MLXSW_RES_ID_ACL_ERPT_ENTRIES_4KB] = 0x2951, [MLXSW_RES_ID_ACL_ERPT_ENTRIES_8KB] = 0x2952, [MLXSW_RES_ID_ACL_ERPT_ENTRIES_12KB] = 0x2953, + [MLXSW_RES_ID_ACL_MAX_BF_LOG] = 0x2960, [MLXSW_RES_ID_MAX_CPU_POLICERS] = 0x2A13, [MLXSW_RES_ID_MAX_VRS] = 0x2C01, [MLXSW_RES_ID_MAX_RIFS] = 0x2C02, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index f84b9c02fcc5..eed1045e4d96 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -45,8 +45,8 @@ #define MLXSW_SP_FWREV_MINOR_TO_BRANCH(minor) ((minor) / 100) #define MLXSW_SP1_FWREV_MAJOR 13 -#define MLXSW_SP1_FWREV_MINOR 1703 -#define MLXSW_SP1_FWREV_SUBMINOR 4 +#define MLXSW_SP1_FWREV_MINOR 1910 +#define MLXSW_SP1_FWREV_SUBMINOR 622 #define MLXSW_SP1_FWREV_CAN_RESET_MINOR 1702 static const struct mlxsw_fw_rev mlxsw_sp1_fw_rev = { @@ -65,6 +65,13 @@ static const char mlxsw_sp1_driver_name[] = "mlxsw_spectrum"; static const char mlxsw_sp2_driver_name[] = "mlxsw_spectrum2"; static const char mlxsw_sp_driver_version[] = 
"1.0"; +static const unsigned char mlxsw_sp1_mac_mask[ETH_ALEN] = { + 0xff, 0xff, 0xff, 0xff, 0xfc, 0x00 +}; +static const unsigned char mlxsw_sp2_mac_mask[ETH_ALEN] = { + 0xff, 0xff, 0xff, 0xff, 0xf0, 0x00 +}; + /* tx_hdr_version * Tx header version. * Must be set to 1. @@ -323,6 +330,7 @@ static int mlxsw_sp_fw_rev_validate(struct mlxsw_sp *mlxsw_sp) const struct mlxsw_fw_rev *rev = &mlxsw_sp->bus_info->fw_rev; const struct mlxsw_fw_rev *req_rev = mlxsw_sp->req_rev; const char *fw_filename = mlxsw_sp->fw_filename; + union devlink_param_value value; const struct firmware *firmware; int err; @@ -330,6 +338,15 @@ static int mlxsw_sp_fw_rev_validate(struct mlxsw_sp *mlxsw_sp) if (!req_rev || !fw_filename) return 0; + /* Don't check if devlink 'fw_load_policy' param is 'flash' */ + err = devlink_param_driverinit_value_get(priv_to_devlink(mlxsw_sp->core), + DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY, + &value); + if (err) + return err; + if (value.vu8 == DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_FLASH) + return 0; + /* Validate driver & FW are compatible */ if (rev->major != req_rev->major) { WARN(1, "Mismatch in major FW version [%d:%d] is never expected; Please contact support\n", @@ -1123,22 +1140,40 @@ int mlxsw_sp_port_vlan_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid_begin, return 0; } -static void mlxsw_sp_port_vlan_flush(struct mlxsw_sp_port *mlxsw_sp_port) +static void mlxsw_sp_port_vlan_flush(struct mlxsw_sp_port *mlxsw_sp_port, + bool flush_default) { struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan, *tmp; list_for_each_entry_safe(mlxsw_sp_port_vlan, tmp, - &mlxsw_sp_port->vlans_list, list) - mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan); + &mlxsw_sp_port->vlans_list, list) { + if (!flush_default && + mlxsw_sp_port_vlan->vid == MLXSW_SP_DEFAULT_VID) + continue; + mlxsw_sp_port_vlan_destroy(mlxsw_sp_port_vlan); + } +} + +static void +mlxsw_sp_port_vlan_cleanup(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan) +{ + if (mlxsw_sp_port_vlan->bridge_port) + mlxsw_sp_port_vlan_bridge_leave(mlxsw_sp_port_vlan); + else if (mlxsw_sp_port_vlan->fid) + mlxsw_sp_port_vlan_router_leave(mlxsw_sp_port_vlan); } -static struct mlxsw_sp_port_vlan * +struct mlxsw_sp_port_vlan * mlxsw_sp_port_vlan_create(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid) { struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan; - bool untagged = vid == 1; + bool untagged = vid == MLXSW_SP_DEFAULT_VID; int err; + mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); + if (mlxsw_sp_port_vlan) + return ERR_PTR(-EEXIST); + err = mlxsw_sp_port_vlan_set(mlxsw_sp_port, vid, vid, true, untagged); if (err) return ERR_PTR(err); @@ -1150,7 +1185,6 @@ mlxsw_sp_port_vlan_create(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid) } mlxsw_sp_port_vlan->mlxsw_sp_port = mlxsw_sp_port; - mlxsw_sp_port_vlan->ref_count = 1; mlxsw_sp_port_vlan->vid = vid; list_add(&mlxsw_sp_port_vlan->list, &mlxsw_sp_port->vlans_list); @@ -1161,46 +1195,17 @@ err_port_vlan_alloc: return ERR_PTR(err); } -static void -mlxsw_sp_port_vlan_destroy(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan) +void mlxsw_sp_port_vlan_destroy(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan) { struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp_port_vlan->mlxsw_sp_port; u16 vid = mlxsw_sp_port_vlan->vid; + mlxsw_sp_port_vlan_cleanup(mlxsw_sp_port_vlan); list_del(&mlxsw_sp_port_vlan->list); kfree(mlxsw_sp_port_vlan); mlxsw_sp_port_vlan_set(mlxsw_sp_port, vid, vid, false, false); } -struct mlxsw_sp_port_vlan * -mlxsw_sp_port_vlan_get(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid) -{ - struct 
mlxsw_sp_port_vlan *mlxsw_sp_port_vlan; - - mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); - if (mlxsw_sp_port_vlan) { - mlxsw_sp_port_vlan->ref_count++; - return mlxsw_sp_port_vlan; - } - - return mlxsw_sp_port_vlan_create(mlxsw_sp_port, vid); -} - -void mlxsw_sp_port_vlan_put(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan) -{ - struct mlxsw_sp_fid *fid = mlxsw_sp_port_vlan->fid; - - if (--mlxsw_sp_port_vlan->ref_count != 0) - return; - - if (mlxsw_sp_port_vlan->bridge_port) - mlxsw_sp_port_vlan_bridge_leave(mlxsw_sp_port_vlan); - else if (fid) - mlxsw_sp_port_vlan_router_leave(mlxsw_sp_port_vlan); - - mlxsw_sp_port_vlan_destroy(mlxsw_sp_port_vlan); -} - static int mlxsw_sp_port_add_vid(struct net_device *dev, __be16 __always_unused proto, u16 vid) { @@ -1212,7 +1217,7 @@ static int mlxsw_sp_port_add_vid(struct net_device *dev, if (!vid) return 0; - return PTR_ERR_OR_ZERO(mlxsw_sp_port_vlan_get(mlxsw_sp_port, vid)); + return PTR_ERR_OR_ZERO(mlxsw_sp_port_vlan_create(mlxsw_sp_port, vid)); } static int mlxsw_sp_port_kill_vid(struct net_device *dev, @@ -1230,7 +1235,7 @@ static int mlxsw_sp_port_kill_vid(struct net_device *dev, mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); if (!mlxsw_sp_port_vlan) return 0; - mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan); + mlxsw_sp_port_vlan_destroy(mlxsw_sp_port_vlan); return 0; } @@ -1342,7 +1347,6 @@ static int mlxsw_sp_port_add_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port, struct mlxsw_sp_port_mall_tc_entry *mall_tc_entry; __be16 protocol = f->common.protocol; const struct tc_action *a; - LIST_HEAD(actions); int err; if (!tcf_exts_has_one_action(f->exts)) { @@ -1881,7 +1885,37 @@ static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_stats[] = { #define MLXSW_SP_PORT_HW_STATS_LEN ARRAY_SIZE(mlxsw_sp_port_hw_stats) +static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_rfc_2863_stats[] = { + { + .str = "if_in_discards", + .getter = mlxsw_reg_ppcnt_if_in_discards_get, + }, + { + .str = "if_out_discards", + .getter = mlxsw_reg_ppcnt_if_out_discards_get, + }, + { + .str = "if_out_errors", + .getter = mlxsw_reg_ppcnt_if_out_errors_get, + }, +}; + +#define MLXSW_SP_PORT_HW_RFC_2863_STATS_LEN \ + ARRAY_SIZE(mlxsw_sp_port_hw_rfc_2863_stats) + static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_rfc_2819_stats[] = { + { + .str = "ether_stats_undersize_pkts", + .getter = mlxsw_reg_ppcnt_ether_stats_undersize_pkts_get, + }, + { + .str = "ether_stats_oversize_pkts", + .getter = mlxsw_reg_ppcnt_ether_stats_oversize_pkts_get, + }, + { + .str = "ether_stats_fragments", + .getter = mlxsw_reg_ppcnt_ether_stats_fragments_get, + }, { .str = "ether_pkts64octets", .getter = mlxsw_reg_ppcnt_ether_stats_pkts64octets_get, @@ -1927,6 +1961,82 @@ static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_rfc_2819_stats[] = { #define MLXSW_SP_PORT_HW_RFC_2819_STATS_LEN \ ARRAY_SIZE(mlxsw_sp_port_hw_rfc_2819_stats) +static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_rfc_3635_stats[] = { + { + .str = "dot3stats_fcs_errors", + .getter = mlxsw_reg_ppcnt_dot3stats_fcs_errors_get, + }, + { + .str = "dot3stats_symbol_errors", + .getter = mlxsw_reg_ppcnt_dot3stats_symbol_errors_get, + }, + { + .str = "dot3control_in_unknown_opcodes", + .getter = mlxsw_reg_ppcnt_dot3control_in_unknown_opcodes_get, + }, + { + .str = "dot3in_pause_frames", + .getter = mlxsw_reg_ppcnt_dot3in_pause_frames_get, + }, +}; + +#define MLXSW_SP_PORT_HW_RFC_3635_STATS_LEN \ + ARRAY_SIZE(mlxsw_sp_port_hw_rfc_3635_stats) + +static struct mlxsw_sp_port_hw_stats 
mlxsw_sp_port_hw_discard_stats[] = { + { + .str = "discard_ingress_general", + .getter = mlxsw_reg_ppcnt_ingress_general_get, + }, + { + .str = "discard_ingress_policy_engine", + .getter = mlxsw_reg_ppcnt_ingress_policy_engine_get, + }, + { + .str = "discard_ingress_vlan_membership", + .getter = mlxsw_reg_ppcnt_ingress_vlan_membership_get, + }, + { + .str = "discard_ingress_tag_frame_type", + .getter = mlxsw_reg_ppcnt_ingress_tag_frame_type_get, + }, + { + .str = "discard_egress_vlan_membership", + .getter = mlxsw_reg_ppcnt_egress_vlan_membership_get, + }, + { + .str = "discard_loopback_filter", + .getter = mlxsw_reg_ppcnt_loopback_filter_get, + }, + { + .str = "discard_egress_general", + .getter = mlxsw_reg_ppcnt_egress_general_get, + }, + { + .str = "discard_egress_hoq", + .getter = mlxsw_reg_ppcnt_egress_hoq_get, + }, + { + .str = "discard_egress_policy_engine", + .getter = mlxsw_reg_ppcnt_egress_policy_engine_get, + }, + { + .str = "discard_ingress_tx_link_down", + .getter = mlxsw_reg_ppcnt_ingress_tx_link_down_get, + }, + { + .str = "discard_egress_stp_filter", + .getter = mlxsw_reg_ppcnt_egress_stp_filter_get, + }, + { + .str = "discard_egress_sll", + .getter = mlxsw_reg_ppcnt_egress_sll_get, + }, +}; + +#define MLXSW_SP_PORT_HW_DISCARD_STATS_LEN \ + ARRAY_SIZE(mlxsw_sp_port_hw_discard_stats) + static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_prio_stats[] = { { .str = "rx_octets_prio", @@ -1979,7 +2089,10 @@ static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_tc_stats[] = { #define MLXSW_SP_PORT_HW_TC_STATS_LEN ARRAY_SIZE(mlxsw_sp_port_hw_tc_stats) #define MLXSW_SP_PORT_ETHTOOL_STATS_LEN (MLXSW_SP_PORT_HW_STATS_LEN + \ + MLXSW_SP_PORT_HW_RFC_2863_STATS_LEN + \ MLXSW_SP_PORT_HW_RFC_2819_STATS_LEN + \ + MLXSW_SP_PORT_HW_RFC_3635_STATS_LEN + \ + MLXSW_SP_PORT_HW_DISCARD_STATS_LEN + \ (MLXSW_SP_PORT_HW_PRIO_STATS_LEN * \ IEEE_8021QAZ_MAX_TCS) + \ (MLXSW_SP_PORT_HW_TC_STATS_LEN * \ @@ -2020,12 +2133,31 @@ static void mlxsw_sp_port_get_strings(struct net_device *dev, ETH_GSTRING_LEN); p += ETH_GSTRING_LEN; } + + for (i = 0; i < MLXSW_SP_PORT_HW_RFC_2863_STATS_LEN; i++) { + memcpy(p, mlxsw_sp_port_hw_rfc_2863_stats[i].str, + ETH_GSTRING_LEN); + p += ETH_GSTRING_LEN; + } + for (i = 0; i < MLXSW_SP_PORT_HW_RFC_2819_STATS_LEN; i++) { memcpy(p, mlxsw_sp_port_hw_rfc_2819_stats[i].str, ETH_GSTRING_LEN); p += ETH_GSTRING_LEN; } + for (i = 0; i < MLXSW_SP_PORT_HW_RFC_3635_STATS_LEN; i++) { + memcpy(p, mlxsw_sp_port_hw_rfc_3635_stats[i].str, + ETH_GSTRING_LEN); + p += ETH_GSTRING_LEN; + } + + for (i = 0; i < MLXSW_SP_PORT_HW_DISCARD_STATS_LEN; i++) { + memcpy(p, mlxsw_sp_port_hw_discard_stats[i].str, + ETH_GSTRING_LEN); + p += ETH_GSTRING_LEN; + } + for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) mlxsw_sp_port_get_prio_strings(&p, i); @@ -2068,10 +2200,22 @@ mlxsw_sp_get_hw_stats_by_group(struct mlxsw_sp_port_hw_stats **p_hw_stats, *p_hw_stats = mlxsw_sp_port_hw_stats; *p_len = MLXSW_SP_PORT_HW_STATS_LEN; break; + case MLXSW_REG_PPCNT_RFC_2863_CNT: + *p_hw_stats = mlxsw_sp_port_hw_rfc_2863_stats; + *p_len = MLXSW_SP_PORT_HW_RFC_2863_STATS_LEN; + break; case MLXSW_REG_PPCNT_RFC_2819_CNT: *p_hw_stats = mlxsw_sp_port_hw_rfc_2819_stats; *p_len = MLXSW_SP_PORT_HW_RFC_2819_STATS_LEN; break; + case MLXSW_REG_PPCNT_RFC_3635_CNT: + *p_hw_stats = mlxsw_sp_port_hw_rfc_3635_stats; + *p_len = MLXSW_SP_PORT_HW_RFC_3635_STATS_LEN; + break; + case MLXSW_REG_PPCNT_DISCARD_CNT: + *p_hw_stats = mlxsw_sp_port_hw_discard_stats; + *p_len = MLXSW_SP_PORT_HW_DISCARD_STATS_LEN; + break; case MLXSW_REG_PPCNT_PRIO_CNT: 
*p_hw_stats = mlxsw_sp_port_hw_prio_stats; *p_len = MLXSW_SP_PORT_HW_PRIO_STATS_LEN; @@ -2121,11 +2265,26 @@ static void mlxsw_sp_port_get_stats(struct net_device *dev, data, data_index); data_index = MLXSW_SP_PORT_HW_STATS_LEN; + /* RFC 2863 Counters */ + __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_RFC_2863_CNT, 0, + data, data_index); + data_index += MLXSW_SP_PORT_HW_RFC_2863_STATS_LEN; + /* RFC 2819 Counters */ __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_RFC_2819_CNT, 0, data, data_index); data_index += MLXSW_SP_PORT_HW_RFC_2819_STATS_LEN; + /* RFC 3635 Counters */ + __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_RFC_3635_CNT, 0, + data, data_index); + data_index += MLXSW_SP_PORT_HW_RFC_3635_STATS_LEN; + + /* Discard Counters */ + __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_DISCARD_CNT, 0, + data, data_index); + data_index += MLXSW_SP_PORT_HW_DISCARD_STATS_LEN; + /* Per-Priority Counters */ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_PRIO_CNT, i, @@ -2892,7 +3051,7 @@ static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port, mlxsw_sp_port->dev = dev; mlxsw_sp_port->mlxsw_sp = mlxsw_sp; mlxsw_sp_port->local_port = local_port; - mlxsw_sp_port->pvid = 1; + mlxsw_sp_port->pvid = MLXSW_SP_DEFAULT_VID; mlxsw_sp_port->split = split; mlxsw_sp_port->mapping.module = module; mlxsw_sp_port->mapping.width = width; @@ -3031,13 +3190,22 @@ static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port, goto err_port_nve_init; } - mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_get(mlxsw_sp_port, 1); + err = mlxsw_sp_port_pvid_set(mlxsw_sp_port, MLXSW_SP_DEFAULT_VID); + if (err) { + dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to set PVID\n", + mlxsw_sp_port->local_port); + goto err_port_pvid_set; + } + + mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_create(mlxsw_sp_port, + MLXSW_SP_DEFAULT_VID); if (IS_ERR(mlxsw_sp_port_vlan)) { dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to create VID 1\n", mlxsw_sp_port->local_port); err = PTR_ERR(mlxsw_sp_port_vlan); - goto err_port_vlan_get; + goto err_port_vlan_create; } + mlxsw_sp_port->default_vlan = mlxsw_sp_port_vlan; mlxsw_sp_port_switchdev_init(mlxsw_sp_port); mlxsw_sp->ports[local_port] = mlxsw_sp_port; @@ -3057,8 +3225,9 @@ static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port, err_register_netdev: mlxsw_sp->ports[local_port] = NULL; mlxsw_sp_port_switchdev_fini(mlxsw_sp_port); - mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan); -err_port_vlan_get: + mlxsw_sp_port_vlan_destroy(mlxsw_sp_port_vlan); +err_port_vlan_create: +err_port_pvid_set: mlxsw_sp_port_nve_fini(mlxsw_sp_port); err_port_nve_init: mlxsw_sp_tc_qdisc_fini(mlxsw_sp_port); @@ -3099,7 +3268,7 @@ static void mlxsw_sp_port_remove(struct mlxsw_sp *mlxsw_sp, u8 local_port) unregister_netdev(mlxsw_sp_port->dev); /* This calls ndo_stop */ mlxsw_sp->ports[local_port] = NULL; mlxsw_sp_port_switchdev_fini(mlxsw_sp_port); - mlxsw_sp_port_vlan_flush(mlxsw_sp_port); + mlxsw_sp_port_vlan_flush(mlxsw_sp_port, true); mlxsw_sp_port_nve_fini(mlxsw_sp_port); mlxsw_sp_tc_qdisc_fini(mlxsw_sp_port); mlxsw_sp_port_fids_fini(mlxsw_sp_port); @@ -3394,10 +3563,10 @@ static void mlxsw_sp_rx_listener_mark_func(struct sk_buff *skb, u8 local_port, return mlxsw_sp_rx_listener_no_mark_func(skb, local_port, priv); } -static void mlxsw_sp_rx_listener_mr_mark_func(struct sk_buff *skb, +static void mlxsw_sp_rx_listener_l3_mark_func(struct sk_buff *skb, u8 local_port, void *priv) { - skb->offload_mr_fwd_mark = 1; + 
skb->offload_l3_fwd_mark = 1; skb->offload_fwd_mark = 1; return mlxsw_sp_rx_listener_no_mark_func(skb, local_port, priv); } @@ -3445,8 +3614,8 @@ out: MLXSW_RXL(mlxsw_sp_rx_listener_mark_func, _trap_id, _action, \ _is_ctrl, SP_##_trap_group, DISCARD) -#define MLXSW_SP_RXL_MR_MARK(_trap_id, _action, _trap_group, _is_ctrl) \ - MLXSW_RXL(mlxsw_sp_rx_listener_mr_mark_func, _trap_id, _action, \ +#define MLXSW_SP_RXL_L3_MARK(_trap_id, _action, _trap_group, _is_ctrl) \ + MLXSW_RXL(mlxsw_sp_rx_listener_l3_mark_func, _trap_id, _action, \ _is_ctrl, SP_##_trap_group, DISCARD) #define MLXSW_SP_EVENTL(_func, _trap_id) \ @@ -3479,7 +3648,7 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = { /* L3 traps */ MLXSW_SP_RXL_MARK(MTUERROR, TRAP_TO_CPU, ROUTER_EXP, false), MLXSW_SP_RXL_MARK(TTLERROR, TRAP_TO_CPU, ROUTER_EXP, false), - MLXSW_SP_RXL_MARK(LBERROR, TRAP_TO_CPU, ROUTER_EXP, false), + MLXSW_SP_RXL_L3_MARK(LBERROR, MIRROR_TO_CPU, LBERROR, false), MLXSW_SP_RXL_MARK(IP2ME, TRAP_TO_CPU, IP2ME, false), MLXSW_SP_RXL_MARK(IPV6_UNSPECIFIED_ADDRESS, TRAP_TO_CPU, ROUTER_EXP, false), @@ -3523,7 +3692,7 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = { MLXSW_SP_RXL_MARK(IPV6_PIM, TRAP_TO_CPU, PIM, false), MLXSW_SP_RXL_MARK(RPF, TRAP_TO_CPU, RPF, false), MLXSW_SP_RXL_MARK(ACL1, TRAP_TO_CPU, MULTICAST, false), - MLXSW_SP_RXL_MR_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false), + MLXSW_SP_RXL_L3_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false), /* NVE traps */ MLXSW_SP_RXL_MARK(NVE_ENCAP_ARP, TRAP_TO_CPU, ARP, false), MLXSW_SP_RXL_NO_MARK(NVE_DECAP_ARP, TRAP_TO_CPU, ARP, false), @@ -3554,6 +3723,7 @@ static int mlxsw_sp_cpu_policers_set(struct mlxsw_core *mlxsw_core) case MLXSW_REG_HTGT_TRAP_GROUP_SP_OSPF: case MLXSW_REG_HTGT_TRAP_GROUP_SP_PIM: case MLXSW_REG_HTGT_TRAP_GROUP_SP_RPF: + case MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR: rate = 128; burst_size = 7; break; @@ -3639,6 +3809,7 @@ static int mlxsw_sp_trap_groups_set(struct mlxsw_core *mlxsw_core) case MLXSW_REG_HTGT_TRAP_GROUP_SP_ROUTER_EXP: case MLXSW_REG_HTGT_TRAP_GROUP_SP_REMOTE_ROUTE: case MLXSW_REG_HTGT_TRAP_GROUP_SP_MULTICAST: + case MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR: priority = 1; tc = 1; break; @@ -3841,6 +4012,12 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core, goto err_nve_init; } + err = mlxsw_sp_acl_init(mlxsw_sp); + if (err) { + dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize ACL\n"); + goto err_acl_init; + } + err = mlxsw_sp_router_init(mlxsw_sp); if (err) { dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize router\n"); @@ -3858,12 +4035,6 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core, goto err_netdev_notifier; } - err = mlxsw_sp_acl_init(mlxsw_sp); - if (err) { - dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize ACL\n"); - goto err_acl_init; - } - err = mlxsw_sp_dpipe_init(mlxsw_sp); if (err) { dev_err(mlxsw_sp->bus_info->dev, "Failed to init pipeline debug\n"); @@ -3881,12 +4052,12 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core, err_ports_create: mlxsw_sp_dpipe_fini(mlxsw_sp); err_dpipe_init: - mlxsw_sp_acl_fini(mlxsw_sp); -err_acl_init: unregister_netdevice_notifier(&mlxsw_sp->netdevice_nb); err_netdev_notifier: mlxsw_sp_router_fini(mlxsw_sp); err_router_init: + mlxsw_sp_acl_fini(mlxsw_sp); +err_acl_init: mlxsw_sp_nve_fini(mlxsw_sp); err_nve_init: mlxsw_sp_afa_fini(mlxsw_sp); @@ -3922,6 +4093,7 @@ static int mlxsw_sp1_init(struct mlxsw_core *mlxsw_core, mlxsw_sp->mr_tcam_ops = &mlxsw_sp1_mr_tcam_ops; mlxsw_sp->acl_tcam_ops = &mlxsw_sp1_acl_tcam_ops; mlxsw_sp->nve_ops_arr = 
mlxsw_sp1_nve_ops_arr; + mlxsw_sp->mac_mask = mlxsw_sp1_mac_mask; return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info); } @@ -3937,6 +4109,7 @@ static int mlxsw_sp2_init(struct mlxsw_core *mlxsw_core, mlxsw_sp->mr_tcam_ops = &mlxsw_sp2_mr_tcam_ops; mlxsw_sp->acl_tcam_ops = &mlxsw_sp2_acl_tcam_ops; mlxsw_sp->nve_ops_arr = mlxsw_sp2_nve_ops_arr; + mlxsw_sp->mac_mask = mlxsw_sp2_mac_mask; return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info); } @@ -3947,9 +4120,9 @@ static void mlxsw_sp_fini(struct mlxsw_core *mlxsw_core) mlxsw_sp_ports_remove(mlxsw_sp); mlxsw_sp_dpipe_fini(mlxsw_sp); - mlxsw_sp_acl_fini(mlxsw_sp); unregister_netdevice_notifier(&mlxsw_sp->netdevice_nb); mlxsw_sp_router_fini(mlxsw_sp); + mlxsw_sp_acl_fini(mlxsw_sp); mlxsw_sp_nve_fini(mlxsw_sp); mlxsw_sp_afa_fini(mlxsw_sp); mlxsw_sp_counter_pool_fini(mlxsw_sp); @@ -3962,16 +4135,20 @@ static void mlxsw_sp_fini(struct mlxsw_core *mlxsw_core) mlxsw_sp_kvdl_fini(mlxsw_sp); } +/* Per-FID flood tables are used for both "true" 802.1D FIDs and emulated + * 802.1Q FIDs + */ +#define MLXSW_SP_FID_FLOOD_TABLE_SIZE (MLXSW_SP_FID_8021D_MAX + \ + VLAN_VID_MASK - 1) + static const struct mlxsw_config_profile mlxsw_sp1_config_profile = { .used_max_mid = 1, .max_mid = MLXSW_SP_MID_MAX, .used_flood_tables = 1, .used_flood_mode = 1, .flood_mode = 3, - .max_fid_offset_flood_tables = 3, - .fid_offset_flood_table_size = VLAN_N_VID - 1, .max_fid_flood_tables = 3, - .fid_flood_table_size = MLXSW_SP_FID_8021D_MAX, + .fid_flood_table_size = MLXSW_SP_FID_FLOOD_TABLE_SIZE, .used_max_ib_mc = 1, .max_ib_mc = 0, .used_max_pkey = 1, @@ -3994,10 +4171,8 @@ static const struct mlxsw_config_profile mlxsw_sp2_config_profile = { .used_flood_tables = 1, .used_flood_mode = 1, .flood_mode = 3, - .max_fid_offset_flood_tables = 3, - .fid_offset_flood_table_size = VLAN_N_VID - 1, .max_fid_flood_tables = 3, - .fid_flood_table_size = MLXSW_SP_FID_8021D_MAX, + .fid_flood_table_size = MLXSW_SP_FID_FLOOD_TABLE_SIZE, .used_max_ib_mc = 1, .max_ib_mc = 0, .used_max_pkey = 1, @@ -4177,6 +4352,52 @@ static int mlxsw_sp_kvd_sizes_get(struct mlxsw_core *mlxsw_core, return 0; } +static int +mlxsw_sp_devlink_param_fw_load_policy_validate(struct devlink *devlink, u32 id, + union devlink_param_value val, + struct netlink_ext_ack *extack) +{ + if ((val.vu8 != DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DRIVER) && + (val.vu8 != DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_FLASH)) { + NL_SET_ERR_MSG_MOD(extack, "'fw_load_policy' must be 'driver' or 'flash'"); + return -EINVAL; + } + + return 0; +} + +static const struct devlink_param mlxsw_sp_devlink_params[] = { + DEVLINK_PARAM_GENERIC(FW_LOAD_POLICY, + BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), + NULL, NULL, + mlxsw_sp_devlink_param_fw_load_policy_validate), +}; + +static int mlxsw_sp_params_register(struct mlxsw_core *mlxsw_core) +{ + struct devlink *devlink = priv_to_devlink(mlxsw_core); + union devlink_param_value value; + int err; + + err = devlink_params_register(devlink, mlxsw_sp_devlink_params, + ARRAY_SIZE(mlxsw_sp_devlink_params)); + if (err) + return err; + + value.vu8 = DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DRIVER; + devlink_param_driverinit_value_set(devlink, + DEVLINK_PARAM_GENERIC_ID_FW_LOAD_POLICY, + value); + return 0; +} + +static void mlxsw_sp_params_unregister(struct mlxsw_core *mlxsw_core) +{ + devlink_params_unregister(priv_to_devlink(mlxsw_core), + mlxsw_sp_devlink_params, + ARRAY_SIZE(mlxsw_sp_devlink_params)); +} + static struct mlxsw_driver mlxsw_sp1_driver = { .kind = mlxsw_sp1_driver_name, .priv_size = sizeof(struct mlxsw_sp), @@ -4198,6 +4419,8 
@@ static struct mlxsw_driver mlxsw_sp1_driver = { .txhdr_construct = mlxsw_sp_txhdr_construct, .resources_register = mlxsw_sp1_resources_register, .kvd_sizes_get = mlxsw_sp_kvd_sizes_get, + .params_register = mlxsw_sp_params_register, + .params_unregister = mlxsw_sp_params_unregister, .txhdr_len = MLXSW_TXHDR_LEN, .profile = &mlxsw_sp1_config_profile, .res_query_enabled = true, @@ -4223,6 +4446,8 @@ static struct mlxsw_driver mlxsw_sp2_driver = { .sb_occ_tc_port_bind_get = mlxsw_sp_sb_occ_tc_port_bind_get, .txhdr_construct = mlxsw_sp_txhdr_construct, .resources_register = mlxsw_sp2_resources_register, + .params_register = mlxsw_sp_params_register, + .params_unregister = mlxsw_sp_params_unregister, .txhdr_len = MLXSW_TXHDR_LEN, .profile = &mlxsw_sp2_config_profile, .res_query_enabled = true, @@ -4298,6 +4523,25 @@ void mlxsw_sp_port_dev_put(struct mlxsw_sp_port *mlxsw_sp_port) dev_put(mlxsw_sp_port->dev); } +static void +mlxsw_sp_port_lag_uppers_cleanup(struct mlxsw_sp_port *mlxsw_sp_port, + struct net_device *lag_dev) +{ + struct net_device *br_dev = netdev_master_upper_dev_get(lag_dev); + struct net_device *upper_dev; + struct list_head *iter; + + if (netif_is_bridge_port(lag_dev)) + mlxsw_sp_port_bridge_leave(mlxsw_sp_port, lag_dev, br_dev); + + netdev_for_each_upper_dev_rcu(lag_dev, upper_dev, iter) { + if (!netif_is_bridge_port(upper_dev)) + continue; + br_dev = netdev_master_upper_dev_get(upper_dev); + mlxsw_sp_port_bridge_leave(mlxsw_sp_port, upper_dev, br_dev); + } +} + static int mlxsw_sp_lag_create(struct mlxsw_sp *mlxsw_sp, u16 lag_id) { char sldr_pl[MLXSW_REG_SLDR_LEN]; @@ -4425,7 +4669,6 @@ static int mlxsw_sp_port_lag_join(struct mlxsw_sp_port *mlxsw_sp_port, struct net_device *lag_dev) { struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp; - struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan; struct mlxsw_sp_upper *lag; u16 lag_id; u8 port_index; @@ -4459,9 +4702,8 @@ static int mlxsw_sp_port_lag_join(struct mlxsw_sp_port *mlxsw_sp_port, lag->ref_count++; /* Port is no longer usable as a router interface */ - mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, 1); - if (mlxsw_sp_port_vlan->fid) - mlxsw_sp_port_vlan_router_leave(mlxsw_sp_port_vlan); + if (mlxsw_sp_port->default_vlan->fid) + mlxsw_sp_port_vlan_router_leave(mlxsw_sp_port->default_vlan); return 0; @@ -4489,7 +4731,12 @@ static void mlxsw_sp_port_lag_leave(struct mlxsw_sp_port *mlxsw_sp_port, mlxsw_sp_lag_col_port_remove(mlxsw_sp_port, lag_id); /* Any VLANs configured on the port are no longer valid */ - mlxsw_sp_port_vlan_flush(mlxsw_sp_port); + mlxsw_sp_port_vlan_flush(mlxsw_sp_port, false); + mlxsw_sp_port_vlan_cleanup(mlxsw_sp_port->default_vlan); + /* Make the LAG and its directly linked uppers leave bridges they + * are members of + */ + mlxsw_sp_port_lag_uppers_cleanup(mlxsw_sp_port, lag_dev); if (lag->ref_count == 1) mlxsw_sp_lag_destroy(mlxsw_sp, lag_id); @@ -4499,9 +4746,8 @@ static void mlxsw_sp_port_lag_leave(struct mlxsw_sp_port *mlxsw_sp_port, mlxsw_sp_port->lagged = 0; lag->ref_count--; - mlxsw_sp_port_vlan_get(mlxsw_sp_port, 1); /* Make sure untagged frames are allowed to ingress */ - mlxsw_sp_port_pvid_set(mlxsw_sp_port, 1); + mlxsw_sp_port_pvid_set(mlxsw_sp_port, MLXSW_SP_DEFAULT_VID); } static int mlxsw_sp_lag_dist_port_add(struct mlxsw_sp_port *mlxsw_sp_port, @@ -4579,7 +4825,7 @@ static int mlxsw_sp_port_ovs_join(struct mlxsw_sp_port *mlxsw_sp_port) err = mlxsw_sp_port_stp_set(mlxsw_sp_port, true); if (err) goto err_port_stp_set; - err = mlxsw_sp_port_vlan_set(mlxsw_sp_port, 2,
VLAN_N_VID - 1, + err = mlxsw_sp_port_vlan_set(mlxsw_sp_port, 1, VLAN_N_VID - 2, true, false); if (err) goto err_port_vlan_set; @@ -4611,7 +4857,7 @@ static void mlxsw_sp_port_ovs_leave(struct mlxsw_sp_port *mlxsw_sp_port) mlxsw_sp_port_vid_learning_set(mlxsw_sp_port, vid, true); - mlxsw_sp_port_vlan_set(mlxsw_sp_port, 2, VLAN_N_VID - 1, + mlxsw_sp_port_vlan_set(mlxsw_sp_port, 1, VLAN_N_VID - 2, false, false); mlxsw_sp_port_stp_set(mlxsw_sp_port, false); mlxsw_sp_port_vp_mode_set(mlxsw_sp_port, false); @@ -4631,6 +4877,30 @@ static bool mlxsw_sp_bridge_has_multiple_vxlans(struct net_device *br_dev) return num_vxlans > 1; } +static bool mlxsw_sp_bridge_vxlan_vlan_is_valid(struct net_device *br_dev) +{ + DECLARE_BITMAP(vlans, VLAN_N_VID) = {0}; + struct net_device *dev; + struct list_head *iter; + + netdev_for_each_lower_dev(br_dev, dev, iter) { + u16 pvid; + int err; + + if (!netif_is_vxlan(dev)) + continue; + + err = mlxsw_sp_vxlan_mapped_vid(dev, &pvid); + if (err || !pvid) + continue; + + if (test_and_set_bit(pvid, vlans)) + return false; + } + + return true; +} + static bool mlxsw_sp_bridge_vxlan_is_valid(struct net_device *br_dev, struct netlink_ext_ack *extack) { @@ -4639,13 +4909,15 @@ static bool mlxsw_sp_bridge_vxlan_is_valid(struct net_device *br_dev, return false; } - if (br_vlan_enabled(br_dev)) { - NL_SET_ERR_MSG_MOD(extack, "VLAN filtering can not be enabled on a bridge with a VxLAN device"); + if (!br_vlan_enabled(br_dev) && + mlxsw_sp_bridge_has_multiple_vxlans(br_dev)) { + NL_SET_ERR_MSG_MOD(extack, "Multiple VxLAN devices are not supported in a VLAN-unaware bridge"); return false; } - if (mlxsw_sp_bridge_has_multiple_vxlans(br_dev)) { - NL_SET_ERR_MSG_MOD(extack, "Multiple VxLAN devices are not supported in a VLAN-unaware bridge"); + if (br_vlan_enabled(br_dev) && + !mlxsw_sp_bridge_vxlan_vlan_is_valid(br_dev)) { + NL_SET_ERR_MSG_MOD(extack, "Multiple VxLAN devices cannot have the same VLAN as PVID and egress untagged"); return false; } @@ -4719,11 +4991,6 @@ static int mlxsw_sp_netdevice_port_upper_event(struct net_device *lower_dev, NL_SET_ERR_MSG_MOD(extack, "Can not put a VLAN on an OVS port"); return -EINVAL; } - if (is_vlan_dev(upper_dev) && - vlan_dev_vlan_id(upper_dev) == 1) { - NL_SET_ERR_MSG_MOD(extack, "Creating a VLAN device with VID 1 is unsupported: VLAN 1 carries untagged traffic"); - return -EINVAL; - } break; case NETDEV_CHANGEUPPER: upper_dev = info->upper_dev; @@ -4752,6 +5019,16 @@ static int mlxsw_sp_netdevice_port_upper_event(struct net_device *lower_dev, } else if (netif_is_macvlan(upper_dev)) { if (!info->linking) mlxsw_sp_rif_macvlan_del(mlxsw_sp, upper_dev); + } else if (is_vlan_dev(upper_dev)) { + struct net_device *br_dev; + + if (!netif_is_bridge_port(upper_dev)) + break; + if (info->linking) + break; + br_dev = netdev_master_upper_dev_get(upper_dev); + mlxsw_sp_port_bridge_leave(mlxsw_sp_port, upper_dev, + br_dev); } break; } @@ -4908,6 +5185,48 @@ static int mlxsw_sp_netdevice_lag_port_vlan_event(struct net_device *vlan_dev, return 0; } +static int mlxsw_sp_netdevice_bridge_vlan_event(struct net_device *vlan_dev, + struct net_device *br_dev, + unsigned long event, void *ptr, + u16 vid) +{ + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(vlan_dev); + struct netdev_notifier_changeupper_info *info = ptr; + struct netlink_ext_ack *extack; + struct net_device *upper_dev; + + if (!mlxsw_sp) + return 0; + + extack = netdev_notifier_info_to_extack(&info->info); + + switch (event) { + case NETDEV_PRECHANGEUPPER: + upper_dev = info->upper_dev; + if 
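mlxsw_sp_bridge_vxlan_vlan_is_valid() above rejects a VLAN-aware bridge in which two VxLAN devices resolve to the same PVID, by marking each PVID in a VID bitmap and letting test_and_set_bit() flag the second user. A standalone sketch of that idiom, assuming a hypothetical vids[] array in place of the PVIDs the driver derives with mlxsw_sp_vxlan_mapped_vid():

    #include <linux/bitmap.h>
    #include <linux/bitops.h>
    #include <linux/if_vlan.h>

    static bool vxlan_pvids_are_unique(const u16 *vids, int count)
    {
            DECLARE_BITMAP(vlans, VLAN_N_VID) = {0};
            int i;

            for (i = 0; i < count; i++) {
                    if (!vids[i])
                            continue; /* no PVID mapped, nothing to collide */
                    if (test_and_set_bit(vids[i], vlans))
                            return false; /* second VxLAN with this PVID */
            }
            return true;
    }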
(!netif_is_macvlan(upper_dev)) { + NL_SET_ERR_MSG_MOD(extack, "Unknown upper device type"); + return -EOPNOTSUPP; + } + if (!info->linking) + break; + if (netif_is_macvlan(upper_dev) && + !mlxsw_sp_rif_find_by_dev(mlxsw_sp, vlan_dev)) { + NL_SET_ERR_MSG_MOD(extack, "macvlan is only supported on top of router interfaces"); + return -EOPNOTSUPP; + } + break; + case NETDEV_CHANGEUPPER: + upper_dev = info->upper_dev; + if (info->linking) + break; + if (netif_is_macvlan(upper_dev)) + mlxsw_sp_rif_macvlan_del(mlxsw_sp, upper_dev); + break; + } + + return 0; +} + static int mlxsw_sp_netdevice_vlan_event(struct net_device *vlan_dev, unsigned long event, void *ptr) { @@ -4921,6 +5240,9 @@ static int mlxsw_sp_netdevice_vlan_event(struct net_device *vlan_dev, return mlxsw_sp_netdevice_lag_port_vlan_event(vlan_dev, real_dev, event, ptr, vid); + else if (netif_is_bridge_master(real_dev)) + return mlxsw_sp_netdevice_bridge_vlan_event(vlan_dev, real_dev, + event, ptr, vid); return 0; } @@ -5020,10 +5342,21 @@ static int mlxsw_sp_netdevice_vxlan_event(struct mlxsw_sp *mlxsw_sp, if (cu_info->linking) { if (!netif_running(dev)) return 0; + /* When the bridge is VLAN-aware, the VNI of the VxLAN + * device needs to be mapped to a VLAN, but at this + * point no VLANs are configured on the VxLAN device + */ + if (br_vlan_enabled(upper_dev)) + return 0; return mlxsw_sp_bridge_vxlan_join(mlxsw_sp, upper_dev, - dev, extack); + dev, 0, extack); } else { - mlxsw_sp_bridge_vxlan_leave(mlxsw_sp, upper_dev, dev); + /* VLANs were already flushed, which triggered the + * necessary cleanup + */ + if (br_vlan_enabled(upper_dev)) + return 0; + mlxsw_sp_bridge_vxlan_leave(mlxsw_sp, dev); } break; case NETDEV_PRE_UP: @@ -5034,7 +5367,7 @@ static int mlxsw_sp_netdevice_vxlan_event(struct mlxsw_sp *mlxsw_sp, return 0; if (!mlxsw_sp_lower_get(upper_dev)) return 0; - return mlxsw_sp_bridge_vxlan_join(mlxsw_sp, upper_dev, dev, + return mlxsw_sp_bridge_vxlan_join(mlxsw_sp, upper_dev, dev, 0, extack); case NETDEV_DOWN: upper_dev = netdev_master_upper_dev_get(dev); @@ -5044,7 +5377,7 @@ static int mlxsw_sp_netdevice_vxlan_event(struct mlxsw_sp *mlxsw_sp, return 0; if (!mlxsw_sp_lower_get(upper_dev)) return 0; - mlxsw_sp_bridge_vxlan_leave(mlxsw_sp, upper_dev, dev); + mlxsw_sp_bridge_vxlan_leave(mlxsw_sp, dev); break; } @@ -5075,8 +5408,10 @@ static int mlxsw_sp_netdevice_event(struct notifier_block *nb, else if (mlxsw_sp_netdev_is_ipip_ul(mlxsw_sp, dev)) err = mlxsw_sp_netdevice_ipip_ul_event(mlxsw_sp, dev, event, ptr); - else if (event == NETDEV_CHANGEADDR || event == NETDEV_CHANGEMTU) - err = mlxsw_sp_netdevice_router_port_event(dev); + else if (event == NETDEV_PRE_CHANGEADDR || + event == NETDEV_CHANGEADDR || + event == NETDEV_CHANGEMTU) + err = mlxsw_sp_netdevice_router_port_event(dev, event, ptr); else if (mlxsw_sp_is_vrf_event(event, ptr)) err = mlxsw_sp_netdevice_vrf_event(dev, event, ptr); else if (mlxsw_sp_port_dev_check(dev)) @@ -5097,18 +5432,10 @@ static struct notifier_block mlxsw_sp_inetaddr_valid_nb __read_mostly = { .notifier_call = mlxsw_sp_inetaddr_valid_event, }; -static struct notifier_block mlxsw_sp_inetaddr_nb __read_mostly = { - .notifier_call = mlxsw_sp_inetaddr_event, -}; - static struct notifier_block mlxsw_sp_inet6addr_valid_nb __read_mostly = { .notifier_call = mlxsw_sp_inet6addr_valid_event, }; -static struct notifier_block mlxsw_sp_inet6addr_nb __read_mostly = { - .notifier_call = mlxsw_sp_inet6addr_event, -}; - static const struct pci_device_id mlxsw_sp1_pci_id_table[] = { {PCI_VDEVICE(MELLANOX, 
PCI_DEVICE_ID_MELLANOX_SPECTRUM), 0}, {0, }, @@ -5134,9 +5461,7 @@ static int __init mlxsw_sp_module_init(void) int err; register_inetaddr_validator_notifier(&mlxsw_sp_inetaddr_valid_nb); - register_inetaddr_notifier(&mlxsw_sp_inetaddr_nb); register_inet6addr_validator_notifier(&mlxsw_sp_inet6addr_valid_nb); - register_inet6addr_notifier(&mlxsw_sp_inet6addr_nb); err = mlxsw_core_driver_register(&mlxsw_sp1_driver); if (err) @@ -5163,9 +5488,7 @@ err_sp1_pci_driver_register: err_sp2_core_driver_register: mlxsw_core_driver_unregister(&mlxsw_sp1_driver); err_sp1_core_driver_register: - unregister_inet6addr_notifier(&mlxsw_sp_inet6addr_nb); unregister_inet6addr_validator_notifier(&mlxsw_sp_inet6addr_valid_nb); - unregister_inetaddr_notifier(&mlxsw_sp_inetaddr_nb); unregister_inetaddr_validator_notifier(&mlxsw_sp_inetaddr_valid_nb); return err; } @@ -5176,9 +5499,7 @@ static void __exit mlxsw_sp_module_exit(void) mlxsw_pci_driver_unregister(&mlxsw_sp1_pci_driver); mlxsw_core_driver_unregister(&mlxsw_sp2_driver); mlxsw_core_driver_unregister(&mlxsw_sp1_driver); - unregister_inet6addr_notifier(&mlxsw_sp_inet6addr_nb); unregister_inet6addr_validator_notifier(&mlxsw_sp_inet6addr_valid_nb); - unregister_inetaddr_notifier(&mlxsw_sp_inetaddr_nb); unregister_inetaddr_validator_notifier(&mlxsw_sp_inetaddr_valid_nb); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index 0875a79cbe7b..a1c32a81b011 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -24,6 +25,8 @@ #include "core_acl_flex_actions.h" #include "reg.h" +#define MLXSW_SP_DEFAULT_VID (VLAN_N_VID - 1) + #define MLXSW_SP_FID_8021D_MAX 1024 #define MLXSW_SP_MID_MAX 7000 @@ -80,6 +83,10 @@ enum mlxsw_sp_fid_type { MLXSW_SP_FID_TYPE_MAX, }; +enum mlxsw_sp_nve_type { + MLXSW_SP_NVE_TYPE_VXLAN, +}; + struct mlxsw_sp_mid { struct list_head list; unsigned char addr[ETH_ALEN]; @@ -127,6 +134,7 @@ struct mlxsw_sp { struct mlxsw_core *core; const struct mlxsw_bus_info *bus_info; unsigned char base_mac[ETH_ALEN]; + const unsigned char *mac_mask; struct mlxsw_sp_upper *lags; int *port_to_module; struct mlxsw_sp_sb *sb; @@ -184,7 +192,6 @@ struct mlxsw_sp_port_vlan { struct list_head list; struct mlxsw_sp_port *mlxsw_sp_port; struct mlxsw_sp_fid *fid; - unsigned int ref_count; u16 vid; struct mlxsw_sp_bridge_port *bridge_port; struct list_head bridge_vlan_node; @@ -235,6 +242,7 @@ struct mlxsw_sp_port { } periodic_hw_stats; struct mlxsw_sp_port_sample *sample; struct list_head vlans_list; + struct mlxsw_sp_port_vlan *default_vlan; struct mlxsw_sp_qdisc *root_qdisc; struct mlxsw_sp_qdisc *tclass_qdiscs; unsigned acl_rule_count; @@ -261,6 +269,26 @@ static inline bool mlxsw_sp_bridge_has_vxlan(struct net_device *br_dev) return !!mlxsw_sp_bridge_vxlan_dev_find(br_dev); } +static inline int +mlxsw_sp_vxlan_mapped_vid(const struct net_device *vxlan_dev, u16 *p_vid) +{ + struct bridge_vlan_info vinfo; + u16 vid = 0; + int err; + + err = br_vlan_get_pvid(vxlan_dev, &vid); + if (err || !vid) + goto out; + + err = br_vlan_get_info(vxlan_dev, vid, &vinfo); + if (err || !(vinfo.flags & BRIDGE_VLAN_INFO_UNTAGGED)) + vid = 0; + +out: + *p_vid = vid; + return err; +} + static inline bool mlxsw_sp_port_is_pause_en(const struct mlxsw_sp_port *mlxsw_sp_port) { @@ -358,11 +386,15 @@ bool mlxsw_sp_bridge_device_is_offloaded(const struct mlxsw_sp *mlxsw_sp, const struct 
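Two pieces of the header above work together: MLXSW_SP_DEFAULT_VID is VLAN_N_VID - 1 = 4095, so the driver's internal default VLAN no longer occupies VID 1 (which is why the VID-1 restriction could be dropped earlier in this patch), and mlxsw_sp_vxlan_mapped_vid() reports the VLAN that is both PVID and egress-untagged on a VxLAN bridge port, or 0 if there is none. A sketch of a caller; vxlan_vni_vid_lookup() is a hypothetical helper, not driver code:

    static int vxlan_vni_vid_lookup(const struct net_device *vxlan_dev)
    {
            u16 vid;
            int err;

            err = mlxsw_sp_vxlan_mapped_vid(vxlan_dev, &vid);
            if (err)
                    return err;     /* bridge VLAN query failed */
            if (!vid)
                    return -ENOENT; /* no PVID that is also egress-untagged */
            return vid;             /* the VLAN the device's VNI maps to */
    }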
net_device *br_dev); int mlxsw_sp_bridge_vxlan_join(struct mlxsw_sp *mlxsw_sp, const struct net_device *br_dev, - const struct net_device *vxlan_dev, + const struct net_device *vxlan_dev, u16 vid, struct netlink_ext_ack *extack); void mlxsw_sp_bridge_vxlan_leave(struct mlxsw_sp *mlxsw_sp, - const struct net_device *br_dev, const struct net_device *vxlan_dev); +struct mlxsw_sp_fid *mlxsw_sp_bridge_fid_get(struct mlxsw_sp *mlxsw_sp, + const struct net_device *br_dev, + u16 vid, + struct netlink_ext_ack *extack); +extern struct notifier_block mlxsw_sp_switchdev_notifier; /* spectrum.c */ int mlxsw_sp_port_ets_set(struct mlxsw_sp_port *mlxsw_sp_port, @@ -384,8 +416,8 @@ int mlxsw_sp_port_vid_learning_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid, bool learn_enable); int mlxsw_sp_port_pvid_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid); struct mlxsw_sp_port_vlan * -mlxsw_sp_port_vlan_get(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid); -void mlxsw_sp_port_vlan_put(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan); +mlxsw_sp_port_vlan_create(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid); +void mlxsw_sp_port_vlan_destroy(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan); int mlxsw_sp_port_vlan_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid_begin, u16 vid_end, bool is_member, bool untagged); int mlxsw_sp_flow_counter_get(struct mlxsw_sp *mlxsw_sp, @@ -429,15 +461,12 @@ union mlxsw_sp_l3addr { int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp); void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp); -int mlxsw_sp_netdevice_router_port_event(struct net_device *dev); +int mlxsw_sp_netdevice_router_port_event(struct net_device *dev, + unsigned long event, void *ptr); void mlxsw_sp_rif_macvlan_del(struct mlxsw_sp *mlxsw_sp, const struct net_device *macvlan_dev); -int mlxsw_sp_inetaddr_event(struct notifier_block *unused, - unsigned long event, void *ptr); int mlxsw_sp_inetaddr_valid_event(struct notifier_block *unused, unsigned long event, void *ptr); -int mlxsw_sp_inet6addr_event(struct notifier_block *unused, - unsigned long event, void *ptr); int mlxsw_sp_inet6addr_valid_event(struct notifier_block *unused, unsigned long event, void *ptr); int mlxsw_sp_netdevice_vrf_event(struct net_device *l3_dev, unsigned long event, @@ -457,7 +486,6 @@ mlxsw_sp_netdevice_ipip_ul_event(struct mlxsw_sp *mlxsw_sp, struct netdev_notifier_info *info); void mlxsw_sp_port_vlan_router_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan); -void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif); void mlxsw_sp_rif_destroy_by_dev(struct mlxsw_sp *mlxsw_sp, struct net_device *dev); struct mlxsw_sp_rif *mlxsw_sp_rif_find_by_dev(const struct mlxsw_sp *mlxsw_sp, @@ -538,6 +566,7 @@ struct mlxsw_sp_acl_rule_info { unsigned int priority; struct mlxsw_afk_element_values values; struct mlxsw_afa_block *act_block; + u8 action_created:1; unsigned int counter_index; }; @@ -547,6 +576,7 @@ struct mlxsw_sp_acl_ruleset; /* spectrum_acl.c */ enum mlxsw_sp_acl_profile { MLXSW_SP_ACL_PROFILE_FLOWER, + MLXSW_SP_ACL_PROFILE_MR, }; struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl); @@ -581,7 +611,8 @@ void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp, u16 mlxsw_sp_acl_ruleset_group_id(struct mlxsw_sp_acl_ruleset *ruleset); struct mlxsw_sp_acl_rule_info * -mlxsw_sp_acl_rulei_create(struct mlxsw_sp_acl *acl); +mlxsw_sp_acl_rulei_create(struct mlxsw_sp_acl *acl, + struct mlxsw_afa_block *afa_block); void mlxsw_sp_acl_rulei_destroy(struct mlxsw_sp_acl_rule_info *rulei); int mlxsw_sp_acl_rulei_commit(struct mlxsw_sp_acl_rule_info 
*rulei); void mlxsw_sp_acl_rulei_priority(struct mlxsw_sp_acl_rule_info *rulei, @@ -625,6 +656,7 @@ struct mlxsw_sp_acl_rule * mlxsw_sp_acl_rule_create(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_ruleset *ruleset, unsigned long cookie, + struct mlxsw_afa_block *afa_block, struct netlink_ext_ack *extack); void mlxsw_sp_acl_rule_destroy(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_rule *rule); @@ -632,6 +664,9 @@ int mlxsw_sp_acl_rule_add(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_rule *rule); void mlxsw_sp_acl_rule_del(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_rule *rule); +int mlxsw_sp_acl_rule_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_rule *rule, + struct mlxsw_afa_block *afa_block); struct mlxsw_sp_acl_rule * mlxsw_sp_acl_rule_lookup(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_ruleset *ruleset, @@ -676,6 +711,10 @@ struct mlxsw_sp_acl_tcam_ops { void (*entry_del)(struct mlxsw_sp *mlxsw_sp, void *region_priv, void *chunk_priv, void *entry_priv); + int (*entry_action_replace)(struct mlxsw_sp *mlxsw_sp, + void *region_priv, void *chunk_priv, + void *entry_priv, + struct mlxsw_sp_acl_rule_info *rulei); int (*entry_activity_get)(struct mlxsw_sp *mlxsw_sp, void *region_priv, void *entry_priv, bool *activity); @@ -721,6 +760,12 @@ int mlxsw_sp_setup_tc_prio(struct mlxsw_sp_port *mlxsw_sp_port, struct tc_prio_qopt_offload *p); /* spectrum_fid.c */ +bool mlxsw_sp_fid_lag_vid_valid(const struct mlxsw_sp_fid *fid); +struct mlxsw_sp_fid *mlxsw_sp_fid_lookup_by_index(struct mlxsw_sp *mlxsw_sp, + u16 fid_index); +int mlxsw_sp_fid_nve_ifindex(const struct mlxsw_sp_fid *fid, int *nve_ifindex); +int mlxsw_sp_fid_nve_type(const struct mlxsw_sp_fid *fid, + enum mlxsw_sp_nve_type *p_type); struct mlxsw_sp_fid *mlxsw_sp_fid_lookup_by_vni(struct mlxsw_sp *mlxsw_sp, __be32 vni); int mlxsw_sp_fid_vni(const struct mlxsw_sp_fid *fid, __be32 *vni); @@ -728,9 +773,12 @@ int mlxsw_sp_fid_nve_flood_index_set(struct mlxsw_sp_fid *fid, u32 nve_flood_index); void mlxsw_sp_fid_nve_flood_index_clear(struct mlxsw_sp_fid *fid); bool mlxsw_sp_fid_nve_flood_index_is_set(const struct mlxsw_sp_fid *fid); -int mlxsw_sp_fid_vni_set(struct mlxsw_sp_fid *fid, __be32 vni); +int mlxsw_sp_fid_vni_set(struct mlxsw_sp_fid *fid, enum mlxsw_sp_nve_type type, + __be32 vni, int nve_ifindex); void mlxsw_sp_fid_vni_clear(struct mlxsw_sp_fid *fid); bool mlxsw_sp_fid_vni_is_set(const struct mlxsw_sp_fid *fid); +void mlxsw_sp_fid_fdb_clear_offload(const struct mlxsw_sp_fid *fid, + const struct net_device *nve_dev); int mlxsw_sp_fid_flood_set(struct mlxsw_sp_fid *fid, enum mlxsw_sp_flood_type packet_type, u8 local_port, bool member); @@ -738,10 +786,10 @@ int mlxsw_sp_fid_port_vid_map(struct mlxsw_sp_fid *fid, struct mlxsw_sp_port *mlxsw_sp_port, u16 vid); void mlxsw_sp_fid_port_vid_unmap(struct mlxsw_sp_fid *fid, struct mlxsw_sp_port *mlxsw_sp_port, u16 vid); -enum mlxsw_sp_rif_type mlxsw_sp_fid_rif_type(const struct mlxsw_sp_fid *fid); u16 mlxsw_sp_fid_index(const struct mlxsw_sp_fid *fid); enum mlxsw_sp_fid_type mlxsw_sp_fid_type(const struct mlxsw_sp_fid *fid); void mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif); +struct mlxsw_sp_rif *mlxsw_sp_fid_rif(const struct mlxsw_sp_fid *fid); enum mlxsw_sp_rif_type mlxsw_sp_fid_type_rif_type(const struct mlxsw_sp *mlxsw_sp, enum mlxsw_sp_fid_type type); @@ -749,6 +797,8 @@ u16 mlxsw_sp_fid_8021q_vid(const struct mlxsw_sp_fid *fid); struct mlxsw_sp_fid *mlxsw_sp_fid_8021q_get(struct mlxsw_sp *mlxsw_sp, u16 vid); struct mlxsw_sp_fid 
*mlxsw_sp_fid_8021d_get(struct mlxsw_sp *mlxsw_sp, int br_ifindex); +struct mlxsw_sp_fid *mlxsw_sp_fid_8021q_lookup(struct mlxsw_sp *mlxsw_sp, + u16 vid); struct mlxsw_sp_fid *mlxsw_sp_fid_8021d_lookup(struct mlxsw_sp *mlxsw_sp, int br_ifindex); struct mlxsw_sp_fid *mlxsw_sp_fid_rfid_get(struct mlxsw_sp *mlxsw_sp, @@ -797,10 +847,6 @@ extern const struct mlxsw_sp_mr_tcam_ops mlxsw_sp1_mr_tcam_ops; extern const struct mlxsw_sp_mr_tcam_ops mlxsw_sp2_mr_tcam_ops; /* spectrum_nve.c */ -enum mlxsw_sp_nve_type { - MLXSW_SP_NVE_TYPE_VXLAN, -}; - struct mlxsw_sp_nve_params { enum mlxsw_sp_nve_type type; __be32 vni; @@ -810,6 +856,9 @@ struct mlxsw_sp_nve_params { extern const struct mlxsw_sp_nve_ops *mlxsw_sp1_nve_ops_arr[]; extern const struct mlxsw_sp_nve_ops *mlxsw_sp2_nve_ops_arr[]; +int mlxsw_sp_nve_learned_ip_resolve(struct mlxsw_sp *mlxsw_sp, u32 uip, + enum mlxsw_sp_l3proto proto, + union mlxsw_sp_l3addr *addr); int mlxsw_sp_nve_flood_ip_add(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fid *fid, enum mlxsw_sp_l3proto proto, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum1_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum1_acl_tcam.c index 2a9eac90002e..fe270c1a26a6 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum1_acl_tcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum1_acl_tcam.c @@ -67,7 +67,7 @@ mlxsw_sp1_acl_ctcam_region_catchall_add(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_acl_ctcam_chunk_init(®ion->cregion, ®ion->catchall.cchunk, MLXSW_SP_ACL_TCAM_CATCHALL_PRIO); - rulei = mlxsw_sp_acl_rulei_create(mlxsw_sp->acl); + rulei = mlxsw_sp_acl_rulei_create(mlxsw_sp->acl, NULL); if (IS_ERR(rulei)) { err = PTR_ERR(rulei); goto err_rulei_create; @@ -192,6 +192,15 @@ static void mlxsw_sp1_acl_tcam_entry_del(struct mlxsw_sp *mlxsw_sp, &chunk->cchunk, &entry->centry); } +static int +mlxsw_sp1_acl_tcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + void *region_priv, void *chunk_priv, + void *entry_priv, + struct mlxsw_sp_acl_rule_info *rulei) +{ + return -EOPNOTSUPP; +} + static int mlxsw_sp1_acl_tcam_region_entry_activity_get(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam_region *_region, @@ -240,5 +249,6 @@ const struct mlxsw_sp_acl_tcam_ops mlxsw_sp1_acl_tcam_ops = { .entry_priv_size = sizeof(struct mlxsw_sp1_acl_tcam_entry), .entry_add = mlxsw_sp1_acl_tcam_entry_add, .entry_del = mlxsw_sp1_acl_tcam_entry_del, + .entry_action_replace = mlxsw_sp1_acl_tcam_entry_action_replace, .entry_activity_get = mlxsw_sp1_acl_tcam_entry_activity_get, }; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c index 8ca77f3e8f27..234ab51916db 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c @@ -34,15 +34,15 @@ mlxsw_sp2_acl_ctcam_region_entry_insert(struct mlxsw_sp_acl_ctcam_region *cregio { struct mlxsw_sp_acl_atcam_region *aregion; struct mlxsw_sp_acl_atcam_entry *aentry; - struct mlxsw_sp_acl_erp *erp; + struct mlxsw_sp_acl_erp_mask *erp_mask; aregion = mlxsw_sp_acl_tcam_cregion_aregion(cregion); aentry = mlxsw_sp_acl_tcam_centry_aentry(centry); - erp = mlxsw_sp_acl_erp_get(aregion, mask, true); - if (IS_ERR(erp)) - return PTR_ERR(erp); - aentry->erp = erp; + erp_mask = mlxsw_sp_acl_erp_mask_get(aregion, mask, true); + if (IS_ERR(erp_mask)) + return PTR_ERR(erp_mask); + aentry->erp_mask = erp_mask; return 0; } @@ -57,7 +57,7 @@ mlxsw_sp2_acl_ctcam_region_entry_remove(struct mlxsw_sp_acl_ctcam_region *cregio aregion = 
mlxsw_sp_acl_tcam_cregion_aregion(cregion); aentry = mlxsw_sp_acl_tcam_centry_aentry(centry); - mlxsw_sp_acl_erp_put(aregion, aentry->erp); + mlxsw_sp_acl_erp_mask_put(aregion, aentry->erp_mask); } static const struct mlxsw_sp_acl_ctcam_region_ops @@ -210,6 +210,23 @@ static void mlxsw_sp2_acl_tcam_entry_del(struct mlxsw_sp *mlxsw_sp, &entry->aentry); } +static int +mlxsw_sp2_acl_tcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + void *region_priv, void *chunk_priv, + void *entry_priv, + struct mlxsw_sp_acl_rule_info *rulei) +{ + struct mlxsw_sp2_acl_tcam_region *region = region_priv; + struct mlxsw_sp2_acl_tcam_chunk *chunk = chunk_priv; + struct mlxsw_sp2_acl_tcam_entry *entry = entry_priv; + + entry->act_block = rulei->act_block; + return mlxsw_sp_acl_atcam_entry_action_replace(mlxsw_sp, + ®ion->aregion, + &chunk->achunk, + &entry->aentry, rulei); +} + static int mlxsw_sp2_acl_tcam_entry_activity_get(struct mlxsw_sp *mlxsw_sp, void *region_priv, void *entry_priv, @@ -235,5 +252,6 @@ const struct mlxsw_sp_acl_tcam_ops mlxsw_sp2_acl_tcam_ops = { .entry_priv_size = sizeof(struct mlxsw_sp2_acl_tcam_entry), .entry_add = mlxsw_sp2_acl_tcam_entry_add, .entry_del = mlxsw_sp2_acl_tcam_entry_del, + .entry_action_replace = mlxsw_sp2_acl_tcam_entry_action_replace, .entry_activity_get = mlxsw_sp2_acl_tcam_entry_activity_get, }; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c index 4dd62478162e..e31ec75ac035 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c @@ -7,6 +7,201 @@ #include "spectrum.h" #include "spectrum_mr.h" +struct mlxsw_sp2_mr_tcam { + struct mlxsw_sp *mlxsw_sp; + struct mlxsw_sp_acl_block *acl_block; + struct mlxsw_sp_acl_ruleset *ruleset4; + struct mlxsw_sp_acl_ruleset *ruleset6; +}; + +struct mlxsw_sp2_mr_route { + struct mlxsw_sp2_mr_tcam *mr_tcam; +}; + +static struct mlxsw_sp_acl_ruleset * +mlxsw_sp2_mr_tcam_proto_ruleset(struct mlxsw_sp2_mr_tcam *mr_tcam, + enum mlxsw_sp_l3proto proto) +{ + switch (proto) { + case MLXSW_SP_L3_PROTO_IPV4: + return mr_tcam->ruleset4; + case MLXSW_SP_L3_PROTO_IPV6: + return mr_tcam->ruleset6; + } + return NULL; +} + +static int mlxsw_sp2_mr_tcam_bind_group(struct mlxsw_sp *mlxsw_sp, + enum mlxsw_reg_pemrbt_protocol protocol, + struct mlxsw_sp_acl_ruleset *ruleset) +{ + char pemrbt_pl[MLXSW_REG_PEMRBT_LEN]; + u16 group_id; + + group_id = mlxsw_sp_acl_ruleset_group_id(ruleset); + + mlxsw_reg_pemrbt_pack(pemrbt_pl, protocol, group_id); + return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(pemrbt), pemrbt_pl); +} + +static const enum mlxsw_afk_element mlxsw_sp2_mr_tcam_usage_ipv4[] = { + MLXSW_AFK_ELEMENT_VIRT_ROUTER_8_10, + MLXSW_AFK_ELEMENT_VIRT_ROUTER_0_7, + MLXSW_AFK_ELEMENT_SRC_IP_0_31, + MLXSW_AFK_ELEMENT_DST_IP_0_31, +}; + +static int mlxsw_sp2_mr_tcam_ipv4_init(struct mlxsw_sp2_mr_tcam *mr_tcam) +{ + struct mlxsw_afk_element_usage elusage; + int err; + + /* Initialize IPv4 ACL group. */ + mlxsw_afk_element_usage_fill(&elusage, + mlxsw_sp2_mr_tcam_usage_ipv4, + ARRAY_SIZE(mlxsw_sp2_mr_tcam_usage_ipv4)); + mr_tcam->ruleset4 = mlxsw_sp_acl_ruleset_get(mr_tcam->mlxsw_sp, + mr_tcam->acl_block, + MLXSW_SP_L3_PROTO_IPV4, + MLXSW_SP_ACL_PROFILE_MR, + &elusage); + + if (IS_ERR(mr_tcam->ruleset4)) + return PTR_ERR(mr_tcam->ruleset4); + + /* MC Router groups should be bound before routes are inserted. 
*/ + err = mlxsw_sp2_mr_tcam_bind_group(mr_tcam->mlxsw_sp, + MLXSW_REG_PEMRBT_PROTO_IPV4, + mr_tcam->ruleset4); + if (err) + goto err_bind_group; + + return 0; + +err_bind_group: + mlxsw_sp_acl_ruleset_put(mr_tcam->mlxsw_sp, mr_tcam->ruleset4); + return err; +} + +static void mlxsw_sp2_mr_tcam_ipv4_fini(struct mlxsw_sp2_mr_tcam *mr_tcam) +{ + mlxsw_sp_acl_ruleset_put(mr_tcam->mlxsw_sp, mr_tcam->ruleset4); +} + +static const enum mlxsw_afk_element mlxsw_sp2_mr_tcam_usage_ipv6[] = { + MLXSW_AFK_ELEMENT_VIRT_ROUTER_8_10, + MLXSW_AFK_ELEMENT_VIRT_ROUTER_0_7, + MLXSW_AFK_ELEMENT_SRC_IP_96_127, + MLXSW_AFK_ELEMENT_SRC_IP_64_95, + MLXSW_AFK_ELEMENT_SRC_IP_32_63, + MLXSW_AFK_ELEMENT_SRC_IP_0_31, + MLXSW_AFK_ELEMENT_DST_IP_96_127, + MLXSW_AFK_ELEMENT_DST_IP_64_95, + MLXSW_AFK_ELEMENT_DST_IP_32_63, + MLXSW_AFK_ELEMENT_DST_IP_0_31, +}; + +static int mlxsw_sp2_mr_tcam_ipv6_init(struct mlxsw_sp2_mr_tcam *mr_tcam) +{ + struct mlxsw_afk_element_usage elusage; + int err; + + /* Initialize IPv6 ACL group */ + mlxsw_afk_element_usage_fill(&elusage, + mlxsw_sp2_mr_tcam_usage_ipv6, + ARRAY_SIZE(mlxsw_sp2_mr_tcam_usage_ipv6)); + mr_tcam->ruleset6 = mlxsw_sp_acl_ruleset_get(mr_tcam->mlxsw_sp, + mr_tcam->acl_block, + MLXSW_SP_L3_PROTO_IPV6, + MLXSW_SP_ACL_PROFILE_MR, + &elusage); + + if (IS_ERR(mr_tcam->ruleset6)) + return PTR_ERR(mr_tcam->ruleset6); + + /* MC Router groups should be bound before routes are inserted. */ + err = mlxsw_sp2_mr_tcam_bind_group(mr_tcam->mlxsw_sp, + MLXSW_REG_PEMRBT_PROTO_IPV6, + mr_tcam->ruleset6); + if (err) + goto err_bind_group; + + return 0; + +err_bind_group: + mlxsw_sp_acl_ruleset_put(mr_tcam->mlxsw_sp, mr_tcam->ruleset6); + return err; +} + +static void mlxsw_sp2_mr_tcam_ipv6_fini(struct mlxsw_sp2_mr_tcam *mr_tcam) +{ + mlxsw_sp_acl_ruleset_put(mr_tcam->mlxsw_sp, mr_tcam->ruleset6); +} + +static void +mlxsw_sp2_mr_tcam_rule_parse4(struct mlxsw_sp_acl_rule_info *rulei, + struct mlxsw_sp_mr_route_key *key) +{ + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_0_31, + (char *) &key->source.addr4, + (char *) &key->source_mask.addr4, 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_0_31, + (char *) &key->group.addr4, + (char *) &key->group_mask.addr4, 4); +} + +static void +mlxsw_sp2_mr_tcam_rule_parse6(struct mlxsw_sp_acl_rule_info *rulei, + struct mlxsw_sp_mr_route_key *key) +{ + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_96_127, + &key->source.addr6.s6_addr[0x0], + &key->source_mask.addr6.s6_addr[0x0], 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_64_95, + &key->source.addr6.s6_addr[0x4], + &key->source_mask.addr6.s6_addr[0x4], 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_32_63, + &key->source.addr6.s6_addr[0x8], + &key->source_mask.addr6.s6_addr[0x8], 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_0_31, + &key->source.addr6.s6_addr[0xc], + &key->source_mask.addr6.s6_addr[0xc], 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_96_127, + &key->group.addr6.s6_addr[0x0], + &key->group_mask.addr6.s6_addr[0x0], 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_64_95, + &key->group.addr6.s6_addr[0x4], + &key->group_mask.addr6.s6_addr[0x4], 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_32_63, + &key->group.addr6.s6_addr[0x8], + &key->group_mask.addr6.s6_addr[0x8], 4); + mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_0_31, + &key->group.addr6.s6_addr[0xc], + 
&key->group_mask.addr6.s6_addr[0xc], 4); +} + +static void +mlxsw_sp2_mr_tcam_rule_parse(struct mlxsw_sp_acl_rule *rule, + struct mlxsw_sp_mr_route_key *key, + unsigned int priority) +{ + struct mlxsw_sp_acl_rule_info *rulei; + + rulei = mlxsw_sp_acl_rule_rulei(rule); + rulei->priority = priority; + mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_VIRT_ROUTER_0_7, + key->vrid, GENMASK(7, 0)); + mlxsw_sp_acl_rulei_keymask_u32(rulei, + MLXSW_AFK_ELEMENT_VIRT_ROUTER_8_10, + key->vrid >> 8, GENMASK(2, 0)); + switch (key->proto) { + case MLXSW_SP_L3_PROTO_IPV4: + return mlxsw_sp2_mr_tcam_rule_parse4(rulei, key); + case MLXSW_SP_L3_PROTO_IPV6: + return mlxsw_sp2_mr_tcam_rule_parse6(rulei, key); + } +} + static int mlxsw_sp2_mr_tcam_route_create(struct mlxsw_sp *mlxsw_sp, void *priv, void *route_priv, @@ -14,7 +209,33 @@ mlxsw_sp2_mr_tcam_route_create(struct mlxsw_sp *mlxsw_sp, void *priv, struct mlxsw_afa_block *afa_block, enum mlxsw_sp_mr_route_prio prio) { + struct mlxsw_sp2_mr_route *mr_route = route_priv; + struct mlxsw_sp2_mr_tcam *mr_tcam = priv; + struct mlxsw_sp_acl_ruleset *ruleset; + struct mlxsw_sp_acl_rule *rule; + int err; + + mr_route->mr_tcam = mr_tcam; + ruleset = mlxsw_sp2_mr_tcam_proto_ruleset(mr_tcam, key->proto); + if (WARN_ON(!ruleset)) + return -EINVAL; + + rule = mlxsw_sp_acl_rule_create(mlxsw_sp, ruleset, + (unsigned long) route_priv, afa_block, + NULL); + if (IS_ERR(rule)) + return PTR_ERR(rule); + + mlxsw_sp2_mr_tcam_rule_parse(rule, key, prio); + err = mlxsw_sp_acl_rule_add(mlxsw_sp, rule); + if (err) + goto err_rule_add; + return 0; + +err_rule_add: + mlxsw_sp_acl_rule_destroy(mlxsw_sp, rule); + return err; } static void @@ -22,6 +243,21 @@ mlxsw_sp2_mr_tcam_route_destroy(struct mlxsw_sp *mlxsw_sp, void *priv, void *route_priv, struct mlxsw_sp_mr_route_key *key) { + struct mlxsw_sp2_mr_tcam *mr_tcam = priv; + struct mlxsw_sp_acl_ruleset *ruleset; + struct mlxsw_sp_acl_rule *rule; + + ruleset = mlxsw_sp2_mr_tcam_proto_ruleset(mr_tcam, key->proto); + if (WARN_ON(!ruleset)) + return; + + rule = mlxsw_sp_acl_rule_lookup(mlxsw_sp, ruleset, + (unsigned long) route_priv); + if (WARN_ON(!rule)) + return; + + mlxsw_sp_acl_rule_del(mlxsw_sp, rule); + mlxsw_sp_acl_rule_destroy(mlxsw_sp, rule); } static int @@ -30,21 +266,64 @@ mlxsw_sp2_mr_tcam_route_update(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_mr_route_key *key, struct mlxsw_afa_block *afa_block) { - return 0; + struct mlxsw_sp2_mr_route *mr_route = route_priv; + struct mlxsw_sp2_mr_tcam *mr_tcam = mr_route->mr_tcam; + struct mlxsw_sp_acl_ruleset *ruleset; + struct mlxsw_sp_acl_rule *rule; + + ruleset = mlxsw_sp2_mr_tcam_proto_ruleset(mr_tcam, key->proto); + if (WARN_ON(!ruleset)) + return -EINVAL; + + rule = mlxsw_sp_acl_rule_lookup(mlxsw_sp, ruleset, + (unsigned long) route_priv); + if (WARN_ON(!rule)) + return -EINVAL; + + return mlxsw_sp_acl_rule_action_replace(mlxsw_sp, rule, afa_block); } static int mlxsw_sp2_mr_tcam_init(struct mlxsw_sp *mlxsw_sp, void *priv) { + struct mlxsw_sp2_mr_tcam *mr_tcam = priv; + int err; + + mr_tcam->mlxsw_sp = mlxsw_sp; + mr_tcam->acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, NULL); + if (!mr_tcam->acl_block) + return -ENOMEM; + + err = mlxsw_sp2_mr_tcam_ipv4_init(mr_tcam); + if (err) + goto err_ipv4_init; + + err = mlxsw_sp2_mr_tcam_ipv6_init(mr_tcam); + if (err) + goto err_ipv6_init; + return 0; + +err_ipv6_init: + mlxsw_sp2_mr_tcam_ipv4_fini(mr_tcam); +err_ipv4_init: + mlxsw_sp_acl_block_destroy(mr_tcam->acl_block); + return err; } static void mlxsw_sp2_mr_tcam_fini(void 
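mlxsw_sp2_mr_tcam_rule_parse() above has to split the virtual-router ID across two flex-key elements, 8 low bits and 3 high bits, and rule_parse6() slices each 128-bit IPv6 address into four 32-bit key elements at byte offsets 0x0, 0x4, 0x8 and 0xc. A small conceptual sketch of the VRID split; the real code passes the GENMASK() values as keymask arguments rather than masking in place, and the example value is illustrative only:

    #include <linux/bits.h>

    static void vrid_split(u16 vrid, u32 *low, u32 *high)
    {
            *low = vrid & GENMASK(7, 0);         /* VIRT_ROUTER_0_7 */
            *high = (vrid >> 8) & GENMASK(2, 0); /* VIRT_ROUTER_8_10 */
    }
    /* e.g. vrid = 0x5a3 -> low = 0xa3, high = 0x5 */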
*priv) { + struct mlxsw_sp2_mr_tcam *mr_tcam = priv; + + mlxsw_sp2_mr_tcam_ipv6_fini(mr_tcam); + mlxsw_sp2_mr_tcam_ipv4_fini(mr_tcam); + mlxsw_sp_acl_block_destroy(mr_tcam->acl_block); } const struct mlxsw_sp_mr_tcam_ops mlxsw_sp2_mr_tcam_ops = { + .priv_size = sizeof(struct mlxsw_sp2_mr_tcam), .init = mlxsw_sp2_mr_tcam_init, .fini = mlxsw_sp2_mr_tcam_fini, + .route_priv_size = sizeof(struct mlxsw_sp2_mr_route), .route_create = mlxsw_sp2_mr_tcam_route_create, .route_destroy = mlxsw_sp2_mr_tcam_route_destroy, .route_update = mlxsw_sp2_mr_tcam_route_update, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c index c4f9238591e6..695d33358988 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c @@ -435,7 +435,8 @@ u16 mlxsw_sp_acl_ruleset_group_id(struct mlxsw_sp_acl_ruleset *ruleset) } struct mlxsw_sp_acl_rule_info * -mlxsw_sp_acl_rulei_create(struct mlxsw_sp_acl *acl) +mlxsw_sp_acl_rulei_create(struct mlxsw_sp_acl *acl, + struct mlxsw_afa_block *afa_block) { struct mlxsw_sp_acl_rule_info *rulei; int err; @@ -443,11 +444,18 @@ mlxsw_sp_acl_rulei_create(struct mlxsw_sp_acl *acl) rulei = kzalloc(sizeof(*rulei), GFP_KERNEL); if (!rulei) return NULL; + + if (afa_block) { + rulei->act_block = afa_block; + return rulei; + } + rulei->act_block = mlxsw_afa_block_create(acl->mlxsw_sp->afa); if (IS_ERR(rulei->act_block)) { err = PTR_ERR(rulei->act_block); goto err_afa_block_create; } + rulei->action_created = 1; return rulei; err_afa_block_create: @@ -457,7 +465,8 @@ err_afa_block_create: void mlxsw_sp_acl_rulei_destroy(struct mlxsw_sp_acl_rule_info *rulei) { - mlxsw_afa_block_destroy(rulei->act_block); + if (rulei->action_created) + mlxsw_afa_block_destroy(rulei->act_block); kfree(rulei); } @@ -623,6 +632,7 @@ struct mlxsw_sp_acl_rule * mlxsw_sp_acl_rule_create(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_ruleset *ruleset, unsigned long cookie, + struct mlxsw_afa_block *afa_block, struct netlink_ext_ack *extack) { const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; @@ -639,7 +649,7 @@ mlxsw_sp_acl_rule_create(struct mlxsw_sp *mlxsw_sp, rule->cookie = cookie; rule->ruleset = ruleset; - rule->rulei = mlxsw_sp_acl_rulei_create(mlxsw_sp->acl); + rule->rulei = mlxsw_sp_acl_rulei_create(mlxsw_sp->acl, afa_block); if (IS_ERR(rule->rulei)) { err = PTR_ERR(rule->rulei); goto err_rulei_create; @@ -721,6 +731,21 @@ void mlxsw_sp_acl_rule_del(struct mlxsw_sp *mlxsw_sp, ops->rule_del(mlxsw_sp, rule->priv); } +int mlxsw_sp_acl_rule_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_rule *rule, + struct mlxsw_afa_block *afa_block) +{ + struct mlxsw_sp_acl_ruleset *ruleset = rule->ruleset; + const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; + struct mlxsw_sp_acl_rule_info *rulei; + + rulei = mlxsw_sp_acl_rule_rulei(rule); + rulei->act_block = afa_block; + + return ops->rule_action_replace(mlxsw_sp, ruleset->priv, rule->priv, + rule->rulei); +} + struct mlxsw_sp_acl_rule * mlxsw_sp_acl_rule_lookup(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_ruleset *ruleset, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c index 2dda028f94db..80fb268d51a5 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_atcam.c @@ -14,8 +14,8 @@ #include "spectrum_acl_tcam.h" #include "core_acl_flex_keys.h" -#define 
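The action_created bit introduced above records whether mlxsw_sp_acl_rulei_create() allocated the action block itself or borrowed one from the caller, as the new Spectrum-2 multicast-routing code does, so that rulei_destroy() frees only what it owns. A condensed sketch of that ownership rule; struct rule_info here is a stand-in, not the driver structure:

    #include <linux/slab.h>

    struct rule_info {
            struct mlxsw_afa_block *act_block;
            u8 action_created:1; /* 1 = allocated by rulei_create() itself */
    };

    static void rule_info_destroy(struct rule_info *rulei)
    {
            /* A caller-supplied block stays owned by the caller. */
            if (rulei->action_created)
                    mlxsw_afa_block_destroy(rulei->act_block);
            kfree(rulei);
    }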
MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_START 6 -#define MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_END 11 +#define MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_CLEAR_START 0 +#define MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_CLEAR_END 5 struct mlxsw_sp_acl_atcam_lkey_id_ht_key { char enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* MSB blocks */ @@ -34,7 +34,7 @@ struct mlxsw_sp_acl_atcam_region_ops { void (*fini)(struct mlxsw_sp_acl_atcam_region *aregion); struct mlxsw_sp_acl_atcam_lkey_id * (*lkey_id_get)(struct mlxsw_sp_acl_atcam_region *aregion, - struct mlxsw_sp_acl_rule_info *rulei, u8 erp_id); + char *enc_key, u8 erp_id); void (*lkey_id_put)(struct mlxsw_sp_acl_atcam_region *aregion, struct mlxsw_sp_acl_atcam_lkey_id *lkey_id); }; @@ -64,7 +64,7 @@ static const struct rhashtable_params mlxsw_sp_acl_atcam_entries_ht_params = { static bool mlxsw_sp_acl_atcam_is_centry(const struct mlxsw_sp_acl_atcam_entry *aentry) { - return mlxsw_sp_acl_erp_is_ctcam_erp(aentry->erp); + return mlxsw_sp_acl_erp_mask_is_ctcam(aentry->erp_mask); } static int @@ -90,8 +90,7 @@ mlxsw_sp_acl_atcam_region_generic_fini(struct mlxsw_sp_acl_atcam_region *aregion static struct mlxsw_sp_acl_atcam_lkey_id * mlxsw_sp_acl_atcam_generic_lkey_id_get(struct mlxsw_sp_acl_atcam_region *aregion, - struct mlxsw_sp_acl_rule_info *rulei, - u8 erp_id) + char *enc_key, u8 erp_id) { struct mlxsw_sp_acl_atcam_region_generic *region_generic; @@ -220,8 +219,7 @@ mlxsw_sp_acl_atcam_lkey_id_destroy(struct mlxsw_sp_acl_atcam_region *aregion, static struct mlxsw_sp_acl_atcam_lkey_id * mlxsw_sp_acl_atcam_12kb_lkey_id_get(struct mlxsw_sp_acl_atcam_region *aregion, - struct mlxsw_sp_acl_rule_info *rulei, - u8 erp_id) + char *enc_key, u8 erp_id) { struct mlxsw_sp_acl_atcam_region_12kb *region_12kb = aregion->priv; struct mlxsw_sp_acl_tcam_region *region = aregion->region; @@ -230,9 +228,10 @@ mlxsw_sp_acl_atcam_12kb_lkey_id_get(struct mlxsw_sp_acl_atcam_region *aregion, struct mlxsw_afk *afk = mlxsw_sp_acl_afk(mlxsw_sp->acl); struct mlxsw_sp_acl_atcam_lkey_id *lkey_id; - mlxsw_afk_encode(afk, region->key_info, &rulei->values, ht_key.enc_key, - NULL, MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_START, - MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_END); + memcpy(ht_key.enc_key, enc_key, sizeof(ht_key.enc_key)); + mlxsw_afk_clear(afk, ht_key.enc_key, + MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_CLEAR_START, + MLXSW_SP_ACL_ATCAM_LKEY_ID_BLOCK_CLEAR_END); ht_key.erp_id = erp_id; lkey_id = rhashtable_lookup_fast(®ion_12kb->lkey_ht, &ht_key, mlxsw_sp_acl_atcam_lkey_id_ht_params); @@ -324,6 +323,7 @@ mlxsw_sp_acl_atcam_region_init(struct mlxsw_sp *mlxsw_sp, aregion->region = region; aregion->atcam = atcam; mlxsw_sp_acl_atcam_region_type_init(aregion); + INIT_LIST_HEAD(&aregion->entries_list); err = rhashtable_init(&aregion->entries_ht, &mlxsw_sp_acl_atcam_entries_ht_params); @@ -357,6 +357,7 @@ void mlxsw_sp_acl_atcam_region_fini(struct mlxsw_sp_acl_atcam_region *aregion) mlxsw_sp_acl_erp_region_fini(aregion); aregion->ops->fini(aregion); rhashtable_destroy(&aregion->entries_ht); + WARN_ON(!list_empty(&aregion->entries_list)); } void mlxsw_sp_acl_atcam_chunk_init(struct mlxsw_sp_acl_atcam_region *aregion, @@ -379,7 +380,7 @@ mlxsw_sp_acl_atcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_rule_info *rulei) { struct mlxsw_sp_acl_tcam_region *region = aregion->region; - u8 erp_id = mlxsw_sp_acl_erp_id(aentry->erp); + u8 erp_id = mlxsw_sp_acl_erp_mask_erp_id(aentry->erp_mask); struct mlxsw_sp_acl_atcam_lkey_id *lkey_id; char ptce3_pl[MLXSW_REG_PTCE3_LEN]; u32 kvdl_index, priority; @@ -389,7 +390,8 
@@ mlxsw_sp_acl_atcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp, if (err) return err; - lkey_id = aregion->ops->lkey_id_get(aregion, rulei, erp_id); + lkey_id = aregion->ops->lkey_id_get(aregion, aentry->ht_key.enc_key, + erp_id); if (IS_ERR(lkey_id)) return PTR_ERR(lkey_id); aentry->lkey_id = lkey_id; @@ -398,6 +400,9 @@ mlxsw_sp_acl_atcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp, mlxsw_reg_ptce3_pack(ptce3_pl, true, MLXSW_REG_PTCE3_OP_WRITE_WRITE, priority, region->tcam_region_info, aentry->ht_key.enc_key, erp_id, + aentry->delta_info.start, + aentry->delta_info.mask, + aentry->delta_info.value, refcount_read(&lkey_id->refcnt) != 1, lkey_id->id, kvdl_index); err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ptce3), ptce3_pl); @@ -418,17 +423,50 @@ mlxsw_sp_acl_atcam_region_entry_remove(struct mlxsw_sp *mlxsw_sp, { struct mlxsw_sp_acl_atcam_lkey_id *lkey_id = aentry->lkey_id; struct mlxsw_sp_acl_tcam_region *region = aregion->region; - u8 erp_id = mlxsw_sp_acl_erp_id(aentry->erp); + u8 erp_id = mlxsw_sp_acl_erp_mask_erp_id(aentry->erp_mask); + char *enc_key = aentry->ht_key.enc_key; char ptce3_pl[MLXSW_REG_PTCE3_LEN]; mlxsw_reg_ptce3_pack(ptce3_pl, false, MLXSW_REG_PTCE3_OP_WRITE_WRITE, 0, - region->tcam_region_info, aentry->ht_key.enc_key, - erp_id, refcount_read(&lkey_id->refcnt) != 1, + region->tcam_region_info, + enc_key, erp_id, + aentry->delta_info.start, + aentry->delta_info.mask, + aentry->delta_info.value, + refcount_read(&lkey_id->refcnt) != 1, lkey_id->id, 0); mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ptce3), ptce3_pl); aregion->ops->lkey_id_put(aregion, lkey_id); } +static int +mlxsw_sp_acl_atcam_region_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_atcam_entry *aentry, + struct mlxsw_sp_acl_rule_info *rulei) +{ + struct mlxsw_sp_acl_atcam_lkey_id *lkey_id = aentry->lkey_id; + u8 erp_id = mlxsw_sp_acl_erp_mask_erp_id(aentry->erp_mask); + struct mlxsw_sp_acl_tcam_region *region = aregion->region; + char ptce3_pl[MLXSW_REG_PTCE3_LEN]; + u32 kvdl_index, priority; + int err; + + err = mlxsw_sp_acl_tcam_priority_get(mlxsw_sp, rulei, &priority, true); + if (err) + return err; + kvdl_index = mlxsw_afa_block_first_kvdl_index(rulei->act_block); + mlxsw_reg_ptce3_pack(ptce3_pl, true, MLXSW_REG_PTCE3_OP_WRITE_UPDATE, + priority, region->tcam_region_info, + aentry->ht_key.enc_key, erp_id, + aentry->delta_info.start, + aentry->delta_info.mask, + aentry->delta_info.value, + refcount_read(&lkey_id->refcnt) != 1, lkey_id->id, + kvdl_index); + return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ptce3), ptce3_pl); +} + static int __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_atcam_region *aregion, @@ -438,19 +476,36 @@ __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam_region *region = aregion->region; char mask[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN] = { 0 }; struct mlxsw_afk *afk = mlxsw_sp_acl_afk(mlxsw_sp->acl); - struct mlxsw_sp_acl_erp *erp; - unsigned int blocks_count; + const struct mlxsw_sp_acl_erp_delta *delta; + struct mlxsw_sp_acl_erp_mask *erp_mask; int err; - blocks_count = mlxsw_afk_key_info_blocks_count_get(region->key_info); mlxsw_afk_encode(afk, region->key_info, &rulei->values, - aentry->ht_key.enc_key, mask, 0, blocks_count - 1); - - erp = mlxsw_sp_acl_erp_get(aregion, mask, false); - if (IS_ERR(erp)) - return PTR_ERR(erp); - aentry->erp = erp; - aentry->ht_key.erp_id = mlxsw_sp_acl_erp_id(erp); + aentry->full_enc_key, mask); + + erp_mask = 
mlxsw_sp_acl_erp_mask_get(aregion, mask, false); + if (IS_ERR(erp_mask)) + return PTR_ERR(erp_mask); + aentry->erp_mask = erp_mask; + aentry->ht_key.erp_id = mlxsw_sp_acl_erp_mask_erp_id(erp_mask); + memcpy(aentry->ht_key.enc_key, aentry->full_enc_key, + sizeof(aentry->ht_key.enc_key)); + + /* Compute all needed delta information and clear the delta bits + * from the encoded key. + */ + delta = mlxsw_sp_acl_erp_delta(aentry->erp_mask); + aentry->delta_info.start = mlxsw_sp_acl_erp_delta_start(delta); + aentry->delta_info.mask = mlxsw_sp_acl_erp_delta_mask(delta); + aentry->delta_info.value = + mlxsw_sp_acl_erp_delta_value(delta, aentry->full_enc_key); + mlxsw_sp_acl_erp_delta_clear(delta, aentry->ht_key.enc_key); + + /* Add rule to the list of A-TCAM rules, assuming this + * rule is intended for the A-TCAM. If the rule does + * not fit into the A-TCAM it will be removed from the list. + */ + list_add(&aentry->list, &aregion->entries_list); /* We can't insert identical rules into the A-TCAM, so fail and * let the rule spill into C-TCAM @@ -461,6 +516,13 @@ __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp, if (err) goto err_rhashtable_insert; + /* Bloom filter must be updated here, before inserting the rule into + * the A-TCAM. + */ + err = mlxsw_sp_acl_erp_bf_insert(mlxsw_sp, aregion, erp_mask, aentry); + if (err) + goto err_bf_insert; + err = mlxsw_sp_acl_atcam_region_entry_insert(mlxsw_sp, aregion, aentry, rulei); if (err) @@ -469,10 +531,13 @@ __mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp, return 0; err_rule_insert: + mlxsw_sp_acl_erp_bf_remove(mlxsw_sp, aregion, erp_mask, aentry); +err_bf_insert: rhashtable_remove_fast(&aregion->entries_ht, &aentry->ht_node, mlxsw_sp_acl_atcam_entries_ht_params); err_rhashtable_insert: - mlxsw_sp_acl_erp_put(aregion, erp); + list_del(&aentry->list); + mlxsw_sp_acl_erp_mask_put(aregion, erp_mask); return err; } @@ -482,9 +547,21 @@ __mlxsw_sp_acl_atcam_entry_del(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_atcam_entry *aentry) { mlxsw_sp_acl_atcam_region_entry_remove(mlxsw_sp, aregion, aentry); + mlxsw_sp_acl_erp_bf_remove(mlxsw_sp, aregion, aentry->erp_mask, aentry); rhashtable_remove_fast(&aregion->entries_ht, &aentry->ht_node, mlxsw_sp_acl_atcam_entries_ht_params); - mlxsw_sp_acl_erp_put(aregion, aentry->erp); + list_del(&aentry->list); + mlxsw_sp_acl_erp_mask_put(aregion, aentry->erp_mask); +} + +static int +__mlxsw_sp_acl_atcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_atcam_entry *aentry, + struct mlxsw_sp_acl_rule_info *rulei) +{ + return mlxsw_sp_acl_atcam_region_entry_action_replace(mlxsw_sp, aregion, + aentry, rulei); } int mlxsw_sp_acl_atcam_entry_add(struct mlxsw_sp *mlxsw_sp, @@ -523,6 +600,29 @@ void mlxsw_sp_acl_atcam_entry_del(struct mlxsw_sp *mlxsw_sp, __mlxsw_sp_acl_atcam_entry_del(mlxsw_sp, aregion, aentry); } +int +mlxsw_sp_acl_atcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_atcam_chunk *achunk, + struct mlxsw_sp_acl_atcam_entry *aentry, + struct mlxsw_sp_acl_rule_info *rulei) +{ + int err; + + if (mlxsw_sp_acl_atcam_is_centry(aentry)) + err = mlxsw_sp_acl_ctcam_entry_action_replace(mlxsw_sp, + &aregion->cregion, + &achunk->cchunk, + &aentry->centry, + rulei); + else + err = __mlxsw_sp_acl_atcam_entry_action_replace(mlxsw_sp, + aregion, aentry, + rulei); + + return err; +} + int mlxsw_sp_acl_atcam_init(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_atcam *atcam) {
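The delta handling added above is what lets entries whose masks differ only slightly share a single eRP: the differing bits are carved out of the encoded key and kept per entry as a (start, mask, value) triple. A conceptual sketch on a toy 32-bit key, assuming start/mask were already computed by the eRP core; the real helpers (mlxsw_sp_acl_erp_delta_*()) operate on the flex-key byte array instead:

    static u16 delta_value_get(u32 enc_key, u8 start, u16 mask)
    {
            return (enc_key >> start) & mask; /* bits stored per entry */
    }

    static u32 delta_clear(u32 enc_key, u8 start, u16 mask)
    {
            return enc_key & ~((u32)mask << start); /* bits removed from key */
    }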
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c new file mode 100644 index 000000000000..505b87846acc --- /dev/null +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c @@ -0,0 +1,249 @@ +// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 +/* Copyright (c) 2018 Mellanox Technologies. All rights reserved */ + +#include +#include +#include +#include + +#include "spectrum.h" +#include "spectrum_acl_tcam.h" + +struct mlxsw_sp_acl_bf { + unsigned int bank_size; + refcount_t refcnt[0]; +}; + +/* The Bloom filter uses a CRC-16 hash over chunks of data which contain 4 key + * blocks, the eRP ID and the region ID. In Spectrum-2, a region key is composed + * of up to 12 key blocks, so there can be up to 3 chunks in the Bloom filter key, + * depending on the actual number of key blocks used in the region. + * The layout of the Bloom filter key is as follows: + * + * +-------------------------+------------------------+------------------------+ + * | Chunk 2 Key blocks 11-8 | Chunk 1 Key blocks 7-4 | Chunk 0 Key blocks 3-0 | + * +-------------------------+------------------------+------------------------+ + */ +#define MLXSW_BLOOM_KEY_CHUNKS 3 +#define MLXSW_BLOOM_KEY_LEN 69 + +/* Each chunk is 23 bytes: 18 bytes of it contain 4 key blocks of 36 bits each, + * 2 bytes hold the eRP ID and region ID, and 3 bytes are zero + * padding. + * The layout of each chunk is as follows: + * + * +---------+----------------------+-----------------------------------+ + * | 3 bytes | 2 bytes | 18 bytes | + * +---------+-----------+----------+-----------------------------------+ + * | 183:158 | 157:148 | 147:144 | 143:0 | + * +---------+-----------+----------+-----------------------------------+ + * | 0 | region ID | eRP ID | 4 Key blocks (18 Bytes) | + * +---------+-----------+----------+-----------------------------------+ + */ +#define MLXSW_BLOOM_CHUNK_PAD_BYTES 3 +#define MLXSW_BLOOM_CHUNK_KEY_BYTES 18 +#define MLXSW_BLOOM_KEY_CHUNK_BYTES 23 + +/* The offset of the key block within a chunk is 5 bytes as it comes after + * 3 bytes of zero padding and 16 bits of region ID and eRP ID. + */ +#define MLXSW_BLOOM_CHUNK_KEY_OFFSET 5 + +/* Each chunk contains 4 key blocks. Chunk 2 uses key blocks 11-8, + * and we need to populate it with 4 key blocks copied from the entry encoded + * key. Since the encoded key contains a padding, key block 11 starts at offset + * 2. Block 7, which is used in chunk 1, starts at offset 20 as 4 key blocks take + * 18 bytes. + * This array defines key offsets for easy access when copying key blocks from + * entry key to Bloom filter chunk. + */ +static const u8 chunk_key_offsets[MLXSW_BLOOM_KEY_CHUNKS] = {2, 20, 38}; + +/* This table is just the CRC of each possible byte. It is + * computed, MSB first, for the Bloom filter polynomial + * which is 0x8529 (1 + x^3 + x^5 + x^8 + x^10 + x^15 and + * the implicit x^16).
+ */ +static const u16 mlxsw_sp_acl_bf_crc_tab[256] = { +0x0000, 0x8529, 0x8f7b, 0x0a52, 0x9bdf, 0x1ef6, 0x14a4, 0x918d, +0xb297, 0x37be, 0x3dec, 0xb8c5, 0x2948, 0xac61, 0xa633, 0x231a, +0xe007, 0x652e, 0x6f7c, 0xea55, 0x7bd8, 0xfef1, 0xf4a3, 0x718a, +0x5290, 0xd7b9, 0xddeb, 0x58c2, 0xc94f, 0x4c66, 0x4634, 0xc31d, +0x4527, 0xc00e, 0xca5c, 0x4f75, 0xdef8, 0x5bd1, 0x5183, 0xd4aa, +0xf7b0, 0x7299, 0x78cb, 0xfde2, 0x6c6f, 0xe946, 0xe314, 0x663d, +0xa520, 0x2009, 0x2a5b, 0xaf72, 0x3eff, 0xbbd6, 0xb184, 0x34ad, +0x17b7, 0x929e, 0x98cc, 0x1de5, 0x8c68, 0x0941, 0x0313, 0x863a, +0x8a4e, 0x0f67, 0x0535, 0x801c, 0x1191, 0x94b8, 0x9eea, 0x1bc3, +0x38d9, 0xbdf0, 0xb7a2, 0x328b, 0xa306, 0x262f, 0x2c7d, 0xa954, +0x6a49, 0xef60, 0xe532, 0x601b, 0xf196, 0x74bf, 0x7eed, 0xfbc4, +0xd8de, 0x5df7, 0x57a5, 0xd28c, 0x4301, 0xc628, 0xcc7a, 0x4953, +0xcf69, 0x4a40, 0x4012, 0xc53b, 0x54b6, 0xd19f, 0xdbcd, 0x5ee4, +0x7dfe, 0xf8d7, 0xf285, 0x77ac, 0xe621, 0x6308, 0x695a, 0xec73, +0x2f6e, 0xaa47, 0xa015, 0x253c, 0xb4b1, 0x3198, 0x3bca, 0xbee3, +0x9df9, 0x18d0, 0x1282, 0x97ab, 0x0626, 0x830f, 0x895d, 0x0c74, +0x91b5, 0x149c, 0x1ece, 0x9be7, 0x0a6a, 0x8f43, 0x8511, 0x0038, +0x2322, 0xa60b, 0xac59, 0x2970, 0xb8fd, 0x3dd4, 0x3786, 0xb2af, +0x71b2, 0xf49b, 0xfec9, 0x7be0, 0xea6d, 0x6f44, 0x6516, 0xe03f, +0xc325, 0x460c, 0x4c5e, 0xc977, 0x58fa, 0xddd3, 0xd781, 0x52a8, +0xd492, 0x51bb, 0x5be9, 0xdec0, 0x4f4d, 0xca64, 0xc036, 0x451f, +0x6605, 0xe32c, 0xe97e, 0x6c57, 0xfdda, 0x78f3, 0x72a1, 0xf788, +0x3495, 0xb1bc, 0xbbee, 0x3ec7, 0xaf4a, 0x2a63, 0x2031, 0xa518, +0x8602, 0x032b, 0x0979, 0x8c50, 0x1ddd, 0x98f4, 0x92a6, 0x178f, +0x1bfb, 0x9ed2, 0x9480, 0x11a9, 0x8024, 0x050d, 0x0f5f, 0x8a76, +0xa96c, 0x2c45, 0x2617, 0xa33e, 0x32b3, 0xb79a, 0xbdc8, 0x38e1, +0xfbfc, 0x7ed5, 0x7487, 0xf1ae, 0x6023, 0xe50a, 0xef58, 0x6a71, +0x496b, 0xcc42, 0xc610, 0x4339, 0xd2b4, 0x579d, 0x5dcf, 0xd8e6, +0x5edc, 0xdbf5, 0xd1a7, 0x548e, 0xc503, 0x402a, 0x4a78, 0xcf51, +0xec4b, 0x6962, 0x6330, 0xe619, 0x7794, 0xf2bd, 0xf8ef, 0x7dc6, +0xbedb, 0x3bf2, 0x31a0, 0xb489, 0x2504, 0xa02d, 0xaa7f, 0x2f56, +0x0c4c, 0x8965, 0x8337, 0x061e, 0x9793, 0x12ba, 0x18e8, 0x9dc1, +}; + +static u16 mlxsw_sp_acl_bf_crc_byte(u16 crc, u8 c) +{ + return (crc << 8) ^ mlxsw_sp_acl_bf_crc_tab[(crc >> 8) ^ c]; +} + +static u16 mlxsw_sp_acl_bf_crc(const u8 *buffer, size_t len) +{ + u16 crc = 0; + + while (len--) + crc = mlxsw_sp_acl_bf_crc_byte(crc, *buffer++); + return crc; +} + +static void +mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_atcam_entry *aentry, + char *output, u8 *len) +{ + struct mlxsw_afk_key_info *key_info = aregion->region->key_info; + u8 chunk_index, chunk_count, block_count; + char *chunk = output; + __be16 erp_region_id; + + block_count = mlxsw_afk_key_info_blocks_count_get(key_info); + chunk_count = 1 + ((block_count - 1) >> 2); + erp_region_id = cpu_to_be16(aentry->ht_key.erp_id | + (aregion->region->id << 4)); + for (chunk_index = MLXSW_BLOOM_KEY_CHUNKS - chunk_count; + chunk_index < MLXSW_BLOOM_KEY_CHUNKS; chunk_index++) { + memset(chunk, 0, MLXSW_BLOOM_CHUNK_PAD_BYTES); + memcpy(chunk + MLXSW_BLOOM_CHUNK_PAD_BYTES, &erp_region_id, + sizeof(erp_region_id)); + memcpy(chunk + MLXSW_BLOOM_CHUNK_KEY_OFFSET, + &aentry->ht_key.enc_key[chunk_key_offsets[chunk_index]], + MLXSW_BLOOM_CHUNK_KEY_BYTES); + chunk += MLXSW_BLOOM_KEY_CHUNK_BYTES; + } + *len = chunk_count * MLXSW_BLOOM_KEY_CHUNK_BYTES; +} + +static unsigned int +mlxsw_sp_acl_bf_rule_count_index_get(struct mlxsw_sp_acl_bf *bf, + unsigned int erp_bank, + 
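mlxsw_sp_acl_bf_key_encode() above derives the number of Bloom filter chunks from the region's key block count, one chunk per 4 key blocks, rounded up. A sketch of the arithmetic, with illustrative values in the comments:

    static u8 bloom_chunk_count(u8 block_count)
    {
            /* ceil(block_count / 4) without a division, as above. */
            return 1 + ((block_count - 1) >> 2);
    }
    /* 4 blocks -> 1 chunk, 9 -> 3, 12 -> 3 (the maximum) */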
unsigned int bf_index) +{ + return erp_bank * bf->bank_size + bf_index; +} + +static unsigned int +mlxsw_sp_acl_bf_index_get(struct mlxsw_sp_acl_bf *bf, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_atcam_entry *aentry) +{ + char bf_key[MLXSW_BLOOM_KEY_LEN]; + u8 bf_size; + + mlxsw_sp_acl_bf_key_encode(aregion, aentry, bf_key, &bf_size); + return mlxsw_sp_acl_bf_crc(bf_key, bf_size); +} + +int +mlxsw_sp_acl_bf_entry_add(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_bf *bf, + struct mlxsw_sp_acl_atcam_region *aregion, + unsigned int erp_bank, + struct mlxsw_sp_acl_atcam_entry *aentry) +{ + unsigned int rule_index; + char *peabfe_pl; + u16 bf_index; + int err; + + bf_index = mlxsw_sp_acl_bf_index_get(bf, aregion, aentry); + rule_index = mlxsw_sp_acl_bf_rule_count_index_get(bf, erp_bank, + bf_index); + + if (refcount_inc_not_zero(&bf->refcnt[rule_index])) + return 0; + + peabfe_pl = kmalloc(MLXSW_REG_PEABFE_LEN, GFP_KERNEL); + if (!peabfe_pl) + return -ENOMEM; + + mlxsw_reg_peabfe_pack(peabfe_pl); + mlxsw_reg_peabfe_rec_pack(peabfe_pl, 0, 1, erp_bank, bf_index); + err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(peabfe), peabfe_pl); + kfree(peabfe_pl); + if (err) + return err; + + refcount_set(&bf->refcnt[rule_index], 1); + return 0; +} + +void +mlxsw_sp_acl_bf_entry_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_bf *bf, + struct mlxsw_sp_acl_atcam_region *aregion, + unsigned int erp_bank, + struct mlxsw_sp_acl_atcam_entry *aentry) +{ + unsigned int rule_index; + char *peabfe_pl; + u16 bf_index; + + bf_index = mlxsw_sp_acl_bf_index_get(bf, aregion, aentry); + rule_index = mlxsw_sp_acl_bf_rule_count_index_get(bf, erp_bank, + bf_index); + + if (refcount_dec_and_test(&bf->refcnt[rule_index])) { + peabfe_pl = kmalloc(MLXSW_REG_PEABFE_LEN, GFP_KERNEL); + if (!peabfe_pl) + return; + + mlxsw_reg_peabfe_pack(peabfe_pl); + mlxsw_reg_peabfe_rec_pack(peabfe_pl, 0, 0, erp_bank, bf_index); + mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(peabfe), peabfe_pl); + kfree(peabfe_pl); + } +} + +struct mlxsw_sp_acl_bf * +mlxsw_sp_acl_bf_init(struct mlxsw_sp *mlxsw_sp, unsigned int num_erp_banks) +{ + struct mlxsw_sp_acl_bf *bf; + unsigned int bf_bank_size; + + if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, ACL_MAX_BF_LOG)) + return ERR_PTR(-EIO); + + /* Bloom filter size per erp_table_bank + * is 2^ACL_MAX_BF_LOG + */ + bf_bank_size = 1 << MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_BF_LOG); + bf = kzalloc(sizeof(*bf) + bf_bank_size * num_erp_banks * + sizeof(*bf->refcnt), GFP_KERNEL); + if (!bf) + return ERR_PTR(-ENOMEM); + + bf->bank_size = bf_bank_size; + return bf; +} + +void mlxsw_sp_acl_bf_fini(struct mlxsw_sp_acl_bf *bf) +{ + kfree(bf); +} diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_ctcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_ctcam.c index e3c6fe8b1d40..b0f2d8e8ded0 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_ctcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_ctcam.c @@ -46,7 +46,6 @@ mlxsw_sp_acl_ctcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam_region *region = cregion->region; struct mlxsw_afk *afk = mlxsw_sp_acl_afk(mlxsw_sp->acl); char ptce2_pl[MLXSW_REG_PTCE2_LEN]; - unsigned int blocks_count; char *act_set; u32 priority; char *mask; @@ -63,9 +62,7 @@ mlxsw_sp_acl_ctcam_region_entry_insert(struct mlxsw_sp *mlxsw_sp, centry->parman_item.index, priority); key = mlxsw_reg_ptce2_flex_key_blocks_data(ptce2_pl); mask = mlxsw_reg_ptce2_mask_data(ptce2_pl); - blocks_count = 
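The Bloom filter bookkeeping above keeps one refcount per (eRP bank, filter index) pair and only writes the PEABFE register on the 0->1 and 1->0 transitions; the index is the CRC-16 (polynomial 0x8529) of the encoded chunks, within a bank of 2^ACL_MAX_BF_LOG counters. A simplified sketch of the add path, with the register write elided and no allocation-failure handling:

    #include <linux/refcount.h>

    static int bf_entry_add(refcount_t *refcnt, unsigned int bank_size,
                            unsigned int erp_bank, u16 bf_index)
    {
            unsigned int i = erp_bank * bank_size + bf_index;

            if (refcount_inc_not_zero(&refcnt[i]))
                    return 0; /* bit is already set in hardware */
            /* ...write PEABFE here to set (erp_bank, bf_index)... */
            refcount_set(&refcnt[i], 1);
            return 0;
    }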
mlxsw_afk_key_info_blocks_count_get(region->key_info); - mlxsw_afk_encode(afk, region->key_info, &rulei->values, key, mask, 0, - blocks_count - 1); + mlxsw_afk_encode(afk, region->key_info, &rulei->values, key, mask); err = cregion->ops->entry_insert(cregion, centry, mask); if (err) @@ -92,6 +89,27 @@ mlxsw_sp_acl_ctcam_region_entry_remove(struct mlxsw_sp *mlxsw_sp, cregion->ops->entry_remove(cregion, centry); } +static int +mlxsw_sp_acl_ctcam_region_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ctcam_region *cregion, + struct mlxsw_sp_acl_ctcam_entry *centry, + struct mlxsw_afa_block *afa_block, + unsigned int priority) +{ + char ptce2_pl[MLXSW_REG_PTCE2_LEN]; + char *act_set; + + mlxsw_reg_ptce2_pack(ptce2_pl, true, MLXSW_REG_PTCE2_OP_WRITE_UPDATE, + cregion->region->tcam_region_info, + centry->parman_item.index, priority); + + act_set = mlxsw_afa_block_first_set(afa_block); + mlxsw_reg_ptce2_flex_action_set_memcpy_to(ptce2_pl, act_set); + + return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ptce2), ptce2_pl); +} + + static int mlxsw_sp_acl_ctcam_region_parman_resize(void *priv, unsigned long new_count) { @@ -194,3 +212,15 @@ void mlxsw_sp_acl_ctcam_entry_del(struct mlxsw_sp *mlxsw_sp, parman_item_remove(cregion->parman, &cchunk->parman_prio, ¢ry->parman_item); } + +int mlxsw_sp_acl_ctcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ctcam_region *cregion, + struct mlxsw_sp_acl_ctcam_chunk *cchunk, + struct mlxsw_sp_acl_ctcam_entry *centry, + struct mlxsw_sp_acl_rule_info *rulei) +{ + return mlxsw_sp_acl_ctcam_region_entry_action_replace(mlxsw_sp, cregion, + centry, + rulei->act_block, + rulei->priority); +} diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c index 0a4fd3c8662a..1c19feefa5f2 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c @@ -7,7 +7,7 @@ #include #include #include -#include +#include #include #include @@ -24,11 +24,14 @@ struct mlxsw_sp_acl_erp_core { unsigned int erpt_entries_size[MLXSW_SP_ACL_ATCAM_REGION_TYPE_MAX + 1]; struct gen_pool *erp_tables; struct mlxsw_sp *mlxsw_sp; + struct mlxsw_sp_acl_bf *bf; unsigned int num_erp_banks; }; struct mlxsw_sp_acl_erp_key { char mask[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; +#define __MASK_LEN 0x38 +#define __MASK_IDX(i) (__MASK_LEN - (i) - 1) bool ctcam; }; @@ -36,10 +39,8 @@ struct mlxsw_sp_acl_erp { struct mlxsw_sp_acl_erp_key key; u8 id; u8 index; - refcount_t refcnt; DECLARE_BITMAP(mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN); struct list_head list; - struct rhash_head ht_node; struct mlxsw_sp_acl_erp_table *erp_table; }; @@ -53,7 +54,6 @@ struct mlxsw_sp_acl_erp_table { DECLARE_BITMAP(erp_id_bitmap, MLXSW_SP_ACL_ERP_MAX_PER_REGION); DECLARE_BITMAP(erp_index_bitmap, MLXSW_SP_ACL_ERP_MAX_PER_REGION); struct list_head atcam_erps_list; - struct rhashtable erp_ht; struct mlxsw_sp_acl_erp_core *erp_core; struct mlxsw_sp_acl_atcam_region *aregion; const struct mlxsw_sp_acl_erp_table_ops *ops; @@ -61,12 +61,8 @@ struct mlxsw_sp_acl_erp_table { unsigned int num_atcam_erps; unsigned int num_max_atcam_erps; unsigned int num_ctcam_erps; -}; - -static const struct rhashtable_params mlxsw_sp_acl_erp_ht_params = { - .key_len = sizeof(struct mlxsw_sp_acl_erp_key), - .key_offset = offsetof(struct mlxsw_sp_acl_erp, key), - .head_offset = offsetof(struct mlxsw_sp_acl_erp, ht_node), + unsigned int num_deltas; + struct objagg *objagg; }; struct 
mlxsw_sp_acl_erp_table_ops { @@ -119,14 +115,17 @@ static const struct mlxsw_sp_acl_erp_table_ops erp_no_mask_ops = { .erp_destroy = mlxsw_sp_acl_erp_no_mask_destroy, }; -bool mlxsw_sp_acl_erp_is_ctcam_erp(const struct mlxsw_sp_acl_erp *erp) +static bool +mlxsw_sp_acl_erp_table_is_used(const struct mlxsw_sp_acl_erp_table *erp_table) { - return erp->key.ctcam; + return erp_table->ops != &erp_single_mask_ops && + erp_table->ops != &erp_no_mask_ops; } -u8 mlxsw_sp_acl_erp_id(const struct mlxsw_sp_acl_erp *erp) +static unsigned int +mlxsw_sp_acl_erp_bank_get(const struct mlxsw_sp_acl_erp *erp) { - return erp->id; + return erp->index % erp->erp_table->erp_core->num_erp_banks; } static unsigned int @@ -194,12 +193,15 @@ mlxsw_sp_acl_erp_master_mask_update(struct mlxsw_sp_acl_erp_table *erp_table) static int mlxsw_sp_acl_erp_master_mask_set(struct mlxsw_sp_acl_erp_table *erp_table, - const struct mlxsw_sp_acl_erp *erp) + struct mlxsw_sp_acl_erp_key *key) { + DECLARE_BITMAP(mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN); unsigned long bit; int err; - for_each_set_bit(bit, erp->mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) + bitmap_from_arr32(mask_bitmap, (u32 *) key->mask, + MLXSW_SP_ACL_TCAM_MASK_LEN); + for_each_set_bit(bit, mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) mlxsw_sp_acl_erp_master_mask_bit_set(bit, &erp_table->master_mask); @@ -210,7 +212,7 @@ mlxsw_sp_acl_erp_master_mask_set(struct mlxsw_sp_acl_erp_table *erp_table, return 0; err_master_mask_update: - for_each_set_bit(bit, erp->mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) + for_each_set_bit(bit, mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) mlxsw_sp_acl_erp_master_mask_bit_clear(bit, &erp_table->master_mask); return err; @@ -218,12 +220,15 @@ err_master_mask_update: static int mlxsw_sp_acl_erp_master_mask_clear(struct mlxsw_sp_acl_erp_table *erp_table, - const struct mlxsw_sp_acl_erp *erp) + struct mlxsw_sp_acl_erp_key *key) { + DECLARE_BITMAP(mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN); unsigned long bit; int err; - for_each_set_bit(bit, erp->mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) + bitmap_from_arr32(mask_bitmap, (u32 *) key->mask, + MLXSW_SP_ACL_TCAM_MASK_LEN); + for_each_set_bit(bit, mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) mlxsw_sp_acl_erp_master_mask_bit_clear(bit, &erp_table->master_mask); @@ -234,7 +239,7 @@ mlxsw_sp_acl_erp_master_mask_clear(struct mlxsw_sp_acl_erp_table *erp_table, return 0; err_master_mask_update: - for_each_set_bit(bit, erp->mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) + for_each_set_bit(bit, mask_bitmap, MLXSW_SP_ACL_TCAM_MASK_LEN) mlxsw_sp_acl_erp_master_mask_bit_set(bit, &erp_table->master_mask); return err; @@ -256,26 +261,16 @@ mlxsw_sp_acl_erp_generic_create(struct mlxsw_sp_acl_erp_table *erp_table, goto err_erp_id_get; memcpy(&erp->key, key, sizeof(*key)); - bitmap_from_arr32(erp->mask_bitmap, (u32 *) key->mask, - MLXSW_SP_ACL_TCAM_MASK_LEN); list_add(&erp->list, &erp_table->atcam_erps_list); - refcount_set(&erp->refcnt, 1); erp_table->num_atcam_erps++; erp->erp_table = erp_table; - err = mlxsw_sp_acl_erp_master_mask_set(erp_table, erp); + err = mlxsw_sp_acl_erp_master_mask_set(erp_table, &erp->key); if (err) goto err_master_mask_set; - err = rhashtable_insert_fast(&erp_table->erp_ht, &erp->ht_node, - mlxsw_sp_acl_erp_ht_params); - if (err) - goto err_rhashtable_insert; - return erp; -err_rhashtable_insert: - mlxsw_sp_acl_erp_master_mask_clear(erp_table, erp); err_master_mask_set: erp_table->num_atcam_erps--; list_del(&erp->list); @@ -290,9 +285,7 @@ mlxsw_sp_acl_erp_generic_destroy(struct mlxsw_sp_acl_erp *erp) { struct 
mlxsw_sp_acl_erp_table *erp_table = erp->erp_table; - rhashtable_remove_fast(&erp_table->erp_ht, &erp->ht_node, - mlxsw_sp_acl_erp_ht_params); - mlxsw_sp_acl_erp_master_mask_clear(erp_table, erp); + mlxsw_sp_acl_erp_master_mask_clear(erp_table, &erp->key); erp_table->num_atcam_erps--; list_del(&erp->list); mlxsw_sp_acl_erp_id_put(erp_table, erp->id); @@ -524,6 +517,48 @@ err_table_relocate: return err; } +static int +mlxsw_acl_erp_table_bf_add(struct mlxsw_sp_acl_erp_table *erp_table, + struct mlxsw_sp_acl_erp *erp) +{ + struct mlxsw_sp_acl_atcam_region *aregion = erp_table->aregion; + unsigned int erp_bank = mlxsw_sp_acl_erp_bank_get(erp); + struct mlxsw_sp_acl_atcam_entry *aentry; + int err; + + list_for_each_entry(aentry, &aregion->entries_list, list) { + err = mlxsw_sp_acl_bf_entry_add(aregion->region->mlxsw_sp, + erp_table->erp_core->bf, + aregion, erp_bank, aentry); + if (err) + goto bf_entry_add_err; + } + + return 0; + +bf_entry_add_err: + list_for_each_entry_continue_reverse(aentry, &aregion->entries_list, + list) + mlxsw_sp_acl_bf_entry_del(aregion->region->mlxsw_sp, + erp_table->erp_core->bf, + aregion, erp_bank, aentry); + return err; +} + +static void +mlxsw_acl_erp_table_bf_del(struct mlxsw_sp_acl_erp_table *erp_table, + struct mlxsw_sp_acl_erp *erp) +{ + struct mlxsw_sp_acl_atcam_region *aregion = erp_table->aregion; + unsigned int erp_bank = mlxsw_sp_acl_erp_bank_get(erp); + struct mlxsw_sp_acl_atcam_entry *aentry; + + list_for_each_entry_reverse(aentry, &aregion->entries_list, list) + mlxsw_sp_acl_bf_entry_del(aregion->region->mlxsw_sp, + erp_table->erp_core->bf, + aregion, erp_bank, aentry); +} + static int mlxsw_sp_acl_erp_region_table_trans(struct mlxsw_sp_acl_erp_table *erp_table) { @@ -548,16 +583,24 @@ mlxsw_sp_acl_erp_region_table_trans(struct mlxsw_sp_acl_erp_table *erp_table) goto err_table_master_rp; } - /* Maintain the same eRP bank for the master RP, so that we - * wouldn't need to update the bloom filter + /* Make sure the master RP is using a valid index, as + * only a single eRP row is currently allocated. */ - master_rp->index = master_rp->index % erp_core->num_erp_banks; + master_rp->index = 0; __set_bit(master_rp->index, erp_table->erp_index_bitmap); err = mlxsw_sp_acl_erp_table_erp_add(erp_table, master_rp); if (err) goto err_table_master_rp_add; + /* Update Bloom filter before enabling eRP table, as rules + * on the master RP were not set to Bloom filter up to this + * point. 
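+	 * Ordering matters: a Bloom filter may return false positives,
+	 * which merely waste an eRP lookup, but it must never return a
+	 * false negative for a rule present in the A-TCAM - enabling
+	 * the eRP table before the filter is populated would create
+	 * exactly such false negatives.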
+	 */
+	err = mlxsw_acl_erp_table_bf_add(erp_table, master_rp);
+	if (err)
+		goto err_table_bf_add;
+
 	err = mlxsw_sp_acl_erp_table_enable(erp_table, false);
 	if (err)
 		goto err_table_enable;
@@ -565,6 +608,8 @@ mlxsw_sp_acl_erp_region_table_trans(struct mlxsw_sp_acl_erp_table *erp_table)
 	return 0;
 
 err_table_enable:
+	mlxsw_acl_erp_table_bf_del(erp_table, master_rp);
+err_table_bf_add:
 	mlxsw_sp_acl_erp_table_erp_del(master_rp);
 err_table_master_rp_add:
 	__clear_bit(master_rp->index, erp_table->erp_index_bitmap);
@@ -585,6 +630,7 @@ mlxsw_sp_acl_erp_region_master_mask_trans(struct mlxsw_sp_acl_erp_tab
 	master_rp = mlxsw_sp_acl_erp_table_master_rp(erp_table);
 	if (!master_rp)
 		return;
+	mlxsw_acl_erp_table_bf_del(erp_table, master_rp);
 	mlxsw_sp_acl_erp_table_erp_del(master_rp);
 	__clear_bit(master_rp->index, erp_table->erp_index_bitmap);
 	mlxsw_sp_acl_erp_table_free(erp_core, erp_table->num_max_atcam_erps,
@@ -647,9 +693,55 @@ mlxsw_sp_acl_erp_region_ctcam_disable(struct mlxsw_sp_acl_erp_table *erp_table)
 	mlxsw_sp_acl_erp_table_enable(erp_table, false);
 }
 
+static int
+__mlxsw_sp_acl_erp_table_other_inc(struct mlxsw_sp_acl_erp_table *erp_table,
+				   unsigned int *inc_num)
+{
+	int err;
+
+	/* If there are C-TCAM eRPs or deltas in use, we need to transition
+	 * the region to use the eRP table, if that was not already done.
+	 */
+	if (!mlxsw_sp_acl_erp_table_is_used(erp_table)) {
+		err = mlxsw_sp_acl_erp_region_table_trans(erp_table);
+		if (err)
+			return err;
+	}
+
+	/* When C-TCAM or deltas are used, the eRP table must be used */
+	if (erp_table->ops != &erp_multiple_masks_ops)
+		erp_table->ops = &erp_multiple_masks_ops;
+
+	(*inc_num)++;
+
+	return 0;
+}
+
+static int mlxsw_sp_acl_erp_ctcam_inc(struct mlxsw_sp_acl_erp_table *erp_table)
+{
+	return __mlxsw_sp_acl_erp_table_other_inc(erp_table,
+						  &erp_table->num_ctcam_erps);
+}
+
+static int mlxsw_sp_acl_erp_delta_inc(struct mlxsw_sp_acl_erp_table *erp_table)
+{
+	return __mlxsw_sp_acl_erp_table_other_inc(erp_table,
+						  &erp_table->num_deltas);
+}
+
 static void
-mlxsw_sp_acl_erp_ctcam_table_ops_set(struct mlxsw_sp_acl_erp_table *erp_table)
+__mlxsw_sp_acl_erp_table_other_dec(struct mlxsw_sp_acl_erp_table *erp_table,
+				   unsigned int *dec_num)
 {
+	(*dec_num)--;
+
+	/* If there are no C-TCAM eRPs or deltas in use, the state we
+	 * transition to depends on the number of A-TCAM eRPs currently
+	 * in use.
+ */ + if (erp_table->num_ctcam_erps > 0 || erp_table->num_deltas > 0) + return; + switch (erp_table->num_atcam_erps) { case 2: /* Keep using the eRP table, but correctly set the @@ -683,9 +775,21 @@ mlxsw_sp_acl_erp_ctcam_table_ops_set(struct mlxsw_sp_acl_erp_table *erp_table) } } +static void mlxsw_sp_acl_erp_ctcam_dec(struct mlxsw_sp_acl_erp_table *erp_table) +{ + __mlxsw_sp_acl_erp_table_other_dec(erp_table, + &erp_table->num_ctcam_erps); +} + +static void mlxsw_sp_acl_erp_delta_dec(struct mlxsw_sp_acl_erp_table *erp_table) +{ + __mlxsw_sp_acl_erp_table_other_dec(erp_table, + &erp_table->num_deltas); +} + static struct mlxsw_sp_acl_erp * -__mlxsw_sp_acl_erp_ctcam_mask_create(struct mlxsw_sp_acl_erp_table *erp_table, - struct mlxsw_sp_acl_erp_key *key) +mlxsw_sp_acl_erp_ctcam_mask_create(struct mlxsw_sp_acl_erp_table *erp_table, + struct mlxsw_sp_acl_erp_key *key) { struct mlxsw_sp_acl_erp *erp; int err; @@ -697,89 +801,41 @@ __mlxsw_sp_acl_erp_ctcam_mask_create(struct mlxsw_sp_acl_erp_table *erp_table, memcpy(&erp->key, key, sizeof(*key)); bitmap_from_arr32(erp->mask_bitmap, (u32 *) key->mask, MLXSW_SP_ACL_TCAM_MASK_LEN); - refcount_set(&erp->refcnt, 1); - erp_table->num_ctcam_erps++; - erp->erp_table = erp_table; - err = mlxsw_sp_acl_erp_master_mask_set(erp_table, erp); + err = mlxsw_sp_acl_erp_ctcam_inc(erp_table); if (err) - goto err_master_mask_set; + goto err_erp_ctcam_inc; + + erp->erp_table = erp_table; - err = rhashtable_insert_fast(&erp_table->erp_ht, &erp->ht_node, - mlxsw_sp_acl_erp_ht_params); + err = mlxsw_sp_acl_erp_master_mask_set(erp_table, &erp->key); if (err) - goto err_rhashtable_insert; + goto err_master_mask_set; err = mlxsw_sp_acl_erp_region_ctcam_enable(erp_table); if (err) goto err_erp_region_ctcam_enable; - /* When C-TCAM is used, the eRP table must be used */ - erp_table->ops = &erp_multiple_masks_ops; - return erp; err_erp_region_ctcam_enable: - rhashtable_remove_fast(&erp_table->erp_ht, &erp->ht_node, - mlxsw_sp_acl_erp_ht_params); -err_rhashtable_insert: - mlxsw_sp_acl_erp_master_mask_clear(erp_table, erp); + mlxsw_sp_acl_erp_master_mask_clear(erp_table, &erp->key); err_master_mask_set: - erp_table->num_ctcam_erps--; + mlxsw_sp_acl_erp_ctcam_dec(erp_table); +err_erp_ctcam_inc: kfree(erp); return ERR_PTR(err); } -static struct mlxsw_sp_acl_erp * -mlxsw_sp_acl_erp_ctcam_mask_create(struct mlxsw_sp_acl_erp_table *erp_table, - struct mlxsw_sp_acl_erp_key *key) -{ - struct mlxsw_sp_acl_erp *erp; - int err; - - /* There is a special situation where we need to spill rules - * into the C-TCAM, yet the region is still using a master - * mask and thus not performing a lookup in the C-TCAM. This - * can happen when two rules that only differ in priority - and - * thus sharing the same key - are programmed. 
In this case - * we transition the region to use an eRP table - */ - err = mlxsw_sp_acl_erp_region_table_trans(erp_table); - if (err) - return ERR_PTR(err); - - erp = __mlxsw_sp_acl_erp_ctcam_mask_create(erp_table, key); - if (IS_ERR(erp)) { - err = PTR_ERR(erp); - goto err_erp_create; - } - - return erp; - -err_erp_create: - mlxsw_sp_acl_erp_region_master_mask_trans(erp_table); - return ERR_PTR(err); -} - static void mlxsw_sp_acl_erp_ctcam_mask_destroy(struct mlxsw_sp_acl_erp *erp) { struct mlxsw_sp_acl_erp_table *erp_table = erp->erp_table; mlxsw_sp_acl_erp_region_ctcam_disable(erp_table); - rhashtable_remove_fast(&erp_table->erp_ht, &erp->ht_node, - mlxsw_sp_acl_erp_ht_params); - mlxsw_sp_acl_erp_master_mask_clear(erp_table, erp); - erp_table->num_ctcam_erps--; + mlxsw_sp_acl_erp_master_mask_clear(erp_table, &erp->key); + mlxsw_sp_acl_erp_ctcam_dec(erp_table); kfree(erp); - - /* Once the last C-TCAM eRP was destroyed, the state we - * transition to depends on the number of A-TCAM eRPs currently - * in use - */ - if (erp_table->num_ctcam_erps > 0) - return; - mlxsw_sp_acl_erp_ctcam_table_ops_set(erp_table); } static struct mlxsw_sp_acl_erp * @@ -790,7 +846,7 @@ mlxsw_sp_acl_erp_mask_create(struct mlxsw_sp_acl_erp_table *erp_table, int err; if (key->ctcam) - return __mlxsw_sp_acl_erp_ctcam_mask_create(erp_table, key); + return mlxsw_sp_acl_erp_ctcam_mask_create(erp_table, key); /* Expand the eRP table for the new eRP, if needed */ err = mlxsw_sp_acl_erp_table_expand(erp_table); @@ -838,7 +894,8 @@ mlxsw_sp_acl_erp_mask_destroy(struct mlxsw_sp_acl_erp_table *erp_table, mlxsw_sp_acl_erp_index_put(erp_table, erp->index); mlxsw_sp_acl_erp_generic_destroy(erp); - if (erp_table->num_atcam_erps == 2 && erp_table->num_ctcam_erps == 0) + if (erp_table->num_atcam_erps == 2 && erp_table->num_ctcam_erps == 0 && + erp_table->num_deltas == 0) erp_table->ops = &erp_two_masks_ops; } @@ -940,13 +997,12 @@ mlxsw_sp_acl_erp_no_mask_destroy(struct mlxsw_sp_acl_erp_table *erp_table, WARN_ON(1); } -struct mlxsw_sp_acl_erp * -mlxsw_sp_acl_erp_get(struct mlxsw_sp_acl_atcam_region *aregion, - const char *mask, bool ctcam) +struct mlxsw_sp_acl_erp_mask * +mlxsw_sp_acl_erp_mask_get(struct mlxsw_sp_acl_atcam_region *aregion, + const char *mask, bool ctcam) { - struct mlxsw_sp_acl_erp_table *erp_table = aregion->erp_table; struct mlxsw_sp_acl_erp_key key; - struct mlxsw_sp_acl_erp *erp; + struct objagg_obj *objagg_obj; /* eRPs are allocated from a shared resource, but currently all * allocations are done under RTNL. 
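With the region's rhashtable replaced by objagg, mask lifetime is handled by the object aggregation library: objagg_obj_get() returns an existing object, a new delta object on top of a compatible root, or a new root created through the ops further below. A minimal caller-side sketch, illustrative only (aregion and mask as in the surrounding function, RTNL held as the comment above requires):

	struct mlxsw_sp_acl_erp_mask *erp_mask;

	erp_mask = mlxsw_sp_acl_erp_mask_get(aregion, mask, false);
	if (IS_ERR(erp_mask))
		return PTR_ERR(erp_mask);
	/* ... program the A-TCAM entry against erp_mask ... */
	mlxsw_sp_acl_erp_mask_put(aregion, erp_mask);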
@@ -955,29 +1011,276 @@ mlxsw_sp_acl_erp_get(struct mlxsw_sp_acl_atcam_region *aregion, memcpy(key.mask, mask, MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN); key.ctcam = ctcam; - erp = rhashtable_lookup_fast(&erp_table->erp_ht, &key, - mlxsw_sp_acl_erp_ht_params); - if (erp) { - refcount_inc(&erp->refcnt); - return erp; - } + objagg_obj = objagg_obj_get(aregion->erp_table->objagg, &key); + if (IS_ERR(objagg_obj)) + return ERR_CAST(objagg_obj); + return (struct mlxsw_sp_acl_erp_mask *) objagg_obj; +} + +void mlxsw_sp_acl_erp_mask_put(struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_erp_mask *erp_mask) +{ + struct objagg_obj *objagg_obj = (struct objagg_obj *) erp_mask; - return erp_table->ops->erp_create(erp_table, &key); + ASSERT_RTNL(); + objagg_obj_put(aregion->erp_table->objagg, objagg_obj); } -void mlxsw_sp_acl_erp_put(struct mlxsw_sp_acl_atcam_region *aregion, - struct mlxsw_sp_acl_erp *erp) +int mlxsw_sp_acl_erp_bf_insert(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_erp_mask *erp_mask, + struct mlxsw_sp_acl_atcam_entry *aentry) { - struct mlxsw_sp_acl_erp_table *erp_table = aregion->erp_table; + struct objagg_obj *objagg_obj = (struct objagg_obj *) erp_mask; + const struct mlxsw_sp_acl_erp *erp = objagg_obj_root_priv(objagg_obj); + unsigned int erp_bank; ASSERT_RTNL(); + if (!mlxsw_sp_acl_erp_table_is_used(erp->erp_table)) + return 0; + + erp_bank = mlxsw_sp_acl_erp_bank_get(erp); + return mlxsw_sp_acl_bf_entry_add(mlxsw_sp, + erp->erp_table->erp_core->bf, + aregion, erp_bank, aentry); +} + +void mlxsw_sp_acl_erp_bf_remove(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_erp_mask *erp_mask, + struct mlxsw_sp_acl_atcam_entry *aentry) +{ + struct objagg_obj *objagg_obj = (struct objagg_obj *) erp_mask; + const struct mlxsw_sp_acl_erp *erp = objagg_obj_root_priv(objagg_obj); + unsigned int erp_bank; - if (!refcount_dec_and_test(&erp->refcnt)) + ASSERT_RTNL(); + if (!mlxsw_sp_acl_erp_table_is_used(erp->erp_table)) return; - erp_table->ops->erp_destroy(erp_table, erp); + erp_bank = mlxsw_sp_acl_erp_bank_get(erp); + mlxsw_sp_acl_bf_entry_del(mlxsw_sp, + erp->erp_table->erp_core->bf, + aregion, erp_bank, aentry); +} + +bool +mlxsw_sp_acl_erp_mask_is_ctcam(const struct mlxsw_sp_acl_erp_mask *erp_mask) +{ + struct objagg_obj *objagg_obj = (struct objagg_obj *) erp_mask; + const struct mlxsw_sp_acl_erp_key *key = objagg_obj_raw(objagg_obj); + + return key->ctcam; +} + +u8 mlxsw_sp_acl_erp_mask_erp_id(const struct mlxsw_sp_acl_erp_mask *erp_mask) +{ + struct objagg_obj *objagg_obj = (struct objagg_obj *) erp_mask; + const struct mlxsw_sp_acl_erp *erp = objagg_obj_root_priv(objagg_obj); + + return erp->id; +} + +struct mlxsw_sp_acl_erp_delta { + struct mlxsw_sp_acl_erp_key key; + u16 start; + u8 mask; +}; + +u16 mlxsw_sp_acl_erp_delta_start(const struct mlxsw_sp_acl_erp_delta *delta) +{ + return delta->start; +} + +u8 mlxsw_sp_acl_erp_delta_mask(const struct mlxsw_sp_acl_erp_delta *delta) +{ + return delta->mask; +} + +u8 mlxsw_sp_acl_erp_delta_value(const struct mlxsw_sp_acl_erp_delta *delta, + const char *enc_key) +{ + u16 start = delta->start; + u8 mask = delta->mask; + u16 tmp; + + if (!mask) + return 0; + + tmp = (unsigned char) enc_key[__MASK_IDX(start / 8)]; + if (start / 8 + 1 < __MASK_LEN) + tmp |= (unsigned char) enc_key[__MASK_IDX(start / 8 + 1)] << 8; + tmp >>= start % 8; + tmp &= mask; + return tmp; +} + +void mlxsw_sp_acl_erp_delta_clear(const struct mlxsw_sp_acl_erp_delta *delta, + 
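+/* Worked example for mlxsw_sp_acl_erp_delta_value() above: with
+ * start == 11 and mask == 0x7, the two bytes holding key bits 11..13 are
+ * read, shifted right by start % 8 == 3 and masked, yielding those three
+ * delta bits of the encoded key.
+ */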
const char *enc_key) +{ + u16 start = delta->start; + u8 mask = delta->mask; + unsigned char *byte; + u16 tmp; + + tmp = mask; + tmp <<= start % 8; + tmp = ~tmp; + + byte = (unsigned char *) &enc_key[__MASK_IDX(start / 8)]; + *byte &= tmp & 0xff; + if (start / 8 + 1 < __MASK_LEN) { + byte = (unsigned char *) &enc_key[__MASK_IDX(start / 8 + 1)]; + *byte &= (tmp >> 8) & 0xff; + } +} + +static const struct mlxsw_sp_acl_erp_delta +mlxsw_sp_acl_erp_delta_default = {}; + +const struct mlxsw_sp_acl_erp_delta * +mlxsw_sp_acl_erp_delta(const struct mlxsw_sp_acl_erp_mask *erp_mask) +{ + struct objagg_obj *objagg_obj = (struct objagg_obj *) erp_mask; + const struct mlxsw_sp_acl_erp_delta *delta; + + delta = objagg_obj_delta_priv(objagg_obj); + if (!delta) + delta = &mlxsw_sp_acl_erp_delta_default; + return delta; +} + +static int +mlxsw_sp_acl_erp_delta_fill(const struct mlxsw_sp_acl_erp_key *parent_key, + const struct mlxsw_sp_acl_erp_key *key, + u16 *delta_start, u8 *delta_mask) +{ + int offset = 0; + int si = -1; + u16 pmask; + u16 mask; + int i; + + /* The difference between 2 masks can be up to 8 consecutive bits. */ + for (i = 0; i < __MASK_LEN; i++) { + if (parent_key->mask[__MASK_IDX(i)] == key->mask[__MASK_IDX(i)]) + continue; + if (si == -1) + si = i; + else if (si != i - 1) + return -EINVAL; + } + if (si == -1) { + /* The masks are the same, this cannot happen. + * That means the caller is broken. + */ + WARN_ON(1); + *delta_start = 0; + *delta_mask = 0; + return 0; + } + pmask = (unsigned char) parent_key->mask[__MASK_IDX(si)]; + mask = (unsigned char) key->mask[__MASK_IDX(si)]; + if (si + 1 < __MASK_LEN) { + pmask |= (unsigned char) parent_key->mask[__MASK_IDX(si + 1)] << 8; + mask |= (unsigned char) key->mask[__MASK_IDX(si + 1)] << 8; + } + + if ((pmask ^ mask) & pmask) + return -EINVAL; + mask &= ~pmask; + while (!(mask & (1 << offset))) + offset++; + while (!(mask & 1)) + mask >>= 1; + if (mask & 0xff00) + return -EINVAL; + + *delta_start = si * 8 + offset; + *delta_mask = mask; + + return 0; +} + +static void *mlxsw_sp_acl_erp_delta_create(void *priv, void *parent_obj, + void *obj) +{ + struct mlxsw_sp_acl_erp_key *parent_key = parent_obj; + struct mlxsw_sp_acl_atcam_region *aregion = priv; + struct mlxsw_sp_acl_erp_table *erp_table = aregion->erp_table; + struct mlxsw_sp_acl_erp_key *key = obj; + struct mlxsw_sp_acl_erp_delta *delta; + u16 delta_start; + u8 delta_mask; + int err; + + if (parent_key->ctcam || key->ctcam) + return ERR_PTR(-EINVAL); + err = mlxsw_sp_acl_erp_delta_fill(parent_key, key, + &delta_start, &delta_mask); + if (err) + return ERR_PTR(-EINVAL); + + delta = kzalloc(sizeof(*delta), GFP_KERNEL); + if (!delta) + return ERR_PTR(-ENOMEM); + delta->start = delta_start; + delta->mask = delta_mask; + + err = mlxsw_sp_acl_erp_delta_inc(erp_table); + if (err) + goto err_erp_delta_inc; + + memcpy(&delta->key, key, sizeof(*key)); + err = mlxsw_sp_acl_erp_master_mask_set(erp_table, &delta->key); + if (err) + goto err_master_mask_set; + + return delta; + +err_master_mask_set: + mlxsw_sp_acl_erp_delta_dec(erp_table); +err_erp_delta_inc: + kfree(delta); + return ERR_PTR(err); +} + +static void mlxsw_sp_acl_erp_delta_destroy(void *priv, void *delta_priv) +{ + struct mlxsw_sp_acl_erp_delta *delta = delta_priv; + struct mlxsw_sp_acl_atcam_region *aregion = priv; + struct mlxsw_sp_acl_erp_table *erp_table = aregion->erp_table; + + mlxsw_sp_acl_erp_master_mask_clear(erp_table, &delta->key); + mlxsw_sp_acl_erp_delta_dec(erp_table); + kfree(delta); +} + +static void 
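+/* Example for mlxsw_sp_acl_erp_delta_fill() above: a parent mask byte of
+ * 0x0f and a child mask byte of 0x3f at the same offset differ only in
+ * bits 4-5, yielding delta_start == si * 8 + 4 and delta_mask == 0x3.
+ * A difference wider than 8 consecutive bits, or a parent mask bit that
+ * is missing from the child, makes the pair non-aggregatable (-EINVAL).
+ */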
*mlxsw_sp_acl_erp_root_create(void *priv, void *obj) +{ + struct mlxsw_sp_acl_atcam_region *aregion = priv; + struct mlxsw_sp_acl_erp_table *erp_table = aregion->erp_table; + struct mlxsw_sp_acl_erp_key *key = obj; + + return erp_table->ops->erp_create(erp_table, key); +} + +static void mlxsw_sp_acl_erp_root_destroy(void *priv, void *root_priv) +{ + struct mlxsw_sp_acl_atcam_region *aregion = priv; + struct mlxsw_sp_acl_erp_table *erp_table = aregion->erp_table; + + erp_table->ops->erp_destroy(erp_table, root_priv); } +static const struct objagg_ops mlxsw_sp_acl_erp_objagg_ops = { + .obj_size = sizeof(struct mlxsw_sp_acl_erp_key), + .delta_create = mlxsw_sp_acl_erp_delta_create, + .delta_destroy = mlxsw_sp_acl_erp_delta_destroy, + .root_create = mlxsw_sp_acl_erp_root_create, + .root_destroy = mlxsw_sp_acl_erp_root_destroy, +}; + static struct mlxsw_sp_acl_erp_table * mlxsw_sp_acl_erp_table_create(struct mlxsw_sp_acl_atcam_region *aregion) { @@ -988,9 +1291,12 @@ mlxsw_sp_acl_erp_table_create(struct mlxsw_sp_acl_atcam_region *aregion) if (!erp_table) return ERR_PTR(-ENOMEM); - err = rhashtable_init(&erp_table->erp_ht, &mlxsw_sp_acl_erp_ht_params); - if (err) - goto err_rhashtable_init; + erp_table->objagg = objagg_create(&mlxsw_sp_acl_erp_objagg_ops, + aregion); + if (IS_ERR(erp_table->objagg)) { + err = PTR_ERR(erp_table->objagg); + goto err_objagg_create; + } erp_table->erp_core = aregion->atcam->erp_core; erp_table->ops = &erp_no_mask_ops; @@ -999,7 +1305,7 @@ mlxsw_sp_acl_erp_table_create(struct mlxsw_sp_acl_atcam_region *aregion) return erp_table; -err_rhashtable_init: +err_objagg_create: kfree(erp_table); return ERR_PTR(err); } @@ -1008,7 +1314,7 @@ static void mlxsw_sp_acl_erp_table_destroy(struct mlxsw_sp_acl_erp_table *erp_table) { WARN_ON(!list_empty(&erp_table->atcam_erps_list)); - rhashtable_destroy(&erp_table->erp_ht); + objagg_destroy(erp_table->objagg); kfree(erp_table); } @@ -1118,6 +1424,12 @@ static int mlxsw_sp_acl_erp_tables_init(struct mlxsw_sp *mlxsw_sp, if (err) goto err_gen_pool_add; + erp_core->bf = mlxsw_sp_acl_bf_init(mlxsw_sp, erp_core->num_erp_banks); + if (IS_ERR(erp_core->bf)) { + err = PTR_ERR(erp_core->bf); + goto err_bf_init; + } + /* Different regions require masks of different sizes */ err = mlxsw_sp_acl_erp_tables_sizes_query(mlxsw_sp, erp_core); if (err) @@ -1126,6 +1438,8 @@ static int mlxsw_sp_acl_erp_tables_init(struct mlxsw_sp *mlxsw_sp, return 0; err_erp_tables_sizes_query: + mlxsw_sp_acl_bf_fini(erp_core->bf); +err_bf_init: err_gen_pool_add: gen_pool_destroy(erp_core->erp_tables); return err; @@ -1134,6 +1448,7 @@ err_gen_pool_add: static void mlxsw_sp_acl_erp_tables_fini(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_erp_core *erp_core) { + mlxsw_sp_acl_bf_fini(erp_core->bf); gen_pool_destroy(erp_core->erp_tables); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c index d409b09ba8df..2a998dea4f39 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c @@ -98,8 +98,8 @@ static const struct mlxsw_afk_block mlxsw_sp1_afk_blocks[] = { #define MLXSW_SP1_AFK_KEY_BLOCK_SIZE 16 -static void mlxsw_sp1_afk_encode_block(char *block, int block_index, - char *output) +static void mlxsw_sp1_afk_encode_block(char *output, int block_index, + char *block) { unsigned int offset = block_index * MLXSW_SP1_AFK_KEY_BLOCK_SIZE; char *output_indexed = output + offset; @@ -107,10 +107,19 @@ static 
void mlxsw_sp1_afk_encode_block(char *block, int block_index, memcpy(output_indexed, block, MLXSW_SP1_AFK_KEY_BLOCK_SIZE); } +static void mlxsw_sp1_afk_clear_block(char *output, int block_index) +{ + unsigned int offset = block_index * MLXSW_SP1_AFK_KEY_BLOCK_SIZE; + char *output_indexed = output + offset; + + memset(output_indexed, 0, MLXSW_SP1_AFK_KEY_BLOCK_SIZE); +} + const struct mlxsw_afk_ops mlxsw_sp1_afk_ops = { .blocks = mlxsw_sp1_afk_blocks, .blocks_count = ARRAY_SIZE(mlxsw_sp1_afk_blocks), .encode_block = mlxsw_sp1_afk_encode_block, + .clear_block = mlxsw_sp1_afk_clear_block, }; static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_0[] = { @@ -158,6 +167,11 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_2[] = { MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x04, 16, 8), }; +static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_4[] = { + MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_0_7, 0x04, 24, 8), + MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_8_10, 0x00, 0, 3), +}; + static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = { MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_32_63, 0x04, 4), }; @@ -201,6 +215,7 @@ static const struct mlxsw_afk_block mlxsw_sp2_afk_blocks[] = { MLXSW_AFK_BLOCK(0x38, mlxsw_sp_afk_element_info_ipv4_0), MLXSW_AFK_BLOCK(0x39, mlxsw_sp_afk_element_info_ipv4_1), MLXSW_AFK_BLOCK(0x3A, mlxsw_sp_afk_element_info_ipv4_2), + MLXSW_AFK_BLOCK(0x3C, mlxsw_sp_afk_element_info_ipv4_4), MLXSW_AFK_BLOCK(0x40, mlxsw_sp_afk_element_info_ipv6_0), MLXSW_AFK_BLOCK(0x41, mlxsw_sp_afk_element_info_ipv6_1), MLXSW_AFK_BLOCK(0x42, mlxsw_sp_afk_element_info_ipv6_2), @@ -263,10 +278,9 @@ static const struct mlxsw_sp2_afk_block_layout mlxsw_sp2_afk_blocks_layout[] = { MLXSW_SP2_AFK_BLOCK_LAYOUT(block11, 0x00, 12), }; -static void mlxsw_sp2_afk_encode_block(char *block, int block_index, - char *output) +static void __mlxsw_sp2_afk_block_value_set(char *output, int block_index, + u64 block_value) { - u64 block_value = mlxsw_sp2_afk_block_value_get(block); const struct mlxsw_sp2_afk_block_layout *block_layout; if (WARN_ON(block_index < 0 || @@ -278,8 +292,22 @@ static void mlxsw_sp2_afk_encode_block(char *block, int block_index, &block_layout->item, 0, block_value); } +static void mlxsw_sp2_afk_encode_block(char *output, int block_index, + char *block) +{ + u64 block_value = mlxsw_sp2_afk_block_value_get(block); + + __mlxsw_sp2_afk_block_value_set(output, block_index, block_value); +} + +static void mlxsw_sp2_afk_clear_block(char *output, int block_index) +{ + __mlxsw_sp2_afk_block_value_set(output, block_index, 0); +} + const struct mlxsw_afk_ops mlxsw_sp2_afk_ops = { .blocks = mlxsw_sp2_afk_blocks, .blocks_count = ARRAY_SIZE(mlxsw_sp2_afk_blocks), .encode_block = mlxsw_sp2_afk_encode_block, + .clear_block = mlxsw_sp2_afk_clear_block, }; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c index e171513bb32a..fe230acf92a9 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c @@ -95,8 +95,9 @@ int mlxsw_sp_acl_tcam_priority_get(struct mlxsw_sp *mlxsw_sp, if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, KVD_SIZE)) return -EIO; - max_priority = MLXSW_CORE_RES_GET(mlxsw_sp->core, KVD_SIZE); - if (rulei->priority > max_priority) + /* Priority range is 1..cap_kvd_size-1. 
*/ + max_priority = MLXSW_CORE_RES_GET(mlxsw_sp->core, KVD_SIZE) - 1; + if (rulei->priority >= max_priority) return -EINVAL; /* Unlike in TC, in HW, higher number means higher priority. */ @@ -778,6 +779,20 @@ static void mlxsw_sp_acl_tcam_entry_del(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_acl_tcam_chunk_put(mlxsw_sp, chunk); } +static int +mlxsw_sp_acl_tcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_tcam_group *group, + struct mlxsw_sp_acl_tcam_entry *entry, + struct mlxsw_sp_acl_rule_info *rulei) +{ + const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; + struct mlxsw_sp_acl_tcam_chunk *chunk = entry->chunk; + struct mlxsw_sp_acl_tcam_region *region = chunk->region; + + return ops->entry_action_replace(mlxsw_sp, region->priv, chunk->priv, + entry->priv, rulei); +} + static int mlxsw_sp_acl_tcam_entry_activity_get(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam_entry *entry, @@ -848,6 +863,15 @@ struct mlxsw_sp_acl_tcam_flower_rule { struct mlxsw_sp_acl_tcam_entry entry; }; +struct mlxsw_sp_acl_tcam_mr_ruleset { + struct mlxsw_sp_acl_tcam_chunk *chunk; + struct mlxsw_sp_acl_tcam_group group; +}; + +struct mlxsw_sp_acl_tcam_mr_rule { + struct mlxsw_sp_acl_tcam_entry entry; +}; + static int mlxsw_sp_acl_tcam_flower_ruleset_add(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_tcam *tcam, @@ -929,6 +953,15 @@ mlxsw_sp_acl_tcam_flower_rule_del(struct mlxsw_sp *mlxsw_sp, void *rule_priv) mlxsw_sp_acl_tcam_entry_del(mlxsw_sp, &rule->entry); } +static int +mlxsw_sp_acl_tcam_flower_rule_action_replace(struct mlxsw_sp *mlxsw_sp, + void *ruleset_priv, + void *rule_priv, + struct mlxsw_sp_acl_rule_info *rulei) +{ + return -EOPNOTSUPP; +} + static int mlxsw_sp_acl_tcam_flower_rule_activity_get(struct mlxsw_sp *mlxsw_sp, void *rule_priv, bool *activity) @@ -949,12 +982,146 @@ static const struct mlxsw_sp_acl_profile_ops mlxsw_sp_acl_tcam_flower_ops = { .rule_priv_size = mlxsw_sp_acl_tcam_flower_rule_priv_size, .rule_add = mlxsw_sp_acl_tcam_flower_rule_add, .rule_del = mlxsw_sp_acl_tcam_flower_rule_del, + .rule_action_replace = mlxsw_sp_acl_tcam_flower_rule_action_replace, .rule_activity_get = mlxsw_sp_acl_tcam_flower_rule_activity_get, }; +static int +mlxsw_sp_acl_tcam_mr_ruleset_add(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_tcam *tcam, + void *ruleset_priv, + struct mlxsw_afk_element_usage *tmplt_elusage) +{ + struct mlxsw_sp_acl_tcam_mr_ruleset *ruleset = ruleset_priv; + int err; + + err = mlxsw_sp_acl_tcam_group_add(mlxsw_sp, tcam, &ruleset->group, + mlxsw_sp_acl_tcam_patterns, + MLXSW_SP_ACL_TCAM_PATTERNS_COUNT, + tmplt_elusage); + if (err) + return err; + + /* For most of the TCAM clients it would make sense to take a tcam chunk + * only when the first rule is written. This is not the case for + * multicast router as it is required to bind the multicast router to a + * specific ACL Group ID which must exist in HW before multicast router + * is initialized. 
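+	 * Taking a chunk at a fixed priority up front therefore keeps
+	 * the group instantiated in hardware for the whole lifetime of
+	 * the ruleset, even while it holds no rules.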
+ */ + ruleset->chunk = mlxsw_sp_acl_tcam_chunk_get(mlxsw_sp, &ruleset->group, + 1, tmplt_elusage); + if (IS_ERR(ruleset->chunk)) { + err = PTR_ERR(ruleset->chunk); + goto err_chunk_get; + } + + return 0; + +err_chunk_get: + mlxsw_sp_acl_tcam_group_del(mlxsw_sp, &ruleset->group); + return err; +} + +static void +mlxsw_sp_acl_tcam_mr_ruleset_del(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv) +{ + struct mlxsw_sp_acl_tcam_mr_ruleset *ruleset = ruleset_priv; + + mlxsw_sp_acl_tcam_chunk_put(mlxsw_sp, ruleset->chunk); + mlxsw_sp_acl_tcam_group_del(mlxsw_sp, &ruleset->group); +} + +static int +mlxsw_sp_acl_tcam_mr_ruleset_bind(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv, + struct mlxsw_sp_port *mlxsw_sp_port, + bool ingress) +{ + /* Binding is done when initializing multicast router */ + return 0; +} + +static void +mlxsw_sp_acl_tcam_mr_ruleset_unbind(struct mlxsw_sp *mlxsw_sp, + void *ruleset_priv, + struct mlxsw_sp_port *mlxsw_sp_port, + bool ingress) +{ +} + +static u16 +mlxsw_sp_acl_tcam_mr_ruleset_group_id(void *ruleset_priv) +{ + struct mlxsw_sp_acl_tcam_mr_ruleset *ruleset = ruleset_priv; + + return mlxsw_sp_acl_tcam_group_id(&ruleset->group); +} + +static size_t mlxsw_sp_acl_tcam_mr_rule_priv_size(struct mlxsw_sp *mlxsw_sp) +{ + return sizeof(struct mlxsw_sp_acl_tcam_mr_rule) + + mlxsw_sp_acl_tcam_entry_priv_size(mlxsw_sp); +} + +static int +mlxsw_sp_acl_tcam_mr_rule_add(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv, + void *rule_priv, + struct mlxsw_sp_acl_rule_info *rulei) +{ + struct mlxsw_sp_acl_tcam_mr_ruleset *ruleset = ruleset_priv; + struct mlxsw_sp_acl_tcam_mr_rule *rule = rule_priv; + + return mlxsw_sp_acl_tcam_entry_add(mlxsw_sp, &ruleset->group, + &rule->entry, rulei); +} + +static void +mlxsw_sp_acl_tcam_mr_rule_del(struct mlxsw_sp *mlxsw_sp, void *rule_priv) +{ + struct mlxsw_sp_acl_tcam_mr_rule *rule = rule_priv; + + mlxsw_sp_acl_tcam_entry_del(mlxsw_sp, &rule->entry); +} + +static int +mlxsw_sp_acl_tcam_mr_rule_action_replace(struct mlxsw_sp *mlxsw_sp, + void *ruleset_priv, void *rule_priv, + struct mlxsw_sp_acl_rule_info *rulei) +{ + struct mlxsw_sp_acl_tcam_mr_ruleset *ruleset = ruleset_priv; + struct mlxsw_sp_acl_tcam_mr_rule *rule = rule_priv; + + return mlxsw_sp_acl_tcam_entry_action_replace(mlxsw_sp, &ruleset->group, + &rule->entry, rulei); +} + +static int +mlxsw_sp_acl_tcam_mr_rule_activity_get(struct mlxsw_sp *mlxsw_sp, + void *rule_priv, bool *activity) +{ + struct mlxsw_sp_acl_tcam_mr_rule *rule = rule_priv; + + return mlxsw_sp_acl_tcam_entry_activity_get(mlxsw_sp, &rule->entry, + activity); +} + +static const struct mlxsw_sp_acl_profile_ops mlxsw_sp_acl_tcam_mr_ops = { + .ruleset_priv_size = sizeof(struct mlxsw_sp_acl_tcam_mr_ruleset), + .ruleset_add = mlxsw_sp_acl_tcam_mr_ruleset_add, + .ruleset_del = mlxsw_sp_acl_tcam_mr_ruleset_del, + .ruleset_bind = mlxsw_sp_acl_tcam_mr_ruleset_bind, + .ruleset_unbind = mlxsw_sp_acl_tcam_mr_ruleset_unbind, + .ruleset_group_id = mlxsw_sp_acl_tcam_mr_ruleset_group_id, + .rule_priv_size = mlxsw_sp_acl_tcam_mr_rule_priv_size, + .rule_add = mlxsw_sp_acl_tcam_mr_rule_add, + .rule_del = mlxsw_sp_acl_tcam_mr_rule_del, + .rule_action_replace = mlxsw_sp_acl_tcam_mr_rule_action_replace, + .rule_activity_get = mlxsw_sp_acl_tcam_mr_rule_activity_get, +}; + static const struct mlxsw_sp_acl_profile_ops * mlxsw_sp_acl_tcam_profile_ops_arr[] = { [MLXSW_SP_ACL_PROFILE_FLOWER] = &mlxsw_sp_acl_tcam_flower_ops, + [MLXSW_SP_ACL_PROFILE_MR] = &mlxsw_sp_acl_tcam_mr_ops, }; const struct mlxsw_sp_acl_profile_ops * diff --git 
a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h index 219a4e26c332..0f1a9dee63de 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h @@ -48,6 +48,9 @@ struct mlxsw_sp_acl_profile_ops { void *ruleset_priv, void *rule_priv, struct mlxsw_sp_acl_rule_info *rulei); void (*rule_del)(struct mlxsw_sp *mlxsw_sp, void *rule_priv); + int (*rule_action_replace)(struct mlxsw_sp *mlxsw_sp, + void *ruleset_priv, void *rule_priv, + struct mlxsw_sp_acl_rule_info *rulei); int (*rule_activity_get)(struct mlxsw_sp *mlxsw_sp, void *rule_priv, bool *activity); }; @@ -121,6 +124,11 @@ void mlxsw_sp_acl_ctcam_entry_del(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_ctcam_region *cregion, struct mlxsw_sp_acl_ctcam_chunk *cchunk, struct mlxsw_sp_acl_ctcam_entry *centry); +int mlxsw_sp_acl_ctcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ctcam_region *cregion, + struct mlxsw_sp_acl_ctcam_chunk *cchunk, + struct mlxsw_sp_acl_ctcam_entry *centry, + struct mlxsw_sp_acl_rule_info *rulei); static inline unsigned int mlxsw_sp_acl_ctcam_entry_offset(struct mlxsw_sp_acl_ctcam_entry *centry) { @@ -144,6 +152,7 @@ struct mlxsw_sp_acl_atcam { struct mlxsw_sp_acl_atcam_region { struct rhashtable entries_ht; /* A-TCAM only */ + struct list_head entries_list; /* A-TCAM only */ struct mlxsw_sp_acl_ctcam_region cregion; const struct mlxsw_sp_acl_atcam_region_ops *ops; struct mlxsw_sp_acl_tcam_region *region; @@ -154,7 +163,9 @@ struct mlxsw_sp_acl_atcam_region { }; struct mlxsw_sp_acl_atcam_entry_ht_key { - char enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded key */ + char enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded key, + * minus delta bits. 
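+							    * The delta bits
+							    * (at most 8
+							    * consecutive bits)
+							    * are stored in
+							    * delta_info below
+							    * and masked out of
+							    * this key; the
+							    * full key is kept
+							    * in full_enc_key.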
+ */ u8 erp_id; }; @@ -164,10 +175,17 @@ struct mlxsw_sp_acl_atcam_chunk { struct mlxsw_sp_acl_atcam_entry { struct rhash_head ht_node; + struct list_head list; /* Member in entries_list */ struct mlxsw_sp_acl_atcam_entry_ht_key ht_key; + char full_enc_key[MLXSW_REG_PTCEX_FLEX_KEY_BLOCKS_LEN]; /* Encoded key */ + struct { + u16 start; + u8 mask; + u8 value; + } delta_info; struct mlxsw_sp_acl_ctcam_entry centry; struct mlxsw_sp_acl_atcam_lkey_id *lkey_id; - struct mlxsw_sp_acl_erp *erp; + struct mlxsw_sp_acl_erp_mask *erp_mask; }; static inline struct mlxsw_sp_acl_atcam_region * @@ -204,20 +222,45 @@ void mlxsw_sp_acl_atcam_entry_del(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_atcam_region *aregion, struct mlxsw_sp_acl_atcam_chunk *achunk, struct mlxsw_sp_acl_atcam_entry *aentry); +int mlxsw_sp_acl_atcam_entry_action_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_atcam_chunk *achunk, + struct mlxsw_sp_acl_atcam_entry *aentry, + struct mlxsw_sp_acl_rule_info *rulei); int mlxsw_sp_acl_atcam_init(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_atcam *atcam); void mlxsw_sp_acl_atcam_fini(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_atcam *atcam); -struct mlxsw_sp_acl_erp; +struct mlxsw_sp_acl_erp_delta; -bool mlxsw_sp_acl_erp_is_ctcam_erp(const struct mlxsw_sp_acl_erp *erp); -u8 mlxsw_sp_acl_erp_id(const struct mlxsw_sp_acl_erp *erp); -struct mlxsw_sp_acl_erp * -mlxsw_sp_acl_erp_get(struct mlxsw_sp_acl_atcam_region *aregion, - const char *mask, bool ctcam); -void mlxsw_sp_acl_erp_put(struct mlxsw_sp_acl_atcam_region *aregion, - struct mlxsw_sp_acl_erp *erp); +u16 mlxsw_sp_acl_erp_delta_start(const struct mlxsw_sp_acl_erp_delta *delta); +u8 mlxsw_sp_acl_erp_delta_mask(const struct mlxsw_sp_acl_erp_delta *delta); +u8 mlxsw_sp_acl_erp_delta_value(const struct mlxsw_sp_acl_erp_delta *delta, + const char *enc_key); +void mlxsw_sp_acl_erp_delta_clear(const struct mlxsw_sp_acl_erp_delta *delta, + const char *enc_key); + +struct mlxsw_sp_acl_erp_mask; + +bool +mlxsw_sp_acl_erp_mask_is_ctcam(const struct mlxsw_sp_acl_erp_mask *erp_mask); +u8 mlxsw_sp_acl_erp_mask_erp_id(const struct mlxsw_sp_acl_erp_mask *erp_mask); +const struct mlxsw_sp_acl_erp_delta * +mlxsw_sp_acl_erp_delta(const struct mlxsw_sp_acl_erp_mask *erp_mask); +struct mlxsw_sp_acl_erp_mask * +mlxsw_sp_acl_erp_mask_get(struct mlxsw_sp_acl_atcam_region *aregion, + const char *mask, bool ctcam); +void mlxsw_sp_acl_erp_mask_put(struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_erp_mask *erp_mask); +int mlxsw_sp_acl_erp_bf_insert(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_erp_mask *erp_mask, + struct mlxsw_sp_acl_atcam_entry *aentry); +void mlxsw_sp_acl_erp_bf_remove(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_atcam_region *aregion, + struct mlxsw_sp_acl_erp_mask *erp_mask, + struct mlxsw_sp_acl_atcam_entry *aentry); int mlxsw_sp_acl_erp_region_init(struct mlxsw_sp_acl_atcam_region *aregion); void mlxsw_sp_acl_erp_region_fini(struct mlxsw_sp_acl_atcam_region *aregion); int mlxsw_sp_acl_erps_init(struct mlxsw_sp *mlxsw_sp, @@ -225,4 +268,22 @@ int mlxsw_sp_acl_erps_init(struct mlxsw_sp *mlxsw_sp, void mlxsw_sp_acl_erps_fini(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_atcam *atcam); +struct mlxsw_sp_acl_bf; + +int +mlxsw_sp_acl_bf_entry_add(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_bf *bf, + struct mlxsw_sp_acl_atcam_region *aregion, + unsigned int erp_bank, + struct mlxsw_sp_acl_atcam_entry *aentry); +void 
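+/* The Bloom filter entries behind these calls are reference-counted per
+ * (eRP bank, filter index) pair: the add path programs the hardware via
+ * the PEABFE register only on the 0 -> 1 transition, and the delete path
+ * below clears the hardware bit only when the last reference is dropped.
+ */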
+mlxsw_sp_acl_bf_entry_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_bf *bf, + struct mlxsw_sp_acl_atcam_region *aregion, + unsigned int erp_bank, + struct mlxsw_sp_acl_atcam_entry *aentry); +struct mlxsw_sp_acl_bf * +mlxsw_sp_acl_bf_init(struct mlxsw_sp *mlxsw_sp, unsigned int num_erp_banks); +void mlxsw_sp_acl_bf_fini(struct mlxsw_sp_acl_bf *bf); + #endif diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c index a3db033d7399..055cc6943b34 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c @@ -15,6 +15,7 @@ struct mlxsw_sp_fid_family; struct mlxsw_sp_fid_core { + struct rhashtable fid_ht; struct rhashtable vni_ht; struct mlxsw_sp_fid_family *fid_family_arr[MLXSW_SP_FID_TYPE_MAX]; unsigned int *port_fid_mappings; @@ -26,10 +27,13 @@ struct mlxsw_sp_fid { unsigned int ref_count; u16 fid_index; struct mlxsw_sp_fid_family *fid_family; + struct rhash_head ht_node; struct rhash_head vni_ht_node; + enum mlxsw_sp_nve_type nve_type; __be32 vni; u32 nve_flood_index; + int nve_ifindex; u8 vni_valid:1, nve_flood_index_valid:1; }; @@ -44,6 +48,12 @@ struct mlxsw_sp_fid_8021d { int br_ifindex; }; +static const struct rhashtable_params mlxsw_sp_fid_ht_params = { + .key_len = sizeof_field(struct mlxsw_sp_fid, fid_index), + .key_offset = offsetof(struct mlxsw_sp_fid, fid_index), + .head_offset = offsetof(struct mlxsw_sp_fid, ht_node), +}; + static const struct rhashtable_params mlxsw_sp_fid_vni_ht_params = { .key_len = sizeof_field(struct mlxsw_sp_fid, vni), .key_offset = offsetof(struct mlxsw_sp_fid, vni), @@ -75,6 +85,8 @@ struct mlxsw_sp_fid_ops { int (*nve_flood_index_set)(struct mlxsw_sp_fid *fid, u32 nve_flood_index); void (*nve_flood_index_clear)(struct mlxsw_sp_fid *fid); + void (*fdb_clear_offload)(const struct mlxsw_sp_fid *fid, + const struct net_device *nve_dev); }; struct mlxsw_sp_fid_family { @@ -89,6 +101,7 @@ struct mlxsw_sp_fid_family { enum mlxsw_sp_rif_type rif_type; const struct mlxsw_sp_fid_ops *ops; struct mlxsw_sp *mlxsw_sp; + u8 lag_vid_valid:1; }; static const int mlxsw_sp_sfgc_uc_packet_types[MLXSW_REG_SFGC_TYPE_MAX] = { @@ -113,6 +126,45 @@ static const int *mlxsw_sp_packet_type_sfgc_types[] = { [MLXSW_SP_FLOOD_TYPE_MC] = mlxsw_sp_sfgc_mc_packet_types, }; +bool mlxsw_sp_fid_lag_vid_valid(const struct mlxsw_sp_fid *fid) +{ + return fid->fid_family->lag_vid_valid; +} + +struct mlxsw_sp_fid *mlxsw_sp_fid_lookup_by_index(struct mlxsw_sp *mlxsw_sp, + u16 fid_index) +{ + struct mlxsw_sp_fid *fid; + + fid = rhashtable_lookup_fast(&mlxsw_sp->fid_core->fid_ht, &fid_index, + mlxsw_sp_fid_ht_params); + if (fid) + fid->ref_count++; + + return fid; +} + +int mlxsw_sp_fid_nve_ifindex(const struct mlxsw_sp_fid *fid, int *nve_ifindex) +{ + if (!fid->vni_valid) + return -EINVAL; + + *nve_ifindex = fid->nve_ifindex; + + return 0; +} + +int mlxsw_sp_fid_nve_type(const struct mlxsw_sp_fid *fid, + enum mlxsw_sp_nve_type *p_type) +{ + if (!fid->vni_valid) + return -EINVAL; + + *p_type = fid->nve_type; + + return 0; +} + struct mlxsw_sp_fid *mlxsw_sp_fid_lookup_by_vni(struct mlxsw_sp *mlxsw_sp, __be32 vni) { @@ -173,7 +225,8 @@ bool mlxsw_sp_fid_nve_flood_index_is_set(const struct mlxsw_sp_fid *fid) return fid->nve_flood_index_valid; } -int mlxsw_sp_fid_vni_set(struct mlxsw_sp_fid *fid, __be32 vni) +int mlxsw_sp_fid_vni_set(struct mlxsw_sp_fid *fid, enum mlxsw_sp_nve_type type, + __be32 vni, int nve_ifindex) { struct mlxsw_sp_fid_family *fid_family = 
fid->fid_family; const struct mlxsw_sp_fid_ops *ops = fid_family->ops; @@ -183,6 +236,8 @@ int mlxsw_sp_fid_vni_set(struct mlxsw_sp_fid *fid, __be32 vni) if (WARN_ON(!ops->vni_set || fid->vni_valid)) return -EINVAL; + fid->nve_type = type; + fid->nve_ifindex = nve_ifindex; fid->vni = vni; err = rhashtable_lookup_insert_fast(&mlxsw_sp->fid_core->vni_ht, &fid->vni_ht_node, @@ -224,6 +279,16 @@ bool mlxsw_sp_fid_vni_is_set(const struct mlxsw_sp_fid *fid) return fid->vni_valid; } +void mlxsw_sp_fid_fdb_clear_offload(const struct mlxsw_sp_fid *fid, + const struct net_device *nve_dev) +{ + struct mlxsw_sp_fid_family *fid_family = fid->fid_family; + const struct mlxsw_sp_fid_ops *ops = fid_family->ops; + + if (ops->fdb_clear_offload) + ops->fdb_clear_offload(fid, nve_dev); +} + static const struct mlxsw_sp_flood_table * mlxsw_sp_fid_flood_table_lookup(const struct mlxsw_sp_fid *fid, enum mlxsw_sp_flood_type packet_type) @@ -284,11 +349,6 @@ void mlxsw_sp_fid_port_vid_unmap(struct mlxsw_sp_fid *fid, fid->fid_family->ops->port_vid_unmap(fid, mlxsw_sp_port, vid); } -enum mlxsw_sp_rif_type mlxsw_sp_fid_rif_type(const struct mlxsw_sp_fid *fid) -{ - return fid->fid_family->rif_type; -} - u16 mlxsw_sp_fid_index(const struct mlxsw_sp_fid *fid) { return fid->fid_index; @@ -304,6 +364,11 @@ void mlxsw_sp_fid_rif_set(struct mlxsw_sp_fid *fid, struct mlxsw_sp_rif *rif) fid->rif = rif; } +struct mlxsw_sp_rif *mlxsw_sp_fid_rif(const struct mlxsw_sp_fid *fid) +{ + return fid->rif; +} + enum mlxsw_sp_rif_type mlxsw_sp_fid_type_rif_type(const struct mlxsw_sp *mlxsw_sp, enum mlxsw_sp_fid_type type) @@ -568,7 +633,7 @@ mlxsw_sp_fid_8021d_compare(const struct mlxsw_sp_fid *fid, const void *arg) static u16 mlxsw_sp_fid_8021d_flood_index(const struct mlxsw_sp_fid *fid) { - return fid->fid_index - fid->fid_family->start_index; + return fid->fid_index - VLAN_N_VID; } static int mlxsw_sp_port_vp_mode_trans(struct mlxsw_sp_port *mlxsw_sp_port) @@ -713,6 +778,13 @@ static void mlxsw_sp_fid_8021d_nve_flood_index_clear(struct mlxsw_sp_fid *fid) fid->vni_valid, 0, false); } +static void +mlxsw_sp_fid_8021d_fdb_clear_offload(const struct mlxsw_sp_fid *fid, + const struct net_device *nve_dev) +{ + br_fdb_clear_offload(nve_dev, 0); +} + static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021d_ops = { .setup = mlxsw_sp_fid_8021d_setup, .configure = mlxsw_sp_fid_8021d_configure, @@ -726,6 +798,7 @@ static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021d_ops = { .vni_clear = mlxsw_sp_fid_8021d_vni_clear, .nve_flood_index_set = mlxsw_sp_fid_8021d_nve_flood_index_set, .nve_flood_index_clear = mlxsw_sp_fid_8021d_nve_flood_index_clear, + .fdb_clear_offload = mlxsw_sp_fid_8021d_fdb_clear_offload, }; static const struct mlxsw_sp_flood_table mlxsw_sp_fid_8021d_flood_tables[] = { @@ -759,6 +832,48 @@ static const struct mlxsw_sp_fid_family mlxsw_sp_fid_8021d_family = { .nr_flood_tables = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables), .rif_type = MLXSW_SP_RIF_TYPE_FID, .ops = &mlxsw_sp_fid_8021d_ops, + .lag_vid_valid = 1, +}; + +static void +mlxsw_sp_fid_8021q_fdb_clear_offload(const struct mlxsw_sp_fid *fid, + const struct net_device *nve_dev) +{ + br_fdb_clear_offload(nve_dev, mlxsw_sp_fid_8021q_vid(fid)); +} + +static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021q_emu_ops = { + .setup = mlxsw_sp_fid_8021q_setup, + .configure = mlxsw_sp_fid_8021d_configure, + .deconfigure = mlxsw_sp_fid_8021d_deconfigure, + .index_alloc = mlxsw_sp_fid_8021d_index_alloc, + .compare = mlxsw_sp_fid_8021q_compare, + .flood_index = 
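+	/* The emulated 802.1Q family reuses the 802.1D ops wholesale;
+	 * only setup, compare and fdb_clear_offload are 802.1Q-specific.
+	 */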
mlxsw_sp_fid_8021d_flood_index, + .port_vid_map = mlxsw_sp_fid_8021d_port_vid_map, + .port_vid_unmap = mlxsw_sp_fid_8021d_port_vid_unmap, + .vni_set = mlxsw_sp_fid_8021d_vni_set, + .vni_clear = mlxsw_sp_fid_8021d_vni_clear, + .nve_flood_index_set = mlxsw_sp_fid_8021d_nve_flood_index_set, + .nve_flood_index_clear = mlxsw_sp_fid_8021d_nve_flood_index_clear, + .fdb_clear_offload = mlxsw_sp_fid_8021q_fdb_clear_offload, +}; + +/* There are 4K-2 emulated 802.1Q FIDs, starting right after the 802.1D FIDs */ +#define MLXSW_SP_FID_8021Q_EMU_START (VLAN_N_VID + MLXSW_SP_FID_8021D_MAX) +#define MLXSW_SP_FID_8021Q_EMU_END (MLXSW_SP_FID_8021Q_EMU_START + \ + VLAN_VID_MASK - 2) + +/* Range and flood configuration must match mlxsw_config_profile */ +static const struct mlxsw_sp_fid_family mlxsw_sp_fid_8021q_emu_family = { + .type = MLXSW_SP_FID_TYPE_8021Q, + .fid_size = sizeof(struct mlxsw_sp_fid_8021q), + .start_index = MLXSW_SP_FID_8021Q_EMU_START, + .end_index = MLXSW_SP_FID_8021Q_EMU_END, + .flood_tables = mlxsw_sp_fid_8021d_flood_tables, + .nr_flood_tables = ARRAY_SIZE(mlxsw_sp_fid_8021d_flood_tables), + .rif_type = MLXSW_SP_RIF_TYPE_VLAN, + .ops = &mlxsw_sp_fid_8021q_emu_ops, + .lag_vid_valid = 1, }; static int mlxsw_sp_fid_rfid_configure(struct mlxsw_sp_fid *fid) @@ -888,7 +1003,7 @@ static const struct mlxsw_sp_fid_family mlxsw_sp_fid_dummy_family = { }; static const struct mlxsw_sp_fid_family *mlxsw_sp_fid_family_arr[] = { - [MLXSW_SP_FID_TYPE_8021Q] = &mlxsw_sp_fid_8021q_family, + [MLXSW_SP_FID_TYPE_8021Q] = &mlxsw_sp_fid_8021q_emu_family, [MLXSW_SP_FID_TYPE_8021D] = &mlxsw_sp_fid_8021d_family, [MLXSW_SP_FID_TYPE_RFID] = &mlxsw_sp_fid_rfid_family, [MLXSW_SP_FID_TYPE_DUMMY] = &mlxsw_sp_fid_dummy_family, @@ -944,10 +1059,17 @@ static struct mlxsw_sp_fid *mlxsw_sp_fid_get(struct mlxsw_sp *mlxsw_sp, if (err) goto err_configure; + err = rhashtable_insert_fast(&mlxsw_sp->fid_core->fid_ht, &fid->ht_node, + mlxsw_sp_fid_ht_params); + if (err) + goto err_rhashtable_insert; + list_add(&fid->list, &fid_family->fids_list); fid->ref_count++; return fid; +err_rhashtable_insert: + fid->fid_family->ops->deconfigure(fid); err_configure: __clear_bit(fid_index - fid_family->start_index, fid_family->fids_bitmap); @@ -959,19 +1081,18 @@ err_index_alloc: void mlxsw_sp_fid_put(struct mlxsw_sp_fid *fid) { struct mlxsw_sp_fid_family *fid_family = fid->fid_family; + struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp; - if (--fid->ref_count == 1 && fid->rif) { - /* Destroy the associated RIF and let it drop the last - * reference on the FID. 
- */ - return mlxsw_sp_rif_destroy(fid->rif); - } else if (fid->ref_count == 0) { - list_del(&fid->list); - fid->fid_family->ops->deconfigure(fid); - __clear_bit(fid->fid_index - fid_family->start_index, - fid_family->fids_bitmap); - kfree(fid); - } + if (--fid->ref_count != 0) + return; + + list_del(&fid->list); + rhashtable_remove_fast(&mlxsw_sp->fid_core->fid_ht, + &fid->ht_node, mlxsw_sp_fid_ht_params); + fid->fid_family->ops->deconfigure(fid); + __clear_bit(fid->fid_index - fid_family->start_index, + fid_family->fids_bitmap); + kfree(fid); } struct mlxsw_sp_fid *mlxsw_sp_fid_8021q_get(struct mlxsw_sp *mlxsw_sp, u16 vid) @@ -985,6 +1106,12 @@ struct mlxsw_sp_fid *mlxsw_sp_fid_8021d_get(struct mlxsw_sp *mlxsw_sp, return mlxsw_sp_fid_get(mlxsw_sp, MLXSW_SP_FID_TYPE_8021D, &br_ifindex); } +struct mlxsw_sp_fid *mlxsw_sp_fid_8021q_lookup(struct mlxsw_sp *mlxsw_sp, + u16 vid) +{ + return mlxsw_sp_fid_lookup(mlxsw_sp, MLXSW_SP_FID_TYPE_8021Q, &vid); +} + struct mlxsw_sp_fid *mlxsw_sp_fid_8021d_lookup(struct mlxsw_sp *mlxsw_sp, int br_ifindex) { @@ -1126,9 +1253,13 @@ int mlxsw_sp_fids_init(struct mlxsw_sp *mlxsw_sp) return -ENOMEM; mlxsw_sp->fid_core = fid_core; + err = rhashtable_init(&fid_core->fid_ht, &mlxsw_sp_fid_ht_params); + if (err) + goto err_rhashtable_fid_init; + err = rhashtable_init(&fid_core->vni_ht, &mlxsw_sp_fid_vni_ht_params); if (err) - goto err_rhashtable_init; + goto err_rhashtable_vni_init; fid_core->port_fid_mappings = kcalloc(max_ports, sizeof(unsigned int), GFP_KERNEL); @@ -1157,7 +1288,9 @@ err_fid_ops_register: kfree(fid_core->port_fid_mappings); err_alloc_port_fid_mappings: rhashtable_destroy(&fid_core->vni_ht); -err_rhashtable_init: +err_rhashtable_vni_init: + rhashtable_destroy(&fid_core->fid_ht); +err_rhashtable_fid_init: kfree(fid_core); return err; } @@ -1172,5 +1305,6 @@ void mlxsw_sp_fids_fini(struct mlxsw_sp *mlxsw_sp) fid_core->fid_family_arr[i]); kfree(fid_core->port_fid_mappings); rhashtable_destroy(&fid_core->vni_ht); + rhashtable_destroy(&fid_core->fid_ht); kfree(fid_core); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c index 8d211972c5e9..ff072358d950 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c @@ -406,7 +406,7 @@ int mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp, if (IS_ERR(ruleset)) return PTR_ERR(ruleset); - rule = mlxsw_sp_acl_rule_create(mlxsw_sp, ruleset, f->cookie, + rule = mlxsw_sp_acl_rule_create(mlxsw_sp, ruleset, f->cookie, NULL, f->common.extack); if (IS_ERR(rule)) { err = PTR_ERR(rule); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c index b5b54b41349a..0a31fff2516e 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c @@ -174,6 +174,20 @@ mlxsw_sp_nve_mc_record_ops_arr[] = { [MLXSW_SP_L3_PROTO_IPV6] = &mlxsw_sp_nve_mc_record_ipv6_ops, }; +int mlxsw_sp_nve_learned_ip_resolve(struct mlxsw_sp *mlxsw_sp, u32 uip, + enum mlxsw_sp_l3proto proto, + union mlxsw_sp_l3addr *addr) +{ + switch (proto) { + case MLXSW_SP_L3_PROTO_IPV4: + addr->addr4 = cpu_to_be32(uip); + return 0; + default: + WARN_ON(1); + return -EINVAL; + } +} + static struct mlxsw_sp_nve_mc_list * mlxsw_sp_nve_mc_list_find(struct mlxsw_sp *mlxsw_sp, const struct mlxsw_sp_nve_mc_list_key *key) @@ -775,6 +789,21 @@ static void mlxsw_sp_nve_fdb_flush_by_fid(struct mlxsw_sp *mlxsw_sp, 
mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfdf), sfdf_pl); } +static void mlxsw_sp_nve_fdb_clear_offload(struct mlxsw_sp *mlxsw_sp, + const struct mlxsw_sp_fid *fid, + const struct net_device *nve_dev, + __be32 vni) +{ + const struct mlxsw_sp_nve_ops *ops; + enum mlxsw_sp_nve_type type; + + if (WARN_ON(mlxsw_sp_fid_nve_type(fid, &type))) + return; + + ops = mlxsw_sp->nve->nve_ops_arr[type]; + ops->fdb_clear_offload(nve_dev, vni); +} + int mlxsw_sp_nve_fid_enable(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fid *fid, struct mlxsw_sp_nve_params *params, struct netlink_ext_ack *extack) @@ -803,7 +832,8 @@ int mlxsw_sp_nve_fid_enable(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fid *fid, return err; } - err = mlxsw_sp_fid_vni_set(fid, params->vni); + err = mlxsw_sp_fid_vni_set(fid, params->type, params->vni, + params->dev->ifindex); if (err) { NL_SET_ERR_MSG_MOD(extack, "Failed to set VNI on FID"); goto err_fid_vni_set; @@ -811,8 +841,16 @@ int mlxsw_sp_nve_fid_enable(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fid *fid, nve->config = config; + err = ops->fdb_replay(params->dev, params->vni); + if (err) { + NL_SET_ERR_MSG_MOD(extack, "Failed to offload the FDB"); + goto err_fdb_replay; + } + return 0; +err_fdb_replay: + mlxsw_sp_fid_vni_clear(fid); err_fid_vni_set: mlxsw_sp_nve_tunnel_fini(mlxsw_sp); return err; @@ -822,9 +860,27 @@ void mlxsw_sp_nve_fid_disable(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fid *fid) { u16 fid_index = mlxsw_sp_fid_index(fid); + struct net_device *nve_dev; + int nve_ifindex; + __be32 vni; mlxsw_sp_nve_flood_ip_flush(mlxsw_sp, fid); mlxsw_sp_nve_fdb_flush_by_fid(mlxsw_sp, fid_index); + + if (WARN_ON(mlxsw_sp_fid_nve_ifindex(fid, &nve_ifindex) || + mlxsw_sp_fid_vni(fid, &vni))) + goto out; + + nve_dev = dev_get_by_index(&init_net, nve_ifindex); + if (!nve_dev) + goto out; + + mlxsw_sp_nve_fdb_clear_offload(mlxsw_sp, fid, nve_dev, vni); + mlxsw_sp_fid_fdb_clear_offload(fid, nve_dev); + + dev_put(nve_dev); + +out: mlxsw_sp_fid_vni_clear(fid); mlxsw_sp_nve_tunnel_fini(mlxsw_sp); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.h index 4cc3297e13d6..02937ea95bc3 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.h @@ -41,6 +41,8 @@ struct mlxsw_sp_nve_ops { int (*init)(struct mlxsw_sp_nve *nve, const struct mlxsw_sp_nve_config *config); void (*fini)(struct mlxsw_sp_nve *nve); + int (*fdb_replay)(const struct net_device *nve_dev, __be32 vni); + void (*fdb_clear_offload)(const struct net_device *nve_dev, __be32 vni); }; extern const struct mlxsw_sp_nve_ops mlxsw_sp1_nve_vxlan_ops; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c index d21c7be5b1c9..74e564c4ac19 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_nve_vxlan.c @@ -17,7 +17,8 @@ #define MLXSW_SP_NVE_VXLAN_PARSING_DEPTH 128 #define MLXSW_SP_NVE_DEFAULT_PARSING_DEPTH 96 -#define MLXSW_SP_NVE_VXLAN_SUPPORTED_FLAGS VXLAN_F_UDP_ZERO_CSUM_TX +#define MLXSW_SP_NVE_VXLAN_SUPPORTED_FLAGS (VXLAN_F_UDP_ZERO_CSUM_TX | \ + VXLAN_F_LEARN) static bool mlxsw_sp1_nve_vxlan_can_offload(const struct mlxsw_sp_nve *nve, const struct net_device *dev, @@ -61,11 +62,6 @@ static bool mlxsw_sp1_nve_vxlan_can_offload(const struct mlxsw_sp_nve *nve, return false; } - if (cfg->flags & VXLAN_F_LEARN) { - NL_SET_ERR_MSG_MOD(extack, "VxLAN: Learning is not 
supported"); - return false; - } - if (!(cfg->flags & VXLAN_F_UDP_ZERO_CSUM_TX)) { NL_SET_ERR_MSG_MOD(extack, "VxLAN: UDP checksum is not supported"); return false; @@ -215,12 +211,30 @@ static void mlxsw_sp1_nve_vxlan_fini(struct mlxsw_sp_nve *nve) config->udp_dport); } +static int +mlxsw_sp_nve_vxlan_fdb_replay(const struct net_device *nve_dev, __be32 vni) +{ + if (WARN_ON(!netif_is_vxlan(nve_dev))) + return -EINVAL; + return vxlan_fdb_replay(nve_dev, vni, &mlxsw_sp_switchdev_notifier); +} + +static void +mlxsw_sp_nve_vxlan_clear_offload(const struct net_device *nve_dev, __be32 vni) +{ + if (WARN_ON(!netif_is_vxlan(nve_dev))) + return; + vxlan_fdb_clear_offload(nve_dev, vni); +} + const struct mlxsw_sp_nve_ops mlxsw_sp1_nve_vxlan_ops = { .type = MLXSW_SP_NVE_TYPE_VXLAN, .can_offload = mlxsw_sp1_nve_vxlan_can_offload, .nve_config = mlxsw_sp_nve_vxlan_config, .init = mlxsw_sp1_nve_vxlan_init, .fini = mlxsw_sp1_nve_vxlan_fini, + .fdb_replay = mlxsw_sp_nve_vxlan_fdb_replay, + .fdb_clear_offload = mlxsw_sp_nve_vxlan_clear_offload, }; static bool mlxsw_sp2_nve_vxlan_can_offload(const struct mlxsw_sp_nve *nve, @@ -246,4 +260,6 @@ const struct mlxsw_sp_nve_ops mlxsw_sp2_nve_vxlan_ops = { .nve_config = mlxsw_sp_nve_vxlan_config, .init = mlxsw_sp2_nve_vxlan_init, .fini = mlxsw_sp2_nve_vxlan_fini, + .fdb_replay = mlxsw_sp_nve_vxlan_fdb_replay, + .fdb_clear_offload = mlxsw_sp_nve_vxlan_clear_offload, }; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 6ebf99cc3154..98e5ffd71b91 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -70,6 +71,8 @@ struct mlxsw_sp_router { bool aborted; struct notifier_block fib_nb; struct notifier_block netevent_nb; + struct notifier_block inetaddr_nb; + struct notifier_block inet6addr_nb; const struct mlxsw_sp_rif_ops **rif_ops_arr; const struct mlxsw_sp_ipip_ops **ipip_ops_arr; }; @@ -104,6 +107,7 @@ struct mlxsw_sp_rif_params { struct mlxsw_sp_rif_subport { struct mlxsw_sp_rif common; + refcount_t ref_count; union { u16 system_port; u16 lag_id; @@ -136,6 +140,7 @@ struct mlxsw_sp_rif_ops { void (*fdb_del)(struct mlxsw_sp_rif *rif, const char *mac); }; +static void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif); static void mlxsw_sp_lpm_tree_hold(struct mlxsw_sp_lpm_tree *lpm_tree); static void mlxsw_sp_lpm_tree_put(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_lpm_tree *lpm_tree); @@ -6297,6 +6302,7 @@ mlxsw_sp_rif_create(struct mlxsw_sp *mlxsw_sp, err = -ENOMEM; goto err_rif_alloc; } + dev_hold(rif->dev); rif->mlxsw_sp = mlxsw_sp; rif->ops = ops; @@ -6335,6 +6341,7 @@ err_configure: if (fid) mlxsw_sp_fid_put(fid); err_fid_get: + dev_put(rif->dev); kfree(rif); err_rif_alloc: err_rif_index_alloc: @@ -6343,7 +6350,7 @@ err_rif_index_alloc: return ERR_PTR(err); } -void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif) +static void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif) { const struct mlxsw_sp_rif_ops *ops = rif->ops; struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp; @@ -6362,6 +6369,7 @@ void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif) if (fid) /* Loopback RIFs are not associated with a FID. 
*/ mlxsw_sp_fid_put(fid); + dev_put(rif->dev); kfree(rif); vr->rif_count--; mlxsw_sp_vr_put(mlxsw_sp, vr); @@ -6392,6 +6400,40 @@ mlxsw_sp_rif_subport_params_init(struct mlxsw_sp_rif_params *params, params->system_port = mlxsw_sp_port->local_port; } +static struct mlxsw_sp_rif_subport * +mlxsw_sp_rif_subport_rif(const struct mlxsw_sp_rif *rif) +{ + return container_of(rif, struct mlxsw_sp_rif_subport, common); +} + +static struct mlxsw_sp_rif * +mlxsw_sp_rif_subport_get(struct mlxsw_sp *mlxsw_sp, + const struct mlxsw_sp_rif_params *params, + struct netlink_ext_ack *extack) +{ + struct mlxsw_sp_rif_subport *rif_subport; + struct mlxsw_sp_rif *rif; + + rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, params->dev); + if (!rif) + return mlxsw_sp_rif_create(mlxsw_sp, params, extack); + + rif_subport = mlxsw_sp_rif_subport_rif(rif); + refcount_inc(&rif_subport->ref_count); + return rif; +} + +static void mlxsw_sp_rif_subport_put(struct mlxsw_sp_rif *rif) +{ + struct mlxsw_sp_rif_subport *rif_subport; + + rif_subport = mlxsw_sp_rif_subport_rif(rif); + if (!refcount_dec_and_test(&rif_subport->ref_count)) + return; + + mlxsw_sp_rif_destroy(rif); +} + static int mlxsw_sp_port_vlan_router_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan, struct net_device *l3_dev, @@ -6399,22 +6441,18 @@ mlxsw_sp_port_vlan_router_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan, { struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp_port_vlan->mlxsw_sp_port; struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp; + struct mlxsw_sp_rif_params params = { + .dev = l3_dev, + }; u16 vid = mlxsw_sp_port_vlan->vid; struct mlxsw_sp_rif *rif; struct mlxsw_sp_fid *fid; int err; - rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, l3_dev); - if (!rif) { - struct mlxsw_sp_rif_params params = { - .dev = l3_dev, - }; - - mlxsw_sp_rif_subport_params_init(&params, mlxsw_sp_port_vlan); - rif = mlxsw_sp_rif_create(mlxsw_sp, &params, extack); - if (IS_ERR(rif)) - return PTR_ERR(rif); - } + mlxsw_sp_rif_subport_params_init(&params, mlxsw_sp_port_vlan); + rif = mlxsw_sp_rif_subport_get(mlxsw_sp, &params, extack); + if (IS_ERR(rif)) + return PTR_ERR(rif); /* FID was already created, just take a reference */ fid = rif->ops->fid_get(rif, extack); @@ -6441,6 +6479,7 @@ err_port_vid_learning_set: mlxsw_sp_fid_port_vid_unmap(fid, mlxsw_sp_port, vid); err_fid_port_vid_map: mlxsw_sp_fid_put(fid); + mlxsw_sp_rif_subport_put(rif); return err; } @@ -6449,6 +6488,7 @@ mlxsw_sp_port_vlan_router_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan) { struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp_port_vlan->mlxsw_sp_port; struct mlxsw_sp_fid *fid = mlxsw_sp_port_vlan->fid; + struct mlxsw_sp_rif *rif = mlxsw_sp_fid_rif(fid); u16 vid = mlxsw_sp_port_vlan->vid; if (WARN_ON(mlxsw_sp_fid_type(fid) != MLXSW_SP_FID_TYPE_RFID)) @@ -6458,10 +6498,8 @@ mlxsw_sp_port_vlan_router_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan) mlxsw_sp_port_vid_stp_set(mlxsw_sp_port, vid, BR_STATE_BLOCKING); mlxsw_sp_port_vid_learning_set(mlxsw_sp_port, vid, true); mlxsw_sp_fid_port_vid_unmap(fid, mlxsw_sp_port, vid); - /* If router port holds the last reference on the rFID, then the - * associated Sub-port RIF will be destroyed. 
- */ mlxsw_sp_fid_put(fid); + mlxsw_sp_rif_subport_put(rif); } static int mlxsw_sp_inetaddr_port_vlan_event(struct net_device *l3_dev, @@ -6497,8 +6535,8 @@ static int mlxsw_sp_inetaddr_port_event(struct net_device *port_dev, netif_is_ovs_port(port_dev)) return 0; - return mlxsw_sp_inetaddr_port_vlan_event(port_dev, port_dev, event, 1, - extack); + return mlxsw_sp_inetaddr_port_vlan_event(port_dev, port_dev, event, + MLXSW_SP_DEFAULT_VID, extack); } static int __mlxsw_sp_inetaddr_lag_event(struct net_device *l3_dev, @@ -6531,15 +6569,15 @@ static int mlxsw_sp_inetaddr_lag_event(struct net_device *lag_dev, if (netif_is_bridge_port(lag_dev)) return 0; - return __mlxsw_sp_inetaddr_lag_event(lag_dev, lag_dev, event, 1, - extack); + return __mlxsw_sp_inetaddr_lag_event(lag_dev, lag_dev, event, + MLXSW_SP_DEFAULT_VID, extack); } -static int mlxsw_sp_inetaddr_bridge_event(struct net_device *l3_dev, +static int mlxsw_sp_inetaddr_bridge_event(struct mlxsw_sp *mlxsw_sp, + struct net_device *l3_dev, unsigned long event, struct netlink_ext_ack *extack) { - struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(l3_dev); struct mlxsw_sp_rif_params params = { .dev = l3_dev, }; @@ -6560,7 +6598,8 @@ static int mlxsw_sp_inetaddr_bridge_event(struct net_device *l3_dev, return 0; } -static int mlxsw_sp_inetaddr_vlan_event(struct net_device *vlan_dev, +static int mlxsw_sp_inetaddr_vlan_event(struct mlxsw_sp *mlxsw_sp, + struct net_device *vlan_dev, unsigned long event, struct netlink_ext_ack *extack) { @@ -6577,7 +6616,8 @@ static int mlxsw_sp_inetaddr_vlan_event(struct net_device *vlan_dev, return __mlxsw_sp_inetaddr_lag_event(vlan_dev, real_dev, event, vid, extack); else if (netif_is_bridge_master(real_dev) && br_vlan_enabled(real_dev)) - return mlxsw_sp_inetaddr_bridge_event(vlan_dev, event, extack); + return mlxsw_sp_inetaddr_bridge_event(mlxsw_sp, vlan_dev, event, + extack); return 0; } @@ -6678,16 +6718,11 @@ void mlxsw_sp_rif_macvlan_del(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_fid_index(rif->fid), false); } -static int mlxsw_sp_inetaddr_macvlan_event(struct net_device *macvlan_dev, +static int mlxsw_sp_inetaddr_macvlan_event(struct mlxsw_sp *mlxsw_sp, + struct net_device *macvlan_dev, unsigned long event, struct netlink_ext_ack *extack) { - struct mlxsw_sp *mlxsw_sp; - - mlxsw_sp = mlxsw_sp_lower_get(macvlan_dev); - if (!mlxsw_sp) - return 0; - switch (event) { case NETDEV_UP: return mlxsw_sp_rif_macvlan_add(mlxsw_sp, macvlan_dev, extack); @@ -6699,7 +6734,35 @@ static int mlxsw_sp_inetaddr_macvlan_event(struct net_device *macvlan_dev, return 0; } -static int __mlxsw_sp_inetaddr_event(struct net_device *dev, +static int mlxsw_sp_router_port_check_rif_addr(struct mlxsw_sp *mlxsw_sp, + struct net_device *dev, + const unsigned char *dev_addr, + struct netlink_ext_ack *extack) +{ + struct mlxsw_sp_rif *rif; + int i; + + /* A RIF is not created for macvlan netdevs. 
Their MAC is used to + * populate the FDB + */ + if (netif_is_macvlan(dev)) + return 0; + + for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); i++) { + rif = mlxsw_sp->router->rifs[i]; + if (rif && rif->dev != dev && + !ether_addr_equal_masked(rif->dev->dev_addr, dev_addr, + mlxsw_sp->mac_mask)) { + NL_SET_ERR_MSG_MOD(extack, "All router interface MAC addresses must have the same prefix"); + return -EINVAL; + } + } + + return 0; +} + +static int __mlxsw_sp_inetaddr_event(struct mlxsw_sp *mlxsw_sp, + struct net_device *dev, unsigned long event, struct netlink_ext_ack *extack) { @@ -6708,21 +6771,24 @@ static int __mlxsw_sp_inetaddr_event(struct net_device *dev, else if (netif_is_lag_master(dev)) return mlxsw_sp_inetaddr_lag_event(dev, event, extack); else if (netif_is_bridge_master(dev)) - return mlxsw_sp_inetaddr_bridge_event(dev, event, extack); + return mlxsw_sp_inetaddr_bridge_event(mlxsw_sp, dev, event, + extack); else if (is_vlan_dev(dev)) - return mlxsw_sp_inetaddr_vlan_event(dev, event, extack); + return mlxsw_sp_inetaddr_vlan_event(mlxsw_sp, dev, event, + extack); else if (netif_is_macvlan(dev)) - return mlxsw_sp_inetaddr_macvlan_event(dev, event, extack); + return mlxsw_sp_inetaddr_macvlan_event(mlxsw_sp, dev, event, + extack); else return 0; } -int mlxsw_sp_inetaddr_event(struct notifier_block *unused, - unsigned long event, void *ptr) +static int mlxsw_sp_inetaddr_event(struct notifier_block *nb, + unsigned long event, void *ptr) { struct in_ifaddr *ifa = (struct in_ifaddr *) ptr; struct net_device *dev = ifa->ifa_dev->dev; - struct mlxsw_sp *mlxsw_sp; + struct mlxsw_sp_router *router; struct mlxsw_sp_rif *rif; int err = 0; @@ -6730,15 +6796,12 @@ int mlxsw_sp_inetaddr_event(struct notifier_block *unused, if (event == NETDEV_UP) goto out; - mlxsw_sp = mlxsw_sp_lower_get(dev); - if (!mlxsw_sp) - goto out; - - rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, dev); + router = container_of(nb, struct mlxsw_sp_router, inetaddr_nb); + rif = mlxsw_sp_rif_find_by_dev(router->mlxsw_sp, dev); if (!mlxsw_sp_rif_should_config(rif, dev, event)) goto out; - err = __mlxsw_sp_inetaddr_event(dev, event, NULL); + err = __mlxsw_sp_inetaddr_event(router->mlxsw_sp, dev, event, NULL); out: return notifier_from_errno(err); } @@ -6760,13 +6823,19 @@ int mlxsw_sp_inetaddr_valid_event(struct notifier_block *unused, if (!mlxsw_sp_rif_should_config(rif, dev, event)) goto out; - err = __mlxsw_sp_inetaddr_event(dev, event, ivi->extack); + err = mlxsw_sp_router_port_check_rif_addr(mlxsw_sp, dev, dev->dev_addr, + ivi->extack); + if (err) + goto out; + + err = __mlxsw_sp_inetaddr_event(mlxsw_sp, dev, event, ivi->extack); out: return notifier_from_errno(err); } struct mlxsw_sp_inet6addr_event_work { struct work_struct work; + struct mlxsw_sp *mlxsw_sp; struct net_device *dev; unsigned long event; }; @@ -6775,21 +6844,18 @@ static void mlxsw_sp_inet6addr_event_work(struct work_struct *work) { struct mlxsw_sp_inet6addr_event_work *inet6addr_work = container_of(work, struct mlxsw_sp_inet6addr_event_work, work); + struct mlxsw_sp *mlxsw_sp = inet6addr_work->mlxsw_sp; struct net_device *dev = inet6addr_work->dev; unsigned long event = inet6addr_work->event; - struct mlxsw_sp *mlxsw_sp; struct mlxsw_sp_rif *rif; rtnl_lock(); - mlxsw_sp = mlxsw_sp_lower_get(dev); - if (!mlxsw_sp) - goto out; rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, dev); if (!mlxsw_sp_rif_should_config(rif, dev, event)) goto out; - __mlxsw_sp_inetaddr_event(dev, event, NULL); + __mlxsw_sp_inetaddr_event(mlxsw_sp, dev, event, NULL); out: 
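	/* Illustrative sketch only (hypothetical foo_* names, not part of the
	 * patch): with the notifier_block now embedded in the owning router
	 * struct, handlers such as mlxsw_sp_inet6addr_event() below recover
	 * their instance with container_of() instead of a global lookup like
	 * mlxsw_sp_lower_get():
	 *
	 *	static int foo_inetaddr_event(struct notifier_block *nb,
	 *				      unsigned long event, void *ptr)
	 *	{
	 *		struct foo_router *router =
	 *			container_of(nb, struct foo_router, inetaddr_nb);
	 *		...
	 *	}
	 */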
rtnl_unlock(); dev_put(dev); @@ -6797,25 +6863,25 @@ out: } /* Called with rcu_read_lock() */ -int mlxsw_sp_inet6addr_event(struct notifier_block *unused, - unsigned long event, void *ptr) +static int mlxsw_sp_inet6addr_event(struct notifier_block *nb, + unsigned long event, void *ptr) { struct inet6_ifaddr *if6 = (struct inet6_ifaddr *) ptr; struct mlxsw_sp_inet6addr_event_work *inet6addr_work; struct net_device *dev = if6->idev->dev; + struct mlxsw_sp_router *router; /* NETDEV_UP event is handled by mlxsw_sp_inet6addr_valid_event */ if (event == NETDEV_UP) return NOTIFY_DONE; - if (!mlxsw_sp_port_dev_lower_find_rcu(dev)) - return NOTIFY_DONE; - inet6addr_work = kzalloc(sizeof(*inet6addr_work), GFP_ATOMIC); if (!inet6addr_work) return NOTIFY_BAD; + router = container_of(nb, struct mlxsw_sp_router, inet6addr_nb); INIT_WORK(&inet6addr_work->work, mlxsw_sp_inet6addr_event_work); + inet6addr_work->mlxsw_sp = router->mlxsw_sp; inet6addr_work->dev = dev; inet6addr_work->event = event; dev_hold(dev); @@ -6841,7 +6907,12 @@ int mlxsw_sp_inet6addr_valid_event(struct notifier_block *unused, if (!mlxsw_sp_rif_should_config(rif, dev, event)) goto out; - err = __mlxsw_sp_inetaddr_event(dev, event, i6vi->extack); + err = mlxsw_sp_router_port_check_rif_addr(mlxsw_sp, dev, dev->dev_addr, + i6vi->extack); + if (err) + goto out; + + err = __mlxsw_sp_inetaddr_event(mlxsw_sp, dev, event, i6vi->extack); out: return notifier_from_errno(err); } @@ -6863,20 +6934,14 @@ static int mlxsw_sp_rif_edit(struct mlxsw_sp *mlxsw_sp, u16 rif_index, return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ritr), ritr_pl); } -int mlxsw_sp_netdevice_router_port_event(struct net_device *dev) +static int +mlxsw_sp_router_port_change_event(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_rif *rif) { - struct mlxsw_sp *mlxsw_sp; - struct mlxsw_sp_rif *rif; + struct net_device *dev = rif->dev; u16 fid_index; int err; - mlxsw_sp = mlxsw_sp_lower_get(dev); - if (!mlxsw_sp) - return 0; - - rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, dev); - if (!rif) - return 0; fid_index = mlxsw_sp_fid_index(rif->fid); err = mlxsw_sp_rif_fdb_op(mlxsw_sp, rif->addr, fid_index, false); @@ -6920,6 +6985,41 @@ err_rif_edit: return err; } +static int mlxsw_sp_router_port_pre_changeaddr_event(struct mlxsw_sp_rif *rif, + struct netdev_notifier_pre_changeaddr_info *info) +{ + struct netlink_ext_ack *extack; + + extack = netdev_notifier_info_to_extack(&info->info); + return mlxsw_sp_router_port_check_rif_addr(rif->mlxsw_sp, rif->dev, + info->dev_addr, extack); +} + +int mlxsw_sp_netdevice_router_port_event(struct net_device *dev, + unsigned long event, void *ptr) +{ + struct mlxsw_sp *mlxsw_sp; + struct mlxsw_sp_rif *rif; + + mlxsw_sp = mlxsw_sp_lower_get(dev); + if (!mlxsw_sp) + return 0; + + rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, dev); + if (!rif) + return 0; + + switch (event) { + case NETDEV_CHANGEMTU: /* fall through */ + case NETDEV_CHANGEADDR: + return mlxsw_sp_router_port_change_event(mlxsw_sp, rif); + case NETDEV_PRE_CHANGEADDR: + return mlxsw_sp_router_port_pre_changeaddr_event(rif, ptr); + } + + return 0; +} + static int mlxsw_sp_port_vrf_join(struct mlxsw_sp *mlxsw_sp, struct net_device *l3_dev, struct netlink_ext_ack *extack) @@ -6931,9 +7031,10 @@ static int mlxsw_sp_port_vrf_join(struct mlxsw_sp *mlxsw_sp, */ rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, l3_dev); if (rif) - __mlxsw_sp_inetaddr_event(l3_dev, NETDEV_DOWN, extack); + __mlxsw_sp_inetaddr_event(mlxsw_sp, l3_dev, NETDEV_DOWN, + extack); - return __mlxsw_sp_inetaddr_event(l3_dev, NETDEV_UP, extack); 
+ return __mlxsw_sp_inetaddr_event(mlxsw_sp, l3_dev, NETDEV_UP, extack); } static void mlxsw_sp_port_vrf_leave(struct mlxsw_sp *mlxsw_sp, @@ -6944,7 +7045,7 @@ static void mlxsw_sp_port_vrf_leave(struct mlxsw_sp *mlxsw_sp, rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, l3_dev); if (!rif) return; - __mlxsw_sp_inetaddr_event(l3_dev, NETDEV_DOWN, NULL); + __mlxsw_sp_inetaddr_event(mlxsw_sp, l3_dev, NETDEV_DOWN, NULL); } int mlxsw_sp_netdevice_vrf_event(struct net_device *l3_dev, unsigned long event, @@ -6998,18 +7099,13 @@ static int mlxsw_sp_rif_macvlan_flush(struct mlxsw_sp_rif *rif) __mlxsw_sp_rif_macvlan_flush, rif); } -static struct mlxsw_sp_rif_subport * -mlxsw_sp_rif_subport_rif(const struct mlxsw_sp_rif *rif) -{ - return container_of(rif, struct mlxsw_sp_rif_subport, common); -} - static void mlxsw_sp_rif_subport_setup(struct mlxsw_sp_rif *rif, const struct mlxsw_sp_rif_params *params) { struct mlxsw_sp_rif_subport *rif_subport; rif_subport = mlxsw_sp_rif_subport_rif(rif); + refcount_set(&rif_subport->ref_count, 1); rif_subport->vid = params->vid; rif_subport->lag = params->lag; if (params->lag) @@ -7164,11 +7260,15 @@ static struct mlxsw_sp_fid * mlxsw_sp_rif_vlan_fid_get(struct mlxsw_sp_rif *rif, struct netlink_ext_ack *extack) { + struct net_device *br_dev = rif->dev; u16 vid; int err; if (is_vlan_dev(rif->dev)) { vid = vlan_dev_vlan_id(rif->dev); + br_dev = vlan_dev_real_dev(rif->dev); + if (WARN_ON(!netif_is_bridge_master(br_dev))) + return ERR_PTR(-EINVAL); } else { err = br_vlan_get_pvid(rif->dev, &vid); if (err < 0 || !vid) { @@ -7177,7 +7277,7 @@ mlxsw_sp_rif_vlan_fid_get(struct mlxsw_sp_rif *rif, } } - return mlxsw_sp_fid_8021q_get(rif->mlxsw_sp, vid); + return mlxsw_sp_bridge_fid_get(rif->mlxsw_sp, br_dev, vid, extack); } static void mlxsw_sp_rif_vlan_fdb_del(struct mlxsw_sp_rif *rif, const char *mac) @@ -7267,7 +7367,7 @@ static struct mlxsw_sp_fid * mlxsw_sp_rif_fid_fid_get(struct mlxsw_sp_rif *rif, struct netlink_ext_ack *extack) { - return mlxsw_sp_fid_8021d_get(rif->mlxsw_sp, rif->dev->ifindex); + return mlxsw_sp_bridge_fid_get(rif->mlxsw_sp, rif->dev, 0, extack); } static void mlxsw_sp_rif_fid_fdb_del(struct mlxsw_sp_rif *rif, const char *mac) @@ -7293,6 +7393,15 @@ static const struct mlxsw_sp_rif_ops mlxsw_sp_rif_fid_ops = { .fdb_del = mlxsw_sp_rif_fid_fdb_del, }; +static const struct mlxsw_sp_rif_ops mlxsw_sp_rif_vlan_emu_ops = { + .type = MLXSW_SP_RIF_TYPE_VLAN, + .rif_size = sizeof(struct mlxsw_sp_rif), + .configure = mlxsw_sp_rif_fid_configure, + .deconfigure = mlxsw_sp_rif_fid_deconfigure, + .fid_get = mlxsw_sp_rif_vlan_fid_get, + .fdb_del = mlxsw_sp_rif_vlan_fdb_del, +}; + static struct mlxsw_sp_rif_ipip_lb * mlxsw_sp_rif_ipip_lb_rif(struct mlxsw_sp_rif *rif) { @@ -7361,7 +7470,7 @@ static const struct mlxsw_sp_rif_ops mlxsw_sp_rif_ipip_lb_ops = { static const struct mlxsw_sp_rif_ops *mlxsw_sp_rif_ops_arr[] = { [MLXSW_SP_RIF_TYPE_SUBPORT] = &mlxsw_sp_rif_subport_ops, - [MLXSW_SP_RIF_TYPE_VLAN] = &mlxsw_sp_rif_vlan_ops, + [MLXSW_SP_RIF_TYPE_VLAN] = &mlxsw_sp_rif_vlan_emu_ops, [MLXSW_SP_RIF_TYPE_FID] = &mlxsw_sp_rif_fid_ops, [MLXSW_SP_RIF_TYPE_IPIP_LB] = &mlxsw_sp_rif_ipip_lb_ops, }; @@ -7552,6 +7661,16 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp) mlxsw_sp->router = router; router->mlxsw_sp = mlxsw_sp; + router->inetaddr_nb.notifier_call = mlxsw_sp_inetaddr_event; + err = register_inetaddr_notifier(&router->inetaddr_nb); + if (err) + goto err_register_inetaddr_notifier; + + router->inet6addr_nb.notifier_call = mlxsw_sp_inet6addr_event; + err = 
register_inet6addr_notifier(&router->inet6addr_nb); + if (err) + goto err_register_inet6addr_notifier; + INIT_LIST_HEAD(&mlxsw_sp->router->nexthop_neighs_list); err = __mlxsw_sp_router_init(mlxsw_sp); if (err) @@ -7637,6 +7756,10 @@ err_ipips_init: err_rifs_init: __mlxsw_sp_router_fini(mlxsw_sp); err_router_init: + unregister_inet6addr_notifier(&router->inet6addr_nb); +err_register_inet6addr_notifier: + unregister_inetaddr_notifier(&router->inetaddr_nb); +err_register_inetaddr_notifier: kfree(mlxsw_sp->router); return err; } @@ -7654,5 +7777,7 @@ void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp) mlxsw_sp_ipips_fini(mlxsw_sp); mlxsw_sp_rifs_fini(mlxsw_sp); __mlxsw_sp_router_fini(mlxsw_sp); + unregister_inet6addr_notifier(&mlxsw_sp->router->inet6addr_nb); + unregister_inetaddr_notifier(&mlxsw_sp->router->inetaddr_nb); kfree(mlxsw_sp->router); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c index d965fd275c90..ad5a9b9e1466 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c @@ -383,7 +383,7 @@ mlxsw_sp_span_entry_gretap4_deconfigure(struct mlxsw_sp_span_entry *span_entry) } static const struct mlxsw_sp_span_entry_ops mlxsw_sp_span_entry_ops_gretap4 = { - .can_handle = is_gretap_dev, + .can_handle = netif_is_gretap, .parms = mlxsw_sp_span_entry_gretap4_parms, .configure = mlxsw_sp_span_entry_gretap4_configure, .deconfigure = mlxsw_sp_span_entry_gretap4_deconfigure, @@ -484,7 +484,7 @@ mlxsw_sp_span_entry_gretap6_deconfigure(struct mlxsw_sp_span_entry *span_entry) static const struct mlxsw_sp_span_entry_ops mlxsw_sp_span_entry_ops_gretap6 = { - .can_handle = is_ip6gretap_dev, + .can_handle = netif_is_ip6gretap, .parms = mlxsw_sp_span_entry_gretap6_parms, .configure = mlxsw_sp_span_entry_gretap6_configure, .deconfigure = mlxsw_sp_span_entry_gretap6_deconfigure, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c index 50080c60a279..1bd2c6e15f8d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c @@ -85,13 +85,11 @@ struct mlxsw_sp_bridge_ops { struct mlxsw_sp_bridge_port *bridge_port, struct mlxsw_sp_port *mlxsw_sp_port); int (*vxlan_join)(struct mlxsw_sp_bridge_device *bridge_device, - const struct net_device *vxlan_dev, + const struct net_device *vxlan_dev, u16 vid, struct netlink_ext_ack *extack); - void (*vxlan_leave)(struct mlxsw_sp_bridge_device *bridge_device, - const struct net_device *vxlan_dev); struct mlxsw_sp_fid * (*fid_get)(struct mlxsw_sp_bridge_device *bridge_device, - u16 vid); + u16 vid, struct netlink_ext_ack *extack); struct mlxsw_sp_fid * (*fid_lookup)(struct mlxsw_sp_bridge_device *bridge_device, u16 vid); @@ -292,30 +290,6 @@ mlxsw_sp_bridge_port_destroy(struct mlxsw_sp_bridge_port *bridge_port) kfree(bridge_port); } -static bool -mlxsw_sp_bridge_port_should_destroy(const struct mlxsw_sp_bridge_port * - bridge_port) -{ - struct net_device *dev = bridge_port->dev; - struct mlxsw_sp *mlxsw_sp; - - if (is_vlan_dev(dev)) - mlxsw_sp = mlxsw_sp_lower_get(vlan_dev_real_dev(dev)); - else - mlxsw_sp = mlxsw_sp_lower_get(dev); - - /* In case ports were pulled from out of a bridged LAG, then - * it's possible the reference count isn't zero, yet the bridge - * port should be destroyed, as it's no longer an upper of ours. 
- */ - if (!mlxsw_sp && list_empty(&bridge_port->vlans_list)) - return true; - else if (bridge_port->ref_count == 0) - return true; - else - return false; -} - static struct mlxsw_sp_bridge_port * mlxsw_sp_bridge_port_get(struct mlxsw_sp_bridge *bridge, struct net_device *brport_dev) @@ -353,8 +327,7 @@ static void mlxsw_sp_bridge_port_put(struct mlxsw_sp_bridge *bridge, { struct mlxsw_sp_bridge_device *bridge_device; - bridge_port->ref_count--; - if (!mlxsw_sp_bridge_port_should_destroy(bridge_port)) + if (--bridge_port->ref_count != 0) return; bridge_device = bridge_port->bridge_device; mlxsw_sp_bridge_port_destroy(bridge_port); @@ -935,7 +908,8 @@ static int mlxsw_sp_port_attr_set(struct net_device *dev, static int mlxsw_sp_port_vlan_fid_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan, - struct mlxsw_sp_bridge_port *bridge_port) + struct mlxsw_sp_bridge_port *bridge_port, + struct netlink_ext_ack *extack) { struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp_port_vlan->mlxsw_sp_port; struct mlxsw_sp_bridge_device *bridge_device; @@ -945,7 +919,7 @@ mlxsw_sp_port_vlan_fid_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan, int err; bridge_device = bridge_port->bridge_device; - fid = bridge_device->ops->fid_get(bridge_device, vid); + fid = bridge_device->ops->fid_get(bridge_device, vid, extack); if (IS_ERR(fid)) return PTR_ERR(fid); @@ -1013,7 +987,8 @@ mlxsw_sp_port_pvid_determine(const struct mlxsw_sp_port *mlxsw_sp_port, static int mlxsw_sp_port_vlan_bridge_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan, - struct mlxsw_sp_bridge_port *bridge_port) + struct mlxsw_sp_bridge_port *bridge_port, + struct netlink_ext_ack *extack) { struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp_port_vlan->mlxsw_sp_port; struct mlxsw_sp_bridge_vlan *bridge_vlan; @@ -1021,12 +996,11 @@ mlxsw_sp_port_vlan_bridge_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan, int err; /* No need to continue if only VLAN flags were changed */ - if (mlxsw_sp_port_vlan->bridge_port) { - mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan); + if (mlxsw_sp_port_vlan->bridge_port) return 0; - } - err = mlxsw_sp_port_vlan_fid_join(mlxsw_sp_port_vlan, bridge_port); + err = mlxsw_sp_port_vlan_fid_join(mlxsw_sp_port_vlan, bridge_port, + extack); if (err) return err; @@ -1103,16 +1077,33 @@ mlxsw_sp_port_vlan_bridge_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan) static int mlxsw_sp_bridge_port_vlan_add(struct mlxsw_sp_port *mlxsw_sp_port, struct mlxsw_sp_bridge_port *bridge_port, - u16 vid, bool is_untagged, bool is_pvid) + u16 vid, bool is_untagged, bool is_pvid, + struct netlink_ext_ack *extack, + struct switchdev_trans *trans) { u16 pvid = mlxsw_sp_port_pvid_determine(mlxsw_sp_port, vid, is_pvid); struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan; u16 old_pvid = mlxsw_sp_port->pvid; int err; - mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_get(mlxsw_sp_port, vid); - if (IS_ERR(mlxsw_sp_port_vlan)) - return PTR_ERR(mlxsw_sp_port_vlan); + /* The only valid scenario in which a port-vlan already exists is if + * the VLAN flags were changed and the port-vlan is associated with the + * correct bridge port + */ + mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); + if (mlxsw_sp_port_vlan && + mlxsw_sp_port_vlan->bridge_port != bridge_port) + return -EEXIST; + + if (switchdev_trans_ph_prepare(trans)) + return 0; + + if (!mlxsw_sp_port_vlan) { + mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_create(mlxsw_sp_port, + vid); + if (IS_ERR(mlxsw_sp_port_vlan)) + return PTR_ERR(mlxsw_sp_port_vlan); + } err = 
mlxsw_sp_port_vlan_set(mlxsw_sp_port, vid, vid, true, is_untagged); @@ -1123,7 +1114,8 @@ mlxsw_sp_bridge_port_vlan_add(struct mlxsw_sp_port *mlxsw_sp_port, if (err) goto err_port_pvid_set; - err = mlxsw_sp_port_vlan_bridge_join(mlxsw_sp_port_vlan, bridge_port); + err = mlxsw_sp_port_vlan_bridge_join(mlxsw_sp_port_vlan, bridge_port, + extack); if (err) goto err_port_vlan_bridge_join; @@ -1134,7 +1126,7 @@ err_port_vlan_bridge_join: err_port_pvid_set: mlxsw_sp_port_vlan_set(mlxsw_sp_port, vid, vid, false, false); err_port_vlan_set: - mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan); + mlxsw_sp_port_vlan_destroy(mlxsw_sp_port_vlan); return err; } @@ -1173,7 +1165,8 @@ mlxsw_sp_br_ban_rif_pvid_change(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_port_vlans_add(struct mlxsw_sp_port *mlxsw_sp_port, const struct switchdev_obj_port_vlan *vlan, - struct switchdev_trans *trans) + struct switchdev_trans *trans, + struct netlink_ext_ack *extack) { bool flag_untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; bool flag_pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID; @@ -1195,9 +1188,6 @@ static int mlxsw_sp_port_vlans_add(struct mlxsw_sp_port *mlxsw_sp_port, return err; } - if (switchdev_trans_ph_prepare(trans)) - return 0; - bridge_port = mlxsw_sp_bridge_port_find(mlxsw_sp->bridge, orig_dev); if (WARN_ON(!bridge_port)) return -EINVAL; @@ -1210,7 +1200,7 @@ static int mlxsw_sp_port_vlans_add(struct mlxsw_sp_port *mlxsw_sp_port, err = mlxsw_sp_bridge_port_vlan_add(mlxsw_sp_port, bridge_port, vid, flag_untagged, - flag_pvid); + flag_pvid, extack, trans); if (err) return err; } @@ -1779,7 +1769,8 @@ static void mlxsw_sp_span_respin_schedule(struct mlxsw_sp *mlxsw_sp) static int mlxsw_sp_port_obj_add(struct net_device *dev, const struct switchdev_obj *obj, - struct switchdev_trans *trans) + struct switchdev_trans *trans, + struct netlink_ext_ack *extack) { struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev); const struct switchdev_obj_port_vlan *vlan; @@ -1788,7 +1779,8 @@ static int mlxsw_sp_port_obj_add(struct net_device *dev, switch (obj->id) { case SWITCHDEV_OBJ_ID_PORT_VLAN: vlan = SWITCHDEV_OBJ_PORT_VLAN(obj); - err = mlxsw_sp_port_vlans_add(mlxsw_sp_port, vlan, trans); + err = mlxsw_sp_port_vlans_add(mlxsw_sp_port, vlan, trans, + extack); if (switchdev_trans_ph_prepare(trans)) { /* The event is emitted before the changes are actually @@ -1826,7 +1818,7 @@ mlxsw_sp_bridge_port_vlan_del(struct mlxsw_sp_port *mlxsw_sp_port, mlxsw_sp_port_vlan_bridge_leave(mlxsw_sp_port_vlan); mlxsw_sp_port_pvid_set(mlxsw_sp_port, pvid); mlxsw_sp_port_vlan_set(mlxsw_sp_port, vid, vid, false, false); - mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan); + mlxsw_sp_port_vlan_destroy(mlxsw_sp_port_vlan); } static int mlxsw_sp_port_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port, @@ -1974,8 +1966,6 @@ static struct mlxsw_sp_port *mlxsw_sp_lag_rep_port(struct mlxsw_sp *mlxsw_sp, static const struct switchdev_ops mlxsw_sp_port_switchdev_ops = { .switchdev_port_attr_get = mlxsw_sp_port_attr_get, .switchdev_port_attr_set = mlxsw_sp_port_attr_set, - .switchdev_port_obj_add = mlxsw_sp_port_obj_add, - .switchdev_port_obj_del = mlxsw_sp_port_obj_del, }; static int @@ -1984,19 +1974,14 @@ mlxsw_sp_bridge_8021q_port_join(struct mlxsw_sp_bridge_device *bridge_device, struct mlxsw_sp_port *mlxsw_sp_port, struct netlink_ext_ack *extack) { - struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan; - if (is_vlan_dev(bridge_port->dev)) { NL_SET_ERR_MSG_MOD(extack, "Can not enslave a VLAN device to a VLAN-aware bridge"); return -EINVAL; } - mlxsw_sp_port_vlan = 
mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, 1); - if (WARN_ON(!mlxsw_sp_port_vlan)) - return -EINVAL; - - /* Let VLAN-aware bridge take care of its own VLANs */ - mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan); + /* Port is no longer usable as a router interface */ + if (mlxsw_sp_port->default_vlan->fid) + mlxsw_sp_port_vlan_router_leave(mlxsw_sp_port->default_vlan); return 0; } @@ -2006,41 +1991,133 @@ mlxsw_sp_bridge_8021q_port_leave(struct mlxsw_sp_bridge_device *bridge_device, struct mlxsw_sp_bridge_port *bridge_port, struct mlxsw_sp_port *mlxsw_sp_port) { - mlxsw_sp_port_vlan_get(mlxsw_sp_port, 1); /* Make sure untagged frames are allowed to ingress */ - mlxsw_sp_port_pvid_set(mlxsw_sp_port, 1); + mlxsw_sp_port_pvid_set(mlxsw_sp_port, MLXSW_SP_DEFAULT_VID); } static int mlxsw_sp_bridge_8021q_vxlan_join(struct mlxsw_sp_bridge_device *bridge_device, - const struct net_device *vxlan_dev, + const struct net_device *vxlan_dev, u16 vid, struct netlink_ext_ack *extack) { - WARN_ON(1); - return -EINVAL; + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_device->dev); + struct vxlan_dev *vxlan = netdev_priv(vxlan_dev); + struct mlxsw_sp_nve_params params = { + .type = MLXSW_SP_NVE_TYPE_VXLAN, + .vni = vxlan->cfg.vni, + .dev = vxlan_dev, + }; + struct mlxsw_sp_fid *fid; + int err; + + /* If the VLAN is 0, we need to find the VLAN that is configured as + * PVID and egress untagged on the bridge port of the VxLAN device. + * It is possible no such VLAN exists + */ + if (!vid) { + err = mlxsw_sp_vxlan_mapped_vid(vxlan_dev, &vid); + if (err || !vid) + return err; + } + + /* If no other port is a member of the VLAN, then the FID does not + * exist. NVE will be enabled on the FID once a port joins the VLAN + */ + fid = mlxsw_sp_fid_8021q_lookup(mlxsw_sp, vid); + if (!fid) + return 0; + + if (mlxsw_sp_fid_vni_is_set(fid)) { + err = -EINVAL; + goto err_vni_exists; + } + + err = mlxsw_sp_nve_fid_enable(mlxsw_sp, fid, &params, extack); + if (err) + goto err_nve_fid_enable; + + /* The tunnel port does not hold a reference on the FID. Only + * local ports and the router port do + */ + mlxsw_sp_fid_put(fid); + + return 0; + +err_nve_fid_enable: +err_vni_exists: + mlxsw_sp_fid_put(fid); + return err; } -static void -mlxsw_sp_bridge_8021q_vxlan_leave(struct mlxsw_sp_bridge_device *bridge_device, - const struct net_device *vxlan_dev) +static struct net_device * +mlxsw_sp_bridge_8021q_vxlan_dev_find(struct net_device *br_dev, u16 vid) { + struct net_device *dev; + struct list_head *iter; + + netdev_for_each_lower_dev(br_dev, dev, iter) { + u16 pvid; + int err; + + if (!netif_is_vxlan(dev)) + continue; + + err = mlxsw_sp_vxlan_mapped_vid(dev, &pvid); + if (err || pvid != vid) + continue; + + return dev; + } + + return NULL; } static struct mlxsw_sp_fid * mlxsw_sp_bridge_8021q_fid_get(struct mlxsw_sp_bridge_device *bridge_device, - u16 vid) + u16 vid, struct netlink_ext_ack *extack) { struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_device->dev); + struct net_device *vxlan_dev; + struct mlxsw_sp_fid *fid; + int err; + + fid = mlxsw_sp_fid_8021q_get(mlxsw_sp, vid); + if (IS_ERR(fid)) + return fid; + + if (mlxsw_sp_fid_vni_is_set(fid)) + return fid; + + /* Find the VxLAN device that has the specified VLAN configured as + * PVID and egress untagged. 
There can be at most one such device + */ + vxlan_dev = mlxsw_sp_bridge_8021q_vxlan_dev_find(bridge_device->dev, + vid); + if (!vxlan_dev) + return fid; + + if (!netif_running(vxlan_dev)) + return fid; + + err = mlxsw_sp_bridge_8021q_vxlan_join(bridge_device, vxlan_dev, vid, + extack); + if (err) + goto err_vxlan_join; + + return fid; - return mlxsw_sp_fid_8021q_get(mlxsw_sp, vid); +err_vxlan_join: + mlxsw_sp_fid_put(fid); + return ERR_PTR(err); } static struct mlxsw_sp_fid * mlxsw_sp_bridge_8021q_fid_lookup(struct mlxsw_sp_bridge_device *bridge_device, u16 vid) { - WARN_ON(1); - return NULL; + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_device->dev); + + return mlxsw_sp_fid_8021q_lookup(mlxsw_sp, vid); } static u16 @@ -2054,7 +2131,6 @@ static const struct mlxsw_sp_bridge_ops mlxsw_sp_bridge_8021q_ops = { .port_join = mlxsw_sp_bridge_8021q_port_join, .port_leave = mlxsw_sp_bridge_8021q_port_leave, .vxlan_join = mlxsw_sp_bridge_8021q_vxlan_join, - .vxlan_leave = mlxsw_sp_bridge_8021q_vxlan_leave, .fid_get = mlxsw_sp_bridge_8021q_fid_get, .fid_lookup = mlxsw_sp_bridge_8021q_fid_lookup, .fid_vid = mlxsw_sp_bridge_8021q_fid_vid, @@ -2087,7 +2163,7 @@ mlxsw_sp_bridge_8021d_port_join(struct mlxsw_sp_bridge_device *bridge_device, struct net_device *dev = bridge_port->dev; u16 vid; - vid = is_vlan_dev(dev) ? vlan_dev_vlan_id(dev) : 1; + vid = is_vlan_dev(dev) ? vlan_dev_vlan_id(dev) : MLXSW_SP_DEFAULT_VID; mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); if (WARN_ON(!mlxsw_sp_port_vlan)) return -EINVAL; @@ -2101,7 +2177,8 @@ mlxsw_sp_bridge_8021d_port_join(struct mlxsw_sp_bridge_device *bridge_device, if (mlxsw_sp_port_vlan->fid) mlxsw_sp_port_vlan_router_leave(mlxsw_sp_port_vlan); - return mlxsw_sp_port_vlan_bridge_join(mlxsw_sp_port_vlan, bridge_port); + return mlxsw_sp_port_vlan_bridge_join(mlxsw_sp_port_vlan, bridge_port, + extack); } static void @@ -2113,9 +2190,9 @@ mlxsw_sp_bridge_8021d_port_leave(struct mlxsw_sp_bridge_device *bridge_device, struct net_device *dev = bridge_port->dev; u16 vid; - vid = is_vlan_dev(dev) ? vlan_dev_vlan_id(dev) : 1; + vid = is_vlan_dev(dev) ? 
vlan_dev_vlan_id(dev) : MLXSW_SP_DEFAULT_VID; mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); - if (!mlxsw_sp_port_vlan) + if (!mlxsw_sp_port_vlan || !mlxsw_sp_port_vlan->bridge_port) return; mlxsw_sp_port_vlan_bridge_leave(mlxsw_sp_port_vlan); @@ -2123,7 +2200,7 @@ mlxsw_sp_bridge_8021d_port_leave(struct mlxsw_sp_bridge_device *bridge_device, static int mlxsw_sp_bridge_8021d_vxlan_join(struct mlxsw_sp_bridge_device *bridge_device, - const struct net_device *vxlan_dev, + const struct net_device *vxlan_dev, u16 vid, struct netlink_ext_ack *extack) { struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_device->dev); @@ -2162,29 +2239,9 @@ err_vni_exists: return err; } -static void -mlxsw_sp_bridge_8021d_vxlan_leave(struct mlxsw_sp_bridge_device *bridge_device, - const struct net_device *vxlan_dev) -{ - struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_device->dev); - struct mlxsw_sp_fid *fid; - - fid = mlxsw_sp_fid_8021d_lookup(mlxsw_sp, bridge_device->dev->ifindex); - if (WARN_ON(!fid)) - return; - - /* If the VxLAN device is down, then the FID does not have a VNI */ - if (!mlxsw_sp_fid_vni_is_set(fid)) - goto out; - - mlxsw_sp_nve_fid_disable(mlxsw_sp, fid); -out: - mlxsw_sp_fid_put(fid); -} - static struct mlxsw_sp_fid * mlxsw_sp_bridge_8021d_fid_get(struct mlxsw_sp_bridge_device *bridge_device, - u16 vid) + u16 vid, struct netlink_ext_ack *extack) { struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_device->dev); struct net_device *vxlan_dev; @@ -2205,7 +2262,8 @@ mlxsw_sp_bridge_8021d_fid_get(struct mlxsw_sp_bridge_device *bridge_device, if (!netif_running(vxlan_dev)) return fid; - err = mlxsw_sp_bridge_8021d_vxlan_join(bridge_device, vxlan_dev, NULL); + err = mlxsw_sp_bridge_8021d_vxlan_join(bridge_device, vxlan_dev, 0, + extack); if (err) goto err_vxlan_join; @@ -2240,7 +2298,6 @@ static const struct mlxsw_sp_bridge_ops mlxsw_sp_bridge_8021d_ops = { .port_join = mlxsw_sp_bridge_8021d_port_join, .port_leave = mlxsw_sp_bridge_8021d_port_leave, .vxlan_join = mlxsw_sp_bridge_8021d_vxlan_join, - .vxlan_leave = mlxsw_sp_bridge_8021d_vxlan_leave, .fid_get = mlxsw_sp_bridge_8021d_fid_get, .fid_lookup = mlxsw_sp_bridge_8021d_fid_lookup, .fid_vid = mlxsw_sp_bridge_8021d_fid_vid, @@ -2295,7 +2352,7 @@ void mlxsw_sp_port_bridge_leave(struct mlxsw_sp_port *mlxsw_sp_port, int mlxsw_sp_bridge_vxlan_join(struct mlxsw_sp *mlxsw_sp, const struct net_device *br_dev, - const struct net_device *vxlan_dev, + const struct net_device *vxlan_dev, u16 vid, struct netlink_ext_ack *extack) { struct mlxsw_sp_bridge_device *bridge_device; @@ -2304,20 +2361,102 @@ int mlxsw_sp_bridge_vxlan_join(struct mlxsw_sp *mlxsw_sp, if (WARN_ON(!bridge_device)) return -EINVAL; - return bridge_device->ops->vxlan_join(bridge_device, vxlan_dev, extack); + return bridge_device->ops->vxlan_join(bridge_device, vxlan_dev, vid, + extack); } void mlxsw_sp_bridge_vxlan_leave(struct mlxsw_sp *mlxsw_sp, - const struct net_device *br_dev, const struct net_device *vxlan_dev) +{ + struct vxlan_dev *vxlan = netdev_priv(vxlan_dev); + struct mlxsw_sp_fid *fid; + + /* If the VxLAN device is down, then the FID does not have a VNI */ + fid = mlxsw_sp_fid_lookup_by_vni(mlxsw_sp, vxlan->cfg.vni); + if (!fid) + return; + + mlxsw_sp_nve_fid_disable(mlxsw_sp, fid); + mlxsw_sp_fid_put(fid); +} + +struct mlxsw_sp_fid *mlxsw_sp_bridge_fid_get(struct mlxsw_sp *mlxsw_sp, + const struct net_device *br_dev, + u16 vid, + struct netlink_ext_ack *extack) { struct mlxsw_sp_bridge_device *bridge_device; bridge_device = 
mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev); if (WARN_ON(!bridge_device)) - return; + return ERR_PTR(-EINVAL); + + return bridge_device->ops->fid_get(bridge_device, vid, extack); +} + +static void +mlxsw_sp_switchdev_vxlan_addr_convert(const union vxlan_addr *vxlan_addr, + enum mlxsw_sp_l3proto *proto, + union mlxsw_sp_l3addr *addr) +{ + if (vxlan_addr->sa.sa_family == AF_INET) { + addr->addr4 = vxlan_addr->sin.sin_addr.s_addr; + *proto = MLXSW_SP_L3_PROTO_IPV4; + } else { + addr->addr6 = vxlan_addr->sin6.sin6_addr; + *proto = MLXSW_SP_L3_PROTO_IPV6; + } +} - bridge_device->ops->vxlan_leave(bridge_device, vxlan_dev); +static void +mlxsw_sp_switchdev_addr_vxlan_convert(enum mlxsw_sp_l3proto proto, + const union mlxsw_sp_l3addr *addr, + union vxlan_addr *vxlan_addr) +{ + switch (proto) { + case MLXSW_SP_L3_PROTO_IPV4: + vxlan_addr->sa.sa_family = AF_INET; + vxlan_addr->sin.sin_addr.s_addr = addr->addr4; + break; + case MLXSW_SP_L3_PROTO_IPV6: + vxlan_addr->sa.sa_family = AF_INET6; + vxlan_addr->sin6.sin6_addr = addr->addr6; + break; + } +} + +static void mlxsw_sp_fdb_vxlan_call_notifiers(struct net_device *dev, + const char *mac, + enum mlxsw_sp_l3proto proto, + union mlxsw_sp_l3addr *addr, + __be32 vni, bool adding) +{ + struct switchdev_notifier_vxlan_fdb_info info; + struct vxlan_dev *vxlan = netdev_priv(dev); + enum switchdev_notifier_type type; + + type = adding ? SWITCHDEV_VXLAN_FDB_ADD_TO_BRIDGE : + SWITCHDEV_VXLAN_FDB_DEL_TO_BRIDGE; + mlxsw_sp_switchdev_addr_vxlan_convert(proto, addr, &info.remote_ip); + info.remote_port = vxlan->cfg.dst_port; + info.remote_vni = vni; + info.remote_ifindex = 0; + ether_addr_copy(info.eth_addr, mac); + info.vni = vni; + info.offloaded = adding; + call_switchdev_notifiers(type, dev, &info.info); +} + +static void mlxsw_sp_fdb_nve_call_notifiers(struct net_device *dev, + const char *mac, + enum mlxsw_sp_l3proto proto, + union mlxsw_sp_l3addr *addr, + __be32 vni, + bool adding) +{ + if (netif_is_vxlan(dev)) + mlxsw_sp_fdb_vxlan_call_notifiers(dev, mac, proto, addr, vni, + adding); } static void @@ -2428,7 +2567,8 @@ static void mlxsw_sp_fdb_notify_mac_lag_process(struct mlxsw_sp *mlxsw_sp, bridge_device = bridge_port->bridge_device; vid = bridge_device->vlan_enabled ? mlxsw_sp_port_vlan->vid : 0; - lag_vid = mlxsw_sp_port_vlan->vid; + lag_vid = mlxsw_sp_fid_lag_vid_valid(mlxsw_sp_port_vlan->fid) ? 
+ mlxsw_sp_port_vlan->vid : 0; do_fdb_op: err = mlxsw_sp_port_fdb_uc_lag_op(mlxsw_sp, lag_id, mac, fid, lag_vid, @@ -2451,6 +2591,122 @@ just_remove: goto do_fdb_op; } +static int +__mlxsw_sp_fdb_notify_mac_uc_tunnel_process(struct mlxsw_sp *mlxsw_sp, + const struct mlxsw_sp_fid *fid, + bool adding, + struct net_device **nve_dev, + u16 *p_vid, __be32 *p_vni) +{ + struct mlxsw_sp_bridge_device *bridge_device; + struct net_device *br_dev, *dev; + int nve_ifindex; + int err; + + err = mlxsw_sp_fid_nve_ifindex(fid, &nve_ifindex); + if (err) + return err; + + err = mlxsw_sp_fid_vni(fid, p_vni); + if (err) + return err; + + dev = __dev_get_by_index(&init_net, nve_ifindex); + if (!dev) + return -EINVAL; + *nve_dev = dev; + + if (!netif_running(dev)) + return -EINVAL; + + if (adding && !br_port_flag_is_set(dev, BR_LEARNING)) + return -EINVAL; + + if (adding && netif_is_vxlan(dev)) { + struct vxlan_dev *vxlan = netdev_priv(dev); + + if (!(vxlan->cfg.flags & VXLAN_F_LEARN)) + return -EINVAL; + } + + br_dev = netdev_master_upper_dev_get(dev); + if (!br_dev) + return -EINVAL; + + bridge_device = mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev); + if (!bridge_device) + return -EINVAL; + + *p_vid = bridge_device->ops->fid_vid(bridge_device, fid); + + return 0; +} + +static void mlxsw_sp_fdb_notify_mac_uc_tunnel_process(struct mlxsw_sp *mlxsw_sp, + char *sfn_pl, + int rec_index, + bool adding) +{ + enum mlxsw_reg_sfn_uc_tunnel_protocol sfn_proto; + enum switchdev_notifier_type type; + struct net_device *nve_dev; + union mlxsw_sp_l3addr addr; + struct mlxsw_sp_fid *fid; + char mac[ETH_ALEN]; + u16 fid_index, vid; + __be32 vni; + u32 uip; + int err; + + mlxsw_reg_sfn_uc_tunnel_unpack(sfn_pl, rec_index, mac, &fid_index, + &uip, &sfn_proto); + + fid = mlxsw_sp_fid_lookup_by_index(mlxsw_sp, fid_index); + if (!fid) + goto err_fid_lookup; + + err = mlxsw_sp_nve_learned_ip_resolve(mlxsw_sp, uip, + (enum mlxsw_sp_l3proto) sfn_proto, + &addr); + if (err) + goto err_ip_resolve; + + err = __mlxsw_sp_fdb_notify_mac_uc_tunnel_process(mlxsw_sp, fid, adding, + &nve_dev, &vid, &vni); + if (err) + goto err_fdb_process; + + err = mlxsw_sp_port_fdb_tunnel_uc_op(mlxsw_sp, mac, fid_index, + (enum mlxsw_sp_l3proto) sfn_proto, + &addr, adding, true); + if (err) + goto err_fdb_op; + + mlxsw_sp_fdb_nve_call_notifiers(nve_dev, mac, + (enum mlxsw_sp_l3proto) sfn_proto, + &addr, vni, adding); + + type = adding ? SWITCHDEV_FDB_ADD_TO_BRIDGE : + SWITCHDEV_FDB_DEL_TO_BRIDGE; + mlxsw_sp_fdb_call_notifiers(type, mac, vid, nve_dev, adding); + + mlxsw_sp_fid_put(fid); + + return; + +err_fdb_op: +err_fdb_process: +err_ip_resolve: + mlxsw_sp_fid_put(fid); +err_fid_lookup: + /* Remove an FDB entry in case we cannot process it. Otherwise the + * device will keep sending the same notification over and over again. 
+ */ + mlxsw_sp_port_fdb_tunnel_uc_op(mlxsw_sp, mac, fid_index, + (enum mlxsw_sp_l3proto) sfn_proto, &addr, + false, true); +} + static void mlxsw_sp_fdb_notify_rec_process(struct mlxsw_sp *mlxsw_sp, char *sfn_pl, int rec_index) { @@ -2471,6 +2727,14 @@ static void mlxsw_sp_fdb_notify_rec_process(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_fdb_notify_mac_lag_process(mlxsw_sp, sfn_pl, rec_index, false); break; + case MLXSW_REG_SFN_REC_TYPE_LEARNED_UNICAST_TUNNEL: + mlxsw_sp_fdb_notify_mac_uc_tunnel_process(mlxsw_sp, sfn_pl, + rec_index, true); + break; + case MLXSW_REG_SFN_REC_TYPE_AGED_OUT_UNICAST_TUNNEL: + mlxsw_sp_fdb_notify_mac_uc_tunnel_process(mlxsw_sp, sfn_pl, + rec_index, false); + break; } } @@ -2525,20 +2789,6 @@ struct mlxsw_sp_switchdev_event_work { unsigned long event; }; -static void -mlxsw_sp_switchdev_vxlan_addr_convert(const union vxlan_addr *vxlan_addr, - enum mlxsw_sp_l3proto *proto, - union mlxsw_sp_l3addr *addr) -{ - if (vxlan_addr->sa.sa_family == AF_INET) { - addr->addr4 = vxlan_addr->sin.sin_addr.s_addr; - *proto = MLXSW_SP_L3_PROTO_IPV4; - } else { - addr->addr6 = vxlan_addr->sin6.sin6_addr; - *proto = MLXSW_SP_L3_PROTO_IPV6; - } -} - static void mlxsw_sp_switchdev_bridge_vxlan_fdb_event(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_switchdev_event_work * @@ -2604,7 +2854,8 @@ mlxsw_sp_switchdev_bridge_nve_fdb_event(struct mlxsw_sp_switchdev_event_work * switchdev_work->event != SWITCHDEV_FDB_DEL_TO_DEVICE) return; - if (!switchdev_work->fdb_info.added_by_user) + if (switchdev_work->event == SWITCHDEV_FDB_ADD_TO_DEVICE && + !switchdev_work->fdb_info.added_by_user) return; if (!netif_running(dev)) @@ -2947,10 +3198,274 @@ err_addr_alloc: return NOTIFY_BAD; } -static struct notifier_block mlxsw_sp_switchdev_notifier = { +struct notifier_block mlxsw_sp_switchdev_notifier = { .notifier_call = mlxsw_sp_switchdev_event, }; +static int +mlxsw_sp_switchdev_vxlan_vlan_add(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_bridge_device *bridge_device, + const struct net_device *vxlan_dev, u16 vid, + bool flag_untagged, bool flag_pvid, + struct switchdev_trans *trans, + struct netlink_ext_ack *extack) +{ + struct vxlan_dev *vxlan = netdev_priv(vxlan_dev); + __be32 vni = vxlan->cfg.vni; + struct mlxsw_sp_fid *fid; + u16 old_vid; + int err; + + /* We cannot have the same VLAN as PVID and egress untagged on multiple + * VxLAN devices. Note that we get this notification before the VLAN is + * actually added to the bridge's database, so it is not possible for + * the lookup function to return 'vxlan_dev' + */ + if (flag_untagged && flag_pvid && + mlxsw_sp_bridge_8021q_vxlan_dev_find(bridge_device->dev, vid)) + return -EINVAL; + + if (switchdev_trans_ph_prepare(trans)) + return 0; + + if (!netif_running(vxlan_dev)) + return 0; + + /* First case: FID is not associated with this VNI, but the new VLAN + * is both PVID and egress untagged. Need to enable NVE on the FID, if + * it exists + */ + fid = mlxsw_sp_fid_lookup_by_vni(mlxsw_sp, vni); + if (!fid) { + if (!flag_untagged || !flag_pvid) + return 0; + return mlxsw_sp_bridge_8021q_vxlan_join(bridge_device, + vxlan_dev, vid, extack); + } + + /* Second case: FID is associated with the VNI and the VLAN associated + * with the FID is the same as the notified VLAN. 
This means the flags + * (PVID / egress untagged) were toggled and that NVE should be + * disabled on the FID + */ + old_vid = mlxsw_sp_fid_8021q_vid(fid); + if (vid == old_vid) { + if (WARN_ON(flag_untagged && flag_pvid)) { + mlxsw_sp_fid_put(fid); + return -EINVAL; + } + mlxsw_sp_bridge_vxlan_leave(mlxsw_sp, vxlan_dev); + mlxsw_sp_fid_put(fid); + return 0; + } + + /* Third case: A new VLAN was configured on the VxLAN device, but this + * VLAN is not PVID, so there is nothing to do. + */ + if (!flag_pvid) { + mlxsw_sp_fid_put(fid); + return 0; + } + + /* Fourth case: The new VLAN is PVID, which means the VLAN currently + * mapped to the VNI should be unmapped + */ + mlxsw_sp_bridge_vxlan_leave(mlxsw_sp, vxlan_dev); + mlxsw_sp_fid_put(fid); + + /* Fifth case: The new VLAN is also egress untagged, which means the + * VLAN needs to be mapped to the VNI + */ + if (!flag_untagged) + return 0; + + err = mlxsw_sp_bridge_8021q_vxlan_join(bridge_device, vxlan_dev, vid, + extack); + if (err) + goto err_vxlan_join; + + return 0; + +err_vxlan_join: + mlxsw_sp_bridge_8021q_vxlan_join(bridge_device, vxlan_dev, old_vid, + NULL); + return err; +} + +static void +mlxsw_sp_switchdev_vxlan_vlan_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_bridge_device *bridge_device, + const struct net_device *vxlan_dev, u16 vid) +{ + struct vxlan_dev *vxlan = netdev_priv(vxlan_dev); + __be32 vni = vxlan->cfg.vni; + struct mlxsw_sp_fid *fid; + + if (!netif_running(vxlan_dev)) + return; + + fid = mlxsw_sp_fid_lookup_by_vni(mlxsw_sp, vni); + if (!fid) + return; + + /* A different VLAN than the one mapped to the VNI is deleted */ + if (mlxsw_sp_fid_8021q_vid(fid) != vid) + goto out; + + mlxsw_sp_bridge_vxlan_leave(mlxsw_sp, vxlan_dev); + +out: + mlxsw_sp_fid_put(fid); +} + +static int +mlxsw_sp_switchdev_vxlan_vlans_add(struct net_device *vxlan_dev, + struct switchdev_notifier_port_obj_info * + port_obj_info) +{ + struct switchdev_obj_port_vlan *vlan = + SWITCHDEV_OBJ_PORT_VLAN(port_obj_info->obj); + bool flag_untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; + bool flag_pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID; + struct switchdev_trans *trans = port_obj_info->trans; + struct mlxsw_sp_bridge_device *bridge_device; + struct netlink_ext_ack *extack; + struct mlxsw_sp *mlxsw_sp; + struct net_device *br_dev; + u16 vid; + + extack = switchdev_notifier_info_to_extack(&port_obj_info->info); + br_dev = netdev_master_upper_dev_get(vxlan_dev); + if (!br_dev) + return 0; + + mlxsw_sp = mlxsw_sp_lower_get(br_dev); + if (!mlxsw_sp) + return 0; + + port_obj_info->handled = true; + + bridge_device = mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev); + if (!bridge_device) + return -EINVAL; + + if (!bridge_device->vlan_enabled) + return 0; + + for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) { + int err; + + err = mlxsw_sp_switchdev_vxlan_vlan_add(mlxsw_sp, bridge_device, + vxlan_dev, vid, + flag_untagged, + flag_pvid, trans, + extack); + if (err) + return err; + } + + return 0; +} + +static void +mlxsw_sp_switchdev_vxlan_vlans_del(struct net_device *vxlan_dev, + struct switchdev_notifier_port_obj_info * + port_obj_info) +{ + struct switchdev_obj_port_vlan *vlan = + SWITCHDEV_OBJ_PORT_VLAN(port_obj_info->obj); + struct mlxsw_sp_bridge_device *bridge_device; + struct mlxsw_sp *mlxsw_sp; + struct net_device *br_dev; + u16 vid; + + br_dev = netdev_master_upper_dev_get(vxlan_dev); + if (!br_dev) + return; + + mlxsw_sp = mlxsw_sp_lower_get(br_dev); + if (!mlxsw_sp) + return; + + port_obj_info->handled = true; + + bridge_device 
= mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev); + if (!bridge_device) + return; + + if (!bridge_device->vlan_enabled) + return; + + for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) + mlxsw_sp_switchdev_vxlan_vlan_del(mlxsw_sp, bridge_device, + vxlan_dev, vid); +} + +static int +mlxsw_sp_switchdev_handle_vxlan_obj_add(struct net_device *vxlan_dev, + struct switchdev_notifier_port_obj_info * + port_obj_info) +{ + int err = 0; + + switch (port_obj_info->obj->id) { + case SWITCHDEV_OBJ_ID_PORT_VLAN: + err = mlxsw_sp_switchdev_vxlan_vlans_add(vxlan_dev, + port_obj_info); + break; + default: + break; + } + + return err; +} + +static void +mlxsw_sp_switchdev_handle_vxlan_obj_del(struct net_device *vxlan_dev, + struct switchdev_notifier_port_obj_info * + port_obj_info) +{ + switch (port_obj_info->obj->id) { + case SWITCHDEV_OBJ_ID_PORT_VLAN: + mlxsw_sp_switchdev_vxlan_vlans_del(vxlan_dev, port_obj_info); + break; + default: + break; + } +} + +static int mlxsw_sp_switchdev_blocking_event(struct notifier_block *unused, + unsigned long event, void *ptr) +{ + struct net_device *dev = switchdev_notifier_info_to_dev(ptr); + int err = 0; + + switch (event) { + case SWITCHDEV_PORT_OBJ_ADD: + if (netif_is_vxlan(dev)) + err = mlxsw_sp_switchdev_handle_vxlan_obj_add(dev, ptr); + else + err = switchdev_handle_port_obj_add(dev, ptr, + mlxsw_sp_port_dev_check, + mlxsw_sp_port_obj_add); + return notifier_from_errno(err); + case SWITCHDEV_PORT_OBJ_DEL: + if (netif_is_vxlan(dev)) + mlxsw_sp_switchdev_handle_vxlan_obj_del(dev, ptr); + else + err = switchdev_handle_port_obj_del(dev, ptr, + mlxsw_sp_port_dev_check, + mlxsw_sp_port_obj_del); + return notifier_from_errno(err); + } + + return NOTIFY_DONE; +} + +static struct notifier_block mlxsw_sp_switchdev_blocking_notifier = { + .notifier_call = mlxsw_sp_switchdev_blocking_event, +}; + u8 mlxsw_sp_bridge_port_stp_state(struct mlxsw_sp_bridge_port *bridge_port) { @@ -2960,6 +3475,7 @@ mlxsw_sp_bridge_port_stp_state(struct mlxsw_sp_bridge_port *bridge_port) static int mlxsw_sp_fdb_init(struct mlxsw_sp *mlxsw_sp) { struct mlxsw_sp_bridge *bridge = mlxsw_sp->bridge; + struct notifier_block *nb; int err; err = mlxsw_sp_ageing_set(mlxsw_sp, MLXSW_SP_DEFAULT_AGEING_TIME); @@ -2974,17 +3490,33 @@ static int mlxsw_sp_fdb_init(struct mlxsw_sp *mlxsw_sp) return err; } + nb = &mlxsw_sp_switchdev_blocking_notifier; + err = register_switchdev_blocking_notifier(nb); + if (err) { + dev_err(mlxsw_sp->bus_info->dev, "Failed to register switchdev blocking notifier\n"); + goto err_register_switchdev_blocking_notifier; + } + INIT_DELAYED_WORK(&bridge->fdb_notify.dw, mlxsw_sp_fdb_notify_work); bridge->fdb_notify.interval = MLXSW_SP_DEFAULT_LEARNING_INTERVAL; mlxsw_sp_fdb_notify_work_schedule(mlxsw_sp); return 0; + +err_register_switchdev_blocking_notifier: + unregister_switchdev_notifier(&mlxsw_sp_switchdev_notifier); + return err; } static void mlxsw_sp_fdb_fini(struct mlxsw_sp *mlxsw_sp) { + struct notifier_block *nb; + cancel_delayed_work_sync(&mlxsw_sp->bridge->fdb_notify.dw); - unregister_switchdev_notifier(&mlxsw_sp_switchdev_notifier); + nb = &mlxsw_sp_switchdev_blocking_notifier; + unregister_switchdev_blocking_notifier(nb); + + unregister_switchdev_notifier(&mlxsw_sp_switchdev_notifier); } int mlxsw_sp_switchdev_init(struct mlxsw_sp *mlxsw_sp) diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c index c84074fa4c95..215a45374d7b 100644 --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c @@ -15,6 +15,7 
@@ #include #include #include +#include #include #include #include @@ -22,6 +23,9 @@ #include "ocelot.h" +#define TABLE_UPDATE_SLEEP_US 10 +#define TABLE_UPDATE_TIMEOUT_US 100000 + /* MAC table entry types. * ENTRYTYPE_NORMAL is subject to aging. * ENTRYTYPE_LOCKED is not subject to aging. @@ -41,23 +45,20 @@ struct ocelot_mact_entry { enum macaccess_entry_type type; }; -static inline int ocelot_mact_wait_for_completion(struct ocelot *ocelot) +static inline u32 ocelot_mact_read_macaccess(struct ocelot *ocelot) { - unsigned int val, timeout = 10; - - /* Wait for the issued mac table command to be completed, or timeout. - * When the command read from ANA_TABLES_MACACCESS is - * MACACCESS_CMD_IDLE, the issued command completed successfully. - */ - do { - val = ocelot_read(ocelot, ANA_TABLES_MACACCESS); - val &= ANA_TABLES_MACACCESS_MAC_TABLE_CMD_M; - } while (val != MACACCESS_CMD_IDLE && timeout--); + return ocelot_read(ocelot, ANA_TABLES_MACACCESS); +} - if (!timeout) - return -ETIMEDOUT; +static inline int ocelot_mact_wait_for_completion(struct ocelot *ocelot) +{ + u32 val; - return 0; + return readx_poll_timeout(ocelot_mact_read_macaccess, + ocelot, val, + (val & ANA_TABLES_MACACCESS_MAC_TABLE_CMD_M) == + MACACCESS_CMD_IDLE, + TABLE_UPDATE_SLEEP_US, TABLE_UPDATE_TIMEOUT_US); } static void ocelot_mact_select(struct ocelot *ocelot, @@ -129,23 +130,21 @@ static void ocelot_mact_init(struct ocelot *ocelot) ocelot_write(ocelot, MACACCESS_CMD_INIT, ANA_TABLES_MACACCESS); } -static inline int ocelot_vlant_wait_for_completion(struct ocelot *ocelot) +static inline u32 ocelot_vlant_read_vlanaccess(struct ocelot *ocelot) { - unsigned int val, timeout = 10; - - /* Wait for the issued vlan table command to be completed, or timeout. - * When the command read from ANA_TABLES_VLANACCESS is - * VLANACCESS_CMD_IDLE, the issued command completed successfully. 
- */ - do { - val = ocelot_read(ocelot, ANA_TABLES_VLANACCESS); - val &= ANA_TABLES_VLANACCESS_VLAN_TBL_CMD_M; - } while (val != ANA_TABLES_VLANACCESS_CMD_IDLE && timeout--); + return ocelot_read(ocelot, ANA_TABLES_VLANACCESS); +} - if (!timeout) - return -ETIMEDOUT; +static inline int ocelot_vlant_wait_for_completion(struct ocelot *ocelot) +{ + u32 val; - return 0; + return readx_poll_timeout(ocelot_vlant_read_vlanaccess, + ocelot, + val, + (val & ANA_TABLES_VLANACCESS_VLAN_TBL_CMD_M) == + ANA_TABLES_VLANACCESS_CMD_IDLE, + TABLE_UPDATE_SLEEP_US, TABLE_UPDATE_TIMEOUT_US); } static int ocelot_vlant_set_mask(struct ocelot *ocelot, u16 vid, u32 mask) @@ -472,7 +471,6 @@ static int ocelot_port_open(struct net_device *dev) { struct ocelot_port *port = netdev_priv(dev); struct ocelot *ocelot = port->ocelot; - enum phy_mode phy_mode; int err; /* Enable receiving frames on the port, and activate auto-learning of @@ -484,12 +482,8 @@ static int ocelot_port_open(struct net_device *dev) ANA_PORT_PORT_CFG, port->chip_port); if (port->serdes) { - if (port->phy_mode == PHY_INTERFACE_MODE_SGMII) - phy_mode = PHY_MODE_SGMII; - else - phy_mode = PHY_MODE_QSGMII; - - err = phy_set_mode(port->serdes, phy_mode); + err = phy_set_mode_ext(port->serdes, PHY_MODE_ETHERNET, + port->phy_mode); if (err) { netdev_err(dev, "Could not set mode of SerDes\n"); return err; @@ -1293,7 +1287,8 @@ static int ocelot_port_obj_del_mdb(struct net_device *dev, static int ocelot_port_obj_add(struct net_device *dev, const struct switchdev_obj *obj, - struct switchdev_trans *trans) + struct switchdev_trans *trans, + struct netlink_ext_ack *extack) { int ret = 0; @@ -1337,8 +1332,6 @@ static int ocelot_port_obj_del(struct net_device *dev, static const struct switchdev_ops ocelot_port_switchdev_ops = { .switchdev_port_attr_get = ocelot_port_attr_get, .switchdev_port_attr_set = ocelot_port_attr_set, - .switchdev_port_obj_add = ocelot_port_obj_add, - .switchdev_port_obj_del = ocelot_port_obj_del, }; static int ocelot_port_bridge_join(struct ocelot_port *ocelot_port, @@ -1595,6 +1588,34 @@ struct notifier_block ocelot_netdevice_nb __read_mostly = { }; EXPORT_SYMBOL(ocelot_netdevice_nb); +static int ocelot_switchdev_blocking_event(struct notifier_block *unused, + unsigned long event, void *ptr) +{ + struct net_device *dev = switchdev_notifier_info_to_dev(ptr); + int err; + + switch (event) { + /* Blocking events. 
*/ + case SWITCHDEV_PORT_OBJ_ADD: + err = switchdev_handle_port_obj_add(dev, ptr, + ocelot_netdevice_dev_check, + ocelot_port_obj_add); + return notifier_from_errno(err); + case SWITCHDEV_PORT_OBJ_DEL: + err = switchdev_handle_port_obj_del(dev, ptr, + ocelot_netdevice_dev_check, + ocelot_port_obj_del); + return notifier_from_errno(err); + } + + return NOTIFY_DONE; +} + +struct notifier_block ocelot_switchdev_blocking_nb __read_mostly = { + .notifier_call = ocelot_switchdev_blocking_event, +}; +EXPORT_SYMBOL(ocelot_switchdev_blocking_nb); + int ocelot_probe_port(struct ocelot *ocelot, u8 port, void __iomem *regs, struct phy_device *phy) diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h index 62c7c8eb00d9..086775f7b52f 100644 --- a/drivers/net/ethernet/mscc/ocelot.h +++ b/drivers/net/ethernet/mscc/ocelot.h @@ -499,5 +499,6 @@ int ocelot_probe_port(struct ocelot *ocelot, u8 port, struct phy_device *phy); extern struct notifier_block ocelot_netdevice_nb; +extern struct notifier_block ocelot_switchdev_blocking_nb; #endif diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c index 4c23d18bbf44..ca3ea2fbfcd0 100644 --- a/drivers/net/ethernet/mscc/ocelot_board.c +++ b/drivers/net/ethernet/mscc/ocelot_board.c @@ -12,6 +12,7 @@ #include #include #include +#include #include "ocelot.h" @@ -328,6 +329,7 @@ static int mscc_ocelot_probe(struct platform_device *pdev) } register_netdevice_notifier(&ocelot_netdevice_nb); + register_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb); dev_info(&pdev->dev, "Ocelot switch probed\n"); @@ -342,6 +344,7 @@ static int mscc_ocelot_remove(struct platform_device *pdev) struct ocelot *ocelot = platform_get_drvdata(pdev); ocelot_deinit(ocelot); + unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb); unregister_netdevice_notifier(&ocelot_netdevice_nb); return 0; diff --git a/drivers/net/ethernet/neterion/Kconfig b/drivers/net/ethernet/neterion/Kconfig index c26e0f70c494..7df20561e3fa 100644 --- a/drivers/net/ethernet/neterion/Kconfig +++ b/drivers/net/ethernet/neterion/Kconfig @@ -26,7 +26,7 @@ config S2IO on its age. More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called s2io. @@ -41,7 +41,7 @@ config VXGE labeled as either one, depending on its age. More specific information on configuring the driver is in - . + . To compile this driver as a module, choose M here. The module will be called vxge. 
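The ocelot hunks above also convert the MAC and VLAN table completion waits from open-coded loops bounded by a fixed number of register reads to readx_poll_timeout(), which bounds the wait in wall-clock time (10 us sleep between reads, 100 ms timeout) instead of in read attempts. A minimal sketch of that pattern follows; foo_hw, foo_read_status() and the register layout are hypothetical, for illustration only, not part of this patch:

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>

#define FOO_STATUS_REG	0x10
#define FOO_CMD_BUSY	BIT(0)

struct foo_hw {
	void __iomem *regs;
};

/* readx_poll_timeout() requires a single-argument read accessor */
static u32 foo_read_status(struct foo_hw *hw)
{
	return readl(hw->regs + FOO_STATUS_REG);
}

static int foo_wait_idle(struct foo_hw *hw)
{
	u32 val;

	/* Returns 0 once the busy bit clears, -ETIMEDOUT after 100 ms;
	 * sleeps 10 us between reads rather than busy-waiting.
	 */
	return readx_poll_timeout(foo_read_status, hw, val,
				  !(val & FOO_CMD_BUSY),
				  10, 100000);
}
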
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-traffic.c b/drivers/net/ethernet/neterion/vxge/vxge-traffic.c index f7a0d1d5885e..59e77e3086bb 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-traffic.c +++ b/drivers/net/ethernet/neterion/vxge/vxge-traffic.c @@ -1695,17 +1695,10 @@ exit: */ void vxge_hw_fifo_txdl_free(struct __vxge_hw_fifo *fifo, void *txdlh) { - struct __vxge_hw_fifo_txdl_priv *txdl_priv; - u32 max_frags; struct __vxge_hw_channel *channel; channel = &fifo->channel; - txdl_priv = __vxge_hw_fifo_txdl_priv(fifo, - (struct vxge_hw_fifo_txd *)txdlh); - - max_frags = fifo->config->max_frags; - vxge_hw_channel_dtr_free(channel, txdlh); } diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile index 4afb10375397..47c708f08ade 100644 --- a/drivers/net/ethernet/netronome/nfp/Makefile +++ b/drivers/net/ethernet/netronome/nfp/Makefile @@ -56,7 +56,9 @@ endif ifeq ($(CONFIG_NFP_APP_ABM_NIC),y) nfp-objs += \ + abm/cls.o \ abm/ctrl.o \ + abm/qdisc.o \ abm/main.o endif diff --git a/drivers/net/ethernet/netronome/nfp/abm/cls.c b/drivers/net/ethernet/netronome/nfp/abm/cls.c new file mode 100644 index 000000000000..9852080cf454 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/abm/cls.c @@ -0,0 +1,283 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2018 Netronome Systems, Inc. */ + +#include +#include + +#include "../nfpcore/nfp_cpp.h" +#include "../nfp_app.h" +#include "../nfp_net_repr.h" +#include "main.h" + +struct nfp_abm_u32_match { + u32 handle; + u32 band; + u8 mask; + u8 val; + struct list_head list; +}; + +static bool +nfp_abm_u32_check_knode(struct nfp_abm *abm, struct tc_cls_u32_knode *knode, + __be16 proto, struct netlink_ext_ack *extack) +{ + struct tc_u32_key *k; + unsigned int tos_off; + + if (knode->exts && tcf_exts_has_actions(knode->exts)) { + NL_SET_ERR_MSG_MOD(extack, "action offload not supported"); + return false; + } + if (knode->link_handle) { + NL_SET_ERR_MSG_MOD(extack, "linking not supported"); + return false; + } + if (knode->sel->flags != TC_U32_TERMINAL) { + NL_SET_ERR_MSG_MOD(extack, + "flags must be equal to TC_U32_TERMINAL"); + return false; + } + if (knode->sel->off || knode->sel->offshift || knode->sel->offmask || + knode->sel->offoff || knode->fshift) { + NL_SET_ERR_MSG_MOD(extack, "variable offseting not supported"); + return false; + } + if (knode->sel->hoff || knode->sel->hmask) { + NL_SET_ERR_MSG_MOD(extack, "hashing not supported"); + return false; + } + if (knode->val || knode->mask) { + NL_SET_ERR_MSG_MOD(extack, "matching on mark not supported"); + return false; + } + if (knode->res && knode->res->class) { + NL_SET_ERR_MSG_MOD(extack, "setting non-0 class not supported"); + return false; + } + if (knode->res && knode->res->classid >= abm->num_bands) { + NL_SET_ERR_MSG_MOD(extack, + "classid higher than number of bands"); + return false; + } + if (knode->sel->nkeys != 1) { + NL_SET_ERR_MSG_MOD(extack, "exactly one key required"); + return false; + } + + switch (proto) { + case htons(ETH_P_IP): + tos_off = 16; + break; + case htons(ETH_P_IPV6): + tos_off = 20; + break; + default: + NL_SET_ERR_MSG_MOD(extack, "only IP and IPv6 supported as filter protocol"); + return false; + } + + k = &knode->sel->keys[0]; + if (k->offmask) { + NL_SET_ERR_MSG_MOD(extack, "offset mask - variable offseting not supported"); + return false; + } + if (k->off) { + NL_SET_ERR_MSG_MOD(extack, "only DSCP fields can be matched"); + return false; + } + if (k->val & ~k->mask) { + 
NL_SET_ERR_MSG_MOD(extack, "mask does not cover the key"); + return false; + } + if (be32_to_cpu(k->mask) >> tos_off & ~abm->dscp_mask) { + NL_SET_ERR_MSG_MOD(extack, "only high DSCP class selector bits can be used"); + nfp_err(abm->app->cpp, + "u32 offload: requested mask %x FW can support only %x\n", + be32_to_cpu(k->mask) >> tos_off, abm->dscp_mask); + return false; + } + + return true; +} + +/* This filter list -> map conversion is O(n * m), we expect single digit or + * low double digit number of prios and likewise for the filters. Also u32 + * doesn't report stats, so it's really only setup time cost. + */ +static unsigned int +nfp_abm_find_band_for_prio(struct nfp_abm_link *alink, unsigned int prio) +{ + struct nfp_abm_u32_match *iter; + + list_for_each_entry(iter, &alink->dscp_map, list) + if ((prio & iter->mask) == iter->val) + return iter->band; + + return alink->def_band; +} + +static int nfp_abm_update_band_map(struct nfp_abm_link *alink) +{ + unsigned int i, bits_per_prio, prios_per_word, base_shift; + struct nfp_abm *abm = alink->abm; + u32 field_mask; + + alink->has_prio = !list_empty(&alink->dscp_map); + + bits_per_prio = roundup_pow_of_two(order_base_2(abm->num_bands)); + field_mask = (1 << bits_per_prio) - 1; + prios_per_word = sizeof(u32) * BITS_PER_BYTE / bits_per_prio; + + /* FW mask applies from top bits */ + base_shift = 8 - order_base_2(abm->num_prios); + + for (i = 0; i < abm->num_prios; i++) { + unsigned int offset; + u32 *word; + u8 band; + + word = &alink->prio_map[i / prios_per_word]; + offset = (i % prios_per_word) * bits_per_prio; + + band = nfp_abm_find_band_for_prio(alink, i << base_shift); + + *word &= ~(field_mask << offset); + *word |= band << offset; + } + + /* Qdisc offload status may change if has_prio changed */ + nfp_abm_qdisc_offload_update(alink); + + return nfp_abm_ctrl_prio_map_update(alink, alink->prio_map); +} + +static void +nfp_abm_u32_knode_delete(struct nfp_abm_link *alink, + struct tc_cls_u32_knode *knode) +{ + struct nfp_abm_u32_match *iter; + + list_for_each_entry(iter, &alink->dscp_map, list) + if (iter->handle == knode->handle) { + list_del(&iter->list); + kfree(iter); + nfp_abm_update_band_map(alink); + return; + } +} + +static int +nfp_abm_u32_knode_replace(struct nfp_abm_link *alink, + struct tc_cls_u32_knode *knode, + __be16 proto, struct netlink_ext_ack *extack) +{ + struct nfp_abm_u32_match *match = NULL, *iter; + unsigned int tos_off; + u8 mask, val; + int err; + + if (!nfp_abm_u32_check_knode(alink->abm, knode, proto, extack)) + goto err_delete; + + tos_off = proto == htons(ETH_P_IP) ? 
16 : 20; + + /* Extract the DSCP Class Selector bits */ + val = be32_to_cpu(knode->sel->keys[0].val) >> tos_off & 0xff; + mask = be32_to_cpu(knode->sel->keys[0].mask) >> tos_off & 0xff; + + /* Check if there is no conflicting mapping and find match by handle */ + list_for_each_entry(iter, &alink->dscp_map, list) { + u32 cmask; + + if (iter->handle == knode->handle) { + match = iter; + continue; + } + + cmask = iter->mask & mask; + if ((iter->val & cmask) == (val & cmask) && + iter->band != knode->res->classid) { + NL_SET_ERR_MSG_MOD(extack, "conflict with already offloaded filter"); + goto err_delete; + } + } + + if (!match) { + match = kzalloc(sizeof(*match), GFP_KERNEL); + if (!match) + return -ENOMEM; + list_add(&match->list, &alink->dscp_map); + } + match->handle = knode->handle; + match->band = knode->res->classid; + match->mask = mask; + match->val = val; + + err = nfp_abm_update_band_map(alink); + if (err) + goto err_delete; + + return 0; + +err_delete: + nfp_abm_u32_knode_delete(alink, knode); + return -EOPNOTSUPP; +} + +static int nfp_abm_setup_tc_block_cb(enum tc_setup_type type, + void *type_data, void *cb_priv) +{ + struct tc_cls_u32_offload *cls_u32 = type_data; + struct nfp_repr *repr = cb_priv; + struct nfp_abm_link *alink; + + alink = repr->app_priv; + + if (type != TC_SETUP_CLSU32) { + NL_SET_ERR_MSG_MOD(cls_u32->common.extack, + "only offload of u32 classifier supported"); + return -EOPNOTSUPP; + } + if (!tc_cls_can_offload_and_chain0(repr->netdev, &cls_u32->common)) + return -EOPNOTSUPP; + + if (cls_u32->common.protocol != htons(ETH_P_IP) && + cls_u32->common.protocol != htons(ETH_P_IPV6)) { + NL_SET_ERR_MSG_MOD(cls_u32->common.extack, + "only IP and IPv6 supported as filter protocol"); + return -EOPNOTSUPP; + } + + switch (cls_u32->command) { + case TC_CLSU32_NEW_KNODE: + case TC_CLSU32_REPLACE_KNODE: + return nfp_abm_u32_knode_replace(alink, &cls_u32->knode, + cls_u32->common.protocol, + cls_u32->common.extack); + case TC_CLSU32_DELETE_KNODE: + nfp_abm_u32_knode_delete(alink, &cls_u32->knode); + return 0; + default: + return -EOPNOTSUPP; + } +} + +int nfp_abm_setup_cls_block(struct net_device *netdev, struct nfp_repr *repr, + struct tc_block_offload *f) +{ + if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS) + return -EOPNOTSUPP; + + switch (f->command) { + case TC_BLOCK_BIND: + return tcf_block_cb_register(f->block, + nfp_abm_setup_tc_block_cb, + repr, repr, f->extack); + case TC_BLOCK_UNBIND: + tcf_block_cb_unregister(f->block, nfp_abm_setup_tc_block_cb, + repr); + return 0; + default: + return -EOPNOTSUPP; + } +} diff --git a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c index 3c661f422688..9584f03f3efa 100644 --- a/drivers/net/ethernet/netronome/nfp/abm/ctrl.c +++ b/drivers/net/ethernet/netronome/nfp/abm/ctrl.c @@ -1,7 +1,9 @@ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) /* Copyright (C) 2018 Netronome Systems, Inc. 
*/ +#include #include +#include #include "../nfpcore/nfp_cpp.h" #include "../nfpcore/nfp_nffw.h" @@ -11,38 +13,58 @@ #include "../nfp_net.h" #include "main.h" -#define NFP_QLVL_SYM_NAME "_abi_nfd_out_q_lvls_%u" +#define NFP_NUM_PRIOS_SYM_NAME "_abi_pci_dscp_num_prio_%u" +#define NFP_NUM_BANDS_SYM_NAME "_abi_pci_dscp_num_band_%u" +#define NFP_ACT_MASK_SYM_NAME "_abi_nfd_out_q_actions_%u" + +#define NFP_RED_SUPPORT_SYM_NAME "_abi_nfd_out_red_offload_%u" + +#define NFP_QLVL_SYM_NAME "_abi_nfd_out_q_lvls_%u%s" #define NFP_QLVL_STRIDE 16 #define NFP_QLVL_BLOG_BYTES 0 #define NFP_QLVL_BLOG_PKTS 4 #define NFP_QLVL_THRS 8 +#define NFP_QLVL_ACT 12 -#define NFP_QMSTAT_SYM_NAME "_abi_nfdqm%u_stats" +#define NFP_QMSTAT_SYM_NAME "_abi_nfdqm%u_stats%s" #define NFP_QMSTAT_STRIDE 32 #define NFP_QMSTAT_NON_STO 0 #define NFP_QMSTAT_STO 8 #define NFP_QMSTAT_DROP 16 #define NFP_QMSTAT_ECN 24 +#define NFP_Q_STAT_SYM_NAME "_abi_nfd_rxq_stats%u%s" +#define NFP_Q_STAT_STRIDE 16 +#define NFP_Q_STAT_PKTS 0 +#define NFP_Q_STAT_BYTES 8 + +#define NFP_NET_ABM_MBOX_CMD NFP_NET_CFG_MBOX_SIMPLE_CMD +#define NFP_NET_ABM_MBOX_RET NFP_NET_CFG_MBOX_SIMPLE_RET +#define NFP_NET_ABM_MBOX_DATALEN NFP_NET_CFG_MBOX_SIMPLE_VAL +#define NFP_NET_ABM_MBOX_RESERVED (NFP_NET_CFG_MBOX_SIMPLE_VAL + 4) +#define NFP_NET_ABM_MBOX_DATA (NFP_NET_CFG_MBOX_SIMPLE_VAL + 8) + static int nfp_abm_ctrl_stat(struct nfp_abm_link *alink, const struct nfp_rtsym *sym, - unsigned int stride, unsigned int offset, unsigned int i, - bool is_u64, u64 *res) + unsigned int stride, unsigned int offset, unsigned int band, + unsigned int queue, bool is_u64, u64 *res) { struct nfp_cpp *cpp = alink->abm->app->cpp; u64 val, sym_offset; + unsigned int qid; u32 val32; int err; - sym_offset = (alink->queue_base + i) * stride + offset; + qid = band * NFP_NET_MAX_RX_RINGS + alink->queue_base + queue; + + sym_offset = qid * stride + offset; if (is_u64) err = __nfp_rtsym_readq(cpp, sym, 3, 0, sym_offset, &val); else err = __nfp_rtsym_readl(cpp, sym, 3, 0, sym_offset, &val32); if (err) { - nfp_err(cpp, - "RED offload reading stat failed on vNIC %d queue %d\n", - alink->id, i); + nfp_err(cpp, "RED offload reading stat failed on vNIC %d band %d queue %d (+ %d)\n", + alink->id, band, queue, alink->queue_base); return err; } @@ -50,175 +72,179 @@ nfp_abm_ctrl_stat(struct nfp_abm_link *alink, const struct nfp_rtsym *sym, return 0; } -static int -nfp_abm_ctrl_stat_all(struct nfp_abm_link *alink, const struct nfp_rtsym *sym, - unsigned int stride, unsigned int offset, bool is_u64, - u64 *res) +int __nfp_abm_ctrl_set_q_lvl(struct nfp_abm *abm, unsigned int id, u32 val) { - u64 val, sum = 0; - unsigned int i; + struct nfp_cpp *cpp = abm->app->cpp; + u64 sym_offset; int err; - for (i = 0; i < alink->vnic->max_rx_rings; i++) { - err = nfp_abm_ctrl_stat(alink, sym, stride, offset, i, - is_u64, &val); - if (err) - return err; - sum += val; + __clear_bit(id, abm->threshold_undef); + if (abm->thresholds[id] == val) + return 0; + + sym_offset = id * NFP_QLVL_STRIDE + NFP_QLVL_THRS; + err = __nfp_rtsym_writel(cpp, abm->q_lvls, 4, 0, sym_offset, val); + if (err) { + nfp_err(cpp, + "RED offload setting level failed on subqueue %d\n", + id); + return err; } - *res = sum; + abm->thresholds[id] = val; return 0; } -int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int i, u32 val) +int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band, + unsigned int queue, u32 val) { - struct nfp_cpp *cpp = alink->abm->app->cpp; + unsigned int threshold; + + threshold = band * 
NFP_NET_MAX_RX_RINGS + alink->queue_base + queue; + + return __nfp_abm_ctrl_set_q_lvl(alink->abm, threshold, val); +} + +int __nfp_abm_ctrl_set_q_act(struct nfp_abm *abm, unsigned int id, + enum nfp_abm_q_action act) +{ + struct nfp_cpp *cpp = abm->app->cpp; u64 sym_offset; int err; - sym_offset = (alink->queue_base + i) * NFP_QLVL_STRIDE + NFP_QLVL_THRS; - err = __nfp_rtsym_writel(cpp, alink->abm->q_lvls, 4, 0, - sym_offset, val); + if (abm->actions[id] == act) + return 0; + + sym_offset = id * NFP_QLVL_STRIDE + NFP_QLVL_ACT; + err = __nfp_rtsym_writel(cpp, abm->q_lvls, 4, 0, sym_offset, act); if (err) { - nfp_err(cpp, "RED offload setting level failed on vNIC %d queue %d\n", - alink->id, i); + nfp_err(cpp, + "RED offload setting action failed on subqueue %d\n", + id); return err; } + abm->actions[id] = act; return 0; } -int nfp_abm_ctrl_set_all_q_lvls(struct nfp_abm_link *alink, u32 val) +int nfp_abm_ctrl_set_q_act(struct nfp_abm_link *alink, unsigned int band, + unsigned int queue, enum nfp_abm_q_action act) +{ + unsigned int qid; + + qid = band * NFP_NET_MAX_RX_RINGS + alink->queue_base + queue; + + return __nfp_abm_ctrl_set_q_act(alink->abm, qid, act); +} + +u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int queue) { - int i, err; + unsigned int band; + u64 val, sum = 0; - for (i = 0; i < alink->vnic->max_rx_rings; i++) { - err = nfp_abm_ctrl_set_q_lvl(alink, i, val); - if (err) - return err; + for (band = 0; band < alink->abm->num_bands; band++) { + if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, + NFP_QMSTAT_STRIDE, NFP_QMSTAT_NON_STO, + band, queue, true, &val)) + return 0; + sum += val; } - return 0; + return sum; } -u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int i) +u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int queue) { - u64 val; + unsigned int band; + u64 val, sum = 0; - if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE, - NFP_QMSTAT_NON_STO, i, true, &val)) - return 0; - return val; + for (band = 0; band < alink->abm->num_bands; band++) { + if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, + NFP_QMSTAT_STRIDE, NFP_QMSTAT_STO, + band, queue, true, &val)) + return 0; + sum += val; + } + + return sum; } -u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i) +static int +nfp_abm_ctrl_stat_basic(struct nfp_abm_link *alink, unsigned int band, + unsigned int queue, unsigned int off, u64 *val) { - u64 val; + if (!nfp_abm_has_prio(alink->abm)) { + if (!band) { + unsigned int id = alink->queue_base + queue; + + *val = nn_readq(alink->vnic, + NFP_NET_CFG_RXR_STATS(id) + off); + } else { + *val = 0; + } - if (nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE, - NFP_QMSTAT_STO, i, true, &val)) return 0; - return val; + } else { + return nfp_abm_ctrl_stat(alink, alink->abm->q_stats, + NFP_Q_STAT_STRIDE, off, band, queue, + true, val); + } } -int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, unsigned int i, - struct nfp_alink_stats *stats) +int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, unsigned int band, + unsigned int queue, struct nfp_alink_stats *stats) { int err; - stats->tx_pkts = nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(i)); - stats->tx_bytes = nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(i) + 8); + err = nfp_abm_ctrl_stat_basic(alink, band, queue, NFP_Q_STAT_PKTS, + &stats->tx_pkts); + if (err) + return err; - err = nfp_abm_ctrl_stat(alink, alink->abm->q_lvls, - NFP_QLVL_STRIDE, NFP_QLVL_BLOG_BYTES, - i, false, &stats->backlog_bytes); + err = 
nfp_abm_ctrl_stat_basic(alink, band, queue, NFP_Q_STAT_BYTES, + &stats->tx_bytes); + if (err) + return err; + + err = nfp_abm_ctrl_stat(alink, alink->abm->q_lvls, NFP_QLVL_STRIDE, + NFP_QLVL_BLOG_BYTES, band, queue, false, + &stats->backlog_bytes); if (err) return err; err = nfp_abm_ctrl_stat(alink, alink->abm->q_lvls, NFP_QLVL_STRIDE, NFP_QLVL_BLOG_PKTS, - i, false, &stats->backlog_pkts); + band, queue, false, &stats->backlog_pkts); if (err) return err; err = nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE, NFP_QMSTAT_DROP, - i, true, &stats->drops); + band, queue, true, &stats->drops); if (err) return err; return nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE, NFP_QMSTAT_ECN, - i, true, &stats->overlimits); + band, queue, true, &stats->overlimits); } -int nfp_abm_ctrl_read_stats(struct nfp_abm_link *alink, - struct nfp_alink_stats *stats) -{ - u64 pkts = 0, bytes = 0; - int i, err; - - for (i = 0; i < alink->vnic->max_rx_rings; i++) { - pkts += nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(i)); - bytes += nn_readq(alink->vnic, NFP_NET_CFG_RXR_STATS(i) + 8); - } - stats->tx_pkts = pkts; - stats->tx_bytes = bytes; - - err = nfp_abm_ctrl_stat_all(alink, alink->abm->q_lvls, - NFP_QLVL_STRIDE, NFP_QLVL_BLOG_BYTES, - false, &stats->backlog_bytes); - if (err) - return err; - - err = nfp_abm_ctrl_stat_all(alink, alink->abm->q_lvls, - NFP_QLVL_STRIDE, NFP_QLVL_BLOG_PKTS, - false, &stats->backlog_pkts); - if (err) - return err; - - err = nfp_abm_ctrl_stat_all(alink, alink->abm->qm_stats, - NFP_QMSTAT_STRIDE, NFP_QMSTAT_DROP, - true, &stats->drops); - if (err) - return err; - - return nfp_abm_ctrl_stat_all(alink, alink->abm->qm_stats, - NFP_QMSTAT_STRIDE, NFP_QMSTAT_ECN, - true, &stats->overlimits); -} - -int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink, unsigned int i, +int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink, + unsigned int band, unsigned int queue, struct nfp_alink_xstats *xstats) { int err; err = nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE, NFP_QMSTAT_DROP, - i, true, &xstats->pdrop); + band, queue, true, &xstats->pdrop); if (err) return err; return nfp_abm_ctrl_stat(alink, alink->abm->qm_stats, NFP_QMSTAT_STRIDE, NFP_QMSTAT_ECN, - i, true, &xstats->ecn_marked); -} - -int nfp_abm_ctrl_read_xstats(struct nfp_abm_link *alink, - struct nfp_alink_xstats *xstats) -{ - int err; - - err = nfp_abm_ctrl_stat_all(alink, alink->abm->qm_stats, - NFP_QMSTAT_STRIDE, NFP_QMSTAT_DROP, - true, &xstats->pdrop); - if (err) - return err; - - return nfp_abm_ctrl_stat_all(alink, alink->abm->qm_stats, - NFP_QMSTAT_STRIDE, NFP_QMSTAT_ECN, - true, &xstats->ecn_marked); + band, queue, true, &xstats->ecn_marked); } int nfp_abm_ctrl_qm_enable(struct nfp_abm *abm) @@ -233,10 +259,64 @@ int nfp_abm_ctrl_qm_disable(struct nfp_abm *abm) NULL, 0, NULL, 0); } -void nfp_abm_ctrl_read_params(struct nfp_abm_link *alink) +int nfp_abm_ctrl_prio_map_update(struct nfp_abm_link *alink, u32 *packed) +{ + struct nfp_net *nn = alink->vnic; + unsigned int i; + int err; + + /* Write data_len and wipe reserved */ + nn_writeq(nn, nn->tlv_caps.mbox_off + NFP_NET_ABM_MBOX_DATALEN, + alink->abm->prio_map_len); + + for (i = 0; i < alink->abm->prio_map_len; i += sizeof(u32)) + nn_writel(nn, nn->tlv_caps.mbox_off + NFP_NET_ABM_MBOX_DATA + i, + packed[i / sizeof(u32)]); + + err = nfp_net_reconfig_mbox(nn, + NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET); + if (err) + nfp_err(alink->abm->app->cpp, + "setting DSCP -> VQ map failed with error %d\n", err); + return err; +} + 
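/* Editorial sketch, not part of the patch: the buffer handed to
 * nfp_abm_ctrl_prio_map_update() above packs one band ID per DSCP
 * priority, using a power-of-two-sized bit field per priority, filled
 * LSB-first within each u32 word.  With num_bands = 4 and
 * num_prios = 64 that is 2 bits per priority, 16 priorities per word,
 * 4 words (16 bytes) in total -- matching what
 * nfp_abm_ctrl_prio_map_size() below computes.  The hypothetical
 * helper mirrors the packing loop of nfp_abm_update_band_map() in
 * abm/cls.c; order_base_2() and roundup_pow_of_two() come from
 * <linux/log2.h>.
 */
static void example_prio_map_set(u32 *packed, unsigned int num_bands,
				 unsigned int prio, u8 band)
{
	unsigned int bits_per_prio, prios_per_word, offset;
	u32 field_mask, *word;

	bits_per_prio = roundup_pow_of_two(order_base_2(num_bands));
	prios_per_word = sizeof(u32) * BITS_PER_BYTE / bits_per_prio;
	field_mask = (1 << bits_per_prio) - 1;

	word = &packed[prio / prios_per_word];
	offset = (prio % prios_per_word) * bits_per_prio;

	*word &= ~(field_mask << offset);
	*word |= (u32)band << offset;
}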
+static int nfp_abm_ctrl_prio_check_params(struct nfp_abm_link *alink) +{ + struct nfp_abm *abm = alink->abm; + struct nfp_net *nn = alink->vnic; + unsigned int min_mbox_sz; + + if (!nfp_abm_has_prio(alink->abm)) + return 0; + + min_mbox_sz = NFP_NET_ABM_MBOX_DATA + alink->abm->prio_map_len; + if (nn->tlv_caps.mbox_len < min_mbox_sz) { + nfp_err(abm->app->pf->cpp, "vNIC mailbox too small for prio offload: %u, need: %u\n", + nn->tlv_caps.mbox_len, min_mbox_sz); + return -EINVAL; + } + + return 0; +} + +int nfp_abm_ctrl_read_params(struct nfp_abm_link *alink) { alink->queue_base = nn_readl(alink->vnic, NFP_NET_CFG_START_RXQ); alink->queue_base /= alink->vnic->stride_rx; + + return nfp_abm_ctrl_prio_check_params(alink); +} + +static unsigned int nfp_abm_ctrl_prio_map_size(struct nfp_abm *abm) +{ + unsigned int size; + + size = roundup_pow_of_two(order_base_2(abm->num_bands)); + size = DIV_ROUND_UP(size * abm->num_prios, BITS_PER_BYTE); + size = round_up(size, sizeof(u32)); + + return size; } static const struct nfp_rtsym * @@ -260,33 +340,86 @@ nfp_abm_ctrl_find_rtsym(struct nfp_pf *pf, const char *name, unsigned int size) } static const struct nfp_rtsym * -nfp_abm_ctrl_find_q_rtsym(struct nfp_pf *pf, const char *name, - unsigned int size) +nfp_abm_ctrl_find_q_rtsym(struct nfp_abm *abm, const char *name_fmt, + size_t size) { - return nfp_abm_ctrl_find_rtsym(pf, name, size * NFP_NET_MAX_RX_RINGS); + char pf_symbol[64]; + + size = array3_size(size, abm->num_bands, NFP_NET_MAX_RX_RINGS); + snprintf(pf_symbol, sizeof(pf_symbol), name_fmt, + abm->pf_id, nfp_abm_has_prio(abm) ? "_per_band" : ""); + + return nfp_abm_ctrl_find_rtsym(abm->app->pf, pf_symbol, size); } int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm) { struct nfp_pf *pf = abm->app->pf; const struct nfp_rtsym *sym; - unsigned int pf_id; - char pf_symbol[64]; + int res; + + abm->pf_id = nfp_cppcore_pcie_unit(pf->cpp); + + /* Check if Qdisc offloads are supported */ + res = nfp_pf_rtsym_read_optional(pf, NFP_RED_SUPPORT_SYM_NAME, 1); + if (res < 0) + return res; + abm->red_support = res; + + /* Read count of prios and prio bands */ + res = nfp_pf_rtsym_read_optional(pf, NFP_NUM_BANDS_SYM_NAME, 1); + if (res < 0) + return res; + abm->num_bands = res; + + res = nfp_pf_rtsym_read_optional(pf, NFP_NUM_PRIOS_SYM_NAME, 1); + if (res < 0) + return res; + abm->num_prios = res; + + /* Read available actions */ + res = nfp_pf_rtsym_read_optional(pf, NFP_ACT_MASK_SYM_NAME, + BIT(NFP_ABM_ACT_MARK_DROP)); + if (res < 0) + return res; + abm->action_mask = res; + + abm->prio_map_len = nfp_abm_ctrl_prio_map_size(abm); + abm->dscp_mask = GENMASK(7, 8 - order_base_2(abm->num_prios)); + + /* Check values are sane, U16_MAX is arbitrarily chosen as max */ + if (!is_power_of_2(abm->num_bands) || !is_power_of_2(abm->num_prios) || + abm->num_bands > U16_MAX || abm->num_prios > U16_MAX || + (abm->num_bands == 1) != (abm->num_prios == 1)) { + nfp_err(pf->cpp, + "invalid priomap description num bands: %u and num prios: %u\n", + abm->num_bands, abm->num_prios); + return -EINVAL; + } - pf_id = nfp_cppcore_pcie_unit(pf->cpp); - abm->pf_id = pf_id; + /* Find level and stat symbols */ + if (!abm->red_support) + return 0; - snprintf(pf_symbol, sizeof(pf_symbol), NFP_QLVL_SYM_NAME, pf_id); - sym = nfp_abm_ctrl_find_q_rtsym(pf, pf_symbol, NFP_QLVL_STRIDE); + sym = nfp_abm_ctrl_find_q_rtsym(abm, NFP_QLVL_SYM_NAME, + NFP_QLVL_STRIDE); if (IS_ERR(sym)) return PTR_ERR(sym); abm->q_lvls = sym; - snprintf(pf_symbol, sizeof(pf_symbol), NFP_QMSTAT_SYM_NAME, pf_id); - sym = 
nfp_abm_ctrl_find_q_rtsym(pf, pf_symbol, NFP_QMSTAT_STRIDE); + sym = nfp_abm_ctrl_find_q_rtsym(abm, NFP_QMSTAT_SYM_NAME, + NFP_QMSTAT_STRIDE); if (IS_ERR(sym)) return PTR_ERR(sym); abm->qm_stats = sym; + if (nfp_abm_has_prio(abm)) { + sym = nfp_abm_ctrl_find_q_rtsym(abm, NFP_Q_STAT_SYM_NAME, + NFP_Q_STAT_STRIDE); + if (IS_ERR(sym)) + return PTR_ERR(sym); + abm->q_stats = sym; + } + return 0; } diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c index c0830c0c2c3f..4d4ff5844c47 100644 --- a/drivers/net/ethernet/netronome/nfp/abm/main.c +++ b/drivers/net/ethernet/netronome/nfp/abm/main.c @@ -2,14 +2,13 @@ /* Copyright (C) 2018 Netronome Systems, Inc. */ #include +#include #include #include #include #include +#include #include -#include -#include -#include #include "../nfpcore/nfp.h" #include "../nfpcore/nfp_cpp.h" @@ -27,269 +26,6 @@ static u32 nfp_abm_portid(enum nfp_repr_type rtype, unsigned int id) FIELD_PREP(NFP_ABM_PORTID_ID, id); } -static int -__nfp_abm_reset_root(struct net_device *netdev, struct nfp_abm_link *alink, - u32 handle, unsigned int qs, u32 init_val) -{ - struct nfp_port *port = nfp_port_from_netdev(netdev); - int ret; - - ret = nfp_abm_ctrl_set_all_q_lvls(alink, init_val); - memset(alink->qdiscs, 0, sizeof(*alink->qdiscs) * alink->num_qdiscs); - - alink->parent = handle; - alink->num_qdiscs = qs; - port->tc_offload_cnt = qs; - - return ret; -} - -static void -nfp_abm_reset_root(struct net_device *netdev, struct nfp_abm_link *alink, - u32 handle, unsigned int qs) -{ - __nfp_abm_reset_root(netdev, alink, handle, qs, ~0); -} - -static int -nfp_abm_red_find(struct nfp_abm_link *alink, struct tc_red_qopt_offload *opt) -{ - unsigned int i = TC_H_MIN(opt->parent) - 1; - - if (opt->parent == TC_H_ROOT) - i = 0; - else if (TC_H_MAJ(alink->parent) == TC_H_MAJ(opt->parent)) - i = TC_H_MIN(opt->parent) - 1; - else - return -EOPNOTSUPP; - - if (i >= alink->num_qdiscs || opt->handle != alink->qdiscs[i].handle) - return -EOPNOTSUPP; - - return i; -} - -static void -nfp_abm_red_destroy(struct net_device *netdev, struct nfp_abm_link *alink, - u32 handle) -{ - unsigned int i; - - for (i = 0; i < alink->num_qdiscs; i++) - if (handle == alink->qdiscs[i].handle) - break; - if (i == alink->num_qdiscs) - return; - - if (alink->parent == TC_H_ROOT) { - nfp_abm_reset_root(netdev, alink, TC_H_ROOT, 0); - } else { - nfp_abm_ctrl_set_q_lvl(alink, i, ~0); - memset(&alink->qdiscs[i], 0, sizeof(*alink->qdiscs)); - } -} - -static int -nfp_abm_red_replace(struct net_device *netdev, struct nfp_abm_link *alink, - struct tc_red_qopt_offload *opt) -{ - bool existing; - int i, err; - - i = nfp_abm_red_find(alink, opt); - existing = i >= 0; - - if (opt->set.min != opt->set.max || !opt->set.is_ecn) { - nfp_warn(alink->abm->app->cpp, - "RED offload failed - unsupported parameters\n"); - err = -EINVAL; - goto err_destroy; - } - - if (existing) { - if (alink->parent == TC_H_ROOT) - err = nfp_abm_ctrl_set_all_q_lvls(alink, opt->set.min); - else - err = nfp_abm_ctrl_set_q_lvl(alink, i, opt->set.min); - if (err) - goto err_destroy; - return 0; - } - - if (opt->parent == TC_H_ROOT) { - i = 0; - err = __nfp_abm_reset_root(netdev, alink, TC_H_ROOT, 1, - opt->set.min); - } else if (TC_H_MAJ(alink->parent) == TC_H_MAJ(opt->parent)) { - i = TC_H_MIN(opt->parent) - 1; - err = nfp_abm_ctrl_set_q_lvl(alink, i, opt->set.min); - } else { - return -EINVAL; - } - /* Set the handle to try full clean up, in case IO failed */ - alink->qdiscs[i].handle = opt->handle; - if (err) - 
goto err_destroy; - - if (opt->parent == TC_H_ROOT) - err = nfp_abm_ctrl_read_stats(alink, &alink->qdiscs[i].stats); - else - err = nfp_abm_ctrl_read_q_stats(alink, i, - &alink->qdiscs[i].stats); - if (err) - goto err_destroy; - - if (opt->parent == TC_H_ROOT) - err = nfp_abm_ctrl_read_xstats(alink, - &alink->qdiscs[i].xstats); - else - err = nfp_abm_ctrl_read_q_xstats(alink, i, - &alink->qdiscs[i].xstats); - if (err) - goto err_destroy; - - alink->qdiscs[i].stats.backlog_pkts = 0; - alink->qdiscs[i].stats.backlog_bytes = 0; - - return 0; -err_destroy: - /* If the qdisc keeps on living, but we can't offload undo changes */ - if (existing) { - opt->set.qstats->qlen -= alink->qdiscs[i].stats.backlog_pkts; - opt->set.qstats->backlog -= - alink->qdiscs[i].stats.backlog_bytes; - } - nfp_abm_red_destroy(netdev, alink, opt->handle); - - return err; -} - -static void -nfp_abm_update_stats(struct nfp_alink_stats *new, struct nfp_alink_stats *old, - struct tc_qopt_offload_stats *stats) -{ - _bstats_update(stats->bstats, new->tx_bytes - old->tx_bytes, - new->tx_pkts - old->tx_pkts); - stats->qstats->qlen += new->backlog_pkts - old->backlog_pkts; - stats->qstats->backlog += new->backlog_bytes - old->backlog_bytes; - stats->qstats->overlimits += new->overlimits - old->overlimits; - stats->qstats->drops += new->drops - old->drops; -} - -static int -nfp_abm_red_stats(struct nfp_abm_link *alink, struct tc_red_qopt_offload *opt) -{ - struct nfp_alink_stats *prev_stats; - struct nfp_alink_stats stats; - int i, err; - - i = nfp_abm_red_find(alink, opt); - if (i < 0) - return i; - prev_stats = &alink->qdiscs[i].stats; - - if (alink->parent == TC_H_ROOT) - err = nfp_abm_ctrl_read_stats(alink, &stats); - else - err = nfp_abm_ctrl_read_q_stats(alink, i, &stats); - if (err) - return err; - - nfp_abm_update_stats(&stats, prev_stats, &opt->stats); - - *prev_stats = stats; - - return 0; -} - -static int -nfp_abm_red_xstats(struct nfp_abm_link *alink, struct tc_red_qopt_offload *opt) -{ - struct nfp_alink_xstats *prev_xstats; - struct nfp_alink_xstats xstats; - int i, err; - - i = nfp_abm_red_find(alink, opt); - if (i < 0) - return i; - prev_xstats = &alink->qdiscs[i].xstats; - - if (alink->parent == TC_H_ROOT) - err = nfp_abm_ctrl_read_xstats(alink, &xstats); - else - err = nfp_abm_ctrl_read_q_xstats(alink, i, &xstats); - if (err) - return err; - - opt->xstats->forced_mark += xstats.ecn_marked - prev_xstats->ecn_marked; - opt->xstats->pdrop += xstats.pdrop - prev_xstats->pdrop; - - *prev_xstats = xstats; - - return 0; -} - -static int -nfp_abm_setup_tc_red(struct net_device *netdev, struct nfp_abm_link *alink, - struct tc_red_qopt_offload *opt) -{ - switch (opt->command) { - case TC_RED_REPLACE: - return nfp_abm_red_replace(netdev, alink, opt); - case TC_RED_DESTROY: - nfp_abm_red_destroy(netdev, alink, opt->handle); - return 0; - case TC_RED_STATS: - return nfp_abm_red_stats(alink, opt); - case TC_RED_XSTATS: - return nfp_abm_red_xstats(alink, opt); - default: - return -EOPNOTSUPP; - } -} - -static int -nfp_abm_mq_stats(struct nfp_abm_link *alink, struct tc_mq_qopt_offload *opt) -{ - struct nfp_alink_stats stats; - unsigned int i; - int err; - - for (i = 0; i < alink->num_qdiscs; i++) { - if (alink->qdiscs[i].handle == TC_H_UNSPEC) - continue; - - err = nfp_abm_ctrl_read_q_stats(alink, i, &stats); - if (err) - return err; - - nfp_abm_update_stats(&stats, &alink->qdiscs[i].stats, - &opt->stats); - } - - return 0; -} - -static int -nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink, - struct 
tc_mq_qopt_offload *opt) -{ - switch (opt->command) { - case TC_MQ_CREATE: - nfp_abm_reset_root(netdev, alink, opt->handle, - alink->total_queues); - return 0; - case TC_MQ_DESTROY: - if (opt->handle == alink->parent) - nfp_abm_reset_root(netdev, alink, TC_H_ROOT, 0); - return 0; - case TC_MQ_STATS: - return nfp_abm_mq_stats(alink, opt); - default: - return -EOPNOTSUPP; - } -} - static int nfp_abm_setup_tc(struct nfp_app *app, struct net_device *netdev, enum tc_setup_type type, void *type_data) @@ -302,10 +38,16 @@ nfp_abm_setup_tc(struct nfp_app *app, struct net_device *netdev, return -EOPNOTSUPP; switch (type) { + case TC_SETUP_ROOT_QDISC: + return nfp_abm_setup_root(netdev, repr->app_priv, type_data); case TC_SETUP_QDISC_MQ: return nfp_abm_setup_tc_mq(netdev, repr->app_priv, type_data); case TC_SETUP_QDISC_RED: return nfp_abm_setup_tc_red(netdev, repr->app_priv, type_data); + case TC_SETUP_QDISC_GRED: + return nfp_abm_setup_tc_gred(netdev, repr->app_priv, type_data); + case TC_SETUP_BLOCK: + return nfp_abm_setup_cls_block(netdev, repr, type_data); default: return -EOPNOTSUPP; } @@ -384,7 +126,9 @@ nfp_abm_spawn_repr(struct nfp_app *app, struct nfp_abm_link *alink, reprs = nfp_reprs_get_locked(app, rtype); WARN(nfp_repr_get_locked(app, reprs, alink->id), "duplicate repr"); + rtnl_lock(); rcu_assign_pointer(reprs->reprs[alink->id], netdev); + rtnl_unlock(); nfp_info(app->cpp, "%s Port %d Representor(%s) created\n", ptype == NFP_PORT_PF_PORT ? "PCIe" : "Phys", @@ -410,7 +154,9 @@ nfp_abm_kill_repr(struct nfp_app *app, struct nfp_abm_link *alink, netdev = nfp_repr_get_locked(app, reprs, alink->id); if (!netdev) return; + rtnl_lock(); rcu_assign_pointer(reprs->reprs[alink->id], NULL); + rtnl_unlock(); synchronize_rcu(); /* Cast to make sure nfp_repr_clean_and_free() takes a nfp_repr */ nfp_repr_clean_and_free((struct nfp_repr *)netdev_priv(netdev)); @@ -461,6 +207,9 @@ static int nfp_abm_eswitch_set_switchdev(struct nfp_abm *abm) struct nfp_net *nn; int err; + if (!abm->red_support) + return -EOPNOTSUPP; + err = nfp_abm_ctrl_qm_enable(abm); if (err) return err; @@ -573,31 +322,34 @@ nfp_abm_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, unsigned int id) alink->abm = abm; alink->vnic = nn; alink->id = id; - alink->parent = TC_H_ROOT; alink->total_queues = alink->vnic->max_rx_rings; - alink->qdiscs = kvcalloc(alink->total_queues, sizeof(*alink->qdiscs), - GFP_KERNEL); - if (!alink->qdiscs) { - err = -ENOMEM; + + INIT_LIST_HEAD(&alink->dscp_map); + + err = nfp_abm_ctrl_read_params(alink); + if (err) + goto err_free_alink; + + alink->prio_map = kzalloc(abm->prio_map_len, GFP_KERNEL); + if (!alink->prio_map) goto err_free_alink; - } /* This is a multi-host app, make sure MAC/PHY is up, but don't * make the MAC/PHY state follow the state of any of the ports. 
*/ err = nfp_eth_set_configured(app->cpp, eth_port->index, true); if (err < 0) - goto err_free_qdiscs; + goto err_free_priomap; netif_keep_dst(nn->dp.netdev); nfp_abm_vnic_set_mac(app->pf, abm, nn, id); - nfp_abm_ctrl_read_params(alink); + INIT_RADIX_TREE(&alink->qdiscs, GFP_KERNEL); return 0; -err_free_qdiscs: - kvfree(alink->qdiscs); +err_free_priomap: + kfree(alink->prio_map); err_free_alink: kfree(alink); return err; @@ -608,10 +360,20 @@ static void nfp_abm_vnic_free(struct nfp_app *app, struct nfp_net *nn) struct nfp_abm_link *alink = nn->app_priv; nfp_abm_kill_reprs(alink->abm, alink); - kvfree(alink->qdiscs); + WARN(!radix_tree_empty(&alink->qdiscs), "left over qdiscs\n"); + kfree(alink->prio_map); kfree(alink); } +static int nfp_abm_vnic_init(struct nfp_app *app, struct nfp_net *nn) +{ + struct nfp_abm_link *alink = nn->app_priv; + + if (nfp_abm_has_prio(alink->abm)) + return nfp_abm_ctrl_prio_map_update(alink, alink->prio_map); + return 0; +} + static u64 * nfp_abm_port_get_stats(struct nfp_app *app, struct nfp_port *port, u64 *data) { @@ -659,6 +421,21 @@ nfp_abm_port_get_stats_strings(struct nfp_app *app, struct nfp_port *port, return data; } +static int nfp_abm_fw_init_reset(struct nfp_abm *abm) +{ + unsigned int i; + + if (!abm->red_support) + return 0; + + for (i = 0; i < abm->num_bands * NFP_NET_MAX_RX_RINGS; i++) { + __nfp_abm_ctrl_set_q_lvl(abm, i, NFP_ABM_LVL_INFINITY); + __nfp_abm_ctrl_set_q_act(abm, i, NFP_ABM_ACT_DROP); + } + + return nfp_abm_ctrl_qm_disable(abm); +} + static int nfp_abm_init(struct nfp_app *app) { struct nfp_pf *pf = app->pf; @@ -690,15 +467,31 @@ static int nfp_abm_init(struct nfp_app *app) if (err) goto err_free_abm; + err = -ENOMEM; + abm->num_thresholds = array_size(abm->num_bands, NFP_NET_MAX_RX_RINGS); + abm->threshold_undef = bitmap_zalloc(abm->num_thresholds, GFP_KERNEL); + if (!abm->threshold_undef) + goto err_free_abm; + + abm->thresholds = kvcalloc(abm->num_thresholds, + sizeof(*abm->thresholds), GFP_KERNEL); + if (!abm->thresholds) + goto err_free_thresh_umap; + + abm->actions = kvcalloc(abm->num_thresholds, sizeof(*abm->actions), + GFP_KERNEL); + if (!abm->actions) + goto err_free_thresh; + /* We start in legacy mode, make sure advanced queuing is disabled */ - err = nfp_abm_ctrl_qm_disable(abm); + err = nfp_abm_fw_init_reset(abm); if (err) - goto err_free_abm; + goto err_free_act; err = -ENOMEM; reprs = nfp_reprs_alloc(pf->max_data_vnics); if (!reprs) - goto err_free_abm; + goto err_free_act; RCU_INIT_POINTER(app->reprs[NFP_REPR_TYPE_PHYS_PORT], reprs); reprs = nfp_reprs_alloc(pf->max_data_vnics); @@ -710,6 +503,12 @@ static int nfp_abm_init(struct nfp_app *app) err_free_phys: nfp_reprs_clean_and_free_by_type(app, NFP_REPR_TYPE_PHYS_PORT); +err_free_act: + kvfree(abm->actions); +err_free_thresh: + kvfree(abm->thresholds); +err_free_thresh_umap: + bitmap_free(abm->threshold_undef); err_free_abm: kfree(abm); app->priv = NULL; @@ -723,6 +522,9 @@ static void nfp_abm_clean(struct nfp_app *app) nfp_abm_eswitch_clean_up(abm); nfp_reprs_clean_and_free_by_type(app, NFP_REPR_TYPE_PF); nfp_reprs_clean_and_free_by_type(app, NFP_REPR_TYPE_PHYS_PORT); + bitmap_free(abm->threshold_undef); + kvfree(abm->actions); + kvfree(abm->thresholds); kfree(abm); app->priv = NULL; } @@ -736,6 +538,7 @@ const struct nfp_app_type app_abm = { .vnic_alloc = nfp_abm_vnic_alloc, .vnic_free = nfp_abm_vnic_free, + .vnic_init = nfp_abm_vnic_init, .port_get_stats = nfp_abm_port_get_stats, .port_get_stats_count = nfp_abm_port_get_stats_count, diff --git 
a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h index f907b7d98917..49749c60885e 100644 --- a/drivers/net/ethernet/netronome/nfp/abm/main.h +++ b/drivers/net/ethernet/netronome/nfp/abm/main.h @@ -4,7 +4,19 @@ #ifndef __NFP_ABM_H__ #define __NFP_ABM_H__ 1 +#include +#include +#include #include +#include +#include + +/* Dump of 64 PRIOs and 256 REDs seems to take 850us on Xeon v4 @ 2.20GHz; + * 2.5ms / 400Hz seems more than sufficient for stats resolution. + */ +#define NFP_ABM_STATS_REFRESH_IVAL (2500 * 1000) /* ns */ + +#define NFP_ABM_LVL_INFINITY S32_MAX struct nfp_app; struct nfp_net; @@ -12,21 +24,64 @@ struct nfp_net; #define NFP_ABM_PORTID_TYPE GENMASK(23, 16) #define NFP_ABM_PORTID_ID GENMASK(7, 0) +/* The possible actions if thresholds are exceeded */ +enum nfp_abm_q_action { + /* mark if ECN capable, otherwise drop */ + NFP_ABM_ACT_MARK_DROP = 0, + /* mark if ECN capable, otherwise goto QM */ + NFP_ABM_ACT_MARK_QUEUE = 1, + NFP_ABM_ACT_DROP = 2, + NFP_ABM_ACT_QUEUE = 3, + NFP_ABM_ACT_NOQUEUE = 4, +}; + /** * struct nfp_abm - ABM NIC app structure * @app: back pointer to nfp_app * @pf_id: ID of our PF link + * + * @red_support: is RED offload supported + * @num_prios: number of supported DSCP priorities + * @num_bands: number of supported DSCP priority bands + * @action_mask: bitmask of supported actions + * + * @thresholds: current threshold configuration + * @threshold_undef: bitmap of thresholds which have not been set + * @actions: current FW action configuration + * @num_thresholds: number of @thresholds and bits in @threshold_undef + * + * @prio_map_len: computed length of FW priority map (in bytes) + * @dscp_mask: mask FW will apply on DSCP field + * * @eswitch_mode: devlink eswitch mode, advanced functions only visible * in switchdev mode + * * @q_lvls: queue level control area * @qm_stats: queue statistics symbol + * @q_stats: basic queue statistics (only in per-band case) */ struct nfp_abm { struct nfp_app *app; unsigned int pf_id; + + unsigned int red_support; + unsigned int num_prios; + unsigned int num_bands; + unsigned int action_mask; + + u32 *thresholds; + unsigned long *threshold_undef; + u8 *actions; + size_t num_thresholds; + + unsigned int prio_map_len; + u8 dscp_mask; + enum devlink_eswitch_mode eswitch_mode; + const struct nfp_rtsym *q_lvls; const struct nfp_rtsym *qm_stats; + const struct nfp_rtsym *q_stats; }; /** @@ -57,16 +112,76 @@ struct nfp_alink_xstats { u64 pdrop; }; +enum nfp_qdisc_type { + NFP_QDISC_NONE = 0, + NFP_QDISC_MQ, + NFP_QDISC_RED, + NFP_QDISC_GRED, +}; + +#define NFP_QDISC_UNTRACKED ((struct nfp_qdisc *)1UL) + /** - * struct nfp_red_qdisc - representation of single RED Qdisc - * @handle: handle of currently offloaded RED Qdisc - * @stats: statistics from last refresh - * @xstats: base of extended statistics + * struct nfp_qdisc - tracked TC Qdisc + * @netdev: netdev on which Qdisc was created + * @type: Qdisc type + * @handle: handle of this Qdisc + * @parent_handle: handle of the parent (unreliable if Qdisc was grafted) + * @use_cnt: number of attachment points in the hierarchy + * @num_children: current size of the @children array + * @children: pointers to children + * + * @params_ok: parameters of this Qdisc are OK for offload + * @offload_mark: offload refresh state - selected for offload + * @offloaded: Qdisc is currently offloaded to the HW + * + * @mq: MQ Qdisc specific parameters and state + * @mq.stats: current stats of the MQ Qdisc + * @mq.prev_stats: previously reported @mq.stats 
+ * + * @red: RED Qdisc specific parameters and state + * @red.num_bands: Number of valid entries in the @red.band table + * @red.band: Per-band array of RED instances + * @red.band.ecn: ECN marking is enabled (rather than drop) + * @red.band.threshold: ECN marking threshold + * @red.band.stats: current stats of the RED Qdisc + * @red.band.prev_stats: previously reported @red.stats + * @red.band.xstats: extended stats for RED - current + * @red.band.prev_xstats: extended stats for RED - previously reported */ -struct nfp_red_qdisc { +struct nfp_qdisc { + struct net_device *netdev; + enum nfp_qdisc_type type; u32 handle; - struct nfp_alink_stats stats; - struct nfp_alink_xstats xstats; + u32 parent_handle; + unsigned int use_cnt; + unsigned int num_children; + struct nfp_qdisc **children; + + bool params_ok; + bool offload_mark; + bool offloaded; + + union { + /* NFP_QDISC_MQ */ + struct { + struct nfp_alink_stats stats; + struct nfp_alink_stats prev_stats; + } mq; + /* TC_SETUP_QDISC_RED, TC_SETUP_QDISC_GRED */ + struct { + unsigned int num_bands; + + struct { + bool ecn; + u32 threshold; + struct nfp_alink_stats stats; + struct nfp_alink_stats prev_stats; + struct nfp_alink_xstats xstats; + struct nfp_alink_xstats prev_xstats; + } band[MAX_DPs]; + } red; + }; }; /** @@ -76,9 +191,17 @@ struct nfp_red_qdisc { * @id: id of the data vNIC * @queue_base: id of base to host queue within PCIe (not QC idx) * @total_queues: number of PF queues - * @parent: handle of expected parent, i.e. handle of MQ, or TC_H_ROOT - * @num_qdiscs: number of currently used qdiscs - * @qdiscs: array of qdiscs + * + * @last_stats_update: ktime of last stats update + * + * @prio_map: current map of priorities + * @has_prio: @prio_map is valid + * + * @def_band: default band to use + * @dscp_map: list of DSCP to band mappings + * + * @root_qdisc: pointer to the current root of the Qdisc hierarchy + * @qdiscs: all qdiscs recorded by major part of the handle */ struct nfp_abm_link { struct nfp_abm *abm; @@ -86,26 +209,65 @@ struct nfp_abm_link { unsigned int id; unsigned int queue_base; unsigned int total_queues; - u32 parent; - unsigned int num_qdiscs; - struct nfp_red_qdisc *qdiscs; + + u64 last_stats_update; + + u32 *prio_map; + bool has_prio; + + u8 def_band; + struct list_head dscp_map; + + struct nfp_qdisc *root_qdisc; + struct radix_tree_root qdiscs; }; -void nfp_abm_ctrl_read_params(struct nfp_abm_link *alink); +static inline bool nfp_abm_has_prio(struct nfp_abm *abm) +{ + return abm->num_bands > 1; +} + +static inline bool nfp_abm_has_drop(struct nfp_abm *abm) +{ + return abm->action_mask & BIT(NFP_ABM_ACT_DROP); +} + +static inline bool nfp_abm_has_mark(struct nfp_abm *abm) +{ + return abm->action_mask & BIT(NFP_ABM_ACT_MARK_DROP); +} + +void nfp_abm_qdisc_offload_update(struct nfp_abm_link *alink); +int nfp_abm_setup_root(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_root_qopt_offload *opt); +int nfp_abm_setup_tc_red(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_red_qopt_offload *opt); +int nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_mq_qopt_offload *opt); +int nfp_abm_setup_tc_gred(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_gred_qopt_offload *opt); +int nfp_abm_setup_cls_block(struct net_device *netdev, struct nfp_repr *repr, + struct tc_block_offload *opt); + +int nfp_abm_ctrl_read_params(struct nfp_abm_link *alink); int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm); -int 
nfp_abm_ctrl_set_all_q_lvls(struct nfp_abm_link *alink, u32 val); -int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int i, - u32 val); -int nfp_abm_ctrl_read_stats(struct nfp_abm_link *alink, - struct nfp_alink_stats *stats); -int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, unsigned int i, +int __nfp_abm_ctrl_set_q_lvl(struct nfp_abm *abm, unsigned int id, u32 val); +int nfp_abm_ctrl_set_q_lvl(struct nfp_abm_link *alink, unsigned int band, + unsigned int queue, u32 val); +int __nfp_abm_ctrl_set_q_act(struct nfp_abm *abm, unsigned int id, + enum nfp_abm_q_action act); +int nfp_abm_ctrl_set_q_act(struct nfp_abm_link *alink, unsigned int band, + unsigned int queue, enum nfp_abm_q_action act); +int nfp_abm_ctrl_read_q_stats(struct nfp_abm_link *alink, + unsigned int band, unsigned int queue, struct nfp_alink_stats *stats); -int nfp_abm_ctrl_read_xstats(struct nfp_abm_link *alink, - struct nfp_alink_xstats *xstats); -int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink, unsigned int i, +int nfp_abm_ctrl_read_q_xstats(struct nfp_abm_link *alink, + unsigned int band, unsigned int queue, struct nfp_alink_xstats *xstats); u64 nfp_abm_ctrl_stat_non_sto(struct nfp_abm_link *alink, unsigned int i); u64 nfp_abm_ctrl_stat_sto(struct nfp_abm_link *alink, unsigned int i); int nfp_abm_ctrl_qm_enable(struct nfp_abm *abm); int nfp_abm_ctrl_qm_disable(struct nfp_abm *abm); +void nfp_abm_prio_map_update(struct nfp_abm *abm); +int nfp_abm_ctrl_prio_map_update(struct nfp_abm_link *alink, u32 *packed); #endif diff --git a/drivers/net/ethernet/netronome/nfp/abm/qdisc.c b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c new file mode 100644 index 000000000000..2473fb5f75e5 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/abm/qdisc.c @@ -0,0 +1,850 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2018 Netronome Systems, Inc. 
*/ + +#include +#include +#include +#include + +#include "../nfpcore/nfp_cpp.h" +#include "../nfp_app.h" +#include "../nfp_main.h" +#include "../nfp_net.h" +#include "../nfp_port.h" +#include "main.h" + +static bool nfp_abm_qdisc_is_red(struct nfp_qdisc *qdisc) +{ + return qdisc->type == NFP_QDISC_RED || qdisc->type == NFP_QDISC_GRED; +} + +static bool nfp_abm_qdisc_child_valid(struct nfp_qdisc *qdisc, unsigned int id) +{ + return qdisc->children[id] && + qdisc->children[id] != NFP_QDISC_UNTRACKED; +} + +static void *nfp_abm_qdisc_tree_deref_slot(void __rcu **slot) +{ + return rtnl_dereference(*slot); +} + +static void +nfp_abm_stats_propagate(struct nfp_alink_stats *parent, + struct nfp_alink_stats *child) +{ + parent->tx_pkts += child->tx_pkts; + parent->tx_bytes += child->tx_bytes; + parent->backlog_pkts += child->backlog_pkts; + parent->backlog_bytes += child->backlog_bytes; + parent->overlimits += child->overlimits; + parent->drops += child->drops; +} + +static void +nfp_abm_stats_update_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc, + unsigned int queue) +{ + struct nfp_cpp *cpp = alink->abm->app->cpp; + unsigned int i; + int err; + + if (!qdisc->offloaded) + return; + + for (i = 0; i < qdisc->red.num_bands; i++) { + err = nfp_abm_ctrl_read_q_stats(alink, i, queue, + &qdisc->red.band[i].stats); + if (err) + nfp_err(cpp, "RED stats (%d, %d) read failed with error %d\n", + i, queue, err); + + err = nfp_abm_ctrl_read_q_xstats(alink, i, queue, + &qdisc->red.band[i].xstats); + if (err) + nfp_err(cpp, "RED xstats (%d, %d) read failed with error %d\n", + i, queue, err); + } +} + +static void +nfp_abm_stats_update_mq(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc) +{ + unsigned int i; + + if (qdisc->type != NFP_QDISC_MQ) + return; + + for (i = 0; i < alink->total_queues; i++) + if (nfp_abm_qdisc_child_valid(qdisc, i)) + nfp_abm_stats_update_red(alink, qdisc->children[i], i); +} + +static void __nfp_abm_stats_update(struct nfp_abm_link *alink, u64 time_now) +{ + alink->last_stats_update = time_now; + if (alink->root_qdisc) + nfp_abm_stats_update_mq(alink, alink->root_qdisc); +} + +static void nfp_abm_stats_update(struct nfp_abm_link *alink) +{ + u64 now; + + /* Limit the frequency of updates - stats of non-leaf qdiscs are a sum + * of all their leafs, so we would read the same stat multiple times + * for every dump. 
+ */ + now = ktime_get(); + if (now - alink->last_stats_update < NFP_ABM_STATS_REFRESH_IVAL) + return; + + __nfp_abm_stats_update(alink, now); +} + +static void +nfp_abm_qdisc_unlink_children(struct nfp_qdisc *qdisc, + unsigned int start, unsigned int end) +{ + unsigned int i; + + for (i = start; i < end; i++) + if (nfp_abm_qdisc_child_valid(qdisc, i)) { + qdisc->children[i]->use_cnt--; + qdisc->children[i] = NULL; + } +} + +static void +nfp_abm_qdisc_offload_stop(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc) +{ + unsigned int i; + + /* Don't complain when qdisc is getting unlinked */ + if (qdisc->use_cnt) + nfp_warn(alink->abm->app->cpp, "Offload of '%08x' stopped\n", + qdisc->handle); + + if (!nfp_abm_qdisc_is_red(qdisc)) + return; + + for (i = 0; i < qdisc->red.num_bands; i++) { + qdisc->red.band[i].stats.backlog_pkts = 0; + qdisc->red.band[i].stats.backlog_bytes = 0; + } +} + +static int +__nfp_abm_stats_init(struct nfp_abm_link *alink, unsigned int band, + unsigned int queue, struct nfp_alink_stats *prev_stats, + struct nfp_alink_xstats *prev_xstats) +{ + u64 backlog_pkts, backlog_bytes; + int err; + + /* Don't touch the backlog, backlog can only be reset after it has + * been reported back to the tc qdisc stats. + */ + backlog_pkts = prev_stats->backlog_pkts; + backlog_bytes = prev_stats->backlog_bytes; + + err = nfp_abm_ctrl_read_q_stats(alink, band, queue, prev_stats); + if (err) { + nfp_err(alink->abm->app->cpp, + "RED stats init (%d, %d) failed with error %d\n", + band, queue, err); + return err; + } + + err = nfp_abm_ctrl_read_q_xstats(alink, band, queue, prev_xstats); + if (err) { + nfp_err(alink->abm->app->cpp, + "RED xstats init (%d, %d) failed with error %d\n", + band, queue, err); + return err; + } + + prev_stats->backlog_pkts = backlog_pkts; + prev_stats->backlog_bytes = backlog_bytes; + return 0; +} + +static int +nfp_abm_stats_init(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc, + unsigned int queue) +{ + unsigned int i; + int err; + + for (i = 0; i < qdisc->red.num_bands; i++) { + err = __nfp_abm_stats_init(alink, i, queue, + &qdisc->red.band[i].prev_stats, + &qdisc->red.band[i].prev_xstats); + if (err) + return err; + } + + return 0; +} + +static void +nfp_abm_offload_compile_red(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc, + unsigned int queue) +{ + bool good_red, good_gred; + unsigned int i; + + good_red = qdisc->type == NFP_QDISC_RED && + qdisc->params_ok && + qdisc->use_cnt == 1 && + !alink->has_prio && + !qdisc->children[0]; + good_gred = qdisc->type == NFP_QDISC_GRED && + qdisc->params_ok && + qdisc->use_cnt == 1; + qdisc->offload_mark = good_red || good_gred; + + /* If we are starting offload init prev_stats */ + if (qdisc->offload_mark && !qdisc->offloaded) + if (nfp_abm_stats_init(alink, qdisc, queue)) + qdisc->offload_mark = false; + + if (!qdisc->offload_mark) + return; + + for (i = 0; i < alink->abm->num_bands; i++) { + enum nfp_abm_q_action act; + + nfp_abm_ctrl_set_q_lvl(alink, i, queue, + qdisc->red.band[i].threshold); + act = qdisc->red.band[i].ecn ? 
+ NFP_ABM_ACT_MARK_DROP : NFP_ABM_ACT_DROP; + nfp_abm_ctrl_set_q_act(alink, i, queue, act); + } +} + +static void +nfp_abm_offload_compile_mq(struct nfp_abm_link *alink, struct nfp_qdisc *qdisc) +{ + unsigned int i; + + qdisc->offload_mark = qdisc->type == NFP_QDISC_MQ; + if (!qdisc->offload_mark) + return; + + for (i = 0; i < alink->total_queues; i++) { + struct nfp_qdisc *child = qdisc->children[i]; + + if (!nfp_abm_qdisc_child_valid(qdisc, i)) + continue; + + nfp_abm_offload_compile_red(alink, child, i); + } +} + +void nfp_abm_qdisc_offload_update(struct nfp_abm_link *alink) +{ + struct nfp_abm *abm = alink->abm; + struct radix_tree_iter iter; + struct nfp_qdisc *qdisc; + void __rcu **slot; + size_t i; + + /* Mark all thresholds as unconfigured */ + for (i = 0; i < abm->num_bands; i++) + __bitmap_set(abm->threshold_undef, + i * NFP_NET_MAX_RX_RINGS + alink->queue_base, + alink->total_queues); + + /* Clear offload marks */ + radix_tree_for_each_slot(slot, &alink->qdiscs, &iter, 0) { + qdisc = nfp_abm_qdisc_tree_deref_slot(slot); + qdisc->offload_mark = false; + } + + if (alink->root_qdisc) + nfp_abm_offload_compile_mq(alink, alink->root_qdisc); + + /* Refresh offload status */ + radix_tree_for_each_slot(slot, &alink->qdiscs, &iter, 0) { + qdisc = nfp_abm_qdisc_tree_deref_slot(slot); + if (!qdisc->offload_mark && qdisc->offloaded) + nfp_abm_qdisc_offload_stop(alink, qdisc); + qdisc->offloaded = qdisc->offload_mark; + } + + /* Reset the unconfigured thresholds */ + for (i = 0; i < abm->num_thresholds; i++) + if (test_bit(i, abm->threshold_undef)) + __nfp_abm_ctrl_set_q_lvl(abm, i, NFP_ABM_LVL_INFINITY); + + __nfp_abm_stats_update(alink, ktime_get()); +} + +static void +nfp_abm_qdisc_clear_mq(struct net_device *netdev, struct nfp_abm_link *alink, + struct nfp_qdisc *qdisc) +{ + struct radix_tree_iter iter; + unsigned int mq_refs = 0; + void __rcu **slot; + + if (!qdisc->use_cnt) + return; + /* MQ doesn't notify well on destruction, we need special handling of + * MQ's children. 
+ */ + if (qdisc->type == NFP_QDISC_MQ && + qdisc == alink->root_qdisc && + netdev->reg_state == NETREG_UNREGISTERING) + return; + + /* Count refs held by MQ instances and clear pointers */ + radix_tree_for_each_slot(slot, &alink->qdiscs, &iter, 0) { + struct nfp_qdisc *mq = nfp_abm_qdisc_tree_deref_slot(slot); + unsigned int i; + + if (mq->type != NFP_QDISC_MQ || mq->netdev != netdev) + continue; + for (i = 0; i < mq->num_children; i++) + if (mq->children[i] == qdisc) { + mq->children[i] = NULL; + mq_refs++; + } + } + + WARN(qdisc->use_cnt != mq_refs, "non-zero qdisc use count: %d (- %d)\n", + qdisc->use_cnt, mq_refs); +} + +static void +nfp_abm_qdisc_free(struct net_device *netdev, struct nfp_abm_link *alink, + struct nfp_qdisc *qdisc) +{ + struct nfp_port *port = nfp_port_from_netdev(netdev); + + if (!qdisc) + return; + nfp_abm_qdisc_clear_mq(netdev, alink, qdisc); + WARN_ON(radix_tree_delete(&alink->qdiscs, + TC_H_MAJ(qdisc->handle)) != qdisc); + + kfree(qdisc->children); + kfree(qdisc); + + port->tc_offload_cnt--; +} + +static struct nfp_qdisc * +nfp_abm_qdisc_alloc(struct net_device *netdev, struct nfp_abm_link *alink, + enum nfp_qdisc_type type, u32 parent_handle, u32 handle, + unsigned int children) +{ + struct nfp_port *port = nfp_port_from_netdev(netdev); + struct nfp_qdisc *qdisc; + int err; + + qdisc = kzalloc(sizeof(*qdisc), GFP_KERNEL); + if (!qdisc) + return NULL; + + if (children) { + qdisc->children = kcalloc(children, sizeof(void *), GFP_KERNEL); + if (!qdisc->children) + goto err_free_qdisc; + } + + qdisc->netdev = netdev; + qdisc->type = type; + qdisc->parent_handle = parent_handle; + qdisc->handle = handle; + qdisc->num_children = children; + + err = radix_tree_insert(&alink->qdiscs, TC_H_MAJ(qdisc->handle), qdisc); + if (err) { + nfp_err(alink->abm->app->cpp, + "Qdisc insertion into radix tree failed: %d\n", err); + goto err_free_child_tbl; + } + + port->tc_offload_cnt++; + return qdisc; + +err_free_child_tbl: + kfree(qdisc->children); +err_free_qdisc: + kfree(qdisc); + return NULL; +} + +static struct nfp_qdisc * +nfp_abm_qdisc_find(struct nfp_abm_link *alink, u32 handle) +{ + return radix_tree_lookup(&alink->qdiscs, TC_H_MAJ(handle)); +} + +static int +nfp_abm_qdisc_replace(struct net_device *netdev, struct nfp_abm_link *alink, + enum nfp_qdisc_type type, u32 parent_handle, u32 handle, + unsigned int children, struct nfp_qdisc **qdisc) +{ + *qdisc = nfp_abm_qdisc_find(alink, handle); + if (*qdisc) { + if (WARN_ON((*qdisc)->type != type)) + return -EINVAL; + return 1; + } + + *qdisc = nfp_abm_qdisc_alloc(netdev, alink, type, parent_handle, handle, + children); + return *qdisc ? 0 : -ENOMEM; +} + +static void +nfp_abm_qdisc_destroy(struct net_device *netdev, struct nfp_abm_link *alink, + u32 handle) +{ + struct nfp_qdisc *qdisc; + + qdisc = nfp_abm_qdisc_find(alink, handle); + if (!qdisc) + return; + + /* We don't get TC_SETUP_ROOT_QDISC w/ MQ when netdev is unregistered */ + if (alink->root_qdisc == qdisc) + qdisc->use_cnt--; + + nfp_abm_qdisc_unlink_children(qdisc, 0, qdisc->num_children); + nfp_abm_qdisc_free(netdev, alink, qdisc); + + if (alink->root_qdisc == qdisc) { + alink->root_qdisc = NULL; + /* Only root change matters, other changes are acted upon on + * the graft notification. 
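+ * (a child replaced under an existing parent arrives as a graft,
+ * and nfp_abm_qdisc_graft() refreshes the offload state itself).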
+ */ + nfp_abm_qdisc_offload_update(alink); + } +} + +static int +nfp_abm_qdisc_graft(struct nfp_abm_link *alink, u32 handle, u32 child_handle, + unsigned int id) +{ + struct nfp_qdisc *parent, *child; + + parent = nfp_abm_qdisc_find(alink, handle); + if (!parent) + return 0; + + if (WARN(id >= parent->num_children, + "graft child out of bound %d >= %d\n", + id, parent->num_children)) + return -EINVAL; + + nfp_abm_qdisc_unlink_children(parent, id, id + 1); + + child = nfp_abm_qdisc_find(alink, child_handle); + if (child) + child->use_cnt++; + else + child = NFP_QDISC_UNTRACKED; + parent->children[id] = child; + + nfp_abm_qdisc_offload_update(alink); + + return 0; +} + +static void +nfp_abm_stats_calculate(struct nfp_alink_stats *new, + struct nfp_alink_stats *old, + struct gnet_stats_basic_packed *bstats, + struct gnet_stats_queue *qstats) +{ + _bstats_update(bstats, new->tx_bytes - old->tx_bytes, + new->tx_pkts - old->tx_pkts); + qstats->qlen += new->backlog_pkts - old->backlog_pkts; + qstats->backlog += new->backlog_bytes - old->backlog_bytes; + qstats->overlimits += new->overlimits - old->overlimits; + qstats->drops += new->drops - old->drops; +} + +static void +nfp_abm_stats_red_calculate(struct nfp_alink_xstats *new, + struct nfp_alink_xstats *old, + struct red_stats *stats) +{ + stats->forced_mark += new->ecn_marked - old->ecn_marked; + stats->pdrop += new->pdrop - old->pdrop; +} + +static int +nfp_abm_gred_stats(struct nfp_abm_link *alink, u32 handle, + struct tc_gred_qopt_offload_stats *stats) +{ + struct nfp_qdisc *qdisc; + unsigned int i; + + nfp_abm_stats_update(alink); + + qdisc = nfp_abm_qdisc_find(alink, handle); + if (!qdisc) + return -EOPNOTSUPP; + /* If the qdisc offload has stopped we may need to adjust the backlog + * counters back so carry on even if qdisc is not currently offloaded. + */ + + for (i = 0; i < qdisc->red.num_bands; i++) { + if (!stats->xstats[i]) + continue; + + nfp_abm_stats_calculate(&qdisc->red.band[i].stats, + &qdisc->red.band[i].prev_stats, + &stats->bstats[i], &stats->qstats[i]); + qdisc->red.band[i].prev_stats = qdisc->red.band[i].stats; + + nfp_abm_stats_red_calculate(&qdisc->red.band[i].xstats, + &qdisc->red.band[i].prev_xstats, + stats->xstats[i]); + qdisc->red.band[i].prev_xstats = qdisc->red.band[i].xstats; + } + + return qdisc->offloaded ? 
0 : -EOPNOTSUPP; +} + +static bool +nfp_abm_gred_check_params(struct nfp_abm_link *alink, + struct tc_gred_qopt_offload *opt) +{ + struct nfp_cpp *cpp = alink->abm->app->cpp; + struct nfp_abm *abm = alink->abm; + unsigned int i; + + if (opt->set.grio_on || opt->set.wred_on) { + nfp_warn(cpp, "GRED offload failed - GRIO and WRED not supported (p:%08x h:%08x)\n", + opt->parent, opt->handle); + return false; + } + if (opt->set.dp_def != alink->def_band) { + nfp_warn(cpp, "GRED offload failed - default band must be %d (p:%08x h:%08x)\n", + alink->def_band, opt->parent, opt->handle); + return false; + } + if (opt->set.dp_cnt != abm->num_bands) { + nfp_warn(cpp, "GRED offload failed - band count must be %d (p:%08x h:%08x)\n", + abm->num_bands, opt->parent, opt->handle); + return false; + } + + for (i = 0; i < abm->num_bands; i++) { + struct tc_gred_vq_qopt_offload_params *band = &opt->set.tab[i]; + + if (!band->present) + return false; + if (!band->is_ecn && !nfp_abm_has_drop(abm)) { + nfp_warn(cpp, "GRED offload failed - drop is not supported (ECN option required) (p:%08x h:%08x vq:%d)\n", + opt->parent, opt->handle, i); + return false; + } + if (band->is_ecn && !nfp_abm_has_mark(abm)) { + nfp_warn(cpp, "GRED offload failed - ECN marking not supported (p:%08x h:%08x vq:%d)\n", + opt->parent, opt->handle, i); + return false; + } + if (band->is_harddrop) { + nfp_warn(cpp, "GRED offload failed - harddrop is not supported (p:%08x h:%08x vq:%d)\n", + opt->parent, opt->handle, i); + return false; + } + if (band->min != band->max) { + nfp_warn(cpp, "GRED offload failed - threshold mismatch (p:%08x h:%08x vq:%d)\n", + opt->parent, opt->handle, i); + return false; + } + if (band->min > S32_MAX) { + nfp_warn(cpp, "GRED offload failed - threshold too large %d > %d (p:%08x h:%08x vq:%d)\n", + band->min, S32_MAX, opt->parent, opt->handle, + i); + return false; + } + } + + return true; +} + +static int +nfp_abm_gred_replace(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_gred_qopt_offload *opt) +{ + struct nfp_qdisc *qdisc; + unsigned int i; + int ret; + + ret = nfp_abm_qdisc_replace(netdev, alink, NFP_QDISC_GRED, opt->parent, + opt->handle, 0, &qdisc); + if (ret < 0) + return ret; + + qdisc->params_ok = nfp_abm_gred_check_params(alink, opt); + if (qdisc->params_ok) { + qdisc->red.num_bands = opt->set.dp_cnt; + for (i = 0; i < qdisc->red.num_bands; i++) { + qdisc->red.band[i].ecn = opt->set.tab[i].is_ecn; + qdisc->red.band[i].threshold = opt->set.tab[i].min; + } + } + + if (qdisc->use_cnt) + nfp_abm_qdisc_offload_update(alink); + + return 0; +} + +int nfp_abm_setup_tc_gred(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_gred_qopt_offload *opt) +{ + switch (opt->command) { + case TC_GRED_REPLACE: + return nfp_abm_gred_replace(netdev, alink, opt); + case TC_GRED_DESTROY: + nfp_abm_qdisc_destroy(netdev, alink, opt->handle); + return 0; + case TC_GRED_STATS: + return nfp_abm_gred_stats(alink, opt->handle, &opt->stats); + default: + return -EOPNOTSUPP; + } +} + +static int +nfp_abm_red_xstats(struct nfp_abm_link *alink, struct tc_red_qopt_offload *opt) +{ + struct nfp_qdisc *qdisc; + + nfp_abm_stats_update(alink); + + qdisc = nfp_abm_qdisc_find(alink, opt->handle); + if (!qdisc || !qdisc->offloaded) + return -EOPNOTSUPP; + + nfp_abm_stats_red_calculate(&qdisc->red.band[0].xstats, + &qdisc->red.band[0].prev_xstats, + opt->xstats); + qdisc->red.band[0].prev_xstats = qdisc->red.band[0].xstats; + return 0; +} + +static int +nfp_abm_red_stats(struct nfp_abm_link *alink, u32 
handle, + struct tc_qopt_offload_stats *stats) +{ + struct nfp_qdisc *qdisc; + + nfp_abm_stats_update(alink); + + qdisc = nfp_abm_qdisc_find(alink, handle); + if (!qdisc) + return -EOPNOTSUPP; + /* If the qdisc offload has stopped we may need to adjust the backlog + * counters back so carry on even if qdisc is not currently offloaded. + */ + + nfp_abm_stats_calculate(&qdisc->red.band[0].stats, + &qdisc->red.band[0].prev_stats, + stats->bstats, stats->qstats); + qdisc->red.band[0].prev_stats = qdisc->red.band[0].stats; + + return qdisc->offloaded ? 0 : -EOPNOTSUPP; +} + +static bool +nfp_abm_red_check_params(struct nfp_abm_link *alink, + struct tc_red_qopt_offload *opt) +{ + struct nfp_cpp *cpp = alink->abm->app->cpp; + struct nfp_abm *abm = alink->abm; + + if (!opt->set.is_ecn && !nfp_abm_has_drop(abm)) { + nfp_warn(cpp, "RED offload failed - drop is not supported (ECN option required) (p:%08x h:%08x)\n", + opt->parent, opt->handle); + return false; + } + if (opt->set.is_ecn && !nfp_abm_has_mark(abm)) { + nfp_warn(cpp, "RED offload failed - ECN marking not supported (p:%08x h:%08x)\n", + opt->parent, opt->handle); + return false; + } + if (opt->set.is_harddrop) { + nfp_warn(cpp, "RED offload failed - harddrop is not supported (p:%08x h:%08x)\n", + opt->parent, opt->handle); + return false; + } + if (opt->set.min != opt->set.max) { + nfp_warn(cpp, "RED offload failed - unsupported min/max parameters (p:%08x h:%08x)\n", + opt->parent, opt->handle); + return false; + } + if (opt->set.min > NFP_ABM_LVL_INFINITY) { + nfp_warn(cpp, "RED offload failed - threshold too large %d > %d (p:%08x h:%08x)\n", + opt->set.min, NFP_ABM_LVL_INFINITY, opt->parent, + opt->handle); + return false; + } + + return true; +} + +static int +nfp_abm_red_replace(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_red_qopt_offload *opt) +{ + struct nfp_qdisc *qdisc; + int ret; + + ret = nfp_abm_qdisc_replace(netdev, alink, NFP_QDISC_RED, opt->parent, + opt->handle, 1, &qdisc); + if (ret < 0) + return ret; + + /* If limit != 0 child gets reset */ + if (opt->set.limit) { + if (nfp_abm_qdisc_child_valid(qdisc, 0)) + qdisc->children[0]->use_cnt--; + qdisc->children[0] = NULL; + } else { + /* A qdisc just allocated without a limit will use noop_qdisc, + * i.e. a black hole. 
+ */ + if (!ret) + qdisc->children[0] = NFP_QDISC_UNTRACKED; + } + + qdisc->params_ok = nfp_abm_red_check_params(alink, opt); + if (qdisc->params_ok) { + qdisc->red.num_bands = 1; + qdisc->red.band[0].ecn = opt->set.is_ecn; + qdisc->red.band[0].threshold = opt->set.min; + } + + if (qdisc->use_cnt == 1) + nfp_abm_qdisc_offload_update(alink); + + return 0; +} + +int nfp_abm_setup_tc_red(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_red_qopt_offload *opt) +{ + switch (opt->command) { + case TC_RED_REPLACE: + return nfp_abm_red_replace(netdev, alink, opt); + case TC_RED_DESTROY: + nfp_abm_qdisc_destroy(netdev, alink, opt->handle); + return 0; + case TC_RED_STATS: + return nfp_abm_red_stats(alink, opt->handle, &opt->stats); + case TC_RED_XSTATS: + return nfp_abm_red_xstats(alink, opt); + case TC_RED_GRAFT: + return nfp_abm_qdisc_graft(alink, opt->handle, + opt->child_handle, 0); + default: + return -EOPNOTSUPP; + } +} + +static int +nfp_abm_mq_create(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_mq_qopt_offload *opt) +{ + struct nfp_qdisc *qdisc; + int ret; + + ret = nfp_abm_qdisc_replace(netdev, alink, NFP_QDISC_MQ, + TC_H_ROOT, opt->handle, alink->total_queues, + &qdisc); + if (ret < 0) + return ret; + + qdisc->params_ok = true; + qdisc->offloaded = true; + nfp_abm_qdisc_offload_update(alink); + return 0; +} + +static int +nfp_abm_mq_stats(struct nfp_abm_link *alink, u32 handle, + struct tc_qopt_offload_stats *stats) +{ + struct nfp_qdisc *qdisc, *red; + unsigned int i, j; + + qdisc = nfp_abm_qdisc_find(alink, handle); + if (!qdisc) + return -EOPNOTSUPP; + + nfp_abm_stats_update(alink); + + /* MQ stats are summed over the children in the core, so we need + * to add up the unreported child values. + */ + memset(&qdisc->mq.stats, 0, sizeof(qdisc->mq.stats)); + memset(&qdisc->mq.prev_stats, 0, sizeof(qdisc->mq.prev_stats)); + + for (i = 0; i < qdisc->num_children; i++) { + if (!nfp_abm_qdisc_child_valid(qdisc, i)) + continue; + + if (!nfp_abm_qdisc_is_red(qdisc->children[i])) + continue; + red = qdisc->children[i]; + + for (j = 0; j < red->red.num_bands; j++) { + nfp_abm_stats_propagate(&qdisc->mq.stats, + &red->red.band[j].stats); + nfp_abm_stats_propagate(&qdisc->mq.prev_stats, + &red->red.band[j].prev_stats); + } + } + + nfp_abm_stats_calculate(&qdisc->mq.stats, &qdisc->mq.prev_stats, + stats->bstats, stats->qstats); + + return qdisc->offloaded ? 
0 : -EOPNOTSUPP; +} + +int nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_mq_qopt_offload *opt) +{ + switch (opt->command) { + case TC_MQ_CREATE: + return nfp_abm_mq_create(netdev, alink, opt); + case TC_MQ_DESTROY: + nfp_abm_qdisc_destroy(netdev, alink, opt->handle); + return 0; + case TC_MQ_STATS: + return nfp_abm_mq_stats(alink, opt->handle, &opt->stats); + case TC_MQ_GRAFT: + return nfp_abm_qdisc_graft(alink, opt->handle, + opt->graft_params.child_handle, + opt->graft_params.queue); + default: + return -EOPNOTSUPP; + } +} + +int nfp_abm_setup_root(struct net_device *netdev, struct nfp_abm_link *alink, + struct tc_root_qopt_offload *opt) +{ + if (opt->ingress) + return -EOPNOTSUPP; + if (alink->root_qdisc) + alink->root_qdisc->use_cnt--; + alink->root_qdisc = nfp_abm_qdisc_find(alink, opt->handle); + if (alink->root_qdisc) + alink->root_qdisc->use_cnt++; + + nfp_abm_qdisc_offload_update(alink); + + return 0; +} diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c index 97d33bb4d84d..e23ca90289f7 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c @@ -2382,6 +2382,49 @@ static int neg_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) return 0; } +static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) +{ + /* Set signedness bit (MSB of result). */ + emit_alu(nfp_prog, reg_none(), reg_a(dst), ALU_OP_OR, reg_imm(0)); + emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR, reg_b(dst), + SHF_SC_R_SHF, shift_amt); + wrp_immed(nfp_prog, reg_both(dst + 1), 0); + + return 0; +} + +static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +{ + const struct bpf_insn *insn = &meta->insn; + u64 umin, umax; + u8 dst, src; + + dst = insn->dst_reg * 2; + umin = meta->umin_src; + umax = meta->umax_src; + if (umin == umax) + return __ashr_imm(nfp_prog, dst, umin); + + src = insn->src_reg * 2; + /* NOTE: the first insn will set both indirect shift amount (source A) + * and signedness bit (MSB of result). + */ + emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_b(dst)); + emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR, + reg_b(dst), SHF_SC_R_SHF); + wrp_immed(nfp_prog, reg_both(dst + 1), 0); + + return 0; +} + +static int ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +{ + const struct bpf_insn *insn = &meta->insn; + u8 dst = insn->dst_reg * 2; + + return __ashr_imm(nfp_prog, dst, insn->imm); +} + static int shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { const struct bpf_insn *insn = &meta->insn; @@ -3009,26 +3052,19 @@ static int jset_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { const struct bpf_insn *insn = &meta->insn; u64 imm = insn->imm; /* sign extend */ + u8 dst_gpr = insn->dst_reg * 2; swreg tmp_reg; - if (!imm) { - meta->skip = true; - return 0; - } - - if (imm & ~0U) { - tmp_reg = ur_load_imm_any(nfp_prog, imm & ~0U, imm_b(nfp_prog)); - emit_alu(nfp_prog, reg_none(), - reg_a(insn->dst_reg * 2), ALU_OP_AND, tmp_reg); - emit_br(nfp_prog, BR_BNE, insn->off, 0); - } - - if (imm >> 32) { - tmp_reg = ur_load_imm_any(nfp_prog, imm >> 32, imm_b(nfp_prog)); + tmp_reg = ur_load_imm_any(nfp_prog, imm & ~0U, imm_b(nfp_prog)); + emit_alu(nfp_prog, imm_b(nfp_prog), + reg_a(dst_gpr), ALU_OP_AND, tmp_reg); + /* Upper word of the mask can only be 0 or ~0 from sign extension, + * so either ignore it or OR the whole thing in. 
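+ * E.g. a positive imm sign-extends with an upper word of 0, so the
+ * low word AND alone decides the branch; a negative imm gives an
+ * upper word of ~0, so OR-ing dst's high word into the AND result
+ * lets the single BR_BNE below test both words at once.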
+ */ + if (imm >> 32) emit_alu(nfp_prog, reg_none(), - reg_a(insn->dst_reg * 2 + 1), ALU_OP_AND, tmp_reg); - emit_br(nfp_prog, BR_BNE, insn->off, 0); - } + reg_a(dst_gpr + 1), ALU_OP_OR, imm_b(nfp_prog)); + emit_br(nfp_prog, BR_BNE, insn->off, 0); return 0; } @@ -3286,6 +3322,8 @@ static const instr_cb_t instr_cb[256] = { [BPF_ALU | BPF_DIV | BPF_K] = div_imm, [BPF_ALU | BPF_NEG] = neg_reg, [BPF_ALU | BPF_LSH | BPF_K] = shl_imm, + [BPF_ALU | BPF_ARSH | BPF_X] = ashr_reg, + [BPF_ALU | BPF_ARSH | BPF_K] = ashr_imm, [BPF_ALU | BPF_END | BPF_X] = end_reg32, [BPF_LD | BPF_IMM | BPF_DW] = imm_ld8, [BPF_LD | BPF_ABS | BPF_B] = data_ld1, diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c index 6243af0ab025..dccae0319204 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/main.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c @@ -465,7 +465,7 @@ static int nfp_bpf_init(struct nfp_app *app) app->ctrl_mtu = nfp_bpf_ctrl_cmsg_mtu(bpf); } - bpf->bpf_dev = bpf_offload_dev_create(); + bpf->bpf_dev = bpf_offload_dev_create(&nfp_bpf_dev_ops); err = PTR_ERR_OR_ZERO(bpf->bpf_dev); if (err) goto err_free_neutral_maps; diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h index 7f591d71ab28..941277936475 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/main.h +++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h @@ -509,7 +509,11 @@ void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog, unsigned int cnt); int nfp_bpf_jit(struct nfp_prog *prog); bool nfp_bpf_supported_opcode(u8 code); -extern const struct bpf_prog_offload_ops nfp_bpf_analyzer_ops; +int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx, + int prev_insn_idx); +int nfp_bpf_finalize(struct bpf_verifier_env *env); + +extern const struct bpf_prog_offload_ops nfp_bpf_dev_ops; struct netdev_bpf; struct nfp_app; diff --git a/drivers/net/ethernet/netronome/nfp/bpf/offload.c b/drivers/net/ethernet/netronome/nfp/bpf/offload.c index ba8ceedcf6a2..f0283854fade 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/offload.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/offload.c @@ -33,9 +33,6 @@ nfp_map_ptr_record(struct nfp_app_bpf *bpf, struct nfp_prog *nfp_prog, struct nfp_bpf_neutral_map *record; int err; - /* Map record paths are entered via ndo, update side is protected. */ - ASSERT_RTNL(); - /* Reuse path - other offloaded program is already tracking this map. 
*/ record = rhashtable_lookup_fast(&bpf->maps_neutral, &map->id, nfp_bpf_maps_neutral_params); @@ -84,8 +81,6 @@ nfp_map_ptrs_forget(struct nfp_app_bpf *bpf, struct nfp_prog *nfp_prog) bool freed = false; int i; - ASSERT_RTNL(); - for (i = 0; i < nfp_prog->map_records_cnt; i++) { if (--nfp_prog->map_records[i]->count) { nfp_prog->map_records[i] = NULL; @@ -187,11 +182,10 @@ static void nfp_prog_free(struct nfp_prog *nfp_prog) kfree(nfp_prog); } -static int -nfp_bpf_verifier_prep(struct nfp_app *app, struct nfp_net *nn, - struct netdev_bpf *bpf) +static int nfp_bpf_verifier_prep(struct bpf_prog *prog) { - struct bpf_prog *prog = bpf->verifier.prog; + struct nfp_net *nn = netdev_priv(prog->aux->offload->netdev); + struct nfp_app *app = nn->app; struct nfp_prog *nfp_prog; int ret; @@ -209,7 +203,6 @@ nfp_bpf_verifier_prep(struct nfp_app *app, struct nfp_net *nn, goto err_free; nfp_prog->verifier_meta = nfp_prog_first_meta(nfp_prog); - bpf->verifier.ops = &nfp_bpf_analyzer_ops; return 0; @@ -219,8 +212,9 @@ err_free: return ret; } -static int nfp_bpf_translate(struct nfp_net *nn, struct bpf_prog *prog) +static int nfp_bpf_translate(struct bpf_prog *prog) { + struct nfp_net *nn = netdev_priv(prog->aux->offload->netdev); struct nfp_prog *nfp_prog = prog->aux->offload->dev_priv; unsigned int max_instr; int err; @@ -242,15 +236,13 @@ static int nfp_bpf_translate(struct nfp_net *nn, struct bpf_prog *prog) return nfp_map_ptrs_record(nfp_prog->bpf, nfp_prog, prog); } -static int nfp_bpf_destroy(struct nfp_net *nn, struct bpf_prog *prog) +static void nfp_bpf_destroy(struct bpf_prog *prog) { struct nfp_prog *nfp_prog = prog->aux->offload->dev_priv; kvfree(nfp_prog->prog); nfp_map_ptrs_forget(nfp_prog->bpf, nfp_prog); nfp_prog_free(nfp_prog); - - return 0; } /* Atomic engine requires values to be in big endian, we need to byte swap @@ -422,12 +414,6 @@ nfp_bpf_map_free(struct nfp_app_bpf *bpf, struct bpf_offloaded_map *offmap) int nfp_ndo_bpf(struct nfp_app *app, struct nfp_net *nn, struct netdev_bpf *bpf) { switch (bpf->command) { - case BPF_OFFLOAD_VERIFIER_PREP: - return nfp_bpf_verifier_prep(app, nn, bpf); - case BPF_OFFLOAD_TRANSLATE: - return nfp_bpf_translate(nn, bpf->offload.prog); - case BPF_OFFLOAD_DESTROY: - return nfp_bpf_destroy(nn, bpf->offload.prog); case BPF_OFFLOAD_MAP_ALLOC: return nfp_bpf_map_alloc(app->priv, bpf->offmap); case BPF_OFFLOAD_MAP_FREE: @@ -489,14 +475,15 @@ nfp_net_bpf_load(struct nfp_net *nn, struct bpf_prog *prog, struct netlink_ext_ack *extack) { struct nfp_prog *nfp_prog = prog->aux->offload->dev_priv; - unsigned int max_mtu, max_stack, max_prog_len; + unsigned int fw_mtu, pkt_off, max_stack, max_prog_len; dma_addr_t dma_addr; void *img; int err; - max_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32; - if (max_mtu < nn->dp.netdev->mtu) { - NL_SET_ERR_MSG_MOD(extack, "BPF offload not supported with MTU larger than HW packet split boundary"); + fw_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32; + pkt_off = min(prog->aux->max_pkt_offset, nn->dp.netdev->mtu); + if (fw_mtu < pkt_off) { + NL_SET_ERR_MSG_MOD(extack, "BPF offload not supported with potential packet access beyond HW packet split boundary"); return -EOPNOTSUPP; } @@ -600,3 +587,11 @@ int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog, return 0; } + +const struct bpf_prog_offload_ops nfp_bpf_dev_ops = { + .insn_hook = nfp_verify_insn, + .finalize = nfp_bpf_finalize, + .prepare = nfp_bpf_verifier_prep, + .translate = nfp_bpf_translate, + .destroy = nfp_bpf_destroy, +}; diff --git 
a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c index 99f977bfd8cc..337bb862ec1d 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c @@ -623,8 +623,8 @@ nfp_bpf_check_alu(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, return 0; } -static int -nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx, int prev_insn_idx) +int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx, + int prev_insn_idx) { struct nfp_prog *nfp_prog = env->prog->aux->offload->dev_priv; struct nfp_insn_meta *meta = nfp_prog->verifier_meta; @@ -745,7 +745,7 @@ continue_subprog: goto continue_subprog; } -static int nfp_bpf_finalize(struct bpf_verifier_env *env) +int nfp_bpf_finalize(struct bpf_verifier_env *env) { struct bpf_subprog_info *info; struct nfp_prog *nfp_prog; @@ -788,8 +788,3 @@ static int nfp_bpf_finalize(struct bpf_verifier_env *env) return 0; } - -const struct bpf_prog_offload_ops nfp_bpf_analyzer_ops = { - .insn_hook = nfp_verify_insn, - .finalize = nfp_bpf_finalize, -}; diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c index 244dc261006e..8d54b36afee8 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/action.c +++ b/drivers/net/ethernet/netronome/nfp/flower/action.c @@ -2,7 +2,6 @@ /* Copyright (C) 2017-2018 Netronome Systems, Inc. */ #include -#include #include #include #include @@ -91,21 +90,6 @@ nfp_fl_pre_lag(struct nfp_app *app, const struct tc_action *action, return act_size; } -static bool nfp_fl_netdev_is_tunnel_type(struct net_device *out_dev, - enum nfp_flower_tun_type tun_type) -{ - if (!out_dev->rtnl_link_ops) - return false; - - if (!strcmp(out_dev->rtnl_link_ops->kind, "vxlan")) - return tun_type == NFP_FL_TUNNEL_VXLAN; - - if (!strcmp(out_dev->rtnl_link_ops->kind, "geneve")) - return tun_type == NFP_FL_TUNNEL_GENEVE; - - return false; -} - static int nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output, const struct tc_action *action, struct nfp_fl_payload *nfp_flow, @@ -151,11 +135,12 @@ nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output, /* Set action output parameters. */ output->flags = cpu_to_be16(tmp_flags); - /* Only offload if egress ports are on the same device as the - * ingress port. - */ - if (!switchdev_port_same_parent_id(in_dev, out_dev)) - return -EOPNOTSUPP; + if (nfp_netdev_is_nfp_repr(in_dev)) { + /* Confirm ingress and egress are on same device. 
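+ * With the indirect block hooks the ingress may now be a non-repr
+ * device, e.g. a tunnel netdev, for which the switchdev parent
+ * check does not apply.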
*/ + if (!switchdev_port_same_parent_id(in_dev, out_dev)) + return -EOPNOTSUPP; + } + if (!nfp_netdev_is_nfp_repr(out_dev)) return -EOPNOTSUPP; @@ -384,10 +369,21 @@ nfp_fl_set_eth(const struct tc_action *action, int idx, u32 off, return 0; } +struct ipv4_ttl_word { + __u8 ttl; + __u8 protocol; + __sum16 check; +}; + static int nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off, - struct nfp_fl_set_ip4_addrs *set_ip_addr) + struct nfp_fl_set_ip4_addrs *set_ip_addr, + struct nfp_fl_set_ip4_ttl_tos *set_ip_ttl_tos) { + struct ipv4_ttl_word *ttl_word_mask; + struct ipv4_ttl_word *ttl_word; + struct iphdr *tos_word_mask; + struct iphdr *tos_word; __be32 exact, mask; /* We are expecting tcf_pedit to return a big endian value */ @@ -402,20 +398,53 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off, set_ip_addr->ipv4_dst_mask |= mask; set_ip_addr->ipv4_dst &= ~mask; set_ip_addr->ipv4_dst |= exact & mask; + set_ip_addr->head.jump_id = NFP_FL_ACTION_OPCODE_SET_IPV4_ADDRS; + set_ip_addr->head.len_lw = sizeof(*set_ip_addr) >> + NFP_FL_LW_SIZ; break; case offsetof(struct iphdr, saddr): set_ip_addr->ipv4_src_mask |= mask; set_ip_addr->ipv4_src &= ~mask; set_ip_addr->ipv4_src |= exact & mask; + set_ip_addr->head.jump_id = NFP_FL_ACTION_OPCODE_SET_IPV4_ADDRS; + set_ip_addr->head.len_lw = sizeof(*set_ip_addr) >> + NFP_FL_LW_SIZ; + break; + case offsetof(struct iphdr, ttl): + ttl_word_mask = (struct ipv4_ttl_word *)&mask; + ttl_word = (struct ipv4_ttl_word *)&exact; + + if (ttl_word_mask->protocol || ttl_word_mask->check) + return -EOPNOTSUPP; + + set_ip_ttl_tos->ipv4_ttl_mask |= ttl_word_mask->ttl; + set_ip_ttl_tos->ipv4_ttl &= ~ttl_word_mask->ttl; + set_ip_ttl_tos->ipv4_ttl |= ttl_word->ttl & ttl_word_mask->ttl; + set_ip_ttl_tos->head.jump_id = + NFP_FL_ACTION_OPCODE_SET_IPV4_TTL_TOS; + set_ip_ttl_tos->head.len_lw = sizeof(*set_ip_ttl_tos) >> + NFP_FL_LW_SIZ; + break; + case round_down(offsetof(struct iphdr, tos), 4): + tos_word_mask = (struct iphdr *)&mask; + tos_word = (struct iphdr *)&exact; + + if (tos_word_mask->version || tos_word_mask->ihl || + tos_word_mask->tot_len) + return -EOPNOTSUPP; + + set_ip_ttl_tos->ipv4_tos_mask |= tos_word_mask->tos; + set_ip_ttl_tos->ipv4_tos &= ~tos_word_mask->tos; + set_ip_ttl_tos->ipv4_tos |= tos_word->tos & tos_word_mask->tos; + set_ip_ttl_tos->head.jump_id = + NFP_FL_ACTION_OPCODE_SET_IPV4_TTL_TOS; + set_ip_ttl_tos->head.len_lw = sizeof(*set_ip_ttl_tos) >> + NFP_FL_LW_SIZ; break; default: return -EOPNOTSUPP; } - set_ip_addr->reserved = cpu_to_be16(0); - set_ip_addr->head.jump_id = NFP_FL_ACTION_OPCODE_SET_IPV4_ADDRS; - set_ip_addr->head.len_lw = sizeof(*set_ip_addr) >> NFP_FL_LW_SIZ; - return 0; } @@ -432,12 +461,57 @@ nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask, ip6->head.len_lw = sizeof(*ip6) >> NFP_FL_LW_SIZ; } +struct ipv6_hop_limit_word { + __be16 payload_len; + u8 nexthdr; + u8 hop_limit; +}; + +static int +nfp_fl_set_ip6_hop_limit_flow_label(u32 off, __be32 exact, __be32 mask, + struct nfp_fl_set_ipv6_tc_hl_fl *ip_hl_fl) +{ + struct ipv6_hop_limit_word *fl_hl_mask; + struct ipv6_hop_limit_word *fl_hl; + + switch (off) { + case offsetof(struct ipv6hdr, payload_len): + fl_hl_mask = (struct ipv6_hop_limit_word *)&mask; + fl_hl = (struct ipv6_hop_limit_word *)&exact; + + if (fl_hl_mask->nexthdr || fl_hl_mask->payload_len) + return -EOPNOTSUPP; + + ip_hl_fl->ipv6_hop_limit_mask |= fl_hl_mask->hop_limit; + ip_hl_fl->ipv6_hop_limit &= ~fl_hl_mask->hop_limit; + ip_hl_fl->ipv6_hop_limit |= fl_hl->hop_limit & 
+ fl_hl_mask->hop_limit; + break; + case round_down(offsetof(struct ipv6hdr, flow_lbl), 4): + if (mask & ~IPV6_FLOW_LABEL_MASK || + exact & ~IPV6_FLOW_LABEL_MASK) + return -EOPNOTSUPP; + + ip_hl_fl->ipv6_label_mask |= mask; + ip_hl_fl->ipv6_label &= ~mask; + ip_hl_fl->ipv6_label |= exact & mask; + break; + } + + ip_hl_fl->head.jump_id = NFP_FL_ACTION_OPCODE_SET_IPV6_TC_HL_FL; + ip_hl_fl->head.len_lw = sizeof(*ip_hl_fl) >> NFP_FL_LW_SIZ; + + return 0; +} + static int nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off, struct nfp_fl_set_ipv6_addr *ip_dst, - struct nfp_fl_set_ipv6_addr *ip_src) + struct nfp_fl_set_ipv6_addr *ip_src, + struct nfp_fl_set_ipv6_tc_hl_fl *ip_hl_fl) { __be32 exact, mask; + int err = 0; u8 word; /* We are expecting tcf_pedit to return a big endian value */ @@ -448,7 +522,8 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off, return -EOPNOTSUPP; if (off < offsetof(struct ipv6hdr, saddr)) { - return -EOPNOTSUPP; + err = nfp_fl_set_ip6_hop_limit_flow_label(off, exact, mask, + ip_hl_fl); } else if (off < offsetof(struct ipv6hdr, daddr)) { word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact); nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word, @@ -462,7 +537,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off, return -EOPNOTSUPP; } - return 0; + return err; } static int @@ -513,6 +588,8 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow, char *nfp_action, int *a_len, u32 *csum_updated) { struct nfp_fl_set_ipv6_addr set_ip6_dst, set_ip6_src; + struct nfp_fl_set_ipv6_tc_hl_fl set_ip6_tc_hl_fl; + struct nfp_fl_set_ip4_ttl_tos set_ip_ttl_tos; struct nfp_fl_set_ip4_addrs set_ip_addr; struct nfp_fl_set_tport set_tport; struct nfp_fl_set_eth set_eth; @@ -522,6 +599,8 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow, u32 offset, cmd; u8 ip_proto = 0; + memset(&set_ip6_tc_hl_fl, 0, sizeof(set_ip6_tc_hl_fl)); + memset(&set_ip_ttl_tos, 0, sizeof(set_ip_ttl_tos)); memset(&set_ip6_dst, 0, sizeof(set_ip6_dst)); memset(&set_ip6_src, 0, sizeof(set_ip6_src)); memset(&set_ip_addr, 0, sizeof(set_ip_addr)); @@ -542,11 +621,12 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow, err = nfp_fl_set_eth(action, idx, offset, &set_eth); break; case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4: - err = nfp_fl_set_ip4(action, idx, offset, &set_ip_addr); + err = nfp_fl_set_ip4(action, idx, offset, &set_ip_addr, + &set_ip_ttl_tos); break; case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6: err = nfp_fl_set_ip6(action, idx, offset, &set_ip6_dst, - &set_ip6_src); + &set_ip6_src, &set_ip6_tc_hl_fl); break; case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP: err = nfp_fl_set_tport(action, idx, offset, &set_tport, @@ -577,6 +657,16 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow, memcpy(nfp_action, &set_eth, act_size); *a_len += act_size; } + if (set_ip_ttl_tos.head.len_lw) { + nfp_action += act_size; + act_size = sizeof(set_ip_ttl_tos); + memcpy(nfp_action, &set_ip_ttl_tos, act_size); + *a_len += act_size; + + /* Hardware will automatically fix IPv4 and TCP/UDP checksum. 
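+ * Note this in csum_updated so a following explicit csum action
+ * can be accepted as already covered by the hardware.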
*/ + *csum_updated |= TCA_CSUM_UPDATE_FLAG_IPV4HDR | + nfp_fl_csum_l4_to_flag(ip_proto); + } if (set_ip_addr.head.len_lw) { nfp_action += act_size; act_size = sizeof(set_ip_addr); @@ -587,6 +677,15 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow, *csum_updated |= TCA_CSUM_UPDATE_FLAG_IPV4HDR | nfp_fl_csum_l4_to_flag(ip_proto); } + if (set_ip6_tc_hl_fl.head.len_lw) { + nfp_action += act_size; + act_size = sizeof(set_ip6_tc_hl_fl); + memcpy(nfp_action, &set_ip6_tc_hl_fl, act_size); + *a_len += act_size; + + /* Hardware will automatically fix TCP/UDP checksum. */ + *csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto); + } if (set_ip6_dst.head.len_lw && set_ip6_src.head.len_lw) { /* TC compiles set src and dst IPv6 address as a single action, * the hardware requires this to be 2 separate actions. @@ -728,9 +827,8 @@ nfp_flower_loop_action(struct nfp_app *app, const struct tc_action *a, *a_len += sizeof(struct nfp_fl_push_vlan); } else if (is_tcf_tunnel_set(a)) { struct ip_tunnel_info *ip_tun = tcf_tunnel_info(a); - struct nfp_repr *repr = netdev_priv(netdev); - *tun_type = nfp_fl_get_tun_from_act_l4_port(repr->app, a); + *tun_type = nfp_fl_get_tun_from_act_l4_port(app, a); if (*tun_type == NFP_FL_TUNNEL_NONE) return -EOPNOTSUPP; diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h index 29d673aa5277..15f41cfef9f1 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h +++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h @@ -8,6 +8,7 @@ #include #include #include +#include #include "../nfp_app.h" #include "../nfpcore/nfp_cpp.h" @@ -65,8 +66,10 @@ #define NFP_FL_ACTION_OPCODE_SET_IPV4_TUNNEL 6 #define NFP_FL_ACTION_OPCODE_SET_ETHERNET 7 #define NFP_FL_ACTION_OPCODE_SET_IPV4_ADDRS 9 +#define NFP_FL_ACTION_OPCODE_SET_IPV4_TTL_TOS 10 #define NFP_FL_ACTION_OPCODE_SET_IPV6_SRC 11 #define NFP_FL_ACTION_OPCODE_SET_IPV6_DST 12 +#define NFP_FL_ACTION_OPCODE_SET_IPV6_TC_HL_FL 13 #define NFP_FL_ACTION_OPCODE_SET_UDP 14 #define NFP_FL_ACTION_OPCODE_SET_TCP 15 #define NFP_FL_ACTION_OPCODE_PRE_LAG 16 @@ -82,6 +85,8 @@ #define NFP_FL_PUSH_VLAN_CFI BIT(12) #define NFP_FL_PUSH_VLAN_VID GENMASK(11, 0) +#define IPV6_FLOW_LABEL_MASK cpu_to_be32(0x000fffff) + /* LAG ports */ #define NFP_FL_LAG_OUT 0xC0DE0000 @@ -125,6 +130,26 @@ struct nfp_fl_set_ip4_addrs { __be32 ipv4_dst; }; +struct nfp_fl_set_ip4_ttl_tos { + struct nfp_fl_act_head head; + u8 ipv4_ttl_mask; + u8 ipv4_tos_mask; + u8 ipv4_ttl; + u8 ipv4_tos; + __be16 reserved; +}; + +struct nfp_fl_set_ipv6_tc_hl_fl { + struct nfp_fl_act_head head; + u8 ipv6_tc_mask; + u8 ipv6_hop_limit_mask; + __be16 reserved; + u8 ipv6_tc; + u8 ipv6_hop_limit; + __be32 ipv6_label_mask; + __be32 ipv6_label; +}; + struct nfp_fl_set_ipv6_addr { struct nfp_fl_act_head head; __be16 reserved; @@ -475,6 +500,32 @@ static inline int nfp_flower_cmsg_get_data_len(struct sk_buff *skb) return skb->len - NFP_FLOWER_CMSG_HLEN; } +static inline bool +nfp_fl_netdev_is_tunnel_type(struct net_device *netdev, + enum nfp_flower_tun_type tun_type) +{ + if (netif_is_vxlan(netdev)) + return tun_type == NFP_FL_TUNNEL_VXLAN; + if (netif_is_geneve(netdev)) + return tun_type == NFP_FL_TUNNEL_GENEVE; + + return false; +} + +static inline bool nfp_fl_is_netdev_to_offload(struct net_device *netdev) +{ + if (!netdev->rtnl_link_ops) + return false; + if (!strcmp(netdev->rtnl_link_ops->kind, "openvswitch")) + return true; + if (netif_is_vxlan(netdev)) + return true; + if (netif_is_geneve(netdev)) + return true; + + 
return false; +} + struct sk_buff * nfp_flower_cmsg_mac_repr_start(struct nfp_app *app, unsigned int num_ports); void diff --git a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c index 81dcf5b318ba..5db838f45694 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c +++ b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c @@ -472,17 +472,25 @@ nfp_fl_lag_schedule_group_remove(struct nfp_fl_lag *lag, schedule_delayed_work(&lag->work, NFP_FL_LAG_DELAY); } -static int +static void nfp_fl_lag_schedule_group_delete(struct nfp_fl_lag *lag, struct net_device *master) { struct nfp_fl_lag_group *group; + struct nfp_flower_priv *priv; + + priv = container_of(lag, struct nfp_flower_priv, nfp_lag); + + if (!netif_is_bond_master(master)) + return; mutex_lock(&lag->lock); group = nfp_fl_lag_find_group_for_master_with_lag(lag, master); if (!group) { mutex_unlock(&lag->lock); - return -ENOENT; + nfp_warn(priv->app->cpp, "untracked bond got unregistered %s\n", + netdev_name(master)); + return; } group->to_remove = true; @@ -490,7 +498,6 @@ nfp_fl_lag_schedule_group_delete(struct nfp_fl_lag *lag, mutex_unlock(&lag->lock); schedule_delayed_work(&lag->work, NFP_FL_LAG_DELAY); - return 0; } static int @@ -575,7 +582,7 @@ nfp_fl_lag_changeupper_event(struct nfp_fl_lag *lag, return 0; } -static int +static void nfp_fl_lag_changels_event(struct nfp_fl_lag *lag, struct net_device *netdev, struct netdev_notifier_changelowerstate_info *info) { @@ -586,18 +593,18 @@ nfp_fl_lag_changels_event(struct nfp_fl_lag *lag, struct net_device *netdev, unsigned long *flags; if (!netif_is_lag_port(netdev) || !nfp_netdev_is_nfp_repr(netdev)) - return 0; + return; lag_lower_info = info->lower_state_info; if (!lag_lower_info) - return 0; + return; priv = container_of(lag, struct nfp_flower_priv, nfp_lag); repr = netdev_priv(netdev); /* Verify that the repr is associated with this app. */ if (repr->app != priv->app) - return 0; + return; repr_priv = repr->app_priv; flags = &repr_priv->lag_port_flags; @@ -617,20 +624,15 @@ nfp_fl_lag_changels_event(struct nfp_fl_lag *lag, struct net_device *netdev, mutex_unlock(&lag->lock); schedule_delayed_work(&lag->work, NFP_FL_LAG_DELAY); - return 0; } -static int -nfp_fl_lag_netdev_event(struct notifier_block *nb, unsigned long event, - void *ptr) +int nfp_flower_lag_netdev_event(struct nfp_flower_priv *priv, + struct net_device *netdev, + unsigned long event, void *ptr) { - struct net_device *netdev; - struct nfp_fl_lag *lag; + struct nfp_fl_lag *lag = &priv->nfp_lag; int err; - netdev = netdev_notifier_info_to_dev(ptr); - lag = container_of(nb, struct nfp_fl_lag, lag_nb); - switch (event) { case NETDEV_CHANGEUPPER: err = nfp_fl_lag_changeupper_event(lag, ptr); @@ -638,17 +640,11 @@ nfp_fl_lag_netdev_event(struct notifier_block *nb, unsigned long event, return NOTIFY_BAD; return NOTIFY_OK; case NETDEV_CHANGELOWERSTATE: - err = nfp_fl_lag_changels_event(lag, netdev, ptr); - if (err) - return NOTIFY_BAD; + nfp_fl_lag_changels_event(lag, netdev, ptr); return NOTIFY_OK; case NETDEV_UNREGISTER: - if (netif_is_bond_master(netdev)) { - err = nfp_fl_lag_schedule_group_delete(lag, netdev); - if (err) - return NOTIFY_BAD; - return NOTIFY_OK; - } + nfp_fl_lag_schedule_group_delete(lag, netdev); + return NOTIFY_OK; } return NOTIFY_DONE; @@ -673,8 +669,6 @@ void nfp_flower_lag_init(struct nfp_fl_lag *lag) /* 0 is a reserved batch version so increment to first valid value. 
*/ nfp_fl_increment_version(lag); - - lag->lag_nb.notifier_call = nfp_fl_lag_netdev_event; } void nfp_flower_lag_cleanup(struct nfp_fl_lag *lag) diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c index 3a54728d2ea6..5059110a1768 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/main.c +++ b/drivers/net/ethernet/netronome/nfp/flower/main.c @@ -146,23 +146,12 @@ nfp_flower_repr_netdev_stop(struct nfp_app *app, struct nfp_repr *repr) return nfp_flower_cmsg_portmod(repr, false, repr->netdev->mtu, false); } -static int -nfp_flower_repr_netdev_init(struct nfp_app *app, struct net_device *netdev) -{ - return tc_setup_cb_egdev_register(netdev, - nfp_flower_setup_tc_egress_cb, - netdev_priv(netdev)); -} - static void nfp_flower_repr_netdev_clean(struct nfp_app *app, struct net_device *netdev) { struct nfp_repr *repr = netdev_priv(netdev); kfree(repr->app_priv); - - tc_setup_cb_egdev_unregister(netdev, nfp_flower_setup_tc_egress_cb, - netdev_priv(netdev)); } static void @@ -568,6 +557,8 @@ static int nfp_flower_init(struct nfp_app *app) goto err_cleanup_metadata; } + INIT_LIST_HEAD(&app_priv->indr_block_cb_priv); + return 0; err_cleanup_metadata: @@ -661,23 +652,34 @@ static int nfp_flower_start(struct nfp_app *app) err = nfp_flower_lag_reset(&app_priv->nfp_lag); if (err) return err; - - err = register_netdevice_notifier(&app_priv->nfp_lag.lag_nb); - if (err) - return err; } return nfp_tunnel_config_start(app); } static void nfp_flower_stop(struct nfp_app *app) +{ + nfp_tunnel_config_stop(app); +} + +static int +nfp_flower_netdev_event(struct nfp_app *app, struct net_device *netdev, + unsigned long event, void *ptr) { struct nfp_flower_priv *app_priv = app->priv; + int ret; - if (app_priv->flower_ext_feats & NFP_FL_FEATS_LAG) - unregister_netdevice_notifier(&app_priv->nfp_lag.lag_nb); + if (app_priv->flower_ext_feats & NFP_FL_FEATS_LAG) { + ret = nfp_flower_lag_netdev_event(app_priv, netdev, event, ptr); + if (ret & NOTIFY_STOP_MASK) + return ret; + } - nfp_tunnel_config_stop(app); + ret = nfp_flower_reg_indir_block_handler(app, netdev, event); + if (ret & NOTIFY_STOP_MASK) + return ret; + + return nfp_tunnel_mac_event_handler(app, netdev, event, ptr); } const struct nfp_app_type app_flower = { @@ -698,7 +700,6 @@ const struct nfp_app_type app_flower = { .vnic_init = nfp_flower_vnic_init, .vnic_clean = nfp_flower_vnic_clean, - .repr_init = nfp_flower_repr_netdev_init, .repr_preclean = nfp_flower_repr_netdev_preclean, .repr_clean = nfp_flower_repr_netdev_clean, @@ -708,6 +709,8 @@ const struct nfp_app_type app_flower = { .start = nfp_flower_start, .stop = nfp_flower_stop, + .netdev_event = nfp_flower_netdev_event, + .ctrl_msg_rx = nfp_flower_cmsg_rx, .sriov_enable = nfp_flower_sriov_enable, diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h index 90045bab95bf..b858bac47621 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/main.h +++ b/drivers/net/ethernet/netronome/nfp/flower/main.h @@ -20,7 +20,6 @@ struct nfp_fl_pre_lag; struct net_device; struct nfp_app; -#define NFP_FL_STATS_CTX_DONT_CARE cpu_to_be32(0xffffffff) #define NFP_FL_STATS_ELEM_RS FIELD_SIZEOF(struct nfp_fl_stats_id, \ init_unalloc) #define NFP_FLOWER_MASK_ENTRY_RS 256 @@ -72,7 +71,6 @@ struct nfp_mtu_conf { /** * struct nfp_fl_lag - Flower APP priv data for link aggregation - * @lag_nb: Notifier to track master/slave events * @work: Work queue for writing configs to the HW * @lock: Lock to protect 
lag_group_list * @group_list: List of all master/slave groups offloaded @@ -85,7 +83,6 @@ struct nfp_mtu_conf { * retransmission */ struct nfp_fl_lag { - struct notifier_block lag_nb; struct delayed_work work; struct mutex lock; struct list_head group_list; @@ -126,13 +123,13 @@ struct nfp_fl_lag { * @nfp_neigh_off_lock: Lock for the neighbour address list * @nfp_mac_off_ids: IDA to manage id assignment for offloaded macs * @nfp_mac_off_count: Number of MACs in address list - * @nfp_tun_mac_nb: Notifier to monitor link state * @nfp_tun_neigh_nb: Notifier to monitor neighbour state * @reify_replies: atomically stores the number of replies received * from firmware for repr reify * @reify_wait_queue: wait queue for repr reify response counting * @mtu_conf: Configuration of repr MTU value * @nfp_lag: Link aggregation data block + * @indr_block_cb_priv: List of priv data passed to indirect block cbs */ struct nfp_flower_priv { struct nfp_app *app; @@ -160,12 +157,12 @@ struct nfp_flower_priv { spinlock_t nfp_neigh_off_lock; struct ida nfp_mac_off_ids; int nfp_mac_off_count; - struct notifier_block nfp_tun_mac_nb; struct notifier_block nfp_tun_neigh_nb; atomic_t reify_replies; wait_queue_head_t reify_wait_queue; struct nfp_mtu_conf mtu_conf; struct nfp_fl_lag nfp_lag; + struct list_head indr_block_cb_priv; }; /** @@ -209,7 +206,6 @@ struct nfp_fl_payload { char *unmasked_data; char *mask_data; char *action_data; - bool ingress_offload; }; extern const struct rhashtable_params nfp_flower_table_params; @@ -226,7 +222,8 @@ void nfp_flower_metadata_cleanup(struct nfp_app *app); int nfp_flower_setup_tc(struct nfp_app *app, struct net_device *netdev, enum tc_setup_type type, void *type_data); -int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow, +int nfp_flower_compile_flow_match(struct nfp_app *app, + struct tc_cls_flower_offload *flow, struct nfp_fl_key_ls *key_ls, struct net_device *netdev, struct nfp_fl_payload *nfp_flow, @@ -244,7 +241,7 @@ int nfp_modify_flow_metadata(struct nfp_app *app, struct nfp_fl_payload * nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie, - struct net_device *netdev, __be32 host_ctx); + struct net_device *netdev); struct nfp_fl_payload * nfp_flower_remove_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie); @@ -252,21 +249,28 @@ void nfp_flower_rx_flow_stats(struct nfp_app *app, struct sk_buff *skb); int nfp_tunnel_config_start(struct nfp_app *app); void nfp_tunnel_config_stop(struct nfp_app *app); +int nfp_tunnel_mac_event_handler(struct nfp_app *app, + struct net_device *netdev, + unsigned long event, void *ptr); void nfp_tunnel_write_macs(struct nfp_app *app); void nfp_tunnel_del_ipv4_off(struct nfp_app *app, __be32 ipv4); void nfp_tunnel_add_ipv4_off(struct nfp_app *app, __be32 ipv4); void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb); void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb); -int nfp_flower_setup_tc_egress_cb(enum tc_setup_type type, void *type_data, - void *cb_priv); void nfp_flower_lag_init(struct nfp_fl_lag *lag); void nfp_flower_lag_cleanup(struct nfp_fl_lag *lag); int nfp_flower_lag_reset(struct nfp_fl_lag *lag); +int nfp_flower_lag_netdev_event(struct nfp_flower_priv *priv, + struct net_device *netdev, + unsigned long event, void *ptr); bool nfp_flower_lag_unprocessed_msg(struct nfp_app *app, struct sk_buff *skb); int nfp_flower_lag_populate_pre_action(struct nfp_app *app, struct net_device *master, struct nfp_fl_pre_lag *pre_act); int 
nfp_flower_lag_get_output_id(struct nfp_app *app, struct net_device *master); +int nfp_flower_reg_indir_block_handler(struct nfp_app *app, + struct net_device *netdev, + unsigned long event); #endif diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c index e54fb6034326..cdf75595f627 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/match.c +++ b/drivers/net/ethernet/netronome/nfp/flower/match.c @@ -52,10 +52,13 @@ nfp_flower_compile_port(struct nfp_flower_in_port *frame, u32 cmsg_port, return 0; } - if (tun_type) + if (tun_type) { frame->in_port = cpu_to_be32(NFP_FL_PORT_TYPE_TUN | tun_type); - else + } else { + if (!cmsg_port) + return -EOPNOTSUPP; frame->in_port = cpu_to_be32(cmsg_port); + } return 0; } @@ -289,17 +292,21 @@ nfp_flower_compile_ipv4_udp_tun(struct nfp_flower_ipv4_udp_tun *frame, } } -int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow, +int nfp_flower_compile_flow_match(struct nfp_app *app, + struct tc_cls_flower_offload *flow, struct nfp_fl_key_ls *key_ls, struct net_device *netdev, struct nfp_fl_payload *nfp_flow, enum nfp_flower_tun_type tun_type) { - struct nfp_repr *netdev_repr; + u32 cmsg_port = 0; int err; u8 *ext; u8 *msk; + if (nfp_netdev_is_nfp_repr(netdev)) + cmsg_port = nfp_repr_get_port_id(netdev); + memset(nfp_flow->unmasked_data, 0, key_ls->key_size); memset(nfp_flow->mask_data, 0, key_ls->key_size); @@ -327,15 +334,13 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow, /* Populate Exact Port data. */ err = nfp_flower_compile_port((struct nfp_flower_in_port *)ext, - nfp_repr_get_port_id(netdev), - false, tun_type); + cmsg_port, false, tun_type); if (err) return err; /* Populate Mask Port Data. */ err = nfp_flower_compile_port((struct nfp_flower_in_port *)msk, - nfp_repr_get_port_id(netdev), - true, tun_type); + cmsg_port, true, tun_type); if (err) return err; @@ -399,16 +404,13 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow, msk += sizeof(struct nfp_flower_ipv4_udp_tun); /* Configure tunnel end point MAC. */ - if (nfp_netdev_is_nfp_repr(netdev)) { - netdev_repr = netdev_priv(netdev); - nfp_tunnel_write_macs(netdev_repr->app); - - /* Store the tunnel destination in the rule data. - * This must be present and be an exact match. - */ - nfp_flow->nfp_tun_ipv4_addr = tun_dst; - nfp_tunnel_add_ipv4_off(netdev_repr->app, tun_dst); - } + nfp_tunnel_write_macs(app); + + /* Store the tunnel destination in the rule data. + * This must be present and be an exact match. 
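+ * The destination address is also pushed to the firmware's tunnel
+ * endpoint table via nfp_tunnel_add_ipv4_off() below.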
+ */ + nfp_flow->nfp_tun_ipv4_addr = tun_dst; + nfp_tunnel_add_ipv4_off(app, tun_dst); if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_GENEVE_OP) { err = nfp_flower_compile_geneve_opt(ext, flow, false); diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c index 48729bf171e0..573a4400a26c 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c +++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c @@ -21,7 +21,6 @@ struct nfp_mask_id_table { struct nfp_fl_flow_table_cmp_arg { struct net_device *netdev; unsigned long cookie; - __be32 host_ctx; }; static int nfp_release_stats_entry(struct nfp_app *app, u32 stats_context_id) @@ -76,14 +75,13 @@ static int nfp_get_stats_entry(struct nfp_app *app, u32 *stats_context_id) /* Must be called with either RTNL or rcu_read_lock */ struct nfp_fl_payload * nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie, - struct net_device *netdev, __be32 host_ctx) + struct net_device *netdev) { struct nfp_fl_flow_table_cmp_arg flower_cmp_arg; struct nfp_flower_priv *priv = app->priv; flower_cmp_arg.netdev = netdev; flower_cmp_arg.cookie = tc_flower_cookie; - flower_cmp_arg.host_ctx = host_ctx; return rhashtable_lookup_fast(&priv->flow_table, &flower_cmp_arg, nfp_flower_table_params); @@ -287,6 +285,7 @@ int nfp_compile_flow_metadata(struct nfp_app *app, nfp_flow->meta.host_ctx_id = cpu_to_be32(stats_cxt); nfp_flow->meta.host_cookie = cpu_to_be64(flow->cookie); + nfp_flow->ingress_dev = netdev; new_mask_id = 0; if (!nfp_check_mask_add(app, nfp_flow->mask_data, @@ -306,8 +305,7 @@ int nfp_compile_flow_metadata(struct nfp_app *app, priv->stats[stats_cxt].bytes = 0; priv->stats[stats_cxt].used = jiffies; - check_entry = nfp_flower_search_fl_table(app, flow->cookie, netdev, - NFP_FL_STATS_CTX_DONT_CARE); + check_entry = nfp_flower_search_fl_table(app, flow->cookie, netdev); if (check_entry) { if (nfp_release_stats_entry(app, stats_cxt)) return -EINVAL; @@ -352,9 +350,7 @@ static int nfp_fl_obj_cmpfn(struct rhashtable_compare_arg *arg, const struct nfp_fl_flow_table_cmp_arg *cmp_arg = arg->key; const struct nfp_fl_payload *flow_entry = obj; - if ((!cmp_arg->netdev || flow_entry->ingress_dev == cmp_arg->netdev) && - (cmp_arg->host_ctx == NFP_FL_STATS_CTX_DONT_CARE || - flow_entry->meta.host_ctx_id == cmp_arg->host_ctx)) + if (flow_entry->ingress_dev == cmp_arg->netdev) return flow_entry->tc_flower_cookie != cmp_arg->cookie; return 1; diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c index 67e576fe7fc0..2cdbf29ecbe7 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/offload.c +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c @@ -56,11 +56,10 @@ BIT(FLOW_DISSECTOR_KEY_ENC_PORTS)) static int -nfp_flower_xmit_flow(struct net_device *netdev, - struct nfp_fl_payload *nfp_flow, u8 mtype) +nfp_flower_xmit_flow(struct nfp_app *app, struct nfp_fl_payload *nfp_flow, + u8 mtype) { u32 meta_len, key_len, mask_len, act_len, tot_len; - struct nfp_repr *priv = netdev_priv(netdev); struct sk_buff *skb; unsigned char *msg; @@ -78,7 +77,7 @@ nfp_flower_xmit_flow(struct net_device *netdev, nfp_flow->meta.mask_len >>= NFP_FL_LW_SIZ; nfp_flow->meta.act_len >>= NFP_FL_LW_SIZ; - skb = nfp_flower_cmsg_alloc(priv->app, tot_len, mtype, GFP_KERNEL); + skb = nfp_flower_cmsg_alloc(app, tot_len, mtype, GFP_KERNEL); if (!skb) return -ENOMEM; @@ -96,7 +95,7 @@ nfp_flower_xmit_flow(struct net_device *netdev, 
nfp_flow->meta.mask_len <<= NFP_FL_LW_SIZ; nfp_flow->meta.act_len <<= NFP_FL_LW_SIZ; - nfp_ctrl_tx(priv->app->ctrl, skb); + nfp_ctrl_tx(app->ctrl, skb); return 0; } @@ -129,9 +128,9 @@ nfp_flower_calc_opt_layer(struct flow_dissector_key_enc_opts *enc_opts, static int nfp_flower_calculate_key_layers(struct nfp_app *app, + struct net_device *netdev, struct nfp_fl_key_ls *ret_key_ls, struct tc_cls_flower_offload *flow, - bool egress, enum nfp_flower_tun_type *tun_type) { struct flow_dissector_key_basic *mask_basic = NULL; @@ -187,8 +186,6 @@ nfp_flower_calculate_key_layers(struct nfp_app *app, skb_flow_dissector_target(flow->dissector, FLOW_DISSECTOR_KEY_ENC_CONTROL, flow->key); - if (!egress) - return -EOPNOTSUPP; if (mask_enc_ctl->addr_type != 0xffff || enc_ctl->addr_type != FLOW_DISSECTOR_KEY_IPV4_ADDRS) @@ -251,9 +248,10 @@ nfp_flower_calculate_key_layers(struct nfp_app *app, default: return -EOPNOTSUPP; } - } else if (egress) { - /* Reject non tunnel matches offloaded to egress repr. */ - return -EOPNOTSUPP; + + /* Ensure the ingress netdev matches the expected tun type. */ + if (!nfp_fl_netdev_is_tunnel_type(netdev, *tun_type)) + return -EOPNOTSUPP; } if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_BASIC)) { @@ -390,7 +388,7 @@ nfp_flower_calculate_key_layers(struct nfp_app *app, } static struct nfp_fl_payload * -nfp_flower_allocate_new(struct nfp_fl_key_ls *key_layer, bool egress) +nfp_flower_allocate_new(struct nfp_fl_key_ls *key_layer) { struct nfp_fl_payload *flow_pay; @@ -414,7 +412,6 @@ nfp_flower_allocate_new(struct nfp_fl_key_ls *key_layer, bool egress) flow_pay->nfp_tun_ipv4_addr = 0; flow_pay->meta.flags = 0; - flow_pay->ingress_offload = !egress; return flow_pay; @@ -432,7 +429,6 @@ err_free_flow: * @app: Pointer to the APP handle * @netdev: netdev structure. * @flow: TC flower classifier offload structure. - * @egress: NFP netdev is the egress. * * Adds a new flow to the repeated hash structure and action payload. * @@ -440,46 +436,35 @@ err_free_flow: */ static int nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev, - struct tc_cls_flower_offload *flow, bool egress) + struct tc_cls_flower_offload *flow) { enum nfp_flower_tun_type tun_type = NFP_FL_TUNNEL_NONE; - struct nfp_port *port = nfp_port_from_netdev(netdev); struct nfp_flower_priv *priv = app->priv; struct nfp_fl_payload *flow_pay; struct nfp_fl_key_ls *key_layer; - struct net_device *ingr_dev; + struct nfp_port *port = NULL; int err; - ingr_dev = egress ? NULL : netdev; - flow_pay = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev, - NFP_FL_STATS_CTX_DONT_CARE); - if (flow_pay) { - /* Ignore as duplicate if it has been added by different cb. */ - if (flow_pay->ingress_offload && egress) - return 0; - else - return -EOPNOTSUPP; - } + if (nfp_netdev_is_nfp_repr(netdev)) + port = nfp_port_from_netdev(netdev); key_layer = kmalloc(sizeof(*key_layer), GFP_KERNEL); if (!key_layer) return -ENOMEM; - err = nfp_flower_calculate_key_layers(app, key_layer, flow, egress, + err = nfp_flower_calculate_key_layers(app, netdev, key_layer, flow, &tun_type); if (err) goto err_free_key_ls; - flow_pay = nfp_flower_allocate_new(key_layer, egress); + flow_pay = nfp_flower_allocate_new(key_layer); if (!flow_pay) { err = -ENOMEM; goto err_free_key_ls; } - flow_pay->ingress_dev = egress ? 
NULL : netdev; - - err = nfp_flower_compile_flow_match(flow, key_layer, netdev, flow_pay, - tun_type); + err = nfp_flower_compile_flow_match(app, flow, key_layer, netdev, + flow_pay, tun_type); if (err) goto err_destroy_flow; @@ -487,8 +472,7 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev, if (err) goto err_destroy_flow; - err = nfp_compile_flow_metadata(app, flow, flow_pay, - flow_pay->ingress_dev); + err = nfp_compile_flow_metadata(app, flow, flow_pay, netdev); if (err) goto err_destroy_flow; @@ -498,12 +482,13 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev, if (err) goto err_release_metadata; - err = nfp_flower_xmit_flow(netdev, flow_pay, + err = nfp_flower_xmit_flow(app, flow_pay, NFP_FLOWER_CMSG_TYPE_FLOW_ADD); if (err) goto err_remove_rhash; - port->tc_offload_cnt++; + if (port) + port->tc_offload_cnt++; /* Deallocate flow payload when flower rule has been destroyed. */ kfree(key_layer); @@ -531,7 +516,6 @@ err_free_key_ls: * @app: Pointer to the APP handle * @netdev: netdev structure. * @flow: TC flower classifier offload structure - * @egress: Netdev is the egress dev. * * Removes a flow from the repeated hash structure and clears the * action payload. @@ -540,19 +524,19 @@ err_free_key_ls: */ static int nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev, - struct tc_cls_flower_offload *flow, bool egress) + struct tc_cls_flower_offload *flow) { - struct nfp_port *port = nfp_port_from_netdev(netdev); struct nfp_flower_priv *priv = app->priv; struct nfp_fl_payload *nfp_flow; - struct net_device *ingr_dev; + struct nfp_port *port = NULL; int err; - ingr_dev = egress ? NULL : netdev; - nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev, - NFP_FL_STATS_CTX_DONT_CARE); + if (nfp_netdev_is_nfp_repr(netdev)) + port = nfp_port_from_netdev(netdev); + + nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, netdev); + if (!nfp_flow) - return egress ? 0 : -ENOENT; + return -ENOENT; err = nfp_modify_flow_metadata(app, nfp_flow); if (err) @@ -561,13 +545,14 @@ nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev, if (nfp_flow->nfp_tun_ipv4_addr) nfp_tunnel_del_ipv4_off(app, nfp_flow->nfp_tun_ipv4_addr); - err = nfp_flower_xmit_flow(netdev, nfp_flow, + err = nfp_flower_xmit_flow(app, nfp_flow, NFP_FLOWER_CMSG_TYPE_FLOW_DEL); if (err) goto err_free_flow; err_free_flow: - port->tc_offload_cnt--; + if (port) + port->tc_offload_cnt--; kfree(nfp_flow->action_data); kfree(nfp_flow->mask_data); kfree(nfp_flow->unmasked_data); @@ -583,7 +568,6 @@ err_free_flow: * @app: Pointer to the APP handle * @netdev: Netdev structure. * @flow: TC flower classifier offload structure - * @egress: Netdev is the egress dev. * * Populates a flow statistics structure which corresponds to a * specific flow. @@ -592,22 +576,16 @@ err_free_flow: */ static int nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev, - struct tc_cls_flower_offload *flow, bool egress) + struct tc_cls_flower_offload *flow) { struct nfp_flower_priv *priv = app->priv; struct nfp_fl_payload *nfp_flow; - struct net_device *ingr_dev; u32 ctx_id; - ingr_dev = egress ? 
NULL : netdev; - nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev, - NFP_FL_STATS_CTX_DONT_CARE); + nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, netdev); if (!nfp_flow) return -EINVAL; - if (nfp_flow->ingress_offload && egress) - return 0; - ctx_id = be32_to_cpu(nfp_flow->meta.host_ctx_id); spin_lock_bh(&priv->stats_lock); @@ -624,35 +602,18 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev, static int nfp_flower_repr_offload(struct nfp_app *app, struct net_device *netdev, - struct tc_cls_flower_offload *flower, bool egress) + struct tc_cls_flower_offload *flower) { if (!eth_proto_is_802_3(flower->common.protocol)) return -EOPNOTSUPP; switch (flower->command) { case TC_CLSFLOWER_REPLACE: - return nfp_flower_add_offload(app, netdev, flower, egress); + return nfp_flower_add_offload(app, netdev, flower); case TC_CLSFLOWER_DESTROY: - return nfp_flower_del_offload(app, netdev, flower, egress); + return nfp_flower_del_offload(app, netdev, flower); case TC_CLSFLOWER_STATS: - return nfp_flower_get_stats(app, netdev, flower, egress); - default: - return -EOPNOTSUPP; - } -} - -int nfp_flower_setup_tc_egress_cb(enum tc_setup_type type, void *type_data, - void *cb_priv) -{ - struct nfp_repr *repr = cb_priv; - - if (!tc_cls_can_offload_and_chain0(repr->netdev, type_data)) - return -EOPNOTSUPP; - - switch (type) { - case TC_SETUP_CLSFLOWER: - return nfp_flower_repr_offload(repr->app, repr->netdev, - type_data, true); + return nfp_flower_get_stats(app, netdev, flower); default: return -EOPNOTSUPP; } @@ -669,7 +630,7 @@ static int nfp_flower_setup_tc_block_cb(enum tc_setup_type type, switch (type) { case TC_SETUP_CLSFLOWER: return nfp_flower_repr_offload(repr->app, repr->netdev, - type_data, false); + type_data); default: return -EOPNOTSUPP; } @@ -708,3 +669,130 @@ int nfp_flower_setup_tc(struct nfp_app *app, struct net_device *netdev, return -EOPNOTSUPP; } } + +struct nfp_flower_indr_block_cb_priv { + struct net_device *netdev; + struct nfp_app *app; + struct list_head list; +}; + +static struct nfp_flower_indr_block_cb_priv * +nfp_flower_indr_block_cb_priv_lookup(struct nfp_app *app, + struct net_device *netdev) +{ + struct nfp_flower_indr_block_cb_priv *cb_priv; + struct nfp_flower_priv *priv = app->priv; + + /* All callback list access should be protected by RTNL. 
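+ * Both the bind/unbind handlers and this lookup run in contexts that
+ * already hold RTNL (the netdev notifier and TC block offload paths),
+ * so a plain list walk with the ASSERT_RTNL() check below should be
+ * enough; no RCU or additional lock is required.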
*/ + ASSERT_RTNL(); + + list_for_each_entry(cb_priv, &priv->indr_block_cb_priv, list) + if (cb_priv->netdev == netdev) + return cb_priv; + + return NULL; +} + +static int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, + void *type_data, void *cb_priv) +{ + struct nfp_flower_indr_block_cb_priv *priv = cb_priv; + struct tc_cls_flower_offload *flower = type_data; + + if (flower->common.chain_index) + return -EOPNOTSUPP; + + switch (type) { + case TC_SETUP_CLSFLOWER: + return nfp_flower_repr_offload(priv->app, priv->netdev, + type_data); + default: + return -EOPNOTSUPP; + } +} + +static int +nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app, + struct tc_block_offload *f) +{ + struct nfp_flower_indr_block_cb_priv *cb_priv; + struct nfp_flower_priv *priv = app->priv; + int err; + + if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) + return -EOPNOTSUPP; + + switch (f->command) { + case TC_BLOCK_BIND: + cb_priv = kmalloc(sizeof(*cb_priv), GFP_KERNEL); + if (!cb_priv) + return -ENOMEM; + + cb_priv->netdev = netdev; + cb_priv->app = app; + list_add(&cb_priv->list, &priv->indr_block_cb_priv); + + err = tcf_block_cb_register(f->block, + nfp_flower_setup_indr_block_cb, + cb_priv, cb_priv, f->extack); + if (err) { + list_del(&cb_priv->list); + kfree(cb_priv); + } + + return err; + case TC_BLOCK_UNBIND: + cb_priv = nfp_flower_indr_block_cb_priv_lookup(app, netdev); + if (!cb_priv) + return -ENOENT; + + tcf_block_cb_unregister(f->block, + nfp_flower_setup_indr_block_cb, + cb_priv); + list_del(&cb_priv->list); + kfree(cb_priv); + + return 0; + default: + return -EOPNOTSUPP; + } + return 0; +} + +static int +nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv, + enum tc_setup_type type, void *type_data) +{ + switch (type) { + case TC_SETUP_BLOCK: + return nfp_flower_setup_indr_tc_block(netdev, cb_priv, + type_data); + default: + return -EOPNOTSUPP; + } +} + +int nfp_flower_reg_indir_block_handler(struct nfp_app *app, + struct net_device *netdev, + unsigned long event) +{ + int err; + + if (!nfp_fl_is_netdev_to_offload(netdev)) + return NOTIFY_OK; + + if (event == NETDEV_REGISTER) { + err = __tc_indr_block_cb_register(netdev, app, + nfp_flower_indr_setup_tc_cb, + app); + if (err) + nfp_flower_cmsg_warn(app, + "Indirect block reg failed - %s\n", + netdev->name); + } else if (event == NETDEV_UNREGISTER) { + __tc_indr_block_cb_unregister(netdev, + nfp_flower_indr_setup_tc_cb, app); + } + + return NOTIFY_OK; +} diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c index 8e5bec04d1f9..2d9f26a725c2 100644 --- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c +++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c @@ -4,7 +4,6 @@ #include #include #include -#include #include #include #include @@ -182,18 +181,6 @@ void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb) } } -static bool nfp_tun_is_netdev_to_offload(struct net_device *netdev) -{ - if (!netdev->rtnl_link_ops) - return false; - if (!strcmp(netdev->rtnl_link_ops->kind, "openvswitch")) - return true; - if (netif_is_vxlan(netdev)) - return true; - - return false; -} - static int nfp_flower_xmit_tun_conf(struct nfp_app *app, u8 mtype, u16 plen, void *pdata, gfp_t flag) @@ -615,7 +602,7 @@ static void nfp_tun_add_to_mac_offload_list(struct net_device *netdev, if (nfp_netdev_is_nfp_repr(netdev)) port = nfp_repr_get_port_id(netdev); - else if (!nfp_tun_is_netdev_to_offload(netdev)) + else if 
(!nfp_fl_is_netdev_to_offload(netdev)) return; entry = kmalloc(sizeof(*entry), GFP_KERNEL); @@ -652,29 +639,16 @@ static void nfp_tun_add_to_mac_offload_list(struct net_device *netdev, mutex_unlock(&priv->nfp_mac_off_lock); } -static int nfp_tun_mac_event_handler(struct notifier_block *nb, - unsigned long event, void *ptr) +int nfp_tunnel_mac_event_handler(struct nfp_app *app, + struct net_device *netdev, + unsigned long event, void *ptr) { - struct nfp_flower_priv *app_priv; - struct net_device *netdev; - struct nfp_app *app; - if (event == NETDEV_DOWN || event == NETDEV_UNREGISTER) { - app_priv = container_of(nb, struct nfp_flower_priv, - nfp_tun_mac_nb); - app = app_priv->app; - netdev = netdev_notifier_info_to_dev(ptr); - /* If non-nfp netdev then free its offload index. */ - if (nfp_tun_is_netdev_to_offload(netdev)) + if (nfp_fl_is_netdev_to_offload(netdev)) nfp_tun_del_mac_idx(app, netdev->ifindex); } else if (event == NETDEV_UP || event == NETDEV_CHANGEADDR || event == NETDEV_REGISTER) { - app_priv = container_of(nb, struct nfp_flower_priv, - nfp_tun_mac_nb); - app = app_priv->app; - netdev = netdev_notifier_info_to_dev(ptr); - nfp_tun_add_to_mac_offload_list(netdev, app); /* Force a list write to keep NFP up to date. */ @@ -686,14 +660,11 @@ static int nfp_tun_mac_event_handler(struct notifier_block *nb, int nfp_tunnel_config_start(struct nfp_app *app) { struct nfp_flower_priv *priv = app->priv; - struct net_device *netdev; - int err; /* Initialise priv data for MAC offloading. */ priv->nfp_mac_off_count = 0; mutex_init(&priv->nfp_mac_off_lock); INIT_LIST_HEAD(&priv->nfp_mac_off_list); - priv->nfp_tun_mac_nb.notifier_call = nfp_tun_mac_event_handler; mutex_init(&priv->nfp_mac_index_lock); INIT_LIST_HEAD(&priv->nfp_mac_index_list); ida_init(&priv->nfp_mac_off_ids); @@ -707,27 +678,7 @@ int nfp_tunnel_config_start(struct nfp_app *app) INIT_LIST_HEAD(&priv->nfp_neigh_off_list); priv->nfp_tun_neigh_nb.notifier_call = nfp_tun_neigh_event_handler; - err = register_netdevice_notifier(&priv->nfp_tun_mac_nb); - if (err) - goto err_free_mac_ida; - - err = register_netevent_notifier(&priv->nfp_tun_neigh_nb); - if (err) - goto err_unreg_mac_nb; - - /* Parse netdevs already registered for MACs that need offloaded. */ - rtnl_lock(); - for_each_netdev(&init_net, netdev) - nfp_tun_add_to_mac_offload_list(netdev, app); - rtnl_unlock(); - - return 0; - -err_unreg_mac_nb: - unregister_netdevice_notifier(&priv->nfp_tun_mac_nb); -err_free_mac_ida: - ida_destroy(&priv->nfp_mac_off_ids); - return err; + return register_netevent_notifier(&priv->nfp_tun_neigh_nb); } void nfp_tunnel_config_stop(struct nfp_app *app) @@ -739,7 +690,6 @@ void nfp_tunnel_config_stop(struct nfp_app *app) struct nfp_ipv4_addr_entry *ip_entry; struct list_head *ptr, *storage; - unregister_netdevice_notifier(&priv->nfp_tun_mac_nb); unregister_netevent_notifier(&priv->nfp_tun_neigh_nb); /* Free any memory that may be occupied by MAC list. 
*/ diff --git a/drivers/net/ethernet/netronome/nfp/nfp_app.c b/drivers/net/ethernet/netronome/nfp/nfp_app.c index 68a0991aac22..3a973282b2bb 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_app.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_app.c @@ -131,11 +131,100 @@ nfp_app_reprs_set(struct nfp_app *app, enum nfp_repr_type type, struct nfp_reprs *old; old = nfp_reprs_get_locked(app, type); + rtnl_lock(); rcu_assign_pointer(app->reprs[type], reprs); + rtnl_unlock(); return old; } +static void +nfp_app_netdev_feat_change(struct nfp_app *app, struct net_device *netdev) +{ + struct nfp_net *nn; + unsigned int type; + + if (!nfp_netdev_is_nfp_net(netdev)) + return; + nn = netdev_priv(netdev); + if (nn->app != app) + return; + + for (type = 0; type < __NFP_REPR_TYPE_MAX; type++) { + struct nfp_reprs *reprs; + unsigned int i; + + reprs = rtnl_dereference(app->reprs[type]); + if (!reprs) + continue; + + for (i = 0; i < reprs->num_reprs; i++) { + struct net_device *repr; + + repr = rtnl_dereference(reprs->reprs[i]); + if (!repr) + continue; + + nfp_repr_transfer_features(repr, netdev); + } + } +} + +static int +nfp_app_netdev_event(struct notifier_block *nb, unsigned long event, void *ptr) +{ + struct net_device *netdev; + struct nfp_app *app; + + netdev = netdev_notifier_info_to_dev(ptr); + app = container_of(nb, struct nfp_app, netdev_nb); + + /* Handle events common code is interested in */ + switch (event) { + case NETDEV_FEAT_CHANGE: + nfp_app_netdev_feat_change(app, netdev); + break; + } + + /* Call offload specific handlers */ + if (app->type->netdev_event) + return app->type->netdev_event(app, netdev, event, ptr); + return NOTIFY_DONE; +} + +int nfp_app_start(struct nfp_app *app, struct nfp_net *ctrl) +{ + int err; + + app->ctrl = ctrl; + + if (app->type->start) { + err = app->type->start(app); + if (err) + return err; + } + + app->netdev_nb.notifier_call = nfp_app_netdev_event; + err = register_netdevice_notifier(&app->netdev_nb); + if (err) + goto err_app_stop; + + return 0; + +err_app_stop: + if (app->type->stop) + app->type->stop(app); + return err; +} + +void nfp_app_stop(struct nfp_app *app) +{ + unregister_netdevice_notifier(&app->netdev_nb); + + if (app->type->stop) + app->type->stop(app); +} + struct nfp_app *nfp_app_alloc(struct nfp_pf *pf, enum nfp_app_id id) { struct nfp_app *app; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_app.h b/drivers/net/ethernet/netronome/nfp/nfp_app.h index 4d6ecf99b1cc..d578d856a009 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_app.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_app.h @@ -69,6 +69,7 @@ extern const struct nfp_app_type app_abm; * @port_get_stats_strings: get strings for extra statistics * @start: start application logic * @stop: stop application logic + * @netdev_event: Netdevice notifier event * @ctrl_msg_rx: control message handler * @ctrl_msg_rx_raw: handler for control messages from data queues * @setup_tc: setup TC ndo @@ -122,6 +123,9 @@ struct nfp_app_type { int (*start)(struct nfp_app *app); void (*stop)(struct nfp_app *app); + int (*netdev_event)(struct nfp_app *app, struct net_device *netdev, + unsigned long event, void *ptr); + void (*ctrl_msg_rx)(struct nfp_app *app, struct sk_buff *skb); void (*ctrl_msg_rx_raw)(struct nfp_app *app, const void *data, unsigned int len); @@ -151,6 +155,7 @@ struct nfp_app_type { * @reprs: array of pointers to representors * @type: pointer to const application ops and info * @ctrl_mtu: MTU to set on the control vNIC (set in .init()) + * @netdev_nb: Netdevice notifier block * 
@priv: app-specific priv data */ struct nfp_app { @@ -163,6 +168,9 @@ struct nfp_app { const struct nfp_app_type *type; unsigned int ctrl_mtu; + + struct notifier_block netdev_nb; + void *priv; }; @@ -264,21 +272,6 @@ nfp_app_repr_change_mtu(struct nfp_app *app, struct net_device *netdev, return app->type->repr_change_mtu(app, netdev, new_mtu); } -static inline int nfp_app_start(struct nfp_app *app, struct nfp_net *ctrl) -{ - app->ctrl = ctrl; - if (!app->type->start) - return 0; - return app->type->start(app); -} - -static inline void nfp_app_stop(struct nfp_app *app) -{ - if (!app->type->stop) - return; - app->type->stop(app); -} - static inline const char *nfp_app_name(struct nfp_app *app) { if (!app) @@ -430,6 +423,8 @@ nfp_app_ctrl_msg_alloc(struct nfp_app *app, unsigned int size, gfp_t priority); struct nfp_app *nfp_app_alloc(struct nfp_pf *pf, enum nfp_app_id id); void nfp_app_free(struct nfp_app *app); +int nfp_app_start(struct nfp_app *app, struct nfp_net *ctrl); +void nfp_app_stop(struct nfp_app *app); /* Callbacks shared between apps */ diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index 6f0c37d09256..be37c2d6151c 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -158,6 +158,7 @@ struct nfp_net_tx_desc { __le16 data_len; /* Length of frame + meta data */ } __packed; __le32 vals[4]; + __le64 vals8[2]; }; }; @@ -543,6 +544,7 @@ struct nfp_net_dp { * @reconfig_timer_active: Timer for reading reconfiguration results is pending * @reconfig_sync_present: Some thread is performing synchronous reconfig * @reconfig_timer: Timer for async reading of reconfig results + * @reconfig_in_progress_update: Update FW is processing now (debug only) * @link_up: Is the link up? * @link_status_lock: Protects @link_* and ensures atomicity with BAR reading * @rx_coalesce_usecs: RX interrupt moderation usecs delay parameter @@ -611,6 +613,7 @@ struct nfp_net { bool reconfig_timer_active; bool reconfig_sync_present; struct timer_list reconfig_timer; + u32 reconfig_in_progress_update; u32 rx_coalesce_usecs; u32 rx_coalesce_max_frames; @@ -851,7 +854,7 @@ void nfp_net_get_fw_version(struct nfp_net_fw_version *fw_ver, void __iomem *ctrl_bar); struct nfp_net * -nfp_net_alloc(struct pci_dev *pdev, bool needs_netdev, +nfp_net_alloc(struct pci_dev *pdev, void __iomem *ctrl_bar, bool needs_netdev, unsigned int max_tx_rings, unsigned int max_rx_rings); void nfp_net_free(struct nfp_net *nn); @@ -868,6 +871,7 @@ unsigned int nfp_net_rss_key_sz(struct nfp_net *nn); void nfp_net_rss_write_itbl(struct nfp_net *nn); void nfp_net_rss_write_key(struct nfp_net *nn); void nfp_net_coalesce_write_cfg(struct nfp_net *nn); +int nfp_net_reconfig_mbox(struct nfp_net *nn, u32 mbox_cmd); unsigned int nfp_net_irqs_alloc(struct pci_dev *pdev, struct msix_entry *irq_entries, diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 6bddfcfdec34..e97636d2e6ee 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -101,6 +101,7 @@ static void nfp_net_reconfig_start(struct nfp_net *nn, u32 update) /* ensure update is written before pinging HW */ nn_pci_flush(nn); nfp_qcp_wr_ptr_add(nn->qcp_cfg, 1); + nn->reconfig_in_progress_update = update; } /* Pass 0 as update to run posted reconfigs. 
*/ @@ -123,10 +124,14 @@ static bool nfp_net_reconfig_check_done(struct nfp_net *nn, bool last_check) if (reg == 0) return true; if (reg & NFP_NET_CFG_UPDATE_ERR) { - nn_err(nn, "Reconfig error: 0x%08x\n", reg); + nn_err(nn, "Reconfig error (status: 0x%08x update: 0x%08x ctrl: 0x%08x)\n", + reg, nn->reconfig_in_progress_update, + nn_readl(nn, NFP_NET_CFG_CTRL)); return true; } else if (last_check) { - nn_err(nn, "Reconfig timeout: 0x%08x\n", reg); + nn_err(nn, "Reconfig timeout (status: 0x%08x update: 0x%08x ctrl: 0x%08x)\n", + reg, nn->reconfig_in_progress_update, + nn_readl(nn, NFP_NET_CFG_CTRL)); return true; } @@ -279,7 +284,7 @@ int nfp_net_reconfig(struct nfp_net *nn, u32 update) * * Return: Negative errno on error, 0 on success */ -static int nfp_net_reconfig_mbox(struct nfp_net *nn, u32 mbox_cmd) +int nfp_net_reconfig_mbox(struct nfp_net *nn, u32 mbox_cmd) { u32 mbox = nn->tlv_caps.mbox_off; int ret; @@ -647,27 +652,29 @@ static void nfp_net_tx_ring_stop(struct netdev_queue *nd_q, * @txbuf: Pointer to driver soft TX descriptor * @txd: Pointer to HW TX descriptor * @skb: Pointer to SKB + * @md_bytes: Prepend length * * Set up Tx descriptor for LSO, do nothing for non-LSO skbs. * Return error on packet header greater than maximum supported LSO header size. */ static void nfp_net_tx_tso(struct nfp_net_r_vector *r_vec, struct nfp_net_tx_buf *txbuf, - struct nfp_net_tx_desc *txd, struct sk_buff *skb) + struct nfp_net_tx_desc *txd, struct sk_buff *skb, + u32 md_bytes) { - u32 hdrlen; + u32 l3_offset, l4_offset, hdrlen; u16 mss; if (!skb_is_gso(skb)) return; if (!skb->encapsulation) { - txd->l3_offset = skb_network_offset(skb); - txd->l4_offset = skb_transport_offset(skb); + l3_offset = skb_network_offset(skb); + l4_offset = skb_transport_offset(skb); hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb); } else { - txd->l3_offset = skb_inner_network_offset(skb); - txd->l4_offset = skb_inner_transport_offset(skb); + l3_offset = skb_inner_network_offset(skb); + l4_offset = skb_inner_transport_offset(skb); hdrlen = skb_inner_transport_header(skb) - skb->data + inner_tcp_hdrlen(skb); } @@ -676,7 +683,9 @@ static void nfp_net_tx_tso(struct nfp_net_r_vector *r_vec, txbuf->real_len += hdrlen * (txbuf->pkt_cnt - 1); mss = skb_shinfo(skb)->gso_size & PCIE_DESC_TX_MSS_MASK; - txd->lso_hdrlen = hdrlen; + txd->l3_offset = l3_offset - md_bytes; + txd->l4_offset = l4_offset - md_bytes; + txd->lso_hdrlen = hdrlen - md_bytes; txd->mss = cpu_to_le16(mss); txd->flags |= PCIE_DESC_TX_LSO; @@ -786,11 +795,11 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) { struct nfp_net *nn = netdev_priv(netdev); const struct skb_frag_struct *frag; - struct nfp_net_tx_desc *txd, txdg; int f, nr_frags, wr_idx, md_bytes; struct nfp_net_tx_ring *tx_ring; struct nfp_net_r_vector *r_vec; struct nfp_net_tx_buf *txbuf; + struct nfp_net_tx_desc *txd; struct netdev_queue *nd_q; struct nfp_net_dp *dp; dma_addr_t dma_addr; @@ -801,13 +810,13 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) qidx = skb_get_queue_mapping(skb); tx_ring = &dp->tx_rings[qidx]; r_vec = tx_ring->r_vec; - nd_q = netdev_get_tx_queue(dp->netdev, qidx); nr_frags = skb_shinfo(skb)->nr_frags; if (unlikely(nfp_net_tx_full(tx_ring, nr_frags + 1))) { nn_dp_warn(dp, "TX ring %d busy. 
wrp=%u rdp=%u\n", qidx, tx_ring->wr_p, tx_ring->rd_p); + nd_q = netdev_get_tx_queue(dp->netdev, qidx); netif_tx_stop_queue(nd_q); nfp_net_tx_xmit_more_flush(tx_ring); u64_stats_update_begin(&r_vec->tx_sync); @@ -851,7 +860,7 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) txd->lso_hdrlen = 0; /* Do not reorder - tso may adjust pkt cnt, vlan may override fields */ - nfp_net_tx_tso(r_vec, txbuf, txd, skb); + nfp_net_tx_tso(r_vec, txbuf, txd, skb, md_bytes); nfp_net_tx_csum(dp, r_vec, txbuf, txd, skb); if (skb_vlan_tag_present(skb) && dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN) { txd->flags |= PCIE_DESC_TX_VLAN; @@ -860,8 +869,10 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) /* Gather DMA */ if (nr_frags > 0) { + __le64 second_half; + /* all descs must match except for in addr, length and eop */ - txdg = *txd; + second_half = txd->vals8[1]; for (f = 0; f < nr_frags; f++) { frag = &skb_shinfo(skb)->frags[f]; @@ -878,11 +889,11 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) tx_ring->txbufs[wr_idx].fidx = f; txd = &tx_ring->txds[wr_idx]; - *txd = txdg; txd->dma_len = cpu_to_le16(fsize); nfp_desc_set_dma_addr(txd, dma_addr); - txd->offset_eop |= - (f == nr_frags - 1) ? PCIE_DESC_TX_EOP : 0; + txd->offset_eop = md_bytes | + ((f == nr_frags - 1) ? PCIE_DESC_TX_EOP : 0); + txd->vals8[1] = second_half; } u64_stats_update_begin(&r_vec->tx_sync); @@ -890,16 +901,16 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) u64_stats_update_end(&r_vec->tx_sync); } - netdev_tx_sent_queue(nd_q, txbuf->real_len); - skb_tx_timestamp(skb); + nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); + tx_ring->wr_p += nr_frags + 1; if (nfp_net_tx_ring_should_stop(tx_ring)) nfp_net_tx_ring_stop(nd_q, tx_ring); tx_ring->wr_ptr_add += nr_frags + 1; - if (!skb->xmit_more || netif_xmit_stopped(nd_q)) + if (__netdev_tx_sent_queue(nd_q, txbuf->real_len, skb->xmit_more)) nfp_net_tx_xmit_more_flush(tx_ring); return NETDEV_TX_OK; @@ -940,14 +951,10 @@ static void nfp_net_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) { struct nfp_net_r_vector *r_vec = tx_ring->r_vec; struct nfp_net_dp *dp = &r_vec->nfp_net->dp; - const struct skb_frag_struct *frag; struct netdev_queue *nd_q; u32 done_pkts = 0, done_bytes = 0; - struct sk_buff *skb; - int todo, nr_frags; u32 qcp_rd_p; - int fidx; - int idx; + int todo; if (tx_ring->wr_p == tx_ring->rd_p) return; @@ -961,26 +968,33 @@ static void nfp_net_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); while (todo--) { + const struct skb_frag_struct *frag; + struct nfp_net_tx_buf *tx_buf; + struct sk_buff *skb; + int fidx, nr_frags; + int idx; + idx = D_IDX(tx_ring, tx_ring->rd_p++); + tx_buf = &tx_ring->txbufs[idx]; - skb = tx_ring->txbufs[idx].skb; + skb = tx_buf->skb; if (!skb) continue; nr_frags = skb_shinfo(skb)->nr_frags; - fidx = tx_ring->txbufs[idx].fidx; + fidx = tx_buf->fidx; if (fidx == -1) { /* unmap head */ - dma_unmap_single(dp->dev, tx_ring->txbufs[idx].dma_addr, + dma_unmap_single(dp->dev, tx_buf->dma_addr, skb_headlen(skb), DMA_TO_DEVICE); - done_pkts += tx_ring->txbufs[idx].pkt_cnt; - done_bytes += tx_ring->txbufs[idx].real_len; + done_pkts += tx_buf->pkt_cnt; + done_bytes += tx_buf->real_len; } else { /* unmap fragment */ frag = &skb_shinfo(skb)->frags[fidx]; - dma_unmap_page(dp->dev, tx_ring->txbufs[idx].dma_addr, + dma_unmap_page(dp->dev, tx_buf->dma_addr, skb_frag_size(frag), DMA_TO_DEVICE); } @@ -988,9 +1002,9 @@ 
static void nfp_net_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) if (fidx == nr_frags - 1) napi_consume_skb(skb, budget); - tx_ring->txbufs[idx].dma_addr = 0; - tx_ring->txbufs[idx].skb = NULL; - tx_ring->txbufs[idx].fidx = -2; + tx_buf->dma_addr = 0; + tx_buf->skb = NULL; + tx_buf->fidx = -2; } tx_ring->qcp_rd_p = qcp_rd_p; @@ -3275,7 +3289,10 @@ nfp_net_features_check(struct sk_buff *skb, struct net_device *dev, hdrlen = skb_inner_transport_header(skb) - skb->data + inner_tcp_hdrlen(skb); - if (unlikely(hdrlen > NFP_NET_LSO_MAX_HDR_SZ)) + /* Assume worst case scenario of having longest possible + * metadata prepend - 8B + */ + if (unlikely(hdrlen > NFP_NET_LSO_MAX_HDR_SZ - 8)) features &= ~NETIF_F_GSO_MASK; } @@ -3560,6 +3577,7 @@ void nfp_net_info(struct nfp_net *nn) /** * nfp_net_alloc() - Allocate netdev and related structure * @pdev: PCI device + * @ctrl_bar: PCI IOMEM with vNIC config memory * @needs_netdev: Whether to allocate a netdev for this vNIC * @max_tx_rings: Maximum number of TX rings supported by device * @max_rx_rings: Maximum number of RX rings supported by device @@ -3570,11 +3588,12 @@ void nfp_net_info(struct nfp_net *nn) * * Return: NFP Net device structure, or ERR_PTR on error. */ -struct nfp_net *nfp_net_alloc(struct pci_dev *pdev, bool needs_netdev, - unsigned int max_tx_rings, - unsigned int max_rx_rings) +struct nfp_net * +nfp_net_alloc(struct pci_dev *pdev, void __iomem *ctrl_bar, bool needs_netdev, + unsigned int max_tx_rings, unsigned int max_rx_rings) { struct nfp_net *nn; + int err; if (needs_netdev) { struct net_device *netdev; @@ -3594,6 +3613,7 @@ struct nfp_net *nfp_net_alloc(struct pci_dev *pdev, bool needs_netdev, } nn->dp.dev = &pdev->dev; + nn->dp.ctrl_bar = ctrl_bar; nn->pdev = pdev; nn->max_tx_rings = max_tx_rings; @@ -3616,7 +3636,19 @@ struct nfp_net *nfp_net_alloc(struct pci_dev *pdev, bool needs_netdev, timer_setup(&nn->reconfig_timer, nfp_net_reconfig_timer, 0); + err = nfp_net_tlv_caps_parse(&nn->pdev->dev, nn->dp.ctrl_bar, + &nn->tlv_caps); + if (err) + goto err_free_nn; + return nn; + +err_free_nn: + if (nn->dp.netdev) + free_netdev(nn->dp.netdev); + else + vfree(nn); + return ERR_PTR(err); } /** @@ -3889,11 +3921,6 @@ int nfp_net_init(struct nfp_net *nn) nn->dp.ctrl |= NFP_NET_CFG_CTRL_IRQMOD; } - err = nfp_net_tlv_caps_parse(&nn->pdev->dev, nn->dp.ctrl_bar, - &nn->tlv_caps); - if (err) - return err; - if (nn->dp.netdev) nfp_net_netdev_init(nn); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c index f2aaef976c7d..6d5213b5bcb0 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c @@ -41,8 +41,8 @@ int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem, data += 4; if (length % NFP_NET_CFG_TLV_LENGTH_INC) { - dev_err(dev, "TLV size not multiple of %u len:%u\n", - NFP_NET_CFG_TLV_LENGTH_INC, length); + dev_err(dev, "TLV size not multiple of %u offset:%u len:%u\n", + NFP_NET_CFG_TLV_LENGTH_INC, offset, length); return -EINVAL; } if (data + length > end) { @@ -61,14 +61,14 @@ int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem, if (!length) return 0; - dev_err(dev, "END TLV should be empty, has len:%d\n", - length); + dev_err(dev, "END TLV should be empty, has offset:%u len:%d\n", + offset, length); return -EINVAL; case NFP_NET_CFG_TLV_TYPE_ME_FREQ: if (length != 4) { dev_err(dev, - "ME FREQ TLV should be 4B, is %dB\n", - length); + "ME FREQ TLV should be 4B, is %dB 
offset:%u\n", + length, offset); return -EINVAL; } @@ -90,6 +90,15 @@ int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem, FIELD_GET(NFP_NET_CFG_TLV_HEADER_TYPE, hdr), offset, length); break; + case NFP_NET_CFG_TLV_TYPE_REPR_CAP: + if (length < 4) { + dev_err(dev, "REPR CAP TLV short %dB < 4B offset:%u\n", + length, offset); + return -EINVAL; + } + + caps->repr_cap = readl(data); + break; default: if (!FIELD_GET(NFP_NET_CFG_TLV_HEADER_REQUIRED, hdr)) break; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h index d7c8518ac952..166d7f71442e 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h @@ -397,6 +397,8 @@ #define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_ADD 1 #define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_KILL 2 +#define NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET 5 + /** * VLAN filtering using general use mailbox * %NFP_NET_CFG_VLAN_FILTER: Base address of VLAN filter mailbox @@ -464,6 +466,10 @@ * Variable, experimental IDs. IDs designated for internal development and * experiments before a stable TLV ID has been allocated to a feature. Should * never be present in production firmware. + * + * %NFP_NET_CFG_TLV_TYPE_REPR_CAP: + * Single word, equivalent of %NFP_NET_CFG_CAP for representors, features which + * can be used on representors. */ #define NFP_NET_CFG_TLV_TYPE_UNKNOWN 0 #define NFP_NET_CFG_TLV_TYPE_RESERVED 1 @@ -472,6 +478,7 @@ #define NFP_NET_CFG_TLV_TYPE_MBOX 4 #define NFP_NET_CFG_TLV_TYPE_EXPERIMENTAL0 5 #define NFP_NET_CFG_TLV_TYPE_EXPERIMENTAL1 6 +#define NFP_NET_CFG_TLV_TYPE_REPR_CAP 7 struct device; @@ -480,11 +487,13 @@ struct device; * @me_freq_mhz: ME clock_freq (MHz) * @mbox_off: vNIC mailbox area offset * @mbox_len: vNIC mailbox area length + * @repr_cap: capabilities for representors */ struct nfp_net_tlv_caps { u32 me_freq_mhz; unsigned int mbox_off; unsigned int mbox_len; + u32 repr_cap; }; int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem, diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c index 69b1c9b62e3d..ab7f2498e1c4 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c @@ -8,7 +8,7 @@ static struct dentry *nfp_dir; -static int nfp_net_debugfs_rx_q_read(struct seq_file *file, void *data) +static int nfp_rx_q_show(struct seq_file *file, void *data) { struct nfp_net_r_vector *r_vec = file->private; struct nfp_net_rx_ring *rx_ring; @@ -65,31 +65,12 @@ out: rtnl_unlock(); return 0; } +DEFINE_SHOW_ATTRIBUTE(nfp_rx_q); -static int nfp_net_debugfs_rx_q_open(struct inode *inode, struct file *f) -{ - return single_open(f, nfp_net_debugfs_rx_q_read, inode->i_private); -} +static int nfp_tx_q_show(struct seq_file *file, void *data); +DEFINE_SHOW_ATTRIBUTE(nfp_tx_q); -static const struct file_operations nfp_rx_q_fops = { - .owner = THIS_MODULE, - .open = nfp_net_debugfs_rx_q_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek -}; - -static int nfp_net_debugfs_tx_q_open(struct inode *inode, struct file *f); - -static const struct file_operations nfp_tx_q_fops = { - .owner = THIS_MODULE, - .open = nfp_net_debugfs_tx_q_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek -}; - -static int nfp_net_debugfs_tx_q_read(struct seq_file *file, void *data) +static int nfp_tx_q_show(struct seq_file *file, void *data) { struct nfp_net_r_vector 
*r_vec = file->private; struct nfp_net_tx_ring *tx_ring; @@ -158,18 +139,11 @@ out: return 0; } -static int nfp_net_debugfs_tx_q_open(struct inode *inode, struct file *f) +static int nfp_xdp_q_show(struct seq_file *file, void *data) { - return single_open(f, nfp_net_debugfs_tx_q_read, inode->i_private); + return nfp_tx_q_show(file, data); } - -static const struct file_operations nfp_xdp_q_fops = { - .owner = THIS_MODULE, - .open = nfp_net_debugfs_tx_q_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek -}; +DEFINE_SHOW_ATTRIBUTE(nfp_xdp_q); void nfp_net_debugfs_vnic_add(struct nfp_net *nn, struct dentry *ddir) { diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c index 1e7d20468a34..08f5fdbd8e41 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c @@ -116,13 +116,13 @@ nfp_net_pf_alloc_vnic(struct nfp_pf *pf, bool needs_netdev, n_rx_rings = readl(ctrl_bar + NFP_NET_CFG_MAX_RXRINGS); /* Allocate and initialise the vNIC */ - nn = nfp_net_alloc(pf->pdev, needs_netdev, n_tx_rings, n_rx_rings); + nn = nfp_net_alloc(pf->pdev, ctrl_bar, needs_netdev, + n_tx_rings, n_rx_rings); if (IS_ERR(nn)) return nn; nn->app = pf->app; nfp_net_get_fw_version(&nn->fw_ver, ctrl_bar); - nn->dp.ctrl_bar = ctrl_bar; nn->tx_bar = qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ; nn->rx_bar = qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ; nn->dp.is_vf = 0; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c index c09b893c30dd..69d7aebda09b 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c @@ -11,6 +11,7 @@ #include "nfpcore/nfp_nsp.h" #include "nfp_app.h" #include "nfp_main.h" +#include "nfp_net.h" #include "nfp_net_ctrl.h" #include "nfp_net_repr.h" #include "nfp_net_sriov.h" @@ -231,6 +232,27 @@ err_port_disable: return err; } +static netdev_features_t +nfp_repr_fix_features(struct net_device *netdev, netdev_features_t features) +{ + struct nfp_repr *repr = netdev_priv(netdev); + netdev_features_t old_features = features; + netdev_features_t lower_features; + struct net_device *lower_dev; + + lower_dev = repr->dst->u.port_info.lower_dev; + + lower_features = lower_dev->features; + if (lower_features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) + lower_features |= NETIF_F_HW_CSUM; + + features = netdev_intersect_features(features, lower_features); + features |= old_features & (NETIF_F_SOFT_FEATURES | NETIF_F_HW_TC); + features |= NETIF_F_LLTX; + + return features; +} + const struct net_device_ops nfp_repr_netdev_ops = { .ndo_init = nfp_app_ndo_init, .ndo_uninit = nfp_app_ndo_uninit, @@ -248,10 +270,25 @@ const struct net_device_ops nfp_repr_netdev_ops = { .ndo_set_vf_spoofchk = nfp_app_set_vf_spoofchk, .ndo_get_vf_config = nfp_app_get_vf_config, .ndo_set_vf_link_state = nfp_app_set_vf_link_state, + .ndo_fix_features = nfp_repr_fix_features, .ndo_set_features = nfp_port_set_features, .ndo_set_mac_address = eth_mac_addr, }; +void +nfp_repr_transfer_features(struct net_device *netdev, struct net_device *lower) +{ + struct nfp_repr *repr = netdev_priv(netdev); + + if (repr->dst->u.port_info.lower_dev != lower) + return; + + netdev->gso_max_size = lower->gso_max_size; + netdev->gso_max_segs = lower->gso_max_segs; + + netdev_update_features(netdev); +} + static void nfp_repr_clean(struct nfp_repr *repr) { unregister_netdev(repr->netdev); @@ -281,6 +318,8 @@ 
int nfp_repr_init(struct nfp_app *app, struct net_device *netdev, struct net_device *pf_netdev) { struct nfp_repr *repr = netdev_priv(netdev); + struct nfp_net *nn = netdev_priv(pf_netdev); + u32 repr_cap = nn->tlv_caps.repr_cap; int err; nfp_repr_set_lockdep_class(netdev); @@ -299,6 +338,55 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev, SWITCHDEV_SET_OPS(netdev, &nfp_port_switchdev_ops); + /* Set features the lower device can support with representors */ + if (repr_cap & NFP_NET_CFG_CTRL_LIVE_ADDR) + netdev->priv_flags |= IFF_LIVE_ADDR_CHANGE; + + netdev->hw_features = NETIF_F_HIGHDMA; + if (repr_cap & NFP_NET_CFG_CTRL_RXCSUM_ANY) + netdev->hw_features |= NETIF_F_RXCSUM; + if (repr_cap & NFP_NET_CFG_CTRL_TXCSUM) + netdev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; + if (repr_cap & NFP_NET_CFG_CTRL_GATHER) + netdev->hw_features |= NETIF_F_SG; + if ((repr_cap & NFP_NET_CFG_CTRL_LSO && nn->fw_ver.major > 2) || + repr_cap & NFP_NET_CFG_CTRL_LSO2) + netdev->hw_features |= NETIF_F_TSO | NETIF_F_TSO6; + if (repr_cap & NFP_NET_CFG_CTRL_RSS_ANY) + netdev->hw_features |= NETIF_F_RXHASH; + if (repr_cap & NFP_NET_CFG_CTRL_VXLAN) { + if (repr_cap & NFP_NET_CFG_CTRL_LSO) + netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL; + } + if (repr_cap & NFP_NET_CFG_CTRL_NVGRE) { + if (repr_cap & NFP_NET_CFG_CTRL_LSO) + netdev->hw_features |= NETIF_F_GSO_GRE; + } + if (repr_cap & (NFP_NET_CFG_CTRL_VXLAN | NFP_NET_CFG_CTRL_NVGRE)) + netdev->hw_enc_features = netdev->hw_features; + + netdev->vlan_features = netdev->hw_features; + + if (repr_cap & NFP_NET_CFG_CTRL_RXVLAN) + netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX; + if (repr_cap & NFP_NET_CFG_CTRL_TXVLAN) { + if (repr_cap & NFP_NET_CFG_CTRL_LSO2) + netdev_warn(netdev, "Device advertises both TSO2 and TXVLAN. Refusing to enable TXVLAN.\n"); + else + netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX; + } + if (repr_cap & NFP_NET_CFG_CTRL_CTAG_FILTER) + netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER; + + netdev->features = netdev->hw_features; + + /* Advertise but disable TSO by default. 
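+ * TSO stays in hw_features, so users can opt back in from userspace,
+ * e.g. (illustrative only, assuming the representor is named eth0):
+ *
+ *	ethtool -K eth0 tso on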
*/ + netdev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6); + netdev->gso_max_segs = NFP_NET_LSO_MAX_SEGS; + + netdev->priv_flags |= IFF_NO_QUEUE; + netdev->features |= NETIF_F_LLTX; + if (nfp_app_has_tc(app)) { netdev->features |= NETIF_F_HW_TC; netdev->hw_features |= NETIF_F_HW_TC; @@ -442,7 +530,9 @@ int nfp_reprs_resync_phys_ports(struct nfp_app *app) continue; nfp_app_repr_preclean(app, netdev); + rtnl_lock(); rcu_assign_pointer(reprs->reprs[i], NULL); + rtnl_unlock(); synchronize_rcu(); nfp_repr_clean(repr); } diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h index c412b94bfb97..e0f13dfe1f39 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h @@ -92,6 +92,8 @@ nfp_repr_get_locked(struct nfp_app *app, struct nfp_reprs *set, unsigned int id); void nfp_repr_inc_rx_stats(struct net_device *netdev, unsigned int len); +void +nfp_repr_transfer_features(struct net_device *netdev, struct net_device *lower); int nfp_repr_init(struct nfp_app *app, struct net_device *netdev, u32 cmsg_port_id, struct nfp_port *port, struct net_device *pf_netdev); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c b/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c index d2c1e9ea5668..1145849ca7ba 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c @@ -172,7 +172,7 @@ static int nfp_netvf_pci_probe(struct pci_dev *pdev, rx_bar_off = NFP_PCIE_QUEUE(startq); /* Allocate and initialise the netdev */ - nn = nfp_net_alloc(pdev, true, max_tx_rings, max_rx_rings); + nn = nfp_net_alloc(pdev, ctrl_bar, true, max_tx_rings, max_rx_rings); if (IS_ERR(nn)) { err = PTR_ERR(nn); goto err_ctrl_unmap; @@ -180,7 +180,6 @@ static int nfp_netvf_pci_probe(struct pci_dev *pdev, vf->nn = nn; nn->fw_ver = fw_ver; - nn->dp.ctrl_bar = ctrl_bar; nn->dp.is_vf = 1; nn->stride_tx = stride; nn->stride_rx = stride; diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c index 25382f8fbb70..89d17399fb5a 100644 --- a/drivers/net/ethernet/nxp/lpc_eth.c +++ b/drivers/net/ethernet/nxp/lpc_eth.c @@ -280,7 +280,7 @@ #define LPC_FCCR_MIRRORCOUNTERCURRENT(n) ((n) & 0xFFFF) /* - * rxfliterctrl, rxfilterwolstatus, and rxfilterwolclear shared + * rxfilterctrl, rxfilterwolstatus, and rxfilterwolclear shared * register definitions */ #define LPC_RXFLTRW_ACCEPTUNICAST (1 << 0) @@ -291,7 +291,7 @@ #define LPC_RXFLTRW_ACCEPTPERFECT (1 << 5) /* - * rxfliterctrl register definitions + * rxfilterctrl register definitions */ #define LPC_RXFLTRWSTS_MAGICPACKETENWOL (1 << 12) #define LPC_RXFLTRWSTS_RXFILTERENWOL (1 << 13) @@ -783,8 +783,6 @@ static int lpc_mii_probe(struct net_device *ndev) phy_set_max_speed(phydev, SPEED_100); - phydev->advertising = phydev->supported; - pldat->link = 0; pldat->speed = 0; pldat->duplex = -1; diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index d9a03aba0e02..24a90163775e 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -296,6 +296,12 @@ enum qed_wol_support { QED_WOL_SUPPORT_PME, }; +enum qed_db_rec_exec { + DB_REC_DRY_RUN, + DB_REC_REAL_DEAL, + DB_REC_ONCE, +}; + struct qed_hw_info { /* PCI personality */ enum qed_pci_personality personality; @@ -425,6 +431,14 @@ struct qed_qm_info { u8 num_pf_rls; }; +struct qed_db_recovery_info { + struct list_head list; + + /* Lock to protect the doorbell recovery 
mechanism list */ + spinlock_t lock; + u32 db_recovery_counter; +}; + struct storm_stats { u32 address; u32 len; @@ -522,6 +536,7 @@ struct qed_simd_fp_handler { enum qed_slowpath_wq_flag { QED_SLOWPATH_MFW_TLV_REQ, + QED_SLOWPATH_PERIODIC_DB_REC, }; struct qed_hwfn { @@ -640,6 +655,9 @@ struct qed_hwfn { /* L2-related */ struct qed_l2_info *p_l2_info; + /* Mechanism for recovering from doorbell drop */ + struct qed_db_recovery_info db_recovery_info; + /* Nvm images number and attributes */ struct qed_nvm_image_info nvm_info; @@ -652,11 +670,12 @@ struct qed_hwfn { struct delayed_work iov_task; unsigned long iov_task_flags; #endif - - struct z_stream_s *stream; + struct z_stream_s *stream; + bool slowpath_wq_active; struct workqueue_struct *slowpath_wq; struct delayed_work slowpath_task; unsigned long slowpath_task_flags; + u32 periodic_db_rec_count; }; struct pci_params { @@ -897,6 +916,12 @@ u16 qed_get_cm_pq_idx_llt_mtc(struct qed_hwfn *p_hwfn, u8 tc); #define QED_LEADING_HWFN(dev) (&dev->hwfns[0]) +/* doorbell recovery mechanism */ +void qed_db_recovery_dp(struct qed_hwfn *p_hwfn); +void qed_db_recovery_execute(struct qed_hwfn *p_hwfn, + enum qed_db_rec_exec db_exec); +bool qed_edpm_enabled(struct qed_hwfn *p_hwfn); + /* Other Linux specific common definitions */ #define DP_NAME(cdev) ((cdev)->name) @@ -931,4 +956,6 @@ int qed_mfw_fill_tlv_data(struct qed_hwfn *hwfn, union qed_mfw_tlv_data *tlv_data); void qed_hw_info_set_offload_tc(struct qed_hw_info *p_info, u8 tc); + +void qed_periodic_db_rec_start(struct qed_hwfn *p_hwfn); #endif /* _QED_H */ diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index 88a8576ca9ce..8f6551421945 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -66,6 +66,318 @@ static DEFINE_SPINLOCK(qm_lock); +/******************** Doorbell Recovery *******************/ +/* The doorbell recovery mechanism consists of a list of entries which represent + * doorbelling entities (l2 queues, roce sq/rq/cqs, the slowpath spq, etc). Each + * entity needs to register with the mechanism and provide the parameters + * describing its doorbell, including a location where last used doorbell data + * can be found. The doorbell execute function will traverse the list and + * doorbell all of the registered entries. + */ +struct qed_db_recovery_entry { + struct list_head list_entry; + void __iomem *db_addr; + void *db_data; + enum qed_db_rec_width db_width; + enum qed_db_rec_space db_space; + u8 hwfn_idx; +}; + +/* Display a single doorbell recovery entry */ +static void qed_db_recovery_dp_entry(struct qed_hwfn *p_hwfn, + struct qed_db_recovery_entry *db_entry, + char *action) +{ + DP_VERBOSE(p_hwfn, + QED_MSG_SPQ, + "(%s: db_entry %p, addr %p, data %p, width %s, %s space, hwfn %d)\n", + action, + db_entry, + db_entry->db_addr, + db_entry->db_data, + db_entry->db_width == DB_REC_WIDTH_32B ? "32b" : "64b", + db_entry->db_space == DB_REC_USER ? "user" : "kernel", + db_entry->hwfn_idx); +} + +/* Doorbell address sanity (address within doorbell bar range) */ +static bool qed_db_rec_sanity(struct qed_dev *cdev, + void __iomem *db_addr, void *db_data) +{ + /* Make sure doorbell address is within the doorbell bar */ + if (db_addr < cdev->doorbells || + (u8 __iomem *)db_addr > + (u8 __iomem *)cdev->doorbells + cdev->db_size) { + WARN(true, + "Illegal doorbell address: %p.
Legal range for doorbell addresses is [%p..%p]\n", + db_addr, + cdev->doorbells, + (u8 __iomem *)cdev->doorbells + cdev->db_size); + return false; + } + + /* Make sure doorbell data pointer is not null */ + if (!db_data) { + WARN(true, "Illegal doorbell data pointer: %p", db_data); + return false; + } + + return true; +} + +/* Find hwfn according to the doorbell address */ +static struct qed_hwfn *qed_db_rec_find_hwfn(struct qed_dev *cdev, + void __iomem *db_addr) +{ + struct qed_hwfn *p_hwfn; + + /* In CMT doorbell bar is split down the middle between engine 0 and engine 1 */ + if (cdev->num_hwfns > 1) + p_hwfn = db_addr < cdev->hwfns[1].doorbells ? + &cdev->hwfns[0] : &cdev->hwfns[1]; + else + p_hwfn = QED_LEADING_HWFN(cdev); + + return p_hwfn; } + +/* Add a new entry to the doorbell recovery mechanism */ +int qed_db_recovery_add(struct qed_dev *cdev, + void __iomem *db_addr, + void *db_data, + enum qed_db_rec_width db_width, + enum qed_db_rec_space db_space) +{ + struct qed_db_recovery_entry *db_entry; + struct qed_hwfn *p_hwfn; + + /* Shortcircuit VFs, for now */ + if (IS_VF(cdev)) { + DP_VERBOSE(cdev, + QED_MSG_IOV, "db recovery - skipping VF doorbell\n"); + return 0; + } + + /* Sanitize doorbell address */ + if (!qed_db_rec_sanity(cdev, db_addr, db_data)) + return -EINVAL; + + /* Obtain hwfn from doorbell address */ + p_hwfn = qed_db_rec_find_hwfn(cdev, db_addr); + + /* Create entry */ + db_entry = kzalloc(sizeof(*db_entry), GFP_KERNEL); + if (!db_entry) { + DP_NOTICE(cdev, "Failed to allocate a db recovery entry\n"); + return -ENOMEM; + } + + /* Populate entry */ + db_entry->db_addr = db_addr; + db_entry->db_data = db_data; + db_entry->db_width = db_width; + db_entry->db_space = db_space; + db_entry->hwfn_idx = p_hwfn->my_id; + + /* Display */ + qed_db_recovery_dp_entry(p_hwfn, db_entry, "Adding"); + + /* Protect the list */ + spin_lock_bh(&p_hwfn->db_recovery_info.lock); + list_add_tail(&db_entry->list_entry, &p_hwfn->db_recovery_info.list); + spin_unlock_bh(&p_hwfn->db_recovery_info.lock); + + return 0; +} + +/* Remove an entry from the doorbell recovery mechanism */ +int qed_db_recovery_del(struct qed_dev *cdev, + void __iomem *db_addr, void *db_data) +{ + struct qed_db_recovery_entry *db_entry = NULL; + struct qed_hwfn *p_hwfn; + int rc = -EINVAL; + + /* Shortcircuit VFs, for now */ + if (IS_VF(cdev)) { + DP_VERBOSE(cdev, + QED_MSG_IOV, "db recovery - skipping VF doorbell\n"); + return 0; + } + + /* Sanitize doorbell address */ + if (!qed_db_rec_sanity(cdev, db_addr, db_data)) + return -EINVAL; + + /* Obtain hwfn from doorbell address */ + p_hwfn = qed_db_rec_find_hwfn(cdev, db_addr); + + /* Protect the list */ + spin_lock_bh(&p_hwfn->db_recovery_info.lock); + list_for_each_entry(db_entry, + &p_hwfn->db_recovery_info.list, list_entry) { + /* search according to db_data addr since db_addr is not unique (roce) */ + if (db_entry->db_data == db_data) { + qed_db_recovery_dp_entry(p_hwfn, db_entry, "Deleting"); + list_del(&db_entry->list_entry); + rc = 0; + break; + } + } + + spin_unlock_bh(&p_hwfn->db_recovery_info.lock); + + if (rc == -EINVAL) + DP_NOTICE(p_hwfn, + "Failed to find element in list. Key (db_data addr) was %p.
db_addr was %p\n", + db_data, db_addr); + else + kfree(db_entry); + + return rc; +} + +/* Initialize the doorbell recovery mechanism */ +static int qed_db_recovery_setup(struct qed_hwfn *p_hwfn) +{ + DP_VERBOSE(p_hwfn, QED_MSG_SPQ, "Setting up db recovery\n"); + + /* Make sure db_size was set in cdev */ + if (!p_hwfn->cdev->db_size) { + DP_ERR(p_hwfn->cdev, "db_size not set\n"); + return -EINVAL; + } + + INIT_LIST_HEAD(&p_hwfn->db_recovery_info.list); + spin_lock_init(&p_hwfn->db_recovery_info.lock); + p_hwfn->db_recovery_info.db_recovery_counter = 0; + + return 0; +} + +/* Destroy the doorbell recovery mechanism */ +static void qed_db_recovery_teardown(struct qed_hwfn *p_hwfn) +{ + struct qed_db_recovery_entry *db_entry = NULL; + + DP_VERBOSE(p_hwfn, QED_MSG_SPQ, "Tearing down db recovery\n"); + if (!list_empty(&p_hwfn->db_recovery_info.list)) { + DP_VERBOSE(p_hwfn, + QED_MSG_SPQ, + "Doorbell Recovery teardown found the doorbell recovery list was not empty (Expected in disorderly driver unload (e.g. recovery) otherwise this probably means some flow forgot to db_recovery_del). Prepare to purge doorbell recovery list...\n"); + while (!list_empty(&p_hwfn->db_recovery_info.list)) { + db_entry = + list_first_entry(&p_hwfn->db_recovery_info.list, + struct qed_db_recovery_entry, + list_entry); + qed_db_recovery_dp_entry(p_hwfn, db_entry, "Purging"); + list_del(&db_entry->list_entry); + kfree(db_entry); + } + } + p_hwfn->db_recovery_info.db_recovery_counter = 0; +} + +/* Print the content of the doorbell recovery mechanism */ +void qed_db_recovery_dp(struct qed_hwfn *p_hwfn) +{ + struct qed_db_recovery_entry *db_entry = NULL; + + DP_NOTICE(p_hwfn, + "Displaying doorbell recovery database. Counter was %d\n", + p_hwfn->db_recovery_info.db_recovery_counter); + + /* Protect the list */ + spin_lock_bh(&p_hwfn->db_recovery_info.lock); + list_for_each_entry(db_entry, + &p_hwfn->db_recovery_info.list, list_entry) { + qed_db_recovery_dp_entry(p_hwfn, db_entry, "Printing"); + } + + spin_unlock_bh(&p_hwfn->db_recovery_info.lock); +} + +/* Ring the doorbell of a single doorbell recovery entry */ +static void qed_db_recovery_ring(struct qed_hwfn *p_hwfn, + struct qed_db_recovery_entry *db_entry, + enum qed_db_rec_exec db_exec) +{ + if (db_exec != DB_REC_ONCE) { + /* Print according to width */ + if (db_entry->db_width == DB_REC_WIDTH_32B) { + DP_VERBOSE(p_hwfn, QED_MSG_SPQ, + "%s doorbell address %p data %x\n", + db_exec == DB_REC_DRY_RUN ? + "would have rung" : "ringing", + db_entry->db_addr, + *(u32 *)db_entry->db_data); + } else { + DP_VERBOSE(p_hwfn, QED_MSG_SPQ, + "%s doorbell address %p data %llx\n", + db_exec == DB_REC_DRY_RUN ? + "would have rung" : "ringing", + db_entry->db_addr, + *(u64 *)(db_entry->db_data)); + } + } + + /* Sanity */ + if (!qed_db_rec_sanity(p_hwfn->cdev, db_entry->db_addr, + db_entry->db_data)) + return; + + /* Flush the write combined buffer. Since there are multiple doorbelling + * entities using the same address, if we don't flush, a transaction + * could be lost. + */ + wmb(); + + /* Ring the doorbell */ + if (db_exec == DB_REC_REAL_DEAL || db_exec == DB_REC_ONCE) { + if (db_entry->db_width == DB_REC_WIDTH_32B) + DIRECT_REG_WR(db_entry->db_addr, + *(u32 *)(db_entry->db_data)); + else + DIRECT_REG_WR64(db_entry->db_addr, + *(u64 *)(db_entry->db_data)); + } + + /* Flush the write combined buffer. Next doorbell may come from a + * different entity to the same address... 
+ */ + wmb(); +} + +/* Traverse the doorbell recovery entry list and ring all the doorbells */ +void qed_db_recovery_execute(struct qed_hwfn *p_hwfn, + enum qed_db_rec_exec db_exec) +{ + struct qed_db_recovery_entry *db_entry = NULL; + + if (db_exec != DB_REC_ONCE) { + DP_NOTICE(p_hwfn, + "Executing doorbell recovery. Counter was %d\n", + p_hwfn->db_recovery_info.db_recovery_counter); + + /* Track amount of times recovery was executed */ + p_hwfn->db_recovery_info.db_recovery_counter++; + } + + /* Protect the list */ + spin_lock_bh(&p_hwfn->db_recovery_info.lock); + list_for_each_entry(db_entry, + &p_hwfn->db_recovery_info.list, list_entry) { + qed_db_recovery_ring(p_hwfn, db_entry, db_exec); + if (db_exec == DB_REC_ONCE) + break; + } + + spin_unlock_bh(&p_hwfn->db_recovery_info.lock); +} + +/******************** Doorbell Recovery end ****************/ + #define QED_MIN_DPIS (4) #define QED_MIN_PWM_REGION (QED_WID_SIZE * QED_MIN_DPIS) @@ -194,6 +506,9 @@ void qed_resc_free(struct qed_dev *cdev) qed_dmae_info_free(p_hwfn); qed_dcbx_info_free(p_hwfn); qed_dbg_user_data_free(p_hwfn); + + /* Destroy doorbell recovery mechanism */ + qed_db_recovery_teardown(p_hwfn); } } @@ -969,6 +1284,11 @@ int qed_resc_alloc(struct qed_dev *cdev) struct qed_hwfn *p_hwfn = &cdev->hwfns[i]; u32 n_eqes, num_cons; + /* Initialize the doorbell recovery mechanism */ + rc = qed_db_recovery_setup(p_hwfn); + if (rc) + goto alloc_err; + /* First allocate the context manager structure */ rc = qed_cxt_mngr_alloc(p_hwfn); if (rc) @@ -1468,6 +1788,14 @@ enum QED_ROCE_EDPM_MODE { QED_ROCE_EDPM_MODE_DISABLE = 2, }; +bool qed_edpm_enabled(struct qed_hwfn *p_hwfn) +{ + if (p_hwfn->dcbx_no_edpm || p_hwfn->db_bar_no_edpm) + return false; + + return true; +} + static int qed_hw_init_pf_doorbell_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) { @@ -1537,13 +1865,13 @@ qed_hw_init_pf_doorbell_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) p_hwfn->wid_count = (u16) n_cpus; DP_INFO(p_hwfn, - "doorbell bar: normal_region_size=%d, pwm_region_size=%d, dpi_size=%d, dpi_count=%d, roce_edpm=%s\n", + "doorbell bar: normal_region_size=%d, pwm_region_size=%d, dpi_size=%d, dpi_count=%d, roce_edpm=%s, page_size=%lu\n", norm_regsize, pwm_regsize, p_hwfn->dpi_size, p_hwfn->dpi_count, - ((p_hwfn->dcbx_no_edpm) || (p_hwfn->db_bar_no_edpm)) ? - "disabled" : "enabled"); + (!qed_edpm_enabled(p_hwfn)) ? + "disabled" : "enabled", PAGE_SIZE); if (rc) { DP_ERR(p_hwfn, diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h index defdda1ffaa2..acccd85170aa 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h +++ b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h @@ -472,6 +472,34 @@ int qed_get_queue_coalesce(struct qed_hwfn *p_hwfn, u16 *coal, void *handle); int qed_set_queue_coalesce(u16 rx_coal, u16 tx_coal, void *p_handle); +/** + * @brief db_recovery_add - add doorbell information to the doorbell + * recovery mechanism. + * + * @param cdev + * @param db_addr - doorbell address + * @param db_data - address of where db_data is stored + * @param db_width - doorbell is 32b or 64b + * @param db_space - doorbell recovery addresses are user or kernel space + */ +int qed_db_recovery_add(struct qed_dev *cdev, + void __iomem *db_addr, + void *db_data, + enum qed_db_rec_width db_width, + enum qed_db_rec_space db_space); + +/** + * @brief db_recovery_del - remove doorbell information from the doorbell + * recovery mechanism. db_data serves as key (db_addr is not unique).
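+ * A caller is expected to pair this with an earlier qed_db_recovery_add()
+ * using the same db_data pointer, as the qed_ll2.c changes below do, e.g.
+ * (sketch only; p_tx is that code's TX queue state):
+ *
+ *	qed_db_recovery_add(cdev, p_tx->doorbell_addr, &p_tx->db_msg,
+ *			    DB_REC_WIDTH_32B, DB_REC_KERNEL);
+ *	...
+ *	qed_db_recovery_del(cdev, p_tx->doorbell_addr, &p_tx->db_msg);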
+ * + * @param cdev + * @param db_addr - doorbell address + * @param db_data - address where db_data is stored. Serves as key for the + * entry to delete. + */ +int qed_db_recovery_del(struct qed_dev *cdev, + void __iomem *db_addr, void *db_data); + const char *qed_hw_get_resc_name(enum qed_resources res_id); #endif diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index b38e12c9de9d..b13cfb449d8f 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -12655,6 +12655,7 @@ struct public_drv_mb { #define DRV_MB_PARAM_DCBX_NOTIFY_MASK 0x000000FF #define DRV_MB_PARAM_DCBX_NOTIFY_SHIFT 3 +#define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MBI 0x3 #define DRV_MB_PARAM_NVM_LEN_OFFSET 24 #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_SHIFT 0 @@ -12814,6 +12815,11 @@ struct public_drv_mb { union drv_union_data union_data; }; +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET_MASK 0x00ffffff +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET_SHIFT 0 +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE_MASK 0xff000000 +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE_SHIFT 24 + enum MFW_DRV_MSG_TYPE { MFW_DRV_MSG_LINK_CHANGE, MFW_DRV_MSG_FLR_FW_ACK_FAILED, diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c index b22f464ea3fa..92340919d852 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_int.c +++ b/drivers/net/ethernet/qlogic/qed/qed_int.c @@ -361,29 +361,147 @@ static int qed_pglub_rbc_attn_cb(struct qed_hwfn *p_hwfn) return 0; } -#define QED_DORQ_ATTENTION_REASON_MASK (0xfffff) -#define QED_DORQ_ATTENTION_OPAQUE_MASK (0xffff) -#define QED_DORQ_ATTENTION_SIZE_MASK (0x7f) -#define QED_DORQ_ATTENTION_SIZE_SHIFT (16) +#define QED_DORQ_ATTENTION_REASON_MASK (0xfffff) +#define QED_DORQ_ATTENTION_OPAQUE_MASK (0xffff) +#define QED_DORQ_ATTENTION_OPAQUE_SHIFT (0x0) +#define QED_DORQ_ATTENTION_SIZE_MASK (0x7f) +#define QED_DORQ_ATTENTION_SIZE_SHIFT (16) + +#define QED_DB_REC_COUNT 1000 +#define QED_DB_REC_INTERVAL 100 + +static int qed_db_rec_flush_queue(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt) +{ + u32 count = QED_DB_REC_COUNT; + u32 usage = 1; + + /* wait for usage to zero or count to run out. This is necessary since + * EDPM doorbell transactions can take multiple 64b cycles, and as such + * can "split" over the pci. Possibly, the doorbell drop can happen with + * half an EDPM in the queue and other half dropped. Another EDPM + * doorbell to the same address (from doorbell recovery mechanism or + * from the doorbelling entity) could have first half dropped and second + * half interpreted as continuation of the first. To prevent such + * malformed doorbells from reaching the device, flush the queue before + * releasing the overflow sticky indication. + */ + while (count-- && usage) { + usage = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_USAGE_CNT); + udelay(QED_DB_REC_INTERVAL); + } + + /* should have been depleted by now */ + if (usage) { + DP_NOTICE(p_hwfn->cdev, + "DB recovery: doorbell usage failed to zero after %d usec. 
usage was %x\n", + QED_DB_REC_INTERVAL * QED_DB_REC_COUNT, usage); + return -EBUSY; + } + + return 0; +} + +int qed_db_rec_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) +{ + u32 overflow; + int rc; + + overflow = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); + DP_NOTICE(p_hwfn, "PF Overflow sticky 0x%x\n", overflow); + if (!overflow) { + qed_db_recovery_execute(p_hwfn, DB_REC_ONCE); + return 0; + } + + if (qed_edpm_enabled(p_hwfn)) { + rc = qed_db_rec_flush_queue(p_hwfn, p_ptt); + if (rc) + return rc; + } + + /* Flush any pending (e)dpm as they may never arrive */ + qed_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1); + + /* Release overflow sticky indication (stop silently dropping everything) */ + qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0); + + /* Repeat all last doorbells (doorbell drop recovery) */ + qed_db_recovery_execute(p_hwfn, DB_REC_REAL_DEAL); + + return 0; +} + static int qed_dorq_attn_cb(struct qed_hwfn *p_hwfn) { - u32 reason; + u32 int_sts, first_drop_reason, details, address, all_drops_reason; + struct qed_ptt *p_ptt = p_hwfn->p_dpc_ptt; + int rc; - reason = qed_rd(p_hwfn, p_hwfn->p_dpc_ptt, DORQ_REG_DB_DROP_REASON) & - QED_DORQ_ATTENTION_REASON_MASK; - if (reason) { - u32 details = qed_rd(p_hwfn, p_hwfn->p_dpc_ptt, - DORQ_REG_DB_DROP_DETAILS); + int_sts = qed_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS); + DP_NOTICE(p_hwfn->cdev, "DORQ attention. int_sts was %x\n", int_sts); - DP_INFO(p_hwfn->cdev, - "DORQ db_drop: address 0x%08x Opaque FID 0x%04x Size [bytes] 0x%08x Reason: 0x%08x\n", - qed_rd(p_hwfn, p_hwfn->p_dpc_ptt, - DORQ_REG_DB_DROP_DETAILS_ADDRESS), - (u16)(details & QED_DORQ_ATTENTION_OPAQUE_MASK), - GET_FIELD(details, QED_DORQ_ATTENTION_SIZE) * 4, - reason); + /* int_sts may be zero since all PFs were interrupted for doorbell + * overflow but another one already handled it. Can abort here. If + * this PF also requires overflow recovery, we will be interrupted again. + * The masked almost full indication may also be set. Ignoring.
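+ * (That is why DORQ_REG_INT_STS_DORQ_FIFO_AFULL is masked out of the
+ * check below.)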
+ */ + if (!(int_sts & ~DORQ_REG_INT_STS_DORQ_FIFO_AFULL)) + return 0; + + /* check if db_drop or overflow happened */ + if (int_sts & (DORQ_REG_INT_STS_DB_DROP | + DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR)) { + /* Obtain data about db drop/overflow */ + first_drop_reason = qed_rd(p_hwfn, p_ptt, + DORQ_REG_DB_DROP_REASON) & + QED_DORQ_ATTENTION_REASON_MASK; + details = qed_rd(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS); + address = qed_rd(p_hwfn, p_ptt, + DORQ_REG_DB_DROP_DETAILS_ADDRESS); + all_drops_reason = qed_rd(p_hwfn, p_ptt, + DORQ_REG_DB_DROP_DETAILS_REASON); + + /* Log info */ + DP_NOTICE(p_hwfn->cdev, + "Doorbell drop occurred\n" + "Address\t\t0x%08x\t(second BAR address)\n" + "FID\t\t0x%04x\t\t(Opaque FID)\n" + "Size\t\t0x%04x\t\t(in bytes)\n" + "1st drop reason\t0x%08x\t(details on first drop since last handling)\n" + "Sticky reasons\t0x%08x\t(all drop reasons since last handling)\n", + address, + GET_FIELD(details, QED_DORQ_ATTENTION_OPAQUE), + GET_FIELD(details, QED_DORQ_ATTENTION_SIZE) * 4, + first_drop_reason, all_drops_reason); + + rc = qed_db_rec_handler(p_hwfn, p_ptt); + qed_periodic_db_rec_start(p_hwfn); + if (rc) + return rc; + + /* Clear the doorbell drop details and prepare for next drop */ + qed_wr(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS_REL, 0); + + /* Mark interrupt as handled (note: even if drop was due to a different + * reason than overflow we mark as handled) + */ + qed_wr(p_hwfn, + p_ptt, + DORQ_REG_INT_STS_WR, + DORQ_REG_INT_STS_DB_DROP | + DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR); + + /* If there are no indications other than drop indications, success */ + if ((int_sts & ~(DORQ_REG_INT_STS_DB_DROP | + DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR | + DORQ_REG_INT_STS_DORQ_FIFO_AFULL)) == 0) + return 0; } + /* Some other indication was present - non recoverable */ + DP_INFO(p_hwfn, "DORQ fatal attention\n"); + return -EINVAL; } diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.h b/drivers/net/ethernet/qlogic/qed/qed_int.h index 54b4ee0acfd7..d81a62ebd524 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_int.h +++ b/drivers/net/ethernet/qlogic/qed/qed_int.h @@ -190,6 +190,16 @@ void qed_int_get_num_sbs(struct qed_hwfn *p_hwfn, */ void qed_int_disable_post_isr_release(struct qed_dev *cdev); +/** + * @brief - Doorbell Recovery handler. + * Run DB_REAL_DEAL doorbell recovery in case of PF overflow + * (and flush DORQ if needed), otherwise run DB_REC_ONCE. 
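
Taken together, the early abort and the final leftover check in qed_dorq_attn_cb() give three outcomes: nothing to do (only the ignored almost-full bit is set), run doorbell recovery (doorbell drop and/or FIFO overflow), or fail as non-recoverable. A compilable sketch of that classification, with the bit positions copied from the DORQ_REG_INT_STS_* hunk and invented verdict names; note the real driver runs recovery first and only then fails on leftover bits, which the sketch folds into a single decision:

    #include <stdint.h>
    #include <stdio.h>

    #define INT_STS_DB_DROP        (0x1u << 1)
    #define INT_STS_FIFO_OVFL_ERR  (0x1u << 2)
    #define INT_STS_FIFO_AFULL     (0x1u << 3)

    enum verdict { NOTHING_TO_DO, RUN_RECOVERY, FATAL };

    static enum verdict classify(uint32_t int_sts)
    {
            /* Only the ignored almost-full bit (or nothing at all). */
            if (!(int_sts & ~INT_STS_FIFO_AFULL))
                    return NOTHING_TO_DO;

            /* Any indication beyond drop/overflow/almost-full is fatal. */
            if (int_sts & ~(INT_STS_DB_DROP | INT_STS_FIFO_OVFL_ERR |
                            INT_STS_FIFO_AFULL))
                    return FATAL;

            return RUN_RECOVERY;
    }

    int main(void)
    {
            printf("%d %d %d\n",
                   classify(INT_STS_FIFO_AFULL),              /* 0 */
                   classify(INT_STS_DB_DROP),                 /* 1 */
                   classify(INT_STS_DB_DROP | (0x1u << 5)));  /* 2 */
            return 0;
    }
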
+ * + * @param p_hwfn + * @param p_ptt + */ +int qed_db_rec_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); + #define QED_CAU_DEF_RX_TIMER_RES 0 #define QED_CAU_DEF_TX_TIMER_RES 0 diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c index c6f4bab67a5f..90afd514ffe1 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c @@ -1085,7 +1085,14 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn, p_ramrod->gsi_offload_flag = p_ll2_conn->input.gsi_enable; - return qed_spq_post(p_hwfn, p_ent, NULL); + rc = qed_spq_post(p_hwfn, p_ent, NULL); + if (rc) + return rc; + + rc = qed_db_recovery_add(p_hwfn->cdev, p_tx->doorbell_addr, + &p_tx->db_msg, DB_REC_WIDTH_32B, + DB_REC_KERNEL); + return rc; } static int qed_sp_ll2_rx_queue_stop(struct qed_hwfn *p_hwfn, @@ -1119,9 +1126,11 @@ static int qed_sp_ll2_rx_queue_stop(struct qed_hwfn *p_hwfn, static int qed_sp_ll2_tx_queue_stop(struct qed_hwfn *p_hwfn, struct qed_ll2_info *p_ll2_conn) { + struct qed_ll2_tx_queue *p_tx = &p_ll2_conn->tx_queue; struct qed_spq_entry *p_ent = NULL; struct qed_sp_init_data init_data; int rc = -EINVAL; + qed_db_recovery_del(p_hwfn->cdev, p_tx->doorbell_addr, &p_tx->db_msg); /* Get SPQ entry */ memset(&init_data, 0, sizeof(init_data)); @@ -1542,6 +1551,13 @@ int qed_ll2_establish_connection(void *cxt, u8 connection_handle) p_tx->doorbell_addr = (u8 __iomem *)p_hwfn->doorbells + qed_db_addr(p_ll2_conn->cid, DQ_DEMS_LEGACY); + /* prepare db data */ + SET_FIELD(p_tx->db_msg.params, CORE_DB_DATA_DEST, DB_DEST_XCM); + SET_FIELD(p_tx->db_msg.params, CORE_DB_DATA_AGG_CMD, DB_AGG_CMD_SET); + SET_FIELD(p_tx->db_msg.params, CORE_DB_DATA_AGG_VAL_SEL, + DQ_XCM_CORE_TX_BD_PROD_CMD); + p_tx->db_msg.agg_flags = DQ_XCM_CORE_DQ_CF_CMD; + rc = qed_ll2_establish_connection_rx(p_hwfn, p_ll2_conn); if (rc) @@ -1780,7 +1796,6 @@ static void qed_ll2_tx_packet_notify(struct qed_hwfn *p_hwfn, bool b_notify = p_ll2_conn->tx_queue.cur_send_packet->notify_fw; struct qed_ll2_tx_queue *p_tx = &p_ll2_conn->tx_queue; struct qed_ll2_tx_packet *p_pkt = NULL; - struct core_db_data db_msg = { 0, 0, 0 }; u16 bd_prod; /* If there are missing BDs, don't do anything now */ @@ -1809,24 +1824,19 @@ static void qed_ll2_tx_packet_notify(struct qed_hwfn *p_hwfn, list_move_tail(&p_pkt->list_entry, &p_tx->active_descq); } - SET_FIELD(db_msg.params, CORE_DB_DATA_DEST, DB_DEST_XCM); - SET_FIELD(db_msg.params, CORE_DB_DATA_AGG_CMD, DB_AGG_CMD_SET); - SET_FIELD(db_msg.params, CORE_DB_DATA_AGG_VAL_SEL, - DQ_XCM_CORE_TX_BD_PROD_CMD); - db_msg.agg_flags = DQ_XCM_CORE_DQ_CF_CMD; - db_msg.spq_prod = cpu_to_le16(bd_prod); + p_tx->db_msg.spq_prod = cpu_to_le16(bd_prod); /* Make sure the BDs data is updated before ringing the doorbell */ wmb(); - DIRECT_REG_WR(p_tx->doorbell_addr, *((u32 *)&db_msg)); + DIRECT_REG_WR(p_tx->doorbell_addr, *((u32 *)&p_tx->db_msg)); DP_VERBOSE(p_hwfn, (NETIF_MSG_TX_QUEUED | QED_MSG_LL2), "LL2 [q 0x%02x cid 0x%08x type 0x%08x] Doorbelled [producer 0x%04x]\n", p_ll2_conn->queue_id, p_ll2_conn->cid, - p_ll2_conn->input.conn_type, db_msg.spq_prod); + p_ll2_conn->input.conn_type, p_tx->db_msg.spq_prod); } int qed_ll2_prepare_tx_packet(void *cxt, diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.h b/drivers/net/ethernet/qlogic/qed/qed_ll2.h index 1a5c1ae01474..5f01fbd3c073 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.h +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.h @@ -103,6 +103,7 @@ struct qed_ll2_tx_queue { struct qed_ll2_tx_packet 
cur_completing_packet; u16 cur_completing_bd_idx; void __iomem *doorbell_addr; + struct core_db_data db_msg; u16 bds_idx; u16 cur_send_frag_num; u16 cur_completing_frag_num; diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c index fff7f04d4525..6adf5bda9811 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_main.c +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c @@ -966,9 +966,47 @@ static void qed_update_pf_params(struct qed_dev *cdev, } } +#define QED_PERIODIC_DB_REC_COUNT 100 +#define QED_PERIODIC_DB_REC_INTERVAL_MS 100 +#define QED_PERIODIC_DB_REC_INTERVAL \ + msecs_to_jiffies(QED_PERIODIC_DB_REC_INTERVAL_MS) +#define QED_PERIODIC_DB_REC_WAIT_COUNT 10 +#define QED_PERIODIC_DB_REC_WAIT_INTERVAL \ + (QED_PERIODIC_DB_REC_INTERVAL_MS / QED_PERIODIC_DB_REC_WAIT_COUNT) + +static int qed_slowpath_delayed_work(struct qed_hwfn *hwfn, + enum qed_slowpath_wq_flag wq_flag, + unsigned long delay) +{ + if (!hwfn->slowpath_wq_active) + return -EINVAL; + + /* Memory barrier for setting atomic bit */ + smp_mb__before_atomic(); + set_bit(wq_flag, &hwfn->slowpath_task_flags); + smp_mb__after_atomic(); + queue_delayed_work(hwfn->slowpath_wq, &hwfn->slowpath_task, delay); + + return 0; +} + +void qed_periodic_db_rec_start(struct qed_hwfn *p_hwfn) +{ + /* Reset periodic Doorbell Recovery counter */ + p_hwfn->periodic_db_rec_count = QED_PERIODIC_DB_REC_COUNT; + + /* Don't schedule periodic Doorbell Recovery if already scheduled */ + if (test_bit(QED_SLOWPATH_PERIODIC_DB_REC, + &p_hwfn->slowpath_task_flags)) + return; + + qed_slowpath_delayed_work(p_hwfn, QED_SLOWPATH_PERIODIC_DB_REC, + QED_PERIODIC_DB_REC_INTERVAL); +} + static void qed_slowpath_wq_stop(struct qed_dev *cdev) { - int i; + int i, sleep_count = QED_PERIODIC_DB_REC_WAIT_COUNT; if (IS_VF(cdev)) return; @@ -977,6 +1015,15 @@ static void qed_slowpath_wq_stop(struct qed_dev *cdev) if (!cdev->hwfns[i].slowpath_wq) continue; + /* Stop queuing new delayed works */ + cdev->hwfns[i].slowpath_wq_active = false; + + /* Wait until the last periodic doorbell recovery is executed */ + while (test_bit(QED_SLOWPATH_PERIODIC_DB_REC, + &cdev->hwfns[i].slowpath_task_flags) && + sleep_count--) + msleep(QED_PERIODIC_DB_REC_WAIT_INTERVAL); + flush_workqueue(cdev->hwfns[i].slowpath_wq); destroy_workqueue(cdev->hwfns[i].slowpath_wq); } @@ -989,7 +1036,10 @@ static void qed_slowpath_task(struct work_struct *work) struct qed_ptt *ptt = qed_ptt_acquire(hwfn); if (!ptt) { - queue_delayed_work(hwfn->slowpath_wq, &hwfn->slowpath_task, 0); + if (hwfn->slowpath_wq_active) + queue_delayed_work(hwfn->slowpath_wq, + &hwfn->slowpath_task, 0); + return; } @@ -997,6 +1047,15 @@ static void qed_slowpath_task(struct work_struct *work) &hwfn->slowpath_task_flags)) qed_mfw_process_tlv_req(hwfn, ptt); + if (test_and_clear_bit(QED_SLOWPATH_PERIODIC_DB_REC, + &hwfn->slowpath_task_flags)) { + qed_db_rec_handler(hwfn, ptt); + if (hwfn->periodic_db_rec_count--) + qed_slowpath_delayed_work(hwfn, + QED_SLOWPATH_PERIODIC_DB_REC, + QED_PERIODIC_DB_REC_INTERVAL); + } + qed_ptt_release(hwfn, ptt); } @@ -1023,6 +1082,7 @@ static int qed_slowpath_wq_start(struct qed_dev *cdev) } INIT_DELAYED_WORK(&hwfn->slowpath_task, qed_slowpath_task); + hwfn->slowpath_wq_active = true; } return 0; @@ -1939,21 +1999,30 @@ exit: * 0B | 0x3 [command index] | * 4B | b'0: check_response? 
| b'1-31 reserved | * 8B | File-type | reserved | + * 12B | Image length in bytes | * \----------------------------------------------------------------------/ * Start a new file of the provided type */ static int qed_nvm_flash_image_file_start(struct qed_dev *cdev, const u8 **data, bool *check_resp) { + u32 file_type, file_size = 0; int rc; *data += 4; *check_resp = !!(**data & BIT(0)); *data += 4; + file_type = **data; DP_VERBOSE(cdev, NETIF_MSG_DRV, - "About to start a new file of type %02x\n", **data); - rc = qed_mcp_nvm_put_file_begin(cdev, **data); + "About to start a new file of type %02x\n", file_type); + if (file_type == DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MBI) { + *data += 4; + file_size = *((u32 *)(*data)); + } + + rc = qed_mcp_nvm_write(cdev, QED_PUT_FILE_BEGIN, file_type, + (u8 *)(&file_size), 4); *data += 4; return rc; @@ -2315,6 +2384,8 @@ const struct qed_common_ops qed_common_ops_pass = { .update_mac = &qed_update_mac, .update_mtu = &qed_update_mtu, .update_wol = &qed_update_wol, + .db_recovery_add = &qed_db_recovery_add, + .db_recovery_del = &qed_db_recovery_del, .read_module_eeprom = &qed_read_module_eeprom, }; diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c index a96364df4320..e7f18e34ff0d 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c @@ -1619,7 +1619,7 @@ static void qed_mcp_update_stag(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) qed_sp_pf_update_stag(p_hwfn); } - DP_VERBOSE(p_hwfn, QED_MSG_SP, "ovlan = %d hw_mode = 0x%x\n", + DP_VERBOSE(p_hwfn, QED_MSG_SP, "ovlan = %d hw_mode = 0x%x\n", p_hwfn->mcp_info->func_info.ovlan, p_hwfn->hw_info.hw_mode); /* Acknowledge the MFW */ @@ -1641,7 +1641,9 @@ void qed_mcp_read_ufp_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) val = (port_cfg & OEM_CFG_CHANNEL_TYPE_MASK) >> OEM_CFG_CHANNEL_TYPE_OFFSET; if (val != OEM_CFG_CHANNEL_TYPE_STAGGED) - DP_NOTICE(p_hwfn, "Incorrect UFP Channel type %d\n", val); + DP_NOTICE(p_hwfn, + "Incorrect UFP Channel type %d port_id 0x%02x\n", + val, MFW_PORT(p_hwfn)); val = (port_cfg & OEM_CFG_SCHED_TYPE_MASK) >> OEM_CFG_SCHED_TYPE_OFFSET; if (val == OEM_CFG_SCHED_TYPE_ETS) { @@ -1650,7 +1652,9 @@ void qed_mcp_read_ufp_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) p_hwfn->ufp_info.mode = QED_UFP_MODE_VNIC_BW; } else { p_hwfn->ufp_info.mode = QED_UFP_MODE_UNKNOWN; - DP_NOTICE(p_hwfn, "Unknown UFP scheduling mode %d\n", val); + DP_NOTICE(p_hwfn, + "Unknown UFP scheduling mode %d port_id 0x%02x\n", + val, MFW_PORT(p_hwfn)); } qed_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info, MCP_PF_ID(p_hwfn)); @@ -1665,13 +1669,15 @@ void qed_mcp_read_ufp_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) p_hwfn->ufp_info.pri_type = QED_UFP_PRI_OS; } else { p_hwfn->ufp_info.pri_type = QED_UFP_PRI_UNKNOWN; - DP_NOTICE(p_hwfn, "Unknown Host priority control %d\n", val); + DP_NOTICE(p_hwfn, + "Unknown Host priority control %d port_id 0x%02x\n", + val, MFW_PORT(p_hwfn)); } DP_NOTICE(p_hwfn, - "UFP shmem config: mode = %d tc = %d pri_type = %d\n", - p_hwfn->ufp_info.mode, - p_hwfn->ufp_info.tc, p_hwfn->ufp_info.pri_type); + "UFP shmem config: mode = %d tc = %d pri_type = %d port_id 0x%02x\n", + p_hwfn->ufp_info.mode, p_hwfn->ufp_info.tc, + p_hwfn->ufp_info.pri_type, MFW_PORT(p_hwfn)); } static int @@ -2739,24 +2745,6 @@ int qed_mcp_nvm_resp(struct qed_dev *cdev, u8 *p_buf) return 0; } -int qed_mcp_nvm_put_file_begin(struct qed_dev *cdev, u32 addr) -{ - struct qed_hwfn *p_hwfn = 
QED_LEADING_HWFN(cdev); - struct qed_ptt *p_ptt; - u32 resp, param; - int rc; - - p_ptt = qed_ptt_acquire(p_hwfn); - if (!p_ptt) - return -EBUSY; - rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_PUT_FILE_BEGIN, addr, - &resp, ¶m); - cdev->mcp_nvm_resp = resp; - qed_ptt_release(p_hwfn, p_ptt); - - return rc; -} - int qed_mcp_nvm_write(struct qed_dev *cdev, u32 cmd, u32 addr, u8 *p_buf, u32 len) { @@ -2770,6 +2758,9 @@ int qed_mcp_nvm_write(struct qed_dev *cdev, return -EBUSY; switch (cmd) { + case QED_PUT_FILE_BEGIN: + nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_BEGIN; + break; case QED_PUT_FILE_DATA: nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_DATA; break; @@ -2782,10 +2773,14 @@ int qed_mcp_nvm_write(struct qed_dev *cdev, goto out; } + buf_size = min_t(u32, (len - buf_idx), MCP_DRV_NVM_BUF_LEN); while (buf_idx < len) { - buf_size = min_t(u32, (len - buf_idx), MCP_DRV_NVM_BUF_LEN); - nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) | - addr) + buf_idx; + if (cmd == QED_PUT_FILE_BEGIN) + nvm_offset = addr; + else + nvm_offset = ((buf_size << + DRV_MB_PARAM_NVM_LEN_OFFSET) | addr) + + buf_idx; rc = qed_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, nvm_offset, &resp, ¶m, buf_size, (u32 *)&p_buf[buf_idx]); @@ -2810,7 +2805,19 @@ int qed_mcp_nvm_write(struct qed_dev *cdev, if (buf_idx % 0x1000 > (buf_idx + buf_size) % 0x1000) usleep_range(1000, 2000); - buf_idx += buf_size; + /* For MBI upgrade, MFW response includes the next buffer offset + * to be delivered to MFW. + */ + if (param && cmd == QED_PUT_FILE_DATA) { + buf_idx = QED_MFW_GET_FIELD(param, + FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET); + buf_size = QED_MFW_GET_FIELD(param, + FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE); + } else { + buf_idx += buf_size; + buf_size = min_t(u32, (len - buf_idx), + MCP_DRV_NVM_BUF_LEN); + } } cdev->mcp_nvm_resp = resp; diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h index 1adfe52b3905..eddf67798d6f 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h @@ -542,16 +542,6 @@ int qed_mcp_nvm_read(struct qed_dev *cdev, u32 addr, u8 *p_buf, u32 len); int qed_mcp_nvm_write(struct qed_dev *cdev, u32 cmd, u32 addr, u8 *p_buf, u32 len); -/** - * @brief Put file begin - * - * @param cdev - * @param addr - nvm offset - * - * @return int - 0 - operation was successful. 
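
The interesting part of the qed_mcp_nvm_write() rework is the resume path: for QED_PUT_FILE_DATA the MFW may pack the next expected offset (bits 0-23) and chunk size (bits 24-31) into the response param, and the loop re-seeds buf_idx/buf_size from it instead of advancing sequentially. A self-contained sketch of that loop, where send_chunk() is an invented stand-in for the mailbox command that fakes one such MFW redirect, and BUF_LEN stands in for MCP_DRV_NVM_BUF_LEN:

    #include <stdint.h>
    #include <stdio.h>

    #define REQ_OFFSET_MASK 0x00ffffffu /* FW_MB_PARAM_..._REQ_OFFSET */
    #define REQ_SIZE_MASK   0xff000000u /* FW_MB_PARAM_..._REQ_SIZE */
    #define REQ_SIZE_SHIFT  24
    #define BUF_LEN         32

    static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

    static uint32_t send_chunk(uint32_t idx) /* fake mailbox command */
    {
            static int redirected;

            /* Pretend the MFW asks us to restart from offset 64 once. */
            if (idx >= 64 && !redirected++)
                    return 64 | (16u << REQ_SIZE_SHIFT);
            return 0;
    }

    int main(void)
    {
            uint32_t len = 200, param;
            uint32_t buf_idx = 0, buf_size = min_u32(len, BUF_LEN);

            while (buf_idx < len) {
                    param = send_chunk(buf_idx);
                    if (param) {    /* MFW dictates the next window */
                            buf_idx = param & REQ_OFFSET_MASK;
                            buf_size = (param & REQ_SIZE_MASK) >>
                                       REQ_SIZE_SHIFT;
                    } else {        /* plain sequential advance */
                            buf_idx += buf_size;
                            buf_size = min_u32(len - buf_idx, BUF_LEN);
                    }
                    printf("next: idx=%u size=%u\n", buf_idx, buf_size);
            }
            return 0;
    }
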
- */ -int qed_mcp_nvm_put_file_begin(struct qed_dev *cdev, u32 addr); - /** * @brief Check latest response * diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h index 2440970882c4..8939ed6e08b7 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h +++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h @@ -1243,6 +1243,56 @@ 0x1701534UL #define TSEM_REG_DBG_FORCE_FRAME \ 0x1701538UL +#define DORQ_REG_PF_USAGE_CNT \ + 0x1009c0UL +#define DORQ_REG_PF_OVFL_STICKY \ + 0x1009d0UL +#define DORQ_REG_DPM_FORCE_ABORT \ + 0x1009d8UL +#define DORQ_REG_INT_STS \ + 0x100180UL +#define DORQ_REG_INT_STS_ADDRESS_ERROR \ + (0x1UL << 0) +#define DORQ_REG_INT_STS_WR \ + 0x100188UL +#define DORQ_REG_DB_DROP_DETAILS_REL \ + 0x100a28UL +#define DORQ_REG_INT_STS_ADDRESS_ERROR_SHIFT \ + 0 +#define DORQ_REG_INT_STS_DB_DROP \ + (0x1UL << 1) +#define DORQ_REG_INT_STS_DB_DROP_SHIFT \ + 1 +#define DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR \ + (0x1UL << 2) +#define DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR_SHIFT \ + 2 +#define DORQ_REG_INT_STS_DORQ_FIFO_AFULL\ + (0x1UL << 3) +#define DORQ_REG_INT_STS_DORQ_FIFO_AFULL_SHIFT \ + 3 +#define DORQ_REG_INT_STS_CFC_BYP_VALIDATION_ERR \ + (0x1UL << 4) +#define DORQ_REG_INT_STS_CFC_BYP_VALIDATION_ERR_SHIFT \ + 4 +#define DORQ_REG_INT_STS_CFC_LD_RESP_ERR \ + (0x1UL << 5) +#define DORQ_REG_INT_STS_CFC_LD_RESP_ERR_SHIFT \ + 5 +#define DORQ_REG_INT_STS_XCM_DONE_CNT_ERR \ + (0x1UL << 6) +#define DORQ_REG_INT_STS_XCM_DONE_CNT_ERR_SHIFT \ + 6 +#define DORQ_REG_INT_STS_CFC_LD_REQ_FIFO_OVFL_ERR \ + (0x1UL << 7) +#define DORQ_REG_INT_STS_CFC_LD_REQ_FIFO_OVFL_ERR_SHIFT \ + 7 +#define DORQ_REG_INT_STS_CFC_LD_REQ_FIFO_UNDER_ERR \ + (0x1UL << 8) +#define DORQ_REG_INT_STS_CFC_LD_REQ_FIFO_UNDER_ERR_SHIFT \ + 8 +#define DORQ_REG_DB_DROP_DETAILS_REASON \ + 0x100a20UL #define MSEM_REG_DBG_SELECT \ 0x1801528UL #define MSEM_REG_DBG_DWORD_ENABLE \ diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp.h b/drivers/net/ethernet/qlogic/qed/qed_sp.h index 3157c0d99441..4179c9013fc6 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_sp.h +++ b/drivers/net/ethernet/qlogic/qed/qed_sp.h @@ -227,7 +227,9 @@ struct qed_spq { u32 comp_count; u32 cid; - qed_spq_async_comp_cb async_comp_cb[MAX_PROTOCOL_TYPE]; + u32 db_addr_offset; + struct core_db_data db_data; + qed_spq_async_comp_cb async_comp_cb[MAX_PROTOCOL_TYPE]; }; /** diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c index 0a9c5bb0fa48..eb88bbc6b193 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_spq.c +++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c @@ -252,9 +252,9 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn, struct qed_spq *p_spq, struct qed_spq_entry *p_ent) { struct qed_chain *p_chain = &p_hwfn->p_spq->chain; + struct core_db_data *p_db_data = &p_spq->db_data; u16 echo = qed_chain_get_prod_idx(p_chain); struct slow_path_element *elem; - struct core_db_data db; p_ent->elem.hdr.echo = cpu_to_le16(echo); elem = qed_chain_produce(p_chain); @@ -266,27 +266,22 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn, *elem = p_ent->elem; /* struct assignment */ /* send a doorbell on the slow hwfn session */ - memset(&db, 0, sizeof(db)); - SET_FIELD(db.params, CORE_DB_DATA_DEST, DB_DEST_XCM); - SET_FIELD(db.params, CORE_DB_DATA_AGG_CMD, DB_AGG_CMD_SET); - SET_FIELD(db.params, CORE_DB_DATA_AGG_VAL_SEL, - DQ_XCM_CORE_SPQ_PROD_CMD); - db.agg_flags = DQ_XCM_CORE_DQ_CF_CMD; - db.spq_prod = cpu_to_le16(qed_chain_get_prod_idx(p_chain)); + p_db_data->spq_prod = 
cpu_to_le16(qed_chain_get_prod_idx(p_chain)); /* make sure the SPQE is updated before the doorbell */ wmb(); - DOORBELL(p_hwfn, qed_db_addr(p_spq->cid, DQ_DEMS_LEGACY), *(u32 *)&db); + DOORBELL(p_hwfn, p_spq->db_addr_offset, *(u32 *)p_db_data); /* make sure doorbell is rang */ wmb(); DP_VERBOSE(p_hwfn, QED_MSG_SPQ, "Doorbelled [0x%08x, CID 0x%08x] with Flags: %02x agg_params: %02x, prod: %04x\n", - qed_db_addr(p_spq->cid, DQ_DEMS_LEGACY), - p_spq->cid, db.params, db.agg_flags, - qed_chain_get_prod_idx(p_chain)); + p_spq->db_addr_offset, + p_spq->cid, + p_db_data->params, + p_db_data->agg_flags, qed_chain_get_prod_idx(p_chain)); return 0; } @@ -490,8 +485,11 @@ void qed_spq_setup(struct qed_hwfn *p_hwfn) { struct qed_spq *p_spq = p_hwfn->p_spq; struct qed_spq_entry *p_virt = NULL; + struct core_db_data *p_db_data; + void __iomem *db_addr; dma_addr_t p_phys = 0; u32 i, capacity; + int rc; INIT_LIST_HEAD(&p_spq->pending); INIT_LIST_HEAD(&p_spq->completion_pending); @@ -528,6 +526,25 @@ void qed_spq_setup(struct qed_hwfn *p_hwfn) /* reset the chain itself */ qed_chain_reset(&p_spq->chain); + + /* Initialize the address/data of the SPQ doorbell */ + p_spq->db_addr_offset = qed_db_addr(p_spq->cid, DQ_DEMS_LEGACY); + p_db_data = &p_spq->db_data; + memset(p_db_data, 0, sizeof(*p_db_data)); + SET_FIELD(p_db_data->params, CORE_DB_DATA_DEST, DB_DEST_XCM); + SET_FIELD(p_db_data->params, CORE_DB_DATA_AGG_CMD, DB_AGG_CMD_MAX); + SET_FIELD(p_db_data->params, CORE_DB_DATA_AGG_VAL_SEL, + DQ_XCM_CORE_SPQ_PROD_CMD); + p_db_data->agg_flags = DQ_XCM_CORE_DQ_CF_CMD; + + /* Register the SPQ doorbell with the doorbell recovery mechanism */ + db_addr = (void __iomem *)((u8 __iomem *)p_hwfn->doorbells + + p_spq->db_addr_offset); + rc = qed_db_recovery_add(p_hwfn->cdev, db_addr, &p_spq->db_data, + DB_REC_WIDTH_32B, DB_REC_KERNEL); + if (rc) + DP_INFO(p_hwfn, + "Failed to register the SPQ doorbell with the doorbell recovery mechanism\n"); } int qed_spq_alloc(struct qed_hwfn *p_hwfn) @@ -575,11 +592,17 @@ spq_allocate_fail: void qed_spq_free(struct qed_hwfn *p_hwfn) { struct qed_spq *p_spq = p_hwfn->p_spq; + void __iomem *db_addr; u32 capacity; if (!p_spq) return; + /* Delete the SPQ doorbell from the doorbell recovery mechanism */ + db_addr = (void __iomem *)((u8 __iomem *)p_hwfn->doorbells + + p_spq->db_addr_offset); + qed_db_recovery_del(p_hwfn->cdev, db_addr, &p_spq->db_data); + if (p_spq->p_virt) { capacity = qed_chain_get_capacity(&p_spq->chain); dma_free_coherent(&p_hwfn->cdev->pdev->dev, diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h index de98a974673b..613249d1e967 100644 --- a/drivers/net/ethernet/qlogic/qede/qede.h +++ b/drivers/net/ethernet/qlogic/qede/qede.h @@ -168,6 +168,13 @@ struct qede_ptp; #define QEDE_RFS_MAX_FLTR 256 +enum qede_flags_bit { + QEDE_FLAGS_IS_VF = 0, + QEDE_FLAGS_LINK_REQUESTED, + QEDE_FLAGS_PTP_TX_IN_PRORGESS, + QEDE_FLAGS_TX_TIMESTAMPING_EN +}; + struct qede_dev { struct qed_dev *cdev; struct net_device *ndev; @@ -177,10 +184,7 @@ struct qede_dev { u8 dp_level; unsigned long flags; -#define QEDE_FLAG_IS_VF BIT(0) -#define IS_VF(edev) (!!((edev)->flags & QEDE_FLAG_IS_VF)) -#define QEDE_TX_TIMESTAMPING_EN BIT(1) -#define QEDE_FLAGS_PTP_TX_IN_PRORGESS BIT(2) +#define IS_VF(edev) (test_bit(QEDE_FLAGS_IS_VF, &(edev)->flags)) const struct qed_eth_ops *ops; struct qede_ptp *ptp; @@ -377,6 +381,7 @@ struct qede_tx_queue { u64 xmit_pkts; u64 stopped_cnt; + u64 tx_mem_alloc_err; __le16 *hw_cons_ptr; diff --git 
a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c index 8cbbd628fd73..16331c6c6fa7 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c +++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c @@ -73,6 +73,7 @@ static const struct { } qede_tqstats_arr[] = { QEDE_TQSTAT(xmit_pkts), QEDE_TQSTAT(stopped_cnt), + QEDE_TQSTAT(tx_mem_alloc_err), }; #define QEDE_STAT_OFFSET(stat_name, type, base) \ diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c index 1a78027de071..bdf816fe5a16 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c @@ -1466,8 +1466,8 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb, struct net_device *ndev) #if ((MAX_SKB_FRAGS + 2) > ETH_TX_MAX_BDS_PER_NON_LSO_PACKET) if (qede_pkt_req_lin(skb, xmit_type)) { if (skb_linearize(skb)) { - DP_NOTICE(edev, - "SKB linearization failed - silently dropping this SKB\n"); + txq->tx_mem_alloc_err++; + dev_kfree_skb_any(skb); return NETDEV_TX_OK; } diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c index 46d0f2eaa0c0..5a74fcbdbc2b 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_main.c +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c @@ -1086,7 +1086,7 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level, } if (is_vf) - edev->flags |= QEDE_FLAG_IS_VF; + set_bit(QEDE_FLAGS_IS_VF, &edev->flags); qede_init_ndev(edev); @@ -1774,6 +1774,10 @@ static int qede_drain_txq(struct qede_dev *edev, static int qede_stop_txq(struct qede_dev *edev, struct qede_tx_queue *txq, int rss_id) { + /* delete doorbell from doorbell recovery mechanism */ + edev->ops->common->db_recovery_del(edev->cdev, txq->doorbell_addr, + &txq->tx_db); + return edev->ops->q_tx_stop(edev->cdev, rss_id, txq->handle); } @@ -1910,6 +1914,11 @@ static int qede_start_txq(struct qede_dev *edev, DQ_XCM_ETH_TX_BD_PROD_CMD); txq->tx_db.data.agg_flags = DQ_XCM_ETH_DQ_CF_CMD; + /* register doorbell with doorbell recovery mechanism */ + rc = edev->ops->common->db_recovery_add(edev->cdev, txq->doorbell_addr, + &txq->tx_db, DB_REC_WIDTH_32B, + DB_REC_KERNEL); + return rc; } @@ -2057,6 +2066,8 @@ static void qede_unload(struct qede_dev *edev, enum qede_unload_mode mode, if (!is_locked) __qede_lock(edev); + clear_bit(QEDE_FLAGS_LINK_REQUESTED, &edev->flags); + edev->state = QEDE_STATE_CLOSED; qede_rdma_dev_event_close(edev); @@ -2163,6 +2174,8 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode, /* Program un-configured VLANs */ qede_configure_vlan_filters(edev); + set_bit(QEDE_FLAGS_LINK_REQUESTED, &edev->flags); + /* Ask for link-up using current configuration */ memset(&link_params, 0, sizeof(link_params)); link_params.link_up = true; @@ -2258,8 +2271,8 @@ static void qede_link_update(void *dev, struct qed_link_output *link) { struct qede_dev *edev = dev; - if (!netif_running(edev->ndev)) { - DP_VERBOSE(edev, NETIF_MSG_LINK, "Interface is not running\n"); + if (!test_bit(QEDE_FLAGS_LINK_REQUESTED, &edev->flags)) { + DP_VERBOSE(edev, NETIF_MSG_LINK, "Interface is not ready\n"); return; } diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.c b/drivers/net/ethernet/qlogic/qede/qede_ptp.c index 013ff567283c..5f3f42a25361 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_ptp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.c @@ -223,12 +223,12 @@ static int qede_ptp_cfg_filters(struct qede_dev *edev) switch (ptp->tx_type) { case 
HWTSTAMP_TX_ON: - edev->flags |= QEDE_TX_TIMESTAMPING_EN; + set_bit(QEDE_FLAGS_TX_TIMESTAMPING_EN, &edev->flags); tx_type = QED_PTP_HWTSTAMP_TX_ON; break; case HWTSTAMP_TX_OFF: - edev->flags &= ~QEDE_TX_TIMESTAMPING_EN; + clear_bit(QEDE_FLAGS_TX_TIMESTAMPING_EN, &edev->flags); tx_type = QED_PTP_HWTSTAMP_TX_OFF; break; @@ -518,7 +518,7 @@ void qede_ptp_tx_ts(struct qede_dev *edev, struct sk_buff *skb) if (test_and_set_bit_lock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags)) return; - if (unlikely(!(edev->flags & QEDE_TX_TIMESTAMPING_EN))) { + if (unlikely(!test_bit(QEDE_FLAGS_TX_TIMESTAMPING_EN, &edev->flags))) { DP_NOTICE(edev, "Tx timestamping was not enabled, this packet will not be timestamped\n"); } else if (unlikely(ptp->tx_skb)) { diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c index d42ba2293d8c..16d0479f6891 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c @@ -2993,10 +2993,8 @@ int qlcnic_check_temp(struct qlcnic_adapter *adapter) static inline void dump_tx_ring_desc(struct qlcnic_host_tx_ring *tx_ring) { int i; - struct cmd_desc_type0 *tx_desc_info; for (i = 0; i < tx_ring->num_desc; i++) { - tx_desc_info = &tx_ring->desc_head[i]; pr_info("TX Desc: %d\n", i); print_hex_dump(KERN_INFO, "TX: ", DUMP_PREFIX_OFFSET, 16, 1, &tx_ring->desc_head[i], @@ -4008,19 +4006,12 @@ int qlcnic_validate_rings(struct qlcnic_adapter *adapter, __u32 ring_cnt, int queue_type) { struct net_device *netdev = adapter->netdev; - u8 max_hw_rings = 0; char buf[8]; - int cur_rings; - if (queue_type == QLCNIC_RX_QUEUE) { - max_hw_rings = adapter->max_sds_rings; - cur_rings = adapter->drv_sds_rings; + if (queue_type == QLCNIC_RX_QUEUE) strcpy(buf, "SDS"); - } else if (queue_type == QLCNIC_TX_QUEUE) { - max_hw_rings = adapter->max_tx_rings; - cur_rings = adapter->drv_tx_rings; + else strcpy(buf, "Tx"); - } if (!is_power_of_2(ring_cnt)) { netdev_err(netdev, "%s rings value should be a power of 2\n", diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c index 50eaafa3eaba..af3b037fa442 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c @@ -1067,9 +1067,6 @@ static int qlcnic_sriov_pf_cfg_ip_cmd(struct qlcnic_bc_trans *trans, struct qlcnic_vf_info *vf = trans->vf; struct qlcnic_adapter *adapter = vf->adapter; int err = -EIO; - u8 op; - - op = cmd->req.arg[1] & 0xff; cmd->req.arg[1] |= vf->vp->handle << 16; cmd->req.arg[1] |= BIT_31; @@ -1339,14 +1336,13 @@ static int qlcnic_sriov_pf_get_acl_cmd(struct qlcnic_bc_trans *trans, { struct qlcnic_vf_info *vf = trans->vf; struct qlcnic_vport *vp = vf->vp; - u8 cmd_op, mode = vp->vlan_mode; + u8 mode = vp->vlan_mode; struct qlcnic_adapter *adapter; struct qlcnic_sriov *sriov; adapter = vf->adapter; sriov = adapter->ahw->sriov; - cmd_op = trans->req_hdr->cmd_op; cmd->rsp.arg[0] |= 1 << 25; /* For 84xx adapter in case of PVID , PFD should send vlan mode as diff --git a/drivers/net/ethernet/qualcomm/qca_debug.c b/drivers/net/ethernet/qualcomm/qca_debug.c index a9f1bc013364..bcb890b18a94 100644 --- a/drivers/net/ethernet/qualcomm/qca_debug.c +++ b/drivers/net/ethernet/qualcomm/qca_debug.c @@ -61,6 +61,7 @@ static const char qcaspi_gstrings_stats[][ETH_GSTRING_LEN] = { "Transmit ring full", "SPI errors", "Write verify errors", + "Buffer available errors", }; #ifdef CONFIG_DEBUG_FS @@ -125,19 +126,7 @@ 
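
The qede flags change earlier in this patch converts ad-hoc "flags |= BIT(x)" masks into numbered bits driven by set_bit()/clear_bit()/test_bit(), so every update is an atomic read-modify-write on the shared word instead of a racy load/or/store. A C11 userspace sketch of the same primitives (the kernel's bitops likewise operate on an unsigned long):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    enum { FLAGS_IS_VF, FLAGS_LINK_REQUESTED, FLAGS_TX_TIMESTAMPING_EN };

    static _Atomic unsigned long flags;

    static void set_bit_(int nr)   { atomic_fetch_or(&flags, 1UL << nr); }
    static void clear_bit_(int nr) { atomic_fetch_and(&flags, ~(1UL << nr)); }
    static bool test_bit_(int nr)  { return atomic_load(&flags) & (1UL << nr); }

    int main(void)
    {
            set_bit_(FLAGS_LINK_REQUESTED);
            set_bit_(FLAGS_TX_TIMESTAMPING_EN);
            clear_bit_(FLAGS_TX_TIMESTAMPING_EN);
            printf("link=%d tstamp=%d vf=%d\n",
                   test_bit_(FLAGS_LINK_REQUESTED),
                   test_bit_(FLAGS_TX_TIMESTAMPING_EN),
                   test_bit_(FLAGS_IS_VF));
            return 0;
    }
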
qcaspi_info_show(struct seq_file *s, void *what) return 0; } - -static int -qcaspi_info_open(struct inode *inode, struct file *file) -{ - return single_open(file, qcaspi_info_show, inode->i_private); -} - -static const struct file_operations qcaspi_info_ops = { - .open = qcaspi_info_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(qcaspi_info); void qcaspi_init_device_debugfs(struct qcaspi *qca) @@ -153,7 +142,7 @@ qcaspi_init_device_debugfs(struct qcaspi *qca) return; } debugfs_create_file("info", S_IFREG | 0444, device_root, qca, - &qcaspi_info_ops); + &qcaspi_info_fops); } void diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c index d5310504f436..97f92953bdb9 100644 --- a/drivers/net/ethernet/qualcomm/qca_spi.c +++ b/drivers/net/ethernet/qualcomm/qca_spi.c @@ -289,6 +289,14 @@ qcaspi_transmit(struct qcaspi *qca) qcaspi_read_register(qca, SPI_REG_WRBUF_SPC_AVA, &available); + if (available > QCASPI_HW_BUF_LEN) { + /* This could only happen by interferences on the SPI line. + * So retry later ... + */ + qca->stats.buf_avail_err++; + return -1; + } + while (qca->txr.skb[qca->txr.head]) { pkt_len = qca->txr.skb[qca->txr.head]->len + QCASPI_HW_PKT_LEN; @@ -355,7 +363,13 @@ qcaspi_receive(struct qcaspi *qca) netdev_dbg(net_dev, "qcaspi_receive: SPI_REG_RDBUF_BYTE_AVA: Value: %08x\n", available); - if (available == 0) { + if (available > QCASPI_HW_BUF_LEN) { + /* This could only happen by interferences on the SPI line. + * So retry later ... + */ + qca->stats.buf_avail_err++; + return -1; + } else if (available == 0) { netdev_dbg(net_dev, "qcaspi_receive called without any data being available!\n"); return -1; } diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h index 2d2c49726492..eb9af45fcc5e 100644 --- a/drivers/net/ethernet/qualcomm/qca_spi.h +++ b/drivers/net/ethernet/qualcomm/qca_spi.h @@ -74,6 +74,7 @@ struct qcaspi_stats { u64 ring_full; u64 spi_err; u64 write_verify_failed; + u64 buf_avail_err; }; struct qcaspi { diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c index 5f4e447c5dce..b8bbee645f51 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c @@ -301,10 +301,13 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[], struct rmnet_port *port; u16 mux_id; + if (!dev) + return -ENODEV; + real_dev = __dev_get_by_index(dev_net(dev), nla_get_u32(tb[IFLA_LINK])); - if (!real_dev || !dev || !rmnet_is_real_dev_registered(real_dev)) + if (!real_dev || !rmnet_is_real_dev_registered(real_dev)) return -ENODEV; port = rmnet_get_port_rtnl(real_dev); diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c index 3ee8ae9b6838..f6cf59aee212 100644 --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c @@ -20,17 +20,12 @@ static u8 rmnet_map_do_flow_control(struct sk_buff *skb, struct rmnet_port *port, int enable) { - struct rmnet_map_control_command *cmd; struct rmnet_endpoint *ep; struct net_device *vnd; - u16 ip_family; - u16 fc_seq; - u32 qos_id; u8 mux_id; int r; mux_id = RMNET_MAP_GET_MUX_ID(skb); - cmd = RMNET_MAP_GET_CMD_START(skb); if (mux_id >= RMNET_MAX_LOGICAL_EP) { kfree_skb(skb); @@ -45,10 +40,6 @@ static u8 rmnet_map_do_flow_control(struct sk_buff *skb, 
vnd = ep->egress_dev; - ip_family = cmd->flow_control.ip_family; - fc_seq = ntohs(cmd->flow_control.flow_control_seq_num); - qos_id = ntohl(cmd->flow_control.qos_id); - /* Ignore the ip family and pass the sequence number for both v4 and v6 * sequence. User space does not support creating dedicated flows for * the 2 protocols diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c index fe2d754c6c8e..99bc3de906e2 100644 --- a/drivers/net/ethernet/realtek/r8169.c +++ b/drivers/net/ethernet/realtek/r8169.c @@ -56,13 +56,6 @@ #define R8169_MSG_DEFAULT \ (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN) -#define TX_SLOTS_AVAIL(tp) \ - (tp->dirty_tx + NUM_TX_DESC - tp->cur_tx) - -/* A skbuff with nr_frags needs nr_frags+1 entries in the tx queue */ -#define TX_FRAGS_READY_FOR(tp,nr_frags) \ - (TX_SLOTS_AVAIL(tp) >= (nr_frags + 1)) - /* Maximum number of multicast addresses to filter (vs. Rx-all-multicast). The RTL chips use a 64 element hash table based on the Ethernet CRC. */ static const int multicast_filter_limit = 32; @@ -212,24 +205,24 @@ enum cfg_version { }; static const struct pci_device_id rtl8169_pci_tbl[] = { - { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8129), 0, 0, RTL_CFG_0 }, - { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8136), 0, 0, RTL_CFG_2 }, - { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8161), 0, 0, RTL_CFG_1 }, - { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8167), 0, 0, RTL_CFG_0 }, - { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8168), 0, 0, RTL_CFG_1 }, - { PCI_DEVICE(PCI_VENDOR_ID_NCUBE, 0x8168), 0, 0, RTL_CFG_1 }, - { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), 0, 0, RTL_CFG_0 }, - { PCI_VENDOR_ID_DLINK, 0x4300, - PCI_VENDOR_ID_DLINK, 0x4b10, 0, 0, RTL_CFG_1 }, - { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), 0, 0, RTL_CFG_0 }, - { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4302), 0, 0, RTL_CFG_0 }, - { PCI_DEVICE(PCI_VENDOR_ID_AT, 0xc107), 0, 0, RTL_CFG_0 }, - { PCI_DEVICE(0x16ec, 0x0116), 0, 0, RTL_CFG_0 }, + { PCI_VDEVICE(REALTEK, 0x8129), RTL_CFG_0 }, + { PCI_VDEVICE(REALTEK, 0x8136), RTL_CFG_2 }, + { PCI_VDEVICE(REALTEK, 0x8161), RTL_CFG_1 }, + { PCI_VDEVICE(REALTEK, 0x8167), RTL_CFG_0 }, + { PCI_VDEVICE(REALTEK, 0x8168), RTL_CFG_1 }, + { PCI_VDEVICE(NCUBE, 0x8168), RTL_CFG_1 }, + { PCI_VDEVICE(REALTEK, 0x8169), RTL_CFG_0 }, + { PCI_VENDOR_ID_DLINK, 0x4300, + PCI_VENDOR_ID_DLINK, 0x4b10, 0, 0, RTL_CFG_1 }, + { PCI_VDEVICE(DLINK, 0x4300), RTL_CFG_0 }, + { PCI_VDEVICE(DLINK, 0x4302), RTL_CFG_0 }, + { PCI_VDEVICE(AT, 0xc107), RTL_CFG_0 }, + { PCI_VDEVICE(USR, 0x0116), RTL_CFG_0 }, { PCI_VENDOR_ID_LINKSYS, 0x1032, PCI_ANY_ID, 0x0024, 0, 0, RTL_CFG_0 }, { 0x0001, 0x8168, PCI_ANY_ID, 0x2410, 0, 0, RTL_CFG_2 }, - {0,}, + {} }; MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl); @@ -603,7 +596,6 @@ struct RxDesc { struct ring_info { struct sk_buff *skb; u32 len; - u8 __pad[sizeof(void *) - sizeof(u32)]; }; struct rtl8169_counters { @@ -661,7 +653,7 @@ struct rtl8169_private { struct ring_info tx_skb[NUM_TX_DESC]; /* Tx data buffers */ u16 cp_cmd; - u16 event_slow; + u16 irq_mask; const struct rtl_coalesce_info *coalesce_info; struct clk *clk; @@ -1102,23 +1094,6 @@ static u32 r8168ep_ocp_read(struct rtl8169_private *tp, u8 mask, u16 reg) return rtl_eri_read(tp, reg, ERIAR_OOB); } -static u32 ocp_read(struct rtl8169_private *tp, u8 mask, u16 reg) -{ - switch (tp->mac_version) { - case RTL_GIGA_MAC_VER_27: - case RTL_GIGA_MAC_VER_28: - case RTL_GIGA_MAC_VER_31: - return r8168dp_ocp_read(tp, mask, reg); - case RTL_GIGA_MAC_VER_49: - case RTL_GIGA_MAC_VER_50: - case 
RTL_GIGA_MAC_VER_51: - return r8168ep_ocp_read(tp, mask, reg); - default: - BUG(); - return ~0; - } -} - static void r8168dp_ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg, u32 data) { @@ -1134,30 +1109,11 @@ static void r8168ep_ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg, data, ERIAR_OOB); } -static void ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg, u32 data) -{ - switch (tp->mac_version) { - case RTL_GIGA_MAC_VER_27: - case RTL_GIGA_MAC_VER_28: - case RTL_GIGA_MAC_VER_31: - r8168dp_ocp_write(tp, mask, reg, data); - break; - case RTL_GIGA_MAC_VER_49: - case RTL_GIGA_MAC_VER_50: - case RTL_GIGA_MAC_VER_51: - r8168ep_ocp_write(tp, mask, reg, data); - break; - default: - BUG(); - break; - } -} - -static void rtl8168_oob_notify(struct rtl8169_private *tp, u8 cmd) +static void r8168dp_oob_notify(struct rtl8169_private *tp, u8 cmd) { rtl_eri_write(tp, 0xe8, ERIAR_MASK_0001, cmd, ERIAR_EXGMAC); - ocp_write(tp, 0x1, 0x30, 0x00000001); + r8168dp_ocp_write(tp, 0x1, 0x30, 0x00000001); } #define OOB_CMD_RESET 0x00 @@ -1169,18 +1125,18 @@ static u16 rtl8168_get_ocp_reg(struct rtl8169_private *tp) return (tp->mac_version == RTL_GIGA_MAC_VER_31) ? 0xb8 : 0x10; } -DECLARE_RTL_COND(rtl_ocp_read_cond) +DECLARE_RTL_COND(rtl_dp_ocp_read_cond) { u16 reg; reg = rtl8168_get_ocp_reg(tp); - return ocp_read(tp, 0x0f, reg) & 0x00000800; + return r8168dp_ocp_read(tp, 0x0f, reg) & 0x00000800; } DECLARE_RTL_COND(rtl_ep_ocp_read_cond) { - return ocp_read(tp, 0x0f, 0x124) & 0x00000001; + return r8168ep_ocp_read(tp, 0x0f, 0x124) & 0x00000001; } DECLARE_RTL_COND(rtl_ocp_tx_cond) @@ -1198,14 +1154,15 @@ static void rtl8168ep_stop_cmac(struct rtl8169_private *tp) static void rtl8168dp_driver_start(struct rtl8169_private *tp) { - rtl8168_oob_notify(tp, OOB_CMD_DRIVER_START); - rtl_msleep_loop_wait_high(tp, &rtl_ocp_read_cond, 10, 10); + r8168dp_oob_notify(tp, OOB_CMD_DRIVER_START); + rtl_msleep_loop_wait_high(tp, &rtl_dp_ocp_read_cond, 10, 10); } static void rtl8168ep_driver_start(struct rtl8169_private *tp) { - ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_START); - ocp_write(tp, 0x01, 0x30, ocp_read(tp, 0x01, 0x30) | 0x01); + r8168ep_ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_START); + r8168ep_ocp_write(tp, 0x01, 0x30, + r8168ep_ocp_read(tp, 0x01, 0x30) | 0x01); rtl_msleep_loop_wait_high(tp, &rtl_ep_ocp_read_cond, 10, 10); } @@ -1230,15 +1187,16 @@ static void rtl8168_driver_start(struct rtl8169_private *tp) static void rtl8168dp_driver_stop(struct rtl8169_private *tp) { - rtl8168_oob_notify(tp, OOB_CMD_DRIVER_STOP); - rtl_msleep_loop_wait_low(tp, &rtl_ocp_read_cond, 10, 10); + r8168dp_oob_notify(tp, OOB_CMD_DRIVER_STOP); + rtl_msleep_loop_wait_low(tp, &rtl_dp_ocp_read_cond, 10, 10); } static void rtl8168ep_driver_stop(struct rtl8169_private *tp) { rtl8168ep_stop_cmac(tp); - ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_STOP); - ocp_write(tp, 0x01, 0x30, ocp_read(tp, 0x01, 0x30) | 0x01); + r8168ep_ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_STOP); + r8168ep_ocp_write(tp, 0x01, 0x30, + r8168ep_ocp_read(tp, 0x01, 0x30) | 0x01); rtl_msleep_loop_wait_low(tp, &rtl_ep_ocp_read_cond, 10, 10); } @@ -1265,12 +1223,12 @@ static bool r8168dp_check_dash(struct rtl8169_private *tp) { u16 reg = rtl8168_get_ocp_reg(tp); - return !!(ocp_read(tp, 0x0f, reg) & 0x00008000); + return !!(r8168dp_ocp_read(tp, 0x0f, reg) & 0x00008000); } static bool r8168ep_check_dash(struct rtl8169_private *tp) { - return !!(ocp_read(tp, 0x0f, 0x128) & 0x00000001); + return !!(r8168ep_ocp_read(tp, 0x0f, 0x128) & 0x00000001); } static bool 
r8168_check_dash(struct rtl8169_private *tp) @@ -1325,27 +1283,20 @@ static u16 rtl_get_events(struct rtl8169_private *tp) static void rtl_ack_events(struct rtl8169_private *tp, u16 bits) { RTL_W16(tp, IntrStatus, bits); - mmiowb(); } static void rtl_irq_disable(struct rtl8169_private *tp) { RTL_W16(tp, IntrMask, 0); - mmiowb(); -} - -static void rtl_irq_enable(struct rtl8169_private *tp, u16 bits) -{ - RTL_W16(tp, IntrMask, bits); } #define RTL_EVENT_NAPI_RX (RxOK | RxErr) #define RTL_EVENT_NAPI_TX (TxOK | TxErr) #define RTL_EVENT_NAPI (RTL_EVENT_NAPI_RX | RTL_EVENT_NAPI_TX) -static void rtl_irq_enable_all(struct rtl8169_private *tp) +static void rtl_irq_enable(struct rtl8169_private *tp) { - rtl_irq_enable(tp, RTL_EVENT_NAPI | tp->event_slow); + RTL_W16(tp, IntrMask, tp->irq_mask); } static void rtl8169_irq_mask_and_ack(struct rtl8169_private *tp) @@ -2051,8 +2002,7 @@ static const struct ethtool_ops rtl8169_ethtool_ops = { .set_link_ksettings = phy_ethtool_set_link_ksettings, }; -static void rtl8169_get_mac_version(struct rtl8169_private *tp, - u8 default_version) +static void rtl8169_get_mac_version(struct rtl8169_private *tp) { /* * The driver currently handles the 8168Bf and the 8168Be identically @@ -2066,120 +2016,107 @@ static void rtl8169_get_mac_version(struct rtl8169_private *tp, * (RTL_R32(tp, TxConfig) & 0x700000) == 0x200000 ? 8101Eb : 8101Ec */ static const struct rtl_mac_info { - u32 mask; - u32 val; - int mac_version; + u16 mask; + u16 val; + u16 mac_version; } mac_info[] = { /* 8168EP family. */ - { 0x7cf00000, 0x50200000, RTL_GIGA_MAC_VER_51 }, - { 0x7cf00000, 0x50100000, RTL_GIGA_MAC_VER_50 }, - { 0x7cf00000, 0x50000000, RTL_GIGA_MAC_VER_49 }, + { 0x7cf, 0x502, RTL_GIGA_MAC_VER_51 }, + { 0x7cf, 0x501, RTL_GIGA_MAC_VER_50 }, + { 0x7cf, 0x500, RTL_GIGA_MAC_VER_49 }, /* 8168H family. */ - { 0x7cf00000, 0x54100000, RTL_GIGA_MAC_VER_46 }, - { 0x7cf00000, 0x54000000, RTL_GIGA_MAC_VER_45 }, + { 0x7cf, 0x541, RTL_GIGA_MAC_VER_46 }, + { 0x7cf, 0x540, RTL_GIGA_MAC_VER_45 }, /* 8168G family. */ - { 0x7cf00000, 0x5c800000, RTL_GIGA_MAC_VER_44 }, - { 0x7cf00000, 0x50900000, RTL_GIGA_MAC_VER_42 }, - { 0x7cf00000, 0x4c100000, RTL_GIGA_MAC_VER_41 }, - { 0x7cf00000, 0x4c000000, RTL_GIGA_MAC_VER_40 }, + { 0x7cf, 0x5c8, RTL_GIGA_MAC_VER_44 }, + { 0x7cf, 0x509, RTL_GIGA_MAC_VER_42 }, + { 0x7cf, 0x4c1, RTL_GIGA_MAC_VER_41 }, + { 0x7cf, 0x4c0, RTL_GIGA_MAC_VER_40 }, /* 8168F family. */ - { 0x7c800000, 0x48800000, RTL_GIGA_MAC_VER_38 }, - { 0x7cf00000, 0x48100000, RTL_GIGA_MAC_VER_36 }, - { 0x7cf00000, 0x48000000, RTL_GIGA_MAC_VER_35 }, + { 0x7c8, 0x488, RTL_GIGA_MAC_VER_38 }, + { 0x7cf, 0x481, RTL_GIGA_MAC_VER_36 }, + { 0x7cf, 0x480, RTL_GIGA_MAC_VER_35 }, /* 8168E family. */ - { 0x7c800000, 0x2c800000, RTL_GIGA_MAC_VER_34 }, - { 0x7cf00000, 0x2c100000, RTL_GIGA_MAC_VER_32 }, - { 0x7c800000, 0x2c000000, RTL_GIGA_MAC_VER_33 }, + { 0x7c8, 0x2c8, RTL_GIGA_MAC_VER_34 }, + { 0x7cf, 0x2c1, RTL_GIGA_MAC_VER_32 }, + { 0x7c8, 0x2c0, RTL_GIGA_MAC_VER_33 }, /* 8168D family. */ - { 0x7cf00000, 0x28100000, RTL_GIGA_MAC_VER_25 }, - { 0x7c800000, 0x28000000, RTL_GIGA_MAC_VER_26 }, + { 0x7cf, 0x281, RTL_GIGA_MAC_VER_25 }, + { 0x7c8, 0x280, RTL_GIGA_MAC_VER_26 }, /* 8168DP family. */ - { 0x7cf00000, 0x28800000, RTL_GIGA_MAC_VER_27 }, - { 0x7cf00000, 0x28a00000, RTL_GIGA_MAC_VER_28 }, - { 0x7cf00000, 0x28b00000, RTL_GIGA_MAC_VER_31 }, + { 0x7cf, 0x288, RTL_GIGA_MAC_VER_27 }, + { 0x7cf, 0x28a, RTL_GIGA_MAC_VER_28 }, + { 0x7cf, 0x28b, RTL_GIGA_MAC_VER_31 }, /* 8168C family. 
*/ - { 0x7cf00000, 0x3c900000, RTL_GIGA_MAC_VER_23 }, - { 0x7cf00000, 0x3c800000, RTL_GIGA_MAC_VER_18 }, - { 0x7c800000, 0x3c800000, RTL_GIGA_MAC_VER_24 }, - { 0x7cf00000, 0x3c000000, RTL_GIGA_MAC_VER_19 }, - { 0x7cf00000, 0x3c200000, RTL_GIGA_MAC_VER_20 }, - { 0x7cf00000, 0x3c300000, RTL_GIGA_MAC_VER_21 }, - { 0x7c800000, 0x3c000000, RTL_GIGA_MAC_VER_22 }, + { 0x7cf, 0x3c9, RTL_GIGA_MAC_VER_23 }, + { 0x7cf, 0x3c8, RTL_GIGA_MAC_VER_18 }, + { 0x7c8, 0x3c8, RTL_GIGA_MAC_VER_24 }, + { 0x7cf, 0x3c0, RTL_GIGA_MAC_VER_19 }, + { 0x7cf, 0x3c2, RTL_GIGA_MAC_VER_20 }, + { 0x7cf, 0x3c3, RTL_GIGA_MAC_VER_21 }, + { 0x7c8, 0x3c0, RTL_GIGA_MAC_VER_22 }, /* 8168B family. */ - { 0x7cf00000, 0x38000000, RTL_GIGA_MAC_VER_12 }, - { 0x7c800000, 0x38000000, RTL_GIGA_MAC_VER_17 }, - { 0x7c800000, 0x30000000, RTL_GIGA_MAC_VER_11 }, + { 0x7cf, 0x380, RTL_GIGA_MAC_VER_12 }, + { 0x7c8, 0x380, RTL_GIGA_MAC_VER_17 }, + { 0x7c8, 0x300, RTL_GIGA_MAC_VER_11 }, /* 8101 family. */ - { 0x7c800000, 0x44800000, RTL_GIGA_MAC_VER_39 }, - { 0x7c800000, 0x44000000, RTL_GIGA_MAC_VER_37 }, - { 0x7cf00000, 0x40900000, RTL_GIGA_MAC_VER_29 }, - { 0x7c800000, 0x40800000, RTL_GIGA_MAC_VER_30 }, - { 0x7cf00000, 0x34900000, RTL_GIGA_MAC_VER_08 }, - { 0x7cf00000, 0x24900000, RTL_GIGA_MAC_VER_08 }, - { 0x7cf00000, 0x34800000, RTL_GIGA_MAC_VER_07 }, - { 0x7cf00000, 0x24800000, RTL_GIGA_MAC_VER_07 }, - { 0x7cf00000, 0x34000000, RTL_GIGA_MAC_VER_13 }, - { 0x7cf00000, 0x34300000, RTL_GIGA_MAC_VER_10 }, - { 0x7cf00000, 0x34200000, RTL_GIGA_MAC_VER_16 }, - { 0x7c800000, 0x34800000, RTL_GIGA_MAC_VER_09 }, - { 0x7c800000, 0x24800000, RTL_GIGA_MAC_VER_09 }, - { 0x7c800000, 0x34000000, RTL_GIGA_MAC_VER_16 }, + { 0x7c8, 0x448, RTL_GIGA_MAC_VER_39 }, + { 0x7c8, 0x440, RTL_GIGA_MAC_VER_37 }, + { 0x7cf, 0x409, RTL_GIGA_MAC_VER_29 }, + { 0x7c8, 0x408, RTL_GIGA_MAC_VER_30 }, + { 0x7cf, 0x349, RTL_GIGA_MAC_VER_08 }, + { 0x7cf, 0x249, RTL_GIGA_MAC_VER_08 }, + { 0x7cf, 0x348, RTL_GIGA_MAC_VER_07 }, + { 0x7cf, 0x248, RTL_GIGA_MAC_VER_07 }, + { 0x7cf, 0x340, RTL_GIGA_MAC_VER_13 }, + { 0x7cf, 0x343, RTL_GIGA_MAC_VER_10 }, + { 0x7cf, 0x342, RTL_GIGA_MAC_VER_16 }, + { 0x7c8, 0x348, RTL_GIGA_MAC_VER_09 }, + { 0x7c8, 0x248, RTL_GIGA_MAC_VER_09 }, + { 0x7c8, 0x340, RTL_GIGA_MAC_VER_16 }, /* FIXME: where did these entries come from ? -- FR */ - { 0xfc800000, 0x38800000, RTL_GIGA_MAC_VER_15 }, - { 0xfc800000, 0x30800000, RTL_GIGA_MAC_VER_14 }, + { 0xfc8, 0x388, RTL_GIGA_MAC_VER_15 }, + { 0xfc8, 0x308, RTL_GIGA_MAC_VER_14 }, /* 8110 family. 
*/ - { 0xfc800000, 0x98000000, RTL_GIGA_MAC_VER_06 }, - { 0xfc800000, 0x18000000, RTL_GIGA_MAC_VER_05 }, - { 0xfc800000, 0x10000000, RTL_GIGA_MAC_VER_04 }, - { 0xfc800000, 0x04000000, RTL_GIGA_MAC_VER_03 }, - { 0xfc800000, 0x00800000, RTL_GIGA_MAC_VER_02 }, - { 0xfc800000, 0x00000000, RTL_GIGA_MAC_VER_01 }, + { 0xfc8, 0x980, RTL_GIGA_MAC_VER_06 }, + { 0xfc8, 0x180, RTL_GIGA_MAC_VER_05 }, + { 0xfc8, 0x100, RTL_GIGA_MAC_VER_04 }, + { 0xfc8, 0x040, RTL_GIGA_MAC_VER_03 }, + { 0xfc8, 0x008, RTL_GIGA_MAC_VER_02 }, + { 0xfc8, 0x000, RTL_GIGA_MAC_VER_01 }, /* Catch-all */ - { 0x00000000, 0x00000000, RTL_GIGA_MAC_NONE } + { 0x000, 0x000, RTL_GIGA_MAC_NONE } }; const struct rtl_mac_info *p = mac_info; - u32 reg; + u16 reg = RTL_R32(tp, TxConfig) >> 20; - reg = RTL_R32(tp, TxConfig); while ((reg & p->mask) != p->val) p++; tp->mac_version = p->mac_version; if (tp->mac_version == RTL_GIGA_MAC_NONE) { - dev_notice(tp_to_dev(tp), - "unknown MAC, using family default\n"); - tp->mac_version = default_version; - } else if (tp->mac_version == RTL_GIGA_MAC_VER_42) { - tp->mac_version = tp->supports_gmii ? - RTL_GIGA_MAC_VER_42 : - RTL_GIGA_MAC_VER_43; - } else if (tp->mac_version == RTL_GIGA_MAC_VER_45) { - tp->mac_version = tp->supports_gmii ? - RTL_GIGA_MAC_VER_45 : - RTL_GIGA_MAC_VER_47; - } else if (tp->mac_version == RTL_GIGA_MAC_VER_46) { - tp->mac_version = tp->supports_gmii ? - RTL_GIGA_MAC_VER_46 : - RTL_GIGA_MAC_VER_48; + dev_err(tp_to_dev(tp), "unknown chip XID %03x\n", reg & 0xfcf); + } else if (!tp->supports_gmii) { + if (tp->mac_version == RTL_GIGA_MAC_VER_42) + tp->mac_version = RTL_GIGA_MAC_VER_43; + else if (tp->mac_version == RTL_GIGA_MAC_VER_45) + tp->mac_version = RTL_GIGA_MAC_VER_47; + else if (tp->mac_version == RTL_GIGA_MAC_VER_46) + tp->mac_version = RTL_GIGA_MAC_VER_48; } } -static void rtl8169_print_mac_version(struct rtl8169_private *tp) -{ - netif_dbg(tp, drv, tp->dev, "mac_version = 0x%02x\n", tp->mac_version); -} - struct phy_reg { u16 reg; u16 val; @@ -3902,8 +3839,6 @@ static void rtl_hw_phy_config(struct net_device *dev) { struct rtl8169_private *tp = netdev_priv(dev); - rtl8169_print_mac_version(tp); - switch (tp->mac_version) { case RTL_GIGA_MAC_VER_01: break; @@ -4643,7 +4578,7 @@ static void rtl_hw_start(struct rtl8169_private *tp) rtl_set_rx_mode(tp->dev); /* no early-rx interrupts */ RTL_W16(tp, MultiIntr, RTL_R16(tp, MultiIntr) & 0xf000); - rtl_irq_enable_all(tp); + rtl_irq_enable(tp); } static void rtl_hw_start_8169(struct rtl8169_private *tp) @@ -5394,8 +5329,8 @@ static void rtl_hw_start_8168(struct rtl8169_private *tp) /* Work around for RxFIFO overflow. 
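
The mac_info rewrite above is a pure compaction: instead of masking the whole 32-bit TxConfig value, the table keys on its top 12 bits (reg >> 20), so mask/val shrink from u32 to u16 and the rows read as plain XIDs. A compilable sketch of the first-match walk, with three rows and the catch-all terminator copied from the hunk (bare numbers stand in for the RTL_GIGA_MAC_VER_* enum):

    #include <stdint.h>
    #include <stdio.h>

    struct mac_info {
            uint16_t mask;
            uint16_t val;
            uint16_t mac_version;   /* 0 plays the RTL_GIGA_MAC_NONE role */
    };

    static const struct mac_info mac_info[] = {
            { 0x7cf, 0x502, 51 },   /* RTL_GIGA_MAC_VER_51 */
            { 0x7cf, 0x501, 50 },   /* RTL_GIGA_MAC_VER_50 */
            { 0x7cf, 0x500, 49 },   /* RTL_GIGA_MAC_VER_49 */
            { 0x000, 0x000, 0 }     /* catch-all terminator */
    };

    static uint16_t get_mac_version(uint32_t tx_config)
    {
            const struct mac_info *p = mac_info;
            uint16_t reg = tx_config >> 20; /* only the XID bits matter */

            while ((reg & p->mask) != p->val)
                    p++;
            return p->mac_version;
    }

    int main(void)
    {
            /* 0x50100000 >> 20 == 0x501 -> VER_50; junk hits the catch-all. */
            printf("%u %u\n", get_mac_version(0x50100000u),
                   get_mac_version(0x12300000u));
            return 0;
    }

The zero-mask terminator is what lets the walk run without a length check: (reg & 0) == 0 matches everything, so an unknown XID lands on RTL_GIGA_MAC_NONE, and rtl_init_one() below now bails out with -ENODEV instead of guessing a family default.
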
*/ if (tp->mac_version == RTL_GIGA_MAC_VER_11) { - tp->event_slow |= RxFIFOOver | PCSTimeout; - tp->event_slow &= ~RxOverflow; + tp->irq_mask |= RxFIFOOver; + tp->irq_mask &= ~RxOverflow; } switch (tp->mac_version) { @@ -5632,7 +5567,7 @@ static void rtl_hw_start_8106(struct rtl8169_private *tp) static void rtl_hw_start_8101(struct rtl8169_private *tp) { if (tp->mac_version >= RTL_GIGA_MAC_VER_30) - tp->event_slow &= ~RxFIFOOver; + tp->irq_mask &= ~RxFIFOOver; if (tp->mac_version == RTL_GIGA_MAC_VER_13 || tp->mac_version == RTL_GIGA_MAC_VER_16) @@ -5888,6 +5823,16 @@ static void rtl8169_tx_timeout(struct net_device *dev) rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING); } +static __le32 rtl8169_get_txd_opts1(u32 opts0, u32 len, unsigned int entry) +{ + u32 status = opts0 | len; + + if (entry == NUM_TX_DESC - 1) + status |= RingEnd; + + return cpu_to_le32(status); +} + static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb, u32 *opts) { @@ -5900,7 +5845,7 @@ static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb, for (cur_frag = 0; cur_frag < info->nr_frags; cur_frag++) { const skb_frag_t *frag = info->frags + cur_frag; dma_addr_t mapping; - u32 status, len; + u32 len; void *addr; entry = (entry + 1) % NUM_TX_DESC; @@ -5916,11 +5861,7 @@ static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb, goto err_out; } - /* Anti gcc 2.95.3 bugware (sic) */ - status = opts[0] | len | - (RingEnd * !((entry + 1) % NUM_TX_DESC)); - - txd->opts1 = cpu_to_le32(status); + txd->opts1 = rtl8169_get_txd_opts1(opts[0], len, entry); txd->opts2 = cpu_to_le32(opts[1]); txd->addr = cpu_to_le64(mapping); @@ -6108,6 +6049,15 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp, return true; } +static bool rtl_tx_slots_avail(struct rtl8169_private *tp, + unsigned int nr_frags) +{ + unsigned int slots_avail = tp->dirty_tx + NUM_TX_DESC - tp->cur_tx; + + /* A skbuff with nr_frags needs nr_frags+1 entries in the tx queue */ + return slots_avail > nr_frags; +} + static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev) { @@ -6116,11 +6066,11 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb, struct TxDesc *txd = tp->TxDescArray + entry; struct device *d = tp_to_dev(tp); dma_addr_t mapping; - u32 status, len; - u32 opts[2]; + u32 opts[2], len; + bool stop_queue; int frags; - if (unlikely(!TX_FRAGS_READY_FOR(tp, skb_shinfo(skb)->nr_frags))) { + if (unlikely(!rtl_tx_slots_avail(tp, skb_shinfo(skb)->nr_frags))) { netif_err(tp, drv, dev, "BUG! 
Tx Ring full when queue awake!\n"); goto err_stop_0; } @@ -6159,32 +6109,26 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb, txd->opts2 = cpu_to_le32(opts[1]); - netdev_sent_queue(dev, skb->len); - skb_tx_timestamp(skb); /* Force memory writes to complete before releasing descriptor */ dma_wmb(); - /* Anti gcc 2.95.3 bugware (sic) */ - status = opts[0] | len | (RingEnd * !((entry + 1) % NUM_TX_DESC)); - txd->opts1 = cpu_to_le32(status); + txd->opts1 = rtl8169_get_txd_opts1(opts[0], len, entry); /* Force all memory writes to complete before notifying device */ wmb(); tp->cur_tx += frags + 1; - RTL_W8(tp, TxPoll, NPQ); + stop_queue = !rtl_tx_slots_avail(tp, MAX_SKB_FRAGS); + if (unlikely(stop_queue)) + netif_stop_queue(dev); - mmiowb(); + if (__netdev_sent_queue(dev, skb->len, skb->xmit_more)) + RTL_W8(tp, TxPoll, NPQ); - if (!TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS)) { - /* Avoid wrongly optimistic queue wake-up: rtl_tx thread must - * not miss a ring update when it notices a stopped queue. - */ - smp_wmb(); - netif_stop_queue(dev); + if (unlikely(stop_queue)) { /* Sync with rtl_tx: * - publish queue status and cur_tx ring index (write barrier) * - refresh dirty_tx ring index (read barrier). @@ -6193,7 +6137,7 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb, * can't. */ smp_mb(); - if (TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS)) + if (rtl_tx_slots_avail(tp, MAX_SKB_FRAGS)) netif_wake_queue(dev); } @@ -6257,7 +6201,8 @@ static void rtl8169_pcierr_interrupt(struct net_device *dev) rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING); } -static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp) +static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp, + int budget) { unsigned int dirty_tx, tx_left, bytes_compl = 0, pkts_compl = 0; @@ -6285,7 +6230,7 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp) if (status & LastFrag) { pkts_compl++; bytes_compl += tx_skb->skb->len; - dev_consume_skb_any(tx_skb->skb); + napi_consume_skb(tx_skb->skb, budget); tx_skb->skb = NULL; } dirty_tx++; @@ -6310,7 +6255,7 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp) */ smp_mb(); if (netif_queue_stopped(dev) && - TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS)) { + rtl_tx_slots_avail(tp, MAX_SKB_FRAGS)) { netif_wake_queue(dev); } /* @@ -6460,8 +6405,9 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance) { struct rtl8169_private *tp = dev_instance; u16 status = rtl_get_events(tp); + u16 irq_mask = RTL_R16(tp, IntrMask); - if (status == 0xffff || !(status & (RTL_EVENT_NAPI | tp->event_slow))) + if (status == 0xffff || !(status & irq_mask)) return IRQ_NONE; if (unlikely(status & SYSErr)) { @@ -6528,13 +6474,11 @@ static int rtl8169_poll(struct napi_struct *napi, int budget) work_done = rtl_rx(dev, tp, (u32) budget); - rtl_tx(dev, tp); + rtl_tx(dev, tp, budget); if (work_done < budget) { napi_complete_done(napi, work_done); - - rtl_irq_enable_all(tp); - mmiowb(); + rtl_irq_enable(tp); } return work_done; @@ -6584,7 +6528,7 @@ static int r8169_phy_connect(struct rtl8169_private *tp) phy_set_max_speed(phydev, SPEED_100); /* Ensure to advertise everything, incl. 
pause */ - phydev->advertising = phydev->supported; + linkmode_copy(phydev->advertising, phydev->supported); phy_attached_info(phydev); @@ -6824,8 +6768,7 @@ static void rtl8169_net_suspend(struct net_device *dev) static int rtl8169_suspend(struct device *device) { - struct pci_dev *pdev = to_pci_dev(device); - struct net_device *dev = pci_get_drvdata(pdev); + struct net_device *dev = dev_get_drvdata(device); struct rtl8169_private *tp = netdev_priv(dev); rtl8169_net_suspend(dev); @@ -6855,8 +6798,7 @@ static void __rtl8169_resume(struct net_device *dev) static int rtl8169_resume(struct device *device) { - struct pci_dev *pdev = to_pci_dev(device); - struct net_device *dev = pci_get_drvdata(pdev); + struct net_device *dev = dev_get_drvdata(device); struct rtl8169_private *tp = netdev_priv(dev); clk_prepare_enable(tp->clk); @@ -6869,8 +6811,7 @@ static int rtl8169_resume(struct device *device) static int rtl8169_runtime_suspend(struct device *device) { - struct pci_dev *pdev = to_pci_dev(device); - struct net_device *dev = pci_get_drvdata(pdev); + struct net_device *dev = dev_get_drvdata(device); struct rtl8169_private *tp = netdev_priv(dev); if (!tp->TxDescArray) @@ -6891,8 +6832,7 @@ static int rtl8169_runtime_suspend(struct device *device) static int rtl8169_runtime_resume(struct device *device) { - struct pci_dev *pdev = to_pci_dev(device); - struct net_device *dev = pci_get_drvdata(pdev); + struct net_device *dev = dev_get_drvdata(device); struct rtl8169_private *tp = netdev_priv(dev); rtl_rar_set(tp, dev->dev_addr); @@ -6910,8 +6850,7 @@ static int rtl8169_runtime_resume(struct device *device) static int rtl8169_runtime_idle(struct device *device) { - struct pci_dev *pdev = to_pci_dev(device); - struct net_device *dev = pci_get_drvdata(pdev); + struct net_device *dev = dev_get_drvdata(device); if (!netif_running(dev) || !netif_carrier_ok(dev)) pm_schedule_suspend(device, 10000); @@ -7023,31 +6962,26 @@ static const struct net_device_ops rtl_netdev_ops = { static const struct rtl_cfg_info { void (*hw_start)(struct rtl8169_private *tp); - u16 event_slow; + u16 irq_mask; unsigned int has_gmii:1; const struct rtl_coalesce_info *coalesce_info; - u8 default_ver; } rtl_cfg_infos [] = { [RTL_CFG_0] = { .hw_start = rtl_hw_start_8169, - .event_slow = SYSErr | LinkChg | RxOverflow | RxFIFOOver, + .irq_mask = SYSErr | LinkChg | RxOverflow | RxFIFOOver, .has_gmii = 1, .coalesce_info = rtl_coalesce_info_8169, - .default_ver = RTL_GIGA_MAC_VER_01, }, [RTL_CFG_1] = { .hw_start = rtl_hw_start_8168, - .event_slow = SYSErr | LinkChg | RxOverflow, + .irq_mask = LinkChg | RxOverflow, .has_gmii = 1, .coalesce_info = rtl_coalesce_info_8168_8136, - .default_ver = RTL_GIGA_MAC_VER_11, }, [RTL_CFG_2] = { .hw_start = rtl_hw_start_8101, - .event_slow = SYSErr | LinkChg | RxOverflow | RxFIFOOver | - PCSTimeout, + .irq_mask = LinkChg | RxOverflow | RxFIFOOver, .coalesce_info = rtl_coalesce_info_8168_8136, - .default_ver = RTL_GIGA_MAC_VER_13, } }; @@ -7309,11 +7243,10 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) tp->mmio_addr = pcim_iomap_table(pdev)[region]; - if (!pci_is_pcie(pdev)) - dev_info(&pdev->dev, "not PCI Express\n"); - /* Identify chip attached to board */ - rtl8169_get_mac_version(tp, cfg->default_ver); + rtl8169_get_mac_version(tp); + if (tp->mac_version == RTL_GIGA_MAC_NONE) + return -ENODEV; if (rtl_tbi_enabled(tp)) { dev_err(&pdev->dev, "TBI fiber mode not supported\n"); @@ -7351,8 +7284,6 @@ static int rtl_init_one(struct pci_dev *pdev, const struct 
pci_device_id *ent) rtl_init_mdio_ops(tp); rtl_init_jumbo_ops(tp); - rtl8169_print_mac_version(tp); - chipset = tp->mac_version; rc = rtl_alloc_irq(tp); @@ -7426,7 +7357,7 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) dev->max_mtu = jumbo_max; tp->hw_start = cfg->hw_start; - tp->event_slow = cfg->event_slow; + tp->irq_mask = RTL_EVENT_NAPI | cfg->irq_mask; tp->coalesce_info = cfg->coalesce_info; tp->rtl_fw = RTL_FIRMWARE_UNKNOWN; @@ -7450,9 +7381,9 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) goto err_mdio_unregister; - netif_info(tp, probe, dev, "%s, %pM, XID %08x, IRQ %d\n", + netif_info(tp, probe, dev, "%s, %pM, XID %03x, IRQ %d\n", rtl_chip_infos[chipset].name, dev->dev_addr, - (u32)(RTL_R32(tp, TxConfig) & 0xfcf0f8ff), + (RTL_R32(tp, TxConfig) >> 20) & 0xfcf, pci_irq_vector(pdev, 0)); if (jumbo_max > JUMBO_1K) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index 1c6e4df94f01..ac9195add811 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1032,7 +1032,6 @@ struct ravb_private { phy_interface_t phy_interface; int msg_enable; int speed; - int duplex; int emac_irq; enum ravb_chip_id chip_id; int rx_irqs[NUM_RX_QUEUE]; diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index defed0d0c51d..ffc1ada4e6da 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -82,13 +82,6 @@ static int ravb_config(struct net_device *ndev) return error; } -static void ravb_set_duplex(struct net_device *ndev) -{ - struct ravb_private *priv = netdev_priv(ndev); - - ravb_modify(ndev, ECMR, ECMR_DM, priv->duplex ? ECMR_DM : 0); -} - static void ravb_set_rate(struct net_device *ndev) { struct ravb_private *priv = netdev_priv(ndev); @@ -406,13 +399,11 @@ error: /* E-MAC init function */ static void ravb_emac_init(struct net_device *ndev) { - struct ravb_private *priv = netdev_priv(ndev); - /* Receive frame limit set register */ ravb_write(ndev, ndev->mtu + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN, RFLR); /* EMAC Mode: PAUSE prohibition; Duplex; RX Checksum; TX; RX */ - ravb_write(ndev, ECMR_ZPF | (priv->duplex ? ECMR_DM : 0) | + ravb_write(ndev, ECMR_ZPF | ECMR_DM | (ndev->features & NETIF_F_RXCSUM ? 
ECMR_RCSC : 0) | ECMR_TE | ECMR_RE, ECMR); @@ -995,12 +986,6 @@ static void ravb_adjust_link(struct net_device *ndev) ravb_rcv_snd_disable(ndev); if (phydev->link) { - if (phydev->duplex != priv->duplex) { - new_state = true; - priv->duplex = phydev->duplex; - ravb_set_duplex(ndev); - } - if (phydev->speed != priv->speed) { new_state = true; priv->speed = phydev->speed; @@ -1015,7 +1000,6 @@ static void ravb_adjust_link(struct net_device *ndev) new_state = true; priv->link = 0; priv->speed = 0; - priv->duplex = -1; } /* Enable TX and RX right over here, if E-MAC change is ignored */ @@ -1045,7 +1029,6 @@ static int ravb_phy_init(struct net_device *ndev) priv->link = 0; priv->speed = 0; - priv->duplex = -1; /* Try connecting to PHY */ pn = of_parse_phandle(np, "phy-handle", 0); @@ -1088,6 +1071,10 @@ static int ravb_phy_init(struct net_device *ndev) phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_Pause_BIT); phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_Asym_Pause_BIT); + /* Half Duplex is not supported */ + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT); + phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT); + phy_attached_info(phydev); return 0; diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c index beb06628f22d..6213827e3956 100644 --- a/drivers/net/ethernet/rocker/rocker_main.c +++ b/drivers/net/ethernet/rocker/rocker_main.c @@ -1632,9 +1632,6 @@ rocker_world_port_obj_vlan_add(struct rocker_port *rocker_port, { struct rocker_world_ops *wops = rocker_port->rocker->wops; - if (netif_is_bridge_master(vlan->obj.orig_dev)) - return -EOPNOTSUPP; - if (!wops->port_obj_vlan_add) return -EOPNOTSUPP; @@ -2145,8 +2142,6 @@ static int rocker_port_obj_del(struct net_device *dev, static const struct switchdev_ops rocker_port_switchdev_ops = { .switchdev_port_attr_get = rocker_port_attr_get, .switchdev_port_attr_set = rocker_port_attr_set, - .switchdev_port_obj_add = rocker_port_obj_add, - .switchdev_port_obj_del = rocker_port_obj_del, }; struct rocker_fib_event_work { @@ -2812,12 +2807,54 @@ static int rocker_switchdev_event(struct notifier_block *unused, return NOTIFY_DONE; } +static int +rocker_switchdev_port_obj_event(unsigned long event, struct net_device *netdev, + struct switchdev_notifier_port_obj_info *port_obj_info) +{ + int err = -EOPNOTSUPP; + + switch (event) { + case SWITCHDEV_PORT_OBJ_ADD: + err = rocker_port_obj_add(netdev, port_obj_info->obj, + port_obj_info->trans); + break; + case SWITCHDEV_PORT_OBJ_DEL: + err = rocker_port_obj_del(netdev, port_obj_info->obj); + break; + } + + port_obj_info->handled = true; + return notifier_from_errno(err); +} + +static int rocker_switchdev_blocking_event(struct notifier_block *unused, + unsigned long event, void *ptr) +{ + struct net_device *dev = switchdev_notifier_info_to_dev(ptr); + + if (!rocker_port_dev_check(dev)) + return NOTIFY_DONE; + + switch (event) { + case SWITCHDEV_PORT_OBJ_ADD: + case SWITCHDEV_PORT_OBJ_DEL: + return rocker_switchdev_port_obj_event(event, dev, ptr); + } + + return NOTIFY_DONE; +} + static struct notifier_block rocker_switchdev_notifier = { .notifier_call = rocker_switchdev_event, }; +static struct notifier_block rocker_switchdev_blocking_notifier = { + .notifier_call = rocker_switchdev_blocking_event, +}; + static int rocker_probe(struct pci_dev *pdev, const struct pci_device_id *id) { + struct notifier_block *nb; struct rocker *rocker; int err; @@ -2933,6 +2970,13 @@ static int rocker_probe(struct pci_dev *pdev, const struct 
pci_device_id *id) goto err_register_switchdev_notifier; } + nb = &rocker_switchdev_blocking_notifier; + err = register_switchdev_blocking_notifier(nb); + if (err) { + dev_err(&pdev->dev, "Failed to register switchdev blocking notifier\n"); + goto err_register_switchdev_blocking_notifier; + } + rocker->hw.id = rocker_read64(rocker, SWITCH_ID); dev_info(&pdev->dev, "Rocker switch with id %*phN\n", @@ -2940,6 +2984,8 @@ static int rocker_probe(struct pci_dev *pdev, const struct pci_device_id *id) return 0; +err_register_switchdev_blocking_notifier: + unregister_switchdev_notifier(&rocker_switchdev_notifier); err_register_switchdev_notifier: unregister_fib_notifier(&rocker->fib_nb); err_register_fib_notifier: @@ -2971,6 +3017,10 @@ err_pci_enable_device: static void rocker_remove(struct pci_dev *pdev) { struct rocker *rocker = pci_get_drvdata(pdev); + struct notifier_block *nb; + + nb = &rocker_switchdev_blocking_notifier; + unregister_switchdev_blocking_notifier(nb); unregister_switchdev_notifier(&rocker_switchdev_notifier); unregister_fib_notifier(&rocker->fib_nb); diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c index 7eeac3d6cfe8..b6a50058bb8d 100644 --- a/drivers/net/ethernet/sfc/ef10.c +++ b/drivers/net/ethernet/sfc/ef10.c @@ -6041,6 +6041,10 @@ static const struct efx_ef10_nvram_type_info efx_ef10_nvram_types[] = { { NVRAM_PARTITION_TYPE_EXPROM_CONFIG_PORT3, 0, 3, "sfc_exp_rom_cfg" }, { NVRAM_PARTITION_TYPE_LICENSE, 0, 0, "sfc_license" }, { NVRAM_PARTITION_TYPE_PHY_MIN, 0xff, 0, "sfc_phy_fw" }, + /* MUM and SUC firmware share the same partition type */ + { NVRAM_PARTITION_TYPE_MUM_FIRMWARE, 0, 0, "sfc_mumfw" }, + { NVRAM_PARTITION_TYPE_EXPANSION_UEFI, 0, 0, "sfc_uefi" }, + { NVRAM_PARTITION_TYPE_STATUS, 0, 0, "sfc_status" } }; static int efx_ef10_mtd_probe_partition(struct efx_nic *efx, @@ -6091,6 +6095,9 @@ static int efx_ef10_mtd_probe_partition(struct efx_nic *efx, part->common.mtd.flags = MTD_CAP_NORFLASH; part->common.mtd.size = size; part->common.mtd.erasesize = erase_size; + /* sfc_status is read-only */ + if (!erase_size) + part->common.mtd.flags |= MTD_NO_ERASE; return 0; } diff --git a/drivers/net/ethernet/sfc/ethtool.c b/drivers/net/ethernet/sfc/ethtool.c index 3143588ffd77..600d7b895cf2 100644 --- a/drivers/net/ethernet/sfc/ethtool.c +++ b/drivers/net/ethernet/sfc/ethtool.c @@ -539,7 +539,7 @@ static void efx_ethtool_self_test(struct net_device *net_dev, /* We need rx buffers and interrupts. */ already_up = (efx->net_dev->flags & IFF_UP); if (!already_up) { - rc = dev_open(efx->net_dev); + rc = dev_open(efx->net_dev, NULL); if (rc) { netif_err(efx, drv, efx->net_dev, "failed opening device.\n"); diff --git a/drivers/net/ethernet/sfc/falcon/ethtool.c b/drivers/net/ethernet/sfc/falcon/ethtool.c index 1ccdb7a82e2a..72cedec945c1 100644 --- a/drivers/net/ethernet/sfc/falcon/ethtool.c +++ b/drivers/net/ethernet/sfc/falcon/ethtool.c @@ -517,7 +517,7 @@ static void ef4_ethtool_self_test(struct net_device *net_dev, /* We need rx buffers and interrupts. 
*/ already_up = (efx->net_dev->flags & IFF_UP); if (!already_up) { - rc = dev_open(efx->net_dev); + rc = dev_open(efx->net_dev, NULL); if (rc) { netif_err(efx, drv, efx->net_dev, "failed opening device.\n"); diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c index c3ad564ac4c0..22eb059086f7 100644 --- a/drivers/net/ethernet/sfc/tx.c +++ b/drivers/net/ethernet/sfc/tx.c @@ -553,13 +553,10 @@ netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb) if (!data_mapped && (efx_tx_map_data(tx_queue, skb, segments))) goto err; - /* Update BQL */ - netdev_tx_sent_queue(tx_queue->core_txq, skb_len); - efx_tx_maybe_stop_queue(tx_queue); /* Pass off to hardware */ - if (!xmit_more || netif_xmit_stopped(tx_queue->core_txq)) { + if (__netdev_tx_sent_queue(tx_queue->core_txq, skb_len, xmit_more)) { struct efx_tx_queue *txq2 = efx_tx_queue_partner(tx_queue); /* There could be packets left on the partner queue if those diff --git a/drivers/net/ethernet/smsc/Kconfig b/drivers/net/ethernet/smsc/Kconfig index 358820282ef0..79612060d0ba 100644 --- a/drivers/net/ethernet/smsc/Kconfig +++ b/drivers/net/ethernet/smsc/Kconfig @@ -27,7 +27,7 @@ config SMC9194 option if you have a DELL laptop with the docking station, or another SMC9192/9194 based chipset. Say Y if you want it compiled into the kernel, and read the file - <file:Documentation/networking/smc9.txt>. + <file:Documentation/networking/device_drivers/smsc/smc9.txt>. To compile this driver as a module, choose M here. The module will be called smc9194. @@ -43,7 +43,7 @@ config SMC91X This is a driver for SMC's 91x series of Ethernet chipsets, including the SMC91C94 and the SMC91C111. Say Y if you want it compiled into the kernel, and read the file - <file:Documentation/networking/smc9.txt>. + <file:Documentation/networking/device_drivers/smsc/smc9.txt>. This driver is also available as a module ( = code which can be inserted in and removed from the running kernel whenever you want).
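
[The sfc tx.c hunk above and the earlier r8169 change adopt the same TX-path idiom: instead of calling netdev_tx_sent_queue() unconditionally and then testing xmit_more by hand, the driver lets the combined BQL helper decide whether the doorbell write is needed. A minimal sketch of that pattern follows; the foo_* names are hypothetical placeholders, and only __netdev_sent_queue() and the netdev_tx_t plumbing are real kernel API, as used in the hunks above.]

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    struct foo_priv { void __iomem *doorbell; };

    static void foo_queue_skb(struct foo_priv *priv, struct sk_buff *skb)
    {
            /* map DMA and fill TX descriptors here (driver specific) */
    }

    static void foo_kick(struct foo_priv *priv)
    {
            /* single doorbell/TxPoll register write here (driver specific) */
    }

    static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            struct foo_priv *priv = netdev_priv(dev);

            foo_queue_skb(priv, skb);

            /* Accounts skb->len in BQL and returns true when the doorbell
             * must be written now: either xmit_more is not set, or BQL just
             * stopped the queue.  Skipping the write while xmit_more is set
             * batches several packets per MMIO access.
             */
            if (__netdev_sent_queue(dev, skb->len, skb->xmit_more))
                    foo_kick(priv);

            return NETDEV_TX_OK;
    }
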
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index d9d0d03e4ce7..05a0948ad929 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -234,6 +234,9 @@ #define DESC_NUM 256 +#define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN) +#define NETSEC_RX_BUF_SZ 1536 + #define DESC_SZ sizeof(struct netsec_de) #define NETSEC_F_NETSEC_VER_MAJOR_NUM(x) ((x) & 0xffff0000) @@ -254,7 +257,6 @@ struct netsec_desc_ring { dma_addr_t desc_dma; struct netsec_desc *desc; void *vaddr; - u16 pkt_cnt; u16 head, tail; }; @@ -571,34 +573,10 @@ static const struct ethtool_ops netsec_ethtool_ops = { /************* NETDEV_OPS FOLLOW *************/ -static struct sk_buff *netsec_alloc_skb(struct netsec_priv *priv, - struct netsec_desc *desc) -{ - struct sk_buff *skb; - - if (device_get_dma_attr(priv->dev) == DEV_DMA_COHERENT) { - skb = netdev_alloc_skb_ip_align(priv->ndev, desc->len); - } else { - desc->len = L1_CACHE_ALIGN(desc->len); - skb = netdev_alloc_skb(priv->ndev, desc->len); - } - if (!skb) - return NULL; - - desc->addr = skb->data; - desc->dma_addr = dma_map_single(priv->dev, desc->addr, desc->len, - DMA_FROM_DEVICE); - if (dma_mapping_error(priv->dev, desc->dma_addr)) { - dev_kfree_skb_any(skb); - return NULL; - } - return skb; -} static void netsec_set_rx_de(struct netsec_priv *priv, struct netsec_desc_ring *dring, u16 idx, - const struct netsec_desc *desc, - struct sk_buff *skb) + const struct netsec_desc *desc) { struct netsec_de *de = dring->vaddr + DESC_SZ * idx; u32 attr = (1 << NETSEC_RX_PKT_OWN_FIELD) | @@ -617,88 +595,28 @@ static void netsec_set_rx_de(struct netsec_priv *priv, dring->desc[idx].dma_addr = desc->dma_addr; dring->desc[idx].addr = desc->addr; dring->desc[idx].len = desc->len; - dring->desc[idx].skb = skb; -} - -static struct sk_buff *netsec_get_rx_de(struct netsec_priv *priv, - struct netsec_desc_ring *dring, - u16 idx, - struct netsec_rx_pkt_info *rxpi, - struct netsec_desc *desc, u16 *len) -{ - struct netsec_de de = {}; - - memcpy(&de, dring->vaddr + DESC_SZ * idx, DESC_SZ); - - *len = de.buf_len_info >> 16; - - rxpi->err_flag = (de.attr >> NETSEC_RX_PKT_ER_FIELD) & 1; - rxpi->rx_cksum_result = (de.attr >> NETSEC_RX_PKT_CO_FIELD) & 3; - rxpi->err_code = (de.attr >> NETSEC_RX_PKT_ERR_FIELD) & - NETSEC_RX_PKT_ERR_MASK; - *desc = dring->desc[idx]; - return desc->skb; -} - -static struct sk_buff *netsec_get_rx_pkt_data(struct netsec_priv *priv, - struct netsec_rx_pkt_info *rxpi, - struct netsec_desc *desc, - u16 *len) -{ - struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; - struct sk_buff *tmp_skb, *skb = NULL; - struct netsec_desc td; - int tail; - - *rxpi = (struct netsec_rx_pkt_info){}; - - td.len = priv->ndev->mtu + 22; - - tmp_skb = netsec_alloc_skb(priv, &td); - - tail = dring->tail; - - if (!tmp_skb) { - netsec_set_rx_de(priv, dring, tail, &dring->desc[tail], - dring->desc[tail].skb); - } else { - skb = netsec_get_rx_de(priv, dring, tail, rxpi, desc, len); - netsec_set_rx_de(priv, dring, tail, &td, tmp_skb); - } - - /* move tail ahead */ - dring->tail = (dring->tail + 1) % DESC_NUM; - - return skb; } -static int netsec_clean_tx_dring(struct netsec_priv *priv, int budget) +static bool netsec_clean_tx_dring(struct netsec_priv *priv) { struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX]; unsigned int pkts, bytes; - - dring->pkt_cnt += netsec_read(priv, NETSEC_REG_NRM_TX_DONE_PKTCNT); - - if (dring->pkt_cnt < budget) - budget = dring->pkt_cnt; + struct netsec_de 
*entry; + int tail = dring->tail; + int cnt = 0; pkts = 0; bytes = 0; + entry = dring->vaddr + DESC_SZ * tail; - while (pkts < budget) { + while (!(entry->attr & (1U << NETSEC_TX_SHIFT_OWN_FIELD)) && + cnt < DESC_NUM) { struct netsec_desc *desc; - struct netsec_de *entry; - int tail, eop; - - tail = dring->tail; - - /* move tail ahead */ - dring->tail = (tail + 1) % DESC_NUM; + int eop; desc = &dring->desc[tail]; - entry = dring->vaddr + DESC_SZ * tail; - eop = (entry->attr >> NETSEC_TX_LAST) & 1; + dma_rmb(); dma_unmap_single(priv->dev, desc->dma_addr, desc->len, DMA_TO_DEVICE); @@ -707,33 +625,94 @@ static int netsec_clean_tx_dring(struct netsec_priv *priv, int budget) bytes += desc->skb->len; dev_kfree_skb(desc->skb); } + /* clean up so netsec_uninit_pkt_dring() won't free the skb + * again + */ *desc = (struct netsec_desc){}; + + /* entry->attr is not going to be accessed by the NIC until + * netsec_set_tx_de() is called. No need for a dma_wmb() here + */ + entry->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD; + /* move tail ahead */ + dring->tail = (tail + 1) % DESC_NUM; + + tail = dring->tail; + entry = dring->vaddr + DESC_SZ * tail; + cnt++; } - dring->pkt_cnt -= budget; - priv->ndev->stats.tx_packets += budget; + if (!cnt) + return false; + + /* reading the register clears the irq */ + netsec_read(priv, NETSEC_REG_NRM_TX_DONE_PKTCNT); + + priv->ndev->stats.tx_packets += cnt; priv->ndev->stats.tx_bytes += bytes; - netdev_completed_queue(priv->ndev, budget, bytes); + netdev_completed_queue(priv->ndev, cnt, bytes); - return budget; + return true; } -static int netsec_process_tx(struct netsec_priv *priv, int budget) +static void netsec_process_tx(struct netsec_priv *priv) { struct net_device *ndev = priv->ndev; - int new, done = 0; + bool cleaned; - do { - new = netsec_clean_tx_dring(priv, budget); - done += new; - budget -= new; - } while (new); + cleaned = netsec_clean_tx_dring(priv); - if (done && netif_queue_stopped(ndev)) + if (cleaned && netif_queue_stopped(ndev)) { + /* Make sure we update the value, anyone stopping the queue + * after this will read the proper consumer idx + */ + smp_wmb(); netif_wake_queue(ndev); + } +} - return done; +static void *netsec_alloc_rx_data(struct netsec_priv *priv, + dma_addr_t *dma_handle, u16 *desc_len) +{ + size_t total_len = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + size_t payload_len = NETSEC_RX_BUF_SZ; + dma_addr_t mapping; + void *buf; + + total_len += SKB_DATA_ALIGN(payload_len + NETSEC_SKB_PAD); + + buf = napi_alloc_frag(total_len); + if (!buf) + return NULL; + + mapping = dma_map_single(priv->dev, buf + NETSEC_SKB_PAD, payload_len, + DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(priv->dev, mapping))) + goto err_out; + + *dma_handle = mapping; + *desc_len = payload_len; + + return buf; + +err_out: + skb_free_frag(buf); + return NULL; +} + +static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num) +{ + struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; + u16 idx = from; + + while (num) { + netsec_set_rx_de(priv, dring, idx, &dring->desc[idx]); + idx++; + if (idx >= DESC_NUM) + idx = 0; + num--; + } } static int netsec_process_rx(struct netsec_priv *priv, int budget) @@ -741,14 +720,17 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; struct net_device *ndev = priv->ndev; struct netsec_rx_pkt_info rx_info; - int done = 0; - struct netsec_desc desc; struct sk_buff *skb; - u16 len; + int done = 0; while (done < budget) { 
u16 idx = dring->tail; struct netsec_de *de = dring->vaddr + (DESC_SZ * idx); + struct netsec_desc *desc = &dring->desc[idx]; + u16 pkt_len, desc_len; + dma_addr_t dma_handle; + void *buf_addr; + u32 truesize; if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) { /* reading the register clears the irq */ @@ -762,18 +744,59 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) */ dma_rmb(); done++; - skb = netsec_get_rx_pkt_data(priv, &rx_info, &desc, &len); - if (unlikely(!skb) || rx_info.err_flag) { + + pkt_len = de->buf_len_info >> 16; + rx_info.err_code = (de->attr >> NETSEC_RX_PKT_ERR_FIELD) & + NETSEC_RX_PKT_ERR_MASK; + rx_info.err_flag = (de->attr >> NETSEC_RX_PKT_ER_FIELD) & 1; + if (rx_info.err_flag) { netif_err(priv, drv, priv->ndev, - "%s: rx fail err(%d)\n", - __func__, rx_info.err_code); + "%s: rx fail err(%d)\n", __func__, + rx_info.err_code); ndev->stats.rx_dropped++; + dring->tail = (dring->tail + 1) % DESC_NUM; + /* reuse buffer page frag */ + netsec_rx_fill(priv, idx, 1); continue; } + rx_info.rx_cksum_result = + (de->attr >> NETSEC_RX_PKT_CO_FIELD) & 3; - dma_unmap_single(priv->dev, desc.dma_addr, desc.len, - DMA_FROM_DEVICE); - skb_put(skb, len); + /* allocate a fresh buffer and map it to the hardware. + * This will eventually replace the old buffer in the hardware + */ + buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len); + if (unlikely(!buf_addr)) + break; + + dma_sync_single_for_cpu(priv->dev, desc->dma_addr, pkt_len, + DMA_FROM_DEVICE); + prefetch(desc->addr); + + truesize = SKB_DATA_ALIGN(desc->len + NETSEC_SKB_PAD) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + skb = build_skb(desc->addr, truesize); + if (unlikely(!skb)) { + /* free the newly allocated buffer, we are not going to + * use it + */ + dma_unmap_single(priv->dev, dma_handle, desc_len, + DMA_FROM_DEVICE); + skb_free_frag(buf_addr); + netif_err(priv, drv, priv->ndev, + "rx failed to build skb\n"); + break; + } + dma_unmap_single_attrs(priv->dev, desc->dma_addr, desc->len, + DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); + + /* Update the descriptor with the new buffer we allocated */ + desc->len = desc_len; + desc->dma_addr = dma_handle; + desc->addr = buf_addr; + + skb_reserve(skb, NETSEC_SKB_PAD); + skb_put(skb, pkt_len); skb->protocol = eth_type_trans(skb, priv->ndev); if (priv->rx_cksum_offload_flag && @@ -782,8 +805,11 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) if (napi_gro_receive(&priv->napi, skb) != GRO_DROP) { ndev->stats.rx_packets++; - ndev->stats.rx_bytes += len; + ndev->stats.rx_bytes += pkt_len; } + + netsec_rx_fill(priv, idx, 1); + dring->tail = (dring->tail + 1) % DESC_NUM; } return done; @@ -792,24 +818,17 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) static int netsec_napi_poll(struct napi_struct *napi, int budget) { struct netsec_priv *priv; - int tx, rx, done, todo; + int rx, done, todo; priv = container_of(napi, struct netsec_priv, napi); + netsec_process_tx(priv); + todo = budget; do { - if (!todo) - break; - - tx = netsec_process_tx(priv, todo); - todo -= tx; - - if (!todo) - break; - rx = netsec_process_rx(priv, todo); todo -= rx; - } while (rx || tx); + } while (rx); done = budget - todo; @@ -861,6 +880,41 @@ static void netsec_set_tx_de(struct netsec_priv *priv, dring->head = (dring->head + 1) % DESC_NUM; } +static int netsec_desc_used(struct netsec_desc_ring *dring) +{ + int used; + + if (dring->head >= dring->tail) + used = dring->head - dring->tail; + else + used = dring->head + DESC_NUM - dring->tail; 
+ + return used; +} + +static int netsec_check_stop_tx(struct netsec_priv *priv, int used) +{ + struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX]; + + /* keep tail from touching the queue */ + if (DESC_NUM - used < 2) { + netif_stop_queue(priv->ndev); + + /* Make sure we read the updated value in case + * descriptors got freed + */ + smp_rmb(); + + used = netsec_desc_used(dring); + if (DESC_NUM - used < 2) + return NETDEV_TX_BUSY; + + netif_wake_queue(priv->ndev); + } + + return 0; +} + static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb, struct net_device *ndev) { @@ -871,16 +925,10 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb, u16 tso_seg_len = 0; int filled; - /* differentiate between full/emtpy ring */ - if (dring->head >= dring->tail) - filled = dring->head - dring->tail; - else - filled = dring->head + DESC_NUM - dring->tail; - - if (DESC_NUM - filled < 2) { /* if less than 2 available */ - netif_err(priv, drv, priv->ndev, "%s: TxQFull!\n", __func__); - netif_stop_queue(priv->ndev); - dma_wmb(); + filled = netsec_desc_used(dring); + if (netsec_check_stop_tx(priv, filled)) { + net_warn_ratelimited("%s %s Tx queue full\n", + dev_name(priv->dev), ndev->name); return NETDEV_TX_BUSY; } @@ -946,7 +994,10 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id) dma_unmap_single(priv->dev, desc->dma_addr, desc->len, id == NETSEC_RING_RX ? DMA_FROM_DEVICE : DMA_TO_DEVICE); - dev_kfree_skb(desc->skb); + if (id == NETSEC_RING_RX) + skb_free_frag(desc->addr); + else if (id == NETSEC_RING_TX) + dev_kfree_skb(desc->skb); } memset(dring->desc, 0, sizeof(struct netsec_desc) * DESC_NUM); @@ -954,7 +1005,6 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id) dring->head = 0; dring->tail = 0; - dring->pkt_cnt = 0; if (id == NETSEC_RING_TX) netdev_reset_queue(priv->ndev); @@ -977,47 +1027,64 @@ static void netsec_free_dring(struct netsec_priv *priv, int id) static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id) { struct netsec_desc_ring *dring = &priv->desc_ring[id]; - int ret = 0; + int i; dring->vaddr = dma_zalloc_coherent(priv->dev, DESC_SZ * DESC_NUM, &dring->desc_dma, GFP_KERNEL); - if (!dring->vaddr) { - ret = -ENOMEM; + if (!dring->vaddr) goto err; - } dring->desc = kcalloc(DESC_NUM, sizeof(*dring->desc), GFP_KERNEL); - if (!dring->desc) { - ret = -ENOMEM; + if (!dring->desc) goto err; + + if (id == NETSEC_RING_TX) { + for (i = 0; i < DESC_NUM; i++) { + struct netsec_de *de; + + de = dring->vaddr + (DESC_SZ * i); + /* de->attr is not going to be accessed by the NIC + * until netsec_set_tx_de() is called. 
+ * No need for a dma_wmb() here + */ + de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD; + } } return 0; err: netsec_free_dring(priv, id); - return ret; + return -ENOMEM; } static int netsec_setup_rx_dring(struct netsec_priv *priv) { struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; - struct netsec_desc desc; - struct sk_buff *skb; - int n; + int i; - desc.len = priv->ndev->mtu + 22; + for (i = 0; i < DESC_NUM; i++) { + struct netsec_desc *desc = &dring->desc[i]; + dma_addr_t dma_handle; + void *buf; + u16 len; - for (n = 0; n < DESC_NUM; n++) { - skb = netsec_alloc_skb(priv, &desc); - if (!skb) { + buf = netsec_alloc_rx_data(priv, &dma_handle, &len); + if (!buf) { netsec_uninit_pkt_dring(priv, NETSEC_RING_RX); - return -ENOMEM; + goto err_out; } - netsec_set_rx_de(priv, dring, n, &desc, skb); + desc->dma_addr = dma_handle; + desc->addr = buf; + desc->len = len; } + netsec_rx_fill(priv, 0, DESC_NUM); + return 0; + +err_out: + return -ENOMEM; } static int netsec_netdev_load_ucode_region(struct netsec_priv *priv, u32 reg, @@ -1377,6 +1444,8 @@ static int netsec_netdev_init(struct net_device *ndev) int ret; u16 data; + BUILD_BUG_ON_NOT_POWER_OF_2(DESC_NUM); + ret = netsec_alloc_dring(priv, NETSEC_RING_TX); if (ret) return ret; diff --git a/drivers/net/ethernet/socionext/sni_ave.c b/drivers/net/ethernet/socionext/sni_ave.c index 7c7cd9d94bcc..bb6d5fb73035 100644 --- a/drivers/net/ethernet/socionext/sni_ave.c +++ b/drivers/net/ethernet/socionext/sni_ave.c @@ -262,6 +262,7 @@ struct ave_private { struct regmap *regmap; unsigned int pinmode_mask; unsigned int pinmode_val; + u32 wolopts; /* stats */ struct ave_stats stats_rx; @@ -1119,7 +1120,7 @@ static void ave_phy_adjust_link(struct net_device *ndev) if (phydev->asym_pause) rmt_adv |= LPA_PAUSE_ASYM; - lcl_adv = ethtool_adv_to_lcl_adv_t(phydev->advertising); + lcl_adv = linkmode_adv_to_lcl_adv_t(phydev->advertising); cap = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv); if (cap & FLOW_CTRL_TX) txcr |= AVE_TXCR_FLOCTR; @@ -1210,9 +1211,13 @@ static int ave_init(struct net_device *ndev) priv->phydev = phydev; - phy_ethtool_get_wol(phydev, &wol); + ave_ethtool_get_wol(ndev, &wol); device_set_wakeup_capable(&ndev->dev, !!wol.supported); + /* set wol initial state disabled */ + wol.wolopts = 0; + ave_ethtool_set_wol(ndev, &wol); + if (!phy_interface_is_rgmii(phydev)) phy_set_max_speed(phydev, SPEED_100); @@ -1737,6 +1742,58 @@ static int ave_remove(struct platform_device *pdev) return 0; } +#ifdef CONFIG_PM_SLEEP +static int ave_suspend(struct device *dev) +{ + struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; + struct net_device *ndev = dev_get_drvdata(dev); + struct ave_private *priv = netdev_priv(ndev); + int ret = 0; + + if (netif_running(ndev)) { + ret = ave_stop(ndev); + netif_device_detach(ndev); + } + + ave_ethtool_get_wol(ndev, &wol); + priv->wolopts = wol.wolopts; + + return ret; +} + +static int ave_resume(struct device *dev) +{ + struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; + struct net_device *ndev = dev_get_drvdata(dev); + struct ave_private *priv = netdev_priv(ndev); + int ret = 0; + + ave_global_reset(ndev); + + ave_ethtool_get_wol(ndev, &wol); + wol.wolopts = priv->wolopts; + ave_ethtool_set_wol(ndev, &wol); + + if (ndev->phydev) { + ret = phy_resume(ndev->phydev); + if (ret) + return ret; + } + + if (netif_running(ndev)) { + ret = ave_open(ndev); + netif_device_attach(ndev); + } + + return ret; +} + +static SIMPLE_DEV_PM_OPS(ave_pm_ops, ave_suspend, ave_resume); +#define AVE_PM_OPS (&ave_pm_ops) +#else +#define 
AVE_PM_OPS NULL +#endif + static int ave_pro4_get_pinmode(struct ave_private *priv, phy_interface_t phy_mode, u32 arg) { @@ -1911,6 +1968,7 @@ static struct platform_driver ave_driver = { .remove = ave_remove, .driver = { .name = "ave", + .pm = AVE_PM_OPS, .of_match_table = of_ave_match, }, }; diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig index 324049eebb9b..6209cc1fb305 100644 --- a/drivers/net/ethernet/stmicro/stmmac/Kconfig +++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig @@ -75,6 +75,14 @@ config DWMAC_LPC18XX ---help--- Support for NXP LPC18xx/43xx DWMAC Ethernet. +config DWMAC_MEDIATEK + tristate "MediaTek MT27xx GMAC support" + depends on OF && (ARCH_MEDIATEK || COMPILE_TEST) + help + Support for MediaTek GMAC Ethernet controller. + + This selects the MT2712 SoC support for the stmmac driver. + config DWMAC_MESON tristate "Amlogic Meson dwmac support" default ARCH_MESON diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile index 99967a80a8c8..bf09701d2623 100644 --- a/drivers/net/ethernet/stmicro/stmmac/Makefile +++ b/drivers/net/ethernet/stmicro/stmmac/Makefile @@ -13,6 +13,7 @@ obj-$(CONFIG_STMMAC_PLATFORM) += stmmac-platform.o obj-$(CONFIG_DWMAC_ANARION) += dwmac-anarion.o obj-$(CONFIG_DWMAC_IPQ806X) += dwmac-ipq806x.o obj-$(CONFIG_DWMAC_LPC18XX) += dwmac-lpc18xx.o +obj-$(CONFIG_DWMAC_MEDIATEK) += dwmac-mediatek.o obj-$(CONFIG_DWMAC_MESON) += dwmac-meson.o dwmac-meson8b.o obj-$(CONFIG_DWMAC_OXNAS) += dwmac-oxnas.o obj-$(CONFIG_DWMAC_ROCKCHIP) += dwmac-rk.o diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c new file mode 100644 index 000000000000..bf2562995fc8 --- /dev/null +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c @@ -0,0 +1,390 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2018 MediaTek Inc. 
+ */ +#include <linux/bitfield.h> +#include <linux/clk.h> +#include <linux/mfd/syscon.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/of_net.h> +#include <linux/regmap.h> +#include <linux/stmmac.h> + +#include "stmmac.h" +#include "stmmac_platform.h" + +/* Peri Configuration register for mt2712 */ +#define PERI_ETH_PHY_INTF_SEL 0x418 +#define PHY_INTF_MII 0 +#define PHY_INTF_RGMII 1 +#define PHY_INTF_RMII 4 +#define RMII_CLK_SRC_RXC BIT(4) +#define RMII_CLK_SRC_INTERNAL BIT(5) + +#define PERI_ETH_DLY 0x428 +#define ETH_DLY_GTXC_INV BIT(6) +#define ETH_DLY_GTXC_ENABLE BIT(5) +#define ETH_DLY_GTXC_STAGES GENMASK(4, 0) +#define ETH_DLY_TXC_INV BIT(20) +#define ETH_DLY_TXC_ENABLE BIT(19) +#define ETH_DLY_TXC_STAGES GENMASK(18, 14) +#define ETH_DLY_RXC_INV BIT(13) +#define ETH_DLY_RXC_ENABLE BIT(12) +#define ETH_DLY_RXC_STAGES GENMASK(11, 7) + +#define PERI_ETH_DLY_FINE 0x800 +#define ETH_RMII_DLY_TX_INV BIT(2) +#define ETH_FINE_DLY_GTXC BIT(1) +#define ETH_FINE_DLY_RXC BIT(0) + +struct mac_delay_struct { + u32 tx_delay; + u32 rx_delay; + bool tx_inv; + bool rx_inv; +}; + +struct mediatek_dwmac_plat_data { + const struct mediatek_dwmac_variant *variant; + struct mac_delay_struct mac_delay; + struct clk_bulk_data *clks; + struct device_node *np; + struct regmap *peri_regmap; + struct device *dev; + int phy_mode; + bool rmii_rxc; +}; + +struct mediatek_dwmac_variant { + int (*dwmac_set_phy_interface)(struct mediatek_dwmac_plat_data *plat); + int (*dwmac_set_delay)(struct mediatek_dwmac_plat_data *plat); + + /* clock ids to be requested */ + const char * const *clk_list; + int num_clks; + + u32 dma_bit_mask; + u32 rx_delay_max; + u32 tx_delay_max; +}; + +/* list of clocks required for mac */ +static const char * const mt2712_dwmac_clk_l[] = { + "axi", "apb", "mac_main", "ptp_ref" +}; + +static int mt2712_set_interface(struct mediatek_dwmac_plat_data *plat) +{ + int rmii_rxc = plat->rmii_rxc ?
RMII_CLK_SRC_RXC : 0; + u32 intf_val = 0; + + /* select phy interface in top control domain */ + switch (plat->phy_mode) { + case PHY_INTERFACE_MODE_MII: + intf_val |= PHY_INTF_MII; + break; + case PHY_INTERFACE_MODE_RMII: + intf_val |= (PHY_INTF_RMII | rmii_rxc); + break; + case PHY_INTERFACE_MODE_RGMII: + case PHY_INTERFACE_MODE_RGMII_TXID: + case PHY_INTERFACE_MODE_RGMII_RXID: + case PHY_INTERFACE_MODE_RGMII_ID: + intf_val |= PHY_INTF_RGMII; + break; + default: + dev_err(plat->dev, "phy interface not supported\n"); + return -EINVAL; + } + + regmap_write(plat->peri_regmap, PERI_ETH_PHY_INTF_SEL, intf_val); + + return 0; +} + +static void mt2712_delay_ps2stage(struct mediatek_dwmac_plat_data *plat) +{ + struct mac_delay_struct *mac_delay = &plat->mac_delay; + + switch (plat->phy_mode) { + case PHY_INTERFACE_MODE_MII: + case PHY_INTERFACE_MODE_RMII: + /* 550ps per stage for MII/RMII */ + mac_delay->tx_delay /= 550; + mac_delay->rx_delay /= 550; + break; + case PHY_INTERFACE_MODE_RGMII: + case PHY_INTERFACE_MODE_RGMII_TXID: + case PHY_INTERFACE_MODE_RGMII_RXID: + case PHY_INTERFACE_MODE_RGMII_ID: + /* 170ps per stage for RGMII */ + mac_delay->tx_delay /= 170; + mac_delay->rx_delay /= 170; + break; + default: + dev_err(plat->dev, "phy interface not supported\n"); + break; + } +} + +static int mt2712_set_delay(struct mediatek_dwmac_plat_data *plat) +{ + struct mac_delay_struct *mac_delay = &plat->mac_delay; + u32 delay_val = 0, fine_val = 0; + + mt2712_delay_ps2stage(plat); + + switch (plat->phy_mode) { + case PHY_INTERFACE_MODE_MII: + delay_val |= FIELD_PREP(ETH_DLY_TXC_ENABLE, !!mac_delay->tx_delay); + delay_val |= FIELD_PREP(ETH_DLY_TXC_STAGES, mac_delay->tx_delay); + delay_val |= FIELD_PREP(ETH_DLY_TXC_INV, mac_delay->tx_inv); + + delay_val |= FIELD_PREP(ETH_DLY_RXC_ENABLE, !!mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_RXC_STAGES, mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_RXC_INV, mac_delay->rx_inv); + break; + case PHY_INTERFACE_MODE_RMII: + /* the rmii reference clock is from external phy, + * and the property "rmii_rxc" indicates which pin(TXC/RXC) + * the reference clk is connected to. The reference clock is a + * received signal, so rx_delay/rx_inv are used to indicate + * the reference clock timing adjustment + */ + if (plat->rmii_rxc) { + /* the rmii reference clock from outside is connected + * to RXC pin, the reference clock will be adjusted + * by RXC delay macro circuit. + */ + delay_val |= FIELD_PREP(ETH_DLY_RXC_ENABLE, !!mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_RXC_STAGES, mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_RXC_INV, mac_delay->rx_inv); + } else { + /* the rmii reference clock from outside is connected + * to TXC pin, the reference clock will be adjusted + * by TXC delay macro circuit. 
+ */ + delay_val |= FIELD_PREP(ETH_DLY_TXC_ENABLE, !!mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_TXC_STAGES, mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_TXC_INV, mac_delay->rx_inv); + } + /* tx_inv will invert the tx clock inside mac relative to + * reference clock from external phy, + * and this bit is located in the same register with fine-tune + */ + if (mac_delay->tx_inv) + fine_val = ETH_RMII_DLY_TX_INV; + break; + case PHY_INTERFACE_MODE_RGMII: + case PHY_INTERFACE_MODE_RGMII_TXID: + case PHY_INTERFACE_MODE_RGMII_RXID: + case PHY_INTERFACE_MODE_RGMII_ID: + fine_val = ETH_FINE_DLY_GTXC | ETH_FINE_DLY_RXC; + + delay_val |= FIELD_PREP(ETH_DLY_GTXC_ENABLE, !!mac_delay->tx_delay); + delay_val |= FIELD_PREP(ETH_DLY_GTXC_STAGES, mac_delay->tx_delay); + delay_val |= FIELD_PREP(ETH_DLY_GTXC_INV, mac_delay->tx_inv); + + delay_val |= FIELD_PREP(ETH_DLY_RXC_ENABLE, !!mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_RXC_STAGES, mac_delay->rx_delay); + delay_val |= FIELD_PREP(ETH_DLY_RXC_INV, mac_delay->rx_inv); + break; + default: + dev_err(plat->dev, "phy interface not supported\n"); + return -EINVAL; + } + regmap_write(plat->peri_regmap, PERI_ETH_DLY, delay_val); + regmap_write(plat->peri_regmap, PERI_ETH_DLY_FINE, fine_val); + + return 0; +} + +static const struct mediatek_dwmac_variant mt2712_gmac_variant = { + .dwmac_set_phy_interface = mt2712_set_interface, + .dwmac_set_delay = mt2712_set_delay, + .clk_list = mt2712_dwmac_clk_l, + .num_clks = ARRAY_SIZE(mt2712_dwmac_clk_l), + .dma_bit_mask = 33, + .rx_delay_max = 17600, + .tx_delay_max = 17600, +}; + +static int mediatek_dwmac_config_dt(struct mediatek_dwmac_plat_data *plat) +{ + struct mac_delay_struct *mac_delay = &plat->mac_delay; + u32 tx_delay_ps, rx_delay_ps; + + plat->peri_regmap = syscon_regmap_lookup_by_phandle(plat->np, "mediatek,pericfg"); + if (IS_ERR(plat->peri_regmap)) { + dev_err(plat->dev, "Failed to get pericfg syscon\n"); + return PTR_ERR(plat->peri_regmap); + } + + plat->phy_mode = of_get_phy_mode(plat->np); + if (plat->phy_mode < 0) { + dev_err(plat->dev, "not find phy-mode\n"); + return -EINVAL; + } + + if (!of_property_read_u32(plat->np, "mediatek,tx-delay-ps", &tx_delay_ps)) { + if (tx_delay_ps < plat->variant->tx_delay_max) { + mac_delay->tx_delay = tx_delay_ps; + } else { + dev_err(plat->dev, "Invalid TX clock delay: %dps\n", tx_delay_ps); + return -EINVAL; + } + } + + if (!of_property_read_u32(plat->np, "mediatek,rx-delay-ps", &rx_delay_ps)) { + if (rx_delay_ps < plat->variant->rx_delay_max) { + mac_delay->rx_delay = rx_delay_ps; + } else { + dev_err(plat->dev, "Invalid RX clock delay: %dps\n", rx_delay_ps); + return -EINVAL; + } + } + + mac_delay->tx_inv = of_property_read_bool(plat->np, "mediatek,txc-inverse"); + mac_delay->rx_inv = of_property_read_bool(plat->np, "mediatek,rxc-inverse"); + plat->rmii_rxc = of_property_read_bool(plat->np, "mediatek,rmii-rxc"); + + return 0; +} + +static int mediatek_dwmac_clk_init(struct mediatek_dwmac_plat_data *plat) +{ + const struct mediatek_dwmac_variant *variant = plat->variant; + int i, num = variant->num_clks; + + plat->clks = devm_kcalloc(plat->dev, num, sizeof(*plat->clks), GFP_KERNEL); + if (!plat->clks) + return -ENOMEM; + + for (i = 0; i < num; i++) + plat->clks[i].id = variant->clk_list[i]; + + return devm_clk_bulk_get(plat->dev, num, plat->clks); +} + +static int mediatek_dwmac_init(struct platform_device *pdev, void *priv) +{ + struct mediatek_dwmac_plat_data *plat = priv; + const struct mediatek_dwmac_variant *variant =
plat->variant; + int ret; + + ret = dma_set_mask_and_coherent(plat->dev, DMA_BIT_MASK(variant->dma_bit_mask)); + if (ret) { + dev_err(plat->dev, "No suitable DMA available, err = %d\n", ret); + return ret; + } + + ret = variant->dwmac_set_phy_interface(plat); + if (ret) { + dev_err(plat->dev, "failed to set phy interface, err = %d\n", ret); + return ret; + } + + ret = variant->dwmac_set_delay(plat); + if (ret) { + dev_err(plat->dev, "failed to set delay value, err = %d\n", ret); + return ret; + } + + ret = clk_bulk_prepare_enable(variant->num_clks, plat->clks); + if (ret) { + dev_err(plat->dev, "failed to enable clks, err = %d\n", ret); + return ret; + } + + return 0; +} + +static void mediatek_dwmac_exit(struct platform_device *pdev, void *priv) +{ + struct mediatek_dwmac_plat_data *plat = priv; + const struct mediatek_dwmac_variant *variant = plat->variant; + + clk_bulk_disable_unprepare(variant->num_clks, plat->clks); +} + +static int mediatek_dwmac_probe(struct platform_device *pdev) +{ + struct mediatek_dwmac_plat_data *priv_plat; + struct plat_stmmacenet_data *plat_dat; + struct stmmac_resources stmmac_res; + int ret; + + priv_plat = devm_kzalloc(&pdev->dev, sizeof(*priv_plat), GFP_KERNEL); + if (!priv_plat) + return -ENOMEM; + + priv_plat->variant = of_device_get_match_data(&pdev->dev); + if (!priv_plat->variant) { + dev_err(&pdev->dev, "Missing dwmac-mediatek variant\n"); + return -EINVAL; + } + + priv_plat->dev = &pdev->dev; + priv_plat->np = pdev->dev.of_node; + + ret = mediatek_dwmac_config_dt(priv_plat); + if (ret) + return ret; + + ret = mediatek_dwmac_clk_init(priv_plat); + if (ret) + return ret; + + ret = stmmac_get_platform_resources(pdev, &stmmac_res); + if (ret) + return ret; + + plat_dat = stmmac_probe_config_dt(pdev, &stmmac_res.mac); + if (IS_ERR(plat_dat)) + return PTR_ERR(plat_dat); + + plat_dat->interface = priv_plat->phy_mode; + /* clk_csr_i = 250-300MHz & MDC = clk_csr_i/124 */ + plat_dat->clk_csr = 5; + plat_dat->has_gmac4 = 1; + plat_dat->has_gmac = 0; + plat_dat->pmt = 0; + plat_dat->maxmtu = ETH_DATA_LEN; + plat_dat->bsp_priv = priv_plat; + plat_dat->init = mediatek_dwmac_init; + plat_dat->exit = mediatek_dwmac_exit; + mediatek_dwmac_init(pdev, priv_plat); + + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); + if (ret) { + stmmac_remove_config_dt(pdev, plat_dat); + return ret; + } + + return 0; +} + +static const struct of_device_id mediatek_dwmac_match[] = { + { .compatible = "mediatek,mt2712-gmac", + .data = &mt2712_gmac_variant }, + { } +}; + +MODULE_DEVICE_TABLE(of, mediatek_dwmac_match); + +static struct platform_driver mediatek_dwmac_driver = { + .probe = mediatek_dwmac_probe, + .remove = stmmac_pltfr_remove, + .driver = { + .name = "dwmac-mediatek", + .pm = &stmmac_pltfr_pm_ops, + .of_match_table = mediatek_dwmac_match, + }, +}; +module_platform_driver(mediatek_dwmac_driver); + +MODULE_AUTHOR("Biao Huang <biao.huang@mediatek.com>"); +MODULE_DESCRIPTION("MediaTek DWMAC specific glue layer"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c index 5710864fa809..d1f61c25d82b 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c @@ -458,8 +458,10 @@ stmmac_get_pauseparam(struct net_device *netdev, if (!adv_lp.pause) return; } else { - if (!(netdev->phydev->supported & SUPPORTED_Pause) || - !(netdev->phydev->supported & SUPPORTED_Asym_Pause)) + if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT, +
netdev->phydev->supported) || + linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + netdev->phydev->supported)) return; } @@ -487,8 +489,10 @@ stmmac_set_pauseparam(struct net_device *netdev, if (!adv_lp.pause) return -EOPNOTSUPP; } else { - if (!(phy->supported & SUPPORTED_Pause) || - !(phy->supported & SUPPORTED_Asym_Pause)) + if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phy->supported) || + linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phy->supported)) return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index c4a35e932f05..0e0a0789c2ed 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -3881,7 +3881,7 @@ static void sysfs_display_ring(void *head, int size, int extend_desc, } } -static int stmmac_sysfs_ring_read(struct seq_file *seq, void *v) +static int stmmac_rings_status_show(struct seq_file *seq, void *v) { struct net_device *dev = seq->private; struct stmmac_priv *priv = netdev_priv(dev); @@ -3926,23 +3926,9 @@ static int stmmac_sysfs_ring_read(struct seq_file *seq, void *v) return 0; } +DEFINE_SHOW_ATTRIBUTE(stmmac_rings_status); -static int stmmac_sysfs_ring_open(struct inode *inode, struct file *file) -{ - return single_open(file, stmmac_sysfs_ring_read, inode->i_private); -} - -/* Debugfs files, should appear in /sys/kernel/debug/stmmaceth/eth0 */ - -static const struct file_operations stmmac_rings_status_fops = { - .owner = THIS_MODULE, - .open = stmmac_sysfs_ring_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; - -static int stmmac_sysfs_dma_cap_read(struct seq_file *seq, void *v) +static int stmmac_dma_cap_show(struct seq_file *seq, void *v) { struct net_device *dev = seq->private; struct stmmac_priv *priv = netdev_priv(dev); @@ -4005,19 +3991,7 @@ static int stmmac_sysfs_dma_cap_read(struct seq_file *seq, void *v) return 0; } - -static int stmmac_sysfs_dma_cap_open(struct inode *inode, struct file *file) -{ - return single_open(file, stmmac_sysfs_dma_cap_read, inode->i_private); -} - -static const struct file_operations stmmac_dma_cap_fops = { - .owner = THIS_MODULE, - .open = stmmac_sysfs_dma_cap_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(stmmac_dma_cap); static int stmmac_init_fs(struct net_device *dev) { @@ -4101,7 +4075,7 @@ static void stmmac_reset_subtask(struct stmmac_priv *priv) set_bit(STMMAC_DOWN, &priv->state); dev_close(priv->dev); - dev_open(priv->dev); + dev_open(priv->dev, NULL); clear_bit(STMMAC_DOWN, &priv->state); clear_bit(STMMAC_RESETING, &priv->state); rtnl_unlock(); diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c index 863fd602fd33..ff641cf30a4e 100644 --- a/drivers/net/ethernet/sun/sunhme.c +++ b/drivers/net/ethernet/sun/sunhme.c @@ -2691,7 +2691,7 @@ static int happy_meal_sbus_probe_one(struct platform_device *op, int is_qfe) sbus_dp = op->dev.parent->of_node; /* We can match PCI devices too, do not accept those here. 
*/ - if (strcmp(sbus_dp->name, "sbus") && strcmp(sbus_dp->name, "sbi")) + if (!of_node_name_eq(sbus_dp, "sbus") && !of_node_name_eq(sbus_dp, "sbi")) return err; if (is_qfe) { diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig index f932923f7d56..bb126be1eb72 100644 --- a/drivers/net/ethernet/ti/Kconfig +++ b/drivers/net/ethernet/ti/Kconfig @@ -121,7 +121,8 @@ config TLAN Devices currently supported by this driver are Compaq Netelligent, Compaq NetFlex and Olicom cards. Please read the file - <file:Documentation/networking/tlan.txt> for more details. + <file:Documentation/networking/device_drivers/ti/tlan.txt> + for more details. To compile this driver as a module, choose M here. The module will be called tlan. diff --git a/drivers/net/ethernet/ti/cpmac.c b/drivers/net/ethernet/ti/cpmac.c index 9b8a30bf939b..810dfc7de1f9 100644 --- a/drivers/net/ethernet/ti/cpmac.c +++ b/drivers/net/ethernet/ti/cpmac.c @@ -991,7 +991,6 @@ static int cpmac_open(struct net_device *dev) cpmac_hw_start(dev); napi_enable(&priv->napi); - dev->phydev->state = PHY_CHANGELINK; phy_start(dev->phydev); return 0; diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index 500f7ed8c58c..a591583d120e 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -26,6 +26,7 @@ #include <linux/netdevice.h> #include <linux/net_tstamp.h> #include <linux/phy.h> +#include <linux/phy/phy.h> #include <linux/workqueue.h> #include <linux/delay.h> #include <linux/pm_runtime.h> @@ -283,7 +284,7 @@ struct cpsw_ss_regs { #define CTRL_V2_TS_BITS \ (TS_320 | TS_319 | TS_132 | TS_131 | TS_130 | TS_129 |\ - TS_TTL_NONZERO | TS_ANNEX_D_EN | TS_LTYPE1_EN) + TS_TTL_NONZERO | TS_ANNEX_D_EN | TS_LTYPE1_EN | VLAN_LTYPE1_EN) #define CTRL_V2_ALL_TS_MASK (CTRL_V2_TS_BITS | TS_TX_EN | TS_RX_EN) #define CTRL_V2_TX_TS_BITS (CTRL_V2_TS_BITS | TS_TX_EN) @@ -293,7 +294,7 @@ struct cpsw_ss_regs { #define CTRL_V3_TS_BITS \ (TS_107 | TS_320 | TS_319 | TS_132 | TS_131 | TS_130 | TS_129 |\ TS_TTL_NONZERO | TS_ANNEX_F_EN | TS_ANNEX_D_EN |\ - TS_LTYPE1_EN) + TS_LTYPE1_EN | VLAN_LTYPE1_EN) #define CTRL_V3_ALL_TS_MASK (CTRL_V3_TS_BITS | TS_TX_EN | TS_RX_EN) #define CTRL_V3_TX_TS_BITS (CTRL_V3_TS_BITS | TS_TX_EN) @@ -387,6 +388,7 @@ struct cpsw_slave_data { int phy_if; u8 mac_addr[ETH_ALEN]; u16 dual_emac_res_vlan; /* Reserved VLAN for DualEMAC */ + struct phy *ifphy; }; struct cpsw_platform_data { @@ -466,6 +468,8 @@ struct cpsw_priv { bool mqprio_hw; int fifo_bw[CPSW_TC_NUM]; int shp_cfg_speed; + int tx_ts_enabled; + int rx_ts_enabled; u32 emac_port; struct cpsw_common *cpsw; }; @@ -565,26 +569,14 @@ static const struct cpsw_stats cpsw_gstrings_ch_stats[] = { (func)(slave++, ##arg); \ } while (0) +static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev, + __be16 proto, u16 vid); + static inline int cpsw_get_slave_port(u32 slave_num) { return slave_num + 1; } -static void cpsw_add_mcast(struct cpsw_priv *priv, const u8 *addr) -{ - struct cpsw_common *cpsw = priv->cpsw; - - if (cpsw->data.dual_emac) { - struct cpsw_slave *slave = cpsw->slaves + priv->emac_port; - - cpsw_ale_add_mcast(cpsw->ale, addr, ALE_PORT_HOST, - ALE_VLAN, slave->port_vlan, 0); - return; - } - - cpsw_ale_add_mcast(cpsw->ale, addr, ALE_ALL_PORTS, 0, 0, 0); -} - static void cpsw_set_promiscious(struct net_device *ndev, bool enable) { struct cpsw_common *cpsw = ndev_to_cpsw(ndev); @@ -640,7 +632,7 @@ static void cpsw_set_promiscious(struct net_device *ndev, bool enable) /* Clear all mcast from ALE */ cpsw_ale_flush_multicast(ale, ALE_ALL_PORTS, -1); - __dev_mc_unsync(ndev, NULL); + __hw_addr_ref_unsync_dev(&ndev->mc, ndev, NULL); /* Flood All Unicast Packets to Host port */ cpsw_ale_control_set(ale, 0, ALE_P0_UNI_FLOOD, 1); @@ -661,29 +653,148
@@ static void cpsw_set_promiscious(struct net_device *ndev, bool enable) } } -static int cpsw_add_mc_addr(struct net_device *ndev, const u8 *addr) +struct addr_sync_ctx { + struct net_device *ndev; + const u8 *addr; /* address to be synched */ + int consumed; /* number of address instances */ + int flush; /* flush flag */ +}; + +/** + * cpsw_set_mc - adds multicast entry to the table if it's not added or deletes + * if it's not deleted + * @ndev: device to sync + * @addr: address to be added or deleted + * @vid: vlan id, if vid < 0 set/unset address for real device + * @add: add address if the flag is set or remove otherwise + */ +static int cpsw_set_mc(struct net_device *ndev, const u8 *addr, + int vid, int add) { struct cpsw_priv *priv = netdev_priv(ndev); + struct cpsw_common *cpsw = priv->cpsw; + int mask, flags, ret; + + if (vid < 0) { + if (cpsw->data.dual_emac) + vid = cpsw->slaves[priv->emac_port].port_vlan; + else + vid = 0; + } + + mask = cpsw->data.dual_emac ? ALE_PORT_HOST : ALE_ALL_PORTS; + flags = vid ? ALE_VLAN : 0; + + if (add) + ret = cpsw_ale_add_mcast(cpsw->ale, addr, mask, flags, vid, 0); + else + ret = cpsw_ale_del_mcast(cpsw->ale, addr, 0, flags, vid); + + return ret; +} + +static int cpsw_update_vlan_mc(struct net_device *vdev, int vid, void *ctx) +{ + struct addr_sync_ctx *sync_ctx = ctx; + struct netdev_hw_addr *ha; + int found = 0, ret = 0; + + if (!vdev || !(vdev->flags & IFF_UP)) + return 0; + + /* vlan address is relevant if its sync_cnt != 0 */ + netdev_for_each_mc_addr(ha, vdev) { + if (ether_addr_equal(ha->addr, sync_ctx->addr)) { + found = ha->sync_cnt; + break; + } + } + + if (found) + sync_ctx->consumed++; + + if (sync_ctx->flush) { + if (!found) + cpsw_set_mc(sync_ctx->ndev, sync_ctx->addr, vid, 0); + return 0; + } + + if (found) + ret = cpsw_set_mc(sync_ctx->ndev, sync_ctx->addr, vid, 1); + + return ret; +} + +static int cpsw_add_mc_addr(struct net_device *ndev, const u8 *addr, int num) +{ + struct addr_sync_ctx sync_ctx; + int ret; + + sync_ctx.consumed = 0; + sync_ctx.addr = addr; + sync_ctx.ndev = ndev; + sync_ctx.flush = 0; + + ret = vlan_for_each(ndev, cpsw_update_vlan_mc, &sync_ctx); + if (sync_ctx.consumed < num && !ret) + ret = cpsw_set_mc(ndev, addr, -1, 1); + + return ret; +} + +static int cpsw_del_mc_addr(struct net_device *ndev, const u8 *addr, int num) +{ + struct addr_sync_ctx sync_ctx; + + sync_ctx.consumed = 0; + sync_ctx.addr = addr; + sync_ctx.ndev = ndev; + sync_ctx.flush = 1; + + vlan_for_each(ndev, cpsw_update_vlan_mc, &sync_ctx); + if (sync_ctx.consumed == num) + cpsw_set_mc(ndev, addr, -1, 0); - cpsw_add_mcast(priv, addr); return 0; } -static int cpsw_del_mc_addr(struct net_device *ndev, const u8 *addr) +static int cpsw_purge_vlan_mc(struct net_device *vdev, int vid, void *ctx) { - struct cpsw_priv *priv = netdev_priv(ndev); - struct cpsw_common *cpsw = priv->cpsw; - int vid, flags; + struct addr_sync_ctx *sync_ctx = ctx; + struct netdev_hw_addr *ha; + int found = 0; - if (cpsw->data.dual_emac) { - vid = cpsw->slaves[priv->emac_port].port_vlan; - flags = ALE_VLAN; - } else { - vid = 0; - flags = 0; + if (!vdev || !(vdev->flags & IFF_UP)) + return 0; + + /* vlan address is relevant if its sync_cnt != 0 */ + netdev_for_each_mc_addr(ha, vdev) { + if (ether_addr_equal(ha->addr, sync_ctx->addr)) { + found = ha->sync_cnt; + break; + } } - cpsw_ale_del_mcast(cpsw->ale, addr, 0, flags, vid); + if (!found) + return 0; + + sync_ctx->consumed++; + cpsw_set_mc(sync_ctx->ndev, sync_ctx->addr, vid, 0); + return 0; +} + +static int 
cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num) +{ + struct addr_sync_ctx sync_ctx; + + sync_ctx.addr = addr; + sync_ctx.ndev = ndev; + sync_ctx.consumed = 0; + + vlan_for_each(ndev, cpsw_purge_vlan_mc, &sync_ctx); + if (sync_ctx.consumed < num) + cpsw_set_mc(ndev, addr, -1, 0); + return 0; } @@ -704,7 +815,9 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev) /* Restore allmulti on vlans if necessary */ cpsw_ale_set_allmulti(cpsw->ale, ndev->flags & IFF_ALLMULTI); - __dev_mc_sync(ndev, cpsw_add_mc_addr, cpsw_del_mc_addr); + /* add/remove mcast address either for real netdev or for vlan */ + __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr, + cpsw_del_mc_addr); } static void cpsw_intr_enable(struct cpsw_common *cpsw) @@ -796,6 +909,7 @@ static void cpsw_rx_handler(void *token, int len, int status) struct net_device *ndev = skb->dev; int ret = 0, port; struct cpsw_common *cpsw = ndev_to_cpsw(ndev); + struct cpsw_priv *priv; if (cpsw->data.dual_emac) { port = CPDMA_RX_SOURCE_PORT(status); @@ -830,7 +944,9 @@ static void cpsw_rx_handler(void *token, int len, int status) skb_put(skb, len); if (status & CPDMA_RX_VLAN_ENCAP) cpsw_rx_vlan_encap(skb); - cpts_rx_timestamp(cpsw->cpts, skb); + priv = netdev_priv(ndev); + if (priv->rx_ts_enabled) + cpts_rx_timestamp(cpsw->cpts, skb); skb->protocol = eth_type_trans(skb, ndev); netif_receive_skb(skb); ndev->stats.rx_bytes += len; @@ -1510,7 +1626,12 @@ static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv) phy_start(slave->phy); /* Configure GMII_SEL register */ - cpsw_phy_sel(cpsw->dev, slave->phy->interface, slave->slave_num); + if (!IS_ERR(slave->data->ifphy)) + phy_set_mode_ext(slave->data->ifphy, PHY_MODE_ETHERNET, + slave->data->phy_if); + else + cpsw_phy_sel(cpsw->dev, slave->phy->interface, + slave->slave_num); } static inline void cpsw_add_default_vlan(struct cpsw_priv *priv) @@ -1845,9 +1966,23 @@ static void cpsw_mqprio_resume(struct cpsw_slave *slave, struct cpsw_priv *priv) slave_write(slave, tx_prio_map, tx_prio_rg); } +static int cpsw_restore_vlans(struct net_device *vdev, int vid, void *arg) +{ + struct cpsw_priv *priv = arg; + + if (!vdev) + return 0; + + cpsw_ndo_vlan_rx_add_vid(priv->ndev, 0, vid); + return 0; +} + /* restore resources after port reset */ static void cpsw_restore(struct cpsw_priv *priv) { + /* restore vlan configurations */ + vlan_for_each(priv->ndev, cpsw_restore_vlans, priv); + /* restore MQPRIO offload */ for_each_slave(priv, cpsw_mqprio_resume, priv); @@ -1964,7 +2099,7 @@ static int cpsw_ndo_stop(struct net_device *ndev) struct cpsw_common *cpsw = priv->cpsw; cpsw_info(priv, ifdown, "shutting down cpsw device\n"); - __dev_mc_unsync(priv->ndev, cpsw_del_mc_addr); + __hw_addr_ref_unsync_dev(&ndev->mc, ndev, cpsw_purge_all_mc); netif_tx_stop_all_queues(priv->ndev); netif_carrier_off(priv->ndev); @@ -2003,7 +2138,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb, } if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP && - cpts_is_tx_enabled(cpts) && cpts_can_timestamp(cpts, skb)) + priv->tx_ts_enabled && cpts_can_timestamp(cpts, skb)) skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; q_idx = skb_get_queue_mapping(skb); @@ -2047,13 +2182,13 @@ fail: #if IS_ENABLED(CONFIG_TI_CPTS) -static void cpsw_hwtstamp_v1(struct cpsw_common *cpsw) +static void cpsw_hwtstamp_v1(struct cpsw_priv *priv) { + struct cpsw_common *cpsw = priv->cpsw; struct cpsw_slave *slave = &cpsw->slaves[cpsw->data.active_slave]; u32 ts_en, seq_id; - if (!cpts_is_tx_enabled(cpsw->cpts) 
&& - !cpts_is_rx_enabled(cpsw->cpts)) { + if (!priv->tx_ts_enabled && !priv->rx_ts_enabled) { slave_write(slave, 0, CPSW1_TS_CTL); return; } @@ -2061,10 +2196,10 @@ static void cpsw_hwtstamp_v1(struct cpsw_common *cpsw) seq_id = (30 << CPSW_V1_SEQ_ID_OFS_SHIFT) | ETH_P_1588; ts_en = EVENT_MSG_BITS << CPSW_V1_MSG_TYPE_OFS; - if (cpts_is_tx_enabled(cpsw->cpts)) + if (priv->tx_ts_enabled) ts_en |= CPSW_V1_TS_TX_EN; - if (cpts_is_rx_enabled(cpsw->cpts)) + if (priv->rx_ts_enabled) ts_en |= CPSW_V1_TS_RX_EN; slave_write(slave, ts_en, CPSW1_TS_CTL); @@ -2084,20 +2219,20 @@ static void cpsw_hwtstamp_v2(struct cpsw_priv *priv) case CPSW_VERSION_2: ctrl &= ~CTRL_V2_ALL_TS_MASK; - if (cpts_is_tx_enabled(cpsw->cpts)) + if (priv->tx_ts_enabled) ctrl |= CTRL_V2_TX_TS_BITS; - if (cpts_is_rx_enabled(cpsw->cpts)) + if (priv->rx_ts_enabled) ctrl |= CTRL_V2_RX_TS_BITS; break; case CPSW_VERSION_3: default: ctrl &= ~CTRL_V3_ALL_TS_MASK; - if (cpts_is_tx_enabled(cpsw->cpts)) + if (priv->tx_ts_enabled) ctrl |= CTRL_V3_TX_TS_BITS; - if (cpts_is_rx_enabled(cpsw->cpts)) + if (priv->rx_ts_enabled) ctrl |= CTRL_V3_RX_TS_BITS; break; } @@ -2107,6 +2242,7 @@ static void cpsw_hwtstamp_v2(struct cpsw_priv *priv) slave_write(slave, mtype, CPSW2_TS_SEQ_MTYPE); slave_write(slave, ctrl, CPSW2_CONTROL); writel_relaxed(ETH_P_1588, &cpsw->regs->ts_ltype); + writel_relaxed(ETH_P_8021Q, &cpsw->regs->vlan_ltype); } static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) @@ -2114,7 +2250,6 @@ static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) struct cpsw_priv *priv = netdev_priv(dev); struct hwtstamp_config cfg; struct cpsw_common *cpsw = priv->cpsw; - struct cpts *cpts = cpsw->cpts; if (cpsw->version != CPSW_VERSION_1 && cpsw->version != CPSW_VERSION_2 && @@ -2133,7 +2268,7 @@ static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) switch (cfg.rx_filter) { case HWTSTAMP_FILTER_NONE: - cpts_rx_enable(cpts, 0); + priv->rx_ts_enabled = 0; break; case HWTSTAMP_FILTER_ALL: case HWTSTAMP_FILTER_NTP_ALL: @@ -2141,7 +2276,7 @@ static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) case HWTSTAMP_FILTER_PTP_V1_L4_EVENT: case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ: - cpts_rx_enable(cpts, HWTSTAMP_FILTER_PTP_V1_L4_EVENT); + priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; break; case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: @@ -2153,18 +2288,18 @@ static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) case HWTSTAMP_FILTER_PTP_V2_EVENT: case HWTSTAMP_FILTER_PTP_V2_SYNC: case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ: - cpts_rx_enable(cpts, HWTSTAMP_FILTER_PTP_V2_EVENT); + priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V2_EVENT; cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; break; default: return -ERANGE; } - cpts_tx_enable(cpts, cfg.tx_type == HWTSTAMP_TX_ON); + priv->tx_ts_enabled = cfg.tx_type == HWTSTAMP_TX_ON; switch (cpsw->version) { case CPSW_VERSION_1: - cpsw_hwtstamp_v1(cpsw); + cpsw_hwtstamp_v1(priv); break; case CPSW_VERSION_2: case CPSW_VERSION_3: @@ -2180,7 +2315,7 @@ static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) static int cpsw_hwtstamp_get(struct net_device *dev, struct ifreq *ifr) { struct cpsw_common *cpsw = ndev_to_cpsw(dev); - struct cpts *cpts = cpsw->cpts; + struct cpsw_priv *priv = netdev_priv(dev); struct hwtstamp_config cfg; if (cpsw->version != CPSW_VERSION_1 && @@ -2189,10 +2324,8 @@ static int cpsw_hwtstamp_get(struct net_device *dev, struct 
ifreq *ifr) return -EOPNOTSUPP; cfg.flags = 0; - cfg.tx_type = cpts_is_tx_enabled(cpts) ? - HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF; - cfg.rx_filter = (cpts_is_rx_enabled(cpts) ? - cpts->rx_enable : HWTSTAMP_FILTER_NONE); + cfg.tx_type = priv->tx_ts_enabled ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF; + cfg.rx_filter = priv->rx_ts_enabled; return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0; } @@ -2415,6 +2548,7 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device *ndev, HOST_PORT_NUM, ALE_VLAN, vid); ret |= cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast, 0, ALE_VLAN, vid); + ret |= cpsw_ale_flush_multicast(cpsw->ale, 0, vid); err: pm_runtime_put(cpsw->dev); return ret; @@ -3144,9 +3278,19 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data, const __be32 *parp; /* This is no slave child node, continue */ - if (strcmp(slave_node->name, "slave")) + if (!of_node_name_eq(slave_node, "slave")) continue; + slave_data->ifphy = devm_of_phy_get(&pdev->dev, slave_node, + NULL); + if (!IS_ENABLED(CONFIG_TI_CPSW_PHY_SEL) && + IS_ERR(slave_data->ifphy)) { + ret = PTR_ERR(slave_data->ifphy); + dev_err(&pdev->dev, + "%d: Error retrieving port phy: %d\n", i, ret); + return ret; + } + slave_data->phy_node = of_parse_phandle(slave_node, "phy-handle", 0); parp = of_get_property(slave_node, "phy_id", &lenp); @@ -3240,7 +3384,7 @@ static void cpsw_remove_dt(struct platform_device *pdev) for_each_available_child_of_node(node, slave_node) { struct cpsw_slave_data *slave_data = &data->slave_data[i]; - if (strcmp(slave_node->name, "slave")) + if (!of_node_name_eq(slave_node, "slave")) continue; if (of_phy_is_fixed_link(slave_node)) diff --git a/drivers/net/ethernet/ti/cpts.c b/drivers/net/ethernet/ti/cpts.c index b96b93c686bf..054f78295d1d 100644 --- a/drivers/net/ethernet/ti/cpts.c +++ b/drivers/net/ethernet/ti/cpts.c @@ -86,6 +86,25 @@ static int cpts_purge_events(struct cpts *cpts) return removed ? 
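The strcmp(slave_node->name, "slave") to of_node_name_eq() switch above matters because the helper compares only the node-name portion, ignoring any '@unit-address' suffix. A minimal sketch of the call pattern (helper name hypothetical):

#include <linux/of.h>

static int count_slave_nodes(struct device_node *parent)
{
	struct device_node *child;
	int n = 0;

	for_each_available_child_of_node(parent, child) {
		/* matches "slave", "slave@0", "slave@1", ... */
		if (of_node_name_eq(child, "slave"))
			n++;
	}
	return n;
}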
0 : -1; } +static void cpts_purge_txq(struct cpts *cpts) +{ + struct cpts_skb_cb_data *skb_cb; + struct sk_buff *skb, *tmp; + int removed = 0; + + skb_queue_walk_safe(&cpts->txq, skb, tmp) { + skb_cb = (struct cpts_skb_cb_data *)skb->cb; + if (time_after(jiffies, skb_cb->tmo)) { + __skb_unlink(skb, &cpts->txq); + dev_consume_skb_any(skb); + ++removed; + } + } + + if (removed) + dev_dbg(cpts->dev, "txq cleaned up %d\n", removed); +} + static bool cpts_match_tx_ts(struct cpts *cpts, struct cpts_event *event) { struct sk_buff *skb, *tmp; @@ -119,9 +138,7 @@ static bool cpts_match_tx_ts(struct cpts *cpts, struct cpts_event *event) if (time_after(jiffies, skb_cb->tmo)) { /* timeout any expired skbs over 1s */ - dev_dbg(cpts->dev, - "expiring tx timestamp mtype %u seqid %04x\n", - mtype, seqid); + dev_dbg(cpts->dev, "expiring tx timestamp from txq\n"); __skb_unlink(skb, &cpts->txq); dev_consume_skb_any(skb); } @@ -294,8 +311,11 @@ static long cpts_overflow_check(struct ptp_clock_info *ptp) spin_lock_irqsave(&cpts->lock, flags); ts = ns_to_timespec64(timecounter_read(&cpts->tc)); - if (!skb_queue_empty(&cpts->txq)) - delay = CPTS_SKB_TX_WORK_TIMEOUT; + if (!skb_queue_empty(&cpts->txq)) { + cpts_purge_txq(cpts); + if (!skb_queue_empty(&cpts->txq)) + delay = CPTS_SKB_TX_WORK_TIMEOUT; + } spin_unlock_irqrestore(&cpts->lock, flags); pr_debug("cpts overflow check at %lld.%09ld\n", @@ -410,8 +430,6 @@ void cpts_rx_timestamp(struct cpts *cpts, struct sk_buff *skb) u64 ns; struct skb_shared_hwtstamps *ssh; - if (!cpts->rx_enable) - return; ns = cpts_find_ts(cpts, skb, CPTS_EV_RX); if (!ns) return; diff --git a/drivers/net/ethernet/ti/cpts.h b/drivers/net/ethernet/ti/cpts.h index 73d73faf0f38..d2c7decd59b6 100644 --- a/drivers/net/ethernet/ti/cpts.h +++ b/drivers/net/ethernet/ti/cpts.h @@ -136,26 +136,6 @@ struct cpts *cpts_create(struct device *dev, void __iomem *regs, struct device_node *node); void cpts_release(struct cpts *cpts); -static inline void cpts_rx_enable(struct cpts *cpts, int enable) -{ - cpts->rx_enable = enable; -} - -static inline bool cpts_is_rx_enabled(struct cpts *cpts) -{ - return !!cpts->rx_enable; -} - -static inline void cpts_tx_enable(struct cpts *cpts, int enable) -{ - cpts->tx_enable = enable; -} - -static inline bool cpts_is_tx_enabled(struct cpts *cpts) -{ - return !!cpts->tx_enable; -} - static inline bool cpts_can_timestamp(struct cpts *cpts, struct sk_buff *skb) { unsigned int class = ptp_classify_raw(skb); @@ -197,24 +177,6 @@ static inline void cpts_unregister(struct cpts *cpts) { } -static inline void cpts_rx_enable(struct cpts *cpts, int enable) -{ -} - -static inline bool cpts_is_rx_enabled(struct cpts *cpts) -{ - return false; -} - -static inline void cpts_tx_enable(struct cpts *cpts, int enable) -{ -} - -static inline bool cpts_is_tx_enabled(struct cpts *cpts) -{ - return false; -} - static inline bool cpts_can_timestamp(struct cpts *cpts, struct sk_buff *skb) { return false; diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c index 9153db120352..840820402cd0 100644 --- a/drivers/net/ethernet/ti/davinci_emac.c +++ b/drivers/net/ethernet/ti/davinci_emac.c @@ -1912,11 +1912,15 @@ static int davinci_emac_probe(struct platform_device *pdev) ether_addr_copy(ndev->dev_addr, priv->mac_addr); if (!is_valid_ether_addr(priv->mac_addr)) { - /* Use random MAC if none passed */ - eth_hw_addr_random(ndev); - memcpy(priv->mac_addr, ndev->dev_addr, ndev->addr_len); - dev_warn(&pdev->dev, "using random MAC addr: %pM\n", - priv->mac_addr); + 
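cpts_purge_txq() above works because every skb parked on the queue was stamped with its own deadline in skb->cb when it was queued. The stamp-then-expire pattern in isolation, as a self-contained sketch (struct and function names hypothetical):

#include <linux/jiffies.h>
#include <linux/skbuff.h>

struct deadline_cb {
	unsigned long tmo;	/* jiffies after which the skb is stale */
};

static void queue_with_deadline(struct sk_buff_head *q, struct sk_buff *skb)
{
	struct deadline_cb *cb = (struct deadline_cb *)skb->cb;

	cb->tmo = jiffies + HZ;			/* expire after ~1s */
	skb_queue_tail(q, skb);
}

static void expire_stale(struct sk_buff_head *q)
{
	struct sk_buff *skb, *tmp;
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);
	skb_queue_walk_safe(q, skb, tmp) {
		struct deadline_cb *cb = (struct deadline_cb *)skb->cb;

		if (time_after(jiffies, cb->tmo)) {
			__skb_unlink(skb, q);
			dev_consume_skb_any(skb);
		}
	}
	spin_unlock_irqrestore(&q->lock, flags);
}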
/* Try nvmem if MAC wasn't passed over pdata or DT. */ + rc = nvmem_get_mac_address(&pdev->dev, priv->mac_addr); + if (rc) { + /* Use random MAC if still none obtained. */ + eth_hw_addr_random(ndev); + memcpy(priv->mac_addr, ndev->dev_addr, ndev->addr_len); + dev_warn(&pdev->dev, "using random MAC addr: %pM\n", + priv->mac_addr); + } } ndev->netdev_ops = &emac_netdev_ops; diff --git a/drivers/net/ethernet/ti/netcp_ethss.c b/drivers/net/ethernet/ti/netcp_ethss.c index 0397ccb6597e..5174d318901e 100644 --- a/drivers/net/ethernet/ti/netcp_ethss.c +++ b/drivers/net/ethernet/ti/netcp_ethss.c @@ -763,6 +763,8 @@ struct gbe_priv { int cpts_registered; struct cpts *cpts; + int rx_ts_enabled; + int tx_ts_enabled; }; struct gbe_intf { @@ -2564,7 +2566,7 @@ static int gbe_txtstamp_mark_pkt(struct gbe_intf *gbe_intf, struct gbe_priv *gbe_dev = gbe_intf->gbe_dev; if (!(skb_shinfo(p_info->skb)->tx_flags & SKBTX_HW_TSTAMP) || - !cpts_is_tx_enabled(gbe_dev->cpts)) + !gbe_dev->tx_ts_enabled) return 0; /* If phy has the txtstamp api, assume it will do it. @@ -2598,7 +2600,9 @@ static int gbe_rxtstamp(struct gbe_intf *gbe_intf, struct netcp_packet *p_info) return 0; } - cpts_rx_timestamp(gbe_dev->cpts, p_info->skb); + if (gbe_dev->rx_ts_enabled) + cpts_rx_timestamp(gbe_dev->cpts, p_info->skb); + p_info->rxtstamp_complete = true; return 0; @@ -2614,10 +2618,8 @@ static int gbe_hwtstamp_get(struct gbe_intf *gbe_intf, struct ifreq *ifr) return -EOPNOTSUPP; cfg.flags = 0; - cfg.tx_type = cpts_is_tx_enabled(cpts) ? - HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF; - cfg.rx_filter = (cpts_is_rx_enabled(cpts) ? - cpts->rx_enable : HWTSTAMP_FILTER_NONE); + cfg.tx_type = gbe_dev->tx_ts_enabled ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF; + cfg.rx_filter = gbe_dev->rx_ts_enabled; return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0; } @@ -2628,8 +2630,8 @@ static void gbe_hwtstamp(struct gbe_intf *gbe_intf) struct gbe_slave *slave = gbe_intf->slave; u32 ts_en, seq_id, ctl; - if (!cpts_is_rx_enabled(gbe_dev->cpts) && - !cpts_is_tx_enabled(gbe_dev->cpts)) { + if (!gbe_dev->rx_ts_enabled && + !gbe_dev->tx_ts_enabled) { writel(0, GBE_REG_ADDR(slave, port_regs, ts_ctl)); return; } @@ -2641,10 +2643,10 @@ static void gbe_hwtstamp(struct gbe_intf *gbe_intf) (slave->ts_ctl.uni ? 
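The davinci_emac hunk above establishes a three-step MAC source order: platform data/DT first, then an nvmem cell, then a random address. Condensed into one hypothetical helper (assuming nvmem_get_mac_address() returns 0 when a valid "mac-address" cell is found, as the hunk implies):

#include <linux/etherdevice.h>

static void example_pick_mac(struct device *dev, struct net_device *ndev,
			     const u8 *pdata_mac)
{
	if (pdata_mac && is_valid_ether_addr(pdata_mac)) {
		ether_addr_copy(ndev->dev_addr, pdata_mac);
		return;
	}
	if (!nvmem_get_mac_address(dev, ndev->dev_addr))
		return;		/* nvmem provided a usable address */
	eth_hw_addr_random(ndev);
	dev_warn(dev, "using random MAC addr: %pM\n", ndev->dev_addr);
}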
TS_UNI_EN : slave->ts_ctl.maddr_map << TS_CTL_MADDR_SHIFT); - if (cpts_is_tx_enabled(gbe_dev->cpts)) + if (gbe_dev->tx_ts_enabled) ts_en |= (TS_TX_ANX_ALL_EN | TS_TX_VLAN_LT1_EN); - if (cpts_is_rx_enabled(gbe_dev->cpts)) + if (gbe_dev->rx_ts_enabled) ts_en |= (TS_RX_ANX_ALL_EN | TS_RX_VLAN_LT1_EN); writel(ts_en, GBE_REG_ADDR(slave, port_regs, ts_ctl)); @@ -2670,10 +2672,10 @@ static int gbe_hwtstamp_set(struct gbe_intf *gbe_intf, struct ifreq *ifr) switch (cfg.tx_type) { case HWTSTAMP_TX_OFF: - cpts_tx_enable(cpts, 0); + gbe_dev->tx_ts_enabled = 0; break; case HWTSTAMP_TX_ON: - cpts_tx_enable(cpts, 1); + gbe_dev->tx_ts_enabled = 1; break; default: return -ERANGE; @@ -2681,12 +2683,12 @@ static int gbe_hwtstamp_set(struct gbe_intf *gbe_intf, struct ifreq *ifr) switch (cfg.rx_filter) { case HWTSTAMP_FILTER_NONE: - cpts_rx_enable(cpts, 0); + gbe_dev->rx_ts_enabled = HWTSTAMP_FILTER_NONE; break; case HWTSTAMP_FILTER_PTP_V1_L4_EVENT: case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ: - cpts_rx_enable(cpts, HWTSTAMP_FILTER_PTP_V1_L4_EVENT); + gbe_dev->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; break; case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: @@ -2698,7 +2700,7 @@ static int gbe_hwtstamp_set(struct gbe_intf *gbe_intf, struct ifreq *ifr) case HWTSTAMP_FILTER_PTP_V2_EVENT: case HWTSTAMP_FILTER_PTP_V2_SYNC: case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ: - cpts_rx_enable(cpts, HWTSTAMP_FILTER_PTP_V2_EVENT); + gbe_dev->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V2_EVENT; cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; break; default: @@ -3621,7 +3623,7 @@ static int gbe_probe(struct netcp_device *netcp_device, struct device *dev, return -EINVAL; } - if (!strcmp(node->name, "gbe")) { + if (of_node_name_eq(node, "gbe")) { ret = get_gbe_resource_version(gbe_dev, node); if (ret) return ret; @@ -3635,7 +3637,7 @@ static int gbe_probe(struct netcp_device *netcp_device, struct device *dev, else ret = -ENODEV; - } else if (!strcmp(node->name, "xgbe")) { + } else if (of_node_name_eq(node, "xgbe")) { ret = set_xgbe_ethss10_priv(gbe_dev, node); if (ret) return ret; diff --git a/drivers/net/ethernet/ti/tlan.c b/drivers/net/ethernet/ti/tlan.c index 93d142867c2a..b4ab1a5f6cd0 100644 --- a/drivers/net/ethernet/ti/tlan.c +++ b/drivers/net/ethernet/ti/tlan.c @@ -69,7 +69,9 @@ MODULE_AUTHOR("Maintainer: Samuel Chessman "); MODULE_DESCRIPTION("Driver for TI ThunderLAN based ethernet PCI adapters"); MODULE_LICENSE("GPL"); -/* Turn on debugging. See Documentation/networking/tlan.txt for details */ +/* Turn on debugging. 
+ * See Documentation/networking/device_drivers/ti/tlan.txt for details + */ static int debug; module_param(debug, int, 0); MODULE_PARM_DESC(debug, "ThunderLAN debug mask"); diff --git a/drivers/net/ethernet/toshiba/tc35815.c b/drivers/net/ethernet/toshiba/tc35815.c index 6a71c2c0f17d..c50a9772f4af 100644 --- a/drivers/net/ethernet/toshiba/tc35815.c +++ b/drivers/net/ethernet/toshiba/tc35815.c @@ -607,9 +607,9 @@ static void tc_handle_link_change(struct net_device *dev) static int tc_mii_probe(struct net_device *dev) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; struct tc35815_local *lp = netdev_priv(dev); struct phy_device *phydev; - u32 dropmask; phydev = phy_find_first(lp->mii_bus); if (!phydev) { @@ -630,17 +630,22 @@ static int tc_mii_probe(struct net_device *dev) /* mask with MAC supported features */ phy_set_max_speed(phydev, SPEED_100); - dropmask = 0; - if (options.speed == 10) - dropmask |= SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full; - else if (options.speed == 100) - dropmask |= SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full; - if (options.duplex == 1) - dropmask |= SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Full; - else if (options.duplex == 2) - dropmask |= SUPPORTED_10baseT_Half | SUPPORTED_100baseT_Half; - phydev->supported &= ~dropmask; - phydev->advertising = phydev->supported; + if (options.speed == 10) { + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, mask); + } else if (options.speed == 100) { + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, mask); + } + if (options.duplex == 1) { + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, mask); + } else if (options.duplex == 2) { + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask); + } + linkmode_and(phydev->supported, phydev->supported, mask); + linkmode_copy(phydev->advertising, phydev->supported); lp->link = 0; lp->speed = 0; diff --git a/drivers/net/fjes/fjes_debugfs.c b/drivers/net/fjes/fjes_debugfs.c index 30052ebd52bf..7fed88ea27a5 100644 --- a/drivers/net/fjes/fjes_debugfs.c +++ b/drivers/net/fjes/fjes_debugfs.c @@ -62,19 +62,7 @@ static int fjes_dbg_status_show(struct seq_file *m, void *v) return 0; } - -static int fjes_dbg_status_open(struct inode *inode, struct file *file) -{ - return single_open(file, fjes_dbg_status_show, inode->i_private); -} - -static const struct file_operations fjes_dbg_status_fops = { - .owner = THIS_MODULE, - .open = fjes_dbg_status_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(fjes_dbg_status); void fjes_dbg_adapter_init(struct fjes_adapter *adapter) { diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c index a0cd1c41cf5f..58bbba8582b0 100644 --- a/drivers/net/geneve.c +++ b/drivers/net/geneve.c @@ -70,6 +70,7 @@ struct geneve_dev { bool collect_md; bool use_udp6_rx_checksums; bool ttl_inherit; + enum ifla_geneve_df df; }; struct geneve_sock { @@ -387,6 +388,59 @@ drop: return 0; } +/* Callback from net/ipv{4,6}/udp.c to check that we have a tunnel for errors */ +static int geneve_udp_encap_err_lookup(struct sock *sk, struct sk_buff *skb) +{ + struct genevehdr *geneveh; + struct geneve_sock *gs; + u8 zero_vni[3] = { 0 }; + u8 *vni = zero_vni; + + if (skb->len < GENEVE_BASE_HLEN) + return -EINVAL; + + geneveh = geneve_hdr(skb); + if 
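The fjes conversion above (repeated later for at86rf230 and netdevsim) leans on DEFINE_SHOW_ATTRIBUTE() generating the single_open()/single_release() boilerplate from a naming convention: given foo_show(), it emits foo_open() and foo_fops. A minimal sketch with a hypothetical "foo" attribute:

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int foo_show(struct seq_file *m, void *v)
{
	seq_puts(m, "hello\n");
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(foo);	/* defines foo_open() and foo_fops */

static void foo_debugfs_init(struct dentry *parent, void *priv)
{
	/* priv lands in inode->i_private and reaches foo_show()
	 * through the seq_file's ->private pointer
	 */
	debugfs_create_file("foo", 0444, parent, priv, &foo_fops);
}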
(geneveh->ver != GENEVE_VER) + return -EINVAL; + + if (geneveh->proto_type != htons(ETH_P_TEB)) + return -EINVAL; + + gs = rcu_dereference_sk_user_data(sk); + if (!gs) + return -ENOENT; + + if (geneve_get_sk_family(gs) == AF_INET) { + struct iphdr *iph = ip_hdr(skb); + __be32 addr4 = 0; + + if (!gs->collect_md) { + vni = geneve_hdr(skb)->vni; + addr4 = iph->daddr; + } + + return geneve_lookup(gs, addr4, vni) ? 0 : -ENOENT; + } + +#if IS_ENABLED(CONFIG_IPV6) + if (geneve_get_sk_family(gs) == AF_INET6) { + struct ipv6hdr *ip6h = ipv6_hdr(skb); + struct in6_addr addr6; + + memset(&addr6, 0, sizeof(struct in6_addr)); + + if (!gs->collect_md) { + vni = geneve_hdr(skb)->vni; + addr6 = ip6h->daddr; + } + + return geneve6_lookup(gs, addr6, vni) ? 0 : -ENOENT; + } +#endif + + return -EPFNOSUPPORT; +} + static struct socket *geneve_create_sock(struct net *net, bool ipv6, __be16 port, bool ipv6_rx_csum) { @@ -544,6 +598,7 @@ static struct geneve_sock *geneve_socket_create(struct net *net, __be16 port, tunnel_cfg.gro_receive = geneve_gro_receive; tunnel_cfg.gro_complete = geneve_gro_complete; tunnel_cfg.encap_rcv = geneve_udp_encap_recv; + tunnel_cfg.encap_err_lookup = geneve_udp_encap_err_lookup; tunnel_cfg.encap_destroy = NULL; setup_udp_tunnel_sock(net, sock, &tunnel_cfg); list_add(&gs->list, &gn->sock_list); @@ -823,8 +878,8 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev, struct rtable *rt; struct flowi4 fl4; __u8 tos, ttl; + __be16 df = 0; __be16 sport; - __be16 df; int err; rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info); @@ -838,6 +893,8 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev, if (geneve->collect_md) { tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb); ttl = key->ttl; + + df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0; } else { tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb); if (geneve->ttl_inherit) @@ -845,8 +902,22 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev, else ttl = key->ttl; ttl = ttl ? : ip4_dst_hoplimit(&rt->dst); + + if (geneve->df == GENEVE_DF_SET) { + df = htons(IP_DF); + } else if (geneve->df == GENEVE_DF_INHERIT) { + struct ethhdr *eth = eth_hdr(skb); + + if (ntohs(eth->h_proto) == ETH_P_IPV6) { + df = htons(IP_DF); + } else if (ntohs(eth->h_proto) == ETH_P_IP) { + struct iphdr *iph = ip_hdr(skb); + + if (iph->frag_off & htons(IP_DF)) + df = htons(IP_DF); + } + } } - df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? 
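For the GENEVE_DF_INHERIT case above, the driver peeks at the inner frame to decide the outer IPv4 DF bit. The same decision isolated into a hypothetical helper (assumes the mac header is set, as it is on the geneve transmit path):

#include <linux/if_ether.h>
#include <linux/ip.h>

static __be16 outer_df_from_inner(struct sk_buff *skb)
{
	const struct ethhdr *eth = eth_hdr(skb);

	if (eth->h_proto == htons(ETH_P_IPV6))
		return htons(IP_DF);	/* inner IPv6 never fragments in-path */
	if (eth->h_proto == htons(ETH_P_IP))
		return ip_hdr(skb)->frag_off & htons(IP_DF);
	return 0;	/* non-IP payload: leave DF clear */
}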
htons(IP_DF) : 0; err = geneve_build_skb(&rt->dst, skb, info, xnet, sizeof(struct iphdr)); if (unlikely(err)) @@ -1093,6 +1164,7 @@ static const struct nla_policy geneve_policy[IFLA_GENEVE_MAX + 1] = { [IFLA_GENEVE_UDP_ZERO_CSUM6_TX] = { .type = NLA_U8 }, [IFLA_GENEVE_UDP_ZERO_CSUM6_RX] = { .type = NLA_U8 }, [IFLA_GENEVE_TTL_INHERIT] = { .type = NLA_U8 }, + [IFLA_GENEVE_DF] = { .type = NLA_U8 }, }; static int geneve_validate(struct nlattr *tb[], struct nlattr *data[], @@ -1128,6 +1200,16 @@ static int geneve_validate(struct nlattr *tb[], struct nlattr *data[], } } + if (data[IFLA_GENEVE_DF]) { + enum ifla_geneve_df df = nla_get_u8(data[IFLA_GENEVE_DF]); + + if (df < 0 || df > GENEVE_DF_MAX) { + NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_GENEVE_DF], + "Invalid DF attribute"); + return -EINVAL; + } + } + return 0; } @@ -1173,7 +1255,7 @@ static int geneve_configure(struct net *net, struct net_device *dev, struct netlink_ext_ack *extack, const struct ip_tunnel_info *info, bool metadata, bool ipv6_rx_csum, - bool ttl_inherit) + bool ttl_inherit, enum ifla_geneve_df df) { struct geneve_net *gn = net_generic(net, geneve_net_id); struct geneve_dev *t, *geneve = netdev_priv(dev); @@ -1223,6 +1305,7 @@ static int geneve_configure(struct net *net, struct net_device *dev, geneve->collect_md = metadata; geneve->use_udp6_rx_checksums = ipv6_rx_csum; geneve->ttl_inherit = ttl_inherit; + geneve->df = df; err = register_netdevice(dev); if (err) @@ -1242,7 +1325,7 @@ static int geneve_nl2info(struct nlattr *tb[], struct nlattr *data[], struct netlink_ext_ack *extack, struct ip_tunnel_info *info, bool *metadata, bool *use_udp6_rx_checksums, bool *ttl_inherit, - bool changelink) + enum ifla_geneve_df *df, bool changelink) { int attrtype; @@ -1330,6 +1413,9 @@ static int geneve_nl2info(struct nlattr *tb[], struct nlattr *data[], if (data[IFLA_GENEVE_TOS]) info->key.tos = nla_get_u8(data[IFLA_GENEVE_TOS]); + if (data[IFLA_GENEVE_DF]) + *df = nla_get_u8(data[IFLA_GENEVE_DF]); + if (data[IFLA_GENEVE_LABEL]) { info->key.label = nla_get_be32(data[IFLA_GENEVE_LABEL]) & IPV6_FLOWLABEL_MASK; @@ -1448,6 +1534,7 @@ static int geneve_newlink(struct net *net, struct net_device *dev, struct nlattr *tb[], struct nlattr *data[], struct netlink_ext_ack *extack) { + enum ifla_geneve_df df = GENEVE_DF_UNSET; bool use_udp6_rx_checksums = false; struct ip_tunnel_info info; bool ttl_inherit = false; @@ -1456,12 +1543,12 @@ static int geneve_newlink(struct net *net, struct net_device *dev, init_tnl_info(&info, GENEVE_UDP_PORT); err = geneve_nl2info(tb, data, extack, &info, &metadata, - &use_udp6_rx_checksums, &ttl_inherit, false); + &use_udp6_rx_checksums, &ttl_inherit, &df, false); if (err) return err; err = geneve_configure(net, dev, extack, &info, metadata, - use_udp6_rx_checksums, ttl_inherit); + use_udp6_rx_checksums, ttl_inherit, df); if (err) return err; @@ -1524,6 +1611,7 @@ static int geneve_changelink(struct net_device *dev, struct nlattr *tb[], struct ip_tunnel_info info; bool metadata; bool use_udp6_rx_checksums; + enum ifla_geneve_df df; bool ttl_inherit; int err; @@ -1539,7 +1627,7 @@ static int geneve_changelink(struct net_device *dev, struct nlattr *tb[], use_udp6_rx_checksums = geneve->use_udp6_rx_checksums; ttl_inherit = geneve->ttl_inherit; err = geneve_nl2info(tb, data, extack, &info, &metadata, - &use_udp6_rx_checksums, &ttl_inherit, true); + &use_udp6_rx_checksums, &ttl_inherit, &df, true); if (err) return err; @@ -1572,6 +1660,7 @@ static size_t geneve_get_size(const struct net_device *dev) 
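New link attributes follow the two-layer pattern visible above: the nla_policy entry declares the wire type (NLA_U8), while the ->validate() callback enforces the value range and reports failures through extack. The general shape, as a hedged sketch:

#include <linux/if_link.h>
#include <net/netlink.h>

static int example_validate_df(struct nlattr *data[],
			       struct netlink_ext_ack *extack)
{
	if (!data[IFLA_GENEVE_DF])
		return 0;

	if (nla_get_u8(data[IFLA_GENEVE_DF]) > GENEVE_DF_MAX) {
		/* points userspace at the offending attribute */
		NL_SET_ERR_MSG_ATTR(extack, data[IFLA_GENEVE_DF],
				    "Invalid DF attribute");
		return -EINVAL;
	}
	return 0;
}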
nla_total_size(sizeof(struct in6_addr)) + /* IFLA_GENEVE_REMOTE{6} */ nla_total_size(sizeof(__u8)) + /* IFLA_GENEVE_TTL */ nla_total_size(sizeof(__u8)) + /* IFLA_GENEVE_TOS */ + nla_total_size(sizeof(__u8)) + /* IFLA_GENEVE_DF */ nla_total_size(sizeof(__be32)) + /* IFLA_GENEVE_LABEL */ nla_total_size(sizeof(__be16)) + /* IFLA_GENEVE_PORT */ nla_total_size(0) + /* IFLA_GENEVE_COLLECT_METADATA */ @@ -1620,6 +1709,9 @@ static int geneve_fill_info(struct sk_buff *skb, const struct net_device *dev) nla_put_be32(skb, IFLA_GENEVE_LABEL, info->key.label)) goto nla_put_failure; + if (nla_put_u8(skb, IFLA_GENEVE_DF, geneve->df)) + goto nla_put_failure; + if (nla_put_be16(skb, IFLA_GENEVE_PORT, info->key.tp_dst)) goto nla_put_failure; @@ -1666,12 +1758,13 @@ struct net_device *geneve_dev_create_fb(struct net *net, const char *name, memset(tb, 0, sizeof(tb)); dev = rtnl_create_link(net, name, name_assign_type, - &geneve_link_ops, tb); + &geneve_link_ops, tb, NULL); if (IS_ERR(dev)) return dev; init_tnl_info(&info, dst_port); - err = geneve_configure(net, dev, NULL, &info, true, true, false); + err = geneve_configure(net, dev, NULL, &info, + true, true, false, GENEVE_DF_UNSET); if (err) { free_netdev(dev); return ERR_PTR(err); diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c index 17e6dcd2eb42..28c749980359 100644 --- a/drivers/net/hamradio/6pack.c +++ b/drivers/net/hamradio/6pack.c @@ -120,7 +120,7 @@ struct sixpack { struct timer_list tx_t; struct timer_list resync_t; refcount_t refcnt; - struct semaphore dead_sem; + struct completion dead; spinlock_t lock; }; @@ -389,7 +389,7 @@ static struct sixpack *sp_get(struct tty_struct *tty) static void sp_put(struct sixpack *sp) { if (refcount_dec_and_test(&sp->refcnt)) - up(&sp->dead_sem); + complete(&sp->dead); } /* @@ -576,7 +576,7 @@ static int sixpack_open(struct tty_struct *tty) spin_lock_init(&sp->lock); refcount_set(&sp->refcnt, 1); - sema_init(&sp->dead_sem, 0); + init_completion(&sp->dead); /* !!! length of the buffers. MTU is IP MTU, not PACLEN! */ @@ -670,10 +670,10 @@ static void sixpack_close(struct tty_struct *tty) * we have to wait for all existing users to finish. */ if (!refcount_dec_and_test(&sp->refcnt)) - down(&sp->dead_sem); + wait_for_completion(&sp->dead); /* We must stop the queue to avoid potentially scribbling - * on the free buffers. The sp->dead_sem is not sufficient + * on the free buffers. The sp->dead completion is not sufficient * to protect us from sp->xbuff access. */ netif_stop_queue(sp->dev); diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c index 802233d41b25..4938cf4c184c 100644 --- a/drivers/net/hamradio/mkiss.c +++ b/drivers/net/hamradio/mkiss.c @@ -81,7 +81,7 @@ struct mkiss { #define CRC_MODE_SMACK_TEST 4 atomic_t refcnt; - struct semaphore dead_sem; + struct completion dead; }; /*---------------------------------------------------------------------------*/ @@ -687,7 +687,7 @@ static struct mkiss *mkiss_get(struct tty_struct *tty) static void mkiss_put(struct mkiss *ax) { if (atomic_dec_and_test(&ax->refcnt)) - up(&ax->dead_sem); + complete(&ax->dead); } static int crc_force = 0; /* Can be overridden with insmod */ @@ -715,7 +715,7 @@ static int mkiss_open(struct tty_struct *tty) spin_lock_init(&ax->buflock); atomic_set(&ax->refcnt, 1); - sema_init(&ax->dead_sem, 0); + init_completion(&ax->dead); ax->tty = tty; tty->disc_data = ax; @@ -795,7 +795,7 @@ static void mkiss_close(struct tty_struct *tty) * we have to wait for all existing users to finish. 
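The 6pack and mkiss changes above swap an up()/down() semaphore pair for a completion, which better expresses "wait until the last user drops its reference". The whole pattern in isolation (struct and names hypothetical):

#include <linux/completion.h>
#include <linux/refcount.h>

struct chan {
	refcount_t refcnt;
	struct completion dead;
};

static void chan_init(struct chan *c)
{
	refcount_set(&c->refcnt, 1);	/* the owner's reference */
	init_completion(&c->dead);
}

static void chan_put(struct chan *c)
{
	if (refcount_dec_and_test(&c->refcnt))
		complete(&c->dead);	/* we were the last user */
}

static void chan_close(struct chan *c)
{
	/* drop the owner's reference; if users remain, sleep until
	 * the final chan_put() signals the completion
	 */
	if (!refcount_dec_and_test(&c->refcnt))
		wait_for_completion(&c->dead);
}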
*/ if (!atomic_dec_and_test(&ax->refcnt)) - down(&ax->dead_sem); + wait_for_completion(&ax->dead); /* * Halt the transmit queue so that a new transmit cannot scribble * on our buffers diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index cf36e7ff3191..91ed15ea5883 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -137,7 +137,7 @@ static int netvsc_open(struct net_device *net) * slave as up. If open fails, then slave will be * still be offline (and not used). */ - ret = dev_open(vf_netdev); + ret = dev_open(vf_netdev, NULL); if (ret) netdev_warn(net, "unable to open slave: %s: %d\n", @@ -605,9 +605,9 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net) IEEE_8021Q_INFO); vlan->value = 0; - vlan->vlanid = skb->vlan_tci & VLAN_VID_MASK; - vlan->pri = (skb->vlan_tci & VLAN_PRIO_MASK) >> - VLAN_PRIO_SHIFT; + vlan->vlanid = skb_vlan_tag_get_id(skb); + vlan->cfi = skb_vlan_tag_get_cfi(skb); + vlan->pri = skb_vlan_tag_get_prio(skb); } if (skb_is_gso(skb)) { @@ -781,7 +781,8 @@ static struct sk_buff *netvsc_alloc_recv_skb(struct net_device *net, } if (vlan) { - u16 vlan_tci = vlan->vlanid | (vlan->pri << VLAN_PRIO_SHIFT); + u16 vlan_tci = vlan->vlanid | (vlan->pri << VLAN_PRIO_SHIFT) | + (vlan->cfi ? VLAN_CFI_MASK : 0); __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tci); @@ -1246,7 +1247,7 @@ static int netvsc_set_mac_addr(struct net_device *ndev, void *p) return -ENODEV; if (vf_netdev) { - err = dev_set_mac_address(vf_netdev, addr); + err = dev_set_mac_address(vf_netdev, addr, NULL); if (err) return err; } @@ -1257,7 +1258,7 @@ static int netvsc_set_mac_addr(struct net_device *ndev, void *p) } else if (vf_netdev) { /* rollback change on VF */ memcpy(addr->sa_data, ndev->dev_addr, ETH_ALEN); - dev_set_mac_address(vf_netdev, addr); + dev_set_mac_address(vf_netdev, addr, NULL); } return err; @@ -1992,7 +1993,7 @@ static void __netvsc_vf_setup(struct net_device *ndev, "unable to change mtu to %u\n", ndev->mtu); /* set multicast etc flags on VF */ - dev_change_flags(vf_netdev, ndev->flags | IFF_SLAVE); + dev_change_flags(vf_netdev, ndev->flags | IFF_SLAVE, NULL); /* sync address list from ndev to VF */ netif_addr_lock_bh(ndev); @@ -2001,7 +2002,7 @@ static void __netvsc_vf_setup(struct net_device *ndev, netif_addr_unlock_bh(ndev); if (netif_running(ndev)) { - ret = dev_open(vf_netdev); + ret = dev_open(vf_netdev, NULL); if (ret) netdev_warn(vf_netdev, "unable to open: %d\n", ret); diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c index 3d9e91579866..0253eb502153 100644 --- a/drivers/net/ieee802154/at86rf230.c +++ b/drivers/net/ieee802154/at86rf230.c @@ -1632,18 +1632,7 @@ static int at86rf230_stats_show(struct seq_file *file, void *offset) seq_printf(file, "INVALID:\t\t%8llu\n", lp->trac.invalid); return 0; } - -static int at86rf230_stats_open(struct inode *inode, struct file *file) -{ - return single_open(file, at86rf230_stats_show, inode->i_private); -} - -static const struct file_operations at86rf230_stats_fops = { - .open = at86rf230_stats_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(at86rf230_stats); static int at86rf230_debugfs_init(struct at86rf230_local *lp) { diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c index 4a949569ec4c..19bdde60680c 100644 --- a/drivers/net/ipvlan/ipvlan_main.c +++ b/drivers/net/ipvlan/ipvlan_main.c @@ -71,7 +71,8 @@ static void 
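The netvsc hunks above stop open-coding VLAN TCI bit-fiddling in favour of the accessors, including the skb_vlan_tag_get_cfi() helper this series introduces. Splitting and reassembling a tag then looks like this (a sketch, assuming a hw-accel tag is present):

#include <linux/if_vlan.h>

static u16 example_repack_tci(const struct sk_buff *skb)
{
	u16 vid  = skb_vlan_tag_get_id(skb);	/* VID, bits 0-11 */
	u16 cfi  = skb_vlan_tag_get_cfi(skb);	/* CFI/DEI, bit 12 */
	u16 prio = skb_vlan_tag_get_prio(skb);	/* PCP, bits 13-15 */

	return vid | (prio << VLAN_PRIO_SHIFT) | (cfi ? VLAN_CFI_MASK : 0);
}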
ipvlan_unregister_nf_hook(struct net *net) ARRAY_SIZE(ipvl_nfops)); } -static int ipvlan_set_port_mode(struct ipvl_port *port, u16 nval) +static int ipvlan_set_port_mode(struct ipvl_port *port, u16 nval, + struct netlink_ext_ack *extack) { struct ipvl_dev *ipvlan; struct net_device *mdev = port->dev; @@ -84,10 +85,12 @@ static int ipvlan_set_port_mode(struct ipvl_port *port, u16 nval) flags = ipvlan->dev->flags; if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S) { err = dev_change_flags(ipvlan->dev, - flags | IFF_NOARP); + flags | IFF_NOARP, + extack); } else { err = dev_change_flags(ipvlan->dev, - flags & ~IFF_NOARP); + flags & ~IFF_NOARP, + extack); } if (unlikely(err)) goto fail; @@ -116,9 +119,11 @@ fail: flags = ipvlan->dev->flags; if (port->mode == IPVLAN_MODE_L3 || port->mode == IPVLAN_MODE_L3S) - dev_change_flags(ipvlan->dev, flags | IFF_NOARP); + dev_change_flags(ipvlan->dev, flags | IFF_NOARP, + NULL); else - dev_change_flags(ipvlan->dev, flags & ~IFF_NOARP); + dev_change_flags(ipvlan->dev, flags & ~IFF_NOARP, + NULL); } return err; @@ -498,7 +503,7 @@ static int ipvlan_nl_changelink(struct net_device *dev, if (data[IFLA_IPVLAN_MODE]) { u16 nmode = nla_get_u16(data[IFLA_IPVLAN_MODE]); - err = ipvlan_set_port_mode(port, nmode); + err = ipvlan_set_port_mode(port, nmode, extack); } if (!err && data[IFLA_IPVLAN_FLAGS]) { @@ -535,7 +540,7 @@ static int ipvlan_nl_validate(struct nlattr *tb[], struct nlattr *data[], if (data[IFLA_IPVLAN_MODE]) { u16 mode = nla_get_u16(data[IFLA_IPVLAN_MODE]); - if (mode < IPVLAN_MODE_L2 || mode >= IPVLAN_MODE_MAX) + if (mode >= IPVLAN_MODE_MAX) return -EINVAL; } if (data[IFLA_IPVLAN_FLAGS]) { @@ -672,7 +677,7 @@ int ipvlan_link_new(struct net *src_net, struct net_device *dev, if (data && data[IFLA_IPVLAN_MODE]) mode = nla_get_u16(data[IFLA_IPVLAN_MODE]); - err = ipvlan_set_port_mode(port, mode); + err = ipvlan_set_port_mode(port, mode, extack); if (err) goto unlink_netdev; @@ -754,10 +759,13 @@ EXPORT_SYMBOL_GPL(ipvlan_link_register); static int ipvlan_device_event(struct notifier_block *unused, unsigned long event, void *ptr) { + struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(ptr); + struct netdev_notifier_pre_changeaddr_info *prechaddr_info; struct net_device *dev = netdev_notifier_info_to_dev(ptr); struct ipvl_dev *ipvlan, *next; struct ipvl_port *port; LIST_HEAD(lst_kill); + int err; if (!netif_is_ipvlan_port(dev)) return NOTIFY_DONE; @@ -813,6 +821,17 @@ static int ipvlan_device_event(struct notifier_block *unused, ipvlan_adjust_mtu(ipvlan, dev); break; + case NETDEV_PRE_CHANGEADDR: + prechaddr_info = ptr; + list_for_each_entry(ipvlan, &port->ipvlans, pnode) { + err = dev_pre_changeaddr_notify(ipvlan->dev, + prechaddr_info->dev_addr, + extack); + if (err) + return notifier_from_errno(err); + } + break; + case NETDEV_CHANGEADDR: list_for_each_entry(ipvlan, &port->ipvlans, pnode) { ether_addr_copy(ipvlan->dev->dev_addr, dev->dev_addr); diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c index 0da3d36b283b..fc726ce4c164 100644 --- a/drivers/net/macvlan.c +++ b/drivers/net/macvlan.c @@ -744,7 +744,7 @@ static int macvlan_set_mac_address(struct net_device *dev, void *p) if (vlan->mode == MACVLAN_MODE_PASSTHRU) { macvlan_set_addr_change(vlan->port); - return dev_set_mac_address(vlan->lowerdev, addr); + return dev_set_mac_address(vlan->lowerdev, addr, NULL); } if (macvlan_addr_busy(vlan->port, addr->sa_data)) @@ -1213,7 +1213,7 @@ static void macvlan_port_destroy(struct net_device *dev) sa.sa_family = port->dev->type; 
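Two threads run through the ipvlan hunks above: core helpers such as dev_change_flags() (and, in the surrounding drivers, dev_open() and dev_set_mac_address()) now take an optional extack, and the new NETDEV_PRE_CHANGEADDR event lets stacked devices veto a lower device's MAC change before it takes effect. A consumer sketch (the multicast-MAC policy is invented for illustration):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/netlink.h>

static int example_netdev_event(struct notifier_block *nb,
				unsigned long event, void *ptr)
{
	struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(ptr);
	struct netdev_notifier_pre_changeaddr_info *info;

	if (event != NETDEV_PRE_CHANGEADDR)
		return NOTIFY_DONE;

	info = ptr;
	if (is_multicast_ether_addr(info->dev_addr)) {
		NL_SET_ERR_MSG(extack, "multicast MAC addresses not allowed");
		return notifier_from_errno(-EINVAL);
	}
	return NOTIFY_DONE;
}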
memcpy(&sa.sa_data, port->perm_addr, port->dev->addr_len); - dev_set_mac_address(port->dev, &sa); + dev_set_mac_address(port->dev, &sa, NULL); } kfree(port); diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c index e964d312f4ca..ed1166adaa2f 100644 --- a/drivers/net/net_failover.c +++ b/drivers/net/net_failover.c @@ -40,14 +40,14 @@ static int net_failover_open(struct net_device *dev) primary_dev = rtnl_dereference(nfo_info->primary_dev); if (primary_dev) { - err = dev_open(primary_dev); + err = dev_open(primary_dev, NULL); if (err) goto err_primary_open; } standby_dev = rtnl_dereference(nfo_info->standby_dev); if (standby_dev) { - err = dev_open(standby_dev); + err = dev_open(standby_dev, NULL); if (err) goto err_standby_open; } @@ -517,7 +517,7 @@ static int net_failover_slave_register(struct net_device *slave_dev, dev_hold(slave_dev); if (netif_running(failover_dev)) { - err = dev_open(slave_dev); + err = dev_open(slave_dev, NULL); if (err && (err != -EBUSY)) { netdev_err(failover_dev, "Opening slave %s failed err:%d\n", slave_dev->name, err); @@ -680,7 +680,7 @@ static int net_failover_slave_name_change(struct net_device *slave_dev, /* We need to bring up the slave after the rename by udev in case * open failed with EBUSY when it was registered. */ - dev_open(slave_dev); + dev_open(slave_dev, NULL); return 0; } diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c index cb3518474f0e..172b271c8bd2 100644 --- a/drivers/net/netdevsim/bpf.c +++ b/drivers/net/netdevsim/bpf.c @@ -48,7 +48,7 @@ struct nsim_bpf_bound_map { struct list_head l; }; -static int nsim_debugfs_bpf_string_read(struct seq_file *file, void *data) +static int nsim_bpf_string_show(struct seq_file *file, void *data) { const char **str = file->private; @@ -57,19 +57,7 @@ static int nsim_debugfs_bpf_string_read(struct seq_file *file, void *data) return 0; } - -static int nsim_debugfs_bpf_string_open(struct inode *inode, struct file *f) -{ - return single_open(f, nsim_debugfs_bpf_string_read, inode->i_private); -} - -static const struct file_operations nsim_bpf_string_fops = { - .owner = THIS_MODULE, - .open = nsim_debugfs_bpf_string_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek -}; +DEFINE_SHOW_ATTRIBUTE(nsim_bpf_string); static int nsim_bpf_verify_insn(struct bpf_verifier_env *env, int insn_idx, int prev_insn) @@ -91,11 +79,6 @@ static int nsim_bpf_finalize(struct bpf_verifier_env *env) return 0; } -static const struct bpf_prog_offload_ops nsim_bpf_analyzer_ops = { - .insn_hook = nsim_bpf_verify_insn, - .finalize = nsim_bpf_finalize, -}; - static bool nsim_xdp_offload_active(struct netdevsim *ns) { return ns->xdp_hw.prog; @@ -263,6 +246,24 @@ static int nsim_bpf_create_prog(struct netdevsim *ns, struct bpf_prog *prog) return 0; } +static int nsim_bpf_verifier_prep(struct bpf_prog *prog) +{ + struct netdevsim *ns = netdev_priv(prog->aux->offload->netdev); + + if (!ns->bpf_bind_accept) + return -EOPNOTSUPP; + + return nsim_bpf_create_prog(ns, prog); +} + +static int nsim_bpf_translate(struct bpf_prog *prog) +{ + struct nsim_bpf_bound_prog *state = prog->aux->offload->dev_priv; + + state->state = "xlated"; + return 0; +} + static void nsim_bpf_destroy_prog(struct bpf_prog *prog) { struct nsim_bpf_bound_prog *state; @@ -275,6 +276,14 @@ static void nsim_bpf_destroy_prog(struct bpf_prog *prog) kfree(state); } +static const struct bpf_prog_offload_ops nsim_bpf_dev_ops = { + .insn_hook = nsim_bpf_verify_insn, + .finalize = nsim_bpf_finalize, + .prepare = 
nsim_bpf_verifier_prep, + .translate = nsim_bpf_translate, + .destroy = nsim_bpf_destroy_prog, +}; + static int nsim_setup_prog_checks(struct netdevsim *ns, struct netdev_bpf *bpf) { if (bpf->prog && bpf->prog->aux->offload) { @@ -533,30 +542,11 @@ static void nsim_bpf_map_free(struct bpf_offloaded_map *offmap) int nsim_bpf(struct net_device *dev, struct netdev_bpf *bpf) { struct netdevsim *ns = netdev_priv(dev); - struct nsim_bpf_bound_prog *state; int err; ASSERT_RTNL(); switch (bpf->command) { - case BPF_OFFLOAD_VERIFIER_PREP: - if (!ns->bpf_bind_accept) - return -EOPNOTSUPP; - - err = nsim_bpf_create_prog(ns, bpf->verifier.prog); - if (err) - return err; - - bpf->verifier.ops = &nsim_bpf_analyzer_ops; - return 0; - case BPF_OFFLOAD_TRANSLATE: - state = bpf->offload.prog->aux->offload->dev_priv; - - state->state = "xlated"; - return 0; - case BPF_OFFLOAD_DESTROY: - nsim_bpf_destroy_prog(bpf->offload.prog); - return 0; case XDP_QUERY_PROG: return xdp_attachment_query(&ns->xdp, bpf); case XDP_QUERY_PROG_HW: @@ -599,7 +589,7 @@ int nsim_bpf_init(struct netdevsim *ns) if (IS_ERR_OR_NULL(ns->sdev->ddir_bpf_bound_progs)) return -ENOMEM; - ns->sdev->bpf_dev = bpf_offload_dev_create(); + ns->sdev->bpf_dev = bpf_offload_dev_create(&nsim_bpf_dev_ops); err = PTR_ERR_OR_ZERO(ns->sdev->bpf_dev); if (err) return err; diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c index 2dcf6cc269d0..76e11d889bb6 100644 --- a/drivers/net/netdevsim/ipsec.c +++ b/drivers/net/netdevsim/ipsec.c @@ -227,18 +227,19 @@ static const struct xfrmdev_ops nsim_xfrmdev_ops = { bool nsim_ipsec_tx(struct netdevsim *ns, struct sk_buff *skb) { + struct sec_path *sp = skb_sec_path(skb); struct nsim_ipsec *ipsec = &ns->ipsec; struct xfrm_state *xs; struct nsim_sa *tsa; u32 sa_idx; /* do we even need to check this packet? 
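The netdevsim change above is the driver side of moving BPF offload callbacks out of the ndo_bpf command switch (BPF_OFFLOAD_VERIFIER_PREP/TRANSLATE/DESTROY) into bpf_prog_offload_ops, handed to the core once at bpf_offload_dev_create() time. Stubbed down to its skeleton (hypothetical driver; only the relocated hooks shown):

#include <linux/bpf.h>
#include <linux/err.h>

struct example_dev {
	struct bpf_offload_dev *bpf_dev;
};

static int example_prepare(struct bpf_prog *prog)	{ return 0; }
static int example_translate(struct bpf_prog *prog)	{ return 0; }
static void example_destroy(struct bpf_prog *prog)	{ }

static const struct bpf_prog_offload_ops example_offload_ops = {
	.prepare   = example_prepare,	/* was BPF_OFFLOAD_VERIFIER_PREP */
	.translate = example_translate,	/* was BPF_OFFLOAD_TRANSLATE */
	.destroy   = example_destroy,	/* was BPF_OFFLOAD_DESTROY */
};

static int example_offload_init(struct example_dev *edev)
{
	edev->bpf_dev = bpf_offload_dev_create(&example_offload_ops);
	return PTR_ERR_OR_ZERO(edev->bpf_dev);
}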
*/ - if (!skb->sp) + if (!sp) return true; - if (unlikely(!skb->sp->len)) { + if (unlikely(!sp->len)) { netdev_err(ns->netdev, "no xfrm state len = %d\n", - skb->sp->len); + sp->len); return false; } diff --git a/drivers/net/phy/amd.c b/drivers/net/phy/amd.c index 6fe5dc9201d0..9d0504f3e3b2 100644 --- a/drivers/net/phy/amd.c +++ b/drivers/net/phy/amd.c @@ -66,7 +66,6 @@ static struct phy_driver am79c_driver[] = { { .name = "AM79C874", .phy_id_mask = 0xfffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = am79c_config_init, .ack_interrupt = am79c_ack_interrupt, .config_intr = am79c_config_intr, diff --git a/drivers/net/phy/aquantia.c b/drivers/net/phy/aquantia.c index 632472cab3bb..beb3309bb0f0 100644 --- a/drivers/net/phy/aquantia.c +++ b/drivers/net/phy/aquantia.c @@ -25,15 +25,10 @@ #define PHY_ID_AQR107 0x03a1b4e0 #define PHY_ID_AQR405 0x03a1b4b0 -#define PHY_AQUANTIA_FEATURES (SUPPORTED_10000baseT_Full | \ - SUPPORTED_1000baseT_Full | \ - SUPPORTED_100baseT_Full | \ - PHY_DEFAULT_FEATURES) - static int aquantia_config_aneg(struct phy_device *phydev) { - phydev->supported = PHY_AQUANTIA_FEATURES; - phydev->advertising = phydev->supported; + linkmode_copy(phydev->supported, phy_10gbit_features); + linkmode_copy(phydev->advertising, phydev->supported); return 0; } @@ -116,7 +111,6 @@ static struct phy_driver aquantia_driver[] = { .phy_id_mask = 0xfffffff0, .name = "Aquantia AQ1202", .features = PHY_10GBIT_FULL_FEATURES, - .flags = PHY_HAS_INTERRUPT, .aneg_done = genphy_c45_aneg_done, .config_aneg = aquantia_config_aneg, .config_intr = aquantia_config_intr, @@ -128,7 +122,6 @@ static struct phy_driver aquantia_driver[] = { .phy_id_mask = 0xfffffff0, .name = "Aquantia AQ2104", .features = PHY_10GBIT_FULL_FEATURES, - .flags = PHY_HAS_INTERRUPT, .aneg_done = genphy_c45_aneg_done, .config_aneg = aquantia_config_aneg, .config_intr = aquantia_config_intr, @@ -140,7 +133,6 @@ static struct phy_driver aquantia_driver[] = { .phy_id_mask = 0xfffffff0, .name = "Aquantia AQR105", .features = PHY_10GBIT_FULL_FEATURES, - .flags = PHY_HAS_INTERRUPT, .aneg_done = genphy_c45_aneg_done, .config_aneg = aquantia_config_aneg, .config_intr = aquantia_config_intr, @@ -152,7 +144,6 @@ static struct phy_driver aquantia_driver[] = { .phy_id_mask = 0xfffffff0, .name = "Aquantia AQR106", .features = PHY_10GBIT_FULL_FEATURES, - .flags = PHY_HAS_INTERRUPT, .aneg_done = genphy_c45_aneg_done, .config_aneg = aquantia_config_aneg, .config_intr = aquantia_config_intr, @@ -164,7 +155,6 @@ static struct phy_driver aquantia_driver[] = { .phy_id_mask = 0xfffffff0, .name = "Aquantia AQR107", .features = PHY_10GBIT_FULL_FEATURES, - .flags = PHY_HAS_INTERRUPT, .aneg_done = genphy_c45_aneg_done, .config_aneg = aquantia_config_aneg, .config_intr = aquantia_config_intr, @@ -176,7 +166,6 @@ static struct phy_driver aquantia_driver[] = { .phy_id_mask = 0xfffffff0, .name = "Aquantia AQR405", .features = PHY_10GBIT_FULL_FEATURES, - .flags = PHY_HAS_INTERRUPT, .aneg_done = genphy_c45_aneg_done, .config_aneg = aquantia_config_aneg, .config_intr = aquantia_config_intr, diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c index e74a047a846e..f9432d053a22 100644 --- a/drivers/net/phy/at803x.c +++ b/drivers/net/phy/at803x.c @@ -379,7 +379,6 @@ static struct phy_driver at803x_driver[] = { .suspend = at803x_suspend, .resume = at803x_resume, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .ack_interrupt = at803x_ack_interrupt, .config_intr = at803x_config_intr, }, { @@ -395,7 +394,6 @@ 
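The netdevsim/ipsec hunk above replaces direct skb->sp dereferences with the skb_sec_path() accessor, which evaluates to NULL when CONFIG_XFRM is off, so callers need no #ifdef. Typical use, as a sketch:

#include <net/xfrm.h>

static bool example_skb_was_ipsec(const struct sk_buff *skb)
{
	const struct sec_path *sp = skb_sec_path(skb);

	return sp && sp->len > 0;
}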
static struct phy_driver at803x_driver[] = { .suspend = at803x_suspend, .resume = at803x_resume, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .ack_interrupt = at803x_ack_interrupt, .config_intr = at803x_config_intr, }, { @@ -410,7 +408,6 @@ static struct phy_driver at803x_driver[] = { .suspend = at803x_suspend, .resume = at803x_resume, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .aneg_done = at803x_aneg_done, .ack_interrupt = &at803x_ack_interrupt, .config_intr = &at803x_config_intr, diff --git a/drivers/net/phy/bcm63xx.c b/drivers/net/phy/bcm63xx.c index d95bffdec4c1..a88dd14a25c0 100644 --- a/drivers/net/phy/bcm63xx.c +++ b/drivers/net/phy/bcm63xx.c @@ -43,7 +43,7 @@ static int bcm63xx_config_init(struct phy_device *phydev) int reg, err; /* ASYM_PAUSE bit is marked RO in datasheet, so don't cheat */ - phydev->supported |= SUPPORTED_Pause; + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported); reg = phy_read(phydev, MII_BCM63XX_IR); if (reg < 0) @@ -69,7 +69,7 @@ static struct phy_driver bcm63xx_driver[] = { .phy_id_mask = 0xfffffc00, .name = "Broadcom BCM63XX (1)", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT | PHY_IS_INTERNAL, + .flags = PHY_IS_INTERNAL, .config_init = bcm63xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm63xx_config_intr, @@ -78,7 +78,7 @@ static struct phy_driver bcm63xx_driver[] = { .phy_id = 0x002bdc00, .phy_id_mask = 0xfffffc00, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT | PHY_IS_INTERNAL, + .flags = PHY_IS_INTERNAL, .config_init = bcm63xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm63xx_config_intr, diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c index b2b6307d64a4..712224cc442d 100644 --- a/drivers/net/phy/bcm7xxx.c +++ b/drivers/net/phy/bcm7xxx.c @@ -650,6 +650,7 @@ static int bcm7xxx_28nm_probe(struct phy_device *phydev) static struct phy_driver bcm7xxx_driver[] = { BCM7XXX_28NM_GPHY(PHY_ID_BCM7250, "Broadcom BCM7250"), + BCM7XXX_28NM_EPHY(PHY_ID_BCM7255, "Broadcom BCM7255"), BCM7XXX_28NM_EPHY(PHY_ID_BCM7260, "Broadcom BCM7260"), BCM7XXX_28NM_EPHY(PHY_ID_BCM7268, "Broadcom BCM7268"), BCM7XXX_28NM_EPHY(PHY_ID_BCM7271, "Broadcom BCM7271"), @@ -670,6 +671,7 @@ static struct phy_driver bcm7xxx_driver[] = { static struct mdio_device_id __maybe_unused bcm7xxx_tbl[] = { { PHY_ID_BCM7250, 0xfffffff0, }, + { PHY_ID_BCM7255, 0xfffffff0, }, { PHY_ID_BCM7260, 0xfffffff0, }, { PHY_ID_BCM7268, 0xfffffff0, }, { PHY_ID_BCM7271, 0xfffffff0, }, diff --git a/drivers/net/phy/bcm87xx.c b/drivers/net/phy/bcm87xx.c index f7ebdcff53e4..1b350183bffb 100644 --- a/drivers/net/phy/bcm87xx.c +++ b/drivers/net/phy/bcm87xx.c @@ -86,8 +86,12 @@ static int bcm87xx_of_reg_init(struct phy_device *phydev) static int bcm87xx_config_init(struct phy_device *phydev) { - phydev->supported = SUPPORTED_10000baseR_FEC; - phydev->advertising = ADVERTISED_10000baseR_FEC; + linkmode_zero(phydev->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT, + phydev->supported); + linkmode_zero(phydev->advertising); + linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT, + phydev->advertising); phydev->state = PHY_NOLINK; phydev->autoneg = AUTONEG_DISABLE; @@ -193,7 +197,6 @@ static struct phy_driver bcm87xx_driver[] = { .phy_id = PHY_ID_BCM8706, .phy_id_mask = 0xffffffff, .name = "Broadcom BCM8706", - .flags = PHY_HAS_INTERRUPT, .config_init = bcm87xx_config_init, .config_aneg = bcm87xx_config_aneg, .read_status = bcm87xx_read_status, @@ -205,7 +208,6 @@ 
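The bcm63xx and bcm87xx hunks above show the move from u32 SUPPORTED_*/ADVERTISED_* masks to the linkmode bitmap API: PHY abilities now live in ETHTOOL_LINK_MODE_* bit numbers, which can grow past 32 entries. The core operations, sketched on a hypothetical restriction:

#include <linux/linkmode.h>
#include <linux/phy.h>

static void example_limit_to_100full(struct phy_device *phydev)
{
	__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };

	linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, mask);
	linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, mask);

	linkmode_and(phydev->supported, phydev->supported, mask);
	linkmode_copy(phydev->advertising, phydev->supported);
}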
static struct phy_driver bcm87xx_driver[] = { .phy_id = PHY_ID_BCM8727, .phy_id_mask = 0xffffffff, .name = "Broadcom BCM8727", - .flags = PHY_HAS_INTERRUPT, .config_init = bcm87xx_config_init, .config_aneg = bcm87xx_config_aneg, .read_status = bcm87xx_read_status, diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c index 704537010453..aa73c5cc5f86 100644 --- a/drivers/net/phy/broadcom.c +++ b/drivers/net/phy/broadcom.c @@ -602,7 +602,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5411", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -611,7 +610,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5421", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -620,7 +618,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM54210E", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -629,7 +626,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5461", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -638,7 +634,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM54612E", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -647,7 +642,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM54616S", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .config_aneg = bcm54616s_config_aneg, .ack_interrupt = bcm_phy_ack_intr, @@ -657,7 +651,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5464", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -666,7 +659,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5481", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .config_aneg = bcm5481_config_aneg, .ack_interrupt = bcm_phy_ack_intr, @@ -676,7 +668,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM54810", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .config_aneg = bcm5481_config_aneg, .ack_interrupt = bcm_phy_ack_intr, @@ -686,7 +677,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5482", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm5482_config_init, .read_status = bcm5482_read_status, .ack_interrupt = bcm_phy_ack_intr, @@ -696,7 +686,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM50610", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, 
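The long run of PHY_HAS_INTERRUPT deletions through these phy drivers is possible because phylib now infers interrupt capability from the presence of the callbacks themselves; the check is equivalent to the following sketch (the in-tree helper is internal to phylib, so name and placement here are illustrative):

#include <linux/phy.h>

static bool example_drv_supports_irq(const struct phy_driver *drv)
{
	return drv->config_intr && drv->ack_interrupt;
}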
.config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -705,7 +694,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM50610M", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -714,7 +702,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM57780", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, @@ -723,7 +710,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCMAC131", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = brcm_fet_config_init, .ack_interrupt = brcm_fet_ack_interrupt, .config_intr = brcm_fet_config_intr, @@ -732,7 +718,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM5241", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = brcm_fet_config_init, .ack_interrupt = brcm_fet_ack_interrupt, .config_intr = brcm_fet_config_intr, @@ -751,7 +736,6 @@ static struct phy_driver broadcom_drivers[] = { .phy_id_mask = 0xfffffff0, .name = "Broadcom BCM89610", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = bcm54xx_config_init, .ack_interrupt = bcm_phy_ack_intr, .config_intr = bcm_phy_config_intr, diff --git a/drivers/net/phy/cicada.c b/drivers/net/phy/cicada.c index c05af00bf4b6..fea61c81bda9 100644 --- a/drivers/net/phy/cicada.c +++ b/drivers/net/phy/cicada.c @@ -108,7 +108,6 @@ static struct phy_driver cis820x_driver[] = { .name = "Cicada Cis8201", .phy_id_mask = 0x000ffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &cis820x_config_init, .ack_interrupt = &cis820x_ack_interrupt, .config_intr = &cis820x_config_intr, @@ -117,7 +116,6 @@ static struct phy_driver cis820x_driver[] = { .name = "Cicada Cis8204", .phy_id_mask = 0x000fffc0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &cis820x_config_init, .ack_interrupt = &cis820x_ack_interrupt, .config_intr = &cis820x_config_intr, diff --git a/drivers/net/phy/davicom.c b/drivers/net/phy/davicom.c index 5ee99b3b428c..97162008f42b 100644 --- a/drivers/net/phy/davicom.c +++ b/drivers/net/phy/davicom.c @@ -150,7 +150,6 @@ static struct phy_driver dm91xx_driver[] = { .name = "Davicom DM9161E", .phy_id_mask = 0x0ffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = dm9161_config_init, .config_aneg = dm9161_config_aneg, .ack_interrupt = dm9161_ack_interrupt, @@ -160,7 +159,6 @@ static struct phy_driver dm91xx_driver[] = { .name = "Davicom DM9161B/C", .phy_id_mask = 0x0ffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = dm9161_config_init, .config_aneg = dm9161_config_aneg, .ack_interrupt = dm9161_ack_interrupt, @@ -170,7 +168,6 @@ static struct phy_driver dm91xx_driver[] = { .name = "Davicom DM9161A", .phy_id_mask = 0x0ffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = dm9161_config_init, .config_aneg = dm9161_config_aneg, .ack_interrupt = dm9161_ack_interrupt, @@ -180,7 +177,6 @@ static struct phy_driver dm91xx_driver[] = { .name = "Davicom DM9131", .phy_id_mask = 0x0ffffff0, .features = PHY_BASIC_FEATURES, - .flags = 
PHY_HAS_INTERRUPT, .ack_interrupt = dm9161_ack_interrupt, .config_intr = dm9161_config_intr, } }; diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c index edd4d44a386d..18b41bc345ab 100644 --- a/drivers/net/phy/dp83640.c +++ b/drivers/net/phy/dp83640.c @@ -1521,7 +1521,6 @@ static struct phy_driver dp83640_driver = { .phy_id_mask = 0xfffffff0, .name = "NatSemi DP83640", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = dp83640_probe, .remove = dp83640_remove, .soft_reset = dp83640_soft_reset, diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c index 6e8a2a4f3a6e..24c7f149f3e6 100644 --- a/drivers/net/phy/dp83822.c +++ b/drivers/net/phy/dp83822.c @@ -318,7 +318,6 @@ static struct phy_driver dp83822_driver[] = { .phy_id_mask = 0xfffffff0, .name = "TI DP83822", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = dp83822_config_init, .soft_reset = dp83822_phy_reset, .get_wol = dp83822_get_wol, diff --git a/drivers/net/phy/dp83848.c b/drivers/net/phy/dp83848.c index 6e8e42361fd5..a6b55909d1dc 100644 --- a/drivers/net/phy/dp83848.c +++ b/drivers/net/phy/dp83848.c @@ -108,7 +108,6 @@ MODULE_DEVICE_TABLE(mdio, dp83848_tbl); .phy_id_mask = 0xfffffff0, \ .name = _name, \ .features = PHY_BASIC_FEATURES, \ - .flags = PHY_HAS_INTERRUPT, \ \ .soft_reset = genphy_soft_reset, \ .config_init = _config_init, \ diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c index b3935778b19f..da6a67d47ce9 100644 --- a/drivers/net/phy/dp83867.c +++ b/drivers/net/phy/dp83867.c @@ -334,7 +334,6 @@ static struct phy_driver dp83867_driver[] = { .phy_id_mask = 0xfffffff0, .name = "TI DP83867", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = dp83867_config_init, .soft_reset = dp83867_phy_reset, diff --git a/drivers/net/phy/dp83tc811.c b/drivers/net/phy/dp83tc811.c index 78cad134a79e..da13356999e5 100644 --- a/drivers/net/phy/dp83tc811.c +++ b/drivers/net/phy/dp83tc811.c @@ -346,7 +346,6 @@ static struct phy_driver dp83811_driver[] = { .phy_id_mask = 0xfffffff0, .name = "TI DP83TC811", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = dp83811_config_init, .config_aneg = dp83811_config_aneg, .soft_reset = dp83811_phy_reset, diff --git a/drivers/net/phy/fixed_phy.c b/drivers/net/phy/fixed_phy.c index 67b260877f30..72d43c88e6ff 100644 --- a/drivers/net/phy/fixed_phy.c +++ b/drivers/net/phy/fixed_phy.c @@ -25,6 +25,7 @@ #include #include #include +#include #include "swphy.h" @@ -38,6 +39,7 @@ struct fixed_phy { struct phy_device *phydev; seqcount_t seqcount; struct fixed_phy_status status; + bool no_carrier; int (*link_update)(struct net_device *, struct fixed_phy_status *); struct list_head node; int link_gpio; @@ -48,9 +50,28 @@ static struct fixed_mdio_bus platform_fmb = { .phys = LIST_HEAD_INIT(platform_fmb.phys), }; +int fixed_phy_change_carrier(struct net_device *dev, bool new_carrier) +{ + struct fixed_mdio_bus *fmb = &platform_fmb; + struct phy_device *phydev = dev->phydev; + struct fixed_phy *fp; + + if (!phydev || !phydev->mdio.bus) + return -EINVAL; + + list_for_each_entry(fp, &fmb->phys, node) { + if (fp->addr == phydev->mdio.addr) { + fp->no_carrier = !new_carrier; + return 0; + } + } + return -EINVAL; +} +EXPORT_SYMBOL_GPL(fixed_phy_change_carrier); + static void fixed_phy_update(struct fixed_phy *fp) { - if (gpio_is_valid(fp->link_gpio)) + if (!fp->no_carrier && gpio_is_valid(fp->link_gpio)) fp->status.link = !!gpio_get_value_cansleep(fp->link_gpio); } @@ -66,6 +87,7 @@ 
static int fixed_mdio_read(struct mii_bus *bus, int phy_addr, int reg_num) do { s = read_seqcount_begin(&fp->seqcount); + fp->status.link = !fp->no_carrier; /* Issue callback if user registered it. */ if (fp->link_update) { fp->link_update(fp->phydev->attached_dev, @@ -223,14 +245,23 @@ struct phy_device *fixed_phy_register(unsigned int irq, switch (status->speed) { case SPEED_1000: - phy->supported = PHY_1000BT_FEATURES; - break; + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + phy->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + phy->supported); + /* fall through */ case SPEED_100: - phy->supported = PHY_100BT_FEATURES; - break; + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, + phy->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + phy->supported); + /* fall through */ case SPEED_10: default: - phy->supported = PHY_10BT_FEATURES; + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, + phy->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, + phy->supported); } ret = phy_device_register(phy); diff --git a/drivers/net/phy/icplus.c b/drivers/net/phy/icplus.c index 791587a49215..7d5938b87660 100644 --- a/drivers/net/phy/icplus.c +++ b/drivers/net/phy/icplus.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -36,14 +37,34 @@ MODULE_LICENSE("GPL"); /* IP101A/G - IP1001 */ #define IP10XX_SPEC_CTRL_STATUS 16 /* Spec. Control Register */ -#define IP1001_RXPHASE_SEL (1<<0) /* Add delay on RX_CLK */ -#define IP1001_TXPHASE_SEL (1<<1) /* Add delay on TX_CLK */ +#define IP1001_RXPHASE_SEL BIT(0) /* Add delay on RX_CLK */ +#define IP1001_TXPHASE_SEL BIT(1) /* Add delay on TX_CLK */ #define IP1001_SPEC_CTRL_STATUS_2 20 /* IP1001 Spec. Control Reg 2 */ #define IP1001_APS_ON 11 /* IP1001 APS Mode bit */ -#define IP101A_G_APS_ON 2 /* IP101A/G APS Mode bit */ +#define IP101A_G_APS_ON BIT(1) /* IP101A/G APS Mode bit */ #define IP101A_G_IRQ_CONF_STATUS 0x11 /* Conf Info IRQ & Status Reg */ -#define IP101A_G_IRQ_PIN_USED (1<<15) /* INTR pin used */ -#define IP101A_G_IRQ_DEFAULT IP101A_G_IRQ_PIN_USED +#define IP101A_G_IRQ_PIN_USED BIT(15) /* INTR pin used */ +#define IP101A_G_IRQ_ALL_MASK BIT(11) /* IRQ's inactive */ +#define IP101A_G_IRQ_SPEED_CHANGE BIT(2) +#define IP101A_G_IRQ_DUPLEX_CHANGE BIT(1) +#define IP101A_G_IRQ_LINK_CHANGE BIT(0) + +#define IP101G_DIGITAL_IO_SPEC_CTRL 0x1d +#define IP101G_DIGITAL_IO_SPEC_CTRL_SEL_INTR32 BIT(2) + +/* The 32-pin IP101GR package can re-configure the mode of the RXER/INTR_32 pin + * (pin number 21). The hardware default is RXER (receive error) mode. But it + * can be configured to interrupt mode manually. 
+ */ +enum ip101gr_sel_intr32 { + IP101GR_SEL_INTR32_KEEP, + IP101GR_SEL_INTR32_INTR, + IP101GR_SEL_INTR32_RXER, +}; + +struct ip101a_g_phy_priv { + enum ip101gr_sel_intr32 sel_intr32; +}; static int ip175c_config_init(struct phy_device *phydev) { @@ -162,18 +183,92 @@ static int ip1001_config_init(struct phy_device *phydev) return 0; } +static int ip175c_read_status(struct phy_device *phydev) +{ + if (phydev->mdio.addr == 4) /* WAN port */ + genphy_read_status(phydev); + else + /* Don't need to read status for switch ports */ + phydev->irq = PHY_IGNORE_INTERRUPT; + + return 0; +} + +static int ip175c_config_aneg(struct phy_device *phydev) +{ + if (phydev->mdio.addr == 4) /* WAN port */ + genphy_config_aneg(phydev); + + return 0; +} + +static int ip101a_g_probe(struct phy_device *phydev) +{ + struct device *dev = &phydev->mdio.dev; + struct ip101a_g_phy_priv *priv; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + /* Both functions (RX error and interrupt status) are sharing the same + * pin on the 32-pin IP101GR, so this is an exclusive choice. + */ + if (device_property_read_bool(dev, "icplus,select-rx-error") && + device_property_read_bool(dev, "icplus,select-interrupt")) { + dev_err(dev, + "RXER and INTR mode cannot be selected together\n"); + return -EINVAL; + } + + if (device_property_read_bool(dev, "icplus,select-rx-error")) + priv->sel_intr32 = IP101GR_SEL_INTR32_RXER; + else if (device_property_read_bool(dev, "icplus,select-interrupt")) + priv->sel_intr32 = IP101GR_SEL_INTR32_INTR; + else + priv->sel_intr32 = IP101GR_SEL_INTR32_KEEP; + + phydev->priv = priv; + + return 0; +} + static int ip101a_g_config_init(struct phy_device *phydev) { - int c; + struct ip101a_g_phy_priv *priv = phydev->priv; + int err, c; c = ip1xx_reset(phydev); if (c < 0) return c; - /* INTR pin used: speed/link/duplex will cause an interrupt */ - c = phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, IP101A_G_IRQ_DEFAULT); - if (c < 0) - return c; + /* configure the RXER/INTR_32 pin of the 32-pin IP101GR if needed: */ + switch (priv->sel_intr32) { + case IP101GR_SEL_INTR32_RXER: + err = phy_modify(phydev, IP101G_DIGITAL_IO_SPEC_CTRL, + IP101G_DIGITAL_IO_SPEC_CTRL_SEL_INTR32, 0); + if (err < 0) + return err; + break; + + case IP101GR_SEL_INTR32_INTR: + err = phy_modify(phydev, IP101G_DIGITAL_IO_SPEC_CTRL, + IP101G_DIGITAL_IO_SPEC_CTRL_SEL_INTR32, + IP101G_DIGITAL_IO_SPEC_CTRL_SEL_INTR32); + if (err < 0) + return err; + break; + + default: + /* Don't touch IP101G_DIGITAL_IO_SPEC_CTRL because it's not + * documented on IP101A and it's not clear whether this would + * cause problems. + * For the 32-pin IP101GR we simply keep the SEL_INTR32 + * configuration as set by the bootloader when not configured + * to one of the special functions. 
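ip101a_g_probe() above reads the pin selection through device_property_read_bool(), so the same binding works whether the hardware description comes from DT or ACPI. The tri-state selection logic, reduced to a hypothetical helper:

#include <linux/errno.h>
#include <linux/property.h>

enum example_pin_mode { PIN_KEEP, PIN_INTR, PIN_RXER };

static int example_pick_pin_mode(struct device *dev,
				 enum example_pin_mode *mode)
{
	bool rxer = device_property_read_bool(dev, "icplus,select-rx-error");
	bool intr = device_property_read_bool(dev, "icplus,select-interrupt");

	if (rxer && intr)
		return -EINVAL;	/* one pin, one function */

	*mode = rxer ? PIN_RXER : intr ? PIN_INTR : PIN_KEEP;
	return 0;
}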
+ */ + break; + } /* Enable Auto Power Saving mode */ c = phy_read(phydev, IP10XX_SPEC_CTRL_STATUS); @@ -182,23 +277,29 @@ static int ip101a_g_config_init(struct phy_device *phydev) return phy_write(phydev, IP10XX_SPEC_CTRL_STATUS, c); } -static int ip175c_read_status(struct phy_device *phydev) +static int ip101a_g_config_intr(struct phy_device *phydev) { - if (phydev->mdio.addr == 4) /* WAN port */ - genphy_read_status(phydev); + u16 val; + + if (phydev->interrupts == PHY_INTERRUPT_ENABLED) + /* INTR pin used: Speed/link/duplex will cause an interrupt */ + val = IP101A_G_IRQ_PIN_USED; else - /* Don't need to read status for switch ports */ - phydev->irq = PHY_IGNORE_INTERRUPT; + val = IP101A_G_IRQ_ALL_MASK; - return 0; + return phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, val); } -static int ip175c_config_aneg(struct phy_device *phydev) +static int ip101a_g_did_interrupt(struct phy_device *phydev) { - if (phydev->mdio.addr == 4) /* WAN port */ - genphy_config_aneg(phydev); + int val = phy_read(phydev, IP101A_G_IRQ_CONF_STATUS); - return 0; + if (val < 0) + return 0; + + return val & (IP101A_G_IRQ_SPEED_CHANGE | + IP101A_G_IRQ_DUPLEX_CHANGE | + IP101A_G_IRQ_LINK_CHANGE); } static int ip101a_g_ack_interrupt(struct phy_device *phydev) @@ -234,7 +335,9 @@ static struct phy_driver icplus_driver[] = { .name = "ICPlus IP101A/G", .phy_id_mask = 0x0ffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, + .probe = ip101a_g_probe, + .config_intr = ip101a_g_config_intr, + .did_interrupt = ip101a_g_did_interrupt, .ack_interrupt = ip101a_g_ack_interrupt, .config_init = &ip101a_g_config_init, .suspend = genphy_suspend, diff --git a/drivers/net/phy/intel-xway.c b/drivers/net/phy/intel-xway.c index 7d936fb61c22..fc0f5024a29e 100644 --- a/drivers/net/phy/intel-xway.c +++ b/drivers/net/phy/intel-xway.c @@ -242,7 +242,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY11G (PEF 7071/PEF 7072) v1.3", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .config_aneg = xway_gphy14_config_aneg, .ack_interrupt = xway_gphy_ack_interrupt, @@ -255,7 +254,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY22F (PEF 7061) v1.3", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .config_aneg = xway_gphy14_config_aneg, .ack_interrupt = xway_gphy_ack_interrupt, @@ -268,7 +266,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY11G (PEF 7071/PEF 7072) v1.4", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .config_aneg = xway_gphy14_config_aneg, .ack_interrupt = xway_gphy_ack_interrupt, @@ -281,7 +278,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY22F (PEF 7061) v1.4", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .config_aneg = xway_gphy14_config_aneg, .ack_interrupt = xway_gphy_ack_interrupt, @@ -294,7 +290,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY11G (PEF 7071/PEF 7072) v1.5 / v1.6", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .ack_interrupt = xway_gphy_ack_interrupt, .did_interrupt = xway_gphy_did_interrupt, @@ -306,7 +301,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY22F 
(PEF 7061) v1.5 / v1.6", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .ack_interrupt = xway_gphy_ack_interrupt, .did_interrupt = xway_gphy_did_interrupt, @@ -318,7 +312,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY11G (xRX v1.1 integrated)", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .ack_interrupt = xway_gphy_ack_interrupt, .did_interrupt = xway_gphy_did_interrupt, @@ -330,7 +323,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY22F (xRX v1.1 integrated)", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .ack_interrupt = xway_gphy_ack_interrupt, .did_interrupt = xway_gphy_did_interrupt, @@ -342,7 +334,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY11G (xRX v1.2 integrated)", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .ack_interrupt = xway_gphy_ack_interrupt, .did_interrupt = xway_gphy_did_interrupt, @@ -354,7 +345,6 @@ static struct phy_driver xway_gphy[] = { .phy_id_mask = 0xffffffff, .name = "Intel XWAY PHY22F (xRX v1.2 integrated)", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = xway_gphy_config_init, .ack_interrupt = xway_gphy_ack_interrupt, .did_interrupt = xway_gphy_did_interrupt, diff --git a/drivers/net/phy/lxt.c b/drivers/net/phy/lxt.c index c14b254b2879..c8bb29ae1a2a 100644 --- a/drivers/net/phy/lxt.c +++ b/drivers/net/phy/lxt.c @@ -177,7 +177,7 @@ static int lxt973a2_read_status(struct phy_device *phydev) */ } while (lpa == adv && retry--); - phydev->lp_advertising = mii_lpa_to_ethtool_lpa_t(lpa); + mii_lpa_to_linkmode_lpa_t(phydev->lp_advertising, lpa); lpa &= adv; @@ -218,7 +218,7 @@ static int lxt973a2_read_status(struct phy_device *phydev) phydev->speed = SPEED_10; phydev->pause = phydev->asym_pause = 0; - phydev->lp_advertising = 0; + linkmode_zero(phydev->lp_advertising); } return 0; @@ -257,7 +257,6 @@ static struct phy_driver lxt97x_driver[] = { .name = "LXT970", .phy_id_mask = 0xfffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = lxt970_config_init, .ack_interrupt = lxt970_ack_interrupt, .config_intr = lxt970_config_intr, @@ -266,7 +265,6 @@ static struct phy_driver lxt97x_driver[] = { .name = "LXT971", .phy_id_mask = 0xfffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .ack_interrupt = lxt971_ack_interrupt, .config_intr = lxt971_config_intr, }, { diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c index cbec296107bd..a9c7c7f41b0c 100644 --- a/drivers/net/phy/marvell.c +++ b/drivers/net/phy/marvell.c @@ -491,25 +491,26 @@ static int m88e1318_config_aneg(struct phy_device *phydev) } /** - * ethtool_adv_to_fiber_adv_t - * @ethadv: the ethtool advertisement settings + * linkmode_adv_to_fiber_adv_t + * @advertise: the linkmode advertisement settings * - * A small helper function that translates ethtool advertisement - * settings to phy autonegotiation advertisements for the - * MII_ADV register for fiber link. + * A small helper function that translates linkmode advertisement + * settings to phy autonegotiation advertisements for the MII_ADV + * register for fiber link. 
*/ -static inline u32 ethtool_adv_to_fiber_adv_t(u32 ethadv) +static inline u32 linkmode_adv_to_fiber_adv_t(unsigned long *advertise) { u32 result = 0; - if (ethadv & ADVERTISED_1000baseT_Half) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, advertise)) result |= ADVERTISE_FIBER_1000HALF; - if (ethadv & ADVERTISED_1000baseT_Full) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, advertise)) result |= ADVERTISE_FIBER_1000FULL; - if ((ethadv & ADVERTISE_PAUSE_ASYM) && (ethadv & ADVERTISE_PAUSE_CAP)) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, advertise) && + linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT, advertise)) result |= LPA_PAUSE_ASYM_FIBER; - else if (ethadv & ADVERTISE_PAUSE_CAP) + else if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT, advertise)) result |= (ADVERTISE_PAUSE_FIBER & (~ADVERTISE_PAUSE_ASYM_FIBER)); @@ -530,14 +531,13 @@ static int marvell_config_aneg_fiber(struct phy_device *phydev) int changed = 0; int err; int adv, oldadv; - u32 advertise; if (phydev->autoneg != AUTONEG_ENABLE) return genphy_setup_forced(phydev); /* Only allow advertising what this PHY supports */ - phydev->advertising &= phydev->supported; - advertise = phydev->advertising; + linkmode_and(phydev->advertising, phydev->advertising, + phydev->supported); /* Setup fiber advertisement */ adv = phy_read(phydev, MII_ADVERTISE); @@ -547,7 +547,7 @@ static int marvell_config_aneg_fiber(struct phy_device *phydev) oldadv = adv; adv &= ~(ADVERTISE_FIBER_1000HALF | ADVERTISE_FIBER_1000FULL | LPA_PAUSE_FIBER); - adv |= ethtool_adv_to_fiber_adv_t(advertise); + adv |= linkmode_adv_to_fiber_adv_t(phydev->advertising); if (adv != oldadv) { err = phy_write(phydev, MII_ADVERTISE, adv); @@ -847,7 +847,6 @@ static int m88e1510_config_init(struct phy_device *phydev) /* SGMII-to-Copper mode initialization */ if (phydev->interface == PHY_INTERFACE_MODE_SGMII) { - u32 pause; /* Select page 18 */ err = marvell_set_page(phydev, 18); @@ -878,9 +877,14 @@ static int m88e1510_config_init(struct phy_device *phydev) * This means we can never be truly sure what was advertised, * so disable Pause support. */ - pause = SUPPORTED_Pause | SUPPORTED_Asym_Pause; - phydev->supported &= ~pause; - phydev->advertising &= ~pause; + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->supported); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->supported); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->advertising); } return m88e1318_config_init(phydev); @@ -1043,22 +1047,21 @@ static int m88e1145_config_init(struct phy_device *phydev) } /** - * fiber_lpa_to_ethtool_lpa_t + * fiber_lpa_mod_linkmode_lpa_t + * @advertising: the linkmode advertisement settings * @lpa: value of the MII_LPA register for fiber link * - * A small helper function that translates MII_LPA - * bits to ethtool LP advertisement settings. + * A small helper function that translates MII_LPA bits to linkmode LP + * advertisement settings. Other bits in advertising are left + * unchanged. 
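linkmode_mod_bit(), used in the body below, is the conditional form of set/clear, which is what leaves the remaining bits of @advertising untouched; it behaves like:

/* Conceptual equivalent of linkmode_mod_bit(nr, addr, set): */
static void linkmode_mod_bit_sketch(int nr, unsigned long *addr, int set)
{
        if (set)
                linkmode_set_bit(nr, addr);
        else
                linkmode_clear_bit(nr, addr);
}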
*/ -static u32 fiber_lpa_to_ethtool_lpa_t(u32 lpa) +static void fiber_lpa_mod_linkmode_lpa_t(unsigned long *advertising, u32 lpa) { - u32 result = 0; - - if (lpa & LPA_FIBER_1000HALF) - result |= ADVERTISED_1000baseT_Half; - if (lpa & LPA_FIBER_1000FULL) - result |= ADVERTISED_1000baseT_Full; + linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + advertising, lpa & LPA_FIBER_1000HALF); - return result; + linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + advertising, lpa & LPA_FIBER_1000FULL); } /** @@ -1134,9 +1137,8 @@ static int marvell_read_status_page_an(struct phy_device *phydev, } if (!fiber) { - phydev->lp_advertising = - mii_stat1000_to_ethtool_lpa_t(lpagb) | - mii_lpa_to_ethtool_lpa_t(lpa); + mii_lpa_to_linkmode_lpa_t(phydev->lp_advertising, lpa); + mii_stat1000_mod_linkmode_lpa_t(phydev->lp_advertising, lpagb); if (phydev->duplex == DUPLEX_FULL) { phydev->pause = lpa & LPA_PAUSE_CAP ? 1 : 0; @@ -1144,7 +1146,7 @@ static int marvell_read_status_page_an(struct phy_device *phydev, } } else { /* The fiber link is only 1000M capable */ - phydev->lp_advertising = fiber_lpa_to_ethtool_lpa_t(lpa); + fiber_lpa_mod_linkmode_lpa_t(phydev->lp_advertising, lpa); if (phydev->duplex == DUPLEX_FULL) { if (!(lpa & LPA_PAUSE_FIBER)) { @@ -1183,7 +1185,7 @@ static int marvell_read_status_page_fixed(struct phy_device *phydev) phydev->pause = 0; phydev->asym_pause = 0; - phydev->lp_advertising = 0; + linkmode_zero(phydev->lp_advertising); return 0; } @@ -1235,7 +1237,8 @@ static int marvell_read_status(struct phy_device *phydev) int err; /* Check the fiber mode first */ - if (phydev->supported & SUPPORTED_FIBRE && + if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, + phydev->supported) && phydev->interface != PHY_INTERFACE_MODE_SGMII) { err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE); if (err < 0) @@ -1278,7 +1281,8 @@ static int marvell_suspend(struct phy_device *phydev) int err; /* Suspend the fiber mode first */ - if (!(phydev->supported & SUPPORTED_FIBRE)) { + if (!linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, + phydev->supported)) { err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE); if (err < 0) goto error; @@ -1312,7 +1316,8 @@ static int marvell_resume(struct phy_device *phydev) int err; /* Resume the fiber mode first */ - if (!(phydev->supported & SUPPORTED_FIBRE)) { + if (!linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, + phydev->supported)) { err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE); if (err < 0) goto error; @@ -1463,7 +1468,8 @@ error: static int marvell_get_sset_count(struct phy_device *phydev) { - if (phydev->supported & SUPPORTED_FIBRE) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, + phydev->supported)) return ARRAY_SIZE(marvell_hw_stats); else return ARRAY_SIZE(marvell_hw_stats) - NB_FIBER_STATS; @@ -2005,7 +2011,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1101", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &marvell_config_init, .config_aneg = &m88e1101_config_aneg, @@ -2024,7 +2029,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1112", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1111_config_init, .config_aneg = &marvell_config_aneg, @@ -2043,7 +2047,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1111", .features = PHY_GBIT_FEATURES, - .flags = 
PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1111_config_init, .config_aneg = &marvell_config_aneg, @@ -2063,7 +2066,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1118", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1118_config_init, .config_aneg = &m88e1118_config_aneg, @@ -2082,7 +2084,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1121R", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = &m88e1121_probe, .config_init = &marvell_config_init, .config_aneg = &m88e1121_config_aneg, @@ -2103,7 +2104,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1318S", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1318_config_init, .config_aneg = &m88e1318_config_aneg, @@ -2126,7 +2126,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1145", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1145_config_init, .config_aneg = &m88e1101_config_aneg, @@ -2146,7 +2145,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1149R", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1149_config_init, .config_aneg = &m88e1118_config_aneg, @@ -2165,7 +2163,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1240", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1111_config_init, .config_aneg = &marvell_config_aneg, @@ -2184,7 +2181,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1116R", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e1116r_config_init, .ack_interrupt = &marvell_ack_interrupt, @@ -2202,7 +2198,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1510", .features = PHY_GBIT_FIBRE_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = &m88e1510_probe, .config_init = &m88e1510_config_init, .config_aneg = &m88e1510_config_aneg, @@ -2226,7 +2221,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E1540", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = m88e1510_probe, .config_init = &marvell_config_init, .config_aneg = &m88e1510_config_aneg, @@ -2248,7 +2242,6 @@ static struct phy_driver marvell_drivers[] = { .name = "Marvell 88E1545", .probe = m88e1510_probe, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &marvell_config_init, .config_aneg = &m88e1510_config_aneg, .read_status = &marvell_read_status, @@ -2268,7 +2261,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E3016", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = marvell_probe, .config_init = &m88e3016_config_init, .aneg_done = &marvell_aneg_done, @@ -2289,7 +2281,6 @@ static struct phy_driver marvell_drivers[] = { .phy_id_mask = MARVELL_PHY_ID_MASK, .name = "Marvell 88E6390", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe 
= m88e6390_probe, .config_init = &marvell_config_init, .config_aneg = &m88e1510_config_aneg, diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c index 1c9d039eec63..82ab6ed3b74e 100644 --- a/drivers/net/phy/marvell10g.c +++ b/drivers/net/phy/marvell10g.c @@ -252,7 +252,6 @@ static int mv3310_resume(struct phy_device *phydev) static int mv3310_config_init(struct phy_device *phydev) { __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, }; - u32 mask; int val; /* Check that the PHY interface type is compatible */ @@ -336,13 +335,9 @@ static int mv3310_config_init(struct phy_device *phydev) } } - if (!ethtool_convert_link_mode_to_legacy_u32(&mask, supported)) - phydev_warn(phydev, - "PHY supports (%*pb) more modes than phylib supports, some modes not supported.\n", - __ETHTOOL_LINK_MODE_MASK_NBITS, supported); - - phydev->supported &= mask; - phydev->advertising &= phydev->supported; + linkmode_copy(phydev->supported, supported); + linkmode_and(phydev->advertising, phydev->advertising, + phydev->supported); return 0; } @@ -350,7 +345,7 @@ static int mv3310_config_init(struct phy_device *phydev) static int mv3310_config_aneg(struct phy_device *phydev) { bool changed = false; - u32 advertising; + u16 reg; int ret; /* We don't support manual MDI control */ @@ -364,31 +359,35 @@ static int mv3310_config_aneg(struct phy_device *phydev) return genphy_c45_an_disable_aneg(phydev); } - phydev->advertising &= phydev->supported; - advertising = phydev->advertising; + linkmode_and(phydev->advertising, phydev->advertising, + phydev->supported); ret = mv3310_modify(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE, ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM, - ethtool_adv_to_mii_adv_t(advertising)); + linkmode_adv_to_mii_adv_t(phydev->advertising)); if (ret < 0) return ret; if (ret > 0) changed = true; + reg = linkmode_adv_to_mii_ctrl1000_t(phydev->advertising); ret = mv3310_modify(phydev, MDIO_MMD_AN, MV_AN_CTRL1000, - ADVERTISE_1000FULL | ADVERTISE_1000HALF, - ethtool_adv_to_mii_ctrl1000_t(advertising)); + ADVERTISE_1000FULL | ADVERTISE_1000HALF, reg); if (ret < 0) return ret; if (ret > 0) changed = true; /* 10G control register */ + if (linkmode_test_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT, + phydev->advertising)) + reg = MDIO_AN_10GBT_CTRL_ADV10G; + else + reg = 0; + ret = mv3310_modify(phydev, MDIO_MMD_AN, MDIO_AN_10GBT_CTRL, - MDIO_AN_10GBT_CTRL_ADV10G, - advertising & ADVERTISED_10000baseT_Full ? 
- MDIO_AN_10GBT_CTRL_ADV10G : 0); + MDIO_AN_10GBT_CTRL_ADV10G, reg); if (ret < 0) return ret; if (ret > 0) @@ -458,7 +457,7 @@ static int mv3310_read_status(struct phy_device *phydev) phydev->speed = SPEED_UNKNOWN; phydev->duplex = DUPLEX_UNKNOWN; - phydev->lp_advertising = 0; + linkmode_zero(phydev->lp_advertising); phydev->link = 0; phydev->pause = 0; phydev->asym_pause = 0; @@ -491,7 +490,7 @@ static int mv3310_read_status(struct phy_device *phydev) if (val < 0) return val; - phydev->lp_advertising |= mii_stat1000_to_ethtool_lpa_t(val); + mii_stat1000_mod_linkmode_lpa_t(phydev->lp_advertising, val); if (phydev->autoneg == AUTONEG_ENABLE) phy_resolve_aneg_linkmode(phydev); diff --git a/drivers/net/phy/mdio-gpio.c b/drivers/net/phy/mdio-gpio.c index 0fbcedcdf6e2..ea9a0e339778 100644 --- a/drivers/net/phy/mdio-gpio.c +++ b/drivers/net/phy/mdio-gpio.c @@ -24,6 +24,7 @@ #include #include #include +#include #include #include #include @@ -112,6 +113,7 @@ static struct mii_bus *mdio_gpio_bus_init(struct device *dev, struct mdio_gpio_info *bitbang, int bus_id) { + struct mdio_gpio_platform_data *pdata = dev_get_platdata(dev); struct mii_bus *new_bus; bitbang->ctrl.ops = &mdio_gpio_ops; @@ -128,6 +130,11 @@ static struct mii_bus *mdio_gpio_bus_init(struct device *dev, else strncpy(new_bus->id, "gpio", MII_BUS_ID_SIZE); + if (pdata) { + new_bus->phy_mask = pdata->phy_mask; + new_bus->phy_ignore_ta_mask = pdata->phy_ignore_ta_mask; + } + dev_set_drvdata(dev, new_bus); return new_bus; diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c index ddc2c5ea3787..b03bcf2c388a 100644 --- a/drivers/net/phy/meson-gxl.c +++ b/drivers/net/phy/meson-gxl.c @@ -232,7 +232,7 @@ static struct phy_driver meson_gxl_phy[] = { .phy_id_mask = 0xfffffff0, .name = "Meson GXL Internal PHY", .features = PHY_BASIC_FEATURES, - .flags = PHY_IS_INTERNAL | PHY_HAS_INTERRUPT, + .flags = PHY_IS_INTERNAL, .config_init = meson_gxl_config_init, .aneg_done = genphy_aneg_done, .read_status = meson_gxl_read_status, diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c index 9265dea79412..c33384710d26 100644 --- a/drivers/net/phy/micrel.c +++ b/drivers/net/phy/micrel.c @@ -311,17 +311,22 @@ static int kszphy_config_init(struct phy_device *phydev) static int ksz8041_config_init(struct phy_device *phydev) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; + struct device_node *of_node = phydev->mdio.dev.of_node; /* Limit supported and advertised modes in fiber mode */ if (of_property_read_bool(of_node, "micrel,fiber-mode")) { phydev->dev_flags |= MICREL_PHY_FXEN; - phydev->supported &= SUPPORTED_100baseT_Full | - SUPPORTED_100baseT_Half; - phydev->supported |= SUPPORTED_FIBRE; - phydev->advertising &= ADVERTISED_100baseT_Full | - ADVERTISED_100baseT_Half; - phydev->advertising |= ADVERTISED_FIBRE; + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask); + + linkmode_and(phydev->supported, phydev->supported, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, + phydev->supported); + linkmode_and(phydev->advertising, phydev->advertising, mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, + phydev->advertising); phydev->autoneg = AUTONEG_DISABLE; } @@ -918,7 +923,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Micrel KS8737", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ks8737_type, .config_init = kszphy_config_init, .ack_interrupt = kszphy_ack_interrupt, @@ 
-930,7 +934,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = 0x00ffffff, .name = "Micrel KSZ8021 or KSZ8031", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz8021_type, .probe = kszphy_probe, .config_init = kszphy_config_init, @@ -946,7 +949,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = 0x00ffffff, .name = "Micrel KSZ8031", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz8021_type, .probe = kszphy_probe, .config_init = kszphy_config_init, @@ -962,7 +964,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Micrel KSZ8041", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz8041_type, .probe = kszphy_probe, .config_init = ksz8041_config_init, @@ -979,7 +980,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Micrel KSZ8041RNLI", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz8041_type, .probe = kszphy_probe, .config_init = kszphy_config_init, @@ -995,7 +995,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Micrel KSZ8051", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz8051_type, .probe = kszphy_probe, .config_init = kszphy_config_init, @@ -1011,7 +1010,6 @@ static struct phy_driver ksphy_driver[] = { .name = "Micrel KSZ8001 or KS8721", .phy_id_mask = 0x00fffffc, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz8041_type, .probe = kszphy_probe, .config_init = kszphy_config_init, @@ -1027,7 +1025,6 @@ static struct phy_driver ksphy_driver[] = { .name = "Micrel KSZ8081 or KSZ8091", .phy_id_mask = MICREL_PHY_ID_MASK, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz8081_type, .probe = kszphy_probe, .config_init = kszphy_config_init, @@ -1043,7 +1040,6 @@ static struct phy_driver ksphy_driver[] = { .name = "Micrel KSZ8061", .phy_id_mask = MICREL_PHY_ID_MASK, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = kszphy_config_init, .ack_interrupt = kszphy_ack_interrupt, .config_intr = kszphy_config_intr, @@ -1054,7 +1050,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = 0x000ffffe, .name = "Micrel KSZ9021 Gigabit PHY", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz9021_type, .probe = kszphy_probe, .config_init = ksz9021_config_init, @@ -1072,7 +1067,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Micrel KSZ9031 Gigabit PHY", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz9021_type, .probe = kszphy_probe, .config_init = ksz9031_config_init, @@ -1089,7 +1083,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Microchip KSZ9131 Gigabit PHY", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .driver_data = &ksz9021_type, .probe = kszphy_probe, .config_init = ksz9131_config_init, @@ -1115,7 +1108,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Micrel KSZ886X Switch", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = kszphy_config_init, .suspend = genphy_suspend, .resume = genphy_resume, @@ -1124,7 +1116,6 @@ static struct phy_driver ksphy_driver[] = { .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Micrel KSZ8795", .features = PHY_BASIC_FEATURES, - .flags = 
PHY_HAS_INTERRUPT, .config_init = kszphy_config_init, .config_aneg = ksz8873mll_config_aneg, .read_status = ksz8873mll_read_status, diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c index 04b12e34da58..7557bebd5d7f 100644 --- a/drivers/net/phy/microchip.c +++ b/drivers/net/phy/microchip.c @@ -346,7 +346,6 @@ static struct phy_driver microchip_phy_driver[] = { .name = "Microchip LAN88xx", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = lan88xx_probe, .remove = lan88xx_remove, diff --git a/drivers/net/phy/microchip_t1.c b/drivers/net/phy/microchip_t1.c index c600a8509d60..3d09b471632c 100644 --- a/drivers/net/phy/microchip_t1.c +++ b/drivers/net/phy/microchip_t1.c @@ -47,7 +47,6 @@ static struct phy_driver microchip_t1_phy_driver[] = { .name = "Microchip LAN87xx T1", .features = PHY_BASIC_T1_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = genphy_config_init, .config_aneg = genphy_config_aneg, diff --git a/drivers/net/phy/mscc.c b/drivers/net/phy/mscc.c index 7cae17517744..3949fe299b18 100644 --- a/drivers/net/phy/mscc.c +++ b/drivers/net/phy/mscc.c @@ -853,6 +853,51 @@ static void vsc85xx_tr_write(struct phy_device *phydev, u16 addr, u32 val) __phy_write(phydev, MSCC_PHY_TR_CNTL, TR_WRITE | TR_ADDR(addr)); } +static int vsc8531_pre_init_seq_set(struct phy_device *phydev) +{ + int rc; + const struct reg_val init_seq[] = { + {0x0f90, 0x00688980}, + {0x0696, 0x00000003}, + {0x07fa, 0x0050100f}, + {0x1686, 0x00000004}, + }; + unsigned int i; + int oldpage; + + rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_STANDARD, + MSCC_PHY_EXT_CNTL_STATUS, SMI_BROADCAST_WR_EN, + SMI_BROADCAST_WR_EN); + if (rc < 0) + return rc; + rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_TEST, + MSCC_PHY_TEST_PAGE_24, 0, 0x0400); + if (rc < 0) + return rc; + rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_TEST, + MSCC_PHY_TEST_PAGE_5, 0x0a00, 0x0e00); + if (rc < 0) + return rc; + rc = phy_modify_paged(phydev, MSCC_PHY_PAGE_TEST, + MSCC_PHY_TEST_PAGE_8, 0x8000, 0x8000); + if (rc < 0) + return rc; + + mutex_lock(&phydev->lock); + oldpage = phy_select_page(phydev, MSCC_PHY_PAGE_TR); + if (oldpage < 0) + goto out_unlock; + + for (i = 0; i < ARRAY_SIZE(init_seq); i++) + vsc85xx_tr_write(phydev, init_seq[i].reg, init_seq[i].val); + +out_unlock: + oldpage = phy_restore_page(phydev, oldpage, oldpage); + mutex_unlock(&phydev->lock); + + return oldpage; +} + static int vsc85xx_eee_init_seq_set(struct phy_device *phydev) { const struct reg_val init_eee[] = { @@ -1650,7 +1695,7 @@ err: static int vsc85xx_config_init(struct phy_device *phydev) { - int rc, i; + int rc, i, phy_id; struct vsc8531_private *vsc8531 = phydev->priv; rc = vsc85xx_default_config(phydev); @@ -1665,6 +1710,14 @@ static int vsc85xx_config_init(struct phy_device *phydev) if (rc) return rc; + phy_id = phydev->drv->phy_id & phydev->drv->phy_id_mask; + if (PHY_ID_VSC8531 == phy_id || PHY_ID_VSC8541 == phy_id || + PHY_ID_VSC8530 == phy_id || PHY_ID_VSC8540 == phy_id) { + rc = vsc8531_pre_init_seq_set(phydev); + if (rc) + return rc; + } + rc = vsc85xx_eee_init_seq_set(phydev); if (rc) return rc; @@ -1829,7 +1882,6 @@ static struct phy_driver vsc85xx_driver[] = { .name = "Microsemi FE VSC8530", .phy_id_mask = 0xfffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .soft_reset = &genphy_soft_reset, .config_init = &vsc85xx_config_init, .config_aneg = &vsc85xx_config_aneg, @@ -1855,7 +1907,6 @@ static struct phy_driver vsc85xx_driver[] = { .name = "Microsemi VSC8531", .phy_id_mask = 0xfffffff0, .features = 
PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .soft_reset = &genphy_soft_reset, .config_init = &vsc85xx_config_init, .config_aneg = &vsc85xx_config_aneg, @@ -1881,7 +1932,6 @@ static struct phy_driver vsc85xx_driver[] = { .name = "Microsemi FE VSC8540 SyncE", .phy_id_mask = 0xfffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .soft_reset = &genphy_soft_reset, .config_init = &vsc85xx_config_init, .config_aneg = &vsc85xx_config_aneg, @@ -1907,7 +1957,6 @@ static struct phy_driver vsc85xx_driver[] = { .name = "Microsemi VSC8541 SyncE", .phy_id_mask = 0xfffffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .soft_reset = &genphy_soft_reset, .config_init = &vsc85xx_config_init, .config_aneg = &vsc85xx_config_aneg, @@ -1933,7 +1982,6 @@ static struct phy_driver vsc85xx_driver[] = { .name = "Microsemi GE VSC8574 SyncE", .phy_id_mask = 0xfffffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .soft_reset = &genphy_soft_reset, .config_init = &vsc8584_config_init, .config_aneg = &vsc85xx_config_aneg, @@ -1960,7 +2008,6 @@ static struct phy_driver vsc85xx_driver[] = { .name = "Microsemi GE VSC8584 SyncE", .phy_id_mask = 0xfffffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .soft_reset = &genphy_soft_reset, .config_init = &vsc8584_config_init, .config_aneg = &vsc85xx_config_aneg, diff --git a/drivers/net/phy/national.c b/drivers/net/phy/national.c index 2b1e336961f9..139bed2c8ab4 100644 --- a/drivers/net/phy/national.c +++ b/drivers/net/phy/national.c @@ -134,7 +134,6 @@ static struct phy_driver dp83865_driver[] = { { .phy_id_mask = 0xfffffff0, .name = "NatSemi DP83865", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = ns_config_init, .ack_interrupt = ns_ack_interrupt, .config_intr = ns_config_intr, diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c index d7636ff03bc7..03af927fa5ad 100644 --- a/drivers/net/phy/phy-c45.c +++ b/drivers/net/phy/phy-c45.c @@ -181,7 +181,7 @@ int genphy_c45_read_lpa(struct phy_device *phydev) if (val < 0) return val; - phydev->lp_advertising = mii_lpa_to_ethtool_lpa_t(val); + mii_lpa_to_linkmode_lpa_t(phydev->lp_advertising, val); phydev->pause = val & LPA_PAUSE_CAP ? 1 : 0; phydev->asym_pause = val & LPA_PAUSE_ASYM ? 1 : 0; @@ -191,7 +191,8 @@ int genphy_c45_read_lpa(struct phy_device *phydev) return val; if (val & MDIO_AN_10GBT_STAT_LP10G) - phydev->lp_advertising |= ADVERTISED_10000baseT_Full; + linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT, + phydev->lp_advertising); return 0; } @@ -304,8 +305,11 @@ EXPORT_SYMBOL_GPL(gen10g_no_soft_reset); int gen10g_config_init(struct phy_device *phydev) { /* Temporarily just say we support everything */ - phydev->supported = SUPPORTED_10000baseT_Full; - phydev->advertising = SUPPORTED_10000baseT_Full; + linkmode_zero(phydev->supported); + + linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT, + phydev->supported); + linkmode_copy(phydev->advertising, phydev->supported); return 0; } diff --git a/drivers/net/phy/phy-core.c b/drivers/net/phy/phy-core.c index c7da4cbb1103..20fbd5eb56fd 100644 --- a/drivers/net/phy/phy-core.c +++ b/drivers/net/phy/phy-core.c @@ -62,6 +62,124 @@ EXPORT_SYMBOL_GPL(phy_duplex_to_str); * must be grouped by speed and sorted in descending match priority * - iow, descending speed. 
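The descending-speed ordering is what the lookup below leans on: scanning from the top, the first entry whose bit is set in the mask is the fastest supported mode, and an inexact search can stop once the table speed drops below the requested one. A hedged usage sketch:

static void lookup_example(void)
{
        __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, };
        const struct phy_setting *s;

        linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, supported);
        linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, supported);

        /* Exact match: 1000/full is in the mask, so it is returned. */
        s = phy_lookup_setting(SPEED_1000, DUPLEX_FULL, supported, true);

        /* Inexact: 2500/full is absent, so the fastest supported mode
         * at or below the requested speed (1000/full) is returned.
         */
        s = phy_lookup_setting(SPEED_2500, DUPLEX_FULL, supported, false);
        (void)s;
}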
*/ static const struct phy_setting settings[] = { + /* 100G */ + { + .speed = SPEED_100000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, + }, + { + .speed = SPEED_100000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, + }, + { + .speed = SPEED_100000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT, + }, + { + .speed = SPEED_100000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT, + }, + /* 56G */ + { + .speed = SPEED_56000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT, + }, + { + .speed = SPEED_56000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT, + }, + { + .speed = SPEED_56000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT, + }, + { + .speed = SPEED_56000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT, + }, + /* 50G */ + { + .speed = SPEED_50000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT, + }, + { + .speed = SPEED_50000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT, + }, + { + .speed = SPEED_50000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT, + }, + /* 40G */ + { + .speed = SPEED_40000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT, + }, + { + .speed = SPEED_40000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT, + }, + { + .speed = SPEED_40000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT, + }, + { + .speed = SPEED_40000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT, + }, + /* 25G */ + { + .speed = SPEED_25000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, + }, + { + .speed = SPEED_25000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, + }, + { + .speed = SPEED_25000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, + }, + + /* 20G */ + { + .speed = SPEED_20000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT, + }, + { + .speed = SPEED_20000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT, + }, + /* 10G */ + { + .speed = SPEED_10000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_10000baseCR_Full_BIT, + }, + { + .speed = SPEED_10000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_10000baseER_Full_BIT, + }, { .speed = SPEED_10000, .duplex = DUPLEX_FULL, @@ -72,25 +190,54 @@ static const struct phy_setting settings[] = { .duplex = DUPLEX_FULL, .bit = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, }, + { + .speed = SPEED_10000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_10000baseLR_Full_BIT, + }, + { + .speed = SPEED_10000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT, + }, + { + .speed = SPEED_10000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_10000baseR_FEC_BIT, + }, + { + .speed = SPEED_10000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT, + }, { .speed = SPEED_10000, .duplex = DUPLEX_FULL, .bit = ETHTOOL_LINK_MODE_10000baseT_Full_BIT, }, + /* 5G */ + { + .speed = SPEED_5000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_5000baseT_Full_BIT, + }, + + /* 2.5G */ { .speed = SPEED_2500, .duplex = DUPLEX_FULL, - .bit = ETHTOOL_LINK_MODE_2500baseX_Full_BIT, + .bit = ETHTOOL_LINK_MODE_2500baseT_Full_BIT, }, { - .speed = SPEED_1000, + .speed = SPEED_2500, 
.duplex = DUPLEX_FULL, - .bit = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, + .bit = ETHTOOL_LINK_MODE_2500baseX_Full_BIT, }, + /* 1G */ { .speed = SPEED_1000, .duplex = DUPLEX_FULL, - .bit = ETHTOOL_LINK_MODE_1000baseX_Full_BIT, + .bit = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, }, { .speed = SPEED_1000, @@ -102,6 +249,12 @@ static const struct phy_setting settings[] = { .duplex = DUPLEX_HALF, .bit = ETHTOOL_LINK_MODE_1000baseT_Half_BIT, }, + { + .speed = SPEED_1000, + .duplex = DUPLEX_FULL, + .bit = ETHTOOL_LINK_MODE_1000baseX_Full_BIT, + }, + /* 100M */ { .speed = SPEED_100, .duplex = DUPLEX_FULL, @@ -112,6 +265,7 @@ static const struct phy_setting settings[] = { .duplex = DUPLEX_HALF, .bit = ETHTOOL_LINK_MODE_100baseT_Half_BIT, }, + /* 10M */ { .speed = SPEED_10, .duplex = DUPLEX_FULL, @@ -129,7 +283,6 @@ static const struct phy_setting settings[] = { * @speed: speed to match * @duplex: duplex to match * @mask: allowed link modes - * @maxbit: bit size of link modes * @exact: an exact match is required * * Search the settings array for a setting that matches the speed and @@ -143,14 +296,14 @@ static const struct phy_setting settings[] = { * they all fail, %NULL will be returned. */ const struct phy_setting * -phy_lookup_setting(int speed, int duplex, const unsigned long *mask, - size_t maxbit, bool exact) +phy_lookup_setting(int speed, int duplex, const unsigned long *mask, bool exact) { const struct phy_setting *p, *match = NULL, *last = NULL; int i; for (i = 0, p = settings; i < ARRAY_SIZE(settings); i++, p++) { - if (p->bit < maxbit && test_bit(p->bit, mask)) { + if (p->bit < __ETHTOOL_LINK_MODE_MASK_NBITS && + test_bit(p->bit, mask)) { last = p; if (p->speed == speed && p->duplex == duplex) { /* Exact match for speed and duplex */ @@ -175,13 +328,13 @@ phy_lookup_setting(int speed, int duplex, const unsigned long *mask, EXPORT_SYMBOL_GPL(phy_lookup_setting); size_t phy_speeds(unsigned int *speeds, size_t size, - unsigned long *mask, size_t maxbit) + unsigned long *mask) { size_t count; int i; for (i = 0, count = 0; i < ARRAY_SIZE(settings) && count < size; i++) - if (settings[i].bit < maxbit && + if (settings[i].bit < __ETHTOOL_LINK_MODE_MASK_NBITS && test_bit(settings[i].bit, mask) && (count == 0 || speeds[count - 1] != settings[i].speed)) speeds[count++] = settings[i].speed; @@ -199,35 +352,53 @@ size_t phy_speeds(unsigned int *speeds, size_t size, */ void phy_resolve_aneg_linkmode(struct phy_device *phydev) { - u32 common = phydev->lp_advertising & phydev->advertising; + __ETHTOOL_DECLARE_LINK_MODE_MASK(common); - if (common & ADVERTISED_10000baseT_Full) { + linkmode_and(common, phydev->lp_advertising, phydev->advertising); + + if (linkmode_test_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT, common)) { phydev->speed = SPEED_10000; phydev->duplex = DUPLEX_FULL; - } else if (common & ADVERTISED_1000baseT_Full) { + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT, + common)) { + phydev->speed = SPEED_5000; + phydev->duplex = DUPLEX_FULL; + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, + common)) { + phydev->speed = SPEED_2500; + phydev->duplex = DUPLEX_FULL; + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + common)) { phydev->speed = SPEED_1000; phydev->duplex = DUPLEX_FULL; - } else if (common & ADVERTISED_1000baseT_Half) { + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + common)) { phydev->speed = SPEED_1000; phydev->duplex = DUPLEX_HALF; - } else if (common & ADVERTISED_100baseT_Full) { + } else if 
(linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + common)) { phydev->speed = SPEED_100; phydev->duplex = DUPLEX_FULL; - } else if (common & ADVERTISED_100baseT_Half) { + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, + common)) { phydev->speed = SPEED_100; phydev->duplex = DUPLEX_HALF; - } else if (common & ADVERTISED_10baseT_Full) { + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, + common)) { phydev->speed = SPEED_10; phydev->duplex = DUPLEX_FULL; - } else if (common & ADVERTISED_10baseT_Half) { + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, + common)) { phydev->speed = SPEED_10; phydev->duplex = DUPLEX_HALF; } if (phydev->duplex == DUPLEX_FULL) { - phydev->pause = !!(phydev->lp_advertising & ADVERTISED_Pause); - phydev->asym_pause = !!(phydev->lp_advertising & - ADVERTISED_Asym_Pause); + phydev->pause = linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->lp_advertising); + phydev->asym_pause = linkmode_test_bit( + ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->lp_advertising); } } EXPORT_SYMBOL_GPL(phy_resolve_aneg_linkmode); diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 1d73ac3309ce..d33e7b3caf03 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -46,11 +46,8 @@ static const char *phy_state_to_str(enum phy_state st) { switch (st) { PHY_STATE_STR(DOWN) - PHY_STATE_STR(STARTING) PHY_STATE_STR(READY) - PHY_STATE_STR(PENDING) PHY_STATE_STR(UP) - PHY_STATE_STR(AN) PHY_STATE_STR(RUNNING) PHY_STATE_STR(NOLINK) PHY_STATE_STR(FORCING) @@ -62,6 +59,17 @@ static const char *phy_state_to_str(enum phy_state st) return NULL; } +static void phy_link_up(struct phy_device *phydev) +{ + phydev->phy_link_change(phydev, true, true); + phy_led_trigger_change_speed(phydev); +} + +static void phy_link_down(struct phy_device *phydev, bool do_carrier) +{ + phydev->phy_link_change(phydev, false, do_carrier); + phy_led_trigger_change_speed(phydev); +} /** * phy_print_status - Convenience function to print out the current phy status @@ -105,9 +113,9 @@ static int phy_clear_interrupt(struct phy_device *phydev) * * Returns 0 on success or < 0 on error. */ -static int phy_config_interrupt(struct phy_device *phydev, u32 interrupts) +static int phy_config_interrupt(struct phy_device *phydev, bool interrupts) { - phydev->interrupts = interrupts; + phydev->interrupts = interrupts ? 1 : 0; if (phydev->drv->config_intr) return phydev->drv->config_intr(phydev); @@ -171,11 +179,9 @@ EXPORT_SYMBOL(phy_aneg_done); * settings were found. */ static const struct phy_setting * -phy_find_valid(int speed, int duplex, u32 supported) +phy_find_valid(int speed, int duplex, unsigned long *supported) { - unsigned long mask = supported; - - return phy_lookup_setting(speed, duplex, &mask, BITS_PER_LONG, false); + return phy_lookup_setting(speed, duplex, supported, false); } /** @@ -192,9 +198,7 @@ unsigned int phy_supported_speeds(struct phy_device *phy, unsigned int *speeds, unsigned int size) { - unsigned long supported = phy->supported; - - return phy_speeds(speeds, size, &supported, BITS_PER_LONG); + return phy_speeds(speeds, size, phy->supported); } /** @@ -206,11 +210,10 @@ unsigned int phy_supported_speeds(struct phy_device *phy, * * Description: Returns true if there is a valid setting, false otherwise. 
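phy_supported_speeds() now hands the PHY's own linkmode bitmap straight to phy_speeds(); callers are unchanged. A hedged sketch of typical use (the scratch array size here is arbitrary):

static void speeds_example(struct phy_device *phydev)
{
        unsigned int speeds[16]; /* arbitrary scratch size */
        unsigned int count, i;

        count = phy_supported_speeds(phydev, speeds, ARRAY_SIZE(speeds));
        for (i = 0; i < count; i++)
                dev_info(&phydev->mdio.dev, "supports %u Mb/s\n", speeds[i]);
}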
*/ -static inline bool phy_check_valid(int speed, int duplex, u32 features) +static inline bool phy_check_valid(int speed, int duplex, + unsigned long *features) { - unsigned long mask = features; - - return !!phy_lookup_setting(speed, duplex, &mask, BITS_PER_LONG, true); + return !!phy_lookup_setting(speed, duplex, features, true); } /** @@ -224,13 +227,13 @@ static inline bool phy_check_valid(int speed, int duplex, u32 features) static void phy_sanitize_settings(struct phy_device *phydev) { const struct phy_setting *setting; - u32 features = phydev->supported; /* Sanitize settings based on PHY capabilities */ - if ((features & SUPPORTED_Autoneg) == 0) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, phydev->supported)) phydev->autoneg = AUTONEG_DISABLE; - setting = phy_find_valid(phydev->speed, phydev->duplex, features); + setting = phy_find_valid(phydev->speed, phydev->duplex, + phydev->supported); if (setting) { phydev->speed = setting->speed; phydev->duplex = setting->duplex; @@ -256,13 +259,15 @@ static void phy_sanitize_settings(struct phy_device *phydev) */ int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising); u32 speed = ethtool_cmd_speed(cmd); if (cmd->phy_address != phydev->mdio.addr) return -EINVAL; /* We make sure that we don't pass unsupported values in to the PHY */ - cmd->advertising &= phydev->supported; + ethtool_convert_legacy_u32_to_link_mode(advertising, cmd->advertising); + linkmode_and(advertising, advertising, phydev->supported); /* Verify the settings we care about. */ if (cmd->autoneg != AUTONEG_ENABLE && cmd->autoneg != AUTONEG_DISABLE) @@ -283,12 +288,14 @@ int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd) phydev->speed = speed; - phydev->advertising = cmd->advertising; + linkmode_copy(phydev->advertising, advertising); if (AUTONEG_ENABLE == cmd->autoneg) - phydev->advertising |= ADVERTISED_Autoneg; + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + phydev->advertising); else - phydev->advertising &= ~ADVERTISED_Autoneg; + linkmode_clear_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + phydev->advertising); phydev->duplex = cmd->duplex; @@ -304,25 +311,24 @@ EXPORT_SYMBOL(phy_ethtool_sset); int phy_ethtool_ksettings_set(struct phy_device *phydev, const struct ethtool_link_ksettings *cmd) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising); u8 autoneg = cmd->base.autoneg; u8 duplex = cmd->base.duplex; u32 speed = cmd->base.speed; - u32 advertising; if (cmd->base.phy_address != phydev->mdio.addr) return -EINVAL; - ethtool_convert_link_mode_to_legacy_u32(&advertising, - cmd->link_modes.advertising); + linkmode_copy(advertising, cmd->link_modes.advertising); /* We make sure that we don't pass unsupported values in to the PHY */ - advertising &= phydev->supported; + linkmode_and(advertising, advertising, phydev->supported); /* Verify the settings we care about. 
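Distilled, the conversion pattern in phy_ethtool_ksettings_set() is: copy the caller's mask, intersect it with what the PHY supports, and refuse an empty result while autoneg is on. A minimal sketch of just that step:

static int sanitize_advertising(struct phy_device *phydev,
                                const unsigned long *requested)
{
        __ETHTOOL_DECLARE_LINK_MODE_MASK(adv);

        linkmode_copy(adv, requested);
        /* Never advertise modes the PHY cannot do. */
        linkmode_and(adv, adv, phydev->supported);

        if (phydev->autoneg == AUTONEG_ENABLE && linkmode_empty(adv))
                return -EINVAL;

        linkmode_copy(phydev->advertising, adv);
        return 0;
}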
*/ if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE) return -EINVAL; - if (autoneg == AUTONEG_ENABLE && advertising == 0) + if (autoneg == AUTONEG_ENABLE && linkmode_empty(advertising)) return -EINVAL; if (autoneg == AUTONEG_DISABLE && @@ -337,12 +343,14 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev, phydev->speed = speed; - phydev->advertising = advertising; + linkmode_copy(phydev->advertising, advertising); if (autoneg == AUTONEG_ENABLE) - phydev->advertising |= ADVERTISED_Autoneg; + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + phydev->advertising); else - phydev->advertising &= ~ADVERTISED_Autoneg; + linkmode_clear_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + phydev->advertising); phydev->duplex = duplex; @@ -358,14 +366,9 @@ EXPORT_SYMBOL(phy_ethtool_ksettings_set); void phy_ethtool_ksettings_get(struct phy_device *phydev, struct ethtool_link_ksettings *cmd) { - ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported, - phydev->supported); - - ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising, - phydev->advertising); - - ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.lp_advertising, - phydev->lp_advertising); + linkmode_copy(cmd->link_modes.supported, phydev->supported); + linkmode_copy(cmd->link_modes.advertising, phydev->advertising); + linkmode_copy(cmd->link_modes.lp_advertising, phydev->lp_advertising); cmd->base.speed = phydev->speed; cmd->base.duplex = phydev->duplex; @@ -434,7 +437,8 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd) } break; case MII_ADVERTISE: - phydev->advertising = mii_adv_to_ethtool_adv_t(val); + mii_adv_mod_linkmode_adv_t(phydev->advertising, + val); change_autoneg = true; break; default: @@ -467,6 +471,18 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd) } EXPORT_SYMBOL(phy_mii_ioctl); +static void phy_queue_state_machine(struct phy_device *phydev, + unsigned int secs) +{ + mod_delayed_work(system_power_efficient_wq, &phydev->state_queue, + secs * HZ); +} + +static void phy_trigger_machine(struct phy_device *phydev) +{ + phy_queue_state_machine(phydev, 0); +} + static int phy_config_aneg(struct phy_device *phydev) { if (phydev->drv->config_aneg) @@ -481,6 +497,34 @@ static int phy_config_aneg(struct phy_device *phydev) return genphy_config_aneg(phydev); } +/** + * phy_check_link_status - check link status and set state accordingly + * @phydev: the phy_device struct + * + * Description: Check for link and whether autoneg was triggered / is running + * and set state accordingly + */ +static int phy_check_link_status(struct phy_device *phydev) +{ + int err; + + WARN_ON(!mutex_is_locked(&phydev->lock)); + + err = phy_read_status(phydev); + if (err) + return err; + + if (phydev->link && phydev->state != PHY_RUNNING) { + phydev->state = PHY_RUNNING; + phy_link_up(phydev); + } else if (!phydev->link && phydev->state != PHY_NOLINK) { + phydev->state = PHY_NOLINK; + phy_link_down(phydev, true); + } + + return 0; +} + /** * phy_start_aneg - start auto-negotiation for this PHY device * @phydev: the phy_device struct @@ -492,7 +536,6 @@ static int phy_config_aneg(struct phy_device *phydev) */ int phy_start_aneg(struct phy_device *phydev) { - bool trigger = 0; int err; if (!phydev->drv) @@ -500,44 +543,33 @@ int phy_start_aneg(struct phy_device *phydev) mutex_lock(&phydev->lock); + if (!__phy_is_started(phydev)) { + WARN(1, "called from state %s\n", + phy_state_to_str(phydev->state)); + err = -EBUSY; + goto out_unlock; + } + if (AUTONEG_DISABLE == 
phydev->autoneg) phy_sanitize_settings(phydev); /* Invalidate LP advertising flags */ - phydev->lp_advertising = 0; + linkmode_zero(phydev->lp_advertising); err = phy_config_aneg(phydev); if (err < 0) goto out_unlock; - if (phydev->state != PHY_HALTED) { - if (AUTONEG_ENABLE == phydev->autoneg) { - phydev->state = PHY_AN; - phydev->link_timeout = PHY_AN_TIMEOUT; - } else { - phydev->state = PHY_FORCING; - phydev->link_timeout = PHY_FORCE_TIMEOUT; - } - } - - /* Re-schedule a PHY state machine to check PHY status because - * negotiation may already be done and aneg interrupt may not be - * generated. - */ - if (!phy_polling_mode(phydev) && phydev->state == PHY_AN) { - err = phy_aneg_done(phydev); - if (err > 0) { - trigger = true; - err = 0; - } + if (phydev->autoneg == AUTONEG_ENABLE) { + err = phy_check_link_status(phydev); + } else { + phydev->state = PHY_FORCING; + phydev->link_timeout = PHY_FORCE_TIMEOUT; } out_unlock: mutex_unlock(&phydev->lock); - if (trigger) - phy_trigger_machine(phydev); - return err; } EXPORT_SYMBOL(phy_start_aneg); @@ -573,20 +605,38 @@ static int phy_poll_aneg_done(struct phy_device *phydev) */ int phy_speed_down(struct phy_device *phydev, bool sync) { - u32 adv = phydev->lp_advertising & phydev->supported; - u32 adv_old = phydev->advertising; + __ETHTOOL_DECLARE_LINK_MODE_MASK(adv_old); + __ETHTOOL_DECLARE_LINK_MODE_MASK(adv); int ret; if (phydev->autoneg != AUTONEG_ENABLE) return 0; - if (adv & PHY_10BT_FEATURES) - phydev->advertising &= ~(PHY_100BT_FEATURES | - PHY_1000BT_FEATURES); - else if (adv & PHY_100BT_FEATURES) - phydev->advertising &= ~PHY_1000BT_FEATURES; + linkmode_copy(adv_old, phydev->advertising); + linkmode_copy(adv, phydev->lp_advertising); + linkmode_and(adv, adv, phydev->supported); + + if (linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, adv) || + linkmode_test_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, adv)) { + linkmode_clear_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, + phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + phydev->advertising); + } else if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, + adv) || + linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + adv)) { + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + phydev->advertising); + } - if (phydev->advertising == adv_old) + if (linkmode_equal(phydev->advertising, adv_old)) return 0; ret = phy_config_aneg(phydev); @@ -605,28 +655,36 @@ EXPORT_SYMBOL_GPL(phy_speed_down); */ int phy_speed_up(struct phy_device *phydev) { - u32 mask = PHY_10BT_FEATURES | PHY_100BT_FEATURES | PHY_1000BT_FEATURES; - u32 adv_old = phydev->advertising; + __ETHTOOL_DECLARE_LINK_MODE_MASK(all_speeds) = { 0, }; + __ETHTOOL_DECLARE_LINK_MODE_MASK(not_speeds); + __ETHTOOL_DECLARE_LINK_MODE_MASK(supported); + __ETHTOOL_DECLARE_LINK_MODE_MASK(adv_old); + __ETHTOOL_DECLARE_LINK_MODE_MASK(speeds); + + linkmode_copy(adv_old, phydev->advertising); if (phydev->autoneg != AUTONEG_ENABLE) return 0; - phydev->advertising = (adv_old & ~mask) | (phydev->supported & mask); + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, all_speeds); + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, all_speeds); + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, all_speeds); + 
linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, all_speeds); + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, all_speeds); + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, all_speeds); + + linkmode_andnot(not_speeds, adv_old, all_speeds); + linkmode_copy(supported, phydev->supported); + linkmode_and(speeds, supported, all_speeds); + linkmode_or(phydev->advertising, not_speeds, speeds); - if (phydev->advertising == adv_old) + if (linkmode_equal(phydev->advertising, adv_old)) return 0; return phy_config_aneg(phydev); } EXPORT_SYMBOL_GPL(phy_speed_up); -static void phy_queue_state_machine(struct phy_device *phydev, - unsigned int secs) -{ - mod_delayed_work(system_power_efficient_wq, &phydev->state_queue, - secs * HZ); -} - /** * phy_start_machine - start PHY state machine tracking * @phydev: the phy_device struct @@ -643,20 +701,6 @@ void phy_start_machine(struct phy_device *phydev) } EXPORT_SYMBOL_GPL(phy_start_machine); -/** - * phy_trigger_machine - trigger the state machine to run - * - * @phydev: the phy_device struct - * - * Description: There has been a change in state which requires that the - * state machine runs. - */ - -void phy_trigger_machine(struct phy_device *phydev) -{ - phy_queue_state_machine(phydev, 0); -} - /** * phy_stop_machine - stop the PHY state machine tracking * @phydev: target phy_device struct @@ -670,7 +714,7 @@ void phy_stop_machine(struct phy_device *phydev) cancel_delayed_work_sync(&phydev->state_queue); mutex_lock(&phydev->lock); - if (phydev->state > PHY_UP && phydev->state != PHY_HALTED) + if (__phy_is_started(phydev)) phydev->state = PHY_UP; mutex_unlock(&phydev->lock); } @@ -686,6 +730,8 @@ void phy_stop_machine(struct phy_device *phydev) */ static void phy_error(struct phy_device *phydev) { + WARN_ON(1); + mutex_lock(&phydev->lock); phydev->state = PHY_HALTED; mutex_unlock(&phydev->lock); @@ -711,30 +757,26 @@ static int phy_disable_interrupts(struct phy_device *phydev) } /** - * phy_change - Called by the phy_interrupt to handle PHY changes - * @phydev: phy_device struct that interrupted + * phy_interrupt - PHY interrupt handler + * @irq: interrupt line + * @phy_dat: phy_device pointer + * + * Description: Handle PHY interrupt */ -static irqreturn_t phy_change(struct phy_device *phydev) +static irqreturn_t phy_interrupt(int irq, void *phy_dat) { - if (phy_interrupt_is_valid(phydev)) { - if (phydev->drv->did_interrupt && - !phydev->drv->did_interrupt(phydev)) - return IRQ_NONE; - - if (phydev->state == PHY_HALTED) - if (phy_disable_interrupts(phydev)) - goto phy_err; - } + struct phy_device *phydev = phy_dat; - mutex_lock(&phydev->lock); - if ((PHY_RUNNING == phydev->state) || (PHY_NOLINK == phydev->state)) - phydev->state = PHY_CHANGELINK; - mutex_unlock(&phydev->lock); + if (!phy_is_started(phydev)) + return IRQ_NONE; /* It can't be ours. 
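Since phy_interrupt() sleeps (did_interrupt() and phy_clear_interrupt() are MDIO accesses), it runs as the threaded half of the IRQ. The registration lives elsewhere in phy.c and is not part of this hunk; roughly, and with the flags treated as an assumption:

#include <linux/interrupt.h>

static int phy_request_irq_sketch(struct phy_device *phydev)
{
        /* NULL hard handler: all work happens in the thread context. */
        return request_threaded_irq(phydev->irq, NULL, phy_interrupt,
                                    IRQF_ONESHOT | IRQF_SHARED,
                                    phydev_name(phydev), phydev);
}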
*/ + + if (phydev->drv->did_interrupt && !phydev->drv->did_interrupt(phydev)) + return IRQ_NONE; /* reschedule state queue work to run as soon as possible */ phy_trigger_machine(phydev); - if (phy_interrupt_is_valid(phydev) && phy_clear_interrupt(phydev)) + if (phy_clear_interrupt(phydev)) goto phy_err; return IRQ_HANDLED; @@ -743,36 +785,6 @@ phy_err: return IRQ_NONE; } -/** - * phy_change_work - Scheduled by the phy_mac_interrupt to handle PHY changes - * @work: work_struct that describes the work to be done - */ -void phy_change_work(struct work_struct *work) -{ - struct phy_device *phydev = - container_of(work, struct phy_device, phy_queue); - - phy_change(phydev); -} - -/** - * phy_interrupt - PHY interrupt handler - * @irq: interrupt line - * @phy_dat: phy_device pointer - * - * Description: When a PHY interrupt occurs, the handler disables - * interrupts, and uses phy_change to handle the interrupt. - */ -static irqreturn_t phy_interrupt(int irq, void *phy_dat) -{ - struct phy_device *phydev = phy_dat; - - if (PHY_HALTED == phydev->state) - return IRQ_NONE; /* It can't be ours. */ - - return phy_change(phydev); -} - /** * phy_enable_interrupts - Enable the interrupts from the PHY side * @phydev: target phy_device struct @@ -837,21 +849,24 @@ void phy_stop(struct phy_device *phydev) { mutex_lock(&phydev->lock); - if (PHY_HALTED == phydev->state) - goto out_unlock; + if (!__phy_is_started(phydev)) { + WARN(1, "called from state %s\n", + phy_state_to_str(phydev->state)); + mutex_unlock(&phydev->lock); + return; + } if (phy_interrupt_is_valid(phydev)) phy_disable_interrupts(phydev); phydev->state = PHY_HALTED; -out_unlock: mutex_unlock(&phydev->lock); phy_state_machine(&phydev->state_queue.work); /* Cannot call flush_scheduled_work() here as desired because - * of rtnl_lock(), but PHY_HALTED shall guarantee phy_change() + * of rtnl_lock(), but PHY_HALTED shall guarantee irq handler * will not reenable interrupts. */ } @@ -874,9 +889,6 @@ void phy_start(struct phy_device *phydev) mutex_lock(&phydev->lock); switch (phydev->state) { - case PHY_STARTING: - phydev->state = PHY_PENDING; - break; case PHY_READY: phydev->state = PHY_UP; break; @@ -902,18 +914,6 @@ void phy_start(struct phy_device *phydev) } EXPORT_SYMBOL(phy_start); -static void phy_link_up(struct phy_device *phydev) -{ - phydev->phy_link_change(phydev, true, true); - phy_led_trigger_change_speed(phydev); -} - -static void phy_link_down(struct phy_device *phydev, bool do_carrier) -{ - phydev->phy_link_change(phydev, false, do_carrier); - phy_led_trigger_change_speed(phydev); -} - /** * phy_state_machine - Handle the state machine * @work: work_struct that describes the work to be done @@ -936,63 +936,17 @@ void phy_state_machine(struct work_struct *work) switch (phydev->state) { case PHY_DOWN: - case PHY_STARTING: case PHY_READY: - case PHY_PENDING: break; case PHY_UP: needs_aneg = true; - phydev->link_timeout = PHY_AN_TIMEOUT; - - break; - case PHY_AN: - err = phy_read_status(phydev); - if (err < 0) - break; - - /* If the link is down, give up on negotiation for now */ - if (!phydev->link) { - phydev->state = PHY_NOLINK; - phy_link_down(phydev, true); - break; - } - - /* Check if negotiation is done. 
Break if there's an error */ - err = phy_aneg_done(phydev); - if (err < 0) - break; - - /* If AN is done, we're running */ - if (err > 0) { - phydev->state = PHY_RUNNING; - phy_link_up(phydev); - } else if (0 == phydev->link_timeout--) - needs_aneg = true; break; case PHY_NOLINK: - if (!phy_polling_mode(phydev)) - break; - - err = phy_read_status(phydev); - if (err) - break; - - if (phydev->link) { - if (AUTONEG_ENABLE == phydev->autoneg) { - err = phy_aneg_done(phydev); - if (err < 0) - break; - - if (!err) { - phydev->state = PHY_AN; - phydev->link_timeout = PHY_AN_TIMEOUT; - break; - } - } - phydev->state = PHY_RUNNING; - phy_link_up(phydev); - } + case PHY_RUNNING: + case PHY_CHANGELINK: + case PHY_RESUMING: + err = phy_check_link_status(phydev); break; case PHY_FORCING: err = genphy_update_link(phydev); @@ -1008,32 +962,6 @@ void phy_state_machine(struct work_struct *work) phy_link_down(phydev, false); } break; - case PHY_RUNNING: - if (!phy_polling_mode(phydev)) - break; - - err = phy_read_status(phydev); - if (err) - break; - - if (!phydev->link) { - phydev->state = PHY_NOLINK; - phy_link_down(phydev, true); - } - break; - case PHY_CHANGELINK: - err = phy_read_status(phydev); - if (err) - break; - - if (phydev->link) { - phydev->state = PHY_RUNNING; - phy_link_up(phydev); - } else { - phydev->state = PHY_NOLINK; - phy_link_down(phydev, true); - } - break; case PHY_HALTED: if (phydev->link) { phydev->link = 0; @@ -1041,30 +969,6 @@ void phy_state_machine(struct work_struct *work) do_suspend = true; } break; - case PHY_RESUMING: - if (AUTONEG_ENABLE == phydev->autoneg) { - err = phy_aneg_done(phydev); - if (err < 0) { - break; - } else if (!err) { - phydev->state = PHY_AN; - phydev->link_timeout = PHY_AN_TIMEOUT; - break; - } - } - - err = phy_read_status(phydev); - if (err) - break; - - if (phydev->link) { - phydev->state = PHY_RUNNING; - phy_link_up(phydev); - } else { - phydev->state = PHY_NOLINK; - phy_link_down(phydev, false); - } - break; } mutex_unlock(&phydev->lock); @@ -1090,7 +994,7 @@ void phy_state_machine(struct work_struct *work) * state machine would be pointless and possibly error prone when * called from phy_disconnect() synchronously. 
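 * (Aside, not part of the patch: with the separate phy_queue work item
 * gone, a MAC driver whose silicon latches PHY link events no longer
 * needs its own deferral; it simply kicks the state machine from its
 * ISR. Sketch only, the foo_* names are invented:
 *
 *	static irqreturn_t foo_isr(int irq, void *dev_id)
 *	{
 *		struct net_device *ndev = dev_id;
 *
 *		if (foo_link_event_pending(ndev))
 *			phy_mac_interrupt(ndev->phydev);
 *		return IRQ_HANDLED;
 *	}
 * )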
*/ - if (phy_polling_mode(phydev) && old_state != PHY_HALTED) + if (phy_polling_mode(phydev) && phy_is_started(phydev)) phy_queue_state_machine(phydev, PHY_STATE_TIME); } @@ -1104,10 +1008,34 @@ void phy_state_machine(struct work_struct *work) void phy_mac_interrupt(struct phy_device *phydev) { /* Trigger a state machine change */ - queue_work(system_power_efficient_wq, &phydev->phy_queue); + phy_trigger_machine(phydev); } EXPORT_SYMBOL(phy_mac_interrupt); +static void mmd_eee_adv_to_linkmode(unsigned long *advertising, u16 eee_adv) +{ + linkmode_zero(advertising); + + if (eee_adv & MDIO_EEE_100TX) + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + advertising); + if (eee_adv & MDIO_EEE_1000T) + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + advertising); + if (eee_adv & MDIO_EEE_10GT) + linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT, + advertising); + if (eee_adv & MDIO_EEE_1000KX) + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, + advertising); + if (eee_adv & MDIO_EEE_10GKX4) + linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, + advertising); + if (eee_adv & MDIO_EEE_10GKR) + linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT, + advertising); +} + /** * phy_init_eee - init and check the EEE feature * @phydev: target phy_device struct @@ -1126,9 +1054,12 @@ int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable) /* According to 802.3az,the EEE is supported only in full duplex-mode. */ if (phydev->duplex == DUPLEX_FULL) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(common); + __ETHTOOL_DECLARE_LINK_MODE_MASK(lp); + __ETHTOOL_DECLARE_LINK_MODE_MASK(adv); int eee_lp, eee_cap, eee_adv; - u32 lp, cap, adv; int status; + u32 cap; /* Read phy status to properly get the right settings */ status = phy_read_status(phydev); @@ -1155,9 +1086,11 @@ int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable) if (eee_adv <= 0) goto eee_exit_err; - adv = mmd_eee_adv_to_ethtool_adv_t(eee_adv); - lp = mmd_eee_adv_to_ethtool_adv_t(eee_lp); - if (!phy_check_valid(phydev->speed, phydev->duplex, lp & adv)) + mmd_eee_adv_to_linkmode(adv, eee_adv); + mmd_eee_adv_to_linkmode(lp, eee_lp); + linkmode_and(common, adv, lp); + + if (!phy_check_valid(phydev->speed, phydev->duplex, common)) goto eee_exit_err; if (clk_stop_enable) { @@ -1221,6 +1154,7 @@ int phy_ethtool_get_eee(struct phy_device *phydev, struct ethtool_eee *data) if (val < 0) return val; data->advertised = mmd_eee_adv_to_ethtool_adv_t(val); + data->eee_enabled = !!data->advertised; /* Get LP advertisement EEE */ val = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_LPABLE); @@ -1228,6 +1162,8 @@ int phy_ethtool_get_eee(struct phy_device *phydev, struct ethtool_eee *data) return val; data->lp_advertised = mmd_eee_adv_to_ethtool_adv_t(val); + data->eee_active = !!(data->advertised & data->lp_advertised); + return 0; } EXPORT_SYMBOL(phy_ethtool_get_eee); @@ -1241,7 +1177,7 @@ EXPORT_SYMBOL(phy_ethtool_get_eee); */ int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_eee *data) { - int cap, old_adv, adv, ret; + int cap, old_adv, adv = 0, ret; if (!phydev->drv) return -EIO; @@ -1255,10 +1191,12 @@ int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_eee *data) if (old_adv < 0) return old_adv; - adv = ethtool_adv_to_mmd_eee_adv_t(data->advertised) & cap; - - /* Mask prohibited EEE modes */ - adv &= ~phydev->eee_broken_modes; + if (data->eee_enabled) { + adv = !data->advertised ? 
cap : + ethtool_adv_to_mmd_eee_adv_t(data->advertised) & cap; + /* Mask prohibited EEE modes */ + adv &= ~phydev->eee_broken_modes; + } if (old_adv != adv) { ret = phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV, adv); diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index 26c41ede54a4..51990002d495 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -66,10 +66,12 @@ static const int phy_basic_ports_array[] = { ETHTOOL_LINK_MODE_TP_BIT, ETHTOOL_LINK_MODE_MII_BIT, }; +EXPORT_SYMBOL_GPL(phy_basic_ports_array); static const int phy_fibre_port_array[] = { ETHTOOL_LINK_MODE_FIBRE_BIT, }; +EXPORT_SYMBOL_GPL(phy_fibre_port_array); static const int phy_all_ports_features_array[] = { ETHTOOL_LINK_MODE_Autoneg_BIT, @@ -80,27 +82,32 @@ static const int phy_all_ports_features_array[] = { ETHTOOL_LINK_MODE_BNC_BIT, ETHTOOL_LINK_MODE_Backplane_BIT, }; +EXPORT_SYMBOL_GPL(phy_all_ports_features_array); -static const int phy_10_100_features_array[] = { +const int phy_10_100_features_array[4] = { ETHTOOL_LINK_MODE_10baseT_Half_BIT, ETHTOOL_LINK_MODE_10baseT_Full_BIT, ETHTOOL_LINK_MODE_100baseT_Half_BIT, ETHTOOL_LINK_MODE_100baseT_Full_BIT, }; +EXPORT_SYMBOL_GPL(phy_10_100_features_array); -static const int phy_basic_t1_features_array[] = { +const int phy_basic_t1_features_array[2] = { ETHTOOL_LINK_MODE_TP_BIT, ETHTOOL_LINK_MODE_100baseT_Full_BIT, }; +EXPORT_SYMBOL_GPL(phy_basic_t1_features_array); -static const int phy_gbit_features_array[] = { +const int phy_gbit_features_array[2] = { ETHTOOL_LINK_MODE_1000baseT_Half_BIT, ETHTOOL_LINK_MODE_1000baseT_Full_BIT, }; +EXPORT_SYMBOL_GPL(phy_gbit_features_array); -static const int phy_10gbit_features_array[] = { +const int phy_10gbit_features_array[1] = { ETHTOOL_LINK_MODE_10000baseT_Full_BIT, }; +EXPORT_SYMBOL_GPL(phy_10gbit_features_array); __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_full_features) __ro_after_init; EXPORT_SYMBOL_GPL(phy_10gbit_full_features); @@ -584,7 +591,6 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, int phy_id, mutex_init(&dev->lock); INIT_DELAYED_WORK(&dev->state_queue, phy_state_machine); - INIT_WORK(&dev->phy_queue, phy_change_work); /* Request the appropriate module unconditionally; don't * bother trying to do so only if it isn't already loaded, @@ -596,7 +602,21 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, int phy_id, * driver will get bored and give up as soon as it finds that * there's no driver _already_ loaded. 
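 * (Aside, not part of the patch: request_module() resolves these via
 * the "mdio:" aliases that MODULE_DEVICE_TABLE(mdio, ...) generates
 * from each driver's ID table, e.g., with an invented table name:
 *
 *	static const struct mdio_device_id __maybe_unused foo_tbl[] = {
 *		{ PHY_ID_MATCH_EXACT(0x001cc916) },
 *		{ }
 *	};
 *	MODULE_DEVICE_TABLE(mdio, foo_tbl);
 *
 * so a clause-45 package now pulls in a module only for each MMD it
 * actually advertises in devices_in_package.)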
*/ - request_module(MDIO_MODULE_PREFIX MDIO_ID_FMT, MDIO_ID_ARGS(phy_id)); + if (is_c45 && c45_ids) { + const int num_ids = ARRAY_SIZE(c45_ids->device_ids); + int i; + + for (i = 1; i < num_ids; i++) { + if (!(c45_ids->devices_in_package & (1 << i))) + continue; + + request_module(MDIO_MODULE_PREFIX MDIO_ID_FMT, + MDIO_ID_ARGS(c45_ids->device_ids[i])); + } + } else { + request_module(MDIO_MODULE_PREFIX MDIO_ID_FMT, + MDIO_ID_ARGS(phy_id)); + } device_initialize(&mdiodev->dev); @@ -1439,8 +1459,13 @@ static int genphy_config_advert(struct phy_device *phydev) int err, changed = 0; /* Only allow advertising what this PHY supports */ - phydev->advertising &= phydev->supported; - advertise = phydev->advertising; + linkmode_and(phydev->advertising, phydev->advertising, + phydev->supported); + if (!ethtool_convert_link_mode_to_legacy_u32(&advertise, + phydev->advertising)) + phydev_warn(phydev, "PHY advertising (%*pb) more modes than genphy supports, some modes not advertised.\n", + __ETHTOOL_LINK_MODE_MASK_NBITS, + phydev->advertising); /* Setup standard advertisement */ adv = phy_read(phydev, MII_ADVERTISE); @@ -1479,10 +1504,11 @@ static int genphy_config_advert(struct phy_device *phydev) oldadv = adv; adv &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF); - if (phydev->supported & (SUPPORTED_1000baseT_Half | - SUPPORTED_1000baseT_Full)) { + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + phydev->supported) || + linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + phydev->supported)) adv |= ethtool_adv_to_mii_ctrl1000_t(advertise); - } if (adv != oldadv) changed = 1; @@ -1687,11 +1713,13 @@ int genphy_read_status(struct phy_device *phydev) if (err) return err; - phydev->lp_advertising = 0; + linkmode_zero(phydev->lp_advertising); if (AUTONEG_ENABLE == phydev->autoneg) { - if (phydev->supported & (SUPPORTED_1000baseT_Half - | SUPPORTED_1000baseT_Full)) { + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + phydev->supported) || + linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + phydev->supported)) { lpagb = phy_read(phydev, MII_STAT1000); if (lpagb < 0) return lpagb; @@ -1708,8 +1736,8 @@ int genphy_read_status(struct phy_device *phydev) return -ENOLINK; } - phydev->lp_advertising = - mii_stat1000_to_ethtool_lpa_t(lpagb); + mii_stat1000_mod_linkmode_lpa_t(phydev->lp_advertising, + lpagb); common_adv_gb = lpagb & adv << 2; } @@ -1717,7 +1745,7 @@ int genphy_read_status(struct phy_device *phydev) if (lpa < 0) return lpa; - phydev->lp_advertising |= mii_lpa_to_ethtool_lpa_t(lpa); + mii_lpa_mod_linkmode_lpa_t(phydev->lp_advertising, lpa); adv = phy_read(phydev, MII_ADVERTISE); if (adv < 0) @@ -1798,11 +1826,13 @@ EXPORT_SYMBOL(genphy_soft_reset); int genphy_config_init(struct phy_device *phydev) { int val; - u32 features; + __ETHTOOL_DECLARE_LINK_MODE_MASK(features) = { 0, }; - features = (SUPPORTED_TP | SUPPORTED_MII - | SUPPORTED_AUI | SUPPORTED_FIBRE | - SUPPORTED_BNC | SUPPORTED_Pause | SUPPORTED_Asym_Pause); + linkmode_set_bit_array(phy_basic_ports_array, + ARRAY_SIZE(phy_basic_ports_array), + features); + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, features); + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, features); /* Do we support autonegotiation? 
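 * (Aside, not part of the patch: linkmode_set_bit_array(), used above
 * to seed the features mask, is essentially just a loop over
 * linkmode_set_bit() in include/linux/linkmode.h:
 *
 *	static inline void linkmode_set_bit_array(const int *array,
 *						  int array_size,
 *						  unsigned long *addr)
 *	{
 *		int i;
 *
 *		for (i = 0; i < array_size; i++)
 *			linkmode_set_bit(array[i], addr);
 *	}
 * )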
*/ val = phy_read(phydev, MII_BMSR); @@ -1810,16 +1840,16 @@ int genphy_config_init(struct phy_device *phydev) return val; if (val & BMSR_ANEGCAPABLE) - features |= SUPPORTED_Autoneg; + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, features); if (val & BMSR_100FULL) - features |= SUPPORTED_100baseT_Full; + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, features); if (val & BMSR_100HALF) - features |= SUPPORTED_100baseT_Half; + linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, features); if (val & BMSR_10FULL) - features |= SUPPORTED_10baseT_Full; + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, features); if (val & BMSR_10HALF) - features |= SUPPORTED_10baseT_Half; + linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, features); if (val & BMSR_ESTATEN) { val = phy_read(phydev, MII_ESTATUS); @@ -1827,13 +1857,15 @@ int genphy_config_init(struct phy_device *phydev) return val; if (val & ESTATUS_1000_TFULL) - features |= SUPPORTED_1000baseT_Full; + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + features); if (val & ESTATUS_1000_THALF) - features |= SUPPORTED_1000baseT_Half; + linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + features); } - phydev->supported &= features; - phydev->advertising &= features; + linkmode_and(phydev->supported, phydev->supported, features); + linkmode_and(phydev->advertising, phydev->advertising, features); return 0; } @@ -1879,10 +1911,16 @@ static int __set_phy_supported(struct phy_device *phydev, u32 max_speed) { switch (max_speed) { case SPEED_10: - phydev->supported &= ~PHY_100BT_FEATURES; + linkmode_clear_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, + phydev->supported); + linkmode_clear_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + phydev->supported); /* fall through */ case SPEED_100: - phydev->supported &= ~PHY_1000BT_FEATURES; + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + phydev->supported); + linkmode_clear_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + phydev->supported); break; case SPEED_1000: break; @@ -1901,7 +1939,7 @@ int phy_set_max_speed(struct phy_device *phydev, u32 max_speed) if (err) return err; - phydev->advertising = phydev->supported; + linkmode_copy(phydev->advertising, phydev->supported); return 0; } @@ -1918,10 +1956,8 @@ EXPORT_SYMBOL(phy_set_max_speed); */ void phy_remove_link_mode(struct phy_device *phydev, u32 link_mode) { - WARN_ON(link_mode > 31); - - phydev->supported &= ~BIT(link_mode); - phydev->advertising = phydev->supported; + linkmode_clear_bit(link_mode, phydev->supported); + linkmode_copy(phydev->advertising, phydev->supported); } EXPORT_SYMBOL(phy_remove_link_mode); @@ -1934,9 +1970,9 @@ EXPORT_SYMBOL(phy_remove_link_mode); */ void phy_support_sym_pause(struct phy_device *phydev) { - phydev->supported &= ~SUPPORTED_Asym_Pause; - phydev->supported |= SUPPORTED_Pause; - phydev->advertising = phydev->supported; + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported); + linkmode_copy(phydev->advertising, phydev->supported); } EXPORT_SYMBOL(phy_support_sym_pause); @@ -1948,8 +1984,9 @@ EXPORT_SYMBOL(phy_support_sym_pause); */ void phy_support_asym_pause(struct phy_device *phydev) { - phydev->supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause; - phydev->advertising = phydev->supported; + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->supported); + linkmode_copy(phydev->advertising, phydev->supported); } 
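/* Aside (not part of the patch): a typical MAC-driver call sequence for
 * these pause helpers, sketched with invented names:
 *
 *	phydev = phy_connect(ndev, bus_id, foo_adjust_link,
 *			     PHY_INTERFACE_MODE_RGMII);
 *	if (IS_ERR(phydev))
 *		return PTR_ERR(phydev);
 *	phy_support_asym_pause(phydev);	 // MAC can pause each direction
 *
 * phy_validate_pause() then acts as the gatekeeper when ethtool -A
 * requests arrive.
 */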
EXPORT_SYMBOL(phy_support_asym_pause); @@ -1967,12 +2004,13 @@ EXPORT_SYMBOL(phy_support_asym_pause); void phy_set_sym_pause(struct phy_device *phydev, bool rx, bool tx, bool autoneg) { - phydev->supported &= ~SUPPORTED_Pause; + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported); if (rx && tx && autoneg) - phydev->supported |= SUPPORTED_Pause; + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->supported); - phydev->advertising = phydev->supported; + linkmode_copy(phydev->advertising, phydev->supported); } EXPORT_SYMBOL(phy_set_sym_pause); @@ -1989,20 +2027,29 @@ EXPORT_SYMBOL(phy_set_sym_pause); */ void phy_set_asym_pause(struct phy_device *phydev, bool rx, bool tx) { - u16 oldadv = phydev->advertising; - u16 newadv = oldadv &= ~(SUPPORTED_Pause | SUPPORTED_Asym_Pause); + __ETHTOOL_DECLARE_LINK_MODE_MASK(oldadv); - if (rx) - newadv |= SUPPORTED_Pause | SUPPORTED_Asym_Pause; - if (tx) - newadv ^= SUPPORTED_Asym_Pause; + linkmode_copy(oldadv, phydev->advertising); - if (oldadv != newadv) { - phydev->advertising = newadv; + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->advertising); - if (phydev->autoneg) - phy_start_aneg(phydev); + if (rx) { + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->advertising); + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->advertising); } + + if (tx) + linkmode_change_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->advertising); + + if (!linkmode_equal(oldadv, phydev->advertising) && + phydev->autoneg) + phy_start_aneg(phydev); } EXPORT_SYMBOL(phy_set_asym_pause); @@ -2018,8 +2065,10 @@ EXPORT_SYMBOL(phy_set_asym_pause); bool phy_validate_pause(struct phy_device *phydev, struct ethtool_pauseparam *pp) { - if (!(phydev->supported & SUPPORTED_Pause) || - (!(phydev->supported & SUPPORTED_Asym_Pause) && + if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->supported) || + (!linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->supported) && pp->rx_pause != pp->tx_pause)) return false; return true; @@ -2068,6 +2117,11 @@ static void of_set_phy_eee_broken(struct phy_device *phydev) phydev->eee_broken_modes = broken; } +static bool phy_drv_supports_irq(struct phy_driver *phydrv) +{ + return phydrv->config_intr && phydrv->ack_interrupt; +} + /** * phy_probe - probe and init a PHY device * @dev: device to probe and init @@ -2081,7 +2135,6 @@ static int phy_probe(struct device *dev) struct phy_device *phydev = to_phy_device(dev); struct device_driver *drv = phydev->mdio.dev.driver; struct phy_driver *phydrv = to_phy_driver(drv); - u32 features; int err = 0; phydev->drv = phydrv; @@ -2089,8 +2142,7 @@ static int phy_probe(struct device *dev) /* Disable the interrupt if the PHY doesn't support it * but the interrupt is still a valid one */ - if (!(phydrv->flags & PHY_HAS_INTERRUPT) && - phy_interrupt_is_valid(phydev)) + if (!phy_drv_supports_irq(phydrv) && phy_interrupt_is_valid(phydev)) phydev->irq = PHY_POLL; if (phydrv->flags & PHY_IS_INTERNAL) @@ -2102,10 +2154,9 @@ static int phy_probe(struct device *dev) * a controller will attach, and may modify one * or both of these values */ - ethtool_convert_link_mode_to_legacy_u32(&features, phydrv->features); - phydev->supported = features; + linkmode_copy(phydev->supported, phydrv->features); of_set_phy_supported(phydev); - phydev->advertising = phydev->supported; + linkmode_copy(phydev->advertising, phydev->supported); /* Get the EEE modes we want to prohibit. 
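 * (Aside, not part of the patch: eee_broken_modes is seeded from DT
 * properties such as "eee-broken-1000t" by of_set_phy_eee_broken();
 * see the PHY device-tree binding. A board-level sketch:
 *
 *	ethernet-phy@1 {
 *		reg = <1>;
 *		eee-broken-1000t;
 *	};
 * )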
We will ask * the PHY stop advertising these mode later on @@ -2125,14 +2176,22 @@ static int phy_probe(struct device *dev) */ if (test_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydrv->features) || test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydrv->features)) { - phydev->supported &= ~(SUPPORTED_Pause | SUPPORTED_Asym_Pause); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->supported); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->supported); if (test_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydrv->features)) - phydev->supported |= SUPPORTED_Pause; + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->supported); if (test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydrv->features)) - phydev->supported |= SUPPORTED_Asym_Pause; + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->supported); } else { - phydev->supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause; + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->supported); + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->supported); } /* Set the state to READY by default */ diff --git a/drivers/net/phy/phy_led_triggers.c b/drivers/net/phy/phy_led_triggers.c index 491efc1bf5c4..263385b75bba 100644 --- a/drivers/net/phy/phy_led_triggers.c +++ b/drivers/net/phy/phy_led_triggers.c @@ -67,7 +67,7 @@ void phy_led_trigger_change_speed(struct phy_device *phy) EXPORT_SYMBOL_GPL(phy_led_trigger_change_speed); static void phy_led_trigger_format_name(struct phy_device *phy, char *buf, - size_t size, char *suffix) + size_t size, const char *suffix) { snprintf(buf, size, PHY_ID_FMT ":%s", phy->mdio.bus->id, phy->mdio.addr, suffix); @@ -77,20 +77,9 @@ static int phy_led_trigger_register(struct phy_device *phy, struct phy_led_trigger *plt, unsigned int speed) { - char name_suffix[PHY_LED_TRIGGER_SPEED_SUFFIX_SIZE]; - plt->speed = speed; - - if (speed < SPEED_1000) - snprintf(name_suffix, sizeof(name_suffix), "%dMbps", speed); - else if (speed == SPEED_2500) - snprintf(name_suffix, sizeof(name_suffix), "2.5Gbps"); - else - snprintf(name_suffix, sizeof(name_suffix), "%dGbps", - DIV_ROUND_CLOSEST(speed, 1000)); - phy_led_trigger_format_name(phy, plt->name, sizeof(plt->name), - name_suffix); + phy_speed_to_str(speed)); plt->trigger.name = plt->name; return led_trigger_register(&plt->trigger); diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index 9b8dd0d0ee42..e7becc7379d7 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -191,8 +191,7 @@ static int phylink_parse_fixedlink(struct phylink *pl, phylink_validate(pl, pl->supported, &pl->link_config); s = phy_lookup_setting(pl->link_config.speed, pl->link_config.duplex, - pl->supported, - __ETHTOOL_LINK_MODE_MASK_NBITS, true); + pl->supported, true); linkmode_zero(pl->supported); phylink_set(pl->supported, MII); if (s) { @@ -634,13 +633,11 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy) { struct phylink_link_state config; __ETHTOOL_DECLARE_LINK_MODE_MASK(supported); - u32 advertising; int ret; memset(&config, 0, sizeof(config)); - ethtool_convert_legacy_u32_to_link_mode(supported, phy->supported); - ethtool_convert_legacy_u32_to_link_mode(config.advertising, - phy->advertising); + linkmode_copy(supported, phy->supported); + linkmode_copy(config.advertising, phy->advertising); config.interface = pl->link_config.interface; /* @@ -673,15 +670,14 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy) linkmode_copy(pl->link_config.advertising, config.advertising); /* Restrict the 
phy advertisement according to the MAC support. */ - ethtool_convert_link_mode_to_legacy_u32(&advertising, config.advertising); - phy->advertising = advertising; + linkmode_copy(phy->advertising, config.advertising); mutex_unlock(&pl->state_mutex); mutex_unlock(&phy->lock); netdev_dbg(pl->netdev, - "phy: setting supported %*pb advertising 0x%08x\n", + "phy: setting supported %*pb advertising %*pb\n", __ETHTOOL_LINK_MODE_MASK_NBITS, pl->supported, - phy->advertising); + __ETHTOOL_LINK_MODE_MASK_NBITS, phy->advertising); phy_start_machine(phy); if (phy->irq > 0) @@ -1088,8 +1084,7 @@ int phylink_ethtool_ksettings_set(struct phylink *pl, * duplex. */ s = phy_lookup_setting(kset->base.speed, kset->base.duplex, - pl->supported, - __ETHTOOL_LINK_MODE_MASK_NBITS, false); + pl->supported, false); if (!s) return -EINVAL; diff --git a/drivers/net/phy/qsemi.c b/drivers/net/phy/qsemi.c index 889a4dce1648..cfe2313dbefd 100644 --- a/drivers/net/phy/qsemi.c +++ b/drivers/net/phy/qsemi.c @@ -116,7 +116,6 @@ static struct phy_driver qs6612_driver[] = { { .name = "QS6612", .phy_id_mask = 0xfffffff0, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = qs6612_config_init, .ack_interrupt = qs6612_ack_interrupt, .config_intr = qs6612_config_intr, diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c index 271e8adc39f1..c6010fb1aa0f 100644 --- a/drivers/net/phy/realtek.c +++ b/drivers/net/phy/realtek.c @@ -213,17 +213,13 @@ static int rtl8366rb_config_init(struct phy_device *phydev) static struct phy_driver realtek_drvs[] = { { - .phy_id = 0x00008201, + PHY_ID_MATCH_EXACT(0x00008201), .name = "RTL8201CP Ethernet", - .phy_id_mask = 0x0000ffff, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, }, { - .phy_id = 0x001cc816, + PHY_ID_MATCH_EXACT(0x001cc816), .name = "RTL8201F Fast Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .ack_interrupt = &rtl8201_ack_interrupt, .config_intr = &rtl8201_config_intr, .suspend = genphy_suspend, @@ -231,19 +227,16 @@ static struct phy_driver realtek_drvs[] = { .read_page = rtl821x_read_page, .write_page = rtl821x_write_page, }, { - .phy_id = 0x001cc910, + PHY_ID_MATCH_EXACT(0x001cc910), .name = "RTL8211 Gigabit Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_GBIT_FEATURES, .config_aneg = rtl8211_config_aneg, .read_mmd = &genphy_read_mmd_unsupported, .write_mmd = &genphy_write_mmd_unsupported, }, { - .phy_id = 0x001cc912, + PHY_ID_MATCH_EXACT(0x001cc912), .name = "RTL8211B Gigabit Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .ack_interrupt = &rtl821x_ack_interrupt, .config_intr = &rtl8211b_config_intr, .read_mmd = &genphy_read_mmd_unsupported, @@ -251,39 +244,32 @@ static struct phy_driver realtek_drvs[] = { .suspend = rtl8211b_suspend, .resume = rtl8211b_resume, }, { - .phy_id = 0x001cc913, + PHY_ID_MATCH_EXACT(0x001cc913), .name = "RTL8211C Gigabit Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_GBIT_FEATURES, .config_init = rtl8211c_config_init, .read_mmd = &genphy_read_mmd_unsupported, .write_mmd = &genphy_write_mmd_unsupported, }, { - .phy_id = 0x001cc914, + PHY_ID_MATCH_EXACT(0x001cc914), .name = "RTL8211DN Gigabit Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .ack_interrupt = rtl821x_ack_interrupt, .config_intr = rtl8211e_config_intr, .suspend = genphy_suspend, .resume = genphy_resume, }, { - .phy_id = 0x001cc915, + PHY_ID_MATCH_EXACT(0x001cc915), 
.name = "RTL8211E Gigabit Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .ack_interrupt = &rtl821x_ack_interrupt, .config_intr = &rtl8211e_config_intr, .suspend = genphy_suspend, .resume = genphy_resume, }, { - .phy_id = 0x001cc916, + PHY_ID_MATCH_EXACT(0x001cc916), .name = "RTL8211F Gigabit Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &rtl8211f_config_init, .ack_interrupt = &rtl8211f_ack_interrupt, .config_intr = &rtl8211f_config_intr, @@ -292,11 +278,9 @@ static struct phy_driver realtek_drvs[] = { .read_page = rtl821x_read_page, .write_page = rtl821x_write_page, }, { - .phy_id = 0x001cc961, + PHY_ID_MATCH_EXACT(0x001cc961), .name = "RTL8366RB Gigabit Ethernet", - .phy_id_mask = 0x001fffff, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &rtl8366rb_config_init, .suspend = genphy_suspend, .resume = genphy_resume, @@ -305,15 +289,8 @@ static struct phy_driver realtek_drvs[] = { module_phy_driver(realtek_drvs); -static struct mdio_device_id __maybe_unused realtek_tbl[] = { - { 0x001cc816, 0x001fffff }, - { 0x001cc910, 0x001fffff }, - { 0x001cc912, 0x001fffff }, - { 0x001cc913, 0x001fffff }, - { 0x001cc914, 0x001fffff }, - { 0x001cc915, 0x001fffff }, - { 0x001cc916, 0x001fffff }, - { 0x001cc961, 0x001fffff }, +static const struct mdio_device_id __maybe_unused realtek_tbl[] = { + { PHY_ID_MATCH_VENDOR(0x001cc800) }, { } }; diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c index c328208388da..f9477ff55545 100644 --- a/drivers/net/phy/smsc.c +++ b/drivers/net/phy/smsc.c @@ -219,7 +219,6 @@ static struct phy_driver smsc_phy_driver[] = { .name = "SMSC LAN83C185", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = smsc_phy_probe, @@ -239,7 +238,6 @@ static struct phy_driver smsc_phy_driver[] = { .name = "SMSC LAN8187", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = smsc_phy_probe, @@ -264,7 +262,6 @@ static struct phy_driver smsc_phy_driver[] = { .name = "SMSC LAN8700", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = smsc_phy_probe, @@ -290,7 +287,6 @@ static struct phy_driver smsc_phy_driver[] = { .name = "SMSC LAN911x Internal PHY", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = smsc_phy_probe, @@ -309,7 +305,7 @@ static struct phy_driver smsc_phy_driver[] = { .name = "SMSC LAN8710/LAN8720", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT | PHY_RST_AFTER_CLK_EN, + .flags = PHY_RST_AFTER_CLK_EN, .probe = smsc_phy_probe, @@ -335,7 +331,6 @@ static struct phy_driver smsc_phy_driver[] = { .name = "SMSC LAN8740", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .probe = smsc_phy_probe, diff --git a/drivers/net/phy/ste10Xp.c b/drivers/net/phy/ste10Xp.c index 2fe9a87b55b5..33d733684f5b 100644 --- a/drivers/net/phy/ste10Xp.c +++ b/drivers/net/phy/ste10Xp.c @@ -87,7 +87,6 @@ static struct phy_driver ste10xp_pdriver[] = { .phy_id_mask = 0xfffffff0, .name = "STe101p", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = ste10Xp_config_init, .ack_interrupt = ste10Xp_ack_interrupt, .config_intr = ste10Xp_config_intr, @@ -98,7 +97,6 @@ static struct phy_driver ste10xp_pdriver[] = { .phy_id_mask = 0xffffffff, .name = "STe100p", .features = PHY_BASIC_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = ste10Xp_config_init, .ack_interrupt = ste10Xp_ack_interrupt, .config_intr = ste10Xp_config_intr, diff --git 
a/drivers/net/phy/uPD60620.c b/drivers/net/phy/uPD60620.c index 55f48ee3595a..1e4fc42e4629 100644 --- a/drivers/net/phy/uPD60620.c +++ b/drivers/net/phy/uPD60620.c @@ -47,7 +47,7 @@ static int upd60620_read_status(struct phy_device *phydev) return phy_state; phydev->link = 0; - phydev->lp_advertising = 0; + linkmode_zero(phydev->lp_advertising); phydev->pause = 0; phydev->asym_pause = 0; @@ -70,8 +70,8 @@ static int upd60620_read_status(struct phy_device *phydev) if (phy_state < 0) return phy_state; - phydev->lp_advertising - = mii_lpa_to_ethtool_lpa_t(phy_state); + mii_lpa_to_linkmode_lpa_t(phydev->lp_advertising, + phy_state); if (phydev->duplex == DUPLEX_FULL) { if (phy_state & LPA_PAUSE_CAP) diff --git a/drivers/net/phy/vitesse.c b/drivers/net/phy/vitesse.c index fbf9ad429593..0646af458f6a 100644 --- a/drivers/net/phy/vitesse.c +++ b/drivers/net/phy/vitesse.c @@ -70,7 +70,6 @@ #define PHY_ID_VSC8244 0x000fc6c0 #define PHY_ID_VSC8514 0x00070670 #define PHY_ID_VSC8572 0x000704d0 -#define PHY_ID_VSC8574 0x000704a0 #define PHY_ID_VSC8601 0x00070420 #define PHY_ID_VSC7385 0x00070450 #define PHY_ID_VSC7388 0x00070480 @@ -303,7 +302,6 @@ static int vsc82xx_config_intr(struct phy_device *phydev) phydev->drv->phy_id == PHY_ID_VSC8244 || phydev->drv->phy_id == PHY_ID_VSC8514 || phydev->drv->phy_id == PHY_ID_VSC8572 || - phydev->drv->phy_id == PHY_ID_VSC8574 || phydev->drv->phy_id == PHY_ID_VSC8601) ? MII_VSC8244_IMASK_MASK : MII_VSC8221_IMASK_MASK); @@ -399,7 +397,6 @@ static struct phy_driver vsc82xx_driver[] = { .name = "Vitesse VSC8234", .phy_id_mask = 0x000ffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc824x_config_init, .config_aneg = &vsc82x4_config_aneg, .ack_interrupt = &vsc824x_ack_interrupt, @@ -409,7 +406,6 @@ static struct phy_driver vsc82xx_driver[] = { .name = "Vitesse VSC8244", .phy_id_mask = 0x000fffc0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc824x_config_init, .config_aneg = &vsc82x4_config_aneg, .ack_interrupt = &vsc824x_ack_interrupt, @@ -419,7 +415,6 @@ static struct phy_driver vsc82xx_driver[] = { .name = "Vitesse VSC8514", .phy_id_mask = 0x000ffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc824x_config_init, .config_aneg = &vsc82x4_config_aneg, .ack_interrupt = &vsc824x_ack_interrupt, @@ -429,17 +424,6 @@ static struct phy_driver vsc82xx_driver[] = { .name = "Vitesse VSC8572", .phy_id_mask = 0x000ffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, - .config_init = &vsc824x_config_init, - .config_aneg = &vsc82x4_config_aneg, - .ack_interrupt = &vsc824x_ack_interrupt, - .config_intr = &vsc82xx_config_intr, -}, { - .phy_id = PHY_ID_VSC8574, - .name = "Vitesse VSC8574", - .phy_id_mask = 0x000ffff0, - .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc824x_config_init, .config_aneg = &vsc82x4_config_aneg, .ack_interrupt = &vsc824x_ack_interrupt, @@ -449,7 +433,6 @@ static struct phy_driver vsc82xx_driver[] = { .name = "Vitesse VSC8601", .phy_id_mask = 0x000ffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc8601_config_init, .ack_interrupt = &vsc824x_ack_interrupt, .config_intr = &vsc82xx_config_intr, @@ -494,7 +477,6 @@ static struct phy_driver vsc82xx_driver[] = { .name = "Vitesse VSC8662", .phy_id_mask = 0x000ffff0, .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc824x_config_init, .config_aneg = &vsc82x4_config_aneg, .ack_interrupt = 
&vsc824x_ack_interrupt, @@ -505,7 +487,6 @@ static struct phy_driver vsc82xx_driver[] = { .phy_id_mask = 0x000ffff0, .name = "Vitesse VSC8221", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc8221_config_init, .ack_interrupt = &vsc824x_ack_interrupt, .config_intr = &vsc82xx_config_intr, @@ -515,7 +496,6 @@ static struct phy_driver vsc82xx_driver[] = { .phy_id_mask = 0x000ffff0, .name = "Vitesse VSC8211", .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, .config_init = &vsc8221_config_init, .ack_interrupt = &vsc824x_ack_interrupt, .config_intr = &vsc82xx_config_intr, @@ -528,7 +508,6 @@ static struct mdio_device_id __maybe_unused vitesse_tbl[] = { { PHY_ID_VSC8244, 0x000fffc0 }, { PHY_ID_VSC8514, 0x000ffff0 }, { PHY_ID_VSC8572, 0x000ffff0 }, - { PHY_ID_VSC8574, 0x000ffff0 }, { PHY_ID_VSC7385, 0x000ffff0 }, { PHY_ID_VSC7388, 0x000ffff0 }, { PHY_ID_VSC7395, 0x000ffff0 }, diff --git a/drivers/net/ppp/ppp_async.c b/drivers/net/ppp/ppp_async.c index bdc4d23627c5..b287bb811875 100644 --- a/drivers/net/ppp/ppp_async.c +++ b/drivers/net/ppp/ppp_async.c @@ -70,7 +70,7 @@ struct asyncppp { struct tasklet_struct tsk; refcount_t refcnt; - struct semaphore dead_sem; + struct completion dead; struct ppp_channel chan; /* interface to generic ppp layer */ unsigned char obuf[OBUFSIZE]; }; @@ -148,7 +148,7 @@ static struct asyncppp *ap_get(struct tty_struct *tty) static void ap_put(struct asyncppp *ap) { if (refcount_dec_and_test(&ap->refcnt)) - up(&ap->dead_sem); + complete(&ap->dead); } /* @@ -186,7 +186,7 @@ ppp_asynctty_open(struct tty_struct *tty) tasklet_init(&ap->tsk, ppp_async_process, (unsigned long) ap); refcount_set(&ap->refcnt, 1); - sema_init(&ap->dead_sem, 0); + init_completion(&ap->dead); ap->chan.private = ap; ap->chan.ops = &async_ops; @@ -235,7 +235,7 @@ ppp_asynctty_close(struct tty_struct *tty) * by the time it returns. */ if (!refcount_dec_and_test(&ap->refcnt)) - down(&ap->dead_sem); + wait_for_completion(&ap->dead); tasklet_kill(&ap->tsk); ppp_unregister_channel(&ap->chan); @@ -770,7 +770,7 @@ process_input_packet(struct asyncppp *ap) { struct sk_buff *skb; unsigned char *p; - unsigned int len, fcs, proto; + unsigned int len, fcs; skb = ap->rpkt; if (ap->state & (SC_TOSS | SC_ESCAPE)) @@ -799,14 +799,14 @@ process_input_packet(struct asyncppp *ap) goto err; p = skb_pull(skb, 2); } - proto = p[0]; - if (proto & 1) { - /* protocol is compressed */ - *(u8 *)skb_push(skb, 1) = 0; - } else { + + /* If protocol field is not compressed, it can be LCP packet */ + if (!(p[0] & 0x01)) { + unsigned int proto; + if (skb->len < 2) goto err; - proto = (proto << 8) + p[1]; + proto = (p[0] << 8) + p[1]; if (proto == PPP_LCP) async_lcp_peek(ap, p, skb->len, 1); } diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c index 500bc0027c1b..c708400fff4a 100644 --- a/drivers/net/ppp/ppp_generic.c +++ b/drivers/net/ppp/ppp_generic.c @@ -1965,6 +1965,46 @@ ppp_do_recv(struct ppp *ppp, struct sk_buff *skb, struct channel *pch) ppp_recv_unlock(ppp); } +/** + * __ppp_decompress_proto - Decompress protocol field, slim version. + * @skb: Socket buffer where protocol field should be decompressed. It must have + * at least 1 byte of head room and 1 byte of linear data. First byte of + * data must be a protocol field byte. + * + * Decompress protocol field in PPP header if it's compressed, e.g. when + * Protocol-Field-Compression (PFC) was negotiated. No checks w.r.t. skb data + * length are done in this function. 
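 * Worked example (aside, not patch content): with PFC negotiated, an
 * IPv4 frame's protocol field arrives as the single odd byte 0x21;
 * pushing one 0x00 byte restores the canonical two-byte field
 * 0x00 0x21 (PPP_IP). Fields whose first byte is even, such as
 * 0xc0 0x21 (PPP_LCP), are never sent compressed, so testing the LSB
 * of the first byte is sufficient to detect compression.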
+ */ +static void __ppp_decompress_proto(struct sk_buff *skb) +{ + if (skb->data[0] & 0x01) + *(u8 *)skb_push(skb, 1) = 0x00; +} + +/** + * ppp_decompress_proto - Check skb data room and decompress protocol field. + * @skb: Socket buffer where protocol field should be decompressed. First byte + * of data must be a protocol field byte. + * + * Decompress protocol field in PPP header if it's compressed, e.g. when + * Protocol-Field-Compression (PFC) was negotiated. This function also makes + * sure that skb data room is sufficient for Protocol field, before and after + * decompression. + * + * Return: true - decompressed successfully, false - not enough room in skb. + */ +static bool ppp_decompress_proto(struct sk_buff *skb) +{ + /* At least one byte should be present (if protocol is compressed) */ + if (!pskb_may_pull(skb, 1)) + return false; + + __ppp_decompress_proto(skb); + + /* Protocol field should occupy 2 bytes when not compressed */ + return pskb_may_pull(skb, 2); +} + void ppp_input(struct ppp_channel *chan, struct sk_buff *skb) { @@ -1977,7 +2017,7 @@ ppp_input(struct ppp_channel *chan, struct sk_buff *skb) } read_lock_bh(&pch->upl); - if (!pskb_may_pull(skb, 2)) { + if (!ppp_decompress_proto(skb)) { kfree_skb(skb); if (pch->ppp) { ++pch->ppp->dev->stats.rx_length_errors; @@ -2074,6 +2114,9 @@ ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb) if (ppp->flags & SC_MUST_COMP && ppp->rstate & SC_DC_FERROR) goto err; + /* At this point the "Protocol" field MUST be decompressed, either in + * ppp_input(), ppp_decompress_frame() or in ppp_receive_mp_frame(). + */ proto = PPP_PROTO(skb); switch (proto) { case PPP_VJC_COMP: @@ -2245,6 +2288,9 @@ ppp_decompress_frame(struct ppp *ppp, struct sk_buff *skb) skb_put(skb, len); skb_pull(skb, 2); /* pull off the A/C bytes */ + /* Don't call __ppp_decompress_proto() here, but instead rely on + * corresponding algo (mppe/bsd/deflate) to decompress it. + */ } else { /* Uncompressed frame - pass to decompressor so it can update its dictionary if necessary. */ @@ -2290,9 +2336,11 @@ ppp_receive_mp_frame(struct ppp *ppp, struct sk_buff *skb, struct channel *pch) /* * Do protocol ID decompression on the first fragment of each packet. + * We have to do that here, because ppp_receive_nonmp_frame() expects + * decompressed protocol field. */ - if ((PPP_MP_CB(skb)->BEbits & B) && (skb->data[0] & 1)) - *(u8 *)skb_push(skb, 1) = 0; + if (PPP_MP_CB(skb)->BEbits & B) + __ppp_decompress_proto(skb); /* * Expand sequence number to 32 bits, making it as close diff --git a/drivers/net/ppp/ppp_synctty.c b/drivers/net/ppp/ppp_synctty.c index 047f6c68a441..d02ba2494d93 100644 --- a/drivers/net/ppp/ppp_synctty.c +++ b/drivers/net/ppp/ppp_synctty.c @@ -709,11 +709,10 @@ ppp_sync_input(struct syncppp *ap, const unsigned char *buf, p = skb_pull(skb, 2); } - /* decompress protocol field if compressed */ - if (p[0] & 1) { - /* protocol is compressed */ - *(u8 *)skb_push(skb, 1) = 0; - } else if (skb->len < 2) + /* PPP packet length should be >= 2 bytes when protocol field is not + * compressed. 
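 * (Aside, not patch content: the sync driver no longer expands the
 * field itself; it only validates the length here and leaves the
 * actual decompression to ppp_input(), which with this series calls
 * ppp_decompress_proto() on every skb it receives from a channel.)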
+ */ + if (!(p[0] & 0x01) && skb->len < 2) goto err; /* queue the frame to be processed */ diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c index 67ffe74747a1..8f09edd811e9 100644 --- a/drivers/net/ppp/pptp.c +++ b/drivers/net/ppp/pptp.c @@ -325,11 +325,6 @@ allow_packet: skb_pull(skb, 2); } - if ((*skb->data) & 1) { - /* protocol is compressed */ - *(u8 *)skb_push(skb, 1) = 0; - } - skb->ip_summed = CHECKSUM_NONE; skb_set_network_header(skb, skb->head-skb->data); ppp_input(&po->chan, skb); diff --git a/drivers/net/tap.c b/drivers/net/tap.c index f03004f37eca..443b2694130c 100644 --- a/drivers/net/tap.c +++ b/drivers/net/tap.c @@ -1113,7 +1113,7 @@ static long tap_ioctl(struct file *file, unsigned int cmd, rtnl_unlock(); return -ENOLINK; } - ret = dev_set_mac_address(tap->dev, &sa); + ret = dev_set_mac_address(tap->dev, &sa, NULL); tap_put_tap_dev(tap); rtnl_unlock(); return ret; diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c index 364f514d56d8..afd9d25d1992 100644 --- a/drivers/net/team/team.c +++ b/drivers/net/team/team.c @@ -59,7 +59,7 @@ static int __set_port_dev_addr(struct net_device *port_dev, memcpy(addr.__data, dev_addr, port_dev->addr_len); addr.ss_family = port_dev->type; - return dev_set_mac_address(port_dev, (struct sockaddr *)&addr); + return dev_set_mac_address(port_dev, (struct sockaddr *)&addr, NULL); } static int team_port_set_orig_dev_addr(struct team_port *port) @@ -1212,7 +1212,7 @@ static int team_port_add(struct team *team, struct net_device *port_dev, goto err_port_enter; } - err = dev_open(port_dev); + err = dev_open(port_dev, extack); if (err) { netdev_dbg(dev, "Device %s opening failed\n", portname); diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 005020042be9..a4fdad475594 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -188,6 +188,11 @@ struct tun_file { struct xdp_rxq_info xdp_rxq; }; +struct tun_page { + struct page *page; + int count; +}; + struct tun_flow_entry { struct hlist_node hash_link; struct rcu_head rcu; @@ -196,7 +201,7 @@ struct tun_flow_entry { u32 rxhash; u32 rps_rxhash; int queue_index; - unsigned long updated; + unsigned long updated ____cacheline_aligned_in_smp; }; #define TUN_NUM_FLOW_ENTRIES 1024 @@ -524,18 +529,17 @@ static void tun_flow_update(struct tun_struct *tun, u32 rxhash, unsigned long delay = tun->ageing_time; u16 queue_index = tfile->queue_index; - if (!rxhash) - return; - else - head = &tun->flows[tun_hashfn(rxhash)]; + head = &tun->flows[tun_hashfn(rxhash)]; rcu_read_lock(); e = tun_flow_find(head, rxhash); if (likely(e)) { /* TODO: keep queueing to old queue until it's empty? 
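 * (Aside, not patch content: the conditional stores below, together
 * with moving "updated" onto its own cache line via
 * ____cacheline_aligned_in_smp, look intended to avoid dirtying a line
 * that every CPU on the XDP path reads per packet:
 *
 *	if (e->queue_index != queue_index)
 *		e->queue_index = queue_index;
 *	if (e->updated != jiffies)
 *		e->updated = jiffies;
 *
 * An unconditional write would bounce the line between CPUs even when
 * the values are unchanged.)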
*/ - e->queue_index = queue_index; - e->updated = jiffies; + if (e->queue_index != queue_index) + e->queue_index = queue_index; + if (e->updated != jiffies) + e->updated = jiffies; sock_rps_record_flow_hash(e->rps_rxhash); } else { spin_lock_bh(&tun->lock); @@ -1249,6 +1253,21 @@ static int tun_xdp(struct net_device *dev, struct netdev_bpf *xdp) } } +static int tun_net_change_carrier(struct net_device *dev, bool new_carrier) +{ + if (new_carrier) { + struct tun_struct *tun = netdev_priv(dev); + + if (!tun->numqueues) + return -EPERM; + + netif_carrier_on(dev); + } else { + netif_carrier_off(dev); + } + return 0; +} + static const struct net_device_ops tun_netdev_ops = { .ndo_uninit = tun_net_uninit, .ndo_open = tun_net_open, @@ -1258,6 +1277,7 @@ static const struct net_device_ops tun_netdev_ops = { .ndo_select_queue = tun_select_queue, .ndo_set_rx_headroom = tun_set_headroom, .ndo_get_stats64 = tun_net_get_stats64, + .ndo_change_carrier = tun_net_change_carrier, }; static void __tun_xdp_flush_tfile(struct tun_file *tfile) @@ -1340,6 +1360,7 @@ static const struct net_device_ops tap_netdev_ops = { .ndo_get_stats64 = tun_net_get_stats64, .ndo_bpf = tun_xdp, .ndo_xdp_xmit = tun_xdp_xmit, + .ndo_change_carrier = tun_net_change_carrier, }; static void tun_flow_init(struct tun_struct *tun) @@ -1473,23 +1494,22 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile, skb->truesize += skb->data_len; for (i = 1; i < it->nr_segs; i++) { - struct page_frag *pfrag = ¤t->task_frag; size_t fragsz = it->iov[i].iov_len; + struct page *page; + void *frag; if (fragsz == 0 || fragsz > PAGE_SIZE) { err = -EINVAL; goto free; } - - if (!skb_page_frag_refill(fragsz, pfrag, GFP_KERNEL)) { + frag = netdev_alloc_frag(fragsz); + if (!frag) { err = -ENOMEM; goto free; } - - skb_fill_page_desc(skb, i - 1, pfrag->page, - pfrag->offset, fragsz); - page_ref_inc(pfrag->page); - pfrag->offset += fragsz; + page = virt_to_head_page(frag); + skb_fill_page_desc(skb, i - 1, page, + frag - page_address(page), fragsz); } return skb; @@ -2381,9 +2401,16 @@ static void tun_sock_write_space(struct sock *sk) kill_fasync(&tfile->fasync, SIGIO, POLL_OUT); } +static void tun_put_page(struct tun_page *tpage) +{ + if (tpage->page) + __page_frag_cache_drain(tpage->page, tpage->count); +} + static int tun_xdp_one(struct tun_struct *tun, struct tun_file *tfile, - struct xdp_buff *xdp, int *flush) + struct xdp_buff *xdp, int *flush, + struct tun_page *tpage) { unsigned int datasize = xdp->data_end - xdp->data; struct tun_xdp_hdr *hdr = xdp->data_hard_start; @@ -2395,6 +2422,7 @@ static int tun_xdp_one(struct tun_struct *tun, int buflen = hdr->buflen; int err = 0; bool skb_xdp = false; + struct page *page; xdp_prog = rcu_dereference(tun->xdp_prog); if (xdp_prog) { @@ -2421,7 +2449,14 @@ static int tun_xdp_one(struct tun_struct *tun, case XDP_PASS: break; default: - put_page(virt_to_head_page(xdp->data)); + page = virt_to_head_page(xdp->data); + if (tpage->page == page) { + ++tpage->count; + } else { + tun_put_page(tpage); + tpage->page = page; + tpage->count = 1; + } return 0; } } @@ -2453,18 +2488,21 @@ build: goto out; } - if (!rcu_dereference(tun->steering_prog)) + if (!rcu_dereference(tun->steering_prog) && tun->numqueues > 1 && + !tfile->detached) rxhash = __skb_get_hash_symmetric(skb); skb_record_rx_queue(skb, tfile->queue_index); netif_receive_skb(skb); - stats = get_cpu_ptr(tun->pcpu_stats); + /* No need for get_cpu_ptr() here since this function is + * always called with bh disabled + */ + stats = 
this_cpu_ptr(tun->pcpu_stats); u64_stats_update_begin(&stats->syncp); stats->rx_packets++; stats->rx_bytes += datasize; u64_stats_update_end(&stats->syncp); - put_cpu_ptr(stats); if (rxhash) tun_flow_update(tun, rxhash, tfile); @@ -2485,15 +2523,18 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len) return -EBADFD; if (ctl && (ctl->type == TUN_MSG_PTR)) { + struct tun_page tpage; int n = ctl->num; int flush = 0; + memset(&tpage, 0, sizeof(tpage)); + local_bh_disable(); rcu_read_lock(); for (i = 0; i < n; i++) { xdp = &((struct xdp_buff *)ctl->ptr)[i]; - tun_xdp_one(tun, tfile, xdp, &flush); + tun_xdp_one(tun, tfile, xdp, &flush, &tpage); } if (flush) @@ -2502,6 +2543,8 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len) rcu_read_unlock(); local_bh_enable(); + tun_put_page(&tpage); + ret = total_len; goto out; } @@ -2978,12 +3021,12 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, struct net *net = sock_net(&tfile->sk); struct tun_struct *tun; void __user* argp = (void __user*)arg; + unsigned int ifindex, carrier; struct ifreq ifr; kuid_t owner; kgid_t group; int sndbuf; int vnet_hdr_sz; - unsigned int ifindex; int le; int ret; bool do_notify = false; @@ -3161,7 +3204,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, tun_debug(KERN_DEBUG, tun, "set hw address: %pM\n", ifr.ifr_hwaddr.sa_data); - ret = dev_set_mac_address(tun->dev, &ifr.ifr_hwaddr); + ret = dev_set_mac_address(tun->dev, &ifr.ifr_hwaddr, NULL); break; case TUNGETSNDBUF: @@ -3267,6 +3310,14 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd, ret = tun_set_ebpf(tun, &tun->filter_prog, argp); break; + case TUNSETCARRIER: + ret = -EFAULT; + if (copy_from_user(&carrier, argp, sizeof(carrier))) + goto unlock; + + ret = tun_net_change_carrier(tun->dev, (bool)carrier); + break; + default: ret = -EINVAL; break; diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig index 418b0904cecb..860352a525fb 100644 --- a/drivers/net/usb/Kconfig +++ b/drivers/net/usb/Kconfig @@ -613,4 +613,15 @@ config USB_NET_CH9200 To compile this driver as a module, choose M here: the module will be called ch9200. +config USB_NET_AQC111 + tristate "Aquantia AQtion USB to 5/2.5GbE Controllers support" + depends on USB_USBNET + select CRC32 + help + This option adds support for Aquantia AQtion USB + Ethernet adapters based on AQC111U/AQC112 chips. + + This driver should work with at least the following devices: + * Aquantia AQtion USB to 5GbE + endif # USB_NET_DRIVERS diff --git a/drivers/net/usb/Makefile b/drivers/net/usb/Makefile index 27307a4ab003..99fd12be2111 100644 --- a/drivers/net/usb/Makefile +++ b/drivers/net/usb/Makefile @@ -40,3 +40,4 @@ obj-$(CONFIG_USB_VL600) += lg-vl600.o obj-$(CONFIG_USB_NET_QMI_WWAN) += qmi_wwan.o obj-$(CONFIG_USB_NET_CDC_MBIM) += cdc_mbim.o obj-$(CONFIG_USB_NET_CH9200) += ch9200.o +obj-$(CONFIG_USB_NET_AQC111) += aqc111.o diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c new file mode 100644 index 000000000000..57f1c94fca0b --- /dev/null +++ b/drivers/net/usb/aqc111.c @@ -0,0 +1,1459 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Aquantia Corp. Aquantia AQtion USB to 5GbE Controller + * Copyright (C) 2003-2005 David Hollis + * Copyright (C) 2005 Phil Chang + * Copyright (C) 2002-2003 TiVo Inc. + * Copyright (C) 2017-2018 ASIX + * Copyright (C) 2018 Aquantia Corp. 
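 * (Aside, not patch content: the new TUNSETCARRIER ioctl added above
 * reads an unsigned int through the passed pointer, so a userspace
 * sketch, assuming <linux/if_tun.h> exposes the request code, is:
 *
 *	unsigned int carrier = 0;
 *
 *	if (ioctl(tun_fd, TUNSETCARRIER, &carrier) < 0)
 *		perror("TUNSETCARRIER");
 * )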
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "aqc111.h" + +#define DRIVER_NAME "aqc111" + +static int aqc111_read_cmd_nopm(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 size, void *data) +{ + int ret; + + ret = usbnet_read_cmd_nopm(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR | + USB_RECIP_DEVICE, value, index, data, size); + + if (unlikely(ret < 0)) + netdev_warn(dev->net, + "Failed to read(0x%x) reg index 0x%04x: %d\n", + cmd, index, ret); + + return ret; +} + +static int aqc111_read_cmd(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 size, void *data) +{ + int ret; + + ret = usbnet_read_cmd(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR | + USB_RECIP_DEVICE, value, index, data, size); + + if (unlikely(ret < 0)) + netdev_warn(dev->net, + "Failed to read(0x%x) reg index 0x%04x: %d\n", + cmd, index, ret); + + return ret; +} + +static int aqc111_read16_cmd_nopm(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 *data) +{ + int ret = 0; + + ret = aqc111_read_cmd_nopm(dev, cmd, value, index, sizeof(*data), data); + le16_to_cpus(data); + + return ret; +} + +static int aqc111_read16_cmd(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 *data) +{ + int ret = 0; + + ret = aqc111_read_cmd(dev, cmd, value, index, sizeof(*data), data); + le16_to_cpus(data); + + return ret; +} + +static int __aqc111_write_cmd(struct usbnet *dev, u8 cmd, u8 reqtype, + u16 value, u16 index, u16 size, const void *data) +{ + int err = -ENOMEM; + void *buf = NULL; + + netdev_dbg(dev->net, + "%s cmd=%#x reqtype=%#x value=%#x index=%#x size=%d\n", + __func__, cmd, reqtype, value, index, size); + + if (data) { + buf = kmemdup(data, size, GFP_KERNEL); + if (!buf) + goto out; + } + + err = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0), + cmd, reqtype, value, index, buf, size, + (cmd == AQ_PHY_POWER) ? 
AQ_USB_PHY_SET_TIMEOUT : + AQ_USB_SET_TIMEOUT); + + if (unlikely(err < 0)) + netdev_warn(dev->net, + "Failed to write(0x%x) reg index 0x%04x: %d\n", + cmd, index, err); + kfree(buf); + +out: + return err; +} + +static int aqc111_write_cmd_nopm(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 size, void *data) +{ + int ret; + + ret = __aqc111_write_cmd(dev, cmd, USB_DIR_OUT | USB_TYPE_VENDOR | + USB_RECIP_DEVICE, value, index, size, data); + + return ret; +} + +static int aqc111_write_cmd(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 size, void *data) +{ + int ret; + + if (usb_autopm_get_interface(dev->intf) < 0) + return -ENODEV; + + ret = __aqc111_write_cmd(dev, cmd, USB_DIR_OUT | USB_TYPE_VENDOR | + USB_RECIP_DEVICE, value, index, size, data); + + usb_autopm_put_interface(dev->intf); + + return ret; +} + +static int aqc111_write16_cmd_nopm(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 *data) +{ + u16 tmp = *data; + + cpu_to_le16s(&tmp); + + return aqc111_write_cmd_nopm(dev, cmd, value, index, sizeof(tmp), &tmp); +} + +static int aqc111_write16_cmd(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 *data) +{ + u16 tmp = *data; + + cpu_to_le16s(&tmp); + + return aqc111_write_cmd(dev, cmd, value, index, sizeof(tmp), &tmp); +} + +static int aqc111_write32_cmd_nopm(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u32 *data) +{ + u32 tmp = *data; + + cpu_to_le32s(&tmp); + + return aqc111_write_cmd_nopm(dev, cmd, value, index, sizeof(tmp), &tmp); +} + +static int aqc111_write32_cmd(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u32 *data) +{ + u32 tmp = *data; + + cpu_to_le32s(&tmp); + + return aqc111_write_cmd(dev, cmd, value, index, sizeof(tmp), &tmp); +} + +static int aqc111_write_cmd_async(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 size, void *data) +{ + return usbnet_write_cmd_async(dev, cmd, USB_DIR_OUT | USB_TYPE_VENDOR | + USB_RECIP_DEVICE, value, index, data, + size); +} + +static int aqc111_write16_cmd_async(struct usbnet *dev, u8 cmd, u16 value, + u16 index, u16 *data) +{ + u16 tmp = *data; + + cpu_to_le16s(&tmp); + + return aqc111_write_cmd_async(dev, cmd, value, index, + sizeof(tmp), &tmp); +} + +static void aqc111_get_drvinfo(struct net_device *net, + struct ethtool_drvinfo *info) +{ + struct usbnet *dev = netdev_priv(net); + struct aqc111_data *aqc111_data = dev->driver_priv; + + /* Inherit standard device info */ + usbnet_get_drvinfo(net, info); + strlcpy(info->driver, DRIVER_NAME, sizeof(info->driver)); + snprintf(info->fw_version, sizeof(info->fw_version), "%u.%u.%u", + aqc111_data->fw_ver.major, + aqc111_data->fw_ver.minor, + aqc111_data->fw_ver.rev); + info->eedump_len = 0x00; + info->regdump_len = 0x00; +} + +static void aqc111_get_wol(struct net_device *net, + struct ethtool_wolinfo *wolinfo) +{ + struct usbnet *dev = netdev_priv(net); + struct aqc111_data *aqc111_data = dev->driver_priv; + + wolinfo->supported = WAKE_MAGIC; + wolinfo->wolopts = 0; + + if (aqc111_data->wol_flags & AQ_WOL_FLAG_MP) + wolinfo->wolopts |= WAKE_MAGIC; +} + +static int aqc111_set_wol(struct net_device *net, + struct ethtool_wolinfo *wolinfo) +{ + struct usbnet *dev = netdev_priv(net); + struct aqc111_data *aqc111_data = dev->driver_priv; + + if (wolinfo->wolopts & ~WAKE_MAGIC) + return -EINVAL; + + aqc111_data->wol_flags = 0; + if (wolinfo->wolopts & WAKE_MAGIC) + aqc111_data->wol_flags |= AQ_WOL_FLAG_MP; + + return 0; +} + +static void aqc111_speed_to_link_mode(u32 speed, + struct ethtool_link_ksettings *elk) +{ + switch (speed) { + case 
SPEED_5000: + ethtool_link_ksettings_add_link_mode(elk, advertising, + 5000baseT_Full); + break; + case SPEED_2500: + ethtool_link_ksettings_add_link_mode(elk, advertising, + 2500baseT_Full); + break; + case SPEED_1000: + ethtool_link_ksettings_add_link_mode(elk, advertising, + 1000baseT_Full); + break; + case SPEED_100: + ethtool_link_ksettings_add_link_mode(elk, advertising, + 100baseT_Full); + break; + } +} + +static int aqc111_get_link_ksettings(struct net_device *net, + struct ethtool_link_ksettings *elk) +{ + struct usbnet *dev = netdev_priv(net); + struct aqc111_data *aqc111_data = dev->driver_priv; + enum usb_device_speed usb_speed = dev->udev->speed; + u32 speed = SPEED_UNKNOWN; + + ethtool_link_ksettings_zero_link_mode(elk, supported); + ethtool_link_ksettings_add_link_mode(elk, supported, + 100baseT_Full); + ethtool_link_ksettings_add_link_mode(elk, supported, + 1000baseT_Full); + if (usb_speed == USB_SPEED_SUPER) { + ethtool_link_ksettings_add_link_mode(elk, supported, + 2500baseT_Full); + ethtool_link_ksettings_add_link_mode(elk, supported, + 5000baseT_Full); + } + ethtool_link_ksettings_add_link_mode(elk, supported, TP); + ethtool_link_ksettings_add_link_mode(elk, supported, Autoneg); + + elk->base.port = PORT_TP; + elk->base.transceiver = XCVR_INTERNAL; + + elk->base.mdio_support = 0x00; /*Not supported*/ + + if (aqc111_data->autoneg) + linkmode_copy(elk->link_modes.advertising, + elk->link_modes.supported); + else + aqc111_speed_to_link_mode(aqc111_data->advertised_speed, elk); + + elk->base.autoneg = aqc111_data->autoneg; + + switch (aqc111_data->link_speed) { + case AQ_INT_SPEED_5G: + speed = SPEED_5000; + break; + case AQ_INT_SPEED_2_5G: + speed = SPEED_2500; + break; + case AQ_INT_SPEED_1G: + speed = SPEED_1000; + break; + case AQ_INT_SPEED_100M: + speed = SPEED_100; + break; + } + elk->base.duplex = DUPLEX_FULL; + elk->base.speed = speed; + + return 0; +} + +static void aqc111_set_phy_speed(struct usbnet *dev, u8 autoneg, u16 speed) +{ + struct aqc111_data *aqc111_data = dev->driver_priv; + + aqc111_data->phy_cfg &= ~AQ_ADV_MASK; + aqc111_data->phy_cfg |= AQ_PAUSE; + aqc111_data->phy_cfg |= AQ_ASYM_PAUSE; + aqc111_data->phy_cfg |= AQ_DOWNSHIFT; + aqc111_data->phy_cfg &= ~AQ_DSH_RETRIES_MASK; + aqc111_data->phy_cfg |= (3 << AQ_DSH_RETRIES_SHIFT) & + AQ_DSH_RETRIES_MASK; + + if (autoneg == AUTONEG_ENABLE) { + switch (speed) { + case SPEED_5000: + aqc111_data->phy_cfg |= AQ_ADV_5G; + /* fall-through */ + case SPEED_2500: + aqc111_data->phy_cfg |= AQ_ADV_2G5; + /* fall-through */ + case SPEED_1000: + aqc111_data->phy_cfg |= AQ_ADV_1G; + /* fall-through */ + case SPEED_100: + aqc111_data->phy_cfg |= AQ_ADV_100M; + /* fall-through */ + } + } else { + switch (speed) { + case SPEED_5000: + aqc111_data->phy_cfg |= AQ_ADV_5G; + break; + case SPEED_2500: + aqc111_data->phy_cfg |= AQ_ADV_2G5; + break; + case SPEED_1000: + aqc111_data->phy_cfg |= AQ_ADV_1G; + break; + case SPEED_100: + aqc111_data->phy_cfg |= AQ_ADV_100M; + break; + } + } + + aqc111_write32_cmd(dev, AQ_PHY_OPS, 0, 0, &aqc111_data->phy_cfg); +} + +static int aqc111_set_link_ksettings(struct net_device *net, + const struct ethtool_link_ksettings *elk) +{ + struct usbnet *dev = netdev_priv(net); + struct aqc111_data *aqc111_data = dev->driver_priv; + enum usb_device_speed usb_speed = dev->udev->speed; + u8 autoneg = elk->base.autoneg; + u32 speed = elk->base.speed; + + if (autoneg == AUTONEG_ENABLE) { + if (aqc111_data->autoneg != AUTONEG_ENABLE) { + aqc111_data->autoneg = AUTONEG_ENABLE; + 
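/* re-enabling autoneg falls back to the fastest rate the USB link can carry */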
aqc111_data->advertised_speed = + (usb_speed == USB_SPEED_SUPER) ? + SPEED_5000 : SPEED_1000; + aqc111_set_phy_speed(dev, aqc111_data->autoneg, + aqc111_data->advertised_speed); + } + } else { + if (speed != SPEED_100 && + speed != SPEED_1000 && + speed != SPEED_2500 && + speed != SPEED_5000 && + speed != SPEED_UNKNOWN) + return -EINVAL; + + if (elk->base.duplex != DUPLEX_FULL) + return -EINVAL; + + if (usb_speed != USB_SPEED_SUPER && speed > SPEED_1000) + return -EINVAL; + + aqc111_data->autoneg = AUTONEG_DISABLE; + if (speed != SPEED_UNKNOWN) + aqc111_data->advertised_speed = speed; + + aqc111_set_phy_speed(dev, aqc111_data->autoneg, + aqc111_data->advertised_speed); + } + + return 0; +} + +static const struct ethtool_ops aqc111_ethtool_ops = { + .get_drvinfo = aqc111_get_drvinfo, + .get_wol = aqc111_get_wol, + .set_wol = aqc111_set_wol, + .get_msglevel = usbnet_get_msglevel, + .set_msglevel = usbnet_set_msglevel, + .get_link = ethtool_op_get_link, + .get_link_ksettings = aqc111_get_link_ksettings, + .set_link_ksettings = aqc111_set_link_ksettings +}; + +static int aqc111_change_mtu(struct net_device *net, int new_mtu) +{ + struct usbnet *dev = netdev_priv(net); + u16 reg16 = 0; + u8 buf[5]; + + net->mtu = new_mtu; + dev->hard_mtu = net->mtu + net->hard_header_len; + + aqc111_read16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + if (net->mtu > 1500) + reg16 |= SFR_MEDIUM_JUMBO_EN; + else + reg16 &= ~SFR_MEDIUM_JUMBO_EN; + + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + + if (dev->net->mtu > 12500 && dev->net->mtu <= 16334) { + memcpy(buf, &AQC111_BULKIN_SIZE[2], 5); + /* RX bulk configuration */ + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_RX_BULKIN_QCTRL, + 5, 5, buf); + } + + /* Set high low water level */ + if (dev->net->mtu <= 4500) + reg16 = 0x0810; + else if (dev->net->mtu <= 9500) + reg16 = 0x1020; + else if (dev->net->mtu <= 12500) + reg16 = 0x1420; + else if (dev->net->mtu <= 16334) + reg16 = 0x1A20; + + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_PAUSE_WATERLVL_LOW, + 2, ®16); + + return 0; +} + +static int aqc111_set_mac_addr(struct net_device *net, void *p) +{ + struct usbnet *dev = netdev_priv(net); + int ret = 0; + + ret = eth_mac_addr(net, p); + if (ret < 0) + return ret; + + /* Set the MAC address */ + return aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_NODE_ID, ETH_ALEN, + ETH_ALEN, net->dev_addr); +} + +static int aqc111_vlan_rx_kill_vid(struct net_device *net, + __be16 proto, u16 vid) +{ + struct usbnet *dev = netdev_priv(net); + u8 vlan_ctrl = 0; + u16 reg16 = 0; + u8 reg8 = 0; + + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, 1, 1, ®8); + vlan_ctrl = reg8; + + /* Address */ + reg8 = (vid / 16); + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_ADDRESS, 1, 1, ®8); + /* Data */ + reg8 = vlan_ctrl | SFR_VLAN_CONTROL_RD; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, 1, 1, ®8); + aqc111_read16_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_DATA0, 2, ®16); + reg16 &= ~(1 << (vid % 16)); + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_DATA0, 2, ®16); + reg8 = vlan_ctrl | SFR_VLAN_CONTROL_WE; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, 1, 1, ®8); + + return 0; +} + +static int aqc111_vlan_rx_add_vid(struct net_device *net, __be16 proto, u16 vid) +{ + struct usbnet *dev = netdev_priv(net); + u8 vlan_ctrl = 0; + u16 reg16 = 0; + u8 reg8 = 0; + + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, 1, 1, ®8); + vlan_ctrl = reg8; + + /* Address */ + reg8 = (vid / 16); + aqc111_write_cmd(dev, 
AQ_ACCESS_MAC, SFR_VLAN_ID_ADDRESS, 1, 1, ®8); + /* Data */ + reg8 = vlan_ctrl | SFR_VLAN_CONTROL_RD; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, 1, 1, ®8); + aqc111_read16_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_DATA0, 2, ®16); + reg16 |= (1 << (vid % 16)); + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_DATA0, 2, ®16); + reg8 = vlan_ctrl | SFR_VLAN_CONTROL_WE; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, 1, 1, ®8); + + return 0; +} + +static void aqc111_set_rx_mode(struct net_device *net) +{ + struct usbnet *dev = netdev_priv(net); + struct aqc111_data *aqc111_data = dev->driver_priv; + int mc_count = 0; + + mc_count = netdev_mc_count(net); + + aqc111_data->rxctl &= ~(SFR_RX_CTL_PRO | SFR_RX_CTL_AMALL | + SFR_RX_CTL_AM); + + if (net->flags & IFF_PROMISC) { + aqc111_data->rxctl |= SFR_RX_CTL_PRO; + } else if ((net->flags & IFF_ALLMULTI) || mc_count > AQ_MAX_MCAST) { + aqc111_data->rxctl |= SFR_RX_CTL_AMALL; + } else if (!netdev_mc_empty(net)) { + u8 m_filter[AQ_MCAST_FILTER_SIZE] = { 0 }; + struct netdev_hw_addr *ha = NULL; + u32 crc_bits = 0; + + netdev_for_each_mc_addr(ha, net) { + crc_bits = ether_crc(ETH_ALEN, ha->addr) >> 26; + m_filter[crc_bits >> 3] |= BIT(crc_bits & 7); + } + + aqc111_write_cmd_async(dev, AQ_ACCESS_MAC, + SFR_MULTI_FILTER_ARRY, + AQ_MCAST_FILTER_SIZE, + AQ_MCAST_FILTER_SIZE, m_filter); + + aqc111_data->rxctl |= SFR_RX_CTL_AM; + } + + aqc111_write16_cmd_async(dev, AQ_ACCESS_MAC, SFR_RX_CTL, + 2, &aqc111_data->rxctl); +} + +static int aqc111_set_features(struct net_device *net, + netdev_features_t features) +{ + struct usbnet *dev = netdev_priv(net); + struct aqc111_data *aqc111_data = dev->driver_priv; + netdev_features_t changed = net->features ^ features; + u16 reg16 = 0; + u8 reg8 = 0; + + if (changed & NETIF_F_IP_CSUM) { + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_TXCOE_CTL, 1, 1, ®8); + reg8 ^= SFR_TXCOE_TCP | SFR_TXCOE_UDP; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_TXCOE_CTL, + 1, 1, ®8); + } + + if (changed & NETIF_F_IPV6_CSUM) { + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_TXCOE_CTL, 1, 1, ®8); + reg8 ^= SFR_TXCOE_TCPV6 | SFR_TXCOE_UDPV6; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_TXCOE_CTL, + 1, 1, ®8); + } + + if (changed & NETIF_F_RXCSUM) { + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_RXCOE_CTL, 1, 1, ®8); + if (features & NETIF_F_RXCSUM) { + aqc111_data->rx_checksum = 1; + reg8 &= ~(SFR_RXCOE_IP | SFR_RXCOE_TCP | SFR_RXCOE_UDP | + SFR_RXCOE_TCPV6 | SFR_RXCOE_UDPV6); + } else { + aqc111_data->rx_checksum = 0; + reg8 |= SFR_RXCOE_IP | SFR_RXCOE_TCP | SFR_RXCOE_UDP | + SFR_RXCOE_TCPV6 | SFR_RXCOE_UDPV6; + } + + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_RXCOE_CTL, + 1, 1, ®8); + } + if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) { + if (features & NETIF_F_HW_VLAN_CTAG_FILTER) { + u16 i = 0; + + for (i = 0; i < 256; i++) { + /* Address */ + reg8 = i; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, + SFR_VLAN_ID_ADDRESS, + 1, 1, ®8); + /* Data */ + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, + SFR_VLAN_ID_DATA0, + 2, ®16); + reg8 = SFR_VLAN_CONTROL_WE; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, + SFR_VLAN_ID_CONTROL, + 1, 1, ®8); + } + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, + 1, 1, ®8); + reg8 |= SFR_VLAN_CONTROL_VFE; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, + SFR_VLAN_ID_CONTROL, 1, 1, ®8); + } else { + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, + 1, 1, ®8); + reg8 &= ~SFR_VLAN_CONTROL_VFE; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, + SFR_VLAN_ID_CONTROL, 1, 1, ®8); + } + } + + return 0; +} + +static const struct net_device_ops 
aqc111_netdev_ops = { + .ndo_open = usbnet_open, + .ndo_stop = usbnet_stop, + .ndo_start_xmit = usbnet_start_xmit, + .ndo_tx_timeout = usbnet_tx_timeout, + .ndo_get_stats64 = usbnet_get_stats64, + .ndo_change_mtu = aqc111_change_mtu, + .ndo_set_mac_address = aqc111_set_mac_addr, + .ndo_validate_addr = eth_validate_addr, + .ndo_vlan_rx_add_vid = aqc111_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = aqc111_vlan_rx_kill_vid, + .ndo_set_rx_mode = aqc111_set_rx_mode, + .ndo_set_features = aqc111_set_features, +}; + +static int aqc111_read_perm_mac(struct usbnet *dev) +{ + u8 buf[ETH_ALEN]; + int ret; + + ret = aqc111_read_cmd(dev, AQ_FLASH_PARAMETERS, 0, 0, ETH_ALEN, buf); + if (ret < 0) + goto out; + + ether_addr_copy(dev->net->perm_addr, buf); + + return 0; +out: + return ret; +} + +static void aqc111_read_fw_version(struct usbnet *dev, + struct aqc111_data *aqc111_data) +{ + aqc111_read_cmd(dev, AQ_ACCESS_MAC, AQ_FW_VER_MAJOR, + 1, 1, &aqc111_data->fw_ver.major); + aqc111_read_cmd(dev, AQ_ACCESS_MAC, AQ_FW_VER_MINOR, + 1, 1, &aqc111_data->fw_ver.minor); + aqc111_read_cmd(dev, AQ_ACCESS_MAC, AQ_FW_VER_REV, + 1, 1, &aqc111_data->fw_ver.rev); + + if (aqc111_data->fw_ver.major & 0x80) + aqc111_data->fw_ver.major &= ~0x80; +} + +static int aqc111_bind(struct usbnet *dev, struct usb_interface *intf) +{ + struct usb_device *udev = interface_to_usbdev(intf); + enum usb_device_speed usb_speed = udev->speed; + struct aqc111_data *aqc111_data; + int ret; + + /* Check if vendor configuration */ + if (udev->actconfig->desc.bConfigurationValue != 1) { + usb_driver_set_configuration(udev, 1); + return -ENODEV; + } + + usb_reset_configuration(dev->udev); + + ret = usbnet_get_endpoints(dev, intf); + if (ret < 0) { + netdev_dbg(dev->net, "usbnet_get_endpoints failed"); + return ret; + } + + aqc111_data = kzalloc(sizeof(*aqc111_data), GFP_KERNEL); + if (!aqc111_data) + return -ENOMEM; + + /* store aqc111_data pointer in device data field */ + dev->driver_priv = aqc111_data; + + /* Init the MAC address */ + ret = aqc111_read_perm_mac(dev); + if (ret) + goto out; + + ether_addr_copy(dev->net->dev_addr, dev->net->perm_addr); + + /* Set Rx urb size */ + dev->rx_urb_size = URB_SIZE; + + /* Set TX needed headroom & tailroom */ + dev->net->needed_headroom += sizeof(u64); + dev->net->needed_tailroom += sizeof(u64); + + dev->net->max_mtu = 16334; + + dev->net->netdev_ops = &aqc111_netdev_ops; + dev->net->ethtool_ops = &aqc111_ethtool_ops; + + if (usb_device_no_sg_constraint(dev->udev)) + dev->can_dma_sg = 1; + + dev->net->hw_features |= AQ_SUPPORT_HW_FEATURE; + dev->net->features |= AQ_SUPPORT_FEATURE; + dev->net->vlan_features |= AQ_SUPPORT_VLAN_FEATURE; + + netif_set_gso_max_size(dev->net, 65535); + + aqc111_read_fw_version(dev, aqc111_data); + aqc111_data->autoneg = AUTONEG_ENABLE; + aqc111_data->advertised_speed = (usb_speed == USB_SPEED_SUPER) ? 
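+ /* probe-time default advertisement; ethtool can narrow it later */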
+ SPEED_5000 : SPEED_1000; + + return 0; + +out: + kfree(aqc111_data); + return ret; +} + +static void aqc111_unbind(struct usbnet *dev, struct usb_interface *intf) +{ + struct aqc111_data *aqc111_data = dev->driver_priv; + u16 reg16; + + /* Force bz */ + reg16 = SFR_PHYPWR_RSTCTL_BZ; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_PHYPWR_RSTCTL, + 2, ®16); + reg16 = 0; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_PHYPWR_RSTCTL, + 2, ®16); + + /* Power down ethernet PHY */ + aqc111_data->phy_cfg &= ~AQ_ADV_MASK; + aqc111_data->phy_cfg |= AQ_LOW_POWER; + aqc111_data->phy_cfg &= ~AQ_PHY_POWER_EN; + aqc111_write32_cmd_nopm(dev, AQ_PHY_OPS, 0, 0, + &aqc111_data->phy_cfg); + + kfree(aqc111_data); +} + +static void aqc111_status(struct usbnet *dev, struct urb *urb) +{ + struct aqc111_data *aqc111_data = dev->driver_priv; + u64 *event_data = NULL; + int link = 0; + + if (urb->actual_length < sizeof(*event_data)) + return; + + event_data = urb->transfer_buffer; + le64_to_cpus(event_data); + + if (*event_data & AQ_LS_MASK) + link = 1; + else + link = 0; + + aqc111_data->link_speed = (*event_data & AQ_SPEED_MASK) >> + AQ_SPEED_SHIFT; + aqc111_data->link = link; + + if (netif_carrier_ok(dev->net) != link) + usbnet_defer_kevent(dev, EVENT_LINK_RESET); +} + +static void aqc111_configure_rx(struct usbnet *dev, + struct aqc111_data *aqc111_data) +{ + enum usb_device_speed usb_speed = dev->udev->speed; + u16 link_speed = 0, usb_host = 0; + u8 buf[5] = { 0 }; + u8 queue_num = 0; + u16 reg16 = 0; + u8 reg8 = 0; + + buf[0] = 0x00; + buf[1] = 0xF8; + buf[2] = 0x07; + switch (aqc111_data->link_speed) { + case AQ_INT_SPEED_5G: + link_speed = 5000; + reg8 = 0x05; + reg16 = 0x001F; + break; + case AQ_INT_SPEED_2_5G: + link_speed = 2500; + reg16 = 0x003F; + break; + case AQ_INT_SPEED_1G: + link_speed = 1000; + reg16 = 0x009F; + break; + case AQ_INT_SPEED_100M: + link_speed = 100; + queue_num = 1; + reg16 = 0x063F; + buf[1] = 0xFB; + buf[2] = 0x4; + break; + } + + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_INTER_PACKET_GAP_0, + 1, 1, ®8); + + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_TX_PAUSE_RESEND_T, 3, 3, buf); + + switch (usb_speed) { + case USB_SPEED_SUPER: + usb_host = 3; + break; + case USB_SPEED_HIGH: + usb_host = 2; + break; + case USB_SPEED_FULL: + case USB_SPEED_LOW: + usb_host = 1; + queue_num = 0; + break; + default: + usb_host = 0; + break; + } + + if (dev->net->mtu > 12500 && dev->net->mtu <= 16334) + queue_num = 2; /* For Jumbo packet 16KB */ + + memcpy(buf, &AQC111_BULKIN_SIZE[queue_num], 5); + /* RX bulk configuration */ + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_RX_BULKIN_QCTRL, 5, 5, buf); + + /* Set high low water level */ + if (dev->net->mtu <= 4500) + reg16 = 0x0810; + else if (dev->net->mtu <= 9500) + reg16 = 0x1020; + else if (dev->net->mtu <= 12500) + reg16 = 0x1420; + else if (dev->net->mtu <= 16334) + reg16 = 0x1A20; + + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_PAUSE_WATERLVL_LOW, + 2, ®16); + netdev_info(dev->net, "Link Speed %d, USB %d", link_speed, usb_host); +} + +static void aqc111_configure_csum_offload(struct usbnet *dev) +{ + u8 reg8 = 0; + + if (dev->net->features & NETIF_F_RXCSUM) { + reg8 |= SFR_RXCOE_IP | SFR_RXCOE_TCP | SFR_RXCOE_UDP | + SFR_RXCOE_TCPV6 | SFR_RXCOE_UDPV6; + } + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_RXCOE_CTL, 1, 1, ®8); + + reg8 = 0; + if (dev->net->features & NETIF_F_IP_CSUM) + reg8 |= SFR_TXCOE_IP | SFR_TXCOE_TCP | SFR_TXCOE_UDP; + + if (dev->net->features & NETIF_F_IPV6_CSUM) + reg8 |= SFR_TXCOE_TCPV6 | SFR_TXCOE_UDPV6; + + aqc111_write_cmd(dev, 
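/* latch the TX checksum-offload enables computed above into SFR_TXCOE_CTL */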
AQ_ACCESS_MAC, SFR_TXCOE_CTL, 1, 1, ®8); +} + +static int aqc111_link_reset(struct usbnet *dev) +{ + struct aqc111_data *aqc111_data = dev->driver_priv; + u16 reg16 = 0; + u8 reg8 = 0; + + if (aqc111_data->link == 1) { /* Link up */ + aqc111_configure_rx(dev, aqc111_data); + + /* Vlan Tag Filter */ + reg8 = SFR_VLAN_CONTROL_VSO; + if (dev->net->features & NETIF_F_HW_VLAN_CTAG_FILTER) + reg8 |= SFR_VLAN_CONTROL_VFE; + + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_VLAN_ID_CONTROL, + 1, 1, ®8); + + reg8 = 0x0; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_BMRX_DMA_CONTROL, + 1, 1, ®8); + + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_BMTX_DMA_CONTROL, + 1, 1, ®8); + + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_ARC_CTRL, 1, 1, ®8); + + reg16 = SFR_RX_CTL_IPE | SFR_RX_CTL_AB; + aqc111_data->rxctl = reg16; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_RX_CTL, 2, ®16); + + reg8 = SFR_RX_PATH_READY; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_ETH_MAC_PATH, + 1, 1, ®8); + + reg8 = SFR_BULK_OUT_EFF_EN; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_BULK_OUT_CTRL, + 1, 1, ®8); + + reg16 = 0; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + + reg16 = SFR_MEDIUM_XGMIIMODE | SFR_MEDIUM_FULL_DUPLEX; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + + aqc111_configure_csum_offload(dev); + + aqc111_set_rx_mode(dev->net); + + aqc111_read16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + + if (dev->net->mtu > 1500) + reg16 |= SFR_MEDIUM_JUMBO_EN; + + reg16 |= SFR_MEDIUM_RECEIVE_EN | SFR_MEDIUM_RXFLOW_CTRLEN | + SFR_MEDIUM_TXFLOW_CTRLEN; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + + aqc111_data->rxctl |= SFR_RX_CTL_START; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_RX_CTL, + 2, &aqc111_data->rxctl); + + netif_carrier_on(dev->net); + } else { + aqc111_read16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + reg16 &= ~SFR_MEDIUM_RECEIVE_EN; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + + aqc111_data->rxctl &= ~SFR_RX_CTL_START; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_RX_CTL, + 2, &aqc111_data->rxctl); + + reg8 = SFR_BULK_OUT_FLUSH_EN | SFR_BULK_OUT_EFF_EN; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_BULK_OUT_CTRL, + 1, 1, ®8); + reg8 = SFR_BULK_OUT_EFF_EN; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_BULK_OUT_CTRL, + 1, 1, ®8); + + netif_carrier_off(dev->net); + } + return 0; +} + +static int aqc111_reset(struct usbnet *dev) +{ + struct aqc111_data *aqc111_data = dev->driver_priv; + u8 reg8 = 0; + + dev->rx_urb_size = URB_SIZE; + + if (usb_device_no_sg_constraint(dev->udev)) + dev->can_dma_sg = 1; + + dev->net->hw_features |= AQ_SUPPORT_HW_FEATURE; + dev->net->features |= AQ_SUPPORT_FEATURE; + dev->net->vlan_features |= AQ_SUPPORT_VLAN_FEATURE; + + /* Power up ethernet PHY */ + aqc111_data->phy_cfg = AQ_PHY_POWER_EN; + aqc111_write32_cmd(dev, AQ_PHY_OPS, 0, 0, + &aqc111_data->phy_cfg); + + /* Set the MAC address */ + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_NODE_ID, ETH_ALEN, + ETH_ALEN, dev->net->dev_addr); + + reg8 = 0xFF; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_BM_INT_MASK, 1, 1, ®8); + + reg8 = 0x0; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_SWP_CTRL, 1, 1, ®8); + + aqc111_read_cmd(dev, AQ_ACCESS_MAC, SFR_MONITOR_MODE, 1, 1, ®8); + reg8 &= ~(SFR_MONITOR_MODE_EPHYRW | SFR_MONITOR_MODE_RWLC | + SFR_MONITOR_MODE_RWMP | SFR_MONITOR_MODE_RWWF | + SFR_MONITOR_MODE_RW_FLAG); + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_MONITOR_MODE, 1, 1, ®8); + + netif_carrier_off(dev->net); + + /* 
Phy advertise */ + aqc111_set_phy_speed(dev, aqc111_data->autoneg, + aqc111_data->advertised_speed); + + return 0; +} + +static int aqc111_stop(struct usbnet *dev) +{ + struct aqc111_data *aqc111_data = dev->driver_priv; + u16 reg16 = 0; + + aqc111_read16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + reg16 &= ~SFR_MEDIUM_RECEIVE_EN; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + reg16 = 0; + aqc111_write16_cmd(dev, AQ_ACCESS_MAC, SFR_RX_CTL, 2, ®16); + + /* Put PHY to low power*/ + aqc111_data->phy_cfg |= AQ_LOW_POWER; + aqc111_write32_cmd(dev, AQ_PHY_OPS, 0, 0, + &aqc111_data->phy_cfg); + + netif_carrier_off(dev->net); + + return 0; +} + +static void aqc111_rx_checksum(struct sk_buff *skb, u64 pkt_desc) +{ + u32 pkt_type = 0; + + skb->ip_summed = CHECKSUM_NONE; + /* checksum error bit is set */ + if (pkt_desc & AQ_RX_PD_L4_ERR || pkt_desc & AQ_RX_PD_L3_ERR) + return; + + pkt_type = pkt_desc & AQ_RX_PD_L4_TYPE_MASK; + /* It must be a TCP or UDP packet with a valid checksum */ + if (pkt_type == AQ_RX_PD_L4_TCP || pkt_type == AQ_RX_PD_L4_UDP) + skb->ip_summed = CHECKSUM_UNNECESSARY; +} + +static int aqc111_rx_fixup(struct usbnet *dev, struct sk_buff *skb) +{ + struct aqc111_data *aqc111_data = dev->driver_priv; + struct sk_buff *new_skb = NULL; + u32 pkt_total_offset = 0; + u64 *pkt_desc_ptr = NULL; + u32 start_of_descs = 0; + u32 desc_offset = 0; /*RX Header Offset*/ + u16 pkt_count = 0; + u64 desc_hdr = 0; + u16 vlan_tag = 0; + u32 skb_len = 0; + + if (!skb) + goto err; + + if (skb->len == 0) + goto err; + + skb_len = skb->len; + /* RX Descriptor Header */ + skb_trim(skb, skb->len - sizeof(desc_hdr)); + desc_hdr = le64_to_cpup((u64 *)skb_tail_pointer(skb)); + + /* Check these packets */ + desc_offset = (desc_hdr & AQ_RX_DH_DESC_OFFSET_MASK) >> + AQ_RX_DH_DESC_OFFSET_SHIFT; + pkt_count = desc_hdr & AQ_RX_DH_PKT_CNT_MASK; + start_of_descs = skb_len - ((pkt_count + 1) * sizeof(desc_hdr)); + + /* self check descs position */ + if (start_of_descs != desc_offset) + goto err; + + /* self check desc_offset from header*/ + if (desc_offset >= skb_len) + goto err; + + if (pkt_count == 0) + goto err; + + /* Get the first RX packet descriptor */ + pkt_desc_ptr = (u64 *)(skb->data + desc_offset); + + while (pkt_count--) { + u64 pkt_desc = le64_to_cpup(pkt_desc_ptr); + u32 pkt_len_with_padd = 0; + u32 pkt_len = 0; + + pkt_len = (u32)((pkt_desc & AQ_RX_PD_LEN_MASK) >> + AQ_RX_PD_LEN_SHIFT); + pkt_len_with_padd = ((pkt_len + 7) & 0x7FFF8); + + pkt_total_offset += pkt_len_with_padd; + if (pkt_total_offset > desc_offset || + (pkt_count == 0 && pkt_total_offset != desc_offset)) { + goto err; + } + + if (pkt_desc & AQ_RX_PD_DROP || + !(pkt_desc & AQ_RX_PD_RX_OK) || + pkt_len > (dev->hard_mtu + AQ_RX_HW_PAD)) { + skb_pull(skb, pkt_len_with_padd); + /* Next RX Packet Descriptor */ + pkt_desc_ptr++; + continue; + } + + /* Clone SKB */ + new_skb = skb_clone(skb, GFP_ATOMIC); + + if (!new_skb) + goto err; + + new_skb->len = pkt_len; + skb_pull(new_skb, AQ_RX_HW_PAD); + skb_set_tail_pointer(new_skb, new_skb->len); + + new_skb->truesize = SKB_TRUESIZE(new_skb->len); + if (aqc111_data->rx_checksum) + aqc111_rx_checksum(new_skb, pkt_desc); + + if (pkt_desc & AQ_RX_PD_VLAN) { + vlan_tag = pkt_desc >> AQ_RX_PD_VLAN_SHIFT; + __vlan_hwaccel_put_tag(new_skb, htons(ETH_P_8021Q), + vlan_tag & VLAN_VID_MASK); + } + + usbnet_skb_return(dev, new_skb); + if (pkt_count == 0) + break; + + skb_pull(skb, pkt_len_with_padd); + + /* Next RX Packet Header */ + pkt_desc_ptr++; + + new_skb = 
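/* the clone was handed up via usbnet_skb_return(); reset the pointer before parsing the next descriptor */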
NULL; + } + + return 1; + +err: + return 0; +} + +static struct sk_buff *aqc111_tx_fixup(struct usbnet *dev, struct sk_buff *skb, + gfp_t flags) +{ + int frame_size = dev->maxpacket; + struct sk_buff *new_skb = NULL; + u64 *tx_desc_ptr = NULL; + int padding_size = 0; + int headroom = 0; + int tailroom = 0; + u64 tx_desc = 0; + u16 tci = 0; + + /*Length of actual data*/ + tx_desc |= skb->len & AQ_TX_DESC_LEN_MASK; + + /* TSO MSS */ + tx_desc |= ((u64)(skb_shinfo(skb)->gso_size & AQ_TX_DESC_MSS_MASK)) << + AQ_TX_DESC_MSS_SHIFT; + + headroom = (skb->len + sizeof(tx_desc)) % 8; + if (headroom != 0) + padding_size = 8 - headroom; + + if (((skb->len + sizeof(tx_desc) + padding_size) % frame_size) == 0) { + padding_size += 8; + tx_desc |= AQ_TX_DESC_DROP_PADD; + } + + /* Vlan Tag */ + if (vlan_get_tag(skb, &tci) >= 0) { + tx_desc |= AQ_TX_DESC_VLAN; + tx_desc |= ((u64)tci & AQ_TX_DESC_VLAN_MASK) << + AQ_TX_DESC_VLAN_SHIFT; + } + + if (!dev->can_dma_sg && (dev->net->features & NETIF_F_SG) && + skb_linearize(skb)) + return NULL; + + headroom = skb_headroom(skb); + tailroom = skb_tailroom(skb); + + if (!(headroom >= sizeof(tx_desc) && tailroom >= padding_size)) { + new_skb = skb_copy_expand(skb, sizeof(tx_desc), + padding_size, flags); + dev_kfree_skb_any(skb); + skb = new_skb; + if (!skb) + return NULL; + } + if (padding_size != 0) + skb_put_zero(skb, padding_size); + /* Copy TX header */ + tx_desc_ptr = skb_push(skb, sizeof(tx_desc)); + *tx_desc_ptr = cpu_to_le64(tx_desc); + + usbnet_set_skb_tx_stats(skb, 1, 0); + + return skb; +} + +static const struct driver_info aqc111_info = { + .description = "Aquantia AQtion USB to 5GbE Controller", + .bind = aqc111_bind, + .unbind = aqc111_unbind, + .status = aqc111_status, + .link_reset = aqc111_link_reset, + .reset = aqc111_reset, + .stop = aqc111_stop, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | + FLAG_AVOID_UNLINK_URBS | FLAG_MULTI_PACKET, + .rx_fixup = aqc111_rx_fixup, + .tx_fixup = aqc111_tx_fixup, +}; + +#define ASIX111_DESC \ +"ASIX USB 3.1 Gen1 to 5G Multi-Gigabit Ethernet Adapter" + +static const struct driver_info asix111_info = { + .description = ASIX111_DESC, + .bind = aqc111_bind, + .unbind = aqc111_unbind, + .status = aqc111_status, + .link_reset = aqc111_link_reset, + .reset = aqc111_reset, + .stop = aqc111_stop, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | + FLAG_AVOID_UNLINK_URBS | FLAG_MULTI_PACKET, + .rx_fixup = aqc111_rx_fixup, + .tx_fixup = aqc111_tx_fixup, +}; + +#undef ASIX111_DESC + +#define ASIX112_DESC \ +"ASIX USB 3.1 Gen1 to 2.5G Multi-Gigabit Ethernet Adapter" + +static const struct driver_info asix112_info = { + .description = ASIX112_DESC, + .bind = aqc111_bind, + .unbind = aqc111_unbind, + .status = aqc111_status, + .link_reset = aqc111_link_reset, + .reset = aqc111_reset, + .stop = aqc111_stop, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | + FLAG_AVOID_UNLINK_URBS | FLAG_MULTI_PACKET, + .rx_fixup = aqc111_rx_fixup, + .tx_fixup = aqc111_tx_fixup, +}; + +#undef ASIX112_DESC + +static int aqc111_suspend(struct usb_interface *intf, pm_message_t message) +{ + struct usbnet *dev = usb_get_intfdata(intf); + struct aqc111_data *aqc111_data = dev->driver_priv; + u16 temp_rx_ctrl = 0x00; + u16 reg16; + u8 reg8; + + usbnet_suspend(intf, message); + + aqc111_read16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_CTL, 2, ®16); + temp_rx_ctrl = reg16; + /* Stop RX operations*/ + reg16 &= ~SFR_RX_CTL_START; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_CTL, 2, ®16); + /* Force bz */ + aqc111_read16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_PHYPWR_RSTCTL, + 2, 
®16); + reg16 |= SFR_PHYPWR_RSTCTL_BZ; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_PHYPWR_RSTCTL, + 2, ®16); + + reg8 = SFR_BULK_OUT_EFF_EN; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_BULK_OUT_CTRL, + 1, 1, ®8); + + temp_rx_ctrl &= ~(SFR_RX_CTL_START | SFR_RX_CTL_RF_WAK | + SFR_RX_CTL_AP | SFR_RX_CTL_AM); + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_CTL, + 2, &temp_rx_ctrl); + + reg8 = 0x00; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_ETH_MAC_PATH, + 1, 1, ®8); + + if (aqc111_data->wol_flags) { + struct aqc111_wol_cfg wol_cfg; + + memset(&wol_cfg, 0, sizeof(struct aqc111_wol_cfg)); + + aqc111_data->phy_cfg |= AQ_WOL; + ether_addr_copy(wol_cfg.hw_addr, dev->net->dev_addr); + wol_cfg.flags = aqc111_data->wol_flags; + + temp_rx_ctrl |= (SFR_RX_CTL_AB | SFR_RX_CTL_START); + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_CTL, + 2, &temp_rx_ctrl); + reg8 = 0x00; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_BM_INT_MASK, + 1, 1, ®8); + reg8 = SFR_BMRX_DMA_EN; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_BMRX_DMA_CONTROL, + 1, 1, ®8); + reg8 = SFR_RX_PATH_READY; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_ETH_MAC_PATH, + 1, 1, ®8); + reg8 = 0x07; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_BULKIN_QCTRL, + 1, 1, ®8); + reg8 = 0x00; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, + SFR_RX_BULKIN_QTIMR_LOW, 1, 1, ®8); + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, + SFR_RX_BULKIN_QTIMR_HIGH, 1, 1, ®8); + reg8 = 0xFF; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_BULKIN_QSIZE, + 1, 1, ®8); + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_BULKIN_QIFG, + 1, 1, ®8); + + aqc111_read16_cmd_nopm(dev, AQ_ACCESS_MAC, + SFR_MEDIUM_STATUS_MODE, 2, ®16); + reg16 |= SFR_MEDIUM_RECEIVE_EN; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, + SFR_MEDIUM_STATUS_MODE, 2, ®16); + + aqc111_write_cmd(dev, AQ_WOL_CFG, 0, 0, + WOL_CFG_SIZE, &wol_cfg); + aqc111_write32_cmd(dev, AQ_PHY_OPS, 0, 0, + &aqc111_data->phy_cfg); + } else { + aqc111_data->phy_cfg |= AQ_LOW_POWER; + aqc111_write32_cmd(dev, AQ_PHY_OPS, 0, 0, + &aqc111_data->phy_cfg); + + /* Disable RX path */ + aqc111_read16_cmd_nopm(dev, AQ_ACCESS_MAC, + SFR_MEDIUM_STATUS_MODE, 2, ®16); + reg16 &= ~SFR_MEDIUM_RECEIVE_EN; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, + SFR_MEDIUM_STATUS_MODE, 2, ®16); + } + + return 0; +} + +static int aqc111_resume(struct usb_interface *intf) +{ + struct usbnet *dev = usb_get_intfdata(intf); + struct aqc111_data *aqc111_data = dev->driver_priv; + u16 reg16; + u8 reg8; + + netif_carrier_off(dev->net); + + /* Power up ethernet PHY */ + aqc111_data->phy_cfg |= AQ_PHY_POWER_EN; + aqc111_data->phy_cfg &= ~AQ_LOW_POWER; + aqc111_data->phy_cfg &= ~AQ_WOL; + + reg8 = 0xFF; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_BM_INT_MASK, + 1, 1, ®8); + /* Configure RX control register => start operation */ + reg16 = aqc111_data->rxctl; + reg16 &= ~SFR_RX_CTL_START; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_CTL, 2, ®16); + + reg16 |= SFR_RX_CTL_START; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_RX_CTL, 2, ®16); + + aqc111_set_phy_speed(dev, aqc111_data->autoneg, + aqc111_data->advertised_speed); + + aqc111_read16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + reg16 |= SFR_MEDIUM_RECEIVE_EN; + aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_MEDIUM_STATUS_MODE, + 2, ®16); + reg8 = SFR_RX_PATH_READY; + aqc111_write_cmd_nopm(dev, AQ_ACCESS_MAC, SFR_ETH_MAC_PATH, + 1, 1, ®8); + reg8 = 0x0; + aqc111_write_cmd(dev, AQ_ACCESS_MAC, SFR_BMRX_DMA_CONTROL, 1, 1, ®8); + + return 
usbnet_resume(intf); +} + +#define AQC111_USB_ETH_DEV(vid, pid, table) \ + USB_DEVICE_INTERFACE_CLASS((vid), (pid), USB_CLASS_VENDOR_SPEC), \ + .driver_info = (unsigned long)&(table) \ +}, \ +{ \ + USB_DEVICE_AND_INTERFACE_INFO((vid), (pid), \ + USB_CLASS_COMM, \ + USB_CDC_SUBCLASS_ETHERNET, \ + USB_CDC_PROTO_NONE), \ + .driver_info = (unsigned long)&(table), + +static const struct usb_device_id products[] = { + {AQC111_USB_ETH_DEV(0x2eca, 0xc101, aqc111_info)}, + {AQC111_USB_ETH_DEV(0x0b95, 0x2790, asix111_info)}, + {AQC111_USB_ETH_DEV(0x0b95, 0x2791, asix112_info)}, + { },/* END */ +}; +MODULE_DEVICE_TABLE(usb, products); + +static struct usb_driver aq_driver = { + .name = "aqc111", + .id_table = products, + .probe = usbnet_probe, + .suspend = aqc111_suspend, + .resume = aqc111_resume, + .disconnect = usbnet_disconnect, +}; + +module_usb_driver(aq_driver); + +MODULE_DESCRIPTION("Aquantia AQtion USB to 5/2.5GbE Controllers"); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/usb/aqc111.h b/drivers/net/usb/aqc111.h new file mode 100644 index 000000000000..4d68b3a6067c --- /dev/null +++ b/drivers/net/usb/aqc111.h @@ -0,0 +1,232 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later + * Aquantia Corp. Aquantia AQtion USB to 5GbE Controller + * Copyright (C) 2003-2005 David Hollis + * Copyright (C) 2005 Phil Chang + * Copyright (C) 2002-2003 TiVo Inc. + * Copyright (C) 2017-2018 ASIX + * Copyright (C) 2018 Aquantia Corp. + */ + +#ifndef __LINUX_USBNET_AQC111_H +#define __LINUX_USBNET_AQC111_H + +#define URB_SIZE (1024 * 62) + +#define AQ_MCAST_FILTER_SIZE 8 +#define AQ_MAX_MCAST 64 + +#define AQ_ACCESS_MAC 0x01 +#define AQ_FLASH_PARAMETERS 0x20 +#define AQ_PHY_POWER 0x31 +#define AQ_WOL_CFG 0x60 +#define AQ_PHY_OPS 0x61 + +#define AQ_USB_PHY_SET_TIMEOUT 10000 +#define AQ_USB_SET_TIMEOUT 4000 + +/* Feature. ********************************************/ +#define AQ_SUPPORT_FEATURE (NETIF_F_SG | NETIF_F_IP_CSUM |\ + NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |\ + NETIF_F_TSO | NETIF_F_HW_VLAN_CTAG_TX |\ + NETIF_F_HW_VLAN_CTAG_RX) + +#define AQ_SUPPORT_HW_FEATURE (NETIF_F_SG | NETIF_F_IP_CSUM |\ + NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |\ + NETIF_F_TSO | NETIF_F_HW_VLAN_CTAG_FILTER) + +#define AQ_SUPPORT_VLAN_FEATURE (NETIF_F_SG | NETIF_F_IP_CSUM |\ + NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |\ + NETIF_F_TSO) + +/* SFR Reg. 
********************************************/ + +#define SFR_GENERAL_STATUS 0x03 +#define SFR_CHIP_STATUS 0x05 +#define SFR_RX_CTL 0x0B + #define SFR_RX_CTL_TXPADCRC 0x0400 + #define SFR_RX_CTL_IPE 0x0200 + #define SFR_RX_CTL_DROPCRCERR 0x0100 + #define SFR_RX_CTL_START 0x0080 + #define SFR_RX_CTL_RF_WAK 0x0040 + #define SFR_RX_CTL_AP 0x0020 + #define SFR_RX_CTL_AM 0x0010 + #define SFR_RX_CTL_AB 0x0008 + #define SFR_RX_CTL_AMALL 0x0002 + #define SFR_RX_CTL_PRO 0x0001 + #define SFR_RX_CTL_STOP 0x0000 +#define SFR_INTER_PACKET_GAP_0 0x0D +#define SFR_NODE_ID 0x10 +#define SFR_MULTI_FILTER_ARRY 0x16 +#define SFR_MEDIUM_STATUS_MODE 0x22 + #define SFR_MEDIUM_XGMIIMODE 0x0001 + #define SFR_MEDIUM_FULL_DUPLEX 0x0002 + #define SFR_MEDIUM_RXFLOW_CTRLEN 0x0010 + #define SFR_MEDIUM_TXFLOW_CTRLEN 0x0020 + #define SFR_MEDIUM_JUMBO_EN 0x0040 + #define SFR_MEDIUM_RECEIVE_EN 0x0100 +#define SFR_MONITOR_MODE 0x24 + #define SFR_MONITOR_MODE_EPHYRW 0x01 + #define SFR_MONITOR_MODE_RWLC 0x02 + #define SFR_MONITOR_MODE_RWMP 0x04 + #define SFR_MONITOR_MODE_RWWF 0x08 + #define SFR_MONITOR_MODE_RW_FLAG 0x10 + #define SFR_MONITOR_MODE_PMEPOL 0x20 + #define SFR_MONITOR_MODE_PMETYPE 0x40 +#define SFR_PHYPWR_RSTCTL 0x26 + #define SFR_PHYPWR_RSTCTL_BZ 0x0010 + #define SFR_PHYPWR_RSTCTL_IPRL 0x0020 +#define SFR_VLAN_ID_ADDRESS 0x2A +#define SFR_VLAN_ID_CONTROL 0x2B + #define SFR_VLAN_CONTROL_WE 0x0001 + #define SFR_VLAN_CONTROL_RD 0x0002 + #define SFR_VLAN_CONTROL_VSO 0x0010 + #define SFR_VLAN_CONTROL_VFE 0x0020 +#define SFR_VLAN_ID_DATA0 0x2C +#define SFR_VLAN_ID_DATA1 0x2D +#define SFR_RX_BULKIN_QCTRL 0x2E + #define SFR_RX_BULKIN_QCTRL_TIME 0x01 + #define SFR_RX_BULKIN_QCTRL_IFG 0x02 + #define SFR_RX_BULKIN_QCTRL_SIZE 0x04 +#define SFR_RX_BULKIN_QTIMR_LOW 0x2F +#define SFR_RX_BULKIN_QTIMR_HIGH 0x30 +#define SFR_RX_BULKIN_QSIZE 0x31 +#define SFR_RX_BULKIN_QIFG 0x32 +#define SFR_RXCOE_CTL 0x34 + #define SFR_RXCOE_IP 0x01 + #define SFR_RXCOE_TCP 0x02 + #define SFR_RXCOE_UDP 0x04 + #define SFR_RXCOE_ICMP 0x08 + #define SFR_RXCOE_IGMP 0x10 + #define SFR_RXCOE_TCPV6 0x20 + #define SFR_RXCOE_UDPV6 0x40 + #define SFR_RXCOE_ICMV6 0x80 +#define SFR_TXCOE_CTL 0x35 + #define SFR_TXCOE_IP 0x01 + #define SFR_TXCOE_TCP 0x02 + #define SFR_TXCOE_UDP 0x04 + #define SFR_TXCOE_ICMP 0x08 + #define SFR_TXCOE_IGMP 0x10 + #define SFR_TXCOE_TCPV6 0x20 + #define SFR_TXCOE_UDPV6 0x40 + #define SFR_TXCOE_ICMV6 0x80 +#define SFR_BM_INT_MASK 0x41 +#define SFR_BMRX_DMA_CONTROL 0x43 + #define SFR_BMRX_DMA_EN 0x80 +#define SFR_BMTX_DMA_CONTROL 0x46 +#define SFR_PAUSE_WATERLVL_LOW 0x54 +#define SFR_PAUSE_WATERLVL_HIGH 0x55 +#define SFR_ARC_CTRL 0x9E +#define SFR_SWP_CTRL 0xB1 +#define SFR_TX_PAUSE_RESEND_T 0xB2 +#define SFR_ETH_MAC_PATH 0xB7 + #define SFR_RX_PATH_READY 0x01 +#define SFR_BULK_OUT_CTRL 0xB9 + #define SFR_BULK_OUT_FLUSH_EN 0x01 + #define SFR_BULK_OUT_EFF_EN 0x02 + +#define AQ_FW_VER_MAJOR 0xDA +#define AQ_FW_VER_MINOR 0xDB +#define AQ_FW_VER_REV 0xDC + +/*PHY_OPS**********************************************************************/ + +#define AQ_ADV_100M BIT(0) +#define AQ_ADV_1G BIT(1) +#define AQ_ADV_2G5 BIT(2) +#define AQ_ADV_5G BIT(3) +#define AQ_ADV_MASK 0x0F + +#define AQ_PAUSE BIT(16) +#define AQ_ASYM_PAUSE BIT(17) +#define AQ_LOW_POWER BIT(18) +#define AQ_PHY_POWER_EN BIT(19) +#define AQ_WOL BIT(20) +#define AQ_DOWNSHIFT BIT(21) + +#define AQ_DSH_RETRIES_SHIFT 0x18 +#define AQ_DSH_RETRIES_MASK 0xF000000 + +#define AQ_WOL_FLAG_MP 0x2 + +/******************************************************************************/ + +struct 
aqc111_wol_cfg { + u8 hw_addr[6]; + u8 flags; + u8 rsvd[283]; +} __packed; + +#define WOL_CFG_SIZE sizeof(struct aqc111_wol_cfg) + +struct aqc111_data { + u16 rxctl; + u8 rx_checksum; + u8 link_speed; + u8 link; + u8 autoneg; + u32 advertised_speed; + struct { + u8 major; + u8 minor; + u8 rev; + } fw_ver; + u32 phy_cfg; + u8 wol_flags; +}; + +#define AQ_LS_MASK 0x8000 +#define AQ_SPEED_MASK 0x7F00 +#define AQ_SPEED_SHIFT 0x0008 +#define AQ_INT_SPEED_5G 0x000F +#define AQ_INT_SPEED_2_5G 0x0010 +#define AQ_INT_SPEED_1G 0x0011 +#define AQ_INT_SPEED_100M 0x0013 + +/* TX Descriptor */ +#define AQ_TX_DESC_LEN_MASK 0x1FFFFF +#define AQ_TX_DESC_DROP_PADD BIT(28) +#define AQ_TX_DESC_VLAN BIT(29) +#define AQ_TX_DESC_MSS_MASK 0x7FFF +#define AQ_TX_DESC_MSS_SHIFT 0x20 +#define AQ_TX_DESC_VLAN_MASK 0xFFFF +#define AQ_TX_DESC_VLAN_SHIFT 0x30 + +#define AQ_RX_HW_PAD 0x02 + +/* RX Packet Descriptor */ +#define AQ_RX_PD_L4_ERR BIT(0) +#define AQ_RX_PD_L3_ERR BIT(1) +#define AQ_RX_PD_L4_TYPE_MASK 0x1C +#define AQ_RX_PD_L4_UDP 0x04 +#define AQ_RX_PD_L4_TCP 0x10 +#define AQ_RX_PD_L3_TYPE_MASK 0x60 +#define AQ_RX_PD_L3_IP 0x20 +#define AQ_RX_PD_L3_IP6 0x40 + +#define AQ_RX_PD_VLAN BIT(10) +#define AQ_RX_PD_RX_OK BIT(11) +#define AQ_RX_PD_DROP BIT(31) +#define AQ_RX_PD_LEN_MASK 0x7FFF0000 +#define AQ_RX_PD_LEN_SHIFT 0x10 +#define AQ_RX_PD_VLAN_SHIFT 0x20 + +/* RX Descriptor header */ +#define AQ_RX_DH_PKT_CNT_MASK 0x1FFF +#define AQ_RX_DH_DESC_OFFSET_MASK 0xFFFFE000 +#define AQ_RX_DH_DESC_OFFSET_SHIFT 0x0D + +static struct { + unsigned char ctrl; + unsigned char timer_l; + unsigned char timer_h; + unsigned char size; + unsigned char ifg; +} AQC111_BULKIN_SIZE[] = { + /* xHCI & EHCI & OHCI */ + {7, 0x00, 0x01, 0x1E, 0xFF},/* 10G, 5G, 2.5G, 1G */ + {7, 0xA0, 0x00, 0x14, 0x00},/* 100M */ + /* Jumbo packet */ + {7, 0x00, 0x01, 0x18, 0xFF}, +}; + +#endif /* __LINUX_USBNET_AQC111_H */ diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c index 5c42cf81a08b..b3b3c05903a1 100644 --- a/drivers/net/usb/cdc_ether.c +++ b/drivers/net/usb/cdc_ether.c @@ -562,6 +562,8 @@ static const struct driver_info wwan_info = { #define MICROSOFT_VENDOR_ID 0x045e #define UBLOX_VENDOR_ID 0x1546 #define TPLINK_VENDOR_ID 0x2357 +#define AQUANTIA_VENDOR_ID 0x2eca +#define ASIX_VENDOR_ID 0x0b95 static const struct usb_device_id products[] = { /* BLACKLIST !! @@ -821,6 +823,30 @@ static const struct usb_device_id products[] = { .driver_info = 0, }, +/* Aquantia AQtion USB to 5GbE Controller (based on AQC111U) */ +{ + USB_DEVICE_AND_INTERFACE_INFO(AQUANTIA_VENDOR_ID, 0xc101, + USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET, + USB_CDC_PROTO_NONE), + .driver_info = 0, +}, + +/* ASIX USB 3.1 Gen1 to 5G Multi-Gigabit Ethernet Adapter(based on AQC111U) */ +{ + USB_DEVICE_AND_INTERFACE_INFO(ASIX_VENDOR_ID, 0x2790, USB_CLASS_COMM, + USB_CDC_SUBCLASS_ETHERNET, + USB_CDC_PROTO_NONE), + .driver_info = 0, +}, + +/* ASIX USB 3.1 Gen1 to 2.5G Multi-Gigabit Ethernet Adapter(based on AQC112U) */ +{ + USB_DEVICE_AND_INTERFACE_INFO(ASIX_VENDOR_ID, 0x2791, USB_CLASS_COMM, + USB_CDC_SUBCLASS_ETHERNET, + USB_CDC_PROTO_NONE), + .driver_info = 0, +}, + /* WHITELIST!!! * * CDC Ether uses two interfaces, not necessarily consecutive. 
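
The aqc111 receive path added above packs several Ethernet frames into each bulk-in URB, followed by an array of 64-bit packet descriptors and a single 64-bit descriptor header at the tail. The stand-alone sketch below shows how the AQ_RX_DH_* masks from aqc111.h recover the packet count and descriptor offset from that trailing header, and repeats the sanity check aqc111_rx_fixup() performs before trusting either field; the URB length and header value here are invented for illustration, not captured from hardware.

    /* Minimal model of the aqc111 RX tail: N 64-bit packet descriptors
     * plus one 64-bit descriptor header. Mask values mirror aqc111.h;
     * the sample URB below is invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #define AQ_RX_DH_PKT_CNT_MASK       0x1FFFu
    #define AQ_RX_DH_DESC_OFFSET_MASK   0xFFFFE000u
    #define AQ_RX_DH_DESC_OFFSET_SHIFT  13

    int main(void)
    {
            /* pretend a 90136-byte URB whose trailing header announces
             * 3 packets and a descriptor array starting at byte 90104 */
            uint32_t skb_len = 90136;
            uint64_t desc_hdr =
                    ((uint64_t)90104 << AQ_RX_DH_DESC_OFFSET_SHIFT) | 3;

            uint32_t pkt_count = desc_hdr & AQ_RX_DH_PKT_CNT_MASK;
            uint32_t desc_offset = (desc_hdr & AQ_RX_DH_DESC_OFFSET_MASK) >>
                                   AQ_RX_DH_DESC_OFFSET_SHIFT;

            /* same check as aqc111_rx_fixup(): the descriptor array plus
             * the header itself must account for the end of the URB */
            uint32_t start_of_descs =
                    skb_len - (pkt_count + 1) * sizeof(desc_hdr);

            if (start_of_descs != desc_offset || desc_offset >= skb_len) {
                    puts("malformed URB, drop");
                    return 1;
            }
            printf("%u packets, frames occupy bytes [0, %u)\n",
                   pkt_count, desc_offset);
            return 0;
    }

If the recomputed start of the descriptor array disagrees with the offset the header advertises, the driver drops the whole URB rather than guess, which is why aqc111_rx_fixup() returns 0 (error) in that case.
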
diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c index 77d3c85febf1..e96bc0c6140f 100644 --- a/drivers/net/usb/lan78xx.c +++ b/drivers/net/usb/lan78xx.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -1586,18 +1587,17 @@ static int lan78xx_set_pause(struct net_device *net, dev->fc_request_control |= FLOW_CTRL_TX; if (ecmd.base.autoneg) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(fc) = { 0, }; u32 mii_adv; - u32 advertising; - ethtool_convert_link_mode_to_legacy_u32( - &advertising, ecmd.link_modes.advertising); - - advertising &= ~(ADVERTISED_Pause | ADVERTISED_Asym_Pause); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, + ecmd.link_modes.advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + ecmd.link_modes.advertising); mii_adv = (u32)mii_advertise_flowctrl(dev->fc_request_control); - advertising |= mii_adv_to_ethtool_adv_t(mii_adv); - - ethtool_convert_legacy_u32_to_link_mode( - ecmd.link_modes.advertising, advertising); + mii_adv_to_linkmode_adv_t(fc, mii_adv); + linkmode_or(ecmd.link_modes.advertising, fc, + ecmd.link_modes.advertising); phy_ethtool_ksettings_set(phydev, &ecmd); } @@ -2095,6 +2095,7 @@ static struct phy_device *lan7801_phy_init(struct lan78xx_net *dev) static int lan78xx_phy_init(struct lan78xx_net *dev) { + __ETHTOOL_DECLARE_LINK_MODE_MASK(fc) = { 0, }; int ret; u32 mii_adv; struct phy_device *phydev; @@ -2158,9 +2159,13 @@ static int lan78xx_phy_init(struct lan78xx_net *dev) /* support both flow controls */ dev->fc_request_control = (FLOW_CTRL_RX | FLOW_CTRL_TX); - phydev->advertising &= ~(ADVERTISED_Pause | ADVERTISED_Asym_Pause); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, + phydev->advertising); + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + phydev->advertising); mii_adv = (u32)mii_advertise_flowctrl(dev->fc_request_control); - phydev->advertising |= mii_adv_to_ethtool_adv_t(mii_adv); + mii_adv_to_linkmode_adv_t(fc, mii_adv); + linkmode_or(phydev->advertising, fc, phydev->advertising); if (phydev->mdio.dev.of_node) { u32 reg; diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c index f2d01cb6f958..e3d08626828e 100644 --- a/drivers/net/usb/smsc95xx.c +++ b/drivers/net/usb/smsc95xx.c @@ -618,9 +618,7 @@ static void smsc95xx_status(struct usbnet *dev, struct urb *urb) return; } - memcpy(&intdata, urb->transfer_buffer, 4); - le32_to_cpus(&intdata); - + intdata = get_unaligned_le32(urb->transfer_buffer); netif_dbg(dev, link, dev->net, "intdata: 0x%08X\n", intdata); if (intdata & INT_ENP_PHY_INT_) @@ -1295,6 +1293,7 @@ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf) dev->net->features |= NETIF_F_RXCSUM; dev->net->hw_features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM; + set_bit(EVENT_NO_IP_ALIGN, &dev->flags); smsc95xx_init_mac_address(dev); @@ -1933,8 +1932,7 @@ static int smsc95xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb) unsigned char *packet; u16 size; - memcpy(&header, skb->data, sizeof(header)); - le32_to_cpus(&header); + header = get_unaligned_le32(skb->data); skb_pull(skb, 4 + NET_IP_ALIGN); packet = skb->data; @@ -2011,12 +2009,30 @@ static u32 smsc95xx_calc_csum_preamble(struct sk_buff *skb) return (high_16 << 16) | low_16; } +/* The TX CSUM won't work if the checksum lies in the last 4 bytes of the + * transmission. This is fairly unlikely, only seems to trigger with some + * short TCP ACK packets sent. 
+ * + * Note, this calculation should probably check for the alignment of the + * data as well, but a straight check for csum being in the last four bytes + * of the packet should be ok for now. + */ +static bool smsc95xx_can_tx_checksum(struct sk_buff *skb) +{ + unsigned int len = skb->len - skb_checksum_start_offset(skb); + + if (skb->len <= 45) + return false; + return skb->csum_offset < (len - (4 + 1)); +} + static struct sk_buff *smsc95xx_tx_fixup(struct usbnet *dev, struct sk_buff *skb, gfp_t flags) { bool csum = skb->ip_summed == CHECKSUM_PARTIAL; int overhead = csum ? SMSC95XX_TX_OVERHEAD_CSUM : SMSC95XX_TX_OVERHEAD; u32 tx_cmd_a, tx_cmd_b; + void *ptr; /* We do not advertise SG, so skbs should be already linearized */ BUG_ON(skb_shinfo(skb)->nr_frags); @@ -2030,8 +2046,11 @@ static struct sk_buff *smsc95xx_tx_fixup(struct usbnet *dev, return NULL; } + tx_cmd_b = (u32)skb->len; + tx_cmd_a = tx_cmd_b | TX_CMD_A_FIRST_SEG_ | TX_CMD_A_LAST_SEG_; + if (csum) { - if (skb->len <= 45) { + if (!smsc95xx_can_tx_checksum(skb)) { /* workaround - hardware tx checksum does not work * properly with extremely small packets */ long csstart = skb_checksum_start_offset(skb); @@ -2043,24 +2062,18 @@ static struct sk_buff *smsc95xx_tx_fixup(struct usbnet *dev, csum = false; } else { u32 csum_preamble = smsc95xx_calc_csum_preamble(skb); - skb_push(skb, 4); - cpu_to_le32s(&csum_preamble); - memcpy(skb->data, &csum_preamble, 4); + ptr = skb_push(skb, 4); + put_unaligned_le32(csum_preamble, ptr); + + tx_cmd_a += 4; + tx_cmd_b += 4; + tx_cmd_b |= TX_CMD_B_CSUM_ENABLE; } } - skb_push(skb, 4); - tx_cmd_b = (u32)(skb->len - 4); - if (csum) - tx_cmd_b |= TX_CMD_B_CSUM_ENABLE; - cpu_to_le32s(&tx_cmd_b); - memcpy(skb->data, &tx_cmd_b, 4); - - skb_push(skb, 4); - tx_cmd_a = (u32)(skb->len - 8) | TX_CMD_A_FIRST_SEG_ | - TX_CMD_A_LAST_SEG_; - cpu_to_le32s(&tx_cmd_a); - memcpy(skb->data, &tx_cmd_a, 4); + ptr = skb_push(skb, 8); + put_unaligned_le32(tx_cmd_a, ptr); + put_unaligned_le32(tx_cmd_b, ptr+4); return skb; } diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 890fa5b905e2..f412ea1cef18 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -1253,7 +1253,7 @@ static int veth_newlink(struct net *src_net, struct net_device *dev, return PTR_ERR(net); peer = rtnl_create_link(net, ifname, name_assign_type, - &veth_link_ops, tbp); + &veth_link_ops, tbp, extack); if (IS_ERR(peer)) { put_net(net); return PTR_ERR(peer); diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index ea672145f6a6..023725086046 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -236,6 +236,7 @@ struct virtnet_info { u32 speed; unsigned long guest_offloads; + unsigned long guest_offloads_capable; /* failover when STANDBY feature enabled */ struct failover *failover; @@ -2479,6 +2480,31 @@ static int virtnet_get_phys_port_name(struct net_device *dev, char *buf, return 0; } +static int virtnet_set_features(struct net_device *dev, + netdev_features_t features) +{ + struct virtnet_info *vi = netdev_priv(dev); + u64 offloads; + int err; + + if ((dev->features ^ features) & NETIF_F_LRO) { + if (vi->xdp_queue_pairs) + return -EBUSY; + + if (features & NETIF_F_LRO) + offloads = vi->guest_offloads_capable; + else + offloads = 0; + + err = virtnet_set_guest_offloads(vi, offloads); + if (err) + return err; + vi->guest_offloads = offloads; + } + + return 0; +} + static const struct net_device_ops virtnet_netdev = { .ndo_open = virtnet_open, .ndo_stop = virtnet_close, @@ -2493,6 +2519,7 @@ static const struct 
net_device_ops virtnet_netdev = { .ndo_xdp_xmit = virtnet_xdp_xmit, .ndo_features_check = passthru_features_check, .ndo_get_phys_port_name = virtnet_get_phys_port_name, + .ndo_set_features = virtnet_set_features, }; static void virtnet_config_changed_work(struct work_struct *work) @@ -2951,6 +2978,11 @@ static int virtnet_probe(struct virtio_device *vdev) } if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM)) dev->features |= NETIF_F_RXCSUM; + if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || + virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6)) + dev->features |= NETIF_F_LRO; + if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) + dev->hw_features |= NETIF_F_LRO; dev->vlan_features = dev->features; @@ -3080,6 +3112,7 @@ static int virtnet_probe(struct virtio_device *vdev) for (i = 0; i < ARRAY_SIZE(guest_offloads); i++) if (virtio_has_feature(vi->vdev, guest_offloads[i])) set_bit(guest_offloads[i], &vi->guest_offloads); + vi->guest_offloads_capable = vi->guest_offloads; pr_debug("virtnet: registered device %s with %d RX and TX vq's\n", dev->name, max_queue_pairs); diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c index 69b7227c637e..95909e262ba4 100644 --- a/drivers/net/vrf.c +++ b/drivers/net/vrf.c @@ -747,7 +747,8 @@ static int vrf_rtable_create(struct net_device *dev) /**************************** device handling ********************/ /* cycle interface to flush neighbor cache and move routes across tables */ -static void cycle_netdev(struct net_device *dev) +static void cycle_netdev(struct net_device *dev, + struct netlink_ext_ack *extack) { unsigned int flags = dev->flags; int ret; @@ -755,9 +756,9 @@ static void cycle_netdev(struct net_device *dev) if (!netif_running(dev)) return; - ret = dev_change_flags(dev, flags & ~IFF_UP); + ret = dev_change_flags(dev, flags & ~IFF_UP, extack); if (ret >= 0) - ret = dev_change_flags(dev, flags); + ret = dev_change_flags(dev, flags, extack); if (ret < 0) { netdev_err(dev, @@ -785,7 +786,7 @@ static int do_vrf_add_slave(struct net_device *dev, struct net_device *port_dev, if (ret < 0) goto err; - cycle_netdev(port_dev); + cycle_netdev(port_dev, extack); return 0; @@ -815,7 +816,7 @@ static int do_vrf_del_slave(struct net_device *dev, struct net_device *port_dev) netdev_upper_dev_unlink(port_dev, dev); port_dev->priv_flags &= ~IFF_L3MDEV_SLAVE; - cycle_netdev(port_dev); + cycle_netdev(port_dev, NULL); return 0; } @@ -981,24 +982,23 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev, struct sk_buff *skb) { int orig_iif = skb->skb_iif; - bool need_strict; + bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr); + bool is_ndisc = ipv6_ndisc_frame(skb); - /* loopback traffic; do not push through packet taps again. - * Reset pkt_type for upper layers to process skb + /* loopback, multicast & non-ND link-local traffic; do not push through + * packet taps again. 
Reset pkt_type for upper layers to process skb */ - if (skb->pkt_type == PACKET_LOOPBACK) { + if (skb->pkt_type == PACKET_LOOPBACK || (need_strict && !is_ndisc)) { skb->dev = vrf_dev; skb->skb_iif = vrf_dev->ifindex; IP6CB(skb)->flags |= IP6SKB_L3SLAVE; - skb->pkt_type = PACKET_HOST; + if (skb->pkt_type == PACKET_LOOPBACK) + skb->pkt_type = PACKET_HOST; goto out; } - /* if packet is NDISC or addressed to multicast or link-local - * then keep the ingress interface - */ - need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr); - if (!ipv6_ndisc_frame(skb) && !need_strict) { + /* if packet is NDISC then keep the ingress interface */ + if (!is_ndisc) { vrf_rx_stats(vrf_dev, skb->len); skb->dev = vrf_dev; skb->skb_iif = vrf_dev->ifindex; diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c index 0565f8880199..5209ee9aac47 100644 --- a/drivers/net/vxlan.c +++ b/drivers/net/vxlan.c @@ -79,9 +79,11 @@ struct vxlan_fdb { u8 eth_addr[ETH_ALEN]; u16 state; /* see ndm_state */ __be32 vni; - u8 flags; /* see ndm_flags */ + u16 flags; /* see ndm_flags and below */ }; +#define NTF_VXLAN_ADDED_BY_USER 0x100 + /* salt for hash table */ static u32 vxlan_salt __read_mostly; @@ -186,7 +188,7 @@ static inline struct vxlan_rdst *first_remote_rtnl(struct vxlan_fdb *fdb) * and enabled unshareable flags. */ static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family, - __be16 port, u32 flags) + __be16 port, u32 flags, int ifindex) { struct vxlan_sock *vs; @@ -195,7 +197,8 @@ static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family, hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) { if (inet_sk(vs->sock->sk)->inet_sport == port && vxlan_get_sk_family(vs) == family && - vs->flags == flags) + vs->flags == flags && + vs->sock->sk->sk_bound_dev_if == ifindex) return vs; } return NULL; @@ -235,7 +238,7 @@ static struct vxlan_dev *vxlan_find_vni(struct net *net, int ifindex, { struct vxlan_sock *vs; - vs = vxlan_find_sock(net, family, port, flags); + vs = vxlan_find_sock(net, family, port, flags, ifindex); if (!vs) return NULL; @@ -355,6 +358,23 @@ errout: rtnl_set_sk_err(net, RTNLGRP_NEIGH, err); } +static void vxlan_fdb_switchdev_notifier_info(const struct vxlan_dev *vxlan, + const struct vxlan_fdb *fdb, + const struct vxlan_rdst *rd, + struct switchdev_notifier_vxlan_fdb_info *fdb_info) +{ + fdb_info->info.dev = vxlan->dev; + fdb_info->info.extack = NULL; + fdb_info->remote_ip = rd->remote_ip; + fdb_info->remote_port = rd->remote_port; + fdb_info->remote_vni = rd->remote_vni; + fdb_info->remote_ifindex = rd->remote_ifindex; + memcpy(fdb_info->eth_addr, fdb->eth_addr, ETH_ALEN); + fdb_info->vni = fdb->vni; + fdb_info->offloaded = rd->offloaded; + fdb_info->added_by_user = fdb->flags & NTF_VXLAN_ADDED_BY_USER; +} + static void vxlan_fdb_switchdev_call_notifiers(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb, struct vxlan_rdst *rd, @@ -368,31 +388,25 @@ static void vxlan_fdb_switchdev_call_notifiers(struct vxlan_dev *vxlan, notifier_type = adding ? 
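/* adding was derived from RTM_NEWNEIGH vs RTM_DELNEIGH by the caller */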
SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE : SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE; - - info = (struct switchdev_notifier_vxlan_fdb_info){ - .remote_ip = rd->remote_ip, - .remote_port = rd->remote_port, - .remote_vni = rd->remote_vni, - .remote_ifindex = rd->remote_ifindex, - .vni = fdb->vni, - .offloaded = rd->offloaded, - }; - memcpy(info.eth_addr, fdb->eth_addr, ETH_ALEN); - + vxlan_fdb_switchdev_notifier_info(vxlan, fdb, rd, &info); call_switchdev_notifiers(notifier_type, vxlan->dev, &info.info); } static void vxlan_fdb_notify(struct vxlan_dev *vxlan, struct vxlan_fdb *fdb, - struct vxlan_rdst *rd, int type) + struct vxlan_rdst *rd, int type, bool swdev_notify) { - switch (type) { - case RTM_NEWNEIGH: - vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd, true); - break; - case RTM_DELNEIGH: - vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd, false); - break; + if (swdev_notify) { + switch (type) { + case RTM_NEWNEIGH: + vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd, + true); + break; + case RTM_DELNEIGH: + vxlan_fdb_switchdev_call_notifiers(vxlan, fdb, rd, + false); + break; + } } __vxlan_fdb_notify(vxlan, fdb, rd, type); @@ -409,7 +423,7 @@ static void vxlan_ip_miss(struct net_device *dev, union vxlan_addr *ipa) .remote_vni = cpu_to_be32(VXLAN_N_VID), }; - vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH); + vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH, true); } static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN]) @@ -421,7 +435,7 @@ static void vxlan_fdb_miss(struct vxlan_dev *vxlan, const u8 eth_addr[ETH_ALEN]) memcpy(f.eth_addr, eth_addr, ETH_ALEN); - vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH); + vxlan_fdb_notify(vxlan, &f, &remote, RTM_GETNEIGH, true); } /* Hash Ethernet address */ @@ -531,16 +545,7 @@ int vxlan_fdb_find_uc(struct net_device *dev, const u8 *mac, __be32 vni, } rdst = first_remote_rcu(f); - - memset(fdb_info, 0, sizeof(*fdb_info)); - fdb_info->info.dev = dev; - fdb_info->remote_ip = rdst->remote_ip; - fdb_info->remote_port = rdst->remote_port; - fdb_info->remote_vni = rdst->remote_vni; - fdb_info->remote_ifindex = rdst->remote_ifindex; - fdb_info->vni = vni; - fdb_info->offloaded = rdst->offloaded; - ether_addr_copy(fdb_info->eth_addr, mac); + vxlan_fdb_switchdev_notifier_info(vxlan, f, rdst, fdb_info); out: rcu_read_unlock(); @@ -548,6 +553,75 @@ out: } EXPORT_SYMBOL_GPL(vxlan_fdb_find_uc); +static int vxlan_fdb_notify_one(struct notifier_block *nb, + const struct vxlan_dev *vxlan, + const struct vxlan_fdb *f, + const struct vxlan_rdst *rdst) +{ + struct switchdev_notifier_vxlan_fdb_info fdb_info; + int rc; + + vxlan_fdb_switchdev_notifier_info(vxlan, f, rdst, &fdb_info); + rc = nb->notifier_call(nb, SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE, + &fdb_info); + return notifier_to_errno(rc); +} + +int vxlan_fdb_replay(const struct net_device *dev, __be32 vni, + struct notifier_block *nb) +{ + struct vxlan_dev *vxlan; + struct vxlan_rdst *rdst; + struct vxlan_fdb *f; + unsigned int h; + int rc = 0; + + if (!netif_is_vxlan(dev)) + return -EINVAL; + vxlan = netdev_priv(dev); + + spin_lock_bh(&vxlan->hash_lock); + for (h = 0; h < FDB_HASH_SIZE; ++h) { + hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist) { + if (f->vni == vni) { + list_for_each_entry(rdst, &f->remotes, list) { + rc = vxlan_fdb_notify_one(nb, vxlan, + f, rdst); + if (rc) + goto out; + } + } + } + } + +out: + spin_unlock_bh(&vxlan->hash_lock); + return rc; +} +EXPORT_SYMBOL_GPL(vxlan_fdb_replay); + +void vxlan_fdb_clear_offload(const struct net_device *dev, __be32 vni) +{ + struct 
vxlan_dev *vxlan; + struct vxlan_rdst *rdst; + struct vxlan_fdb *f; + unsigned int h; + + if (!netif_is_vxlan(dev)) + return; + vxlan = netdev_priv(dev); + + spin_lock_bh(&vxlan->hash_lock); + for (h = 0; h < FDB_HASH_SIZE; ++h) { + hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist) + if (f->vni == vni) + list_for_each_entry(rdst, &f->remotes, list) + rdst->offloaded = false; + } + spin_unlock_bh(&vxlan->hash_lock); +} +EXPORT_SYMBOL_GPL(vxlan_fdb_clear_offload); + /* Replace destination of unicast mac */ static int vxlan_fdb_replace(struct vxlan_fdb *f, union vxlan_addr *ip, __be16 port, __be32 vni, @@ -701,7 +775,7 @@ static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff) static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan, const u8 *mac, __u16 state, - __be32 src_vni, __u8 ndm_flags) + __be32 src_vni, __u16 ndm_flags) { struct vxlan_fdb *f; @@ -721,7 +795,7 @@ static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan, static int vxlan_fdb_create(struct vxlan_dev *vxlan, const u8 *mac, union vxlan_addr *ip, __u16 state, __be16 port, __be32 src_vni, - __be32 vni, __u32 ifindex, __u8 ndm_flags, + __be32 vni, __u32 ifindex, __u16 ndm_flags, struct vxlan_fdb **fdb) { struct vxlan_rdst *rd = NULL; @@ -757,9 +831,10 @@ static int vxlan_fdb_update(struct vxlan_dev *vxlan, const u8 *mac, union vxlan_addr *ip, __u16 state, __u16 flags, __be16 port, __be32 src_vni, __be32 vni, - __u32 ifindex, __u8 ndm_flags) + __u32 ifindex, __u16 ndm_flags, + bool swdev_notify) { - __u8 fdb_flags = (ndm_flags & ~NTF_USE); + __u16 fdb_flags = (ndm_flags & ~NTF_USE); struct vxlan_rdst *rd = NULL; struct vxlan_fdb *f; int notify = 0; @@ -772,16 +847,24 @@ static int vxlan_fdb_update(struct vxlan_dev *vxlan, "lost race to create %pM\n", mac); return -EEXIST; } - if (f->state != state) { - f->state = state; - f->updated = jiffies; - notify = 1; - } - if (f->flags != fdb_flags) { - f->flags = fdb_flags; - f->updated = jiffies; - notify = 1; + + /* Do not allow an externally learned entry to take over an + * entry added by the user. 
+ */ + if (!(fdb_flags & NTF_EXT_LEARNED) || + !(f->flags & NTF_VXLAN_ADDED_BY_USER)) { + if (f->state != state) { + f->state = state; + f->updated = jiffies; + notify = 1; + } + if (f->flags != fdb_flags) { + f->flags = fdb_flags; + f->updated = jiffies; + notify = 1; + } } + if ((flags & NLM_F_REPLACE)) { /* Only change unicasts */ if (!(is_multicast_ether_addr(f->eth_addr) || @@ -823,7 +906,7 @@ static int vxlan_fdb_update(struct vxlan_dev *vxlan, if (notify) { if (rd == NULL) rd = first_remote_rtnl(f); - vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH); + vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH, swdev_notify); } return 0; @@ -842,7 +925,7 @@ static void vxlan_fdb_free(struct rcu_head *head) } static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f, - bool do_notify) + bool do_notify, bool swdev_notify) { struct vxlan_rdst *rd; @@ -852,7 +935,8 @@ static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f, --vxlan->addrcnt; if (do_notify) list_for_each_entry(rd, &f->remotes, list) - vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH); + vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH, + swdev_notify); hlist_del_rcu(&f->hlist); call_rcu(&f->rcu, vxlan_fdb_free); @@ -867,10 +951,10 @@ static void vxlan_dst_free(struct rcu_head *head) } static void vxlan_fdb_dst_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f, - struct vxlan_rdst *rd) + struct vxlan_rdst *rd, bool swdev_notify) { list_del_rcu(&rd->list); - vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH); + vxlan_fdb_notify(vxlan, f, rd, RTM_DELNEIGH, swdev_notify); call_rcu(&rd->rcu, vxlan_dst_free); } @@ -969,7 +1053,9 @@ static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], spin_lock_bh(&vxlan->hash_lock); err = vxlan_fdb_update(vxlan, addr, &ip, ndm->ndm_state, flags, - port, src_vni, vni, ifindex, ndm->ndm_flags); + port, src_vni, vni, ifindex, + ndm->ndm_flags | NTF_VXLAN_ADDED_BY_USER, + true); spin_unlock_bh(&vxlan->hash_lock); return err; @@ -978,7 +1064,7 @@ static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], static int __vxlan_fdb_delete(struct vxlan_dev *vxlan, const unsigned char *addr, union vxlan_addr ip, __be16 port, __be32 src_vni, __be32 vni, - u32 ifindex, u16 vid) + u32 ifindex, bool swdev_notify) { struct vxlan_fdb *f; struct vxlan_rdst *rd = NULL; @@ -998,11 +1084,11 @@ static int __vxlan_fdb_delete(struct vxlan_dev *vxlan, * otherwise destroy the fdb entry */ if (rd && !list_is_singular(&f->remotes)) { - vxlan_fdb_dst_destroy(vxlan, f, rd); + vxlan_fdb_dst_destroy(vxlan, f, rd, swdev_notify); goto out; } - vxlan_fdb_destroy(vxlan, f, true); + vxlan_fdb_destroy(vxlan, f, true, swdev_notify); out: return 0; @@ -1026,7 +1112,7 @@ static int vxlan_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[], spin_lock_bh(&vxlan->hash_lock); err = __vxlan_fdb_delete(vxlan, addr, ip, port, src_vni, vni, ifindex, - vid); + true); spin_unlock_bh(&vxlan->hash_lock); return err; @@ -1067,6 +1153,39 @@ out: return err; } +static int vxlan_fdb_get(struct sk_buff *skb, + struct nlattr *tb[], + struct net_device *dev, + const unsigned char *addr, + u16 vid, u32 portid, u32 seq, + struct netlink_ext_ack *extack) +{ + struct vxlan_dev *vxlan = netdev_priv(dev); + struct vxlan_fdb *f; + __be32 vni; + int err; + + if (tb[NDA_VNI]) + vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI])); + else + vni = vxlan->default_dst.remote_vni; + + rcu_read_lock(); + + f = __vxlan_find_mac(vxlan, addr, vni); + if (!f) { + NL_SET_ERR_MSG(extack, "Fdb entry not found"); + err = -ENOENT; + goto errout; + } + + err = 
vxlan_fdb_info(skb, vxlan, f, portid, seq, + RTM_NEWNEIGH, 0, first_remote_rcu(f)); +errout: + rcu_read_unlock(); + return err; +} + /* Watch incoming packets to learn mapping between Ethernet address * and Tunnel endpoint. * Return true if packet is bogus and should be dropped. @@ -1104,7 +1223,7 @@ static bool vxlan_snoop(struct net_device *dev, rdst->remote_ip = *src_ip; f->updated = jiffies; - vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH); + vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH, true); } else { /* learned new entry */ spin_lock(&vxlan->hash_lock); @@ -1117,7 +1236,7 @@ static bool vxlan_snoop(struct net_device *dev, vxlan->cfg.dst_port, vni, vxlan->default_dst.remote_vni, - ifindex, NTF_SELF); + ifindex, NTF_SELF, true); spin_unlock(&vxlan->hash_lock); } @@ -1553,6 +1672,34 @@ drop: return 0; } +/* Callback from net/ipv{4,6}/udp.c to check that we have a VNI for errors */ +static int vxlan_err_lookup(struct sock *sk, struct sk_buff *skb) +{ + struct vxlan_dev *vxlan; + struct vxlan_sock *vs; + struct vxlanhdr *hdr; + __be32 vni; + + if (skb->len < VXLAN_HLEN) + return -EINVAL; + + hdr = vxlan_hdr(skb); + + if (!(hdr->vx_flags & VXLAN_HF_VNI)) + return -EINVAL; + + vs = rcu_dereference_sk_user_data(sk); + if (!vs) + return -ENOENT; + + vni = vxlan_vni(hdr->vx_vni); + vxlan = vxlan_vs_find_vni(vs, skb->dev->ifindex, vni); + if (!vxlan) + return -ENOENT; + + return 0; +} + static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni) { struct vxlan_dev *vxlan = netdev_priv(dev); @@ -2241,6 +2388,9 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, struct rtable *rt; __be16 df = 0; + if (!ifindex) + ifindex = sock4->sock->sk->sk_bound_dev_if; + rt = vxlan_get_route(vxlan, dev, sock4, skb, ifindex, tos, dst->sin.sin_addr.s_addr, &local_ip.sin.sin_addr.s_addr, @@ -2251,13 +2401,24 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, goto tx_error; } - /* Bypass encapsulation if the destination is local */ if (!info) { + /* Bypass encapsulation if the destination is local */ err = encap_bypass_if_local(skb, dev, vxlan, dst, dst_port, ifindex, vni, &rt->dst, rt->rt_flags); if (err) goto out_unlock; + + if (vxlan->cfg.df == VXLAN_DF_SET) { + df = htons(IP_DF); + } else if (vxlan->cfg.df == VXLAN_DF_INHERIT) { + struct ethhdr *eth = eth_hdr(skb); + + if (ntohs(eth->h_proto) == ETH_P_IPV6 || + (ntohs(eth->h_proto) == ETH_P_IP && + old_iph->frag_off & htons(IP_DF))) + df = htons(IP_DF); + } } else if (info->key.tun_flags & TUNNEL_DONT_FRAGMENT) { df = htons(IP_DF); } @@ -2279,6 +2440,9 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, } else { struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock); + if (!ifindex) + ifindex = sock6->sock->sk->sk_bound_dev_if; + ndst = vxlan6_get_route(vxlan, dev, sock6, skb, ifindex, tos, label, &dst->sin6.sin6_addr, &local_ip.sin6.sin6_addr, @@ -2462,7 +2626,7 @@ static void vxlan_cleanup(struct timer_list *t) "garbage collect %pM\n", f->eth_addr); f->state = NUD_STALE; - vxlan_fdb_destroy(vxlan, f, true); + vxlan_fdb_destroy(vxlan, f, true, true); } else if (time_before(timeout, next_timer)) next_timer = timeout; } @@ -2513,7 +2677,7 @@ static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni) spin_lock_bh(&vxlan->hash_lock); f = __vxlan_find_mac(vxlan, all_zeros_mac, vni); if (f) - vxlan_fdb_destroy(vxlan, f, true); + vxlan_fdb_destroy(vxlan, f, true, true); spin_unlock_bh(&vxlan->hash_lock); } @@ -2567,7 +2731,7 @@ static void 
vxlan_flush(struct vxlan_dev *vxlan, bool do_all) continue; /* the all_zeros_mac entry is deleted at vxlan_uninit */ if (!is_zero_ether_addr(f->eth_addr)) - vxlan_fdb_destroy(vxlan, f, true); + vxlan_fdb_destroy(vxlan, f, true, true); } } spin_unlock_bh(&vxlan->hash_lock); @@ -2675,6 +2839,7 @@ static const struct net_device_ops vxlan_netdev_ether_ops = { .ndo_fdb_add = vxlan_fdb_add, .ndo_fdb_del = vxlan_fdb_delete, .ndo_fdb_dump = vxlan_fdb_dump, + .ndo_fdb_get = vxlan_fdb_get, .ndo_fill_metadata_dst = vxlan_fill_metadata_dst, }; @@ -2810,6 +2975,7 @@ static const struct nla_policy vxlan_policy[IFLA_VXLAN_MAX + 1] = { [IFLA_VXLAN_GPE] = { .type = NLA_FLAG, }, [IFLA_VXLAN_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG }, [IFLA_VXLAN_TTL_INHERIT] = { .type = NLA_FLAG }, + [IFLA_VXLAN_DF] = { .type = NLA_U8 }, }; static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[], @@ -2866,6 +3032,16 @@ static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[], } } + if (data[IFLA_VXLAN_DF]) { + enum ifla_vxlan_df df = nla_get_u8(data[IFLA_VXLAN_DF]); + + if (df < 0 || df > VXLAN_DF_MAX) { + NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_DF], + "Invalid DF attribute"); + return -EINVAL; + } + } + return 0; } @@ -2882,7 +3058,7 @@ static const struct ethtool_ops vxlan_ethtool_ops = { }; static struct socket *vxlan_create_sock(struct net *net, bool ipv6, - __be16 port, u32 flags) + __be16 port, u32 flags, int ifindex) { struct socket *sock; struct udp_port_cfg udp_conf; @@ -2900,6 +3076,7 @@ static struct socket *vxlan_create_sock(struct net *net, bool ipv6, } udp_conf.local_udp_port = port; + udp_conf.bind_ifindex = ifindex; /* Open UDP socket */ err = udp_sock_create(net, &udp_conf, &sock); @@ -2911,7 +3088,8 @@ static struct socket *vxlan_create_sock(struct net *net, bool ipv6, /* Create new listen socket if needed */ static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6, - __be16 port, u32 flags) + __be16 port, u32 flags, + int ifindex) { struct vxlan_net *vn = net_generic(net, vxlan_net_id); struct vxlan_sock *vs; @@ -2926,7 +3104,7 @@ static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6, for (h = 0; h < VNI_HASH_SIZE; ++h) INIT_HLIST_HEAD(&vs->vni_list[h]); - sock = vxlan_create_sock(net, ipv6, port, flags); + sock = vxlan_create_sock(net, ipv6, port, flags, ifindex); if (IS_ERR(sock)) { kfree(vs); return ERR_CAST(sock); @@ -2949,6 +3127,7 @@ static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6, tunnel_cfg.sk_user_data = vs; tunnel_cfg.encap_type = 1; tunnel_cfg.encap_rcv = vxlan_rcv; + tunnel_cfg.encap_err_lookup = vxlan_err_lookup; tunnel_cfg.encap_destroy = NULL; tunnel_cfg.gro_receive = vxlan_gro_receive; tunnel_cfg.gro_complete = vxlan_gro_complete; @@ -2963,11 +3142,17 @@ static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6) struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id); struct vxlan_sock *vs = NULL; struct vxlan_dev_node *node; + int l3mdev_index = 0; + + if (vxlan->cfg.remote_ifindex) + l3mdev_index = l3mdev_master_upper_ifindex_by_index( + vxlan->net, vxlan->cfg.remote_ifindex); if (!vxlan->cfg.no_share) { spin_lock(&vn->sock_lock); vs = vxlan_find_sock(vxlan->net, ipv6 ? 
AF_INET6 : AF_INET, - vxlan->cfg.dst_port, vxlan->cfg.flags); + vxlan->cfg.dst_port, vxlan->cfg.flags, + l3mdev_index); if (vs && !refcount_inc_not_zero(&vs->refcnt)) { spin_unlock(&vn->sock_lock); return -EBUSY; @@ -2976,7 +3161,8 @@ static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6) } if (!vs) vs = vxlan_socket_create(vxlan->net, ipv6, - vxlan->cfg.dst_port, vxlan->cfg.flags); + vxlan->cfg.dst_port, vxlan->cfg.flags, + l3mdev_index); if (IS_ERR(vs)) return PTR_ERR(vs); #if IS_ENABLED(CONFIG_IPV6) @@ -3293,7 +3479,8 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev, /* notify default fdb entry */ if (f) - vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH); + vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH, + true); list_add(&vxlan->next, &vn->vxlan_list); return 0; @@ -3304,7 +3491,7 @@ errout: * destroy the entry by hand here. */ if (f) - vxlan_fdb_destroy(vxlan, f, false); + vxlan_fdb_destroy(vxlan, f, false, false); if (unregister) unregister_netdevice(dev); return err; @@ -3394,11 +3581,8 @@ static int vxlan_nl2conf(struct nlattr *tb[], struct nlattr *data[], conf->flags |= VXLAN_F_LEARN; } - if (data[IFLA_VXLAN_AGEING]) { - if (changelink) - return -EOPNOTSUPP; + if (data[IFLA_VXLAN_AGEING]) conf->age_interval = nla_get_u32(data[IFLA_VXLAN_AGEING]); - } if (data[IFLA_VXLAN_PROXY]) { if (changelink) @@ -3517,6 +3701,9 @@ static int vxlan_nl2conf(struct nlattr *tb[], struct nlattr *data[], conf->mtu = nla_get_u32(tb[IFLA_MTU]); } + if (data[IFLA_VXLAN_DF]) + conf->df = nla_get_u8(data[IFLA_VXLAN_DF]); + return 0; } @@ -3540,6 +3727,7 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], { struct vxlan_dev *vxlan = netdev_priv(dev); struct vxlan_rdst *dst = &vxlan->default_dst; + unsigned long old_age_interval; struct vxlan_rdst old_dst; struct vxlan_config conf; int err; @@ -3549,12 +3737,16 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], if (err) return err; + old_age_interval = vxlan->cfg.age_interval; memcpy(&old_dst, dst, sizeof(struct vxlan_rdst)); err = vxlan_dev_configure(vxlan->net, dev, &conf, true, extack); if (err) return err; + if (old_age_interval != vxlan->cfg.age_interval) + mod_timer(&vxlan->age_timer, jiffies); + /* handle default dst entry */ if (!vxlan_addr_equal(&dst->remote_ip, &old_dst.remote_ip)) { spin_lock_bh(&vxlan->hash_lock); @@ -3564,7 +3756,8 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], vxlan->cfg.dst_port, old_dst.remote_vni, old_dst.remote_vni, - old_dst.remote_ifindex, 0); + old_dst.remote_ifindex, + true); if (!vxlan_addr_any(&dst->remote_ip)) { err = vxlan_fdb_update(vxlan, all_zeros_mac, @@ -3575,7 +3768,7 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[], dst->remote_vni, dst->remote_vni, dst->remote_ifindex, - NTF_SELF); + NTF_SELF, true); if (err) { spin_unlock_bh(&vxlan->hash_lock); return err; @@ -3608,6 +3801,7 @@ static size_t vxlan_get_size(const struct net_device *dev) nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_TTL */ nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_TTL_INHERIT */ nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_TOS */ + nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_DF */ nla_total_size(sizeof(__be32)) + /* IFLA_VXLAN_LABEL */ nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_LEARNING */ nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_PROXY */ @@ -3674,6 +3868,7 @@ static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev) nla_put_u8(skb, 
IFLA_VXLAN_TTL_INHERIT, !!(vxlan->cfg.flags & VXLAN_F_TTL_INHERIT)) || nla_put_u8(skb, IFLA_VXLAN_TOS, vxlan->cfg.tos) || + nla_put_u8(skb, IFLA_VXLAN_DF, vxlan->cfg.df) || nla_put_be32(skb, IFLA_VXLAN_LABEL, vxlan->cfg.label) || nla_put_u8(skb, IFLA_VXLAN_LEARNING, !!(vxlan->cfg.flags & VXLAN_F_LEARN)) || @@ -3756,7 +3951,7 @@ struct net_device *vxlan_dev_create(struct net *net, const char *name, memset(&tb, 0, sizeof(tb)); dev = rtnl_create_link(net, name, name_assign_type, - &vxlan_link_ops, tb); + &vxlan_link_ops, tb, NULL); if (IS_ERR(dev)) return dev; @@ -3851,18 +4046,89 @@ out: spin_unlock_bh(&vxlan->hash_lock); } +static int +vxlan_fdb_external_learn_add(struct net_device *dev, + struct switchdev_notifier_vxlan_fdb_info *fdb_info) +{ + struct vxlan_dev *vxlan = netdev_priv(dev); + int err; + + spin_lock_bh(&vxlan->hash_lock); + err = vxlan_fdb_update(vxlan, fdb_info->eth_addr, &fdb_info->remote_ip, + NUD_REACHABLE, + NLM_F_CREATE | NLM_F_REPLACE, + fdb_info->remote_port, + fdb_info->vni, + fdb_info->remote_vni, + fdb_info->remote_ifindex, + NTF_USE | NTF_SELF | NTF_EXT_LEARNED, + false); + spin_unlock_bh(&vxlan->hash_lock); + + return err; +} + +static int +vxlan_fdb_external_learn_del(struct net_device *dev, + struct switchdev_notifier_vxlan_fdb_info *fdb_info) +{ + struct vxlan_dev *vxlan = netdev_priv(dev); + struct vxlan_fdb *f; + int err = 0; + + spin_lock_bh(&vxlan->hash_lock); + + f = vxlan_find_mac(vxlan, fdb_info->eth_addr, fdb_info->vni); + if (!f) + err = -ENOENT; + else if (f->flags & NTF_EXT_LEARNED) + err = __vxlan_fdb_delete(vxlan, fdb_info->eth_addr, + fdb_info->remote_ip, + fdb_info->remote_port, + fdb_info->vni, + fdb_info->remote_vni, + fdb_info->remote_ifindex, + false); + + spin_unlock_bh(&vxlan->hash_lock); + + return err; +} + static int vxlan_switchdev_event(struct notifier_block *unused, unsigned long event, void *ptr) { struct net_device *dev = switchdev_notifier_info_to_dev(ptr); + struct switchdev_notifier_vxlan_fdb_info *fdb_info; + int err = 0; switch (event) { case SWITCHDEV_VXLAN_FDB_OFFLOADED: vxlan_fdb_offloaded_set(dev, ptr); break; + case SWITCHDEV_VXLAN_FDB_ADD_TO_BRIDGE: + fdb_info = ptr; + err = vxlan_fdb_external_learn_add(dev, fdb_info); + if (err) { + err = notifier_from_errno(err); + break; + } + fdb_info->offloaded = true; + vxlan_fdb_offloaded_set(dev, fdb_info); + break; + case SWITCHDEV_VXLAN_FDB_DEL_TO_BRIDGE: + fdb_info = ptr; + err = vxlan_fdb_external_learn_del(dev, fdb_info); + if (err) { + err = notifier_from_errno(err); + break; + } + fdb_info->offloaded = false; + vxlan_fdb_offloaded_set(dev, fdb_info); + break; } - return 0; + return err; } static struct notifier_block vxlan_switchdev_notifier_block __read_mostly = { diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c index 4d6409605207..7a42336c8af8 100644 --- a/drivers/net/wan/fsl_ucc_hdlc.c +++ b/drivers/net/wan/fsl_ucc_hdlc.c @@ -391,6 +391,7 @@ static netdev_tx_t ucc_hdlc_tx(struct sk_buff *skb, struct net_device *dev) dev_kfree_skb(skb); return -ENOMEM; } + netdev_sent_queue(dev, skb->len); spin_lock_irqsave(&priv->lock, flags); /* Start from the next BD that should be filled */ @@ -447,6 +448,8 @@ static int hdlc_tx_done(struct ucc_hdlc_private *priv) { /* Start from the next BD that should be filled */ struct net_device *dev = priv->ndev; + unsigned int bytes_sent = 0; + int howmany = 0; struct qe_bd *bd; /* BD pointer */ u16 bd_status; int tx_restart = 0; @@ -474,6 +477,8 @@ static int hdlc_tx_done(struct ucc_hdlc_private *priv) skb = 
priv->tx_skbuff[priv->skb_dirtytx]; if (!skb) break; + howmany++; + bytes_sent += skb->len; dev->stats.tx_packets++; memset(priv->tx_buffer + (be32_to_cpu(bd->buf) - priv->dma_tx_addr), @@ -501,6 +506,7 @@ static int hdlc_tx_done(struct ucc_hdlc_private *priv) if (tx_restart) hdlc_tx_restart(priv); + netdev_completed_queue(dev, howmany, bytes_sent); return 0; } @@ -721,6 +727,7 @@ static int uhdlc_open(struct net_device *dev) priv->hdlc_busy = 1; netif_device_attach(priv->ndev); napi_enable(&priv->napi); + netdev_reset_queue(dev); netif_start_queue(dev); hdlc_open(dev); } @@ -812,6 +819,7 @@ static int uhdlc_close(struct net_device *dev) free_irq(priv->ut_info->uf_info.irq, priv); netif_stop_queue(dev); + netdev_reset_queue(dev); priv->hdlc_busy = 0; return 0; diff --git a/drivers/net/wireless/Kconfig b/drivers/net/wireless/Kconfig index 166920ae23f8..8c456a66ac3b 100644 --- a/drivers/net/wireless/Kconfig +++ b/drivers/net/wireless/Kconfig @@ -114,4 +114,11 @@ config USB_NET_RNDIS_WLAN If you choose to build a module, it'll be called rndis_wlan. +config VIRT_WIFI + tristate "Wifi wrapper for ethernet drivers" + depends on CFG80211 + ---help--- + This option adds support for ethernet connections to appear as if they + are wifi connections through a special rtnetlink device. + endif # WLAN diff --git a/drivers/net/wireless/Makefile b/drivers/net/wireless/Makefile index 7fc96306712a..6cfe74515c95 100644 --- a/drivers/net/wireless/Makefile +++ b/drivers/net/wireless/Makefile @@ -27,3 +27,5 @@ obj-$(CONFIG_PCMCIA_WL3501) += wl3501_cs.o obj-$(CONFIG_USB_NET_RNDIS_WLAN) += rndis_wlan.o obj-$(CONFIG_MAC80211_HWSIM) += mac80211_hwsim.o + +obj-$(CONFIG_VIRT_WIFI) += virt_wifi.o diff --git a/drivers/net/wireless/ath/ath10k/Kconfig b/drivers/net/wireless/ath/ath10k/Kconfig index e1ad6b9166a6..a7fb5441ced4 100644 --- a/drivers/net/wireless/ath/ath10k/Kconfig +++ b/drivers/net/wireless/ath/ath10k/Kconfig @@ -47,8 +47,7 @@ config ATH10K_SNOC select QCOM_QMI_HELPERS ---help--- This module adds support for integrated WCN3990 chip connected - to system NOC(SNOC). Currently work in progress and will not - fully work. + to system NOC(SNOC). 
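A note on the new VIRT_WIFI driver introduced above: per its Kconfig help text it is stacked on an existing ethernet device through rtnetlink rather than probed from hardware. With iproute2 that is typically done as `ip link add link eth0 name wlan0 type virt_wifi` — the device names here are illustrative, and the exact invocation depends on the iproute2 version in use.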
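The fsl_ucc_hdlc hunks above add Byte Queue Limits (BQL) accounting: bytes are registered when a frame is queued for transmit, credited back when the hardware completes it, and the counters are zeroed whenever the queue is (re)started or torn down, so BQL never sees phantom in-flight bytes. A minimal sketch of the same pattern for a hypothetical driver (the foo_* names are placeholders; locking and the descriptor ring are elided):

	#include <linux/netdevice.h>

	/* Transmit path: account the bytes before the skb is handed to
	 * the hardware ring.
	 */
	static netdev_tx_t foo_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		netdev_sent_queue(dev, skb->len);
		/* ... place skb on a TX descriptor and kick the hardware ... */
		return NETDEV_TX_OK;
	}

	/* Completion path: credit back what the hardware has finished. */
	static void foo_tx_done(struct net_device *dev)
	{
		unsigned int pkts = 0, bytes = 0;

		/* ... walk completed descriptors, summing pkts and bytes ... */
		netdev_completed_queue(dev, pkts, bytes);
	}

	/* Open (and close) must reset the accounting, or stale in-flight
	 * byte counts from a previous run would throttle the fresh queue.
	 */
	static int foo_open(struct net_device *dev)
	{
		netdev_reset_queue(dev);
		netif_start_queue(dev);
		return 0;
	}

The patch follows exactly this shape: netdev_sent_queue() in ucc_hdlc_tx(), the howmany/bytes_sent totals fed to netdev_completed_queue() in hdlc_tx_done(), and netdev_reset_queue() in both uhdlc_open() and uhdlc_close().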
config ATH10K_DEBUG bool "Atheros ath10k debugging" diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c index d210b0ed59be..399b501f3c3c 100644 --- a/drivers/net/wireless/ath/ath10k/core.c +++ b/drivers/net/wireless/ath/ath10k/core.c @@ -561,6 +561,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = { .hw_ops = &wcn3990_ops, .decap_align_bytes = 1, .num_peers = TARGET_HL_10_TLV_NUM_PEERS, + .n_cipher_suites = 8, .ast_skid_limit = TARGET_HL_10_TLV_AST_SKID_LIMIT, .num_wds_entries = TARGET_HL_10_TLV_NUM_WDS_ENTRIES, .target_64bit = true, @@ -594,6 +595,7 @@ static const char *const ath10k_core_fw_feature_str[] = { [ATH10K_FW_FEATURE_NO_PS] = "no-ps", [ATH10K_FW_FEATURE_MGMT_TX_BY_REF] = "mgmt-tx-by-reference", [ATH10K_FW_FEATURE_NON_BMI] = "non-bmi", + [ATH10K_FW_FEATURE_SINGLE_CHAN_INFO_PER_CHANNEL] = "single-chan-info-per-channel", }; static unsigned int ath10k_core_get_fw_feature_str(char *buf, @@ -2183,6 +2185,8 @@ static void ath10k_core_restart(struct work_struct *work) if (ret) ath10k_warn(ar, "failed to send firmware crash dump via devcoredump: %d", ret); + + complete(&ar->driver_recovery); } static void ath10k_core_set_coverage_class_work(struct work_struct *work) @@ -3074,6 +3078,7 @@ struct ath10k *ath10k_core_create(size_t priv_size, struct device *dev, init_completion(&ar->scan.completed); init_completion(&ar->scan.on_channel); init_completion(&ar->target_suspend); + init_completion(&ar->driver_recovery); init_completion(&ar->wow.wakeup_completed); init_completion(&ar->install_key_done); diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h index 042418097cf9..46e9c8c97a4d 100644 --- a/drivers/net/wireless/ath/ath10k/core.h +++ b/drivers/net/wireless/ath/ath10k/core.h @@ -474,6 +474,7 @@ struct ath10k_htt_data_stats { u64 bw[ATH10K_COUNTER_TYPE_MAX][ATH10K_BW_NUM]; u64 nss[ATH10K_COUNTER_TYPE_MAX][ATH10K_NSS_NUM]; u64 gi[ATH10K_COUNTER_TYPE_MAX][ATH10K_GI_NUM]; + u64 rate_table[ATH10K_COUNTER_TYPE_MAX][ATH10K_RATE_TABLE_NUM]; }; struct ath10k_htt_tx_stats { @@ -493,6 +494,7 @@ struct ath10k_sta { u32 smps; u16 peer_id; struct rate_info txrate; + struct ieee80211_tx_info tx_info; struct work_struct update_wk; u64 rx_duration; @@ -760,6 +762,9 @@ enum ath10k_fw_features { /* Firmware load is done externally, not by bmi */ ATH10K_FW_FEATURE_NON_BMI = 19, + /* Firmware sends only one chan_info event per channel */ + ATH10K_FW_FEATURE_SINGLE_CHAN_INFO_PER_CHANNEL = 20, + /* keep last */ ATH10K_FW_FEATURE_COUNT, }; @@ -960,6 +965,7 @@ struct ath10k { } hif; struct completion target_suspend; + struct completion driver_recovery; const struct ath10k_hw_regs *regs; const struct ath10k_hw_ce_regs *hw_ce_regs; diff --git a/drivers/net/wireless/ath/ath10k/coredump.c b/drivers/net/wireless/ath/ath10k/coredump.c index 4d28063052fe..eadae2f9206b 100644 --- a/drivers/net/wireless/ath/ath10k/coredump.c +++ b/drivers/net/wireless/ath/ath10k/coredump.c @@ -867,9 +867,105 @@ static const struct ath10k_mem_region qca9984_hw10_mem_regions[] = { }, }; +static const struct ath10k_mem_section ipq4019_soc_reg_range[] = { + {0x080000, 0x080004}, + {0x080020, 0x080024}, + {0x080028, 0x080050}, + {0x0800d4, 0x0800ec}, + {0x08010c, 0x080118}, + {0x080284, 0x080290}, + {0x0802a8, 0x0802b8}, + {0x0802dc, 0x08030c}, + {0x082000, 0x083fff} +}; + +static const struct ath10k_mem_region qca4019_hw10_mem_regions[] = { + { + .type = ATH10K_MEM_REGION_TYPE_DRAM, + .start = 0x400000, + .len = 0x68000, + .name = "DRAM", + 
.section_table = { + .sections = NULL, + .size = 0, + }, + }, + { + .type = ATH10K_MEM_REGION_TYPE_REG, + .start = 0xC0000, + .len = 0x40000, + .name = "SRAM", + .section_table = { + .sections = NULL, + .size = 0, + }, + }, + { + .type = ATH10K_MEM_REGION_TYPE_REG, + .start = 0x98000, + .len = 0x50000, + .name = "IRAM", + .section_table = { + .sections = NULL, + .size = 0, + }, + }, + { + .type = ATH10K_MEM_REGION_TYPE_IOREG, + .start = 0x30000, + .len = 0x7000, + .name = "APB REG 1", + .section_table = { + .sections = NULL, + .size = 0, + }, + }, + { + .type = ATH10K_MEM_REGION_TYPE_IOREG, + .start = 0x3f000, + .len = 0x3000, + .name = "APB REG 2", + .section_table = { + .sections = NULL, + .size = 0, + }, + }, + { + .type = ATH10K_MEM_REGION_TYPE_IOREG, + .start = 0x43000, + .len = 0x3000, + .name = "WIFI REG", + .section_table = { + .sections = NULL, + .size = 0, + }, + }, + { + .type = ATH10K_MEM_REGION_TYPE_IOREG, + .start = 0x4A000, + .len = 0x5000, + .name = "CE REG", + .section_table = { + .sections = NULL, + .size = 0, + }, + }, + { + .type = ATH10K_MEM_REGION_TYPE_REG, + .start = 0x080000, + .len = 0x083fff - 0x080000, + .name = "REG_TOTAL", + .section_table = { + .sections = ipq4019_soc_reg_range, + .size = ARRAY_SIZE(ipq4019_soc_reg_range), + }, + }, +}; + static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { { .hw_id = QCA6174_HW_1_0_VERSION, + .hw_rev = ATH10K_HW_QCA6174, .region_table = { .regions = qca6174_hw10_mem_regions, .size = ARRAY_SIZE(qca6174_hw10_mem_regions), @@ -877,6 +973,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA6174_HW_1_1_VERSION, + .hw_rev = ATH10K_HW_QCA6174, .region_table = { .regions = qca6174_hw10_mem_regions, .size = ARRAY_SIZE(qca6174_hw10_mem_regions), @@ -884,6 +981,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA6174_HW_1_3_VERSION, + .hw_rev = ATH10K_HW_QCA6174, .region_table = { .regions = qca6174_hw10_mem_regions, .size = ARRAY_SIZE(qca6174_hw10_mem_regions), @@ -891,6 +989,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA6174_HW_2_1_VERSION, + .hw_rev = ATH10K_HW_QCA6174, .region_table = { .regions = qca6174_hw21_mem_regions, .size = ARRAY_SIZE(qca6174_hw21_mem_regions), @@ -898,6 +997,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA6174_HW_3_0_VERSION, + .hw_rev = ATH10K_HW_QCA6174, .region_table = { .regions = qca6174_hw30_mem_regions, .size = ARRAY_SIZE(qca6174_hw30_mem_regions), @@ -905,6 +1005,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA6174_HW_3_2_VERSION, + .hw_rev = ATH10K_HW_QCA6174, .region_table = { .regions = qca6174_hw30_mem_regions, .size = ARRAY_SIZE(qca6174_hw30_mem_regions), @@ -912,6 +1013,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA9377_HW_1_1_DEV_VERSION, + .hw_rev = ATH10K_HW_QCA9377, .region_table = { .regions = qca6174_hw30_mem_regions, .size = ARRAY_SIZE(qca6174_hw30_mem_regions), @@ -919,6 +1021,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA988X_HW_2_0_VERSION, + .hw_rev = ATH10K_HW_QCA988X, .region_table = { .regions = qca988x_hw20_mem_regions, .size = ARRAY_SIZE(qca988x_hw20_mem_regions), @@ -926,6 +1029,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA9984_HW_1_0_DEV_VERSION, + .hw_rev = ATH10K_HW_QCA9984, .region_table = { .regions = qca9984_hw10_mem_regions, .size = ARRAY_SIZE(qca9984_hw10_mem_regions), @@ 
-933,6 +1037,7 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA9888_HW_2_0_DEV_VERSION, + .hw_rev = ATH10K_HW_QCA9888, .region_table = { .regions = qca9984_hw10_mem_regions, .size = ARRAY_SIZE(qca9984_hw10_mem_regions), @@ -940,12 +1045,20 @@ static const struct ath10k_hw_mem_layout hw_mem_layouts[] = { }, { .hw_id = QCA99X0_HW_2_0_DEV_VERSION, + .hw_rev = ATH10K_HW_QCA99X0, .region_table = { .regions = qca99x0_hw20_mem_regions, .size = ARRAY_SIZE(qca99x0_hw20_mem_regions), }, }, - + { + .hw_id = QCA4019_HW_1_0_DEV_VERSION, + .hw_rev = ATH10K_HW_QCA4019, + .region_table = { + .regions = qca4019_hw10_mem_regions, + .size = ARRAY_SIZE(qca4019_hw10_mem_regions), + }, + }, }; static u32 ath10k_coredump_get_ramdump_size(struct ath10k *ar) @@ -987,7 +1100,8 @@ const struct ath10k_hw_mem_layout *ath10k_coredump_get_mem_layout(struct ath10k return NULL; for (i = 0; i < ARRAY_SIZE(hw_mem_layouts); i++) { - if (ar->target_version == hw_mem_layouts[i].hw_id) + if (ar->target_version == hw_mem_layouts[i].hw_id && + ar->hw_rev == hw_mem_layouts[i].hw_rev) return &hw_mem_layouts[i]; } diff --git a/drivers/net/wireless/ath/ath10k/coredump.h b/drivers/net/wireless/ath/ath10k/coredump.h index 3baaf9d2cbcd..5dac653e1649 100644 --- a/drivers/net/wireless/ath/ath10k/coredump.h +++ b/drivers/net/wireless/ath/ath10k/coredump.h @@ -165,6 +165,7 @@ struct ath10k_mem_region { */ struct ath10k_hw_mem_layout { u32 hw_id; + u32 hw_rev; struct { const struct ath10k_mem_region *regions; diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c index b09cdc699c69..4778a455d81a 100644 --- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c +++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c @@ -71,7 +71,7 @@ void ath10k_sta_update_rx_tid_stats_ampdu(struct ath10k *ar, u16 peer_id, u8 tid spin_lock_bh(&ar->data_lock); peer = ath10k_peer_find_by_id(ar, peer_id); - if (!peer) + if (!peer || !peer->sta) goto out; arsta = (struct ath10k_sta *)peer->sta->drv_priv; @@ -665,7 +665,7 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file, "retry", "ampdu"}; const char *str[ATH10K_COUNTER_TYPE_MAX] = {"bytes", "packets"}; int len = 0, i, j, k, retval = 0; - const int size = 2 * 4096; + const int size = 16 * 4096; char *buf; buf = kzalloc(size, GFP_KERNEL); @@ -719,6 +719,16 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file, len += scnprintf(buf + len, size - len, "%llu ", stats->legacy[j][i]); len += scnprintf(buf + len, size - len, "\n"); + len += scnprintf(buf + len, size - len, + " Rate table %s (1,2 ... 
Mbps)\n ", + str[j]); + for (i = 0; i < ATH10K_RATE_TABLE_NUM; i++) { + len += scnprintf(buf + len, size - len, "%llu ", + stats->rate_table[j][i]); + if (!((i + 1) % 8)) + len += + scnprintf(buf + len, size - len, "\n "); + } } } diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c index ffec98f7be50..f42bac204ef8 100644 --- a/drivers/net/wireless/ath/ath10k/htt_rx.c +++ b/drivers/net/wireless/ath/ath10k/htt_rx.c @@ -469,6 +469,166 @@ static struct sk_buff *ath10k_htt_rx_pop_paddr(struct ath10k_htt *htt, return msdu; } +static inline void ath10k_htt_append_frag_list(struct sk_buff *skb_head, + struct sk_buff *frag_list, + unsigned int frag_len) +{ + skb_shinfo(skb_head)->frag_list = frag_list; + skb_head->data_len = frag_len; + skb_head->len += skb_head->data_len; +} + +static int ath10k_htt_rx_handle_amsdu_mon_32(struct ath10k_htt *htt, + struct sk_buff *msdu, + struct htt_rx_in_ord_msdu_desc **msdu_desc) +{ + struct ath10k *ar = htt->ar; + u32 paddr; + struct sk_buff *frag_buf; + struct sk_buff *prev_frag_buf; + u8 last_frag; + struct htt_rx_in_ord_msdu_desc *ind_desc = *msdu_desc; + struct htt_rx_desc *rxd; + int amsdu_len = __le16_to_cpu(ind_desc->msdu_len); + + rxd = (void *)msdu->data; + trace_ath10k_htt_rx_desc(ar, rxd, sizeof(*rxd)); + + skb_put(msdu, sizeof(struct htt_rx_desc)); + skb_pull(msdu, sizeof(struct htt_rx_desc)); + skb_put(msdu, min(amsdu_len, HTT_RX_MSDU_SIZE)); + amsdu_len -= msdu->len; + + last_frag = ind_desc->reserved; + if (last_frag) { + if (amsdu_len) { + ath10k_warn(ar, "invalid amsdu len %u, left %d", + __le16_to_cpu(ind_desc->msdu_len), + amsdu_len); + } + return 0; + } + + ind_desc++; + paddr = __le32_to_cpu(ind_desc->msdu_paddr); + frag_buf = ath10k_htt_rx_pop_paddr(htt, paddr); + if (!frag_buf) { + ath10k_warn(ar, "failed to pop frag-1 paddr: 0x%x", paddr); + return -ENOENT; + } + + skb_put(frag_buf, min(amsdu_len, HTT_RX_BUF_SIZE)); + ath10k_htt_append_frag_list(msdu, frag_buf, amsdu_len); + + amsdu_len -= frag_buf->len; + prev_frag_buf = frag_buf; + last_frag = ind_desc->reserved; + while (!last_frag) { + ind_desc++; + paddr = __le32_to_cpu(ind_desc->msdu_paddr); + frag_buf = ath10k_htt_rx_pop_paddr(htt, paddr); + if (!frag_buf) { + ath10k_warn(ar, "failed to pop frag-n paddr: 0x%x", + paddr); + prev_frag_buf->next = NULL; + return -ENOENT; + } + + skb_put(frag_buf, min(amsdu_len, HTT_RX_BUF_SIZE)); + last_frag = ind_desc->reserved; + amsdu_len -= frag_buf->len; + + prev_frag_buf->next = frag_buf; + prev_frag_buf = frag_buf; + } + + if (amsdu_len) { + ath10k_warn(ar, "invalid amsdu len %u, left %d", + __le16_to_cpu(ind_desc->msdu_len), amsdu_len); + } + + *msdu_desc = ind_desc; + + prev_frag_buf->next = NULL; + return 0; +} + +static int +ath10k_htt_rx_handle_amsdu_mon_64(struct ath10k_htt *htt, + struct sk_buff *msdu, + struct htt_rx_in_ord_msdu_desc_ext **msdu_desc) +{ + struct ath10k *ar = htt->ar; + u64 paddr; + struct sk_buff *frag_buf; + struct sk_buff *prev_frag_buf; + u8 last_frag; + struct htt_rx_in_ord_msdu_desc_ext *ind_desc = *msdu_desc; + struct htt_rx_desc *rxd; + int amsdu_len = __le16_to_cpu(ind_desc->msdu_len); + + rxd = (void *)msdu->data; + trace_ath10k_htt_rx_desc(ar, rxd, sizeof(*rxd)); + + skb_put(msdu, sizeof(struct htt_rx_desc)); + skb_pull(msdu, sizeof(struct htt_rx_desc)); + skb_put(msdu, min(amsdu_len, HTT_RX_MSDU_SIZE)); + amsdu_len -= msdu->len; + + last_frag = ind_desc->reserved; + if (last_frag) { + if (amsdu_len) { + ath10k_warn(ar, "invalid amsdu len %u, left %d", + 
__le16_to_cpu(ind_desc->msdu_len), + amsdu_len); + } + return 0; + } + + ind_desc++; + paddr = __le64_to_cpu(ind_desc->msdu_paddr); + frag_buf = ath10k_htt_rx_pop_paddr(htt, paddr); + if (!frag_buf) { + ath10k_warn(ar, "failed to pop frag-1 paddr: 0x%llx", paddr); + return -ENOENT; + } + + skb_put(frag_buf, min(amsdu_len, HTT_RX_BUF_SIZE)); + ath10k_htt_append_frag_list(msdu, frag_buf, amsdu_len); + + amsdu_len -= frag_buf->len; + prev_frag_buf = frag_buf; + last_frag = ind_desc->reserved; + while (!last_frag) { + ind_desc++; + paddr = __le64_to_cpu(ind_desc->msdu_paddr); + frag_buf = ath10k_htt_rx_pop_paddr(htt, paddr); + if (!frag_buf) { + ath10k_warn(ar, "failed to pop frag-n paddr: 0x%llx", + paddr); + prev_frag_buf->next = NULL; + return -ENOENT; + } + + skb_put(frag_buf, min(amsdu_len, HTT_RX_BUF_SIZE)); + last_frag = ind_desc->reserved; + amsdu_len -= frag_buf->len; + + prev_frag_buf->next = frag_buf; + prev_frag_buf = frag_buf; + } + + if (amsdu_len) { + ath10k_warn(ar, "invalid amsdu len %u, left %d", + __le16_to_cpu(ind_desc->msdu_len), amsdu_len); + } + + *msdu_desc = ind_desc; + + prev_frag_buf->next = NULL; + return 0; +} + static int ath10k_htt_rx_pop_paddr32_list(struct ath10k_htt *htt, struct htt_rx_in_ord_ind *ev, struct sk_buff_head *list) @@ -477,7 +637,7 @@ static int ath10k_htt_rx_pop_paddr32_list(struct ath10k_htt *htt, struct htt_rx_in_ord_msdu_desc *msdu_desc = ev->msdu_descs32; struct htt_rx_desc *rxd; struct sk_buff *msdu; - int msdu_count; + int msdu_count, ret; bool is_offload; u32 paddr; @@ -495,6 +655,18 @@ static int ath10k_htt_rx_pop_paddr32_list(struct ath10k_htt *htt, return -ENOENT; } + if (!is_offload && ar->monitor_arvif) { + ret = ath10k_htt_rx_handle_amsdu_mon_32(htt, msdu, + &msdu_desc); + if (ret) { + __skb_queue_purge(list); + return ret; + } + __skb_queue_tail(list, msdu); + msdu_desc++; + continue; + } + __skb_queue_tail(list, msdu); if (!is_offload) { @@ -527,7 +699,7 @@ static int ath10k_htt_rx_pop_paddr64_list(struct ath10k_htt *htt, struct htt_rx_in_ord_msdu_desc_ext *msdu_desc = ev->msdu_descs64; struct htt_rx_desc *rxd; struct sk_buff *msdu; - int msdu_count; + int msdu_count, ret; bool is_offload; u64 paddr; @@ -544,6 +716,18 @@ static int ath10k_htt_rx_pop_paddr64_list(struct ath10k_htt *htt, return -ENOENT; } + if (!is_offload && ar->monitor_arvif) { + ret = ath10k_htt_rx_handle_amsdu_mon_64(htt, msdu, + &msdu_desc); + if (ret) { + __skb_queue_purge(list); + return ret; + } + __skb_queue_tail(list, msdu); + msdu_desc++; + continue; + } + __skb_queue_tail(list, msdu); if (!is_offload) { @@ -1159,7 +1343,8 @@ static void ath10k_htt_rx_h_undecap_raw(struct ath10k *ar, struct sk_buff *msdu, struct ieee80211_rx_status *status, enum htt_rx_mpdu_encrypt_type enctype, - bool is_decrypted) + bool is_decrypted, + const u8 first_hdr[64]) { struct ieee80211_hdr *hdr; struct htt_rx_desc *rxd; @@ -1167,6 +1352,9 @@ static void ath10k_htt_rx_h_undecap_raw(struct ath10k *ar, size_t crypto_len; bool is_first; bool is_last; + bool msdu_limit_err; + int bytes_aligned = ar->hw_params.decap_align_bytes; + u8 *qos; rxd = (void *)msdu->data - sizeof(*rxd); is_first = !!(rxd->msdu_end.common.info0 & @@ -1184,16 +1372,45 @@ static void ath10k_htt_rx_h_undecap_raw(struct ath10k *ar, * [FCS] <-- at end, needs to be trimmed */ + /* Some hardwares(QCA99x0 variants) limit number of msdus in a-msdu when + * deaggregate, so that unwanted MSDU-deaggregation is avoided for + * error packets. 
If limit exceeds, hw sends all remaining MSDUs as + * a single last MSDU with this msdu limit error set. + */ + msdu_limit_err = ath10k_rx_desc_msdu_limit_error(&ar->hw_params, rxd); + + /* If MSDU limit error happens, then don't warn on, the partial raw MSDU + * without first MSDU is expected in that case, and handled later here. + */ /* This probably shouldn't happen but warn just in case */ - if (WARN_ON_ONCE(!is_first)) + if (WARN_ON_ONCE(!is_first && !msdu_limit_err)) return; /* This probably shouldn't happen but warn just in case */ - if (WARN_ON_ONCE(!(is_first && is_last))) + if (WARN_ON_ONCE(!(is_first && is_last) && !msdu_limit_err)) return; skb_trim(msdu, msdu->len - FCS_LEN); + /* Push original 80211 header */ + if (unlikely(msdu_limit_err)) { + hdr = (struct ieee80211_hdr *)first_hdr; + hdr_len = ieee80211_hdrlen(hdr->frame_control); + crypto_len = ath10k_htt_rx_crypto_param_len(ar, enctype); + + if (ieee80211_is_data_qos(hdr->frame_control)) { + qos = ieee80211_get_qos_ctl(hdr); + qos[0] |= IEEE80211_QOS_CTL_A_MSDU_PRESENT; + } + + if (crypto_len) + memcpy(skb_push(msdu, crypto_len), + (void *)hdr + round_up(hdr_len, bytes_aligned), + crypto_len); + + memcpy(skb_push(msdu, hdr_len), hdr, hdr_len); + } + /* In most cases this will be true for sniffed frames. It makes sense * to deliver them as-is without stripping the crypto param. This is * necessary for software based decryption. @@ -1467,7 +1684,7 @@ static void ath10k_htt_rx_h_undecap(struct ath10k *ar, switch (decap) { case RX_MSDU_DECAP_RAW: ath10k_htt_rx_h_undecap_raw(ar, msdu, status, enctype, - is_decrypted); + is_decrypted, first_hdr); break; case RX_MSDU_DECAP_NATIVE_WIFI: ath10k_htt_rx_h_undecap_nwifi(ar, msdu, status, first_hdr, @@ -2627,7 +2844,7 @@ void ath10k_htt_htc_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb) dev_kfree_skb_any(skb); } -static inline int ath10k_get_legacy_rate_idx(struct ath10k *ar, u8 rate) +static inline s8 ath10k_get_legacy_rate_idx(struct ath10k *ar, u8 rate) { static const u8 legacy_rates[] = {1, 2, 5, 11, 6, 9, 12, 18, 24, 36, 48, 54}; @@ -2646,11 +2863,11 @@ static void ath10k_accumulate_per_peer_tx_stats(struct ath10k *ar, struct ath10k_sta *arsta, struct ath10k_per_peer_tx_stats *pstats, - u8 legacy_rate_idx) + s8 legacy_rate_idx) { struct rate_info *txrate = &arsta->txrate; struct ath10k_htt_tx_stats *tx_stats; - int ht_idx, gi, mcs, bw, nss; + int idx, ht_idx, gi, mcs, bw, nss; if (!arsta->tx_stats) return; @@ -2661,6 +2878,8 @@ ath10k_accumulate_per_peer_tx_stats(struct ath10k *ar, mcs = txrate->mcs; bw = txrate->bw; nss = txrate->nss; + idx = mcs * 8 + 8 * 10 * nss; + idx += bw * 2 + gi; #define STATS_OP_FMT(name) tx_stats->stats[ATH10K_STATS_TYPE_##name] @@ -2709,12 +2928,16 @@ ath10k_accumulate_per_peer_tx_stats(struct ath10k *ar, pstats->succ_bytes + pstats->retry_bytes; STATS_OP_FMT(AMPDU).gi[0][gi] += pstats->succ_bytes + pstats->retry_bytes; + STATS_OP_FMT(AMPDU).rate_table[0][idx] += + pstats->succ_bytes + pstats->retry_bytes; STATS_OP_FMT(AMPDU).bw[1][bw] += pstats->succ_pkts + pstats->retry_pkts; STATS_OP_FMT(AMPDU).nss[1][nss] += pstats->succ_pkts + pstats->retry_pkts; STATS_OP_FMT(AMPDU).gi[1][gi] += pstats->succ_pkts + pstats->retry_pkts; + STATS_OP_FMT(AMPDU).rate_table[1][idx] += + pstats->succ_pkts + pstats->retry_pkts; } else { tx_stats->ack_fails += ATH10K_HW_BA_FAIL(pstats->flags); @@ -2743,6 +2966,15 @@ ath10k_accumulate_per_peer_tx_stats(struct ath10k *ar, STATS_OP_FMT(RETRY).bw[1][bw] += pstats->retry_pkts; STATS_OP_FMT(RETRY).nss[1][nss] += 
pstats->retry_pkts; STATS_OP_FMT(RETRY).gi[1][gi] += pstats->retry_pkts; + + if (txrate->flags >= RATE_INFO_FLAGS_MCS) { + STATS_OP_FMT(SUCC).rate_table[0][idx] += pstats->succ_bytes; + STATS_OP_FMT(SUCC).rate_table[1][idx] += pstats->succ_pkts; + STATS_OP_FMT(FAIL).rate_table[0][idx] += pstats->failed_bytes; + STATS_OP_FMT(FAIL).rate_table[1][idx] += pstats->failed_pkts; + STATS_OP_FMT(RETRY).rate_table[0][idx] += pstats->retry_bytes; + STATS_OP_FMT(RETRY).rate_table[1][idx] += pstats->retry_pkts; + } } static void @@ -2751,8 +2983,10 @@ ath10k_update_per_peer_tx_stats(struct ath10k *ar, struct ath10k_per_peer_tx_stats *peer_stats) { struct ath10k_sta *arsta = (struct ath10k_sta *)sta->drv_priv; + struct ieee80211_chanctx_conf *conf = NULL; u8 rate = 0, sgi; s8 rate_idx = 0; + bool skip_auto_rate; struct rate_info txrate; lockdep_assert_held(&ar->data_lock); @@ -2762,6 +2996,13 @@ ath10k_update_per_peer_tx_stats(struct ath10k *ar, txrate.nss = ATH10K_HW_NSS(peer_stats->ratecode); txrate.mcs = ATH10K_HW_MCS_RATE(peer_stats->ratecode); sgi = ATH10K_HW_GI(peer_stats->flags); + skip_auto_rate = ATH10K_FW_SKIPPED_RATE_CTRL(peer_stats->flags); + + /* Firmware's rate control skips broadcast/management frames, + * if host has configure fixed rates and in some other special cases. + */ + if (skip_auto_rate) + return; if (txrate.flags == WMI_RATE_PREAMBLE_VHT && txrate.mcs > 9) { ath10k_warn(ar, "Invalid VHT mcs %hhd peer stats", txrate.mcs); @@ -2776,7 +3017,7 @@ ath10k_update_per_peer_tx_stats(struct ath10k *ar, } memset(&arsta->txrate, 0, sizeof(arsta->txrate)); - + memset(&arsta->tx_info.status, 0, sizeof(arsta->tx_info.status)); if (txrate.flags == WMI_RATE_PREAMBLE_CCK || txrate.flags == WMI_RATE_PREAMBLE_OFDM) { rate = ATH10K_HW_LEGACY_RATE(peer_stats->ratecode); @@ -2795,11 +3036,59 @@ ath10k_update_per_peer_tx_stats(struct ath10k *ar, arsta->txrate.mcs = txrate.mcs; } - if (sgi) - arsta->txrate.flags |= RATE_INFO_FLAGS_SHORT_GI; + switch (txrate.flags) { + case WMI_RATE_PREAMBLE_OFDM: + if (arsta->arvif && arsta->arvif->vif) + conf = rcu_dereference(arsta->arvif->vif->chanctx_conf); + if (conf && conf->def.chan->band == NL80211_BAND_5GHZ) + arsta->tx_info.status.rates[0].idx = rate_idx - 4; + break; + case WMI_RATE_PREAMBLE_CCK: + arsta->tx_info.status.rates[0].idx = rate_idx; + if (sgi) + arsta->tx_info.status.rates[0].flags |= + (IEEE80211_TX_RC_USE_SHORT_PREAMBLE | + IEEE80211_TX_RC_SHORT_GI); + break; + case WMI_RATE_PREAMBLE_HT: + arsta->tx_info.status.rates[0].idx = + txrate.mcs + ((txrate.nss - 1) * 8); + if (sgi) + arsta->tx_info.status.rates[0].flags |= + IEEE80211_TX_RC_SHORT_GI; + arsta->tx_info.status.rates[0].flags |= IEEE80211_TX_RC_MCS; + break; + case WMI_RATE_PREAMBLE_VHT: + ieee80211_rate_set_vht(&arsta->tx_info.status.rates[0], + txrate.mcs, txrate.nss); + if (sgi) + arsta->tx_info.status.rates[0].flags |= + IEEE80211_TX_RC_SHORT_GI; + arsta->tx_info.status.rates[0].flags |= IEEE80211_TX_RC_VHT_MCS; + break; + } arsta->txrate.nss = txrate.nss; arsta->txrate.bw = ath10k_bw_to_mac80211_bw(txrate.bw); + if (sgi) + arsta->txrate.flags |= RATE_INFO_FLAGS_SHORT_GI; + + switch (arsta->txrate.bw) { + case RATE_INFO_BW_40: + arsta->tx_info.status.rates[0].flags |= + IEEE80211_TX_RC_40_MHZ_WIDTH; + break; + case RATE_INFO_BW_80: + arsta->tx_info.status.rates[0].flags |= + IEEE80211_TX_RC_80_MHZ_WIDTH; + break; + } + + if (peer_stats->succ_pkts) { + arsta->tx_info.flags = IEEE80211_TX_STAT_ACK; + arsta->tx_info.status.rates[0].count = 1; + ieee80211_tx_rate_update(ar->hw, sta, 
&arsta->tx_info); + } if (ath10k_debug_is_extd_tx_stats_enabled(ar)) ath10k_accumulate_per_peer_tx_stats(ar, arsta, peer_stats, @@ -2832,7 +3121,7 @@ static void ath10k_htt_fetch_peer_stats(struct ath10k *ar, rcu_read_lock(); spin_lock_bh(&ar->data_lock); peer = ath10k_peer_find_by_id(ar, peer_id); - if (!peer) { + if (!peer || !peer->sta) { ath10k_warn(ar, "Invalid peer id %d peer stats buffer\n", peer_id); goto out; @@ -2885,7 +3174,7 @@ static void ath10k_fetch_10_2_tx_stats(struct ath10k *ar, u8 *data) rcu_read_lock(); spin_lock_bh(&ar->data_lock); peer = ath10k_peer_find_by_id(ar, peer_id); - if (!peer) { + if (!peer || !peer->sta) { ath10k_warn(ar, "Invalid peer id %d in peer stats buffer\n", peer_id); goto out; diff --git a/drivers/net/wireless/ath/ath10k/hw.c b/drivers/net/wireless/ath/ath10k/hw.c index af8ae8117c62..61ecf931ba4d 100644 --- a/drivers/net/wireless/ath/ath10k/hw.c +++ b/drivers/net/wireless/ath/ath10k/hw.c @@ -1119,8 +1119,15 @@ static int ath10k_qca99x0_rx_desc_get_l3_pad_bytes(struct htt_rx_desc *rxd) RX_MSDU_END_INFO1_L3_HDR_PAD); } +static bool ath10k_qca99x0_rx_desc_msdu_limit_error(struct htt_rx_desc *rxd) +{ + return !!(rxd->msdu_end.common.info0 & + __cpu_to_le32(RX_MSDU_END_INFO0_MSDU_LIMIT_ERR)); +} + const struct ath10k_hw_ops qca99x0_ops = { .rx_desc_get_l3_pad_bytes = ath10k_qca99x0_rx_desc_get_l3_pad_bytes, + .rx_desc_get_msdu_limit_error = ath10k_qca99x0_rx_desc_msdu_limit_error, }; const struct ath10k_hw_ops qca6174_ops = { diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h index 1b5da272d18c..e50a8dc5b093 100644 --- a/drivers/net/wireless/ath/ath10k/hw.h +++ b/drivers/net/wireless/ath/ath10k/hw.h @@ -624,6 +624,7 @@ struct ath10k_hw_ops { int (*rx_desc_get_l3_pad_bytes)(struct htt_rx_desc *rxd); void (*set_coverage_class)(struct ath10k *ar, s16 value); int (*enable_pll_clk)(struct ath10k *ar); + bool (*rx_desc_get_msdu_limit_error)(struct htt_rx_desc *rxd); }; extern const struct ath10k_hw_ops qca988x_ops; @@ -642,6 +643,15 @@ ath10k_rx_desc_get_l3_pad_bytes(struct ath10k_hw_params *hw, return 0; } +static inline bool +ath10k_rx_desc_msdu_limit_error(struct ath10k_hw_params *hw, + struct htt_rx_desc *rxd) +{ + if (hw->hw_ops->rx_desc_get_msdu_limit_error) + return hw->hw_ops->rx_desc_get_msdu_limit_error(rxd); + return false; +} + /* Target specific defines for MAIN firmware */ #define TARGET_NUM_VDEVS 8 #define TARGET_NUM_PEER_AST 2 diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c index 7e49342bae38..e49b36752ba2 100644 --- a/drivers/net/wireless/ath/ath10k/mac.c +++ b/drivers/net/wireless/ath/ath10k/mac.c @@ -22,6 +22,7 @@ #include #include #include +#include #include "hif.h" #include "core.h" @@ -4637,11 +4638,44 @@ static int ath10k_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant) return ret; } +static int __ath10k_fetch_bb_timing_dt(struct ath10k *ar, + struct wmi_bb_timing_cfg_arg *bb_timing) +{ + struct device_node *node; + const char *fem_name; + int ret; + + node = ar->dev->of_node; + if (!node) + return -ENOENT; + + ret = of_property_read_string_index(node, "ext-fem-name", 0, &fem_name); + if (ret) + return -ENOENT; + + /* + * If external Front End module used in hardware, then default base band timing + * parameter cannot be used since they were fine tuned for reference hardware, + * so choosing different value suitable for that external FEM. 
+ */ + if (!strcmp("microsemi-lx5586", fem_name)) { + bb_timing->bb_tx_timing = 0x00; + bb_timing->bb_xpa_timing = 0x0101; + } else { + return -ENOENT; + } + + ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot bb_tx_timing 0x%x bb_xpa_timing 0x%x\n", + bb_timing->bb_tx_timing, bb_timing->bb_xpa_timing); + return 0; +} + static int ath10k_start(struct ieee80211_hw *hw) { struct ath10k *ar = hw->priv; u32 param; int ret = 0; + struct wmi_bb_timing_cfg_arg bb_timing = {0}; /* * This makes sense only when restarting hw. It is harmless to call @@ -4796,6 +4830,19 @@ static int ath10k_start(struct ieee80211_hw *hw) clear_bit(ATH10K_FLAG_BTCOEX, &ar->dev_flags); } + if (test_bit(WMI_SERVICE_BB_TIMING_CONFIG_SUPPORT, ar->wmi.svc_map)) { + ret = __ath10k_fetch_bb_timing_dt(ar, &bb_timing); + if (!ret) { + ret = ath10k_wmi_pdev_bb_timing(ar, &bb_timing); + if (ret) { + ath10k_warn(ar, + "failed to set bb timings: %d\n", + ret); + goto err_core_stop; + } + } + } + ar->num_started_vdevs = 0; ath10k_regd_update(ar); @@ -5154,6 +5201,17 @@ static int ath10k_add_interface(struct ieee80211_hw *hw, goto err; } + if (test_bit(WMI_SERVICE_VDEV_DISABLE_4_ADDR_SRC_LRN_SUPPORT, + ar->wmi.svc_map)) { + vdev_param = ar->wmi.vdev_param->disable_4addr_src_lrn; + ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id, vdev_param, + WMI_VDEV_DISABLE_4_ADDR_SRC_LRN); + if (ret && ret != -EOPNOTSUPP) { + ath10k_warn(ar, "failed to disable 4addr src lrn vdev %i: %d\n", + arvif->vdev_id, ret); + } + } + ar->free_vdev_map &= ~(1LL << arvif->vdev_id); spin_lock_bh(&ar->data_lock); list_add(&arvif->list, &ar->arvifs); @@ -5754,30 +5812,6 @@ static int ath10k_mac_tdls_vif_stations_count(struct ieee80211_hw *hw, return data.num_tdls_stations; } -static void ath10k_mac_tdls_vifs_count_iter(void *data, u8 *mac, - struct ieee80211_vif *vif) -{ - struct ath10k_vif *arvif = (void *)vif->drv_priv; - int *num_tdls_vifs = data; - - if (vif->type != NL80211_IFTYPE_STATION) - return; - - if (ath10k_mac_tdls_vif_stations_count(arvif->ar->hw, vif) > 0) - (*num_tdls_vifs)++; -} - -static int ath10k_mac_tdls_vifs_count(struct ieee80211_hw *hw) -{ - int num_tdls_vifs = 0; - - ieee80211_iterate_active_interfaces_atomic(hw, - IEEE80211_IFACE_ITER_NORMAL, - ath10k_mac_tdls_vifs_count_iter, - &num_tdls_vifs); - return num_tdls_vifs; -} - static int ath10k_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_scan_request *hw_req) @@ -6285,7 +6319,6 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, */ enum wmi_peer_type peer_type = WMI_PEER_TYPE_DEFAULT; u32 num_tdls_stations; - u32 num_tdls_vifs; ath10k_dbg(ar, ATH10K_DBG_MAC, "mac vdev %d peer create %pM (new sta) sta %d / %d peer %d / %d\n", @@ -6293,15 +6326,7 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, ar->num_stations + 1, ar->max_num_stations, ar->num_peers + 1, ar->max_num_peers); - if (ath10k_debug_is_extd_tx_stats_enabled(ar)) { - arsta->tx_stats = kzalloc(sizeof(*arsta->tx_stats), - GFP_KERNEL); - if (!arsta->tx_stats) - goto exit; - } - num_tdls_stations = ath10k_mac_tdls_vif_stations_count(hw, vif); - num_tdls_vifs = ath10k_mac_tdls_vifs_count(hw); if (sta->tdls) { if (num_tdls_stations >= ar->max_num_tdls_vdevs) { @@ -6321,12 +6346,22 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, goto exit; } + if (ath10k_debug_is_extd_tx_stats_enabled(ar)) { + arsta->tx_stats = kzalloc(sizeof(*arsta->tx_stats), + GFP_KERNEL); + if (!arsta->tx_stats) { + ret = -ENOMEM; + goto exit; + } + } + ret = ath10k_peer_create(ar, vif, sta, arvif->vdev_id, sta->addr, 
peer_type); if (ret) { ath10k_warn(ar, "failed to add peer %pM for vdev %d when adding a new sta: %i\n", sta->addr, arvif->vdev_id, ret); ath10k_mac_dec_num_stations(arvif, sta); + kfree(arsta->tx_stats); goto exit; } @@ -6339,6 +6374,7 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, spin_unlock_bh(&ar->data_lock); ath10k_peer_delete(ar, arvif->vdev_id, sta->addr); ath10k_mac_dec_num_stations(arvif, sta); + kfree(arsta->tx_stats); ret = -ENOENT; goto exit; } @@ -6359,6 +6395,7 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, ath10k_peer_delete(ar, arvif->vdev_id, sta->addr); ath10k_mac_dec_num_stations(arvif, sta); + kfree(arsta->tx_stats); goto exit; } @@ -6370,6 +6407,7 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, sta->addr, arvif->vdev_id, ret); ath10k_peer_delete(ar, arvif->vdev_id, sta->addr); ath10k_mac_dec_num_stations(arvif, sta); + kfree(arsta->tx_stats); if (num_tdls_stations != 0) goto exit; @@ -6385,9 +6423,6 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, "mac vdev %d peer delete %pM sta %pK (sta gone)\n", arvif->vdev_id, sta->addr, sta); - if (ath10k_debug_is_extd_tx_stats_enabled(ar)) - kfree(arsta->tx_stats); - if (sta->tdls) { ret = ath10k_mac_tdls_peer_update(ar, arvif->vdev_id, sta, @@ -6427,6 +6462,11 @@ static int ath10k_sta_state(struct ieee80211_hw *hw, } spin_unlock_bh(&ar->data_lock); + if (ath10k_debug_is_extd_tx_stats_enabled(ar)) { + kfree(arsta->tx_stats); + arsta->tx_stats = NULL; + } + for (i = 0; i < ARRAY_SIZE(sta->txq); i++) ath10k_mac_txq_unref(ar, sta->txq[i]); @@ -8313,7 +8353,6 @@ static u32 ath10k_mac_wrdd_get_mcc(struct ath10k *ar, union acpi_object *wrdd) static int ath10k_mac_get_wrdd_regulatory(struct ath10k *ar, u16 *rd) { - struct pci_dev __maybe_unused *pdev = to_pci_dev(ar->dev); acpi_handle root_handle; acpi_handle handle; struct acpi_buffer wrdd = {ACPI_ALLOCATE_BUFFER, NULL}; @@ -8321,7 +8360,7 @@ static int ath10k_mac_get_wrdd_regulatory(struct ath10k *ar, u16 *rd) u32 alpha2_code; char alpha2[3]; - root_handle = ACPI_HANDLE(&pdev->dev); + root_handle = ACPI_HANDLE(ar->dev); if (!root_handle) return -EOPNOTSUPP; diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c index 56cb1831dcdf..37b3bd629f48 100644 --- a/drivers/net/wireless/ath/ath10k/qmi.c +++ b/drivers/net/wireless/ath/ath10k/qmi.c @@ -543,7 +543,7 @@ static int ath10k_qmi_cap_send_sync_msg(struct ath10k_qmi *qmi) goto out; if (resp->resp.result != QMI_RESULT_SUCCESS_V01) { - ath10k_err(ar, "capablity req rejected: %d\n", resp->resp.error); + ath10k_err(ar, "capability req rejected: %d\n", resp->resp.error); ret = -EINVAL; goto out; } @@ -623,7 +623,7 @@ static int ath10k_qmi_host_cap_send_sync(struct ath10k_qmi *qmi) goto out; } - ath10k_dbg(ar, ATH10K_DBG_QMI, "qmi host capablity request completed\n"); + ath10k_dbg(ar, ATH10K_DBG_QMI, "qmi host capability request completed\n"); return 0; out: @@ -657,7 +657,7 @@ ath10k_qmi_ind_register_send_sync_msg(struct ath10k_qmi *qmi) wlfw_ind_register_req_msg_v01_ei, &req); if (ret < 0) { qmi_txn_cancel(&txn); - ath10k_err(ar, "failed to send indication registed request: %d\n", ret); + ath10k_err(ar, "failed to send indication registered request: %d\n", ret); goto out; } @@ -931,9 +931,9 @@ static int ath10k_qmi_setup_msa_resources(struct ath10k_qmi *qmi, u32 msa_size) qmi->msa_mem_size = resource_size(&r); qmi->msa_va = devm_memremap(dev, qmi->msa_pa, qmi->msa_mem_size, MEMREMAP_WT); - if (!qmi->msa_pa) { + if (IS_ERR(qmi->msa_va)) { dev_err(dev, "failed to map memory 
region: %pa\n", &r.start); - return -EBUSY; + return PTR_ERR(qmi->msa_va); } } else { qmi->msa_va = dmam_alloc_coherent(dev, msa_size, diff --git a/drivers/net/wireless/ath/ath10k/rx_desc.h b/drivers/net/wireless/ath/ath10k/rx_desc.h index 310674de3cb8..dfbfe674e11e 100644 --- a/drivers/net/wireless/ath/ath10k/rx_desc.h +++ b/drivers/net/wireless/ath/ath10k/rx_desc.h @@ -572,6 +572,7 @@ struct rx_msdu_start { #define RX_MSDU_END_INFO0_REPORTED_MPDU_LENGTH_LSB 0 #define RX_MSDU_END_INFO0_FIRST_MSDU BIT(14) #define RX_MSDU_END_INFO0_LAST_MSDU BIT(15) +#define RX_MSDU_END_INFO0_MSDU_LIMIT_ERR BIT(18) #define RX_MSDU_END_INFO0_PRE_DELIM_ERR BIT(30) #define RX_MSDU_END_INFO0_RESERVED_3B BIT(31) @@ -676,6 +677,12 @@ struct rx_msdu_end { * Indicates the last MSDU of the A-MSDU. MPDU end status is * only valid when last_msdu is set. * + *msdu_limit_error + * Indicates that the MSDU threshold was exceeded and thus + * all the rest of the MSDUs will not be scattered and + * will not be decapsulated but will be received in RAW format + * as a single MSDU buffer. + * *reserved_3a * Reserved: HW should fill with zero. FW should ignore. * diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c index 8d3d9bca410f..54efe6be8f1d 100644 --- a/drivers/net/wireless/ath/ath10k/snoc.c +++ b/drivers/net/wireless/ath/ath10k/snoc.c @@ -46,14 +46,14 @@ static char *const ce_name[] = { "WLAN_CE_11", }; -static struct ath10k_wcn3990_vreg_info vreg_cfg[] = { - {NULL, "vdd-0.8-cx-mx", 800000, 800000, 0, 0, false}, - {NULL, "vdd-1.8-xo", 1800000, 1800000, 0, 0, false}, - {NULL, "vdd-1.3-rfa", 1304000, 1304000, 0, 0, false}, - {NULL, "vdd-3.3-ch0", 3312000, 3312000, 0, 0, false}, +static struct ath10k_vreg_info vreg_cfg[] = { + {NULL, "vdd-0.8-cx-mx", 800000, 850000, 0, 0, false}, + {NULL, "vdd-1.8-xo", 1800000, 1850000, 0, 0, false}, + {NULL, "vdd-1.3-rfa", 1300000, 1350000, 0, 0, false}, + {NULL, "vdd-3.3-ch0", 3300000, 3350000, 0, 0, false}, }; -static struct ath10k_wcn3990_clk_info clk_cfg[] = { +static struct ath10k_clk_info clk_cfg[] = { {NULL, "cxo_ref_clk_pin", 0, false}, }; @@ -474,14 +474,14 @@ static struct service_to_pipe target_service_to_ce_map_wlan[] = { }, }; -void ath10k_snoc_write32(struct ath10k *ar, u32 offset, u32 value) +static void ath10k_snoc_write32(struct ath10k *ar, u32 offset, u32 value) { struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); iowrite32(value, ar_snoc->mem + offset); } -u32 ath10k_snoc_read32(struct ath10k *ar, u32 offset) +static u32 ath10k_snoc_read32(struct ath10k *ar, u32 offset) { struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); u32 val; @@ -918,7 +918,9 @@ static void ath10k_snoc_buffer_cleanup(struct ath10k *ar) static void ath10k_snoc_hif_stop(struct ath10k *ar) { - ath10k_snoc_irq_disable(ar); + if (!test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags)) + ath10k_snoc_irq_disable(ar); + napi_synchronize(&ar->napi); napi_disable(&ar->napi); ath10k_snoc_buffer_cleanup(ar); @@ -927,10 +929,14 @@ static void ath10k_snoc_hif_stop(struct ath10k *ar) static int ath10k_snoc_hif_start(struct ath10k *ar) { + struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); + napi_enable(&ar->napi); ath10k_snoc_irq_enable(ar); ath10k_snoc_rx_post(ar); + clear_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags); + ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot hif start\n"); return 0; @@ -994,7 +1000,8 @@ static int ath10k_snoc_wlan_enable(struct ath10k *ar) static void ath10k_snoc_wlan_disable(struct ath10k *ar) { - ath10k_qmi_wlan_disable(ar); + if 
(!test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags)) + ath10k_qmi_wlan_disable(ar); } static void ath10k_snoc_hif_power_down(struct ath10k *ar) @@ -1091,6 +1098,11 @@ static int ath10k_snoc_napi_poll(struct napi_struct *ctx, int budget) struct ath10k *ar = container_of(ctx, struct ath10k, napi); int done = 0; + if (test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags)) { + napi_complete(ctx); + return done; + } + ath10k_ce_per_engine_service_any(ar); done = ath10k_htt_txrx_compl_task(ar, budget); @@ -1187,17 +1199,29 @@ int ath10k_snoc_fw_indication(struct ath10k *ar, u64 type) struct ath10k_bus_params bus_params; int ret; + if (test_bit(ATH10K_SNOC_FLAG_UNREGISTERING, &ar_snoc->flags)) + return 0; + switch (type) { case ATH10K_QMI_EVENT_FW_READY_IND: + if (test_bit(ATH10K_SNOC_FLAG_REGISTERED, &ar_snoc->flags)) { + queue_work(ar->workqueue, &ar->restart_work); + break; + } + bus_params.dev_type = ATH10K_DEV_TYPE_LL; bus_params.chip_id = ar_snoc->target_info.soc_version; ret = ath10k_core_register(ar, &bus_params); if (ret) { - ath10k_err(ar, "failed to register driver core: %d\n", + ath10k_err(ar, "Failed to register driver core: %d\n", ret); + return ret; } + set_bit(ATH10K_SNOC_FLAG_REGISTERED, &ar_snoc->flags); break; case ATH10K_QMI_EVENT_FW_DOWN_IND: + set_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags); + set_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags); break; default: ath10k_err(ar, "invalid fw indication: %llx\n", type); @@ -1246,7 +1270,7 @@ static void ath10k_snoc_release_resource(struct ath10k *ar) } static int ath10k_get_vreg_info(struct ath10k *ar, struct device *dev, - struct ath10k_wcn3990_vreg_info *vreg_info) + struct ath10k_vreg_info *vreg_info) { struct regulator *reg; int ret = 0; @@ -1284,7 +1308,7 @@ done: } static int ath10k_get_clk_info(struct ath10k *ar, struct device *dev, - struct ath10k_wcn3990_clk_info *clk_info) + struct ath10k_clk_info *clk_info) { struct clk *handle; int ret = 0; @@ -1311,10 +1335,80 @@ static int ath10k_get_clk_info(struct ath10k *ar, struct device *dev, return ret; } -static int ath10k_wcn3990_vreg_on(struct ath10k *ar) +static int __ath10k_snoc_vreg_on(struct ath10k *ar, + struct ath10k_vreg_info *vreg_info) +{ + int ret; + + ath10k_dbg(ar, ATH10K_DBG_SNOC, "snoc regulator %s being enabled\n", + vreg_info->name); + + ret = regulator_set_voltage(vreg_info->reg, vreg_info->min_v, + vreg_info->max_v); + if (ret) { + ath10k_err(ar, + "failed to set regulator %s voltage-min: %d voltage-max: %d\n", + vreg_info->name, vreg_info->min_v, vreg_info->max_v); + return ret; + } + + if (vreg_info->load_ua) { + ret = regulator_set_load(vreg_info->reg, vreg_info->load_ua); + if (ret < 0) { + ath10k_err(ar, "failed to set regulator %s load: %d\n", + vreg_info->name, vreg_info->load_ua); + goto err_set_load; + } + } + + ret = regulator_enable(vreg_info->reg); + if (ret) { + ath10k_err(ar, "failed to enable regulator %s\n", + vreg_info->name); + goto err_enable; + } + + if (vreg_info->settle_delay) + udelay(vreg_info->settle_delay); + + return 0; + +err_enable: + regulator_set_load(vreg_info->reg, 0); +err_set_load: + regulator_set_voltage(vreg_info->reg, 0, vreg_info->max_v); + + return ret; +} + +static int __ath10k_snoc_vreg_off(struct ath10k *ar, + struct ath10k_vreg_info *vreg_info) +{ + int ret; + + ath10k_dbg(ar, ATH10K_DBG_SNOC, "snoc regulator %s being disabled\n", + vreg_info->name); + + ret = regulator_disable(vreg_info->reg); + if (ret) + ath10k_err(ar, "failed to disable regulator %s\n", + vreg_info->name); + + ret = 
regulator_set_load(vreg_info->reg, 0); + if (ret < 0) + ath10k_err(ar, "failed to set load %s\n", vreg_info->name); + + ret = regulator_set_voltage(vreg_info->reg, 0, vreg_info->max_v); + if (ret) + ath10k_err(ar, "failed to set voltage %s\n", vreg_info->name); + + return ret; +} + +static int ath10k_snoc_vreg_on(struct ath10k *ar) { struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); - struct ath10k_wcn3990_vreg_info *vreg_info; + struct ath10k_vreg_info *vreg_info; int ret = 0; int i; @@ -1324,62 +1418,30 @@ static int ath10k_wcn3990_vreg_on(struct ath10k *ar) if (!vreg_info->reg) continue; - ath10k_dbg(ar, ATH10K_DBG_SNOC, "snoc regulator %s being enabled\n", - vreg_info->name); - - ret = regulator_set_voltage(vreg_info->reg, vreg_info->min_v, - vreg_info->max_v); - if (ret) { - ath10k_err(ar, - "failed to set regulator %s voltage-min: %d voltage-max: %d\n", - vreg_info->name, vreg_info->min_v, vreg_info->max_v); - goto err_reg_config; - } - - if (vreg_info->load_ua) { - ret = regulator_set_load(vreg_info->reg, - vreg_info->load_ua); - if (ret < 0) { - ath10k_err(ar, - "failed to set regulator %s load: %d\n", - vreg_info->name, - vreg_info->load_ua); - goto err_reg_config; - } - } - - ret = regulator_enable(vreg_info->reg); - if (ret) { - ath10k_err(ar, "failed to enable regulator %s\n", - vreg_info->name); + ret = __ath10k_snoc_vreg_on(ar, vreg_info); + if (ret) goto err_reg_config; - } - - if (vreg_info->settle_delay) - udelay(vreg_info->settle_delay); } return 0; err_reg_config: - for (; i >= 0; i--) { + for (i = i - 1; i >= 0; i--) { vreg_info = &ar_snoc->vreg[i]; if (!vreg_info->reg) continue; - regulator_disable(vreg_info->reg); - regulator_set_load(vreg_info->reg, 0); - regulator_set_voltage(vreg_info->reg, 0, vreg_info->max_v); + __ath10k_snoc_vreg_off(ar, vreg_info); } return ret; } -static int ath10k_wcn3990_vreg_off(struct ath10k *ar) +static int ath10k_snoc_vreg_off(struct ath10k *ar) { struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); - struct ath10k_wcn3990_vreg_info *vreg_info; + struct ath10k_vreg_info *vreg_info; int ret = 0; int i; @@ -1389,33 +1451,16 @@ static int ath10k_wcn3990_vreg_off(struct ath10k *ar) if (!vreg_info->reg) continue; - ath10k_dbg(ar, ATH10K_DBG_SNOC, "snoc regulator %s being disabled\n", - vreg_info->name); - - ret = regulator_disable(vreg_info->reg); - if (ret) - ath10k_err(ar, "failed to disable regulator %s\n", - vreg_info->name); - - ret = regulator_set_load(vreg_info->reg, 0); - if (ret < 0) - ath10k_err(ar, "failed to set load %s\n", - vreg_info->name); - - ret = regulator_set_voltage(vreg_info->reg, 0, - vreg_info->max_v); - if (ret) - ath10k_err(ar, "failed to set voltage %s\n", - vreg_info->name); + ret = __ath10k_snoc_vreg_off(ar, vreg_info); } return ret; } -static int ath10k_wcn3990_clk_init(struct ath10k *ar) +static int ath10k_snoc_clk_init(struct ath10k *ar) { struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); - struct ath10k_wcn3990_clk_info *clk_info; + struct ath10k_clk_info *clk_info; int ret = 0; int i; @@ -1449,7 +1494,7 @@ static int ath10k_wcn3990_clk_init(struct ath10k *ar) return 0; err_clock_config: - for (; i >= 0; i--) { + for (i = i - 1; i >= 0; i--) { clk_info = &ar_snoc->clk[i]; if (!clk_info->handle) @@ -1461,10 +1506,10 @@ err_clock_config: return ret; } -static int ath10k_wcn3990_clk_deinit(struct ath10k *ar) +static int ath10k_snoc_clk_deinit(struct ath10k *ar) { struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); - struct ath10k_wcn3990_clk_info *clk_info; + struct ath10k_clk_info *clk_info; int i; for (i = 
0; i < ARRAY_SIZE(clk_cfg); i++) { @@ -1488,18 +1533,18 @@ static int ath10k_hw_power_on(struct ath10k *ar) ath10k_dbg(ar, ATH10K_DBG_SNOC, "soc power on\n"); - ret = ath10k_wcn3990_vreg_on(ar); + ret = ath10k_snoc_vreg_on(ar); if (ret) return ret; - ret = ath10k_wcn3990_clk_init(ar); + ret = ath10k_snoc_clk_init(ar); if (ret) goto vreg_off; return ret; vreg_off: - ath10k_wcn3990_vreg_off(ar); + ath10k_snoc_vreg_off(ar); return ret; } @@ -1509,9 +1554,9 @@ static int ath10k_hw_power_off(struct ath10k *ar) ath10k_dbg(ar, ATH10K_DBG_SNOC, "soc power off\n"); - ath10k_wcn3990_clk_deinit(ar); + ath10k_snoc_clk_deinit(ar); - ret = ath10k_wcn3990_vreg_off(ar); + ret = ath10k_snoc_vreg_off(ar); return ret; } @@ -1609,7 +1654,6 @@ static int ath10k_snoc_probe(struct platform_device *pdev) } ath10k_dbg(ar, ATH10K_DBG_SNOC, "snoc probe\n"); - ath10k_warn(ar, "Warning: SNOC support is still work-in-progress, it will not work properly!"); return 0; @@ -1628,8 +1672,17 @@ err_core_destroy: static int ath10k_snoc_remove(struct platform_device *pdev) { struct ath10k *ar = platform_get_drvdata(pdev); + struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar); ath10k_dbg(ar, ATH10K_DBG_SNOC, "snoc remove\n"); + + reinit_completion(&ar->driver_recovery); + + if (test_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags)) + wait_for_completion_timeout(&ar->driver_recovery, 3 * HZ); + + set_bit(ATH10K_SNOC_FLAG_UNREGISTERING, &ar_snoc->flags); + ath10k_core_unregister(ar); ath10k_hw_power_off(ar); ath10k_snoc_free_irq(ar); @@ -1641,12 +1694,12 @@ static int ath10k_snoc_remove(struct platform_device *pdev) } static struct platform_driver ath10k_snoc_driver = { - .probe = ath10k_snoc_probe, - .remove = ath10k_snoc_remove, - .driver = { - .name = "ath10k_snoc", - .of_match_table = ath10k_snoc_dt_match, - }, + .probe = ath10k_snoc_probe, + .remove = ath10k_snoc_remove, + .driver = { + .name = "ath10k_snoc", + .of_match_table = ath10k_snoc_dt_match, + }, }; module_platform_driver(ath10k_snoc_driver); diff --git a/drivers/net/wireless/ath/ath10k/snoc.h b/drivers/net/wireless/ath/ath10k/snoc.h index e1d2d6675556..2b2f23cf7c5d 100644 --- a/drivers/net/wireless/ath/ath10k/snoc.h +++ b/drivers/net/wireless/ath/ath10k/snoc.h @@ -53,7 +53,7 @@ struct ath10k_snoc_ce_irq { u32 irq_line; }; -struct ath10k_wcn3990_vreg_info { +struct ath10k_vreg_info { struct regulator *reg; const char *name; u32 min_v; @@ -63,13 +63,19 @@ struct ath10k_wcn3990_vreg_info { bool required; }; -struct ath10k_wcn3990_clk_info { +struct ath10k_clk_info { struct clk *handle; const char *name; u32 freq; bool required; }; +enum ath10k_snoc_flags { + ATH10K_SNOC_FLAG_REGISTERED, + ATH10K_SNOC_FLAG_UNREGISTERING, + ATH10K_SNOC_FLAG_RECOVERY, +}; + struct ath10k_snoc { struct platform_device *dev; struct ath10k *ar; @@ -81,9 +87,10 @@ struct ath10k_snoc { struct ath10k_snoc_ce_irq ce_irqs[CE_COUNT_MAX]; struct ath10k_ce ce; struct timer_list rx_post_retry; - struct ath10k_wcn3990_vreg_info *vreg; - struct ath10k_wcn3990_clk_info *clk; + struct ath10k_vreg_info *vreg; + struct ath10k_clk_info *clk; struct ath10k_qmi *qmi; + unsigned long int flags; }; static inline struct ath10k_snoc *ath10k_snoc_priv(struct ath10k *ar) @@ -91,8 +98,6 @@ static inline struct ath10k_snoc *ath10k_snoc_priv(struct ath10k *ar) return (struct ath10k_snoc *)ar->drv_priv; } -void ath10k_snoc_write32(struct ath10k *ar, u32 offset, u32 value); -u32 ath10k_snoc_read32(struct ath10k *ar, u32 offset); int ath10k_snoc_fw_indication(struct ath10k *ar, u64 type); #endif /* _SNOC_H_ */ diff --git 
a/drivers/net/wireless/ath/ath10k/wmi-ops.h b/drivers/net/wireless/ath/ath10k/wmi-ops.h index 7978a7783f90..04663076d27a 100644 --- a/drivers/net/wireless/ath/ath10k/wmi-ops.h +++ b/drivers/net/wireless/ath/ath10k/wmi-ops.h @@ -219,6 +219,9 @@ struct wmi_ops { struct sk_buff *(*gen_echo)(struct ath10k *ar, u32 value); struct sk_buff *(*gen_pdev_get_tpc_table_cmdid)(struct ath10k *ar, u32 param); + struct sk_buff *(*gen_bb_timing) + (struct ath10k *ar, + const struct wmi_bb_timing_cfg_arg *arg); }; @@ -1576,4 +1579,21 @@ ath10k_wmi_report_radar_found(struct ath10k *ar, ar->wmi.cmd->radar_found_cmdid); } +static inline int +ath10k_wmi_pdev_bb_timing(struct ath10k *ar, + const struct wmi_bb_timing_cfg_arg *arg) +{ + struct sk_buff *skb; + + if (!ar->wmi.ops->gen_bb_timing) + return -EOPNOTSUPP; + + skb = ar->wmi.ops->gen_bb_timing(ar, arg); + + if (IS_ERR(skb)) + return PTR_ERR(skb); + + return ath10k_wmi_cmd_send(ar, skb, + ar->wmi.cmd->set_bb_timing_cmdid); +} #endif diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c index bab8b2527fb8..892bd8c30dd9 100644 --- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c +++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c @@ -621,7 +621,7 @@ static void ath10k_wmi_tlv_op_rx(struct ath10k *ar, struct sk_buff *skb) ath10k_wmi_event_mgmt_tx_compl(ar, skb); break; default: - ath10k_warn(ar, "Unknown eventid: %d\n", id); + ath10k_dbg(ar, ATH10K_DBG_WMI, "Unknown eventid: %d\n", id); break; } @@ -762,6 +762,9 @@ static int ath10k_wmi_tlv_op_pull_ch_info_ev(struct ath10k *ar, arg->noise_floor = ev->noise_floor; arg->rx_clear_count = ev->rx_clear_count; arg->cycle_count = ev->cycle_count; + if (test_bit(ATH10K_FW_FEATURE_SINGLE_CHAN_INFO_PER_CHANNEL, + ar->running_fw->fw_file.fw_features)) + arg->mac_clk_mhz = ev->mac_clk_mhz; kfree(tb); return 0; @@ -3452,7 +3455,6 @@ ath10k_wmi_tlv_op_gen_config_pno_start(struct ath10k *ar, struct wmi_tlv *tlv; struct sk_buff *skb; __le32 *channel_list; - u16 tlv_len; size_t len; void *ptr; u32 i; @@ -3510,8 +3512,6 @@ ath10k_wmi_tlv_op_gen_config_pno_start(struct ath10k *ar, /* nlo_configured_parameters(nlo_list) */ cmd->no_of_ssids = __cpu_to_le32(min_t(u8, pno->uc_networks_count, WMI_NLO_MAX_SSIDS)); - tlv_len = __le32_to_cpu(cmd->no_of_ssids) * - sizeof(struct nlo_configured_parameters); tlv = ptr; tlv->tag = __cpu_to_le16(WMI_TLV_TAG_ARRAY_STRUCT); diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.h b/drivers/net/wireless/ath/ath10k/wmi-tlv.h index c2cb413392ee..e07e9907e355 100644 --- a/drivers/net/wireless/ath/ath10k/wmi-tlv.h +++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.h @@ -1582,6 +1582,16 @@ struct ath10k_mgmt_tx_pkt_addr { dma_addr_t paddr; }; +struct chan_info_params { + u32 err_code; + u32 freq; + u32 cmd_flags; + u32 noise_floor; + u32 rx_clear_count; + u32 cycle_count; + u32 mac_clk_mhz; +}; + struct wmi_tlv_mgmt_tx_compl_ev { __le32 desc_id; __le32 status; diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c index 25e8fa789e8d..ba837403e266 100644 --- a/drivers/net/wireless/ath/ath10k/wmi.c +++ b/drivers/net/wireless/ath/ath10k/wmi.c @@ -539,6 +539,7 @@ static struct wmi_cmd_map wmi_10_2_4_cmd_map = { WMI_10_2_PDEV_BSS_CHAN_INFO_REQUEST_CMDID, .pdev_get_tpc_table_cmdid = WMI_CMD_UNSUPPORTED, .radar_found_cmdid = WMI_CMD_UNSUPPORTED, + .set_bb_timing_cmdid = WMI_10_2_PDEV_SET_BB_TIMING_CONFIG_CMDID, }; /* 10.4 WMI cmd track */ @@ -825,6 +826,7 @@ static struct wmi_vdev_param_map wmi_vdev_param_map = { .meru_vc = 
WMI_VDEV_PARAM_UNSUPPORTED, .rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED, .bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED, + .disable_4addr_src_lrn = WMI_VDEV_PARAM_UNSUPPORTED, }; /* 10.X WMI VDEV param map */ @@ -900,6 +902,7 @@ static struct wmi_vdev_param_map wmi_10x_vdev_param_map = { .meru_vc = WMI_VDEV_PARAM_UNSUPPORTED, .rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED, .bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED, + .disable_4addr_src_lrn = WMI_VDEV_PARAM_UNSUPPORTED, }; static struct wmi_vdev_param_map wmi_10_2_4_vdev_param_map = { @@ -974,6 +977,7 @@ static struct wmi_vdev_param_map wmi_10_2_4_vdev_param_map = { .meru_vc = WMI_VDEV_PARAM_UNSUPPORTED, .rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED, .bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED, + .disable_4addr_src_lrn = WMI_VDEV_PARAM_UNSUPPORTED, }; static struct wmi_vdev_param_map wmi_10_4_vdev_param_map = { @@ -1051,6 +1055,7 @@ static struct wmi_vdev_param_map wmi_10_4_vdev_param_map = { .bw_nss_ratemask = WMI_10_4_VDEV_PARAM_BW_NSS_RATEMASK, .inc_tsf = WMI_10_4_VDEV_PARAM_TSF_INCREMENT, .dec_tsf = WMI_10_4_VDEV_PARAM_TSF_DECREMENT, + .disable_4addr_src_lrn = WMI_10_4_VDEV_PARAM_DISABLE_4_ADDR_SRC_LRN, }; static struct wmi_pdev_param_map wmi_pdev_param_map = { @@ -2554,60 +2559,69 @@ static int ath10k_wmi_10_4_op_pull_ch_info_ev(struct ath10k *ar, return 0; } -void ath10k_wmi_event_chan_info(struct ath10k *ar, struct sk_buff *skb) +/* + * Handle the channel info event for firmware which only sends one + * chan_info event per scanned channel. + */ +static void ath10k_wmi_event_chan_info_unpaired(struct ath10k *ar, + struct chan_info_params *params) { - struct wmi_ch_info_ev_arg arg = {}; struct survey_info *survey; - u32 err_code, freq, cmd_flags, noise_floor, rx_clear_count, cycle_count; - int idx, ret; + int idx; - ret = ath10k_wmi_pull_ch_info(ar, skb, &arg); - if (ret) { - ath10k_warn(ar, "failed to parse chan info event: %d\n", ret); + if (params->cmd_flags & WMI_CHAN_INFO_FLAG_COMPLETE) { + ath10k_dbg(ar, ATH10K_DBG_WMI, "chan info report completed\n"); return; } - err_code = __le32_to_cpu(arg.err_code); - freq = __le32_to_cpu(arg.freq); - cmd_flags = __le32_to_cpu(arg.cmd_flags); - noise_floor = __le32_to_cpu(arg.noise_floor); - rx_clear_count = __le32_to_cpu(arg.rx_clear_count); - cycle_count = __le32_to_cpu(arg.cycle_count); + idx = freq_to_idx(ar, params->freq); + if (idx >= ARRAY_SIZE(ar->survey)) { + ath10k_warn(ar, "chan info: invalid frequency %d (idx %d out of bounds)\n", + params->freq, idx); + return; + } - ath10k_dbg(ar, ATH10K_DBG_WMI, - "chan info err_code %d freq %d cmd_flags %d noise_floor %d rx_clear_count %d cycle_count %d\n", - err_code, freq, cmd_flags, noise_floor, rx_clear_count, - cycle_count); + survey = &ar->survey[idx]; - spin_lock_bh(&ar->data_lock); + if (!params->mac_clk_mhz) + return; - switch (ar->scan.state) { - case ATH10K_SCAN_IDLE: - case ATH10K_SCAN_STARTING: - ath10k_warn(ar, "received chan info event without a scan request, ignoring\n"); - goto exit; - case ATH10K_SCAN_RUNNING: - case ATH10K_SCAN_ABORTING: - break; - } + memset(survey, 0, sizeof(*survey)); - idx = freq_to_idx(ar, freq); + survey->noise = params->noise_floor; + survey->time = (params->cycle_count / params->mac_clk_mhz) / 1000; + survey->time_busy = (params->rx_clear_count / params->mac_clk_mhz) / 1000; + survey->filled |= SURVEY_INFO_NOISE_DBM | SURVEY_INFO_TIME | + SURVEY_INFO_TIME_BUSY; +} + +/* + * Handle the channel info event for firmware which sends chan_info + * event in pairs(start and stop events) for every scanned 
channel. + */ +static void ath10k_wmi_event_chan_info_paired(struct ath10k *ar, + struct chan_info_params *params) +{ + struct survey_info *survey; + int idx; + + idx = freq_to_idx(ar, params->freq); if (idx >= ARRAY_SIZE(ar->survey)) { ath10k_warn(ar, "chan info: invalid frequency %d (idx %d out of bounds)\n", - freq, idx); - goto exit; + params->freq, idx); + return; } - if (cmd_flags & WMI_CHAN_INFO_FLAG_COMPLETE) { + if (params->cmd_flags & WMI_CHAN_INFO_FLAG_COMPLETE) { if (ar->ch_info_can_report_survey) { survey = &ar->survey[idx]; - survey->noise = noise_floor; + survey->noise = params->noise_floor; survey->filled = SURVEY_INFO_NOISE_DBM; ath10k_hw_fill_survey_time(ar, survey, - cycle_count, - rx_clear_count, + params->cycle_count, + params->rx_clear_count, ar->survey_last_cycle_count, ar->survey_last_rx_clear_count); } @@ -2617,10 +2631,55 @@ void ath10k_wmi_event_chan_info(struct ath10k *ar, struct sk_buff *skb) ar->ch_info_can_report_survey = true; } - if (!(cmd_flags & WMI_CHAN_INFO_FLAG_PRE_COMPLETE)) { - ar->survey_last_rx_clear_count = rx_clear_count; - ar->survey_last_cycle_count = cycle_count; + if (!(params->cmd_flags & WMI_CHAN_INFO_FLAG_PRE_COMPLETE)) { + ar->survey_last_rx_clear_count = params->rx_clear_count; + ar->survey_last_cycle_count = params->cycle_count; } +} + +void ath10k_wmi_event_chan_info(struct ath10k *ar, struct sk_buff *skb) +{ + struct chan_info_params ch_info_param; + struct wmi_ch_info_ev_arg arg = {}; + int ret; + + ret = ath10k_wmi_pull_ch_info(ar, skb, &arg); + if (ret) { + ath10k_warn(ar, "failed to parse chan info event: %d\n", ret); + return; + } + + ch_info_param.err_code = __le32_to_cpu(arg.err_code); + ch_info_param.freq = __le32_to_cpu(arg.freq); + ch_info_param.cmd_flags = __le32_to_cpu(arg.cmd_flags); + ch_info_param.noise_floor = __le32_to_cpu(arg.noise_floor); + ch_info_param.rx_clear_count = __le32_to_cpu(arg.rx_clear_count); + ch_info_param.cycle_count = __le32_to_cpu(arg.cycle_count); + ch_info_param.mac_clk_mhz = __le32_to_cpu(arg.mac_clk_mhz); + + ath10k_dbg(ar, ATH10K_DBG_WMI, + "chan info err_code %d freq %d cmd_flags %d noise_floor %d rx_clear_count %d cycle_count %d\n", + ch_info_param.err_code, ch_info_param.freq, ch_info_param.cmd_flags, + ch_info_param.noise_floor, ch_info_param.rx_clear_count, + ch_info_param.cycle_count); + + spin_lock_bh(&ar->data_lock); + + switch (ar->scan.state) { + case ATH10K_SCAN_IDLE: + case ATH10K_SCAN_STARTING: + ath10k_warn(ar, "received chan info event without a scan request, ignoring\n"); + goto exit; + case ATH10K_SCAN_RUNNING: + case ATH10K_SCAN_ABORTING: + break; + } + + if (test_bit(ATH10K_FW_FEATURE_SINGLE_CHAN_INFO_PER_CHANNEL, + ar->running_fw->fw_file.fw_features)) + ath10k_wmi_event_chan_info_unpaired(ar, &ch_info_param); + else + ath10k_wmi_event_chan_info_paired(ar, &ch_info_param); exit: spin_unlock_bh(&ar->data_lock); @@ -8785,6 +8844,27 @@ ath10k_wmi_barrier(struct ath10k *ar) return 0; } +static struct sk_buff * +ath10k_wmi_10_2_4_op_gen_bb_timing(struct ath10k *ar, + const struct wmi_bb_timing_cfg_arg *arg) +{ + struct wmi_pdev_bb_timing_cfg_cmd *cmd; + struct sk_buff *skb; + + skb = ath10k_wmi_alloc_skb(ar, sizeof(*cmd)); + if (!skb) + return ERR_PTR(-ENOMEM); + + cmd = (struct wmi_pdev_bb_timing_cfg_cmd *)skb->data; + cmd->bb_tx_timing = __cpu_to_le32(arg->bb_tx_timing); + cmd->bb_xpa_timing = __cpu_to_le32(arg->bb_xpa_timing); + + ath10k_dbg(ar, ATH10K_DBG_WMI, + "wmi pdev bb_tx_timing 0x%x bb_xpa_timing 0x%x\n", + arg->bb_tx_timing, arg->bb_xpa_timing); + return skb; +} + 
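/*
 * Editor's note: illustrative sketch only, not part of the patch. The
 * gen_bb_timing op added just above follows the recurring ath10k WMI
 * shape that this series extends: a gen_* op builds an skb holding a
 * packed little-endian command struct, and an inline wrapper in
 * wmi-ops.h checks that the op exists, then hands the skb to
 * ath10k_wmi_cmd_send() with the per-firmware command id. The names
 * below (wmi_example_cfg_cmd, field_a/field_b, example_*) are
 * hypothetical placeholders for that pattern, not real driver symbols.
 */
struct wmi_example_cfg_cmd {
	__le32 field_a;
	__le32 field_b;
} __packed;

static struct sk_buff *
example_op_gen_cfg(struct ath10k *ar, u32 a, u32 b)
{
	struct wmi_example_cfg_cmd *cmd;
	struct sk_buff *skb;

	skb = ath10k_wmi_alloc_skb(ar, sizeof(*cmd));
	if (!skb)
		return ERR_PTR(-ENOMEM);

	/* firmware expects little-endian fields in a packed struct */
	cmd = (struct wmi_example_cfg_cmd *)skb->data;
	cmd->field_a = __cpu_to_le32(a);
	cmd->field_b = __cpu_to_le32(b);

	return skb;
}

/* caller side, mirroring ath10k_wmi_pdev_bb_timing() above: probe for
 * the op first, since not every firmware branch implements it
 */
static inline int example_send_cfg(struct ath10k *ar, u32 a, u32 b)
{
	struct sk_buff *skb;

	if (!ar->wmi.ops->gen_bb_timing)
		return -EOPNOTSUPP;

	skb = example_op_gen_cfg(ar, a, b);
	if (IS_ERR(skb))
		return PTR_ERR(skb);

	return ath10k_wmi_cmd_send(ar, skb,
				   ar->wmi.cmd->set_bb_timing_cmdid);
}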
static const struct wmi_ops wmi_ops = { .rx = ath10k_wmi_op_rx, .map_svc = wmi_main_svc_map, @@ -9058,6 +9138,7 @@ static const struct wmi_ops wmi_10_2_4_ops = { .gen_pdev_enable_adaptive_cca = ath10k_wmi_op_gen_pdev_enable_adaptive_cca, .get_vdev_subtype = ath10k_wmi_10_2_4_op_get_vdev_subtype, + .gen_bb_timing = ath10k_wmi_10_2_4_op_gen_bb_timing, /* .gen_bcn_tmpl not implemented */ /* .gen_prb_tmpl not implemented */ /* .gen_p2p_go_bcn_ie not implemented */ diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h index c5a343c93013..2034ccc7cc72 100644 --- a/drivers/net/wireless/ath/ath10k/wmi.h +++ b/drivers/net/wireless/ath/ath10k/wmi.h @@ -205,6 +205,8 @@ enum wmi_service { WMI_SERVICE_SPOOF_MAC_SUPPORT, WMI_SERVICE_TX_DATA_ACK_RSSI, WMI_SERVICE_VDEV_DIFFERENT_BEACON_INTERVAL_SUPPORT, + WMI_SERVICE_VDEV_DISABLE_4_ADDR_SRC_LRN_SUPPORT, + WMI_SERVICE_BB_TIMING_CONFIG_SUPPORT, WMI_SERVICE_THERM_THROT, /* keep last */ @@ -245,6 +247,9 @@ enum wmi_10x_service { WMI_10X_SERVICE_PEER_STATS, WMI_10X_SERVICE_RESET_CHIP, WMI_10X_SERVICE_HTT_MGMT_TX_COMP_VALID_FLAGS, + WMI_10X_SERVICE_VDEV_BCN_RATE_CONTROL, + WMI_10X_SERVICE_PER_PACKET_SW_ENCRYPT, + WMI_10X_SERVICE_BB_TIMING_CONFIG_SUPPORT, }; enum wmi_main_service { @@ -360,6 +365,9 @@ enum wmi_10_4_service { WMI_10_4_SERVICE_PEER_TID_CONFIGS_SUPPORT, WMI_10_4_SERVICE_VDEV_BCN_RATE_CONTROL, WMI_10_4_SERVICE_VDEV_DIFFERENT_BEACON_INTERVAL_SUPPORT, + WMI_10_4_SERVICE_HTT_ASSERT_TRIGGER_SUPPORT, + WMI_10_4_SERVICE_VDEV_FILTER_NEIGHBOR_RX_PACKETS, + WMI_10_4_SERVICE_VDEV_DISABLE_4_ADDR_SRC_LRN_SUPPORT, }; static inline char *wmi_service_name(int service_id) @@ -569,6 +577,8 @@ static inline void wmi_10x_svc_map(const __le32 *in, unsigned long *out, WMI_SERVICE_RESET_CHIP, len); SVCMAP(WMI_10X_SERVICE_HTT_MGMT_TX_COMP_VALID_FLAGS, WMI_SERVICE_HTT_MGMT_TX_COMP_VALID_FLAGS, len); + SVCMAP(WMI_10X_SERVICE_BB_TIMING_CONFIG_SUPPORT, + WMI_SERVICE_BB_TIMING_CONFIG_SUPPORT, len); } static inline void wmi_main_svc_map(const __le32 *in, unsigned long *out, @@ -787,6 +797,8 @@ static inline void wmi_10_4_svc_map(const __le32 *in, unsigned long *out, WMI_SERVICE_TX_DATA_ACK_RSSI, len); SVCMAP(WMI_10_4_SERVICE_VDEV_DIFFERENT_BEACON_INTERVAL_SUPPORT, WMI_SERVICE_VDEV_DIFFERENT_BEACON_INTERVAL_SUPPORT, len); + SVCMAP(WMI_10_4_SERVICE_VDEV_DISABLE_4_ADDR_SRC_LRN_SUPPORT, + WMI_SERVICE_VDEV_DISABLE_4_ADDR_SRC_LRN_SUPPORT, len); } #undef SVCMAP @@ -987,6 +999,7 @@ struct wmi_cmd_map { u32 pdev_wds_entry_list_cmdid; u32 tdls_set_offchan_mode_cmdid; u32 radar_found_cmdid; + u32 set_bb_timing_cmdid; }; /* @@ -1602,6 +1615,8 @@ enum wmi_10_2_cmd_id { WMI_10_2_SET_LTEU_CONFIG_CMDID, WMI_10_2_SET_CCA_PARAMS, WMI_10_2_PDEV_BSS_CHAN_INFO_REQUEST_CMDID, + WMI_10_2_FWTEST_CMDID, + WMI_10_2_PDEV_SET_BB_TIMING_CONFIG_CMDID, WMI_10_2_PDEV_UTF_CMDID = WMI_10_2_END_CMDID - 1, }; @@ -4985,6 +5000,7 @@ enum wmi_rate_preamble { (((preamble) << 6) | ((nss) << 4) | (rate)) #define ATH10K_HW_AMPDU(flags) ((flags) & 0x1) #define ATH10K_HW_BA_FAIL(flags) (((flags) >> 1) & 0x3) +#define ATH10K_FW_SKIPPED_RATE_CTRL(flags) (((flags) >> 6) & 0x1) #define ATH10K_VHT_MCS_NUM 10 #define ATH10K_BW_NUM 4 @@ -4992,6 +5008,7 @@ enum wmi_rate_preamble { #define ATH10K_LEGACY_NUM 12 #define ATH10K_GI_NUM 2 #define ATH10K_HT_MCS_NUM 32 +#define ATH10K_RATE_TABLE_NUM 320 /* Value to disable fixed rate setting */ #define WMI_FIXED_RATE_NONE (0xff) @@ -5065,6 +5082,7 @@ struct wmi_vdev_param_map { u32 bw_nss_ratemask; u32 inc_tsf; u32 dec_tsf; + u32 
disable_4addr_src_lrn; }; #define WMI_VDEV_PARAM_UNSUPPORTED 0 @@ -5404,8 +5422,20 @@ enum wmi_10_4_vdev_param { WMI_10_4_VDEV_PARAM_ATF_SSID_SCHED_POLICY, WMI_10_4_VDEV_PARAM_DISABLE_DYN_BW_RTS, WMI_10_4_VDEV_PARAM_TSF_DECREMENT, + WMI_10_4_VDEV_PARAM_SELFGEN_FIXED_RATE, + WMI_10_4_VDEV_PARAM_AMPDU_SUBFRAME_SIZE_PER_AC, + WMI_10_4_VDEV_PARAM_NSS_VHT160, + WMI_10_4_VDEV_PARAM_NSS_VHT80_80, + WMI_10_4_VDEV_PARAM_AMSDU_SUBFRAME_SIZE_PER_AC, + WMI_10_4_VDEV_PARAM_DISABLE_CABQ, + WMI_10_4_VDEV_PARAM_SIFS_TRIGGER_RATE, + WMI_10_4_VDEV_PARAM_TX_POWER, + WMI_10_4_VDEV_PARAM_ENABLE_DISABLE_RTT_RESPONDER_ROLE, + WMI_10_4_VDEV_PARAM_DISABLE_4_ADDR_SRC_LRN, }; +#define WMI_VDEV_DISABLE_4_ADDR_SRC_LRN 1 + #define WMI_VDEV_PARAM_TXBF_SU_TX_BFEE BIT(0) #define WMI_VDEV_PARAM_TXBF_MU_TX_BFEE BIT(1) #define WMI_VDEV_PARAM_TXBF_SU_TX_BFER BIT(2) @@ -6442,6 +6472,14 @@ struct wmi_chan_info_event { __le32 noise_floor; __le32 rx_clear_count; __le32 cycle_count; + __le32 chan_tx_pwr_range; + __le32 chan_tx_pwr_tp; + __le32 rx_frame_count; + __le32 my_bss_rx_cycle_count; + __le32 rx_11b_mode_data_duration; + __le32 tx_frame_cnt; + __le32 mac_clk_mhz; + } __packed; struct wmi_10_4_chan_info_event { @@ -6670,6 +6708,10 @@ struct wmi_ch_info_ev_arg { __le32 chan_tx_pwr_range; __le32 chan_tx_pwr_tp; __le32 rx_frame_count; + __le32 my_bss_rx_cycle_count; + __le32 rx_11b_mode_data_duration; + __le32 tx_frame_cnt; + __le32 mac_clk_mhz; }; /* From 10.4 firmware, not sure all have the same values. */ @@ -7141,6 +7183,23 @@ struct wmi_pdev_chan_info_req_cmd { __le32 reserved; } __packed; +/* bb timing register configurations */ +struct wmi_bb_timing_cfg_arg { + /* Tx_end to pa off timing */ + u32 bb_tx_timing; + + /* Tx_end to external pa off timing */ + u32 bb_xpa_timing; +}; + +struct wmi_pdev_bb_timing_cfg_cmd { + /* Tx_end to pa off timing */ + __le32 bb_tx_timing; + + /* Tx_end to external pa off timing */ + __le32 bb_xpa_timing; +} __packed; + struct ath10k; struct ath10k_vif; struct ath10k_fw_stats_pdev; diff --git a/drivers/net/wireless/ath/ath10k/wow.c b/drivers/net/wireless/ath/ath10k/wow.c index 51b26b305885..36d4245c308e 100644 --- a/drivers/net/wireless/ath/ath10k/wow.c +++ b/drivers/net/wireless/ath/ath10k/wow.c @@ -135,7 +135,7 @@ static void ath10k_wow_convert_8023_to_80211 &old_hdr_mask->h_proto, sizeof(old_hdr_mask->h_proto)); - /* Caculate new pkt_offset */ + /* Calculate new pkt_offset */ if (old->pkt_offset < ETH_ALEN) new->pkt_offset = old->pkt_offset + offsetof(struct ieee80211_hdr_3addr, addr1); @@ -146,7 +146,7 @@ static void ath10k_wow_convert_8023_to_80211 else new->pkt_offset = old->pkt_offset + hdr_len + rfc_len - ETH_HLEN; - /* Caculate new hdr end offset */ + /* Calculate new hdr end offset */ if (total_len > ETH_HLEN) hdr_80211_end_offset = hdr_len + rfc_len; else if (total_len > offsetof(struct ethhdr, h_proto)) diff --git a/drivers/net/wireless/ath/ath6kl/cfg80211.c b/drivers/net/wireless/ath/ath6kl/cfg80211.c index e121187f371f..5477a014e1fb 100644 --- a/drivers/net/wireless/ath/ath6kl/cfg80211.c +++ b/drivers/net/wireless/ath/ath6kl/cfg80211.c @@ -291,7 +291,7 @@ static bool ath6kl_cfg80211_ready(struct ath6kl_vif *vif) } if (!test_bit(WLAN_ENABLED, &vif->flags)) { - ath6kl_err("wlan disabled\n"); + ath6kl_dbg(ATH6KL_DBG_WLAN_CFG, "wlan disabled\n"); return false; } @@ -939,7 +939,7 @@ static int ath6kl_set_probed_ssids(struct ath6kl *ar, else ssid_list[i].flag = ANY_SSID_FLAG; - if (n_match_ssid == 0) + if (ar->wiphy->max_match_sets != 0 && n_match_ssid == 0) ssid_list[i].flag |= 
MATCH_SSID_FLAG; } @@ -1093,7 +1093,7 @@ void ath6kl_cfg80211_scan_complete_event(struct ath6kl_vif *vif, bool aborted) if (vif->scan_req->n_ssids && vif->scan_req->ssids[0].ssid_len) { for (i = 0; i < vif->scan_req->n_ssids; i++) { ath6kl_wmi_probedssid_cmd(ar->wmi, vif->fw_vif_idx, - i + 1, DISABLE_SSID_FLAG, + i, DISABLE_SSID_FLAG, 0, NULL); } } @@ -1322,7 +1322,7 @@ static int ath6kl_cfg80211_set_default_key(struct wiphy *wiphy, struct ath6kl_vif *vif = netdev_priv(ndev); struct ath6kl_key *key = NULL; u8 key_usage; - enum crypto_type key_type = NONE_CRYPT; + enum ath6kl_crypto_type key_type = NONE_CRYPT; ath6kl_dbg(ATH6KL_DBG_WLAN_CFG, "%s: index %d\n", __func__, key_index); diff --git a/drivers/net/wireless/ath/ath6kl/common.h b/drivers/net/wireless/ath/ath6kl/common.h index 4f82e8632d37..d6e5234f67a1 100644 --- a/drivers/net/wireless/ath/ath6kl/common.h +++ b/drivers/net/wireless/ath/ath6kl/common.h @@ -67,7 +67,7 @@ struct ath6kl_llc_snap_hdr { __be16 eth_type; } __packed; -enum crypto_type { +enum ath6kl_crypto_type { NONE_CRYPT = 0x01, WEP_CRYPT = 0x02, TKIP_CRYPT = 0x04, diff --git a/drivers/net/wireless/ath/ath6kl/main.c b/drivers/net/wireless/ath/ath6kl/main.c index cb59016c723b..5e7ea838a921 100644 --- a/drivers/net/wireless/ath/ath6kl/main.c +++ b/drivers/net/wireless/ath/ath6kl/main.c @@ -389,6 +389,7 @@ void ath6kl_connect_ap_mode_bss(struct ath6kl_vif *vif, u16 channel) if (!ik->valid || ik->key_type != WAPI_CRYPT) break; /* for WAPI, we need to set the delayed group key, continue: */ + /* fall through */ case WPA_PSK_AUTH: case WPA2_PSK_AUTH: case (WPA_PSK_AUTH | WPA2_PSK_AUTH): diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c index 777acc564ac9..9d7ac1ab2d02 100644 --- a/drivers/net/wireless/ath/ath6kl/wmi.c +++ b/drivers/net/wireless/ath/ath6kl/wmi.c @@ -1849,9 +1849,9 @@ int ath6kl_wmi_connect_cmd(struct wmi *wmi, u8 if_idx, enum network_type nw_type, enum dot11_auth_mode dot11_auth_mode, enum auth_mode auth_mode, - enum crypto_type pairwise_crypto, + enum ath6kl_crypto_type pairwise_crypto, u8 pairwise_crypto_len, - enum crypto_type group_crypto, + enum ath6kl_crypto_type group_crypto, u8 group_crypto_len, int ssid_len, u8 *ssid, u8 *bssid, u16 channel, u32 ctrl_flags, u8 nw_subtype) @@ -2301,7 +2301,7 @@ int ath6kl_wmi_disctimeout_cmd(struct wmi *wmi, u8 if_idx, u8 timeout) } int ath6kl_wmi_addkey_cmd(struct wmi *wmi, u8 if_idx, u8 key_index, - enum crypto_type key_type, + enum ath6kl_crypto_type key_type, u8 key_usage, u8 key_len, u8 *key_rsc, unsigned int key_rsc_len, u8 *key_material, diff --git a/drivers/net/wireless/ath/ath6kl/wmi.h b/drivers/net/wireless/ath/ath6kl/wmi.h index a60bb49fe920..784940ba4c90 100644 --- a/drivers/net/wireless/ath/ath6kl/wmi.h +++ b/drivers/net/wireless/ath/ath6kl/wmi.h @@ -2556,9 +2556,9 @@ int ath6kl_wmi_connect_cmd(struct wmi *wmi, u8 if_idx, enum network_type nw_type, enum dot11_auth_mode dot11_auth_mode, enum auth_mode auth_mode, - enum crypto_type pairwise_crypto, + enum ath6kl_crypto_type pairwise_crypto, u8 pairwise_crypto_len, - enum crypto_type group_crypto, + enum ath6kl_crypto_type group_crypto, u8 group_crypto_len, int ssid_len, u8 *ssid, u8 *bssid, u16 channel, u32 ctrl_flags, u8 nw_subtype); @@ -2610,7 +2610,7 @@ int ath6kl_wmi_config_debug_module_cmd(struct wmi *wmi, u32 valid, u32 config); int ath6kl_wmi_get_stats_cmd(struct wmi *wmi, u8 if_idx); int ath6kl_wmi_addkey_cmd(struct wmi *wmi, u8 if_idx, u8 key_index, - enum crypto_type key_type, + enum ath6kl_crypto_type 
key_type, u8 key_usage, u8 key_len, u8 *key_rsc, unsigned int key_rsc_len, u8 *key_material, diff --git a/drivers/net/wireless/ath/ath9k/Kconfig b/drivers/net/wireless/ath/ath9k/Kconfig index 1f3523019509..ceca23a851d5 100644 --- a/drivers/net/wireless/ath/ath9k/Kconfig +++ b/drivers/net/wireless/ath/ath9k/Kconfig @@ -116,7 +116,7 @@ config ATH9K_DFS_CERTIFIED except increase code size. config ATH9K_DYNACK - bool "Atheros ath9k ACK timeout estimation algorithm (EXPERIMENTAL)" + bool "Atheros ath9k ACK timeout estimation algorithm" depends on ATH9K default n ---help--- diff --git a/drivers/net/wireless/ath/ath9k/ar5008_phy.c b/drivers/net/wireless/ath/ath9k/ar5008_phy.c index 11d6f975c87d..dae95402eb3a 100644 --- a/drivers/net/wireless/ath/ath9k/ar5008_phy.c +++ b/drivers/net/wireless/ath/ath9k/ar5008_phy.c @@ -586,7 +586,7 @@ static void ar5008_hw_init_chain_masks(struct ath_hw *ah) REG_WRITE(ah, AR_PHY_CAL_CHAINMASK, 0x7); break; } - /* else: fall through */ + /* fall through */ case 0x1: case 0x2: case 0x7: diff --git a/drivers/net/wireless/ath/ath9k/ar9002_phy.c b/drivers/net/wireless/ath/ath9k/ar9002_phy.c index 713291881208..6f32b8d2ec7f 100644 --- a/drivers/net/wireless/ath/ath9k/ar9002_phy.c +++ b/drivers/net/wireless/ath/ath9k/ar9002_phy.c @@ -119,7 +119,7 @@ static int ar9002_hw_set_channel(struct ath_hw *ah, struct ath9k_channel *chan) aModeRefSel = 2; if (aModeRefSel) break; - /* else: fall through */ + /* fall through */ case 1: default: aModeRefSel = 0; diff --git a/drivers/net/wireless/ath/ath9k/ar9003_mci.c b/drivers/net/wireless/ath/ath9k/ar9003_mci.c index 0fe9c8378249..9899661f9a60 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_mci.c +++ b/drivers/net/wireless/ath/ath9k/ar9003_mci.c @@ -1055,17 +1055,15 @@ void ar9003_mci_stop_bt(struct ath_hw *ah, bool save_fullsleep) static void ar9003_mci_send_2g5g_status(struct ath_hw *ah, bool wait_done) { struct ath9k_hw_mci *mci = &ah->btcoex_hw.mci; - u32 new_flags, to_set, to_clear; + u32 to_set, to_clear; if (!mci->update_2g5g || (mci->bt_state == MCI_BT_SLEEP)) return; if (mci->is_2g) { - new_flags = MCI_2G_FLAGS; to_clear = MCI_2G_FLAGS_CLEAR_MASK; to_set = MCI_2G_FLAGS_SET_MASK; } else { - new_flags = MCI_5G_FLAGS; to_clear = MCI_5G_FLAGS_CLEAR_MASK; to_set = MCI_5G_FLAGS_SET_MASK; } diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h index 21ba20981a80..0fca44e91a71 100644 --- a/drivers/net/wireless/ath/ath9k/ath9k.h +++ b/drivers/net/wireless/ath/ath9k/ath9k.h @@ -272,7 +272,7 @@ struct ath_node { #endif u8 key_idx[4]; - u32 ackto; + int ackto; struct list_head list; }; diff --git a/drivers/net/wireless/ath/ath9k/dynack.c b/drivers/net/wireless/ath/ath9k/dynack.c index 7334c9b09e82..f112fa5b2eac 100644 --- a/drivers/net/wireless/ath/ath9k/dynack.c +++ b/drivers/net/wireless/ath/ath9k/dynack.c @@ -29,9 +29,13 @@ * ath_dynack_ewma - EWMA (Exponentially Weighted Moving Average) calculation * */ -static inline u32 ath_dynack_ewma(u32 old, u32 new) +static inline int ath_dynack_ewma(int old, int new) { - return (new * (EWMA_DIV - EWMA_LEVEL) + old * EWMA_LEVEL) / EWMA_DIV; + if (old > 0) + return (new * (EWMA_DIV - EWMA_LEVEL) + + old * EWMA_LEVEL) / EWMA_DIV; + else + return new; } /** @@ -82,10 +86,10 @@ static inline bool ath_dynack_bssidmask(struct ath_hw *ah, const u8 *mac) */ static void ath_dynack_compute_ackto(struct ath_hw *ah) { - struct ath_node *an; - u32 to = 0; - struct ath_dynack *da = &ah->dynack; struct ath_common *common = ath9k_hw_common(ah); + struct ath_dynack *da 
= &ah->dynack; + struct ath_node *an; + int to = 0; list_for_each_entry(an, &da->nodes, list) if (an->ackto > to) @@ -144,7 +148,8 @@ static void ath_dynack_compute_to(struct ath_hw *ah) an->ackto = ath_dynack_ewma(an->ackto, ackto); ath_dbg(ath9k_hw_common(ah), DYNACK, - "%pM to %u\n", dst, an->ackto); + "%pM to %d [%u]\n", dst, + an->ackto, ackto); if (time_is_before_jiffies(da->lto)) { ath_dynack_compute_ackto(ah); da->lto = jiffies + COMPUTE_TO; @@ -166,18 +171,21 @@ static void ath_dynack_compute_to(struct ath_hw *ah) * @ah: ath hw * @skb: socket buffer * @ts: tx status info + * @sta: station pointer * */ void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, - struct ath_tx_status *ts) + struct ath_tx_status *ts, + struct ieee80211_sta *sta) { - u8 ridx; struct ieee80211_hdr *hdr; struct ath_dynack *da = &ah->dynack; struct ath_common *common = ath9k_hw_common(ah); struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + u32 dur = ts->duration; + u8 ridx; - if ((info->flags & IEEE80211_TX_CTL_NO_ACK) || !da->enabled) + if (!da->enabled || (info->flags & IEEE80211_TX_CTL_NO_ACK)) return; spin_lock_bh(&da->qlock); @@ -187,11 +195,19 @@ void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, /* late ACK */ if (ts->ts_status & ATH9K_TXERR_XRETRY) { if (ieee80211_is_assoc_req(hdr->frame_control) || - ieee80211_is_assoc_resp(hdr->frame_control)) { + ieee80211_is_assoc_resp(hdr->frame_control) || + ieee80211_is_auth(hdr->frame_control)) { ath_dbg(common, DYNACK, "late ack\n"); + ath9k_hw_setslottime(ah, (LATEACK_TO - 3) / 2); ath9k_hw_set_ack_timeout(ah, LATEACK_TO); ath9k_hw_set_cts_timeout(ah, LATEACK_TO); + if (sta) { + struct ath_node *an; + + an = (struct ath_node *)sta->drv_priv; + an->ackto = -1; + } da->lto = jiffies + LATEACK_DELAY; } @@ -202,14 +218,13 @@ void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, ridx = ts->ts_rateindex; da->st_rbf.ts[da->st_rbf.t_rb].tstamp = ts->ts_tstamp; - da->st_rbf.ts[da->st_rbf.t_rb].dur = ts->duration; ether_addr_copy(da->st_rbf.addr[da->st_rbf.t_rb].h_dest, hdr->addr1); ether_addr_copy(da->st_rbf.addr[da->st_rbf.t_rb].h_src, hdr->addr2); if (!(info->status.rates[ridx].flags & IEEE80211_TX_RC_MCS)) { - u32 phy, sifs; const struct ieee80211_rate *rate; struct ieee80211_tx_rate *rates = info->status.rates; + u32 phy; rate = &common->sbands[info->band].bitrates[rates[ridx].idx]; if (info->band == NL80211_BAND_2GHZ && @@ -218,19 +233,18 @@ void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, else phy = WLAN_RC_PHY_OFDM; - sifs = ath_dynack_get_sifs(ah, phy); - da->st_rbf.ts[da->st_rbf.t_rb].dur -= sifs; + dur -= ath_dynack_get_sifs(ah, phy); } - - ath_dbg(common, DYNACK, "{%pM} tx sample %u [dur %u][h %u-t %u]\n", - hdr->addr1, da->st_rbf.ts[da->st_rbf.t_rb].tstamp, - da->st_rbf.ts[da->st_rbf.t_rb].dur, da->st_rbf.h_rb, - (da->st_rbf.t_rb + 1) % ATH_DYN_BUF); + da->st_rbf.ts[da->st_rbf.t_rb].dur = dur; INCR(da->st_rbf.t_rb, ATH_DYN_BUF); if (da->st_rbf.t_rb == da->st_rbf.h_rb) INCR(da->st_rbf.h_rb, ATH_DYN_BUF); + ath_dbg(common, DYNACK, "{%pM} tx sample %u [dur %u][h %u-t %u]\n", + hdr->addr1, ts->ts_tstamp, dur, da->st_rbf.h_rb, + da->st_rbf.t_rb); + ath_dynack_compute_to(ah); spin_unlock_bh(&da->qlock); @@ -251,20 +265,19 @@ void ath_dynack_sample_ack_ts(struct ath_hw *ah, struct sk_buff *skb, struct ath_common *common = ath9k_hw_common(ah); struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; - if (!ath_dynack_bssidmask(ah, hdr->addr1) || !da->enabled) + if (!da->enabled || 
!ath_dynack_bssidmask(ah, hdr->addr1)) return; spin_lock_bh(&da->qlock); da->ack_rbf.tstamp[da->ack_rbf.t_rb] = ts; - ath_dbg(common, DYNACK, "rx sample %u [h %u-t %u]\n", - da->ack_rbf.tstamp[da->ack_rbf.t_rb], - da->ack_rbf.h_rb, (da->ack_rbf.t_rb + 1) % ATH_DYN_BUF); - INCR(da->ack_rbf.t_rb, ATH_DYN_BUF); if (da->ack_rbf.t_rb == da->ack_rbf.h_rb) INCR(da->ack_rbf.h_rb, ATH_DYN_BUF); + ath_dbg(common, DYNACK, "rx sample %u [h %u-t %u]\n", + ts, da->ack_rbf.h_rb, da->ack_rbf.t_rb); + ath_dynack_compute_to(ah); spin_unlock_bh(&da->qlock); diff --git a/drivers/net/wireless/ath/ath9k/dynack.h b/drivers/net/wireless/ath/ath9k/dynack.h index 6d7bef976742..cf60224d40df 100644 --- a/drivers/net/wireless/ath/ath9k/dynack.h +++ b/drivers/net/wireless/ath/ath9k/dynack.h @@ -86,7 +86,8 @@ void ath_dynack_node_deinit(struct ath_hw *ah, struct ath_node *an); void ath_dynack_init(struct ath_hw *ah); void ath_dynack_sample_ack_ts(struct ath_hw *ah, struct sk_buff *skb, u32 ts); void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, - struct ath_tx_status *ts); + struct ath_tx_status *ts, + struct ieee80211_sta *sta); #else static inline void ath_dynack_init(struct ath_hw *ah) {} static inline void ath_dynack_node_init(struct ath_hw *ah, @@ -97,7 +98,8 @@ static inline void ath_dynack_sample_ack_ts(struct ath_hw *ah, struct sk_buff *skb, u32 ts) {} static inline void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, - struct ath_tx_status *ts) {} + struct ath_tx_status *ts, + struct ieee80211_sta *sta) {} #endif #endif /* DYNACK_H */ diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c index bb319f22761f..8581d917635a 100644 --- a/drivers/net/wireless/ath/ath9k/hw.c +++ b/drivers/net/wireless/ath/ath9k/hw.c @@ -2279,6 +2279,7 @@ void ath9k_hw_beaconinit(struct ath_hw *ah, u32 next_beacon, u32 beacon_period) case NL80211_IFTYPE_ADHOC: REG_SET_BIT(ah, AR_TXCFG, AR_TXCFG_ADHOC_BEACON_ATIM_TX_POLICY); + /* fall through */ case NL80211_IFTYPE_MESH_POINT: case NL80211_IFTYPE_AP: REG_WRITE(ah, AR_NEXT_TBTT_TIMER, next_beacon); diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c index 25b3fc82d4ac..f448d5716639 100644 --- a/drivers/net/wireless/ath/ath9k/xmit.c +++ b/drivers/net/wireless/ath/ath9k/xmit.c @@ -629,7 +629,7 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq, if (bf == bf->bf_lastbf) ath_dynack_sample_tx_ts(sc->sc_ah, bf->bf_mpdu, - ts); + ts, sta); } ath_tx_complete_buf(sc, bf, txq, &bf_head, sta, ts, @@ -773,7 +773,8 @@ static void ath_tx_process_buffer(struct ath_softc *sc, struct ath_txq *txq, memcpy(info->control.rates, bf->rates, sizeof(info->control.rates)); ath_tx_rc_status(sc, bf, ts, 1, txok ? 
0 : 1, txok); - ath_dynack_sample_tx_ts(sc->sc_ah, bf->bf_mpdu, ts); + ath_dynack_sample_tx_ts(sc->sc_ah, bf->bf_mpdu, ts, + sta); } ath_tx_complete_buf(sc, bf, txq, bf_head, sta, ts, txok); } else diff --git a/drivers/net/wireless/ath/carl9170/rx.c b/drivers/net/wireless/ath/carl9170/rx.c index 705063259c8f..f7c2f19e81c1 100644 --- a/drivers/net/wireless/ath/carl9170/rx.c +++ b/drivers/net/wireless/ath/carl9170/rx.c @@ -766,6 +766,7 @@ static void carl9170_rx_untie_data(struct ar9170 *ar, u8 *buf, int len) goto drop; } + /* fall through */ case AR9170_RX_STATUS_MPDU_MIDDLE: /* These are just data + mac status */ diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c index 8c75651ede6c..2407931440ed 100644 --- a/drivers/net/wireless/ath/carl9170/tx.c +++ b/drivers/net/wireless/ath/carl9170/tx.c @@ -830,10 +830,12 @@ static bool carl9170_tx_rts_check(struct ar9170 *ar, case CARL9170_ERP_AUTO: if (ampdu) break; + /* fall through */ case CARL9170_ERP_MAC80211: if (!(rate->flags & IEEE80211_TX_RC_USE_RTS_CTS)) break; + /* fall through */ case CARL9170_ERP_RTS: if (likely(!multi)) @@ -854,6 +856,7 @@ static bool carl9170_tx_cts_check(struct ar9170 *ar, case CARL9170_ERP_MAC80211: if (!(rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)) break; + /* fall through */ case CARL9170_ERP_CTS: return true; diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c index d18e81fae5f1..9b2f9f543952 100644 --- a/drivers/net/wireless/ath/wil6210/cfg80211.c +++ b/drivers/net/wireless/ath/wil6210/cfg80211.c @@ -51,6 +51,19 @@ static struct ieee80211_channel wil_60ghz_channels[] = { CHAN60G(4, 0), }; +static void +wil_memdup_ie(u8 **pdst, size_t *pdst_len, const u8 *src, size_t src_len) +{ + kfree(*pdst); + *pdst = NULL; + *pdst_len = 0; + if (src_len > 0) { + *pdst = kmemdup(src, src_len, GFP_KERNEL); + if (*pdst) + *pdst_len = src_len; + } +} + static int wil_num_supported_channels(struct wil6210_priv *wil) { int num_channels = ARRAY_SIZE(wil_60ghz_channels); @@ -1441,11 +1454,19 @@ static int wil_cfg80211_add_key(struct wiphy *wiphy, rc = wmi_add_cipher_key(vif, key_index, mac_addr, params->key_len, params->key, key_usage); - if (!rc && !IS_ERR(cs)) + if (!rc && !IS_ERR(cs)) { + /* update local storage used for AP recovery */ + if (key_usage == WMI_KEY_USE_TX_GROUP && params->key && + params->key_len <= WMI_MAX_KEY_LEN) { + vif->gtk_index = key_index; + memcpy(vif->gtk, params->key, params->key_len); + vif->gtk_len = params->key_len; + } /* in FT set crypto will take place upon receiving * WMI_RING_EN_EVENTID event */ wil_set_crypto_rx(key_index, key_usage, cs, params); + } return rc; } @@ -1634,6 +1655,14 @@ static int _wil_cfg80211_set_ies(struct wil6210_vif *vif, u16 len = 0, proberesp_len = 0; u8 *ies = NULL, *proberesp; + /* update local storage used for AP recovery */ + wil_memdup_ie(&vif->proberesp, &vif->proberesp_len, bcon->probe_resp, + bcon->probe_resp_len); + wil_memdup_ie(&vif->proberesp_ies, &vif->proberesp_ies_len, + bcon->proberesp_ies, bcon->proberesp_ies_len); + wil_memdup_ie(&vif->assocresp_ies, &vif->assocresp_ies_len, + bcon->assocresp_ies, bcon->assocresp_ies_len); + proberesp = _wil_cfg80211_get_proberesp_ies(bcon->probe_resp, bcon->probe_resp_len, &proberesp_len); @@ -1735,6 +1764,9 @@ static int _wil_cfg80211_start_ap(struct wiphy *wiphy, vif->channel = chan; vif->hidden_ssid = hidden_ssid; vif->pbss = pbss; + vif->bi = bi; + memcpy(vif->ssid, ssid, ssid_len); + vif->ssid_len = ssid_len; 
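/*
 * Editor's note: aside, not part of the patch. The wil_memdup_ie()
 * helper introduced above is deliberately idempotent: it frees any
 * previous copy and zeroes the stored length before duplicating, so
 * calling it with src_len == 0 doubles as a "clear" operation; that is
 * exactly how wil_cfg80211_stop_ap() further below releases the saved
 * probe/assoc response data. Typical use, as in _wil_cfg80211_set_ies():
 *
 *	wil_memdup_ie(&vif->proberesp_ies, &vif->proberesp_ies_len,
 *		      bcon->proberesp_ies, bcon->proberesp_ies_len);
 *
 * If kmemdup() fails, the destination pointer stays NULL with a zero
 * length, so the AP recovery path sees an empty IE set rather than a
 * stale pointer.
 */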
netif_carrier_on(ndev); if (!wil_has_other_active_ifaces(wil, ndev, false, true)) @@ -1761,11 +1793,64 @@ out: return rc; } +void wil_cfg80211_ap_recovery(struct wil6210_priv *wil) +{ + int rc, i; + struct wiphy *wiphy = wil_to_wiphy(wil); + + for (i = 0; i < wil->max_vifs; i++) { + struct wil6210_vif *vif = wil->vifs[i]; + struct net_device *ndev; + struct cfg80211_beacon_data bcon = {}; + struct key_params key_params = {}; + + if (!vif || vif->ssid_len == 0) + continue; + + ndev = vif_to_ndev(vif); + bcon.proberesp_ies = vif->proberesp_ies; + bcon.assocresp_ies = vif->assocresp_ies; + bcon.probe_resp = vif->proberesp; + bcon.proberesp_ies_len = vif->proberesp_ies_len; + bcon.assocresp_ies_len = vif->assocresp_ies_len; + bcon.probe_resp_len = vif->proberesp_len; + + wil_info(wil, + "AP (vif %d) recovery: privacy %d, bi %d, channel %d, hidden %d, pbss %d\n", + i, vif->privacy, vif->bi, vif->channel, + vif->hidden_ssid, vif->pbss); + wil_hex_dump_misc("SSID ", DUMP_PREFIX_OFFSET, 16, 1, + vif->ssid, vif->ssid_len, true); + rc = _wil_cfg80211_start_ap(wiphy, ndev, + vif->ssid, vif->ssid_len, + vif->privacy, vif->bi, + vif->channel, &bcon, + vif->hidden_ssid, vif->pbss); + if (rc) { + wil_err(wil, "vif %d recovery failed (%d)\n", i, rc); + continue; + } + + if (!vif->privacy || vif->gtk_len == 0) + continue; + + key_params.key = vif->gtk; + key_params.key_len = vif->gtk_len; + key_params.seq_len = IEEE80211_GCMP_PN_LEN; + rc = wil_cfg80211_add_key(wiphy, ndev, vif->gtk_index, false, + NULL, &key_params); + if (rc) + wil_err(wil, "vif %d recovery add key failed (%d)\n", + i, rc); + } +} + static int wil_cfg80211_change_beacon(struct wiphy *wiphy, struct net_device *ndev, struct cfg80211_beacon_data *bcon) { struct wil6210_priv *wil = wiphy_to_wil(wiphy); + struct wireless_dev *wdev = ndev->ieee80211_ptr; struct wil6210_vif *vif = ndev_to_vif(ndev); int rc; u32 privacy = 0; @@ -1778,15 +1863,16 @@ static int wil_cfg80211_change_beacon(struct wiphy *wiphy, bcon->tail_len)) privacy = 1; + memcpy(vif->ssid, wdev->ssid, wdev->ssid_len); + vif->ssid_len = wdev->ssid_len; + /* in case privacy has changed, need to restart the AP */ if (vif->privacy != privacy) { - struct wireless_dev *wdev = ndev->ieee80211_ptr; - wil_dbg_misc(wil, "privacy changed %d=>%d. 
Restarting AP\n", vif->privacy, privacy); - rc = _wil_cfg80211_start_ap(wiphy, ndev, wdev->ssid, - wdev->ssid_len, privacy, + rc = _wil_cfg80211_start_ap(wiphy, ndev, vif->ssid, + vif->ssid_len, privacy, wdev->beacon_interval, vif->channel, bcon, vif->hidden_ssid, @@ -1876,6 +1962,12 @@ static int wil_cfg80211_stop_ap(struct wiphy *wiphy, wmi_pcp_stop(vif); clear_bit(wil_vif_ft_roam, vif->status); + vif->ssid_len = 0; + wil_memdup_ie(&vif->proberesp, &vif->proberesp_len, NULL, 0); + wil_memdup_ie(&vif->proberesp_ies, &vif->proberesp_ies_len, NULL, 0); + wil_memdup_ie(&vif->assocresp_ies, &vif->assocresp_ies_len, NULL, 0); + memset(vif->gtk, 0, WMI_MAX_KEY_LEN); + vif->gtk_len = 0; if (last) __wil_down(wil); @@ -1923,7 +2015,7 @@ static int wil_cfg80211_del_station(struct wiphy *wiphy, params->mac, params->reason_code, vif->mid); mutex_lock(&wil->mutex); - wil6210_disconnect(vif, params->mac, params->reason_code, false); + wil6210_disconnect(vif, params->mac, params->reason_code); mutex_unlock(&wil->mutex); return 0; diff --git a/drivers/net/wireless/ath/wil6210/debugfs.c b/drivers/net/wireless/ath/wil6210/debugfs.c index aa50813a0595..835c902b84c1 100644 --- a/drivers/net/wireless/ath/wil6210/debugfs.c +++ b/drivers/net/wireless/ath/wil6210/debugfs.c @@ -124,7 +124,7 @@ static void wil_print_ring(struct seq_file *s, struct wil6210_priv *wil, seq_puts(s, "}\n"); } -static int wil_ring_debugfs_show(struct seq_file *s, void *data) +static int ring_show(struct seq_file *s, void *data) { uint i; struct wil6210_priv *wil = s->private; @@ -183,18 +183,7 @@ static int wil_ring_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_ring_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_ring_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_ring = { - .open = wil_ring_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(ring); static void wil_print_sring(struct seq_file *s, struct wil6210_priv *wil, struct wil_status_ring *sring) @@ -240,7 +229,7 @@ static void wil_print_sring(struct seq_file *s, struct wil6210_priv *wil, seq_puts(s, "}\n"); } -static int wil_srings_debugfs_show(struct seq_file *s, void *data) +static int srings_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; int i = 0; @@ -251,18 +240,7 @@ static int wil_srings_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_srings_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_srings_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_srings = { - .open = wil_srings_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(srings); static void wil_seq_hexdump(struct seq_file *s, void *p, int len, const char *prefix) @@ -348,7 +326,7 @@ static void wil_print_mbox_ring(struct seq_file *s, const char *prefix, wil_halp_unvote(wil); } -static int wil_mbox_debugfs_show(struct seq_file *s, void *data) +static int mbox_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; int ret; @@ -366,18 +344,7 @@ static int wil_mbox_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_mbox_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_mbox_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_mbox = { - .open = wil_mbox_seq_open, - .release = 
single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(mbox); static int wil_debugfs_iomem_x32_set(void *data, u64 val) { @@ -624,7 +591,7 @@ static int wil6210_debugfs_create_ITR_CNT(struct wil6210_priv *wil, return 0; } -static int wil_memread_debugfs_show(struct seq_file *s, void *data) +static int memread_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; void __iomem *a; @@ -645,18 +612,7 @@ static int wil_memread_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_memread_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_memread_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_memread = { - .open = wil_memread_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(memread); static ssize_t wil_read_file_ioblob(struct file *file, char __user *user_buf, size_t count, loff_t *ppos) @@ -664,10 +620,10 @@ static ssize_t wil_read_file_ioblob(struct file *file, char __user *user_buf, enum { max_count = 4096 }; struct wil_blob_wrapper *wil_blob = file->private_data; struct wil6210_priv *wil = wil_blob->wil; - loff_t pos = *ppos; + loff_t aligned_pos, pos = *ppos; size_t available = wil_blob->blob.size; void *buf; - size_t ret; + size_t unaligned_bytes, aligned_count, ret; int rc; if (test_bit(wil_status_suspending, wil_blob->wil->status) || @@ -685,7 +641,12 @@ static ssize_t wil_read_file_ioblob(struct file *file, char __user *user_buf, if (count > max_count) count = max_count; - buf = kmalloc(count, GFP_KERNEL); + /* set pos to 4 bytes aligned */ + unaligned_bytes = pos % 4; + aligned_pos = pos - unaligned_bytes; + aligned_count = count + unaligned_bytes; + + buf = kmalloc(aligned_count, GFP_KERNEL); if (!buf) return -ENOMEM; @@ -696,9 +657,9 @@ static ssize_t wil_read_file_ioblob(struct file *file, char __user *user_buf, } wil_memcpy_fromio_32(buf, (const void __iomem *) - wil_blob->blob.data + pos, count); + wil_blob->blob.data + aligned_pos, aligned_count); - ret = copy_to_user(user_buf, buf, count); + ret = copy_to_user(user_buf, buf + unaligned_bytes, count); wil_pm_runtime_put(wil); @@ -962,6 +923,8 @@ static ssize_t wil_write_file_txmgmt(struct file *file, const char __user *buf, int rc; void *frame; + memset(¶ms, 0, sizeof(params)); + if (!len) return -EINVAL; @@ -1053,7 +1016,7 @@ static void wil_seq_print_skb(struct seq_file *s, struct sk_buff *skb) } /*---------Tx/Rx descriptor------------*/ -static int wil_txdesc_debugfs_show(struct seq_file *s, void *data) +static int txdesc_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; struct wil_ring *ring; @@ -1146,21 +1109,10 @@ static int wil_txdesc_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_txdesc_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_txdesc_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_txdesc = { - .open = wil_txdesc_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(txdesc); /*---------Tx/Rx status message------------*/ -static int wil_status_msg_debugfs_show(struct seq_file *s, void *data) +static int status_msg_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; int sring_idx = dbg_sring_index; @@ -1202,19 +1154,7 @@ static int wil_status_msg_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int 
wil_status_msg_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_status_msg_debugfs_show, - inode->i_private); -} - -static const struct file_operations fops_status_msg = { - .open = wil_status_msg_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(status_msg); static int wil_print_rx_buff(struct seq_file *s, struct list_head *lh) { @@ -1232,7 +1172,7 @@ static int wil_print_rx_buff(struct seq_file *s, struct list_head *lh) return i; } -static int wil_rx_buff_mgmt_debugfs_show(struct seq_file *s, void *data) +static int rx_buff_mgmt_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; struct wil_rx_buff_mgmt *rbm = &wil->rx_buff_mgmt; @@ -1257,19 +1197,7 @@ static int wil_rx_buff_mgmt_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_rx_buff_mgmt_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_rx_buff_mgmt_debugfs_show, - inode->i_private); -} - -static const struct file_operations fops_rx_buff_mgmt = { - .open = wil_rx_buff_mgmt_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(rx_buff_mgmt); /*---------beamforming------------*/ static char *wil_bfstatus_str(u32 status) @@ -1299,7 +1227,7 @@ static bool is_all_zeros(void * const x_, size_t sz) return true; } -static int wil_bf_debugfs_show(struct seq_file *s, void *data) +static int bf_show(struct seq_file *s, void *data) { int rc; int i; @@ -1353,18 +1281,7 @@ static int wil_bf_debugfs_show(struct seq_file *s, void *data) } return 0; } - -static int wil_bf_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_bf_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_bf = { - .open = wil_bf_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(bf); /*---------temp------------*/ static void print_temp(struct seq_file *s, const char *prefix, s32 t) @@ -1381,7 +1298,7 @@ static void print_temp(struct seq_file *s, const char *prefix, s32 t) } } -static int wil_temp_debugfs_show(struct seq_file *s, void *data) +static int temp_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; s32 t_m, t_r; @@ -1397,21 +1314,10 @@ static int wil_temp_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_temp_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_temp_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_temp = { - .open = wil_temp_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(temp); /*---------freq------------*/ -static int wil_freq_debugfs_show(struct seq_file *s, void *data) +static int freq_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; struct wireless_dev *wdev = wil->main_ndev->ieee80211_ptr; @@ -1421,21 +1327,10 @@ static int wil_freq_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_freq_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_freq_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_freq = { - .open = wil_freq_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(freq); /*---------link------------*/ -static int wil_link_debugfs_show(struct seq_file *s, void 
*data) +static int link_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; struct station_info *sinfo; @@ -1487,21 +1382,10 @@ out: kfree(sinfo); return rc; } - -static int wil_link_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_link_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_link = { - .open = wil_link_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(link); /*---------info------------*/ -static int wil_info_debugfs_show(struct seq_file *s, void *data) +static int info_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; struct net_device *ndev = wil->main_ndev; @@ -1536,18 +1420,7 @@ static int wil_info_debugfs_show(struct seq_file *s, void *data) #undef CHECK_QSTATE return 0; } - -static int wil_info_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_info_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_info = { - .open = wil_info_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(info); /*---------recovery------------*/ /* mode = [manual|auto] @@ -1663,7 +1536,7 @@ has_keys: seq_puts(s, "\n"); } -static int wil_sta_debugfs_show(struct seq_file *s, void *data) +static int sta_show(struct seq_file *s, void *data) __acquires(&p->tid_rx_lock) __releases(&p->tid_rx_lock) { struct wil6210_priv *wil = s->private; @@ -1745,20 +1618,9 @@ __acquires(&p->tid_rx_lock) __releases(&p->tid_rx_lock) return 0; } +DEFINE_SHOW_ATTRIBUTE(sta); -static int wil_sta_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_sta_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_sta = { - .open = wil_sta_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; - -static int wil_mids_debugfs_show(struct seq_file *s, void *data) +static int mids_show(struct seq_file *s, void *data) { struct wil6210_priv *wil = s->private; struct wil6210_vif *vif; @@ -1781,18 +1643,7 @@ static int wil_mids_debugfs_show(struct seq_file *s, void *data) return 0; } - -static int wil_mids_seq_open(struct inode *inode, struct file *file) -{ - return single_open(file, wil_mids_debugfs_show, inode->i_private); -} - -static const struct file_operations fops_mids = { - .open = wil_mids_seq_open, - .release = single_release, - .read = seq_read, - .llseek = seq_lseek, -}; +DEFINE_SHOW_ATTRIBUTE(mids); static int wil_tx_latency_debugfs_show(struct seq_file *s, void *data) __acquires(&p->tid_rx_lock) __releases(&p->tid_rx_lock) @@ -2436,23 +2287,23 @@ static const struct { umode_t mode; const struct file_operations *fops; } dbg_files[] = { - {"mbox", 0444, &fops_mbox}, - {"rings", 0444, &fops_ring}, - {"stations", 0444, &fops_sta}, - {"mids", 0444, &fops_mids}, - {"desc", 0444, &fops_txdesc}, - {"bf", 0444, &fops_bf}, - {"mem_val", 0644, &fops_memread}, + {"mbox", 0444, &mbox_fops}, + {"rings", 0444, &ring_fops}, + {"stations", 0444, &sta_fops}, + {"mids", 0444, &mids_fops}, + {"desc", 0444, &txdesc_fops}, + {"bf", 0444, &bf_fops}, + {"mem_val", 0644, &memread_fops}, {"rxon", 0244, &fops_rxon}, {"tx_mgmt", 0244, &fops_txmgmt}, {"wmi_send", 0244, &fops_wmi}, {"back", 0644, &fops_back}, {"pmccfg", 0644, &fops_pmccfg}, {"pmcdata", 0444, &fops_pmcdata}, - {"temp", 0444, &fops_temp}, - {"freq", 0444, &fops_freq}, - {"link", 0444, &fops_link}, - {"info", 0444, 
&fops_info}, + {"temp", 0444, &temp_fops}, + {"freq", 0444, &freq_fops}, + {"link", 0444, &link_fops}, + {"info", 0444, &info_fops}, {"recovery", 0644, &fops_recovery}, {"led_cfg", 0644, &fops_led_cfg}, {"led_blink_time", 0644, &fops_led_blink_time}, @@ -2460,9 +2311,9 @@ static const struct { {"fw_version", 0444, &fops_fw_version}, {"suspend_stats", 0644, &fops_suspend_stats}, {"compressed_rx_status", 0644, &fops_compressed_rx_status}, - {"srings", 0444, &fops_srings}, - {"status_msg", 0444, &fops_status_msg}, - {"rx_buff_mgmt", 0444, &fops_rx_buff_mgmt}, + {"srings", 0444, &srings_fops}, + {"status_msg", 0444, &status_msg_fops}, + {"rx_buff_mgmt", 0444, &rx_buff_mgmt_fops}, {"tx_latency", 0644, &fops_tx_latency}, {"link_stats", 0644, &fops_link_stats}, {"link_stats_global", 0644, &fops_link_stats_global}, diff --git a/drivers/net/wireless/ath/wil6210/main.c b/drivers/net/wireless/ath/wil6210/main.c index 398900a1c29e..5b7de00affe2 100644 --- a/drivers/net/wireless/ath/wil6210/main.c +++ b/drivers/net/wireless/ath/wil6210/main.c @@ -18,6 +18,7 @@ #include #include #include +#include <linux/rtnetlink.h> #include "wil6210.h" #include "txrx.h" @@ -80,7 +81,7 @@ static const struct kernel_param_ops mtu_max_ops = { module_param_cb(mtu_max, &mtu_max_ops, &mtu_max, 0444); MODULE_PARM_DESC(mtu_max, " Max MTU value."); -static uint rx_ring_order = WIL_RX_RING_SIZE_ORDER_DEFAULT; +static uint rx_ring_order; static uint tx_ring_order = WIL_TX_RING_SIZE_ORDER_DEFAULT; static uint bcast_ring_order = WIL_BCAST_RING_SIZE_ORDER_DEFAULT; @@ -214,8 +215,21 @@ static void wil_ring_fini_tx(struct wil6210_priv *wil, int id) wil->txrx_ops.ring_fini_tx(wil, ring); } -static void wil_disconnect_cid(struct wil6210_vif *vif, int cid, - u16 reason_code, bool from_event) +static bool wil_vif_is_connected(struct wil6210_priv *wil, u8 mid) +{ + int i; + + for (i = 0; i < WIL6210_MAX_CID; i++) { + if (wil->sta[i].mid == mid && + wil->sta[i].status == wil_sta_connected) + return true; + } + + return false; +} + +static void wil_disconnect_cid_complete(struct wil6210_vif *vif, int cid, + u16 reason_code) __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock) { uint i; @@ -226,24 +240,14 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock) int min_ring_id = wil_get_min_tx_ring_id(wil); might_sleep(); - wil_dbg_misc(wil, "disconnect_cid: CID %d, MID %d, status %d\n", + wil_dbg_misc(wil, + "disconnect_cid_complete: CID %d, MID %d, status %d\n", cid, sta->mid, sta->status); - /* inform upper/lower layers */ + /* inform upper layers */ if (sta->status != wil_sta_unused) { if (vif->mid != sta->mid) { wil_err(wil, "STA MID mismatch with VIF MID(%d)\n", vif->mid); - /* let FW override sta->mid but be more strict with - * user space requests - */ - if (!from_event) - return; - } - if (!from_event) { - bool del_sta = (wdev->iftype == NL80211_IFTYPE_AP) ? 
- disable_ap_sme : false; - wmi_disconnect_sta(vif, sta->addr, reason_code, - true, del_sta); } switch (wdev->iftype) { @@ -283,36 +287,20 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock) sta->stats.tx_latency_min_us = U32_MAX; } -static bool wil_vif_is_connected(struct wil6210_priv *wil, u8 mid) -{ - int i; - - for (i = 0; i < ARRAY_SIZE(wil->sta); i++) { - if (wil->sta[i].mid == mid && - wil->sta[i].status == wil_sta_connected) - return true; - } - - return false; -} - -static void _wil6210_disconnect(struct wil6210_vif *vif, const u8 *bssid, - u16 reason_code, bool from_event) +static void _wil6210_disconnect_complete(struct wil6210_vif *vif, + const u8 *bssid, u16 reason_code) { struct wil6210_priv *wil = vif_to_wil(vif); int cid = -ENOENT; struct net_device *ndev; struct wireless_dev *wdev; - if (unlikely(!vif)) - return; - ndev = vif_to_ndev(vif); wdev = vif_to_wdev(vif); might_sleep(); - wil_info(wil, "bssid=%pM, reason=%d, ev%s\n", bssid, - reason_code, from_event ? "+" : "-"); + wil_info(wil, "disconnect_complete: bssid=%pM, reason=%d\n", + bssid, reason_code); /* Cases are: * - disconnect single STA, still connected @@ -327,14 +315,15 @@ static void _wil6210_disconnect(struct wil6210_vif *vif, const u8 *bssid, if (bssid && !is_broadcast_ether_addr(bssid) && !ether_addr_equal_unaligned(ndev->dev_addr, bssid)) { cid = wil_find_cid(wil, vif->mid, bssid); - wil_dbg_misc(wil, "Disconnect %pM, CID=%d, reason=%d\n", + wil_dbg_misc(wil, + "Disconnect complete %pM, CID=%d, reason=%d\n", bssid, cid, reason_code); if (cid >= 0) /* disconnect 1 peer */ - wil_disconnect_cid(vif, cid, reason_code, from_event); + wil_disconnect_cid_complete(vif, cid, reason_code); } else { /* all */ - wil_dbg_misc(wil, "Disconnect all\n"); + wil_dbg_misc(wil, "Disconnect complete all\n"); for (cid = 0; cid < WIL6210_MAX_CID; cid++) - wil_disconnect_cid(vif, cid, reason_code, from_event); + wil_disconnect_cid_complete(vif, cid, reason_code); } /* link state */ @@ -380,6 +369,82 @@ static void _wil6210_disconnect(struct wil6210_vif *vif, const u8 *bssid, } } +static int wil_disconnect_cid(struct wil6210_vif *vif, int cid, + u16 reason_code) +{ + struct wil6210_priv *wil = vif_to_wil(vif); + struct wireless_dev *wdev = vif_to_wdev(vif); + struct wil_sta_info *sta = &wil->sta[cid]; + bool del_sta = false; + + might_sleep(); + wil_dbg_misc(wil, "disconnect_cid: CID %d, MID %d, status %d\n", + cid, sta->mid, sta->status); + + if (sta->status == wil_sta_unused) + return 0; + + if (vif->mid != sta->mid) { + wil_err(wil, "STA MID mismatch with VIF MID(%d)\n", vif->mid); + return -EINVAL; + } + + /* inform lower layers */ + if (wdev->iftype == NL80211_IFTYPE_AP && disable_ap_sme) + del_sta = true; + + /* disconnect by sending command disconnect/del_sta and wait + * synchronously for WMI_DISCONNECT_EVENTID event. 
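
[Aside: a minimal sketch of the command/completion split described in the comment above, with hedged, illustrative names — the demo_* identifiers are not the driver's API; mutex_lock()/mutex_unlock() and lockdep_assert_held() are the real kernel primitives. The caller already holds the driver mutex and the firmware-event handler takes the same mutex, so the completion logic is invoked inline rather than from the event path:]

	static int demo_disconnect(struct demo_dev *dd, const u8 *mac, u16 reason)
	{
		int rc;

		lockdep_assert_held(&dd->mutex);

		/* sends the command and waits for the firmware event */
		rc = demo_wmi_disconnect(dd, mac, reason);
		if (rc)
			return rc;

		/* the event handler would re-take dd->mutex and deadlock,
		 * so run the completion inline instead
		 */
		demo_disconnect_complete(dd, mac, reason);
		return 0;
	}
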
+ */ + return wmi_disconnect_sta(vif, sta->addr, reason_code, del_sta); +} + +static void _wil6210_disconnect(struct wil6210_vif *vif, const u8 *bssid, + u16 reason_code) +{ + struct wil6210_priv *wil; + struct net_device *ndev; + int cid = -ENOENT; + + if (unlikely(!vif)) + return; + + wil = vif_to_wil(vif); + ndev = vif_to_ndev(vif); + + might_sleep(); + wil_info(wil, "disconnect bssid=%pM, reason=%d\n", bssid, reason_code); + + /* Cases are: + * - disconnect single STA, still connected + * - disconnect single STA, already disconnected + * - disconnect all + * + * For "disconnect all", there are 3 options: + * - bssid == NULL + * - bssid is broadcast address (ff:ff:ff:ff:ff:ff) + * - bssid is our MAC address + */ + if (bssid && !is_broadcast_ether_addr(bssid) && + !ether_addr_equal_unaligned(ndev->dev_addr, bssid)) { + cid = wil_find_cid(wil, vif->mid, bssid); + wil_dbg_misc(wil, "Disconnect %pM, CID=%d, reason=%d\n", + bssid, cid, reason_code); + if (cid >= 0) /* disconnect 1 peer */ + wil_disconnect_cid(vif, cid, reason_code); + } else { /* all */ + wil_dbg_misc(wil, "Disconnect all\n"); + for (cid = 0; cid < WIL6210_MAX_CID; cid++) + wil_disconnect_cid(vif, cid, reason_code); + } + + /* call event handler manually after processing wmi_call, + * to avoid deadlock - disconnect event handler acquires + * wil->mutex while it is already held here + */ + _wil6210_disconnect_complete(vif, bssid, reason_code); +} + void wil_disconnect_worker(struct work_struct *work) { struct wil6210_vif *vif = container_of(work, @@ -485,10 +550,11 @@ static void wil_fw_error_worker(struct work_struct *work) if (wil_wait_for_recovery(wil) != 0) return; + rtnl_lock(); mutex_lock(&wil->mutex); /* Needs adaptation for multiple VIFs * need to go over all VIFs and consider the appropriate - * recovery. + * recovery because each one can have different iftype. */ switch (wdev->iftype) { case NL80211_IFTYPE_STATION: @@ -500,15 +566,24 @@ static void wil_fw_error_worker(struct work_struct *work) break; case NL80211_IFTYPE_AP: case NL80211_IFTYPE_P2P_GO: - wil_info(wil, "No recovery for AP-like interface\n"); - /* recovery in these modes is done by upper layers */ + if (no_fw_recovery) /* upper layers do recovery */ + break; + /* silent recovery, upper layers will see disconnect */ + __wil_down(wil); + __wil_up(wil); + mutex_unlock(&wil->mutex); + wil_cfg80211_ap_recovery(wil); + mutex_lock(&wil->mutex); + wil_info(wil, "... completed\n"); break; default: wil_err(wil, "No recovery - unknown interface type %d\n", wdev->iftype); break; } + mutex_unlock(&wil->mutex); + rtnl_unlock(); } static int wil_find_free_ring(struct wil6210_priv *wil) @@ -694,20 +769,41 @@ void wil6210_bus_request(struct wil6210_priv *wil, u32 kbps) * @vif: virtual interface context * @bssid: peer to disconnect, NULL to disconnect all * @reason_code: Reason code for the Disassociation frame - * @from_event: whether is invoked from FW event handler * - * Disconnect and release associated resources. If invoked not from the - * FW event handler, issue WMI command(s) to trigger MAC disconnect. + * Disconnect and release associated resources. Issue WMI + * command(s) to trigger MAC disconnect. 
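
[Aside on the wil_fw_error_worker() hunk above: the new silent AP recovery takes rtnl_lock() before wil->mutex, matching the order used by the netdev/cfg80211 callbacks it can race with; the reverse order would be an AB-BA deadlock. A condensed view of the ordering, using only names that appear in the patch itself:]

	rtnl_lock();			/* first: RTNL, as in ndo_open/ndo_stop */
	mutex_lock(&wil->mutex);	/* then the driver lock */

	__wil_down(wil);		/* silent datapath restart */
	__wil_up(wil);

	mutex_unlock(&wil->mutex);	/* dropped around cfg80211 reconfig */
	wil_cfg80211_ap_recovery(wil);
	mutex_lock(&wil->mutex);
	/* ... */
	mutex_unlock(&wil->mutex);
	rtnl_unlock();			/* released in reverse order */
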
When command was issued + * successfully, call the wil6210_disconnect_complete function + * to handle the event synchronously */ void wil6210_disconnect(struct wil6210_vif *vif, const u8 *bssid, - u16 reason_code, bool from_event) + u16 reason_code) +{ + struct wil6210_priv *wil = vif_to_wil(vif); + + wil_dbg_misc(wil, "disconnecting\n"); + + del_timer_sync(&vif->connect_timer); + _wil6210_disconnect(vif, bssid, reason_code); +} + +/** + * wil6210_disconnect_complete - handle disconnect event + * @vif: virtual interface context + * @bssid: peer to disconnect, NULL to disconnect all + * @reason_code: Reason code for the Disassociation frame + * + * Release associated resources and indicate upper layers the + * connection is terminated. + */ +void wil6210_disconnect_complete(struct wil6210_vif *vif, const u8 *bssid, + u16 reason_code) { struct wil6210_priv *wil = vif_to_wil(vif); - wil_dbg_misc(wil, "disconnect\n"); + wil_dbg_misc(wil, "got disconnect\n"); del_timer_sync(&vif->connect_timer); - _wil6210_disconnect(vif, bssid, reason_code, from_event); + _wil6210_disconnect_complete(vif, bssid, reason_code); } void wil_priv_deinit(struct wil6210_priv *wil) @@ -998,10 +1094,13 @@ static int wil_target_reset(struct wil6210_priv *wil, int no_flash) wil_dbg_misc(wil, "Resetting \"%s\"...\n", wil->hw_name); - /* Clear MAC link up */ - wil_s(wil, RGF_HP_CTRL, BIT(15)); - wil_s(wil, RGF_USER_CLKS_CTL_SW_RST_MASK_0, BIT_HPAL_PERST_FROM_PAD); - wil_s(wil, RGF_USER_CLKS_CTL_SW_RST_MASK_0, BIT_CAR_PERST_RST); + if (wil->hw_version < HW_VER_TALYN) { + /* Clear MAC link up */ + wil_s(wil, RGF_HP_CTRL, BIT(15)); + wil_s(wil, RGF_USER_CLKS_CTL_SW_RST_MASK_0, + BIT_HPAL_PERST_FROM_PAD); + wil_s(wil, RGF_USER_CLKS_CTL_SW_RST_MASK_0, BIT_CAR_PERST_RST); + } wil_halt_cpu(wil); @@ -1398,8 +1497,15 @@ static void wil_pre_fw_config(struct wil6210_priv *wil) wil6210_clear_irq(wil); /* CAF_ICR - clear and mask */ /* it is W1C, clear by writing back same value */ - wil_s(wil, RGF_CAF_ICR + offsetof(struct RGF_ICR, ICR), 0); - wil_w(wil, RGF_CAF_ICR + offsetof(struct RGF_ICR, IMV), ~0); + if (wil->hw_version < HW_VER_TALYN_MB) { + wil_s(wil, RGF_CAF_ICR + offsetof(struct RGF_ICR, ICR), 0); + wil_w(wil, RGF_CAF_ICR + offsetof(struct RGF_ICR, IMV), ~0); + } else { + wil_s(wil, + RGF_CAF_ICR_TALYN_MB + offsetof(struct RGF_ICR, ICR), 0); + wil_w(wil, RGF_CAF_ICR_TALYN_MB + + offsetof(struct RGF_ICR, IMV), ~0); + } /* clear PAL_UNIT_ICR (potential D0->D3 leftover) * In Talyn-MB host cannot access this register due to * access control, hence PAL_UNIT_ICR is cleared by the FW @@ -1511,7 +1617,7 @@ int wil_reset(struct wil6210_priv *wil, bool load_fw) if (vif) { cancel_work_sync(&vif->disconnect_worker); wil6210_disconnect(vif, NULL, - WLAN_REASON_DEAUTH_LEAVING, false); + WLAN_REASON_DEAUTH_LEAVING); } } wil_bcast_fini_all(wil); @@ -1681,7 +1787,12 @@ int __wil_up(struct wil6210_priv *wil) return rc; /* Rx RING. After MAC and beacon */ - rc = wil->txrx_ops.rx_init(wil, 1 << rx_ring_order); + if (rx_ring_order == 0) + rx_ring_order = wil->hw_version < HW_VER_TALYN_MB ? 
+ WIL_RX_RING_SIZE_ORDER_DEFAULT : + WIL_RX_RING_SIZE_ORDER_TALYN_DEFAULT; + + rc = wil->txrx_ops.rx_init(wil, rx_ring_order); if (rc) return rc; diff --git a/drivers/net/wireless/ath/wil6210/netdev.c b/drivers/net/wireless/ath/wil6210/netdev.c index 7a78a06bd356..b4e0eb1585b9 100644 --- a/drivers/net/wireless/ath/wil6210/netdev.c +++ b/drivers/net/wireless/ath/wil6210/netdev.c @@ -345,8 +345,7 @@ wil_vif_alloc(struct wil6210_priv *wil, const char *name, ndev->ieee80211_ptr = wdev; ndev->hw_features = NETIF_F_HW_CSUM | NETIF_F_RXCSUM | NETIF_F_SG | NETIF_F_GRO | - NETIF_F_TSO | NETIF_F_TSO6 | - NETIF_F_RXHASH; + NETIF_F_TSO | NETIF_F_TSO6; ndev->features |= ndev->hw_features; SET_NETDEV_DEV(ndev, wiphy_dev(wdev->wiphy)); @@ -513,7 +512,7 @@ void wil_vif_remove(struct wil6210_priv *wil, u8 mid) } mutex_lock(&wil->mutex); - wil6210_disconnect(vif, NULL, WLAN_REASON_DEAUTH_LEAVING, false); + wil6210_disconnect(vif, NULL, WLAN_REASON_DEAUTH_LEAVING); mutex_unlock(&wil->mutex); ndev = vif_to_ndev(vif); diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c index cc5f263cc965..3e1c831ab2fb 100644 --- a/drivers/net/wireless/ath/wil6210/txrx.c +++ b/drivers/net/wireless/ath/wil6210/txrx.c @@ -743,14 +743,6 @@ void wil_netif_rx_any(struct sk_buff *skb, struct net_device *ndev) stats = &wil->sta[cid].stats; - if (ndev->features & NETIF_F_RXHASH) - /* fake L4 to ensure it won't be re-calculated later - * set hash to any non-zero value to activate rps - * mechanism, core will be chosen according - * to user-level rps configuration. - */ - skb_set_hash(skb, 1, PKT_HASH_TYPE_L4); - skb_orphan(skb); if (security && (wil->txrx_ops.rx_crypto_check(wil, skb) != 0)) { @@ -880,7 +872,7 @@ static void wil_rx_buf_len_init(struct wil6210_priv *wil) } } -static int wil_rx_init(struct wil6210_priv *wil, u16 size) +static int wil_rx_init(struct wil6210_priv *wil, uint order) { struct wil_ring *vring = &wil->ring_rx; int rc; @@ -894,7 +886,7 @@ static int wil_rx_init(struct wil6210_priv *wil, u16 size) wil_rx_buf_len_init(wil); - vring->size = size; + vring->size = 1 << order; vring->is_rx = true; rc = wil_vring_alloc(wil, vring); if (rc) @@ -1403,6 +1395,8 @@ found: wil_dbg_txrx(wil, "BCAST DUP -> ring %d\n", i); wil_set_da_for_vring(wil, skb2, i); wil_tx_ring(wil, vif, v2, skb2); + /* successful call to wil_tx_ring takes skb2 ref */ + dev_kfree_skb_any(skb2); } else { wil_err(wil, "skb_copy failed\n"); } diff --git a/drivers/net/wireless/ath/wil6210/txrx_edma.c b/drivers/net/wireless/ath/wil6210/txrx_edma.c index 2bbae75b9a84..05a8348bd7b9 100644 --- a/drivers/net/wireless/ath/wil6210/txrx_edma.c +++ b/drivers/net/wireless/ath/wil6210/txrx_edma.c @@ -160,7 +160,7 @@ static int wil_ring_alloc_skb_edma(struct wil6210_priv *wil, struct wil_ring *ring, u32 i) { struct device *dev = wil_to_dev(wil); - unsigned int sz = ALIGN(wil->rx_buf_len, 4); + unsigned int sz = wil->rx_buf_len; dma_addr_t pa; u16 buff_id; struct list_head *active = &wil->rx_buff_mgmt.active; @@ -234,9 +234,10 @@ static int wil_rx_refill_edma(struct wil6210_priv *wil) struct wil_ring *ring = &wil->ring_rx; u32 next_head; int rc = 0; - u32 swtail = *ring->edma_rx_swtail.va; + ring->swtail = *ring->edma_rx_swtail.va; - for (; next_head = wil_ring_next_head(ring), (next_head != swtail); + for (; next_head = wil_ring_next_head(ring), + (next_head != ring->swtail); ring->swhead = next_head) { rc = wil_ring_alloc_skb_edma(wil, ring, ring->swhead); if (unlikely(rc)) { @@ -264,43 +265,26 @@ static void 
wil_move_all_rx_buff_to_free_list(struct wil6210_priv *wil, struct wil_ring *ring) { struct device *dev = wil_to_dev(wil); - u32 next_tail; - u32 swhead = (ring->swhead + 1) % ring->size; + struct list_head *active = &wil->rx_buff_mgmt.active; dma_addr_t pa; - u16 dmalen; - for (; next_tail = wil_ring_next_tail(ring), (next_tail != swhead); - ring->swtail = next_tail) { - struct wil_rx_enhanced_desc dd, *d = &dd; - struct wil_rx_enhanced_desc *_d = - (struct wil_rx_enhanced_desc *) - &ring->va[ring->swtail].rx.enhanced; - struct sk_buff *skb; - u16 buff_id; + while (!list_empty(active)) { + struct wil_rx_buff *rx_buff = + list_first_entry(active, struct wil_rx_buff, list); + struct sk_buff *skb = rx_buff->skb; - *d = *_d; - - /* Extract the SKB from the rx_buff management array */ - buff_id = __le16_to_cpu(d->mac.buff_id); - if (buff_id >= wil->rx_buff_mgmt.size) { - wil_err(wil, "invalid buff_id %d\n", buff_id); - continue; - } - skb = wil->rx_buff_mgmt.buff_arr[buff_id].skb; - wil->rx_buff_mgmt.buff_arr[buff_id].skb = NULL; if (unlikely(!skb)) { - wil_err(wil, "No Rx skb at buff_id %d\n", buff_id); + wil_err(wil, "No Rx skb at buff_id %d\n", rx_buff->id); } else { - pa = wil_rx_desc_get_addr_edma(&d->dma); - dmalen = le16_to_cpu(d->dma.length); - dma_unmap_single(dev, pa, dmalen, DMA_FROM_DEVICE); - + rx_buff->skb = NULL; + memcpy(&pa, skb->cb, sizeof(pa)); + dma_unmap_single(dev, pa, wil->rx_buf_len, + DMA_FROM_DEVICE); kfree_skb(skb); } /* Move the buffer from the active to the free list */ - list_move(&wil->rx_buff_mgmt.buff_arr[buff_id].list, - &wil->rx_buff_mgmt.free); + list_move(&rx_buff->list, &wil->rx_buff_mgmt.free); } } @@ -357,8 +341,8 @@ static int wil_init_rx_sring(struct wil6210_priv *wil, struct wil_status_ring *sring = &wil->srings[ring_id]; int rc; - wil_dbg_misc(wil, "init RX sring: size=%u, ring_id=%u\n", sring->size, - ring_id); + wil_dbg_misc(wil, "init RX sring: size=%u, ring_id=%u\n", + status_ring_size, ring_id); memset(&sring->rx_data, 0, sizeof(sring->rx_data)); @@ -602,20 +586,20 @@ static bool wil_is_rx_idle_edma(struct wil6210_priv *wil) static void wil_rx_buf_len_init_edma(struct wil6210_priv *wil) { + /* RX buffer size must be aligned to 4 bytes */ wil->rx_buf_len = rx_large_buf ? WIL_MAX_ETH_MTU : WIL_EDMA_RX_BUF_LEN_DEFAULT; } -static int wil_rx_init_edma(struct wil6210_priv *wil, u16 desc_ring_size) +static int wil_rx_init_edma(struct wil6210_priv *wil, uint desc_ring_order) { - u16 status_ring_size; + u16 status_ring_size, desc_ring_size = 1 << desc_ring_order; struct wil_ring *ring = &wil->ring_rx; int rc; size_t elem_size = wil->use_compressed_rx_status ? 
sizeof(struct wil_rx_status_compressed) : sizeof(struct wil_rx_status_extended); int i; - u16 max_rx_pl_per_desc; /* In SW reorder one must use extended status messages */ if (wil->use_compressed_rx_status && !wil->use_rx_hw_reordering) { @@ -623,7 +607,12 @@ static int wil_rx_init_edma(struct wil6210_priv *wil, u16 desc_ring_size) "compressed RX status cannot be used with SW reorder\n"); return -EINVAL; } - + if (wil->rx_status_ring_order <= desc_ring_order) + /* make sure sring is larger than desc ring */ + wil->rx_status_ring_order = desc_ring_order + 1; + if (wil->rx_buff_id_count <= desc_ring_size) + /* make sure we will not run out of buff_ids */ + wil->rx_buff_id_count = desc_ring_size + 512; if (wil->rx_status_ring_order < WIL_SRING_SIZE_ORDER_MIN || wil->rx_status_ring_order > WIL_SRING_SIZE_ORDER_MAX) wil->rx_status_ring_order = WIL_RX_SRING_SIZE_ORDER_DEFAULT; @@ -636,8 +625,6 @@ static int wil_rx_init_edma(struct wil6210_priv *wil, u16 desc_ring_size) wil_rx_buf_len_init_edma(wil); - max_rx_pl_per_desc = ALIGN(wil->rx_buf_len, 4); - /* Use debugfs dbg_num_rx_srings if set, reserve one sring for TX */ if (wil->num_rx_status_rings > WIL6210_MAX_STATUS_RINGS - 1) wil->num_rx_status_rings = WIL6210_MAX_STATUS_RINGS - 1; @@ -645,7 +632,7 @@ static int wil_rx_init_edma(struct wil6210_priv *wil, u16 desc_ring_size) wil_dbg_misc(wil, "rx_init: allocate %d status rings\n", wil->num_rx_status_rings); - rc = wil_wmi_cfg_def_rx_offload(wil, max_rx_pl_per_desc); + rc = wil_wmi_cfg_def_rx_offload(wil, wil->rx_buf_len); if (rc) return rc; @@ -834,23 +821,24 @@ static int wil_rx_error_check_edma(struct wil6210_priv *wil, wil_dbg_txrx(wil, "L2 RX error, l2_rx_status=0x%x\n", l2_rx_status); /* Due to HW issue, KEY error will trigger a MIC error */ - if (l2_rx_status & WIL_RX_EDMA_ERROR_MIC) { - wil_dbg_txrx(wil, - "L2 MIC/KEY error, dropping packet\n"); + if (l2_rx_status == WIL_RX_EDMA_ERROR_MIC) { + wil_err_ratelimited(wil, + "L2 MIC/KEY error, dropping packet\n"); stats->rx_mic_error++; } - if (l2_rx_status & WIL_RX_EDMA_ERROR_KEY) { - wil_dbg_txrx(wil, "L2 KEY error, dropping packet\n"); + if (l2_rx_status == WIL_RX_EDMA_ERROR_KEY) { + wil_err_ratelimited(wil, + "L2 KEY error, dropping packet\n"); stats->rx_key_error++; } - if (l2_rx_status & WIL_RX_EDMA_ERROR_REPLAY) { - wil_dbg_txrx(wil, - "L2 REPLAY error, dropping packet\n"); + if (l2_rx_status == WIL_RX_EDMA_ERROR_REPLAY) { + wil_err_ratelimited(wil, + "L2 REPLAY error, dropping packet\n"); stats->rx_replay++; } - if (l2_rx_status & WIL_RX_EDMA_ERROR_AMSDU) { - wil_dbg_txrx(wil, - "L2 AMSDU error, dropping packet\n"); + if (l2_rx_status == WIL_RX_EDMA_ERROR_AMSDU) { + wil_err_ratelimited(wil, + "L2 AMSDU error, dropping packet\n"); stats->rx_amsdu_error++; } return -EFAULT; @@ -881,7 +869,7 @@ static struct sk_buff *wil_sring_reap_rx_edma(struct wil6210_priv *wil, struct sk_buff *skb; dma_addr_t pa; struct wil_ring_rx_data *rxdata = &sring->rx_data; - unsigned int sz = ALIGN(wil->rx_buf_len, 4); + unsigned int sz = wil->rx_buf_len; struct wil_net_stats *stats = NULL; u16 dmalen; int cid; diff --git a/drivers/net/wireless/ath/wil6210/txrx_edma.h b/drivers/net/wireless/ath/wil6210/txrx_edma.h index a7fe9292fda3..343516a03a1e 100644 --- a/drivers/net/wireless/ath/wil6210/txrx_edma.h +++ b/drivers/net/wireless/ath/wil6210/txrx_edma.h @@ -23,9 +23,9 @@ #define WIL_SRING_SIZE_ORDER_MIN (WIL_RING_SIZE_ORDER_MIN) #define WIL_SRING_SIZE_ORDER_MAX (WIL_RING_SIZE_ORDER_MAX) /* RX sring order should be bigger than RX ring order */ -#define 
WIL_RX_SRING_SIZE_ORDER_DEFAULT (11) +#define WIL_RX_SRING_SIZE_ORDER_DEFAULT (12) #define WIL_TX_SRING_SIZE_ORDER_DEFAULT (12) -#define WIL_RX_BUFF_ARR_SIZE_DEFAULT (1536) +#define WIL_RX_BUFF_ARR_SIZE_DEFAULT (2600) #define WIL_DEFAULT_RX_STATUS_RING_ID 0 #define WIL_RX_DESC_RING_ID 0 diff --git a/drivers/net/wireless/ath/wil6210/wil6210.h b/drivers/net/wireless/ath/wil6210/wil6210.h index abb82018d3b4..0f3be3ffc6a2 100644 --- a/drivers/net/wireless/ath/wil6210/wil6210.h +++ b/drivers/net/wireless/ath/wil6210/wil6210.h @@ -81,6 +81,7 @@ static inline u32 WIL_GET_BITS(u32 x, int b0, int b1) #define WIL_TX_Q_LEN_DEFAULT (4000) #define WIL_RX_RING_SIZE_ORDER_DEFAULT (10) +#define WIL_RX_RING_SIZE_ORDER_TALYN_DEFAULT (11) #define WIL_TX_RING_SIZE_ORDER_DEFAULT (12) #define WIL_BCAST_RING_SIZE_ORDER_DEFAULT (7) #define WIL_BCAST_MCS0_LIMIT (1024) /* limit for MCS0 frame size */ @@ -319,6 +320,7 @@ struct RGF_ICR { /* MAC timer, usec, for packet lifetime */ #define RGF_MAC_MTRL_COUNTER_0 (0x886aa8) +#define RGF_CAF_ICR_TALYN_MB (0x8893d4) /* struct RGF_ICR */ #define RGF_CAF_ICR (0x88946c) /* struct RGF_ICR */ #define RGF_CAF_OSC_CONTROL (0x88afa4) #define BIT_CAF_OSC_XTAL_EN BIT(0) @@ -613,7 +615,7 @@ struct wil_txrx_ops { int cid, int tid); irqreturn_t (*irq_tx)(int irq, void *cookie); /* RX ops */ - int (*rx_init)(struct wil6210_priv *wil, u16 ring_size); + int (*rx_init)(struct wil6210_priv *wil, uint ring_order); void (*rx_fini)(struct wil6210_priv *wil); int (*wmi_addba_rx_resp)(struct wil6210_priv *wil, u8 mid, u8 cid, u8 tid, u8 token, u16 status, bool amsdu, @@ -848,6 +850,14 @@ struct wil6210_vif { u8 hidden_ssid; /* relevant in AP mode */ u32 ap_isolate; /* no intra-BSS communication */ bool pbss; + int bi; + u8 *proberesp, *proberesp_ies, *assocresp_ies; + size_t proberesp_len, proberesp_ies_len, assocresp_ies_len; + u8 ssid[IEEE80211_MAX_SSID_LEN]; + size_t ssid_len; + u8 gtk_index; + u8 gtk[WMI_MAX_KEY_LEN]; + size_t gtk_len; int bcast_ring; struct cfg80211_bss *bss; /* connected bss, relevant in STA mode */ int locally_generated_disc; /* relevant in STA mode */ @@ -1220,8 +1230,8 @@ int wmi_rx_chain_add(struct wil6210_priv *wil, struct wil_ring *vring); int wmi_update_ft_ies(struct wil6210_vif *vif, u16 ie_len, const void *ie); int wmi_rxon(struct wil6210_priv *wil, bool on); int wmi_get_temperature(struct wil6210_priv *wil, u32 *t_m, u32 *t_r); -int wmi_disconnect_sta(struct wil6210_vif *vif, const u8 *mac, - u16 reason, bool full_disconnect, bool del_sta); +int wmi_disconnect_sta(struct wil6210_vif *vif, const u8 *mac, u16 reason, + bool del_sta); int wmi_addba(struct wil6210_priv *wil, u8 mid, u8 ringid, u8 size, u16 timeout); int wmi_delba_tx(struct wil6210_priv *wil, u8 mid, u8 ringid, u16 reason); @@ -1276,6 +1286,7 @@ int wmi_stop_discovery(struct wil6210_vif *vif); int wil_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev, struct cfg80211_mgmt_tx_params *params, u64 *cookie); +void wil_cfg80211_ap_recovery(struct wil6210_priv *wil); int wil_cfg80211_iface_combinations_from_fw( struct wil6210_priv *wil, const struct wil_fw_record_concurrency *conc); @@ -1306,7 +1317,9 @@ void wil_abort_scan(struct wil6210_vif *vif, bool sync); void wil_abort_scan_all_vifs(struct wil6210_priv *wil, bool sync); void wil6210_bus_request(struct wil6210_priv *wil, u32 kbps); void wil6210_disconnect(struct wil6210_vif *vif, const u8 *bssid, - u16 reason_code, bool from_event); + u16 reason_code); +void wil6210_disconnect_complete(struct wil6210_vif *vif, const u8 *bssid, + u16 
reason_code); void wil_probe_client_flush(struct wil6210_vif *vif); void wil_probe_client_worker(struct work_struct *work); void wil_disconnect_worker(struct work_struct *work); diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c index 4859f0e43658..345f05969190 100644 --- a/drivers/net/wireless/ath/wil6210/wmi.c +++ b/drivers/net/wireless/ath/wil6210/wmi.c @@ -1018,7 +1018,7 @@ static void wmi_evt_connect(struct wil6210_vif *vif, int id, void *d, int len) wil_err(wil, "config tx vring failed for CID %d, rc (%d)\n", evt->cid, rc); wmi_disconnect_sta(vif, wil->sta[evt->cid].addr, - WLAN_REASON_UNSPECIFIED, false, false); + WLAN_REASON_UNSPECIFIED, false); } else { wil_info(wil, "successful connection to CID %d\n", evt->cid); } @@ -1112,7 +1112,24 @@ static void wmi_evt_disconnect(struct wil6210_vif *vif, int id, } mutex_lock(&wil->mutex); - wil6210_disconnect(vif, evt->bssid, reason_code, true); + wil6210_disconnect_complete(vif, evt->bssid, reason_code); + if (disable_ap_sme) { + struct wireless_dev *wdev = vif_to_wdev(vif); + struct net_device *ndev = vif_to_ndev(vif); + + /* disconnect event in disable_ap_sme mode means link loss */ + switch (wdev->iftype) { + /* AP-like interface */ + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_P2P_GO: + /* notify hostapd about link loss */ + cfg80211_cqm_pktloss_notify(ndev, evt->bssid, 0, + GFP_KERNEL); + break; + default: + break; + } + } mutex_unlock(&wil->mutex); } @@ -1637,7 +1654,7 @@ wmi_evt_auth_status(struct wil6210_vif *vif, int id, void *d, int len) return; fail: - wil6210_disconnect(vif, NULL, WLAN_REASON_PREV_AUTH_NOT_VALID, false); + wil6210_disconnect(vif, NULL, WLAN_REASON_PREV_AUTH_NOT_VALID); } static void @@ -1766,7 +1783,7 @@ wmi_evt_reassoc_status(struct wil6210_vif *vif, int id, void *d, int len) return; fail: - wil6210_disconnect(vif, NULL, WLAN_REASON_PREV_AUTH_NOT_VALID, false); + wil6210_disconnect(vif, NULL, WLAN_REASON_PREV_AUTH_NOT_VALID); } /** @@ -1949,16 +1966,17 @@ int wmi_call(struct wil6210_priv *wil, u16 cmdid, u8 mid, void *buf, u16 len, { int rc; unsigned long remain; + ulong flags; mutex_lock(&wil->wmi_mutex); - spin_lock(&wil->wmi_ev_lock); + spin_lock_irqsave(&wil->wmi_ev_lock, flags); wil->reply_id = reply_id; wil->reply_mid = mid; wil->reply_buf = reply; wil->reply_size = reply_size; reinit_completion(&wil->wmi_call); - spin_unlock(&wil->wmi_ev_lock); + spin_unlock_irqrestore(&wil->wmi_ev_lock, flags); rc = __wmi_send(wil, cmdid, mid, buf, len); if (rc) @@ -1978,12 +1996,12 @@ int wmi_call(struct wil6210_priv *wil, u16 cmdid, u8 mid, void *buf, u16 len, } out: - spin_lock(&wil->wmi_ev_lock); + spin_lock_irqsave(&wil->wmi_ev_lock, flags); wil->reply_id = 0; wil->reply_mid = U8_MAX; wil->reply_buf = NULL; wil->reply_size = 0; - spin_unlock(&wil->wmi_ev_lock); + spin_unlock_irqrestore(&wil->wmi_ev_lock, flags); mutex_unlock(&wil->wmi_mutex); @@ -2560,12 +2578,11 @@ int wmi_get_temperature(struct wil6210_priv *wil, u32 *t_bb, u32 *t_rf) return 0; } -int wmi_disconnect_sta(struct wil6210_vif *vif, const u8 *mac, - u16 reason, bool full_disconnect, bool del_sta) +int wmi_disconnect_sta(struct wil6210_vif *vif, const u8 *mac, u16 reason, + bool del_sta) { struct wil6210_priv *wil = vif_to_wil(vif); int rc; - u16 reason_code; struct wmi_disconnect_sta_cmd disc_sta_cmd = { .disconnect_reason = cpu_to_le16(reason), }; @@ -2598,21 +2615,8 @@ int wmi_disconnect_sta(struct wil6210_vif *vif, const u8 *mac, wil_fw_error_recovery(wil); return rc; } + wil->sinfo_gen++; - if 
(full_disconnect) { - /* call event handler manually after processing wmi_call, - * to avoid deadlock - disconnect event handler acquires - * wil->mutex while it is already held here - */ - reason_code = le16_to_cpu(reply.evt.protocol_reason_status); - - wil_dbg_wmi(wil, "Disconnect %pM reason [proto %d wmi %d]\n", - reply.evt.bssid, reason_code, - reply.evt.disconnect_reason); - - wil->sinfo_gen++; - wil6210_disconnect(vif, reply.evt.bssid, reason_code, true); - } return 0; } @@ -3145,7 +3149,7 @@ static void wmi_event_handle(struct wil6210_priv *wil, if (mid == MID_BROADCAST) mid = 0; - if (mid >= wil->max_vifs) { + if (mid >= ARRAY_SIZE(wil->vifs) || mid >= wil->max_vifs) { wil_dbg_wmi(wil, "invalid mid %d, event skipped\n", mid); return; diff --git a/drivers/net/wireless/broadcom/b43/Kconfig b/drivers/net/wireless/broadcom/b43/Kconfig index fba856032ca5..3e4145747b20 100644 --- a/drivers/net/wireless/broadcom/b43/Kconfig +++ b/drivers/net/wireless/broadcom/b43/Kconfig @@ -4,6 +4,7 @@ config B43 select BCMA if B43_BCMA select SSB if B43_SSB select FW_LOADER + select CORDIC ---help--- b43 is a driver for the Broadcom 43xx series wireless devices. diff --git a/drivers/net/wireless/broadcom/b43/phy_common.c b/drivers/net/wireless/broadcom/b43/phy_common.c index 85f2ca989565..98c4fa5b919c 100644 --- a/drivers/net/wireless/broadcom/b43/phy_common.c +++ b/drivers/net/wireless/broadcom/b43/phy_common.c @@ -604,50 +604,3 @@ void b43_phy_force_clock(struct b43_wldev *dev, bool force) #endif } } - -/* http://bcm-v4.sipsolutions.net/802.11/PHY/Cordic */ -struct b43_c32 b43_cordic(int theta) -{ - static const u32 arctg[] = { - 2949120, 1740967, 919879, 466945, 234379, 117304, - 58666, 29335, 14668, 7334, 3667, 1833, - 917, 458, 229, 115, 57, 29, - }; - u8 i; - s32 tmp; - s8 signx = 1; - u32 angle = 0; - struct b43_c32 ret = { .i = 39797, .q = 0, }; - - while (theta > (180 << 16)) - theta -= (360 << 16); - while (theta < -(180 << 16)) - theta += (360 << 16); - - if (theta > (90 << 16)) { - theta -= (180 << 16); - signx = -1; - } else if (theta < -(90 << 16)) { - theta += (180 << 16); - signx = -1; - } - - for (i = 0; i <= 17; i++) { - if (theta > angle) { - tmp = ret.i - (ret.q >> i); - ret.q += ret.i >> i; - ret.i = tmp; - angle += arctg[i]; - } else { - tmp = ret.i + (ret.q >> i); - ret.q -= ret.i >> i; - ret.i = tmp; - angle -= arctg[i]; - } - } - - ret.i *= signx; - ret.q *= signx; - - return ret; -} diff --git a/drivers/net/wireless/broadcom/b43/phy_common.h b/drivers/net/wireless/broadcom/b43/phy_common.h index 57a1ad8afa08..4213caca9117 100644 --- a/drivers/net/wireless/broadcom/b43/phy_common.h +++ b/drivers/net/wireless/broadcom/b43/phy_common.h @@ -7,13 +7,6 @@ struct b43_wldev; -/* Complex number using 2 32-bit signed integers */ -struct b43_c32 { s32 i, q; }; - -#define CORDIC_CONVERT(value) (((value) >= 0) ? 
\ - ((((value) >> 15) + 1) >> 1) : \ - -((((-(value)) >> 15) + 1) >> 1)) - /* PHY register routing bits */ #define B43_PHYROUTE 0x0C00 /* PHY register routing bits mask */ #define B43_PHYROUTE_BASE 0x0000 /* Base registers */ @@ -450,6 +443,4 @@ bool b43_is_40mhz(struct b43_wldev *dev); void b43_phy_force_clock(struct b43_wldev *dev, bool force); -struct b43_c32 b43_cordic(int theta); - #endif /* LINUX_B43_PHY_COMMON_H_ */ diff --git a/drivers/net/wireless/broadcom/b43/phy_lp.c b/drivers/net/wireless/broadcom/b43/phy_lp.c index 6922cbb99a04..46408a560814 100644 --- a/drivers/net/wireless/broadcom/b43/phy_lp.c +++ b/drivers/net/wireless/broadcom/b43/phy_lp.c @@ -23,6 +23,7 @@ */ +#include <linux/cordic.h> #include #include "b43.h" @@ -1780,9 +1781,9 @@ static void lpphy_start_tx_tone(struct b43_wldev *dev, s32 freq, u16 max) { struct b43_phy_lp *lpphy = dev->phy.lp; u16 buf[64]; - int i, samples = 0, angle = 0; + int i, samples = 0, theta = 0; int rotation = (((36 * freq) / 20) << 16) / 100; - struct b43_c32 sample; + struct cordic_iq sample; lpphy->tx_tone_freq = freq; @@ -1798,10 +1799,10 @@ static void lpphy_start_tx_tone(struct b43_wldev *dev, s32 freq, u16 max) } for (i = 0; i < samples; i++) { - sample = b43_cordic(angle); - angle += rotation; - buf[i] = CORDIC_CONVERT((sample.i * max) & 0xFF) << 8; - buf[i] |= CORDIC_CONVERT((sample.q * max) & 0xFF); + sample = cordic_calc_iq(CORDIC_FIXED(theta)); + theta += rotation; + buf[i] = CORDIC_FLOAT((sample.i * max) & 0xFF) << 8; + buf[i] |= CORDIC_FLOAT((sample.q * max) & 0xFF); } b43_lptab_write_bulk(dev, B43_LPTAB16(5, 0), samples, buf); diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c index 44ab080d6518..77d7cd5563c4 100644 --- a/drivers/net/wireless/broadcom/b43/phy_n.c +++ b/drivers/net/wireless/broadcom/b43/phy_n.c @@ -23,6 +23,7 @@ */ +#include <linux/cordic.h> #include #include #include @@ -1513,7 +1514,7 @@ static void b43_radio_init2055(struct b43_wldev *dev) /* http://bcm-v4.sipsolutions.net/802.11/PHY/N/LoadSampleTable */ static int b43_nphy_load_samples(struct b43_wldev *dev, - struct b43_c32 *samples, u16 len) { + struct cordic_iq *samples, u16 len) { struct b43_phy_n *nphy = dev->phy.n; u16 i; u32 *data; @@ -1544,7 +1545,7 @@ static u16 b43_nphy_gen_load_samples(struct b43_wldev *dev, u32 freq, u16 max, { int i; u16 bw, len, rot, angle; - struct b43_c32 *samples; + struct cordic_iq *samples; bw = b43_is_40mhz(dev) ? 
40 : 20; len = bw << 3; @@ -1561,7 +1562,7 @@ static u16 b43_nphy_gen_load_samples(struct b43_wldev *dev, u32 freq, u16 max, len = bw << 1; } - samples = kcalloc(len, sizeof(struct b43_c32), GFP_KERNEL); + samples = kcalloc(len, sizeof(struct cordic_iq), GFP_KERNEL); if (!samples) { b43err(dev->wl, "allocation for samples generation failed\n"); return 0; @@ -1570,10 +1571,10 @@ static u16 b43_nphy_gen_load_samples(struct b43_wldev *dev, u32 freq, u16 max, angle = 0; for (i = 0; i < len; i++) { - samples[i] = b43_cordic(angle); + samples[i] = cordic_calc_iq(CORDIC_FIXED(angle)); angle += rot; - samples[i].q = CORDIC_CONVERT(samples[i].q * max); - samples[i].i = CORDIC_CONVERT(samples[i].i * max); + samples[i].q = CORDIC_FLOAT(samples[i].q * max); + samples[i].i = CORDIC_FLOAT(samples[i].i * max); } i = b43_nphy_load_samples(dev, samples, len); @@ -5894,7 +5895,6 @@ static enum b43_txpwr_result b43_nphy_op_recalc_txpower(struct b43_wldev *dev, struct ieee80211_channel *channel = dev->wl->hw->conf.chandef.chan; struct b43_ppr *ppr = &nphy->tx_pwr_max_ppr; u8 max; /* qdBm */ - bool tx_pwr_state; if (nphy->tx_pwr_last_recalc_freq == channel->center_freq && nphy->tx_pwr_last_recalc_limit == phy->desired_txpower) @@ -5930,7 +5930,6 @@ static enum b43_txpwr_result b43_nphy_op_recalc_txpower(struct b43_wldev *dev, b43_ppr_apply_min(dev, ppr, INT_TO_Q52(8)); /* Apply */ - tx_pwr_state = nphy->txpwrctrl; b43_mac_suspend(dev); b43_nphy_tx_power_ctl_setup(dev); if (dev->dev->core_rev == 11 || dev->dev->core_rev == 12) { @@ -6043,7 +6042,6 @@ static int b43_phy_initn(struct b43_wldev *dev) u8 tx_pwr_state; struct nphy_txgains target; u16 tmp; - enum nl80211_band tmp2; bool do_rssi_cal; u16 clip[2]; @@ -6137,7 +6135,6 @@ static int b43_phy_initn(struct b43_wldev *dev) b43_phy_write(dev, B43_NPHY_DUP40_BL, 0x9A4); } - tmp2 = b43_current_band(dev->wl); if (b43_nphy_ipa(dev)) { b43_phy_set(dev, B43_NPHY_PAPD_EN0, 0x1); b43_phy_maskset(dev, B43_NPHY_EPS_TABLE_ADJ0, 0x007F, diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile index 1f5a9b948abf..22fd95a736a8 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile @@ -54,3 +54,5 @@ brcmfmac-$(CONFIG_BRCM_TRACING) += \ tracepoint.o brcmfmac-$(CONFIG_OF) += \ of.o +brcmfmac-$(CONFIG_DMI) += \ + dmi.o diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c index 3e37c8cf82c6..d64bf233b12c 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c @@ -342,6 +342,37 @@ static int brcmf_sdiod_skbuff_write(struct brcmf_sdio_dev *sdiodev, return err; } +static int mmc_submit_one(struct mmc_data *md, struct mmc_request *mr, + struct mmc_command *mc, int sg_cnt, int req_sz, + int func_blk_sz, u32 *addr, + struct brcmf_sdio_dev *sdiodev, + struct sdio_func *func, int write) +{ + int ret; + + md->sg_len = sg_cnt; + md->blocks = req_sz / func_blk_sz; + mc->arg |= (*addr & 0x1FFFF) << 9; /* address */ + mc->arg |= md->blocks & 0x1FF; /* block count */ + /* incrementing addr for function 1 */ + if (func->num == 1) + *addr += req_sz; + + mmc_set_data_timeout(md, func->card); + mmc_wait_for_req(func->card->host, mr); + + ret = mc->error ? 
mc->error : md->error; + if (ret == -ENOMEDIUM) { + brcmf_sdiod_change_state(sdiodev, BRCMF_SDIOD_NOMEDIUM); + } else if (ret != 0) { + brcmf_err("CMD53 sg block %s failed %d\n", + write ? "write" : "read", ret); + ret = -EIO; + } + + return ret; +} + /** * brcmf_sdiod_sglist_rw - SDIO interface function for block data access * @sdiodev: brcmfmac sdio device @@ -360,11 +391,11 @@ static int brcmf_sdiod_sglist_rw(struct brcmf_sdio_dev *sdiodev, struct sk_buff_head *pktlist) { unsigned int req_sz, func_blk_sz, sg_cnt, sg_data_sz, pkt_offset; - unsigned int max_req_sz, orig_offset, dst_offset; - unsigned short max_seg_cnt, seg_sz; + unsigned int max_req_sz, src_offset, dst_offset; unsigned char *pkt_data, *orig_data, *dst_data; - struct sk_buff *pkt_next = NULL, *local_pkt_next; struct sk_buff_head local_list, *target_list; + struct sk_buff *pkt_next = NULL, *src; + unsigned short max_seg_cnt; struct mmc_request mmc_req; struct mmc_command mmc_cmd; struct mmc_data mmc_dat; @@ -404,9 +435,6 @@ static int brcmf_sdiod_sglist_rw(struct brcmf_sdio_dev *sdiodev, max_req_sz = sdiodev->max_request_size; max_seg_cnt = min_t(unsigned short, sdiodev->max_segment_count, target_list->qlen); - seg_sz = target_list->qlen; - pkt_offset = 0; - pkt_next = target_list->next; memset(&mmc_req, 0, sizeof(struct mmc_request)); memset(&mmc_cmd, 0, sizeof(struct mmc_command)); @@ -425,12 +453,12 @@ static int brcmf_sdiod_sglist_rw(struct brcmf_sdio_dev *sdiodev, mmc_req.cmd = &mmc_cmd; mmc_req.data = &mmc_dat; - while (seg_sz) { - req_sz = 0; - sg_cnt = 0; - sgl = sdiodev->sgtable.sgl; - /* prep sg table */ - while (pkt_next != (struct sk_buff *)target_list) { + req_sz = 0; + sg_cnt = 0; + sgl = sdiodev->sgtable.sgl; + skb_queue_walk(target_list, pkt_next) { + pkt_offset = 0; + while (pkt_offset < pkt_next->len) { pkt_data = pkt_next->data + pkt_offset; sg_data_sz = pkt_next->len - pkt_offset; if (sg_data_sz > sdiodev->max_segment_size) @@ -439,72 +467,55 @@ static int brcmf_sdiod_sglist_rw(struct brcmf_sdio_dev *sdiodev, sg_data_sz = max_req_sz - req_sz; sg_set_buf(sgl, pkt_data, sg_data_sz); - sg_cnt++; + sgl = sg_next(sgl); req_sz += sg_data_sz; pkt_offset += sg_data_sz; - if (pkt_offset == pkt_next->len) { - pkt_offset = 0; - pkt_next = pkt_next->next; + if (req_sz >= max_req_sz || sg_cnt >= max_seg_cnt) { + ret = mmc_submit_one(&mmc_dat, &mmc_req, &mmc_cmd, + sg_cnt, req_sz, func_blk_sz, + &addr, sdiodev, func, write); + if (ret) + goto exit_queue_walk; + req_sz = 0; + sg_cnt = 0; + sgl = sdiodev->sgtable.sgl; } - - if (req_sz >= max_req_sz || sg_cnt >= max_seg_cnt) - break; - } - seg_sz -= sg_cnt; - - if (req_sz % func_blk_sz != 0) { - brcmf_err("sg request length %u is not %u aligned\n", - req_sz, func_blk_sz); - ret = -ENOTBLK; - goto exit; - } - - mmc_dat.sg_len = sg_cnt; - mmc_dat.blocks = req_sz / func_blk_sz; - mmc_cmd.arg |= (addr & 0x1FFFF) << 9; /* address */ - mmc_cmd.arg |= mmc_dat.blocks & 0x1FF; /* block count */ - /* incrementing addr for function 1 */ - if (func->num == 1) - addr += req_sz; - - mmc_set_data_timeout(&mmc_dat, func->card); - mmc_wait_for_req(func->card->host, &mmc_req); - - ret = mmc_cmd.error ? mmc_cmd.error : mmc_dat.error; - if (ret == -ENOMEDIUM) { - brcmf_sdiod_change_state(sdiodev, BRCMF_SDIOD_NOMEDIUM); - break; - } else if (ret != 0) { - brcmf_err("CMD53 sg block %s failed %d\n", - write ? 
"write" : "read", ret); - ret = -EIO; - break; } } - + if (sg_cnt) + ret = mmc_submit_one(&mmc_dat, &mmc_req, &mmc_cmd, + sg_cnt, req_sz, func_blk_sz, + &addr, sdiodev, func, write); +exit_queue_walk: if (!write && sdiodev->settings->bus.sdio.broken_sg_support) { - local_pkt_next = local_list.next; - orig_offset = 0; + src = __skb_peek(&local_list); + src_offset = 0; skb_queue_walk(pktlist, pkt_next) { dst_offset = 0; - do { - req_sz = local_pkt_next->len - orig_offset; - req_sz = min_t(uint, pkt_next->len - dst_offset, - req_sz); - orig_data = local_pkt_next->data + orig_offset; + + /* This is safe because we must have enough SKB data + * in the local list to cover everything in pktlist. + */ + while (1) { + req_sz = pkt_next->len - dst_offset; + if (req_sz > src->len - src_offset) + req_sz = src->len - src_offset; + + orig_data = src->data + src_offset; dst_data = pkt_next->data + dst_offset; memcpy(dst_data, orig_data, req_sz); - orig_offset += req_sz; - dst_offset += req_sz; - if (orig_offset == local_pkt_next->len) { - orig_offset = 0; - local_pkt_next = local_pkt_next->next; + + src_offset += req_sz; + if (src_offset == src->len) { + src_offset = 0; + src = skb_peek_next(src, &local_list); } + dst_offset += req_sz; if (dst_offset == pkt_next->len) break; - } while (!skb_queue_empty(&local_list)); + } } } @@ -972,6 +983,7 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = { BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4354), BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4356), BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_CYPRESS_4373), + BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_CYPRESS_43012), { /* end: all zeroes */ } }; MODULE_DEVICE_TABLE(sdio, brcmf_sdmmc_ids); diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c index 7f0a5bade70a..35301237d435 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c @@ -5196,10 +5196,17 @@ static struct cfg80211_ops brcmf_cfg80211_ops = { .del_pmk = brcmf_cfg80211_del_pmk, }; -struct cfg80211_ops *brcmf_cfg80211_get_ops(void) +struct cfg80211_ops *brcmf_cfg80211_get_ops(struct brcmf_mp_device *settings) { - return kmemdup(&brcmf_cfg80211_ops, sizeof(brcmf_cfg80211_ops), + struct cfg80211_ops *ops; + + ops = kmemdup(&brcmf_cfg80211_ops, sizeof(brcmf_cfg80211_ops), GFP_KERNEL); + + if (ops && settings->roamoff) + ops->update_connect_params = NULL; + + return ops; } struct brcmf_cfg80211_vif *brcmf_alloc_vif(struct brcmf_cfg80211_info *cfg, @@ -6309,6 +6316,16 @@ brcmf_txrx_stypes[NUM_NL80211_IFTYPES] = { .tx = 0xffff, .rx = BIT(IEEE80211_STYPE_ACTION >> 4) | BIT(IEEE80211_STYPE_PROBE_REQ >> 4) + }, + [NL80211_IFTYPE_AP] = { + .tx = 0xffff, + .rx = BIT(IEEE80211_STYPE_ASSOC_REQ >> 4) | + BIT(IEEE80211_STYPE_REASSOC_REQ >> 4) | + BIT(IEEE80211_STYPE_PROBE_REQ >> 4) | + BIT(IEEE80211_STYPE_DISASSOC >> 4) | + BIT(IEEE80211_STYPE_AUTH >> 4) | + BIT(IEEE80211_STYPE_DEAUTH >> 4) | + BIT(IEEE80211_STYPE_ACTION >> 4) } }; @@ -6639,6 +6656,12 @@ static s32 brcmf_config_dongle(struct brcmf_cfg80211_info *cfg) brcmf_configure_arp_nd_offload(ifp, true); + err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_FAKEFRAG, 1); + if (err) { + brcmf_err("failed to set frameburst mode\n"); + goto default_conf_out; + } + cfg->dongle_up = true; default_conf_out: diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h index a4aec0004e4f..9a6287f084a9 100644 --- 
a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h @@ -404,7 +404,7 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr, void brcmf_cfg80211_detach(struct brcmf_cfg80211_info *cfg); s32 brcmf_cfg80211_up(struct net_device *ndev); s32 brcmf_cfg80211_down(struct net_device *ndev); -struct cfg80211_ops *brcmf_cfg80211_get_ops(void); +struct cfg80211_ops *brcmf_cfg80211_get_ops(struct brcmf_mp_device *settings); enum nl80211_iftype brcmf_cfg80211_get_iftype(struct brcmf_if *ifp); struct brcmf_cfg80211_vif *brcmf_alloc_vif(struct brcmf_cfg80211_info *cfg, diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c index 927d62b3d41b..22534bf2a90c 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c @@ -165,6 +165,7 @@ struct sbconfig { #define SRCI_LSS_MASK 0x00f00000 #define SRCI_LSS_SHIFT 20 #define SRCI_SRNB_MASK 0xf0 +#define SRCI_SRNB_MASK_EXT 0x100 #define SRCI_SRNB_SHIFT 4 #define SRCI_SRBSZ_MASK 0xf #define SRCI_SRBSZ_SHIFT 0 @@ -592,7 +593,13 @@ static void brcmf_chip_socram_ramsize(struct brcmf_core_priv *sr, u32 *ramsize, if (lss != 0) *ramsize += (1 << ((lss - 1) + SR_BSZ_BASE)); } else { - nb = (coreinfo & SRCI_SRNB_MASK) >> SRCI_SRNB_SHIFT; + /* length of SRAM Banks increased for corerev greater than 23 */ + if (sr->pub.rev >= 23) { + nb = (coreinfo & (SRCI_SRNB_MASK | SRCI_SRNB_MASK_EXT)) + >> SRCI_SRNB_SHIFT; + } else { + nb = (coreinfo & SRCI_SRNB_MASK) >> SRCI_SRNB_SHIFT; + } for (i = 0; i < nb; i++) { retent = brcmf_chip_socram_banksize(sr, i, &banksize); *ramsize += banksize; @@ -779,7 +786,7 @@ static int brcmf_chip_dmp_get_regaddr(struct brcmf_chip_priv *ci, u32 *eromaddr, u32 *regbase, u32 *wrapbase) { u8 desc; - u32 val; + u32 val, szdesc; u8 mpnum = 0; u8 stype, sztype, wraptype; @@ -825,14 +832,15 @@ static int brcmf_chip_dmp_get_regaddr(struct brcmf_chip_priv *ci, u32 *eromaddr, /* next size descriptor can be skipped */ if (sztype == DMP_SLAVE_SIZE_DESC) { - val = brcmf_chip_dmp_get_desc(ci, eromaddr, NULL); + szdesc = brcmf_chip_dmp_get_desc(ci, eromaddr, NULL); /* skip upper size descriptor if present */ - if (val & DMP_DESC_ADDRSIZE_GT32) + if (szdesc & DMP_DESC_ADDRSIZE_GT32) brcmf_chip_dmp_get_desc(ci, eromaddr, NULL); } - /* only look for 4K register regions */ - if (sztype != DMP_SLAVE_SIZE_4K) + /* look for 4K or 8K register regions */ + if (sztype != DMP_SLAVE_SIZE_4K && + sztype != DMP_SLAVE_SIZE_8K) continue; stype = (val & DMP_SLAVE_TYPE) >> DMP_SLAVE_TYPE_S; @@ -889,7 +897,8 @@ int brcmf_chip_dmp_erom_scan(struct brcmf_chip_priv *ci) /* need core with ports */ if (nmw + nsw == 0 && - id != BCMA_CORE_PMU) + id != BCMA_CORE_PMU && + id != BCMA_CORE_GCI) continue; /* try to obtain register address info */ @@ -1356,6 +1365,16 @@ bool brcmf_chip_sr_capable(struct brcmf_chip *pub) addr = CORE_CC_REG(base, sr_control1); reg = chip->ops->read32(chip->ctx, addr); return reg != 0; + case CY_CC_4373_CHIP_ID: + /* explicitly check SR engine enable bit */ + addr = CORE_CC_REG(base, sr_control0); + reg = chip->ops->read32(chip->ctx, addr); + return (reg & CC_SR_CTL0_ENABLE_MASK) != 0; + case CY_CC_43012_CHIP_ID: + addr = CORE_CC_REG(pmu->base, retention_ctl); + reg = chip->ops->read32(chip->ctx, addr); + return (reg & (PMU_RCTL_MACPHY_DISABLE_MASK | + PMU_RCTL_LOGIC_DISABLE_MASK)) == 0; default: addr = CORE_CC_REG(pmu->base, 
pmucapabilities_ext); reg = chip->ops->read32(chip->ctx, addr); diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c index 94044a7a6021..1f1e95a15a17 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c @@ -214,7 +214,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp) err = brcmf_fil_iovar_data_get(ifp, "cur_etheraddr", ifp->mac_addr, sizeof(ifp->mac_addr)); if (err < 0) { - brcmf_err("Retreiving cur_etheraddr failed, %d\n", err); + brcmf_err("Retrieving cur_etheraddr failed, %d\n", err); goto done; } memcpy(ifp->drvr->wiphy->perm_addr, ifp->drvr->mac, ETH_ALEN); @@ -269,7 +269,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp) strcpy(buf, "ver"); err = brcmf_fil_iovar_data_get(ifp, "ver", buf, sizeof(buf)); if (err < 0) { - brcmf_err("Retreiving version information failed, %d\n", + brcmf_err("Retrieving version information failed, %d\n", err); goto done; } @@ -448,7 +448,8 @@ struct brcmf_mp_device *brcmf_get_module_param(struct device *dev, } } if (!found) { - /* No platform data for this device, try OF (Open Firwmare) */ + /* No platform data for this device, try OF and DMI data */ + brcmf_dmi_probe(settings, chip, chiprev); brcmf_of_probe(dev, bus_type, settings); } return settings; diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h index a34642cb4d2f..4ce56be90b74 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h @@ -59,6 +59,7 @@ struct brcmf_mp_device { bool iapp; bool ignore_probe_fail; struct brcmfmac_pd_cc *country_codes; + const char *board_type; union { struct brcmfmac_sdio_pd sdio; } bus; @@ -74,4 +75,11 @@ void brcmf_release_module_param(struct brcmf_mp_device *module_param); /* Sets dongle media info (drv_version, mac address). */ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp); +#ifdef CONFIG_DMI +void brcmf_dmi_probe(struct brcmf_mp_device *settings, u32 chip, u32 chiprev); +#else +static inline void +brcmf_dmi_probe(struct brcmf_mp_device *settings, u32 chip, u32 chiprev) {} +#endif + #endif /* BRCMFMAC_COMMON_H */ diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c index b1f702faff4f..860a4372cb56 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c @@ -1130,7 +1130,7 @@ int brcmf_attach(struct device *dev, struct brcmf_mp_device *settings) brcmf_dbg(TRACE, "Enter\n"); - ops = brcmf_cfg80211_get_ops(); + ops = brcmf_cfg80211_get_ops(settings); if (!ops) return -ENOMEM; diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c new file mode 100644 index 000000000000..51d76ac45075 --- /dev/null +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c @@ -0,0 +1,116 @@ +/* + * Copyright 2018 Hans de Goede + * + * Permission to use, copy, modify, and/or distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY + * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION + * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +#include +#include +#include "core.h" +#include "common.h" +#include "brcm_hw_ids.h" + +/* The DMI data never changes so we can use a static buf for this */ +static char dmi_board_type[128]; + +struct brcmf_dmi_data { + u32 chip; + u32 chiprev; + const char *board_type; +}; + +/* NOTE: Please keep all entries sorted alphabetically */ + +static const struct brcmf_dmi_data gpd_win_pocket_data = { + BRCM_CC_4356_CHIP_ID, 2, "gpd-win-pocket" +}; + +static const struct brcmf_dmi_data jumper_ezpad_mini3_data = { + BRCM_CC_43430_CHIP_ID, 0, "jumper-ezpad-mini3" +}; + +static const struct brcmf_dmi_data meegopad_t08_data = { + BRCM_CC_43340_CHIP_ID, 2, "meegopad-t08" +}; + +static const struct dmi_system_id dmi_platform_data[] = { + { + /* Match for the GPDwin which unfortunately uses somewhat + * generic dmi strings, which is why we test for 4 strings. + * Comparing against 23 other byt/cht boards, board_vendor + * and board_name are unique to the GPDwin, where as only one + * other board has the same board_serial and 3 others have + * the same default product_name. Also the GPDwin is the + * only device to have both board_ and product_name not set. + */ + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), + DMI_MATCH(DMI_BOARD_NAME, "Default string"), + DMI_MATCH(DMI_BOARD_SERIAL, "Default string"), + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), + }, + .driver_data = (void *)&gpd_win_pocket_data, + }, + { + /* Jumper EZpad mini3 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Insyde"), + DMI_MATCH(DMI_PRODUCT_NAME, "CherryTrail"), + /* jumperx.T87.KFBNEEA02 with the version-nr dropped */ + DMI_MATCH(DMI_BIOS_VERSION, "jumperx.T87.KFBNEEA"), + }, + .driver_data = (void *)&jumper_ezpad_mini3_data, + }, + { + /* Meegopad T08 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Default string"), + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), + DMI_MATCH(DMI_BOARD_NAME, "T3 MRD"), + DMI_MATCH(DMI_BOARD_VERSION, "V1.1"), + }, + .driver_data = (void *)&meegopad_t08_data, + }, + {} +}; + +void brcmf_dmi_probe(struct brcmf_mp_device *settings, u32 chip, u32 chiprev) +{ + const struct dmi_system_id *match; + const struct brcmf_dmi_data *data; + const char *sys_vendor; + const char *product_name; + + /* Some models have DMI strings which are too generic, e.g. + * "Default string", we use a quirk table for these. 
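
[Aside: dmi_first_match() stops at the first table entry whose .matches[] strings all hold on the running system, so the probe loop that follows restarts the scan at match + 1. That lets several quirk entries share identical DMI strings — the "Default string" boards — while carrying different chip/chiprev driver_data. The idiom in isolation; tbl, my_data, chip and chiprev are illustrative names here, dmi_first_match() is the real kernel API:]

	const struct dmi_system_id *m;
	const struct my_data *d;

	for (m = dmi_first_match(tbl); m; m = dmi_first_match(m + 1)) {
		d = m->driver_data;
		if (d->chip == chip && d->chiprev == chiprev)
			return d;	/* entry for this exact chip found */
	}
	return NULL;	/* fall back to DMI sys_vendor-product_name */
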
+ */ + for (match = dmi_first_match(dmi_platform_data); + match; + match = dmi_first_match(match + 1)) { + data = match->driver_data; + + if (data->chip == chip && data->chiprev == chiprev) { + settings->board_type = data->board_type; + return; + } + } + + /* Not found in the quirk-table, use sys_vendor-product_name */ + sys_vendor = dmi_get_system_info(DMI_SYS_VENDOR); + product_name = dmi_get_system_info(DMI_PRODUCT_NAME); + if (sys_vendor && product_name) { + snprintf(dmi_board_type, sizeof(dmi_board_type), "%s-%s", + sys_vendor, product_name); + settings->board_type = dmi_board_type; + } +} diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c index 9095b830ae4d..14b948917a1a 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c @@ -14,6 +14,7 @@ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ +#include #include #include #include @@ -445,6 +446,75 @@ struct brcmf_fw { static void brcmf_fw_request_done(const struct firmware *fw, void *ctx); +#ifdef CONFIG_EFI +/* In some cases the EFI-var stored nvram contains "ccode=ALL" or "ccode=XV" + * to specify "worldwide" compatible settings, but these 2 ccode-s do not work + * properly. "ccode=ALL" causes channels 12 and 13 to not be available, + * "ccode=XV" causes all 5GHz channels to not be available. So we replace both + * with "ccode=X2" which allows channels 12+13 and 5Ghz channels in + * no-Initiate-Radiation mode. This means that we will never send on these + * channels without first having received valid wifi traffic on the channel. + */ +static void brcmf_fw_fix_efi_nvram_ccode(char *data, unsigned long data_len) +{ + char *ccode; + + ccode = strnstr((char *)data, "ccode=ALL", data_len); + if (!ccode) + ccode = strnstr((char *)data, "ccode=XV\r", data_len); + if (!ccode) + return; + + ccode[6] = 'X'; + ccode[7] = '2'; + ccode[8] = '\r'; +} + +static u8 *brcmf_fw_nvram_from_efi(size_t *data_len_ret) +{ + const u16 name[] = { 'n', 'v', 'r', 'a', 'm', 0 }; + struct efivar_entry *nvram_efivar; + unsigned long data_len = 0; + u8 *data = NULL; + int err; + + nvram_efivar = kzalloc(sizeof(*nvram_efivar), GFP_KERNEL); + if (!nvram_efivar) + return NULL; + + memcpy(&nvram_efivar->var.VariableName, name, sizeof(name)); + nvram_efivar->var.VendorGuid = EFI_GUID(0x74b00bd9, 0x805a, 0x4d61, + 0xb5, 0x1f, 0x43, 0x26, + 0x81, 0x23, 0xd1, 0x13); + + err = efivar_entry_size(nvram_efivar, &data_len); + if (err) + goto fail; + + data = kmalloc(data_len, GFP_KERNEL); + if (!data) + goto fail; + + err = efivar_entry_get(nvram_efivar, NULL, &data_len, data); + if (err) + goto fail; + + brcmf_fw_fix_efi_nvram_ccode(data, data_len); + brcmf_info("Using nvram EFI variable\n"); + + kfree(nvram_efivar); + *data_len_ret = data_len; + return data; + +fail: + kfree(data); + kfree(nvram_efivar); + return NULL; +} +#else +static inline u8 *brcmf_fw_nvram_from_efi(size_t *data_len) { return NULL; } +#endif + static void brcmf_fw_free_request(struct brcmf_fw_request *req) { struct brcmf_fw_item *item; @@ -463,11 +533,12 @@ static int brcmf_fw_request_nvram_done(const struct firmware *fw, void *ctx) { struct brcmf_fw *fwctx = ctx; struct brcmf_fw_item *cur; + bool free_bcm47xx_nvram = false; + bool kfree_nvram = false; u32 nvram_length = 0; void *nvram = NULL; u8 *data = NULL; size_t data_len; - bool raw_nvram; brcmf_dbg(TRACE, "enter: dev=%s\n", dev_name(fwctx->dev)); @@ -476,12 +547,13 @@ 
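The ccode fixup above patches the buffer in place: the three bytes after "ccode=" are overwritten with 'X', '2', '\r', which also covers the "ccode=XV\r" case since that match is the same length. A worked example of the effect, using an assumed buffer rather than real EFI variable contents:

/* hypothetical nvram text; the keys are only illustrative */
char nv[] = "boardflags=0x480201\rccode=ALL\rpa0b0=5542\r";

brcmf_fw_fix_efi_nvram_ccode(nv, sizeof(nv) - 1);
/* nv now reads "boardflags=0x480201\rccode=X2\r\rpa0b0=5542\r";
 * the doubled '\r' separator should be skipped by the nvram parser. */

Further down, brcmf_fw_request_firmware() uses req->board_type to try a board-specific nvram first, splicing the board name in before the .txt suffix: with board_type "gpd-win-pocket", a request for, say, brcmfmac4356-pcie.txt would first look for brcmfmac4356-pcie.gpd-win-pocket.txt.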
static int brcmf_fw_request_nvram_done(const struct firmware *fw, void *ctx) if (fw && fw->data) { data = (u8 *)fw->data; data_len = fw->size; - raw_nvram = false; } else { - data = bcm47xx_nvram_get_contents(&data_len); - if (!data && !(cur->flags & BRCMF_FW_REQF_OPTIONAL)) + if ((data = bcm47xx_nvram_get_contents(&data_len))) + free_bcm47xx_nvram = true; + else if ((data = brcmf_fw_nvram_from_efi(&data_len))) + kfree_nvram = true; + else if (!(cur->flags & BRCMF_FW_REQF_OPTIONAL)) goto fail; - raw_nvram = true; } if (data) @@ -489,8 +561,11 @@ static int brcmf_fw_request_nvram_done(const struct firmware *fw, void *ctx) fwctx->req->domain_nr, fwctx->req->bus_nr); - if (raw_nvram) + if (free_bcm47xx_nvram) bcm47xx_nvram_release_contents(data); + if (kfree_nvram) + kfree(data); + release_firmware(fw); if (!nvram && !(cur->flags & BRCMF_FW_REQF_OPTIONAL)) goto fail; @@ -504,90 +579,75 @@ fail: return -ENOENT; } -static int brcmf_fw_request_next_item(struct brcmf_fw *fwctx, bool async) +static int brcmf_fw_complete_request(const struct firmware *fw, + struct brcmf_fw *fwctx) { - struct brcmf_fw_item *cur; - const struct firmware *fw = NULL; - int ret; - - cur = &fwctx->req->items[fwctx->curpos]; - - brcmf_dbg(TRACE, "%srequest for %s\n", async ? "async " : "", - cur->path); - - if (async) - ret = request_firmware_nowait(THIS_MODULE, true, cur->path, - fwctx->dev, GFP_KERNEL, fwctx, - brcmf_fw_request_done); - else - ret = request_firmware(&fw, cur->path, fwctx->dev); - - if (ret < 0) { - brcmf_fw_request_done(NULL, fwctx); - } else if (!async && fw) { - brcmf_dbg(TRACE, "firmware %s %sfound\n", cur->path, - fw ? "" : "not "); - if (cur->type == BRCMF_FW_TYPE_BINARY) - cur->binary = fw; - else if (cur->type == BRCMF_FW_TYPE_NVRAM) - brcmf_fw_request_nvram_done(fw, fwctx); - else - release_firmware(fw); - - return -EAGAIN; - } - return 0; -} - -static void brcmf_fw_request_done(const struct firmware *fw, void *ctx) -{ - struct brcmf_fw *fwctx = ctx; - struct brcmf_fw_item *cur; + struct brcmf_fw_item *cur = &fwctx->req->items[fwctx->curpos]; int ret = 0; - cur = &fwctx->req->items[fwctx->curpos]; - - brcmf_dbg(TRACE, "enter: firmware %s %sfound\n", cur->path, - fw ? "" : "not "); - - if (!fw) - ret = -ENOENT; + brcmf_dbg(TRACE, "firmware %s %sfound\n", cur->path, fw ? "" : "not "); switch (cur->type) { case BRCMF_FW_TYPE_NVRAM: ret = brcmf_fw_request_nvram_done(fw, fwctx); break; case BRCMF_FW_TYPE_BINARY: - cur->binary = fw; + if (fw) + cur->binary = fw; + else + ret = -ENOENT; break; default: /* something fishy here so bail out early */ brcmf_err("unknown fw type: %d\n", cur->type); release_firmware(fw); ret = -EINVAL; - goto fail; } - if (ret < 0 && !(cur->flags & BRCMF_FW_REQF_OPTIONAL)) - goto fail; + return (cur->flags & BRCMF_FW_REQF_OPTIONAL) ? 
0 : ret; +} - do { - if (++fwctx->curpos == fwctx->req->n_items) { - ret = 0; - goto done; - } +static int brcmf_fw_request_firmware(const struct firmware **fw, + struct brcmf_fw *fwctx) +{ + struct brcmf_fw_item *cur = &fwctx->req->items[fwctx->curpos]; + int ret; - ret = brcmf_fw_request_next_item(fwctx, false); - } while (ret == -EAGAIN); + /* nvram files are board-specific, first try a board-specific path */ + if (cur->type == BRCMF_FW_TYPE_NVRAM && fwctx->req->board_type) { + char alt_path[BRCMF_FW_NAME_LEN]; - return; + strlcpy(alt_path, cur->path, BRCMF_FW_NAME_LEN); + /* strip .txt at the end */ + alt_path[strlen(alt_path) - 4] = 0; + strlcat(alt_path, ".", BRCMF_FW_NAME_LEN); + strlcat(alt_path, fwctx->req->board_type, BRCMF_FW_NAME_LEN); + strlcat(alt_path, ".txt", BRCMF_FW_NAME_LEN); -fail: - brcmf_dbg(TRACE, "failed err=%d: dev=%s, fw=%s\n", ret, - dev_name(fwctx->dev), cur->path); - brcmf_fw_free_request(fwctx->req); - fwctx->req = NULL; -done: + ret = request_firmware(fw, alt_path, fwctx->dev); + if (ret == 0) + return ret; + } + + return request_firmware(fw, cur->path, fwctx->dev); +} + +static void brcmf_fw_request_done(const struct firmware *fw, void *ctx) +{ + struct brcmf_fw *fwctx = ctx; + int ret; + + ret = brcmf_fw_complete_request(fw, fwctx); + + while (ret == 0 && ++fwctx->curpos < fwctx->req->n_items) { + brcmf_fw_request_firmware(&fw, fwctx); + ret = brcmf_fw_complete_request(fw, ctx); + } + + if (ret) { + brcmf_fw_free_request(fwctx->req); + fwctx->req = NULL; + } fwctx->done(fwctx->dev, ret, fwctx->req); kfree(fwctx); } @@ -611,7 +671,9 @@ int brcmf_fw_get_firmwares(struct device *dev, struct brcmf_fw_request *req, void (*fw_cb)(struct device *dev, int err, struct brcmf_fw_request *req)) { + struct brcmf_fw_item *first = &req->items[0]; struct brcmf_fw *fwctx; + int ret; brcmf_dbg(TRACE, "enter: dev=%s\n", dev_name(dev)); if (!fw_cb) @@ -628,7 +690,12 @@ int brcmf_fw_get_firmwares(struct device *dev, struct brcmf_fw_request *req, fwctx->req = req; fwctx->done = fw_cb; - brcmf_fw_request_next_item(fwctx, true); + ret = request_firmware_nowait(THIS_MODULE, true, first->path, + fwctx->dev, GFP_KERNEL, fwctx, + brcmf_fw_request_done); + if (ret < 0) + brcmf_fw_request_done(NULL, fwctx); + return 0; } @@ -641,8 +708,9 @@ brcmf_fw_alloc_request(u32 chip, u32 chiprev, struct brcmf_fw_request *fwreq; char chipname[12]; const char *mp_path; + size_t mp_path_len; u32 i, j; - char end; + char end = '\0'; size_t reqsz; for (i = 0; i < table_size; i++) { @@ -667,7 +735,10 @@ brcmf_fw_alloc_request(u32 chip, u32 chiprev, mapping_table[i].fw_base, chipname); mp_path = brcmf_mp_global.firmware_path; - end = mp_path[strlen(mp_path) - 1]; + mp_path_len = strnlen(mp_path, BRCMF_FW_ALTPATH_LEN); + if (mp_path_len) + end = mp_path[mp_path_len - 1]; + fwreq->n_items = n_fwnames; for (j = 0; j < n_fwnames; j++) { diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h index 2893e56910f0..a0834be8864e 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h @@ -70,6 +70,7 @@ struct brcmf_fw_request { u16 domain_nr; u16 bus_nr; u32 n_items; + const char *board_type; struct brcmf_fw_item items[0]; }; diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h index 63b1287e2e6d..b6b183b18413 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h +++ 
b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h @@ -80,6 +80,7 @@ #define BRCMF_C_SCB_DEAUTHENTICATE_FOR_REASON 201 #define BRCMF_C_SET_ASSOC_PREFER 205 #define BRCMF_C_GET_VALID_CHANNELS 217 +#define BRCMF_C_SET_FAKEFRAG 219 #define BRCMF_C_GET_KEY_PRIMARY 235 #define BRCMF_C_SET_KEY_PRIMARY 236 #define BRCMF_C_SET_SCAN_PASSIVE_TIME 258 diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h index d5bb81e88762..39ac1bbb6cc0 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h @@ -176,6 +176,8 @@ #define BRCMF_VHT_CAP_MCS_MAP_NSS_MAX 8 +#define BRCMF_HE_CAP_MCS_MAP_NSS_MAX 8 + /* MAX_CHUNK_LEN is the maximum length for data passing to firmware in each * ioctl. It is relatively small because firmware has small maximum size input * playload restriction for ioctls. @@ -601,13 +603,37 @@ struct brcmf_sta_info_le { __le32 rx_pkts_retried; /* # rx with retry bit set */ __le32 tx_rate_fallback; /* lowest fallback TX rate */ - /* Fields valid for ver >= 5 */ - struct { - __le32 count; /* # rates in this set */ - u8 rates[BRCMF_MAXRATES_IN_SET]; /* rates in 500kbps units w/hi bit set if basic */ - u8 mcs[BRCMF_MCSSET_LEN]; /* supported mcs index bit map */ - __le16 vht_mcs[BRCMF_VHT_CAP_MCS_MAP_NSS_MAX]; /* supported mcs index bit map per nss */ - } rateset_adv; + union { + struct { + struct { + __le32 count; /* # rates in this set */ + u8 rates[BRCMF_MAXRATES_IN_SET]; /* rates in 500kbps units w/hi bit set if basic */ + u8 mcs[BRCMF_MCSSET_LEN]; /* supported mcs index bit map */ + __le16 vht_mcs[BRCMF_VHT_CAP_MCS_MAP_NSS_MAX]; /* supported mcs index bit map per nss */ + } rateset_adv; + } v5; + + struct { + __le32 rx_dur_total; /* total user RX duration (estimated) */ + __le16 chanspec; /** chanspec this sta is on */ + __le16 pad_1; + struct { + __le16 version; /* version */ + __le16 len; /* length */ + __le32 count; /* # rates in this set */ + u8 rates[BRCMF_MAXRATES_IN_SET]; /* rates in 500kbps units w/hi bit set if basic */ + u8 mcs[BRCMF_MCSSET_LEN]; /* supported mcs index bit map */ + __le16 vht_mcs[BRCMF_VHT_CAP_MCS_MAP_NSS_MAX]; /* supported mcs index bit map per nss */ + __le16 he_mcs[BRCMF_HE_CAP_MCS_MAP_NSS_MAX]; /* supported he mcs index bit map per nss */ + } rateset_adv; /* rateset along with mcs index bitmap */ + __le16 wpauth; /* authentication type */ + u8 algo; /* crypto algorithm */ + u8 pad_2; + __le32 tx_rspec; /* Rate of last successful tx frame */ + __le32 rx_rspec; /* Rate of last successful rx frame */ + __le32 wnm_cap; /* wnm capabilities */ + } v7; + }; }; struct brcmf_chanspec_list { diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c index f3cbf78c8899..02759ebd207c 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c @@ -511,6 +511,7 @@ struct brcmf_fws_info { struct work_struct fws_dequeue_work; u32 fifo_enqpkt[BRCMF_FWS_FIFO_COUNT]; int fifo_credit[BRCMF_FWS_FIFO_COUNT]; + int init_fifo_credit[BRCMF_FWS_FIFO_COUNT]; int credits_borrowed[BRCMF_FWS_FIFO_AC_VO + 1]; int deq_node_pos[BRCMF_FWS_FIFO_COUNT]; u32 fifo_credit_map; @@ -1237,6 +1238,9 @@ static void brcmf_fws_return_credits(struct brcmf_fws_info *fws, } fws->fifo_credit[fifo] += credits; + if (fws->fifo_credit[fifo] > fws->init_fifo_credit[fifo]) + 
fws->fifo_credit[fifo] = fws->init_fifo_credit[fifo]; + } static void brcmf_fws_schedule_deq(struct brcmf_fws_info *fws) @@ -1451,9 +1455,10 @@ static int brcmf_fws_txstatus_suppressed(struct brcmf_fws_info *fws, int fifo, static int brcmf_fws_txs_process(struct brcmf_fws_info *fws, u8 flags, u32 hslot, - u32 genbit, u16 seq) + u32 genbit, u16 seq, u8 compcnt) { u32 fifo; + u8 cnt = 0; int ret; bool remove_from_hanger = true; struct sk_buff *skb; @@ -1464,60 +1469,71 @@ brcmf_fws_txs_process(struct brcmf_fws_info *fws, u8 flags, u32 hslot, brcmf_dbg(DATA, "flags %d\n", flags); if (flags == BRCMF_FWS_TXSTATUS_DISCARD) - fws->stats.txs_discard++; + fws->stats.txs_discard += compcnt; else if (flags == BRCMF_FWS_TXSTATUS_CORE_SUPPRESS) { - fws->stats.txs_supp_core++; + fws->stats.txs_supp_core += compcnt; remove_from_hanger = false; } else if (flags == BRCMF_FWS_TXSTATUS_FW_PS_SUPPRESS) { - fws->stats.txs_supp_ps++; + fws->stats.txs_supp_ps += compcnt; remove_from_hanger = false; } else if (flags == BRCMF_FWS_TXSTATUS_FW_TOSSED) - fws->stats.txs_tossed++; + fws->stats.txs_tossed += compcnt; else if (flags == BRCMF_FWS_TXSTATUS_HOST_TOSSED) - fws->stats.txs_host_tossed++; + fws->stats.txs_host_tossed += compcnt; else brcmf_err("unexpected txstatus\n"); - ret = brcmf_fws_hanger_poppkt(&fws->hanger, hslot, &skb, - remove_from_hanger); - if (ret != 0) { - brcmf_err("no packet in hanger slot: hslot=%d\n", hslot); - return ret; - } + while (cnt < compcnt) { + ret = brcmf_fws_hanger_poppkt(&fws->hanger, hslot, &skb, + remove_from_hanger); + if (ret != 0) { + brcmf_err("no packet in hanger slot: hslot=%d\n", + hslot); + goto cont; + } - skcb = brcmf_skbcb(skb); - entry = skcb->mac; - if (WARN_ON(!entry)) { - brcmu_pkt_buf_free_skb(skb); - return -EINVAL; - } - entry->transit_count--; - if (entry->suppressed && entry->suppr_transit_count) - entry->suppr_transit_count--; + skcb = brcmf_skbcb(skb); + entry = skcb->mac; + if (WARN_ON(!entry)) { + brcmu_pkt_buf_free_skb(skb); + goto cont; + } + entry->transit_count--; + if (entry->suppressed && entry->suppr_transit_count) + entry->suppr_transit_count--; - brcmf_dbg(DATA, "%s flags %d htod %X seq %X\n", entry->name, flags, - skcb->htod, seq); + brcmf_dbg(DATA, "%s flags %d htod %X seq %X\n", entry->name, + flags, skcb->htod, seq); - /* pick up the implicit credit from this packet */ - fifo = brcmf_skb_htod_tag_get_field(skb, FIFO); - if ((fws->fcmode == BRCMF_FWS_FCMODE_IMPLIED_CREDIT) || - (brcmf_skb_if_flags_get_field(skb, REQ_CREDIT)) || - (flags == BRCMF_FWS_TXSTATUS_HOST_TOSSED)) { - brcmf_fws_return_credits(fws, fifo, 1); - brcmf_fws_schedule_deq(fws); - } - brcmf_fws_macdesc_return_req_credit(skb); + /* pick up the implicit credit from this packet */ + fifo = brcmf_skb_htod_tag_get_field(skb, FIFO); + if (fws->fcmode == BRCMF_FWS_FCMODE_IMPLIED_CREDIT || + (brcmf_skb_if_flags_get_field(skb, REQ_CREDIT)) || + flags == BRCMF_FWS_TXSTATUS_HOST_TOSSED) { + brcmf_fws_return_credits(fws, fifo, 1); + brcmf_fws_schedule_deq(fws); + } + brcmf_fws_macdesc_return_req_credit(skb); - ret = brcmf_proto_hdrpull(fws->drvr, false, skb, &ifp); - if (ret) { - brcmu_pkt_buf_free_skb(skb); - return -EINVAL; + ret = brcmf_proto_hdrpull(fws->drvr, false, skb, &ifp); + if (ret) { + brcmu_pkt_buf_free_skb(skb); + goto cont; + } + if (!remove_from_hanger) + ret = brcmf_fws_txstatus_suppressed(fws, fifo, skb, + genbit, seq); + if (remove_from_hanger || ret) + brcmf_txfinalize(ifp, skb, true); + +cont: + hslot = (hslot + 1) & (BRCMF_FWS_TXSTAT_HSLOT_MASK >> + 
BRCMF_FWS_TXSTAT_HSLOT_SHIFT); + if (BRCMF_FWS_MODE_GET_REUSESEQ(fws->mode)) + seq = (seq + 1) & BRCMF_SKB_HTOD_SEQ_NR_MASK; + + cnt++; } - if (!remove_from_hanger) - ret = brcmf_fws_txstatus_suppressed(fws, fifo, skb, - genbit, seq); - if (remove_from_hanger || ret) - brcmf_txfinalize(ifp, skb, true); return 0; } @@ -1543,7 +1559,8 @@ static int brcmf_fws_fifocreditback_indicate(struct brcmf_fws_info *fws, return BRCMF_FWS_RET_OK_SCHEDULE; } -static int brcmf_fws_txstatus_indicate(struct brcmf_fws_info *fws, u8 *data) +static int brcmf_fws_txstatus_indicate(struct brcmf_fws_info *fws, u8 type, + u8 *data) { __le32 status_le; __le16 seq_le; @@ -1552,23 +1569,31 @@ static int brcmf_fws_txstatus_indicate(struct brcmf_fws_info *fws, u8 *data) u32 genbit; u8 flags; u16 seq; + u8 compcnt; + u8 compcnt_offset = BRCMF_FWS_TYPE_TXSTATUS_LEN; - fws->stats.txs_indicate++; memcpy(&status_le, data, sizeof(status_le)); status = le32_to_cpu(status_le); flags = brcmf_txstatus_get_field(status, FLAGS); hslot = brcmf_txstatus_get_field(status, HSLOT); genbit = brcmf_txstatus_get_field(status, GENERATION); if (BRCMF_FWS_MODE_GET_REUSESEQ(fws->mode)) { - memcpy(&seq_le, &data[BRCMF_FWS_TYPE_PKTTAG_LEN], + memcpy(&seq_le, &data[BRCMF_FWS_TYPE_TXSTATUS_LEN], sizeof(seq_le)); seq = le16_to_cpu(seq_le); + compcnt_offset += BRCMF_FWS_TYPE_SEQ_LEN; } else { seq = 0; } + if (type == BRCMF_FWS_TYPE_COMP_TXSTATUS) + compcnt = data[compcnt_offset]; + else + compcnt = 1; + fws->stats.txs_indicate += compcnt; + brcmf_fws_lock(fws); - brcmf_fws_txs_process(fws, flags, hslot, genbit, seq); + brcmf_fws_txs_process(fws, flags, hslot, genbit, seq, compcnt); brcmf_fws_unlock(fws); return BRCMF_FWS_RET_OK_NOSCHEDULE; } @@ -1595,19 +1620,21 @@ static int brcmf_fws_notify_credit_map(struct brcmf_if *ifp, brcmf_err("event payload too small (%d)\n", e->datalen); return -EINVAL; } - if (fws->creditmap_received) - return 0; fws->creditmap_received = true; brcmf_dbg(TRACE, "enter: credits %pM\n", credits); brcmf_fws_lock(fws); for (i = 0; i < ARRAY_SIZE(fws->fifo_credit); i++) { - if (*credits) + fws->fifo_credit[i] += credits[i] - fws->init_fifo_credit[i]; + fws->init_fifo_credit[i] = credits[i]; + if (fws->fifo_credit[i] > 0) fws->fifo_credit_map |= 1 << i; else fws->fifo_credit_map &= ~(1 << i); - fws->fifo_credit[i] = *credits++; + WARN_ONCE(fws->fifo_credit[i] < 0, + "fifo_credit[%d] is negative(%d)\n", i, + fws->fifo_credit[i]); } brcmf_fws_schedule_deq(fws); brcmf_fws_unlock(fws); @@ -1882,8 +1909,6 @@ void brcmf_fws_hdrpull(struct brcmf_if *ifp, s16 siglen, struct sk_buff *skb) err = BRCMF_FWS_RET_OK_NOSCHEDULE; switch (type) { - case BRCMF_FWS_TYPE_COMP_TXSTATUS: - break; case BRCMF_FWS_TYPE_HOST_REORDER_RXPKTS: rd = (struct brcmf_skb_reorder_data *)skb->cb; rd->reorder = data; @@ -1906,7 +1931,8 @@ void brcmf_fws_hdrpull(struct brcmf_if *ifp, s16 siglen, struct sk_buff *skb) err = brcmf_fws_request_indicate(fws, type, data); break; case BRCMF_FWS_TYPE_TXSTATUS: - brcmf_fws_txstatus_indicate(fws, data); + case BRCMF_FWS_TYPE_COMP_TXSTATUS: + brcmf_fws_txstatus_indicate(fws, type, data); break; case BRCMF_FWS_TYPE_FIFO_CREDITBACK: err = brcmf_fws_fifocreditback_indicate(fws, data); @@ -1995,7 +2021,7 @@ static void brcmf_fws_rollback_toq(struct brcmf_fws_info *fws, fws->stats.rollback_failed++; hslot = brcmf_skb_htod_tag_get_field(skb, HSLOT); brcmf_fws_txs_process(fws, BRCMF_FWS_TXSTATUS_HOST_TOSSED, - hslot, 0, 0); + hslot, 0, 0, 1); } else { fws->stats.rollback_success++; brcmf_fws_return_credits(fws, fifo, 1); @@ -2013,7 
+2039,7 @@ static int brcmf_fws_borrow_credit(struct brcmf_fws_info *fws) } for (lender_ac = 0; lender_ac <= BRCMF_FWS_FIFO_AC_VO; lender_ac++) { - if (fws->fifo_credit[lender_ac]) { + if (fws->fifo_credit[lender_ac] > 0) { fws->credits_borrowed[lender_ac]++; fws->fifo_credit[lender_ac]--; if (fws->fifo_credit[lender_ac] == 0) @@ -2210,8 +2236,9 @@ static void brcmf_fws_dequeue_worker(struct work_struct *worker) } continue; } - while ((fws->fifo_credit[fifo]) || ((!fws->bcmc_credit_check) && - (fifo == BRCMF_FWS_FIFO_BCMC))) { + while ((fws->fifo_credit[fifo] > 0) || + ((!fws->bcmc_credit_check) && + (fifo == BRCMF_FWS_FIFO_BCMC))) { skb = brcmf_fws_deq(fws, fifo); if (!skb) break; @@ -2222,7 +2249,7 @@ static void brcmf_fws_dequeue_worker(struct work_struct *worker) break; } if ((fifo == BRCMF_FWS_FIFO_AC_BE) && - (fws->fifo_credit[fifo] == 0) && + (fws->fifo_credit[fifo] <= 0) && (!fws->bus_flow_blocked)) { while (brcmf_fws_borrow_credit(fws) == 0) { skb = brcmf_fws_deq(fws, fifo); @@ -2455,7 +2482,8 @@ void brcmf_fws_bustxfail(struct brcmf_fws_info *fws, struct sk_buff *skb) } brcmf_fws_lock(fws); hslot = brcmf_skb_htod_tag_get_field(skb, HSLOT); - brcmf_fws_txs_process(fws, BRCMF_FWS_TXSTATUS_HOST_TOSSED, hslot, 0, 0); + brcmf_fws_txs_process(fws, BRCMF_FWS_TXSTATUS_HOST_TOSSED, hslot, 0, 0, + 1); brcmf_fws_unlock(fws); } diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c index aee6e5937c41..84e3373289eb 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c @@ -27,11 +27,20 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type, struct brcmf_mp_device *settings) { struct brcmfmac_sdio_pd *sdio = &settings->bus.sdio; - struct device_node *np = dev->of_node; + struct device_node *root, *np = dev->of_node; + struct property *prop; int irq; u32 irqf; u32 val; + /* Set board-type to the first string of the machine compatible prop */ + root = of_find_node_by_path("/"); + if (root) { + prop = of_find_property(root, "compatible", NULL); + settings->board_type = of_prop_next_string(prop, NULL); + of_node_put(root); + } + if (!np || bus_type != BRCMF_BUSTYPE_SDIO || !of_device_is_compatible(np, "brcm,bcm4329-fmac")) return; diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c index 5dea569d63ed..16d7dda965d8 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c @@ -1785,6 +1785,7 @@ brcmf_pcie_prepare_fw_request(struct brcmf_pciedev_info *devinfo) fwreq->items[BRCMF_PCIE_FW_CODE].type = BRCMF_FW_TYPE_BINARY; fwreq->items[BRCMF_PCIE_FW_NVRAM].type = BRCMF_FW_TYPE_NVRAM; fwreq->items[BRCMF_PCIE_FW_NVRAM].flags = BRCMF_FW_REQF_OPTIONAL; + fwreq->board_type = devinfo->settings->board_type; /* NVRAM reserves PCI domain 0 for Broadcom's SDK faked bus */ fwreq->domain_nr = pci_domain_nr(devinfo->pdev->bus) + 1; fwreq->bus_nr = devinfo->pdev->bus->number; @@ -2018,6 +2019,7 @@ static const struct dev_pm_ops brcmf_pciedrvr_pm = { static const struct pci_device_id brcmf_pcie_devid_table[] = { BRCMF_PCIE_DEVICE(BRCM_PCIE_4350_DEVICE_ID), BRCMF_PCIE_DEVICE_SUB(0x4355, BRCM_PCIE_VENDOR_ID_BROADCOM, 0x4355), + BRCMF_PCIE_DEVICE(BRCM_PCIE_4354_RAW_DEVICE_ID), BRCMF_PCIE_DEVICE(BRCM_PCIE_4356_DEVICE_ID), BRCMF_PCIE_DEVICE(BRCM_PCIE_43567_DEVICE_ID), BRCMF_PCIE_DEVICE(BRCM_PCIE_43570_DEVICE_ID), diff 
--git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c index b2e1ab5adb64..0cd5b8d970d7 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c @@ -49,6 +49,11 @@ #define DCMD_RESP_TIMEOUT msecs_to_jiffies(2500) #define CTL_DONE_TIMEOUT msecs_to_jiffies(2500) +/* watermark expressed in number of words */ +#define DEFAULT_F2_WATERMARK 0x8 +#define CY_4373_F2_WATERMARK 0x40 +#define CY_43012_F2_WATERMARK 0x60 + #ifdef DEBUG #define BRCMF_TRAP_INFO_SIZE 80 @@ -138,6 +143,8 @@ struct rte_console { /* 1: isolate internal sdio signals, put external pads in tri-state; requires * sdio bus power cycle to clear (rev 9) */ #define SBSDIO_DEVCTL_PADS_ISO 0x08 +/* 1: enable F2 Watermark */ +#define SBSDIO_DEVCTL_F2WM_ENAB 0x10 /* Force SD->SB reset mapping (rev 11) */ #define SBSDIO_DEVCTL_SB_RST_CTL 0x30 /* Determined by CoreControl bit */ @@ -618,6 +625,7 @@ BRCMF_FW_DEF(43455, "brcmfmac43455-sdio"); BRCMF_FW_DEF(4354, "brcmfmac4354-sdio"); BRCMF_FW_DEF(4356, "brcmfmac4356-sdio"); BRCMF_FW_DEF(4373, "brcmfmac4373-sdio"); +BRCMF_FW_DEF(43012, "brcmfmac43012-sdio"); static const struct brcmf_firmware_mapping brcmf_sdio_fwnames[] = { BRCMF_FW_ENTRY(BRCM_CC_43143_CHIP_ID, 0xFFFFFFFF, 43143), @@ -637,7 +645,8 @@ static const struct brcmf_firmware_mapping brcmf_sdio_fwnames[] = { BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0xFFFFFFC0, 43455), BRCMF_FW_ENTRY(BRCM_CC_4354_CHIP_ID, 0xFFFFFFFF, 4354), BRCMF_FW_ENTRY(BRCM_CC_4356_CHIP_ID, 0xFFFFFFFF, 4356), - BRCMF_FW_ENTRY(CY_CC_4373_CHIP_ID, 0xFFFFFFFF, 4373) + BRCMF_FW_ENTRY(CY_CC_4373_CHIP_ID, 0xFFFFFFFF, 4373), + BRCMF_FW_ENTRY(CY_CC_43012_CHIP_ID, 0xFFFFFFFF, 43012) }; static void pkt_align(struct sk_buff *p, int len, int align) @@ -671,6 +680,14 @@ brcmf_sdio_kso_control(struct brcmf_sdio *bus, bool on) /* 1st KSO write goes to AOS wake up core if device is asleep */ brcmf_sdiod_writeb(bus->sdiodev, SBSDIO_FUNC1_SLEEPCSR, wr_val, &err); + /* In case of 43012 chip, the chip could go down immediately after + * KSO bit is cleared. So the further reads of KSO register could + * fail. Thereby just bailing out immediately after clearing KSO + * bit, to avoid polling of KSO bit. + */ + if (!on && bus->ci->chip == CY_CC_43012_CHIP_ID) + return err; + if (on) { /* device WAKEUP through KSO: * write bit 0 & read back until @@ -2396,6 +2413,14 @@ static int brcmf_sdio_tx_ctrlframe(struct brcmf_sdio *bus, u8 *frame, u16 len) return ret; } +static bool brcmf_chip_is_ulp(struct brcmf_chip *ci) +{ + if (ci->chip == CY_CC_43012_CHIP_ID) + return true; + else + return false; +} + static void brcmf_sdio_bus_stop(struct device *dev) { struct brcmf_bus *bus_if = dev_get_drvdata(dev); @@ -2403,7 +2428,7 @@ static void brcmf_sdio_bus_stop(struct device *dev) struct brcmf_sdio *bus = sdiodev->bus; struct brcmf_core *core = bus->sdio_core; u32 local_hostintmask; - u8 saveclk; + u8 saveclk, bpreq; int err; brcmf_dbg(TRACE, "Enter\n"); @@ -2430,9 +2455,14 @@ static void brcmf_sdio_bus_stop(struct device *dev) /* Force backplane clocks to assure F2 interrupt propagates */ saveclk = brcmf_sdiod_readb(sdiodev, SBSDIO_FUNC1_CHIPCLKCSR, &err); - if (!err) - brcmf_sdiod_writeb(sdiodev, SBSDIO_FUNC1_CHIPCLKCSR, - (saveclk | SBSDIO_FORCE_HT), &err); + if (!err) { + bpreq = saveclk; + bpreq |= brcmf_chip_is_ulp(bus->ci) ? 
+ SBSDIO_HT_AVAIL_REQ : SBSDIO_FORCE_HT; + brcmf_sdiod_writeb(sdiodev, + SBSDIO_FUNC1_CHIPCLKCSR, + bpreq, &err); + } if (err) brcmf_err("Failed to force clock for F2: err %d\n", err); @@ -3322,20 +3352,49 @@ err: return bcmerror; } +static bool brcmf_sdio_aos_no_decode(struct brcmf_sdio *bus) +{ + if (bus->ci->chip == CY_CC_43012_CHIP_ID || + bus->ci->chip == CY_CC_4373_CHIP_ID || + bus->ci->chip == BRCM_CC_4339_CHIP_ID || + bus->ci->chip == BRCM_CC_4345_CHIP_ID || + bus->ci->chip == BRCM_CC_4354_CHIP_ID) + return true; + else + return false; +} + static void brcmf_sdio_sr_init(struct brcmf_sdio *bus) { int err = 0; u8 val; + u8 wakeupctrl; + u8 cardcap; + u8 chipclkcsr; brcmf_dbg(TRACE, "Enter\n"); + if (brcmf_chip_is_ulp(bus->ci)) { + wakeupctrl = SBSDIO_FUNC1_WCTRL_ALPWAIT_SHIFT; + chipclkcsr = SBSDIO_HT_AVAIL_REQ; + } else { + wakeupctrl = SBSDIO_FUNC1_WCTRL_HTWAIT_SHIFT; + chipclkcsr = SBSDIO_FORCE_HT; + } + + if (brcmf_sdio_aos_no_decode(bus)) { + cardcap = SDIO_CCCR_BRCM_CARDCAP_CMD_NODEC; + } else { + cardcap = (SDIO_CCCR_BRCM_CARDCAP_CMD14_SUPPORT | + SDIO_CCCR_BRCM_CARDCAP_CMD14_EXT); + } + val = brcmf_sdiod_readb(bus->sdiodev, SBSDIO_FUNC1_WAKEUPCTRL, &err); if (err) { brcmf_err("error reading SBSDIO_FUNC1_WAKEUPCTRL\n"); return; } - - val |= 1 << SBSDIO_FUNC1_WCTRL_HTWAIT_SHIFT; + val |= 1 << wakeupctrl; brcmf_sdiod_writeb(bus->sdiodev, SBSDIO_FUNC1_WAKEUPCTRL, val, &err); if (err) { brcmf_err("error writing SBSDIO_FUNC1_WAKEUPCTRL\n"); @@ -3344,8 +3403,7 @@ static void brcmf_sdio_sr_init(struct brcmf_sdio *bus) /* Add CMD14 Support */ brcmf_sdiod_func0_wb(bus->sdiodev, SDIO_CCCR_BRCM_CARDCAP, - (SDIO_CCCR_BRCM_CARDCAP_CMD14_SUPPORT | - SDIO_CCCR_BRCM_CARDCAP_CMD14_EXT), + cardcap, &err); if (err) { brcmf_err("error writing SDIO_CCCR_BRCM_CARDCAP\n"); @@ -3353,7 +3411,7 @@ static void brcmf_sdio_sr_init(struct brcmf_sdio *bus) } brcmf_sdiod_writeb(bus->sdiodev, SBSDIO_FUNC1_CHIPCLKCSR, - SBSDIO_FORCE_HT, &err); + chipclkcsr, &err); if (err) { brcmf_err("error writing SBSDIO_FUNC1_CHIPCLKCSR\n"); return; @@ -4045,7 +4103,8 @@ static void brcmf_sdio_firmware_callback(struct device *dev, int err, const struct firmware *code; void *nvram; u32 nvram_len; - u8 saveclk; + u8 saveclk, bpreq; + u8 devctl; brcmf_dbg(TRACE, "Enter: dev=%s, err=%d\n", dev_name(dev), err); @@ -4078,8 +4137,11 @@ static void brcmf_sdio_firmware_callback(struct device *dev, int err, /* Force clocks on backplane to be sure F2 interrupt propagates */ saveclk = brcmf_sdiod_readb(sdiod, SBSDIO_FUNC1_CHIPCLKCSR, &err); if (!err) { + bpreq = saveclk; + bpreq |= brcmf_chip_is_ulp(bus->ci) ? 
+ SBSDIO_HT_AVAIL_REQ : SBSDIO_FORCE_HT; brcmf_sdiod_writeb(sdiod, SBSDIO_FUNC1_CHIPCLKCSR, - (saveclk | SBSDIO_FORCE_HT), &err); + bpreq, &err); } if (err) { brcmf_err("Failed to force clock for F2: err %d\n", err); @@ -4101,8 +4163,37 @@ static void brcmf_sdio_firmware_callback(struct device *dev, int err, brcmf_sdiod_writel(sdiod, core->base + SD_REG(hostintmask), bus->hostintmask, NULL); - - brcmf_sdiod_writeb(sdiod, SBSDIO_WATERMARK, 8, &err); + switch (sdiod->func1->device) { + case SDIO_DEVICE_ID_CYPRESS_4373: + brcmf_dbg(INFO, "set F2 watermark to 0x%x*4 bytes\n", + CY_4373_F2_WATERMARK); + brcmf_sdiod_writeb(sdiod, SBSDIO_WATERMARK, + CY_4373_F2_WATERMARK, &err); + devctl = brcmf_sdiod_readb(sdiod, SBSDIO_DEVICE_CTL, + &err); + devctl |= SBSDIO_DEVCTL_F2WM_ENAB; + brcmf_sdiod_writeb(sdiod, SBSDIO_DEVICE_CTL, devctl, + &err); + brcmf_sdiod_writeb(sdiod, SBSDIO_FUNC1_MESBUSYCTRL, + CY_4373_F2_WATERMARK | + SBSDIO_MESBUSYCTRL_ENAB, &err); + break; + case SDIO_DEVICE_ID_CYPRESS_43012: + brcmf_dbg(INFO, "set F2 watermark to 0x%x*4 bytes\n", + CY_43012_F2_WATERMARK); + brcmf_sdiod_writeb(sdiod, SBSDIO_WATERMARK, + CY_43012_F2_WATERMARK, &err); + devctl = brcmf_sdiod_readb(sdiod, SBSDIO_DEVICE_CTL, + &err); + devctl |= SBSDIO_DEVCTL_F2WM_ENAB; + brcmf_sdiod_writeb(sdiod, SBSDIO_DEVICE_CTL, devctl, + &err); + break; + default: + brcmf_sdiod_writeb(sdiod, SBSDIO_WATERMARK, + DEFAULT_F2_WATERMARK, &err); + break; + } } else { /* Disable F2 again */ sdio_disable_func(sdiod->func2); @@ -4174,6 +4265,7 @@ brcmf_sdio_prepare_fw_request(struct brcmf_sdio *bus) fwreq->items[BRCMF_SDIO_FW_CODE].type = BRCMF_FW_TYPE_BINARY; fwreq->items[BRCMF_SDIO_FW_NVRAM].type = BRCMF_FW_TYPE_NVRAM; + fwreq->board_type = bus->sdiodev->settings->board_type; return fwreq; } diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h index 7faed831f07d..34b031154da9 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h @@ -77,7 +77,7 @@ #define SBSDIO_GPIO_OUT 0x10006 /* gpio enable */ #define SBSDIO_GPIO_EN 0x10007 -/* rev < 7, watermark for sdio device */ +/* rev < 7, watermark for sdio device TX path */ #define SBSDIO_WATERMARK 0x10008 /* control busy signal generation */ #define SBSDIO_DEVICE_CTL 0x10009 @@ -104,6 +104,13 @@ #define SBSDIO_FUNC1_RFRAMEBCHI 0x1001C /* MesBusyCtl (rev 11) */ #define SBSDIO_FUNC1_MESBUSYCTRL 0x1001D +/* Watermark for sdio device RX path */ +#define SBSDIO_MESBUSY_RXFIFO_WM_MASK 0x7F +#define SBSDIO_MESBUSY_RXFIFO_WM_SHIFT 0 +/* Enable busy capability for MES access */ +#define SBSDIO_MESBUSYCTRL_ENAB 0x80 +#define SBSDIO_MESBUSYCTRL_ENAB_SHIFT 7 + /* Sdio Core Rev 12 */ #define SBSDIO_FUNC1_WAKEUPCTRL 0x1001E #define SBSDIO_FUNC1_WCTRL_ALPWAIT_MASK 0x1 diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c index 81ff558046a8..6188275b17e5 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c @@ -846,8 +846,8 @@ brcms_ops_ampdu_action(struct ieee80211_hw *hw, status = brcms_c_aggregatable(wl->wlc, tid); spin_unlock_bh(&wl->lock); if (!status) { - brcms_err(wl->wlc->hw->d11core, - "START: tid %d is not agg\'able\n", tid); + brcms_dbg_ht(wl->wlc->hw->d11core, + "START: tid %d is not agg\'able\n", tid); return -EINVAL; } ieee80211_start_tx_ba_cb_irqsafe(vif, sta->addr, 
tid); diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h index 4960f7d26804..e9e8337f386c 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h +++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h @@ -220,13 +220,6 @@ enum phy_cal_mode { #define BB_MULT_MASK 0x0000ffff #define BB_MULT_VALID_MASK 0x80000000 -#define CORDIC_AG 39797 -#define CORDIC_NI 18 -#define FIXED(X) ((s32)((X) << 16)) - -#define FLOAT(X) \ - (((X) >= 0) ? ((((X) >> 15) + 1) >> 1) : -((((-(X)) >> 15) + 1) >> 1)) - #define PHY_CHAIN_TX_DISABLE_TEMP 115 #define PHY_HYSTERESIS_DELTATEMP 5 diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c index 9fb0d9fbd939..e78a93a45741 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c @@ -3447,8 +3447,8 @@ wlc_lcnphy_start_tx_tone(struct brcms_phy *pi, s32 f_kHz, u16 max_val, theta += rot; - i_samp = (u16) (FLOAT(tone_samp.i * max_val) & 0x3ff); - q_samp = (u16) (FLOAT(tone_samp.q * max_val) & 0x3ff); + i_samp = (u16)(CORDIC_FLOAT(tone_samp.i * max_val) & 0x3ff); + q_samp = (u16)(CORDIC_FLOAT(tone_samp.q * max_val) & 0x3ff); data_buf[t] = (i_samp << 10) | q_samp; } diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c index a57f2711f3c0..f4f5e9044152 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c @@ -23089,8 +23089,8 @@ wlc_phy_gen_load_samples_nphy(struct brcms_phy *pi, u32 f_kHz, u16 max_val, theta += rot; - tone_buf[t].q = (s32) FLOAT(tone_buf[t].q * max_val); - tone_buf[t].i = (s32) FLOAT(tone_buf[t].i * max_val); + tone_buf[t].q = (s32)CORDIC_FLOAT(tone_buf[t].q * max_val); + tone_buf[t].i = (s32)CORDIC_FLOAT(tone_buf[t].i * max_val); } wlc_phy_loadsampletable_nphy(pi, tone_buf, num_samps); diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c index eb5db94f5745..8ac34821f1c1 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c @@ -128,7 +128,7 @@ static void brcmu_d11n_decchspec(struct brcmu_chan *ch) } break; default: - WARN_ON_ONCE(1); + WARN_ONCE(1, "Invalid chanspec 0x%04x\n", ch->chspec); break; } @@ -140,7 +140,7 @@ static void brcmu_d11n_decchspec(struct brcmu_chan *ch) ch->band = BRCMU_CHAN_BAND_2G; break; default: - WARN_ON_ONCE(1); + WARN_ONCE(1, "Invalid chanspec 0x%04x\n", ch->chspec); break; } } @@ -167,7 +167,7 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch) ch->sb = BRCMU_CHAN_SB_U; ch->control_ch_num += CH_10MHZ_APART; } else { - WARN_ON_ONCE(1); + WARN_ONCE(1, "Invalid chanspec 0x%04x\n", ch->chspec); } break; case BRCMU_CHSPEC_D11AC_BW_80: @@ -188,7 +188,7 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch) ch->control_ch_num += CH_30MHZ_APART; break; default: - WARN_ON_ONCE(1); + WARN_ONCE(1, "Invalid chanspec 0x%04x\n", ch->chspec); break; } break; @@ -222,13 +222,13 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch) ch->control_ch_num += CH_70MHZ_APART; break; default: - WARN_ON_ONCE(1); + WARN_ONCE(1, "Invalid chanspec 0x%04x\n", ch->chspec); break; } break; case BRCMU_CHSPEC_D11AC_BW_8080: default: - 
WARN_ON_ONCE(1); + WARN_ONCE(1, "Invalid chanspec 0x%04x\n", ch->chspec); break; } @@ -240,7 +240,7 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch) ch->band = BRCMU_CHAN_BAND_2G; break; default: - WARN_ON_ONCE(1); + WARN_ONCE(1, "Invalid chanspec 0x%04x\n", ch->chspec); break; } } diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h index 686f7a85a045..839980da9643 100644 --- a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h +++ b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h @@ -60,6 +60,7 @@ #define BRCM_CC_43664_CHIP_ID 43664 #define BRCM_CC_4371_CHIP_ID 0x4371 #define CY_CC_4373_CHIP_ID 0x4373 +#define CY_CC_43012_CHIP_ID 43012 /* USB Device IDs */ #define BRCM_USB_43143_DEVICE_ID 0xbd1e @@ -74,6 +75,7 @@ /* PCIE Device IDs */ #define BRCM_PCIE_4350_DEVICE_ID 0x43a3 #define BRCM_PCIE_4354_DEVICE_ID 0x43df +#define BRCM_PCIE_4354_RAW_DEVICE_ID 0x4354 #define BRCM_PCIE_4356_DEVICE_ID 0x43ec #define BRCM_PCIE_43567_DEVICE_ID 0x43d3 #define BRCM_PCIE_43570_DEVICE_ID 0x43d9 diff --git a/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h b/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h index e1fd499930a0..de8225e6248b 100644 --- a/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h +++ b/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h @@ -269,6 +269,25 @@ struct chipcregs { /* GSIO (spi/i2c) present, rev >= 37 */ #define CC_CAP2_GSIO 0x00000002 +/* sr_control0, rev >= 48 */ +#define CC_SR_CTL0_ENABLE_MASK BIT(0) +#define CC_SR_CTL0_ENABLE_SHIFT 0 +#define CC_SR_CTL0_EN_SR_ENG_CLK_SHIFT 1 /* sr_clk to sr_memory enable */ +#define CC_SR_CTL0_RSRC_TRIGGER_SHIFT 2 /* Rising edge resource trigger 0 to + * sr_engine + */ +#define CC_SR_CTL0_MIN_DIV_SHIFT 6 /* Min division value for fast clk + * in sr_engine + */ +#define CC_SR_CTL0_EN_SBC_STBY_SHIFT 16 +#define CC_SR_CTL0_EN_SR_ALP_CLK_MASK_SHIFT 18 +#define CC_SR_CTL0_EN_SR_HT_CLK_SHIFT 19 +#define CC_SR_CTL0_ALLOW_PIC_SHIFT 20 /* Allow pic to separate power + * domains + */ +#define CC_SR_CTL0_MAX_SR_LQ_CLK_CNT_SHIFT 25 +#define CC_SR_CTL0_EN_MEM_DISABLE_FOR_SLEEP 30 + /* pmucapabilities */ #define PCAP_REV_MASK 0x000000ff #define PCAP_RC_MASK 0x00001f00 diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c index 04dd7a936593..3f5a14112c6b 100644 --- a/drivers/net/wireless/cisco/airo.c +++ b/drivers/net/wireless/cisco/airo.c @@ -1359,7 +1359,7 @@ static int micsetup(struct airo_info *ai) { int i; if (ai->tfm == NULL) - ai->tfm = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC); + ai->tfm = crypto_alloc_cipher("aes", 0, 0); if (IS_ERR(ai->tfm)) { airo_print_err(ai->dev->name, "failed to load transform for AES"); @@ -5462,7 +5462,7 @@ static int proc_BSSList_open( struct inode *inode, struct file *file ) { we have to add a spin lock... 
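The airo.c change above drops CRYPTO_ALG_ASYNC from the allocation: the third argument of crypto_alloc_cipher() is a mask, and single-block cipher handles are always synchronous, so masking out ASYNC was a no-op. A minimal sketch of the allocation pattern (key size and buffers are placeholders):

#include <linux/crypto.h>
#include <linux/err.h>

static int example_aes_encrypt_block(const u8 *key, const u8 *src, u8 *dst)
{
	/* type = 0, mask = 0: any "aes" single-block cipher will do */
	struct crypto_cipher *tfm = crypto_alloc_cipher("aes", 0, 0);
	int err;

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_cipher_setkey(tfm, key, 16);	/* AES-128 */
	if (!err)
		crypto_cipher_encrypt_one(tfm, dst, src); /* one 16-byte block */

	crypto_free_cipher(tfm);
	return err;
}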
*/ rc = readBSSListRid(ai, doLoseSync, &BSSList_rid); while(rc == 0 && BSSList_rid.index != cpu_to_le16(0xffff)) { - ptr += sprintf(ptr, "%pM %*s rssi = %d", + ptr += sprintf(ptr, "%pM %.*s rssi = %d", BSSList_rid.bssid, (int)BSSList_rid.ssidLen, BSSList_rid.ssid, diff --git a/drivers/net/wireless/intel/ipw2x00/Kconfig b/drivers/net/wireless/intel/ipw2x00/Kconfig index d6ec44d7a391..562395517e6c 100644 --- a/drivers/net/wireless/intel/ipw2x00/Kconfig +++ b/drivers/net/wireless/intel/ipw2x00/Kconfig @@ -15,9 +15,9 @@ config IPW2100 A driver for the Intel PRO/Wireless 2100 Network Connection 802.11b wireless network adapter. - See for information on - the capabilities currently enabled in this driver and for tips - for debugging issues and problems. + See + for information on the capabilities currently enabled in this driver + and for tips for debugging issues and problems. In order to use this driver, you will need a firmware image for it. You can obtain the firmware from @@ -77,8 +77,8 @@ config IPW2200 A driver for the Intel PRO/Wireless 2200BG and 2915ABG Network Connection adapters. - See for - information on the capabilities currently enabled in this + See + for information on the capabilities currently enabled in this driver and for tips for debugging issues and problems. In order to use this driver, you will need a firmware image for it. diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wireless/intel/ipw2x00/ipw2100.c index 910db46db6a1..52e5ed2d3bc2 100644 --- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c +++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c @@ -5603,12 +5603,8 @@ static void shim__set_security(struct net_device *dev, if ((sec->flags & SEC_ACTIVE_KEY) && priv->ieee->sec.active_key != sec->active_key) { - if (sec->active_key <= 3) { - priv->ieee->sec.active_key = sec->active_key; - priv->ieee->sec.flags |= SEC_ACTIVE_KEY; - } else - priv->ieee->sec.flags &= ~SEC_ACTIVE_KEY; - + priv->ieee->sec.active_key = sec->active_key; + priv->ieee->sec.flags |= SEC_ACTIVE_KEY; priv->status |= STATUS_SECURITY_UPDATED; } @@ -8370,7 +8366,7 @@ static int ipw2100_mod_firmware_load(struct ipw2100_fw *fw) if (IPW2100_FW_MAJOR(h->version) != IPW2100_FW_MAJOR_VERSION) { printk(KERN_WARNING DRV_NAME ": Firmware image not compatible " "(detected version id of %u). 
" - "See Documentation/networking/README.ipw2100\n", + "See Documentation/networking/device_drivers/intel/ipw2100.txt\n", h->version); return 1; } diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c index bbdca13c5a9f..fa400f92d7e2 100644 --- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c +++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c @@ -10722,11 +10722,8 @@ static void shim__set_security(struct net_device *dev, } if (sec->flags & SEC_ACTIVE_KEY) { - if (sec->active_key <= 3) { - priv->ieee->sec.active_key = sec->active_key; - priv->ieee->sec.flags |= SEC_ACTIVE_KEY; - } else - priv->ieee->sec.flags &= ~SEC_ACTIVE_KEY; + priv->ieee->sec.active_key = sec->active_key; + priv->ieee->sec.flags |= SEC_ACTIVE_KEY; priv->status |= STATUS_SECURITY_UPDATED; } else priv->ieee->sec.flags &= ~SEC_ACTIVE_KEY; diff --git a/drivers/net/wireless/intel/iwlegacy/3945-rs.c b/drivers/net/wireless/intel/iwlegacy/3945-rs.c index e8983c6a2b7b..a697edd46e7f 100644 --- a/drivers/net/wireless/intel/iwlegacy/3945-rs.c +++ b/drivers/net/wireless/intel/iwlegacy/3945-rs.c @@ -781,7 +781,7 @@ il3945_rs_get_rate(void *il_r, struct ieee80211_sta *sta, void *il_sta, switch (scale_action) { case -1: - /* Decrese rate */ + /* Decrease rate */ if (low != RATE_INVALID) idx = low; break; diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c index 280cd8ae1696..6b4488a178a7 100644 --- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c +++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c @@ -559,7 +559,7 @@ il4965_translate_rx_status(struct il_priv *il, u32 decrypt_in) decrypt_out |= RX_RES_STATUS_BAD_KEY_TTAK; break; } - /* fall through if TTAK OK */ + /* fall through - if TTAK OK */ default: if (!(decrypt_in & RX_MPDU_RES_STATUS_ICV_OK)) decrypt_out |= RX_RES_STATUS_BAD_ICV_MIC; diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c index 6514baf799fe..a2f86cbcc740 100644 --- a/drivers/net/wireless/intel/iwlegacy/common.c +++ b/drivers/net/wireless/intel/iwlegacy/common.c @@ -2695,6 +2695,7 @@ il_set_decrypted_flag(struct il_priv *il, struct ieee80211_hdr *hdr, if ((decrypt_res & RX_RES_STATUS_DECRYPT_TYPE_MSK) == RX_RES_STATUS_BAD_KEY_TTAK) break; + /* fall through */ case RX_RES_STATUS_SEC_TYPE_WEP: if ((decrypt_res & RX_RES_STATUS_DECRYPT_TYPE_MSK) == @@ -2704,6 +2705,7 @@ il_set_decrypted_flag(struct il_priv *il, struct ieee80211_hdr *hdr, D_RX("Packet destroyed\n"); return -1; } + /* fall through */ case RX_RES_STATUS_SEC_TYPE_CCMP: if ((decrypt_res & RX_RES_STATUS_DECRYPT_TYPE_MSK) == RX_RES_STATUS_DECRYPT_OK) { diff --git a/drivers/net/wireless/intel/iwlwifi/Kconfig b/drivers/net/wireless/intel/iwlwifi/Kconfig index e5a2fc738ac3..491ca3c8b43c 100644 --- a/drivers/net/wireless/intel/iwlwifi/Kconfig +++ b/drivers/net/wireless/intel/iwlwifi/Kconfig @@ -1,6 +1,6 @@ config IWLWIFI tristate "Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi) " - depends on PCI && MAC80211 && HAS_IOMEM + depends on PCI && HAS_IOMEM select FW_LOADER ---help--- Select to build the driver supporting the: @@ -53,6 +53,7 @@ config IWLWIFI_LEDS config IWLDVM tristate "Intel Wireless WiFi DVM Firmware support" + depends on MAC80211 help This is the driver that supports the DVM firmware. 
The list of the devices that use this firmware is available here: @@ -61,6 +62,7 @@ config IWLDVM config IWLMVM tristate "Intel Wireless WiFi MVM Firmware support" select WANT_DEV_COREDUMP + depends on MAC80211 help This is the driver that supports the MVM firmware. The list of the devices that use this firmware is available here: diff --git a/drivers/net/wireless/intel/iwlwifi/Makefile b/drivers/net/wireless/intel/iwlwifi/Makefile index 04e376cc898c..ff41987a7e35 100644 --- a/drivers/net/wireless/intel/iwlwifi/Makefile +++ b/drivers/net/wireless/intel/iwlwifi/Makefile @@ -11,6 +11,7 @@ iwlwifi-objs += pcie/ctxt-info.o pcie/ctxt-info-gen3.o iwlwifi-objs += pcie/trans-gen2.o pcie/tx-gen2.o iwlwifi-$(CONFIG_IWLDVM) += cfg/1000.o cfg/2000.o cfg/5000.o cfg/6000.o iwlwifi-$(CONFIG_IWLMVM) += cfg/7000.o cfg/8000.o cfg/9000.o cfg/22000.o +iwlwifi-objs += iwl-dbg-tlv.o iwlwifi-objs += iwl-trans.o iwlwifi-objs += fw/notif-wait.o iwlwifi-$(CONFIG_IWLMVM) += fw/paging.o fw/smem.o fw/init.o fw/dbg.o diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/1000.c b/drivers/net/wireless/intel/iwlwifi/cfg/1000.c index 76b5ddb20248..1ff388b593ad 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/1000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/1000.c @@ -48,7 +48,7 @@ static const struct iwl_base_params iwl1000_base_params = { .num_of_queues = IWLAGN_NUM_QUEUES, .max_tfd_queue_size = 256, - .eeprom_size = OTP_LOW_IMAGE_SIZE, + .eeprom_size = OTP_LOW_IMAGE_SIZE_2K, .pll_cfg = true, .max_ll_items = OTP_MAX_LL_ITEMS_1000, .shadow_ram_support = false, diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/2000.c b/drivers/net/wireless/intel/iwlwifi/cfg/2000.c index e7e45846dd07..a6ec7ad39dcb 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/2000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/2000.c @@ -57,7 +57,7 @@ #define IWL135_MODULE_FIRMWARE(api) IWL135_FW_PRE __stringify(api) ".ucode" static const struct iwl_base_params iwl2000_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE, + .eeprom_size = OTP_LOW_IMAGE_SIZE_2K, .num_of_queues = IWLAGN_NUM_QUEUES, .max_tfd_queue_size = 256, .max_ll_items = OTP_MAX_LL_ITEMS_2x00, @@ -71,7 +71,7 @@ static const struct iwl_base_params iwl2000_base_params = { static const struct iwl_base_params iwl2030_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE, + .eeprom_size = OTP_LOW_IMAGE_SIZE_2K, .num_of_queues = IWLAGN_NUM_QUEUES, .max_tfd_queue_size = 256, .max_ll_items = OTP_MAX_LL_ITEMS_2x00, diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c index da5d5f9b2573..7e65073834b7 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c @@ -56,14 +56,13 @@ #include "iwl-config.h" /* Highest firmware API version supported */ -#define IWL_22000_UCODE_API_MAX 41 +#define IWL_22000_UCODE_API_MAX 43 /* Lowest firmware API version supported */ #define IWL_22000_UCODE_API_MIN 39 /* NVM versions */ #define IWL_22000_NVM_VERSION 0x0a1d -#define IWL_22000_TX_POWER_VERSION 0xffff /* meaningless */ /* Memory offsets and lengths */ #define IWL_22000_DCCM_OFFSET 0x800000 /* LMAC1 */ @@ -106,10 +105,8 @@ #define IWL_QU_B_JF_B_MODULE_FIRMWARE(api) \ IWL_QU_B_JF_B_FW_PRE __stringify(api) ".ucode" -#define NVM_HW_SECTION_NUM_FAMILY_22000 10 - static const struct iwl_base_params iwl_22000_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE_FAMILY_22000, + .eeprom_size = OTP_LOW_IMAGE_SIZE_32K, .num_of_queues = 512, .max_tfd_queue_size = 256, .shadow_ram_support = true, @@ -121,7 
+118,7 @@ static const struct iwl_base_params iwl_22000_base_params = { }; static const struct iwl_base_params iwl_22560_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE_FAMILY_22000, + .eeprom_size = OTP_LOW_IMAGE_SIZE_32K, .num_of_queues = 512, .max_tfd_queue_size = 65536, .shadow_ram_support = true, @@ -142,7 +139,7 @@ static const struct iwl_ht_params iwl_22000_ht_params = { .ucode_api_max = IWL_22000_UCODE_API_MAX, \ .ucode_api_min = IWL_22000_UCODE_API_MIN, \ .led_mode = IWL_LED_RF_STATE, \ - .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_22000, \ + .nvm_hw_section_num = 10, \ .non_shared_ant = ANT_B, \ .dccm_offset = IWL_22000_DCCM_OFFSET, \ .dccm_len = IWL_22000_DCCM_LEN, \ @@ -157,7 +154,6 @@ static const struct iwl_ht_params iwl_22000_ht_params = { .mac_addr_from_csr = true, \ .ht_params = &iwl_22000_ht_params, \ .nvm_ver = IWL_22000_NVM_VERSION, \ - .nvm_calib_ver = IWL_22000_TX_POWER_VERSION, \ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, \ .use_tfh = true, \ .rf_id = true, \ @@ -323,7 +319,6 @@ MODULE_FIRMWARE(IWL_22000_HR_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL_22000_JF_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL_22000_HR_A_F0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL_22000_HR_B_F0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); -MODULE_FIRMWARE(IWL_22000_QU_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL_22000_HR_B_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL_22000_JF_B0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL_22000_HR_A0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/6000.c b/drivers/net/wireless/intel/iwlwifi/cfg/6000.c index 30e62a7c9d52..fbb18d066cd0 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/6000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/6000.c @@ -66,7 +66,7 @@ #define IWL6030_MODULE_FIRMWARE(api) IWL6030_FW_PRE __stringify(api) ".ucode" static const struct iwl_base_params iwl6000_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE, + .eeprom_size = OTP_LOW_IMAGE_SIZE_2K, .num_of_queues = IWLAGN_NUM_QUEUES, .max_tfd_queue_size = 256, .max_ll_items = OTP_MAX_LL_ITEMS_6x00, @@ -79,7 +79,7 @@ static const struct iwl_base_params iwl6000_base_params = { }; static const struct iwl_base_params iwl6050_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE, + .eeprom_size = OTP_LOW_IMAGE_SIZE_2K, .num_of_queues = IWLAGN_NUM_QUEUES, .max_tfd_queue_size = 256, .max_ll_items = OTP_MAX_LL_ITEMS_6x50, @@ -92,7 +92,7 @@ static const struct iwl_base_params iwl6050_base_params = { }; static const struct iwl_base_params iwl6000_g2_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE, + .eeprom_size = OTP_LOW_IMAGE_SIZE_2K, .num_of_queues = IWLAGN_NUM_QUEUES, .max_tfd_queue_size = 256, .max_ll_items = OTP_MAX_LL_ITEMS_6x00, diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/7000.c b/drivers/net/wireless/intel/iwlwifi/cfg/7000.c index c973bfaa3414..289e3c398a12 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/7000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/7000.c @@ -80,17 +80,11 @@ /* NVM versions */ #define IWL7260_NVM_VERSION 0x0a1d -#define IWL7260_TX_POWER_VERSION 0xffff /* meaningless */ #define IWL3160_NVM_VERSION 0x709 -#define IWL3160_TX_POWER_VERSION 0xffff /* meaningless */ #define IWL3165_NVM_VERSION 0x709 -#define IWL3165_TX_POWER_VERSION 0xffff /* meaningless */ #define IWL3168_NVM_VERSION 0xd01 -#define IWL3168_TX_POWER_VERSION 0xffff /* meaningless */ 
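The MODULE_FIRMWARE() definitions in these cfg files all build names the same way: __stringify() turns the firmware API level into a string constant at preprocessing time, and plain string-literal concatenation glues on the prefix and ".ucode" suffix. A small sketch with an invented prefix:

#include <linux/module.h>
#include <linux/stringify.h>

#define EX_FW_PRE		"examplefw-"
#define EX_MODULE_FIRMWARE(api)	EX_FW_PRE __stringify(api) ".ucode"

/* EX_MODULE_FIRMWARE(43) expands to the literal "examplefw-43.ucode" */
MODULE_FIRMWARE(EX_MODULE_FIRMWARE(43));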
#define IWL7265_NVM_VERSION 0x0a1d -#define IWL7265_TX_POWER_VERSION 0xffff /* meaningless */ #define IWL7265D_NVM_VERSION 0x0c11 -#define IWL7265_TX_POWER_VERSION 0xffff /* meaningless */ /* DCCM offsets and lengths */ #define IWL7000_DCCM_OFFSET 0x800000 @@ -113,10 +107,8 @@ #define IWL7265D_FW_PRE "iwlwifi-7265D-" #define IWL7265D_MODULE_FIRMWARE(api) IWL7265D_FW_PRE __stringify(api) ".ucode" -#define NVM_HW_SECTION_NUM_FAMILY_7000 0 - static const struct iwl_base_params iwl7000_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE_FAMILY_7000, + .eeprom_size = OTP_LOW_IMAGE_SIZE_16K, .num_of_queues = 31, .max_tfd_queue_size = 256, .shadow_ram_support = true, @@ -159,7 +151,7 @@ static const struct iwl_ht_params iwl7000_ht_params = { .device_family = IWL_DEVICE_FAMILY_7000, \ .base_params = &iwl7000_base_params, \ .led_mode = IWL_LED_RF_STATE, \ - .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_7000, \ + .nvm_hw_section_num = 0, \ .non_shared_ant = ANT_A, \ .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, \ .dccm_offset = IWL7000_DCCM_OFFSET, \ @@ -191,7 +183,6 @@ const struct iwl_cfg iwl7260_2ac_cfg = { IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL7260_NVM_VERSION, - .nvm_calib_ver = IWL7260_TX_POWER_VERSION, .host_interrupt_operation_mode = true, .lp_xtal_workaround = true, .dccm_len = IWL7260_DCCM_LEN, @@ -203,7 +194,6 @@ const struct iwl_cfg iwl7260_2ac_cfg_high_temp = { IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL7260_NVM_VERSION, - .nvm_calib_ver = IWL7260_TX_POWER_VERSION, .high_temp = true, .host_interrupt_operation_mode = true, .lp_xtal_workaround = true, @@ -217,7 +207,6 @@ const struct iwl_cfg iwl7260_2n_cfg = { IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL7260_NVM_VERSION, - .nvm_calib_ver = IWL7260_TX_POWER_VERSION, .host_interrupt_operation_mode = true, .lp_xtal_workaround = true, .dccm_len = IWL7260_DCCM_LEN, @@ -229,7 +218,6 @@ const struct iwl_cfg iwl7260_n_cfg = { IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL7260_NVM_VERSION, - .nvm_calib_ver = IWL7260_TX_POWER_VERSION, .host_interrupt_operation_mode = true, .lp_xtal_workaround = true, .dccm_len = IWL7260_DCCM_LEN, @@ -241,7 +229,6 @@ const struct iwl_cfg iwl3160_2ac_cfg = { IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL3160_NVM_VERSION, - .nvm_calib_ver = IWL3160_TX_POWER_VERSION, .host_interrupt_operation_mode = true, .dccm_len = IWL3160_DCCM_LEN, }; @@ -252,7 +239,6 @@ const struct iwl_cfg iwl3160_2n_cfg = { IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL3160_NVM_VERSION, - .nvm_calib_ver = IWL3160_TX_POWER_VERSION, .host_interrupt_operation_mode = true, .dccm_len = IWL3160_DCCM_LEN, }; @@ -263,7 +249,6 @@ const struct iwl_cfg iwl3160_n_cfg = { IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL3160_NVM_VERSION, - .nvm_calib_ver = IWL3160_TX_POWER_VERSION, .host_interrupt_operation_mode = true, .dccm_len = IWL3160_DCCM_LEN, }; @@ -291,7 +276,6 @@ const struct iwl_cfg iwl3165_2ac_cfg = { IWL_DEVICE_7005D, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL3165_NVM_VERSION, - .nvm_calib_ver = IWL3165_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, }; @@ -302,7 +286,6 @@ const struct iwl_cfg iwl3168_2ac_cfg = { IWL_DEVICE_3008, .ht_params = &iwl7000_ht_params, .nvm_ver = IWL3168_NVM_VERSION, - .nvm_calib_ver = IWL3168_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, .nvm_type = IWL_NVM_SDP, @@ -314,7 +297,6 @@ 
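The IWL_DEVICE_* macros being trimmed here (and IWL_DEVICE_9000 further down, which absorbs ht_params, nvm_ver and the A-MPDU exponent) expand to the designated initializers shared by a whole device family, so each per-device struct lists only what differs. The shape of the pattern, with invented names:

#include <linux/types.h>

struct example_cfg {
	const char *name;
	int num_queues;
	bool shadow_ram;
};

/* shared family fields; the last one carries no trailing comma so the
 * macro composes cleanly inside an initializer list */
#define EXAMPLE_DEVICE_FAMILY		\
	.num_queues = 31,		\
	.shadow_ram = true

static const struct example_cfg example_cfg_a = {
	.name = "Example Device A",
	EXAMPLE_DEVICE_FAMILY,
};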
const struct iwl_cfg iwl7265_2ac_cfg = { IWL_DEVICE_7005, .ht_params = &iwl7265_ht_params, .nvm_ver = IWL7265_NVM_VERSION, - .nvm_calib_ver = IWL7265_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, }; @@ -325,7 +307,6 @@ const struct iwl_cfg iwl7265_2n_cfg = { IWL_DEVICE_7005, .ht_params = &iwl7265_ht_params, .nvm_ver = IWL7265_NVM_VERSION, - .nvm_calib_ver = IWL7265_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, }; @@ -336,7 +317,6 @@ const struct iwl_cfg iwl7265_n_cfg = { IWL_DEVICE_7005, .ht_params = &iwl7265_ht_params, .nvm_ver = IWL7265_NVM_VERSION, - .nvm_calib_ver = IWL7265_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, }; @@ -347,7 +327,6 @@ const struct iwl_cfg iwl7265d_2ac_cfg = { IWL_DEVICE_7005D, .ht_params = &iwl7265_ht_params, .nvm_ver = IWL7265D_NVM_VERSION, - .nvm_calib_ver = IWL7265_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, }; @@ -358,7 +337,6 @@ const struct iwl_cfg iwl7265d_2n_cfg = { IWL_DEVICE_7005D, .ht_params = &iwl7265_ht_params, .nvm_ver = IWL7265D_NVM_VERSION, - .nvm_calib_ver = IWL7265_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, }; @@ -369,7 +347,6 @@ const struct iwl_cfg iwl7265d_n_cfg = { IWL_DEVICE_7005D, .ht_params = &iwl7265_ht_params, .nvm_ver = IWL7265D_NVM_VERSION, - .nvm_calib_ver = IWL7265_TX_POWER_VERSION, .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, .dccm_len = IWL7265_DCCM_LEN, }; diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/8000.c b/drivers/net/wireless/intel/iwlwifi/cfg/8000.c index 348c40fcddcb..d7d17c1cceea 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/8000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/8000.c @@ -75,7 +75,6 @@ /* NVM versions */ #define IWL8000_NVM_VERSION 0x0a1d -#define IWL8000_TX_POWER_VERSION 0xffff /* meaningless */ /* Memory offsets and lengths */ #define IWL8260_DCCM_OFFSET 0x800000 @@ -93,11 +92,10 @@ #define IWL8265_MODULE_FIRMWARE(api) \ IWL8265_FW_PRE __stringify(api) ".ucode" -#define NVM_HW_SECTION_NUM_FAMILY_8000 10 #define DEFAULT_NVM_FILE_FAMILY_8000C "nvmData-8000C" static const struct iwl_base_params iwl8000_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE_FAMILY_8000, + .eeprom_size = OTP_LOW_IMAGE_SIZE_32K, .num_of_queues = 31, .max_tfd_queue_size = 256, .shadow_ram_support = true, @@ -139,7 +137,7 @@ static const struct iwl_tt_params iwl8000_tt_params = { .device_family = IWL_DEVICE_FAMILY_8000, \ .base_params = &iwl8000_base_params, \ .led_mode = IWL_LED_RF_STATE, \ - .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_8000, \ + .nvm_hw_section_num = 10, \ .features = NETIF_F_RXCSUM, \ .non_shared_ant = ANT_A, \ .dccm_offset = IWL8260_DCCM_OFFSET, \ @@ -177,7 +175,6 @@ const struct iwl_cfg iwl8260_2n_cfg = { IWL_DEVICE_8260, .ht_params = &iwl8000_ht_params, .nvm_ver = IWL8000_NVM_VERSION, - .nvm_calib_ver = IWL8000_TX_POWER_VERSION, }; const struct iwl_cfg iwl8260_2ac_cfg = { @@ -186,7 +183,6 @@ const struct iwl_cfg iwl8260_2ac_cfg = { IWL_DEVICE_8260, .ht_params = &iwl8000_ht_params, .nvm_ver = IWL8000_NVM_VERSION, - .nvm_calib_ver = IWL8000_TX_POWER_VERSION, .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; @@ -196,7 +192,6 @@ const struct iwl_cfg iwl8265_2ac_cfg = { IWL_DEVICE_8265, .ht_params = &iwl8000_ht_params, .nvm_ver = IWL8000_NVM_VERSION, - .nvm_calib_ver = IWL8000_TX_POWER_VERSION, .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, 
.vht_mu_mimo_supported = true, }; @@ -207,7 +202,6 @@ const struct iwl_cfg iwl8275_2ac_cfg = { IWL_DEVICE_8265, .ht_params = &iwl8000_ht_params, .nvm_ver = IWL8000_NVM_VERSION, - .nvm_calib_ver = IWL8000_TX_POWER_VERSION, .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .vht_mu_mimo_supported = true, }; @@ -218,7 +212,6 @@ const struct iwl_cfg iwl4165_2ac_cfg = { IWL_DEVICE_8000, .ht_params = &iwl8000_ht_params, .nvm_ver = IWL8000_NVM_VERSION, - .nvm_calib_ver = IWL8000_TX_POWER_VERSION, .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/9000.c b/drivers/net/wireless/intel/iwlwifi/cfg/9000.c index d55fd23cafe6..f2114137c13f 100644 --- a/drivers/net/wireless/intel/iwlwifi/cfg/9000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/9000.c @@ -57,14 +57,13 @@ #include "fw/file.h" /* Highest firmware API version supported */ -#define IWL9000_UCODE_API_MAX 41 +#define IWL9000_UCODE_API_MAX 43 /* Lowest firmware API version supported */ #define IWL9000_UCODE_API_MIN 30 /* NVM versions */ #define IWL9000_NVM_VERSION 0x0a1d -#define IWL9000_TX_POWER_VERSION 0xffff /* meaningless */ /* Memory offsets and lengths */ #define IWL9000_DCCM_OFFSET 0x800000 @@ -90,10 +89,8 @@ #define IWL9260B_MODULE_FIRMWARE(api) \ IWL9260B_FW_PRE __stringify(api) ".ucode" -#define NVM_HW_SECTION_NUM_FAMILY_9000 10 - static const struct iwl_base_params iwl9000_base_params = { - .eeprom_size = OTP_LOW_IMAGE_SIZE_FAMILY_9000, + .eeprom_size = OTP_LOW_IMAGE_SIZE_32K, .num_of_queues = 31, .max_tfd_queue_size = 256, .shadow_ram_support = true, @@ -137,7 +134,7 @@ static const struct iwl_tt_params iwl9000_tt_params = { .device_family = IWL_DEVICE_FAMILY_9000, \ .base_params = &iwl9000_base_params, \ .led_mode = IWL_LED_RF_STATE, \ - .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_9000, \ + .nvm_hw_section_num = 10, \ .non_shared_ant = ANT_B, \ .dccm_offset = IWL9000_DCCM_OFFSET, \ .dccm_len = IWL9000_DCCM_LEN, \ @@ -157,17 +154,17 @@ static const struct iwl_tt_params iwl9000_tt_params = { .min_umac_error_event_table = 0x800000, \ .csr = &iwl_csr_v1, \ .d3_debug_data_base_addr = 0x401000, \ - .d3_debug_data_length = 92 * 1024 + .d3_debug_data_length = 92 * 1024, \ + .ht_params = &iwl9000_ht_params, \ + .nvm_ver = IWL9000_NVM_VERSION, \ + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K + const struct iwl_cfg iwl9160_2ac_cfg = { .name = "Intel(R) Dual Band Wireless AC 9160", .fw_name_pre = IWL9260A_FW_PRE, .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; const struct iwl_cfg iwl9260_2ac_cfg = { @@ -175,10 +172,6 @@ const struct iwl_cfg iwl9260_2ac_cfg = { .fw_name_pre = IWL9260A_FW_PRE, .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; const struct iwl_cfg iwl9260_killer_2ac_cfg = { @@ -186,10 +179,6 @@ const struct iwl_cfg iwl9260_killer_2ac_cfg = { .fw_name_pre = IWL9260A_FW_PRE, .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; const struct iwl_cfg iwl9270_2ac_cfg = { @@ -197,10 +186,6 @@ const struct iwl_cfg iwl9270_2ac_cfg = 
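/*
 * The IWL_DEVICE_9000 hunk above hoists .ht_params, .nvm_ver and
 * .max_ht_ampdu_exponent into the family macro, so each per-device struct
 * below shrinks to its truly unique fields. A standalone sketch of the
 * pattern, with a hypothetical struct and values: a macro that expands to
 * designated initializers composes cleanly with fields listed after it.
 */
#include <stdio.h>

struct demo_cfg {
	const char *name;
	unsigned int nvm_ver;
	int integrated;
};

#define DEMO_DEVICE_FAMILY		\
	.nvm_ver = 0x0a1d		/* shared by every family member */

static const struct demo_cfg demo_soc = {
	.name = "demo soc device",
	DEMO_DEVICE_FAMILY,
	.integrated = 1,		/* per-device field still appended */
};

int main(void)
{
	printf("%s: nvm=0x%x integrated=%d\n",
	       demo_soc.name, demo_soc.nvm_ver, demo_soc.integrated);
	return 0;
}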
{ .fw_name_pre = IWL9260A_FW_PRE, .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; const struct iwl_cfg iwl9460_2ac_cfg = { @@ -208,10 +193,6 @@ const struct iwl_cfg iwl9460_2ac_cfg = { .fw_name_pre = IWL9260A_FW_PRE, .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; const struct iwl_cfg iwl9460_2ac_cfg_soc = { @@ -220,10 +201,6 @@ const struct iwl_cfg iwl9460_2ac_cfg_soc = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, }; @@ -234,10 +211,6 @@ const struct iwl_cfg iwl9461_2ac_cfg_soc = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, }; @@ -248,10 +221,6 @@ const struct iwl_cfg iwl9462_2ac_cfg_soc = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, }; @@ -261,10 +230,6 @@ const struct iwl_cfg iwl9560_2ac_cfg = { .fw_name_pre = IWL9260A_FW_PRE, .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, }; const struct iwl_cfg iwl9560_2ac_cfg_soc = { @@ -273,10 +238,6 @@ const struct iwl_cfg iwl9560_2ac_cfg_soc = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, }; @@ -287,10 +248,6 @@ const struct iwl_cfg iwl9560_killer_2ac_cfg_soc = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, }; @@ -301,10 +258,6 @@ const struct iwl_cfg iwl9560_killer_s_2ac_cfg_soc = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, }; @@ -315,10 +268,6 @@ const struct iwl_cfg iwl9460_2ac_cfg_shared_clk = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, 
IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK @@ -330,10 +279,6 @@ const struct iwl_cfg iwl9461_2ac_cfg_shared_clk = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK @@ -345,10 +290,6 @@ const struct iwl_cfg iwl9462_2ac_cfg_shared_clk = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK @@ -360,10 +301,6 @@ const struct iwl_cfg iwl9560_2ac_cfg_shared_clk = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK @@ -375,10 +312,6 @@ const struct iwl_cfg iwl9560_killer_2ac_cfg_shared_clk = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK @@ -390,10 +323,6 @@ const struct iwl_cfg iwl9560_killer_s_2ac_cfg_shared_clk = { .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, IWL_DEVICE_9000, - .ht_params = &iwl9000_ht_params, - .nvm_ver = IWL9000_NVM_VERSION, - .nvm_calib_ver = IWL9000_TX_POWER_VERSION, - .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, .soc_latency = 5000, .extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c index 1088ff036e13..c219bca5cff4 100644 --- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c +++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c @@ -1224,6 +1224,23 @@ static int iwl_eeprom_init_hw_params(struct iwl_priv *priv) return 0; } +static int iwl_nvm_check_version(struct iwl_nvm_data *data, + struct iwl_trans *trans) +{ + if (data->nvm_version >= trans->cfg->nvm_ver || + data->calib_version >= trans->cfg->nvm_calib_ver) { + IWL_DEBUG_INFO(trans, "device EEPROM VER=0x%x, CALIB=0x%x\n", + data->nvm_version, data->calib_version); + return 0; + } + + IWL_ERR(trans, + "Unsupported (too old) EEPROM VER=0x%x < 0x%x CALIB=0x%x < 0x%x\n", + data->nvm_version, trans->cfg->nvm_ver, + data->calib_version, trans->cfg->nvm_calib_ver); + return -EINVAL; +} + static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, const struct iwl_fw *fw, diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/config.h 
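/*
 * Note the || in iwl_nvm_check_version() above: the EEPROM is accepted when
 * either the NVM version or the calibration version meets its configured
 * minimum, and -EINVAL is returned only when both are too old. A standalone
 * sketch of that acceptance rule, with plain ints standing in for the
 * driver structures:
 */
#include <stdio.h>

static int nvm_version_ok(unsigned int nvm, unsigned int calib,
			  unsigned int min_nvm, unsigned int min_calib)
{
	return nvm >= min_nvm || calib >= min_calib;
}

int main(void)
{
	/* fabricated version numbers, for illustration only */
	printf("%d\n", nvm_version_ok(0x0a1d, 0x0000, 0x0a1d, 0x0001)); /* 1 */
	printf("%d\n", nvm_version_ok(0x0001, 0x0004, 0x0a1d, 0x0001)); /* 1 */
	printf("%d\n", nvm_version_ok(0x0001, 0x0000, 0x0a1d, 0x0001)); /* 0 */
	return 0;
}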
b/drivers/net/wireless/intel/iwlwifi/fw/api/config.h index 7f645b62804e..5e88fa2e6fb7 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/config.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/config.h @@ -8,6 +8,7 @@ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2016 - 2017 Intel Deutschland GmbH + * Copyright (C) 2018 Intel Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of version 2 of the GNU General Public License as @@ -30,6 +31,7 @@ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2016 - 2017 Intel Deutschland GmbH + * Copyright (C) 2018 Intel Corporation * All rights reserved. * * Redistribution and use in source and binary forms, with or without @@ -127,17 +129,6 @@ struct iwl_phy_cfg_cmd { struct iwl_calib_ctrl calib_control; } __packed; -#define PHY_CFG_RADIO_TYPE (BIT(0) | BIT(1)) -#define PHY_CFG_RADIO_STEP (BIT(2) | BIT(3)) -#define PHY_CFG_RADIO_DASH (BIT(4) | BIT(5)) -#define PHY_CFG_PRODUCT_NUMBER (BIT(6) | BIT(7)) -#define PHY_CFG_TX_CHAIN_A BIT(8) -#define PHY_CFG_TX_CHAIN_B BIT(9) -#define PHY_CFG_TX_CHAIN_C BIT(10) -#define PHY_CFG_RX_CHAIN_A BIT(12) -#define PHY_CFG_RX_CHAIN_B BIT(13) -#define PHY_CFG_RX_CHAIN_C BIT(14) - /* * enum iwl_dc2dc_config_id - flag ids * diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h b/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h index eff3249af48a..fdc54a5dc9de 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/datapath.h @@ -104,6 +104,11 @@ enum iwl_data_path_subcmd_ids { */ HE_AIR_SNIFFER_CONFIG_CMD = 0x13, + /** + * @RX_NO_DATA_NOTIF: &struct iwl_rx_no_data + */ + RX_NO_DATA_NOTIF = 0xF5, + /** * @TLC_MNG_UPDATE_NOTIF: &struct iwl_tlc_update_notif */ diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h new file mode 100644 index 000000000000..ab82b7a67967 --- /dev/null +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h @@ -0,0 +1,401 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright (C) 2018 Intel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * The full GNU General Public License is included in this distribution + * in the file called COPYING. + * + * Contact Information: + * Intel Linux Wireless + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright (C) 2018 Intel Corporation + * All rights reserved. 
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *  * Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ *  * Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *  * Neither the name Intel Corporation nor the names of its
+ *    contributors may be used to endorse or promote products derived
+ *    from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *****************************************************************************/
+#ifndef __iwl_fw_dbg_tlv_h__
+#define __iwl_fw_dbg_tlv_h__
+
+#include <linux/types.h>
+
+/**
+ * struct iwl_fw_ini_header - common header for all debug-group TLV structures
+ * @tlv_version: version info
+ * @apply_point: &enum iwl_fw_ini_apply_point
+ * @data: TLV data follows
+ */
+struct iwl_fw_ini_header {
+	__le32 tlv_version;
+	__le32 apply_point;
+	u8 data[];
+} __packed; /* FW_INI_HEADER_TLV_S */
+
+/**
+ * struct iwl_fw_ini_allocation_tlv - (IWL_FW_INI_TLV_TYPE_BUFFER_ALLOCATION)
+ * buffer allocation TLV - for debug
+ *
+ * @header: header
+ * @allocation_id: &enum iwl_fw_ini_allocation_id - to bind allocation and hcmd
+ *	if needed (DBGC1/DBGC2/SDFX/...)
+ * @buffer_location: &enum iwl_fw_ini_buffer_location
+ * @size: size in bytes
+ * @max_fragments: the maximum allowed fragmentation in the desired memory
+ *	allocation above
+ * @min_frag_size: the minimum allowed fragmentation size in bytes
+ */
+struct iwl_fw_ini_allocation_tlv {
+	struct iwl_fw_ini_header header;
+	__le32 allocation_id;
+	__le32 buffer_location;
+	__le32 size;
+	__le32 max_fragments;
+	__le32 min_frag_size;
+} __packed; /* FW_INI_BUFFER_ALLOCATION_TLV_S_VER_1 */
+
+/**
+ * struct iwl_fw_ini_hcmd - (IWL_FW_INI_TLV_TYPE_HCMD)
+ * generic host command pass-through TLV
+ *
+ * @id: the debug configuration command type, for instance: 0xf6 / 0xf5 / DHC
+ * @group: the desired cmd group
+ * @padding: all zeros for dword alignment
+ * @data: all of the relevant command (0xf6/0xf5) to be sent
+ */
+struct iwl_fw_ini_hcmd {
+	u8 id;
+	u8 group;
+	__le16 padding;
+	u8 data[0];
+} __packed; /* FW_INI_HCMD_S */
+
+/**
+ * struct iwl_fw_ini_hcmd_tlv
+ * @header: header
+ * @hcmd: a variable-length host command to be sent to apply the configuration
+ */
+struct iwl_fw_ini_hcmd_tlv {
+	struct iwl_fw_ini_header header;
+	struct iwl_fw_ini_hcmd hcmd;
+} __packed; /* FW_INI_HCMD_TLV_S_VER_1 */
+
+/**
+ * struct iwl_fw_ini_debug_flow_tlv - (IWL_FW_INI_TLV_TYPE_DEBUG_FLOW)
+ *
+ * @header: header
+ * @debug_flow_cfg: &enum iwl_fw_ini_debug_flow
+ */
+struct iwl_fw_ini_debug_flow_tlv {
+	struct iwl_fw_ini_header header;
+	__le32 debug_flow_cfg;
+} __packed; /* FW_INI_DEBUG_FLOW_TLV_S_VER_1 */
+
+#define IWL_FW_INI_MAX_REGION_ID	20
+#define IWL_FW_INI_MAX_NAME		32
+/**
+ * struct iwl_fw_ini_region_cfg
+ * @region_id: ID of this dump configuration
+ * @region_type: &enum iwl_fw_ini_region_type
+ * @num_regions: number of entries in the address array
+ * @allocation_id: for DRAM-type regions, this field holds the allocation ID
+ *	instead of num_regions
+ * @name_len: name length
+ * @name: file name to use for this region
+ * @size: size of the data, in bytes (unused for IWL_FW_INI_REGION_DRAM_BUFFER)
+ * @start_addr: array of addresses (unused for IWL_FW_INI_REGION_DRAM_BUFFER)
+ */
+struct iwl_fw_ini_region_cfg {
+	__le32 region_id;
+	__le32 region_type;
+	__le32 name_len;
+	u8 name[IWL_FW_INI_MAX_NAME];
+	union {
+		__le32 num_regions;
+		__le32 allocation_id;
+	};
+	__le32 size;
+	__le32 start_addr[];
+} __packed; /* FW_INI_REGION_CONFIG_S */
+
+/**
+ * struct iwl_fw_ini_region_tlv - (IWL_FW_INI_TLV_TYPE_REGION_CFG)
+ * defines the dump regions and the IDs that triggers refer to
+ * @header: header
+ * @num_regions: how many region configurations follow
+ * @region_config: list of region configurations
+ */
+struct iwl_fw_ini_region_tlv {
+	struct iwl_fw_ini_header header;
+	__le32 num_regions;
+	struct iwl_fw_ini_region_cfg region_config[];
+} __packed; /* FW_INI_REGION_CFG_S */
+
+/**
+ * struct iwl_fw_ini_trigger - (IWL_FW_INI_TLV_TYPE_DUMP_CFG)
+ * binds a trigger to the region IDs it dumps
+ *
+ * @trigger_id: &enum iwl_fw_ini_trigger_id
+ * @ignore_default: override FW TLV with binary TLV
+ * @dump_delay: delay from trigger fire to dump, in usec
+ * @occurrences: max amount of times to be fired
+ * @ignore_consec: ignore consecutive triggers, in usec
+ * @force_restart: force FW restart
+ * @multi_dut: initiate debug dump data on several DUTs
+ * @trigger_data: generic data to be utilized per trigger
+ * @num_regions: number of dump regions defined for this trigger
+ * @data: region IDs
+ */
+struct iwl_fw_ini_trigger {
+	__le32 trigger_id;
+	__le32 ignore_default;
+	__le32 dump_delay;
+	__le32 occurrences;
+	__le32 ignore_consec;
+	__le32 force_restart;
+	__le32 multi_dut;
+	__le32 trigger_data;
+	__le32 num_regions;
+	__le32 data[];
+} __packed; /* FW_INI_TRIGGER_CONFIG_S */
+
+/**
+ * struct iwl_fw_ini_trigger_tlv - (IWL_FW_INI_TLV_TYPE_TRIGGERS_CFG)
+ * a list of the triggers defined above
+ *
+ * @header: header
+ * @num_triggers: how many trigger configurations follow
+ * @trigger_config: list of trigger configurations
+ */
+struct iwl_fw_ini_trigger_tlv {
+	struct iwl_fw_ini_header header;
+	__le32 num_triggers;
+	struct iwl_fw_ini_trigger trigger_config[];
+} __packed; /* FW_INI_TRIGGER_CFG_S */
+
+/**
+ * enum iwl_fw_ini_trigger_id
+ * @IWL_FW_TRIGGER_ID_FW_ASSERT: FW assert
+ * @IWL_FW_TRIGGER_ID_FW_TFD_Q_HANG: TFD queue hang
+ * @IWL_FW_TRIGGER_ID_FW_HW_ERROR: HW assert
+ * @IWL_FW_TRIGGER_ID_FW_TRIGGER_ERROR: FW error notification
+ * @IWL_FW_TRIGGER_ID_FW_TRIGGER_WARNING: FW warning notification
+ * @IWL_FW_TRIGGER_ID_FW_TRIGGER_INFO: FW info notification
+ * @IWL_FW_TRIGGER_ID_FW_TRIGGER_DEBUG: FW debug notification
+ * @IWL_FW_TRIGGER_ID_USER_TRIGGER: User trigger
+ * @IWL_FW_TRIGGER_ID_HOST_PEER_CLIENT_INACTIVITY: peer inactivity
+ * @IWL_FW_TRIGGER_ID_HOST_DID_INITIATED_EVENT: undefined
+ * @IWL_FW_TRIGGER_ID_HOST_TX_LATENCY_THRESHOLD_CROSSED: TX latency
+ *	threshold was crossed
+ * @IWL_FW_TRIGGER_ID_HOST_TX_RESPONSE_STATUS_FAILED: TX failed
+ * @IWL_FW_TRIGGER_ID_HOST_OS_REQ_DEAUTH_PEER: Deauth initiated by host
+ * @IWL_FW_TRIGGER_ID_HOST_STOP_GO_REQUEST: stop GO request
+ * @IWL_FW_TRIGGER_ID_HOST_START_GO_REQUEST: start GO request
+ * @IWL_FW_TRIGGER_ID_HOST_JOIN_GROUP_REQUEST: join P2P group request
+ * @IWL_FW_TRIGGER_ID_HOST_SCAN_START: scan started event
+ * @IWL_FW_TRIGGER_ID_HOST_SCAN_SUBMITTED: undefined
+ * @IWL_FW_TRIGGER_ID_HOST_SCAN_PARAMS: undefined
+ * @IWL_FW_TRIGGER_ID_HOST_CHECK_FOR_HANG: undefined
+ * @IWL_FW_TRIGGER_ID_HOST_BAR_RECEIVED: BAR frame was received
+ * @IWL_FW_TRIGGER_ID_HOST_AGG_TX_RESPONSE_STATUS_FAILED: agg TX failed
+ * @IWL_FW_TRIGGER_ID_HOST_EAPOL_TX_RESPONSE_FAILED: EAPOL TX failed
+ * @IWL_FW_TRIGGER_ID_HOST_FAKE_TX_RESPONSE_SUSPECTED: suspicious TX response
+ * @IWL_FW_TRIGGER_ID_HOST_AUTH_REQ_FROM_ASSOC_CLIENT: received suspicious auth
+ * @IWL_FW_TRIGGER_ID_HOST_ROAM_COMPLETE: roaming was completed
+ * @IWL_FW_TRIGGER_ID_HOST_AUTH_ASSOC_FAST_FAILED: fast assoc failed
+ * @IWL_FW_TRIGGER_ID_HOST_D3_START: D3 start
+ * @IWL_FW_TRIGGER_ID_HOST_D3_END: D3 end
+ * @IWL_FW_TRIGGER_ID_HOST_BSS_MISSED_BEACONS: missed beacon events
+ * @IWL_FW_TRIGGER_ID_HOST_P2P_CLIENT_MISSED_BEACONS: P2P missed beacon events
+ * @IWL_FW_TRIGGER_ID_HOST_PEER_CLIENT_TX_FAILURES: undefined
+ * @IWL_FW_TRIGGER_ID_HOST_TX_WFD_ACTION_FRAME_FAILED: undefined
+ * @IWL_FW_TRIGGER_ID_HOST_AUTH_ASSOC_FAILED: authentication / association
+ *	failed
+ * @IWL_FW_TRIGGER_ID_HOST_SCAN_COMPLETE: scan complete event
+ * @IWL_FW_TRIGGER_ID_HOST_SCAN_ABORT: scan abort complete
+ * @IWL_FW_TRIGGER_ID_HOST_NIC_ALIVE: nic alive message was received
+ * @IWL_FW_TRIGGER_ID_HOST_CHANNEL_SWITCH_COMPLETE: CSA was completed
+ * @IWL_FW_TRIGGER_ID_NUM: number of trigger IDs
+ */
+enum iwl_fw_ini_trigger_id {
+	/* Errors triggers */
+	IWL_FW_TRIGGER_ID_FW_ASSERT = 1,
+	IWL_FW_TRIGGER_ID_FW_TFD_Q_HANG = 2,
+	IWL_FW_TRIGGER_ID_FW_HW_ERROR = 3,
+	/* Generic triggers */
+	IWL_FW_TRIGGER_ID_FW_TRIGGER_ERROR = 4,
+	IWL_FW_TRIGGER_ID_FW_TRIGGER_WARNING = 5,
+	IWL_FW_TRIGGER_ID_FW_TRIGGER_INFO = 6,
+	IWL_FW_TRIGGER_ID_FW_TRIGGER_DEBUG = 7,
+	/* User Trigger */
+	IWL_FW_TRIGGER_ID_USER_TRIGGER = 8,
+	/* Host triggers */
+	IWL_FW_TRIGGER_ID_HOST_PEER_CLIENT_INACTIVITY = 9,
+	IWL_FW_TRIGGER_ID_HOST_DID_INITIATED_EVENT = 10,
+	IWL_FW_TRIGGER_ID_HOST_TX_LATENCY_THRESHOLD_CROSSED = 11,
+	IWL_FW_TRIGGER_ID_HOST_TX_RESPONSE_STATUS_FAILED = 12,
+	IWL_FW_TRIGGER_ID_HOST_OS_REQ_DEAUTH_PEER = 13,
+	IWL_FW_TRIGGER_ID_HOST_STOP_GO_REQUEST = 14,
+	IWL_FW_TRIGGER_ID_HOST_START_GO_REQUEST = 15,
+	IWL_FW_TRIGGER_ID_HOST_JOIN_GROUP_REQUEST = 16,
+	IWL_FW_TRIGGER_ID_HOST_SCAN_START = 17,
+	IWL_FW_TRIGGER_ID_HOST_SCAN_SUBMITTED = 18,
+	IWL_FW_TRIGGER_ID_HOST_SCAN_PARAMS = 19,
+	IWL_FW_TRIGGER_ID_HOST_CHECK_FOR_HANG = 20,
+	IWL_FW_TRIGGER_ID_HOST_BAR_RECEIVED = 21,
+	IWL_FW_TRIGGER_ID_HOST_AGG_TX_RESPONSE_STATUS_FAILED = 22,
+	IWL_FW_TRIGGER_ID_HOST_EAPOL_TX_RESPONSE_FAILED = 23,
+	IWL_FW_TRIGGER_ID_HOST_FAKE_TX_RESPONSE_SUSPECTED = 24,
+	IWL_FW_TRIGGER_ID_HOST_AUTH_REQ_FROM_ASSOC_CLIENT = 25,
+	IWL_FW_TRIGGER_ID_HOST_ROAM_COMPLETE = 26,
+
IWL_FW_TRIGGER_ID_HOST_AUTH_ASSOC_FAST_FAILED = 27, + IWL_FW_TRIGGER_ID_HOST_D3_START = 28, + IWL_FW_TRIGGER_ID_HOST_D3_END = 29, + IWL_FW_TRIGGER_ID_HOST_BSS_MISSED_BEACONS = 30, + IWL_FW_TRIGGER_ID_HOST_P2P_CLIENT_MISSED_BEACONS = 31, + IWL_FW_TRIGGER_ID_HOST_PEER_CLIENT_TX_FAILURES = 32, + IWL_FW_TRIGGER_ID_HOST_TX_WFD_ACTION_FRAME_FAILED = 33, + IWL_FW_TRIGGER_ID_HOST_AUTH_ASSOC_FAILED = 34, + IWL_FW_TRIGGER_ID_HOST_SCAN_COMPLETE = 35, + IWL_FW_TRIGGER_ID_HOST_SCAN_ABORT = 36, + IWL_FW_TRIGGER_ID_HOST_NIC_ALIVE = 37, + IWL_FW_TRIGGER_ID_HOST_CHANNEL_SWITCH_COMPLETE = 38, + IWL_FW_TRIGGER_ID_NUM, +}; /* FW_INI_TRIGGER_ID_E_VER_1 */ + +/** + * enum iwl_fw_ini_apply_point + * @IWL_FW_INI_APPLY_INVALID: invalid + * @IWL_FW_INI_APPLY_EARLY: pre loading FW + * @IWL_FW_INI_APPLY_AFTER_ALIVE: first cmd from host after alive + * @IWL_FW_INI_APPLY_POST_INIT: last cmd in initialization sequence + * @IWL_FW_INI_APPLY_MISSED_BEACONS: missed beacons notification + * @IWL_FW_INI_APPLY_SCAN_COMPLETE: scan completed + * @IWL_FW_INI_APPLY_NUM: number of apply points +*/ +enum iwl_fw_ini_apply_point { + IWL_FW_INI_APPLY_INVALID, + IWL_FW_INI_APPLY_EARLY, + IWL_FW_INI_APPLY_AFTER_ALIVE, + IWL_FW_INI_APPLY_POST_INIT, + IWL_FW_INI_APPLY_MISSED_BEACONS, + IWL_FW_INI_APPLY_SCAN_COMPLETE, + IWL_FW_INI_APPLY_NUM, +}; /* FW_INI_APPLY_POINT_E_VER_1 */ + +/** + * enum iwl_fw_ini_allocation_id + * @IWL_FW_INI_ALLOCATION_INVALID: invalid + * @IWL_FW_INI_ALLOCATION_ID_DBGC1: allocation meant for DBGC1 configuration + * @IWL_FW_INI_ALLOCATION_ID_DBGC2: allocation meant for DBGC2 configuration + * @IWL_FW_INI_ALLOCATION_ID_DBGC3: allocation meant for DBGC3 configuration + * @IWL_FW_INI_ALLOCATION_ID_SDFX: for SDFX module + * @IWL_FW_INI_ALLOCATION_ID_FW_DUMP: used for crash and runtime dumps + * @IWL_FW_INI_ALLOCATION_ID_USER_DEFINED: for future user scenarios +*/ +enum iwl_fw_ini_allocation_id { + IWL_FW_INI_ALLOCATION_INVALID, + IWL_FW_INI_ALLOCATION_ID_DBGC1, + IWL_FW_INI_ALLOCATION_ID_DBGC2, + IWL_FW_INI_ALLOCATION_ID_DBGC3, + IWL_FW_INI_ALLOCATION_ID_SDFX, + IWL_FW_INI_ALLOCATION_ID_FW_DUMP, + IWL_FW_INI_ALLOCATION_ID_USER_DEFINED, +}; /* FW_INI_ALLOCATION_ID_E_VER_1 */ + +/** + * enum iwl_fw_ini_buffer_location + * @IWL_FW_INI_LOCATION_INVALID: invalid + * @IWL_FW_INI_LOCATION_SRAM_PATH: SRAM location + * @IWL_FW_INI_LOCATION_DRAM_PATH: DRAM location + */ +enum iwl_fw_ini_buffer_location { + IWL_FW_INI_LOCATION_SRAM_INVALID, + IWL_FW_INI_LOCATION_SRAM_PATH, + IWL_FW_INI_LOCATION_DRAM_PATH, +}; /* FW_INI_BUFFER_LOCATION_E_VER_1 */ + +/** + * enum iwl_fw_ini_debug_flow + * @IWL_FW_INI_DEBUG_INVALID: invalid + * @IWL_FW_INI_DEBUG_DBTR_FLOW: undefined + * @IWL_FW_INI_DEBUG_TB2DTF_FLOW: undefined + */ +enum iwl_fw_ini_debug_flow { + IWL_FW_INI_DEBUG_INVALID, + IWL_FW_INI_DEBUG_DBTR_FLOW, + IWL_FW_INI_DEBUG_TB2DTF_FLOW, +}; /* FW_INI_DEBUG_FLOW_E_VER_1 */ + +/** + * enum iwl_fw_ini_region_type + * @IWL_FW_INI_REGION_INVALID: invalid + * @IWL_FW_INI_REGION_DEVICE_MEMORY: device internal memory + * @IWL_FW_INI_REGION_PERIPHERY_MAC: periphery registers of MAC + * @IWL_FW_INI_REGION_PERIPHERY_PHY: periphery registers of PHY + * @IWL_FW_INI_REGION_PERIPHERY_AUX: periphery registers of AUX + * @IWL_FW_INI_REGION_DRAM_BUFFER: DRAM buffer + * @IWL_FW_INI_REGION_DRAM_IMR: IMR memory + * @IWL_FW_INI_REGION_INTERNAL_BUFFER: undefined + * @IWL_FW_INI_REGION_TXF: TX fifos + * @IWL_FW_INI_REGION_RXF: RX fifo + * @IWL_FW_INI_REGION_PAGING: paging memory + * @IWL_FW_INI_REGION_CSR: CSR registers + * @IWL_FW_INI_REGION_NUM: number of 
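/*
 * In iwl_fw_ini_region_cfg above, num_regions and allocation_id share a
 * dword: for IWL_FW_INI_REGION_DRAM_BUFFER the value names the backing
 * allocation, for the other region types it counts the entries in
 * start_addr[]. A standalone sketch of sizing one entry accordingly
 * (host-endian stand-in for the __le32 layout; the helper name is
 * illustrative, not from the patch):
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct region_cfg {
	uint32_t region_id;
	uint32_t region_type;
	uint32_t name_len;
	uint8_t name[32];
	uint32_t num_regions;	/* unioned with allocation_id in the real struct */
	uint32_t size;
	uint32_t start_addr[];
};

#define REGION_DRAM_BUFFER	5	/* position in the region-type enum above */

static size_t region_entry_bytes(const struct region_cfg *r)
{
	size_t n = r->region_type == REGION_DRAM_BUFFER ? 0 : r->num_regions;

	return sizeof(*r) + n * sizeof(uint32_t);
}

int main(void)
{
	uint32_t buf[(sizeof(struct region_cfg) + 2 * sizeof(uint32_t)) /
		     sizeof(uint32_t)] = { 0 };
	struct region_cfg *r = (struct region_cfg *)buf;

	r->region_type = 1;	/* device memory */
	r->num_regions = 2;	/* two addresses in start_addr[] */
	printf("entry spans %zu bytes\n", region_entry_bytes(r));
	return 0;
}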
region types + */ +enum iwl_fw_ini_region_type { + IWL_FW_INI_REGION_INVALID, + IWL_FW_INI_REGION_DEVICE_MEMORY, + IWL_FW_INI_REGION_PERIPHERY_MAC, + IWL_FW_INI_REGION_PERIPHERY_PHY, + IWL_FW_INI_REGION_PERIPHERY_AUX, + IWL_FW_INI_REGION_DRAM_BUFFER, + IWL_FW_INI_REGION_DRAM_IMR, + IWL_FW_INI_REGION_INTERNAL_BUFFER, + IWL_FW_INI_REGION_TXF, + IWL_FW_INI_REGION_RXF, + IWL_FW_INI_REGION_PAGING, + IWL_FW_INI_REGION_CSR, + IWL_FW_INI_REGION_NUM +}; /* FW_INI_REGION_TYPE_E_VER_1*/ + +#endif diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h b/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h index 1dd23f846fb9..7a3f7b7e6358 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/mac.h @@ -151,9 +151,9 @@ enum iwl_tsf_id { * @beacon_time: beacon transmit time in system time * @beacon_tsf: beacon transmit time in TSF * @bi: beacon interval in TU - * @bi_reciprocal: 2^32 / bi + * @reserved1: reserved * @dtim_interval: dtim transmit time in TU - * @dtim_reciprocal: 2^32 / dtim_interval + * @reserved2: reserved * @mcast_qid: queue ID for multicast traffic. * NOTE: obsolete from VER2 and on * @beacon_template: beacon template ID @@ -162,9 +162,9 @@ struct iwl_mac_data_ap { __le32 beacon_time; __le64 beacon_tsf; __le32 bi; - __le32 bi_reciprocal; + __le32 reserved1; __le32 dtim_interval; - __le32 dtim_reciprocal; + __le32 reserved2; __le32 mcast_qid; __le32 beacon_template; } __packed; /* AP_MAC_DATA_API_S_VER_2 */ @@ -174,26 +174,34 @@ struct iwl_mac_data_ap { * @beacon_time: beacon transmit time in system time * @beacon_tsf: beacon transmit time in TSF * @bi: beacon interval in TU - * @bi_reciprocal: 2^32 / bi + * @reserved: reserved * @beacon_template: beacon template ID */ struct iwl_mac_data_ibss { __le32 beacon_time; __le64 beacon_tsf; __le32 bi; - __le32 bi_reciprocal; + __le32 reserved; __le32 beacon_template; } __packed; /* IBSS_MAC_DATA_API_S_VER_1 */ +/** + * enum iwl_mac_data_policy - policy of the data path for this MAC + * @TWT_SUPPORTED: twt is supported + */ +enum iwl_mac_data_policy { + TWT_SUPPORTED = BIT(0), +}; + /** * struct iwl_mac_data_sta - configuration data for station MAC context * @is_assoc: 1 for associated state, 0 otherwise * @dtim_time: DTIM arrival time in system time * @dtim_tsf: DTIM arrival time in TSF * @bi: beacon interval in TU, applicable only when associated - * @bi_reciprocal: 2^32 / bi , applicable only when associated + * @reserved1: reserved * @dtim_interval: DTIM interval in TU, applicable only when associated - * @dtim_reciprocal: 2^32 / dtim_interval , applicable only when associated + * @data_policy: see &enum iwl_mac_data_policy * @listen_interval: in beacon intervals, applicable only when associated * @assoc_id: unique ID assigned by the AP during association * @assoc_beacon_arrive_time: TSF of first beacon after association @@ -203,13 +211,13 @@ struct iwl_mac_data_sta { __le32 dtim_time; __le64 dtim_tsf; __le32 bi; - __le32 bi_reciprocal; + __le32 reserved1; __le32 dtim_interval; - __le32 dtim_reciprocal; + __le32 data_policy; __le32 listen_interval; __le32 assoc_id; __le32 assoc_beacon_arrive_time; -} __packed; /* STA_MAC_DATA_API_S_VER_1 */ +} __packed; /* STA_MAC_DATA_API_S_VER_2 */ /** * struct iwl_mac_data_go - configuration data for P2P GO MAC context @@ -233,7 +241,7 @@ struct iwl_mac_data_go { struct iwl_mac_data_p2p_sta { struct iwl_mac_data_sta sta; __le32 ctwin; -} __packed; /* P2P_STA_MAC_DATA_API_S_VER_1 */ +} __packed; /* P2P_STA_MAC_DATA_API_S_VER_2 */ /** * struct 
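/*
 * The station MAC context above moves from VER_1 to VER_2: the firmware no
 * longer wants precomputed 2^32/bi reciprocals (those fields become
 * reserved), and dtim_reciprocal's slot is repurposed as data_policy, with
 * TWT support signalled in bit 0 per enum iwl_mac_data_policy. A trivial
 * standalone sketch of building that field (the input flag is an assumed
 * stand-in):
 */
#include <stdint.h>
#include <stdio.h>

#define TWT_SUPPORTED	(1u << 0)	/* matches the enum above */

int main(void)
{
	int peer_has_twt = 1;		/* illustrative input */
	uint32_t data_policy = 0;

	if (peer_has_twt)
		data_policy |= TWT_SUPPORTED;
	printf("data_policy = 0x%x\n", data_policy);
	return 0;
}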
iwl_mac_data_pibss - Pseudo IBSS config data @@ -378,13 +386,6 @@ struct iwl_mac_ctx_cmd { }; } __packed; /* MAC_CONTEXT_CMD_API_S_VER_1 */ -static inline u32 iwl_mvm_reciprocal(u32 v) -{ - if (!v) - return 0; - return 0xFFFFFFFF / v; -} - #define IWL_NONQOS_SEQ_GET 0x1 #define IWL_NONQOS_SEQ_SET 0x2 struct iwl_nonqos_seq_query_cmd { @@ -442,7 +443,7 @@ struct iwl_he_backoff_conf { * Support for Nss x BW (or RU) matrix: * (0=SISO, 1=MIMO2) x (0-20MHz, 1-40MHz, 2-80MHz, 3-160MHz) * Each entry contains 2 QAM thresholds for 8us and 16us: - * 0=BPSK, 1=QPSK, 2=16QAM, 3=64QAM, 4=256QAM, 5=1024QAM, 6/7=RES + * 0=BPSK, 1=QPSK, 2=16QAM, 3=64QAM, 4=256QAM, 5=1024QAM, 6=RES, 7=NONE * i.e. QAM_th1 < QAM_th2 such if TX uses QAM_tx: * QAM_tx < QAM_th1 --> PPE=0us * QAM_th1 <= QAM_tx < QAM_th2 --> PPE=8us diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h index 0537496b6eb1..0791a854fc8f 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h @@ -345,66 +345,98 @@ enum iwl_rx_mpdu_mac_info { IWL_RX_MPDU_PHY_PHY_INDEX_MASK = 0xf0, }; -/* - * enum iwl_rx_he_phy - HE PHY data - */ -enum iwl_rx_he_phy { - IWL_RX_HE_PHY_BEAM_CHNG = BIT(0), - IWL_RX_HE_PHY_UPLINK = BIT(1), - IWL_RX_HE_PHY_BSS_COLOR_MASK = 0xfc, - IWL_RX_HE_PHY_SPATIAL_REUSE_MASK = 0xf00, - IWL_RX_HE_PHY_SU_EXT_BW10 = BIT(12), - IWL_RX_HE_PHY_TXOP_DUR_MASK = 0xfe000, - IWL_RX_HE_PHY_LDPC_EXT_SYM = BIT(20), - IWL_RX_HE_PHY_PRE_FEC_PAD_MASK = 0x600000, - IWL_RX_HE_PHY_PE_DISAMBIG = BIT(23), - IWL_RX_HE_PHY_DOPPLER = BIT(24), +/* TSF overload low dword */ +enum iwl_rx_phy_data0 { + /* info type: HE any */ + IWL_RX_PHY_DATA0_HE_BEAM_CHNG = 0x00000001, + IWL_RX_PHY_DATA0_HE_UPLINK = 0x00000002, + IWL_RX_PHY_DATA0_HE_BSS_COLOR_MASK = 0x000000fc, + IWL_RX_PHY_DATA0_HE_SPATIAL_REUSE_MASK = 0x00000f00, + /* 1 bit reserved */ + IWL_RX_PHY_DATA0_HE_TXOP_DUR_MASK = 0x000fe000, + IWL_RX_PHY_DATA0_HE_LDPC_EXT_SYM = 0x00100000, + IWL_RX_PHY_DATA0_HE_PRE_FEC_PAD_MASK = 0x00600000, + IWL_RX_PHY_DATA0_HE_PE_DISAMBIG = 0x00800000, + IWL_RX_PHY_DATA0_HE_DOPPLER = 0x01000000, /* 6 bits reserved */ - IWL_RX_HE_PHY_DELIM_EOF = BIT(31), + IWL_RX_PHY_DATA0_HE_DELIM_EOF = 0x80000000, +}; + +enum iwl_rx_phy_info_type { + IWL_RX_PHY_INFO_TYPE_NONE = 0, + IWL_RX_PHY_INFO_TYPE_CCK = 1, + IWL_RX_PHY_INFO_TYPE_OFDM_LGCY = 2, + IWL_RX_PHY_INFO_TYPE_HT = 3, + IWL_RX_PHY_INFO_TYPE_VHT_SU = 4, + IWL_RX_PHY_INFO_TYPE_VHT_MU = 5, + IWL_RX_PHY_INFO_TYPE_HE_SU = 6, + IWL_RX_PHY_INFO_TYPE_HE_MU = 7, + IWL_RX_PHY_INFO_TYPE_HE_TB = 8, + IWL_RX_PHY_INFO_TYPE_HE_MU_EXT = 9, + IWL_RX_PHY_INFO_TYPE_HE_TB_EXT = 10, +}; - /* second dword - common data */ - IWL_RX_HE_PHY_HE_LTF_NUM_MASK = 0xe000000000ULL, - IWL_RX_HE_PHY_RU_ALLOC_SEC80 = BIT_ULL(32 + 8), +/* TSF overload high dword */ +enum iwl_rx_phy_data1 { + /* + * check this first - if TSF overload is set, + * see &enum iwl_rx_phy_info_type + */ + IWL_RX_PHY_DATA1_INFO_TYPE_MASK = 0xf0000000, + + /* info type: HT/VHT/HE any */ + IWL_RX_PHY_DATA1_LSIG_LEN_MASK = 0x0fff0000, + + /* info type: HE MU/MU-EXT */ + IWL_RX_PHY_DATA1_HE_MU_SIGB_COMPRESSION = 0x00000001, + IWL_RX_PHY_DATA1_HE_MU_SIBG_SYM_OR_USER_NUM_MASK = 0x0000001e, + + /* info type: HE any */ + IWL_RX_PHY_DATA1_HE_LTF_NUM_MASK = 0x000000e0, + IWL_RX_PHY_DATA1_HE_RU_ALLOC_SEC80 = 0x00000100, /* trigger encoded */ - IWL_RX_HE_PHY_RU_ALLOC_MASK = 0xfe0000000000ULL, - IWL_RX_HE_PHY_INFO_TYPE_MASK = 0xf000000000000000ULL, - IWL_RX_HE_PHY_INFO_TYPE_SU = 0x0, /* TSF low 
valid (first DW) */ - IWL_RX_HE_PHY_INFO_TYPE_MU = 0x1, /* TSF low/high valid (both DWs) */ - IWL_RX_HE_PHY_INFO_TYPE_MU_EXT_INFO = 0x2, /* same + SIGB-common0/1/2 valid */ - IWL_RX_HE_PHY_INFO_TYPE_TB = 0x3, /* TSF low/high valid (both DWs) */ - - /* second dword - MU data */ - IWL_RX_HE_PHY_MU_SIGB_COMPRESSION = BIT_ULL(32 + 0), - IWL_RX_HE_PHY_MU_SIBG_SYM_OR_USER_NUM_MASK = 0x1e00000000ULL, - IWL_RX_HE_PHY_MU_SIGB_MCS_MASK = 0xf000000000000ULL, - IWL_RX_HE_PHY_MU_SIGB_DCM = BIT_ULL(32 + 21), - IWL_RX_HE_PHY_MU_PREAMBLE_PUNC_TYPE_MASK = 0xc0000000000000ULL, - - /* second dword - TB data */ - IWL_RX_HE_PHY_TB_PILOT_TYPE = BIT_ULL(32 + 0), - IWL_RX_HE_PHY_TB_LOW_SS_MASK = 0xe00000000ULL + IWL_RX_PHY_DATA1_HE_RU_ALLOC_MASK = 0x0000fe00, + + /* info type: HE TB/TX-EXT */ + IWL_RX_PHY_DATA1_HE_TB_PILOT_TYPE = 0x00000001, + IWL_RX_PHY_DATA1_HE_TB_LOW_SS_MASK = 0x0000000e, }; -enum iwl_rx_he_sigb_common0 { +/* goes into Metadata DW 7 */ +enum iwl_rx_phy_data2 { + /* info type: HE MU-EXT */ /* the a1/a2/... is what the PHY/firmware calls the values */ - IWL_RX_HE_SIGB_COMMON0_CH1_RU0 = 0x000000ff, /* a1 */ - IWL_RX_HE_SIGB_COMMON0_CH1_RU2 = 0x0000ff00, /* a2 */ - IWL_RX_HE_SIGB_COMMON0_CH2_RU0 = 0x00ff0000, /* b1 */ - IWL_RX_HE_SIGB_COMMON0_CH2_RU2 = 0xff000000, /* b2 */ + IWL_RX_PHY_DATA2_HE_MU_EXT_CH1_RU0 = 0x000000ff, /* a1 */ + IWL_RX_PHY_DATA2_HE_MU_EXT_CH1_RU2 = 0x0000ff00, /* a2 */ + IWL_RX_PHY_DATA2_HE_MU_EXT_CH2_RU0 = 0x00ff0000, /* b1 */ + IWL_RX_PHY_DATA2_HE_MU_EXT_CH2_RU2 = 0xff000000, /* b2 */ + + /* info type: HE TB-EXT */ + IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE1 = 0x0000000f, + IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE2 = 0x000000f0, + IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE3 = 0x00000f00, + IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE4 = 0x0000f000, }; -enum iwl_rx_he_sigb_common1 { - IWL_RX_HE_SIGB_COMMON1_CH1_RU1 = 0x000000ff, /* c1 */ - IWL_RX_HE_SIGB_COMMON1_CH1_RU3 = 0x0000ff00, /* c2 */ - IWL_RX_HE_SIGB_COMMON1_CH2_RU1 = 0x00ff0000, /* d1 */ - IWL_RX_HE_SIGB_COMMON1_CH2_RU3 = 0xff000000, /* d2 */ +/* goes into Metadata DW 8 */ +enum iwl_rx_phy_data3 { + /* info type: HE MU-EXT */ + IWL_RX_PHY_DATA3_HE_MU_EXT_CH1_RU1 = 0x000000ff, /* c1 */ + IWL_RX_PHY_DATA3_HE_MU_EXT_CH1_RU3 = 0x0000ff00, /* c2 */ + IWL_RX_PHY_DATA3_HE_MU_EXT_CH2_RU1 = 0x00ff0000, /* d1 */ + IWL_RX_PHY_DATA3_HE_MU_EXT_CH2_RU3 = 0xff000000, /* d2 */ }; -enum iwl_rx_he_sigb_common2 { - IWL_RX_HE_SIGB_COMMON2_CH1_CTR_RU = 0x0001, - IWL_RX_HE_SIGB_COMMON2_CH2_CTR_RU = 0x0002, - IWL_RX_HE_SIGB_COMMON2_CH1_CRC_OK = 0x0004, - IWL_RX_HE_SIGB_COMMON2_CH2_CRC_OK = 0x0008, +/* goes into Metadata DW 4 high 16 bits */ +enum iwl_rx_phy_data4 { + /* info type: HE MU-EXT */ + IWL_RX_PHY_DATA4_HE_MU_EXT_CH1_CTR_RU = 0x0001, + IWL_RX_PHY_DATA4_HE_MU_EXT_CH2_CTR_RU = 0x0002, + IWL_RX_PHY_DATA4_HE_MU_EXT_CH1_CRC_OK = 0x0004, + IWL_RX_PHY_DATA4_HE_MU_EXT_CH2_CRC_OK = 0x0008, + IWL_RX_PHY_DATA4_HE_MU_EXT_SIGB_MCS_MASK = 0x00f0, + IWL_RX_PHY_DATA4_HE_MU_EXT_SIGB_DCM = 0x0100, + IWL_RX_PHY_DATA4_HE_MU_EXT_PREAMBLE_PUNC_TYPE_MASK = 0x0600, }; /** @@ -419,9 +451,9 @@ struct iwl_rx_mpdu_desc_v1 { __le32 rss_hash; /** - * @sigb_common0: for HE sniffer, HE-SIG-B common part 0 + * @phy_data2: depends on info type (see @phy_data1) */ - __le32 sigb_common0; + __le32 phy_data2; }; /* DW8 - carries filter_match only when rpa_en == 1 */ @@ -432,9 +464,9 @@ struct iwl_rx_mpdu_desc_v1 { __le32 filter_match; /** - * @sigb_common1: for HE sniffer, HE-SIG-B common part 1 + * @phy_data3: depends on info type (see @phy_data1) */ - __le32 sigb_common1; + __le32 
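/*
 * The old 64-bit he_phy_data enum above is replaced by per-dword
 * phy_data0..4 masks, so fields can be pulled out with plain 32-bit
 * mask/shift arithmetic. A standalone helper in the spirit of the kernel's
 * FIELD_GET() (uses the GCC/Clang __builtin_ctz; the sample word is
 * fabricated):
 */
#include <stdint.h>
#include <stdio.h>

/* IWL_RX_PHY_DATA1_INFO_TYPE_MASK from the enum above */
#define INFO_TYPE_MASK	0xf0000000u

static uint32_t field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) >> __builtin_ctz(mask);
}

int main(void)
{
	uint32_t phy_data1 = 0x60000123u;

	/* 6 == IWL_RX_PHY_INFO_TYPE_HE_SU in the info-type enum above */
	printf("info type = %u\n", field_get(INFO_TYPE_MASK, phy_data1));
	return 0;
}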
phy_data3; }; /* DW9 */ @@ -472,12 +504,19 @@ struct iwl_rx_mpdu_desc_v1 { * %IWL_RX_MPDU_PHY_TSF_OVERLOAD isn't set */ __le64 tsf_on_air_rise; - /** - * @he_phy_data: - * HE PHY data, see &enum iwl_rx_he_phy, valid - * only if %IWL_RX_MPDU_PHY_TSF_OVERLOAD is set - */ - __le64 he_phy_data; + + struct { + /** + * @phy_data0: depends on info_type, see @phy_data1 + */ + __le32 phy_data0; + /** + * @phy_data1: valid only if + * %IWL_RX_MPDU_PHY_TSF_OVERLOAD is set, + * see &enum iwl_rx_phy_data1. + */ + __le32 phy_data1; + }; }; } __packed; @@ -493,9 +532,9 @@ struct iwl_rx_mpdu_desc_v3 { __le32 filter_match; /** - * @sigb_common0: for HE sniffer, HE-SIG-B common part 0 + * @phy_data2: depends on info type (see @phy_data1) */ - __le32 sigb_common0; + __le32 phy_data2; }; /* DW8 - carries rss_hash only when rpa_en == 1 */ @@ -506,9 +545,9 @@ struct iwl_rx_mpdu_desc_v3 { __le32 rss_hash; /** - * @sigb_common1: for HE sniffer, HE-SIG-B common part 1 + * @phy_data3: depends on info type (see @phy_data1) */ - __le32 sigb_common1; + __le32 phy_data3; }; /* DW9 */ /** @@ -556,12 +595,19 @@ struct iwl_rx_mpdu_desc_v3 { * %IWL_RX_MPDU_PHY_TSF_OVERLOAD isn't set */ __le64 tsf_on_air_rise; - /** - * @he_phy_data: - * HE PHY data, see &enum iwl_rx_he_phy, valid - * only if %IWL_RX_MPDU_PHY_TSF_OVERLOAD is set - */ - __le64 he_phy_data; + + struct { + /** + * @phy_data0: depends on info_type, see @phy_data1 + */ + __le32 phy_data0; + /** + * @phy_data1: valid only if + * %IWL_RX_MPDU_PHY_TSF_OVERLOAD is set, + * see &enum iwl_rx_phy_data1. + */ + __le32 phy_data1; + }; }; /* DW16 & DW17 */ /** @@ -613,9 +659,9 @@ struct iwl_rx_mpdu_desc { __le16 l3l4_flags; /** - * @sigb_common2: for HE sniffer, HE-SIG-B common part 2 + * @phy_data4: depends on info type, see phy_data1 */ - __le16 sigb_common2; + __le16 phy_data4; }; /* DW5 */ /** @@ -651,6 +697,55 @@ struct iwl_rx_mpdu_desc { #define IWL_CD_STTS_WIFI_STATUS_POS 4 #define IWL_CD_STTS_WIFI_STATUS_MSK 0xF0 +#define RX_NO_DATA_CHAIN_A_POS 0 +#define RX_NO_DATA_CHAIN_A_MSK (0xff << RX_NO_DATA_CHAIN_A_POS) +#define RX_NO_DATA_CHAIN_B_POS 8 +#define RX_NO_DATA_CHAIN_B_MSK (0xff << RX_NO_DATA_CHAIN_B_POS) +#define RX_NO_DATA_CHANNEL_POS 16 +#define RX_NO_DATA_CHANNEL_MSK (0xff << RX_NO_DATA_CHANNEL_POS) + +#define RX_NO_DATA_INFO_TYPE_POS 0 +#define RX_NO_DATA_INFO_TYPE_MSK (0xff << RX_NO_DATA_INFO_TYPE_POS) +#define RX_NO_DATA_INFO_TYPE_NONE 0 +#define RX_NO_DATA_INFO_TYPE_RX_ERR 1 +#define RX_NO_DATA_INFO_TYPE_NDP 2 +#define RX_NO_DATA_INFO_TYPE_MU_UNMATCHED 3 +#define RX_NO_DATA_INFO_TYPE_HE_TB_UNMATCHED 4 + +#define RX_NO_DATA_INFO_ERR_POS 8 +#define RX_NO_DATA_INFO_ERR_MSK (0xff << RX_NO_DATA_INFO_ERR_POS) +#define RX_NO_DATA_INFO_ERR_NONE 0 +#define RX_NO_DATA_INFO_ERR_BAD_PLCP 1 +#define RX_NO_DATA_INFO_ERR_UNSUPPORTED_RATE 2 +#define RX_NO_DATA_INFO_ERR_NO_DELIM 3 +#define RX_NO_DATA_INFO_ERR_BAD_MAC_HDR 4 + +#define RX_NO_DATA_FRAME_TIME_POS 0 +#define RX_NO_DATA_FRAME_TIME_MSK (0xfffff << RX_NO_DATA_FRAME_TIME_POS) + +/** + * struct iwl_rx_no_data - RX no data descriptor + * @info: 7:0 frame type, 15:8 RX error type + * @rssi: 7:0 energy chain-A, + * 15:8 chain-B, measured at FINA time (FINA_ENERGY), 16:23 channel + * @on_air_rise_time: GP2 during on air rise + * @fr_time: frame time + * @rate: rate/mcs of frame + * @phy_info: &enum iwl_rx_phy_data0 and &enum iwl_rx_phy_info_type + * @rx_vec: DW-12:9 raw RX vectors from DSP according to modulation type. 
+ * for VHT: OFDM_RX_VECTOR_SIGA1_OUT, OFDM_RX_VECTOR_SIGA2_OUT + * for HE: OFDM_RX_VECTOR_HE_SIGA1_OUT, OFDM_RX_VECTOR_HE_SIGA2_OUT + */ +struct iwl_rx_no_data { + __le32 info; + __le32 rssi; + __le32 on_air_rise_time; + __le32 fr_time; + __le32 rate; + __le32 phy_info[2]; + __le32 rx_vec[3]; +} __packed; /* RX_NO_DATA_NTFY_API_S_VER_1 */ + /** * enum iwl_completion_desc_transfer_status - transfer status (bits 1-3) * @IWL_CD_STTS_UNUSED: unused diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c index c16757051f16..2a19b178c5e8 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c @@ -225,22 +225,18 @@ static void iwl_fwrt_dump_txf(struct iwl_fw_runtime *fwrt, *dump_data = iwl_fw_error_next_data(*dump_data); } -static void iwl_fw_dump_fifos(struct iwl_fw_runtime *fwrt, - struct iwl_fw_error_dump_data **dump_data) +static void iwl_fw_dump_rxf(struct iwl_fw_runtime *fwrt, + struct iwl_fw_error_dump_data **dump_data) { - struct iwl_fw_error_dump_fifo *fifo_hdr; struct iwl_fwrt_shared_mem_cfg *cfg = &fwrt->smem_cfg; - u32 *fifo_data; - u32 fifo_len; unsigned long flags; - int i, j; - IWL_DEBUG_INFO(fwrt, "WRT FIFO dump\n"); + IWL_DEBUG_INFO(fwrt, "WRT RX FIFO dump\n"); if (!iwl_trans_grab_nic_access(fwrt->trans, &flags)) return; - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_RXF)) { + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_RXF)) { /* Pull RXF1 */ iwl_fwrt_dump_rxf(fwrt, dump_data, cfg->lmac[0].rxfifo1_size, 0, 0); @@ -254,7 +250,25 @@ static void iwl_fw_dump_fifos(struct iwl_fw_runtime *fwrt, LMAC2_PRPH_OFFSET, 2); } - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_TXF)) { + iwl_trans_release_nic_access(fwrt->trans, &flags); +} + +static void iwl_fw_dump_txf(struct iwl_fw_runtime *fwrt, + struct iwl_fw_error_dump_data **dump_data) +{ + struct iwl_fw_error_dump_fifo *fifo_hdr; + struct iwl_fwrt_shared_mem_cfg *cfg = &fwrt->smem_cfg; + u32 *fifo_data; + u32 fifo_len; + unsigned long flags; + int i, j; + + IWL_DEBUG_INFO(fwrt, "WRT TX FIFO dump\n"); + + if (!iwl_trans_grab_nic_access(fwrt->trans, &flags)) + return; + + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_TXF)) { /* Pull TXF data from LMAC1 */ for (i = 0; i < fwrt->smem_cfg.num_txfifo_entries; i++) { /* Mark the number of TXF we're pulling now */ @@ -279,7 +293,7 @@ static void iwl_fw_dump_fifos(struct iwl_fw_runtime *fwrt, } } - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_INTERNAL_TXF) && + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_INTERNAL_TXF) && fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG)) { /* Pull UMAC internal TXF data from all TXFs */ @@ -591,20 +605,42 @@ static void iwl_fw_dump_mem(struct iwl_fw_runtime *fwrt, IWL_DEBUG_INFO(fwrt, "WRT memory dump. 
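/*
 * A standalone sketch of unpacking the new RX-no-data notification with
 * the RX_NO_DATA_* masks above: info carries the frame type in bits 7:0
 * and the error type in bits 15:8, rssi carries per-chain energy and the
 * channel. Host-endian stand-in for le32_to_cpu(); the sample words are
 * fabricated.
 */
#include <stdint.h>
#include <stdio.h>

#define INFO_TYPE_MSK	0x000000ffu	/* RX_NO_DATA_INFO_TYPE_MSK */
#define INFO_ERR_POS	8		/* RX_NO_DATA_INFO_ERR_POS */
#define CHAIN_B_POS	8		/* RX_NO_DATA_CHAIN_B_POS */
#define CHANNEL_POS	16		/* RX_NO_DATA_CHANNEL_POS */

int main(void)
{
	uint32_t info = (1u << INFO_ERR_POS) | 2u;	/* NDP with BAD_PLCP */
	uint32_t rssi = (36u << CHANNEL_POS) | (55u << CHAIN_B_POS) | 60u;

	printf("type=%u err=%u\n", info & INFO_TYPE_MSK,
	       (info >> INFO_ERR_POS) & 0xff);
	printf("chan=%u energy A=%u B=%u\n", (rssi >> CHANNEL_POS) & 0xff,
	       rssi & 0xff, (rssi >> CHAIN_B_POS) & 0xff);
	return 0;
}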
Type=%u\n", dump_mem->type); } +static void iwl_fw_dump_named_mem(struct iwl_fw_runtime *fwrt, + struct iwl_fw_error_dump_data **dump_data, + u32 len, u32 ofs, u8 *name, u8 name_len) +{ + struct iwl_fw_error_dump_named_mem *dump_mem; + + if (!len) + return; + + (*dump_data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_MEM); + (*dump_data)->len = cpu_to_le32(len + sizeof(*dump_mem)); + dump_mem = (void *)(*dump_data)->data; + dump_mem->type = cpu_to_le32(IWL_FW_ERROR_DUMP_MEM_NAMED_MEM); + dump_mem->offset = cpu_to_le32(ofs); + dump_mem->name_len = name_len; + memcpy(dump_mem->name, name, name_len); + iwl_trans_read_mem_bytes(fwrt->trans, ofs, dump_mem->data, len); + *dump_data = iwl_fw_error_next_data(*dump_data); + + IWL_DEBUG_INFO(fwrt, "WRT memory dump. Type=%u\n", dump_mem->type); +} + #define ADD_LEN(len, item_len, const_len) \ do {size_t item = item_len; len += (!!item) * const_len + item; } \ while (0) -static int iwl_fw_fifo_len(struct iwl_fw_runtime *fwrt, - struct iwl_fwrt_shared_mem_cfg *mem_cfg) +static int iwl_fw_rxf_len(struct iwl_fw_runtime *fwrt, + struct iwl_fwrt_shared_mem_cfg *mem_cfg) { size_t hdr_len = sizeof(struct iwl_fw_error_dump_data) + sizeof(struct iwl_fw_error_dump_fifo); u32 fifo_len = 0; int i; - if (!(fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_RXF))) - goto dump_txf; + if (!iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_RXF)) + return 0; /* Count RXF2 size */ ADD_LEN(fifo_len, mem_cfg->rxfifo2_size, hdr_len); @@ -613,8 +649,18 @@ static int iwl_fw_fifo_len(struct iwl_fw_runtime *fwrt, for (i = 0; i < mem_cfg->num_lmacs; i++) ADD_LEN(fifo_len, mem_cfg->lmac[i].rxfifo1_size, hdr_len); -dump_txf: - if (!(fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_TXF))) + return fifo_len; +} + +static int iwl_fw_txf_len(struct iwl_fw_runtime *fwrt, + struct iwl_fwrt_shared_mem_cfg *mem_cfg) +{ + size_t hdr_len = sizeof(struct iwl_fw_error_dump_data) + + sizeof(struct iwl_fw_error_dump_fifo); + u32 fifo_len = 0; + int i; + + if (!iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_TXF)) goto dump_internal_txf; /* Count TXF sizes */ @@ -627,7 +673,7 @@ dump_txf: } dump_internal_txf: - if (!((fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_INTERNAL_TXF)) && + if (!(iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_INTERNAL_TXF) && fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG))) goto out; @@ -639,6 +685,32 @@ out: return fifo_len; } +static void iwl_dump_paging(struct iwl_fw_runtime *fwrt, + struct iwl_fw_error_dump_data **data) +{ + int i; + + IWL_DEBUG_INFO(fwrt, "WRT paging dump\n"); + for (i = 1; i < fwrt->num_of_paging_blk + 1; i++) { + struct iwl_fw_error_dump_paging *paging; + struct page *pages = + fwrt->fw_paging_db[i].fw_paging_block; + dma_addr_t addr = fwrt->fw_paging_db[i].fw_paging_phys; + + (*data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_PAGING); + (*data)->len = cpu_to_le32(sizeof(*paging) + + PAGING_BLOCK_SIZE); + paging = (void *)(*data)->data; + paging->index = cpu_to_le32(i); + dma_sync_single_for_cpu(fwrt->trans->dev, addr, + PAGING_BLOCK_SIZE, + DMA_BIDIRECTIONAL); + memcpy(paging->data, page_address(pages), + PAGING_BLOCK_SIZE); + (*data) = iwl_fw_error_next_data(*data); + } +} + static struct iwl_fw_error_dump_file * _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, struct iwl_fw_dump_ptrs *fw_error_dump) @@ -655,13 +727,8 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, u32 smem_len = fwrt->fw->dbg.n_mem_tlv ? 0 : fwrt->trans->cfg->smem_len; u32 sram2_len = fwrt->fw->dbg.n_mem_tlv ? 
0 : fwrt->trans->cfg->dccm2_len; - bool monitor_dump_only = false; int i; - if (fwrt->dump.trig && - fwrt->dump.trig->mode & IWL_FW_DBG_TRIGGER_MONITOR_ONLY) - monitor_dump_only = true; - /* SRAM - include stack CCM if driver knows the values for it */ if (!fwrt->trans->cfg->dccm_offset || !fwrt->trans->cfg->dccm_len) { const struct fw_img *img; @@ -676,26 +743,27 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, /* reading RXF/TXF sizes */ if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status)) { - fifo_len = iwl_fw_fifo_len(fwrt, mem_cfg); + fifo_len = iwl_fw_rxf_len(fwrt, mem_cfg); + fifo_len += iwl_fw_txf_len(fwrt, mem_cfg); /* Make room for PRPH registers */ if (!fwrt->trans->cfg->gen2 && - fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_PRPH)) + iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_PRPH)) prph_len += iwl_fw_get_prph_len(fwrt); if (fwrt->trans->cfg->device_family == IWL_DEVICE_FAMILY_7000 && - fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_RADIO_REG)) + iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_RADIO_REG)) radio_len = sizeof(*dump_data) + RADIO_REG_MAX_READ; } file_len = sizeof(*dump_file) + fifo_len + prph_len + radio_len; - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_DEV_FW_INFO)) + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_DEV_FW_INFO)) file_len += sizeof(*dump_data) + sizeof(*dump_info); - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_MEM_CFG)) + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_MEM_CFG)) file_len += sizeof(*dump_data) + sizeof(*dump_smem_cfg); - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_MEM)) { + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_MEM)) { size_t hdr_len = sizeof(*dump_data) + sizeof(struct iwl_fw_error_dump_mem); @@ -712,10 +780,7 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, } /* Make room for fw's virtual image pages, if it exists */ - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_PAGING) && - !fwrt->trans->cfg->gen2 && - fwrt->fw->img[fwrt->cur_fw_img].paging_mem_size && - fwrt->fw_paging_db[0].fw_paging_block) + if (iwl_fw_dbg_is_paging_enabled(fwrt)) file_len += fwrt->num_of_paging_blk * (sizeof(*dump_data) + sizeof(struct iwl_fw_error_dump_paging) + @@ -727,12 +792,12 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, } /* If we only want a monitor dump, reset the file length */ - if (monitor_dump_only) { + if (fwrt->dump.monitor_only) { file_len = sizeof(*dump_file) + sizeof(*dump_data) * 2 + sizeof(*dump_info) + sizeof(*dump_smem_cfg); } - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_ERROR_INFO) && + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_ERROR_INFO) && fwrt->dump.desc) file_len += sizeof(*dump_data) + sizeof(*dump_trig) + fwrt->dump.desc->len; @@ -746,7 +811,7 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, dump_file->barker = cpu_to_le32(IWL_FW_ERROR_DUMP_BARKER); dump_data = (void *)dump_file->data; - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_DEV_FW_INFO)) { + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_DEV_FW_INFO)) { dump_data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_DEV_FW_INFO); dump_data->len = cpu_to_le32(sizeof(*dump_info)); dump_info = (void *)dump_data->data; @@ -763,11 +828,12 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, sizeof(dump_info->dev_human_readable) - 1); strncpy(dump_info->bus_human_readable, fwrt->dev->bus->name, sizeof(dump_info->bus_human_readable) - 1); + dump_info->rt_status = cpu_to_le32(fwrt->dump.rt_status); dump_data = iwl_fw_error_next_data(dump_data); } - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_MEM_CFG)) { + if 
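/*
 * The dbg.c hunks around here replace every open-coded
 * "fwrt->fw->dbg.dump_mask & BIT(type)" test with iwl_fw_dbg_type_on().
 * The helper's body is not part of this diff; a plausible standalone
 * equivalent of the predicate it wraps (the dump-type values below are
 * illustrative, not the real IWL_FW_ERROR_DUMP_* numbers):
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT(n)	(1u << (n))

enum { DUMP_RXF = 4, DUMP_TXF = 5 };	/* assumed demo values */

static bool dbg_type_on(uint32_t dump_mask, unsigned int type)
{
	return dump_mask & BIT(type);
}

int main(void)
{
	uint32_t mask = BIT(DUMP_RXF);

	printf("rxf=%d txf=%d\n", dbg_type_on(mask, DUMP_RXF),
	       dbg_type_on(mask, DUMP_TXF));
	return 0;
}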
(iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_MEM_CFG)) { /* Dump shared memory configuration */ dump_data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_MEM_CFG); dump_data->len = cpu_to_le32(sizeof(*dump_smem_cfg)); @@ -799,12 +865,13 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, /* We only dump the FIFOs if the FW is in error state */ if (fifo_len) { - iwl_fw_dump_fifos(fwrt, &dump_data); + iwl_fw_dump_rxf(fwrt, &dump_data); + iwl_fw_dump_txf(fwrt, &dump_data); if (radio_len) iwl_read_radio_regs(fwrt, &dump_data); } - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_ERROR_INFO) && + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_ERROR_INFO) && fwrt->dump.desc) { dump_data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_ERROR_INFO); dump_data->len = cpu_to_le32(sizeof(*dump_trig) + @@ -817,10 +884,10 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, } /* In case we only want monitor dump, skip to dump trasport data */ - if (monitor_dump_only) + if (fwrt->dump.monitor_only) goto out; - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_MEM)) { + if (iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_MEM)) { const struct iwl_fw_dbg_mem_seg_tlv *fw_dbg_mem = fwrt->fw->dbg.mem_tlv; @@ -865,30 +932,8 @@ _iwl_fw_error_dump(struct iwl_fw_runtime *fwrt, } /* Dump fw's virtual image */ - if (fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_PAGING) && - !fwrt->trans->cfg->gen2 && - fwrt->fw->img[fwrt->cur_fw_img].paging_mem_size && - fwrt->fw_paging_db[0].fw_paging_block) { - IWL_DEBUG_INFO(fwrt, "WRT paging dump\n"); - for (i = 1; i < fwrt->num_of_paging_blk + 1; i++) { - struct iwl_fw_error_dump_paging *paging; - struct page *pages = - fwrt->fw_paging_db[i].fw_paging_block; - dma_addr_t addr = fwrt->fw_paging_db[i].fw_paging_phys; - - dump_data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_PAGING); - dump_data->len = cpu_to_le32(sizeof(*paging) + - PAGING_BLOCK_SIZE); - paging = (void *)dump_data->data; - paging->index = cpu_to_le32(i); - dma_sync_single_for_cpu(fwrt->trans->dev, addr, - PAGING_BLOCK_SIZE, - DMA_BIDIRECTIONAL); - memcpy(paging->data, page_address(pages), - PAGING_BLOCK_SIZE); - dump_data = iwl_fw_error_next_data(dump_data); - } - } + if (iwl_fw_dbg_is_paging_enabled(fwrt)) + iwl_dump_paging(fwrt, &dump_data); if (prph_len) { iwl_dump_prph(fwrt->trans, &dump_data, @@ -906,12 +951,245 @@ out: return dump_file; } +static void iwl_dump_prph_ini(struct iwl_trans *trans, + struct iwl_fw_error_dump_data **data, + struct iwl_fw_ini_region_cfg *reg) +{ + struct iwl_fw_error_dump_prph *prph; + unsigned long flags; + u32 i, size = le32_to_cpu(reg->num_regions); + + IWL_DEBUG_INFO(trans, "WRT PRPH dump\n"); + + if (!iwl_trans_grab_nic_access(trans, &flags)) + return; + + for (i = 0; i < size; i++) { + (*data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_PRPH); + (*data)->len = cpu_to_le32(le32_to_cpu(reg->size) + + sizeof(*prph)); + prph = (void *)(*data)->data; + prph->prph_start = reg->start_addr[i]; + prph->data[0] = cpu_to_le32(iwl_read_prph_no_grab(trans, + le32_to_cpu(prph->prph_start))); + *data = iwl_fw_error_next_data(*data); + } + iwl_trans_release_nic_access(trans, &flags); +} + +static void iwl_dump_csr_ini(struct iwl_trans *trans, + struct iwl_fw_error_dump_data **data, + struct iwl_fw_ini_region_cfg *reg) +{ + int i, num = le32_to_cpu(reg->num_regions); + u32 size = le32_to_cpu(reg->size); + + IWL_DEBUG_INFO(trans, "WRT CSR dump\n"); + + for (i = 0; i < num; i++) { + u32 add = le32_to_cpu(reg->start_addr[i]); + __le32 *val; + int j; + + (*data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_CSR); + (*data)->len = 
cpu_to_le32(size); + val = (void *)(*data)->data; + + for (j = 0; j < size; j += 4) + *val++ = cpu_to_le32(iwl_trans_read32(trans, j + add)); + + *data = iwl_fw_error_next_data(*data); + } +} + +static int iwl_fw_ini_get_trigger_len(struct iwl_fw_runtime *fwrt, + struct iwl_fw_ini_trigger *trigger) +{ + int i, num, size = 0, hdr_len = sizeof(struct iwl_fw_error_dump_data); + + if (!trigger || !trigger->num_regions) + return 0; + + num = le32_to_cpu(trigger->num_regions); + for (i = 0; i < num; i++) { + u32 reg_id = le32_to_cpu(trigger->data[i]); + struct iwl_fw_ini_region_cfg *reg; + enum iwl_fw_ini_region_type type; + u32 num_entries; + + if (WARN_ON(reg_id >= ARRAY_SIZE(fwrt->dump.active_regs))) + continue; + + reg = fwrt->dump.active_regs[reg_id].reg; + if (WARN(!reg, "Unassigned region %d\n", reg_id)) + continue; + + type = le32_to_cpu(reg->region_type); + num_entries = le32_to_cpu(reg->num_regions); + + switch (type) { + case IWL_FW_INI_REGION_DEVICE_MEMORY: + size += hdr_len + + sizeof(struct iwl_fw_error_dump_named_mem) + + le32_to_cpu(reg->size); + break; + case IWL_FW_INI_REGION_PERIPHERY_MAC: + case IWL_FW_INI_REGION_PERIPHERY_PHY: + case IWL_FW_INI_REGION_PERIPHERY_AUX: + size += num_entries * + (hdr_len + + sizeof(struct iwl_fw_error_dump_prph) + + sizeof(u32)); + break; + case IWL_FW_INI_REGION_TXF: + size += iwl_fw_txf_len(fwrt, &fwrt->smem_cfg); + break; + case IWL_FW_INI_REGION_RXF: + size += iwl_fw_rxf_len(fwrt, &fwrt->smem_cfg); + break; + case IWL_FW_INI_REGION_PAGING: + if (!iwl_fw_dbg_is_paging_enabled(fwrt)) + break; + size += fwrt->num_of_paging_blk * + (hdr_len + + sizeof(struct iwl_fw_error_dump_paging) + + PAGING_BLOCK_SIZE); + break; + case IWL_FW_INI_REGION_CSR: + size += num_entries * + (hdr_len + le32_to_cpu(reg->size)); + break; + case IWL_FW_INI_REGION_DRAM_BUFFER: + /* Transport takes care of DRAM dumping */ + case IWL_FW_INI_REGION_INTERNAL_BUFFER: + case IWL_FW_INI_REGION_DRAM_IMR: + /* Undefined yet */ + default: + break; + } + } + return size; +} + +static void iwl_fw_ini_dump_trigger(struct iwl_fw_runtime *fwrt, + struct iwl_fw_ini_trigger *trigger, + struct iwl_fw_error_dump_data **data, + u32 *dump_mask) +{ + int i, num = le32_to_cpu(trigger->num_regions); + + for (i = 0; i < num; i++) { + u32 reg_id = le32_to_cpu(trigger->data[i]); + enum iwl_fw_ini_region_type type; + struct iwl_fw_ini_region_cfg *reg; + + if (reg_id >= ARRAY_SIZE(fwrt->dump.active_regs)) + continue; + + reg = fwrt->dump.active_regs[reg_id].reg; + /* Don't warn, get_trigger_len already warned */ + if (!reg) + continue; + + type = le32_to_cpu(reg->region_type); + switch (type) { + case IWL_FW_INI_REGION_DEVICE_MEMORY: + if (WARN_ON(le32_to_cpu(reg->num_regions) > 1)) + continue; + iwl_fw_dump_named_mem(fwrt, data, + le32_to_cpu(reg->size), + le32_to_cpu(reg->start_addr[0]), + reg->name, + le32_to_cpu(reg->name_len)); + break; + case IWL_FW_INI_REGION_PERIPHERY_MAC: + case IWL_FW_INI_REGION_PERIPHERY_PHY: + case IWL_FW_INI_REGION_PERIPHERY_AUX: + iwl_dump_prph_ini(fwrt->trans, data, reg); + break; + case IWL_FW_INI_REGION_DRAM_BUFFER: + *dump_mask |= IWL_FW_ERROR_DUMP_FW_MONITOR; + break; + case IWL_FW_INI_REGION_PAGING: + if (iwl_fw_dbg_is_paging_enabled(fwrt)) + iwl_dump_paging(fwrt, data); + else + *dump_mask |= IWL_FW_ERROR_DUMP_PAGING; + break; + case IWL_FW_INI_REGION_TXF: + iwl_fw_dump_txf(fwrt, data); + break; + case IWL_FW_INI_REGION_RXF: + iwl_fw_dump_rxf(fwrt, data); + break; + case IWL_FW_INI_REGION_CSR: + iwl_dump_csr_ini(fwrt->trans, data, reg); + break; + case 
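/*
 * iwl_fw_ini_get_trigger_len() and iwl_fw_ini_dump_trigger() above are the
 * two halves of a classic two-pass serializer: the first pass walks the
 * trigger's region list to sum the bytes needed so a single vzalloc() can
 * hold the whole dump file, the second re-walks the same list and emits the
 * records. A compact standalone sketch of that shape (names and record
 * layout are illustrative):
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct rec {
	size_t len;
	const char *payload;
};

static size_t total_len(const struct rec *r, int n)	/* pass 1: size */
{
	size_t sz = 0;
	int i;

	for (i = 0; i < n; i++)
		sz += r[i].len;
	return sz;
}

int main(void)
{
	struct rec regions[] = { { 5, "abcde" }, { 3, "xyz" } };
	size_t sz = total_len(regions, 2);
	char *buf = calloc(1, sz + 1);
	size_t off = 0;
	int i;

	if (!buf)
		return 1;
	for (i = 0; i < 2; i++) {			/* pass 2: fill */
		memcpy(buf + off, regions[i].payload, regions[i].len);
		off += regions[i].len;
	}
	printf("%zu bytes: %s\n", sz, buf);
	free(buf);
	return 0;
}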
IWL_FW_INI_REGION_DRAM_IMR: + case IWL_FW_INI_REGION_INTERNAL_BUFFER: + /* This is undefined yet */ + default: + break; + } + } +} + +static struct iwl_fw_error_dump_file * +_iwl_fw_error_ini_dump(struct iwl_fw_runtime *fwrt, + struct iwl_fw_dump_ptrs *fw_error_dump, + u32 *dump_mask) +{ + int size, id = le32_to_cpu(fwrt->dump.desc->trig_desc.type); + struct iwl_fw_error_dump_data *dump_data; + struct iwl_fw_error_dump_file *dump_file; + struct iwl_fw_ini_trigger *trigger, *ext; + + if (id == FW_DBG_TRIGGER_FW_ASSERT) + id = IWL_FW_TRIGGER_ID_FW_ASSERT; + else if (id == FW_DBG_TRIGGER_USER) + id = IWL_FW_TRIGGER_ID_USER_TRIGGER; + else if (id < FW_DBG_TRIGGER_MAX) + return NULL; + + if (WARN_ON(id >= ARRAY_SIZE(fwrt->dump.active_trigs))) + return NULL; + + trigger = fwrt->dump.active_trigs[id].conf; + ext = fwrt->dump.active_trigs[id].conf_ext; + + size = sizeof(*dump_file); + size += iwl_fw_ini_get_trigger_len(fwrt, trigger); + size += iwl_fw_ini_get_trigger_len(fwrt, ext); + + if (!size) + return NULL; + + dump_file = vzalloc(size); + if (!dump_file) + return NULL; + + fw_error_dump->fwrt_ptr = dump_file; + + dump_file->barker = cpu_to_le32(IWL_FW_ERROR_DUMP_BARKER); + dump_data = (void *)dump_file->data; + dump_file->file_len = cpu_to_le32(size); + + *dump_mask = 0; + if (trigger) + iwl_fw_ini_dump_trigger(fwrt, trigger, &dump_data, dump_mask); + if (ext) + iwl_fw_ini_dump_trigger(fwrt, ext, &dump_data, dump_mask); + + return dump_file; +} + void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt) { struct iwl_fw_dump_ptrs *fw_error_dump; struct iwl_fw_error_dump_file *dump_file; struct scatterlist *sg_dump_data; u32 file_len; + u32 dump_mask = fwrt->fw->dbg.dump_mask; IWL_DEBUG_INFO(fwrt, "WRT dump start\n"); @@ -925,14 +1203,21 @@ void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt) if (!fw_error_dump) goto out; - dump_file = _iwl_fw_error_dump(fwrt, fw_error_dump); + if (fwrt->trans->ini_valid) + dump_file = _iwl_fw_error_ini_dump(fwrt, fw_error_dump, + &dump_mask); + else + dump_file = _iwl_fw_error_dump(fwrt, fw_error_dump); + if (!dump_file) { kfree(fw_error_dump); goto out; } - fw_error_dump->trans_ptr = iwl_trans_dump_data(fwrt->trans, - fwrt->dump.trig); + if (!fwrt->trans->ini_valid && fwrt->dump.monitor_only) + dump_mask &= IWL_FW_ERROR_DUMP_FW_MONITOR; + + fw_error_dump->trans_ptr = iwl_trans_dump_data(fwrt->trans, dump_mask); file_len = le32_to_cpu(dump_file->file_len); fw_error_dump->fwrt_len = file_len; if (fw_error_dump->trans_ptr) { @@ -973,6 +1258,14 @@ const struct iwl_fw_dump_desc iwl_dump_desc_assert = { }; IWL_EXPORT_SYMBOL(iwl_dump_desc_assert); +void iwl_fw_assert_error_dump(struct iwl_fw_runtime *fwrt) +{ + IWL_INFO(fwrt, "error dump due to fw assert\n"); + fwrt->dump.desc = &iwl_dump_desc_assert; + iwl_fw_error_dump(fwrt); +} +IWL_EXPORT_SYMBOL(iwl_fw_assert_error_dump); + void iwl_fw_alive_error_dump(struct iwl_fw_runtime *fwrt) { struct iwl_fw_dump_desc *iwl_dump_desc_no_alive = @@ -998,7 +1291,8 @@ void iwl_fw_alive_error_dump(struct iwl_fw_runtime *fwrt) IWL_EXPORT_SYMBOL(iwl_fw_alive_error_dump); int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt, - const struct iwl_fw_dump_desc *desc, void *trigger, + const struct iwl_fw_dump_desc *desc, + bool monitor_only, unsigned int delay) { /* @@ -1028,7 +1322,7 @@ int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt, le32_to_cpu(desc->trig_desc.type)); fwrt->dump.desc = desc; - fwrt->dump.trig = trigger; + fwrt->dump.monitor_only = monitor_only; schedule_delayed_work(&fwrt->dump.wk, delay); @@ -1036,13 +1330,14 
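
Editorial sketch: both dump builders above append records back-to-back and advance with iwl_fw_error_next_data(). The mechanism in isolation, with host-order fields where the driver uses __le32:

    #include <stdint.h>
    #include <string.h>

    /* One dump record: fixed header, then len payload bytes. */
    struct dump_data {
        uint32_t type;
        uint32_t len;
        uint8_t  data[];
    };

    /* Records sit back-to-back, so the next header starts right
     * after this record's payload. */
    static struct dump_data *next_data(struct dump_data *d)
    {
        return (struct dump_data *)(d->data + d->len);
    }

    static struct dump_data *emit(struct dump_data *d, uint32_t type,
                                  const void *payload, uint32_t len)
    {
        d->type = type;
        d->len = len;
        memcpy(d->data, payload, len);
        return next_data(d);
    }

This is why the pre-sizing pass must be exact: a record written past the vzalloc'ed size has no guard page between it and the next allocation.
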
@@ int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt, } IWL_EXPORT_SYMBOL(iwl_fw_dbg_collect_desc); -int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt, - enum iwl_fw_dbg_trigger trig, - const char *str, size_t len, - struct iwl_fw_dbg_trigger_tlv *trigger) +int _iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt, + enum iwl_fw_dbg_trigger trig, + const char *str, size_t len, + struct iwl_fw_dbg_trigger_tlv *trigger) { struct iwl_fw_dump_desc *desc; unsigned int delay = 0; + bool monitor_only = false; if (trigger) { u16 occurrences = le16_to_cpu(trigger->occurrences) - 1; @@ -1059,6 +1354,7 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt, trigger->occurrences = cpu_to_le16(occurrences); delay = le16_to_cpu(trigger->trig_dis_ms); + monitor_only = trigger->mode & IWL_FW_DBG_TRIGGER_MONITOR_ONLY; } desc = kzalloc(sizeof(*desc) + len, GFP_ATOMIC); @@ -1070,7 +1366,48 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt, desc->trig_desc.type = cpu_to_le32(trig); memcpy(desc->trig_desc.data, str, len); - return iwl_fw_dbg_collect_desc(fwrt, desc, trigger, delay); + return iwl_fw_dbg_collect_desc(fwrt, desc, monitor_only, delay); +} +IWL_EXPORT_SYMBOL(_iwl_fw_dbg_collect); + +int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt, + u32 id, const char *str, size_t len) +{ + struct iwl_fw_dump_desc *desc; + u32 occur, delay; + + if (!fwrt->trans->ini_valid) + return _iwl_fw_dbg_collect(fwrt, id, str, len, NULL); + + if (id == FW_DBG_TRIGGER_USER) + id = IWL_FW_TRIGGER_ID_USER_TRIGGER; + + if (WARN_ON(!fwrt->dump.active_trigs[id].active)) + return -EINVAL; + + delay = le32_to_cpu(fwrt->dump.active_trigs[id].conf->ignore_consec); + occur = le32_to_cpu(fwrt->dump.active_trigs[id].conf->occurrences); + if (!occur) + return 0; + + if (le32_to_cpu(fwrt->dump.active_trigs[id].conf->force_restart)) { + IWL_WARN(fwrt, "Force restart: trigger %d fired.\n", id); + iwl_force_nmi(fwrt->trans); + return 0; + } + + desc = kzalloc(sizeof(*desc) + len, GFP_ATOMIC); + if (!desc) + return -ENOMEM; + + occur--; + fwrt->dump.active_trigs[id].conf->occurrences = cpu_to_le32(occur); + + desc->len = len; + desc->trig_desc.type = cpu_to_le32(id); + memcpy(desc->trig_desc.data, str, len); + + return iwl_fw_dbg_collect_desc(fwrt, desc, true, delay); } IWL_EXPORT_SYMBOL(iwl_fw_dbg_collect); @@ -1081,6 +1418,9 @@ int iwl_fw_dbg_collect_trig(struct iwl_fw_runtime *fwrt, int ret, len = 0; char buf[64]; + if (fwrt->trans->ini_valid) + return 0; + if (fmt) { va_list ap; @@ -1097,8 +1437,8 @@ int iwl_fw_dbg_collect_trig(struct iwl_fw_runtime *fwrt, len = strlen(buf) + 1; } - ret = iwl_fw_dbg_collect(fwrt, le32_to_cpu(trigger->id), buf, len, - trigger); + ret = _iwl_fw_dbg_collect(fwrt, le32_to_cpu(trigger->id), buf, len, + trigger); if (ret) return ret; @@ -1224,3 +1564,217 @@ void iwl_fw_dbg_read_d3_debug_data(struct iwl_fw_runtime *fwrt) cfg->d3_debug_data_length); } IWL_EXPORT_SYMBOL(iwl_fw_dbg_read_d3_debug_data); + +static void +iwl_fw_dbg_buffer_allocation(struct iwl_fw_runtime *fwrt, + struct iwl_fw_ini_allocation_tlv *alloc) +{ + struct iwl_trans *trans = fwrt->trans; + struct iwl_continuous_record_cmd cont_rec = {}; + struct iwl_buffer_allocation_cmd *cmd = (void *)&cont_rec.pad[0]; + struct iwl_host_cmd hcmd = { + .id = LDBG_CONFIG_CMD, + .flags = CMD_ASYNC, + .data[0] = &cont_rec, + .len[0] = sizeof(cont_rec), + }; + void *virtual_addr = NULL; + u32 size = le32_to_cpu(alloc->size); + dma_addr_t phys_addr; + + cont_rec.record_mode.enable_recording = cpu_to_le16(BUFFER_ALLOCATION); + + if (!trans->num_blocks && + 
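
Editorial sketch: the INI collect path above counts trigger firings with a stored occurrence counter, where 0 at apply time means "infinite" and is rewritten to 0xffffffff. The runtime check then only needs an exhaustion test and a decrement:

    #include <stdbool.h>
    #include <stdint.h>

    static bool trigger_should_collect(uint32_t *occurrences)
    {
        if (*occurrences == 0)  /* exhausted, or never armed */
            return false;
        (*occurrences)--;       /* 0xffffffff is effectively infinite */
        return true;
    }
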
le32_to_cpu(alloc->buffer_location) != + IWL_FW_INI_LOCATION_DRAM_PATH) + return; + + virtual_addr = dma_alloc_coherent(fwrt->trans->dev, size, + &phys_addr, GFP_KERNEL); + + /* TODO: alloc fragments if needed */ + if (!virtual_addr) + IWL_ERR(fwrt, "Failed to allocate debug memory\n"); + + if (WARN_ON_ONCE(trans->num_blocks == ARRAY_SIZE(trans->fw_mon))) + return; + + trans->fw_mon[trans->num_blocks].block = virtual_addr; + trans->fw_mon[trans->num_blocks].physical = phys_addr; + trans->fw_mon[trans->num_blocks].size = size; + trans->num_blocks++; + + IWL_DEBUG_FW(trans, "Allocated debug block of size %d\n", size); + + /* First block is assigned via registers / context info */ + if (trans->num_blocks == 1) + return; + + cmd->num_frags = cpu_to_le32(1); + cmd->fragments[0].address = cpu_to_le64(phys_addr); + cmd->fragments[0].size = alloc->size; + cmd->allocation_id = alloc->allocation_id; + cmd->buffer_location = alloc->buffer_location; + + iwl_trans_send_cmd(trans, &hcmd); +} + +static void iwl_fw_dbg_send_hcmd(struct iwl_fw_runtime *fwrt, + struct iwl_ucode_tlv *tlv) +{ + struct iwl_fw_ini_hcmd_tlv *hcmd_tlv = (void *)&tlv->data[0]; + struct iwl_fw_ini_hcmd *data = &hcmd_tlv->hcmd; + u16 len = le32_to_cpu(tlv->length) - sizeof(*hcmd_tlv); + + struct iwl_host_cmd hcmd = { + .id = WIDE_ID(data->group, data->id), + .len = { len, }, + .data = { data->data, }, + }; + + iwl_trans_send_cmd(fwrt->trans, &hcmd); +} + +static void iwl_fw_dbg_update_regions(struct iwl_fw_runtime *fwrt, + struct iwl_fw_ini_region_tlv *tlv, + bool ext, enum iwl_fw_ini_apply_point pnt) +{ + void *iter = (void *)tlv->region_config; + int i, size = le32_to_cpu(tlv->num_regions); + + for (i = 0; i < size; i++) { + struct iwl_fw_ini_region_cfg *reg = iter; + int id = le32_to_cpu(reg->region_id); + struct iwl_fw_ini_active_regs *active; + + if (WARN(id >= ARRAY_SIZE(fwrt->dump.active_regs), + "Invalid region id %d for apply point %d\n", id, pnt)) + break; + + active = &fwrt->dump.active_regs[id]; + + if (ext && active->apply_point == pnt) + IWL_WARN(fwrt->trans, + "External region TLV overrides FW default %x\n", + id); + + IWL_DEBUG_FW(fwrt, + "%s: apply point %d, activating region ID %d\n", + __func__, pnt, id); + + active->reg = reg; + active->apply_point = pnt; + + if (le32_to_cpu(reg->region_type) != + IWL_FW_INI_REGION_DRAM_BUFFER) + iter += le32_to_cpu(reg->num_regions) * sizeof(__le32); + + iter += sizeof(*reg); + } +} + +static void iwl_fw_dbg_update_triggers(struct iwl_fw_runtime *fwrt, + struct iwl_fw_ini_trigger_tlv *tlv, + bool ext, + enum iwl_fw_ini_apply_point apply_point) +{ + int i, size = le32_to_cpu(tlv->num_triggers); + void *iter = (void *)tlv->trigger_config; + + for (i = 0; i < size; i++) { + struct iwl_fw_ini_trigger *trig = iter; + struct iwl_fw_ini_active_triggers *active; + int id = le32_to_cpu(trig->trigger_id); + u32 num; + + if (WARN_ON(id >= ARRAY_SIZE(fwrt->dump.active_trigs))) + break; + + active = &fwrt->dump.active_trigs[id]; + + if (active->apply_point != apply_point) { + active->conf = NULL; + active->conf_ext = NULL; + } + + num = le32_to_cpu(trig->num_regions); + + if (ext && active->apply_point == apply_point) { + num += le32_to_cpu(active->conf->num_regions); + if (trig->ignore_default) { + active->conf_ext = active->conf; + active->conf = trig; + } else { + active->conf_ext = trig; + } + } else { + active->conf = trig; + } + + /* Since zero means infinity - just set to -1 */ + if (!le32_to_cpu(trig->occurrences)) + trig->occurrences = cpu_to_le32(-1); + if 
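
Editorial sketch: region TLVs in the update loop above are variable-sized, a fixed header plus one 32-bit start address per sub-region, except DRAM buffers, which carry none. A simplified model of the per-record stride (the real struct has more fields, and the type value here is illustrative):

    #include <stdint.h>

    #define REGION_DRAM_BUFFER 4        /* illustrative type value */

    struct region_cfg {
        uint32_t region_id;
        uint32_t region_type;
        uint32_t num_regions;           /* count of start_addr[] words */
        uint32_t start_addr[];          /* absent for DRAM-buffer regions */
    };

    static struct region_cfg *next_region(struct region_cfg *reg)
    {
        uint8_t *p = (uint8_t *)reg + sizeof(*reg);

        if (reg->region_type != REGION_DRAM_BUFFER)
            p += reg->num_regions * sizeof(uint32_t);
        return (struct region_cfg *)p;
    }
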
(!le32_to_cpu(trig->ignore_consec)) + trig->ignore_consec = cpu_to_le32(-1); + + iter += sizeof(*trig) + + le32_to_cpu(trig->num_regions) * sizeof(__le32); + + active->active = num; + active->apply_point = apply_point; + } +} + +static void _iwl_fw_dbg_apply_point(struct iwl_fw_runtime *fwrt, + struct iwl_apply_point_data *data, + enum iwl_fw_ini_apply_point pnt, + bool ext) +{ + void *iter = data->data; + + while (iter && iter < data->data + data->size) { + struct iwl_ucode_tlv *tlv = iter; + void *ini_tlv = (void *)tlv->data; + u32 type = le32_to_cpu(tlv->type); + + switch (type) { + case IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION: + iwl_fw_dbg_buffer_allocation(fwrt, ini_tlv); + break; + case IWL_UCODE_TLV_TYPE_HCMD: + if (pnt < IWL_FW_INI_APPLY_AFTER_ALIVE) { + IWL_ERR(fwrt, + "Invalid apply point %x for host command\n", + pnt); + goto next; + } + iwl_fw_dbg_send_hcmd(fwrt, tlv); + break; + case IWL_UCODE_TLV_TYPE_REGIONS: + iwl_fw_dbg_update_regions(fwrt, ini_tlv, ext, pnt); + break; + case IWL_UCODE_TLV_TYPE_TRIGGERS: + iwl_fw_dbg_update_triggers(fwrt, ini_tlv, ext, pnt); + break; + case IWL_UCODE_TLV_TYPE_DEBUG_FLOW: + break; + default: + WARN_ONCE(1, "Invalid TLV %x for apply point\n", type); + break; + } +next: + iter += sizeof(*tlv) + le32_to_cpu(tlv->length); + } +} + +void iwl_fw_dbg_apply_point(struct iwl_fw_runtime *fwrt, + enum iwl_fw_ini_apply_point apply_point) +{ + void *data = &fwrt->trans->apply_points[apply_point]; + + _iwl_fw_dbg_apply_point(fwrt, data, apply_point, false); + + data = &fwrt->trans->apply_points_ext[apply_point]; + _iwl_fw_dbg_apply_point(fwrt, data, apply_point, true); +} +IWL_EXPORT_SYMBOL(iwl_fw_dbg_apply_point); diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h index 6f8d3256f7b0..6aabbdd72326 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h @@ -72,6 +72,7 @@ #include "file.h" #include "error-dump.h" #include "api/commands.h" +#include "api/dbg-tlv.h" /** * struct iwl_fw_dump_desc - describes the dump @@ -101,17 +102,19 @@ static inline void iwl_fw_free_dump_desc(struct iwl_fw_runtime *fwrt) if (fwrt->dump.desc != &iwl_dump_desc_assert) kfree(fwrt->dump.desc); fwrt->dump.desc = NULL; - fwrt->dump.trig = NULL; + fwrt->dump.rt_status = 0; } void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt); int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt, const struct iwl_fw_dump_desc *desc, - void *trigger, unsigned int delay); + bool monitor_only, unsigned int delay); +int _iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt, + enum iwl_fw_dbg_trigger trig, + const char *str, size_t len, + struct iwl_fw_dbg_trigger_tlv *trigger); int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt, - enum iwl_fw_dbg_trigger trig, - const char *str, size_t len, - struct iwl_fw_dbg_trigger_tlv *trigger); + u32 id, const char *str, size_t len); int iwl_fw_dbg_collect_trig(struct iwl_fw_runtime *fwrt, struct iwl_fw_dbg_trigger_tlv *trigger, const char *fmt, ...) 
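
Editorial sketch: _iwl_fw_dbg_apply_point() above is a classic bounded TLV walk, fixed header, 'length' payload bytes, repeat until the buffer ends, skipping unknown types rather than aborting. In isolation (host-order lengths, no 4-byte alignment, which the firmware-file parse path does apply):

    #include <stddef.h>
    #include <stdint.h>

    struct tlv {
        uint32_t type;
        uint32_t length;    /* payload bytes; __le32 in the driver */
        uint8_t  data[];
    };

    static void walk_tlvs(uint8_t *buf, size_t size,
                          void (*handle)(uint32_t type, uint8_t *data,
                                         uint32_t len))
    {
        uint8_t *iter = buf, *end = buf + size;

        while ((size_t)(end - iter) >= sizeof(struct tlv)) {
            struct tlv *t = (struct tlv *)iter;

            if (t->length > (size_t)(end - iter) - sizeof(*t))
                break;      /* truncated record, stop cleanly */
            handle(t->type, t->data, t->length);
            iter += sizeof(*t) + t->length;
        }
    }
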
__printf(3, 4); @@ -193,6 +196,9 @@ _iwl_fw_dbg_trigger_on(struct iwl_fw_runtime *fwrt, { struct iwl_fw_dbg_trigger_tlv *trig; + if (fwrt->trans->ini_valid) + return NULL; + if (!iwl_fw_dbg_trigger_enabled(fwrt->fw, id)) return NULL; @@ -210,6 +216,37 @@ _iwl_fw_dbg_trigger_on(struct iwl_fw_runtime *fwrt, _iwl_fw_dbg_trigger_on((fwrt), (wdev), (id)); \ }) +static inline bool +_iwl_fw_ini_trigger_on(struct iwl_fw_runtime *fwrt, + const enum iwl_fw_dbg_trigger id) +{ + struct iwl_fw_ini_active_triggers *trig = &fwrt->dump.active_trigs[id]; + u32 ms; + + if (!fwrt->trans->ini_valid) + return false; + + if (!trig || !trig->active) + return false; + + ms = le32_to_cpu(trig->conf->ignore_consec); + if (ms) + ms /= USEC_PER_MSEC; + + if (iwl_fw_dbg_no_trig_window(fwrt, id, ms)) { + IWL_WARN(fwrt, "Trigger %d fired in no-collect window\n", id); + return false; + } + + return true; +} + +#define iwl_fw_ini_trigger_on(fwrt, wdev, id) ({ \ + BUILD_BUG_ON(!__builtin_constant_p(id)); \ + BUILD_BUG_ON((id) >= IWL_FW_TRIGGER_ID_NUM); \ + _iwl_fw_ini_trigger_on((fwrt), (wdev), (id)); \ +}) + static inline void _iwl_fw_dbg_trigger_simple_stop(struct iwl_fw_runtime *fwrt, struct wireless_dev *wdev, @@ -263,6 +300,9 @@ _iwl_fw_dbg_stop_recording(struct iwl_trans *trans, iwl_write_prph(trans, DBGC_IN_SAMPLE, 0); udelay(100); iwl_write_prph(trans, DBGC_OUT_CTRL, 0); +#ifdef CONFIG_IWLWIFI_DEBUGFS + trans->dbg_rec_on = false; +#endif } static inline void @@ -293,6 +333,14 @@ _iwl_fw_dbg_restart_recording(struct iwl_trans *trans, } } +#ifdef CONFIG_IWLWIFI_DEBUGFS +static inline void iwl_fw_set_dbg_rec_on(struct iwl_fw_runtime *fwrt) +{ + if (fwrt->fw->dbg.dest_tlv && fwrt->cur_fw_img == IWL_UCODE_REGULAR) + fwrt->trans->dbg_rec_on = true; +} +#endif + static inline void iwl_fw_dbg_restart_recording(struct iwl_fw_runtime *fwrt, struct iwl_fw_dbg_params *params) @@ -301,6 +349,9 @@ iwl_fw_dbg_restart_recording(struct iwl_fw_runtime *fwrt, _iwl_fw_dbg_restart_recording(fwrt->trans, params); else iwl_fw_dbg_start_stop_hcmd(fwrt, true); +#ifdef CONFIG_IWLWIFI_DEBUGFS + iwl_fw_set_dbg_rec_on(fwrt); +#endif } static inline void iwl_fw_dump_conf_clear(struct iwl_fw_runtime *fwrt) @@ -310,12 +361,25 @@ static inline void iwl_fw_dump_conf_clear(struct iwl_fw_runtime *fwrt) void iwl_fw_error_dump_wk(struct work_struct *work); +static inline bool iwl_fw_dbg_type_on(struct iwl_fw_runtime *fwrt, u32 type) +{ + return (fwrt->fw->dbg.dump_mask & BIT(type) || fwrt->trans->ini_valid); +} + static inline bool iwl_fw_dbg_is_d3_debug_enabled(struct iwl_fw_runtime *fwrt) { return fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_D3_DEBUG) && fwrt->trans->cfg->d3_debug_data_length && - fwrt->fw->dbg.dump_mask & BIT(IWL_FW_ERROR_DUMP_D3_DEBUG_DATA); + iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_D3_DEBUG_DATA); +} + +static inline bool iwl_fw_dbg_is_paging_enabled(struct iwl_fw_runtime *fwrt) +{ + return iwl_fw_dbg_type_on(fwrt, IWL_FW_ERROR_DUMP_PAGING) && + !fwrt->trans->cfg->gen2 && + fwrt->fw->img[fwrt->cur_fw_img].paging_mem_size && + fwrt->fw_paging_db[0].fw_paging_block; } void iwl_fw_dbg_read_d3_debug_data(struct iwl_fw_runtime *fwrt); @@ -366,6 +430,10 @@ static inline void iwl_fw_resume_timestamp(struct iwl_fw_runtime *fwrt) {} #endif /* CONFIG_IWLWIFI_DEBUGFS */ +void iwl_fw_assert_error_dump(struct iwl_fw_runtime *fwrt); void iwl_fw_alive_error_dump(struct iwl_fw_runtime *fwrt); void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt); +void iwl_fw_dbg_apply_point(struct iwl_fw_runtime *fwrt, + enum 
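
Editorial note: the iwl_fw_dbg_type_on() helper introduced below collapses the repeated dump-mask tests seen earlier. Its logic in one line: a record type is collected when its bit is set in the legacy dump mask, or unconditionally once the INI configuration is active, since INI triggers carry their own region lists. A standalone equivalent:

    #include <stdbool.h>
    #include <stdint.h>

    static bool type_on(uint32_t dump_mask, unsigned int type, bool ini_valid)
    {
        return ini_valid || (dump_mask & (1u << type)) != 0;
    }
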
iwl_fw_ini_apply_point apply_point); + #endif /* __iwl_fw_dbg_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h index 6fede174c664..65faecf552cd 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h @@ -187,6 +187,8 @@ enum iwl_fw_error_dump_family { * @fw_human_readable: human readable FW version * @dev_human_readable: name of the device * @bus_human_readable: name of the bus used + * @rt_status: the error_id/rt_status that that triggered the latest dump + * if the dump collection was not initiated by an assert, the value is 0 */ struct iwl_fw_error_dump_info { __le32 device_family; @@ -194,6 +196,7 @@ struct iwl_fw_error_dump_info { u8 fw_human_readable[FW_VER_HUMAN_READABLE_SZ]; u8 dev_human_readable[64]; u8 bus_human_readable[8]; + __le32 rt_status; } __packed; /** @@ -249,6 +252,7 @@ struct iwl_fw_error_dump_prph { enum iwl_fw_error_dump_mem_type { IWL_FW_ERROR_DUMP_MEM_SRAM, IWL_FW_ERROR_DUMP_MEM_SMEM, + IWL_FW_ERROR_DUMP_MEM_NAMED_MEM = 10, }; /** @@ -263,6 +267,22 @@ struct iwl_fw_error_dump_mem { u8 data[]; }; +/** + * struct iwl_fw_error_dump_named_mem - chunk of memory + * @type: &enum iwl_fw_error_dump_mem_type + * @offset: the offset from which the memory was read + * @name_len: name length + * @name: file name + * @data: the content of the memory + */ +struct iwl_fw_error_dump_named_mem { + __le32 type; + __le32 offset; + u8 name_len; + u8 name[32]; + u8 data[]; +}; + /** * struct iwl_fw_error_dump_rb - content of an Receive Buffer * @index: the index of the Receive Buffer in the Rx queue diff --git a/drivers/net/wireless/intel/iwlwifi/fw/file.h b/drivers/net/wireless/intel/iwlwifi/fw/file.h index 6005a41c53d1..81f557c0b58d 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/file.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h @@ -91,6 +91,8 @@ struct iwl_ucode_header { } u; }; +#define IWL_UCODE_INI_TLV_GROUP BIT(24) + /* * new TLV uCode file layout * @@ -141,6 +143,11 @@ enum iwl_ucode_tlv_type { IWL_UCODE_TLV_FW_GSCAN_CAPA = 50, IWL_UCODE_TLV_FW_MEM_SEG = 51, IWL_UCODE_TLV_IML = 52, + IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION = IWL_UCODE_INI_TLV_GROUP | 0x1, + IWL_UCODE_TLV_TYPE_HCMD = IWL_UCODE_INI_TLV_GROUP | 0x2, + IWL_UCODE_TLV_TYPE_REGIONS = IWL_UCODE_INI_TLV_GROUP | 0x3, + IWL_UCODE_TLV_TYPE_TRIGGERS = IWL_UCODE_INI_TLV_GROUP | 0x4, + IWL_UCODE_TLV_TYPE_DEBUG_FLOW = IWL_UCODE_INI_TLV_GROUP | 0x5, /* TLVs 0x1000-0x2000 are for internal driver usage */ IWL_UCODE_TLV_FW_DBG_DUMP_LST = 0x1000, diff --git a/drivers/net/wireless/intel/iwlwifi/fw/img.h b/drivers/net/wireless/intel/iwlwifi/fw/img.h index 54dbbd998abf..12333167ea23 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/img.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/img.h @@ -65,6 +65,8 @@ #define __iwl_fw_img_h__ #include +#include "api/dbg-tlv.h" + #include "file.h" #include "error-dump.h" @@ -220,6 +222,30 @@ struct iwl_fw_dbg { u32 dump_mask; }; +/** + * struct iwl_fw_ini_active_triggers + * @active: is this trigger active + * @apply_point: last apply point that updated this trigger + * @conf: active trigger + * @conf_ext: second trigger, contains extra regions to dump + */ +struct iwl_fw_ini_active_triggers { + bool active; + enum iwl_fw_ini_apply_point apply_point; + struct iwl_fw_ini_trigger *conf; + struct iwl_fw_ini_trigger *conf_ext; +}; + +/** + * struct iwl_fw_ini_active_regs + * @reg: active region from TLV + * @apply_point: apply point where it became active + 
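
Editorial sketch: the file.h hunk above splits the TLV id space with IWL_UCODE_INI_TLV_GROUP = BIT(24): legacy ids stay below bit 24, INI ids above it, so one mask test routes a record to the INI parser without any table lookup. A standalone mirror of that scheme:

    #include <stdbool.h>
    #include <stdint.h>

    #define INI_TLV_GROUP (1u << 24)

    enum {
        TLV_BUFFER_ALLOCATION = INI_TLV_GROUP | 0x1,
        TLV_HCMD              = INI_TLV_GROUP | 0x2,
        TLV_REGIONS           = INI_TLV_GROUP | 0x3,
        TLV_TRIGGERS          = INI_TLV_GROUP | 0x4,
        TLV_DEBUG_FLOW        = INI_TLV_GROUP | 0x5,
    };

    static bool is_ini_tlv(uint32_t type)
    {
        return (type & INI_TLV_GROUP) != 0;
    }
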
*/ +struct iwl_fw_ini_active_regs { + struct iwl_fw_ini_region_cfg *reg; + enum iwl_fw_ini_apply_point apply_point; +}; + /** * struct iwl_fw - variables associated with the firmware * diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h index 2b8b50a77990..4f7090f88cb0 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h @@ -64,6 +64,7 @@ #include "iwl-trans.h" #include "img.h" #include "fw/api/debug.h" +#include "fw/api/dbg-tlv.h" #include "fw/api/paging.h" #include "iwl-eeprom-parse.h" @@ -131,14 +132,17 @@ struct iwl_fw_runtime { /* debug */ struct { const struct iwl_fw_dump_desc *desc; - const struct iwl_fw_dbg_trigger_tlv *trig; + bool monitor_only; struct delayed_work wk; u8 conf; /* ts of the beginning of a non-collect fw dbg data period */ - unsigned long non_collect_ts_start[FW_DBG_TRIGGER_MAX - 1]; + unsigned long non_collect_ts_start[IWL_FW_TRIGGER_ID_NUM - 1]; u32 *d3_debug_data; + struct iwl_fw_ini_active_regs active_regs[IWL_FW_INI_MAX_REGION_ID]; + struct iwl_fw_ini_active_triggers active_trigs[IWL_FW_TRIGGER_ID_NUM]; + u32 rt_status; } dump; #ifdef CONFIG_IWLWIFI_DEBUGFS struct { diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h index 5eb906a0d0d2..91861a9cbe57 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h @@ -265,11 +265,9 @@ struct iwl_tt_params { #define EEPROM_REGULATORY_BAND_NO_HT40 0 /* lower blocks contain EEPROM image and calibration data */ -#define OTP_LOW_IMAGE_SIZE (2 * 512 * sizeof(u16)) /* 2 KB */ -#define OTP_LOW_IMAGE_SIZE_FAMILY_7000 (16 * 512 * sizeof(u16)) /* 16 KB */ -#define OTP_LOW_IMAGE_SIZE_FAMILY_8000 (32 * 512 * sizeof(u16)) /* 32 KB */ -#define OTP_LOW_IMAGE_SIZE_FAMILY_9000 OTP_LOW_IMAGE_SIZE_FAMILY_8000 -#define OTP_LOW_IMAGE_SIZE_FAMILY_22000 OTP_LOW_IMAGE_SIZE_FAMILY_9000 +#define OTP_LOW_IMAGE_SIZE_2K (2 * 512 * sizeof(u16)) /* 2 KB */ +#define OTP_LOW_IMAGE_SIZE_16K (16 * 512 * sizeof(u16)) /* 16 KB */ +#define OTP_LOW_IMAGE_SIZE_32K (32 * 512 * sizeof(u16)) /* 32 KB */ struct iwl_eeprom_params { const u8 regulatory_bands[7]; diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c new file mode 100644 index 000000000000..43d815cb3ce9 --- /dev/null +++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c @@ -0,0 +1,231 @@ +/****************************************************************************** + * + * This file is provided under a dual BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright (C) 2018 Intel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. + * + * The full GNU General Public License is included in this distribution + * in the file called COPYING. 
+ * + * Contact Information: + * Intel Linux Wireless + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright (C) 2018 Intel Corporation + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + *****************************************************************************/ + +#include +#include "iwl-trans.h" +#include "iwl-dbg-tlv.h" + +void iwl_fw_dbg_copy_tlv(struct iwl_trans *trans, struct iwl_ucode_tlv *tlv, + bool ext) +{ + struct iwl_apply_point_data *data; + struct iwl_fw_ini_header *header = (void *)&tlv->data[0]; + u32 apply_point = le32_to_cpu(header->apply_point); + + int copy_size = le32_to_cpu(tlv->length) + sizeof(*tlv); + + if (WARN_ONCE(apply_point >= IWL_FW_INI_APPLY_NUM, + "Invalid apply point id %d\n", apply_point)) + return; + + if (ext) + data = &trans->apply_points_ext[apply_point]; + else + data = &trans->apply_points[apply_point]; + + /* + * Make sure we still have room to copy this TLV. Offset points to the + * location the last copy ended. 
+ */ + if (WARN_ONCE(data->offset + copy_size > data->size, + "Not enough memory for apply point %d\n", + apply_point)) + return; + + memcpy(data->data + data->offset, (void *)tlv, copy_size); + data->offset += copy_size; +} + +void iwl_alloc_dbg_tlv(struct iwl_trans *trans, size_t len, const u8 *data, + bool ext) +{ + struct iwl_ucode_tlv *tlv; + u32 size[IWL_FW_INI_APPLY_NUM] = {0}; + int i; + + while (len >= sizeof(*tlv)) { + u32 tlv_len, tlv_type, apply; + struct iwl_fw_ini_header *hdr; + + len -= sizeof(*tlv); + tlv = (void *)data; + + tlv_len = le32_to_cpu(tlv->length); + tlv_type = le32_to_cpu(tlv->type); + + if (len < tlv_len) + return; + + len -= ALIGN(tlv_len, 4); + data += sizeof(*tlv) + ALIGN(tlv_len, 4); + + if (!(tlv_type & IWL_UCODE_INI_TLV_GROUP)) + continue; + + hdr = (void *)&tlv->data[0]; + apply = le32_to_cpu(hdr->apply_point); + + IWL_DEBUG_FW(trans, "Read TLV %x, apply point %d\n", + le32_to_cpu(tlv->type), apply); + + if (WARN_ON(apply >= IWL_FW_INI_APPLY_NUM)) + continue; + + size[apply] += sizeof(*tlv) + tlv_len; + } + + for (i = 0; i < ARRAY_SIZE(size); i++) { + void *mem; + + if (!size[i]) + continue; + + mem = kzalloc(size[i], GFP_KERNEL); + + if (!mem) { + IWL_ERR(trans, "No memory for apply point %d\n", i); + return; + } + + if (ext) { + trans->apply_points_ext[i].data = mem; + trans->apply_points_ext[i].size = size[i]; + } else { + trans->apply_points[i].data = mem; + trans->apply_points[i].size = size[i]; + } + + trans->ini_valid = true; + } +} + +void iwl_fw_dbg_free(struct iwl_trans *trans) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(trans->apply_points); i++) { + kfree(trans->apply_points[i].data); + trans->apply_points[i].size = 0; + trans->apply_points[i].offset = 0; + + kfree(trans->apply_points_ext[i].data); + trans->apply_points_ext[i].size = 0; + trans->apply_points_ext[i].offset = 0; + } +} + +static int iwl_parse_fw_dbg_tlv(struct iwl_trans *trans, const u8 *data, + size_t len) +{ + struct iwl_ucode_tlv *tlv; + enum iwl_ucode_tlv_type tlv_type; + u32 tlv_len; + + while (len >= sizeof(*tlv)) { + len -= sizeof(*tlv); + tlv = (void *)data; + + tlv_len = le32_to_cpu(tlv->length); + tlv_type = le32_to_cpu(tlv->type); + + if (len < tlv_len) { + IWL_ERR(trans, "invalid TLV len: %zd/%u\n", + len, tlv_len); + return -EINVAL; + } + len -= ALIGN(tlv_len, 4); + data += sizeof(*tlv) + ALIGN(tlv_len, 4); + + switch (tlv_type) { + case IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION: + case IWL_UCODE_TLV_TYPE_HCMD: + case IWL_UCODE_TLV_TYPE_REGIONS: + case IWL_UCODE_TLV_TYPE_TRIGGERS: + case IWL_UCODE_TLV_TYPE_DEBUG_FLOW: + iwl_fw_dbg_copy_tlv(trans, tlv, true); + break; + default: + WARN_ONCE(1, "Invalid TLV %x\n", tlv_type); + break; + } + } + + return 0; +} + +void iwl_load_fw_dbg_tlv(struct device *dev, struct iwl_trans *trans) +{ + const struct firmware *fw; + int res; + + if (trans->external_ini_loaded || !iwlwifi_mod_params.enable_ini) + return; + + res = request_firmware(&fw, "iwl-dbg-tlv.ini", dev); + if (res) + return; + + iwl_alloc_dbg_tlv(trans, fw->size, fw->data, true); + iwl_parse_fw_dbg_tlv(trans, fw->data, fw->size); + + trans->external_ini_loaded = true; + release_firmware(fw); +} diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h new file mode 100644 index 000000000000..222cd789e07a --- /dev/null +++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h @@ -0,0 +1,87 @@ +/****************************************************************************** + * + * This file is provided under a dual 
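
Editorial sketch: iwl_alloc_dbg_tlv() and iwl_fw_dbg_copy_tlv() above implement a two-pass staging scheme: a first pass over the image sums the TLV sizes per apply point, the buffers are allocated, then a second pass copies each TLV in with a bounds check on the running offset. The staging side in isolation, shaped like struct iwl_apply_point_data:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct staging {
        uint8_t *data;
        size_t   size;
        size_t   offset;    /* where the last copy ended */
    };

    static int stage_tlv(struct staging *s, const void *tlv, size_t len)
    {
        if (len > s->size - s->offset)
            return -1;      /* sizing pass reserved too little; refuse */
        memcpy(s->data + s->offset, tlv, len);
        s->offset += len;
        return 0;
    }
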
BSD/GPLv2 license. When using or + * redistributing this file, you may do so under either license. + * + * GPL LICENSE SUMMARY + * + * Copyright (C) 2018 Intel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. + * + * The full GNU General Public License is included in this distribution + * in the file called COPYING. + * + * Contact Information: + * Intel Linux Wireless + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * BSD LICENSE + * + * Copyright (C) 2018 Intel Corporation + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ * + *****************************************************************************/ +#ifndef __iwl_dbg_tlv_h__ +#define __iwl_dbg_tlv_h__ + +#include +#include + +/** + * struct iwl_apply_point_data + * @data: start address of this apply point data + * @size total size of the data + * @offset: current offset of the copied data + */ +struct iwl_apply_point_data { + void *data; + int size; + int offset; +}; + +struct iwl_trans; +void iwl_load_fw_dbg_tlv(struct device *dev, struct iwl_trans *trans); +void iwl_fw_dbg_free(struct iwl_trans *trans); +void iwl_fw_dbg_copy_tlv(struct iwl_trans *trans, struct iwl_ucode_tlv *tlv, + bool ext); +void iwl_alloc_dbg_tlv(struct iwl_trans *trans, size_t len, const u8 *data, + bool ext); + +#endif /* __iwl_dbg_tlv_h__*/ diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c index ba41d23b4211..bf1be985f36b 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c @@ -72,6 +72,7 @@ #include "iwl-op-mode.h" #include "iwl-agn-hw.h" #include "fw/img.h" +#include "iwl-dbg-tlv.h" #include "iwl-config.h" #include "iwl-modparams.h" @@ -645,6 +646,9 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv, len -= sizeof(*ucode); + if (iwlwifi_mod_params.enable_ini) + iwl_alloc_dbg_tlv(drv->trans, len, data, false); + while (len >= sizeof(*tlv)) { len -= sizeof(*tlv); tlv = (void *)data; @@ -1086,6 +1090,14 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv, return -ENOMEM; break; } + case IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION: + case IWL_UCODE_TLV_TYPE_HCMD: + case IWL_UCODE_TLV_TYPE_REGIONS: + case IWL_UCODE_TLV_TYPE_TRIGGERS: + case IWL_UCODE_TLV_TYPE_DEBUG_FLOW: + if (iwlwifi_mod_params.enable_ini) + iwl_fw_dbg_copy_tlv(drv->trans, tlv, false); + break; default: IWL_DEBUG_INFO(drv, "unknown TLV: %d\n", tlv_type); break; @@ -1565,7 +1577,7 @@ struct iwl_drv *iwl_drv_start(struct iwl_trans *trans) if (!drv->dbgfs_drv) { IWL_ERR(drv, "failed to create debugfs directory\n"); ret = -ENOMEM; - goto err_free_drv; + goto err_free_tlv; } /* Create transport layer debugfs dir */ @@ -1590,7 +1602,8 @@ err_fw: #ifdef CONFIG_IWLWIFI_DEBUGFS err_free_dbgfs: debugfs_remove_recursive(drv->dbgfs_drv); -err_free_drv: +err_free_tlv: + iwl_fw_dbg_free(drv->trans); #endif kfree(drv); err: @@ -1616,9 +1629,13 @@ void iwl_drv_stop(struct iwl_drv *drv) mutex_unlock(&iwlwifi_opmode_table_mtx); #ifdef CONFIG_IWLWIFI_DEBUGFS + drv->trans->ops->debugfs_cleanup(drv->trans); + debugfs_remove_recursive(drv->dbgfs_drv); #endif + iwl_fw_dbg_free(drv->trans); + kfree(drv); } @@ -1749,6 +1766,10 @@ MODULE_PARM_DESC(lar_disable, "disable LAR functionality (default: N)"); module_param_named(uapsd_disable, iwlwifi_mod_params.uapsd_disable, uint, 0644); MODULE_PARM_DESC(uapsd_disable, "disable U-APSD functionality bitmap 1: BSS 2: P2P Client (default: 3)"); +module_param_named(enable_ini, iwlwifi_mod_params.enable_ini, + bool, S_IRUGO | S_IWUSR); +MODULE_PARM_DESC(enable_ini, + "Enable debug INI TLV FW debug infrastructure (default: 0"); /* * set bt_coex_active to true, uCode will do kill/defer diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c index 4e3422a1c7bb..75940ac406b9 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c @@ -927,22 +927,3 @@ iwl_parse_eeprom_data(struct device *dev, const struct iwl_cfg *cfg, return NULL; } 
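
Editorial note: the iwl-drv.c hunk below repoints the debugfs failure path from err_free_drv to err_free_tlv because TLV staging buffers are now allocated before the debugfs directory, and goto-label cleanup must unwind in reverse order of setup. A shape sketch of that idiom (malloc() stands in for the real allocations, purely illustrative):

    #include <stdlib.h>

    static int setup_driver(void)
    {
        void *tlv_bufs, *dbgfs_dir;

        tlv_bufs = malloc(256);         /* stands in for iwl_alloc_dbg_tlv() */
        if (!tlv_bufs)
            goto err;

        dbgfs_dir = malloc(256);        /* stands in for debugfs creation */
        if (!dbgfs_dir)
            goto err_free_tlv;

        free(dbgfs_dir);                /* a real driver keeps both; freed */
        free(tlv_bufs);                 /* here only to keep the sketch clean */
        return 0;

    err_free_tlv:
        free(tlv_bufs);                 /* mirrors iwl_fw_dbg_free() */
    err:
        return -1;
    }
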
IWL_EXPORT_SYMBOL(iwl_parse_eeprom_data); - -/* helper functions */ -int iwl_nvm_check_version(struct iwl_nvm_data *data, - struct iwl_trans *trans) -{ - if (data->nvm_version >= trans->cfg->nvm_ver || - data->calib_version >= trans->cfg->nvm_calib_ver) { - IWL_DEBUG_INFO(trans, "device EEPROM VER=0x%x, CALIB=0x%x\n", - data->nvm_version, data->calib_version); - return 0; - } - - IWL_ERR(trans, - "Unsupported (too old) EEPROM VER=0x%x < 0x%x CALIB=0x%x < 0x%x\n", - data->nvm_version, trans->cfg->nvm_ver, - data->calib_version, trans->cfg->nvm_calib_ver); - return -EINVAL; -} -IWL_EXPORT_SYMBOL(iwl_nvm_check_version); diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h index d910bda087f7..2375d300a7cd 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h @@ -7,6 +7,7 @@ * * Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2015 Intel Mobile Communications GmbH + * Copyright (C) 2018 Intel Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of version 2 of the GNU General Public License as @@ -28,6 +29,7 @@ * * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2015 Intel Mobile Communications GmbH + * Copyright (C) 2018 Intel Corporation * All rights reserved. * * Redistribution and use in source and binary forms, with or without @@ -117,9 +119,6 @@ struct iwl_nvm_data * iwl_parse_eeprom_data(struct device *dev, const struct iwl_cfg *cfg, const u8 *eeprom, size_t eeprom_size); -int iwl_nvm_check_version(struct iwl_nvm_data *data, - struct iwl_trans *trans); - int iwl_init_sband_channels(struct iwl_nvm_data *data, struct ieee80211_supported_band *sband, int n_channels, enum nl80211_band band); diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h b/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h index 6fc8dac4aab7..73b1c46f1158 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-modparams.h @@ -122,6 +122,7 @@ enum iwl_uapsd_disable { * @fw_monitor: allow to use firmware monitor * @disable_11ac: disable VHT capabilities, default = false. * @remove_when_gone: remove an inaccessible device from the PCIe bus. 
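
Editorial note: the removed iwl_nvm_check_version() helper accepted an image when either the NVM version or the calibration version met the minimum; the mvm driver replaces it with an open-coded, warn-only NVM check (see the fw.c hunk later in this patch). The removed predicate, reduced to its logic:

    #include <stdbool.h>
    #include <stdint.h>

    static bool nvm_version_ok(uint16_t nvm_ver, uint16_t required_nvm,
                               uint16_t calib_ver, uint16_t required_calib)
    {
        return nvm_ver >= required_nvm || calib_ver >= required_calib;
    }
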
+ * @enable_ini: enable new FW debug infratructure (INI TLVs) */ struct iwl_mod_params { int swcrypto; @@ -148,6 +149,7 @@ struct iwl_mod_params { */ bool disable_11ax; bool remove_when_gone; + bool enable_ini; }; #endif /* #__iwl_modparams_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c index 96e101d79662..d9afedc3d1d9 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c +++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c @@ -465,101 +465,185 @@ static void iwl_init_vht_hw_capab(const struct iwl_cfg *cfg, vht_cap->vht_mcs.tx_mcs_map = vht_cap->vht_mcs.rx_mcs_map; } -static struct ieee80211_sband_iftype_data iwl_he_capa = { - .types_mask = BIT(NL80211_IFTYPE_STATION) | BIT(NL80211_IFTYPE_AP), - .he_cap = { - .has_he = true, - .he_cap_elem = { - .mac_cap_info[0] = - IEEE80211_HE_MAC_CAP0_HTC_HE | - IEEE80211_HE_MAC_CAP0_TWT_REQ, - .mac_cap_info[1] = - IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US | - IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8, - .mac_cap_info[2] = - IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP | - IEEE80211_HE_MAC_CAP2_MU_CASCADING | - IEEE80211_HE_MAC_CAP2_ACK_EN, - .mac_cap_info[3] = - IEEE80211_HE_MAC_CAP3_OMI_CONTROL | - IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_VHT_2, - .mac_cap_info[4] = - IEEE80211_HE_MAC_CAP4_AMDSU_IN_AMPDU | - IEEE80211_HE_MAC_CAP4_MULTI_TID_AGG_TX_QOS_B39, - .mac_cap_info[5] = - IEEE80211_HE_MAC_CAP5_MULTI_TID_AGG_TX_QOS_B40 | - IEEE80211_HE_MAC_CAP5_MULTI_TID_AGG_TX_QOS_B41 | - IEEE80211_HE_MAC_CAP5_UL_2x996_TONE_RU, - .phy_cap_info[0] = - IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G | - IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G | - IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G, - .phy_cap_info[1] = - IEEE80211_HE_PHY_CAP1_PREAMBLE_PUNC_RX_MASK | - IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A | - IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD | - IEEE80211_HE_PHY_CAP1_MIDAMBLE_RX_TX_MAX_NSTS, - .phy_cap_info[2] = - IEEE80211_HE_PHY_CAP2_NDP_4x_LTF_AND_3_2US | - IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ | - IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ | - IEEE80211_HE_PHY_CAP2_UL_MU_FULL_MU_MIMO | - IEEE80211_HE_PHY_CAP2_UL_MU_PARTIAL_MU_MIMO, - .phy_cap_info[3] = - IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_BPSK | - IEEE80211_HE_PHY_CAP3_DCM_MAX_TX_NSS_1 | - IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_BPSK | - IEEE80211_HE_PHY_CAP3_DCM_MAX_RX_NSS_1, - .phy_cap_info[4] = - IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE | - IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_8 | - IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_8, - .phy_cap_info[5] = - IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_2 | - IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_ABOVE_80MHZ_2 | - IEEE80211_HE_PHY_CAP5_NG16_SU_FEEDBACK | - IEEE80211_HE_PHY_CAP5_NG16_MU_FEEDBACK, - .phy_cap_info[6] = - IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_42_SU | - IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_75_MU | - IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMER_FB | - IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMER_FB | - IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB | - IEEE80211_HE_PHY_CAP6_PARTIAL_BANDWIDTH_DL_MUMIMO | - IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT, - .phy_cap_info[7] = - IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_AR | - IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI | - IEEE80211_HE_PHY_CAP7_MAX_NC_1, - .phy_cap_info[8] = - IEEE80211_HE_PHY_CAP8_HE_ER_SU_PPDU_4XLTF_AND_08_US_GI | - IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G | - IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU | - 
IEEE80211_HE_PHY_CAP8_80MHZ_IN_160MHZ_HE_PPDU | - IEEE80211_HE_PHY_CAP8_DCM_MAX_BW_160_OR_80P80_MHZ, - .phy_cap_info[9] = - IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK | - IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB | - IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB, +static struct ieee80211_sband_iftype_data iwl_he_capa[] = { + { + .types_mask = BIT(NL80211_IFTYPE_STATION), + .he_cap = { + .has_he = true, + .he_cap_elem = { + .mac_cap_info[0] = + IEEE80211_HE_MAC_CAP0_HTC_HE | + IEEE80211_HE_MAC_CAP0_TWT_REQ, + .mac_cap_info[1] = + IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US | + IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8, + .mac_cap_info[2] = + IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP | + IEEE80211_HE_MAC_CAP2_MU_CASCADING | + IEEE80211_HE_MAC_CAP2_ACK_EN, + .mac_cap_info[3] = + IEEE80211_HE_MAC_CAP3_OMI_CONTROL | + IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_VHT_2, + .mac_cap_info[4] = + IEEE80211_HE_MAC_CAP4_AMDSU_IN_AMPDU | + IEEE80211_HE_MAC_CAP4_MULTI_TID_AGG_TX_QOS_B39, + .mac_cap_info[5] = + IEEE80211_HE_MAC_CAP5_MULTI_TID_AGG_TX_QOS_B40 | + IEEE80211_HE_MAC_CAP5_MULTI_TID_AGG_TX_QOS_B41 | + IEEE80211_HE_MAC_CAP5_UL_2x996_TONE_RU, + .phy_cap_info[0] = + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G | + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G | + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G, + .phy_cap_info[1] = + IEEE80211_HE_PHY_CAP1_PREAMBLE_PUNC_RX_MASK | + IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A | + IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD | + IEEE80211_HE_PHY_CAP1_MIDAMBLE_RX_TX_MAX_NSTS, + .phy_cap_info[2] = + IEEE80211_HE_PHY_CAP2_NDP_4x_LTF_AND_3_2US | + IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ | + IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ | + IEEE80211_HE_PHY_CAP2_UL_MU_FULL_MU_MIMO | + IEEE80211_HE_PHY_CAP2_UL_MU_PARTIAL_MU_MIMO, + .phy_cap_info[3] = + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_BPSK | + IEEE80211_HE_PHY_CAP3_DCM_MAX_TX_NSS_1 | + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_BPSK | + IEEE80211_HE_PHY_CAP3_DCM_MAX_RX_NSS_1, + .phy_cap_info[4] = + IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE | + IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_8 | + IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_8, + .phy_cap_info[5] = + IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_2 | + IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_ABOVE_80MHZ_2 | + IEEE80211_HE_PHY_CAP5_NG16_SU_FEEDBACK | + IEEE80211_HE_PHY_CAP5_NG16_MU_FEEDBACK, + .phy_cap_info[6] = + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_42_SU | + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_75_MU | + IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMER_FB | + IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMER_FB | + IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB | + IEEE80211_HE_PHY_CAP6_PARTIAL_BANDWIDTH_DL_MUMIMO | + IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT, + .phy_cap_info[7] = + IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_AR | + IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI | + IEEE80211_HE_PHY_CAP7_MAX_NC_1, + .phy_cap_info[8] = + IEEE80211_HE_PHY_CAP8_HE_ER_SU_PPDU_4XLTF_AND_08_US_GI | + IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G | + IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU | + IEEE80211_HE_PHY_CAP8_80MHZ_IN_160MHZ_HE_PPDU | + IEEE80211_HE_PHY_CAP8_DCM_MAX_BW_160_OR_80P80_MHZ, + .phy_cap_info[9] = + IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK | + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB | + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB, + }, + /* + * Set default Tx/Rx HE MCS NSS Support field. 
+ * Indicate support for up to 2 spatial streams and all + * MCS, without any special cases + */ + .he_mcs_nss_supp = { + .rx_mcs_80 = cpu_to_le16(0xfffa), + .tx_mcs_80 = cpu_to_le16(0xfffa), + .rx_mcs_160 = cpu_to_le16(0xfffa), + .tx_mcs_160 = cpu_to_le16(0xfffa), + .rx_mcs_80p80 = cpu_to_le16(0xffff), + .tx_mcs_80p80 = cpu_to_le16(0xffff), + }, + /* + * Set default PPE thresholds, with PPET16 set to 0, + * PPET8 set to 7 + */ + .ppe_thres = {0x61, 0x1c, 0xc7, 0x71}, }, - /* - * Set default Tx/Rx HE MCS NSS Support field. Indicate support - * for up to 2 spatial streams and all MCS, without any special - * cases - */ - .he_mcs_nss_supp = { - .rx_mcs_80 = cpu_to_le16(0xfffa), - .tx_mcs_80 = cpu_to_le16(0xfffa), - .rx_mcs_160 = cpu_to_le16(0xfffa), - .tx_mcs_160 = cpu_to_le16(0xfffa), - .rx_mcs_80p80 = cpu_to_le16(0xffff), - .tx_mcs_80p80 = cpu_to_le16(0xffff), + }, + { + .types_mask = BIT(NL80211_IFTYPE_AP), + .he_cap = { + .has_he = true, + .he_cap_elem = { + .mac_cap_info[0] = + IEEE80211_HE_MAC_CAP0_HTC_HE | + IEEE80211_HE_MAC_CAP0_TWT_RES, + .mac_cap_info[1] = + IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US | + IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8, + .mac_cap_info[2] = + IEEE80211_HE_MAC_CAP2_BSR | + IEEE80211_HE_MAC_CAP2_MU_CASCADING | + IEEE80211_HE_MAC_CAP2_ACK_EN, + .mac_cap_info[3] = + IEEE80211_HE_MAC_CAP3_OMI_CONTROL | + IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_VHT_2, + .mac_cap_info[4] = + IEEE80211_HE_MAC_CAP4_AMDSU_IN_AMPDU, + .phy_cap_info[0] = + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G | + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G | + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G, + .phy_cap_info[1] = + IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD | + IEEE80211_HE_PHY_CAP1_MIDAMBLE_RX_TX_MAX_NSTS, + .phy_cap_info[2] = + IEEE80211_HE_PHY_CAP2_NDP_4x_LTF_AND_3_2US | + IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ | + IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ, + .phy_cap_info[3] = + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_BPSK | + IEEE80211_HE_PHY_CAP3_DCM_MAX_TX_NSS_1 | + IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_BPSK | + IEEE80211_HE_PHY_CAP3_DCM_MAX_RX_NSS_1, + .phy_cap_info[4] = + IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE | + IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_8 | + IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_8, + .phy_cap_info[5] = + IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_2 | + IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_ABOVE_80MHZ_2 | + IEEE80211_HE_PHY_CAP5_NG16_SU_FEEDBACK | + IEEE80211_HE_PHY_CAP5_NG16_MU_FEEDBACK, + .phy_cap_info[6] = + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_42_SU | + IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_75_MU | + IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT, + .phy_cap_info[7] = + IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI | + IEEE80211_HE_PHY_CAP7_MAX_NC_1, + .phy_cap_info[8] = + IEEE80211_HE_PHY_CAP8_HE_ER_SU_PPDU_4XLTF_AND_08_US_GI | + IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G | + IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU | + IEEE80211_HE_PHY_CAP8_80MHZ_IN_160MHZ_HE_PPDU | + IEEE80211_HE_PHY_CAP8_DCM_MAX_BW_160_OR_80P80_MHZ, + .phy_cap_info[9] = + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB | + IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB, + }, + /* + * Set default Tx/Rx HE MCS NSS Support field. 
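
Editorial sketch: the he_mcs_nss_supp value 0xfffa used in both iftype entries packs eight 2-bit fields, one per spatial stream (0 = MCS 0-7, 1 = MCS 0-9, 2 = MCS 0-11, 3 = unsupported per the 802.11ax encoding), so 0xfffa advertises two streams at MCS 0-11 and marks streams 3-8 unsupported. A small decoder demonstrates the layout:

    #include <stdint.h>
    #include <stdio.h>

    /* nss is 1-based (1..8), matching the spec's NSS numbering. */
    static unsigned int mcs_for_nss(uint16_t map, unsigned int nss)
    {
        return (map >> ((nss - 1) * 2)) & 0x3;
    }

    int main(void)
    {
        for (unsigned int nss = 1; nss <= 8; nss++)
            printf("NSS %u: field %u\n", nss, mcs_for_nss(0xfffa, nss));
        return 0;   /* prints 2, 2, then 3 for the remaining streams */
    }
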
+ * Indicate support for up to 2 spatial streams and all + * MCS, without any special cases + */ + .he_mcs_nss_supp = { + .rx_mcs_80 = cpu_to_le16(0xfffa), + .tx_mcs_80 = cpu_to_le16(0xfffa), + .rx_mcs_160 = cpu_to_le16(0xfffa), + .tx_mcs_160 = cpu_to_le16(0xfffa), + .rx_mcs_80p80 = cpu_to_le16(0xffff), + .tx_mcs_80p80 = cpu_to_le16(0xffff), + }, + /* + * Set default PPE thresholds, with PPET16 set to 0, + * PPET8 set to 7 + */ + .ppe_thres = {0x61, 0x1c, 0xc7, 0x71}, }, - /* - * Set default PPE thresholds, with PPET16 set to 0, PPET8 set - * to 7 - */ - .ppe_thres = {0x61, 0x1c, 0xc7, 0x71}, }, }; @@ -568,20 +652,24 @@ static void iwl_init_he_hw_capab(struct ieee80211_supported_band *sband, { if (sband->band == NL80211_BAND_2GHZ || sband->band == NL80211_BAND_5GHZ) - sband->iftype_data = &iwl_he_capa; + sband->iftype_data = iwl_he_capa; else return; - sband->n_iftype_data = 1; + sband->n_iftype_data = ARRAY_SIZE(iwl_he_capa); /* If not 2x2, we need to indicate 1x1 in the Midamble RX Max NSTS */ if ((tx_chains & rx_chains) != ANT_AB) { - iwl_he_capa.he_cap.he_cap_elem.phy_cap_info[1] &= - ~IEEE80211_HE_PHY_CAP1_MIDAMBLE_RX_TX_MAX_NSTS; - iwl_he_capa.he_cap.he_cap_elem.phy_cap_info[2] &= - ~IEEE80211_HE_PHY_CAP2_MIDAMBLE_RX_TX_MAX_NSTS; - iwl_he_capa.he_cap.he_cap_elem.phy_cap_info[7] &= - ~IEEE80211_HE_PHY_CAP7_MAX_NC_MASK; + int i; + + for (i = 0; i < sband->n_iftype_data; i++) { + iwl_he_capa[i].he_cap.he_cap_elem.phy_cap_info[1] &= + ~IEEE80211_HE_PHY_CAP1_MIDAMBLE_RX_TX_MAX_NSTS; + iwl_he_capa[i].he_cap.he_cap_elem.phy_cap_info[2] &= + ~IEEE80211_HE_PHY_CAP2_MIDAMBLE_RX_TX_MAX_NSTS; + iwl_he_capa[i].he_cap.he_cap_elem.phy_cap_info[7] &= + ~IEEE80211_HE_PHY_CAP7_MAX_NC_MASK; + } } } diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h index 0f51c7bea8d0..9d89b7d7f9fa 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h @@ -8,6 +8,7 @@ * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH * Copyright(c) 2016 Intel Deutschland GmbH + * Copyright (C) 2018 Intel Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of version 2 of the GNU General Public License as @@ -30,6 +31,7 @@ * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH * Copyright(c) 2016 Intel Deutschland GmbH + * Copyright (C) 2018 Intel Corporation * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without @@ -360,6 +362,12 @@ #define MON_BUFF_END_ADDR (0xa03c40) #define MON_BUFF_WRPTR (0xa03c44) #define MON_BUFF_CYCLE_CNT (0xa03c48) +/* FW monitor family 8000 and on */ +#define MON_BUFF_BASE_ADDR_VER2 (0xa03c3c) +#define MON_BUFF_END_ADDR_VER2 (0xa03c20) +#define MON_BUFF_WRPTR_VER2 (0xa03c24) +#define MON_BUFF_CYCLE_CNT_VER2 (0xa03c28) +#define MON_BUFF_SHIFT_VER2 (0x8) #define MON_DMARB_RD_CTL_ADDR (0xa03c60) #define MON_DMARB_RD_DATA_ADDR (0xa03c5c) @@ -394,6 +402,7 @@ enum aux_misc_master1_en { #define AUX_MISC_MASTER1_SMPHR_STATUS 0xA20800 #define RSA_ENABLE 0xA24B08 #define PREG_AUX_BUS_WPROT_0 0xA04CC0 +#define PREG_PRPH_WPROT_0 0xA04CE0 #define SB_CPU_1_STATUS 0xA01E30 #define SB_CPU_2_STATUS 0xA01E34 #define UMAG_SB_CPU_1_STATUS 0xA038C0 @@ -420,4 +429,8 @@ enum { #define UREG_CHICK (0xA05C00) #define UREG_CHICK_MSI_ENABLE BIT(24) #define UREG_CHICK_MSIX_ENABLE BIT(25) + +#define HPM_DEBUG 0xA03440 +#define PERSISTENCE_BIT BIT(12) +#define PREG_WFPM_ACCESS BIT(12) #endif /* __iwl_prph_h__ */ diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h index 26b3c73051ca..a7009cd4232d 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h @@ -73,6 +73,8 @@ #include "iwl-op-mode.h" #include "fw/api/cmdhdr.h" #include "fw/api/txq.h" +#include "fw/api/dbg-tlv.h" +#include "iwl-dbg-tlv.h" /** * DOC: Transport layer - what is it ? @@ -534,6 +536,8 @@ struct iwl_trans_rxq_dma_data { * @dump_data: return a vmalloc'ed buffer with debug data, maybe containing last * TX'ed commands and similar. The buffer will be vfree'd by the caller. * Note that the transport must fill in the proper file headers. + * @debugfs_cleanup: used in the driver unload flow to make a proper cleanup + * of the trans debugfs */ struct iwl_trans_ops { @@ -602,8 +606,8 @@ struct iwl_trans_ops { void (*resume)(struct iwl_trans *trans); struct iwl_trans_dump_data *(*dump_data)(struct iwl_trans *trans, - const struct iwl_fw_dbg_trigger_tlv - *trigger); + u32 dump_mask); + void (*debugfs_cleanup)(struct iwl_trans *trans); }; /** @@ -679,7 +683,6 @@ enum iwl_plat_pm_mode { * enter/exit (in msecs). */ #define IWL_TRANS_IDLE_TIMEOUT 2000 -#define IWL_MAX_DEBUG_ALLOCATIONS 1 /** * struct iwl_dram_data @@ -734,6 +737,7 @@ struct iwl_dram_data { * @runtime_pm_mode: the runtime power management mode in use. This * mode is set during the initialization phase and is not * supposed to change during runtime. 
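
Editorial note: the iwl-trans.h hunk below changes the transport dump hook to take a plain u32 bitmap instead of a trigger pointer, so the monitor-only case seen earlier in iwl_fw_error_dump() reduces to one intersection before calling down. A sketch of that reduction (the bit position here is illustrative; the driver derives it from its dump-type enum):

    #include <stdint.h>

    #define DUMP_FW_MONITOR (1u << 9)   /* illustrative bit */

    static uint32_t effective_mask(uint32_t configured, int monitor_only)
    {
        return monitor_only ? (configured & DUMP_FW_MONITOR) : configured;
    }
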
+ * @dbg_rec_on: true iff there is a fw debug recording currently active */ struct iwl_trans { const struct iwl_trans_ops *ops; @@ -774,17 +778,23 @@ struct iwl_trans { struct lockdep_map sync_cmd_lockdep_map; #endif + struct iwl_apply_point_data apply_points[IWL_FW_INI_APPLY_NUM]; + struct iwl_apply_point_data apply_points_ext[IWL_FW_INI_APPLY_NUM]; + + bool external_ini_loaded; + bool ini_valid; + const struct iwl_fw_dbg_dest_tlv_v1 *dbg_dest_tlv; const struct iwl_fw_dbg_conf_tlv *dbg_conf_tlv[FW_DBG_CONF_MAX]; struct iwl_fw_dbg_trigger_tlv * const *dbg_trigger_tlv; - u32 dbg_dump_mask; u8 dbg_n_dest_reg; int num_blocks; - struct iwl_dram_data fw_mon[IWL_MAX_DEBUG_ALLOCATIONS]; + struct iwl_dram_data fw_mon[IWL_FW_INI_APPLY_NUM]; enum iwl_plat_pm_mode system_pm_mode; enum iwl_plat_pm_mode runtime_pm_mode; bool suspending; + bool dbg_rec_on; /* pointer to trans specific struct */ /*Ensure that this pointer will always be aligned to sizeof pointer */ @@ -897,12 +907,11 @@ static inline void iwl_trans_resume(struct iwl_trans *trans) } static inline struct iwl_trans_dump_data * -iwl_trans_dump_data(struct iwl_trans *trans, - const struct iwl_fw_dbg_trigger_tlv *trigger) +iwl_trans_dump_data(struct iwl_trans *trans, u32 dump_mask) { if (!trans->ops->dump_data) return NULL; - return trans->ops->dump_data(trans, trigger); + return trans->ops->dump_data(trans, dump_mask); } static inline struct iwl_device_cmd * diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c index 843f3b41b72e..01b5338201d6 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c @@ -1811,8 +1811,7 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm, n_matches = 0; } - net_detect = kzalloc(sizeof(*net_detect) + - (n_matches * sizeof(net_detect->matches[0])), + net_detect = kzalloc(struct_size(net_detect, matches, n_matches), GFP_KERNEL); if (!net_detect || !n_matches) goto out_report_nd; @@ -1827,8 +1826,7 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm, for (j = 0; j < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN; j++) n_channels += hweight8(fw_match->matching_channels[j]); - match = kzalloc(sizeof(*match) + - (n_channels * sizeof(*match->channels)), + match = kzalloc(struct_size(match, channels, n_channels), GFP_KERNEL); if (!match) goto out_report_nd; @@ -1956,7 +1954,7 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test) set_bit(STATUS_FW_ERROR, &mvm->trans->status); iwl_mvm_dump_nic_error_log(mvm); iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert, - NULL, 0); + false, 0); ret = 1; goto err; } diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c index 1aa6c7e93088..33b0af24a537 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c @@ -1299,10 +1299,11 @@ static ssize_t iwl_dbgfs_low_latency_read(struct file *file, int len; len = scnprintf(buf, sizeof(buf) - 1, - "traffic=%d\ndbgfs=%d\nvcmd=%d\n", + "traffic=%d\ndbgfs=%d\nvcmd=%d\nvif_type=%d\n", !!(mvmvif->low_latency & LOW_LATENCY_TRAFFIC), !!(mvmvif->low_latency & LOW_LATENCY_DEBUGFS), - !!(mvmvif->low_latency & LOW_LATENCY_VCMD)); + !!(mvmvif->low_latency & LOW_LATENCY_VCMD), + !!(mvmvif->low_latency & LOW_LATENCY_VIF_TYPE)); return simple_read_from_buffer(user_buf, count, ppos, buf, len); } @@ -1440,15 +1441,6 @@ static ssize_t iwl_dbgfs_quota_min_read(struct file *file, return 
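
Editorial note: the d3.c hunk below swaps open-coded flexible-array sizing for the kernel's struct_size() helper, which adds overflow checking. The arithmetic being replaced looks like this, with a simplified stand-in struct:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct match {
        uint32_t n_channels;
        uint32_t channels[];    /* flexible array member */
    };

    /* struct_size(m, channels, n) computes this, but saturates on
     * overflow instead of silently wrapping. */
    static struct match *alloc_match(size_t n_channels)
    {
        return calloc(1, sizeof(struct match) +
                         n_channels * sizeof(uint32_t));
    }
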
simple_read_from_buffer(user_buf, count, ppos, buf, len); } -static const char * const chanwidths[] = { - [NL80211_CHAN_WIDTH_20_NOHT] = "noht", - [NL80211_CHAN_WIDTH_20] = "ht20", - [NL80211_CHAN_WIDTH_40] = "ht40", - [NL80211_CHAN_WIDTH_80] = "vht80", - [NL80211_CHAN_WIDTH_80P80] = "vht80p80", - [NL80211_CHAN_WIDTH_160] = "vht160", -}; - #define MVM_DEBUGFS_WRITE_FILE_OPS(name, bufsz) \ _MVM_DEBUGFS_WRITE_FILE_OPS(name, bufsz, struct ieee80211_vif) #define MVM_DEBUGFS_READ_WRITE_FILE_OPS(name, bufsz) \ diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c index 3b6b3d8fb961..52c361a6124c 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c @@ -1284,7 +1284,7 @@ static ssize_t iwl_dbgfs_fw_dbg_collect_write(struct iwl_mvm *mvm, return 0; iwl_fw_dbg_collect(&mvm->fwrt, FW_DBG_TRIGGER_USER, buf, - (count - 1), NULL); + (count - 1)); iwl_mvm_unref(mvm, IWL_MVM_REF_PRPH_WRITE); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c index 1689bead1b4f..0d6c313b6669 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c @@ -377,6 +377,9 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, atomic_set(&mvm->mac80211_queue_stop_count[i], 0); set_bit(IWL_MVM_STATUS_FIRMWARE_RUNNING, &mvm->status); +#ifdef CONFIG_IWLWIFI_DEBUGFS + iwl_fw_set_dbg_rec_on(&mvm->fwrt); +#endif clear_bit(IWL_FWRT_STATUS_WAIT_ALIVE, &mvm->fwrt.status); return 0; @@ -407,6 +410,7 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) ret = iwl_mvm_load_ucode_wait_alive(mvm, IWL_UCODE_REGULAR); if (ret) { IWL_ERR(mvm, "Failed to start RT ucode: %d\n", ret); + iwl_fw_assert_error_dump(&mvm->fwrt); goto error; } @@ -543,7 +547,9 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) if (mvm->nvm_file_name) iwl_mvm_load_nvm_to_nic(mvm); - WARN_ON(iwl_nvm_check_version(mvm->nvm_data, mvm->trans)); + WARN_ONCE(mvm->nvm_data->nvm_version < mvm->trans->cfg->nvm_ver, + "Too old NVM version (0x%0x, required = 0x%0x)", + mvm->nvm_data->nvm_version, mvm->trans->cfg->nvm_ver); /* * abort after reading the nvm in case RF Kill is on, we will complete @@ -1023,10 +1029,14 @@ static int iwl_mvm_load_rt_fw(struct iwl_mvm *mvm) if (ret) return ret; + iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_EARLY); + ret = iwl_mvm_load_ucode_wait_alive(mvm, IWL_UCODE_REGULAR); if (ret) return ret; + iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_AFTER_ALIVE); + return iwl_init_paging(&mvm->fwrt, mvm->fwrt.cur_fw_img); } @@ -1045,6 +1055,7 @@ int iwl_mvm_up(struct iwl_mvm *mvm) ret = iwl_mvm_load_rt_fw(mvm); if (ret) { IWL_ERR(mvm, "Failed to start RT ucode: %d\n", ret); + iwl_fw_assert_error_dump(&mvm->fwrt); goto error; } @@ -1054,11 +1065,13 @@ int iwl_mvm_up(struct iwl_mvm *mvm) if (ret) IWL_ERR(mvm, "Failed to initialize Smart Fifo\n"); - mvm->fwrt.dump.conf = FW_DBG_INVALID; - /* if we have a destination, assume EARLY START */ - if (mvm->fw->dbg.dest_tlv) - mvm->fwrt.dump.conf = FW_DBG_START_FROM_ALIVE; - iwl_fw_start_dbg_conf(&mvm->fwrt, FW_DBG_START_FROM_ALIVE); + if (!mvm->trans->ini_valid) { + mvm->fwrt.dump.conf = FW_DBG_INVALID; + /* if we have a destination, assume EARLY START */ + if (mvm->fw->dbg.dest_tlv) + mvm->fwrt.dump.conf = FW_DBG_START_FROM_ALIVE; + iwl_fw_start_dbg_conf(&mvm->fwrt, FW_DBG_START_FROM_ALIVE); + } ret = iwl_send_tx_ant_cfg(mvm, 
iwl_mvm_get_valid_tx_ant(mvm)); if (ret) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c index 6486cfb33f40..7779951a9533 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c @@ -767,13 +767,8 @@ static int iwl_mvm_mac_ctxt_cmd_sta(struct iwl_mvm *mvm, } ctxt_sta->bi = cpu_to_le32(vif->bss_conf.beacon_int); - ctxt_sta->bi_reciprocal = - cpu_to_le32(iwl_mvm_reciprocal(vif->bss_conf.beacon_int)); ctxt_sta->dtim_interval = cpu_to_le32(vif->bss_conf.beacon_int * vif->bss_conf.dtim_period); - ctxt_sta->dtim_reciprocal = - cpu_to_le32(iwl_mvm_reciprocal(vif->bss_conf.beacon_int * - vif->bss_conf.dtim_period)); ctxt_sta->listen_interval = cpu_to_le32(mvm->hw->conf.listen_interval); ctxt_sta->assoc_id = cpu_to_le32(vif->bss_conf.aid); @@ -782,8 +777,30 @@ static int iwl_mvm_mac_ctxt_cmd_sta(struct iwl_mvm *mvm, cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_PROBE_REQUEST); if (vif->bss_conf.assoc && vif->bss_conf.he_support && - !iwlwifi_mod_params.disable_11ax) + !iwlwifi_mod_params.disable_11ax) { + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); + u8 sta_id = mvmvif->ap_sta_id; + cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_11AX); + if (sta_id != IWL_MVM_INVALID_STA) { + struct ieee80211_sta *sta; + + sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[sta_id], + lockdep_is_held(&mvm->mutex)); + + /* + * TODO: we should check the ext cap IE but it is + * unclear why the spec requires two bits (one in HE + * cap IE, and one in the ext cap IE). In the meantime + * rely on the HE cap IE only. + */ + if (sta && (sta->he_cap.he_cap_elem.mac_cap_info[0] & + IEEE80211_HE_MAC_CAP0_TWT_RES)) + ctxt_sta->data_policy |= + cpu_to_le32(TWT_SUPPORTED); + } + } + return iwl_mvm_mac_ctxt_send_cmd(mvm, &cmd); } @@ -832,8 +849,6 @@ static int iwl_mvm_mac_ctxt_cmd_ibss(struct iwl_mvm *mvm, /* cmd.ibss.beacon_time/cmd.ibss.beacon_tsf are curently ignored */ cmd.ibss.bi = cpu_to_le32(vif->bss_conf.beacon_int); - cmd.ibss.bi_reciprocal = - cpu_to_le32(iwl_mvm_reciprocal(vif->bss_conf.beacon_int)); /* TODO: Assumes that the beacon id == mac context id */ cmd.ibss.beacon_template = cpu_to_le32(mvmvif->id); @@ -965,11 +980,8 @@ static void iwl_mvm_mac_ctxt_set_tx(struct iwl_mvm *mvm, tx->tx_flags = cpu_to_le32(tx_flags); if (!fw_has_capa(&mvm->fw->ucode_capa, - IWL_UCODE_TLV_CAPA_BEACON_ANT_SELECTION)) { - mvm->mgmt_last_antenna_idx = - iwl_mvm_next_antenna(mvm, iwl_mvm_get_valid_tx_ant(mvm), - mvm->mgmt_last_antenna_idx); - } + IWL_UCODE_TLV_CAPA_BEACON_ANT_SELECTION)) + iwl_mvm_toggle_tx_ant(mvm, &mvm->mgmt_last_antenna_idx); tx->rate_n_flags = cpu_to_le32(BIT(mvm->mgmt_last_antenna_idx) << @@ -1182,14 +1194,12 @@ static void iwl_mvm_mac_ctxt_cmd_fill_ap(struct iwl_mvm *mvm, IWL_DEBUG_HC(mvm, "No need to receive beacons\n"); } + if (vif->bss_conf.he_support && !iwlwifi_mod_params.disable_11ax) + cmd->filter_flags |= cpu_to_le32(MAC_FILTER_IN_11AX); + ctxt_ap->bi = cpu_to_le32(vif->bss_conf.beacon_int); - ctxt_ap->bi_reciprocal = - cpu_to_le32(iwl_mvm_reciprocal(vif->bss_conf.beacon_int)); ctxt_ap->dtim_interval = cpu_to_le32(vif->bss_conf.beacon_int * vif->bss_conf.dtim_period); - ctxt_ap->dtim_reciprocal = - cpu_to_le32(iwl_mvm_reciprocal(vif->bss_conf.beacon_int * - vif->bss_conf.dtim_period)); if (!fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_STA_TYPE)) @@ -1522,6 +1532,8 @@ void iwl_mvm_rx_missed_beacons_notif(struct iwl_mvm *mvm, IEEE80211_IFACE_ITER_NORMAL, 
iwl_mvm_beacon_loss_iterator, mb); + + iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_MISSED_BEACONS); } void iwl_mvm_rx_stored_beacon_notif(struct iwl_mvm *mvm, diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c index 00f831d88366..97dc464379d2 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c @@ -423,6 +423,7 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm) ieee80211_hw_set(hw, SUPPORTS_AMSDU_IN_AMPDU); ieee80211_hw_set(hw, NEEDS_UNIQUE_STA_ADDR); ieee80211_hw_set(hw, DEAUTH_NEED_MGD_TX_PREP); + ieee80211_hw_set(hw, SUPPORTS_VHT_EXT_NSS_BW); if (iwl_mvm_has_tlc_offload(mvm)) { ieee80211_hw_set(hw, TX_AMPDU_SETUP_IN_HW); @@ -813,6 +814,21 @@ static void iwl_mvm_mac_tx(struct ieee80211_hw *hw, !ieee80211_is_bufferable_mmpdu(hdr->frame_control)) sta = NULL; + /* If there is no sta, and it's not offchannel - send through AP */ + if (info->control.vif->type == NL80211_IFTYPE_STATION && + info->hw_queue != IWL_MVM_OFFCHANNEL_QUEUE && !sta) { + struct iwl_mvm_vif *mvmvif = + iwl_mvm_vif_from_mac80211(info->control.vif); + u8 ap_sta_id = READ_ONCE(mvmvif->ap_sta_id); + + if (ap_sta_id < IWL_MVM_STATION_COUNT) { + /* mac80211 holds rcu read lock */ + sta = rcu_dereference(mvm->fw_id_to_mac_id[ap_sta_id]); + if (IS_ERR_OR_NULL(sta)) + goto drop; + } + } + if (sta) { if (iwl_mvm_defer_tx(mvm, sta, skb)) return; @@ -1113,6 +1129,8 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm) } ret = iwl_mvm_up(mvm); + iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_POST_INIT); + if (ret && test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) { /* Something went wrong - we need to finish some cleanup * that normally iwl_mvm_mac_restart_complete() below @@ -2005,7 +2023,13 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm, if (sta->he_cap.he_cap_elem.mac_cap_info[4] & IEEE80211_HE_MAC_CAP4_BQR) sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_BQR_SUPP); - /* If PPE Thresholds exist, parse them into a FW-familiar format */ + /* + * Initialize the PPE thresholds to "None" (7), as described in Table + * 9-262ac of 80211.ax/D3.0. + */ + memset(&sta_ctxt_cmd.pkt_ext, 7, sizeof(sta_ctxt_cmd.pkt_ext)); + + /* If PPE Thresholds exist, parse them into a FW-familiar format. 
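A minimal sketch of the two station-lookup patterns used in the hunks above: fw_id_to_mac_id[] is an RCU-protected array whose writers hold mvm->mutex, so the TX path (where mac80211 holds the RCU read lock) uses rcu_dereference(), while the TWT hunk in mac-ctxt.c, which already holds the mutex, can use rcu_dereference_protected() with a lockdep expression instead of opening an RCU read-side section. The helper name here is hypothetical:

	static struct ieee80211_sta *
	example_sta_by_fw_id(struct iwl_mvm *mvm, u8 sta_id)
	{
		lockdep_assert_held(&mvm->mutex);

		/* safe without rcu_read_lock(): updates are serialized by the mutex */
		return rcu_dereference_protected(mvm->fw_id_to_mac_id[sta_id],
						 lockdep_is_held(&mvm->mutex));
	}
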
*/ if (sta->he_cap.he_cap_elem.phy_cap_info[6] & IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) { u8 nss = (sta->he_cap.ppe_thres[0] & @@ -2383,6 +2407,12 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw, /* must be set before quota calculations */ mvmvif->ap_ibss_active = true; + if (vif->type == NL80211_IFTYPE_AP && !vif->p2p) { + iwl_mvm_vif_set_low_latency(mvmvif, true, + LOW_LATENCY_VIF_TYPE); + iwl_mvm_send_low_latency_cmd(mvm, true, mvmvif->id); + } + /* power updated needs to be done before quotas */ iwl_mvm_power_update_mac(mvm); @@ -2445,6 +2475,12 @@ static void iwl_mvm_stop_ap_ibss(struct ieee80211_hw *hw, mvmvif->ap_ibss_active = false; mvm->ap_last_beacon_gp2 = 0; + if (vif->type == NL80211_IFTYPE_AP && !vif->p2p) { + iwl_mvm_vif_set_low_latency(mvmvif, false, + LOW_LATENCY_VIF_TYPE); + iwl_mvm_send_low_latency_cmd(mvm, false, mvmvif->id); + } + iwl_mvm_bt_coex_vif_change(mvm); iwl_mvm_unref(mvm, IWL_MVM_REF_AP_IBSS); @@ -2945,6 +2981,9 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw, if (vif->type == NL80211_IFTYPE_AP) { mvmvif->ap_assoc_sta_count++; iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL); + if (vif->bss_conf.he_support && + !iwlwifi_mod_params.disable_11ax) + iwl_mvm_cfg_he_sta(mvm, vif, mvm_sta->sta_id); } iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band, @@ -3355,7 +3394,7 @@ static bool iwl_mvm_rx_aux_roc(struct iwl_notif_wait_data *notif_wait, resp = (void *)pkt->data; IWL_DEBUG_TE(mvm, - "Aux ROC: Recieved response from ucode: status=%d uid=%d\n", + "Aux ROC: Received response from ucode: status=%d uid=%d\n", resp->status, resp->event_unique_id); te_data->uid = le32_to_cpu(resp->event_unique_id); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h index 7ba5bc2ed1c4..1aa690e081ff 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -303,11 +303,13 @@ enum iwl_bt_force_ant_mode { * @LOW_LATENCY_TRAFFIC: indicates low latency traffic was detected * @LOW_LATENCY_DEBUGFS: low latency mode set from debugfs * @LOW_LATENCY_VCMD: low latency mode set from vendor command +* @LOW_LATENCY_VIF_TYPE: low latency mode set because of vif type (ap) */ enum iwl_mvm_low_latency_cause { LOW_LATENCY_TRAFFIC = BIT(0), LOW_LATENCY_DEBUGFS = BIT(1), LOW_LATENCY_VCMD = BIT(2), + LOW_LATENCY_VIF_TYPE = BIT(3), }; /** @@ -844,7 +846,6 @@ struct iwl_mvm { u16 hw_queue_to_mac80211[IWL_MAX_TVQM_QUEUES]; struct iwl_mvm_dqa_txq_info queue_info[IWL_MAX_HW_QUEUES]; - spinlock_t queue_info_lock; /* For syncing queue mgmt operations */ struct work_struct add_stream_wk; /* To add streams to queues */ atomic_t mac80211_queue_stop_count[IEEE80211_MAX_QUEUES]; @@ -1521,6 +1522,11 @@ static inline u8 iwl_mvm_get_valid_rx_ant(struct iwl_mvm *mvm) mvm->fw->valid_rx_ant; } +static inline void iwl_mvm_toggle_tx_ant(struct iwl_mvm *mvm, u8 *ant) +{ + *ant = iwl_mvm_next_antenna(mvm, iwl_mvm_get_valid_tx_ant(mvm), *ant); +} + static inline u32 iwl_mvm_get_phy_config(struct iwl_mvm *mvm) { u32 phy_config = ~(FW_PHY_CFG_TX_CHAIN | @@ -1550,6 +1556,8 @@ void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi, struct iwl_rx_cmd_buffer *rxb); void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, struct iwl_rx_cmd_buffer *rxb, int queue); +void iwl_mvm_rx_monitor_ndp(struct iwl_mvm *mvm, struct napi_struct *napi, + struct iwl_rx_cmd_buffer *rxb, int queue); void iwl_mvm_rx_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi, 
struct iwl_rx_cmd_buffer *rxb, int queue); int iwl_mvm_notify_rx_queue(struct iwl_mvm *mvm, u32 rxq_mask, @@ -1846,6 +1854,8 @@ int iwl_mvm_update_low_latency(struct iwl_mvm *mvm, struct ieee80211_vif *vif, /* get SystemLowLatencyMode - only needed for beacon threshold? */ bool iwl_mvm_low_latency(struct iwl_mvm *mvm); bool iwl_mvm_low_latency_band(struct iwl_mvm *mvm, enum nl80211_band band); +void iwl_mvm_send_low_latency_cmd(struct iwl_mvm *mvm, bool low_latency, + u16 mac_id); /* get VMACLowLatencyMode */ static inline bool iwl_mvm_vif_low_latency(struct iwl_mvm_vif *mvmvif) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c index af3fba10abc1..30c5127034a0 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c @@ -676,7 +676,6 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, INIT_LIST_HEAD(&mvm->aux_roc_te_list); INIT_LIST_HEAD(&mvm->async_handlers_list); spin_lock_init(&mvm->time_event_lock); - spin_lock_init(&mvm->queue_info_lock); INIT_WORK(&mvm->async_handlers_wk, iwl_mvm_async_handlers_wk); INIT_WORK(&mvm->roc_done_wk, iwl_mvm_roc_done_wk); @@ -770,7 +769,6 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, memcpy(trans->dbg_conf_tlv, mvm->fw->dbg.conf_tlv, sizeof(trans->dbg_conf_tlv)); trans->dbg_trigger_tlv = mvm->fw->dbg.trigger_tlv; - trans->dbg_dump_mask = mvm->fw->dbg.dump_mask; trans->iml = mvm->fw->iml; trans->iml_len = mvm->fw->iml_len; @@ -846,6 +844,8 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, iwl_mvm_tof_init(mvm); + iwl_mvm_toggle_tx_ant(mvm, &mvm->mgmt_last_antenna_idx); + return op_mode; out_unregister: @@ -1073,6 +1073,8 @@ static void iwl_mvm_rx_mq(struct iwl_op_mode *op_mode, iwl_mvm_rx_queue_notif(mvm, rxb, 0); else if (cmd == WIDE_ID(LEGACY_GROUP, FRAME_RELEASE)) iwl_mvm_rx_frame_release(mvm, napi, rxb, 0); + else if (cmd == WIDE_ID(DATA_PATH_GROUP, RX_NO_DATA_NOTIF)) + iwl_mvm_rx_monitor_ndp(mvm, napi, rxb, 0); else iwl_mvm_rx_common(mvm, rxb, pkt); } @@ -1110,11 +1112,7 @@ static void iwl_mvm_async_cb(struct iwl_op_mode *op_mode, static void iwl_mvm_stop_sw_queue(struct iwl_op_mode *op_mode, int hw_queue) { struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); - unsigned long mq; - - spin_lock_bh(&mvm->queue_info_lock); - mq = mvm->hw_queue_to_mac80211[hw_queue]; - spin_unlock_bh(&mvm->queue_info_lock); + unsigned long mq = mvm->hw_queue_to_mac80211[hw_queue]; iwl_mvm_stop_mac_queues(mvm, mq); } @@ -1140,11 +1138,7 @@ void iwl_mvm_start_mac_queues(struct iwl_mvm *mvm, unsigned long mq) static void iwl_mvm_wake_sw_queue(struct iwl_op_mode *op_mode, int hw_queue) { struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); - unsigned long mq; - - spin_lock_bh(&mvm->queue_info_lock); - mq = mvm->hw_queue_to_mac80211[hw_queue]; - spin_unlock_bh(&mvm->queue_info_lock); + unsigned long mq = mvm->hw_queue_to_mac80211[hw_queue]; iwl_mvm_start_mac_queues(mvm, mq); } @@ -1242,7 +1236,7 @@ void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error) */ if (!mvm->fw_restart && fw_error) { iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert, - NULL, 0); + false, 0); } else if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) { struct iwl_mvm_reprobe *reprobe; diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c index 7a98e1a1dc40..dabbc04853ac 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c +++ 
b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c @@ -98,8 +98,12 @@ static u8 rs_fw_sgi_cw_support(struct ieee80211_sta *sta) { struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap; struct ieee80211_sta_vht_cap *vht_cap = &sta->vht_cap; + struct ieee80211_sta_he_cap *he_cap = &sta->he_cap; u8 supp = 0; + if (he_cap && he_cap->has_he) + return 0; + if (ht_cap->cap & IEEE80211_HT_CAP_SGI_20) supp |= BIT(IWL_TLC_MNG_CH_WIDTH_20MHZ); if (ht_cap->cap & IEEE80211_HT_CAP_SGI_40) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c index ef624833cf1b..6653a238f32e 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c @@ -593,31 +593,28 @@ static void iwl_mvm_stat_iterator(void *_data, u8 *mac, int hyst = vif->bss_conf.cqm_rssi_hyst; u16 id = le32_to_cpu(data->mac_id); struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); + u16 vif_id = mvmvif->id; /* This doesn't need the MAC ID check since it's not taking the * data copied into the "data" struct, but rather the data from * the notification directly. */ - if (data->general) { - u16 vif_id = mvmvif->id; - - if (iwl_mvm_is_cdb_supported(mvm)) { - struct mvm_statistics_general_cdb *general = - data->general; - - mvmvif->beacon_stats.num_beacons = - le32_to_cpu(general->beacon_counter[vif_id]); - mvmvif->beacon_stats.avg_signal = - -general->beacon_average_energy[vif_id]; - } else { - struct mvm_statistics_general_v8 *general = - data->general; - - mvmvif->beacon_stats.num_beacons = - le32_to_cpu(general->beacon_counter[vif_id]); - mvmvif->beacon_stats.avg_signal = - -general->beacon_average_energy[vif_id]; - } + if (iwl_mvm_is_cdb_supported(mvm)) { + struct mvm_statistics_general_cdb *general = + data->general; + + mvmvif->beacon_stats.num_beacons = + le32_to_cpu(general->beacon_counter[vif_id]); + mvmvif->beacon_stats.avg_signal = + -general->beacon_average_energy[vif_id]; + } else { + struct mvm_statistics_general_v8 *general = + data->general; + + mvmvif->beacon_stats.num_beacons = + le32_to_cpu(general->beacon_counter[vif_id]); + mvmvif->beacon_stats.avg_signal = + -general->beacon_average_energy[vif_id]; } if (mvmvif->id != id) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c index 26ac9402568d..7bd8676508f5 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c @@ -200,7 +200,8 @@ static void iwl_mvm_pass_packet_to_mac80211(struct iwl_mvm *mvm, { struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb); - if (iwl_mvm_check_pn(mvm, skb, queue, sta)) { + if (!(rx_status->flag & RX_FLAG_NO_PSDU) && + iwl_mvm_check_pn(mvm, skb, queue, sta)) { kfree_skb(skb); } else { unsigned int radiotap_len = 0; @@ -863,68 +864,66 @@ static void iwl_mvm_flip_address(u8 *addr) ether_addr_copy(addr, mac_addr); } -static void iwl_mvm_decode_he_sigb(struct iwl_mvm *mvm, - struct iwl_rx_mpdu_desc *desc, - u32 rate_n_flags, - struct ieee80211_radiotap_he_mu *he_mu) -{ - u32 sigb0, sigb1; - u16 sigb2; - - if (mvm->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) { - sigb0 = le32_to_cpu(desc->v3.sigb_common0); - sigb1 = le32_to_cpu(desc->v3.sigb_common1); - } else { - sigb0 = le32_to_cpu(desc->v1.sigb_common0); - sigb1 = le32_to_cpu(desc->v1.sigb_common1); - } +struct iwl_mvm_rx_phy_data { + enum iwl_rx_phy_info_type info_type; + __le32 d0, d1, d2, d3; + __le16 d4; +}; - sigb2 = le16_to_cpu(desc->sigb_common2); +static void 
iwl_mvm_decode_he_mu_ext(struct iwl_mvm *mvm, + struct iwl_mvm_rx_phy_data *phy_data, + u32 rate_n_flags, + struct ieee80211_radiotap_he_mu *he_mu) +{ + u32 phy_data2 = le32_to_cpu(phy_data->d2); + u32 phy_data3 = le32_to_cpu(phy_data->d3); + u16 phy_data4 = le16_to_cpu(phy_data->d4); - if (FIELD_GET(IWL_RX_HE_SIGB_COMMON2_CH1_CRC_OK, sigb2)) { + if (FIELD_GET(IWL_RX_PHY_DATA4_HE_MU_EXT_CH1_CRC_OK, phy_data4)) { he_mu->flags1 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_MU_FLAGS1_CH1_RU_KNOWN | IEEE80211_RADIOTAP_HE_MU_FLAGS1_CH1_CTR_26T_RU_KNOWN); he_mu->flags1 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_SIGB_COMMON2_CH1_CTR_RU, - sigb2), + le16_encode_bits(FIELD_GET(IWL_RX_PHY_DATA4_HE_MU_EXT_CH1_CTR_RU, + phy_data4), IEEE80211_RADIOTAP_HE_MU_FLAGS1_CH1_CTR_26T_RU); - he_mu->ru_ch1[0] = FIELD_GET(IWL_RX_HE_SIGB_COMMON0_CH1_RU0, - sigb0); - he_mu->ru_ch1[1] = FIELD_GET(IWL_RX_HE_SIGB_COMMON1_CH1_RU1, - sigb1); - he_mu->ru_ch1[2] = FIELD_GET(IWL_RX_HE_SIGB_COMMON0_CH1_RU2, - sigb0); - he_mu->ru_ch1[3] = FIELD_GET(IWL_RX_HE_SIGB_COMMON1_CH1_RU3, - sigb1); + he_mu->ru_ch1[0] = FIELD_GET(IWL_RX_PHY_DATA2_HE_MU_EXT_CH1_RU0, + phy_data2); + he_mu->ru_ch1[1] = FIELD_GET(IWL_RX_PHY_DATA3_HE_MU_EXT_CH1_RU1, + phy_data3); + he_mu->ru_ch1[2] = FIELD_GET(IWL_RX_PHY_DATA2_HE_MU_EXT_CH1_RU2, + phy_data2); + he_mu->ru_ch1[3] = FIELD_GET(IWL_RX_PHY_DATA3_HE_MU_EXT_CH1_RU3, + phy_data3); } - if (FIELD_GET(IWL_RX_HE_SIGB_COMMON2_CH2_CRC_OK, sigb2) && + if (FIELD_GET(IWL_RX_PHY_DATA4_HE_MU_EXT_CH2_CRC_OK, phy_data4) && (rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK) != RATE_MCS_CHAN_WIDTH_20) { he_mu->flags1 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_MU_FLAGS1_CH2_RU_KNOWN | IEEE80211_RADIOTAP_HE_MU_FLAGS1_CH2_CTR_26T_RU_KNOWN); he_mu->flags2 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_SIGB_COMMON2_CH2_CTR_RU, - sigb2), + le16_encode_bits(FIELD_GET(IWL_RX_PHY_DATA4_HE_MU_EXT_CH2_CTR_RU, + phy_data4), IEEE80211_RADIOTAP_HE_MU_FLAGS2_CH2_CTR_26T_RU); - he_mu->ru_ch2[0] = FIELD_GET(IWL_RX_HE_SIGB_COMMON0_CH2_RU0, - sigb0); - he_mu->ru_ch2[1] = FIELD_GET(IWL_RX_HE_SIGB_COMMON1_CH2_RU1, - sigb1); - he_mu->ru_ch2[2] = FIELD_GET(IWL_RX_HE_SIGB_COMMON0_CH2_RU2, - sigb0); - he_mu->ru_ch2[3] = FIELD_GET(IWL_RX_HE_SIGB_COMMON1_CH2_RU3, - sigb1); + he_mu->ru_ch2[0] = FIELD_GET(IWL_RX_PHY_DATA2_HE_MU_EXT_CH2_RU0, + phy_data2); + he_mu->ru_ch2[1] = FIELD_GET(IWL_RX_PHY_DATA3_HE_MU_EXT_CH2_RU1, + phy_data3); + he_mu->ru_ch2[2] = FIELD_GET(IWL_RX_PHY_DATA2_HE_MU_EXT_CH2_RU2, + phy_data2); + he_mu->ru_ch2[3] = FIELD_GET(IWL_RX_PHY_DATA3_HE_MU_EXT_CH2_RU3, + phy_data3); } } static void -iwl_mvm_decode_he_phy_ru_alloc(u64 he_phy_data, u32 rate_n_flags, +iwl_mvm_decode_he_phy_ru_alloc(struct iwl_mvm_rx_phy_data *phy_data, + u32 rate_n_flags, struct ieee80211_radiotap_he *he, struct ieee80211_radiotap_he_mu *he_mu, struct ieee80211_rx_status *rx_status) @@ -937,7 +936,7 @@ iwl_mvm_decode_he_phy_ru_alloc(u64 he_phy_data, u32 rate_n_flags, * happen though as management frames where we need * the TSF/timers are not be transmitted in HE-MU. 
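The PHY-data rework above replaces raw 64-bit he_phy_data parsing with per-word fields pulled out via FIELD_GET() on host-order values and le32_get_bits()/le16_get_bits() directly on little-endian words. A minimal sketch of that style, with a made-up mask (the real IWL_RX_PHY_DATA* masks live in the firmware API headers):

	#include <linux/bitfield.h>

	#define EXAMPLE_CH1_CRC_OK	0x00000100	/* hypothetical mask */

	static bool example_ch1_crc_ok(__le32 phy_word)
	{
		/* le32_get_bits() byte-swaps and extracts the field in one step */
		return le32_get_bits(phy_word, EXAMPLE_CH1_CRC_OK);
	}
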
*/ - u8 ru = FIELD_GET(IWL_RX_HE_PHY_RU_ALLOC_MASK, he_phy_data); + u8 ru = le32_get_bits(phy_data->d1, IWL_RX_PHY_DATA1_HE_RU_ALLOC_MASK); u8 offs = 0; rx_status->bw = RATE_INFO_BW_HE_RU; @@ -976,7 +975,7 @@ iwl_mvm_decode_he_phy_ru_alloc(u64 he_phy_data, u32 rate_n_flags, IEEE80211_RADIOTAP_HE_DATA2_RU_OFFSET); he->data2 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA2_PRISEC_80_KNOWN | IEEE80211_RADIOTAP_HE_DATA2_RU_OFFSET_KNOWN); - if (he_phy_data & IWL_RX_HE_PHY_RU_ALLOC_SEC80) + if (phy_data->d1 & cpu_to_le32(IWL_RX_PHY_DATA1_HE_RU_ALLOC_SEC80)) he->data2 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA2_PRISEC_80_SEC); @@ -996,106 +995,122 @@ iwl_mvm_decode_he_phy_ru_alloc(u64 he_phy_data, u32 rate_n_flags, } static void iwl_mvm_decode_he_phy_data(struct iwl_mvm *mvm, - struct iwl_rx_mpdu_desc *desc, + struct iwl_mvm_rx_phy_data *phy_data, struct ieee80211_radiotap_he *he, struct ieee80211_radiotap_he_mu *he_mu, struct ieee80211_rx_status *rx_status, - u64 he_phy_data, u32 rate_n_flags, - int queue) + u32 rate_n_flags, int queue) { - u32 he_type = rate_n_flags & RATE_MCS_HE_TYPE_MSK; - bool sigb_data; - u16 d1known = IEEE80211_RADIOTAP_HE_DATA1_LDPC_XSYMSEG_KNOWN | - IEEE80211_RADIOTAP_HE_DATA1_UL_DL_KNOWN | - IEEE80211_RADIOTAP_HE_DATA1_SPTL_REUSE_KNOWN | - IEEE80211_RADIOTAP_HE_DATA1_DOPPLER_KNOWN | - IEEE80211_RADIOTAP_HE_DATA1_BSS_COLOR_KNOWN; - u16 d2known = IEEE80211_RADIOTAP_HE_DATA2_PRE_FEC_PAD_KNOWN | - IEEE80211_RADIOTAP_HE_DATA2_PE_DISAMBIG_KNOWN | - IEEE80211_RADIOTAP_HE_DATA2_TXOP_KNOWN; - - he->data1 |= cpu_to_le16(d1known); - he->data2 |= cpu_to_le16(d2known); - he->data3 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_BSS_COLOR_MASK, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA3_BSS_COLOR); - he->data3 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_UPLINK, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA3_UL_DL); - he->data3 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_LDPC_EXT_SYM, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA3_LDPC_XSYMSEG); - he->data4 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_SPATIAL_REUSE_MASK, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA4_SU_MU_SPTL_REUSE); - he->data5 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_PRE_FEC_PAD_MASK, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA5_PRE_FEC_PAD); - he->data5 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_PE_DISAMBIG, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA5_PE_DISAMBIG); - he->data6 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_TXOP_DUR_MASK, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA6_TXOP); - he->data6 |= le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_DOPPLER, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA6_DOPPLER); - - switch (he_type) { - case RATE_MCS_HE_TYPE_MU: + switch (phy_data->info_type) { + case IWL_RX_PHY_INFO_TYPE_NONE: + case IWL_RX_PHY_INFO_TYPE_CCK: + case IWL_RX_PHY_INFO_TYPE_OFDM_LGCY: + case IWL_RX_PHY_INFO_TYPE_HT: + case IWL_RX_PHY_INFO_TYPE_VHT_SU: + case IWL_RX_PHY_INFO_TYPE_VHT_MU: + return; + case IWL_RX_PHY_INFO_TYPE_HE_TB_EXT: + he->data1 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_SPTL_REUSE_KNOWN | + IEEE80211_RADIOTAP_HE_DATA1_SPTL_REUSE2_KNOWN | + IEEE80211_RADIOTAP_HE_DATA1_SPTL_REUSE3_KNOWN | + IEEE80211_RADIOTAP_HE_DATA1_SPTL_REUSE4_KNOWN); + he->data4 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE1), + IEEE80211_RADIOTAP_HE_DATA4_TB_SPTL_REUSE1); + he->data4 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE2), + IEEE80211_RADIOTAP_HE_DATA4_TB_SPTL_REUSE2); + he->data4 |= le16_encode_bits(le32_get_bits(phy_data->d0, + 
IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE3), + IEEE80211_RADIOTAP_HE_DATA4_TB_SPTL_REUSE3); + he->data4 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE4), + IEEE80211_RADIOTAP_HE_DATA4_TB_SPTL_REUSE4); + /* fall through */ + case IWL_RX_PHY_INFO_TYPE_HE_SU: + case IWL_RX_PHY_INFO_TYPE_HE_MU: + case IWL_RX_PHY_INFO_TYPE_HE_MU_EXT: + case IWL_RX_PHY_INFO_TYPE_HE_TB: + /* HE common */ + he->data1 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_LDPC_XSYMSEG_KNOWN | + IEEE80211_RADIOTAP_HE_DATA1_SPTL_REUSE_KNOWN | + IEEE80211_RADIOTAP_HE_DATA1_DOPPLER_KNOWN | + IEEE80211_RADIOTAP_HE_DATA1_BSS_COLOR_KNOWN); + he->data2 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA2_PRE_FEC_PAD_KNOWN | + IEEE80211_RADIOTAP_HE_DATA2_PE_DISAMBIG_KNOWN | + IEEE80211_RADIOTAP_HE_DATA2_TXOP_KNOWN | + IEEE80211_RADIOTAP_HE_DATA2_NUM_LTF_SYMS_KNOWN); + he->data3 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_BSS_COLOR_MASK), + IEEE80211_RADIOTAP_HE_DATA3_BSS_COLOR); + if (phy_data->info_type != IWL_RX_PHY_INFO_TYPE_HE_TB && + phy_data->info_type != IWL_RX_PHY_INFO_TYPE_HE_TB_EXT) { + he->data1 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_UL_DL_KNOWN); + he->data3 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_UPLINK), + IEEE80211_RADIOTAP_HE_DATA3_UL_DL); + } + he->data3 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_LDPC_EXT_SYM), + IEEE80211_RADIOTAP_HE_DATA3_LDPC_XSYMSEG); + he->data4 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_SPATIAL_REUSE_MASK), + IEEE80211_RADIOTAP_HE_DATA4_SU_MU_SPTL_REUSE); + he->data5 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_PRE_FEC_PAD_MASK), + IEEE80211_RADIOTAP_HE_DATA5_PRE_FEC_PAD); + he->data5 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_PE_DISAMBIG), + IEEE80211_RADIOTAP_HE_DATA5_PE_DISAMBIG); + he->data5 |= le16_encode_bits(le32_get_bits(phy_data->d1, + IWL_RX_PHY_DATA1_HE_LTF_NUM_MASK), + IEEE80211_RADIOTAP_HE_DATA5_NUM_LTF_SYMS); + he->data6 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_TXOP_DUR_MASK), + IEEE80211_RADIOTAP_HE_DATA6_TXOP); + he->data6 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_DOPPLER), + IEEE80211_RADIOTAP_HE_DATA6_DOPPLER); + break; + } + + switch (phy_data->info_type) { + case IWL_RX_PHY_INFO_TYPE_HE_MU_EXT: he_mu->flags1 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_MU_SIGB_DCM, - he_phy_data), + le16_encode_bits(le16_get_bits(phy_data->d4, + IWL_RX_PHY_DATA4_HE_MU_EXT_SIGB_DCM), IEEE80211_RADIOTAP_HE_MU_FLAGS1_SIG_B_DCM); he_mu->flags1 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_MU_SIGB_MCS_MASK, - he_phy_data), + le16_encode_bits(le16_get_bits(phy_data->d4, + IWL_RX_PHY_DATA4_HE_MU_EXT_SIGB_MCS_MASK), IEEE80211_RADIOTAP_HE_MU_FLAGS1_SIG_B_MCS); he_mu->flags2 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_MU_SIBG_SYM_OR_USER_NUM_MASK, - he_phy_data), - IEEE80211_RADIOTAP_HE_MU_FLAGS2_SIG_B_SYMS_USERS); + le16_encode_bits(le16_get_bits(phy_data->d4, + IWL_RX_PHY_DATA4_HE_MU_EXT_PREAMBLE_PUNC_TYPE_MASK), + IEEE80211_RADIOTAP_HE_MU_FLAGS2_PUNC_FROM_SIG_A_BW); + iwl_mvm_decode_he_mu_ext(mvm, phy_data, rate_n_flags, he_mu); + /* fall through */ + case IWL_RX_PHY_INFO_TYPE_HE_MU: he_mu->flags2 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_MU_SIGB_COMPRESSION, - he_phy_data), - IEEE80211_RADIOTAP_HE_MU_FLAGS2_SIG_B_COMP); + le16_encode_bits(le32_get_bits(phy_data->d1, + IWL_RX_PHY_DATA1_HE_MU_SIBG_SYM_OR_USER_NUM_MASK), + 
IEEE80211_RADIOTAP_HE_MU_FLAGS2_SIG_B_SYMS_USERS); he_mu->flags2 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_MU_PREAMBLE_PUNC_TYPE_MASK, - he_phy_data), - IEEE80211_RADIOTAP_HE_MU_FLAGS2_PUNC_FROM_SIG_A_BW); - - sigb_data = FIELD_GET(IWL_RX_HE_PHY_INFO_TYPE_MASK, - he_phy_data) == - IWL_RX_HE_PHY_INFO_TYPE_MU_EXT_INFO; - if (sigb_data) - iwl_mvm_decode_he_sigb(mvm, desc, rate_n_flags, he_mu); + le16_encode_bits(le32_get_bits(phy_data->d1, + IWL_RX_PHY_DATA1_HE_MU_SIGB_COMPRESSION), + IEEE80211_RADIOTAP_HE_MU_FLAGS2_SIG_B_COMP); /* fall through */ - case RATE_MCS_HE_TYPE_TRIG: - he->data2 |= - cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA2_NUM_LTF_SYMS_KNOWN); - he->data5 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_HE_LTF_NUM_MASK, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA5_NUM_LTF_SYMS); - break; - case RATE_MCS_HE_TYPE_SU: - case RATE_MCS_HE_TYPE_EXT_SU: - he->data1 |= - cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_BEAM_CHANGE_KNOWN); - he->data3 |= - le16_encode_bits(FIELD_GET(IWL_RX_HE_PHY_BEAM_CHNG, - he_phy_data), - IEEE80211_RADIOTAP_HE_DATA3_BEAM_CHANGE); - break; - } - - switch (FIELD_GET(IWL_RX_HE_PHY_INFO_TYPE_MASK, he_phy_data)) { - case IWL_RX_HE_PHY_INFO_TYPE_MU: - case IWL_RX_HE_PHY_INFO_TYPE_MU_EXT_INFO: - case IWL_RX_HE_PHY_INFO_TYPE_TB: - iwl_mvm_decode_he_phy_ru_alloc(he_phy_data, rate_n_flags, + case IWL_RX_PHY_INFO_TYPE_HE_TB: + case IWL_RX_PHY_INFO_TYPE_HE_TB_EXT: + iwl_mvm_decode_he_phy_ru_alloc(phy_data, rate_n_flags, he, he_mu, rx_status); break; + case IWL_RX_PHY_INFO_TYPE_HE_SU: + he->data1 |= cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_BEAM_CHANGE_KNOWN); + he->data3 |= le16_encode_bits(le32_get_bits(phy_data->d0, + IWL_RX_PHY_DATA0_HE_BEAM_CHNG), + IEEE80211_RADIOTAP_HE_DATA3_BEAM_CHANGE); + break; default: /* nothing */ break; @@ -1103,13 +1118,10 @@ static void iwl_mvm_decode_he_phy_data(struct iwl_mvm *mvm, } static void iwl_mvm_rx_he(struct iwl_mvm *mvm, struct sk_buff *skb, - struct iwl_rx_mpdu_desc *desc, + struct iwl_mvm_rx_phy_data *phy_data, u32 rate_n_flags, u16 phy_info, int queue) { struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb); - /* this is invalid e.g. 
because puncture type doesn't allow 0b11 */ -#define HE_PHY_DATA_INVAL ((u64)-1) - u64 he_phy_data = HE_PHY_DATA_INVAL; struct ieee80211_radiotap_he *he = NULL; struct ieee80211_radiotap_he_mu *he_mu = NULL; u32 he_type = rate_n_flags & RATE_MCS_HE_TYPE_MSK; @@ -1136,49 +1148,41 @@ static void iwl_mvm_rx_he(struct iwl_mvm *mvm, struct sk_buff *skb, radiotap_len += sizeof(known); rx_status->flag |= RX_FLAG_RADIOTAP_HE; - if (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) { - if (mvm->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) - he_phy_data = le64_to_cpu(desc->v3.he_phy_data); - else - he_phy_data = le64_to_cpu(desc->v1.he_phy_data); - - if (he_type == RATE_MCS_HE_TYPE_MU) { - he_mu = skb_put_data(skb, &mu_known, sizeof(mu_known)); - radiotap_len += sizeof(mu_known); - rx_status->flag |= RX_FLAG_RADIOTAP_HE_MU; - } + if (phy_data->info_type == IWL_RX_PHY_INFO_TYPE_HE_MU || + phy_data->info_type == IWL_RX_PHY_INFO_TYPE_HE_MU_EXT) { + he_mu = skb_put_data(skb, &mu_known, sizeof(mu_known)); + radiotap_len += sizeof(mu_known); + rx_status->flag |= RX_FLAG_RADIOTAP_HE_MU; } /* temporarily hide the radiotap data */ __skb_pull(skb, radiotap_len); - if (he_phy_data != HE_PHY_DATA_INVAL && - he_type == RATE_MCS_HE_TYPE_SU) { + if (phy_data->info_type == IWL_RX_PHY_INFO_TYPE_HE_SU) { /* report the AMPDU-EOF bit on single frames */ if (!queue && !(phy_info & IWL_RX_MPDU_PHY_AMPDU)) { rx_status->flag |= RX_FLAG_AMPDU_DETAILS; rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN; - if (FIELD_GET(IWL_RX_HE_PHY_DELIM_EOF, he_phy_data)) + if (phy_data->d0 & cpu_to_le32(IWL_RX_PHY_DATA0_HE_DELIM_EOF)) rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT; } } - if (he_phy_data != HE_PHY_DATA_INVAL) - iwl_mvm_decode_he_phy_data(mvm, desc, he, he_mu, rx_status, - he_phy_data, rate_n_flags, queue); + if (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) + iwl_mvm_decode_he_phy_data(mvm, phy_data, he, he_mu, rx_status, + rate_n_flags, queue); /* update aggregation data for monitor sake on default queue */ - if (!queue && (phy_info & IWL_RX_MPDU_PHY_AMPDU)) { + if (!queue && (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) && + (phy_info & IWL_RX_MPDU_PHY_AMPDU)) { bool toggle_bit = phy_info & IWL_RX_MPDU_PHY_AMPDU_TOGGLE; /* toggle is switched whenever new aggregation starts */ if (toggle_bit != mvm->ampdu_toggle && - he_phy_data != HE_PHY_DATA_INVAL && (he_type == RATE_MCS_HE_TYPE_MU || he_type == RATE_MCS_HE_TYPE_SU)) { rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN; - if (FIELD_GET(IWL_RX_HE_PHY_DELIM_EOF, - he_phy_data)) + if (phy_data->d0 & cpu_to_le32(IWL_RX_PHY_DATA0_HE_DELIM_EOF)) rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT; } } @@ -1261,43 +1265,34 @@ static void iwl_mvm_rx_he(struct iwl_mvm *mvm, struct sk_buff *skb, break; } - he->data5 |= le16_encode_bits(ltf, IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE); - - if (he_type == RATE_MCS_HE_TYPE_SU || - he_type == RATE_MCS_HE_TYPE_EXT_SU) { - u16 val; - - /* LTF syms correspond to streams */ - he->data2 |= - cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA2_NUM_LTF_SYMS_KNOWN); - switch (rx_status->nss) { - case 1: - val = 0; - break; - case 2: - val = 1; - break; - case 3: - case 4: - val = 2; - break; - case 5: - case 6: - val = 3; - break; - case 7: - case 8: - val = 4; - break; - default: - WARN_ONCE(1, "invalid nss: %d\n", - rx_status->nss); - val = 0; - } + he->data5 |= le16_encode_bits(ltf, + IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE); +} - he->data5 |= - le16_encode_bits(val, - IEEE80211_RADIOTAP_HE_DATA5_NUM_LTF_SYMS); +static void iwl_mvm_decode_lsig(struct sk_buff *skb, + struct iwl_mvm_rx_phy_data 
*phy_data) +{ + struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb); + struct ieee80211_radiotap_lsig *lsig; + + switch (phy_data->info_type) { + case IWL_RX_PHY_INFO_TYPE_HT: + case IWL_RX_PHY_INFO_TYPE_VHT_SU: + case IWL_RX_PHY_INFO_TYPE_VHT_MU: + case IWL_RX_PHY_INFO_TYPE_HE_TB_EXT: + case IWL_RX_PHY_INFO_TYPE_HE_SU: + case IWL_RX_PHY_INFO_TYPE_HE_MU: + case IWL_RX_PHY_INFO_TYPE_HE_MU_EXT: + case IWL_RX_PHY_INFO_TYPE_HE_TB: + lsig = skb_put(skb, sizeof(*lsig)); + lsig->data1 = cpu_to_le16(IEEE80211_RADIOTAP_LSIG_DATA1_LENGTH_KNOWN); + lsig->data2 = le16_encode_bits(le32_get_bits(phy_data->d1, + IWL_RX_PHY_DATA1_LSIG_LEN_MASK), + IEEE80211_RADIOTAP_LSIG_DATA2_LENGTH); + rx_status->flag |= RX_FLAG_RADIOTAP_LSIG; + break; + default: + break; } } @@ -1315,6 +1310,10 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, struct sk_buff *skb; u8 crypt_len = 0, channel, energy_a, energy_b; size_t desc_size; + struct iwl_mvm_rx_phy_data phy_data = { + .d4 = desc->phy_data4, + .info_type = IWL_RX_PHY_INFO_TYPE_NONE, + }; if (unlikely(test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))) return; @@ -1326,6 +1325,11 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, energy_a = desc->v3.energy_a; energy_b = desc->v3.energy_b; desc_size = sizeof(*desc); + + phy_data.d0 = desc->v3.phy_data0; + phy_data.d1 = desc->v3.phy_data1; + phy_data.d2 = desc->v3.phy_data2; + phy_data.d3 = desc->v3.phy_data3; } else { rate_n_flags = le32_to_cpu(desc->v1.rate_n_flags); channel = desc->v1.channel; @@ -1333,8 +1337,18 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, energy_a = desc->v1.energy_a; energy_b = desc->v1.energy_b; desc_size = IWL_RX_DESC_SIZE_V1; + + phy_data.d0 = desc->v1.phy_data0; + phy_data.d1 = desc->v1.phy_data1; + phy_data.d2 = desc->v1.phy_data2; + phy_data.d3 = desc->v1.phy_data3; } + if (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) + phy_data.info_type = + le32_get_bits(phy_data.d1, + IWL_RX_PHY_DATA1_INFO_TYPE_MASK); + hdr = (void *)(pkt->data + desc_size); /* Dont use dev_alloc_skb(), we'll have enough headroom once * ieee80211_hdr pulled. 
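A minimal sketch of the descriptor-version split used just above: devices from the 22560 family onward use the larger v3 RX descriptor layout, older devices use v1, and the relevant words are copied into the version-neutral struct iwl_mvm_rx_phy_data so the decode helpers never see the difference. Illustrative helper, not driver code:

	static void example_fill_phy_data(struct iwl_mvm *mvm,
					  struct iwl_rx_mpdu_desc *desc,
					  struct iwl_mvm_rx_phy_data *phy_data)
	{
		if (mvm->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) {
			phy_data->d0 = desc->v3.phy_data0;
			phy_data->d1 = desc->v3.phy_data1;
		} else {
			phy_data->d0 = desc->v1.phy_data0;
			phy_data->d1 = desc->v1.phy_data1;
		}
	}
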
@@ -1373,7 +1387,10 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, } if (rate_n_flags & RATE_MCS_HE_MSK) - iwl_mvm_rx_he(mvm, skb, desc, rate_n_flags, phy_info, queue); + iwl_mvm_rx_he(mvm, skb, &phy_data, rate_n_flags, + phy_info, queue); + + iwl_mvm_decode_lsig(skb, &phy_data); rx_status = IEEE80211_SKB_RXCB(skb); @@ -1422,12 +1439,6 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi, /* update aggregation data for monitor sake on default queue */ if (!queue && (phy_info & IWL_RX_MPDU_PHY_AMPDU)) { bool toggle_bit = phy_info & IWL_RX_MPDU_PHY_AMPDU_TOGGLE; - u64 he_phy_data; - - if (mvm->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) - he_phy_data = le64_to_cpu(desc->v3.he_phy_data); - else - he_phy_data = le64_to_cpu(desc->v1.he_phy_data); rx_status->flag |= RX_FLAG_AMPDU_DETAILS; rx_status->ampdu_reference = mvm->ampdu_ref; @@ -1596,6 +1607,129 @@ out: rcu_read_unlock(); } +void iwl_mvm_rx_monitor_ndp(struct iwl_mvm *mvm, struct napi_struct *napi, + struct iwl_rx_cmd_buffer *rxb, int queue) +{ + struct ieee80211_rx_status *rx_status; + struct iwl_rx_packet *pkt = rxb_addr(rxb); + struct iwl_rx_no_data *desc = (void *)pkt->data; + u32 rate_n_flags = le32_to_cpu(desc->rate); + u32 gp2_on_air_rise = le32_to_cpu(desc->on_air_rise_time); + u32 rssi = le32_to_cpu(desc->rssi); + u32 info_type = le32_to_cpu(desc->info) & RX_NO_DATA_INFO_TYPE_MSK; + u16 phy_info = IWL_RX_MPDU_PHY_TSF_OVERLOAD; + struct ieee80211_sta *sta = NULL; + struct sk_buff *skb; + u8 channel, energy_a, energy_b; + struct iwl_mvm_rx_phy_data phy_data = { + .d0 = desc->phy_info[0], + .info_type = IWL_RX_PHY_INFO_TYPE_NONE, + }; + + if (unlikely(test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))) + return; + + /* Currently only NDP type is supported */ + if (info_type != RX_NO_DATA_INFO_TYPE_NDP) + return; + + energy_a = (rssi & RX_NO_DATA_CHAIN_A_MSK) >> RX_NO_DATA_CHAIN_A_POS; + energy_b = (rssi & RX_NO_DATA_CHAIN_B_MSK) >> RX_NO_DATA_CHAIN_B_POS; + channel = (rssi & RX_NO_DATA_CHANNEL_MSK) >> RX_NO_DATA_CHANNEL_POS; + + phy_data.info_type = + le32_get_bits(desc->phy_info[1], + IWL_RX_PHY_DATA1_INFO_TYPE_MASK); + + /* Dont use dev_alloc_skb(), we'll have enough headroom once + * ieee80211_hdr pulled. + */ + skb = alloc_skb(128, GFP_ATOMIC); + if (!skb) { + IWL_ERR(mvm, "alloc_skb failed\n"); + return; + } + + rx_status = IEEE80211_SKB_RXCB(skb); + + /* 0-length PSDU */ + rx_status->flag |= RX_FLAG_NO_PSDU; + /* currently this is the only type for which we get this notif */ + rx_status->zero_length_psdu_type = + IEEE80211_RADIOTAP_ZERO_LEN_PSDU_SOUNDING; + + /* This may be overridden by iwl_mvm_rx_he() to HE_RU */ + switch (rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK) { + case RATE_MCS_CHAN_WIDTH_20: + break; + case RATE_MCS_CHAN_WIDTH_40: + rx_status->bw = RATE_INFO_BW_40; + break; + case RATE_MCS_CHAN_WIDTH_80: + rx_status->bw = RATE_INFO_BW_80; + break; + case RATE_MCS_CHAN_WIDTH_160: + rx_status->bw = RATE_INFO_BW_160; + break; + } + + if (rate_n_flags & RATE_MCS_HE_MSK) + iwl_mvm_rx_he(mvm, skb, &phy_data, rate_n_flags, + phy_info, queue); + + iwl_mvm_decode_lsig(skb, &phy_data); + + rx_status->device_timestamp = gp2_on_air_rise; + rx_status->band = channel > 14 ? 
NL80211_BAND_5GHZ : + NL80211_BAND_2GHZ; + rx_status->freq = ieee80211_channel_to_frequency(channel, + rx_status->band); + iwl_mvm_get_signal_strength(mvm, rx_status, rate_n_flags, energy_a, + energy_b); + + rcu_read_lock(); + + if (!(rate_n_flags & RATE_MCS_CCK_MSK) && + rate_n_flags & RATE_MCS_SGI_MSK) + rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI; + if (rate_n_flags & RATE_HT_MCS_GF_MSK) + rx_status->enc_flags |= RX_ENC_FLAG_HT_GF; + if (rate_n_flags & RATE_MCS_LDPC_MSK) + rx_status->enc_flags |= RX_ENC_FLAG_LDPC; + if (rate_n_flags & RATE_MCS_HT_MSK) { + u8 stbc = (rate_n_flags & RATE_MCS_STBC_MSK) >> + RATE_MCS_STBC_POS; + rx_status->encoding = RX_ENC_HT; + rx_status->rate_idx = rate_n_flags & RATE_HT_MCS_INDEX_MSK; + rx_status->enc_flags |= stbc << RX_ENC_FLAG_STBC_SHIFT; + } else if (rate_n_flags & RATE_MCS_VHT_MSK) { + u8 stbc = (rate_n_flags & RATE_MCS_STBC_MSK) >> + RATE_MCS_STBC_POS; + rx_status->nss = + ((rate_n_flags & RATE_VHT_MCS_NSS_MSK) >> + RATE_VHT_MCS_NSS_POS) + 1; + rx_status->rate_idx = rate_n_flags & RATE_VHT_MCS_RATE_CODE_MSK; + rx_status->encoding = RX_ENC_VHT; + rx_status->enc_flags |= stbc << RX_ENC_FLAG_STBC_SHIFT; + if (rate_n_flags & RATE_MCS_BF_MSK) + rx_status->enc_flags |= RX_ENC_FLAG_BF; + } else if (!(rate_n_flags & RATE_MCS_HE_MSK)) { + int rate = iwl_mvm_legacy_rate_to_mac80211_idx(rate_n_flags, + rx_status->band); + + if (WARN(rate < 0 || rate > 0xFF, + "Invalid rate flags 0x%x, band %d,\n", + rate_n_flags, rx_status->band)) { + kfree_skb(skb); + goto out; + } + rx_status->rate_idx = rate; + } + + iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, queue, sta); +out: + rcu_read_unlock(); +} void iwl_mvm_rx_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi, struct iwl_rx_cmd_buffer *rxb, int queue) { diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c index cfb784fea77b..86d598d5b68f 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c @@ -205,9 +205,7 @@ iwl_mvm_scan_rate_n_flags(struct iwl_mvm *mvm, enum nl80211_band band, { u32 tx_ant; - mvm->scan_last_antenna_idx = - iwl_mvm_next_antenna(mvm, iwl_mvm_get_valid_tx_ant(mvm), - mvm->scan_last_antenna_idx); + iwl_mvm_toggle_tx_ant(mvm, &mvm->scan_last_antenna_idx); tx_ant = BIT(mvm->scan_last_antenna_idx) << RATE_MCS_ANT_POS; if (band == NL80211_BAND_2GHZ && !no_cck) @@ -1895,6 +1893,8 @@ void iwl_mvm_rx_umac_scan_complete_notif(struct iwl_mvm *mvm, mvm->last_ebs_successful = false; mvm->scan_uid_status[uid] = 0; + + iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_SCAN_COMPLETE); } void iwl_mvm_rx_umac_scan_iter_complete_notif(struct iwl_mvm *mvm, diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c index 1887d2b9f185..e28009832da0 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c @@ -319,9 +319,7 @@ static int iwl_mvm_invalidate_sta_queue(struct iwl_mvm *mvm, int queue, if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return -EINVAL; - spin_lock_bh(&mvm->queue_info_lock); sta_id = mvm->queue_info[queue].ra_sta_id; - spin_unlock_bh(&mvm->queue_info_lock); rcu_read_lock(); @@ -372,25 +370,17 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, return -EINVAL; if (iwl_mvm_has_new_tx_api(mvm)) { - spin_lock_bh(&mvm->queue_info_lock); - if (remove_mac_queue) mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac80211_queue); - spin_unlock_bh(&mvm->queue_info_lock); - 
iwl_trans_txq_free(mvm->trans, queue); return 0; } - spin_lock_bh(&mvm->queue_info_lock); - - if (WARN_ON(mvm->queue_info[queue].tid_bitmap == 0)) { - spin_unlock_bh(&mvm->queue_info_lock); + if (WARN_ON(mvm->queue_info[queue].tid_bitmap == 0)) return 0; - } mvm->queue_info[queue].tid_bitmap &= ~BIT(tid); @@ -426,10 +416,8 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, mvm->hw_queue_to_mac80211[queue]); /* If the queue is still enabled - nothing left to do in this func */ - if (cmd.action == SCD_CFG_ENABLE_QUEUE) { - spin_unlock_bh(&mvm->queue_info_lock); + if (cmd.action == SCD_CFG_ENABLE_QUEUE) return 0; - } cmd.sta_id = mvm->queue_info[queue].ra_sta_id; cmd.tid = mvm->queue_info[queue].txq_tid; @@ -448,8 +436,6 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, /* Regardless if this is a reserved TXQ for a STA - mark it as false */ mvm->queue_info[queue].reserved = false; - spin_unlock_bh(&mvm->queue_info_lock); - iwl_trans_txq_disable(mvm->trans, queue, false); ret = iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, flags, sizeof(struct iwl_scd_txq_cfg_cmd), &cmd); @@ -474,10 +460,8 @@ static int iwl_mvm_get_queue_agg_tids(struct iwl_mvm *mvm, int queue) if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return -EINVAL; - spin_lock_bh(&mvm->queue_info_lock); sta_id = mvm->queue_info[queue].ra_sta_id; tid_bitmap = mvm->queue_info[queue].tid_bitmap; - spin_unlock_bh(&mvm->queue_info_lock); sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[sta_id], lockdep_is_held(&mvm->mutex)); @@ -516,10 +500,8 @@ static int iwl_mvm_remove_sta_queue_marking(struct iwl_mvm *mvm, int queue) if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return -EINVAL; - spin_lock_bh(&mvm->queue_info_lock); sta_id = mvm->queue_info[queue].ra_sta_id; tid_bitmap = mvm->queue_info[queue].tid_bitmap; - spin_unlock_bh(&mvm->queue_info_lock); rcu_read_lock(); @@ -545,6 +527,16 @@ static int iwl_mvm_remove_sta_queue_marking(struct iwl_mvm *mvm, int queue) rcu_read_unlock(); + /* + * The TX path may have been using this TXQ_ID from the tid_data, + * so make sure it's no longer running so that we can safely reuse + * this TXQ later. We've set all the TIDs to IWL_MVM_INVALID_QUEUE + * above, but nothing guarantees we've stopped using them. Thus, + * without this, we could get to iwl_mvm_disable_txq() and remove + * the queue while still sending frames to it. + */ + synchronize_net(); + return disable_agg_tids; } @@ -562,11 +554,9 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue, if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return -EINVAL; - spin_lock_bh(&mvm->queue_info_lock); txq_curr_ac = mvm->queue_info[queue].mac80211_ac; sta_id = mvm->queue_info[queue].ra_sta_id; tid = mvm->queue_info[queue].txq_tid; - spin_unlock_bh(&mvm->queue_info_lock); same_sta = sta_id == new_sta_id; @@ -610,7 +600,6 @@ static int iwl_mvm_get_shared_queue(struct iwl_mvm *mvm, * by the inactivity checker. */ lockdep_assert_held(&mvm->mutex); - lockdep_assert_held(&mvm->queue_info_lock); if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return -EINVAL; @@ -696,10 +685,7 @@ static int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, * value 3 and VO with value 0, so to check if ac X is lower than ac Y * we need to check if the numerical value of X is LARGER than of Y. 
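A minimal sketch of the locking model these sta.c hunks move to: the dedicated queue_info_lock spinlock is removed and queue_info[] is serialized by mvm->mutex alone, so accessors assert the mutex instead of taking a lock. Hypothetical helper for illustration:

	static u8 example_queue_sta_id(struct iwl_mvm *mvm, int queue)
	{
		lockdep_assert_held(&mvm->mutex);

		return mvm->queue_info[queue].ra_sta_id;
	}
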
*/ - spin_lock_bh(&mvm->queue_info_lock); if (ac <= mvm->queue_info[queue].mac80211_ac && !force) { - spin_unlock_bh(&mvm->queue_info_lock); - IWL_DEBUG_TX_QUEUES(mvm, "No redirection needed on TXQ #%d\n", queue); @@ -711,7 +697,6 @@ static int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, cmd.tid = mvm->queue_info[queue].txq_tid; mq = mvm->hw_queue_to_mac80211[queue]; shared_queue = hweight16(mvm->queue_info[queue].tid_bitmap) > 1; - spin_unlock_bh(&mvm->queue_info_lock); IWL_DEBUG_TX_QUEUES(mvm, "Redirecting TXQ #%d to FIFO #%d\n", queue, iwl_mvm_ac_to_tx_fifo[ac]); @@ -737,9 +722,7 @@ static int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, iwl_trans_txq_enable_cfg(mvm->trans, queue, ssn, NULL, wdg_timeout); /* Update the TID "owner" of the queue */ - spin_lock_bh(&mvm->queue_info_lock); mvm->queue_info[queue].txq_tid = tid; - spin_unlock_bh(&mvm->queue_info_lock); /* TODO: Work-around SCD bug when moving back by multiples of 0x40 */ @@ -748,9 +731,7 @@ static int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid, cmd.sta_id, tid, IWL_FRAME_LIMIT, ssn); /* Update AC marking of the queue */ - spin_lock_bh(&mvm->queue_info_lock); mvm->queue_info[queue].mac80211_ac = ac; - spin_unlock_bh(&mvm->queue_info_lock); /* * Mark queue as shared in transport if shared @@ -773,7 +754,7 @@ static int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id, { int i; - lockdep_assert_held(&mvm->queue_info_lock); + lockdep_assert_held(&mvm->mutex); /* This should not be hit with new TX path */ if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) @@ -853,11 +834,8 @@ static bool iwl_mvm_update_txq_mapping(struct iwl_mvm *mvm, int queue, { bool enable_queue = true; - spin_lock_bh(&mvm->queue_info_lock); - /* Make sure this TID isn't already enabled */ if (mvm->queue_info[queue].tid_bitmap & BIT(tid)) { - spin_unlock_bh(&mvm->queue_info_lock); IWL_ERR(mvm, "Trying to enable TXQ %d with existing TID %d\n", queue, tid); return false; @@ -893,8 +871,6 @@ static bool iwl_mvm_update_txq_mapping(struct iwl_mvm *mvm, int queue, queue, mvm->queue_info[queue].tid_bitmap, mvm->hw_queue_to_mac80211[queue]); - spin_unlock_bh(&mvm->queue_info_lock); - return enable_queue; } @@ -949,9 +925,7 @@ static void iwl_mvm_change_queue_tid(struct iwl_mvm *mvm, int queue) if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return; - spin_lock_bh(&mvm->queue_info_lock); tid_bitmap = mvm->queue_info[queue].tid_bitmap; - spin_unlock_bh(&mvm->queue_info_lock); if (WARN(!tid_bitmap, "TXQ %d has no tids assigned to it\n", queue)) return; @@ -968,9 +942,7 @@ static void iwl_mvm_change_queue_tid(struct iwl_mvm *mvm, int queue) return; } - spin_lock_bh(&mvm->queue_info_lock); mvm->queue_info[queue].txq_tid = tid; - spin_unlock_bh(&mvm->queue_info_lock); IWL_DEBUG_TX_QUEUES(mvm, "Changed TXQ %d ownership to tid %d\n", queue, tid); } @@ -992,10 +964,8 @@ static void iwl_mvm_unshare_queue(struct iwl_mvm *mvm, int queue) lockdep_assert_held(&mvm->mutex); - spin_lock_bh(&mvm->queue_info_lock); sta_id = mvm->queue_info[queue].ra_sta_id; tid_bitmap = mvm->queue_info[queue].tid_bitmap; - spin_unlock_bh(&mvm->queue_info_lock); /* Find TID for queue, and make sure it is the only one on the queue */ tid = find_first_bit(&tid_bitmap, IWL_MAX_TID_COUNT + 1); @@ -1052,9 +1022,7 @@ static void iwl_mvm_unshare_queue(struct iwl_mvm *mvm, int queue) } } - spin_lock_bh(&mvm->queue_info_lock); mvm->queue_info[queue].status = IWL_MVM_QUEUE_READY; - spin_unlock_bh(&mvm->queue_info_lock); } /* @@ -1073,7 +1041,7 @@ static 
bool iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm, int tid; lockdep_assert_held(&mvmsta->lock); - lockdep_assert_held(&mvm->queue_info_lock); + lockdep_assert_held(&mvm->mutex); if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return false; @@ -1174,8 +1142,6 @@ static int iwl_mvm_inactivity_check(struct iwl_mvm *mvm, u8 alloc_for_sta) if (iwl_mvm_has_new_tx_api(mvm)) return -ENOSPC; - spin_lock_bh(&mvm->queue_info_lock); - rcu_read_lock(); /* we skip the CMD queue below by starting at 1 */ @@ -1230,12 +1196,7 @@ static int iwl_mvm_inactivity_check(struct iwl_mvm *mvm, u8 alloc_for_sta) mvmsta = iwl_mvm_sta_from_mac80211(sta); - /* this isn't so nice, but works OK due to the way we loop */ - spin_unlock(&mvm->queue_info_lock); - - /* and we need this locking order */ - spin_lock(&mvmsta->lock); - spin_lock(&mvm->queue_info_lock); + spin_lock_bh(&mvmsta->lock); ret = iwl_mvm_remove_inactive_tids(mvm, mvmsta, i, inactive_tid_bitmap, &unshare_queues, @@ -1243,11 +1204,10 @@ static int iwl_mvm_inactivity_check(struct iwl_mvm *mvm, u8 alloc_for_sta) if (ret >= 0 && free_queue < 0) free_queue = ret; /* only unlock sta lock - we still need the queue info lock */ - spin_unlock(&mvmsta->lock); + spin_unlock_bh(&mvmsta->lock); } rcu_read_unlock(); - spin_unlock_bh(&mvm->queue_info_lock); /* Reconfigure queues requiring reconfiguation */ for_each_set_bit(i, &unshare_queues, IWL_MAX_HW_QUEUES) @@ -1294,10 +1254,9 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, spin_lock_bh(&mvmsta->lock); tfd_queue_mask = mvmsta->tfd_queue_msk; + ssn = IEEE80211_SEQ_TO_SN(mvmsta->tid_data[tid].seq_number); spin_unlock_bh(&mvmsta->lock); - spin_lock_bh(&mvm->queue_info_lock); - /* * Non-QoS, QoS NDP and MGMT frames should go to a MGMT queue, if one * exists @@ -1327,12 +1286,8 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, IWL_MVM_DQA_MIN_DATA_QUEUE, IWL_MVM_DQA_MAX_DATA_QUEUE); if (queue < 0) { - spin_unlock_bh(&mvm->queue_info_lock); - /* try harder - perhaps kill an inactive queue */ queue = iwl_mvm_inactivity_check(mvm, mvmsta->sta_id); - - spin_lock_bh(&mvm->queue_info_lock); } /* No free queue - we'll have to share */ @@ -1353,8 +1308,6 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, if (queue > 0 && !shared_queue) mvm->queue_info[queue].status = IWL_MVM_QUEUE_READY; - spin_unlock_bh(&mvm->queue_info_lock); - /* This shouldn't happen - out of queues */ if (WARN_ON(queue <= 0)) { IWL_ERR(mvm, "No available queues for tid %d on sta_id %d\n", @@ -1388,13 +1341,8 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, } } - ssn = IEEE80211_SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl)); inc_ssn = iwl_mvm_enable_txq(mvm, queue, mac_queue, ssn, &cfg, wdg_timeout); - if (inc_ssn) { - ssn = (ssn + 1) & IEEE80211_SCTL_SEQ; - le16_add_cpu(&hdr->seq_ctrl, 0x10); - } /* * Mark queue as shared in transport if shared @@ -1411,8 +1359,10 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, * this ra/tid in our Tx path since we stop the Qdisc when we * need to allocate a new TFD queue. 
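The inc_ssn bookkeeping in the surrounding hunk (removed above, re-added just below against tid_data->seq_number) relies on the 802.11 seq_ctrl layout: the 12-bit sequence number occupies bits 4..15 (IEEE80211_SCTL_SEQ) with the fragment number in the low 4 bits, so advancing the SN by one means adding 0x10 to the raw field. A worked sketch:

	static inline u16 example_next_seq_ctrl(u16 seq_ctrl)
	{
		/* bump the 12-bit SN (wraps 0xfff -> 0), keep the fragment bits */
		return (seq_ctrl & ~IEEE80211_SCTL_SEQ) |
		       ((seq_ctrl + 0x10) & IEEE80211_SCTL_SEQ);
	}
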
*/ - if (inc_ssn) + if (inc_ssn) { mvmsta->tid_data[tid].seq_number += 0x10; + ssn = (ssn + 1) & IEEE80211_SCTL_SEQ; + } mvmsta->tid_data[tid].txq_id = queue; mvmsta->tfd_queue_msk |= BIT(queue); queue_state = mvmsta->tid_data[tid].state; @@ -1556,8 +1506,6 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm, /* run the general cleanup/unsharing of queues */ iwl_mvm_inactivity_check(mvm, IWL_MVM_INVALID_STA); - spin_lock_bh(&mvm->queue_info_lock); - /* Make sure we have free resources for this STA */ if (vif_type == NL80211_IFTYPE_STATION && !sta->tdls && !mvm->queue_info[IWL_MVM_DQA_BSS_CLIENT_QUEUE].tid_bitmap && @@ -1569,19 +1517,15 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm, IWL_MVM_DQA_MIN_DATA_QUEUE, IWL_MVM_DQA_MAX_DATA_QUEUE); if (queue < 0) { - spin_unlock_bh(&mvm->queue_info_lock); /* try again - this time kick out a queue if needed */ queue = iwl_mvm_inactivity_check(mvm, mvmsta->sta_id); if (queue < 0) { IWL_ERR(mvm, "No available queues for new station\n"); return -ENOSPC; } - spin_lock_bh(&mvm->queue_info_lock); } mvm->queue_info[queue].status = IWL_MVM_QUEUE_RESERVED; - spin_unlock_bh(&mvm->queue_info_lock); - mvmsta->reserved_queue = queue; IWL_DEBUG_TX_QUEUES(mvm, "Reserving data queue #%d for sta_id %d\n", @@ -1822,6 +1766,8 @@ int iwl_mvm_add_sta(struct iwl_mvm *mvm, if (iwl_mvm_has_tlc_offload(mvm)) iwl_mvm_rs_add_sta(mvm, mvm_sta); + iwl_mvm_toggle_tx_ant(mvm, &mvm_sta->tx_ant); + update_fw: ret = iwl_mvm_sta_send_to_fw(mvm, sta, sta_update, sta_flags); if (ret) @@ -2004,18 +1950,14 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm, * is still marked as IWL_MVM_QUEUE_RESERVED, and * should be manually marked as free again */ - spin_lock_bh(&mvm->queue_info_lock); status = &mvm->queue_info[reserved_txq].status; if (WARN((*status != IWL_MVM_QUEUE_RESERVED) && (*status != IWL_MVM_QUEUE_FREE), "sta_id %d reserved txq %d status %d", - sta_id, reserved_txq, *status)) { - spin_unlock_bh(&mvm->queue_info_lock); + sta_id, reserved_txq, *status)) return -EINVAL; - } *status = IWL_MVM_QUEUE_FREE; - spin_unlock_bh(&mvm->queue_info_lock); } if (vif->type == NL80211_IFTYPE_STATION && @@ -2873,8 +2815,6 @@ int iwl_mvm_sta_tx_agg_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, return -EIO; } - spin_lock(&mvm->queue_info_lock); - /* * Note the possible cases: * 1. An enabled TXQ - TXQ needs to become agg'ed @@ -2889,7 +2829,7 @@ int iwl_mvm_sta_tx_agg_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, if (txq_id < 0) { ret = txq_id; IWL_ERR(mvm, "Failed to allocate agg queue\n"); - goto release_locks; + goto out; } /* TXQ hasn't yet been enabled, so mark it only as reserved */ @@ -2900,11 +2840,9 @@ int iwl_mvm_sta_tx_agg_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, IWL_DEBUG_TX_QUEUES(mvm, "Can't start tid %d agg on shared queue!\n", tid); - goto release_locks; + goto out; } - spin_unlock(&mvm->queue_info_lock); - IWL_DEBUG_TX_QUEUES(mvm, "AGG for tid %d will be on queue #%d\n", tid, txq_id); @@ -2935,10 +2873,7 @@ int iwl_mvm_sta_tx_agg_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif, } ret = 0; - goto out; -release_locks: - spin_unlock(&mvm->queue_info_lock); out: spin_unlock_bh(&mvmsta->lock); @@ -3007,9 +2942,7 @@ int iwl_mvm_sta_tx_agg_oper(struct iwl_mvm *mvm, struct ieee80211_vif *vif, cfg.fifo = iwl_mvm_ac_to_tx_fifo[tid_to_mac80211_ac[tid]]; - spin_lock_bh(&mvm->queue_info_lock); queue_status = mvm->queue_info[queue].status; - spin_unlock_bh(&mvm->queue_info_lock); /* Maybe there is no need to even alloc a queue... 
*/ if (mvm->queue_info[queue].status == IWL_MVM_QUEUE_READY) @@ -3055,9 +2988,7 @@ int iwl_mvm_sta_tx_agg_oper(struct iwl_mvm *mvm, struct ieee80211_vif *vif, } /* No need to mark as reserved */ - spin_lock_bh(&mvm->queue_info_lock); mvm->queue_info[queue].status = IWL_MVM_QUEUE_READY; - spin_unlock_bh(&mvm->queue_info_lock); out: /* @@ -3083,10 +3014,11 @@ static void iwl_mvm_unreserve_agg_queue(struct iwl_mvm *mvm, { u16 txq_id = tid_data->txq_id; + lockdep_assert_held(&mvm->mutex); + if (iwl_mvm_has_new_tx_api(mvm)) return; - spin_lock_bh(&mvm->queue_info_lock); /* * The TXQ is marked as reserved only if no traffic came through yet * This means no traffic has been sent on this TID (agg'd or not), so @@ -3098,8 +3030,6 @@ static void iwl_mvm_unreserve_agg_queue(struct iwl_mvm *mvm, mvm->queue_info[txq_id].status = IWL_MVM_QUEUE_FREE; tid_data->txq_id = IWL_MVM_INVALID_QUEUE; } - - spin_unlock_bh(&mvm->queue_info_lock); } int iwl_mvm_sta_tx_agg_stop(struct iwl_mvm *mvm, struct ieee80211_vif *vif, diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h index de1a0a2d8723..d52cd888f77d 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h @@ -397,6 +397,9 @@ struct iwl_mvm_rxq_dup_data { * @ptk_pn: per-queue PTK PN data structures * @dup_data: per queue duplicate packet detection data * @deferred_traffic_tid_map: indication bitmap of deferred traffic per-TID + * @tx_ant: the index of the antenna to use for data tx to this station. Only + * used during connection establishment (e.g. for the 4 way handshake + * exchange). * * When mac80211 creates a station it reserves some space (hw->sta_data_size) * in the structure for use by driver. This structure is placed in that @@ -439,6 +442,7 @@ struct iwl_mvm_sta { u8 agg_tids; u8 sleep_tx_count; u8 avg_energy; + u8 tx_ant; }; u16 iwl_mvm_tid_queued(struct iwl_mvm *mvm, struct iwl_mvm_tid_data *tid_data); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c index ec57682efe54..995fe2a6abbb 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c @@ -302,13 +302,30 @@ void iwl_mvm_set_tx_cmd(struct iwl_mvm *mvm, struct sk_buff *skb, offload_assist)); } +static u32 iwl_mvm_get_tx_ant(struct iwl_mvm *mvm, + struct ieee80211_tx_info *info, + struct ieee80211_sta *sta, __le16 fc) +{ + if (info->band == NL80211_BAND_2GHZ && + !iwl_mvm_bt_coex_is_shared_ant_avail(mvm)) + return mvm->cfg->non_shared_ant << RATE_MCS_ANT_POS; + + if (sta && ieee80211_is_data(fc)) { + struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); + + return BIT(mvmsta->tx_ant) << RATE_MCS_ANT_POS; + } + + return BIT(mvm->mgmt_last_antenna_idx) << RATE_MCS_ANT_POS; +} + static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm, struct ieee80211_tx_info *info, struct ieee80211_sta *sta) { int rate_idx; u8 rate_plcp; - u32 rate_flags; + u32 rate_flags = 0; /* HT rate doesn't make sense for a non data frame */ WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS, @@ -332,13 +349,6 @@ static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm, /* Get PLCP rate for tx_cmd->rate_n_flags */ rate_plcp = iwl_mvm_mac80211_idx_to_hwrate(rate_idx); - if (info->band == NL80211_BAND_2GHZ && - !iwl_mvm_bt_coex_is_shared_ant_avail(mvm)) - rate_flags = mvm->cfg->non_shared_ant << RATE_MCS_ANT_POS; - else - rate_flags = - BIT(mvm->mgmt_last_antenna_idx) << RATE_MCS_ANT_POS; - /* Set CCK flag as needed 
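 * (CCK applies only to the 2.4 GHz legacy rates, 1-11 Mbit/s)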
*/ if ((rate_idx >= IWL_FIRST_CCK_RATE) && (rate_idx <= IWL_LAST_CCK_RATE)) rate_flags |= RATE_MCS_CCK_MSK; @@ -346,6 +356,14 @@ static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm, return (u32)rate_plcp | rate_flags; } +static u32 iwl_mvm_get_tx_rate_n_flags(struct iwl_mvm *mvm, + struct ieee80211_tx_info *info, + struct ieee80211_sta *sta, __le16 fc) +{ + return iwl_mvm_get_tx_rate(mvm, info, sta) | + iwl_mvm_get_tx_ant(mvm, info, sta, fc); +} + /* * Sets the fields in the Tx cmd that are rate related */ @@ -373,20 +391,21 @@ void iwl_mvm_set_tx_cmd_rate(struct iwl_mvm *mvm, struct iwl_tx_cmd *tx_cmd, */ if (ieee80211_is_data(fc) && sta) { - tx_cmd->initial_rate_index = 0; - tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE); - return; + struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); + + if (mvmsta->sta_state >= IEEE80211_STA_AUTHORIZED) { + tx_cmd->initial_rate_index = 0; + tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE); + return; + } } else if (ieee80211_is_back_req(fc)) { tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_ACK | TX_CMD_FLG_BAR); } - mvm->mgmt_last_antenna_idx = - iwl_mvm_next_antenna(mvm, iwl_mvm_get_valid_tx_ant(mvm), - mvm->mgmt_last_antenna_idx); - /* Set the rate in the TX cmd */ - tx_cmd->rate_n_flags = cpu_to_le32(iwl_mvm_get_tx_rate(mvm, info, sta)); + tx_cmd->rate_n_flags = + cpu_to_le32(iwl_mvm_get_tx_rate_n_flags(mvm, info, sta, fc)); } static inline void iwl_mvm_set_tx_cmd_pn(struct ieee80211_tx_info *info, @@ -491,6 +510,8 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb, u16 offload_assist = 0; u32 rate_n_flags = 0; u16 flags = 0; + struct iwl_mvm_sta *mvmsta = sta ? + iwl_mvm_sta_from_mac80211(sta) : NULL; if (ieee80211_is_data_qos(hdr->frame_control)) { u8 *qc = ieee80211_get_qos_ctl(hdr); @@ -510,10 +531,16 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb, if (!info->control.hw_key) flags |= IWL_TX_FLAGS_ENCRYPT_DIS; - /* For data packets rate info comes from the fw */ - if (!(ieee80211_is_data(hdr->frame_control) && sta)) { + /* + * For data packets rate info comes from the fw. Only + * set rate/antenna during connection establishment. + */ + if (sta && (!ieee80211_is_data(hdr->frame_control) || + mvmsta->sta_state < IEEE80211_STA_AUTHORIZED)) { flags |= IWL_TX_FLAGS_CMD_RATE; - rate_n_flags = iwl_mvm_get_tx_rate(mvm, info, sta); + rate_n_flags = + iwl_mvm_get_tx_rate_n_flags(mvm, info, sta, + hdr->frame_control); } if (mvm->trans->cfg->device_family >= @@ -681,22 +708,12 @@ out: int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb) { struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; - struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb); struct ieee80211_tx_info info; struct iwl_device_cmd *dev_cmd; u8 sta_id; int hdrlen = ieee80211_hdrlen(hdr->frame_control); __le16 fc = hdr->frame_control; - int queue; - - /* IWL_MVM_OFFCHANNEL_QUEUE is used for ROC packets that can be used - * in 2 different types of vifs, P2P & STATION. P2P uses the offchannel - * queue. 
STATION (HS2.0) uses the auxiliary context of the FW, - * and hence needs to be sent on the aux queue - */ - if (skb_info->hw_queue == IWL_MVM_OFFCHANNEL_QUEUE && - skb_info->control.vif->type == NL80211_IFTYPE_STATION) - skb_info->hw_queue = mvm->aux_queue; + int queue = -1; memcpy(&info, skb->cb, sizeof(info)); @@ -708,18 +725,6 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb) info.hw_queue != info.control.vif->cab_queue))) return -1; - queue = info.hw_queue; - - /* - * If the interface on which the frame is sent is the P2P_DEVICE - * or an AP/GO interface use the broadcast station associated - * with it; otherwise if the interface is a managed interface - * use the AP station associated with it for multicast traffic - * (this is not possible for unicast packets as a TLDS discovery - * response are sent without a station entry); otherwise use the - * AUX station. - */ - sta_id = mvm->aux_sta.sta_id; if (info.control.vif) { struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(info.control.vif); @@ -734,20 +739,28 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb) queue = iwl_mvm_get_ctrl_vif_queue(mvm, &info, hdr->frame_control); - if (queue < 0) - return -1; - } else if (info.control.vif->type == NL80211_IFTYPE_STATION && - is_multicast_ether_addr(hdr->addr1)) { - u8 ap_sta_id = READ_ONCE(mvmvif->ap_sta_id); - if (ap_sta_id != IWL_MVM_INVALID_STA) - sta_id = ap_sta_id; } else if (info.control.vif->type == NL80211_IFTYPE_MONITOR) { queue = mvm->snif_queue; sta_id = mvm->snif_sta.sta_id; + } else if (info.control.vif->type == NL80211_IFTYPE_STATION && + info.hw_queue == IWL_MVM_OFFCHANNEL_QUEUE) { + /* + * IWL_MVM_OFFCHANNEL_QUEUE is used for ROC packets + * that can be used in 2 different types of vifs, P2P & + * STATION. + * P2P uses the offchannel queue. + * STATION (HS2.0) uses the auxiliary context of the FW, + * and hence needs to be sent on the aux queue. + */ + sta_id = mvm->aux_sta.sta_id; + queue = mvm->aux_queue; } } + if (queue < 0) + return -1; + if (unlikely(ieee80211_is_probe_resp(fc))) iwl_mvm_probe_resp_set_noa(mvm, skb); @@ -1160,11 +1173,11 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb, * If we have timed-out TIDs - schedule the worker that will * reconfig the queues and update them * - * Note that the mvm->queue_info_lock isn't being taken here in - * order to not serialize the TX flow. This isn't dangerous - * because scheduling mvm->add_stream_wk can't ruin the state, - * and if we DON'T schedule it due to some race condition then - * next TX we get here we will. + * Note that the no lock is taken here in order to not serialize + * the TX flow. This isn't dangerous because scheduling + * mvm->add_stream_wk can't ruin the state, and if we DON'T + * schedule it due to some race condition then next TX we get + * here we will. 
*/ if (unlikely(mvm->queue_info[txq_id].status == IWL_MVM_QUEUE_SHARED && @@ -1451,7 +1464,6 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm, iwl_mvm_get_agg_status(mvm, tx_resp); u32 status = le16_to_cpu(agg_status->status); u16 ssn = iwl_mvm_get_scd_ssn(mvm, tx_resp); - struct iwl_mvm_sta *mvmsta; struct sk_buff_head skbs; u8 skb_freed = 0; u8 lq_color; @@ -1501,6 +1513,10 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm, break; } + if ((status & TX_STATUS_MSK) != TX_STATUS_SUCCESS && + ieee80211_is_mgmt(hdr->frame_control)) + iwl_mvm_toggle_tx_ant(mvm, &mvm->mgmt_last_antenna_idx); + /* * If we are freeing multiple frames, mark all the frames * but the first one as acked, since they were acknowledged @@ -1595,11 +1611,15 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm, goto out; if (!IS_ERR(sta)) { - mvmsta = iwl_mvm_sta_from_mac80211(sta); + struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); iwl_mvm_tx_airtime(mvm, mvmsta, le16_to_cpu(tx_resp->wireless_media_time)); + if ((status & TX_STATUS_MSK) != TX_STATUS_SUCCESS && + mvmsta->sta_state < IEEE80211_STA_AUTHORIZED) + iwl_mvm_toggle_tx_ant(mvm, &mvmsta->tx_ant); + if (sta->wme && tid != IWL_MGMT_TID) { struct iwl_mvm_tid_data *tid_data = &mvmsta->tid_data[tid]; @@ -1654,10 +1674,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm, mvmsta->next_status_eosp = false; ieee80211_sta_eosp(sta); } - } else { - mvmsta = NULL; } - out: rcu_read_unlock(); } @@ -1800,8 +1817,6 @@ static void iwl_mvm_tx_reclaim(struct iwl_mvm *mvm, int sta_id, int tid, return; } - spin_lock_bh(&mvmsta->lock); - __skb_queue_head_init(&reclaimed_skbs); /* @@ -1811,6 +1826,8 @@ static void iwl_mvm_tx_reclaim(struct iwl_mvm *mvm, int sta_id, int tid, */ iwl_trans_reclaim(mvm->trans, txq, index, &reclaimed_skbs); + spin_lock_bh(&mvmsta->lock); + tid_data->next_reclaimed = index; iwl_mvm_check_ratid_empty(mvm, sta, tid); diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c index 818e1180bbdd..d116c6ae18ff 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c @@ -285,6 +285,7 @@ u8 iwl_mvm_next_antenna(struct iwl_mvm *mvm, u8 valid, u8 last_idx) return last_idx; } +#define FW_SYSASSERT_CPU_MASK 0xf0000000 static const struct { const char *name; u8 num; @@ -301,6 +302,9 @@ static const struct { { "NMI_INTERRUPT_WDG_RXF_FULL", 0x5C }, { "NMI_INTERRUPT_WDG_NO_RBD_RXF_FULL", 0x64 }, { "NMI_INTERRUPT_HOST", 0x66 }, + { "NMI_INTERRUPT_LMAC_FATAL", 0x70 }, + { "NMI_INTERRUPT_UMAC_FATAL", 0x71 }, + { "NMI_INTERRUPT_OTHER_LMAC_FATAL", 0x73 }, { "NMI_INTERRUPT_ACTION_PT", 0x7C }, { "NMI_INTERRUPT_UNKNOWN", 0x84 }, { "NMI_INTERRUPT_INST_ACTION_PT", 0x86 }, @@ -312,7 +316,7 @@ static const char *desc_lookup(u32 num) int i; for (i = 0; i < ARRAY_SIZE(advanced_lookup) - 1; i++) - if (advanced_lookup[i].num == num) + if (advanced_lookup[i].num == (num & ~FW_SYSASSERT_CPU_MASK)) return advanced_lookup[i].name; /* No entry matches 'num', so it is the last: ADVANCED_SYSASSERT */ @@ -536,6 +540,9 @@ static void iwl_mvm_dump_lmac_error_log(struct iwl_mvm *mvm, u32 base) iwl_trans_read_mem_bytes(trans, base, &table, sizeof(table)); + if (table.valid) + mvm->fwrt.dump.rt_status = table.error_id; + if (ERROR_START_OFFSET <= table.valid * ERROR_ELEM_SIZE) { IWL_ERR(trans, "Start IWL Error Log Dump:\n"); IWL_ERR(trans, "Status: 0x%08lX, count: %d\n", @@ -618,13 +625,9 @@ int iwl_mvm_reconfig_scd(struct iwl_mvm *mvm, int queue, 
int fifo, int sta_id, if (WARN_ON(iwl_mvm_has_new_tx_api(mvm))) return -EINVAL; - spin_lock_bh(&mvm->queue_info_lock); if (WARN(mvm->queue_info[queue].tid_bitmap == 0, - "Trying to reconfig unallocated queue %d\n", queue)) { - spin_unlock_bh(&mvm->queue_info_lock); + "Trying to reconfig unallocated queue %d\n", queue)) return -ENXIO; - } - spin_unlock_bh(&mvm->queue_info_lock); IWL_DEBUG_TX_QUEUES(mvm, "Reconfig SCD for TXQ #%d\n", queue); @@ -768,6 +771,29 @@ bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm) return result; } +void iwl_mvm_send_low_latency_cmd(struct iwl_mvm *mvm, + bool low_latency, u16 mac_id) +{ + struct iwl_mac_low_latency_cmd cmd = { + .mac_id = cpu_to_le32(mac_id) + }; + + if (!fw_has_capa(&mvm->fw->ucode_capa, + IWL_UCODE_TLV_CAPA_DYNAMIC_QUOTA)) + return; + + if (low_latency) { + /* currently we don't care about the direction */ + cmd.low_latency_rx = 1; + cmd.low_latency_tx = 1; + } + + if (iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(LOW_LATENCY_CMD, + MAC_CONF_GROUP, 0), + 0, sizeof(cmd), &cmd)) + IWL_ERR(mvm, "Failed to send low latency command\n"); +} + int iwl_mvm_update_low_latency(struct iwl_mvm *mvm, struct ieee80211_vif *vif, bool low_latency, enum iwl_mvm_low_latency_cause cause) @@ -786,24 +812,7 @@ int iwl_mvm_update_low_latency(struct iwl_mvm *mvm, struct ieee80211_vif *vif, if (low_latency == prev) return 0; - if (fw_has_capa(&mvm->fw->ucode_capa, - IWL_UCODE_TLV_CAPA_DYNAMIC_QUOTA)) { - struct iwl_mac_low_latency_cmd cmd = { - .mac_id = cpu_to_le32(mvmvif->id) - }; - - if (low_latency) { - /* currently we don't care about the direction */ - cmd.low_latency_rx = 1; - cmd.low_latency_tx = 1; - } - res = iwl_mvm_send_cmd_pdu(mvm, - iwl_cmd_id(LOW_LATENCY_CMD, - MAC_CONF_GROUP, 0), - 0, sizeof(cmd), &cmd); - if (res) - IWL_ERR(mvm, "Failed to send low latency command\n"); - } + iwl_mvm_send_low_latency_cmd(mvm, low_latency, mvmvif->id); res = iwl_mvm_update_quotas(mvm, false, NULL); if (res) @@ -1372,6 +1381,7 @@ void iwl_mvm_pause_tcm(struct iwl_mvm *mvm, bool with_cancel) void iwl_mvm_resume_tcm(struct iwl_mvm *mvm) { int mac; + bool low_latency = false; spin_lock_bh(&mvm->tcm.lock); mvm->tcm.ts = jiffies; @@ -1383,10 +1393,23 @@ void iwl_mvm_resume_tcm(struct iwl_mvm *mvm) memset(&mdata->tx.pkts, 0, sizeof(mdata->tx.pkts)); memset(&mdata->rx.airtime, 0, sizeof(mdata->rx.airtime)); memset(&mdata->tx.airtime, 0, sizeof(mdata->tx.airtime)); + + if (mvm->tcm.result.low_latency[mac]) + low_latency = true; } /* The TCM data needs to be reset before "paused" flag changes */ smp_mb(); mvm->tcm.paused = false; + + /* + * if the current load is not low or low latency is active, force + * re-evaluation to cover the case of no traffic. 
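+	 * (with no traffic flowing, nothing else would reschedule the TCM
+	 * work item, so a stale load/latency decision could otherwise
+	 * persist indefinitely)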
+ */ + if (mvm->tcm.result.global_load > IWL_MVM_TRAFFIC_LOW) + schedule_delayed_work(&mvm->tcm.work, MVM_TCM_PERIOD); + else if (low_latency) + schedule_delayed_work(&mvm->tcm.work, MVM_LL_PERIOD); + spin_unlock_bh(&mvm->tcm.lock); } diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c index 05ed4fb88e0c..ceb3aa03d561 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c @@ -94,11 +94,14 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans, cpu_to_le64(trans_pcie->rxq->bd_dma); /* Configure debug, for integration */ - iwl_pcie_alloc_fw_monitor(trans, 0); - prph_sc_ctrl->hwm_cfg.hwm_base_addr = - cpu_to_le64(trans->fw_mon[0].physical); - prph_sc_ctrl->hwm_cfg.hwm_size = - cpu_to_le32(trans->fw_mon[0].size); + if (!trans->ini_valid) + iwl_pcie_alloc_fw_monitor(trans, 0); + if (trans->num_blocks) { + prph_sc_ctrl->hwm_cfg.hwm_base_addr = + cpu_to_le64(trans->fw_mon[0].physical); + prph_sc_ctrl->hwm_cfg.hwm_size = + cpu_to_le32(trans->fw_mon[0].size); + } /* allocate ucode sections in dram and set addresses */ ret = iwl_pcie_init_fw_sec(trans, fw, &prph_scratch->dram); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c index 6f45a0303ddd..7f4aaa810ea1 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c @@ -227,7 +227,7 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans, iwl_enable_interrupts(trans); /* Configure debug, if exists */ - if (trans->dbg_dest_tlv) + if (iwl_pcie_dbg_on(trans)) iwl_pcie_apply_destination(trans); /* kick FW self load */ diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c index 9e015212c2c0..353581ccc01e 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -513,6 +513,56 @@ static const struct pci_device_id iwl_hw_card_ids[] = { {IWL_PCI_DEVICE(0x24FD, 0x9074, iwl8265_2ac_cfg)}, /* 9000 Series */ + {IWL_PCI_DEVICE(0x02F0, 0x0030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0038, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x003C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0060, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0064, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x00A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x00A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0230, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0234, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0238, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x023C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0260, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x0264, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x02A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x02A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x1552, iwl9560_killer_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x2030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x2034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x4030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x4034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x40A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x02F0, 0x4234, iwl9560_2ac_cfg_soc)}, + 
{IWL_PCI_DEVICE(0x02F0, 0x42A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0038, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x003C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0060, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0064, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x00A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x00A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0230, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0234, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0238, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x023C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0260, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x0264, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x02A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x02A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x1552, iwl9560_killer_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x2030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x2034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x4030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x4034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x40A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x4234, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x06F0, 0x42A4, iwl9462_2ac_cfg_soc)}, {IWL_PCI_DEVICE(0x2526, 0x0010, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0014, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0018, iwl9260_2ac_cfg)}, @@ -832,7 +882,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = { {IWL_PCI_DEVICE(0x34F0, 0x0040, iwl22000_2ax_cfg_hr)}, {IWL_PCI_DEVICE(0x34F0, 0x0070, iwl22000_2ax_cfg_hr)}, {IWL_PCI_DEVICE(0x34F0, 0x0078, iwl22000_2ax_cfg_hr)}, - {IWL_PCI_DEVICE(0x34F0, 0x0310, iwl22000_2ac_cfg_jf)}, + {IWL_PCI_DEVICE(0x34F0, 0x0310, iwl22000_2ax_cfg_hr)}, {IWL_PCI_DEVICE(0x40C0, 0x0000, iwl22560_2ax_cfg_su_cdb)}, {IWL_PCI_DEVICE(0x40C0, 0x0010, iwl22560_2ax_cfg_su_cdb)}, {IWL_PCI_DEVICE(0x40c0, 0x0090, iwl22560_2ax_cfg_su_cdb)}, diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h index f9c4c64dee66..d6fc6ce73e0a 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h @@ -378,6 +378,23 @@ struct iwl_tso_hdr_page { u8 *pos; }; +#ifdef CONFIG_IWLWIFI_DEBUGFS +/** + * enum iwl_fw_mon_dbgfs_state - the different states of the monitor_data + * debugfs file + * + * @IWL_FW_MON_DBGFS_STATE_CLOSED: the file is closed. + * @IWL_FW_MON_DBGFS_STATE_OPEN: the file is open. + * @IWL_FW_MON_DBGFS_STATE_DISABLED: the file is disabled, once this state is + * set the file can no longer be used. + */ +enum iwl_fw_mon_dbgfs_state { + IWL_FW_MON_DBGFS_STATE_CLOSED, + IWL_FW_MON_DBGFS_STATE_OPEN, + IWL_FW_MON_DBGFS_STATE_DISABLED, +}; +#endif + /** * enum iwl_shared_irq_flags - level of sharing for irq * @IWL_SHARED_IRQ_NON_RX: interrupt vector serves non rx causes. 
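
The next hunk introduces struct cont_rec, whose prev_wr_ptr/prev_wrap_cnt
pair lets the monitor_data debugfs reader pick up exactly where the last
read stopped while firmware keeps writing into a circular buffer. As a
rough illustration (not part of the patch), here is a minimal user-space
sketch of that resume logic; it mirrors the three branches of
iwl_dbgfs_monitor_data_read() added later in this patch, and every name
in it is hypothetical:

#include <string.h>

struct cont_pos {
	size_t prev_wr_ptr;		/* byte offset of the last copy-out */
	unsigned int prev_wrap_cnt;	/* device wrap count at that time */
};

/* Copy whatever was logged since the last call; returns bytes copied. */
static size_t cont_read(struct cont_pos *pos,
			const unsigned char *buf, size_t buf_sz,
			size_t wr_ptr, unsigned int wrap_cnt,
			unsigned char *out, size_t out_sz)
{
	size_t n;

	if (wrap_cnt == pos->prev_wrap_cnt) {
		/* no wrap: copy the linear region [prev_wr_ptr, wr_ptr) */
		n = wr_ptr - pos->prev_wr_ptr;
		if (n > out_sz)
			n = out_sz;
		memcpy(out, buf + pos->prev_wr_ptr, n);
		pos->prev_wr_ptr += n;
		return n;
	}

	if (wrap_cnt == pos->prev_wrap_cnt + 1 && wr_ptr < pos->prev_wr_ptr) {
		/* one wrap: tail [prev_wr_ptr, buf_sz), then head [0, wr_ptr) */
		size_t tail = buf_sz - pos->prev_wr_ptr;

		if (tail >= out_sz) {
			memcpy(out, buf + pos->prev_wr_ptr, out_sz);
			pos->prev_wr_ptr += out_sz;
			return out_sz;
		}
		memcpy(out, buf + pos->prev_wr_ptr, tail);
		n = wr_ptr;
		if (n > out_sz - tail)
			n = out_sz - tail;
		memcpy(out + tail, buf, n);
		pos->prev_wr_ptr = n;
		pos->prev_wrap_cnt++;
		return tail + n;
	}

	/* fell more than one wrap behind: resync from the buffer start */
	n = wr_ptr < out_sz ? wr_ptr : out_sz;
	memcpy(out, buf, n);
	pos->prev_wr_ptr = n;
	pos->prev_wrap_cnt = wrap_cnt;
	return n;
}

The driver's version additionally rounds copies down to whole u32 words
(see iwl_write_to_user_buf()) and serializes readers against driver
unload via the mutex documented in the struct below.
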
@@ -414,6 +431,26 @@ struct iwl_self_init_dram { int paging_cnt; }; +/** + * struct cont_rec: continuous recording data structure + * @prev_wr_ptr: the last address that was read in monitor_data + * debugfs file + * @prev_wrap_cnt: the wrap count that was used during the last read in + * monitor_data debugfs file + * @state: the state of monitor_data debugfs file as described + * in &iwl_fw_mon_dbgfs_state enum + * @mutex: locked while reading from monitor_data debugfs file + */ +#ifdef CONFIG_IWLWIFI_DEBUGFS +struct cont_rec { + u32 prev_wr_ptr; + u32 prev_wrap_cnt; + u8 state; + /* Used to sync monitor_data debugfs file with driver unload flow */ + struct mutex mutex; +}; +#endif + /** * struct iwl_trans_pcie - PCIe transport specific data * @rxq: all the RX queue data @@ -451,6 +488,9 @@ struct iwl_self_init_dram { * @reg_lock: protect hw register access * @mutex: to protect stop_device / start_fw / start_hw * @cmd_in_flight: true when we have a host command in flight +#ifdef CONFIG_IWLWIFI_DEBUGFS + * @fw_mon_data: fw continuous recording data +#endif * @msix_entries: array of MSI-X entries * @msix_enabled: true if managed to enable MSI-X * @shared_vec_mask: the type of causes the shared vector handles @@ -538,6 +578,10 @@ struct iwl_trans_pcie { bool cmd_hold_nic_awake; bool ref_cmd_in_flight; +#ifdef CONFIG_IWLWIFI_DEBUGFS + struct cont_rec fw_mon_data; +#endif + struct msix_entry msix_entries[IWL_MAX_RX_HW_QUEUES]; bool msix_enabled; u8 shared_vec_mask; @@ -965,6 +1009,11 @@ static inline void __iwl_trans_pcie_set_bit(struct iwl_trans *trans, __iwl_trans_pcie_set_bits_mask(trans, reg, mask, mask); } +static inline bool iwl_pcie_dbg_on(struct iwl_trans *trans) +{ + return (trans->dbg_dest_tlv || trans->ini_valid); +} + void iwl_trans_pcie_rf_kill(struct iwl_trans *trans, bool state); void iwl_trans_pcie_dump_regs(struct iwl_trans *trans); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c index 5bafb3f46eb8..f97aea5ffc44 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c @@ -71,6 +71,7 @@ #include #include #include +#include #include "iwl-drv.h" #include "iwl-trans.h" @@ -923,6 +924,20 @@ void iwl_pcie_apply_destination(struct iwl_trans *trans) const struct iwl_fw_dbg_dest_tlv_v1 *dest = trans->dbg_dest_tlv; int i; + if (trans->ini_valid) { + if (!trans->num_blocks) + return; + + iwl_write_prph(trans, MON_BUFF_BASE_ADDR_VER2, + trans->fw_mon[0].physical >> + MON_BUFF_SHIFT_VER2); + iwl_write_prph(trans, MON_BUFF_END_ADDR_VER2, + (trans->fw_mon[0].physical + + trans->fw_mon[0].size - 256) >> + MON_BUFF_SHIFT_VER2); + return; + } + IWL_INFO(trans, "Applying debug destination %s\n", get_fw_dbg_mode_string(dest->monitor_mode)); @@ -1025,7 +1040,7 @@ static int iwl_pcie_load_given_ucode(struct iwl_trans *trans, (trans->fw_mon[0].physical + trans->fw_mon[0].size) >> 4); } - } else if (trans->dbg_dest_tlv) { + } else if (iwl_pcie_dbg_on(trans)) { iwl_pcie_apply_destination(trans); } @@ -1046,7 +1061,7 @@ static int iwl_pcie_load_given_ucode_8000(struct iwl_trans *trans, IWL_DEBUG_FW(trans, "working with %s CPU\n", image->is_dual_cpus ? 
"Dual" : "Single"); - if (trans->dbg_dest_tlv) + if (iwl_pcie_dbg_on(trans)) iwl_pcie_apply_destination(trans); IWL_DEBUG_POWER(trans, "Original WFPM value = 0x%08X\n", @@ -1729,6 +1744,7 @@ static int iwl_pcie_init_msix_handler(struct pci_dev *pdev, static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + u32 hpm; int err; lockdep_assert_held(&trans_pcie->mutex); @@ -1739,6 +1755,17 @@ static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power) return err; } + hpm = iwl_trans_read_prph(trans, HPM_DEBUG); + if (hpm != 0xa5a5a5a0 && (hpm & PERSISTENCE_BIT)) { + if (iwl_trans_read_prph(trans, PREG_PRPH_WPROT_0) & + PREG_WFPM_ACCESS) { + IWL_ERR(trans, + "Error, can not clear persistence bit\n"); + return -EPERM; + } + iwl_trans_write_prph(trans, HPM_DEBUG, hpm & ~PERSISTENCE_BIT); + } + iwl_trans_pcie_sw_reset(trans); err = iwl_pcie_apm_init(trans); @@ -2697,6 +2724,137 @@ static ssize_t iwl_dbgfs_rfkill_write(struct file *file, return count; } +static int iwl_dbgfs_monitor_data_open(struct inode *inode, + struct file *file) +{ + struct iwl_trans *trans = inode->i_private; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + + if (!trans->dbg_dest_tlv || + trans->dbg_dest_tlv->monitor_mode != EXTERNAL_MODE) { + IWL_ERR(trans, "Debug destination is not set to DRAM\n"); + return -ENOENT; + } + + if (trans_pcie->fw_mon_data.state != IWL_FW_MON_DBGFS_STATE_CLOSED) + return -EBUSY; + + trans_pcie->fw_mon_data.state = IWL_FW_MON_DBGFS_STATE_OPEN; + return simple_open(inode, file); +} + +static int iwl_dbgfs_monitor_data_release(struct inode *inode, + struct file *file) +{ + struct iwl_trans_pcie *trans_pcie = + IWL_TRANS_GET_PCIE_TRANS(inode->i_private); + + if (trans_pcie->fw_mon_data.state == IWL_FW_MON_DBGFS_STATE_OPEN) + trans_pcie->fw_mon_data.state = IWL_FW_MON_DBGFS_STATE_CLOSED; + return 0; +} + +static bool iwl_write_to_user_buf(char __user *user_buf, ssize_t count, + void *buf, ssize_t *size, + ssize_t *bytes_copied) +{ + int buf_size_left = count - *bytes_copied; + + buf_size_left = buf_size_left - (buf_size_left % sizeof(u32)); + if (*size > buf_size_left) + *size = buf_size_left; + + *size -= copy_to_user(user_buf, buf, *size); + *bytes_copied += *size; + + if (buf_size_left == *size) + return true; + return false; +} + +static ssize_t iwl_dbgfs_monitor_data_read(struct file *file, + char __user *user_buf, + size_t count, loff_t *ppos) +{ + struct iwl_trans *trans = file->private_data; + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + void *cpu_addr = (void *)trans->fw_mon[0].block, *curr_buf; + struct cont_rec *data = &trans_pcie->fw_mon_data; + u32 write_ptr_addr, wrap_cnt_addr, write_ptr, wrap_cnt; + ssize_t size, bytes_copied = 0; + bool b_full; + + if (trans->dbg_dest_tlv) { + write_ptr_addr = + le32_to_cpu(trans->dbg_dest_tlv->write_ptr_reg); + wrap_cnt_addr = le32_to_cpu(trans->dbg_dest_tlv->wrap_count); + } else { + write_ptr_addr = MON_BUFF_WRPTR; + wrap_cnt_addr = MON_BUFF_CYCLE_CNT; + } + + if (unlikely(!trans->dbg_rec_on)) + return 0; + + mutex_lock(&data->mutex); + if (data->state == + IWL_FW_MON_DBGFS_STATE_DISABLED) { + mutex_unlock(&data->mutex); + return 0; + } + + /* write_ptr position in bytes rather then DW */ + write_ptr = iwl_read_prph(trans, write_ptr_addr) * sizeof(u32); + wrap_cnt = iwl_read_prph(trans, wrap_cnt_addr); + + if (data->prev_wrap_cnt == wrap_cnt) { + size = write_ptr - data->prev_wr_ptr; + 
curr_buf = cpu_addr + data->prev_wr_ptr; + b_full = iwl_write_to_user_buf(user_buf, count, + curr_buf, &size, + &bytes_copied); + data->prev_wr_ptr += size; + + } else if (data->prev_wrap_cnt == wrap_cnt - 1 && + write_ptr < data->prev_wr_ptr) { + size = trans->fw_mon[0].size - data->prev_wr_ptr; + curr_buf = cpu_addr + data->prev_wr_ptr; + b_full = iwl_write_to_user_buf(user_buf, count, + curr_buf, &size, + &bytes_copied); + data->prev_wr_ptr += size; + + if (!b_full) { + size = write_ptr; + b_full = iwl_write_to_user_buf(user_buf, count, + cpu_addr, &size, + &bytes_copied); + data->prev_wr_ptr = size; + data->prev_wrap_cnt++; + } + } else { + if (data->prev_wrap_cnt == wrap_cnt - 1 && + write_ptr > data->prev_wr_ptr) + IWL_WARN(trans, + "write pointer passed previous write pointer, start copying from the beginning\n"); + else if (!unlikely(data->prev_wrap_cnt == 0 && + data->prev_wr_ptr == 0)) + IWL_WARN(trans, + "monitor data is out of sync, start copying from the beginning\n"); + + size = write_ptr; + b_full = iwl_write_to_user_buf(user_buf, count, + cpu_addr, &size, + &bytes_copied); + data->prev_wr_ptr = size; + data->prev_wrap_cnt = wrap_cnt; + } + + mutex_unlock(&data->mutex); + + return bytes_copied; +} + DEBUGFS_READ_WRITE_FILE_OPS(interrupt); DEBUGFS_READ_FILE_OPS(fh_reg); DEBUGFS_READ_FILE_OPS(rx_queue); @@ -2704,6 +2862,12 @@ DEBUGFS_READ_FILE_OPS(tx_queue); DEBUGFS_WRITE_FILE_OPS(csr); DEBUGFS_READ_WRITE_FILE_OPS(rfkill); +static const struct file_operations iwl_dbgfs_monitor_data_ops = { + .read = iwl_dbgfs_monitor_data_read, + .open = iwl_dbgfs_monitor_data_open, + .release = iwl_dbgfs_monitor_data_release, +}; + /* Create the debugfs files and directories */ int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans) { @@ -2715,12 +2879,23 @@ int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans) DEBUGFS_ADD_FILE(csr, dir, 0200); DEBUGFS_ADD_FILE(fh_reg, dir, 0400); DEBUGFS_ADD_FILE(rfkill, dir, 0600); + DEBUGFS_ADD_FILE(monitor_data, dir, 0400); return 0; err: IWL_ERR(trans, "failed to create the trans debugfs entry\n"); return -ENOMEM; } + +static void iwl_trans_pcie_debugfs_cleanup(struct iwl_trans *trans) +{ + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); + struct cont_rec *data = &trans_pcie->fw_mon_data; + + mutex_lock(&data->mutex); + data->state = IWL_FW_MON_DBGFS_STATE_DISABLED; + mutex_unlock(&data->mutex); +} #endif /*CONFIG_IWLWIFI_DEBUGFS */ static u32 iwl_trans_pcie_get_cmdlen(struct iwl_trans *trans, void *tfd) @@ -2854,6 +3029,34 @@ iwl_trans_pci_dump_marbh_monitor(struct iwl_trans *trans, return monitor_len; } +static void +iwl_trans_pcie_dump_pointers(struct iwl_trans *trans, + struct iwl_fw_error_dump_fw_mon *fw_mon_data) +{ + u32 base, write_ptr, wrap_cnt; + + /* If there was a dest TLV - use the values from there */ + if (trans->ini_valid) { + base = MON_BUFF_BASE_ADDR_VER2; + write_ptr = MON_BUFF_WRPTR_VER2; + wrap_cnt = MON_BUFF_CYCLE_CNT_VER2; + } else if (trans->dbg_dest_tlv) { + write_ptr = le32_to_cpu(trans->dbg_dest_tlv->write_ptr_reg); + wrap_cnt = le32_to_cpu(trans->dbg_dest_tlv->wrap_count); + base = le32_to_cpu(trans->dbg_dest_tlv->base_reg); + } else { + base = MON_BUFF_BASE_ADDR; + write_ptr = MON_BUFF_WRPTR; + wrap_cnt = MON_BUFF_CYCLE_CNT; + } + fw_mon_data->fw_mon_wr_ptr = + cpu_to_le32(iwl_read_prph(trans, write_ptr)); + fw_mon_data->fw_mon_cycle_cnt = + cpu_to_le32(iwl_read_prph(trans, wrap_cnt)); + fw_mon_data->fw_mon_base_ptr = + cpu_to_le32(iwl_read_prph(trans, base)); +} + static u32 
iwl_trans_pcie_dump_monitor(struct iwl_trans *trans, struct iwl_fw_error_dump_data **data, @@ -2863,30 +3066,14 @@ iwl_trans_pcie_dump_monitor(struct iwl_trans *trans, if ((trans->num_blocks && trans->cfg->device_family == IWL_DEVICE_FAMILY_7000) || - trans->dbg_dest_tlv) { + (trans->dbg_dest_tlv && !trans->ini_valid) || + (trans->ini_valid && trans->num_blocks)) { struct iwl_fw_error_dump_fw_mon *fw_mon_data; - u32 base, write_ptr, wrap_cnt; - - /* If there was a dest TLV - use the values from there */ - if (trans->dbg_dest_tlv) { - write_ptr = - le32_to_cpu(trans->dbg_dest_tlv->write_ptr_reg); - wrap_cnt = le32_to_cpu(trans->dbg_dest_tlv->wrap_count); - base = le32_to_cpu(trans->dbg_dest_tlv->base_reg); - } else { - base = MON_BUFF_BASE_ADDR; - write_ptr = MON_BUFF_WRPTR; - wrap_cnt = MON_BUFF_CYCLE_CNT; - } (*data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_FW_MONITOR); fw_mon_data = (void *)(*data)->data; - fw_mon_data->fw_mon_wr_ptr = - cpu_to_le32(iwl_read_prph(trans, write_ptr)); - fw_mon_data->fw_mon_cycle_cnt = - cpu_to_le32(iwl_read_prph(trans, wrap_cnt)); - fw_mon_data->fw_mon_base_ptr = - cpu_to_le32(iwl_read_prph(trans, base)); + + iwl_trans_pcie_dump_pointers(trans, fw_mon_data); len += sizeof(**data) + sizeof(*fw_mon_data); if (trans->num_blocks) { @@ -2896,6 +3083,7 @@ iwl_trans_pcie_dump_monitor(struct iwl_trans *trans, monitor_len = trans->fw_mon[0].size; } else if (trans->dbg_dest_tlv->monitor_mode == SMEM_MODE) { + u32 base = le32_to_cpu(fw_mon_data->fw_mon_base_ptr); /* * Update pointers to reflect actual values after * shifting @@ -2978,7 +3166,7 @@ static int iwl_trans_get_fw_monitor_len(struct iwl_trans *trans, int *len) static struct iwl_trans_dump_data *iwl_trans_pcie_dump_data(struct iwl_trans *trans, - const struct iwl_fw_dbg_trigger_tlv *trigger) + u32 dump_mask) { struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); struct iwl_fw_error_dump_data *data; @@ -2990,7 +3178,10 @@ static struct iwl_trans_dump_data int i, ptr; bool dump_rbs = test_bit(STATUS_FW_ERROR, &trans->status) && !trans->cfg->mq_rx_supported && - trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_RB); + dump_mask & BIT(IWL_FW_ERROR_DUMP_RB); + + if (!dump_mask) + return NULL; /* transport dump header */ len = sizeof(*dump_data); @@ -3002,11 +3193,7 @@ static struct iwl_trans_dump_data /* FW monitor */ monitor_len = iwl_trans_get_fw_monitor_len(trans, &len); - if (trigger && (trigger->mode & IWL_FW_DBG_TRIGGER_MONITOR_ONLY)) { - if (!(trans->dbg_dump_mask & - BIT(IWL_FW_ERROR_DUMP_FW_MONITOR))) - return NULL; - + if (dump_mask == BIT(IWL_FW_ERROR_DUMP_FW_MONITOR)) { dump_data = vzalloc(len); if (!dump_data) return NULL; @@ -3019,11 +3206,11 @@ static struct iwl_trans_dump_data } /* CSR registers */ - if (trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_CSR)) + if (dump_mask & BIT(IWL_FW_ERROR_DUMP_CSR)) len += sizeof(*data) + IWL_CSR_TO_DUMP; /* FH registers */ - if (trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_FH_REGS)) { + if (dump_mask & BIT(IWL_FW_ERROR_DUMP_FH_REGS)) { if (trans->cfg->gen2) len += sizeof(*data) + (FH_MEM_UPPER_BOUND_GEN2 - @@ -3048,8 +3235,7 @@ static struct iwl_trans_dump_data } /* Paged memory for gen2 HW */ - if (trans->cfg->gen2 && - trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_PAGING)) + if (trans->cfg->gen2 && dump_mask & BIT(IWL_FW_ERROR_DUMP_PAGING)) for (i = 0; i < trans_pcie->init_dram.paging_cnt; i++) len += sizeof(*data) + sizeof(struct iwl_fw_error_dump_paging) + @@ -3062,7 +3248,7 @@ static struct iwl_trans_dump_data len = 0; data = (void *)dump_data->data; 
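+	/* every section from here on is emitted only if its bit is set in
+	 * the caller-supplied dump_mask
+	 */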
- if (trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_TXCMD)) { + if (dump_mask & BIT(IWL_FW_ERROR_DUMP_TXCMD)) { u16 tfd_size = trans_pcie->tfd_size; data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_TXCMD); @@ -3096,16 +3282,15 @@ static struct iwl_trans_dump_data data = iwl_fw_error_next_data(data); } - if (trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_CSR)) + if (dump_mask & BIT(IWL_FW_ERROR_DUMP_CSR)) len += iwl_trans_pcie_dump_csr(trans, &data); - if (trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_FH_REGS)) + if (dump_mask & BIT(IWL_FW_ERROR_DUMP_FH_REGS)) len += iwl_trans_pcie_fh_regs_dump(trans, &data); if (dump_rbs) len += iwl_trans_pcie_dump_rbs(trans, &data, num_rbs); /* Paged memory for gen2 HW */ - if (trans->cfg->gen2 && - trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_PAGING)) { + if (trans->cfg->gen2 && dump_mask & BIT(IWL_FW_ERROR_DUMP_PAGING)) { for (i = 0; i < trans_pcie->init_dram.paging_cnt; i++) { struct iwl_fw_error_dump_paging *paging; dma_addr_t addr = @@ -3125,7 +3310,7 @@ static struct iwl_trans_dump_data len += sizeof(*data) + sizeof(*paging) + page_len; } } - if (trans->dbg_dump_mask & BIT(IWL_FW_ERROR_DUMP_FW_MONITOR)) + if (dump_mask & BIT(IWL_FW_ERROR_DUMP_FW_MONITOR)) len += iwl_trans_pcie_dump_monitor(trans, &data, monitor_len); dump_data->len = len; @@ -3202,6 +3387,9 @@ static const struct iwl_trans_ops trans_ops_pcie = { .freeze_txq_timer = iwl_trans_pcie_freeze_txq_timer, .block_txq_ptrs = iwl_trans_pcie_block_txq_ptrs, +#ifdef CONFIG_IWLWIFI_DEBUGFS + .debugfs_cleanup = iwl_trans_pcie_debugfs_cleanup, +#endif }; static const struct iwl_trans_ops trans_ops_pcie_gen2 = { @@ -3221,6 +3409,9 @@ static const struct iwl_trans_ops trans_ops_pcie_gen2 = { .txq_free = iwl_trans_pcie_dyn_txq_free, .wait_txq_empty = iwl_trans_pcie_wait_txq_empty, .rxq_dma_data = iwl_trans_pcie_rxq_dma_data, +#ifdef CONFIG_IWLWIFI_DEBUGFS + .debugfs_cleanup = iwl_trans_pcie_debugfs_cleanup, +#endif }; struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, @@ -3392,8 +3583,26 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, #if IS_ENABLED(CONFIG_IWLMVM) trans->hw_rf_id = iwl_read32(trans, CSR_HW_RF_ID); - if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == - CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR)) { + if (cfg == &iwl22000_2ax_cfg_hr) { + if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == + CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR)) { + trans->cfg = &iwl22000_2ax_cfg_hr; + } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == + CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_JF)) { + trans->cfg = &iwl22000_2ax_cfg_jf; + } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == + CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HRCDB)) { + IWL_ERR(trans, "RF ID HRCDB is not supported\n"); + ret = -EINVAL; + goto out_no_pci; + } else { + IWL_ERR(trans, "Unrecognized RF ID 0x%08x\n", + CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id)); + ret = -EINVAL; + goto out_no_pci; + } + } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == + CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR)) { u32 hw_status; hw_status = iwl_read_prph(trans, UMAG_GEN_HW_STATUS); @@ -3454,6 +3663,11 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, trans->runtime_pm_mode = IWL_PLAT_PM_MODE_DISABLED; #endif /* CONFIG_IWLWIFI_PCIE_RTPM */ +#ifdef CONFIG_IWLWIFI_DEBUGFS + trans_pcie->fw_mon_data.state = IWL_FW_MON_DBGFS_STATE_CLOSED; + mutex_init(&trans_pcie->fw_mon_data.mutex); +#endif + return trans; out_free_ict: diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c 
b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c index e880f69eac26..156ca1b1f621 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c @@ -238,7 +238,7 @@ static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans, { #ifdef CONFIG_INET struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); - struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; + struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload; struct ieee80211_hdr *hdr = (void *)skb->data; unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; unsigned int mss = skb_shinfo(skb)->gso_size; @@ -583,18 +583,6 @@ int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, spin_lock(&txq->lock); - if (trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) { - struct iwl_tx_cmd_gen3 *tx_cmd_gen3 = - (void *)dev_cmd->payload; - - cmd_len = le16_to_cpu(tx_cmd_gen3->len); - } else { - struct iwl_tx_cmd_gen2 *tx_cmd_gen2 = - (void *)dev_cmd->payload; - - cmd_len = le16_to_cpu(tx_cmd_gen2->len); - } - if (iwl_queue_space(trans, txq) < txq->high_mark) { iwl_stop_queue(trans, txq); @@ -632,6 +620,18 @@ int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, return -1; } + if (trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) { + struct iwl_tx_cmd_gen3 *tx_cmd_gen3 = + (void *)dev_cmd->payload; + + cmd_len = le16_to_cpu(tx_cmd_gen3->len); + } else { + struct iwl_tx_cmd_gen2 *tx_cmd_gen2 = + (void *)dev_cmd->payload; + + cmd_len = le16_to_cpu(tx_cmd_gen2->len); + } + /* Set up entry for this TFD in Tx byte-count array */ iwl_pcie_gen2_update_byte_tbl(trans_pcie, txq, cmd_len, iwl_pcie_gen2_get_num_tbs(trans, tfd)); @@ -1228,8 +1228,7 @@ int iwl_trans_pcie_txq_alloc_response(struct iwl_trans *trans, /* Place first TFD at index corresponding to start sequence number */ txq->read_ptr = wr_ptr; txq->write_ptr = wr_ptr; - iwl_write_direct32(trans, HBUS_TARG_WRPTR, - (txq->write_ptr) | (qid << 16)); + IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d\n", qid); iwl_free_resp(hcmd); diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c index 87b7225fe289..ee990a7a5411 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c @@ -1160,10 +1160,11 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn, */ iwl_trans_tx(trans, skb, dev_cmd_ptr, txq_id); } - spin_lock_bh(&txq->lock); if (iwl_queue_space(trans, txq) > txq->low_mark) iwl_wake_queue(trans, txq); + + spin_lock_bh(&txq->lock); } if (txq->read_ptr == txq->write_ptr) { @@ -1245,11 +1246,11 @@ void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx) if (idx >= trans->cfg->base_params->max_tfd_queue_size || (!iwl_queue_used(txq, idx))) { - IWL_ERR(trans, - "%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n", - __func__, txq_id, idx, - trans->cfg->base_params->max_tfd_queue_size, - txq->write_ptr, txq->read_ptr); + WARN_ONCE(test_bit(txq_id, trans_pcie->queue_used), + "%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n", + __func__, txq_id, idx, + trans->cfg->base_params->max_tfd_queue_size, + txq->write_ptr, txq->read_ptr); return; } diff --git a/drivers/net/wireless/intersil/hostap/hostap_main.c b/drivers/net/wireless/intersil/hostap/hostap_main.c index 012930d35434..b0e7c0a0617e 100644 --- a/drivers/net/wireless/intersil/hostap/hostap_main.c +++ 
b/drivers/net/wireless/intersil/hostap/hostap_main.c @@ -690,7 +690,7 @@ static int prism2_open(struct net_device *dev) /* Master radio interface is needed for all operation, so open * it automatically when any virtual net_device is opened. */ local->master_dev_auto_open = 1; - dev_open(local->dev); + dev_open(local->dev, NULL); } netif_device_attach(dev); diff --git a/drivers/net/wireless/intersil/orinoco/mic.c b/drivers/net/wireless/intersil/orinoco/mic.c index 08bc7822f820..709d9ab3e7bc 100644 --- a/drivers/net/wireless/intersil/orinoco/mic.c +++ b/drivers/net/wireless/intersil/orinoco/mic.c @@ -16,8 +16,7 @@ /********************************************************************/ int orinoco_mic_init(struct orinoco_private *priv) { - priv->tx_tfm_mic = crypto_alloc_shash("michael_mic", 0, - CRYPTO_ALG_ASYNC); + priv->tx_tfm_mic = crypto_alloc_shash("michael_mic", 0, 0); if (IS_ERR(priv->tx_tfm_mic)) { printk(KERN_DEBUG "orinoco_mic_init: could not allocate " "crypto API michael_mic\n"); @@ -25,8 +24,7 @@ int orinoco_mic_init(struct orinoco_private *priv) return -ENOMEM; } - priv->rx_tfm_mic = crypto_alloc_shash("michael_mic", 0, - CRYPTO_ALG_ASYNC); + priv->rx_tfm_mic = crypto_alloc_shash("michael_mic", 0, 0); if (IS_ERR(priv->rx_tfm_mic)) { printk(KERN_DEBUG "orinoco_mic_init: could not allocate " "crypto API michael_mic\n"); diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c index 21bb68457cfe..40a8b941ad5c 100644 --- a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c +++ b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c @@ -908,6 +908,7 @@ static int ezusb_access_ltv(struct ezusb_priv *upriv, case EZUSB_CTX_REQ_SUBMITTED: if (!ctx->in_rid) break; + /* fall through */ default: err("%s: Unexpected context state %d", __func__, state); diff --git a/drivers/net/wireless/intersil/prism54/isl_38xx.c b/drivers/net/wireless/intersil/prism54/isl_38xx.c index ce9d4db0d9ca..b0eb58a62c90 100644 --- a/drivers/net/wireless/intersil/prism54/isl_38xx.c +++ b/drivers/net/wireless/intersil/prism54/isl_38xx.c @@ -235,6 +235,7 @@ isl38xx_in_queue(isl38xx_control_block *cb, int queue) /* send queues */ case ISL38XX_CB_TX_MGMTQ: BUG_ON(delta > ISL38XX_CB_MGMT_QSIZE); + /* fall through */ case ISL38XX_CB_TX_DATA_LQ: case ISL38XX_CB_TX_DATA_HQ: diff --git a/drivers/net/wireless/intersil/prism54/isl_ioctl.c b/drivers/net/wireless/intersil/prism54/isl_ioctl.c index 334717b0a2be..3ccf2a4b548c 100644 --- a/drivers/net/wireless/intersil/prism54/isl_ioctl.c +++ b/drivers/net/wireless/intersil/prism54/isl_ioctl.c @@ -1691,6 +1691,7 @@ static int prism54_get_encodeext(struct net_device *ndev, case DOT11_AUTH_BOTH: case DOT11_AUTH_SK: wrqu->encoding.flags |= IW_ENCODE_RESTRICTED; + /* fall through */ case DOT11_AUTH_OS: default: wrqu->encoding.flags |= IW_ENCODE_OPEN; diff --git a/drivers/net/wireless/intersil/prism54/islpci_dev.c b/drivers/net/wireless/intersil/prism54/islpci_dev.c index 325176d4d796..ad6d3a56ae06 100644 --- a/drivers/net/wireless/intersil/prism54/islpci_dev.c +++ b/drivers/net/wireless/intersil/prism54/islpci_dev.c @@ -932,6 +932,7 @@ islpci_set_state(islpci_private *priv, islpci_state_t new_state) switch (new_state) { case PRV_STATE_OFF: priv->state_off++; + /* fall through */ default: priv->state = new_state; break; diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c index d1464e3e1be2..3a4b8786f7ea 100644 --- a/drivers/net/wireless/mac80211_hwsim.c +++ 
b/drivers/net/wireless/mac80211_hwsim.c @@ -374,6 +374,20 @@ static const struct ieee80211_rate hwsim_rates[] = { { .bitrate = 540 } }; +static const u32 hwsim_ciphers[] = { + WLAN_CIPHER_SUITE_WEP40, + WLAN_CIPHER_SUITE_WEP104, + WLAN_CIPHER_SUITE_TKIP, + WLAN_CIPHER_SUITE_CCMP, + WLAN_CIPHER_SUITE_CCMP_256, + WLAN_CIPHER_SUITE_GCMP, + WLAN_CIPHER_SUITE_GCMP_256, + WLAN_CIPHER_SUITE_AES_CMAC, + WLAN_CIPHER_SUITE_BIP_CMAC_256, + WLAN_CIPHER_SUITE_BIP_GMAC_128, + WLAN_CIPHER_SUITE_BIP_GMAC_256, +}; + #define OUI_QCA 0x001374 #define QCA_NL80211_SUBCMD_TEST 1 enum qca_nl80211_vendor_subcmds { @@ -451,48 +465,6 @@ static const struct nl80211_vendor_cmd_info mac80211_hwsim_vendor_events[] = { { .vendor_id = OUI_QCA, .subcmd = 1 }, }; -static const struct ieee80211_iface_limit hwsim_if_limits[] = { - { .max = 1, .types = BIT(NL80211_IFTYPE_ADHOC) }, - { .max = 2048, .types = BIT(NL80211_IFTYPE_STATION) | - BIT(NL80211_IFTYPE_P2P_CLIENT) | -#ifdef CONFIG_MAC80211_MESH - BIT(NL80211_IFTYPE_MESH_POINT) | -#endif - BIT(NL80211_IFTYPE_AP) | - BIT(NL80211_IFTYPE_P2P_GO) }, - /* must be last, see hwsim_if_comb */ - { .max = 1, .types = BIT(NL80211_IFTYPE_P2P_DEVICE) } -}; - -static const struct ieee80211_iface_combination hwsim_if_comb[] = { - { - .limits = hwsim_if_limits, - /* remove the last entry which is P2P_DEVICE */ - .n_limits = ARRAY_SIZE(hwsim_if_limits) - 1, - .max_interfaces = 2048, - .num_different_channels = 1, - .radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) | - BIT(NL80211_CHAN_WIDTH_20) | - BIT(NL80211_CHAN_WIDTH_40) | - BIT(NL80211_CHAN_WIDTH_80) | - BIT(NL80211_CHAN_WIDTH_160), - }, -}; - -static const struct ieee80211_iface_combination hwsim_if_comb_p2p_dev[] = { - { - .limits = hwsim_if_limits, - .n_limits = ARRAY_SIZE(hwsim_if_limits), - .max_interfaces = 2048, - .num_different_channels = 1, - .radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) | - BIT(NL80211_CHAN_WIDTH_20) | - BIT(NL80211_CHAN_WIDTH_40) | - BIT(NL80211_CHAN_WIDTH_80) | - BIT(NL80211_CHAN_WIDTH_160), - }, -}; - static spinlock_t hwsim_radio_lock; static LIST_HEAD(hwsim_radios); static struct rhashtable hwsim_radios_rht; @@ -515,6 +487,10 @@ struct mac80211_hwsim_data { struct ieee80211_channel channels_5ghz[ARRAY_SIZE(hwsim_channels_5ghz)]; struct ieee80211_rate rates[ARRAY_SIZE(hwsim_rates)]; struct ieee80211_iface_combination if_combination; + struct ieee80211_iface_limit if_limits[3]; + int n_if_limits; + + u32 ciphers[ARRAY_SIZE(hwsim_ciphers)]; struct mac_address addresses[2]; int channels, idx; @@ -642,6 +618,8 @@ static const struct nla_policy hwsim_genl_policy[HWSIM_ATTR_MAX + 1] = { [HWSIM_ATTR_NO_VIF] = { .type = NLA_FLAG }, [HWSIM_ATTR_FREQ] = { .type = NLA_U32 }, [HWSIM_ATTR_PERM_ADDR] = { .type = NLA_UNSPEC, .len = ETH_ALEN }, + [HWSIM_ATTR_IFTYPE_SUPPORT] = { .type = NLA_U32 }, + [HWSIM_ATTR_CIPHER_SUPPORT] = { .type = NLA_BINARY }, }; static void mac80211_hwsim_tx_frame(struct ieee80211_hw *hw, @@ -2414,6 +2392,9 @@ struct hwsim_new_radio_params { const char *hwname; bool no_vif; const u8 *perm_addr; + u32 iftypes; + u32 *ciphers; + u8 n_ciphers; }; static void hwsim_mcast_config_msg(struct sk_buff *mcast_skb, @@ -2630,6 +2611,27 @@ static void mac80211_hswim_he_capab(struct ieee80211_supported_band *sband) sband->n_iftype_data = 1; } +#ifdef CONFIG_MAC80211_MESH +#define HWSIM_MESH_BIT BIT(NL80211_IFTYPE_MESH_POINT) +#else +#define HWSIM_MESH_BIT 0 +#endif + +#define HWSIM_DEFAULT_IF_LIMIT \ + (BIT(NL80211_IFTYPE_STATION) | \ + BIT(NL80211_IFTYPE_P2P_CLIENT) | \ + 
BIT(NL80211_IFTYPE_AP) | \ + BIT(NL80211_IFTYPE_P2P_GO) | \ + HWSIM_MESH_BIT) + +#define HWSIM_IFTYPE_SUPPORT_MASK \ + (BIT(NL80211_IFTYPE_STATION) | \ + BIT(NL80211_IFTYPE_AP) | \ + BIT(NL80211_IFTYPE_P2P_CLIENT) | \ + BIT(NL80211_IFTYPE_P2P_GO) | \ + BIT(NL80211_IFTYPE_ADHOC) | \ + BIT(NL80211_IFTYPE_MESH_POINT)) + static int mac80211_hwsim_new_radio(struct genl_info *info, struct hwsim_new_radio_params *param) { @@ -2641,6 +2643,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info, const struct ieee80211_ops *ops = &mac80211_hwsim_ops; struct net *net; int idx; + int n_limits = 0; if (WARN_ON(param->channels > 1 && !param->use_chanctx)) return -EINVAL; @@ -2716,26 +2719,60 @@ static int mac80211_hwsim_new_radio(struct genl_info *info, if (info) data->portid = info->snd_portid; + /* setup interface limits, only on interface types we support */ + if (param->iftypes & BIT(NL80211_IFTYPE_ADHOC)) { + data->if_limits[n_limits].max = 1; + data->if_limits[n_limits].types = BIT(NL80211_IFTYPE_ADHOC); + n_limits++; + } + + if (param->iftypes & HWSIM_DEFAULT_IF_LIMIT) { + data->if_limits[n_limits].max = 2048; + /* + * For this case, we may only support a subset of + * HWSIM_DEFAULT_IF_LIMIT, therefore we only want to add the + * bits that both param->iftype & HWSIM_DEFAULT_IF_LIMIT have. + */ + data->if_limits[n_limits].types = + HWSIM_DEFAULT_IF_LIMIT & param->iftypes; + n_limits++; + } + + if (param->iftypes & BIT(NL80211_IFTYPE_P2P_DEVICE)) { + data->if_limits[n_limits].max = 1; + data->if_limits[n_limits].types = + BIT(NL80211_IFTYPE_P2P_DEVICE); + n_limits++; + } + if (data->use_chanctx) { hw->wiphy->max_scan_ssids = 255; hw->wiphy->max_scan_ie_len = IEEE80211_MAX_DATA_LEN; hw->wiphy->max_remain_on_channel_duration = 1000; - hw->wiphy->iface_combinations = &data->if_combination; - if (param->p2p_device) - data->if_combination = hwsim_if_comb_p2p_dev[0]; - else - data->if_combination = hwsim_if_comb[0]; - hw->wiphy->n_iface_combinations = 1; - /* For channels > 1 DFS is not allowed */ data->if_combination.radar_detect_widths = 0; data->if_combination.num_different_channels = data->channels; - } else if (param->p2p_device) { - hw->wiphy->iface_combinations = hwsim_if_comb_p2p_dev; - hw->wiphy->n_iface_combinations = - ARRAY_SIZE(hwsim_if_comb_p2p_dev); } else { - hw->wiphy->iface_combinations = hwsim_if_comb; - hw->wiphy->n_iface_combinations = ARRAY_SIZE(hwsim_if_comb); + data->if_combination.num_different_channels = 1; + data->if_combination.radar_detect_widths = + BIT(NL80211_CHAN_WIDTH_20_NOHT) | + BIT(NL80211_CHAN_WIDTH_20) | + BIT(NL80211_CHAN_WIDTH_40) | + BIT(NL80211_CHAN_WIDTH_80) | + BIT(NL80211_CHAN_WIDTH_160); + } + + data->if_combination.n_limits = n_limits; + data->if_combination.max_interfaces = 2048; + data->if_combination.limits = data->if_limits; + + hw->wiphy->iface_combinations = &data->if_combination; + hw->wiphy->n_iface_combinations = 1; + + if (param->ciphers) { + memcpy(data->ciphers, param->ciphers, + param->n_ciphers * sizeof(u32)); + hw->wiphy->cipher_suites = data->ciphers; + hw->wiphy->n_cipher_suites = param->n_ciphers; } INIT_DELAYED_WORK(&data->roc_start, hw_roc_start); @@ -2744,15 +2781,6 @@ static int mac80211_hwsim_new_radio(struct genl_info *info, hw->queues = 5; hw->offchannel_tx_hw_queue = 4; - hw->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) | - BIT(NL80211_IFTYPE_AP) | - BIT(NL80211_IFTYPE_P2P_CLIENT) | - BIT(NL80211_IFTYPE_P2P_GO) | - BIT(NL80211_IFTYPE_ADHOC) | - BIT(NL80211_IFTYPE_MESH_POINT); - - if (param->p2p_device) - 
hw->wiphy->interface_modes |= BIT(NL80211_IFTYPE_P2P_DEVICE); ieee80211_hw_set(hw, SUPPORT_FAST_XMIT); ieee80211_hw_set(hw, CHANCTX_STA_CSA); @@ -2778,6 +2806,8 @@ static int mac80211_hwsim_new_radio(struct genl_info *info, NL80211_FEATURE_SCAN_RANDOM_MAC_ADDR; wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_VHT_IBSS); + hw->wiphy->interface_modes = param->iftypes; + /* ask mac80211 to reserve space for magic */ hw->vif_data_size = sizeof(struct hwsim_vif_priv); hw->sta_data_size = sizeof(struct hwsim_sta_priv); @@ -3293,6 +3323,29 @@ static int hwsim_register_received_nl(struct sk_buff *skb_2, return 0; } +/* ensures ciphers only include ciphers listed in 'hwsim_ciphers' array */ +static bool hwsim_known_ciphers(const u32 *ciphers, int n_ciphers) +{ + int i; + + for (i = 0; i < n_ciphers; i++) { + int j; + int found = 0; + + for (j = 0; j < ARRAY_SIZE(hwsim_ciphers); j++) { + if (ciphers[i] == hwsim_ciphers[j]) { + found = 1; + break; + } + } + + if (!found) + return false; + } + + return true; +} + static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info) { struct hwsim_new_radio_params param = { 0 }; @@ -3321,15 +3374,6 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info) if (info->attrs[HWSIM_ATTR_NO_VIF]) param.no_vif = true; - if (info->attrs[HWSIM_ATTR_RADIO_NAME]) { - hwname = kasprintf(GFP_KERNEL, "%.*s", - nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]), - (char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME])); - if (!hwname) - return -ENOMEM; - param.hwname = hwname; - } - if (info->attrs[HWSIM_ATTR_USE_CHANCTX]) param.use_chanctx = true; else @@ -3342,10 +3386,8 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info) if (info->attrs[HWSIM_ATTR_REG_CUSTOM_REG]) { u32 idx = nla_get_u32(info->attrs[HWSIM_ATTR_REG_CUSTOM_REG]); - if (idx >= ARRAY_SIZE(hwsim_world_regdom_custom)) { - kfree(hwname); + if (idx >= ARRAY_SIZE(hwsim_world_regdom_custom)) return -EINVAL; - } idx = array_index_nospec(idx, ARRAY_SIZE(hwsim_world_regdom_custom)); @@ -3358,14 +3400,72 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info) GENL_SET_ERR_MSG(info,"MAC is no valid source addr"); NL_SET_BAD_ATTR(info->extack, info->attrs[HWSIM_ATTR_PERM_ADDR]); - kfree(hwname); return -EINVAL; } - param.perm_addr = nla_data(info->attrs[HWSIM_ATTR_PERM_ADDR]); } + if (info->attrs[HWSIM_ATTR_IFTYPE_SUPPORT]) { + param.iftypes = + nla_get_u32(info->attrs[HWSIM_ATTR_IFTYPE_SUPPORT]); + + if (param.iftypes & ~HWSIM_IFTYPE_SUPPORT_MASK) { + NL_SET_ERR_MSG_ATTR(info->extack, + info->attrs[HWSIM_ATTR_IFTYPE_SUPPORT], + "cannot support more iftypes than kernel"); + return -EINVAL; + } + } else { + param.iftypes = HWSIM_IFTYPE_SUPPORT_MASK; + } + + /* ensure both flag and iftype support is honored */ + if (param.p2p_device || + param.iftypes & BIT(NL80211_IFTYPE_P2P_DEVICE)) { + param.iftypes |= BIT(NL80211_IFTYPE_P2P_DEVICE); + param.p2p_device = true; + } + + if (info->attrs[HWSIM_ATTR_CIPHER_SUPPORT]) { + u32 len = nla_len(info->attrs[HWSIM_ATTR_CIPHER_SUPPORT]); + + param.ciphers = + nla_data(info->attrs[HWSIM_ATTR_CIPHER_SUPPORT]); + + if (len % sizeof(u32)) { + NL_SET_ERR_MSG_ATTR(info->extack, + info->attrs[HWSIM_ATTR_CIPHER_SUPPORT], + "bad cipher list length"); + return -EINVAL; + } + + param.n_ciphers = len / sizeof(u32); + + if (param.n_ciphers > ARRAY_SIZE(hwsim_ciphers)) { + NL_SET_ERR_MSG_ATTR(info->extack, + info->attrs[HWSIM_ATTR_CIPHER_SUPPORT], + "too many ciphers specified"); + return -EINVAL; + } + + if 
(!hwsim_known_ciphers(param.ciphers, param.n_ciphers)) { + NL_SET_ERR_MSG_ATTR(info->extack, + info->attrs[HWSIM_ATTR_CIPHER_SUPPORT], + "unsupported ciphers specified"); + return -EINVAL; + } + } + + if (info->attrs[HWSIM_ATTR_RADIO_NAME]) { + hwname = kasprintf(GFP_KERNEL, "%.*s", + nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]), + (char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME])); + if (!hwname) + return -ENOMEM; + param.hwname = hwname; + } + ret = mac80211_hwsim_new_radio(info, &param); kfree(hwname); return ret; @@ -3785,6 +3885,7 @@ static int __init init_mac80211_hwsim(void) param.p2p_device = support_p2p_device; param.use_chanctx = channels > 1; + param.iftypes = HWSIM_IFTYPE_SUPPORT_MASK; err = mac80211_hwsim_new_radio(NULL, &param); if (err < 0) diff --git a/drivers/net/wireless/mac80211_hwsim.h b/drivers/net/wireless/mac80211_hwsim.h index 0fe3199f8c72..a1ef8457fad4 100644 --- a/drivers/net/wireless/mac80211_hwsim.h +++ b/drivers/net/wireless/mac80211_hwsim.h @@ -132,6 +132,8 @@ enum { * @HWSIM_ATTR_TX_INFO_FLAGS: additional flags for corresponding * rates of %HWSIM_ATTR_TX_INFO * @HWSIM_ATTR_PERM_ADDR: permanent mac address of new radio + * @HWSIM_ATTR_IFTYPE_SUPPORT: u32 attribute of supported interface types bits + * @HWSIM_ATTR_CIPHER_SUPPORT: u32 array of supported cipher types * @__HWSIM_ATTR_MAX: enum limit */ @@ -160,6 +162,8 @@ enum { HWSIM_ATTR_PAD, HWSIM_ATTR_TX_INFO_FLAGS, HWSIM_ATTR_PERM_ADDR, + HWSIM_ATTR_IFTYPE_SUPPORT, + HWSIM_ATTR_CIPHER_SUPPORT, __HWSIM_ATTR_MAX, }; #define HWSIM_ATTR_MAX (__HWSIM_ATTR_MAX - 1) diff --git a/drivers/net/wireless/marvell/libertas/if_spi.c b/drivers/net/wireless/marvell/libertas/if_spi.c index 504d6e096476..7c3224b83ef7 100644 --- a/drivers/net/wireless/marvell/libertas/if_spi.c +++ b/drivers/net/wireless/marvell/libertas/if_spi.c @@ -796,15 +796,13 @@ static void if_spi_h2c(struct if_spi_card *card, { struct lbs_private *priv = card->priv; int err = 0; - u16 int_type, port_reg; + u16 port_reg; switch (type) { case MVMS_DAT: - int_type = IF_SPI_CIC_TX_DOWNLOAD_OVER; port_reg = IF_SPI_DATA_RDWRPORT_REG; break; case MVMS_CMD: - int_type = IF_SPI_CIC_CMD_DOWNLOAD_OVER; port_reg = IF_SPI_CMD_RDWRPORT_REG; break; default: diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c index adc88433faa8..1467af22e394 100644 --- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c +++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c @@ -1275,27 +1275,27 @@ mwifiex_cfg80211_change_virtual_intf(struct wiphy *wiphy, } static void -mwifiex_parse_htinfo(struct mwifiex_private *priv, u8 tx_htinfo, +mwifiex_parse_htinfo(struct mwifiex_private *priv, u8 rateinfo, u8 htinfo, struct rate_info *rate) { struct mwifiex_adapter *adapter = priv->adapter; if (adapter->is_hw_11ac_capable) { /* bit[1-0]: 00=LG 01=HT 10=VHT */ - if (tx_htinfo & BIT(0)) { + if (htinfo & BIT(0)) { /* HT */ - rate->mcs = priv->tx_rate; + rate->mcs = rateinfo; rate->flags |= RATE_INFO_FLAGS_MCS; } - if (tx_htinfo & BIT(1)) { + if (htinfo & BIT(1)) { /* VHT */ - rate->mcs = priv->tx_rate & 0x0F; + rate->mcs = rateinfo & 0x0F; rate->flags |= RATE_INFO_FLAGS_VHT_MCS; } - if (tx_htinfo & (BIT(1) | BIT(0))) { + if (htinfo & (BIT(1) | BIT(0))) { /* HT or VHT */ - switch (tx_htinfo & (BIT(3) | BIT(2))) { + switch (htinfo & (BIT(3) | BIT(2))) { case 0: rate->bw = RATE_INFO_BW_20; break; @@ -1310,29 +1310,51 @@ mwifiex_parse_htinfo(struct mwifiex_private *priv, u8 tx_htinfo, break; } - if (tx_htinfo & BIT(4)) + if (htinfo & BIT(4)) rate->flags |=
RATE_INFO_FLAGS_SHORT_GI; - if ((priv->tx_rate >> 4) == 1) + if ((rateinfo >> 4) == 1) rate->nss = 2; else rate->nss = 1; } } else { /* - * Bit 0 in tx_htinfo indicates that current Tx rate - * is 11n rate. Valid MCS index values for us are 0 to 15. + * Bit 0 in htinfo indicates that current rate is 11n. Valid + * MCS index values for us are 0 to 15. */ - if ((tx_htinfo & BIT(0)) && (priv->tx_rate < 16)) { - rate->mcs = priv->tx_rate; + if ((htinfo & BIT(0)) && (rateinfo < 16)) { + rate->mcs = rateinfo; rate->flags |= RATE_INFO_FLAGS_MCS; rate->bw = RATE_INFO_BW_20; - if (tx_htinfo & BIT(1)) + if (htinfo & BIT(1)) rate->bw = RATE_INFO_BW_40; - if (tx_htinfo & BIT(2)) + if (htinfo & BIT(2)) rate->flags |= RATE_INFO_FLAGS_SHORT_GI; } } + + /* Decode legacy rates for non-HT. */ + if (!(htinfo & (BIT(0) | BIT(1)))) { + /* Bitrates in multiples of 100kb/s. */ + static const int legacy_rates[] = { + [0] = 10, + [1] = 20, + [2] = 55, + [3] = 110, + [4] = 60, /* MWIFIEX_RATE_INDEX_OFDM0 */ + [5] = 60, + [6] = 90, + [7] = 120, + [8] = 180, + [9] = 240, + [10] = 360, + [11] = 480, + [12] = 540, + }; + if (rateinfo < ARRAY_SIZE(legacy_rates)) + rate->legacy = legacy_rates[rateinfo]; + } } /* @@ -1375,7 +1397,8 @@ mwifiex_dump_station_info(struct mwifiex_private *priv, sinfo->tx_packets = node->stats.tx_packets; sinfo->tx_failed = node->stats.tx_failed; - mwifiex_parse_htinfo(priv, node->stats.last_tx_htinfo, + mwifiex_parse_htinfo(priv, priv->tx_rate, + node->stats.last_tx_htinfo, &sinfo->txrate); sinfo->txrate.legacy = node->stats.last_tx_rate * 5; @@ -1401,7 +1424,8 @@ mwifiex_dump_station_info(struct mwifiex_private *priv, HostCmd_ACT_GEN_GET, DTIM_PERIOD_I, &priv->dtim_period, true); - mwifiex_parse_htinfo(priv, priv->tx_htinfo, &sinfo->txrate); + mwifiex_parse_htinfo(priv, priv->tx_rate, priv->tx_htinfo, + &sinfo->txrate); sinfo->signal_avg = priv->bcn_rssi_avg; sinfo->rx_bytes = priv->stats.rx_bytes; @@ -1412,6 +1436,10 @@ mwifiex_dump_station_info(struct mwifiex_private *priv, /* bit rate is in 500 kb/s units. 
Convert it to 100kb/s units */ sinfo->txrate.legacy = rate * 5; + sinfo->filled |= BIT(NL80211_STA_INFO_RX_BITRATE); + mwifiex_parse_htinfo(priv, priv->rxpd_rate, priv->rxpd_htinfo, + &sinfo->rxrate); + if (priv->bss_mode == NL80211_IFTYPE_STATION) { sinfo->filled |= BIT_ULL(NL80211_STA_INFO_BSS_PARAM); sinfo->bss_param.flags = 0; diff --git a/drivers/net/wireless/marvell/mwifiex/debugfs.c b/drivers/net/wireless/marvell/mwifiex/debugfs.c index cce70252fd96..cbe4493b3266 100644 --- a/drivers/net/wireless/marvell/mwifiex/debugfs.c +++ b/drivers/net/wireless/marvell/mwifiex/debugfs.c @@ -273,15 +273,13 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf, "total samples = %d\n", atomic_read(&phist_data->num_samples)); - p += sprintf(p, "rx rates (in Mbps): 0=1M 1=2M"); - p += sprintf(p, "2=5.5M 3=11M 4=6M 5=9M 6=12M\n"); - p += sprintf(p, "7=18M 8=24M 9=36M 10=48M 11=54M"); - p += sprintf(p, "12-27=MCS0-15(BW20) 28-43=MCS0-15(BW40)\n"); + p += sprintf(p, + "rx rates (in Mbps): 0=1M 1=2M 2=5.5M 3=11M 4=6M 5=9M 6=12M\n" + "7=18M 8=24M 9=36M 10=48M 11=54M 12-27=MCS0-15(BW20) 28-43=MCS0-15(BW40)\n"); if (ISSUPP_11ACENABLED(priv->adapter->fw_cap_info)) { - p += sprintf(p, "44-53=MCS0-9(VHT:BW20)"); - p += sprintf(p, "54-63=MCS0-9(VHT:BW40)"); - p += sprintf(p, "64-73=MCS0-9(VHT:BW80)\n\n"); + p += sprintf(p, + "44-53=MCS0-9(VHT:BW20) 54-63=MCS0-9(VHT:BW40) 64-73=MCS0-9(VHT:BW80)\n\n"); } else { p += sprintf(p, "\n"); } @@ -310,7 +308,7 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf, for (i = 0; i < MWIFIEX_MAX_NOISE_FLR; i++) { value = atomic_read(&phist_data->noise_flr[i]); if (value) - p += sprintf(p, "noise_flr[-%02ddBm] = %d\n", + p += sprintf(p, "noise_flr[%02ddBm] = %d\n", (int)(i-128), value); } for (i = 0; i < MWIFIEX_MAX_SIG_STRENGTH; i++) { diff --git a/drivers/net/wireless/marvell/mwifiex/ie.c b/drivers/net/wireless/marvell/mwifiex/ie.c index 75cbd609d606..6845eb57b39a 100644 --- a/drivers/net/wireless/marvell/mwifiex/ie.c +++ b/drivers/net/wireless/marvell/mwifiex/ie.c @@ -363,6 +363,7 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv, (const u8 *)hdr, hdr->len + sizeof(struct ieee_types_header))) break; + /* fall through */ default: memcpy(gen_ie->ie_buffer + ie_len, hdr, hdr->len + sizeof(struct ieee_types_header)); diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c index 8e483b0bc3b1..935778ec9a1b 100644 --- a/drivers/net/wireless/marvell/mwifiex/scan.c +++ b/drivers/net/wireless/marvell/mwifiex/scan.c @@ -1882,15 +1882,17 @@ mwifiex_parse_single_response_buf(struct mwifiex_private *priv, u8 **bss_info, ETH_ALEN)) mwifiex_update_curr_bss_params(priv, bss); - cfg80211_put_bss(priv->wdev.wiphy, bss); - } - if ((chan->flags & IEEE80211_CHAN_RADAR) || - (chan->flags & IEEE80211_CHAN_NO_IR)) { - mwifiex_dbg(adapter, INFO, - "radar or passive channel %d\n", - channel); - mwifiex_save_hidden_ssid_channels(priv, bss); + if ((chan->flags & IEEE80211_CHAN_RADAR) || + (chan->flags & IEEE80211_CHAN_NO_IR)) { + mwifiex_dbg(adapter, INFO, + "radar or passive channel %d\n", + channel); + mwifiex_save_hidden_ssid_channels(priv, + bss); + } + + cfg80211_put_bss(priv->wdev.wiphy, bss); } } } else { diff --git a/drivers/net/wireless/marvell/mwifiex/sta_rx.c b/drivers/net/wireless/marvell/mwifiex/sta_rx.c index 00fcbda09349..fb28a5c7f441 100644 --- a/drivers/net/wireless/marvell/mwifiex/sta_rx.c +++ b/drivers/net/wireless/marvell/mwifiex/sta_rx.c @@ -152,14 +152,17 @@ int mwifiex_process_rx_packet(struct 
mwifiex_private *priv, mwifiex_process_tdls_action_frame(priv, offset, rx_pkt_len); } - priv->rxpd_rate = local_rx_pd->rx_rate; - - priv->rxpd_htinfo = local_rx_pd->ht_info; + /* Only stash RX bitrate for unicast packets. */ + if (likely(!is_multicast_ether_addr(rx_pkt_hdr->eth803_hdr.h_dest))) { + priv->rxpd_rate = local_rx_pd->rx_rate; + priv->rxpd_htinfo = local_rx_pd->ht_info; + } if (GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_STA || GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_UAP) { - adj_rx_rate = mwifiex_adjust_data_rate(priv, priv->rxpd_rate, - priv->rxpd_htinfo); + adj_rx_rate = mwifiex_adjust_data_rate(priv, + local_rx_pd->rx_rate, + local_rx_pd->ht_info); mwifiex_hist_data_add(priv, adj_rx_rate, local_rx_pd->snr, local_rx_pd->nf); } diff --git a/drivers/net/wireless/mediatek/mt76/Makefile b/drivers/net/wireless/mediatek/mt76/Makefile index 9b8d7488c545..1a45cb30f39f 100644 --- a/drivers/net/wireless/mediatek/mt76/Makefile +++ b/drivers/net/wireless/mediatek/mt76/Makefile @@ -14,7 +14,8 @@ CFLAGS_mt76x02_trace.o := -I$(src) mt76x02-lib-y := mt76x02_util.o mt76x02_mac.o mt76x02_mcu.o \ mt76x02_eeprom.o mt76x02_phy.o mt76x02_mmio.o \ - mt76x02_txrx.o mt76x02_trace.o + mt76x02_txrx.o mt76x02_trace.o mt76x02_debugfs.o \ + mt76x02_dfs.o mt76x02-usb-y := mt76x02_usb_mcu.o mt76x02_usb_core.o diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c index f7fbd7016403..e2ba26378575 100644 --- a/drivers/net/wireless/mediatek/mt76/dma.c +++ b/drivers/net/wireless/mediatek/mt76/dma.c @@ -157,17 +157,20 @@ mt76_dma_tx_cleanup(struct mt76_dev *dev, enum mt76_txq_id qid, bool flush) if (entry.schedule) q->swq_queued--; - if (entry.skb) + q->tail = (q->tail + 1) % q->ndesc; + q->queued--; + + if (entry.skb) { + spin_unlock_bh(&q->lock); dev->drv->tx_complete_skb(dev, q, &entry, flush); + spin_lock_bh(&q->lock); + } if (entry.txwi) { mt76_put_txwi(dev, entry.txwi); - wake = true; + wake = !flush; } - q->tail = (q->tail + 1) % q->ndesc; - q->queued--; - if (!flush && q->tail == last) last = ioread32(&q->regs->dma_idx); } @@ -258,6 +261,7 @@ int mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q, return -ENOMEM; } + skb->prev = skb->next = NULL; dma_sync_single_for_cpu(dev->dev, t->dma_addr, sizeof(t->txwi), DMA_TO_DEVICE); ret = dev->drv->tx_prepare_skb(dev, &t->txwi, skb, q, wcid, sta, diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c index 7d219ff2d480..7b926dfa6b97 100644 --- a/drivers/net/wireless/mediatek/mt76/mac80211.c +++ b/drivers/net/wireless/mediatek/mt76/mac80211.c @@ -285,6 +285,7 @@ mt76_alloc_device(unsigned int size, const struct ieee80211_ops *ops) spin_lock_init(&dev->cc_lock); mutex_init(&dev->mutex); init_waitqueue_head(&dev->tx_wait); + skb_queue_head_init(&dev->status_list); return dev; } @@ -326,6 +327,7 @@ int mt76_register_device(struct mt76_dev *dev, bool vht, ieee80211_hw_set(hw, TX_FRAG_LIST); ieee80211_hw_set(hw, MFP_CAPABLE); ieee80211_hw_set(hw, AP_LINK_PS); + ieee80211_hw_set(hw, REPORTS_TX_ACK_STATUS); wiphy->flags |= WIPHY_FLAG_IBSS_RSN; @@ -359,6 +361,7 @@ void mt76_unregister_device(struct mt76_dev *dev) { struct ieee80211_hw *hw = dev->hw; + mt76_tx_status_check(dev, NULL, true); ieee80211_unregister_hw(hw); mt76_tx_free(dev); } @@ -629,3 +632,80 @@ void mt76_rx_poll_complete(struct mt76_dev *dev, enum mt76_rxq_id q, mt76_rx_complete(dev, &frames, napi); } EXPORT_SYMBOL_GPL(mt76_rx_poll_complete); + +static int +mt76_sta_add(struct mt76_dev *dev, struct 
ieee80211_vif *vif, + struct ieee80211_sta *sta) +{ + struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv; + int ret; + int i; + + mutex_lock(&dev->mutex); + + ret = dev->drv->sta_add(dev, vif, sta); + if (ret) + goto out; + + for (i = 0; i < ARRAY_SIZE(sta->txq); i++) { + struct mt76_txq *mtxq; + + if (!sta->txq[i]) + continue; + + mtxq = (struct mt76_txq *)sta->txq[i]->drv_priv; + mtxq->wcid = wcid; + + mt76_txq_init(dev, sta->txq[i]); + } + + rcu_assign_pointer(dev->wcid[wcid->idx], wcid); + +out: + mutex_unlock(&dev->mutex); + + return ret; +} + +static void +mt76_sta_remove(struct mt76_dev *dev, struct ieee80211_vif *vif, + struct ieee80211_sta *sta) +{ + struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv; + int idx = wcid->idx; + int i; + + rcu_assign_pointer(dev->wcid[idx], NULL); + synchronize_rcu(); + + mutex_lock(&dev->mutex); + + if (dev->drv->sta_remove) + dev->drv->sta_remove(dev, vif, sta); + + mt76_tx_status_check(dev, wcid, true); + for (i = 0; i < ARRAY_SIZE(sta->txq); i++) + mt76_txq_remove(dev, sta->txq[i]); + mt76_wcid_free(dev->wcid_mask, idx); + + mutex_unlock(&dev->mutex); +} + +int mt76_sta_state(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + enum ieee80211_sta_state old_state, + enum ieee80211_sta_state new_state) +{ + struct mt76_dev *dev = hw->priv; + + if (old_state == IEEE80211_STA_NOTEXIST && + new_state == IEEE80211_STA_NONE) + return mt76_sta_add(dev, vif, sta); + + if (old_state == IEEE80211_STA_NONE && + new_state == IEEE80211_STA_NOTEXIST) + mt76_sta_remove(dev, vif, sta); + + return 0; +} +EXPORT_SYMBOL_GPL(mt76_sta_state); diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h index 3bfa7f5e3513..5cd508a68609 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76.h +++ b/drivers/net/wireless/mediatek/mt76/mt76.h @@ -135,9 +135,8 @@ struct mt76_queue { }; struct mt76_mcu_ops { - struct sk_buff *(*mcu_msg_alloc)(const void *data, int len); - int (*mcu_send_msg)(struct mt76_dev *dev, struct sk_buff *skb, - int cmd, bool wait_resp); + int (*mcu_send_msg)(struct mt76_dev *dev, int cmd, const void *data, + int len, bool wait_resp); int (*mcu_wr_rp)(struct mt76_dev *dev, u32 base, const struct mt76_reg_pair *rp, int len); int (*mcu_rd_rp)(struct mt76_dev *dev, u32 base, @@ -195,6 +194,8 @@ struct mt76_wcid { u8 tx_rate_nss; s8 max_txpwr_adj; bool sw_iv; + + u8 packet_id; }; struct mt76_txq { @@ -233,6 +234,22 @@ struct mt76_rx_tid { struct sk_buff *reorder_buf[]; }; +#define MT_TX_CB_DMA_DONE BIT(0) +#define MT_TX_CB_TXS_DONE BIT(1) +#define MT_TX_CB_TXS_FAILED BIT(2) + +#define MT_PACKET_ID_MASK GENMASK(7, 0) +#define MT_PACKET_ID_NO_ACK MT_PACKET_ID_MASK + +#define MT_TX_STATUS_SKB_TIMEOUT HZ + +struct mt76_tx_cb { + unsigned long jiffies; + u8 wcid; + u8 pktid; + u8 flags; +}; + enum { MT76_STATE_INITIALIZED, MT76_STATE_RUNNING, @@ -271,6 +288,12 @@ struct mt76_driver_ops { void (*sta_ps)(struct mt76_dev *dev, struct ieee80211_sta *sta, bool ps); + + int (*sta_add)(struct mt76_dev *dev, struct ieee80211_vif *vif, + struct ieee80211_sta *sta); + + void (*sta_remove)(struct mt76_dev *dev, struct ieee80211_vif *vif, + struct ieee80211_sta *sta); }; struct mt76_channel_state { @@ -400,6 +423,7 @@ struct mt76_dev { const struct mt76_queue_ops *queue_ops; wait_queue_head_t tx_wait; + struct sk_buff_head status_list; unsigned long wcid_mask[MT76_N_WCIDS / BITS_PER_LONG]; @@ -484,7 +508,6 @@ struct mt76_rx_status { #define mt76_wr_rp(dev, ...) 
(dev)->mt76.bus->wr_rp(&((dev)->mt76), __VA_ARGS__) #define mt76_rd_rp(dev, ...) (dev)->mt76.bus->rd_rp(&((dev)->mt76), __VA_ARGS__) -#define mt76_mcu_msg_alloc(dev, ...) (dev)->mt76.mcu_ops->mcu_msg_alloc(__VA_ARGS__) #define mt76_mcu_send_msg(dev, ...) (dev)->mt76.mcu_ops->mcu_send_msg(&((dev)->mt76), __VA_ARGS__) #define mt76_set(dev, offset, val) mt76_rmw(dev, offset, 0, val) @@ -594,6 +617,13 @@ wcid_to_sta(struct mt76_wcid *wcid) return container_of(ptr, struct ieee80211_sta, drv_priv); } +static inline struct mt76_tx_cb *mt76_tx_skb_cb(struct sk_buff *skb) +{ + BUILD_BUG_ON(sizeof(struct mt76_tx_cb) > + sizeof(IEEE80211_SKB_CB(skb)->status.status_driver_data)); + return ((void *) IEEE80211_SKB_CB(skb)->status.status_driver_data); +} + int mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q, struct sk_buff *skb, struct mt76_wcid *wcid, struct ieee80211_sta *sta); @@ -625,6 +655,26 @@ void mt76_rx_aggr_stop(struct mt76_dev *dev, struct mt76_wcid *wcid, u8 tid); void mt76_wcid_key_setup(struct mt76_dev *dev, struct mt76_wcid *wcid, struct ieee80211_key_conf *key); +void mt76_tx_status_lock(struct mt76_dev *dev, struct sk_buff_head *list) + __acquires(&dev->status_list.lock); +void mt76_tx_status_unlock(struct mt76_dev *dev, struct sk_buff_head *list) + __releases(&dev->status_list.lock); + +int mt76_tx_status_skb_add(struct mt76_dev *dev, struct mt76_wcid *wcid, + struct sk_buff *skb); +struct sk_buff *mt76_tx_status_skb_get(struct mt76_dev *dev, + struct mt76_wcid *wcid, int pktid, + struct sk_buff_head *list); +void mt76_tx_status_skb_done(struct mt76_dev *dev, struct sk_buff *skb, + struct sk_buff_head *list); +void mt76_tx_complete_skb(struct mt76_dev *dev, struct sk_buff *skb); +void mt76_tx_status_check(struct mt76_dev *dev, struct mt76_wcid *wcid, + bool flush); +int mt76_sta_state(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + enum ieee80211_sta_state old_state, + enum ieee80211_sta_state new_state); + struct ieee80211_sta *mt76_rx_convert(struct sk_buff *skb); /* internal */ @@ -668,8 +718,6 @@ int mt76u_vendor_request(struct mt76_dev *dev, u8 req, void *buf, size_t len); void mt76u_single_wr(struct mt76_dev *dev, const u8 req, const u16 offset, const u32 val); -u32 mt76u_rr(struct mt76_dev *dev, u32 addr); -void mt76u_wr(struct mt76_dev *dev, u32 addr, u32 val); int mt76u_init(struct mt76_dev *dev, struct usb_interface *intf); void mt76u_deinit(struct mt76_dev *dev); int mt76u_buf_alloc(struct mt76_dev *dev, struct mt76u_buf *buf, diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/Makefile b/drivers/net/wireless/mediatek/mt76/mt76x0/Makefile index 20672978dceb..aa22ba954716 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/Makefile +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/Makefile @@ -2,11 +2,9 @@ obj-$(CONFIG_MT76x0U) += mt76x0u.o obj-$(CONFIG_MT76x0E) += mt76x0e.o obj-$(CONFIG_MT76x0_COMMON) += mt76x0-common.o -mt76x0-common-y := \ - init.o main.o trace.o eeprom.o phy.o \ - mac.o debugfs.o +mt76x0-common-y := init.o main.o eeprom.o phy.o + mt76x0u-y := usb.o usb_mcu.o mt76x0e-y := pci.o pci_mcu.o # ccflags-y := -DDEBUG -CFLAGS_trace.o := -I$(src) diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt76x0/debugfs.c deleted file mode 100644 index 3224e5b1a1e5..000000000000 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/debugfs.c +++ /dev/null @@ -1,87 +0,0 @@ -/* - * Copyright (C) 2014 Felix Fietkau - * Copyright (C) 2015 Jakub Kicinski - * Copyright (C) 2018 
Stanislaw Gruszka - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 - * as published by the Free Software Foundation - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - */ - -#include <linux/debugfs.h> - -#include "mt76x0.h" -#include "eeprom.h" - -static int -mt76x0_ampdu_stat_read(struct seq_file *file, void *data) -{ - struct mt76x02_dev *dev = file->private; - int i, j; - -#define stat_printf(grp, off, name) \ - seq_printf(file, #name ":\t%llu\n", dev->stats.grp[off]) - - stat_printf(rx_stat, 0, rx_crc_err); - stat_printf(rx_stat, 1, rx_phy_err); - stat_printf(rx_stat, 2, rx_false_cca); - stat_printf(rx_stat, 3, rx_plcp_err); - stat_printf(rx_stat, 4, rx_fifo_overflow); - stat_printf(rx_stat, 5, rx_duplicate); - - stat_printf(tx_stat, 0, tx_fail_cnt); - stat_printf(tx_stat, 1, tx_bcn_cnt); - stat_printf(tx_stat, 2, tx_success); - stat_printf(tx_stat, 3, tx_retransmit); - stat_printf(tx_stat, 4, tx_zero_len); - stat_printf(tx_stat, 5, tx_underflow); - - stat_printf(aggr_stat, 0, non_aggr_tx); - stat_printf(aggr_stat, 1, aggr_tx); - - stat_printf(zero_len_del, 0, tx_zero_len_del); - stat_printf(zero_len_del, 1, rx_zero_len_del); -#undef stat_printf - - seq_puts(file, "Aggregations stats:\n"); - for (i = 0; i < 4; i++) { - for (j = 0; j < 8; j++) - seq_printf(file, "%08llx ", - dev->stats.aggr_n[i * 8 + j]); - seq_putc(file, '\n'); - } - - seq_printf(file, "recent average AMPDU len: %d\n", - atomic_read(&dev->avg_ampdu_len)); - - return 0; -} - -static int -mt76x0_ampdu_stat_open(struct inode *inode, struct file *f) -{ - return single_open(f, mt76x0_ampdu_stat_read, inode->i_private); -} - -static const struct file_operations fops_ampdu_stat = { - .open = mt76x0_ampdu_stat_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; - -void mt76x0_init_debugfs(struct mt76x02_dev *dev) -{ - struct dentry *dir; - - dir = mt76_register_debugfs(&dev->mt76); - if (!dir) - return; - - debugfs_create_file("ampdu_stat", S_IRUSR, dir, dev, &fops_ampdu_stat); -} diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c index ab4fd6e0f23a..497e762978cc 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c @@ -135,9 +135,6 @@ static s8 mt76x0_get_delta(struct mt76x02_dev *dev) struct cfg80211_chan_def *chandef = &dev->mt76.chandef; u8 val; - if (mt76x0_tssi_enabled(dev)) - return 0; - if (chandef->width == NL80211_CHAN_WIDTH_80) { val = mt76x02_eeprom_get(dev, MT_EE_5G_TARGET_POWER) >> 8; } else if (chandef->width == NL80211_CHAN_WIDTH_40) { @@ -160,8 +157,8 @@ void mt76x0_get_tx_power_per_rate(struct mt76x02_dev *dev) struct ieee80211_channel *chan = dev->mt76.chandef.chan; bool is_2ghz = chan->band == NL80211_BAND_2GHZ; struct mt76_rate_power *t = &dev->mt76.rate_power; - s8 delta = mt76x0_get_delta(dev); u16 val, addr; + s8 delta; memset(t, 0, sizeof(*t)); @@ -211,6 +208,7 @@ void mt76x0_get_tx_power_per_rate(struct mt76x02_dev *dev) t->vht[7] = s6_to_s8(val); t->vht[8] = s6_to_s8(val >> 8); + delta = mt76x0_tssi_enabled(dev) ?
0 : mt76x0_get_delta(dev); mt76x02_add_rate_power_offset(t, delta); } @@ -233,6 +231,20 @@ void mt76x0_get_power_info(struct mt76x02_dev *dev, u8 *info) u16 data; int i; + if (mt76x0_tssi_enabled(dev)) { + s8 target_power; + + if (chan->band == NL80211_BAND_5GHZ) + data = mt76x02_eeprom_get(dev, MT_EE_5G_TARGET_POWER); + else + data = mt76x02_eeprom_get(dev, MT_EE_2G_TARGET_POWER); + target_power = (data & 0xff) - dev->mt76.rate_power.ofdm[7]; + info[0] = target_power + mt76x0_get_delta(dev); + info[1] = 0; + + return; + } + for (i = 0; i < ARRAY_SIZE(chan_map); i++) { if (chan_map[i].chan <= chan->hw_value) { offset = chan_map[i].offset; @@ -340,8 +352,6 @@ int mt76x0_eeprom_init(struct mt76x02_dev *dev) mt76x0_set_freq_offset(dev); mt76x0_set_temp_offset(dev); - dev->mt76.chainmask = 0x0101; - return 0; } diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c index 4a9408801260..87b575fe1c74 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c @@ -16,7 +16,6 @@ #include "mt76x0.h" #include "eeprom.h" -#include "trace.h" #include "mcu.h" #include "initvals.h" @@ -113,7 +112,7 @@ static int mt76x0_init_bbp(struct mt76x02_dev *dev) { int ret, i; - ret = mt76x0_wait_bbp_ready(dev); + ret = mt76x0_phy_wait_bbp_ready(dev); if (ret) return ret; @@ -134,80 +133,28 @@ static int mt76x0_init_bbp(struct mt76x02_dev *dev) static void mt76x0_init_mac_registers(struct mt76x02_dev *dev) { - u32 reg; - RANDOM_WRITE(dev, common_mac_reg_table); - mt76x02_set_beacon_offsets(dev); - /* Enable PBF and MAC clock SYS_CTRL[11:10] = 0x3 */ RANDOM_WRITE(dev, mt76x0_mac_reg_table); /* Release BBP and MAC reset MAC_SYS_CTRL[1:0] = 0x0 */ - reg = mt76_rr(dev, MT_MAC_SYS_CTRL); - reg &= ~0x3; - mt76_wr(dev, MT_MAC_SYS_CTRL, reg); + mt76_clear(dev, MT_MAC_SYS_CTRL, 0x3); /* Set 0x141C[15:12]=0xF */ - reg = mt76_rr(dev, MT_EXT_CCA_CFG); - reg |= 0x0000F000; - mt76_wr(dev, MT_EXT_CCA_CFG, reg); + mt76_set(dev, MT_EXT_CCA_CFG, 0xf000); mt76_clear(dev, MT_FCE_L2_STUFF, MT_FCE_L2_STUFF_WR_MPDU_LEN_EN); /* - TxRing 9 is for Mgmt frame. - TxRing 8 is for In-band command frame. - WMM_RG0_TXQMA: This register setting is for FCE to define the rule of TxRing 9. - WMM_RG1_TXQMA: This register setting is for FCE to define the rule of TxRing 8. - */ - reg = mt76_rr(dev, MT_WMM_CTRL); - reg &= ~0x000003FF; - reg |= 0x00000201; - mt76_wr(dev, MT_WMM_CTRL, reg); -} - -static int mt76x0_init_wcid_mem(struct mt76x02_dev *dev) -{ - u32 *vals; - int i; - - vals = kmalloc(sizeof(*vals) * MT76_N_WCIDS * 2, GFP_KERNEL); - if (!vals) - return -ENOMEM; - - for (i = 0; i < MT76_N_WCIDS; i++) { - vals[i * 2] = 0xffffffff; - vals[i * 2 + 1] = 0x00ffffff; - } - - mt76_wr_copy(dev, MT_WCID_ADDR_BASE, vals, MT76_N_WCIDS * 2); - kfree(vals); - return 0; -} - -static void mt76x0_init_key_mem(struct mt76x02_dev *dev) -{ - u32 vals[4] = {}; - - mt76_wr_copy(dev, MT_SKEY_MODE_BASE_0, vals, ARRAY_SIZE(vals)); -} - -static int mt76x0_init_wcid_attr_mem(struct mt76x02_dev *dev) -{ - u32 *vals; - int i; - - vals = kmalloc(sizeof(*vals) * MT76_N_WCIDS * 2, GFP_KERNEL); - if (!vals) - return -ENOMEM; - - for (i = 0; i < MT76_N_WCIDS * 2; i++) - vals[i] = 1; - - mt76_wr_copy(dev, MT_WCID_ATTR_BASE, vals, MT76_N_WCIDS * 2); - kfree(vals); - return 0; + * tx_ring 9 is for mgmt frame + * tx_ring 8 is for in-band command frame. 
+ * WMM_RG0_TXQMA: this register setting is for FCE to + * define the rule of tx_ring 9 + * WMM_RG1_TXQMA: this register setting is for FCE to + * define the rule of tx_ring 8 + */ + mt76_rmw(dev, MT_WMM_CTRL, 0x3ff, 0x201); } static void mt76x0_reset_counters(struct mt76x02_dev *dev) @@ -270,7 +217,7 @@ EXPORT_SYMBOL_GPL(mt76x0_mac_stop); int mt76x0_init_hardware(struct mt76x02_dev *dev) { - int ret; + int ret, i, k; if (!mt76x02_wait_for_wpdma(&dev->mt76, 1000)) return -EIO; @@ -280,7 +227,7 @@ int mt76x0_init_hardware(struct mt76x02_dev *dev) return -ETIMEDOUT; mt76x0_reset_csr_bbp(dev); - ret = mt76x02_mcu_function_select(dev, Q_SELECT, 1, false); + ret = mt76x02_mcu_function_select(dev, Q_SELECT, 1); if (ret) return ret; @@ -295,20 +242,12 @@ int mt76x0_init_hardware(struct mt76x02_dev *dev) dev->mt76.rxfilter = mt76_rr(dev, MT_RX_FILTR_CFG); - ret = mt76x0_init_wcid_mem(dev); - if (ret) - return ret; + for (i = 0; i < 16; i++) + for (k = 0; k < 4; k++) + mt76x02_mac_shared_key_setup(dev, i, k, NULL); - mt76x0_init_key_mem(dev); - - ret = mt76x0_init_wcid_attr_mem(dev); - if (ret) - return ret; - - mt76_clear(dev, MT_BEACON_TIME_CFG, (MT_BEACON_TIME_CFG_TIMER_EN | - MT_BEACON_TIME_CFG_SYNC_MODE | - MT_BEACON_TIME_CFG_TBTT_EN | - MT_BEACON_TIME_CFG_BEACON_TX)); + for (i = 0; i < 256; i++) + mt76x02_mac_wcid_setup(dev, i, 0, NULL); mt76x0_reset_counters(dev); @@ -317,6 +256,7 @@ int mt76x0_init_hardware(struct mt76x02_dev *dev) return ret; mt76x0_phy_init(dev); + mt76x02_init_beacon_config(dev); return 0; } @@ -339,7 +279,6 @@ mt76x0_alloc_device(struct device *pdev, dev = container_of(mdev, struct mt76x02_dev, mt76); mutex_init(&dev->phy_mutex); - atomic_set(&dev->avg_ampdu_len, 1); return dev; } @@ -347,49 +286,21 @@ EXPORT_SYMBOL_GPL(mt76x0_alloc_device); int mt76x0_register_device(struct mt76x02_dev *dev) { - struct mt76_dev *mdev = &dev->mt76; - struct ieee80211_hw *hw = mdev->hw; - struct wiphy *wiphy = hw->wiphy; int ret; - /* Reserve WCID 0 for mcast - thanks to this APs WCID will go to - * entry no. 1 like it does in the vendor driver. 
- */ - mdev->wcid_mask[0] |= 1; - - /* init fake wcid for monitor interfaces */ - mdev->global_wcid.idx = 0xff; - mdev->global_wcid.hw_key_idx = -1; - - /* init antenna configuration */ - mdev->antenna_mask = 1; - - hw->queues = 4; - hw->max_rates = 1; - hw->max_report_rates = 7; - hw->max_rate_tries = 1; - hw->extra_tx_headroom = 2; - if (mt76_is_usb(dev)) - hw->extra_tx_headroom += sizeof(struct mt76x02_txwi) + - MT_DMA_HDR_LEN; - - hw->sta_data_size = sizeof(struct mt76x02_sta); - hw->vif_data_size = sizeof(struct mt76x02_vif); - - wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION); - - INIT_DELAYED_WORK(&dev->mac_work, mt76x0_mac_work); + mt76x02_init_device(dev); + mt76x02_config_mac_addr_list(dev); - ret = mt76_register_device(mdev, true, mt76x02_rates, + ret = mt76_register_device(&dev->mt76, true, mt76x02_rates, ARRAY_SIZE(mt76x02_rates)); if (ret) return ret; /* overwrite unsupported features */ - if (mdev->cap.has_5ghz) + if (dev->mt76.cap.has_5ghz) mt76x0_vht_cap_mask(&dev->mt76.sband_5g.sband); - mt76x0_init_debugfs(dev); + mt76x02_init_debugfs(dev); return 0; } diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/initvals.h b/drivers/net/wireless/mediatek/mt76/mt76x0/initvals.h index 236dce6860b4..a1657922758e 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/initvals.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/initvals.h @@ -37,14 +37,14 @@ static const struct mt76_reg_pair common_mac_reg_table[] = { { MT_PBF_RX_MAX_PCNT, 0x0000fe9f }, { MT_TX_RETRY_CFG, 0x47d01f0f }, { MT_AUTO_RSP_CFG, 0x00000013 }, - { MT_CCK_PROT_CFG, 0x05740003 }, - { MT_OFDM_PROT_CFG, 0x05740003 }, + { MT_CCK_PROT_CFG, 0x07f40003 }, + { MT_OFDM_PROT_CFG, 0x07f42004 }, { MT_PBF_CFG, 0x00f40006 }, { MT_WPDMA_GLO_CFG, 0x00000030 }, - { MT_GF20_PROT_CFG, 0x01744004 }, - { MT_GF40_PROT_CFG, 0x03f44084 }, - { MT_MM20_PROT_CFG, 0x01744004 }, - { MT_MM40_PROT_CFG, 0x03f54084 }, + { MT_GF20_PROT_CFG, 0x01742004 }, + { MT_GF40_PROT_CFG, 0x03f42084 }, + { MT_MM20_PROT_CFG, 0x01742004 }, + { MT_MM40_PROT_CFG, 0x03f42084 }, { MT_TXOP_CTRL_CFG, 0x0000583f }, { MT_TX_RTS_CFG, 0x00092b20 }, { MT_EXP_ACK_TIME, 0x002400ca }, @@ -85,6 +85,9 @@ static const struct mt76_reg_pair mt76x0_mac_reg_table[] = { { MT_HT_CTRL_CFG, 0x000001FF }, { MT_TXOP_HLDR_ET, 0x00000000 }, { MT_PN_PAD_MODE, 0x00000003 }, + { MT_TX_PROT_CFG6, 0xe3f42004 }, + { MT_TX_PROT_CFG7, 0xe3f42084 }, + { MT_TX_PROT_CFG8, 0xe3f42104 }, }; static const struct mt76_reg_pair mt76x0_bbp_init_tab[] = { diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/initvals_phy.h b/drivers/net/wireless/mediatek/mt76/mt76x0/initvals_phy.h index 95d43efc1f3d..56c6fa73daf5 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/initvals_phy.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/initvals_phy.h @@ -16,757 +16,626 @@ #ifndef __MT76X0U_PHY_INITVALS_H #define __MT76X0U_PHY_INITVALS_H -#define RF_REG_PAIR(bank, reg, value) \ - { (bank) << 16 | (reg), value } - - static const struct mt76_reg_pair mt76x0_rf_central_tab[] = { -/* - Bank 0 - For central blocks: BG, PLL, XTAL, LO, ADC/DAC -*/ - { MT_RF(0, 1), 0x01}, - { MT_RF(0, 2), 0x11}, - - /* - R3 ~ R7: VCO Cal. 
- */ - { MT_RF(0, 3), 0x73}, /* VCO Freq Cal - No Bypass, VCO Amp Cal - No Bypass */ - { MT_RF(0, 4), 0x30}, /* R4 b<7>=1, VCO cal */ - { MT_RF(0, 5), 0x00}, - { MT_RF(0, 6), 0x41}, /* Set the open loop amplitude to middle since bypassing amplitude calibration */ - { MT_RF(0, 7), 0x00}, - - /* - XO - */ - { MT_RF(0, 8), 0x00}, - { MT_RF(0, 9), 0x00}, - { MT_RF(0, 10), 0x0C}, - { MT_RF(0, 11), 0x00}, - { MT_RF(0, 12), 0x00}, - - /* - BG - */ - { MT_RF(0, 13), 0x00}, - { MT_RF(0, 14), 0x00}, - { MT_RF(0, 15), 0x00}, - - /* - LDO - */ - { MT_RF(0, 19), 0x20}, - /* - XO - */ - { MT_RF(0, 20), 0x22}, - { MT_RF(0, 21), 0x12}, - { MT_RF(0, 23), 0x00}, - { MT_RF(0, 24), 0x33}, /* See band selection for R24<1:0> */ - { MT_RF(0, 25), 0x00}, - - /* - PLL, See Freq Selection - */ - { MT_RF(0, 26), 0x00}, - { MT_RF(0, 27), 0x00}, - { MT_RF(0, 28), 0x00}, - { MT_RF(0, 29), 0x00}, - { MT_RF(0, 30), 0x00}, - { MT_RF(0, 31), 0x00}, - { MT_RF(0, 32), 0x00}, - { MT_RF(0, 33), 0x00}, - { MT_RF(0, 34), 0x00}, - { MT_RF(0, 35), 0x00}, - { MT_RF(0, 36), 0x00}, - { MT_RF(0, 37), 0x00}, - - /* - LO Buffer - */ - { MT_RF(0, 38), 0x2F}, - - /* - Test Ports - */ - { MT_RF(0, 64), 0x00}, - { MT_RF(0, 65), 0x80}, - { MT_RF(0, 66), 0x01}, - { MT_RF(0, 67), 0x04}, - - /* - ADC/DAC - */ - { MT_RF(0, 68), 0x00}, - { MT_RF(0, 69), 0x08}, - { MT_RF(0, 70), 0x08}, - { MT_RF(0, 71), 0x40}, - { MT_RF(0, 72), 0xD0}, - { MT_RF(0, 73), 0x93}, + { MT_RF(0, 1), 0x01 }, + { MT_RF(0, 2), 0x11 }, + /* R3 ~ R7: VCO Cal */ + { MT_RF(0, 3), 0x73 }, /* VCO Freq Cal */ + { MT_RF(0, 4), 0x30 }, /* R4 b<7>=1, VCO cal */ + { MT_RF(0, 5), 0x00 }, + { MT_RF(0, 6), 0x41 }, + { MT_RF(0, 7), 0x00 }, + { MT_RF(0, 8), 0x00 }, + { MT_RF(0, 9), 0x00 }, + { MT_RF(0, 10), 0x0C }, + { MT_RF(0, 11), 0x00 }, + { MT_RF(0, 12), 0x00 }, + /* BG */ + { MT_RF(0, 13), 0x00 }, + { MT_RF(0, 14), 0x00 }, + { MT_RF(0, 15), 0x00 }, + /* LDO */ + { MT_RF(0, 19), 0x20 }, + { MT_RF(0, 20), 0x22 }, + { MT_RF(0, 21), 0x12 }, + { MT_RF(0, 23), 0x00 }, + { MT_RF(0, 24), 0x33 }, + { MT_RF(0, 25), 0x00 }, + /* PLL */ + { MT_RF(0, 26), 0x00 }, + { MT_RF(0, 27), 0x00 }, + { MT_RF(0, 28), 0x00 }, + { MT_RF(0, 29), 0x00 }, + { MT_RF(0, 30), 0x00 }, + { MT_RF(0, 31), 0x00 }, + { MT_RF(0, 32), 0x00 }, + { MT_RF(0, 33), 0x00 }, + { MT_RF(0, 34), 0x00 }, + { MT_RF(0, 35), 0x00 }, + { MT_RF(0, 36), 0x00 }, + { MT_RF(0, 37), 0x00 }, + /* LO Buffer */ + { MT_RF(0, 38), 0x2F }, + /* Test Ports */ + { MT_RF(0, 64), 0x00 }, + { MT_RF(0, 65), 0x80 }, + { MT_RF(0, 66), 0x01 }, + { MT_RF(0, 67), 0x04 }, + /* ADC-DAC */ + { MT_RF(0, 68), 0x00 }, + { MT_RF(0, 69), 0x08 }, + { MT_RF(0, 70), 0x08 }, + { MT_RF(0, 71), 0x40 }, + { MT_RF(0, 72), 0xD0 }, + { MT_RF(0, 73), 0x93 }, }; static const struct mt76_reg_pair mt76x0_rf_2g_channel_0_tab[] = { -/* - Bank 5 - Channel 0 2G RF registers -*/ - /* - RX logic operation - */ - /* RF_R00 Change in SelectBand6590 */ - - { MT_RF(5, 2), 0x0C}, /* 5G+2G (MT7610U) */ - { MT_RF(5, 3), 0x00}, - - /* - TX logic operation - */ - { MT_RF(5, 4), 0x00}, - { MT_RF(5, 5), 0x84}, - { MT_RF(5, 6), 0x02}, - - /* - LDO - */ - { MT_RF(5, 7), 0x00}, - { MT_RF(5, 8), 0x00}, - { MT_RF(5, 9), 0x00}, - - /* - RX - */ - { MT_RF(5, 10), 0x51}, - { MT_RF(5, 11), 0x22}, - { MT_RF(5, 12), 0x22}, - { MT_RF(5, 13), 0x0F}, - { MT_RF(5, 14), 0x47}, /* Increase mixer current for more gain */ - { MT_RF(5, 15), 0x25}, - { MT_RF(5, 16), 0xC7}, /* Tune LNA2 tank */ - { MT_RF(5, 17), 0x00}, - { MT_RF(5, 18), 0x00}, - { MT_RF(5, 19), 0x30}, /* Improve max Pin */ - { MT_RF(5, 20), 0x33}, - { 
MT_RF(5, 21), 0x02}, - { MT_RF(5, 22), 0x32}, /* Tune LNA1 tank */ - { MT_RF(5, 23), 0x00}, - { MT_RF(5, 24), 0x25}, - { MT_RF(5, 26), 0x00}, - { MT_RF(5, 27), 0x12}, - { MT_RF(5, 28), 0x0F}, - { MT_RF(5, 29), 0x00}, - - /* - LOGEN - */ - { MT_RF(5, 30), 0x51}, /* Tune LOGEN tank */ - { MT_RF(5, 31), 0x35}, - { MT_RF(5, 32), 0x31}, - { MT_RF(5, 33), 0x31}, - { MT_RF(5, 34), 0x34}, - { MT_RF(5, 35), 0x03}, - { MT_RF(5, 36), 0x00}, - - /* - TX - */ - { MT_RF(5, 37), 0xDD}, /* Improve 3.2GHz spur */ - { MT_RF(5, 38), 0xB3}, - { MT_RF(5, 39), 0x33}, - { MT_RF(5, 40), 0xB1}, - { MT_RF(5, 41), 0x71}, - { MT_RF(5, 42), 0xF2}, - { MT_RF(5, 43), 0x47}, - { MT_RF(5, 44), 0x77}, - { MT_RF(5, 45), 0x0E}, - { MT_RF(5, 46), 0x10}, - { MT_RF(5, 47), 0x00}, - { MT_RF(5, 48), 0x53}, - { MT_RF(5, 49), 0x03}, - { MT_RF(5, 50), 0xEF}, - { MT_RF(5, 51), 0xC7}, - { MT_RF(5, 52), 0x62}, - { MT_RF(5, 53), 0x62}, - { MT_RF(5, 54), 0x00}, - { MT_RF(5, 55), 0x00}, - { MT_RF(5, 56), 0x0F}, - { MT_RF(5, 57), 0x0F}, - { MT_RF(5, 58), 0x16}, - { MT_RF(5, 59), 0x16}, - { MT_RF(5, 60), 0x10}, - { MT_RF(5, 61), 0x10}, - { MT_RF(5, 62), 0xD0}, - { MT_RF(5, 63), 0x6C}, - { MT_RF(5, 64), 0x58}, - { MT_RF(5, 65), 0x58}, - { MT_RF(5, 66), 0xF2}, - { MT_RF(5, 67), 0xE8}, - { MT_RF(5, 68), 0xF0}, - { MT_RF(5, 69), 0xF0}, - { MT_RF(5, 127), 0x04}, + /* RX logic operation */ + { MT_RF(5, 2), 0x0C }, /* 5G+2G */ + { MT_RF(5, 3), 0x00 }, + /* TX logic operation */ + { MT_RF(5, 4), 0x00 }, + { MT_RF(5, 5), 0x84 }, + { MT_RF(5, 6), 0x02 }, + /* LDO */ + { MT_RF(5, 7), 0x00 }, + { MT_RF(5, 8), 0x00 }, + { MT_RF(5, 9), 0x00 }, + /* RX */ + { MT_RF(5, 10), 0x51 }, + { MT_RF(5, 11), 0x22 }, + { MT_RF(5, 12), 0x22 }, + { MT_RF(5, 13), 0x0F }, + { MT_RF(5, 14), 0x47 }, + { MT_RF(5, 15), 0x25 }, + { MT_RF(5, 16), 0xC7 }, + { MT_RF(5, 17), 0x00 }, + { MT_RF(5, 18), 0x00 }, + { MT_RF(5, 19), 0x30 }, + { MT_RF(5, 20), 0x33 }, + { MT_RF(5, 21), 0x02 }, + { MT_RF(5, 22), 0x32 }, + { MT_RF(5, 23), 0x00 }, + { MT_RF(5, 24), 0x25 }, + { MT_RF(5, 26), 0x00 }, + { MT_RF(5, 27), 0x12 }, + { MT_RF(5, 28), 0x0F }, + { MT_RF(5, 29), 0x00 }, + /* LOGEN */ + { MT_RF(5, 30), 0x51 }, + { MT_RF(5, 31), 0x35 }, + { MT_RF(5, 32), 0x31 }, + { MT_RF(5, 33), 0x31 }, + { MT_RF(5, 34), 0x34 }, + { MT_RF(5, 35), 0x03 }, + { MT_RF(5, 36), 0x00 }, + /* TX */ + { MT_RF(5, 37), 0xDD }, + { MT_RF(5, 38), 0xB3 }, + { MT_RF(5, 39), 0x33 }, + { MT_RF(5, 40), 0xB1 }, + { MT_RF(5, 41), 0x71 }, + { MT_RF(5, 42), 0xF2 }, + { MT_RF(5, 43), 0x47 }, + { MT_RF(5, 44), 0x77 }, + { MT_RF(5, 45), 0x0E }, + { MT_RF(5, 46), 0x10 }, + { MT_RF(5, 47), 0x00 }, + { MT_RF(5, 48), 0x53 }, + { MT_RF(5, 49), 0x03 }, + { MT_RF(5, 50), 0xEF }, + { MT_RF(5, 51), 0xC7 }, + { MT_RF(5, 52), 0x62 }, + { MT_RF(5, 53), 0x62 }, + { MT_RF(5, 54), 0x00 }, + { MT_RF(5, 55), 0x00 }, + { MT_RF(5, 56), 0x0F }, + { MT_RF(5, 57), 0x0F }, + { MT_RF(5, 58), 0x16 }, + { MT_RF(5, 59), 0x16 }, + { MT_RF(5, 60), 0x10 }, + { MT_RF(5, 61), 0x10 }, + { MT_RF(5, 62), 0xD0 }, + { MT_RF(5, 63), 0x6C }, + { MT_RF(5, 64), 0x58 }, + { MT_RF(5, 65), 0x58 }, + { MT_RF(5, 66), 0xF2 }, + { MT_RF(5, 67), 0xE8 }, + { MT_RF(5, 68), 0xF0 }, + { MT_RF(5, 69), 0xF0 }, + { MT_RF(5, 127), 0x04 }, }; static const struct mt76_reg_pair mt76x0_rf_5g_channel_0_tab[] = { -/* - Bank 6 - Channel 0 5G RF registers -*/ - /* - RX logic operation - */ - /* RF_R00 Change in SelectBandmt76x0 */ - - { MT_RF(6, 2), 0x0C}, - { MT_RF(6, 3), 0x00}, - - /* - TX logic operation - */ - { MT_RF(6, 4), 0x00}, - { MT_RF(6, 5), 0x84}, - { MT_RF(6, 6), 0x02}, - - /* - 
LDO - */ - { MT_RF(6, 7), 0x00}, - { MT_RF(6, 8), 0x00}, - { MT_RF(6, 9), 0x00}, - - /* - RX - */ - { MT_RF(6, 10), 0x00}, - { MT_RF(6, 11), 0x01}, - - { MT_RF(6, 13), 0x23}, - { MT_RF(6, 14), 0x00}, - { MT_RF(6, 15), 0x04}, - { MT_RF(6, 16), 0x22}, - - { MT_RF(6, 18), 0x08}, - { MT_RF(6, 19), 0x00}, - { MT_RF(6, 20), 0x00}, - { MT_RF(6, 21), 0x00}, - { MT_RF(6, 22), 0xFB}, - - /* - LOGEN5G - */ - { MT_RF(6, 25), 0x76}, - { MT_RF(6, 26), 0x24}, - { MT_RF(6, 27), 0x04}, - { MT_RF(6, 28), 0x00}, - { MT_RF(6, 29), 0x00}, - - /* - TX - */ - { MT_RF(6, 37), 0xBB}, - { MT_RF(6, 38), 0xB3}, - - { MT_RF(6, 40), 0x33}, - { MT_RF(6, 41), 0x33}, - - { MT_RF(6, 43), 0x03}, - { MT_RF(6, 44), 0xB3}, - - { MT_RF(6, 46), 0x17}, - { MT_RF(6, 47), 0x0E}, - { MT_RF(6, 48), 0x10}, - { MT_RF(6, 49), 0x07}, - - { MT_RF(6, 62), 0x00}, - { MT_RF(6, 63), 0x00}, - { MT_RF(6, 64), 0xF1}, - { MT_RF(6, 65), 0x0F}, + /* RX logic operation */ + { MT_RF(6, 2), 0x0C }, + { MT_RF(6, 3), 0x00 }, + /* TX logic operation */ + { MT_RF(6, 4), 0x00 }, + { MT_RF(6, 5), 0x84 }, + { MT_RF(6, 6), 0x02 }, + /* LDO */ + { MT_RF(6, 7), 0x00 }, + { MT_RF(6, 8), 0x00 }, + { MT_RF(6, 9), 0x00 }, + /* RX */ + { MT_RF(6, 10), 0x00 }, + { MT_RF(6, 11), 0x01 }, + { MT_RF(6, 13), 0x23 }, + { MT_RF(6, 14), 0x00 }, + { MT_RF(6, 15), 0x04 }, + { MT_RF(6, 16), 0x22 }, + { MT_RF(6, 18), 0x08 }, + { MT_RF(6, 19), 0x00 }, + { MT_RF(6, 20), 0x00 }, + { MT_RF(6, 21), 0x00 }, + { MT_RF(6, 22), 0xFB }, + /* LOGEN5G */ + { MT_RF(6, 25), 0x76 }, + { MT_RF(6, 26), 0x24 }, + { MT_RF(6, 27), 0x04 }, + { MT_RF(6, 28), 0x00 }, + { MT_RF(6, 29), 0x00 }, + /* TX */ + { MT_RF(6, 37), 0xBB }, + { MT_RF(6, 38), 0xB3 }, + { MT_RF(6, 40), 0x33 }, + { MT_RF(6, 41), 0x33 }, + { MT_RF(6, 43), 0x03 }, + { MT_RF(6, 44), 0xB3 }, + { MT_RF(6, 46), 0x17 }, + { MT_RF(6, 47), 0x0E }, + { MT_RF(6, 48), 0x10 }, + { MT_RF(6, 49), 0x07 }, + { MT_RF(6, 62), 0x00 }, + { MT_RF(6, 63), 0x00 }, + { MT_RF(6, 64), 0xF1 }, + { MT_RF(6, 65), 0x0F }, }; static const struct mt76_reg_pair mt76x0_rf_vga_channel_0_tab[] = { -/* - Bank 7 - Channel 0 VGA RF registers -*/ /* E3 CR */ - { MT_RF(7, 0), 0x47}, /* Allow BBP/MAC to do calibration */ - { MT_RF(7, 1), 0x00}, - { MT_RF(7, 2), 0x00}, - { MT_RF(7, 3), 0x00}, - { MT_RF(7, 4), 0x00}, - - { MT_RF(7, 10), 0x13}, - { MT_RF(7, 11), 0x0F}, - { MT_RF(7, 12), 0x13}, /* For dcoc */ - { MT_RF(7, 13), 0x13}, /* For dcoc */ - { MT_RF(7, 14), 0x13}, /* For dcoc */ - { MT_RF(7, 15), 0x20}, /* For dcoc */ - { MT_RF(7, 16), 0x22}, /* For dcoc */ - - { MT_RF(7, 17), 0x7C}, - - { MT_RF(7, 18), 0x00}, - { MT_RF(7, 19), 0x00}, - { MT_RF(7, 20), 0x00}, - { MT_RF(7, 21), 0xF1}, - { MT_RF(7, 22), 0x11}, - { MT_RF(7, 23), 0xC2}, - { MT_RF(7, 24), 0x41}, - { MT_RF(7, 25), 0x20}, - { MT_RF(7, 26), 0x40}, - { MT_RF(7, 27), 0xD7}, - { MT_RF(7, 28), 0xA2}, - { MT_RF(7, 29), 0x60}, - { MT_RF(7, 30), 0x49}, - { MT_RF(7, 31), 0x20}, - { MT_RF(7, 32), 0x44}, - { MT_RF(7, 33), 0xC1}, - { MT_RF(7, 34), 0x60}, - { MT_RF(7, 35), 0xC0}, - - { MT_RF(7, 61), 0x01}, - - { MT_RF(7, 72), 0x3C}, - { MT_RF(7, 73), 0x34}, - { MT_RF(7, 74), 0x00}, + { MT_RF(7, 0), 0x47 }, + { MT_RF(7, 1), 0x00 }, + { MT_RF(7, 2), 0x00 }, + { MT_RF(7, 3), 0x00 }, + { MT_RF(7, 4), 0x00 }, + { MT_RF(7, 10), 0x13 }, + { MT_RF(7, 11), 0x0F }, + { MT_RF(7, 12), 0x13 }, + { MT_RF(7, 13), 0x13 }, + { MT_RF(7, 14), 0x13 }, + { MT_RF(7, 15), 0x20 }, + { MT_RF(7, 16), 0x22 }, + { MT_RF(7, 17), 0x7C }, + { MT_RF(7, 18), 0x00 }, + { MT_RF(7, 19), 0x00 }, + { MT_RF(7, 20), 0x00 }, + { MT_RF(7, 21), 0xF1 }, + { MT_RF(7, 
22), 0x11 }, + { MT_RF(7, 23), 0xC2 }, + { MT_RF(7, 24), 0x41 }, + { MT_RF(7, 25), 0x20 }, + { MT_RF(7, 26), 0x40 }, + { MT_RF(7, 27), 0xD7 }, + { MT_RF(7, 28), 0xA2 }, + { MT_RF(7, 29), 0x60 }, + { MT_RF(7, 30), 0x49 }, + { MT_RF(7, 31), 0x20 }, + { MT_RF(7, 32), 0x44 }, + { MT_RF(7, 33), 0xC1 }, + { MT_RF(7, 34), 0x60 }, + { MT_RF(7, 35), 0xC0 }, + { MT_RF(7, 61), 0x01 }, + { MT_RF(7, 72), 0x3C }, + { MT_RF(7, 73), 0x34 }, + { MT_RF(7, 74), 0x00 }, }; static const struct mt76x0_rf_switch_item mt76x0_rf_bw_switch_tab[] = { - /* Bank, Register, Bw/Band, Value */ - { MT_RF(0, 17), RF_G_BAND | RF_BW_20, 0x00}, - { MT_RF(0, 17), RF_G_BAND | RF_BW_40, 0x00}, - { MT_RF(0, 17), RF_A_BAND | RF_BW_20, 0x00}, - { MT_RF(0, 17), RF_A_BAND | RF_BW_40, 0x00}, - { MT_RF(0, 17), RF_A_BAND | RF_BW_80, 0x00}, - - /* TODO: need to check B7.R6 & B7.R7 setting for 2.4G again @20121112 */ - { MT_RF(7, 6), RF_G_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 6), RF_G_BAND | RF_BW_40, 0x1C}, - { MT_RF(7, 6), RF_A_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 6), RF_A_BAND | RF_BW_40, 0x20}, - { MT_RF(7, 6), RF_A_BAND | RF_BW_80, 0x10}, - - { MT_RF(7, 7), RF_G_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 7), RF_G_BAND | RF_BW_40, 0x20}, - { MT_RF(7, 7), RF_A_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 7), RF_A_BAND | RF_BW_40, 0x20}, - { MT_RF(7, 7), RF_A_BAND | RF_BW_80, 0x10}, - - { MT_RF(7, 8), RF_G_BAND | RF_BW_20, 0x03}, - { MT_RF(7, 8), RF_G_BAND | RF_BW_40, 0x01}, - { MT_RF(7, 8), RF_A_BAND | RF_BW_20, 0x03}, - { MT_RF(7, 8), RF_A_BAND | RF_BW_40, 0x01}, - { MT_RF(7, 8), RF_A_BAND | RF_BW_80, 0x00}, - - /* TODO: need to check B7.R58 & B7.R59 setting for 2.4G again @20121112 */ - { MT_RF(7, 58), RF_G_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 58), RF_G_BAND | RF_BW_40, 0x40}, - { MT_RF(7, 58), RF_A_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 58), RF_A_BAND | RF_BW_40, 0x40}, - { MT_RF(7, 58), RF_A_BAND | RF_BW_80, 0x10}, - - { MT_RF(7, 59), RF_G_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 59), RF_G_BAND | RF_BW_40, 0x40}, - { MT_RF(7, 59), RF_A_BAND | RF_BW_20, 0x40}, - { MT_RF(7, 59), RF_A_BAND | RF_BW_40, 0x40}, - { MT_RF(7, 59), RF_A_BAND | RF_BW_80, 0x10}, - - { MT_RF(7, 60), RF_G_BAND | RF_BW_20, 0xAA}, - { MT_RF(7, 60), RF_G_BAND | RF_BW_40, 0xAA}, - { MT_RF(7, 60), RF_A_BAND | RF_BW_20, 0xAA}, - { MT_RF(7, 60), RF_A_BAND | RF_BW_40, 0xAA}, - { MT_RF(7, 60), RF_A_BAND | RF_BW_80, 0xAA}, - - { MT_RF(7, 76), RF_BW_20, 0x40}, - { MT_RF(7, 76), RF_BW_40, 0x40}, - { MT_RF(7, 76), RF_BW_80, 0x10}, - - { MT_RF(7, 77), RF_BW_20, 0x40}, - { MT_RF(7, 77), RF_BW_40, 0x40}, - { MT_RF(7, 77), RF_BW_80, 0x10}, + /* bank, reg bw/band value */ + { MT_RF(0, 17), RF_G_BAND | RF_BW_20, 0x00 }, + { MT_RF(0, 17), RF_G_BAND | RF_BW_40, 0x00 }, + { MT_RF(0, 17), RF_A_BAND | RF_BW_20, 0x00 }, + { MT_RF(0, 17), RF_A_BAND | RF_BW_40, 0x00 }, + { MT_RF(0, 17), RF_A_BAND | RF_BW_80, 0x00 }, + { MT_RF(7, 6), RF_G_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 6), RF_G_BAND | RF_BW_40, 0x1C }, + { MT_RF(7, 6), RF_A_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 6), RF_A_BAND | RF_BW_40, 0x20 }, + { MT_RF(7, 6), RF_A_BAND | RF_BW_80, 0x10 }, + { MT_RF(7, 7), RF_G_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 7), RF_G_BAND | RF_BW_40, 0x20 }, + { MT_RF(7, 7), RF_A_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 7), RF_A_BAND | RF_BW_40, 0x20 }, + { MT_RF(7, 7), RF_A_BAND | RF_BW_80, 0x10 }, + { MT_RF(7, 8), RF_G_BAND | RF_BW_20, 0x03 }, + { MT_RF(7, 8), RF_G_BAND | RF_BW_40, 0x01 }, + { MT_RF(7, 8), RF_A_BAND | RF_BW_20, 0x03 }, + { MT_RF(7, 8), RF_A_BAND | RF_BW_40, 0x01 }, + { MT_RF(7, 8), RF_A_BAND | RF_BW_80, 0x00 }, 
+ { MT_RF(7, 58), RF_G_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 58), RF_G_BAND | RF_BW_40, 0x40 }, + { MT_RF(7, 58), RF_A_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 58), RF_A_BAND | RF_BW_40, 0x40 }, + { MT_RF(7, 58), RF_A_BAND | RF_BW_80, 0x10 }, + { MT_RF(7, 59), RF_G_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 59), RF_G_BAND | RF_BW_40, 0x40 }, + { MT_RF(7, 59), RF_A_BAND | RF_BW_20, 0x40 }, + { MT_RF(7, 59), RF_A_BAND | RF_BW_40, 0x40 }, + { MT_RF(7, 59), RF_A_BAND | RF_BW_80, 0x10 }, + { MT_RF(7, 60), RF_G_BAND | RF_BW_20, 0xAA }, + { MT_RF(7, 60), RF_G_BAND | RF_BW_40, 0xAA }, + { MT_RF(7, 60), RF_A_BAND | RF_BW_20, 0xAA }, + { MT_RF(7, 60), RF_A_BAND | RF_BW_40, 0xAA }, + { MT_RF(7, 60), RF_A_BAND | RF_BW_80, 0xAA }, + { MT_RF(7, 76), RF_BW_20, 0x40 }, + { MT_RF(7, 76), RF_BW_40, 0x40 }, + { MT_RF(7, 76), RF_BW_80, 0x10 }, + { MT_RF(7, 77), RF_BW_20, 0x40 }, + { MT_RF(7, 77), RF_BW_40, 0x40 }, + { MT_RF(7, 77), RF_BW_80, 0x10 }, }; static const struct mt76x0_rf_switch_item mt76x0_rf_band_switch_tab[] = { - /* Bank, Register, Bw/Band, Value */ - { MT_RF(0, 16), RF_G_BAND, 0x20}, - { MT_RF(0, 16), RF_A_BAND, 0x20}, - - { MT_RF(0, 18), RF_G_BAND, 0x00}, - { MT_RF(0, 18), RF_A_BAND, 0x00}, - - { MT_RF(0, 39), RF_G_BAND, 0x36}, - { MT_RF(0, 39), RF_A_BAND_LB, 0x34}, - { MT_RF(0, 39), RF_A_BAND_MB, 0x33}, - { MT_RF(0, 39), RF_A_BAND_HB, 0x31}, - { MT_RF(0, 39), RF_A_BAND_11J, 0x36}, - - { MT_RF(6, 12), RF_A_BAND_LB, 0x44}, - { MT_RF(6, 12), RF_A_BAND_MB, 0x44}, - { MT_RF(6, 12), RF_A_BAND_HB, 0x55}, - { MT_RF(6, 12), RF_A_BAND_11J, 0x44}, - - { MT_RF(6, 17), RF_A_BAND_LB, 0x02}, - { MT_RF(6, 17), RF_A_BAND_MB, 0x00}, - { MT_RF(6, 17), RF_A_BAND_HB, 0x00}, - { MT_RF(6, 17), RF_A_BAND_11J, 0x05}, - - { MT_RF(6, 24), RF_A_BAND_LB, 0xA1}, - { MT_RF(6, 24), RF_A_BAND_MB, 0x41}, - { MT_RF(6, 24), RF_A_BAND_HB, 0x21}, - { MT_RF(6, 24), RF_A_BAND_11J, 0xE1}, - - { MT_RF(6, 39), RF_A_BAND_LB, 0x36}, - { MT_RF(6, 39), RF_A_BAND_MB, 0x34}, - { MT_RF(6, 39), RF_A_BAND_HB, 0x32}, - { MT_RF(6, 39), RF_A_BAND_11J, 0x37}, - - { MT_RF(6, 42), RF_A_BAND_LB, 0xFB}, - { MT_RF(6, 42), RF_A_BAND_MB, 0xF3}, - { MT_RF(6, 42), RF_A_BAND_HB, 0xEB}, - { MT_RF(6, 42), RF_A_BAND_11J, 0xEB}, - - /* Move R6-R45, R50~R59 to mt76x0_RF_INT_PA_5G_Channel_0_RegTb/mt76x0_RF_EXT_PA_5G_Channel_0_RegTb */ - - { MT_RF(6, 127), RF_G_BAND, 0x84}, - { MT_RF(6, 127), RF_A_BAND, 0x04}, - - { MT_RF(7, 5), RF_G_BAND, 0x40}, - { MT_RF(7, 5), RF_A_BAND, 0x00}, - - { MT_RF(7, 9), RF_G_BAND, 0x00}, - { MT_RF(7, 9), RF_A_BAND, 0x00}, - - { MT_RF(7, 70), RF_G_BAND, 0x00}, - { MT_RF(7, 70), RF_A_BAND, 0x6D}, - - { MT_RF(7, 71), RF_G_BAND, 0x00}, - { MT_RF(7, 71), RF_A_BAND, 0xB0}, - - { MT_RF(7, 78), RF_G_BAND, 0x00}, - { MT_RF(7, 78), RF_A_BAND, 0x55}, - - { MT_RF(7, 79), RF_G_BAND, 0x00}, - { MT_RF(7, 79), RF_A_BAND, 0x55}, + /* bank, reg bw/band value */ + { MT_RF(0, 16), RF_G_BAND, 0x20 }, + { MT_RF(0, 16), RF_A_BAND, 0x20 }, + { MT_RF(0, 18), RF_G_BAND, 0x00 }, + { MT_RF(0, 18), RF_A_BAND, 0x00 }, + { MT_RF(0, 39), RF_G_BAND, 0x36 }, + { MT_RF(0, 39), RF_A_BAND_LB, 0x34 }, + { MT_RF(0, 39), RF_A_BAND_MB, 0x33 }, + { MT_RF(0, 39), RF_A_BAND_HB, 0x31 }, + { MT_RF(0, 39), RF_A_BAND_11J, 0x36 }, + { MT_RF(6, 12), RF_A_BAND_LB, 0x44 }, + { MT_RF(6, 12), RF_A_BAND_MB, 0x44 }, + { MT_RF(6, 12), RF_A_BAND_HB, 0x55 }, + { MT_RF(6, 12), RF_A_BAND_11J, 0x44 }, + { MT_RF(6, 17), RF_A_BAND_LB, 0x02 }, + { MT_RF(6, 17), RF_A_BAND_MB, 0x00 }, + { MT_RF(6, 17), RF_A_BAND_HB, 0x00 }, + { MT_RF(6, 17), RF_A_BAND_11J, 0x05 }, + { MT_RF(6, 24), RF_A_BAND_LB, 0xA1 }, + { 
MT_RF(6, 24), RF_A_BAND_MB, 0x41 }, + { MT_RF(6, 24), RF_A_BAND_HB, 0x21 }, + { MT_RF(6, 24), RF_A_BAND_11J, 0xE1 }, + { MT_RF(6, 39), RF_A_BAND_LB, 0x36 }, + { MT_RF(6, 39), RF_A_BAND_MB, 0x34 }, + { MT_RF(6, 39), RF_A_BAND_HB, 0x32 }, + { MT_RF(6, 39), RF_A_BAND_11J, 0x37 }, + { MT_RF(6, 42), RF_A_BAND_LB, 0xFB }, + { MT_RF(6, 42), RF_A_BAND_MB, 0xF3 }, + { MT_RF(6, 42), RF_A_BAND_HB, 0xEB }, + { MT_RF(6, 42), RF_A_BAND_11J, 0xEB }, + { MT_RF(6, 127), RF_G_BAND, 0x84 }, + { MT_RF(6, 127), RF_A_BAND, 0x04 }, + { MT_RF(7, 5), RF_G_BAND, 0x40 }, + { MT_RF(7, 5), RF_A_BAND, 0x00 }, + { MT_RF(7, 9), RF_G_BAND, 0x00 }, + { MT_RF(7, 9), RF_A_BAND, 0x00 }, + { MT_RF(7, 70), RF_G_BAND, 0x00 }, + { MT_RF(7, 70), RF_A_BAND, 0x6D }, + { MT_RF(7, 71), RF_G_BAND, 0x00 }, + { MT_RF(7, 71), RF_A_BAND, 0xB0 }, + { MT_RF(7, 78), RF_G_BAND, 0x00 }, + { MT_RF(7, 78), RF_A_BAND, 0x55 }, + { MT_RF(7, 79), RF_G_BAND, 0x00 }, + { MT_RF(7, 79), RF_A_BAND, 0x55 }, }; static const struct mt76x0_freq_item mt76x0_frequency_plan[] = { - {1, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xE2, 0x40, 0x02, 0x40, 0x02, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3}, /* Freq 2412 */ - {2, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xE4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA1, 0, 0x30, 0, 0, 0x1}, /* Freq 2417 */ - {3, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xE2, 0x40, 0x07, 0x40, 0x0B, 0, 0, 1, 0x50, 0, 0x30, 0, 0, 0x0}, /* Freq 2422 */ - {4, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xD4, 0x40, 0x02, 0x40, 0x09, 0, 0, 1, 0x50, 0, 0x30, 0, 0, 0x0}, /* Freq 2427 */ - {5, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA2, 0, 0x30, 0, 0, 0x1}, /* Freq 2432 */ - {6, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x07, 0, 0, 1, 0xA2, 0, 0x30, 0, 0, 0x1}, /* Freq 2437 */ - {7, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xE2, 0x40, 0x02, 0x40, 0x07, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3}, /* Freq 2442 */ - {8, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA3, 0, 0x30, 0, 0, 0x1}, /* Freq 2447 */ - {9, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xF2, 0x40, 0x07, 0x40, 0x0D, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3}, /* Freq 2452 */ - {10, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xD4, 0x40, 0x02, 0x40, 0x09, 0, 0, 1, 0x51, 0, 0x30, 0, 0, 0x0}, /* Freq 2457 */ - {11, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA4, 0, 0x30, 0, 0, 0x1}, /* Freq 2462 */ - {12, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x07, 0, 0, 1, 0xA4, 0, 0x30, 0, 0, 0x1}, /* Freq 2467 */ - {13, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xF2, 0x40, 0x02, 0x40, 0x02, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 2472 */ - {14, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xF2, 0x40, 0x02, 0x40, 0x04, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 2484 */ - - {183, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3}, /* Freq 4915 */ - {184, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x00, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 4920 */ - {185, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 4925 */ - {187, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 4935 */ - {188, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 4940 */ - {189, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 
0x05, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 4945 */ - {192, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 4960 */ - {196, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3}, /* Freq 4980 */ - - {36, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5180 */ - {37, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5185 */ - {38, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5190 */ - {39, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5195 */ - {40, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5200 */ - {41, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5205 */ - {42, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5210 */ - {43, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5215 */ - {44, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5220 */ - {45, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5225 */ - {46, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5230 */ - {47, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5235 */ - {48, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5240 */ - {49, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5245 */ - {50, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5250 */ - {51, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5255 */ - {52, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5260 */ - {53, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5265 */ - {54, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5270 */ - {55, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3}, /* Freq 5275 */ - {56, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5280 */ - {57, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5285 */ - {58, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x2C, 0, 
0x30, 0, 0, 0x3}, /* Freq 5290 */
- {59, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5295 */
- {60, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5300 */
- {61, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5305 */
- {62, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5310 */
- {63, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5315 */
- {64, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3}, /* Freq 5320 */
-
- {100, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3}, /* Freq 5500 */
- {101, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3}, /* Freq 5505 */
- {102, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3}, /* Freq 5510 */
- {103, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3}, /* Freq 5515 */
- {104, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5520 */
- {105, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5525 */
- {106, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5530 */
- {107, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5535 */
- {108, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5540 */
- {109, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5545 */
- {110, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5550 */
- {111, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5555 */
- {112, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5560 */
- {113, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5565 */
- {114, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5570 */
- {115, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5575 */
- {116, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5580 */
- {117, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5585 */
- {118, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5590 */
- {119, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5595 */
- {120, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5600 */
- {121, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5605 */
- {122, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5610 */
- {123, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5615 */
- {124, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5620 */
- {125, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5625 */
- {126, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5630 */
- {127, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3}, /* Freq 5635 */
- {128, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5640 */
- {129, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5645 */
- {130, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5650 */
- {131, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5655 */
- {132, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5660 */
- {133, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5665 */
- {134, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5670 */
- {135, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5675 */
- {136, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5680 */
-
- {137, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5685 */
- {138, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5690 */
- {139, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5695 */
- {140, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5700 */
- {141, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5705 */
- {142, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5710 */
- {143, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5715 */
- {144, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5720 */
- {145, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5725 */
- {146, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5730 */
- {147, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5735 */
- {148, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5740 */
- {149, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5745 */
- {150, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5750 */
- {151, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3}, /* Freq 5755 */
- {152, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5760 */
- {153, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5765 */
- {154, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5770 */
- {155, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5775 */
- {156, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5780 */
- {157, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5785 */
- {158, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5790 */
- {159, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5795 */
- {160, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5800 */
- {161, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5805 */
- {162, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5810 */
- {163, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5815 */
- {164, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5820 */
- {165, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5825 */
- {166, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5830 */
- {167, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5835 */
- {168, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5840 */
- {169, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5845 */
- {170, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5850 */
- {171, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5855 */
- {172, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5860 */
- {173, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3}, /* Freq 5865 */
+ { 1, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xE2, 0x40, 0x02, 0x40, 0x02, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3 }, /* Freq 2412 */
+ { 2, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xE4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA1, 0, 0x30, 0, 0, 0x1 }, /* Freq 2417 */
+ { 3, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xE2, 0x40, 0x07, 0x40, 0x0B, 0, 0, 1, 0x50, 0, 0x30, 0, 0, 0x0 }, /* Freq 2422 */
+ { 4, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xD4, 0x40, 0x02, 0x40, 0x09, 0, 0, 1, 0x50, 0, 0x30, 0, 0, 0x0 }, /* Freq 2427 */
+ { 5, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA2, 0, 0x30, 0, 0, 0x1 }, /* Freq 2432 */
+ { 6, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x07, 0, 0, 1, 0xA2, 0, 0x30, 0, 0, 0x1 }, /* Freq 2437 */
+ { 7, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xE2, 0x40, 0x02, 0x40, 0x07, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3 }, /* Freq 2442 */
+ { 8, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA3, 0, 0x30, 0, 0, 0x1 }, /* Freq 2447 */
+ { 9, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xF2, 0x40, 0x07, 0x40, 0x0D, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3 }, /* Freq 2452 */
+ { 10, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xD4, 0x40, 0x02, 0x40, 0x09, 0, 0, 1, 0x51, 0, 0x30, 0, 0, 0x0 }, /* Freq 2457 */
+ { 11, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x02, 0, 0, 1, 0xA4, 0, 0x30, 0, 0, 0x1 }, /* Freq 2462 */
+ { 12, RF_G_BAND, 0x02, 0x3F, 0x3C, 0xDD, 0xD4, 0x40, 0x07, 0x40, 0x07, 0, 0, 1, 0xA4, 0, 0x30, 0, 0, 0x1 }, /* Freq 2467 */
+ { 13, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xF2, 0x40, 0x02, 0x40, 0x02, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 2472 */
+ { 14, RF_G_BAND, 0x02, 0x3F, 0x28, 0xDD, 0xF2, 0x40, 0x02, 0x40, 0x04, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 2484 */
+ { 183, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x28, 0, 0x30, 0, 0, 0x3 }, /* Freq 4915 */
+ { 184, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x00, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 4920 */
+ { 185, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 4925 */
+ { 187, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 4935 */
+ { 188, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 4940 */
+ { 189, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 4945 */
+ { 192, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 4960 */
+ { 196, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x29, 0, 0x30, 0, 0, 0x3 }, /* Freq 4980 */
+ { 36, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5180 */
+ { 37, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5185 */
+ { 38, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5190 */
+ { 39, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5195 */
+ { 40, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5200 */
+ { 41, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5205 */
+ { 42, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5210 */
+ { 43, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5215 */
+ { 44, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5220 */
+ { 45, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5225 */
+ { 46, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5230 */
+ { 47, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5235 */
+ { 48, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5240 */
+ { 49, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5245 */
+ { 50, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5250 */
+ { 51, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5255 */
+ { 52, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5260 */
+ { 53, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5265 */
+ { 54, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5270 */
+ { 55, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2B, 0, 0x30, 0, 0, 0x3 }, /* Freq 5275 */
+ { 56, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5280 */
+ { 57, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5285 */
+ { 58, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5290 */
+ { 59, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5295 */
+ { 60, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5300 */
+ { 61, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5305 */
+ { 62, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5310 */
+ { 63, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5315 */
+ { 64, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2C, 0, 0x30, 0, 0, 0x3 }, /* Freq 5320 */
+ { 100, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3 }, /* Freq 5500 */
+ { 101, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3 }, /* Freq 5505 */
+ { 102, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3 }, /* Freq 5510 */
+ { 103, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2D, 0, 0x30, 0, 0, 0x3 }, /* Freq 5515 */
+ { 104, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5520 */
+ { 105, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5525 */
+ { 106, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5530 */
+ { 107, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5535 */
+ { 108, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5540 */
+ { 109, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5545 */
+ { 110, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5550 */
+ { 111, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5555 */
+ { 112, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5560 */
+ { 113, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5565 */
+ { 114, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5570 */
+ { 115, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5575 */
+ { 116, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5580 */
+ { 117, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5585 */
+ { 118, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5590 */
+ { 119, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5595 */
+ { 120, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5600 */
+ { 121, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5605 */
+ { 122, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5610 */
+ { 123, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5615 */
+ { 124, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5620 */
+ { 125, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5625 */
+ { 126, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5630 */
+ { 127, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2E, 0, 0x30, 0, 0, 0x3 }, /* Freq 5635 */
+ { 128, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5640 */
+ { 129, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5645 */
+ { 130, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5650 */
+ { 131, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5655 */
+ { 132, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5660 */
+ { 133, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5665 */
+ { 134, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5670 */
+ { 135, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5675 */
+ { 136, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5680 */
+ { 137, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5685 */
+ { 138, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5690 */
+ { 139, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5695 */
+ { 140, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5700 */
+ { 141, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5705 */
+ { 142, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5710 */
+ { 143, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5715 */
+ { 144, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5720 */
+ { 145, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5725 */
+ { 146, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5730 */
+ { 147, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5735 */
+ { 148, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5740 */
+ { 149, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5745 */
+ { 150, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x0B, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5750 */
+ { 151, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x70, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x17, 0, 0, 1, 0x2F, 0, 0x30, 0, 0, 0x3 }, /* Freq 5755 */
+ { 152, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x00, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5760 */
+ { 153, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x01, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5765 */
+ { 154, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x01, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5770 */
+ { 155, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x03, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5775 */
+ { 156, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x02, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5780 */
+ { 157, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x05, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5785 */
+ { 158, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x03, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5790 */
+ { 159, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x07, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5795 */
+ { 160, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x04, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5800 */
+ { 161, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x09, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5805 */
+ { 162, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x05, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5810 */
+ { 163, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0B, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5815 */
+ { 164, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x06, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5820 */
+ { 165, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0D, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5825 */
+ { 166, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0xDD, 0xD2, 0x40, 0x04, 0x40, 0x07, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5830 */
+ { 167, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x0F, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5835 */
+ { 168, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x08, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5840 */
+ { 169, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x11, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5845 */
+ { 170, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x09, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5850 */
+ { 171, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x13, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5855 */
+ { 172, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x30, 0x97, 0xD2, 0x40, 0x04, 0x40, 0x0A, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5860 */
+ { 173, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x68, 0xDD, 0xD2, 0x40, 0x10, 0x40, 0x15, 0, 0, 1, 0x30, 0, 0x30, 0, 0, 0x3 }, /* Freq 5865 */
};

static const struct mt76x0_freq_item mt76x0_sdm_frequency_plan[] = {
- {1, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0xCCCC, 0x3}, /* Freq 2412 */
- {2, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x12222, 0x3}, /* Freq 2417 */
- {3, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x17777, 0x3}, /* Freq 2422 */
- {4, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x1CCCC, 0x3}, /* Freq 2427 */
- {5, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x22222, 0x3}, /* Freq 2432 */
- {6, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x27777, 0x3}, /* Freq 2437 */
- {7, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x2CCCC, 0x3}, /* Freq 2442 */
- {8, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x32222, 0x3}, /* Freq 2447 */
- {9, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x37777, 0x3}, /* Freq 2452 */
- {10, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x3CCCC, 0x3}, /* Freq 2457 */
- {11, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x2222, 0x3}, /* Freq 2462 */
- {12, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x7777, 0x3}, /* Freq 2467 */
- {13, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0xCCCC, 0x3}, /* Freq 2472 */
- {14, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x19999, 0x3}, /* Freq 2484 */
-
- {183, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x28, 0, 0x0, 0x8, 0x3D555, 0x3}, /* Freq 4915 */
- {184, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x0, 0x3}, /* Freq 4920 */
- {185, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x2AAA, 0x3}, /* Freq 4925 */
- {187, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x8000, 0x3}, /* Freq 4935 */
- {188, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0xAAAA, 0x3}, /* Freq 4940 */
- {189, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0xD555, 0x3}, /* Freq 4945 */
- {192, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x15555, 0x3}, /* Freq 4960 */
- {196, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x29, 0, 0x0, 0x8, 0x20000, 0x3}, /* Freq 4980 */
-
- {36, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0xAAAA, 0x3}, /* Freq 5180 */
- {37, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0xD555, 0x3}, /* Freq 5185 */
- {38, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x10000, 0x3}, /* Freq 5190 */
- {39, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x12AAA, 0x3}, /* Freq 5195 */
- {40, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x15555, 0x3}, /* Freq 5200 */
- {41, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x18000, 0x3}, /* Freq 5205 */
- {42, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x1AAAA, 0x3}, /* Freq 5210 */
- {43, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x1D555, 0x3}, /* Freq 5215 */
- {44, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x20000, 0x3}, /* Freq 5220 */
- {45, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x22AAA, 0x3}, /* Freq 5225 */
- {46, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x25555, 0x3}, /* Freq 5230 */
- {47, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x28000, 0x3}, /* Freq 5235 */
- {48, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x2AAAA, 0x3}, /* Freq 5240 */
- {49, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x2D555, 0x3}, /* Freq 5245 */
- {50, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x30000, 0x3}, /* Freq 5250 */
- {51, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x32AAA, 0x3}, /* Freq 5255 */
- {52, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x35555, 0x3}, /* Freq 5260 */
- {53, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x38000, 0x3}, /* Freq 5265 */
- {54, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x3AAAA, 0x3}, /* Freq 5270 */
- {55, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2B, 0, 0x0, 0x8, 0x3D555, 0x3}, /* Freq 5275 */
- {56, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x00000, 0x3}, /* Freq 5280 */
- {57, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x02AAA, 0x3}, /* Freq 5285 */
- {58, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x05555, 0x3}, /* Freq 5290 */
- {59, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x08000, 0x3}, /* Freq 5295 */
- {60, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x0AAAA, 0x3}, /* Freq 5300 */
- {61, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x0D555, 0x3}, /* Freq 5305 */
- {62, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x10000, 0x3}, /* Freq 5310 */
- {63, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x12AAA, 0x3}, /* Freq 5315 */
- {64, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2C, 0, 0x0, 0x8, 0x15555, 0x3}, /* Freq 5320 */
-
- {100, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2D, 0, 0x0, 0x8, 0x35555, 0x3}, /* Freq 5500 */
- {101, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2D, 0, 0x0, 0x8, 0x38000, 0x3}, /* Freq 5505 */
- {102, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2D, 0, 0x0, 0x8, 0x3AAAA, 0x3}, /* Freq 5510 */
- {103, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2D, 0, 0x0, 0x8, 0x3D555, 0x3}, /* Freq 5515 */
- {104, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x00000, 0x3}, /* Freq 5520 */
- {105, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x02AAA, 0x3}, /* Freq 5525 */
- {106, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x05555, 0x3}, /* Freq 5530 */
- {107, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x08000, 0x3}, /* Freq 5535 */
- {108, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x0AAAA, 0x3}, /* Freq 5540 */
- {109, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x0D555, 0x3}, /* Freq 5545 */
- {110, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x10000, 0x3}, /* Freq 5550 */
- {111, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x12AAA, 0x3}, /* Freq 5555 */
- {112, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x15555, 0x3}, /* Freq 5560 */
- {113, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x18000, 0x3}, /* Freq 5565 */
- {114, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x1AAAA, 0x3}, /* Freq 5570 */
- {115, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x1D555, 0x3}, /* Freq 5575 */
- {116, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x20000, 0x3}, /* Freq 5580 */
- {117, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x22AAA, 0x3}, /* Freq 5585 */
- {118, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x25555, 0x3}, /* Freq 5590 */
- {119, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x28000, 0x3}, /* Freq 5595 */
- {120, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x2AAAA, 0x3}, /* Freq 5600 */
- {121, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x2D555, 0x3}, /* Freq 5605 */
- {122, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x30000, 0x3}, /* Freq 5610 */
- {123, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x32AAA, 0x3}, /* Freq 5615 */
- {124, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x35555, 0x3}, /* Freq 5620 */
- {125, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x38000, 0x3}, /* Freq 5625 */
- {126, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x3AAAA, 0x3}, /* Freq 5630 */
- {127, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2E, 0, 0x0, 0x8, 0x3D555, 0x3}, /* Freq 5635 */
- {128, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x00000, 0x3}, /* Freq 5640 */
- {129, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x02AAA, 0x3}, /* Freq 5645 */
- {130, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x05555, 0x3}, /* Freq 5650 */
- {131, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x08000, 0x3}, /* Freq 5655 */
- {132, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x0AAAA, 0x3}, /* Freq 5660 */
- {133, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x0D555, 0x3}, /* Freq 5665 */
- {134, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x10000, 0x3}, /* Freq 5670 */
- {135, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x12AAA, 0x3}, /* Freq 5675 */
- {136, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x15555, 0x3}, /* Freq 5680 */
-
- {137, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x18000, 0x3}, /* Freq 5685 */
- {138, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x1AAAA, 0x3}, /* Freq 5690 */
- {139, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x1D555, 0x3}, /* Freq 5695 */
- {140, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x20000, 0x3}, /* Freq 5700 */
- {141, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x22AAA, 0x3}, /* Freq 5705 */
- {142, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x25555, 0x3}, /* Freq 5710 */
- {143, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x28000, 0x3}, /* Freq 5715 */
- {144, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x2AAAA, 0x3}, /* Freq 5720 */
- {145, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x2D555, 0x3}, /* Freq 5725 */
- {146, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x30000, 0x3}, /* Freq 5730 */
- {147, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x32AAA, 0x3}, /* Freq 5735 */
- {148, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x35555, 0x3}, /* Freq 5740 */
- {149, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x38000, 0x3}, /* Freq 5745 */
- {150, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x3AAAA, 0x3}, /* Freq 5750 */
- {151, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x2F, 0, 0x0, 0x8, 0x3D555, 0x3}, /* Freq 5755 */
- {152, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x00000, 0x3}, /* Freq 5760 */
- {153, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x02AAA, 0x3}, /* Freq 5765 */
- {154, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x05555, 0x3}, /* Freq 5770 */
- {155, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x08000, 0x3}, /* Freq 5775 */
- {156, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x0AAAA, 0x3}, /* Freq 5780 */
- {157, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x0D555, 0x3}, /* Freq 5785 */
- {158, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x10000, 0x3}, /* Freq 5790 */
- {159, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x12AAA, 0x3}, /* Freq 5795 */
- {160, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x15555, 0x3}, /* Freq 5800 */
- {161, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x18000, 0x3}, /* Freq 5805 */
- {162, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x1AAAA, 0x3}, /* Freq 5810 */
- {163, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x1D555, 0x3}, /* Freq 5815 */
- {164, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x20000, 0x3}, /* Freq 5820 */
- {165, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x22AAA, 0x3}, /* Freq 5825 */
- {166, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x25555, 0x3}, /* Freq 5830 */
- {167, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x28000, 0x3}, /* Freq 5835 */
- {168, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x2AAAA, 0x3}, /* Freq 5840 */
- {169, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x2D555, 0x3}, /* Freq 5845 */
- {170, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x30000, 0x3}, /* Freq 5850 */
- {171, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x32AAA, 0x3}, /* Freq 5855 */
- {172, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x35555, 0x3}, /* Freq 5860 */
- {173, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0/*0 -> 1*/, 0, 0, 0x30, 0, 0x0, 0x8, 0x38000, 0x3}, /* Freq 5865 */
+ { 1, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x0CCCC, 0x3 }, /* Freq 2412 */
+ { 2, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x12222, 0x3 }, /* Freq 2417 */
+ { 3, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x17777, 0x3 }, /* Freq 2422 */
+ { 4, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x1CCCC, 0x3 }, /* Freq 2427 */
+ { 5, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x22222, 0x3 }, /* Freq 2432 */
+ { 6, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x27777, 0x3 }, /* Freq 2437 */
+ { 7, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x2CCCC, 0x3 }, /* Freq 2442 */
+ { 8, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x32222, 0x3 }, /* Freq 2447 */
+ { 9, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x37777, 0x3 }, /* Freq 2452 */
+ { 10, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x3CCCC, 0x3 }, /* Freq 2457 */
+ { 11, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x02222, 0x3 }, /* Freq 2462 */
+ { 12, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x07777, 0x3 }, /* Freq 2467 */
+ { 13, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x0CCCC, 0x3 }, /* Freq 2472 */
+ { 14, RF_G_BAND, 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x19999, 0x3 }, /* Freq 2484 */
+ { 183, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x28, 0, 0x0, 0x8, 0x3D555, 0x3 }, /* Freq 4915 */
+ { 184, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x00000, 0x3 }, /* Freq 4920 */
+ { 185, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x02AAA, 0x3 }, /* Freq 4925 */
+ { 187, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x08000, 0x3 }, /* Freq 4935 */
+ { 188, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x0AAAA, 0x3 }, /* Freq 4940 */
+ { 189, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x0D555, 0x3 }, /* Freq 4945 */
+ { 192, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x15555, 0x3 }, /* Freq 4960 */
+ { 196, (RF_A_BAND | RF_A_BAND_11J), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x29, 0, 0x0, 0x8, 0x20000, 0x3 }, /* Freq 4980 */
+ { 36, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x0AAAA, 0x3 }, /* Freq 5180 */
+ { 37, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x0D555, 0x3 }, /* Freq 5185 */
+ { 38, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x10000, 0x3 }, /* Freq 5190 */
+ { 39, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x12AAA, 0x3 }, /* Freq 5195 */
+ { 40, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x15555, 0x3 }, /* Freq 5200 */
+ { 41, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x18000, 0x3 }, /* Freq 5205 */
+ { 42, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x1AAAA, 0x3 }, /* Freq 5210 */
+ { 43, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x1D555, 0x3 }, /* Freq 5215 */
+ { 44, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x20000, 0x3 }, /* Freq 5220 */
+ { 45, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x22AAA, 0x3 }, /* Freq 5225 */
+ { 46, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x25555, 0x3 }, /* Freq 5230 */
+ { 47, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x28000, 0x3 }, /* Freq 5235 */
+ { 48, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x2AAAA, 0x3 }, /* Freq 5240 */
+ { 49, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x2D555, 0x3 }, /* Freq 5245 */
+ { 50, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x30000, 0x3 }, /* Freq 5250 */
+ { 51, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x32AAA, 0x3 }, /* Freq 5255 */
+ { 52, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x35555, 0x3 }, /* Freq 5260 */
+ { 53, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x38000, 0x3 }, /* Freq 5265 */
+ { 54, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x3AAAA, 0x3 }, /* Freq 5270 */
+ { 55, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2B, 0, 0x0, 0x8, 0x3D555, 0x3 }, /* Freq 5275 */
+ { 56, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x00000, 0x3 }, /* Freq 5280 */
+ { 57, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x02AAA, 0x3 }, /* Freq 5285 */
+ { 58, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x05555, 0x3 }, /* Freq 5290 */
+ { 59, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x08000, 0x3 }, /* Freq 5295 */
+ { 60, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x0AAAA, 0x3 }, /* Freq 5300 */
+ { 61, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x0D555, 0x3 }, /* Freq 5305 */
+ { 62, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x10000, 0x3 }, /* Freq 5310 */
+ { 63, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x12AAA, 0x3 }, /* Freq 5315 */
+ { 64, (RF_A_BAND | RF_A_BAND_LB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2C, 0, 0x0, 0x8, 0x15555, 0x3 }, /* Freq 5320 */
+ { 100, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2D, 0, 0x0, 0x8, 0x35555, 0x3 }, /* Freq 5500 */
+ { 101, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2D, 0, 0x0, 0x8, 0x38000, 0x3 }, /* Freq 5505 */
+ { 102, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2D, 0, 0x0, 0x8, 0x3AAAA, 0x3 }, /* Freq 5510 */
+ { 103, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2D, 0, 0x0, 0x8, 0x3D555, 0x3 }, /* Freq 5515 */
+ { 104, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x00000, 0x3 }, /* Freq 5520 */
+ { 105, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x02AAA, 0x3 }, /* Freq 5525 */
+ { 106, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x05555, 0x3 }, /* Freq 5530 */
+ { 107, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x08000, 0x3 }, /* Freq 5535 */
+ { 108, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x0AAAA, 0x3 }, /* Freq 5540 */
+ { 109, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x0D555, 0x3 }, /* Freq 5545 */
+ { 110, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x10000, 0x3 }, /* Freq 5550 */
+ { 111, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x12AAA, 0x3 }, /* Freq 5555 */
+ { 112, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x15555, 0x3 }, /* Freq 5560 */
+ { 113, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x18000, 0x3 }, /* Freq 5565 */
+ { 114, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x1AAAA, 0x3 }, /* Freq 5570 */
+ { 115, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x1D555, 0x3 }, /* Freq 5575 */
+ { 116, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x20000, 0x3 }, /* Freq 5580 */
+ { 117, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x22AAA, 0x3 }, /* Freq 5585 */
+ { 118, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x25555, 0x3 }, /* Freq 5590 */
+ { 119, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x28000, 0x3 }, /* Freq 5595 */
+ { 120, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x2AAAA, 0x3 }, /* Freq 5600 */
+ { 121, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x2D555, 0x3 }, /* Freq 5605 */
+ { 122, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x30000, 0x3 }, /* Freq 5610 */
+ { 123, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x32AAA, 0x3 }, /* Freq 5615 */
+ { 124, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x35555, 0x3 }, /* Freq 5620 */
+ { 125, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x38000, 0x3 }, /* Freq 5625 */
+ { 126, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x3AAAA, 0x3 }, /* Freq 5630 */
+ { 127, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2E, 0, 0x0, 0x8, 0x3D555, 0x3 }, /* Freq 5635 */
+ { 128, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x00000, 0x3 }, /* Freq 5640 */
+ { 129, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x02AAA, 0x3 }, /* Freq 5645 */
+ { 130, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x05555, 0x3 }, /* Freq 5650 */
+ { 131, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x08000, 0x3 }, /* Freq 5655 */
+ { 132, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x0AAAA, 0x3 }, /* Freq 5660 */
+ { 133, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x0D555, 0x3 }, /* Freq 5665 */
+ { 134, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x10000, 0x3 }, /* Freq 5670 */
+ { 135, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x12AAA, 0x3 }, /* Freq 5675 */
+ { 136, (RF_A_BAND | RF_A_BAND_MB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x15555, 0x3 }, /* Freq 5680 */
+ { 137, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x18000, 0x3 }, /* Freq 5685 */
+ { 138, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x1AAAA, 0x3 }, /* Freq 5690 */
+ { 139, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x1D555, 0x3 }, /* Freq 5695 */
+ { 140, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x20000, 0x3 }, /* Freq 5700 */
+ { 141, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x22AAA, 0x3 }, /* Freq 5705 */
+ { 142, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x25555, 0x3 }, /* Freq 5710 */
+ { 143, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x28000, 0x3 }, /* Freq 5715 */
+ { 144, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x2AAAA, 0x3 }, /* Freq 5720 */
+ { 145, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x2D555, 0x3 }, /* Freq 5725 */
+ { 146, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x30000, 0x3 }, /* Freq 5730 */
+ { 147, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x32AAA, 0x3 }, /* Freq 5735 */
+ { 148, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x35555, 0x3 }, /* Freq 5740 */
+ { 149, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x38000, 0x3 }, /* Freq 5745 */
+ { 150, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x3AAAA, 0x3 }, /* Freq 5750 */
+ { 151, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x2F, 0, 0x0, 0x8, 0x3D555, 0x3 }, /* Freq 5755 */
+ { 152, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x00000, 0x3 }, /* Freq 5760 */
+ { 153, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x02AAA, 0x3 }, /* Freq 5765 */
+ { 154, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x05555, 0x3 }, /* Freq 5770 */
+ { 155, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x08000, 0x3 }, /* Freq 5775 */
+ { 156, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x0AAAA, 0x3 }, /* Freq 5780 */
+ { 157, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x0D555, 0x3 }, /* Freq 5785 */
+ { 158, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x10000, 0x3 }, /* Freq 5790 */
+ { 159, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x12AAA, 0x3 }, /* Freq 5795 */
+ { 160, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x15555, 0x3 }, /* Freq 5800 */
+ { 161, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x18000, 0x3 }, /* Freq 5805 */
+ { 162, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x1AAAA, 0x3 }, /* Freq 5810 */
+ { 163, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x1D555, 0x3 }, /* Freq 5815 */
+ { 164, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x20000, 0x3 }, /* Freq 5820 */
+ { 165, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x22AAA, 0x3 }, /* Freq 5825 */
+ { 166, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x25555, 0x3 }, /* Freq 5830 */
+ { 167, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x28000, 0x3 }, /* Freq 5835 */
+ { 168, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x2AAAA, 0x3 }, /* Freq 5840 */
+ { 169, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x2D555, 0x3 }, /* Freq 5845 */
+ { 170, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x30000, 0x3 }, /* Freq 5850 */
+ { 171, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x32AAA, 0x3 }, /* Freq 5855 */
+ { 172, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x35555, 0x3 }, /* Freq 5860 */
+ { 173, (RF_A_BAND | RF_A_BAND_HB), 0x02, 0x3F, 0x7F, 0xDD, 0xC3, 0x40, 0x0, 0x80, 0x0, 0, 0, 0, 0x30, 0, 0x0, 0x8, 0x38000, 0x3 }, /* Freq 5865 */
};

static const u8 mt76x0_sdm_channel[] = {
- 183, 185, 43, 45, 54, 55, 57, 58, 102, 103, 105, 106, 115, 117, 126, 127, 129, 130, 139, 141, 150, 151, 153, 154, 163, 165
+ 183, 185, 43, 45,
+ 54, 55, 57, 58,
+ 102, 103, 105, 106,
+ 115, 117, 126, 127,
+ 129, 130, 139, 141,
+ 150, 151, 153, 154,
+ 163, 165
};

static const struct mt76x0_rf_switch_item mt76x0_rf_ext_pa_tab[] = {
- { MT_RF(6, 45), RF_A_BAND_LB, 0x63},
- { MT_RF(6, 45), RF_A_BAND_MB, 0x43},
- { MT_RF(6, 45), RF_A_BAND_HB, 0x33},
- { MT_RF(6, 45), RF_A_BAND_11J, 0x73},
-
- { MT_RF(6, 50), RF_A_BAND_LB, 0x02},
- { MT_RF(6, 50), RF_A_BAND_MB, 0x02},
- { MT_RF(6, 50), RF_A_BAND_HB, 0x02},
- { MT_RF(6, 50), RF_A_BAND_11J, 0x02},
-
- { MT_RF(6, 51), RF_A_BAND_LB, 0x02},
- { MT_RF(6, 51), RF_A_BAND_MB, 0x02},
- { MT_RF(6, 51), RF_A_BAND_HB, 0x02},
- { MT_RF(6, 51), RF_A_BAND_11J, 0x02},
-
- { MT_RF(6, 52), RF_A_BAND_LB, 0x08},
- { MT_RF(6, 52), RF_A_BAND_MB, 0x08},
- { MT_RF(6, 52), RF_A_BAND_HB, 0x08},
- { MT_RF(6, 52), RF_A_BAND_11J, 0x08},
-
- { MT_RF(6, 53), RF_A_BAND_LB, 0x08},
- { MT_RF(6, 53), RF_A_BAND_MB, 0x08},
- { MT_RF(6, 53), RF_A_BAND_HB, 0x08},
- { MT_RF(6, 53), RF_A_BAND_11J, 0x08},
-
- { MT_RF(6, 54), RF_A_BAND_LB, 0x0A},
- { MT_RF(6, 54), RF_A_BAND_MB, 0x0A},
- { MT_RF(6, 54), RF_A_BAND_HB, 0x0A},
- { MT_RF(6, 54), RF_A_BAND_11J, 0x0A},
-
- { MT_RF(6, 55), RF_A_BAND_LB, 0x0A},
- { MT_RF(6, 55), RF_A_BAND_MB, 0x0A},
- { MT_RF(6, 55), RF_A_BAND_HB, 0x0A},
- { MT_RF(6, 55), RF_A_BAND_11J, 0x0A},
-
- { MT_RF(6, 56), RF_A_BAND_LB, 0x05},
- { MT_RF(6, 56), RF_A_BAND_MB, 0x05},
- { MT_RF(6, 56), RF_A_BAND_HB, 0x05},
- { MT_RF(6, 56), RF_A_BAND_11J, 0x05},
-
- { MT_RF(6, 57), RF_A_BAND_LB, 0x05},
- { MT_RF(6, 57), RF_A_BAND_MB, 0x05},
- { MT_RF(6, 57), RF_A_BAND_HB, 0x05},
- { MT_RF(6, 57), RF_A_BAND_11J, 0x05},
-
- { MT_RF(6, 58), RF_A_BAND_LB, 0x05},
- { MT_RF(6, 58), RF_A_BAND_MB, 0x03},
- { MT_RF(6, 58), RF_A_BAND_HB, 0x02},
- { MT_RF(6, 58), RF_A_BAND_11J, 0x07},
-
- { MT_RF(6, 59), RF_A_BAND_LB, 0x05},
- { MT_RF(6, 59), RF_A_BAND_MB, 0x03},
- { MT_RF(6, 59), RF_A_BAND_HB, 0x02},
- { MT_RF(6, 59), RF_A_BAND_11J, 0x07},
+ { MT_RF(6, 45), RF_A_BAND_LB, 0x63 },
+ { MT_RF(6, 45), RF_A_BAND_MB, 0x43 },
+ { MT_RF(6, 45), RF_A_BAND_HB, 0x33 },
+ { MT_RF(6, 45), RF_A_BAND_11J, 0x73 },
+ { MT_RF(6, 50), RF_A_BAND_LB, 0x02 },
+ { MT_RF(6, 50), RF_A_BAND_MB, 0x02 },
+ { MT_RF(6, 50), RF_A_BAND_HB, 0x02 },
+ { MT_RF(6, 50), RF_A_BAND_11J, 0x02 },
+ { MT_RF(6, 51), RF_A_BAND_LB, 0x02 },
+ { MT_RF(6, 51), RF_A_BAND_MB, 0x02 },
+ { MT_RF(6, 51), RF_A_BAND_HB, 0x02 },
+ { MT_RF(6, 51), RF_A_BAND_11J, 0x02 },
+ { MT_RF(6, 52), RF_A_BAND_LB, 0x08 },
+ { MT_RF(6, 52), RF_A_BAND_MB, 0x08 },
+ { MT_RF(6, 52), RF_A_BAND_HB, 0x08 },
+ { MT_RF(6, 52), RF_A_BAND_11J, 0x08 },
+ { MT_RF(6, 53), RF_A_BAND_LB, 0x08 },
+ { MT_RF(6, 53), RF_A_BAND_MB, 0x08 },
+ { MT_RF(6, 53), RF_A_BAND_HB, 0x08 },
+ { MT_RF(6, 53), RF_A_BAND_11J, 0x08 },
+ { MT_RF(6, 54), RF_A_BAND_LB, 0x0A },
+ { MT_RF(6, 54), RF_A_BAND_MB, 0x0A },
+ { MT_RF(6, 54), RF_A_BAND_HB, 0x0A },
+ { MT_RF(6, 54), RF_A_BAND_11J, 0x0A },
+ { MT_RF(6, 55), RF_A_BAND_LB, 0x0A },
+ { MT_RF(6, 55), RF_A_BAND_MB, 0x0A },
+ { MT_RF(6, 55), RF_A_BAND_HB, 0x0A },
+ { MT_RF(6, 55), RF_A_BAND_11J, 0x0A },
+ { MT_RF(6, 56), RF_A_BAND_LB, 0x05 },
+ { MT_RF(6, 56), RF_A_BAND_MB, 0x05 },
+ { MT_RF(6, 56), RF_A_BAND_HB, 0x05 },
+ { MT_RF(6, 56), RF_A_BAND_11J, 0x05 },
+ { MT_RF(6, 57), RF_A_BAND_LB, 0x05 },
+ { MT_RF(6, 57), RF_A_BAND_MB, 0x05 },
+ { MT_RF(6, 57), RF_A_BAND_HB, 0x05 },
+ { MT_RF(6, 57), RF_A_BAND_11J, 0x05 },
+ { MT_RF(6, 58), RF_A_BAND_LB, 0x05 },
+ { MT_RF(6, 58), RF_A_BAND_MB, 0x03 },
+ { MT_RF(6, 58), RF_A_BAND_HB, 0x02 },
+ { MT_RF(6, 58), RF_A_BAND_11J, 0x07 },
+ { MT_RF(6, 59), RF_A_BAND_LB, 0x05 },
+ { MT_RF(6, 59), RF_A_BAND_MB, 0x03 },
+ { MT_RF(6, 59), RF_A_BAND_HB, 0x02 },
+ { MT_RF(6, 59), RF_A_BAND_11J, 0x07 },
};
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/mac.c b/drivers/net/wireless/mediatek/mt76/mt76x0/mac.c
deleted file mode 100644
index 7a422c590211..000000000000
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/mac.c
+++ /dev/null
@@ -1,197 +0,0 @@
-/*
- *
Copyright (C) 2014 Felix Fietkau - * Copyright (C) 2015 Jakub Kicinski - * Copyright (C) 2018 Stanislaw Gruszka - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 - * as published by the Free Software Foundation - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - */ - -#include - -#include "mt76x0.h" -#include "trace.h" - -void mt76x0_mac_set_protection(struct mt76x02_dev *dev, bool legacy_prot, - int ht_mode) -{ - int mode = ht_mode & IEEE80211_HT_OP_MODE_PROTECTION; - bool non_gf = !!(ht_mode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT); - u32 prot[6]; - bool ht_rts[4] = {}; - int i; - - prot[0] = MT_PROT_NAV_SHORT | - MT_PROT_TXOP_ALLOW_ALL | - MT_PROT_RTS_THR_EN; - prot[1] = prot[0]; - if (legacy_prot) - prot[1] |= MT_PROT_CTRL_CTS2SELF; - - prot[2] = prot[4] = MT_PROT_NAV_SHORT | MT_PROT_TXOP_ALLOW_BW20; - prot[3] = prot[5] = MT_PROT_NAV_SHORT | MT_PROT_TXOP_ALLOW_ALL; - - if (legacy_prot) { - prot[2] |= MT_PROT_RATE_CCK_11; - prot[3] |= MT_PROT_RATE_CCK_11; - prot[4] |= MT_PROT_RATE_CCK_11; - prot[5] |= MT_PROT_RATE_CCK_11; - } else { - prot[2] |= MT_PROT_RATE_OFDM_24; - prot[3] |= MT_PROT_RATE_DUP_OFDM_24; - prot[4] |= MT_PROT_RATE_OFDM_24; - prot[5] |= MT_PROT_RATE_DUP_OFDM_24; - } - - switch (mode) { - case IEEE80211_HT_OP_MODE_PROTECTION_NONE: - break; - - case IEEE80211_HT_OP_MODE_PROTECTION_NONMEMBER: - ht_rts[0] = ht_rts[1] = ht_rts[2] = ht_rts[3] = true; - break; - - case IEEE80211_HT_OP_MODE_PROTECTION_20MHZ: - ht_rts[1] = ht_rts[3] = true; - break; - - case IEEE80211_HT_OP_MODE_PROTECTION_NONHT_MIXED: - ht_rts[0] = ht_rts[1] = ht_rts[2] = ht_rts[3] = true; - break; - } - - if (non_gf) - ht_rts[2] = ht_rts[3] = true; - - for (i = 0; i < 4; i++) - if (ht_rts[i]) - prot[i + 2] |= MT_PROT_CTRL_RTS_CTS; - - for (i = 0; i < 6; i++) - mt76_wr(dev, MT_CCK_PROT_CFG + i * 4, prot[i]); -} - -void mt76x0_mac_set_short_preamble(struct mt76x02_dev *dev, bool short_preamb) -{ - if (short_preamb) - mt76_set(dev, MT_AUTO_RSP_CFG, MT_AUTO_RSP_PREAMB_SHORT); - else - mt76_clear(dev, MT_AUTO_RSP_CFG, MT_AUTO_RSP_PREAMB_SHORT); -} - -void mt76x0_mac_config_tsf(struct mt76x02_dev *dev, bool enable, int interval) -{ - u32 val = mt76_rr(dev, MT_BEACON_TIME_CFG); - - val &= ~(MT_BEACON_TIME_CFG_TIMER_EN | - MT_BEACON_TIME_CFG_SYNC_MODE | - MT_BEACON_TIME_CFG_TBTT_EN); - - if (!enable) { - mt76_wr(dev, MT_BEACON_TIME_CFG, val); - return; - } - - val &= ~MT_BEACON_TIME_CFG_INTVAL; - val |= FIELD_PREP(MT_BEACON_TIME_CFG_INTVAL, interval << 4) | - MT_BEACON_TIME_CFG_TIMER_EN | - MT_BEACON_TIME_CFG_SYNC_MODE | - MT_BEACON_TIME_CFG_TBTT_EN; -} - -static void mt76x0_check_mac_err(struct mt76x02_dev *dev) -{ - u32 val = mt76_rr(dev, 0x10f4); - - if (!(val & BIT(29)) || !(val & (BIT(7) | BIT(5)))) - return; - - dev_err(dev->mt76.dev, "Error: MAC specific condition occurred\n"); - - mt76_set(dev, MT_MAC_SYS_CTRL, MT_MAC_SYS_CTRL_RESET_CSR); - udelay(10); - mt76_clear(dev, MT_MAC_SYS_CTRL, MT_MAC_SYS_CTRL_RESET_CSR); -} -void mt76x0_mac_work(struct work_struct *work) -{ - struct mt76x02_dev *dev = container_of(work, struct mt76x02_dev, - mac_work.work); - struct { - u32 addr_base; - u32 span; - u64 *stat_base; - } spans[] = { - { MT_RX_STAT_0, 3, dev->stats.rx_stat }, - { MT_TX_STA_0, 3, dev->stats.tx_stat }, - { 
MT_TX_AGG_STAT, 1, dev->stats.aggr_stat }, - { MT_MPDU_DENSITY_CNT, 1, dev->stats.zero_len_del }, - { MT_TX_AGG_CNT_BASE0, 8, &dev->stats.aggr_n[0] }, - { MT_TX_AGG_CNT_BASE1, 8, &dev->stats.aggr_n[16] }, - }; - u32 sum, n; - int i, j, k; - - /* Note: using MCU_RANDOM_READ is actually slower than reading all the - * registers by hand. MCU takes ca. 20ms to complete read of 24 - * registers while reading them one by one will take roughly - * 24*200us =~ 5ms. - */ - - k = 0; - n = 0; - sum = 0; - for (i = 0; i < ARRAY_SIZE(spans); i++) - for (j = 0; j < spans[i].span; j++) { - u32 val = mt76_rr(dev, spans[i].addr_base + j * 4); - - spans[i].stat_base[j * 2] += val & 0xffff; - spans[i].stat_base[j * 2 + 1] += val >> 16; - - /* Calculate average AMPDU length */ - if (spans[i].addr_base != MT_TX_AGG_CNT_BASE0 && - spans[i].addr_base != MT_TX_AGG_CNT_BASE1) - continue; - - n += (val >> 16) + (val & 0xffff); - sum += (val & 0xffff) * (1 + k * 2) + - (val >> 16) * (2 + k * 2); - k++; - } - - atomic_set(&dev->avg_ampdu_len, n ? DIV_ROUND_CLOSEST(sum, n) : 1); - - mt76x0_check_mac_err(dev); - - ieee80211_queue_delayed_work(dev->mt76.hw, &dev->mac_work, 10 * HZ); -} - -void mt76x0_mac_set_ampdu_factor(struct mt76x02_dev *dev) -{ - struct ieee80211_sta *sta; - struct mt76_wcid *wcid; - void *msta; - u8 min_factor = 3; - int i; - - rcu_read_lock(); - for (i = 0; i < ARRAY_SIZE(dev->mt76.wcid); i++) { - wcid = rcu_dereference(dev->mt76.wcid[i]); - if (!wcid) - continue; - - msta = container_of(wcid, struct mt76x02_sta, wcid); - sta = container_of(msta, struct ieee80211_sta, drv_priv); - - min_factor = min(min_factor, sta->ht_cap.ampdu_factor); - } - rcu_read_unlock(); - - mt76_wr(dev, MT_MAX_LEN_CFG, 0xa0fff | - FIELD_PREP(MT_MAX_LEN_CFG_AMPDU, min_factor)); -} diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/main.c b/drivers/net/wireless/mediatek/mt76/mt76x0/main.c index 9273d2d2764a..a803a9b6a4c5 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/main.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/main.c @@ -22,9 +22,23 @@ mt76x0_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef) int ret; cancel_delayed_work_sync(&dev->cal_work); + if (mt76_is_mmio(dev)) { + tasklet_disable(&dev->pre_tbtt_tasklet); + tasklet_disable(&dev->dfs_pd.dfs_tasklet); + } mt76_set_channel(&dev->mt76); ret = mt76x0_phy_set_channel(dev, chandef); + + /* channel cycle counters read-and-clear */ + mt76_rr(dev, MT_CH_IDLE); + mt76_rr(dev, MT_CH_BUSY); + + if (mt76_is_mmio(dev)) { + mt76x02_dfs_init_params(dev); + tasklet_enable(&dev->pre_tbtt_tasklet); + tasklet_enable(&dev->dfs_pd.dfs_tasklet); + } mt76_txq_schedule_all(&dev->mt76); return ret; @@ -64,89 +78,3 @@ int mt76x0_config(struct ieee80211_hw *hw, u32 changed) return ret; } EXPORT_SYMBOL_GPL(mt76x0_config); - -static void -mt76x0_addr_wr(struct mt76x02_dev *dev, const u32 offset, const u8 *addr) -{ - mt76_wr(dev, offset, get_unaligned_le32(addr)); - mt76_wr(dev, offset + 4, addr[4] | addr[5] << 8); -} - -void mt76x0_bss_info_changed(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct ieee80211_bss_conf *info, u32 changed) -{ - struct mt76x02_dev *dev = hw->priv; - - mutex_lock(&dev->mt76.mutex); - - if (changed & BSS_CHANGED_BSSID) { - mt76x0_addr_wr(dev, MT_MAC_BSSID_DW0, info->bssid); - - /* Note: this is a hack because beacon_int is not changed - * on leave nor is any more appropriate event generated. - * rt2x00 doesn't seem to be bothered though.
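The statistics loop deleted from mac.c above packs two counters per register: MT_TX_AGG_CNT register k holds the number of AMPDUs with 2k+1 subframes in its low 16 bits and with 2k+2 subframes in its high 16 bits, which is what the (1 + k * 2) and (2 + k * 2) weights express. A minimal standalone sketch of that averaging, for reference only (hypothetical helper, not part of this patch; DIV_ROUND_CLOSEST is the usual kernel macro from linux/kernel.h):

static u32 avg_ampdu_len_example(const u32 *agg_cnt, int n_regs)
{
	u32 sum = 0, n = 0;
	int k;

	for (k = 0; k < n_regs; k++) {
		u32 lo = agg_cnt[k] & 0xffff;	/* AMPDUs of 2k+1 subframes */
		u32 hi = agg_cnt[k] >> 16;	/* AMPDUs of 2k+2 subframes */

		n += lo + hi;
		sum += lo * (2 * k + 1) + hi * (2 * k + 2);
	}

	return n ? DIV_ROUND_CLOSEST(sum, n) : 1;	/* driver default: 1 */
}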
- */ - if (is_zero_ether_addr(info->bssid)) - mt76x0_mac_config_tsf(dev, false, 0); - } - - if (changed & BSS_CHANGED_BASIC_RATES) { - mt76_wr(dev, MT_LEGACY_BASIC_RATE, info->basic_rates); - mt76_wr(dev, MT_VHT_HT_FBK_CFG0, 0x65432100); - mt76_wr(dev, MT_VHT_HT_FBK_CFG1, 0xedcba980); - mt76_wr(dev, MT_LG_FBK_CFG0, 0xedcba988); - mt76_wr(dev, MT_LG_FBK_CFG1, 0x00002100); - } - - if (changed & BSS_CHANGED_BEACON_INT) - mt76x0_mac_config_tsf(dev, true, info->beacon_int); - - if (changed & BSS_CHANGED_HT || changed & BSS_CHANGED_ERP_CTS_PROT) - mt76x0_mac_set_protection(dev, info->use_cts_prot, - info->ht_operation_mode); - - if (changed & BSS_CHANGED_ERP_PREAMBLE) - mt76x0_mac_set_short_preamble(dev, info->use_short_preamble); - - if (changed & BSS_CHANGED_ERP_SLOT) { - int slottime = info->use_short_slot ? 9 : 20; - - mt76_rmw_field(dev, MT_BKOFF_SLOT_CFG, - MT_BKOFF_SLOT_CFG_SLOTTIME, slottime); - } - - if (changed & BSS_CHANGED_ASSOC) - mt76x0_phy_recalibrate_after_assoc(dev); - - mutex_unlock(&dev->mt76.mutex); -} -EXPORT_SYMBOL_GPL(mt76x0_bss_info_changed); - -void mt76x0_sw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - const u8 *mac_addr) -{ - struct mt76x02_dev *dev = hw->priv; - - set_bit(MT76_SCANNING, &dev->mt76.state); -} -EXPORT_SYMBOL_GPL(mt76x0_sw_scan); - -void mt76x0_sw_scan_complete(struct ieee80211_hw *hw, - struct ieee80211_vif *vif) -{ - struct mt76x02_dev *dev = hw->priv; - - clear_bit(MT76_SCANNING, &dev->mt76.state); -} -EXPORT_SYMBOL_GPL(mt76x0_sw_scan_complete); - -int mt76x0_set_rts_threshold(struct ieee80211_hw *hw, u32 value) -{ - struct mt76x02_dev *dev = hw->priv; - - mt76_rmw_field(dev, MT_TX_RTS_CFG, MT_TX_RTS_CFG_THRESH, value); - - return 0; -} -EXPORT_SYMBOL_GPL(mt76x0_set_rts_threshold); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h b/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h index 2187bafaf2e9..46629f61673b 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/mt76x0.h @@ -28,18 +28,26 @@ #include "../mt76x02.h" #include "eeprom.h" -#define MT_CALIBRATE_INTERVAL (4 * HZ) +#define MT7610E_FIRMWARE "mediatek/mt7610e.bin" +#define MT7650E_FIRMWARE "mediatek/mt7650e.bin" + +#define MT7610U_FIRMWARE "mediatek/mt7610u.bin" #define MT_USB_AGGR_SIZE_LIMIT 21 /* * 1024B */ #define MT_USB_AGGR_TIMEOUT 0x80 /* * 33ns */ static inline bool is_mt7610e(struct mt76x02_dev *dev) { - /* TODO */ - return false; + if (!mt76_is_mmio(dev)) + return false; + + return mt76_chip(&dev->mt76) == 0x7610; } -void mt76x0_init_debugfs(struct mt76x02_dev *dev); +static inline bool is_mt7630(struct mt76x02_dev *dev) +{ + return mt76_chip(&dev->mt76) == 0x7630; +} /* Init */ struct mt76x02_dev * @@ -54,30 +62,12 @@ int mt76x0_mac_start(struct mt76x02_dev *dev); void mt76x0_mac_stop(struct mt76x02_dev *dev); int mt76x0_config(struct ieee80211_hw *hw, u32 changed); -void mt76x0_bss_info_changed(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct ieee80211_bss_conf *info, u32 changed); -void mt76x0_sw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - const u8 *mac_addr); -void mt76x0_sw_scan_complete(struct ieee80211_hw *hw, - struct ieee80211_vif *vif); -int mt76x0_set_rts_threshold(struct ieee80211_hw *hw, u32 value); /* PHY */ void mt76x0_phy_init(struct mt76x02_dev *dev); -int mt76x0_wait_bbp_ready(struct mt76x02_dev *dev); +int mt76x0_phy_wait_bbp_ready(struct mt76x02_dev *dev); int mt76x0_phy_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef); 
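With the firmware names and the is_mt7610e()/is_mt7630() helpers now living in mt76x0.h, chip- and bus-specific choices reduce to a couple of tests. As a hedged illustration only (this helper is hypothetical and not part of the patch; it assumes mt76_is_mmio() separates the PCIe parts from USB, as it does elsewhere in mt76):

/* illustrative: pick a firmware image using the new helpers */
static const char *mt76x0_fw_name_example(struct mt76x02_dev *dev)
{
	if (!mt76_is_mmio(dev))
		return MT7610U_FIRMWARE;	/* USB dongles */
	if (is_mt7610e(dev))
		return MT7610E_FIRMWARE;	/* PCIe MT7610E */
	return MT7650E_FIRMWARE;		/* PCIe MT7650E */
}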
-void mt76x0_phy_recalibrate_after_assoc(struct mt76x02_dev *dev); void mt76x0_phy_set_txpower(struct mt76x02_dev *dev); void mt76x0_phy_calibrate(struct mt76x02_dev *dev, bool power_on); - -/* MAC */ -void mt76x0_mac_work(struct work_struct *work); -void mt76x0_mac_set_protection(struct mt76x02_dev *dev, bool legacy_prot, - int ht_mode); -void mt76x0_mac_set_short_preamble(struct mt76x02_dev *dev, bool short_preamb); -void mt76x0_mac_config_tsf(struct mt76x02_dev *dev, bool enable, int interval); -void mt76x0_mac_set_ampdu_factor(struct mt76x02_dev *dev); - #endif diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c index 522c86059bcb..d895b6f3dc44 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c @@ -68,6 +68,19 @@ static void mt76x0e_stop(struct ieee80211_hw *hw) mutex_unlock(&dev->mt76.mutex); } +static void +mt76x0e_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + u32 queues, bool drop) +{ +} + +static int +mt76x0e_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, + bool set) +{ + return 0; +} + static const struct ieee80211_ops mt76x0e_ops = { .tx = mt76x02_tx, .start = mt76x0e_start, @@ -76,15 +89,22 @@ static const struct ieee80211_ops mt76x0e_ops = { .remove_interface = mt76x02_remove_interface, .config = mt76x0_config, .configure_filter = mt76x02_configure_filter, - .sta_add = mt76x02_sta_add, - .sta_remove = mt76x02_sta_remove, + .bss_info_changed = mt76x02_bss_info_changed, + .sta_state = mt76_sta_state, .set_key = mt76x02_set_key, .conf_tx = mt76x02_conf_tx, - .sw_scan_start = mt76x0_sw_scan, - .sw_scan_complete = mt76x0_sw_scan_complete, + .sw_scan_start = mt76x02_sw_scan, + .sw_scan_complete = mt76x02_sw_scan_complete, .ampdu_action = mt76x02_ampdu_action, .sta_rate_tbl_update = mt76x02_sta_rate_tbl_update, .wake_tx_queue = mt76_wake_tx_queue, + .get_survey = mt76_get_survey, + .get_txpower = mt76x02_get_txpower, + .flush = mt76x0e_flush, + .set_tim = mt76x0e_set_tim, + .release_buffered_frames = mt76_release_buffered_frames, + .set_coverage_class = mt76x02_set_coverage_class, + .set_rts_threshold = mt76x02_set_rts_threshold, }; static int mt76x0e_register_device(struct mt76x02_dev *dev) @@ -135,10 +155,14 @@ mt76x0e_probe(struct pci_dev *pdev, const struct pci_device_id *id) { static const struct mt76_driver_ops drv_ops = { .txwi_size = sizeof(struct mt76x02_txwi), + .update_survey = mt76x02_update_channel, .tx_prepare_skb = mt76x02_tx_prepare_skb, .tx_complete_skb = mt76x02_tx_complete_skb, .rx_skb = mt76x02_queue_rx_skb, .rx_poll_complete = mt76x02_rx_poll_complete, + .sta_ps = mt76x02_sta_ps, + .sta_add = mt76x02_sta_add, + .sta_remove = mt76x02_sta_remove, }; struct mt76x02_dev *dev; int ret; @@ -185,6 +209,7 @@ error: static void mt76x0e_cleanup(struct mt76x02_dev *dev) { clear_bit(MT76_STATE_INITIALIZED, &dev->mt76.state); + tasklet_disable(&dev->pre_tbtt_tasklet); mt76x0_chip_onoff(dev, false, false); mt76x0e_stop_hw(dev); mt76x02_dma_cleanup(dev); @@ -209,6 +234,8 @@ static const struct pci_device_id mt76x0e_device_table[] = { }; MODULE_DEVICE_TABLE(pci, mt76x0e_device_table); +MODULE_FIRMWARE(MT7610E_FIRMWARE); +MODULE_FIRMWARE(MT7650E_FIRMWARE); MODULE_LICENSE("Dual BSD/GPL"); static struct pci_driver mt76x0e_driver = { diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x0/pci_mcu.c index 569861289aa5..490c1869f2c4 100644 --- 
a/drivers/net/wireless/mediatek/mt76/mt76x0/pci_mcu.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/pci_mcu.c @@ -19,9 +19,6 @@ #include "mt76x0.h" #include "mcu.h" -#define MT7610E_FIRMWARE "mediatek/mt7610e.bin" -#define MT7650E_FIRMWARE "mediatek/mt7650e.bin" - #define MT_MCU_IVB_ADDR (MT_MCU_ILM_ADDR + 0x54000 - MT_MCU_IVB_SIZE) static int mt76x0e_load_firmware(struct mt76x02_dev *dev) @@ -130,7 +127,6 @@ out: int mt76x0e_mcu_init(struct mt76x02_dev *dev) { static const struct mt76_mcu_ops mt76x0e_mcu_ops = { - .mcu_msg_alloc = mt76x02_mcu_msg_alloc, .mcu_send_msg = mt76x02_mcu_msg_send, }; int err; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c index cf024950e0ed..1eb1a802ed20 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c @@ -20,7 +20,6 @@ #include "mt76x0.h" #include "mcu.h" #include "eeprom.h" -#include "trace.h" #include "phy.h" #include "initvals.h" #include "initvals_phy.h" @@ -49,12 +48,12 @@ mt76x0_rf_csr_wr(struct mt76x02_dev *dev, u32 offset, u8 value) } mt76_wr(dev, MT_RF_CSR_CFG, - FIELD_PREP(MT_RF_CSR_CFG_DATA, value) | - FIELD_PREP(MT_RF_CSR_CFG_REG_BANK, bank) | - FIELD_PREP(MT_RF_CSR_CFG_REG_ID, reg) | - MT_RF_CSR_CFG_WR | - MT_RF_CSR_CFG_KICK); - trace_mt76x0_rf_write(&dev->mt76, bank, offset, value); + FIELD_PREP(MT_RF_CSR_CFG_DATA, value) | + FIELD_PREP(MT_RF_CSR_CFG_REG_BANK, bank) | + FIELD_PREP(MT_RF_CSR_CFG_REG_ID, reg) | + MT_RF_CSR_CFG_WR | + MT_RF_CSR_CFG_KICK); + out: mutex_unlock(&dev->phy_mutex); @@ -86,19 +85,18 @@ static int mt76x0_rf_csr_rr(struct mt76x02_dev *dev, u32 offset) goto out; mt76_wr(dev, MT_RF_CSR_CFG, - FIELD_PREP(MT_RF_CSR_CFG_REG_BANK, bank) | - FIELD_PREP(MT_RF_CSR_CFG_REG_ID, reg) | - MT_RF_CSR_CFG_KICK); + FIELD_PREP(MT_RF_CSR_CFG_REG_BANK, bank) | + FIELD_PREP(MT_RF_CSR_CFG_REG_ID, reg) | + MT_RF_CSR_CFG_KICK); if (!mt76_poll(dev, MT_RF_CSR_CFG, MT_RF_CSR_CFG_KICK, 0, 100)) goto out; val = mt76_rr(dev, MT_RF_CSR_CFG); if (FIELD_GET(MT_RF_CSR_CFG_REG_ID, val) == reg && - FIELD_GET(MT_RF_CSR_CFG_REG_BANK, val) == bank) { + FIELD_GET(MT_RF_CSR_CFG_REG_BANK, val) == bank) ret = FIELD_GET(MT_RF_CSR_CFG_DATA, val); - trace_mt76x0_rf_read(&dev->mt76, bank, offset, ret); - } + out: mutex_unlock(&dev->phy_mutex); @@ -110,7 +108,7 @@ out: } static int -rf_wr(struct mt76x02_dev *dev, u32 offset, u8 val) +mt76x0_rf_wr(struct mt76x02_dev *dev, u32 offset, u8 val) { if (mt76_is_usb(dev)) { struct mt76_reg_pair pair = { @@ -126,8 +124,7 @@ rf_wr(struct mt76x02_dev *dev, u32 offset, u8 val) } } -static int -rf_rr(struct mt76x02_dev *dev, u32 offset) +static int mt76x0_rf_rr(struct mt76x02_dev *dev, u32 offset) { int ret; u32 val; @@ -149,38 +146,36 @@ rf_rr(struct mt76x02_dev *dev, u32 offset) } static int -rf_rmw(struct mt76x02_dev *dev, u32 offset, u8 mask, u8 val) +mt76x0_rf_rmw(struct mt76x02_dev *dev, u32 offset, u8 mask, u8 val) { int ret; - ret = rf_rr(dev, offset); + ret = mt76x0_rf_rr(dev, offset); if (ret < 0) return ret; + val |= ret & ~mask; - ret = rf_wr(dev, offset, val); - if (ret) - return ret; - return val; + ret = mt76x0_rf_wr(dev, offset, val); + return ret ? 
ret : val; } static int -rf_set(struct mt76x02_dev *dev, u32 offset, u8 val) +mt76x0_rf_set(struct mt76x02_dev *dev, u32 offset, u8 val) { - return rf_rmw(dev, offset, 0, val); + return mt76x0_rf_rmw(dev, offset, 0, val); } -#if 0 static int -rf_clear(struct mt76x02_dev *dev, u32 offset, u8 mask) +mt76x0_rf_clear(struct mt76x02_dev *dev, u32 offset, u8 mask) { - return rf_rmw(dev, offset, mask, 0); + return mt76x0_rf_rmw(dev, offset, mask, 0); } -#endif static void -mt76x0_rf_csr_wr_rp(struct mt76x02_dev *dev, const struct mt76_reg_pair *data, - int n) +mt76x0_phy_rf_csr_wr_rp(struct mt76x02_dev *dev, + const struct mt76_reg_pair *data, + int n) { while (n-- > 0) { mt76x0_rf_csr_wr(dev, data->reg, data->value); @@ -190,12 +185,12 @@ mt76x0_rf_csr_wr_rp(struct mt76x02_dev *dev, const struct mt76_reg_pair *data, #define RF_RANDOM_WRITE(dev, tab) do { \ if (mt76_is_mmio(dev)) \ - mt76x0_rf_csr_wr_rp(dev, tab, ARRAY_SIZE(tab)); \ + mt76x0_phy_rf_csr_wr_rp(dev, tab, ARRAY_SIZE(tab)); \ else \ mt76_wr_rp(dev, MT_MCU_MEMMAP_RF, tab, ARRAY_SIZE(tab));\ } while (0) -int mt76x0_wait_bbp_ready(struct mt76x02_dev *dev) +int mt76x0_phy_wait_bbp_ready(struct mt76x02_dev *dev) { int i = 20; u32 val; @@ -215,62 +210,6 @@ int mt76x0_wait_bbp_ready(struct mt76x02_dev *dev) return 0; } -static void mt76x0_vco_cal(struct mt76x02_dev *dev, u8 channel) -{ - u8 val; - - val = rf_rr(dev, MT_RF(0, 4)); - if ((val & 0x70) != 0x30) - return; - - /* - * Calibration Mode - Open loop, closed loop, and amplitude: - * B0.R06.[0]: 1 - * B0.R06.[3:1] bp_close_code: 100 - * B0.R05.[7:0] bp_open_code: 0x0 - * B0.R04.[2:0] cal_bits: 000 - * B0.R03.[2:0] startup_time: 011 - * B0.R03.[6:4] settle_time: - * 80MHz channel: 110 - * 40MHz channel: 101 - * 20MHz channel: 100 - */ - val = rf_rr(dev, MT_RF(0, 6)); - val &= ~0xf; - val |= 0x09; - rf_wr(dev, MT_RF(0, 6), val); - - val = rf_rr(dev, MT_RF(0, 5)); - if (val != 0) - rf_wr(dev, MT_RF(0, 5), 0x0); - - val = rf_rr(dev, MT_RF(0, 4)); - val &= ~0x07; - rf_wr(dev, MT_RF(0, 4), val); - - val = rf_rr(dev, MT_RF(0, 3)); - val &= ~0x77; - if (channel == 1 || channel == 7 || channel == 9 || channel >= 13) { - val |= 0x63; - } else if (channel == 3 || channel == 4 || channel == 10) { - val |= 0x53; - } else if (channel == 2 || channel == 5 || channel == 6 || - channel == 8 || channel == 11 || channel == 12) { - val |= 0x43; - } else { - WARN(1, "Unknown channel %u\n", channel); - return; - } - rf_wr(dev, MT_RF(0, 3), val); - - /* TODO replace by mt76x0_rf_set(dev, MT_RF(0, 4), BIT(7)); */ - val = rf_rr(dev, MT_RF(0, 4)); - val = ((val & ~(0x80)) | 0x80); - rf_wr(dev, MT_RF(0, 4), val); - - msleep(2); -} - static void mt76x0_phy_set_band(struct mt76x02_dev *dev, enum nl80211_band band) { @@ -278,8 +217,8 @@ mt76x0_phy_set_band(struct mt76x02_dev *dev, enum nl80211_band band) case NL80211_BAND_2GHZ: RF_RANDOM_WRITE(dev, mt76x0_rf_2g_channel_0_tab); - rf_wr(dev, MT_RF(5, 0), 0x45); - rf_wr(dev, MT_RF(6, 0), 0x44); + mt76x0_rf_wr(dev, MT_RF(5, 0), 0x45); + mt76x0_rf_wr(dev, MT_RF(6, 0), 0x44); mt76_wr(dev, MT_TX_ALC_VGA3, 0x00050007); mt76_wr(dev, MT_TX0_RF_GAIN_CORR, 0x003E0002); @@ -287,8 +226,8 @@ mt76x0_phy_set_band(struct mt76x02_dev *dev, enum nl80211_band band) case NL80211_BAND_5GHZ: RF_RANDOM_WRITE(dev, mt76x0_rf_5g_channel_0_tab); - rf_wr(dev, MT_RF(5, 0), 0x44); - rf_wr(dev, MT_RF(6, 0), 0x45); + mt76x0_rf_wr(dev, MT_RF(5, 0), 0x44); + mt76x0_rf_wr(dev, MT_RF(6, 0), 0x45); mt76_wr(dev, MT_TX_ALC_VGA3, 0x00000005); mt76_wr(dev, MT_TX0_RF_GAIN_CORR, 0x01010102); @@ -301,18 +240,17 
@@ mt76x0_phy_set_band(struct mt76x02_dev *dev, enum nl80211_band band) static void mt76x0_phy_set_chan_rf_params(struct mt76x02_dev *dev, u8 channel, u16 rf_bw_band) { + const struct mt76x0_freq_item *freq_item; u16 rf_band = rf_bw_band & 0xff00; u16 rf_bw = rf_bw_band & 0x00ff; enum nl80211_band band; + bool b_sdm = false; u32 mac_reg; - u8 rf_val; int i; - bool bSDM = false; - const struct mt76x0_freq_item *freq_item; for (i = 0; i < ARRAY_SIZE(mt76x0_sdm_channel); i++) { if (channel == mt76x0_sdm_channel[i]) { - bSDM = true; + b_sdm = true; break; } } @@ -321,108 +259,84 @@ mt76x0_phy_set_chan_rf_params(struct mt76x02_dev *dev, u8 channel, u16 rf_bw_ban if (channel == mt76x0_frequency_plan[i].channel) { rf_band = mt76x0_frequency_plan[i].band; - if (bSDM) + if (b_sdm) freq_item = &(mt76x0_sdm_frequency_plan[i]); else freq_item = &(mt76x0_frequency_plan[i]); - rf_wr(dev, MT_RF(0, 37), freq_item->pllR37); - rf_wr(dev, MT_RF(0, 36), freq_item->pllR36); - rf_wr(dev, MT_RF(0, 35), freq_item->pllR35); - rf_wr(dev, MT_RF(0, 34), freq_item->pllR34); - rf_wr(dev, MT_RF(0, 33), freq_item->pllR33); + mt76x0_rf_wr(dev, MT_RF(0, 37), freq_item->pllR37); + mt76x0_rf_wr(dev, MT_RF(0, 36), freq_item->pllR36); + mt76x0_rf_wr(dev, MT_RF(0, 35), freq_item->pllR35); + mt76x0_rf_wr(dev, MT_RF(0, 34), freq_item->pllR34); + mt76x0_rf_wr(dev, MT_RF(0, 33), freq_item->pllR33); - rf_val = rf_rr(dev, MT_RF(0, 32)); - rf_val &= ~0xE0; - rf_val |= freq_item->pllR32_b7b5; - rf_wr(dev, MT_RF(0, 32), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 32), 0xe0, + freq_item->pllR32_b7b5); /* R32<4:0> pll_den: (Denominator - 8) */ - rf_val = rf_rr(dev, MT_RF(0, 32)); - rf_val &= ~0x1F; - rf_val |= freq_item->pllR32_b4b0; - rf_wr(dev, MT_RF(0, 32), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 32), MT_RF_PLL_DEN_MASK, + freq_item->pllR32_b4b0); /* R31<7:5> */ - rf_val = rf_rr(dev, MT_RF(0, 31)); - rf_val &= ~0xE0; - rf_val |= freq_item->pllR31_b7b5; - rf_wr(dev, MT_RF(0, 31), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 31), 0xe0, + freq_item->pllR31_b7b5); /* R31<4:0> pll_k(Numerator) */ - rf_val = rf_rr(dev, MT_RF(0, 31)); - rf_val &= ~0x1F; - rf_val |= freq_item->pllR31_b4b0; - rf_wr(dev, MT_RF(0, 31), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 31), MT_RF_PLL_K_MASK, + freq_item->pllR31_b4b0); /* R30<7> sdm_reset_n */ - rf_val = rf_rr(dev, MT_RF(0, 30)); - rf_val &= ~0x80; - if (bSDM) { - rf_wr(dev, MT_RF(0, 30), rf_val); - rf_val |= 0x80; - rf_wr(dev, MT_RF(0, 30), rf_val); + if (b_sdm) { + mt76x0_rf_clear(dev, MT_RF(0, 30), + MT_RF_SDM_RESET_MASK); + mt76x0_rf_set(dev, MT_RF(0, 30), + MT_RF_SDM_RESET_MASK); } else { - rf_val |= freq_item->pllR30_b7; - rf_wr(dev, MT_RF(0, 30), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 30), + MT_RF_SDM_RESET_MASK, + freq_item->pllR30_b7); } /* R30<6:2> sdmmash_prbs,sin */ - rf_val = rf_rr(dev, MT_RF(0, 30)); - rf_val &= ~0x7C; - rf_val |= freq_item->pllR30_b6b2; - rf_wr(dev, MT_RF(0, 30), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 30), + MT_RF_SDM_MASH_PRBS_MASK, + freq_item->pllR30_b6b2); /* R30<1> sdm_bp */ - rf_val = rf_rr(dev, MT_RF(0, 30)); - rf_val &= ~0x02; - rf_val |= (freq_item->pllR30_b1 << 1); - rf_wr(dev, MT_RF(0, 30), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 30), MT_RF_SDM_BP_MASK, + freq_item->pllR30_b1 << 1); /* R30<0> R29<7:0> (hex) pll_n */ - rf_val = freq_item->pll_n & 0x00FF; - rf_wr(dev, MT_RF(0, 29), rf_val); + mt76x0_rf_wr(dev, MT_RF(0, 29), + freq_item->pll_n & 0xff); - rf_val = rf_rr(dev, MT_RF(0, 30)); - rf_val &= ~0x1; - rf_val |= ((freq_item->pll_n >> 8) & 0x0001); - rf_wr(dev, MT_RF(0, 30),
rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 30), 0x1, + (freq_item->pll_n >> 8) & 0x1); /* R28<7:6> isi_iso */ - rf_val = rf_rr(dev, MT_RF(0, 28)); - rf_val &= ~0xC0; - rf_val |= freq_item->pllR28_b7b6; - rf_wr(dev, MT_RF(0, 28), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 28), MT_RF_ISI_ISO_MASK, + freq_item->pllR28_b7b6); /* R28<5:4> pfd_dly */ - rf_val = rf_rr(dev, MT_RF(0, 28)); - rf_val &= ~0x30; - rf_val |= freq_item->pllR28_b5b4; - rf_wr(dev, MT_RF(0, 28), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 28), MT_RF_PFD_DLY_MASK, + freq_item->pllR28_b5b4); /* R28<3:2> clksel option */ - rf_val = rf_rr(dev, MT_RF(0, 28)); - rf_val &= ~0x0C; - rf_val |= freq_item->pllR28_b3b2; - rf_wr(dev, MT_RF(0, 28), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 28), MT_RF_CLK_SEL_MASK, + freq_item->pllR28_b3b2); /* R28<1:0> R27<7:0> R26<7:0> (hex) sdm_k */ - rf_val = freq_item->pll_sdm_k & 0x000000FF; - rf_wr(dev, MT_RF(0, 26), rf_val); - - rf_val = ((freq_item->pll_sdm_k >> 8) & 0x000000FF); - rf_wr(dev, MT_RF(0, 27), rf_val); + mt76x0_rf_wr(dev, MT_RF(0, 26), + freq_item->pll_sdm_k & 0xff); + mt76x0_rf_wr(dev, MT_RF(0, 27), + (freq_item->pll_sdm_k >> 8) & 0xff); - rf_val = rf_rr(dev, MT_RF(0, 28)); - rf_val &= ~0x3; - rf_val |= ((freq_item->pll_sdm_k >> 16) & 0x0003); - rf_wr(dev, MT_RF(0, 28), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 28), 0x3, + (freq_item->pll_sdm_k >> 16) & 0x3); /* R24<1:0> xo_div */ - rf_val = rf_rr(dev, MT_RF(0, 24)); - rf_val &= ~0x3; - rf_val |= freq_item->pllR24_b1b0; - rf_wr(dev, MT_RF(0, 24), rf_val); + mt76x0_rf_rmw(dev, MT_RF(0, 24), MT_RF_XO_DIV_MASK, + freq_item->pllR24_b1b0); break; } @@ -430,25 +344,26 @@ mt76x0_phy_set_chan_rf_params(struct mt76x02_dev *dev, u8 channel, u16 rf_bw_ban for (i = 0; i < ARRAY_SIZE(mt76x0_rf_bw_switch_tab); i++) { if (rf_bw == mt76x0_rf_bw_switch_tab[i].bw_band) { - rf_wr(dev, mt76x0_rf_bw_switch_tab[i].rf_bank_reg, - mt76x0_rf_bw_switch_tab[i].value); + mt76x0_rf_wr(dev, + mt76x0_rf_bw_switch_tab[i].rf_bank_reg, + mt76x0_rf_bw_switch_tab[i].value); } else if ((rf_bw == (mt76x0_rf_bw_switch_tab[i].bw_band & 0xFF)) && (rf_band & mt76x0_rf_bw_switch_tab[i].bw_band)) { - rf_wr(dev, mt76x0_rf_bw_switch_tab[i].rf_bank_reg, - mt76x0_rf_bw_switch_tab[i].value); + mt76x0_rf_wr(dev, + mt76x0_rf_bw_switch_tab[i].rf_bank_reg, + mt76x0_rf_bw_switch_tab[i].value); } } for (i = 0; i < ARRAY_SIZE(mt76x0_rf_band_switch_tab); i++) { if (mt76x0_rf_band_switch_tab[i].bw_band & rf_band) { - rf_wr(dev, mt76x0_rf_band_switch_tab[i].rf_bank_reg, - mt76x0_rf_band_switch_tab[i].value); + mt76x0_rf_wr(dev, + mt76x0_rf_band_switch_tab[i].rf_bank_reg, + mt76x0_rf_band_switch_tab[i].value); } } - mac_reg = mt76_rr(dev, MT_RF_MISC); - mac_reg &= ~0xC; /* Clear 0x518[3:2] */ - mt76_wr(dev, MT_RF_MISC, mac_reg); + mt76_clear(dev, MT_RF_MISC, 0xc); band = (rf_band & RF_G_BAND) ? 
NL80211_BAND_2GHZ : NL80211_BAND_5GHZ; if (mt76x02_ext_pa_enabled(dev, band)) { @@ -457,21 +372,17 @@ mt76x0_phy_set_chan_rf_params(struct mt76x02_dev *dev, u8 channel, u16 rf_bw_ban [2]1'b1: enable external A band PA, 1'b0: disable external A band PA [3]1'b1: enable external G band PA, 1'b0: disable external G band PA */ - if (rf_band & RF_A_BAND) { - mac_reg = mt76_rr(dev, MT_RF_MISC); - mac_reg |= 0x4; - mt76_wr(dev, MT_RF_MISC, mac_reg); - } else { - mac_reg = mt76_rr(dev, MT_RF_MISC); - mac_reg |= 0x8; - mt76_wr(dev, MT_RF_MISC, mac_reg); - } + if (rf_band & RF_A_BAND) + mt76_set(dev, MT_RF_MISC, BIT(2)); + else + mt76_set(dev, MT_RF_MISC, BIT(3)); /* External PA */ for (i = 0; i < ARRAY_SIZE(mt76x0_rf_ext_pa_tab); i++) if (mt76x0_rf_ext_pa_tab[i].bw_band & rf_band) - rf_wr(dev, mt76x0_rf_ext_pa_tab[i].rf_bank_reg, - mt76x0_rf_ext_pa_tab[i].value); + mt76x0_rf_wr(dev, + mt76x0_rf_ext_pa_tab[i].rf_bank_reg, + mt76x0_rf_ext_pa_tab[i].value); } if (rf_band & RF_G_BAND) { @@ -516,27 +427,53 @@ mt76x0_phy_set_chan_bbp_params(struct mt76x02_dev *dev, u16 rf_bw_band) } } -static void mt76x0_ant_select(struct mt76x02_dev *dev) +static void mt76x0_phy_ant_select(struct mt76x02_dev *dev) { - struct ieee80211_channel *chan = dev->mt76.chandef.chan; - - /* single antenna mode */ - if (chan->band == NL80211_BAND_2GHZ) { - mt76_rmw(dev, MT_COEXCFG3, - BIT(5) | BIT(4) | BIT(3) | BIT(2), BIT(1)); - mt76_rmw(dev, MT_WLAN_FUN_CTRL, BIT(5), BIT(6)); + u16 ee_ant = mt76x02_eeprom_get(dev, MT_EE_ANTENNA); + u16 nic_conf2 = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_2); + u32 wlan, coex3, cmb; + bool ant_div; + + wlan = mt76_rr(dev, MT_WLAN_FUN_CTRL); + cmb = mt76_rr(dev, MT_CMB_CTRL); + coex3 = mt76_rr(dev, MT_COEXCFG3); + + cmb &= ~(BIT(14) | BIT(12)); + wlan &= ~(BIT(6) | BIT(5)); + coex3 &= ~GENMASK(5, 2); + + if (ee_ant & MT_EE_ANTENNA_DUAL) { + /* dual antenna mode */ + ant_div = !(nic_conf2 & MT_EE_NIC_CONF_2_ANT_OPT) && + (nic_conf2 & MT_EE_NIC_CONF_2_ANT_DIV); + if (ant_div) + cmb |= BIT(12); + else + coex3 |= BIT(4); + coex3 |= BIT(3); + if (dev->mt76.cap.has_2ghz) + wlan |= BIT(6); } else { - mt76_rmw(dev, MT_COEXCFG3, BIT(5) | BIT(2), - BIT(4) | BIT(3)); - mt76_clear(dev, MT_WLAN_FUN_CTRL, - BIT(6) | BIT(5)); + /* single antenna mode */ + if (dev->mt76.cap.has_5ghz) { + coex3 |= BIT(3) | BIT(4); + } else { + wlan |= BIT(6); + coex3 |= BIT(1); + } } - mt76_clear(dev, MT_CMB_CTRL, BIT(14) | BIT(12)); + + if (is_mt7630(dev)) + cmb |= BIT(14) | BIT(11); + + mt76_wr(dev, MT_WLAN_FUN_CTRL, wlan); + mt76_wr(dev, MT_CMB_CTRL, cmb); mt76_clear(dev, MT_COEXCFG0, BIT(2)); + mt76_wr(dev, MT_COEXCFG3, coex3); } static void -mt76x0_bbp_set_bw(struct mt76x02_dev *dev, enum nl80211_chan_width width) +mt76x0_phy_bbp_set_bw(struct mt76x02_dev *dev, enum nl80211_chan_width width) { enum { BW_20 = 0, BW_40 = 1, BW_80 = 2, BW_10 = 4}; int bw; @@ -563,7 +500,346 @@ mt76x0_phy_bbp_set_bw(struct mt76x02_dev *dev, enum nl80211_chan_width width) return ; } - mt76x02_mcu_function_select(dev, BW_SETTING, bw, false); + mt76x02_mcu_function_select(dev, BW_SETTING, bw); +} + +static void mt76x0_phy_tssi_dc_calibrate(struct mt76x02_dev *dev) +{ + struct ieee80211_channel *chan = dev->mt76.chandef.chan; + u32 val; + + if (chan->band == NL80211_BAND_5GHZ) + mt76x0_rf_clear(dev, MT_RF(0, 67), 0xf); + + /* bypass ADDA control */ + mt76_wr(dev, MT_RF_SETTING_0, 0x60002237); + mt76_wr(dev, MT_RF_BYPASS_0, 0xffffffff); + + /* bbp sw reset */ + mt76_set(dev, MT_BBP(CORE, 4), BIT(0)); + usleep_range(500, 1000); + mt76_clear(dev, MT_BBP(CORE,
4), BIT(0)); + + val = (chan->band == NL80211_BAND_5GHZ) ? 0x80055 : 0x80050; + mt76_wr(dev, MT_BBP(CORE, 34), val); + + /* enable TX with DAC0 input */ + mt76_wr(dev, MT_BBP(TXBE, 6), BIT(31)); + + mt76_poll_msec(dev, MT_BBP(CORE, 34), BIT(4), 0, 200); + dev->cal.tssi_dc = mt76_rr(dev, MT_BBP(CORE, 35)) & 0xff; + + /* stop bypass ADDA */ + mt76_wr(dev, MT_RF_BYPASS_0, 0); + /* stop TX */ + mt76_wr(dev, MT_BBP(TXBE, 6), 0); + /* bbp sw reset */ + mt76_set(dev, MT_BBP(CORE, 4), BIT(0)); + usleep_range(500, 1000); + mt76_clear(dev, MT_BBP(CORE, 4), BIT(0)); + + if (chan->band == NL80211_BAND_5GHZ) + mt76x0_rf_rmw(dev, MT_RF(0, 67), 0xf, 0x4); +} + +static int +mt76x0_phy_tssi_adc_calibrate(struct mt76x02_dev *dev, s16 *ltssi, + u8 *info) +{ + struct ieee80211_channel *chan = dev->mt76.chandef.chan; + u32 val; + + val = (chan->band == NL80211_BAND_5GHZ) ? 0x80055 : 0x80050; + mt76_wr(dev, MT_BBP(CORE, 34), val); + + if (!mt76_poll_msec(dev, MT_BBP(CORE, 34), BIT(4), 0, 200)) { + mt76_clear(dev, MT_BBP(CORE, 34), BIT(4)); + return -ETIMEDOUT; + } + + *ltssi = mt76_rr(dev, MT_BBP(CORE, 35)) & 0xff; + if (chan->band == NL80211_BAND_5GHZ) + *ltssi += 128; + + /* set packet info#1 mode */ + mt76_wr(dev, MT_BBP(CORE, 34), 0x80041); + info[0] = mt76_rr(dev, MT_BBP(CORE, 35)) & 0xff; + + /* set packet info#2 mode */ + mt76_wr(dev, MT_BBP(CORE, 34), 0x80042); + info[1] = mt76_rr(dev, MT_BBP(CORE, 35)) & 0xff; + + /* set packet info#3 mode */ + mt76_wr(dev, MT_BBP(CORE, 34), 0x80043); + info[2] = mt76_rr(dev, MT_BBP(CORE, 35)) & 0xff; + + return 0; +} + +static u8 mt76x0_phy_get_rf_pa_mode(struct mt76x02_dev *dev, + int index, u8 tx_rate) +{ + u32 val, reg; + + reg = (index == 1) ? MT_RF_PA_MODE_CFG1 : MT_RF_PA_MODE_CFG0; + val = mt76_rr(dev, reg); + return (val & (3 << (tx_rate * 2))) >> (tx_rate * 2); +} + +static int +mt76x0_phy_get_target_power(struct mt76x02_dev *dev, u8 tx_mode, + u8 *info, s8 *target_power, + s8 *target_pa_power) +{ + u8 tx_rate, cur_power; + + cur_power = mt76_rr(dev, MT_TX_ALC_CFG_0) & MT_TX_ALC_CFG_0_CH_INIT_0; + switch (tx_mode) { + case 0: + /* cck rates */ + tx_rate = (info[0] & 0x60) >> 5; + if (tx_rate > 3) + return -EINVAL; + + *target_power = cur_power + dev->mt76.rate_power.cck[tx_rate]; + *target_pa_power = mt76x0_phy_get_rf_pa_mode(dev, 0, tx_rate); + break; + case 1: { + u8 index; + + /* ofdm rates */ + tx_rate = (info[0] & 0xf0) >> 4; + switch (tx_rate) { + case 0xb: + index = 0; + break; + case 0xf: + index = 1; + break; + case 0xa: + index = 2; + break; + case 0xe: + index = 3; + break; + case 0x9: + index = 4; + break; + case 0xd: + index = 5; + break; + case 0x8: + index = 6; + break; + case 0xc: + index = 7; + break; + default: + return -EINVAL; + } + + *target_power = cur_power + dev->mt76.rate_power.ofdm[index]; + *target_pa_power = mt76x0_phy_get_rf_pa_mode(dev, 0, index + 4); + break; + } + case 4: + /* vht rates */ + tx_rate = info[1] & 0xf; + if (tx_rate > 9) + return -EINVAL; + + *target_power = cur_power + dev->mt76.rate_power.vht[tx_rate]; + *target_pa_power = mt76x0_phy_get_rf_pa_mode(dev, 1, tx_rate); + break; + default: + /* ht rates */ + tx_rate = info[1] & 0x7f; + if (tx_rate > 9) + return -EINVAL; + + *target_power = cur_power + dev->mt76.rate_power.ht[tx_rate]; + *target_pa_power = mt76x0_phy_get_rf_pa_mode(dev, 1, tx_rate); + break; + } + + return 0; +} + +static s16 mt76x0_phy_lin2db(u16 val) +{ + u32 mantissa = val << 4; + int ret, data; + s16 exp = -4; + + while (mantissa < BIT(15)) { + mantissa <<= 1; + if (--exp < -20) + return -10000; + 
} + while (mantissa > 0xffff) { + mantissa >>= 1; + if (++exp > 20) + return -10000; + } + + /* s(15,0) */ + if (mantissa <= 47104) + data = mantissa + (mantissa >> 3) + (mantissa >> 4) - 38400; + else + data = mantissa - (mantissa >> 3) - (mantissa >> 6) - 23040; + data = max_t(int, 0, data); + + ret = ((15 + exp) << 15) + data; + ret = (ret << 2) + (ret << 1) + (ret >> 6) + (ret >> 7); + return ret >> 10; +} + +static int +mt76x0_phy_get_delta_power(struct mt76x02_dev *dev, u8 tx_mode, + s8 target_power, s8 target_pa_power, + s16 ltssi) +{ + struct ieee80211_channel *chan = dev->mt76.chandef.chan; + int tssi_target = target_power << 12, tssi_slope; + int tssi_offset, tssi_db, ret; + u32 data; + u16 val; + + if (chan->band == NL80211_BAND_5GHZ) { + u8 bound[7]; + int i, err; + + err = mt76x02_eeprom_copy(dev, MT_EE_TSSI_BOUND1, bound, + sizeof(bound)); + if (err < 0) + return err; + + for (i = 0; i < ARRAY_SIZE(bound); i++) { + if (chan->hw_value <= bound[i] || !bound[i]) + break; + } + val = mt76x02_eeprom_get(dev, MT_EE_TSSI_SLOPE_5G + i * 2); + + tssi_offset = val >> 8; + if ((tssi_offset >= 64 && tssi_offset <= 127) || + (tssi_offset & BIT(7))) + tssi_offset -= BIT(8); + } else { + val = mt76x02_eeprom_get(dev, MT_EE_TSSI_SLOPE_2G); + + tssi_offset = val >> 8; + if (tssi_offset & BIT(7)) + tssi_offset -= BIT(8); + } + tssi_slope = val & 0xff; + + switch (target_pa_power) { + case 1: + if (chan->band == NL80211_BAND_2GHZ) + tssi_target += 29491; /* 3.6 * 8192 */ + /* fall through */ + case 0: + break; + default: + tssi_target += 4424; /* 0.54 * 8192 */ + break; + } + + if (!tx_mode) { + data = mt76_rr(dev, MT_BBP(CORE, 1)); + if (is_mt7630(dev) && mt76_is_mmio(dev)) { + int offset; + + /* 2.3 * 8192 or 1.5 * 8192 */ + offset = (data & BIT(5)) ? 
18841 : 12288; + tssi_target += offset; + } else if (data & BIT(5)) { + /* 0.8 * 8192 */ + tssi_target += 6554; + } + } + + data = mt76_rr(dev, MT_BBP(TXBE, 4)); + switch (data & 0x3) { + case 1: + tssi_target -= 49152; /* -6db * 8192 */ + break; + case 2: + tssi_target -= 98304; /* -12db * 8192 */ + break; + case 3: + tssi_target += 49152; /* 6db * 8192 */ + break; + default: + break; + } + + tssi_db = mt76x0_phy_lin2db(ltssi - dev->cal.tssi_dc) * tssi_slope; + if (chan->band == NL80211_BAND_5GHZ) { + tssi_db += ((tssi_offset - 50) << 10); /* offset s4.3 */ + tssi_target -= tssi_db; + if (ltssi > 254 && tssi_target > 0) { + /* upper saturate */ + tssi_target = 0; + } + } else { + tssi_db += (tssi_offset << 9); /* offset s3.4 */ + tssi_target -= tssi_db; + /* upper-lower saturate */ + if ((ltssi > 126 && tssi_target > 0) || + ((ltssi - dev->cal.tssi_dc) < 1 && tssi_target < 0)) { + tssi_target = 0; + } + } + + if ((dev->cal.tssi_target ^ tssi_target) < 0 && + dev->cal.tssi_target > -4096 && dev->cal.tssi_target < 4096 && + tssi_target > -4096 && tssi_target < 4096) { + if ((tssi_target < 0 && + tssi_target + dev->cal.tssi_target > 0) || + (tssi_target > 0 && + tssi_target + dev->cal.tssi_target <= 0)) + tssi_target = 0; + else + dev->cal.tssi_target = tssi_target; + } else { + dev->cal.tssi_target = tssi_target; + } + + /* make the compensate value to the nearest compensate code */ + if (tssi_target > 0) + tssi_target += 2048; + else + tssi_target -= 2048; + tssi_target >>= 12; + + ret = mt76_get_field(dev, MT_TX_ALC_CFG_1, MT_TX_ALC_CFG_1_TEMP_COMP); + if (ret & BIT(5)) + ret -= BIT(6); + ret += tssi_target; + + ret = min_t(int, 31, ret); + return max_t(int, -32, ret); +} + +static void mt76x0_phy_tssi_calibrate(struct mt76x02_dev *dev) +{ + s8 target_power, target_pa_power; + u8 tssi_info[3], tx_mode; + s16 ltssi; + s8 val; + + if (mt76x0_phy_tssi_adc_calibrate(dev, <ssi, tssi_info) < 0) + return; + + tx_mode = tssi_info[0] & 0x7; + if (mt76x0_phy_get_target_power(dev, tx_mode, tssi_info, + &target_power, &target_pa_power) < 0) + return; + + val = mt76x0_phy_get_delta_power(dev, tx_mode, target_power, + target_pa_power, ltssi); + mt76_rmw_field(dev, MT_TX_ALC_CFG_1, MT_TX_ALC_CFG_1_TEMP_COMP, val); } void mt76x0_phy_set_txpower(struct mt76x02_dev *dev) @@ -571,8 +847,8 @@ void mt76x0_phy_set_txpower(struct mt76x02_dev *dev) struct mt76_rate_power *t = &dev->mt76.rate_power; u8 info[2]; - mt76x0_get_power_info(dev, info); mt76x0_get_tx_power_per_rate(dev); + mt76x0_get_power_info(dev, info); mt76x02_add_rate_power_offset(t, info[0]); mt76x02_limit_rate_power(t, dev->mt76.txpower_conf); @@ -585,14 +861,25 @@ void mt76x0_phy_set_txpower(struct mt76x02_dev *dev) void mt76x0_phy_calibrate(struct mt76x02_dev *dev, bool power_on) { struct ieee80211_channel *chan = dev->mt76.chandef.chan; + int is_5ghz = (chan->band == NL80211_BAND_5GHZ) ? 
1 : 0; u32 val, tx_alc, reg_val; + if (is_mt7630(dev)) + return; + if (power_on) { - mt76x02_mcu_calibrate(dev, MCU_CAL_R, 0, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_VCO, chan->hw_value, - false); + mt76x02_mcu_calibrate(dev, MCU_CAL_R, 0); + mt76x02_mcu_calibrate(dev, MCU_CAL_VCO, chan->hw_value); usleep_range(10, 20); - /* XXX: tssi */ + + if (mt76x0_tssi_enabled(dev)) { + mt76_wr(dev, MT_MAC_SYS_CTRL, + MT_MAC_SYS_CTRL_ENABLE_RX); + mt76x0_phy_tssi_dc_calibrate(dev); + mt76_wr(dev, MT_MAC_SYS_CTRL, + MT_MAC_SYS_CTRL_ENABLE_TX | + MT_MAC_SYS_CTRL_ENABLE_RX); + } } tx_alc = mt76_rr(dev, MT_TX_ALC_CFG_0); @@ -602,7 +889,7 @@ void mt76x0_phy_calibrate(struct mt76x02_dev *dev, bool power_on) reg_val = mt76_rr(dev, MT_BBP(IBI, 9)); mt76_wr(dev, MT_BBP(IBI, 9), 0xffffff7e); - if (chan->band == NL80211_BAND_5GHZ) { + if (is_5ghz) { if (chan->hw_value < 100) val = 0x701; else if (chan->hw_value < 140) @@ -613,14 +900,14 @@ void mt76x0_phy_calibrate(struct mt76x02_dev *dev, bool power_on) val = 0x600; } - mt76x02_mcu_calibrate(dev, MCU_CAL_FULL, val, false); + mt76x02_mcu_calibrate(dev, MCU_CAL_FULL, val); msleep(350); - mt76x02_mcu_calibrate(dev, MCU_CAL_LC, 1, false); + mt76x02_mcu_calibrate(dev, MCU_CAL_LC, is_5ghz); usleep_range(15000, 20000); mt76_wr(dev, MT_BBP(IBI, 9), reg_val); mt76_wr(dev, MT_TX_ALC_CFG_0, tx_alc); - mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, 1, false); + mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, 1); } EXPORT_SYMBOL_GPL(mt76x0_phy_calibrate); @@ -684,7 +971,7 @@ int mt76x0_phy_set_channel(struct mt76x02_dev *dev, } if (mt76_is_usb(dev)) { - mt76x0_bbp_set_bw(dev, chandef->width); + mt76x0_phy_bbp_set_bw(dev, chandef->width); } else { if (chandef->width == NL80211_CHAN_WIDTH_80 || chandef->width == NL80211_CHAN_WIDTH_40) @@ -696,7 +983,6 @@ int mt76x0_phy_set_channel(struct mt76x02_dev *dev, mt76x02_phy_set_bw(dev, chandef->width, ch_group_index); mt76x02_phy_set_band(dev, chandef->chan->band, ch_group_index & 1); - mt76x0_ant_select(dev); mt76_rmw(dev, MT_EXT_CCA_CFG, (MT_EXT_CCA_CFG_CCA0 | @@ -710,29 +996,21 @@ int mt76x0_phy_set_channel(struct mt76x02_dev *dev, mt76x0_phy_set_chan_rf_params(dev, channel, rf_bw_band); /* set Japan Tx filter at channel 14 */ - val = mt76_rr(dev, MT_BBP(CORE, 1)); if (channel == 14) - val |= 0x20; + mt76_set(dev, MT_BBP(CORE, 1), 0x20); else - val &= ~0x20; - mt76_wr(dev, MT_BBP(CORE, 1), val); + mt76_clear(dev, MT_BBP(CORE, 1), 0x20); mt76x0_read_rx_gain(dev); mt76x0_phy_set_chan_bbp_params(dev, rf_bw_band); - mt76x02_init_agc_gain(dev); - - if (mt76_is_usb(dev)) { - mt76x0_vco_cal(dev, channel); - } else { - /* enable vco */ - rf_set(dev, MT_RF(0, 4), BIT(7)); - } + /* enable vco */ + mt76x0_rf_set(dev, MT_RF(0, 4), BIT(7)); if (scan) return 0; - if (mt76_is_mmio(dev)) - mt76x0_phy_calibrate(dev, false); + mt76x02_init_agc_gain(dev); + mt76x0_phy_calibrate(dev, false); mt76x0_phy_set_txpower(dev); ieee80211_queue_delayed_work(dev->mt76.hw, &dev->cal_work, @@ -741,55 +1019,21 @@ int mt76x0_phy_set_channel(struct mt76x02_dev *dev, return 0; } -void mt76x0_phy_recalibrate_after_assoc(struct mt76x02_dev *dev) -{ - u32 tx_alc, reg_val; - u8 channel = dev->mt76.chandef.chan->hw_value; - int is_5ghz = (dev->mt76.chandef.chan->band == NL80211_BAND_5GHZ) ? 
1 : 0; - - mt76x02_mcu_calibrate(dev, MCU_CAL_R, 0, false); - - mt76x0_vco_cal(dev, channel); - - tx_alc = mt76_rr(dev, MT_TX_ALC_CFG_0); - mt76_wr(dev, MT_TX_ALC_CFG_0, 0); - usleep_range(500, 700); - - reg_val = mt76_rr(dev, MT_BBP(IBI, 9)); - mt76_wr(dev, MT_BBP(IBI, 9), 0xffffff7e); - - mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, 0, false); - - mt76x02_mcu_calibrate(dev, MCU_CAL_LC, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_LOFT, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_TXIQ, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_TX_GROUP_DELAY, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_RXIQ, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_RX_GROUP_DELAY, is_5ghz, false); - - mt76_wr(dev, MT_BBP(IBI, 9), reg_val); - mt76_wr(dev, MT_TX_ALC_CFG_0, tx_alc); - msleep(100); - - mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, 1, false); -} - -static void mt76x0_temp_sensor(struct mt76x02_dev *dev) +static void mt76x0_phy_temp_sensor(struct mt76x02_dev *dev) { u8 rf_b7_73, rf_b0_66, rf_b0_67; s8 val; - rf_b7_73 = rf_rr(dev, MT_RF(7, 73)); - rf_b0_66 = rf_rr(dev, MT_RF(0, 66)); - rf_b0_67 = rf_rr(dev, MT_RF(0, 67)); + rf_b7_73 = mt76x0_rf_rr(dev, MT_RF(7, 73)); + rf_b0_66 = mt76x0_rf_rr(dev, MT_RF(0, 66)); + rf_b0_67 = mt76x0_rf_rr(dev, MT_RF(0, 67)); - rf_wr(dev, MT_RF(7, 73), 0x02); - rf_wr(dev, MT_RF(0, 66), 0x23); - rf_wr(dev, MT_RF(0, 67), 0x01); + mt76x0_rf_wr(dev, MT_RF(7, 73), 0x02); + mt76x0_rf_wr(dev, MT_RF(0, 66), 0x23); + mt76x0_rf_wr(dev, MT_RF(0, 67), 0x01); mt76_wr(dev, MT_BBP(CORE, 34), 0x00080055); - - if (!mt76_poll(dev, MT_BBP(CORE, 34), BIT(4), 0, 2000)) { + if (!mt76_poll_msec(dev, MT_BBP(CORE, 34), BIT(4), 0, 200)) { mt76_clear(dev, MT_BBP(CORE, 34), BIT(4)); goto done; } @@ -799,8 +1043,7 @@ static void mt76x0_temp_sensor(struct mt76x02_dev *dev) if (abs(val - dev->cal.temp_vco) > 20) { mt76x02_mcu_calibrate(dev, MCU_CAL_VCO, - dev->mt76.chandef.chan->hw_value, - false); + dev->mt76.chandef.chan->hw_value); dev->cal.temp_vco = val; } if (abs(val - dev->cal.temp) > 30) { @@ -809,18 +1052,20 @@ static void mt76x0_temp_sensor(struct mt76x02_dev *dev) } done: - rf_wr(dev, MT_RF(7, 73), rf_b7_73); - rf_wr(dev, MT_RF(0, 66), rf_b0_66); - rf_wr(dev, MT_RF(0, 67), rf_b0_67); + mt76x0_rf_wr(dev, MT_RF(7, 73), rf_b7_73); + mt76x0_rf_wr(dev, MT_RF(0, 66), rf_b0_66); + mt76x0_rf_wr(dev, MT_RF(0, 67), rf_b0_67); } static void mt76x0_phy_set_gain_val(struct mt76x02_dev *dev) { u8 gain = dev->cal.agc_gain_cur[0] - dev->cal.agc_gain_adjust; - u32 val = 0x122c << 16 | 0xf2; - mt76_wr(dev, MT_BBP(AGC, 8), - val | FIELD_PREP(MT_BBP_AGC_GAIN, gain)); + mt76_rmw_field(dev, MT_BBP(AGC, 8), MT_BBP_AGC_GAIN, gain); + + if ((dev->mt76.chandef.chan->flags & IEEE80211_CHAN_RADAR) && + !is_mt7630(dev)) + mt76x02_phy_dfs_adjust_agc(dev); } static void @@ -835,7 +1080,8 @@ mt76x0_phy_update_channel_gain(struct mt76x02_dev *dev) low_gain = (dev->cal.avg_rssi_all > mt76x02_get_rssi_gain_thresh(dev)) + (dev->cal.avg_rssi_all > mt76x02_get_low_rssi_gain_thresh(dev)); - gain_change = (dev->cal.low_gain & 2) ^ (low_gain & 2); + gain_change = dev->cal.low_gain < 0 || + (dev->cal.low_gain & 2) ^ (low_gain & 2); dev->cal.low_gain = low_gain; if (!gain_change) { @@ -860,20 +1106,65 @@ static void mt76x0_phy_calibration_work(struct work_struct *work) cal_work.work); mt76x0_phy_update_channel_gain(dev); - if (!mt76x0_tssi_enabled(dev)) - mt76x0_temp_sensor(dev); + if (mt76x0_tssi_enabled(dev)) + mt76x0_phy_tssi_calibrate(dev); + else + mt76x0_phy_temp_sensor(dev); 
ieee80211_queue_delayed_work(dev->mt76.hw, &dev->cal_work, - MT_CALIBRATE_INTERVAL); + 4 * MT_CALIBRATE_INTERVAL); } -static void mt76x0_rf_init(struct mt76x02_dev *dev) +static void mt76x0_rf_patch_reg_array(struct mt76x02_dev *dev, + const struct mt76_reg_pair *rp, int len) +{ + int i; + + for (i = 0; i < len; i++) { + u32 reg = rp[i].reg; + u8 val = rp[i].value; + + switch (reg) { + case MT_RF(0, 3): + if (mt76_is_mmio(dev)) { + if (is_mt7630(dev)) + val = 0x70; + else + val = 0x63; + } else { + val = 0x73; + } + break; + case MT_RF(0, 21): + if (is_mt7610e(dev)) + val = 0x10; + else + val = 0x12; + break; + case MT_RF(5, 2): + if (is_mt7630(dev)) + val = 0x1d; + else if (is_mt7610e(dev)) + val = 0x00; + else + val = 0x0c; + break; + default: + break; + } + mt76x0_rf_wr(dev, reg, val); + } +} + +static void mt76x0_phy_rf_init(struct mt76x02_dev *dev) { int i; u8 val; - RF_RANDOM_WRITE(dev, mt76x0_rf_central_tab); - RF_RANDOM_WRITE(dev, mt76x0_rf_2g_channel_0_tab); + mt76x0_rf_patch_reg_array(dev, mt76x0_rf_central_tab, + ARRAY_SIZE(mt76x0_rf_central_tab)); + mt76x0_rf_patch_reg_array(dev, mt76x0_rf_2g_channel_0_tab, + ARRAY_SIZE(mt76x0_rf_2g_channel_0_tab)); RF_RANDOM_WRITE(dev, mt76x0_rf_5g_channel_0_tab); RF_RANDOM_WRITE(dev, mt76x0_rf_vga_channel_0_tab); @@ -881,16 +1172,16 @@ static void mt76x0_rf_init(struct mt76x02_dev *dev) const struct mt76x0_rf_switch_item *item = &mt76x0_rf_bw_switch_tab[i]; if (item->bw_band == RF_BW_20) - rf_wr(dev, item->rf_bank_reg, item->value); + mt76x0_rf_wr(dev, item->rf_bank_reg, item->value); else if (((RF_G_BAND | RF_BW_20) & item->bw_band) == (RF_G_BAND | RF_BW_20)) - rf_wr(dev, item->rf_bank_reg, item->value); + mt76x0_rf_wr(dev, item->rf_bank_reg, item->value); } for (i = 0; i < ARRAY_SIZE(mt76x0_rf_band_switch_tab); i++) { if (mt76x0_rf_band_switch_tab[i].bw_band & RF_G_BAND) { - rf_wr(dev, - mt76x0_rf_band_switch_tab[i].rf_bank_reg, - mt76x0_rf_band_switch_tab[i].value); + mt76x0_rf_wr(dev, + mt76x0_rf_band_switch_tab[i].rf_bank_reg, + mt76x0_rf_band_switch_tab[i].value); } } @@ -899,32 +1190,29 @@ static void mt76x0_rf_init(struct mt76x02_dev *dev) E1: B0.R22<6:0>: xo_cxo<6:0> E2: B0.R21<0>: xo_cxo<0>, B0.R22<7:0>: xo_cxo<8:1> */ - rf_wr(dev, MT_RF(0, 22), - min_t(u8, dev->cal.rx.freq_offset, 0xbf)); - val = rf_rr(dev, MT_RF(0, 22)); - - /* - Reset the DAC (Set B0.R73<7>=1, then set B0.R73<7>=0, and then set B0.R73<7>) during power up. + mt76x0_rf_wr(dev, MT_RF(0, 22), + min_t(u8, dev->cal.rx.freq_offset, 0xbf)); + val = mt76x0_rf_rr(dev, MT_RF(0, 22)); + + /* Reset procedure DAC during power-up: + * - set B0.R73<7> + * - clear B0.R73<7> + * - set B0.R73<7> */ - val = rf_rr(dev, MT_RF(0, 73)); - val |= 0x80; - rf_wr(dev, MT_RF(0, 73), val); - val &= ~0x80; - rf_wr(dev, MT_RF(0, 73), val); - val |= 0x80; - rf_wr(dev, MT_RF(0, 73), val); + mt76x0_rf_set(dev, MT_RF(0, 73), BIT(7)); + mt76x0_rf_clear(dev, MT_RF(0, 73), BIT(7)); + mt76x0_rf_set(dev, MT_RF(0, 73), BIT(7)); - /* - vcocal_en (initiate VCO calibration (reset after completion)) - It should be at the end of RF configuration. 
- */ - rf_set(dev, MT_RF(0, 4), 0x80); + /* vcocal_en: initiate VCO calibration (reset after completion) */ + mt76x0_rf_set(dev, MT_RF(0, 4), 0x80); } void mt76x0_phy_init(struct mt76x02_dev *dev) { INIT_DELAYED_WORK(&dev->cal_work, mt76x0_phy_calibration_work); - mt76x0_rf_init(dev); + mt76x0_phy_ant_select(dev); + mt76x0_phy_rf_init(dev); mt76x02_phy_set_rxpath(dev); mt76x02_phy_set_txdac(dev); } diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.h b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.h index 2880a43c3cb0..9889132b768a 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.h @@ -30,6 +30,23 @@ #define MT_RF_BANK(offset) (offset >> 16) #define MT_RF_REG(offset) (offset & 0xff) +#define MT_RF_VCO_BP_CLOSE_LOOP BIT(3) +#define MT_RF_VCO_BP_CLOSE_LOOP_MASK GENMASK(3, 0) +#define MT_RF_VCO_CAL_MASK GENMASK(2, 0) +#define MT_RF_START_TIME 0x3 +#define MT_RF_START_TIME_MASK GENMASK(2, 0) +#define MT_RF_SETTLE_TIME_MASK GENMASK(6, 4) + +#define MT_RF_PLL_DEN_MASK GENMASK(4, 0) +#define MT_RF_PLL_K_MASK GENMASK(4, 0) +#define MT_RF_SDM_RESET_MASK BIT(7) +#define MT_RF_SDM_MASH_PRBS_MASK GENMASK(6, 2) +#define MT_RF_SDM_BP_MASK BIT(1) +#define MT_RF_ISI_ISO_MASK GENMASK(7, 6) +#define MT_RF_PFD_DLY_MASK GENMASK(5, 4) +#define MT_RF_CLK_SEL_MASK GENMASK(3, 2) +#define MT_RF_XO_DIV_MASK GENMASK(1, 0) + struct mt76x0_bbp_switch_item { u16 bw_band; struct mt76_reg_pair reg_pair; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/trace.c b/drivers/net/wireless/mediatek/mt76/mt76x0/trace.c deleted file mode 100644 index 8abdd3cd546d..000000000000 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/trace.c +++ /dev/null @@ -1,21 +0,0 @@ -/* - * Copyright (C) 2014 Felix Fietkau - * Copyright (C) 2015 Jakub Kicinski - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 - * as published by the Free Software Foundation - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - */ - -#include <linux/module.h> - -#ifndef __CHECKER__ -#define CREATE_TRACE_POINTS -#include "trace.h" - -#endif diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/trace.h b/drivers/net/wireless/mediatek/mt76/mt76x0/trace.h deleted file mode 100644 index 75d1d6738c34..000000000000 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/trace.h +++ /dev/null @@ -1,312 +0,0 @@ -/* - * Copyright (C) 2014 Felix Fietkau - * Copyright (C) 2015 Jakub Kicinski - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 - * as published by the Free Software Foundation - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details.
- */ - -#if !defined(__MT76X0U_TRACE_H) || defined(TRACE_HEADER_MULTI_READ) -#define __MT76X0U_TRACE_H - -#include -#include "mt76x0.h" - -#undef TRACE_SYSTEM -#define TRACE_SYSTEM mt76x0 - -#define MAXNAME 32 -#define DEV_ENTRY __array(char, wiphy_name, 32) -#define DEV_ASSIGN strlcpy(__entry->wiphy_name, \ - wiphy_name(dev->hw->wiphy), MAXNAME) -#define DEV_PR_FMT "%s " -#define DEV_PR_ARG __entry->wiphy_name - -#define REG_ENTRY __field(u32, reg) __field(u32, val) -#define REG_ASSIGN __entry->reg = reg; __entry->val = val -#define REG_PR_FMT "%04x=%08x" -#define REG_PR_ARG __entry->reg, __entry->val - -DECLARE_EVENT_CLASS(dev_reg_evt, - TP_PROTO(struct mt76_dev *dev, u32 reg, u32 val), - TP_ARGS(dev, reg, val), - TP_STRUCT__entry( - DEV_ENTRY - REG_ENTRY - ), - TP_fast_assign( - DEV_ASSIGN; - REG_ASSIGN; - ), - TP_printk( - DEV_PR_FMT REG_PR_FMT, - DEV_PR_ARG, REG_PR_ARG - ) -); - -DEFINE_EVENT(dev_reg_evt, mt76x0_reg_read, - TP_PROTO(struct mt76_dev *dev, u32 reg, u32 val), - TP_ARGS(dev, reg, val) -); - -DEFINE_EVENT(dev_reg_evt, mt76x0_reg_write, - TP_PROTO(struct mt76_dev *dev, u32 reg, u32 val), - TP_ARGS(dev, reg, val) -); - -TRACE_EVENT(mt76x0_submit_urb, - TP_PROTO(struct mt76_dev *dev, struct urb *u), - TP_ARGS(dev, u), - TP_STRUCT__entry( - DEV_ENTRY __field(unsigned, pipe) __field(u32, len) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->pipe = u->pipe; - __entry->len = u->transfer_buffer_length; - ), - TP_printk(DEV_PR_FMT "p:%08x len:%u", - DEV_PR_ARG, __entry->pipe, __entry->len) -); - -#define trace_mt76x0_submit_urb_sync(__dev, __pipe, __len) ({ \ - struct urb u; \ - u.pipe = __pipe; \ - u.transfer_buffer_length = __len; \ - trace_mt76x0_submit_urb(__dev, &u); \ -}) - -TRACE_EVENT(mt76x0_mcu_msg_send, - TP_PROTO(struct mt76_dev *dev, - struct sk_buff *skb, u32 csum, bool resp), - TP_ARGS(dev, skb, csum, resp), - TP_STRUCT__entry( - DEV_ENTRY - __field(u32, info) - __field(u32, csum) - __field(bool, resp) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->info = *(u32 *)skb->data; - __entry->csum = csum; - __entry->resp = resp; - ), - TP_printk(DEV_PR_FMT "i:%08x c:%08x r:%d", - DEV_PR_ARG, __entry->info, __entry->csum, __entry->resp) -); - -TRACE_EVENT(mt76x0_vend_req, - TP_PROTO(struct mt76_dev *dev, unsigned pipe, u8 req, u8 req_type, - u16 val, u16 offset, void *buf, size_t buflen, int ret), - TP_ARGS(dev, pipe, req, req_type, val, offset, buf, buflen, ret), - TP_STRUCT__entry( - DEV_ENTRY - __field(unsigned, pipe) __field(u8, req) __field(u8, req_type) - __field(u16, val) __field(u16, offset) __field(void*, buf) - __field(int, buflen) __field(int, ret) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->pipe = pipe; - __entry->req = req; - __entry->req_type = req_type; - __entry->val = val; - __entry->offset = offset; - __entry->buf = buf; - __entry->buflen = buflen; - __entry->ret = ret; - ), - TP_printk(DEV_PR_FMT - "%d p:%08x req:%02hhx %02hhx val:%04hx %04hx buf:%d %d", - DEV_PR_ARG, __entry->ret, __entry->pipe, __entry->req, - __entry->req_type, __entry->val, __entry->offset, - !!__entry->buf, __entry->buflen) -); - -DECLARE_EVENT_CLASS(dev_rf_reg_evt, - TP_PROTO(struct mt76_dev *dev, u8 bank, u8 reg, u8 val), - TP_ARGS(dev, bank, reg, val), - TP_STRUCT__entry( - DEV_ENTRY - __field(u8, bank) - __field(u8, reg) - __field(u8, val) - ), - TP_fast_assign( - DEV_ASSIGN; - REG_ASSIGN; - __entry->bank = bank; - ), - TP_printk( - DEV_PR_FMT "%02hhx:%02hhx=%02hhx", - DEV_PR_ARG, __entry->bank, __entry->reg, __entry->val - ) -); - -DEFINE_EVENT(dev_rf_reg_evt, mt76x0_rf_read, - 
TP_PROTO(struct mt76_dev *dev, u8 bank, u8 reg, u8 val), - TP_ARGS(dev, bank, reg, val) -); - -DEFINE_EVENT(dev_rf_reg_evt, mt76x0_rf_write, - TP_PROTO(struct mt76_dev *dev, u8 bank, u8 reg, u8 val), - TP_ARGS(dev, bank, reg, val) -); - -DECLARE_EVENT_CLASS(dev_simple_evt, - TP_PROTO(struct mt76_dev *dev, u8 val), - TP_ARGS(dev, val), - TP_STRUCT__entry( - DEV_ENTRY - __field(u8, val) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->val = val; - ), - TP_printk( - DEV_PR_FMT "%02hhx", DEV_PR_ARG, __entry->val - ) -); - -TRACE_EVENT(mt76x0_rx, - TP_PROTO(struct mt76_dev *dev, struct mt76x02_rxwi *rxwi, u32 f), - TP_ARGS(dev, rxwi, f), - TP_STRUCT__entry( - DEV_ENTRY - __field_struct(struct mt76x02_rxwi, rxwi) - __field(u32, fce_info) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->rxwi = *rxwi; - __entry->fce_info = f; - ), - TP_printk(DEV_PR_FMT "rxi:%08x ctl:%08x", DEV_PR_ARG, - le32_to_cpu(__entry->rxwi.rxinfo), - le32_to_cpu(__entry->rxwi.ctl)) -); - -TRACE_EVENT(mt76x0_tx, - TP_PROTO(struct mt76_dev *dev, struct sk_buff *skb, - struct mt76x02_sta *sta, struct mt76x02_txwi *h), - TP_ARGS(dev, skb, sta, h), - TP_STRUCT__entry( - DEV_ENTRY - __field_struct(struct mt76x02_txwi, h) - __field(struct sk_buff *, skb) - __field(struct mt76x02_sta *, sta) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->h = *h; - __entry->skb = skb; - __entry->sta = sta; - ), - TP_printk(DEV_PR_FMT "skb:%p sta:%p flg:%04hx rate:%04hx " - "ack:%02hhx wcid:%02hhx len_ctl:%05hx", DEV_PR_ARG, - __entry->skb, __entry->sta, - le16_to_cpu(__entry->h.flags), - le16_to_cpu(__entry->h.rate), - __entry->h.ack_ctl, __entry->h.wcid, - le16_to_cpu(__entry->h.len_ctl)) -); - -TRACE_EVENT(mt76x0_tx_dma_done, - TP_PROTO(struct mt76_dev *dev, struct sk_buff *skb), - TP_ARGS(dev, skb), - TP_STRUCT__entry( - DEV_ENTRY - __field(struct sk_buff *, skb) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->skb = skb; - ), - TP_printk(DEV_PR_FMT "%p", DEV_PR_ARG, __entry->skb) -); - -TRACE_EVENT(mt76x0_tx_status_cleaned, - TP_PROTO(struct mt76_dev *dev, int cleaned), - TP_ARGS(dev, cleaned), - TP_STRUCT__entry( - DEV_ENTRY - __field(int, cleaned) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->cleaned = cleaned; - ), - TP_printk(DEV_PR_FMT "%d", DEV_PR_ARG, __entry->cleaned) -); - -TRACE_EVENT(mt76x0_tx_status, - TP_PROTO(struct mt76_dev *dev, u32 stat1, u32 stat2), - TP_ARGS(dev, stat1, stat2), - TP_STRUCT__entry( - DEV_ENTRY - __field(u32, stat1) __field(u32, stat2) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->stat1 = stat1; - __entry->stat2 = stat2; - ), - TP_printk(DEV_PR_FMT "%08x %08x", - DEV_PR_ARG, __entry->stat1, __entry->stat2) -); - -TRACE_EVENT(mt76x0_rx_dma_aggr, - TP_PROTO(struct mt76_dev *dev, int cnt, bool paged), - TP_ARGS(dev, cnt, paged), - TP_STRUCT__entry( - DEV_ENTRY - __field(u8, cnt) - __field(bool, paged) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->cnt = cnt; - __entry->paged = paged; - ), - TP_printk(DEV_PR_FMT "cnt:%d paged:%d", - DEV_PR_ARG, __entry->cnt, __entry->paged) -); - -DEFINE_EVENT(dev_simple_evt, mt76x0_set_key, - TP_PROTO(struct mt76_dev *dev, u8 val), - TP_ARGS(dev, val) -); - -TRACE_EVENT(mt76x0_set_shared_key, - TP_PROTO(struct mt76_dev *dev, u8 vid, u8 key), - TP_ARGS(dev, vid, key), - TP_STRUCT__entry( - DEV_ENTRY - __field(u8, vid) - __field(u8, key) - ), - TP_fast_assign( - DEV_ASSIGN; - __entry->vid = vid; - __entry->key = key; - ), - TP_printk(DEV_PR_FMT "phy:%02hhx off:%02hhx", - DEV_PR_ARG, __entry->vid, __entry->key) -); - -#endif - -#undef TRACE_INCLUDE_PATH -#define TRACE_INCLUDE_PATH 
. -#undef TRACE_INCLUDE_FILE -#define TRACE_INCLUDE_FILE trace - -#include diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c index a7fd36c2f633..0e6b43bb4678 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c @@ -17,7 +17,6 @@ #include "mt76x0.h" #include "mcu.h" -#include "trace.h" #include "../mt76x02_usb.h" static struct usb_device_id mt76x0_device_table[] = { @@ -117,6 +116,7 @@ static int mt76x0u_start(struct ieee80211_hw *hw) if (ret) goto out; + mt76x0_phy_calibrate(dev, true); ieee80211_queue_delayed_work(dev->mt76.hw, &dev->mac_work, MT_CALIBRATE_INTERVAL); ieee80211_queue_delayed_work(dev->mt76.hw, &dev->cal_work, @@ -145,17 +145,17 @@ static const struct ieee80211_ops mt76x0u_ops = { .remove_interface = mt76x02_remove_interface, .config = mt76x0_config, .configure_filter = mt76x02_configure_filter, - .bss_info_changed = mt76x0_bss_info_changed, - .sta_add = mt76x02_sta_add, - .sta_remove = mt76x02_sta_remove, + .bss_info_changed = mt76x02_bss_info_changed, + .sta_state = mt76_sta_state, .set_key = mt76x02_set_key, .conf_tx = mt76x02_conf_tx, - .sw_scan_start = mt76x0_sw_scan, - .sw_scan_complete = mt76x0_sw_scan_complete, + .sw_scan_start = mt76x02_sw_scan, + .sw_scan_complete = mt76x02_sw_scan_complete, .ampdu_action = mt76x02_ampdu_action, .sta_rate_tbl_update = mt76x02_sta_rate_tbl_update, - .set_rts_threshold = mt76x0_set_rts_threshold, + .set_rts_threshold = mt76x02_set_rts_threshold, .wake_tx_queue = mt76_wake_tx_queue, + .get_txpower = mt76x02_get_txpower, }; static int mt76x0u_register_device(struct mt76x02_dev *dev) @@ -218,6 +218,8 @@ static int mt76x0u_probe(struct usb_interface *usb_intf, .tx_complete_skb = mt76x02u_tx_complete_skb, .tx_status_data = mt76x02_tx_status_data, .rx_skb = mt76x02_queue_rx_skb, + .sta_add = mt76x02_sta_add, + .sta_remove = mt76x02_sta_remove, }; struct usb_device *usb_dev = interface_to_usbdev(usb_intf); struct mt76x02_dev *dev; @@ -337,6 +339,8 @@ err: } MODULE_DEVICE_TABLE(usb, mt76x0_device_table); +MODULE_FIRMWARE(MT7610E_FIRMWARE); +MODULE_FIRMWARE(MT7610U_FIRMWARE); MODULE_LICENSE("GPL"); static struct usb_driver mt76x0_driver = { diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c index a9f14d5149d1..9d7585029df9 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x0/usb_mcu.c @@ -22,7 +22,6 @@ #define MCU_FW_URB_MAX_PAYLOAD 0x38f8 #define MCU_FW_URB_SIZE (MCU_FW_URB_MAX_PAYLOAD + 12) -#define MT7610U_FIRMWARE "mediatek/mt7610u.bin" static int mt76x0u_upload_firmware(struct mt76x02_dev *dev, @@ -75,6 +74,24 @@ out: return err; } +static int mt76x0_get_firmware(struct mt76x02_dev *dev, + const struct firmware **fw) +{ + int err; + + /* try to load mt7610e fw if available + * otherwise fall back to mt7610u one + */ + err = firmware_request_nowarn(fw, MT7610E_FIRMWARE, dev->mt76.dev); + if (err) { + dev_info(dev->mt76.dev, "%s not found, switching to %s", + MT7610E_FIRMWARE, MT7610U_FIRMWARE); + return request_firmware(fw, MT7610U_FIRMWARE, + dev->mt76.dev); + } + return 0; +} + static int mt76x0u_load_firmware(struct mt76x02_dev *dev) { const struct firmware *fw; @@ -88,7 +105,7 @@ static int mt76x0u_load_firmware(struct mt76x02_dev *dev) if (mt76x0_firmware_running(dev)) return 0; - ret = request_firmware(&fw, MT7610U_FIRMWARE, dev->mt76.dev); + ret = mt76x0_get_firmware(dev, &fw); if 
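/*
 * Aside: mt76x0_get_firmware() above relies on firmware_request_nowarn(),
 * the variant of request_firmware() that stays silent when the file is
 * missing, so probing for the optional MT7610E image produces no warning
 * before falling back to the MT7610U image. Hedged sketch of a caller,
 * mirroring the error handling of mt76x0u_load_firmware() in this hunk
 * (the function name is hypothetical):
 */
static int mt76x0u_load_fw_sketch(struct mt76x02_dev *dev)
{
        const struct firmware *fw;
        int ret;

        ret = mt76x0_get_firmware(dev, &fw);    /* 7610e first, else 7610u */
        if (ret)
                return ret;                     /* neither image available */

        /* ... validate the header and upload fw->data / fw->size ... */

        release_firmware(fw);                   /* drop the reference */
        return 0;
}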
(ret) return ret; @@ -171,5 +188,3 @@ int mt76x0u_mcu_init(struct mt76x02_dev *dev) return 0; } - -MODULE_FIRMWARE(MT7610U_FIRMWARE); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02.h b/drivers/net/wireless/mediatek/mt76/mt76x02.h index 7806963b1905..6782665049dd 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x02.h @@ -26,13 +26,7 @@ #include "mt76x02_dfs.h" #include "mt76x02_dma.h" -struct mt76x02_mac_stats { - u64 rx_stat[6]; - u64 tx_stat[6]; - u64 aggr_stat[2]; - u64 aggr_n[32]; - u64 zero_len_del[2]; -}; +#define MT_CALIBRATE_INTERVAL HZ #define MT_MAX_CHAINS 2 struct mt76x02_rx_freq_cal { @@ -63,6 +57,10 @@ struct mt76x02_calibration { bool tssi_comp_pending; bool dpd_cal_done; bool channel_cal_done; + bool gain_init_done; + + int tssi_target; + s8 tssi_dc; }; struct mt76x02_dev { @@ -82,8 +80,6 @@ struct mt76x02_dev { struct delayed_work cal_work; struct delayed_work mac_work; - struct mt76x02_mac_stats stats; - atomic_t avg_ampdu_len; u32 aggr_stats[32]; struct sk_buff *beacons[8]; @@ -109,14 +105,16 @@ struct mt76x02_dev { extern struct ieee80211_rate mt76x02_rates[12]; +void mt76x02_init_device(struct mt76x02_dev *dev); void mt76x02_configure_filter(struct ieee80211_hw *hw, unsigned int changed_flags, unsigned int *total_flags, u64 multicast); -int mt76x02_sta_add(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - struct ieee80211_sta *sta); -int mt76x02_sta_remove(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - struct ieee80211_sta *sta); +int mt76x02_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif, + struct ieee80211_sta *sta); +void mt76x02_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif, + struct ieee80211_sta *sta); +void mt76x02_config_mac_addr_list(struct mt76x02_dev *dev); void mt76x02_vif_init(struct mt76x02_dev *dev, struct ieee80211_vif *vif, unsigned int idx); int mt76x02_add_interface(struct ieee80211_hw *hw, @@ -139,9 +137,12 @@ s8 mt76x02_tx_get_max_txpwr_adj(struct mt76x02_dev *dev, s8 mt76x02_tx_get_txpwr_adj(struct mt76x02_dev *dev, s8 txpwr, s8 max_txpwr_adj); void mt76x02_tx_set_txpwr_auto(struct mt76x02_dev *dev, s8 txpwr); +void mt76x02_set_tx_ackto(struct mt76x02_dev *dev); +void mt76x02_set_coverage_class(struct ieee80211_hw *hw, + s16 coverage_class); +int mt76x02_set_rts_threshold(struct ieee80211_hw *hw, u32 val); int mt76x02_insert_hdr_pad(struct sk_buff *skb); void mt76x02_remove_hdr_pad(struct sk_buff *skb, int len); -void mt76x02_tx_complete(struct mt76_dev *dev, struct sk_buff *skb); bool mt76x02_tx_status_data(struct mt76_dev *mdev, u8 *update); void mt76x02_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q, struct sk_buff *skb); @@ -153,12 +154,24 @@ int mt76x02_tx_prepare_skb(struct mt76_dev *mdev, void *txwi, struct sk_buff *skb, struct mt76_queue *q, struct mt76_wcid *wcid, struct ieee80211_sta *sta, u32 *tx_info); +void mt76x02_sw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + const u8 *mac); +void mt76x02_sw_scan_complete(struct ieee80211_hw *hw, + struct ieee80211_vif *vif); +int mt76x02_get_txpower(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, int *dbm); +void mt76x02_sta_ps(struct mt76_dev *dev, struct ieee80211_sta *sta, bool ps); +void mt76x02_bss_info_changed(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *info, u32 changed); extern const u16 mt76x02_beacon_offsets[16]; -void mt76x02_set_beacon_offsets(struct mt76x02_dev *dev); +void mt76x02_init_beacon_config(struct mt76x02_dev 
*dev); void mt76x02_set_irq_mask(struct mt76x02_dev *dev, u32 clear, u32 set); void mt76x02_mac_start(struct mt76x02_dev *dev); +void mt76x02_init_debugfs(struct mt76x02_dev *dev); + static inline bool is_mt76x2(struct mt76x02_dev *dev) { return mt76_chip(&dev->mt76) == 0x7612 || diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c similarity index 86% rename from drivers/net/wireless/mediatek/mt76/mt76x2/debugfs.c rename to drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c index e8f8ccc0a5ed..a9d52ba1e270 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/debugfs.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c @@ -15,10 +15,10 @@ */ #include -#include "mt76x2.h" +#include "mt76x02.h" static int -mt76x2_ampdu_stat_read(struct seq_file *file, void *data) +mt76x02_ampdu_stat_read(struct seq_file *file, void *data) { struct mt76x02_dev *dev = file->private; int i, j; @@ -42,9 +42,9 @@ mt76x2_ampdu_stat_read(struct seq_file *file, void *data) } static int -mt76x2_ampdu_stat_open(struct inode *inode, struct file *f) +mt76x02_ampdu_stat_open(struct inode *inode, struct file *f) { - return single_open(f, mt76x2_ampdu_stat_read, inode->i_private); + return single_open(f, mt76x02_ampdu_stat_read, inode->i_private); } static int read_txpower(struct seq_file *file, void *data) @@ -59,14 +59,14 @@ static int read_txpower(struct seq_file *file, void *data) } static const struct file_operations fops_ampdu_stat = { - .open = mt76x2_ampdu_stat_open, + .open = mt76x02_ampdu_stat_open, .read = seq_read, .llseek = seq_lseek, .release = single_release, }; static int -mt76x2_dfs_stat_read(struct seq_file *file, void *data) +mt76x02_dfs_stat_read(struct seq_file *file, void *data) { struct mt76x02_dev *dev = file->private; struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; @@ -92,13 +92,13 @@ mt76x2_dfs_stat_read(struct seq_file *file, void *data) } static int -mt76x2_dfs_stat_open(struct inode *inode, struct file *f) +mt76x02_dfs_stat_open(struct inode *inode, struct file *f) { - return single_open(f, mt76x2_dfs_stat_read, inode->i_private); + return single_open(f, mt76x02_dfs_stat_read, inode->i_private); } static const struct file_operations fops_dfs_stat = { - .open = mt76x2_dfs_stat_open, + .open = mt76x02_dfs_stat_open, .read = seq_read, .llseek = seq_lseek, .release = single_release, @@ -116,7 +116,7 @@ static int read_agc(struct seq_file *file, void *data) return 0; } -void mt76x2_init_debugfs(struct mt76x02_dev *dev) +void mt76x02_init_debugfs(struct mt76x02_dev *dev) { struct dentry *dir; @@ -134,4 +134,4 @@ void mt76x2_init_debugfs(struct mt76x02_dev *dev) debugfs_create_devm_seqfile(dev->mt76.dev, "agc", dir, read_agc); } -EXPORT_SYMBOL_GPL(mt76x2_init_debugfs); +EXPORT_SYMBOL_GPL(mt76x02_init_debugfs); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_dfs.c b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c similarity index 84% rename from drivers/net/wireless/mediatek/mt76/mt76x2/pci_dfs.c rename to drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c index b56febae8945..054609c634a2 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_dfs.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c @@ -14,7 +14,7 @@ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ -#include "mt76x2.h" +#include "mt76x02.h" #define RADAR_SPEC(m, len, el, eh, wl, wh, \ w_tolerance, tl, th, t_tolerance, \ @@ -151,8 +151,7 @@ static const struct mt76x02_radar_specs jp_w53_radar_specs[] = { }; static void -mt76x2_dfs_set_capture_mode_ctrl(struct mt76x02_dev *dev, - u8 enable) +mt76x02_dfs_set_capture_mode_ctrl(struct mt76x02_dev *dev, u8 enable) { u32 data; @@ -160,8 +159,8 @@ mt76x2_dfs_set_capture_mode_ctrl(struct mt76x02_dev *dev, mt76_wr(dev, MT_BBP(DFS, 36), data); } -static void mt76x2_dfs_seq_pool_put(struct mt76x02_dev *dev, - struct mt76x02_dfs_sequence *seq) +static void mt76x02_dfs_seq_pool_put(struct mt76x02_dev *dev, + struct mt76x02_dfs_sequence *seq) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; @@ -172,7 +171,7 @@ static void mt76x2_dfs_seq_pool_put(struct mt76x02_dev *dev, } static struct mt76x02_dfs_sequence * -mt76x2_dfs_seq_pool_get(struct mt76x02_dev *dev) +mt76x02_dfs_seq_pool_get(struct mt76x02_dev *dev) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_sequence *seq; @@ -192,7 +191,7 @@ mt76x2_dfs_seq_pool_get(struct mt76x02_dev *dev) return seq; } -static int mt76x2_dfs_get_multiple(int val, int frac, int margin) +static int mt76x02_dfs_get_multiple(int val, int frac, int margin) { int remainder, factor; @@ -214,7 +213,7 @@ static int mt76x2_dfs_get_multiple(int val, int frac, int margin) return factor; } -static void mt76x2_dfs_detector_reset(struct mt76x02_dev *dev) +static void mt76x02_dfs_detector_reset(struct mt76x02_dev *dev) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_sequence *seq, *tmp_seq; @@ -231,11 +230,11 @@ static void mt76x2_dfs_detector_reset(struct mt76x02_dev *dev) list_for_each_entry_safe(seq, tmp_seq, &dfs_pd->sequences, head) { list_del_init(&seq->head); - mt76x2_dfs_seq_pool_put(dev, seq); + mt76x02_dfs_seq_pool_put(dev, seq); } } -static bool mt76x2_dfs_check_chirp(struct mt76x02_dev *dev) +static bool mt76x02_dfs_check_chirp(struct mt76x02_dev *dev) { bool ret = false; u32 current_ts, delta_ts; @@ -256,8 +255,8 @@ static bool mt76x2_dfs_check_chirp(struct mt76x02_dev *dev) return ret; } -static void mt76x2_dfs_get_hw_pulse(struct mt76x02_dev *dev, - struct mt76x02_dfs_hw_pulse *pulse) +static void mt76x02_dfs_get_hw_pulse(struct mt76x02_dev *dev, + struct mt76x02_dfs_hw_pulse *pulse) { u32 data; @@ -276,8 +275,8 @@ static void mt76x2_dfs_get_hw_pulse(struct mt76x02_dev *dev, pulse->burst = mt76_rr(dev, MT_BBP(DFS, 22)); } -static bool mt76x2_dfs_check_hw_pulse(struct mt76x02_dev *dev, - struct mt76x02_dfs_hw_pulse *pulse) +static bool mt76x02_dfs_check_hw_pulse(struct mt76x02_dev *dev, + struct mt76x02_dfs_hw_pulse *pulse) { bool ret = false; @@ -290,7 +289,7 @@ static bool mt76x2_dfs_check_hw_pulse(struct mt76x02_dev *dev, break; if (pulse->engine == 3) { - ret = mt76x2_dfs_check_chirp(dev); + ret = mt76x02_dfs_check_chirp(dev); break; } @@ -334,7 +333,7 @@ static bool mt76x2_dfs_check_hw_pulse(struct mt76x02_dev *dev, break; if (pulse->engine == 3) { - ret = mt76x2_dfs_check_chirp(dev); + ret = mt76x02_dfs_check_chirp(dev); break; } @@ -371,8 +370,8 @@ static bool mt76x2_dfs_check_hw_pulse(struct mt76x02_dev *dev, return ret; } -static bool mt76x2_dfs_fetch_event(struct mt76x02_dev *dev, - struct mt76x02_dfs_event *event) +static bool mt76x02_dfs_fetch_event(struct mt76x02_dev *dev, + struct mt76x02_dfs_event *event) { u32 data; @@ -398,8 +397,8 @@ static bool mt76x2_dfs_fetch_event(struct mt76x02_dev *dev, return true; } -static bool 
mt76x2_dfs_check_event(struct mt76x02_dev *dev, - struct mt76x02_dfs_event *event) +static bool mt76x02_dfs_check_event(struct mt76x02_dev *dev, + struct mt76x02_dfs_event *event) { if (event->engine == 2) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; @@ -417,8 +416,8 @@ static bool mt76x2_dfs_check_event(struct mt76x02_dev *dev, return true; } -static void mt76x2_dfs_queue_event(struct mt76x02_dev *dev, - struct mt76x02_dfs_event *event) +static void mt76x02_dfs_queue_event(struct mt76x02_dev *dev, + struct mt76x02_dfs_event *event) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_event_rb *event_buff; @@ -435,9 +434,9 @@ static void mt76x2_dfs_queue_event(struct mt76x02_dev *dev, MT_DFS_EVENT_BUFLEN); } -static int mt76x2_dfs_create_sequence(struct mt76x02_dev *dev, - struct mt76x02_dfs_event *event, - u16 cur_len) +static int mt76x02_dfs_create_sequence(struct mt76x02_dev *dev, + struct mt76x02_dfs_event *event, + u16 cur_len) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_sw_detector_params *sw_params; @@ -497,7 +496,7 @@ static int mt76x2_dfs_create_sequence(struct mt76x02_dev *dev, while (j != end) { cur_event = &event_rb->data[j]; cur_pri = event->ts - cur_event->ts; - factor = mt76x2_dfs_get_multiple(cur_pri, seq.pri, + factor = mt76x02_dfs_get_multiple(cur_pri, seq.pri, sw_params->pri_margin); if (factor > 0) { seq.first_ts = cur_event->ts; @@ -509,7 +508,7 @@ static int mt76x2_dfs_create_sequence(struct mt76x02_dev *dev, if (seq.count <= cur_len) goto next; - seq_p = mt76x2_dfs_seq_pool_get(dev); + seq_p = mt76x02_dfs_seq_pool_get(dev); if (!seq_p) return -ENOMEM; @@ -522,8 +521,8 @@ next: return 0; } -static u16 mt76x2_dfs_add_event_to_sequence(struct mt76x02_dev *dev, - struct mt76x02_dfs_event *event) +static u16 mt76x02_dfs_add_event_to_sequence(struct mt76x02_dev *dev, + struct mt76x02_dfs_event *event) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_sw_detector_params *sw_params; @@ -535,7 +534,7 @@ static u16 mt76x2_dfs_add_event_to_sequence(struct mt76x02_dev *dev, list_for_each_entry_safe(seq, tmp_seq, &dfs_pd->sequences, head) { if (event->ts > seq->first_ts + MT_DFS_SEQUENCE_WINDOW) { list_del_init(&seq->head); - mt76x2_dfs_seq_pool_put(dev, seq); + mt76x02_dfs_seq_pool_put(dev, seq); continue; } @@ -543,8 +542,8 @@ static u16 mt76x2_dfs_add_event_to_sequence(struct mt76x02_dev *dev, continue; pri = event->ts - seq->last_ts; - factor = mt76x2_dfs_get_multiple(pri, seq->pri, - sw_params->pri_margin); + factor = mt76x02_dfs_get_multiple(pri, seq->pri, + sw_params->pri_margin); if (factor > 0) { seq->last_ts = event->ts; seq->count++; @@ -554,7 +553,7 @@ static u16 mt76x2_dfs_add_event_to_sequence(struct mt76x02_dev *dev, return max_seq_len; } -static bool mt76x2_dfs_check_detection(struct mt76x02_dev *dev) +static bool mt76x02_dfs_check_detection(struct mt76x02_dev *dev) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_sequence *seq; @@ -571,34 +570,34 @@ static bool mt76x2_dfs_check_detection(struct mt76x02_dev *dev) return false; } -static void mt76x2_dfs_add_events(struct mt76x02_dev *dev) +static void mt76x02_dfs_add_events(struct mt76x02_dev *dev) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_event event; int i, seq_len; /* disable debug mode */ - mt76x2_dfs_set_capture_mode_ctrl(dev, false); + mt76x02_dfs_set_capture_mode_ctrl(dev, false); for (i = 0; i < MT_DFS_EVENT_LOOP; i++) { - if 
(!mt76x2_dfs_fetch_event(dev, &event)) + if (!mt76x02_dfs_fetch_event(dev, &event)) break; if (dfs_pd->last_event_ts > event.ts) - mt76x2_dfs_detector_reset(dev); + mt76x02_dfs_detector_reset(dev); dfs_pd->last_event_ts = event.ts; - if (!mt76x2_dfs_check_event(dev, &event)) + if (!mt76x02_dfs_check_event(dev, &event)) continue; - seq_len = mt76x2_dfs_add_event_to_sequence(dev, &event); - mt76x2_dfs_create_sequence(dev, &event, seq_len); + seq_len = mt76x02_dfs_add_event_to_sequence(dev, &event); + mt76x02_dfs_create_sequence(dev, &event, seq_len); - mt76x2_dfs_queue_event(dev, &event); + mt76x02_dfs_queue_event(dev, &event); } - mt76x2_dfs_set_capture_mode_ctrl(dev, true); + mt76x02_dfs_set_capture_mode_ctrl(dev, true); } -static void mt76x2_dfs_check_event_window(struct mt76x02_dev *dev) +static void mt76x02_dfs_check_event_window(struct mt76x02_dev *dev) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; struct mt76x02_dfs_event_rb *event_buff; @@ -621,7 +620,7 @@ static void mt76x2_dfs_check_event_window(struct mt76x02_dev *dev) } } -static void mt76x2_dfs_tasklet(unsigned long arg) +static void mt76x02_dfs_tasklet(unsigned long arg) { struct mt76x02_dev *dev = (struct mt76x02_dev *)arg; struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; @@ -637,16 +636,16 @@ static void mt76x2_dfs_tasklet(unsigned long arg) dfs_pd->last_sw_check = jiffies; - mt76x2_dfs_add_events(dev); - radar_detected = mt76x2_dfs_check_detection(dev); + mt76x02_dfs_add_events(dev); + radar_detected = mt76x02_dfs_check_detection(dev); if (radar_detected) { /* sw detector rx radar pattern */ ieee80211_radar_detected(dev->mt76.hw); - mt76x2_dfs_detector_reset(dev); + mt76x02_dfs_detector_reset(dev); return; } - mt76x2_dfs_check_event_window(dev); + mt76x02_dfs_check_event_window(dev); } engine_mask = mt76_rr(dev, MT_BBP(DFS, 1)); @@ -660,9 +659,9 @@ static void mt76x2_dfs_tasklet(unsigned long arg) continue; pulse.engine = i; - mt76x2_dfs_get_hw_pulse(dev, &pulse); + mt76x02_dfs_get_hw_pulse(dev, &pulse); - if (!mt76x2_dfs_check_hw_pulse(dev, &pulse)) { + if (!mt76x02_dfs_check_hw_pulse(dev, &pulse)) { dfs_pd->stats[i].hw_pulse_discarded++; continue; } @@ -670,7 +669,7 @@ static void mt76x2_dfs_tasklet(unsigned long arg) /* hw detector rx radar pattern */ dfs_pd->stats[i].hw_pattern++; ieee80211_radar_detected(dev->mt76.hw); - mt76x2_dfs_detector_reset(dev); + mt76x02_dfs_detector_reset(dev); return; } @@ -682,7 +681,7 @@ out: mt76x02_irq_enable(dev, MT_INT_GPTIMER); } -static void mt76x2_dfs_init_sw_detector(struct mt76x02_dev *dev) +static void mt76x02_dfs_init_sw_detector(struct mt76x02_dev *dev) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; @@ -708,7 +707,7 @@ static void mt76x2_dfs_init_sw_detector(struct mt76x02_dev *dev) } } -static void mt76x2_dfs_set_bbp_params(struct mt76x02_dev *dev) +static void mt76x02_dfs_set_bbp_params(struct mt76x02_dev *dev) { const struct mt76x02_radar_specs *radar_specs; u8 i, shift; @@ -800,10 +799,10 @@ static void mt76x2_dfs_set_bbp_params(struct mt76x02_dev *dev) /* enable detection*/ mt76_wr(dev, MT_BBP(DFS, 0), MT_DFS_CH_EN << 16); - mt76_wr(dev, 0x212c, 0x0c350001); + mt76_wr(dev, MT_BBP(IBI, 11), 0x0c350001); } -void mt76x2_dfs_adjust_agc(struct mt76x02_dev *dev) +void mt76x02_phy_dfs_adjust_agc(struct mt76x02_dev *dev) { u32 agc_r8, agc_r4, val_r8, val_r4, dfs_r31; @@ -821,19 +820,27 @@ void mt76x2_dfs_adjust_agc(struct mt76x02_dev *dev) dfs_r31 = (dfs_r31 << 16) | 0x00000307; mt76_wr(dev, MT_BBP(DFS, 31), dfs_r31); - mt76_wr(dev, 
MT_BBP(DFS, 32), 0x00040071); + if (is_mt76x2(dev)) { + mt76_wr(dev, MT_BBP(DFS, 32), 0x00040071); + } else { + /* disable hw detector */ + mt76_wr(dev, MT_BBP(DFS, 0), 0); + /* enable hw detector */ + mt76_wr(dev, MT_BBP(DFS, 0), MT_DFS_CH_EN << 16); + } } +EXPORT_SYMBOL_GPL(mt76x02_phy_dfs_adjust_agc); -void mt76x2_dfs_init_params(struct mt76x02_dev *dev) +void mt76x02_dfs_init_params(struct mt76x02_dev *dev) { struct cfg80211_chan_def *chandef = &dev->mt76.chandef; if ((chandef->chan->flags & IEEE80211_CHAN_RADAR) && dev->dfs_pd.region != NL80211_DFS_UNSET) { - mt76x2_dfs_init_sw_detector(dev); - mt76x2_dfs_set_bbp_params(dev); + mt76x02_dfs_init_sw_detector(dev); + mt76x02_dfs_set_bbp_params(dev); /* enable debug mode */ - mt76x2_dfs_set_capture_mode_ctrl(dev, true); + mt76x02_dfs_set_capture_mode_ctrl(dev, true); mt76x02_irq_enable(dev, MT_INT_GPTIMER); mt76_rmw_field(dev, MT_INT_TIMER_EN, @@ -843,15 +850,20 @@ void mt76x2_dfs_init_params(struct mt76x02_dev *dev) mt76_wr(dev, MT_BBP(DFS, 0), 0); /* clear detector status */ mt76_wr(dev, MT_BBP(DFS, 1), 0xf); - mt76_wr(dev, 0x212c, 0); + if (mt76_chip(&dev->mt76) == 0x7610 || + mt76_chip(&dev->mt76) == 0x7630) + mt76_wr(dev, MT_BBP(IBI, 11), 0xfde8081); + else + mt76_wr(dev, MT_BBP(IBI, 11), 0); mt76x02_irq_disable(dev, MT_INT_GPTIMER); mt76_rmw_field(dev, MT_INT_TIMER_EN, MT_INT_TIMER_EN_GP_TIMER_EN, 0); } } +EXPORT_SYMBOL_GPL(mt76x02_dfs_init_params); -void mt76x2_dfs_init_detector(struct mt76x02_dev *dev) +void mt76x02_dfs_init_detector(struct mt76x02_dev *dev) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; @@ -859,20 +871,29 @@ void mt76x2_dfs_init_detector(struct mt76x02_dev *dev) INIT_LIST_HEAD(&dfs_pd->seq_pool); dfs_pd->region = NL80211_DFS_UNSET; dfs_pd->last_sw_check = jiffies; - tasklet_init(&dfs_pd->dfs_tasklet, mt76x2_dfs_tasklet, + tasklet_init(&dfs_pd->dfs_tasklet, mt76x02_dfs_tasklet, (unsigned long)dev); } -void mt76x2_dfs_set_domain(struct mt76x02_dev *dev, - enum nl80211_dfs_regions region) +static void +mt76x02_dfs_set_domain(struct mt76x02_dev *dev, + enum nl80211_dfs_regions region) { struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd; if (dfs_pd->region != region) { tasklet_disable(&dfs_pd->dfs_tasklet); dfs_pd->region = region; - mt76x2_dfs_init_params(dev); + mt76x02_dfs_init_params(dev); tasklet_enable(&dfs_pd->dfs_tasklet); } } +void mt76x02_regd_notifier(struct wiphy *wiphy, + struct regulatory_request *request) +{ + struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy); + struct mt76x02_dev *dev = hw->priv; + + mt76x02_dfs_set_domain(dev, request->dfs_region); +} diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h index 7e177c934592..70b394e17340 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h @@ -137,4 +137,9 @@ struct mt76x02_dfs_pattern_detector { struct tasklet_struct dfs_tasklet; }; +void mt76x02_dfs_init_params(struct mt76x02_dev *dev); +void mt76x02_dfs_init_detector(struct mt76x02_dev *dev); +void mt76x02_regd_notifier(struct wiphy *wiphy, + struct regulatory_request *request); +void mt76x02_phy_dfs_adjust_agc(struct mt76x02_dev *dev); #endif /* __MT76x02_DFS_H */ diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c index 9390de2a323e..07f0496d828a 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c @@ -53,6 +53,18 
@@ mt76x02_efuse_read(struct mt76x02_dev *dev, u16 addr, u8 *data, return 0; } +int mt76x02_eeprom_copy(struct mt76x02_dev *dev, + enum mt76x02_eeprom_field field, + void *dest, int len) +{ + if (field + len > dev->mt76.eeprom.size) + return -1; + + memcpy(dest, dev->mt76.eeprom.data + field, len); + return 0; +} +EXPORT_SYMBOL_GPL(mt76x02_eeprom_copy); + int mt76x02_get_efuse_data(struct mt76x02_dev *dev, u16 base, void *buf, int len, enum mt76x02_eeprom_modes mode) { diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h index b3ec74835d10..e3442bc4e0a4 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h @@ -25,6 +25,7 @@ enum mt76x02_eeprom_field { MT_EE_VERSION = 0x002, MT_EE_MAC_ADDR = 0x004, MT_EE_PCI_ID = 0x00A, + MT_EE_ANTENNA = 0x022, MT_EE_NIC_CONF_0 = 0x034, MT_EE_NIC_CONF_1 = 0x036, MT_EE_COUNTRY_REGION_5GHZ = 0x038, @@ -55,6 +56,7 @@ enum mt76x02_eeprom_field { #define MT_TX_POWER_GROUP_SIZE_5G 5 #define MT_TX_POWER_GROUPS_5G 6 MT_EE_TX_POWER_0_START_5G = 0x062, + MT_EE_TSSI_SLOPE_2G = 0x06e, MT_EE_TX_POWER_0_GRP3_TX_POWER_DELTA = 0x074, MT_EE_TX_POWER_0_GRP4_TSSI_SLOPE = 0x076, @@ -85,6 +87,7 @@ enum mt76x02_eeprom_field { MT_EE_TSSI_BOUND5 = 0x0dc, MT_EE_TX_POWER_BYRATE_BASE = 0x0de, + MT_EE_TSSI_SLOPE_5G = 0x0f0, MT_EE_RF_TEMP_COMP_SLOPE_5G = 0x0f2, MT_EE_RF_TEMP_COMP_SLOPE_2G = 0x0f4, @@ -104,6 +107,8 @@ enum mt76x02_eeprom_field { __MT_EE_MAX }; +#define MT_EE_ANTENNA_DUAL BIT(15) + #define MT_EE_NIC_CONF_0_RX_PATH GENMASK(3, 0) #define MT_EE_NIC_CONF_0_TX_PATH GENMASK(7, 4) #define MT_EE_NIC_CONF_0_PA_TYPE GENMASK(9, 8) @@ -118,12 +123,9 @@ enum mt76x02_eeprom_field { #define MT_EE_NIC_CONF_1_LNA_EXT_5G BIT(3) #define MT_EE_NIC_CONF_1_TX_ALC_EN BIT(13) -#define MT_EE_NIC_CONF_2_RX_STREAM GENMASK(3, 0) -#define MT_EE_NIC_CONF_2_TX_STREAM GENMASK(7, 4) -#define MT_EE_NIC_CONF_2_HW_ANTDIV BIT(8) +#define MT_EE_NIC_CONF_2_ANT_OPT BIT(3) +#define MT_EE_NIC_CONF_2_ANT_DIV BIT(4) #define MT_EE_NIC_CONF_2_XTAL_OPTION GENMASK(10, 9) -#define MT_EE_NIC_CONF_2_TEMP_DISABLE BIT(11) -#define MT_EE_NIC_CONF_2_COEX_METHOD GENMASK(15, 13) #define MT_EFUSE_USAGE_MAP_SIZE (MT_EE_USAGE_MAP_END - \ MT_EE_USAGE_MAP_START + 1) @@ -188,5 +190,8 @@ u8 mt76x02_get_lna_gain(struct mt76x02_dev *dev, s8 *lna_2g, s8 *lna_5g, struct ieee80211_channel *chan); void mt76x02_eeprom_parse_hw_cap(struct mt76x02_dev *dev); +int mt76x02_eeprom_copy(struct mt76x02_dev *dev, + enum mt76x02_eeprom_field field, + void *dest, int len); #endif /* __MT76x02_EEPROM_H */ diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c index 10578e4cb269..c08bf371e527 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c @@ -18,7 +18,7 @@ #include "mt76x02.h" #include "mt76x02_trace.h" -enum mt76x02_cipher_type +static enum mt76x02_cipher_type mt76x02_mac_get_key_info(struct ieee80211_key_conf *key, u8 *key_data) { memset(key_data, 0, 32); @@ -43,7 +43,6 @@ mt76x02_mac_get_key_info(struct ieee80211_key_conf *key, u8 *key_data) return MT_CIPHER_NONE; } } -EXPORT_SYMBOL_GPL(mt76x02_mac_get_key_info); int mt76x02_mac_shared_key_setup(struct mt76x02_dev *dev, u8 vif_idx, u8 key_idx, struct ieee80211_key_conf *key) @@ -95,7 +94,6 @@ int mt76x02_mac_wcid_set_key(struct mt76x02_dev *dev, u8 idx, return 0; } -EXPORT_SYMBOL_GPL(mt76x02_mac_wcid_set_key); void 
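/*
 * Aside: mt76x02_eeprom_copy() above is a bounds-checked memcpy out of
 * the cached EEPROM image; it returns -1 when field + len would run past
 * dev->mt76.eeprom.size. Hedged usage sketch reading the MAC address
 * field (MT_EE_MAC_ADDR is defined in this patch; ETH_ALEN comes from
 * <linux/if_ether.h>; the wrapper itself is hypothetical):
 */
static int example_read_eeprom_mac(struct mt76x02_dev *dev, u8 *mac)
{
        if (mt76x02_eeprom_copy(dev, MT_EE_MAC_ADDR, mac, ETH_ALEN) < 0)
                return -EINVAL;         /* field beyond the EEPROM image */
        return 0;
}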
mt76x02_mac_wcid_setup(struct mt76x02_dev *dev, u8 idx, u8 vif_idx, u8 *mac) @@ -108,9 +106,6 @@ void mt76x02_mac_wcid_setup(struct mt76x02_dev *dev, u8 idx, mt76_wr(dev, MT_WCID_ATTR(idx), attr); - mt76_wr(dev, MT_WCID_TX_RATE(idx), 0); - mt76_wr(dev, MT_WCID_TX_RATE(idx) + 4, 0); - if (idx >= 128) return; @@ -130,31 +125,6 @@ void mt76x02_mac_wcid_set_drop(struct mt76x02_dev *dev, u8 idx, bool drop) if ((val & bit) != (bit * drop)) mt76_wr(dev, MT_WCID_DROP(idx), (val & ~bit) | (bit * drop)); } -EXPORT_SYMBOL_GPL(mt76x02_mac_wcid_set_drop); - -void mt76x02_txq_init(struct mt76x02_dev *dev, struct ieee80211_txq *txq) -{ - struct mt76_txq *mtxq; - - if (!txq) - return; - - mtxq = (struct mt76_txq *) txq->drv_priv; - if (txq->sta) { - struct mt76x02_sta *sta; - - sta = (struct mt76x02_sta *) txq->sta->drv_priv; - mtxq->wcid = &sta->wcid; - } else { - struct mt76x02_vif *mvif; - - mvif = (struct mt76x02_vif *) txq->vif->drv_priv; - mtxq->wcid = &mvif->group_wcid; - } - - mt76_txq_init(&dev->mt76, txq); -} -EXPORT_SYMBOL_GPL(mt76x02_txq_init); static __le16 mt76x02_mac_tx_rate_val(struct mt76x02_dev *dev, @@ -216,6 +186,14 @@ void mt76x02_mac_wcid_set_rate(struct mt76x02_dev *dev, struct mt76_wcid *wcid, spin_unlock_bh(&dev->mt76.lock); } +void mt76x02_mac_set_short_preamble(struct mt76x02_dev *dev, bool enable) +{ + if (enable) + mt76_set(dev, MT_AUTO_RSP_CFG, MT_AUTO_RSP_PREAMB_SHORT); + else + mt76_clear(dev, MT_AUTO_RSP_CFG, MT_AUTO_RSP_PREAMB_SHORT); +} + bool mt76x02_mac_load_tx_status(struct mt76x02_dev *dev, struct mt76x02_tx_status *stat) { @@ -237,9 +215,10 @@ bool mt76x02_mac_load_tx_status(struct mt76x02_dev *dev, stat->retry = FIELD_GET(MT_TX_STAT_FIFO_EXT_RETRY, stat2); stat->pktid = FIELD_GET(MT_TX_STAT_FIFO_EXT_PKTID, stat2); + trace_mac_txstat_fetch(dev, stat); + return true; } -EXPORT_SYMBOL_GPL(mt76x02_mac_load_tx_status); static int mt76x02_mac_process_tx_rate(struct ieee80211_tx_rate *txrate, u16 rate, @@ -319,8 +298,6 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi, else txwi->wcid = 0xff; - txwi->pktid = 1; - if (wcid && wcid->sw_iv && key) { u64 pn = atomic64_inc_return(&key->tx_pn); ccmp_pn[0] = pn; @@ -366,8 +343,6 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi, txwi->ack_ctl |= MT_TXWI_ACK_CTL_REQ; if (info->flags & IEEE80211_TX_CTL_ASSIGN_SEQ) txwi->ack_ctl |= MT_TXWI_ACK_CTL_NSEQ; - if (info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) - txwi->pktid |= MT_TXWI_PKTID_PROBE; if ((info->flags & IEEE80211_TX_CTL_AMPDU) && sta) { u8 ba_size = IEEE80211_MIN_AMPDU_BUF; @@ -420,9 +395,6 @@ mt76x02_mac_fill_tx_status(struct mt76x02_dev *dev, info->status.ampdu_len = n_frames; info->status.ampdu_ack_len = st->success ? 
n_frames : 0; - if (st->pktid & MT_TXWI_PKTID_PROBE) - info->flags |= IEEE80211_TX_CTL_RATE_CTRL_PROBE; - if (st->aggr) info->flags |= IEEE80211_TX_CTL_AMPDU | IEEE80211_TX_STAT_AMPDU; @@ -437,23 +409,40 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev, struct mt76x02_tx_status *stat, u8 *update) { struct ieee80211_tx_info info = {}; - struct ieee80211_sta *sta = NULL; + struct ieee80211_tx_status status = { + .info = &info + }; struct mt76_wcid *wcid = NULL; struct mt76x02_sta *msta = NULL; + struct mt76_dev *mdev = &dev->mt76; + struct sk_buff_head list; + + if (stat->pktid == MT_PACKET_ID_NO_ACK) + return; rcu_read_lock(); + mt76_tx_status_lock(mdev, &list); + if (stat->wcid < ARRAY_SIZE(dev->mt76.wcid)) wcid = rcu_dereference(dev->mt76.wcid[stat->wcid]); - if (wcid) { + if (wcid && wcid->sta) { void *priv; priv = msta = container_of(wcid, struct mt76x02_sta, wcid); - sta = container_of(priv, struct ieee80211_sta, - drv_priv); + status.sta = container_of(priv, struct ieee80211_sta, + drv_priv); + } + + if (wcid) { + if (stat->pktid) + status.skb = mt76_tx_status_skb_get(mdev, wcid, + stat->pktid, &list); + if (status.skb) + status.info = IEEE80211_SKB_CB(status.skb); } - if (msta && stat->aggr) { + if (msta && stat->aggr && !status.skb) { u32 stat_val, stat_cache; stat_val = stat->rate; @@ -467,25 +456,28 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev, goto out; } - mt76x02_mac_fill_tx_status(dev, &info, &msta->status, + mt76x02_mac_fill_tx_status(dev, status.info, &msta->status, msta->n_frames); msta->status = *stat; msta->n_frames = 1; *update = 0; } else { - mt76x02_mac_fill_tx_status(dev, &info, stat, 1); + mt76x02_mac_fill_tx_status(dev, status.info, stat, 1); *update = 1; } - ieee80211_tx_status_noskb(dev->mt76.hw, sta, &info); + if (status.skb) + mt76_tx_status_skb_done(mdev, status.skb, &list); + else + ieee80211_tx_status_ext(mt76_hw(dev), &status); out: + mt76_tx_status_unlock(mdev, &list); rcu_read_unlock(); } -EXPORT_SYMBOL_GPL(mt76x02_send_tx_status); -int +static int mt76x02_mac_process_rate(struct mt76_rx_status *status, u16 rate) { u8 idx = FIELD_GET(MT_RXWI_RATE_INDEX, rate); @@ -551,7 +543,6 @@ mt76x02_mac_process_rate(struct mt76_rx_status *status, u16 rate) return 0; } -EXPORT_SYMBOL_GPL(mt76x02_mac_process_rate); void mt76x02_mac_setaddr(struct mt76x02_dev *dev, u8 *addr) { @@ -695,8 +686,6 @@ void mt76x02_mac_poll_tx_status(struct mt76x02_dev *dev, bool irq) if (!ret) break; - trace_mac_txstat_fetch(dev, &stat); - if (!irq) { mt76x02_send_tx_status(dev, &stat, &update); continue; @@ -705,33 +694,230 @@ void mt76x02_mac_poll_tx_status(struct mt76x02_dev *dev, bool irq) kfifo_put(&dev->txstatus_fifo, stat); } } -EXPORT_SYMBOL_GPL(mt76x02_mac_poll_tx_status); -static void -mt76x02_mac_queue_txdone(struct mt76x02_dev *dev, struct sk_buff *skb, - void *txwi_ptr) +void mt76x02_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue *q, + struct mt76_queue_entry *e, bool flush) { - struct mt76x02_tx_info *txi = mt76x02_skb_tx_info(skb); - struct mt76x02_txwi *txwi = txwi_ptr; + struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); + struct mt76x02_txwi *txwi; + + if (!e->txwi) { + dev_kfree_skb_any(e->skb); + return; + } mt76x02_mac_poll_tx_status(dev, false); - txi->tries = 0; - txi->jiffies = jiffies; - txi->wcid = txwi->wcid; - txi->pktid = txwi->pktid; + txwi = (struct mt76x02_txwi *) &e->txwi->txwi; trace_mac_txdone_add(dev, txwi->wcid, txwi->pktid); - mt76x02_tx_complete(&dev->mt76, skb); + + mt76_tx_complete_skb(mdev, e->skb); } 
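/*
 * Aside: the reworked status path above stops guessing with
 * ieee80211_tx_status_noskb() and instead pairs each TX stat FIFO entry
 * with its originating skb through the packet id: the tx path stamps
 * txwi->pktid via mt76_tx_status_skb_add() (see the tx_prepare_skb hunks
 * later in this patch), the hardware echoes the id in the stat FIFO, and
 * mt76_tx_status_skb_get() recovers the skb under mt76_tx_status_lock().
 * Reduced sketch of the lookup idea -- illustrative only; mt76 actually
 * keeps pending frames on a status list, not in a flat table:
 */
struct pktid_slot {
        struct sk_buff *skb;    /* frame still awaiting hardware status */
        u8 pktid;               /* id that was stamped into the txwi */
};

static struct sk_buff *pktid_lookup(struct pktid_slot *slots, int n, u8 pktid)
{
        int i;

        for (i = 0; i < n; i++) {
                if (slots[i].skb && slots[i].pktid == pktid) {
                        struct sk_buff *skb = slots[i].skb;

                        slots[i].skb = NULL;    /* one status per frame */
                        return skb;
                }
        }
        return NULL;    /* untracked frame, e.g. MT_PACKET_ID_NO_ACK */
}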
+EXPORT_SYMBOL_GPL(mt76x02_tx_complete_skb); -void mt76x02_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue *q, - struct mt76_queue_entry *e, bool flush) +void mt76x02_mac_set_tx_protection(struct mt76x02_dev *dev, u32 val) +{ + u32 data = 0; + + if (val != ~0) + data = FIELD_PREP(MT_PROT_CFG_CTRL, 1) | + MT_PROT_CFG_RTS_THRESH; + + mt76_rmw_field(dev, MT_TX_RTS_CFG, MT_TX_RTS_CFG_THRESH, val); + + mt76_rmw(dev, MT_CCK_PROT_CFG, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_OFDM_PROT_CFG, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_MM20_PROT_CFG, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_MM40_PROT_CFG, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_GF20_PROT_CFG, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_GF40_PROT_CFG, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_TX_PROT_CFG6, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_TX_PROT_CFG7, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); + mt76_rmw(dev, MT_TX_PROT_CFG8, + MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); +} + +void mt76x02_update_channel(struct mt76_dev *mdev) { struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); + struct mt76_channel_state *state; + u32 active, busy; + + state = mt76_channel_state(&dev->mt76, dev->mt76.chandef.chan); + + busy = mt76_rr(dev, MT_CH_BUSY); + active = busy + mt76_rr(dev, MT_CH_IDLE); + + spin_lock_bh(&dev->mt76.cc_lock); + state->cc_busy += busy; + state->cc_active += active; + spin_unlock_bh(&dev->mt76.cc_lock); +} +EXPORT_SYMBOL_GPL(mt76x02_update_channel); + +static void mt76x02_check_mac_err(struct mt76x02_dev *dev) +{ + u32 val = mt76_rr(dev, 0x10f4); + + if (!(val & BIT(29)) || !(val & (BIT(7) | BIT(5)))) + return; + + dev_err(dev->mt76.dev, "mac specific condition occurred\n"); + + mt76_set(dev, MT_MAC_SYS_CTRL, MT_MAC_SYS_CTRL_RESET_CSR); + udelay(10); + mt76_clear(dev, MT_MAC_SYS_CTRL, + MT_MAC_SYS_CTRL_ENABLE_TX | MT_MAC_SYS_CTRL_ENABLE_RX); +} + +void mt76x02_mac_work(struct work_struct *work) +{ + struct mt76x02_dev *dev = container_of(work, struct mt76x02_dev, + mac_work.work); + int i, idx; + + mt76x02_update_channel(&dev->mt76); + for (i = 0, idx = 0; i < 16; i++) { + u32 val = mt76_rr(dev, MT_TX_AGG_CNT(i)); - if (e->txwi) - mt76x02_mac_queue_txdone(dev, e->skb, &e->txwi->txwi); + dev->aggr_stats[idx++] += val & 0xffff; + dev->aggr_stats[idx++] += val >> 16; + } + + /* XXX: check beacon stuck for ap mode */ + if (!dev->beacon_mask) + mt76x02_check_mac_err(dev); + + mt76_tx_status_check(&dev->mt76, NULL, false); + + ieee80211_queue_delayed_work(mt76_hw(dev), &dev->mac_work, + MT_CALIBRATE_INTERVAL); +} + +void mt76x02_mac_set_bssid(struct mt76x02_dev *dev, u8 idx, const u8 *addr) +{ + idx &= 7; + mt76_wr(dev, MT_MAC_APC_BSSID_L(idx), get_unaligned_le32(addr)); + mt76_rmw_field(dev, MT_MAC_APC_BSSID_H(idx), MT_MAC_APC_BSSID_H_ADDR, + get_unaligned_le16(addr + 4)); +} + +static int +mt76x02_write_beacon(struct mt76x02_dev *dev, int offset, struct sk_buff *skb) +{ + int beacon_len = mt76x02_beacon_offsets[1] - mt76x02_beacon_offsets[0]; + struct mt76x02_txwi txwi; + + if (WARN_ON_ONCE(beacon_len < skb->len + sizeof(struct mt76x02_txwi))) + return -ENOSPC; + + mt76x02_mac_write_txwi(dev, &txwi, skb, NULL, NULL, skb->len); + + mt76_wr_copy(dev, offset, &txwi, sizeof(txwi)); + offset += sizeof(txwi); + + mt76_wr_copy(dev, offset, skb->data, skb->len); + return 0; +} + +static 
int +__mt76x02_mac_set_beacon(struct mt76x02_dev *dev, u8 bcn_idx, + struct sk_buff *skb) +{ + int beacon_len = mt76x02_beacon_offsets[1] - mt76x02_beacon_offsets[0]; + int beacon_addr = mt76x02_beacon_offsets[bcn_idx]; + int ret = 0; + int i; + + /* Prevent corrupt transmissions during update */ + mt76_set(dev, MT_BCN_BYPASS_MASK, BIT(bcn_idx)); + + if (skb) { + ret = mt76x02_write_beacon(dev, beacon_addr, skb); + if (!ret) + dev->beacon_data_mask |= BIT(bcn_idx); + } else { + dev->beacon_data_mask &= ~BIT(bcn_idx); + for (i = 0; i < beacon_len; i += 4) + mt76_wr(dev, beacon_addr + i, 0); + } + + mt76_wr(dev, MT_BCN_BYPASS_MASK, 0xff00 | ~dev->beacon_data_mask); + + return ret; +} + +int mt76x02_mac_set_beacon(struct mt76x02_dev *dev, u8 vif_idx, + struct sk_buff *skb) +{ + bool force_update = false; + int bcn_idx = 0; + int i; + + for (i = 0; i < ARRAY_SIZE(dev->beacons); i++) { + if (vif_idx == i) { + force_update = !!dev->beacons[i] ^ !!skb; + + if (dev->beacons[i]) + dev_kfree_skb(dev->beacons[i]); + + dev->beacons[i] = skb; + __mt76x02_mac_set_beacon(dev, bcn_idx, skb); + } else if (force_update && dev->beacons[i]) { + __mt76x02_mac_set_beacon(dev, bcn_idx, + dev->beacons[i]); + } + + bcn_idx += !!dev->beacons[i]; + } + + for (i = bcn_idx; i < ARRAY_SIZE(dev->beacons); i++) { + if (!(dev->beacon_data_mask & BIT(i))) + break; + + __mt76x02_mac_set_beacon(dev, i, NULL); + } + + mt76_rmw_field(dev, MT_MAC_BSSID_DW1, MT_MAC_BSSID_DW1_MBEACON_N, + bcn_idx - 1); + return 0; +} + +void mt76x02_mac_set_beacon_enable(struct mt76x02_dev *dev, + u8 vif_idx, bool val) +{ + u8 old_mask = dev->beacon_mask; + bool en; + u32 reg; + + if (val) { + dev->beacon_mask |= BIT(vif_idx); + } else { + dev->beacon_mask &= ~BIT(vif_idx); + mt76x02_mac_set_beacon(dev, vif_idx, NULL); + } + + if (!!old_mask == !!dev->beacon_mask) + return; + + en = dev->beacon_mask; + + mt76_rmw_field(dev, MT_INT_TIMER_EN, MT_INT_TIMER_EN_PRE_TBTT_EN, en); + reg = MT_BEACON_TIME_CFG_BEACON_TX | + MT_BEACON_TIME_CFG_TBTT_EN | + MT_BEACON_TIME_CFG_TIMER_EN; + mt76_rmw(dev, MT_BEACON_TIME_CFG, reg, reg * en); + + if (en) + mt76x02_irq_enable(dev, MT_INT_PRE_TBTT | MT_INT_TBTT); else - dev_kfree_skb_any(e->skb); + mt76x02_irq_disable(dev, MT_INT_PRE_TBTT | MT_INT_TBTT); } -EXPORT_SYMBOL_GPL(mt76x02_tx_complete_skb); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h index d99c18743969..4e597004c445 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h @@ -37,18 +37,8 @@ struct mt76x02_tx_status { #define MT_MAX_VIFS 8 struct mt76x02_vif { + struct mt76_wcid group_wcid; /* must be first */ u8 idx; - - struct mt76_wcid group_wcid; -}; - -struct mt76x02_tx_info { - unsigned long jiffies; - u8 tries; - - u8 wcid; - u8 pktid; - u8 retry; }; DECLARE_EWMA(signal, 10, 8); @@ -153,8 +143,6 @@ enum mt76x2_phy_bandwidth { #define MT_TXWI_ACK_CTL_NSEQ BIT(1) #define MT_TXWI_ACK_CTL_BA_WINDOW GENMASK(7, 2) -#define MT_TXWI_PKTID_PROBE BIT(7) - struct mt76x02_txwi { __le16 flags; __le16 rate; @@ -190,18 +178,7 @@ static inline bool mt76x02_wait_for_mac(struct mt76_dev *dev) return false; } -static inline struct mt76x02_tx_info * -mt76x02_skb_tx_info(struct sk_buff *skb) -{ - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); - - return (void *)info->status.status_driver_data; -} - -void mt76x02_txq_init(struct mt76x02_dev *dev, struct ieee80211_txq *txq); -enum mt76x02_cipher_type -mt76x02_mac_get_key_info(struct 
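/*
 * Aside: mt76x02_mac_set_beacon() above keeps the hardware beacon slots
 * dense -- bcn_idx only advances for interfaces that currently own a
 * beacon, so slot k always carries the k-th active vif and
 * MT_MAC_BSSID_DW1_MBEACON_N is programmed from the resulting count.
 * Standalone sketch of that compaction step (illustrative; it omits the
 * MT_BCN_BYPASS_MASK bit the real code sets while a slot is rewritten to
 * prevent corrupt transmissions):
 */
static int compact_beacon_slots(struct sk_buff *beacons[], int n,
                                struct sk_buff *slots[])
{
        int i, bcn_idx = 0;

        for (i = 0; i < n; i++) {
                if (!beacons[i])
                        continue;               /* vif without a beacon */
                slots[bcn_idx++] = beacons[i];  /* next free hw slot */
        }
        return bcn_idx;                         /* occupied slot count */
}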
ieee80211_key_conf *key, u8 *key_data); - +void mt76x02_mac_set_short_preamble(struct mt76x02_dev *dev, bool enable); int mt76x02_mac_shared_key_setup(struct mt76x02_dev *dev, u8 vif_idx, u8 key_idx, struct ieee80211_key_conf *key); int mt76x02_mac_wcid_set_key(struct mt76x02_dev *dev, u8 idx, @@ -217,8 +194,7 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev, struct mt76x02_tx_status *stat, u8 *update); int mt76x02_mac_process_rx(struct mt76x02_dev *dev, struct sk_buff *skb, void *rxi); -int -mt76x02_mac_process_rate(struct mt76_rx_status *status, u16 rate); +void mt76x02_mac_set_tx_protection(struct mt76x02_dev *dev, u32 val); void mt76x02_mac_setaddr(struct mt76x02_dev *dev, u8 *addr); void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi, struct sk_buff *skb, struct mt76_wcid *wcid, @@ -226,4 +202,12 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi, void mt76x02_mac_poll_tx_status(struct mt76x02_dev *dev, bool irq); void mt76x02_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue *q, struct mt76_queue_entry *e, bool flush); +void mt76x02_update_channel(struct mt76_dev *mdev); +void mt76x02_mac_work(struct work_struct *work); + +void mt76x02_mac_set_bssid(struct mt76x02_dev *dev, u8 idx, const u8 *addr); +int mt76x02_mac_set_beacon(struct mt76x02_dev *dev, u8 vif_idx, + struct sk_buff *skb); +void mt76x02_mac_set_beacon_enable(struct mt76x02_dev *dev, u8 vif_idx, + bool val); #endif diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c index 1b853bb723fb..b7f4edb729e3 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c @@ -21,7 +21,7 @@ #include "mt76x02_mcu.h" -struct sk_buff *mt76x02_mcu_msg_alloc(const void *data, int len) +static struct sk_buff *mt76x02_mcu_msg_alloc(const void *data, int len) { struct sk_buff *skb; @@ -32,7 +32,6 @@ struct sk_buff *mt76x02_mcu_msg_alloc(const void *data, int len) return skb; } -EXPORT_SYMBOL_GPL(mt76x02_mcu_msg_alloc); static struct sk_buff * mt76x02_mcu_get_response(struct mt76x02_dev *dev, unsigned long expires) @@ -80,16 +79,18 @@ mt76x02_tx_queue_mcu(struct mt76x02_dev *dev, enum mt76_txq_id qid, return 0; } -int mt76x02_mcu_msg_send(struct mt76_dev *mdev, struct sk_buff *skb, - int cmd, bool wait_resp) +int mt76x02_mcu_msg_send(struct mt76_dev *mdev, int cmd, const void *data, + int len, bool wait_resp) { struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); unsigned long expires = jiffies + HZ; + struct sk_buff *skb; int ret; u8 seq; + skb = mt76x02_mcu_msg_alloc(data, len); if (!skb) - return -EINVAL; + return -ENOMEM; mutex_lock(&mdev->mmio.mcu.mutex); @@ -131,11 +132,9 @@ out: } EXPORT_SYMBOL_GPL(mt76x02_mcu_msg_send); -int mt76x02_mcu_function_select(struct mt76x02_dev *dev, - enum mcu_function func, - u32 val, bool wait_resp) +int mt76x02_mcu_function_select(struct mt76x02_dev *dev, enum mcu_function func, + u32 val) { - struct sk_buff *skb; struct { __le32 id; __le32 value; @@ -143,16 +142,17 @@ int mt76x02_mcu_function_select(struct mt76x02_dev *dev, .id = cpu_to_le32(func), .value = cpu_to_le32(val), }; + bool wait = false; - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - return mt76_mcu_send_msg(dev, skb, CMD_FUN_SET_OP, wait_resp); + if (func != Q_SELECT) + wait = true; + + return mt76_mcu_send_msg(dev, CMD_FUN_SET_OP, &msg, sizeof(msg), wait); } EXPORT_SYMBOL_GPL(mt76x02_mcu_function_select); -int 
mt76x02_mcu_set_radio_state(struct mt76x02_dev *dev, bool on, - bool wait_resp) +int mt76x02_mcu_set_radio_state(struct mt76x02_dev *dev, bool on) { - struct sk_buff *skb; struct { __le32 mode; __le32 level; @@ -161,15 +161,12 @@ int mt76x02_mcu_set_radio_state(struct mt76x02_dev *dev, bool on, .level = cpu_to_le32(0), }; - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - return mt76_mcu_send_msg(dev, skb, CMD_POWER_SAVING_OP, wait_resp); + return mt76_mcu_send_msg(dev, CMD_POWER_SAVING_OP, &msg, sizeof(msg), false); } EXPORT_SYMBOL_GPL(mt76x02_mcu_set_radio_state); -int mt76x02_mcu_calibrate(struct mt76x02_dev *dev, int type, - u32 param, bool wait) +int mt76x02_mcu_calibrate(struct mt76x02_dev *dev, int type, u32 param) { - struct sk_buff *skb; struct { __le32 id; __le32 value; @@ -177,17 +174,18 @@ int mt76x02_mcu_calibrate(struct mt76x02_dev *dev, int type, .id = cpu_to_le32(type), .value = cpu_to_le32(param), }; + bool is_mt76x2e = mt76_is_mmio(dev) && is_mt76x2(dev); int ret; - if (wait) + if (is_mt76x2e) mt76_rmw(dev, MT_MCU_COM_REG0, BIT(31), 0); - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - ret = mt76_mcu_send_msg(dev, skb, CMD_CALIBRATION_OP, true); + ret = mt76_mcu_send_msg(dev, CMD_CALIBRATION_OP, &msg, sizeof(msg), + true); if (ret) return ret; - if (wait && + if (is_mt76x2e && WARN_ON(!mt76_poll_msec(dev, MT_MCU_COM_REG0, BIT(31), BIT(31), 100))) return -ETIMEDOUT; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.h index 2d8fd2514570..7e4004120102 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.h @@ -97,16 +97,12 @@ struct mt76x02_patch_header { }; int mt76x02_mcu_cleanup(struct mt76x02_dev *dev); -int mt76x02_mcu_calibrate(struct mt76x02_dev *dev, int type, - u32 param, bool wait); -struct sk_buff *mt76x02_mcu_msg_alloc(const void *data, int len); -int mt76x02_mcu_msg_send(struct mt76_dev *mdev, struct sk_buff *skb, - int cmd, bool wait_resp); -int mt76x02_mcu_function_select(struct mt76x02_dev *dev, - enum mcu_function func, - u32 val, bool wait_resp); -int mt76x02_mcu_set_radio_state(struct mt76x02_dev *dev, bool on, - bool wait_resp); +int mt76x02_mcu_calibrate(struct mt76x02_dev *dev, int type, u32 param); +int mt76x02_mcu_msg_send(struct mt76_dev *mdev, int cmd, const void *data, + int len, bool wait_resp); +int mt76x02_mcu_function_select(struct mt76x02_dev *dev, enum mcu_function func, + u32 val); +int mt76x02_mcu_set_radio_state(struct mt76x02_dev *dev, bool on); void mt76x02_set_ethtool_fwver(struct mt76x02_dev *dev, const struct mt76x02_fw_header *h); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c index 39f092034240..66315410aebe 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c @@ -21,6 +21,130 @@ #include "mt76x02.h" #include "mt76x02_trace.h" +struct beacon_bc_data { + struct mt76x02_dev *dev; + struct sk_buff_head q; + struct sk_buff *tail[8]; +}; + +static void +mt76x02_update_beacon_iter(void *priv, u8 *mac, struct ieee80211_vif *vif) +{ + struct mt76x02_dev *dev = (struct mt76x02_dev *)priv; + struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv; + struct sk_buff *skb = NULL; + + if (!(dev->beacon_mask & BIT(mvif->idx))) + return; + + skb = ieee80211_beacon_get(mt76_hw(dev), vif); + if (!skb) + return; + + mt76x02_mac_set_beacon(dev, mvif->idx, skb); +} + +static void 
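/*
 * Aside: mt76x02_mcu_calibrate() above makes the PCIe (mt76x2e)
 * calibration handshake explicit: the host arms a completion flag, posts
 * the command, and the firmware acknowledges by setting the flag again.
 * Condensed restatement using the same register calls as the hunk (the
 * send step is elided; USB devices and mt76x0 skip the handshake):
 */
static int mcu_calibrate_sketch(struct mt76x02_dev *dev)
{
        /* host clears bit 31 of MT_MCU_COM_REG0 before posting */
        mt76_rmw(dev, MT_MCU_COM_REG0, BIT(31), 0);

        /* ... post CMD_CALIBRATION_OP carrying { type, param } ... */

        /* firmware reports completion by setting bit 31 within 100ms */
        if (!mt76_poll_msec(dev, MT_MCU_COM_REG0, BIT(31), BIT(31), 100))
                return -ETIMEDOUT;
        return 0;
}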
+mt76x02_add_buffered_bc(void *priv, u8 *mac, struct ieee80211_vif *vif) +{ + struct beacon_bc_data *data = priv; + struct mt76x02_dev *dev = data->dev; + struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv; + struct ieee80211_tx_info *info; + struct sk_buff *skb; + + if (!(dev->beacon_mask & BIT(mvif->idx))) + return; + + skb = ieee80211_get_buffered_bc(mt76_hw(dev), vif); + if (!skb) + return; + + info = IEEE80211_SKB_CB(skb); + info->control.vif = vif; + info->flags |= IEEE80211_TX_CTL_ASSIGN_SEQ; + mt76_skb_set_moredata(skb, true); + __skb_queue_tail(&data->q, skb); + data->tail[mvif->idx] = skb; +} + +static void +mt76x02_resync_beacon_timer(struct mt76x02_dev *dev) +{ + u32 timer_val = dev->beacon_int << 4; + + dev->tbtt_count++; + + /* + * Beacon timer drifts by 1us every tick, the timer is configured + * in 1/16 TU (64us) units. + */ + if (dev->tbtt_count < 62) + return; + + if (dev->tbtt_count >= 64) { + dev->tbtt_count = 0; + return; + } + + /* + * The updated beacon interval takes effect after two TBTT, because + * at this point the original interval has already been loaded into + * the next TBTT_TIMER value + */ + if (dev->tbtt_count == 62) + timer_val -= 1; + + mt76_rmw_field(dev, MT_BEACON_TIME_CFG, + MT_BEACON_TIME_CFG_INTVAL, timer_val); +} + +static void mt76x02_pre_tbtt_tasklet(unsigned long arg) +{ + struct mt76x02_dev *dev = (struct mt76x02_dev *)arg; + struct mt76_queue *q = &dev->mt76.q_tx[MT_TXQ_PSD]; + struct beacon_bc_data data = {}; + struct sk_buff *skb; + int i, nframes; + + mt76x02_resync_beacon_timer(dev); + + data.dev = dev; + __skb_queue_head_init(&data.q); + + ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev), + IEEE80211_IFACE_ITER_RESUME_ALL, + mt76x02_update_beacon_iter, dev); + + do { + nframes = skb_queue_len(&data.q); + ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev), + IEEE80211_IFACE_ITER_RESUME_ALL, + mt76x02_add_buffered_bc, &data); + } while (nframes != skb_queue_len(&data.q)); + + if (!nframes) + return; + + for (i = 0; i < ARRAY_SIZE(data.tail); i++) { + if (!data.tail[i]) + continue; + + mt76_skb_set_moredata(data.tail[i], false); + } + + spin_lock_bh(&q->lock); + while ((skb = __skb_dequeue(&data.q)) != NULL) { + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + struct ieee80211_vif *vif = info->control.vif; + struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv; + + mt76_dma_tx_queue_skb(&dev->mt76, q, skb, &mvif->group_wcid, + NULL); + } + spin_unlock_bh(&q->lock); +} + static int mt76x02_init_tx_queue(struct mt76x02_dev *dev, struct mt76_queue *q, int idx, int n_desc) @@ -98,6 +222,9 @@ int mt76x02_dma_init(struct mt76x02_dev *dev) return -ENOMEM; tasklet_init(&dev->tx_tasklet, mt76x02_tx_tasklet, (unsigned long) dev); + tasklet_init(&dev->pre_tbtt_tasklet, mt76x02_pre_tbtt_tasklet, + (unsigned long)dev); + kfifo_init(&dev->txstatus_fifo, status_fifo, fifo_size); mt76_dma_attach(&dev->mt76); @@ -225,7 +352,6 @@ static void mt76x02_dma_enable(struct mt76x02_dev *dev) mt76_clear(dev, MT_WPDMA_GLO_CFG, MT_WPDMA_GLO_CFG_TX_WRITEBACK_DONE); } -EXPORT_SYMBOL_GPL(mt76x02_dma_enable); void mt76x02_dma_cleanup(struct mt76x02_dev *dev) { diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_phy.c b/drivers/net/wireless/mediatek/mt76/mt76x02_phy.c index 0f1d7b5c9f68..977a8e7e26df 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_phy.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_phy.c @@ -254,5 +254,6 @@ void mt76x02_init_agc_gain(struct mt76x02_dev *dev) memcpy(dev->cal.agc_gain_cur, 
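/*
 * Aside: worked numbers for mt76x02_resync_beacon_timer() above. The
 * timer is programmed in 1/16 TU (64us) units and drifts by about 1us
 * per beacon, so the accumulated error reaches one full unit every 64
 * beacons. The function therefore shortens the programmed interval by
 * one unit at tbtt_count == 62 -- per the in-code comment, a new value
 * is loaded one TBTT late, so the correction lands two beacons on --
 * and writes the nominal value back on the following pass. Hypothetical
 * helper restating that arithmetic:
 */
static u32 resync_timer_val(u32 beacon_int_tu, u32 tbtt_count)
{
        u32 timer_val = beacon_int_tu << 4;     /* TU -> 1/16 TU units */

        if (tbtt_count == 62)
                timer_val -= 1; /* 64 beacons x 1us drift = one 64us unit */
        return timer_val;
}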
dev->cal.agc_gain_init, sizeof(dev->cal.agc_gain_cur)); dev->cal.low_gain = -1; + dev->cal.gain_init_done = true; } EXPORT_SYMBOL_GPL(mt76x02_init_agc_gain); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c b/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c index d3de08872d6e..4598cb2cc3ff 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c @@ -22,6 +22,7 @@ void mt76x02_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control, struct sk_buff *skb) { + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); struct mt76x02_dev *dev = hw->priv; struct ieee80211_vif *vif = info->control.vif; @@ -33,7 +34,8 @@ void mt76x02_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control, msta = (struct mt76x02_sta *)control->sta->drv_priv; wcid = &msta->wcid; /* sw encrypted frames */ - if (!info->control.hw_key && wcid->hw_key_idx != 0xff) + if (!info->control.hw_key && wcid->hw_key_idx != 0xff && + ieee80211_has_protected(hdr->frame_control)) control->sta = NULL; } @@ -110,7 +112,6 @@ s8 mt76x02_tx_get_max_txpwr_adj(struct mt76x02_dev *dev, return max_txpwr; } -EXPORT_SYMBOL_GPL(mt76x02_tx_get_max_txpwr_adj); s8 mt76x02_tx_get_txpwr_adj(struct mt76x02_dev *dev, s8 txpwr, s8 max_txpwr_adj) { @@ -125,7 +126,6 @@ s8 mt76x02_tx_get_txpwr_adj(struct mt76x02_dev *dev, s8 txpwr, s8 max_txpwr_adj) else return (txpwr < -16) ? 8 : (txpwr + 32) / 2; } -EXPORT_SYMBOL_GPL(mt76x02_tx_get_txpwr_adj); void mt76x02_tx_set_txpwr_auto(struct mt76x02_dev *dev, s8 txpwr) { @@ -140,21 +140,6 @@ void mt76x02_tx_set_txpwr_auto(struct mt76x02_dev *dev, s8 txpwr) } EXPORT_SYMBOL_GPL(mt76x02_tx_set_txpwr_auto); -void mt76x02_tx_complete(struct mt76_dev *dev, struct sk_buff *skb) -{ - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); - - if (info->flags & IEEE80211_TX_CTL_AMPDU) { - ieee80211_free_txskb(dev->hw, skb); - } else { - ieee80211_tx_info_clear_status(info); - info->status.rates[0].idx = -1; - info->flags |= IEEE80211_TX_STAT_ACK; - ieee80211_tx_status(dev->hw, skb); - } -} -EXPORT_SYMBOL_GPL(mt76x02_tx_complete); - bool mt76x02_tx_status_data(struct mt76_dev *mdev, u8 *update) { struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); @@ -169,14 +154,15 @@ bool mt76x02_tx_status_data(struct mt76_dev *mdev, u8 *update) } EXPORT_SYMBOL_GPL(mt76x02_tx_status_data); -int mt76x02_tx_prepare_skb(struct mt76_dev *mdev, void *txwi, +int mt76x02_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr, struct sk_buff *skb, struct mt76_queue *q, struct mt76_wcid *wcid, struct ieee80211_sta *sta, u32 *tx_info) { struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + struct mt76x02_txwi *txwi = txwi_ptr; int qsel = MT_QSEL_EDCA; + int pid; int ret; if (q == &dev->mt76.q_tx[MT_TXQ_PSD] && wcid && wcid->idx < 128) @@ -184,11 +170,14 @@ int mt76x02_tx_prepare_skb(struct mt76_dev *mdev, void *txwi, mt76x02_mac_write_txwi(dev, txwi, skb, wcid, sta, skb->len); + pid = mt76_tx_status_skb_add(mdev, wcid, skb); + txwi->pktid = pid; + ret = mt76x02_insert_hdr_pad(skb); if (ret < 0) return ret; - if (info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) + if (pid && pid != MT_PACKET_ID_NO_ACK) qsel = MT_QSEL_MGMT; *tx_info = FIELD_PREP(MT_TXD_INFO_QSEL, qsel) | diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c index 
dc2226c722dd..81970cf777c0 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c @@ -30,7 +30,7 @@ void mt76x02u_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue *q, struct mt76_queue_entry *e, bool flush) { mt76x02u_remove_dma_hdr(e->skb); - mt76x02_tx_complete(mdev, e->skb); + mt76_tx_complete_skb(mdev, e->skb); } EXPORT_SYMBOL_GPL(mt76x02u_tx_complete_skb); @@ -67,27 +67,6 @@ int mt76x02u_skb_dma_info(struct sk_buff *skb, int port, u32 flags) return 0; } -static int -mt76x02u_set_txinfo(struct sk_buff *skb, struct mt76_wcid *wcid, u8 ep) -{ - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); - enum mt76_qsel qsel; - u32 flags; - - if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) || - ep == MT_EP_OUT_HCCA) - qsel = MT_QSEL_MGMT; - else - qsel = MT_QSEL_EDCA; - - flags = FIELD_PREP(MT_TXD_INFO_QSEL, qsel) | - MT_TXD_INFO_80211; - if (!wcid || wcid->hw_key_idx == 0xff || wcid->sw_iv) - flags |= MT_TXD_INFO_WIV; - - return mt76x02u_skb_dma_info(skb, WLAN_PORT, flags); -} - int mt76x02u_tx_prepare_skb(struct mt76_dev *mdev, void *data, struct sk_buff *skb, struct mt76_queue *q, struct mt76_wcid *wcid, struct ieee80211_sta *sta, @@ -95,13 +74,30 @@ int mt76x02u_tx_prepare_skb(struct mt76_dev *mdev, void *data, { struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); struct mt76x02_txwi *txwi; + enum mt76_qsel qsel; int len = skb->len; + u32 flags; + int pid; mt76x02_insert_hdr_pad(skb); txwi = skb_push(skb, sizeof(struct mt76x02_txwi)); mt76x02_mac_write_txwi(dev, txwi, skb, wcid, sta, len); - return mt76x02u_set_txinfo(skb, wcid, q2ep(q->hw_idx)); + pid = mt76_tx_status_skb_add(mdev, wcid, skb); + txwi->pktid = pid; + + if ((pid && pid != MT_PACKET_ID_NO_ACK) || + q2ep(q->hw_idx) == MT_EP_OUT_HCCA) + qsel = MT_QSEL_MGMT; + else + qsel = MT_QSEL_EDCA; + + flags = FIELD_PREP(MT_TXD_INFO_QSEL, qsel) | + MT_TXD_INFO_80211; + if (!wcid || wcid->hw_key_idx == 0xff || wcid->sw_iv) + flags |= MT_TXD_INFO_WIV; + + return mt76x02u_skb_dma_info(skb, WLAN_PORT, flags); } EXPORT_SYMBOL_GPL(mt76x02u_tx_prepare_skb); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c index da299b8a1334..6db789f90269 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c @@ -129,9 +129,6 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb, u8 seq = 0; u32 info; - if (!skb) - return -EINVAL; - if (test_bit(MT76_REMOVED, &dev->state)) return 0; @@ -162,12 +159,17 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb, } static int -mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb, - int cmd, bool wait_resp) +mt76x02u_mcu_send_msg(struct mt76_dev *dev, int cmd, const void *data, + int len, bool wait_resp) { struct mt76_usb *usb = &dev->usb; + struct sk_buff *skb; int err; + skb = mt76x02u_mcu_msg_alloc(data, len); + if (!skb) + return -ENOMEM; + mutex_lock(&usb->mcu.mutex); err = __mt76x02u_mcu_send_msg(dev, skb, cmd, wait_resp); mutex_unlock(&usb->mcu.mutex); @@ -186,6 +188,7 @@ mt76x02u_mcu_wr_rp(struct mt76_dev *dev, u32 base, { const int CMD_RANDOM_WRITE = 12; const int max_vals_per_cmd = MT_INBAND_PACKET_MAX_LEN / 8; + struct mt76_usb *usb = &dev->usb; struct sk_buff *skb; int cnt, i, ret; @@ -204,7 +207,9 @@ mt76x02u_mcu_wr_rp(struct mt76_dev *dev, u32 base, skb_put_le32(skb, data[i].value); } - ret = mt76x02u_mcu_send_msg(dev, skb, 
CMD_RANDOM_WRITE, cnt == n); + mutex_lock(&usb->mcu.mutex); + ret = __mt76x02u_mcu_send_msg(dev, skb, CMD_RANDOM_WRITE, cnt == n); + mutex_unlock(&usb->mcu.mutex); if (ret) return ret; @@ -345,7 +350,6 @@ EXPORT_SYMBOL_GPL(mt76x02u_mcu_fw_send_data); void mt76x02u_init_mcu(struct mt76_dev *dev) { static const struct mt76_mcu_ops mt76x02u_mcu_ops = { - .mcu_msg_alloc = mt76x02u_mcu_msg_alloc, .mcu_send_msg = mt76x02u_mcu_send_msg, .mcu_wr_rp = mt76x02u_mcu_wr_rp, .mcu_rd_rp = mt76x02u_mcu_rd_rp, diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c index ca05332f81fc..38bd466cff16 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c @@ -47,6 +47,92 @@ struct ieee80211_rate mt76x02_rates[] = { }; EXPORT_SYMBOL_GPL(mt76x02_rates); +static const struct ieee80211_iface_limit mt76x02_if_limits[] = { + { + .max = 1, + .types = BIT(NL80211_IFTYPE_ADHOC) + }, { + .max = 8, + .types = BIT(NL80211_IFTYPE_STATION) | +#ifdef CONFIG_MAC80211_MESH + BIT(NL80211_IFTYPE_MESH_POINT) | +#endif + BIT(NL80211_IFTYPE_AP) + }, +}; + +static const struct ieee80211_iface_combination mt76x02_if_comb[] = { + { + .limits = mt76x02_if_limits, + .n_limits = ARRAY_SIZE(mt76x02_if_limits), + .max_interfaces = 8, + .num_different_channels = 1, + .beacon_int_infra_match = true, + .radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) | + BIT(NL80211_CHAN_WIDTH_20) | + BIT(NL80211_CHAN_WIDTH_40) | + BIT(NL80211_CHAN_WIDTH_80), + } +}; + +void mt76x02_init_device(struct mt76x02_dev *dev) +{ + struct ieee80211_hw *hw = mt76_hw(dev); + struct wiphy *wiphy = hw->wiphy; + + INIT_DELAYED_WORK(&dev->mac_work, mt76x02_mac_work); + + hw->queues = 4; + hw->max_rates = 1; + hw->max_report_rates = 7; + hw->max_rate_tries = 1; + hw->extra_tx_headroom = 2; + + if (mt76_is_usb(dev)) { + hw->extra_tx_headroom += sizeof(struct mt76x02_txwi) + + MT_DMA_HDR_LEN; + wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION); + } else { + mt76x02_dfs_init_detector(dev); + + wiphy->reg_notifier = mt76x02_regd_notifier; + wiphy->iface_combinations = mt76x02_if_comb; + wiphy->n_iface_combinations = ARRAY_SIZE(mt76x02_if_comb); + wiphy->interface_modes = + BIT(NL80211_IFTYPE_STATION) | + BIT(NL80211_IFTYPE_AP) | +#ifdef CONFIG_MAC80211_MESH + BIT(NL80211_IFTYPE_MESH_POINT) | +#endif + BIT(NL80211_IFTYPE_ADHOC); + + wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_VHT_IBSS); + } + + hw->sta_data_size = sizeof(struct mt76x02_sta); + hw->vif_data_size = sizeof(struct mt76x02_vif); + + ieee80211_hw_set(hw, SUPPORTS_HT_CCK_RATES); + ieee80211_hw_set(hw, SUPPORTS_REORDERING_BUFFER); + + dev->mt76.global_wcid.idx = 255; + dev->mt76.global_wcid.hw_key_idx = -1; + dev->slottime = 9; + + if (is_mt76x2(dev)) { + dev->mt76.sband_2g.sband.ht_cap.cap |= + IEEE80211_HT_CAP_LDPC_CODING; + dev->mt76.sband_5g.sband.ht_cap.cap |= + IEEE80211_HT_CAP_LDPC_CODING; + dev->mt76.chainmask = 0x202; + dev->mt76.antenna_mask = 3; + } else { + dev->mt76.chainmask = 0x101; + dev->mt76.antenna_mask = 1; + } +} +EXPORT_SYMBOL_GPL(mt76x02_init_device); + void mt76x02_configure_filter(struct ieee80211_hw *hw, unsigned int changed_flags, unsigned int *total_flags, u64 multicast) @@ -81,23 +167,17 @@ void mt76x02_configure_filter(struct ieee80211_hw *hw, } EXPORT_SYMBOL_GPL(mt76x02_configure_filter); -int mt76x02_sta_add(struct ieee80211_hw *hw, struct ieee80211_vif *vif, +int mt76x02_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif, struct ieee80211_sta 
*sta) { - struct mt76x02_dev *dev = hw->priv; + struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); struct mt76x02_sta *msta = (struct mt76x02_sta *)sta->drv_priv; struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv; - int ret = 0; int idx = 0; - int i; - - mutex_lock(&dev->mt76.mutex); idx = mt76_wcid_alloc(dev->mt76.wcid_mask, ARRAY_SIZE(dev->mt76.wcid)); - if (idx < 0) { - ret = -ENOSPC; - goto out; - } + if (idx < 0) + return -ENOSPC; msta->vif = mvif; msta->wcid.sta = 1; @@ -105,41 +185,25 @@ int mt76x02_sta_add(struct ieee80211_hw *hw, struct ieee80211_vif *vif, msta->wcid.hw_key_idx = -1; mt76x02_mac_wcid_setup(dev, idx, mvif->idx, sta->addr); mt76x02_mac_wcid_set_drop(dev, idx, false); - for (i = 0; i < ARRAY_SIZE(sta->txq); i++) - mt76x02_txq_init(dev, sta->txq[i]); if (vif->type == NL80211_IFTYPE_AP) set_bit(MT_WCID_FLAG_CHECK_PS, &msta->wcid.flags); ewma_signal_init(&msta->rssi); - rcu_assign_pointer(dev->mt76.wcid[idx], &msta->wcid); - -out: - mutex_unlock(&dev->mt76.mutex); - - return ret; + return 0; } EXPORT_SYMBOL_GPL(mt76x02_sta_add); -int mt76x02_sta_remove(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - struct ieee80211_sta *sta) +void mt76x02_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif, + struct ieee80211_sta *sta) { - struct mt76x02_dev *dev = hw->priv; - struct mt76x02_sta *msta = (struct mt76x02_sta *)sta->drv_priv; - int idx = msta->wcid.idx; - int i; + struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); + struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv; + int idx = wcid->idx; - mutex_lock(&dev->mt76.mutex); - rcu_assign_pointer(dev->mt76.wcid[idx], NULL); - for (i = 0; i < ARRAY_SIZE(sta->txq); i++) - mt76_txq_remove(&dev->mt76, sta->txq[i]); mt76x02_mac_wcid_set_drop(dev, idx, true); - mt76_wcid_free(dev->mt76.wcid_mask, idx); mt76x02_mac_wcid_setup(dev, idx, 0, NULL); - mutex_unlock(&dev->mt76.mutex); - - return 0; } EXPORT_SYMBOL_GPL(mt76x02_sta_remove); @@ -147,11 +211,15 @@ void mt76x02_vif_init(struct mt76x02_dev *dev, struct ieee80211_vif *vif, unsigned int idx) { struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv; + struct mt76_txq *mtxq; mvif->idx = idx; mvif->group_wcid.idx = MT_VIF_WCID(idx); mvif->group_wcid.hw_key_idx = -1; - mt76x02_txq_init(dev, vif->txq); + mtxq = (struct mt76_txq *) vif->txq->drv_priv; + mtxq->wcid = &mvif->group_wcid; + + mt76_txq_init(&dev->mt76, vif->txq); } EXPORT_SYMBOL_GPL(mt76x02_vif_init); @@ -357,6 +425,51 @@ int mt76x02_conf_tx(struct ieee80211_hw *hw, struct ieee80211_vif *vif, } EXPORT_SYMBOL_GPL(mt76x02_conf_tx); +void mt76x02_set_tx_ackto(struct mt76x02_dev *dev) +{ + u8 ackto, sifs, slottime = dev->slottime; + + /* As defined by IEEE 802.11-2007 17.3.8.6 */ + slottime += 3 * dev->coverage_class; + mt76_rmw_field(dev, MT_BKOFF_SLOT_CFG, + MT_BKOFF_SLOT_CFG_SLOTTIME, slottime); + + sifs = mt76_get_field(dev, MT_XIFS_TIME_CFG, + MT_XIFS_TIME_CFG_OFDM_SIFS); + + ackto = slottime + sifs; + mt76_rmw_field(dev, MT_TX_TIMEOUT_CFG, + MT_TX_TIMEOUT_CFG_ACKTO, ackto); +} +EXPORT_SYMBOL_GPL(mt76x02_set_tx_ackto); + +void mt76x02_set_coverage_class(struct ieee80211_hw *hw, + s16 coverage_class) +{ + struct mt76x02_dev *dev = hw->priv; + + mutex_lock(&dev->mt76.mutex); + dev->coverage_class = coverage_class; + mt76x02_set_tx_ackto(dev); + mutex_unlock(&dev->mt76.mutex); +} +EXPORT_SYMBOL_GPL(mt76x02_set_coverage_class); + +int mt76x02_set_rts_threshold(struct ieee80211_hw *hw, u32 val) +{ + struct mt76x02_dev *dev = hw->priv; + + if 
(val != ~0 && val > 0xffff) + return -EINVAL; + + mutex_lock(&dev->mt76.mutex); + mt76x02_mac_set_tx_protection(dev, val); + mutex_unlock(&dev->mt76.mutex); + + return 0; +} +EXPORT_SYMBOL_GPL(mt76x02_set_rts_threshold); + void mt76x02_sta_rate_tbl_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_sta *sta) @@ -405,6 +518,64 @@ void mt76x02_remove_hdr_pad(struct sk_buff *skb, int len) } EXPORT_SYMBOL_GPL(mt76x02_remove_hdr_pad); +void mt76x02_sw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + const u8 *mac) +{ + struct mt76x02_dev *dev = hw->priv; + + if (mt76_is_mmio(dev)) + tasklet_disable(&dev->pre_tbtt_tasklet); + set_bit(MT76_SCANNING, &dev->mt76.state); +} +EXPORT_SYMBOL_GPL(mt76x02_sw_scan); + +void mt76x02_sw_scan_complete(struct ieee80211_hw *hw, + struct ieee80211_vif *vif) +{ + struct mt76x02_dev *dev = hw->priv; + + clear_bit(MT76_SCANNING, &dev->mt76.state); + if (mt76_is_mmio(dev)) + tasklet_enable(&dev->pre_tbtt_tasklet); + + if (dev->cal.gain_init_done) { + /* Restore AGC gain and resume calibration after scanning. */ + dev->cal.low_gain = -1; + ieee80211_queue_delayed_work(hw, &dev->cal_work, 0); + } +} +EXPORT_SYMBOL_GPL(mt76x02_sw_scan_complete); + +int mt76x02_get_txpower(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, int *dbm) +{ + struct mt76x02_dev *dev = hw->priv; + u8 nstreams = dev->mt76.chainmask & 0xf; + + *dbm = dev->mt76.txpower_cur / 2; + + /* convert from per-chain power to combined + * output on 2x2 devices + */ + if (nstreams > 1) + *dbm += 3; + + return 0; +} +EXPORT_SYMBOL_GPL(mt76x02_get_txpower); + +void mt76x02_sta_ps(struct mt76_dev *mdev, struct ieee80211_sta *sta, + bool ps) +{ + struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); + struct mt76x02_sta *msta = (struct mt76x02_sta *)sta->drv_priv; + int idx = msta->wcid.idx; + + mt76_stop_tx_queues(&dev->mt76, sta, true); + mt76x02_mac_wcid_set_drop(dev, idx, ps); +} +EXPORT_SYMBOL_GPL(mt76x02_sta_ps); + const u16 mt76x02_beacon_offsets[16] = { /* 1024 byte per beacon */ 0xc000, @@ -425,9 +596,8 @@ const u16 mt76x02_beacon_offsets[16] = { 0xc000, 0xc000, }; -EXPORT_SYMBOL_GPL(mt76x02_beacon_offsets); -void mt76x02_set_beacon_offsets(struct mt76x02_dev *dev) +static void mt76x02_set_beacon_offsets(struct mt76x02_dev *dev) { u16 val, base = MT_BEACON_BASE; u32 regs[4] = {}; @@ -441,6 +611,98 @@ void mt76x02_set_beacon_offsets(struct mt76x02_dev *dev) for (i = 0; i < 4; i++) mt76_wr(dev, MT_BCN_OFFSET(i), regs[i]); } -EXPORT_SYMBOL_GPL(mt76x02_set_beacon_offsets); + +void mt76x02_init_beacon_config(struct mt76x02_dev *dev) +{ + static const u8 null_addr[ETH_ALEN] = {}; + int i; + + mt76_wr(dev, MT_MAC_BSSID_DW0, + get_unaligned_le32(dev->mt76.macaddr)); + mt76_wr(dev, MT_MAC_BSSID_DW1, + get_unaligned_le16(dev->mt76.macaddr + 4) | + FIELD_PREP(MT_MAC_BSSID_DW1_MBSS_MODE, 3) | /* 8 beacons */ + MT_MAC_BSSID_DW1_MBSS_LOCAL_BIT); + + /* Fire a pre-TBTT interrupt 8 ms before TBTT */ + mt76_rmw_field(dev, MT_INT_TIMER_CFG, MT_INT_TIMER_CFG_PRE_TBTT, + 8 << 4); + mt76_rmw_field(dev, MT_INT_TIMER_CFG, MT_INT_TIMER_CFG_GP_TIMER, + MT_DFS_GP_INTERVAL); + mt76_wr(dev, MT_INT_TIMER_EN, 0); + + mt76_wr(dev, MT_BCN_BYPASS_MASK, 0xffff); + + for (i = 0; i < 8; i++) { + mt76x02_mac_set_bssid(dev, i, null_addr); + mt76x02_mac_set_beacon(dev, i, NULL); + } + mt76x02_set_beacon_offsets(dev); +} +EXPORT_SYMBOL_GPL(mt76x02_init_beacon_config); + +void mt76x02_bss_info_changed(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf 
*info, + u32 changed) +{ + struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv; + struct mt76x02_dev *dev = hw->priv; + + mutex_lock(&dev->mt76.mutex); + + if (changed & BSS_CHANGED_BSSID) + mt76x02_mac_set_bssid(dev, mvif->idx, info->bssid); + + if (changed & BSS_CHANGED_BEACON_ENABLED) { + tasklet_disable(&dev->pre_tbtt_tasklet); + mt76x02_mac_set_beacon_enable(dev, mvif->idx, + info->enable_beacon); + tasklet_enable(&dev->pre_tbtt_tasklet); + } + + if (changed & BSS_CHANGED_BEACON_INT) { + mt76_rmw_field(dev, MT_BEACON_TIME_CFG, + MT_BEACON_TIME_CFG_INTVAL, + info->beacon_int << 4); + dev->beacon_int = info->beacon_int; + dev->tbtt_count = 0; + } + + if (changed & BSS_CHANGED_ERP_PREAMBLE) + mt76x02_mac_set_short_preamble(dev, info->use_short_preamble); + + if (changed & BSS_CHANGED_ERP_SLOT) { + int slottime = info->use_short_slot ? 9 : 20; + + dev->slottime = slottime; + mt76x02_set_tx_ackto(dev); + } + + mutex_unlock(&dev->mt76.mutex); +} +EXPORT_SYMBOL_GPL(mt76x02_bss_info_changed); + +void mt76x02_config_mac_addr_list(struct mt76x02_dev *dev) +{ + struct ieee80211_hw *hw = mt76_hw(dev); + struct wiphy *wiphy = hw->wiphy; + int i; + + for (i = 0; i < ARRAY_SIZE(dev->macaddr_list); i++) { + u8 *addr = dev->macaddr_list[i].addr; + + memcpy(addr, dev->mt76.macaddr, ETH_ALEN); + + if (!i) + continue; + + addr[0] |= BIT(1); + addr[0] ^= ((i - 1) << 2); + } + wiphy->addresses = dev->macaddr_list; + wiphy->n_addresses = ARRAY_SIZE(dev->macaddr_list); +} +EXPORT_SYMBOL_GPL(mt76x02_config_mac_addr_list); MODULE_LICENSE("Dual BSD/GPL"); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/Makefile b/drivers/net/wireless/mediatek/mt76/mt76x2/Makefile index b71bb1049170..9297b850bbba 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/Makefile +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/Makefile @@ -3,11 +3,11 @@ obj-$(CONFIG_MT76x2E) += mt76x2e.o obj-$(CONFIG_MT76x2U) += mt76x2u.o mt76x2-common-y := \ - eeprom.o mac.o init.o phy.o debugfs.o mcu.o + eeprom.o mac.o init.o phy.o mcu.o mt76x2e-y := \ - pci.o pci_main.o pci_init.o pci_tx.o \ - pci_mac.o pci_mcu.o pci_phy.o pci_dfs.o + pci.o pci_main.o pci_init.o pci_mcu.o \ + pci_phy.o mt76x2u-y := \ usb.o usb_init.o usb_main.o usb_mac.o usb_mcu.o \ diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/dfs.h b/drivers/net/wireless/mediatek/mt76/mt76x2/dfs.h deleted file mode 100644 index 3cb9d1864286..000000000000 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/dfs.h +++ /dev/null @@ -1,26 +0,0 @@ -/* - * Copyright (C) 2016 Lorenzo Bianconi - * - * Permission to use, copy, modify, and/or distribute this software for any - * purpose with or without fee is hereby granted, provided that the above - * copyright notice and this permission notice appear in all copies. - * - * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR - * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES - * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- */ - -#ifndef __DFS_H -#define __DFS_H - -void mt76x2_dfs_init_params(struct mt76x02_dev *dev); -void mt76x2_dfs_init_detector(struct mt76x02_dev *dev); -void mt76x2_dfs_adjust_agc(struct mt76x02_dev *dev); -void mt76x2_dfs_set_domain(struct mt76x02_dev *dev, - enum nl80211_dfs_regions region); - -#endif /* __DFS_H */ diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c index f39b622d03f4..6f6998561d9d 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c @@ -21,17 +21,6 @@ #define EE_FIELD(_name, _value) [MT_EE_##_name] = (_value) | 1 -static int -mt76x2_eeprom_copy(struct mt76x02_dev *dev, enum mt76x02_eeprom_field field, - void *dest, int len) -{ - if (field + len > dev->mt76.eeprom.size) - return -1; - - memcpy(dest, dev->mt76.eeprom.data + field, len); - return 0; -} - static int mt76x2_eeprom_get_macaddr(struct mt76x02_dev *dev) { @@ -378,7 +367,7 @@ mt76x2_get_power_info_2g(struct mt76x02_dev *dev, else delta_idx = 5; - mt76x2_eeprom_copy(dev, offset, data, sizeof(data)); + mt76x02_eeprom_copy(dev, offset, data, sizeof(data)); t->chain[chain].tssi_slope = data[0]; t->chain[chain].tssi_offset = data[1]; @@ -429,7 +418,7 @@ mt76x2_get_power_info_5g(struct mt76x02_dev *dev, else delta_idx = 4; - mt76x2_eeprom_copy(dev, offset, data, sizeof(data)); + mt76x02_eeprom_copy(dev, offset, data, sizeof(data)); t->chain[chain].tssi_slope = data[0]; t->chain[chain].tssi_offset = data[1]; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/init.c b/drivers/net/wireless/mediatek/mt76/mt76x2/init.c index 3c73fdeaf30f..54a9b5fac787 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/init.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/init.c @@ -158,38 +158,6 @@ void mt76_write_mac_initvals(struct mt76x02_dev *dev) } EXPORT_SYMBOL_GPL(mt76_write_mac_initvals); -void mt76x2_init_device(struct mt76x02_dev *dev) -{ - struct ieee80211_hw *hw = mt76_hw(dev); - - hw->queues = 4; - hw->max_rates = 1; - hw->max_report_rates = 7; - hw->max_rate_tries = 1; - hw->extra_tx_headroom = 2; - if (mt76_is_usb(dev)) - hw->extra_tx_headroom += sizeof(struct mt76x02_txwi) + - MT_DMA_HDR_LEN; - - hw->sta_data_size = sizeof(struct mt76x02_sta); - hw->vif_data_size = sizeof(struct mt76x02_vif); - - ieee80211_hw_set(hw, SUPPORTS_HT_CCK_RATES); - ieee80211_hw_set(hw, SUPPORTS_REORDERING_BUFFER); - - dev->mt76.sband_2g.sband.ht_cap.cap |= IEEE80211_HT_CAP_LDPC_CODING; - dev->mt76.sband_5g.sband.ht_cap.cap |= IEEE80211_HT_CAP_LDPC_CODING; - - dev->mt76.chainmask = 0x202; - dev->mt76.global_wcid.idx = 255; - dev->mt76.global_wcid.hw_key_idx = -1; - dev->slottime = 9; - - /* init antenna configuration */ - dev->mt76.antenna_mask = 3; -} -EXPORT_SYMBOL_GPL(mt76x2_init_device); - void mt76x2_init_txpower(struct mt76x02_dev *dev, struct ieee80211_supported_band *sband) { diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/mac.h b/drivers/net/wireless/mediatek/mt76/mt76x2/mac.h index a31bd49ae6cb..4c8e20bce920 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/mac.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/mac.h @@ -26,12 +26,5 @@ struct mt76x02_vif; int mt76x2_mac_start(struct mt76x02_dev *dev); void mt76x2_mac_stop(struct mt76x02_dev *dev, bool force); void mt76x2_mac_resume(struct mt76x02_dev *dev); -void mt76x2_mac_set_bssid(struct mt76x02_dev *dev, u8 idx, const u8 *addr); - -int mt76x2_mac_set_beacon(struct mt76x02_dev *dev, u8 vif_idx, - struct sk_buff *skb); -void 
mt76x2_mac_set_beacon_enable(struct mt76x02_dev *dev, u8 vif_idx, bool val); - -void mt76x2_mac_work(struct work_struct *work); #endif diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x2/mcu.c index 88bd62cfbdf9..cd3e082f486c 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/mcu.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/mcu.c @@ -26,7 +26,6 @@ int mt76x2_mcu_set_channel(struct mt76x02_dev *dev, u8 channel, u8 bw, u8 bw_index, bool scan) { - struct sk_buff *skb; struct { u8 idx; u8 scan; @@ -45,21 +44,19 @@ int mt76x2_mcu_set_channel(struct mt76x02_dev *dev, u8 channel, u8 bw, }; /* first set the channel without the extension channel info */ - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - mt76_mcu_send_msg(dev, skb, CMD_SWITCH_CHANNEL_OP, true); + mt76_mcu_send_msg(dev, CMD_SWITCH_CHANNEL_OP, &msg, sizeof(msg), true); usleep_range(5000, 10000); msg.ext_chan = 0xe0 + bw_index; - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - return mt76_mcu_send_msg(dev, skb, CMD_SWITCH_CHANNEL_OP, true); + return mt76_mcu_send_msg(dev, CMD_SWITCH_CHANNEL_OP, &msg, sizeof(msg), + true); } EXPORT_SYMBOL_GPL(mt76x2_mcu_set_channel); int mt76x2_mcu_load_cr(struct mt76x02_dev *dev, u8 type, u8 temp_level, u8 channel) { - struct sk_buff *skb; struct { u8 cr_mode; u8 temp; @@ -80,15 +77,13 @@ int mt76x2_mcu_load_cr(struct mt76x02_dev *dev, u8 type, u8 temp_level, msg.cfg = cpu_to_le32(val); /* first set the channel without the extension channel info */ - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - return mt76_mcu_send_msg(dev, skb, CMD_LOAD_CR, true); + return mt76_mcu_send_msg(dev, CMD_LOAD_CR, &msg, sizeof(msg), true); } EXPORT_SYMBOL_GPL(mt76x2_mcu_load_cr); int mt76x2_mcu_init_gain(struct mt76x02_dev *dev, u8 channel, u32 gain, bool force) { - struct sk_buff *skb; struct { __le32 channel; __le32 gain_val; @@ -100,15 +95,14 @@ int mt76x2_mcu_init_gain(struct mt76x02_dev *dev, u8 channel, u32 gain, if (force) msg.channel |= cpu_to_le32(BIT(31)); - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - return mt76_mcu_send_msg(dev, skb, CMD_INIT_GAIN_OP, true); + return mt76_mcu_send_msg(dev, CMD_INIT_GAIN_OP, &msg, sizeof(msg), + true); } EXPORT_SYMBOL_GPL(mt76x2_mcu_init_gain); int mt76x2_mcu_tssi_comp(struct mt76x02_dev *dev, struct mt76x2_tssi_comp *tssi_data) { - struct sk_buff *skb; struct { __le32 id; struct mt76x2_tssi_comp data; @@ -117,7 +111,7 @@ int mt76x2_mcu_tssi_comp(struct mt76x02_dev *dev, .data = *tssi_data, }; - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - return mt76_mcu_send_msg(dev, skb, CMD_CALIBRATION_OP, true); + return mt76_mcu_send_msg(dev, CMD_CALIBRATION_OP, &msg, sizeof(msg), + true); } EXPORT_SYMBOL_GPL(mt76x2_mcu_tssi_comp); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2.h b/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2.h index ab93125f46de..b259e4b50f1e 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2.h @@ -31,14 +31,8 @@ #define MT7662_ROM_PATCH "mt7662_rom_patch.bin" #define MT7662_EEPROM_SIZE 512 -#define MT7662U_FIRMWARE "mediatek/mt7662u.bin" -#define MT7662U_ROM_PATCH "mediatek/mt7662u_rom_patch.bin" - -#define MT_CALIBRATE_INTERVAL HZ - #include "../mt76x02.h" #include "mac.h" -#include "dfs.h" static inline bool is_mt7612(struct mt76x02_dev *dev) { @@ -57,15 +51,12 @@ extern const struct ieee80211_ops mt76x2_ops; struct mt76x02_dev *mt76x2_alloc_device(struct device *pdev); int 
mt76x2_register_device(struct mt76x02_dev *dev); -void mt76x2_init_debugfs(struct mt76x02_dev *dev); -void mt76x2_init_device(struct mt76x02_dev *dev); void mt76x2_phy_power_on(struct mt76x02_dev *dev); int mt76x2_init_hardware(struct mt76x02_dev *dev); void mt76x2_stop_hardware(struct mt76x02_dev *dev); int mt76x2_eeprom_init(struct mt76x02_dev *dev); int mt76x2_apply_calibration_data(struct mt76x02_dev *dev, int channel); -void mt76x2_set_tx_ackto(struct mt76x02_dev *dev); void mt76x2_phy_set_antenna(struct mt76x02_dev *dev); int mt76x2_phy_start(struct mt76x02_dev *dev); @@ -82,24 +73,17 @@ int mt76x2_mcu_load_cr(struct mt76x02_dev *dev, u8 type, u8 temp_level, void mt76x2_cleanup(struct mt76x02_dev *dev); -void mt76x2_mac_set_tx_protection(struct mt76x02_dev *dev, u32 val); - -void mt76x2_pre_tbtt_tasklet(unsigned long arg); - -void mt76x2_sta_ps(struct mt76_dev *dev, struct ieee80211_sta *sta, bool ps); - -void mt76x2_update_channel(struct mt76_dev *mdev); - void mt76x2_reset_wlan(struct mt76x02_dev *dev, bool enable); void mt76x2_init_txpower(struct mt76x02_dev *dev, struct ieee80211_supported_band *sband); void mt76_write_mac_initvals(struct mt76x02_dev *dev); -void mt76x2_phy_tssi_compensate(struct mt76x02_dev *dev, bool wait); +void mt76x2_phy_tssi_compensate(struct mt76x02_dev *dev); void mt76x2_phy_set_txpower_regs(struct mt76x02_dev *dev, enum nl80211_band band); void mt76x2_configure_tx_delay(struct mt76x02_dev *dev, enum nl80211_band band, u8 bw); void mt76x2_apply_gain_adj(struct mt76x02_dev *dev); +void mt76x2_phy_update_channel_gain(struct mt76x02_dev *dev); #endif diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2u.h b/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2u.h index 6e932b5010ef..0b0075411b34 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2u.h +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/mt76x2u.h @@ -43,11 +43,8 @@ int mt76x2u_mac_stop(struct mt76x02_dev *dev); int mt76x2u_phy_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef); void mt76x2u_phy_calibrate(struct work_struct *work); -void mt76x2u_phy_channel_calibrate(struct mt76x02_dev *dev); void mt76x2u_mcu_complete_urb(struct urb *urb); -int mt76x2u_mcu_set_dynamic_vga(struct mt76x02_dev *dev, u8 channel, bool ap, - bool ext, int rssi, u32 false_cca); int mt76x2u_mcu_init(struct mt76x02_dev *dev); int mt76x2u_mcu_fw_init(struct mt76x02_dev *dev); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c index fd125722d1fb..7f4ea2d00f42 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c @@ -79,7 +79,6 @@ mt76x2_fixup_xtal(struct mt76x02_dev *dev) static int mt76x2_mac_reset(struct mt76x02_dev *dev, bool hard) { - static const u8 null_addr[ETH_ALEN] = {}; const u8 *macaddr = dev->mt76.macaddr; u32 val; int i, k; @@ -123,27 +122,18 @@ static int mt76x2_mac_reset(struct mt76x02_dev *dev, bool hard) mt76_wr(dev, MT_MAC_ADDR_DW0, get_unaligned_le32(macaddr)); mt76_wr(dev, MT_MAC_ADDR_DW1, get_unaligned_le16(macaddr + 4)); - mt76_wr(dev, MT_MAC_BSSID_DW0, get_unaligned_le32(macaddr)); - mt76_wr(dev, MT_MAC_BSSID_DW1, get_unaligned_le16(macaddr + 4) | - FIELD_PREP(MT_MAC_BSSID_DW1_MBSS_MODE, 3) | /* 8 beacons */ - MT_MAC_BSSID_DW1_MBSS_LOCAL_BIT); - - /* Fire a pre-TBTT interrupt 8 ms before TBTT */ - mt76_rmw_field(dev, MT_INT_TIMER_CFG, MT_INT_TIMER_CFG_PRE_TBTT, - 8 << 4); - mt76_rmw_field(dev, MT_INT_TIMER_CFG, 
MT_INT_TIMER_CFG_GP_TIMER, - MT_DFS_GP_INTERVAL); - mt76_wr(dev, MT_INT_TIMER_EN, 0); - - mt76_wr(dev, MT_BCN_BYPASS_MASK, 0xffff); + mt76x02_init_beacon_config(dev); if (!hard) return 0; for (i = 0; i < 256 / 32; i++) mt76_wr(dev, MT_WCID_DROP_BASE + i * 4, 0); - for (i = 0; i < 256; i++) + for (i = 0; i < 256; i++) { mt76x02_mac_wcid_setup(dev, i, 0, NULL); + mt76_wr(dev, MT_WCID_TX_RATE(i), 0); + mt76_wr(dev, MT_WCID_TX_RATE(i) + 4, 0); + } for (i = 0; i < MT_MAX_VIFS; i++) mt76x02_mac_wcid_setup(dev, MT_VIF_WCID(i), i, NULL); @@ -152,11 +142,6 @@ static int mt76x2_mac_reset(struct mt76x02_dev *dev, bool hard) for (k = 0; k < 4; k++) mt76x02_mac_shared_key_setup(dev, i, k, NULL); - for (i = 0; i < 8; i++) { - mt76x2_mac_set_bssid(dev, i, null_addr); - mt76x2_mac_set_beacon(dev, i, NULL); - } - for (i = 0; i < 16; i++) mt76_rr(dev, MT_TX_STAT_FIFO); @@ -168,9 +153,7 @@ static int mt76x2_mac_reset(struct mt76x02_dev *dev, bool hard) MT_CH_TIME_CFG_EIFS_AS_BUSY | FIELD_PREP(MT_CH_TIME_CFG_CH_TIMER_CLR, 1)); - mt76x02_set_beacon_offsets(dev); - - mt76x2_set_tx_ackto(dev); + mt76x02_set_tx_ackto(dev); return 0; } @@ -277,30 +260,10 @@ mt76x2_power_on(struct mt76x02_dev *dev) mt76x2_power_on_rf(dev, 1); } -void mt76x2_set_tx_ackto(struct mt76x02_dev *dev) -{ - u8 ackto, sifs, slottime = dev->slottime; - - /* As defined by IEEE 802.11-2007 17.3.8.6 */ - slottime += 3 * dev->coverage_class; - mt76_rmw_field(dev, MT_BKOFF_SLOT_CFG, - MT_BKOFF_SLOT_CFG_SLOTTIME, slottime); - - sifs = mt76_get_field(dev, MT_XIFS_TIME_CFG, - MT_XIFS_TIME_CFG_OFDM_SIFS); - - ackto = slottime + sifs; - mt76_rmw_field(dev, MT_TX_TIMEOUT_CFG, - MT_TX_TIMEOUT_CFG_ACKTO, ackto); -} - int mt76x2_init_hardware(struct mt76x02_dev *dev) { int ret; - tasklet_init(&dev->pre_tbtt_tasklet, mt76x2_pre_tbtt_tasklet, - (unsigned long) dev); - mt76x02_dma_disable(dev); mt76x2_reset_wlan(dev, true); mt76x2_power_on(dev); @@ -337,7 +300,7 @@ void mt76x2_stop_hardware(struct mt76x02_dev *dev) { cancel_delayed_work_sync(&dev->cal_work); cancel_delayed_work_sync(&dev->mac_work); - mt76x02_mcu_set_radio_state(dev, false, true); + mt76x02_mcu_set_radio_state(dev, false); mt76x2_mac_stop(dev, false); } @@ -354,12 +317,14 @@ struct mt76x02_dev *mt76x2_alloc_device(struct device *pdev) { static const struct mt76_driver_ops drv_ops = { .txwi_size = sizeof(struct mt76x02_txwi), - .update_survey = mt76x2_update_channel, + .update_survey = mt76x02_update_channel, .tx_prepare_skb = mt76x02_tx_prepare_skb, .tx_complete_skb = mt76x02_tx_complete_skb, .rx_skb = mt76x02_queue_rx_skb, .rx_poll_complete = mt76x02_rx_poll_complete, - .sta_ps = mt76x2_sta_ps, + .sta_ps = mt76x02_sta_ps, + .sta_add = mt76x02_sta_add, + .sta_remove = mt76x02_sta_remove, }; struct mt76x02_dev *dev; struct mt76_dev *mdev; @@ -375,43 +340,6 @@ struct mt76x02_dev *mt76x2_alloc_device(struct device *pdev) return dev; } -static void mt76x2_regd_notifier(struct wiphy *wiphy, - struct regulatory_request *request) -{ - struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy); - struct mt76x02_dev *dev = hw->priv; - - mt76x2_dfs_set_domain(dev, request->dfs_region); -} - -static const struct ieee80211_iface_limit if_limits[] = { - { - .max = 1, - .types = BIT(NL80211_IFTYPE_ADHOC) - }, { - .max = 8, - .types = BIT(NL80211_IFTYPE_STATION) | -#ifdef CONFIG_MAC80211_MESH - BIT(NL80211_IFTYPE_MESH_POINT) | -#endif - BIT(NL80211_IFTYPE_AP) - }, -}; - -static const struct ieee80211_iface_combination if_comb[] = { - { - .limits = if_limits, - .n_limits = ARRAY_SIZE(if_limits), - 
.max_interfaces = 8, - .num_different_channels = 1, - .beacon_int_infra_match = true, - .radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) | - BIT(NL80211_CHAN_WIDTH_20) | - BIT(NL80211_CHAN_WIDTH_40) | - BIT(NL80211_CHAN_WIDTH_80), - } -}; - static void mt76x2_led_set_config(struct mt76_dev *mt76, u8 delay_on, u8 delay_off) { @@ -462,49 +390,17 @@ static void mt76x2_led_set_brightness(struct led_classdev *led_cdev, int mt76x2_register_device(struct mt76x02_dev *dev) { - struct ieee80211_hw *hw = mt76_hw(dev); - struct wiphy *wiphy = hw->wiphy; - int i, ret; + int ret; INIT_DELAYED_WORK(&dev->cal_work, mt76x2_phy_calibrate); - INIT_DELAYED_WORK(&dev->mac_work, mt76x2_mac_work); - mt76x2_init_device(dev); + mt76x02_init_device(dev); ret = mt76x2_init_hardware(dev); if (ret) return ret; - for (i = 0; i < ARRAY_SIZE(dev->macaddr_list); i++) { - u8 *addr = dev->macaddr_list[i].addr; - - memcpy(addr, dev->mt76.macaddr, ETH_ALEN); - - if (!i) - continue; - - addr[0] |= BIT(1); - addr[0] ^= ((i - 1) << 2); - } - wiphy->addresses = dev->macaddr_list; - wiphy->n_addresses = ARRAY_SIZE(dev->macaddr_list); - - wiphy->iface_combinations = if_comb; - wiphy->n_iface_combinations = ARRAY_SIZE(if_comb); - - wiphy->reg_notifier = mt76x2_regd_notifier; - - wiphy->interface_modes = - BIT(NL80211_IFTYPE_STATION) | - BIT(NL80211_IFTYPE_AP) | -#ifdef CONFIG_MAC80211_MESH - BIT(NL80211_IFTYPE_MESH_POINT) | -#endif - BIT(NL80211_IFTYPE_ADHOC); - - wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_VHT_IBSS); - - mt76x2_dfs_init_detector(dev); + mt76x02_config_mac_addr_list(dev); /* init led callbacks */ if (IS_ENABLED(CONFIG_MT76_LEDS)) { @@ -517,7 +413,7 @@ int mt76x2_register_device(struct mt76x02_dev *dev) if (ret) goto fail; - mt76x2_init_debugfs(dev); + mt76x02_init_debugfs(dev); mt76x2_init_txpower(dev, &dev->mt76.sband_2g.sband); mt76x2_init_txpower(dev, &dev->mt76.sband_5g.sband); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_mac.c deleted file mode 100644 index 4b331ed14bb2..000000000000 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_mac.c +++ /dev/null @@ -1,203 +0,0 @@ -/* - * Copyright (C) 2016 Felix Fietkau - * - * Permission to use, copy, modify, and/or distribute this software for any - * purpose with or without fee is hereby granted, provided that the above - * copyright notice and this permission notice appear in all copies. - * - * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR - * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES - * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- */ - -#include -#include "mt76x2.h" -#include "mcu.h" -#include "eeprom.h" - -void mt76x2_mac_set_bssid(struct mt76x02_dev *dev, u8 idx, const u8 *addr) -{ - idx &= 7; - mt76_wr(dev, MT_MAC_APC_BSSID_L(idx), get_unaligned_le32(addr)); - mt76_rmw_field(dev, MT_MAC_APC_BSSID_H(idx), MT_MAC_APC_BSSID_H_ADDR, - get_unaligned_le16(addr + 4)); -} - -static int -mt76_write_beacon(struct mt76x02_dev *dev, int offset, struct sk_buff *skb) -{ - int beacon_len = mt76x02_beacon_offsets[1] - mt76x02_beacon_offsets[0]; - struct mt76x02_txwi txwi; - - if (WARN_ON_ONCE(beacon_len < skb->len + sizeof(struct mt76x02_txwi))) - return -ENOSPC; - - mt76x02_mac_write_txwi(dev, &txwi, skb, NULL, NULL, skb->len); - - mt76_wr_copy(dev, offset, &txwi, sizeof(txwi)); - offset += sizeof(txwi); - - mt76_wr_copy(dev, offset, skb->data, skb->len); - return 0; -} - -static int -__mt76x2_mac_set_beacon(struct mt76x02_dev *dev, u8 bcn_idx, struct sk_buff *skb) -{ - int beacon_len = mt76x02_beacon_offsets[1] - mt76x02_beacon_offsets[0]; - int beacon_addr = mt76x02_beacon_offsets[bcn_idx]; - int ret = 0; - int i; - - /* Prevent corrupt transmissions during update */ - mt76_set(dev, MT_BCN_BYPASS_MASK, BIT(bcn_idx)); - - if (skb) { - ret = mt76_write_beacon(dev, beacon_addr, skb); - if (!ret) - dev->beacon_data_mask |= BIT(bcn_idx); - } else { - dev->beacon_data_mask &= ~BIT(bcn_idx); - for (i = 0; i < beacon_len; i += 4) - mt76_wr(dev, beacon_addr + i, 0); - } - - mt76_wr(dev, MT_BCN_BYPASS_MASK, 0xff00 | ~dev->beacon_data_mask); - - return ret; -} - -int mt76x2_mac_set_beacon(struct mt76x02_dev *dev, u8 vif_idx, - struct sk_buff *skb) -{ - bool force_update = false; - int bcn_idx = 0; - int i; - - for (i = 0; i < ARRAY_SIZE(dev->beacons); i++) { - if (vif_idx == i) { - force_update = !!dev->beacons[i] ^ !!skb; - - if (dev->beacons[i]) - dev_kfree_skb(dev->beacons[i]); - - dev->beacons[i] = skb; - __mt76x2_mac_set_beacon(dev, bcn_idx, skb); - } else if (force_update && dev->beacons[i]) { - __mt76x2_mac_set_beacon(dev, bcn_idx, dev->beacons[i]); - } - - bcn_idx += !!dev->beacons[i]; - } - - for (i = bcn_idx; i < ARRAY_SIZE(dev->beacons); i++) { - if (!(dev->beacon_data_mask & BIT(i))) - break; - - __mt76x2_mac_set_beacon(dev, i, NULL); - } - - mt76_rmw_field(dev, MT_MAC_BSSID_DW1, MT_MAC_BSSID_DW1_MBEACON_N, - bcn_idx - 1); - return 0; -} - -void mt76x2_mac_set_beacon_enable(struct mt76x02_dev *dev, - u8 vif_idx, bool val) -{ - u8 old_mask = dev->beacon_mask; - bool en; - u32 reg; - - if (val) { - dev->beacon_mask |= BIT(vif_idx); - } else { - dev->beacon_mask &= ~BIT(vif_idx); - mt76x2_mac_set_beacon(dev, vif_idx, NULL); - } - - if (!!old_mask == !!dev->beacon_mask) - return; - - en = dev->beacon_mask; - - mt76_rmw_field(dev, MT_INT_TIMER_EN, MT_INT_TIMER_EN_PRE_TBTT_EN, en); - reg = MT_BEACON_TIME_CFG_BEACON_TX | - MT_BEACON_TIME_CFG_TBTT_EN | - MT_BEACON_TIME_CFG_TIMER_EN; - mt76_rmw(dev, MT_BEACON_TIME_CFG, reg, reg * en); - - if (en) - mt76x02_irq_enable(dev, MT_INT_PRE_TBTT | MT_INT_TBTT); - else - mt76x02_irq_disable(dev, MT_INT_PRE_TBTT | MT_INT_TBTT); -} - -void mt76x2_update_channel(struct mt76_dev *mdev) -{ - struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); - struct mt76_channel_state *state; - u32 active, busy; - - state = mt76_channel_state(&dev->mt76, dev->mt76.chandef.chan); - - busy = mt76_rr(dev, MT_CH_BUSY); - active = busy + mt76_rr(dev, MT_CH_IDLE); - - spin_lock_bh(&dev->mt76.cc_lock); - state->cc_busy += busy; - state->cc_active += active; - spin_unlock_bh(&dev->mt76.cc_lock); -} 
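The mt76x2_update_channel() helper deleted above (this patch replaces it with the shared mt76x02_update_channel()) follows the usual survey-accounting pattern: the hardware counts busy time and idle time separately, total active time is their sum, and both are accumulated into the per-channel state under cc_lock so the periodic worker and survey readers cannot race. A minimal sketch of the same pattern, assuming hypothetical read_ch_busy()/read_ch_idle() helpers in place of the MT_CH_BUSY/MT_CH_IDLE register reads:

#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical register reads; in the driver these are
 * mt76_rr(dev, MT_CH_BUSY) and mt76_rr(dev, MT_CH_IDLE).
 */
extern u32 read_ch_busy(void);
extern u32 read_ch_idle(void);

struct channel_state {
	spinlock_t cc_lock;
	u64 cc_busy;	/* time the medium was sensed busy */
	u64 cc_active;	/* total time spent on this channel */
};

static void update_channel_state(struct channel_state *state)
{
	u32 busy = read_ch_busy();
	u32 active = busy + read_ch_idle();	/* busy + idle = on-channel time */

	spin_lock_bh(&state->cc_lock);
	state->cc_busy += busy;
	state->cc_active += active;
	spin_unlock_bh(&state->cc_lock);
}

The hardware exposes only busy and idle counters, so total active time is derived as their sum rather than read from a dedicated register; the lock keeps the two accumulated values consistent with each other.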
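Elsewhere in this patch, the beacon timer resync (mt76x02_resync_beacon_timer(), moved into mt76x02_mmio.c above, with the old pci_tx.c copy deleted further below) compensates the timer's cumulative drift, noted in the driver comment as about 1 us per tick, on a timer programmed in 1/16 TU (64 us) units: one interval out of every 64 is shortened by one unit, and the shortened value is written at tbtt_count == 62 because a new interval takes effect two TBTTs later. A worked sketch of just that arithmetic (hypothetical helper, not driver code):

/* beacon_int is in TU; the hardware register takes 1/16 TU (64 us) units. */
static u32 resync_timer_val(u32 beacon_int, u32 tbtt_count)
{
	u32 timer_val = beacon_int << 4;	/* TU -> 1/16 TU units */

	/* Drop one 64 us unit once per 64-TBTT cycle; programmed two
	 * TBTTs early so the short interval lands on the 64th beacon.
	 */
	if (tbtt_count == 62)
		timer_val -= 1;

	return timer_val;
}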
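The mt76x2_mac_work() routine deleted just below reads sixteen MT_TX_AGG_CNT registers, each of which packs two 16-bit A-MPDU aggregation counters, and fans every 32-bit read out into two consecutive aggr_stats buckets. A sketch of that unpacking, assuming a hypothetical read_agg_cnt() in place of the mt76_rr(dev, MT_TX_AGG_CNT(i)) read:

#include <linux/types.h>

extern u32 read_agg_cnt(int reg);	/* hypothetical MT_TX_AGG_CNT(i) read */

#define AGG_REGS	16

static void accumulate_aggr_stats(u32 aggr_stats[AGG_REGS * 2])
{
	int i, idx;

	for (i = 0, idx = 0; i < AGG_REGS; i++) {
		u32 val = read_agg_cnt(i);

		aggr_stats[idx++] += val & 0xffff;	/* low 16-bit counter */
		aggr_stats[idx++] += val >> 16;		/* high 16-bit counter */
	}
}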
- -void mt76x2_mac_work(struct work_struct *work) -{ - struct mt76x02_dev *dev = container_of(work, struct mt76x02_dev, - mac_work.work); - int i, idx; - - mt76x2_update_channel(&dev->mt76); - for (i = 0, idx = 0; i < 16; i++) { - u32 val = mt76_rr(dev, MT_TX_AGG_CNT(i)); - - dev->aggr_stats[idx++] += val & 0xffff; - dev->aggr_stats[idx++] += val >> 16; - } - - ieee80211_queue_delayed_work(mt76_hw(dev), &dev->mac_work, - MT_CALIBRATE_INTERVAL); -} - -void mt76x2_mac_set_tx_protection(struct mt76x02_dev *dev, u32 val) -{ - u32 data = 0; - - if (val != ~0) - data = FIELD_PREP(MT_PROT_CFG_CTRL, 1) | - MT_PROT_CFG_RTS_THRESH; - - mt76_rmw_field(dev, MT_TX_RTS_CFG, MT_TX_RTS_CFG_THRESH, val); - - mt76_rmw(dev, MT_CCK_PROT_CFG, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_OFDM_PROT_CFG, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_MM20_PROT_CFG, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_MM40_PROT_CFG, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_GF20_PROT_CFG, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_GF40_PROT_CFG, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_TX_PROT_CFG6, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_TX_PROT_CFG7, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); - mt76_rmw(dev, MT_TX_PROT_CFG8, - MT_PROT_CFG_CTRL | MT_PROT_CFG_RTS_THRESH, data); -} diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c index 3f001bd6806c..b54a32397486 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c @@ -74,7 +74,7 @@ mt76x2_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef) mt76_rr(dev, MT_CH_IDLE); mt76_rr(dev, MT_CH_BUSY); - mt76x2_dfs_init_params(dev); + mt76x02_dfs_init_params(dev); mt76x2_mac_resume(dev); tasklet_enable(&dev->dfs_pd.dfs_tasklet); @@ -127,103 +127,12 @@ mt76x2_config(struct ieee80211_hw *hw, u32 changed) return ret; } -static void -mt76x2_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - struct ieee80211_bss_conf *info, u32 changed) -{ - struct mt76x02_dev *dev = hw->priv; - struct mt76x02_vif *mvif = (struct mt76x02_vif *) vif->drv_priv; - - mutex_lock(&dev->mt76.mutex); - - if (changed & BSS_CHANGED_BSSID) - mt76x2_mac_set_bssid(dev, mvif->idx, info->bssid); - - if (changed & BSS_CHANGED_BEACON_INT) { - mt76_rmw_field(dev, MT_BEACON_TIME_CFG, - MT_BEACON_TIME_CFG_INTVAL, - info->beacon_int << 4); - dev->beacon_int = info->beacon_int; - dev->tbtt_count = 0; - } - - if (changed & BSS_CHANGED_BEACON_ENABLED) { - tasklet_disable(&dev->pre_tbtt_tasklet); - mt76x2_mac_set_beacon_enable(dev, mvif->idx, - info->enable_beacon); - tasklet_enable(&dev->pre_tbtt_tasklet); - } - - if (changed & BSS_CHANGED_ERP_SLOT) { - int slottime = info->use_short_slot ? 
9 : 20; - - dev->slottime = slottime; - mt76x2_set_tx_ackto(dev); - } - - mutex_unlock(&dev->mt76.mutex); -} - -void -mt76x2_sta_ps(struct mt76_dev *mdev, struct ieee80211_sta *sta, bool ps) -{ - struct mt76x02_sta *msta = (struct mt76x02_sta *) sta->drv_priv; - struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); - int idx = msta->wcid.idx; - - mt76_stop_tx_queues(&dev->mt76, sta, true); - mt76x02_mac_wcid_set_drop(dev, idx, ps); -} - -static void -mt76x2_sw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - const u8 *mac) -{ - struct mt76x02_dev *dev = hw->priv; - - tasklet_disable(&dev->pre_tbtt_tasklet); - set_bit(MT76_SCANNING, &dev->mt76.state); -} - -static void -mt76x2_sw_scan_complete(struct ieee80211_hw *hw, struct ieee80211_vif *vif) -{ - struct mt76x02_dev *dev = hw->priv; - - clear_bit(MT76_SCANNING, &dev->mt76.state); - tasklet_enable(&dev->pre_tbtt_tasklet); -} - static void mt76x2_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 queues, bool drop) { } -static int -mt76x2_get_txpower(struct ieee80211_hw *hw, struct ieee80211_vif *vif, int *dbm) -{ - struct mt76x02_dev *dev = hw->priv; - - *dbm = dev->mt76.txpower_cur / 2; - - /* convert from per-chain power to combined output on 2x2 devices */ - *dbm += 3; - - return 0; -} - -static void mt76x2_set_coverage_class(struct ieee80211_hw *hw, - s16 coverage_class) -{ - struct mt76x02_dev *dev = hw->priv; - - mutex_lock(&dev->mt76.mutex); - dev->coverage_class = coverage_class; - mt76x2_set_tx_ackto(dev); - mutex_unlock(&dev->mt76.mutex); -} - static int mt76x2_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set) { @@ -264,21 +173,6 @@ static int mt76x2_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, return 0; } -static int -mt76x2_set_rts_threshold(struct ieee80211_hw *hw, u32 val) -{ - struct mt76x02_dev *dev = hw->priv; - - if (val != ~0 && val > 0xffff) - return -EINVAL; - - mutex_lock(&dev->mt76.mutex); - mt76x2_mac_set_tx_protection(dev, val); - mutex_unlock(&dev->mt76.mutex); - - return 0; -} - const struct ieee80211_ops mt76x2_ops = { .tx = mt76x02_tx, .start = mt76x2_start, @@ -287,24 +181,23 @@ const struct ieee80211_ops mt76x2_ops = { .remove_interface = mt76x02_remove_interface, .config = mt76x2_config, .configure_filter = mt76x02_configure_filter, - .bss_info_changed = mt76x2_bss_info_changed, - .sta_add = mt76x02_sta_add, - .sta_remove = mt76x02_sta_remove, + .bss_info_changed = mt76x02_bss_info_changed, + .sta_state = mt76_sta_state, .set_key = mt76x02_set_key, .conf_tx = mt76x02_conf_tx, - .sw_scan_start = mt76x2_sw_scan, - .sw_scan_complete = mt76x2_sw_scan_complete, + .sw_scan_start = mt76x02_sw_scan, + .sw_scan_complete = mt76x02_sw_scan_complete, .flush = mt76x2_flush, .ampdu_action = mt76x02_ampdu_action, - .get_txpower = mt76x2_get_txpower, + .get_txpower = mt76x02_get_txpower, .wake_tx_queue = mt76_wake_tx_queue, .sta_rate_tbl_update = mt76x02_sta_rate_tbl_update, .release_buffered_frames = mt76_release_buffered_frames, - .set_coverage_class = mt76x2_set_coverage_class, + .set_coverage_class = mt76x02_set_coverage_class, .get_survey = mt76_get_survey, .set_tim = mt76x2_set_tim, .set_antenna = mt76x2_set_antenna, .get_antenna = mt76x2_get_antenna, - .set_rts_threshold = mt76x2_set_rts_threshold, + .set_rts_threshold = mt76x02_set_rts_threshold, }; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_mcu.c index d8fa9ba56437..03e24ae7f66c 100644 --- 
a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_mcu.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_mcu.c @@ -168,7 +168,6 @@ error: int mt76x2_mcu_init(struct mt76x02_dev *dev) { static const struct mt76_mcu_ops mt76x2_mcu_ops = { - .mcu_msg_alloc = mt76x02_mcu_msg_alloc, .mcu_send_msg = mt76x02_mcu_msg_send, }; int ret; @@ -183,6 +182,6 @@ int mt76x2_mcu_init(struct mt76x02_dev *dev) if (ret) return ret; - mt76x02_mcu_function_select(dev, Q_SELECT, 1, true); + mt76x02_mcu_function_select(dev, Q_SELECT, 1); return 0; } diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c index 5bda44540225..da7cd40f56ff 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c @@ -38,7 +38,7 @@ mt76x2_phy_tssi_init_cal(struct mt76x02_dev *dev) if (mt76x02_ext_pa_enabled(dev, chan->band)) flag |= BIT(8); - mt76x02_mcu_calibrate(dev, MCU_CAL_TSSI, flag, true); + mt76x02_mcu_calibrate(dev, MCU_CAL_TSSI, flag); dev->cal.tssi_cal_done = true; return true; } @@ -62,13 +62,13 @@ mt76x2_phy_channel_calibrate(struct mt76x02_dev *dev, bool mac_stopped) mt76x2_mac_stop(dev, false); if (is_5ghz) - mt76x02_mcu_calibrate(dev, MCU_CAL_LC, 0, true); + mt76x02_mcu_calibrate(dev, MCU_CAL_LC, 0); - mt76x02_mcu_calibrate(dev, MCU_CAL_TX_LOFT, is_5ghz, true); - mt76x02_mcu_calibrate(dev, MCU_CAL_TXIQ, is_5ghz, true); - mt76x02_mcu_calibrate(dev, MCU_CAL_RXIQC_FI, is_5ghz, true); - mt76x02_mcu_calibrate(dev, MCU_CAL_TEMP_SENSOR, 0, true); - mt76x02_mcu_calibrate(dev, MCU_CAL_TX_SHAPING, 0, true); + mt76x02_mcu_calibrate(dev, MCU_CAL_TX_LOFT, is_5ghz); + mt76x02_mcu_calibrate(dev, MCU_CAL_TXIQ, is_5ghz); + mt76x02_mcu_calibrate(dev, MCU_CAL_RXIQC_FI, is_5ghz); + mt76x02_mcu_calibrate(dev, MCU_CAL_TEMP_SENSOR, 0); + mt76x02_mcu_calibrate(dev, MCU_CAL_TX_SHAPING, 0); if (!mac_stopped) mt76x2_mac_resume(dev); @@ -124,96 +124,6 @@ void mt76x2_phy_set_antenna(struct mt76x02_dev *dev) mt76_wr(dev, MT_BBP(AGC, 0), val); } -static void -mt76x2_phy_set_gain_val(struct mt76x02_dev *dev) -{ - u32 val; - u8 gain_val[2]; - - gain_val[0] = dev->cal.agc_gain_cur[0] - dev->cal.agc_gain_adjust; - gain_val[1] = dev->cal.agc_gain_cur[1] - dev->cal.agc_gain_adjust; - - if (dev->mt76.chandef.width >= NL80211_CHAN_WIDTH_40) - val = 0x1e42 << 16; - else - val = 0x1836 << 16; - - val |= 0xf8; - - mt76_wr(dev, MT_BBP(AGC, 8), - val | FIELD_PREP(MT_BBP_AGC_GAIN, gain_val[0])); - mt76_wr(dev, MT_BBP(AGC, 9), - val | FIELD_PREP(MT_BBP_AGC_GAIN, gain_val[1])); - - if (dev->mt76.chandef.chan->flags & IEEE80211_CHAN_RADAR) - mt76x2_dfs_adjust_agc(dev); -} - -static void -mt76x2_phy_update_channel_gain(struct mt76x02_dev *dev) -{ - u8 *gain = dev->cal.agc_gain_init; - u8 low_gain_delta, gain_delta; - bool gain_change; - int low_gain; - u32 val; - - dev->cal.avg_rssi_all = mt76x02_phy_get_min_avg_rssi(dev); - - low_gain = (dev->cal.avg_rssi_all > mt76x02_get_rssi_gain_thresh(dev)) + - (dev->cal.avg_rssi_all > mt76x02_get_low_rssi_gain_thresh(dev)); - - gain_change = (dev->cal.low_gain & 2) ^ (low_gain & 2); - dev->cal.low_gain = low_gain; - - if (!gain_change) { - if (mt76x02_phy_adjust_vga_gain(dev)) - mt76x2_phy_set_gain_val(dev); - return; - } - - if (dev->mt76.chandef.width == NL80211_CHAN_WIDTH_80) { - mt76_wr(dev, MT_BBP(RXO, 14), 0x00560211); - val = mt76_rr(dev, MT_BBP(AGC, 26)) & ~0xf; - if (low_gain == 2) - val |= 0x3; - else - val |= 0x5; - mt76_wr(dev, MT_BBP(AGC, 26), val); - } else { - mt76_wr(dev, MT_BBP(RXO, 
14), 0x00560423); - } - - if (mt76x2_has_ext_lna(dev)) - low_gain_delta = 10; - else - low_gain_delta = 14; - - if (low_gain == 2) { - mt76_wr(dev, MT_BBP(RXO, 18), 0xf000a990); - mt76_wr(dev, MT_BBP(AGC, 35), 0x08080808); - mt76_wr(dev, MT_BBP(AGC, 37), 0x08080808); - gain_delta = low_gain_delta; - dev->cal.agc_gain_adjust = 0; - } else { - mt76_wr(dev, MT_BBP(RXO, 18), 0xf000a991); - if (dev->mt76.chandef.width == NL80211_CHAN_WIDTH_80) - mt76_wr(dev, MT_BBP(AGC, 35), 0x10101014); - else - mt76_wr(dev, MT_BBP(AGC, 35), 0x11111116); - mt76_wr(dev, MT_BBP(AGC, 37), 0x2121262C); - gain_delta = 0; - dev->cal.agc_gain_adjust = low_gain_delta; - } - - dev->cal.agc_gain_cur[0] = gain[0] - gain_delta; - dev->cal.agc_gain_cur[1] = gain[1] - gain_delta; - mt76x2_phy_set_gain_val(dev); - - /* clear false CCA counters */ - mt76_rr(dev, MT_RX_STAT_1); -} - int mt76x2_phy_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef) { @@ -313,14 +223,14 @@ int mt76x2_phy_set_channel(struct mt76x02_dev *dev, u8 val = mt76x02_eeprom_get(dev, MT_EE_BT_RCAL_RESULT); if (val != 0xff) - mt76x02_mcu_calibrate(dev, MCU_CAL_R, 0, true); + mt76x02_mcu_calibrate(dev, MCU_CAL_R, 0); } - mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, channel, true); + mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, channel); /* Rx LPF calibration */ if (!dev->cal.init_cal_done) - mt76x02_mcu_calibrate(dev, MCU_CAL_RC, 0, true); + mt76x02_mcu_calibrate(dev, MCU_CAL_RC, 0); dev->cal.init_cal_done = true; @@ -384,7 +294,7 @@ void mt76x2_phy_calibrate(struct work_struct *work) dev = container_of(work, struct mt76x02_dev, cal_work.work); mt76x2_phy_channel_calibrate(dev, false); - mt76x2_phy_tssi_compensate(dev, true); + mt76x2_phy_tssi_compensate(dev); mt76x2_phy_temp_compensate(dev); mt76x2_phy_update_channel_gain(dev); ieee80211_queue_delayed_work(mt76_hw(dev), &dev->cal_work, @@ -395,7 +305,7 @@ int mt76x2_phy_start(struct mt76x02_dev *dev) { int ret; - ret = mt76x02_mcu_set_radio_state(dev, true, true); + ret = mt76x02_mcu_set_radio_state(dev, true); if (ret) return ret; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_tx.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_tx.c deleted file mode 100644 index 3a2ec86d3e88..000000000000 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_tx.c +++ /dev/null @@ -1,142 +0,0 @@ -/* - * Copyright (C) 2016 Felix Fietkau - * - * Permission to use, copy, modify, and/or distribute this software for any - * purpose with or without fee is hereby granted, provided that the above - * copyright notice and this permission notice appear in all copies. - * - * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES - * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR - * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES - * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN - * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF - * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- */ - -#include "mt76x2.h" - -struct beacon_bc_data { - struct mt76x02_dev *dev; - struct sk_buff_head q; - struct sk_buff *tail[8]; -}; - -static void -mt76x2_update_beacon_iter(void *priv, u8 *mac, struct ieee80211_vif *vif) -{ - struct mt76x02_dev *dev = (struct mt76x02_dev *) priv; - struct mt76x02_vif *mvif = (struct mt76x02_vif *) vif->drv_priv; - struct sk_buff *skb = NULL; - - if (!(dev->beacon_mask & BIT(mvif->idx))) - return; - - skb = ieee80211_beacon_get(mt76_hw(dev), vif); - if (!skb) - return; - - mt76x2_mac_set_beacon(dev, mvif->idx, skb); -} - -static void -mt76x2_add_buffered_bc(void *priv, u8 *mac, struct ieee80211_vif *vif) -{ - struct beacon_bc_data *data = priv; - struct mt76x02_dev *dev = data->dev; - struct mt76x02_vif *mvif = (struct mt76x02_vif *) vif->drv_priv; - struct ieee80211_tx_info *info; - struct sk_buff *skb; - - if (!(dev->beacon_mask & BIT(mvif->idx))) - return; - - skb = ieee80211_get_buffered_bc(mt76_hw(dev), vif); - if (!skb) - return; - - info = IEEE80211_SKB_CB(skb); - info->control.vif = vif; - info->flags |= IEEE80211_TX_CTL_ASSIGN_SEQ; - mt76_skb_set_moredata(skb, true); - __skb_queue_tail(&data->q, skb); - data->tail[mvif->idx] = skb; -} - -static void -mt76x2_resync_beacon_timer(struct mt76x02_dev *dev) -{ - u32 timer_val = dev->beacon_int << 4; - - dev->tbtt_count++; - - /* - * Beacon timer drifts by 1us every tick, the timer is configured - * in 1/16 TU (64us) units. - */ - if (dev->tbtt_count < 62) - return; - - if (dev->tbtt_count >= 64) { - dev->tbtt_count = 0; - return; - } - - /* - * The updated beacon interval takes effect after two TBTT, because - * at this point the original interval has already been loaded into - * the next TBTT_TIMER value - */ - if (dev->tbtt_count == 62) - timer_val -= 1; - - mt76_rmw_field(dev, MT_BEACON_TIME_CFG, - MT_BEACON_TIME_CFG_INTVAL, timer_val); -} - -void mt76x2_pre_tbtt_tasklet(unsigned long arg) -{ - struct mt76x02_dev *dev = (struct mt76x02_dev *) arg; - struct mt76_queue *q = &dev->mt76.q_tx[MT_TXQ_PSD]; - struct beacon_bc_data data = {}; - struct sk_buff *skb; - int i, nframes; - - mt76x2_resync_beacon_timer(dev); - - data.dev = dev; - __skb_queue_head_init(&data.q); - - ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev), - IEEE80211_IFACE_ITER_RESUME_ALL, - mt76x2_update_beacon_iter, dev); - - do { - nframes = skb_queue_len(&data.q); - ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev), - IEEE80211_IFACE_ITER_RESUME_ALL, - mt76x2_add_buffered_bc, &data); - } while (nframes != skb_queue_len(&data.q)); - - if (!nframes) - return; - - for (i = 0; i < ARRAY_SIZE(data.tail); i++) { - if (!data.tail[i]) - continue; - - mt76_skb_set_moredata(data.tail[i], false); - } - - spin_lock_bh(&q->lock); - while ((skb = __skb_dequeue(&data.q)) != NULL) { - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); - struct ieee80211_vif *vif = info->control.vif; - struct mt76x02_vif *mvif = (struct mt76x02_vif *) vif->drv_priv; - - mt76_dma_tx_queue_skb(&dev->mt76, q, skb, &mvif->group_wcid, - NULL); - } - spin_unlock_bh(&q->lock); -} - diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c index e9fff5b7f125..c9634a774705 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c @@ -210,7 +210,7 @@ void mt76x2_configure_tx_delay(struct mt76x02_dev *dev, } EXPORT_SYMBOL_GPL(mt76x2_configure_tx_delay); -void mt76x2_phy_tssi_compensate(struct mt76x02_dev *dev, bool wait) +void 
mt76x2_phy_tssi_compensate(struct mt76x02_dev *dev) { struct ieee80211_channel *chan = dev->mt76.chandef.chan; struct mt76x2_tx_power_info txp; @@ -245,8 +245,99 @@ void mt76x2_phy_tssi_compensate(struct mt76x02_dev *dev, bool wait) return; usleep_range(10000, 20000); - mt76x02_mcu_calibrate(dev, MCU_CAL_DPD, chan->hw_value, wait); + mt76x02_mcu_calibrate(dev, MCU_CAL_DPD, chan->hw_value); dev->cal.dpd_cal_done = true; } } EXPORT_SYMBOL_GPL(mt76x2_phy_tssi_compensate); + +static void +mt76x2_phy_set_gain_val(struct mt76x02_dev *dev) +{ + u32 val; + u8 gain_val[2]; + + gain_val[0] = dev->cal.agc_gain_cur[0] - dev->cal.agc_gain_adjust; + gain_val[1] = dev->cal.agc_gain_cur[1] - dev->cal.agc_gain_adjust; + + if (dev->mt76.chandef.width >= NL80211_CHAN_WIDTH_40) + val = 0x1e42 << 16; + else + val = 0x1836 << 16; + + val |= 0xf8; + + mt76_wr(dev, MT_BBP(AGC, 8), + val | FIELD_PREP(MT_BBP_AGC_GAIN, gain_val[0])); + mt76_wr(dev, MT_BBP(AGC, 9), + val | FIELD_PREP(MT_BBP_AGC_GAIN, gain_val[1])); + + if (dev->mt76.chandef.chan->flags & IEEE80211_CHAN_RADAR) + mt76x02_phy_dfs_adjust_agc(dev); +} + +void mt76x2_phy_update_channel_gain(struct mt76x02_dev *dev) +{ + u8 *gain = dev->cal.agc_gain_init; + u8 low_gain_delta, gain_delta; + bool gain_change; + int low_gain; + u32 val; + + dev->cal.avg_rssi_all = mt76x02_phy_get_min_avg_rssi(dev); + + low_gain = (dev->cal.avg_rssi_all > mt76x02_get_rssi_gain_thresh(dev)) + + (dev->cal.avg_rssi_all > mt76x02_get_low_rssi_gain_thresh(dev)); + + gain_change = dev->cal.low_gain < 0 || + (dev->cal.low_gain & 2) ^ (low_gain & 2); + dev->cal.low_gain = low_gain; + + if (!gain_change) { + if (mt76x02_phy_adjust_vga_gain(dev)) + mt76x2_phy_set_gain_val(dev); + return; + } + + if (dev->mt76.chandef.width == NL80211_CHAN_WIDTH_80) { + mt76_wr(dev, MT_BBP(RXO, 14), 0x00560211); + val = mt76_rr(dev, MT_BBP(AGC, 26)) & ~0xf; + if (low_gain == 2) + val |= 0x3; + else + val |= 0x5; + mt76_wr(dev, MT_BBP(AGC, 26), val); + } else { + mt76_wr(dev, MT_BBP(RXO, 14), 0x00560423); + } + + if (mt76x2_has_ext_lna(dev)) + low_gain_delta = 10; + else + low_gain_delta = 14; + + if (low_gain == 2) { + mt76_wr(dev, MT_BBP(RXO, 18), 0xf000a990); + mt76_wr(dev, MT_BBP(AGC, 35), 0x08080808); + mt76_wr(dev, MT_BBP(AGC, 37), 0x08080808); + gain_delta = low_gain_delta; + dev->cal.agc_gain_adjust = 0; + } else { + mt76_wr(dev, MT_BBP(RXO, 18), 0xf000a991); + if (dev->mt76.chandef.width == NL80211_CHAN_WIDTH_80) + mt76_wr(dev, MT_BBP(AGC, 35), 0x10101014); + else + mt76_wr(dev, MT_BBP(AGC, 35), 0x11111116); + mt76_wr(dev, MT_BBP(AGC, 37), 0x2121262C); + gain_delta = 0; + dev->cal.agc_gain_adjust = low_gain_delta; + } + + dev->cal.agc_gain_cur[0] = gain[0] - gain_delta; + dev->cal.agc_gain_cur[1] = gain[1] - gain_delta; + mt76x2_phy_set_gain_val(dev); + + /* clear false CCA counters */ + mt76_rr(dev, MT_RX_STAT_1); +} +EXPORT_SYMBOL_GPL(mt76x2_phy_update_channel_gain); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c index 57baf8d1c830..4d1788eb3812 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c @@ -131,8 +131,8 @@ err: } MODULE_DEVICE_TABLE(usb, mt76x2u_device_table); -MODULE_FIRMWARE(MT7662U_FIRMWARE); -MODULE_FIRMWARE(MT7662U_ROM_PATCH); +MODULE_FIRMWARE(MT7662_FIRMWARE); +MODULE_FIRMWARE(MT7662_ROM_PATCH); static struct usb_driver mt76x2u_driver = { .name = KBUILD_MODNAME, diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c 
b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c index 13cce2937573..0be3784f44fb 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c @@ -141,6 +141,8 @@ struct mt76x02_dev *mt76x2u_alloc_device(struct device *pdev) .tx_complete_skb = mt76x02u_tx_complete_skb, .tx_status_data = mt76x02_tx_status_data, .rx_skb = mt76x02_queue_rx_skb, + .sta_add = mt76x02_sta_add, + .sta_remove = mt76x02_sta_remove, }; struct mt76x02_dev *dev; struct mt76_dev *mdev; @@ -156,21 +158,9 @@ struct mt76x02_dev *mt76x2u_alloc_device(struct device *pdev) return dev; } -static void mt76x2u_init_beacon_offsets(struct mt76x02_dev *dev) -{ - mt76_wr(dev, MT_BCN_OFFSET(0), 0x18100800); - mt76_wr(dev, MT_BCN_OFFSET(1), 0x38302820); - mt76_wr(dev, MT_BCN_OFFSET(2), 0x58504840); - mt76_wr(dev, MT_BCN_OFFSET(3), 0x78706860); -} - int mt76x2u_init_hardware(struct mt76x02_dev *dev) { - const struct mt76_wcid_addr addr = { - .macaddr = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}, - .ba_mask = 0, - }; - int i, err; + int i, k, err; mt76x2_reset_wlan(dev, true); mt76x2u_power_on(dev); @@ -191,9 +181,6 @@ int mt76x2u_init_hardware(struct mt76x02_dev *dev) if (!mt76x02_wait_for_mac(&dev->mt76)) return -ETIMEDOUT; - mt76_wr(dev, MT_HEADER_TRANS_CTRL_REG, 0); - mt76_wr(dev, MT_TSO_CTRL, 0); - mt76x2u_init_dma(dev); err = mt76x2u_mcu_init(dev); @@ -207,21 +194,18 @@ int mt76x2u_init_hardware(struct mt76x02_dev *dev) mt76x02_mac_setaddr(dev, dev->mt76.eeprom.data + MT_EE_MAC_ADDR); dev->mt76.rxfilter = mt76_rr(dev, MT_RX_FILTR_CFG); - mt76x2u_init_beacon_offsets(dev); - if (!mt76x02_wait_for_txrx_idle(&dev->mt76)) return -ETIMEDOUT; /* reset wcid table */ - for (i = 0; i < 254; i++) - mt76_wr_copy(dev, MT_WCID_ADDR(i), &addr, - sizeof(struct mt76_wcid_addr)); + for (i = 0; i < 256; i++) + mt76x02_mac_wcid_setup(dev, i, 0, NULL); /* reset shared key table and pairwise key table */ - for (i = 0; i < 4; i++) - mt76_wr(dev, MT_SKEY_MODE_BASE_0 + 4 * i, 0); - for (i = 0; i < 256; i++) - mt76_wr(dev, MT_WCID_ATTR(i), 1); + for (i = 0; i < 16; i++) { + for (k = 0; k < 4; k++) + mt76x02_mac_shared_key_setup(dev, i, k, NULL); + } mt76_clear(dev, MT_BEACON_TIME_CFG, MT_BEACON_TIME_CFG_TIMER_EN | @@ -245,11 +229,10 @@ int mt76x2u_init_hardware(struct mt76x02_dev *dev) int mt76x2u_register_device(struct mt76x02_dev *dev) { struct ieee80211_hw *hw = mt76_hw(dev); - struct wiphy *wiphy = hw->wiphy; int err; INIT_DELAYED_WORK(&dev->cal_work, mt76x2u_phy_calibrate); - mt76x2_init_device(dev); + mt76x02_init_device(dev); err = mt76x2u_init_eeprom(dev); if (err < 0) @@ -267,8 +250,6 @@ int mt76x2u_register_device(struct mt76x02_dev *dev) if (err < 0) goto fail; - wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION); - err = mt76_register_device(&dev->mt76, true, mt76x02_rates, ARRAY_SIZE(mt76x02_rates)); if (err) @@ -282,7 +263,7 @@ int mt76x2u_register_device(struct mt76x02_dev *dev) set_bit(MT76_STATE_INITIALIZED, &dev->mt76.state); - mt76x2_init_debugfs(dev); + mt76x02_init_debugfs(dev); mt76x2_init_txpower(dev, &dev->mt76.sband_2g.sband); mt76x2_init_txpower(dev, &dev->mt76.sband_5g.sband); @@ -297,12 +278,13 @@ void mt76x2u_stop_hw(struct mt76x02_dev *dev) { mt76u_stop_stat_wk(&dev->mt76); cancel_delayed_work_sync(&dev->cal_work); + cancel_delayed_work_sync(&dev->mac_work); mt76x2u_mac_stop(dev); } void mt76x2u_cleanup(struct mt76x02_dev *dev) { - mt76x02_mcu_set_radio_state(dev, false, false); + mt76x02_mcu_set_radio_state(dev, false); mt76x2u_stop_hw(dev); 
mt76u_queues_deinit(&dev->mt76); mt76u_mcu_deinit(&dev->mt76); diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c index 1971a1b00038..2b48cc51a30d 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c @@ -27,6 +27,8 @@ static int mt76x2u_start(struct ieee80211_hw *hw) if (ret) goto out; + ieee80211_queue_delayed_work(mt76_hw(dev), &dev->mac_work, + MT_CALIBRATE_INTERVAL); set_bit(MT76_STATE_RUNNING, &dev->mt76.state); out: @@ -48,11 +50,12 @@ static int mt76x2u_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) { struct mt76x02_dev *dev = hw->priv; + unsigned int idx = 8; if (!ether_addr_equal(dev->mt76.macaddr, vif->addr)) mt76x02_mac_setaddr(dev, vif->addr); - mt76x02_vif_init(dev, vif, 0); + mt76x02_vif_init(dev, vif, idx); return 0; } @@ -81,29 +84,6 @@ mt76x2u_set_channel(struct mt76x02_dev *dev, return err; } -static void -mt76x2u_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - struct ieee80211_bss_conf *info, u32 changed) -{ - struct mt76x02_dev *dev = hw->priv; - - mutex_lock(&dev->mt76.mutex); - - if (changed & BSS_CHANGED_ASSOC) { - mt76x2u_phy_channel_calibrate(dev); - mt76x2_apply_gain_adj(dev); - } - - if (changed & BSS_CHANGED_BSSID) { - mt76_wr(dev, MT_MAC_BSSID_DW0, - get_unaligned_le32(info->bssid)); - mt76_wr(dev, MT_MAC_BSSID_DW1, - get_unaligned_le16(info->bssid + 4)); - } - - mutex_unlock(&dev->mt76.mutex); -} - static int mt76x2u_config(struct ieee80211_hw *hw, u32 changed) { @@ -141,39 +121,22 @@ mt76x2u_config(struct ieee80211_hw *hw, u32 changed) return err; } -static void -mt76x2u_sw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif, - const u8 *mac) -{ - struct mt76x02_dev *dev = hw->priv; - - set_bit(MT76_SCANNING, &dev->mt76.state); -} - -static void -mt76x2u_sw_scan_complete(struct ieee80211_hw *hw, struct ieee80211_vif *vif) -{ - struct mt76x02_dev *dev = hw->priv; - - clear_bit(MT76_SCANNING, &dev->mt76.state); -} - const struct ieee80211_ops mt76x2u_ops = { .tx = mt76x02_tx, .start = mt76x2u_start, .stop = mt76x2u_stop, .add_interface = mt76x2u_add_interface, .remove_interface = mt76x02_remove_interface, - .sta_add = mt76x02_sta_add, - .sta_remove = mt76x02_sta_remove, + .sta_state = mt76_sta_state, .set_key = mt76x02_set_key, .ampdu_action = mt76x02_ampdu_action, .config = mt76x2u_config, .wake_tx_queue = mt76_wake_tx_queue, - .bss_info_changed = mt76x2u_bss_info_changed, + .bss_info_changed = mt76x02_bss_info_changed, .configure_filter = mt76x02_configure_filter, .conf_tx = mt76x02_conf_tx, - .sw_scan_start = mt76x2u_sw_scan, - .sw_scan_complete = mt76x2u_sw_scan_complete, + .sw_scan_start = mt76x02_sw_scan, + .sw_scan_complete = mt76x02_sw_scan_complete, .sta_rate_tbl_update = mt76x02_sta_rate_tbl_update, + .get_txpower = mt76x02_get_txpower, }; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_mcu.c index 3f1e558e5e6d..45a95ee3a415 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_mcu.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_mcu.c @@ -29,30 +29,6 @@ #define MT76U_MCU_DLM_OFFSET 0x110000 #define MT76U_MCU_ROM_PATCH_OFFSET 0x90000 -int mt76x2u_mcu_set_dynamic_vga(struct mt76x02_dev *dev, u8 channel, bool ap, - bool ext, int rssi, u32 false_cca) -{ - struct { - __le32 channel; - __le32 rssi_val; - __le32 false_cca_val; - } __packed __aligned(4) msg = { - .rssi_val = cpu_to_le32(rssi), - 
.false_cca_val = cpu_to_le32(false_cca), - }; - struct sk_buff *skb; - u32 val = channel; - - if (ap) - val |= BIT(31); - if (ext) - val |= BIT(30); - msg.channel = cpu_to_le32(val); - - skb = mt76_mcu_msg_alloc(dev, &msg, sizeof(msg)); - return mt76_mcu_send_msg(dev, skb, CMD_DYNC_VGA_OP, true); -} - static void mt76x2u_mcu_load_ivb(struct mt76x02_dev *dev) { mt76u_vendor_request(&dev->mt76, MT_VEND_DEV_MODE, @@ -117,7 +93,7 @@ static int mt76x2u_mcu_load_rom_patch(struct mt76x02_dev *dev) return 0; } - err = request_firmware(&fw, MT7662U_ROM_PATCH, dev->mt76.dev); + err = request_firmware(&fw, MT7662_ROM_PATCH, dev->mt76.dev); if (err < 0) return err; @@ -183,7 +159,7 @@ static int mt76x2u_mcu_load_firmware(struct mt76x02_dev *dev) int err, len, ilm_len, dlm_len; const struct firmware *fw; - err = request_firmware(&fw, MT7662U_FIRMWARE, dev->mt76.dev); + err = request_firmware(&fw, MT7662_FIRMWARE, dev->mt76.dev); if (err < 0) return err; @@ -282,9 +258,9 @@ int mt76x2u_mcu_init(struct mt76x02_dev *dev) { int err; - err = mt76x02_mcu_function_select(dev, Q_SELECT, 1, false); + err = mt76x02_mcu_function_select(dev, Q_SELECT, 1); if (err < 0) return err; - return mt76x02_mcu_set_radio_state(dev, true, false); + return mt76x02_mcu_set_radio_state(dev, true); } diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c index ca96ba60510e..11d414d86c68 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c @@ -18,63 +18,35 @@ #include "eeprom.h" #include "../mt76x02_phy.h" -void mt76x2u_phy_channel_calibrate(struct mt76x02_dev *dev) +static void +mt76x2u_phy_channel_calibrate(struct mt76x02_dev *dev, bool mac_stopped) { struct ieee80211_channel *chan = dev->mt76.chandef.chan; bool is_5ghz = chan->band == NL80211_BAND_5GHZ; + if (dev->cal.channel_cal_done) + return; + if (mt76x2_channel_silent(dev)) return; - mt76x2u_mac_stop(dev); + if (!mac_stopped) + mt76x2u_mac_stop(dev); if (is_5ghz) - mt76x02_mcu_calibrate(dev, MCU_CAL_LC, 0, false); - - mt76x02_mcu_calibrate(dev, MCU_CAL_TX_LOFT, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_TXIQ, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_RXIQC_FI, is_5ghz, false); - mt76x02_mcu_calibrate(dev, MCU_CAL_TEMP_SENSOR, 0, false); - - mt76x2u_mac_resume(dev); -} + mt76x02_mcu_calibrate(dev, MCU_CAL_LC, 0); -static void -mt76x2u_phy_update_channel_gain(struct mt76x02_dev *dev) -{ - u8 channel = dev->mt76.chandef.chan->hw_value; - int freq, freq1; - u32 false_cca; - - freq = dev->mt76.chandef.chan->center_freq; - freq1 = dev->mt76.chandef.center_freq1; + mt76x02_mcu_calibrate(dev, MCU_CAL_TX_LOFT, is_5ghz); + mt76x02_mcu_calibrate(dev, MCU_CAL_TXIQ, is_5ghz); + mt76x02_mcu_calibrate(dev, MCU_CAL_RXIQC_FI, is_5ghz); + mt76x02_mcu_calibrate(dev, MCU_CAL_TEMP_SENSOR, 0); + mt76x02_mcu_calibrate(dev, MCU_CAL_TX_SHAPING, 0); - switch (dev->mt76.chandef.width) { - case NL80211_CHAN_WIDTH_80: { - int ch_group_index; + if (!mac_stopped) + mt76x2u_mac_resume(dev); + mt76x2_apply_gain_adj(dev); - ch_group_index = (freq - freq1 + 30) / 20; - if (WARN_ON(ch_group_index < 0 || ch_group_index > 3)) - ch_group_index = 0; - channel += 6 - ch_group_index * 4; - break; - } - case NL80211_CHAN_WIDTH_40: - if (freq1 > freq) - channel += 2; - else - channel -= 2; - break; - default: - break; - } - - dev->cal.avg_rssi_all = mt76x02_phy_get_min_avg_rssi(dev); - false_cca = FIELD_GET(MT_RX_STAT_1_CCA_ERRORS, - mt76_rr(dev, MT_RX_STAT_1)); 
- - mt76x2u_mcu_set_dynamic_vga(dev, channel, false, false, - dev->cal.avg_rssi_all, false_cca); + dev->cal.channel_cal_done = true; } void mt76x2u_phy_calibrate(struct work_struct *work) @@ -82,8 +54,9 @@ void mt76x2u_phy_calibrate(struct work_struct *work) struct mt76x02_dev *dev; dev = container_of(work, struct mt76x02_dev, cal_work.work); - mt76x2_phy_tssi_compensate(dev, false); - mt76x2u_phy_update_channel_gain(dev); + mt76x2u_phy_channel_calibrate(dev, false); + mt76x2_phy_tssi_compensate(dev); + mt76x2_phy_update_channel_gain(dev); ieee80211_queue_delayed_work(mt76_hw(dev), &dev->cal_work, MT_CALIBRATE_INTERVAL); @@ -180,14 +153,14 @@ int mt76x2u_phy_set_channel(struct mt76x02_dev *dev, u8 val = mt76x02_eeprom_get(dev, MT_EE_BT_RCAL_RESULT); if (val != 0xff) - mt76x02_mcu_calibrate(dev, MCU_CAL_R, 0, false); + mt76x02_mcu_calibrate(dev, MCU_CAL_R, 0); } - mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, channel, false); + mt76x02_mcu_calibrate(dev, MCU_CAL_RXDCOC, channel); /* Rx LPF calibration */ if (!dev->cal.init_cal_done) - mt76x02_mcu_calibrate(dev, MCU_CAL_RC, 0, false); + mt76x02_mcu_calibrate(dev, MCU_CAL_RC, 0); dev->cal.init_cal_done = true; mt76_wr(dev, MT_BBP(AGC, 61), 0xff64a4e2); @@ -202,6 +175,9 @@ int mt76x2u_phy_set_channel(struct mt76x02_dev *dev, if (scan) return 0; + mt76x2u_phy_channel_calibrate(dev, true); + mt76x02_init_agc_gain(dev); + if (mt76x2_tssi_enabled(dev)) { /* init default values for temp compensation */ mt76_rmw_field(dev, MT_TX_ALC_CFG_1, MT_TX_ALC_CFG_1_TEMP_COMP, @@ -219,7 +195,7 @@ int mt76x2u_phy_set_channel(struct mt76x02_dev *dev, flag |= BIT(0); if (mt76x02_ext_pa_enabled(dev, chan->band)) flag |= BIT(8); - mt76x02_mcu_calibrate(dev, MCU_CAL_TSSI, flag, false); + mt76x02_mcu_calibrate(dev, MCU_CAL_TSSI, flag); dev->cal.tssi_cal_done = true; } } diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c index aa426b838ffa..7b711058807d 100644 --- a/drivers/net/wireless/mediatek/mt76/tx.c +++ b/drivers/net/wireless/mediatek/mt76/tx.c @@ -103,6 +103,157 @@ mt76_check_agg_ssn(struct mt76_txq *mtxq, struct sk_buff *skb) mtxq->agg_ssn = le16_to_cpu(hdr->seq_ctrl) + 0x10; } +void +mt76_tx_status_lock(struct mt76_dev *dev, struct sk_buff_head *list) + __acquires(&dev->status_list.lock) +{ + __skb_queue_head_init(list); + spin_lock_bh(&dev->status_list.lock); + __acquire(&dev->status_list.lock); +} +EXPORT_SYMBOL_GPL(mt76_tx_status_lock); + +void +mt76_tx_status_unlock(struct mt76_dev *dev, struct sk_buff_head *list) + __releases(&dev->status_list.unlock) +{ + struct sk_buff *skb; + + spin_unlock_bh(&dev->status_list.lock); + __release(&dev->status_list.unlock); + + while ((skb = __skb_dequeue(list)) != NULL) + ieee80211_tx_status(dev->hw, skb); +} +EXPORT_SYMBOL_GPL(mt76_tx_status_unlock); + +static void +__mt76_tx_status_skb_done(struct mt76_dev *dev, struct sk_buff *skb, u8 flags, + struct sk_buff_head *list) +{ + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + struct mt76_tx_cb *cb = mt76_tx_skb_cb(skb); + u8 done = MT_TX_CB_DMA_DONE | MT_TX_CB_TXS_DONE; + + flags |= cb->flags; + cb->flags = flags; + + if ((flags & done) != done) + return; + + __skb_unlink(skb, &dev->status_list); + + /* Tx status can be unreliable. 
if it fails, mark the frame as ACKed */ + if (flags & MT_TX_CB_TXS_FAILED) { + ieee80211_tx_info_clear_status(info); + info->status.rates[0].idx = -1; + info->flags |= IEEE80211_TX_STAT_ACK; + } + + __skb_queue_tail(list, skb); +} + +void +mt76_tx_status_skb_done(struct mt76_dev *dev, struct sk_buff *skb, + struct sk_buff_head *list) +{ + __mt76_tx_status_skb_done(dev, skb, MT_TX_CB_TXS_DONE, list); +} +EXPORT_SYMBOL_GPL(mt76_tx_status_skb_done); + +int +mt76_tx_status_skb_add(struct mt76_dev *dev, struct mt76_wcid *wcid, + struct sk_buff *skb) +{ + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + struct mt76_tx_cb *cb = mt76_tx_skb_cb(skb); + int pid; + + if (!wcid) + return 0; + + if (info->flags & IEEE80211_TX_CTL_NO_ACK) + return MT_PACKET_ID_NO_ACK; + + if (!(info->flags & (IEEE80211_TX_CTL_REQ_TX_STATUS | + IEEE80211_TX_CTL_RATE_CTRL_PROBE))) + return 0; + + spin_lock_bh(&dev->status_list.lock); + + memset(cb, 0, sizeof(*cb)); + wcid->packet_id = (wcid->packet_id + 1) & MT_PACKET_ID_MASK; + if (!wcid->packet_id || wcid->packet_id == MT_PACKET_ID_NO_ACK) + wcid->packet_id = 1; + + pid = wcid->packet_id; + cb->wcid = wcid->idx; + cb->pktid = pid; + cb->jiffies = jiffies; + + __skb_queue_tail(&dev->status_list, skb); + spin_unlock_bh(&dev->status_list.lock); + + return pid; +} +EXPORT_SYMBOL_GPL(mt76_tx_status_skb_add); + +struct sk_buff * +mt76_tx_status_skb_get(struct mt76_dev *dev, struct mt76_wcid *wcid, int pktid, + struct sk_buff_head *list) +{ + struct sk_buff *skb, *tmp; + + if (pktid == MT_PACKET_ID_NO_ACK) + return NULL; + + skb_queue_walk_safe(&dev->status_list, skb, tmp) { + struct mt76_tx_cb *cb = mt76_tx_skb_cb(skb); + + if (wcid && cb->wcid != wcid->idx) + continue; + + if (cb->pktid == pktid) + return skb; + + if (!pktid && + !time_after(jiffies, cb->jiffies + MT_TX_STATUS_SKB_TIMEOUT)) + continue; + + __mt76_tx_status_skb_done(dev, skb, MT_TX_CB_TXS_FAILED | + MT_TX_CB_TXS_DONE, list); + } + + return NULL; +} +EXPORT_SYMBOL_GPL(mt76_tx_status_skb_get); + +void +mt76_tx_status_check(struct mt76_dev *dev, struct mt76_wcid *wcid, bool flush) +{ + struct sk_buff_head list; + + mt76_tx_status_lock(dev, &list); + mt76_tx_status_skb_get(dev, wcid, flush ? 
-1 : 0, &list); + mt76_tx_status_unlock(dev, &list); +} +EXPORT_SYMBOL_GPL(mt76_tx_status_check); + +void mt76_tx_complete_skb(struct mt76_dev *dev, struct sk_buff *skb) +{ + struct sk_buff_head list; + + if (!skb->prev) { + ieee80211_free_txskb(dev->hw, skb); + return; + } + + mt76_tx_status_lock(dev, &list); + __mt76_tx_status_skb_done(dev, skb, MT_TX_CB_DMA_DONE, &list); + mt76_tx_status_unlock(dev, &list); +} +EXPORT_SYMBOL_GPL(mt76_tx_complete_skb); + void mt76_tx(struct mt76_dev *dev, struct ieee80211_sta *sta, struct mt76_wcid *wcid, struct sk_buff *skb) @@ -444,7 +595,7 @@ void mt76_txq_remove(struct mt76_dev *dev, struct ieee80211_txq *txq) spin_lock_bh(&hwq->lock); if (!list_empty(&mtxq->list)) - list_del(&mtxq->list); + list_del_init(&mtxq->list); spin_unlock_bh(&hwq->lock); while ((skb = skb_dequeue(&mtxq->retry_q)) != NULL) diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c index 5f0faf07c346..b061263453d4 100644 --- a/drivers/net/wireless/mediatek/mt76/usb.c +++ b/drivers/net/wireless/mediatek/mt76/usb.c @@ -100,7 +100,7 @@ static u32 __mt76u_rr(struct mt76_dev *dev, u32 addr) return data; } -u32 mt76u_rr(struct mt76_dev *dev, u32 addr) +static u32 mt76u_rr(struct mt76_dev *dev, u32 addr) { u32 ret; @@ -110,7 +110,6 @@ u32 mt76u_rr(struct mt76_dev *dev, u32 addr) return ret; } -EXPORT_SYMBOL_GPL(mt76u_rr); /* should be called with usb_ctrl_mtx locked */ static void __mt76u_wr(struct mt76_dev *dev, u32 addr, u32 val) @@ -136,13 +135,12 @@ static void __mt76u_wr(struct mt76_dev *dev, u32 addr, u32 val) trace_usb_reg_wr(dev, addr, val); } -void mt76u_wr(struct mt76_dev *dev, u32 addr, u32 val) +static void mt76u_wr(struct mt76_dev *dev, u32 addr, u32 val) { mutex_lock(&dev->usb.usb_ctrl_mtx); __mt76u_wr(dev, addr, val); mutex_unlock(&dev->usb.usb_ctrl_mtx); } -EXPORT_SYMBOL_GPL(mt76u_wr); static u32 mt76u_rmw(struct mt76_dev *dev, u32 addr, u32 mask, u32 val) @@ -356,6 +354,7 @@ int mt76u_submit_buf(struct mt76_dev *dev, int dir, int index, usb_fill_bulk_urb(buf->urb, udev, pipe, NULL, buf->len, complete_fn, context); + trace_submit_urb(dev, buf->urb); return usb_submit_urb(buf->urb, gfp); } @@ -442,6 +441,8 @@ static void mt76u_complete_rx(struct urb *urb) struct mt76_queue *q = &dev->q_rx[MT_RXQ_MAIN]; unsigned long flags; + trace_rx_urb(dev, urb); + switch (urb->status) { case -ECONNRESET: case -ESHUTDOWN: @@ -699,6 +700,7 @@ mt76u_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q, if (q->queued == q->ndesc) return -ENOSPC; + skb->prev = skb->next = NULL; err = dev->drv->tx_prepare_skb(dev, NULL, skb, q, wcid, sta, NULL); if (err < 0) return err; @@ -728,6 +730,8 @@ static void mt76u_tx_kick(struct mt76_dev *dev, struct mt76_queue *q) while (q->first != q->tail) { buf = &q->entry[q->first].ubuf; + + trace_submit_urb(dev, buf->urb); err = usb_submit_urb(buf->urb, GFP_ATOMIC); if (err < 0) { if (err == -ENODEV) diff --git a/drivers/net/wireless/mediatek/mt76/usb_trace.h b/drivers/net/wireless/mediatek/mt76/usb_trace.h index 52db7012304a..b56c32343eb1 100644 --- a/drivers/net/wireless/mediatek/mt76/usb_trace.h +++ b/drivers/net/wireless/mediatek/mt76/usb_trace.h @@ -26,12 +26,12 @@ #define MAXNAME 32 #define DEV_ENTRY __array(char, wiphy_name, 32) #define DEV_ASSIGN strlcpy(__entry->wiphy_name, wiphy_name(dev->hw->wiphy), MAXNAME) -#define DEV_PR_FMT "%s" +#define DEV_PR_FMT "%s " #define DEV_PR_ARG __entry->wiphy_name #define REG_ENTRY __field(u32, reg) __field(u32, val) #define REG_ASSIGN __entry->reg = reg; __entry->val = 
val -#define REG_PR_FMT " %04x=%08x" +#define REG_PR_FMT "reg:0x%04x=0x%08x" #define REG_PR_ARG __entry->reg, __entry->val DECLARE_EVENT_CLASS(dev_reg_evt, @@ -61,6 +61,31 @@ DEFINE_EVENT(dev_reg_evt, usb_reg_wr, TP_ARGS(dev, reg, val) ); +DECLARE_EVENT_CLASS(urb_transfer, + TP_PROTO(struct mt76_dev *dev, struct urb *u), + TP_ARGS(dev, u), + TP_STRUCT__entry( + DEV_ENTRY __field(unsigned, pipe) __field(u32, len) + ), + TP_fast_assign( + DEV_ASSIGN; + __entry->pipe = u->pipe; + __entry->len = u->transfer_buffer_length; + ), + TP_printk(DEV_PR_FMT "p:%08x len:%u", + DEV_PR_ARG, __entry->pipe, __entry->len) +); + +DEFINE_EVENT(urb_transfer, submit_urb, + TP_PROTO(struct mt76_dev *dev, struct urb *u), + TP_ARGS(dev, u) +); + +DEFINE_EVENT(urb_transfer, rx_urb, + TP_PROTO(struct mt76_dev *dev, struct urb *u), + TP_ARGS(dev, u) +); + #endif #undef TRACE_INCLUDE_PATH diff --git a/drivers/net/wireless/quantenna/qtnfmac/Kconfig b/drivers/net/wireless/quantenna/qtnfmac/Kconfig index b8c12a5f16b4..6cf5202c3666 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/Kconfig +++ b/drivers/net/wireless/quantenna/qtnfmac/Kconfig @@ -1,11 +1,11 @@ config QTNFMAC tristate - depends on QTNFMAC_PEARL_PCIE - default m if QTNFMAC_PEARL_PCIE=m - default y if QTNFMAC_PEARL_PCIE=y + depends on QTNFMAC_PCIE + default m if QTNFMAC_PCIE=m + default y if QTNFMAC_PCIE=y -config QTNFMAC_PEARL_PCIE - tristate "Quantenna QSR10g PCIe support" +config QTNFMAC_PCIE + tristate "Quantenna QSR1000/QSR2000/QSR10g PCIe support" default n depends on PCI && CFG80211 select QTNFMAC @@ -13,7 +13,8 @@ config QTNFMAC_PEARL_PCIE select CRC32 help This option adds support for wireless adapters based on Quantenna - 802.11ac QSR10g (aka Pearl) FullMAC chipset running over PCIe. + 802.11ac QSR10g (aka Pearl) and QSR1000/QSR2000 (aka Topaz) + FullMAC chipsets running over PCIe. If you choose to build it as a module, two modules will be built: - qtnfmac.ko and qtnfmac_pearl_pcie.ko. + qtnfmac.ko and qtnfmac_pcie.ko. 
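The Kconfig change above folds the Pearl-only option into a single QTNFMAC_PCIE option covering both the QSR10g (Pearl) and QSR1000/QSR2000 (Topaz) chipsets. The unified qtnfmac_pcie module then selects the chip-specific back end at probe time by reading the chip ID from the mapped SYSCTL BAR. A condensed sketch of that dispatch, taken from the qtnf_pcie_probe() hunk in pcie.c further below (BAR mapping and error handling elided):

	/* Dispatch to a chip-specific bus allocator based on the chip ID
	 * read from the SYSCTL BAR (condensed from the pcie.c hunk below).
	 */
	chipid = qtnf_chip_id_get(sysctl_bar);

	switch (chipid) {
	case QTN_CHIP_ID_PEARL:
	case QTN_CHIP_ID_PEARL_B:
	case QTN_CHIP_ID_PEARL_C:
		bus = qtnf_pcie_pearl_alloc(pdev);	/* QSR10g (Pearl) */
		break;
	case QTN_CHIP_ID_TOPAZ:
		bus = qtnf_pcie_topaz_alloc(pdev);	/* QSR1000/QSR2000 (Topaz) */
		break;
	default:
		pr_err("unsupported chip ID 0x%x\n", chipid);
		return -ENOTSUPP;
	}

Each allocator fills in the common qtnf_pcie_bus_priv callbacks (probe_cb, remove_cb, suspend_cb, resume_cb, dma_mask_get_cb), so the PCI boilerplate (device ID table, PM hooks, module init/exit) now lives once in pcie.c rather than per chip, which is why the Pearl-specific driver registration is deleted from pearl_pcie.c below.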
diff --git a/drivers/net/wireless/quantenna/qtnfmac/Makefile b/drivers/net/wireless/quantenna/qtnfmac/Makefile
index 17cd7adb4109..40dffbd2ea47 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/Makefile
+++ b/drivers/net/wireless/quantenna/qtnfmac/Makefile
@@ -19,11 +19,12 @@ qtnfmac-objs += \

 #

-obj-$(CONFIG_QTNFMAC_PEARL_PCIE) += qtnfmac_pearl_pcie.o
+obj-$(CONFIG_QTNFMAC_PCIE) += qtnfmac_pcie.o

-qtnfmac_pearl_pcie-objs += \
+qtnfmac_pcie-objs += \
	shm_ipc.o \
	pcie/pcie.o \
-	pcie/pearl_pcie.o
+	pcie/pearl_pcie.o \
+	pcie/topaz_pcie.o

-qtnfmac_pearl_pcie-$(CONFIG_DEBUG_FS) += debug.o
+qtnfmac_pcie-$(CONFIG_DEBUG_FS) += debug.o
diff --git a/drivers/net/wireless/quantenna/qtnfmac/commands.c b/drivers/net/wireless/quantenna/qtnfmac/commands.c
index bfdc1ad30c13..659e7649fe22 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/commands.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/commands.c
@@ -84,7 +84,7 @@ static int qtnf_cmd_send_with_reply(struct qtnf_bus *bus,
				    size_t *var_resp_size)
 {
	struct qlink_cmd *cmd;
-	const struct qlink_resp *resp;
+	struct qlink_resp *resp = NULL;
	struct sk_buff *resp_skb = NULL;
	u16 cmd_id;
	u8 mac_id;
@@ -113,7 +113,12 @@ static int qtnf_cmd_send_with_reply(struct qtnf_bus *bus,
	if (ret)
		goto out;

-	resp = (const struct qlink_resp *)resp_skb->data;
+	if (WARN_ON(!resp_skb || !resp_skb->data)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	resp = (struct qlink_resp *)resp_skb->data;
	ret = qtnf_cmd_check_reply_header(resp, cmd_id, mac_id, vif_id,
					  const_resp_size);
	if (ret)
@@ -686,7 +691,7 @@ int qtnf_cmd_get_sta_info(struct qtnf_vif *vif, const u8 *sta_mac,
	struct sk_buff *cmd_skb, *resp_skb = NULL;
	struct qlink_cmd_get_sta_info *cmd;
	const struct qlink_resp_get_sta_info *resp;
-	size_t var_resp_len;
+	size_t var_resp_len = 0;
	int ret = 0;

	cmd_skb = qtnf_cmd_alloc_new_cmdskb(vif->mac->macid, vif->vifid,
@@ -1650,7 +1655,7 @@ int qtnf_cmd_get_mac_info(struct qtnf_wmac *mac)
 {
	struct sk_buff *cmd_skb, *resp_skb = NULL;
	const struct qlink_resp_get_mac_info *resp;
-	size_t var_data_len;
+	size_t var_data_len = 0;
	int ret = 0;

	cmd_skb = qtnf_cmd_alloc_new_cmdskb(mac->macid, QLINK_VIFID_RSVD,
@@ -1680,8 +1685,8 @@ int qtnf_cmd_get_hw_info(struct qtnf_bus *bus)
 {
	struct sk_buff *cmd_skb, *resp_skb = NULL;
	const struct qlink_resp_get_hw_info *resp;
+	size_t info_len = 0;
	int ret = 0;
-	size_t info_len;

	cmd_skb = qtnf_cmd_alloc_new_cmdskb(QLINK_MACID_RSVD, QLINK_VIFID_RSVD,
					    QLINK_CMD_GET_HW_INFO,
@@ -1709,9 +1714,9 @@ int qtnf_cmd_band_info_get(struct qtnf_wmac *mac,
			   struct ieee80211_supported_band *band)
 {
	struct sk_buff *cmd_skb, *resp_skb = NULL;
-	size_t info_len;
	struct qlink_cmd_band_info_get *cmd;
	struct qlink_resp_band_info_get *resp;
+	size_t info_len = 0;
	int ret = 0;
	u8 qband;

@@ -1764,8 +1769,8 @@ out:
 int qtnf_cmd_send_get_phy_params(struct qtnf_wmac *mac)
 {
	struct sk_buff *cmd_skb, *resp_skb = NULL;
-	size_t response_size;
	struct qlink_resp_phy_params *resp;
+	size_t response_size = 0;
	int ret = 0;

	cmd_skb = qtnf_cmd_alloc_new_cmdskb(mac->macid, 0,
@@ -2431,7 +2436,7 @@ int qtnf_cmd_get_chan_stats(struct qtnf_wmac *mac, u16 channel,
	struct sk_buff *cmd_skb, *resp_skb = NULL;
	struct qlink_cmd_get_chan_stats *cmd;
	struct qlink_resp_get_chan_stats *resp;
-	size_t var_data_len;
+	size_t var_data_len = 0;
	int ret = 0;

	cmd_skb = qtnf_cmd_alloc_new_cmdskb(mac->macid, QLINK_VIFID_RSVD,
diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
index 16795dbe475b..c3a32effa6f0 100644
--- 
a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0+ /* Copyright (c) 2018 Quantenna Communications, Inc. All rights reserved. */ +#include #include #include #include @@ -15,14 +16,37 @@ #include "shm_ipc.h" #include "core.h" #include "debug.h" - -#undef pr_fmt -#define pr_fmt(fmt) "qtnf_pcie: %s: " fmt, __func__ +#include "util.h" +#include "qtn_hw_ids.h" #define QTN_SYSCTL_BAR 0 #define QTN_SHMEM_BAR 2 #define QTN_DMA_BAR 3 +#define QTN_PCIE_MAX_FW_BUFSZ (1 * 1024 * 1024) + +static bool use_msi = true; +module_param(use_msi, bool, 0644); +MODULE_PARM_DESC(use_msi, "set 0 to use legacy interrupt"); + +static unsigned int tx_bd_size_param; +module_param(tx_bd_size_param, uint, 0644); +MODULE_PARM_DESC(tx_bd_size_param, "Tx descriptors queue size"); + +static unsigned int rx_bd_size_param = 256; +module_param(rx_bd_size_param, uint, 0644); +MODULE_PARM_DESC(rx_bd_size_param, "Rx descriptors queue size"); + +static u8 flashboot = 1; +module_param(flashboot, byte, 0644); +MODULE_PARM_DESC(flashboot, "set to 0 to use FW binary file on FS"); + +static unsigned int fw_blksize_param = QTN_PCIE_MAX_FW_BUFSZ; +module_param(fw_blksize_param, uint, 0644); +MODULE_PARM_DESC(fw_blksize_param, "firmware loading block size in bytes"); + +#define DRV_NAME "qtnfmac_pcie" + int qtnf_pcie_control_tx(struct qtnf_bus *bus, struct sk_buff *skb) { struct qtnf_pcie_bus_priv *priv = get_bus_priv(bus); @@ -58,7 +82,7 @@ int qtnf_pcie_alloc_skb_array(struct qtnf_pcie_bus_priv *priv) return 0; } -void qtnf_pcie_bringup_fw_async(struct qtnf_bus *bus) +static void qtnf_pcie_bringup_fw_async(struct qtnf_bus *bus) { struct qtnf_pcie_bus_priv *priv = get_bus_priv(bus); struct pci_dev *pdev = priv->pdev; @@ -72,7 +96,7 @@ static int qtnf_dbg_mps_show(struct seq_file *s, void *data) struct qtnf_bus *bus = dev_get_drvdata(s->private); struct qtnf_pcie_bus_priv *priv = get_bus_priv(bus); - seq_printf(s, "%d\n", priv->mps); + seq_printf(s, "%d\n", pcie_get_mps(priv->pdev)); return 0; } @@ -104,8 +128,7 @@ static int qtnf_dbg_shm_stats(struct seq_file *s, void *data) return 0; } -void qtnf_pcie_fw_boot_done(struct qtnf_bus *bus, bool boot_success, - const char *drv_name) +void qtnf_pcie_fw_boot_done(struct qtnf_bus *bus, bool boot_success) { struct qtnf_pcie_bus_priv *priv = get_bus_priv(bus); struct pci_dev *pdev = priv->pdev; @@ -122,7 +145,7 @@ void qtnf_pcie_fw_boot_done(struct qtnf_bus *bus, bool boot_success, } if (boot_success) { - qtnf_debugfs_init(bus, drv_name); + qtnf_debugfs_init(bus, DRV_NAME); qtnf_debugfs_add_entry(bus, "mps", qtnf_dbg_mps_show); qtnf_debugfs_add_entry(bus, "msi_enabled", qtnf_dbg_msi_show); qtnf_debugfs_add_entry(bus, "shm_stats", qtnf_dbg_shm_stats); @@ -133,9 +156,8 @@ void qtnf_pcie_fw_boot_done(struct qtnf_bus *bus, bool boot_success, put_device(&pdev->dev); } -static void qtnf_tune_pcie_mps(struct qtnf_pcie_bus_priv *priv) +static void qtnf_tune_pcie_mps(struct pci_dev *pdev) { - struct pci_dev *pdev = priv->pdev; struct pci_dev *parent; int mps_p, mps_o, mps_m, mps; int ret; @@ -163,12 +185,10 @@ static void qtnf_tune_pcie_mps(struct qtnf_pcie_bus_priv *priv) if (ret) { pr_err("failed to set mps to %d, keep using current %d\n", mps, mps_o); - priv->mps = mps_o; return; } pr_debug("set mps to %d (was %d, max %d)\n", mps, mps_o, mps_m); - priv->mps = mps; } static void qtnf_pcie_init_irq(struct qtnf_pcie_bus_priv *priv, bool use_msi) @@ -194,20 +214,20 @@ static void 
qtnf_pcie_init_irq(struct qtnf_pcie_bus_priv *priv, bool use_msi) } } -static void __iomem *qtnf_map_bar(struct qtnf_pcie_bus_priv *priv, u8 index) +static void __iomem *qtnf_map_bar(struct pci_dev *pdev, u8 index) { void __iomem *vaddr; dma_addr_t busaddr; size_t len; int ret; - ret = pcim_iomap_regions(priv->pdev, 1 << index, "qtnfmac_pcie"); + ret = pcim_iomap_regions(pdev, 1 << index, "qtnfmac_pcie"); if (ret) return IOMEM_ERR_PTR(ret); - busaddr = pci_resource_start(priv->pdev, index); - len = pci_resource_len(priv->pdev, index); - vaddr = pcim_iomap_table(priv->pdev)[index]; + busaddr = pci_resource_start(pdev, index); + len = pci_resource_len(pdev, index); + vaddr = pcim_iomap_table(pdev)[index]; if (!vaddr) return IOMEM_ERR_PTR(-ENOMEM); @@ -217,31 +237,6 @@ static void __iomem *qtnf_map_bar(struct qtnf_pcie_bus_priv *priv, u8 index) return vaddr; } -static int qtnf_pcie_init_memory(struct qtnf_pcie_bus_priv *priv) -{ - int ret = -ENOMEM; - - priv->sysctl_bar = qtnf_map_bar(priv, QTN_SYSCTL_BAR); - if (IS_ERR(priv->sysctl_bar)) { - pr_err("failed to map BAR%u\n", QTN_SYSCTL_BAR); - return ret; - } - - priv->dmareg_bar = qtnf_map_bar(priv, QTN_DMA_BAR); - if (IS_ERR(priv->dmareg_bar)) { - pr_err("failed to map BAR%u\n", QTN_DMA_BAR); - return ret; - } - - priv->epmem_bar = qtnf_map_bar(priv, QTN_SHMEM_BAR); - if (IS_ERR(priv->epmem_bar)) { - pr_err("failed to map BAR%u\n", QTN_SHMEM_BAR); - return ret; - } - - return 0; -} - static void qtnf_pcie_control_rx_callback(void *arg, const u8 __iomem *buf, size_t len) { @@ -282,27 +277,83 @@ void qtnf_pcie_init_shm_ipc(struct qtnf_pcie_bus_priv *priv, ipc_int, &rx_callback); } -int qtnf_pcie_probe(struct pci_dev *pdev, size_t priv_size, - const struct qtnf_bus_ops *bus_ops, u64 dma_mask, - bool use_msi) +static int qtnf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct qtnf_pcie_bus_priv *pcie_priv; struct qtnf_bus *bus; + void __iomem *sysctl_bar; + void __iomem *epmem_bar; + void __iomem *dmareg_bar; + unsigned int chipid; int ret; - bus = devm_kzalloc(&pdev->dev, - sizeof(*bus) + priv_size, GFP_KERNEL); + if (!pci_is_pcie(pdev)) { + pr_err("device %s is not PCI Express\n", pci_name(pdev)); + return -EIO; + } + + qtnf_tune_pcie_mps(pdev); + + ret = pcim_enable_device(pdev); + if (ret) { + pr_err("failed to init PCI device %x\n", pdev->device); + return ret; + } + + pci_set_master(pdev); + + sysctl_bar = qtnf_map_bar(pdev, QTN_SYSCTL_BAR); + if (IS_ERR(sysctl_bar)) { + pr_err("failed to map BAR%u\n", QTN_SYSCTL_BAR); + return ret; + } + + dmareg_bar = qtnf_map_bar(pdev, QTN_DMA_BAR); + if (IS_ERR(dmareg_bar)) { + pr_err("failed to map BAR%u\n", QTN_DMA_BAR); + return ret; + } + + epmem_bar = qtnf_map_bar(pdev, QTN_SHMEM_BAR); + if (IS_ERR(epmem_bar)) { + pr_err("failed to map BAR%u\n", QTN_SHMEM_BAR); + return ret; + } + + chipid = qtnf_chip_id_get(sysctl_bar); + + pr_info("identified device: %s\n", qtnf_chipid_to_string(chipid)); + + switch (chipid) { + case QTN_CHIP_ID_PEARL: + case QTN_CHIP_ID_PEARL_B: + case QTN_CHIP_ID_PEARL_C: + bus = qtnf_pcie_pearl_alloc(pdev); + break; + case QTN_CHIP_ID_TOPAZ: + bus = qtnf_pcie_topaz_alloc(pdev); + break; + default: + pr_err("unsupported chip ID 0x%x\n", chipid); + return -ENOTSUPP; + } + if (!bus) return -ENOMEM; pcie_priv = get_bus_priv(bus); - pci_set_drvdata(pdev, bus); - bus->bus_ops = bus_ops; bus->dev = &pdev->dev; bus->fw_state = QTNF_FW_STATE_RESET; pcie_priv->pdev = pdev; pcie_priv->tx_stopped = 0; + pcie_priv->rx_bd_num = rx_bd_size_param; + 
pcie_priv->flashboot = flashboot; + + if (fw_blksize_param > QTN_PCIE_MAX_FW_BUFSZ) + pcie_priv->fw_blksize = QTN_PCIE_MAX_FW_BUFSZ; + else + pcie_priv->fw_blksize = fw_blksize_param; mutex_init(&bus->bus_lock); spin_lock_init(&pcie_priv->tx_lock); @@ -317,53 +368,35 @@ int qtnf_pcie_probe(struct pci_dev *pdev, size_t priv_size, pcie_priv->workqueue = create_singlethread_workqueue("QTNF_PCIE"); if (!pcie_priv->workqueue) { pr_err("failed to alloc bus workqueue\n"); - ret = -ENODEV; - goto err_init; - } - - init_dummy_netdev(&bus->mux_dev); - - if (!pci_is_pcie(pdev)) { - pr_err("device %s is not PCI Express\n", pci_name(pdev)); - ret = -EIO; - goto err_base; - } - - qtnf_tune_pcie_mps(pcie_priv); - - ret = pcim_enable_device(pdev); - if (ret) { - pr_err("failed to init PCI device %x\n", pdev->device); - goto err_base; - } else { - pr_debug("successful init of PCI device %x\n", pdev->device); + return -ENODEV; } - ret = dma_set_mask_and_coherent(&pdev->dev, dma_mask); + ret = dma_set_mask_and_coherent(&pdev->dev, + pcie_priv->dma_mask_get_cb()); if (ret) { - pr_err("PCIE DMA coherent mask init failed\n"); - goto err_base; + pr_err("PCIE DMA coherent mask init failed 0x%llx\n", + pcie_priv->dma_mask_get_cb()); + goto error; } - pci_set_master(pdev); + init_dummy_netdev(&bus->mux_dev); qtnf_pcie_init_irq(pcie_priv, use_msi); - - ret = qtnf_pcie_init_memory(pcie_priv); - if (ret < 0) { - pr_err("PCIE memory init failed\n"); - goto err_base; - } - + pcie_priv->sysctl_bar = sysctl_bar; + pcie_priv->dmareg_bar = dmareg_bar; + pcie_priv->epmem_bar = epmem_bar; pci_save_state(pdev); + ret = pcie_priv->probe_cb(bus, tx_bd_size_param); + if (ret) + goto error; + + qtnf_pcie_bringup_fw_async(bus); return 0; -err_base: +error: flush_workqueue(pcie_priv->workqueue); destroy_workqueue(pcie_priv->workqueue); -err_init: pci_set_drvdata(pdev, NULL); - return ret; } @@ -373,8 +406,17 @@ static void qtnf_pcie_free_shm_ipc(struct qtnf_pcie_bus_priv *priv) qtnf_shm_ipc_free(&priv->shm_ipc_ep_out); } -void qtnf_pcie_remove(struct qtnf_bus *bus, struct qtnf_pcie_bus_priv *priv) +static void qtnf_pcie_remove(struct pci_dev *dev) { + struct qtnf_pcie_bus_priv *priv; + struct qtnf_bus *bus; + + bus = pci_get_drvdata(dev); + if (!bus) + return; + + priv = get_bus_priv(bus); + cancel_work_sync(&bus->fw_work); if (bus->fw_state == QTNF_FW_STATE_ACTIVE || @@ -388,5 +430,77 @@ void qtnf_pcie_remove(struct qtnf_bus *bus, struct qtnf_pcie_bus_priv *priv) qtnf_pcie_free_shm_ipc(priv); qtnf_debugfs_remove(bus); + priv->remove_cb(bus); pci_set_drvdata(priv->pdev, NULL); } + +#ifdef CONFIG_PM_SLEEP +static int qtnf_pcie_suspend(struct device *dev) +{ + struct qtnf_pcie_bus_priv *priv; + struct qtnf_bus *bus; + + bus = pci_get_drvdata(to_pci_dev(dev)); + if (!bus) + return -EFAULT; + + priv = get_bus_priv(bus); + return priv->suspend_cb(bus); +} + +static int qtnf_pcie_resume(struct device *dev) +{ + struct qtnf_pcie_bus_priv *priv; + struct qtnf_bus *bus; + + bus = pci_get_drvdata(to_pci_dev(dev)); + if (!bus) + return -EFAULT; + + priv = get_bus_priv(bus); + return priv->resume_cb(bus); +} + +/* Power Management Hooks */ +static SIMPLE_DEV_PM_OPS(qtnf_pcie_pm_ops, qtnf_pcie_suspend, + qtnf_pcie_resume); +#endif + +static const struct pci_device_id qtnf_pcie_devid_table[] = { + { + PCIE_VENDOR_ID_QUANTENNA, PCIE_DEVICE_ID_QSR, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + }, + { }, +}; + +MODULE_DEVICE_TABLE(pci, qtnf_pcie_devid_table); + +static struct pci_driver qtnf_pcie_drv_data = { + .name = DRV_NAME, + .id_table = 
qtnf_pcie_devid_table, + .probe = qtnf_pcie_probe, + .remove = qtnf_pcie_remove, +#ifdef CONFIG_PM_SLEEP + .driver = { + .pm = &qtnf_pcie_pm_ops, + }, +#endif +}; + +static int __init qtnf_pcie_register(void) +{ + return pci_register_driver(&qtnf_pcie_drv_data); +} + +static void __exit qtnf_pcie_exit(void) +{ + pci_unregister_driver(&qtnf_pcie_drv_data); +} + +module_init(qtnf_pcie_register); +module_exit(qtnf_pcie_exit); + +MODULE_AUTHOR("Quantenna Communications"); +MODULE_DESCRIPTION("Quantenna PCIe bus driver for 802.11 wireless LAN."); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie_priv.h b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie_priv.h index 5c70fb4c0f92..bbc074e1f34d 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie_priv.h +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pcie_priv.h @@ -23,9 +23,14 @@ struct qtnf_pcie_bus_priv { struct pci_dev *pdev; + int (*probe_cb)(struct qtnf_bus *bus, unsigned int tx_bd_size); + void (*remove_cb)(struct qtnf_bus *bus); + int (*suspend_cb)(struct qtnf_bus *bus); + int (*resume_cb)(struct qtnf_bus *bus); + u64 (*dma_mask_get_cb)(void); + spinlock_t tx_reclaim_lock; spinlock_t tx_lock; - int mps; struct workqueue_struct *workqueue; struct tasklet_struct reclaim_tq; @@ -43,6 +48,8 @@ struct qtnf_pcie_bus_priv { struct sk_buff **tx_skb; struct sk_buff **rx_skb; + unsigned int fw_blksize; + u32 rx_bd_w_index; u32 rx_bd_r_index; @@ -58,21 +65,18 @@ struct qtnf_pcie_bus_priv { u8 msi_enabled; u8 tx_stopped; + bool flashboot; }; int qtnf_pcie_control_tx(struct qtnf_bus *bus, struct sk_buff *skb); int qtnf_pcie_alloc_skb_array(struct qtnf_pcie_bus_priv *priv); -void qtnf_pcie_bringup_fw_async(struct qtnf_bus *bus); -void qtnf_pcie_fw_boot_done(struct qtnf_bus *bus, bool boot_success, - const char *drv_name); +void qtnf_pcie_fw_boot_done(struct qtnf_bus *bus, bool boot_success); void qtnf_pcie_init_shm_ipc(struct qtnf_pcie_bus_priv *priv, struct qtnf_shm_ipc_region __iomem *ipc_tx_reg, struct qtnf_shm_ipc_region __iomem *ipc_rx_reg, const struct qtnf_shm_ipc_int *ipc_int); -int qtnf_pcie_probe(struct pci_dev *pdev, size_t priv_size, - const struct qtnf_bus_ops *bus_ops, u64 dma_mask, - bool use_msi); -void qtnf_pcie_remove(struct qtnf_bus *bus, struct qtnf_pcie_bus_priv *priv); +struct qtnf_bus *qtnf_pcie_pearl_alloc(struct pci_dev *pdev); +struct qtnf_bus *qtnf_pcie_topaz_alloc(struct pci_dev *pdev); static inline void qtnf_non_posted_write(u32 val, void __iomem *basereg) { diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c index 95c7b95c6f8a..1f5facbb8905 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/pearl_pcie.c @@ -2,7 +2,6 @@ /* Copyright (c) 2018 Quantenna Communications */ #include -#include #include #include #include @@ -24,23 +23,7 @@ #include "shm_ipc.h" #include "debug.h" -static bool use_msi = true; -module_param(use_msi, bool, 0644); -MODULE_PARM_DESC(use_msi, "set 0 to use legacy interrupt"); - -static unsigned int tx_bd_size_param = 32; -module_param(tx_bd_size_param, uint, 0644); -MODULE_PARM_DESC(tx_bd_size_param, "Tx descriptors queue size, power of two"); - -static unsigned int rx_bd_size_param = 256; -module_param(rx_bd_size_param, uint, 0644); -MODULE_PARM_DESC(rx_bd_size_param, "Rx descriptors queue size, power of two"); - -static u8 flashboot = 1; -module_param(flashboot, byte, 0644); -MODULE_PARM_DESC(flashboot, "set to 0 
to use FW binary file on FS"); - -#define DRV_NAME "qtnfmac_pearl_pcie" +#define PEARL_TX_BD_SIZE_DEFAULT 32 struct qtnf_pearl_bda { __le16 bda_len; @@ -415,30 +398,28 @@ static int pearl_hhbm_init(struct qtnf_pcie_pearl_state *ps) return 0; } -static int qtnf_pcie_pearl_init_xfer(struct qtnf_pcie_pearl_state *ps) +static int qtnf_pcie_pearl_init_xfer(struct qtnf_pcie_pearl_state *ps, + unsigned int tx_bd_size) { struct qtnf_pcie_bus_priv *priv = &ps->base; int ret; u32 val; - priv->tx_bd_num = tx_bd_size_param; - priv->rx_bd_num = rx_bd_size_param; - priv->rx_bd_w_index = 0; - priv->rx_bd_r_index = 0; + if (tx_bd_size == 0) + tx_bd_size = PEARL_TX_BD_SIZE_DEFAULT; - if (!priv->tx_bd_num || !is_power_of_2(priv->tx_bd_num)) { - pr_err("tx_bd_size_param %u is not power of two\n", - priv->tx_bd_num); - return -EINVAL; - } + val = tx_bd_size * sizeof(struct qtnf_pearl_tx_bd); - val = priv->tx_bd_num * sizeof(struct qtnf_pearl_tx_bd); - if (val > PCIE_HHBM_MAX_SIZE) { - pr_err("tx_bd_size_param %u is too large\n", - priv->tx_bd_num); - return -EINVAL; + if (!is_power_of_2(tx_bd_size) || val > PCIE_HHBM_MAX_SIZE) { + pr_warn("bad tx_bd_size value %u\n", tx_bd_size); + priv->tx_bd_num = PEARL_TX_BD_SIZE_DEFAULT; + } else { + priv->tx_bd_num = tx_bd_size; } + priv->rx_bd_w_index = 0; + priv->rx_bd_r_index = 0; + if (!priv->rx_bd_num || !is_power_of_2(priv->rx_bd_num)) { pr_err("rx_bd_size_param %u is not power of two\n", priv->rx_bd_num); @@ -1006,7 +987,7 @@ static void qtnf_pearl_fw_work_handler(struct work_struct *work) const char *fwname = QTN_PCI_PEARL_FW_NAME; bool fw_boot_success = false; - if (flashboot) { + if (ps->base.flashboot) { state |= QTN_RC_FW_FLASHBOOT; } else { ret = request_firmware(&fw, fwname, &pdev->dev); @@ -1022,7 +1003,7 @@ static void qtnf_pearl_fw_work_handler(struct work_struct *work) QTN_FW_DL_TIMEOUT_MS)) { pr_err("card is not ready\n"); - if (!flashboot) + if (!ps->base.flashboot) release_firmware(fw); goto fw_load_exit; @@ -1030,7 +1011,7 @@ static void qtnf_pearl_fw_work_handler(struct work_struct *work) qtnf_clear_state(&ps->bda->bda_ep_state, QTN_EP_FW_LOADRDY); - if (flashboot) { + if (ps->base.flashboot) { pr_info("booting firmware from flash\n"); } else { @@ -1061,7 +1042,7 @@ static void qtnf_pearl_fw_work_handler(struct work_struct *work) fw_boot_success = true; fw_load_exit: - qtnf_pcie_fw_boot_done(bus, fw_boot_success, DRV_NAME); + qtnf_pcie_fw_boot_done(bus, fw_boot_success); if (fw_boot_success) { qtnf_debugfs_add_entry(bus, "hdp_stats", qtnf_dbg_hdp_stats); @@ -1077,74 +1058,34 @@ static void qtnf_pearl_reclaim_tasklet_fn(unsigned long data) qtnf_en_txdone_irq(ps); } -static int qtnf_pearl_check_chip_id(struct qtnf_pcie_pearl_state *ps) +static u64 qtnf_pearl_dma_mask_get(void) { - unsigned int chipid; - - chipid = qtnf_chip_id_get(ps->base.sysctl_bar); - - switch (chipid) { - case QTN_CHIP_ID_PEARL: - case QTN_CHIP_ID_PEARL_B: - case QTN_CHIP_ID_PEARL_C: - pr_info("chip ID is 0x%x\n", chipid); - break; - default: - pr_err("incorrect chip ID 0x%x\n", chipid); - return -ENODEV; - } - - return 0; +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + return DMA_BIT_MASK(64); +#else + return DMA_BIT_MASK(32); +#endif } -static int qtnf_pcie_pearl_probe(struct pci_dev *pdev, - const struct pci_device_id *id) +static int qtnf_pcie_pearl_probe(struct qtnf_bus *bus, unsigned int tx_bd_size) { struct qtnf_shm_ipc_int ipc_int; - struct qtnf_pcie_pearl_state *ps; - struct qtnf_bus *bus; + struct qtnf_pcie_pearl_state *ps = get_bus_priv(bus); + struct pci_dev *pdev = 
ps->base.pdev; int ret; - u64 dma_mask; - -#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT - dma_mask = DMA_BIT_MASK(64); -#else - dma_mask = DMA_BIT_MASK(32); -#endif - - ret = qtnf_pcie_probe(pdev, sizeof(*ps), &qtnf_pcie_pearl_bus_ops, - dma_mask, use_msi); - if (ret) - return ret; - - bus = pci_get_drvdata(pdev); - ps = get_bus_priv(bus); + bus->bus_ops = &qtnf_pcie_pearl_bus_ops; spin_lock_init(&ps->irq_lock); - - tasklet_init(&ps->base.reclaim_tq, qtnf_pearl_reclaim_tasklet_fn, - (unsigned long)ps); - netif_napi_add(&bus->mux_dev, &bus->mux_napi, - qtnf_pcie_pearl_rx_poll, 10); INIT_WORK(&bus->fw_work, qtnf_pearl_fw_work_handler); ps->pcie_reg_base = ps->base.dmareg_bar; ps->bda = ps->base.epmem_bar; writel(ps->base.msi_enabled, &ps->bda->bda_rc_msi_enabled); - ipc_int.fn = qtnf_pcie_pearl_ipc_gen_ep_int; - ipc_int.arg = ps; - qtnf_pcie_init_shm_ipc(&ps->base, &ps->bda->bda_shm_reg1, - &ps->bda->bda_shm_reg2, &ipc_int); - - ret = qtnf_pearl_check_chip_id(ps); - if (ret) - goto error; - - ret = qtnf_pcie_pearl_init_xfer(ps); + ret = qtnf_pcie_pearl_init_xfer(ps, tx_bd_size); if (ret) { pr_err("PCIE xfer init failed\n"); - goto error; + return ret; } /* init default irq settings */ @@ -1155,95 +1096,63 @@ static int qtnf_pcie_pearl_probe(struct pci_dev *pdev, ret = devm_request_irq(&pdev->dev, pdev->irq, &qtnf_pcie_pearl_interrupt, 0, - "qtnf_pcie_irq", (void *)bus); + "qtnf_pearl_irq", (void *)bus); if (ret) { pr_err("failed to request pcie irq %d\n", pdev->irq); - goto err_xfer; + qtnf_pearl_free_xfer_buffers(ps); + return ret; } - qtnf_pcie_bringup_fw_async(bus); - - return 0; + tasklet_init(&ps->base.reclaim_tq, qtnf_pearl_reclaim_tasklet_fn, + (unsigned long)ps); + netif_napi_add(&bus->mux_dev, &bus->mux_napi, + qtnf_pcie_pearl_rx_poll, 10); -err_xfer: - qtnf_pearl_free_xfer_buffers(ps); -error: - qtnf_pcie_remove(bus, &ps->base); + ipc_int.fn = qtnf_pcie_pearl_ipc_gen_ep_int; + ipc_int.arg = ps; + qtnf_pcie_init_shm_ipc(&ps->base, &ps->bda->bda_shm_reg1, + &ps->bda->bda_shm_reg2, &ipc_int); - return ret; + return 0; } -static void qtnf_pcie_pearl_remove(struct pci_dev *pdev) +static void qtnf_pcie_pearl_remove(struct qtnf_bus *bus) { - struct qtnf_pcie_pearl_state *ps; - struct qtnf_bus *bus; - - bus = pci_get_drvdata(pdev); - if (!bus) - return; - - ps = get_bus_priv(bus); + struct qtnf_pcie_pearl_state *ps = get_bus_priv(bus); - qtnf_pcie_remove(bus, &ps->base); qtnf_pearl_reset_ep(ps); qtnf_pearl_free_xfer_buffers(ps); } #ifdef CONFIG_PM_SLEEP -static int qtnf_pcie_pearl_suspend(struct device *dev) +static int qtnf_pcie_pearl_suspend(struct qtnf_bus *bus) { return -EOPNOTSUPP; } -static int qtnf_pcie_pearl_resume(struct device *dev) +static int qtnf_pcie_pearl_resume(struct qtnf_bus *bus) { return 0; } -#endif /* CONFIG_PM_SLEEP */ - -#ifdef CONFIG_PM_SLEEP -/* Power Management Hooks */ -static SIMPLE_DEV_PM_OPS(qtnf_pcie_pearl_pm_ops, qtnf_pcie_pearl_suspend, - qtnf_pcie_pearl_resume); #endif -static const struct pci_device_id qtnf_pcie_devid_table[] = { - { - PCIE_VENDOR_ID_QUANTENNA, PCIE_DEVICE_ID_QTN_PEARL, - PCI_ANY_ID, PCI_ANY_ID, 0, 0, - }, - { }, -}; +struct qtnf_bus *qtnf_pcie_pearl_alloc(struct pci_dev *pdev) +{ + struct qtnf_bus *bus; + struct qtnf_pcie_pearl_state *ps; -MODULE_DEVICE_TABLE(pci, qtnf_pcie_devid_table); + bus = devm_kzalloc(&pdev->dev, sizeof(*bus) + sizeof(*ps), GFP_KERNEL); + if (!bus) + return NULL; -static struct pci_driver qtnf_pcie_pearl_drv_data = { - .name = DRV_NAME, - .id_table = qtnf_pcie_devid_table, - .probe = qtnf_pcie_pearl_probe, - .remove = 
qtnf_pcie_pearl_remove, + ps = get_bus_priv(bus); + ps->base.probe_cb = qtnf_pcie_pearl_probe; + ps->base.remove_cb = qtnf_pcie_pearl_remove; + ps->base.dma_mask_get_cb = qtnf_pearl_dma_mask_get; #ifdef CONFIG_PM_SLEEP - .driver = { - .pm = &qtnf_pcie_pearl_pm_ops, - }, + ps->base.resume_cb = qtnf_pcie_pearl_resume; + ps->base.suspend_cb = qtnf_pcie_pearl_suspend; #endif -}; - -static int __init qtnf_pcie_pearl_register(void) -{ - pr_info("register Quantenna QSR10g FullMAC PCIE driver\n"); - return pci_register_driver(&qtnf_pcie_pearl_drv_data); -} -static void __exit qtnf_pcie_pearl_exit(void) -{ - pr_info("unregister Quantenna QSR10g FullMAC PCIE driver\n"); - pci_unregister_driver(&qtnf_pcie_pearl_drv_data); + return bus; } - -module_init(qtnf_pcie_pearl_register); -module_exit(qtnf_pcie_pearl_exit); - -MODULE_AUTHOR("Quantenna Communications"); -MODULE_DESCRIPTION("Quantenna QSR10g PCIe bus driver for 802.11 wireless LAN."); -MODULE_LICENSE("GPL"); diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie.c new file mode 100644 index 000000000000..598edb814421 --- /dev/null +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie.c @@ -0,0 +1,1219 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Copyright (c) 2018 Quantenna Communications */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "pcie_priv.h" +#include "topaz_pcie_regs.h" +#include "topaz_pcie_ipc.h" +#include "qtn_hw_ids.h" +#include "core.h" +#include "bus.h" +#include "shm_ipc.h" +#include "debug.h" + +#define TOPAZ_TX_BD_SIZE_DEFAULT 128 + +struct qtnf_topaz_tx_bd { + __le32 addr; + __le32 info; +} __packed; + +struct qtnf_topaz_rx_bd { + __le32 addr; + __le32 info; +} __packed; + +struct qtnf_extra_bd_params { + __le32 param1; + __le32 param2; + __le32 param3; + __le32 param4; +} __packed; + +#define QTNF_BD_PARAM_OFFSET(n) offsetof(struct qtnf_extra_bd_params, param##n) + +struct vmac_pkt_info { + __le32 addr; + __le32 info; +}; + +struct qtnf_topaz_bda { + __le16 bda_len; + __le16 bda_version; + __le32 bda_bootstate; + __le32 bda_dma_mask; + __le32 bda_dma_offset; + __le32 bda_flags; + __le32 bda_img; + __le32 bda_img_size; + __le32 bda_ep2h_irqstatus; + __le32 bda_h2ep_irqstatus; + __le32 bda_msi_addr; + u8 reserved1[56]; + __le32 bda_flashsz; + u8 bda_boardname[PCIE_BDA_NAMELEN]; + __le32 bda_pci_pre_status; + __le32 bda_pci_endian; + __le32 bda_pci_post_status; + __le32 bda_h2ep_txd_budget; + __le32 bda_ep2h_txd_budget; + __le32 bda_rc_rx_bd_base; + __le32 bda_rc_rx_bd_num; + __le32 bda_rc_tx_bd_base; + __le32 bda_rc_tx_bd_num; + u8 bda_ep_link_state; + u8 bda_rc_link_state; + u8 bda_rc_msi_enabled; + u8 reserved2; + __le32 bda_ep_next_pkt; + struct vmac_pkt_info request[QTN_PCIE_RC_TX_QUEUE_LEN]; + struct qtnf_shm_ipc_region bda_shm_reg1 __aligned(4096); + struct qtnf_shm_ipc_region bda_shm_reg2 __aligned(4096); +} __packed; + +struct qtnf_pcie_topaz_state { + struct qtnf_pcie_bus_priv base; + struct qtnf_topaz_bda __iomem *bda; + + dma_addr_t dma_msi_dummy; + u32 dma_msi_imwr; + + struct qtnf_topaz_tx_bd *tx_bd_vbase; + struct qtnf_topaz_rx_bd *rx_bd_vbase; + + __le32 __iomem *ep_next_rx_pkt; + __le32 __iomem *txqueue_wake; + __le32 __iomem *ep_pmstate; + + unsigned long rx_pkt_count; +}; + +static void qtnf_deassert_intx(struct qtnf_pcie_topaz_state *ts) +{ + void __iomem *reg = ts->base.sysctl_bar + TOPAZ_PCIE_CFG0_OFFSET; + u32 cfg; + + cfg = readl(reg); + cfg &= 
~TOPAZ_ASSERT_INTX; + qtnf_non_posted_write(cfg, reg); +} + +static inline int qtnf_topaz_intx_asserted(struct qtnf_pcie_topaz_state *ts) +{ + void __iomem *reg = ts->base.sysctl_bar + TOPAZ_PCIE_CFG0_OFFSET; + u32 cfg = readl(reg); + + return !!(cfg & TOPAZ_ASSERT_INTX); +} + +static void qtnf_topaz_reset_ep(struct qtnf_pcie_topaz_state *ts) +{ + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_RST_EP_IRQ), + TOPAZ_LH_IPC4_INT(ts->base.sysctl_bar)); + msleep(QTN_EP_RESET_WAIT_MS); + pci_restore_state(ts->base.pdev); +} + +static void setup_rx_irqs(struct qtnf_pcie_topaz_state *ts) +{ + void __iomem *reg = PCIE_DMA_WR_DONE_IMWR_ADDR_LOW(ts->base.dmareg_bar); + + ts->dma_msi_imwr = readl(reg); +} + +static void enable_rx_irqs(struct qtnf_pcie_topaz_state *ts) +{ + void __iomem *reg = PCIE_DMA_WR_DONE_IMWR_ADDR_LOW(ts->base.dmareg_bar); + + qtnf_non_posted_write(ts->dma_msi_imwr, reg); +} + +static void disable_rx_irqs(struct qtnf_pcie_topaz_state *ts) +{ + void __iomem *reg = PCIE_DMA_WR_DONE_IMWR_ADDR_LOW(ts->base.dmareg_bar); + + qtnf_non_posted_write(QTN_HOST_LO32(ts->dma_msi_dummy), reg); +} + +static void qtnf_topaz_ipc_gen_ep_int(void *arg) +{ + struct qtnf_pcie_topaz_state *ts = arg; + + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_CTRL_IRQ), + TOPAZ_CTL_M2L_INT(ts->base.sysctl_bar)); +} + +static int qtnf_is_state(__le32 __iomem *reg, u32 state) +{ + u32 s = readl(reg); + + return (s == state); +} + +static void qtnf_set_state(__le32 __iomem *reg, u32 state) +{ + qtnf_non_posted_write(state, reg); +} + +static int qtnf_poll_state(__le32 __iomem *reg, u32 state, u32 delay_in_ms) +{ + u32 timeout = 0; + + while ((qtnf_is_state(reg, state) == 0)) { + usleep_range(1000, 1200); + if (++timeout > delay_in_ms) + return -1; + } + + return 0; +} + +static int topaz_alloc_bd_table(struct qtnf_pcie_topaz_state *ts, + struct qtnf_topaz_bda __iomem *bda) +{ + struct qtnf_extra_bd_params __iomem *extra_params; + struct qtnf_pcie_bus_priv *priv = &ts->base; + dma_addr_t paddr; + void *vaddr; + int len; + int i; + + /* bd table */ + + len = priv->tx_bd_num * sizeof(struct qtnf_topaz_tx_bd) + + priv->rx_bd_num * sizeof(struct qtnf_topaz_rx_bd) + + sizeof(struct qtnf_extra_bd_params); + + vaddr = dmam_alloc_coherent(&priv->pdev->dev, len, &paddr, GFP_KERNEL); + if (!vaddr) + return -ENOMEM; + + memset(vaddr, 0, len); + + /* tx bd */ + + ts->tx_bd_vbase = vaddr; + qtnf_non_posted_write(paddr, &bda->bda_rc_tx_bd_base); + + for (i = 0; i < priv->tx_bd_num; i++) + ts->tx_bd_vbase[i].info |= cpu_to_le32(QTN_BD_EMPTY); + + pr_debug("TX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr); + + priv->tx_bd_r_index = 0; + priv->tx_bd_w_index = 0; + + /* rx bd */ + + vaddr = ((struct qtnf_topaz_tx_bd *)vaddr) + priv->tx_bd_num; + paddr += priv->tx_bd_num * sizeof(struct qtnf_topaz_tx_bd); + + ts->rx_bd_vbase = vaddr; + qtnf_non_posted_write(paddr, &bda->bda_rc_rx_bd_base); + + pr_debug("RX descriptor table: vaddr=0x%p paddr=%pad\n", vaddr, &paddr); + + /* extra shared params */ + + vaddr = ((struct qtnf_topaz_rx_bd *)vaddr) + priv->rx_bd_num; + paddr += priv->rx_bd_num * sizeof(struct qtnf_topaz_rx_bd); + + extra_params = (struct qtnf_extra_bd_params __iomem *)vaddr; + + ts->ep_next_rx_pkt = &extra_params->param1; + qtnf_non_posted_write(paddr + QTNF_BD_PARAM_OFFSET(1), + &bda->bda_ep_next_pkt); + ts->txqueue_wake = &extra_params->param2; + ts->ep_pmstate = &extra_params->param3; + ts->dma_msi_dummy = paddr + QTNF_BD_PARAM_OFFSET(4); + + return 0; +} + +static int +topaz_skb2rbd_attach(struct qtnf_pcie_topaz_state *ts, u16 
index, u32 wrap) +{ + struct qtnf_topaz_rx_bd *rxbd = &ts->rx_bd_vbase[index]; + struct sk_buff *skb; + dma_addr_t paddr; + + skb = __netdev_alloc_skb_ip_align(NULL, SKB_BUF_SIZE, GFP_ATOMIC); + if (!skb) { + ts->base.rx_skb[index] = NULL; + return -ENOMEM; + } + + ts->base.rx_skb[index] = skb; + + paddr = pci_map_single(ts->base.pdev, skb->data, + SKB_BUF_SIZE, PCI_DMA_FROMDEVICE); + if (pci_dma_mapping_error(ts->base.pdev, paddr)) { + pr_err("skb mapping error: %pad\n", &paddr); + return -ENOMEM; + } + + rxbd->addr = cpu_to_le32(QTN_HOST_LO32(paddr)); + rxbd->info = cpu_to_le32(QTN_BD_EMPTY | wrap); + + ts->base.rx_bd_w_index = index; + + return 0; +} + +static int topaz_alloc_rx_buffers(struct qtnf_pcie_topaz_state *ts) +{ + u16 i; + int ret = 0; + + memset(ts->rx_bd_vbase, 0x0, + ts->base.rx_bd_num * sizeof(struct qtnf_topaz_rx_bd)); + + for (i = 0; i < ts->base.rx_bd_num; i++) { + ret = topaz_skb2rbd_attach(ts, i, 0); + if (ret) + break; + } + + ts->rx_bd_vbase[ts->base.rx_bd_num - 1].info |= + cpu_to_le32(QTN_BD_WRAP); + + return ret; +} + +/* all rx/tx activity should have ceased before calling this function */ +static void qtnf_topaz_free_xfer_buffers(struct qtnf_pcie_topaz_state *ts) +{ + struct qtnf_pcie_bus_priv *priv = &ts->base; + struct qtnf_topaz_rx_bd *rxbd; + struct qtnf_topaz_tx_bd *txbd; + struct sk_buff *skb; + dma_addr_t paddr; + int i; + + /* free rx buffers */ + for (i = 0; i < priv->rx_bd_num; i++) { + if (priv->rx_skb && priv->rx_skb[i]) { + rxbd = &ts->rx_bd_vbase[i]; + skb = priv->rx_skb[i]; + paddr = QTN_HOST_ADDR(0x0, le32_to_cpu(rxbd->addr)); + pci_unmap_single(priv->pdev, paddr, SKB_BUF_SIZE, + PCI_DMA_FROMDEVICE); + dev_kfree_skb_any(skb); + priv->rx_skb[i] = NULL; + rxbd->addr = 0; + rxbd->info = 0; + } + } + + /* free tx buffers */ + for (i = 0; i < priv->tx_bd_num; i++) { + if (priv->tx_skb && priv->tx_skb[i]) { + txbd = &ts->tx_bd_vbase[i]; + skb = priv->tx_skb[i]; + paddr = QTN_HOST_ADDR(0x0, le32_to_cpu(txbd->addr)); + pci_unmap_single(priv->pdev, paddr, SKB_BUF_SIZE, + PCI_DMA_TODEVICE); + dev_kfree_skb_any(skb); + priv->tx_skb[i] = NULL; + txbd->addr = 0; + txbd->info = 0; + } + } +} + +static int qtnf_pcie_topaz_init_xfer(struct qtnf_pcie_topaz_state *ts, + unsigned int tx_bd_size) +{ + struct qtnf_topaz_bda __iomem *bda = ts->bda; + struct qtnf_pcie_bus_priv *priv = &ts->base; + int ret; + + if (tx_bd_size == 0) + tx_bd_size = TOPAZ_TX_BD_SIZE_DEFAULT; + + /* check TX BD queue max length according to struct qtnf_topaz_bda */ + if (tx_bd_size > QTN_PCIE_RC_TX_QUEUE_LEN) { + pr_warn("TX BD queue cannot exceed %d\n", + QTN_PCIE_RC_TX_QUEUE_LEN); + tx_bd_size = QTN_PCIE_RC_TX_QUEUE_LEN; + } + + priv->tx_bd_num = tx_bd_size; + qtnf_non_posted_write(priv->tx_bd_num, &bda->bda_rc_tx_bd_num); + qtnf_non_posted_write(priv->rx_bd_num, &bda->bda_rc_rx_bd_num); + + priv->rx_bd_w_index = 0; + priv->rx_bd_r_index = 0; + + ret = qtnf_pcie_alloc_skb_array(priv); + if (ret) { + pr_err("failed to allocate skb array\n"); + return ret; + } + + ret = topaz_alloc_bd_table(ts, bda); + if (ret) { + pr_err("failed to allocate bd table\n"); + return ret; + } + + ret = topaz_alloc_rx_buffers(ts); + if (ret) { + pr_err("failed to allocate rx buffers\n"); + return ret; + } + + return ret; +} + +static void qtnf_topaz_data_tx_reclaim(struct qtnf_pcie_topaz_state *ts) +{ + struct qtnf_pcie_bus_priv *priv = &ts->base; + struct qtnf_topaz_tx_bd *txbd; + struct sk_buff *skb; + unsigned long flags; + dma_addr_t paddr; + u32 tx_done_index; + int count = 0; + int i; + + 
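
The reclaim loop below, like qtnf_tx_queue_ready() and the rx replenish path, tracks its descriptor rings with the CIRC_CNT()/CIRC_SPACE() helpers from <linux/circ_buf.h>, which are only correct for power-of-two ring sizes (TOPAZ_TX_BD_SIZE_DEFAULT is 128). A minimal userspace sketch of that index arithmetic, with hypothetical index values:

#include <stdio.h>

/* Ring helpers as defined in <linux/circ_buf.h>; valid only when
 * size is a power of two. */
#define CIRC_CNT(head, tail, size)	(((head) - (tail)) & ((size) - 1))
#define CIRC_SPACE(head, tail, size)	CIRC_CNT((tail), ((head) + 1), (size))

int main(void)
{
	unsigned int size = 128;		/* TOPAZ_TX_BD_SIZE_DEFAULT */
	unsigned int w = 5, r = 3, done = 4;	/* hypothetical ring indices */

	/* Descriptors the endpoint has consumed but the host has not yet
	 * unmapped and freed: the while loop below walks exactly this range. */
	printf("to reclaim: %u\n", CIRC_CNT(done, r, size));	/* 1 */

	/* Descriptors still queued on the card (non-zero triggers the
	 * TX_DONE doorbell below). */
	printf("on card:    %u\n", CIRC_CNT(w, done, size));	/* 1 */

	/* Free slots left for qtnf_pcie_data_tx() to fill. */
	printf("free:       %u\n", CIRC_SPACE(w, r, size));	/* 125 */
	return 0;
}
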
spin_lock_irqsave(&priv->tx_reclaim_lock, flags); + + tx_done_index = readl(ts->ep_next_rx_pkt); + i = priv->tx_bd_r_index; + + if (CIRC_CNT(priv->tx_bd_w_index, tx_done_index, priv->tx_bd_num)) + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_TX_DONE_IRQ), + TOPAZ_LH_IPC4_INT(priv->sysctl_bar)); + + while (CIRC_CNT(tx_done_index, i, priv->tx_bd_num)) { + skb = priv->tx_skb[i]; + + if (likely(skb)) { + txbd = &ts->tx_bd_vbase[i]; + paddr = QTN_HOST_ADDR(0x0, le32_to_cpu(txbd->addr)); + pci_unmap_single(priv->pdev, paddr, skb->len, + PCI_DMA_TODEVICE); + + if (skb->dev) { + qtnf_update_tx_stats(skb->dev, skb); + if (unlikely(priv->tx_stopped)) { + qtnf_wake_all_queues(skb->dev); + priv->tx_stopped = 0; + } + } + + dev_kfree_skb_any(skb); + } + + priv->tx_skb[i] = NULL; + count++; + + if (++i >= priv->tx_bd_num) + i = 0; + } + + priv->tx_reclaim_done += count; + priv->tx_reclaim_req++; + priv->tx_bd_r_index = i; + + spin_unlock_irqrestore(&priv->tx_reclaim_lock, flags); +} + +static void qtnf_try_stop_xmit(struct qtnf_bus *bus, struct net_device *ndev) +{ + struct qtnf_pcie_topaz_state *ts = (void *)get_bus_priv(bus); + + if (ndev) { + netif_tx_stop_all_queues(ndev); + ts->base.tx_stopped = 1; + } + + writel(0x0, ts->txqueue_wake); + + /* sync up tx queue status before generating interrupt */ + dma_wmb(); + + /* send irq to card: tx stopped */ + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_TX_STOP_IRQ), + TOPAZ_LH_IPC4_INT(ts->base.sysctl_bar)); + + /* schedule reclaim attempt */ + tasklet_hi_schedule(&ts->base.reclaim_tq); +} + +static void qtnf_try_wake_xmit(struct qtnf_bus *bus, struct net_device *ndev) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + int ready; + + ready = readl(ts->txqueue_wake); + if (ready) { + netif_wake_queue(ndev); + } else { + /* re-send irq to card: tx stopped */ + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_TX_STOP_IRQ), + TOPAZ_LH_IPC4_INT(ts->base.sysctl_bar)); + } +} + +static int qtnf_tx_queue_ready(struct qtnf_pcie_topaz_state *ts) +{ + struct qtnf_pcie_bus_priv *priv = &ts->base; + + if (!CIRC_SPACE(priv->tx_bd_w_index, priv->tx_bd_r_index, + priv->tx_bd_num)) { + qtnf_topaz_data_tx_reclaim(ts); + + if (!CIRC_SPACE(priv->tx_bd_w_index, priv->tx_bd_r_index, + priv->tx_bd_num)) { + priv->tx_full_count++; + return 0; + } + } + + return 1; +} + +static int qtnf_pcie_data_tx(struct qtnf_bus *bus, struct sk_buff *skb) +{ + struct qtnf_pcie_topaz_state *ts = (void *)get_bus_priv(bus); + struct qtnf_pcie_bus_priv *priv = &ts->base; + struct qtnf_topaz_bda __iomem *bda = ts->bda; + struct qtnf_topaz_tx_bd *txbd; + dma_addr_t skb_paddr; + unsigned long flags; + int ret = 0; + int len; + int i; + + spin_lock_irqsave(&priv->tx_lock, flags); + + if (!qtnf_tx_queue_ready(ts)) { + qtnf_try_stop_xmit(bus, skb->dev); + spin_unlock_irqrestore(&priv->tx_lock, flags); + return NETDEV_TX_BUSY; + } + + i = priv->tx_bd_w_index; + priv->tx_skb[i] = skb; + len = skb->len; + + skb_paddr = pci_map_single(priv->pdev, skb->data, + skb->len, PCI_DMA_TODEVICE); + if (pci_dma_mapping_error(priv->pdev, skb_paddr)) { + ret = -ENOMEM; + goto tx_done; + } + + txbd = &ts->tx_bd_vbase[i]; + txbd->addr = cpu_to_le32(QTN_HOST_LO32(skb_paddr)); + + writel(QTN_HOST_LO32(skb_paddr), &bda->request[i].addr); + writel(len | QTN_PCIE_TX_VALID_PKT, &bda->request[i].info); + + /* sync up descriptor updates before generating interrupt */ + dma_wmb(); + + /* generate irq to card: tx done */ + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_TX_DONE_IRQ), + TOPAZ_LH_IPC4_INT(priv->sysctl_bar)); + + if (++i >= priv->tx_bd_num) + i = 0; + + 
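
The doorbells in this function, like the TX_STOP and RX_DONE doorbells elsewhere in the file, are single 32-bit writes of TOPAZ_IPC_IRQ_WORD(irq) to the LHost IPC4 register, ordered after the descriptor update by the dma_wmb() above. The macro (defined in topaz_pcie_regs.h later in this patch) mirrors the IRQ bit into the upper half-word, presumably pairing a status bit with its mask/ack twin; a small sketch of the resulting values:

#include <stdio.h>

#define BIT(n) (1U << (n))

/* From topaz_pcie_regs.h below: the IRQ number is encoded twice,
 * once in bits 0..15 and once in bits 16..31. */
#define TOPAZ_IPC_IRQ_WORD(irq)	(BIT(irq) | BIT((irq) + 16))

#define TOPAZ_RC_TX_DONE_IRQ	(0)
#define TOPAZ_RC_TX_STOP_IRQ	(2)
#define TOPAZ_RC_RX_DONE_IRQ	(3)

int main(void)
{
	printf("TX_DONE doorbell: 0x%08x\n",
	       TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_TX_DONE_IRQ));	/* 0x00010001 */
	printf("TX_STOP doorbell: 0x%08x\n",
	       TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_TX_STOP_IRQ));	/* 0x00040004 */
	printf("RX_DONE doorbell: 0x%08x\n",
	       TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_RX_DONE_IRQ));	/* 0x00080008 */
	return 0;
}
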
priv->tx_bd_w_index = i; + +tx_done: + if (ret) { + if (skb->dev) + skb->dev->stats.tx_dropped++; + dev_kfree_skb_any(skb); + } + + priv->tx_done_count++; + spin_unlock_irqrestore(&priv->tx_lock, flags); + + qtnf_topaz_data_tx_reclaim(ts); + + return NETDEV_TX_OK; +} + +static irqreturn_t qtnf_pcie_topaz_interrupt(int irq, void *data) +{ + struct qtnf_bus *bus = (struct qtnf_bus *)data; + struct qtnf_pcie_topaz_state *ts = (void *)get_bus_priv(bus); + struct qtnf_pcie_bus_priv *priv = &ts->base; + + if (!priv->msi_enabled && !qtnf_topaz_intx_asserted(ts)) + return IRQ_NONE; + + priv->pcie_irq_count++; + + qtnf_shm_ipc_irq_handler(&priv->shm_ipc_ep_in); + qtnf_shm_ipc_irq_handler(&priv->shm_ipc_ep_out); + + if (napi_schedule_prep(&bus->mux_napi)) { + disable_rx_irqs(ts); + __napi_schedule(&bus->mux_napi); + } + + tasklet_hi_schedule(&priv->reclaim_tq); + + if (!priv->msi_enabled) + qtnf_deassert_intx(ts); + + return IRQ_HANDLED; +} + +static int qtnf_rx_data_ready(struct qtnf_pcie_topaz_state *ts) +{ + u16 index = ts->base.rx_bd_r_index; + struct qtnf_topaz_rx_bd *rxbd; + u32 descw; + + rxbd = &ts->rx_bd_vbase[index]; + descw = le32_to_cpu(rxbd->info); + + if (descw & QTN_BD_EMPTY) + return 0; + + return 1; +} + +static int qtnf_topaz_rx_poll(struct napi_struct *napi, int budget) +{ + struct qtnf_bus *bus = container_of(napi, struct qtnf_bus, mux_napi); + struct qtnf_pcie_topaz_state *ts = (void *)get_bus_priv(bus); + struct qtnf_pcie_bus_priv *priv = &ts->base; + struct net_device *ndev = NULL; + struct sk_buff *skb = NULL; + int processed = 0; + struct qtnf_topaz_rx_bd *rxbd; + dma_addr_t skb_paddr; + int consume; + u32 descw; + u32 poffset; + u32 psize; + u16 r_idx; + u16 w_idx; + int ret; + + while (processed < budget) { + if (!qtnf_rx_data_ready(ts)) + goto rx_out; + + r_idx = priv->rx_bd_r_index; + rxbd = &ts->rx_bd_vbase[r_idx]; + descw = le32_to_cpu(rxbd->info); + + skb = priv->rx_skb[r_idx]; + poffset = QTN_GET_OFFSET(descw); + psize = QTN_GET_LEN(descw); + consume = 1; + + if (descw & QTN_BD_EMPTY) { + pr_warn("skip invalid rxbd[%d]\n", r_idx); + consume = 0; + } + + if (!skb) { + pr_warn("skip missing rx_skb[%d]\n", r_idx); + consume = 0; + } + + if (skb && (skb_tailroom(skb) < psize)) { + pr_err("skip packet with invalid length: %u > %u\n", + psize, skb_tailroom(skb)); + consume = 0; + } + + if (skb) { + skb_paddr = QTN_HOST_ADDR(0x0, le32_to_cpu(rxbd->addr)); + pci_unmap_single(priv->pdev, skb_paddr, SKB_BUF_SIZE, + PCI_DMA_FROMDEVICE); + } + + if (consume) { + skb_reserve(skb, poffset); + skb_put(skb, psize); + ndev = qtnf_classify_skb(bus, skb); + if (likely(ndev)) { + qtnf_update_rx_stats(ndev, skb); + skb->protocol = eth_type_trans(skb, ndev); + netif_receive_skb(skb); + } else { + pr_debug("drop untagged skb\n"); + bus->mux_dev.stats.rx_dropped++; + dev_kfree_skb_any(skb); + } + } else { + if (skb) { + bus->mux_dev.stats.rx_dropped++; + dev_kfree_skb_any(skb); + } + } + + /* notify card about recv packets once per several packets */ + if (((++ts->rx_pkt_count) & RX_DONE_INTR_MSK) == 0) + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_RX_DONE_IRQ), + TOPAZ_LH_IPC4_INT(priv->sysctl_bar)); + + priv->rx_skb[r_idx] = NULL; + if (++r_idx >= priv->rx_bd_num) + r_idx = 0; + + priv->rx_bd_r_index = r_idx; + + /* replace processed buffer by a new one */ + w_idx = priv->rx_bd_w_index; + while (CIRC_SPACE(priv->rx_bd_w_index, priv->rx_bd_r_index, + priv->rx_bd_num) > 0) { + if (++w_idx >= priv->rx_bd_num) + w_idx = 0; + + ret = topaz_skb2rbd_attach(ts, w_idx, + descw & QTN_BD_WRAP); + if (ret) { 
+ pr_err("failed to allocate new rx_skb[%d]\n", + w_idx); + break; + } + } + + processed++; + } + +rx_out: + if (processed < budget) { + napi_complete(napi); + enable_rx_irqs(ts); + } + + return processed; +} + +static void +qtnf_pcie_data_tx_timeout(struct qtnf_bus *bus, struct net_device *ndev) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + + qtnf_try_wake_xmit(bus, ndev); + tasklet_hi_schedule(&ts->base.reclaim_tq); +} + +static void qtnf_pcie_data_rx_start(struct qtnf_bus *bus) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + + napi_enable(&bus->mux_napi); + enable_rx_irqs(ts); +} + +static void qtnf_pcie_data_rx_stop(struct qtnf_bus *bus) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + + disable_rx_irqs(ts); + napi_disable(&bus->mux_napi); +} + +static const struct qtnf_bus_ops qtnf_pcie_topaz_bus_ops = { + /* control path methods */ + .control_tx = qtnf_pcie_control_tx, + + /* data path methods */ + .data_tx = qtnf_pcie_data_tx, + .data_tx_timeout = qtnf_pcie_data_tx_timeout, + .data_rx_start = qtnf_pcie_data_rx_start, + .data_rx_stop = qtnf_pcie_data_rx_stop, +}; + +static int qtnf_dbg_irq_stats(struct seq_file *s, void *data) +{ + struct qtnf_bus *bus = dev_get_drvdata(s->private); + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + + seq_printf(s, "pcie_irq_count(%u)\n", ts->base.pcie_irq_count); + + return 0; +} + +static int qtnf_dbg_pkt_stats(struct seq_file *s, void *data) +{ + struct qtnf_bus *bus = dev_get_drvdata(s->private); + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + struct qtnf_pcie_bus_priv *priv = &ts->base; + u32 tx_done_index = readl(ts->ep_next_rx_pkt); + + seq_printf(s, "tx_full_count(%u)\n", priv->tx_full_count); + seq_printf(s, "tx_done_count(%u)\n", priv->tx_done_count); + seq_printf(s, "tx_reclaim_done(%u)\n", priv->tx_reclaim_done); + seq_printf(s, "tx_reclaim_req(%u)\n", priv->tx_reclaim_req); + + seq_printf(s, "tx_bd_r_index(%u)\n", priv->tx_bd_r_index); + seq_printf(s, "tx_done_index(%u)\n", tx_done_index); + seq_printf(s, "tx_bd_w_index(%u)\n", priv->tx_bd_w_index); + + seq_printf(s, "tx host queue len(%u)\n", + CIRC_CNT(priv->tx_bd_w_index, priv->tx_bd_r_index, + priv->tx_bd_num)); + seq_printf(s, "tx reclaim queue len(%u)\n", + CIRC_CNT(tx_done_index, priv->tx_bd_r_index, + priv->tx_bd_num)); + seq_printf(s, "tx card queue len(%u)\n", + CIRC_CNT(priv->tx_bd_w_index, tx_done_index, + priv->tx_bd_num)); + + seq_printf(s, "rx_bd_r_index(%u)\n", priv->rx_bd_r_index); + seq_printf(s, "rx_bd_w_index(%u)\n", priv->rx_bd_w_index); + seq_printf(s, "rx alloc queue len(%u)\n", + CIRC_SPACE(priv->rx_bd_w_index, priv->rx_bd_r_index, + priv->rx_bd_num)); + + return 0; +} + +static void qtnf_reset_dma_offset(struct qtnf_pcie_topaz_state *ts) +{ + struct qtnf_topaz_bda __iomem *bda = ts->bda; + u32 offset = readl(&bda->bda_dma_offset); + + if ((offset & PCIE_DMA_OFFSET_ERROR_MASK) != PCIE_DMA_OFFSET_ERROR) + return; + + writel(0x0, &bda->bda_dma_offset); +} + +static int qtnf_pcie_endian_detect(struct qtnf_pcie_topaz_state *ts) +{ + struct qtnf_topaz_bda __iomem *bda = ts->bda; + u32 timeout = 0; + u32 endian; + int ret = 0; + + writel(QTN_PCI_ENDIAN_DETECT_DATA, &bda->bda_pci_endian); + + /* flush endian modifications before status update */ + dma_wmb(); + + writel(QTN_PCI_ENDIAN_VALID_STATUS, &bda->bda_pci_pre_status); + + while (readl(&bda->bda_pci_post_status) != + QTN_PCI_ENDIAN_VALID_STATUS) { + usleep_range(1000, 1200); + if (++timeout > QTN_FW_DL_TIMEOUT_MS) { + pr_err("card endianness detection timed 
out\n"); + ret = -ETIMEDOUT; + goto endian_out; + } + } + + /* do not read before status is updated */ + dma_rmb(); + + endian = readl(&bda->bda_pci_endian); + WARN(endian != QTN_PCI_LITTLE_ENDIAN, + "%s: unexpected card endianness", __func__); + +endian_out: + writel(0, &bda->bda_pci_pre_status); + writel(0, &bda->bda_pci_post_status); + writel(0, &bda->bda_pci_endian); + + return ret; +} + +static int qtnf_pre_init_ep(struct qtnf_bus *bus) +{ + struct qtnf_pcie_topaz_state *ts = (void *)get_bus_priv(bus); + struct qtnf_topaz_bda __iomem *bda = ts->bda; + u32 flags; + int ret; + + ret = qtnf_pcie_endian_detect(ts); + if (ret < 0) { + pr_err("failed to detect card endianness\n"); + return ret; + } + + writeb(ts->base.msi_enabled, &ts->bda->bda_rc_msi_enabled); + qtnf_reset_dma_offset(ts); + + /* notify card about driver type and boot mode */ + flags = readl(&bda->bda_flags) | QTN_BDA_HOST_QLINK_DRV; + + if (ts->base.flashboot) + flags |= QTN_BDA_FLASH_BOOT; + else + flags &= ~QTN_BDA_FLASH_BOOT; + + writel(flags, &bda->bda_flags); + + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_HOST_RDY); + if (qtnf_poll_state(&ts->bda->bda_bootstate, QTN_BDA_FW_TARGET_RDY, + QTN_FW_DL_TIMEOUT_MS)) { + pr_err("card is not ready to boot...\n"); + return -ETIMEDOUT; + } + + return ret; +} + +static int qtnf_post_init_ep(struct qtnf_pcie_topaz_state *ts) +{ + struct pci_dev *pdev = ts->base.pdev; + + setup_rx_irqs(ts); + disable_rx_irqs(ts); + + if (qtnf_poll_state(&ts->bda->bda_bootstate, QTN_BDA_FW_QLINK_DONE, + QTN_FW_QLINK_TIMEOUT_MS)) + return -ETIMEDOUT; + + enable_irq(pdev->irq); + return 0; +} + +static int +qtnf_ep_fw_load(struct qtnf_pcie_topaz_state *ts, const u8 *fw, u32 fw_size) +{ + struct qtnf_topaz_bda __iomem *bda = ts->bda; + struct pci_dev *pdev = ts->base.pdev; + u32 remaining = fw_size; + u8 *curr = (u8 *)fw; + u32 blksize; + u32 nblocks; + u32 offset; + u32 count; + u32 size; + dma_addr_t paddr; + void *data; + int ret = 0; + + pr_debug("FW upload started: fw_addr = 0x%p, size=%d\n", fw, fw_size); + + blksize = ts->base.fw_blksize; + + if (blksize < PAGE_SIZE) + blksize = PAGE_SIZE; + + while (blksize >= PAGE_SIZE) { + pr_debug("allocating %u bytes to upload FW\n", blksize); + data = dma_alloc_coherent(&pdev->dev, blksize, + &paddr, GFP_KERNEL); + if (data) + break; + blksize /= 2; + } + + if (!data) { + pr_err("failed to allocate DMA buffer for FW upload\n"); + ret = -ENOMEM; + goto fw_load_out; + } + + nblocks = NBLOCKS(fw_size, blksize); + offset = readl(&bda->bda_dma_offset); + + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_HOST_LOAD); + if (qtnf_poll_state(&ts->bda->bda_bootstate, QTN_BDA_FW_EP_RDY, + QTN_FW_DL_TIMEOUT_MS)) { + pr_err("card is not ready to download FW\n"); + ret = -ETIMEDOUT; + goto fw_load_map; + } + + for (count = 0 ; count < nblocks; count++) { + size = (remaining > blksize) ? blksize : remaining; + + memcpy(data, curr, size); + qtnf_non_posted_write(paddr + offset, &bda->bda_img); + qtnf_non_posted_write(size, &bda->bda_img_size); + + pr_debug("chunk[%u] VA[0x%p] PA[%pad] sz[%u]\n", + count, (void *)curr, &paddr, size); + + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_BLOCK_RDY); + if (qtnf_poll_state(&ts->bda->bda_bootstate, + QTN_BDA_FW_BLOCK_DONE, + QTN_FW_DL_TIMEOUT_MS)) { + pr_err("confirmation for block #%d timed out\n", count); + ret = -ETIMEDOUT; + goto fw_load_map; + } + + remaining = (remaining < size) ? 
remaining : (remaining - size); + curr += size; + } + + /* upload completion mark: zero-sized block */ + qtnf_non_posted_write(0, &bda->bda_img); + qtnf_non_posted_write(0, &bda->bda_img_size); + + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_BLOCK_RDY); + if (qtnf_poll_state(&ts->bda->bda_bootstate, QTN_BDA_FW_BLOCK_DONE, + QTN_FW_DL_TIMEOUT_MS)) { + pr_err("confirmation for the last block timed out\n"); + ret = -ETIMEDOUT; + goto fw_load_map; + } + + /* RC is done */ + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_BLOCK_END); + if (qtnf_poll_state(&ts->bda->bda_bootstate, QTN_BDA_FW_LOAD_DONE, + QTN_FW_DL_TIMEOUT_MS)) { + pr_err("confirmation for FW upload completion timed out\n"); + ret = -ETIMEDOUT; + goto fw_load_map; + } + + pr_debug("FW upload completed: totally sent %d blocks\n", count); + +fw_load_map: + dma_free_coherent(&pdev->dev, blksize, data, paddr); + +fw_load_out: + return ret; +} + +static int qtnf_topaz_fw_upload(struct qtnf_pcie_topaz_state *ts, + const char *fwname) +{ + const struct firmware *fw; + struct pci_dev *pdev = ts->base.pdev; + int ret; + + if (qtnf_poll_state(&ts->bda->bda_bootstate, + QTN_BDA_FW_LOAD_RDY, + QTN_FW_DL_TIMEOUT_MS)) { + pr_err("%s: card is not ready\n", fwname); + return -1; + } + + pr_info("starting firmware upload: %s\n", fwname); + + ret = request_firmware(&fw, fwname, &pdev->dev); + if (ret < 0) { + pr_err("%s: request_firmware error %d\n", fwname, ret); + return -1; + } + + ret = qtnf_ep_fw_load(ts, fw->data, fw->size); + release_firmware(fw); + + if (ret) + pr_err("%s: FW upload error\n", fwname); + + return ret; +} + +static void qtnf_topaz_fw_work_handler(struct work_struct *work) +{ + struct qtnf_bus *bus = container_of(work, struct qtnf_bus, fw_work); + struct qtnf_pcie_topaz_state *ts = (void *)get_bus_priv(bus); + int ret; + int bootloader_needed = readl(&ts->bda->bda_flags) & QTN_BDA_XMIT_UBOOT; + + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_TARGET_BOOT); + + if (bootloader_needed) { + ret = qtnf_topaz_fw_upload(ts, QTN_PCI_TOPAZ_BOOTLD_NAME); + if (ret) + goto fw_load_exit; + + ret = qtnf_pre_init_ep(bus); + if (ret) + goto fw_load_exit; + + qtnf_set_state(&ts->bda->bda_bootstate, + QTN_BDA_FW_TARGET_BOOT); + } + + if (ts->base.flashboot) { + pr_info("booting firmware from flash\n"); + + ret = qtnf_poll_state(&ts->bda->bda_bootstate, + QTN_BDA_FW_FLASH_BOOT, + QTN_FW_DL_TIMEOUT_MS); + if (ret) + goto fw_load_exit; + } else { + ret = qtnf_topaz_fw_upload(ts, QTN_PCI_TOPAZ_FW_NAME); + if (ret) + goto fw_load_exit; + + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_START); + ret = qtnf_poll_state(&ts->bda->bda_bootstate, + QTN_BDA_FW_CONFIG, + QTN_FW_QLINK_TIMEOUT_MS); + if (ret) { + pr_err("FW bringup timed out\n"); + goto fw_load_exit; + } + + qtnf_set_state(&ts->bda->bda_bootstate, QTN_BDA_FW_RUN); + ret = qtnf_poll_state(&ts->bda->bda_bootstate, + QTN_BDA_FW_RUNNING, + QTN_FW_QLINK_TIMEOUT_MS); + if (ret) { + pr_err("card bringup timed out\n"); + goto fw_load_exit; + } + } + + pr_info("firmware is up and running\n"); + + ret = qtnf_post_init_ep(ts); + if (ret) + pr_err("FW runtime failure\n"); + +fw_load_exit: + qtnf_pcie_fw_boot_done(bus, ret ? 
false : true); + + if (ret == 0) { + qtnf_debugfs_add_entry(bus, "pkt_stats", qtnf_dbg_pkt_stats); + qtnf_debugfs_add_entry(bus, "irq_stats", qtnf_dbg_irq_stats); + } +} + +static void qtnf_reclaim_tasklet_fn(unsigned long data) +{ + struct qtnf_pcie_topaz_state *ts = (void *)data; + + qtnf_topaz_data_tx_reclaim(ts); +} + +static u64 qtnf_topaz_dma_mask_get(void) +{ + return DMA_BIT_MASK(32); +} + +static int qtnf_pcie_topaz_probe(struct qtnf_bus *bus, unsigned int tx_bd_num) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + struct pci_dev *pdev = ts->base.pdev; + struct qtnf_shm_ipc_int ipc_int; + unsigned long irqflags; + int ret; + + bus->bus_ops = &qtnf_pcie_topaz_bus_ops; + INIT_WORK(&bus->fw_work, qtnf_topaz_fw_work_handler); + ts->bda = ts->base.epmem_bar; + + /* assign host msi irq before card init */ + if (ts->base.msi_enabled) + irqflags = IRQF_NOBALANCING; + else + irqflags = IRQF_NOBALANCING | IRQF_SHARED; + + ret = devm_request_irq(&pdev->dev, pdev->irq, + &qtnf_pcie_topaz_interrupt, + irqflags, "qtnf_topaz_irq", (void *)bus); + if (ret) { + pr_err("failed to request pcie irq %d\n", pdev->irq); + return ret; + } + + disable_irq(pdev->irq); + + ret = qtnf_pre_init_ep(bus); + if (ret) { + pr_err("failed to init card\n"); + return ret; + } + + ret = qtnf_pcie_topaz_init_xfer(ts, tx_bd_num); + if (ret) { + pr_err("PCIE xfer init failed\n"); + return ret; + } + + tasklet_init(&ts->base.reclaim_tq, qtnf_reclaim_tasklet_fn, + (unsigned long)ts); + netif_napi_add(&bus->mux_dev, &bus->mux_napi, + qtnf_topaz_rx_poll, 10); + + ipc_int.fn = qtnf_topaz_ipc_gen_ep_int; + ipc_int.arg = ts; + qtnf_pcie_init_shm_ipc(&ts->base, &ts->bda->bda_shm_reg1, + &ts->bda->bda_shm_reg2, &ipc_int); + + return 0; +} + +static void qtnf_pcie_topaz_remove(struct qtnf_bus *bus) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + + qtnf_topaz_reset_ep(ts); + qtnf_topaz_free_xfer_buffers(ts); +} + +#ifdef CONFIG_PM_SLEEP +static int qtnf_pcie_topaz_suspend(struct qtnf_bus *bus) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + struct pci_dev *pdev = ts->base.pdev; + + writel((u32 __force)PCI_D3hot, ts->ep_pmstate); + dma_wmb(); + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_PM_EP_IRQ), + TOPAZ_LH_IPC4_INT(ts->base.sysctl_bar)); + + pci_save_state(pdev); + pci_enable_wake(pdev, PCI_D3hot, 1); + pci_set_power_state(pdev, PCI_D3hot); + + return 0; +} + +static int qtnf_pcie_topaz_resume(struct qtnf_bus *bus) +{ + struct qtnf_pcie_topaz_state *ts = get_bus_priv(bus); + struct pci_dev *pdev = ts->base.pdev; + + pci_set_power_state(pdev, PCI_D0); + pci_restore_state(pdev); + pci_enable_wake(pdev, PCI_D0, 0); + + writel((u32 __force)PCI_D0, ts->ep_pmstate); + dma_wmb(); + writel(TOPAZ_IPC_IRQ_WORD(TOPAZ_RC_PM_EP_IRQ), + TOPAZ_LH_IPC4_INT(ts->base.sysctl_bar)); + + return 0; +} +#endif + +struct qtnf_bus *qtnf_pcie_topaz_alloc(struct pci_dev *pdev) +{ + struct qtnf_bus *bus; + struct qtnf_pcie_topaz_state *ts; + + bus = devm_kzalloc(&pdev->dev, sizeof(*bus) + sizeof(*ts), GFP_KERNEL); + if (!bus) + return NULL; + + ts = get_bus_priv(bus); + ts->base.probe_cb = qtnf_pcie_topaz_probe; + ts->base.remove_cb = qtnf_pcie_topaz_remove; + ts->base.dma_mask_get_cb = qtnf_topaz_dma_mask_get; +#ifdef CONFIG_PM_SLEEP + ts->base.resume_cb = qtnf_pcie_topaz_resume; + ts->base.suspend_cb = qtnf_pcie_topaz_suspend; +#endif + + return bus; +} diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie_ipc.h b/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie_ipc.h new file mode 100644 index 
000000000000..eb30e9d08de2 --- /dev/null +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie_ipc.h @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* Copyright (c) 2018 Quantenna Communications */ + +#ifndef _QTN_FMAC_PCIE_IPC_H_ +#define _QTN_FMAC_PCIE_IPC_H_ + +#include + +#include "shm_ipc_defs.h" + +/* EP/RC status and flags */ +#define QTN_BDA_PCIE_INIT 0x01 +#define QTN_BDA_PCIE_RDY 0x02 +#define QTN_BDA_FW_LOAD_RDY 0x03 +#define QTN_BDA_FW_LOAD_DONE 0x04 +#define QTN_BDA_FW_START 0x05 +#define QTN_BDA_FW_RUN 0x06 +#define QTN_BDA_FW_HOST_RDY 0x07 +#define QTN_BDA_FW_TARGET_RDY 0x11 +#define QTN_BDA_FW_TARGET_BOOT 0x12 +#define QTN_BDA_FW_FLASH_BOOT 0x13 +#define QTN_BDA_FW_QLINK_DONE 0x14 +#define QTN_BDA_FW_HOST_LOAD 0x08 +#define QTN_BDA_FW_BLOCK_DONE 0x09 +#define QTN_BDA_FW_BLOCK_RDY 0x0A +#define QTN_BDA_FW_EP_RDY 0x0B +#define QTN_BDA_FW_BLOCK_END 0x0C +#define QTN_BDA_FW_CONFIG 0x0D +#define QTN_BDA_FW_RUNNING 0x0E +#define QTN_BDA_PCIE_FAIL 0x82 +#define QTN_BDA_FW_LOAD_FAIL 0x85 + +#define QTN_BDA_RCMODE BIT(1) +#define QTN_BDA_MSI BIT(2) +#define QTN_BDA_HOST_CALCMD BIT(3) +#define QTN_BDA_FLASH_PRESENT BIT(4) +#define QTN_BDA_FLASH_BOOT BIT(5) +#define QTN_BDA_XMIT_UBOOT BIT(6) +#define QTN_BDA_HOST_QLINK_DRV BIT(7) +#define QTN_BDA_TARGET_FBOOT_ERR BIT(8) +#define QTN_BDA_TARGET_FWLOAD_ERR BIT(9) +#define QTN_BDA_HOST_NOFW_ERR BIT(12) +#define QTN_BDA_HOST_MEMALLOC_ERR BIT(13) +#define QTN_BDA_HOST_MEMMAP_ERR BIT(14) +#define QTN_BDA_VER(x) (((x) >> 4) & 0xFF) +#define QTN_BDA_ERROR_MASK 0xFF00 + +/* registers and shmem address macros */ +#if BITS_PER_LONG == 64 +#define QTN_HOST_HI32(a) ((u32)(((u64)a) >> 32)) +#define QTN_HOST_LO32(a) ((u32)(((u64)a) & 0xffffffffUL)) +#define QTN_HOST_ADDR(h, l) ((((u64)h) << 32) | ((u64)l)) +#elif BITS_PER_LONG == 32 +#define QTN_HOST_HI32(a) 0 +#define QTN_HOST_LO32(a) ((u32)(((u32)a) & 0xffffffffUL)) +#define QTN_HOST_ADDR(h, l) ((u32)l) +#else +#error Unexpected BITS_PER_LONG value +#endif + +#define QTN_PCIE_BDA_VERSION 0x1001 + +#define PCIE_BDA_NAMELEN 32 + +#define QTN_PCIE_RC_TX_QUEUE_LEN 256 +#define QTN_PCIE_TX_VALID_PKT 0x80000000 +#define QTN_PCIE_PKT_LEN_MASK 0xffff + +#define QTN_BD_EMPTY ((uint32_t)0x00000001) +#define QTN_BD_WRAP ((uint32_t)0x00000002) +#define QTN_BD_MASK_LEN ((uint32_t)0xFFFF0000) +#define QTN_BD_MASK_OFFSET ((uint32_t)0x0000FF00) + +#define QTN_GET_LEN(x) (((x) >> 16) & 0xFFFF) +#define QTN_GET_OFFSET(x) (((x) >> 8) & 0xFF) +#define QTN_SET_LEN(len) (((len) & 0xFFFF) << 16) +#define QTN_SET_OFFSET(of) (((of) & 0xFF) << 8) + +#define RX_DONE_INTR_MSK ((0x1 << 6) - 1) + +#define PCIE_DMA_OFFSET_ERROR 0xFFFF +#define PCIE_DMA_OFFSET_ERROR_MASK 0xFFFF + +#define QTN_PCI_ENDIAN_DETECT_DATA 0x12345678 +#define QTN_PCI_ENDIAN_REVERSE_DATA 0x78563412 +#define QTN_PCI_ENDIAN_VALID_STATUS 0x3c3c3c3c +#define QTN_PCI_ENDIAN_INVALID_STATUS 0 +#define QTN_PCI_LITTLE_ENDIAN 0 +#define QTN_PCI_BIG_ENDIAN 0xffffffff + +#define NBLOCKS(size, blksize) \ + ((size) / (blksize) + (((size) % (blksize) > 0) ? 
1 : 0)) + +#endif /* _QTN_FMAC_PCIE_IPC_H_ */ diff --git a/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie_regs.h b/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie_regs.h new file mode 100644 index 000000000000..4782e1ed3c2c --- /dev/null +++ b/drivers/net/wireless/quantenna/qtnfmac/pcie/topaz_pcie_regs.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* Copyright (c) 2018 Quantenna Communications */ + +#ifndef __TOPAZ_PCIE_H +#define __TOPAZ_PCIE_H + +/* Topaz PCIe DMA registers */ +#define PCIE_DMA_WR_INTR_STATUS(base) ((base) + 0x9bc) +#define PCIE_DMA_WR_INTR_MASK(base) ((base) + 0x9c4) +#define PCIE_DMA_WR_INTR_CLR(base) ((base) + 0x9c8) +#define PCIE_DMA_WR_ERR_STATUS(base) ((base) + 0x9cc) +#define PCIE_DMA_WR_DONE_IMWR_ADDR_LOW(base) ((base) + 0x9D0) +#define PCIE_DMA_WR_DONE_IMWR_ADDR_HIGH(base) ((base) + 0x9d4) + +#define PCIE_DMA_RD_INTR_STATUS(base) ((base) + 0x310) +#define PCIE_DMA_RD_INTR_MASK(base) ((base) + 0x319) +#define PCIE_DMA_RD_INTR_CLR(base) ((base) + 0x31c) +#define PCIE_DMA_RD_ERR_STATUS_LOW(base) ((base) + 0x324) +#define PCIE_DMA_RD_ERR_STATUS_HIGH(base) ((base) + 0x328) +#define PCIE_DMA_RD_DONE_IMWR_ADDR_LOW(base) ((base) + 0x33c) +#define PCIE_DMA_RD_DONE_IMWR_ADDR_HIGH(base) ((base) + 0x340) + +/* Topaz LHost IPC4 interrupt */ +#define TOPAZ_LH_IPC4_INT(base) ((base) + 0x13C) +#define TOPAZ_LH_IPC4_INT_MASK(base) ((base) + 0x140) + +#define TOPAZ_RC_TX_DONE_IRQ (0) +#define TOPAZ_RC_RST_EP_IRQ (1) +#define TOPAZ_RC_TX_STOP_IRQ (2) +#define TOPAZ_RC_RX_DONE_IRQ (3) +#define TOPAZ_RC_PM_EP_IRQ (4) + +/* Topaz LHost M2L interrupt */ +#define TOPAZ_CTL_M2L_INT(base) ((base) + 0x2C) +#define TOPAZ_CTL_M2L_INT_MASK(base) ((base) + 0x30) + +#define TOPAZ_RC_CTRL_IRQ (6) + +#define TOPAZ_IPC_IRQ_WORD(irq) (BIT(irq) | BIT(irq + 16)) + +/* PCIe legacy INTx */ +#define TOPAZ_PCIE_CFG0_OFFSET (0x6C) +#define TOPAZ_ASSERT_INTX BIT(9) + +#endif /* __TOPAZ_PCIE_H */ diff --git a/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h b/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h index 1fe798a9a667..40295a511224 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h +++ b/drivers/net/wireless/quantenna/qtnfmac/qtn_hw_ids.h @@ -23,7 +23,7 @@ /* PCIE Device IDs */ -#define PCIE_DEVICE_ID_QTN_PEARL (0x0008) +#define PCIE_DEVICE_ID_QSR (0x0008) #define QTN_REG_SYS_CTRL_CSR 0x14 #define QTN_CHIP_ID_MASK 0xF0 @@ -35,6 +35,8 @@ /* FW names */ #define QTN_PCI_PEARL_FW_NAME "qtn/fmac_qsr10g.img" +#define QTN_PCI_TOPAZ_FW_NAME "qtn/fmac_qsr1000.img" +#define QTN_PCI_TOPAZ_BOOTLD_NAME "qtn/uboot_qsr1000.img" static inline unsigned int qtnf_chip_id_get(const void __iomem *regs_base) { diff --git a/drivers/net/wireless/quantenna/qtnfmac/util.c b/drivers/net/wireless/quantenna/qtnfmac/util.c index e745733ba417..3bc96b264769 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/util.c +++ b/drivers/net/wireless/quantenna/qtnfmac/util.c @@ -15,6 +15,7 @@ */ #include "util.h" +#include "qtn_hw_ids.h" void qtnf_sta_list_init(struct qtnf_sta_list *list) { @@ -116,3 +117,20 @@ void qtnf_sta_list_free(struct qtnf_sta_list *list) INIT_LIST_HEAD(&list->head); } + +const char *qtnf_chipid_to_string(unsigned long chip_id) +{ + switch (chip_id) { + case QTN_CHIP_ID_TOPAZ: + return "Topaz"; + case QTN_CHIP_ID_PEARL: + return "Pearl revA"; + case QTN_CHIP_ID_PEARL_B: + return "Pearl revB"; + case QTN_CHIP_ID_PEARL_C: + return "Pearl revC"; + default: + return "unknown"; + } +} +EXPORT_SYMBOL_GPL(qtnf_chipid_to_string); diff --git 
a/drivers/net/wireless/quantenna/qtnfmac/util.h b/drivers/net/wireless/quantenna/qtnfmac/util.h index 0d4d92b11540..b8744baac332 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/util.h +++ b/drivers/net/wireless/quantenna/qtnfmac/util.h @@ -20,6 +20,8 @@ #include #include "core.h" +const char *qtnf_chipid_to_string(unsigned long chip_id); + void qtnf_sta_list_init(struct qtnf_sta_list *list); struct qtnf_sta_node *qtnf_sta_list_lookup(struct qtnf_sta_list *list, diff --git a/drivers/net/wireless/ralink/rt2x00/rt2400pci.c b/drivers/net/wireless/ralink/rt2x00/rt2400pci.c index 0bc8b0249c57..49a732798395 100644 --- a/drivers/net/wireless/ralink/rt2x00/rt2400pci.c +++ b/drivers/net/wireless/ralink/rt2x00/rt2400pci.c @@ -1302,7 +1302,7 @@ static void rt2400pci_txdone(struct rt2x00_dev *rt2x00dev, break; case 2: /* Failure, excessive retries */ __set_bit(TXDONE_EXCESSIVE_RETRY, &txdesc.flags); - /* Don't break, this is a failed frame! */ + /* Fall through - this is a failed frame! */ default: /* Failure */ __set_bit(TXDONE_FAILURE, &txdesc.flags); } diff --git a/drivers/net/wireless/ralink/rt2x00/rt2500pci.c b/drivers/net/wireless/ralink/rt2x00/rt2500pci.c index 1ff5434798ec..e8e7bfe1ba9b 100644 --- a/drivers/net/wireless/ralink/rt2x00/rt2500pci.c +++ b/drivers/net/wireless/ralink/rt2x00/rt2500pci.c @@ -1430,7 +1430,7 @@ static void rt2500pci_txdone(struct rt2x00_dev *rt2x00dev, break; case 2: /* Failure, excessive retries */ __set_bit(TXDONE_EXCESSIVE_RETRY, &txdesc.flags); - /* Don't break, this is a failed frame! */ + /* Fall through - this is a failed frame! */ default: /* Failure */ __set_bit(TXDONE_FAILURE, &txdesc.flags); } diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c index 9e7b8933d30c..0e95555aec62 100644 --- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c +++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c @@ -2482,6 +2482,7 @@ static void rt2800_config_channel_rf3052(struct rt2x00_dev *rt2x00dev, switch (rt2x00dev->default_ant.tx_chain_num) { case 1: rt2x00_set_field8(&rfcsr, RFCSR1_TX1_PD, 1); + /* fall through */ case 2: rt2x00_set_field8(&rfcsr, RFCSR1_TX2_PD, 1); break; @@ -2490,6 +2491,7 @@ static void rt2800_config_channel_rf3052(struct rt2x00_dev *rt2x00dev, switch (rt2x00dev->default_ant.rx_chain_num) { case 1: rt2x00_set_field8(&rfcsr, RFCSR1_RX1_PD, 1); + /* fall through */ case 2: rt2x00_set_field8(&rfcsr, RFCSR1_RX2_PD, 1); break; @@ -9457,8 +9459,10 @@ static int rt2800_probe_hw_mode(struct rt2x00_dev *rt2x00dev) switch (rx_chains) { case 3: spec->ht.mcs.rx_mask[2] = 0xff; + /* fall through */ case 2: spec->ht.mcs.rx_mask[1] = 0xff; + /* fall through */ case 1: spec->ht.mcs.rx_mask[0] = 0xff; spec->ht.mcs.rx_mask[4] = 0x1; /* MCS32 */ diff --git a/drivers/net/wireless/ralink/rt2x00/rt61pci.c b/drivers/net/wireless/ralink/rt2x00/rt61pci.c index cb0e1196f2c2..4c5de8fc8f12 100644 --- a/drivers/net/wireless/ralink/rt2x00/rt61pci.c +++ b/drivers/net/wireless/ralink/rt2x00/rt61pci.c @@ -2226,7 +2226,7 @@ static void rt61pci_txdone(struct rt2x00_dev *rt2x00dev) break; case 6: /* Failure, excessive retries */ __set_bit(TXDONE_EXCESSIVE_RETRY, &txdesc.flags); - /* Don't break, this is a failed frame! */ + /* Fall through - this is a failed frame! 
*/ default: /* Failure */ __set_bit(TXDONE_FAILURE, &txdesc.flags); } diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c index 08c607c031bc..33ad87528d9a 100644 --- a/drivers/net/wireless/ray_cs.c +++ b/drivers/net/wireless/ray_cs.c @@ -889,8 +889,10 @@ static int ray_hw_xmit(unsigned char *data, int len, struct net_device *dev, switch (ccsindex = get_free_tx_ccs(local)) { case ECCSBUSY: pr_debug("ray_hw_xmit tx_ccs table busy\n"); + /* fall through */ case ECCSFULL: pr_debug("ray_hw_xmit No free tx ccs\n"); + /* fall through */ case ECARDGONE: netif_stop_queue(dev); return XMIT_NO_CCS; diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c index cec37787ecf8..1a2ea8b47714 100644 --- a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c +++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c @@ -444,12 +444,13 @@ static int rtl8187_init_urbs(struct ieee80211_hw *dev) skb_queue_tail(&priv->rx_queue, skb); usb_anchor_urb(entry, &priv->anchored); ret = usb_submit_urb(entry, GFP_KERNEL); - usb_put_urb(entry); if (ret) { skb_unlink(skb, &priv->rx_queue); usb_unanchor_urb(entry); + usb_put_urb(entry); goto err; } + usb_put_urb(entry); } return ret; diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c index 56040b181cf5..2bd43057dda3 100644 --- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c +++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c @@ -1153,6 +1153,7 @@ void rtl8xxxu_gen1_config_channel(struct ieee80211_hw *hw) switch (hw->conf.chandef.width) { case NL80211_CHAN_WIDTH_20_NOHT: ht = false; + /* fall through */ case NL80211_CHAN_WIDTH_20: opmode |= BW_OPMODE_20MHZ; rtl8xxxu_write8(priv, REG_BW_OPMODE, opmode); @@ -1280,6 +1281,7 @@ void rtl8xxxu_gen2_config_channel(struct ieee80211_hw *hw) switch (hw->conf.chandef.width) { case NL80211_CHAN_WIDTH_20_NOHT: ht = false; + /* fall through */ case NL80211_CHAN_WIDTH_20: rf_mode_bw |= WMAC_TRXPTCL_CTL_BW_20; subchannel = 0; @@ -1748,9 +1750,11 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv) case 3: priv->ep_tx_low_queue = 1; priv->ep_tx_count++; + /* fall through */ case 2: priv->ep_tx_normal_queue = 1; priv->ep_tx_count++; + /* fall through */ case 1: priv->ep_tx_high_queue = 1; priv->ep_tx_count++; @@ -5688,6 +5692,7 @@ static int rtl8xxxu_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, break; case WLAN_CIPHER_SUITE_TKIP: key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIC; + break; default: return -EOPNOTSUPP; } diff --git a/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c b/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c index 6fbf8845a2ab..d748aab66aa2 100644 --- a/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c +++ b/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c @@ -292,11 +292,9 @@ bool halbtc_send_bt_mp_operation(struct btc_coexist *btcoexist, u8 op_code, static void halbtc_leave_lps(struct btc_coexist *btcoexist) { struct rtl_priv *rtlpriv; - struct rtl_ps_ctl *ppsc; bool ap_enable = false; rtlpriv = btcoexist->adapter; - ppsc = rtl_psc(rtlpriv); btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_AP_MODE_ENABLE, &ap_enable); @@ -315,11 +313,9 @@ static void halbtc_leave_lps(struct btc_coexist *btcoexist) static void halbtc_enter_lps(struct btc_coexist *btcoexist) { struct rtl_priv *rtlpriv; - struct rtl_ps_ctl *ppsc; bool ap_enable = false; rtlpriv = btcoexist->adapter; - ppsc = rtl_psc(rtlpriv); 
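
The rt2x00, ray_cs, rtl8xxxu and rtl8821ae hunks in this series all converge on the same annotation style: a literal "/* fall through */" comment immediately before the next case label, which GCC's -Wimplicit-fallthrough (at its default level) accepts as declared intent and therefore does not warn about. A compilable sketch of the pattern, modeled on the rt2800_probe_hw_mode() hunk above:

#include <stdio.h>

/* Cascading case labels: each chain count enables its own mask byte
 * plus everything below it, so the fall-through is deliberate. */
static void set_rx_mcs_mask(unsigned char rx_mask[3], int rx_chains)
{
	switch (rx_chains) {
	case 3:
		rx_mask[2] = 0xff;
		/* fall through */
	case 2:
		rx_mask[1] = 0xff;
		/* fall through */
	case 1:
		rx_mask[0] = 0xff;
		break;
	}
}

int main(void)
{
	unsigned char mask[3] = { 0 };

	set_rx_mcs_mask(mask, 2);
	printf("%02x %02x %02x\n", mask[0], mask[1], mask[2]); /* ff ff 00 */
	return 0;
}
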
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_AP_MODE_ENABLE, &ap_enable); diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c index 4c1f8b08fc10..14dcb0816bc0 100644 --- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c @@ -29,7 +29,6 @@ #include "../stats.h" #include "reg.h" #include "def.h" -#include "phy.h" #include "trx.h" #include "led.h" #include "dm.h" diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c index 85cedd083d2b..75bfa9dfef4a 100644 --- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c @@ -173,7 +173,7 @@ static int _rtl92d_fw_init(struct ieee80211_hw *hw) rtl_read_byte(rtlpriv, FW_MAC1_READY)); } RT_TRACE(rtlpriv, COMP_FW, DBG_DMESG, - "Polling FW ready fail!! REG_MCUFWDL:0x%08ul\n", + "Polling FW ready fail!! REG_MCUFWDL:0x%08x\n", rtl_read_dword(rtlpriv, REG_MCUFWDL)); return -1; } diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c index 5cf29f5a4b54..3f3327878b51 100644 --- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c @@ -509,13 +509,10 @@ bool rtl8723e_phy_config_rf_with_headerfile(struct ieee80211_hw *hw, int i; bool rtstatus = true; u32 *radioa_array_table; - u32 *radiob_array_table; - u16 radioa_arraylen, radiob_arraylen; + u16 radioa_arraylen; radioa_arraylen = RTL8723ERADIOA_1TARRAYLENGTH; radioa_array_table = RTL8723E_RADIOA_1TARRAY; - radiob_arraylen = RTL8723E_RADIOB_1TARRAYLENGTH; - radiob_array_table = RTL8723E_RADIOB_1TARRAY; rtstatus = true; diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.c index 61e86045f15c..1bbee0bfac23 100644 --- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.c @@ -475,10 +475,6 @@ u32 RTL8723E_RADIOA_1TARRAY[RTL8723ERADIOA_1TARRAYLENGTH] = { 0x000, 0x00030159, }; -u32 RTL8723E_RADIOB_1TARRAY[RTL8723E_RADIOB_1TARRAYLENGTH] = { - 0x0, -}; - u32 RTL8723EMAC_ARRAY[RTL8723E_MACARRAYLENGTH] = { 0x420, 0x00000080, 0x423, 0x00000000, diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.h b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.h index 57a548ceba7d..a044f3c456fa 100644 --- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.h +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/table.h @@ -36,8 +36,6 @@ extern u32 RTL8723EPHY_REG_1TARRAY[RTL8723E_PHY_REG_1TARRAY_LENGTH]; extern u32 RTL8723EPHY_REG_ARRAY_PG[RTL8723E_PHY_REG_ARRAY_PGLENGTH]; #define RTL8723ERADIOA_1TARRAYLENGTH 282 extern u32 RTL8723E_RADIOA_1TARRAY[RTL8723ERADIOA_1TARRAYLENGTH]; -#define RTL8723E_RADIOB_1TARRAYLENGTH 1 -extern u32 RTL8723E_RADIOB_1TARRAY[RTL8723E_RADIOB_1TARRAYLENGTH]; #define RTL8723E_MACARRAYLENGTH 172 extern u32 RTL8723EMAC_ARRAY[RTL8723E_MACARRAYLENGTH]; #define RTL8723E_AGCTAB_1TARRAYLENGTH 320 diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c index 176deb2b5386..a75451c246fd 100644 --- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c @@ -394,6 +394,7 @@ static void _rtl8812ae_phy_set_rfe_reg_24g(struct ieee80211_hw *hw) rtl_set_bbreg(hw, RB_RFE_INV, 
BMASKRFEINV, 0x000); break; } + /* fall through */ case 0: case 2: default: diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c index d7960dd5bf1a..0f2b7c619918 100644 --- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c @@ -29,7 +29,6 @@ #include "../stats.h" #include "reg.h" #include "def.h" -#include "phy.h" #include "trx.h" #include "led.h" #include "dm.h" @@ -307,14 +306,12 @@ static void translate_rx_signal_stuff(struct ieee80211_hw *hw, u8 *praddr; u8 *psaddr; __le16 fc; - u16 type; bool packet_matchbssid, packet_toself, packet_beacon; tmp_buf = skb->data + pstatus->rx_drvinfo_size + pstatus->rx_bufshift; hdr = (struct ieee80211_hdr *)tmp_buf; fc = hdr->frame_control; - type = WLAN_FC_GET_TYPE(hdr->frame_control); praddr = hdr->addr1; psaddr = ieee80211_get_SA(hdr); ether_addr_copy(pstatus->psaddr, psaddr); diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c index 612c211e21a1..449f6d23c5e3 100644 --- a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c +++ b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c @@ -210,7 +210,7 @@ int rsi_init_sdio_slave_regs(struct rsi_hw *adapter) } /* This tells SDIO FIFO when to start read to host */ - rsi_dbg(INIT_ZONE, "%s: Initialzing SDIO read start level\n", __func__); + rsi_dbg(INIT_ZONE, "%s: Initializing SDIO read start level\n", __func__); byte = 0x24; status = rsi_sdio_write_register(adapter, @@ -223,7 +223,7 @@ int rsi_init_sdio_slave_regs(struct rsi_hw *adapter) return -1; } - rsi_dbg(INIT_ZONE, "%s: Initialzing FIFO ctrl registers\n", __func__); + rsi_dbg(INIT_ZONE, "%s: Initializing FIFO ctrl registers\n", __func__); byte = (128 - 32); status = rsi_sdio_write_register(adapter, diff --git a/drivers/net/wireless/st/cw1200/debug.c b/drivers/net/wireless/st/cw1200/debug.c index 295cb1a29f25..2231ba08bc1f 100644 --- a/drivers/net/wireless/st/cw1200/debug.c +++ b/drivers/net/wireless/st/cw1200/debug.c @@ -289,19 +289,7 @@ static int cw1200_status_show(struct seq_file *seq, void *v) return 0; } -static int cw1200_status_open(struct inode *inode, struct file *file) -{ - return single_open(file, &cw1200_status_show, - inode->i_private); -} - -static const struct file_operations fops_status = { - .open = cw1200_status_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, - .owner = THIS_MODULE, -}; +DEFINE_SHOW_ATTRIBUTE(cw1200_status); static int cw1200_counters_show(struct seq_file *seq, void *v) { @@ -345,19 +333,7 @@ static int cw1200_counters_show(struct seq_file *seq, void *v) return 0; } -static int cw1200_counters_open(struct inode *inode, struct file *file) -{ - return single_open(file, &cw1200_counters_show, - inode->i_private); -} - -static const struct file_operations fops_counters = { - .open = cw1200_counters_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, - .owner = THIS_MODULE, -}; +DEFINE_SHOW_ATTRIBUTE(cw1200_counters); static ssize_t cw1200_wsm_dumps(struct file *file, const char __user *user_buf, size_t count, loff_t *ppos) @@ -399,11 +375,11 @@ int cw1200_debug_init(struct cw1200_common *priv) goto err; if (!debugfs_create_file("status", 0400, d->debugfs_phy, - priv, &fops_status)) + priv, &cw1200_status_fops)) goto err; if (!debugfs_create_file("counters", 0400, d->debugfs_phy, - priv, &fops_counters)) + priv, &cw1200_counters_fops)) goto err; if (!debugfs_create_file("wsm_dumps", 0200, 
d->debugfs_phy, diff --git a/drivers/net/wireless/st/cw1200/scan.c b/drivers/net/wireless/st/cw1200/scan.c index 67213f11acbd..0a9eac93dd01 100644 --- a/drivers/net/wireless/st/cw1200/scan.c +++ b/drivers/net/wireless/st/cw1200/scan.c @@ -78,6 +78,10 @@ int cw1200_hw_scan(struct ieee80211_hw *hw, if (req->n_ssids > WSM_SCAN_MAX_NUM_OF_SSIDS) return -EINVAL; + /* will be unlocked in cw1200_scan_work() */ + down(&priv->scan.lock); + mutex_lock(&priv->conf_mutex); + frame.skb = ieee80211_probereq_get(hw, priv->vif->addr, NULL, 0, req->ie_len); if (!frame.skb) @@ -86,19 +90,15 @@ int cw1200_hw_scan(struct ieee80211_hw *hw, if (req->ie_len) skb_put_data(frame.skb, req->ie, req->ie_len); - /* will be unlocked in cw1200_scan_work() */ - down(&priv->scan.lock); - mutex_lock(&priv->conf_mutex); - ret = wsm_set_template_frame(priv, &frame); if (!ret) { /* Host want to be the probe responder. */ ret = wsm_set_probe_responder(priv, true); } if (ret) { + dev_kfree_skb(frame.skb); mutex_unlock(&priv->conf_mutex); up(&priv->scan.lock); - dev_kfree_skb(frame.skb); return ret; } @@ -120,10 +120,9 @@ int cw1200_hw_scan(struct ieee80211_hw *hw, ++priv->scan.n_ssids; } - mutex_unlock(&priv->conf_mutex); - if (frame.skb) dev_kfree_skb(frame.skb); + mutex_unlock(&priv->conf_mutex); queue_work(priv->workqueue, &priv->scan.work); return 0; } diff --git a/drivers/net/wireless/st/cw1200/sta.c b/drivers/net/wireless/st/cw1200/sta.c index 38678e9a0562..8dae92a79fe1 100644 --- a/drivers/net/wireless/st/cw1200/sta.c +++ b/drivers/net/wireless/st/cw1200/sta.c @@ -1123,7 +1123,7 @@ int cw1200_setup_mac(struct cw1200_common *priv) * * NOTE2: RSSI based reports have been switched to RCPI, since * FW has a bug and RSSI reported values are not stable, - * what can leads to signal level oscilations in user-end applications + * what can lead to signal level oscillations in user-end applications */ struct wsm_rcpi_rssi_threshold threshold = { .rssiRcpiMode = WSM_RCPI_RSSI_THRESHOLD_ENABLE | diff --git a/drivers/net/wireless/ti/wlcore/vendor_cmd.c b/drivers/net/wireless/ti/wlcore/vendor_cmd.c index dbe78d8491ef..7f34ec077ee5 100644 --- a/drivers/net/wireless/ti/wlcore/vendor_cmd.c +++ b/drivers/net/wireless/ti/wlcore/vendor_cmd.c @@ -70,7 +70,7 @@ wlcore_vendor_cmd_smart_config_start(struct wiphy *wiphy, out: mutex_unlock(&wl->mutex); - return 0; + return ret; } static int diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c new file mode 100644 index 000000000000..64b218699656 --- /dev/null +++ b/drivers/net/wireless/virt_wifi.c @@ -0,0 +1,632 @@ +// SPDX-License-Identifier: GPL-2.0 +/* drivers/net/wireless/virt_wifi.c + * + * A fake implementation of cfg80211_ops that can be tacked on to an ethernet + * net_device to make it appear as a wireless connection. + * + * Copyright (C) 2018 Google, Inc. 
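
The cw1200_hw_scan() hunk above is a lock-ordering fix: scan.lock and conf_mutex are now taken before the probe-request template is allocated, the template skb is freed before either lock is dropped on the error path, and on success conf_mutex is released only after the skb is freed. A reduced sketch of the resulting order (cw1200 names kept; the locks and the skb are stubbed, and the allocation-failure handling here is this sketch's choice, not necessarily the driver's):

#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for down()/mutex_lock() and friends; only ordering matters. */
static void take(const char *name)    { printf("take    %s\n", name); }
static void release(const char *name) { printf("release %s\n", name); }

static int hw_scan(int fw_call_fails)
{
	char *frame_skb;

	take("scan.lock");		/* released later by cw1200_scan_work() */
	take("conf_mutex");

	frame_skb = malloc(64);		/* ieee80211_probereq_get() stand-in */
	if (!frame_skb) {
		release("conf_mutex");
		release("scan.lock");
		return -1;
	}

	if (fw_call_fails) {		/* wsm_set_template_frame() et al. */
		free(frame_skb);	/* freed before the locks are dropped */
		release("conf_mutex");
		release("scan.lock");
		return -1;
	}

	free(frame_skb);		/* template no longer needed */
	release("conf_mutex");		/* scan.lock stays held for the work item */
	return 0;
}

int main(void)
{
	hw_scan(1);			/* exercise the failure path */
	return 0;
}
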
+ * + * Author: schuffelen@google.com + */ + +#include +#include +#include +#include + +#include +#include +#include +#include + +static struct wiphy *common_wiphy; + +struct virt_wifi_wiphy_priv { + struct delayed_work scan_result; + struct cfg80211_scan_request *scan_request; + bool being_deleted; +}; + +static struct ieee80211_channel channel_2ghz = { + .band = NL80211_BAND_2GHZ, + .center_freq = 2432, + .hw_value = 2432, + .max_power = 20, +}; + +static struct ieee80211_rate bitrates_2ghz[] = { + { .bitrate = 10 }, + { .bitrate = 20 }, + { .bitrate = 55 }, + { .bitrate = 110 }, + { .bitrate = 60 }, + { .bitrate = 120 }, + { .bitrate = 240 }, +}; + +static struct ieee80211_supported_band band_2ghz = { + .channels = &channel_2ghz, + .bitrates = bitrates_2ghz, + .band = NL80211_BAND_2GHZ, + .n_channels = 1, + .n_bitrates = ARRAY_SIZE(bitrates_2ghz), + .ht_cap = { + .ht_supported = true, + .cap = IEEE80211_HT_CAP_SUP_WIDTH_20_40 | + IEEE80211_HT_CAP_GRN_FLD | + IEEE80211_HT_CAP_SGI_20 | + IEEE80211_HT_CAP_SGI_40 | + IEEE80211_HT_CAP_DSSSCCK40, + .ampdu_factor = 0x3, + .ampdu_density = 0x6, + .mcs = { + .rx_mask = {0xff, 0xff}, + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, + }, +}; + +static struct ieee80211_channel channel_5ghz = { + .band = NL80211_BAND_5GHZ, + .center_freq = 5240, + .hw_value = 5240, + .max_power = 20, +}; + +static struct ieee80211_rate bitrates_5ghz[] = { + { .bitrate = 60 }, + { .bitrate = 120 }, + { .bitrate = 240 }, +}; + +#define RX_MCS_MAP (IEEE80211_VHT_MCS_SUPPORT_0_9 << 0 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 2 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 4 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 6 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 8 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 10 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 12 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 14) + +#define TX_MCS_MAP (IEEE80211_VHT_MCS_SUPPORT_0_9 << 0 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 2 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 4 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 6 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 8 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 10 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 12 | \ + IEEE80211_VHT_MCS_SUPPORT_0_9 << 14) + +static struct ieee80211_supported_band band_5ghz = { + .channels = &channel_5ghz, + .bitrates = bitrates_5ghz, + .band = NL80211_BAND_5GHZ, + .n_channels = 1, + .n_bitrates = ARRAY_SIZE(bitrates_5ghz), + .ht_cap = { + .ht_supported = true, + .cap = IEEE80211_HT_CAP_SUP_WIDTH_20_40 | + IEEE80211_HT_CAP_GRN_FLD | + IEEE80211_HT_CAP_SGI_20 | + IEEE80211_HT_CAP_SGI_40 | + IEEE80211_HT_CAP_DSSSCCK40, + .ampdu_factor = 0x3, + .ampdu_density = 0x6, + .mcs = { + .rx_mask = {0xff, 0xff}, + .tx_params = IEEE80211_HT_MCS_TX_DEFINED, + }, + }, + .vht_cap = { + .vht_supported = true, + .cap = IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 | + IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ | + IEEE80211_VHT_CAP_RXLDPC | + IEEE80211_VHT_CAP_SHORT_GI_80 | + IEEE80211_VHT_CAP_SHORT_GI_160 | + IEEE80211_VHT_CAP_TXSTBC | + IEEE80211_VHT_CAP_RXSTBC_1 | + IEEE80211_VHT_CAP_RXSTBC_2 | + IEEE80211_VHT_CAP_RXSTBC_3 | + IEEE80211_VHT_CAP_RXSTBC_4 | + IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK, + .vht_mcs = { + .rx_mcs_map = cpu_to_le16(RX_MCS_MAP), + .tx_mcs_map = cpu_to_le16(TX_MCS_MAP), + } + }, +}; + +/* Assigned at module init. Guaranteed locally-administered and unicast. */ +static u8 fake_router_bssid[ETH_ALEN] __ro_after_init = {}; + +/* Called with the rtnl lock held. 
*/ +static int virt_wifi_scan(struct wiphy *wiphy, + struct cfg80211_scan_request *request) +{ + struct virt_wifi_wiphy_priv *priv = wiphy_priv(wiphy); + + wiphy_debug(wiphy, "scan\n"); + + if (priv->scan_request || priv->being_deleted) + return -EBUSY; + + priv->scan_request = request; + schedule_delayed_work(&priv->scan_result, HZ * 2); + + return 0; +} + +/* Acquires and releases the rdev BSS lock. */ +static void virt_wifi_scan_result(struct work_struct *work) +{ + struct { + u8 tag; + u8 len; + u8 ssid[8]; + } __packed ssid = { + .tag = WLAN_EID_SSID, .len = 8, .ssid = "VirtWifi", + }; + struct cfg80211_bss *informed_bss; + struct virt_wifi_wiphy_priv *priv = + container_of(work, struct virt_wifi_wiphy_priv, + scan_result.work); + struct wiphy *wiphy = priv_to_wiphy(priv); + struct cfg80211_scan_info scan_info = { .aborted = false }; + + informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz, + CFG80211_BSS_FTYPE_PRESP, + fake_router_bssid, + ktime_get_boot_ns(), + WLAN_CAPABILITY_ESS, 0, + (void *)&ssid, sizeof(ssid), + DBM_TO_MBM(-50), GFP_KERNEL); + cfg80211_put_bss(wiphy, informed_bss); + + /* Schedules work which acquires and releases the rtnl lock. */ + cfg80211_scan_done(priv->scan_request, &scan_info); + priv->scan_request = NULL; +} + +/* May acquire and release the rdev BSS lock. */ +static void virt_wifi_cancel_scan(struct wiphy *wiphy) +{ + struct virt_wifi_wiphy_priv *priv = wiphy_priv(wiphy); + + cancel_delayed_work_sync(&priv->scan_result); + /* Clean up dangling callbacks if necessary. */ + if (priv->scan_request) { + struct cfg80211_scan_info scan_info = { .aborted = true }; + /* Schedules work which acquires and releases the rtnl lock. */ + cfg80211_scan_done(priv->scan_request, &scan_info); + priv->scan_request = NULL; + } +} + +struct virt_wifi_netdev_priv { + struct delayed_work connect; + struct net_device *lowerdev; + struct net_device *upperdev; + u32 tx_packets; + u32 tx_failed; + u8 connect_requested_bss[ETH_ALEN]; + bool is_up; + bool is_connected; + bool being_deleted; +}; + +/* Called with the rtnl lock held. */ +static int virt_wifi_connect(struct wiphy *wiphy, struct net_device *netdev, + struct cfg80211_connect_params *sme) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(netdev); + bool could_schedule; + + if (priv->being_deleted || !priv->is_up) + return -EBUSY; + + could_schedule = schedule_delayed_work(&priv->connect, HZ * 2); + if (!could_schedule) + return -EBUSY; + + if (sme->bssid) + ether_addr_copy(priv->connect_requested_bss, sme->bssid); + else + eth_zero_addr(priv->connect_requested_bss); + + wiphy_debug(wiphy, "connect\n"); + + return 0; +} + +/* Acquires and releases the rdev event lock. */ +static void virt_wifi_connect_complete(struct work_struct *work) +{ + struct virt_wifi_netdev_priv *priv = + container_of(work, struct virt_wifi_netdev_priv, connect.work); + u8 *requested_bss = priv->connect_requested_bss; + bool has_addr = !is_zero_ether_addr(requested_bss); + bool right_addr = ether_addr_equal(requested_bss, fake_router_bssid); + u16 status = WLAN_STATUS_SUCCESS; + + if (!priv->is_up || (has_addr && !right_addr)) + status = WLAN_STATUS_UNSPECIFIED_FAILURE; + else + priv->is_connected = true; + + /* Schedules an event that acquires the rtnl lock. */ + cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0, + status, GFP_KERNEL); + netif_carrier_on(priv->upperdev); +} + +/* May acquire and release the rdev event lock. 
*/ +static void virt_wifi_cancel_connect(struct net_device *netdev) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(netdev); + + /* If there is work pending, clean up dangling callbacks. */ + if (cancel_delayed_work_sync(&priv->connect)) { + /* Schedules an event that acquires the rtnl lock. */ + cfg80211_connect_result(priv->upperdev, + priv->connect_requested_bss, NULL, 0, + NULL, 0, + WLAN_STATUS_UNSPECIFIED_FAILURE, + GFP_KERNEL); + } +} + +/* Called with the rtnl lock held. Acquires the rdev event lock. */ +static int virt_wifi_disconnect(struct wiphy *wiphy, struct net_device *netdev, + u16 reason_code) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(netdev); + + if (priv->being_deleted) + return -EBUSY; + + wiphy_debug(wiphy, "disconnect\n"); + virt_wifi_cancel_connect(netdev); + + cfg80211_disconnected(netdev, reason_code, NULL, 0, true, GFP_KERNEL); + priv->is_connected = false; + netif_carrier_off(netdev); + + return 0; +} + +/* Called with the rtnl lock held. */ +static int virt_wifi_get_station(struct wiphy *wiphy, struct net_device *dev, + const u8 *mac, struct station_info *sinfo) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(dev); + + wiphy_debug(wiphy, "get_station\n"); + + if (!priv->is_connected || !ether_addr_equal(mac, fake_router_bssid)) + return -ENOENT; + + sinfo->filled = BIT_ULL(NL80211_STA_INFO_TX_PACKETS) | + BIT_ULL(NL80211_STA_INFO_TX_FAILED) | + BIT_ULL(NL80211_STA_INFO_SIGNAL) | + BIT_ULL(NL80211_STA_INFO_TX_BITRATE); + sinfo->tx_packets = priv->tx_packets; + sinfo->tx_failed = priv->tx_failed; + /* For CFG80211_SIGNAL_TYPE_MBM, value is expressed in _dBm_ */ + sinfo->signal = -50; + sinfo->txrate = (struct rate_info) { + .legacy = 10, /* units are 100kbit/s */ + }; + return 0; +} + +/* Called with the rtnl lock held. */ +static int virt_wifi_dump_station(struct wiphy *wiphy, struct net_device *dev, + int idx, u8 *mac, struct station_info *sinfo) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(dev); + + wiphy_debug(wiphy, "dump_station\n"); + + if (idx != 0 || !priv->is_connected) + return -ENOENT; + + ether_addr_copy(mac, fake_router_bssid); + return virt_wifi_get_station(wiphy, dev, fake_router_bssid, sinfo); +} + +static const struct cfg80211_ops virt_wifi_cfg80211_ops = { + .scan = virt_wifi_scan, + + .connect = virt_wifi_connect, + .disconnect = virt_wifi_disconnect, + + .get_station = virt_wifi_get_station, + .dump_station = virt_wifi_dump_station, +}; + +/* Acquires and releases the rtnl lock. */ +static struct wiphy *virt_wifi_make_wiphy(void) +{ + struct wiphy *wiphy; + struct virt_wifi_wiphy_priv *priv; + int err; + + wiphy = wiphy_new(&virt_wifi_cfg80211_ops, sizeof(*priv)); + + if (!wiphy) + return NULL; + + wiphy->max_scan_ssids = 4; + wiphy->max_scan_ie_len = 1000; + wiphy->signal_type = CFG80211_SIGNAL_TYPE_MBM; + + wiphy->bands[NL80211_BAND_2GHZ] = &band_2ghz; + wiphy->bands[NL80211_BAND_5GHZ] = &band_5ghz; + wiphy->bands[NL80211_BAND_60GHZ] = NULL; + + wiphy->regulatory_flags = REGULATORY_WIPHY_SELF_MANAGED; + wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION); + + priv = wiphy_priv(wiphy); + priv->being_deleted = false; + priv->scan_request = NULL; + INIT_DELAYED_WORK(&priv->scan_result, virt_wifi_scan_result); + + err = wiphy_register(wiphy); + if (err < 0) { + wiphy_free(wiphy); + return NULL; + } + + return wiphy; +} + +/* Acquires and releases the rtnl lock. 
*/ +static void virt_wifi_destroy_wiphy(struct wiphy *wiphy) +{ + struct virt_wifi_wiphy_priv *priv; + + WARN(!wiphy, "%s called with null wiphy", __func__); + if (!wiphy) + return; + + priv = wiphy_priv(wiphy); + priv->being_deleted = true; + virt_wifi_cancel_scan(wiphy); + + if (wiphy->registered) + wiphy_unregister(wiphy); + wiphy_free(wiphy); +} + +/* Enters and exits a RCU-bh critical section. */ +static netdev_tx_t virt_wifi_start_xmit(struct sk_buff *skb, + struct net_device *dev) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(dev); + + priv->tx_packets++; + if (!priv->is_connected) { + priv->tx_failed++; + return NET_XMIT_DROP; + } + + skb->dev = priv->lowerdev; + return dev_queue_xmit(skb); +} + +/* Called with rtnl lock held. */ +static int virt_wifi_net_device_open(struct net_device *dev) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(dev); + + priv->is_up = true; + return 0; +} + +/* Called with rtnl lock held. */ +static int virt_wifi_net_device_stop(struct net_device *dev) +{ + struct virt_wifi_netdev_priv *n_priv = netdev_priv(dev); + struct virt_wifi_wiphy_priv *w_priv; + + n_priv->is_up = false; + + if (!dev->ieee80211_ptr) + return 0; + w_priv = wiphy_priv(dev->ieee80211_ptr->wiphy); + + virt_wifi_cancel_scan(dev->ieee80211_ptr->wiphy); + virt_wifi_cancel_connect(dev); + netif_carrier_off(dev); + + return 0; +} + +static const struct net_device_ops virt_wifi_ops = { + .ndo_start_xmit = virt_wifi_start_xmit, + .ndo_open = virt_wifi_net_device_open, + .ndo_stop = virt_wifi_net_device_stop, +}; + +/* Invoked as part of rtnl lock release. */ +static void virt_wifi_net_device_destructor(struct net_device *dev) +{ + /* Delayed past dellink to allow nl80211 to react to the device being + * deleted. + */ + kfree(dev->ieee80211_ptr); + dev->ieee80211_ptr = NULL; + free_netdev(dev); +} + +/* No lock interaction. */ +static void virt_wifi_setup(struct net_device *dev) +{ + ether_setup(dev); + dev->netdev_ops = &virt_wifi_ops; + dev->priv_destructor = virt_wifi_net_device_destructor; +} + +/* Called in a RCU read critical section from netif_receive_skb */ +static rx_handler_result_t virt_wifi_rx_handler(struct sk_buff **pskb) +{ + struct sk_buff *skb = *pskb; + struct virt_wifi_netdev_priv *priv = + rcu_dereference(skb->dev->rx_handler_data); + + if (!priv->is_connected) + return RX_HANDLER_PASS; + + /* GFP_ATOMIC because this is a packet interrupt handler. */ + skb = skb_share_check(skb, GFP_ATOMIC); + if (!skb) { + dev_err(&priv->upperdev->dev, "can't skb_share_check\n"); + return RX_HANDLER_CONSUMED; + } + + *pskb = skb; + skb->dev = priv->upperdev; + skb->pkt_type = PACKET_HOST; + return RX_HANDLER_ANOTHER; +} + +/* Called with rtnl lock held. 
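The rtnl lock also keeps the lower device looked up via __dev_get_by_index() below from disappearing while the link is created.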
*/ +static int virt_wifi_newlink(struct net *src_net, struct net_device *dev, + struct nlattr *tb[], struct nlattr *data[], + struct netlink_ext_ack *extack) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(dev); + int err; + + if (!tb[IFLA_LINK]) + return -EINVAL; + + netif_carrier_off(dev); + + priv->upperdev = dev; + priv->lowerdev = __dev_get_by_index(src_net, + nla_get_u32(tb[IFLA_LINK])); + + if (!priv->lowerdev) + return -ENODEV; + if (!tb[IFLA_MTU]) + dev->mtu = priv->lowerdev->mtu; + else if (dev->mtu > priv->lowerdev->mtu) + return -EINVAL; + + err = netdev_rx_handler_register(priv->lowerdev, virt_wifi_rx_handler, + priv); + if (err) { + dev_err(&priv->lowerdev->dev, + "can't netdev_rx_handler_register: %d\n", err); + return err; + } + + eth_hw_addr_inherit(dev, priv->lowerdev); + netif_stacked_transfer_operstate(priv->lowerdev, dev); + + SET_NETDEV_DEV(dev, &priv->lowerdev->dev); + dev->ieee80211_ptr = kzalloc(sizeof(*dev->ieee80211_ptr), GFP_KERNEL); + + if (!dev->ieee80211_ptr) { + err = -ENOMEM; + goto remove_handler; + } + + dev->ieee80211_ptr->iftype = NL80211_IFTYPE_STATION; + dev->ieee80211_ptr->wiphy = common_wiphy; + + err = register_netdevice(dev); + if (err) { + dev_err(&priv->lowerdev->dev, "can't register_netdevice: %d\n", + err); + goto free_wireless_dev; + } + + err = netdev_upper_dev_link(priv->lowerdev, dev, extack); + if (err) { + dev_err(&priv->lowerdev->dev, "can't netdev_upper_dev_link: %d\n", + err); + goto unregister_netdev; + } + + priv->being_deleted = false; + priv->is_connected = false; + priv->is_up = false; + INIT_DELAYED_WORK(&priv->connect, virt_wifi_connect_complete); + + return 0; +unregister_netdev: + unregister_netdevice(dev); +free_wireless_dev: + kfree(dev->ieee80211_ptr); + dev->ieee80211_ptr = NULL; +remove_handler: + netdev_rx_handler_unregister(priv->lowerdev); + + return err; +} + +/* Called with rtnl lock held. */ +static void virt_wifi_dellink(struct net_device *dev, + struct list_head *head) +{ + struct virt_wifi_netdev_priv *priv = netdev_priv(dev); + + if (dev->ieee80211_ptr) + virt_wifi_cancel_scan(dev->ieee80211_ptr->wiphy); + + priv->being_deleted = true; + virt_wifi_cancel_connect(dev); + netif_carrier_off(dev); + + netdev_rx_handler_unregister(priv->lowerdev); + netdev_upper_dev_unlink(priv->lowerdev, dev); + + unregister_netdevice_queue(dev, head); + + /* Deleting the wiphy is handled in the module destructor. */ +} + +static struct rtnl_link_ops virt_wifi_link_ops = { + .kind = "virt_wifi", + .setup = virt_wifi_setup, + .newlink = virt_wifi_newlink, + .dellink = virt_wifi_dellink, + .priv_size = sizeof(struct virt_wifi_netdev_priv), +}; + +/* Acquires and releases the rtnl lock. */ +static int __init virt_wifi_init_module(void) +{ + int err; + + /* Guaranteed to be locally-administered and not multicast. */ + eth_random_addr(fake_router_bssid); + + common_wiphy = virt_wifi_make_wiphy(); + if (!common_wiphy) + return -ENOMEM; + + err = rtnl_link_register(&virt_wifi_link_ops); + if (err) + virt_wifi_destroy_wiphy(common_wiphy); + + return err; +} + +/* Acquires and releases the rtnl lock. */ +static void __exit virt_wifi_cleanup_module(void) +{ + /* Will delete any devices that depend on the wiphy. 
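rtnl_link_unregister() walks every network namespace and invokes virt_wifi_dellink() for each remaining virt_wifi device, so the wiphy is quiescent before it is destroyed below.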
*/ + rtnl_link_unregister(&virt_wifi_link_ops); + virt_wifi_destroy_wiphy(common_wiphy); +} + +module_init(virt_wifi_init_module); +module_exit(virt_wifi_cleanup_module); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Cody Schuffelen "); +MODULE_DESCRIPTION("Driver for a wireless wrapper of ethernet devices"); +MODULE_ALIAS_RTNL_LINK("virt_wifi"); diff --git a/drivers/net/wireless/zydas/zd1201.c b/drivers/net/wireless/zydas/zd1201.c index 253403899fe9..22c70f1f568c 100644 --- a/drivers/net/wireless/zydas/zd1201.c +++ b/drivers/net/wireless/zydas/zd1201.c @@ -969,6 +969,7 @@ static int zd1201_set_mode(struct net_device *dev, */ zd1201_join(zd, "\0-*#\0", 5); /* Put port in pIBSS */ + /* Fall through */ case 8: /* No pseudo-IBSS in wireless extensions (yet) */ porttype = ZD1201_PORTTYPE_PSEUDOIBSS; break; diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c index fe1d52247bbe..2625740bdc4a 100644 --- a/drivers/net/xen-netback/xenbus.c +++ b/drivers/net/xen-netback/xenbus.c @@ -186,7 +186,7 @@ static const struct file_operations xenvif_dbg_io_ring_ops_fops = { .write = xenvif_write_io_ring, }; -static int xenvif_read_ctrl(struct seq_file *m, void *v) +static int xenvif_ctrl_show(struct seq_file *m, void *v) { struct xenvif *vif = m->private; @@ -194,19 +194,7 @@ static int xenvif_read_ctrl(struct seq_file *m, void *v) return 0; } - -static int xenvif_ctrl_open(struct inode *inode, struct file *filp) -{ - return single_open(filp, xenvif_read_ctrl, inode->i_private); -} - -static const struct file_operations xenvif_dbg_ctrl_ops_fops = { - .owner = THIS_MODULE, - .open = xenvif_ctrl_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(xenvif_ctrl); static void xenvif_debugfs_addif(struct xenvif *vif) { @@ -238,7 +226,7 @@ static void xenvif_debugfs_addif(struct xenvif *vif) 0400, vif->xenvif_dbg_root, vif, - &xenvif_dbg_ctrl_ops_fops); + &xenvif_ctrl_fops); if (IS_ERR_OR_NULL(pfile)) pr_warn("Creation of ctrl file returned %ld!\n", PTR_ERR(pfile)); diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 5b97cc946d70..c914c24f880b 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -337,8 +337,6 @@ static void xennet_alloc_rx_buffers(struct netfront_queue *queue) return; } - wmb(); /* barrier so backend seens requests */ - RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify); if (notify) notify_remote_via_irq(queue->rx_irq); diff --git a/drivers/nvdimm/Kconfig b/drivers/nvdimm/Kconfig index 9d36473dc2a2..5e27918e4624 100644 --- a/drivers/nvdimm/Kconfig +++ b/drivers/nvdimm/Kconfig @@ -112,4 +112,9 @@ config OF_PMEM Select Y if unsure. 
+config NVDIMM_KEYS + def_bool y + depends on ENCRYPTED_KEYS + depends on (LIBNVDIMM=ENCRYPTED_KEYS) || LIBNVDIMM=m + endif diff --git a/drivers/nvdimm/Makefile b/drivers/nvdimm/Makefile index e8847045dac0..6f2a088afad6 100644 --- a/drivers/nvdimm/Makefile +++ b/drivers/nvdimm/Makefile @@ -27,3 +27,4 @@ libnvdimm-$(CONFIG_ND_CLAIM) += claim.o libnvdimm-$(CONFIG_BTT) += btt_devs.o libnvdimm-$(CONFIG_NVDIMM_PFN) += pfn_devs.o libnvdimm-$(CONFIG_NVDIMM_DAX) += dax_devs.o +libnvdimm-$(CONFIG_NVDIMM_KEYS) += security.o diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c index f1fb39921236..dca5f7a805cb 100644 --- a/drivers/nvdimm/bus.c +++ b/drivers/nvdimm/bus.c @@ -331,6 +331,12 @@ struct nvdimm_bus *to_nvdimm_bus(struct device *dev) } EXPORT_SYMBOL_GPL(to_nvdimm_bus); +struct nvdimm_bus *nvdimm_to_bus(struct nvdimm *nvdimm) +{ + return to_nvdimm_bus(nvdimm->dev.parent); +} +EXPORT_SYMBOL_GPL(nvdimm_to_bus); + struct nvdimm_bus *nvdimm_bus_register(struct device *parent, struct nvdimm_bus_descriptor *nd_desc) { @@ -344,12 +350,12 @@ struct nvdimm_bus *nvdimm_bus_register(struct device *parent, INIT_LIST_HEAD(&nvdimm_bus->mapping_list); init_waitqueue_head(&nvdimm_bus->probe_wait); nvdimm_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL); - mutex_init(&nvdimm_bus->reconfig_mutex); - badrange_init(&nvdimm_bus->badrange); if (nvdimm_bus->id < 0) { kfree(nvdimm_bus); return NULL; } + mutex_init(&nvdimm_bus->reconfig_mutex); + badrange_init(&nvdimm_bus->badrange); nvdimm_bus->nd_desc = nd_desc; nvdimm_bus->dev.parent = parent; nvdimm_bus->dev.release = nvdimm_bus_release; @@ -387,9 +393,24 @@ static int child_unregister(struct device *dev, void *data) * i.e. remove classless children */ if (dev->class) - /* pass */; - else - nd_device_unregister(dev, ND_SYNC); + return 0; + + if (is_nvdimm(dev)) { + struct nvdimm *nvdimm = to_nvdimm(dev); + bool dev_put = false; + + /* We are shutting down. Make state frozen artificially. */ + nvdimm_bus_lock(dev); + nvdimm->sec.state = NVDIMM_SECURITY_FROZEN; + if (test_and_clear_bit(NDD_WORK_PENDING, &nvdimm->flags)) + dev_put = true; + nvdimm_bus_unlock(dev); + cancel_delayed_work_sync(&nvdimm->dwork); + if (dev_put) + put_device(dev); + } + nd_device_unregister(dev, ND_SYNC); + return 0; } @@ -902,7 +923,7 @@ static int nd_cmd_clear_to_send(struct nvdimm_bus *nvdimm_bus, /* ask the bus provider if it would like to block this request */ if (nd_desc->clear_to_send) { - int rc = nd_desc->clear_to_send(nd_desc, nvdimm, cmd); + int rc = nd_desc->clear_to_send(nd_desc, nvdimm, cmd, data); if (rc) return rc; diff --git a/drivers/nvdimm/dimm.c b/drivers/nvdimm/dimm.c index 9899c97138a3..0cf58cabc9ed 100644 --- a/drivers/nvdimm/dimm.c +++ b/drivers/nvdimm/dimm.c @@ -34,7 +34,11 @@ static int nvdimm_probe(struct device *dev) return rc; } - /* reset locked, to be validated below... */ + /* + * The locked status bit reflects explicit status codes from the + * label reading commands, revalidate it each time the driver is + * activated and re-reads the label area. + */ nvdimm_clear_locked(dev); ndd = kzalloc(sizeof(*ndd), GFP_KERNEL); @@ -51,6 +55,16 @@ static int nvdimm_probe(struct device *dev) get_device(dev); kref_init(&ndd->kref); + /* + * Attempt to unlock, if the DIMM supports security commands, + * otherwise the locked indication is determined by explicit + * status codes from the label reading commands. 
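+ * A failed unlock is logged but does not abort the probe; the + * label-area read below reports -EACCES while the DIMM remains + * locked.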
+ */ + rc = nvdimm_security_unlock(dev); + if (rc < 0) + dev_dbg(dev, "failed to unlock dimm: %d\n", rc); + + /* * EACCES failures reading the namespace label-area-properties * are interpreted as the DIMM capacity being locked but the diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c index 6c3de2317390..4890310df874 100644 --- a/drivers/nvdimm/dimm_devs.c +++ b/drivers/nvdimm/dimm_devs.c @@ -370,23 +370,172 @@ static ssize_t available_slots_show(struct device *dev, } static DEVICE_ATTR_RO(available_slots); +__weak ssize_t security_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct nvdimm *nvdimm = to_nvdimm(dev); + + switch (nvdimm->sec.state) { + case NVDIMM_SECURITY_DISABLED: + return sprintf(buf, "disabled\n"); + case NVDIMM_SECURITY_UNLOCKED: + return sprintf(buf, "unlocked\n"); + case NVDIMM_SECURITY_LOCKED: + return sprintf(buf, "locked\n"); + case NVDIMM_SECURITY_FROZEN: + return sprintf(buf, "frozen\n"); + case NVDIMM_SECURITY_OVERWRITE: + return sprintf(buf, "overwrite\n"); + default: + return -ENOTTY; + } + + return -ENOTTY; +} + +#define OPS \ + C( OP_FREEZE, "freeze", 1), \ + C( OP_DISABLE, "disable", 2), \ + C( OP_UPDATE, "update", 3), \ + C( OP_ERASE, "erase", 2), \ + C( OP_OVERWRITE, "overwrite", 2), \ + C( OP_MASTER_UPDATE, "master_update", 3), \ + C( OP_MASTER_ERASE, "master_erase", 2) +#undef C +#define C(a, b, c) a +enum nvdimmsec_op_ids { OPS }; +#undef C +#define C(a, b, c) { b, c } +static struct { + const char *name; + int args; +} ops[] = { OPS }; +#undef C + +#define SEC_CMD_SIZE 32 +#define KEY_ID_SIZE 10 + +static ssize_t __security_store(struct device *dev, const char *buf, size_t len) +{ + struct nvdimm *nvdimm = to_nvdimm(dev); + ssize_t rc; + char cmd[SEC_CMD_SIZE+1], keystr[KEY_ID_SIZE+1], + nkeystr[KEY_ID_SIZE+1]; + unsigned int key, newkey; + int i; + + if (atomic_read(&nvdimm->busy)) + return -EBUSY; + + rc = sscanf(buf, "%"__stringify(SEC_CMD_SIZE)"s" + " %"__stringify(KEY_ID_SIZE)"s" + " %"__stringify(KEY_ID_SIZE)"s", + cmd, keystr, nkeystr); + if (rc < 1) + return -EINVAL; + for (i = 0; i < ARRAY_SIZE(ops); i++) + if (sysfs_streq(cmd, ops[i].name)) + break; + if (i >= ARRAY_SIZE(ops)) + return -EINVAL; + if (ops[i].args > 1) + rc = kstrtouint(keystr, 0, &key); + if (rc >= 0 && ops[i].args > 2) + rc = kstrtouint(nkeystr, 0, &newkey); + if (rc < 0) + return rc; + + if (i == OP_FREEZE) { + dev_dbg(dev, "freeze\n"); + rc = nvdimm_security_freeze(nvdimm); + } else if (i == OP_DISABLE) { + dev_dbg(dev, "disable %u\n", key); + rc = nvdimm_security_disable(nvdimm, key); + } else if (i == OP_UPDATE) { + dev_dbg(dev, "update %u %u\n", key, newkey); + rc = nvdimm_security_update(nvdimm, key, newkey, NVDIMM_USER); + } else if (i == OP_ERASE) { + dev_dbg(dev, "erase %u\n", key); + rc = nvdimm_security_erase(nvdimm, key, NVDIMM_USER); + } else if (i == OP_OVERWRITE) { + dev_dbg(dev, "overwrite %u\n", key); + rc = nvdimm_security_overwrite(nvdimm, key); + } else if (i == OP_MASTER_UPDATE) { + dev_dbg(dev, "master_update %u %u\n", key, newkey); + rc = nvdimm_security_update(nvdimm, key, newkey, + NVDIMM_MASTER); + } else if (i == OP_MASTER_ERASE) { + dev_dbg(dev, "master_erase %u\n", key); + rc = nvdimm_security_erase(nvdimm, key, + NVDIMM_MASTER); + } else + return -EINVAL; + + if (rc == 0) + rc = len; + return rc; +} + +static ssize_t security_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t len) + +{ + ssize_t rc; + + /* + * Require all userspace triggered security management to be + 
* done while probing is idle and the DIMM is not in active use + * in any region. + */ + device_lock(dev); + nvdimm_bus_lock(dev); + wait_nvdimm_bus_probe_idle(dev); + rc = __security_store(dev, buf, len); + nvdimm_bus_unlock(dev); + device_unlock(dev); + + return rc; +} +static DEVICE_ATTR_RW(security); + static struct attribute *nvdimm_attributes[] = { &dev_attr_state.attr, &dev_attr_flags.attr, &dev_attr_commands.attr, &dev_attr_available_slots.attr, + &dev_attr_security.attr, NULL, }; +static umode_t nvdimm_visible(struct kobject *kobj, struct attribute *a, int n) +{ + struct device *dev = container_of(kobj, typeof(*dev), kobj); + struct nvdimm *nvdimm = to_nvdimm(dev); + + if (a != &dev_attr_security.attr) + return a->mode; + if (nvdimm->sec.state < 0) + return 0; + /* Are there any state mutation ops? */ + if (nvdimm->sec.ops->freeze || nvdimm->sec.ops->disable + || nvdimm->sec.ops->change_key + || nvdimm->sec.ops->erase + || nvdimm->sec.ops->overwrite) + return a->mode; + return 0444; +} + struct attribute_group nvdimm_attribute_group = { .attrs = nvdimm_attributes, + .is_visible = nvdimm_visible, }; EXPORT_SYMBOL_GPL(nvdimm_attribute_group); -struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, void *provider_data, - const struct attribute_group **groups, unsigned long flags, - unsigned long cmd_mask, int num_flush, - struct resource *flush_wpq) +struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus, + void *provider_data, const struct attribute_group **groups, + unsigned long flags, unsigned long cmd_mask, int num_flush, + struct resource *flush_wpq, const char *dimm_id, + const struct nvdimm_security_ops *sec_ops) { struct nvdimm *nvdimm = kzalloc(sizeof(*nvdimm), GFP_KERNEL); struct device *dev; @@ -399,6 +548,8 @@ struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, void *provider_data, kfree(nvdimm); return NULL; } + + nvdimm->dimm_id = dimm_id; nvdimm->provider_data = provider_data; nvdimm->flags = flags; nvdimm->cmd_mask = cmd_mask; @@ -411,11 +562,60 @@ struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, void *provider_data, dev->type = &nvdimm_device_type; dev->devt = MKDEV(nvdimm_major, nvdimm->id); dev->groups = groups; + nvdimm->sec.ops = sec_ops; + nvdimm->sec.overwrite_tmo = 0; + INIT_DELAYED_WORK(&nvdimm->dwork, nvdimm_security_overwrite_query); + /* + * Security state must be initialized before device_add() for + * attribute visibility. 
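+ * nvdimm_visible() consults sec.state when the sysfs group is + * created during registration below.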
+ */ + /* get security state and extended (master) state */ + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); + nvdimm->sec.ext_state = nvdimm_security_state(nvdimm, NVDIMM_MASTER); nd_device_register(dev); return nvdimm; } -EXPORT_SYMBOL_GPL(nvdimm_create); +EXPORT_SYMBOL_GPL(__nvdimm_create); + +int nvdimm_security_setup_events(struct nvdimm *nvdimm) +{ + nvdimm->sec.overwrite_state = sysfs_get_dirent(nvdimm->dev.kobj.sd, + "security"); + if (!nvdimm->sec.overwrite_state) + return -ENODEV; + return 0; +} +EXPORT_SYMBOL_GPL(nvdimm_security_setup_events); + +int nvdimm_in_overwrite(struct nvdimm *nvdimm) +{ + return test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); +} +EXPORT_SYMBOL_GPL(nvdimm_in_overwrite); + +int nvdimm_security_freeze(struct nvdimm *nvdimm) +{ + int rc; + + WARN_ON_ONCE(!is_nvdimm_bus_locked(&nvdimm->dev)); + + if (!nvdimm->sec.ops || !nvdimm->sec.ops->freeze) + return -EOPNOTSUPP; + + if (nvdimm->sec.state < 0) + return -EIO; + + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { + dev_warn(&nvdimm->dev, "Overwrite operation in progress.\n"); + return -EBUSY; + } + + rc = nvdimm->sec.ops->freeze(nvdimm); + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); + + return rc; +} int alias_dpa_busy(struct device *dev, void *data) { diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c index 750dbaa6ce82..a11bf4e6b451 100644 --- a/drivers/nvdimm/label.c +++ b/drivers/nvdimm/label.c @@ -944,8 +944,7 @@ static int __blk_label_update(struct nd_region *nd_region, victims = 0; if (old_num_resources) { /* convert old local-label-map to dimm-slot victim-map */ - victim_map = kcalloc(BITS_TO_LONGS(nslot), sizeof(long), - GFP_KERNEL); + victim_map = bitmap_zalloc(nslot, GFP_KERNEL); if (!victim_map) return -ENOMEM; @@ -968,7 +967,7 @@ static int __blk_label_update(struct nd_region *nd_region, /* don't allow updates that consume the last label */ if (nfree - alloc < 0 || nfree - alloc + victims < 1) { dev_info(&nsblk->common.dev, "insufficient label space\n"); - kfree(victim_map); + bitmap_free(victim_map); return -ENOSPC; } /* from here on we need to abort on error */ @@ -1140,7 +1139,7 @@ static int __blk_label_update(struct nd_region *nd_region, out: kfree(old_res_list); - kfree(victim_map); + bitmap_free(victim_map); return rc; abort: diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c index 681af3a8fd62..4b077555ac70 100644 --- a/drivers/nvdimm/namespace_devs.c +++ b/drivers/nvdimm/namespace_devs.c @@ -270,11 +270,10 @@ static ssize_t __alt_name_store(struct device *dev, const char *buf, if (dev->driver || to_ndns(dev)->claim) return -EBUSY; - input = kmemdup(buf, len + 1, GFP_KERNEL); + input = kstrndup(buf, len, GFP_KERNEL); if (!input) return -ENOMEM; - input[len] = '\0'; pos = strim(input); if (strlen(pos) + 1 > NSLABEL_NAME_LEN) { rc = -EINVAL; diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index d0c621b32f72..2b2cf4e554d3 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -21,6 +21,7 @@ extern struct list_head nvdimm_bus_list; extern struct mutex nvdimm_bus_list_mutex; extern int nvdimm_major; +extern struct workqueue_struct *nvdimm_wq; struct nvdimm_bus { struct nvdimm_bus_descriptor *nd_desc; @@ -41,8 +42,64 @@ struct nvdimm { atomic_t busy; int id, num_flush; struct resource *flush_wpq; + const char *dimm_id; + struct { + const struct nvdimm_security_ops *ops; + enum nvdimm_security_state state; + enum nvdimm_security_state ext_state; + unsigned int overwrite_tmo; + 
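/* 'security' attr dirent, notified when overwrite completes */ +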
struct kernfs_node *overwrite_state; + } sec; + struct delayed_work dwork; }; +static inline enum nvdimm_security_state nvdimm_security_state( + struct nvdimm *nvdimm, bool master) +{ + if (!nvdimm->sec.ops) + return -ENXIO; + + return nvdimm->sec.ops->state(nvdimm, master); +} +int nvdimm_security_freeze(struct nvdimm *nvdimm); +#if IS_ENABLED(CONFIG_NVDIMM_KEYS) +int nvdimm_security_disable(struct nvdimm *nvdimm, unsigned int keyid); +int nvdimm_security_update(struct nvdimm *nvdimm, unsigned int keyid, + unsigned int new_keyid, + enum nvdimm_passphrase_type pass_type); +int nvdimm_security_erase(struct nvdimm *nvdimm, unsigned int keyid, + enum nvdimm_passphrase_type pass_type); +int nvdimm_security_overwrite(struct nvdimm *nvdimm, unsigned int keyid); +void nvdimm_security_overwrite_query(struct work_struct *work); +#else +static inline int nvdimm_security_disable(struct nvdimm *nvdimm, + unsigned int keyid) +{ + return -EOPNOTSUPP; +} +static inline int nvdimm_security_update(struct nvdimm *nvdimm, + unsigned int keyid, + unsigned int new_keyid, + enum nvdimm_passphrase_type pass_type) +{ + return -EOPNOTSUPP; +} +static inline int nvdimm_security_erase(struct nvdimm *nvdimm, + unsigned int keyid, + enum nvdimm_passphrase_type pass_type) +{ + return -EOPNOTSUPP; +} +static inline int nvdimm_security_overwrite(struct nvdimm *nvdimm, + unsigned int keyid) +{ + return -EOPNOTSUPP; +} +static inline void nvdimm_security_overwrite_query(struct work_struct *work) +{ +} +#endif + /** * struct blk_alloc_info - tracking info for BLK dpa scanning * @nd_mapping: blk region mapping boundaries diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h index e79cc8e5c114..cfde992684e7 100644 --- a/drivers/nvdimm/nd.h +++ b/drivers/nvdimm/nd.h @@ -250,6 +250,14 @@ long nvdimm_clear_poison(struct device *dev, phys_addr_t phys, void nvdimm_set_aliasing(struct device *dev); void nvdimm_set_locked(struct device *dev); void nvdimm_clear_locked(struct device *dev); +#if IS_ENABLED(CONFIG_NVDIMM_KEYS) +int nvdimm_security_unlock(struct device *dev); +#else +static inline int nvdimm_security_unlock(struct device *dev) +{ + return 0; +} +#endif struct nd_btt *to_nd_btt(struct device *dev); struct nd_gen_sb { diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index 0e39e3d1846f..bc2f700feef8 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -309,8 +309,11 @@ static void pmem_release_queue(void *q) blk_cleanup_queue(q); } -static void pmem_freeze_queue(void *q) +static void pmem_freeze_queue(struct percpu_ref *ref) { + struct request_queue *q; + + q = container_of(ref, typeof(*q), q_usage_counter); blk_freeze_queue_start(q); } @@ -393,7 +396,7 @@ static int pmem_attach_disk(struct device *dev, return -EBUSY; } - q = blk_alloc_queue_node(GFP_KERNEL, dev_to_node(dev), NULL); + q = blk_alloc_queue_node(GFP_KERNEL, dev_to_node(dev)); if (!q) return -ENOMEM; @@ -402,6 +405,7 @@ static int pmem_attach_disk(struct device *dev, pmem->pfn_flags = PFN_DEV; pmem->pgmap.ref = &q->q_usage_counter; + pmem->pgmap.kill = pmem_freeze_queue; if (is_nd_pfn(dev)) { if (setup_pagemap_fsdax(dev, &pmem->pgmap)) return -ENOMEM; @@ -427,13 +431,6 @@ static int pmem_attach_disk(struct device *dev, memcpy(&bb_res, &nsio->res, sizeof(bb_res)); } - /* - * At release time the queue must be frozen before - * devm_memremap_pages is unwound - */ - if (devm_add_action_or_reset(dev, pmem_freeze_queue, q)) - return -ENOMEM; - if (IS_ERR(addr)) return PTR_ERR(addr); pmem->virt_addr = addr; diff --git 
a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c index e7377f1028ef..e2818f94f292 100644 --- a/drivers/nvdimm/region_devs.c +++ b/drivers/nvdimm/region_devs.c @@ -79,6 +79,11 @@ int nd_region_activate(struct nd_region *nd_region) struct nd_mapping *nd_mapping = &nd_region->mapping[i]; struct nvdimm *nvdimm = nd_mapping->nvdimm; + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { + nvdimm_bus_unlock(&nd_region->dev); + return -EBUSY; + } + /* at least one null hint slot per-dimm for the "no-hint" case */ flush_data_size += sizeof(void *); num_flush = min_not_zero(num_flush, nvdimm->num_flush); diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c new file mode 100644 index 000000000000..f8bb746a549f --- /dev/null +++ b/drivers/nvdimm/security.c @@ -0,0 +1,454 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright(c) 2018 Intel Corporation. All rights reserved. */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "nd-core.h" +#include "nd.h" + +#define NVDIMM_BASE_KEY 0 +#define NVDIMM_NEW_KEY 1 + +static bool key_revalidate = true; +module_param(key_revalidate, bool, 0444); +MODULE_PARM_DESC(key_revalidate, "Require key validation at init."); + +static void *key_data(struct key *key) +{ + struct encrypted_key_payload *epayload = dereference_key_locked(key); + + lockdep_assert_held_read(&key->sem); + + return epayload->decrypted_data; +} + +static void nvdimm_put_key(struct key *key) +{ + if (!key) + return; + + up_read(&key->sem); + key_put(key); +} + +/* + * Retrieve kernel key for DIMM and request from user space if + * necessary. Returns a key held for read and must be put by + * nvdimm_put_key() before the usage goes out of scope. + */ +static struct key *nvdimm_request_key(struct nvdimm *nvdimm) +{ + struct key *key = NULL; + static const char NVDIMM_PREFIX[] = "nvdimm:"; + char desc[NVDIMM_KEY_DESC_LEN + sizeof(NVDIMM_PREFIX)]; + struct device *dev = &nvdimm->dev; + + sprintf(desc, "%s%s", NVDIMM_PREFIX, nvdimm->dimm_id); + key = request_key(&key_type_encrypted, desc, ""); + if (IS_ERR(key)) { + if (PTR_ERR(key) == -ENOKEY) + dev_dbg(dev, "request_key() found no key\n"); + else + dev_dbg(dev, "request_key() upcall failed\n"); + key = NULL; + } else { + struct encrypted_key_payload *epayload; + + down_read(&key->sem); + epayload = dereference_key_locked(key); + if (epayload->decrypted_datalen != NVDIMM_PASSPHRASE_LEN) { + up_read(&key->sem); + key_put(key); + key = NULL; + } + } + + return key; +} + +static struct key *nvdimm_lookup_user_key(struct nvdimm *nvdimm, + key_serial_t id, int subclass) +{ + key_ref_t keyref; + struct key *key; + struct encrypted_key_payload *epayload; + struct device *dev = &nvdimm->dev; + + keyref = lookup_user_key(id, 0, 0); + if (IS_ERR(keyref)) + return NULL; + + key = key_ref_to_ptr(keyref); + if (key->type != &key_type_encrypted) { + key_put(key); + return NULL; + } + + dev_dbg(dev, "%s: key found: %#x\n", __func__, key_serial(key)); + + down_read_nested(&key->sem, subclass); + epayload = dereference_key_locked(key); + if (epayload->decrypted_datalen != NVDIMM_PASSPHRASE_LEN) { + up_read(&key->sem); + key_put(key); + key = NULL; + } + return key; +} + +static struct key *nvdimm_key_revalidate(struct nvdimm *nvdimm) +{ + struct key *key; + int rc; + + if (!nvdimm->sec.ops->change_key) + return NULL; + + key = nvdimm_request_key(nvdimm); + if (!key) + return NULL; + + /* + * Send the same key to the hardware as new and old key to + * verify that the key 
is good. + */ + rc = nvdimm->sec.ops->change_key(nvdimm, key_data(key), + key_data(key), NVDIMM_USER); + if (rc < 0) { + nvdimm_put_key(key); + key = NULL; + } + return key; +} + +static int __nvdimm_security_unlock(struct nvdimm *nvdimm) +{ + struct device *dev = &nvdimm->dev; + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); + struct key *key = NULL; + int rc; + + /* The bus lock should be held at the top level of the call stack */ + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); + + if (!nvdimm->sec.ops || !nvdimm->sec.ops->unlock + || nvdimm->sec.state < 0) + return -EIO; + + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { + dev_dbg(dev, "Security operation in progress.\n"); + return -EBUSY; + } + + /* + * If the pre-OS has unlocked the DIMM, attempt to send the key + * from request_key() to the hardware for verification. Failure + * to revalidate the key against the hardware results in a + * freeze of the security configuration. I.e. if the OS does not + * have the key, security is being managed pre-OS. + */ + if (nvdimm->sec.state == NVDIMM_SECURITY_UNLOCKED) { + if (!key_revalidate) + return 0; + + key = nvdimm_key_revalidate(nvdimm); + if (!key) + return nvdimm_security_freeze(nvdimm); + } else + key = nvdimm_request_key(nvdimm); + + if (!key) + return -ENOKEY; + + rc = nvdimm->sec.ops->unlock(nvdimm, key_data(key)); + dev_dbg(dev, "key: %d unlock: %s\n", key_serial(key), + rc == 0 ? "success" : "fail"); + + nvdimm_put_key(key); + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); + return rc; +} + +int nvdimm_security_unlock(struct device *dev) +{ + struct nvdimm *nvdimm = to_nvdimm(dev); + int rc; + + nvdimm_bus_lock(dev); + rc = __nvdimm_security_unlock(nvdimm); + nvdimm_bus_unlock(dev); + return rc; +} + +int nvdimm_security_disable(struct nvdimm *nvdimm, unsigned int keyid) +{ + struct device *dev = &nvdimm->dev; + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); + struct key *key; + int rc; + + /* The bus lock should be held at the top level of the call stack */ + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); + + if (!nvdimm->sec.ops || !nvdimm->sec.ops->disable + || nvdimm->sec.state < 0) + return -EOPNOTSUPP; + + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { + dev_dbg(dev, "Incorrect security state: %d\n", + nvdimm->sec.state); + return -EIO; + } + + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { + dev_dbg(dev, "Security operation in progress.\n"); + return -EBUSY; + } + + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); + if (!key) + return -ENOKEY; + + rc = nvdimm->sec.ops->disable(nvdimm, key_data(key)); + dev_dbg(dev, "key: %d disable: %s\n", key_serial(key), + rc == 0 ? 
"success" : "fail"); + + nvdimm_put_key(key); + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); + return rc; +} + +int nvdimm_security_update(struct nvdimm *nvdimm, unsigned int keyid, + unsigned int new_keyid, + enum nvdimm_passphrase_type pass_type) +{ + struct device *dev = &nvdimm->dev; + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); + struct key *key, *newkey; + int rc; + + /* The bus lock should be held at the top level of the call stack */ + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); + + if (!nvdimm->sec.ops || !nvdimm->sec.ops->change_key + || nvdimm->sec.state < 0) + return -EOPNOTSUPP; + + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { + dev_dbg(dev, "Incorrect security state: %d\n", + nvdimm->sec.state); + return -EIO; + } + + if (keyid == 0) + key = NULL; + else { + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); + if (!key) + return -ENOKEY; + } + + newkey = nvdimm_lookup_user_key(nvdimm, new_keyid, NVDIMM_NEW_KEY); + if (!newkey) { + nvdimm_put_key(key); + return -ENOKEY; + } + + rc = nvdimm->sec.ops->change_key(nvdimm, key ? key_data(key) : NULL, + key_data(newkey), pass_type); + dev_dbg(dev, "key: %d %d update%s: %s\n", + key_serial(key), key_serial(newkey), + pass_type == NVDIMM_MASTER ? "(master)" : "(user)", + rc == 0 ? "success" : "fail"); + + nvdimm_put_key(newkey); + nvdimm_put_key(key); + if (pass_type == NVDIMM_MASTER) + nvdimm->sec.ext_state = nvdimm_security_state(nvdimm, + NVDIMM_MASTER); + else + nvdimm->sec.state = nvdimm_security_state(nvdimm, + NVDIMM_USER); + return rc; +} + +int nvdimm_security_erase(struct nvdimm *nvdimm, unsigned int keyid, + enum nvdimm_passphrase_type pass_type) +{ + struct device *dev = &nvdimm->dev; + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); + struct key *key; + int rc; + + /* The bus lock should be held at the top level of the call stack */ + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); + + if (!nvdimm->sec.ops || !nvdimm->sec.ops->erase + || nvdimm->sec.state < 0) + return -EOPNOTSUPP; + + if (atomic_read(&nvdimm->busy)) { + dev_dbg(dev, "Unable to secure erase while DIMM active.\n"); + return -EBUSY; + } + + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { + dev_dbg(dev, "Incorrect security state: %d\n", + nvdimm->sec.state); + return -EIO; + } + + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { + dev_dbg(dev, "Security operation in progress.\n"); + return -EBUSY; + } + + if (nvdimm->sec.ext_state != NVDIMM_SECURITY_UNLOCKED + && pass_type == NVDIMM_MASTER) { + dev_dbg(dev, + "Attempt to secure erase in wrong master state.\n"); + return -EOPNOTSUPP; + } + + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); + if (!key) + return -ENOKEY; + + rc = nvdimm->sec.ops->erase(nvdimm, key_data(key), pass_type); + dev_dbg(dev, "key: %d erase%s: %s\n", key_serial(key), + pass_type == NVDIMM_MASTER ? "(master)" : "(user)", + rc == 0 ? 
"success" : "fail"); + + nvdimm_put_key(key); + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); + return rc; +} + +int nvdimm_security_overwrite(struct nvdimm *nvdimm, unsigned int keyid) +{ + struct device *dev = &nvdimm->dev; + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); + struct key *key; + int rc; + + /* The bus lock should be held at the top level of the call stack */ + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); + + if (!nvdimm->sec.ops || !nvdimm->sec.ops->overwrite + || nvdimm->sec.state < 0) + return -EOPNOTSUPP; + + if (atomic_read(&nvdimm->busy)) { + dev_dbg(dev, "Unable to overwrite while DIMM active.\n"); + return -EBUSY; + } + + if (dev->driver == NULL) { + dev_dbg(dev, "Unable to overwrite while DIMM active.\n"); + return -EINVAL; + } + + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { + dev_dbg(dev, "Incorrect security state: %d\n", + nvdimm->sec.state); + return -EIO; + } + + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { + dev_dbg(dev, "Security operation in progress.\n"); + return -EBUSY; + } + + if (keyid == 0) + key = NULL; + else { + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); + if (!key) + return -ENOKEY; + } + + rc = nvdimm->sec.ops->overwrite(nvdimm, key ? key_data(key) : NULL); + dev_dbg(dev, "key: %d overwrite submission: %s\n", key_serial(key), + rc == 0 ? "success" : "fail"); + + nvdimm_put_key(key); + if (rc == 0) { + set_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); + set_bit(NDD_WORK_PENDING, &nvdimm->flags); + nvdimm->sec.state = NVDIMM_SECURITY_OVERWRITE; + /* + * Make sure we don't lose device while doing overwrite + * query. + */ + get_device(dev); + queue_delayed_work(system_wq, &nvdimm->dwork, 0); + } + + return rc; +} + +void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm) +{ + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(&nvdimm->dev); + int rc; + unsigned int tmo; + + /* The bus lock should be held at the top level of the call stack */ + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); + + /* + * Abort and release device if we no longer have the overwrite + * flag set. It means the work has been canceled. 
+ */ + if (!test_bit(NDD_WORK_PENDING, &nvdimm->flags)) + return; + + tmo = nvdimm->sec.overwrite_tmo; + + if (!nvdimm->sec.ops || !nvdimm->sec.ops->query_overwrite + || nvdimm->sec.state < 0) + return; + + rc = nvdimm->sec.ops->query_overwrite(nvdimm); + if (rc == -EBUSY) { + + /* setup delayed work again */ + tmo += 10; + queue_delayed_work(system_wq, &nvdimm->dwork, tmo * HZ); + nvdimm->sec.overwrite_tmo = min(15U * 60U, tmo); + return; + } + + if (rc < 0) + dev_dbg(&nvdimm->dev, "overwrite failed\n"); + else + dev_dbg(&nvdimm->dev, "overwrite completed\n"); + + if (nvdimm->sec.overwrite_state) + sysfs_notify_dirent(nvdimm->sec.overwrite_state); + nvdimm->sec.overwrite_tmo = 0; + clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); + clear_bit(NDD_WORK_PENDING, &nvdimm->flags); + put_device(&nvdimm->dev); + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); + nvdimm->sec.ext_state = nvdimm_security_state(nvdimm, NVDIMM_MASTER); +} + +void nvdimm_security_overwrite_query(struct work_struct *work) +{ + struct nvdimm *nvdimm = + container_of(work, typeof(*nvdimm), dwork.work); + + nvdimm_bus_lock(&nvdimm->dev); + __nvdimm_security_overwrite_query(nvdimm); + nvdimm_bus_unlock(&nvdimm->dev); +} diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig index 88a8b5916624..0f345e207675 100644 --- a/drivers/nvme/host/Kconfig +++ b/drivers/nvme/host/Kconfig @@ -57,3 +57,18 @@ config NVME_FC from https://github.com/linux-nvme/nvme-cli. If unsure, say N. + +config NVME_TCP + tristate "NVM Express over Fabrics TCP host driver" + depends on INET + depends on BLK_DEV_NVME + select NVME_FABRICS + help + This provides support for the NVMe over Fabrics protocol using + the TCP transport. This allows you to use remote block devices + exported using the NVMe protocol set. + + To configure a NVMe over Fabrics controller use the nvme-cli tool + from https://github.com/linux-nvme/nvme-cli. + + If unsure, say N. 
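[Note: with NVME_TCP enabled, a remote subsystem is typically attached with nvme-cli along the lines of "nvme connect --transport=tcp --traddr=<target-ip> --trsvcid=4420 --nqn=<subsystem-nqn>". The invocation is illustrative only; consult the nvme-cli documentation for the authoritative options.]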
diff --git a/drivers/nvme/host/Makefile b/drivers/nvme/host/Makefile index aea459c65ae1..8a4b671c5f0c 100644 --- a/drivers/nvme/host/Makefile +++ b/drivers/nvme/host/Makefile @@ -7,6 +7,7 @@ obj-$(CONFIG_BLK_DEV_NVME) += nvme.o obj-$(CONFIG_NVME_FABRICS) += nvme-fabrics.o obj-$(CONFIG_NVME_RDMA) += nvme-rdma.o obj-$(CONFIG_NVME_FC) += nvme-fc.o +obj-$(CONFIG_NVME_TCP) += nvme-tcp.o nvme-core-y := core.o nvme-core-$(CONFIG_TRACING) += trace.o @@ -21,3 +22,5 @@ nvme-fabrics-y += fabrics.o nvme-rdma-y += rdma.o nvme-fc-y += fc.o + +nvme-tcp-y += tcp.o diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 962012135b62..08f2c92602f4 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -97,7 +97,6 @@ static dev_t nvme_chr_devt; static struct class *nvme_class; static struct class *nvme_subsys_class; -static void nvme_ns_remove(struct nvme_ns *ns); static int nvme_revalidate_disk(struct gendisk *disk); static void nvme_put_subsystem(struct nvme_subsystem *subsys); static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl, @@ -245,12 +244,31 @@ static inline bool nvme_req_needs_retry(struct request *req) return true; } +static void nvme_retry_req(struct request *req) +{ + struct nvme_ns *ns = req->q->queuedata; + unsigned long delay = 0; + u16 crd; + + /* The mask and shift result must be <= 3 */ + crd = (nvme_req(req)->status & NVME_SC_CRD) >> 11; + if (ns && crd) + delay = ns->ctrl->crdt[crd - 1] * 100; + + nvme_req(req)->retries++; + blk_mq_requeue_request(req, false); + blk_mq_delay_kick_requeue_list(req->q, delay); +} + void nvme_complete_rq(struct request *req) { blk_status_t status = nvme_error_status(req); trace_nvme_complete_rq(req); + if (nvme_req(req)->ctrl->kas) + nvme_req(req)->ctrl->comp_seen = true; + if (unlikely(status != BLK_STS_OK && nvme_req_needs_retry(req))) { if ((req->cmd_flags & REQ_NVME_MPATH) && blk_path_error(status)) { @@ -259,8 +277,7 @@ void nvme_complete_rq(struct request *req) } if (!blk_queue_dying(req->q)) { - nvme_req(req)->retries++; - blk_mq_requeue_request(req, true); + nvme_retry_req(req); return; } } @@ -268,14 +285,14 @@ void nvme_complete_rq(struct request *req) } EXPORT_SYMBOL_GPL(nvme_complete_rq); -void nvme_cancel_request(struct request *req, void *data, bool reserved) +bool nvme_cancel_request(struct request *req, void *data, bool reserved) { dev_dbg_ratelimited(((struct nvme_ctrl *) data)->device, "Cancelling I/O %d", req->tag); nvme_req(req)->status = NVME_SC_ABORT_REQ; blk_mq_complete_request(req); - + return true; } EXPORT_SYMBOL_GPL(nvme_cancel_request); @@ -536,7 +553,6 @@ static void nvme_assign_write_stream(struct nvme_ctrl *ctrl, static inline void nvme_setup_flush(struct nvme_ns *ns, struct nvme_command *cmnd) { - memset(cmnd, 0, sizeof(*cmnd)); cmnd->common.opcode = nvme_cmd_flush; cmnd->common.nsid = cpu_to_le32(ns->head->ns_id); } @@ -548,9 +564,19 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req, struct nvme_dsm_range *range; struct bio *bio; - range = kmalloc_array(segments, sizeof(*range), GFP_ATOMIC); - if (!range) - return BLK_STS_RESOURCE; + range = kmalloc_array(segments, sizeof(*range), + GFP_ATOMIC | __GFP_NOWARN); + if (!range) { + /* + * If we fail allocation our range, fallback to the controller + * discard page. If that's also busy, it's safe to return + * busy, as we know we can make progress once that's freed. 
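+ * (ctrl->discard_page is a single page allocated in + * nvme_init_ctrl() and serialized by the discard_page_busy bit.)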
+ */ + if (test_and_set_bit_lock(0, &ns->ctrl->discard_page_busy)) + return BLK_STS_RESOURCE; + + range = page_address(ns->ctrl->discard_page); + } __rq_for_each_bio(bio, req) { u64 slba = nvme_block_nr(ns, bio->bi_iter.bi_sector); @@ -565,11 +591,13 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req, } if (WARN_ON_ONCE(n != segments)) { - kfree(range); + if (virt_to_page(range) == ns->ctrl->discard_page) + clear_bit_unlock(0, &ns->ctrl->discard_page_busy); + else + kfree(range); return BLK_STS_IOERR; } - memset(cmnd, 0, sizeof(*cmnd)); cmnd->dsm.opcode = nvme_cmd_dsm; cmnd->dsm.nsid = cpu_to_le32(ns->head->ns_id); cmnd->dsm.nr = cpu_to_le32(segments - 1); @@ -598,7 +626,6 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns, if (req->cmd_flags & REQ_RAHEAD) dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH; - memset(cmnd, 0, sizeof(*cmnd)); cmnd->rw.opcode = (rq_data_dir(req) ? nvme_cmd_write : nvme_cmd_read); cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id); cmnd->rw.slba = cpu_to_le64(nvme_block_nr(ns, blk_rq_pos(req))); @@ -650,8 +677,13 @@ void nvme_cleanup_cmd(struct request *req) blk_rq_bytes(req) >> ns->lba_shift); } if (req->rq_flags & RQF_SPECIAL_PAYLOAD) { - kfree(page_address(req->special_vec.bv_page) + - req->special_vec.bv_offset); + struct nvme_ns *ns = req->rq_disk->private_data; + struct page *page = req->special_vec.bv_page; + + if (page == ns->ctrl->discard_page) + clear_bit_unlock(0, &ns->ctrl->discard_page_busy); + else + kfree(page_address(page) + req->special_vec.bv_offset); } } EXPORT_SYMBOL_GPL(nvme_cleanup_cmd); @@ -663,6 +695,7 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, nvme_clear_nvme_request(req); + memset(cmd, 0, sizeof(*cmd)); switch (req_op(req)) { case REQ_OP_DRV_IN: case REQ_OP_DRV_OUT: @@ -691,6 +724,31 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, } EXPORT_SYMBOL_GPL(nvme_setup_cmd); +static void nvme_end_sync_rq(struct request *rq, blk_status_t error) +{ + struct completion *waiting = rq->end_io_data; + + rq->end_io_data = NULL; + complete(waiting); +} + +static void nvme_execute_rq_polled(struct request_queue *q, + struct gendisk *bd_disk, struct request *rq, int at_head) +{ + DECLARE_COMPLETION_ONSTACK(wait); + + WARN_ON_ONCE(!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)); + + rq->cmd_flags |= REQ_HIPRI; + rq->end_io_data = &wait; + blk_execute_rq_nowait(q, bd_disk, rq, at_head, nvme_end_sync_rq); + + while (!completion_done(&wait)) { + blk_poll(q, request_to_qc_t(rq->mq_hctx, rq), true); + cond_resched(); + } +} + /* * Returns 0 on success. 
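(With poll == true the request is completed by busy-polling blk_poll() rather than by sleeping on an interrupt-driven completion.) *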
If the result is negative, it's a Linux error code; * if the result is positive, it's an NVM Express status code @@ -698,7 +756,7 @@ EXPORT_SYMBOL_GPL(nvme_setup_cmd); int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, union nvme_result *result, void *buffer, unsigned bufflen, unsigned timeout, int qid, int at_head, - blk_mq_req_flags_t flags) + blk_mq_req_flags_t flags, bool poll) { struct request *req; int ret; @@ -715,7 +773,10 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, goto out; } - blk_execute_rq(req->q, NULL, req, at_head); + if (poll) + nvme_execute_rq_polled(req->q, NULL, req, at_head); + else + blk_execute_rq(req->q, NULL, req, at_head); if (result) *result = nvme_req(req)->result; if (nvme_req(req)->flags & NVME_REQ_CANCELLED) @@ -732,7 +793,7 @@ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, void *buffer, unsigned bufflen) { return __nvme_submit_sync_cmd(q, cmd, NULL, buffer, bufflen, 0, - NVME_QID_ANY, 0, 0); + NVME_QID_ANY, 0, 0, false); } EXPORT_SYMBOL_GPL(nvme_submit_sync_cmd); @@ -843,6 +904,7 @@ static void nvme_keep_alive_end_io(struct request *rq, blk_status_t status) return; } + ctrl->comp_seen = false; spin_lock_irqsave(&ctrl->lock, flags); if (ctrl->state == NVME_CTRL_LIVE || ctrl->state == NVME_CTRL_CONNECTING) @@ -873,6 +935,15 @@ static void nvme_keep_alive_work(struct work_struct *work) { struct nvme_ctrl *ctrl = container_of(to_delayed_work(work), struct nvme_ctrl, ka_work); + bool comp_seen = ctrl->comp_seen; + + if ((ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) && comp_seen) { + dev_dbg(ctrl->device, + "reschedule traffic based keep-alive timer\n"); + ctrl->comp_seen = false; + schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ); + return; + } if (nvme_keep_alive(ctrl)) { /* allocation failure, reset the controller */ @@ -1041,7 +1112,7 @@ static int nvme_set_features(struct nvme_ctrl *dev, unsigned fid, unsigned dword c.features.dword11 = cpu_to_le32(dword11); ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res, - buffer, buflen, 0, NVME_QID_ANY, 0, 0); + buffer, buflen, 0, NVME_QID_ANY, 0, 0, false); if (ret >= 0 && result) *result = le32_to_cpu(res.u32); return ret; @@ -1240,12 +1311,12 @@ static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns, c.common.nsid = cpu_to_le32(cmd.nsid); c.common.cdw2[0] = cpu_to_le32(cmd.cdw2); c.common.cdw2[1] = cpu_to_le32(cmd.cdw3); - c.common.cdw10[0] = cpu_to_le32(cmd.cdw10); - c.common.cdw10[1] = cpu_to_le32(cmd.cdw11); - c.common.cdw10[2] = cpu_to_le32(cmd.cdw12); - c.common.cdw10[3] = cpu_to_le32(cmd.cdw13); - c.common.cdw10[4] = cpu_to_le32(cmd.cdw14); - c.common.cdw10[5] = cpu_to_le32(cmd.cdw15); + c.common.cdw10 = cpu_to_le32(cmd.cdw10); + c.common.cdw11 = cpu_to_le32(cmd.cdw11); + c.common.cdw12 = cpu_to_le32(cmd.cdw12); + c.common.cdw13 = cpu_to_le32(cmd.cdw13); + c.common.cdw14 = cpu_to_le32(cmd.cdw14); + c.common.cdw15 = cpu_to_le32(cmd.cdw15); if (cmd.timeout_ms) timeout = msecs_to_jiffies(cmd.timeout_ms); @@ -1524,8 +1595,6 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id) if (ns->noiob) nvme_set_chunk_size(ns); nvme_update_disk_info(disk, ns, id); - if (ns->ndev) - nvme_nvm_update_nvm_info(ns); #ifdef CONFIG_NVME_MULTIPATH if (ns->head->disk) { nvme_update_disk_info(ns->head->disk, ns, id); @@ -1608,7 +1677,7 @@ static int nvme_pr_command(struct block_device *bdev, u32 cdw10, memset(&c, 0, sizeof(c)); c.common.opcode = op; c.common.nsid = cpu_to_le32(ns->head->ns_id); - 
c.common.cdw10[0] = cpu_to_le32(cdw10); + c.common.cdw10 = cpu_to_le32(cdw10); ret = nvme_submit_sync_cmd(ns->queue, &c, data, 16); nvme_put_ns_from_disk(head, srcu_idx); @@ -1682,11 +1751,11 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len, else cmd.common.opcode = nvme_admin_security_recv; cmd.common.nsid = 0; - cmd.common.cdw10[0] = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8); - cmd.common.cdw10[1] = cpu_to_le32(len); + cmd.common.cdw10 = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8); + cmd.common.cdw11 = cpu_to_le32(len); return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len, - ADMIN_TIMEOUT, NVME_QID_ANY, 1, 0); + ADMIN_TIMEOUT, NVME_QID_ANY, 1, 0, false); } EXPORT_SYMBOL_GPL(nvme_sec_submit); #endif /* CONFIG_BLK_SED_OPAL */ @@ -1881,6 +1950,26 @@ static int nvme_configure_timestamp(struct nvme_ctrl *ctrl) return ret; } +static int nvme_configure_acre(struct nvme_ctrl *ctrl) +{ + struct nvme_feat_host_behavior *host; + int ret; + + /* Don't bother enabling the feature if retry delay is not reported */ + if (!ctrl->crdt[0]) + return 0; + + host = kzalloc(sizeof(*host), GFP_KERNEL); + if (!host) + return 0; + + host->acre = NVME_ENABLE_ACRE; + ret = nvme_set_features(ctrl, NVME_FEAT_HOST_BEHAVIOR, 0, + host, sizeof(*host), NULL); + kfree(host); + return ret; +} + static int nvme_configure_apst(struct nvme_ctrl *ctrl) { /* @@ -2402,6 +2491,10 @@ int nvme_init_identify(struct nvme_ctrl *ctrl) ctrl->quirks &= ~NVME_QUIRK_NO_DEEPEST_PS; } + ctrl->crdt[0] = le16_to_cpu(id->crdt1); + ctrl->crdt[1] = le16_to_cpu(id->crdt2); + ctrl->crdt[2] = le16_to_cpu(id->crdt3); + ctrl->oacs = le16_to_cpu(id->oacs); ctrl->oncs = le16_to_cpup(&id->oncs); ctrl->oaes = le32_to_cpu(id->oaes); @@ -2419,6 +2512,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl) ctrl->sgls = le32_to_cpu(id->sgls); ctrl->kas = le16_to_cpu(id->kas); ctrl->max_namespaces = le32_to_cpu(id->mnan); + ctrl->ctratt = le32_to_cpu(id->ctratt); if (id->rtd3e) { /* us -> s */ @@ -2501,6 +2595,10 @@ int nvme_init_identify(struct nvme_ctrl *ctrl) if (ret < 0) return ret; + ret = nvme_configure_acre(ctrl); + if (ret < 0) + return ret; + ctrl->identified = true; return 0; @@ -2776,6 +2874,7 @@ static ssize_t field##_show(struct device *dev, \ static DEVICE_ATTR(field, S_IRUGO, field##_show, NULL); nvme_show_int_function(cntlid); +nvme_show_int_function(numa_node); static ssize_t nvme_sysfs_delete(struct device *dev, struct device_attribute *attr, const char *buf, @@ -2855,6 +2954,7 @@ static struct attribute *nvme_dev_attrs[] = { &dev_attr_subsysnqn.attr, &dev_attr_address.attr, &dev_attr_state.attr, + &dev_attr_numa_node.attr, NULL }; @@ -3065,7 +3165,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid) struct gendisk *disk; struct nvme_id_ns *id; char disk_name[DISK_NAME_LEN]; - int node = dev_to_node(ctrl->dev), flags = GENHD_FL_EXT_DEVT; + int node = ctrl->numa_node, flags = GENHD_FL_EXT_DEVT; ns = kzalloc_node(sizeof(*ns), GFP_KERNEL, node); if (!ns) @@ -3100,13 +3200,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid) nvme_setup_streams_ns(ctrl, ns); nvme_set_disk_name(disk_name, ns, ctrl, &flags); - if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) { - if (nvme_nvm_register(ns, disk_name, node)) { - dev_warn(ctrl->device, "LightNVM init failure\n"); - goto out_unlink_ns; - } - } - disk = alloc_disk_node(0, node); if (!disk) goto out_unlink_ns; @@ -3120,6 +3213,13 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid) 
__nvme_revalidate_disk(disk, id); + if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) { + if (nvme_nvm_register(ns, disk_name, node)) { + dev_warn(ctrl->device, "LightNVM init failure\n"); + goto out_put_disk; + } + } + down_write(&ctrl->namespaces_rwsem); list_add_tail(&ns->list, &ctrl->namespaces); up_write(&ctrl->namespaces_rwsem); @@ -3133,6 +3233,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid) kfree(id); return; + out_put_disk: + put_disk(ns->disk); out_unlink_ns: mutex_lock(&ctrl->subsys->lock); list_del_rcu(&ns->siblings); @@ -3522,6 +3624,7 @@ static void nvme_free_ctrl(struct device *dev) ida_simple_remove(&nvme_instance_ida, ctrl->instance); kfree(ctrl->effects); nvme_mpath_uninit(ctrl); + __free_page(ctrl->discard_page); if (subsys) { mutex_lock(&subsys->lock); @@ -3562,6 +3665,14 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev, memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd)); ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive; + BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) > + PAGE_SIZE); + ctrl->discard_page = alloc_page(GFP_KERNEL); + if (!ctrl->discard_page) { + ret = -ENOMEM; + goto out; + } + ret = ida_simple_get(&nvme_instance_ida, 0, 0, GFP_KERNEL); if (ret < 0) goto out; @@ -3599,6 +3710,8 @@ out_free_name: out_release_instance: ida_simple_remove(&nvme_instance_ida, ctrl->instance); out: + if (ctrl->discard_page) + __free_page(ctrl->discard_page); return ret; } EXPORT_SYMBOL_GPL(nvme_init_ctrl); @@ -3746,7 +3859,7 @@ out: return result; } -void nvme_core_exit(void) +void __exit nvme_core_exit(void) { ida_destroy(&nvme_subsystems_ida); class_destroy(nvme_subsys_class); diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c index bd0969db6225..b2ab213f43de 100644 --- a/drivers/nvme/host/fabrics.c +++ b/drivers/nvme/host/fabrics.c @@ -159,7 +159,7 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val) cmd.prop_get.offset = cpu_to_le32(off); ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &res, NULL, 0, 0, - NVME_QID_ANY, 0, 0); + NVME_QID_ANY, 0, 0, false); if (ret >= 0) *val = le64_to_cpu(res.u64); @@ -206,7 +206,7 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val) cmd.prop_get.offset = cpu_to_le32(off); ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &res, NULL, 0, 0, - NVME_QID_ANY, 0, 0); + NVME_QID_ANY, 0, 0, false); if (ret >= 0) *val = le64_to_cpu(res.u64); @@ -252,7 +252,7 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val) cmd.prop_set.value = cpu_to_le64(val); ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, NULL, 0, 0, - NVME_QID_ANY, 0, 0); + NVME_QID_ANY, 0, 0, false); if (unlikely(ret)) dev_err(ctrl->device, "Property Set error: %d, offset %#x\n", @@ -392,6 +392,9 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl) cmd.connect.kato = ctrl->opts->discovery_nqn ? 
0 : cpu_to_le32((ctrl->kato + NVME_KATO_GRACE) * 1000); + if (ctrl->opts->disable_sqflow) + cmd.connect.cattr |= NVME_CONNECT_DISABLE_SQFLOW; + data = kzalloc(sizeof(*data), GFP_KERNEL); if (!data) return -ENOMEM; @@ -403,7 +406,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl) ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &res, data, sizeof(*data), 0, NVME_QID_ANY, 1, - BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT); + BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT, false); if (ret) { nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32), &cmd, data); @@ -438,7 +441,7 @@ EXPORT_SYMBOL_GPL(nvmf_connect_admin_queue); * > 0: NVMe error status code * < 0: Linux errno error code */ -int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid) +int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid, bool poll) { struct nvme_command cmd; struct nvmf_connect_data *data; @@ -451,6 +454,9 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid) cmd.connect.qid = cpu_to_le16(qid); cmd.connect.sqsize = cpu_to_le16(ctrl->sqsize); + if (ctrl->opts->disable_sqflow) + cmd.connect.cattr |= NVME_CONNECT_DISABLE_SQFLOW; + data = kzalloc(sizeof(*data), GFP_KERNEL); if (!data) return -ENOMEM; @@ -462,7 +468,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid) ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res, data, sizeof(*data), 0, qid, 1, - BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT); + BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT, poll); if (ret) { nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32), &cmd, data); @@ -607,6 +613,11 @@ static const match_table_t opt_tokens = { { NVMF_OPT_HOST_TRADDR, "host_traddr=%s" }, { NVMF_OPT_HOST_ID, "hostid=%s" }, { NVMF_OPT_DUP_CONNECT, "duplicate_connect" }, + { NVMF_OPT_DISABLE_SQFLOW, "disable_sqflow" }, + { NVMF_OPT_HDR_DIGEST, "hdr_digest" }, + { NVMF_OPT_DATA_DIGEST, "data_digest" }, + { NVMF_OPT_NR_WRITE_QUEUES, "nr_write_queues=%d" }, + { NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" }, { NVMF_OPT_ERR, NULL } }; @@ -626,6 +637,8 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts, opts->reconnect_delay = NVMF_DEF_RECONNECT_DELAY; opts->kato = NVME_DEFAULT_KATO; opts->duplicate_connect = false; + opts->hdr_digest = false; + opts->data_digest = false; options = o = kstrdup(buf, GFP_KERNEL); if (!options) @@ -817,6 +830,39 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts, case NVMF_OPT_DUP_CONNECT: opts->duplicate_connect = true; break; + case NVMF_OPT_DISABLE_SQFLOW: + opts->disable_sqflow = true; + break; + case NVMF_OPT_HDR_DIGEST: + opts->hdr_digest = true; + break; + case NVMF_OPT_DATA_DIGEST: + opts->data_digest = true; + break; + case NVMF_OPT_NR_WRITE_QUEUES: + if (match_int(args, &token)) { + ret = -EINVAL; + goto out; + } + if (token <= 0) { + pr_err("Invalid nr_write_queues %d\n", token); + ret = -EINVAL; + goto out; + } + opts->nr_write_queues = token; + break; + case NVMF_OPT_NR_POLL_QUEUES: + if (match_int(args, &token)) { + ret = -EINVAL; + goto out; + } + if (token <= 0) { + pr_err("Invalid nr_poll_queues %d\n", token); + ret = -EINVAL; + goto out; + } + opts->nr_poll_queues = token; + break; default: pr_warn("unknown parameter or missing value '%s' in ctrl creation request\n", p); @@ -933,7 +979,8 @@ EXPORT_SYMBOL_GPL(nvmf_free_options); #define NVMF_REQUIRED_OPTS (NVMF_OPT_TRANSPORT | NVMF_OPT_NQN) #define NVMF_ALLOWED_OPTS (NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \ NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \ - NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT) + NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\ + 
NVMF_OPT_DISABLE_SQFLOW) static struct nvme_ctrl * nvmf_create_ctrl(struct device *dev, const char *buf, size_t count) diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h index 6ea6275f332a..478343b73e38 100644 --- a/drivers/nvme/host/fabrics.h +++ b/drivers/nvme/host/fabrics.h @@ -58,6 +58,11 @@ enum { NVMF_OPT_CTRL_LOSS_TMO = 1 << 11, NVMF_OPT_HOST_ID = 1 << 12, NVMF_OPT_DUP_CONNECT = 1 << 13, + NVMF_OPT_DISABLE_SQFLOW = 1 << 14, + NVMF_OPT_HDR_DIGEST = 1 << 15, + NVMF_OPT_DATA_DIGEST = 1 << 16, + NVMF_OPT_NR_WRITE_QUEUES = 1 << 17, + NVMF_OPT_NR_POLL_QUEUES = 1 << 18, }; /** @@ -85,6 +90,11 @@ enum { * @max_reconnects: maximum number of allowed reconnect attempts before removing * the controller, (-1) means reconnect forever, zero means remove * immediately; + * @disable_sqflow: disable controller sq flow control + * @hdr_digest: generate/verify header digest (TCP) + * @data_digest: generate/verify data digest (TCP) + * @nr_write_queues: number of queues for write I/O + * @nr_poll_queues: number of queues for polling I/O */ struct nvmf_ctrl_options { unsigned mask; @@ -101,6 +111,11 @@ struct nvmf_ctrl_options { unsigned int kato; struct nvmf_host *host; int max_reconnects; + bool disable_sqflow; + bool hdr_digest; + bool data_digest; + unsigned int nr_write_queues; + unsigned int nr_poll_queues; }; /* @@ -156,7 +171,7 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val); int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val); int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val); int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl); -int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid); +int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid, bool poll); int nvmf_register_transport(struct nvmf_transport_ops *ops); void nvmf_unregister_transport(struct nvmf_transport_ops *ops); void nvmf_free_options(struct nvmf_ctrl_options *opts); diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c index feb86b59170e..89accc76d71c 100644 --- a/drivers/nvme/host/fc.c +++ b/drivers/nvme/host/fc.c @@ -1975,7 +1975,7 @@ nvme_fc_connect_io_queues(struct nvme_fc_ctrl *ctrl, u16 qsize) (qsize / 5)); if (ret) break; - ret = nvmf_connect_io_queue(&ctrl->ctrl, i); + ret = nvmf_connect_io_queue(&ctrl->ctrl, i, false); if (ret) break; @@ -2326,38 +2326,6 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx, return nvme_fc_start_fcp_op(ctrl, queue, op, data_len, io_dir); } -static struct blk_mq_tags * -nvme_fc_tagset(struct nvme_fc_queue *queue) -{ - if (queue->qnum == 0) - return queue->ctrl->admin_tag_set.tags[queue->qnum]; - - return queue->ctrl->tag_set.tags[queue->qnum - 1]; -} - -static int -nvme_fc_poll(struct blk_mq_hw_ctx *hctx, unsigned int tag) - -{ - struct nvme_fc_queue *queue = hctx->driver_data; - struct nvme_fc_ctrl *ctrl = queue->ctrl; - struct request *req; - struct nvme_fc_fcp_op *op; - - req = blk_mq_tag_to_rq(nvme_fc_tagset(queue), tag); - if (!req) - return 0; - - op = blk_mq_rq_to_pdu(req); - - if ((atomic_read(&op->state) == FCPOP_STATE_ACTIVE) && - (ctrl->lport->ops->poll_queue)) - ctrl->lport->ops->poll_queue(&ctrl->lport->localport, - queue->lldd_handle); - - return ((atomic_read(&op->state) != FCPOP_STATE_ACTIVE)); -} - static void nvme_fc_submit_async_event(struct nvme_ctrl *arg) { @@ -2410,7 +2378,7 @@ nvme_fc_complete_rq(struct request *rq) * status. The done path will return the io request back to the block * layer with an error status. 
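One behavioral note on the hunk below: the blk-mq busy-tag iterator callback type changes from void to bool in this series, so a callback can tell the iterator whether to keep walking tags. nvme_fc_terminate_exchange() simply aborts the op and returns true to continue. A minimal standalone sketch of that contract, with invented names (busy_iter, terminate_exchange) rather than the real blk-mq API:

	#include <stdbool.h>
	#include <stdio.h>

	struct request { int tag; };

	typedef bool (*busy_iter_fn)(struct request *, void *, bool);

	/* walk "busy" requests until the callback returns false */
	static void busy_iter(struct request *rqs, int n, busy_iter_fn fn, void *data)
	{
		for (int i = 0; i < n; i++)
			if (!fn(&rqs[i], data, false))
				break;		/* callback asked us to stop early */
	}

	/* like nvme_fc_terminate_exchange(): abort, then keep iterating */
	static bool terminate_exchange(struct request *rq, void *data, bool reserved)
	{
		printf("aborting tag %d\n", rq->tag);
		return true;
	}

	int main(void)
	{
		struct request rqs[3] = { {1}, {2}, {3} };

		busy_iter(rqs, 3, terminate_exchange, NULL);
		return 0;
	}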
*/ -static void +static bool nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved) { struct nvme_ctrl *nctrl = data; @@ -2418,6 +2386,7 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved) struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(req); __nvme_fc_abort_op(ctrl, op); + return true; } @@ -2427,7 +2396,6 @@ static const struct blk_mq_ops nvme_fc_mq_ops = { .init_request = nvme_fc_init_request, .exit_request = nvme_fc_exit_request, .init_hctx = nvme_fc_init_hctx, - .poll = nvme_fc_poll, .timeout = nvme_fc_timeout, }; @@ -2457,7 +2425,7 @@ nvme_fc_create_io_queues(struct nvme_fc_ctrl *ctrl) ctrl->tag_set.ops = &nvme_fc_mq_ops; ctrl->tag_set.queue_depth = ctrl->ctrl.opts->queue_size; ctrl->tag_set.reserved_tags = 1; /* fabric connect */ - ctrl->tag_set.numa_node = NUMA_NO_NODE; + ctrl->tag_set.numa_node = ctrl->ctrl.numa_node; ctrl->tag_set.flags = BLK_MQ_F_SHOULD_MERGE; ctrl->tag_set.cmd_size = struct_size((struct nvme_fcp_op_w_sgl *)NULL, priv, @@ -3050,6 +3018,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, ctrl->ctrl.opts = opts; ctrl->ctrl.nr_reconnects = 0; + ctrl->ctrl.numa_node = dev_to_node(lport->dev); INIT_LIST_HEAD(&ctrl->ctrl_list); ctrl->lport = lport; ctrl->rport = rport; @@ -3090,7 +3059,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, ctrl->admin_tag_set.ops = &nvme_fc_admin_mq_ops; ctrl->admin_tag_set.queue_depth = NVME_AQ_MQ_TAG_DEPTH; ctrl->admin_tag_set.reserved_tags = 2; /* fabric connect + Keep-Alive */ - ctrl->admin_tag_set.numa_node = NUMA_NO_NODE; + ctrl->admin_tag_set.numa_node = ctrl->ctrl.numa_node; ctrl->admin_tag_set.cmd_size = struct_size((struct nvme_fcp_op_w_sgl *)NULL, priv, ctrl->lport->ops->fcprqst_priv_sz); diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index a4f3b263cd6c..b759c25c89c8 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -577,7 +577,8 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev, struct ppa_addr ppa; size_t left = nchks * sizeof(struct nvme_nvm_chk_meta); size_t log_pos, offset, len; - int ret, i, max_len; + int i, max_len; + int ret = 0; /* * limit requests to maximum 256K to avoid issuing arbitrary large @@ -731,11 +732,12 @@ static int nvme_nvm_submit_io_sync(struct nvm_dev *dev, struct nvm_rq *rqd) return ret; } -static void *nvme_nvm_create_dma_pool(struct nvm_dev *nvmdev, char *name) +static void *nvme_nvm_create_dma_pool(struct nvm_dev *nvmdev, char *name, + int size) { struct nvme_ns *ns = nvmdev->q->queuedata; - return dma_pool_create(name, ns->ctrl->dev, PAGE_SIZE, PAGE_SIZE, 0); + return dma_pool_create(name, ns->ctrl->dev, size, PAGE_SIZE, 0); } static void nvme_nvm_destroy_dma_pool(void *pool) @@ -935,9 +937,9 @@ static int nvme_nvm_user_vcmd(struct nvme_ns *ns, int admin, /* cdw11-12 */ c.ph_rw.length = cpu_to_le16(vcmd.nppas); c.ph_rw.control = cpu_to_le16(vcmd.control); - c.common.cdw10[3] = cpu_to_le32(vcmd.cdw13); - c.common.cdw10[4] = cpu_to_le32(vcmd.cdw14); - c.common.cdw10[5] = cpu_to_le32(vcmd.cdw15); + c.common.cdw13 = cpu_to_le32(vcmd.cdw13); + c.common.cdw14 = cpu_to_le32(vcmd.cdw14); + c.common.cdw15 = cpu_to_le32(vcmd.cdw15); if (vcmd.timeout_ms) timeout = msecs_to_jiffies(vcmd.timeout_ms); @@ -972,22 +974,11 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) } } -void nvme_nvm_update_nvm_info(struct nvme_ns *ns) -{ - struct nvm_dev *ndev = ns->ndev; - struct nvm_geo *geo = &ndev->geo; - - if (geo->version == 
NVM_OCSSD_SPEC_12) - return; - - geo->csecs = 1 << ns->lba_shift; - geo->sos = ns->ms; -} - int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) { struct request_queue *q = ns->queue; struct nvm_dev *dev; + struct nvm_geo *geo; _nvme_nvm_check_size(); @@ -995,6 +986,12 @@ int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) if (!dev) return -ENOMEM; + /* Note that csecs and sos will be overridden if it is a 1.2 drive. */ + geo = &dev->geo; + geo->csecs = 1 << ns->lba_shift; + geo->sos = ns->ms; + geo->ext = ns->ext; + dev->q = q; memcpy(dev->name, disk_name, DISK_NAME_LEN); dev->ops = &nvme_nvm_dev_ops; diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 9901afd804ce..183ec17ba067 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -141,7 +141,7 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node) test_bit(NVME_NS_ANA_PENDING, &ns->flags)) continue; - distance = node_distance(node, dev_to_node(ns->ctrl->dev)); + distance = node_distance(node, ns->ctrl->numa_node); switch (ns->ana_state) { case NVME_ANA_OPTIMIZED: @@ -220,21 +220,6 @@ static blk_qc_t nvme_ns_head_make_request(struct request_queue *q, return ret; } -static bool nvme_ns_head_poll(struct request_queue *q, blk_qc_t qc) -{ - struct nvme_ns_head *head = q->queuedata; - struct nvme_ns *ns; - bool found = false; - int srcu_idx; - - srcu_idx = srcu_read_lock(&head->srcu); - ns = srcu_dereference(head->current_path[numa_node_id()], &head->srcu); - if (likely(ns && nvme_path_is_optimized(ns))) - found = ns->queue->poll_fn(q, qc); - srcu_read_unlock(&head->srcu, srcu_idx); - return found; -} - static void nvme_requeue_work(struct work_struct *work) { struct nvme_ns_head *head = @@ -276,12 +261,11 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head) if (!(ctrl->subsys->cmic & (1 << 1)) || !multipath) return 0; - q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE, NULL); + q = blk_alloc_queue_node(GFP_KERNEL, ctrl->numa_node); if (!q) goto out; q->queuedata = head; blk_queue_make_request(q, nvme_ns_head_make_request); - q->poll_fn = nvme_ns_head_poll; blk_queue_flag_set(QUEUE_FLAG_NONROT, q); /* set to a default value for 512 until disk is validated */ blk_queue_logical_block_size(q, 512); diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 081cbdcce880..2b36ac922596 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -145,6 +145,7 @@ enum nvme_ctrl_state { }; struct nvme_ctrl { + bool comp_seen; enum nvme_ctrl_state state; bool identified; spinlock_t lock; @@ -153,6 +154,7 @@ struct nvme_ctrl { struct request_queue *connect_q; struct device *dev; int instance; + int numa_node; struct blk_mq_tag_set *tagset; struct blk_mq_tag_set *admin_tagset; struct list_head namespaces; @@ -179,6 +181,7 @@ struct nvme_ctrl { u32 page_size; u32 max_hw_sectors; u32 max_segments; + u16 crdt[3]; u16 oncs; u16 oacs; u16 nssa; @@ -193,6 +196,7 @@ struct nvme_ctrl { u8 apsta; u32 oaes; u32 aen_result; + u32 ctratt; unsigned int shutdown_timeout; unsigned int kato; bool subsystem; @@ -237,6 +241,9 @@ struct nvme_ctrl { u16 maxcmd; int nr_reconnects; struct nvmf_ctrl_options *opts; + + struct page *discard_page; + unsigned long discard_page_busy; }; struct nvme_subsystem { @@ -364,15 +371,6 @@ static inline void nvme_fault_inject_fini(struct nvme_ns *ns) {} static inline void nvme_should_fail(struct request *req) {} #endif -static inline bool nvme_ctrl_ready(struct nvme_ctrl 
*ctrl) -{ - u32 val = 0; - - if (ctrl->ops->reg_read32(ctrl, NVME_REG_CSTS, &val)) - return false; - return val & NVME_CSTS_RDY; -} - static inline int nvme_reset_subsystem(struct nvme_ctrl *ctrl) { if (!ctrl->subsystem) @@ -408,7 +406,7 @@ static inline void nvme_put_ctrl(struct nvme_ctrl *ctrl) } void nvme_complete_rq(struct request *req); -void nvme_cancel_request(struct request *req, void *data, bool reserved); +bool nvme_cancel_request(struct request *req, void *data, bool reserved); bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl, enum nvme_ctrl_state new_state); int nvme_disable_ctrl(struct nvme_ctrl *ctrl, u64 cap); @@ -449,7 +447,7 @@ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, union nvme_result *result, void *buffer, unsigned bufflen, unsigned timeout, int qid, int at_head, - blk_mq_req_flags_t flags); + blk_mq_req_flags_t flags, bool poll); int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count); void nvme_stop_keep_alive(struct nvme_ctrl *ctrl); int nvme_reset_ctrl(struct nvme_ctrl *ctrl); @@ -545,13 +543,11 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl) #endif /* CONFIG_NVME_MULTIPATH */ #ifdef CONFIG_NVM -void nvme_nvm_update_nvm_info(struct nvme_ns *ns); int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node); void nvme_nvm_unregister(struct nvme_ns *ns); extern const struct attribute_group nvme_nvm_attr_group; int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg); #else -static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {}; static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node) { @@ -572,6 +568,6 @@ static inline struct nvme_ns *nvme_get_ns_from_dev(struct device *dev) } int __init nvme_core_init(void); -void nvme_core_exit(void); +void __exit nvme_core_exit(void); #endif /* _NVME_H */ diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index c33bb201b884..5a0bf6a24d50 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -32,6 +32,7 @@ #include #include +#include "trace.h" #include "nvme.h" #define SQ_SIZE(depth) (depth * sizeof(struct nvme_command)) @@ -74,6 +75,22 @@ static int io_queue_depth = 1024; module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644); MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2"); +static int queue_count_set(const char *val, const struct kernel_param *kp); +static const struct kernel_param_ops queue_count_ops = { + .set = queue_count_set, + .get = param_get_int, +}; + +static int write_queues; +module_param_cb(write_queues, &queue_count_ops, &write_queues, 0644); +MODULE_PARM_DESC(write_queues, + "Number of queues to use for writes. 
If not set, reads and writes " + "will share a queue set."); + +static int poll_queues = 0; +module_param_cb(poll_queues, &queue_count_ops, &poll_queues, 0644); +MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO."); + struct nvme_dev; struct nvme_queue; @@ -92,6 +109,7 @@ struct nvme_dev { struct dma_pool *prp_small_pool; unsigned online_queues; unsigned max_qid; + unsigned io_queues[HCTX_MAX_TYPES]; unsigned int num_vecs; int q_depth; u32 db_stride; @@ -105,7 +123,6 @@ struct nvme_dev { u32 cmbsz; u32 cmbloc; struct nvme_ctrl ctrl; - struct completion ioq_wait; mempool_t *iod_mempool; @@ -134,6 +151,19 @@ static int io_queue_depth_set(const char *val, const struct kernel_param *kp) return param_set_int(val, kp); } +static int queue_count_set(const char *val, const struct kernel_param *kp) +{ + int n = 0, ret; + + ret = kstrtoint(val, 10, &n); + if (ret) + return ret; + if (n > num_possible_cpus()) + n = num_possible_cpus(); + + return param_set_int(val, kp); +} + static inline unsigned int sq_idx(unsigned int qid, u32 stride) { return qid * 2 * stride; } @@ -158,8 +186,8 @@ struct nvme_queue { struct nvme_dev *dev; spinlock_t sq_lock; struct nvme_command *sq_cmds; - bool sq_cmds_is_io; - spinlock_t cq_lock ____cacheline_aligned_in_smp; + /* only used for poll queues: */ + spinlock_t cq_poll_lock ____cacheline_aligned_in_smp; volatile struct nvme_completion *cqes; struct blk_mq_tags **tags; dma_addr_t sq_dma_addr; @@ -168,14 +196,20 @@ struct nvme_queue { u16 q_depth; s16 cq_vector; u16 sq_tail; + u16 last_sq_tail; u16 cq_head; u16 last_cq_head; u16 qid; u8 cq_phase; + unsigned long flags; +#define NVMEQ_ENABLED 0 +#define NVMEQ_SQ_CMB 1 +#define NVMEQ_DELETE_ERROR 2 u32 *dbbuf_sq_db; u32 *dbbuf_cq_db; u32 *dbbuf_sq_ei; u32 *dbbuf_cq_ei; + struct completion delete_done; }; /* @@ -218,9 +252,20 @@ static inline void _nvme_check_size(void) BUILD_BUG_ON(sizeof(struct nvme_dbbuf) != 64); } +static unsigned int max_io_queues(void) +{ + return num_possible_cpus() + write_queues + poll_queues; +} + +static unsigned int max_queue_count(void) +{ + /* IO queues + admin queue */ + return 1 + max_io_queues(); +} + static inline unsigned int nvme_dbbuf_size(u32 stride) { - return ((num_possible_cpus() + 1) * 8 * stride); + return (max_queue_count() * 8 * stride); } static int nvme_dbbuf_dma_alloc(struct nvme_dev *dev) @@ -431,30 +476,90 @@ static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req, return 0; } +static int queue_irq_offset(struct nvme_dev *dev) +{ + /* if we have more than 1 vec, admin queue offsets us by 1 */ + if (dev->num_vecs > 1) + return 1; + + return 0; +} + static int nvme_pci_map_queues(struct blk_mq_tag_set *set) { struct nvme_dev *dev = set->driver_data; + int i, qoff, offset; + + offset = queue_irq_offset(dev); + for (i = 0, qoff = 0; i < set->nr_maps; i++) { + struct blk_mq_queue_map *map = &set->map[i]; + + map->nr_queues = dev->io_queues[i]; + if (!map->nr_queues) { + BUG_ON(i == HCTX_TYPE_DEFAULT); + continue; + } - return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev), - dev->num_vecs > 1 ? 1 /* admin queue */ : 0); + /* + * The poll queue(s) don't have an IRQ (and hence IRQ + * affinity), so use the regular blk-mq cpu mapping + */ + map->queue_offset = qoff; + if (i != HCTX_TYPE_POLL) + blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset); + else + blk_mq_map_queues(map); + qoff += map->nr_queues; + offset += map->nr_queues; + } + + return 0; +} + +/* + * Write sq tail if we are asked to, or if the next command would wrap.
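The comment above is the whole design: callers queue commands without touching the doorbell, and the doorbell MMIO happens only at the end of a batch (bd->last or commit_rqs) or when the ring is about to wrap past the last value written. A minimal userspace sketch of that policy, assuming a plain variable stands in for the writel() to q_db:

	#include <stdbool.h>
	#include <stdio.h>

	#define Q_DEPTH 4

	static unsigned sq_tail, last_sq_tail, doorbell;

	/* mirror of nvme_write_sq_db(): publish the tail only when needed */
	static void write_sq_db(bool write_sq)
	{
		if (!write_sq) {
			unsigned next_tail = sq_tail + 1;

			if (next_tail == Q_DEPTH)
				next_tail = 0;
			if (next_tail != last_sq_tail)
				return;		/* still room: defer the doorbell */
		}
		doorbell = sq_tail;		/* stands in for writel(sq_tail, q_db) */
		last_sq_tail = sq_tail;
	}

	static void submit_cmd(bool last)
	{
		if (++sq_tail == Q_DEPTH)
			sq_tail = 0;
		write_sq_db(last);
	}

	int main(void)
	{
		submit_cmd(false);	/* batched, doorbell deferred */
		submit_cmd(false);
		submit_cmd(true);	/* end of batch: ring it */
		printf("tail=%u doorbell=%u\n", sq_tail, doorbell);
		return 0;
	}

Fewer MMIO writes per batch is the point; the wrap check keeps the scheme correct when a batch is larger than the ring.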
+ */ +static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq) +{ + if (!write_sq) { + u16 next_tail = nvmeq->sq_tail + 1; + + if (next_tail == nvmeq->q_depth) + next_tail = 0; + if (next_tail != nvmeq->last_sq_tail) + return; + } + + if (nvme_dbbuf_update_and_check_event(nvmeq->sq_tail, + nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei)) + writel(nvmeq->sq_tail, nvmeq->q_db); + nvmeq->last_sq_tail = nvmeq->sq_tail; } /** * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell * @nvmeq: The queue to use * @cmd: The command to send + * @write_sq: whether to write to the SQ doorbell */ -static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd) +static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd, + bool write_sq) { spin_lock(&nvmeq->sq_lock); - memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd)); - if (++nvmeq->sq_tail == nvmeq->q_depth) nvmeq->sq_tail = 0; - if (nvme_dbbuf_update_and_check_event(nvmeq->sq_tail, - nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei)) - writel(nvmeq->sq_tail, nvmeq->q_db); + nvme_write_sq_db(nvmeq, write_sq); + spin_unlock(&nvmeq->sq_lock); +} + +static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx) +{ + struct nvme_queue *nvmeq = hctx->driver_data; + + spin_lock(&nvmeq->sq_lock); + if (nvmeq->sq_tail != nvmeq->last_sq_tail) + nvme_write_sq_db(nvmeq, true); spin_unlock(&nvmeq->sq_lock); } @@ -822,7 +927,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, * We should not need to do this, but we're still using this to * ensure we can drain requests on a dying queue. */ - if (unlikely(nvmeq->cq_vector < 0)) + if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags))) return BLK_STS_IOERR; ret = nvme_setup_cmd(ns, req, &cmnd); @@ -840,7 +945,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, } blk_mq_start_request(req); - nvme_submit_cmd(nvmeq, &cmnd); + nvme_submit_cmd(nvmeq, &cmnd, bd->last); return BLK_STS_OK; out_cleanup_iod: nvme_free_iod(dev, req); @@ -899,6 +1004,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx) } req = blk_mq_tag_to_rq(*nvmeq->tags, cqe->command_id); + trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail); nvme_end_request(req, cqe->status, cqe->result); } @@ -919,15 +1025,15 @@ static inline void nvme_update_cq_head(struct nvme_queue *nvmeq) } } -static inline bool nvme_process_cq(struct nvme_queue *nvmeq, u16 *start, - u16 *end, int tag) +static inline int nvme_process_cq(struct nvme_queue *nvmeq, u16 *start, + u16 *end, unsigned int tag) { - bool found = false; + int found = 0; *start = nvmeq->cq_head; - while (!found && nvme_cqe_pending(nvmeq)) { - if (nvmeq->cqes[nvmeq->cq_head].command_id == tag) - found = true; + while (nvme_cqe_pending(nvmeq)) { + if (tag == -1U || nvmeq->cqes[nvmeq->cq_head].command_id == tag) + found++; nvme_update_cq_head(nvmeq); } *end = nvmeq->cq_head; @@ -943,12 +1049,16 @@ static irqreturn_t nvme_irq(int irq, void *data) irqreturn_t ret = IRQ_NONE; u16 start, end; - spin_lock(&nvmeq->cq_lock); + /* + * The rmb/wmb pair ensures we see all updates from a previous run of + * the irq handler, even if that was on another CPU. 
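Because genirq runs at most one instance of a queue's interrupt handler at a time, the cq_lock can go away entirely; the rmb()/wmb() pair only has to order the per-queue state between one invocation and the next, which may land on another CPU. A rough C11-atomics picture of that pairing, purely illustrative and not the kernel primitives:

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_uint seq;		/* "previous run has published" marker */
	static unsigned cq_head;	/* per-queue state the handler mutates */

	static void irq_handler(void)
	{
		/* rmb() analogue: observe everything the previous run wrote */
		atomic_load_explicit(&seq, memory_order_acquire);

		cq_head++;		/* process completions, advance the head */

		/* wmb() analogue: publish our updates for the next run */
		atomic_fetch_add_explicit(&seq, 1, memory_order_release);
	}

	int main(void)
	{
		irq_handler();
		irq_handler();
		printf("cq_head=%u runs=%u\n", cq_head, atomic_load(&seq));
		return 0;
	}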
+ */ + rmb(); if (nvmeq->cq_head != nvmeq->last_cq_head) ret = IRQ_HANDLED; nvme_process_cq(nvmeq, &start, &end, -1); nvmeq->last_cq_head = nvmeq->cq_head; - spin_unlock(&nvmeq->cq_lock); + wmb(); if (start != end) { nvme_complete_cqes(nvmeq, start, end); @@ -966,27 +1076,50 @@ static irqreturn_t nvme_irq_check(int irq, void *data) return IRQ_NONE; } -static int __nvme_poll(struct nvme_queue *nvmeq, unsigned int tag) +/* + * Poll for completions on any queue, including those not dedicated to polling. + * Can be called from any context. + */ +static int nvme_poll_irqdisable(struct nvme_queue *nvmeq, unsigned int tag) { + struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev); u16 start, end; - bool found; + int found; - if (!nvme_cqe_pending(nvmeq)) - return 0; - - spin_lock_irq(&nvmeq->cq_lock); - found = nvme_process_cq(nvmeq, &start, &end, tag); - spin_unlock_irq(&nvmeq->cq_lock); + /* + * For a poll queue we need to protect against the polling thread + * using the CQ lock. For normal interrupt driven queues we have + * to disable the interrupt to avoid racing with it. + */ + if (nvmeq->cq_vector == -1) { + spin_lock(&nvmeq->cq_poll_lock); + found = nvme_process_cq(nvmeq, &start, &end, tag); + spin_unlock(&nvmeq->cq_poll_lock); + } else { + disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector)); + found = nvme_process_cq(nvmeq, &start, &end, tag); + enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector)); + } nvme_complete_cqes(nvmeq, start, end); return found; } -static int nvme_poll(struct blk_mq_hw_ctx *hctx, unsigned int tag) +static int nvme_poll(struct blk_mq_hw_ctx *hctx) { struct nvme_queue *nvmeq = hctx->driver_data; + u16 start, end; + bool found; + + if (!nvme_cqe_pending(nvmeq)) + return 0; + + spin_lock(&nvmeq->cq_poll_lock); + found = nvme_process_cq(nvmeq, &start, &end, -1); + spin_unlock(&nvmeq->cq_poll_lock); - return __nvme_poll(nvmeq, tag); + nvme_complete_cqes(nvmeq, start, end); + return found; } static void nvme_pci_submit_async_event(struct nvme_ctrl *ctrl) @@ -998,7 +1131,7 @@ static void nvme_pci_submit_async_event(struct nvme_ctrl *ctrl) memset(&c, 0, sizeof(c)); c.common.opcode = nvme_admin_async_event; c.common.command_id = NVME_AQ_BLK_MQ_DEPTH; - nvme_submit_cmd(nvmeq, &c); + nvme_submit_cmd(nvmeq, &c, true); } static int adapter_delete_queue(struct nvme_dev *dev, u8 opcode, u16 id) @@ -1016,7 +1149,10 @@ static int adapter_alloc_cq(struct nvme_dev *dev, u16 qid, struct nvme_queue *nvmeq, s16 vector) { struct nvme_command c; - int flags = NVME_QUEUE_PHYS_CONTIG | NVME_CQ_IRQ_ENABLED; + int flags = NVME_QUEUE_PHYS_CONTIG; + + if (vector != -1) + flags |= NVME_CQ_IRQ_ENABLED; /* * Note: we (ab)use the fact that the prp fields survive if no data @@ -1028,7 +1164,10 @@ static int adapter_alloc_cq(struct nvme_dev *dev, u16 qid, c.create_cq.cqid = cpu_to_le16(qid); c.create_cq.qsize = cpu_to_le16(nvmeq->q_depth - 1); c.create_cq.cq_flags = cpu_to_le16(flags); - c.create_cq.irq_vector = cpu_to_le16(vector); + if (vector != -1) + c.create_cq.irq_vector = cpu_to_le16(vector); + else + c.create_cq.irq_vector = 0; return nvme_submit_sync_cmd(dev->ctrl.admin_q, &c, NULL, 0); } @@ -1157,7 +1296,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) /* * Did we miss an interrupt?
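The timeout path answers that question by polling the queue directly before doing anything drastic: if the completion is sitting in the CQ with its interrupt lost, it is reaped and the request finishes normally. A toy sketch of that decision order, with poll_cq_for_tag() as a hypothetical stand-in for nvme_poll_irqdisable():

	#include <stdbool.h>
	#include <stdio.h>

	static bool poll_cq_for_tag(int tag)
	{
		return tag == 42;	/* pretend tag 42 completed silently */
	}

	static const char *handle_timeout(int tag)
	{
		if (poll_cq_for_tag(tag))
			return "completion polled, no reset needed";
		return "escalate: abort the command or reset the controller";
	}

	int main(void)
	{
		printf("%s\n", handle_timeout(42));
		printf("%s\n", handle_timeout(7));
		return 0;
	}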
*/ - if (__nvme_poll(nvmeq, req->tag)) { + if (nvme_poll_irqdisable(nvmeq, req->tag)) { dev_warn(dev->ctrl.device, "I/O %d QID %d timeout, completion polled\n", req->tag, nvmeq->qid); @@ -1237,17 +1376,15 @@ static void nvme_free_queue(struct nvme_queue *nvmeq) { dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth), (void *)nvmeq->cqes, nvmeq->cq_dma_addr); + if (!nvmeq->sq_cmds) + return; - if (nvmeq->sq_cmds) { - if (nvmeq->sq_cmds_is_io) - pci_free_p2pmem(to_pci_dev(nvmeq->q_dmadev), - nvmeq->sq_cmds, - SQ_SIZE(nvmeq->q_depth)); - else - dma_free_coherent(nvmeq->q_dmadev, - SQ_SIZE(nvmeq->q_depth), - nvmeq->sq_cmds, - nvmeq->sq_dma_addr); + if (test_and_clear_bit(NVMEQ_SQ_CMB, &nvmeq->flags)) { + pci_free_p2pmem(to_pci_dev(nvmeq->q_dmadev), + nvmeq->sq_cmds, SQ_SIZE(nvmeq->q_depth)); + } else { + dma_free_coherent(nvmeq->q_dmadev, SQ_SIZE(nvmeq->q_depth), + nvmeq->sq_cmds, nvmeq->sq_dma_addr); } } @@ -1267,47 +1404,32 @@ static void nvme_free_queues(struct nvme_dev *dev, int lowest) */ static int nvme_suspend_queue(struct nvme_queue *nvmeq) { - int vector; - - spin_lock_irq(&nvmeq->cq_lock); - if (nvmeq->cq_vector == -1) { - spin_unlock_irq(&nvmeq->cq_lock); + if (!test_and_clear_bit(NVMEQ_ENABLED, &nvmeq->flags)) return 1; - } - vector = nvmeq->cq_vector; - nvmeq->dev->online_queues--; - nvmeq->cq_vector = -1; - spin_unlock_irq(&nvmeq->cq_lock); - /* - * Ensure that nvme_queue_rq() sees it ->cq_vector == -1 without - * having to grab the lock. - */ + /* ensure that nvme_queue_rq() sees NVMEQ_ENABLED cleared */ mb(); + nvmeq->dev->online_queues--; if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q) blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q); - - pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq); - + if (nvmeq->cq_vector == -1) + return 0; + pci_free_irq(to_pci_dev(nvmeq->dev->dev), nvmeq->cq_vector, nvmeq); + nvmeq->cq_vector = -1; return 0; } static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown) { struct nvme_queue *nvmeq = &dev->queues[0]; - u16 start, end; if (shutdown) nvme_shutdown_ctrl(&dev->ctrl); else nvme_disable_ctrl(&dev->ctrl, dev->ctrl.cap); - spin_lock_irq(&nvmeq->cq_lock); - nvme_process_cq(nvmeq, &start, &end, -1); - spin_unlock_irq(&nvmeq->cq_lock); - - nvme_complete_cqes(nvmeq, start, end); + nvme_poll_irqdisable(nvmeq, -1); } static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues, @@ -1343,15 +1465,14 @@ static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq, nvmeq->sq_cmds = pci_alloc_p2pmem(pdev, SQ_SIZE(depth)); nvmeq->sq_dma_addr = pci_p2pmem_virt_to_bus(pdev, nvmeq->sq_cmds); - nvmeq->sq_cmds_is_io = true; - } - - if (!nvmeq->sq_cmds) { - nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth), - &nvmeq->sq_dma_addr, GFP_KERNEL); - nvmeq->sq_cmds_is_io = false; + if (nvmeq->sq_dma_addr) { + set_bit(NVMEQ_SQ_CMB, &nvmeq->flags); + return 0; + } } + nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth), + &nvmeq->sq_dma_addr, GFP_KERNEL); if (!nvmeq->sq_cmds) return -ENOMEM; return 0; @@ -1375,7 +1496,7 @@ static int nvme_alloc_queue(struct nvme_dev *dev, int qid, int depth) nvmeq->q_dmadev = dev->dev; nvmeq->dev = dev; spin_lock_init(&nvmeq->sq_lock); - spin_lock_init(&nvmeq->cq_lock); + spin_lock_init(&nvmeq->cq_poll_lock); nvmeq->cq_head = 0; nvmeq->cq_phase = 1; nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride]; @@ -1411,28 +1532,34 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid) { struct nvme_dev *dev = nvmeq->dev; - spin_lock_irq(&nvmeq->cq_lock); nvmeq->sq_tail = 0; 
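An aside on the NVMEQ_* bits introduced earlier in this patch: queue life-cycle state moves from sentinel values (cq_vector == -1) into one word of atomic flag bits, so nvme_suspend_queue() becomes an idempotent test-and-clear and only one caller actually tears the queue down. A C11 sketch of that pattern, with invented names:

	#include <stdatomic.h>
	#include <stdio.h>

	enum { Q_ENABLED = 1u << 0, Q_SQ_CMB = 1u << 1, Q_DELETE_ERROR = 1u << 2 };

	static atomic_uint q_flags;

	/* analogue of test_and_clear_bit(NVMEQ_ENABLED, &nvmeq->flags) */
	static int suspend_queue(void)
	{
		unsigned old = atomic_fetch_and(&q_flags, ~Q_ENABLED);

		if (!(old & Q_ENABLED))
			return 1;	/* already suspended, nothing to do */
		/* ... quiesce, free the IRQ, etc. ... */
		return 0;
	}

	int main(void)
	{
		atomic_fetch_or(&q_flags, Q_ENABLED);
		printf("first: %d\n", suspend_queue());		/* 0: did the work */
		printf("second: %d\n", suspend_queue());	/* 1: already down */
		return 0;
	}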
+ nvmeq->last_sq_tail = 0; nvmeq->cq_head = 0; nvmeq->cq_phase = 1; nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride]; memset((void *)nvmeq->cqes, 0, CQ_SIZE(nvmeq->q_depth)); nvme_dbbuf_init(dev, nvmeq, qid); dev->online_queues++; - spin_unlock_irq(&nvmeq->cq_lock); + wmb(); /* ensure the first interrupt sees the initialization */ } -static int nvme_create_queue(struct nvme_queue *nvmeq, int qid) +static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled) { struct nvme_dev *dev = nvmeq->dev; int result; s16 vector; + clear_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags); + /* * A queue's vector matches the queue identifier unless the controller * has only one vector available. */ - vector = dev->num_vecs == 1 ? 0 : qid; + if (!polled) + vector = dev->num_vecs == 1 ? 0 : qid; + else + vector = -1; + result = adapter_alloc_cq(dev, qid, nvmeq, vector); if (result) return result; @@ -1443,17 +1570,16 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid) else if (result) goto release_cq; - /* - * Set cq_vector after alloc cq/sq, otherwise nvme_suspend_queue will - * invoke free_irq for it and cause a 'Trying to free already-free IRQ - * xxx' warning if the create CQ/SQ command times out. - */ nvmeq->cq_vector = vector; nvme_init_queue(nvmeq, qid); - result = queue_request_irq(nvmeq); - if (result < 0) - goto release_sq; + if (vector != -1) { + result = queue_request_irq(nvmeq); + if (result < 0) + goto release_sq; + } + + set_bit(NVMEQ_ENABLED, &nvmeq->flags); return result; release_sq: @@ -1477,6 +1603,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = { static const struct blk_mq_ops nvme_mq_ops = { .queue_rq = nvme_queue_rq, .complete = nvme_pci_complete_rq, + .commit_rqs = nvme_commit_rqs, .init_hctx = nvme_init_hctx, .init_request = nvme_init_request, .map_queues = nvme_pci_map_queues, @@ -1602,12 +1729,13 @@ static int nvme_pci_configure_admin_queue(struct nvme_dev *dev) return result; } + set_bit(NVMEQ_ENABLED, &nvmeq->flags); return result; } static int nvme_create_io_queues(struct nvme_dev *dev) { - unsigned i, max; + unsigned i, max, rw_queues; int ret = 0; for (i = dev->ctrl.queue_count; i <= dev->max_qid; i++) { @@ -1618,8 +1746,17 @@ static int nvme_create_io_queues(struct nvme_dev *dev) } max = min(dev->max_qid, dev->ctrl.queue_count - 1); + if (max != 1 && dev->io_queues[HCTX_TYPE_POLL]) { + rw_queues = dev->io_queues[HCTX_TYPE_DEFAULT] + + dev->io_queues[HCTX_TYPE_READ]; + } else { + rw_queues = max; + } + for (i = dev->online_queues; i <= max; i++) { - ret = nvme_create_queue(&dev->queues[i], i); + bool polled = i > rw_queues; + + ret = nvme_create_queue(&dev->queues[i], i, polled); if (ret) break; } @@ -1891,6 +2028,110 @@ static int nvme_setup_host_mem(struct nvme_dev *dev) return ret; } +static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues) +{ + unsigned int this_w_queues = write_queues; + + /* + * Setup read/write queue split + */ + if (irq_queues == 1) { + dev->io_queues[HCTX_TYPE_DEFAULT] = 1; + dev->io_queues[HCTX_TYPE_READ] = 0; + return; + } + + /* + * If 'write_queues' is set, ensure it leaves room for at least + * one read queue + */ + if (this_w_queues >= irq_queues) + this_w_queues = irq_queues - 1; + + /* + * If 'write_queues' is set to zero, reads and writes will share + * a queue set. 
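The split policy in nvme_calc_io_queues() reduces to a few clamps, and it is easy to check standalone. A sketch of the same arithmetic outside the kernel (calc_io_queues() here is a hypothetical mirror, not the driver function itself):

	#include <stdio.h>

	static void calc_io_queues(unsigned irq_queues, unsigned write_queues,
				   unsigned *def_q, unsigned *read_q)
	{
		unsigned w = write_queues;

		if (irq_queues == 1) {		/* one vector: one shared set */
			*def_q = 1;
			*read_q = 0;
			return;
		}
		if (w >= irq_queues)		/* leave room for >= 1 read queue */
			w = irq_queues - 1;
		if (!w) {			/* reads and writes share a set */
			*def_q = irq_queues;
			*read_q = 0;
		} else {
			*def_q = w;
			*read_q = irq_queues - w;
		}
	}

	int main(void)
	{
		unsigned d, r;

		calc_io_queues(8, 2, &d, &r);
		printf("default=%u read=%u\n", d, r);	/* 2 write, 6 read */
		calc_io_queues(8, 0, &d, &r);
		printf("default=%u read=%u\n", d, r);	/* 8 shared, 0 read */
		return 0;
	}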
+ */ + if (!this_w_queues) { + dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues; + dev->io_queues[HCTX_TYPE_READ] = 0; + } else { + dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues; + dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues; + } +} + +static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues) +{ + struct pci_dev *pdev = to_pci_dev(dev->dev); + int irq_sets[2]; + struct irq_affinity affd = { + .pre_vectors = 1, + .nr_sets = ARRAY_SIZE(irq_sets), + .sets = irq_sets, + }; + int result = 0; + unsigned int irq_queues, this_p_queues; + + /* + * Poll queues don't need interrupts, but we need at least one IO + * queue left over for non-polled IO. + */ + this_p_queues = poll_queues; + if (this_p_queues >= nr_io_queues) { + this_p_queues = nr_io_queues - 1; + irq_queues = 1; + } else { + irq_queues = nr_io_queues - this_p_queues; + } + dev->io_queues[HCTX_TYPE_POLL] = this_p_queues; + + /* + * For irq sets, we have to ask for minvec == maxvec. This passes + * any reduction back to us, so we can adjust our queue counts and + * IRQ vector needs. + */ + do { + nvme_calc_io_queues(dev, irq_queues); + irq_sets[0] = dev->io_queues[HCTX_TYPE_DEFAULT]; + irq_sets[1] = dev->io_queues[HCTX_TYPE_READ]; + if (!irq_sets[1]) + affd.nr_sets = 1; + + /* + * If we got a failure and we're down to asking for just + * 1 + 1 queues, just ask for a single vector. We'll share + * that between the single IO queue and the admin queue. + */ + if (result >= 0 && irq_queues > 1) + irq_queues = irq_sets[0] + irq_sets[1] + 1; + + result = pci_alloc_irq_vectors_affinity(pdev, irq_queues, + irq_queues, + PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd); + + /* + * Need to reduce our vec counts. If we get ENOSPC, the + * platform should support multiple vecs, we just need + * to decrease our ask. If we get EINVAL, the platform + * likely does not. Back down to ask for just one vector. + */ + if (result == -ENOSPC) { + irq_queues--; + if (!irq_queues) + return result; + continue; + } else if (result == -EINVAL) { + irq_queues = 1; + continue; + } else if (result <= 0) + return -EIO; + break; + } while (1); + + return result; +} + static int nvme_setup_io_queues(struct nvme_dev *dev) { struct nvme_queue *adminq = &dev->queues[0]; @@ -1898,17 +2139,15 @@ static int nvme_setup_io_queues(struct nvme_dev *dev) int result, nr_io_queues; unsigned long size; - struct irq_affinity affd = { - .pre_vectors = 1 - }; - - nr_io_queues = num_possible_cpus(); + nr_io_queues = max_io_queues(); result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues); if (result < 0) return result; if (nr_io_queues == 0) return 0; + + clear_bit(NVMEQ_ENABLED, &adminq->flags); if (dev->cmb_use_sqes) { result = nvme_cmb_qdepth(dev, nr_io_queues, @@ -1937,12 +2176,19 @@ static int nvme_setup_io_queues(struct nvme_dev *dev) * setting up the full range we need.
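The retry loop in nvme_setup_irqs() above encodes a simple policy: treat -ENOSPC as "ask for one vector fewer and retry", treat -EINVAL as "the platform cannot do multi-vector, fall back to one", and treat any other non-positive result as fatal. The same shape, compiled standalone with a fake allocator (alloc_vectors() is hypothetical):

	#include <errno.h>
	#include <stdio.h>

	static int alloc_vectors(unsigned n)
	{
		return n > 3 ? -ENOSPC : (int)n;	/* platform caps at 3 */
	}

	static int setup_irqs(unsigned irq_queues)
	{
		int result;

		do {
			result = alloc_vectors(irq_queues);
			if (result == -ENOSPC) {
				if (!--irq_queues)
					return result;	/* nothing left to trim */
				continue;
			} else if (result == -EINVAL) {
				irq_queues = 1;		/* single-vector fallback */
				continue;
			} else if (result <= 0)
				return -EIO;
			break;
		} while (1);

		return result;
	}

	int main(void)
	{
		printf("got %d vectors\n", setup_irqs(8));	/* trims 8 -> 3 */
		return 0;
	}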
*/ pci_free_irq_vectors(pdev); + + result = nvme_setup_irqs(dev, nr_io_queues); if (result <= 0) return -EIO; + dev->num_vecs = result; - dev->max_qid = max(result - 1, 1); + result = max(result - 1, 1); + dev->max_qid = result + dev->io_queues[HCTX_TYPE_POLL]; + + dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n", + dev->io_queues[HCTX_TYPE_DEFAULT], + dev->io_queues[HCTX_TYPE_READ], + dev->io_queues[HCTX_TYPE_POLL]); /* * Should investigate if there's a performance win from allocating @@ -1956,6 +2202,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev) adminq->cq_vector = -1; return result; } + set_bit(NVMEQ_ENABLED, &adminq->flags); return nvme_create_io_queues(dev); } @@ -1964,23 +2211,15 @@ static void nvme_del_queue_end(struct request *req, blk_status_t error) struct nvme_queue *nvmeq = req->end_io_data; blk_mq_free_request(req); - complete(&nvmeq->dev->ioq_wait); + complete(&nvmeq->delete_done); } static void nvme_del_cq_end(struct request *req, blk_status_t error) { struct nvme_queue *nvmeq = req->end_io_data; - u16 start, end; - - if (!error) { - unsigned long flags; - spin_lock_irqsave(&nvmeq->cq_lock, flags); - nvme_process_cq(nvmeq, &start, &end, -1); - spin_unlock_irqrestore(&nvmeq->cq_lock, flags); - - nvme_complete_cqes(nvmeq, start, end); - } + if (error) + set_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags); nvme_del_queue_end(req, error); } @@ -2002,37 +2241,44 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode) req->timeout = ADMIN_TIMEOUT; req->end_io_data = nvmeq; + init_completion(&nvmeq->delete_done); blk_execute_rq_nowait(q, NULL, req, false, opcode == nvme_admin_delete_cq ? nvme_del_cq_end : nvme_del_queue_end); return 0; } -static void nvme_disable_io_queues(struct nvme_dev *dev) +static bool nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode) { - int pass, queues = dev->online_queues - 1; + int nr_queues = dev->online_queues - 1, sent = 0; unsigned long timeout; - u8 opcode = nvme_admin_delete_sq; - - for (pass = 0; pass < 2; pass++) { - int sent = 0, i = queues; - reinit_completion(&dev->ioq_wait); retry: - timeout = ADMIN_TIMEOUT; - for (; i > 0; i--, sent++) - if (nvme_delete_queue(&dev->queues[i], opcode)) - break; - - while (sent--) { - timeout = wait_for_completion_io_timeout(&dev->ioq_wait, timeout); - if (timeout == 0) - return; - if (i) - goto retry; - } - opcode = nvme_admin_delete_cq; + timeout = ADMIN_TIMEOUT; + while (nr_queues > 0) { + if (nvme_delete_queue(&dev->queues[nr_queues], opcode)) + break; + nr_queues--; + sent++; } + while (sent) { + struct nvme_queue *nvmeq = &dev->queues[nr_queues + sent]; + + timeout = wait_for_completion_io_timeout(&nvmeq->delete_done, + timeout); + if (timeout == 0) + return false; + + /* handle any remaining CQEs */ + if (opcode == nvme_admin_delete_cq && + !test_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags)) + nvme_poll_irqdisable(nvmeq, -1); + + sent--; + if (nr_queues) + goto retry; + } + return true; } /* @@ -2045,6 +2291,9 @@ static int nvme_dev_add(struct nvme_dev *dev) if (!dev->ctrl.tagset) { dev->tagset.ops = &nvme_mq_ops; dev->tagset.nr_hw_queues = dev->online_queues - 1; + dev->tagset.nr_maps = 2; /* default + read */ + if (dev->io_queues[HCTX_TYPE_POLL]) + dev->tagset.nr_maps++; dev->tagset.timeout = NVME_IO_TIMEOUT; dev->tagset.numa_node = dev_to_node(dev->dev); dev->tagset.queue_depth = @@ -2187,7 +2437,8 @@ static void
nvme_dev_disable(struct nvme_dev *dev, bool shutdown) nvme_stop_queues(&dev->ctrl); if (!dead && dev->ctrl.queue_count > 0) { - nvme_disable_io_queues(dev); + if (nvme_disable_io_queues(dev, nvme_admin_delete_sq)) + nvme_disable_io_queues(dev, nvme_admin_delete_cq); nvme_disable_admin_queue(dev, shutdown); } for (i = dev->ctrl.queue_count - 1; i >= 0; i--) @@ -2491,8 +2742,8 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (!dev) return -ENOMEM; - dev->queues = kcalloc_node(num_possible_cpus() + 1, - sizeof(struct nvme_queue), GFP_KERNEL, node); + dev->queues = kcalloc_node(max_queue_count(), sizeof(struct nvme_queue), + GFP_KERNEL, node); if (!dev->queues) goto free; @@ -2506,7 +2757,6 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work); INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work); mutex_init(&dev->shutdown_lock); - init_completion(&dev->ioq_wait); result = nvme_setup_prp_pools(dev); if (result) diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index ab6ec7295bf9..0a2fd2949ad7 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -162,6 +162,13 @@ static inline int nvme_rdma_queue_idx(struct nvme_rdma_queue *queue) return queue - queue->ctrl->queues; } +static bool nvme_rdma_poll_queue(struct nvme_rdma_queue *queue) +{ + return nvme_rdma_queue_idx(queue) > + queue->ctrl->ctrl.opts->nr_io_queues + + queue->ctrl->ctrl.opts->nr_write_queues; +} + static inline size_t nvme_rdma_inline_data_size(struct nvme_rdma_queue *queue) { return queue->cmnd_capsule_len - sizeof(struct nvme_command); @@ -440,6 +447,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue) const int send_wr_factor = 3; /* MR, SEND, INV */ const int cq_factor = send_wr_factor + 1; /* + RECV */ int comp_vector, idx = nvme_rdma_queue_idx(queue); + enum ib_poll_context poll_ctx; int ret; queue->device = nvme_rdma_find_get_device(queue->cm_id); @@ -456,10 +464,16 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue) */ comp_vector = idx == 0 ? 
idx : idx - 1; + /* Polling queues need direct cq polling context */ + if (nvme_rdma_poll_queue(queue)) + poll_ctx = IB_POLL_DIRECT; + else + poll_ctx = IB_POLL_SOFTIRQ; + /* +1 for ib_stop_cq */ queue->ib_cq = ib_alloc_cq(ibdev, queue, cq_factor * queue->queue_size + 1, - comp_vector, IB_POLL_SOFTIRQ); + comp_vector, poll_ctx); if (IS_ERR(queue->ib_cq)) { ret = PTR_ERR(queue->ib_cq); goto out_put_dev; @@ -595,15 +609,17 @@ static void nvme_rdma_stop_io_queues(struct nvme_rdma_ctrl *ctrl) static int nvme_rdma_start_queue(struct nvme_rdma_ctrl *ctrl, int idx) { + struct nvme_rdma_queue *queue = &ctrl->queues[idx]; + bool poll = nvme_rdma_poll_queue(queue); int ret; if (idx) - ret = nvmf_connect_io_queue(&ctrl->ctrl, idx); + ret = nvmf_connect_io_queue(&ctrl->ctrl, idx, poll); else ret = nvmf_connect_admin_queue(&ctrl->ctrl); if (!ret) - set_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[idx].flags); + set_bit(NVME_RDMA_Q_LIVE, &queue->flags); else dev_info(ctrl->ctrl.device, "failed to connect queue: %d ret=%d\n", idx, ret); @@ -645,6 +661,9 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl) nr_io_queues = min_t(unsigned int, nr_io_queues, ibdev->num_comp_vectors); + nr_io_queues += min(opts->nr_write_queues, num_online_cpus()); + nr_io_queues += min(opts->nr_poll_queues, num_online_cpus()); + ret = nvme_set_queue_count(&ctrl->ctrl, &nr_io_queues); if (ret) return ret; @@ -694,7 +713,7 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl, set->ops = &nvme_rdma_admin_mq_ops; set->queue_depth = NVME_AQ_MQ_TAG_DEPTH; set->reserved_tags = 2; /* connect + keep-alive */ - set->numa_node = NUMA_NO_NODE; + set->numa_node = nctrl->numa_node; set->cmd_size = sizeof(struct nvme_rdma_request) + SG_CHUNK_SIZE * sizeof(struct scatterlist); set->driver_data = ctrl; @@ -707,13 +726,14 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl, set->ops = &nvme_rdma_mq_ops; set->queue_depth = nctrl->sqsize + 1; set->reserved_tags = 1; /* fabric connect */ - set->numa_node = NUMA_NO_NODE; + set->numa_node = nctrl->numa_node; set->flags = BLK_MQ_F_SHOULD_MERGE; set->cmd_size = sizeof(struct nvme_rdma_request) + SG_CHUNK_SIZE * sizeof(struct scatterlist); set->driver_data = ctrl; set->nr_hw_queues = nctrl->queue_count - 1; set->timeout = NVME_IO_TIMEOUT; + set->nr_maps = nctrl->opts->nr_poll_queues ? 
HCTX_MAX_TYPES : 2; } ret = blk_mq_alloc_tag_set(set); @@ -763,6 +783,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, return error; ctrl->device = ctrl->queues[0].device; + ctrl->ctrl.numa_node = dev_to_node(ctrl->device->dev->dma_device); ctrl->max_fr_pages = nvme_rdma_get_max_fr_pages(ctrl->device->dev); @@ -1411,12 +1432,11 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg) WARN_ON_ONCE(ret); } -static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue, - struct nvme_completion *cqe, struct ib_wc *wc, int tag) +static void nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue, + struct nvme_completion *cqe, struct ib_wc *wc) { struct request *rq; struct nvme_rdma_request *req; - int ret = 0; rq = blk_mq_tag_to_rq(nvme_rdma_tagset(queue), cqe->command_id); if (!rq) { @@ -1424,7 +1444,7 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue, "tag 0x%x on QP %#x not found\n", cqe->command_id, queue->qp->qp_num); nvme_rdma_error_recovery(queue->ctrl); - return ret; + return; } req = blk_mq_rq_to_pdu(rq); @@ -1439,6 +1459,8 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue, nvme_rdma_error_recovery(queue->ctrl); } } else if (req->mr) { + int ret; + ret = nvme_rdma_inv_rkey(queue, req); if (unlikely(ret < 0)) { dev_err(queue->ctrl->ctrl.device, @@ -1447,19 +1469,14 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue, nvme_rdma_error_recovery(queue->ctrl); } /* the local invalidation completion will end the request */ - return 0; + return; } - if (refcount_dec_and_test(&req->ref)) { - if (rq->tag == tag) - ret = 1; + if (refcount_dec_and_test(&req->ref)) nvme_end_request(rq, req->status, req->result); - } - - return ret; } -static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag) +static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) { struct nvme_rdma_qe *qe = container_of(wc->wr_cqe, struct nvme_rdma_qe, cqe); @@ -1467,11 +1484,10 @@ static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag) struct ib_device *ibdev = queue->device->dev; struct nvme_completion *cqe = qe->data; const size_t len = sizeof(struct nvme_completion); - int ret = 0; if (unlikely(wc->status != IB_WC_SUCCESS)) { nvme_rdma_wr_error(cq, wc, "RECV"); - return 0; + return; } ib_dma_sync_single_for_cpu(ibdev, qe->dma, len, DMA_FROM_DEVICE); @@ -1486,16 +1502,10 @@ static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag) nvme_complete_async_event(&queue->ctrl->ctrl, cqe->status, &cqe->result); else - ret = nvme_rdma_process_nvme_rsp(queue, cqe, wc, tag); + nvme_rdma_process_nvme_rsp(queue, cqe, wc); ib_dma_sync_single_for_device(ibdev, qe->dma, len, DMA_FROM_DEVICE); nvme_rdma_post_recv(queue, qe); - return ret; -} - -static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) -{ - __nvme_rdma_recv_done(cq, wc, -1); } static int nvme_rdma_conn_established(struct nvme_rdma_queue *queue) @@ -1749,25 +1759,11 @@ err: return BLK_STS_IOERR; } -static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx, unsigned int tag) +static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx) { struct nvme_rdma_queue *queue = hctx->driver_data; - struct ib_cq *cq = queue->ib_cq; - struct ib_wc wc; - int found = 0; - - while (ib_poll_cq(cq, 1, &wc) > 0) { - struct ib_cqe *cqe = wc.wr_cqe; - - if (cqe) { - if (cqe->done == nvme_rdma_recv_done) - found |= __nvme_rdma_recv_done(cq, &wc, tag); - else - cqe->done(cq, &wc); - } - } - return found; + return 
ib_process_cq_direct(queue->ib_cq, -1); } static void nvme_rdma_complete_rq(struct request *rq) @@ -1782,7 +1778,36 @@ static int nvme_rdma_map_queues(struct blk_mq_tag_set *set) { struct nvme_rdma_ctrl *ctrl = set->driver_data; - return blk_mq_rdma_map_queues(set, ctrl->device->dev, 0); + set->map[HCTX_TYPE_DEFAULT].queue_offset = 0; + set->map[HCTX_TYPE_READ].nr_queues = ctrl->ctrl.opts->nr_io_queues; + if (ctrl->ctrl.opts->nr_write_queues) { + /* separate read/write queues */ + set->map[HCTX_TYPE_DEFAULT].nr_queues = + ctrl->ctrl.opts->nr_write_queues; + set->map[HCTX_TYPE_READ].queue_offset = + ctrl->ctrl.opts->nr_write_queues; + } else { + /* mixed read/write queues */ + set->map[HCTX_TYPE_DEFAULT].nr_queues = + ctrl->ctrl.opts->nr_io_queues; + set->map[HCTX_TYPE_READ].queue_offset = 0; + } + blk_mq_rdma_map_queues(&set->map[HCTX_TYPE_DEFAULT], + ctrl->device->dev, 0); + blk_mq_rdma_map_queues(&set->map[HCTX_TYPE_READ], + ctrl->device->dev, 0); + + if (ctrl->ctrl.opts->nr_poll_queues) { + set->map[HCTX_TYPE_POLL].nr_queues = + ctrl->ctrl.opts->nr_poll_queues; + set->map[HCTX_TYPE_POLL].queue_offset = + ctrl->ctrl.opts->nr_io_queues; + if (ctrl->ctrl.opts->nr_write_queues) + set->map[HCTX_TYPE_POLL].queue_offset += + ctrl->ctrl.opts->nr_write_queues; + blk_mq_map_queues(&set->map[HCTX_TYPE_POLL]); + } + return 0; } static const struct blk_mq_ops nvme_rdma_mq_ops = { @@ -1791,9 +1816,9 @@ static const struct blk_mq_ops nvme_rdma_mq_ops = { .init_request = nvme_rdma_init_request, .exit_request = nvme_rdma_exit_request, .init_hctx = nvme_rdma_init_hctx, - .poll = nvme_rdma_poll, .timeout = nvme_rdma_timeout, .map_queues = nvme_rdma_map_queues, + .poll = nvme_rdma_poll, }; static const struct blk_mq_ops nvme_rdma_admin_mq_ops = { @@ -1938,7 +1963,8 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev, INIT_WORK(&ctrl->err_work, nvme_rdma_error_recovery_work); INIT_WORK(&ctrl->ctrl.reset_work, nvme_rdma_reset_ctrl_work); - ctrl->ctrl.queue_count = opts->nr_io_queues + 1; /* +1 for admin queue */ + ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues + + opts->nr_poll_queues + 1; ctrl->ctrl.sqsize = opts->queue_size - 1; ctrl->ctrl.kato = opts->kato; @@ -1989,7 +2015,8 @@ static struct nvmf_transport_ops nvme_rdma_transport = { .module = THIS_MODULE, .required_opts = NVMF_OPT_TRADDR, .allowed_opts = NVMF_OPT_TRSVCID | NVMF_OPT_RECONNECT_DELAY | - NVMF_OPT_HOST_TRADDR | NVMF_OPT_CTRL_LOSS_TMO, + NVMF_OPT_HOST_TRADDR | NVMF_OPT_CTRL_LOSS_TMO | + NVMF_OPT_NR_WRITE_QUEUES | NVMF_OPT_NR_POLL_QUEUES, .create_ctrl = nvme_rdma_create_ctrl, }; diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c new file mode 100644 index 000000000000..de174912445e --- /dev/null +++ b/drivers/nvme/host/tcp.c @@ -0,0 +1,2278 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NVMe over Fabrics TCP host. + * Copyright (c) 2018 Lightbits Labs. All rights reserved. 
+ */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include <linux/module.h> +#include <linux/init.h> +#include <linux/slab.h> +#include <linux/err.h> +#include <linux/nvme-tcp.h> +#include <net/sock.h> +#include <net/tcp.h> +#include <linux/blk-mq.h> +#include <crypto/hash.h> + +#include "nvme.h" +#include "fabrics.h" + +struct nvme_tcp_queue; + +enum nvme_tcp_send_state { + NVME_TCP_SEND_CMD_PDU = 0, + NVME_TCP_SEND_H2C_PDU, + NVME_TCP_SEND_DATA, + NVME_TCP_SEND_DDGST, +}; + +struct nvme_tcp_request { + struct nvme_request req; + void *pdu; + struct nvme_tcp_queue *queue; + u32 data_len; + u32 pdu_len; + u32 pdu_sent; + u16 ttag; + struct list_head entry; + __le32 ddgst; + + struct bio *curr_bio; + struct iov_iter iter; + + /* send state */ + size_t offset; + size_t data_sent; + enum nvme_tcp_send_state state; +}; + +enum nvme_tcp_queue_flags { + NVME_TCP_Q_ALLOCATED = 0, + NVME_TCP_Q_LIVE = 1, +}; + +enum nvme_tcp_recv_state { + NVME_TCP_RECV_PDU = 0, + NVME_TCP_RECV_DATA, + NVME_TCP_RECV_DDGST, +}; + +struct nvme_tcp_ctrl; +struct nvme_tcp_queue { + struct socket *sock; + struct work_struct io_work; + int io_cpu; + + spinlock_t lock; + struct list_head send_list; + + /* recv state */ + void *pdu; + int pdu_remaining; + int pdu_offset; + size_t data_remaining; + size_t ddgst_remaining; + + /* send state */ + struct nvme_tcp_request *request; + + int queue_size; + size_t cmnd_capsule_len; + struct nvme_tcp_ctrl *ctrl; + unsigned long flags; + bool rd_enabled; + + bool hdr_digest; + bool data_digest; + struct ahash_request *rcv_hash; + struct ahash_request *snd_hash; + __le32 exp_ddgst; + __le32 recv_ddgst; + + struct page_frag_cache pf_cache; + + void (*state_change)(struct sock *); + void (*data_ready)(struct sock *); + void (*write_space)(struct sock *); +}; + +struct nvme_tcp_ctrl { + /* read only in the hot path */ + struct nvme_tcp_queue *queues; + struct blk_mq_tag_set tag_set; + + /* other member variables */ + struct list_head list; + struct blk_mq_tag_set admin_tag_set; + struct sockaddr_storage addr; + struct sockaddr_storage src_addr; + struct nvme_ctrl ctrl; + + struct work_struct err_work; + struct delayed_work connect_work; + struct nvme_tcp_request async_req; +}; + +static LIST_HEAD(nvme_tcp_ctrl_list); +static DEFINE_MUTEX(nvme_tcp_ctrl_mutex); +static struct workqueue_struct *nvme_tcp_wq; +static struct blk_mq_ops nvme_tcp_mq_ops; +static struct blk_mq_ops nvme_tcp_admin_mq_ops; + +static inline struct nvme_tcp_ctrl *to_tcp_ctrl(struct nvme_ctrl *ctrl) +{ + return container_of(ctrl, struct nvme_tcp_ctrl, ctrl); +} + +static inline int nvme_tcp_queue_id(struct nvme_tcp_queue *queue) +{ + return queue - queue->ctrl->queues; +} + +static inline struct blk_mq_tags *nvme_tcp_tagset(struct nvme_tcp_queue *queue) +{ + u32 queue_idx = nvme_tcp_queue_id(queue); + + if (queue_idx == 0) + return queue->ctrl->admin_tag_set.tags[queue_idx]; + return queue->ctrl->tag_set.tags[queue_idx - 1]; +} + +static inline u8 nvme_tcp_hdgst_len(struct nvme_tcp_queue *queue) +{ + return queue->hdr_digest ? NVME_TCP_DIGEST_LENGTH : 0; +} + +static inline u8 nvme_tcp_ddgst_len(struct nvme_tcp_queue *queue) +{ + return queue->data_digest ?
NVME_TCP_DIGEST_LENGTH : 0; +} + +static inline size_t nvme_tcp_inline_data_size(struct nvme_tcp_queue *queue) +{ + return queue->cmnd_capsule_len - sizeof(struct nvme_command); +} + +static inline bool nvme_tcp_async_req(struct nvme_tcp_request *req) +{ + return req == &req->queue->ctrl->async_req; +} + +static inline bool nvme_tcp_has_inline_data(struct nvme_tcp_request *req) +{ + struct request *rq; + unsigned int bytes; + + if (unlikely(nvme_tcp_async_req(req))) + return false; /* async events don't have a request */ + + rq = blk_mq_rq_from_pdu(req); + bytes = blk_rq_payload_bytes(rq); + + return rq_data_dir(rq) == WRITE && bytes && + bytes <= nvme_tcp_inline_data_size(req->queue); +} + +static inline struct page *nvme_tcp_req_cur_page(struct nvme_tcp_request *req) +{ + return req->iter.bvec->bv_page; +} + +static inline size_t nvme_tcp_req_cur_offset(struct nvme_tcp_request *req) +{ + return req->iter.bvec->bv_offset + req->iter.iov_offset; +} + +static inline size_t nvme_tcp_req_cur_length(struct nvme_tcp_request *req) +{ + return min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset, + req->pdu_len - req->pdu_sent); +} + +static inline size_t nvme_tcp_req_offset(struct nvme_tcp_request *req) +{ + return req->iter.iov_offset; +} + +static inline size_t nvme_tcp_pdu_data_left(struct nvme_tcp_request *req) +{ + return rq_data_dir(blk_mq_rq_from_pdu(req)) == WRITE ? + req->pdu_len - req->pdu_sent : 0; +} + +static inline size_t nvme_tcp_pdu_last_send(struct nvme_tcp_request *req, + int len) +{ + return nvme_tcp_pdu_data_left(req) <= len; +} + +static void nvme_tcp_init_iter(struct nvme_tcp_request *req, + unsigned int dir) +{ + struct request *rq = blk_mq_rq_from_pdu(req); + struct bio_vec *vec; + unsigned int size; + int nsegs; + size_t offset; + + if (rq->rq_flags & RQF_SPECIAL_PAYLOAD) { + vec = &rq->special_vec; + nsegs = 1; + size = blk_rq_payload_bytes(rq); + offset = 0; + } else { + struct bio *bio = req->curr_bio; + + vec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter); + nsegs = bio_segments(bio); + size = bio->bi_iter.bi_size; + offset = bio->bi_iter.bi_bvec_done; + } + + iov_iter_bvec(&req->iter, dir, vec, nsegs, size); + req->iter.iov_offset = offset; +} + +static inline void nvme_tcp_advance_req(struct nvme_tcp_request *req, + int len) +{ + req->data_sent += len; + req->pdu_sent += len; + iov_iter_advance(&req->iter, len); + if (!iov_iter_count(&req->iter) && + req->data_sent < req->data_len) { + req->curr_bio = req->curr_bio->bi_next; + nvme_tcp_init_iter(req, WRITE); + } +} + +static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req) +{ + struct nvme_tcp_queue *queue = req->queue; + + spin_lock(&queue->lock); + list_add_tail(&req->entry, &queue->send_list); + spin_unlock(&queue->lock); + + queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work); +} + +static inline struct nvme_tcp_request * +nvme_tcp_fetch_request(struct nvme_tcp_queue *queue) +{ + struct nvme_tcp_request *req; + + spin_lock(&queue->lock); + req = list_first_entry_or_null(&queue->send_list, + struct nvme_tcp_request, entry); + if (req) + list_del(&req->entry); + spin_unlock(&queue->lock); + + return req; +} + +static inline void nvme_tcp_ddgst_final(struct ahash_request *hash, + __le32 *dgst) +{ + ahash_request_set_crypt(hash, NULL, (u8 *)dgst, 0); + crypto_ahash_final(hash); +} + +static inline void nvme_tcp_ddgst_update(struct ahash_request *hash, + struct page *page, off_t off, size_t len) +{ + struct scatterlist sg; + + sg_init_marker(&sg, 1); + sg_set_page(&sg, page, len, 
off); + ahash_request_set_crypt(hash, &sg, NULL, len); + crypto_ahash_update(hash); +} + +static inline void nvme_tcp_hdgst(struct ahash_request *hash, + void *pdu, size_t len) +{ + struct scatterlist sg; + + sg_init_one(&sg, pdu, len); + ahash_request_set_crypt(hash, &sg, pdu + len, len); + crypto_ahash_digest(hash); +} + +static int nvme_tcp_verify_hdgst(struct nvme_tcp_queue *queue, + void *pdu, size_t pdu_len) +{ + struct nvme_tcp_hdr *hdr = pdu; + __le32 recv_digest; + __le32 exp_digest; + + if (unlikely(!(hdr->flags & NVME_TCP_F_HDGST))) { + dev_err(queue->ctrl->ctrl.device, + "queue %d: header digest flag is cleared\n", + nvme_tcp_queue_id(queue)); + return -EPROTO; + } + + recv_digest = *(__le32 *)(pdu + hdr->hlen); + nvme_tcp_hdgst(queue->rcv_hash, pdu, pdu_len); + exp_digest = *(__le32 *)(pdu + hdr->hlen); + if (recv_digest != exp_digest) { + dev_err(queue->ctrl->ctrl.device, + "header digest error: recv %#x expected %#x\n", + le32_to_cpu(recv_digest), le32_to_cpu(exp_digest)); + return -EIO; + } + + return 0; +} + +static int nvme_tcp_check_ddgst(struct nvme_tcp_queue *queue, void *pdu) +{ + struct nvme_tcp_hdr *hdr = pdu; + u8 digest_len = nvme_tcp_hdgst_len(queue); + u32 len; + + len = le32_to_cpu(hdr->plen) - hdr->hlen - + ((hdr->flags & NVME_TCP_F_HDGST) ? digest_len : 0); + + if (unlikely(len && !(hdr->flags & NVME_TCP_F_DDGST))) { + dev_err(queue->ctrl->ctrl.device, + "queue %d: data digest flag is cleared\n", + nvme_tcp_queue_id(queue)); + return -EPROTO; + } + crypto_ahash_init(queue->rcv_hash); + + return 0; +} + +static void nvme_tcp_exit_request(struct blk_mq_tag_set *set, + struct request *rq, unsigned int hctx_idx) +{ + struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); + + page_frag_free(req->pdu); +} + +static int nvme_tcp_init_request(struct blk_mq_tag_set *set, + struct request *rq, unsigned int hctx_idx, + unsigned int numa_node) +{ + struct nvme_tcp_ctrl *ctrl = set->driver_data; + struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); + int queue_idx = (set == &ctrl->tag_set) ? hctx_idx + 1 : 0; + struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx]; + u8 hdgst = nvme_tcp_hdgst_len(queue); + + req->pdu = page_frag_alloc(&queue->pf_cache, + sizeof(struct nvme_tcp_cmd_pdu) + hdgst, + GFP_KERNEL | __GFP_ZERO); + if (!req->pdu) + return -ENOMEM; + + req->queue = queue; + nvme_req(rq)->ctrl = &ctrl->ctrl; + + return 0; +} + +static int nvme_tcp_init_hctx(struct blk_mq_hw_ctx *hctx, void *data, + unsigned int hctx_idx) +{ + struct nvme_tcp_ctrl *ctrl = data; + struct nvme_tcp_queue *queue = &ctrl->queues[hctx_idx + 1]; + + hctx->driver_data = queue; + return 0; +} + +static int nvme_tcp_init_admin_hctx(struct blk_mq_hw_ctx *hctx, void *data, + unsigned int hctx_idx) +{ + struct nvme_tcp_ctrl *ctrl = data; + struct nvme_tcp_queue *queue = &ctrl->queues[0]; + + hctx->driver_data = queue; + return 0; +} + +static enum nvme_tcp_recv_state +nvme_tcp_recv_state(struct nvme_tcp_queue *queue) +{ + return (queue->pdu_remaining) ? NVME_TCP_RECV_PDU : + (queue->ddgst_remaining) ? 
NVME_TCP_RECV_DDGST : + NVME_TCP_RECV_DATA; +} + +static void nvme_tcp_init_recv_ctx(struct nvme_tcp_queue *queue) +{ + queue->pdu_remaining = sizeof(struct nvme_tcp_rsp_pdu) + + nvme_tcp_hdgst_len(queue); + queue->pdu_offset = 0; + queue->data_remaining = -1; + queue->ddgst_remaining = 0; +} + +static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl) +{ + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING)) + return; + + queue_work(nvme_wq, &to_tcp_ctrl(ctrl)->err_work); +} + +static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue, + struct nvme_completion *cqe) +{ + struct request *rq; + + rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), cqe->command_id); + if (!rq) { + dev_err(queue->ctrl->ctrl.device, + "queue %d tag 0x%x not found\n", + nvme_tcp_queue_id(queue), cqe->command_id); + nvme_tcp_error_recovery(&queue->ctrl->ctrl); + return -EINVAL; + } + + nvme_end_request(rq, cqe->status, cqe->result); + + return 0; +} + +static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue, + struct nvme_tcp_data_pdu *pdu) +{ + struct request *rq; + + rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id); + if (!rq) { + dev_err(queue->ctrl->ctrl.device, + "queue %d tag %#x not found\n", + nvme_tcp_queue_id(queue), pdu->command_id); + return -ENOENT; + } + + if (!blk_rq_payload_bytes(rq)) { + dev_err(queue->ctrl->ctrl.device, + "queue %d tag %#x unexpected data\n", + nvme_tcp_queue_id(queue), rq->tag); + return -EIO; + } + + queue->data_remaining = le32_to_cpu(pdu->data_length); + + return 0; + +} + +static int nvme_tcp_handle_comp(struct nvme_tcp_queue *queue, + struct nvme_tcp_rsp_pdu *pdu) +{ + struct nvme_completion *cqe = &pdu->cqe; + int ret = 0; + + /* + * AEN requests are special as they don't time out and can + * survive any kind of queue freeze and often don't respond to + * aborts. We don't even bother to allocate a struct request + * for them but rather special case them here. 
+ */ + if (unlikely(nvme_tcp_queue_id(queue) == 0 && + cqe->command_id >= NVME_AQ_BLK_MQ_DEPTH)) + nvme_complete_async_event(&queue->ctrl->ctrl, cqe->status, + &cqe->result); + else + ret = nvme_tcp_process_nvme_cqe(queue, cqe); + + return ret; +} + +static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req, + struct nvme_tcp_r2t_pdu *pdu) +{ + struct nvme_tcp_data_pdu *data = req->pdu; + struct nvme_tcp_queue *queue = req->queue; + struct request *rq = blk_mq_rq_from_pdu(req); + u8 hdgst = nvme_tcp_hdgst_len(queue); + u8 ddgst = nvme_tcp_ddgst_len(queue); + + req->pdu_len = le32_to_cpu(pdu->r2t_length); + req->pdu_sent = 0; + + if (unlikely(req->data_sent + req->pdu_len > req->data_len)) { + dev_err(queue->ctrl->ctrl.device, + "req %d r2t len %u exceeded data len %u (%zu sent)\n", + rq->tag, req->pdu_len, req->data_len, + req->data_sent); + return -EPROTO; + } + + if (unlikely(le32_to_cpu(pdu->r2t_offset) < req->data_sent)) { + dev_err(queue->ctrl->ctrl.device, + "req %d unexpected r2t offset %u (expected %zu)\n", + rq->tag, le32_to_cpu(pdu->r2t_offset), + req->data_sent); + return -EPROTO; + } + + memset(data, 0, sizeof(*data)); + data->hdr.type = nvme_tcp_h2c_data; + data->hdr.flags = NVME_TCP_F_DATA_LAST; + if (queue->hdr_digest) + data->hdr.flags |= NVME_TCP_F_HDGST; + if (queue->data_digest) + data->hdr.flags |= NVME_TCP_F_DDGST; + data->hdr.hlen = sizeof(*data); + data->hdr.pdo = data->hdr.hlen + hdgst; + data->hdr.plen = + cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst); + data->ttag = pdu->ttag; + data->command_id = rq->tag; + data->data_offset = cpu_to_le32(req->data_sent); + data->data_length = cpu_to_le32(req->pdu_len); + return 0; +} + +static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue, + struct nvme_tcp_r2t_pdu *pdu) +{ + struct nvme_tcp_request *req; + struct request *rq; + int ret; + + rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id); + if (!rq) { + dev_err(queue->ctrl->ctrl.device, + "queue %d tag %#x not found\n", + nvme_tcp_queue_id(queue), pdu->command_id); + return -ENOENT; + } + req = blk_mq_rq_to_pdu(rq); + + ret = nvme_tcp_setup_h2c_data_pdu(req, pdu); + if (unlikely(ret)) + return ret; + + req->state = NVME_TCP_SEND_H2C_PDU; + req->offset = 0; + + nvme_tcp_queue_request(req); + + return 0; +} + +static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb, + unsigned int *offset, size_t *len) +{ + struct nvme_tcp_hdr *hdr; + char *pdu = queue->pdu; + size_t rcv_len = min_t(size_t, *len, queue->pdu_remaining); + int ret; + + ret = skb_copy_bits(skb, *offset, + &pdu[queue->pdu_offset], rcv_len); + if (unlikely(ret)) + return ret; + + queue->pdu_remaining -= rcv_len; + queue->pdu_offset += rcv_len; + *offset += rcv_len; + *len -= rcv_len; + if (queue->pdu_remaining) + return 0; + + hdr = queue->pdu; + if (queue->hdr_digest) { + ret = nvme_tcp_verify_hdgst(queue, queue->pdu, hdr->hlen); + if (unlikely(ret)) + return ret; + } + + + if (queue->data_digest) { + ret = nvme_tcp_check_ddgst(queue, queue->pdu); + if (unlikely(ret)) + return ret; + } + + switch (hdr->type) { + case nvme_tcp_c2h_data: + ret = nvme_tcp_handle_c2h_data(queue, (void *)queue->pdu); + break; + case nvme_tcp_rsp: + nvme_tcp_init_recv_ctx(queue); + ret = nvme_tcp_handle_comp(queue, (void *)queue->pdu); + break; + case nvme_tcp_r2t: + nvme_tcp_init_recv_ctx(queue); + ret = nvme_tcp_handle_r2t(queue, (void *)queue->pdu); + break; + default: + dev_err(queue->ctrl->ctrl.device, + "unsupported pdu type (%d)\n", hdr->type); + return -EINVAL; + } 
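/*
 * [Editorial aside, not part of the merged patch.] The switch above is
 * the hub of the receive-side state machine. Sketching the transitions
 * with the driver's enum nvme_tcp_recv_state names, and assuming the
 * data digest (DDGST) was negotiated:
 *
 *   NVME_TCP_RECV_PDU   --(c2h_data PDU)-->    NVME_TCP_RECV_DATA
 *   NVME_TCP_RECV_DATA  --(data consumed)-->   NVME_TCP_RECV_DDGST
 *   NVME_TCP_RECV_DDGST --(digest verified)--> NVME_TCP_RECV_PDU
 *
 * rsp and r2t PDUs carry no data section, which is why both of those
 * branches call nvme_tcp_init_recv_ctx() and drop straight back to
 * NVME_TCP_RECV_PDU.
 */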
+ + return ret; +} + +static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb, + unsigned int *offset, size_t *len) +{ + struct nvme_tcp_data_pdu *pdu = (void *)queue->pdu; + struct nvme_tcp_request *req; + struct request *rq; + + rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id); + if (!rq) { + dev_err(queue->ctrl->ctrl.device, + "queue %d tag %#x not found\n", + nvme_tcp_queue_id(queue), pdu->command_id); + return -ENOENT; + } + req = blk_mq_rq_to_pdu(rq); + + while (true) { + int recv_len, ret; + + recv_len = min_t(size_t, *len, queue->data_remaining); + if (!recv_len) + break; + + if (!iov_iter_count(&req->iter)) { + req->curr_bio = req->curr_bio->bi_next; + + /* + * If we don't have any bios it means that the controller + * sent more data than we requested, hence error + */ + if (!req->curr_bio) { + dev_err(queue->ctrl->ctrl.device, + "queue %d no space in request %#x", + nvme_tcp_queue_id(queue), rq->tag); + nvme_tcp_init_recv_ctx(queue); + return -EIO; + } + nvme_tcp_init_iter(req, READ); + } + + /* we can read only from what is left in this bio */ + recv_len = min_t(size_t, recv_len, + iov_iter_count(&req->iter)); + + if (queue->data_digest) + ret = skb_copy_and_hash_datagram_iter(skb, *offset, + &req->iter, recv_len, queue->rcv_hash); + else + ret = skb_copy_datagram_iter(skb, *offset, + &req->iter, recv_len); + if (ret) { + dev_err(queue->ctrl->ctrl.device, + "queue %d failed to copy request %#x data", + nvme_tcp_queue_id(queue), rq->tag); + return ret; + } + + *len -= recv_len; + *offset += recv_len; + queue->data_remaining -= recv_len; + } + + if (!queue->data_remaining) { + if (queue->data_digest) { + nvme_tcp_ddgst_final(queue->rcv_hash, &queue->exp_ddgst); + queue->ddgst_remaining = NVME_TCP_DIGEST_LENGTH; + } else { + nvme_tcp_init_recv_ctx(queue); + } + } + + return 0; +} + +static int nvme_tcp_recv_ddgst(struct nvme_tcp_queue *queue, + struct sk_buff *skb, unsigned int *offset, size_t *len) +{ + char *ddgst = (char *)&queue->recv_ddgst; + size_t recv_len = min_t(size_t, *len, queue->ddgst_remaining); + off_t off = NVME_TCP_DIGEST_LENGTH - queue->ddgst_remaining; + int ret; + + ret = skb_copy_bits(skb, *offset, &ddgst[off], recv_len); + if (unlikely(ret)) + return ret; + + queue->ddgst_remaining -= recv_len; + *offset += recv_len; + *len -= recv_len; + if (queue->ddgst_remaining) + return 0; + + if (queue->recv_ddgst != queue->exp_ddgst) { + dev_err(queue->ctrl->ctrl.device, + "data digest error: recv %#x expected %#x\n", + le32_to_cpu(queue->recv_ddgst), + le32_to_cpu(queue->exp_ddgst)); + return -EIO; + } + + nvme_tcp_init_recv_ctx(queue); + return 0; +} + +static int nvme_tcp_recv_skb(read_descriptor_t *desc, struct sk_buff *skb, + unsigned int offset, size_t len) +{ + struct nvme_tcp_queue *queue = desc->arg.data; + size_t consumed = len; + int result; + + while (len) { + switch (nvme_tcp_recv_state(queue)) { + case NVME_TCP_RECV_PDU: + result = nvme_tcp_recv_pdu(queue, skb, &offset, &len); + break; + case NVME_TCP_RECV_DATA: + result = nvme_tcp_recv_data(queue, skb, &offset, &len); + break; + case NVME_TCP_RECV_DDGST: + result = nvme_tcp_recv_ddgst(queue, skb, &offset, &len); + break; + default: + result = -EFAULT; + } + if (result) { + dev_err(queue->ctrl->ctrl.device, + "receive failed: %d\n", result); + queue->rd_enabled = false; + nvme_tcp_error_recovery(&queue->ctrl->ctrl); + return result; + } + } + + return consumed; +} + +static void nvme_tcp_data_ready(struct sock *sk) +{ + struct nvme_tcp_queue *queue; + +
read_lock(&sk->sk_callback_lock); + queue = sk->sk_user_data; + if (likely(queue && queue->rd_enabled)) + queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work); + read_unlock(&sk->sk_callback_lock); +} + +static void nvme_tcp_write_space(struct sock *sk) +{ + struct nvme_tcp_queue *queue; + + read_lock_bh(&sk->sk_callback_lock); + queue = sk->sk_user_data; + if (likely(queue && sk_stream_is_writeable(sk))) { + clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags); + queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work); + } + read_unlock_bh(&sk->sk_callback_lock); +} + +static void nvme_tcp_state_change(struct sock *sk) +{ + struct nvme_tcp_queue *queue; + + read_lock(&sk->sk_callback_lock); + queue = sk->sk_user_data; + if (!queue) + goto done; + + switch (sk->sk_state) { + case TCP_CLOSE: + case TCP_CLOSE_WAIT: + case TCP_LAST_ACK: + case TCP_FIN_WAIT1: + case TCP_FIN_WAIT2: + /* fallthrough */ + nvme_tcp_error_recovery(&queue->ctrl->ctrl); + break; + default: + dev_info(queue->ctrl->ctrl.device, + "queue %d socket state %d\n", + nvme_tcp_queue_id(queue), sk->sk_state); + } + + queue->state_change(sk); +done: + read_unlock(&sk->sk_callback_lock); +} + +static inline void nvme_tcp_done_send_req(struct nvme_tcp_queue *queue) +{ + queue->request = NULL; +} + +static void nvme_tcp_fail_request(struct nvme_tcp_request *req) +{ + union nvme_result res = {}; + + nvme_end_request(blk_mq_rq_from_pdu(req), + cpu_to_le16(NVME_SC_DATA_XFER_ERROR), res); +} + +static int nvme_tcp_try_send_data(struct nvme_tcp_request *req) +{ + struct nvme_tcp_queue *queue = req->queue; + + while (true) { + struct page *page = nvme_tcp_req_cur_page(req); + size_t offset = nvme_tcp_req_cur_offset(req); + size_t len = nvme_tcp_req_cur_length(req); + bool last = nvme_tcp_pdu_last_send(req, len); + int ret, flags = MSG_DONTWAIT; + + if (last && !queue->data_digest) + flags |= MSG_EOR; + else + flags |= MSG_MORE; + + ret = kernel_sendpage(queue->sock, page, offset, len, flags); + if (ret <= 0) + return ret; + + nvme_tcp_advance_req(req, ret); + if (queue->data_digest) + nvme_tcp_ddgst_update(queue->snd_hash, page, + offset, ret); + + /* fully successful last write */ + if (last && ret == len) { + if (queue->data_digest) { + nvme_tcp_ddgst_final(queue->snd_hash, + &req->ddgst); + req->state = NVME_TCP_SEND_DDGST; + req->offset = 0; + } else { + nvme_tcp_done_send_req(queue); + } + return 1; + } + } + return -EAGAIN; +} + +static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req) +{ + struct nvme_tcp_queue *queue = req->queue; + struct nvme_tcp_cmd_pdu *pdu = req->pdu; + bool inline_data = nvme_tcp_has_inline_data(req); + int flags = MSG_DONTWAIT | (inline_data ?
MSG_MORE : MSG_EOR); + u8 hdgst = nvme_tcp_hdgst_len(queue); + int len = sizeof(*pdu) + hdgst - req->offset; + int ret; + + if (queue->hdr_digest && !req->offset) + nvme_tcp_hdgst(queue->snd_hash, pdu, sizeof(*pdu)); + + ret = kernel_sendpage(queue->sock, virt_to_page(pdu), + offset_in_page(pdu) + req->offset, len, flags); + if (unlikely(ret <= 0)) + return ret; + + len -= ret; + if (!len) { + if (inline_data) { + req->state = NVME_TCP_SEND_DATA; + if (queue->data_digest) + crypto_ahash_init(queue->snd_hash); + nvme_tcp_init_iter(req, WRITE); + } else { + nvme_tcp_done_send_req(queue); + } + return 1; + } + req->offset += ret; + + return -EAGAIN; +} + +static int nvme_tcp_try_send_data_pdu(struct nvme_tcp_request *req) +{ + struct nvme_tcp_queue *queue = req->queue; + struct nvme_tcp_data_pdu *pdu = req->pdu; + u8 hdgst = nvme_tcp_hdgst_len(queue); + int len = sizeof(*pdu) - req->offset + hdgst; + int ret; + + if (queue->hdr_digest && !req->offset) + nvme_tcp_hdgst(queue->snd_hash, pdu, sizeof(*pdu)); + + ret = kernel_sendpage(queue->sock, virt_to_page(pdu), + offset_in_page(pdu) + req->offset, len, + MSG_DONTWAIT | MSG_MORE); + if (unlikely(ret <= 0)) + return ret; + + len -= ret; + if (!len) { + req->state = NVME_TCP_SEND_DATA; + if (queue->data_digest) + crypto_ahash_init(queue->snd_hash); + if (!req->data_sent) + nvme_tcp_init_iter(req, WRITE); + return 1; + } + req->offset += ret; + + return -EAGAIN; +} + +static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req) +{ + struct nvme_tcp_queue *queue = req->queue; + int ret; + struct msghdr msg = { .msg_flags = MSG_DONTWAIT | MSG_EOR }; + struct kvec iov = { + .iov_base = &req->ddgst + req->offset, + .iov_len = NVME_TCP_DIGEST_LENGTH - req->offset + }; + + ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len); + if (unlikely(ret <= 0)) + return ret; + + if (req->offset + ret == NVME_TCP_DIGEST_LENGTH) { + nvme_tcp_done_send_req(queue); + return 1; + } + + req->offset += ret; + return -EAGAIN; +} + +static int nvme_tcp_try_send(struct nvme_tcp_queue *queue) +{ + struct nvme_tcp_request *req; + int ret = 1; + + if (!queue->request) { + queue->request = nvme_tcp_fetch_request(queue); + if (!queue->request) + return 0; + } + req = queue->request; + + if (req->state == NVME_TCP_SEND_CMD_PDU) { + ret = nvme_tcp_try_send_cmd_pdu(req); + if (ret <= 0) + goto done; + if (!nvme_tcp_has_inline_data(req)) + return ret; + } + + if (req->state == NVME_TCP_SEND_H2C_PDU) { + ret = nvme_tcp_try_send_data_pdu(req); + if (ret <= 0) + goto done; + } + + if (req->state == NVME_TCP_SEND_DATA) { + ret = nvme_tcp_try_send_data(req); + if (ret <= 0) + goto done; + } + + if (req->state == NVME_TCP_SEND_DDGST) + ret = nvme_tcp_try_send_ddgst(req); +done: + if (ret == -EAGAIN) + ret = 0; + return ret; +} + +static int nvme_tcp_try_recv(struct nvme_tcp_queue *queue) +{ + struct sock *sk = queue->sock->sk; + read_descriptor_t rd_desc; + int consumed; + + rd_desc.arg.data = queue; + rd_desc.count = 1; + lock_sock(sk); + consumed = tcp_read_sock(sk, &rd_desc, nvme_tcp_recv_skb); + release_sock(sk); + return consumed; +} + +static void nvme_tcp_io_work(struct work_struct *w) +{ + struct nvme_tcp_queue *queue = + container_of(w, struct nvme_tcp_queue, io_work); + unsigned long start = jiffies + msecs_to_jiffies(1); + + do { + bool pending = false; + int result; + + result = nvme_tcp_try_send(queue); + if (result > 0) { + pending = true; + } else if (unlikely(result < 0)) { + dev_err(queue->ctrl->ctrl.device, + "failed to send request %d\n", result); + if 
(result != -EPIPE) + nvme_tcp_fail_request(queue->request); + nvme_tcp_done_send_req(queue); + return; + } + + result = nvme_tcp_try_recv(queue); + if (result > 0) + pending = true; + + if (!pending) + return; + + } while (time_after(jiffies, start)); /* quota is exhausted */ + + queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work); +} + +static void nvme_tcp_free_crypto(struct nvme_tcp_queue *queue) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(queue->rcv_hash); + + ahash_request_free(queue->rcv_hash); + ahash_request_free(queue->snd_hash); + crypto_free_ahash(tfm); +} + +static int nvme_tcp_alloc_crypto(struct nvme_tcp_queue *queue) +{ + struct crypto_ahash *tfm; + + tfm = crypto_alloc_ahash("crc32c", 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(tfm)) + return PTR_ERR(tfm); + + queue->snd_hash = ahash_request_alloc(tfm, GFP_KERNEL); + if (!queue->snd_hash) + goto free_tfm; + ahash_request_set_callback(queue->snd_hash, 0, NULL, NULL); + + queue->rcv_hash = ahash_request_alloc(tfm, GFP_KERNEL); + if (!queue->rcv_hash) + goto free_snd_hash; + ahash_request_set_callback(queue->rcv_hash, 0, NULL, NULL); + + return 0; +free_snd_hash: + ahash_request_free(queue->snd_hash); +free_tfm: + crypto_free_ahash(tfm); + return -ENOMEM; +} + +static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl) +{ + struct nvme_tcp_request *async = &ctrl->async_req; + + page_frag_free(async->pdu); +} + +static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) +{ + struct nvme_tcp_queue *queue = &ctrl->queues[0]; + struct nvme_tcp_request *async = &ctrl->async_req; + u8 hdgst = nvme_tcp_hdgst_len(queue); + + async->pdu = page_frag_alloc(&queue->pf_cache, + sizeof(struct nvme_tcp_cmd_pdu) + hdgst, + GFP_KERNEL | __GFP_ZERO); + if (!async->pdu) + return -ENOMEM; + + async->queue = &ctrl->queues[0]; + return 0; +} + +static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid) +{ + struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); + struct nvme_tcp_queue *queue = &ctrl->queues[qid]; + + if (!test_and_clear_bit(NVME_TCP_Q_ALLOCATED, &queue->flags)) + return; + + if (queue->hdr_digest || queue->data_digest) + nvme_tcp_free_crypto(queue); + + sock_release(queue->sock); + kfree(queue->pdu); +} + +static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue) +{ + struct nvme_tcp_icreq_pdu *icreq; + struct nvme_tcp_icresp_pdu *icresp; + struct msghdr msg = {}; + struct kvec iov; + bool ctrl_hdgst, ctrl_ddgst; + int ret; + + icreq = kzalloc(sizeof(*icreq), GFP_KERNEL); + if (!icreq) + return -ENOMEM; + + icresp = kzalloc(sizeof(*icresp), GFP_KERNEL); + if (!icresp) { + ret = -ENOMEM; + goto free_icreq; + } + + icreq->hdr.type = nvme_tcp_icreq; + icreq->hdr.hlen = sizeof(*icreq); + icreq->hdr.pdo = 0; + icreq->hdr.plen = cpu_to_le32(icreq->hdr.hlen); + icreq->pfv = cpu_to_le16(NVME_TCP_PFV_1_0); + icreq->maxr2t = 0; /* single inflight r2t supported */ + icreq->hpda = 0; /* no alignment constraint */ + if (queue->hdr_digest) + icreq->digest |= NVME_TCP_HDR_DIGEST_ENABLE; + if (queue->data_digest) + icreq->digest |= NVME_TCP_DATA_DIGEST_ENABLE; + + iov.iov_base = icreq; + iov.iov_len = sizeof(*icreq); + ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len); + if (ret < 0) + goto free_icresp; + + memset(&msg, 0, sizeof(msg)); + iov.iov_base = icresp; + iov.iov_len = sizeof(*icresp); + ret = kernel_recvmsg(queue->sock, &msg, &iov, 1, + iov.iov_len, msg.msg_flags); + if (ret < 0) + goto free_icresp; + + ret = -EINVAL; + if (icresp->hdr.type != nvme_tcp_icresp) { + pr_err("queue %d: bad type returned 
%d\n", + nvme_tcp_queue_id(queue), icresp->hdr.type); + goto free_icresp; + } + + if (le32_to_cpu(icresp->hdr.plen) != sizeof(*icresp)) { + pr_err("queue %d: bad pdu length returned %d\n", + nvme_tcp_queue_id(queue), icresp->hdr.plen); + goto free_icresp; + } + + if (icresp->pfv != NVME_TCP_PFV_1_0) { + pr_err("queue %d: bad pfv returned %d\n", + nvme_tcp_queue_id(queue), icresp->pfv); + goto free_icresp; + } + + ctrl_ddgst = !!(icresp->digest & NVME_TCP_DATA_DIGEST_ENABLE); + if ((queue->data_digest && !ctrl_ddgst) || + (!queue->data_digest && ctrl_ddgst)) { + pr_err("queue %d: data digest mismatch host: %s ctrl: %s\n", + nvme_tcp_queue_id(queue), + queue->data_digest ? "enabled" : "disabled", + ctrl_ddgst ? "enabled" : "disabled"); + goto free_icresp; + } + + ctrl_hdgst = !!(icresp->digest & NVME_TCP_HDR_DIGEST_ENABLE); + if ((queue->hdr_digest && !ctrl_hdgst) || + (!queue->hdr_digest && ctrl_hdgst)) { + pr_err("queue %d: header digest mismatch host: %s ctrl: %s\n", + nvme_tcp_queue_id(queue), + queue->hdr_digest ? "enabled" : "disabled", + ctrl_hdgst ? "enabled" : "disabled"); + goto free_icresp; + } + + if (icresp->cpda != 0) { + pr_err("queue %d: unsupported cpda returned %d\n", + nvme_tcp_queue_id(queue), icresp->cpda); + goto free_icresp; + } + + ret = 0; +free_icresp: + kfree(icresp); +free_icreq: + kfree(icreq); + return ret; +} + +static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, + int qid, size_t queue_size) +{ + struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); + struct nvme_tcp_queue *queue = &ctrl->queues[qid]; + struct linger sol = { .l_onoff = 1, .l_linger = 0 }; + int ret, opt, rcv_pdu_size, n; + + queue->ctrl = ctrl; + INIT_LIST_HEAD(&queue->send_list); + spin_lock_init(&queue->lock); + INIT_WORK(&queue->io_work, nvme_tcp_io_work); + queue->queue_size = queue_size; + + if (qid > 0) + queue->cmnd_capsule_len = ctrl->ctrl.ioccsz * 16; + else + queue->cmnd_capsule_len = sizeof(struct nvme_command) + + NVME_TCP_ADMIN_CCSZ; + + ret = sock_create(ctrl->addr.ss_family, SOCK_STREAM, + IPPROTO_TCP, &queue->sock); + if (ret) { + dev_err(ctrl->ctrl.device, + "failed to create socket: %d\n", ret); + return ret; + } + + /* Single syn retry */ + opt = 1; + ret = kernel_setsockopt(queue->sock, IPPROTO_TCP, TCP_SYNCNT, + (char *)&opt, sizeof(opt)); + if (ret) { + dev_err(ctrl->ctrl.device, + "failed to set TCP_SYNCNT sock opt %d\n", ret); + goto err_sock; + } + + /* Set TCP no delay */ + opt = 1; + ret = kernel_setsockopt(queue->sock, IPPROTO_TCP, + TCP_NODELAY, (char *)&opt, sizeof(opt)); + if (ret) { + dev_err(ctrl->ctrl.device, + "failed to set TCP_NODELAY sock opt %d\n", ret); + goto err_sock; + } + + /* + * Cleanup whatever is sitting in the TCP transmit queue on socket + * close. This is done to prevent stale data from being sent should + * the network connection be restored before TCP times out. 
+ */ + ret = kernel_setsockopt(queue->sock, SOL_SOCKET, SO_LINGER, + (char *)&sol, sizeof(sol)); + if (ret) { + dev_err(ctrl->ctrl.device, + "failed to set SO_LINGER sock opt %d\n", ret); + goto err_sock; + } + + queue->sock->sk->sk_allocation = GFP_ATOMIC; + if (!qid) + n = 0; + else + n = (qid - 1) % num_online_cpus(); + queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false); + queue->request = NULL; + queue->data_remaining = 0; + queue->ddgst_remaining = 0; + queue->pdu_remaining = 0; + queue->pdu_offset = 0; + sk_set_memalloc(queue->sock->sk); + + if (ctrl->ctrl.opts->mask & NVMF_OPT_HOST_TRADDR) { + ret = kernel_bind(queue->sock, (struct sockaddr *)&ctrl->src_addr, + sizeof(ctrl->src_addr)); + if (ret) { + dev_err(ctrl->ctrl.device, + "failed to bind queue %d socket %d\n", + qid, ret); + goto err_sock; + } + } + + queue->hdr_digest = nctrl->opts->hdr_digest; + queue->data_digest = nctrl->opts->data_digest; + if (queue->hdr_digest || queue->data_digest) { + ret = nvme_tcp_alloc_crypto(queue); + if (ret) { + dev_err(ctrl->ctrl.device, + "failed to allocate queue %d crypto\n", qid); + goto err_sock; + } + } + + rcv_pdu_size = sizeof(struct nvme_tcp_rsp_pdu) + + nvme_tcp_hdgst_len(queue); + queue->pdu = kmalloc(rcv_pdu_size, GFP_KERNEL); + if (!queue->pdu) { + ret = -ENOMEM; + goto err_crypto; + } + + dev_dbg(ctrl->ctrl.device, "connecting queue %d\n", + nvme_tcp_queue_id(queue)); + + ret = kernel_connect(queue->sock, (struct sockaddr *)&ctrl->addr, + sizeof(ctrl->addr), 0); + if (ret) { + dev_err(ctrl->ctrl.device, + "failed to connect socket: %d\n", ret); + goto err_rcv_pdu; + } + + ret = nvme_tcp_init_connection(queue); + if (ret) + goto err_init_connect; + + queue->rd_enabled = true; + set_bit(NVME_TCP_Q_ALLOCATED, &queue->flags); + nvme_tcp_init_recv_ctx(queue); + + write_lock_bh(&queue->sock->sk->sk_callback_lock); + queue->sock->sk->sk_user_data = queue; + queue->state_change = queue->sock->sk->sk_state_change; + queue->data_ready = queue->sock->sk->sk_data_ready; + queue->write_space = queue->sock->sk->sk_write_space; + queue->sock->sk->sk_data_ready = nvme_tcp_data_ready; + queue->sock->sk->sk_state_change = nvme_tcp_state_change; + queue->sock->sk->sk_write_space = nvme_tcp_write_space; + write_unlock_bh(&queue->sock->sk->sk_callback_lock); + + return 0; + +err_init_connect: + kernel_sock_shutdown(queue->sock, SHUT_RDWR); +err_rcv_pdu: + kfree(queue->pdu); +err_crypto: + if (queue->hdr_digest || queue->data_digest) + nvme_tcp_free_crypto(queue); +err_sock: + sock_release(queue->sock); + queue->sock = NULL; + return ret; +} + +static void nvme_tcp_restore_sock_calls(struct nvme_tcp_queue *queue) +{ + struct socket *sock = queue->sock; + + write_lock_bh(&sock->sk->sk_callback_lock); + sock->sk->sk_user_data = NULL; + sock->sk->sk_data_ready = queue->data_ready; + sock->sk->sk_state_change = queue->state_change; + sock->sk->sk_write_space = queue->write_space; + write_unlock_bh(&sock->sk->sk_callback_lock); +} + +static void __nvme_tcp_stop_queue(struct nvme_tcp_queue *queue) +{ + kernel_sock_shutdown(queue->sock, SHUT_RDWR); + nvme_tcp_restore_sock_calls(queue); + cancel_work_sync(&queue->io_work); +} + +static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid) +{ + struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); + struct nvme_tcp_queue *queue = &ctrl->queues[qid]; + + if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags)) + return; + + __nvme_tcp_stop_queue(queue); +} + +static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx) +{ + struct 
nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); + int ret; + + if (idx) + ret = nvmf_connect_io_queue(nctrl, idx, false); + else + ret = nvmf_connect_admin_queue(nctrl); + + if (!ret) { + set_bit(NVME_TCP_Q_LIVE, &ctrl->queues[idx].flags); + } else { + __nvme_tcp_stop_queue(&ctrl->queues[idx]); + dev_err(nctrl->device, + "failed to connect queue: %d ret=%d\n", idx, ret); + } + return ret; +} + +static struct blk_mq_tag_set *nvme_tcp_alloc_tagset(struct nvme_ctrl *nctrl, + bool admin) +{ + struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); + struct blk_mq_tag_set *set; + int ret; + + if (admin) { + set = &ctrl->admin_tag_set; + memset(set, 0, sizeof(*set)); + set->ops = &nvme_tcp_admin_mq_ops; + set->queue_depth = NVME_AQ_MQ_TAG_DEPTH; + set->reserved_tags = 2; /* connect + keep-alive */ + set->numa_node = NUMA_NO_NODE; + set->cmd_size = sizeof(struct nvme_tcp_request); + set->driver_data = ctrl; + set->nr_hw_queues = 1; + set->timeout = ADMIN_TIMEOUT; + } else { + set = &ctrl->tag_set; + memset(set, 0, sizeof(*set)); + set->ops = &nvme_tcp_mq_ops; + set->queue_depth = nctrl->sqsize + 1; + set->reserved_tags = 1; /* fabric connect */ + set->numa_node = NUMA_NO_NODE; + set->flags = BLK_MQ_F_SHOULD_MERGE; + set->cmd_size = sizeof(struct nvme_tcp_request); + set->driver_data = ctrl; + set->nr_hw_queues = nctrl->queue_count - 1; + set->timeout = NVME_IO_TIMEOUT; + set->nr_maps = 2 /* default + read */; + } + + ret = blk_mq_alloc_tag_set(set); + if (ret) + return ERR_PTR(ret); + + return set; +} + +static void nvme_tcp_free_admin_queue(struct nvme_ctrl *ctrl) +{ + if (to_tcp_ctrl(ctrl)->async_req.pdu) { + nvme_tcp_free_async_req(to_tcp_ctrl(ctrl)); + to_tcp_ctrl(ctrl)->async_req.pdu = NULL; + } + + nvme_tcp_free_queue(ctrl, 0); +} + +static void nvme_tcp_free_io_queues(struct nvme_ctrl *ctrl) +{ + int i; + + for (i = 1; i < ctrl->queue_count; i++) + nvme_tcp_free_queue(ctrl, i); +} + +static void nvme_tcp_stop_io_queues(struct nvme_ctrl *ctrl) +{ + int i; + + for (i = 1; i < ctrl->queue_count; i++) + nvme_tcp_stop_queue(ctrl, i); +} + +static int nvme_tcp_start_io_queues(struct nvme_ctrl *ctrl) +{ + int i, ret = 0; + + for (i = 1; i < ctrl->queue_count; i++) { + ret = nvme_tcp_start_queue(ctrl, i); + if (ret) + goto out_stop_queues; + } + + return 0; + +out_stop_queues: + for (i--; i >= 1; i--) + nvme_tcp_stop_queue(ctrl, i); + return ret; +} + +static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl) +{ + int ret; + + ret = nvme_tcp_alloc_queue(ctrl, 0, NVME_AQ_DEPTH); + if (ret) + return ret; + + ret = nvme_tcp_alloc_async_req(to_tcp_ctrl(ctrl)); + if (ret) + goto out_free_queue; + + return 0; + +out_free_queue: + nvme_tcp_free_queue(ctrl, 0); + return ret; +} + +static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl) +{ + int i, ret; + + for (i = 1; i < ctrl->queue_count; i++) { + ret = nvme_tcp_alloc_queue(ctrl, i, + ctrl->sqsize + 1); + if (ret) + goto out_free_queues; + } + + return 0; + +out_free_queues: + for (i--; i >= 1; i--) + nvme_tcp_free_queue(ctrl, i); + + return ret; +} + +static unsigned int nvme_tcp_nr_io_queues(struct nvme_ctrl *ctrl) +{ + unsigned int nr_io_queues; + + nr_io_queues = min(ctrl->opts->nr_io_queues, num_online_cpus()); + nr_io_queues += min(ctrl->opts->nr_write_queues, num_online_cpus()); + + return nr_io_queues; +} + +static int nvme_alloc_io_queues(struct nvme_ctrl *ctrl) +{ + unsigned int nr_io_queues; + int ret; + + nr_io_queues = nvme_tcp_nr_io_queues(ctrl); + ret = nvme_set_queue_count(ctrl, &nr_io_queues); + if (ret) + return ret; + + ctrl->queue_count = 
nr_io_queues + 1; + if (ctrl->queue_count < 2) + return 0; + + dev_info(ctrl->device, + "creating %d I/O queues.\n", nr_io_queues); + + return nvme_tcp_alloc_io_queues(ctrl); +} + +static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove) +{ + nvme_tcp_stop_io_queues(ctrl); + if (remove) { + if (ctrl->ops->flags & NVME_F_FABRICS) + blk_cleanup_queue(ctrl->connect_q); + blk_mq_free_tag_set(ctrl->tagset); + } + nvme_tcp_free_io_queues(ctrl); +} + +static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new) +{ + int ret; + + ret = nvme_alloc_io_queues(ctrl); + if (ret) + return ret; + + if (new) { + ctrl->tagset = nvme_tcp_alloc_tagset(ctrl, false); + if (IS_ERR(ctrl->tagset)) { + ret = PTR_ERR(ctrl->tagset); + goto out_free_io_queues; + } + + if (ctrl->ops->flags & NVME_F_FABRICS) { + ctrl->connect_q = blk_mq_init_queue(ctrl->tagset); + if (IS_ERR(ctrl->connect_q)) { + ret = PTR_ERR(ctrl->connect_q); + goto out_free_tag_set; + } + } + } else { + blk_mq_update_nr_hw_queues(ctrl->tagset, + ctrl->queue_count - 1); + } + + ret = nvme_tcp_start_io_queues(ctrl); + if (ret) + goto out_cleanup_connect_q; + + return 0; + +out_cleanup_connect_q: + if (new && (ctrl->ops->flags & NVME_F_FABRICS)) + blk_cleanup_queue(ctrl->connect_q); +out_free_tag_set: + if (new) + blk_mq_free_tag_set(ctrl->tagset); +out_free_io_queues: + nvme_tcp_free_io_queues(ctrl); + return ret; +} + +static void nvme_tcp_destroy_admin_queue(struct nvme_ctrl *ctrl, bool remove) +{ + nvme_tcp_stop_queue(ctrl, 0); + if (remove) { + free_opal_dev(ctrl->opal_dev); + blk_cleanup_queue(ctrl->admin_q); + blk_mq_free_tag_set(ctrl->admin_tagset); + } + nvme_tcp_free_admin_queue(ctrl); +} + +static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new) +{ + int error; + + error = nvme_tcp_alloc_admin_queue(ctrl); + if (error) + return error; + + if (new) { + ctrl->admin_tagset = nvme_tcp_alloc_tagset(ctrl, true); + if (IS_ERR(ctrl->admin_tagset)) { + error = PTR_ERR(ctrl->admin_tagset); + goto out_free_queue; + } + + ctrl->admin_q = blk_mq_init_queue(ctrl->admin_tagset); + if (IS_ERR(ctrl->admin_q)) { + error = PTR_ERR(ctrl->admin_q); + goto out_free_tagset; + } + } + + error = nvme_tcp_start_queue(ctrl, 0); + if (error) + goto out_cleanup_queue; + + error = ctrl->ops->reg_read64(ctrl, NVME_REG_CAP, &ctrl->cap); + if (error) { + dev_err(ctrl->device, + "prop_get NVME_REG_CAP failed\n"); + goto out_stop_queue; + } + + ctrl->sqsize = min_t(int, NVME_CAP_MQES(ctrl->cap), ctrl->sqsize); + + error = nvme_enable_ctrl(ctrl, ctrl->cap); + if (error) + goto out_stop_queue; + + error = nvme_init_identify(ctrl); + if (error) + goto out_stop_queue; + + return 0; + +out_stop_queue: + nvme_tcp_stop_queue(ctrl, 0); +out_cleanup_queue: + if (new) + blk_cleanup_queue(ctrl->admin_q); +out_free_tagset: + if (new) + blk_mq_free_tag_set(ctrl->admin_tagset); +out_free_queue: + nvme_tcp_free_admin_queue(ctrl); + return error; +} + +static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl, + bool remove) +{ + blk_mq_quiesce_queue(ctrl->admin_q); + nvme_tcp_stop_queue(ctrl, 0); + blk_mq_tagset_busy_iter(ctrl->admin_tagset, nvme_cancel_request, ctrl); + blk_mq_unquiesce_queue(ctrl->admin_q); + nvme_tcp_destroy_admin_queue(ctrl, remove); +} + +static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl, + bool remove) +{ + if (ctrl->queue_count <= 1) + return; + nvme_stop_queues(ctrl); + nvme_tcp_stop_io_queues(ctrl); + blk_mq_tagset_busy_iter(ctrl->tagset, nvme_cancel_request, ctrl); + if (remove) + 
nvme_start_queues(ctrl); + nvme_tcp_destroy_io_queues(ctrl, remove); +} + +static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl) +{ + /* If we are resetting/deleting then do nothing */ + if (ctrl->state != NVME_CTRL_CONNECTING) { + WARN_ON_ONCE(ctrl->state == NVME_CTRL_NEW || + ctrl->state == NVME_CTRL_LIVE); + return; + } + + if (nvmf_should_reconnect(ctrl)) { + dev_info(ctrl->device, "Reconnecting in %d seconds...\n", + ctrl->opts->reconnect_delay); + queue_delayed_work(nvme_wq, &to_tcp_ctrl(ctrl)->connect_work, + ctrl->opts->reconnect_delay * HZ); + } else { + dev_info(ctrl->device, "Removing controller...\n"); + nvme_delete_ctrl(ctrl); + } +} + +static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new) +{ + struct nvmf_ctrl_options *opts = ctrl->opts; + int ret = -EINVAL; + + ret = nvme_tcp_configure_admin_queue(ctrl, new); + if (ret) + return ret; + + if (ctrl->icdoff) { + dev_err(ctrl->device, "icdoff is not supported!\n"); + goto destroy_admin; + } + + if (opts->queue_size > ctrl->sqsize + 1) + dev_warn(ctrl->device, + "queue_size %zu > ctrl sqsize %u, clamping down\n", + opts->queue_size, ctrl->sqsize + 1); + + if (ctrl->sqsize + 1 > ctrl->maxcmd) { + dev_warn(ctrl->device, + "sqsize %u > ctrl maxcmd %u, clamping down\n", + ctrl->sqsize + 1, ctrl->maxcmd); + ctrl->sqsize = ctrl->maxcmd - 1; + } + + if (ctrl->queue_count > 1) { + ret = nvme_tcp_configure_io_queues(ctrl, new); + if (ret) + goto destroy_admin; + } + + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) { + /* state change failure is ok if we're in DELETING state */ + WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING); + ret = -EINVAL; + goto destroy_io; + } + + nvme_start_ctrl(ctrl); + return 0; + +destroy_io: + if (ctrl->queue_count > 1) + nvme_tcp_destroy_io_queues(ctrl, new); +destroy_admin: + nvme_tcp_stop_queue(ctrl, 0); + nvme_tcp_destroy_admin_queue(ctrl, new); + return ret; +} + +static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work) +{ + struct nvme_tcp_ctrl *tcp_ctrl = container_of(to_delayed_work(work), + struct nvme_tcp_ctrl, connect_work); + struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl; + + ++ctrl->nr_reconnects; + + if (nvme_tcp_setup_ctrl(ctrl, false)) + goto requeue; + + dev_info(ctrl->device, "Successfully reconnected (%d attempt)\n", + ctrl->nr_reconnects); + + ctrl->nr_reconnects = 0; + + return; + +requeue: + dev_info(ctrl->device, "Failed reconnect attempt %d\n", + ctrl->nr_reconnects); + nvme_tcp_reconnect_or_remove(ctrl); +} + +static void nvme_tcp_error_recovery_work(struct work_struct *work) +{ + struct nvme_tcp_ctrl *tcp_ctrl = container_of(work, + struct nvme_tcp_ctrl, err_work); + struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl; + + nvme_stop_keep_alive(ctrl); + nvme_tcp_teardown_io_queues(ctrl, false); + /* unquiesce to fail fast pending requests */ + nvme_start_queues(ctrl); + nvme_tcp_teardown_admin_queue(ctrl, false); + + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) { + /* state change failure is ok if we're in DELETING state */ + WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING); + return; + } + + nvme_tcp_reconnect_or_remove(ctrl); +} + +static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown) +{ + nvme_tcp_teardown_io_queues(ctrl, shutdown); + if (shutdown) + nvme_shutdown_ctrl(ctrl); + else + nvme_disable_ctrl(ctrl, ctrl->cap); + nvme_tcp_teardown_admin_queue(ctrl, shutdown); +} + +static void nvme_tcp_delete_ctrl(struct nvme_ctrl *ctrl) +{ + nvme_tcp_teardown_ctrl(ctrl, true); +} + +static void nvme_reset_ctrl_work(struct work_struct *work) 
+{ + struct nvme_ctrl *ctrl = + container_of(work, struct nvme_ctrl, reset_work); + + nvme_stop_ctrl(ctrl); + nvme_tcp_teardown_ctrl(ctrl, false); + + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) { + /* state change failure is ok if we're in DELETING state */ + WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING); + return; + } + + if (nvme_tcp_setup_ctrl(ctrl, false)) + goto out_fail; + + return; + +out_fail: + ++ctrl->nr_reconnects; + nvme_tcp_reconnect_or_remove(ctrl); +} + +static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl) +{ + cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work); + cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work); +} + +static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl) +{ + struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); + + if (list_empty(&ctrl->list)) + goto free_ctrl; + + mutex_lock(&nvme_tcp_ctrl_mutex); + list_del(&ctrl->list); + mutex_unlock(&nvme_tcp_ctrl_mutex); + + nvmf_free_options(nctrl->opts); +free_ctrl: + kfree(ctrl->queues); + kfree(ctrl); +} + +static void nvme_tcp_set_sg_null(struct nvme_command *c) +{ + struct nvme_sgl_desc *sg = &c->common.dptr.sgl; + + sg->addr = 0; + sg->length = 0; + sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) | + NVME_SGL_FMT_TRANSPORT_A; +} + +static void nvme_tcp_set_sg_inline(struct nvme_tcp_queue *queue, + struct nvme_command *c, u32 data_len) +{ + struct nvme_sgl_desc *sg = &c->common.dptr.sgl; + + sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff); + sg->length = cpu_to_le32(data_len); + sg->type = (NVME_SGL_FMT_DATA_DESC << 4) | NVME_SGL_FMT_OFFSET; +} + +static void nvme_tcp_set_sg_host_data(struct nvme_command *c, + u32 data_len) +{ + struct nvme_sgl_desc *sg = &c->common.dptr.sgl; + + sg->addr = 0; + sg->length = cpu_to_le32(data_len); + sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) | + NVME_SGL_FMT_TRANSPORT_A; +} + +static void nvme_tcp_submit_async_event(struct nvme_ctrl *arg) +{ + struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(arg); + struct nvme_tcp_queue *queue = &ctrl->queues[0]; + struct nvme_tcp_cmd_pdu *pdu = ctrl->async_req.pdu; + struct nvme_command *cmd = &pdu->cmd; + u8 hdgst = nvme_tcp_hdgst_len(queue); + + memset(pdu, 0, sizeof(*pdu)); + pdu->hdr.type = nvme_tcp_cmd; + if (queue->hdr_digest) + pdu->hdr.flags |= NVME_TCP_F_HDGST; + pdu->hdr.hlen = sizeof(*pdu); + pdu->hdr.plen = cpu_to_le32(pdu->hdr.hlen + hdgst); + + cmd->common.opcode = nvme_admin_async_event; + cmd->common.command_id = NVME_AQ_BLK_MQ_DEPTH; + cmd->common.flags |= NVME_CMD_SGL_METABUF; + nvme_tcp_set_sg_null(cmd); + + ctrl->async_req.state = NVME_TCP_SEND_CMD_PDU; + ctrl->async_req.offset = 0; + ctrl->async_req.curr_bio = NULL; + ctrl->async_req.data_len = 0; + + nvme_tcp_queue_request(&ctrl->async_req); +} + +static enum blk_eh_timer_return +nvme_tcp_timeout(struct request *rq, bool reserved) +{ + struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); + struct nvme_tcp_ctrl *ctrl = req->queue->ctrl; + struct nvme_tcp_cmd_pdu *pdu = req->pdu; + + dev_dbg(ctrl->ctrl.device, + "queue %d: timeout request %#x type %d\n", + nvme_tcp_queue_id(req->queue), rq->tag, + pdu->hdr.type); + + if (ctrl->ctrl.state != NVME_CTRL_LIVE) { + union nvme_result res = {}; + + nvme_req(rq)->flags |= NVME_REQ_CANCELLED; + nvme_end_request(rq, cpu_to_le16(NVME_SC_ABORT_REQ), res); + return BLK_EH_DONE; + } + + /* queue error recovery */ + nvme_tcp_error_recovery(&ctrl->ctrl); + + return BLK_EH_RESET_TIMER; +} + +static blk_status_t nvme_tcp_map_data(struct nvme_tcp_queue *queue, + struct request *rq) +{ + struct nvme_tcp_request *req = 
blk_mq_rq_to_pdu(rq); + struct nvme_tcp_cmd_pdu *pdu = req->pdu; + struct nvme_command *c = &pdu->cmd; + + c->common.flags |= NVME_CMD_SGL_METABUF; + + if (rq_data_dir(rq) == WRITE && req->data_len && + req->data_len <= nvme_tcp_inline_data_size(queue)) + nvme_tcp_set_sg_inline(queue, c, req->data_len); + else + nvme_tcp_set_sg_host_data(c, req->data_len); + + return 0; +} + +static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns, + struct request *rq) +{ + struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); + struct nvme_tcp_cmd_pdu *pdu = req->pdu; + struct nvme_tcp_queue *queue = req->queue; + u8 hdgst = nvme_tcp_hdgst_len(queue), ddgst = 0; + blk_status_t ret; + + ret = nvme_setup_cmd(ns, rq, &pdu->cmd); + if (ret) + return ret; + + req->state = NVME_TCP_SEND_CMD_PDU; + req->offset = 0; + req->data_sent = 0; + req->pdu_len = 0; + req->pdu_sent = 0; + req->data_len = blk_rq_payload_bytes(rq); + req->curr_bio = rq->bio; + + if (rq_data_dir(rq) == WRITE && + req->data_len <= nvme_tcp_inline_data_size(queue)) + req->pdu_len = req->data_len; + else if (req->curr_bio) + nvme_tcp_init_iter(req, READ); + + pdu->hdr.type = nvme_tcp_cmd; + pdu->hdr.flags = 0; + if (queue->hdr_digest) + pdu->hdr.flags |= NVME_TCP_F_HDGST; + if (queue->data_digest && req->pdu_len) { + pdu->hdr.flags |= NVME_TCP_F_DDGST; + ddgst = nvme_tcp_ddgst_len(queue); + } + pdu->hdr.hlen = sizeof(*pdu); + pdu->hdr.pdo = req->pdu_len ? pdu->hdr.hlen + hdgst : 0; + pdu->hdr.plen = + cpu_to_le32(pdu->hdr.hlen + hdgst + req->pdu_len + ddgst); + + ret = nvme_tcp_map_data(queue, rq); + if (unlikely(ret)) { + dev_err(queue->ctrl->ctrl.device, + "Failed to map data (%d)\n", ret); + return ret; + } + + return 0; +} + +static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) +{ + struct nvme_ns *ns = hctx->queue->queuedata; + struct nvme_tcp_queue *queue = hctx->driver_data; + struct request *rq = bd->rq; + struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); + bool queue_ready = test_bit(NVME_TCP_Q_LIVE, &queue->flags); + blk_status_t ret; + + if (!nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready)) + return nvmf_fail_nonready_command(&queue->ctrl->ctrl, rq); + + ret = nvme_tcp_setup_cmd_pdu(ns, rq); + if (unlikely(ret)) + return ret; + + blk_mq_start_request(rq); + + nvme_tcp_queue_request(req); + + return BLK_STS_OK; +} + +static int nvme_tcp_map_queues(struct blk_mq_tag_set *set) +{ + struct nvme_tcp_ctrl *ctrl = set->driver_data; + + set->map[HCTX_TYPE_DEFAULT].queue_offset = 0; + set->map[HCTX_TYPE_READ].nr_queues = ctrl->ctrl.opts->nr_io_queues; + if (ctrl->ctrl.opts->nr_write_queues) { + /* separate read/write queues */ + set->map[HCTX_TYPE_DEFAULT].nr_queues = + ctrl->ctrl.opts->nr_write_queues; + set->map[HCTX_TYPE_READ].queue_offset = + ctrl->ctrl.opts->nr_write_queues; + } else { + /* mixed read/write queues */ + set->map[HCTX_TYPE_DEFAULT].nr_queues = + ctrl->ctrl.opts->nr_io_queues; + set->map[HCTX_TYPE_READ].queue_offset = 0; + } + blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]); + blk_mq_map_queues(&set->map[HCTX_TYPE_READ]); + return 0; +} + +static struct blk_mq_ops nvme_tcp_mq_ops = { + .queue_rq = nvme_tcp_queue_rq, + .complete = nvme_complete_rq, + .init_request = nvme_tcp_init_request, + .exit_request = nvme_tcp_exit_request, + .init_hctx = nvme_tcp_init_hctx, + .timeout = nvme_tcp_timeout, + .map_queues = nvme_tcp_map_queues, +}; + +static struct blk_mq_ops nvme_tcp_admin_mq_ops = { + .queue_rq = nvme_tcp_queue_rq, + .complete = nvme_complete_rq, + 
.init_request = nvme_tcp_init_request, + .exit_request = nvme_tcp_exit_request, + .init_hctx = nvme_tcp_init_admin_hctx, + .timeout = nvme_tcp_timeout, +}; + +static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = { + .name = "tcp", + .module = THIS_MODULE, + .flags = NVME_F_FABRICS, + .reg_read32 = nvmf_reg_read32, + .reg_read64 = nvmf_reg_read64, + .reg_write32 = nvmf_reg_write32, + .free_ctrl = nvme_tcp_free_ctrl, + .submit_async_event = nvme_tcp_submit_async_event, + .delete_ctrl = nvme_tcp_delete_ctrl, + .get_address = nvmf_get_address, + .stop_ctrl = nvme_tcp_stop_ctrl, +}; + +static bool +nvme_tcp_existing_controller(struct nvmf_ctrl_options *opts) +{ + struct nvme_tcp_ctrl *ctrl; + bool found = false; + + mutex_lock(&nvme_tcp_ctrl_mutex); + list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) { + found = nvmf_ip_options_match(&ctrl->ctrl, opts); + if (found) + break; + } + mutex_unlock(&nvme_tcp_ctrl_mutex); + + return found; +} + +static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev, + struct nvmf_ctrl_options *opts) +{ + struct nvme_tcp_ctrl *ctrl; + int ret; + + ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL); + if (!ctrl) + return ERR_PTR(-ENOMEM); + + INIT_LIST_HEAD(&ctrl->list); + ctrl->ctrl.opts = opts; + ctrl->ctrl.queue_count = opts->nr_io_queues + opts->nr_write_queues + 1; + ctrl->ctrl.sqsize = opts->queue_size - 1; + ctrl->ctrl.kato = opts->kato; + + INIT_DELAYED_WORK(&ctrl->connect_work, + nvme_tcp_reconnect_ctrl_work); + INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work); + INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work); + + if (!(opts->mask & NVMF_OPT_TRSVCID)) { + opts->trsvcid = + kstrdup(__stringify(NVME_TCP_DISC_PORT), GFP_KERNEL); + if (!opts->trsvcid) { + ret = -ENOMEM; + goto out_free_ctrl; + } + opts->mask |= NVMF_OPT_TRSVCID; + } + + ret = inet_pton_with_scope(&init_net, AF_UNSPEC, + opts->traddr, opts->trsvcid, &ctrl->addr); + if (ret) { + pr_err("malformed address passed: %s:%s\n", + opts->traddr, opts->trsvcid); + goto out_free_ctrl; + } + + if (opts->mask & NVMF_OPT_HOST_TRADDR) { + ret = inet_pton_with_scope(&init_net, AF_UNSPEC, + opts->host_traddr, NULL, &ctrl->src_addr); + if (ret) { + pr_err("malformed src address passed: %s\n", + opts->host_traddr); + goto out_free_ctrl; + } + } + + if (!opts->duplicate_connect && nvme_tcp_existing_controller(opts)) { + ret = -EALREADY; + goto out_free_ctrl; + } + + ctrl->queues = kcalloc(ctrl->ctrl.queue_count, sizeof(*ctrl->queues), + GFP_KERNEL); + if (!ctrl->queues) { + ret = -ENOMEM; + goto out_free_ctrl; + } + + ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_tcp_ctrl_ops, 0); + if (ret) + goto out_kfree_queues; + + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { + WARN_ON_ONCE(1); + ret = -EINTR; + goto out_uninit_ctrl; + } + + ret = nvme_tcp_setup_ctrl(&ctrl->ctrl, true); + if (ret) + goto out_uninit_ctrl; + + dev_info(ctrl->ctrl.device, "new ctrl: NQN \"%s\", addr %pISp\n", + ctrl->ctrl.opts->subsysnqn, &ctrl->addr); + + nvme_get_ctrl(&ctrl->ctrl); + + mutex_lock(&nvme_tcp_ctrl_mutex); + list_add_tail(&ctrl->list, &nvme_tcp_ctrl_list); + mutex_unlock(&nvme_tcp_ctrl_mutex); + + return &ctrl->ctrl; + +out_uninit_ctrl: + nvme_uninit_ctrl(&ctrl->ctrl); + nvme_put_ctrl(&ctrl->ctrl); + if (ret > 0) + ret = -EIO; + return ERR_PTR(ret); +out_kfree_queues: + kfree(ctrl->queues); +out_free_ctrl: + kfree(ctrl); + return ERR_PTR(ret); +} + +static struct nvmf_transport_ops nvme_tcp_transport = { + .name = "tcp", + .module = THIS_MODULE, + .required_opts = NVMF_OPT_TRADDR, + 
.allowed_opts = NVMF_OPT_TRSVCID | NVMF_OPT_RECONNECT_DELAY | + NVMF_OPT_HOST_TRADDR | NVMF_OPT_CTRL_LOSS_TMO | + NVMF_OPT_HDR_DIGEST | NVMF_OPT_DATA_DIGEST | + NVMF_OPT_NR_WRITE_QUEUES, + .create_ctrl = nvme_tcp_create_ctrl, +}; + +static int __init nvme_tcp_init_module(void) +{ + nvme_tcp_wq = alloc_workqueue("nvme_tcp_wq", + WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); + if (!nvme_tcp_wq) + return -ENOMEM; + + nvmf_register_transport(&nvme_tcp_transport); + return 0; +} + +static void __exit nvme_tcp_cleanup_module(void) +{ + struct nvme_tcp_ctrl *ctrl; + + nvmf_unregister_transport(&nvme_tcp_transport); + + mutex_lock(&nvme_tcp_ctrl_mutex); + list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) + nvme_delete_ctrl(&ctrl->ctrl); + mutex_unlock(&nvme_tcp_ctrl_mutex); + flush_workqueue(nvme_delete_wq); + + destroy_workqueue(nvme_tcp_wq); +} + +module_init(nvme_tcp_init_module); +module_exit(nvme_tcp_cleanup_module); + +MODULE_LICENSE("GPL v2"); diff --git a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c index 25b0e310f4a8..5566dda3237a 100644 --- a/drivers/nvme/host/trace.c +++ b/drivers/nvme/host/trace.c @@ -139,3 +139,6 @@ const char *nvme_trace_disk_name(struct trace_seq *p, char *name) return ret; } +EXPORT_SYMBOL_GPL(nvme_trace_disk_name); + +EXPORT_TRACEPOINT_SYMBOL_GPL(nvme_sq); diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h index 196d5bd56718..3564120aa7b3 100644 --- a/drivers/nvme/host/trace.h +++ b/drivers/nvme/host/trace.h @@ -115,8 +115,8 @@ TRACE_EVENT(nvme_setup_cmd, __entry->nsid = le32_to_cpu(cmd->common.nsid); __entry->metadata = le64_to_cpu(cmd->common.metadata); __assign_disk_name(__entry->disk, req->rq_disk); - memcpy(__entry->cdw10, cmd->common.cdw10, - sizeof(__entry->cdw10)); + memcpy(__entry->cdw10, &cmd->common.cdw10, + 6 * sizeof(__entry->cdw10)); ), TP_printk("nvme%d: %sqid=%d, cmdid=%u, nsid=%u, flags=0x%x, meta=0x%llx, cmd=(%s %s)", __entry->ctrl_id, __print_disk_name(__entry->disk), @@ -184,6 +184,29 @@ TRACE_EVENT(nvme_async_event, #undef aer_name +TRACE_EVENT(nvme_sq, + TP_PROTO(struct request *req, __le16 sq_head, int sq_tail), + TP_ARGS(req, sq_head, sq_tail), + TP_STRUCT__entry( + __field(int, ctrl_id) + __array(char, disk, DISK_NAME_LEN) + __field(int, qid) + __field(u16, sq_head) + __field(u16, sq_tail) + ), + TP_fast_assign( + __entry->ctrl_id = nvme_req(req)->ctrl->instance; + __assign_disk_name(__entry->disk, req->rq_disk); + __entry->qid = nvme_req_qid(req); + __entry->sq_head = le16_to_cpu(sq_head); + __entry->sq_tail = sq_tail; + ), + TP_printk("nvme%d: %sqid=%d, head=%u, tail=%u", + __entry->ctrl_id, __print_disk_name(__entry->disk), + __entry->qid, __entry->sq_head, __entry->sq_tail + ) +); + #endif /* _TRACE_NVME_H */ #undef TRACE_INCLUDE_PATH diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig index 3c7b61ddb0d1..d94f25cde019 100644 --- a/drivers/nvme/target/Kconfig +++ b/drivers/nvme/target/Kconfig @@ -60,3 +60,13 @@ config NVME_TARGET_FCLOOP to test NVMe-FC transport interfaces. If unsure, say N. + +config NVME_TARGET_TCP + tristate "NVMe over Fabrics TCP target support" + depends on INET + depends on NVME_TARGET + help + This enables the NVMe TCP target support, which allows exporting NVMe + devices over TCP. + + If unsure, say N. 
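[Editorial aside, not part of the merged patches.] Both digests the host and target negotiate above — the per-PDU header digest (HDGST) and the payload data digest (DDGST) — are CRC32C, which is why the host driver allocates a "crc32c" ahash transform in nvme_tcp_alloc_crypto(). As a point of reference, here is a minimal userspace sketch of the same checksum; it is illustrative only, since in-kernel the crypto API is used so that accelerated implementations (e.g. the x86 SSE4.2 crc32c driver) can be substituted transparently:

	#include <stdint.h>
	#include <stddef.h>
	#include <stdio.h>

	/* Reflected CRC32C (Castagnoli), computed one bit at a time. */
	static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
	{
		const uint8_t *p = buf;
		int k;

		crc = ~crc;
		while (len--) {
			crc ^= *p++;
			for (k = 0; k < 8; k++)
				crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
		}
		return ~crc;
	}

	int main(void)
	{
		/* "123456789" is the standard check vector: expect e3069283 */
		printf("%08x\n", crc32c(0, "123456789", 9));
		return 0;
	}

Running it prints e3069283, the standard CRC32C check value for "123456789" — a handy sanity check when eyeballing HDGST/DDGST values in a packet capture against an independent implementation.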
diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile index 8118c93391c6..8c3ad0fb6860 100644 --- a/drivers/nvme/target/Makefile +++ b/drivers/nvme/target/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_NVME_TARGET_LOOP) += nvme-loop.o obj-$(CONFIG_NVME_TARGET_RDMA) += nvmet-rdma.o obj-$(CONFIG_NVME_TARGET_FC) += nvmet-fc.o obj-$(CONFIG_NVME_TARGET_FCLOOP) += nvme-fcloop.o +obj-$(CONFIG_NVME_TARGET_TCP) += nvmet-tcp.o nvmet-y += core.o configfs.o admin-cmd.o fabrics-cmd.o \ discovery.o io-cmd-file.o io-cmd-bdev.o @@ -12,3 +13,4 @@ nvme-loop-y += loop.o nvmet-rdma-y += rdma.o nvmet-fc-y += fc.o nvme-fcloop-y += fcloop.o +nvmet-tcp-y += tcp.o diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c index 1179f6314323..11baeb14c388 100644 --- a/drivers/nvme/target/admin-cmd.c +++ b/drivers/nvme/target/admin-cmd.c @@ -19,19 +19,6 @@ #include #include "nvmet.h" -/* - * This helper allows us to clear the AEN based on the RAE bit, - * Please use this helper when processing the log pages which are - * associated with the AEN. - */ -static inline void nvmet_clear_aen(struct nvmet_req *req, u32 aen_bit) -{ - int rae = le32_to_cpu(req->cmd->common.cdw10[0]) & 1 << 15; - - if (!rae) - clear_bit(aen_bit, &req->sq->ctrl->aen_masked); -} - u32 nvmet_get_log_page_len(struct nvme_command *cmd) { u32 len = le16_to_cpu(cmd->get_log_page.numdu); @@ -50,6 +37,34 @@ static void nvmet_execute_get_log_page_noop(struct nvmet_req *req) nvmet_req_complete(req, nvmet_zero_sgl(req, 0, req->data_len)); } +static void nvmet_execute_get_log_page_error(struct nvmet_req *req) +{ + struct nvmet_ctrl *ctrl = req->sq->ctrl; + u16 status = NVME_SC_SUCCESS; + unsigned long flags; + off_t offset = 0; + u64 slot; + u64 i; + + spin_lock_irqsave(&ctrl->error_lock, flags); + slot = ctrl->err_counter % NVMET_ERROR_LOG_SLOTS; + + for (i = 0; i < NVMET_ERROR_LOG_SLOTS; i++) { + status = nvmet_copy_to_sgl(req, offset, &ctrl->slots[slot], + sizeof(struct nvme_error_slot)); + if (status) + break; + + if (slot == 0) + slot = NVMET_ERROR_LOG_SLOTS - 1; + else + slot--; + offset += sizeof(struct nvme_error_slot); + } + spin_unlock_irqrestore(&ctrl->error_lock, flags); + nvmet_req_complete(req, status); +} + static u16 nvmet_get_smart_log_nsid(struct nvmet_req *req, struct nvme_smart_log *slog) { @@ -60,6 +75,7 @@ static u16 nvmet_get_smart_log_nsid(struct nvmet_req *req, if (!ns) { pr_err("Could not find namespace id : %d\n", le32_to_cpu(req->cmd->get_log_page.nsid)); + req->error_loc = offsetof(struct nvme_rw_command, nsid); return NVME_SC_INVALID_NS; } @@ -119,6 +135,7 @@ static void nvmet_execute_get_log_page_smart(struct nvmet_req *req) { struct nvme_smart_log *log; u16 status = NVME_SC_INTERNAL; + unsigned long flags; if (req->data_len != sizeof(*log)) goto out; @@ -134,6 +151,11 @@ static void nvmet_execute_get_log_page_smart(struct nvmet_req *req) if (status) goto out_free_log; + spin_lock_irqsave(&req->sq->ctrl->error_lock, flags); + put_unaligned_le64(req->sq->ctrl->err_counter, + &log->num_err_log_entries); + spin_unlock_irqrestore(&req->sq->ctrl->error_lock, flags); + status = nvmet_copy_to_sgl(req, 0, log, sizeof(*log)); out_free_log: kfree(log); @@ -189,7 +211,7 @@ static void nvmet_execute_get_log_changed_ns(struct nvmet_req *req) if (!status) status = nvmet_zero_sgl(req, len, req->data_len - len); ctrl->nr_changed_ns = 0; - nvmet_clear_aen(req, NVME_AEN_CFG_NS_ATTR); + nvmet_clear_aen_bit(req, NVME_AEN_BIT_NS_ATTR); mutex_unlock(&ctrl->lock); out: nvmet_req_complete(req, status); @@ -252,7 +274,7 
@@ static void nvmet_execute_get_log_page_ana(struct nvmet_req *req) hdr.chgcnt = cpu_to_le64(nvmet_ana_chgcnt); hdr.ngrps = cpu_to_le16(ngrps); - nvmet_clear_aen(req, NVME_AEN_CFG_ANA_CHANGE); + nvmet_clear_aen_bit(req, NVME_AEN_BIT_ANA_CHANGE); up_read(&nvmet_ana_sem); kfree(desc); @@ -304,7 +326,8 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req) /* XXX: figure out what to do about RTD3R/RTD3 */ id->oaes = cpu_to_le32(NVMET_AEN_CFG_OPTIONAL); - id->ctratt = cpu_to_le32(1 << 0); + id->ctratt = cpu_to_le32(NVME_CTRL_ATTR_HID_128_BIT | + NVME_CTRL_ATTR_TBKAS); id->oacs = 0; @@ -392,6 +415,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) u16 status = 0; if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) { + req->error_loc = offsetof(struct nvme_identify, nsid); status = NVME_SC_INVALID_NS | NVME_SC_DNR; goto out; } @@ -512,6 +536,7 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req) ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid); if (!ns) { + req->error_loc = offsetof(struct nvme_identify, nsid); status = NVME_SC_INVALID_NS | NVME_SC_DNR; goto out; } @@ -569,13 +594,15 @@ static u16 nvmet_write_protect_flush_sync(struct nvmet_req *req) static u16 nvmet_set_feat_write_protect(struct nvmet_req *req) { - u32 write_protect = le32_to_cpu(req->cmd->common.cdw10[1]); + u32 write_protect = le32_to_cpu(req->cmd->common.cdw11); struct nvmet_subsys *subsys = req->sq->ctrl->subsys; u16 status = NVME_SC_FEATURE_NOT_CHANGEABLE; req->ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->rw.nsid); - if (unlikely(!req->ns)) + if (unlikely(!req->ns)) { + req->error_loc = offsetof(struct nvme_common_command, nsid); return status; + } mutex_lock(&subsys->lock); switch (write_protect) { @@ -599,11 +626,36 @@ static u16 nvmet_set_feat_write_protect(struct nvmet_req *req) return status; } +u16 nvmet_set_feat_kato(struct nvmet_req *req) +{ + u32 val32 = le32_to_cpu(req->cmd->common.cdw11); + + req->sq->ctrl->kato = DIV_ROUND_UP(val32, 1000); + + nvmet_set_result(req, req->sq->ctrl->kato); + + return 0; +} + +u16 nvmet_set_feat_async_event(struct nvmet_req *req, u32 mask) +{ + u32 val32 = le32_to_cpu(req->cmd->common.cdw11); + + if (val32 & ~mask) { + req->error_loc = offsetof(struct nvme_common_command, cdw11); + return NVME_SC_INVALID_FIELD | NVME_SC_DNR; + } + + WRITE_ONCE(req->sq->ctrl->aen_enabled, val32); + nvmet_set_result(req, val32); + + return 0; +} + static void nvmet_execute_set_features(struct nvmet_req *req) { struct nvmet_subsys *subsys = req->sq->ctrl->subsys; - u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10[0]); - u32 val32; + u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10); u16 status = 0; switch (cdw10 & 0xff) { @@ -612,19 +664,10 @@ static void nvmet_execute_set_features(struct nvmet_req *req) (subsys->max_qid - 1) | ((subsys->max_qid - 1) << 16)); break; case NVME_FEAT_KATO: - val32 = le32_to_cpu(req->cmd->common.cdw10[1]); - req->sq->ctrl->kato = DIV_ROUND_UP(val32, 1000); - nvmet_set_result(req, req->sq->ctrl->kato); + status = nvmet_set_feat_kato(req); break; case NVME_FEAT_ASYNC_EVENT: - val32 = le32_to_cpu(req->cmd->common.cdw10[1]); - if (val32 & ~NVMET_AEN_CFG_ALL) { - status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; - break; - } - - WRITE_ONCE(req->sq->ctrl->aen_enabled, val32); - nvmet_set_result(req, val32); + status = nvmet_set_feat_async_event(req, NVMET_AEN_CFG_ALL); break; case NVME_FEAT_HOST_ID: status = NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR; @@ -633,6 +676,7 @@ static void nvmet_execute_set_features(struct 
nvmet_req *req) status = nvmet_set_feat_write_protect(req); break; default: + req->error_loc = offsetof(struct nvme_common_command, cdw10); status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; break; } @@ -646,9 +690,10 @@ static u16 nvmet_get_feat_write_protect(struct nvmet_req *req) u32 result; req->ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->common.nsid); - if (!req->ns) + if (!req->ns) { + req->error_loc = offsetof(struct nvme_common_command, nsid); return NVME_SC_INVALID_NS | NVME_SC_DNR; - + } mutex_lock(&subsys->lock); if (req->ns->readonly == true) result = NVME_NS_WRITE_PROTECT; @@ -660,10 +705,20 @@ static u16 nvmet_get_feat_write_protect(struct nvmet_req *req) return 0; } +void nvmet_get_feat_kato(struct nvmet_req *req) +{ + nvmet_set_result(req, req->sq->ctrl->kato * 1000); +} + +void nvmet_get_feat_async_event(struct nvmet_req *req) +{ + nvmet_set_result(req, READ_ONCE(req->sq->ctrl->aen_enabled)); +} + static void nvmet_execute_get_features(struct nvmet_req *req) { struct nvmet_subsys *subsys = req->sq->ctrl->subsys; - u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10[0]); + u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10); u16 status = 0; switch (cdw10 & 0xff) { @@ -689,7 +744,7 @@ static void nvmet_execute_get_features(struct nvmet_req *req) break; #endif case NVME_FEAT_ASYNC_EVENT: - nvmet_set_result(req, READ_ONCE(req->sq->ctrl->aen_enabled)); + nvmet_get_feat_async_event(req); break; case NVME_FEAT_VOLATILE_WC: nvmet_set_result(req, 1); @@ -699,11 +754,13 @@ static void nvmet_execute_get_features(struct nvmet_req *req) (subsys->max_qid-1) | ((subsys->max_qid-1) << 16)); break; case NVME_FEAT_KATO: - nvmet_set_result(req, req->sq->ctrl->kato * 1000); + nvmet_get_feat_kato(req); break; case NVME_FEAT_HOST_ID: /* need 128-bit host identifier flag */ - if (!(req->cmd->common.cdw10[1] & cpu_to_le32(1 << 0))) { + if (!(req->cmd->common.cdw11 & cpu_to_le32(1 << 0))) { + req->error_loc = + offsetof(struct nvme_common_command, cdw11); status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; break; } @@ -715,6 +772,8 @@ static void nvmet_execute_get_features(struct nvmet_req *req) status = nvmet_get_feat_write_protect(req); break; default: + req->error_loc = + offsetof(struct nvme_common_command, cdw10); status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; break; } @@ -722,7 +781,7 @@ static void nvmet_execute_get_features(struct nvmet_req *req) nvmet_req_complete(req, status); } -static void nvmet_execute_async_event(struct nvmet_req *req) +void nvmet_execute_async_event(struct nvmet_req *req) { struct nvmet_ctrl *ctrl = req->sq->ctrl; @@ -738,7 +797,7 @@ static void nvmet_execute_async_event(struct nvmet_req *req) schedule_work(&ctrl->async_event_work); } -static void nvmet_execute_keep_alive(struct nvmet_req *req) +void nvmet_execute_keep_alive(struct nvmet_req *req) { struct nvmet_ctrl *ctrl = req->sq->ctrl; @@ -764,13 +823,7 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req) switch (cmd->get_log_page.lid) { case NVME_LOG_ERROR: - /* - * We currently never set the More bit in the status - * field, so all error log entries are invalid and can - * be zeroed out. This is called a minum viable - * implementation (TM) of this mandatory log page. 
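With the noop handler gone, the Error Information log is backed by a real ring: nvmet_set_error() in core.c below bumps err_counter under error_lock and fills slots[err_counter % NVMET_ERROR_LOG_SLOTS], while nvmet_execute_get_log_page_error() above starts at that same slot and walks backwards, so the host reads entries newest-first. A free-standing sketch of that ring discipline, with hypothetical names rather than driver code:

    #include <stdio.h>

    #define SLOTS 128 /* stand-in for NVMET_ERROR_LOG_SLOTS */

    struct slot { unsigned long long error_count; };

    static struct slot ring[SLOTS];
    static unsigned long long err_counter; /* only ever increments */

    /* writer: bump the counter, then fill the slot it now names */
    static void ring_log(void)
    {
        err_counter++;
        ring[err_counter % SLOTS].error_count = err_counter;
    }

    /* reader: start at the newest slot, walk backwards with wraparound */
    static void ring_dump(void)
    {
        unsigned long long slot = err_counter % SLOTS;

        for (int i = 0; i < SLOTS; i++) {
            printf("slot %llu holds entry %llu\n", slot,
                   ring[slot].error_count);
            slot = slot ? slot - 1 : SLOTS - 1;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 200; i++)
            ring_log();
        ring_dump();
        return 0;
    }

Keying everything off one monotonically increasing counter keeps the modulo arithmetic identical on the write and read sides, at the cost of a single u64 instead of explicit head/tail indices.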
- */ - req->execute = nvmet_execute_get_log_page_noop; + req->execute = nvmet_execute_get_log_page_error; return 0; case NVME_LOG_SMART: req->execute = nvmet_execute_get_log_page_smart; @@ -836,5 +889,6 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req) pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode, req->sq->qid); + req->error_loc = offsetof(struct nvme_common_command, opcode); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c index d895579b6c5d..618bbd006544 100644 --- a/drivers/nvme/target/configfs.c +++ b/drivers/nvme/target/configfs.c @@ -25,12 +25,16 @@ static const struct config_item_type nvmet_host_type; static const struct config_item_type nvmet_subsys_type; +static LIST_HEAD(nvmet_ports_list); +struct list_head *nvmet_ports = &nvmet_ports_list; + static const struct nvmet_transport_name { u8 type; const char *name; } nvmet_transport_names[] = { { NVMF_TRTYPE_RDMA, "rdma" }, { NVMF_TRTYPE_FC, "fc" }, + { NVMF_TRTYPE_TCP, "tcp" }, { NVMF_TRTYPE_LOOP, "loop" }, }; @@ -150,7 +154,8 @@ CONFIGFS_ATTR(nvmet_, addr_traddr); static ssize_t nvmet_addr_treq_show(struct config_item *item, char *page) { - switch (to_nvmet_port(item)->disc_addr.treq) { + switch (to_nvmet_port(item)->disc_addr.treq & + NVME_TREQ_SECURE_CHANNEL_MASK) { case NVMF_TREQ_NOT_SPECIFIED: return sprintf(page, "not specified\n"); case NVMF_TREQ_REQUIRED: @@ -166,6 +171,7 @@ static ssize_t nvmet_addr_treq_store(struct config_item *item, const char *page, size_t count) { struct nvmet_port *port = to_nvmet_port(item); + u8 treq = port->disc_addr.treq & ~NVME_TREQ_SECURE_CHANNEL_MASK; if (port->enabled) { pr_err("Cannot modify address while enabled\n"); @@ -174,15 +180,16 @@ static ssize_t nvmet_addr_treq_store(struct config_item *item, } if (sysfs_streq(page, "not specified")) { - port->disc_addr.treq = NVMF_TREQ_NOT_SPECIFIED; + treq |= NVMF_TREQ_NOT_SPECIFIED; } else if (sysfs_streq(page, "required")) { - port->disc_addr.treq = NVMF_TREQ_REQUIRED; + treq |= NVMF_TREQ_REQUIRED; } else if (sysfs_streq(page, "not required")) { - port->disc_addr.treq = NVMF_TREQ_NOT_REQUIRED; + treq |= NVMF_TREQ_NOT_REQUIRED; } else { pr_err("Invalid value '%s' for treq\n", page); return -EINVAL; } + port->disc_addr.treq = treq; return count; } @@ -646,7 +653,8 @@ static int nvmet_port_subsys_allow_link(struct config_item *parent, } list_add_tail(&link->entry, &port->subsystems); - nvmet_genctr++; + nvmet_port_disc_changed(port, subsys); + up_write(&nvmet_config_sem); return 0; @@ -673,7 +681,8 @@ static void nvmet_port_subsys_drop_link(struct config_item *parent, found: list_del(&p->entry); - nvmet_genctr++; + nvmet_port_disc_changed(port, subsys); + if (list_empty(&port->subsystems)) nvmet_disable_port(port); up_write(&nvmet_config_sem); @@ -722,7 +731,8 @@ static int nvmet_allowed_hosts_allow_link(struct config_item *parent, goto out_free_link; } list_add_tail(&link->entry, &subsys->hosts); - nvmet_genctr++; + nvmet_subsys_disc_changed(subsys, host); + up_write(&nvmet_config_sem); return 0; out_free_link: @@ -748,7 +758,8 @@ static void nvmet_allowed_hosts_drop_link(struct config_item *parent, found: list_del(&p->entry); - nvmet_genctr++; + nvmet_subsys_disc_changed(subsys, host); + up_write(&nvmet_config_sem); kfree(p); } @@ -787,7 +798,11 @@ static ssize_t nvmet_subsys_attr_allow_any_host_store(struct config_item *item, goto out_unlock; } - subsys->allow_any_host = allow_any_host; + if (subsys->allow_any_host != allow_any_host) { + 
subsys->allow_any_host = allow_any_host; + nvmet_subsys_disc_changed(subsys, NULL); + } + out_unlock: up_write(&nvmet_config_sem); return ret ? ret : count; @@ -936,7 +951,7 @@ static ssize_t nvmet_referral_enable_store(struct config_item *item, if (enable) nvmet_referral_enable(parent, port); else - nvmet_referral_disable(port); + nvmet_referral_disable(parent, port); return count; inval: @@ -962,9 +977,10 @@ static struct configfs_attribute *nvmet_referral_attrs[] = { static void nvmet_referral_release(struct config_item *item) { + struct nvmet_port *parent = to_nvmet_port(item->ci_parent->ci_parent); struct nvmet_port *port = to_nvmet_port(item); - nvmet_referral_disable(port); + nvmet_referral_disable(parent, port); kfree(port); } @@ -1137,6 +1153,8 @@ static void nvmet_port_release(struct config_item *item) { struct nvmet_port *port = to_nvmet_port(item); + list_del(&port->global_entry); + kfree(port->ana_state); kfree(port); } @@ -1189,12 +1207,15 @@ static struct config_group *nvmet_ports_make(struct config_group *group, port->ana_state[i] = NVME_ANA_INACCESSIBLE; } + list_add(&port->global_entry, &nvmet_ports_list); + INIT_LIST_HEAD(&port->entry); INIT_LIST_HEAD(&port->subsystems); INIT_LIST_HEAD(&port->referrals); port->inline_data_size = -1; /* < 0 == let the transport choose */ port->disc_addr.portid = cpu_to_le16(portid); + port->disc_addr.treq = NVMF_TREQ_DISABLE_SQFLOW; config_group_init_type_name(&port->group, name, &nvmet_port_type); config_group_init_type_name(&port->subsys_group, diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c index a5f9bbce863f..88d260f31835 100644 --- a/drivers/nvme/target/core.c +++ b/drivers/nvme/target/core.c @@ -45,28 +45,72 @@ u32 nvmet_ana_group_enabled[NVMET_MAX_ANAGRPS + 1]; u64 nvmet_ana_chgcnt; DECLARE_RWSEM(nvmet_ana_sem); +inline u16 errno_to_nvme_status(struct nvmet_req *req, int errno) +{ + u16 status; + + switch (errno) { + case -ENOSPC: + req->error_loc = offsetof(struct nvme_rw_command, length); + status = NVME_SC_CAP_EXCEEDED | NVME_SC_DNR; + break; + case -EREMOTEIO: + req->error_loc = offsetof(struct nvme_rw_command, slba); + status = NVME_SC_LBA_RANGE | NVME_SC_DNR; + break; + case -EOPNOTSUPP: + req->error_loc = offsetof(struct nvme_common_command, opcode); + switch (req->cmd->common.opcode) { + case nvme_cmd_dsm: + case nvme_cmd_write_zeroes: + status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_SC_DNR; + break; + default: + status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR; + } + break; + case -ENODATA: + req->error_loc = offsetof(struct nvme_rw_command, nsid); + status = NVME_SC_ACCESS_DENIED; + break; + case -EIO: + /* FALLTHRU */ + default: + req->error_loc = offsetof(struct nvme_common_command, opcode); + status = NVME_SC_INTERNAL | NVME_SC_DNR; + } + + return status; +} + static struct nvmet_subsys *nvmet_find_get_subsys(struct nvmet_port *port, const char *subsysnqn); u16 nvmet_copy_to_sgl(struct nvmet_req *req, off_t off, const void *buf, size_t len) { - if (sg_pcopy_from_buffer(req->sg, req->sg_cnt, buf, len, off) != len) + if (sg_pcopy_from_buffer(req->sg, req->sg_cnt, buf, len, off) != len) { + req->error_loc = offsetof(struct nvme_common_command, dptr); return NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR; + } return 0; } u16 nvmet_copy_from_sgl(struct nvmet_req *req, off_t off, void *buf, size_t len) { - if (sg_pcopy_to_buffer(req->sg, req->sg_cnt, buf, len, off) != len) + if (sg_pcopy_to_buffer(req->sg, req->sg_cnt, buf, len, off) != len) { + req->error_loc = offsetof(struct nvme_common_command, dptr); return 
NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR; + } return 0; } u16 nvmet_zero_sgl(struct nvmet_req *req, off_t off, size_t len) { - if (sg_zero_buffer(req->sg, req->sg_cnt, len, off) != len) + if (sg_zero_buffer(req->sg, req->sg_cnt, len, off) != len) { + req->error_loc = offsetof(struct nvme_common_command, dptr); return NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR; + } return 0; } @@ -130,7 +174,7 @@ static void nvmet_async_event_work(struct work_struct *work) } } -static void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type, +void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type, u8 event_info, u8 log_page) { struct nvmet_async_event *aen; @@ -150,13 +194,6 @@ static void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type, schedule_work(&ctrl->async_event_work); } -static bool nvmet_aen_disabled(struct nvmet_ctrl *ctrl, u32 aen) -{ - if (!(READ_ONCE(ctrl->aen_enabled) & aen)) - return true; - return test_and_set_bit(aen, &ctrl->aen_masked); -} - static void nvmet_add_to_changed_ns_log(struct nvmet_ctrl *ctrl, __le32 nsid) { u32 i; @@ -187,7 +224,7 @@ void nvmet_ns_changed(struct nvmet_subsys *subsys, u32 nsid) list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) { nvmet_add_to_changed_ns_log(ctrl, cpu_to_le32(nsid)); - if (nvmet_aen_disabled(ctrl, NVME_AEN_CFG_NS_ATTR)) + if (nvmet_aen_bit_disabled(ctrl, NVME_AEN_BIT_NS_ATTR)) continue; nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, NVME_AER_NOTICE_NS_CHANGED, @@ -204,7 +241,7 @@ void nvmet_send_ana_event(struct nvmet_subsys *subsys, list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) { if (port && ctrl->port != port) continue; - if (nvmet_aen_disabled(ctrl, NVME_AEN_CFG_ANA_CHANGE)) + if (nvmet_aen_bit_disabled(ctrl, NVME_AEN_BIT_ANA_CHANGE)) continue; nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, NVME_AER_NOTICE_ANA, NVME_LOG_ANA); @@ -299,6 +336,15 @@ static void nvmet_keep_alive_timer(struct work_struct *work) { struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work), struct nvmet_ctrl, ka_work); + bool cmd_seen = ctrl->cmd_seen; + + ctrl->cmd_seen = false; + if (cmd_seen) { + pr_debug("ctrl %d reschedule traffic based keep-alive timer\n", + ctrl->cntlid); + schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ); + return; + } pr_err("ctrl %d keep-alive timer (%d seconds) expired!\n", ctrl->cntlid, ctrl->kato); @@ -595,26 +641,58 @@ struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid) return ns; } -static void __nvmet_req_complete(struct nvmet_req *req, u16 status) +static void nvmet_update_sq_head(struct nvmet_req *req) { - u32 old_sqhd, new_sqhd; - u16 sqhd; - - if (status) - nvmet_set_status(req, status); - if (req->sq->size) { + u32 old_sqhd, new_sqhd; + do { old_sqhd = req->sq->sqhd; new_sqhd = (old_sqhd + 1) % req->sq->size; } while (cmpxchg(&req->sq->sqhd, old_sqhd, new_sqhd) != old_sqhd); } - sqhd = req->sq->sqhd & 0x0000FFFF; - req->rsp->sq_head = cpu_to_le16(sqhd); + req->rsp->sq_head = cpu_to_le16(req->sq->sqhd & 0x0000FFFF); +} + +static void nvmet_set_error(struct nvmet_req *req, u16 status) +{ + struct nvmet_ctrl *ctrl = req->sq->ctrl; + struct nvme_error_slot *new_error_slot; + unsigned long flags; + + req->rsp->status = cpu_to_le16(status << 1); + + if (!ctrl || req->error_loc == NVMET_NO_ERROR_LOC) + return; + + spin_lock_irqsave(&ctrl->error_lock, flags); + ctrl->err_counter++; + new_error_slot = + &ctrl->slots[ctrl->err_counter % NVMET_ERROR_LOG_SLOTS]; + + new_error_slot->error_count = cpu_to_le64(ctrl->err_counter); + new_error_slot->sqid = 
cpu_to_le16(req->sq->qid); + new_error_slot->cmdid = cpu_to_le16(req->cmd->common.command_id); + new_error_slot->status_field = cpu_to_le16(status << 1); + new_error_slot->param_error_location = cpu_to_le16(req->error_loc); + new_error_slot->lba = cpu_to_le64(req->error_slba); + new_error_slot->nsid = req->cmd->common.nsid; + spin_unlock_irqrestore(&ctrl->error_lock, flags); + + /* set the more bit for this request */ + req->rsp->status |= cpu_to_le16(1 << 14); +} + +static void __nvmet_req_complete(struct nvmet_req *req, u16 status) +{ + if (!req->sq->sqhd_disabled) + nvmet_update_sq_head(req); req->rsp->sq_id = cpu_to_le16(req->sq->qid); req->rsp->command_id = req->cmd->common.command_id; + if (unlikely(status)) + nvmet_set_error(req, status); if (req->ns) nvmet_put_namespace(req->ns); req->ops->queue_response(req); @@ -735,14 +813,20 @@ static u16 nvmet_parse_io_cmd(struct nvmet_req *req) return ret; req->ns = nvmet_find_namespace(req->sq->ctrl, cmd->rw.nsid); - if (unlikely(!req->ns)) + if (unlikely(!req->ns)) { + req->error_loc = offsetof(struct nvme_common_command, nsid); return NVME_SC_INVALID_NS | NVME_SC_DNR; + } ret = nvmet_check_ana_state(req->port, req->ns); - if (unlikely(ret)) + if (unlikely(ret)) { + req->error_loc = offsetof(struct nvme_common_command, nsid); return ret; + } ret = nvmet_io_cmd_check_access(req); - if (unlikely(ret)) + if (unlikely(ret)) { + req->error_loc = offsetof(struct nvme_common_command, nsid); return ret; + } if (req->ns->file) return nvmet_file_parse_io_cmd(req); @@ -763,10 +847,14 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq, req->sg_cnt = 0; req->transfer_len = 0; req->rsp->status = 0; + req->rsp->sq_head = 0; req->ns = NULL; + req->error_loc = NVMET_NO_ERROR_LOC; + req->error_slba = 0; /* no support for fused commands yet */ if (unlikely(flags & (NVME_CMD_FUSE_FIRST | NVME_CMD_FUSE_SECOND))) { + req->error_loc = offsetof(struct nvme_common_command, flags); status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; goto fail; } @@ -777,6 +865,7 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq, * byte aligned. 
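Nearby in core.c, the completion rework also isolates the head-pointer math: nvmet_update_sq_head() advances sqhd to (sqhd + 1) % size inside a cmpxchg() retry loop, so concurrent completions on the same queue never take a lock. A stand-alone sketch of the same loop in C11 atomics (assumed equivalent here only for illustration; the kernel uses its own cmpxchg() primitive):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* advance *head to (*head + 1) % size against concurrent updaters */
    static uint32_t sq_advance_head(_Atomic uint32_t *head, uint32_t size)
    {
        uint32_t old = atomic_load(head);
        uint32_t new;

        do {
            new = (old + 1) % size;
            /* on failure, atomic_compare_exchange_weak reloads 'old' */
        } while (!atomic_compare_exchange_weak(head, &old, new));

        return new;
    }

    int main(void)
    {
        _Atomic uint32_t head = 30;

        printf("%u\n", (unsigned)sq_advance_head(&head, 32)); /* 31 */
        printf("%u\n", (unsigned)sq_advance_head(&head, 32)); /* 0, wrapped */
        return 0;
    }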
*/ if (unlikely((flags & NVME_CMD_SGL_ALL) != NVME_CMD_SGL_METABUF)) { + req->error_loc = offsetof(struct nvme_common_command, flags); status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; goto fail; } @@ -801,6 +890,9 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq, goto fail; } + if (sq->ctrl) + sq->ctrl->cmd_seen = true; + return true; fail: @@ -819,9 +911,10 @@ EXPORT_SYMBOL_GPL(nvmet_req_uninit); void nvmet_req_execute(struct nvmet_req *req) { - if (unlikely(req->data_len != req->transfer_len)) + if (unlikely(req->data_len != req->transfer_len)) { + req->error_loc = offsetof(struct nvme_common_command, dptr); nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR); - else + } else req->execute(req); } EXPORT_SYMBOL_GPL(nvmet_req_execute); @@ -1027,14 +1120,18 @@ u16 nvmet_check_ctrl_status(struct nvmet_req *req, struct nvme_command *cmd) return 0; } -static bool __nvmet_host_allowed(struct nvmet_subsys *subsys, - const char *hostnqn) +bool nvmet_host_allowed(struct nvmet_subsys *subsys, const char *hostnqn) { struct nvmet_host_link *p; + lockdep_assert_held(&nvmet_config_sem); + if (subsys->allow_any_host) return true; + if (subsys->type == NVME_NQN_DISC) /* allow all access to disc subsys */ + return true; + list_for_each_entry(p, &subsys->hosts, entry) { if (!strcmp(nvmet_host_name(p->host), hostnqn)) return true; @@ -1043,30 +1140,6 @@ static bool __nvmet_host_allowed(struct nvmet_subsys *subsys, return false; } -static bool nvmet_host_discovery_allowed(struct nvmet_req *req, - const char *hostnqn) -{ - struct nvmet_subsys_link *s; - - list_for_each_entry(s, &req->port->subsystems, entry) { - if (__nvmet_host_allowed(s->subsys, hostnqn)) - return true; - } - - return false; -} - -bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys, - const char *hostnqn) -{ - lockdep_assert_held(&nvmet_config_sem); - - if (subsys->type == NVME_NQN_DISC) - return nvmet_host_discovery_allowed(req, hostnqn); - else - return __nvmet_host_allowed(subsys, hostnqn); -} - /* * Note: ctrl->subsys->lock should be held when calling this function */ @@ -1117,7 +1190,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR; down_read(&nvmet_config_sem); - if (!nvmet_host_allowed(req, subsys, hostnqn)) { + if (!nvmet_host_allowed(subsys, hostnqn)) { pr_info("connect by host %s for subsystem %s not allowed\n", hostnqn, subsysnqn); req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn); @@ -1175,31 +1248,20 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, ctrl->cntlid = ret; ctrl->ops = req->ops; - if (ctrl->subsys->type == NVME_NQN_DISC) { - /* Don't accept keep-alive timeout for discovery controllers */ - if (kato) { - status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; - goto out_remove_ida; - } - /* - * Discovery controllers use some arbitrary high value in order - * to cleanup stale discovery sessions - * - * From the latest base diff RC: - * "The Keep Alive command is not supported by - * Discovery controllers. A transport may specify a - * fixed Discovery controller activity timeout value - * (e.g., 2 minutes). If no commands are received - * by a Discovery controller within that time - * period, the controller may perform the - * actions for Keep Alive Timer expiration". 
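The replacement branch just below folds discovery controllers into the common path: a discovery connection that sends no KATO now defaults to NVMET_DISC_KATO_MS (120000 ms) instead of being special-cased, and ctrl->kato is always DIV_ROUND_UP(kato, 1000) seconds. Combined with the cmd_seen flag added to nvmet_keep_alive_timer() in core.c above, this behaves as traffic-based keep-alive, matching the NVME_CTRL_ATTR_TBKAS bit now advertised in the Identify data: any command observed during a period re-arms the timer. A minimal sketch of that timer policy (hypothetical names, not the driver's types):

    #include <stdbool.h>

    struct ka_ctrl {
        bool cmd_seen;     /* set whenever a command arrives on any queue */
        unsigned int kato; /* keep-alive timeout, seconds */
    };

    /* called when the keep-alive timer fires; true means tear down */
    static bool ka_expired(struct ka_ctrl *c)
    {
        if (c->cmd_seen) {
            /* traffic counted as a keep-alive: clear the flag, re-arm */
            c->cmd_seen = false;
            return false;
        }
        return true; /* a full period passed with no commands at all */
    }

    int main(void)
    {
        struct ka_ctrl c = { .cmd_seen = true, .kato = 120 };

        return ka_expired(&c); /* 0: traffic was seen, timer re-arms */
    }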
- */ - ctrl->kato = NVMET_DISC_KATO; - } else { - /* keep-alive timeout in seconds */ - ctrl->kato = DIV_ROUND_UP(kato, 1000); - } + /* + * Discovery controllers may use some arbitrary high value + * in order to cleanup stale discovery sessions + */ + if ((ctrl->subsys->type == NVME_NQN_DISC) && !kato) + kato = NVMET_DISC_KATO_MS; + + /* keep-alive timeout in seconds */ + ctrl->kato = DIV_ROUND_UP(kato, 1000); + + ctrl->err_counter = 0; + spin_lock_init(&ctrl->error_lock); + nvmet_start_keep_alive_timer(ctrl); mutex_lock(&subsys->lock); @@ -1210,8 +1272,6 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, *ctrlp = ctrl; return 0; -out_remove_ida: - ida_simple_remove(&cntlid_ida, ctrl->cntlid); out_free_sqs: kfree(ctrl->sqs); out_free_cqs: diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c index bc0aa0bf1543..d2cb71a0b419 100644 --- a/drivers/nvme/target/discovery.c +++ b/drivers/nvme/target/discovery.c @@ -18,7 +18,65 @@ struct nvmet_subsys *nvmet_disc_subsys; -u64 nvmet_genctr; +static u64 nvmet_genctr; + +static void __nvmet_disc_changed(struct nvmet_port *port, + struct nvmet_ctrl *ctrl) +{ + if (ctrl->port != port) + return; + + if (nvmet_aen_bit_disabled(ctrl, NVME_AEN_BIT_DISC_CHANGE)) + return; + + nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, + NVME_AER_NOTICE_DISC_CHANGED, NVME_LOG_DISC); +} + +void nvmet_port_disc_changed(struct nvmet_port *port, + struct nvmet_subsys *subsys) +{ + struct nvmet_ctrl *ctrl; + + nvmet_genctr++; + + list_for_each_entry(ctrl, &nvmet_disc_subsys->ctrls, subsys_entry) { + if (subsys && !nvmet_host_allowed(subsys, ctrl->hostnqn)) + continue; + + __nvmet_disc_changed(port, ctrl); + } +} + +static void __nvmet_subsys_disc_changed(struct nvmet_port *port, + struct nvmet_subsys *subsys, + struct nvmet_host *host) +{ + struct nvmet_ctrl *ctrl; + + list_for_each_entry(ctrl, &nvmet_disc_subsys->ctrls, subsys_entry) { + if (host && strcmp(nvmet_host_name(host), ctrl->hostnqn)) + continue; + + __nvmet_disc_changed(port, ctrl); + } +} + +void nvmet_subsys_disc_changed(struct nvmet_subsys *subsys, + struct nvmet_host *host) +{ + struct nvmet_port *port; + struct nvmet_subsys_link *s; + + nvmet_genctr++; + + list_for_each_entry(port, nvmet_ports, global_entry) + list_for_each_entry(s, &port->subsystems, entry) { + if (s->subsys != subsys) + continue; + __nvmet_subsys_disc_changed(port, subsys, host); + } +} void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port) { @@ -26,18 +84,18 @@ void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port) if (list_empty(&port->entry)) { list_add_tail(&port->entry, &parent->referrals); port->enabled = true; - nvmet_genctr++; + nvmet_port_disc_changed(parent, NULL); } up_write(&nvmet_config_sem); } -void nvmet_referral_disable(struct nvmet_port *port) +void nvmet_referral_disable(struct nvmet_port *parent, struct nvmet_port *port) { down_write(&nvmet_config_sem); if (!list_empty(&port->entry)) { port->enabled = false; list_del_init(&port->entry); - nvmet_genctr++; + nvmet_port_disc_changed(parent, NULL); } up_write(&nvmet_config_sem); } @@ -107,7 +165,7 @@ static void nvmet_execute_get_disc_log_page(struct nvmet_req *req) down_read(&nvmet_config_sem); list_for_each_entry(p, &req->port->subsystems, entry) { - if (!nvmet_host_allowed(req, p->subsys, ctrl->hostnqn)) + if (!nvmet_host_allowed(p->subsys, ctrl->hostnqn)) continue; if (residual_len >= entry_size) { char traddr[NVMF_TRADDR_SIZE]; @@ -136,6 +194,8 @@ static void 
nvmet_execute_get_disc_log_page(struct nvmet_req *req) hdr->numrec = cpu_to_le64(numrec); hdr->recfmt = cpu_to_le16(0); + nvmet_clear_aen_bit(req, NVME_AEN_BIT_DISC_CHANGE); + up_read(&nvmet_config_sem); status = nvmet_copy_to_sgl(req, 0, hdr, data_len); @@ -174,6 +234,8 @@ static void nvmet_execute_identify_disc_ctrl(struct nvmet_req *req) if (req->port->inline_data_size) id->sgls |= cpu_to_le32(1 << 20); + id->oaes = cpu_to_le32(NVMET_DISC_AEN_CFG_OPTIONAL); + strlcpy(id->subnqn, ctrl->subsys->subsysnqn, sizeof(id->subnqn)); status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); @@ -183,6 +245,51 @@ out: nvmet_req_complete(req, status); } +static void nvmet_execute_disc_set_features(struct nvmet_req *req) +{ + u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10); + u16 stat; + + switch (cdw10 & 0xff) { + case NVME_FEAT_KATO: + stat = nvmet_set_feat_kato(req); + break; + case NVME_FEAT_ASYNC_EVENT: + stat = nvmet_set_feat_async_event(req, + NVMET_DISC_AEN_CFG_OPTIONAL); + break; + default: + req->error_loc = + offsetof(struct nvme_common_command, cdw10); + stat = NVME_SC_INVALID_FIELD | NVME_SC_DNR; + break; + } + + nvmet_req_complete(req, stat); +} + +static void nvmet_execute_disc_get_features(struct nvmet_req *req) +{ + u32 cdw10 = le32_to_cpu(req->cmd->common.cdw10); + u16 stat = 0; + + switch (cdw10 & 0xff) { + case NVME_FEAT_KATO: + nvmet_get_feat_kato(req); + break; + case NVME_FEAT_ASYNC_EVENT: + nvmet_get_feat_async_event(req); + break; + default: + req->error_loc = + offsetof(struct nvme_common_command, cdw10); + stat = NVME_SC_INVALID_FIELD | NVME_SC_DNR; + break; + } + + nvmet_req_complete(req, stat); +} + u16 nvmet_parse_discovery_cmd(struct nvmet_req *req) { struct nvme_command *cmd = req->cmd; @@ -190,10 +297,28 @@ u16 nvmet_parse_discovery_cmd(struct nvmet_req *req) if (unlikely(!(req->sq->ctrl->csts & NVME_CSTS_RDY))) { pr_err("got cmd %d while not ready\n", cmd->common.opcode); + req->error_loc = + offsetof(struct nvme_common_command, opcode); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } switch (cmd->common.opcode) { + case nvme_admin_set_features: + req->execute = nvmet_execute_disc_set_features; + req->data_len = 0; + return 0; + case nvme_admin_get_features: + req->execute = nvmet_execute_disc_get_features; + req->data_len = 0; + return 0; + case nvme_admin_async_event: + req->execute = nvmet_execute_async_event; + req->data_len = 0; + return 0; + case nvme_admin_keep_alive: + req->execute = nvmet_execute_keep_alive; + req->data_len = 0; + return 0; case nvme_admin_get_log_page: req->data_len = nvmet_get_log_page_len(cmd); @@ -204,6 +329,8 @@ u16 nvmet_parse_discovery_cmd(struct nvmet_req *req) default: pr_err("unsupported get_log_page lid %d\n", cmd->get_log_page.lid); + req->error_loc = + offsetof(struct nvme_get_log_page_command, lid); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } case nvme_admin_identify: @@ -216,10 +343,12 @@ u16 nvmet_parse_discovery_cmd(struct nvmet_req *req) default: pr_err("unsupported identify cns %d\n", cmd->identify.cns); + req->error_loc = offsetof(struct nvme_identify, cns); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } default: pr_err("unhandled cmd %d\n", cmd->common.opcode); + req->error_loc = offsetof(struct nvme_common_command, opcode); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c index d84ae004cb85..6cf1fd9eb32e 100644 --- a/drivers/nvme/target/fabrics-cmd.c +++ b/drivers/nvme/target/fabrics-cmd.c @@ -17,23 +17,26 @@ static void 
nvmet_execute_prop_set(struct nvmet_req *req) { + u64 val = le64_to_cpu(req->cmd->prop_set.value); u16 status = 0; - if (!(req->cmd->prop_set.attrib & 1)) { - u64 val = le64_to_cpu(req->cmd->prop_set.value); - - switch (le32_to_cpu(req->cmd->prop_set.offset)) { - case NVME_REG_CC: - nvmet_update_cc(req->sq->ctrl, val); - break; - default: - status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; - break; - } - } else { + if (req->cmd->prop_set.attrib & 1) { + req->error_loc = + offsetof(struct nvmf_property_set_command, attrib); status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; + goto out; } + switch (le32_to_cpu(req->cmd->prop_set.offset)) { + case NVME_REG_CC: + nvmet_update_cc(req->sq->ctrl, val); + break; + default: + req->error_loc = + offsetof(struct nvmf_property_set_command, offset); + status = NVME_SC_INVALID_FIELD | NVME_SC_DNR; + } +out: nvmet_req_complete(req, status); } @@ -69,6 +72,14 @@ static void nvmet_execute_prop_get(struct nvmet_req *req) } } + if (status && req->cmd->prop_get.attrib & 1) { + req->error_loc = + offsetof(struct nvmf_property_get_command, offset); + } else { + req->error_loc = + offsetof(struct nvmf_property_get_command, attrib); + } + req->rsp->result.u64 = cpu_to_le64(val); nvmet_req_complete(req, status); } @@ -89,6 +100,7 @@ u16 nvmet_parse_fabrics_cmd(struct nvmet_req *req) default: pr_err("received unknown capsule type 0x%x\n", cmd->fabrics.fctype); + req->error_loc = offsetof(struct nvmf_common_command, fctype); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } @@ -105,16 +117,34 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req) old = cmpxchg(&req->sq->ctrl, NULL, ctrl); if (old) { pr_warn("queue already connected!\n"); + req->error_loc = offsetof(struct nvmf_connect_command, opcode); return NVME_SC_CONNECT_CTRL_BUSY | NVME_SC_DNR; } if (!sqsize) { pr_warn("queue size zero!\n"); + req->error_loc = offsetof(struct nvmf_connect_command, sqsize); return NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR; } /* note: convert queue size from 0's-based value to 1's-based value */ nvmet_cq_setup(ctrl, req->cq, qid, sqsize + 1); nvmet_sq_setup(ctrl, req->sq, qid, sqsize + 1); + + if (c->cattr & NVME_CONNECT_DISABLE_SQFLOW) { + req->sq->sqhd_disabled = true; + req->rsp->sq_head = cpu_to_le16(0xffff); + } + + if (ctrl->ops->install_queue) { + u16 ret = ctrl->ops->install_queue(req->sq); + + if (ret) { + pr_err("failed to install queue %d cntlid %d ret %x\n", + qid, ctrl->cntlid, ret); + return ret; + } + } + return 0; } @@ -141,6 +171,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req) if (c->recfmt != 0) { pr_warn("invalid connect version (%d).\n", le16_to_cpu(c->recfmt)); + req->error_loc = offsetof(struct nvmf_connect_command, recfmt); status = NVME_SC_CONNECT_FORMAT | NVME_SC_DNR; goto out; } @@ -155,8 +186,13 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req) status = nvmet_alloc_ctrl(d->subsysnqn, d->hostnqn, req, le32_to_cpu(c->kato), &ctrl); - if (status) + if (status) { + if (status == (NVME_SC_INVALID_FIELD | NVME_SC_DNR)) + req->error_loc = + offsetof(struct nvme_common_command, opcode); goto out; + } + uuid_copy(&ctrl->hostid, &d->hostid); status = nvmet_install_queue(ctrl, req); @@ -243,11 +279,13 @@ u16 nvmet_parse_connect_cmd(struct nvmet_req *req) if (cmd->common.opcode != nvme_fabrics_command) { pr_err("invalid command 0x%x on unconnected queue.\n", cmd->fabrics.opcode); + req->error_loc = offsetof(struct nvme_common_command, opcode); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } if (cmd->fabrics.fctype 
!= nvme_fabrics_type_connect) { pr_err("invalid capsule type 0x%x on unconnected queue.\n", cmd->fabrics.fctype); + req->error_loc = offsetof(struct nvmf_common_command, fctype); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c index 409081a03b24..f98f5c5bea26 100644 --- a/drivers/nvme/target/fc.c +++ b/drivers/nvme/target/fc.c @@ -86,8 +86,6 @@ struct nvmet_fc_fcp_iod { spinlock_t flock; struct nvmet_req req; - struct work_struct work; - struct work_struct done_work; struct work_struct defer_work; struct nvmet_fc_tgtport *tgtport; @@ -134,7 +132,6 @@ struct nvmet_fc_tgt_queue { u16 sqsize; u16 ersp_ratio; __le16 sqhd; - int cpu; atomic_t connected; atomic_t sqtail; atomic_t zrspcnt; @@ -232,8 +229,6 @@ static LIST_HEAD(nvmet_fc_portentry_list); static void nvmet_fc_handle_ls_rqst_work(struct work_struct *work); -static void nvmet_fc_handle_fcp_rqst_work(struct work_struct *work); -static void nvmet_fc_fcp_rqst_op_done_work(struct work_struct *work); static void nvmet_fc_fcp_rqst_op_defer_work(struct work_struct *work); static void nvmet_fc_tgt_a_put(struct nvmet_fc_tgt_assoc *assoc); static int nvmet_fc_tgt_a_get(struct nvmet_fc_tgt_assoc *assoc); @@ -438,8 +433,6 @@ nvmet_fc_prep_fcp_iodlist(struct nvmet_fc_tgtport *tgtport, int i; for (i = 0; i < queue->sqsize; fod++, i++) { - INIT_WORK(&fod->work, nvmet_fc_handle_fcp_rqst_work); - INIT_WORK(&fod->done_work, nvmet_fc_fcp_rqst_op_done_work); INIT_WORK(&fod->defer_work, nvmet_fc_fcp_rqst_op_defer_work); fod->tgtport = tgtport; fod->queue = queue; @@ -517,10 +510,7 @@ nvmet_fc_queue_fcp_req(struct nvmet_fc_tgtport *tgtport, fcpreq->hwqid = queue->qid ? ((queue->qid - 1) % tgtport->ops->max_hw_queues) : 0; - if (tgtport->ops->target_features & NVMET_FCTGTFEAT_CMD_IN_ISR) - queue_work_on(queue->cpu, queue->work_q, &fod->work); - else - nvmet_fc_handle_fcp_rqst(tgtport, fod); + nvmet_fc_handle_fcp_rqst(tgtport, fod); } static void @@ -599,30 +589,6 @@ nvmet_fc_free_fcp_iod(struct nvmet_fc_tgt_queue *queue, queue_work(queue->work_q, &fod->defer_work); } -static int -nvmet_fc_queue_to_cpu(struct nvmet_fc_tgtport *tgtport, int qid) -{ - int cpu, idx, cnt; - - if (tgtport->ops->max_hw_queues == 1) - return WORK_CPU_UNBOUND; - - /* Simple cpu selection based on qid modulo active cpu count */ - idx = !qid ? 
0 : (qid - 1) % num_active_cpus(); - - /* find the n'th active cpu */ - for (cpu = 0, cnt = 0; ; ) { - if (cpu_active(cpu)) { - if (cnt == idx) - break; - cnt++; - } - cpu = (cpu + 1) % num_possible_cpus(); - } - - return cpu; -} - static struct nvmet_fc_tgt_queue * nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc, u16 qid, u16 sqsize) @@ -653,7 +619,6 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc, queue->qid = qid; queue->sqsize = sqsize; queue->assoc = assoc; - queue->cpu = nvmet_fc_queue_to_cpu(assoc->tgtport, qid); INIT_LIST_HEAD(&queue->fod_list); INIT_LIST_HEAD(&queue->avail_defer_list); INIT_LIST_HEAD(&queue->pending_cmd_list); @@ -2145,26 +2110,12 @@ nvmet_fc_fod_op_done(struct nvmet_fc_fcp_iod *fod) } } -static void -nvmet_fc_fcp_rqst_op_done_work(struct work_struct *work) -{ - struct nvmet_fc_fcp_iod *fod = - container_of(work, struct nvmet_fc_fcp_iod, done_work); - - nvmet_fc_fod_op_done(fod); -} - static void nvmet_fc_xmt_fcp_op_done(struct nvmefc_tgt_fcp_req *fcpreq) { struct nvmet_fc_fcp_iod *fod = fcpreq->nvmet_fc_private; - struct nvmet_fc_tgt_queue *queue = fod->queue; - if (fod->tgtport->ops->target_features & NVMET_FCTGTFEAT_OPDONE_IN_ISR) - /* context switch so completion is not in ISR context */ - queue_work_on(queue->cpu, queue->work_q, &fod->done_work); - else - nvmet_fc_fod_op_done(fod); + nvmet_fc_fod_op_done(fod); } /* @@ -2332,19 +2283,6 @@ transport_error: nvmet_fc_abort_op(tgtport, fod); } -/* - * Actual processing routine for received FC-NVME LS Requests from the LLD - */ -static void -nvmet_fc_handle_fcp_rqst_work(struct work_struct *work) -{ - struct nvmet_fc_fcp_iod *fod = - container_of(work, struct nvmet_fc_fcp_iod, work); - struct nvmet_fc_tgtport *tgtport = fod->tgtport; - - nvmet_fc_handle_fcp_rqst(tgtport, fod); -} - /** * nvmet_fc_rcv_fcp_req - transport entry point called by an LLDD * upon the reception of a NVME FCP CMD IU. diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c index c1ec3475a140..b6d030d3259f 100644 --- a/drivers/nvme/target/io-cmd-bdev.c +++ b/drivers/nvme/target/io-cmd-bdev.c @@ -44,13 +44,69 @@ void nvmet_bdev_ns_disable(struct nvmet_ns *ns) } } +static u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts) +{ + u16 status = NVME_SC_SUCCESS; + + if (likely(blk_sts == BLK_STS_OK)) + return status; + /* + * Right now there exists an M : 1 mapping between block layer errors + * and NVMe status codes (see nvme_error_status()). For consistency, + * when we reverse map we use the most appropriate NVMe status code + * from the group of NVMe status codes used in nvme_error_status(). 
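Between errno_to_nvme_status() in core.c above and blk_to_nvme_status() here, the backends now share a single reverse map from kernel error space into NVMe status codes; since the host's nvme_error_status() is many-to-one, the reverse direction has to pick one representative per class. A stand-alone sketch of that table, printing names in place of the real NVME_SC_* constants:

    #include <errno.h>
    #include <stdio.h>

    /* hypothetical, reduced mirror of the driver's errno mapping */
    static const char *errno_to_nvme_name(int err)
    {
        switch (err) {
        case ENOSPC:
            return "NVME_SC_CAP_EXCEEDED | NVME_SC_DNR";
        case EREMOTEIO:
            return "NVME_SC_LBA_RANGE | NVME_SC_DNR";
        case EOPNOTSUPP:
            /* the real helper picks ONCS_NOT_SUPPORTED for dsm and
             * write_zeroes opcodes, INVALID_OPCODE for anything else */
            return "NVME_SC_ONCS_NOT_SUPPORTED | NVME_SC_DNR";
        case ENODATA:
            return "NVME_SC_ACCESS_DENIED";
        case EIO: /* fall through, like the driver */
        default:
            return "NVME_SC_INTERNAL | NVME_SC_DNR";
        }
    }

    int main(void)
    {
        int errs[] = { ENOSPC, EREMOTEIO, EOPNOTSUPP, ENODATA, EIO, EINVAL };

        for (unsigned i = 0; i < sizeof(errs) / sizeof(errs[0]); i++)
            printf("errno %d -> %s\n", errs[i], errno_to_nvme_name(errs[i]));
        return 0;
    }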
+ */ + switch (blk_sts) { + case BLK_STS_NOSPC: + status = NVME_SC_CAP_EXCEEDED | NVME_SC_DNR; + req->error_loc = offsetof(struct nvme_rw_command, length); + break; + case BLK_STS_TARGET: + status = NVME_SC_LBA_RANGE | NVME_SC_DNR; + req->error_loc = offsetof(struct nvme_rw_command, slba); + break; + case BLK_STS_NOTSUPP: + req->error_loc = offsetof(struct nvme_common_command, opcode); + switch (req->cmd->common.opcode) { + case nvme_cmd_dsm: + case nvme_cmd_write_zeroes: + status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_SC_DNR; + break; + default: + status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR; + } + break; + case BLK_STS_MEDIUM: + status = NVME_SC_ACCESS_DENIED; + req->error_loc = offsetof(struct nvme_rw_command, nsid); + break; + case BLK_STS_IOERR: + /* fallthru */ + default: + status = NVME_SC_INTERNAL | NVME_SC_DNR; + req->error_loc = offsetof(struct nvme_common_command, opcode); + } + + switch (req->cmd->common.opcode) { + case nvme_cmd_read: + case nvme_cmd_write: + req->error_slba = le64_to_cpu(req->cmd->rw.slba); + break; + case nvme_cmd_write_zeroes: + req->error_slba = + le64_to_cpu(req->cmd->write_zeroes.slba); + break; + default: + req->error_slba = 0; + } + return status; +} + static void nvmet_bio_done(struct bio *bio) { struct nvmet_req *req = bio->bi_private; - nvmet_req_complete(req, - bio->bi_status ? NVME_SC_INTERNAL | NVME_SC_DNR : 0); - + nvmet_req_complete(req, blk_to_nvme_status(req, bio->bi_status)); if (bio != &req->b.inline_bio) bio_put(bio); } @@ -61,7 +117,6 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req) struct bio *bio; struct scatterlist *sg; sector_t sector; - blk_qc_t cookie; int op, op_flags = 0, i; if (!req->sg_cnt) { @@ -114,9 +169,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req) sg_cnt--; } - cookie = submit_bio(bio); - - blk_poll(bdev_get_queue(req->ns->bdev), cookie); + submit_bio(bio); } static void nvmet_bdev_execute_flush(struct nvmet_req *req) @@ -139,18 +192,21 @@ u16 nvmet_bdev_flush(struct nvmet_req *req) return 0; } -static u16 nvmet_bdev_discard_range(struct nvmet_ns *ns, +static u16 nvmet_bdev_discard_range(struct nvmet_req *req, struct nvme_dsm_range *range, struct bio **bio) { + struct nvmet_ns *ns = req->ns; int ret; ret = __blkdev_issue_discard(ns->bdev, le64_to_cpu(range->slba) << (ns->blksize_shift - 9), le32_to_cpu(range->nlb) << (ns->blksize_shift - 9), GFP_KERNEL, 0, bio); - if (ret && ret != -EOPNOTSUPP) - return NVME_SC_INTERNAL | NVME_SC_DNR; - return 0; + + if (ret) + req->error_slba = le64_to_cpu(range->slba); + + return blk_to_nvme_status(req, errno_to_blk_status(ret)); } static void nvmet_bdev_execute_discard(struct nvmet_req *req) @@ -166,7 +222,7 @@ static void nvmet_bdev_execute_discard(struct nvmet_req *req) if (status) break; - status = nvmet_bdev_discard_range(req->ns, &range, &bio); + status = nvmet_bdev_discard_range(req, &range, &bio); if (status) break; } @@ -207,16 +263,16 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req) u16 status = NVME_SC_SUCCESS; sector_t sector; sector_t nr_sector; + int ret; sector = le64_to_cpu(write_zeroes->slba) << (req->ns->blksize_shift - 9); nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) << (req->ns->blksize_shift - 9)); - if (__blkdev_issue_zeroout(req->ns->bdev, sector, nr_sector, - GFP_KERNEL, &bio, 0)) - status = NVME_SC_INTERNAL | NVME_SC_DNR; - + ret = __blkdev_issue_zeroout(req->ns->bdev, sector, nr_sector, + GFP_KERNEL, &bio, 0); + status = blk_to_nvme_status(req, errno_to_blk_status(ret)); if (bio) { bio->bi_private = 
req; bio->bi_end_io = nvmet_bio_done; @@ -251,6 +307,7 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req) default: pr_err("unhandled cmd %d on qid %d\n", cmd->common.opcode, req->sq->qid); + req->error_loc = offsetof(struct nvme_common_command, opcode); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } } diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c index 01feebec29ea..517522305e5c 100644 --- a/drivers/nvme/target/io-cmd-file.c +++ b/drivers/nvme/target/io-cmd-file.c @@ -83,17 +83,16 @@ static void nvmet_file_init_bvec(struct bio_vec *bv, struct sg_page_iter *iter) } static ssize_t nvmet_file_submit_bvec(struct nvmet_req *req, loff_t pos, - unsigned long nr_segs, size_t count) + unsigned long nr_segs, size_t count, int ki_flags) { struct kiocb *iocb = &req->f.iocb; ssize_t (*call_iter)(struct kiocb *iocb, struct iov_iter *iter); struct iov_iter iter; - int ki_flags = 0, rw; - ssize_t ret; + int rw; if (req->cmd->rw.opcode == nvme_cmd_write) { if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA)) - ki_flags = IOCB_DSYNC; + ki_flags |= IOCB_DSYNC; call_iter = req->ns->file->f_op->write_iter; rw = WRITE; } else { @@ -107,17 +106,13 @@ static ssize_t nvmet_file_submit_bvec(struct nvmet_req *req, loff_t pos, iocb->ki_filp = req->ns->file; iocb->ki_flags = ki_flags | iocb_flags(req->ns->file); - ret = call_iter(iocb, &iter); - - if (ret != -EIOCBQUEUED && iocb->ki_complete) - iocb->ki_complete(iocb, ret, 0); - - return ret; + return call_iter(iocb, &iter); } static void nvmet_file_io_done(struct kiocb *iocb, long ret, long ret2) { struct nvmet_req *req = container_of(iocb, struct nvmet_req, f.iocb); + u16 status = NVME_SC_SUCCESS; if (req->f.bvec != req->inline_bvec) { if (likely(req->f.mpool_alloc == false)) @@ -126,11 +121,12 @@ static void nvmet_file_io_done(struct kiocb *iocb, long ret, long ret2) mempool_free(req->f.bvec, req->ns->bvec_pool); } - nvmet_req_complete(req, ret != req->data_len ? 
- NVME_SC_INTERNAL | NVME_SC_DNR : 0); + if (unlikely(ret != req->data_len)) + status = errno_to_nvme_status(req, ret); + nvmet_req_complete(req, status); } -static void nvmet_file_execute_rw(struct nvmet_req *req) +static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags) { ssize_t nr_bvec = DIV_ROUND_UP(req->data_len, PAGE_SIZE); struct sg_page_iter sg_pg_iter; @@ -140,30 +136,14 @@ static void nvmet_file_execute_rw(struct nvmet_req *req) ssize_t ret = 0; loff_t pos; - if (!req->sg_cnt || !nr_bvec) { - nvmet_req_complete(req, 0); - return; - } + + if (req->f.mpool_alloc && nr_bvec > NVMET_MAX_MPOOL_BVEC) + is_sync = true; pos = le64_to_cpu(req->cmd->rw.slba) << req->ns->blksize_shift; if (unlikely(pos + req->data_len > req->ns->size)) { - nvmet_req_complete(req, NVME_SC_LBA_RANGE | NVME_SC_DNR); - return; - } - - if (nr_bvec > NVMET_MAX_INLINE_BIOVEC) - req->f.bvec = kmalloc_array(nr_bvec, sizeof(struct bio_vec), - GFP_KERNEL); - else - req->f.bvec = req->inline_bvec; - - req->f.mpool_alloc = false; - if (unlikely(!req->f.bvec)) { - /* fallback under memory pressure */ - req->f.bvec = mempool_alloc(req->ns->bvec_pool, GFP_KERNEL); - req->f.mpool_alloc = true; - if (nr_bvec > NVMET_MAX_MPOOL_BVEC) - is_sync = true; + nvmet_req_complete(req, errno_to_nvme_status(req, -ENOSPC)); + return true; } memset(&req->f.iocb, 0, sizeof(struct kiocb)); @@ -177,9 +157,10 @@ static void nvmet_file_execute_rw(struct nvmet_req *req) if (unlikely(is_sync) && (nr_bvec - 1 == 0 || bv_cnt == NVMET_MAX_MPOOL_BVEC)) { - ret = nvmet_file_submit_bvec(req, pos, bv_cnt, len); + ret = nvmet_file_submit_bvec(req, pos, bv_cnt, len, 0); if (ret < 0) - goto out; + goto complete; + pos += len; bv_cnt = 0; len = 0; @@ -187,35 +168,95 @@ static void nvmet_file_execute_rw(struct nvmet_req *req) nr_bvec--; } - if (WARN_ON_ONCE(total_len != req->data_len)) + if (WARN_ON_ONCE(total_len != req->data_len)) { ret = -EIO; -out: - if (unlikely(is_sync || ret)) { - nvmet_file_io_done(&req->f.iocb, ret < 0 ? ret : total_len, 0); - return; + goto complete; + } + + if (unlikely(is_sync)) { + ret = total_len; + goto complete; } - req->f.iocb.ki_complete = nvmet_file_io_done; - nvmet_file_submit_bvec(req, pos, bv_cnt, total_len); + + /* + * A NULL ki_complete asks for synchronous execution, which we want + * for the IOCB_NOWAIT case. + */ + if (!(ki_flags & IOCB_NOWAIT)) + req->f.iocb.ki_complete = nvmet_file_io_done; + + ret = nvmet_file_submit_bvec(req, pos, bv_cnt, total_len, ki_flags); + + switch (ret) { + case -EIOCBQUEUED: + return true; + case -EAGAIN: + if (WARN_ON_ONCE(!(ki_flags & IOCB_NOWAIT))) + goto complete; + return false; + case -EOPNOTSUPP: + /* + * For file systems returning error -EOPNOTSUPP, handle the + * IOCB_NOWAIT error case separately and retry without + * IOCB_NOWAIT. 
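The switch above is what lets nvmet_file_execute_rw() further down implement try-inline-first submission: a false return (-EAGAIN, or a filesystem that rejects IOCB_NOWAIT with -EOPNOTSUPP) means the non-blocking attempt could not proceed, and only then is the request bounced to the buffered-io workqueue. A compilable sketch of that shape, with every name hypothetical:

    #include <stdbool.h>
    #include <stdio.h>

    #define IO_NOWAIT 0x1 /* stand-in for the kernel's IOCB_NOWAIT */

    struct io_req { int id; }; /* hypothetical request type */

    /* pretend backend: "would block" whenever sleeping is forbidden */
    static bool try_execute(struct io_req *rq, int flags)
    {
        (void)rq;
        return !(flags & IO_NOWAIT);
    }

    static void queue_to_worker(struct io_req *rq)
    {
        printf("req %d deferred to a context that may sleep\n", rq->id);
        try_execute(rq, 0); /* the worker retries without IO_NOWAIT */
    }

    static void submit_buffered(struct io_req *rq)
    {
        /* fast path: attempt inline without letting the backend sleep */
        if (try_execute(rq, IO_NOWAIT))
            return;
        /* slow path: hand off, like queueing to buffered_io_wq */
        queue_to_worker(rq);
    }

    int main(void)
    {
        struct io_req rq = { .id = 1 };

        submit_buffered(&rq);
        return 0;
    }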
+ */ + if ((ki_flags & IOCB_NOWAIT)) + return false; + break; + } + +complete: + nvmet_file_io_done(&req->f.iocb, ret, 0); + return true; } static void nvmet_file_buffered_io_work(struct work_struct *w) { struct nvmet_req *req = container_of(w, struct nvmet_req, f.work); - nvmet_file_execute_rw(req); + nvmet_file_execute_io(req, 0); } -static void nvmet_file_execute_rw_buffered_io(struct nvmet_req *req) +static void nvmet_file_submit_buffered_io(struct nvmet_req *req) { INIT_WORK(&req->f.work, nvmet_file_buffered_io_work); queue_work(buffered_io_wq, &req->f.work); } +static void nvmet_file_execute_rw(struct nvmet_req *req) +{ + ssize_t nr_bvec = DIV_ROUND_UP(req->data_len, PAGE_SIZE); + + if (!req->sg_cnt || !nr_bvec) { + nvmet_req_complete(req, 0); + return; + } + + if (nr_bvec > NVMET_MAX_INLINE_BIOVEC) + req->f.bvec = kmalloc_array(nr_bvec, sizeof(struct bio_vec), + GFP_KERNEL); + else + req->f.bvec = req->inline_bvec; + + if (unlikely(!req->f.bvec)) { + /* fallback under memory pressure */ + req->f.bvec = mempool_alloc(req->ns->bvec_pool, GFP_KERNEL); + req->f.mpool_alloc = true; + } else + req->f.mpool_alloc = false; + + if (req->ns->buffered_io) { + if (likely(!req->f.mpool_alloc) && + nvmet_file_execute_io(req, IOCB_NOWAIT)) + return; + nvmet_file_submit_buffered_io(req); + } else + nvmet_file_execute_io(req, 0); +} + u16 nvmet_file_flush(struct nvmet_req *req) { - if (vfs_fsync(req->ns->file, 1) < 0) - return NVME_SC_INTERNAL | NVME_SC_DNR; - return 0; + return errno_to_nvme_status(req, vfs_fsync(req->ns->file, 1)); } static void nvmet_file_flush_work(struct work_struct *w) @@ -236,30 +277,34 @@ static void nvmet_file_execute_discard(struct nvmet_req *req) int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE; struct nvme_dsm_range range; loff_t offset, len; - u16 ret; + u16 status = 0; + int ret; int i; for (i = 0; i <= le32_to_cpu(req->cmd->dsm.nr); i++) { - ret = nvmet_copy_from_sgl(req, i * sizeof(range), &range, + status = nvmet_copy_from_sgl(req, i * sizeof(range), &range, sizeof(range)); - if (ret) + if (status) break; offset = le64_to_cpu(range.slba) << req->ns->blksize_shift; len = le32_to_cpu(range.nlb); len <<= req->ns->blksize_shift; if (offset + len > req->ns->size) { - ret = NVME_SC_LBA_RANGE | NVME_SC_DNR; + req->error_slba = le64_to_cpu(range.slba); + status = errno_to_nvme_status(req, -ENOSPC); break; } - if (vfs_fallocate(req->ns->file, mode, offset, len)) { - ret = NVME_SC_INTERNAL | NVME_SC_DNR; + ret = vfs_fallocate(req->ns->file, mode, offset, len); + if (ret) { + req->error_slba = le64_to_cpu(range.slba); + status = errno_to_nvme_status(req, ret); break; } } - nvmet_req_complete(req, ret); + nvmet_req_complete(req, status); } static void nvmet_file_dsm_work(struct work_struct *w) @@ -299,12 +344,12 @@ static void nvmet_file_write_zeroes_work(struct work_struct *w) req->ns->blksize_shift); if (unlikely(offset + len > req->ns->size)) { - nvmet_req_complete(req, NVME_SC_LBA_RANGE | NVME_SC_DNR); + nvmet_req_complete(req, errno_to_nvme_status(req, -ENOSPC)); return; } ret = vfs_fallocate(req->ns->file, mode, offset, len); - nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL | NVME_SC_DNR : 0); + nvmet_req_complete(req, ret < 0 ? 
errno_to_nvme_status(req, ret) : 0); } static void nvmet_file_execute_write_zeroes(struct nvmet_req *req) @@ -320,10 +365,7 @@ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req) switch (cmd->common.opcode) { case nvme_cmd_read: case nvme_cmd_write: - if (req->ns->buffered_io) - req->execute = nvmet_file_execute_rw_buffered_io; - else - req->execute = nvmet_file_execute_rw; + req->execute = nvmet_file_execute_rw; req->data_len = nvmet_rw_len(req); return 0; case nvme_cmd_flush: @@ -342,6 +384,7 @@ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req) default: pr_err("unhandled cmd for file ns %d on qid %d\n", cmd->common.opcode, req->sq->qid); + req->error_loc = offsetof(struct nvme_common_command, opcode); return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; } } diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c index 9908082b32c4..4aac1b4a8112 100644 --- a/drivers/nvme/target/loop.c +++ b/drivers/nvme/target/loop.c @@ -345,7 +345,7 @@ static int nvme_loop_connect_io_queues(struct nvme_loop_ctrl *ctrl) int i, ret; for (i = 1; i < ctrl->ctrl.queue_count; i++) { - ret = nvmf_connect_io_queue(&ctrl->ctrl, i); + ret = nvmf_connect_io_queue(&ctrl->ctrl, i, false); if (ret) return ret; set_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[i].flags); diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h index c2b4d9ee6391..3e4719fdba85 100644 --- a/drivers/nvme/target/nvmet.h +++ b/drivers/nvme/target/nvmet.h @@ -30,12 +30,15 @@ #define NVMET_ASYNC_EVENTS 4 #define NVMET_ERROR_LOG_SLOTS 128 +#define NVMET_NO_ERROR_LOC ((u16)-1) /* * Supported optional AENs: */ #define NVMET_AEN_CFG_OPTIONAL \ (NVME_AEN_CFG_NS_ATTR | NVME_AEN_CFG_ANA_CHANGE) +#define NVMET_DISC_AEN_CFG_OPTIONAL \ + (NVME_AEN_CFG_DISC_CHANGE) /* * Plus mandatory SMART AENs (we'll never send them, but allow enabling them): @@ -104,6 +107,7 @@ struct nvmet_sq { u16 qid; u16 size; u32 sqhd; + bool sqhd_disabled; struct completion free_done; struct completion confirm_done; }; @@ -137,6 +141,7 @@ struct nvmet_port { struct list_head subsystems; struct config_group referrals_group; struct list_head referrals; + struct list_head global_entry; struct config_group ana_groups_group; struct nvmet_ana_group ana_default_group; enum nvme_ana_state *ana_state; @@ -163,6 +168,8 @@ struct nvmet_ctrl { struct nvmet_cq **cqs; struct nvmet_sq **sqs; + bool cmd_seen; + struct mutex lock; u64 cap; u32 cc; @@ -194,8 +201,12 @@ struct nvmet_ctrl { char subsysnqn[NVMF_NQN_FIELD_LEN]; char hostnqn[NVMF_NQN_FIELD_LEN]; - struct device *p2p_client; - struct radix_tree_root p2p_ns_map; + struct device *p2p_client; + struct radix_tree_root p2p_ns_map; + + spinlock_t error_lock; + u64 err_counter; + struct nvme_error_slot slots[NVMET_ERROR_LOG_SLOTS]; }; struct nvmet_subsys { @@ -273,6 +284,7 @@ struct nvmet_fabrics_ops { void (*delete_ctrl)(struct nvmet_ctrl *ctrl); void (*disc_traddr)(struct nvmet_req *req, struct nvmet_port *port, char *traddr); + u16 (*install_queue)(struct nvmet_sq *nvme_sq); }; #define NVMET_MAX_INLINE_BIOVEC 8 @@ -308,17 +320,14 @@ struct nvmet_req { void (*execute)(struct nvmet_req *req); const struct nvmet_fabrics_ops *ops; - struct pci_dev *p2p_dev; - struct device *p2p_client; + struct pci_dev *p2p_dev; + struct device *p2p_client; + u16 error_loc; + u64 error_slba; }; extern struct workqueue_struct *buffered_io_wq; -static inline void nvmet_set_status(struct nvmet_req *req, u16 status) -{ - req->rsp->status = cpu_to_le16(status << 1); -} - static inline void nvmet_set_result(struct nvmet_req *req, u32 result) { 
req->rsp->result.u32 = cpu_to_le32(result); @@ -340,6 +349,27 @@ struct nvmet_async_event { u8 log_page; }; +static inline void nvmet_clear_aen_bit(struct nvmet_req *req, u32 bn) +{ + int rae = le32_to_cpu(req->cmd->common.cdw10) & 1 << 15; + + if (!rae) + clear_bit(bn, &req->sq->ctrl->aen_masked); +} + +static inline bool nvmet_aen_bit_disabled(struct nvmet_ctrl *ctrl, u32 bn) +{ + if (!(READ_ONCE(ctrl->aen_enabled) & (1 << bn))) + return true; + return test_and_set_bit(bn, &ctrl->aen_masked); +} + +void nvmet_get_feat_kato(struct nvmet_req *req); +void nvmet_get_feat_async_event(struct nvmet_req *req); +u16 nvmet_set_feat_kato(struct nvmet_req *req); +u16 nvmet_set_feat_async_event(struct nvmet_req *req, u32 mask); +void nvmet_execute_async_event(struct nvmet_req *req); + u16 nvmet_parse_connect_cmd(struct nvmet_req *req); u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req); u16 nvmet_file_parse_io_cmd(struct nvmet_req *req); @@ -355,6 +385,8 @@ void nvmet_req_complete(struct nvmet_req *req, u16 status); int nvmet_req_alloc_sgl(struct nvmet_req *req); void nvmet_req_free_sgl(struct nvmet_req *req); +void nvmet_execute_keep_alive(struct nvmet_req *req); + void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid, u16 size); void nvmet_sq_setup(struct nvmet_ctrl *ctrl, struct nvmet_sq *sq, u16 qid, @@ -395,7 +427,7 @@ int nvmet_enable_port(struct nvmet_port *port); void nvmet_disable_port(struct nvmet_port *port); void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port); -void nvmet_referral_disable(struct nvmet_port *port); +void nvmet_referral_disable(struct nvmet_port *parent, struct nvmet_port *port); u16 nvmet_copy_to_sgl(struct nvmet_req *req, off_t off, const void *buf, size_t len); @@ -405,6 +437,14 @@ u16 nvmet_zero_sgl(struct nvmet_req *req, off_t off, size_t len); u32 nvmet_get_log_page_len(struct nvme_command *cmd); +extern struct list_head *nvmet_ports; +void nvmet_port_disc_changed(struct nvmet_port *port, + struct nvmet_subsys *subsys); +void nvmet_subsys_disc_changed(struct nvmet_subsys *subsys, + struct nvmet_host *host); +void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type, + u8 event_info, u8 log_page); + #define NVMET_QUEUE_SIZE 1024 #define NVMET_NR_QUEUES 128 #define NVMET_MAX_CMD NVMET_QUEUE_SIZE @@ -425,7 +465,7 @@ u32 nvmet_get_log_page_len(struct nvme_command *cmd); #define NVMET_DEFAULT_ANA_GRPID 1 #define NVMET_KAS 10 -#define NVMET_DISC_KATO 120 +#define NVMET_DISC_KATO_MS 120000 int __init nvmet_init_configfs(void); void __exit nvmet_exit_configfs(void); @@ -434,15 +474,13 @@ int __init nvmet_init_discovery(void); void nvmet_exit_discovery(void); extern struct nvmet_subsys *nvmet_disc_subsys; -extern u64 nvmet_genctr; extern struct rw_semaphore nvmet_config_sem; extern u32 nvmet_ana_group_enabled[NVMET_MAX_ANAGRPS + 1]; extern u64 nvmet_ana_chgcnt; extern struct rw_semaphore nvmet_ana_sem; -bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys, - const char *hostnqn); +bool nvmet_host_allowed(struct nvmet_subsys *subsys, const char *hostnqn); int nvmet_bdev_ns_enable(struct nvmet_ns *ns); int nvmet_file_ns_enable(struct nvmet_ns *ns); @@ -457,4 +495,6 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req) return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) << req->ns->blksize_shift; } + +u16 errno_to_nvme_status(struct nvmet_req *req, int errno); #endif /* _NVMET_H */ diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c index 583086dd9cb9..a8d23eb80192 100644 --- 
a/drivers/nvme/target/rdma.c +++ b/drivers/nvme/target/rdma.c @@ -196,7 +196,7 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp) { unsigned long flags; - if (rsp->allocated) { + if (unlikely(rsp->allocated)) { kfree(rsp); return; } @@ -630,8 +630,11 @@ static u16 nvmet_rdma_map_sgl_inline(struct nvmet_rdma_rsp *rsp) u64 off = le64_to_cpu(sgl->addr); u32 len = le32_to_cpu(sgl->length); - if (!nvme_is_write(rsp->req.cmd)) + if (!nvme_is_write(rsp->req.cmd)) { + rsp->req.error_loc = + offsetof(struct nvme_common_command, opcode); return NVME_SC_INVALID_FIELD | NVME_SC_DNR; + } if (off + len > rsp->queue->dev->inline_data_size) { pr_err("invalid inline data offset!\n"); @@ -696,6 +699,8 @@ static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp) return nvmet_rdma_map_sgl_inline(rsp); default: pr_err("invalid SGL subtype: %#x\n", sgl->type); + rsp->req.error_loc = + offsetof(struct nvme_common_command, dptr); return NVME_SC_INVALID_FIELD | NVME_SC_DNR; } case NVME_KEY_SGL_FMT_DATA_DESC: @@ -706,10 +711,13 @@ static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp) return nvmet_rdma_map_sgl_keyed(rsp, sgl, false); default: pr_err("invalid SGL subtype: %#x\n", sgl->type); + rsp->req.error_loc = + offsetof(struct nvme_common_command, dptr); return NVME_SC_INVALID_FIELD | NVME_SC_DNR; } default: pr_err("invalid SGL type: %#x\n", sgl->type); + rsp->req.error_loc = offsetof(struct nvme_common_command, dptr); return NVME_SC_SGL_INVALID_TYPE | NVME_SC_DNR; } } diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c new file mode 100644 index 000000000000..44b37b202e39 --- /dev/null +++ b/drivers/nvme/target/tcp.c @@ -0,0 +1,1737 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NVMe over Fabrics TCP target. + * Copyright (c) 2018 Lightbits Labs. All rights reserved. 
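Everything this new transport exchanges is framed as an NVMe/TCP PDU: a common header carrying type, flags, hlen, pdo and plen, optionally followed by a CRC32C header digest in the four bytes after the header, then data and an optional data digest. That layout is why nvmet_tcp_verify_hdgst() further down hashes hlen bytes and compares the result with the word stored at pdu + hlen. A free-standing sketch of the check; the struct is a hypothetical mirror of the wire header and the crc32c() here is a placeholder, not the real Castagnoli CRC:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* hypothetical mirror of the common NVMe/TCP PDU header layout */
    struct pdu_hdr {
        uint8_t  type;
        uint8_t  flags;
        uint8_t  hlen;  /* header length in bytes */
        uint8_t  pdo;   /* PDU data offset */
        uint32_t plen;  /* total PDU length */
    };

    #define F_HDGST 0x1 /* stand-in for NVME_TCP_F_HDGST */

    /* placeholder only: stands in for a real CRC32C routine */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const unsigned char *p = buf;
        uint32_t h = 0;

        while (len--)
            h = h * 31 + *p++;
        return h;
    }

    /* the digest, when enabled, lives in the 4 bytes right after hlen */
    static int verify_hdgst(const void *pdu)
    {
        const struct pdu_hdr *hdr = pdu;
        uint32_t recv, calc;

        if (!(hdr->flags & F_HDGST))
            return -1; /* digest negotiated but flag missing */

        memcpy(&recv, (const char *)pdu + hdr->hlen, sizeof(recv));
        calc = crc32c(pdu, hdr->hlen);
        return recv == calc ? 0 : -1;
    }

    int main(void)
    {
        unsigned char buf[sizeof(struct pdu_hdr) + 4] = { 0 };
        struct pdu_hdr hdr = { .type = 4, /* arbitrary for the demo */
                               .flags = F_HDGST, .hlen = sizeof(hdr),
                               .plen = sizeof(buf) };
        uint32_t dgst;

        memcpy(buf, &hdr, sizeof(hdr));
        dgst = crc32c(buf, hdr.hlen);
        memcpy(buf + hdr.hlen, &dgst, sizeof(dgst));
        printf("hdgst check: %d\n", verify_hdgst(buf)); /* 0 == match */
        return 0;
    }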
+ */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "nvmet.h" + +#define NVMET_TCP_DEF_INLINE_DATA_SIZE (4 * PAGE_SIZE) + +#define NVMET_TCP_RECV_BUDGET 8 +#define NVMET_TCP_SEND_BUDGET 8 +#define NVMET_TCP_IO_WORK_BUDGET 64 + +enum nvmet_tcp_send_state { + NVMET_TCP_SEND_DATA_PDU, + NVMET_TCP_SEND_DATA, + NVMET_TCP_SEND_R2T, + NVMET_TCP_SEND_DDGST, + NVMET_TCP_SEND_RESPONSE +}; + +enum nvmet_tcp_recv_state { + NVMET_TCP_RECV_PDU, + NVMET_TCP_RECV_DATA, + NVMET_TCP_RECV_DDGST, + NVMET_TCP_RECV_ERR, +}; + +enum { + NVMET_TCP_F_INIT_FAILED = (1 << 0), +}; + +struct nvmet_tcp_cmd { + struct nvmet_tcp_queue *queue; + struct nvmet_req req; + + struct nvme_tcp_cmd_pdu *cmd_pdu; + struct nvme_tcp_rsp_pdu *rsp_pdu; + struct nvme_tcp_data_pdu *data_pdu; + struct nvme_tcp_r2t_pdu *r2t_pdu; + + u32 rbytes_done; + u32 wbytes_done; + + u32 pdu_len; + u32 pdu_recv; + int sg_idx; + int nr_mapped; + struct msghdr recv_msg; + struct kvec *iov; + u32 flags; + + struct list_head entry; + struct llist_node lentry; + + /* send state */ + u32 offset; + struct scatterlist *cur_sg; + enum nvmet_tcp_send_state state; + + __le32 exp_ddgst; + __le32 recv_ddgst; +}; + +enum nvmet_tcp_queue_state { + NVMET_TCP_Q_CONNECTING, + NVMET_TCP_Q_LIVE, + NVMET_TCP_Q_DISCONNECTING, +}; + +struct nvmet_tcp_queue { + struct socket *sock; + struct nvmet_tcp_port *port; + struct work_struct io_work; + int cpu; + struct nvmet_cq nvme_cq; + struct nvmet_sq nvme_sq; + + /* send state */ + struct nvmet_tcp_cmd *cmds; + unsigned int nr_cmds; + struct list_head free_list; + struct llist_head resp_list; + struct list_head resp_send_list; + int send_list_len; + struct nvmet_tcp_cmd *snd_cmd; + + /* recv state */ + int offset; + int left; + enum nvmet_tcp_recv_state rcv_state; + struct nvmet_tcp_cmd *cmd; + union nvme_tcp_pdu pdu; + + /* digest state */ + bool hdr_digest; + bool data_digest; + struct ahash_request *snd_hash; + struct ahash_request *rcv_hash; + + spinlock_t state_lock; + enum nvmet_tcp_queue_state state; + + struct sockaddr_storage sockaddr; + struct sockaddr_storage sockaddr_peer; + struct work_struct release_work; + + int idx; + struct list_head queue_list; + + struct nvmet_tcp_cmd connect; + + struct page_frag_cache pf_cache; + + void (*data_ready)(struct sock *); + void (*state_change)(struct sock *); + void (*write_space)(struct sock *); +}; + +struct nvmet_tcp_port { + struct socket *sock; + struct work_struct accept_work; + struct nvmet_port *nport; + struct sockaddr_storage addr; + int last_cpu; + void (*data_ready)(struct sock *); +}; + +static DEFINE_IDA(nvmet_tcp_queue_ida); +static LIST_HEAD(nvmet_tcp_queue_list); +static DEFINE_MUTEX(nvmet_tcp_queue_mutex); + +static struct workqueue_struct *nvmet_tcp_wq; +static struct nvmet_fabrics_ops nvmet_tcp_ops; +static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c); +static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd); + +static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue, + struct nvmet_tcp_cmd *cmd) +{ + return cmd - queue->cmds; +} + +static inline bool nvmet_tcp_has_data_in(struct nvmet_tcp_cmd *cmd) +{ + return nvme_is_write(cmd->req.cmd) && + cmd->rbytes_done < cmd->req.transfer_len; +} + +static inline bool nvmet_tcp_need_data_in(struct nvmet_tcp_cmd *cmd) +{ + return nvmet_tcp_has_data_in(cmd) && !cmd->req.rsp->status; +} + +static inline bool nvmet_tcp_need_data_out(struct nvmet_tcp_cmd *cmd) +{ + return 
!nvme_is_write(cmd->req.cmd) && + cmd->req.transfer_len > 0 && + !cmd->req.rsp->status; +} + +static inline bool nvmet_tcp_has_inline_data(struct nvmet_tcp_cmd *cmd) +{ + return nvme_is_write(cmd->req.cmd) && cmd->pdu_len && + !cmd->rbytes_done; +} + +static inline struct nvmet_tcp_cmd * +nvmet_tcp_get_cmd(struct nvmet_tcp_queue *queue) +{ + struct nvmet_tcp_cmd *cmd; + + cmd = list_first_entry_or_null(&queue->free_list, + struct nvmet_tcp_cmd, entry); + if (!cmd) + return NULL; + list_del_init(&cmd->entry); + + cmd->rbytes_done = cmd->wbytes_done = 0; + cmd->pdu_len = 0; + cmd->pdu_recv = 0; + cmd->iov = NULL; + cmd->flags = 0; + return cmd; +} + +static inline void nvmet_tcp_put_cmd(struct nvmet_tcp_cmd *cmd) +{ + if (unlikely(cmd == &cmd->queue->connect)) + return; + + list_add_tail(&cmd->entry, &cmd->queue->free_list); +} + +static inline u8 nvmet_tcp_hdgst_len(struct nvmet_tcp_queue *queue) +{ + return queue->hdr_digest ? NVME_TCP_DIGEST_LENGTH : 0; +} + +static inline u8 nvmet_tcp_ddgst_len(struct nvmet_tcp_queue *queue) +{ + return queue->data_digest ? NVME_TCP_DIGEST_LENGTH : 0; +} + +static inline void nvmet_tcp_hdgst(struct ahash_request *hash, + void *pdu, size_t len) +{ + struct scatterlist sg; + + sg_init_one(&sg, pdu, len); + ahash_request_set_crypt(hash, &sg, pdu + len, len); + crypto_ahash_digest(hash); +} + +static int nvmet_tcp_verify_hdgst(struct nvmet_tcp_queue *queue, + void *pdu, size_t len) +{ + struct nvme_tcp_hdr *hdr = pdu; + __le32 recv_digest; + __le32 exp_digest; + + if (unlikely(!(hdr->flags & NVME_TCP_F_HDGST))) { + pr_err("queue %d: header digest enabled but no header digest\n", + queue->idx); + return -EPROTO; + } + + recv_digest = *(__le32 *)(pdu + hdr->hlen); + nvmet_tcp_hdgst(queue->rcv_hash, pdu, len); + exp_digest = *(__le32 *)(pdu + hdr->hlen); + if (recv_digest != exp_digest) { + pr_err("queue %d: header digest error: recv %#x expected %#x\n", + queue->idx, le32_to_cpu(recv_digest), + le32_to_cpu(exp_digest)); + return -EPROTO; + } + + return 0; +} + +static int nvmet_tcp_check_ddgst(struct nvmet_tcp_queue *queue, void *pdu) +{ + struct nvme_tcp_hdr *hdr = pdu; + u8 digest_len = nvmet_tcp_hdgst_len(queue); + u32 len; + + len = le32_to_cpu(hdr->plen) - hdr->hlen - + (hdr->flags & NVME_TCP_F_HDGST ? 
digest_len : 0); + + if (unlikely(len && !(hdr->flags & NVME_TCP_F_DDGST))) { + pr_err("queue %d: data digest flag is cleared\n", queue->idx); + return -EPROTO; + } + + return 0; +} + +static void nvmet_tcp_unmap_pdu_iovec(struct nvmet_tcp_cmd *cmd) +{ + struct scatterlist *sg; + int i; + + sg = &cmd->req.sg[cmd->sg_idx]; + + for (i = 0; i < cmd->nr_mapped; i++) + kunmap(sg_page(&sg[i])); +} + +static void nvmet_tcp_map_pdu_iovec(struct nvmet_tcp_cmd *cmd) +{ + struct kvec *iov = cmd->iov; + struct scatterlist *sg; + u32 length, offset, sg_offset; + + length = cmd->pdu_len; + cmd->nr_mapped = DIV_ROUND_UP(length, PAGE_SIZE); + offset = cmd->rbytes_done; + cmd->sg_idx = DIV_ROUND_UP(offset, PAGE_SIZE); + sg_offset = offset % PAGE_SIZE; + sg = &cmd->req.sg[cmd->sg_idx]; + + while (length) { + u32 iov_len = min_t(u32, length, sg->length - sg_offset); + + iov->iov_base = kmap(sg_page(sg)) + sg->offset + sg_offset; + iov->iov_len = iov_len; + + length -= iov_len; + sg = sg_next(sg); + iov++; + } + + iov_iter_kvec(&cmd->recv_msg.msg_iter, READ, cmd->iov, + cmd->nr_mapped, cmd->pdu_len); +} + +static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue) +{ + queue->rcv_state = NVMET_TCP_RECV_ERR; + if (queue->nvme_sq.ctrl) + nvmet_ctrl_fatal_error(queue->nvme_sq.ctrl); + else + kernel_sock_shutdown(queue->sock, SHUT_RDWR); +} + +static int nvmet_tcp_map_data(struct nvmet_tcp_cmd *cmd) +{ + struct nvme_sgl_desc *sgl = &cmd->req.cmd->common.dptr.sgl; + u32 len = le32_to_cpu(sgl->length); + + if (!cmd->req.data_len) + return 0; + + if (sgl->type == ((NVME_SGL_FMT_DATA_DESC << 4) | + NVME_SGL_FMT_OFFSET)) { + if (!nvme_is_write(cmd->req.cmd)) + return NVME_SC_INVALID_FIELD | NVME_SC_DNR; + + if (len > cmd->req.port->inline_data_size) + return NVME_SC_SGL_INVALID_OFFSET | NVME_SC_DNR; + cmd->pdu_len = len; + } + cmd->req.transfer_len += len; + + cmd->req.sg = sgl_alloc(len, GFP_KERNEL, &cmd->req.sg_cnt); + if (!cmd->req.sg) + return NVME_SC_INTERNAL; + cmd->cur_sg = cmd->req.sg; + + if (nvmet_tcp_has_data_in(cmd)) { + cmd->iov = kmalloc_array(cmd->req.sg_cnt, + sizeof(*cmd->iov), GFP_KERNEL); + if (!cmd->iov) + goto err; + } + + return 0; +err: + sgl_free(cmd->req.sg); + return NVME_SC_INTERNAL; +} + +static void nvmet_tcp_ddgst(struct ahash_request *hash, + struct nvmet_tcp_cmd *cmd) +{ + ahash_request_set_crypt(hash, cmd->req.sg, + (void *)&cmd->exp_ddgst, cmd->req.transfer_len); + crypto_ahash_digest(hash); +} + +static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd) +{ + struct nvme_tcp_data_pdu *pdu = cmd->data_pdu; + struct nvmet_tcp_queue *queue = cmd->queue; + u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue); + u8 ddgst = nvmet_tcp_ddgst_len(cmd->queue); + + cmd->offset = 0; + cmd->state = NVMET_TCP_SEND_DATA_PDU; + + pdu->hdr.type = nvme_tcp_c2h_data; + pdu->hdr.flags = NVME_TCP_F_DATA_LAST; + pdu->hdr.hlen = sizeof(*pdu); + pdu->hdr.pdo = pdu->hdr.hlen + hdgst; + pdu->hdr.plen = + cpu_to_le32(pdu->hdr.hlen + hdgst + + cmd->req.transfer_len + ddgst); + pdu->command_id = cmd->req.rsp->command_id; + pdu->data_length = cpu_to_le32(cmd->req.transfer_len); + pdu->data_offset = cpu_to_le32(cmd->wbytes_done); + + if (queue->data_digest) { + pdu->hdr.flags |= NVME_TCP_F_DDGST; + nvmet_tcp_ddgst(queue->snd_hash, cmd); + } + + if (cmd->queue->hdr_digest) { + pdu->hdr.flags |= NVME_TCP_F_HDGST; + nvmet_tcp_hdgst(queue->snd_hash, pdu, sizeof(*pdu)); + } +} + +static void nvmet_setup_r2t_pdu(struct nvmet_tcp_cmd *cmd) +{ + struct nvme_tcp_r2t_pdu *pdu = cmd->r2t_pdu; + struct nvmet_tcp_queue *queue 
= cmd->queue; + u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue); + + cmd->offset = 0; + cmd->state = NVMET_TCP_SEND_R2T; + + pdu->hdr.type = nvme_tcp_r2t; + pdu->hdr.flags = 0; + pdu->hdr.hlen = sizeof(*pdu); + pdu->hdr.pdo = 0; + pdu->hdr.plen = cpu_to_le32(pdu->hdr.hlen + hdgst); + + pdu->command_id = cmd->req.cmd->common.command_id; + pdu->ttag = nvmet_tcp_cmd_tag(cmd->queue, cmd); + pdu->r2t_length = cpu_to_le32(cmd->req.transfer_len - cmd->rbytes_done); + pdu->r2t_offset = cpu_to_le32(cmd->rbytes_done); + if (cmd->queue->hdr_digest) { + pdu->hdr.flags |= NVME_TCP_F_HDGST; + nvmet_tcp_hdgst(queue->snd_hash, pdu, sizeof(*pdu)); + } +} + +static void nvmet_setup_response_pdu(struct nvmet_tcp_cmd *cmd) +{ + struct nvme_tcp_rsp_pdu *pdu = cmd->rsp_pdu; + struct nvmet_tcp_queue *queue = cmd->queue; + u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue); + + cmd->offset = 0; + cmd->state = NVMET_TCP_SEND_RESPONSE; + + pdu->hdr.type = nvme_tcp_rsp; + pdu->hdr.flags = 0; + pdu->hdr.hlen = sizeof(*pdu); + pdu->hdr.pdo = 0; + pdu->hdr.plen = cpu_to_le32(pdu->hdr.hlen + hdgst); + if (cmd->queue->hdr_digest) { + pdu->hdr.flags |= NVME_TCP_F_HDGST; + nvmet_tcp_hdgst(queue->snd_hash, pdu, sizeof(*pdu)); + } +} + +static void nvmet_tcp_process_resp_list(struct nvmet_tcp_queue *queue) +{ + struct llist_node *node; + + node = llist_del_all(&queue->resp_list); + if (!node) + return; + + while (node) { + struct nvmet_tcp_cmd *cmd = llist_entry(node, + struct nvmet_tcp_cmd, lentry); + + list_add(&cmd->entry, &queue->resp_send_list); + node = node->next; + queue->send_list_len++; + } +} + +static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue) +{ + queue->snd_cmd = list_first_entry_or_null(&queue->resp_send_list, + struct nvmet_tcp_cmd, entry); + if (!queue->snd_cmd) { + nvmet_tcp_process_resp_list(queue); + queue->snd_cmd = + list_first_entry_or_null(&queue->resp_send_list, + struct nvmet_tcp_cmd, entry); + if (unlikely(!queue->snd_cmd)) + return NULL; + } + + list_del_init(&queue->snd_cmd->entry); + queue->send_list_len--; + + if (nvmet_tcp_need_data_out(queue->snd_cmd)) + nvmet_setup_c2h_data_pdu(queue->snd_cmd); + else if (nvmet_tcp_need_data_in(queue->snd_cmd)) + nvmet_setup_r2t_pdu(queue->snd_cmd); + else + nvmet_setup_response_pdu(queue->snd_cmd); + + return queue->snd_cmd; +} + +static void nvmet_tcp_queue_response(struct nvmet_req *req) +{ + struct nvmet_tcp_cmd *cmd = + container_of(req, struct nvmet_tcp_cmd, req); + struct nvmet_tcp_queue *queue = cmd->queue; + + llist_add(&cmd->lentry, &queue->resp_list); + queue_work_on(cmd->queue->cpu, nvmet_tcp_wq, &cmd->queue->io_work); +} + +static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd) +{ + u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue); + int left = sizeof(*cmd->data_pdu) - cmd->offset + hdgst; + int ret; + + ret = kernel_sendpage(cmd->queue->sock, virt_to_page(cmd->data_pdu), + offset_in_page(cmd->data_pdu) + cmd->offset, + left, MSG_DONTWAIT | MSG_MORE); + if (ret <= 0) + return ret; + + cmd->offset += ret; + left -= ret; + + if (left) + return -EAGAIN; + + cmd->state = NVMET_TCP_SEND_DATA; + cmd->offset = 0; + return 1; +} + +static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd) +{ + struct nvmet_tcp_queue *queue = cmd->queue; + int ret; + + while (cmd->cur_sg) { + struct page *page = sg_page(cmd->cur_sg); + u32 left = cmd->cur_sg->length - cmd->offset; + + ret = kernel_sendpage(cmd->queue->sock, page, cmd->offset, + left, MSG_DONTWAIT | MSG_MORE); + if (ret <= 0) + return ret; + + cmd->offset += ret; + cmd->wbytes_done += 
ret; + + /* Done with sg? */ + if (cmd->offset == cmd->cur_sg->length) { + cmd->cur_sg = sg_next(cmd->cur_sg); + cmd->offset = 0; + } + } + + if (queue->data_digest) { + cmd->state = NVMET_TCP_SEND_DDGST; + cmd->offset = 0; + } else { + nvmet_setup_response_pdu(cmd); + } + return 1; + +} + +static int nvmet_try_send_response(struct nvmet_tcp_cmd *cmd, + bool last_in_batch) +{ + u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue); + int left = sizeof(*cmd->rsp_pdu) - cmd->offset + hdgst; + int flags = MSG_DONTWAIT; + int ret; + + if (!last_in_batch && cmd->queue->send_list_len) + flags |= MSG_MORE; + else + flags |= MSG_EOR; + + ret = kernel_sendpage(cmd->queue->sock, virt_to_page(cmd->rsp_pdu), + offset_in_page(cmd->rsp_pdu) + cmd->offset, left, flags); + if (ret <= 0) + return ret; + cmd->offset += ret; + left -= ret; + + if (left) + return -EAGAIN; + + kfree(cmd->iov); + sgl_free(cmd->req.sg); + cmd->queue->snd_cmd = NULL; + nvmet_tcp_put_cmd(cmd); + return 1; +} + +static int nvmet_try_send_r2t(struct nvmet_tcp_cmd *cmd, bool last_in_batch) +{ + u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue); + int left = sizeof(*cmd->r2t_pdu) - cmd->offset + hdgst; + int flags = MSG_DONTWAIT; + int ret; + + if (!last_in_batch && cmd->queue->send_list_len) + flags |= MSG_MORE; + else + flags |= MSG_EOR; + + ret = kernel_sendpage(cmd->queue->sock, virt_to_page(cmd->r2t_pdu), + offset_in_page(cmd->r2t_pdu) + cmd->offset, left, flags); + if (ret <= 0) + return ret; + cmd->offset += ret; + left -= ret; + + if (left) + return -EAGAIN; + + cmd->queue->snd_cmd = NULL; + return 1; +} + +static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd) +{ + struct nvmet_tcp_queue *queue = cmd->queue; + struct msghdr msg = { .msg_flags = MSG_DONTWAIT }; + struct kvec iov = { + .iov_base = &cmd->exp_ddgst + cmd->offset, + .iov_len = NVME_TCP_DIGEST_LENGTH - cmd->offset + }; + int ret; + + ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len); + if (unlikely(ret <= 0)) + return ret; + + cmd->offset += ret; + nvmet_setup_response_pdu(cmd); + return 1; +} + +static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue, + bool last_in_batch) +{ + struct nvmet_tcp_cmd *cmd = queue->snd_cmd; + int ret = 0; + + if (!cmd || queue->state == NVMET_TCP_Q_DISCONNECTING) { + cmd = nvmet_tcp_fetch_cmd(queue); + if (unlikely(!cmd)) + return 0; + } + + if (cmd->state == NVMET_TCP_SEND_DATA_PDU) { + ret = nvmet_try_send_data_pdu(cmd); + if (ret <= 0) + goto done_send; + } + + if (cmd->state == NVMET_TCP_SEND_DATA) { + ret = nvmet_try_send_data(cmd); + if (ret <= 0) + goto done_send; + } + + if (cmd->state == NVMET_TCP_SEND_DDGST) { + ret = nvmet_try_send_ddgst(cmd); + if (ret <= 0) + goto done_send; + } + + if (cmd->state == NVMET_TCP_SEND_R2T) { + ret = nvmet_try_send_r2t(cmd, last_in_batch); + if (ret <= 0) + goto done_send; + } + + if (cmd->state == NVMET_TCP_SEND_RESPONSE) + ret = nvmet_try_send_response(cmd, last_in_batch); + +done_send: + if (ret < 0) { + if (ret == -EAGAIN) + return 0; + return ret; + } + + return 1; +} + +static int nvmet_tcp_try_send(struct nvmet_tcp_queue *queue, + int budget, int *sends) +{ + int i, ret = 0; + + for (i = 0; i < budget; i++) { + ret = nvmet_tcp_try_send_one(queue, i == budget - 1); + if (ret <= 0) + break; + (*sends)++; + } + + return ret; +} + +static void nvmet_prepare_receive_pdu(struct nvmet_tcp_queue *queue) +{ + queue->offset = 0; + queue->left = sizeof(struct nvme_tcp_hdr); + queue->cmd = NULL; + queue->rcv_state = NVMET_TCP_RECV_PDU; +} + +static void nvmet_tcp_free_crypto(struct 
nvmet_tcp_queue *queue) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(queue->rcv_hash); + + ahash_request_free(queue->rcv_hash); + ahash_request_free(queue->snd_hash); + crypto_free_ahash(tfm); +} + +static int nvmet_tcp_alloc_crypto(struct nvmet_tcp_queue *queue) +{ + struct crypto_ahash *tfm; + + tfm = crypto_alloc_ahash("crc32c", 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(tfm)) + return PTR_ERR(tfm); + + queue->snd_hash = ahash_request_alloc(tfm, GFP_KERNEL); + if (!queue->snd_hash) + goto free_tfm; + ahash_request_set_callback(queue->snd_hash, 0, NULL, NULL); + + queue->rcv_hash = ahash_request_alloc(tfm, GFP_KERNEL); + if (!queue->rcv_hash) + goto free_snd_hash; + ahash_request_set_callback(queue->rcv_hash, 0, NULL, NULL); + + return 0; +free_snd_hash: + ahash_request_free(queue->snd_hash); +free_tfm: + crypto_free_ahash(tfm); + return -ENOMEM; +} + + +static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue) +{ + struct nvme_tcp_icreq_pdu *icreq = &queue->pdu.icreq; + struct nvme_tcp_icresp_pdu *icresp = &queue->pdu.icresp; + struct msghdr msg = {}; + struct kvec iov; + int ret; + + if (le32_to_cpu(icreq->hdr.plen) != sizeof(struct nvme_tcp_icreq_pdu)) { + pr_err("bad nvme-tcp pdu length (%d)\n", + le32_to_cpu(icreq->hdr.plen)); + nvmet_tcp_fatal_error(queue); + } + + if (icreq->pfv != NVME_TCP_PFV_1_0) { + pr_err("queue %d: bad pfv %d\n", queue->idx, icreq->pfv); + return -EPROTO; + } + + if (icreq->hpda != 0) { + pr_err("queue %d: unsupported hpda %d\n", queue->idx, + icreq->hpda); + return -EPROTO; + } + + if (icreq->maxr2t != 0) { + pr_err("queue %d: unsupported maxr2t %d\n", queue->idx, + le32_to_cpu(icreq->maxr2t) + 1); + return -EPROTO; + } + + queue->hdr_digest = !!(icreq->digest & NVME_TCP_HDR_DIGEST_ENABLE); + queue->data_digest = !!(icreq->digest & NVME_TCP_DATA_DIGEST_ENABLE); + if (queue->hdr_digest || queue->data_digest) { + ret = nvmet_tcp_alloc_crypto(queue); + if (ret) + return ret; + } + + memset(icresp, 0, sizeof(*icresp)); + icresp->hdr.type = nvme_tcp_icresp; + icresp->hdr.hlen = sizeof(*icresp); + icresp->hdr.pdo = 0; + icresp->hdr.plen = cpu_to_le32(icresp->hdr.hlen); + icresp->pfv = cpu_to_le16(NVME_TCP_PFV_1_0); + icresp->maxdata = cpu_to_le32(0xffff); /* FIXME: support r2t */ + icresp->cpda = 0; + if (queue->hdr_digest) + icresp->digest |= NVME_TCP_HDR_DIGEST_ENABLE; + if (queue->data_digest) + icresp->digest |= NVME_TCP_DATA_DIGEST_ENABLE; + + iov.iov_base = icresp; + iov.iov_len = sizeof(*icresp); + ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len); + if (ret < 0) + goto free_crypto; + + queue->state = NVMET_TCP_Q_LIVE; + nvmet_prepare_receive_pdu(queue); + return 0; +free_crypto: + if (queue->hdr_digest || queue->data_digest) + nvmet_tcp_free_crypto(queue); + return ret; +} + +static void nvmet_tcp_handle_req_failure(struct nvmet_tcp_queue *queue, + struct nvmet_tcp_cmd *cmd, struct nvmet_req *req) +{ + int ret; + + /* recover the expected data transfer length */ + req->data_len = le32_to_cpu(req->cmd->common.dptr.sgl.length); + + if (!nvme_is_write(cmd->req.cmd) || + req->data_len > cmd->req.port->inline_data_size) { + nvmet_prepare_receive_pdu(queue); + return; + } + + ret = nvmet_tcp_map_data(cmd); + if (unlikely(ret)) { + pr_err("queue %d: failed to map data\n", queue->idx); + nvmet_tcp_fatal_error(queue); + return; + } + + queue->rcv_state = NVMET_TCP_RECV_DATA; + nvmet_tcp_map_pdu_iovec(cmd); + cmd->flags |= NVMET_TCP_F_INIT_FAILED; +} + +static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue) +{ + struct 
nvme_tcp_data_pdu *data = &queue->pdu.data; + struct nvmet_tcp_cmd *cmd; + + cmd = &queue->cmds[data->ttag]; + + if (le32_to_cpu(data->data_offset) != cmd->rbytes_done) { + pr_err("ttag %u unexpected data offset %u (expected %u)\n", + data->ttag, le32_to_cpu(data->data_offset), + cmd->rbytes_done); + /* FIXME: use path and transport errors */ + nvmet_req_complete(&cmd->req, + NVME_SC_INVALID_FIELD | NVME_SC_DNR); + return -EPROTO; + } + + cmd->pdu_len = le32_to_cpu(data->data_length); + cmd->pdu_recv = 0; + nvmet_tcp_map_pdu_iovec(cmd); + queue->cmd = cmd; + queue->rcv_state = NVMET_TCP_RECV_DATA; + + return 0; +} + +static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue) +{ + struct nvme_tcp_hdr *hdr = &queue->pdu.cmd.hdr; + struct nvme_command *nvme_cmd = &queue->pdu.cmd.cmd; + struct nvmet_req *req; + int ret; + + if (unlikely(queue->state == NVMET_TCP_Q_CONNECTING)) { + if (hdr->type != nvme_tcp_icreq) { + pr_err("unexpected pdu type (%d) before icreq\n", + hdr->type); + nvmet_tcp_fatal_error(queue); + return -EPROTO; + } + return nvmet_tcp_handle_icreq(queue); + } + + if (hdr->type == nvme_tcp_h2c_data) { + ret = nvmet_tcp_handle_h2c_data_pdu(queue); + if (unlikely(ret)) + return ret; + return 0; + } + + queue->cmd = nvmet_tcp_get_cmd(queue); + if (unlikely(!queue->cmd)) { + /* This should never happen */ + pr_err("queue %d: out of commands (%d) send_list_len: %d, opcode: %d", + queue->idx, queue->nr_cmds, queue->send_list_len, + nvme_cmd->common.opcode); + nvmet_tcp_fatal_error(queue); + return -ENOMEM; + } + + req = &queue->cmd->req; + memcpy(req->cmd, nvme_cmd, sizeof(*nvme_cmd)); + + if (unlikely(!nvmet_req_init(req, &queue->nvme_cq, + &queue->nvme_sq, &nvmet_tcp_ops))) { + pr_err("failed cmd %p id %d opcode %d, data_len: %d\n", + req->cmd, req->cmd->common.command_id, + req->cmd->common.opcode, + le32_to_cpu(req->cmd->common.dptr.sgl.length)); + + nvmet_tcp_handle_req_failure(queue, queue->cmd, req); + return -EAGAIN; + } + + ret = nvmet_tcp_map_data(queue->cmd); + if (unlikely(ret)) { + pr_err("queue %d: failed to map data\n", queue->idx); + if (nvmet_tcp_has_inline_data(queue->cmd)) + nvmet_tcp_fatal_error(queue); + else + nvmet_req_complete(req, ret); + ret = -EAGAIN; + goto out; + } + + if (nvmet_tcp_need_data_in(queue->cmd)) { + if (nvmet_tcp_has_inline_data(queue->cmd)) { + queue->rcv_state = NVMET_TCP_RECV_DATA; + nvmet_tcp_map_pdu_iovec(queue->cmd); + return 0; + } + /* send back R2T */ + nvmet_tcp_queue_response(&queue->cmd->req); + goto out; + } + + nvmet_req_execute(&queue->cmd->req); +out: + nvmet_prepare_receive_pdu(queue); + return ret; +} + +static const u8 nvme_tcp_pdu_sizes[] = { + [nvme_tcp_icreq] = sizeof(struct nvme_tcp_icreq_pdu), + [nvme_tcp_cmd] = sizeof(struct nvme_tcp_cmd_pdu), + [nvme_tcp_h2c_data] = sizeof(struct nvme_tcp_data_pdu), +}; + +static inline u8 nvmet_tcp_pdu_size(u8 type) +{ + size_t idx = type; + + return (idx < ARRAY_SIZE(nvme_tcp_pdu_sizes) && + nvme_tcp_pdu_sizes[idx]) ? 
+ nvme_tcp_pdu_sizes[idx] : 0; +} + +static inline bool nvmet_tcp_pdu_valid(u8 type) +{ + switch (type) { + case nvme_tcp_icreq: + case nvme_tcp_cmd: + case nvme_tcp_h2c_data: + /* fallthru */ + return true; + } + + return false; +} + +static int nvmet_tcp_try_recv_pdu(struct nvmet_tcp_queue *queue) +{ + struct nvme_tcp_hdr *hdr = &queue->pdu.cmd.hdr; + int len; + struct kvec iov; + struct msghdr msg = { .msg_flags = MSG_DONTWAIT }; + +recv: + iov.iov_base = (void *)&queue->pdu + queue->offset; + iov.iov_len = queue->left; + len = kernel_recvmsg(queue->sock, &msg, &iov, 1, + iov.iov_len, msg.msg_flags); + if (unlikely(len < 0)) + return len; + + queue->offset += len; + queue->left -= len; + if (queue->left) + return -EAGAIN; + + if (queue->offset == sizeof(struct nvme_tcp_hdr)) { + u8 hdgst = nvmet_tcp_hdgst_len(queue); + + if (unlikely(!nvmet_tcp_pdu_valid(hdr->type))) { + pr_err("unexpected pdu type %d\n", hdr->type); + nvmet_tcp_fatal_error(queue); + return -EIO; + } + + if (unlikely(hdr->hlen != nvmet_tcp_pdu_size(hdr->type))) { + pr_err("pdu %d bad hlen %d\n", hdr->type, hdr->hlen); + return -EIO; + } + + queue->left = hdr->hlen - queue->offset + hdgst; + goto recv; + } + + if (queue->hdr_digest && + nvmet_tcp_verify_hdgst(queue, &queue->pdu, queue->offset)) { + nvmet_tcp_fatal_error(queue); /* fatal */ + return -EPROTO; + } + + if (queue->data_digest && + nvmet_tcp_check_ddgst(queue, &queue->pdu)) { + nvmet_tcp_fatal_error(queue); /* fatal */ + return -EPROTO; + } + + return nvmet_tcp_done_recv_pdu(queue); +} + +static void nvmet_tcp_prep_recv_ddgst(struct nvmet_tcp_cmd *cmd) +{ + struct nvmet_tcp_queue *queue = cmd->queue; + + nvmet_tcp_ddgst(queue->rcv_hash, cmd); + queue->offset = 0; + queue->left = NVME_TCP_DIGEST_LENGTH; + queue->rcv_state = NVMET_TCP_RECV_DDGST; +} + +static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue) +{ + struct nvmet_tcp_cmd *cmd = queue->cmd; + int ret; + + while (msg_data_left(&cmd->recv_msg)) { + ret = sock_recvmsg(cmd->queue->sock, &cmd->recv_msg, + cmd->recv_msg.msg_flags); + if (ret <= 0) + return ret; + + cmd->pdu_recv += ret; + cmd->rbytes_done += ret; + } + + nvmet_tcp_unmap_pdu_iovec(cmd); + + if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) && + cmd->rbytes_done == cmd->req.transfer_len) { + if (queue->data_digest) { + nvmet_tcp_prep_recv_ddgst(cmd); + return 0; + } + nvmet_req_execute(&cmd->req); + } + + nvmet_prepare_receive_pdu(queue); + return 0; +} + +static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue) +{ + struct nvmet_tcp_cmd *cmd = queue->cmd; + int ret; + struct msghdr msg = { .msg_flags = MSG_DONTWAIT }; + struct kvec iov = { + .iov_base = (void *)&cmd->recv_ddgst + queue->offset, + .iov_len = queue->left + }; + + ret = kernel_recvmsg(queue->sock, &msg, &iov, 1, + iov.iov_len, msg.msg_flags); + if (unlikely(ret < 0)) + return ret; + + queue->offset += ret; + queue->left -= ret; + if (queue->left) + return -EAGAIN; + + if (queue->data_digest && cmd->exp_ddgst != cmd->recv_ddgst) { + pr_err("queue %d: cmd %d pdu (%d) data digest error: recv %#x expected %#x\n", + queue->idx, cmd->req.cmd->common.command_id, + queue->pdu.cmd.hdr.type, le32_to_cpu(cmd->recv_ddgst), + le32_to_cpu(cmd->exp_ddgst)); + nvmet_tcp_finish_cmd(cmd); + nvmet_tcp_fatal_error(queue); + ret = -EPROTO; + goto out; + } + + if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) && + cmd->rbytes_done == cmd->req.transfer_len) + nvmet_req_execute(&cmd->req); + ret = 0; +out: + nvmet_prepare_receive_pdu(queue); + return ret; +} + +static int 
nvmet_tcp_try_recv_one(struct nvmet_tcp_queue *queue) +{ + int result; + + if (unlikely(queue->rcv_state == NVMET_TCP_RECV_ERR)) + return 0; + + if (queue->rcv_state == NVMET_TCP_RECV_PDU) { + result = nvmet_tcp_try_recv_pdu(queue); + if (result != 0) + goto done_recv; + } + + if (queue->rcv_state == NVMET_TCP_RECV_DATA) { + result = nvmet_tcp_try_recv_data(queue); + if (result != 0) + goto done_recv; + } + + if (queue->rcv_state == NVMET_TCP_RECV_DDGST) { + result = nvmet_tcp_try_recv_ddgst(queue); + if (result != 0) + goto done_recv; + } + +done_recv: + if (result < 0) { + if (result == -EAGAIN) + return 0; + return result; + } + return 1; +} + +static int nvmet_tcp_try_recv(struct nvmet_tcp_queue *queue, + int budget, int *recvs) +{ + int i, ret = 0; + + for (i = 0; i < budget; i++) { + ret = nvmet_tcp_try_recv_one(queue); + if (ret <= 0) + break; + (*recvs)++; + } + + return ret; +} + +static void nvmet_tcp_schedule_release_queue(struct nvmet_tcp_queue *queue) +{ + spin_lock(&queue->state_lock); + if (queue->state != NVMET_TCP_Q_DISCONNECTING) { + queue->state = NVMET_TCP_Q_DISCONNECTING; + schedule_work(&queue->release_work); + } + spin_unlock(&queue->state_lock); +} + +static void nvmet_tcp_io_work(struct work_struct *w) +{ + struct nvmet_tcp_queue *queue = + container_of(w, struct nvmet_tcp_queue, io_work); + bool pending; + int ret, ops = 0; + + do { + pending = false; + + ret = nvmet_tcp_try_recv(queue, NVMET_TCP_RECV_BUDGET, &ops); + if (ret > 0) { + pending = true; + } else if (ret < 0) { + if (ret == -EPIPE || ret == -ECONNRESET) + kernel_sock_shutdown(queue->sock, SHUT_RDWR); + else + nvmet_tcp_fatal_error(queue); + return; + } + + ret = nvmet_tcp_try_send(queue, NVMET_TCP_SEND_BUDGET, &ops); + if (ret > 0) { + /* transmitted message/data */ + pending = true; + } else if (ret < 0) { + if (ret == -EPIPE || ret == -ECONNRESET) + kernel_sock_shutdown(queue->sock, SHUT_RDWR); + else + nvmet_tcp_fatal_error(queue); + return; + } + + } while (pending && ops < NVMET_TCP_IO_WORK_BUDGET); + + /* + * We exhausted our budget, requeue ourselves + */ + if (pending) + queue_work_on(queue->cpu, nvmet_tcp_wq, &queue->io_work); +} + +static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue, + struct nvmet_tcp_cmd *c) +{ + u8 hdgst = nvmet_tcp_hdgst_len(queue); + + c->queue = queue; + c->req.port = queue->port->nport; + + c->cmd_pdu = page_frag_alloc(&queue->pf_cache, + sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); + if (!c->cmd_pdu) + return -ENOMEM; + c->req.cmd = &c->cmd_pdu->cmd; + + c->rsp_pdu = page_frag_alloc(&queue->pf_cache, + sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); + if (!c->rsp_pdu) + goto out_free_cmd; + c->req.rsp = &c->rsp_pdu->cqe; + + c->data_pdu = page_frag_alloc(&queue->pf_cache, + sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); + if (!c->data_pdu) + goto out_free_rsp; + + c->r2t_pdu = page_frag_alloc(&queue->pf_cache, + sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); + if (!c->r2t_pdu) + goto out_free_data; + + c->recv_msg.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL; + + list_add_tail(&c->entry, &queue->free_list); + + return 0; +out_free_data: + page_frag_free(c->data_pdu); +out_free_rsp: + page_frag_free(c->rsp_pdu); +out_free_cmd: + page_frag_free(c->cmd_pdu); + return -ENOMEM; +} + +static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c) +{ + page_frag_free(c->r2t_pdu); + page_frag_free(c->data_pdu); + page_frag_free(c->rsp_pdu); + page_frag_free(c->cmd_pdu); +} + +static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue) +{ + 
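/* Called from nvmet_tcp_install_queue() with queue->nr_cmds set + * to twice the SQ size. + */ + 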
struct nvmet_tcp_cmd *cmds; + int i, ret = -EINVAL, nr_cmds = queue->nr_cmds; + + cmds = kcalloc(nr_cmds, sizeof(struct nvmet_tcp_cmd), GFP_KERNEL); + if (!cmds) + goto out; + + for (i = 0; i < nr_cmds; i++) { + ret = nvmet_tcp_alloc_cmd(queue, cmds + i); + if (ret) + goto out_free; + } + + queue->cmds = cmds; + + return 0; +out_free: + while (--i >= 0) + nvmet_tcp_free_cmd(cmds + i); + kfree(cmds); +out: + return ret; +} + +static void nvmet_tcp_free_cmds(struct nvmet_tcp_queue *queue) +{ + struct nvmet_tcp_cmd *cmds = queue->cmds; + int i; + + for (i = 0; i < queue->nr_cmds; i++) + nvmet_tcp_free_cmd(cmds + i); + + nvmet_tcp_free_cmd(&queue->connect); + kfree(cmds); +} + +static void nvmet_tcp_restore_socket_callbacks(struct nvmet_tcp_queue *queue) +{ + struct socket *sock = queue->sock; + + write_lock_bh(&sock->sk->sk_callback_lock); + sock->sk->sk_data_ready = queue->data_ready; + sock->sk->sk_state_change = queue->state_change; + sock->sk->sk_write_space = queue->write_space; + sock->sk->sk_user_data = NULL; + write_unlock_bh(&sock->sk->sk_callback_lock); +} + +static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd) +{ + nvmet_req_uninit(&cmd->req); + nvmet_tcp_unmap_pdu_iovec(cmd); + sgl_free(cmd->req.sg); +} + +static void nvmet_tcp_uninit_data_in_cmds(struct nvmet_tcp_queue *queue) +{ + struct nvmet_tcp_cmd *cmd = queue->cmds; + int i; + + for (i = 0; i < queue->nr_cmds; i++, cmd++) { + if (nvmet_tcp_need_data_in(cmd)) + nvmet_tcp_finish_cmd(cmd); + } + + if (!queue->nr_cmds && nvmet_tcp_need_data_in(&queue->connect)) { + /* failed in connect */ + nvmet_tcp_finish_cmd(&queue->connect); + } +} + +static void nvmet_tcp_release_queue_work(struct work_struct *w) +{ + struct nvmet_tcp_queue *queue = + container_of(w, struct nvmet_tcp_queue, release_work); + + mutex_lock(&nvmet_tcp_queue_mutex); + list_del_init(&queue->queue_list); + mutex_unlock(&nvmet_tcp_queue_mutex); + + nvmet_tcp_restore_socket_callbacks(queue); + flush_work(&queue->io_work); + + nvmet_tcp_uninit_data_in_cmds(queue); + nvmet_sq_destroy(&queue->nvme_sq); + cancel_work_sync(&queue->io_work); + sock_release(queue->sock); + nvmet_tcp_free_cmds(queue); + if (queue->hdr_digest || queue->data_digest) + nvmet_tcp_free_crypto(queue); + ida_simple_remove(&nvmet_tcp_queue_ida, queue->idx); + + kfree(queue); +} + +static void nvmet_tcp_data_ready(struct sock *sk) +{ + struct nvmet_tcp_queue *queue; + + read_lock_bh(&sk->sk_callback_lock); + queue = sk->sk_user_data; + if (likely(queue)) + queue_work_on(queue->cpu, nvmet_tcp_wq, &queue->io_work); + read_unlock_bh(&sk->sk_callback_lock); +} + +static void nvmet_tcp_write_space(struct sock *sk) +{ + struct nvmet_tcp_queue *queue; + + read_lock_bh(&sk->sk_callback_lock); + queue = sk->sk_user_data; + if (unlikely(!queue)) + goto out; + + if (unlikely(queue->state == NVMET_TCP_Q_CONNECTING)) { + queue->write_space(sk); + goto out; + } + + if (sk_stream_is_writeable(sk)) { + clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags); + queue_work_on(queue->cpu, nvmet_tcp_wq, &queue->io_work); + } +out: + read_unlock_bh(&sk->sk_callback_lock); +} + +static void nvmet_tcp_state_change(struct sock *sk) +{ + struct nvmet_tcp_queue *queue; + + write_lock_bh(&sk->sk_callback_lock); + queue = sk->sk_user_data; + if (!queue) + goto done; + + switch (sk->sk_state) { + case TCP_FIN_WAIT1: + case TCP_CLOSE_WAIT: + case TCP_CLOSE: + /* FALLTHRU */ + sk->sk_user_data = NULL; + nvmet_tcp_schedule_release_queue(queue); + break; + default: + pr_warn("queue %d unhandled state %d\n", + queue->idx, 
sk->sk_state); + } +done: + write_unlock_bh(&sk->sk_callback_lock); +} + +static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue) +{ + struct socket *sock = queue->sock; + struct linger sol = { .l_onoff = 1, .l_linger = 0 }; + int ret; + + ret = kernel_getsockname(sock, + (struct sockaddr *)&queue->sockaddr); + if (ret < 0) + return ret; + + ret = kernel_getpeername(sock, + (struct sockaddr *)&queue->sockaddr_peer); + if (ret < 0) + return ret; + + /* + * Cleanup whatever is sitting in the TCP transmit queue on socket + * close. This is done to prevent stale data from being sent should + * the network connection be restored before TCP times out. + */ + ret = kernel_setsockopt(sock, SOL_SOCKET, SO_LINGER, + (char *)&sol, sizeof(sol)); + if (ret) + return ret; + + write_lock_bh(&sock->sk->sk_callback_lock); + sock->sk->sk_user_data = queue; + queue->data_ready = sock->sk->sk_data_ready; + sock->sk->sk_data_ready = nvmet_tcp_data_ready; + queue->state_change = sock->sk->sk_state_change; + sock->sk->sk_state_change = nvmet_tcp_state_change; + queue->write_space = sock->sk->sk_write_space; + sock->sk->sk_write_space = nvmet_tcp_write_space; + write_unlock_bh(&sock->sk->sk_callback_lock); + + return 0; +} + +static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port, + struct socket *newsock) +{ + struct nvmet_tcp_queue *queue; + int ret; + + queue = kzalloc(sizeof(*queue), GFP_KERNEL); + if (!queue) + return -ENOMEM; + + INIT_WORK(&queue->release_work, nvmet_tcp_release_queue_work); + INIT_WORK(&queue->io_work, nvmet_tcp_io_work); + queue->sock = newsock; + queue->port = port; + queue->nr_cmds = 0; + spin_lock_init(&queue->state_lock); + queue->state = NVMET_TCP_Q_CONNECTING; + INIT_LIST_HEAD(&queue->free_list); + init_llist_head(&queue->resp_list); + INIT_LIST_HEAD(&queue->resp_send_list); + + queue->idx = ida_simple_get(&nvmet_tcp_queue_ida, 0, 0, GFP_KERNEL); + if (queue->idx < 0) { + ret = queue->idx; + goto out_free_queue; + } + + ret = nvmet_tcp_alloc_cmd(queue, &queue->connect); + if (ret) + goto out_ida_remove; + + ret = nvmet_sq_init(&queue->nvme_sq); + if (ret) + goto out_free_connect; + + port->last_cpu = cpumask_next_wrap(port->last_cpu, + cpu_online_mask, -1, false); + queue->cpu = port->last_cpu; + nvmet_prepare_receive_pdu(queue); + + mutex_lock(&nvmet_tcp_queue_mutex); + list_add_tail(&queue->queue_list, &nvmet_tcp_queue_list); + mutex_unlock(&nvmet_tcp_queue_mutex); + + ret = nvmet_tcp_set_queue_sock(queue); + if (ret) + goto out_destroy_sq; + + queue_work_on(queue->cpu, nvmet_tcp_wq, &queue->io_work); + + return 0; +out_destroy_sq: + mutex_lock(&nvmet_tcp_queue_mutex); + list_del_init(&queue->queue_list); + mutex_unlock(&nvmet_tcp_queue_mutex); + nvmet_sq_destroy(&queue->nvme_sq); +out_free_connect: + nvmet_tcp_free_cmd(&queue->connect); +out_ida_remove: + ida_simple_remove(&nvmet_tcp_queue_ida, queue->idx); +out_free_queue: + kfree(queue); + return ret; +} + +static void nvmet_tcp_accept_work(struct work_struct *w) +{ + struct nvmet_tcp_port *port = + container_of(w, struct nvmet_tcp_port, accept_work); + struct socket *newsock; + int ret; + + while (true) { + ret = kernel_accept(port->sock, &newsock, O_NONBLOCK); + if (ret < 0) { + if (ret != -EAGAIN) + pr_warn("failed to accept err=%d\n", ret); + return; + } + ret = nvmet_tcp_alloc_queue(port, newsock); + if (ret) { + pr_err("failed to allocate queue\n"); + sock_release(newsock); + } + } +} + +static void nvmet_tcp_listen_data_ready(struct sock *sk) +{ + struct nvmet_tcp_port *port; + + 
read_lock_bh(&sk->sk_callback_lock); + port = sk->sk_user_data; + if (!port) + goto out; + + if (sk->sk_state == TCP_LISTEN) + schedule_work(&port->accept_work); +out: + read_unlock_bh(&sk->sk_callback_lock); +} + +static int nvmet_tcp_add_port(struct nvmet_port *nport) +{ + struct nvmet_tcp_port *port; + __kernel_sa_family_t af; + int opt, ret; + + port = kzalloc(sizeof(*port), GFP_KERNEL); + if (!port) + return -ENOMEM; + + switch (nport->disc_addr.adrfam) { + case NVMF_ADDR_FAMILY_IP4: + af = AF_INET; + break; + case NVMF_ADDR_FAMILY_IP6: + af = AF_INET6; + break; + default: + pr_err("address family %d not supported\n", + nport->disc_addr.adrfam); + ret = -EINVAL; + goto err_port; + } + + ret = inet_pton_with_scope(&init_net, af, nport->disc_addr.traddr, + nport->disc_addr.trsvcid, &port->addr); + if (ret) { + pr_err("malformed ip/port passed: %s:%s\n", + nport->disc_addr.traddr, nport->disc_addr.trsvcid); + goto err_port; + } + + port->nport = nport; + port->last_cpu = -1; + INIT_WORK(&port->accept_work, nvmet_tcp_accept_work); + if (port->nport->inline_data_size < 0) + port->nport->inline_data_size = NVMET_TCP_DEF_INLINE_DATA_SIZE; + + ret = sock_create(port->addr.ss_family, SOCK_STREAM, + IPPROTO_TCP, &port->sock); + if (ret) { + pr_err("failed to create a socket\n"); + goto err_port; + } + + port->sock->sk->sk_user_data = port; + port->data_ready = port->sock->sk->sk_data_ready; + port->sock->sk->sk_data_ready = nvmet_tcp_listen_data_ready; + + opt = 1; + ret = kernel_setsockopt(port->sock, IPPROTO_TCP, + TCP_NODELAY, (char *)&opt, sizeof(opt)); + if (ret) { + pr_err("failed to set TCP_NODELAY sock opt %d\n", ret); + goto err_sock; + } + + ret = kernel_setsockopt(port->sock, SOL_SOCKET, SO_REUSEADDR, + (char *)&opt, sizeof(opt)); + if (ret) { + pr_err("failed to set SO_REUSEADDR sock opt %d\n", ret); + goto err_sock; + } + + ret = kernel_bind(port->sock, (struct sockaddr *)&port->addr, + sizeof(port->addr)); + if (ret) { + pr_err("failed to bind port socket %d\n", ret); + goto err_sock; + } + + ret = kernel_listen(port->sock, 128); + if (ret) { + pr_err("failed to listen %d on port sock\n", ret); + goto err_sock; + } + + nport->priv = port; + pr_info("enabling port %d (%pISpc)\n", + le16_to_cpu(nport->disc_addr.portid), &port->addr); + + return 0; + +err_sock: + sock_release(port->sock); +err_port: + kfree(port); + return ret; +} + +static void nvmet_tcp_remove_port(struct nvmet_port *nport) +{ + struct nvmet_tcp_port *port = nport->priv; + + write_lock_bh(&port->sock->sk->sk_callback_lock); + port->sock->sk->sk_data_ready = port->data_ready; + port->sock->sk->sk_user_data = NULL; + write_unlock_bh(&port->sock->sk->sk_callback_lock); + cancel_work_sync(&port->accept_work); + + sock_release(port->sock); + kfree(port); +} + +static void nvmet_tcp_delete_ctrl(struct nvmet_ctrl *ctrl) +{ + struct nvmet_tcp_queue *queue; + + mutex_lock(&nvmet_tcp_queue_mutex); + list_for_each_entry(queue, &nvmet_tcp_queue_list, queue_list) + if (queue->nvme_sq.ctrl == ctrl) + kernel_sock_shutdown(queue->sock, SHUT_RDWR); + mutex_unlock(&nvmet_tcp_queue_mutex); +} + +static u16 nvmet_tcp_install_queue(struct nvmet_sq *sq) +{ + struct nvmet_tcp_queue *queue = + container_of(sq, struct nvmet_tcp_queue, nvme_sq); + + if (sq->qid == 0) { + /* Let inflight controller teardown complete */ + flush_scheduled_work(); + } + + queue->nr_cmds = sq->size * 2; + if (nvmet_tcp_alloc_cmds(queue)) + return NVME_SC_INTERNAL; + return 0; +} + +static void nvmet_tcp_disc_port_addr(struct nvmet_req *req, + struct nvmet_port 
*nport, char *traddr) +{ + struct nvmet_tcp_port *port = nport->priv; + + if (inet_addr_is_any((struct sockaddr *)&port->addr)) { + struct nvmet_tcp_cmd *cmd = + container_of(req, struct nvmet_tcp_cmd, req); + struct nvmet_tcp_queue *queue = cmd->queue; + + sprintf(traddr, "%pISc", (struct sockaddr *)&queue->sockaddr); + } else { + memcpy(traddr, nport->disc_addr.traddr, NVMF_TRADDR_SIZE); + } +} + +static struct nvmet_fabrics_ops nvmet_tcp_ops = { + .owner = THIS_MODULE, + .type = NVMF_TRTYPE_TCP, + .msdbd = 1, + .has_keyed_sgls = 0, + .add_port = nvmet_tcp_add_port, + .remove_port = nvmet_tcp_remove_port, + .queue_response = nvmet_tcp_queue_response, + .delete_ctrl = nvmet_tcp_delete_ctrl, + .install_queue = nvmet_tcp_install_queue, + .disc_traddr = nvmet_tcp_disc_port_addr, +}; + +static int __init nvmet_tcp_init(void) +{ + int ret; + + nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq", WQ_HIGHPRI, 0); + if (!nvmet_tcp_wq) + return -ENOMEM; + + ret = nvmet_register_transport(&nvmet_tcp_ops); + if (ret) + goto err; + + return 0; +err: + destroy_workqueue(nvmet_tcp_wq); + return ret; +} + +static void __exit nvmet_tcp_exit(void) +{ + struct nvmet_tcp_queue *queue; + + nvmet_unregister_transport(&nvmet_tcp_ops); + + flush_scheduled_work(); + mutex_lock(&nvmet_tcp_queue_mutex); + list_for_each_entry(queue, &nvmet_tcp_queue_list, queue_list) + kernel_sock_shutdown(queue->sock, SHUT_RDWR); + mutex_unlock(&nvmet_tcp_queue_mutex); + flush_scheduled_work(); + + destroy_workqueue(nvmet_tcp_wq); +} + +module_init(nvmet_tcp_init); +module_exit(nvmet_tcp_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS("nvmet-transport-3"); /* 3 == NVMF_TRTYPE_TCP */ diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c index 27f67dfa649d..f7301bb4ef3b 100644 --- a/drivers/nvmem/core.c +++ b/drivers/nvmem/core.c @@ -28,6 +28,7 @@ struct nvmem_device { size_t size; bool read_only; int flags; + enum nvmem_type type; struct bin_attribute eeprom; struct device *base_dev; struct list_head cells; @@ -60,6 +61,13 @@ static LIST_HEAD(nvmem_lookup_list); static BLOCKING_NOTIFIER_HEAD(nvmem_notifier); +static const char * const nvmem_type_str[] = { + [NVMEM_TYPE_UNKNOWN] = "Unknown", + [NVMEM_TYPE_EEPROM] = "EEPROM", + [NVMEM_TYPE_OTP] = "OTP", + [NVMEM_TYPE_BATTERY_BACKED] = "Battery backed", +}; + #ifdef CONFIG_DEBUG_LOCK_ALLOC static struct lock_class_key eeprom_lock_key; #endif @@ -83,6 +91,21 @@ static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset, return -EINVAL; } +static ssize_t type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct nvmem_device *nvmem = to_nvmem_device(dev); + + return sprintf(buf, "%s\n", nvmem_type_str[nvmem->type]); +} + +static DEVICE_ATTR_RO(type); + +static struct attribute *nvmem_attrs[] = { + &dev_attr_type.attr, + NULL, +}; + static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj, struct bin_attribute *attr, char *buf, loff_t pos, size_t count) @@ -168,6 +191,7 @@ static struct bin_attribute *nvmem_bin_rw_attributes[] = { static const struct attribute_group nvmem_bin_rw_group = { .bin_attrs = nvmem_bin_rw_attributes, + .attrs = nvmem_attrs, }; static const struct attribute_group *nvmem_rw_dev_groups[] = { @@ -191,6 +215,7 @@ static struct bin_attribute *nvmem_bin_ro_attributes[] = { static const struct attribute_group nvmem_bin_ro_group = { .bin_attrs = nvmem_bin_ro_attributes, + .attrs = nvmem_attrs, }; static const struct attribute_group *nvmem_ro_dev_groups[] = { @@ -215,6 +240,7 @@ static struct bin_attribute 
*nvmem_bin_rw_root_attributes[] = { static const struct attribute_group nvmem_bin_rw_root_group = { .bin_attrs = nvmem_bin_rw_root_attributes, + .attrs = nvmem_attrs, }; static const struct attribute_group *nvmem_rw_root_dev_groups[] = { @@ -238,6 +264,7 @@ static struct bin_attribute *nvmem_bin_ro_root_attributes[] = { static const struct attribute_group nvmem_bin_ro_root_group = { .bin_attrs = nvmem_bin_ro_root_attributes, + .attrs = nvmem_attrs, }; static const struct attribute_group *nvmem_ro_root_dev_groups[] = { @@ -605,9 +632,11 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) nvmem->dev.bus = &nvmem_bus_type; nvmem->dev.parent = config->dev; nvmem->priv = config->priv; + nvmem->type = config->type; nvmem->reg_read = config->reg_read; nvmem->reg_write = config->reg_write; - nvmem->dev.of_node = config->dev->of_node; + if (!config->no_of_node) + nvmem->dev.of_node = config->dev->of_node; if (config->id == -1 && config->name) { dev_set_name(&nvmem->dev, "%s", config->name); diff --git a/drivers/nvmem/meson-efuse.c b/drivers/nvmem/meson-efuse.c index d769840d1e18..99372768446b 100644 --- a/drivers/nvmem/meson-efuse.c +++ b/drivers/nvmem/meson-efuse.c @@ -14,6 +14,7 @@ * more details. */ +#include <linux/clk.h> #include <linux/module.h> #include <linux/nvmem-provider.h> #include <linux/of_device.h> @@ -46,10 +47,36 @@ static int meson_efuse_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct nvmem_device *nvmem; struct nvmem_config *econfig; + struct clk *clk; unsigned int size; + int ret; - if (meson_sm_call(SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) + clk = devm_clk_get(dev, NULL); + if (IS_ERR(clk)) { + ret = PTR_ERR(clk); + if (ret != -EPROBE_DEFER) + dev_err(dev, "failed to get efuse gate"); + return ret; + } + + ret = clk_prepare_enable(clk); + if (ret) { + dev_err(dev, "failed to enable gate"); + return ret; + } + + ret = devm_add_action_or_reset(dev, + (void(*)(void *))clk_disable_unprepare, + clk); + if (ret) { + dev_err(dev, "failed to add disable callback"); + return ret; + } + + if (meson_sm_call(SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) { + dev_err(dev, "failed to get max user"); return -EINVAL; + } econfig = devm_kzalloc(dev, sizeof(*econfig), GFP_KERNEL); if (!econfig) diff --git a/drivers/of/address.c b/drivers/of/address.c index 7ddbf0a1ab86..2270373b30ab 100644 --- a/drivers/of/address.c +++ b/drivers/of/address.c @@ -110,8 +110,8 @@ static int of_bus_pci_match(struct device_node *np) * "vci" is for the /chaos bridge on 1st-gen PCI powermacs * "ht" is hypertransport */ - return !strcmp(np->type, "pci") || !strcmp(np->type, "pciex") || - !strcmp(np->type, "vci") || !strcmp(np->type, "ht"); + return of_node_is_type(np, "pci") || of_node_is_type(np, "pciex") || + of_node_is_type(np, "vci") || of_node_is_type(np, "ht"); } static void of_bus_pci_count_cells(struct device_node *np, @@ -371,7 +371,7 @@ EXPORT_SYMBOL(of_pci_range_to_resource); static int of_bus_isa_match(struct device_node *np) { - return !strcmp(np->name, "isa"); + return of_node_name_eq(np, "isa"); } static void of_bus_isa_count_cells(struct device_node *child, diff --git a/drivers/of/base.c b/drivers/of/base.c index 09692c9b32a7..5226e898476e 100644 --- a/drivers/of/base.c +++ b/drivers/of/base.c @@ -79,6 +79,13 @@ bool of_node_name_prefix(const struct device_node *np, const char *prefix) } EXPORT_SYMBOL(of_node_name_prefix); +static bool __of_node_is_type(const struct device_node *np, const char *type) +{ + const char *match = __of_get_property(np, "device_type", NULL); + + return np && match && type && !strcmp(match, type); 
+} + int of_n_addr_cells(struct device_node *np) { u32 cells; @@ -116,9 +123,6 @@ int __weak of_node_to_nid(struct device_node *np) } #endif -static struct device_node **phandle_cache; -static u32 phandle_cache_mask; - /* * Assumptions behind phandle_cache implementation: * - phandle property values are in a contiguous range of 1..n @@ -127,6 +131,66 @@ static u32 phandle_cache_mask; * - the phandle lookup overhead reduction provided by the cache * will likely be less */ + +static struct device_node **phandle_cache; +static u32 phandle_cache_mask; + +/* + * Caller must hold devtree_lock. + */ +static void __of_free_phandle_cache(void) +{ + u32 cache_entries = phandle_cache_mask + 1; + u32 k; + + if (!phandle_cache) + return; + + for (k = 0; k < cache_entries; k++) + of_node_put(phandle_cache[k]); + + kfree(phandle_cache); + phandle_cache = NULL; +} + +int of_free_phandle_cache(void) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&devtree_lock, flags); + + __of_free_phandle_cache(); + + raw_spin_unlock_irqrestore(&devtree_lock, flags); + + return 0; +} +#if !defined(CONFIG_MODULES) +late_initcall_sync(of_free_phandle_cache); +#endif + +/* + * Caller must hold devtree_lock. + */ +void __of_free_phandle_cache_entry(phandle handle) +{ + phandle masked_handle; + struct device_node *np; + + if (!handle) + return; + + masked_handle = handle & phandle_cache_mask; + + if (phandle_cache) { + np = phandle_cache[masked_handle]; + if (np && handle == np->phandle) { + of_node_put(np); + phandle_cache[masked_handle] = NULL; + } + } +} + void of_populate_phandle_cache(void) { unsigned long flags; @@ -136,8 +200,7 @@ void of_populate_phandle_cache(void) raw_spin_lock_irqsave(&devtree_lock, flags); - kfree(phandle_cache); - phandle_cache = NULL; + __of_free_phandle_cache(); for_each_of_allnodes(np) if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL) @@ -155,30 +218,15 @@ void of_populate_phandle_cache(void) goto out; for_each_of_allnodes(np) - if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL) + if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL) { + of_node_get(np); phandle_cache[np->phandle & phandle_cache_mask] = np; + } out: raw_spin_unlock_irqrestore(&devtree_lock, flags); } -int of_free_phandle_cache(void) -{ - unsigned long flags; - - raw_spin_lock_irqsave(&devtree_lock, flags); - - kfree(phandle_cache); - phandle_cache = NULL; - - raw_spin_unlock_irqrestore(&devtree_lock, flags); - - return 0; -} -#if !defined(CONFIG_MODULES) -late_initcall_sync(of_free_phandle_cache); -#endif - void __init of_core_init(void) { struct device_node *np; @@ -482,14 +530,14 @@ static int __of_device_is_compatible(const struct device_node *device, /* Matching type is better than matching name */ if (type && type[0]) { - if (!device->type || of_node_cmp(type, device->type)) + if (!__of_node_is_type(device, type)) return 0; score += 2; } /* Matching name is a bit better than not */ if (name && name[0]) { - if (!device->name || of_node_cmp(name, device->name)) + if (!of_node_name_eq(device, name)) return 0; score++; } @@ -775,7 +823,7 @@ struct device_node *of_get_next_cpu_node(struct device_node *prev) } for (; next; next = next->sibling) { if (!(of_node_name_eq(next, "cpu") || - (next->type && !of_node_cmp(next->type, "cpu")))) + __of_node_is_type(next, "cpu"))) continue; if (of_node_get(next)) break; @@ -828,7 +876,7 @@ struct device_node *of_get_child_by_name(const struct device_node *node, struct device_node *child; for_each_child_of_node(node, child) - if (child->name && (of_node_cmp(child->name, name) == 
0)) + if (of_node_name_eq(child, name)) break; return child; } @@ -954,8 +1002,7 @@ struct device_node *of_find_node_by_name(struct device_node *from, raw_spin_lock_irqsave(&devtree_lock, flags); for_each_of_allnodes_from(from, np) - if (np->name && (of_node_cmp(np->name, name) == 0) - && of_node_get(np)) + if (of_node_name_eq(np, name) && of_node_get(np)) break; of_node_put(from); raw_spin_unlock_irqrestore(&devtree_lock, flags); @@ -983,8 +1030,7 @@ struct device_node *of_find_node_by_type(struct device_node *from, raw_spin_lock_irqsave(&devtree_lock, flags); for_each_of_allnodes_from(from, np) - if (np->type && (of_node_cmp(np->type, type) == 0) - && of_node_get(np)) + if (__of_node_is_type(np, type) && of_node_get(np)) break; of_node_put(from); raw_spin_unlock_irqrestore(&devtree_lock, flags); @@ -1190,13 +1236,23 @@ struct device_node *of_find_node_by_phandle(phandle handle) if (phandle_cache[masked_handle] && handle == phandle_cache[masked_handle]->phandle) np = phandle_cache[masked_handle]; + if (np && of_node_check_flag(np, OF_DETACHED)) { + WARN_ON(1); /* did not uncache np on node removal */ + of_node_put(np); + phandle_cache[masked_handle] = NULL; + np = NULL; + } } if (!np) { for_each_of_allnodes(np) - if (np->phandle == handle) { - if (phandle_cache) + if (np->phandle == handle && + !of_node_check_flag(np, OF_DETACHED)) { + if (phandle_cache) { + /* will put when removed from cache */ + of_node_get(np); phandle_cache[masked_handle] = np; + } break; } } @@ -2108,9 +2164,9 @@ struct device_node *of_find_next_cache_node(const struct device_node *np) /* OF on pmac has nodes instead of properties named "l2-cache" * beneath CPU nodes. */ - if (IS_ENABLED(CONFIG_PPC_PMAC) && !strcmp(np->type, "cpu")) + if (IS_ENABLED(CONFIG_PPC_PMAC) && of_node_is_type(np, "cpu")) for_each_child_of_node(np, child) - if (!strcmp(child->type, "cache")) + if (of_node_is_type(child, "cache")) return child; return NULL; diff --git a/drivers/of/device.c b/drivers/of/device.c index 5592437bb3d1..3717f2a20d0d 100644 --- a/drivers/of/device.c +++ b/drivers/of/device.c @@ -211,7 +211,7 @@ static ssize_t of_device_get_modalias(struct device *dev, char *str, ssize_t len /* Name & Type */ /* %p eats all alphanum characters, so %c must be used here */ csize = snprintf(str, len, "of:N%pOFn%c%s", dev->of_node, 'T', - dev->of_node->type); + of_node_get_device_type(dev->of_node)); tsize = csize; len -= csize; if (str) @@ -281,7 +281,7 @@ EXPORT_SYMBOL_GPL(of_device_modalias); */ void of_device_uevent(struct device *dev, struct kobj_uevent_env *env) { - const char *compat; + const char *compat, *type; struct alias_prop *app; struct property *p; int seen = 0; @@ -291,8 +291,9 @@ void of_device_uevent(struct device *dev, struct kobj_uevent_env *env) add_uevent_var(env, "OF_NAME=%pOFn", dev->of_node); add_uevent_var(env, "OF_FULLNAME=%pOF", dev->of_node); - if (dev->of_node->type && strcmp("", dev->of_node->type) != 0) - add_uevent_var(env, "OF_TYPE=%s", dev->of_node->type); + type = of_node_get_device_type(dev->of_node); + if (type) + add_uevent_var(env, "OF_TYPE=%s", type); /* Since the compatible field can contain pretty much anything * it's not really legal to split it out with commas. We split it diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c index f4f8ed9b5454..a09c1c3cf831 100644 --- a/drivers/of/dynamic.c +++ b/drivers/of/dynamic.c @@ -205,15 +205,24 @@ static void __of_attach_node(struct device_node *np) const __be32 *phandle; int sz; - np->name = __of_get_property(np, "name", NULL) ? 
: ""; - np->type = __of_get_property(np, "device_type", NULL) ? : ""; - - phandle = __of_get_property(np, "phandle", &sz); - if (!phandle) - phandle = __of_get_property(np, "linux,phandle", &sz); - if (IS_ENABLED(CONFIG_PPC_PSERIES) && !phandle) - phandle = __of_get_property(np, "ibm,phandle", &sz); - np->phandle = (phandle && (sz >= 4)) ? be32_to_cpup(phandle) : 0; + if (!of_node_check_flag(np, OF_OVERLAY)) { + np->name = __of_get_property(np, "name", NULL); + np->type = __of_get_property(np, "device_type", NULL); + if (!np->name) + np->name = ""; + if (!np->type) + np->type = ""; + + phandle = __of_get_property(np, "phandle", &sz); + if (!phandle) + phandle = __of_get_property(np, "linux,phandle", &sz); + if (IS_ENABLED(CONFIG_PPC_PSERIES) && !phandle) + phandle = __of_get_property(np, "ibm,phandle", &sz); + if (phandle && (sz >= 4)) + np->phandle = be32_to_cpup(phandle); + else + np->phandle = 0; + } np->child = NULL; np->sibling = np->parent->child; @@ -268,13 +277,13 @@ void __of_detach_node(struct device_node *np) } of_node_set_flag(np, OF_DETACHED); + + /* race with of_find_node_by_phandle() prevented by devtree_lock */ + __of_free_phandle_cache_entry(np->phandle); } /** * of_detach_node() - "Unplug" a node from the device tree. - * - * The caller must hold a reference to the node. The memory associated with - * the node is not freed until its refcount goes to zero. */ int of_detach_node(struct device_node *np) { @@ -330,6 +339,25 @@ void of_node_release(struct kobject *kobj) if (!of_node_check_flag(node, OF_DYNAMIC)) return; + if (of_node_check_flag(node, OF_OVERLAY)) { + + if (!of_node_check_flag(node, OF_OVERLAY_FREE_CSET)) { + /* premature refcount of zero, do not free memory */ + pr_err("ERROR: memory leak before free overlay changeset, %pOF\n", + node); + return; + } + + /* + * If node->properties non-empty then properties were added + * to this node either by different overlay that has not + * yet been removed, or by a non-overlay mechanism. + */ + if (node->properties) + pr_err("ERROR: %s(), unexpected properties in %pOF\n", + __func__, node); + } + property_list_free(node->properties); property_list_free(node->deadprops); @@ -434,6 +462,16 @@ struct device_node *__of_node_dup(const struct device_node *np, static void __of_changeset_entry_destroy(struct of_changeset_entry *ce) { + if (ce->action == OF_RECONFIG_ATTACH_NODE && + of_node_check_flag(ce->np, OF_OVERLAY)) { + if (kref_read(&ce->np->kobj.kref) > 1) { + pr_err("ERROR: memory leak, expected refcount 1 instead of %d, of_node_get()/of_node_put() unbalanced - destroy cset entry: attach overlay node %pOF\n", + kref_read(&ce->np->kobj.kref), ce->np); + } else { + of_node_set_flag(ce->np, OF_OVERLAY_FREE_CSET); + } + } + of_node_put(ce->np); list_del(&ce->node); kfree(ce); diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index bb532aae0d92..7099c652c6a5 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -891,15 +891,20 @@ const void * __init of_flat_dt_match_machine(const void *default_match, } #ifdef CONFIG_BLK_DEV_INITRD -#ifndef __early_init_dt_declare_initrd static void __early_init_dt_declare_initrd(unsigned long start, unsigned long end) { - initrd_start = (unsigned long)__va(start); - initrd_end = (unsigned long)__va(end); - initrd_below_start_ok = 1; + /* ARM64 would cause a BUG to occur here when CONFIG_DEBUG_VM is + * enabled since __va() is called too early. ARM64 does make use + * of phys_initrd_start/phys_initrd_size so we can skip this + * conversion. 
+ */ + if (!IS_ENABLED(CONFIG_ARM64)) { + initrd_start = (unsigned long)__va(start); + initrd_end = (unsigned long)__va(end); + initrd_below_start_ok = 1; + } } -#endif /** * early_init_dt_check_for_initrd - Decode initrd location from flat tree @@ -924,6 +929,8 @@ static void __init early_init_dt_check_for_initrd(unsigned long node) end = of_read_number(prop, len/4); __early_init_dt_declare_initrd(start, end); + phys_initrd_start = start; + phys_initrd_size = end - start; pr_debug("initrd_start=0x%llx initrd_end=0x%llx\n", (unsigned long long)start, (unsigned long long)end); @@ -1200,8 +1207,12 @@ bool __init early_init_dt_verify(void *params) void __init early_init_dt_scan_nodes(void) { + int rc = 0; + /* Retrieve various information from the /chosen node */ - of_scan_flat_dt(early_init_dt_scan_chosen, boot_command_line); + rc = of_scan_flat_dt(early_init_dt_scan_chosen, boot_command_line); + if (!rc) + pr_warn("No chosen node found, continuing without\n"); /* Initialize {size,address}-cells info */ of_scan_flat_dt(early_init_dt_scan_root, NULL); diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c index 7a0a18980b98..c72eef988041 100644 --- a/drivers/of/kobj.c +++ b/drivers/of/kobj.c @@ -133,6 +133,9 @@ int __of_attach_node_sysfs(struct device_node *np) } if (!name) return -ENOMEM; + + of_node_get(np); + rc = kobject_add(&np->kobj, parent, "%s", name); kfree(name); if (rc) @@ -159,6 +162,5 @@ void __of_detach_node_sysfs(struct device_node *np) kobject_del(&np->kobj); } - /* finally remove the kobj_init ref */ of_node_put(np); } diff --git a/drivers/of/of_net.c b/drivers/of/of_net.c index 53189d4022a6..810ab0fbcccb 100644 --- a/drivers/of/of_net.c +++ b/drivers/of/of_net.c @@ -81,42 +81,3 @@ const void *of_get_mac_address(struct device_node *np) return of_get_mac_addr(np, "address"); } EXPORT_SYMBOL(of_get_mac_address); - -/** - * Obtain the MAC address from an nvmem provider named 'mac-address' through - * device tree. - * On success, copies the new address into memory pointed to by addr and - * returns 0. Returns a negative error code otherwise. 
- * @np: Device tree node containing the nvmem-cells phandle - * @addr: Pointer to receive the MAC address using ether_addr_copy() - */ -int of_get_nvmem_mac_address(struct device_node *np, void *addr) -{ - struct nvmem_cell *cell; - const void *mac; - size_t len; - int ret; - - cell = of_nvmem_cell_get(np, "mac-address"); - if (IS_ERR(cell)) - return PTR_ERR(cell); - - mac = nvmem_cell_read(cell, &len); - - nvmem_cell_put(cell); - - if (IS_ERR(mac)) - return PTR_ERR(mac); - - if (len < ETH_ALEN || !is_valid_ether_addr(mac)) { - ret = -EINVAL; - } else { - ether_addr_copy(addr, mac); - ret = 0; - } - - kfree(mac); - - return ret; -} -EXPORT_SYMBOL(of_get_nvmem_mac_address); diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h index 5d1567025358..24786818e32e 100644 --- a/drivers/of/of_private.h +++ b/drivers/of/of_private.h @@ -84,6 +84,10 @@ static inline void __of_detach_node_sysfs(struct device_node *np) {} int of_resolve_phandles(struct device_node *tree); #endif +#if defined(CONFIG_OF_DYNAMIC) +void __of_free_phandle_cache_entry(phandle handle); +#endif + #if defined(CONFIG_OF_OVERLAY) void of_overlay_mutex_lock(void); void of_overlay_mutex_unlock(void); diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c index 42b1f73ac5f6..2b5ac43a5690 100644 --- a/drivers/of/overlay.c +++ b/drivers/of/overlay.c @@ -23,14 +23,34 @@ #include "of_private.h" +/** + * struct target - info about current target node as recursing through overlay + * @np: node where current level of overlay will be applied + * @in_livetree: @np is a node in the live devicetree + * + * Used in the algorithm to create the portion of a changeset that describes + * an overlay fragment, which is a devicetree subtree. Initially @np is a node + * in the live devicetree where the overlay subtree is targeted to be grafted + * into. When recursing to the next level of the overlay subtree, the target + * also recurses to the next level of the live devicetree, as long as overlay + * subtree node also exists in the live devicetree. When a node in the overlay + * subtree does not exist at the same level in the live devicetree, target->np + * points to a newly allocated node, and all subsequent targets in the subtree + * will be newly allocated nodes. + */ +struct target { + struct device_node *np; + bool in_livetree; +}; + /** * struct fragment - info about fragment nodes in overlay expanded device tree * @target: target of the overlay operation * @overlay: pointer to the __overlay__ node */ struct fragment { - struct device_node *target; struct device_node *overlay; + struct device_node *target; }; /** @@ -72,8 +92,7 @@ static int devicetree_corrupt(void) } static int build_changeset_next_level(struct overlay_changeset *ovcs, - struct device_node *target_node, - const struct device_node *overlay_node); + struct target *target, const struct device_node *overlay_node); /* * of_resolve_phandles() finds the largest phandle in the live tree. 
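For orientation, here is a minimal sketch of how the new struct target is threaded through the changeset build. It is illustrative scaffolding, not part of the diff; the identifiers (fragment, ovcs, tchild, node) follow the hunks below, while the surrounding caller context is assumed:

	/*
	 * Sketch only: build_changeset() seeds the recursion with a
	 * target rooted in the live devicetree ...
	 */
	struct target target = {
		.np		= fragment->target,	/* graft point in the live tree */
		.in_livetree	= true,
	};
	ret = build_changeset_next_level(ovcs, &target, fragment->overlay);

	/*
	 * ... and add_changeset_node() clears in_livetree once a node had
	 * to be allocated because no matching child exists in the live
	 * tree; everything below that point exists only in the changeset.
	 */
	struct target target_child = {
		.np		= tchild,		/* from __of_node_dup() */
		.in_livetree	= false,
	};
	ret = build_changeset_next_level(ovcs, &target_child, node);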
@@ -257,15 +276,23 @@ err_free_target_path: /** * add_changeset_property() - add @overlay_prop to overlay changeset * @ovcs: overlay changeset - * @target_node: where to place @overlay_prop in live tree + * @target: where @overlay_prop will be placed * @overlay_prop: property to add or update, from overlay tree * @is_symbols_prop: 1 if @overlay_prop is from node "/__symbols__" * - * If @overlay_prop does not already exist in @target_node, add changeset entry - * to add @overlay_prop in @target_node, else add changeset entry to update + * If @overlay_prop does not already exist in live devicetree, add changeset + * entry to add @overlay_prop in @target, else add changeset entry to update * value of @overlay_prop. * - * Some special properties are not updated (no error returned). + * @target may be either in the live devicetree or in a new subtree that + * is contained in the changeset. + * + * Some special properties are not added or updated (no error returned): + * "name", "phandle", "linux,phandle". + * + * Properties "#address-cells" and "#size-cells" are not updated if they + * are already present in the live tree; in that case the values in the + * overlay must match the values in the live tree. * * Update of property in symbols node is not allowed. * @@ -273,19 +300,23 @@ err_free_target_path: * invalid @overlay. */ static int add_changeset_property(struct overlay_changeset *ovcs, - struct device_node *target_node, - struct property *overlay_prop, + struct target *target, struct property *overlay_prop, bool is_symbols_prop) { struct property *new_prop = NULL, *prop; int ret = 0; + bool check_for_non_overlay_node = false; - prop = of_find_property(target_node, overlay_prop->name, NULL); + if (target->in_livetree) + if (!of_prop_cmp(overlay_prop->name, "name") || + !of_prop_cmp(overlay_prop->name, "phandle") || + !of_prop_cmp(overlay_prop->name, "linux,phandle")) + return 0; - if (!of_prop_cmp(overlay_prop->name, "name") || - !of_prop_cmp(overlay_prop->name, "phandle") || - !of_prop_cmp(overlay_prop->name, "linux,phandle")) - return 0; + if (target->in_livetree) + prop = of_find_property(target->np, overlay_prop->name, NULL); + else + prop = NULL; if (is_symbols_prop) { if (prop) @@ -298,12 +329,36 @@ static int add_changeset_property(struct overlay_changeset *ovcs, if (!new_prop) return -ENOMEM; - if (!prop) - ret = of_changeset_add_property(&ovcs->cset, target_node, + if (!prop) { + check_for_non_overlay_node = true; + if (!target->in_livetree) { + new_prop->next = target->np->deadprops; + target->np->deadprops = new_prop; + } + ret = of_changeset_add_property(&ovcs->cset, target->np, new_prop); - else - ret = of_changeset_update_property(&ovcs->cset, target_node, + } else if (!of_prop_cmp(prop->name, "#address-cells")) { + if (!of_prop_val_eq(prop, new_prop)) { + pr_err("ERROR: changing value of #address-cells is not allowed in %pOF\n", + target->np); + ret = -EINVAL; + } + } else if (!of_prop_cmp(prop->name, "#size-cells")) { + if (!of_prop_val_eq(prop, new_prop)) { + pr_err("ERROR: changing value of #size-cells is not allowed in %pOF\n", + target->np); + ret = -EINVAL; + } + } else { + check_for_non_overlay_node = true; + ret = of_changeset_update_property(&ovcs->cset, target->np, new_prop); + } + + if (check_for_non_overlay_node && + !of_node_check_flag(target->np, OF_OVERLAY)) + pr_err("WARNING: memory leak will occur if overlay removed, property: %pOF/%s\n", + target->np, new_prop->name); if (ret) { kfree(new_prop->name); @@ -315,14 +370,14 @@ static int
add_changeset_property(struct overlay_changeset *ovcs, /** * add_changeset_node() - add @node (and children) to overlay changeset - * @ovcs: overlay changeset - * @target_node: where to place @node in live tree - * @node: node from within overlay device tree fragment + * @ovcs: overlay changeset + * @target: where @node will be placed in live tree or changeset + * @node: node from within overlay device tree fragment * - * If @node does not already exist in @target_node, add changeset entry - * to add @node in @target_node. + * If @node does not already exist in @target, add changeset entry + * to add @node in @target. * - * If @node already exists in @target_node, and the existing node has + * If @node already exists in @target, and the existing node has * a phandle, the overlay node is not allowed to have a phandle. * * If @node has child nodes, add the children recursively via @@ -342,49 +397,65 @@ static int add_changeset_property(struct overlay_changeset *ovcs, * a live devicetree created from Open Firmware. * * NOTE_2: Multiple mods of created nodes not supported. - * If more than one fragment contains a node that does not already exist - * in the live tree, then for each fragment of_changeset_attach_node() - * will add a changeset entry to add the node. When the changeset is - * applied, __of_attach_node() will attach the node twice (once for - * each fragment). At this point the device tree will be corrupted. - * - * TODO: add integrity check to ensure that multiple fragments do not - * create the same node. * * Returns 0 on success, -ENOMEM if memory allocation failure, or -EINVAL if * invalid @overlay. */ static int add_changeset_node(struct overlay_changeset *ovcs, - struct device_node *target_node, struct device_node *node) + struct target *target, struct device_node *node) { const char *node_kbasename; + const __be32 *phandle; struct device_node *tchild; - int ret = 0; + struct target target_child; + int ret = 0, size; node_kbasename = kbasename(node->full_name); - for_each_child_of_node(target_node, tchild) + for_each_child_of_node(target->np, tchild) if (!of_node_cmp(node_kbasename, kbasename(tchild->full_name))) break; if (!tchild) { - tchild = __of_node_dup(node, node_kbasename); + tchild = __of_node_dup(NULL, node_kbasename); if (!tchild) return -ENOMEM; - tchild->parent = target_node; + tchild->parent = target->np; + tchild->name = __of_get_property(node, "name", NULL); + tchild->type = __of_get_property(node, "device_type", NULL); + + if (!tchild->name) + tchild->name = ""; + if (!tchild->type) + tchild->type = ""; + + /* ignore obsolete "linux,phandle" */ + phandle = __of_get_property(node, "phandle", &size); + if (phandle && (size == 4)) + tchild->phandle = be32_to_cpup(phandle); + + of_node_set_flag(tchild, OF_OVERLAY); ret = of_changeset_attach_node(&ovcs->cset, tchild); if (ret) return ret; - return build_changeset_next_level(ovcs, tchild, node); + target_child.np = tchild; + target_child.in_livetree = false; + + ret = build_changeset_next_level(ovcs, &target_child, node); + of_node_put(tchild); + return ret; } - if (node->phandle && tchild->phandle) + if (node->phandle && tchild->phandle) { ret = -EINVAL; - else - ret = build_changeset_next_level(ovcs, tchild, node); + } else { + target_child.np = tchild; + target_child.in_livetree = target->in_livetree; + ret = build_changeset_next_level(ovcs, &target_child, node); + } of_node_put(tchild); return ret; @@ -393,7 +464,7 @@ static int add_changeset_node(struct overlay_changeset *ovcs, /** * 
build_changeset_next_level() - add level of overlay changeset * @ovcs: overlay changeset - * @target_node: where to place @overlay_node in live tree + * @target: where to place @overlay_node in live tree * @overlay_node: node from within an overlay device tree fragment * * Add the properties (if any) and nodes (if any) from @overlay_node to the @@ -406,27 +477,26 @@ static int add_changeset_node(struct overlay_changeset *ovcs, * invalid @overlay_node. */ static int build_changeset_next_level(struct overlay_changeset *ovcs, - struct device_node *target_node, - const struct device_node *overlay_node) + struct target *target, const struct device_node *overlay_node) { struct device_node *child; struct property *prop; int ret; for_each_property_of_node(overlay_node, prop) { - ret = add_changeset_property(ovcs, target_node, prop, 0); + ret = add_changeset_property(ovcs, target, prop, 0); if (ret) { pr_debug("Failed to apply prop @%pOF/%s, err=%d\n", - target_node, prop->name, ret); + target->np, prop->name, ret); return ret; } } for_each_child_of_node(overlay_node, child) { - ret = add_changeset_node(ovcs, target_node, child); + ret = add_changeset_node(ovcs, target, child); if (ret) { pr_debug("Failed to apply node @%pOF/%pOFn, err=%d\n", - target_node, child, ret); + target->np, child, ret); of_node_put(child); return ret; } @@ -439,17 +509,17 @@ static int build_changeset_next_level(struct overlay_changeset *ovcs, * Add the properties from __overlay__ node to the @ovcs->cset changeset. */ static int build_changeset_symbols_node(struct overlay_changeset *ovcs, - struct device_node *target_node, + struct target *target, const struct device_node *overlay_symbols_node) { struct property *prop; int ret; for_each_property_of_node(overlay_symbols_node, prop) { - ret = add_changeset_property(ovcs, target_node, prop, 1); + ret = add_changeset_property(ovcs, target, prop, 1); if (ret) { - pr_debug("Failed to apply prop @%pOF/%s, err=%d\n", - target_node, prop->name, ret); + pr_debug("Failed to apply symbols prop @%pOF/%s, err=%d\n", + target->np, prop->name, ret); return ret; } } @@ -457,6 +527,98 @@ static int build_changeset_symbols_node(struct overlay_changeset *ovcs, return 0; } +static int find_dup_cset_node_entry(struct overlay_changeset *ovcs, + struct of_changeset_entry *ce_1) +{ + struct of_changeset_entry *ce_2; + char *fn_1, *fn_2; + int node_path_match; + + if (ce_1->action != OF_RECONFIG_ATTACH_NODE && + ce_1->action != OF_RECONFIG_DETACH_NODE) + return 0; + + ce_2 = ce_1; + list_for_each_entry_continue(ce_2, &ovcs->cset.entries, node) { + if ((ce_2->action != OF_RECONFIG_ATTACH_NODE && + ce_2->action != OF_RECONFIG_DETACH_NODE) || + of_node_cmp(ce_1->np->full_name, ce_2->np->full_name)) + continue; + + fn_1 = kasprintf(GFP_KERNEL, "%pOF", ce_1->np); + fn_2 = kasprintf(GFP_KERNEL, "%pOF", ce_2->np); + node_path_match = !strcmp(fn_1, fn_2); + kfree(fn_1); + kfree(fn_2); + if (node_path_match) { + pr_err("ERROR: multiple fragments add and/or delete node %pOF\n", + ce_1->np); + return -EINVAL; + } + } + + return 0; +} + +static int find_dup_cset_prop(struct overlay_changeset *ovcs, + struct of_changeset_entry *ce_1) +{ + struct of_changeset_entry *ce_2; + char *fn_1, *fn_2; + int node_path_match; + + if (ce_1->action != OF_RECONFIG_ADD_PROPERTY && + ce_1->action != OF_RECONFIG_REMOVE_PROPERTY && + ce_1->action != OF_RECONFIG_UPDATE_PROPERTY) + return 0; + + ce_2 = ce_1; + list_for_each_entry_continue(ce_2, &ovcs->cset.entries, node) { + if ((ce_2->action != OF_RECONFIG_ADD_PROPERTY && + 
ce_2->action != OF_RECONFIG_REMOVE_PROPERTY && + ce_2->action != OF_RECONFIG_UPDATE_PROPERTY) || + of_node_cmp(ce_1->np->full_name, ce_2->np->full_name)) + continue; + + fn_1 = kasprintf(GFP_KERNEL, "%pOF", ce_1->np); + fn_2 = kasprintf(GFP_KERNEL, "%pOF", ce_2->np); + node_path_match = !strcmp(fn_1, fn_2); + kfree(fn_1); + kfree(fn_2); + if (node_path_match && + !of_prop_cmp(ce_1->prop->name, ce_2->prop->name)) { + pr_err("ERROR: multiple fragments add, update, and/or delete property %pOF/%s\n", + ce_1->np, ce_1->prop->name); + return -EINVAL; + } + } + + return 0; +} + +/** + * changeset_dup_entry_check() - check for duplicate entries + * @ovcs: Overlay changeset + * + * Check changeset @ovcs->cset for multiple {add or delete} node entries for + * the same node or duplicate {add, delete, or update} properties entries + * for the same property. + * + * Returns 0 on success, or -EINVAL if duplicate changeset entry found. + */ +static int changeset_dup_entry_check(struct overlay_changeset *ovcs) +{ + struct of_changeset_entry *ce_1; + int dup_entry = 0; + + list_for_each_entry(ce_1, &ovcs->cset.entries, node) { + dup_entry |= find_dup_cset_node_entry(ovcs, ce_1); + dup_entry |= find_dup_cset_prop(ovcs, ce_1); + } + + return dup_entry ? -EINVAL : 0; +} + /** * build_changeset() - populate overlay changeset in @ovcs from @ovcs->fragments * @ovcs: Overlay changeset @@ -472,6 +634,7 @@ static int build_changeset_symbols_node(struct overlay_changeset *ovcs, static int build_changeset(struct overlay_changeset *ovcs) { struct fragment *fragment; + struct target target; int fragments_count, i, ret; /* @@ -486,25 +649,32 @@ static int build_changeset(struct overlay_changeset *ovcs) for (i = 0; i < fragments_count; i++) { fragment = &ovcs->fragments[i]; - ret = build_changeset_next_level(ovcs, fragment->target, + target.np = fragment->target; + target.in_livetree = true; + ret = build_changeset_next_level(ovcs, &target, fragment->overlay); if (ret) { - pr_debug("apply failed '%pOF'\n", fragment->target); + pr_debug("fragment apply failed '%pOF'\n", + fragment->target); return ret; } } if (ovcs->symbols_fragment) { fragment = &ovcs->fragments[ovcs->count - 1]; - ret = build_changeset_symbols_node(ovcs, fragment->target, + + target.np = fragment->target; + target.in_livetree = true; + ret = build_changeset_symbols_node(ovcs, &target, fragment->overlay); if (ret) { - pr_debug("apply failed '%pOF'\n", fragment->target); + pr_debug("symbols fragment apply failed '%pOF'\n", + fragment->target); return ret; } } - return 0; + return changeset_dup_entry_check(ovcs); } /* @@ -514,7 +684,7 @@ static int build_changeset(struct overlay_changeset *ovcs) * 1) "target" property containing the phandle of the target * 2) "target-path" property containing the path of the target */ -static struct device_node *find_target_node(struct device_node *info_node) +static struct device_node *find_target(struct device_node *info_node) { struct device_node *node; const char *path; @@ -620,7 +790,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs, fragment = &fragments[cnt]; fragment->overlay = overlay_node; - fragment->target = find_target_node(node); + fragment->target = find_target(node); if (!fragment->target) { of_node_put(fragment->overlay); ret = -EINVAL; @@ -808,7 +978,7 @@ static int of_overlay_apply(const void *fdt, struct device_node *tree, ret = __of_changeset_apply_notify(&ovcs->cset); if (ret) - pr_err("overlay changeset entry notify error %d\n", ret); + pr_err("overlay apply changeset entry notify 
error %d\n", ret); /* notify failure is not fatal, continue */ list_add_tail(&ovcs->ovcs_list, &ovcs_list); @@ -1067,7 +1237,7 @@ int of_overlay_remove(int *ovcs_id) ret = __of_changeset_revert_notify(&ovcs->cset); if (ret) - pr_err("overlay changeset entry notify error %d\n", ret); + pr_err("overlay remove changeset entry notify error %d\n", ret); /* notify failure is not fatal, continue */ *ovcs_id = 0; diff --git a/drivers/of/pdt.c b/drivers/of/pdt.c index c1633041621d..d3185063d369 100644 --- a/drivers/of/pdt.c +++ b/drivers/of/pdt.c @@ -21,8 +21,6 @@ static struct of_pdt_ops *of_pdt_prom_ops __initdata; -void __initdata (*of_pdt_build_more)(struct device_node *dp); - #if defined(CONFIG_SPARC) unsigned int of_pdt_unique_id __initdata; @@ -189,9 +187,6 @@ static struct device_node * __init of_pdt_build_tree(struct device_node *parent, dp->child = of_pdt_build_tree(dp, of_pdt_prom_ops->getchild(node)); - if (of_pdt_build_more) - of_pdt_build_more(dp); - node = of_pdt_prom_ops->getsibling(node); } diff --git a/drivers/of/property.c b/drivers/of/property.c index f46828e3b082..08430031bd28 100644 --- a/drivers/of/property.c +++ b/drivers/of/property.c @@ -571,7 +571,7 @@ struct device_node *of_graph_get_port_by_id(struct device_node *parent, u32 id) for_each_child_of_node(parent, port) { u32 port_id = 0; - if (of_node_cmp(port->name, "port") != 0) + if (!of_node_name_eq(port, "port")) continue; of_property_read_u32(port, "reg", &port_id); if (id == port_id) @@ -646,7 +646,7 @@ struct device_node *of_graph_get_next_endpoint(const struct device_node *parent, port = of_get_next_child(parent, port); if (!port) return NULL; - } while (of_node_cmp(port->name, "port")); + } while (!of_node_name_eq(port, "port")); } } EXPORT_SYMBOL(of_graph_get_next_endpoint); @@ -715,7 +715,7 @@ struct device_node *of_graph_get_port_parent(struct device_node *node) /* Walk 3 levels up only if there is 'ports' node. */ for (depth = 3; depth && node; depth--) { node = of_get_next_parent(node); - if (depth == 2 && of_node_cmp(node->name, "ports")) + if (depth == 2 && !of_node_name_eq(node, "ports")) break; } return node; @@ -893,7 +893,7 @@ of_fwnode_get_named_child_node(const struct fwnode_handle *fwnode, struct device_node *child; for_each_available_child_of_node(node, child) - if (!of_node_cmp(child->name, childname)) + if (of_node_name_eq(child, childname)) return of_fwnode_handle(child); return NULL; @@ -955,7 +955,7 @@ of_fwnode_graph_get_port_parent(struct fwnode_handle *fwnode) return NULL; /* Is this the "ports" node? If not, it's the port parent. 
*/ - if (of_node_cmp(np->name, "ports")) + if (!of_node_name_eq(np, "ports")) return of_fwnode_handle(np); return of_fwnode_handle(of_get_next_parent(np)); diff --git a/drivers/of/resolver.c b/drivers/of/resolver.c index 7edfac6f1914..c1b67dd7cd6e 100644 --- a/drivers/of/resolver.c +++ b/drivers/of/resolver.c @@ -281,7 +281,7 @@ int of_resolve_phandles(struct device_node *overlay) adjust_overlay_phandles(overlay, phandle_delta); for_each_child_of_node(overlay, local_fixups) - if (!of_node_cmp(local_fixups->name, "__local_fixups__")) + if (of_node_name_eq(local_fixups, "__local_fixups__")) break; err = adjust_local_phandle_references(local_fixups, overlay, phandle_delta); @@ -291,7 +291,7 @@ int of_resolve_phandles(struct device_node *overlay) overlay_fixups = NULL; for_each_child_of_node(overlay, child) { - if (!of_node_cmp(child->name, "__fixups__")) + if (of_node_name_eq(child, "__fixups__")) overlay_fixups = child; } diff --git a/drivers/of/unittest-data/Makefile b/drivers/of/unittest-data/Makefile index 013d85e694c6..9b6807065827 100644 --- a/drivers/of/unittest-data/Makefile +++ b/drivers/of/unittest-data/Makefile @@ -17,6 +17,8 @@ obj-$(CONFIG_OF_OVERLAY) += overlay.dtb.o \ overlay_12.dtb.o \ overlay_13.dtb.o \ overlay_15.dtb.o \ + overlay_bad_add_dup_node.dtb.o \ + overlay_bad_add_dup_prop.dtb.o \ overlay_bad_phandle.dtb.o \ overlay_bad_symbol.dtb.o \ overlay_base.dtb.o diff --git a/drivers/of/unittest-data/overlay_bad_add_dup_node.dts b/drivers/of/unittest-data/overlay_bad_add_dup_node.dts new file mode 100644 index 000000000000..145dfc3b1024 --- /dev/null +++ b/drivers/of/unittest-data/overlay_bad_add_dup_node.dts @@ -0,0 +1,28 @@ +// SPDX-License-Identifier: GPL-2.0 +/dts-v1/; +/plugin/; + +/* + * &electric_1/motor-1 and &spin_ctrl_1 are the same node: + * /testcase-data-2/substation@100/motor-1 + * + * Thus the new node "controller" in each fragment will + * result in an attempt to add the same node twice. + * This will result in an error and the overlay apply + * will fail. + */ + +&electric_1 { + + motor-1 { + controller { + power_bus = < 0x1 0x2 >; + }; + }; +}; + +&spin_ctrl_1 { + controller { + power_bus_emergency = < 0x101 0x102 >; + }; +}; diff --git a/drivers/of/unittest-data/overlay_bad_add_dup_prop.dts b/drivers/of/unittest-data/overlay_bad_add_dup_prop.dts new file mode 100644 index 000000000000..c190da54f175 --- /dev/null +++ b/drivers/of/unittest-data/overlay_bad_add_dup_prop.dts @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +/dts-v1/; +/plugin/; + +/* + * &electric_1/motor-1 and &spin_ctrl_1 are the same node: + * /testcase-data-2/substation@100/motor-1 + * + * Thus the property "rpm_avail" in each fragment will + * result in an attempt to update the same property twice. + * This will result in an error and the overlay apply + * will fail. 
+ */ + +&electric_1 { + + motor-1 { + rpm_avail = < 100 >; + }; +}; + +&spin_ctrl_1 { + rpm_avail = < 100 200 >; +}; diff --git a/drivers/of/unittest-data/overlay_base.dts b/drivers/of/unittest-data/overlay_base.dts index 820b79ca378a..99ab9d12d00b 100644 --- a/drivers/of/unittest-data/overlay_base.dts +++ b/drivers/of/unittest-data/overlay_base.dts @@ -30,6 +30,7 @@ spin_ctrl_1: motor-1 { compatible = "ot,ferris-wheel-motor"; spin = "clockwise"; + rpm_avail = < 50 >; }; spin_ctrl_2: motor-8 { diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c index 49ae2aa744d6..84427384654d 100644 --- a/drivers/of/unittest.c +++ b/drivers/of/unittest.c @@ -379,6 +379,7 @@ static void __init of_unittest_parse_phandle_with_args(void) for (i = 0; i < 8; i++) { bool passed = true; + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args(np, "phandle-list", "#phandle-cells", i, &args); @@ -432,6 +433,7 @@ static void __init of_unittest_parse_phandle_with_args(void) } /* Check for missing list property */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args(np, "phandle-list-missing", "#phandle-cells", 0, &args); unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc); @@ -440,6 +442,7 @@ static void __init of_unittest_parse_phandle_with_args(void) unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc); /* Check for missing cells property */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args(np, "phandle-list", "#phandle-cells-missing", 0, &args); unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); @@ -448,6 +451,7 @@ static void __init of_unittest_parse_phandle_with_args(void) unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); /* Check for bad phandle in list */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle", "#phandle-cells", 0, &args); unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); @@ -456,6 +460,7 @@ static void __init of_unittest_parse_phandle_with_args(void) unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); /* Check for incorrectly formed argument list */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args(np, "phandle-list-bad-args", "#phandle-cells", 1, &args); unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); @@ -506,6 +511,7 @@ static void __init of_unittest_parse_phandle_with_args_map(void) for (i = 0; i < 8; i++) { bool passed = true; + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args_map(np, "phandle-list", "phandle", i, &args); @@ -563,21 +569,25 @@ static void __init of_unittest_parse_phandle_with_args_map(void) } /* Check for missing list property */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args_map(np, "phandle-list-missing", "phandle", 0, &args); unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc); /* Check for missing cells,map,mask property */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args_map(np, "phandle-list", "phandle-missing", 0, &args); unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); /* Check for bad phandle in list */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle", "phandle", 0, &args); unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); /* Check for incorrectly formed argument list */ + memset(&args, 0, sizeof(args)); rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args", "phandle", 1, &args); unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); @@ 
-787,7 +797,7 @@ static void __init of_unittest_parse_interrupts(void) for (i = 0; i < 4; i++) { bool passed = true; - args.args_count = 0; + memset(&args, 0, sizeof(args)); rc = of_irq_parse_one(np, i, &args); passed &= !rc; @@ -808,7 +818,7 @@ static void __init of_unittest_parse_interrupts(void) for (i = 0; i < 4; i++) { bool passed = true; - args.args_count = 0; + memset(&args, 0, sizeof(args)); rc = of_irq_parse_one(np, i, &args); /* Test the values from tests-phandle.dtsi */ @@ -864,6 +874,7 @@ static void __init of_unittest_parse_interrupts_extended(void) for (i = 0; i < 7; i++) { bool passed = true; + memset(&args, 0, sizeof(args)); rc = of_irq_parse_one(np, i, &args); /* Test the values from tests-phandle.dtsi */ @@ -1071,20 +1082,44 @@ static void __init of_unittest_platform_populate(void) * of np into dup node (present in live tree) and * updates parent of children of np to dup. * - * @np: node already present in live tree + * @np: node whose properties are being added to the live tree * @dup: node present in live tree to be updated */ static void update_node_properties(struct device_node *np, struct device_node *dup) { struct property *prop; + struct property *save_next; struct device_node *child; - - for_each_property_of_node(np, prop) - of_add_property(dup, prop); + int ret; for_each_child_of_node(np, child) child->parent = dup; + + /* + * "unittest internal error: unable to add testdata property" + * + * If this message reports a property in node '/__symbols__' then + * the respective unittest overlay contains a label that has the + * same name as a label in the live devicetree. The label will + * be in the live devicetree only if the devicetree source was + * compiled with the '-@' option. If you encounter this error, + * please consider renaming __all__ of the labels in the unittest + * overlay dts files with an odd prefix that is unlikely to be + * used in a real devicetree. 
+ */ + + /* + * open code for_each_property_of_node() because of_add_property() + * sets prop->next to NULL + */ + for (prop = np->properties; prop != NULL; prop = save_next) { + save_next = prop->next; + ret = of_add_property(dup, prop); + if (ret) + pr_err("unittest internal error: unable to add testdata property %pOF/%s", + np, prop->name); + } } /** @@ -1093,18 +1128,23 @@ static void update_node_properties(struct device_node *np, * * @np: Node to attach to live tree */ -static int attach_node_and_children(struct device_node *np) +static void attach_node_and_children(struct device_node *np) { struct device_node *next, *dup, *child; unsigned long flags; const char *full_name; full_name = kasprintf(GFP_KERNEL, "%pOF", np); + + if (!strcmp(full_name, "/__local_fixups__") || + !strcmp(full_name, "/__fixups__")) + return; + dup = of_find_node_by_path(full_name); kfree(full_name); if (dup) { update_node_properties(np, dup); - return 0; + return; } child = np->child; @@ -1125,8 +1165,6 @@ static int attach_node_and_children(struct device_node *np) attach_node_and_children(child); child = next; } - - return 0; } /** @@ -1433,8 +1471,7 @@ static void of_unittest_destroy_tracked_overlays(void) } while (defers > 0); } -static int __init of_unittest_apply_overlay(int overlay_nr, int unittest_nr, - int *overlay_id) +static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id) { const char *overlay_name; @@ -1467,7 +1504,7 @@ static int __init of_unittest_apply_overlay_check(int overlay_nr, } ovcs_id = 0; - ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, &ovcs_id); + ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id); if (ret != 0) { /* of_unittest_apply_overlay already called unittest() */ return ret; @@ -1503,7 +1540,7 @@ static int __init of_unittest_apply_revert_overlay_check(int overlay_nr, /* apply the overlay */ ovcs_id = 0; - ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, &ovcs_id); + ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id); if (ret != 0) { /* of_unittest_apply_overlay already called unittest() */ return ret; @@ -2161,10 +2198,12 @@ OVERLAY_INFO_EXTERN(overlay_11); OVERLAY_INFO_EXTERN(overlay_12); OVERLAY_INFO_EXTERN(overlay_13); OVERLAY_INFO_EXTERN(overlay_15); +OVERLAY_INFO_EXTERN(overlay_bad_add_dup_node); +OVERLAY_INFO_EXTERN(overlay_bad_add_dup_prop); OVERLAY_INFO_EXTERN(overlay_bad_phandle); OVERLAY_INFO_EXTERN(overlay_bad_symbol); -/* order of entries is hard-coded into users of overlays[] */ +/* entries found by name */ static struct overlay_info overlays[] = { OVERLAY_INFO(overlay_base, -9999), OVERLAY_INFO(overlay, 0), @@ -2183,9 +2222,12 @@ static struct overlay_info overlays[] = { OVERLAY_INFO(overlay_12, 0), OVERLAY_INFO(overlay_13, 0), OVERLAY_INFO(overlay_15, 0), + OVERLAY_INFO(overlay_bad_add_dup_node, -EINVAL), + OVERLAY_INFO(overlay_bad_add_dup_prop, -EINVAL), OVERLAY_INFO(overlay_bad_phandle, -EINVAL), OVERLAY_INFO(overlay_bad_symbol, -EINVAL), - {} + /* end marker */ + {.dtb_begin = NULL, .dtb_end = NULL, .expected_result = 0, .name = NULL} }; static struct device_node *overlay_base_root; @@ -2215,6 +2257,19 @@ void __init unittest_unflatten_overlay_base(void) u32 data_size; void *new_fdt; u32 size; + int found = 0; + const char *overlay_name = "overlay_base"; + + for (info = overlays; info && info->name; info++) { + if (!strcmp(overlay_name, info->name)) { + found = 1; + break; + } + } + if (!found) { + pr_err("no overlay data for %s\n", overlay_name); + return; + } info = &overlays[0]; @@ -2262,11 +2317,10 @@ static 
int __init overlay_data_apply(const char *overlay_name, int *overlay_id) { struct overlay_info *info; int found = 0; - int k; int ret; u32 size; - for (k = 0, info = overlays; info && info->name; info++, k++) { + for (info = overlays; info && info->name; info++) { if (!strcmp(overlay_name, info->name)) { found = 1; break; @@ -2339,7 +2393,7 @@ static __init void of_unittest_overlay_high_level(void) */ pprev = &overlay_base_root->child; for (np = overlay_base_root->child; np; np = np->sibling) { - if (!of_node_cmp(np->name, "__local_fixups__")) { + if (of_node_name_eq(np, "__local_fixups__")) { *pprev = np->sibling; break; } @@ -2352,7 +2406,7 @@ static __init void of_unittest_overlay_high_level(void) /* will have to graft properties from node into live tree */ pprev = &overlay_base_root->child; for (np = overlay_base_root->child; np; np = np->sibling) { - if (!of_node_cmp(np->name, "__symbols__")) { + if (of_node_name_eq(np, "__symbols__")) { overlay_base_symbols = np; *pprev = np->sibling; break; @@ -2430,6 +2484,12 @@ static __init void of_unittest_overlay_high_level(void) unittest(overlay_data_apply("overlay", NULL), "Adding overlay 'overlay' failed\n"); + unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL), + "Adding overlay 'overlay_bad_add_dup_node' failed\n"); + + unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL), + "Adding overlay 'overlay_bad_add_dup_prop' failed\n"); + unittest(overlay_data_apply("overlay_bad_phandle", NULL), "Adding overlay 'overlay_bad_phandle' failed\n"); diff --git a/drivers/parisc/Kconfig b/drivers/parisc/Kconfig index 5a48b5606110..74e119adca01 100644 --- a/drivers/parisc/Kconfig +++ b/drivers/parisc/Kconfig @@ -2,6 +2,7 @@ menu "Bus options (PCI, PCMCIA, EISA, GSC, ISA)" config GSC bool "VSC/GSC/HSC bus support" + select HAVE_EISA default y help The VSC, GSC and HSC busses were used from the earliest 700-series @@ -46,16 +47,6 @@ config GSC_WAX used), a HIL interface chip and is also known to be used as the GSC bridge for an X.25 GSC card. -config EISA - bool "EISA support" - depends on GSC - help - Say Y here if you have an EISA bus in your machine. This code - supports both the Mongoose & Wax EISA adapters. It is sadly - incomplete and lacks support for card-to-host DMA. - -source "drivers/eisa/Kconfig" - config ISA bool "ISA support" depends on EISA @@ -63,17 +54,6 @@ config ISA If you want to plug an ISA card into your EISA bus, say Y here. Most people should say N. -config PCI - bool "PCI support" - help - All recent HP machines have PCI slots, and you should say Y here - if you have a recent machine. If you are convinced you do not have - PCI slots in your machine (eg a 712), then you may say "N" here. - Beware that some GSC cards have a Dino onboard and PCI inside them, - so it may be safest to say "Y" anyway. 
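The bus-specific EISA and PCI prompts removed here are replaced by central capability symbols, so a bus or architecture now opts in with a select (as config GSC does above with HAVE_EISA) instead of redefining the config entry. A hypothetical fragment, sketched only to show the pattern; SOME_ARCH and SOME_PLATFORM are made-up names, while HAVE_PCI and FORCE_PCI come from the drivers/pci/Kconfig hunk below:

	# Sketch, not from this patch: an arch offers the PCI prompt by
	# selecting HAVE_PCI; a platform that cannot work without PCI
	# selects FORCE_PCI, which selects both HAVE_PCI and PCI.
	config SOME_ARCH
		bool
		select HAVE_PCI

	config SOME_PLATFORM
		bool
		select FORCE_PCI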
- -source "drivers/pci/Kconfig" - config GSC_DINO bool "GSCtoPCI/Dino PCI support" depends on PCI && GSC @@ -103,8 +83,6 @@ config IOMMU_SBA depends on PCI_LBA default PCI_LBA -source "drivers/pcmcia/Kconfig" - endmenu menu "PA-RISC specific drivers" diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c index 701a7d6a74d5..8d2fc84119c6 100644 --- a/drivers/parisc/ccio-dma.c +++ b/drivers/parisc/ccio-dma.c @@ -110,8 +110,6 @@ #define CMD_TLB_DIRECT_WRITE 35 /* IO_COMMAND for I/O TLB Writes */ #define CMD_TLB_PURGE 33 /* IO_COMMAND to Purge I/O TLB entry */ -#define CCIO_MAPPING_ERROR (~(dma_addr_t)0) - struct ioa_registers { /* Runway Supervisory Set */ int32_t unused1[12]; @@ -740,7 +738,7 @@ ccio_map_single(struct device *dev, void *addr, size_t size, BUG_ON(!dev); ioc = GET_IOC(dev); if (!ioc) - return CCIO_MAPPING_ERROR; + return DMA_MAPPING_ERROR; BUG_ON(size <= 0); @@ -1021,11 +1019,6 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, DBG_RUN_SG("%s() DONE (nents %d)\n", __func__, nents); } -static int ccio_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == CCIO_MAPPING_ERROR; -} - static const struct dma_map_ops ccio_ops = { .dma_supported = ccio_dma_supported, .alloc = ccio_alloc, @@ -1034,7 +1027,6 @@ static const struct dma_map_ops ccio_ops = { .unmap_page = ccio_unmap_page, .map_sg = ccio_map_sg, .unmap_sg = ccio_unmap_sg, - .mapping_error = ccio_mapping_error, }; #ifdef CONFIG_PROC_FS @@ -1251,7 +1243,7 @@ ccio_ioc_init(struct ioc *ioc) ** Hot-Plug/Removal of PCI cards. (aka PCI OLARD). */ - iova_space_size = (u32) (totalram_pages / count_parisc_driver(&ccio_driver)); + iova_space_size = (u32) (totalram_pages() / count_parisc_driver(&ccio_driver)); /* limit IOVA space size to 1MB-1GB */ @@ -1290,7 +1282,7 @@ ccio_ioc_init(struct ioc *ioc) DBG_INIT("%s() hpa 0x%p mem %luMB IOV %dMB (%d bits)\n", __func__, ioc->ioc_regs, - (unsigned long) totalram_pages >> (20 - PAGE_SHIFT), + (unsigned long) totalram_pages() >> (20 - PAGE_SHIFT), iova_space_size>>20, iov_order + PAGE_SHIFT); diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c index c1e599a429af..42172eb32235 100644 --- a/drivers/parisc/sba_iommu.c +++ b/drivers/parisc/sba_iommu.c @@ -93,8 +93,6 @@ #define DEFAULT_DMA_HINT_REG 0 -#define SBA_MAPPING_ERROR (~(dma_addr_t)0) - struct sba_device *sba_list; EXPORT_SYMBOL_GPL(sba_list); @@ -725,7 +723,7 @@ sba_map_single(struct device *dev, void *addr, size_t size, ioc = GET_IOC(dev); if (!ioc) - return SBA_MAPPING_ERROR; + return DMA_MAPPING_ERROR; /* save offset bits */ offset = ((dma_addr_t) (long) addr) & ~IOVP_MASK; @@ -1080,11 +1078,6 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, } -static int sba_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == SBA_MAPPING_ERROR; -} - static const struct dma_map_ops sba_ops = { .dma_supported = sba_dma_supported, .alloc = sba_alloc, @@ -1093,7 +1086,6 @@ static const struct dma_map_ops sba_ops = { .unmap_page = sba_unmap_page, .map_sg = sba_map_sg, .unmap_sg = sba_unmap_sg, - .mapping_error = sba_mapping_error, }; @@ -1414,7 +1406,7 @@ sba_ioc_init(struct parisc_device *sba, struct ioc *ioc, int ioc_num) ** for DMA hints - ergo only 30 bits max. 
*/ - iova_space_size = (u32) (totalram_pages/global_ioc_cnt); + iova_space_size = (u32) (totalram_pages()/global_ioc_cnt); /* limit IOVA space size to 1MB-1GB */ if (iova_space_size < (1 << (20 - PAGE_SHIFT))) { @@ -1439,7 +1431,7 @@ sba_ioc_init(struct parisc_device *sba, struct ioc *ioc, int ioc_num) DBG_INIT("%s() hpa 0x%lx mem %ldMB IOV %dMB (%d bits)\n", __func__, ioc->ioc_hpa, - (unsigned long) totalram_pages >> (20 - PAGE_SHIFT), + (unsigned long) totalram_pages() >> (20 - PAGE_SHIFT), iova_space_size>>20, iov_order + PAGE_SHIFT); diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c index 380916bff9e0..9c8249f74479 100644 --- a/drivers/parport/parport_pc.c +++ b/drivers/parport/parport_pc.c @@ -1667,7 +1667,7 @@ static int parport_ECP_supported(struct parport *pb) default: printk(KERN_WARNING "0x%lx: Unknown implementation ID\n", pb->base); - /* Assume 1 */ + /* Fall through - Assume 1 */ case 1: pword = 1; } diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index 2dcc30429e8b..31ec770b433d 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -3,6 +3,36 @@ # PCI configuration # +# select this to offer the PCI prompt +config HAVE_PCI + bool + +# select this to unconditionally force on PCI support +config FORCE_PCI + bool + select HAVE_PCI + select PCI + +menuconfig PCI + bool "PCI support" + depends on HAVE_PCI + help + This option enables support for the PCI local bus, including + support for PCI-X and the foundations for PCI Express support. + Say 'Y' here unless you know what you are doing. + +config PCI_DOMAINS + bool + depends on PCI + +config PCI_DOMAINS_GENERIC + bool + depends on PCI + select PCI_DOMAINS + +config PCI_SYSCALL + bool + source "drivers/pci/pcie/Kconfig" config PCI_MSI diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index e50b0b5815ff..3890812cdf87 100644 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -307,39 +307,32 @@ static struct device *to_vmd_dev(struct device *dev) return &vmd->dev->dev; } -static const struct dma_map_ops *vmd_dma_ops(struct device *dev) -{ - return get_dma_ops(to_vmd_dev(dev)); -} - static void *vmd_alloc(struct device *dev, size_t size, dma_addr_t *addr, gfp_t flag, unsigned long attrs) { - return vmd_dma_ops(dev)->alloc(to_vmd_dev(dev), size, addr, flag, - attrs); + return dma_alloc_attrs(to_vmd_dev(dev), size, addr, flag, attrs); } static void vmd_free(struct device *dev, size_t size, void *vaddr, dma_addr_t addr, unsigned long attrs) { - return vmd_dma_ops(dev)->free(to_vmd_dev(dev), size, vaddr, addr, - attrs); + return dma_free_attrs(to_vmd_dev(dev), size, vaddr, addr, attrs); } static int vmd_mmap(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, dma_addr_t addr, size_t size, unsigned long attrs) { - return vmd_dma_ops(dev)->mmap(to_vmd_dev(dev), vma, cpu_addr, addr, - size, attrs); + return dma_mmap_attrs(to_vmd_dev(dev), vma, cpu_addr, addr, size, + attrs); } static int vmd_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t addr, size_t size, unsigned long attrs) { - return vmd_dma_ops(dev)->get_sgtable(to_vmd_dev(dev), sgt, cpu_addr, - addr, size, attrs); + return dma_get_sgtable_attrs(to_vmd_dev(dev), sgt, cpu_addr, addr, size, + attrs); } static dma_addr_t vmd_map_page(struct device *dev, struct page *page, @@ -347,66 +340,60 @@ static dma_addr_t vmd_map_page(struct device *dev, struct page *page, enum dma_data_direction dir, unsigned long attrs) { - return 
vmd_dma_ops(dev)->map_page(to_vmd_dev(dev), page, offset, size, - dir, attrs); + return dma_map_page_attrs(to_vmd_dev(dev), page, offset, size, dir, + attrs); } static void vmd_unmap_page(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - vmd_dma_ops(dev)->unmap_page(to_vmd_dev(dev), addr, size, dir, attrs); + dma_unmap_page_attrs(to_vmd_dev(dev), addr, size, dir, attrs); } static int vmd_map_sg(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs) { - return vmd_dma_ops(dev)->map_sg(to_vmd_dev(dev), sg, nents, dir, attrs); + return dma_map_sg_attrs(to_vmd_dev(dev), sg, nents, dir, attrs); } static void vmd_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs) { - vmd_dma_ops(dev)->unmap_sg(to_vmd_dev(dev), sg, nents, dir, attrs); + dma_unmap_sg_attrs(to_vmd_dev(dev), sg, nents, dir, attrs); } static void vmd_sync_single_for_cpu(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir) { - vmd_dma_ops(dev)->sync_single_for_cpu(to_vmd_dev(dev), addr, size, dir); + dma_sync_single_for_cpu(to_vmd_dev(dev), addr, size, dir); } static void vmd_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir) { - vmd_dma_ops(dev)->sync_single_for_device(to_vmd_dev(dev), addr, size, - dir); + dma_sync_single_for_device(to_vmd_dev(dev), addr, size, dir); } static void vmd_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir) { - vmd_dma_ops(dev)->sync_sg_for_cpu(to_vmd_dev(dev), sg, nents, dir); + dma_sync_sg_for_cpu(to_vmd_dev(dev), sg, nents, dir); } static void vmd_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir) { - vmd_dma_ops(dev)->sync_sg_for_device(to_vmd_dev(dev), sg, nents, dir); -} - -static int vmd_mapping_error(struct device *dev, dma_addr_t addr) -{ - return vmd_dma_ops(dev)->mapping_error(to_vmd_dev(dev), addr); + dma_sync_sg_for_device(to_vmd_dev(dev), sg, nents, dir); } static int vmd_dma_supported(struct device *dev, u64 mask) { - return vmd_dma_ops(dev)->dma_supported(to_vmd_dev(dev), mask); + return dma_supported(to_vmd_dev(dev), mask); } static u64 vmd_get_required_mask(struct device *dev) { - return vmd_dma_ops(dev)->get_required_mask(to_vmd_dev(dev)); + return dma_get_required_mask(to_vmd_dev(dev)); } static void vmd_teardown_dma_ops(struct vmd_dev *vmd) @@ -446,7 +433,6 @@ static void vmd_setup_dma_ops(struct vmd_dev *vmd) ASSIGN_VMD_DMA_OPS(source, dest, sync_single_for_device); ASSIGN_VMD_DMA_OPS(source, dest, sync_sg_for_cpu); ASSIGN_VMD_DMA_OPS(source, dest, sync_sg_for_device); - ASSIGN_VMD_DMA_OPS(source, dest, mapping_error); ASSIGN_VMD_DMA_OPS(source, dest, dma_supported); ASSIGN_VMD_DMA_OPS(source, dest, get_required_mask); add_dma_domain(domain); diff --git a/drivers/pci/endpoint/Kconfig b/drivers/pci/endpoint/Kconfig index d1e7e4199432..17bbdc9bbde0 100644 --- a/drivers/pci/endpoint/Kconfig +++ b/drivers/pci/endpoint/Kconfig @@ -7,7 +7,7 @@ menu "PCI Endpoint" config PCI_ENDPOINT bool "PCI Endpoint Support" - depends on HAS_DMA + depends on HAVE_PCI help Enable this configuration option to support configurable PCI endpoint. 
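With the vmd_dma_ops() indirection removed above, DMA for devices behind VMD funnels straight into the generic DMA API against the VMD parent device, and failures are detected via the core DMA_MAPPING_ERROR sentinel rather than a per-driver .mapping_error hook (the same cleanup applied to ccio and sba earlier). A minimal sketch, assuming a caller that already holds a struct device *dev on a VMD domain plus a page, offset, and size:

	/* Sketch only: map one page for device-bound DMA via the VMD parent */
	dma_addr_t addr;

	addr = dma_map_page_attrs(to_vmd_dev(dev), page, offset, size,
				  DMA_TO_DEVICE, 0);
	if (dma_mapping_error(to_vmd_dev(dev), addr))
		return -ENOMEM;		/* addr was DMA_MAPPING_ERROR */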
This should be enabled if the platform has a PCI diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index ae3c5b25dcc7..a2eb25271c96 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -82,10 +82,8 @@ static void pci_p2pdma_percpu_release(struct percpu_ref *ref) complete_all(&p2p->devmap_ref_done); } -static void pci_p2pdma_percpu_kill(void *data) +static void pci_p2pdma_percpu_kill(struct percpu_ref *ref) { - struct percpu_ref *ref = data; - /* * pci_p2pdma_add_resource() may be called multiple times * by a driver and may register the percpu_kill devm action multiple @@ -198,6 +196,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) - pci_resource_start(pdev, bar); + pgmap->kill = pci_p2pdma_percpu_kill; addr = devm_memremap_pages(&pdev->dev, pgmap); if (IS_ERR(addr)) { @@ -211,11 +210,6 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, if (error) goto pgmap_free; - error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_percpu_kill, - &pdev->p2pdma->devmap_ref); - if (error) - goto pgmap_free; - pci_info(pdev, "added peer-to-peer DMA memory %pR\n", &pgmap->res); diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c index 921db6f80340..e1949f7efd9c 100644 --- a/drivers/pci/pci-acpi.c +++ b/drivers/pci/pci-acpi.c @@ -789,6 +789,24 @@ static void pci_acpi_optimize_delay(struct pci_dev *pdev, ACPI_FREE(obj); } +static void pci_acpi_set_untrusted(struct pci_dev *dev) +{ + u8 val; + + if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) + return; + if (device_property_read_u8(&dev->dev, "ExternalFacingPort", &val)) + return; + + /* + * These root ports expose PCIe (including DMA) outside of the + * system, so make sure we treat them and everything behind them + * as untrusted. + */ + if (val) + dev->untrusted = 1; +} + static void pci_acpi_setup(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); @@ -798,6 +816,7 @@ static void pci_acpi_setup(struct device *dev) return; pci_acpi_optimize_delay(pci_dev, adev->handle); + pci_acpi_set_untrusted(pci_dev); pci_acpi_add_pm_notifier(adev, pci_dev); if (!adev->wakeup.flags.valid) diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index bef17c3fca67..ea55444e6ead 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c @@ -1600,10 +1600,8 @@ static int pci_dma_configure(struct device *dev) ret = of_dma_configure(dev, bridge->parent->of_node, true); } else if (has_acpi_companion(bridge)) { struct acpi_device *adev = to_acpi_device_node(bridge->fwnode); - enum dev_dma_attr attr = acpi_get_dma_attr(adev); - if (attr != DEV_DMA_NOT_SUPPORTED) - ret = acpi_dma_configure(dev, attr); + ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev)); } pci_put_host_bridge_device(bridge); diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index b1c05b5054a0..257b9f6f2ebb 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -1378,6 +1378,19 @@ static void set_pcie_thunderbolt(struct pci_dev *dev) } } +static void set_pcie_untrusted(struct pci_dev *dev) +{ + struct pci_dev *parent; + + /* + * If the upstream bridge is untrusted, we treat this device + * as untrusted as well. + */ + parent = pci_upstream_bridge(dev); + if (parent && parent->untrusted) + dev->untrusted = true; +} + /** * pci_ext_cfg_is_aliased - Is ext config space just an alias of std config? 
* @dev: PCI device @@ -1638,6 +1651,8 @@ int pci_setup_device(struct pci_dev *dev) /* Need to have dev->cfg_size ready */ set_pcie_thunderbolt(dev); + set_pcie_untrusted(dev); + /* "Unknown power state" */ dev->current_state = PCI_UNKNOWN; diff --git a/drivers/pcmcia/Kconfig b/drivers/pcmcia/Kconfig index cbbe4a285b48..c9bdbb463a7e 100644 --- a/drivers/pcmcia/Kconfig +++ b/drivers/pcmcia/Kconfig @@ -4,6 +4,7 @@ menuconfig PCCARD tristate "PCCard (PCMCIA/CardBus) support" + depends on !UML ---help--- Say Y here if you want to attach PCMCIA- or PC-cards to your Linux computer. These are credit-card size devices such as network cards, diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig index 60f949e2a684..250abe290ca1 100644 --- a/drivers/phy/Kconfig +++ b/drivers/phy/Kconfig @@ -15,6 +15,14 @@ config GENERIC_PHY phy users can obtain reference to the PHY. All the users of this framework should select this config. +config GENERIC_PHY_MIPI_DPHY + bool + help + Generic MIPI D-PHY support. + + Provides a number of helpers and core functions for MIPI D-PHY + drivers to use. + config PHY_LPC18XX_USB_OTG tristate "NXP LPC18xx/43xx SoC USB OTG PHY driver" depends on OF && (ARCH_LPC18XX || COMPILE_TEST) @@ -44,6 +52,7 @@ source "drivers/phy/allwinner/Kconfig" source "drivers/phy/amlogic/Kconfig" source "drivers/phy/broadcom/Kconfig" source "drivers/phy/cadence/Kconfig" +source "drivers/phy/freescale/Kconfig" source "drivers/phy/hisilicon/Kconfig" source "drivers/phy/lantiq/Kconfig" source "drivers/phy/marvell/Kconfig" diff --git a/drivers/phy/Makefile b/drivers/phy/Makefile index 0301e25d07c1..0d9fddc498a6 100644 --- a/drivers/phy/Makefile +++ b/drivers/phy/Makefile @@ -4,6 +4,7 @@ # obj-$(CONFIG_GENERIC_PHY) += phy-core.o +obj-$(CONFIG_GENERIC_PHY_MIPI_DPHY) += phy-core-mipi-dphy.o obj-$(CONFIG_PHY_LPC18XX_USB_OTG) += phy-lpc18xx-usb-otg.o obj-$(CONFIG_PHY_XGENE) += phy-xgene.o obj-$(CONFIG_PHY_PISTACHIO_USB) += phy-pistachio-usb.o @@ -16,6 +17,7 @@ obj-$(CONFIG_ARCH_ROCKCHIP) += rockchip/ obj-$(CONFIG_ARCH_TEGRA) += tegra/ obj-y += broadcom/ \ cadence/ \ + freescale/ \ hisilicon/ \ marvell/ \ motorola/ \ diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c index d4dcd39b8d76..5163097b43df 100644 --- a/drivers/phy/allwinner/phy-sun4i-usb.c +++ b/drivers/phy/allwinner/phy-sun4i-usb.c @@ -115,6 +115,7 @@ enum sun4i_usb_phy_type { sun8i_r40_phy, sun8i_v3s_phy, sun50i_a64_phy, + sun50i_h6_phy, }; struct sun4i_usb_phy_cfg { @@ -126,6 +127,7 @@ struct sun4i_usb_phy_cfg { bool dedicated_clocks; bool enable_pmu_unk1; bool phy0_dual_route; + int missing_phys; }; struct sun4i_usb_phy_data { @@ -294,7 +296,8 @@ static int sun4i_usb_phy_init(struct phy *_phy) return ret; } - if (data->cfg->type == sun8i_a83t_phy) { + if (data->cfg->type == sun8i_a83t_phy || + data->cfg->type == sun50i_h6_phy) { if (phy->index == 0) { val = readl(data->base + data->cfg->phyctl_offset); val |= PHY_CTL_VBUSVLDEXT; @@ -343,7 +346,8 @@ static int sun4i_usb_phy_exit(struct phy *_phy) struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy); if (phy->index == 0) { - if (data->cfg->type == sun8i_a83t_phy) { + if (data->cfg->type == sun8i_a83t_phy || + data->cfg->type == sun50i_h6_phy) { void __iomem *phyctl = data->base + data->cfg->phyctl_offset; @@ -474,7 +478,8 @@ static int sun4i_usb_phy_power_off(struct phy *_phy) return 0; } -static int sun4i_usb_phy_set_mode(struct phy *_phy, enum phy_mode mode) +static int sun4i_usb_phy_set_mode(struct phy *_phy, + enum phy_mode mode, int submode) { 
struct sun4i_usb_phy *phy = phy_get_drvdata(_phy); struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy); @@ -646,6 +651,9 @@ static struct phy *sun4i_usb_phy_xlate(struct device *dev, if (args->args[0] >= data->cfg->num_phys) return ERR_PTR(-ENODEV); + if (data->cfg->missing_phys & BIT(args->args[0])) + return ERR_PTR(-ENODEV); + return data->phys[args->args[0]].phy; } @@ -741,6 +749,9 @@ static int sun4i_usb_phy_probe(struct platform_device *pdev) struct sun4i_usb_phy *phy = data->phys + i; char name[16]; + if (data->cfg->missing_phys & BIT(i)) + continue; + snprintf(name, sizeof(name), "usb%d_vbus", i); phy->vbus = devm_regulator_get_optional(dev, name); if (IS_ERR(phy->vbus)) { @@ -952,6 +963,17 @@ static const struct sun4i_usb_phy_cfg sun50i_a64_cfg = { .phy0_dual_route = true, }; +static const struct sun4i_usb_phy_cfg sun50i_h6_cfg = { + .num_phys = 4, + .type = sun50i_h6_phy, + .disc_thresh = 3, + .phyctl_offset = REG_PHYCTL_A33, + .dedicated_clocks = true, + .enable_pmu_unk1 = true, + .phy0_dual_route = true, + .missing_phys = BIT(1) | BIT(2), +}; + static const struct of_device_id sun4i_usb_phy_of_match[] = { { .compatible = "allwinner,sun4i-a10-usb-phy", .data = &sun4i_a10_cfg }, { .compatible = "allwinner,sun5i-a13-usb-phy", .data = &sun5i_a13_cfg }, @@ -965,6 +987,7 @@ static const struct of_device_id sun4i_usb_phy_of_match[] = { { .compatible = "allwinner,sun8i-v3s-usb-phy", .data = &sun8i_v3s_cfg }, { .compatible = "allwinner,sun50i-a64-usb-phy", .data = &sun50i_a64_cfg}, + { .compatible = "allwinner,sun50i-h6-usb-phy", .data = &sun50i_h6_cfg }, { }, }; MODULE_DEVICE_TABLE(of, sun4i_usb_phy_of_match); diff --git a/drivers/phy/amlogic/phy-meson-gxl-usb2.c b/drivers/phy/amlogic/phy-meson-gxl-usb2.c index 9f9b5414b97a..148ef0bdb9c1 100644 --- a/drivers/phy/amlogic/phy-meson-gxl-usb2.c +++ b/drivers/phy/amlogic/phy-meson-gxl-usb2.c @@ -152,7 +152,8 @@ static int phy_meson_gxl_usb2_reset(struct phy *phy) return 0; } -static int phy_meson_gxl_usb2_set_mode(struct phy *phy, enum phy_mode mode) +static int phy_meson_gxl_usb2_set_mode(struct phy *phy, + enum phy_mode mode, int submode) { struct phy_meson_gxl_usb2_priv *priv = phy_get_drvdata(phy); @@ -209,7 +210,7 @@ static int phy_meson_gxl_usb2_power_on(struct phy *phy) /* power on the PHY by taking it out of reset mode */ regmap_update_bits(priv->regmap, U2P_R0, U2P_R0_POWER_ON_RESET, 0); - ret = phy_meson_gxl_usb2_set_mode(phy, priv->mode); + ret = phy_meson_gxl_usb2_set_mode(phy, priv->mode, 0); if (ret) { phy_meson_gxl_usb2_power_off(phy); diff --git a/drivers/phy/amlogic/phy-meson-gxl-usb3.c b/drivers/phy/amlogic/phy-meson-gxl-usb3.c index d37d94ddf9c0..c0e9e4c16149 100644 --- a/drivers/phy/amlogic/phy-meson-gxl-usb3.c +++ b/drivers/phy/amlogic/phy-meson-gxl-usb3.c @@ -119,7 +119,8 @@ static int phy_meson_gxl_usb3_power_off(struct phy *phy) return 0; } -static int phy_meson_gxl_usb3_set_mode(struct phy *phy, enum phy_mode mode) +static int phy_meson_gxl_usb3_set_mode(struct phy *phy, + enum phy_mode mode, int submode) { struct phy_meson_gxl_usb3_priv *priv = phy_get_drvdata(phy); @@ -164,7 +165,7 @@ static int phy_meson_gxl_usb3_init(struct phy *phy) if (ret) goto err_disable_clk_phy; - ret = phy_meson_gxl_usb3_set_mode(phy, priv->mode); + ret = phy_meson_gxl_usb3_set_mode(phy, priv->mode, 0); if (ret) goto err_disable_clk_peripheral; diff --git a/drivers/phy/cadence/Kconfig b/drivers/phy/cadence/Kconfig index 57fff7de4031..2b8c0851ff33 100644 --- a/drivers/phy/cadence/Kconfig +++ b/drivers/phy/cadence/Kconfig @@ -1,5 
+1,5 @@ # -# Phy driver for Cadence MHDP DisplayPort controller +# Phy drivers for Cadence PHYs # config PHY_CADENCE_DP tristate "Cadence MHDP DisplayPort PHY driver" @@ -8,3 +8,10 @@ config PHY_CADENCE_DP select GENERIC_PHY help Support for Cadence MHDP DisplayPort PHY. + +config PHY_CADENCE_SIERRA + tristate "Cadence Sierra PHY Driver" + depends on OF && HAS_IOMEM && RESET_CONTROLLER + select GENERIC_PHY + help + Enable this to support the Cadence Sierra PHY driver \ No newline at end of file diff --git a/drivers/phy/cadence/Makefile b/drivers/phy/cadence/Makefile index e5b0a11cf28a..412349af0492 100644 --- a/drivers/phy/cadence/Makefile +++ b/drivers/phy/cadence/Makefile @@ -1 +1,2 @@ obj-$(CONFIG_PHY_CADENCE_DP) += phy-cadence-dp.o +obj-$(CONFIG_PHY_CADENCE_SIERRA) += phy-cadence-sierra.o diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c new file mode 100644 index 000000000000..de10402f2931 --- /dev/null +++ b/drivers/phy/cadence/phy-cadence-sierra.c @@ -0,0 +1,395 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Cadence Sierra PHY Driver + * + * Copyright (c) 2018 Cadence Design Systems + * Author: Alan Douglas + * + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* PHY register offsets */ +#define SIERRA_PHY_PLL_CFG (0xc00e << 2) +#define SIERRA_DET_STANDEC_A (0x4000 << 2) +#define SIERRA_DET_STANDEC_B (0x4001 << 2) +#define SIERRA_DET_STANDEC_C (0x4002 << 2) +#define SIERRA_DET_STANDEC_D (0x4003 << 2) +#define SIERRA_DET_STANDEC_E (0x4004 << 2) +#define SIERRA_PSM_LANECAL (0x4008 << 2) +#define SIERRA_PSM_DIAG (0x4015 << 2) +#define SIERRA_PSC_TX_A0 (0x4028 << 2) +#define SIERRA_PSC_TX_A1 (0x4029 << 2) +#define SIERRA_PSC_TX_A2 (0x402A << 2) +#define SIERRA_PSC_TX_A3 (0x402B << 2) +#define SIERRA_PSC_RX_A0 (0x4030 << 2) +#define SIERRA_PSC_RX_A1 (0x4031 << 2) +#define SIERRA_PSC_RX_A2 (0x4032 << 2) +#define SIERRA_PSC_RX_A3 (0x4033 << 2) +#define SIERRA_PLLCTRL_SUBRATE (0x403A << 2) +#define SIERRA_PLLCTRL_GEN_D (0x403E << 2) +#define SIERRA_DRVCTRL_ATTEN (0x406A << 2) +#define SIERRA_CLKPATHCTRL_TMR (0x4081 << 2) +#define SIERRA_RX_CREQ_FLTR_A_MODE1 (0x4087 << 2) +#define SIERRA_RX_CREQ_FLTR_A_MODE0 (0x4088 << 2) +#define SIERRA_CREQ_CCLKDET_MODE01 (0x408E << 2) +#define SIERRA_RX_CTLE_MAINTENANCE (0x4091 << 2) +#define SIERRA_CREQ_FSMCLK_SEL (0x4092 << 2) +#define SIERRA_CTLELUT_CTRL (0x4098 << 2) +#define SIERRA_DFE_ECMP_RATESEL (0x40C0 << 2) +#define SIERRA_DFE_SMP_RATESEL (0x40C1 << 2) +#define SIERRA_DEQ_VGATUNE_CTRL (0x40E1 << 2) +#define SIERRA_TMRVAL_MODE3 (0x416E << 2) +#define SIERRA_TMRVAL_MODE2 (0x416F << 2) +#define SIERRA_TMRVAL_MODE1 (0x4170 << 2) +#define SIERRA_TMRVAL_MODE0 (0x4171 << 2) +#define SIERRA_PICNT_MODE1 (0x4174 << 2) +#define SIERRA_CPI_OUTBUF_RATESEL (0x417C << 2) +#define SIERRA_LFPSFILT_NS (0x418A << 2) +#define SIERRA_LFPSFILT_RD (0x418B << 2) +#define SIERRA_LFPSFILT_MP (0x418C << 2) +#define SIERRA_SDFILT_H2L_A (0x4191 << 2) + +#define SIERRA_MACRO_ID 0x00007364 +#define SIERRA_MAX_LANES 4 + +struct cdns_sierra_inst { + struct phy *phy; + u32 phy_type; + u32 num_lanes; + u32 mlane; + struct reset_control *lnk_rst; +}; + +struct cdns_reg_pairs { + u16 val; + u32 off; +}; + +struct cdns_sierra_data { + u32 id_value; + u32 pcie_regs; + u32 usb_regs; + struct cdns_reg_pairs *pcie_vals; + struct cdns_reg_pairs *usb_vals; +}; + +struct cdns_sierra_phy { + struct device *dev; + void __iomem *base; + struct 
cdns_sierra_data *init_data; + struct cdns_sierra_inst phys[SIERRA_MAX_LANES]; + struct reset_control *phy_rst; + struct reset_control *apb_rst; + struct clk *clk; + int nsubnodes; + bool autoconf; +}; + +static void cdns_sierra_phy_init(struct phy *gphy) +{ + struct cdns_sierra_inst *ins = phy_get_drvdata(gphy); + struct cdns_sierra_phy *phy = dev_get_drvdata(gphy->dev.parent); + int i, j; + struct cdns_reg_pairs *vals; + u32 num_regs; + + if (ins->phy_type == PHY_TYPE_PCIE) { + num_regs = phy->init_data->pcie_regs; + vals = phy->init_data->pcie_vals; + } else if (ins->phy_type == PHY_TYPE_USB3) { + num_regs = phy->init_data->usb_regs; + vals = phy->init_data->usb_vals; + } else { + return; + } + for (i = 0; i < ins->num_lanes; i++) + for (j = 0; j < num_regs ; j++) + writel(vals[j].val, phy->base + + vals[j].off + (i + ins->mlane) * 0x800); +} + +static int cdns_sierra_phy_on(struct phy *gphy) +{ + struct cdns_sierra_inst *ins = phy_get_drvdata(gphy); + + /* Take the PHY lane group out of reset */ + return reset_control_deassert(ins->lnk_rst); +} + +static int cdns_sierra_phy_off(struct phy *gphy) +{ + struct cdns_sierra_inst *ins = phy_get_drvdata(gphy); + + return reset_control_assert(ins->lnk_rst); +} + +static const struct phy_ops ops = { + .power_on = cdns_sierra_phy_on, + .power_off = cdns_sierra_phy_off, + .owner = THIS_MODULE, +}; + +static int cdns_sierra_get_optional(struct cdns_sierra_inst *inst, + struct device_node *child) +{ + if (of_property_read_u32(child, "reg", &inst->mlane)) + return -EINVAL; + + if (of_property_read_u32(child, "cdns,num-lanes", &inst->num_lanes)) + return -EINVAL; + + if (of_property_read_u32(child, "cdns,phy-type", &inst->phy_type)) + return -EINVAL; + + return 0; +} + +static const struct of_device_id cdns_sierra_id_table[]; + +static int cdns_sierra_phy_probe(struct platform_device *pdev) +{ + struct cdns_sierra_phy *sp; + struct phy_provider *phy_provider; + struct device *dev = &pdev->dev; + const struct of_device_id *match; + struct resource *res; + int i, ret, node = 0; + struct device_node *dn = dev->of_node, *child; + + if (of_get_child_count(dn) == 0) + return -ENODEV; + + sp = devm_kzalloc(dev, sizeof(*sp), GFP_KERNEL); + if (!sp) + return -ENOMEM; + dev_set_drvdata(dev, sp); + sp->dev = dev; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + sp->base = devm_ioremap_resource(dev, res); + if (IS_ERR(sp->base)) { + dev_err(dev, "missing \"reg\"\n"); + return PTR_ERR(sp->base); + } + + /* Get init data for this PHY */ + match = of_match_device(cdns_sierra_id_table, dev); + if (!match) + return -EINVAL; + sp->init_data = (struct cdns_sierra_data *)match->data; + + platform_set_drvdata(pdev, sp); + + sp->clk = devm_clk_get(dev, "phy_clk"); + if (IS_ERR(sp->clk)) { + dev_err(dev, "failed to get clock phy_clk\n"); + return PTR_ERR(sp->clk); + } + + sp->phy_rst = devm_reset_control_get(dev, "sierra_reset"); + if (IS_ERR(sp->phy_rst)) { + dev_err(dev, "failed to get reset\n"); + return PTR_ERR(sp->phy_rst); + } + + sp->apb_rst = devm_reset_control_get(dev, "sierra_apb"); + if (IS_ERR(sp->apb_rst)) { + dev_err(dev, "failed to get apb reset\n"); + return PTR_ERR(sp->apb_rst); + } + + ret = clk_prepare_enable(sp->clk); + if (ret) + return ret; + + /* Enable APB */ + reset_control_deassert(sp->apb_rst); + + /* Check that PHY is present */ + if (sp->init_data->id_value != readl(sp->base)) { + ret = -EINVAL; + goto clk_disable; + } + + sp->autoconf = of_property_read_bool(dn, "cdns,autoconf"); + + for_each_available_child_of_node(dn, child) { + 
struct phy *gphy; + + sp->phys[node].lnk_rst = + of_reset_control_get_exclusive_by_index(child, 0); + + if (IS_ERR(sp->phys[node].lnk_rst)) { + dev_err(dev, "failed to get reset %s\n", + child->full_name); + ret = PTR_ERR(sp->phys[node].lnk_rst); + goto put_child2; + } + + if (!sp->autoconf) { + ret = cdns_sierra_get_optional(&sp->phys[node], child); + if (ret) { + dev_err(dev, "missing property in node %s\n", + child->name); + goto put_child; + } + } + + gphy = devm_phy_create(dev, child, &ops); + + if (IS_ERR(gphy)) { + ret = PTR_ERR(gphy); + goto put_child; + } + sp->phys[node].phy = gphy; + phy_set_drvdata(gphy, &sp->phys[node]); + + /* Initialise the PHY registers, unless auto configured */ + if (!sp->autoconf) + cdns_sierra_phy_init(gphy); + + node++; + } + sp->nsubnodes = node; + + /* If more than one subnode, configure the PHY as multilink */ + if (!sp->autoconf && sp->nsubnodes > 1) + writel(2, sp->base + SIERRA_PHY_PLL_CFG); + + pm_runtime_enable(dev); + phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); + reset_control_deassert(sp->phy_rst); + return PTR_ERR_OR_ZERO(phy_provider); + +put_child: + node++; +put_child2: + for (i = 0; i < node; i++) + reset_control_put(sp->phys[i].lnk_rst); + of_node_put(child); +clk_disable: + clk_disable_unprepare(sp->clk); + reset_control_assert(sp->apb_rst); + return ret; +} + +static int cdns_sierra_phy_remove(struct platform_device *pdev) +{ + struct cdns_sierra_phy *phy = dev_get_drvdata(pdev->dev.parent); + int i; + + reset_control_assert(phy->phy_rst); + reset_control_assert(phy->apb_rst); + pm_runtime_disable(&pdev->dev); + + /* + * The device level resets will be put automatically. + * Need to put the subnode resets here though. + */ + for (i = 0; i < phy->nsubnodes; i++) { + reset_control_assert(phy->phys[i].lnk_rst); + reset_control_put(phy->phys[i].lnk_rst); + } + return 0; +} + +static struct cdns_reg_pairs cdns_usb_regs[] = { + /* + * Write USB configuration parameters to the PHY. + * These values are specific to this specific hardware + * configuration. + */ + {0xFE0A, SIERRA_DET_STANDEC_A}, + {0x000F, SIERRA_DET_STANDEC_B}, + {0x55A5, SIERRA_DET_STANDEC_C}, + {0x69AD, SIERRA_DET_STANDEC_D}, + {0x0241, SIERRA_DET_STANDEC_E}, + {0x0110, SIERRA_PSM_LANECAL}, + {0xCF00, SIERRA_PSM_DIAG}, + {0x001F, SIERRA_PSC_TX_A0}, + {0x0007, SIERRA_PSC_TX_A1}, + {0x0003, SIERRA_PSC_TX_A2}, + {0x0003, SIERRA_PSC_TX_A3}, + {0x0FFF, SIERRA_PSC_RX_A0}, + {0x0003, SIERRA_PSC_RX_A1}, + {0x0003, SIERRA_PSC_RX_A2}, + {0x0001, SIERRA_PSC_RX_A3}, + {0x0001, SIERRA_PLLCTRL_SUBRATE}, + {0x0406, SIERRA_PLLCTRL_GEN_D}, + {0x0000, SIERRA_DRVCTRL_ATTEN}, + {0x823E, SIERRA_CLKPATHCTRL_TMR}, + {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE1}, + {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE0}, + {0x7B3C, SIERRA_CREQ_CCLKDET_MODE01}, + {0x023C, SIERRA_RX_CTLE_MAINTENANCE}, + {0x3232, SIERRA_CREQ_FSMCLK_SEL}, + {0x8452, SIERRA_CTLELUT_CTRL}, + {0x4121, SIERRA_DFE_ECMP_RATESEL}, + {0x4121, SIERRA_DFE_SMP_RATESEL}, + {0x9999, SIERRA_DEQ_VGATUNE_CTRL}, + {0x0330, SIERRA_TMRVAL_MODE0}, + {0x01FF, SIERRA_PICNT_MODE1}, + {0x0009, SIERRA_CPI_OUTBUF_RATESEL}, + {0x000F, SIERRA_LFPSFILT_NS}, + {0x0009, SIERRA_LFPSFILT_RD}, + {0x0001, SIERRA_LFPSFILT_MP}, + {0x8013, SIERRA_SDFILT_H2L_A}, + {0x0400, SIERRA_TMRVAL_MODE1}, +}; + +static struct cdns_reg_pairs cdns_pcie_regs[] = { + /* + * Write PCIe configuration parameters to the PHY. + * These values are specific to this specific hardware + * configuration. 
+ */ + {0x891f, SIERRA_DET_STANDEC_D}, + {0x0053, SIERRA_DET_STANDEC_E}, + {0x0400, SIERRA_TMRVAL_MODE2}, + {0x0200, SIERRA_TMRVAL_MODE3}, +}; + +static const struct cdns_sierra_data cdns_map_sierra = { + SIERRA_MACRO_ID, + ARRAY_SIZE(cdns_pcie_regs), + ARRAY_SIZE(cdns_usb_regs), + cdns_pcie_regs, + cdns_usb_regs +}; + +static const struct of_device_id cdns_sierra_id_table[] = { + { + .compatible = "cdns,sierra-phy-t0", + .data = &cdns_map_sierra, + }, + {} +}; +MODULE_DEVICE_TABLE(of, cdns_sierra_id_table); + +static struct platform_driver cdns_sierra_driver = { + .probe = cdns_sierra_phy_probe, + .remove = cdns_sierra_phy_remove, + .driver = { + .name = "cdns-sierra-phy", + .of_match_table = cdns_sierra_id_table, + }, +}; +module_platform_driver(cdns_sierra_driver); + +MODULE_ALIAS("platform:cdns_sierra"); +MODULE_AUTHOR("Cadence Design Systems"); +MODULE_DESCRIPTION("CDNS sierra phy driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/phy/freescale/Kconfig b/drivers/phy/freescale/Kconfig new file mode 100644 index 000000000000..f050bd4e97e0 --- /dev/null +++ b/drivers/phy/freescale/Kconfig @@ -0,0 +1,5 @@ +config PHY_FSL_IMX8MQ_USB + tristate "Freescale i.MX8M USB3 PHY" + depends on OF && HAS_IOMEM + select GENERIC_PHY + default SOC_IMX8MQ diff --git a/drivers/phy/freescale/Makefile b/drivers/phy/freescale/Makefile new file mode 100644 index 000000000000..dc2b3f1f2f80 --- /dev/null +++ b/drivers/phy/freescale/Makefile @@ -0,0 +1 @@ +obj-$(CONFIG_PHY_FSL_IMX8MQ_USB) += phy-fsl-imx8mq-usb.o diff --git a/drivers/phy/freescale/phy-fsl-imx8mq-usb.c b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c new file mode 100644 index 000000000000..d6ea5ce8afa5 --- /dev/null +++ b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c @@ -0,0 +1,127 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Copyright (c) 2017 NXP. 
*/ + +#include +#include +#include +#include +#include + +#define PHY_CTRL0 0x0 +#define PHY_CTRL0_REF_SSP_EN BIT(2) + +#define PHY_CTRL1 0x4 +#define PHY_CTRL1_RESET BIT(0) +#define PHY_CTRL1_COMMONONN BIT(1) +#define PHY_CTRL1_ATERESET BIT(3) +#define PHY_CTRL1_VDATSRCENB0 BIT(19) +#define PHY_CTRL1_VDATDETENB0 BIT(20) + +#define PHY_CTRL2 0x8 +#define PHY_CTRL2_TXENABLEN0 BIT(8) + +struct imx8mq_usb_phy { + struct phy *phy; + struct clk *clk; + void __iomem *base; +}; + +static int imx8mq_usb_phy_init(struct phy *phy) +{ + struct imx8mq_usb_phy *imx_phy = phy_get_drvdata(phy); + u32 value; + + value = readl(imx_phy->base + PHY_CTRL1); + value &= ~(PHY_CTRL1_VDATSRCENB0 | PHY_CTRL1_VDATDETENB0 | + PHY_CTRL1_COMMONONN); + value |= PHY_CTRL1_RESET | PHY_CTRL1_ATERESET; + writel(value, imx_phy->base + PHY_CTRL1); + + value = readl(imx_phy->base + PHY_CTRL0); + value |= PHY_CTRL0_REF_SSP_EN; + writel(value, imx_phy->base + PHY_CTRL0); + + value = readl(imx_phy->base + PHY_CTRL2); + value |= PHY_CTRL2_TXENABLEN0; + writel(value, imx_phy->base + PHY_CTRL2); + + value = readl(imx_phy->base + PHY_CTRL1); + value &= ~(PHY_CTRL1_RESET | PHY_CTRL1_ATERESET); + writel(value, imx_phy->base + PHY_CTRL1); + + return 0; +} + +static int imx8mq_phy_power_on(struct phy *phy) +{ + struct imx8mq_usb_phy *imx_phy = phy_get_drvdata(phy); + + return clk_prepare_enable(imx_phy->clk); +} + +static int imx8mq_phy_power_off(struct phy *phy) +{ + struct imx8mq_usb_phy *imx_phy = phy_get_drvdata(phy); + + clk_disable_unprepare(imx_phy->clk); + + return 0; +} + +static struct phy_ops imx8mq_usb_phy_ops = { + .init = imx8mq_usb_phy_init, + .power_on = imx8mq_phy_power_on, + .power_off = imx8mq_phy_power_off, + .owner = THIS_MODULE, +}; + +static int imx8mq_usb_phy_probe(struct platform_device *pdev) +{ + struct phy_provider *phy_provider; + struct device *dev = &pdev->dev; + struct imx8mq_usb_phy *imx_phy; + struct resource *res; + + imx_phy = devm_kzalloc(dev, sizeof(*imx_phy), GFP_KERNEL); + if (!imx_phy) + return -ENOMEM; + + imx_phy->clk = devm_clk_get(dev, "phy"); + if (IS_ERR(imx_phy->clk)) { + dev_err(dev, "failed to get imx8mq usb phy clock\n"); + return PTR_ERR(imx_phy->clk); + } + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + imx_phy->base = devm_ioremap_resource(dev, res); + if (IS_ERR(imx_phy->base)) + return PTR_ERR(imx_phy->base); + + imx_phy->phy = devm_phy_create(dev, NULL, &imx8mq_usb_phy_ops); + if (IS_ERR(imx_phy->phy)) + return PTR_ERR(imx_phy->phy); + + phy_set_drvdata(imx_phy->phy, imx_phy); + + phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); + + return PTR_ERR_OR_ZERO(phy_provider); +} + +static const struct of_device_id imx8mq_usb_phy_of_match[] = { + {.compatible = "fsl,imx8mq-usb-phy",}, + { }, +}; +MODULE_DEVICE_TABLE(of, imx8mq_usb_phy_of_match); + +static struct platform_driver imx8mq_usb_phy_driver = { + .probe = imx8mq_usb_phy_probe, + .driver = { + .name = "imx8mq-usb-phy", + .of_match_table = imx8mq_usb_phy_of_match, + } +}; +module_platform_driver(imx8mq_usb_phy_driver); + +MODULE_DESCRIPTION("FSL IMX8MQ USB PHY driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/phy/marvell/phy-mvebu-cp110-comphy.c b/drivers/phy/marvell/phy-mvebu-cp110-comphy.c index 86a5f7b9448b..187cccde53b5 100644 --- a/drivers/phy/marvell/phy-mvebu-cp110-comphy.c +++ b/drivers/phy/marvell/phy-mvebu-cp110-comphy.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -114,43 +115,45 @@ #define MVEBU_COMPHY_LANES 6 #define MVEBU_COMPHY_PORTS 3 
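/*
 * A minimal consumer sketch, assuming a MAC driver that already holds a
 * reference to a COMPHY lane (illustrative only, not part of this patch):
 * with the mode/submode split introduced below, selecting 2500BASE-X on
 * a lane becomes
 *
 *	ret = phy_set_mode_ext(comphy, PHY_MODE_ETHERNET,
 *			       PHY_INTERFACE_MODE_2500BASEX);
 *	if (!ret)
 *		ret = phy_power_on(comphy);
 *
 * phy_power_on() then resolves (lane, port, mode, submode) against the
 * mvebu_comphy_cp110_modes[] table below to pick the mux value.
 */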
-struct mvebu_comhy_conf { +struct mvebu_comphy_conf { enum phy_mode mode; + int submode; unsigned lane; unsigned port; u32 mux; }; -#define MVEBU_COMPHY_CONF(_lane, _port, _mode, _mux) \ +#define MVEBU_COMPHY_CONF(_lane, _port, _submode, _mux) \ { \ .lane = _lane, \ .port = _port, \ - .mode = _mode, \ + .mode = PHY_MODE_ETHERNET, \ + .submode = _submode, \ .mux = _mux, \ } -static const struct mvebu_comhy_conf mvebu_comphy_cp110_modes[] = { +static const struct mvebu_comphy_conf mvebu_comphy_cp110_modes[] = { /* lane 0 */ - MVEBU_COMPHY_CONF(0, 1, PHY_MODE_SGMII, 0x1), - MVEBU_COMPHY_CONF(0, 1, PHY_MODE_2500SGMII, 0x1), + MVEBU_COMPHY_CONF(0, 1, PHY_INTERFACE_MODE_SGMII, 0x1), + MVEBU_COMPHY_CONF(0, 1, PHY_INTERFACE_MODE_2500BASEX, 0x1), /* lane 1 */ - MVEBU_COMPHY_CONF(1, 2, PHY_MODE_SGMII, 0x1), - MVEBU_COMPHY_CONF(1, 2, PHY_MODE_2500SGMII, 0x1), + MVEBU_COMPHY_CONF(1, 2, PHY_INTERFACE_MODE_SGMII, 0x1), + MVEBU_COMPHY_CONF(1, 2, PHY_INTERFACE_MODE_2500BASEX, 0x1), /* lane 2 */ - MVEBU_COMPHY_CONF(2, 0, PHY_MODE_SGMII, 0x1), - MVEBU_COMPHY_CONF(2, 0, PHY_MODE_2500SGMII, 0x1), - MVEBU_COMPHY_CONF(2, 0, PHY_MODE_10GKR, 0x1), + MVEBU_COMPHY_CONF(2, 0, PHY_INTERFACE_MODE_SGMII, 0x1), + MVEBU_COMPHY_CONF(2, 0, PHY_INTERFACE_MODE_2500BASEX, 0x1), + MVEBU_COMPHY_CONF(2, 0, PHY_INTERFACE_MODE_10GKR, 0x1), /* lane 3 */ - MVEBU_COMPHY_CONF(3, 1, PHY_MODE_SGMII, 0x2), - MVEBU_COMPHY_CONF(3, 1, PHY_MODE_2500SGMII, 0x2), + MVEBU_COMPHY_CONF(3, 1, PHY_INTERFACE_MODE_SGMII, 0x2), + MVEBU_COMPHY_CONF(3, 1, PHY_INTERFACE_MODE_2500BASEX, 0x2), /* lane 4 */ - MVEBU_COMPHY_CONF(4, 0, PHY_MODE_SGMII, 0x2), - MVEBU_COMPHY_CONF(4, 0, PHY_MODE_2500SGMII, 0x2), - MVEBU_COMPHY_CONF(4, 0, PHY_MODE_10GKR, 0x2), - MVEBU_COMPHY_CONF(4, 1, PHY_MODE_SGMII, 0x1), + MVEBU_COMPHY_CONF(4, 0, PHY_INTERFACE_MODE_SGMII, 0x2), + MVEBU_COMPHY_CONF(4, 0, PHY_INTERFACE_MODE_2500BASEX, 0x2), + MVEBU_COMPHY_CONF(4, 0, PHY_INTERFACE_MODE_10GKR, 0x2), + MVEBU_COMPHY_CONF(4, 1, PHY_INTERFACE_MODE_SGMII, 0x1), /* lane 5 */ - MVEBU_COMPHY_CONF(5, 2, PHY_MODE_SGMII, 0x1), - MVEBU_COMPHY_CONF(5, 2, PHY_MODE_2500SGMII, 0x1), + MVEBU_COMPHY_CONF(5, 2, PHY_INTERFACE_MODE_SGMII, 0x1), + MVEBU_COMPHY_CONF(5, 2, PHY_INTERFACE_MODE_2500BASEX, 0x1), }; struct mvebu_comphy_priv { @@ -163,10 +166,12 @@ struct mvebu_comphy_lane { struct mvebu_comphy_priv *priv; unsigned id; enum phy_mode mode; + int submode; int port; }; -static int mvebu_comphy_get_mux(int lane, int port, enum phy_mode mode) +static int mvebu_comphy_get_mux(int lane, int port, + enum phy_mode mode, int submode) { int i, n = ARRAY_SIZE(mvebu_comphy_cp110_modes); @@ -177,7 +182,8 @@ static int mvebu_comphy_get_mux(int lane, int port, enum phy_mode mode) for (i = 0; i < n; i++) { if (mvebu_comphy_cp110_modes[i].lane == lane && mvebu_comphy_cp110_modes[i].port == port && - mvebu_comphy_cp110_modes[i].mode == mode) + mvebu_comphy_cp110_modes[i].mode == mode && + mvebu_comphy_cp110_modes[i].submode == submode) break; } @@ -187,8 +193,7 @@ static int mvebu_comphy_get_mux(int lane, int port, enum phy_mode mode) return mvebu_comphy_cp110_modes[i].mux; } -static void mvebu_comphy_ethernet_init_reset(struct mvebu_comphy_lane *lane, - enum phy_mode mode) +static void mvebu_comphy_ethernet_init_reset(struct mvebu_comphy_lane *lane) { struct mvebu_comphy_priv *priv = lane->priv; u32 val; @@ -206,14 +211,14 @@ static void mvebu_comphy_ethernet_init_reset(struct mvebu_comphy_lane *lane, MVEBU_COMPHY_SERDES_CFG0_HALF_BUS | MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0xf) | 
MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0xf)); - if (mode == PHY_MODE_10GKR) + if (lane->submode == PHY_INTERFACE_MODE_10GKR) val |= MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0xe) | MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0xe); - else if (mode == PHY_MODE_2500SGMII) + else if (lane->submode == PHY_INTERFACE_MODE_2500BASEX) val |= MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0x8) | MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0x8) | MVEBU_COMPHY_SERDES_CFG0_HALF_BUS; - else if (mode == PHY_MODE_SGMII) + else if (lane->submode == PHY_INTERFACE_MODE_SGMII) val |= MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0x6) | MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0x6) | MVEBU_COMPHY_SERDES_CFG0_HALF_BUS; @@ -243,7 +248,7 @@ static void mvebu_comphy_ethernet_init_reset(struct mvebu_comphy_lane *lane, /* refclk selection */ val = readl(priv->base + MVEBU_COMPHY_MISC_CTRL0(lane->id)); val &= ~MVEBU_COMPHY_MISC_CTRL0_REFCLK_SEL; - if (mode == PHY_MODE_10GKR) + if (lane->submode == PHY_INTERFACE_MODE_10GKR) val |= MVEBU_COMPHY_MISC_CTRL0_ICP_FORCE; writel(val, priv->base + MVEBU_COMPHY_MISC_CTRL0(lane->id)); @@ -261,8 +266,7 @@ static void mvebu_comphy_ethernet_init_reset(struct mvebu_comphy_lane *lane, writel(val, priv->base + MVEBU_COMPHY_LOOPBACK(lane->id)); } -static int mvebu_comphy_init_plls(struct mvebu_comphy_lane *lane, - enum phy_mode mode) +static int mvebu_comphy_init_plls(struct mvebu_comphy_lane *lane) { struct mvebu_comphy_priv *priv = lane->priv; u32 val; @@ -303,13 +307,13 @@ static int mvebu_comphy_init_plls(struct mvebu_comphy_lane *lane, return 0; } -static int mvebu_comphy_set_mode_sgmii(struct phy *phy, enum phy_mode mode) +static int mvebu_comphy_set_mode_sgmii(struct phy *phy) { struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); struct mvebu_comphy_priv *priv = lane->priv; u32 val; - mvebu_comphy_ethernet_init_reset(lane, mode); + mvebu_comphy_ethernet_init_reset(lane); val = readl(priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); val &= ~MVEBU_COMPHY_RX_CTRL1_CLK8T_EN; @@ -330,7 +334,7 @@ static int mvebu_comphy_set_mode_sgmii(struct phy *phy, enum phy_mode mode) val |= MVEBU_COMPHY_GEN1_S0_TX_EMPH(0x1); writel(val, priv->base + MVEBU_COMPHY_GEN1_S0(lane->id)); - return mvebu_comphy_init_plls(lane, PHY_MODE_SGMII); + return mvebu_comphy_init_plls(lane); } static int mvebu_comphy_set_mode_10gkr(struct phy *phy) @@ -339,7 +343,7 @@ static int mvebu_comphy_set_mode_10gkr(struct phy *phy) struct mvebu_comphy_priv *priv = lane->priv; u32 val; - mvebu_comphy_ethernet_init_reset(lane, PHY_MODE_10GKR); + mvebu_comphy_ethernet_init_reset(lane); val = readl(priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); val |= MVEBU_COMPHY_RX_CTRL1_RXCLK2X_SEL | @@ -469,7 +473,7 @@ static int mvebu_comphy_set_mode_10gkr(struct phy *phy) val |= MVEBU_COMPHY_EXT_SELV_RX_SAMPL(0x1a); writel(val, priv->base + MVEBU_COMPHY_EXT_SELV(lane->id)); - return mvebu_comphy_init_plls(lane, PHY_MODE_10GKR); + return mvebu_comphy_init_plls(lane); } static int mvebu_comphy_power_on(struct phy *phy) @@ -479,7 +483,8 @@ static int mvebu_comphy_power_on(struct phy *phy) int ret, mux; u32 val; - mux = mvebu_comphy_get_mux(lane->id, lane->port, lane->mode); + mux = mvebu_comphy_get_mux(lane->id, lane->port, + lane->mode, lane->submode); if (mux < 0) return -ENOTSUPP; @@ -492,12 +497,12 @@ static int mvebu_comphy_power_on(struct phy *phy) val |= mux << MVEBU_COMPHY_SELECTOR_PHY(lane->id); regmap_write(priv->regmap, MVEBU_COMPHY_SELECTOR, val); - switch (lane->mode) { - case PHY_MODE_SGMII: - case PHY_MODE_2500SGMII: - ret = mvebu_comphy_set_mode_sgmii(phy, lane->mode); + switch (lane->submode) { + case 
PHY_INTERFACE_MODE_SGMII: + case PHY_INTERFACE_MODE_2500BASEX: + ret = mvebu_comphy_set_mode_sgmii(phy); break; - case PHY_MODE_10GKR: + case PHY_INTERFACE_MODE_10GKR: ret = mvebu_comphy_set_mode_10gkr(phy); break; default: @@ -512,14 +517,22 @@ static int mvebu_comphy_power_on(struct phy *phy) return ret; } -static int mvebu_comphy_set_mode(struct phy *phy, enum phy_mode mode) +static int mvebu_comphy_set_mode(struct phy *phy, + enum phy_mode mode, int submode) { struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); - if (mvebu_comphy_get_mux(lane->id, lane->port, mode) < 0) + if (mode != PHY_MODE_ETHERNET) + return -EINVAL; + + if (submode == PHY_INTERFACE_MODE_1000BASEX) + submode = PHY_INTERFACE_MODE_SGMII; + + if (mvebu_comphy_get_mux(lane->id, lane->port, mode, submode) < 0) return -EINVAL; lane->mode = mode; + lane->submode = submode; return 0; } diff --git a/drivers/phy/mediatek/phy-mtk-tphy.c b/drivers/phy/mediatek/phy-mtk-tphy.c index 3eb8e1bd7b78..5b6a470ca145 100644 --- a/drivers/phy/mediatek/phy-mtk-tphy.c +++ b/drivers/phy/mediatek/phy-mtk-tphy.c @@ -971,7 +971,7 @@ static int mtk_phy_exit(struct phy *phy) return 0; } -static int mtk_phy_set_mode(struct phy *phy, enum phy_mode mode) +static int mtk_phy_set_mode(struct phy *phy, enum phy_mode mode, int submode) { struct mtk_phy_instance *instance = phy_get_drvdata(phy); struct mtk_tphy *tphy = dev_get_drvdata(phy->dev.parent); diff --git a/drivers/phy/mediatek/phy-mtk-xsphy.c b/drivers/phy/mediatek/phy-mtk-xsphy.c index 020cd0227397..8c51131945c0 100644 --- a/drivers/phy/mediatek/phy-mtk-xsphy.c +++ b/drivers/phy/mediatek/phy-mtk-xsphy.c @@ -426,7 +426,7 @@ static int mtk_phy_exit(struct phy *phy) return 0; } -static int mtk_phy_set_mode(struct phy *phy, enum phy_mode mode) +static int mtk_phy_set_mode(struct phy *phy, enum phy_mode mode, int submode) { struct xsphy_instance *inst = phy_get_drvdata(phy); struct mtk_xsphy *xsphy = dev_get_drvdata(phy->dev.parent); diff --git a/drivers/phy/motorola/phy-mapphone-mdm6600.c b/drivers/phy/motorola/phy-mapphone-mdm6600.c index 25d456a323c2..ee184d5607bd 100644 --- a/drivers/phy/motorola/phy-mapphone-mdm6600.c +++ b/drivers/phy/motorola/phy-mapphone-mdm6600.c @@ -16,6 +16,7 @@ #include #include #include +#include #define PHY_MDM6600_PHY_DELAY_MS 4000 /* PHY enable 2.2s to 3.5s */ #define PHY_MDM6600_ENABLED_DELAY_MS 8000 /* 8s more total for MDM6600 */ @@ -120,12 +121,22 @@ static int phy_mdm6600_power_on(struct phy *x) { struct phy_mdm6600 *ddata = phy_get_drvdata(x); struct gpio_desc *enable_gpio = ddata->ctrl_gpios[PHY_MDM6600_ENABLE]; + int error; if (!ddata->enabled) return -ENODEV; + error = pinctrl_pm_select_default_state(ddata->dev); + if (error) + dev_warn(ddata->dev, "%s: error with default_state: %i\n", + __func__, error); + gpiod_set_value_cansleep(enable_gpio, 1); + /* Allow aggressive PM for USB, it's only needed for n_gsm port */ + if (pm_runtime_enabled(&x->dev)) + phy_pm_runtime_put(x); + return 0; } @@ -133,12 +144,26 @@ static int phy_mdm6600_power_off(struct phy *x) { struct phy_mdm6600 *ddata = phy_get_drvdata(x); struct gpio_desc *enable_gpio = ddata->ctrl_gpios[PHY_MDM6600_ENABLE]; + int error; if (!ddata->enabled) return -ENODEV; + /* Paired with phy_pm_runtime_put() in phy_mdm6600_power_on() */ + if (pm_runtime_enabled(&x->dev)) { + error = phy_pm_runtime_get(x); + if (error < 0 && error != -EINPROGRESS) + dev_warn(ddata->dev, "%s: phy_pm_runtime_get: %i\n", + __func__, error); + } + gpiod_set_value_cansleep(enable_gpio, 0); + error = 
pinctrl_pm_select_sleep_state(ddata->dev); + if (error) + dev_warn(ddata->dev, "%s: error with sleep_state: %i\n", + __func__, error); + return 0; } @@ -529,28 +554,17 @@ static int phy_mdm6600_probe(struct platform_device *pdev) ddata->dev = &pdev->dev; platform_set_drvdata(pdev, ddata); + /* Active state selected in phy_mdm6600_power_on() */ + error = pinctrl_pm_select_sleep_state(ddata->dev); + if (error) + dev_warn(ddata->dev, "%s: error with sleep_state: %i\n", + __func__, error); + error = phy_mdm6600_init_lines(ddata); if (error) return error; phy_mdm6600_init_irq(ddata); - - ddata->generic_phy = devm_phy_create(ddata->dev, NULL, &gpio_usb_ops); - if (IS_ERR(ddata->generic_phy)) { - error = PTR_ERR(ddata->generic_phy); - goto cleanup; - } - - phy_set_drvdata(ddata->generic_phy, ddata); - - ddata->phy_provider = - devm_of_phy_provider_register(ddata->dev, - of_phy_simple_xlate); - if (IS_ERR(ddata->phy_provider)) { - error = PTR_ERR(ddata->phy_provider); - goto cleanup; - } - schedule_delayed_work(&ddata->bootup_work, 0); /* @@ -574,14 +588,31 @@ static int phy_mdm6600_probe(struct platform_device *pdev) if (error < 0) { dev_warn(ddata->dev, "failed to wake modem: %i\n", error); pm_runtime_put_noidle(ddata->dev); + goto cleanup; } + + ddata->generic_phy = devm_phy_create(ddata->dev, NULL, &gpio_usb_ops); + if (IS_ERR(ddata->generic_phy)) { + error = PTR_ERR(ddata->generic_phy); + goto idle; + } + + phy_set_drvdata(ddata->generic_phy, ddata); + + ddata->phy_provider = + devm_of_phy_provider_register(ddata->dev, + of_phy_simple_xlate); + if (IS_ERR(ddata->phy_provider)) + error = PTR_ERR(ddata->phy_provider); + +idle: pm_runtime_mark_last_busy(ddata->dev); pm_runtime_put_autosuspend(ddata->dev); - return 0; - cleanup: - phy_mdm6600_device_power_off(ddata); + if (error < 0) + phy_mdm6600_device_power_off(ddata); + return error; } diff --git a/drivers/phy/mscc/phy-ocelot-serdes.c b/drivers/phy/mscc/phy-ocelot-serdes.c index cbb49d9da6f9..77c46f639fbf 100644 --- a/drivers/phy/mscc/phy-ocelot-serdes.c +++ b/drivers/phy/mscc/phy-ocelot-serdes.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -104,20 +105,24 @@ struct serdes_mux { u8 idx; u8 port; enum phy_mode mode; + int submode; u32 mask; u32 mux; }; -#define SERDES_MUX(_idx, _port, _mode, _mask, _mux) { \ +#define SERDES_MUX(_idx, _port, _mode, _submode, _mask, _mux) { \ .idx = _idx, \ .port = _port, \ .mode = _mode, \ + .submode = _submode, \ .mask = _mask, \ .mux = _mux, \ } -#define SERDES_MUX_SGMII(i, p, m, c) SERDES_MUX(i, p, PHY_MODE_SGMII, m, c) -#define SERDES_MUX_QSGMII(i, p, m, c) SERDES_MUX(i, p, PHY_MODE_QSGMII, m, c) +#define SERDES_MUX_SGMII(i, p, m, c) \ + SERDES_MUX(i, p, PHY_MODE_ETHERNET, PHY_INTERFACE_MODE_SGMII, m, c) +#define SERDES_MUX_QSGMII(i, p, m, c) \ + SERDES_MUX(i, p, PHY_MODE_ETHERNET, PHY_INTERFACE_MODE_QSGMII, m, c) static const struct serdes_mux ocelot_serdes_muxes[] = { SERDES_MUX_SGMII(SERDES1G(0), 0, 0, 0), @@ -154,22 +159,27 @@ static const struct serdes_mux ocelot_serdes_muxes[] = { SERDES_MUX_SGMII(SERDES6G(1), 8, 0, 0), SERDES_MUX_SGMII(SERDES6G(2), 10, HSIO_HW_CFG_PCIE_ENA | HSIO_HW_CFG_DEV2G5_10_MODE, 0), - SERDES_MUX(SERDES6G(2), 10, PHY_MODE_PCIE, HSIO_HW_CFG_PCIE_ENA, + SERDES_MUX(SERDES6G(2), 10, PHY_MODE_PCIE, 0, HSIO_HW_CFG_PCIE_ENA, HSIO_HW_CFG_PCIE_ENA), }; -static int serdes_set_mode(struct phy *phy, enum phy_mode mode) +static int serdes_set_mode(struct phy *phy, enum phy_mode mode, int submode) { struct serdes_macro *macro = phy_get_drvdata(phy); 
unsigned int i; int ret; + /* As of now only PHY_MODE_ETHERNET is supported */ + if (mode != PHY_MODE_ETHERNET) + return -EOPNOTSUPP; + for (i = 0; i < ARRAY_SIZE(ocelot_serdes_muxes); i++) { if (macro->idx != ocelot_serdes_muxes[i].idx || - mode != ocelot_serdes_muxes[i].mode) + mode != ocelot_serdes_muxes[i].mode || + submode != ocelot_serdes_muxes[i].submode) continue; - if (mode != PHY_MODE_QSGMII && + if (submode != PHY_INTERFACE_MODE_QSGMII && macro->port != ocelot_serdes_muxes[i].port) continue; diff --git a/drivers/phy/phy-core-mipi-dphy.c b/drivers/phy/phy-core-mipi-dphy.c new file mode 100644 index 000000000000..465fa1b91a5f --- /dev/null +++ b/drivers/phy/phy-core-mipi-dphy.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2013 NVIDIA Corporation + * Copyright (C) 2018 Cadence Design Systems Inc. + */ + +#include +#include +#include +#include + +#include +#include + +#define PSEC_PER_SEC 1000000000000LL + +/* + * Minimum D-PHY timings based on MIPI D-PHY specification. Derived + * from the valid ranges specified in Section 6.9, Table 14, Page 41 + * of the D-PHY specification (v2.1). + */ +int phy_mipi_dphy_get_default_config(unsigned long pixel_clock, + unsigned int bpp, + unsigned int lanes, + struct phy_configure_opts_mipi_dphy *cfg) +{ + unsigned long long hs_clk_rate; + unsigned long long ui; + + if (!cfg) + return -EINVAL; + + hs_clk_rate = pixel_clock * bpp; + do_div(hs_clk_rate, lanes); + + ui = ALIGN(PSEC_PER_SEC, hs_clk_rate); + do_div(ui, hs_clk_rate); + + cfg->clk_miss = 0; + cfg->clk_post = 60000 + 52 * ui; + cfg->clk_pre = 8000; + cfg->clk_prepare = 38000; + cfg->clk_settle = 95000; + cfg->clk_term_en = 0; + cfg->clk_trail = 60000; + cfg->clk_zero = 262000; + cfg->d_term_en = 0; + cfg->eot = 0; + cfg->hs_exit = 100000; + cfg->hs_prepare = 40000 + 4 * ui; + cfg->hs_zero = 105000 + 6 * ui; + cfg->hs_settle = 85000 + 6 * ui; + cfg->hs_skip = 40000; + + /* + * The MIPI D-PHY specification (Section 6.9, v1.2, Table 14, Page 40) + * contains this formula as: + * + * T_HS-TRAIL = max(n * 8 * ui, 60 + n * 4 * ui) + * + * where n = 1 for forward-direction HS mode and n = 4 for reverse- + * direction HS mode. There's only one setting and this function does + * not parameterize on anything other than ui, so this code + * assumes that reverse-direction HS mode is supported and uses n = 4. + */ + cfg->hs_trail = max(4 * 8 * ui, 60000 + 4 * 4 * ui); + + cfg->init = 100000000; + cfg->lpx = 60000; + cfg->ta_get = 5 * cfg->lpx; + cfg->ta_go = 4 * cfg->lpx; + cfg->ta_sure = 2 * cfg->lpx; + cfg->wakeup = 1000000000; + + cfg->hs_clk_rate = hs_clk_rate; + cfg->lanes = lanes; + + return 0; +} +EXPORT_SYMBOL(phy_mipi_dphy_get_default_config); + +/* + * Validate D-PHY configuration according to MIPI D-PHY specification + * (v1.2, Section 6.9 "Global Operation Timing Parameters").
+ */ +int phy_mipi_dphy_config_validate(struct phy_configure_opts_mipi_dphy *cfg) +{ + unsigned long long ui; + + if (!cfg) + return -EINVAL; + + ui = ALIGN(PSEC_PER_SEC, cfg->hs_clk_rate); + do_div(ui, cfg->hs_clk_rate); + + if (cfg->clk_miss > 60000) + return -EINVAL; + + if (cfg->clk_post < (60000 + 52 * ui)) + return -EINVAL; + + if (cfg->clk_pre < 8000) + return -EINVAL; + + if (cfg->clk_prepare < 38000 || cfg->clk_prepare > 95000) + return -EINVAL; + + if (cfg->clk_settle < 95000 || cfg->clk_settle > 300000) + return -EINVAL; + + if (cfg->clk_term_en > 38000) + return -EINVAL; + + if (cfg->clk_trail < 60000) + return -EINVAL; + + if ((cfg->clk_prepare + cfg->clk_zero) < 300000) + return -EINVAL; + + if (cfg->d_term_en > (35000 + 4 * ui)) + return -EINVAL; + + if (cfg->eot > (105000 + 12 * ui)) + return -EINVAL; + + if (cfg->hs_exit < 100000) + return -EINVAL; + + if (cfg->hs_prepare < (40000 + 4 * ui) || + cfg->hs_prepare > (85000 + 6 * ui)) + return -EINVAL; + + if ((cfg->hs_prepare + cfg->hs_zero) < (145000 + 10 * ui)) + return -EINVAL; + + if ((cfg->hs_settle < (85000 + 6 * ui)) || + (cfg->hs_settle > (145000 + 10 * ui))) + return -EINVAL; + + if (cfg->hs_skip < 40000 || cfg->hs_skip > (55000 + 4 * ui)) + return -EINVAL; + + if (cfg->hs_trail < max(8 * ui, 60000 + 4 * ui)) + return -EINVAL; + + if (cfg->init < 100000000) + return -EINVAL; + + if (cfg->lpx < 50000) + return -EINVAL; + + if (cfg->ta_get != (5 * cfg->lpx)) + return -EINVAL; + + if (cfg->ta_go != (4 * cfg->lpx)) + return -EINVAL; + + if (cfg->ta_sure < cfg->lpx || cfg->ta_sure > (2 * cfg->lpx)) + return -EINVAL; + + if (cfg->wakeup < 1000000000) + return -EINVAL; + + return 0; +} +EXPORT_SYMBOL(phy_mipi_dphy_config_validate); diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c index 35fd38c5a4a1..19b05e824ee4 100644 --- a/drivers/phy/phy-core.c +++ b/drivers/phy/phy-core.c @@ -360,7 +360,7 @@ int phy_power_off(struct phy *phy) } EXPORT_SYMBOL_GPL(phy_power_off); -int phy_set_mode(struct phy *phy, enum phy_mode mode) +int phy_set_mode_ext(struct phy *phy, enum phy_mode mode, int submode) { int ret; @@ -368,14 +368,14 @@ int phy_set_mode(struct phy *phy, enum phy_mode mode) return 0; mutex_lock(&phy->mutex); - ret = phy->ops->set_mode(phy, mode); + ret = phy->ops->set_mode(phy, mode, submode); if (!ret) phy->attrs.mode = mode; mutex_unlock(&phy->mutex); return ret; } -EXPORT_SYMBOL_GPL(phy_set_mode); +EXPORT_SYMBOL_GPL(phy_set_mode_ext); int phy_reset(struct phy *phy) { @@ -407,6 +407,70 @@ int phy_calibrate(struct phy *phy) } EXPORT_SYMBOL_GPL(phy_calibrate); +/** + * phy_configure() - Changes the phy parameters + * @phy: the phy returned by phy_get() + * @opts: New configuration to apply + * + * Used to change the PHY parameters. phy_init() must have been called + * on the phy. The configuration will be applied on the current phy + * mode, which can be changed using phy_set_mode(). + * + * Returns: 0 if successful, a negative error code otherwise + */ +int phy_configure(struct phy *phy, union phy_configure_opts *opts) +{ + int ret; + + if (!phy) + return -EINVAL; + + if (!phy->ops->configure) + return -EOPNOTSUPP; + + mutex_lock(&phy->mutex); + ret = phy->ops->configure(phy, opts); + mutex_unlock(&phy->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(phy_configure); + +/** + * phy_validate() - Checks the phy parameters + * @phy: the phy returned by phy_get() + * @mode: phy_mode the configuration is applicable to. + * @submode: PHY submode the configuration is applicable to.
+ * @opts: Configuration to check + * + * Used to check that the current set of parameters can be handled by + * the phy. Implementations are free to tune the parameters passed as + * arguments if needed by some implementation detail or + * constraints. It will not change any actual configuration of the + * PHY, so calling it as many times as deemed fit will have no side + * effect. + * + * Returns: 0 if successful, a negative error code otherwise + */ +int phy_validate(struct phy *phy, enum phy_mode mode, int submode, + union phy_configure_opts *opts) +{ + int ret; + + if (!phy) + return -EINVAL; + + if (!phy->ops->validate) + return -EOPNOTSUPP; + + mutex_lock(&phy->mutex); + ret = phy->ops->validate(phy, mode, submode, opts); + mutex_unlock(&phy->mutex); + + return ret; +} +EXPORT_SYMBOL_GPL(phy_validate); + /** * _of_phy_get() - lookup and obtain a reference to a phy by phandle * @np: device_node for which to get the phy diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c index a83332411026..b4006818e1b6 100644 --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c @@ -72,6 +72,9 @@ #define MAX_PROP_NAME 32 +/* Define the assumed distance between lanes for underspecified device trees. */ +#define QMP_PHY_LEGACY_LANE_STRIDE 0x400 + struct qmp_phy_init_tbl { unsigned int offset; unsigned int val; @@ -733,9 +736,6 @@ struct qmp_phy_cfg { bool has_phy_dp_com_ctrl; /* true, if PHY has secondary tx/rx lanes to be configured */ bool is_dual_lane_phy; - /* Register offset of secondary tx/rx lanes for USB DP combo PHY */ - unsigned int tx_b_lane_offset; - unsigned int rx_b_lane_offset; /* true, if PCS block has no separate SW_RESET register */ bool no_pcs_sw_reset; @@ -748,6 +748,8 @@ struct qmp_phy_cfg { * @tx: iomapped memory space for lane's tx * @rx: iomapped memory space for lane's rx * @pcs: iomapped memory space for lane's pcs + * @tx2: iomapped memory space for second lane's tx (in dual lane PHYs) + * @rx2: iomapped memory space for second lane's rx (in dual lane PHYs) * @pcs_misc: iomapped memory space for lane's pcs_misc * @pipe_clk: pipe lock * @index: lane index @@ -759,6 +761,8 @@ struct qmp_phy { void __iomem *tx; void __iomem *rx; void __iomem *pcs; + void __iomem *tx2; + void __iomem *rx2; void __iomem *pcs_misc; struct clk *pipe_clk; unsigned int index; @@ -975,8 +979,6 @@ static const struct qmp_phy_cfg qmp_v3_usb3phy_cfg = { .has_phy_dp_com_ctrl = true, .is_dual_lane_phy = true, - .tx_b_lane_offset = 0x400, - .rx_b_lane_offset = 0x400, }; static const struct qmp_phy_cfg qmp_v3_usb3_uniphy_cfg = { @@ -1031,9 +1033,6 @@ static const struct qmp_phy_cfg sdm845_ufsphy_cfg = { .mask_pcs_ready = PCS_READY, .is_dual_lane_phy = true, - .tx_b_lane_offset = 0x400, - .rx_b_lane_offset = 0x400, - .no_pcs_sw_reset = true, }; @@ -1238,12 +1237,12 @@ static int qcom_qmp_phy_init(struct phy *phy) qcom_qmp_phy_configure(tx, cfg->regs, cfg->tx_tbl, cfg->tx_tbl_num); /* Configuration for other LANE for USB-DP combo PHY */ if (cfg->is_dual_lane_phy) - qcom_qmp_phy_configure(tx + cfg->tx_b_lane_offset, cfg->regs, + qcom_qmp_phy_configure(qphy->tx2, cfg->regs, cfg->tx_tbl, cfg->tx_tbl_num); qcom_qmp_phy_configure(rx, cfg->regs, cfg->rx_tbl, cfg->rx_tbl_num); if (cfg->is_dual_lane_phy) - qcom_qmp_phy_configure(rx + cfg->rx_b_lane_offset, cfg->regs, + qcom_qmp_phy_configure(qphy->rx2, cfg->regs, cfg->rx_tbl, cfg->rx_tbl_num); qcom_qmp_phy_configure(pcs, cfg->regs, cfg->pcs_tbl, cfg->pcs_tbl_num); @@ -1365,7 +1364,8 @@ static int 
qcom_qmp_phy_poweron(struct phy *phy) return ret; } -static int qcom_qmp_phy_set_mode(struct phy *phy, enum phy_mode mode) +static int qcom_qmp_phy_set_mode(struct phy *phy, + enum phy_mode mode, int submode) { struct qmp_phy *qphy = phy_get_drvdata(phy); struct qcom_qmp *qmp = qphy->qmp; @@ -1542,6 +1542,11 @@ static int qcom_qmp_phy_clk_init(struct device *dev) return devm_clk_bulk_get(dev, num, qmp->clks); } +static void phy_pipe_clk_release_provider(void *res) +{ + of_clk_del_provider(res); +} + /* * Register a fixed rate pipe clock. * @@ -1588,7 +1593,23 @@ static int phy_pipe_clk_register(struct qcom_qmp *qmp, struct device_node *np) fixed->fixed_rate = 125000000; fixed->hw.init = &init; - return devm_clk_hw_register(qmp->dev, &fixed->hw); + ret = devm_clk_hw_register(qmp->dev, &fixed->hw); + if (ret) + return ret; + + ret = of_clk_add_hw_provider(np, of_clk_hw_simple_get, &fixed->hw); + if (ret) + return ret; + + /* + * Roll a devm action because the clock provider is the child node, but + * the child node is not actually a device. + */ + ret = devm_add_action(qmp->dev, phy_pipe_clk_release_provider, np); + if (ret) + phy_pipe_clk_release_provider(np); + + return ret; } static const struct phy_ops qcom_qmp_phy_gen_ops = { @@ -1614,8 +1635,9 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id) /* * Get memory resources for each phy lane: - * Resources are indexed as: tx -> 0; rx -> 1; pcs -> 2; and - * pcs_misc (optional) -> 3. + * Resources are indexed as: tx -> 0; rx -> 1; pcs -> 2. + * For dual lane PHYs: tx2 -> 3, rx2 -> 4, pcs_misc (optional) -> 5 + * For single lane PHYs: pcs_misc (optional) -> 3. */ qphy->tx = of_iomap(np, 0); if (!qphy->tx) @@ -1629,7 +1651,32 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id) if (!qphy->pcs) return -ENOMEM; - qphy->pcs_misc = of_iomap(np, 3); + /* + * If this is a dual-lane PHY, then there should be registers for the + * second lane. Some old device trees did not specify this, so fall + * back to old legacy behavior of assuming they can be reached at an + * offset from the first lane. + */ + if (qmp->cfg->is_dual_lane_phy) { + qphy->tx2 = of_iomap(np, 3); + qphy->rx2 = of_iomap(np, 4); + if (!qphy->tx2 || !qphy->rx2) { + dev_warn(dev, + "Underspecified device tree, falling back to legacy register regions\n"); + + /* In the old version, pcs_misc is at index 3. 
*/ + qphy->pcs_misc = qphy->tx2; + qphy->tx2 = qphy->tx + QMP_PHY_LEGACY_LANE_STRIDE; + qphy->rx2 = qphy->rx + QMP_PHY_LEGACY_LANE_STRIDE; + + } else { + qphy->pcs_misc = of_iomap(np, 5); + } + + } else { + qphy->pcs_misc = of_iomap(np, 3); + } + if (!qphy->pcs_misc) dev_vdbg(dev, "PHY pcs_misc-reg not used\n"); diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c index 6d4b44b569bc..9177989f22d1 100644 --- a/drivers/phy/qualcomm/phy-qcom-qusb2.c +++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c @@ -425,7 +425,8 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy) HSTX_TRIM_MASK); } -static int qusb2_phy_set_mode(struct phy *phy, enum phy_mode mode) +static int qusb2_phy_set_mode(struct phy *phy, + enum phy_mode mode, int submode) { struct qusb2_phy *qphy = phy_get_drvdata(phy); diff --git a/drivers/phy/qualcomm/phy-qcom-ufs-qmp-14nm.c b/drivers/phy/qualcomm/phy-qcom-ufs-qmp-14nm.c index ba1895b76a5d..1e0d4f2046a4 100644 --- a/drivers/phy/qualcomm/phy-qcom-ufs-qmp-14nm.c +++ b/drivers/phy/qualcomm/phy-qcom-ufs-qmp-14nm.c @@ -65,7 +65,8 @@ static int ufs_qcom_phy_qmp_14nm_exit(struct phy *generic_phy) } static -int ufs_qcom_phy_qmp_14nm_set_mode(struct phy *generic_phy, enum phy_mode mode) +int ufs_qcom_phy_qmp_14nm_set_mode(struct phy *generic_phy, + enum phy_mode mode, int submode) { struct ufs_qcom_phy *phy_common = get_ufs_qcom_phy(generic_phy); diff --git a/drivers/phy/qualcomm/phy-qcom-ufs-qmp-20nm.c b/drivers/phy/qualcomm/phy-qcom-ufs-qmp-20nm.c index 49f435c71147..aef40f7a41d4 100644 --- a/drivers/phy/qualcomm/phy-qcom-ufs-qmp-20nm.c +++ b/drivers/phy/qualcomm/phy-qcom-ufs-qmp-20nm.c @@ -84,7 +84,8 @@ static int ufs_qcom_phy_qmp_20nm_exit(struct phy *generic_phy) } static -int ufs_qcom_phy_qmp_20nm_set_mode(struct phy *generic_phy, enum phy_mode mode) +int ufs_qcom_phy_qmp_20nm_set_mode(struct phy *generic_phy, + enum phy_mode mode, int submode) { struct ufs_qcom_phy *phy_common = get_ufs_qcom_phy(generic_phy); diff --git a/drivers/phy/qualcomm/phy-qcom-usb-hs.c b/drivers/phy/qualcomm/phy-qcom-usb-hs.c index abbbe75070da..04934f8dac91 100644 --- a/drivers/phy/qualcomm/phy-qcom-usb-hs.c +++ b/drivers/phy/qualcomm/phy-qcom-usb-hs.c @@ -42,7 +42,8 @@ struct qcom_usb_hs_phy { struct notifier_block vbus_notify; }; -static int qcom_usb_hs_phy_set_mode(struct phy *phy, enum phy_mode mode) +static int qcom_usb_hs_phy_set_mode(struct phy *phy, + enum phy_mode mode, int submode) { struct qcom_usb_hs_phy *uphy = phy_get_drvdata(phy); u8 addr; diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c index d0f412c25981..0a34782aaaa2 100644 --- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c +++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c @@ -307,16 +307,21 @@ static void rcar_gen3_init_otg(struct rcar_gen3_chan *ch) void __iomem *usb2_base = ch->base; u32 val; + /* Should not use functions of read-modify-write a register */ + val = readl(usb2_base + USB2_LINECTRL1); + val = (val & ~USB2_LINECTRL1_DP_RPD) | USB2_LINECTRL1_DPRPD_EN | + USB2_LINECTRL1_DMRPD_EN | USB2_LINECTRL1_DM_RPD; + writel(val, usb2_base + USB2_LINECTRL1); + val = readl(usb2_base + USB2_VBCTRL); writel(val | USB2_VBCTRL_DRVVBUSSEL, usb2_base + USB2_VBCTRL); - writel(USB2_OBINT_BITS, usb2_base + USB2_OBINTSTA); - rcar_gen3_control_otg_irq(ch, 1); val = readl(usb2_base + USB2_ADPCTRL); writel(val | USB2_ADPCTRL_IDPULLUP, usb2_base + USB2_ADPCTRL); - val = readl(usb2_base + USB2_LINECTRL1); - rcar_gen3_set_linectrl(ch, 0, 0); - writel(val | 
USB2_LINECTRL1_DPRPD_EN | USB2_LINECTRL1_DMRPD_EN, - usb2_base + USB2_LINECTRL1); + + msleep(20); + + writel(0xffffffff, usb2_base + USB2_OBINTSTA); + writel(USB2_OBINT_BITS, usb2_base + USB2_OBINTEN); rcar_gen3_device_recognition(ch); } diff --git a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c index 24bd2717abdb..91fba60267a0 100644 --- a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c +++ b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c @@ -1168,8 +1168,8 @@ static int rockchip_usb2phy_probe(struct platform_device *pdev) struct phy *phy; /* This driver aims to support both otg-port and host-port */ - if (of_node_cmp(child_np->name, "host-port") && - of_node_cmp(child_np->name, "otg-port")) + if (!of_node_name_eq(child_np, "host-port") && + !of_node_name_eq(child_np, "otg-port")) goto next_child; phy = devm_phy_create(dev, child_np, &rockchip_usb2phy_ops); @@ -1183,7 +1183,7 @@ static int rockchip_usb2phy_probe(struct platform_device *pdev) phy_set_drvdata(rport->phy, rport); /* initialize otg/host port separately */ - if (!of_node_cmp(child_np->name, "host-port")) { + if (of_node_name_eq(child_np, "host-port")) { ret = rockchip_usb2phy_host_port_init(rphy, rport, child_np); if (ret) diff --git a/drivers/phy/rockchip/phy-rockchip-typec.c b/drivers/phy/rockchip/phy-rockchip-typec.c index c57e496f0b0c..e32edeebcd63 100644 --- a/drivers/phy/rockchip/phy-rockchip-typec.c +++ b/drivers/phy/rockchip/phy-rockchip-typec.c @@ -1176,10 +1176,10 @@ static int rockchip_typec_phy_probe(struct platform_device *pdev) for_each_available_child_of_node(np, child_np) { struct phy *phy; - if (!of_node_cmp(child_np->name, "dp-port")) + if (of_node_name_eq(child_np, "dp-port")) phy = devm_phy_create(dev, child_np, &rockchip_dp_phy_ops); - else if (!of_node_cmp(child_np->name, "usb3-port")) + else if (of_node_name_eq(child_np, "usb3-port")) phy = devm_phy_create(dev, child_np, &rockchip_usb3_phy_ops); else diff --git a/drivers/phy/ti/Kconfig b/drivers/phy/ti/Kconfig index 20503562666c..f137e0107764 100644 --- a/drivers/phy/ti/Kconfig +++ b/drivers/phy/ti/Kconfig @@ -76,3 +76,13 @@ config TWL4030_USB family chips (including the TWL5030 and TPS659x0 devices). This transceiver supports high and full speed devices plus, in host mode, low speed. + +config PHY_TI_GMII_SEL + tristate + default y if TI_CPSW=y + depends on TI_CPSW || COMPILE_TEST + select GENERIC_PHY + default m + help + This driver supports configuring the TI CPSW port mode depending on + the Ethernet PHY connected to the CPSW port. 
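The PHY_TI_GMII_SEL entry above is consumed through the extended set_mode call introduced earlier in this series. A rough consumer-side sketch (the "gmii-sel" con_id and the surrounding variable names are illustrative assumptions, not taken from this series):

	struct phy *gmii_sel_phy;
	int ret;

	/* "gmii-sel" is an assumed lookup name for illustration */
	gmii_sel_phy = devm_phy_get(dev, "gmii-sel");
	if (IS_ERR(gmii_sel_phy))
		return PTR_ERR(gmii_sel_phy);

	/* The phy_interface_t value travels in the new submode argument */
	ret = phy_set_mode_ext(gmii_sel_phy, PHY_MODE_ETHERNET,
			       PHY_INTERFACE_MODE_RGMII_ID);
	if (ret)
		return ret;

phy_gmii_sel_mode() in the new driver below then programs the port-mode register field (and, on SoCs that support it, the RGMII internal-delay bit) to match.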
diff --git a/drivers/phy/ti/Makefile b/drivers/phy/ti/Makefile index 9f361756eaf2..bea8f25a137a 100644 --- a/drivers/phy/ti/Makefile +++ b/drivers/phy/ti/Makefile @@ -6,3 +6,4 @@ obj-$(CONFIG_OMAP_USB2) += phy-omap-usb2.o obj-$(CONFIG_TI_PIPE3) += phy-ti-pipe3.o obj-$(CONFIG_PHY_TUSB1210) += phy-tusb1210.o obj-$(CONFIG_TWL4030_USB) += phy-twl4030-usb.o +obj-$(CONFIG_PHY_TI_GMII_SEL) += phy-gmii-sel.o diff --git a/drivers/phy/ti/phy-da8xx-usb.c b/drivers/phy/ti/phy-da8xx-usb.c index befb886ff121..d5f4fbc32b52 100644 --- a/drivers/phy/ti/phy-da8xx-usb.c +++ b/drivers/phy/ti/phy-da8xx-usb.c @@ -93,7 +93,8 @@ static int da8xx_usb20_phy_power_off(struct phy *phy) return 0; } -static int da8xx_usb20_phy_set_mode(struct phy *phy, enum phy_mode mode) +static int da8xx_usb20_phy_set_mode(struct phy *phy, + enum phy_mode mode, int submode) { struct da8xx_usb_phy *d_phy = phy_get_drvdata(phy); u32 val; diff --git a/drivers/phy/ti/phy-gmii-sel.c b/drivers/phy/ti/phy-gmii-sel.c new file mode 100644 index 000000000000..77fdaa551977 --- /dev/null +++ b/drivers/phy/ti/phy-gmii-sel.c @@ -0,0 +1,349 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Texas Instruments CPSW Port's PHY Interface Mode selection Driver + * + * Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/ + * + * Based on cpsw-phy-sel.c driver created by Mugunthan V N + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* AM33xx SoC specific definitions for the CONTROL port */ +#define AM33XX_GMII_SEL_MODE_MII 0 +#define AM33XX_GMII_SEL_MODE_RMII 1 +#define AM33XX_GMII_SEL_MODE_RGMII 2 + +enum { + PHY_GMII_SEL_PORT_MODE, + PHY_GMII_SEL_RGMII_ID_MODE, + PHY_GMII_SEL_RMII_IO_CLK_EN, + PHY_GMII_SEL_LAST, +}; + +struct phy_gmii_sel_phy_priv { + struct phy_gmii_sel_priv *priv; + u32 id; + struct phy *if_phy; + int rmii_clock_external; + int phy_if_mode; + struct regmap_field *fields[PHY_GMII_SEL_LAST]; +}; + +struct phy_gmii_sel_soc_data { + u32 num_ports; + u32 features; + const struct reg_field (*regfields)[PHY_GMII_SEL_LAST]; +}; + +struct phy_gmii_sel_priv { + struct device *dev; + const struct phy_gmii_sel_soc_data *soc_data; + struct regmap *regmap; + struct phy_provider *phy_provider; + struct phy_gmii_sel_phy_priv *if_phys; +}; + +static int phy_gmii_sel_mode(struct phy *phy, enum phy_mode mode, int submode) +{ + struct phy_gmii_sel_phy_priv *if_phy = phy_get_drvdata(phy); + const struct phy_gmii_sel_soc_data *soc_data = if_phy->priv->soc_data; + struct device *dev = if_phy->priv->dev; + struct regmap_field *regfield; + int ret, rgmii_id = 0; + u32 gmii_sel_mode = 0; + + if (mode != PHY_MODE_ETHERNET) + return -EINVAL; + + switch (submode) { + case PHY_INTERFACE_MODE_RMII: + gmii_sel_mode = AM33XX_GMII_SEL_MODE_RMII; + break; + + case PHY_INTERFACE_MODE_RGMII: + gmii_sel_mode = AM33XX_GMII_SEL_MODE_RGMII; + break; + + case PHY_INTERFACE_MODE_RGMII_ID: + case PHY_INTERFACE_MODE_RGMII_RXID: + case PHY_INTERFACE_MODE_RGMII_TXID: + gmii_sel_mode = AM33XX_GMII_SEL_MODE_RGMII; + rgmii_id = 1; + break; + + case PHY_INTERFACE_MODE_MII: + gmii_sel_mode = AM33XX_GMII_SEL_MODE_MII; + break; + + default: + dev_warn(dev, + "port%u: unsupported mode: \"%s\"\n",
+ if_phy->id, phy_modes(submode)); + return -EINVAL; + } + + if_phy->phy_if_mode = submode; + + dev_dbg(dev, "%s id:%u mode:%u rgmii_id:%d rmii_clk_ext:%d\n", + __func__, if_phy->id, mode, rgmii_id, + if_phy->rmii_clock_external); + + regfield = if_phy->fields[PHY_GMII_SEL_PORT_MODE]; + ret = regmap_field_write(regfield, gmii_sel_mode); + if (ret) { + dev_err(dev, "port%u: set mode fail %d\n", if_phy->id, ret); + return ret; + } + + if (soc_data->features & BIT(PHY_GMII_SEL_RGMII_ID_MODE) && + if_phy->fields[PHY_GMII_SEL_RGMII_ID_MODE]) { + regfield = if_phy->fields[PHY_GMII_SEL_RGMII_ID_MODE]; + ret = regmap_field_write(regfield, rgmii_id); + if (ret) + return ret; + } + + if (soc_data->features & BIT(PHY_GMII_SEL_RMII_IO_CLK_EN) && + if_phy->fields[PHY_GMII_SEL_RMII_IO_CLK_EN]) { + regfield = if_phy->fields[PHY_GMII_SEL_RMII_IO_CLK_EN]; + ret = regmap_field_write(regfield, + if_phy->rmii_clock_external); + } + + return 0; +} + +static const +struct reg_field phy_gmii_sel_fields_am33xx[][PHY_GMII_SEL_LAST] = { + { + [PHY_GMII_SEL_PORT_MODE] = REG_FIELD(0x650, 0, 1), + [PHY_GMII_SEL_RGMII_ID_MODE] = REG_FIELD(0x650, 4, 4), + [PHY_GMII_SEL_RMII_IO_CLK_EN] = REG_FIELD(0x650, 6, 6), + }, + { + [PHY_GMII_SEL_PORT_MODE] = REG_FIELD(0x650, 2, 3), + [PHY_GMII_SEL_RGMII_ID_MODE] = REG_FIELD(0x650, 5, 5), + [PHY_GMII_SEL_RMII_IO_CLK_EN] = REG_FIELD(0x650, 7, 7), + }, +}; + +static const +struct phy_gmii_sel_soc_data phy_gmii_sel_soc_am33xx = { + .num_ports = 2, + .features = BIT(PHY_GMII_SEL_RGMII_ID_MODE) | + BIT(PHY_GMII_SEL_RMII_IO_CLK_EN), + .regfields = phy_gmii_sel_fields_am33xx, +}; + +static const +struct reg_field phy_gmii_sel_fields_dra7[][PHY_GMII_SEL_LAST] = { + { + [PHY_GMII_SEL_PORT_MODE] = REG_FIELD(0x554, 0, 1), + [PHY_GMII_SEL_RGMII_ID_MODE] = REG_FIELD((~0), 0, 0), + [PHY_GMII_SEL_RMII_IO_CLK_EN] = REG_FIELD((~0), 0, 0), + }, + { + [PHY_GMII_SEL_PORT_MODE] = REG_FIELD(0x554, 4, 5), + [PHY_GMII_SEL_RGMII_ID_MODE] = REG_FIELD((~0), 0, 0), + [PHY_GMII_SEL_RMII_IO_CLK_EN] = REG_FIELD((~0), 0, 0), + }, +}; + +static const +struct phy_gmii_sel_soc_data phy_gmii_sel_soc_dra7 = { + .num_ports = 2, + .regfields = phy_gmii_sel_fields_dra7, +}; + +static const +struct phy_gmii_sel_soc_data phy_gmii_sel_soc_dm814 = { + .num_ports = 2, + .features = BIT(PHY_GMII_SEL_RGMII_ID_MODE), + .regfields = phy_gmii_sel_fields_am33xx, +}; + +static const struct of_device_id phy_gmii_sel_id_table[] = { + { + .compatible = "ti,am3352-phy-gmii-sel", + .data = &phy_gmii_sel_soc_am33xx, + }, + { + .compatible = "ti,dra7xx-phy-gmii-sel", + .data = &phy_gmii_sel_soc_dra7, + }, + { + .compatible = "ti,am43xx-phy-gmii-sel", + .data = &phy_gmii_sel_soc_am33xx, + }, + { + .compatible = "ti,dm814-phy-gmii-sel", + .data = &phy_gmii_sel_soc_dm814, + }, + {} +}; +MODULE_DEVICE_TABLE(of, phy_gmii_sel_id_table); + +static const struct phy_ops phy_gmii_sel_ops = { + .set_mode = phy_gmii_sel_mode, + .owner = THIS_MODULE, +}; + +static struct phy *phy_gmii_sel_of_xlate(struct device *dev, + struct of_phandle_args *args) +{ + struct phy_gmii_sel_priv *priv = dev_get_drvdata(dev); + int phy_id = args->args[0]; + + if (args->args_count < 1) + return ERR_PTR(-EINVAL); + if (!priv || !priv->if_phys) + return ERR_PTR(-ENODEV); + if (priv->soc_data->features & BIT(PHY_GMII_SEL_RMII_IO_CLK_EN) && + args->args_count < 2) + return ERR_PTR(-EINVAL); + if (phy_id > priv->soc_data->num_ports) + return ERR_PTR(-EINVAL); + if (phy_id != priv->if_phys[phy_id - 1].id) + return ERR_PTR(-EINVAL); + + phy_id--; + if 
(priv->soc_data->features & BIT(PHY_GMII_SEL_RMII_IO_CLK_EN)) + priv->if_phys[phy_id].rmii_clock_external = args->args[1]; + dev_dbg(dev, "%s id:%u ext:%d\n", __func__, + priv->if_phys[phy_id].id, args->args[1]); + + return priv->if_phys[phy_id].if_phy; +} + +static int phy_gmii_sel_init_ports(struct phy_gmii_sel_priv *priv) +{ + const struct phy_gmii_sel_soc_data *soc_data = priv->soc_data; + struct device *dev = priv->dev; + struct phy_gmii_sel_phy_priv *if_phys; + int i, num_ports, ret; + + num_ports = priv->soc_data->num_ports; + + if_phys = devm_kcalloc(priv->dev, num_ports, + sizeof(*if_phys), GFP_KERNEL); + if (!if_phys) + return -ENOMEM; + dev_dbg(dev, "%s %d\n", __func__, num_ports); + + for (i = 0; i < num_ports; i++) { + const struct reg_field *field; + struct regmap_field *regfield; + + if_phys[i].id = i + 1; + if_phys[i].priv = priv; + + field = &soc_data->regfields[i][PHY_GMII_SEL_PORT_MODE]; + dev_dbg(dev, "%s field %x %d %d\n", __func__, + field->reg, field->msb, field->lsb); + + regfield = devm_regmap_field_alloc(dev, priv->regmap, *field); + if (IS_ERR(regfield)) + return PTR_ERR(regfield); + if_phys[i].fields[PHY_GMII_SEL_PORT_MODE] = regfield; + + field = &soc_data->regfields[i][PHY_GMII_SEL_RGMII_ID_MODE]; + if (field->reg != (~0)) { + regfield = devm_regmap_field_alloc(dev, + priv->regmap, + *field); + if (IS_ERR(regfield)) + return PTR_ERR(regfield); + if_phys[i].fields[PHY_GMII_SEL_RGMII_ID_MODE] = + regfield; + } + + field = &soc_data->regfields[i][PHY_GMII_SEL_RMII_IO_CLK_EN]; + if (field->reg != (~0)) { + regfield = devm_regmap_field_alloc(dev, + priv->regmap, + *field); + if (IS_ERR(regfield)) + return PTR_ERR(regfield); + if_phys[i].fields[PHY_GMII_SEL_RMII_IO_CLK_EN] = + regfield; + } + + if_phys[i].if_phy = devm_phy_create(dev, + priv->dev->of_node, + &phy_gmii_sel_ops); + if (IS_ERR(if_phys[i].if_phy)) { + ret = PTR_ERR(if_phys[i].if_phy); + dev_err(dev, "Failed to create phy%d %d\n", i, ret); + return ret; + } + phy_set_drvdata(if_phys[i].if_phy, &if_phys[i]); + } + + priv->if_phys = if_phys; + return 0; +} + +static int phy_gmii_sel_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct device_node *node = dev->of_node; + const struct of_device_id *of_id; + struct phy_gmii_sel_priv *priv; + int ret; + + of_id = of_match_node(phy_gmii_sel_id_table, pdev->dev.of_node); + if (!of_id) + return -EINVAL; + + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->dev = &pdev->dev; + priv->soc_data = of_id->data; + + priv->regmap = syscon_node_to_regmap(node->parent); + if (IS_ERR(priv->regmap)) { + ret = PTR_ERR(priv->regmap); + dev_err(dev, "Failed to get syscon %d\n", ret); + return ret; + } + + ret = phy_gmii_sel_init_ports(priv); + if (ret) + return ret; + + dev_set_drvdata(&pdev->dev, priv); + + priv->phy_provider = + devm_of_phy_provider_register(dev, + phy_gmii_sel_of_xlate); + if (IS_ERR(priv->phy_provider)) { + ret = PTR_ERR(priv->phy_provider); + dev_err(dev, "Failed to create phy provider %d\n", ret); + return ret; + } + + return 0; +} + +static struct platform_driver phy_gmii_sel_driver = { + .probe = phy_gmii_sel_probe, + .driver = { + .name = "phy-gmii-sel", + .of_match_table = phy_gmii_sel_id_table, + }, +}; +module_platform_driver(phy_gmii_sel_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Grygorii Strashko "); +MODULE_DESCRIPTION("TI CPSW Port's PHY Interface Mode selection Driver"); diff --git a/drivers/phy/ti/phy-tusb1210.c b/drivers/phy/ti/phy-tusb1210.c index 
b8ec39ac4dfc..329fb938099a 100644 --- a/drivers/phy/ti/phy-tusb1210.c +++ b/drivers/phy/ti/phy-tusb1210.c @@ -53,7 +53,7 @@ static int tusb1210_power_off(struct phy *phy) return 0; } -static int tusb1210_set_mode(struct phy *phy, enum phy_mode mode) +static int tusb1210_set_mode(struct phy *phy, enum phy_mode mode, int submode) { struct tusb1210 *tusb = phy_get_drvdata(phy); int ret; diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig index 45ef4d22f14c..e3b62c2ee8d1 100644 --- a/drivers/platform/x86/Kconfig +++ b/drivers/platform/x86/Kconfig @@ -1172,14 +1172,6 @@ config INTEL_SMARTCONNECT This driver checks to determine whether the device has Intel Smart Connect enabled, and if so disables it. -config PVPANIC - tristate "pvpanic device support" - depends on ACPI - ---help--- - This driver provides support for the pvpanic device. pvpanic is - a paravirtualized device provided by QEMU; it lets a virtual machine - (guest) communicate panic events to the host. - config INTEL_PMC_IPC tristate "Intel PMC IPC Driver" depends on ACPI diff --git a/drivers/platform/x86/Makefile b/drivers/platform/x86/Makefile index d841c550e3cc..ce8da260c223 100644 --- a/drivers/platform/x86/Makefile +++ b/drivers/platform/x86/Makefile @@ -79,7 +79,6 @@ obj-$(CONFIG_APPLE_GMUX) += apple-gmux.o obj-$(CONFIG_INTEL_RST) += intel-rst.o obj-$(CONFIG_INTEL_SMARTCONNECT) += intel-smartconnect.o -obj-$(CONFIG_PVPANIC) += pvpanic.o obj-$(CONFIG_ALIENWARE_WMI) += alienware-wmi.o obj-$(CONFIG_INTEL_PMC_IPC) += intel_pmc_ipc.o obj-$(CONFIG_TOUCHSCREEN_DMI) += touchscreen_dmi.o diff --git a/drivers/platform/x86/intel_cht_int33fe.c b/drivers/platform/x86/intel_cht_int33fe.c index 616b8853a91f..02bc74608cf3 100644 --- a/drivers/platform/x86/intel_cht_int33fe.c +++ b/drivers/platform/x86/intel_cht_int33fe.c @@ -79,7 +79,7 @@ static const struct property_entry max17047_props[] = { }; static const struct property_entry fusb302_props[] = { - PROPERTY_ENTRY_STRING("fcs,extcon-name", "cht_wcove_pwrsrc"), + PROPERTY_ENTRY_STRING("linux,extcon-name", "cht_wcove_pwrsrc"), PROPERTY_ENTRY_U32("fcs,max-sink-microvolt", 12000000), PROPERTY_ENTRY_U32("fcs,max-sink-microamp", 3000000), PROPERTY_ENTRY_U32("fcs,max-sink-microwatt", 36000000), diff --git a/drivers/platform/x86/pvpanic.c b/drivers/platform/x86/pvpanic.c deleted file mode 100644 index fd86daba7ffd..000000000000 --- a/drivers/platform/x86/pvpanic.c +++ /dev/null @@ -1,124 +0,0 @@ -/* - * pvpanic.c - pvpanic Device Support - * - * Copyright (C) 2013 Fujitsu. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include -#include -#include -#include -#include - -MODULE_AUTHOR("Hu Tao "); -MODULE_DESCRIPTION("pvpanic device driver"); -MODULE_LICENSE("GPL"); - -static int pvpanic_add(struct acpi_device *device); -static int pvpanic_remove(struct acpi_device *device); - -static const struct acpi_device_id pvpanic_device_ids[] = { - { "QEMU0001", 0 }, - { "", 0 }, -}; -MODULE_DEVICE_TABLE(acpi, pvpanic_device_ids); - -#define PVPANIC_PANICKED (1 << 0) - -static u16 port; - -static struct acpi_driver pvpanic_driver = { - .name = "pvpanic", - .class = "QEMU", - .ids = pvpanic_device_ids, - .ops = { - .add = pvpanic_add, - .remove = pvpanic_remove, - }, - .owner = THIS_MODULE, -}; - -static void -pvpanic_send_event(unsigned int event) -{ - outb(event, port); -} - -static int -pvpanic_panic_notify(struct notifier_block *nb, unsigned long code, - void *unused) -{ - pvpanic_send_event(PVPANIC_PANICKED); - return NOTIFY_DONE; -} - -static struct notifier_block pvpanic_panic_nb = { - .notifier_call = pvpanic_panic_notify, - .priority = 1, /* let this called before broken drm_fb_helper */ -}; - - -static acpi_status -pvpanic_walk_resources(struct acpi_resource *res, void *context) -{ - switch (res->type) { - case ACPI_RESOURCE_TYPE_END_TAG: - return AE_OK; - - case ACPI_RESOURCE_TYPE_IO: - port = res->data.io.minimum; - return AE_OK; - - default: - return AE_ERROR; - } -} - -static int pvpanic_add(struct acpi_device *device) -{ - int ret; - - ret = acpi_bus_get_status(device); - if (ret < 0) - return ret; - - if (!device->status.enabled || !device->status.functional) - return -ENODEV; - - acpi_walk_resources(device->handle, METHOD_NAME__CRS, - pvpanic_walk_resources, NULL); - - if (!port) - return -ENODEV; - - atomic_notifier_chain_register(&panic_notifier_list, - &pvpanic_panic_nb); - - return 0; -} - -static int pvpanic_remove(struct acpi_device *device) -{ - - atomic_notifier_chain_unregister(&panic_notifier_list, - &pvpanic_panic_nb); - return 0; -} - -module_acpi_driver(pvpanic_driver); diff --git a/drivers/power/reset/at91-poweroff.c b/drivers/power/reset/at91-poweroff.c index fb2fc8f741a1..9e74e131c675 100644 --- a/drivers/power/reset/at91-poweroff.c +++ b/drivers/power/reset/at91-poweroff.c @@ -51,14 +51,16 @@ static const char *shdwc_wakeup_modes[] = { [AT91_SHDW_WKMODE0_ANYLEVEL] = "any", }; -static void __iomem *at91_shdwc_base; -static struct clk *sclk; -static void __iomem *mpddrc_base; +static struct shdwc { + struct clk *sclk; + void __iomem *shdwc_base; + void __iomem *mpddrc_base; +} at91_shdwc; static void __init at91_wakeup_status(struct platform_device *pdev) { const char *reason; - u32 reg = readl(at91_shdwc_base + AT91_SHDW_SR); + u32 reg = readl(at91_shdwc.shdwc_base + AT91_SHDW_SR); /* Simple power-on, just bail out */ if (!reg) @@ -75,11 +77,6 @@ static void __init at91_wakeup_status(struct platform_device *pdev) } static void at91_poweroff(void) -{ - writel(AT91_SHDW_KEY | AT91_SHDW_SHDW, at91_shdwc_base + AT91_SHDW_CR); -} - -static void at91_lpddr_poweroff(void) { asm volatile( /* Align to cache lines */ @@ -89,15 +86,17 @@ static void at91_lpddr_poweroff(void) " ldr r6, [%2, #" __stringify(AT91_SHDW_CR) "]\n\t" /* Power down SDRAM0 */ + " tst %0, #0\n\t" + " beq 1f\n\t" " str %1, [%0, #" 
__stringify(AT91_DDRSDRC_LPR) "]\n\t" /* Shutdown CPU */ - " str %3, [%2, #" __stringify(AT91_SHDW_CR) "]\n\t" + "1: str %3, [%2, #" __stringify(AT91_SHDW_CR) "]\n\t" " b .\n\t" : - : "r" (mpddrc_base), + : "r" (at91_shdwc.mpddrc_base), "r" cpu_to_le32(AT91_DDRSDRC_LPDDR2_PWOFF), - "r" (at91_shdwc_base), + "r" (at91_shdwc.shdwc_base), "r" cpu_to_le32(AT91_SHDW_KEY | AT91_SHDW_SHDW) : "r6"); } @@ -147,7 +146,7 @@ static void at91_poweroff_dt_set_wakeup_mode(struct platform_device *pdev) if (of_property_read_bool(np, "atmel,wakeup-rtt-timer")) mode |= AT91_SHDW_RTTWKEN; - writel(wakeup_mode | mode, at91_shdwc_base + AT91_SHDW_MR); + writel(wakeup_mode | mode, at91_shdwc.shdwc_base + AT91_SHDW_MR); } static int __init at91_poweroff_probe(struct platform_device *pdev) @@ -158,15 +157,15 @@ static int __init at91_poweroff_probe(struct platform_device *pdev) int ret; res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - at91_shdwc_base = devm_ioremap_resource(&pdev->dev, res); - if (IS_ERR(at91_shdwc_base)) - return PTR_ERR(at91_shdwc_base); + at91_shdwc.shdwc_base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(at91_shdwc.shdwc_base)) + return PTR_ERR(at91_shdwc.shdwc_base); - sclk = devm_clk_get(&pdev->dev, NULL); - if (IS_ERR(sclk)) - return PTR_ERR(sclk); + at91_shdwc.sclk = devm_clk_get(&pdev->dev, NULL); + if (IS_ERR(at91_shdwc.sclk)) + return PTR_ERR(at91_shdwc.sclk); - ret = clk_prepare_enable(sclk); + ret = clk_prepare_enable(at91_shdwc.sclk); if (ret) { dev_err(&pdev->dev, "Could not enable slow clock\n"); return ret; @@ -177,44 +176,47 @@ static int __init at91_poweroff_probe(struct platform_device *pdev) if (pdev->dev.of_node) at91_poweroff_dt_set_wakeup_mode(pdev); - pm_power_off = at91_poweroff; - np = of_find_compatible_node(NULL, NULL, "atmel,sama5d3-ddramc"); - if (!np) - return 0; + if (np) { + at91_shdwc.mpddrc_base = of_iomap(np, 0); + of_node_put(np); - mpddrc_base = of_iomap(np, 0); - of_node_put(np); + if (!at91_shdwc.mpddrc_base) { + ret = -ENOMEM; + goto clk_disable; + } - if (!mpddrc_base) - return 0; + ddr_type = readl(at91_shdwc.mpddrc_base + AT91_DDRSDRC_MDR) & + AT91_DDRSDRC_MD; + if (ddr_type != AT91_DDRSDRC_MD_LPDDR2 && + ddr_type != AT91_DDRSDRC_MD_LPDDR3) { + iounmap(at91_shdwc.mpddrc_base); + at91_shdwc.mpddrc_base = NULL; + } + } - ddr_type = readl(mpddrc_base + AT91_DDRSDRC_MDR) & AT91_DDRSDRC_MD; - if ((ddr_type == AT91_DDRSDRC_MD_LPDDR2) || - (ddr_type == AT91_DDRSDRC_MD_LPDDR3)) - pm_power_off = at91_lpddr_poweroff; - else - iounmap(mpddrc_base); + pm_power_off = at91_poweroff; return 0; + +clk_disable: + clk_disable_unprepare(at91_shdwc.sclk); + return ret; } static int __exit at91_poweroff_remove(struct platform_device *pdev) { - if (pm_power_off == at91_poweroff || - pm_power_off == at91_lpddr_poweroff) + if (pm_power_off == at91_poweroff) pm_power_off = NULL; - clk_disable_unprepare(sclk); + if (at91_shdwc.mpddrc_base) + iounmap(at91_shdwc.mpddrc_base); + + clk_disable_unprepare(at91_shdwc.sclk); return 0; } -static const struct of_device_id at91_ramc_of_match[] = { - { .compatible = "atmel,sama5d3-ddramc", }, - { /* sentinel */ } -}; - static const struct of_device_id at91_poweroff_of_match[] = { { .compatible = "atmel,at91sam9260-shdwc", }, { .compatible = "atmel,at91sam9rl-shdwc", }, diff --git a/drivers/power/reset/axxia-reset.c b/drivers/power/reset/axxia-reset.c index 4e4cd1c8fe50..b16013265142 100644 --- a/drivers/power/reset/axxia-reset.c +++ b/drivers/power/reset/axxia-reset.c @@ -65,7 +65,7 @@ static int axxia_reset_probe(struct 
platform_device *pdev) syscon = syscon_regmap_lookup_by_phandle(dev->of_node, "syscon"); if (IS_ERR(syscon)) { - pr_err("%s: syscon lookup failed\n", dev->of_node->name); + pr_err("%pOFn: syscon lookup failed\n", dev->of_node); return PTR_ERR(syscon); } diff --git a/drivers/power/reset/gpio-poweroff.c b/drivers/power/reset/gpio-poweroff.c index 38206c39b3bf..52525b6c18db 100644 --- a/drivers/power/reset/gpio-poweroff.c +++ b/drivers/power/reset/gpio-poweroff.c @@ -26,6 +26,8 @@ */ static struct gpio_desc *reset_gpio; static u32 timeout = DEFAULT_TIMEOUT_MS; +static u32 active_delay = 100; +static u32 inactive_delay = 100; static void gpio_poweroff_do_poweroff(void) { @@ -33,10 +35,11 @@ static void gpio_poweroff_do_poweroff(void) /* drive it active, also inactive->active edge */ gpiod_direction_output(reset_gpio, 1); - mdelay(100); + mdelay(active_delay); + /* drive inactive, also active->inactive edge */ gpiod_set_value_cansleep(reset_gpio, 0); - mdelay(100); + mdelay(inactive_delay); /* drive it active, also inactive->active edge */ gpiod_set_value_cansleep(reset_gpio, 1); @@ -66,6 +69,9 @@ static int gpio_poweroff_probe(struct platform_device *pdev) else flags = GPIOD_OUT_LOW; + device_property_read_u32(&pdev->dev, "active-delay-ms", &active_delay); + device_property_read_u32(&pdev->dev, "inactive-delay-ms", + &inactive_delay); device_property_read_u32(&pdev->dev, "timeout-ms", &timeout); reset_gpio = devm_gpiod_get(&pdev->dev, NULL, flags); diff --git a/drivers/power/reset/ocelot-reset.c b/drivers/power/reset/ocelot-reset.c index 5a13a5cc8188..419952c61fd0 100644 --- a/drivers/power/reset/ocelot-reset.c +++ b/drivers/power/reset/ocelot-reset.c @@ -26,6 +26,13 @@ struct ocelot_reset_context { #define SOFT_CHIP_RST BIT(0) +#define ICPU_CFG_CPU_SYSTEM_CTRL_GENERAL_CTRL 0x24 +#define IF_SI_OWNER_MASK GENMASK(1, 0) +#define IF_SI_OWNER_SISL 0 +#define IF_SI_OWNER_SIBM 1 +#define IF_SI_OWNER_SIMC 2 +#define IF_SI_OWNER_OFFSET 4 + static int ocelot_restart_handle(struct notifier_block *this, unsigned long mode, void *cmd) { @@ -37,6 +44,11 @@ static int ocelot_restart_handle(struct notifier_block *this, regmap_update_bits(ctx->cpu_ctrl, ICPU_CFG_CPU_SYSTEM_CTRL_RESET, CORE_RST_PROTECT, 0); + /* Make the SI back to boot mode */ + regmap_update_bits(ctx->cpu_ctrl, ICPU_CFG_CPU_SYSTEM_CTRL_GENERAL_CTRL, + IF_SI_OWNER_MASK << IF_SI_OWNER_OFFSET, + IF_SI_OWNER_SIBM << IF_SI_OWNER_OFFSET); + writel(SOFT_CHIP_RST, ctx->base); pr_emerg("Unable to restart system\n"); diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig index f27cf0709500..e901b9879e7e 100644 --- a/drivers/power/supply/Kconfig +++ b/drivers/power/supply/Kconfig @@ -652,4 +652,12 @@ config CHARGER_SC2731 Say Y here to enable support for battery charging with SC2731 PMIC chips. +config FUEL_GAUGE_SC27XX + tristate "Spreadtrum SC27XX fuel gauge driver" + depends on MFD_SC27XX_PMIC || COMPILE_TEST + depends on IIO + help + Say Y here to enable support for fuel gauge with SC27XX + PMIC chips. 
+ endif # POWER_SUPPLY diff --git a/drivers/power/supply/Makefile b/drivers/power/supply/Makefile index 767105b88d00..b731c2a9b695 100644 --- a/drivers/power/supply/Makefile +++ b/drivers/power/supply/Makefile @@ -86,3 +86,4 @@ obj-$(CONFIG_AXP288_FUEL_GAUGE) += axp288_fuel_gauge.o obj-$(CONFIG_AXP288_CHARGER) += axp288_charger.o obj-$(CONFIG_CHARGER_CROS_USBPD) += cros_usbpd-charger.o obj-$(CONFIG_CHARGER_SC2731) += sc2731_charger.o +obj-$(CONFIG_FUEL_GAUGE_SC27XX) += sc27xx_fuel_gauge.o diff --git a/drivers/power/supply/axp20x_ac_power.c b/drivers/power/supply/axp20x_ac_power.c index 0771f951b11f..59b4c8d3b961 100644 --- a/drivers/power/supply/axp20x_ac_power.c +++ b/drivers/power/supply/axp20x_ac_power.c @@ -27,6 +27,16 @@ #define AXP20X_PWR_STATUS_ACIN_PRESENT BIT(7) #define AXP20X_PWR_STATUS_ACIN_AVAIL BIT(6) +#define AXP813_VHOLD_MASK GENMASK(5, 3) +#define AXP813_VHOLD_UV_TO_BIT(x) ((((x) / 100000) - 40) << 3) +#define AXP813_VHOLD_REG_TO_UV(x) \ + (((((x) & AXP813_VHOLD_MASK) >> 3) + 40) * 100000) + +#define AXP813_CURR_LIMIT_MASK GENMASK(2, 0) +#define AXP813_CURR_LIMIT_UA_TO_BIT(x) (((x) / 500000) - 3) +#define AXP813_CURR_LIMIT_REG_TO_UA(x) \ + ((((x) & AXP813_CURR_LIMIT_MASK) + 3) * 500000) + #define DRVNAME "axp20x-ac-power-supply" struct axp20x_ac_power { @@ -102,6 +112,57 @@ static int axp20x_ac_power_get_property(struct power_supply *psy, return 0; + case POWER_SUPPLY_PROP_VOLTAGE_MIN: + ret = regmap_read(power->regmap, AXP813_ACIN_PATH_CTRL, ®); + if (ret) + return ret; + + val->intval = AXP813_VHOLD_REG_TO_UV(reg); + + return 0; + + case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT: + ret = regmap_read(power->regmap, AXP813_ACIN_PATH_CTRL, ®); + if (ret) + return ret; + + val->intval = AXP813_CURR_LIMIT_REG_TO_UA(reg); + /* AXP813 datasheet defines values 11x as 4000mA */ + if (val->intval > 4000000) + val->intval = 4000000; + + return 0; + + default: + return -EINVAL; + } + + return -EINVAL; +} + +static int axp813_ac_power_set_property(struct power_supply *psy, + enum power_supply_property psp, + const union power_supply_propval *val) +{ + struct axp20x_ac_power *power = power_supply_get_drvdata(psy); + + switch (psp) { + case POWER_SUPPLY_PROP_VOLTAGE_MIN: + if (val->intval < 4000000 || val->intval > 4700000) + return -EINVAL; + + return regmap_update_bits(power->regmap, AXP813_ACIN_PATH_CTRL, + AXP813_VHOLD_MASK, + AXP813_VHOLD_UV_TO_BIT(val->intval)); + + case POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT: + if (val->intval < 1500000 || val->intval > 4000000) + return -EINVAL; + + return regmap_update_bits(power->regmap, AXP813_ACIN_PATH_CTRL, + AXP813_CURR_LIMIT_MASK, + AXP813_CURR_LIMIT_UA_TO_BIT(val->intval)); + default: return -EINVAL; } @@ -109,6 +170,13 @@ static int axp20x_ac_power_get_property(struct power_supply *psy, return -EINVAL; } +static int axp813_ac_power_prop_writeable(struct power_supply *psy, + enum power_supply_property psp) +{ + return psp == POWER_SUPPLY_PROP_VOLTAGE_MIN || + psp == POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT; +} + static enum power_supply_property axp20x_ac_power_properties[] = { POWER_SUPPLY_PROP_HEALTH, POWER_SUPPLY_PROP_PRESENT, @@ -123,6 +191,14 @@ static enum power_supply_property axp22x_ac_power_properties[] = { POWER_SUPPLY_PROP_ONLINE, }; +static enum power_supply_property axp813_ac_power_properties[] = { + POWER_SUPPLY_PROP_HEALTH, + POWER_SUPPLY_PROP_PRESENT, + POWER_SUPPLY_PROP_ONLINE, + POWER_SUPPLY_PROP_VOLTAGE_MIN, + POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT, +}; + static const struct power_supply_desc axp20x_ac_power_desc = { .name = 
"axp20x-ac", .type = POWER_SUPPLY_TYPE_MAINS, @@ -139,6 +215,16 @@ static const struct power_supply_desc axp22x_ac_power_desc = { .get_property = axp20x_ac_power_get_property, }; +static const struct power_supply_desc axp813_ac_power_desc = { + .name = "axp813-ac", + .type = POWER_SUPPLY_TYPE_MAINS, + .properties = axp813_ac_power_properties, + .num_properties = ARRAY_SIZE(axp813_ac_power_properties), + .property_is_writeable = axp813_ac_power_prop_writeable, + .get_property = axp20x_ac_power_get_property, + .set_property = axp813_ac_power_set_property, +}; + struct axp_data { const struct power_supply_desc *power_desc; bool acin_adc; @@ -154,6 +240,11 @@ static const struct axp_data axp22x_data = { .acin_adc = false, }; +static const struct axp_data axp813_data = { + .power_desc = &axp813_ac_power_desc, + .acin_adc = false, +}; + static int axp20x_ac_power_probe(struct platform_device *pdev) { struct axp20x_dev *axp20x = dev_get_drvdata(pdev->dev.parent); @@ -234,6 +325,9 @@ static const struct of_device_id axp20x_ac_power_match[] = { }, { .compatible = "x-powers,axp221-ac-power-supply", .data = &axp22x_data, + }, { + .compatible = "x-powers,axp813-ac-power-supply", + .data = &axp813_data, }, { /* sentinel */ } }; MODULE_DEVICE_TABLE(of, axp20x_ac_power_match); diff --git a/drivers/power/supply/axp20x_usb_power.c b/drivers/power/supply/axp20x_usb_power.c index 42001df4bd13..f52fe77edb6f 100644 --- a/drivers/power/supply/axp20x_usb_power.c +++ b/drivers/power/supply/axp20x_usb_power.c @@ -10,6 +10,7 @@ * option) any later version. */ +#include #include #include #include diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c index 735658ee1c60..f8c6da9277b3 100644 --- a/drivers/power/supply/axp288_charger.c +++ b/drivers/power/supply/axp288_charger.c @@ -16,6 +16,7 @@ */ #include +#include #include #include #include @@ -29,17 +30,17 @@ #include #include -#define PS_STAT_VBUS_TRIGGER (1 << 0) -#define PS_STAT_BAT_CHRG_DIR (1 << 2) -#define PS_STAT_VBAT_ABOVE_VHOLD (1 << 3) -#define PS_STAT_VBUS_VALID (1 << 4) -#define PS_STAT_VBUS_PRESENT (1 << 5) +#define PS_STAT_VBUS_TRIGGER BIT(0) +#define PS_STAT_BAT_CHRG_DIR BIT(2) +#define PS_STAT_VBAT_ABOVE_VHOLD BIT(3) +#define PS_STAT_VBUS_VALID BIT(4) +#define PS_STAT_VBUS_PRESENT BIT(5) -#define CHRG_STAT_BAT_SAFE_MODE (1 << 3) -#define CHRG_STAT_BAT_VALID (1 << 4) -#define CHRG_STAT_BAT_PRESENT (1 << 5) -#define CHRG_STAT_CHARGING (1 << 6) -#define CHRG_STAT_PMIC_OTP (1 << 7) +#define CHRG_STAT_BAT_SAFE_MODE BIT(3) +#define CHRG_STAT_BAT_VALID BIT(4) +#define CHRG_STAT_BAT_PRESENT BIT(5) +#define CHRG_STAT_CHARGING BIT(6) +#define CHRG_STAT_PMIC_OTP BIT(7) #define VBUS_ISPOUT_CUR_LIM_MASK 0x03 #define VBUS_ISPOUT_CUR_LIM_BIT_POS 0 @@ -52,33 +53,33 @@ #define VBUS_ISPOUT_VHOLD_SET_OFFSET 4000 /* 4000mV */ #define VBUS_ISPOUT_VHOLD_SET_LSB_RES 100 /* 100mV */ #define VBUS_ISPOUT_VHOLD_SET_4300MV 0x3 /* 4300mV */ -#define VBUS_ISPOUT_VBUS_PATH_DIS (1 << 7) +#define VBUS_ISPOUT_VBUS_PATH_DIS BIT(7) #define CHRG_CCCV_CC_MASK 0xf /* 4 bits */ #define CHRG_CCCV_CC_BIT_POS 0 #define CHRG_CCCV_CC_OFFSET 200 /* 200mA */ #define CHRG_CCCV_CC_LSB_RES 200 /* 200mA */ -#define CHRG_CCCV_ITERM_20P (1 << 4) /* 20% of CC */ +#define CHRG_CCCV_ITERM_20P BIT(4) /* 20% of CC */ #define CHRG_CCCV_CV_MASK 0x60 /* 2 bits */ #define CHRG_CCCV_CV_BIT_POS 5 #define CHRG_CCCV_CV_4100MV 0x0 /* 4.10V */ #define CHRG_CCCV_CV_4150MV 0x1 /* 4.15V */ #define CHRG_CCCV_CV_4200MV 0x2 /* 4.20V */ #define CHRG_CCCV_CV_4350MV 0x3 /* 4.35V */ 
-#define CHRG_CCCV_CHG_EN (1 << 7) +#define CHRG_CCCV_CHG_EN BIT(7) #define CNTL2_CC_TIMEOUT_MASK 0x3 /* 2 bits */ #define CNTL2_CC_TIMEOUT_OFFSET 6 /* 6 Hrs */ #define CNTL2_CC_TIMEOUT_LSB_RES 2 /* 2 Hrs */ #define CNTL2_CC_TIMEOUT_12HRS 0x3 /* 12 Hrs */ -#define CNTL2_CHGLED_TYPEB (1 << 4) -#define CNTL2_CHG_OUT_TURNON (1 << 5) +#define CNTL2_CHGLED_TYPEB BIT(4) +#define CNTL2_CHG_OUT_TURNON BIT(5) #define CNTL2_PC_TIMEOUT_MASK 0xC0 #define CNTL2_PC_TIMEOUT_OFFSET 40 /* 40 mins */ #define CNTL2_PC_TIMEOUT_LSB_RES 10 /* 10 mins */ #define CNTL2_PC_TIMEOUT_70MINS 0x3 -#define CHRG_ILIM_TEMP_LOOP_EN (1 << 3) +#define CHRG_ILIM_TEMP_LOOP_EN BIT(3) #define CHRG_VBUS_ILIM_MASK 0xf0 #define CHRG_VBUS_ILIM_BIT_POS 4 #define CHRG_VBUS_ILIM_100MA 0x0 /* 100mA */ @@ -94,7 +95,7 @@ #define CHRG_VLTFC_0C 0xA5 /* 0 DegC */ #define CHRG_VHTFC_45C 0x1F /* 45 DegC */ -#define FG_CNTL_OCV_ADJ_EN (1 << 3) +#define FG_CNTL_OCV_ADJ_EN BIT(3) #define CV_4100MV 4100 /* 4100mV */ #define CV_4150MV 4150 /* 4150mV */ diff --git a/drivers/power/supply/bq2415x_charger.c b/drivers/power/supply/bq2415x_charger.c index cbec70f3e73e..6693e7aeead5 100644 --- a/drivers/power/supply/bq2415x_charger.c +++ b/drivers/power/supply/bq2415x_charger.c @@ -1032,54 +1032,6 @@ static int bq2415x_power_supply_get_property(struct power_supply *psy, return 0; } -static int bq2415x_power_supply_init(struct bq2415x_device *bq) -{ - int ret; - int chip; - char revstr[8]; - struct power_supply_config psy_cfg = { - .drv_data = bq, - .of_node = bq->dev->of_node, - }; - - bq->charger_desc.name = bq->name; - bq->charger_desc.type = POWER_SUPPLY_TYPE_USB; - bq->charger_desc.properties = bq2415x_power_supply_props; - bq->charger_desc.num_properties = - ARRAY_SIZE(bq2415x_power_supply_props); - bq->charger_desc.get_property = bq2415x_power_supply_get_property; - - ret = bq2415x_detect_chip(bq); - if (ret < 0) - chip = BQUNKNOWN; - else - chip = ret; - - ret = bq2415x_detect_revision(bq); - if (ret < 0) - strcpy(revstr, "unknown"); - else - sprintf(revstr, "1.%d", ret); - - bq->model = kasprintf(GFP_KERNEL, - "chip %s, revision %s, vender code %.3d", - bq2415x_chip_name[chip], revstr, - bq2415x_get_vender_code(bq)); - if (!bq->model) { - dev_err(bq->dev, "failed to allocate model name\n"); - return -ENOMEM; - } - - bq->charger = power_supply_register(bq->dev, &bq->charger_desc, - &psy_cfg); - if (IS_ERR(bq->charger)) { - kfree(bq->model); - return PTR_ERR(bq->charger); - } - - return 0; -} - static void bq2415x_power_supply_exit(struct bq2415x_device *bq) { bq->autotimer = 0; @@ -1496,7 +1448,7 @@ static DEVICE_ATTR(charge_status, S_IRUGO, bq2415x_sysfs_show_status, NULL); static DEVICE_ATTR(boost_status, S_IRUGO, bq2415x_sysfs_show_status, NULL); static DEVICE_ATTR(fault_status, S_IRUGO, bq2415x_sysfs_show_status, NULL); -static struct attribute *bq2415x_sysfs_attributes[] = { +static struct attribute *bq2415x_sysfs_attrs[] = { /* * TODO: some (appropriate) of these attrs should be switched to * use power supply class props. 
@@ -1525,19 +1477,55 @@ static struct attribute *bq2415x_sysfs_attributes[] = { NULL, }; -static const struct attribute_group bq2415x_sysfs_attr_group = { - .attrs = bq2415x_sysfs_attributes, -}; +ATTRIBUTE_GROUPS(bq2415x_sysfs); -static int bq2415x_sysfs_init(struct bq2415x_device *bq) +static int bq2415x_power_supply_init(struct bq2415x_device *bq) { - return sysfs_create_group(&bq->charger->dev.kobj, - &bq2415x_sysfs_attr_group); -} + int ret; + int chip; + char revstr[8]; + struct power_supply_config psy_cfg = { + .drv_data = bq, + .of_node = bq->dev->of_node, + .attr_grp = bq2415x_sysfs_groups, + }; -static void bq2415x_sysfs_exit(struct bq2415x_device *bq) -{ - sysfs_remove_group(&bq->charger->dev.kobj, &bq2415x_sysfs_attr_group); + bq->charger_desc.name = bq->name; + bq->charger_desc.type = POWER_SUPPLY_TYPE_USB; + bq->charger_desc.properties = bq2415x_power_supply_props; + bq->charger_desc.num_properties = + ARRAY_SIZE(bq2415x_power_supply_props); + bq->charger_desc.get_property = bq2415x_power_supply_get_property; + + ret = bq2415x_detect_chip(bq); + if (ret < 0) + chip = BQUNKNOWN; + else + chip = ret; + + ret = bq2415x_detect_revision(bq); + if (ret < 0) + strcpy(revstr, "unknown"); + else + sprintf(revstr, "1.%d", ret); + + bq->model = kasprintf(GFP_KERNEL, + "chip %s, revision %s, vender code %.3d", + bq2415x_chip_name[chip], revstr, + bq2415x_get_vender_code(bq)); + if (!bq->model) { + dev_err(bq->dev, "failed to allocate model name\n"); + return -ENOMEM; + } + + bq->charger = power_supply_register(bq->dev, &bq->charger_desc, + &psy_cfg); + if (IS_ERR(bq->charger)) { + kfree(bq->model); + return PTR_ERR(bq->charger); + } + + return 0; } /* main bq2415x probe function */ @@ -1651,16 +1639,10 @@ static int bq2415x_probe(struct i2c_client *client, goto error_2; } - ret = bq2415x_sysfs_init(bq); - if (ret) { - dev_err(bq->dev, "failed to create sysfs entries: %d\n", ret); - goto error_3; - } - ret = bq2415x_set_defaults(bq); if (ret) { dev_err(bq->dev, "failed to set default values: %d\n", ret); - goto error_4; + goto error_3; } if (bq->notify_node || bq->init_data.notify_device) { @@ -1668,7 +1650,7 @@ static int bq2415x_probe(struct i2c_client *client, ret = power_supply_reg_notifier(&bq->nb); if (ret) { dev_err(bq->dev, "failed to reg notifier: %d\n", ret); - goto error_4; + goto error_3; } bq->automode = 1; @@ -1707,8 +1689,6 @@ static int bq2415x_probe(struct i2c_client *client, dev_info(bq->dev, "driver registered\n"); return 0; -error_4: - bq2415x_sysfs_exit(bq); error_3: bq2415x_power_supply_exit(bq); error_2: @@ -1733,7 +1713,6 @@ static int bq2415x_remove(struct i2c_client *client) power_supply_unreg_notifier(&bq->nb); of_node_put(bq->notify_node); - bq2415x_sysfs_exit(bq); bq2415x_power_supply_exit(bq); bq2415x_reset_chip(bq); diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c index b58df04d03b3..cc0dfdc9e85a 100644 --- a/drivers/power/supply/bq24190_charger.c +++ b/drivers/power/supply/bq24190_charger.c @@ -21,6 +21,7 @@ #include #include #include +#include #define BQ24190_MANUFACTURER "Texas Instruments" @@ -142,7 +143,7 @@ #define BQ24190_REG_VPRS_PN_MASK (BIT(5) | BIT(4) | BIT(3)) #define BQ24190_REG_VPRS_PN_SHIFT 3 #define BQ24190_REG_VPRS_PN_24190 0x4 -#define BQ24190_REG_VPRS_PN_24192 0x5 /* Also 24193 */ +#define BQ24190_REG_VPRS_PN_24192 0x5 /* Also 24193, 24196 */ #define BQ24190_REG_VPRS_PN_24192I 0x3 #define BQ24190_REG_VPRS_TS_PROFILE_MASK BIT(2) #define BQ24190_REG_VPRS_TS_PROFILE_SHIFT 2 @@ -159,6 +160,7 @@ 
struct bq24190_dev_info { struct i2c_client *client; struct device *dev; + struct extcon_dev *edev; struct power_supply *charger; struct power_supply *battery; struct delayed_work input_current_limit_work; @@ -174,6 +176,11 @@ struct bq24190_dev_info { u8 watchdog; }; +static const unsigned int bq24190_usb_extcon_cable[] = { + EXTCON_USB, + EXTCON_NONE, +}; + /* * The tables below provide a 2-way mapping for the value that goes in * the register field and the real-world value that it represents. @@ -402,9 +409,7 @@ static struct bq24190_sysfs_field_info bq24190_sysfs_field_tbl[] = { static struct attribute * bq24190_sysfs_attrs[ARRAY_SIZE(bq24190_sysfs_field_tbl) + 1]; -static const struct attribute_group bq24190_sysfs_attr_group = { - .attrs = bq24190_sysfs_attrs, -}; +ATTRIBUTE_GROUPS(bq24190_sysfs); static void bq24190_sysfs_init_attrs(void) { @@ -491,26 +496,6 @@ static ssize_t bq24190_sysfs_store(struct device *dev, return count; } - -static int bq24190_sysfs_create_group(struct bq24190_dev_info *bdi) -{ - bq24190_sysfs_init_attrs(); - - return sysfs_create_group(&bdi->charger->dev.kobj, - &bq24190_sysfs_attr_group); -} - -static void bq24190_sysfs_remove_group(struct bq24190_dev_info *bdi) -{ - sysfs_remove_group(&bdi->charger->dev.kobj, &bq24190_sysfs_attr_group); -} -#else -static int bq24190_sysfs_create_group(struct bq24190_dev_info *bdi) -{ - return 0; -} - -static inline void bq24190_sysfs_remove_group(struct bq24190_dev_info *bdi) {} #endif #ifdef CONFIG_REGULATOR @@ -577,6 +562,7 @@ static const struct regulator_ops bq24190_vbus_ops = { static const struct regulator_desc bq24190_vbus_desc = { .name = "usb_otg_vbus", + .of_match = "usb-otg-vbus", .type = REGULATOR_VOLTAGE, .owner = THIS_MODULE, .ops = &bq24190_vbus_ops, @@ -1527,6 +1513,20 @@ static const struct power_supply_desc bq24190_battery_desc = { .property_is_writeable = bq24190_battery_property_is_writeable, }; +static int bq24190_configure_usb_otg(struct bq24190_dev_info *bdi, u8 ss_reg) +{ + bool otg_enabled; + int ret; + + otg_enabled = !!(ss_reg & BQ24190_REG_SS_VBUS_STAT_MASK); + ret = extcon_set_state_sync(bdi->edev, EXTCON_USB, otg_enabled); + if (ret < 0) + dev_err(bdi->dev, "Can't set extcon state to %d: %d\n", + otg_enabled, ret); + + return ret; +} + static void bq24190_check_status(struct bq24190_dev_info *bdi) { const u8 battery_mask_ss = BQ24190_REG_SS_CHRG_STAT_MASK; @@ -1596,8 +1596,10 @@ static void bq24190_check_status(struct bq24190_dev_info *bdi) bdi->ss_reg = ss_reg; } - if (alert_charger || alert_battery) + if (alert_charger || alert_battery) { power_supply_changed(bdi->charger); + bq24190_configure_usb_otg(bdi, ss_reg); + } if (alert_battery && bdi->battery) power_supply_changed(bdi->battery); @@ -1637,8 +1639,12 @@ static int bq24190_hw_init(struct bq24190_dev_info *bdi) if (ret < 0) return ret; - if (v != BQ24190_REG_VPRS_PN_24190 && - v != BQ24190_REG_VPRS_PN_24192I) { + switch (v) { + case BQ24190_REG_VPRS_PN_24190: + case BQ24190_REG_VPRS_PN_24192: + case BQ24190_REG_VPRS_PN_24192I: + break; + default: dev_err(bdi->dev, "Error unknown model: 0x%02x\n", v); return -ENODEV; } @@ -1727,6 +1733,14 @@ static int bq24190_probe(struct i2c_client *client, return -EINVAL; } + bdi->edev = devm_extcon_dev_allocate(dev, bq24190_usb_extcon_cable); + if (IS_ERR(bdi->edev)) + return PTR_ERR(bdi->edev); + + ret = devm_extcon_dev_register(dev, bdi->edev); + if (ret < 0) + return ret; + pm_runtime_enable(dev); pm_runtime_use_autosuspend(dev); pm_runtime_set_autosuspend_delay(dev, 600); @@ -1736,6 +1750,11 @@ 
static int bq24190_probe(struct i2c_client *client, goto out_pmrt; } +#ifdef CONFIG_SYSFS + bq24190_sysfs_init_attrs(); + charger_cfg.attr_grp = bq24190_sysfs_groups; +#endif + charger_cfg.drv_data = bdi; charger_cfg.of_node = dev->of_node; charger_cfg.supplied_to = bq24190_charger_supplied_to; @@ -1773,11 +1792,9 @@ static int bq24190_probe(struct i2c_client *client, goto out_charger; } - ret = bq24190_sysfs_create_group(bdi); - if (ret < 0) { - dev_err(dev, "Can't create sysfs entries\n"); + ret = bq24190_configure_usb_otg(bdi, bdi->ss_reg); + if (ret < 0) goto out_charger; - } bdi->initialized = true; @@ -1787,12 +1804,12 @@ static int bq24190_probe(struct i2c_client *client, "bq24190-charger", bdi); if (ret < 0) { dev_err(dev, "Can't set up irq handler\n"); - goto out_sysfs; + goto out_charger; } ret = bq24190_register_vbus_regulator(bdi); if (ret < 0) - goto out_sysfs; + goto out_charger; enable_irq_wake(client->irq); @@ -1801,9 +1818,6 @@ static int bq24190_probe(struct i2c_client *client, return 0; -out_sysfs: - bq24190_sysfs_remove_group(bdi); - out_charger: if (!IS_ERR_OR_NULL(bdi->battery)) power_supply_unregister(bdi->battery); @@ -1828,7 +1842,6 @@ static int bq24190_remove(struct i2c_client *client) } bq24190_register_reset(bdi); - bq24190_sysfs_remove_group(bdi); if (bdi->battery) power_supply_unregister(bdi->battery); power_supply_unregister(bdi->charger); @@ -1931,7 +1944,9 @@ static const struct dev_pm_ops bq24190_pm_ops = { static const struct i2c_device_id bq24190_i2c_ids[] = { { "bq24190" }, + { "bq24192" }, { "bq24192i" }, + { "bq24196" }, { }, }; MODULE_DEVICE_TABLE(i2c, bq24190_i2c_ids); @@ -1939,7 +1954,9 @@ MODULE_DEVICE_TABLE(i2c, bq24190_i2c_ids); #ifdef CONFIG_OF static const struct of_device_id bq24190_of_match[] = { { .compatible = "ti,bq24190", }, + { .compatible = "ti,bq24192", }, { .compatible = "ti,bq24192i", }, + { .compatible = "ti,bq24196", }, { }, }; MODULE_DEVICE_TABLE(of, bq24190_of_match); diff --git a/drivers/power/supply/bq24257_charger.c b/drivers/power/supply/bq24257_charger.c index 6fc31bdc639b..673c7d6ff1a7 100644 --- a/drivers/power/supply/bq24257_charger.c +++ b/drivers/power/supply/bq24257_charger.c @@ -845,7 +845,7 @@ static DEVICE_ATTR(high_impedance_enable, S_IWUSR | S_IRUGO, static DEVICE_ATTR(sysoff_enable, S_IWUSR | S_IRUGO, bq24257_sysfs_show_enable, bq24257_sysfs_set_enable); -static struct attribute *bq24257_charger_attr[] = { +static struct attribute *bq24257_charger_sysfs_attrs[] = { &dev_attr_ovp_voltage.attr, &dev_attr_in_dpm_voltage.attr, &dev_attr_high_impedance_enable.attr, @@ -853,14 +853,13 @@ static struct attribute *bq24257_charger_attr[] = { NULL, }; -static const struct attribute_group bq24257_attr_group = { - .attrs = bq24257_charger_attr, -}; +ATTRIBUTE_GROUPS(bq24257_charger_sysfs); static int bq24257_power_supply_init(struct bq24257_device *bq) { struct power_supply_config psy_cfg = { .drv_data = bq, }; + psy_cfg.attr_grp = bq24257_charger_sysfs_groups; psy_cfg.supplied_to = bq24257_charger_supplied_to; psy_cfg.num_supplicants = ARRAY_SIZE(bq24257_charger_supplied_to); @@ -1084,12 +1083,6 @@ static int bq24257_probe(struct i2c_client *client, return ret; } - ret = sysfs_create_group(&bq->charger->dev.kobj, &bq24257_attr_group); - if (ret < 0) { - dev_err(dev, "Can't create sysfs entries\n"); - return ret; - } - return 0; } @@ -1100,8 +1093,6 @@ static int bq24257_remove(struct i2c_client *client) if (bq->iilimit_autoset_enable) cancel_delayed_work_sync(&bq->iilimit_setup_work); - 
sysfs_remove_group(&bq->charger->dev.kobj, &bq24257_attr_group); - bq24257_field_write(bq, F_RESET, 1); /* reset to defaults */ return 0; diff --git a/drivers/power/supply/bq25890_charger.c b/drivers/power/supply/bq25890_charger.c index 70b90db5ae38..3f6fb49c956c 100644 --- a/drivers/power/supply/bq25890_charger.c +++ b/drivers/power/supply/bq25890_charger.c @@ -183,7 +183,7 @@ static const struct reg_field bq25890_reg_fields[] = { [F_CHG_TMR] = REG_FIELD(0x07, 1, 2), [F_JEITA_ISET] = REG_FIELD(0x07, 0, 0), /* REG08 */ - [F_BATCMP] = REG_FIELD(0x08, 6, 7), // 5-7 on BQ25896 + [F_BATCMP] = REG_FIELD(0x08, 5, 7), [F_VCLAMP] = REG_FIELD(0x08, 2, 4), [F_TREG] = REG_FIELD(0x08, 0, 1), /* REG09 */ diff --git a/drivers/power/supply/charger-manager.c b/drivers/power/supply/charger-manager.c index faa1a67cf3d2..38be91f21cc4 100644 --- a/drivers/power/supply/charger-manager.c +++ b/drivers/power/supply/charger-manager.c @@ -363,7 +363,7 @@ static int try_charger_enable(struct charger_manager *cm, bool enable) int err = 0, i; struct charger_desc *desc = cm->desc; - /* Ignore if it's redundent command */ + /* Ignore if it's redundant command */ if (enable == cm->charger_enabled) return 0; @@ -1212,7 +1212,6 @@ static int charger_extcon_init(struct charger_manager *cm, if (ret < 0) { pr_info("Cannot register extcon_dev for %s(cable: %s)\n", cable->extcon_name, cable->name); - ret = -EINVAL; } return ret; @@ -1352,7 +1351,7 @@ static ssize_t charger_externally_control_store(struct device *dev, } /** - * charger_manager_register_sysfs - Register sysfs entry for each charger + * charger_manager_prepare_sysfs - Prepare sysfs entry for each charger * @cm: the Charger Manager representing the battery. * * This function add sysfs entry for charger(regulator) to control charger from @@ -1364,34 +1363,30 @@ static ssize_t charger_externally_control_store(struct device *dev, * externally_control, this charger isn't controlled from charger-manager and * always stay off state of regulator. 
*/ -static int charger_manager_register_sysfs(struct charger_manager *cm) +static int charger_manager_prepare_sysfs(struct charger_manager *cm) { struct charger_desc *desc = cm->desc; struct charger_regulator *charger; int chargers_externally_control = 1; - char buf[11]; - char *str; - int ret; + char *name; int i; /* Create sysfs entry to control charger(regulator) */ for (i = 0; i < desc->num_charger_regulators; i++) { charger = &desc->charger_regulators[i]; - snprintf(buf, 10, "charger.%d", i); - str = devm_kzalloc(cm->dev, - strlen(buf) + 1, GFP_KERNEL); - if (!str) + name = devm_kasprintf(cm->dev, GFP_KERNEL, "charger.%d", i); + if (!name) return -ENOMEM; - strcpy(str, buf); - charger->attrs[0] = &charger->attr_name.attr; charger->attrs[1] = &charger->attr_state.attr; charger->attrs[2] = &charger->attr_externally_control.attr; charger->attrs[3] = NULL; - charger->attr_g.name = str; - charger->attr_g.attrs = charger->attrs; + + charger->attr_grp.name = name; + charger->attr_grp.attrs = charger->attrs; + desc->sysfs_groups[i] = &charger->attr_grp; sysfs_attr_init(&charger->attr_name.attr); charger->attr_name.attr.name = "name"; @@ -1418,14 +1413,6 @@ static int charger_manager_register_sysfs(struct charger_manager *cm) dev_info(cm->dev, "'%s' regulator's externally_control is %d\n", charger->regulator_name, charger->externally_control); - - ret = sysfs_create_group(&cm->charger_psy->dev.kobj, - &charger->attr_g); - if (ret < 0) { - dev_err(cm->dev, "Cannot create sysfs entry of %s regulator\n", - charger->regulator_name); - return ret; - } } if (chargers_externally_control) { @@ -1521,19 +1508,19 @@ static struct charger_desc *of_cm_parse_desc(struct device *dev) /* chargers */ of_property_read_u32(np, "cm-num-chargers", &num_chgs); if (num_chgs) { + int i; + /* Allocate empty bin at the tail of array */ desc->psy_charger_stat = devm_kcalloc(dev, num_chgs + 1, sizeof(char *), GFP_KERNEL); - if (desc->psy_charger_stat) { - int i; - for (i = 0; i < num_chgs; i++) - of_property_read_string_index(np, "cm-chargers", - i, &desc->psy_charger_stat[i]); - } else { + if (!desc->psy_charger_stat) return ERR_PTR(-ENOMEM); - } + + for (i = 0; i < num_chgs; i++) + of_property_read_string_index(np, "cm-chargers", + i, &desc->psy_charger_stat[i]); } of_property_read_string(np, "cm-fuel-gauge", &desc->psy_fuel_gauge); @@ -1566,6 +1553,13 @@ static struct charger_desc *of_cm_parse_desc(struct device *dev) desc->charger_regulators = chg_regs; + desc->sysfs_groups = devm_kcalloc(dev, + desc->num_charger_regulators + 1, + sizeof(*desc->sysfs_groups), + GFP_KERNEL); + if (!desc->sysfs_groups) + return ERR_PTR(-ENOMEM); + for_each_child_of_node(np, child) { struct charger_cable *cables; struct device_node *_child; @@ -1633,7 +1627,7 @@ static int charger_manager_probe(struct platform_device *pdev) if (IS_ERR(desc)) { dev_err(&pdev->dev, "No platform data (desc) found\n"); - return -ENODEV; + return PTR_ERR(desc); } cm = devm_kzalloc(&pdev->dev, sizeof(*cm), GFP_KERNEL); @@ -1687,10 +1681,6 @@ static int charger_manager_probe(struct platform_device *pdev) return -EINVAL; } - /* Counting index only */ - while (desc->psy_charger_stat[i]) - i++; - /* Check if charger's supplies are present at probe */ for (i = 0; desc->psy_charger_stat[i]; i++) { struct power_supply *psy; @@ -1772,6 +1762,15 @@ static int charger_manager_probe(struct platform_device *pdev) INIT_DELAYED_WORK(&cm->fullbatt_vchk_work, fullbatt_vchk); + /* Register sysfs entry for charger(regulator) */ + ret = charger_manager_prepare_sysfs(cm); + if 
(ret < 0) { + dev_err(&pdev->dev, + "Cannot prepare sysfs entry of regulators\n"); + return ret; + } + psy_cfg.attr_grp = desc->sysfs_groups; + cm->charger_psy = power_supply_register(&pdev->dev, &cm->charger_psy_desc, &psy_cfg); @@ -1788,14 +1787,6 @@ static int charger_manager_probe(struct platform_device *pdev) goto err_reg_extcon; } - /* Register sysfs entry for charger(regulator) */ - ret = charger_manager_register_sysfs(cm); - if (ret < 0) { - dev_err(&pdev->dev, - "Cannot initialize sysfs entry of regulator\n"); - goto err_reg_sysfs; - } - /* Add to the list */ mutex_lock(&cm_list_mtx); list_add(&cm->entry, &cm_list); @@ -1803,7 +1794,7 @@ static int charger_manager_probe(struct platform_device *pdev) /* * Charger-manager is capable of waking up the systme from sleep - * when event is happend through cm_notify_event() + * when event is happened through cm_notify_event() */ device_init_wakeup(&pdev->dev, true); device_set_wakeup_capable(&pdev->dev, false); @@ -1819,14 +1810,6 @@ static int charger_manager_probe(struct platform_device *pdev) return 0; -err_reg_sysfs: - for (i = 0; i < desc->num_charger_regulators; i++) { - struct charger_regulator *charger; - - charger = &desc->charger_regulators[i]; - sysfs_remove_group(&cm->charger_psy->dev.kobj, - &charger->attr_g); - } err_reg_extcon: for (i = 0; i < desc->num_charger_regulators; i++) { struct charger_regulator *charger; @@ -2023,7 +2006,7 @@ module_exit(charger_manager_cleanup); * cm_notify_event - charger driver notify Charger Manager of charger event * @psy: pointer to instance of charger's power_supply * @type: type of charger event - * @msg: optional message passed to uevent_notify fuction + * @msg: optional message passed to uevent_notify function */ void cm_notify_event(struct power_supply *psy, enum cm_event_types type, char *msg) diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c index 98ba07869c3b..08d5037fd052 100644 --- a/drivers/power/supply/cpcap-battery.c +++ b/drivers/power/supply/cpcap-battery.c @@ -620,7 +620,7 @@ static int cpcap_battery_init_irq(struct platform_device *pdev, static int cpcap_battery_init_interrupts(struct platform_device *pdev, struct cpcap_battery_ddata *ddata) { - const char * const cpcap_battery_irqs[] = { + static const char * const cpcap_battery_irqs[] = { "eol", "lowbph", "lowbpl", "chrgcurr1", "battdetb" }; diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c index e4905bef2663..c843eaff8ad0 100644 --- a/drivers/power/supply/cpcap-charger.c +++ b/drivers/power/supply/cpcap-charger.c @@ -544,7 +544,7 @@ static void cpcap_charger_init_optional_gpios(struct cpcap_charger_ddata *ddata) if (IS_ERR(ddata->gpio[i])) { dev_info(ddata->dev, "no mode change GPIO%i: %li\n", i, PTR_ERR(ddata->gpio[i])); - ddata->gpio[i] = NULL; + ddata->gpio[i] = NULL; } } } diff --git a/drivers/power/supply/ds2780_battery.c b/drivers/power/supply/ds2780_battery.c index cad14ba1b648..5bf7c714a6ee 100644 --- a/drivers/power/supply/ds2780_battery.c +++ b/drivers/power/supply/ds2780_battery.c @@ -658,7 +658,7 @@ static ssize_t ds2780_write_param_eeprom_bin(struct file *filp, return count; } -static const struct bin_attribute ds2780_param_eeprom_bin_attr = { +static struct bin_attribute ds2780_param_eeprom_bin_attr = { .attr = { .name = "param_eeprom", .mode = S_IRUGO | S_IWUSR, @@ -703,7 +703,7 @@ static ssize_t ds2780_write_user_eeprom_bin(struct file *filp, return count; } -static const struct bin_attribute ds2780_user_eeprom_bin_attr = { 
+static struct bin_attribute ds2780_user_eeprom_bin_attr = { .attr = { .name = "user_eeprom", .mode = S_IRUGO | S_IWUSR, @@ -722,8 +722,7 @@ static DEVICE_ATTR(rsgain_setting, S_IRUGO | S_IWUSR, ds2780_get_rsgain_setting, static DEVICE_ATTR(pio_pin, S_IRUGO | S_IWUSR, ds2780_get_pio_pin, ds2780_set_pio_pin); - -static struct attribute *ds2780_attributes[] = { +static struct attribute *ds2780_sysfs_attrs[] = { &dev_attr_pmod_enabled.attr, &dev_attr_sense_resistor_value.attr, &dev_attr_rsgain_setting.attr, @@ -731,21 +730,30 @@ static struct attribute *ds2780_attributes[] = { NULL }; -static const struct attribute_group ds2780_attr_group = { - .attrs = ds2780_attributes, +static struct bin_attribute *ds2780_sysfs_bin_attrs[] = { + &ds2780_param_eeprom_bin_attr, + &ds2780_user_eeprom_bin_attr, + NULL +}; + +static const struct attribute_group ds2780_sysfs_group = { + .attrs = ds2780_sysfs_attrs, + .bin_attrs = ds2780_sysfs_bin_attrs, +}; + +static const struct attribute_group *ds2780_sysfs_groups[] = { + &ds2780_sysfs_group, + NULL, }; static int ds2780_battery_probe(struct platform_device *pdev) { struct power_supply_config psy_cfg = {}; - int ret = 0; struct ds2780_device_info *dev_info; dev_info = devm_kzalloc(&pdev->dev, sizeof(*dev_info), GFP_KERNEL); - if (!dev_info) { - ret = -ENOMEM; - goto fail; - } + if (!dev_info) + return -ENOMEM; platform_set_drvdata(pdev, dev_info); @@ -758,62 +766,16 @@ static int ds2780_battery_probe(struct platform_device *pdev) dev_info->bat_desc.get_property = ds2780_battery_get_property; psy_cfg.drv_data = dev_info; + psy_cfg.attr_grp = ds2780_sysfs_groups; - dev_info->bat = power_supply_register(&pdev->dev, &dev_info->bat_desc, - &psy_cfg); + dev_info->bat = devm_power_supply_register(&pdev->dev, + &dev_info->bat_desc, + &psy_cfg); if (IS_ERR(dev_info->bat)) { dev_err(dev_info->dev, "failed to register battery\n"); - ret = PTR_ERR(dev_info->bat); - goto fail; - } - - ret = sysfs_create_group(&dev_info->bat->dev.kobj, &ds2780_attr_group); - if (ret) { - dev_err(dev_info->dev, "failed to create sysfs group\n"); - goto fail_unregister; + return PTR_ERR(dev_info->bat); } - ret = sysfs_create_bin_file(&dev_info->bat->dev.kobj, - &ds2780_param_eeprom_bin_attr); - if (ret) { - dev_err(dev_info->dev, - "failed to create param eeprom bin file"); - goto fail_remove_group; - } - - ret = sysfs_create_bin_file(&dev_info->bat->dev.kobj, - &ds2780_user_eeprom_bin_attr); - if (ret) { - dev_err(dev_info->dev, - "failed to create user eeprom bin file"); - goto fail_remove_bin_file; - } - - return 0; - -fail_remove_bin_file: - sysfs_remove_bin_file(&dev_info->bat->dev.kobj, - &ds2780_param_eeprom_bin_attr); -fail_remove_group: - sysfs_remove_group(&dev_info->bat->dev.kobj, &ds2780_attr_group); -fail_unregister: - power_supply_unregister(dev_info->bat); -fail: - return ret; -} - -static int ds2780_battery_remove(struct platform_device *pdev) -{ - struct ds2780_device_info *dev_info = platform_get_drvdata(pdev); - - /* - * Remove attributes before unregistering power supply - * because 'bat' will be freed on power_supply_unregister() call. 
- */ - sysfs_remove_group(&dev_info->bat->dev.kobj, &ds2780_attr_group); - - power_supply_unregister(dev_info->bat); - return 0; } @@ -822,7 +784,6 @@ static struct platform_driver ds2780_battery_driver = { .name = "ds2780-battery", }, .probe = ds2780_battery_probe, - .remove = ds2780_battery_remove, }; module_platform_driver(ds2780_battery_driver); diff --git a/drivers/power/supply/ds2781_battery.c b/drivers/power/supply/ds2781_battery.c index 5e794607f732..166a8bd58811 100644 --- a/drivers/power/supply/ds2781_battery.c +++ b/drivers/power/supply/ds2781_battery.c @@ -660,7 +660,7 @@ static ssize_t ds2781_write_param_eeprom_bin(struct file *filp, return count; } -static const struct bin_attribute ds2781_param_eeprom_bin_attr = { +static struct bin_attribute ds2781_param_eeprom_bin_attr = { .attr = { .name = "param_eeprom", .mode = S_IRUGO | S_IWUSR, @@ -706,7 +706,7 @@ static ssize_t ds2781_write_user_eeprom_bin(struct file *filp, return count; } -static const struct bin_attribute ds2781_user_eeprom_bin_attr = { +static struct bin_attribute ds2781_user_eeprom_bin_attr = { .attr = { .name = "user_eeprom", .mode = S_IRUGO | S_IWUSR, @@ -725,8 +725,7 @@ static DEVICE_ATTR(rsgain_setting, S_IRUGO | S_IWUSR, ds2781_get_rsgain_setting, static DEVICE_ATTR(pio_pin, S_IRUGO | S_IWUSR, ds2781_get_pio_pin, ds2781_set_pio_pin); - -static struct attribute *ds2781_attributes[] = { +static struct attribute *ds2781_sysfs_attrs[] = { &dev_attr_pmod_enabled.attr, &dev_attr_sense_resistor_value.attr, &dev_attr_rsgain_setting.attr, @@ -734,14 +733,26 @@ static struct attribute *ds2781_attributes[] = { NULL }; -static const struct attribute_group ds2781_attr_group = { - .attrs = ds2781_attributes, +static struct bin_attribute *ds2781_sysfs_bin_attrs[] = { + &ds2781_param_eeprom_bin_attr, + &ds2781_user_eeprom_bin_attr, + NULL, +}; + +static const struct attribute_group ds2781_sysfs_group = { + .attrs = ds2781_sysfs_attrs, + .bin_attrs = ds2781_sysfs_bin_attrs, + +}; + +static const struct attribute_group *ds2781_sysfs_groups[] = { + &ds2781_sysfs_group, + NULL, }; static int ds2781_battery_probe(struct platform_device *pdev) { struct power_supply_config psy_cfg = {}; - int ret = 0; struct ds2781_device_info *dev_info; dev_info = devm_kzalloc(&pdev->dev, sizeof(*dev_info), GFP_KERNEL); @@ -759,62 +770,16 @@ static int ds2781_battery_probe(struct platform_device *pdev) dev_info->bat_desc.get_property = ds2781_battery_get_property; psy_cfg.drv_data = dev_info; + psy_cfg.attr_grp = ds2781_sysfs_groups; - dev_info->bat = power_supply_register(&pdev->dev, &dev_info->bat_desc, - &psy_cfg); + dev_info->bat = devm_power_supply_register(&pdev->dev, + &dev_info->bat_desc, + &psy_cfg); if (IS_ERR(dev_info->bat)) { dev_err(dev_info->dev, "failed to register battery\n"); - ret = PTR_ERR(dev_info->bat); - goto fail; - } - - ret = sysfs_create_group(&dev_info->bat->dev.kobj, &ds2781_attr_group); - if (ret) { - dev_err(dev_info->dev, "failed to create sysfs group\n"); - goto fail_unregister; - } - - ret = sysfs_create_bin_file(&dev_info->bat->dev.kobj, - &ds2781_param_eeprom_bin_attr); - if (ret) { - dev_err(dev_info->dev, - "failed to create param eeprom bin file"); - goto fail_remove_group; - } - - ret = sysfs_create_bin_file(&dev_info->bat->dev.kobj, - &ds2781_user_eeprom_bin_attr); - if (ret) { - dev_err(dev_info->dev, - "failed to create user eeprom bin file"); - goto fail_remove_bin_file; + return PTR_ERR(dev_info->bat); } - return 0; - -fail_remove_bin_file: - sysfs_remove_bin_file(&dev_info->bat->dev.kobj, - 
&ds2781_param_eeprom_bin_attr); -fail_remove_group: - sysfs_remove_group(&dev_info->bat->dev.kobj, &ds2781_attr_group); -fail_unregister: - power_supply_unregister(dev_info->bat); -fail: - return ret; -} - -static int ds2781_battery_remove(struct platform_device *pdev) -{ - struct ds2781_device_info *dev_info = platform_get_drvdata(pdev); - - /* - * Remove attributes before unregistering power supply - * because 'bat' will be freed on power_supply_unregister() call. - */ - sysfs_remove_group(&dev_info->bat->dev.kobj, &ds2781_attr_group); - - power_supply_unregister(dev_info->bat); - return 0; } @@ -823,7 +788,6 @@ static struct platform_driver ds2781_battery_driver = { .name = "ds2781-battery", }, .probe = ds2781_battery_probe, - .remove = ds2781_battery_remove, }; module_platform_driver(ds2781_battery_driver); diff --git a/drivers/power/supply/gpio-charger.c b/drivers/power/supply/gpio-charger.c index c3f2a9479468..7e4f11d5a230 100644 --- a/drivers/power/supply/gpio-charger.c +++ b/drivers/power/supply/gpio-charger.c @@ -82,11 +82,11 @@ static enum power_supply_type gpio_charger_get_type(struct device *dev) if (!strcmp("usb-sdp", chargetype)) return POWER_SUPPLY_TYPE_USB; if (!strcmp("usb-dcp", chargetype)) - return POWER_SUPPLY_TYPE_USB_DCP; + return POWER_SUPPLY_TYPE_USB; if (!strcmp("usb-cdp", chargetype)) - return POWER_SUPPLY_TYPE_USB_CDP; + return POWER_SUPPLY_TYPE_USB; if (!strcmp("usb-aca", chargetype)) - return POWER_SUPPLY_TYPE_USB_ACA; + return POWER_SUPPLY_TYPE_USB; } dev_warn(dev, "unknown charger type %s\n", chargetype); diff --git a/drivers/power/supply/lp8788-charger.c b/drivers/power/supply/lp8788-charger.c index 0f3432795f3c..309e6efbb8ef 100644 --- a/drivers/power/supply/lp8788-charger.c +++ b/drivers/power/supply/lp8788-charger.c @@ -410,30 +410,6 @@ static const struct power_supply_desc lp8788_psy_battery_desc = { .get_property = lp8788_battery_get_property, }; -static int lp8788_psy_register(struct platform_device *pdev, - struct lp8788_charger *pchg) -{ - struct power_supply_config charger_cfg = {}; - - charger_cfg.supplied_to = battery_supplied_to; - charger_cfg.num_supplicants = ARRAY_SIZE(battery_supplied_to); - - pchg->charger = power_supply_register(&pdev->dev, - &lp8788_psy_charger_desc, - &charger_cfg); - if (IS_ERR(pchg->charger)) - return -EPERM; - - pchg->battery = power_supply_register(&pdev->dev, - &lp8788_psy_battery_desc, NULL); - if (IS_ERR(pchg->battery)) { - power_supply_unregister(pchg->charger); - return -EPERM; - } - - return 0; -} - static void lp8788_psy_unregister(struct lp8788_charger *pchg) { power_supply_unregister(pchg->battery); @@ -690,16 +666,39 @@ static DEVICE_ATTR(charger_status, S_IRUSR, lp8788_show_charger_status, NULL); static DEVICE_ATTR(eoc_time, S_IRUSR, lp8788_show_eoc_time, NULL); static DEVICE_ATTR(eoc_level, S_IRUSR, lp8788_show_eoc_level, NULL); -static struct attribute *lp8788_charger_attr[] = { +static struct attribute *lp8788_charger_sysfs_attrs[] = { &dev_attr_charger_status.attr, &dev_attr_eoc_time.attr, &dev_attr_eoc_level.attr, NULL, }; -static const struct attribute_group lp8788_attr_group = { - .attrs = lp8788_charger_attr, -}; +ATTRIBUTE_GROUPS(lp8788_charger_sysfs); + +static int lp8788_psy_register(struct platform_device *pdev, + struct lp8788_charger *pchg) +{ + struct power_supply_config charger_cfg = {}; + + charger_cfg.attr_grp = lp8788_charger_sysfs_groups; + charger_cfg.supplied_to = battery_supplied_to; + charger_cfg.num_supplicants = ARRAY_SIZE(battery_supplied_to); + + pchg->charger = 
power_supply_register(&pdev->dev, + &lp8788_psy_charger_desc, + &charger_cfg); + if (IS_ERR(pchg->charger)) + return -EPERM; + + pchg->battery = power_supply_register(&pdev->dev, + &lp8788_psy_battery_desc, NULL); + if (IS_ERR(pchg->battery)) { + power_supply_unregister(pchg->charger); + return -EPERM; + } + + return 0; +} static int lp8788_charger_probe(struct platform_device *pdev) { @@ -726,12 +725,6 @@ static int lp8788_charger_probe(struct platform_device *pdev) if (ret) return ret; - ret = sysfs_create_group(&pdev->dev.kobj, &lp8788_attr_group); - if (ret) { - lp8788_psy_unregister(pchg); - return ret; - } - ret = lp8788_irq_register(pdev, pchg); if (ret) dev_warn(dev, "failed to register charger irq: %d\n", ret); @@ -745,7 +738,6 @@ static int lp8788_charger_remove(struct platform_device *pdev) flush_work(&pchg->charger_work); lp8788_irq_unregister(pdev, pchg); - sysfs_remove_group(&pdev->dev.kobj, &lp8788_attr_group); lp8788_psy_unregister(pchg); lp8788_release_adc_channel(pchg); diff --git a/drivers/power/supply/olpc_battery.c b/drivers/power/supply/olpc_battery.c index 6da79ae14860..5a97e42a3547 100644 --- a/drivers/power/supply/olpc_battery.c +++ b/drivers/power/supply/olpc_battery.c @@ -428,14 +428,14 @@ static int olpc_bat_get_property(struct power_supply *psy, if (ret) return ret; - val->intval = (s16)be16_to_cpu(ec_word) * 100 / 256; + val->intval = (s16)be16_to_cpu(ec_word) * 10 / 256; break; case POWER_SUPPLY_PROP_TEMP_AMBIENT: ret = olpc_ec_cmd(EC_AMB_TEMP, NULL, 0, (void *)&ec_word, 2); if (ret) return ret; - val->intval = (int)be16_to_cpu(ec_word) * 100 / 256; + val->intval = (int)be16_to_cpu(ec_word) * 10 / 256; break; case POWER_SUPPLY_PROP_CHARGE_COUNTER: ret = olpc_ec_cmd(EC_BAT_ACR, NULL, 0, (void *)&ec_word, 2); diff --git a/drivers/power/supply/pcf50633-charger.c b/drivers/power/supply/pcf50633-charger.c index 1aba14046a83..28ed463c866d 100644 --- a/drivers/power/supply/pcf50633-charger.c +++ b/drivers/power/supply/pcf50633-charger.c @@ -245,17 +245,14 @@ static ssize_t set_chglim(struct device *dev, */ static DEVICE_ATTR(chg_curlim, S_IRUGO | S_IWUSR, show_chglim, set_chglim); -static struct attribute *pcf50633_mbc_sysfs_entries[] = { +static struct attribute *pcf50633_mbc_sysfs_attrs[] = { &dev_attr_chgmode.attr, &dev_attr_usb_curlim.attr, &dev_attr_chg_curlim.attr, NULL, }; -static const struct attribute_group mbc_attr_group = { - .name = NULL, /* put in device directory */ - .attrs = pcf50633_mbc_sysfs_entries, -}; +ATTRIBUTE_GROUPS(pcf50633_mbc_sysfs); static void pcf50633_mbc_irq_handler(int irq, void *data) @@ -390,6 +387,7 @@ static const struct power_supply_desc pcf50633_mbc_ac_desc = { static int pcf50633_mbc_probe(struct platform_device *pdev) { struct power_supply_config psy_cfg = {}; + struct power_supply_config usb_psy_cfg; struct pcf50633_mbc *mbc; int i; u8 mbcs1; @@ -419,8 +417,11 @@ static int pcf50633_mbc_probe(struct platform_device *pdev) return PTR_ERR(mbc->adapter); } + usb_psy_cfg = psy_cfg; + usb_psy_cfg.attr_grp = pcf50633_mbc_sysfs_groups; + mbc->usb = power_supply_register(&pdev->dev, &pcf50633_mbc_usb_desc, - &psy_cfg); + &usb_psy_cfg); if (IS_ERR(mbc->usb)) { dev_err(mbc->pcf->dev, "failed to register usb\n"); power_supply_unregister(mbc->adapter); @@ -436,9 +437,6 @@ static int pcf50633_mbc_probe(struct platform_device *pdev) return PTR_ERR(mbc->ac); } - if (sysfs_create_group(&pdev->dev.kobj, &mbc_attr_group)) - dev_err(mbc->pcf->dev, "failed to create sysfs entries\n"); - mbcs1 = pcf50633_reg_read(mbc->pcf, PCF50633_REG_MBCS1); if 
(mbcs1 & PCF50633_MBCS1_USBPRES) pcf50633_mbc_irq_handler(PCF50633_IRQ_USBINS, mbc); @@ -457,7 +455,6 @@ static int pcf50633_mbc_remove(struct platform_device *pdev) for (i = 0; i < ARRAY_SIZE(mbc_irq_handlers); i++) pcf50633_free_irq(mbc->pcf, mbc_irq_handlers[i]); - sysfs_remove_group(&pdev->dev.kobj, &mbc_attr_group); power_supply_unregister(mbc->usb); power_supply_unregister(mbc->adapter); power_supply_unregister(mbc->ac); diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c index e85361878450..569790ea6917 100644 --- a/drivers/power/supply/power_supply_core.c +++ b/drivers/power/supply/power_supply_core.c @@ -570,7 +570,7 @@ int power_supply_get_battery_info(struct power_supply *psy, { struct device_node *battery_np; const char *value; - int err; + int err, len, index; info->energy_full_design_uwh = -EINVAL; info->charge_full_design_uah = -EINVAL; @@ -579,6 +579,13 @@ int power_supply_get_battery_info(struct power_supply *psy, info->charge_term_current_ua = -EINVAL; info->constant_charge_current_max_ua = -EINVAL; info->constant_charge_voltage_max_uv = -EINVAL; + info->factory_internal_resistance_uohm = -EINVAL; + + for (index = 0; index < POWER_SUPPLY_OCV_TEMP_MAX; index++) { + info->ocv_table[index] = NULL; + info->ocv_temp[index] = -EINVAL; + info->ocv_table_size[index] = -EINVAL; + } if (!psy->of_node) { dev_warn(&psy->dev, "%s currently only supports devicetree\n", @@ -616,11 +623,142 @@ int power_supply_get_battery_info(struct power_supply *psy, &info->constant_charge_current_max_ua); of_property_read_u32(battery_np, "constant_charge_voltage_max_microvolt", &info->constant_charge_voltage_max_uv); + of_property_read_u32(battery_np, "factory-internal-resistance-micro-ohms", + &info->factory_internal_resistance_uohm); + + len = of_property_count_u32_elems(battery_np, "ocv-capacity-celsius"); + if (len < 0 && len != -EINVAL) { + return len; + } else if (len > POWER_SUPPLY_OCV_TEMP_MAX) { + dev_err(&psy->dev, "Too many temperature values\n"); + return -EINVAL; + } else if (len > 0) { + of_property_read_u32_array(battery_np, "ocv-capacity-celsius", + info->ocv_temp, len); + } + + for (index = 0; index < len; index++) { + struct power_supply_battery_ocv_table *table; + char *propname; + const __be32 *list; + int i, tab_len, size; + + propname = kasprintf(GFP_KERNEL, "ocv-capacity-table-%d", index); + list = of_get_property(battery_np, propname, &size); + if (!list || !size) { + dev_err(&psy->dev, "failed to get %s\n", propname); + kfree(propname); + power_supply_put_battery_info(psy, info); + return -EINVAL; + } + + kfree(propname); + tab_len = size / (2 * sizeof(__be32)); + info->ocv_table_size[index] = tab_len; + + table = info->ocv_table[index] = + devm_kcalloc(&psy->dev, tab_len, sizeof(*table), GFP_KERNEL); + if (!info->ocv_table[index]) { + power_supply_put_battery_info(psy, info); + return -ENOMEM; + } + + for (i = 0; i < tab_len; i++) { + table[i].ocv = be32_to_cpu(*list++); + table[i].capacity = be32_to_cpu(*list++); + } + } return 0; } EXPORT_SYMBOL_GPL(power_supply_get_battery_info); +void power_supply_put_battery_info(struct power_supply *psy, + struct power_supply_battery_info *info) +{ + int i; + + for (i = 0; i < POWER_SUPPLY_OCV_TEMP_MAX; i++) { + if (info->ocv_table[i]) + devm_kfree(&psy->dev, info->ocv_table[i]); + } +} +EXPORT_SYMBOL_GPL(power_supply_put_battery_info); + +/** + * power_supply_ocv2cap_simple() - find the battery capacity + * @table: Pointer to battery OCV lookup table + * @table_len: OCV table length + * 
@ocv: Current OCV value + * + * This helper function is used to look up battery capacity according to + * current OCV value from one OCV table, and the OCV table must be ordered + * descending. + * + * Return: the battery capacity. + */ +int power_supply_ocv2cap_simple(struct power_supply_battery_ocv_table *table, + int table_len, int ocv) +{ + int i, cap, tmp; + + for (i = 0; i < table_len; i++) + if (ocv > table[i].ocv) + break; + + if (i > 0 && i < table_len) { + tmp = (table[i - 1].capacity - table[i].capacity) * + (ocv - table[i].ocv); + tmp /= table[i - 1].ocv - table[i].ocv; + cap = tmp + table[i].capacity; + } else if (i == 0) { + cap = table[0].capacity; + } else { + cap = table[table_len - 1].capacity; + } + + return cap; +} +EXPORT_SYMBOL_GPL(power_supply_ocv2cap_simple); + +struct power_supply_battery_ocv_table * +power_supply_find_ocv2cap_table(struct power_supply_battery_info *info, + int temp, int *table_len) +{ + int best_temp_diff = INT_MAX, temp_diff; + u8 i, best_index = 0; + + if (!info->ocv_table[0]) + return NULL; + + for (i = 0; i < POWER_SUPPLY_OCV_TEMP_MAX; i++) { + temp_diff = abs(info->ocv_temp[i] - temp); + + if (temp_diff < best_temp_diff) { + best_temp_diff = temp_diff; + best_index = i; + } + } + + *table_len = info->ocv_table_size[best_index]; + return info->ocv_table[best_index]; +} +EXPORT_SYMBOL_GPL(power_supply_find_ocv2cap_table); + +int power_supply_batinfo_ocv2cap(struct power_supply_battery_info *info, + int ocv, int temp) +{ + struct power_supply_battery_ocv_table *table; + int table_len; + + table = power_supply_find_ocv2cap_table(info, temp, &table_len); + if (!table) + return -EINVAL; + + return power_supply_ocv2cap_simple(table, table_len, ocv); +} +EXPORT_SYMBOL_GPL(power_supply_batinfo_ocv2cap); + int power_supply_get_property(struct power_supply *psy, enum power_supply_property psp, union power_supply_propval *val) @@ -880,6 +1018,7 @@ __power_supply_register(struct device *parent, dev_set_drvdata(dev, psy); psy->desc = desc; if (cfg) { + dev->groups = cfg->attr_grp; psy->drv_data = cfg->drv_data; psy->of_node = cfg->fwnode ? 
to_of_node(cfg->fwnode) : cfg->of_node; diff --git a/drivers/power/supply/sc2731_charger.c b/drivers/power/supply/sc2731_charger.c index 525a820537bf..335cb857ef30 100644 --- a/drivers/power/supply/sc2731_charger.c +++ b/drivers/power/supply/sc2731_charger.c @@ -57,9 +57,11 @@ struct sc2731_charger_info { struct usb_phy *usb_phy; struct notifier_block usb_notify; struct power_supply *psy_usb; + struct work_struct work; struct mutex lock; bool charging; u32 base; + u32 limit; }; static void sc2731_charger_stop_charge(struct sc2731_charger_info *info) @@ -318,22 +320,21 @@ static const struct power_supply_desc sc2731_charger_desc = { .property_is_writeable = sc2731_charger_property_is_writeable, }; -static int sc2731_charger_usb_change(struct notifier_block *nb, - unsigned long limit, void *data) +static void sc2731_charger_work(struct work_struct *data) { struct sc2731_charger_info *info = - container_of(nb, struct sc2731_charger_info, usb_notify); - int ret = 0; + container_of(data, struct sc2731_charger_info, work); + int ret; mutex_lock(&info->lock); - if (limit > 0) { + if (info->limit > 0 && !info->charging) { /* set current limitation and start to charge */ - ret = sc2731_charger_set_current_limit(info, limit); + ret = sc2731_charger_set_current_limit(info, info->limit); if (ret) goto out; - ret = sc2731_charger_set_current(info, limit); + ret = sc2731_charger_set_current(info, info->limit); if (ret) goto out; @@ -342,7 +343,7 @@ static int sc2731_charger_usb_change(struct notifier_block *nb, goto out; info->charging = true; - } else { + } else if (!info->limit && info->charging) { /* Stop charging */ info->charging = false; sc2731_charger_stop_charge(info); @@ -350,7 +351,19 @@ static int sc2731_charger_usb_change(struct notifier_block *nb, out: mutex_unlock(&info->lock); - return ret; +} + +static int sc2731_charger_usb_change(struct notifier_block *nb, + unsigned long limit, void *data) +{ + struct sc2731_charger_info *info = + container_of(nb, struct sc2731_charger_info, usb_notify); + + info->limit = limit; + + schedule_work(&info->work); + + return NOTIFY_OK; } static int sc2731_charger_hw_init(struct sc2731_charger_info *info) @@ -395,6 +408,8 @@ static int sc2731_charger_hw_init(struct sc2731_charger_info *info) vol_val = (term_voltage - 4200) / 100; else vol_val = 0; + + power_supply_put_battery_info(info->psy_usb, &bat_info); } /* Set charge termination current */ @@ -419,6 +434,24 @@ error: return ret; } +static void sc2731_charger_detect_status(struct sc2731_charger_info *info) +{ + unsigned int min, max; + + /* + * If the USB charger was already USB_CHARGER_PRESENT before the + * notifier was registered, we should start charging now, using + * the charge current reported by the USB PHY. 
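+ *
+ * Note that the usb_phy notifier chain is an atomic one, so the
+ * callback must not sleep; that is why sc2731_charger_usb_change()
+ * above only records the limit and defers the sleeping mutex/regmap
+ * work to sc2731_charger_work().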
+ */ + if (info->usb_phy->chg_state != USB_CHARGER_PRESENT) + return; + + usb_phy_get_charger_current(info->usb_phy, &min, &max); + info->limit = min; + + schedule_work(&info->work); +} + static int sc2731_charger_probe(struct platform_device *pdev) { struct device_node *np = pdev->dev.of_node; @@ -432,6 +465,7 @@ static int sc2731_charger_probe(struct platform_device *pdev) mutex_init(&info->lock); info->dev = &pdev->dev; + INIT_WORK(&info->work, sc2731_charger_work); info->regmap = dev_get_regmap(pdev->dev.parent, NULL); if (!info->regmap) { @@ -472,6 +506,8 @@ static int sc2731_charger_probe(struct platform_device *pdev) return ret; } + sc2731_charger_detect_status(info); + return 0; } diff --git a/drivers/power/supply/sc27xx_fuel_gauge.c b/drivers/power/supply/sc27xx_fuel_gauge.c new file mode 100644 index 000000000000..76da1895b782 --- /dev/null +++ b/drivers/power/supply/sc27xx_fuel_gauge.c @@ -0,0 +1,1075 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (C) 2018 Spreadtrum Communications Inc. + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* PMIC global control registers definition */ +#define SC27XX_MODULE_EN0 0xc08 +#define SC27XX_CLK_EN0 0xc18 +#define SC27XX_FGU_EN BIT(7) +#define SC27XX_FGU_RTC_EN BIT(6) + +/* FGU registers definition */ +#define SC27XX_FGU_START 0x0 +#define SC27XX_FGU_CONFIG 0x4 +#define SC27XX_FGU_ADC_CONFIG 0x8 +#define SC27XX_FGU_STATUS 0xc +#define SC27XX_FGU_INT_EN 0x10 +#define SC27XX_FGU_INT_CLR 0x14 +#define SC27XX_FGU_INT_STS 0x1c +#define SC27XX_FGU_VOLTAGE 0x20 +#define SC27XX_FGU_OCV 0x24 +#define SC27XX_FGU_POCV 0x28 +#define SC27XX_FGU_CURRENT 0x2c +#define SC27XX_FGU_LOW_OVERLOAD 0x34 +#define SC27XX_FGU_CLBCNT_SETH 0x50 +#define SC27XX_FGU_CLBCNT_SETL 0x54 +#define SC27XX_FGU_CLBCNT_DELTH 0x58 +#define SC27XX_FGU_CLBCNT_DELTL 0x5c +#define SC27XX_FGU_CLBCNT_VALH 0x68 +#define SC27XX_FGU_CLBCNT_VALL 0x6c +#define SC27XX_FGU_CLBCNT_QMAXL 0x74 +#define SC27XX_FGU_USER_AREA_SET 0xa0 +#define SC27XX_FGU_USER_AREA_CLEAR 0xa4 +#define SC27XX_FGU_USER_AREA_STATUS 0xa8 + +#define SC27XX_WRITE_SELCLB_EN BIT(0) +#define SC27XX_FGU_CLBCNT_MASK GENMASK(15, 0) +#define SC27XX_FGU_CLBCNT_SHIFT 16 +#define SC27XX_FGU_LOW_OVERLOAD_MASK GENMASK(12, 0) + +#define SC27XX_FGU_INT_MASK GENMASK(9, 0) +#define SC27XX_FGU_LOW_OVERLOAD_INT BIT(0) +#define SC27XX_FGU_CLBCNT_DELTA_INT BIT(2) + +#define SC27XX_FGU_MODE_AREA_MASK GENMASK(15, 12) +#define SC27XX_FGU_CAP_AREA_MASK GENMASK(11, 0) +#define SC27XX_FGU_MODE_AREA_SHIFT 12 + +#define SC27XX_FGU_FIRST_POWERTON GENMASK(3, 0) +#define SC27XX_FGU_DEFAULT_CAP GENMASK(11, 0) +#define SC27XX_FGU_NORMAIL_POWERTON 0x5 + +#define SC27XX_FGU_CUR_BASIC_ADC 8192 +#define SC27XX_FGU_SAMPLE_HZ 2 + +/* + * struct sc27xx_fgu_data: describe the FGU device + * @regmap: regmap for register access + * @dev: platform device + * @battery: battery power supply + * @base: the base offset for the controller + * @lock: protect the structure + * @gpiod: GPIO for battery detection + * @channel: IIO channel to get battery temperature + * @internal_resist: the battery internal resistance in mOhm + * @total_cap: the total capacity of the battery in mAh + * @init_cap: the initial capacity of the battery in mAh + * @alarm_cap: the alarm capacity + * @init_clbcnt: the initial coulomb counter + * @max_volt: the maximum constant input voltage in millivolt + * @min_volt: the minimum drained battery voltage in microvolt + * @table_len: the capacity table length + * 
@cur_1000ma_adc: ADC value corresponding to 1000 mA + * @vol_1000mv_adc: ADC value corresponding to 1000 mV + * @cap_table: capacity table with corresponding ocv + */ +struct sc27xx_fgu_data { + struct regmap *regmap; + struct device *dev; + struct power_supply *battery; + u32 base; + struct mutex lock; + struct gpio_desc *gpiod; + struct iio_channel *channel; + bool bat_present; + int internal_resist; + int total_cap; + int init_cap; + int alarm_cap; + int init_clbcnt; + int max_volt; + int min_volt; + int table_len; + int cur_1000ma_adc; + int vol_1000mv_adc; + struct power_supply_battery_ocv_table *cap_table; +}; + +static int sc27xx_fgu_cap_to_clbcnt(struct sc27xx_fgu_data *data, int capacity); + +static const char * const sc27xx_charger_supply_name[] = { + "sc2731_charger", + "sc2720_charger", + "sc2721_charger", + "sc2723_charger", +}; + +static int sc27xx_fgu_adc_to_current(struct sc27xx_fgu_data *data, int adc) +{ + return DIV_ROUND_CLOSEST(adc * 1000, data->cur_1000ma_adc); +} + +static int sc27xx_fgu_adc_to_voltage(struct sc27xx_fgu_data *data, int adc) +{ + return DIV_ROUND_CLOSEST(adc * 1000, data->vol_1000mv_adc); +} + +static int sc27xx_fgu_voltage_to_adc(struct sc27xx_fgu_data *data, int vol) +{ + return DIV_ROUND_CLOSEST(vol * data->vol_1000mv_adc, 1000); +} + +static bool sc27xx_fgu_is_first_poweron(struct sc27xx_fgu_data *data) +{ + int ret, status, cap, mode; + + ret = regmap_read(data->regmap, + data->base + SC27XX_FGU_USER_AREA_STATUS, &status); + if (ret) + return false; + + /* + * We use the low 12 bits to save the last battery capacity and the + * high 4 bits to save the system boot mode. + */ + mode = (status & SC27XX_FGU_MODE_AREA_MASK) >> SC27XX_FGU_MODE_AREA_SHIFT; + cap = status & SC27XX_FGU_CAP_AREA_MASK; + + /* + * When the FGU has been powered down, the user area registers revert + * to their default value (0xffff), which can be used to check whether + * this is the first power-on. + */ + if (mode == SC27XX_FGU_FIRST_POWERTON || cap == SC27XX_FGU_DEFAULT_CAP) + return true; + + return false; +} + +static int sc27xx_fgu_save_boot_mode(struct sc27xx_fgu_data *data, + int boot_mode) +{ + int ret; + + ret = regmap_update_bits(data->regmap, + data->base + SC27XX_FGU_USER_AREA_CLEAR, + SC27XX_FGU_MODE_AREA_MASK, + SC27XX_FGU_MODE_AREA_MASK); + if (ret) + return ret; + + return regmap_update_bits(data->regmap, + data->base + SC27XX_FGU_USER_AREA_SET, + SC27XX_FGU_MODE_AREA_MASK, + boot_mode << SC27XX_FGU_MODE_AREA_SHIFT); +} + +static int sc27xx_fgu_save_last_cap(struct sc27xx_fgu_data *data, int cap) +{ + int ret; + + ret = regmap_update_bits(data->regmap, + data->base + SC27XX_FGU_USER_AREA_CLEAR, + SC27XX_FGU_CAP_AREA_MASK, + SC27XX_FGU_CAP_AREA_MASK); + if (ret) + return ret; + + return regmap_update_bits(data->regmap, + data->base + SC27XX_FGU_USER_AREA_SET, + SC27XX_FGU_CAP_AREA_MASK, cap); +} + +static int sc27xx_fgu_read_last_cap(struct sc27xx_fgu_data *data, int *cap) +{ + int ret, value; + + ret = regmap_read(data->regmap, + data->base + SC27XX_FGU_USER_AREA_STATUS, &value); + if (ret) + return ret; + + *cap = value & SC27XX_FGU_CAP_AREA_MASK; + return 0; +} + +/* + * When the system boots, we cannot read the battery capacity from the + * coulomb counter registers, since they are not valid yet. So we + * calculate the battery open circuit voltage, and derive the current + * battery capacity from the capacity table. 
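+ *
+ * As a rough sketch of the computation done below (units follow from
+ * the helpers and fields above; the names are only illustrative):
+ *
+ *	ocv_uV = vbat_mV * 1000 - oci_mA * internal_resist_mOhm
+ *	cap    = power_supply_ocv2cap_simple(cap_table, table_len, ocv_uV)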
+ */ +static int sc27xx_fgu_get_boot_capacity(struct sc27xx_fgu_data *data, int *cap) +{ + int volt, cur, oci, ocv, ret; + bool is_first_poweron = sc27xx_fgu_is_first_poweron(data); + + /* + * If this is not the first power-on, we should use the last saved + * battery capacity as the initial battery capacity. Otherwise we need + * to re-calculate the initial battery capacity. + */ + if (!is_first_poweron) { + ret = sc27xx_fgu_read_last_cap(data, cap); + if (ret) + return ret; + + return sc27xx_fgu_save_boot_mode(data, SC27XX_FGU_NORMAIL_POWERTON); + } + + /* + * After the system boots, the SC27XX_FGU_CLBCNT_QMAXL register holds + * the first sampled open circuit current. + */ + ret = regmap_read(data->regmap, data->base + SC27XX_FGU_CLBCNT_QMAXL, + &cur); + if (ret) + return ret; + + cur <<= 1; + oci = sc27xx_fgu_adc_to_current(data, cur - SC27XX_FGU_CUR_BASIC_ADC); + + /* + * The boot-time OCV should be read from the SC27XX_FGU_POCV register. + * The register holds an ADC value, which needs to be converted into + * the corresponding voltage. + */ + ret = regmap_read(data->regmap, data->base + SC27XX_FGU_POCV, &volt); + if (ret) + return ret; + + volt = sc27xx_fgu_adc_to_voltage(data, volt); + ocv = volt * 1000 - oci * data->internal_resist; + + /* + * Look up the correct capacity percentage in the capacity table, + * based on the battery's current OCV value. + */ + *cap = power_supply_ocv2cap_simple(data->cap_table, data->table_len, + ocv); + + ret = sc27xx_fgu_save_last_cap(data, *cap); + if (ret) + return ret; + + return sc27xx_fgu_save_boot_mode(data, SC27XX_FGU_NORMAIL_POWERTON); +} + +static int sc27xx_fgu_set_clbcnt(struct sc27xx_fgu_data *data, int clbcnt) +{ + int ret; + + clbcnt *= SC27XX_FGU_SAMPLE_HZ; + + ret = regmap_update_bits(data->regmap, + data->base + SC27XX_FGU_CLBCNT_SETL, + SC27XX_FGU_CLBCNT_MASK, clbcnt); + if (ret) + return ret; + + ret = regmap_update_bits(data->regmap, + data->base + SC27XX_FGU_CLBCNT_SETH, + SC27XX_FGU_CLBCNT_MASK, + clbcnt >> SC27XX_FGU_CLBCNT_SHIFT); + if (ret) + return ret; + + return regmap_update_bits(data->regmap, data->base + SC27XX_FGU_START, + SC27XX_WRITE_SELCLB_EN, + SC27XX_WRITE_SELCLB_EN); +} + +static int sc27xx_fgu_get_clbcnt(struct sc27xx_fgu_data *data, int *clb_cnt) +{ + int ccl, cch, ret; + + ret = regmap_read(data->regmap, data->base + SC27XX_FGU_CLBCNT_VALL, + &ccl); + if (ret) + return ret; + + ret = regmap_read(data->regmap, data->base + SC27XX_FGU_CLBCNT_VALH, + &cch); + if (ret) + return ret; + + *clb_cnt = ccl & SC27XX_FGU_CLBCNT_MASK; + *clb_cnt |= (cch & SC27XX_FGU_CLBCNT_MASK) << SC27XX_FGU_CLBCNT_SHIFT; + *clb_cnt /= SC27XX_FGU_SAMPLE_HZ; + + return 0; +} + +static int sc27xx_fgu_get_capacity(struct sc27xx_fgu_data *data, int *cap) +{ + int ret, cur_clbcnt, delta_clbcnt, delta_cap, temp; + + /* Get the current coulomb counter first */ + ret = sc27xx_fgu_get_clbcnt(data, &cur_clbcnt); + if (ret) + return ret; + + delta_clbcnt = cur_clbcnt - data->init_clbcnt; + + /* + * Convert the coulomb counter to a capacity delta (mAh); a multiplier + * of 100 is applied to improve precision. + */ + temp = DIV_ROUND_CLOSEST(delta_clbcnt, 360); + temp = sc27xx_fgu_adc_to_current(data, temp); + + /* + * Convert to a percentage of the battery's total capacity; the + * multiplier of 100 applies here too. 
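+ *
+ * In other words, the capacity reported below is intended to be the
+ * boot-time percentage (init_cap) plus the percentage equivalent of
+ * the charge that has flowed through the coulomb counter since boot.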
+ */ + delta_cap = DIV_ROUND_CLOSEST(temp * 100, data->total_cap); + *cap = delta_cap + data->init_cap; + + return 0; +} + +static int sc27xx_fgu_get_vbat_vol(struct sc27xx_fgu_data *data, int *val) +{ + int ret, vol; + + ret = regmap_read(data->regmap, data->base + SC27XX_FGU_VOLTAGE, &vol); + if (ret) + return ret; + + /* + * The register holds an ADC value, which needs to be converted to the + * corresponding voltage value. + */ + *val = sc27xx_fgu_adc_to_voltage(data, vol); + + return 0; +} + +static int sc27xx_fgu_get_current(struct sc27xx_fgu_data *data, int *val) +{ + int ret, cur; + + ret = regmap_read(data->regmap, data->base + SC27XX_FGU_CURRENT, &cur); + if (ret) + return ret; + + /* + * The register holds an ADC value, which needs to be converted to the + * corresponding current value. + */ + *val = sc27xx_fgu_adc_to_current(data, cur - SC27XX_FGU_CUR_BASIC_ADC); + + return 0; +} + +static int sc27xx_fgu_get_vbat_ocv(struct sc27xx_fgu_data *data, int *val) +{ + int vol, cur, ret; + + ret = sc27xx_fgu_get_vbat_vol(data, &vol); + if (ret) + return ret; + + ret = sc27xx_fgu_get_current(data, &cur); + if (ret) + return ret; + + /* Return the battery OCV in microvolts. */ + *val = vol * 1000 - cur * data->internal_resist; + + return 0; +} + +static int sc27xx_fgu_get_temp(struct sc27xx_fgu_data *data, int *temp) +{ + return iio_read_channel_processed(data->channel, temp); +} + +static int sc27xx_fgu_get_health(struct sc27xx_fgu_data *data, int *health) +{ + int ret, vol; + + ret = sc27xx_fgu_get_vbat_vol(data, &vol); + if (ret) + return ret; + + if (vol > data->max_volt) + *health = POWER_SUPPLY_HEALTH_OVERVOLTAGE; + else + *health = POWER_SUPPLY_HEALTH_GOOD; + + return 0; +} + +static int sc27xx_fgu_get_status(struct sc27xx_fgu_data *data, int *status) +{ + union power_supply_propval val; + struct power_supply *psy; + int i, ret = -EINVAL; + + for (i = 0; i < ARRAY_SIZE(sc27xx_charger_supply_name); i++) { + psy = power_supply_get_by_name(sc27xx_charger_supply_name[i]); + if (!psy) + continue; + + ret = power_supply_get_property(psy, POWER_SUPPLY_PROP_STATUS, + &val); + power_supply_put(psy); + if (ret) + return ret; + + *status = val.intval; + } + + return ret; +} + +static int sc27xx_fgu_get_property(struct power_supply *psy, + enum power_supply_property psp, + union power_supply_propval *val) +{ + struct sc27xx_fgu_data *data = power_supply_get_drvdata(psy); + int ret = 0; + int value; + + mutex_lock(&data->lock); + + switch (psp) { + case POWER_SUPPLY_PROP_STATUS: + ret = sc27xx_fgu_get_status(data, &value); + if (ret) + goto error; + + val->intval = value; + break; + + case POWER_SUPPLY_PROP_HEALTH: + ret = sc27xx_fgu_get_health(data, &value); + if (ret) + goto error; + + val->intval = value; + break; + + case POWER_SUPPLY_PROP_PRESENT: + val->intval = data->bat_present; + break; + + case POWER_SUPPLY_PROP_TEMP: + ret = sc27xx_fgu_get_temp(data, &value); + if (ret) + goto error; + + val->intval = value; + break; + + case POWER_SUPPLY_PROP_TECHNOLOGY: + val->intval = POWER_SUPPLY_TECHNOLOGY_LION; + break; + + case POWER_SUPPLY_PROP_CAPACITY: + ret = sc27xx_fgu_get_capacity(data, &value); + if (ret) + goto error; + + val->intval = value; + break; + + case POWER_SUPPLY_PROP_VOLTAGE_NOW: + ret = sc27xx_fgu_get_vbat_vol(data, &value); + if (ret) + goto error; + + val->intval = value * 1000; + break; + + case POWER_SUPPLY_PROP_VOLTAGE_OCV: + ret = sc27xx_fgu_get_vbat_ocv(data, &value); + if (ret) + goto error; + + val->intval = value; + break; + + case POWER_SUPPLY_PROP_CURRENT_NOW: + 
case POWER_SUPPLY_PROP_CURRENT_AVG: + ret = sc27xx_fgu_get_current(data, &value); + if (ret) + goto error; + + val->intval = value * 1000; + break; + + default: + ret = -EINVAL; + break; + } + +error: + mutex_unlock(&data->lock); + return ret; +} + +static int sc27xx_fgu_set_property(struct power_supply *psy, + enum power_supply_property psp, + const union power_supply_propval *val) +{ + struct sc27xx_fgu_data *data = power_supply_get_drvdata(psy); + int ret; + + if (psp != POWER_SUPPLY_PROP_CAPACITY) + return -EINVAL; + + mutex_lock(&data->lock); + + ret = sc27xx_fgu_save_last_cap(data, val->intval); + + mutex_unlock(&data->lock); + + if (ret < 0) + dev_err(data->dev, "failed to save battery capacity\n"); + + return ret; +} + +static void sc27xx_fgu_external_power_changed(struct power_supply *psy) +{ + struct sc27xx_fgu_data *data = power_supply_get_drvdata(psy); + + power_supply_changed(data->battery); +} + +static int sc27xx_fgu_property_is_writeable(struct power_supply *psy, + enum power_supply_property psp) +{ + return psp == POWER_SUPPLY_PROP_CAPACITY; +} + +static enum power_supply_property sc27xx_fgu_props[] = { + POWER_SUPPLY_PROP_STATUS, + POWER_SUPPLY_PROP_HEALTH, + POWER_SUPPLY_PROP_PRESENT, + POWER_SUPPLY_PROP_TEMP, + POWER_SUPPLY_PROP_TECHNOLOGY, + POWER_SUPPLY_PROP_CAPACITY, + POWER_SUPPLY_PROP_VOLTAGE_NOW, + POWER_SUPPLY_PROP_VOLTAGE_OCV, + POWER_SUPPLY_PROP_CURRENT_NOW, + POWER_SUPPLY_PROP_CURRENT_AVG, +}; + +static const struct power_supply_desc sc27xx_fgu_desc = { + .name = "sc27xx-fgu", + .type = POWER_SUPPLY_TYPE_BATTERY, + .properties = sc27xx_fgu_props, + .num_properties = ARRAY_SIZE(sc27xx_fgu_props), + .get_property = sc27xx_fgu_get_property, + .set_property = sc27xx_fgu_set_property, + .external_power_changed = sc27xx_fgu_external_power_changed, + .property_is_writeable = sc27xx_fgu_property_is_writeable, +}; + +static void sc27xx_fgu_adjust_cap(struct sc27xx_fgu_data *data, int cap) +{ + data->init_cap = cap; + data->init_clbcnt = sc27xx_fgu_cap_to_clbcnt(data, data->init_cap); +} + +static irqreturn_t sc27xx_fgu_interrupt(int irq, void *dev_id) +{ + struct sc27xx_fgu_data *data = dev_id; + int ret, cap, ocv, adc; + u32 status; + + mutex_lock(&data->lock); + + ret = regmap_read(data->regmap, data->base + SC27XX_FGU_INT_STS, + &status); + if (ret) + goto out; + + ret = regmap_update_bits(data->regmap, data->base + SC27XX_FGU_INT_CLR, + status, status); + if (ret) + goto out; + + /* + * When the low overload voltage interrupt fires, we should calibrate + * the battery capacity for the low voltage stage. + */ + if (!(status & SC27XX_FGU_LOW_OVERLOAD_INT)) + goto out; + + ret = sc27xx_fgu_get_capacity(data, &cap); + if (ret) + goto out; + + ret = sc27xx_fgu_get_vbat_ocv(data, &ocv); + if (ret) + goto out; + + /* + * If the current OCV value is less than the minimum OCV value in the + * OCV table, the battery capacity is now 0%, and we should adjust the + * initial capacity to 0. + */ + if (ocv <= data->cap_table[data->table_len - 1].ocv) { + sc27xx_fgu_adjust_cap(data, 0); + } else if (ocv <= data->min_volt) { + /* + * If the current OCV value is less than the low alarm voltage, + * but the current capacity is larger than the alarm capacity, + * we should adjust the initial capacity to the alarm capacity. 
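+		 *
+		 * To summarize the branch below: a capacity above the alarm
+		 * level is clamped down to the alarm capacity, a zero or
+		 * negative capacity is recomputed from the current OCV, and
+		 * afterwards the low-voltage alarm threshold is lowered to
+		 * the smallest OCV in the table.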
+ */ + if (cap > data->alarm_cap) { + sc27xx_fgu_adjust_cap(data, data->alarm_cap); + } else if (cap <= 0) { + int cur_cap; + + /* + * If the current capacity is 0 or negative + * (some error occurred), we should adjust the initial + * capacity to the capacity corresponding to the current + * OCV value. + */ + cur_cap = power_supply_ocv2cap_simple(data->cap_table, + data->table_len, + ocv); + sc27xx_fgu_adjust_cap(data, cur_cap); + } + + /* + * After adjusting the battery capacity, we should set the + * lowest alarm voltage instead. + */ + data->min_volt = data->cap_table[data->table_len - 1].ocv; + adc = sc27xx_fgu_voltage_to_adc(data, data->min_volt / 1000); + regmap_update_bits(data->regmap, data->base + SC27XX_FGU_LOW_OVERLOAD, + SC27XX_FGU_LOW_OVERLOAD_MASK, adc); + } + +out: + mutex_unlock(&data->lock); + + power_supply_changed(data->battery); + return IRQ_HANDLED; +} + +static irqreturn_t sc27xx_fgu_bat_detection(int irq, void *dev_id) +{ + struct sc27xx_fgu_data *data = dev_id; + int state; + + mutex_lock(&data->lock); + + state = gpiod_get_value_cansleep(data->gpiod); + if (state < 0) { + dev_err(data->dev, "failed to get gpio state\n"); + mutex_unlock(&data->lock); + return IRQ_RETVAL(state); + } + + data->bat_present = !!state; + + mutex_unlock(&data->lock); + + power_supply_changed(data->battery); + return IRQ_HANDLED; +} + +static void sc27xx_fgu_disable(void *_data) +{ + struct sc27xx_fgu_data *data = _data; + + regmap_update_bits(data->regmap, SC27XX_CLK_EN0, SC27XX_FGU_RTC_EN, 0); + regmap_update_bits(data->regmap, SC27XX_MODULE_EN0, SC27XX_FGU_EN, 0); +} + +static int sc27xx_fgu_cap_to_clbcnt(struct sc27xx_fgu_data *data, int capacity) +{ + /* + * Get current capacity (mAh) = battery total capacity (mAh) * + * current capacity percent (capacity / 100). + */ + int cur_cap = DIV_ROUND_CLOSEST(data->total_cap * capacity, 100); + + /* + * Convert current capacity (mAh) to coulomb counter according to the + * formula: 1 mAh = 3.6 coulomb. + */ + return DIV_ROUND_CLOSEST(cur_cap * 36, 10); +} + +static int sc27xx_fgu_calibration(struct sc27xx_fgu_data *data) +{ + struct nvmem_cell *cell; + int calib_data, cal_4200mv; + void *buf; + size_t len; + + cell = nvmem_cell_get(data->dev, "fgu_calib"); + if (IS_ERR(cell)) + return PTR_ERR(cell); + + buf = nvmem_cell_read(cell, &len); + nvmem_cell_put(cell); + + if (IS_ERR(buf)) + return PTR_ERR(buf); + + memcpy(&calib_data, buf, min(len, sizeof(u32))); + + /* + * Get the ADC value corresponding to 4200 mV from the eFuse controller + * according to the formula below. Then convert to the ADC values + * corresponding to 1000 mV and 1000 mA. + */ + cal_4200mv = (calib_data & 0x1ff) + 6963 - 4096 - 256; + data->vol_1000mv_adc = DIV_ROUND_CLOSEST(cal_4200mv * 10, 42); + data->cur_1000ma_adc = data->vol_1000mv_adc * 4; + + kfree(buf); + return 0; +} + +static int sc27xx_fgu_hw_init(struct sc27xx_fgu_data *data) +{ + struct power_supply_battery_info info = { }; + struct power_supply_battery_ocv_table *table; + int ret, delta_clbcnt, alarm_adc; + + ret = power_supply_get_battery_info(data->battery, &info); + if (ret) { + dev_err(data->dev, "failed to get battery information\n"); + return ret; + } + + data->total_cap = info.charge_full_design_uah / 1000; + data->max_volt = info.constant_charge_voltage_max_uv / 1000; + data->internal_resist = info.factory_internal_resistance_uohm / 1000; + data->min_volt = info.voltage_min_design_uv; + + /* + * For the SC27XX fuel gauge device, we only use one OCV-capacity + * table, at a normal temperature of 20 degrees Celsius. 
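+	 *
+	 * (power_supply_batinfo_ocv2cap(&info, ocv, 20) would be the
+	 * one-call way to do such a lookup, but the table is copied and
+	 * kept below so that capacity can be recomputed later without
+	 * re-reading the battery information.)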
+ */ + table = power_supply_find_ocv2cap_table(&info, 20, &data->table_len); + if (!table) + return -EINVAL; + + data->cap_table = devm_kmemdup(data->dev, table, + data->table_len * sizeof(*table), + GFP_KERNEL); + if (!data->cap_table) { + power_supply_put_battery_info(data->battery, &info); + return -ENOMEM; + } + + data->alarm_cap = power_supply_ocv2cap_simple(data->cap_table, + data->table_len, + data->min_volt); + + power_supply_put_battery_info(data->battery, &info); + + ret = sc27xx_fgu_calibration(data); + if (ret) + return ret; + + /* Enable the FGU module */ + ret = regmap_update_bits(data->regmap, SC27XX_MODULE_EN0, + SC27XX_FGU_EN, SC27XX_FGU_EN); + if (ret) { + dev_err(data->dev, "failed to enable fgu\n"); + return ret; + } + + /* Enable the FGU RTC clock to make it work */ + ret = regmap_update_bits(data->regmap, SC27XX_CLK_EN0, + SC27XX_FGU_RTC_EN, SC27XX_FGU_RTC_EN); + if (ret) { + dev_err(data->dev, "failed to enable fgu RTC clock\n"); + goto disable_fgu; + } + + ret = regmap_update_bits(data->regmap, data->base + SC27XX_FGU_INT_CLR, + SC27XX_FGU_INT_MASK, SC27XX_FGU_INT_MASK); + if (ret) { + dev_err(data->dev, "failed to clear interrupt status\n"); + goto disable_clk; + } + + /* + * Set the voltage low overload threshold, which means that when the + * battery voltage is lower than this threshold, the controller will + * generate an interrupt to notify us. + */ + alarm_adc = sc27xx_fgu_voltage_to_adc(data, data->min_volt / 1000); + ret = regmap_update_bits(data->regmap, data->base + SC27XX_FGU_LOW_OVERLOAD, + SC27XX_FGU_LOW_OVERLOAD_MASK, alarm_adc); + if (ret) { + dev_err(data->dev, "failed to set fgu low overload\n"); + goto disable_clk; + } + + /* + * Set the coulomb counter delta threshold, which means that when the + * coulomb counter change is a multiple of the delta threshold, the + * controller will generate an interrupt to notify the user to update + * the battery capacity. We set the delta threshold to the counter + * value of 1% capacity. + */ + delta_clbcnt = sc27xx_fgu_cap_to_clbcnt(data, 1); + + ret = regmap_update_bits(data->regmap, data->base + SC27XX_FGU_CLBCNT_DELTL, + SC27XX_FGU_CLBCNT_MASK, delta_clbcnt); + if (ret) { + dev_err(data->dev, "failed to set low delta coulomb counter\n"); + goto disable_clk; + } + + ret = regmap_update_bits(data->regmap, data->base + SC27XX_FGU_CLBCNT_DELTH, + SC27XX_FGU_CLBCNT_MASK, + delta_clbcnt >> SC27XX_FGU_CLBCNT_SHIFT); + if (ret) { + dev_err(data->dev, "failed to set high delta coulomb counter\n"); + goto disable_clk; + } + + /* + * Get the boot battery capacity when the system powers on; it is used + * to initialize the coulomb counter. After that, we can read the + * coulomb counter to measure the battery capacity. + */ + ret = sc27xx_fgu_get_boot_capacity(data, &data->init_cap); + if (ret) { + dev_err(data->dev, "failed to get boot capacity\n"); + goto disable_clk; + } + + /* + * Convert the battery capacity to the corresponding initial coulomb + * counter value and write it into the coulomb counter registers. 
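+	 *
+	 * For example, with total_cap = 1900 mAh and init_cap = 50
+	 * (percent), sc27xx_fgu_cap_to_clbcnt() yields
+	 * 950 mAh * 3.6 = 3420 coulomb.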
+ */ + data->init_clbcnt = sc27xx_fgu_cap_to_clbcnt(data, data->init_cap); + ret = sc27xx_fgu_set_clbcnt(data, data->init_clbcnt); + if (ret) { + dev_err(data->dev, "failed to initialize coulomb counter\n"); + goto disable_clk; + } + + return 0; + +disable_clk: + regmap_update_bits(data->regmap, SC27XX_CLK_EN0, SC27XX_FGU_RTC_EN, 0); +disable_fgu: + regmap_update_bits(data->regmap, SC27XX_MODULE_EN0, SC27XX_FGU_EN, 0); + + return ret; +} + +static int sc27xx_fgu_probe(struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + struct power_supply_config fgu_cfg = { }; + struct sc27xx_fgu_data *data; + int ret, irq; + + data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); + if (!data) + return -ENOMEM; + + data->regmap = dev_get_regmap(pdev->dev.parent, NULL); + if (!data->regmap) { + dev_err(&pdev->dev, "failed to get regmap\n"); + return -ENODEV; + } + + ret = device_property_read_u32(&pdev->dev, "reg", &data->base); + if (ret) { + dev_err(&pdev->dev, "failed to get fgu address\n"); + return ret; + } + + data->channel = devm_iio_channel_get(&pdev->dev, "bat-temp"); + if (IS_ERR(data->channel)) { + dev_err(&pdev->dev, "failed to get IIO channel\n"); + return PTR_ERR(data->channel); + } + + data->gpiod = devm_gpiod_get(&pdev->dev, "bat-detect", GPIOD_IN); + if (IS_ERR(data->gpiod)) { + dev_err(&pdev->dev, "failed to get battery detection GPIO\n"); + return PTR_ERR(data->gpiod); + } + + ret = gpiod_get_value_cansleep(data->gpiod); + if (ret < 0) { + dev_err(&pdev->dev, "failed to get gpio state\n"); + return ret; + } + + data->bat_present = !!ret; + mutex_init(&data->lock); + data->dev = &pdev->dev; + platform_set_drvdata(pdev, data); + + fgu_cfg.drv_data = data; + fgu_cfg.of_node = np; + data->battery = devm_power_supply_register(&pdev->dev, &sc27xx_fgu_desc, + &fgu_cfg); + if (IS_ERR(data->battery)) { + dev_err(&pdev->dev, "failed to register power supply\n"); + return PTR_ERR(data->battery); + } + + ret = sc27xx_fgu_hw_init(data); + if (ret) { + dev_err(&pdev->dev, "failed to initialize fgu hardware\n"); + return ret; + } + + ret = devm_add_action(&pdev->dev, sc27xx_fgu_disable, data); + if (ret) { + sc27xx_fgu_disable(data); + dev_err(&pdev->dev, "failed to add fgu disable action\n"); + return ret; + } + + irq = platform_get_irq(pdev, 0); + if (irq < 0) { + dev_err(&pdev->dev, "no irq resource specified\n"); + return irq; + } + + ret = devm_request_threaded_irq(data->dev, irq, NULL, + sc27xx_fgu_interrupt, + IRQF_NO_SUSPEND | IRQF_ONESHOT, + pdev->name, data); + if (ret) { + dev_err(data->dev, "failed to request fgu IRQ\n"); + return ret; + } + + irq = gpiod_to_irq(data->gpiod); + if (irq < 0) { + dev_err(&pdev->dev, "failed to translate GPIO to IRQ\n"); + return irq; + } + + ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, + sc27xx_fgu_bat_detection, + IRQF_ONESHOT | IRQF_TRIGGER_RISING | + IRQF_TRIGGER_FALLING, + pdev->name, data); + if (ret) { + dev_err(&pdev->dev, "failed to request IRQ\n"); + return ret; + } + + return 0; +} + +#ifdef CONFIG_PM_SLEEP +static int sc27xx_fgu_resume(struct device *dev) +{ + struct sc27xx_fgu_data *data = dev_get_drvdata(dev); + int ret; + + ret = regmap_update_bits(data->regmap, data->base + SC27XX_FGU_INT_EN, + SC27XX_FGU_LOW_OVERLOAD_INT | + SC27XX_FGU_CLBCNT_DELTA_INT, 0); + if (ret) { + dev_err(data->dev, "failed to disable fgu interrupts\n"); + return ret; + } + + return 0; +} + +static int sc27xx_fgu_suspend(struct device *dev) +{ + struct sc27xx_fgu_data *data = dev_get_drvdata(dev); + int ret, status, ocv; + + 
ret = sc27xx_fgu_get_status(data, &status); + if (ret) + return ret; + + /* + * If we are charging, there is no need to enable the FGU interrupts to + * adjust the battery capacity. + */ + if (status != POWER_SUPPLY_STATUS_NOT_CHARGING) + return 0; + + ret = regmap_update_bits(data->regmap, data->base + SC27XX_FGU_INT_EN, + SC27XX_FGU_LOW_OVERLOAD_INT, + SC27XX_FGU_LOW_OVERLOAD_INT); + if (ret) { + dev_err(data->dev, "failed to enable low voltage interrupt\n"); + return ret; + } + + ret = sc27xx_fgu_get_vbat_ocv(data, &ocv); + if (ret) + goto disable_int; + + /* + * If the current OCV is less than the minimum voltage, we should + * enable the coulomb counter threshold interrupt so we are notified + * when the battery capacity needs adjusting. + */ + if (ocv < data->min_volt) { + ret = regmap_update_bits(data->regmap, + data->base + SC27XX_FGU_INT_EN, + SC27XX_FGU_CLBCNT_DELTA_INT, + SC27XX_FGU_CLBCNT_DELTA_INT); + if (ret) { + dev_err(data->dev, + "failed to enable coulomb threshold int\n"); + goto disable_int; + } + } + + return 0; + +disable_int: + regmap_update_bits(data->regmap, data->base + SC27XX_FGU_INT_EN, + SC27XX_FGU_LOW_OVERLOAD_INT, 0); + return ret; +} +#endif + +static const struct dev_pm_ops sc27xx_fgu_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(sc27xx_fgu_suspend, sc27xx_fgu_resume) +}; + +static const struct of_device_id sc27xx_fgu_of_match[] = { + { .compatible = "sprd,sc2731-fgu", }, + { } +}; + +static struct platform_driver sc27xx_fgu_driver = { + .probe = sc27xx_fgu_probe, + .driver = { + .name = "sc27xx-fgu", + .of_match_table = sc27xx_fgu_of_match, + .pm = &sc27xx_fgu_pm_ops, + } +}; + +module_platform_driver(sc27xx_fgu_driver); + +MODULE_DESCRIPTION("Spreadtrum SC27XX PMICs Fuel Gauge Unit Driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/pps/Kconfig b/drivers/pps/Kconfig index c6008f296605..965aa086a1e0 100644 --- a/drivers/pps/Kconfig +++ b/drivers/pps/Kconfig @@ -37,8 +37,8 @@ config NTP_PPS It doesn't work on tickless systems at the moment. 
-source drivers/pps/clients/Kconfig +source "drivers/pps/clients/Kconfig" -source drivers/pps/generators/Kconfig +source "drivers/pps/generators/Kconfig" endif # PPS diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c index 333ad7d5b45b..dd5d1103e02b 100644 --- a/drivers/pps/clients/pps-gpio.c +++ b/drivers/pps/clients/pps-gpio.c @@ -158,10 +158,10 @@ static int pps_gpio_probe(struct platform_device *pdev) if (data->capture_clear) pps_default_params |= PPS_CAPTURECLEAR | PPS_OFFSETCLEAR; data->pps = pps_register_source(&data->info, pps_default_params); - if (data->pps == NULL) { + if (IS_ERR(data->pps)) { dev_err(&pdev->dev, "failed to register IRQ %d as PPS source\n", data->irq); - return -EINVAL; + return PTR_ERR(data->pps); } /* register IRQ interrupt handler */ diff --git a/drivers/pps/clients/pps-ktimer.c b/drivers/pps/clients/pps-ktimer.c index 04735649052a..728818b87af3 100644 --- a/drivers/pps/clients/pps-ktimer.c +++ b/drivers/pps/clients/pps-ktimer.c @@ -80,9 +80,9 @@ static int __init pps_ktimer_init(void) { pps = pps_register_source(&pps_ktimer_info, PPS_CAPTUREASSERT | PPS_OFFSETASSERT); - if (pps == NULL) { + if (IS_ERR(pps)) { pr_err("cannot register PPS source\n"); - return -ENOMEM; + return PTR_ERR(pps); } timer_setup(&ktimer, pps_ktimer_event, 0); diff --git a/drivers/pps/clients/pps-ldisc.c b/drivers/pps/clients/pps-ldisc.c index 73bd3bb4d93b..00f6c460e493 100644 --- a/drivers/pps/clients/pps-ldisc.c +++ b/drivers/pps/clients/pps-ldisc.c @@ -72,9 +72,9 @@ static int pps_tty_open(struct tty_struct *tty) pps = pps_register_source(&info, PPS_CAPTUREBOTH | \ PPS_OFFSETASSERT | PPS_OFFSETCLEAR); - if (pps == NULL) { + if (IS_ERR(pps)) { pr_err("cannot register PPS source \"%s\"\n", info.path); - return -ENOMEM; + return PTR_ERR(pps); } pps->lookup_cookie = tty; diff --git a/drivers/pps/clients/pps_parport.c b/drivers/pps/clients/pps_parport.c index 4db824f88d00..7226e39aae83 100644 --- a/drivers/pps/clients/pps_parport.c +++ b/drivers/pps/clients/pps_parport.c @@ -179,7 +179,7 @@ static void parport_attach(struct parport *port) device->pps = pps_register_source(&info, PPS_CAPTUREBOTH | PPS_OFFSETASSERT | PPS_OFFSETCLEAR); - if (device->pps == NULL) { + if (IS_ERR(device->pps)) { pr_err("couldn't register PPS source\n"); goto err_release_dev; } diff --git a/drivers/pps/kapi.c b/drivers/pps/kapi.c index 805c749ac1ad..a1c3cd38754f 100644 --- a/drivers/pps/kapi.c +++ b/drivers/pps/kapi.c @@ -72,7 +72,8 @@ static void pps_echo_client_default(struct pps_device *pps, int event, * source is described by info's fields and it will have, as default PPS * parameters, the ones specified into default_params. * - * The function returns, in case of success, the PPS device. Otherwise NULL. + * The function returns, in case of success, the PPS device. Otherwise + * ERR_PTR(errno). 
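+ *
+ * A typical caller is now expected to check the result with IS_ERR(),
+ * for example (sketch):
+ *
+ *	pps = pps_register_source(&info, default_params);
+ *	if (IS_ERR(pps))
+ *		return PTR_ERR(pps);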
*/ struct pps_device *pps_register_source(struct pps_source_info *info, @@ -135,7 +136,7 @@ kfree_pps: pps_register_source_exit: pr_err("%s: unable to register source\n", info->name); - return NULL; + return ERR_PTR(err); } EXPORT_SYMBOL(pps_register_source); diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c index 2012551d93e0..797fab33bb98 100644 --- a/drivers/ptp/ptp_chardev.c +++ b/drivers/ptp/ptp_chardev.c @@ -121,18 +121,20 @@ int ptp_open(struct posix_clock *pc, fmode_t fmode) long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg) { - struct ptp_clock_caps caps; - struct ptp_clock_request req; - struct ptp_sys_offset *sysoff = NULL; - struct ptp_sys_offset_precise precise_offset; - struct ptp_pin_desc pd; struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock); + struct ptp_sys_offset_extended *extoff = NULL; + struct ptp_sys_offset_precise precise_offset; + struct system_device_crosststamp xtstamp; struct ptp_clock_info *ops = ptp->info; + struct ptp_sys_offset *sysoff = NULL; + struct ptp_system_timestamp sts; + struct ptp_clock_request req; + struct ptp_clock_caps caps; struct ptp_clock_time *pct; + unsigned int i, pin_index; + struct ptp_pin_desc pd; struct timespec64 ts; - struct system_device_crosststamp xtstamp; int enable, err = 0; - unsigned int i, pin_index; switch (cmd) { @@ -211,6 +213,36 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg) err = -EFAULT; break; + case PTP_SYS_OFFSET_EXTENDED: + if (!ptp->info->gettimex64) { + err = -EOPNOTSUPP; + break; + } + extoff = memdup_user((void __user *)arg, sizeof(*extoff)); + if (IS_ERR(extoff)) { + err = PTR_ERR(extoff); + extoff = NULL; + break; + } + if (extoff->n_samples > PTP_MAX_SAMPLES) { + err = -EINVAL; + break; + } + for (i = 0; i < extoff->n_samples; i++) { + err = ptp->info->gettimex64(ptp->info, &ts, &sts); + if (err) + goto out; + extoff->ts[i][0].sec = sts.pre_ts.tv_sec; + extoff->ts[i][0].nsec = sts.pre_ts.tv_nsec; + extoff->ts[i][1].sec = ts.tv_sec; + extoff->ts[i][1].nsec = ts.tv_nsec; + extoff->ts[i][2].sec = sts.post_ts.tv_sec; + extoff->ts[i][2].nsec = sts.post_ts.tv_nsec; + } + if (copy_to_user((void __user *)arg, extoff, sizeof(*extoff))) + err = -EFAULT; + break; + case PTP_SYS_OFFSET: sysoff = memdup_user((void __user *)arg, sizeof(*sysoff)); if (IS_ERR(sysoff)) { @@ -228,7 +260,12 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg) pct->sec = ts.tv_sec; pct->nsec = ts.tv_nsec; pct++; - ptp->info->gettime64(ptp->info, &ts); + if (ops->gettimex64) + err = ops->gettimex64(ops, &ts, NULL); + else + err = ops->gettime64(ops, &ts); + if (err) + goto out; pct->sec = ts.tv_sec; pct->nsec = ts.tv_nsec; pct++; @@ -281,6 +318,8 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg) break; } +out: + kfree(extoff); kfree(sysoff); return err; } diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c index 5419a89d300e..48f3594a7458 100644 --- a/drivers/ptp/ptp_clock.c +++ b/drivers/ptp/ptp_clock.c @@ -117,7 +117,10 @@ static int ptp_clock_gettime(struct posix_clock *pc, struct timespec64 *tp) struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock); int err; - err = ptp->info->gettime64(ptp->info, tp); + if (ptp->info->gettimex64) + err = ptp->info->gettimex64(ptp->info, tp, NULL); + else + err = ptp->info->gettime64(ptp->info, tp); return err; } @@ -249,8 +252,10 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info, ptp->dev = 
device_create_with_groups(ptp_class, parent, ptp->devid, ptp, ptp->pin_attr_groups, "ptp%d", ptp->index); - if (IS_ERR(ptp->dev)) + if (IS_ERR(ptp->dev)) { + err = PTR_ERR(ptp->dev); goto no_device; + } /* Register a new PPS source. */ if (info->pps) { @@ -260,7 +265,8 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info, pps.mode = PTP_PPS_MODE; pps.owner = info->owner; ptp->pps_source = pps_register_source(&pps, PTP_PPS_DEFAULTS); - if (!ptp->pps_source) { + if (IS_ERR(ptp->pps_source)) { + err = PTR_ERR(ptp->pps_source); pr_err("failed to register pps source\n"); goto no_pps; } diff --git a/drivers/rapidio/Kconfig b/drivers/rapidio/Kconfig index d6d2f20c4597..e3d8fe41b50c 100644 --- a/drivers/rapidio/Kconfig +++ b/drivers/rapidio/Kconfig @@ -1,6 +1,17 @@ # # RapidIO configuration # + +config HAVE_RAPIDIO + bool + +menuconfig RAPIDIO + tristate "RapidIO support" + depends on HAVE_RAPIDIO || PCI + help + If you say Y here, the kernel will include drivers and + infrastructure code to support RapidIO interconnect devices. + source "drivers/rapidio/devices/Kconfig" config RAPIDIO_DISC_TIMEOUT diff --git a/drivers/ras/Kconfig b/drivers/ras/Kconfig index 4c3c67d13254..b834ff555188 100644 --- a/drivers/ras/Kconfig +++ b/drivers/ras/Kconfig @@ -30,6 +30,6 @@ menuconfig RAS if RAS -source arch/x86/ras/Kconfig +source "arch/x86/ras/Kconfig" endif diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c index de21f620b882..183fc42a510a 100644 --- a/drivers/remoteproc/remoteproc_virtio.c +++ b/drivers/remoteproc/remoteproc_virtio.c @@ -214,6 +214,16 @@ static u64 rproc_virtio_get_features(struct virtio_device *vdev) return rsc->dfeatures; } +static void rproc_transport_features(struct virtio_device *vdev) +{ + /* + * Packed ring isn't enabled on remoteproc for now, + * because remoteproc uses vring_new_virtqueue() which + * creates virtio rings on preallocated memory. + */ + __virtio_clear_bit(vdev, VIRTIO_F_RING_PACKED); +} + static int rproc_virtio_finalize_features(struct virtio_device *vdev) { struct rproc_vdev *rvdev = vdev_to_rvdev(vdev); @@ -224,6 +234,9 @@ static int rproc_virtio_finalize_features(struct virtio_device *vdev) /* Give virtio_ring a chance to accept features */ vring_transport_features(vdev); + /* Give virtio_rproc a chance to accept features. */ + rproc_transport_features(vdev); + /* Make sure we don't have any features > 32 bits! 
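 * (The negotiated vdev features are written back to a 32-bit field of
 * the virtio device resource table shared with the remote processor,
 * which is why feature bits above 31 cannot be supported here.)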
*/ BUG_ON((u32)vdev->features != vdev->features); diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c index 2016e0ed5865..8e26001dc11c 100644 --- a/drivers/s390/block/dasd_ioctl.c +++ b/drivers/s390/block/dasd_ioctl.c @@ -412,6 +412,7 @@ static int dasd_ioctl_information(struct dasd_block *block, struct ccw_dev_id dev_id; struct dasd_device *base; struct ccw_device *cdev; + struct list_head *l; unsigned long flags; int rc; @@ -462,23 +463,10 @@ static int dasd_ioctl_information(struct dasd_block *block, memcpy(dasd_info->type, base->discipline->name, 4); - if (block->request_queue->request_fn) { - struct list_head *l; -#ifdef DASD_EXTENDED_PROFILING - { - struct list_head *l; - spin_lock_irqsave(&block->lock, flags); - list_for_each(l, &block->request_queue->queue_head) - dasd_info->req_queue_len++; - spin_unlock_irqrestore(&block->lock, flags); - } -#endif /* DASD_EXTENDED_PROFILING */ - spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags); - list_for_each(l, &base->ccw_queue) - dasd_info->chanq_len++; - spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), - flags); - } + spin_lock_irqsave(&block->queue_lock, flags); + list_for_each(l, &base->ccw_queue) + dasd_info->chanq_len++; + spin_unlock_irqrestore(&block->queue_lock, flags); rc = 0; if (copy_to_user(argp, dasd_info, diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h index 04e294d1d16d..0ee026947f20 100644 --- a/drivers/s390/net/qeth_core.h +++ b/drivers/s390/net/qeth_core.h @@ -314,7 +314,7 @@ struct qeth_hdr_layer3 { __u16 frame_offset; union { /* TX: */ - u8 ipv6_addr[16]; + struct in6_addr ipv6_addr; struct ipv4 { u8 res[12]; u32 addr; @@ -665,7 +665,6 @@ struct qeth_card_blkt { #define QETH_BROADCAST_WITH_ECHO 0x01 #define QETH_BROADCAST_WITHOUT_ECHO 0x02 -#define QETH_LAYER2_MAC_READ 0x01 #define QETH_LAYER2_MAC_REGISTERED 0x02 struct qeth_card_info { unsigned short unit_addr2; @@ -775,7 +774,6 @@ struct qeth_switch_info { #define QETH_NAPI_WEIGHT NAPI_POLL_WEIGHT struct qeth_card { - struct list_head list; enum qeth_card_states state; spinlock_t lock; struct ccwgroup_device *gdev; @@ -827,11 +825,6 @@ struct qeth_card { struct work_struct close_dev_work; }; -struct qeth_card_list_struct { - struct list_head list; - rwlock_t rwlock; -}; - struct qeth_trap_id { __u16 lparnr; char vmname[8]; @@ -978,11 +971,11 @@ int qeth_core_load_discipline(struct qeth_card *, enum qeth_discipline_id); void qeth_core_free_discipline(struct qeth_card *); /* exports for qeth discipline device drivers */ -extern struct qeth_card_list_struct qeth_core_card_list; extern struct kmem_cache *qeth_core_header_cache; extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS]; struct net_device *qeth_clone_netdev(struct net_device *orig); +struct qeth_card *qeth_get_card_by_busid(char *bus_id); void qeth_set_recovery_task(struct qeth_card *); void qeth_clear_recovery_task(struct qeth_card *); void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int); @@ -1025,9 +1018,6 @@ int qeth_send_control_data(struct qeth_card *, int, struct qeth_cmd_buffer *, int (*reply_cb)(struct qeth_card *, struct qeth_reply*, unsigned long), void *reply_param); unsigned int qeth_count_elements(struct sk_buff *skb, unsigned int data_offset); -int qeth_do_send_packet_fast(struct qeth_qdio_out_q *queue, struct sk_buff *skb, - struct qeth_hdr *hdr, unsigned int offset, - unsigned int hd_len); int qeth_do_send_packet(struct qeth_card *card, struct qeth_qdio_out_q *queue, struct sk_buff *skb, struct qeth_hdr *hdr, unsigned int 
offset, unsigned int hd_len, @@ -1058,11 +1048,6 @@ netdev_features_t qeth_features_check(struct sk_buff *skb, struct net_device *dev, netdev_features_t features); int qeth_vm_request_mac(struct qeth_card *card); -int qeth_add_hw_header(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr **hdr, unsigned int hdr_len, - unsigned int proto_len, unsigned int *elements); -void qeth_fill_tso_ext(struct qeth_hdr_tso *hdr, unsigned int payload_len, - struct sk_buff *skb, unsigned int proto_len); int qeth_xmit(struct qeth_card *card, struct sk_buff *skb, struct qeth_qdio_out_q *queue, int ipv, int cast_type, void (*fill_header)(struct qeth_card *card, struct qeth_hdr *hdr, diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index 254065271867..e63e03143ca7 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -54,8 +54,6 @@ struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS] = { }; EXPORT_SYMBOL_GPL(qeth_dbf); -struct qeth_card_list_struct qeth_core_card_list; -EXPORT_SYMBOL_GPL(qeth_core_card_list); struct kmem_cache *qeth_core_header_cache; EXPORT_SYMBOL_GPL(qeth_core_header_cache); static struct kmem_cache *qeth_qdio_outbuf_cache; @@ -2837,6 +2835,17 @@ static void qeth_fill_ipacmd_header(struct qeth_card *card, cmd->hdr.prot_version = prot; } +void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob) +{ + u8 prot_type = qeth_mpc_select_prot_type(card); + + memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); + memcpy(QETH_IPA_CMD_PROT_TYPE(iob->data), &prot_type, 1); + memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), + &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); +} +EXPORT_SYMBOL_GPL(qeth_prepare_ipa_cmd); + struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *card, enum qeth_ipa_cmds ipacmd, enum qeth_prot_versions prot) { @@ -2844,6 +2853,7 @@ struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *card, iob = qeth_get_buffer(&card->write); if (iob) { + qeth_prepare_ipa_cmd(card, iob); qeth_fill_ipacmd_header(card, __ipa_cmd(iob), ipacmd, prot); } else { dev_warn(&card->gdev->dev, @@ -2856,17 +2866,6 @@ struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *card, } EXPORT_SYMBOL_GPL(qeth_get_ipacmd_buffer); -void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob) -{ - u8 prot_type = qeth_mpc_select_prot_type(card); - - memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); - memcpy(QETH_IPA_CMD_PROT_TYPE(iob->data), &prot_type, 1); - memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), - &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); -} -EXPORT_SYMBOL_GPL(qeth_prepare_ipa_cmd); - /** * qeth_send_ipa_cmd() - send an IPA command * @@ -2881,7 +2880,6 @@ int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, int rc; QETH_CARD_TEXT(card, 4, "sendipa"); - qeth_prepare_ipa_cmd(card, iob); rc = qeth_send_control_data(card, IPA_CMD_LENGTH, iob, reply_cb, reply_param); if (rc == -ETIME) { @@ -3777,9 +3775,9 @@ EXPORT_SYMBOL_GPL(qeth_count_elements); * The number of needed buffer elements is returned in @elements. * Error to create the hdr is indicated by returning with < 0. 
*/ -int qeth_add_hw_header(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr **hdr, unsigned int hdr_len, - unsigned int proto_len, unsigned int *elements) +static int qeth_add_hw_header(struct qeth_card *card, struct sk_buff *skb, + struct qeth_hdr **hdr, unsigned int hdr_len, + unsigned int proto_len, unsigned int *elements) { const unsigned int max_elements = QETH_MAX_BUFFER_ELEMENTS(card); const unsigned int contiguous = proto_len ? proto_len : 1; @@ -3849,7 +3847,6 @@ check_layout: skb_copy_from_linear_data(skb, ((char *)*hdr) + hdr_len, proto_len); return 0; } -EXPORT_SYMBOL_GPL(qeth_add_hw_header); static void __qeth_fill_buffer(struct sk_buff *skb, struct qeth_qdio_out_buffer *buf, @@ -3972,9 +3969,9 @@ static int qeth_fill_buffer(struct qeth_qdio_out_q *queue, return flush_cnt; } -int qeth_do_send_packet_fast(struct qeth_qdio_out_q *queue, struct sk_buff *skb, - struct qeth_hdr *hdr, unsigned int offset, - unsigned int hd_len) +static int qeth_do_send_packet_fast(struct qeth_qdio_out_q *queue, + struct sk_buff *skb, struct qeth_hdr *hdr, + unsigned int offset, unsigned int hd_len) { int index = queue->next_buf_to_fill; struct qeth_qdio_out_buffer *buffer = queue->bufs[index]; @@ -3990,7 +3987,6 @@ int qeth_do_send_packet_fast(struct qeth_qdio_out_q *queue, struct sk_buff *skb, qeth_flush_buffers(queue, index, 1); return 0; } -EXPORT_SYMBOL_GPL(qeth_do_send_packet_fast); int qeth_do_send_packet(struct qeth_card *card, struct qeth_qdio_out_q *queue, struct sk_buff *skb, struct qeth_hdr *hdr, @@ -4082,8 +4078,9 @@ out: } EXPORT_SYMBOL_GPL(qeth_do_send_packet); -void qeth_fill_tso_ext(struct qeth_hdr_tso *hdr, unsigned int payload_len, - struct sk_buff *skb, unsigned int proto_len) +static void qeth_fill_tso_ext(struct qeth_hdr_tso *hdr, + unsigned int payload_len, struct sk_buff *skb, + unsigned int proto_len) { struct qeth_hdr_ext_tso *ext = &hdr->ext; @@ -4096,7 +4093,6 @@ void qeth_fill_tso_ext(struct qeth_hdr_tso *hdr, unsigned int payload_len, ext->mss = skb_shinfo(skb)->gso_size; ext->dg_hdr_len = proto_len; } -EXPORT_SYMBOL_GPL(qeth_fill_tso_ext); int qeth_xmit(struct qeth_card *card, struct sk_buff *skb, struct qeth_qdio_out_q *queue, int ipv, int cast_type, @@ -4119,7 +4115,7 @@ int qeth_xmit(struct qeth_card *card, struct sk_buff *skb, proto_len = skb_transport_offset(skb) + tcp_hdrlen(skb); } else { hw_hdr_len = sizeof(struct qeth_hdr); - proto_len = IS_IQD(card) ? ETH_HLEN : 0; + proto_len = (IS_IQD(card) && IS_LAYER2(card)) ? 
ETH_HLEN : 0; } rc = skb_cow_head(skb, hw_hdr_len); @@ -4235,16 +4231,18 @@ static int qeth_setadpparms_change_macaddr_cb(struct qeth_card *card, struct qeth_reply *reply, unsigned long data) { struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data; + struct qeth_ipacmd_setadpparms *adp_cmd; QETH_CARD_TEXT(card, 4, "chgmaccb"); if (qeth_setadpparms_inspect_rc(cmd)) return 0; - if (IS_LAYER3(card) || !(card->info.mac_bits & QETH_LAYER2_MAC_READ)) { - ether_addr_copy(card->dev->dev_addr, - cmd->data.setadapterparms.data.change_addr.addr); - card->info.mac_bits |= QETH_LAYER2_MAC_READ; - } + adp_cmd = &cmd->data.setadapterparms; + if (IS_LAYER2(card) && IS_OSD(card) && !IS_VM_NIC(card) && + !(adp_cmd->hdr.flags & QETH_SETADP_FLAGS_VIRTUAL_MAC)) + return 0; + + ether_addr_copy(card->dev->dev_addr, adp_cmd->data.change_addr.addr); return 0; } @@ -4499,9 +4497,6 @@ static int qeth_send_ipa_snmp_cmd(struct qeth_card *card, QETH_CARD_TEXT(card, 4, "sendsnmp"); - memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); - memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), - &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); /* adjust PDU length fields in IPA_PDU_HEADER */ s1 = (u32) IPA_PDU_HEADER_SIZE + len; s2 = (u32) len; @@ -5477,34 +5472,11 @@ struct qeth_cmd_buffer *qeth_get_setassparms_cmd(struct qeth_card *card, } EXPORT_SYMBOL_GPL(qeth_get_setassparms_cmd); -static int qeth_send_setassparms(struct qeth_card *card, - struct qeth_cmd_buffer *iob, u16 len, - long data, int (*reply_cb)(struct qeth_card *, - struct qeth_reply *, - unsigned long), - void *reply_param) -{ - int rc; - struct qeth_ipa_cmd *cmd; - - QETH_CARD_TEXT(card, 4, "sendassp"); - - cmd = __ipa_cmd(iob); - if (len <= sizeof(__u32)) - cmd->data.setassparms.data.flags_32bit = (__u32) data; - else /* (len > sizeof(__u32)) */ - memcpy(&cmd->data.setassparms.data, (void *) data, len); - - rc = qeth_send_ipa_cmd(card, iob, reply_cb, reply_param); - return rc; -} - int qeth_send_simple_setassparms_prot(struct qeth_card *card, enum qeth_ipa_funcs ipa_func, u16 cmd_code, long data, enum qeth_prot_versions prot) { - int rc; int length = 0; struct qeth_cmd_buffer *iob; @@ -5514,9 +5486,9 @@ int qeth_send_simple_setassparms_prot(struct qeth_card *card, iob = qeth_get_setassparms_cmd(card, ipa_func, cmd_code, length, prot); if (!iob) return -ENOMEM; - rc = qeth_send_setassparms(card, iob, length, data, - qeth_setassparms_cb, NULL); - return rc; + + __ipa_cmd(iob)->data.setassparms.data.flags_32bit = (__u32) data; + return qeth_send_ipa_cmd(card, iob, qeth_setassparms_cb, NULL); } EXPORT_SYMBOL_GPL(qeth_send_simple_setassparms_prot); @@ -5803,9 +5775,6 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev) break; } - write_lock_irq(&qeth_core_card_list.rwlock); - list_add_tail(&card->list, &qeth_core_card_list.list); - write_unlock_irq(&qeth_core_card_list.rwlock); return 0; err_disc: @@ -5830,9 +5799,6 @@ static void qeth_core_remove_device(struct ccwgroup_device *gdev) qeth_core_free_discipline(card); } - write_lock_irq(&qeth_core_card_list.rwlock); - list_del(&card->list); - write_unlock_irq(&qeth_core_card_list.rwlock); free_netdev(card->dev); qeth_core_free_card(card); put_device(&gdev->dev); @@ -5947,6 +5913,21 @@ static struct ccwgroup_driver qeth_core_ccwgroup_driver = { .restore = qeth_core_restore, }; +struct qeth_card *qeth_get_card_by_busid(char *bus_id) +{ + struct ccwgroup_device *gdev; + struct qeth_card *card; + + gdev = get_ccwgroupdev_by_busid(&qeth_core_ccwgroup_driver, bus_id); + if (!gdev) + return NULL; + + card 
= dev_get_drvdata(&gdev->dev); + put_device(&gdev->dev); + return card; +} +EXPORT_SYMBOL_GPL(qeth_get_card_by_busid); + int qeth_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) { struct qeth_card *card = dev->ml_priv; @@ -6378,16 +6359,16 @@ static int qeth_ipa_checksum_run_cmd(struct qeth_card *card, enum qeth_prot_versions prot) { struct qeth_cmd_buffer *iob; - int rc = -ENOMEM; QETH_CARD_TEXT(card, 4, "chkdocmd"); iob = qeth_get_setassparms_cmd(card, ipa_func, cmd_code, sizeof(__u32), prot); - if (iob) - rc = qeth_send_setassparms(card, iob, sizeof(__u32), data, - qeth_ipa_checksum_run_cmd_cb, - chksum_cb); - return rc; + if (!iob) + return -ENOMEM; + + __ipa_cmd(iob)->data.setassparms.data.flags_32bit = (__u32) data; + return qeth_send_ipa_cmd(card, iob, qeth_ipa_checksum_run_cmd_cb, + chksum_cb); } static int qeth_send_checksum_on(struct qeth_card *card, int cstype, @@ -6485,8 +6466,7 @@ static int qeth_set_tso_on(struct qeth_card *card, if (!iob) return -ENOMEM; - rc = qeth_send_setassparms(card, iob, 0, 0 /* unused */, - qeth_start_tso_cb, &tso_data); + rc = qeth_send_ipa_cmd(card, iob, qeth_start_tso_cb, &tso_data); if (rc) return rc; @@ -6503,10 +6483,9 @@ static int qeth_set_tso_on(struct qeth_card *card, } /* enable TSO capability */ - caps.supported = 0; - caps.enabled = QETH_IPA_LARGE_SEND_TCP; - rc = qeth_send_setassparms(card, iob, sizeof(caps), (long) &caps, - qeth_setassparms_get_caps_cb, &caps); + __ipa_cmd(iob)->data.setassparms.data.caps.enabled = + QETH_IPA_LARGE_SEND_TCP; + rc = qeth_send_ipa_cmd(card, iob, qeth_setassparms_get_caps_cb, &caps); if (rc) { qeth_set_tso_off(card, prot); return rc; @@ -6685,8 +6664,6 @@ static int __init qeth_core_init(void) int rc; pr_info("loading core functions\n"); - INIT_LIST_HEAD(&qeth_core_card_list.list); - rwlock_init(&qeth_core_card_list.rwlock); qeth_wq = create_singlethread_workqueue("qeth_wq"); if (!qeth_wq) { diff --git a/drivers/s390/net/qeth_core_mpc.c b/drivers/s390/net/qeth_core_mpc.c index e891c0b52f4c..16fc51ad0514 100644 --- a/drivers/s390/net/qeth_core_mpc.c +++ b/drivers/s390/net/qeth_core_mpc.c @@ -144,7 +144,6 @@ unsigned char IPA_PDU_HEADER[] = { sizeof(struct qeth_ipa_cmd) % 256, 0x00, 0x00, 0x00, 0x40, }; -EXPORT_SYMBOL_GPL(IPA_PDU_HEADER); struct ipa_rc_msg { enum qeth_ipa_return_codes rc; diff --git a/drivers/s390/net/qeth_core_mpc.h b/drivers/s390/net/qeth_core_mpc.h index 3e54be201b27..1ab321926f64 100644 --- a/drivers/s390/net/qeth_core_mpc.h +++ b/drivers/s390/net/qeth_core_mpc.h @@ -80,7 +80,9 @@ enum qeth_card_types { }; #define IS_IQD(card) ((card)->info.type == QETH_CARD_TYPE_IQD) +#define IS_OSD(card) ((card)->info.type == QETH_CARD_TYPE_OSD) #define IS_OSN(card) ((card)->info.type == QETH_CARD_TYPE_OSN) +#define IS_VM_NIC(card) ((card)->info.guestlan) #define QETH_MPC_DIFINFO_LEN_INDICATES_LINK_TYPE 0x18 /* only the first two bytes are looked at in qeth_get_cardname_short */ @@ -529,17 +531,20 @@ struct qeth_query_switch_attributes { __u8 reserved3[8]; }; +#define QETH_SETADP_FLAGS_VIRTUAL_MAC 0x80 /* for CHANGE_ADDR_READ_MAC */ + struct qeth_ipacmd_setadpparms_hdr { - __u32 supp_hw_cmds; - __u32 reserved1; - __u16 cmdlength; - __u16 reserved2; - __u32 command_code; - __u16 return_code; - __u8 used_total; - __u8 seq_no; - __u32 reserved3; -} __attribute__ ((packed)); + u32 supp_hw_cmds; + u32 reserved1; + u16 cmdlength; + u16 reserved2; + u32 command_code; + u16 return_code; + u8 used_total; + u8 seq_no; + u8 flags; + u8 reserved3[3]; +}; struct qeth_ipacmd_setadpparms { struct 
qeth_ipacmd_setadpparms_hdr hdr; @@ -828,10 +833,9 @@ enum qeth_ipa_arp_return_codes { extern const char *qeth_get_ipa_msg(enum qeth_ipa_return_codes rc); extern const char *qeth_get_ipa_cmd_name(enum qeth_ipa_cmds cmd); -#define QETH_SETASS_BASE_LEN (sizeof(struct qeth_ipacmd_hdr) + \ - sizeof(struct qeth_ipacmd_setassparms_hdr)) -#define QETH_IPA_ARP_DATA_POS(buffer) (buffer + IPA_PDU_HEADER_SIZE + \ - QETH_SETASS_BASE_LEN) +#define QETH_SETASS_BASE_LEN (IPA_PDU_HEADER_SIZE + \ + sizeof(struct qeth_ipacmd_hdr) + \ + sizeof(struct qeth_ipacmd_setassparms_hdr)) #define QETH_SETADP_BASE_LEN (sizeof(struct qeth_ipacmd_hdr) + \ sizeof(struct qeth_ipacmd_setadpparms_hdr)) #define QETH_SNMP_SETADP_CMDLENGTH 16 diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index 2914a1a69f83..f108d4b44605 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -36,28 +36,6 @@ static void qeth_l2_vnicc_init(struct qeth_card *card); static bool qeth_l2_vnicc_recover_timeout(struct qeth_card *card, u32 vnicc, u32 *timeout); -static struct net_device *qeth_l2_netdev_by_devno(unsigned char *read_dev_no) -{ - struct qeth_card *card; - struct net_device *ndev; - __u16 temp_dev_no; - unsigned long flags; - struct ccw_dev_id read_devid; - - ndev = NULL; - memcpy(&temp_dev_no, read_dev_no, 2); - read_lock_irqsave(&qeth_core_card_list.rwlock, flags); - list_for_each_entry(card, &qeth_core_card_list.list, list) { - ccw_device_get_id(CARD_RDEV(card), &read_devid); - if (read_devid.devno == temp_dev_no) { - ndev = card->dev; - break; - } - } - read_unlock_irqrestore(&qeth_core_card_list.rwlock, flags); - return ndev; -} - static int qeth_setdelmac_makerc(struct qeth_card *card, int retcode) { int rc; @@ -461,12 +439,9 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card) /* fall back to alternative mechanism: */ } - if (card->info.type == QETH_CARD_TYPE_IQD || - card->info.type == QETH_CARD_TYPE_OSM || - card->info.type == QETH_CARD_TYPE_OSX || - card->info.guestlan) { + if (!IS_OSN(card)) { rc = qeth_setadpparms_change_macaddr(card); - if (!rc) + if (!rc && is_valid_ether_addr(card->dev->dev_addr)) goto out; QETH_DBF_MESSAGE(2, "READ_MAC Assist failed on device %x: %#x\n", CARD_DEVID(card), rc); @@ -917,7 +892,8 @@ static int qeth_l2_setup_netdev(struct qeth_card *card, bool carrier_ok) PAGE_SIZE * (QDIO_MAX_ELEMENTS_PER_BUFFER - 1)); } - qeth_l2_request_initial_mac(card); + if (!is_valid_ether_addr(card->dev->dev_addr)) + qeth_l2_request_initial_mac(card); netif_napi_add(card->dev, &card->napi, qeth_poll, QETH_NAPI_WEIGHT); rc = register_netdev(card->dev); if (!rc && carrier_ok) @@ -1031,7 +1007,7 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode) qeth_l2_set_rx_mode(card->dev); } else { rtnl_lock(); - dev_open(card->dev); + dev_open(card->dev, NULL); rtnl_unlock(); } } @@ -1288,13 +1264,16 @@ int qeth_osn_register(unsigned char *read_dev_no, struct net_device **dev, int (*data_cb)(struct sk_buff *)) { struct qeth_card *card; + char bus_id[16]; + u16 devno; - *dev = qeth_l2_netdev_by_devno(read_dev_no); - if (*dev == NULL) - return -ENODEV; - card = (*dev)->ml_priv; - if (!card) + memcpy(&devno, read_dev_no, 2); + sprintf(bus_id, "0.0.%04x", devno); + card = qeth_get_card_by_busid(bus_id); + if (!card || !IS_OSN(card)) return -ENODEV; + *dev = card->dev; + QETH_CARD_TEXT(card, 2, "osnreg"); if ((assist_cb == NULL) || (data_cb == NULL)) return -EINVAL;
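
The qeth_osn_register() rework above replaces the private card-list walk with a lookup through the driver core: the caller's raw two-byte device number is formatted into a ccw bus id and resolved via qeth_get_card_by_busid(). A minimal user-space sketch of just that formatting step, assuming a hypothetical stub in place of the kernel helper (the stub name and the example bytes are illustrative, not kernel API):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* hypothetical stand-in for qeth_get_card_by_busid() */
static void *get_card_by_busid_stub(const char *bus_id)
{
	printf("looking up %s\n", bus_id);
	return NULL;
}

static void *osn_lookup(const unsigned char *read_dev_no)
{
	char bus_id[16];
	uint16_t devno;

	/* raw two-byte copy, as in the patch; byte order follows the host */
	memcpy(&devno, read_dev_no, 2);
	/* format the canonical ccw bus id, e.g. "0.0.f5f0" */
	snprintf(bus_id, sizeof(bus_id), "0.0.%04x", devno);
	return get_card_by_busid_stub(bus_id);
}

int main(void)
{
	const unsigned char dev_no[2] = { 0xf0, 0xf5 };	/* made-up device number */

	osn_lookup(dev_no);
	return 0;
}

Resolving the bus id through the driver core is also what lets the series drop the global qeth_core_card_list and its rwlock elsewhere in this patch.

diff --git a/drivers/s390/net/qeth_l3_main.c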
b/drivers/s390/net/qeth_l3_main.c index f08b745c2007..42a7cdc59b76 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -949,9 +949,6 @@ static int qeth_l3_iqd_read_initial_mac_cb(struct qeth_card *card, if (cmd->hdr.return_code == 0) ether_addr_copy(card->dev->dev_addr, cmd->data.create_destroy_addr.unique_id); - else - eth_random_addr(card->dev->dev_addr); - return 0; } @@ -1685,21 +1682,6 @@ out_error: return 0; } -static int qeth_l3_send_ipa_arp_cmd(struct qeth_card *card, - struct qeth_cmd_buffer *iob, int len, - int (*reply_cb)(struct qeth_card *, struct qeth_reply *, - unsigned long), - void *reply_param) -{ - QETH_CARD_TEXT(card, 4, "sendarp"); - - memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); - memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), - &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); - return qeth_send_control_data(card, IPA_PDU_HEADER_SIZE + len, iob, - reply_cb, reply_param); -} - static int qeth_l3_query_arp_cache_info(struct qeth_card *card, enum qeth_prot_versions prot, struct qeth_arp_query_info *qinfo) @@ -1719,11 +1701,9 @@ static int qeth_l3_query_arp_cache_info(struct qeth_card *card, return -ENOMEM; cmd = __ipa_cmd(iob); cmd->data.setassparms.data.query_arp.request_bits = 0x000F; - cmd->data.setassparms.data.query_arp.reply_bits = 0; - cmd->data.setassparms.data.query_arp.no_entries = 0; - rc = qeth_l3_send_ipa_arp_cmd(card, iob, - QETH_SETASS_BASE_LEN+QETH_ARP_CMD_LEN, - qeth_l3_arp_query_cb, (void *)qinfo); + rc = qeth_send_control_data(card, + QETH_SETASS_BASE_LEN + QETH_ARP_CMD_LEN, + iob, qeth_l3_arp_query_cb, qinfo); if (rc) QETH_DBF_MESSAGE(2, "Error while querying ARP cache on device %x: %#x\n", CARD_DEVID(card), rc); @@ -1929,22 +1909,6 @@ static int qeth_l3_get_cast_type(struct sk_buff *skb) } } -static void qeth_l3_fill_af_iucv_hdr(struct qeth_hdr *hdr, struct sk_buff *skb, - unsigned int data_len) -{ - char daddr[16]; - - hdr->hdr.l3.id = QETH_HEADER_TYPE_LAYER3; - hdr->hdr.l3.length = data_len; - hdr->hdr.l3.flags = QETH_HDR_IPV6 | QETH_CAST_UNICAST; - - memset(daddr, 0, sizeof(daddr)); - daddr[0] = 0xfe; - daddr[1] = 0x80; - memcpy(&daddr[8], iucv_trans_hdr(skb)->destUserID, 8); - memcpy(hdr->hdr.l3.next_hop.ipv6_addr, daddr, 16); -} - static u8 qeth_l3_cast_type_to_flag(int cast_type) { if (cast_type == RTN_MULTICAST) @@ -1960,6 +1924,7 @@ static void qeth_l3_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, struct sk_buff *skb, int ipv, int cast_type, unsigned int data_len) { + struct qeth_hdr_layer3 *l3_hdr = &hdr->hdr.l3; struct vlan_ethhdr *veth = vlan_eth_hdr(skb); hdr->hdr.l3.length = data_len; @@ -1968,6 +1933,15 @@ static void qeth_l3_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, hdr->hdr.l3.id = QETH_HEADER_TYPE_L3_TSO; } else { hdr->hdr.l3.id = QETH_HEADER_TYPE_LAYER3; + + if (skb->protocol == htons(ETH_P_AF_IUCV)) { + l3_hdr->flags = QETH_HDR_IPV6 | QETH_CAST_UNICAST; + l3_hdr->next_hop.ipv6_addr.s6_addr16[0] = htons(0xfe80); + memcpy(&l3_hdr->next_hop.ipv6_addr.s6_addr32[2], + iucv_trans_hdr(skb)->destUserID, 8); + return; + } + if (skb->ip_summed == CHECKSUM_PARTIAL) { qeth_tx_csum(skb, &hdr->hdr.l3.ext_flags, ipv); /* some HW requires combined L3+L4 csum offload: */ @@ -2012,13 +1986,11 @@ static void qeth_l3_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, } else { /* IPv6 */ const struct rt6_info *rt = skb_rt6_info(skb); - const struct in6_addr *next_hop; if (rt && !ipv6_addr_any(&rt->rt6i_gateway)) - next_hop = &rt->rt6i_gateway; + l3_hdr->next_hop.ipv6_addr = 
rt->rt6i_gateway; else - next_hop = &ipv6_hdr(skb)->daddr; - memcpy(hdr->hdr.l3.next_hop.ipv6_addr, next_hop, 16); + l3_hdr->next_hop.ipv6_addr = ipv6_hdr(skb)->daddr; hdr->hdr.l3.flags |= QETH_HDR_IPV6; if (card->info.type != QETH_CARD_TYPE_IQD) @@ -2044,84 +2016,25 @@ static void qeth_l3_fixup_headers(struct sk_buff *skb) static int qeth_l3_xmit(struct qeth_card *card, struct sk_buff *skb, struct qeth_qdio_out_q *queue, int ipv, int cast_type) { - unsigned int hw_hdr_len, proto_len, frame_len, elements; unsigned char eth_hdr[ETH_HLEN]; - bool is_tso = skb_is_gso(skb); - unsigned int data_offset = 0; - struct qeth_hdr *hdr = NULL; - unsigned int hd_len = 0; - int push_len, rc; - bool is_sg; - - if (is_tso) { - hw_hdr_len = sizeof(struct qeth_hdr_tso); - proto_len = skb_transport_offset(skb) + tcp_hdrlen(skb) - - ETH_HLEN; - } else { - hw_hdr_len = sizeof(struct qeth_hdr); - proto_len = 0; - } + unsigned int hw_hdr_len; + int rc; /* re-use the L2 header area for the HW header: */ + hw_hdr_len = skb_is_gso(skb) ? sizeof(struct qeth_hdr_tso) : + sizeof(struct qeth_hdr); rc = skb_cow_head(skb, hw_hdr_len - ETH_HLEN); if (rc) return rc; skb_copy_from_linear_data(skb, eth_hdr, ETH_HLEN); skb_pull(skb, ETH_HLEN); - frame_len = skb->len; qeth_l3_fixup_headers(skb); - push_len = qeth_add_hw_header(card, skb, &hdr, hw_hdr_len, proto_len, - &elements); - if (push_len < 0) - return push_len; - if (is_tso || !push_len) { - /* HW header needs its own buffer element. */ - hd_len = hw_hdr_len + proto_len; - data_offset = push_len + proto_len; - } - memset(hdr, 0, hw_hdr_len); - - if (skb->protocol == htons(ETH_P_AF_IUCV)) { - qeth_l3_fill_af_iucv_hdr(hdr, skb, frame_len); - } else { - qeth_l3_fill_header(card, hdr, skb, ipv, cast_type, frame_len); - if (is_tso) - qeth_fill_tso_ext((struct qeth_hdr_tso *) hdr, - frame_len - proto_len, skb, - proto_len); - } - - is_sg = skb_is_nonlinear(skb); - if (IS_IQD(card)) { - rc = qeth_do_send_packet_fast(queue, skb, hdr, data_offset, - hd_len); - } else { - /* TODO: drop skb_orphan() once TX completion is fast enough */ - skb_orphan(skb); - rc = qeth_do_send_packet(card, queue, skb, hdr, data_offset, - hd_len, elements); - } - - if (!rc) { - if (card->options.performance_stats) { - card->perf_stats.buf_elements_sent += elements; - if (is_sg) - card->perf_stats.sg_skbs_sent++; - if (is_tso) { - card->perf_stats.large_send_bytes += frame_len; - card->perf_stats.large_send_cnt++; - } - } - } else { - if (!push_len) - kmem_cache_free(qeth_core_header_cache, hdr); - if (rc == -EBUSY) { - /* roll back to ETH header */ - skb_pull(skb, push_len); - skb_push(skb, ETH_HLEN); - skb_copy_to_linear_data(skb, eth_hdr, ETH_HLEN); - } + rc = qeth_xmit(card, skb, queue, ipv, cast_type, qeth_l3_fill_header); + if (rc == -EBUSY) { + /* roll back to ETH header */ + skb_push(skb, ETH_HLEN); + skb_copy_to_linear_data(skb, eth_hdr, ETH_HLEN); } return rc; } @@ -2366,9 +2279,6 @@ static int qeth_l3_setup_netdev(struct qeth_card *card, bool carrier_ok) rc = qeth_l3_iqd_read_initial_mac(card); if (rc) goto out; - - if (card->options.hsuid[0]) - memcpy(card->dev->perm_addr, card->options.hsuid, 9); } else return -ENODEV; @@ -2507,7 +2417,7 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode) __qeth_l3_open(card->dev); qeth_l3_set_rx_mode(card->dev); } else { - dev_open(card->dev); + dev_open(card->dev, NULL); } rtnl_unlock(); }
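
The AF_IUCV handling folded into qeth_l3_fill_header() above builds a link-local IPv6 next hop: the fe80 prefix in the first two bytes and the 8-byte IUCV destUserID as the low half of the address. A small portable sketch of that address construction (the user id bytes here are made up for illustration; only the byte layout matches the patch):

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct in6_addr next_hop;
	const unsigned char user_id[8] = "LNXUSER1";	/* stands in for destUserID */
	char buf[INET6_ADDRSTRLEN];

	memset(&next_hop, 0, sizeof(next_hop));
	next_hop.s6_addr[0] = 0xfe;	/* link-local prefix fe80:: */
	next_hop.s6_addr[1] = 0x80;
	memcpy(&next_hop.s6_addr[8], user_id, 8);	/* user id as interface id */

	if (inet_ntop(AF_INET6, &next_hop, buf, sizeof(buf)))
		printf("next hop: %s\n", buf);
	return 0;
}

Building the address in place is what allows the patch to delete the separate qeth_l3_fill_af_iucv_hdr() helper and route AF_IUCV traffic through the common fill-header path.

diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c index 94f4d8fe85e0..9cf30d124b9e 100644 ---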
a/drivers/s390/scsi/zfcp_aux.c +++ b/drivers/s390/scsi/zfcp_aux.c @@ -4,7 +4,7 @@ * * Module interface and handling of zfcp data structures. * - * Copyright IBM Corp. 2002, 2013 + * Copyright IBM Corp. 2002, 2017 */ /* @@ -124,6 +124,9 @@ static int __init zfcp_module_init(void) { int retval = -ENOMEM; + if (zfcp_experimental_dix) + pr_warn("DIX is enabled. It is experimental and might cause problems\n"); + zfcp_fsf_qtcb_cache = zfcp_cache_hw_align("zfcp_fsf_qtcb", sizeof(struct fsf_qtcb)); if (!zfcp_fsf_qtcb_cache) @@ -248,43 +251,36 @@ static int zfcp_allocate_low_mem_buffers(struct zfcp_adapter *adapter) static void zfcp_free_low_mem_buffers(struct zfcp_adapter *adapter) { - if (adapter->pool.erp_req) - mempool_destroy(adapter->pool.erp_req); - if (adapter->pool.scsi_req) - mempool_destroy(adapter->pool.scsi_req); - if (adapter->pool.scsi_abort) - mempool_destroy(adapter->pool.scsi_abort); - if (adapter->pool.qtcb_pool) - mempool_destroy(adapter->pool.qtcb_pool); - if (adapter->pool.status_read_req) - mempool_destroy(adapter->pool.status_read_req); - if (adapter->pool.sr_data) - mempool_destroy(adapter->pool.sr_data); - if (adapter->pool.gid_pn) - mempool_destroy(adapter->pool.gid_pn); + mempool_destroy(adapter->pool.erp_req); + mempool_destroy(adapter->pool.scsi_req); + mempool_destroy(adapter->pool.scsi_abort); + mempool_destroy(adapter->pool.qtcb_pool); + mempool_destroy(adapter->pool.status_read_req); + mempool_destroy(adapter->pool.sr_data); + mempool_destroy(adapter->pool.gid_pn); } /** * zfcp_status_read_refill - refill the long running status_read_requests * @adapter: ptr to struct zfcp_adapter for which the buffers should be refilled * - * Returns: 0 on success, 1 otherwise - * - * if there are 16 or more status_read requests missing an adapter_reopen - * is triggered + * Return: + * * 0 on success meaning at least one status read is pending + * * 1 if posting failed and not a single status read buffer is pending, + * also triggers adapter reopen recovery */ int zfcp_status_read_refill(struct zfcp_adapter *adapter) { - while (atomic_read(&adapter->stat_miss) > 0) + while (atomic_add_unless(&adapter->stat_miss, -1, 0)) if (zfcp_fsf_status_read(adapter->qdio)) { + atomic_inc(&adapter->stat_miss); /* undo add -1 */ if (atomic_read(&adapter->stat_miss) >= adapter->stat_read_buf_num) { zfcp_erp_adapter_reopen(adapter, 0, "axsref1"); return 1; } break; - } else - atomic_dec(&adapter->stat_miss); + } return 0; } @@ -542,45 +538,3 @@ err_out: zfcp_ccw_adapter_put(adapter); return ERR_PTR(retval); } - -/** - * zfcp_sg_free_table - free memory used by scatterlists - * @sg: pointer to scatterlist - * @count: number of scatterlist which are to be free'ed - * the scatterlist are expected to reference pages always - */ -void zfcp_sg_free_table(struct scatterlist *sg, int count) -{ - int i; - - for (i = 0; i < count; i++, sg++) - if (sg) - free_page((unsigned long) sg_virt(sg)); - else - break; -} - -/** - * zfcp_sg_setup_table - init scatterlist and allocate, assign buffers - * @sg: pointer to struct scatterlist - * @count: number of scatterlists which should be assigned with buffers - * of size page - * - * Returns: 0 on success, -ENOMEM otherwise - */ -int zfcp_sg_setup_table(struct scatterlist *sg, int count) -{ - void *addr; - int i; - - sg_init_table(sg, count); - for (i = 0; i < count; i++, sg++) { - addr = (void *) get_zeroed_page(GFP_KERNEL); - if (!addr) { - zfcp_sg_free_table(sg, i); - return -ENOMEM; - } - sg_set_buf(sg, addr, PAGE_SIZE); - } - return 0; -}
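
The reworked refill loop above swaps a read-then-decrement pair for atomic_add_unless(&stat_miss, -1, 0): the counter is decremented only if it is still non-zero, and the decrement is explicitly undone when posting a status-read buffer fails. A minimal user-space sketch of this take-a-credit-unless-zero pattern, using C11 atomics as stand-ins for the kernel primitives (names and the always-failing post are illustrative only):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int stat_miss = 16;	/* status-read buffers still to be posted */
static const int stat_read_buf_num = 16;

/* stand-in for zfcp_fsf_status_read(); nonzero return means posting failed */
static int post_status_read(void)
{
	return 1;
}

/* mirrors the kernel's atomic_add_unless(v, -1, 0) */
static int dec_unless_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old != 0)
		if (atomic_compare_exchange_weak(v, &old, old - 1))
			return 1;	/* took one credit */
	return 0;			/* counter was already zero */
}

static int status_read_refill(void)
{
	while (dec_unless_zero(&stat_miss))
		if (post_status_read()) {
			atomic_fetch_add(&stat_miss, 1);	/* undo the take */
			if (atomic_load(&stat_miss) >= stat_read_buf_num) {
				puts("escalate: reopen the adapter");
				return 1;
			}
			break;
		}
	return 0;
}

int main(void)
{
	return status_read_refill();
}

Unlike the old loop, a racing second caller can no longer observe stat_miss dropping below zero between the read and the decrement.

diff --git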
a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 3b368fcf13f4..dccdb41bed8c 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -63,7 +63,8 @@ void zfcp_dbf_pl_write(struct zfcp_dbf *dbf, void *data, u16 length, char *area, /** * zfcp_dbf_hba_fsf_res - trace event for fsf responses - * @tag: tag indicating which kind of unsolicited status has been received + * @tag: tag indicating which kind of FSF response has been received + * @level: trace level to be used for event * @req: request for which a response was received */ void zfcp_dbf_hba_fsf_res(char *tag, int level, struct zfcp_fsf_req *req) @@ -81,8 +82,8 @@ void zfcp_dbf_hba_fsf_res(char *tag, int level, struct zfcp_fsf_req *req) rec->id = ZFCP_DBF_HBA_RES; rec->fsf_req_id = req->req_id; rec->fsf_req_status = req->status; - rec->fsf_cmd = req->fsf_command; - rec->fsf_seq_no = req->seq_no; + rec->fsf_cmd = q_head->fsf_command; + rec->fsf_seq_no = q_pref->req_seq_no; rec->u.res.req_issued = req->issued; rec->u.res.prot_status = q_pref->prot_status; rec->u.res.fsf_status = q_head->fsf_status; @@ -94,7 +95,7 @@ void zfcp_dbf_hba_fsf_res(char *tag, int level, struct zfcp_fsf_req *req) memcpy(rec->u.res.fsf_status_qual, &q_head->fsf_status_qual, FSF_STATUS_QUALIFIER_SIZE); - if (req->fsf_command != FSF_QTCB_FCP_CMND) { + if (q_head->fsf_command != FSF_QTCB_FCP_CMND) { rec->pl_len = q_head->log_length; zfcp_dbf_pl_write(dbf, (char *)q_pref + q_head->log_start, rec->pl_len, "fsf_res", req->req_id); @@ -127,7 +128,7 @@ void zfcp_dbf_hba_fsf_uss(char *tag, struct zfcp_fsf_req *req) rec->id = ZFCP_DBF_HBA_USS; rec->fsf_req_id = req->req_id; rec->fsf_req_status = req->status; - rec->fsf_cmd = req->fsf_command; + rec->fsf_cmd = FSF_QTCB_UNSOLICITED_STATUS; if (!srb) goto log; @@ -153,7 +154,7 @@ log: /** * zfcp_dbf_hba_bit_err - trace event for bit error conditions - * @tag: tag indicating which kind of unsolicited status has been received + * @tag: tag indicating which kind of bit error unsolicited status was received * @req: request which caused the bit_error condition */ void zfcp_dbf_hba_bit_err(char *tag, struct zfcp_fsf_req *req) @@ -174,7 +175,7 @@ void zfcp_dbf_hba_bit_err(char *tag, struct zfcp_fsf_req *req) rec->id = ZFCP_DBF_HBA_BIT; rec->fsf_req_id = req->req_id; rec->fsf_req_status = req->status; - rec->fsf_cmd = req->fsf_command; + rec->fsf_cmd = FSF_QTCB_UNSOLICITED_STATUS; memcpy(&rec->u.be, &sr_buf->payload.bit_error, sizeof(struct fsf_bit_error_payload)); @@ -224,6 +225,7 @@ void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount, /** * zfcp_dbf_hba_basic - trace event for basic adapter events + * @tag: identifier for event * @adapter: pointer to struct zfcp_adapter */ void zfcp_dbf_hba_basic(char *tag, struct zfcp_adapter *adapter) @@ -357,7 +359,7 @@ void zfcp_dbf_rec_run_lvl(int level, char *tag, struct zfcp_erp_action *erp) rec->u.run.fsf_req_id = erp->fsf_req_id; rec->u.run.rec_status = erp->status; rec->u.run.rec_step = erp->step; - rec->u.run.rec_action = erp->action; + rec->u.run.rec_action = erp->type; if (erp->sdev) rec->u.run.rec_count = @@ -478,7 +480,8 @@ out: /** * zfcp_dbf_san_req - trace event for issued SAN request * @tag: identifier for event - * @fsf_req: request containing issued CT data + * @fsf: request containing issued CT or ELS data + * @d_id: N_Port_ID where SAN request is sent to * d_id: destination ID */ void zfcp_dbf_san_req(char *tag, struct zfcp_fsf_req *fsf, u32 d_id) @@ -560,7 +563,7 @@ static u16 
zfcp_dbf_san_res_cap_len_if_gpn_ft(char *tag, /** * zfcp_dbf_san_res - trace event for received SAN request * @tag: identifier for event - * @fsf_req: request containing issued CT data + * @fsf: request containing received CT or ELS data */ void zfcp_dbf_san_res(char *tag, struct zfcp_fsf_req *fsf) { @@ -580,7 +583,7 @@ void zfcp_dbf_san_res(char *tag, struct zfcp_fsf_req *fsf) /** * zfcp_dbf_san_in_els - trace event for incoming ELS * @tag: identifier for event - * @fsf_req: request containing issued CT data + * @fsf: request containing received ELS data */ void zfcp_dbf_san_in_els(char *tag, struct zfcp_fsf_req *fsf) { diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h index d116c07ed77a..900c779cc39b 100644 --- a/drivers/s390/scsi/zfcp_dbf.h +++ b/drivers/s390/scsi/zfcp_dbf.h @@ -42,7 +42,8 @@ struct zfcp_dbf_rec_trigger { * @fsf_req_id: request id for fsf requests * @rec_status: status of the fsf request * @rec_step: current step of the recovery action - * rec_count: recovery counter + * @rec_action: ERP action type + * @rec_count: recoveries including retries for particular @rec_action */ struct zfcp_dbf_rec_running { u64 fsf_req_id; @@ -72,6 +73,7 @@ enum zfcp_dbf_rec_id { * @adapter_status: current status of the adapter * @port_status: current status of the port * @lun_status: current status of the lun + * @u: record type specific data * @u.trig: structure zfcp_dbf_rec_trigger * @u.run: structure zfcp_dbf_rec_running */ @@ -126,6 +128,8 @@ struct zfcp_dbf_san { * @prot_status_qual: protocol status qualifier * @fsf_status: fsf status * @fsf_status_qual: fsf status qualifier + * @port_handle: handle for port + * @lun_handle: handle for LUN */ struct zfcp_dbf_hba_res { u64 req_issued; @@ -158,6 +162,7 @@ struct zfcp_dbf_hba_uss { * @ZFCP_DBF_HBA_RES: response trace record * @ZFCP_DBF_HBA_USS: unsolicited status trace record * @ZFCP_DBF_HBA_BIT: bit error trace record + * @ZFCP_DBF_HBA_BASIC: basic adapter event, only trace tag, no other data */ enum zfcp_dbf_hba_id { ZFCP_DBF_HBA_RES = 1, @@ -176,6 +181,9 @@ enum zfcp_dbf_hba_id { * @fsf_seq_no: fsf sequence number * @pl_len: length of payload stored as zfcp_dbf_pay * @u: record type specific data + * @u.res: data for fsf responses + * @u.uss: data for unsolicited status buffer + * @u.be: data for bit error unsolicited status buffer */ struct zfcp_dbf_hba { u8 id; @@ -339,8 +347,8 @@ void zfcp_dbf_hba_fsf_response(struct zfcp_fsf_req *req) zfcp_dbf_hba_fsf_resp_suppress(req) ? 5 : 1, req); - } else if ((req->fsf_command == FSF_QTCB_OPEN_PORT_WITH_DID) || - (req->fsf_command == FSF_QTCB_OPEN_LUN)) { + } else if ((qtcb->header.fsf_command == FSF_QTCB_OPEN_PORT_WITH_DID) || + (qtcb->header.fsf_command == FSF_QTCB_OPEN_LUN)) { zfcp_dbf_hba_fsf_resp("fs_open", 4, req); } else if (qtcb->header.log_length) { diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 3396a47721a7..87d2f47a6990 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -4,7 +4,7 @@ * * Global definitions for the zfcp device driver. * - * Copyright IBM Corp. 2002, 2010 + * Copyright IBM Corp. 
2002, 2017 */ #ifndef ZFCP_DEF_H @@ -41,24 +41,16 @@ #include "zfcp_fc.h" #include "zfcp_qdio.h" -struct zfcp_reqlist; - -/********************* SCSI SPECIFIC DEFINES *********************************/ -#define ZFCP_SCSI_ER_TIMEOUT (10*HZ) - /********************* FSF SPECIFIC DEFINES *********************************/ /* ATTENTION: value must not be used by hardware */ #define FSF_QTCB_UNSOLICITED_STATUS 0x6305 -/* timeout value for "default timer" for fsf requests */ -#define ZFCP_FSF_REQUEST_TIMEOUT (60*HZ) - /*************** ADAPTER/PORT/UNIT AND FSF_REQ STATUS FLAGS ******************/ /* - * Note, the leftmost status byte is common among adapter, port - * and unit + * Note, the leftmost 12 status bits (3 nibbles) are common among adapter, port + * and unit. This is a mask for bitwise 'and' with status values. */ #define ZFCP_COMMON_FLAGS 0xfff00000 @@ -97,49 +89,60 @@ struct zfcp_reqlist; /************************* STRUCTURE DEFINITIONS *****************************/ -struct zfcp_fsf_req; +/** + * enum zfcp_erp_act_type - Type of ERP action object. + * @ZFCP_ERP_ACTION_REOPEN_LUN: LUN recovery. + * @ZFCP_ERP_ACTION_REOPEN_PORT: Port recovery. + * @ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: Forced port recovery. + * @ZFCP_ERP_ACTION_REOPEN_ADAPTER: Adapter recovery. + * + * Values must fit into u8 because of code dependencies: + * zfcp_dbf_rec_trig(), &zfcp_dbf_rec_trigger.want, &zfcp_dbf_rec_trigger.need; + * zfcp_dbf_rec_run_lvl(), zfcp_dbf_rec_run(), &zfcp_dbf_rec_running.rec_action. + */ +enum zfcp_erp_act_type { + ZFCP_ERP_ACTION_REOPEN_LUN = 1, + ZFCP_ERP_ACTION_REOPEN_PORT = 2, + ZFCP_ERP_ACTION_REOPEN_PORT_FORCED = 3, + ZFCP_ERP_ACTION_REOPEN_ADAPTER = 4, +}; -/* holds various memory pools of an adapter */ -struct zfcp_adapter_mempool { - mempool_t *erp_req; - mempool_t *gid_pn_req; - mempool_t *scsi_req; - mempool_t *scsi_abort; - mempool_t *status_read_req; - mempool_t *sr_data; - mempool_t *gid_pn; - mempool_t *qtcb_pool; +/* + * Values must fit into u16 because of code dependencies: + * zfcp_dbf_rec_run_lvl(), zfcp_dbf_rec_run(), zfcp_dbf_rec_run_wka(), + * &zfcp_dbf_rec_running.rec_step. 
+ */ +enum zfcp_erp_steps { + ZFCP_ERP_STEP_UNINITIALIZED = 0x0000, + ZFCP_ERP_STEP_PHYS_PORT_CLOSING = 0x0010, + ZFCP_ERP_STEP_PORT_CLOSING = 0x0100, + ZFCP_ERP_STEP_PORT_OPENING = 0x0800, + ZFCP_ERP_STEP_LUN_CLOSING = 0x1000, + ZFCP_ERP_STEP_LUN_OPENING = 0x2000, }; struct zfcp_erp_action { struct list_head list; - int action; /* requested action code */ + enum zfcp_erp_act_type type; /* requested action code */ struct zfcp_adapter *adapter; /* device which should be recovered */ struct zfcp_port *port; struct scsi_device *sdev; u32 status; /* recovery status */ - u32 step; /* active step of this erp action */ + enum zfcp_erp_steps step; /* active step of this erp action */ unsigned long fsf_req_id; struct timer_list timer; }; -struct fsf_latency_record { - u32 min; - u32 max; - u64 sum; -}; - -struct latency_cont { - struct fsf_latency_record channel; - struct fsf_latency_record fabric; - u64 counter; -}; - -struct zfcp_latencies { - struct latency_cont read; - struct latency_cont write; - struct latency_cont cmd; - spinlock_t lock; +/* holds various memory pools of an adapter */ +struct zfcp_adapter_mempool { + mempool_t *erp_req; + mempool_t *gid_pn_req; + mempool_t *scsi_req; + mempool_t *scsi_abort; + mempool_t *status_read_req; + mempool_t *sr_data; + mempool_t *gid_pn; + mempool_t *qtcb_pool; }; struct zfcp_adapter { @@ -220,6 +223,25 @@ struct zfcp_port { unsigned int starget_id; }; +struct zfcp_latency_record { + u32 min; + u32 max; + u64 sum; +}; + +struct zfcp_latency_cont { + struct zfcp_latency_record channel; + struct zfcp_latency_record fabric; + u64 counter; +}; + +struct zfcp_latencies { + struct zfcp_latency_cont read; + struct zfcp_latency_cont write; + struct zfcp_latency_cont cmd; + spinlock_t lock; +}; + /** * struct zfcp_unit - LUN configured via zfcp sysfs * @dev: struct device for sysfs representation and reference counting @@ -287,9 +309,7 @@ static inline u64 zfcp_scsi_dev_lun(struct scsi_device *sdev) * @qdio_req: qdio queue related values * @completion: used to signal the completion of the request * @status: status of the request - * @fsf_command: FSF command issued * @qtcb: associated QTCB - * @seq_no: sequence number of this request * @data: private data * @timer: timer data of this request * @erp_action: reference to erp action if request issued on behalf of ERP @@ -304,9 +324,7 @@ struct zfcp_fsf_req { struct zfcp_qdio_req qdio_req; struct completion completion; u32 status; - u32 fsf_command; struct fsf_qtcb *qtcb; - u32 seq_no; void *data; struct timer_list timer; struct zfcp_erp_action *erp_action; @@ -321,4 +339,9 @@ int zfcp_adapter_multi_buffer_active(struct zfcp_adapter *adapter) return atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_MB_ACT; } +static inline bool zfcp_fsf_req_is_status_read_buffer(struct zfcp_fsf_req *req) +{ + return req->qtcb == NULL; +} + #endif /* ZFCP_DEF_H */ diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index e7e6b63905e2..744a64680d5b 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -4,7 +4,7 @@ * * Error Recovery Procedures (ERP). * - * Copyright IBM Corp. 2002, 2016 + * Copyright IBM Corp. 
2002, 2017 */ #define KMSG_COMPONENT "zfcp" @@ -24,38 +24,18 @@ enum zfcp_erp_act_flags { ZFCP_STATUS_ERP_NO_REF = 0x00800000, }; -enum zfcp_erp_steps { - ZFCP_ERP_STEP_UNINITIALIZED = 0x0000, - ZFCP_ERP_STEP_PHYS_PORT_CLOSING = 0x0010, - ZFCP_ERP_STEP_PORT_CLOSING = 0x0100, - ZFCP_ERP_STEP_PORT_OPENING = 0x0800, - ZFCP_ERP_STEP_LUN_CLOSING = 0x1000, - ZFCP_ERP_STEP_LUN_OPENING = 0x2000, -}; - -/** - * enum zfcp_erp_act_type - Type of ERP action object. - * @ZFCP_ERP_ACTION_REOPEN_LUN: LUN recovery. - * @ZFCP_ERP_ACTION_REOPEN_PORT: Port recovery. - * @ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: Forced port recovery. - * @ZFCP_ERP_ACTION_REOPEN_ADAPTER: Adapter recovery. - * @ZFCP_ERP_ACTION_NONE: Eyecatcher pseudo flag to bitwise or-combine with - * either of the first four enum values. - * Used to indicate that an ERP action could not be - * set up despite a detected need for some recovery. - * @ZFCP_ERP_ACTION_FAILED: Eyecatcher pseudo flag to bitwise or-combine with - * either of the first four enum values. - * Used to indicate that ERP not needed because - * the object has ZFCP_STATUS_COMMON_ERP_FAILED. +/* + * Eyecatcher pseudo flag to bitwise or-combine with enum zfcp_erp_act_type. + * Used to indicate that an ERP action could not be set up despite a detected + * need for some recovery. */ -enum zfcp_erp_act_type { - ZFCP_ERP_ACTION_REOPEN_LUN = 1, - ZFCP_ERP_ACTION_REOPEN_PORT = 2, - ZFCP_ERP_ACTION_REOPEN_PORT_FORCED = 3, - ZFCP_ERP_ACTION_REOPEN_ADAPTER = 4, - ZFCP_ERP_ACTION_NONE = 0xc0, - ZFCP_ERP_ACTION_FAILED = 0xe0, -}; +#define ZFCP_ERP_ACTION_NONE 0xc0 +/* + * Eyecatcher pseudo flag to bitwise or-combine with enum zfcp_erp_act_type. + * Used to indicate that ERP not needed because the object has + * ZFCP_STATUS_COMMON_ERP_FAILED. + */ +#define ZFCP_ERP_ACTION_FAILED 0xe0 enum zfcp_erp_act_result { ZFCP_ERP_SUCCEEDED = 0, @@ -136,11 +116,11 @@ static void zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *adapter) } } -static int zfcp_erp_handle_failed(int want, struct zfcp_adapter *adapter, - struct zfcp_port *port, - struct scsi_device *sdev) +static enum zfcp_erp_act_type zfcp_erp_handle_failed( + enum zfcp_erp_act_type want, struct zfcp_adapter *adapter, + struct zfcp_port *port, struct scsi_device *sdev) { - int need = want; + enum zfcp_erp_act_type need = want; struct zfcp_scsi_dev *zsdev; switch (want) { @@ -171,19 +151,17 @@ static int zfcp_erp_handle_failed(int want, struct zfcp_adapter *adapter, adapter, ZFCP_STATUS_COMMON_ERP_FAILED); } break; - default: - need = 0; - break; } return need; } -static int zfcp_erp_required_act(int want, struct zfcp_adapter *adapter, +static enum zfcp_erp_act_type zfcp_erp_required_act(enum zfcp_erp_act_type want, + struct zfcp_adapter *adapter, struct zfcp_port *port, struct scsi_device *sdev) { - int need = want; + enum zfcp_erp_act_type need = want; int l_status, p_status, a_status; struct zfcp_scsi_dev *zfcp_sdev; @@ -230,7 +208,8 @@ static int zfcp_erp_required_act(int want, struct zfcp_adapter *adapter, return need; } -static struct zfcp_erp_action *zfcp_erp_setup_act(int need, u32 act_status, +static struct zfcp_erp_action *zfcp_erp_setup_act(enum zfcp_erp_act_type need, + u32 act_status, struct zfcp_adapter *adapter, struct zfcp_port *port, struct scsi_device *sdev) @@ -278,9 +257,6 @@ static struct zfcp_erp_action *zfcp_erp_setup_act(int need, u32 act_status, ZFCP_STATUS_COMMON_RUNNING)) act_status |= ZFCP_STATUS_ERP_CLOSE_ONLY; break; - - default: - return NULL; } WARN_ON_ONCE(erp_action->adapter != adapter); @@ -288,18 +264,19 
@@ static struct zfcp_erp_action *zfcp_erp_setup_act(int need, u32 act_status, memset(&erp_action->timer, 0, sizeof(erp_action->timer)); erp_action->step = ZFCP_ERP_STEP_UNINITIALIZED; erp_action->fsf_req_id = 0; - erp_action->action = need; + erp_action->type = need; erp_action->status = act_status; return erp_action; } -static void zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, +static void zfcp_erp_action_enqueue(enum zfcp_erp_act_type want, + struct zfcp_adapter *adapter, struct zfcp_port *port, struct scsi_device *sdev, - char *id, u32 act_status) + char *dbftag, u32 act_status) { - int need; + enum zfcp_erp_act_type need; struct zfcp_erp_action *act; need = zfcp_erp_handle_failed(want, adapter, port, sdev); @@ -327,10 +304,11 @@ static void zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, list_add_tail(&act->list, &adapter->erp_ready_head); wake_up(&adapter->erp_ready_wq); out: - zfcp_dbf_rec_trig(id, adapter, port, sdev, want, need); + zfcp_dbf_rec_trig(dbftag, adapter, port, sdev, want, need); } -void zfcp_erp_port_forced_no_port_dbf(char *id, struct zfcp_adapter *adapter, +void zfcp_erp_port_forced_no_port_dbf(char *dbftag, + struct zfcp_adapter *adapter, u64 port_name, u32 port_id) { unsigned long flags; @@ -344,29 +322,30 @@ void zfcp_erp_port_forced_no_port_dbf(char *id, struct zfcp_adapter *adapter, atomic_set(&tmpport.status, -1); /* unknown */ tmpport.wwpn = port_name; tmpport.d_id = port_id; - zfcp_dbf_rec_trig(id, adapter, &tmpport, NULL, + zfcp_dbf_rec_trig(dbftag, adapter, &tmpport, NULL, ZFCP_ERP_ACTION_REOPEN_PORT_FORCED, ZFCP_ERP_ACTION_NONE); write_unlock_irqrestore(&adapter->erp_lock, flags); } static void _zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, - int clear_mask, char *id) + int clear_mask, char *dbftag) { zfcp_erp_adapter_block(adapter, clear_mask); zfcp_scsi_schedule_rports_block(adapter); zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER, - adapter, NULL, NULL, id, 0); + adapter, NULL, NULL, dbftag, 0); } /** * zfcp_erp_adapter_reopen - Reopen adapter. * @adapter: Adapter to reopen. * @clear: Status flags to clear. - * @id: Id for debug trace event. + * @dbftag: Tag for debug trace event. */ -void zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear, char *id) +void zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear, + char *dbftag) { unsigned long flags; @@ -375,7 +354,7 @@ void zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear, char *id) write_lock_irqsave(&adapter->erp_lock, flags); zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER, adapter, - NULL, NULL, id, 0); + NULL, NULL, dbftag, 0); write_unlock_irqrestore(&adapter->erp_lock, flags); } @@ -383,25 +362,25 @@ void zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear, char *id) * zfcp_erp_adapter_shutdown - Shutdown adapter. * @adapter: Adapter to shut down. * @clear: Status flags to clear. - * @id: Id for debug trace event. + * @dbftag: Tag for debug trace event. */ void zfcp_erp_adapter_shutdown(struct zfcp_adapter *adapter, int clear, - char *id) + char *dbftag) { int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED; - zfcp_erp_adapter_reopen(adapter, clear | flags, id); + zfcp_erp_adapter_reopen(adapter, clear | flags, dbftag); } /** * zfcp_erp_port_shutdown - Shutdown port * @port: Port to shut down. * @clear: Status flags to clear. - * @id: Id for debug trace event. + * @dbftag: Tag for debug trace event. 
*/ -void zfcp_erp_port_shutdown(struct zfcp_port *port, int clear, char *id) +void zfcp_erp_port_shutdown(struct zfcp_port *port, int clear, char *dbftag) { int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED; - zfcp_erp_port_reopen(port, clear | flags, id); + zfcp_erp_port_reopen(port, clear | flags, dbftag); } static void zfcp_erp_port_block(struct zfcp_port *port, int clear) @@ -411,53 +390,55 @@ static void zfcp_erp_port_block(struct zfcp_port *port, int clear) } static void _zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear, - char *id) + char *dbftag) { zfcp_erp_port_block(port, clear); zfcp_scsi_schedule_rport_block(port); zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT_FORCED, - port->adapter, port, NULL, id, 0); + port->adapter, port, NULL, dbftag, 0); } /** * zfcp_erp_port_forced_reopen - Forced close of port and open again * @port: Port to force close and to reopen. * @clear: Status flags to clear. - * @id: Id for debug trace event. + * @dbftag: Tag for debug trace event. */ -void zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear, char *id) +void zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear, + char *dbftag) { unsigned long flags; struct zfcp_adapter *adapter = port->adapter; write_lock_irqsave(&adapter->erp_lock, flags); - _zfcp_erp_port_forced_reopen(port, clear, id); + _zfcp_erp_port_forced_reopen(port, clear, dbftag); write_unlock_irqrestore(&adapter->erp_lock, flags); } -static void _zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *id) +static void _zfcp_erp_port_reopen(struct zfcp_port *port, int clear, + char *dbftag) { zfcp_erp_port_block(port, clear); zfcp_scsi_schedule_rport_block(port); zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT, - port->adapter, port, NULL, id, 0); + port->adapter, port, NULL, dbftag, 0); } /** * zfcp_erp_port_reopen - trigger remote port recovery * @port: port to recover - * @clear_mask: flags in port status to be cleared - * @id: Id for debug trace event. + * @clear: flags in port status to be cleared + * @dbftag: Tag for debug trace event. */ -void zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *id) +void zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *dbftag) { unsigned long flags; struct zfcp_adapter *adapter = port->adapter; write_lock_irqsave(&adapter->erp_lock, flags); - _zfcp_erp_port_reopen(port, clear, id); + _zfcp_erp_port_reopen(port, clear, dbftag); write_unlock_irqrestore(&adapter->erp_lock, flags); } @@ -467,8 +448,8 @@ static void zfcp_erp_lun_block(struct scsi_device *sdev, int clear_mask) ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask); } -static void _zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *id, - u32 act_status) +static void _zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, + char *dbftag, u32 act_status) { struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); struct zfcp_adapter *adapter = zfcp_sdev->port->adapter; @@ -476,18 +457,18 @@ static void _zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *id, zfcp_erp_lun_block(sdev, clear); zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_LUN, adapter, - zfcp_sdev->port, sdev, id, act_status); + zfcp_sdev->port, sdev, dbftag, act_status); } /** * zfcp_erp_lun_reopen - initiate reopen of a LUN * @sdev: SCSI device / LUN to be reopened - * @clear_mask: specifies flags in LUN status to be cleared - * @id: Id for debug trace event. + * @clear: specifies flags in LUN status to be cleared + * @dbftag: Tag for debug trace event. 
* * Return: 0 on success, < 0 on error */ -void zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *id) +void zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *dbftag) { unsigned long flags; struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); @@ -495,7 +476,7 @@ void zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *id) struct zfcp_adapter *adapter = port->adapter; write_lock_irqsave(&adapter->erp_lock, flags); - _zfcp_erp_lun_reopen(sdev, clear, id, 0); + _zfcp_erp_lun_reopen(sdev, clear, dbftag, 0); write_unlock_irqrestore(&adapter->erp_lock, flags); } @@ -503,25 +484,25 @@ void zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *id) * zfcp_erp_lun_shutdown - Shutdown LUN * @sdev: SCSI device / LUN to shut down. * @clear: Status flags to clear. - * @id: Id for debug trace event. + * @dbftag: Tag for debug trace event. */ -void zfcp_erp_lun_shutdown(struct scsi_device *sdev, int clear, char *id) +void zfcp_erp_lun_shutdown(struct scsi_device *sdev, int clear, char *dbftag) { int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED; - zfcp_erp_lun_reopen(sdev, clear | flags, id); + zfcp_erp_lun_reopen(sdev, clear | flags, dbftag); } /** * zfcp_erp_lun_shutdown_wait - Shutdown LUN and wait for erp completion * @sdev: SCSI device / LUN to shut down. - * @id: Id for debug trace event. + * @dbftag: Tag for debug trace event. * * Do not acquire a reference for the LUN when creating the ERP * action. It is safe, because this function waits for the ERP to * complete first. This allows to shutdown the LUN, even when the SCSI * device is in the state SDEV_DEL when scsi_device_get will fail. */ -void zfcp_erp_lun_shutdown_wait(struct scsi_device *sdev, char *id) +void zfcp_erp_lun_shutdown_wait(struct scsi_device *sdev, char *dbftag) { unsigned long flags; struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); @@ -530,7 +511,7 @@ void zfcp_erp_lun_shutdown_wait(struct scsi_device *sdev, char *id) int clear = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED; write_lock_irqsave(&adapter->erp_lock, flags); - _zfcp_erp_lun_reopen(sdev, clear, id, ZFCP_STATUS_ERP_NO_REF); + _zfcp_erp_lun_reopen(sdev, clear, dbftag, ZFCP_STATUS_ERP_NO_REF); write_unlock_irqrestore(&adapter->erp_lock, flags); zfcp_erp_wait(adapter); @@ -619,7 +600,7 @@ void zfcp_erp_notify(struct zfcp_erp_action *erp_action, unsigned long set_mask) /** * zfcp_erp_timeout_handler - Trigger ERP action from timed out ERP request - * @data: ERP action (from timer data) + * @t: timer list entry embedded in zfcp FSF request */ void zfcp_erp_timeout_handler(struct timer_list *t) { @@ -644,31 +625,31 @@ static void zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action) } static void _zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter, - int clear, char *id) + int clear, char *dbftag) { struct zfcp_port *port; read_lock(&adapter->port_list_lock); list_for_each_entry(port, &adapter->port_list, list) - _zfcp_erp_port_reopen(port, clear, id); + _zfcp_erp_port_reopen(port, clear, dbftag); read_unlock(&adapter->port_list_lock); } static void _zfcp_erp_lun_reopen_all(struct zfcp_port *port, int clear, - char *id) + char *dbftag) { struct scsi_device *sdev; spin_lock(port->adapter->scsi_host->host_lock); __shost_for_each_device(sdev, port->adapter->scsi_host) if (sdev_to_zfcp(sdev)->port == port) - _zfcp_erp_lun_reopen(sdev, clear, id, 0); + _zfcp_erp_lun_reopen(sdev, clear, dbftag, 0); spin_unlock(port->adapter->scsi_host->host_lock); } static void 
zfcp_erp_strategy_followup_failed(struct zfcp_erp_action *act) { - switch (act->action) { + switch (act->type) { case ZFCP_ERP_ACTION_REOPEN_ADAPTER: _zfcp_erp_adapter_reopen(act->adapter, 0, "ersff_1"); break; @@ -686,7 +667,7 @@ static void zfcp_erp_strategy_followup_failed(struct zfcp_erp_action *act) static void zfcp_erp_strategy_followup_success(struct zfcp_erp_action *act) { - switch (act->action) { + switch (act->type) { case ZFCP_ERP_ACTION_REOPEN_ADAPTER: _zfcp_erp_port_reopen_all(act->adapter, 0, "ersfs_1"); break; @@ -696,6 +677,9 @@ static void zfcp_erp_strategy_followup_success(struct zfcp_erp_action *act) case ZFCP_ERP_ACTION_REOPEN_PORT: _zfcp_erp_lun_reopen_all(act->port, 0, "ersfs_3"); break; + case ZFCP_ERP_ACTION_REOPEN_LUN: + /* NOP */ + break; } } @@ -723,7 +707,8 @@ static void zfcp_erp_enqueue_ptp_port(struct zfcp_adapter *adapter) _zfcp_erp_port_reopen(port, 0, "ereptp1"); } -static int zfcp_erp_adapter_strat_fsf_xconf(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_adapter_strat_fsf_xconf( + struct zfcp_erp_action *erp_action) { int retries; int sleep = 1; @@ -768,7 +753,8 @@ static int zfcp_erp_adapter_strat_fsf_xconf(struct zfcp_erp_action *erp_action) return ZFCP_ERP_SUCCEEDED; } -static int zfcp_erp_adapter_strategy_open_fsf_xport(struct zfcp_erp_action *act) +static enum zfcp_erp_act_result zfcp_erp_adapter_strategy_open_fsf_xport( + struct zfcp_erp_action *act) { int ret; struct zfcp_adapter *adapter = act->adapter; @@ -793,7 +779,8 @@ static int zfcp_erp_adapter_strategy_open_fsf_xport(struct zfcp_erp_action *act) return ZFCP_ERP_SUCCEEDED; } -static int zfcp_erp_adapter_strategy_open_fsf(struct zfcp_erp_action *act) +static enum zfcp_erp_act_result zfcp_erp_adapter_strategy_open_fsf( + struct zfcp_erp_action *act) { if (zfcp_erp_adapter_strat_fsf_xconf(act) == ZFCP_ERP_FAILED) return ZFCP_ERP_FAILED; @@ -832,7 +819,8 @@ static void zfcp_erp_adapter_strategy_close(struct zfcp_erp_action *act) ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status); } -static int zfcp_erp_adapter_strategy_open(struct zfcp_erp_action *act) +static enum zfcp_erp_act_result zfcp_erp_adapter_strategy_open( + struct zfcp_erp_action *act) { struct zfcp_adapter *adapter = act->adapter; @@ -853,7 +841,8 @@ static int zfcp_erp_adapter_strategy_open(struct zfcp_erp_action *act) return ZFCP_ERP_SUCCEEDED; } -static int zfcp_erp_adapter_strategy(struct zfcp_erp_action *act) +static enum zfcp_erp_act_result zfcp_erp_adapter_strategy( + struct zfcp_erp_action *act) { struct zfcp_adapter *adapter = act->adapter; @@ -871,7 +860,8 @@ static int zfcp_erp_adapter_strategy(struct zfcp_erp_action *act) return ZFCP_ERP_SUCCEEDED; } -static int zfcp_erp_port_forced_strategy_close(struct zfcp_erp_action *act) +static enum zfcp_erp_act_result zfcp_erp_port_forced_strategy_close( + struct zfcp_erp_action *act) { int retval; @@ -885,7 +875,8 @@ static int zfcp_erp_port_forced_strategy_close(struct zfcp_erp_action *act) return ZFCP_ERP_CONTINUES; } -static int zfcp_erp_port_forced_strategy(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_port_forced_strategy( + struct zfcp_erp_action *erp_action) { struct zfcp_port *port = erp_action->port; int status = atomic_read(&port->status); @@ -901,11 +892,19 @@ static int zfcp_erp_port_forced_strategy(struct zfcp_erp_action *erp_action) case ZFCP_ERP_STEP_PHYS_PORT_CLOSING: if (!(status & ZFCP_STATUS_PORT_PHYS_OPEN)) return ZFCP_ERP_SUCCEEDED; + break; + case ZFCP_ERP_STEP_PORT_CLOSING: + case 
ZFCP_ERP_STEP_PORT_OPENING: + case ZFCP_ERP_STEP_LUN_CLOSING: + case ZFCP_ERP_STEP_LUN_OPENING: + /* NOP */ + break; } return ZFCP_ERP_FAILED; } -static int zfcp_erp_port_strategy_close(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_port_strategy_close( + struct zfcp_erp_action *erp_action) { int retval; @@ -918,7 +917,8 @@ static int zfcp_erp_port_strategy_close(struct zfcp_erp_action *erp_action) return ZFCP_ERP_CONTINUES; } -static int zfcp_erp_port_strategy_open_port(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_port_strategy_open_port( + struct zfcp_erp_action *erp_action) { int retval; @@ -944,7 +944,8 @@ static int zfcp_erp_open_ptp_port(struct zfcp_erp_action *act) return zfcp_erp_port_strategy_open_port(act); } -static int zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *act) +static enum zfcp_erp_act_result zfcp_erp_port_strategy_open_common( + struct zfcp_erp_action *act) { struct zfcp_adapter *adapter = act->adapter; struct zfcp_port *port = act->port; @@ -975,12 +976,18 @@ static int zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *act) port->d_id = 0; return ZFCP_ERP_FAILED; } - /* fall through otherwise */ + /* no early return otherwise, continue after switch case */ + break; + case ZFCP_ERP_STEP_LUN_CLOSING: + case ZFCP_ERP_STEP_LUN_OPENING: + /* NOP */ + break; } return ZFCP_ERP_FAILED; } -static int zfcp_erp_port_strategy(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_port_strategy( + struct zfcp_erp_action *erp_action) { struct zfcp_port *port = erp_action->port; int p_status = atomic_read(&port->status); @@ -999,6 +1006,12 @@ static int zfcp_erp_port_strategy(struct zfcp_erp_action *erp_action) if (p_status & ZFCP_STATUS_COMMON_OPEN) return ZFCP_ERP_FAILED; break; + case ZFCP_ERP_STEP_PHYS_PORT_CLOSING: + case ZFCP_ERP_STEP_PORT_OPENING: + case ZFCP_ERP_STEP_LUN_CLOSING: + case ZFCP_ERP_STEP_LUN_OPENING: + /* NOP */ + break; } close_init_done: @@ -1016,7 +1029,8 @@ static void zfcp_erp_lun_strategy_clearstati(struct scsi_device *sdev) &zfcp_sdev->status); } -static int zfcp_erp_lun_strategy_close(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_lun_strategy_close( + struct zfcp_erp_action *erp_action) { int retval = zfcp_fsf_close_lun(erp_action); if (retval == -ENOMEM) @@ -1027,7 +1041,8 @@ static int zfcp_erp_lun_strategy_close(struct zfcp_erp_action *erp_action) return ZFCP_ERP_CONTINUES; } -static int zfcp_erp_lun_strategy_open(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_lun_strategy_open( + struct zfcp_erp_action *erp_action) { int retval = zfcp_fsf_open_lun(erp_action); if (retval == -ENOMEM) @@ -1038,7 +1053,8 @@ static int zfcp_erp_lun_strategy_open(struct zfcp_erp_action *erp_action) return ZFCP_ERP_CONTINUES; } -static int zfcp_erp_lun_strategy(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_lun_strategy( + struct zfcp_erp_action *erp_action) { struct scsi_device *sdev = erp_action->sdev; struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); @@ -1048,7 +1064,8 @@ static int zfcp_erp_lun_strategy(struct zfcp_erp_action *erp_action) zfcp_erp_lun_strategy_clearstati(sdev); if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_OPEN) return zfcp_erp_lun_strategy_close(erp_action); - /* already closed, fall through */ + /* already closed */ + /* fall through */ case ZFCP_ERP_STEP_LUN_CLOSING: if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_OPEN) 
return ZFCP_ERP_FAILED; @@ -1059,11 +1076,18 @@ static int zfcp_erp_lun_strategy(struct zfcp_erp_action *erp_action) case ZFCP_ERP_STEP_LUN_OPENING: if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_OPEN) return ZFCP_ERP_SUCCEEDED; + break; + case ZFCP_ERP_STEP_PHYS_PORT_CLOSING: + case ZFCP_ERP_STEP_PORT_CLOSING: + case ZFCP_ERP_STEP_PORT_OPENING: + /* NOP */ + break; } return ZFCP_ERP_FAILED; } -static int zfcp_erp_strategy_check_lun(struct scsi_device *sdev, int result) +static enum zfcp_erp_act_result zfcp_erp_strategy_check_lun( + struct scsi_device *sdev, enum zfcp_erp_act_result result) { struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); @@ -1084,6 +1108,12 @@ static int zfcp_erp_strategy_check_lun(struct scsi_device *sdev, int result) ZFCP_STATUS_COMMON_ERP_FAILED); } break; + case ZFCP_ERP_CONTINUES: + case ZFCP_ERP_EXIT: + case ZFCP_ERP_DISMISSED: + case ZFCP_ERP_NOMEM: + /* NOP */ + break; } if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_ERP_FAILED) { @@ -1093,7 +1123,8 @@ static int zfcp_erp_strategy_check_lun(struct scsi_device *sdev, int result) return result; } -static int zfcp_erp_strategy_check_port(struct zfcp_port *port, int result) +static enum zfcp_erp_act_result zfcp_erp_strategy_check_port( + struct zfcp_port *port, enum zfcp_erp_act_result result) { switch (result) { case ZFCP_ERP_SUCCEEDED : @@ -1115,6 +1146,12 @@ static int zfcp_erp_strategy_check_port(struct zfcp_port *port, int result) ZFCP_STATUS_COMMON_ERP_FAILED); } break; + case ZFCP_ERP_CONTINUES: + case ZFCP_ERP_EXIT: + case ZFCP_ERP_DISMISSED: + case ZFCP_ERP_NOMEM: + /* NOP */ + break; } if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED) { @@ -1124,8 +1161,8 @@ static int zfcp_erp_strategy_check_port(struct zfcp_port *port, int result) return result; } -static int zfcp_erp_strategy_check_adapter(struct zfcp_adapter *adapter, - int result) +static enum zfcp_erp_act_result zfcp_erp_strategy_check_adapter( + struct zfcp_adapter *adapter, enum zfcp_erp_act_result result) { switch (result) { case ZFCP_ERP_SUCCEEDED : @@ -1143,6 +1180,12 @@ static int zfcp_erp_strategy_check_adapter(struct zfcp_adapter *adapter, ZFCP_STATUS_COMMON_ERP_FAILED); } break; + case ZFCP_ERP_CONTINUES: + case ZFCP_ERP_EXIT: + case ZFCP_ERP_DISMISSED: + case ZFCP_ERP_NOMEM: + /* NOP */ + break; } if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_ERP_FAILED) { @@ -1152,14 +1195,14 @@ static int zfcp_erp_strategy_check_adapter(struct zfcp_adapter *adapter, return result; } -static int zfcp_erp_strategy_check_target(struct zfcp_erp_action *erp_action, - int result) +static enum zfcp_erp_act_result zfcp_erp_strategy_check_target( + struct zfcp_erp_action *erp_action, enum zfcp_erp_act_result result) { struct zfcp_adapter *adapter = erp_action->adapter; struct zfcp_port *port = erp_action->port; struct scsi_device *sdev = erp_action->sdev; - switch (erp_action->action) { + switch (erp_action->type) { case ZFCP_ERP_ACTION_REOPEN_LUN: result = zfcp_erp_strategy_check_lun(sdev, result); @@ -1192,16 +1235,17 @@ static int zfcp_erp_strat_change_det(atomic_t *target_status, u32 erp_status) return 0; } -static int zfcp_erp_strategy_statechange(struct zfcp_erp_action *act, int ret) +static enum zfcp_erp_act_result zfcp_erp_strategy_statechange( + struct zfcp_erp_action *act, enum zfcp_erp_act_result result) { - int action = act->action; + enum zfcp_erp_act_type type = act->type; struct zfcp_adapter *adapter = act->adapter; struct zfcp_port *port = act->port; struct scsi_device *sdev = act->sdev; struct 
zfcp_scsi_dev *zfcp_sdev; u32 erp_status = act->status; - switch (action) { + switch (type) { case ZFCP_ERP_ACTION_REOPEN_ADAPTER: if (zfcp_erp_strat_change_det(&adapter->status, erp_status)) { _zfcp_erp_adapter_reopen(adapter, @@ -1231,7 +1275,7 @@ static int zfcp_erp_strategy_statechange(struct zfcp_erp_action *act, int ret) } break; } - return ret; + return result; } static void zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action) @@ -1248,7 +1292,7 @@ static void zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action) list_del(&erp_action->list); zfcp_dbf_rec_run("eractd1", erp_action); - switch (erp_action->action) { + switch (erp_action->type) { case ZFCP_ERP_ACTION_REOPEN_LUN: zfcp_sdev = sdev_to_zfcp(erp_action->sdev); atomic_andnot(ZFCP_STATUS_COMMON_ERP_INUSE, @@ -1324,13 +1368,14 @@ static void zfcp_erp_try_rport_unblock(struct zfcp_port *port) write_unlock_irqrestore(&adapter->erp_lock, flags); } -static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result) +static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, + enum zfcp_erp_act_result result) { struct zfcp_adapter *adapter = act->adapter; struct zfcp_port *port = act->port; struct scsi_device *sdev = act->sdev; - switch (act->action) { + switch (act->type) { case ZFCP_ERP_ACTION_REOPEN_LUN: if (!(act->status & ZFCP_STATUS_ERP_NO_REF)) scsi_device_put(sdev); @@ -1364,9 +1409,10 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result) } } -static int zfcp_erp_strategy_do_action(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_strategy_do_action( + struct zfcp_erp_action *erp_action) { - switch (erp_action->action) { + switch (erp_action->type) { case ZFCP_ERP_ACTION_REOPEN_ADAPTER: return zfcp_erp_adapter_strategy(erp_action); case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: @@ -1379,9 +1425,10 @@ static int zfcp_erp_strategy_do_action(struct zfcp_erp_action *erp_action) return ZFCP_ERP_FAILED; } -static int zfcp_erp_strategy(struct zfcp_erp_action *erp_action) +static enum zfcp_erp_act_result zfcp_erp_strategy( + struct zfcp_erp_action *erp_action) { - int retval; + enum zfcp_erp_act_result result; unsigned long flags; struct zfcp_adapter *adapter = erp_action->adapter; @@ -1392,12 +1439,12 @@ static int zfcp_erp_strategy(struct zfcp_erp_action *erp_action) if (erp_action->status & ZFCP_STATUS_ERP_DISMISSED) { zfcp_erp_action_dequeue(erp_action); - retval = ZFCP_ERP_DISMISSED; + result = ZFCP_ERP_DISMISSED; goto unlock; } if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT) { - retval = ZFCP_ERP_FAILED; + result = ZFCP_ERP_FAILED; goto check_target; } @@ -1405,13 +1452,13 @@ static int zfcp_erp_strategy(struct zfcp_erp_action *erp_action) /* no lock to allow for blocking operations */ write_unlock_irqrestore(&adapter->erp_lock, flags); - retval = zfcp_erp_strategy_do_action(erp_action); + result = zfcp_erp_strategy_do_action(erp_action); write_lock_irqsave(&adapter->erp_lock, flags); if (erp_action->status & ZFCP_STATUS_ERP_DISMISSED) - retval = ZFCP_ERP_CONTINUES; + result = ZFCP_ERP_CONTINUES; - switch (retval) { + switch (result) { case ZFCP_ERP_NOMEM: if (!(erp_action->status & ZFCP_STATUS_ERP_LOWMEM)) { ++adapter->erp_low_mem_count; @@ -1421,7 +1468,7 @@ static int zfcp_erp_strategy(struct zfcp_erp_action *erp_action) _zfcp_erp_adapter_reopen(adapter, 0, "erstgy1"); else { zfcp_erp_strategy_memwait(erp_action); - retval = ZFCP_ERP_CONTINUES; + result = ZFCP_ERP_CONTINUES; } goto unlock; @@ -1431,27 +1478,33 @@ static int 
zfcp_erp_strategy(struct zfcp_erp_action *erp_action) erp_action->status &= ~ZFCP_STATUS_ERP_LOWMEM; } goto unlock; + case ZFCP_ERP_SUCCEEDED: + case ZFCP_ERP_FAILED: + case ZFCP_ERP_EXIT: + case ZFCP_ERP_DISMISSED: + /* NOP */ + break; } check_target: - retval = zfcp_erp_strategy_check_target(erp_action, retval); + result = zfcp_erp_strategy_check_target(erp_action, result); zfcp_erp_action_dequeue(erp_action); - retval = zfcp_erp_strategy_statechange(erp_action, retval); - if (retval == ZFCP_ERP_EXIT) + result = zfcp_erp_strategy_statechange(erp_action, result); + if (result == ZFCP_ERP_EXIT) goto unlock; - if (retval == ZFCP_ERP_SUCCEEDED) + if (result == ZFCP_ERP_SUCCEEDED) zfcp_erp_strategy_followup_success(erp_action); - if (retval == ZFCP_ERP_FAILED) + if (result == ZFCP_ERP_FAILED) zfcp_erp_strategy_followup_failed(erp_action); unlock: write_unlock_irqrestore(&adapter->erp_lock, flags); - if (retval != ZFCP_ERP_CONTINUES) - zfcp_erp_action_cleanup(erp_action, retval); + if (result != ZFCP_ERP_CONTINUES) + zfcp_erp_action_cleanup(erp_action, result); kref_put(&adapter->ref, zfcp_adapter_release); - return retval; + return result; } static int zfcp_erp_thread(void *data) @@ -1489,7 +1542,7 @@ static int zfcp_erp_thread(void *data) * zfcp_erp_thread_setup - Start ERP thread for adapter * @adapter: Adapter to start the ERP thread for * - * Returns 0 on success or error code from kernel_thread() + * Return: 0 on success, or error code from kthread_run(). */ int zfcp_erp_thread_setup(struct zfcp_adapter *adapter) { @@ -1694,11 +1747,11 @@ void zfcp_erp_clear_lun_status(struct scsi_device *sdev, u32 mask) /** * zfcp_erp_adapter_reset_sync() - Really reopen adapter and wait. * @adapter: Pointer to zfcp_adapter to reopen. - * @id: Trace tag string of length %ZFCP_DBF_TAG_LEN. + * @dbftag: Trace tag string of length %ZFCP_DBF_TAG_LEN. 
*/ -void zfcp_erp_adapter_reset_sync(struct zfcp_adapter *adapter, char *id) +void zfcp_erp_adapter_reset_sync(struct zfcp_adapter *adapter, char *dbftag) { zfcp_erp_set_adapter_status(adapter, ZFCP_STATUS_COMMON_RUNNING); - zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, id); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, dbftag); zfcp_erp_wait(adapter); } diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h index bd0c5a9f04cb..3fce47b0b21b 100644 --- a/drivers/s390/scsi/zfcp_ext.h +++ b/drivers/s390/scsi/zfcp_ext.h @@ -59,14 +59,15 @@ extern void zfcp_dbf_scsi_eh(char *tag, struct zfcp_adapter *adapter, /* zfcp_erp.c */ extern void zfcp_erp_set_adapter_status(struct zfcp_adapter *, u32); extern void zfcp_erp_clear_adapter_status(struct zfcp_adapter *, u32); -extern void zfcp_erp_port_forced_no_port_dbf(char *id, +extern void zfcp_erp_port_forced_no_port_dbf(char *dbftag, struct zfcp_adapter *adapter, u64 port_name, u32 port_id); extern void zfcp_erp_adapter_reopen(struct zfcp_adapter *, int, char *); extern void zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int, char *); extern void zfcp_erp_set_port_status(struct zfcp_port *, u32); extern void zfcp_erp_clear_port_status(struct zfcp_port *, u32); -extern void zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *id); +extern void zfcp_erp_port_reopen(struct zfcp_port *port, int clear, + char *dbftag); extern void zfcp_erp_port_shutdown(struct zfcp_port *, int, char *); extern void zfcp_erp_port_forced_reopen(struct zfcp_port *, int, char *); extern void zfcp_erp_set_lun_status(struct scsi_device *, u32); @@ -79,7 +80,8 @@ extern void zfcp_erp_thread_kill(struct zfcp_adapter *); extern void zfcp_erp_wait(struct zfcp_adapter *); extern void zfcp_erp_notify(struct zfcp_erp_action *, unsigned long); extern void zfcp_erp_timeout_handler(struct timer_list *t); -extern void zfcp_erp_adapter_reset_sync(struct zfcp_adapter *adapter, char *id); +extern void zfcp_erp_adapter_reset_sync(struct zfcp_adapter *adapter, + char *dbftag); /* zfcp_fc.c */ extern struct kmem_cache *zfcp_fc_req_cache; @@ -144,6 +146,7 @@ extern void zfcp_qdio_close(struct zfcp_qdio *); extern void zfcp_qdio_siosl(struct zfcp_adapter *); /* zfcp_scsi.c */ +extern bool zfcp_experimental_dix; extern struct scsi_transport_template *zfcp_scsi_transport_template; extern int zfcp_scsi_adapter_register(struct zfcp_adapter *); extern void zfcp_scsi_adapter_unregister(struct zfcp_adapter *); diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c index f6c415d6ef48..db00b5e3abbe 100644 --- a/drivers/s390/scsi/zfcp_fc.c +++ b/drivers/s390/scsi/zfcp_fc.c @@ -312,7 +312,7 @@ static void zfcp_fc_incoming_logo(struct zfcp_fsf_req *req) /** * zfcp_fc_incoming_els - handle incoming ELS - * @fsf_req - request which contains incoming ELS + * @fsf_req: request which contains incoming ELS */ void zfcp_fc_incoming_els(struct zfcp_fsf_req *fsf_req) { @@ -597,6 +597,48 @@ void zfcp_fc_test_link(struct zfcp_port *port) put_device(&port->dev); } +/** + * zfcp_fc_sg_free_table - free memory used by scatterlists + * @sg: pointer to scatterlist + * @count: number of scatterlist which are to be free'ed + * the scatterlist are expected to reference pages always + */ +static void zfcp_fc_sg_free_table(struct scatterlist *sg, int count) +{ + int i; + + for (i = 0; i < count; i++, sg++) + if (sg) + free_page((unsigned long) sg_virt(sg)); + else + break; +} + +/** + * zfcp_fc_sg_setup_table - init scatterlist and allocate, assign 
buffers
+ * @sg: pointer to struct scatterlist
+ * @count: number of scatterlists which should each be assigned a
+ *	   buffer of one page
+ *
+ * Returns: 0 on success, -ENOMEM otherwise
+ */
+static int zfcp_fc_sg_setup_table(struct scatterlist *sg, int count)
+{
+	void *addr;
+	int i;
+
+	sg_init_table(sg, count);
+	for (i = 0; i < count; i++, sg++) {
+		addr = (void *) get_zeroed_page(GFP_KERNEL);
+		if (!addr) {
+			zfcp_fc_sg_free_table(sg, i);
+			return -ENOMEM;
+		}
+		sg_set_buf(sg, addr, PAGE_SIZE);
+	}
+	return 0;
+}
+
 static struct zfcp_fc_req *zfcp_fc_alloc_sg_env(int buf_num)
 {
 	struct zfcp_fc_req *fc_req;
@@ -605,7 +647,7 @@ static struct zfcp_fc_req *zfcp_fc_alloc_sg_env(int buf_num)
 	if (!fc_req)
 		return NULL;
 
-	if (zfcp_sg_setup_table(&fc_req->sg_rsp, buf_num)) {
+	if (zfcp_fc_sg_setup_table(&fc_req->sg_rsp, buf_num)) {
 		kmem_cache_free(zfcp_fc_req_cache, fc_req);
 		return NULL;
 	}
@@ -763,7 +805,7 @@ void zfcp_fc_scan_ports(struct work_struct *work)
 			break;
 		}
 	}
-	zfcp_sg_free_table(&fc_req->sg_rsp, buf_num);
+	zfcp_fc_sg_free_table(&fc_req->sg_rsp, buf_num);
 	kmem_cache_free(zfcp_fc_req_cache, fc_req);
 out:
 	zfcp_fc_wka_port_put(&adapter->gs->ds);
diff --git a/drivers/s390/scsi/zfcp_fc.h b/drivers/s390/scsi/zfcp_fc.h
index 3cd74729cfb9..6902ae1f8e4f 100644
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -121,9 +121,24 @@ struct zfcp_fc_rspn_req {
 /**
  * struct zfcp_fc_req - Container for FC ELS and CT requests sent from zfcp
  * @ct_els: data required for issuing fsf command
- * @sg_req: scatterlist entry for request data
- * @sg_rsp: scatterlist entry for response data
- * @u: request specific data
+ * @sg_req: scatterlist entry for request data, refers to embedded @u submember
+ * @sg_rsp: scatterlist entry for response data, refers to embedded @u submember
+ * @u: request and response specific data
+ * @u.adisc: ADISC specific data
+ * @u.adisc.req: ADISC request
+ * @u.adisc.rsp: ADISC response
+ * @u.gid_pn: GID_PN specific data
+ * @u.gid_pn.req: GID_PN request
+ * @u.gid_pn.rsp: GID_PN response
+ * @u.gpn_ft: GPN_FT specific data
+ * @u.gpn_ft.sg_rsp2: GPN_FT response, not embedded here, allocated elsewhere
+ * @u.gpn_ft.req: GPN_FT request
+ * @u.gspn: GSPN specific data
+ * @u.gspn.req: GSPN request
+ * @u.gspn.rsp: GSPN response
+ * @u.rspn: RSPN specific data
+ * @u.rspn.req: RSPN request
+ * @u.rspn.rsp: RSPN response
  */
 struct zfcp_fc_req {
 	struct zfcp_fsf_ct_els ct_els;
diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
index 3c86e27f094d..d94496ee6883 100644
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -19,6 +19,11 @@
 #include "zfcp_qdio.h"
 #include "zfcp_reqlist.h"
 
+/* timeout for FSF requests sent during scsi_eh: abort or FCP TMF */
+#define ZFCP_FSF_SCSI_ER_TIMEOUT (10*HZ)
+/* timeout for: exchange config/port data outside ERP, or open/close WKA port */
+#define ZFCP_FSF_REQUEST_TIMEOUT (60*HZ)
+
 struct kmem_cache *zfcp_fsf_qtcb_cache;
 
 static void zfcp_fsf_request_timeout_handler(struct timer_list *t)
@@ -74,18 +79,18 @@ static void zfcp_fsf_class_not_supp(struct zfcp_fsf_req *req)
 
 /**
  * zfcp_fsf_req_free - free memory used by fsf request
- * @fsf_req: pointer to struct zfcp_fsf_req
+ * @req: pointer to struct zfcp_fsf_req
  */
 void zfcp_fsf_req_free(struct zfcp_fsf_req *req)
 {
 	if (likely(req->pool)) {
-		if (likely(req->qtcb))
+		if (likely(!zfcp_fsf_req_is_status_read_buffer(req)))
 			mempool_free(req->qtcb, req->adapter->pool.qtcb_pool);
 		mempool_free(req, req->pool);
 		return;
 	}
 
-	if (likely(req->qtcb))
+	if
(likely(!zfcp_fsf_req_is_status_read_buffer(req))) kmem_cache_free(zfcp_fsf_qtcb_cache, req->qtcb); kfree(req); } @@ -379,7 +384,7 @@ static void zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *req) /** * zfcp_fsf_req_complete - process completion of a FSF request - * @fsf_req: The FSF request that has been completed. + * @req: The FSF request that has been completed. * * When a request has been completed either from the FCP adapter, * or it has been dismissed due to a queue shutdown, this function @@ -388,7 +393,7 @@ static void zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *req) */ static void zfcp_fsf_req_complete(struct zfcp_fsf_req *req) { - if (unlikely(req->fsf_command == FSF_QTCB_UNSOLICITED_STATUS)) { + if (unlikely(zfcp_fsf_req_is_status_read_buffer(req))) { zfcp_fsf_status_read_handler(req); return; } @@ -705,7 +710,6 @@ static struct zfcp_fsf_req *zfcp_fsf_req_create(struct zfcp_qdio *qdio, init_completion(&req->completion); req->adapter = adapter; - req->fsf_command = fsf_cmd; req->req_id = adapter->req_no; if (likely(fsf_cmd != FSF_QTCB_UNSOLICITED_STATUS)) { @@ -720,14 +724,13 @@ static struct zfcp_fsf_req *zfcp_fsf_req_create(struct zfcp_qdio *qdio, return ERR_PTR(-ENOMEM); } - req->seq_no = adapter->fsf_req_seq_no; req->qtcb->prefix.req_seq_no = adapter->fsf_req_seq_no; req->qtcb->prefix.req_id = req->req_id; req->qtcb->prefix.ulp_info = 26; - req->qtcb->prefix.qtcb_type = fsf_qtcb_type[req->fsf_command]; + req->qtcb->prefix.qtcb_type = fsf_qtcb_type[fsf_cmd]; req->qtcb->prefix.qtcb_version = FSF_QTCB_CURRENT_VERSION; req->qtcb->header.req_handle = req->req_id; - req->qtcb->header.fsf_command = req->fsf_command; + req->qtcb->header.fsf_command = fsf_cmd; } zfcp_qdio_req_init(adapter->qdio, &req->qdio_req, req->req_id, sbtype, @@ -740,7 +743,6 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req) { struct zfcp_adapter *adapter = req->adapter; struct zfcp_qdio *qdio = adapter->qdio; - int with_qtcb = (req->qtcb != NULL); int req_id = req->req_id; zfcp_reqlist_add(adapter->req_list, req); @@ -756,7 +758,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req) } /* Don't increase for unsolicited status */ - if (with_qtcb) + if (!zfcp_fsf_req_is_status_read_buffer(req)) adapter->fsf_req_seq_no++; adapter->req_no++; @@ -765,8 +767,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req) /** * zfcp_fsf_status_read - send status read request - * @adapter: pointer to struct zfcp_adapter - * @req_flags: request flags + * @qdio: pointer to struct zfcp_qdio * Returns: 0 on success, ERROR otherwise */ int zfcp_fsf_status_read(struct zfcp_qdio *qdio) @@ -912,7 +913,7 @@ struct zfcp_fsf_req *zfcp_fsf_abort_fcp_cmnd(struct scsi_cmnd *scmnd) req->qtcb->header.port_handle = zfcp_sdev->port->handle; req->qtcb->bottom.support.req_handle = (u64) old_req_id; - zfcp_fsf_start_timer(req, ZFCP_SCSI_ER_TIMEOUT); + zfcp_fsf_start_timer(req, ZFCP_FSF_SCSI_ER_TIMEOUT); if (!zfcp_fsf_req_send(req)) goto out; @@ -1057,8 +1058,10 @@ static int zfcp_fsf_setup_ct_els(struct zfcp_fsf_req *req, /** * zfcp_fsf_send_ct - initiate a Generic Service request (FC-GS) + * @wka_port: pointer to zfcp WKA port to send CT/GS to * @ct: pointer to struct zfcp_send_ct with data for request * @pool: if non-null this mempool is used to allocate struct zfcp_fsf_req + * @timeout: timeout that hardware should use, and a later software timeout */ int zfcp_fsf_send_ct(struct zfcp_fc_wka_port *wka_port, struct zfcp_fsf_ct_els *ct, mempool_t *pool, @@ -1151,7 +1154,10 @@ skip_fsfstatus: /** * zfcp_fsf_send_els - initiate an 
ELS command (FC-FS) + * @adapter: pointer to zfcp adapter + * @d_id: N_Port_ID to send ELS to * @els: pointer to struct zfcp_send_els with data for the command + * @timeout: timeout that hardware should use, and a later software timeout */ int zfcp_fsf_send_els(struct zfcp_adapter *adapter, u32 d_id, struct zfcp_fsf_ct_els *els, unsigned int timeout) @@ -1809,7 +1815,7 @@ static void zfcp_fsf_open_lun_handler(struct zfcp_fsf_req *req) case FSF_LUN_SHARING_VIOLATION: if (qual->word[0]) dev_warn(&zfcp_sdev->port->adapter->ccw_device->dev, - "LUN 0x%Lx on port 0x%Lx is already in " + "LUN 0x%016Lx on port 0x%016Lx is already in " "use by CSS%d, MIF Image ID %x\n", zfcp_scsi_dev_lun(sdev), (unsigned long long)zfcp_sdev->port->wwpn, @@ -1986,7 +1992,7 @@ out: return retval; } -static void zfcp_fsf_update_lat(struct fsf_latency_record *lat_rec, u32 lat) +static void zfcp_fsf_update_lat(struct zfcp_latency_record *lat_rec, u32 lat) { lat_rec->sum += lat; lat_rec->min = min(lat_rec->min, lat); @@ -1996,7 +2002,7 @@ static void zfcp_fsf_update_lat(struct fsf_latency_record *lat_rec, u32 lat) static void zfcp_fsf_req_trace(struct zfcp_fsf_req *req, struct scsi_cmnd *scsi) { struct fsf_qual_latency_info *lat_in; - struct latency_cont *lat = NULL; + struct zfcp_latency_cont *lat = NULL; struct zfcp_scsi_dev *zfcp_sdev; struct zfcp_blk_drv_data blktrc; int ticks = req->adapter->timer_ticks; @@ -2088,11 +2094,8 @@ static void zfcp_fsf_fcp_handler_common(struct zfcp_fsf_req *req, break; case FSF_CMND_LENGTH_NOT_VALID: dev_err(&req->adapter->ccw_device->dev, - "Incorrect CDB length %d, LUN 0x%016Lx on " - "port 0x%016Lx closed\n", - req->qtcb->bottom.io.fcp_cmnd_length, - (unsigned long long)zfcp_scsi_dev_lun(sdev), - (unsigned long long)zfcp_sdev->port->wwpn); + "Incorrect FCP_CMND length %d, FCP device closed\n", + req->qtcb->bottom.io.fcp_cmnd_length); zfcp_erp_adapter_shutdown(req->adapter, 0, "fssfch4"); req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2369,7 +2372,7 @@ struct zfcp_fsf_req *zfcp_fsf_fcp_task_mgmt(struct scsi_device *sdev, fcp_cmnd = &req->qtcb->bottom.io.fcp_cmnd.iu; zfcp_fc_fcp_tm(fcp_cmnd, sdev, tm_flags); - zfcp_fsf_start_timer(req, ZFCP_SCSI_ER_TIMEOUT); + zfcp_fsf_start_timer(req, ZFCP_FSF_SCSI_ER_TIMEOUT); if (!zfcp_fsf_req_send(req)) goto out; @@ -2382,7 +2385,7 @@ out: /** * zfcp_fsf_reqid_check - validate req_id contained in SBAL returned by QDIO - * @adapter: pointer to struct zfcp_adapter + * @qdio: pointer to struct zfcp_qdio * @sbal_idx: response queue index of SBAL to be processed */ void zfcp_fsf_reqid_check(struct zfcp_qdio *qdio, int sbal_idx) diff --git a/drivers/s390/scsi/zfcp_fsf.h b/drivers/s390/scsi/zfcp_fsf.h index 535628b92f0a..2c658b66318c 100644 --- a/drivers/s390/scsi/zfcp_fsf.h +++ b/drivers/s390/scsi/zfcp_fsf.h @@ -438,8 +438,8 @@ struct zfcp_blk_drv_data { /** * struct zfcp_fsf_ct_els - zfcp data for ct or els request - * @req: scatter-gather list for request - * @resp: scatter-gather list for response + * @req: scatter-gather list for request, points to &zfcp_fc_req.sg_req or BSG + * @resp: scatter-gather list for response, points to &zfcp_fc_req.sg_rsp or BSG * @handler: handler function (called for response to the request) * @handler_data: data passed to handler function * @port: Optional pointer to port for zfcp internal ELS (only test link ADISC) diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c index 4ab02e8d36f3..10c4e8e3fd59 100644 --- a/drivers/s390/scsi/zfcp_qdio.c +++ b/drivers/s390/scsi/zfcp_qdio.c @@ -4,7 +4,7 @@ 
* * Setup and helper functions to access QDIO. * - * Copyright IBM Corp. 2002, 2010 + * Copyright IBM Corp. 2002, 2017 */ #define KMSG_COMPONENT "zfcp" @@ -19,7 +19,7 @@ static bool enable_multibuffer = true; module_param_named(datarouter, enable_multibuffer, bool, 0400); MODULE_PARM_DESC(datarouter, "Enable hardware data router support (default on)"); -static void zfcp_qdio_handler_error(struct zfcp_qdio *qdio, char *id, +static void zfcp_qdio_handler_error(struct zfcp_qdio *qdio, char *dbftag, unsigned int qdio_err) { struct zfcp_adapter *adapter = qdio->adapter; @@ -28,12 +28,12 @@ static void zfcp_qdio_handler_error(struct zfcp_qdio *qdio, char *id, if (qdio_err & QDIO_ERROR_SLSB_STATE) { zfcp_qdio_siosl(adapter); - zfcp_erp_adapter_shutdown(adapter, 0, id); + zfcp_erp_adapter_shutdown(adapter, 0, dbftag); return; } zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED | - ZFCP_STATUS_COMMON_ERP_FAILED, id); + ZFCP_STATUS_COMMON_ERP_FAILED, dbftag); } static void zfcp_qdio_zero_sbals(struct qdio_buffer *sbal[], int first, int cnt) @@ -180,7 +180,6 @@ zfcp_qdio_sbale_next(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req) * @qdio: pointer to struct zfcp_qdio * @q_req: pointer to struct zfcp_qdio_req * @sg: scatter-gather list - * @max_sbals: upper bound for number of SBALs to be used * Returns: zero or -EINVAL on error */ int zfcp_qdio_sbals_from_sg(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req, @@ -303,7 +302,7 @@ static void zfcp_qdio_setup_init_data(struct qdio_initialize *id, /** * zfcp_qdio_allocate - allocate queue memory and initialize QDIO data - * @adapter: pointer to struct zfcp_adapter + * @qdio: pointer to struct zfcp_qdio * Returns: -ENOMEM on memory allocation error or return value from * qdio_allocate */ diff --git a/drivers/s390/scsi/zfcp_qdio.h b/drivers/s390/scsi/zfcp_qdio.h index 886c662cc154..2a816a37b3c0 100644 --- a/drivers/s390/scsi/zfcp_qdio.h +++ b/drivers/s390/scsi/zfcp_qdio.h @@ -30,6 +30,8 @@ * @req_q_full: queue full incidents * @req_q_wq: used to wait for SBAL availability * @adapter: adapter used in conjunction with this qdio structure + * @max_sbale_per_sbal: qdio limit per sbal + * @max_sbale_per_req: qdio limit per request */ struct zfcp_qdio { struct qdio_buffer *res_q[QDIO_MAX_BUFFERS_PER_Q]; @@ -70,7 +72,7 @@ struct zfcp_qdio_req { /** * zfcp_qdio_sbale_req - return pointer to sbale on req_q for a request * @qdio: pointer to struct zfcp_qdio - * @q_rec: pointer to struct zfcp_qdio_req + * @q_req: pointer to struct zfcp_qdio_req * Returns: pointer to qdio_buffer_element (sbale) structure */ static inline struct qdio_buffer_element * @@ -82,7 +84,7 @@ zfcp_qdio_sbale_req(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req) /** * zfcp_qdio_sbale_curr - return current sbale on req_q for a request * @qdio: pointer to struct zfcp_qdio - * @fsf_req: pointer to struct zfcp_fsf_req + * @q_req: pointer to struct zfcp_qdio_req * Returns: pointer to qdio_buffer_element (sbale) structure */ static inline struct qdio_buffer_element * @@ -135,6 +137,8 @@ void zfcp_qdio_req_init(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req, * zfcp_qdio_fill_next - Fill next sbale, only for single sbal requests * @qdio: pointer to struct zfcp_qdio * @q_req: pointer to struct zfcp_queue_req + * @data: pointer to data + * @len: length of data * * This is only required for single sbal requests, calling it when * wrapping around to the next sbal is a bug. 
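Most of the zfcp hunks in this region repair kernel-doc, the structured comment format parsed by scripts/kernel-doc: every parameter needs a `@name:' tag matching the actual function signature, which is why stale tags like `@fsf_req', `@q_rec' and `@adapter' are being renamed or replaced above. For orientation, a minimal well-formed kernel-doc comment has roughly the shape below (a generic sketch; foo_frob_widget() and its parameters are invented for illustration, not part of this patch):

/**
 * foo_frob_widget() - frob a widget and report the result
 * @widget: widget to operate on, must not be NULL
 * @flags: FOO_* flags selecting the frobbing mode
 *
 * Return: 0 on success, -EINVAL if @widget cannot be frobbed.
 */
int foo_frob_widget(struct foo_widget *widget, unsigned int flags);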
@@ -182,6 +186,7 @@ int zfcp_qdio_sg_one_sbale(struct scatterlist *sg) /** * zfcp_qdio_skip_to_last_sbale - skip to last sbale in sbal + * @qdio: pointer to struct zfcp_qdio * @q_req: The current zfcp_qdio_req */ static inline diff --git a/drivers/s390/scsi/zfcp_reqlist.h b/drivers/s390/scsi/zfcp_reqlist.h index 59a943c0d51d..9b8ff249e31c 100644 --- a/drivers/s390/scsi/zfcp_reqlist.h +++ b/drivers/s390/scsi/zfcp_reqlist.h @@ -17,7 +17,7 @@ /** * struct zfcp_reqlist - Container for request list (reqlist) * @lock: Spinlock for protecting the hash list - * @list: Array of hashbuckets, each is a list of requests in this bucket + * @buckets: Array of hashbuckets, each is a list of requests in this bucket */ struct zfcp_reqlist { spinlock_t lock; diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c index a8efcb330bc1..00acc7144bbc 100644 --- a/drivers/s390/scsi/zfcp_scsi.c +++ b/drivers/s390/scsi/zfcp_scsi.c @@ -27,7 +27,11 @@ MODULE_PARM_DESC(queue_depth, "Default queue depth for new SCSI devices"); static bool enable_dif; module_param_named(dif, enable_dif, bool, 0400); -MODULE_PARM_DESC(dif, "Enable DIF/DIX data integrity support"); +MODULE_PARM_DESC(dif, "Enable DIF data integrity support (default off)"); + +bool zfcp_experimental_dix; +module_param_named(dix, zfcp_experimental_dix, bool, 0400); +MODULE_PARM_DESC(dix, "Enable experimental DIX (data integrity extension) support which implies DIF support (default off)"); static bool allow_lun_scan = true; module_param(allow_lun_scan, bool, 0600); @@ -226,7 +230,9 @@ static void zfcp_scsi_forget_cmnd(struct zfcp_fsf_req *old_req, void *data) (struct zfcp_scsi_req_filter *)data; /* already aborted - prevent side-effects - or not a SCSI command */ - if (old_req->data == NULL || old_req->fsf_command != FSF_QTCB_FCP_CMND) + if (old_req->data == NULL || + zfcp_fsf_req_is_status_read_buffer(old_req) || + old_req->qtcb->header.fsf_command != FSF_QTCB_FCP_CMND) return; /* (tmf_scope == FCP_TMF_TGT_RESET || tmf_scope == FCP_TMF_LUN_RESET) */ @@ -423,7 +429,6 @@ static struct scsi_host_template zfcp_scsi_host_template = { * ZFCP_QDIO_MAX_SBALS_PER_REQ) - 2) * 8, /* GCD, adjusted later */ .dma_boundary = ZFCP_QDIO_SBALE_LEN - 1, - .use_clustering = 1, .shost_attrs = zfcp_sysfs_shost_attrs, .sdev_attrs = zfcp_sysfs_sdev_attrs, .track_queue_depth = 1, @@ -788,11 +793,11 @@ void zfcp_scsi_set_prot(struct zfcp_adapter *adapter) data_div = atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED; - if (enable_dif && + if ((enable_dif || zfcp_experimental_dix) && adapter->adapter_features & FSF_FEATURE_DIF_PROT_TYPE1) mask |= SHOST_DIF_TYPE1_PROTECTION; - if (enable_dif && data_div && + if (zfcp_experimental_dix && data_div && adapter->adapter_features & FSF_FEATURE_DIX_PROT_TCPIP) { mask |= SHOST_DIX_TYPE1_PROTECTION; scsi_host_set_guard(shost, SHOST_DIX_GUARD_IP); diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c index c9c57b4a0b71..fc9dbad476c0 100644 --- a/drivers/s390/virtio/virtio_ccw.c +++ b/drivers/s390/virtio/virtio_ccw.c @@ -769,6 +769,17 @@ out_free: return rc; } +static void ccw_transport_features(struct virtio_device *vdev) +{ + /* + * Packed ring isn't enabled on virtio_ccw for now, + * because virtio_ccw uses some legacy accessors, + * e.g. virtqueue_get_avail() and virtqueue_get_used() + * which aren't available in packed ring currently. 
+ */ + __virtio_clear_bit(vdev, VIRTIO_F_RING_PACKED); +} + static int virtio_ccw_finalize_features(struct virtio_device *vdev) { struct virtio_ccw_device *vcdev = to_vc_device(vdev); @@ -795,6 +806,9 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev) /* Give virtio_ring a chance to accept features. */ vring_transport_features(vdev); + /* Give virtio_ccw a chance to accept features. */ + ccw_transport_features(vdev); + features->index = 0; features->features = cpu_to_le32((u32)vdev->features); /* Write the first half of the feature bits to the host. */ diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c index 2d655a97b959..a3c20e3a8b7c 100644 --- a/drivers/scsi/3w-9xxx.c +++ b/drivers/scsi/3w-9xxx.c @@ -1998,7 +1998,6 @@ static struct scsi_host_template driver_template = { .sg_tablesize = TW_APACHE_MAX_SGL_LENGTH, .max_sectors = TW_MAX_SECTORS, .cmd_per_lun = TW_MAX_CMDS_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = twa_host_attrs, .emulated = 1, .no_write_same = 1, diff --git a/drivers/scsi/3w-sas.c b/drivers/scsi/3w-sas.c index 480cf82700e9..e8f5f7c63190 100644 --- a/drivers/scsi/3w-sas.c +++ b/drivers/scsi/3w-sas.c @@ -1550,7 +1550,6 @@ static struct scsi_host_template driver_template = { .sg_tablesize = TW_LIBERATOR_MAX_SGL_LENGTH, .max_sectors = TW_MAX_SECTORS, .cmd_per_lun = TW_MAX_CMDS_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = twl_host_attrs, .emulated = 1, .no_write_same = 1, diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c index a58257645e94..2b1e0d503020 100644 --- a/drivers/scsi/3w-xxxx.c +++ b/drivers/scsi/3w-xxxx.c @@ -1174,7 +1174,7 @@ static int tw_setfeature(TW_Device_Extension *tw_dev, int parm, int param_size, command_que_value = tw_dev->command_packet_physical_address[request_id]; if (command_que_value == 0) { printk(KERN_WARNING "3w-xxxx: tw_setfeature(): Bad command packet physical address.\n"); - return 1; + return 1; } /* Send command packet to the board */ @@ -2247,7 +2247,6 @@ static struct scsi_host_template driver_template = { .sg_tablesize = TW_MAX_SGL_LENGTH, .max_sectors = TW_MAX_SECTORS, .cmd_per_lun = TW_MAX_CMDS_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = tw_host_attrs, .emulated = 1, .no_write_same = 1, diff --git a/drivers/scsi/53c700.c b/drivers/scsi/53c700.c index 6be77b3aa8a5..128d658d472a 100644 --- a/drivers/scsi/53c700.c +++ b/drivers/scsi/53c700.c @@ -318,7 +318,6 @@ NCR_700_detect(struct scsi_host_template *tpnt, tpnt->can_queue = NCR_700_COMMAND_SLOTS_PER_HOST; tpnt->sg_tablesize = NCR_700_SG_SEGMENTS; tpnt->cmd_per_lun = NCR_700_CMD_PER_LUN; - tpnt->use_clustering = ENABLE_CLUSTERING; tpnt->slave_configure = NCR_700_slave_configure; tpnt->slave_destroy = NCR_700_slave_destroy; tpnt->slave_alloc = NCR_700_slave_alloc; diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c index 9cee941f97d6..e41e51f1da71 100644 --- a/drivers/scsi/BusLogic.c +++ b/drivers/scsi/BusLogic.c @@ -2641,6 +2641,7 @@ static int blogic_resultcode(struct blogic_adapter *adapter, case BLOGIC_BAD_CMD_PARAM: blogic_warn("BusLogic Driver Protocol Error 0x%02X\n", adapter, adapter_status); + /* fall through */ case BLOGIC_DATA_UNDERRUN: case BLOGIC_DATA_OVERRUN: case BLOGIC_NOEXPECT_BUSFREE: @@ -3857,7 +3858,6 @@ static struct scsi_host_template blogic_template = { #endif .unchecked_isa_dma = 1, .max_sectors = 128, - .use_clustering = ENABLE_CLUSTERING, }; /* diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 640cd1b31a18..f38882f6f37d 100644 --- 
a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -50,18 +50,6 @@ config SCSI_NETLINK default n depends on NET -config SCSI_MQ_DEFAULT - bool "SCSI: use blk-mq I/O path by default" - default y - depends on SCSI - ---help--- - This option enables the blk-mq based I/O path for SCSI devices by - default. With this option the scsi_mod.use_blk_mq module/boot - option defaults to Y, without it to N, but it can still be - overridden either way. - - If unsure say Y. - config SCSI_PROC_FS bool "legacy /proc/scsi/ support" depends on SCSI && PROC_FS diff --git a/drivers/scsi/a100u2w.c b/drivers/scsi/a100u2w.c index 00072ed9540b..ff53fd0d12f2 100644 --- a/drivers/scsi/a100u2w.c +++ b/drivers/scsi/a100u2w.c @@ -1078,7 +1078,6 @@ static struct scsi_host_template inia100_template = { .can_queue = 1, .this_id = 1, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, }; static int inia100_probe_one(struct pci_dev *pdev, diff --git a/drivers/scsi/a2091.c b/drivers/scsi/a2091.c index 61aadc7acb49..c96bc7261a42 100644 --- a/drivers/scsi/a2091.c +++ b/drivers/scsi/a2091.c @@ -160,7 +160,7 @@ static struct scsi_host_template a2091_scsi_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = CMD_PER_LUN, - .use_clustering = DISABLE_CLUSTERING + .dma_boundary = PAGE_SIZE - 1, }; static int a2091_probe(struct zorro_dev *z, const struct zorro_device_id *ent) diff --git a/drivers/scsi/a3000.c b/drivers/scsi/a3000.c index 2427a8541247..dcf435f312dd 100644 --- a/drivers/scsi/a3000.c +++ b/drivers/scsi/a3000.c @@ -175,7 +175,6 @@ static struct scsi_host_template amiga_a3000_scsi_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING }; static int __init amiga_a3000_scsi_probe(struct platform_device *pdev) diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c index bd7f352c28f3..75ab5ff6b78c 100644 --- a/drivers/scsi/aacraid/aachba.c +++ b/drivers/scsi/aacraid/aachba.c @@ -2892,6 +2892,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd) !(dev->raw_io_64) || ((scsicmd->cmnd[1] & 0x1f) != SAI_READ_CAPACITY_16)) break; + /* fall through */ case INQUIRY: case READ_CAPACITY: case TEST_UNIT_READY: @@ -2966,6 +2967,7 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd) /* Issue FIB to tell Firmware to flush it's cache */ if ((aac_cache & 6) != 2) return aac_synchronize(scsicmd); + /* fall through */ case INQUIRY: { struct inquiry_data inq_data; @@ -3319,8 +3321,9 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd) min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data), SCSI_SENSE_BUFFERSIZE)); - break; + break; } + /* fall through */ case RESERVE: case RELEASE: case REZERO_UNIT: diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h index 39eb415987fc..3291d1c16864 100644 --- a/drivers/scsi/aacraid/aacraid.h +++ b/drivers/scsi/aacraid/aacraid.h @@ -40,6 +40,7 @@ #define nblank(x) _nblank(x)[0] #include +#include #include #include @@ -1241,7 +1242,7 @@ struct aac_fib_context { u32 unique; // unique value representing this context ulong jiffies; // used for cleanup - dmb changed to ulong struct list_head next; // used to link context's into a linked list - struct semaphore wait_sem; // this is used to wait for the next fib to arrive. + struct completion completion; // this is used to wait for the next fib to arrive. 
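The aacraid change just above, continued through the following hunks, swaps binary semaphores that were only used as one-shot event flags for completions, the kernel's preferred primitive for wait-for-event signalling. The conversion is mechanical; roughly (a sketch with placeholder struct and function names, not aacraid's actual ones):

#include <linux/completion.h>

struct my_waiter {
	struct completion event;	/* was: struct semaphore event */
};

static void my_waiter_init(struct my_waiter *w)
{
	init_completion(&w->event);	/* was: sema_init(&w->event, 0) */
}

static int my_waiter_wait(struct my_waiter *w)
{
	/* was: down_interruptible(&w->event) */
	return wait_for_completion_interruptible(&w->event);
}

static bool my_waiter_poll(struct my_waiter *w)
{
	/* was: !down_trylock(&w->event) */
	return try_wait_for_completion(&w->event);
}

static void my_waiter_wake(struct my_waiter *w)
{
	complete(&w->event);		/* was: up(&w->event) */
}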
int wait; // Set to true when thread is in WaitForSingleObject unsigned long count; // total number of FIBs on FibList struct list_head fib_list; // this holds fibs and their attachd hw_fibs @@ -1313,7 +1314,7 @@ struct fib { * This is the event the sendfib routine will wait on if the * caller did not pass one and this is synch io. */ - struct semaphore event_wait; + struct completion event_wait; spinlock_t event_lock; u32 done; /* gets set to 1 when fib is complete */ diff --git a/drivers/scsi/aacraid/commctrl.c b/drivers/scsi/aacraid/commctrl.c index 25f6600d6c09..e2899ff7913e 100644 --- a/drivers/scsi/aacraid/commctrl.c +++ b/drivers/scsi/aacraid/commctrl.c @@ -41,7 +41,6 @@ #include #include /* ssleep prototype */ #include -#include #include #include @@ -203,7 +202,7 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg) /* * Initialize the mutex used to wait for the next AIF. */ - sema_init(&fibctx->wait_sem, 0); + init_completion(&fibctx->completion); fibctx->wait = 0; /* * Initialize the fibs and set the count of fibs on @@ -335,7 +334,7 @@ return_fib: ssleep(1); } if (f.wait) { - if(down_interruptible(&fibctx->wait_sem) < 0) { + if (wait_for_completion_interruptible(&fibctx->completion) < 0) { status = -ERESTARTSYS; } else { /* Lock again and retry */ diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c index 1e77d96a18f2..d5a6aa9676c8 100644 --- a/drivers/scsi/aacraid/commsup.c +++ b/drivers/scsi/aacraid/commsup.c @@ -44,7 +44,6 @@ #include #include #include -#include #include #include #include @@ -189,7 +188,7 @@ int aac_fib_setup(struct aac_dev * dev) fibptr->hw_fib_va = hw_fib; fibptr->data = (void *) fibptr->hw_fib_va->data; fibptr->next = fibptr+1; /* Forward chain the fibs */ - sema_init(&fibptr->event_wait, 0); + init_completion(&fibptr->event_wait); spin_lock_init(&fibptr->event_lock); hw_fib->header.XferState = cpu_to_le32(0xffffffff); hw_fib->header.SenderSize = @@ -623,7 +622,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size, } if (wait) { fibptr->flags |= FIB_CONTEXT_FLAG_WAIT; - if (down_interruptible(&fibptr->event_wait)) { + if (wait_for_completion_interruptible(&fibptr->event_wait)) { fibptr->flags &= ~FIB_CONTEXT_FLAG_WAIT; return -EFAULT; } @@ -659,7 +658,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size, * hardware failure has occurred. */ unsigned long timeout = jiffies + (180 * HZ); /* 3 minutes */ - while (down_trylock(&fibptr->event_wait)) { + while (!try_wait_for_completion(&fibptr->event_wait)) { int blink; if (time_is_before_eq_jiffies(timeout)) { struct aac_queue * q = &dev->queues->queue[AdapNormCmdQueue]; @@ -689,9 +688,9 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size, */ schedule(); } - } else if (down_interruptible(&fibptr->event_wait)) { + } else if (wait_for_completion_interruptible(&fibptr->event_wait)) { /* Do nothing ... 
satisfy - * down_interruptible must_check */ + * wait_for_completion_interruptible must_check */ } spin_lock_irqsave(&fibptr->event_lock, flags); @@ -777,7 +776,7 @@ int aac_hba_send(u8 command, struct fib *fibptr, fib_callback callback, return -EFAULT; fibptr->flags |= FIB_CONTEXT_FLAG_WAIT; - if (down_interruptible(&fibptr->event_wait)) + if (wait_for_completion_interruptible(&fibptr->event_wait)) fibptr->done = 2; fibptr->flags &= ~(FIB_CONTEXT_FLAG_WAIT); @@ -1538,7 +1537,7 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced, u8 reset_type) || fib->flags & FIB_CONTEXT_FLAG_WAIT) { unsigned long flagv; spin_lock_irqsave(&fib->event_lock, flagv); - up(&fib->event_wait); + complete(&fib->event_wait); spin_unlock_irqrestore(&fib->event_lock, flagv); schedule(); retval = 0; @@ -1828,7 +1827,7 @@ int aac_check_health(struct aac_dev * aac) * Set the event to wake up the * thread that will waiting. */ - up(&fibctx->wait_sem); + complete(&fibctx->completion); } else { printk(KERN_WARNING "aifd: didn't allocate NewFib.\n"); kfree(fib); @@ -2165,7 +2164,7 @@ static void wakeup_fibctx_threads(struct aac_dev *dev, * Set the event to wake up the * thread that is waiting. */ - up(&fibctx->wait_sem); + complete(&fibctx->completion); entry = entry->next; } diff --git a/drivers/scsi/aacraid/dpcsup.c b/drivers/scsi/aacraid/dpcsup.c index ddc69738375f..40a771dd1c0e 100644 --- a/drivers/scsi/aacraid/dpcsup.c +++ b/drivers/scsi/aacraid/dpcsup.c @@ -38,7 +38,6 @@ #include #include #include -#include #include "aacraid.h" @@ -129,7 +128,7 @@ unsigned int aac_response_normal(struct aac_queue * q) spin_lock_irqsave(&fib->event_lock, flagv); if (!fib->done) { fib->done = 1; - up(&fib->event_wait); + complete(&fib->event_wait); } spin_unlock_irqrestore(&fib->event_lock, flagv); @@ -376,16 +375,16 @@ unsigned int aac_intr_normal(struct aac_dev *dev, u32 index, int isAif, start_callback = 1; } else { unsigned long flagv; - int complete = 0; + int completed = 0; dprintk((KERN_INFO "event_wait up\n")); spin_lock_irqsave(&fib->event_lock, flagv); if (fib->done == 2) { fib->done = 1; - complete = 1; + completed = 1; } else { fib->done = 1; - up(&fib->event_wait); + complete(&fib->event_wait); } spin_unlock_irqrestore(&fib->event_lock, flagv); @@ -395,7 +394,7 @@ unsigned int aac_intr_normal(struct aac_dev *dev, u32 index, int isAif, mflags); FIB_COUNTER_INCREMENT(aac_config.NativeRecved); - if (complete) + if (completed) aac_fib_complete(fib); } } else { @@ -428,16 +427,16 @@ unsigned int aac_intr_normal(struct aac_dev *dev, u32 index, int isAif, start_callback = 1; } else { unsigned long flagv; - int complete = 0; + int completed = 0; dprintk((KERN_INFO "event_wait up\n")); spin_lock_irqsave(&fib->event_lock, flagv); if (fib->done == 2) { fib->done = 1; - complete = 1; + completed = 1; } else { fib->done = 1; - up(&fib->event_wait); + complete(&fib->event_wait); } spin_unlock_irqrestore(&fib->event_lock, flagv); @@ -447,7 +446,7 @@ unsigned int aac_intr_normal(struct aac_dev *dev, u32 index, int isAif, mflags); FIB_COUNTER_INCREMENT(aac_config.NormalRecved); - if (complete) + if (completed) aac_fib_complete(fib); } } diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c index 2d4e4ddc5ace..634ddb90e7aa 100644 --- a/drivers/scsi/aacraid/linit.c +++ b/drivers/scsi/aacraid/linit.c @@ -759,6 +759,7 @@ static int aac_eh_abort(struct scsi_cmnd* cmd) !(aac->raw_io_64) || ((cmd->cmnd[1] & 0x1f) != SAI_READ_CAPACITY_16)) break; + /* fall through */ case INQUIRY: case READ_CAPACITY: /* @@ -1539,7 
+1540,6 @@ static struct scsi_host_template aac_driver_template = { #else .cmd_per_lun = AAC_NUM_IO_FIB, #endif - .use_clustering = ENABLE_CLUSTERING, .emulated = 1, .no_write_same = 1, }; @@ -1559,7 +1559,7 @@ static void __aac_shutdown(struct aac_dev * aac) struct fib *fib = &aac->fibs[i]; if (!(fib->hw_fib_va->header.XferState & cpu_to_le32(NoResponseExpected | Async)) && (fib->hw_fib_va->header.XferState & cpu_to_le32(ResponseExpected))) - up(&fib->event_wait); + complete(&fib->event_wait); } kthread_stop(aac->thread); aac->thread = NULL; diff --git a/drivers/scsi/aacraid/src.c b/drivers/scsi/aacraid/src.c index 7a51ccfa8662..8377aec0649d 100644 --- a/drivers/scsi/aacraid/src.c +++ b/drivers/scsi/aacraid/src.c @@ -106,7 +106,7 @@ static irqreturn_t aac_src_intr_message(int irq, void *dev_id) spin_lock_irqsave(&dev->sync_fib->event_lock, sflags); if (dev->sync_fib->flags & FIB_CONTEXT_FLAG_WAIT) { dev->management_fib_count--; - up(&dev->sync_fib->event_wait); + complete(&dev->sync_fib->event_wait); } spin_unlock_irqrestore(&dev->sync_fib->event_lock, sflags); diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c index 223ef6f4e258..d37584403c33 100644 --- a/drivers/scsi/advansys.c +++ b/drivers/scsi/advansys.c @@ -3192,8 +3192,8 @@ static void asc_prt_driver_conf(struct seq_file *m, struct Scsi_Host *shost) shost->sg_tablesize, shost->cmd_per_lun); seq_printf(m, - " unchecked_isa_dma %d, use_clustering %d\n", - shost->unchecked_isa_dma, shost->use_clustering); + " unchecked_isa_dma %d\n", + shost->unchecked_isa_dma); seq_printf(m, " flags 0x%x, last_reset 0x%lx, jiffies 0x%lx, asc_n_io_port 0x%x\n", @@ -10808,14 +10808,6 @@ static struct scsi_host_template advansys_template = { * for non-ISA adapters. */ .unchecked_isa_dma = true, - /* - * All adapters controlled by this driver are capable of large - * scatter-gather lists. According to the mid-level SCSI documentation - * this obviates any performance gain provided by setting - * 'use_clustering'. But empirically while CPU utilization is increased - * by enabling clustering, I/O throughput increases as well. 
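From here on, a large share of the hunks simply delete .use_clustering from SCSI host templates: merging of adjacent segments is now the block layer's default, so ENABLE_CLUSTERING lines are dropped outright, while drivers that relied on DISABLE_CLUSTERING express the same constraint through a DMA boundary instead. Schematically (a sketch; the template name is a placeholder):

static struct scsi_host_template foo_template = {
	.name		= "foo",
	/* was: .use_clustering = DISABLE_CLUSTERING, */
	.dma_boundary	= PAGE_SIZE - 1,	/* never merge across a page */
};

Templates that previously had ENABLE_CLUSTERING need no replacement line at all, which is the pattern visible in the aacraid, advansys and most other conversions below.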
- */ - .use_clustering = ENABLE_CLUSTERING, }; static int advansys_wide_init_chip(struct Scsi_Host *shost) diff --git a/drivers/scsi/aha152x.c b/drivers/scsi/aha152x.c index 301b3cad15f8..97872838b983 100644 --- a/drivers/scsi/aha152x.c +++ b/drivers/scsi/aha152x.c @@ -2920,7 +2920,7 @@ static struct scsi_host_template aha152x_driver_template = { .can_queue = 1, .this_id = 7, .sg_tablesize = SG_ALL, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .slave_alloc = aha152x_adjust_queue, }; diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c index 41add33e3f1f..ba7a5725be04 100644 --- a/drivers/scsi/aha1542.c +++ b/drivers/scsi/aha1542.c @@ -58,8 +58,15 @@ struct aha1542_hostdata { int aha1542_last_mbi_used; int aha1542_last_mbo_used; struct scsi_cmnd *int_cmds[AHA1542_MAILBOXES]; - struct mailbox mb[2 * AHA1542_MAILBOXES]; - struct ccb ccb[AHA1542_MAILBOXES]; + struct mailbox *mb; + dma_addr_t mb_handle; + struct ccb *ccb; + dma_addr_t ccb_handle; +}; + +struct aha1542_cmd { + struct chain *chain; + dma_addr_t chain_handle; }; static inline void aha1542_intr_reset(u16 base) @@ -233,6 +240,21 @@ static int aha1542_test_port(struct Scsi_Host *sh) return 1; } +static void aha1542_free_cmd(struct scsi_cmnd *cmd) +{ + struct aha1542_cmd *acmd = scsi_cmd_priv(cmd); + struct device *dev = cmd->device->host->dma_dev; + size_t len = scsi_sg_count(cmd) * sizeof(struct chain); + + if (acmd->chain) { + dma_unmap_single(dev, acmd->chain_handle, len, DMA_TO_DEVICE); + kfree(acmd->chain); + } + + acmd->chain = NULL; + scsi_dma_unmap(cmd); +} + static irqreturn_t aha1542_interrupt(int irq, void *dev_id) { struct Scsi_Host *sh = dev_id; @@ -303,7 +325,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id) return IRQ_HANDLED; }; - mbo = (scsi2int(mb[mbi].ccbptr) - (isa_virt_to_bus(&ccb[0]))) / sizeof(struct ccb); + mbo = (scsi2int(mb[mbi].ccbptr) - (unsigned long)aha1542->ccb_handle) / sizeof(struct ccb); mbistatus = mb[mbi].status; mb[mbi].status = 0; aha1542->aha1542_last_mbi_used = mbi; @@ -331,8 +353,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id) return IRQ_HANDLED; } my_done = tmp_cmd->scsi_done; - kfree(tmp_cmd->host_scribble); - tmp_cmd->host_scribble = NULL; + aha1542_free_cmd(tmp_cmd); /* Fetch the sense data, and tuck it away, in the required slot. 
The Adaptec automatically fetches it, and there is no guarantee that we will still have it in the cdb when we come back */ @@ -369,6 +390,7 @@ static irqreturn_t aha1542_interrupt(int irq, void *dev_id) static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) { + struct aha1542_cmd *acmd = scsi_cmd_priv(cmd); struct aha1542_hostdata *aha1542 = shost_priv(sh); u8 direction; u8 target = cmd->device->id; @@ -378,7 +400,6 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) int mbo, sg_count; struct mailbox *mb = aha1542->mb; struct ccb *ccb = aha1542->ccb; - struct chain *cptr; if (*cmd->cmnd == REQUEST_SENSE) { /* Don't do the command - we have the sense data already */ @@ -398,15 +419,17 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) print_hex_dump_bytes("command: ", DUMP_PREFIX_NONE, cmd->cmnd, cmd->cmd_len); } #endif - if (bufflen) { /* allocate memory before taking host_lock */ - sg_count = scsi_sg_count(cmd); - cptr = kmalloc_array(sg_count, sizeof(*cptr), - GFP_KERNEL | GFP_DMA); - if (!cptr) - return SCSI_MLQUEUE_HOST_BUSY; - } else { - sg_count = 0; - cptr = NULL; + sg_count = scsi_dma_map(cmd); + if (sg_count) { + size_t len = sg_count * sizeof(struct chain); + + acmd->chain = kmalloc(len, GFP_DMA); + if (!acmd->chain) + goto out_unmap; + acmd->chain_handle = dma_map_single(sh->dma_dev, acmd->chain, + len, DMA_TO_DEVICE); + if (dma_mapping_error(sh->dma_dev, acmd->chain_handle)) + goto out_free_chain; } /* Use the outgoing mailboxes in a round-robin fashion, because this @@ -437,7 +460,8 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) shost_printk(KERN_DEBUG, sh, "Sending command (%d %p)...", mbo, cmd->scsi_done); #endif - any2scsi(mb[mbo].ccbptr, isa_virt_to_bus(&ccb[mbo])); /* This gets trashed for some reason */ + /* This gets trashed for some reason */ + any2scsi(mb[mbo].ccbptr, aha1542->ccb_handle + mbo * sizeof(*ccb)); memset(&ccb[mbo], 0, sizeof(struct ccb)); @@ -456,21 +480,18 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) int i; ccb[mbo].op = 2; /* SCSI Initiator Command w/scatter-gather */ - cmd->host_scribble = (void *)cptr; scsi_for_each_sg(cmd, sg, sg_count, i) { - any2scsi(cptr[i].dataptr, isa_page_to_bus(sg_page(sg)) - + sg->offset); - any2scsi(cptr[i].datalen, sg->length); + any2scsi(acmd->chain[i].dataptr, sg_dma_address(sg)); + any2scsi(acmd->chain[i].datalen, sg_dma_len(sg)); }; any2scsi(ccb[mbo].datalen, sg_count * sizeof(struct chain)); - any2scsi(ccb[mbo].dataptr, isa_virt_to_bus(cptr)); + any2scsi(ccb[mbo].dataptr, acmd->chain_handle); #ifdef DEBUG - shost_printk(KERN_DEBUG, sh, "cptr %p: ", cptr); - print_hex_dump_bytes("cptr: ", DUMP_PREFIX_NONE, cptr, 18); + shost_printk(KERN_DEBUG, sh, "cptr %p: ", acmd->chain); + print_hex_dump_bytes("cptr: ", DUMP_PREFIX_NONE, acmd->chain, 18); #endif } else { ccb[mbo].op = 0; /* SCSI Initiator Command */ - cmd->host_scribble = NULL; any2scsi(ccb[mbo].datalen, 0); any2scsi(ccb[mbo].dataptr, 0); }; @@ -488,24 +509,29 @@ static int aha1542_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) spin_unlock_irqrestore(sh->host_lock, flags); return 0; +out_free_chain: + kfree(acmd->chain); + acmd->chain = NULL; +out_unmap: + scsi_dma_unmap(cmd); + return SCSI_MLQUEUE_HOST_BUSY; } /* Initialize mailboxes */ static void setup_mailboxes(struct Scsi_Host *sh) { struct aha1542_hostdata *aha1542 = shost_priv(sh); - int i; - struct mailbox *mb = aha1542->mb; - struct ccb *ccb = 
aha1542->ccb; - u8 mb_cmd[5] = { CMD_MBINIT, AHA1542_MAILBOXES, 0, 0, 0}; + int i; for (i = 0; i < AHA1542_MAILBOXES; i++) { - mb[i].status = mb[AHA1542_MAILBOXES + i].status = 0; - any2scsi(mb[i].ccbptr, isa_virt_to_bus(&ccb[i])); + aha1542->mb[i].status = 0; + any2scsi(aha1542->mb[i].ccbptr, + aha1542->ccb_handle + i * sizeof(struct ccb)); + aha1542->mb[AHA1542_MAILBOXES + i].status = 0; }; aha1542_intr_reset(sh->io_port); /* reset interrupts, so they don't block */ - any2scsi((mb_cmd + 2), isa_virt_to_bus(mb)); + any2scsi(mb_cmd + 2, aha1542->mb_handle); if (aha1542_out(sh->io_port, mb_cmd, 5)) shost_printk(KERN_ERR, sh, "failed setting up mailboxes\n"); aha1542_intr_reset(sh->io_port); @@ -739,11 +765,26 @@ static struct Scsi_Host *aha1542_hw_init(struct scsi_host_template *tpnt, struct if (aha1542->bios_translation == BIOS_TRANSLATION_25563) shost_printk(KERN_INFO, sh, "Using extended bios translation\n"); + if (dma_set_mask_and_coherent(pdev, DMA_BIT_MASK(24)) < 0) + goto unregister; + + aha1542->mb = dma_alloc_coherent(pdev, + AHA1542_MAILBOXES * 2 * sizeof(struct mailbox), + &aha1542->mb_handle, GFP_KERNEL); + if (!aha1542->mb) + goto unregister; + + aha1542->ccb = dma_alloc_coherent(pdev, + AHA1542_MAILBOXES * sizeof(struct ccb), + &aha1542->ccb_handle, GFP_KERNEL); + if (!aha1542->ccb) + goto free_mb; + setup_mailboxes(sh); if (request_irq(sh->irq, aha1542_interrupt, 0, "aha1542", sh)) { shost_printk(KERN_ERR, sh, "Unable to allocate IRQ.\n"); - goto unregister; + goto free_ccb; } if (sh->dma_channel != 0xFF) { if (request_dma(sh->dma_channel, "aha1542")) { @@ -762,11 +803,18 @@ static struct Scsi_Host *aha1542_hw_init(struct scsi_host_template *tpnt, struct scsi_scan_host(sh); return sh; + free_dma: if (sh->dma_channel != 0xff) free_dma(sh->dma_channel); free_irq: free_irq(sh->irq, sh); +free_ccb: + dma_free_coherent(pdev, AHA1542_MAILBOXES * sizeof(struct ccb), + aha1542->ccb, aha1542->ccb_handle); +free_mb: + dma_free_coherent(pdev, AHA1542_MAILBOXES * 2 * sizeof(struct mailbox), + aha1542->mb, aha1542->mb_handle); unregister: scsi_host_put(sh); release: @@ -777,9 +825,16 @@ release: static int aha1542_release(struct Scsi_Host *sh) { + struct aha1542_hostdata *aha1542 = shost_priv(sh); + struct device *dev = sh->dma_dev; + scsi_remove_host(sh); if (sh->dma_channel != 0xff) free_dma(sh->dma_channel); + dma_free_coherent(dev, AHA1542_MAILBOXES * sizeof(struct ccb), + aha1542->ccb, aha1542->ccb_handle); + dma_free_coherent(dev, AHA1542_MAILBOXES * 2 * sizeof(struct mailbox), + aha1542->mb, aha1542->mb_handle); if (sh->irq) free_irq(sh->irq, sh); if (sh->io_port && sh->n_io_port) @@ -826,7 +881,8 @@ static int aha1542_dev_reset(struct scsi_cmnd *cmd) aha1542->aha1542_last_mbo_used = mbo; - any2scsi(mb[mbo].ccbptr, isa_virt_to_bus(&ccb[mbo])); /* This gets trashed for some reason */ + /* This gets trashed for some reason */ + any2scsi(mb[mbo].ccbptr, aha1542->ccb_handle + mbo * sizeof(*ccb)); memset(&ccb[mbo], 0, sizeof(struct ccb)); @@ -901,8 +957,7 @@ static int aha1542_reset(struct scsi_cmnd *cmd, u8 reset_cmd) */ continue; } - kfree(tmp_cmd->host_scribble); - tmp_cmd->host_scribble = NULL; + aha1542_free_cmd(tmp_cmd); aha1542->int_cmds[i] = NULL; aha1542->mb[i].status = 0; } @@ -946,6 +1001,7 @@ static struct scsi_host_template driver_template = { .module = THIS_MODULE, .proc_name = "aha1542", .name = "Adaptec 1542", + .cmd_size = sizeof(struct aha1542_cmd), .queuecommand = aha1542_queuecommand, .eh_device_reset_handler= aha1542_dev_reset, .eh_bus_reset_handler = 
aha1542_bus_reset, @@ -955,7 +1011,6 @@ static struct scsi_host_template driver_template = { .this_id = 7, .sg_tablesize = 16, .unchecked_isa_dma = 1, - .use_clustering = ENABLE_CLUSTERING, }; static int aha1542_isa_match(struct device *pdev, unsigned int ndev) diff --git a/drivers/scsi/aha1740.c b/drivers/scsi/aha1740.c index 786bf7f32c64..da4150c17781 100644 --- a/drivers/scsi/aha1740.c +++ b/drivers/scsi/aha1740.c @@ -545,7 +545,6 @@ static struct scsi_host_template aha1740_template = { .can_queue = AHA1740_ECBS, .this_id = 7, .sg_tablesize = AHA1740_SCATTER, - .use_clustering = ENABLE_CLUSTERING, .eh_abort_handler = aha1740_eh_abort_handler, }; diff --git a/drivers/scsi/aic7xxx/aic79xx_osm.c b/drivers/scsi/aic7xxx/aic79xx_osm.c index 2588b8f84ba0..57992519384e 100644 --- a/drivers/scsi/aic7xxx/aic79xx_osm.c +++ b/drivers/scsi/aic7xxx/aic79xx_osm.c @@ -920,7 +920,6 @@ struct scsi_host_template aic79xx_driver_template = { .this_id = -1, .max_sectors = 8192, .cmd_per_lun = 2, - .use_clustering = ENABLE_CLUSTERING, .slave_alloc = ahd_linux_slave_alloc, .slave_configure = ahd_linux_slave_configure, .target_alloc = ahd_linux_target_alloc, diff --git a/drivers/scsi/aic7xxx/aic7xxx_osm.c b/drivers/scsi/aic7xxx/aic7xxx_osm.c index c6be3aeb302b..3c9c17450bb3 100644 --- a/drivers/scsi/aic7xxx/aic7xxx_osm.c +++ b/drivers/scsi/aic7xxx/aic7xxx_osm.c @@ -807,7 +807,6 @@ struct scsi_host_template aic7xxx_driver_template = { .this_id = -1, .max_sectors = 8192, .cmd_per_lun = 2, - .use_clustering = ENABLE_CLUSTERING, .slave_alloc = ahc_linux_slave_alloc, .slave_configure = ahc_linux_slave_configure, .target_alloc = ahc_linux_target_alloc, diff --git a/drivers/scsi/aic94xx/aic94xx_hwi.c b/drivers/scsi/aic94xx/aic94xx_hwi.c index 3b8ad55e59de..2bc7615193bd 100644 --- a/drivers/scsi/aic94xx/aic94xx_hwi.c +++ b/drivers/scsi/aic94xx/aic94xx_hwi.c @@ -1057,14 +1057,13 @@ static struct asd_ascb *asd_ascb_alloc(struct asd_ha_struct *asd_ha, if (ascb) { ascb->dma_scb.size = sizeof(struct scb); - ascb->dma_scb.vaddr = dma_pool_alloc(asd_ha->scb_pool, + ascb->dma_scb.vaddr = dma_pool_zalloc(asd_ha->scb_pool, gfp_flags, &ascb->dma_scb.dma_handle); if (!ascb->dma_scb.vaddr) { kmem_cache_free(asd_ascb_cache, ascb); return NULL; } - memset(ascb->dma_scb.vaddr, 0, sizeof(struct scb)); asd_init_ascb(asd_ha, ascb); spin_lock_irqsave(&seq->tc_index_lock, flags); diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c index 41c4d8abdd4a..f83f79b07b50 100644 --- a/drivers/scsi/aic94xx/aic94xx_init.c +++ b/drivers/scsi/aic94xx/aic94xx_init.c @@ -68,7 +68,6 @@ static struct scsi_host_template aic94xx_sht = { .this_id = -1, .sg_tablesize = SG_ALL, .max_sectors = SCSI_DEFAULT_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = sas_eh_device_reset_handler, .eh_target_reset_handler = sas_eh_target_reset_handler, .target_destroy = sas_target_destroy, diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c index d4404eea24fb..0f6751b0a633 100644 --- a/drivers/scsi/arcmsr/arcmsr_hba.c +++ b/drivers/scsi/arcmsr/arcmsr_hba.c @@ -156,7 +156,6 @@ static struct scsi_host_template arcmsr_scsi_host_template = { .sg_tablesize = ARCMSR_DEFAULT_SG_ENTRIES, .max_sectors = ARCMSR_MAX_XFER_SECTORS_C, .cmd_per_lun = ARCMSR_DEFAULT_CMD_PERLUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = arcmsr_host_attrs, .no_write_same = 1, }; @@ -903,9 +902,9 @@ static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id) if(!host){ goto pci_disable_dev; 
} - error = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); + error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); if(error){ - error = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); if(error){ printk(KERN_WARNING "scsi%d: No suitable DMA mask available\n", @@ -1049,9 +1048,9 @@ static int arcmsr_resume(struct pci_dev *pdev) pr_warn("%s: pci_enable_device error\n", __func__); return -ENODEV; } - error = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); + error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); if (error) { - error = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); if (error) { pr_warn("scsi%d: No suitable DMA mask available\n", host->host_no); diff --git a/drivers/scsi/arm/acornscsi.c b/drivers/scsi/arm/acornscsi.c index 421fe869a11e..d7509859dc00 100644 --- a/drivers/scsi/arm/acornscsi.c +++ b/drivers/scsi/arm/acornscsi.c @@ -2890,7 +2890,7 @@ static struct scsi_host_template acornscsi_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .proc_name = "acornscsi", }; diff --git a/drivers/scsi/arm/arxescsi.c b/drivers/scsi/arm/arxescsi.c index 3110736fd337..5e9dd9f34821 100644 --- a/drivers/scsi/arm/arxescsi.c +++ b/drivers/scsi/arm/arxescsi.c @@ -245,7 +245,7 @@ static struct scsi_host_template arxescsi_template = { .can_queue = 0, .this_id = 7, .sg_tablesize = SG_ALL, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .proc_name = "arxescsi", }; diff --git a/drivers/scsi/arm/cumana_1.c b/drivers/scsi/arm/cumana_1.c index ae1d809904fb..e2d2a81d8e0b 100644 --- a/drivers/scsi/arm/cumana_1.c +++ b/drivers/scsi/arm/cumana_1.c @@ -221,10 +221,10 @@ static struct scsi_host_template cumanascsi_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, .proc_name = "CumanaSCSI-1", .cmd_size = NCR5380_CMD_SIZE, .max_sectors = 128, + .dma_boundary = PAGE_SIZE - 1, }; static int cumanascsi1_probe(struct expansion_card *ec, diff --git a/drivers/scsi/arm/cumana_2.c b/drivers/scsi/arm/cumana_2.c index edce5f3cfdba..40afcbd8de61 100644 --- a/drivers/scsi/arm/cumana_2.c +++ b/drivers/scsi/arm/cumana_2.c @@ -367,7 +367,6 @@ static struct scsi_host_template cumanascsi2_template = { .this_id = 7, .sg_tablesize = SG_MAX_SEGMENTS, .dma_boundary = IOMD_DMA_BOUNDARY, - .use_clustering = DISABLE_CLUSTERING, .proc_name = "cumanascsi2", }; diff --git a/drivers/scsi/arm/eesox.c b/drivers/scsi/arm/eesox.c index e93e047f4316..8f64c370a8a7 100644 --- a/drivers/scsi/arm/eesox.c +++ b/drivers/scsi/arm/eesox.c @@ -486,7 +486,6 @@ static struct scsi_host_template eesox_template = { .this_id = 7, .sg_tablesize = SG_MAX_SEGMENTS, .dma_boundary = IOMD_DMA_BOUNDARY, - .use_clustering = DISABLE_CLUSTERING, .proc_name = "eesox", }; diff --git a/drivers/scsi/arm/oak.c b/drivers/scsi/arm/oak.c index 05b7f755499b..8f2efaab8d46 100644 --- a/drivers/scsi/arm/oak.c +++ b/drivers/scsi/arm/oak.c @@ -110,7 +110,7 @@ static struct scsi_host_template oakscsi_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .proc_name = "oakscsi", .cmd_size = NCR5380_CMD_SIZE, .max_sectors = 128, diff --git a/drivers/scsi/arm/powertec.c b/drivers/scsi/arm/powertec.c index 79aa88911b7f..759f95ba993c 100644 --- a/drivers/scsi/arm/powertec.c +++ b/drivers/scsi/arm/powertec.c @@ -294,7 +294,6 @@ static struct 
scsi_host_template powertecscsi_template = { .sg_tablesize = SG_MAX_SEGMENTS, .dma_boundary = IOMD_DMA_BOUNDARY, .cmd_per_lun = 2, - .use_clustering = ENABLE_CLUSTERING, .proc_name = "powertec", }; diff --git a/drivers/scsi/atari_scsi.c b/drivers/scsi/atari_scsi.c index 89f5154c40b6..a503dc50c4f8 100644 --- a/drivers/scsi/atari_scsi.c +++ b/drivers/scsi/atari_scsi.c @@ -714,7 +714,7 @@ static struct scsi_host_template atari_scsi_template = { .eh_host_reset_handler = atari_scsi_host_reset, .this_id = 7, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .cmd_size = NCR5380_CMD_SIZE, }; diff --git a/drivers/scsi/atp870u.c b/drivers/scsi/atp870u.c index 802d15018ec0..1267200380f8 100644 --- a/drivers/scsi/atp870u.c +++ b/drivers/scsi/atp870u.c @@ -1681,7 +1681,6 @@ static struct scsi_host_template atp870u_template = { .can_queue = qcnt /* can_queue */, .this_id = 7 /* SCSI ID */, .sg_tablesize = ATP870U_SCATTER /*SG_ALL*/ /*SG_NONE*/, - .use_clustering = ENABLE_CLUSTERING, .max_sectors = ATP870U_MAX_SECTORS, }; diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c index effb6fc95af4..39f3820572b4 100644 --- a/drivers/scsi/be2iscsi/be_main.c +++ b/drivers/scsi/be2iscsi/be_main.c @@ -214,12 +214,6 @@ static char const *cqe_desc[] = { "CXN_KILLED_IMM_DATA_RCVD" }; -static int beiscsi_slave_configure(struct scsi_device *sdev) -{ - blk_queue_max_segment_size(sdev->request_queue, 65536); - return 0; -} - static int beiscsi_eh_abort(struct scsi_cmnd *sc) { struct iscsi_task *abrt_task = (struct iscsi_task *)sc->SCp.ptr; @@ -393,7 +387,6 @@ static struct scsi_host_template beiscsi_sht = { .proc_name = DRV_NAME, .queuecommand = iscsi_queuecommand, .change_queue_depth = scsi_change_queue_depth, - .slave_configure = beiscsi_slave_configure, .target_alloc = iscsi_target_alloc, .eh_timed_out = iscsi_eh_cmd_timed_out, .eh_abort_handler = beiscsi_eh_abort, @@ -404,8 +397,8 @@ static struct scsi_host_template beiscsi_sht = { .can_queue = BE2_IO_DEPTH, .this_id = -1, .max_sectors = BEISCSI_MAX_SECTORS, + .max_segment_size = 65536, .cmd_per_lun = BEISCSI_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .vendor_id = SCSI_NL_VID_TYPE_PCI | BE_VENDOR_ID, .track_queue_depth = 1, }; diff --git a/drivers/scsi/bfa/bfa_ioc.c b/drivers/scsi/bfa/bfa_ioc.c index 16d3aeb0e572..9631877aba4f 100644 --- a/drivers/scsi/bfa/bfa_ioc.c +++ b/drivers/scsi/bfa/bfa_ioc.c @@ -3819,7 +3819,7 @@ bfa_sfp_scn(struct bfa_sfp_s *sfp, struct bfi_mbmsg_s *msg) sfp->state = BFA_SFP_STATE_REMOVED; sfp->data_valid = 0; bfa_sfp_scn_aen_post(sfp, rsp); - break; + break; case BFA_SFP_SCN_FAILED: sfp->state = BFA_SFP_STATE_FAILED; sfp->data_valid = 0; @@ -5763,7 +5763,7 @@ bfa_phy_intr(void *phyarg, struct bfi_mbmsg_s *msg) (struct bfa_phy_stats_s *) phy->ubuf; bfa_phy_ntoh32((u32 *)stats, (u32 *)phy->dbuf_kva, sizeof(struct bfa_phy_stats_s)); - bfa_trc(phy, stats->status); + bfa_trc(phy, stats->status); } phy->status = status; diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c index 911efc98d1fd..42a0caf6740d 100644 --- a/drivers/scsi/bfa/bfad.c +++ b/drivers/scsi/bfa/bfad.c @@ -739,14 +739,10 @@ bfad_pci_init(struct pci_dev *pdev, struct bfad_s *bfad) pci_set_master(pdev); - - if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) || - (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0)) { - if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) || - (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0)) { - printk(KERN_ERR "pci_set_dma_mask fail %p\n", pdev); 
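The bfad hunks here, like the arcmsr ones earlier, also migrate from the legacy pci_set_dma_mask()/pci_set_consistent_dma_mask() pair to the generic DMA API: dma_set_mask_and_coherent() sets the streaming and coherent masks in one call, so the nested 64-bit-then-32-bit fallback collapses into a single condition. A minimal probe-time sketch (foo_setup_dma() is an invented wrapper around the pattern used in these drivers):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int foo_setup_dma(struct pci_dev *pdev)
{
	/* prefer 64-bit DMA, fall back to 32-bit; zero means success */
	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
		return -ENODEV;
	return 0;
}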
- goto out_release_region; - } + if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { + printk(KERN_ERR "dma_set_mask_and_coherent fail %p\n", pdev); + goto out_release_region; } /* Enable PCIE Advanced Error Recovery (AER) if kernel supports */ @@ -1565,9 +1561,9 @@ bfad_pci_slot_reset(struct pci_dev *pdev) pci_save_state(pdev); pci_set_master(pdev); - if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(64)) != 0) - if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(32)) != 0) - goto out_disable_device; + if (dma_set_mask_and_coherent(&bfad->pcidev->dev, DMA_BIT_MASK(64)) || + dma_set_mask_and_coherent(&bfad->pcidev->dev, DMA_BIT_MASK(32))) + goto out_disable_device; if (restart_bfa(bfad) == -1) goto out_disable_device; diff --git a/drivers/scsi/bfa/bfad_im.c b/drivers/scsi/bfa/bfad_im.c index c4a33317d344..394930cbaa13 100644 --- a/drivers/scsi/bfa/bfad_im.c +++ b/drivers/scsi/bfa/bfad_im.c @@ -817,7 +817,6 @@ struct scsi_host_template bfad_im_scsi_host_template = { .this_id = -1, .sg_tablesize = BFAD_IO_MAX_SGE, .cmd_per_lun = 3, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = bfad_im_host_attrs, .max_sectors = BFAD_MAX_SECTORS, .vendor_id = BFA_PCI_VENDOR_ID_BROCADE, @@ -840,7 +839,6 @@ struct scsi_host_template bfad_im_vport_template = { .this_id = -1, .sg_tablesize = BFAD_IO_MAX_SGE, .cmd_per_lun = 3, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = bfad_im_vport_attrs, .max_sectors = BFAD_MAX_SECTORS, }; diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c index bcd30e2374f1..2e4e7159ebf9 100644 --- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c +++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c @@ -2970,7 +2970,6 @@ static struct scsi_host_template bnx2fc_shost_template = { .change_queue_depth = scsi_change_queue_depth, .this_id = -1, .cmd_per_lun = 3, - .use_clustering = ENABLE_CLUSTERING, .sg_tablesize = BNX2FC_MAX_BDS_PER_CMD, .max_sectors = 1024, .track_queue_depth = 1, diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c index e9e669a6c2bc..91f5316aa3ab 100644 --- a/drivers/scsi/bnx2i/bnx2i_hwi.c +++ b/drivers/scsi/bnx2i/bnx2i_hwi.c @@ -1906,7 +1906,6 @@ static int bnx2i_queue_scsi_cmd_resp(struct iscsi_session *session, struct iscsi_task *task; struct scsi_cmnd *sc; int rc = 0; - int cpu; spin_lock(&session->back_lock); task = iscsi_itt_to_task(bnx2i_conn->cls_conn->dd_data, @@ -1917,14 +1916,9 @@ static int bnx2i_queue_scsi_cmd_resp(struct iscsi_session *session, } sc = task->sc; - if (!blk_rq_cpu_valid(sc->request)) - cpu = smp_processor_id(); - else - cpu = sc->request->cpu; - spin_unlock(&session->back_lock); - p = &per_cpu(bnx2i_percpu, cpu); + p = &per_cpu(bnx2i_percpu, blk_mq_rq_cpu(sc->request)); spin_lock(&p->p_work_lock); if (unlikely(!p->iothread)) { rc = -EINVAL; @@ -2433,7 +2427,6 @@ static void bnx2i_process_ofld_cmpl(struct bnx2i_hba *hba, { u32 cid_addr; struct bnx2i_endpoint *ep; - u32 cid_num; ep = bnx2i_find_ep_in_ofld_list(hba, ofld_kcqe->iscsi_conn_id); if (!ep) { @@ -2468,7 +2461,6 @@ static void bnx2i_process_ofld_cmpl(struct bnx2i_hba *hba, } else { ep->state = EP_STATE_OFLD_COMPL; cid_addr = ofld_kcqe->iscsi_conn_context_id; - cid_num = bnx2i_get_cid_num(ep); ep->ep_cid = cid_addr; ep->qp.ctx_base = NULL; } diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c index de0a507577ef..69c75426c5eb 100644 --- a/drivers/scsi/bnx2i/bnx2i_iscsi.c +++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c @@ -2263,7 +2263,6 @@ static struct 
scsi_host_template bnx2i_host_template = { .max_sectors = 127, .cmd_per_lun = 128, .this_id = -1, - .use_clustering = ENABLE_CLUSTERING, .sg_tablesize = ISCSI_MAX_BDS_PER_CMD, .shost_attrs = bnx2i_dev_attributes, .track_queue_depth = 1, diff --git a/drivers/scsi/csiostor/csio_init.c b/drivers/scsi/csiostor/csio_init.c index 1a458ce08210..cf629380a981 100644 --- a/drivers/scsi/csiostor/csio_init.c +++ b/drivers/scsi/csiostor/csio_init.c @@ -255,7 +255,6 @@ static void csio_hw_exit_workers(struct csio_hw *hw) { cancel_work_sync(&hw->evtq_work); - flush_scheduled_work(); } static int @@ -646,7 +645,7 @@ csio_shost_init(struct csio_hw *hw, struct device *dev, if (csio_lnode_init(ln, hw, pln)) goto err_shost_put; - if (scsi_add_host(shost, dev)) + if (scsi_add_host_with_dma(shost, dev, &hw->pdev->dev)) goto err_lnode_exit; return ln; diff --git a/drivers/scsi/csiostor/csio_scsi.c b/drivers/scsi/csiostor/csio_scsi.c index 8c15b7acb4b7..bc5547a62c00 100644 --- a/drivers/scsi/csiostor/csio_scsi.c +++ b/drivers/scsi/csiostor/csio_scsi.c @@ -1780,16 +1780,10 @@ csio_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmnd) int nsge = 0; int rv = SCSI_MLQUEUE_HOST_BUSY, nr; int retval; - int cpu; struct csio_scsi_qset *sqset; struct fc_rport *rport = starget_to_rport(scsi_target(cmnd->device)); - if (!blk_rq_cpu_valid(cmnd->request)) - cpu = smp_processor_id(); - else - cpu = cmnd->request->cpu; - - sqset = &hw->sqset[ln->portid][cpu]; + sqset = &hw->sqset[ln->portid][blk_mq_rq_cpu(cmnd->request)]; nr = fc_remote_port_chkready(rport); if (nr) { @@ -2280,7 +2274,6 @@ struct scsi_host_template csio_fcoe_shost_template = { .this_id = -1, .sg_tablesize = CSIO_SCSI_MAX_SGE, .cmd_per_lun = CSIO_MAX_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = csio_fcoe_lport_attrs, .max_sectors = CSIO_MAX_SECTOR_SIZE, }; @@ -2300,7 +2293,6 @@ struct scsi_host_template csio_fcoe_shost_vport_template = { .this_id = -1, .sg_tablesize = CSIO_SCSI_MAX_SGE, .cmd_per_lun = CSIO_MAX_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = csio_fcoe_vport_attrs, .max_sectors = CSIO_MAX_SECTOR_SIZE, }; diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c index bf07735275a4..8a20411699d9 100644 --- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c +++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c @@ -95,7 +95,7 @@ static struct scsi_host_template cxgb3i_host_template = { .eh_device_reset_handler = iscsi_eh_device_reset, .eh_target_reset_handler = iscsi_eh_recover_target, .target_alloc = iscsi_target_alloc, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .this_id = -1, .track_queue_depth = 1, }; diff --git a/drivers/scsi/cxgbi/cxgb4i/Kconfig b/drivers/scsi/cxgbi/cxgb4i/Kconfig index 594f593c8821..f36b76e8e12c 100644 --- a/drivers/scsi/cxgbi/cxgb4i/Kconfig +++ b/drivers/scsi/cxgbi/cxgb4i/Kconfig @@ -1,8 +1,8 @@ config SCSI_CXGB4_ISCSI tristate "Chelsio T4 iSCSI support" depends on PCI && INET && (IPV6 || IPV6=n) - select NETDEVICES - select ETHERNET + depends on THERMAL || !THERMAL + depends on ETHERNET select NET_VENDOR_CHELSIO select CHELSIO_T4 select CHELSIO_LIB diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c index 064ef5735182..49f8028ac524 100644 --- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c +++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c @@ -113,7 +113,7 @@ static struct scsi_host_template cxgb4i_host_template = { .eh_device_reset_handler = iscsi_eh_device_reset, .eh_target_reset_handler = iscsi_eh_recover_target, .target_alloc = 
iscsi_target_alloc, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .this_id = -1, .track_queue_depth = 1, }; @@ -1767,8 +1767,7 @@ static int init_act_open(struct cxgbi_sock *csk) csk->mtu = dst_mtu(csk->dst); cxgb4_best_mtu(lldi->mtus, csk->mtu, &csk->mss_idx); csk->tx_chan = cxgb4_port_chan(ndev); - csk->smac_idx = cxgb4_tp_smt_idx(lldi->adapter_type, - cxgb4_port_viid(ndev)); + csk->smac_idx = ((struct port_info *)netdev_priv(ndev))->smt_idx; step = lldi->ntxq / lldi->nchan; csk->txq_idx = cxgb4_port_idx(ndev) * step; step = lldi->nrxq / lldi->nchan; diff --git a/drivers/scsi/cxlflash/main.c b/drivers/scsi/cxlflash/main.c index 6637116529aa..bfa13e3b191c 100644 --- a/drivers/scsi/cxlflash/main.c +++ b/drivers/scsi/cxlflash/main.c @@ -3088,12 +3088,6 @@ static ssize_t hwq_mode_store(struct device *dev, return -EINVAL; } - if ((mode == HWQ_MODE_TAG) && !shost_use_blk_mq(shost)) { - dev_info(cfgdev, "SCSI-MQ is not enabled, use a different " - "HWQ steering mode.\n"); - return -EINVAL; - } - afu->hwq_mode = mode; return count; @@ -3180,7 +3174,6 @@ static struct scsi_host_template driver_template = { .this_id = -1, .sg_tablesize = 1, /* No scatter gather support */ .max_sectors = CXLFLASH_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = cxlflash_host_attrs, .sdev_attrs = cxlflash_dev_attrs, }; diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c index 8c55ec6e1827..13fbb2eab842 100644 --- a/drivers/scsi/dc395x.c +++ b/drivers/scsi/dc395x.c @@ -4631,7 +4631,7 @@ static struct scsi_host_template dc395x_driver_template = { .cmd_per_lun = DC395x_MAX_CMD_PER_LUN, .eh_abort_handler = dc395x_eh_abort, .eh_bus_reset_handler = dc395x_eh_bus_reset, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, }; diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c index 12dc7100bb4c..d7ac498ba35a 100644 --- a/drivers/scsi/device_handler/scsi_dh_alua.c +++ b/drivers/scsi/device_handler/scsi_dh_alua.c @@ -1071,28 +1071,29 @@ static void alua_check(struct scsi_device *sdev, bool force) * Fail I/O to all paths not in state * active/optimized or active/non-optimized. 
*/ -static int alua_prep_fn(struct scsi_device *sdev, struct request *req) +static blk_status_t alua_prep_fn(struct scsi_device *sdev, struct request *req) { struct alua_dh_data *h = sdev->handler_data; struct alua_port_group *pg; unsigned char state = SCSI_ACCESS_STATE_OPTIMAL; - int ret = BLKPREP_OK; rcu_read_lock(); pg = rcu_dereference(h->pg); if (pg) state = pg->state; rcu_read_unlock(); - if (state == SCSI_ACCESS_STATE_TRANSITIONING) - ret = BLKPREP_DEFER; - else if (state != SCSI_ACCESS_STATE_OPTIMAL && - state != SCSI_ACCESS_STATE_ACTIVE && - state != SCSI_ACCESS_STATE_LBA) { - ret = BLKPREP_KILL; + + switch (state) { + case SCSI_ACCESS_STATE_OPTIMAL: + case SCSI_ACCESS_STATE_ACTIVE: + case SCSI_ACCESS_STATE_LBA: + return BLK_STS_OK; + case SCSI_ACCESS_STATE_TRANSITIONING: + return BLK_STS_RESOURCE; + default: req->rq_flags |= RQF_QUIET; + return BLK_STS_IOERR; } - return ret; - } static void alua_rescan(struct scsi_device *sdev) diff --git a/drivers/scsi/device_handler/scsi_dh_emc.c b/drivers/scsi/device_handler/scsi_dh_emc.c index 95c47909a58f..bea8e13febb6 100644 --- a/drivers/scsi/device_handler/scsi_dh_emc.c +++ b/drivers/scsi/device_handler/scsi_dh_emc.c @@ -341,17 +341,17 @@ static int clariion_check_sense(struct scsi_device *sdev, return SCSI_RETURN_NOT_HANDLED; } -static int clariion_prep_fn(struct scsi_device *sdev, struct request *req) +static blk_status_t clariion_prep_fn(struct scsi_device *sdev, + struct request *req) { struct clariion_dh_data *h = sdev->handler_data; - int ret = BLKPREP_OK; if (h->lun_state != CLARIION_LUN_OWNED) { - ret = BLKPREP_KILL; req->rq_flags |= RQF_QUIET; + return BLK_STS_IOERR; } - return ret; + return BLK_STS_OK; } static int clariion_std_inquiry(struct scsi_device *sdev, diff --git a/drivers/scsi/device_handler/scsi_dh_hp_sw.c b/drivers/scsi/device_handler/scsi_dh_hp_sw.c index e65a0ebb4b54..80129b033855 100644 --- a/drivers/scsi/device_handler/scsi_dh_hp_sw.c +++ b/drivers/scsi/device_handler/scsi_dh_hp_sw.c @@ -172,17 +172,16 @@ retry: return rc; } -static int hp_sw_prep_fn(struct scsi_device *sdev, struct request *req) +static blk_status_t hp_sw_prep_fn(struct scsi_device *sdev, struct request *req) { struct hp_sw_dh_data *h = sdev->handler_data; - int ret = BLKPREP_OK; if (h->path_state != HP_SW_PATH_ACTIVE) { - ret = BLKPREP_KILL; req->rq_flags |= RQF_QUIET; + return BLK_STS_IOERR; } - return ret; + return BLK_STS_OK; } /* diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c index d27fabae8ddd..65f1fe343c64 100644 --- a/drivers/scsi/device_handler/scsi_dh_rdac.c +++ b/drivers/scsi/device_handler/scsi_dh_rdac.c @@ -642,17 +642,16 @@ done: return 0; } -static int rdac_prep_fn(struct scsi_device *sdev, struct request *req) +static blk_status_t rdac_prep_fn(struct scsi_device *sdev, struct request *req) { struct rdac_dh_data *h = sdev->handler_data; - int ret = BLKPREP_OK; if (h->state != RDAC_STATE_ACTIVE) { - ret = BLKPREP_KILL; req->rq_flags |= RQF_QUIET; + return BLK_STS_IOERR; } - return ret; + return BLK_STS_OK; } static int rdac_check_sense(struct scsi_device *sdev, diff --git a/drivers/scsi/dmx3191d.c b/drivers/scsi/dmx3191d.c index 003c3d726238..8db1cc552932 100644 --- a/drivers/scsi/dmx3191d.c +++ b/drivers/scsi/dmx3191d.c @@ -63,7 +63,7 @@ static struct scsi_host_template dmx3191d_driver_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .cmd_size = NCR5380_CMD_SIZE, }; diff --git 
a/drivers/scsi/dpt_i2o.c b/drivers/scsi/dpt_i2o.c index 37de8fb186d7..70d1a18278af 100644 --- a/drivers/scsi/dpt_i2o.c +++ b/drivers/scsi/dpt_i2o.c @@ -934,15 +934,15 @@ static int adpt_install_hba(struct scsi_host_template* sht, struct pci_dev* pDev * See if we should enable dma64 mode. */ if (sizeof(dma_addr_t) > 4 && - pci_set_dma_mask(pDev, DMA_BIT_MASK(64)) == 0) { - if (dma_get_required_mask(&pDev->dev) > DMA_BIT_MASK(32)) - dma64 = 1; - } - if (!dma64 && pci_set_dma_mask(pDev, DMA_BIT_MASK(32)) != 0) + dma_get_required_mask(&pDev->dev) > DMA_BIT_MASK(32) && + dma_set_mask(&pDev->dev, DMA_BIT_MASK(64)) == 0) + dma64 = 1; + + if (!dma64 && dma_set_mask(&pDev->dev, DMA_BIT_MASK(32)) != 0) return -EINVAL; /* adapter only supports message blocks below 4GB */ - pci_set_consistent_dma_mask(pDev, DMA_BIT_MASK(32)); + dma_set_coherent_mask(&pDev->dev, DMA_BIT_MASK(32)); base_addr0_phys = pci_resource_start(pDev,0); hba_map0_area_size = pci_resource_len(pDev,0); @@ -3569,7 +3569,6 @@ static struct scsi_host_template driver_template = { .slave_configure = adpt_slave_configure, .can_queue = MAX_TO_IOP_MESSAGES, .this_id = 7, - .use_clustering = ENABLE_CLUSTERING, }; static int __init adpt_init(void) diff --git a/drivers/scsi/esas2r/esas2r_init.c b/drivers/scsi/esas2r/esas2r_init.c index bbe77db8938d..46b2c83ba21f 100644 --- a/drivers/scsi/esas2r/esas2r_init.c +++ b/drivers/scsi/esas2r/esas2r_init.c @@ -266,6 +266,7 @@ int esas2r_init_adapter(struct Scsi_Host *host, struct pci_dev *pcid, int i; void *next_uncached; struct esas2r_request *first_request, *last_request; + bool dma64 = false; if (index >= MAX_ADAPTERS) { esas2r_log(ESAS2R_LOG_CRIT, @@ -286,42 +287,20 @@ int esas2r_init_adapter(struct Scsi_Host *host, struct pci_dev *pcid, a->pcid = pcid; a->host = host; - if (sizeof(dma_addr_t) > 4) { - const uint64_t required_mask = dma_get_required_mask - (&pcid->dev); - if (required_mask > DMA_BIT_MASK(32) - && !pci_set_dma_mask(pcid, DMA_BIT_MASK(64)) - && !pci_set_consistent_dma_mask(pcid, - DMA_BIT_MASK(64))) { - esas2r_log_dev(ESAS2R_LOG_INFO, - &(a->pcid->dev), - "64-bit PCI addressing enabled\n"); - } else if (!pci_set_dma_mask(pcid, DMA_BIT_MASK(32)) - && !pci_set_consistent_dma_mask(pcid, - DMA_BIT_MASK(32))) { - esas2r_log_dev(ESAS2R_LOG_INFO, - &(a->pcid->dev), - "32-bit PCI addressing enabled\n"); - } else { - esas2r_log(ESAS2R_LOG_CRIT, - "failed to set DMA mask"); - esas2r_kill_adapter(index); - return 0; - } - } else { - if (!pci_set_dma_mask(pcid, DMA_BIT_MASK(32)) - && !pci_set_consistent_dma_mask(pcid, - DMA_BIT_MASK(32))) { - esas2r_log_dev(ESAS2R_LOG_INFO, - &(a->pcid->dev), - "32-bit PCI addressing enabled\n"); - } else { - esas2r_log(ESAS2R_LOG_CRIT, - "failed to set DMA mask"); - esas2r_kill_adapter(index); - return 0; - } + if (sizeof(dma_addr_t) > 4 && + dma_get_required_mask(&pcid->dev) > DMA_BIT_MASK(32) && + !dma_set_mask_and_coherent(&pcid->dev, DMA_BIT_MASK(64))) + dma64 = true; + + if (!dma64 && dma_set_mask_and_coherent(&pcid->dev, DMA_BIT_MASK(32))) { + esas2r_log(ESAS2R_LOG_CRIT, "failed to set DMA mask"); + esas2r_kill_adapter(index); + return 0; } + + esas2r_log_dev(ESAS2R_LOG_INFO, &pcid->dev, + "%s-bit PCI addressing enabled\n", dma64 ? 
"64" : "32"); + esas2r_adapters[index] = a; sprintf(a->name, ESAS2R_DRVR_NAME "_%02d", index); esas2r_debug("new adapter %p, name %s", a, a->name); diff --git a/drivers/scsi/esas2r/esas2r_main.c b/drivers/scsi/esas2r/esas2r_main.c index c07118617d89..64397d441bae 100644 --- a/drivers/scsi/esas2r/esas2r_main.c +++ b/drivers/scsi/esas2r/esas2r_main.c @@ -250,7 +250,6 @@ static struct scsi_host_template driver_template = { ESAS2R_DEFAULT_CMD_PER_LUN, .present = 0, .unchecked_isa_dma = 0, - .use_clustering = ENABLE_CLUSTERING, .emulated = 0, .proc_name = ESAS2R_DRVR_NAME, .change_queue_depth = scsi_change_queue_depth, diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c index ac7da9db7317..465df475f753 100644 --- a/drivers/scsi/esp_scsi.c +++ b/drivers/scsi/esp_scsi.c @@ -2676,7 +2676,6 @@ struct scsi_host_template scsi_esp_template = { .can_queue = 7, .this_id = 7, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, .max_sectors = 0xffff, .skip_settle_delay = 1, }; diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c index f46b312d04bc..cd19be3f3405 100644 --- a/drivers/scsi/fcoe/fcoe.c +++ b/drivers/scsi/fcoe/fcoe.c @@ -286,7 +286,6 @@ static struct scsi_host_template fcoe_shost_template = { .this_id = -1, .cmd_per_lun = 3, .can_queue = FCOE_MAX_OUTSTANDING_COMMANDS, - .use_clustering = ENABLE_CLUSTERING, .sg_tablesize = SG_ALL, .max_sectors = 0xffff, .track_queue_depth = 1, @@ -1670,7 +1669,6 @@ static void fcoe_recv_frame(struct sk_buff *skb) struct fc_stats *stats; struct fcoe_crc_eof crc_eof; struct fc_frame *fp; - struct fcoe_port *port; struct fcoe_hdr *hp; fr = fcoe_dev_from_skb(skb); @@ -1688,7 +1686,6 @@ static void fcoe_recv_frame(struct sk_buff *skb) skb_end_pointer(skb), skb->csum, skb->dev ? skb->dev->name : ""); - port = lport_priv(lport); skb_linearize(skb); /* check for skb_is_nonlinear is within skb_linearize */ /* @@ -1859,7 +1856,6 @@ static int fcoe_device_notification(struct notifier_block *notifier, struct net_device *netdev = netdev_notifier_info_to_dev(ptr); struct fcoe_ctlr *ctlr; struct fcoe_interface *fcoe; - struct fcoe_port *port; struct fc_stats *stats; u32 link_possible = 1; u32 mfs; @@ -1897,7 +1893,6 @@ static int fcoe_device_notification(struct notifier_block *notifier, break; case NETDEV_UNREGISTER: list_del(&fcoe->list); - port = lport_priv(ctlr->lp); fcoe_vport_remove(lport); mutex_lock(&fcoe_config_mutex); fcoe_if_destroy(lport); diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c index cc461fd7bef1..5b3534b0deda 100644 --- a/drivers/scsi/fnic/fnic_main.c +++ b/drivers/scsi/fnic/fnic_main.c @@ -115,7 +115,6 @@ static struct scsi_host_template fnic_host_template = { .this_id = -1, .cmd_per_lun = 3, .can_queue = FNIC_DFLT_IO_REQ, - .use_clustering = ENABLE_CLUSTERING, .sg_tablesize = FNIC_MAX_SG_DESC_CNT, .max_sectors = 0xffff, .shost_attrs = fnic_attrs, diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c index 96acfcecd540..cafbcfb85bfa 100644 --- a/drivers/scsi/fnic/fnic_scsi.c +++ b/drivers/scsi/fnic/fnic_scsi.c @@ -2274,7 +2274,7 @@ fnic_scsi_host_start_tag(struct fnic *fnic, struct scsi_cmnd *sc) return SCSI_NO_TAG; sc->tag = sc->request->tag = dummy->tag; - sc->request->special = sc; + sc->host_scribble = (unsigned char *)dummy; return dummy->tag; } @@ -2286,7 +2286,7 @@ fnic_scsi_host_start_tag(struct fnic *fnic, struct scsi_cmnd *sc) static inline void fnic_scsi_host_end_tag(struct fnic *fnic, struct scsi_cmnd *sc) { - struct request *dummy = sc->request->special; + 
struct request *dummy = (struct request *)sc->host_scribble; blk_mq_free_request(dummy); } diff --git a/drivers/scsi/fnic/fnic_trace.c b/drivers/scsi/fnic/fnic_trace.c index 8271785bdb93..bf0fd2aeb92e 100644 --- a/drivers/scsi/fnic/fnic_trace.c +++ b/drivers/scsi/fnic/fnic_trace.c @@ -468,14 +468,13 @@ int fnic_trace_buf_init(void) fnic_max_trace_entries = (trace_max_pages * PAGE_SIZE)/ FNIC_ENTRY_SIZE_BYTES; - fnic_trace_buf_p = (unsigned long)vmalloc((trace_max_pages * PAGE_SIZE)); + fnic_trace_buf_p = (unsigned long)vzalloc(trace_max_pages * PAGE_SIZE); if (!fnic_trace_buf_p) { printk(KERN_ERR PFX "Failed to allocate memory " "for fnic_trace_buf_p\n"); err = -ENOMEM; goto err_fnic_trace_buf_init; } - memset((void *)fnic_trace_buf_p, 0, (trace_max_pages * PAGE_SIZE)); fnic_trace_entries.page_offset = vmalloc(array_size(fnic_max_trace_entries, diff --git a/drivers/scsi/g_NCR5380.c b/drivers/scsi/g_NCR5380.c index fc538181f8df..9cdca0625498 100644 --- a/drivers/scsi/g_NCR5380.c +++ b/drivers/scsi/g_NCR5380.c @@ -700,7 +700,7 @@ static struct scsi_host_template driver_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .cmd_size = NCR5380_CMD_SIZE, .max_sectors = 128, }; diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c index 16709735b546..194c294f9b6c 100644 --- a/drivers/scsi/gdth.c +++ b/drivers/scsi/gdth.c @@ -4680,7 +4680,6 @@ static struct scsi_host_template gdth_template = { .sg_tablesize = GDTH_MAXSG, .cmd_per_lun = GDTH_MAXC_P_L, .unchecked_isa_dma = 1, - .use_clustering = ENABLE_CLUSTERING, .no_write_same = 1, }; diff --git a/drivers/scsi/gvp11.c b/drivers/scsi/gvp11.c index a27fc49ebd3a..d2acd0d826e2 100644 --- a/drivers/scsi/gvp11.c +++ b/drivers/scsi/gvp11.c @@ -184,7 +184,7 @@ static struct scsi_host_template gvp11_scsi_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = CMD_PER_LUN, - .use_clustering = DISABLE_CLUSTERING + .dma_boundary = PAGE_SIZE - 1, }; static int check_wd33c93(struct gvp11_scsiregs *regs) diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h index 0ddb53c8a2e2..af291947a54d 100644 --- a/drivers/scsi/hisi_sas/hisi_sas.h +++ b/drivers/scsi/hisi_sas/hisi_sas.h @@ -69,6 +69,12 @@ #define HISI_SAS_SATA_PROTOCOL_FPDMA 0x8 #define HISI_SAS_SATA_PROTOCOL_ATAPI 0x10 +#define HISI_SAS_DIF_PROT_MASK (SHOST_DIF_TYPE1_PROTECTION | \ + SHOST_DIF_TYPE2_PROTECTION | \ + SHOST_DIF_TYPE3_PROTECTION) + +#define HISI_SAS_PROT_MASK (HISI_SAS_DIF_PROT_MASK) + struct hisi_hba; enum { @@ -211,7 +217,7 @@ struct hisi_sas_slot { /* Do not reorder/change members after here */ void *buf; dma_addr_t buf_dma; - int idx; + u16 idx; }; struct hisi_sas_hw { @@ -268,6 +274,8 @@ struct hisi_hba { struct pci_dev *pci_dev; struct device *dev; + int prot_mask; + void __iomem *regs; void __iomem *sgpio_regs; struct regmap *ctrl; @@ -322,6 +330,8 @@ struct hisi_hba { unsigned long sata_dev_bitmap[BITS_TO_LONGS(HISI_SAS_MAX_DEVICES)]; struct work_struct rst_work; u32 phy_state; + u32 intr_coal_ticks; /* Time of interrupt coalesce in us */ + u32 intr_coal_count; /* Interrupt count to coalesce */ }; /* Generic HW DMA host memory structures */ @@ -468,7 +478,6 @@ extern int hisi_sas_remove(struct platform_device *pdev); extern int hisi_sas_slave_configure(struct scsi_device *sdev); extern int hisi_sas_scan_finished(struct Scsi_Host *shost, unsigned long time); extern void hisi_sas_scan_start(struct Scsi_Host *shost); -extern struct device_attribute *host_attrs[]; 
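/*
 * A minimal sketch of the pattern the new HISI_SAS_PROT_MASK /
 * HISI_SAS_DIF_PROT_MASK macros above feed into: a module-parameter
 * protection mask validated against what the hardware supports, then
 * handed to the SCSI midlayer. Illustration only, under assumptions --
 * my_prot_mask and my_attach_prot() are placeholder names, not code from
 * this series.
 */
#include <linux/module.h>
#include <scsi/scsi_host.h>

static int my_prot_mask;
module_param_named(prot_mask, my_prot_mask, int, 0444);
MODULE_PARM_DESC(prot_mask, "host protection capabilities mask (DIF types 1-3)");

static void my_attach_prot(struct Scsi_Host *shost, unsigned int hw_caps)
{
	/* refuse DIF/DIX types the silicon cannot check */
	if (my_prot_mask & ~hw_caps)
		pr_err("unsupported protection mask 0x%x, using 0x0\n",
		       my_prot_mask);
	else if (my_prot_mask)
		scsi_host_set_prot(shost, my_prot_mask);
}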
extern int hisi_sas_host_reset(struct Scsi_Host *shost, int reset_type); extern void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy); extern void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c index b3f01d5b821b..eed7fc5b3389 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_main.c +++ b/drivers/scsi/hisi_sas/hisi_sas_main.c @@ -296,42 +296,109 @@ static void hisi_sas_task_prep_abort(struct hisi_hba *hisi_hba, device_id, abort_flag, tag_to_abort); } +static void hisi_sas_dma_unmap(struct hisi_hba *hisi_hba, + struct sas_task *task, int n_elem, + int n_elem_req, int n_elem_resp) +{ + struct device *dev = hisi_hba->dev; + + if (!sas_protocol_ata(task->task_proto)) { + if (task->num_scatter) { + if (n_elem) + dma_unmap_sg(dev, task->scatter, + task->num_scatter, + task->data_dir); + } else if (task->task_proto & SAS_PROTOCOL_SMP) { + if (n_elem_req) + dma_unmap_sg(dev, &task->smp_task.smp_req, + 1, DMA_TO_DEVICE); + if (n_elem_resp) + dma_unmap_sg(dev, &task->smp_task.smp_resp, + 1, DMA_FROM_DEVICE); + } + } +} + +static int hisi_sas_dma_map(struct hisi_hba *hisi_hba, + struct sas_task *task, int *n_elem, + int *n_elem_req, int *n_elem_resp) +{ + struct device *dev = hisi_hba->dev; + int rc; + + if (sas_protocol_ata(task->task_proto)) { + *n_elem = task->num_scatter; + } else { + unsigned int req_len, resp_len; + + if (task->num_scatter) { + *n_elem = dma_map_sg(dev, task->scatter, + task->num_scatter, task->data_dir); + if (!*n_elem) { + rc = -ENOMEM; + goto prep_out; + } + } else if (task->task_proto & SAS_PROTOCOL_SMP) { + *n_elem_req = dma_map_sg(dev, &task->smp_task.smp_req, + 1, DMA_TO_DEVICE); + if (!*n_elem_req) { + rc = -ENOMEM; + goto prep_out; + } + req_len = sg_dma_len(&task->smp_task.smp_req); + if (req_len & 0x3) { + rc = -EINVAL; + goto err_out_dma_unmap; + } + *n_elem_resp = dma_map_sg(dev, &task->smp_task.smp_resp, + 1, DMA_FROM_DEVICE); + if (!*n_elem_resp) { + rc = -ENOMEM; + goto err_out_dma_unmap; + } + resp_len = sg_dma_len(&task->smp_task.smp_resp); + if (resp_len & 0x3) { + rc = -EINVAL; + goto err_out_dma_unmap; + } + } + } + + if (*n_elem > HISI_SAS_SGE_PAGE_CNT) { + dev_err(dev, "task prep: n_elem(%d) > HISI_SAS_SGE_PAGE_CNT", + *n_elem); + rc = -EINVAL; + goto err_out_dma_unmap; + } + return 0; + +err_out_dma_unmap: + /* It would be better to call dma_unmap_sg() here, but it's messy */ + hisi_sas_dma_unmap(hisi_hba, task, *n_elem, + *n_elem_req, *n_elem_resp); +prep_out: + return rc; +} + static int hisi_sas_task_prep(struct sas_task *task, struct hisi_sas_dq **dq_pointer, bool is_tmf, struct hisi_sas_tmf_task *tmf, int *pass) { struct domain_device *device = task->dev; - struct hisi_hba *hisi_hba; + struct hisi_hba *hisi_hba = dev_to_hisi_hba(device); struct hisi_sas_device *sas_dev = device->lldd_dev; struct hisi_sas_port *port; struct hisi_sas_slot *slot; struct hisi_sas_cmd_hdr *cmd_hdr_base; struct asd_sas_port *sas_port = device->port; - struct device *dev; + struct device *dev = hisi_hba->dev; int dlvry_queue_slot, dlvry_queue, rc, slot_idx; int n_elem = 0, n_elem_req = 0, n_elem_resp = 0; struct hisi_sas_dq *dq; unsigned long flags; int wr_q_index; - if (!sas_port) { - struct task_status_struct *ts = &task->task_status; - - ts->resp = SAS_TASK_UNDELIVERED; - ts->stat = SAS_PHY_DOWN; - /* - * libsas will use dev->port, should - * not call task_done for sata - */ - if (device->dev_type != SAS_SATA_DEV) - task->task_done(task); - return -ECOMM; - } - - 
hisi_hba = dev_to_hisi_hba(device); - dev = hisi_hba->dev; - if (DEV_IS_GONE(sas_dev)) { if (sas_dev) dev_info(dev, "task prep: device %d not ready\n", @@ -355,49 +422,10 @@ static int hisi_sas_task_prep(struct sas_task *task, return -ECOMM; } - if (!sas_protocol_ata(task->task_proto)) { - unsigned int req_len, resp_len; - - if (task->num_scatter) { - n_elem = dma_map_sg(dev, task->scatter, - task->num_scatter, task->data_dir); - if (!n_elem) { - rc = -ENOMEM; - goto prep_out; - } - } else if (task->task_proto & SAS_PROTOCOL_SMP) { - n_elem_req = dma_map_sg(dev, &task->smp_task.smp_req, - 1, DMA_TO_DEVICE); - if (!n_elem_req) { - rc = -ENOMEM; - goto prep_out; - } - req_len = sg_dma_len(&task->smp_task.smp_req); - if (req_len & 0x3) { - rc = -EINVAL; - goto err_out_dma_unmap; - } - n_elem_resp = dma_map_sg(dev, &task->smp_task.smp_resp, - 1, DMA_FROM_DEVICE); - if (!n_elem_resp) { - rc = -ENOMEM; - goto err_out_dma_unmap; - } - resp_len = sg_dma_len(&task->smp_task.smp_resp); - if (resp_len & 0x3) { - rc = -EINVAL; - goto err_out_dma_unmap; - } - } - } else - n_elem = task->num_scatter; - - if (n_elem > HISI_SAS_SGE_PAGE_CNT) { - dev_err(dev, "task prep: n_elem(%d) > HISI_SAS_SGE_PAGE_CNT", - n_elem); - rc = -EINVAL; - goto err_out_dma_unmap; - } + rc = hisi_sas_dma_map(hisi_hba, task, &n_elem, + &n_elem_req, &n_elem_resp); + if (rc < 0) + goto prep_out; if (hisi_hba->hw->slot_index_alloc) rc = hisi_hba->hw->slot_index_alloc(hisi_hba, device); @@ -482,19 +510,8 @@ static int hisi_sas_task_prep(struct sas_task *task, err_out_tag: hisi_sas_slot_index_free(hisi_hba, slot_idx); err_out_dma_unmap: - if (!sas_protocol_ata(task->task_proto)) { - if (task->num_scatter) { - dma_unmap_sg(dev, task->scatter, task->num_scatter, - task->data_dir); - } else if (task->task_proto & SAS_PROTOCOL_SMP) { - if (n_elem_req) - dma_unmap_sg(dev, &task->smp_task.smp_req, - 1, DMA_TO_DEVICE); - if (n_elem_resp) - dma_unmap_sg(dev, &task->smp_task.smp_resp, - 1, DMA_FROM_DEVICE); - } - } + hisi_sas_dma_unmap(hisi_hba, task, n_elem, + n_elem_req, n_elem_resp); prep_out: dev_err(dev, "task prep: failed[%d]!\n", rc); return rc; @@ -506,10 +523,29 @@ static int hisi_sas_task_exec(struct sas_task *task, gfp_t gfp_flags, u32 rc; u32 pass = 0; unsigned long flags; - struct hisi_hba *hisi_hba = dev_to_hisi_hba(task->dev); - struct device *dev = hisi_hba->dev; + struct hisi_hba *hisi_hba; + struct device *dev; + struct domain_device *device = task->dev; + struct asd_sas_port *sas_port = device->port; struct hisi_sas_dq *dq = NULL; + if (!sas_port) { + struct task_status_struct *ts = &task->task_status; + + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_PHY_DOWN; + /* + * libsas will use dev->port, should + * not call task_done for sata + */ + if (device->dev_type != SAS_SATA_DEV) + task->task_done(task); + return -ECOMM; + } + + hisi_hba = dev_to_hisi_hba(device); + dev = hisi_hba->dev; + if (unlikely(test_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags))) { if (in_softirq()) return -EINVAL; @@ -1459,12 +1495,12 @@ static int hisi_sas_abort_task(struct sas_task *task) if (task->lldd_task && task->task_proto & SAS_PROTOCOL_SSP) { struct scsi_cmnd *cmnd = task->uldd_task; struct hisi_sas_slot *slot = task->lldd_task; - u32 tag = slot->idx; + u16 tag = slot->idx; int rc2; int_to_scsilun(cmnd->device->lun, &lun); tmf_task.tmf = TMF_ABORT_TASK; - tmf_task.tag_of_task_to_be_managed = cpu_to_le16(tag); + tmf_task.tag_of_task_to_be_managed = tag; rc = hisi_sas_debug_issue_ssp_tmf(task->dev, lun.scsi_lun, &tmf_task); @@ -1718,7 
+1754,7 @@ static int hisi_sas_query_task(struct sas_task *task) int_to_scsilun(cmnd->device->lun, &lun); tmf_task.tmf = TMF_QUERY_TASK; - tmf_task.tag_of_task_to_be_managed = cpu_to_le16(tag); + tmf_task.tag_of_task_to_be_managed = tag; rc = hisi_sas_debug_issue_ssp_tmf(device, lun.scsi_lun, @@ -1994,12 +2030,6 @@ EXPORT_SYMBOL_GPL(hisi_sas_kill_tasklets); struct scsi_transport_template *hisi_sas_stt; EXPORT_SYMBOL_GPL(hisi_sas_stt); -struct device_attribute *host_attrs[] = { - &dev_attr_phy_event_threshold, - NULL, -}; -EXPORT_SYMBOL_GPL(host_attrs); - static struct sas_domain_function_template hisi_sas_transport_ops = { .lldd_dev_found = hisi_sas_dev_found, .lldd_dev_gone = hisi_sas_dev_gone, @@ -2380,7 +2410,6 @@ int hisi_sas_probe(struct platform_device *pdev, shost->max_lun = ~0; shost->max_channel = 1; shost->max_cmd_len = 16; - shost->sg_tablesize = min_t(u16, SG_ALL, HISI_SAS_SGE_PAGE_CNT); if (hisi_hba->hw->slot_index_alloc) { shost->can_queue = hisi_hba->hw->max_command_entries; shost->cmd_per_lun = hisi_hba->hw->max_command_entries; diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c index 8df822a4a1bd..28ab52a021cf 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c @@ -510,6 +510,7 @@ static void setup_itct_v1_hw(struct hisi_hba *hisi_hba, struct hisi_sas_itct *itct = &hisi_hba->itct[device_id]; struct asd_sas_port *sas_port = device->port; struct hisi_sas_port *port = to_hisi_sas_port(sas_port); + u64 sas_addr; memset(itct, 0, sizeof(*itct)); @@ -534,8 +535,8 @@ static void setup_itct_v1_hw(struct hisi_hba *hisi_hba, itct->qw0 = cpu_to_le64(qw0); /* qw1 */ - memcpy(&itct->sas_addr, device->sas_addr, SAS_ADDR_SIZE); - itct->sas_addr = __swab64(itct->sas_addr); + memcpy(&sas_addr, device->sas_addr, SAS_ADDR_SIZE); + itct->sas_addr = cpu_to_le64(__swab64(sas_addr)); /* qw2 */ itct->qw2 = cpu_to_le64((500ULL << ITCT_HDR_IT_NEXUS_LOSS_TL_OFF) | @@ -561,7 +562,7 @@ static void clear_itct_v1_hw(struct hisi_hba *hisi_hba, reg_val &= ~CFG_AGING_TIME_ITCT_REL_MSK; hisi_sas_write32(hisi_hba, CFG_AGING_TIME, reg_val); - qw0 = cpu_to_le64(itct->qw0); + qw0 = le64_to_cpu(itct->qw0); qw0 &= ~ITCT_HDR_VALID_MSK; itct->qw0 = cpu_to_le64(qw0); } @@ -1100,7 +1101,7 @@ static void slot_err_v1_hw(struct hisi_hba *hisi_hba, case SAS_PROTOCOL_SSP: { int error = -1; - u32 dma_err_type = cpu_to_le32(err_record->dma_err_type); + u32 dma_err_type = le32_to_cpu(err_record->dma_err_type); u32 dma_tx_err_type = ((dma_err_type & ERR_HDR_DMA_TX_ERR_TYPE_MSK)) >> ERR_HDR_DMA_TX_ERR_TYPE_OFF; @@ -1108,9 +1109,9 @@ static void slot_err_v1_hw(struct hisi_hba *hisi_hba, ERR_HDR_DMA_RX_ERR_TYPE_MSK)) >> ERR_HDR_DMA_RX_ERR_TYPE_OFF; u32 trans_tx_fail_type = - cpu_to_le32(err_record->trans_tx_fail_type); + le32_to_cpu(err_record->trans_tx_fail_type); u32 trans_rx_fail_type = - cpu_to_le32(err_record->trans_rx_fail_type); + le32_to_cpu(err_record->trans_rx_fail_type); if (dma_tx_err_type) { /* dma tx err */ @@ -1558,7 +1559,7 @@ static irqreturn_t cq_interrupt_v1_hw(int irq, void *p) u32 cmplt_hdr_data; complete_hdr = &complete_queue[rd_point]; - cmplt_hdr_data = cpu_to_le32(complete_hdr->data); + cmplt_hdr_data = le32_to_cpu(complete_hdr->data); idx = (cmplt_hdr_data & CMPLT_HDR_IPTT_MSK) >> CMPLT_HDR_IPTT_OFF; slot = &hisi_hba->slot_info[idx]; @@ -1797,6 +1798,11 @@ static int hisi_sas_v1_init(struct hisi_hba *hisi_hba) return 0; } +static struct device_attribute *host_attrs_v1_hw[] = { + &dev_attr_phy_event_threshold, + NULL 
+}; + static struct scsi_host_template sht_v1_hw = { .name = DRV_NAME, .module = THIS_MODULE, @@ -1808,14 +1814,13 @@ static struct scsi_host_template sht_v1_hw = { .change_queue_depth = sas_change_queue_depth, .bios_param = sas_bios_param, .this_id = -1, - .sg_tablesize = SG_ALL, + .sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .max_sectors = SCSI_DEFAULT_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = sas_eh_device_reset_handler, .eh_target_reset_handler = sas_eh_target_reset_handler, .target_destroy = sas_target_destroy, .ioctl = sas_ioctl, - .shost_attrs = host_attrs, + .shost_attrs = host_attrs_v1_hw, }; static const struct hisi_sas_hw hisi_sas_v1_hw = { diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c index 77a85ead483e..c8ebff3ba559 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c @@ -934,6 +934,7 @@ static void setup_itct_v2_hw(struct hisi_hba *hisi_hba, struct domain_device *parent_dev = device->parent; struct asd_sas_port *sas_port = device->port; struct hisi_sas_port *port = to_hisi_sas_port(sas_port); + u64 sas_addr; memset(itct, 0, sizeof(*itct)); @@ -966,8 +967,8 @@ static void setup_itct_v2_hw(struct hisi_hba *hisi_hba, itct->qw0 = cpu_to_le64(qw0); /* qw1 */ - memcpy(&itct->sas_addr, device->sas_addr, SAS_ADDR_SIZE); - itct->sas_addr = __swab64(itct->sas_addr); + memcpy(&sas_addr, device->sas_addr, SAS_ADDR_SIZE); + itct->sas_addr = cpu_to_le64(__swab64(sas_addr)); /* qw2 */ if (!dev_is_sata(device)) @@ -2044,11 +2045,11 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba, struct task_status_struct *ts = &task->task_status; struct hisi_sas_err_record_v2 *err_record = hisi_sas_status_buf_addr_mem(slot); - u32 trans_tx_fail_type = cpu_to_le32(err_record->trans_tx_fail_type); - u32 trans_rx_fail_type = cpu_to_le32(err_record->trans_rx_fail_type); - u16 dma_tx_err_type = cpu_to_le16(err_record->dma_tx_err_type); - u16 sipc_rx_err_type = cpu_to_le16(err_record->sipc_rx_err_type); - u32 dma_rx_err_type = cpu_to_le32(err_record->dma_rx_err_type); + u32 trans_tx_fail_type = le32_to_cpu(err_record->trans_tx_fail_type); + u32 trans_rx_fail_type = le32_to_cpu(err_record->trans_rx_fail_type); + u16 dma_tx_err_type = le16_to_cpu(err_record->dma_tx_err_type); + u16 sipc_rx_err_type = le16_to_cpu(err_record->sipc_rx_err_type); + u32 dma_rx_err_type = le32_to_cpu(err_record->dma_rx_err_type); int error = -1; if (err_phase == 1) { @@ -2059,8 +2060,7 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba, trans_tx_fail_type); } else if (err_phase == 2) { /* error in RX phase, the priority is: DW1 > DW3 > DW2 */ - error = parse_trans_rx_err_code_v2_hw( - trans_rx_fail_type); + error = parse_trans_rx_err_code_v2_hw(trans_rx_fail_type); if (error == -1) { error = parse_dma_rx_err_code_v2_hw( dma_rx_err_type); @@ -2358,6 +2358,7 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) &complete_queue[slot->cmplt_queue_slot]; unsigned long flags; bool is_internal = slot->is_internal; + u32 dw0; if (unlikely(!task || !task->lldd_task || !task->dev)) return -EINVAL; @@ -2382,8 +2383,9 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) } /* Use SAS+TMF status codes */ - switch ((complete_hdr->dw0 & CMPLT_HDR_ABORT_STAT_MSK) - >> CMPLT_HDR_ABORT_STAT_OFF) { + dw0 = le32_to_cpu(complete_hdr->dw0); + switch ((dw0 & CMPLT_HDR_ABORT_STAT_MSK) >> + CMPLT_HDR_ABORT_STAT_OFF) { case STAT_IO_ABORTED: /* this io has been aborted by abort 
command */ ts->stat = SAS_ABORTED_TASK; @@ -2408,9 +2410,8 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) break; } - if ((complete_hdr->dw0 & CMPLT_HDR_ERX_MSK) && - (!(complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK))) { - u32 err_phase = (complete_hdr->dw0 & CMPLT_HDR_ERR_PHASE_MSK) + if ((dw0 & CMPLT_HDR_ERX_MSK) && (!(dw0 & CMPLT_HDR_RSPNS_XFRD_MSK))) { + u32 err_phase = (dw0 & CMPLT_HDR_ERR_PHASE_MSK) >> CMPLT_HDR_ERR_PHASE_OFF; u32 *error_info = hisi_sas_status_buf_addr_mem(slot); @@ -2526,22 +2527,23 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_tmf_task *tmf = slot->tmf; u8 *buf_cmd; int has_data = 0, hdr_tag = 0; - u32 dw1 = 0, dw2 = 0; + u32 dw0, dw1 = 0, dw2 = 0; /* create header */ /* dw0 */ - hdr->dw0 = cpu_to_le32(port->id << CMD_HDR_PORT_OFF); + dw0 = port->id << CMD_HDR_PORT_OFF; if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) - hdr->dw0 |= cpu_to_le32(3 << CMD_HDR_CMD_OFF); + dw0 |= 3 << CMD_HDR_CMD_OFF; else - hdr->dw0 |= cpu_to_le32(4 << CMD_HDR_CMD_OFF); + dw0 |= 4 << CMD_HDR_CMD_OFF; if (tmf && tmf->force_phy) { - hdr->dw0 |= CMD_HDR_FORCE_PHY_MSK; - hdr->dw0 |= cpu_to_le32((1 << tmf->phy_id) - << CMD_HDR_PHY_ID_OFF); + dw0 |= CMD_HDR_FORCE_PHY_MSK; + dw0 |= (1 << tmf->phy_id) << CMD_HDR_PHY_ID_OFF; } + hdr->dw0 = cpu_to_le32(dw0); + /* dw1 */ switch (task->data_dir) { case DMA_TO_DEVICE: @@ -3152,20 +3154,24 @@ static void cq_tasklet_v2_hw(unsigned long val) /* Check for NCQ completion */ if (complete_hdr->act) { - u32 act_tmp = complete_hdr->act; + u32 act_tmp = le32_to_cpu(complete_hdr->act); int ncq_tag_count = ffs(act_tmp); + u32 dw1 = le32_to_cpu(complete_hdr->dw1); - dev_id = (complete_hdr->dw1 & CMPLT_HDR_DEV_ID_MSK) >> + dev_id = (dw1 & CMPLT_HDR_DEV_ID_MSK) >> CMPLT_HDR_DEV_ID_OFF; itct = &hisi_hba->itct[dev_id]; /* The NCQ tags are held in the itct header */ while (ncq_tag_count) { - __le64 *ncq_tag = &itct->qw4_15[0]; + __le64 *_ncq_tag = &itct->qw4_15[0], __ncq_tag; + u64 ncq_tag; - ncq_tag_count -= 1; - iptt = (ncq_tag[ncq_tag_count / 5] - >> (ncq_tag_count % 5) * 12) & 0xfff; + ncq_tag_count--; + __ncq_tag = _ncq_tag[ncq_tag_count / 5]; + ncq_tag = le64_to_cpu(__ncq_tag); + iptt = (ncq_tag >> (ncq_tag_count % 5) * 12) & + 0xfff; slot = &hisi_hba->slot_info[iptt]; slot->cmplt_queue_slot = rd_point; @@ -3176,7 +3182,9 @@ static void cq_tasklet_v2_hw(unsigned long val) ncq_tag_count = ffs(act_tmp); } } else { - iptt = (complete_hdr->dw1) & CMPLT_HDR_IPTT_MSK; + u32 dw1 = le32_to_cpu(complete_hdr->dw1); + + iptt = dw1 & CMPLT_HDR_IPTT_MSK; slot = &hisi_hba->slot_info[iptt]; slot->cmplt_queue_slot = rd_point; slot->cmplt_queue = queue; @@ -3552,6 +3560,11 @@ static void wait_cmds_complete_timeout_v2_hw(struct hisi_hba *hisi_hba, dev_dbg(dev, "wait commands complete %dms\n", time); } +static struct device_attribute *host_attrs_v2_hw[] = { + &dev_attr_phy_event_threshold, + NULL +}; + static struct scsi_host_template sht_v2_hw = { .name = DRV_NAME, .module = THIS_MODULE, @@ -3563,14 +3576,13 @@ static struct scsi_host_template sht_v2_hw = { .change_queue_depth = sas_change_queue_depth, .bios_param = sas_bios_param, .this_id = -1, - .sg_tablesize = SG_ALL, + .sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .max_sectors = SCSI_DEFAULT_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = sas_eh_device_reset_handler, .eh_target_reset_handler = sas_eh_target_reset_handler, .target_destroy = sas_target_destroy, .ioctl = sas_ioctl, - .shost_attrs = host_attrs, + .shost_attrs = 
host_attrs_v2_hw, }; static const struct hisi_sas_hw hisi_sas_v2_hw = { diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c index a369450a1fa7..e2420a810e99 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c @@ -42,6 +42,7 @@ #define MAX_CON_TIME_LIMIT_TIME 0xa4 #define BUS_INACTIVE_LIMIT_TIME 0xa8 #define REJECT_TO_OPEN_LIMIT_TIME 0xac +#define CQ_INT_CONVERGE_EN 0xb0 #define CFG_AGING_TIME 0xbc #define HGC_DFX_CFG2 0xc0 #define CFG_ABT_SET_QUERY_IPTT 0xd4 @@ -126,6 +127,8 @@ #define PHY_CTRL (PORT_BASE + 0x14) #define PHY_CTRL_RESET_OFF 0 #define PHY_CTRL_RESET_MSK (0x1 << PHY_CTRL_RESET_OFF) +#define CMD_HDR_PIR_OFF 8 +#define CMD_HDR_PIR_MSK (0x1 << CMD_HDR_PIR_OFF) #define SL_CFG (PORT_BASE + 0x84) #define AIP_LIMIT (PORT_BASE + 0x90) #define SL_CONTROL (PORT_BASE + 0x94) @@ -332,6 +335,16 @@ #define ITCT_HDR_RTOLT_OFF 48 #define ITCT_HDR_RTOLT_MSK (0xffffULL << ITCT_HDR_RTOLT_OFF) +struct hisi_sas_protect_iu_v3_hw { + u32 dw0; + u32 lbrtcv; + u32 lbrtgv; + u32 dw3; + u32 dw4; + u32 dw5; + u32 rsv; +}; + struct hisi_sas_complete_v3_hdr { __le32 dw0; __le32 dw1; @@ -371,6 +384,28 @@ struct hisi_sas_err_record_v3 { ((fis.command == ATA_CMD_DEV_RESET) && \ ((fis.control & ATA_SRST) != 0))) +#define T10_INSRT_EN_OFF 0 +#define T10_INSRT_EN_MSK (1 << T10_INSRT_EN_OFF) +#define T10_RMV_EN_OFF 1 +#define T10_RMV_EN_MSK (1 << T10_RMV_EN_OFF) +#define T10_RPLC_EN_OFF 2 +#define T10_RPLC_EN_MSK (1 << T10_RPLC_EN_OFF) +#define T10_CHK_EN_OFF 3 +#define T10_CHK_EN_MSK (1 << T10_CHK_EN_OFF) +#define INCR_LBRT_OFF 5 +#define INCR_LBRT_MSK (1 << INCR_LBRT_OFF) +#define USR_DATA_BLOCK_SZ_OFF 20 +#define USR_DATA_BLOCK_SZ_MSK (0x3 << USR_DATA_BLOCK_SZ_OFF) +#define T10_CHK_MSK_OFF 16 + +static bool hisi_sas_intr_conv; +MODULE_PARM_DESC(intr_conv, "interrupt converge enable (0-1)"); + +/* permit overriding the host protection capabilities mask (EEDP/T10 PI) */ +static int prot_mask; +module_param(prot_mask, int, 0); +MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=0x0 "); + static u32 hisi_sas_read32(struct hisi_hba *hisi_hba, u32 off) { void __iomem *regs = hisi_hba->regs + off; @@ -436,6 +471,8 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba) hisi_sas_write32(hisi_hba, INT_COAL_EN, 0x1); hisi_sas_write32(hisi_hba, OQ_INT_COAL_TIME, 0x1); hisi_sas_write32(hisi_hba, OQ_INT_COAL_CNT, 0x1); + hisi_sas_write32(hisi_hba, CQ_INT_CONVERGE_EN, + hisi_sas_intr_conv); hisi_sas_write32(hisi_hba, OQ_INT_SRC, 0xffff); hisi_sas_write32(hisi_hba, ENT_INT_SRC1, 0xffffffff); hisi_sas_write32(hisi_hba, ENT_INT_SRC2, 0xffffffff); @@ -494,7 +531,7 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba) hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_OOB_RESTART_MSK, 0x1); hisi_sas_phy_write32(hisi_hba, i, STP_LINK_TIMER, 0x7f7a120); hisi_sas_phy_write32(hisi_hba, i, CON_CFG_DRIVER, 0x2a0a01); - + hisi_sas_phy_write32(hisi_hba, i, SAS_SSP_CON_TIMER_CFG, 0x32); /* used for 12G negotiate */ hisi_sas_phy_write32(hisi_hba, i, COARSETUNE_TIME, 0x1e); hisi_sas_phy_write32(hisi_hba, i, AIP_LIMIT, 0x2ffff); @@ -622,6 +659,7 @@ static void setup_itct_v3_hw(struct hisi_hba *hisi_hba, struct domain_device *parent_dev = device->parent; struct asd_sas_port *sas_port = device->port; struct hisi_sas_port *port = to_hisi_sas_port(sas_port); + u64 sas_addr; memset(itct, 0, sizeof(*itct)); @@ -654,8 +692,8 @@ static void setup_itct_v3_hw(struct hisi_hba *hisi_hba, itct->qw0 = cpu_to_le64(qw0); /* qw1 */ - memcpy(&itct->sas_addr, 
device->sas_addr, SAS_ADDR_SIZE); - itct->sas_addr = __swab64(itct->sas_addr); + memcpy(&sas_addr, device->sas_addr, SAS_ADDR_SIZE); + itct->sas_addr = cpu_to_le64(__swab64(sas_addr)); /* qw2 */ if (!dev_is_sata(device)) @@ -932,6 +970,58 @@ static void prep_prd_sge_v3_hw(struct hisi_hba *hisi_hba, hdr->sg_len = cpu_to_le32(n_elem << CMD_HDR_DATA_SGL_LEN_OFF); } +static u32 get_prot_chk_msk_v3_hw(struct scsi_cmnd *scsi_cmnd) +{ + unsigned char prot_flags = scsi_cmnd->prot_flags; + + if (prot_flags & SCSI_PROT_TRANSFER_PI) { + if (prot_flags & SCSI_PROT_REF_CHECK) + return 0xc << 16; + return 0xfc << 16; + } + return 0; +} + +static void fill_prot_v3_hw(struct scsi_cmnd *scsi_cmnd, + struct hisi_sas_protect_iu_v3_hw *prot) +{ + unsigned char prot_op = scsi_get_prot_op(scsi_cmnd); + unsigned int interval = scsi_prot_interval(scsi_cmnd); + u32 lbrt_chk_val = t10_pi_ref_tag(scsi_cmnd->request); + + switch (prot_op) { + case SCSI_PROT_READ_STRIP: + prot->dw0 |= (T10_RMV_EN_MSK | T10_CHK_EN_MSK); + prot->lbrtcv = lbrt_chk_val; + prot->dw4 |= get_prot_chk_msk_v3_hw(scsi_cmnd); + break; + case SCSI_PROT_WRITE_INSERT: + prot->dw0 |= T10_INSRT_EN_MSK; + prot->lbrtgv = lbrt_chk_val; + break; + default: + WARN(1, "prot_op(0x%x) is not valid\n", prot_op); + break; + } + + switch (interval) { + case 512: + break; + case 4096: + prot->dw0 |= (0x1 << USR_DATA_BLOCK_SZ_OFF); + break; + case 520: + prot->dw0 |= (0x2 << USR_DATA_BLOCK_SZ_OFF); + break; + default: + WARN(1, "protection interval (0x%x) invalid\n", + interval); + break; + } + + prot->dw0 |= INCR_LBRT_MSK; +} + static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) { @@ -943,9 +1033,10 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba, struct sas_ssp_task *ssp_task = &task->ssp_task; struct scsi_cmnd *scsi_cmnd = ssp_task->cmd; struct hisi_sas_tmf_task *tmf = slot->tmf; + unsigned char prot_op = scsi_get_prot_op(scsi_cmnd); int has_data = 0, priority = !!tmf; u8 *buf_cmd; - u32 dw1 = 0, dw2 = 0; + u32 dw1 = 0, dw2 = 0, len = 0; hdr->dw0 = cpu_to_le32((1 << CMD_HDR_RESP_REPORT_OFF) | (2 << CMD_HDR_TLR_CTRL_OFF) | @@ -975,7 +1066,6 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba, /* map itct entry */ dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF; - hdr->dw1 = cpu_to_le32(dw1); dw2 = (((sizeof(struct ssp_command_iu) + sizeof(struct ssp_frame_hdr) + 3) / 4) << CMD_HDR_CFL_OFF) | @@ -988,7 +1078,6 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba, prep_prd_sge_v3_hw(hisi_hba, slot, hdr, task->scatter, slot->n_elem); - hdr->data_transfer_len = cpu_to_le32(task->total_xfer_len); hdr->cmd_table_addr = cpu_to_le64(hisi_sas_cmd_hdr_addr_dma(slot)); hdr->sts_buffer_addr = cpu_to_le64(hisi_sas_status_buf_addr_dma(slot)); @@ -1013,6 +1102,38 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba, break; } } + + if (has_data && (prot_op != SCSI_PROT_NORMAL)) { + struct hisi_sas_protect_iu_v3_hw prot; + u8 *buf_cmd_prot; + + hdr->dw7 |= cpu_to_le32(1 << CMD_HDR_ADDR_MODE_SEL_OFF); + dw1 |= CMD_HDR_PIR_MSK; + buf_cmd_prot = hisi_sas_cmd_hdr_addr_mem(slot) + + sizeof(struct ssp_frame_hdr) + + sizeof(struct ssp_command_iu); + + memset(&prot, 0, sizeof(struct hisi_sas_protect_iu_v3_hw)); + fill_prot_v3_hw(scsi_cmnd, &prot); + memcpy(buf_cmd_prot, &prot, + sizeof(struct hisi_sas_protect_iu_v3_hw)); + + /* + * For READ, we need length of info read to memory, while for + * WRITE we need length of data written to the disk. 
+ */ + if (prot_op == SCSI_PROT_WRITE_INSERT) { + unsigned int interval = scsi_prot_interval(scsi_cmnd); + unsigned int ilog2_interval = ilog2(interval); + + len = (task->total_xfer_len >> ilog2_interval) * 8; + } + + } + + hdr->dw1 = cpu_to_le32(dw1); + + hdr->data_transfer_len = cpu_to_le32(task->total_xfer_len + len); } static void prep_smp_v3_hw(struct hisi_hba *hisi_hba, @@ -1584,15 +1705,16 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task, &complete_queue[slot->cmplt_queue_slot]; struct hisi_sas_err_record_v3 *record = hisi_sas_status_buf_addr_mem(slot); - u32 dma_rx_err_type = record->dma_rx_err_type; - u32 trans_tx_fail_type = record->trans_tx_fail_type; + u32 dma_rx_err_type = le32_to_cpu(record->dma_rx_err_type); + u32 trans_tx_fail_type = le32_to_cpu(record->trans_tx_fail_type); + u32 dw3 = le32_to_cpu(complete_hdr->dw3); switch (task->task_proto) { case SAS_PROTOCOL_SSP: if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) { ts->residual = trans_tx_fail_type; ts->stat = SAS_DATA_UNDERRUN; - } else if (complete_hdr->dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) { + } else if (dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) { ts->stat = SAS_QUEUE_FULL; slot->abort = 1; } else { @@ -1606,7 +1728,7 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task, if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) { ts->residual = trans_tx_fail_type; ts->stat = SAS_DATA_UNDERRUN; - } else if (complete_hdr->dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) { + } else if (dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) { ts->stat = SAS_PHY_DOWN; slot->abort = 1; } else { @@ -1639,6 +1761,7 @@ slot_complete_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) &complete_queue[slot->cmplt_queue_slot]; unsigned long flags; bool is_internal = slot->is_internal; + u32 dw0, dw1, dw3; if (unlikely(!task || !task->lldd_task || !task->dev)) return -EINVAL; @@ -1662,11 +1785,14 @@ slot_complete_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) goto out; } + dw0 = le32_to_cpu(complete_hdr->dw0); + dw1 = le32_to_cpu(complete_hdr->dw1); + dw3 = le32_to_cpu(complete_hdr->dw3); + /* * Use SAS+TMF status codes */ - switch ((complete_hdr->dw0 & CMPLT_HDR_ABORT_STAT_MSK) - >> CMPLT_HDR_ABORT_STAT_OFF) { + switch ((dw0 & CMPLT_HDR_ABORT_STAT_MSK) >> CMPLT_HDR_ABORT_STAT_OFF) { case STAT_IO_ABORTED: /* this IO has been aborted by abort command */ ts->stat = SAS_ABORTED_TASK; @@ -1689,7 +1815,7 @@ slot_complete_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) } /* check for erroneous completion */ - if ((complete_hdr->dw0 & CMPLT_HDR_CMPLT_MSK) == 0x3) { + if ((dw0 & CMPLT_HDR_CMPLT_MSK) == 0x3) { u32 *error_info = hisi_sas_status_buf_addr_mem(slot); slot_err_v3_hw(hisi_hba, task, slot); @@ -1698,8 +1824,7 @@ slot_complete_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) "CQ hdr: 0x%x 0x%x 0x%x 0x%x " "Error info: 0x%x 0x%x 0x%x 0x%x\n", slot->idx, task, sas_dev->device_id, - complete_hdr->dw0, complete_hdr->dw1, - complete_hdr->act, complete_hdr->dw3, + dw0, dw1, complete_hdr->act, dw3, error_info[0], error_info[1], error_info[2], error_info[3]); if (unlikely(slot->abort)) @@ -1797,11 +1922,13 @@ static void cq_tasklet_v3_hw(unsigned long val) while (rd_point != wr_point) { struct hisi_sas_complete_v3_hdr *complete_hdr; struct device *dev = hisi_hba->dev; + u32 dw1; int iptt; complete_hdr = &complete_queue[rd_point]; + dw1 = le32_to_cpu(complete_hdr->dw1); - iptt = (complete_hdr->dw1) & CMPLT_HDR_IPTT_MSK; + iptt = dw1 & CMPLT_HDR_IPTT_MSK; if (likely(iptt < HISI_SAS_COMMAND_ENTRIES_V3_HW)) { slot = 
&hisi_hba->slot_info[iptt]; slot->cmplt_queue_slot = rd_point; @@ -1878,10 +2005,12 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba) for (i = 0; i < hisi_hba->queue_count; i++) { struct hisi_sas_cq *cq = &hisi_hba->cq[i]; struct tasklet_struct *t = &cq->tasklet; + int nr = hisi_sas_intr_conv ? 16 : 16 + i; + unsigned long irqflags = hisi_sas_intr_conv ? IRQF_SHARED : 0; - rc = devm_request_irq(dev, pci_irq_vector(pdev, i+16), - cq_interrupt_v3_hw, 0, - DRV_NAME " cq", cq); + rc = devm_request_irq(dev, pci_irq_vector(pdev, nr), + cq_interrupt_v3_hw, irqflags, + DRV_NAME " cq", cq); if (rc) { dev_err(dev, "could not request cq%d interrupt, rc=%d\n", @@ -1898,8 +2027,9 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba) free_cq_irqs: for (k = 0; k < i; k++) { struct hisi_sas_cq *cq = &hisi_hba->cq[k]; + int nr = hisi_sas_intr_conv ? 16 : 16 + k; - free_irq(pci_irq_vector(pdev, k+16), cq); + free_irq(pci_irq_vector(pdev, nr), cq); } free_irq(pci_irq_vector(pdev, 11), hisi_hba); free_chnl_interrupt: @@ -2089,6 +2219,119 @@ static void wait_cmds_complete_timeout_v3_hw(struct hisi_hba *hisi_hba, dev_dbg(dev, "wait commands complete %dms\n", time); } +static ssize_t intr_conv_v3_hw_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return scnprintf(buf, PAGE_SIZE, "%u\n", hisi_sas_intr_conv); +} +static DEVICE_ATTR_RO(intr_conv_v3_hw); + +static void config_intr_coal_v3_hw(struct hisi_hba *hisi_hba) +{ + /* config those registers between enable and disable PHYs */ + hisi_sas_stop_phys(hisi_hba); + + if (hisi_hba->intr_coal_ticks == 0 || + hisi_hba->intr_coal_count == 0) { + hisi_sas_write32(hisi_hba, INT_COAL_EN, 0x1); + hisi_sas_write32(hisi_hba, OQ_INT_COAL_TIME, 0x1); + hisi_sas_write32(hisi_hba, OQ_INT_COAL_CNT, 0x1); + } else { + hisi_sas_write32(hisi_hba, INT_COAL_EN, 0x3); + hisi_sas_write32(hisi_hba, OQ_INT_COAL_TIME, + hisi_hba->intr_coal_ticks); + hisi_sas_write32(hisi_hba, OQ_INT_COAL_CNT, + hisi_hba->intr_coal_count); + } + phys_init_v3_hw(hisi_hba); +} + +static ssize_t intr_coal_ticks_v3_hw_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct hisi_hba *hisi_hba = shost_priv(shost); + + return scnprintf(buf, PAGE_SIZE, "%u\n", + hisi_hba->intr_coal_ticks); +} + +static ssize_t intr_coal_ticks_v3_hw_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct hisi_hba *hisi_hba = shost_priv(shost); + u32 intr_coal_ticks; + int ret; + + ret = kstrtou32(buf, 10, &intr_coal_ticks); + if (ret) { + dev_err(dev, "Input data of interrupt coalesce unmatch\n"); + return -EINVAL; + } + + if (intr_coal_ticks >= BIT(24)) { + dev_err(dev, "intr_coal_ticks must be less than 2^24!\n"); + return -EINVAL; + } + + hisi_hba->intr_coal_ticks = intr_coal_ticks; + + config_intr_coal_v3_hw(hisi_hba); + + return count; +} +static DEVICE_ATTR_RW(intr_coal_ticks_v3_hw); + +static ssize_t intr_coal_count_v3_hw_show(struct device *dev, + struct device_attribute + *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct hisi_hba *hisi_hba = shost_priv(shost); + + return scnprintf(buf, PAGE_SIZE, "%u\n", + hisi_hba->intr_coal_count); +} + +static ssize_t intr_coal_count_v3_hw_store(struct device *dev, + struct device_attribute + *attr, const char *buf, size_t count) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct hisi_hba *hisi_hba = shost_priv(shost); + u32 
intr_coal_count; + int ret; + + ret = kstrtou32(buf, 10, &intr_coal_count); + if (ret) { + dev_err(dev, "Input data of interrupt coalesce unmatch\n"); + return -EINVAL; + } + + if (intr_coal_count >= BIT(8)) { + dev_err(dev, "intr_coal_count must be less than 2^8!\n"); + return -EINVAL; + } + + hisi_hba->intr_coal_count = intr_coal_count; + + config_intr_coal_v3_hw(hisi_hba); + + return count; +} +static DEVICE_ATTR_RW(intr_coal_count_v3_hw); + +static struct device_attribute *host_attrs_v3_hw[] = { + &dev_attr_phy_event_threshold, + &dev_attr_intr_conv_v3_hw, + &dev_attr_intr_coal_ticks_v3_hw, + &dev_attr_intr_coal_count_v3_hw, + NULL +}; + static struct scsi_host_template sht_v3_hw = { .name = DRV_NAME, .module = THIS_MODULE, @@ -2100,14 +2343,13 @@ static struct scsi_host_template sht_v3_hw = { .change_queue_depth = sas_change_queue_depth, .bios_param = sas_bios_param, .this_id = -1, - .sg_tablesize = SG_ALL, + .sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .max_sectors = SCSI_DEFAULT_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = sas_eh_device_reset_handler, .eh_target_reset_handler = sas_eh_target_reset_handler, .target_destroy = sas_target_destroy, .ioctl = sas_ioctl, - .shost_attrs = host_attrs, + .shost_attrs = host_attrs_v3_hw, .tag_alloc_policy = BLK_TAG_ALLOC_RR, }; @@ -2161,6 +2403,12 @@ hisi_sas_shost_alloc_pci(struct pci_dev *pdev) hisi_hba->shost = shost; SHOST_TO_SAS_HA(shost) = &hisi_hba->sha; + if (prot_mask & ~HISI_SAS_PROT_MASK) + dev_err(dev, "unsupported protection mask 0x%x, using default (0x0)\n", + prot_mask); + else + hisi_hba->prot_mask = prot_mask; + timer_setup(&hisi_hba->timer, NULL, 0); if (hisi_sas_get_fw_info(hisi_hba) < 0) @@ -2199,14 +2447,11 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) goto err_out_disable_device; - if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) || - (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0)) { - if ((pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) || - (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) != 0)) { - dev_err(dev, "No usable DMA addressing method\n"); - rc = -EIO; - goto err_out_regions; - } + if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { + dev_err(dev, "No usable DMA addressing method\n"); + rc = -EIO; + goto err_out_regions; } shost = hisi_sas_shost_alloc_pci(pdev); @@ -2245,7 +2490,6 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id) shost->max_lun = ~0; shost->max_channel = 1; shost->max_cmd_len = 16; - shost->sg_tablesize = min_t(u16, SG_ALL, HISI_SAS_SGE_PAGE_CNT); shost->can_queue = hisi_hba->hw->max_command_entries - HISI_SAS_RESERVED_IPTT_CNT; shost->cmd_per_lun = hisi_hba->hw->max_command_entries - @@ -2275,6 +2519,12 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) goto err_out_register_ha; + if (hisi_hba->prot_mask) { + dev_info(dev, "Registering for DIF/DIX prot_mask=0x%x\n", + prot_mask); + scsi_host_set_prot(hisi_hba->shost, prot_mask); + } + scsi_scan_host(shost); return 0; @@ -2301,8 +2551,9 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba) free_irq(pci_irq_vector(pdev, 11), hisi_hba); for (i = 0; i < hisi_hba->queue_count; i++) { struct hisi_sas_cq *cq = &hisi_hba->cq[i]; + int nr = hisi_sas_intr_conv ? 
16 : 16 + i; - free_irq(pci_irq_vector(pdev, i+16), cq); + free_irq(pci_irq_vector(pdev, nr), cq); } pci_free_irq_vectors(pdev); } @@ -2529,7 +2780,7 @@ static int hisi_sas_v3_suspend(struct pci_dev *pdev, pm_message_t state) struct hisi_hba *hisi_hba = sha->lldd_ha; struct device *dev = hisi_hba->dev; struct Scsi_Host *shost = hisi_hba->shost; - u32 device_state; + pci_power_t device_state; int rc; if (!pdev->pm_cap) { @@ -2575,7 +2826,7 @@ static int hisi_sas_v3_resume(struct pci_dev *pdev) struct Scsi_Host *shost = hisi_hba->shost; struct device *dev = hisi_hba->dev; unsigned int rc; - u32 device_state = pdev->current_state; + pci_power_t device_state = pdev->current_state; dev_warn(dev, "resuming from operating state [D%d]\n", device_state); @@ -2624,6 +2875,7 @@ static struct pci_driver sas_v3_pci_driver = { }; module_pci_driver(sas_v3_pci_driver); +module_param_named(intr_conv, hisi_sas_intr_conv, bool, 0444); MODULE_LICENSE("GPL"); MODULE_AUTHOR("John Garry "); diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c index ea4b0bb0c1cd..eaf329db3973 100644 --- a/drivers/scsi/hosts.c +++ b/drivers/scsi/hosts.c @@ -222,18 +222,9 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev, if (error) goto fail; - if (shost_use_blk_mq(shost)) { - error = scsi_mq_setup_tags(shost); - if (error) - goto fail; - } else { - shost->bqt = blk_init_tags(shost->can_queue, - shost->hostt->tag_alloc_policy); - if (!shost->bqt) { - error = -ENOMEM; - goto fail; - } - } + error = scsi_mq_setup_tags(shost); + if (error) + goto fail; if (!shost->shost_gendev.parent) shost->shost_gendev.parent = dev ? dev : &platform_bus; @@ -309,8 +300,7 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev, pm_runtime_disable(&shost->shost_gendev); pm_runtime_set_suspended(&shost->shost_gendev); pm_runtime_put_noidle(&shost->shost_gendev); - if (shost_use_blk_mq(shost)) - scsi_mq_destroy_tags(shost); + scsi_mq_destroy_tags(shost); fail: return error; } @@ -344,13 +334,8 @@ static void scsi_host_dev_release(struct device *dev) kfree(dev_name(&shost->shost_dev)); } - if (shost_use_blk_mq(shost)) { - if (shost->tag_set.tags) - scsi_mq_destroy_tags(shost); - } else { - if (shost->bqt) - blk_free_tags(shost->bqt); - } + if (shost->tag_set.tags) + scsi_mq_destroy_tags(shost); kfree(shost->shost_data); @@ -431,7 +416,6 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize) shost->sg_prot_tablesize = sht->sg_prot_tablesize; shost->cmd_per_lun = sht->cmd_per_lun; shost->unchecked_isa_dma = sht->unchecked_isa_dma; - shost->use_clustering = sht->use_clustering; shost->no_write_same = sht->no_write_same; if (shost_eh_deadline == -1 || !sht->eh_host_reset_handler) @@ -464,6 +448,11 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize) else shost->max_sectors = SCSI_DEFAULT_MAX_SECTORS; + if (sht->max_segment_size) + shost->max_segment_size = sht->max_segment_size; + else + shost->max_segment_size = BLK_MAX_SEGMENT_SIZE; + /* * assume a 4GB boundary, if not set */ @@ -472,8 +461,6 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize) else shost->dma_boundary = 0xffffffff; - shost->use_blk_mq = scsi_use_blk_mq || shost->hostt->force_blk_mq; - device_initialize(&shost->shost_gendev); dev_set_name(&shost->shost_gendev, "host%d", shost->host_no); shost->shost_gendev.bus = &scsi_bus_type; diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c index c9cccf35e9d7..ff67ef5d5347 100644 --- a/drivers/scsi/hpsa.c +++ 
b/drivers/scsi/hpsa.c @@ -965,7 +965,6 @@ static struct scsi_host_template hpsa_driver_template = { .scan_finished = hpsa_scan_finished, .change_queue_depth = hpsa_change_queue_depth, .this_id = -1, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = hpsa_eh_device_reset_handler, .ioctl = hpsa_ioctl, .slave_alloc = hpsa_slave_alloc, @@ -4663,6 +4662,7 @@ static int fixup_ioaccel_cdb(u8 *cdb, int *cdb_len) case WRITE_6: case WRITE_12: is_write = 1; + /* fall through */ case READ_6: case READ_12: if (*cdb_len == 6) { @@ -5093,6 +5093,7 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h, switch (cmd->cmnd[0]) { case WRITE_6: is_write = 1; + /* fall through */ case READ_6: first_block = (((cmd->cmnd[1] & 0x1F) << 16) | (cmd->cmnd[2] << 8) | @@ -5103,6 +5104,7 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h, break; case WRITE_10: is_write = 1; + /* fall through */ case READ_10: first_block = (((u64) cmd->cmnd[2]) << 24) | @@ -5115,6 +5117,7 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h, break; case WRITE_12: is_write = 1; + /* fall through */ case READ_12: first_block = (((u64) cmd->cmnd[2]) << 24) | @@ -5129,6 +5132,7 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h, break; case WRITE_16: is_write = 1; + /* fall through */ case READ_16: first_block = (((u64) cmd->cmnd[2]) << 56) | diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c index 2fad7f03aa02..3eedfd4f8f57 100644 --- a/drivers/scsi/hptiop.c +++ b/drivers/scsi/hptiop.c @@ -1180,7 +1180,6 @@ static struct scsi_host_template driver_template = { .eh_host_reset_handler = hptiop_reset, .info = hptiop_info, .emulated = 0, - .use_clustering = ENABLE_CLUSTERING, .proc_name = driver_name, .shost_attrs = hptiop_attrs, .slave_configure = hptiop_slave_config, @@ -1309,11 +1308,11 @@ static int hptiop_probe(struct pci_dev *pcidev, const struct pci_device_id *id) /* Enable 64bit DMA if possible */ iop_ops = (struct hptiop_adapter_ops *)id->driver_data; - if (pci_set_dma_mask(pcidev, DMA_BIT_MASK(iop_ops->hw_dma_bit_mask))) { - if (pci_set_dma_mask(pcidev, DMA_BIT_MASK(32))) { - printk(KERN_ERR "hptiop: fail to set dma_mask\n"); - goto disable_pci_device; - } + if (dma_set_mask(&pcidev->dev, + DMA_BIT_MASK(iop_ops->hw_dma_bit_mask)) || + dma_set_mask(&pcidev->dev, DMA_BIT_MASK(32))) { + printk(KERN_ERR "hptiop: fail to set dma_mask\n"); + goto disable_pci_device; } if (pci_request_regions(pcidev, driver_name)) { diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index b64ca977825d..dbaa4f131433 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -3100,7 +3100,6 @@ static struct scsi_host_template driver_template = { .this_id = -1, .sg_tablesize = SG_ALL, .max_sectors = IBMVFC_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = ibmvfc_attrs, .track_queue_depth = 1, }; diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c index 9df8a1a2299c..1135e74646e2 100644 --- a/drivers/scsi/ibmvscsi/ibmvscsi.c +++ b/drivers/scsi/ibmvscsi/ibmvscsi.c @@ -2079,7 +2079,6 @@ static struct scsi_host_template driver_template = { .can_queue = IBMVSCSI_MAX_REQUESTS_DEFAULT, .this_id = -1, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = ibmvscsi_attrs, }; diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c index e63aadd10dfd..cc9cae469c4b 100644 --- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c +++ 
b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c @@ -3695,11 +3695,6 @@ static int ibmvscsis_get_system_info(void) return 0; } -static char *ibmvscsis_get_fabric_name(void) -{ - return "ibmvscsis"; -} - static char *ibmvscsis_get_fabric_wwn(struct se_portal_group *se_tpg) { struct ibmvscsis_tport *tport = @@ -4044,9 +4039,8 @@ static struct configfs_attribute *ibmvscsis_tpg_attrs[] = { static const struct target_core_fabric_ops ibmvscsis_ops = { .module = THIS_MODULE, - .name = "ibmvscsis", + .fabric_name = "ibmvscsis", .max_data_sg_nents = MAX_TXU / PAGE_SIZE, - .get_fabric_name = ibmvscsis_get_fabric_name, .tpg_get_wwn = ibmvscsis_get_fabric_wwn, .tpg_get_tag = ibmvscsis_get_tag, .tpg_get_default_depth = ibmvscsis_get_default_depth, diff --git a/drivers/scsi/imm.c b/drivers/scsi/imm.c index 8c6627bc8a39..cea7f502e8ca 100644 --- a/drivers/scsi/imm.c +++ b/drivers/scsi/imm.c @@ -1110,7 +1110,6 @@ static struct scsi_host_template imm_template = { .bios_param = imm_biosparam, .this_id = 7, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, .can_queue = 1, .slave_alloc = imm_adjust_queue, }; diff --git a/drivers/scsi/initio.c b/drivers/scsi/initio.c index 7a91cf3ff173..eb2778b5c81b 100644 --- a/drivers/scsi/initio.c +++ b/drivers/scsi/initio.c @@ -2817,7 +2817,6 @@ static struct scsi_host_template initio_template = { .can_queue = MAX_TARGETS * i91u_MAXQUEUE, .this_id = 1, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, }; static int initio_probe_one(struct pci_dev *pdev, @@ -2840,7 +2839,7 @@ static int initio_probe_one(struct pci_dev *pdev, reg = 0; bios_seg = (bios_seg << 8) + ((u16) ((reg & 0xFF00) >> 8)); - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) { + if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) { printk(KERN_WARNING "i91u: Could not set 32 bit DMA mask\n"); error = -ENODEV; goto out_disable_device; diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index 271990bc065b..d1b4025a4503 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -6754,7 +6754,6 @@ static struct scsi_host_template driver_template = { .sg_tablesize = IPR_MAX_SGLIST, .max_sectors = IPR_IOA_MAX_SECTORS, .cmd_per_lun = IPR_MAX_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = ipr_ioa_attrs, .sdev_attrs = ipr_dev_attrs, .proc_name = IPR_NAME, diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c index ee8a1ecd58fd..e8bc8d328bab 100644 --- a/drivers/scsi/ips.c +++ b/drivers/scsi/ips.c @@ -365,7 +365,6 @@ static struct scsi_host_template ips_driver_template = { .this_id = -1, .sg_tablesize = IPS_MAX_SG, .cmd_per_lun = 3, - .use_clustering = ENABLE_CLUSTERING, .no_write_same = 1, }; @@ -1801,13 +1800,13 @@ ips_fill_scb_sg_single(ips_ha_t * ha, dma_addr_t busaddr, } if (IPS_USE_ENH_SGLIST(ha)) { scb->sg_list.enh_list[indx].address_lo = - cpu_to_le32(pci_dma_lo32(busaddr)); + cpu_to_le32(lower_32_bits(busaddr)); scb->sg_list.enh_list[indx].address_hi = - cpu_to_le32(pci_dma_hi32(busaddr)); + cpu_to_le32(upper_32_bits(busaddr)); scb->sg_list.enh_list[indx].length = cpu_to_le32(e_len); } else { scb->sg_list.std_list[indx].address = - cpu_to_le32(pci_dma_lo32(busaddr)); + cpu_to_le32(lower_32_bits(busaddr)); scb->sg_list.std_list[indx].length = cpu_to_le32(e_len); } @@ -6678,7 +6677,6 @@ ips_register_scsi(int index) sh->sg_tablesize = sh->hostt->sg_tablesize; sh->can_queue = sh->hostt->can_queue; sh->cmd_per_lun = sh->hostt->cmd_per_lun; - sh->use_clustering = sh->hostt->use_clustering; sh->max_sectors = 128; sh->max_id = ha->ntargets; @@ -6926,7 +6924,7 @@ 
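
[The ips hunks above drop the driver-private pci_dma_lo32()/pci_dma_hi32() macros in favor of the generic lower_32_bits()/upper_32_bits() helpers from linux/kernel.h, which split a 64-bit value cleanly even on 32-bit builds (the old "(a >> 16) >> 16" dance existed only to dodge a shift-count warning). A minimal sketch of writing a 64-bit DMA address into a split register pair; the device and register offsets are invented for illustration:]

    #include <linux/kernel.h>       /* lower_32_bits(), upper_32_bits() */
    #include <linux/types.h>        /* dma_addr_t */
    #include <linux/io.h>           /* writel() */

    static void demo_write_dma_addr(void __iomem *base, dma_addr_t addr)
    {
            writel(lower_32_bits(addr), base + 0x00);       /* ADDR_LO */
            writel(upper_32_bits(addr), base + 0x04);       /* ADDR_HI */
    }
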
ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr) * it! Also, don't use 64bit addressing if dma addresses * are guaranteed to be < 4G. */ - if (IPS_ENABLE_DMA64 && IPS_HAS_ENH_SGLIST(ha) && + if (sizeof(dma_addr_t) > 4 && IPS_HAS_ENH_SGLIST(ha) && !dma_set_mask(&ha->pcidev->dev, DMA_BIT_MASK(64))) { (ha)->flags |= IPS_HA_ENH_SG; } else { diff --git a/drivers/scsi/ips.h b/drivers/scsi/ips.h index db546171e97f..6c0678fb9a67 100644 --- a/drivers/scsi/ips.h +++ b/drivers/scsi/ips.h @@ -96,15 +96,6 @@ #define __iomem #endif - #define pci_dma_hi32(a) ((a >> 16) >> 16) - #define pci_dma_lo32(a) (a & 0xffffffff) - - #if (BITS_PER_LONG > 32) || defined(CONFIG_HIGHMEM64G) - #define IPS_ENABLE_DMA64 (1) - #else - #define IPS_ENABLE_DMA64 (0) - #endif - /* * Adapter address map equates */ diff --git a/drivers/scsi/isci/init.c b/drivers/scsi/isci/init.c index 08c7b1e25fe4..68b90c4f79a3 100644 --- a/drivers/scsi/isci/init.c +++ b/drivers/scsi/isci/init.c @@ -163,7 +163,6 @@ static struct scsi_host_template isci_sht = { .this_id = -1, .sg_tablesize = SG_ALL, .max_sectors = SCSI_DEFAULT_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .eh_abort_handler = sas_eh_abort_handler, .eh_device_reset_handler = sas_eh_device_reset_handler, .eh_target_reset_handler = sas_eh_target_reset_handler, @@ -304,21 +303,10 @@ static int isci_pci_init(struct pci_dev *pdev) pci_set_master(pdev); - err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); - if (err) { - err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); - if (err) - return err; - } - - err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); - if (err) { - err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); - if (err) - return err; - } - - return 0; + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (err) + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + return err; } static int num_controllers(struct pci_dev *pdev) diff --git a/drivers/scsi/isci/phy.c b/drivers/scsi/isci/phy.c index 1deca8c5a94f..7f9b3f20e5e4 100644 --- a/drivers/scsi/isci/phy.c +++ b/drivers/scsi/isci/phy.c @@ -778,6 +778,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) break; case SCU_EVENT_LINK_FAILURE: scu_link_layer_set_txcomsas_timeout(iphy, SCU_SAS_LINK_LAYER_TXCOMSAS_NEGTIME_DEFAULT); + /* fall through */ case SCU_EVENT_HARD_RESET_RECEIVED: /* Start the oob/sn state machine over again */ sci_change_state(&iphy->sm, SCI_PHY_STARTING); diff --git a/drivers/scsi/isci/remote_device.c b/drivers/scsi/isci/remote_device.c index cc51f38b116d..9d29edb9f590 100644 --- a/drivers/scsi/isci/remote_device.c +++ b/drivers/scsi/isci/remote_device.c @@ -310,7 +310,7 @@ static void isci_remote_device_not_ready(struct isci_host *ihost, /* Kill all outstanding requests for the device. */ sci_remote_device_terminate_requests(idev); - /* Fall through into the default case... */ + /* Fall through - into the default case... */ default: clear_bit(IDEV_IO_READY, &idev->flags); break; @@ -593,7 +593,7 @@ enum sci_status sci_remote_device_event_handler(struct isci_remote_device *idev, break; } - /* Else, fall through and treat as unhandled... */ + /* fall through - and treat as unhandled... 
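
[isci's four-call pci_set_dma_mask()/pci_set_consistent_dma_mask() ladder collapses into dma_set_mask_and_coherent(), which sets the streaming and coherent masks together. The resulting fallback pattern, sketched as a generic probe helper under the same assumptions:]

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    static int demo_init_dma(struct pci_dev *pdev)
    {
            int err;

            /* Prefer 64-bit addressing; fall back to 32-bit if the
             * platform or IOMMU setup cannot honour it.
             */
            err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
            if (err)
                    err = dma_set_mask_and_coherent(&pdev->dev,
                                                    DMA_BIT_MASK(32));
            return err;
    }
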
*/ default: dev_dbg(scirdev_to_dev(idev), "%s: device: %p event code: %x: %s\n", diff --git a/drivers/scsi/isci/remote_node_context.c b/drivers/scsi/isci/remote_node_context.c index e3f2a5359d71..474a43460963 100644 --- a/drivers/scsi/isci/remote_node_context.c +++ b/drivers/scsi/isci/remote_node_context.c @@ -601,9 +601,9 @@ enum sci_status sci_remote_node_context_suspend( __func__, sci_rnc); return SCI_FAILURE_INVALID_STATE; } - /* Fall through and handle like SCI_RNC_POSTING */ + /* Fall through - and handle like SCI_RNC_POSTING */ case SCI_RNC_RESUMING: - /* Fall through and handle like SCI_RNC_POSTING */ + /* Fall through - and handle like SCI_RNC_POSTING */ case SCI_RNC_POSTING: /* Set the destination state to AWAIT - this signals the * entry into the SCI_RNC_READY state that a suspension diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c index 2f151708b59a..1b18cf55167e 100644 --- a/drivers/scsi/isci/request.c +++ b/drivers/scsi/isci/request.c @@ -894,7 +894,7 @@ sci_io_request_terminate(struct isci_request *ireq) * and don't wait for the task response. */ sci_change_state(&ireq->sm, SCI_REQ_ABORTING); - /* Fall through and handle like ABORTING... */ + /* Fall through - and handle like ABORTING... */ case SCI_REQ_ABORTING: if (!isci_remote_device_is_safe_to_abort(ireq->target_device)) set_bit(IREQ_PENDING_ABORT, &ireq->flags); diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c index 23354f206533..cae6368ebb98 100644 --- a/drivers/scsi/iscsi_tcp.c +++ b/drivers/scsi/iscsi_tcp.c @@ -44,6 +44,7 @@ #include #include #include +#include #include "iscsi_tcp.h" @@ -72,6 +73,9 @@ MODULE_PARM_DESC(debug_iscsi_tcp, "Turn on debugging for iscsi_tcp module " iscsi_conn_printk(KERN_INFO, _conn, \ "%s " dbg_fmt, \ __func__, ##arg); \ + iscsi_dbg_trace(trace_iscsi_dbg_sw_tcp, \ + &(_conn)->cls_conn->dev, \ + "%s " dbg_fmt, __func__, ##arg);\ } while (0); @@ -980,7 +984,7 @@ static struct scsi_host_template iscsi_sw_tcp_sht = { .eh_abort_handler = iscsi_eh_abort, .eh_device_reset_handler= iscsi_eh_device_reset, .eh_target_reset_handler = iscsi_eh_recover_target, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .slave_alloc = iscsi_sw_tcp_slave_alloc, .slave_configure = iscsi_sw_tcp_slave_configure, .target_alloc = iscsi_target_alloc, diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c index 1e1c0f1b9e69..9192a1d9dec6 100644 --- a/drivers/scsi/libfc/fc_rport.c +++ b/drivers/scsi/libfc/fc_rport.c @@ -860,7 +860,6 @@ static void fc_rport_enter_flogi(struct fc_rport_priv *rdata) static void fc_rport_recv_flogi_req(struct fc_lport *lport, struct fc_frame *rx_fp) { - struct fc_disc *disc; struct fc_els_flogi *flp; struct fc_rport_priv *rdata; struct fc_frame *fp = rx_fp; @@ -871,7 +870,6 @@ static void fc_rport_recv_flogi_req(struct fc_lport *lport, FC_RPORT_ID_DBG(lport, sid, "Received FLOGI request\n"); - disc = &lport->disc; if (!lport->point_to_multipoint) { rjt_data.reason = ELS_RJT_UNSUP; rjt_data.explan = ELS_EXPL_NONE; @@ -1724,6 +1722,7 @@ static void fc_rport_recv_els_req(struct fc_lport *lport, struct fc_frame *fp) kref_put(&rdata->kref, fc_rport_destroy); goto busy; } + /* fall through */ default: FC_RPORT_DBG(rdata, "Reject ELS 0x%02x while in state %s\n", diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c index f78d2e5c1471..b8d325ce8754 100644 --- a/drivers/scsi/libiscsi.c +++ b/drivers/scsi/libiscsi.c @@ -40,6 +40,7 @@ #include #include #include +#include static int iscsi_dbg_lib_conn; 
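
[With use_clustering removed from scsi_host_template, iscsi_tcp states its real constraint directly: no request segment may cross a page boundary, expressed as a DMA boundary mask of PAGE_SIZE - 1. That reproduces what DISABLE_CLUSTERING used to guarantee, namely that adjacent segments are never merged into multi-page ones. An abridged template sketch for a hypothetical driver:]

    #include <linux/module.h>
    #include <scsi/scsi_host.h>

    static struct scsi_host_template demo_sht = {
            .module         = THIS_MODULE,
            .name           = "demo",
            /* was: .use_clustering = DISABLE_CLUSTERING */
            .dma_boundary   = PAGE_SIZE - 1,
    };
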
module_param_named(debug_libiscsi_conn, iscsi_dbg_lib_conn, int, @@ -68,6 +69,9 @@ MODULE_PARM_DESC(debug_libiscsi_eh, iscsi_conn_printk(KERN_INFO, _conn, \ "%s " dbg_fmt, \ __func__, ##arg); \ + iscsi_dbg_trace(trace_iscsi_dbg_conn, \ + &(_conn)->cls_conn->dev, \ + "%s " dbg_fmt, __func__, ##arg);\ } while (0); #define ISCSI_DBG_SESSION(_session, dbg_fmt, arg...) \ @@ -76,6 +80,9 @@ MODULE_PARM_DESC(debug_libiscsi_eh, iscsi_session_printk(KERN_INFO, _session, \ "%s " dbg_fmt, \ __func__, ##arg); \ + iscsi_dbg_trace(trace_iscsi_dbg_session, \ + &(_session)->cls_session->dev, \ + "%s " dbg_fmt, __func__, ##arg); \ } while (0); #define ISCSI_DBG_EH(_session, dbg_fmt, arg...) \ @@ -84,6 +91,9 @@ MODULE_PARM_DESC(debug_libiscsi_eh, iscsi_session_printk(KERN_INFO, _session, \ "%s " dbg_fmt, \ __func__, ##arg); \ + iscsi_dbg_trace(trace_iscsi_dbg_eh, \ + &(_session)->cls_session->dev, \ + "%s " dbg_fmt, __func__, ##arg); \ } while (0); inline void iscsi_conn_queue_work(struct iscsi_conn *conn) diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c index 4fcb9e65be57..8a6b1b3f8277 100644 --- a/drivers/scsi/libiscsi_tcp.c +++ b/drivers/scsi/libiscsi_tcp.c @@ -43,6 +43,7 @@ #include #include #include +#include #include "iscsi_tcp.h" @@ -65,6 +66,9 @@ MODULE_PARM_DESC(debug_libiscsi_tcp, "Turn on debugging for libiscsi_tcp " iscsi_conn_printk(KERN_INFO, _conn, \ "%s " dbg_fmt, \ __func__, ##arg); \ + iscsi_dbg_trace(trace_iscsi_dbg_tcp, \ + &(_conn)->cls_conn->dev, \ + "%s " dbg_fmt, __func__, ##arg);\ } while (0); static int iscsi_tcp_hdr_recv_done(struct iscsi_tcp_conn *tcp_conn, diff --git a/drivers/scsi/libsas/Makefile b/drivers/scsi/libsas/Makefile index 2e70140f70c3..5d51520c6f23 100644 --- a/drivers/scsi/libsas/Makefile +++ b/drivers/scsi/libsas/Makefile @@ -26,10 +26,11 @@ libsas-y += sas_init.o \ sas_phy.o \ sas_port.o \ sas_event.o \ - sas_dump.o \ sas_discover.o \ sas_expander.o \ sas_scsi_host.o \ sas_task.o libsas-$(CONFIG_SCSI_SAS_ATA) += sas_ata.o libsas-$(CONFIG_SCSI_SAS_HOST_SMP) += sas_host_smp.o + +ccflags-y := -DDEBUG diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c index 4f6cdf53e913..6f93fee2b21b 100644 --- a/drivers/scsi/libsas/sas_ata.c +++ b/drivers/scsi/libsas/sas_ata.c @@ -75,8 +75,8 @@ static enum ata_completion_errors sas_to_ata_err(struct task_status_struct *ts) case SAS_OPEN_TO: case SAS_OPEN_REJECT: - SAS_DPRINTK("%s: Saw error %d. What to do?\n", - __func__, ts->stat); + pr_warn("%s: Saw error %d. What to do?\n", + __func__, ts->stat); return AC_ERR_OTHER; case SAM_STAT_CHECK_CONDITION: @@ -151,8 +151,7 @@ static void sas_ata_task_done(struct sas_task *task) } else { ac = sas_to_ata_err(stat); if (ac) { - SAS_DPRINTK("%s: SAS error %x\n", __func__, - stat->stat); + pr_warn("%s: SAS error %x\n", __func__, stat->stat); /* We saw a SAS error. Send a vague error. 
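
[The libsas Makefile grows "ccflags-y := -DDEBUG" because the conversion replaces SAS_DPRINTK, an unconditional printk(KERN_DEBUG ...), with pr_debug(), and pr_debug() is compiled out entirely unless DEBUG is defined or CONFIG_DYNAMIC_DEBUG is enabled. Defining DEBUG per-directory preserves the old always-compiled behaviour; a minimal illustration:]

    #include <linux/printk.h>

    static void demo(int id)
    {
            /* Without -DDEBUG (and without CONFIG_DYNAMIC_DEBUG) this
             * call compiles to nothing; with the Makefile's -DDEBUG it
             * behaves like the old printk(KERN_DEBUG ...) macro did.
             */
            pr_debug("phy%02d: probing\n", id);
    }
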
*/ if (!link->sactive) { qc->err_mask = ac; @@ -237,7 +236,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc) ret = i->dft->lldd_execute_task(task, GFP_ATOMIC); if (ret) { - SAS_DPRINTK("lldd_execute_task returned: %d\n", ret); + pr_debug("lldd_execute_task returned: %d\n", ret); if (qc->scsicmd) ASSIGN_SAS_TASK(qc->scsicmd, NULL); @@ -282,9 +281,9 @@ int sas_get_ata_info(struct domain_device *dev, struct ex_phy *phy) res = sas_get_report_phy_sata(dev->parent, phy->phy_id, &dev->sata_dev.rps_resp); if (res) { - SAS_DPRINTK("report phy sata to %016llx:0x%x returned " - "0x%x\n", SAS_ADDR(dev->parent->sas_addr), - phy->phy_id, res); + pr_debug("report phy sata to %016llx:0x%x returned 0x%x\n", + SAS_ADDR(dev->parent->sas_addr), + phy->phy_id, res); return res; } memcpy(dev->frame_rcvd, &dev->sata_dev.rps_resp.rps.fis, @@ -375,7 +374,7 @@ static int sas_ata_printk(const char *level, const struct domain_device *ddev, vaf.fmt = fmt; vaf.va = &args; - r = printk("%ssas: ata%u: %s: %pV", + r = printk("%s" SAS_FMT "ata%u: %s: %pV", level, ap->print_id, dev_name(dev), &vaf); va_end(args); @@ -431,8 +430,7 @@ static void sas_ata_internal_abort(struct sas_task *task) if (task->task_state_flags & SAS_TASK_STATE_ABORTED || task->task_state_flags & SAS_TASK_STATE_DONE) { spin_unlock_irqrestore(&task->task_state_lock, flags); - SAS_DPRINTK("%s: Task %p already finished.\n", __func__, - task); + pr_debug("%s: Task %p already finished.\n", __func__, task); goto out; } task->task_state_flags |= SAS_TASK_STATE_ABORTED; @@ -452,7 +450,7 @@ static void sas_ata_internal_abort(struct sas_task *task) * aborted ata tasks, otherwise we (likely) leak the sas task * here */ - SAS_DPRINTK("%s: Task %p leaked.\n", __func__, task); + pr_warn("%s: Task %p leaked.\n", __func__, task); if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) task->task_state_flags &= ~SAS_TASK_STATE_ABORTED; @@ -558,7 +556,7 @@ int sas_ata_init(struct domain_device *found_dev) ata_host = kzalloc(sizeof(*ata_host), GFP_KERNEL); if (!ata_host) { - SAS_DPRINTK("ata host alloc failed.\n"); + pr_err("ata host alloc failed.\n"); return -ENOMEM; } @@ -566,7 +564,7 @@ int sas_ata_init(struct domain_device *found_dev) ap = ata_sas_port_alloc(ata_host, &sata_port_info, shost); if (!ap) { - SAS_DPRINTK("ata_sas_port_alloc failed.\n"); + pr_err("ata_sas_port_alloc failed.\n"); rc = -ENODEV; goto free_host; } @@ -601,12 +599,7 @@ void sas_ata_task_abort(struct sas_task *task) /* Bounce SCSI-initiated commands to the SCSI EH */ if (qc->scsicmd) { - struct request_queue *q = qc->scsicmd->device->request_queue; - unsigned long flags; - - spin_lock_irqsave(q->queue_lock, flags); blk_abort_request(qc->scsicmd->request); - spin_unlock_irqrestore(q->queue_lock, flags); return; } diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c index dde433aa59c2..726ada9b8c79 100644 --- a/drivers/scsi/libsas/sas_discover.c +++ b/drivers/scsi/libsas/sas_discover.c @@ -128,7 +128,7 @@ static int sas_get_port_device(struct asd_sas_port *port) SAS_FANOUT_EXPANDER_DEVICE); break; default: - printk("ERROR: Unidentified device type %d\n", dev->dev_type); + pr_warn("ERROR: Unidentified device type %d\n", dev->dev_type); rphy = NULL; break; } @@ -186,10 +186,9 @@ int sas_notify_lldd_dev_found(struct domain_device *dev) res = i->dft->lldd_dev_found(dev); if (res) { - printk("sas: driver on pcidev %s cannot handle " - "device %llx, error:%d\n", - dev_name(sas_ha->dev), - SAS_ADDR(dev->sas_addr), res); + pr_warn("driver on host %s cannot 
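
[The sas_ata_task_abort() hunk above drops the q->queue_lock critical section around blk_abort_request(); with the legacy single-queue request path gone, blk_abort_request() handles its own synchronization on the blk-mq side. A sketch of the simplified caller, assuming a 4.21-era struct scsi_cmnd where the request is reachable as sc->request:]

    #include <linux/blkdev.h>
    #include <scsi/scsi_cmnd.h>

    static void demo_abort_cmd(struct scsi_cmnd *sc)
    {
            /* No queue_lock needed on the blk-mq path. */
            blk_abort_request(sc->request);
    }
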
handle device %llx, error:%d\n", + dev_name(sas_ha->dev), + SAS_ADDR(dev->sas_addr), res); } set_bit(SAS_DEV_FOUND, &dev->state); kref_get(&dev->kref); @@ -456,8 +455,8 @@ static void sas_discover_domain(struct work_struct *work) return; dev = port->port_dev; - SAS_DPRINTK("DOING DISCOVERY on port %d, pid:%d\n", port->id, - task_pid_nr(current)); + pr_debug("DOING DISCOVERY on port %d, pid:%d\n", port->id, + task_pid_nr(current)); switch (dev->dev_type) { case SAS_END_DEVICE: @@ -473,12 +472,12 @@ static void sas_discover_domain(struct work_struct *work) error = sas_discover_sata(dev); break; #else - SAS_DPRINTK("ATA device seen but CONFIG_SCSI_SAS_ATA=N so cannot attach\n"); + pr_notice("ATA device seen but CONFIG_SCSI_SAS_ATA=N so cannot attach\n"); /* Fall through */ #endif default: error = -ENXIO; - SAS_DPRINTK("unhandled device %d\n", dev->dev_type); + pr_err("unhandled device %d\n", dev->dev_type); break; } @@ -495,8 +494,8 @@ static void sas_discover_domain(struct work_struct *work) sas_probe_devices(port); - SAS_DPRINTK("DONE DISCOVERY on port %d, pid:%d, result:%d\n", port->id, - task_pid_nr(current), error); + pr_debug("DONE DISCOVERY on port %d, pid:%d, result:%d\n", port->id, + task_pid_nr(current), error); } static void sas_revalidate_domain(struct work_struct *work) @@ -510,22 +509,22 @@ static void sas_revalidate_domain(struct work_struct *work) /* prevent revalidation from finding sata links in recovery */ mutex_lock(&ha->disco_mutex); if (test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state)) { - SAS_DPRINTK("REVALIDATION DEFERRED on port %d, pid:%d\n", - port->id, task_pid_nr(current)); + pr_debug("REVALIDATION DEFERRED on port %d, pid:%d\n", + port->id, task_pid_nr(current)); goto out; } clear_bit(DISCE_REVALIDATE_DOMAIN, &port->disc.pending); - SAS_DPRINTK("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, - task_pid_nr(current)); + pr_debug("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, + task_pid_nr(current)); if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE || ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE)) res = sas_ex_revalidate_domain(ddev); - SAS_DPRINTK("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n", - port->id, task_pid_nr(current), res); + pr_debug("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n", + port->id, task_pid_nr(current), res); out: mutex_unlock(&ha->disco_mutex); diff --git a/drivers/scsi/libsas/sas_dump.c b/drivers/scsi/libsas/sas_dump.c deleted file mode 100644 index 7e5d262e7a7d..000000000000 --- a/drivers/scsi/libsas/sas_dump.c +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Serial Attached SCSI (SAS) Dump/Debugging routines - * - * Copyright (C) 2005 Adaptec, Inc. All rights reserved. - * Copyright (C) 2005 Luben Tuikov - * - * This file is licensed under GPLv2. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 of the - * License, or (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - * - */ - -#include "sas_dump.h" - -static const char *sas_porte_str[] = { - [0] = "PORTE_BYTES_DMAED", - [1] = "PORTE_BROADCAST_RCVD", - [2] = "PORTE_LINK_RESET_ERR", - [3] = "PORTE_TIMER_EVENT", - [4] = "PORTE_HARD_RESET", -}; - -static const char *sas_phye_str[] = { - [0] = "PHYE_LOSS_OF_SIGNAL", - [1] = "PHYE_OOB_DONE", - [2] = "PHYE_OOB_ERROR", - [3] = "PHYE_SPINUP_HOLD", - [4] = "PHYE_RESUME_TIMEOUT", -}; - -void sas_dprint_porte(int phyid, enum port_event pe) -{ - SAS_DPRINTK("phy%d: port event: %s\n", phyid, sas_porte_str[pe]); -} -void sas_dprint_phye(int phyid, enum phy_event pe) -{ - SAS_DPRINTK("phy%d: phy event: %s\n", phyid, sas_phye_str[pe]); -} - -void sas_dump_port(struct asd_sas_port *port) -{ - SAS_DPRINTK("port%d: class:0x%x\n", port->id, port->class); - SAS_DPRINTK("port%d: sas_addr:%llx\n", port->id, - SAS_ADDR(port->sas_addr)); - SAS_DPRINTK("port%d: attached_sas_addr:%llx\n", port->id, - SAS_ADDR(port->attached_sas_addr)); - SAS_DPRINTK("port%d: iproto:0x%x\n", port->id, port->iproto); - SAS_DPRINTK("port%d: tproto:0x%x\n", port->id, port->tproto); - SAS_DPRINTK("port%d: oob_mode:0x%x\n", port->id, port->oob_mode); - SAS_DPRINTK("port%d: num_phys:%d\n", port->id, port->num_phys); -} diff --git a/drivers/scsi/libsas/sas_dump.h b/drivers/scsi/libsas/sas_dump.h deleted file mode 100644 index 6aaee6b0fcdb..000000000000 --- a/drivers/scsi/libsas/sas_dump.h +++ /dev/null @@ -1,29 +0,0 @@ -/* - * Serial Attached SCSI (SAS) Dump/Debugging routines header file - * - * Copyright (C) 2005 Adaptec, Inc. All rights reserved. - * Copyright (C) 2005 Luben Tuikov - * - * This file is licensed under GPLv2. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 of the - * License, or (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. 
- * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - * - */ - -#include "sas_internal.h" - -void sas_dprint_porte(int phyid, enum port_event pe); -void sas_dprint_phye(int phyid, enum phy_event pe); -void sas_dump_port(struct asd_sas_port *port); diff --git a/drivers/scsi/libsas/sas_event.c b/drivers/scsi/libsas/sas_event.c index ae923eb6de95..b1e0f7d2b396 100644 --- a/drivers/scsi/libsas/sas_event.c +++ b/drivers/scsi/libsas/sas_event.c @@ -25,7 +25,6 @@ #include #include #include "sas_internal.h" -#include "sas_dump.h" int sas_queue_work(struct sas_ha_struct *ha, struct sas_work *sw) { diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c index 0d1f72752ca2..17eb4185f29d 100644 --- a/drivers/scsi/libsas/sas_expander.c +++ b/drivers/scsi/libsas/sas_expander.c @@ -99,17 +99,17 @@ static int smp_execute_task_sg(struct domain_device *dev, if (res) { del_timer(&task->slow_task->timer); - SAS_DPRINTK("executing SMP task failed:%d\n", res); + pr_notice("executing SMP task failed:%d\n", res); break; } wait_for_completion(&task->slow_task->completion); res = -ECOMM; if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) { - SAS_DPRINTK("smp task timed out or aborted\n"); + pr_notice("smp task timed out or aborted\n"); i->dft->lldd_abort_task(task); if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) { - SAS_DPRINTK("SMP task aborted and not done\n"); + pr_notice("SMP task aborted and not done\n"); break; } } @@ -134,11 +134,11 @@ static int smp_execute_task_sg(struct domain_device *dev, task->task_status.stat == SAS_DEVICE_UNKNOWN) break; else { - SAS_DPRINTK("%s: task to dev %016llx response: 0x%x " - "status 0x%x\n", __func__, - SAS_ADDR(dev->sas_addr), - task->task_status.resp, - task->task_status.stat); + pr_notice("%s: task to dev %016llx response: 0x%x status 0x%x\n", + __func__, + SAS_ADDR(dev->sas_addr), + task->task_status.resp, + task->task_status.stat); sas_free_task(task); task = NULL; } @@ -347,11 +347,11 @@ static void sas_set_ex_phy(struct domain_device *dev, int phy_id, void *rsp) if (test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state)) set_bit(DISCE_REVALIDATE_DOMAIN, &dev->port->disc.pending); - SAS_DPRINTK("%sex %016llx phy%02d:%c:%X attached: %016llx (%s)\n", - test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state) ? "ata: " : "", - SAS_ADDR(dev->sas_addr), phy->phy_id, - sas_route_char(dev, phy), phy->linkrate, - SAS_ADDR(phy->attached_sas_addr), type); + pr_debug("%sex %016llx phy%02d:%c:%X attached: %016llx (%s)\n", + test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state) ? 
"ata: " : "", + SAS_ADDR(dev->sas_addr), phy->phy_id, + sas_route_char(dev, phy), phy->linkrate, + SAS_ADDR(phy->attached_sas_addr), type); } /* check if we have an existing attached ata device on this expander phy */ @@ -393,7 +393,7 @@ static int sas_ex_phy_discover_helper(struct domain_device *dev, u8 *disc_req, return res; dr = &((struct smp_resp *)disc_resp)->disc; if (memcmp(dev->sas_addr, dr->attached_sas_addr, SAS_ADDR_SIZE) == 0) { - sas_printk("Found loopback topology, just ignore it!\n"); + pr_notice("Found loopback topology, just ignore it!\n"); return 0; } sas_set_ex_phy(dev, single, disc_resp); @@ -500,12 +500,12 @@ static int sas_ex_general(struct domain_device *dev) RG_RESP_SIZE); if (res) { - SAS_DPRINTK("RG to ex %016llx failed:0x%x\n", - SAS_ADDR(dev->sas_addr), res); + pr_notice("RG to ex %016llx failed:0x%x\n", + SAS_ADDR(dev->sas_addr), res); goto out; } else if (rg_resp->result != SMP_RESP_FUNC_ACC) { - SAS_DPRINTK("RG:ex %016llx returned SMP result:0x%x\n", - SAS_ADDR(dev->sas_addr), rg_resp->result); + pr_debug("RG:ex %016llx returned SMP result:0x%x\n", + SAS_ADDR(dev->sas_addr), rg_resp->result); res = rg_resp->result; goto out; } @@ -513,8 +513,8 @@ static int sas_ex_general(struct domain_device *dev) ex_assign_report_general(dev, rg_resp); if (dev->ex_dev.configuring) { - SAS_DPRINTK("RG: ex %llx self-configuring...\n", - SAS_ADDR(dev->sas_addr)); + pr_debug("RG: ex %llx self-configuring...\n", + SAS_ADDR(dev->sas_addr)); schedule_timeout_interruptible(5*HZ); } else break; @@ -568,12 +568,12 @@ static int sas_ex_manuf_info(struct domain_device *dev) res = smp_execute_task(dev, mi_req, MI_REQ_SIZE, mi_resp,MI_RESP_SIZE); if (res) { - SAS_DPRINTK("MI: ex %016llx failed:0x%x\n", - SAS_ADDR(dev->sas_addr), res); + pr_notice("MI: ex %016llx failed:0x%x\n", + SAS_ADDR(dev->sas_addr), res); goto out; } else if (mi_resp[2] != SMP_RESP_FUNC_ACC) { - SAS_DPRINTK("MI ex %016llx returned SMP result:0x%x\n", - SAS_ADDR(dev->sas_addr), mi_resp[2]); + pr_debug("MI ex %016llx returned SMP result:0x%x\n", + SAS_ADDR(dev->sas_addr), mi_resp[2]); goto out; } @@ -836,10 +836,9 @@ static struct domain_device *sas_ex_discover_end_dev( res = sas_discover_sata(child); if (res) { - SAS_DPRINTK("sas_discover_sata() for device %16llx at " - "%016llx:0x%x returned 0x%x\n", - SAS_ADDR(child->sas_addr), - SAS_ADDR(parent->sas_addr), phy_id, res); + pr_notice("sas_discover_sata() for device %16llx at %016llx:0x%x returned 0x%x\n", + SAS_ADDR(child->sas_addr), + SAS_ADDR(parent->sas_addr), phy_id, res); goto out_list_del; } } else @@ -861,16 +860,15 @@ static struct domain_device *sas_ex_discover_end_dev( res = sas_discover_end_dev(child); if (res) { - SAS_DPRINTK("sas_discover_end_dev() for device %16llx " - "at %016llx:0x%x returned 0x%x\n", - SAS_ADDR(child->sas_addr), - SAS_ADDR(parent->sas_addr), phy_id, res); + pr_notice("sas_discover_end_dev() for device %16llx at %016llx:0x%x returned 0x%x\n", + SAS_ADDR(child->sas_addr), + SAS_ADDR(parent->sas_addr), phy_id, res); goto out_list_del; } } else { - SAS_DPRINTK("target proto 0x%x at %016llx:0x%x not handled\n", - phy->attached_tproto, SAS_ADDR(parent->sas_addr), - phy_id); + pr_notice("target proto 0x%x at %016llx:0x%x not handled\n", + phy->attached_tproto, SAS_ADDR(parent->sas_addr), + phy_id); goto out_free; } @@ -927,11 +925,10 @@ static struct domain_device *sas_ex_discover_expander( int res; if (phy->routing_attr == DIRECT_ROUTING) { - SAS_DPRINTK("ex %016llx:0x%x:D <--> ex %016llx:0x%x is not " - "allowed\n", - 
SAS_ADDR(parent->sas_addr), phy_id, - SAS_ADDR(phy->attached_sas_addr), - phy->attached_phy_id); + pr_warn("ex %016llx:0x%x:D <--> ex %016llx:0x%x is not allowed\n", + SAS_ADDR(parent->sas_addr), phy_id, + SAS_ADDR(phy->attached_sas_addr), + phy->attached_phy_id); return NULL; } child = sas_alloc_device(); @@ -1038,25 +1035,24 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id) ex_phy->attached_dev_type != SAS_FANOUT_EXPANDER_DEVICE && ex_phy->attached_dev_type != SAS_EDGE_EXPANDER_DEVICE && ex_phy->attached_dev_type != SAS_SATA_PENDING) { - SAS_DPRINTK("unknown device type(0x%x) attached to ex %016llx " - "phy 0x%x\n", ex_phy->attached_dev_type, - SAS_ADDR(dev->sas_addr), - phy_id); + pr_warn("unknown device type(0x%x) attached to ex %016llx phy 0x%x\n", + ex_phy->attached_dev_type, + SAS_ADDR(dev->sas_addr), + phy_id); return 0; } res = sas_configure_routing(dev, ex_phy->attached_sas_addr); if (res) { - SAS_DPRINTK("configure routing for dev %016llx " - "reported 0x%x. Forgotten\n", - SAS_ADDR(ex_phy->attached_sas_addr), res); + pr_notice("configure routing for dev %016llx reported 0x%x. Forgotten\n", + SAS_ADDR(ex_phy->attached_sas_addr), res); sas_disable_routing(dev, ex_phy->attached_sas_addr); return res; } if (sas_ex_join_wide_port(dev, phy_id)) { - SAS_DPRINTK("Attaching ex phy%d to wide port %016llx\n", - phy_id, SAS_ADDR(ex_phy->attached_sas_addr)); + pr_debug("Attaching ex phy%d to wide port %016llx\n", + phy_id, SAS_ADDR(ex_phy->attached_sas_addr)); return res; } @@ -1067,12 +1063,11 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id) break; case SAS_FANOUT_EXPANDER_DEVICE: if (SAS_ADDR(dev->port->disc.fanout_sas_addr)) { - SAS_DPRINTK("second fanout expander %016llx phy 0x%x " - "attached to ex %016llx phy 0x%x\n", - SAS_ADDR(ex_phy->attached_sas_addr), - ex_phy->attached_phy_id, - SAS_ADDR(dev->sas_addr), - phy_id); + pr_debug("second fanout expander %016llx phy 0x%x attached to ex %016llx phy 0x%x\n", + SAS_ADDR(ex_phy->attached_sas_addr), + ex_phy->attached_phy_id, + SAS_ADDR(dev->sas_addr), + phy_id); sas_ex_disable_phy(dev, phy_id); break; } else @@ -1101,9 +1096,8 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id) SAS_ADDR(child->sas_addr)) { ex->ex_phy[i].phy_state= PHY_DEVICE_DISCOVERED; if (sas_ex_join_wide_port(dev, i)) - SAS_DPRINTK("Attaching ex phy%d to wide port %016llx\n", - i, SAS_ADDR(ex->ex_phy[i].attached_sas_addr)); - + pr_debug("Attaching ex phy%d to wide port %016llx\n", + i, SAS_ADDR(ex->ex_phy[i].attached_sas_addr)); } } } @@ -1154,13 +1148,11 @@ static int sas_check_level_subtractive_boundary(struct domain_device *dev) if (sas_find_sub_addr(child, s2) && (SAS_ADDR(sub_addr) != SAS_ADDR(s2))) { - SAS_DPRINTK("ex %016llx->%016llx-?->%016llx " - "diverges from subtractive " - "boundary %016llx\n", - SAS_ADDR(dev->sas_addr), - SAS_ADDR(child->sas_addr), - SAS_ADDR(s2), - SAS_ADDR(sub_addr)); + pr_notice("ex %016llx->%016llx-?->%016llx diverges from subtractive boundary %016llx\n", + SAS_ADDR(dev->sas_addr), + SAS_ADDR(child->sas_addr), + SAS_ADDR(s2), + SAS_ADDR(sub_addr)); sas_ex_disable_port(child, s2); } @@ -1239,12 +1231,10 @@ static int sas_check_ex_subtractive_boundary(struct domain_device *dev) else if (SAS_ADDR(sub_sas_addr) != SAS_ADDR(phy->attached_sas_addr)) { - SAS_DPRINTK("ex %016llx phy 0x%x " - "diverges(%016llx) on subtractive " - "boundary(%016llx). 
Disabled\n", - SAS_ADDR(dev->sas_addr), i, - SAS_ADDR(phy->attached_sas_addr), - SAS_ADDR(sub_sas_addr)); + pr_notice("ex %016llx phy 0x%x diverges(%016llx) on subtractive boundary(%016llx). Disabled\n", + SAS_ADDR(dev->sas_addr), i, + SAS_ADDR(phy->attached_sas_addr), + SAS_ADDR(sub_sas_addr)); sas_ex_disable_phy(dev, i); } } @@ -1262,19 +1252,17 @@ static void sas_print_parent_topology_bug(struct domain_device *child, }; struct domain_device *parent = child->parent; - sas_printk("%s ex %016llx phy 0x%x <--> %s ex %016llx " - "phy 0x%x has %c:%c routing link!\n", - - ex_type[parent->dev_type], - SAS_ADDR(parent->sas_addr), - parent_phy->phy_id, + pr_notice("%s ex %016llx phy 0x%x <--> %s ex %016llx phy 0x%x has %c:%c routing link!\n", + ex_type[parent->dev_type], + SAS_ADDR(parent->sas_addr), + parent_phy->phy_id, - ex_type[child->dev_type], - SAS_ADDR(child->sas_addr), - child_phy->phy_id, + ex_type[child->dev_type], + SAS_ADDR(child->sas_addr), + child_phy->phy_id, - sas_route_char(parent, parent_phy), - sas_route_char(child, child_phy)); + sas_route_char(parent, parent_phy), + sas_route_char(child, child_phy)); } static int sas_check_eeds(struct domain_device *child, @@ -1286,13 +1274,12 @@ static int sas_check_eeds(struct domain_device *child, if (SAS_ADDR(parent->port->disc.fanout_sas_addr) != 0) { res = -ENODEV; - SAS_DPRINTK("edge ex %016llx phy S:0x%x <--> edge ex %016llx " - "phy S:0x%x, while there is a fanout ex %016llx\n", - SAS_ADDR(parent->sas_addr), - parent_phy->phy_id, - SAS_ADDR(child->sas_addr), - child_phy->phy_id, - SAS_ADDR(parent->port->disc.fanout_sas_addr)); + pr_warn("edge ex %016llx phy S:0x%x <--> edge ex %016llx phy S:0x%x, while there is a fanout ex %016llx\n", + SAS_ADDR(parent->sas_addr), + parent_phy->phy_id, + SAS_ADDR(child->sas_addr), + child_phy->phy_id, + SAS_ADDR(parent->port->disc.fanout_sas_addr)); } else if (SAS_ADDR(parent->port->disc.eeds_a) == 0) { memcpy(parent->port->disc.eeds_a, parent->sas_addr, SAS_ADDR_SIZE); @@ -1310,12 +1297,11 @@ static int sas_check_eeds(struct domain_device *child, ; else { res = -ENODEV; - SAS_DPRINTK("edge ex %016llx phy 0x%x <--> edge ex %016llx " - "phy 0x%x link forms a third EEDS!\n", - SAS_ADDR(parent->sas_addr), - parent_phy->phy_id, - SAS_ADDR(child->sas_addr), - child_phy->phy_id); + pr_warn("edge ex %016llx phy 0x%x <--> edge ex %016llx phy 0x%x link forms a third EEDS!\n", + SAS_ADDR(parent->sas_addr), + parent_phy->phy_id, + SAS_ADDR(child->sas_addr), + child_phy->phy_id); } return res; @@ -1429,14 +1415,13 @@ static int sas_configure_present(struct domain_device *dev, int phy_id, goto out; res = rri_resp[2]; if (res == SMP_RESP_NO_INDEX) { - SAS_DPRINTK("overflow of indexes: dev %016llx " - "phy 0x%x index 0x%x\n", - SAS_ADDR(dev->sas_addr), phy_id, i); + pr_warn("overflow of indexes: dev %016llx phy 0x%x index 0x%x\n", + SAS_ADDR(dev->sas_addr), phy_id, i); goto out; } else if (res != SMP_RESP_FUNC_ACC) { - SAS_DPRINTK("%s: dev %016llx phy 0x%x index 0x%x " - "result 0x%x\n", __func__, - SAS_ADDR(dev->sas_addr), phy_id, i, res); + pr_notice("%s: dev %016llx phy 0x%x index 0x%x result 0x%x\n", + __func__, SAS_ADDR(dev->sas_addr), phy_id, + i, res); goto out; } if (SAS_ADDR(sas_addr) != 0) { @@ -1500,9 +1485,8 @@ static int sas_configure_set(struct domain_device *dev, int phy_id, goto out; res = cri_resp[2]; if (res == SMP_RESP_NO_INDEX) { - SAS_DPRINTK("overflow of indexes: dev %016llx phy 0x%x " - "index 0x%x\n", - SAS_ADDR(dev->sas_addr), phy_id, index); + pr_warn("overflow of indexes: dev %016llx phy 
0x%x index 0x%x\n", + SAS_ADDR(dev->sas_addr), phy_id, index); } out: kfree(cri_req); @@ -1549,8 +1533,8 @@ static int sas_configure_parent(struct domain_device *parent, } if (ex_parent->conf_route_table == 0) { - SAS_DPRINTK("ex %016llx has self-configuring routing table\n", - SAS_ADDR(parent->sas_addr)); + pr_debug("ex %016llx has self-configuring routing table\n", + SAS_ADDR(parent->sas_addr)); return 0; } @@ -1611,8 +1595,8 @@ static int sas_discover_expander(struct domain_device *dev) res = sas_expander_discover(dev); if (res) { - SAS_DPRINTK("expander %016llx discovery failed(0x%x)\n", - SAS_ADDR(dev->sas_addr), res); + pr_warn("expander %016llx discovery failed(0x%x)\n", + SAS_ADDR(dev->sas_addr), res); goto out_err; } @@ -1856,10 +1840,10 @@ static int sas_find_bcast_dev(struct domain_device *dev, if (phy_id != -1) { *src_dev = dev; ex->ex_change_count = ex_change_count; - SAS_DPRINTK("Expander phy change count has changed\n"); + pr_info("Expander phy change count has changed\n"); return res; } else - SAS_DPRINTK("Expander phys DID NOT change\n"); + pr_info("Expander phys DID NOT change\n"); } list_for_each_entry(ch, &ex->children, siblings) { if (ch->dev_type == SAS_EDGE_EXPANDER_DEVICE || ch->dev_type == SAS_FANOUT_EXPANDER_DEVICE) { @@ -1969,8 +1953,8 @@ static int sas_discover_new(struct domain_device *dev, int phy_id) struct domain_device *child; int res; - SAS_DPRINTK("ex %016llx phy%d new device attached\n", - SAS_ADDR(dev->sas_addr), phy_id); + pr_debug("ex %016llx phy%d new device attached\n", + SAS_ADDR(dev->sas_addr), phy_id); res = sas_ex_phy_discover(dev, phy_id); if (res) return res; @@ -2048,15 +2032,15 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id, bool last) if (ata_dev && phy->attached_dev_type == SAS_SATA_PENDING) action = ", needs recovery"; - SAS_DPRINTK("ex %016llx phy 0x%x broadcast flutter%s\n", - SAS_ADDR(dev->sas_addr), phy_id, action); + pr_debug("ex %016llx phy 0x%x broadcast flutter%s\n", + SAS_ADDR(dev->sas_addr), phy_id, action); return res; } /* we always have to delete the old device when we went here */ - SAS_DPRINTK("ex %016llx phy 0x%x replace %016llx\n", - SAS_ADDR(dev->sas_addr), phy_id, - SAS_ADDR(phy->attached_sas_addr)); + pr_info("ex %016llx phy 0x%x replace %016llx\n", + SAS_ADDR(dev->sas_addr), phy_id, + SAS_ADDR(phy->attached_sas_addr)); sas_unregister_devs_sas_addr(dev, phy_id, last); return sas_discover_new(dev, phy_id); @@ -2084,8 +2068,8 @@ static int sas_rediscover(struct domain_device *dev, const int phy_id) int i; bool last = true; /* is this the last phy of the port */ - SAS_DPRINTK("ex %016llx phy%d originated BROADCAST(CHANGE)\n", - SAS_ADDR(dev->sas_addr), phy_id); + pr_debug("ex %016llx phy%d originated BROADCAST(CHANGE)\n", + SAS_ADDR(dev->sas_addr), phy_id); if (SAS_ADDR(changed_phy->attached_sas_addr) != 0) { for (i = 0; i < ex->num_phys; i++) { @@ -2095,8 +2079,8 @@ static int sas_rediscover(struct domain_device *dev, const int phy_id) continue; if (SAS_ADDR(phy->attached_sas_addr) == SAS_ADDR(changed_phy->attached_sas_addr)) { - SAS_DPRINTK("phy%d part of wide port with " - "phy%d\n", phy_id, i); + pr_debug("phy%d part of wide port with phy%d\n", + phy_id, i); last = false; break; } @@ -2154,23 +2138,23 @@ void sas_smp_handler(struct bsg_job *job, struct Scsi_Host *shost, case SAS_FANOUT_EXPANDER_DEVICE: break; default: - printk("%s: can we send a smp request to a device?\n", + pr_err("%s: can we send a smp request to a device?\n", __func__); goto out; } dev = sas_find_dev_by_rphy(rphy); if 
(!dev) { - printk("%s: fail to find a domain_device?\n", __func__); + pr_err("%s: fail to find a domain_device?\n", __func__); goto out; } /* do we need to support multiple segments? */ if (job->request_payload.sg_cnt > 1 || job->reply_payload.sg_cnt > 1) { - printk("%s: multiple segments req %u, rsp %u\n", - __func__, job->request_payload.payload_len, - job->reply_payload.payload_len); + pr_info("%s: multiple segments req %u, rsp %u\n", + __func__, job->request_payload.payload_len, + job->reply_payload.payload_len); goto out; } diff --git a/drivers/scsi/libsas/sas_init.c b/drivers/scsi/libsas/sas_init.c index ede0af78144f..221340ee8651 100644 --- a/drivers/scsi/libsas/sas_init.c +++ b/drivers/scsi/libsas/sas_init.c @@ -128,19 +128,19 @@ int sas_register_ha(struct sas_ha_struct *sas_ha) error = sas_register_phys(sas_ha); if (error) { - printk(KERN_NOTICE "couldn't register sas phys:%d\n", error); + pr_notice("couldn't register sas phys:%d\n", error); return error; } error = sas_register_ports(sas_ha); if (error) { - printk(KERN_NOTICE "couldn't register sas ports:%d\n", error); + pr_notice("couldn't register sas ports:%d\n", error); goto Undo_phys; } error = sas_init_events(sas_ha); if (error) { - printk(KERN_NOTICE "couldn't start event thread:%d\n", error); + pr_notice("couldn't start event thread:%d\n", error); goto Undo_ports; } @@ -623,8 +623,8 @@ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy) if (atomic_read(&phy->event_nr) > phy->ha->event_thres) { if (i->dft->lldd_control_phy) { if (cmpxchg(&phy->in_shutdown, 0, 1) == 0) { - sas_printk("The phy%02d bursting events, shut it down.\n", - phy->id); + pr_notice("The phy%02d bursting events, shut it down.\n", + phy->id); sas_notify_phy_event(phy, PHYE_SHUTDOWN); } } else { diff --git a/drivers/scsi/libsas/sas_internal.h b/drivers/scsi/libsas/sas_internal.h index 50e12d662ffe..2cdb981cf476 100644 --- a/drivers/scsi/libsas/sas_internal.h +++ b/drivers/scsi/libsas/sas_internal.h @@ -32,9 +32,13 @@ #include #include -#define sas_printk(fmt, ...) printk(KERN_NOTICE "sas: " fmt, ## __VA_ARGS__) +#ifdef pr_fmt +#undef pr_fmt +#endif + +#define SAS_FMT "sas: " -#define SAS_DPRINTK(fmt, ...) printk(KERN_DEBUG "sas: " fmt, ## __VA_ARGS__) +#define pr_fmt(fmt) SAS_FMT fmt #define TO_SAS_TASK(_scsi_cmd) ((void *)(_scsi_cmd)->host_scribble) #define ASSIGN_SAS_TASK(_sc, _t) do { (_sc)->host_scribble = (void *) _t; } while (0) @@ -120,10 +124,10 @@ static inline void sas_smp_host_handler(struct bsg_job *job, static inline void sas_fail_probe(struct domain_device *dev, const char *func, int err) { - SAS_DPRINTK("%s: for %s device %16llx returned %d\n", - func, dev->parent ? "exp-attached" : - "direct-attached", - SAS_ADDR(dev->sas_addr), err); + pr_warn("%s: for %s device %16llx returned %d\n", + func, dev->parent ? 
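
[The sas_internal.h hunk above is what makes all these pr_*() conversions equivalent to the old prefixed macros: redefining pr_fmt() before the printk machinery is pulled in stamps "sas: " onto every message in the subsystem, so the prefix no longer needs repeating in each format string. The mechanism in isolation:]

    /* Must be defined before any header that includes linux/printk.h;
     * printk.h only supplies a default pr_fmt() if none exists yet.
     */
    #define pr_fmt(fmt) "sas: " fmt

    #include <linux/printk.h>

    static void demo(void)
    {
            pr_notice("couldn't register sas phys:%d\n", -19);
            /* emits: "sas: couldn't register sas phys:-19" */
    }
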
"exp-attached" : + "direct-attached", + SAS_ADDR(dev->sas_addr), err); sas_unregister_dev(dev->port, dev); } diff --git a/drivers/scsi/libsas/sas_phy.c b/drivers/scsi/libsas/sas_phy.c index bf3e1b979ca6..0374243c85d0 100644 --- a/drivers/scsi/libsas/sas_phy.c +++ b/drivers/scsi/libsas/sas_phy.c @@ -122,11 +122,11 @@ static void sas_phye_shutdown(struct work_struct *work) phy->enabled = 0; ret = i->dft->lldd_control_phy(phy, PHY_FUNC_DISABLE, NULL); if (ret) - sas_printk("lldd disable phy%02d returned %d\n", - phy->id, ret); + pr_notice("lldd disable phy%02d returned %d\n", + phy->id, ret); } else - sas_printk("phy%02d is not enabled, cannot shutdown\n", - phy->id); + pr_notice("phy%02d is not enabled, cannot shutdown\n", + phy->id); } /* ---------- Phy class registration ---------- */ diff --git a/drivers/scsi/libsas/sas_port.c b/drivers/scsi/libsas/sas_port.c index fad23dd39114..03fe479359b6 100644 --- a/drivers/scsi/libsas/sas_port.c +++ b/drivers/scsi/libsas/sas_port.c @@ -110,9 +110,9 @@ static void sas_form_port(struct asd_sas_phy *phy) wake_up(&sas_ha->eh_wait_q); return; } else { - SAS_DPRINTK("%s: phy%d belongs to port%d already(%d)!\n", - __func__, phy->id, phy->port->id, - phy->port->num_phys); + pr_info("%s: phy%d belongs to port%d already(%d)!\n", + __func__, phy->id, phy->port->id, + phy->port->num_phys); return; } } @@ -125,8 +125,8 @@ static void sas_form_port(struct asd_sas_phy *phy) if (*(u64 *) port->sas_addr && phy_is_wideport_member(port, phy) && port->num_phys > 0) { /* wide port */ - SAS_DPRINTK("phy%d matched wide port%d\n", phy->id, - port->id); + pr_debug("phy%d matched wide port%d\n", phy->id, + port->id); break; } spin_unlock(&port->phy_list_lock); @@ -147,8 +147,7 @@ static void sas_form_port(struct asd_sas_phy *phy) } if (i >= sas_ha->num_phys) { - printk(KERN_NOTICE "%s: couldn't find a free port, bug?\n", - __func__); + pr_err("%s: couldn't find a free port, bug?\n", __func__); spin_unlock_irqrestore(&sas_ha->phy_port_lock, flags); return; } @@ -180,10 +179,10 @@ static void sas_form_port(struct asd_sas_phy *phy) } sas_port_add_phy(port->port, phy->phy); - SAS_DPRINTK("%s added to %s, phy_mask:0x%x (%16llx)\n", - dev_name(&phy->phy->dev), dev_name(&port->port->dev), - port->phy_mask, - SAS_ADDR(port->attached_sas_addr)); + pr_debug("%s added to %s, phy_mask:0x%x (%16llx)\n", + dev_name(&phy->phy->dev), dev_name(&port->port->dev), + port->phy_mask, + SAS_ADDR(port->attached_sas_addr)); if (port->port_dev) port->port_dev->pathways = port->num_phys; @@ -279,7 +278,7 @@ void sas_porte_broadcast_rcvd(struct work_struct *work) prim = phy->sas_prim; spin_unlock_irqrestore(&phy->sas_prim_lock, flags); - SAS_DPRINTK("broadcast received: %d\n", prim); + pr_debug("broadcast received: %d\n", prim); sas_discover_event(phy->port, DISCE_REVALIDATE_DOMAIN); if (phy->port) diff --git a/drivers/scsi/libsas/sas_scsi_host.c b/drivers/scsi/libsas/sas_scsi_host.c index 33229348dcb6..c43a00a9d819 100644 --- a/drivers/scsi/libsas/sas_scsi_host.c +++ b/drivers/scsi/libsas/sas_scsi_host.c @@ -93,9 +93,8 @@ static void sas_end_task(struct scsi_cmnd *sc, struct sas_task *task) hs = DID_ERROR; break; case SAS_PROTO_RESPONSE: - SAS_DPRINTK("LLDD:%s sent SAS_PROTO_RESP for an SSP " - "task; please report this\n", - task->dev->port->ha->sas_ha_name); + pr_notice("LLDD:%s sent SAS_PROTO_RESP for an SSP task; please report this\n", + task->dev->port->ha->sas_ha_name); break; case SAS_ABORTED_TASK: hs = DID_ABORT; @@ -132,12 +131,12 @@ static void sas_scsi_task_done(struct sas_task *task) if 
(unlikely(!task)) { /* task will be completed by the error handler */ - SAS_DPRINTK("task done but aborted\n"); + pr_debug("task done but aborted\n"); return; } if (unlikely(!sc)) { - SAS_DPRINTK("task_done called with non existing SCSI cmnd!\n"); + pr_debug("task_done called with non existing SCSI cmnd!\n"); sas_free_task(task); return; } @@ -208,7 +207,7 @@ int sas_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) return 0; out_free_task: - SAS_DPRINTK("lldd_execute_task returned: %d\n", res); + pr_debug("lldd_execute_task returned: %d\n", res); ASSIGN_SAS_TASK(cmd, NULL); sas_free_task(task); if (res == -SAS_QUEUE_FULL) @@ -301,40 +300,38 @@ static enum task_disposition sas_scsi_find_task(struct sas_task *task) to_sas_internal(task->dev->port->ha->core.shost->transportt); for (i = 0; i < 5; i++) { - SAS_DPRINTK("%s: aborting task 0x%p\n", __func__, task); + pr_notice("%s: aborting task 0x%p\n", __func__, task); res = si->dft->lldd_abort_task(task); spin_lock_irqsave(&task->task_state_lock, flags); if (task->task_state_flags & SAS_TASK_STATE_DONE) { spin_unlock_irqrestore(&task->task_state_lock, flags); - SAS_DPRINTK("%s: task 0x%p is done\n", __func__, - task); + pr_debug("%s: task 0x%p is done\n", __func__, task); return TASK_IS_DONE; } spin_unlock_irqrestore(&task->task_state_lock, flags); if (res == TMF_RESP_FUNC_COMPLETE) { - SAS_DPRINTK("%s: task 0x%p is aborted\n", - __func__, task); + pr_notice("%s: task 0x%p is aborted\n", + __func__, task); return TASK_IS_ABORTED; } else if (si->dft->lldd_query_task) { - SAS_DPRINTK("%s: querying task 0x%p\n", - __func__, task); + pr_notice("%s: querying task 0x%p\n", __func__, task); res = si->dft->lldd_query_task(task); switch (res) { case TMF_RESP_FUNC_SUCC: - SAS_DPRINTK("%s: task 0x%p at LU\n", - __func__, task); + pr_notice("%s: task 0x%p at LU\n", __func__, + task); return TASK_IS_AT_LU; case TMF_RESP_FUNC_COMPLETE: - SAS_DPRINTK("%s: task 0x%p not at LU\n", - __func__, task); + pr_notice("%s: task 0x%p not at LU\n", + __func__, task); return TASK_IS_NOT_AT_LU; case TMF_RESP_FUNC_FAILED: - SAS_DPRINTK("%s: task 0x%p failed to abort\n", - __func__, task); - return TASK_ABORT_FAILED; - } + pr_notice("%s: task 0x%p failed to abort\n", + __func__, task); + return TASK_ABORT_FAILED; + } } } @@ -350,9 +347,9 @@ static int sas_recover_lu(struct domain_device *dev, struct scsi_cmnd *cmd) int_to_scsilun(cmd->device->lun, &lun); - SAS_DPRINTK("eh: device %llx LUN %llx has the task\n", - SAS_ADDR(dev->sas_addr), - cmd->device->lun); + pr_notice("eh: device %llx LUN %llx has the task\n", + SAS_ADDR(dev->sas_addr), + cmd->device->lun); if (i->dft->lldd_abort_task_set) res = i->dft->lldd_abort_task_set(dev, lun.scsi_lun); @@ -376,8 +373,8 @@ static int sas_recover_I_T(struct domain_device *dev) struct sas_internal *i = to_sas_internal(dev->port->ha->core.shost->transportt); - SAS_DPRINTK("I_T nexus reset for dev %016llx\n", - SAS_ADDR(dev->sas_addr)); + pr_notice("I_T nexus reset for dev %016llx\n", + SAS_ADDR(dev->sas_addr)); if (i->dft->lldd_I_T_nexus_reset) res = i->dft->lldd_I_T_nexus_reset(dev); @@ -471,9 +468,9 @@ static int sas_queue_reset(struct domain_device *dev, int reset_type, return SUCCESS; } - SAS_DPRINTK("%s reset of %s failed\n", - reset_type == SAS_DEV_LU_RESET ? "LUN" : "Bus", - dev_name(&dev->rphy->dev)); + pr_warn("%s reset of %s failed\n", + reset_type == SAS_DEV_LU_RESET ? 
"LUN" : "Bus", + dev_name(&dev->rphy->dev)); return FAILED; } @@ -501,7 +498,7 @@ int sas_eh_abort_handler(struct scsi_cmnd *cmd) if (task) res = i->dft->lldd_abort_task(task); else - SAS_DPRINTK("no task to abort\n"); + pr_notice("no task to abort\n"); if (res == TMF_RESP_FUNC_SUCC || res == TMF_RESP_FUNC_COMPLETE) return SUCCESS; @@ -612,34 +609,33 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head * spin_unlock_irqrestore(&task->task_state_lock, flags); if (need_reset) { - SAS_DPRINTK("%s: task 0x%p requests reset\n", - __func__, task); + pr_notice("%s: task 0x%p requests reset\n", + __func__, task); goto reset; } - SAS_DPRINTK("trying to find task 0x%p\n", task); + pr_debug("trying to find task 0x%p\n", task); res = sas_scsi_find_task(task); switch (res) { case TASK_IS_DONE: - SAS_DPRINTK("%s: task 0x%p is done\n", __func__, + pr_notice("%s: task 0x%p is done\n", __func__, task); sas_eh_finish_cmd(cmd); continue; case TASK_IS_ABORTED: - SAS_DPRINTK("%s: task 0x%p is aborted\n", - __func__, task); + pr_notice("%s: task 0x%p is aborted\n", + __func__, task); sas_eh_finish_cmd(cmd); continue; case TASK_IS_AT_LU: - SAS_DPRINTK("task 0x%p is at LU: lu recover\n", task); + pr_info("task 0x%p is at LU: lu recover\n", task); reset: tmf_resp = sas_recover_lu(task->dev, cmd); if (tmf_resp == TMF_RESP_FUNC_COMPLETE) { - SAS_DPRINTK("dev %016llx LU %llx is " - "recovered\n", - SAS_ADDR(task->dev), - cmd->device->lun); + pr_notice("dev %016llx LU %llx is recovered\n", + SAS_ADDR(task->dev), + cmd->device->lun); sas_eh_finish_cmd(cmd); sas_scsi_clear_queue_lu(work_q, cmd); goto Again; @@ -647,14 +643,14 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head * /* fallthrough */ case TASK_IS_NOT_AT_LU: case TASK_ABORT_FAILED: - SAS_DPRINTK("task 0x%p is not at LU: I_T recover\n", - task); + pr_notice("task 0x%p is not at LU: I_T recover\n", + task); tmf_resp = sas_recover_I_T(task->dev); if (tmf_resp == TMF_RESP_FUNC_COMPLETE || tmf_resp == -ENODEV) { struct domain_device *dev = task->dev; - SAS_DPRINTK("I_T %016llx recovered\n", - SAS_ADDR(task->dev->sas_addr)); + pr_notice("I_T %016llx recovered\n", + SAS_ADDR(task->dev->sas_addr)); sas_eh_finish_cmd(cmd); sas_scsi_clear_queue_I_T(work_q, dev); goto Again; @@ -663,12 +659,12 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head * try_to_reset_cmd_device(cmd); if (i->dft->lldd_clear_nexus_port) { struct asd_sas_port *port = task->dev->port; - SAS_DPRINTK("clearing nexus for port:%d\n", - port->id); + pr_debug("clearing nexus for port:%d\n", + port->id); res = i->dft->lldd_clear_nexus_port(port); if (res == TMF_RESP_FUNC_COMPLETE) { - SAS_DPRINTK("clear nexus port:%d " - "succeeded\n", port->id); + pr_notice("clear nexus port:%d succeeded\n", + port->id); sas_eh_finish_cmd(cmd); sas_scsi_clear_queue_port(work_q, port); @@ -676,11 +672,10 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head * } } if (i->dft->lldd_clear_nexus_ha) { - SAS_DPRINTK("clear nexus ha\n"); + pr_debug("clear nexus ha\n"); res = i->dft->lldd_clear_nexus_ha(ha); if (res == TMF_RESP_FUNC_COMPLETE) { - SAS_DPRINTK("clear nexus ha " - "succeeded\n"); + pr_notice("clear nexus ha succeeded\n"); sas_eh_finish_cmd(cmd); goto clear_q; } @@ -689,10 +684,9 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head * * of effort could recover from errors. Quite * possibly the HA just disappeared. 
*/ - SAS_DPRINTK("error from device %llx, LUN %llx " - "couldn't be recovered in any way\n", - SAS_ADDR(task->dev->sas_addr), - cmd->device->lun); + pr_err("error from device %llx, LUN %llx couldn't be recovered in any way\n", + SAS_ADDR(task->dev->sas_addr), + cmd->device->lun); sas_eh_finish_cmd(cmd); goto clear_q; @@ -704,7 +698,7 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head * return; clear_q: - SAS_DPRINTK("--- Exit %s -- clear_q\n", __func__); + pr_debug("--- Exit %s -- clear_q\n", __func__); list_for_each_entry_safe(cmd, n, work_q, eh_entry) sas_eh_finish_cmd(cmd); goto out; @@ -758,8 +752,8 @@ retry: list_splice_init(&shost->eh_cmd_q, &eh_work_q); spin_unlock_irq(shost->host_lock); - SAS_DPRINTK("Enter %s busy: %d failed: %d\n", - __func__, scsi_host_busy(shost), shost->host_failed); + pr_notice("Enter %s busy: %d failed: %d\n", + __func__, scsi_host_busy(shost), shost->host_failed); /* * Deal with commands that still have SAS tasks (i.e. they didn't * complete via the normal sas_task completion mechanism), @@ -800,9 +794,9 @@ out: if (retry) goto retry; - SAS_DPRINTK("--- Exit %s: busy: %d failed: %d tries: %d\n", - __func__, scsi_host_busy(shost), - shost->host_failed, tries); + pr_notice("--- Exit %s: busy: %d failed: %d tries: %d\n", + __func__, scsi_host_busy(shost), + shost->host_failed, tries); } int sas_ioctl(struct scsi_device *sdev, int cmd, void __user *arg) @@ -875,9 +869,8 @@ int sas_slave_configure(struct scsi_device *scsi_dev) if (scsi_dev->tagged_supported) { scsi_change_queue_depth(scsi_dev, SAS_DEF_QD); } else { - SAS_DPRINTK("device %llx, LUN %llx doesn't support " - "TCQ\n", SAS_ADDR(dev->sas_addr), - scsi_dev->lun); + pr_notice("device %llx, LUN %llx doesn't support TCQ\n", + SAS_ADDR(dev->sas_addr), scsi_dev->lun); scsi_change_queue_depth(scsi_dev, 1); } @@ -930,16 +923,10 @@ void sas_task_abort(struct sas_task *task) return; } - if (dev_is_sata(task->dev)) { + if (dev_is_sata(task->dev)) sas_ata_task_abort(task); - } else { - struct request_queue *q = sc->device->request_queue; - unsigned long flags; - - spin_lock_irqsave(q->queue_lock, flags); + else blk_abort_request(sc->request); - spin_unlock_irqrestore(q->queue_lock, flags); - } } void sas_target_destroy(struct scsi_target *starget) diff --git a/drivers/scsi/libsas/sas_task.c b/drivers/scsi/libsas/sas_task.c index a78e5bd3e514..c3b9befad4e6 100644 --- a/drivers/scsi/libsas/sas_task.c +++ b/drivers/scsi/libsas/sas_task.c @@ -1,3 +1,6 @@ + +#include "sas_internal.h" + #include #include #include @@ -23,11 +26,8 @@ void sas_ssp_task_response(struct device *dev, struct sas_task *task, memcpy(tstat->buf, iu->sense_data, tstat->buf_valid_size); if (iu->status != SAM_STAT_CHECK_CONDITION) - dev_printk(KERN_WARNING, dev, - "dev %llx sent sense data, but " - "stat(%x) is not CHECK CONDITION\n", - SAS_ADDR(task->dev->sas_addr), - iu->status); + dev_warn(dev, "dev %llx sent sense data, but stat(%x) is not CHECK CONDITION\n", + SAS_ADDR(task->dev->sas_addr), iu->status); } else /* when datapres contains corrupt/unknown value... 
*/ diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h index c1eb2b00ca7f..ebdfe5b26937 100644 --- a/drivers/scsi/lpfc/lpfc.h +++ b/drivers/scsi/lpfc/lpfc.h @@ -335,6 +335,18 @@ enum hba_state { LPFC_HBA_ERROR = -1 }; +struct lpfc_trunk_link_state { + enum hba_state state; + uint8_t fault; +}; + +struct lpfc_trunk_link { + struct lpfc_trunk_link_state link0, + link1, + link2, + link3; +}; + struct lpfc_vport { struct lpfc_hba *phba; struct list_head listentry; @@ -490,6 +502,7 @@ struct lpfc_vport { struct nvme_fc_local_port *localport; uint8_t nvmei_support; /* driver supports NVME Initiator */ uint32_t last_fcp_wqidx; + uint32_t rcv_flogi_cnt; /* How many unsol FLOGIs ACK'd. */ }; struct hbq_s { @@ -683,6 +696,7 @@ struct lpfc_hba { uint32_t iocb_cmd_size; uint32_t iocb_rsp_size; + struct lpfc_trunk_link trunk_link; enum hba_state link_state; uint32_t link_flag; /* link state flags */ #define LS_LOOPBACK_MODE 0x1 /* NPort is in Loopback mode */ @@ -717,6 +731,7 @@ struct lpfc_hba { * capability */ #define HBA_NVME_IOQ_FLUSH 0x80000 /* NVME IO queues flushed. */ +#define HBA_FLOGI_ISSUED 0x100000 /* FLOGI was issued */ uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/ struct lpfc_dmabuf slim2p; @@ -783,6 +798,7 @@ struct lpfc_hba { #define LPFC_FCF_PRIORITY 2 /* Priority fcf failover */ uint32_t cfg_fcf_failover_policy; uint32_t cfg_fcp_io_sched; + uint32_t cfg_ns_query; uint32_t cfg_fcp2_no_tgt_reset; uint32_t cfg_cr_delay; uint32_t cfg_cr_count; @@ -989,7 +1005,8 @@ struct lpfc_hba { spinlock_t port_list_lock; /* lock for port_list mutations */ struct lpfc_vport *pport; /* physical lpfc_vport pointer */ uint16_t max_vpi; /* Maximum virtual nports */ -#define LPFC_MAX_VPI 0xFFFF /* Max number of VPI supported */ +#define LPFC_MAX_VPI 0xFF /* Max number VPI supported 0 - 0xff */ +#define LPFC_MAX_VPORTS 0x100 /* Max vports per port, with pport */ uint16_t max_vports; /* * For IOV HBAs max_vpi can change * after a reset. max_vports is max @@ -1111,6 +1128,10 @@ struct lpfc_hba { uint16_t vlan_id; struct list_head fcf_conn_rec_list; + bool defer_flogi_acc_flag; + uint16_t defer_flogi_acc_rx_id; + uint16_t defer_flogi_acc_ox_id; + spinlock_t ct_ev_lock; /* synchronize access to ct_ev_waiters */ struct list_head ct_ev_waiters; struct unsol_rcv_ct_ctx ct_ctx[LPFC_CT_CTX_MAX]; @@ -1262,6 +1283,12 @@ lpfc_sli_read_hs(struct lpfc_hba *phba) static inline struct lpfc_sli_ring * lpfc_phba_elsring(struct lpfc_hba *phba) { + /* Return NULL if sli_rev has become invalid due to bad fw */ + if (phba->sli_rev != LPFC_SLI_REV4 && + phba->sli_rev != LPFC_SLI_REV3 && + phba->sli_rev != LPFC_SLI_REV2) + return NULL; + if (phba->sli_rev == LPFC_SLI_REV4) { if (phba->sli4_hba.els_wq) return phba->sli4_hba.els_wq->pring; diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c index dda7f450b96d..4bae72cbf3f6 100644 --- a/drivers/scsi/lpfc/lpfc_attr.c +++ b/drivers/scsi/lpfc/lpfc_attr.c @@ -883,6 +883,42 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr, } } + if ((phba->sli_rev == LPFC_SLI_REV4) && + ((bf_get(lpfc_sli_intf_if_type, + &phba->sli4_hba.sli_intf) == + LPFC_SLI_INTF_IF_TYPE_6))) { + struct lpfc_trunk_link link = phba->trunk_link; + + if (bf_get(lpfc_conf_trunk_port0, &phba->sli4_hba)) + len += snprintf(buf + len, PAGE_SIZE - len, + "Trunk port 0: Link %s %s\n", + (link.link0.state == LPFC_LINK_UP) ? + "Up" : "Down. 
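
[lpfc_phba_elsring() now returns NULL when the SLI revision read back from the adapter matches no supported value (possible after a firmware fault) instead of guessing, which pushes a NULL check onto callers. A generic sketch of that defensive shape, using stand-in types rather than the lpfc structures:]

    #include <stddef.h>

    struct demo_ring { int id; };

    static struct demo_ring *demo_els_ring(int sli_rev,
                                           struct demo_ring *v4_ring,
                                           struct demo_ring *legacy_ring)
    {
            /* Reject anything but the known revisions 2, 3 and 4. */
            if (sli_rev != 2 && sli_rev != 3 && sli_rev != 4)
                    return NULL;    /* bad firmware state; caller checks */

            return sli_rev == 4 ? v4_ring : legacy_ring;
    }
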
", + trunk_errmsg[link.link0.fault]); + + if (bf_get(lpfc_conf_trunk_port1, &phba->sli4_hba)) + len += snprintf(buf + len, PAGE_SIZE - len, + "Trunk port 1: Link %s %s\n", + (link.link1.state == LPFC_LINK_UP) ? + "Up" : "Down. ", + trunk_errmsg[link.link1.fault]); + + if (bf_get(lpfc_conf_trunk_port2, &phba->sli4_hba)) + len += snprintf(buf + len, PAGE_SIZE - len, + "Trunk port 2: Link %s %s\n", + (link.link2.state == LPFC_LINK_UP) ? + "Up" : "Down. ", + trunk_errmsg[link.link2.fault]); + + if (bf_get(lpfc_conf_trunk_port3, &phba->sli4_hba)) + len += snprintf(buf + len, PAGE_SIZE - len, + "Trunk port 3: Link %s %s\n", + (link.link3.state == LPFC_LINK_UP) ? + "Up" : "Down. ", + trunk_errmsg[link.link3.fault]); + + } + return len; } @@ -1155,6 +1191,82 @@ out: return 0; } +/** + * lpfc_reset_pci_bus - resets PCI bridge controller's secondary bus of an HBA + * @phba: lpfc_hba pointer. + * + * Description: + * Issues a PCI secondary bus reset for the phba->pcidev. + * + * Notes: + * First walks the bus_list to ensure only PCI devices with Emulex + * vendor id, device ids that support hot reset, only one occurrence + * of function 0, and all ports on the bus are in offline mode to ensure the + * hot reset only affects one valid HBA. + * + * Returns: + * -ENOTSUPP, cfg_enable_hba_reset must be of value 2 + * -ENODEV, NULL ptr to pcidev + * -EBADSLT, detected invalid device + * -EBUSY, port is not in offline state + * 0, successful + */ +int +lpfc_reset_pci_bus(struct lpfc_hba *phba) +{ + struct pci_dev *pdev = phba->pcidev; + struct Scsi_Host *shost = NULL; + struct lpfc_hba *phba_other = NULL; + struct pci_dev *ptr = NULL; + int res; + + if (phba->cfg_enable_hba_reset != 2) + return -ENOTSUPP; + + if (!pdev) { + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, "8345 pdev NULL!\n"); + return -ENODEV; + } + + res = lpfc_check_pci_resettable(phba); + if (res) + return res; + + /* Walk the list of devices on the pci_dev's bus */ + list_for_each_entry(ptr, &pdev->bus->devices, bus_list) { + /* Check port is offline */ + shost = pci_get_drvdata(ptr); + if (shost) { + phba_other = + ((struct lpfc_vport *)shost->hostdata)->phba; + if (!(phba_other->pport->fc_flag & FC_OFFLINE_MODE)) { + lpfc_printf_log(phba_other, KERN_INFO, LOG_INIT, + "8349 WWPN = 0x%02x%02x%02x%02x" + "%02x%02x%02x%02x is not " + "offline!\n", + phba_other->wwpn[0], + phba_other->wwpn[1], + phba_other->wwpn[2], + phba_other->wwpn[3], + phba_other->wwpn[4], + phba_other->wwpn[5], + phba_other->wwpn[6], + phba_other->wwpn[7]); + return -EBUSY; + } + } + } + + /* Issue PCI bus reset */ + res = pci_reset_bus(pdev); + if (res) { + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, + "8350 PCI reset bus failed: %d\n", res); + } + + return res; +} + /** * lpfc_selective_reset - Offline then onlines the port * @phba: lpfc_hba pointer. 
@@ -1322,7 +1434,7 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode) return -EACCES; if ((phba->sli_rev < LPFC_SLI_REV4) || - (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) != + (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) < LPFC_SLI_INTF_IF_TYPE_2)) return -EPERM; @@ -1430,6 +1542,66 @@ lpfc_nport_evt_cnt_show(struct device *dev, struct device_attribute *attr, return snprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt); } +int +lpfc_set_trunking(struct lpfc_hba *phba, char *buff_out) +{ + LPFC_MBOXQ_t *mbox = NULL; + unsigned long val = 0; + char *pval = 0; + int rc = 0; + + if (!strncmp("enable", buff_out, + strlen("enable"))) { + pval = buff_out + strlen("enable") + 1; + rc = kstrtoul(pval, 0, &val); + if (rc) + return rc; /* Invalid number */ + } else if (!strncmp("disable", buff_out, + strlen("disable"))) { + val = 0; + } else { + return -EINVAL; /* Invalid command */ + } + + switch (val) { + case 0: + val = 0x0; /* Disable */ + break; + case 2: + val = 0x1; /* Enable two port trunk */ + break; + case 4: + val = 0x2; /* Enable four port trunk */ + break; + default: + return -EINVAL; + } + + lpfc_printf_log(phba, KERN_ERR, LOG_MBOX, + "0070 Set trunk mode with val %ld ", val); + + mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); + if (!mbox) + return -ENOMEM; + + lpfc_sli4_config(phba, mbox, LPFC_MBOX_SUBSYSTEM_FCOE, + LPFC_MBOX_OPCODE_FCOE_FC_SET_TRUNK_MODE, + 12, LPFC_SLI4_MBX_EMBED); + + bf_set(lpfc_mbx_set_trunk_mode, + &mbox->u.mqe.un.set_trunk_mode, + val); + rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL); + if (rc) + lpfc_printf_log(phba, KERN_ERR, LOG_MBOX, + "0071 Set trunk mode failed with status: %d", + rc); + if (rc != MBX_TIMEOUT) + mempool_free(mbox, phba->mbox_mem_pool); + + return 0; +} + /** * lpfc_board_mode_show - Return the state of the board * @dev: class device that is converted into a Scsi_host. @@ -1522,6 +1694,11 @@ lpfc_board_mode_store(struct device *dev, struct device_attribute *attr, status = lpfc_sli4_pdev_reg_request(phba, LPFC_FW_RESET); else if (strncmp(buf, "dv_reset", sizeof("dv_reset") - 1) == 0) status = lpfc_sli4_pdev_reg_request(phba, LPFC_DV_RESET); + else if (strncmp(buf, "pci_bus_reset", sizeof("pci_bus_reset") - 1) + == 0) + status = lpfc_reset_pci_bus(phba); + else if (strncmp(buf, "trunk", sizeof("trunk") - 1) == 0) + status = lpfc_set_trunking(phba, (char *)buf + sizeof("trunk")); else status = -EINVAL; @@ -1590,7 +1767,7 @@ lpfc_get_hba_info(struct lpfc_hba *phba, pmb = &pmboxq->u.mb; pmb->mbxCommand = MBX_READ_CONFIG; pmb->mbxOwner = OWN_HOST; - pmboxq->context1 = NULL; + pmboxq->ctx_buf = NULL; if (phba->pport->fc_flag & FC_OFFLINE_MODE) rc = MBX_NOT_FINISHED; @@ -1622,6 +1799,9 @@ lpfc_get_hba_info(struct lpfc_hba *phba, max_vpi = (bf_get(lpfc_mbx_rd_conf_vpi_count, rd_config) > 0) ? 
(bf_get(lpfc_mbx_rd_conf_vpi_count, rd_config) - 1) : 0; + /* Limit the max we support */ + if (max_vpi > LPFC_MAX_VPI) + max_vpi = LPFC_MAX_VPI; if (mvpi) *mvpi = max_vpi; if (avpi) @@ -1637,8 +1817,13 @@ lpfc_get_hba_info(struct lpfc_hba *phba, *axri = pmb->un.varRdConfig.avail_xri; if (mvpi) *mvpi = pmb->un.varRdConfig.max_vpi; - if (avpi) - *avpi = pmb->un.varRdConfig.avail_vpi; + if (avpi) { + /* avail_vpi is only valid if link is up and ready */ + if (phba->link_state == LPFC_HBA_READY) + *avpi = pmb->un.varRdConfig.avail_vpi; + else + *avpi = pmb->un.varRdConfig.max_vpi; + } } mempool_free(pmboxq, phba->mbox_mem_pool); @@ -3831,8 +4016,9 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr, val); return -EINVAL; } - if (phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC && - val == 4) { + if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC || + phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) && + val == 4) { lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, "3114 Loop mode not supported\n"); return -EINVAL; @@ -4254,7 +4440,7 @@ lpfc_link_speed_store(struct device *dev, struct device_attribute *attr, uint32_t prev_val, if_type; if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf); - if (if_type == LPFC_SLI_INTF_IF_TYPE_2 && + if (if_type >= LPFC_SLI_INTF_IF_TYPE_2 && phba->hba_flag & HBA_FORCED_LINK_SPEED) return -EPERM; @@ -5069,6 +5255,18 @@ LPFC_ATTR_RW(fcp_io_sched, LPFC_FCP_SCHED_ROUND_ROBIN, "Determine scheduling algorithm for " "issuing commands [0] - Round Robin, [1] - Current CPU"); +/* + * lpfc_ns_query: Determine algorithm for NameServer queries after RSCN + * range is [0,1]. Default value is 0. + * For [0], GID_FT is used for NameServer queries after RSCN (default) + * For [1], GID_PT is used for NameServer queries after RSCN + * + */ +LPFC_ATTR_RW(ns_query, LPFC_NS_QUERY_GID_FT, + LPFC_NS_QUERY_GID_FT, LPFC_NS_QUERY_GID_PT, + "Determine algorithm for NameServer queries after RSCN " + "[0] - GID_FT, [1] - GID_PT"); + /* # lpfc_fcp2_no_tgt_reset: Determine bus reset behavior # range is [0,1]. Default value is 0. @@ -5257,9 +5455,10 @@ LPFC_ATTR_R(nvme_io_channel, # lpfc_enable_hba_reset: Allow or prevent HBA resets to the hardware. # 0 = HBA resets disabled # 1 = HBA resets enabled (default) -# Value range is [0,1]. Default value is 1. +# 2 = HBA reset via PCI bus reset enabled +# Value range is [0,2]. Default value is 1. */ -LPFC_ATTR_R(enable_hba_reset, 1, 0, 1, "Enable HBA resets from the driver."); +LPFC_ATTR_RW(enable_hba_reset, 1, 0, 2, "Enable HBA resets from the driver."); /* # lpfc_enable_hba_heartbeat: Disable HBA heartbeat timer. 
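The lpfc_ns_query attribute added above selects which NameServer query the driver issues after an RSCN. GID_FT returns only the ports registered for a single FC-4 type, so a host supporting both FCP and NVMe must issue it once per type, while GID_PT returns every N_Port of a given port type in one exchange. A hedged sketch of the resulting dispatch, mirroring the lpfc_els_handle_rscn() change later in this patch (requery_nameserver() and issue_ct() are hypothetical helpers that presume the lpfc driver headers; lpfc_ns_cmd() and the SLI_CTNS_* codes are the driver's real interfaces):

/* Hypothetical wrapper: one CT request via the driver's lpfc_ns_cmd(). */
static int issue_ct(struct lpfc_vport *vport, int cmdcode, uint32_t context)
{
	return lpfc_ns_cmd(vport, cmdcode, 0, context);
}

static int requery_nameserver(struct lpfc_vport *vport)
{
	/* One GID_PT covers all N_Ports regardless of FC-4 type. */
	if (vport->phba->cfg_ns_query == LPFC_NS_QUERY_GID_PT)
		return issue_ct(vport, SLI_CTNS_GID_PT, GID_PT_N_PORT);

	/* Default GID_FT path: one query per registered FC-4 type. */
	if (issue_ct(vport, SLI_CTNS_GID_FT, FC_TYPE_FCP))
		return -EIO;
	return issue_ct(vport, SLI_CTNS_GID_FT, FC_TYPE_NVME);
}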
@@ -5514,6 +5713,7 @@ struct device_attribute *lpfc_hba_attrs[] = { &dev_attr_lpfc_scan_down, &dev_attr_lpfc_link_speed, &dev_attr_lpfc_fcp_io_sched, + &dev_attr_lpfc_ns_query, &dev_attr_lpfc_fcp2_no_tgt_reset, &dev_attr_lpfc_cr_delay, &dev_attr_lpfc_cr_count, @@ -6006,6 +6206,9 @@ lpfc_get_host_speed(struct Scsi_Host *shost) case LPFC_LINK_SPEED_64GHZ: fc_host_speed(shost) = FC_PORTSPEED_64GBIT; break; + case LPFC_LINK_SPEED_128GHZ: + fc_host_speed(shost) = FC_PORTSPEED_128GBIT; + break; default: fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN; break; @@ -6105,7 +6308,7 @@ lpfc_get_stats(struct Scsi_Host *shost) pmb = &pmboxq->u.mb; pmb->mbxCommand = MBX_READ_STATUS; pmb->mbxOwner = OWN_HOST; - pmboxq->context1 = NULL; + pmboxq->ctx_buf = NULL; pmboxq->vport = vport; if (vport->fc_flag & FC_OFFLINE_MODE) @@ -6137,7 +6340,7 @@ lpfc_get_stats(struct Scsi_Host *shost) memset(pmboxq, 0, sizeof (LPFC_MBOXQ_t)); pmb->mbxCommand = MBX_READ_LNK_STAT; pmb->mbxOwner = OWN_HOST; - pmboxq->context1 = NULL; + pmboxq->ctx_buf = NULL; pmboxq->vport = vport; if (vport->fc_flag & FC_OFFLINE_MODE) @@ -6217,7 +6420,7 @@ lpfc_reset_stats(struct Scsi_Host *shost) pmb->mbxCommand = MBX_READ_STATUS; pmb->mbxOwner = OWN_HOST; pmb->un.varWords[0] = 0x1; /* reset request */ - pmboxq->context1 = NULL; + pmboxq->ctx_buf = NULL; pmboxq->vport = vport; if ((vport->fc_flag & FC_OFFLINE_MODE) || @@ -6235,7 +6438,7 @@ lpfc_reset_stats(struct Scsi_Host *shost) memset(pmboxq, 0, sizeof(LPFC_MBOXQ_t)); pmb->mbxCommand = MBX_READ_LNK_STAT; pmb->mbxOwner = OWN_HOST; - pmboxq->context1 = NULL; + pmboxq->ctx_buf = NULL; pmboxq->vport = vport; if ((vport->fc_flag & FC_OFFLINE_MODE) || @@ -6564,6 +6767,7 @@ void lpfc_get_cfgparam(struct lpfc_hba *phba) { lpfc_fcp_io_sched_init(phba, lpfc_fcp_io_sched); + lpfc_ns_query_init(phba, lpfc_ns_query); lpfc_fcp2_no_tgt_reset_init(phba, lpfc_fcp2_no_tgt_reset); lpfc_cr_delay_init(phba, lpfc_cr_delay); lpfc_cr_count_init(phba, lpfc_cr_count); diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c index 7bd7ae86bed5..8698af86485d 100644 --- a/drivers/scsi/lpfc/lpfc_bsg.c +++ b/drivers/scsi/lpfc/lpfc_bsg.c @@ -2222,7 +2222,7 @@ lpfc_bsg_diag_loopback_mode(struct bsg_job *job) if (phba->sli_rev < LPFC_SLI_REV4) rc = lpfc_sli3_bsg_diag_loopback_mode(phba, job); - else if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) == + else if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >= LPFC_SLI_INTF_IF_TYPE_2) rc = lpfc_sli4_bsg_diag_loopback_mode(phba, job); else @@ -2262,7 +2262,7 @@ lpfc_sli4_bsg_diag_mode_end(struct bsg_job *job) if (phba->sli_rev < LPFC_SLI_REV4) return -ENODEV; - if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) != + if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) < LPFC_SLI_INTF_IF_TYPE_2) return -ENODEV; @@ -2354,7 +2354,7 @@ lpfc_sli4_bsg_link_diag_test(struct bsg_job *job) rc = -ENODEV; goto job_error; } - if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) != + if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) < LPFC_SLI_INTF_IF_TYPE_2) { rc = -ENODEV; goto job_error; @@ -2501,9 +2501,9 @@ static int lpfcdiag_loop_self_reg(struct lpfc_hba *phba, uint16_t *rpi) return -ENOMEM; } - dmabuff = (struct lpfc_dmabuf *) mbox->context1; - mbox->context1 = NULL; - mbox->context2 = NULL; + dmabuff = (struct lpfc_dmabuf *)mbox->ctx_buf; + mbox->ctx_buf = NULL; + mbox->ctx_ndlp = NULL; status = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO); if ((status != MBX_SUCCESS) || (mbox->u.mb.mbxStatus)) { @@ -3388,7 
+3388,7 @@ lpfc_bsg_issue_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq) unsigned long flags; uint8_t *pmb, *pmb_buf; - dd_data = pmboxq->context1; + dd_data = pmboxq->ctx_ndlp; /* * The outgoing buffer is readily referred from the dma buffer, @@ -3573,7 +3573,7 @@ lpfc_bsg_issue_mbox_ext_handle_job(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq) struct lpfc_sli_config_mbox *sli_cfg_mbx; uint8_t *pmbx; - dd_data = pmboxq->context1; + dd_data = pmboxq->ctx_buf; /* Determine if job has been aborted */ spin_lock_irqsave(&phba->ct_ev_lock, flags); @@ -3960,7 +3960,7 @@ lpfc_bsg_sli_cfg_read_cmd_ext(struct lpfc_hba *phba, struct bsg_job *job, pmboxq->mbox_cmpl = lpfc_bsg_issue_read_mbox_ext_cmpl; /* context fields to callback function */ - pmboxq->context1 = dd_data; + pmboxq->ctx_buf = dd_data; dd_data->type = TYPE_MBOX; dd_data->set_job = job; dd_data->context_un.mbox.pmboxq = pmboxq; @@ -4131,7 +4131,7 @@ lpfc_bsg_sli_cfg_write_cmd_ext(struct lpfc_hba *phba, struct bsg_job *job, pmboxq->mbox_cmpl = lpfc_bsg_issue_write_mbox_ext_cmpl; /* context fields to callback function */ - pmboxq->context1 = dd_data; + pmboxq->ctx_buf = dd_data; dd_data->type = TYPE_MBOX; dd_data->set_job = job; dd_data->context_un.mbox.pmboxq = pmboxq; @@ -4476,7 +4476,7 @@ lpfc_bsg_write_ebuf_set(struct lpfc_hba *phba, struct bsg_job *job, pmboxq->mbox_cmpl = lpfc_bsg_issue_write_mbox_ext_cmpl; /* context fields to callback function */ - pmboxq->context1 = dd_data; + pmboxq->ctx_buf = dd_data; dd_data->type = TYPE_MBOX; dd_data->set_job = job; dd_data->context_un.mbox.pmboxq = pmboxq; @@ -4761,7 +4761,7 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct bsg_job *job, if (mbox_req->inExtWLen || mbox_req->outExtWLen) { from = pmbx; ext = from + sizeof(MAILBOX_t); - pmboxq->context2 = ext; + pmboxq->ctx_buf = ext; pmboxq->in_ext_byte_len = mbox_req->inExtWLen * sizeof(uint32_t); pmboxq->out_ext_byte_len = @@ -4889,7 +4889,7 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct bsg_job *job, pmboxq->mbox_cmpl = lpfc_bsg_issue_mbox_cmpl; /* setup context field to pass wait_queue pointer to wake function */ - pmboxq->context1 = dd_data; + pmboxq->ctx_ndlp = dd_data; dd_data->type = TYPE_MBOX; dd_data->set_job = job; dd_data->context_un.mbox.pmboxq = pmboxq; @@ -5348,7 +5348,7 @@ lpfc_bsg_get_ras_config(struct bsg_job *job) sizeof(struct fc_bsg_request) + sizeof(struct lpfc_bsg_ras_req)) { lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC, - "6181 Received RAS_LOG request " + "6192 FW_LOG request received " "below minimum size\n"); rc = -EINVAL; goto ras_job_error; @@ -5356,7 +5356,7 @@ lpfc_bsg_get_ras_config(struct bsg_job *job) /* Check FW log status */ rc = lpfc_check_fwlog_support(phba); - if (rc == -EACCES || rc == -EPERM) + if (rc) goto ras_job_error; ras_reply = (struct lpfc_bsg_get_ras_config_reply *) @@ -5380,25 +5380,6 @@ ras_job_error: return rc; } -/** - * lpfc_ras_stop_fwlog: Disable FW logging by the adapter - * @phba: Pointer to HBA context object. - * - * Disable FW logging into host memory on the adapter. To - * be done before reading logs from the host memory. 
- **/ -static void -lpfc_ras_stop_fwlog(struct lpfc_hba *phba) -{ - struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog; - - ras_fwlog->ras_active = false; - - /* Disable FW logging to host memory */ - writel(LPFC_CTL_PDEV_CTL_DDL_RAS, - phba->sli4_hba.conf_regs_memmap_p + LPFC_CTL_PDEV_CTL_OFFSET); -} - /** * lpfc_bsg_set_ras_config: Set FW logging parameters * @job: fc_bsg_job to handle @@ -5416,7 +5397,7 @@ lpfc_bsg_set_ras_config(struct bsg_job *job) struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog; struct fc_bsg_reply *bsg_reply = job->reply; uint8_t action = 0, log_level = 0; - int rc = 0; + int rc = 0, action_status = 0; if (job->request_len < sizeof(struct fc_bsg_request) + @@ -5430,7 +5411,7 @@ lpfc_bsg_set_ras_config(struct bsg_job *job) /* Check FW log status */ rc = lpfc_check_fwlog_support(phba); - if (rc == -EACCES || rc == -EPERM) + if (rc) goto ras_job_error; ras_req = (struct lpfc_bsg_set_ras_config_req *) @@ -5449,16 +5430,25 @@ lpfc_bsg_set_ras_config(struct bsg_job *job) lpfc_ras_stop_fwlog(phba); } else { /*action = LPFC_RASACTION_START_LOGGING*/ - if (ras_fwlog->ras_active == true) { - rc = -EINPROGRESS; - goto ras_job_error; - } + + /* Even though FW-logging is active re-initialize + * FW-logging with new log-level. Return status + * "Logging already Running" to caller. + **/ + if (ras_fwlog->ras_active) + action_status = -EINPROGRESS; /* Enable logging */ rc = lpfc_sli4_ras_fwlog_init(phba, log_level, LPFC_RAS_ENABLE_LOGGING); - if (rc) + if (rc) { rc = -EINVAL; + goto ras_job_error; + } + + /* Check if FW-logging is re-initialized */ + if (action_status == -EINPROGRESS) + rc = action_status; } ras_job_error: /* make error code available to userspace */ @@ -5487,12 +5477,11 @@ lpfc_bsg_get_ras_lwpd(struct bsg_job *job) struct lpfc_hba *phba = vport->phba; struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog; struct fc_bsg_reply *bsg_reply = job->reply; - uint32_t lwpd_offset = 0; - uint64_t wrap_value = 0; + u32 *lwpd_ptr = NULL; int rc = 0; rc = lpfc_check_fwlog_support(phba); - if (rc == -EACCES || rc == -EPERM) + if (rc) goto ras_job_error; if (job->request_len < @@ -5508,11 +5497,19 @@ lpfc_bsg_get_ras_lwpd(struct bsg_job *job) ras_reply = (struct lpfc_bsg_get_ras_lwpd *) bsg_reply->reply_data.vendor_reply.vendor_rsp; - lwpd_offset = *((uint32_t *)ras_fwlog->lwpd.virt) & 0xffffffff; - ras_reply->offset = be32_to_cpu(lwpd_offset); + if (!ras_fwlog->lwpd.virt) { + lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC, + "6193 Restart FW Logging\n"); + rc = -EINVAL; + goto ras_job_error; + } + + /* Get lwpd offset */ + lwpd_ptr = (uint32_t *)(ras_fwlog->lwpd.virt); + ras_reply->offset = be32_to_cpu(*lwpd_ptr & 0xffffffff); - wrap_value = *((uint64_t *)ras_fwlog->lwpd.virt); - ras_reply->wrap_count = be32_to_cpu((wrap_value >> 32) & 0xffffffff); + /* Get wrap count */ + ras_reply->wrap_count = be32_to_cpu(*(++lwpd_ptr) & 0xffffffff); ras_job_error: /* make error code available to userspace */ @@ -5539,9 +5536,8 @@ lpfc_bsg_get_ras_fwlog(struct bsg_job *job) struct fc_bsg_request *bsg_request = job->request; struct fc_bsg_reply *bsg_reply = job->reply; struct lpfc_bsg_get_fwlog_req *ras_req; - uint32_t rd_offset, rd_index, offset, pending_wlen; - uint32_t boundary = 0, align_len = 0, write_len = 0; - void *dest, *src, *fwlog_buff; + u32 rd_offset, rd_index, offset; + void *src, *fwlog_buff; struct lpfc_ras_fwlog *ras_fwlog = NULL; struct lpfc_dmabuf *dmabuf, *next; int rc = 0; @@ -5549,7 +5545,7 @@ lpfc_bsg_get_ras_fwlog(struct bsg_job *job) ras_fwlog = &phba->ras_fwlog; rc = 
lpfc_check_fwlog_support(phba); - if (rc == -EACCES || rc == -EPERM) + if (rc) goto ras_job_error; /* Logging to be stopped before reading */ @@ -5581,8 +5577,6 @@ lpfc_bsg_get_ras_fwlog(struct bsg_job *job) rd_index = (rd_offset / LPFC_RAS_MAX_ENTRY_SIZE); offset = (rd_offset % LPFC_RAS_MAX_ENTRY_SIZE); - pending_wlen = ras_req->read_size; - dest = fwlog_buff; list_for_each_entry_safe(dmabuf, next, &ras_fwlog->fwlog_buff_list, list) { @@ -5590,29 +5584,9 @@ lpfc_bsg_get_ras_fwlog(struct bsg_job *job) if (dmabuf->buffer_tag < rd_index) continue; - /* Align read to buffer size */ - if (offset) { - boundary = ((dmabuf->buffer_tag + 1) * - LPFC_RAS_MAX_ENTRY_SIZE); - - align_len = (boundary - offset); - write_len = min_t(u32, align_len, - LPFC_RAS_MAX_ENTRY_SIZE); - } else { - write_len = min_t(u32, pending_wlen, - LPFC_RAS_MAX_ENTRY_SIZE); - align_len = 0; - boundary = 0; - } src = dmabuf->virt + offset; - memcpy(dest, src, write_len); - - pending_wlen -= write_len; - if (!pending_wlen) - break; - - dest += write_len; - offset = (offset + write_len) % LPFC_RAS_MAX_ENTRY_SIZE; + memcpy(fwlog_buff, src, ras_req->read_size); + break; } bsg_reply->reply_payload_rcv_len = @@ -5629,6 +5603,77 @@ ras_job_error: return rc; } +static int +lpfc_get_trunk_info(struct bsg_job *job) +{ + struct lpfc_vport *vport = shost_priv(fc_bsg_to_shost(job)); + struct lpfc_hba *phba = vport->phba; + struct fc_bsg_reply *bsg_reply = job->reply; + struct lpfc_trunk_info *event_reply; + int rc = 0; + + if (job->request_len < + sizeof(struct fc_bsg_request) + sizeof(struct get_trunk_info_req)) { + lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC, + "2744 Received GET TRUNK _INFO request below " + "minimum size\n"); + rc = -EINVAL; + goto job_error; + } + + event_reply = (struct lpfc_trunk_info *) + bsg_reply->reply_data.vendor_reply.vendor_rsp; + + if (job->reply_len < + sizeof(struct fc_bsg_request) + sizeof(struct lpfc_trunk_info)) { + lpfc_printf_log(phba, KERN_WARNING, LOG_LIBDFC, + "2728 Received GET TRUNK _INFO reply below " + "minimum size\n"); + rc = -EINVAL; + goto job_error; + } + if (event_reply == NULL) { + rc = -EINVAL; + goto job_error; + } + + bsg_bf_set(lpfc_trunk_info_link_status, event_reply, + (phba->link_state >= LPFC_LINK_UP) ? 1 : 0); + + bsg_bf_set(lpfc_trunk_info_trunk_active0, event_reply, + (phba->trunk_link.link0.state == LPFC_LINK_UP) ? 1 : 0); + + bsg_bf_set(lpfc_trunk_info_trunk_active1, event_reply, + (phba->trunk_link.link1.state == LPFC_LINK_UP) ? 1 : 0); + + bsg_bf_set(lpfc_trunk_info_trunk_active2, event_reply, + (phba->trunk_link.link2.state == LPFC_LINK_UP) ? 1 : 0); + + bsg_bf_set(lpfc_trunk_info_trunk_active3, event_reply, + (phba->trunk_link.link3.state == LPFC_LINK_UP) ? 
1 : 0); + + bsg_bf_set(lpfc_trunk_info_trunk_config0, event_reply, + bf_get(lpfc_conf_trunk_port0, &phba->sli4_hba)); + + bsg_bf_set(lpfc_trunk_info_trunk_config1, event_reply, + bf_get(lpfc_conf_trunk_port1, &phba->sli4_hba)); + + bsg_bf_set(lpfc_trunk_info_trunk_config2, event_reply, + bf_get(lpfc_conf_trunk_port2, &phba->sli4_hba)); + + bsg_bf_set(lpfc_trunk_info_trunk_config3, event_reply, + bf_get(lpfc_conf_trunk_port3, &phba->sli4_hba)); + + event_reply->port_speed = phba->sli4_hba.link_state.speed / 1000; + event_reply->logical_speed = + phba->sli4_hba.link_state.logical_speed / 100; +job_error: + bsg_reply->result = rc; + bsg_job_done(job, bsg_reply->result, + bsg_reply->reply_payload_rcv_len); + return rc; + +} /** * lpfc_bsg_hst_vendor - process a vendor-specific fc_bsg_job @@ -5689,6 +5734,9 @@ lpfc_bsg_hst_vendor(struct bsg_job *job) case LPFC_BSG_VENDOR_RAS_SET_CONFIG: rc = lpfc_bsg_set_ras_config(job); break; + case LPFC_BSG_VENDOR_GET_TRUNK_INFO: + rc = lpfc_get_trunk_info(job); + break; default: rc = -EINVAL; bsg_reply->reply_payload_rcv_len = 0; diff --git a/drivers/scsi/lpfc/lpfc_bsg.h b/drivers/scsi/lpfc/lpfc_bsg.h index 820323f1139b..9151824beea4 100644 --- a/drivers/scsi/lpfc/lpfc_bsg.h +++ b/drivers/scsi/lpfc/lpfc_bsg.h @@ -42,6 +42,7 @@ #define LPFC_BSG_VENDOR_RAS_GET_FWLOG 17 #define LPFC_BSG_VENDOR_RAS_GET_CONFIG 18 #define LPFC_BSG_VENDOR_RAS_SET_CONFIG 19 +#define LPFC_BSG_VENDOR_GET_TRUNK_INFO 20 struct set_ct_event { uint32_t command; @@ -331,6 +332,43 @@ struct lpfc_bsg_get_ras_config_reply { uint32_t log_buff_sz; }; +struct lpfc_trunk_info { + uint32_t word0; +#define lpfc_trunk_info_link_status_SHIFT 0 +#define lpfc_trunk_info_link_status_MASK 1 +#define lpfc_trunk_info_link_status_WORD word0 +#define lpfc_trunk_info_trunk_active0_SHIFT 8 +#define lpfc_trunk_info_trunk_active0_MASK 1 +#define lpfc_trunk_info_trunk_active0_WORD word0 +#define lpfc_trunk_info_trunk_active1_SHIFT 9 +#define lpfc_trunk_info_trunk_active1_MASK 1 +#define lpfc_trunk_info_trunk_active1_WORD word0 +#define lpfc_trunk_info_trunk_active2_SHIFT 10 +#define lpfc_trunk_info_trunk_active2_MASK 1 +#define lpfc_trunk_info_trunk_active2_WORD word0 +#define lpfc_trunk_info_trunk_active3_SHIFT 11 +#define lpfc_trunk_info_trunk_active3_MASK 1 +#define lpfc_trunk_info_trunk_active3_WORD word0 +#define lpfc_trunk_info_trunk_config0_SHIFT 12 +#define lpfc_trunk_info_trunk_config0_MASK 1 +#define lpfc_trunk_info_trunk_config0_WORD word0 +#define lpfc_trunk_info_trunk_config1_SHIFT 13 +#define lpfc_trunk_info_trunk_config1_MASK 1 +#define lpfc_trunk_info_trunk_config1_WORD word0 +#define lpfc_trunk_info_trunk_config2_SHIFT 14 +#define lpfc_trunk_info_trunk_config2_MASK 1 +#define lpfc_trunk_info_trunk_config2_WORD word0 +#define lpfc_trunk_info_trunk_config3_SHIFT 15 +#define lpfc_trunk_info_trunk_config3_MASK 1 +#define lpfc_trunk_info_trunk_config3_WORD word0 + uint16_t port_speed; + uint16_t logical_speed; + uint32_t reserved3; +}; + +struct get_trunk_info_req { + uint32_t command; +}; /* driver only */ #define SLI_CONFIG_NOT_HANDLED 0 diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h index e01136507780..39f3fa988732 100644 --- a/drivers/scsi/lpfc/lpfc_crtn.h +++ b/drivers/scsi/lpfc/lpfc_crtn.h @@ -74,7 +74,6 @@ void lpfc_mbx_cmpl_read_topology(struct lpfc_hba *, LPFC_MBOXQ_t *); void lpfc_init_vpi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *); void lpfc_cancel_all_vport_retry_delay_timer(struct lpfc_hba *); void lpfc_retry_pport_discovery(struct lpfc_hba *); -void 
lpfc_release_rpi(struct lpfc_hba *, struct lpfc_vport *, uint16_t); int lpfc_init_iocb_list(struct lpfc_hba *phba, int cnt); void lpfc_free_iocb_list(struct lpfc_hba *phba); int lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq, @@ -175,6 +174,7 @@ void lpfc_hb_timeout_handler(struct lpfc_hba *); void lpfc_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *, struct lpfc_iocbq *); int lpfc_ct_handle_unsol_abort(struct lpfc_hba *, struct hbq_dmabuf *); +int lpfc_issue_gidpt(struct lpfc_vport *vport); int lpfc_issue_gidft(struct lpfc_vport *vport); int lpfc_get_gidft_type(struct lpfc_vport *vport, struct lpfc_iocbq *iocbq); int lpfc_ns_cmd(struct lpfc_vport *, int, uint8_t, uint32_t); @@ -380,8 +380,10 @@ void lpfc_nvmet_buf_free(struct lpfc_hba *phba, void *virtp, dma_addr_t dma); void lpfc_in_buf_free(struct lpfc_hba *, struct lpfc_dmabuf *); void lpfc_rq_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp); +int lpfc_link_reset(struct lpfc_vport *vport); /* Function prototypes. */ +int lpfc_check_pci_resettable(const struct lpfc_hba *phba); const char* lpfc_info(struct Scsi_Host *); int lpfc_scan_finished(struct Scsi_Host *, unsigned long); @@ -550,6 +552,7 @@ void lpfc_sli4_ras_init(struct lpfc_hba *phba); void lpfc_sli4_ras_setup(struct lpfc_hba *phba); int lpfc_sli4_ras_fwlog_init(struct lpfc_hba *phba, uint32_t fwlog_level, uint32_t fwlog_enable); +void lpfc_ras_stop_fwlog(struct lpfc_hba *phba); int lpfc_check_fwlog_support(struct lpfc_hba *phba); /* NVME interfaces. */ diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c index 789ad1502534..552da8bf43e4 100644 --- a/drivers/scsi/lpfc/lpfc_ct.c +++ b/drivers/scsi/lpfc/lpfc_ct.c @@ -540,7 +540,17 @@ lpfc_ns_rsp_audit_did(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type) struct lpfc_hba *phba = vport->phba; struct lpfc_nodelist *ndlp = NULL; struct Scsi_Host *shost = lpfc_shost_from_vport(vport); + char *str; + if (phba->cfg_ns_query == LPFC_NS_QUERY_GID_FT) + str = "GID_FT"; + else + str = "GID_PT"; + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6430 Process %s rsp for %08x type %x %s %s\n", + str, Did, fc4_type, + (fc4_type == FC_TYPE_FCP) ? "FCP" : " ", + (fc4_type == FC_TYPE_NVME) ? "NVME" : " "); /* * To conserve rpi's, filter out addresses for other * vports on the same physical HBAs. @@ -831,6 +841,198 @@ out: return; } +static void +lpfc_cmpl_ct_cmd_gid_pt(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + struct lpfc_iocbq *rspiocb) +{ + struct lpfc_vport *vport = cmdiocb->vport; + struct Scsi_Host *shost = lpfc_shost_from_vport(vport); + IOCB_t *irsp; + struct lpfc_dmabuf *outp; + struct lpfc_dmabuf *inp; + struct lpfc_sli_ct_request *CTrsp; + struct lpfc_sli_ct_request *CTreq; + struct lpfc_nodelist *ndlp; + int rc; + + /* First save ndlp, before we overwrite it */ + ndlp = cmdiocb->context_un.ndlp; + + /* we pass cmdiocb to state machine which needs rspiocb as well */ + cmdiocb->context_un.rsp_iocb = rspiocb; + inp = (struct lpfc_dmabuf *)cmdiocb->context1; + outp = (struct lpfc_dmabuf *)cmdiocb->context2; + irsp = &rspiocb->iocb; + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT, + "GID_PT cmpl: status:x%x/x%x rtry:%d", + irsp->ulpStatus, irsp->un.ulpWord[4], + vport->fc_ns_retry); + + /* Don't bother processing response if vport is being torn down. 
*/ + if (vport->load_flag & FC_UNLOADING) { + if (vport->fc_flag & FC_RSCN_MODE) + lpfc_els_flush_rscn(vport); + goto out; + } + + if (lpfc_els_chk_latt(vport)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "4108 Link event during NS query\n"); + if (vport->fc_flag & FC_RSCN_MODE) + lpfc_els_flush_rscn(vport); + lpfc_vport_set_state(vport, FC_VPORT_FAILED); + goto out; + } + if (lpfc_error_lost_link(irsp)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "4101 NS query failed due to link event\n"); + if (vport->fc_flag & FC_RSCN_MODE) + lpfc_els_flush_rscn(vport); + goto out; + } + + spin_lock_irq(shost->host_lock); + if (vport->fc_flag & FC_RSCN_DEFERRED) { + vport->fc_flag &= ~FC_RSCN_DEFERRED; + spin_unlock_irq(shost->host_lock); + + /* This is a GID_PT completing so the gidft_inp counter was + * incremented before the GID_PT was issued to the wire. + */ + vport->gidft_inp--; + + /* + * Skip processing the NS response + * Re-issue the NS cmd + */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "4102 Process Deferred RSCN Data: x%x x%x\n", + vport->fc_flag, vport->fc_rscn_id_cnt); + lpfc_els_handle_rscn(vport); + + goto out; + } + spin_unlock_irq(shost->host_lock); + + if (irsp->ulpStatus) { + /* Check for retry */ + if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) { + if (irsp->ulpStatus != IOSTAT_LOCAL_REJECT || + (irsp->un.ulpWord[4] & IOERR_PARAM_MASK) != + IOERR_NO_RESOURCES) + vport->fc_ns_retry++; + + /* CT command is being retried */ + vport->gidft_inp--; + rc = lpfc_ns_cmd(vport, SLI_CTNS_GID_PT, + vport->fc_ns_retry, GID_PT_N_PORT); + if (rc == 0) + goto out; + } + if (vport->fc_flag & FC_RSCN_MODE) + lpfc_els_flush_rscn(vport); + lpfc_vport_set_state(vport, FC_VPORT_FAILED); + lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, + "4103 GID_PT Query error: 0x%x 0x%x\n", + irsp->ulpStatus, vport->fc_ns_retry); + } else { + /* Good status, continue checking */ + CTreq = (struct lpfc_sli_ct_request *)inp->virt; + CTrsp = (struct lpfc_sli_ct_request *)outp->virt; + if (CTrsp->CommandResponse.bits.CmdRsp == + cpu_to_be16(SLI_CT_RESPONSE_FS_ACC)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "4105 NameServer Rsp Data: x%x x%x\n", + vport->fc_flag, + CTreq->un.gid.Fc4Type); + + lpfc_ns_rsp(vport, + outp, + CTreq->un.gid.Fc4Type, + (uint32_t)(irsp->un.genreq64.bdl.bdeSize)); + } else if (CTrsp->CommandResponse.bits.CmdRsp == + be16_to_cpu(SLI_CT_RESPONSE_FS_RJT)) { + /* NameServer Rsp Error */ + if ((CTrsp->ReasonCode == SLI_CT_UNABLE_TO_PERFORM_REQ) + && (CTrsp->Explanation == SLI_CT_NO_FC4_TYPES)) { + lpfc_printf_vlog( + vport, KERN_INFO, LOG_DISCOVERY, + "4106 No NameServer Entries " + "Data: x%x x%x x%x x%x\n", + CTrsp->CommandResponse.bits.CmdRsp, + (uint32_t)CTrsp->ReasonCode, + (uint32_t)CTrsp->Explanation, + vport->fc_flag); + + lpfc_debugfs_disc_trc( + vport, LPFC_DISC_TRC_CT, + "GID_PT no entry cmd:x%x rsn:x%x exp:x%x", + (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, + (uint32_t)CTrsp->ReasonCode, + (uint32_t)CTrsp->Explanation); + } else { + lpfc_printf_vlog( + vport, KERN_INFO, LOG_DISCOVERY, + "4107 NameServer Rsp Error " + "Data: x%x x%x x%x x%x\n", + CTrsp->CommandResponse.bits.CmdRsp, + (uint32_t)CTrsp->ReasonCode, + (uint32_t)CTrsp->Explanation, + vport->fc_flag); + + lpfc_debugfs_disc_trc( + vport, LPFC_DISC_TRC_CT, + "GID_PT rsp err1 cmd:x%x rsn:x%x exp:x%x", + (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, + (uint32_t)CTrsp->ReasonCode, + (uint32_t)CTrsp->Explanation); + } + } else { + /* NameServer Rsp Error */ + lpfc_printf_vlog(vport, KERN_ERR, 
LOG_DISCOVERY, + "4109 NameServer Rsp Error " + "Data: x%x x%x x%x x%x\n", + CTrsp->CommandResponse.bits.CmdRsp, + (uint32_t)CTrsp->ReasonCode, + (uint32_t)CTrsp->Explanation, + vport->fc_flag); + + lpfc_debugfs_disc_trc( + vport, LPFC_DISC_TRC_CT, + "GID_PT rsp err2 cmd:x%x rsn:x%x exp:x%x", + (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, + (uint32_t)CTrsp->ReasonCode, + (uint32_t)CTrsp->Explanation); + } + vport->gidft_inp--; + } + /* Link up / RSCN discovery */ + if ((vport->num_disc_nodes == 0) && + (vport->gidft_inp == 0)) { + /* + * The driver has cycled through all Nports in the RSCN payload. + * Complete the handling by cleaning up and marking the + * current driver state. + */ + if (vport->port_state >= LPFC_DISC_AUTH) { + if (vport->fc_flag & FC_RSCN_MODE) { + lpfc_els_flush_rscn(vport); + spin_lock_irq(shost->host_lock); + vport->fc_flag |= FC_RSCN_MODE; /* RSCN still */ + spin_unlock_irq(shost->host_lock); + } else { + lpfc_els_flush_rscn(vport); + } + } + + lpfc_disc_start(vport); + } +out: + cmdiocb->context_un.ndlp = ndlp; /* Now restore ndlp for free */ + lpfc_ct_free_iocb(phba, cmdiocb); +} + static void lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, struct lpfc_iocbq *rspiocb) @@ -857,6 +1059,13 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, CTrsp = (struct lpfc_sli_ct_request *) outp->virt; fbits = CTrsp->un.gff_acc.fbits[FCP_TYPE_FEATURE_OFFSET]; + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6431 Process GFF_ID rsp for %08x " + "fbits %02x %s %s\n", + did, fbits, + (fbits & FC4_FEATURE_INIT) ? "Initiator" : " ", + (fbits & FC4_FEATURE_TARGET) ? "Target" : " "); + if (CTrsp->CommandResponse.bits.CmdRsp == be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) { if ((fbits & FC4_FEATURE_INIT) && @@ -979,9 +1188,15 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, CTrsp = (struct lpfc_sli_ct_request *)outp->virt; fc4_data_0 = be32_to_cpu(CTrsp->un.gft_acc.fc4_types[0]); fc4_data_1 = be32_to_cpu(CTrsp->un.gft_acc.fc4_types[1]); + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, - "3062 DID x%06x GFT Wd0 x%08x Wd1 x%08x\n", - did, fc4_data_0, fc4_data_1); + "6432 Process GFT_ID rsp for %08x " + "Data %08x %08x %s %s\n", + did, fc4_data_0, fc4_data_1, + (fc4_data_0 & LPFC_FC4_TYPE_BITMASK) ? + "FCP" : " ", + (fc4_data_1 & LPFC_FC4_TYPE_BITMASK) ? 
+ "NVME" : " "); ndlp = lpfc_findnode_did(vport, did); if (ndlp) { @@ -1312,6 +1527,7 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode, struct ulp_bde64 *bpl; void (*cmpl) (struct lpfc_hba *, struct lpfc_iocbq *, struct lpfc_iocbq *) = NULL; + uint32_t *ptr; uint32_t rsp_size = 1024; size_t size; int rc = 0; @@ -1365,6 +1581,8 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode, bpl->tus.f.bdeFlags = 0; if (cmdcode == SLI_CTNS_GID_FT) bpl->tus.f.bdeSize = GID_REQUEST_SZ; + else if (cmdcode == SLI_CTNS_GID_PT) + bpl->tus.f.bdeSize = GID_REQUEST_SZ; else if (cmdcode == SLI_CTNS_GFF_ID) bpl->tus.f.bdeSize = GFF_REQUEST_SZ; else if (cmdcode == SLI_CTNS_GFT_ID) @@ -1405,6 +1623,18 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode, rsp_size = FC_MAX_NS_RSP; break; + case SLI_CTNS_GID_PT: + CtReq->CommandResponse.bits.CmdRsp = + cpu_to_be16(SLI_CTNS_GID_PT); + CtReq->un.gid.PortType = context; + + if (vport->port_state < LPFC_NS_QRY) + vport->port_state = LPFC_NS_QRY; + lpfc_set_disctmo(vport); + cmpl = lpfc_cmpl_ct_cmd_gid_pt; + rsp_size = FC_MAX_NS_RSP; + break; + case SLI_CTNS_GFF_ID: CtReq->CommandResponse.bits.CmdRsp = cpu_to_be16(SLI_CTNS_GFF_ID); @@ -1436,8 +1666,18 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode, */ if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) || (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) - CtReq->un.rft.rsvd[0] = cpu_to_be32(0x00000100); + CtReq->un.rft.rsvd[0] = + cpu_to_be32(LPFC_FC4_TYPE_BITMASK); + ptr = (uint32_t *)CtReq; + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6433 Issue RFT (%s %s): %08x %08x %08x %08x " + "%08x %08x %08x %08x\n", + CtReq->un.rft.fcpReg ? "FCP" : " ", + CtReq->un.rft.rsvd[0] ? "NVME" : " ", + *ptr, *(ptr + 1), *(ptr + 2), *(ptr + 3), + *(ptr + 4), *(ptr + 5), + *(ptr + 6), *(ptr + 7)); cmpl = lpfc_cmpl_ct_cmd_rft_id; break; @@ -1512,6 +1752,14 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode, else goto ns_cmd_free_bmpvirt; + ptr = (uint32_t *)CtReq; + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6434 Issue RFF (%s): %08x %08x %08x %08x " + "%08x %08x %08x %08x\n", + (context == FC_TYPE_NVME) ? 
"NVME" : "FCP", + *ptr, *(ptr + 1), *(ptr + 2), *(ptr + 3), + *(ptr + 4), *(ptr + 5), + *(ptr + 6), *(ptr + 7)); cmpl = lpfc_cmpl_ct_cmd_rff_id; break; } @@ -1758,7 +2006,7 @@ lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport, memset(ae, 0, 256); strncpy(ae->un.AttrString, - "Emulex Corporation", + "Broadcom Inc.", sizeof(ae->un.AttrString)); len = strnlen(ae->un.AttrString, sizeof(ae->un.AttrString)); @@ -2134,6 +2382,8 @@ lpfc_fdmi_port_attr_support_speed(struct lpfc_vport *vport, ae->un.AttrInt = 0; if (!(phba->hba_flag & HBA_FCOE_MODE)) { + if (phba->lmt & LMT_128Gb) + ae->un.AttrInt |= HBA_PORTSPEED_128GFC; if (phba->lmt & LMT_64Gb) ae->un.AttrInt |= HBA_PORTSPEED_64GFC; if (phba->lmt & LMT_32Gb) @@ -2210,6 +2460,9 @@ lpfc_fdmi_port_attr_speed(struct lpfc_vport *vport, case LPFC_LINK_SPEED_64GHZ: ae->un.AttrInt = HBA_PORTSPEED_64GFC; break; + case LPFC_LINK_SPEED_128GHZ: + ae->un.AttrInt = HBA_PORTSPEED_128GFC; + break; default: ae->un.AttrInt = HBA_PORTSPEED_UNKNOWN; break; diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c index 34d311a7dbef..a58f0b3f03a9 100644 --- a/drivers/scsi/lpfc/lpfc_debugfs.c +++ b/drivers/scsi/lpfc/lpfc_debugfs.c @@ -645,6 +645,8 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size) i, ndlp->cmd_qdepth); outio += i; } + len += snprintf(buf + len, size - len, "defer:%x ", + ndlp->nlp_defer_did); len += snprintf(buf+len, size-len, "\n"); } spin_unlock_irq(shost->host_lock); diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h index 28e2b60fc5c0..1c89c9f314fa 100644 --- a/drivers/scsi/lpfc/lpfc_disc.h +++ b/drivers/scsi/lpfc/lpfc_disc.h @@ -138,6 +138,7 @@ struct lpfc_nodelist { uint32_t nvme_fb_size; /* NVME target's supported byte cnt */ #define NVME_FB_BIT_SHIFT 9 /* PRLI Rsp first burst in 512B units. */ + uint32_t nlp_defer_did; }; struct lpfc_node_rrq { struct list_head list; @@ -165,6 +166,7 @@ struct lpfc_node_rrq { #define NLP_ELS_SND_MASK 0x000007e0 /* sent ELS request for this entry */ #define NLP_NVMET_RECOV 0x00001000 /* NVMET auditing node for recovery. */ #define NLP_FCP_PRLI_RJT 0x00002000 /* Rport does not support FCP PRLI. 
*/ +#define NLP_UNREG_INP 0x00008000 /* UNREG_RPI cmd is in progress */ #define NLP_DEFER_RM 0x00010000 /* Remove this ndlp if no longer used */ #define NLP_DELAY_TMO 0x00020000 /* delay timeout is running for node */ #define NLP_NPR_2B_DISC 0x00040000 /* node is included in num_disc_nodes */ @@ -293,4 +295,4 @@ struct lpfc_node_rrq { #define NLP_EVT_DEVICE_RM 0xb /* Device not found in NS / ALPAmap */ #define NLP_EVT_DEVICE_RECOVERY 0xc /* Device existence unknown */ #define NLP_EVT_MAX_EVENT 0xd - +#define NLP_EVT_NOTHING_PENDING 0xff diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c index f1c1faa74b46..b3a4789468c3 100644 --- a/drivers/scsi/lpfc/lpfc_els.c +++ b/drivers/scsi/lpfc/lpfc_els.c @@ -242,6 +242,8 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp, icmd->ulpCommand = CMD_ELS_REQUEST64_CR; if (elscmd == ELS_CMD_FLOGI) icmd->ulpTimeout = FF_DEF_RATOV * 2; + else if (elscmd == ELS_CMD_LOGO) + icmd->ulpTimeout = phba->fc_ratov; else icmd->ulpTimeout = phba->fc_ratov * 2; } else { @@ -313,20 +315,20 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp, /* Xmit ELS command to remote NPORT */ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, "0116 Xmit ELS command x%x to remote " - "NPORT x%x I/O tag: x%x, port state:x%x" - " fc_flag:x%x\n", + "NPORT x%x I/O tag: x%x, port state:x%x " + "rpi x%x fc_flag:x%x\n", elscmd, did, elsiocb->iotag, - vport->port_state, + vport->port_state, ndlp->nlp_rpi, vport->fc_flag); } else { /* Xmit ELS response to remote NPORT */ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, "0117 Xmit ELS response x%x to remote " "NPORT x%x I/O tag: x%x, size: x%x " - "port_state x%x fc_flag x%x\n", + "port_state x%x rpi x%x fc_flag x%x\n", elscmd, ndlp->nlp_DID, elsiocb->iotag, cmdSize, vport->port_state, - vport->fc_flag); + ndlp->nlp_rpi, vport->fc_flag); } return elsiocb; @@ -413,7 +415,7 @@ lpfc_issue_fabric_reglogin(struct lpfc_vport *vport) /* increment the reference count on ndlp to hold reference * for the callback routine. */ - mbox->context2 = lpfc_nlp_get(ndlp); + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) { @@ -428,7 +430,7 @@ fail_issue_reg_login: * for the failed mbox command. 
*/ lpfc_nlp_put(ndlp); - mp = (struct lpfc_dmabuf *) mbox->context1; + mp = (struct lpfc_dmabuf *)mbox->ctx_buf; lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); fail_free_mbox: @@ -502,7 +504,7 @@ lpfc_issue_reg_vfi(struct lpfc_vport *vport) mboxq->mbox_cmpl = lpfc_mbx_cmpl_reg_vfi; mboxq->vport = vport; - mboxq->context1 = dmabuf; + mboxq->ctx_buf = dmabuf; rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) { rc = -ENXIO; @@ -1055,9 +1057,9 @@ stop_rr_fcf_flogi: goto flogifail; lpfc_printf_vlog(vport, KERN_WARNING, LOG_ELS, - "0150 FLOGI failure Status:x%x/x%x TMO:x%x\n", + "0150 FLOGI failure Status:x%x/x%x xri x%x TMO:x%x\n", irsp->ulpStatus, irsp->un.ulpWord[4], - irsp->ulpTimeout); + cmdiocb->sli4_xritag, irsp->ulpTimeout); /* FLOGI failed, so there is no fabric */ spin_lock_irq(shost->host_lock); @@ -1111,7 +1113,8 @@ stop_rr_fcf_flogi: /* FLOGI completes successfully */ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, "0101 FLOGI completes successfully, I/O tag:x%x, " - "Data: x%x x%x x%x x%x x%x x%x\n", cmdiocb->iotag, + "xri x%x Data: x%x x%x x%x x%x x%x %x\n", + cmdiocb->iotag, cmdiocb->sli4_xritag, irsp->un.ulpWord[4], sp->cmn.e_d_tov, sp->cmn.w2.r_a_tov, sp->cmn.edtovResolution, vport->port_state, vport->fc_flag); @@ -1155,6 +1158,7 @@ stop_rr_fcf_flogi: phba->fcf.fcf_flag &= ~FCF_DISCOVERY; phba->hba_flag &= ~(FCF_RR_INPROG | HBA_DEVLOSS_TMO); spin_unlock_irq(&phba->hbalock); + phba->fcf.fcf_redisc_attempted = 0; /* reset */ goto out; } if (!rc) { @@ -1169,6 +1173,7 @@ stop_rr_fcf_flogi: phba->fcf.fcf_flag &= ~FCF_DISCOVERY; phba->hba_flag &= ~(FCF_RR_INPROG | HBA_DEVLOSS_TMO); spin_unlock_irq(&phba->hbalock); + phba->fcf.fcf_redisc_attempted = 0; /* reset */ goto out; } } @@ -1229,9 +1234,10 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, struct serv_parm *sp; IOCB_t *icmd; struct lpfc_iocbq *elsiocb; + struct lpfc_iocbq defer_flogi_acc; uint8_t *pcmd; uint16_t cmdsize; - uint32_t tmo; + uint32_t tmo, did; int rc; cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm)); @@ -1303,6 +1309,35 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, phba->sli3_options, 0, 0); rc = lpfc_issue_fabric_iocb(phba, elsiocb); + + phba->hba_flag |= HBA_FLOGI_ISSUED; + + /* Check for a deferred FLOGI ACC condition */ + if (phba->defer_flogi_acc_flag) { + did = vport->fc_myDID; + vport->fc_myDID = Fabric_DID; + + memset(&defer_flogi_acc, 0, sizeof(struct lpfc_iocbq)); + + defer_flogi_acc.iocb.ulpContext = phba->defer_flogi_acc_rx_id; + defer_flogi_acc.iocb.unsli3.rcvsli3.ox_id = + phba->defer_flogi_acc_ox_id; + + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "3354 Xmit deferred FLOGI ACC: rx_id: x%x," + " ox_id: x%x, hba_flag x%x\n", + phba->defer_flogi_acc_rx_id, + phba->defer_flogi_acc_ox_id, phba->hba_flag); + + /* Send deferred FLOGI ACC */ + lpfc_els_rsp_acc(vport, ELS_CMD_FLOGI, &defer_flogi_acc, + ndlp, NULL); + + phba->defer_flogi_acc_flag = false; + + vport->fc_myDID = did; + } + if (rc == IOCB_ERROR) { lpfc_els_free_iocb(phba, elsiocb); return 1; @@ -1338,6 +1373,8 @@ lpfc_els_abort_flogi(struct lpfc_hba *phba) Fabric_DID); pring = lpfc_phba_elsring(phba); + if (unlikely(!pring)) + return -EIO; /* * Check the txcmplq for an iocb that matches the nport the driver is @@ -1531,7 +1568,9 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, struct serv_parm *sp; uint8_t name[sizeof(struct lpfc_name)]; uint32_t rc, keepDID = 0, keep_nlp_flag = 0; + uint32_t keep_new_nlp_flag = 0; uint16_t 
keep_nlp_state; + u32 keep_nlp_fc4_type = 0; struct lpfc_nvme_rport *keep_nrport = NULL; int put_node; int put_rport; @@ -1551,8 +1590,10 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, */ new_ndlp = lpfc_findnode_wwpn(vport, &sp->portName); + /* return immediately if the WWPN matches ndlp */ if (new_ndlp == ndlp && NLP_CHK_NODE_ACT(new_ndlp)) return ndlp; + if (phba->sli_rev == LPFC_SLI_REV4) { active_rrqs_xri_bitmap = mempool_alloc(phba->active_rrq_pool, GFP_KERNEL); @@ -1561,9 +1602,13 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, phba->cfg_rrq_xri_bitmap_sz); } - lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, - "3178 PLOGI confirm: ndlp %p x%x: new_ndlp %p\n", - ndlp, ndlp->nlp_DID, new_ndlp); + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS | LOG_NODE, + "3178 PLOGI confirm: ndlp x%x x%x x%x: " + "new_ndlp x%x x%x x%x\n", + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_fc4_type, + (new_ndlp ? new_ndlp->nlp_DID : 0), + (new_ndlp ? new_ndlp->nlp_flag : 0), + (new_ndlp ? new_ndlp->nlp_fc4_type : 0)); if (!new_ndlp) { rc = memcmp(&ndlp->nlp_portname, name, @@ -1612,6 +1657,16 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, phba->cfg_rrq_xri_bitmap_sz); } + /* At this point in this routine, we know new_ndlp will be + * returned. However, any previous GID_FTs that were done + * would have updated nlp_fc4_type in ndlp, so we must ensure + * new_ndlp has the right value. + */ + if (vport->fc_flag & FC_FABRIC) { + keep_nlp_fc4_type = new_ndlp->nlp_fc4_type; + new_ndlp->nlp_fc4_type = ndlp->nlp_fc4_type; + } + lpfc_unreg_rpi(vport, new_ndlp); new_ndlp->nlp_DID = ndlp->nlp_DID; new_ndlp->nlp_prev_state = ndlp->nlp_prev_state; @@ -1621,9 +1676,36 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, phba->cfg_rrq_xri_bitmap_sz); spin_lock_irq(shost->host_lock); - keep_nlp_flag = new_ndlp->nlp_flag; + keep_new_nlp_flag = new_ndlp->nlp_flag; + keep_nlp_flag = ndlp->nlp_flag; new_ndlp->nlp_flag = ndlp->nlp_flag; - ndlp->nlp_flag = keep_nlp_flag; + + /* if new_ndlp had NLP_UNREG_INP set, keep it */ + if (keep_new_nlp_flag & NLP_UNREG_INP) + new_ndlp->nlp_flag |= NLP_UNREG_INP; + else + new_ndlp->nlp_flag &= ~NLP_UNREG_INP; + + /* if new_ndlp had NLP_RPI_REGISTERED set, keep it */ + if (keep_new_nlp_flag & NLP_RPI_REGISTERED) + new_ndlp->nlp_flag |= NLP_RPI_REGISTERED; + else + new_ndlp->nlp_flag &= ~NLP_RPI_REGISTERED; + + ndlp->nlp_flag = keep_new_nlp_flag; + + /* if ndlp had NLP_UNREG_INP set, keep it */ + if (keep_nlp_flag & NLP_UNREG_INP) + ndlp->nlp_flag |= NLP_UNREG_INP; + else + ndlp->nlp_flag &= ~NLP_UNREG_INP; + + /* if ndlp had NLP_RPI_REGISTERED set, keep it */ + if (keep_nlp_flag & NLP_RPI_REGISTERED) + ndlp->nlp_flag |= NLP_RPI_REGISTERED; + else + ndlp->nlp_flag &= ~NLP_RPI_REGISTERED; + spin_unlock_irq(shost->host_lock); /* Set nlp_states accordingly */ @@ -1661,7 +1743,6 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, if (ndlp->nrport) { ndlp->nrport = NULL; lpfc_nlp_put(ndlp); - new_ndlp->nlp_fc4_type = ndlp->nlp_fc4_type; } /* We shall actually free the ndlp with both nlp_DID and @@ -1674,7 +1755,10 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, spin_unlock_irq(&phba->ndlp_lock); } - /* Two ndlps cannot have the same did on the nodelist */ + /* Two ndlps cannot have the same did on the nodelist. + * Note: for this case, ndlp has a NULL WWPN so setting + * the nlp_fc4_type isn't required. 
+ */ ndlp->nlp_DID = keepDID; lpfc_nlp_set_state(vport, ndlp, keep_nlp_state); if (phba->sli_rev == LPFC_SLI_REV4 && @@ -1693,8 +1777,13 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, lpfc_unreg_rpi(vport, ndlp); - /* Two ndlps cannot have the same did */ + /* Two ndlps cannot have the same did and the fc4 + * type must be transferred because the ndlp is in + * flight. + */ ndlp->nlp_DID = keepDID; + ndlp->nlp_fc4_type = keep_nlp_fc4_type; + if (phba->sli_rev == LPFC_SLI_REV4 && active_rrqs_xri_bitmap) memcpy(ndlp->active_rrqs_xri_bitmap, @@ -1735,6 +1824,12 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, active_rrqs_xri_bitmap) mempool_free(active_rrqs_xri_bitmap, phba->active_rrq_pool); + + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS | LOG_NODE, + "3173 PLOGI confirm exit: new_ndlp x%x x%x x%x\n", + new_ndlp->nlp_DID, new_ndlp->nlp_flag, + new_ndlp->nlp_fc4_type); + return new_ndlp; } @@ -1895,7 +1990,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, disc = (ndlp->nlp_flag & NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; spin_unlock_irq(shost->host_lock); - rc = 0; + rc = 0; /* PLOGI completes to NPort */ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, @@ -2002,8 +2097,29 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry) int ret; ndlp = lpfc_findnode_did(vport, did); - if (ndlp && !NLP_CHK_NODE_ACT(ndlp)) - ndlp = NULL; + + if (ndlp) { + /* Defer the processing of the issue PLOGI until after the + * outstanding UNREG_RPI mbox command completes, unless we + * are going offline. This logic does not apply for Fabric DIDs + */ + if ((ndlp->nlp_flag & NLP_UNREG_INP) && + ((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) && + !(vport->fc_flag & FC_OFFLINE_MODE)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "4110 Issue PLOGI x%x deferred " + "on NPort x%x rpi x%x Data: %p\n", + ndlp->nlp_defer_did, ndlp->nlp_DID, + ndlp->nlp_rpi, ndlp); + + /* We can only defer 1st PLOGI */ + if (ndlp->nlp_defer_did == NLP_EVT_NOTHING_PENDING) + ndlp->nlp_defer_did = did; + return 0; + } + if (!NLP_CHK_NODE_ACT(ndlp)) + ndlp = NULL; + } /* If ndlp is not NULL, we will bump the reference count on it */ cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm)); @@ -2137,7 +2253,7 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, else lpfc_disc_state_machine(vport, ndlp, cmdiocb, NLP_EVT_CMPL_PRLI); - } else + } else { /* Good status, call state machine. However, if another * PRLI is outstanding, don't call the state machine * because final disposition to Mapped or Unmapped is @@ -2145,6 +2261,7 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, */ lpfc_disc_state_machine(vport, ndlp, cmdiocb, NLP_EVT_CMPL_PRLI); + } out: lpfc_els_free_iocb(phba, cmdiocb); @@ -2203,7 +2320,7 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR); ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR); ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; - ndlp->nlp_flag &= ~NLP_FIRSTBURST; + ndlp->nlp_flag &= ~(NLP_FIRSTBURST | NLP_NPR_2B_DISC); ndlp->nvme_fb_size = 0; send_next_prli: @@ -2682,16 +2799,15 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, goto out; } + /* The LOGO will not be retried on failure. A LOGO was + * issued to the remote rport and an ACC or RJT or no Answer are + * all acceptable. Note the failure and move forward with + * discovery. The PLOGI will retry. 
+ */ if (irsp->ulpStatus) { - /* Check for retry */ - if (lpfc_els_retry(phba, cmdiocb, rspiocb)) { - /* ELS command is being retried */ - skip_recovery = 1; - goto out; - } /* LOGO failed */ lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, - "2756 LOGO failure DID:%06X Status:x%x/x%x\n", + "2756 LOGO failure, No Retry DID:%06X Status:x%x/x%x\n", ndlp->nlp_DID, irsp->ulpStatus, irsp->un.ulpWord[4]); /* Do not call DSM for lpfc_els_abort'ed ELS cmds */ @@ -2737,7 +2853,8 @@ out: * For any other port type, the rpi is unregistered as an implicit * LOGO. */ - if ((ndlp->nlp_type & NLP_FCP_TARGET) && (skip_recovery == 0)) { + if (ndlp->nlp_type & (NLP_FCP_TARGET | NLP_NVME_TARGET) && + skip_recovery == 0) { lpfc_cancel_retry_delay_tmo(vport, ndlp); spin_lock_irqsave(shost->host_lock, flags); ndlp->nlp_flag |= NLP_NPR_2B_DISC; @@ -2770,6 +2887,8 @@ out: * will be stored into the context1 field of the IOCB for the completion * callback function to the LOGO ELS command. * + * Callers of this routine are expected to unregister the RPI first + * * Return code * 0 - successfully issued logo * 1 - failed to issue logo @@ -2811,22 +2930,6 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, "Issue LOGO: did:x%x", ndlp->nlp_DID, 0, 0); - /* - * If we are issuing a LOGO, we may try to recover the remote NPort - * by issuing a PLOGI later. Even though we issue ELS cmds by the - * VPI, if we have a valid RPI, and that RPI gets unreg'ed while - * that ELS command is in-flight, the HBA returns a IOERR_INVALID_RPI - * for that ELS cmd. To avoid this situation, lets get rid of the - * RPI right now, before any ELS cmds are sent. - */ - spin_lock_irq(shost->host_lock); - ndlp->nlp_flag |= NLP_ISSUE_LOGO; - spin_unlock_irq(shost->host_lock); - if (lpfc_unreg_rpi(vport, ndlp)) { - lpfc_els_free_iocb(phba, elsiocb); - return 0; - } - phba->fc_stat.elsXmitLOGO++; elsiocb->iocb_cmpl = lpfc_cmpl_els_logo; spin_lock_irq(shost->host_lock); @@ -2834,7 +2937,6 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, ndlp->nlp_flag &= ~NLP_ISSUE_LOGO; spin_unlock_irq(shost->host_lock); rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0); - if (rc == IOCB_ERROR) { spin_lock_irq(shost->host_lock); ndlp->nlp_flag &= ~NLP_LOGO_SND; @@ -2842,6 +2944,11 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_els_free_iocb(phba, elsiocb); return 1; } + + spin_lock_irq(shost->host_lock); + ndlp->nlp_prev_state = ndlp->nlp_state; + spin_unlock_irq(shost->host_lock); + lpfc_nlp_set_state(vport, ndlp, NLP_STE_LOGO_ISSUE); return 0; } @@ -3249,6 +3356,62 @@ lpfc_els_retry_delay_handler(struct lpfc_nodelist *ndlp) return; } +/** + * lpfc_link_reset - Issue link reset + * @vport: pointer to a virtual N_Port data structure. + * + * This routine performs link reset by sending INIT_LINK mailbox command. + * For SLI-3 adapter, link attention interrupt is enabled before issuing + * INIT_LINK mailbox command. 
+ * + * Return code + * 0 - Link reset initiated successfully + * 1 - Failed to initiate link reset + **/ +int +lpfc_link_reset(struct lpfc_vport *vport) +{ + struct lpfc_hba *phba = vport->phba; + LPFC_MBOXQ_t *mbox; + uint32_t control; + int rc; + + lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, + "2851 Attempt link reset\n"); + mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); + if (!mbox) { + lpfc_printf_log(phba, KERN_ERR, LOG_MBOX, + "2852 Failed to allocate mbox memory"); + return 1; + } + + /* Enable Link attention interrupts */ + if (phba->sli_rev <= LPFC_SLI_REV3) { + spin_lock_irq(&phba->hbalock); + phba->sli.sli_flag |= LPFC_PROCESS_LA; + control = readl(phba->HCregaddr); + control |= HC_LAINT_ENA; + writel(control, phba->HCregaddr); + readl(phba->HCregaddr); /* flush */ + spin_unlock_irq(&phba->hbalock); + } + + lpfc_init_link(phba, mbox, phba->cfg_topology, + phba->cfg_link_speed); + mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; + mbox->vport = vport; + rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); + if ((rc != MBX_BUSY) && (rc != MBX_SUCCESS)) { + lpfc_printf_log(phba, KERN_ERR, LOG_MBOX, + "2853 Failed to issue INIT_LINK " + "mbox command, rc:x%x\n", rc); + mempool_free(mbox, phba->mbox_mem_pool); + return 1; + } + + return 0; +} + /** * lpfc_els_retry - Make retry decision on an els command iocb * @phba: pointer to lpfc hba data structure. @@ -3285,6 +3448,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, int logerr = 0; uint32_t cmd = 0; uint32_t did; + int link_reset = 0, rc; /* Note: context2 may be 0 for internal driver abort @@ -3366,7 +3530,6 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, retry = 1; break; - case IOERR_SEQUENCE_TIMEOUT: case IOERR_INVALID_RPI: if (cmd == ELS_CMD_PLOGI && did == NameServer_DID) { @@ -3377,6 +3540,18 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, } retry = 1; break; + + case IOERR_SEQUENCE_TIMEOUT: + if (cmd == ELS_CMD_PLOGI && + did == NameServer_DID && + (cmdiocb->retry + 1) == maxretry) { + /* Reset the Link */ + link_reset = 1; + break; + } + retry = 1; + delay = 100; + break; } break; @@ -3533,6 +3708,19 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, break; } + if (link_reset) { + rc = lpfc_link_reset(vport); + if (rc) { + /* Do not give up. Retry PLOGI one more time and attempt + * link reset if PLOGI fails again. 
+ */ + retry = 1; + delay = 100; + goto out_retry; + } + return 1; + } + if (did == FDMI_DID) retry = 1; @@ -3895,11 +4083,11 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, void lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) { - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1); - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2; + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); + struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; - pmb->context1 = NULL; - pmb->context2 = NULL; + pmb->ctx_buf = NULL; + pmb->ctx_ndlp = NULL; lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); @@ -3975,7 +4163,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, /* Check to see if link went down during discovery */ if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) || lpfc_els_chk_latt(vport)) { if (mbox) { - mp = (struct lpfc_dmabuf *) mbox->context1; + mp = (struct lpfc_dmabuf *)mbox->ctx_buf; if (mp) { lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); @@ -4019,7 +4207,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, "Data: x%x x%x x%x\n", ndlp->nlp_DID, ndlp->nlp_state, ndlp->nlp_rpi, ndlp->nlp_flag); - mp = mbox->context1; + mp = mbox->ctx_buf; if (mp) { lpfc_mbuf_free(phba, mp->virt, mp->phys); @@ -4032,7 +4220,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, /* Increment reference count to ndlp to hold the * reference to ndlp for the callback function. */ - mbox->context2 = lpfc_nlp_get(ndlp); + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); mbox->vport = vport; if (ndlp->nlp_flag & NLP_RM_DFLT_RPI) { mbox->mbox_flag |= LPFC_MBX_IMED_UNREG; @@ -4086,7 +4274,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, } } } - mp = (struct lpfc_dmabuf *) mbox->context1; + mp = (struct lpfc_dmabuf *)mbox->ctx_buf; if (mp) { lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); @@ -4272,14 +4460,6 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag, default: return 1; } - /* Xmit ELS ACC response tag */ - lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, - "0128 Xmit ELS ACC response tag x%x, XRI: x%x, " - "DID: x%x, nlp_flag: x%x nlp_state: x%x RPI: x%x " - "fc_flag x%x\n", - elsiocb->iotag, elsiocb->iocb.ulpContext, - ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, - ndlp->nlp_rpi, vport->fc_flag); if (ndlp->nlp_flag & NLP_LOGO_ACC) { spin_lock_irq(shost->host_lock); if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED || @@ -4448,6 +4628,15 @@ lpfc_els_rsp_adisc_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb, lpfc_els_free_iocb(phba, elsiocb); return 1; } + + /* Xmit ELS ACC response tag */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0128 Xmit ELS ACC response Status: x%x, IoTag: x%x, " + "XRI: x%x, DID: x%x, nlp_flag: x%x nlp_state: x%x " + "RPI: x%x, fc_flag x%x\n", + rc, elsiocb->iotag, elsiocb->sli4_xritag, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi, vport->fc_flag); return 0; } @@ -5281,6 +5470,8 @@ lpfc_rdp_res_speed(struct fc_rdp_port_speed_desc *desc, struct lpfc_hba *phba) desc->info.port_speed.speed = cpu_to_be16(rdp_speed); + if (phba->lmt & LMT_128Gb) + rdp_cap |= RDP_PS_128GB; if (phba->lmt & LMT_64Gb) rdp_cap |= RDP_PS_64GB; if (phba->lmt & LMT_32Gb) @@ -5499,7 +5690,7 @@ lpfc_get_rdp_info(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context) goto prep_mbox_fail; mbox->vport = rdp_context->ndlp->vport; mbox->mbox_cmpl = lpfc_mbx_cmpl_rdp_page_a0; - mbox->context2 = (struct 
lpfc_rdp_context *) rdp_context; + mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) goto issue_mbox_fail; @@ -5542,7 +5733,7 @@ lpfc_els_rcv_rdp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, struct ls_rjt stat; if (phba->sli_rev < LPFC_SLI_REV4 || - bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) != + bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) < LPFC_SLI_INTF_IF_TYPE_2) { rjt_err = LSRJT_UNABLE_TPC; rjt_expl = LSEXP_REQ_UNSUPPORTED; @@ -5624,10 +5815,10 @@ lpfc_els_lcb_rsp(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) int rc; mb = &pmb->u.mb; - lcb_context = (struct lpfc_lcb_context *)pmb->context1; + lcb_context = (struct lpfc_lcb_context *)pmb->ctx_ndlp; ndlp = lcb_context->ndlp; - pmb->context1 = NULL; - pmb->context2 = NULL; + pmb->ctx_ndlp = NULL; + pmb->ctx_buf = NULL; shdr = (union lpfc_sli4_cfg_shdr *) &pmb->u.mqe.un.beacon_config.header.cfg_shdr; @@ -5701,6 +5892,9 @@ error: stat = (struct ls_rjt *)(pcmd + sizeof(uint32_t)); stat->un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC; + if (shdr_add_status == ADD_STATUS_OPERATION_ALREADY_ACTIVE) + stat->un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS; + elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp; phba->fc_stat.elsXmitLSRJT++; rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0); @@ -5731,7 +5925,7 @@ lpfc_sli4_set_beacon(struct lpfc_vport *vport, lpfc_sli4_config(phba, mbox, LPFC_MBOX_SUBSYSTEM_COMMON, LPFC_MBOX_OPCODE_SET_BEACON_CONFIG, len, LPFC_SLI4_MBX_EMBED); - mbox->context1 = (void *)lcb_context; + mbox->ctx_ndlp = (void *)lcb_context; mbox->vport = phba->pport; mbox->mbox_cmpl = lpfc_els_lcb_rsp; bf_set(lpfc_mbx_set_beacon_port_num, &mbox->u.mqe.un.beacon_config, @@ -6011,6 +6205,19 @@ lpfc_rscn_recovery_check(struct lpfc_vport *vport) if (vport->phba->nvmet_support) continue; + /* If we are in the process of doing discovery on this + * NPort, let it continue on its own. + */ + switch (ndlp->nlp_state) { + case NLP_STE_PLOGI_ISSUE: + case NLP_STE_ADISC_ISSUE: + case NLP_STE_REG_LOGIN_ISSUE: + case NLP_STE_PRLI_ISSUE: + case NLP_STE_LOGO_ISSUE: + continue; + } + + lpfc_disc_state_machine(vport, ndlp, NULL, NLP_EVT_DEVICE_RECOVERY); lpfc_cancel_retry_delay_tmo(vport, ndlp); @@ -6272,6 +6479,7 @@ int lpfc_els_handle_rscn(struct lpfc_vport *vport) { struct lpfc_nodelist *ndlp; + struct lpfc_hba *phba = vport->phba; /* Ignore RSCN if the port is being torn down. */ if (vport->load_flag & FC_UNLOADING) { @@ -6300,8 +6508,15 @@ lpfc_els_handle_rscn(struct lpfc_vport *vport) * flush the RSCN. Otherwise, the outstanding requests * need to complete. */ - if (lpfc_issue_gidft(vport) > 0) + if (phba->cfg_ns_query == LPFC_NS_QUERY_GID_FT) { + if (lpfc_issue_gidft(vport) > 0) + return 1; + } else if (phba->cfg_ns_query == LPFC_NS_QUERY_GID_PT) { + if (lpfc_issue_gidpt(vport) > 0) + return 1; + } else { return 1; + } } else { /* Nameserver login in question. Revalidate. */ if (ndlp) { @@ -6455,6 +6670,11 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, port_state = vport->port_state; vport->fc_flag |= FC_PT2PT; vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP); + + /* Acking an unsol FLOGI. Count 1 for link bounce + * work-around. 
+ */ + vport->rcv_flogi_cnt++; spin_unlock_irq(shost->host_lock); lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, "3311 Rcv Flogi PS x%x new PS x%x " @@ -6472,6 +6692,25 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, memcpy(&phba->fc_fabparam, sp, sizeof(struct serv_parm)); + /* Defer ACC response until AFTER we issue a FLOGI */ + if (!(phba->hba_flag & HBA_FLOGI_ISSUED)) { + phba->defer_flogi_acc_rx_id = cmdiocb->iocb.ulpContext; + phba->defer_flogi_acc_ox_id = + cmdiocb->iocb.unsli3.rcvsli3.ox_id; + + vport->fc_myDID = did; + + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "3344 Deferring FLOGI ACC: rx_id: x%x," + " ox_id: x%x, hba_flag x%x\n", + phba->defer_flogi_acc_rx_id, + phba->defer_flogi_acc_ox_id, phba->hba_flag); + + phba->defer_flogi_acc_flag = true; + + return 0; + } + /* Send back ACC */ lpfc_els_rsp_acc(vport, ELS_CMD_FLOGI, cmdiocb, ndlp, NULL); @@ -6644,11 +6883,11 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) mb = &pmb->u.mb; - ndlp = (struct lpfc_nodelist *) pmb->context2; - rxid = (uint16_t) ((unsigned long)(pmb->context1) & 0xffff); - oxid = (uint16_t) (((unsigned long)(pmb->context1) >> 16) & 0xffff); - pmb->context1 = NULL; - pmb->context2 = NULL; + ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; + rxid = (uint16_t)((unsigned long)(pmb->ctx_buf) & 0xffff); + oxid = (uint16_t)(((unsigned long)(pmb->ctx_buf) >> 16) & 0xffff); + pmb->ctx_buf = NULL; + pmb->ctx_ndlp = NULL; if (mb->mbxStatus) { mempool_free(pmb, phba->mbox_mem_pool); @@ -6732,11 +6971,11 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) mb = &pmb->u.mb; - ndlp = (struct lpfc_nodelist *) pmb->context2; - rxid = (uint16_t) ((unsigned long)(pmb->context1) & 0xffff); - oxid = (uint16_t) (((unsigned long)(pmb->context1) >> 16) & 0xffff); - pmb->context1 = NULL; - pmb->context2 = NULL; + ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; + rxid = (uint16_t)((unsigned long)(pmb->ctx_buf) & 0xffff); + oxid = (uint16_t)(((unsigned long)(pmb->ctx_buf) >> 16) & 0xffff); + pmb->ctx_ndlp = NULL; + pmb->ctx_buf = NULL; if (mb->mbxStatus) { mempool_free(pmb, phba->mbox_mem_pool); @@ -6827,10 +7066,10 @@ lpfc_els_rcv_rls(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, mbox = mempool_alloc(phba->mbox_mem_pool, GFP_ATOMIC); if (mbox) { lpfc_read_lnk_stat(phba, mbox); - mbox->context1 = (void *)((unsigned long) + mbox->ctx_buf = (void *)((unsigned long) ((cmdiocb->iocb.unsli3.rcvsli3.ox_id << 16) | cmdiocb->iocb.ulpContext)); /* rx_id */ - mbox->context2 = lpfc_nlp_get(ndlp); + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); mbox->vport = vport; mbox->mbox_cmpl = lpfc_els_rsp_rls_acc; if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) @@ -6990,10 +7229,10 @@ lpfc_els_rcv_rps(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, mbox = mempool_alloc(phba->mbox_mem_pool, GFP_ATOMIC); if (mbox) { lpfc_read_lnk_stat(phba, mbox); - mbox->context1 = (void *)((unsigned long) + mbox->ctx_buf = (void *)((unsigned long) ((cmdiocb->iocb.unsli3.rcvsli3.ox_id << 16) | cmdiocb->iocb.ulpContext)); /* rx_id */ - mbox->context2 = lpfc_nlp_get(ndlp); + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); mbox->vport = vport; mbox->mbox_cmpl = lpfc_els_rsp_rps_acc; if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) @@ -7852,8 +8091,9 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, struct ls_rjt stat; uint32_t *payload; uint32_t cmd, did, newnode; - uint8_t rjt_exp, rjt_err = 0; + uint8_t rjt_exp, rjt_err = 0, init_link = 0; IOCB_t *icmd = &elsiocb->iocb; + LPFC_MBOXQ_t *mbox; if (!vport 
|| !(elsiocb->context2)) goto dropit; @@ -7940,9 +8180,10 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, cmd, did, vport->port_state, vport->fc_flag, vport->fc_myDID, vport->fc_prevDID); - /* reject till our FLOGI completes */ + /* reject till our FLOGI completes or PLOGI assigned DID via PT2PT */ if ((vport->port_state < LPFC_FABRIC_CFG_LINK) && - (cmd != ELS_CMD_FLOGI)) { + (cmd != ELS_CMD_FLOGI) && + !((cmd == ELS_CMD_PLOGI) && (vport->fc_flag & FC_PT2PT))) { rjt_err = LSRJT_LOGICAL_BSY; rjt_exp = LSEXP_NOTHING_MORE; goto lsrjt; @@ -8002,6 +8243,19 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, did, vport->port_state, ndlp->nlp_flag); phba->fc_stat.elsRcvFLOGI++; + + /* If the driver believes fabric discovery is done and is ready, + * bounce the link. There is some discrepancy. + */ + if (vport->port_state >= LPFC_LOCAL_CFG_LINK && + vport->fc_flag & FC_PT2PT && + vport->rcv_flogi_cnt >= 1) { + rjt_err = LSRJT_LOGICAL_BSY; + rjt_exp = LSEXP_NOTHING_MORE; + init_link++; + goto lsrjt; + } + lpfc_els_rcv_flogi(vport, elsiocb, ndlp); if (newnode) lpfc_nlp_put(ndlp); @@ -8230,6 +8484,27 @@ lsrjt: lpfc_nlp_put(elsiocb->context1); elsiocb->context1 = NULL; + + /* Special case. Driver received an unsolicited command that is + * unsupportable given the driver's current state. Reset the + * link and start over. + */ + if (init_link) { + mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); + if (!mbox) + return; + lpfc_linkdown(phba); + lpfc_init_link(phba, mbox, + phba->cfg_topology, + phba->cfg_link_speed); + mbox->u.mb.un.varInitLnk.lipsr_AL_PA = 0; + mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; + mbox->vport = vport; + if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) == + MBX_NOT_FINISHED) + mempool_free(mbox, phba->mbox_mem_pool); + } + return; dropit: @@ -8453,7 +8728,7 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) { struct lpfc_vport *vport = pmb->vport; struct Scsi_Host *shost = lpfc_shost_from_vport(vport); - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2; + struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; MAILBOX_t *mb = &pmb->u.mb; int rc; @@ -8571,7 +8846,7 @@ lpfc_register_new_vport(struct lpfc_hba *phba, struct lpfc_vport *vport, if (mbox) { lpfc_reg_vpi(vport, mbox); mbox->vport = vport; - mbox->context2 = lpfc_nlp_get(ndlp); + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); mbox->mbox_cmpl = lpfc_cmpl_reg_new_vport; if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) == MBX_NOT_FINISHED) { @@ -9505,7 +9780,8 @@ lpfc_sli_abts_recover_port(struct lpfc_vport *vport, "rport in state 0x%x\n", ndlp->nlp_state); return; } - lpfc_printf_log(phba, KERN_INFO, LOG_SLI, + lpfc_printf_log(phba, KERN_ERR, + LOG_ELS | LOG_FCP_ERROR | LOG_NVME_IOERR, "3094 Start rport recovery on shost id 0x%x " "fc_id 0x%06x vpi 0x%x rpi 0x%x state 0x%x " "flags 0x%x\n", @@ -9518,8 +9794,8 @@ lpfc_sli_abts_recover_port(struct lpfc_vport *vport, */ spin_lock_irqsave(shost->host_lock, flags); ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; + ndlp->nlp_flag |= NLP_ISSUE_LOGO; spin_unlock_irqrestore(shost->host_lock, flags); - lpfc_issue_els_logo(vport, ndlp, 0); - lpfc_nlp_set_state(vport, ndlp, NLP_STE_LOGO_ISSUE); + lpfc_unreg_rpi(vport, ndlp); } diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c index f4deb862efc6..b183b882d506 100644 --- a/drivers/scsi/lpfc/lpfc_hbadisc.c +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c @@ -888,17 +888,31 @@ lpfc_linkdown(struct lpfc_hba *phba) LPFC_MBOXQ_t *mb;
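/*
 * Editor's sketch (illustrative only, not part of the patch): the FLOGI
 * special case above rejects the unexpected frame with LSRJT_LOGICAL_BSY
 * and then bounces the link through the INIT_LINK mailbox path. A minimal
 * standalone version of that bounce, using only helpers that appear in
 * this diff; the function name lpfc_bounce_link_sketch is hypothetical.
 */
static int
lpfc_bounce_link_sketch(struct lpfc_hba *phba, struct lpfc_vport *vport)
{
	LPFC_MBOXQ_t *mbox;

	/* Mailbox commands are carved from the driver's mempool */
	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
	if (!mbox)
		return 1;

	/* Drop the link, then ask the port to re-initialize it */
	lpfc_linkdown(phba);
	lpfc_init_link(phba, mbox, phba->cfg_topology, phba->cfg_link_speed);
	mbox->u.mb.un.varInitLnk.lipsr_AL_PA = 0;
	mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;	/* default completion */
	mbox->vport = vport;

	/* If the command cannot be queued, give the mailbox back */
	if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) == MBX_NOT_FINISHED) {
		mempool_free(mbox, phba->mbox_mem_pool);
		return 1;
	}
	return 0;
}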
int i; - if (phba->link_state == LPFC_LINK_DOWN) + if (phba->link_state == LPFC_LINK_DOWN) { + if (phba->sli4_hba.conf_trunk) { + phba->trunk_link.link0.state = 0; + phba->trunk_link.link1.state = 0; + phba->trunk_link.link2.state = 0; + phba->trunk_link.link3.state = 0; + } return 0; - + } /* Block all SCSI stack I/Os */ lpfc_scsi_dev_block(phba); + phba->defer_flogi_acc_flag = false; + spin_lock_irq(&phba->hbalock); phba->fcf.fcf_flag &= ~(FCF_AVAILABLE | FCF_SCAN_DONE); spin_unlock_irq(&phba->hbalock); if (phba->link_state > LPFC_LINK_DOWN) { phba->link_state = LPFC_LINK_DOWN; + if (phba->sli4_hba.conf_trunk) { + phba->trunk_link.link0.state = 0; + phba->trunk_link.link1.state = 0; + phba->trunk_link.link2.state = 0; + phba->trunk_link.link3.state = 0; + } spin_lock_irq(shost->host_lock); phba->pport->fc_flag &= ~FC_LBIT; spin_unlock_irq(shost->host_lock); @@ -947,6 +961,7 @@ lpfc_linkdown(struct lpfc_hba *phba) } spin_lock_irq(shost->host_lock); phba->pport->fc_flag &= ~(FC_PT2PT | FC_PT2PT_PLOGI); + phba->pport->rcv_flogi_cnt = 0; spin_unlock_irq(shost->host_lock); } return 0; @@ -1018,6 +1033,7 @@ lpfc_linkup(struct lpfc_hba *phba) { struct lpfc_vport **vports; int i; + struct Scsi_Host *shost = lpfc_shost_from_vport(phba->pport); phba->link_state = LPFC_LINK_UP; @@ -1031,6 +1047,18 @@ lpfc_linkup(struct lpfc_hba *phba) lpfc_linkup_port(vports[i]); lpfc_destroy_vport_work_array(phba, vports); + /* Clear the pport flogi counter in case the link down was + * absorbed without an ACQE. No lock here - in worker thread + * and discovery is synchronized. + */ + spin_lock_irq(shost->host_lock); + phba->pport->rcv_flogi_cnt = 0; + spin_unlock_irq(shost->host_lock); + + /* reinitialize initial FLOGI flag */ + phba->hba_flag &= ~(HBA_FLOGI_ISSUED); + phba->defer_flogi_acc_flag = false; + return 0; } @@ -1992,6 +2020,26 @@ int lpfc_sli4_fcf_rr_next_proc(struct lpfc_vport *vport, uint16_t fcf_index) "failover and change port state:x%x/x%x\n", phba->pport->port_state, LPFC_VPORT_UNKNOWN); phba->pport->port_state = LPFC_VPORT_UNKNOWN; + + if (!phba->fcf.fcf_redisc_attempted) { + lpfc_unregister_fcf(phba); + + rc = lpfc_sli4_redisc_fcf_table(phba); + if (!rc) { + lpfc_printf_log(phba, KERN_INFO, LOG_FIP, + "3195 Rediscover FCF table\n"); + phba->fcf.fcf_redisc_attempted = 1; + lpfc_sli4_clear_fcf_rr_bmask(phba); + } else { + lpfc_printf_log(phba, KERN_WARNING, LOG_FIP, + "3196 Rediscover FCF table " + "failed. Status:x%x\n", rc); + } + } else { + lpfc_printf_log(phba, KERN_WARNING, LOG_FIP, + "3197 Already rediscover FCF table " + "attempted. 
No more retry\n"); + } goto stop_flogi_current_fcf; } else { lpfc_printf_log(phba, KERN_INFO, LOG_FIP | LOG_ELS, @@ -2915,7 +2963,7 @@ lpfc_start_fdiscs(struct lpfc_hba *phba) void lpfc_mbx_cmpl_reg_vfi(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq) { - struct lpfc_dmabuf *dmabuf = mboxq->context1; + struct lpfc_dmabuf *dmabuf = mboxq->ctx_buf; struct lpfc_vport *vport = mboxq->vport; struct Scsi_Host *shost = lpfc_shost_from_vport(vport); @@ -3008,7 +3056,7 @@ static void lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) { MAILBOX_t *mb = &pmb->u.mb; - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) pmb->context1; + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)pmb->ctx_buf; struct lpfc_vport *vport = pmb->vport; struct Scsi_Host *shost = lpfc_shost_from_vport(vport); struct serv_parm *sp = &vport->fc_sparam; @@ -3052,7 +3100,7 @@ lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) return; out: - pmb->context1 = NULL; + pmb->ctx_buf = NULL; lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); lpfc_issue_clear_la(phba, vport); @@ -3085,6 +3133,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la) case LPFC_LINK_SPEED_16GHZ: case LPFC_LINK_SPEED_32GHZ: case LPFC_LINK_SPEED_64GHZ: + case LPFC_LINK_SPEED_128GHZ: break; default: phba->fc_linkspeed = LPFC_LINK_SPEED_UNKNOWN; @@ -3190,7 +3239,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la) sparam_mbox->mbox_cmpl = lpfc_mbx_cmpl_read_sparam; rc = lpfc_sli_issue_mbox(phba, sparam_mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) { - mp = (struct lpfc_dmabuf *) sparam_mbox->context1; + mp = (struct lpfc_dmabuf *)sparam_mbox->ctx_buf; lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); mempool_free(sparam_mbox, phba->mbox_mem_pool); @@ -3319,7 +3368,7 @@ lpfc_mbx_cmpl_read_topology(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) struct lpfc_mbx_read_top *la; struct lpfc_sli_ring *pring; MAILBOX_t *mb = &pmb->u.mb; - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1); + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); uint8_t attn_type; /* Unblock ELS traffic */ @@ -3476,12 +3525,12 @@ void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) { struct lpfc_vport *vport = pmb->vport; - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1); - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2; + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); + struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; struct Scsi_Host *shost = lpfc_shost_from_vport(vport); - pmb->context1 = NULL; - pmb->context2 = NULL; + pmb->ctx_buf = NULL; + pmb->ctx_ndlp = NULL; lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI, "0002 rpi:%x DID:%x flg:%x %d map:%x %p\n", @@ -3689,8 +3738,8 @@ lpfc_create_static_vport(struct lpfc_hba *phba) vport_buff = (uint8_t *) vport_info; do { /* free dma buffer from previous round */ - if (pmb->context1) { - mp = (struct lpfc_dmabuf *)pmb->context1; + if (pmb->ctx_buf) { + mp = (struct lpfc_dmabuf *)pmb->ctx_buf; lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); } @@ -3712,7 +3761,7 @@ lpfc_create_static_vport(struct lpfc_hba *phba) if (phba->sli_rev == LPFC_SLI_REV4) { byte_count = pmb->u.mqe.un.mb_words[5]; - mp = (struct lpfc_dmabuf *)pmb->context1; + mp = (struct lpfc_dmabuf *)pmb->ctx_buf; if (byte_count > sizeof(struct static_vport_info) - offset) byte_count = sizeof(struct static_vport_info) @@ -3777,8 +3826,8 @@ lpfc_create_static_vport(struct lpfc_hba *phba) 
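/*
 * Editor's sketch (illustrative only, not part of the patch): the bulk of
 * the mechanical churn in this commit renames the opaque mailbox fields
 * context1/context2 to the self-describing ctx_buf (DMA buffer) and
 * ctx_ndlp (nodelist pointer). A minimal completion handler in the new
 * style, assuming the hypothetical name lpfc_mbx_cmpl_sketch; every call
 * used here appears elsewhere in this diff.
 */
static void
lpfc_mbx_cmpl_sketch(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
{
	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)pmb->ctx_buf;
	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp;

	/* Detach the context pointers before releasing anything */
	pmb->ctx_buf = NULL;
	pmb->ctx_ndlp = NULL;

	if (mp) {
		/* Free the DMA buffer saved at issue time */
		lpfc_mbuf_free(phba, mp->virt, mp->phys);
		kfree(mp);
	}
	if (ndlp)
		lpfc_nlp_put(ndlp);	/* drop the reference taken at issue time */

	mempool_free(pmb, phba->mbox_mem_pool);
}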
out: kfree(vport_info); if (mbx_wait_rc != MBX_TIMEOUT) { - if (pmb->context1) { - mp = (struct lpfc_dmabuf *)pmb->context1; + if (pmb->ctx_buf) { + mp = (struct lpfc_dmabuf *)pmb->ctx_buf; lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); } @@ -3799,13 +3848,13 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) { struct lpfc_vport *vport = pmb->vport; MAILBOX_t *mb = &pmb->u.mb; - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1); + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); struct lpfc_nodelist *ndlp; struct Scsi_Host *shost; - ndlp = (struct lpfc_nodelist *) pmb->context2; - pmb->context1 = NULL; - pmb->context2 = NULL; + ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; + pmb->ctx_ndlp = NULL; + pmb->ctx_buf = NULL; if (mb->mbxStatus) { lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX, @@ -3913,6 +3962,35 @@ lpfc_issue_gidft(struct lpfc_vport *vport) return vport->gidft_inp; } +/** + * lpfc_issue_gidpt - issue a GID_PT for all N_Ports + * @vport: The virtual port for which this call is being executed. + * + * This routine will issue a GID_PT to get a list of all N_Ports + * + * Return value : + * 0 - Failure to issue a GID_PT + * 1 - GID_PT issued + **/ +int +lpfc_issue_gidpt(struct lpfc_vport *vport) +{ + /* Good status, issue CT Request to NameServer */ + if (lpfc_ns_cmd(vport, SLI_CTNS_GID_PT, 0, GID_PT_N_PORT)) { + /* Cannot issue NameServer FCP Query, so finish up + * discovery + */ + lpfc_printf_vlog(vport, KERN_ERR, LOG_SLI, + "0606 %s Port TYPE %x %s\n", + "Failed to issue GID_PT to ", + GID_PT_N_PORT, + "Finishing discovery."); + return 0; + } + vport->gidft_inp++; + return 1; +} + /* * This routine handles processing a NameServer REG_LOGIN mailbox * command upon completion. It is setup in the LPFC_MBOXQ @@ -3923,12 +4001,12 @@ void lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) { MAILBOX_t *mb = &pmb->u.mb; - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1); - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2; + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); + struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; struct lpfc_vport *vport = pmb->vport; - pmb->context1 = NULL; - pmb->context2 = NULL; + pmb->ctx_buf = NULL; + pmb->ctx_ndlp = NULL; vport->gidft_inp = 0; if (mb->mbxStatus) { @@ -4385,6 +4463,7 @@ lpfc_initialize_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, NLP_INT_NODE_ACT(ndlp); atomic_set(&ndlp->cmd_pending, 0); ndlp->cmd_qdepth = vport->cfg_tgt_queue_depth; + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; } struct lpfc_nodelist * @@ -4392,10 +4471,11 @@ lpfc_enable_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int state) { struct lpfc_hba *phba = vport->phba; - uint32_t did; + uint32_t did, flag; unsigned long flags; unsigned long *active_rrqs_xri_bitmap = NULL; int rpi = LPFC_RPI_ALLOC_ERROR; + uint32_t defer_did = 0; if (!ndlp) return NULL; @@ -4428,16 +4508,23 @@ lpfc_enable_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, goto free_rpi; } - /* Keep the original DID */ + /* First preserve the original DID, xri_bitmap and some flags */ did = ndlp->nlp_DID; + flag = (ndlp->nlp_flag & NLP_UNREG_INP); + if (flag & NLP_UNREG_INP) + defer_did = ndlp->nlp_defer_did; if (phba->sli_rev == LPFC_SLI_REV4) active_rrqs_xri_bitmap = ndlp->active_rrqs_xri_bitmap; - /* re-initialize ndlp except of ndlp linked list pointer */ + /* Zero ndlp except for the linked list pointer */ memset((((char *)ndlp) + 
sizeof (struct list_head)), 0, sizeof (struct lpfc_nodelist) - sizeof (struct list_head)); - lpfc_initialize_node(vport, ndlp, did); + /* Next reinitialize and restore saved objects */ + lpfc_initialize_node(vport, ndlp, did); + ndlp->nlp_flag |= flag; + if (flag & NLP_UNREG_INP) + ndlp->nlp_defer_did = defer_did; if (phba->sli_rev == LPFC_SLI_REV4) ndlp->active_rrqs_xri_bitmap = active_rrqs_xri_bitmap; @@ -4697,11 +4784,27 @@ lpfc_nlp_logo_unreg(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) struct lpfc_vport *vport = pmb->vport; struct lpfc_nodelist *ndlp; - ndlp = (struct lpfc_nodelist *)(pmb->context1); + ndlp = (struct lpfc_nodelist *)(pmb->ctx_ndlp); if (!ndlp) return; lpfc_issue_els_logo(vport, ndlp, 0); mempool_free(pmb, phba->mbox_mem_pool); + + /* Check to see if there are any deferred events to process */ + if ((ndlp->nlp_flag & NLP_UNREG_INP) && + (ndlp->nlp_defer_did != NLP_EVT_NOTHING_PENDING)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1434 UNREG cmpl deferred logo x%x " + "on NPort x%x Data: x%x %p\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, ndlp); + + ndlp->nlp_flag &= ~NLP_UNREG_INP; + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); + } else { + ndlp->nlp_flag &= ~NLP_UNREG_INP; + } } /* @@ -4730,6 +4833,21 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) "did x%x\n", ndlp->nlp_rpi, ndlp->nlp_flag, ndlp->nlp_DID); + + /* If there is already an UNREG in progress for this ndlp, + * no need to queue up another one. + */ + if (ndlp->nlp_flag & NLP_UNREG_INP) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1436 unreg_rpi SKIP UNREG x%x on " + "NPort x%x deferred x%x flg x%x " + "Data: %p\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, + ndlp->nlp_flag, ndlp); + goto out; + } + mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); if (mbox) { /* SLI4 ports require the physical rpi value. 
*/ @@ -4740,26 +4858,38 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) lpfc_unreg_login(phba, vport->vpi, rpi, mbox); mbox->vport = vport; if (ndlp->nlp_flag & NLP_ISSUE_LOGO) { - mbox->context1 = ndlp; + mbox->ctx_ndlp = ndlp; mbox->mbox_cmpl = lpfc_nlp_logo_unreg; } else { if (phba->sli_rev == LPFC_SLI_REV4 && (!(vport->load_flag & FC_UNLOADING)) && (bf_get(lpfc_sli_intf_if_type, - &phba->sli4_hba.sli_intf) == + &phba->sli4_hba.sli_intf) >= LPFC_SLI_INTF_IF_TYPE_2) && (kref_read(&ndlp->kref) > 0)) { - mbox->context1 = lpfc_nlp_get(ndlp); + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); mbox->mbox_cmpl = lpfc_sli4_unreg_rpi_cmpl_clr; /* * accept PLOGIs after unreg_rpi_cmpl */ acc_plogi = 0; - } else + } else { + mbox->ctx_ndlp = ndlp; mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; + } } + if (((ndlp->nlp_DID & Fabric_DID_MASK) != + Fabric_DID_MASK) && + (!(vport->fc_flag & FC_OFFLINE_MODE))) + ndlp->nlp_flag |= NLP_UNREG_INP; + + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1433 unreg_rpi UNREG x%x on " + "NPort x%x deferred flg x%x Data:%p\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag, ndlp); rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) { @@ -4768,7 +4898,7 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) } } lpfc_no_rpi(phba, ndlp); - +out: if (phba->sli_rev != LPFC_SLI_REV4) ndlp->nlp_rpi = 0; ndlp->nlp_flag &= ~NLP_RPI_REGISTERED; @@ -4836,7 +4966,7 @@ lpfc_unreg_all_rpis(struct lpfc_vport *vport) mbox); mbox->vport = vport; mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; - mbox->context1 = NULL; + mbox->ctx_ndlp = NULL; rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO); if (rc != MBX_TIMEOUT) mempool_free(mbox, phba->mbox_mem_pool); @@ -4861,7 +4991,7 @@ lpfc_unreg_default_rpis(struct lpfc_vport *vport) mbox); mbox->vport = vport; mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; - mbox->context1 = NULL; + mbox->ctx_ndlp = NULL; rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO); if (rc != MBX_TIMEOUT) mempool_free(mbox, phba->mbox_mem_pool); @@ -4915,8 +5045,8 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) if ((mb = phba->sli.mbox_active)) { if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && !(mb->mbox_flag & LPFC_MBX_IMED_UNREG) && - (ndlp == (struct lpfc_nodelist *) mb->context2)) { - mb->context2 = NULL; + (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { + mb->ctx_ndlp = NULL; mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; } } @@ -4926,18 +5056,18 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) list_for_each_entry(mb, &phba->sli.mboxq_cmpl, list) { if ((mb->u.mb.mbxCommand != MBX_REG_LOGIN64) || (mb->mbox_flag & LPFC_MBX_IMED_UNREG) || - (ndlp != (struct lpfc_nodelist *) mb->context2)) + (ndlp != (struct lpfc_nodelist *)mb->ctx_ndlp)) continue; - mb->context2 = NULL; + mb->ctx_ndlp = NULL; mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; } list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) { if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && !(mb->mbox_flag & LPFC_MBX_IMED_UNREG) && - (ndlp == (struct lpfc_nodelist *) mb->context2)) { - mp = (struct lpfc_dmabuf *) (mb->context1); + (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { + mp = (struct lpfc_dmabuf *)(mb->ctx_buf); if (mp) { __lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); @@ -5007,7 +5137,7 @@ lpfc_nlp_remove(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) mbox->mbox_flag |= LPFC_MBX_IMED_UNREG; mbox->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi; mbox->vport = vport; - mbox->context2 = ndlp; + 
mbox->ctx_ndlp = ndlp; rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) { mempool_free(mbox, phba->mbox_mem_pool); @@ -5772,12 +5902,12 @@ void lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) { MAILBOX_t *mb = &pmb->u.mb; - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1); - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2; + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); + struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; struct lpfc_vport *vport = pmb->vport; - pmb->context1 = NULL; - pmb->context2 = NULL; + pmb->ctx_buf = NULL; + pmb->ctx_ndlp = NULL; if (phba->sli_rev < LPFC_SLI_REV4) ndlp->nlp_rpi = mb->un.varWords[0]; diff --git a/drivers/scsi/lpfc/lpfc_hw.h b/drivers/scsi/lpfc/lpfc_hw.h index 009aa0eee040..ec1227018913 100644 --- a/drivers/scsi/lpfc/lpfc_hw.h +++ b/drivers/scsi/lpfc/lpfc_hw.h @@ -115,6 +115,7 @@ struct lpfc_sli_ct_request { uint32_t PortID; struct gid { uint8_t PortType; /* for GID_PT requests */ +#define GID_PT_N_PORT 1 uint8_t DomainScope; uint8_t AreaScope; uint8_t Fc4Type; /* for GID_FT requests */ diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h index bbd0a57e953f..c15b9b6fb840 100644 --- a/drivers/scsi/lpfc/lpfc_hw4.h +++ b/drivers/scsi/lpfc/lpfc_hw4.h @@ -197,6 +197,10 @@ struct lpfc_sli_intf { #define LPFC_FCP_SCHED_ROUND_ROBIN 0 #define LPFC_FCP_SCHED_BY_CPU 1 +/* Algorithms for NameServer Query after RSCN */ +#define LPFC_NS_QUERY_GID_FT 0 +#define LPFC_NS_QUERY_GID_PT 1 + /* Delay Multiplier constant */ #define LPFC_DMULT_CONST 651042 #define LPFC_DMULT_MAX 1023 @@ -1031,6 +1035,7 @@ struct mbox_header { #define LPFC_MBOX_OPCODE_FCOE_SET_FCLINK_SETTINGS 0x21 #define LPFC_MBOX_OPCODE_FCOE_LINK_DIAG_STATE 0x22 #define LPFC_MBOX_OPCODE_FCOE_LINK_DIAG_LOOPBACK 0x23 +#define LPFC_MBOX_OPCODE_FCOE_FC_SET_TRUNK_MODE 0x42 /* Low level Opcodes */ #define LPFC_MBOX_OPCODE_SET_DIAG_LOG_OPTION 0x37 @@ -2777,6 +2782,9 @@ struct lpfc_mbx_read_config { #define lpfc_mbx_rd_conf_lnk_ldv_SHIFT 8 #define lpfc_mbx_rd_conf_lnk_ldv_MASK 0x00000001 #define lpfc_mbx_rd_conf_lnk_ldv_WORD word2 +#define lpfc_mbx_rd_conf_trunk_SHIFT 12 +#define lpfc_mbx_rd_conf_trunk_MASK 0x0000000F +#define lpfc_mbx_rd_conf_trunk_WORD word2 #define lpfc_mbx_rd_conf_topology_SHIFT 24 #define lpfc_mbx_rd_conf_topology_MASK 0x000000FF #define lpfc_mbx_rd_conf_topology_WORD word2 @@ -3512,6 +3520,15 @@ struct lpfc_mbx_set_host_data { uint8_t data[LPFC_HOST_OS_DRIVER_VERSION_SIZE]; }; +struct lpfc_mbx_set_trunk_mode { + struct mbox_header header; + uint32_t word0; +#define lpfc_mbx_set_trunk_mode_WORD word0 +#define lpfc_mbx_set_trunk_mode_SHIFT 0 +#define lpfc_mbx_set_trunk_mode_MASK 0xFF + uint32_t word1; + uint32_t word2; +}; struct lpfc_mbx_get_sli4_parameters { struct mbox_header header; @@ -3833,6 +3850,9 @@ struct lpfc_mbx_wr_object { #define lpfc_wr_object_eof_SHIFT 31 #define lpfc_wr_object_eof_MASK 0x00000001 #define lpfc_wr_object_eof_WORD word4 +#define lpfc_wr_object_eas_SHIFT 29 +#define lpfc_wr_object_eas_MASK 0x00000001 +#define lpfc_wr_object_eas_WORD word4 #define lpfc_wr_object_write_length_SHIFT 0 #define lpfc_wr_object_write_length_MASK 0x00FFFFFF #define lpfc_wr_object_write_length_WORD word4 @@ -3843,6 +3863,15 @@ struct lpfc_mbx_wr_object { } request; struct { uint32_t actual_write_length; + uint32_t word5; +#define lpfc_wr_object_change_status_SHIFT 0 +#define lpfc_wr_object_change_status_MASK 0x000000FF +#define 
lpfc_wr_object_change_status_WORD word5 +#define LPFC_CHANGE_STATUS_NO_RESET_NEEDED 0x00 +#define LPFC_CHANGE_STATUS_PHYS_DEV_RESET 0x01 +#define LPFC_CHANGE_STATUS_FW_RESET 0x02 +#define LPFC_CHANGE_STATUS_PORT_MIGRATION 0x04 +#define LPFC_CHANGE_STATUS_PCI_RESET 0x05 } response; } u; }; @@ -3911,6 +3940,7 @@ struct lpfc_mqe { struct lpfc_mbx_set_feature set_feature; struct lpfc_mbx_memory_dump_type3 mem_dump_type3; struct lpfc_mbx_set_host_data set_host_data; + struct lpfc_mbx_set_trunk_mode set_trunk_mode; struct lpfc_mbx_nop nop; struct lpfc_mbx_set_ras_fwlog ras_fwlog; } un; @@ -4047,6 +4077,23 @@ struct lpfc_acqe_grp5 { uint32_t trailer; }; +static char *const trunk_errmsg[] = { /* map errcode */ + "", /* There is no such error code at index 0*/ + "link negotiated speed does not match existing" + " trunk - link was \"low\" speed", + "link negotiated speed does not match" + " existing trunk - link was \"middle\" speed", + "link negotiated speed does not match existing" + " trunk - link was \"high\" speed", + "Attached to non-trunking port - F_Port", + "Attached to non-trunking port - N_Port", + "FLOGI response timeout", + "non-FLOGI frame received", + "Invalid FLOGI response", + "Trunking initialization protocol", + "Trunk peer device mismatch", +}; + struct lpfc_acqe_fc_la { uint32_t word0; #define lpfc_acqe_fc_la_speed_SHIFT 24 @@ -4080,6 +4127,7 @@ struct lpfc_acqe_fc_la { #define LPFC_FC_LA_TYPE_MDS_LINK_DOWN 0x4 #define LPFC_FC_LA_TYPE_MDS_LOOPBACK 0x5 #define LPFC_FC_LA_TYPE_UNEXP_WWPN 0x6 +#define LPFC_FC_LA_TYPE_TRUNKING_EVENT 0x7 #define lpfc_acqe_fc_la_port_type_SHIFT 6 #define lpfc_acqe_fc_la_port_type_MASK 0x00000003 #define lpfc_acqe_fc_la_port_type_WORD word0 @@ -4088,6 +4136,32 @@ struct lpfc_acqe_fc_la { #define lpfc_acqe_fc_la_port_number_SHIFT 0 #define lpfc_acqe_fc_la_port_number_MASK 0x0000003F #define lpfc_acqe_fc_la_port_number_WORD word0 + +/* Attention Type is 0x07 (Trunking Event) word0 */ +#define lpfc_acqe_fc_la_trunk_link_status_port0_SHIFT 16 +#define lpfc_acqe_fc_la_trunk_link_status_port0_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_link_status_port0_WORD word0 +#define lpfc_acqe_fc_la_trunk_link_status_port1_SHIFT 17 +#define lpfc_acqe_fc_la_trunk_link_status_port1_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_link_status_port1_WORD word0 +#define lpfc_acqe_fc_la_trunk_link_status_port2_SHIFT 18 +#define lpfc_acqe_fc_la_trunk_link_status_port2_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_link_status_port2_WORD word0 +#define lpfc_acqe_fc_la_trunk_link_status_port3_SHIFT 19 +#define lpfc_acqe_fc_la_trunk_link_status_port3_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_link_status_port3_WORD word0 +#define lpfc_acqe_fc_la_trunk_config_port0_SHIFT 20 +#define lpfc_acqe_fc_la_trunk_config_port0_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_config_port0_WORD word0 +#define lpfc_acqe_fc_la_trunk_config_port1_SHIFT 21 +#define lpfc_acqe_fc_la_trunk_config_port1_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_config_port1_WORD word0 +#define lpfc_acqe_fc_la_trunk_config_port2_SHIFT 22 +#define lpfc_acqe_fc_la_trunk_config_port2_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_config_port2_WORD word0 +#define lpfc_acqe_fc_la_trunk_config_port3_SHIFT 23 +#define lpfc_acqe_fc_la_trunk_config_port3_MASK 0x0000001 +#define lpfc_acqe_fc_la_trunk_config_port3_WORD word0 uint32_t word1; #define lpfc_acqe_fc_la_llink_spd_SHIFT 16 #define lpfc_acqe_fc_la_llink_spd_MASK 0x0000FFFF @@ -4095,6 +4169,12 @@ struct lpfc_acqe_fc_la { #define lpfc_acqe_fc_la_fault_SHIFT 0 #define 
lpfc_acqe_fc_la_fault_MASK 0x000000FF #define lpfc_acqe_fc_la_fault_WORD word1 +#define lpfc_acqe_fc_la_trunk_fault_SHIFT 0 +#define lpfc_acqe_fc_la_trunk_fault_MASK 0x0000000F +#define lpfc_acqe_fc_la_trunk_fault_WORD word1 +#define lpfc_acqe_fc_la_trunk_linkmask_SHIFT 4 +#define lpfc_acqe_fc_la_trunk_linkmask_MASK 0x000000F +#define lpfc_acqe_fc_la_trunk_linkmask_WORD word1 #define LPFC_FC_LA_FAULT_NONE 0x0 #define LPFC_FC_LA_FAULT_LOCAL 0x1 #define LPFC_FC_LA_FAULT_REMOTE 0x2 diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index 68d62d55a3a5..c1c36812c3d2 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -447,19 +447,19 @@ lpfc_config_port_post(struct lpfc_hba *phba) "READ_SPARM mbxStatus x%x\n", mb->mbxCommand, mb->mbxStatus); phba->link_state = LPFC_HBA_ERROR; - mp = (struct lpfc_dmabuf *) pmb->context1; + mp = (struct lpfc_dmabuf *)pmb->ctx_buf; mempool_free(pmb, phba->mbox_mem_pool); lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); return -EIO; } - mp = (struct lpfc_dmabuf *) pmb->context1; + mp = (struct lpfc_dmabuf *)pmb->ctx_buf; memcpy(&vport->fc_sparam, mp->virt, sizeof (struct serv_parm)); lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); - pmb->context1 = NULL; + pmb->ctx_buf = NULL; lpfc_update_vport_wwn(vport); /* Update the fc_host data structures with new wwn. */ @@ -1801,7 +1801,12 @@ lpfc_sli4_port_sta_fn_reset(struct lpfc_hba *phba, int mbx_action, lpfc_offline(phba); /* release interrupt for possible resource change */ lpfc_sli4_disable_intr(phba); - lpfc_sli_brdrestart(phba); + rc = lpfc_sli_brdrestart(phba); + if (rc) { + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, + "6309 Failed to restart board\n"); + return rc; + } /* request and enable interrupt */ intr_mode = lpfc_sli4_enable_intr(phba, phba->intr_mode); if (intr_mode == LPFC_INTR_ERROR) { @@ -4106,6 +4111,32 @@ finished: return stat; } +void lpfc_host_supported_speeds_set(struct Scsi_Host *shost) +{ + struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata; + struct lpfc_hba *phba = vport->phba; + + fc_host_supported_speeds(shost) = 0; + if (phba->lmt & LMT_128Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_128GBIT; + if (phba->lmt & LMT_64Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_64GBIT; + if (phba->lmt & LMT_32Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_32GBIT; + if (phba->lmt & LMT_16Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_16GBIT; + if (phba->lmt & LMT_10Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_10GBIT; + if (phba->lmt & LMT_8Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_8GBIT; + if (phba->lmt & LMT_4Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_4GBIT; + if (phba->lmt & LMT_2Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_2GBIT; + if (phba->lmt & LMT_1Gb) + fc_host_supported_speeds(shost) |= FC_PORTSPEED_1GBIT; +} + /** * lpfc_host_attrib_init - Initialize SCSI host attributes on a FC port * @shost: pointer to SCSI host data structure. 
@@ -4133,23 +4164,7 @@ void lpfc_host_attrib_init(struct Scsi_Host *shost) lpfc_vport_symbolic_node_name(vport, fc_host_symbolic_name(shost), sizeof fc_host_symbolic_name(shost)); - fc_host_supported_speeds(shost) = 0; - if (phba->lmt & LMT_64Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_64GBIT; - if (phba->lmt & LMT_32Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_32GBIT; - if (phba->lmt & LMT_16Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_16GBIT; - if (phba->lmt & LMT_10Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_10GBIT; - if (phba->lmt & LMT_8Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_8GBIT; - if (phba->lmt & LMT_4Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_4GBIT; - if (phba->lmt & LMT_2Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_2GBIT; - if (phba->lmt & LMT_1Gb) - fc_host_supported_speeds(shost) |= FC_PORTSPEED_1GBIT; + lpfc_host_supported_speeds_set(shost); fc_host_maxframe_size(shost) = (((uint32_t) vport->fc_sparam.cmn.bbRcvSizeMsb & 0x0F) << 8) | @@ -4467,6 +4482,9 @@ lpfc_sli4_port_speed_parse(struct lpfc_hba *phba, uint32_t evt_code, case LPFC_FC_LA_SPEED_64G: port_speed = 64000; break; + case LPFC_FC_LA_SPEED_128G: + port_speed = 128000; + break; default: port_speed = 0; } @@ -4608,6 +4626,136 @@ out_free_pmb: mempool_free(pmb, phba->mbox_mem_pool); } +/** + * lpfc_async_link_speed_to_read_top - Parse async evt link speed code to read + * topology. + * @phba: pointer to lpfc hba data structure. + * @speed_code: asynchronous event link speed code. + * + * This routine parses the given SLI4 async event link speed code into + * the corresponding Read topology link speed. + * + * Return: link speed in terms of Read topology. + **/ +static uint8_t +lpfc_async_link_speed_to_read_top(struct lpfc_hba *phba, uint8_t speed_code) +{ + uint8_t port_speed; + + switch (speed_code) { + case LPFC_FC_LA_SPEED_1G: + port_speed = LPFC_LINK_SPEED_1GHZ; + break; + case LPFC_FC_LA_SPEED_2G: + port_speed = LPFC_LINK_SPEED_2GHZ; + break; + case LPFC_FC_LA_SPEED_4G: + port_speed = LPFC_LINK_SPEED_4GHZ; + break; + case LPFC_FC_LA_SPEED_8G: + port_speed = LPFC_LINK_SPEED_8GHZ; + break; + case LPFC_FC_LA_SPEED_16G: + port_speed = LPFC_LINK_SPEED_16GHZ; + break; + case LPFC_FC_LA_SPEED_32G: + port_speed = LPFC_LINK_SPEED_32GHZ; + break; + case LPFC_FC_LA_SPEED_64G: + port_speed = LPFC_LINK_SPEED_64GHZ; + break; + case LPFC_FC_LA_SPEED_128G: + port_speed = LPFC_LINK_SPEED_128GHZ; + break; + case LPFC_FC_LA_SPEED_256G: + port_speed = LPFC_LINK_SPEED_256GHZ; + break; + default: + port_speed = 0; + break; + } + + return port_speed; +} + +#define trunk_link_status(__idx)\ + bf_get(lpfc_acqe_fc_la_trunk_config_port##__idx, acqe_fc) ?\ + ((phba->trunk_link.link##__idx.state == LPFC_LINK_UP) ?\ + "Link up" : "Link down") : "NA" +/* Did port __idx report an error */ +#define trunk_port_fault(__idx)\ + bf_get(lpfc_acqe_fc_la_trunk_config_port##__idx, acqe_fc) ?\ + (port_fault & (1 << __idx) ? 
"YES" : "NO") : "NA" + +static void +lpfc_update_trunk_link_status(struct lpfc_hba *phba, + struct lpfc_acqe_fc_la *acqe_fc) +{ + uint8_t port_fault = bf_get(lpfc_acqe_fc_la_trunk_linkmask, acqe_fc); + uint8_t err = bf_get(lpfc_acqe_fc_la_trunk_fault, acqe_fc); + + phba->sli4_hba.link_state.speed = + lpfc_sli4_port_speed_parse(phba, LPFC_TRAILER_CODE_FC, + bf_get(lpfc_acqe_fc_la_speed, acqe_fc)); + + phba->sli4_hba.link_state.logical_speed = + bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc); + /* We got FC link speed, convert to fc_linkspeed (READ_TOPOLOGY) */ + phba->fc_linkspeed = + lpfc_async_link_speed_to_read_top( + phba, + bf_get(lpfc_acqe_fc_la_speed, acqe_fc)); + + if (bf_get(lpfc_acqe_fc_la_trunk_config_port0, acqe_fc)) { + phba->trunk_link.link0.state = + bf_get(lpfc_acqe_fc_la_trunk_link_status_port0, acqe_fc) + ? LPFC_LINK_UP : LPFC_LINK_DOWN; + phba->trunk_link.link0.fault = port_fault & 0x1 ? err : 0; + } + if (bf_get(lpfc_acqe_fc_la_trunk_config_port1, acqe_fc)) { + phba->trunk_link.link1.state = + bf_get(lpfc_acqe_fc_la_trunk_link_status_port1, acqe_fc) + ? LPFC_LINK_UP : LPFC_LINK_DOWN; + phba->trunk_link.link1.fault = port_fault & 0x2 ? err : 0; + } + if (bf_get(lpfc_acqe_fc_la_trunk_config_port2, acqe_fc)) { + phba->trunk_link.link2.state = + bf_get(lpfc_acqe_fc_la_trunk_link_status_port2, acqe_fc) + ? LPFC_LINK_UP : LPFC_LINK_DOWN; + phba->trunk_link.link2.fault = port_fault & 0x4 ? err : 0; + } + if (bf_get(lpfc_acqe_fc_la_trunk_config_port3, acqe_fc)) { + phba->trunk_link.link3.state = + bf_get(lpfc_acqe_fc_la_trunk_link_status_port3, acqe_fc) + ? LPFC_LINK_UP : LPFC_LINK_DOWN; + phba->trunk_link.link3.fault = port_fault & 0x8 ? err : 0; + } + + lpfc_printf_log(phba, KERN_ERR, LOG_SLI, + "2910 Async FC Trunking Event - Speed:%d\n" + "\tLogical speed:%d " + "port0: %s port1: %s port2: %s port3: %s\n", + phba->sli4_hba.link_state.speed, + phba->sli4_hba.link_state.logical_speed, + trunk_link_status(0), trunk_link_status(1), + trunk_link_status(2), trunk_link_status(3)); + + if (port_fault) + lpfc_printf_log(phba, KERN_ERR, LOG_SLI, + "3202 trunk error:0x%x (%s) seen on port0:%s " + /* + * SLI-4: We have only 0xA error codes + * defined as of now. print an appropriate + * message in case driver needs to be updated. + */ + "port1:%s port2:%s port3:%s\n", err, err > 0xA ? + "UNDEFINED. update driver." : trunk_errmsg[err], + trunk_port_fault(0), trunk_port_fault(1), + trunk_port_fault(2), trunk_port_fault(3)); +} + + /** * lpfc_sli4_async_fc_evt - Process the asynchronous FC link event * @phba: pointer to lpfc hba data structure. 
@@ -4633,6 +4781,13 @@ lpfc_sli4_async_fc_evt(struct lpfc_hba *phba, struct lpfc_acqe_fc_la *acqe_fc) bf_get(lpfc_trailer_type, acqe_fc)); return; } + + if (bf_get(lpfc_acqe_fc_la_att_type, acqe_fc) == + LPFC_FC_LA_TYPE_TRUNKING_EVENT) { + lpfc_update_trunk_link_status(phba, acqe_fc); + return; + } + /* Keep the link status for extra SLI4 state machine reference */ phba->sli4_hba.link_state.speed = lpfc_sli4_port_speed_parse(phba, LPFC_TRAILER_CODE_FC, @@ -4762,6 +4917,8 @@ lpfc_sli4_async_sli_evt(struct lpfc_hba *phba, struct lpfc_acqe_sli *acqe_sli) struct temp_event temp_event_data; struct lpfc_acqe_misconfigured_event *misconfigured; struct Scsi_Host *shost; + struct lpfc_vport **vports; + int rc, i; evt_type = bf_get(lpfc_trailer_type, acqe_sli); @@ -4887,6 +5044,25 @@ lpfc_sli4_async_sli_evt(struct lpfc_hba *phba, struct lpfc_acqe_sli *acqe_sli) sprintf(message, "Unknown event status x%02x", status); break; } + + /* Issue READ_CONFIG mbox command to refresh supported speeds */ + rc = lpfc_sli4_read_config(phba); + if (rc) { + phba->lmt = 0; + lpfc_printf_log(phba, KERN_ERR, LOG_SLI, + "3194 Unable to retrieve supported " + "speeds, rc = 0x%x\n", rc); + } + vports = lpfc_create_vport_work_array(phba); + if (vports != NULL) { + for (i = 0; i <= phba->max_vports && vports[i] != NULL; + i++) { + shost = lpfc_shost_from_vport(vports[i]); + lpfc_host_supported_speeds_set(shost); + } + } + lpfc_destroy_vport_work_array(phba, vports); + phba->sli4_hba.lnk_info.optic_state = status; lpfc_printf_log(phba, KERN_ERR, LOG_SLI, "3176 Port Name %c %s\n", port_name, message); @@ -5044,7 +5220,7 @@ lpfc_sli4_async_fip_evt(struct lpfc_hba *phba, break; } /* If fast FCF failover rescan event is pending, do nothing */ - if (phba->fcf.fcf_flag & FCF_REDISC_EVT) { + if (phba->fcf.fcf_flag & (FCF_REDISC_EVT | FCF_REDISC_PEND)) { spin_unlock_irq(&phba->hbalock); break; } @@ -7181,26 +7357,19 @@ lpfc_post_init_setup(struct lpfc_hba *phba) static int lpfc_sli_pci_mem_setup(struct lpfc_hba *phba) { - struct pci_dev *pdev; + struct pci_dev *pdev = phba->pcidev; unsigned long bar0map_len, bar2map_len; int i, hbq_count; void *ptr; int error = -ENODEV; - /* Obtain PCI device reference */ - if (!phba->pcidev) + if (!pdev) return error; - else - pdev = phba->pcidev; /* Set the device DMA mask size */ - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 - || pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(64)) != 0) { - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 - || pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(32)) != 0) { - return error; - } - } + if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) + return error; /* Get the bus address of Bar0 and Bar2 and the number of bytes * required by each mapping. 
@@ -7779,6 +7948,8 @@ lpfc_sli4_read_config(struct lpfc_hba *phba) phba->sli4_hba.bbscn_params.word0 = rd_config->word8; } + phba->sli4_hba.conf_trunk = + bf_get(lpfc_mbx_rd_conf_trunk, rd_config); phba->sli4_hba.extents_in_use = bf_get(lpfc_mbx_rd_conf_extnts_inuse, rd_config); phba->sli4_hba.max_cfg_param.max_xri = @@ -7787,6 +7958,9 @@ lpfc_sli4_read_config(struct lpfc_hba *phba) bf_get(lpfc_mbx_rd_conf_xri_base, rd_config); phba->sli4_hba.max_cfg_param.max_vpi = bf_get(lpfc_mbx_rd_conf_vpi_count, rd_config); + /* Limit the max we support */ + if (phba->sli4_hba.max_cfg_param.max_vpi > LPFC_MAX_VPORTS) + phba->sli4_hba.max_cfg_param.max_vpi = LPFC_MAX_VPORTS; phba->sli4_hba.max_cfg_param.vpi_base = bf_get(lpfc_mbx_rd_conf_vpi_base, rd_config); phba->sli4_hba.max_cfg_param.max_rpi = @@ -9562,25 +9736,18 @@ out: static int lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba) { - struct pci_dev *pdev; + struct pci_dev *pdev = phba->pcidev; unsigned long bar0map_len, bar1map_len, bar2map_len; int error = -ENODEV; uint32_t if_type; - /* Obtain PCI device reference */ - if (!phba->pcidev) + if (!pdev) return error; - else - pdev = phba->pcidev; /* Set the device DMA mask size */ - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0 - || pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(64)) != 0) { - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0 - || pci_set_consistent_dma_mask(pdev,DMA_BIT_MASK(32)) != 0) { - return error; - } - } + if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) + return error; /* * The BARs and register set definitions and offset locations are @@ -10523,12 +10690,7 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba) kthread_stop(phba->worker_thread); /* Disable FW logging to host memory */ - writel(LPFC_CTL_PDEV_CTL_DDL_RAS, - phba->sli4_hba.conf_regs_memmap_p + LPFC_CTL_PDEV_CTL_OFFSET); - - /* Free RAS DMA memory */ - if (phba->ras_fwlog.ras_enabled == true) - lpfc_sli4_ras_dma_free(phba); + lpfc_ras_stop_fwlog(phba); /* Unset the queues shared with the hardware then release all * allocated resources. 
@@ -10539,6 +10701,10 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba) /* Reset SLI4 HBA FCoE function */ lpfc_pci_function_reset(phba); + /* Free RAS DMA memory */ + if (phba->ras_fwlog.ras_enabled) + lpfc_sli4_ras_dma_free(phba); + /* Stop the SLI4 device port */ phba->pport->work_port_events = 0; } @@ -12476,7 +12642,8 @@ lpfc_sli4_ras_init(struct lpfc_hba *phba) case PCI_DEVICE_ID_LANCER_G6_FC: case PCI_DEVICE_ID_LANCER_G7_FC: phba->ras_fwlog.ras_hwsupport = true; - if (phba->cfg_ras_fwlog_func == PCI_FUNC(phba->pcidev->devfn)) + if (phba->cfg_ras_fwlog_func == PCI_FUNC(phba->pcidev->devfn) && + phba->cfg_ras_fwlog_buffsize) phba->ras_fwlog.ras_enabled = true; else phba->ras_fwlog.ras_enabled = false; diff --git a/drivers/scsi/lpfc/lpfc_mbox.c b/drivers/scsi/lpfc/lpfc_mbox.c index deb094fdbb79..f6a5083a621e 100644 --- a/drivers/scsi/lpfc/lpfc_mbox.c +++ b/drivers/scsi/lpfc/lpfc_mbox.c @@ -94,7 +94,7 @@ lpfc_dump_static_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, memset(mp->virt, 0, LPFC_BPL_SIZE); INIT_LIST_HEAD(&mp->list); /* save address for completion */ - pmb->context1 = (uint8_t *)mp; + pmb->ctx_buf = (uint8_t *)mp; mb->un.varWords[3] = putPaddrLow(mp->phys); mb->un.varWords[4] = putPaddrHigh(mp->phys); mb->un.varDmp.sli4_length = sizeof(struct static_vport_info); @@ -139,7 +139,7 @@ lpfc_dump_mem(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, uint16_t offset, void *ctx; mb = &pmb->u.mb; - ctx = pmb->context2; + ctx = pmb->ctx_buf; /* Setup to dump VPD region */ memset(pmb, 0, sizeof (LPFC_MBOXQ_t)); @@ -151,7 +151,7 @@ lpfc_dump_mem(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, uint16_t offset, mb->un.varDmp.word_cnt = (DMP_RSP_SIZE / sizeof (uint32_t)); mb->un.varDmp.co = 0; mb->un.varDmp.resp_offset = 0; - pmb->context2 = ctx; + pmb->ctx_buf = ctx; mb->mbxOwner = OWN_HOST; return; } @@ -172,7 +172,7 @@ lpfc_dump_wakeup_param(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) mb = &pmb->u.mb; /* Save context so that we can restore after memset */ - ctx = pmb->context2; + ctx = pmb->ctx_buf; /* Setup to dump VPD region */ memset(pmb, 0, sizeof(LPFC_MBOXQ_t)); @@ -186,7 +186,7 @@ lpfc_dump_wakeup_param(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) mb->un.varDmp.word_cnt = WAKE_UP_PARMS_WORD_SIZE; mb->un.varDmp.co = 0; mb->un.varDmp.resp_offset = 0; - pmb->context2 = ctx; + pmb->ctx_buf = ctx; return; } @@ -304,7 +304,7 @@ lpfc_read_topology(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, /* Save address for later completion and set the owner to host so that * the FW knows this mailbox is available for processing. 
*/ - pmb->context1 = (uint8_t *)mp; + pmb->ctx_buf = (uint8_t *)mp; mb->mbxOwner = OWN_HOST; return (0); } @@ -513,9 +513,9 @@ lpfc_init_link(struct lpfc_hba * phba, break; } - if (phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC && - mb->un.varInitLnk.link_flags & FLAGS_TOPOLOGY_MODE_LOOP) { - /* Failover is not tried for Lancer G6 */ + if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC || + phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) && + mb->un.varInitLnk.link_flags & FLAGS_TOPOLOGY_MODE_LOOP) { mb->un.varInitLnk.link_flags = FLAGS_TOPOLOGY_MODE_PT_PT; phba->cfg_topology = FLAGS_TOPOLOGY_MODE_PT_PT; } @@ -631,7 +631,7 @@ lpfc_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, int vpi) mb->un.varRdSparm.vpi = phba->vpi_ids[vpi]; /* save address for completion */ - pmb->context1 = mp; + pmb->ctx_buf = mp; return (0); } @@ -783,7 +783,7 @@ lpfc_reg_rpi(struct lpfc_hba *phba, uint16_t vpi, uint32_t did, memcpy(sparam, param, sizeof (struct serv_parm)); /* save address for completion */ - pmb->context1 = (uint8_t *) mp; + pmb->ctx_buf = (uint8_t *)mp; mb->mbxCommand = MBX_REG_LOGIN64; mb->un.varRegLogin.un.sp64.tus.f.bdeSize = sizeof (struct serv_parm); @@ -858,7 +858,7 @@ lpfc_sli4_unreg_all_rpis(struct lpfc_vport *vport) mbox->u.mb.un.varUnregLogin.rsvd1 = 0x4000; mbox->vport = vport; mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; - mbox->context1 = NULL; + mbox->ctx_ndlp = NULL; rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) mempool_free(mbox, phba->mbox_mem_pool); @@ -2288,7 +2288,7 @@ lpfc_sli4_dump_cfg_rg23(struct lpfc_hba *phba, struct lpfcMboxq *mbox) INIT_LIST_HEAD(&mp->list); /* save address for completion */ - mbox->context1 = (uint8_t *) mp; + mbox->ctx_buf = (uint8_t *)mp; mb->mbxCommand = MBX_DUMP_MEMORY; mb->un.varDmp.type = DMP_NV_PARAMS; @@ -2305,7 +2305,7 @@ lpfc_mbx_cmpl_rdp_link_stat(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq) MAILBOX_t *mb; int rc = FAILURE; struct lpfc_rdp_context *rdp_context = - (struct lpfc_rdp_context *)(mboxq->context2); + (struct lpfc_rdp_context *)(mboxq->ctx_ndlp); mb = &mboxq->u.mb; if (mb->mbxStatus) @@ -2323,9 +2323,9 @@ mbx_failed: static void lpfc_mbx_cmpl_rdp_page_a2(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) { - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) mbox->context1; + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)mbox->ctx_buf; struct lpfc_rdp_context *rdp_context = - (struct lpfc_rdp_context *)(mbox->context2); + (struct lpfc_rdp_context *)(mbox->ctx_ndlp); if (bf_get(lpfc_mqe_status, &mbox->u.mqe)) goto error_mbuf_free; @@ -2341,7 +2341,7 @@ lpfc_mbx_cmpl_rdp_page_a2(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) lpfc_read_lnk_stat(phba, mbox); mbox->vport = rdp_context->ndlp->vport; mbox->mbox_cmpl = lpfc_mbx_cmpl_rdp_link_stat; - mbox->context2 = (struct lpfc_rdp_context *) rdp_context; + mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) == MBX_NOT_FINISHED) goto error_cmd_free; @@ -2359,9 +2359,9 @@ void lpfc_mbx_cmpl_rdp_page_a0(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) { int rc; - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (mbox->context1); + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(mbox->ctx_buf); struct lpfc_rdp_context *rdp_context = - (struct lpfc_rdp_context *)(mbox->context2); + (struct lpfc_rdp_context *)(mbox->ctx_ndlp); if (bf_get(lpfc_mqe_status, &mbox->u.mqe)) goto error; @@ -2375,7 +2375,7 @@ lpfc_mbx_cmpl_rdp_page_a0(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) INIT_LIST_HEAD(&mp->list); /* save address 
for completion */ - mbox->context1 = mp; + mbox->ctx_buf = mp; mbox->vport = rdp_context->ndlp->vport; bf_set(lpfc_mqe_command, &mbox->u.mqe, MBX_DUMP_MEMORY); @@ -2391,7 +2391,7 @@ lpfc_mbx_cmpl_rdp_page_a0(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) mbox->u.mqe.un.mem_dump_type3.addr_hi = putPaddrHigh(mp->phys); mbox->mbox_cmpl = lpfc_mbx_cmpl_rdp_page_a2; - mbox->context2 = (struct lpfc_rdp_context *) rdp_context; + mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) goto error; @@ -2436,7 +2436,7 @@ lpfc_sli4_dump_page_a0(struct lpfc_hba *phba, struct lpfcMboxq *mbox) bf_set(lpfc_mqe_command, &mbox->u.mqe, MBX_DUMP_MEMORY); /* save address for completion */ - mbox->context1 = mp; + mbox->ctx_buf = mp; bf_set(lpfc_mbx_memory_dump_type3_type, &mbox->u.mqe.un.mem_dump_type3, DMP_LMSD); diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c index 9c22a2c93462..66191fa35f63 100644 --- a/drivers/scsi/lpfc/lpfc_mem.c +++ b/drivers/scsi/lpfc/lpfc_mem.c @@ -330,7 +330,7 @@ lpfc_mem_free_all(struct lpfc_hba *phba) /* Free memory used in mailbox queue back to mailbox memory pool */ list_for_each_entry_safe(mbox, next_mbox, &psli->mboxq, list) { - mp = (struct lpfc_dmabuf *) (mbox->context1); + mp = (struct lpfc_dmabuf *)(mbox->ctx_buf); if (mp) { lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); @@ -340,7 +340,7 @@ lpfc_mem_free_all(struct lpfc_hba *phba) } /* Free memory used in mailbox cmpl list back to mailbox memory pool */ list_for_each_entry_safe(mbox, next_mbox, &psli->mboxq_cmpl, list) { - mp = (struct lpfc_dmabuf *) (mbox->context1); + mp = (struct lpfc_dmabuf *)(mbox->ctx_buf); if (mp) { lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); @@ -354,7 +354,7 @@ lpfc_mem_free_all(struct lpfc_hba *phba) spin_unlock_irq(&phba->hbalock); if (psli->mbox_active) { mbox = psli->mbox_active; - mp = (struct lpfc_dmabuf *) (mbox->context1); + mp = (struct lpfc_dmabuf *)(mbox->ctx_buf); if (mp) { lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c index 269808e8480f..96bc3789a166 100644 --- a/drivers/scsi/lpfc/lpfc_nportdisc.c +++ b/drivers/scsi/lpfc/lpfc_nportdisc.c @@ -467,7 +467,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, */ mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; /* - * mbox->context2 = lpfc_nlp_get(ndlp) deferred until mailbox + * mbox->ctx_ndlp = lpfc_nlp_get(ndlp) deferred until mailbox * command issued in lpfc_cmpl_els_acc(). 
*/ mbox->vport = vport; @@ -535,8 +535,8 @@ lpfc_mbx_cmpl_resume_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq) struct lpfc_nodelist *ndlp; uint32_t cmd; - elsiocb = (struct lpfc_iocbq *)mboxq->context1; - ndlp = (struct lpfc_nodelist *) mboxq->context2; + elsiocb = (struct lpfc_iocbq *)mboxq->ctx_buf; + ndlp = (struct lpfc_nodelist *)mboxq->ctx_ndlp; vport = mboxq->vport; cmd = elsiocb->drvrTimeout; @@ -836,7 +836,9 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) struct Scsi_Host *shost = lpfc_shost_from_vport(vport); if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED)) { + spin_lock_irq(shost->host_lock); ndlp->nlp_flag &= ~NLP_NPR_ADISC; + spin_unlock_irq(shost->host_lock); return 0; } @@ -851,7 +853,10 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) return 1; } } + + spin_lock_irq(shost->host_lock); ndlp->nlp_flag &= ~NLP_NPR_ADISC; + spin_unlock_irq(shost->host_lock); lpfc_unreg_rpi(vport, ndlp); return 0; } @@ -866,13 +871,26 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) * to release a rpi. **/ void -lpfc_release_rpi(struct lpfc_hba *phba, - struct lpfc_vport *vport, - uint16_t rpi) +lpfc_release_rpi(struct lpfc_hba *phba, struct lpfc_vport *vport, + struct lpfc_nodelist *ndlp, uint16_t rpi) { LPFC_MBOXQ_t *pmb; int rc; + /* If there is already an UNREG in progress for this ndlp, + * no need to queue up another one. + */ + if (ndlp->nlp_flag & NLP_UNREG_INP) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1435 release_rpi SKIP UNREG x%x on " + "NPort x%x deferred x%x flg x%x " + "Data: %p\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, + ndlp->nlp_flag, ndlp); + return; + } + pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); if (!pmb) @@ -881,6 +899,18 @@ lpfc_release_rpi(struct lpfc_hba *phba, else { lpfc_unreg_login(phba, vport->vpi, rpi, pmb); pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; + pmb->vport = vport; + pmb->ctx_ndlp = ndlp; + + if (((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) && + (!(vport->fc_flag & FC_OFFLINE_MODE))) + ndlp->nlp_flag |= NLP_UNREG_INP; + + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1437 release_rpi UNREG x%x " + "on NPort x%x flg x%x\n", + ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag); + rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) mempool_free(pmb, phba->mbox_mem_pool); @@ -901,7 +931,7 @@ lpfc_disc_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, (evt == NLP_EVT_CMPL_REG_LOGIN) && (!pmb->u.mb.mbxStatus)) { rpi = pmb->u.mb.un.varWords[0]; - lpfc_release_rpi(phba, vport, rpi); + lpfc_release_rpi(phba, vport, ndlp, rpi); } lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY, "0271 Illegal State Transition: node x%x " @@ -1253,7 +1283,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, ndlp->nlp_flag |= NLP_REG_LOGIN_SEND; mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; } - mbox->context2 = lpfc_nlp_get(ndlp); + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); mbox->vport = vport; if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) != MBX_NOT_FINISHED) { @@ -1267,7 +1297,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, * command */ lpfc_nlp_put(ndlp); - mp = (struct lpfc_dmabuf *) mbox->context1; + mp = (struct lpfc_dmabuf *)mbox->ctx_buf; lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); mempool_free(mbox, phba->mbox_mem_pool); @@ -1329,7 +1359,7 @@ lpfc_cmpl_reglogin_plogi_issue(struct lpfc_vport *vport, if (!(phba->pport->load_flag & FC_UNLOADING) && !mb->mbxStatus) { rpi = 
pmb->u.mb.un.varWords[0]; - lpfc_release_rpi(phba, vport, rpi); + lpfc_release_rpi(phba, vport, ndlp, rpi); } return ndlp->nlp_state; } @@ -1636,10 +1666,10 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport, /* cleanup any ndlp on mbox q waiting for reglogin cmpl */ if ((mb = phba->sli.mbox_active)) { if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && - (ndlp == (struct lpfc_nodelist *) mb->context2)) { + (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; lpfc_nlp_put(ndlp); - mb->context2 = NULL; + mb->ctx_ndlp = NULL; mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; } } @@ -1647,8 +1677,8 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport, spin_lock_irq(&phba->hbalock); list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) { if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && - (ndlp == (struct lpfc_nodelist *) mb->context2)) { - mp = (struct lpfc_dmabuf *) (mb->context1); + (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { + mp = (struct lpfc_dmabuf *)(mb->ctx_buf); if (mp) { __lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); @@ -1770,9 +1800,16 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport, ndlp->nlp_fc4_type |= NLP_FC4_FCP; } else if (ndlp->nlp_fc4_type == 0) { - rc = lpfc_ns_cmd(vport, SLI_CTNS_GFT_ID, - 0, ndlp->nlp_DID); - return ndlp->nlp_state; + /* If we are only configured for FCP, the driver + * should just issue PRLI for FCP. Otherwise issue + * GFT_ID to determine if remote port supports NVME. + */ + if (phba->cfg_enable_fc4_type != LPFC_ENABLE_FCP) { + rc = lpfc_ns_cmd(vport, SLI_CTNS_GFT_ID, + 0, ndlp->nlp_DID); + return ndlp->nlp_state; + } + ndlp->nlp_fc4_type = NLP_FC4_FCP; } ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE; @@ -2863,8 +2900,9 @@ lpfc_disc_state_machine(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, /* DSM in event on NPort in state */ lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, "0211 DSM in event x%x on NPort x%x in " - "state %d Data: x%x\n", - evt, ndlp->nlp_DID, cur_state, ndlp->nlp_flag); + "state %d rpi x%x Data: x%x x%x\n", + evt, ndlp->nlp_DID, cur_state, ndlp->nlp_rpi, + ndlp->nlp_flag, ndlp->nlp_fc4_type); lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_DSM, "DSM in: evt:%d ste:%d did:x%x", @@ -2876,8 +2914,9 @@ lpfc_disc_state_machine(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, /* DSM out state on NPort */ if (got_ndlp) { lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, - "0212 DSM out state %d on NPort x%x Data: x%x\n", - rc, ndlp->nlp_DID, ndlp->nlp_flag); + "0212 DSM out state %d on NPort x%x " + "rpi x%x Data: x%x\n", + rc, ndlp->nlp_DID, ndlp->nlp_rpi, ndlp->nlp_flag); lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_DSM, "DSM out: ste:%d did:x%x flg:x%x", diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c index ba831def9301..4c66b19e6199 100644 --- a/drivers/scsi/lpfc/lpfc_nvme.c +++ b/drivers/scsi/lpfc/lpfc_nvme.c @@ -1855,7 +1855,6 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport, bf_set(abort_cmd_criteria, &abts_wqe->abort_cmd, T_XRI_TAG); /* word 7 */ - bf_set(wqe_ct, &abts_wqe->abort_cmd.wqe_com, 0); bf_set(wqe_cmnd, &abts_wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX); bf_set(wqe_class, &abts_wqe->abort_cmd.wqe_com, nvmereq_wqe->iocb.ulpClass); @@ -1870,7 +1869,6 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport, abts_buf->iotag); /* word 10 */ - bf_set(wqe_wqid, &abts_wqe->abort_cmd.wqe_com, nvmereq_wqe->hba_wqidx); bf_set(wqe_qosd, &abts_wqe->abort_cmd.wqe_com, 1); bf_set(wqe_lenloc, &abts_wqe->abort_cmd.wqe_com, 
LPFC_WQE_LENLOC_NONE); diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c index 4fa6703a9ec9..b4f1a840b3b4 100644 --- a/drivers/scsi/lpfc/lpfc_scsi.c +++ b/drivers/scsi/lpfc/lpfc_scsi.c @@ -2734,6 +2734,7 @@ lpfc_bg_scsi_prep_dma_buf_s3(struct lpfc_hba *phba, int datasegcnt, protsegcnt, datadir = scsi_cmnd->sc_data_direction; int prot_group_type = 0; int fcpdl; + struct lpfc_vport *vport = phba->pport; /* * Start the lpfc command prep by bumping the bpl beyond fcp_cmnd @@ -2839,6 +2840,14 @@ lpfc_bg_scsi_prep_dma_buf_s3(struct lpfc_hba *phba, */ iocb_cmd->un.fcpi.fcpi_parm = fcpdl; + /* + * For First burst, we may need to adjust the initial transfer + * length for DIF + */ + if (iocb_cmd->un.fcpi.fcpi_XRdy && + (fcpdl < vport->cfg_first_burst_size)) + iocb_cmd->un.fcpi.fcpi_XRdy = fcpdl; + return 0; err: if (lpfc_cmd->seg_cnt) @@ -3403,6 +3412,7 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba, int datasegcnt, protsegcnt, datadir = scsi_cmnd->sc_data_direction; int prot_group_type = 0; int fcpdl; + struct lpfc_vport *vport = phba->pport; /* * Start the lpfc command prep by bumping the sgl beyond fcp_cmnd @@ -3518,6 +3528,14 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba, */ iocb_cmd->un.fcpi.fcpi_parm = fcpdl; + /* + * For First burst, we may need to adjust the initial transfer + * length for DIF + */ + if (iocb_cmd->un.fcpi.fcpi_XRdy && + (fcpdl < vport->cfg_first_burst_size)) + iocb_cmd->un.fcpi.fcpi_XRdy = fcpdl; + /* * If the OAS driver feature is enabled and the lun is enabled for * OAS, set the oas iocb related flags. @@ -3914,7 +3932,7 @@ int lpfc_sli4_scmd_to_wqidx_distr(struct lpfc_hba *phba, uint32_t tag; uint16_t hwq; - if (cmnd && shost_use_blk_mq(cmnd->device->host)) { + if (cmnd) { tag = blk_mq_unique_tag(cmnd->request); hwq = blk_mq_unique_tag_to_hwq(tag); @@ -4163,7 +4181,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn, /* If pCmd was set to NULL from abort path, do not call scsi_done */ if (xchg(&lpfc_cmd->pCmd, NULL) == NULL) { lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP, - "0711 FCP cmd already NULL, sid: 0x%06x, " + "5688 FCP cmd already NULL, sid: 0x%06x, " "did: 0x%06x, oxid: 0x%04x\n", vport->fc_myDID, (pnode) ? pnode->nlp_DID : 0, @@ -4441,6 +4459,66 @@ lpfc_tskmgmt_def_cmpl(struct lpfc_hba *phba, return; } +/** + * lpfc_check_pci_resettable - Walks list of devices on pci_dev's bus to check + * if issuing a pci_bus_reset is possibly unsafe + * @phba: lpfc_hba pointer. + * + * Description: + * Walks the bus_list to ensure only PCI devices with Emulex + * vendor id, device ids that support hot reset, and only one occurrence + * of function 0. 
+ * + * Returns: + * -EBADSLT, detected invalid device + * 0, successful + */ +int +lpfc_check_pci_resettable(const struct lpfc_hba *phba) +{ + const struct pci_dev *pdev = phba->pcidev; + struct pci_dev *ptr = NULL; + u8 counter = 0; + + /* Walk the list of devices on the pci_dev's bus */ + list_for_each_entry(ptr, &pdev->bus->devices, bus_list) { + /* Check for Emulex Vendor ID */ + if (ptr->vendor != PCI_VENDOR_ID_EMULEX) { + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, + "8346 Non-Emulex vendor found: " + "0x%04x\n", ptr->vendor); + return -EBADSLT; + } + + /* Check for valid Emulex Device ID */ + switch (ptr->device) { + case PCI_DEVICE_ID_LANCER_FC: + case PCI_DEVICE_ID_LANCER_G6_FC: + case PCI_DEVICE_ID_LANCER_G7_FC: + break; + default: + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, + "8347 Invalid device found: " + "0x%04x\n", ptr->device); + return -EBADSLT; + } + + /* Check for only one function 0 ID to ensure only one HBA on + * secondary bus + */ + if (ptr->devfn == 0) { + if (++counter > 1) { + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, + "8348 More than one device on " + "secondary bus found\n"); + return -EBADSLT; + } + } + } + + return 0; +} + /** * lpfc_info - Info entry point of scsi_host_template data structure * @host: The scsi host for which this call is being executed. @@ -4455,32 +4533,53 @@ lpfc_info(struct Scsi_Host *host) { struct lpfc_vport *vport = (struct lpfc_vport *) host->hostdata; struct lpfc_hba *phba = vport->phba; - int len, link_speed = 0; - static char lpfcinfobuf[384]; + int link_speed = 0; + static char lpfcinfobuf[384]; + char tmp[384] = {0}; - memset(lpfcinfobuf,0,384); + memset(lpfcinfobuf, 0, sizeof(lpfcinfobuf)); if (phba && phba->pcidev){ - strncpy(lpfcinfobuf, phba->ModelDesc, 256); - len = strlen(lpfcinfobuf); - snprintf(lpfcinfobuf + len, - 384-len, - " on PCI bus %02x device %02x irq %d", - phba->pcidev->bus->number, - phba->pcidev->devfn, - phba->pcidev->irq); - len = strlen(lpfcinfobuf); + /* Model Description */ + scnprintf(tmp, sizeof(tmp), phba->ModelDesc); + if (strlcat(lpfcinfobuf, tmp, sizeof(lpfcinfobuf)) >= + sizeof(lpfcinfobuf)) + goto buffer_done; + + /* PCI Info */ + scnprintf(tmp, sizeof(tmp), + " on PCI bus %02x device %02x irq %d", + phba->pcidev->bus->number, phba->pcidev->devfn, + phba->pcidev->irq); + if (strlcat(lpfcinfobuf, tmp, sizeof(lpfcinfobuf)) >= + sizeof(lpfcinfobuf)) + goto buffer_done; + + /* Port Number */ if (phba->Port[0]) { - snprintf(lpfcinfobuf + len, - 384-len, - " port %s", - phba->Port); + scnprintf(tmp, sizeof(tmp), " port %s", phba->Port); + if (strlcat(lpfcinfobuf, tmp, sizeof(lpfcinfobuf)) >= + sizeof(lpfcinfobuf)) + goto buffer_done; } - len = strlen(lpfcinfobuf); + + /* Link Speed */ link_speed = lpfc_sli_port_speed_get(phba); - if (link_speed != 0) - snprintf(lpfcinfobuf + len, 384-len, - " Logical Link Speed: %d Mbps", link_speed); + if (link_speed != 0) { + scnprintf(tmp, sizeof(tmp), + " Logical Link Speed: %d Mbps", link_speed); + if (strlcat(lpfcinfobuf, tmp, sizeof(lpfcinfobuf)) >= + sizeof(lpfcinfobuf)) + goto buffer_done; + } + + /* PCI resettable */ + if (!lpfc_check_pci_resettable(phba)) { + scnprintf(tmp, sizeof(tmp), " PCI resettable"); + strlcat(lpfcinfobuf, tmp, sizeof(lpfcinfobuf)); + } } + +buffer_done: return lpfcinfobuf; } @@ -6036,7 +6135,6 @@ struct scsi_host_template lpfc_template_nvme = { .this_id = -1, .sg_tablesize = 1, .cmd_per_lun = 1, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = lpfc_hba_attrs, .max_sectors = 0xFFFF, .vendor_id = LPFC_NL_VENDOR_ID, @@ -6061,7 
+6159,6 @@ struct scsi_host_template lpfc_template_no_hr = { .this_id = -1, .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT, .cmd_per_lun = LPFC_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = lpfc_hba_attrs, .max_sectors = 0xFFFF, .vendor_id = LPFC_NL_VENDOR_ID, @@ -6088,7 +6185,6 @@ struct scsi_host_template lpfc_template = { .this_id = -1, .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT, .cmd_per_lun = LPFC_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = lpfc_hba_attrs, .max_sectors = 0xFFFF, .vendor_id = LPFC_NL_VENDOR_ID, @@ -6113,7 +6209,6 @@ struct scsi_host_template lpfc_vport_template = { .this_id = -1, .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT, .cmd_per_lun = LPFC_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = lpfc_vport_attrs, .max_sectors = 0xFFFF, .change_queue_depth = scsi_change_queue_depth, diff --git a/drivers/scsi/lpfc/lpfc_scsi.h b/drivers/scsi/lpfc/lpfc_scsi.h index cc99859774ff..b759b089432c 100644 --- a/drivers/scsi/lpfc/lpfc_scsi.h +++ b/drivers/scsi/lpfc/lpfc_scsi.h @@ -194,6 +194,10 @@ struct lpfc_scsi_buf { #define NO_MORE_OAS_LUN -1 #define NOT_OAS_ENABLED_LUN NO_MORE_OAS_LUN +#ifndef FC_PORTSPEED_128GBIT +#define FC_PORTSPEED_128GBIT 0x2000 +#endif + #define TXRDY_PAYLOAD_LEN 12 int lpfc_sli4_scmd_to_wqidx_distr(struct lpfc_hba *phba, diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c index b9e5cd79931a..30734caf77e1 100644 --- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -2456,7 +2456,7 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) uint16_t rpi, vpi; int rc; - mp = (struct lpfc_dmabuf *) (pmb->context1); + mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); if (mp) { lpfc_mbuf_free(phba, mp->virt, mp->phys); @@ -2491,9 +2491,35 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) } if (pmb->u.mb.mbxCommand == MBX_REG_LOGIN64) { - ndlp = (struct lpfc_nodelist *)pmb->context2; + ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; lpfc_nlp_put(ndlp); - pmb->context2 = NULL; + pmb->ctx_buf = NULL; + pmb->ctx_ndlp = NULL; + } + + if (pmb->u.mb.mbxCommand == MBX_UNREG_LOGIN) { + ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; + + /* Check to see if there are any deferred events to process */ + if (ndlp) { + lpfc_printf_vlog( + vport, + KERN_INFO, LOG_MBOX | LOG_DISCOVERY, + "1438 UNREG cmpl deferred mbox x%x " + "on NPort x%x Data: x%x x%x %p\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag, ndlp->nlp_defer_did, ndlp); + + if ((ndlp->nlp_flag & NLP_UNREG_INP) && + (ndlp->nlp_defer_did != NLP_EVT_NOTHING_PENDING)) { + ndlp->nlp_flag &= ~NLP_UNREG_INP; + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); + } else { + ndlp->nlp_flag &= ~NLP_UNREG_INP; + } + } + pmb->ctx_ndlp = NULL; } /* Check security permission status on INIT_LINK mailbox command */ @@ -2527,21 +2553,46 @@ lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) struct lpfc_vport *vport = pmb->vport; struct lpfc_nodelist *ndlp; - ndlp = pmb->context1; + ndlp = pmb->ctx_ndlp; if (pmb->u.mb.mbxCommand == MBX_UNREG_LOGIN) { if (phba->sli_rev == LPFC_SLI_REV4 && (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >= LPFC_SLI_INTF_IF_TYPE_2)) { if (ndlp) { - lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI, - "0010 UNREG_LOGIN vpi:%x " - "rpi:%x DID:%x map:%x %p\n", - vport->vpi, ndlp->nlp_rpi, - ndlp->nlp_DID, - ndlp->nlp_usg_map, ndlp); + lpfc_printf_vlog( + vport, KERN_INFO, LOG_MBOX | LOG_SLI, + "0010 UNREG_LOGIN vpi:%x " + "rpi:%x 
DID:%x defer x%x flg x%x " + "map:%x %p\n", + vport->vpi, ndlp->nlp_rpi, + ndlp->nlp_DID, ndlp->nlp_defer_did, + ndlp->nlp_flag, + ndlp->nlp_usg_map, ndlp); ndlp->nlp_flag &= ~NLP_LOGO_ACC; lpfc_nlp_put(ndlp); + + /* Check to see if there are any deferred + * events to process + */ + if ((ndlp->nlp_flag & NLP_UNREG_INP) && + (ndlp->nlp_defer_did != + NLP_EVT_NOTHING_PENDING)) { + lpfc_printf_vlog( + vport, KERN_INFO, LOG_DISCOVERY, + "4111 UNREG cmpl deferred " + "clr x%x on " + "NPort x%x Data: x%x %p\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, ndlp); + ndlp->nlp_flag &= ~NLP_UNREG_INP; + ndlp->nlp_defer_did = + NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi( + vport, ndlp->nlp_DID, 0); + } else { + ndlp->nlp_flag &= ~NLP_UNREG_INP; + } } } } @@ -4640,6 +4691,8 @@ lpfc_sli_brdrestart_s4(struct lpfc_hba *phba) hba_aer_enabled = phba->hba_flag & HBA_AER_ENABLED; rc = lpfc_sli4_brdreset(phba); + if (rc) + return rc; spin_lock_irq(&phba->hbalock); phba->pport->stopped = 0; @@ -5228,7 +5281,7 @@ lpfc_sli4_read_fcoe_params(struct lpfc_hba *phba) goto out_free_mboxq; } - mp = (struct lpfc_dmabuf *) mboxq->context1; + mp = (struct lpfc_dmabuf *)mboxq->ctx_buf; rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI, @@ -6147,6 +6200,25 @@ lpfc_set_features(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox, return; } +/** + * lpfc_ras_stop_fwlog: Disable FW logging by the adapter + * @phba: Pointer to HBA context object. + * + * Disable FW logging into host memory on the adapter. To + * be done before reading logs from the host memory. + **/ +void +lpfc_ras_stop_fwlog(struct lpfc_hba *phba) +{ + struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog; + + ras_fwlog->ras_active = false; + + /* Disable FW logging to host memory */ + writel(LPFC_CTL_PDEV_CTL_DDL_RAS, + phba->sli4_hba.conf_regs_memmap_p + LPFC_CTL_PDEV_CTL_OFFSET); +} + /** * lpfc_sli4_ras_dma_free - Free memory allocated for FW logging. * @phba: Pointer to HBA context object. 
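The kernel-doc above lpfc_ras_stop_fwlog() states the ordering contract: firmware logging must be quiesced before the host reads the log buffers back. A minimal sketch of a conforming reader, assuming only the ras_fwlog fields visible in these hunks (the "dest" buffer and the helper name are hypothetical):

/* Hedged sketch: stop DDL RAS logging first, then copy each
 * host-memory log buffer out. "dest" is a hypothetical destination
 * large enough for every buffer on fwlog_buff_list.
 */
static void example_drain_fwlog(struct lpfc_hba *phba, char *dest)
{
	struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog;
	struct lpfc_dmabuf *dmabuf;

	lpfc_ras_stop_fwlog(phba);	/* clears ras_active, sets DDL_RAS */

	list_for_each_entry(dmabuf, &ras_fwlog->fwlog_buff_list, list) {
		memcpy(dest, dmabuf->virt, LPFC_RAS_MAX_ENTRY_SIZE);
		dest += LPFC_RAS_MAX_ENTRY_SIZE;
	}
}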
@@ -6211,7 +6283,7 @@ lpfc_sli4_ras_dma_alloc(struct lpfc_hba *phba, &ras_fwlog->lwpd.phys, GFP_KERNEL); if (!ras_fwlog->lwpd.virt) { - lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, "6185 LWPD Memory Alloc Failed\n"); return -ENOMEM; @@ -6228,7 +6300,7 @@ lpfc_sli4_ras_dma_alloc(struct lpfc_hba *phba, goto free_mem; } - dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev, + dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, LPFC_RAS_MAX_ENTRY_SIZE, &dmabuf->phys, GFP_KERNEL); @@ -6239,7 +6311,6 @@ lpfc_sli4_ras_dma_alloc(struct lpfc_hba *phba, "6187 DMA Alloc Failed FW logging"); goto free_mem; } - memset(dmabuf->virt, 0, LPFC_RAS_MAX_ENTRY_SIZE); dmabuf->buffer_tag = i; list_add_tail(&dmabuf->list, &ras_fwlog->fwlog_buff_list); } @@ -6274,11 +6345,13 @@ lpfc_sli4_ras_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response); if (mb->mbxStatus != MBX_SUCCESS || shdr_status) { - lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX, + lpfc_printf_log(phba, KERN_ERR, LOG_MBOX, "6188 FW LOG mailbox " "completed with status x%x add_status x%x," " mbx status x%x\n", shdr_status, shdr_add_status, mb->mbxStatus); + + ras_fwlog->ras_hwsupport = false; goto disable_ras; } @@ -6326,7 +6399,7 @@ lpfc_sli4_ras_fwlog_init(struct lpfc_hba *phba, rc = lpfc_sli4_ras_dma_alloc(phba, fwlog_entry_count); if (rc) { lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, - "6189 RAS FW Log Support Not Enabled"); + "6189 FW Log Memory Allocation Failed"); return rc; } } @@ -6334,7 +6407,7 @@ lpfc_sli4_ras_fwlog_init(struct lpfc_hba *phba, /* Setup Mailbox command */ mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); if (!mbox) { - lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, "6190 RAS MBX Alloc Failed"); rc = -ENOMEM; goto mem_free; @@ -6379,8 +6452,8 @@ lpfc_sli4_ras_fwlog_init(struct lpfc_hba *phba, rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); if (rc == MBX_NOT_FINISHED) { - lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, - "6191 RAS Mailbox failed. " + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, + "6191 FW-Log Mailbox failed. 
" "status %d mbxStatus : x%x", rc, bf_get(lpfc_mqe_status, &mbox->u.mqe)); mempool_free(mbox, phba->mbox_mem_pool); @@ -7348,7 +7421,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba) mboxq->vport = vport; rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); - mp = (struct lpfc_dmabuf *) mboxq->context1; + mp = (struct lpfc_dmabuf *)mboxq->ctx_buf; if (rc == MBX_SUCCESS) { memcpy(&vport->fc_sparam, mp->virt, sizeof(struct serv_parm)); rc = 0; @@ -7360,7 +7433,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba) */ lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); - mboxq->context1 = NULL; + mboxq->ctx_buf = NULL; if (unlikely(rc)) { lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI, "0382 READ_SPARAM command failed " @@ -7635,7 +7708,18 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba) */ spin_lock_irq(&phba->hbalock); phba->link_state = LPFC_LINK_DOWN; + + /* Check if physical ports are trunked */ + if (bf_get(lpfc_conf_trunk_port0, &phba->sli4_hba)) + phba->trunk_link.link0.state = LPFC_LINK_DOWN; + if (bf_get(lpfc_conf_trunk_port1, &phba->sli4_hba)) + phba->trunk_link.link1.state = LPFC_LINK_DOWN; + if (bf_get(lpfc_conf_trunk_port2, &phba->sli4_hba)) + phba->trunk_link.link2.state = LPFC_LINK_DOWN; + if (bf_get(lpfc_conf_trunk_port3, &phba->sli4_hba)) + phba->trunk_link.link3.state = LPFC_LINK_DOWN; spin_unlock_irq(&phba->hbalock); + if (!(phba->hba_flag & HBA_FCOE_MODE) && (phba->hba_flag & LINK_DISABLED)) { lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_SLI, @@ -8119,10 +8203,10 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, } /* Copy the mailbox extension data */ - if (pmbox->in_ext_byte_len && pmbox->context2) { - lpfc_sli_pcimem_bcopy(pmbox->context2, - (uint8_t *)phba->mbox_ext, - pmbox->in_ext_byte_len); + if (pmbox->in_ext_byte_len && pmbox->ctx_buf) { + lpfc_sli_pcimem_bcopy(pmbox->ctx_buf, + (uint8_t *)phba->mbox_ext, + pmbox->in_ext_byte_len); } /* Copy command data to host SLIM area */ lpfc_sli_pcimem_bcopy(mbx, phba->mbox, MAILBOX_CMD_SIZE); @@ -8133,10 +8217,10 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, = MAILBOX_HBA_EXT_OFFSET; /* Copy the mailbox extension data */ - if (pmbox->in_ext_byte_len && pmbox->context2) + if (pmbox->in_ext_byte_len && pmbox->ctx_buf) lpfc_memcpy_to_slim(phba->MBslimaddr + MAILBOX_HBA_EXT_OFFSET, - pmbox->context2, pmbox->in_ext_byte_len); + pmbox->ctx_buf, pmbox->in_ext_byte_len); if (mbx->mbxCommand == MBX_CONFIG_PORT) /* copy command data into host mbox for cmpl */ @@ -8259,9 +8343,9 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, lpfc_sli_pcimem_bcopy(phba->mbox, mbx, MAILBOX_CMD_SIZE); /* Copy the mailbox extension data */ - if (pmbox->out_ext_byte_len && pmbox->context2) { + if (pmbox->out_ext_byte_len && pmbox->ctx_buf) { lpfc_sli_pcimem_bcopy(phba->mbox_ext, - pmbox->context2, + pmbox->ctx_buf, pmbox->out_ext_byte_len); } } else { @@ -8269,8 +8353,9 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, lpfc_memcpy_from_slim(mbx, phba->MBslimaddr, MAILBOX_CMD_SIZE); /* Copy the mailbox extension data */ - if (pmbox->out_ext_byte_len && pmbox->context2) { - lpfc_memcpy_from_slim(pmbox->context2, + if (pmbox->out_ext_byte_len && pmbox->ctx_buf) { + lpfc_memcpy_from_slim( + pmbox->ctx_buf, phba->MBslimaddr + MAILBOX_HBA_EXT_OFFSET, pmbox->out_ext_byte_len); @@ -11265,19 +11350,12 @@ lpfc_sli4_abort_nvme_io(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, /* Complete prepping the abort wqe and issue to the FW. 
*/ abts_wqe = &abtsiocbp->wqe; - bf_set(abort_cmd_ia, &abts_wqe->abort_cmd, 0); - bf_set(abort_cmd_criteria, &abts_wqe->abort_cmd, T_XRI_TAG); - - /* Explicitly set reserved fields to zero.*/ - abts_wqe->abort_cmd.rsrvd4 = 0; - abts_wqe->abort_cmd.rsrvd5 = 0; - /* WQE Common - word 6. Context is XRI tag. Set 0. */ - bf_set(wqe_xri_tag, &abts_wqe->abort_cmd.wqe_com, 0); - bf_set(wqe_ctxt_tag, &abts_wqe->abort_cmd.wqe_com, 0); + /* Clear any stale WQE contents */ + memset(abts_wqe, 0, sizeof(union lpfc_wqe)); + bf_set(abort_cmd_criteria, &abts_wqe->abort_cmd, T_XRI_TAG); /* word 7 */ - bf_set(wqe_ct, &abts_wqe->abort_cmd.wqe_com, 0); bf_set(wqe_cmnd, &abts_wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX); bf_set(wqe_class, &abts_wqe->abort_cmd.wqe_com, cmdiocb->iocb.ulpClass); @@ -11292,7 +11370,6 @@ lpfc_sli4_abort_nvme_io(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, abtsiocbp->iotag); /* word 10 */ - bf_set(wqe_wqid, &abts_wqe->abort_cmd.wqe_com, cmdiocb->hba_wqidx); bf_set(wqe_qosd, &abts_wqe->abort_cmd.wqe_com, 1); bf_set(wqe_lenloc, &abts_wqe->abort_cmd.wqe_com, LPFC_WQE_LENLOC_NONE); @@ -12545,10 +12622,10 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) lpfc_sli_pcimem_bcopy(mbox, pmbox, MAILBOX_CMD_SIZE); if (pmb->out_ext_byte_len && - pmb->context2) + pmb->ctx_buf) lpfc_sli_pcimem_bcopy( phba->mbox_ext, - pmb->context2, + pmb->ctx_buf, pmb->out_ext_byte_len); } if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) { @@ -12563,9 +12640,9 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) if (!pmbox->mbxStatus) { mp = (struct lpfc_dmabuf *) - (pmb->context1); + (pmb->ctx_buf); ndlp = (struct lpfc_nodelist *) - pmb->context2; + pmb->ctx_ndlp; /* Reg_LOGIN of dflt RPI was * successful. new lets get @@ -12578,8 +12655,8 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) pmb); pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi; - pmb->context1 = mp; - pmb->context2 = ndlp; + pmb->ctx_buf = mp; + pmb->ctx_ndlp = ndlp; pmb->vport = vport; rc = lpfc_sli_issue_mbox(phba, pmb, @@ -13185,16 +13262,16 @@ lpfc_sli4_sp_handle_mbox_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe) mcqe_status, pmbox->un.varWords[0], 0); if (mcqe_status == MB_CQE_STATUS_SUCCESS) { - mp = (struct lpfc_dmabuf *)(pmb->context1); - ndlp = (struct lpfc_nodelist *)pmb->context2; + mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); + ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; /* Reg_LOGIN of dflt RPI was successful. Now lets get * RID of the PPI using the same mbox buffer. */ lpfc_unreg_login(phba, vport->vpi, pmbox->un.varWords[0], pmb); pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi; - pmb->context1 = mp; - pmb->context2 = ndlp; + pmb->ctx_buf = mp; + pmb->ctx_ndlp = ndlp; pmb->vport = vport; rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT); if (rc != MBX_BUSY) @@ -13413,6 +13490,8 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba, return workposted; } +#define FC_RCTL_MDS_DIAGS 0xF4 + /** * lpfc_sli4_sp_handle_rcqe - Process a receive-queue completion queue entry * @phba: Pointer to HBA context object. 
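Hoisting FC_RCTL_MDS_DIAGS to file scope (it was previously defined inside lpfc_fc_frame_check(); see the removal further down) lets both unsolicited-receive paths test the R_CTL field before full frame validation. The shared test reduces to this sketch, using only fields visible in these hunks:

/* Sketch: frames whose R_CTL marks them as MDS diagnostics or
 * unsolicited data are looped straight back instead of taking the
 * normal receive path.
 */
static bool frame_is_mds_loopback(const struct fc_frame_header *fc_hdr)
{
	return fc_hdr->fh_r_ctl == FC_RCTL_MDS_DIAGS ||
	       fc_hdr->fh_r_ctl == FC_RCTL_DD_UNSOL_DATA;
}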
@@ -13426,6 +13505,7 @@ static bool lpfc_sli4_sp_handle_rcqe(struct lpfc_hba *phba, struct lpfc_rcqe *rcqe) { bool workposted = false; + struct fc_frame_header *fc_hdr; struct lpfc_queue *hrq = phba->sli4_hba.hdr_rq; struct lpfc_queue *drq = phba->sli4_hba.dat_rq; struct lpfc_nvmet_tgtport *tgtp; @@ -13462,7 +13542,17 @@ lpfc_sli4_sp_handle_rcqe(struct lpfc_hba *phba, struct lpfc_rcqe *rcqe) hrq->RQ_buf_posted--; memcpy(&dma_buf->cq_event.cqe.rcqe_cmpl, rcqe, sizeof(*rcqe)); - /* save off the frame for the word thread to process */ + fc_hdr = (struct fc_frame_header *)dma_buf->hbuf.virt; + + if (fc_hdr->fh_r_ctl == FC_RCTL_MDS_DIAGS || + fc_hdr->fh_r_ctl == FC_RCTL_DD_UNSOL_DATA) { + spin_unlock_irqrestore(&phba->hbalock, iflags); + /* Handle MDS Loopback frames */ + lpfc_sli4_handle_mds_loopback(phba->pport, dma_buf); + break; + } + + /* save off the frame for the work thread to process */ list_add_tail(&dma_buf->cq_event.list, &phba->sli4_hba.sp_queue_event); /* Frame received */ @@ -14501,7 +14591,8 @@ lpfc_sli4_queue_alloc(struct lpfc_hba *phba, uint32_t page_size, hw_page_size))/hw_page_size; /* If needed, Adjust page count to match the max the adapter supports */ - if (queue->page_count > phba->sli4_hba.pc_sli4_params.wqpcnt) + if (phba->sli4_hba.pc_sli4_params.wqpcnt && + (queue->page_count > phba->sli4_hba.pc_sli4_params.wqpcnt)) queue->page_count = phba->sli4_hba.pc_sli4_params.wqpcnt; INIT_LIST_HEAD(&queue->list); @@ -14669,7 +14760,8 @@ lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq, mbox->vport = phba->pport; mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; - mbox->context1 = NULL; + mbox->ctx_buf = NULL; + mbox->ctx_ndlp = NULL; rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL); shdr = (union lpfc_sli4_cfg_shdr *) &eq_delay->header.cfg_shdr; shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response); @@ -14789,7 +14881,8 @@ lpfc_eq_create(struct lpfc_hba *phba, struct lpfc_queue *eq, uint32_t imax) } mbox->vport = phba->pport; mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; - mbox->context1 = NULL; + mbox->ctx_buf = NULL; + mbox->ctx_ndlp = NULL; rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL); shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response); shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response); @@ -16863,8 +16956,6 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr) struct fc_vft_header *fc_vft_hdr; uint32_t *header = (uint32_t *) fc_hdr; -#define FC_RCTL_MDS_DIAGS 0xF4 - switch (fc_hdr->fh_r_ctl) { case FC_RCTL_DD_UNCAT: /* uncategorized information */ case FC_RCTL_DD_SOL_DATA: /* solicited data */ @@ -16903,15 +16994,12 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr) goto drop; } -#define FC_TYPE_VENDOR_UNIQUE 0xFF - switch (fc_hdr->fh_type) { case FC_TYPE_BLS: case FC_TYPE_ELS: case FC_TYPE_FCP: case FC_TYPE_CT: case FC_TYPE_NVME: - case FC_TYPE_VENDOR_UNIQUE: break; case FC_TYPE_IP: case FC_TYPE_ILS: @@ -17741,6 +17829,7 @@ lpfc_sli4_mds_loopback_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, dma_pool_free(phba->lpfc_drb_pool, pcmd->virt, pcmd->phys); kfree(pcmd); lpfc_sli_release_iocbq(phba, cmdiocb); + lpfc_drain_txq(phba); } static void @@ -17754,14 +17843,23 @@ lpfc_sli4_handle_mds_loopback(struct lpfc_vport *vport, struct lpfc_dmabuf *pcmd = NULL; uint32_t frame_len; int rc; + unsigned long iflags; fc_hdr = (struct fc_frame_header *)dmabuf->hbuf.virt; frame_len = bf_get(lpfc_rcqe_length, &dmabuf->cq_event.cqe.rcqe_cmpl); /* Send the received frame back */ iocbq = 
lpfc_sli_get_iocbq(phba); - if (!iocbq) - goto exit; + if (!iocbq) { + /* Queue cq event and wakeup worker thread to process it */ + spin_lock_irqsave(&phba->hbalock, iflags); + list_add_tail(&dmabuf->cq_event.list, + &phba->sli4_hba.sp_queue_event); + phba->hba_flag |= HBA_SP_QUEUE_EVT; + spin_unlock_irqrestore(&phba->hbalock, iflags); + lpfc_worker_wake_up(phba); + return; + } /* Allocate buffer for command payload */ pcmd = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL); @@ -17846,6 +17944,14 @@ lpfc_sli4_handle_received_buffer(struct lpfc_hba *phba, /* Process each received buffer */ fc_hdr = (struct fc_frame_header *)dmabuf->hbuf.virt; + if (fc_hdr->fh_r_ctl == FC_RCTL_MDS_DIAGS || + fc_hdr->fh_r_ctl == FC_RCTL_DD_UNSOL_DATA) { + vport = phba->pport; + /* Handle MDS Loopback frames */ + lpfc_sli4_handle_mds_loopback(vport, dmabuf); + return; + } + /* check to see if this a valid type of frame */ if (lpfc_fc_frame_check(phba, fc_hdr)) { lpfc_in_buf_free(phba, &dmabuf->dbuf); @@ -17860,13 +17966,6 @@ lpfc_sli4_handle_received_buffer(struct lpfc_hba *phba, fcfi = bf_get(lpfc_rcqe_fcf_id, &dmabuf->cq_event.cqe.rcqe_cmpl); - if (fc_hdr->fh_r_ctl == 0xF4 && fc_hdr->fh_type == 0xFF) { - vport = phba->pport; - /* Handle MDS Loopback frames */ - lpfc_sli4_handle_mds_loopback(vport, dmabuf); - return; - } - /* d_id this frame is directed to */ did = sli4_did_from_fc_hdr(fc_hdr); @@ -18207,8 +18306,8 @@ lpfc_sli4_resume_rpi(struct lpfc_nodelist *ndlp, lpfc_resume_rpi(mboxq, ndlp); if (cmpl) { mboxq->mbox_cmpl = cmpl; - mboxq->context1 = arg; - mboxq->context2 = ndlp; + mboxq->ctx_buf = arg; + mboxq->ctx_ndlp = ndlp; } else mboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl; mboxq->vport = ndlp->vport; @@ -18711,15 +18810,8 @@ next_priority: goto initial_priority; lpfc_printf_log(phba, KERN_WARNING, LOG_FIP, "2844 No roundrobin failover FCF available\n"); - if (next_fcf_index >= LPFC_SLI4_FCF_TBL_INDX_MAX) - return LPFC_FCOE_FCF_NEXT_NONE; - else { - lpfc_printf_log(phba, KERN_WARNING, LOG_FIP, - "3063 Only FCF available idx %d, flag %x\n", - next_fcf_index, - phba->fcf.fcf_pri[next_fcf_index].fcf_rec.flag); - return next_fcf_index; - } + + return LPFC_FCOE_FCF_NEXT_NONE; } if (next_fcf_index < LPFC_SLI4_FCF_TBL_INDX_MAX && @@ -19026,7 +19118,7 @@ lpfc_sli4_get_config_region23(struct lpfc_hba *phba, char *rgn23_data) if (lpfc_sli4_dump_cfg_rg23(phba, mboxq)) goto out; mqe = &mboxq->u.mqe; - mp = (struct lpfc_dmabuf *) mboxq->context1; + mp = (struct lpfc_dmabuf *)mboxq->ctx_buf; rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); if (rc) goto out; @@ -19171,11 +19263,11 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list, struct lpfc_mbx_wr_object *wr_object; LPFC_MBOXQ_t *mbox; int rc = 0, i = 0; - uint32_t shdr_status, shdr_add_status; + uint32_t shdr_status, shdr_add_status, shdr_change_status; uint32_t mbox_tmo; - union lpfc_sli4_cfg_shdr *shdr; struct lpfc_dmabuf *dmabuf; uint32_t written = 0; + bool check_change_status = false; mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); if (!mbox) @@ -19203,6 +19295,8 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list, (size - written); written += (size - written); bf_set(lpfc_wr_object_eof, &wr_object->u.request, 1); + bf_set(lpfc_wr_object_eas, &wr_object->u.request, 1); + check_change_status = true; } else { wr_object->u.request.bde[i].tus.f.bdeSize = SLI4_PAGE_SIZE; @@ -19219,9 +19313,39 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list, rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo); } 
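With the final chunk of the image, lpfc_wr_object() now sets the EAS bit next to EOF and records check_change_status, so the completion parsed below can tell the caller how to instantiate the just-written firmware. The mapping that the switch below implements, condensed into a hypothetical helper (message text paraphrased from the hunk; the function name is illustrative only):

/* Illustrative-only decode of wr_object change_status values. */
static const char *fw_change_action(u32 change_status)
{
	switch (change_status) {
	case LPFC_CHANGE_STATUS_PHYS_DEV_RESET:
		return "system reboot";
	case LPFC_CHANGE_STATUS_FW_RESET:
		return "firmware reset";
	case LPFC_CHANGE_STATUS_PORT_MIGRATION:
		return "port migration or PCI reset";
	case LPFC_CHANGE_STATUS_PCI_RESET:
		return "PCI reset";
	default:
		return "none";
	}
}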
/* The IOCTL status is embedded in the mailbox subheader. */ - shdr = (union lpfc_sli4_cfg_shdr *) &wr_object->header.cfg_shdr; - shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response); - shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, &shdr->response); + shdr_status = bf_get(lpfc_mbox_hdr_status, + &wr_object->header.cfg_shdr.response); + shdr_add_status = bf_get(lpfc_mbox_hdr_add_status, + &wr_object->header.cfg_shdr.response); + if (check_change_status) { + shdr_change_status = bf_get(lpfc_wr_object_change_status, + &wr_object->u.response); + switch (shdr_change_status) { + case (LPFC_CHANGE_STATUS_PHYS_DEV_RESET): + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, + "3198 Firmware write complete: System " + "reboot required to instantiate\n"); + break; + case (LPFC_CHANGE_STATUS_FW_RESET): + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, + "3199 Firmware write complete: Firmware" + " reset required to instantiate\n"); + break; + case (LPFC_CHANGE_STATUS_PORT_MIGRATION): + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, + "3200 Firmware write complete: Port " + "Migration or PCI Reset required to " + "instantiate\n"); + break; + case (LPFC_CHANGE_STATUS_PCI_RESET): + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, + "3201 Firmware write complete: PCI " + "Reset required to instantiate\n"); + break; + default: + break; + } + } if (rc != MBX_TIMEOUT) mempool_free(mbox, phba->mbox_mem_pool); if (shdr_status || shdr_add_status || rc) { @@ -19277,7 +19401,7 @@ lpfc_cleanup_pending_mbox(struct lpfc_vport *vport) (mb->u.mb.mbxCommand == MBX_REG_VPI)) mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; if (mb->u.mb.mbxCommand == MBX_REG_LOGIN64) { - act_mbx_ndlp = (struct lpfc_nodelist *)mb->context2; + act_mbx_ndlp = (struct lpfc_nodelist *)mb->ctx_ndlp; /* Put reference count for delayed processing */ act_mbx_ndlp = lpfc_nlp_get(act_mbx_ndlp); /* Unregister the RPI when mailbox complete */ @@ -19302,7 +19426,7 @@ lpfc_cleanup_pending_mbox(struct lpfc_vport *vport) mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; if (mb->u.mb.mbxCommand == MBX_REG_LOGIN64) { - ndlp = (struct lpfc_nodelist *)mb->context2; + ndlp = (struct lpfc_nodelist *)mb->ctx_ndlp; /* Unregister the RPI when mailbox complete */ mb->mbox_flag |= LPFC_MBX_IMED_UNREG; restart_loop = 1; @@ -19322,13 +19446,14 @@ lpfc_cleanup_pending_mbox(struct lpfc_vport *vport) while (!list_empty(&mbox_cmd_list)) { list_remove_head(&mbox_cmd_list, mb, LPFC_MBOXQ_t, list); if (mb->u.mb.mbxCommand == MBX_REG_LOGIN64) { - mp = (struct lpfc_dmabuf *) (mb->context1); + mp = (struct lpfc_dmabuf *)(mb->ctx_buf); if (mp) { __lpfc_mbuf_free(phba, mp->virt, mp->phys); kfree(mp); } - ndlp = (struct lpfc_nodelist *) mb->context2; - mb->context2 = NULL; + mb->ctx_buf = NULL; + ndlp = (struct lpfc_nodelist *)mb->ctx_ndlp; + mb->ctx_ndlp = NULL; if (ndlp) { spin_lock(shost->host_lock); ndlp->nlp_flag &= ~NLP_IGNR_REG_CMPL; diff --git a/drivers/scsi/lpfc/lpfc_sli.h b/drivers/scsi/lpfc/lpfc_sli.h index 34b7ab69b9b4..7abb395bb64a 100644 --- a/drivers/scsi/lpfc/lpfc_sli.h +++ b/drivers/scsi/lpfc/lpfc_sli.h @@ -144,9 +144,9 @@ typedef struct lpfcMboxq { MAILBOX_t mb; /* Mailbox cmd */ struct lpfc_mqe mqe; } u; - struct lpfc_vport *vport;/* virtual port pointer */ - void *context1; /* caller context information */ - void *context2; /* caller context information */ + struct lpfc_vport *vport; /* virtual port pointer */ + void *ctx_ndlp; /* caller ndlp information */ + void *ctx_buf; /* caller buffer information */ void *context3; void (*mbox_cmpl) (struct lpfc_hba *, struct lpfcMboxq *); diff 
--git a/drivers/scsi/lpfc/lpfc_sli4.h b/drivers/scsi/lpfc/lpfc_sli4.h index e76c380e1a84..6b2d2350e2c6 100644 --- a/drivers/scsi/lpfc/lpfc_sli4.h +++ b/drivers/scsi/lpfc/lpfc_sli4.h @@ -279,6 +279,7 @@ struct lpfc_fcf { #define FCF_REDISC_EVT 0x100 /* FCF rediscovery event to worker thread */ #define FCF_REDISC_FOV 0x200 /* Post FCF rediscovery fast failover */ #define FCF_REDISC_PROG (FCF_REDISC_PEND | FCF_REDISC_EVT) + uint16_t fcf_redisc_attempted; uint32_t addr_mode; uint32_t eligible_fcf_cnt; struct lpfc_fcf_rec current_rec; @@ -717,6 +718,19 @@ struct lpfc_sli4_hba { uint16_t num_online_cpu; uint16_t num_present_cpu; uint16_t curr_disp_cpu; + uint32_t conf_trunk; +#define lpfc_conf_trunk_port0_WORD conf_trunk +#define lpfc_conf_trunk_port0_SHIFT 0 +#define lpfc_conf_trunk_port0_MASK 0x1 +#define lpfc_conf_trunk_port1_WORD conf_trunk +#define lpfc_conf_trunk_port1_SHIFT 1 +#define lpfc_conf_trunk_port1_MASK 0x1 +#define lpfc_conf_trunk_port2_WORD conf_trunk +#define lpfc_conf_trunk_port2_SHIFT 2 +#define lpfc_conf_trunk_port2_MASK 0x1 +#define lpfc_conf_trunk_port3_WORD conf_trunk +#define lpfc_conf_trunk_port3_SHIFT 3 +#define lpfc_conf_trunk_port3_MASK 0x1 }; enum lpfc_sge_type { diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h index 5a0d512ff497..3f4398ffb567 100644 --- a/drivers/scsi/lpfc/lpfc_version.h +++ b/drivers/scsi/lpfc/lpfc_version.h @@ -20,7 +20,7 @@ * included with this package. * *******************************************************************/ -#define LPFC_DRIVER_VERSION "12.0.0.7" +#define LPFC_DRIVER_VERSION "12.0.0.10" #define LPFC_DRIVER_NAME "lpfc" /* Used for SLI 2/3 */ diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c index c340e0e47473..102a011ff6d4 100644 --- a/drivers/scsi/lpfc/lpfc_vport.c +++ b/drivers/scsi/lpfc/lpfc_vport.c @@ -138,8 +138,8 @@ lpfc_vport_sparm(struct lpfc_hba *phba, struct lpfc_vport *vport) * Grab buffer pointer and clear context1 so we can use * lpfc_sli_issue_box_wait */ - mp = (struct lpfc_dmabuf *) pmb->context1; - pmb->context1 = NULL; + mp = (struct lpfc_dmabuf *)pmb->ctx_buf; + pmb->ctx_buf = NULL; pmb->vport = vport; rc = lpfc_sli_issue_mbox_wait(phba, pmb, phba->fc_ratov * 2); diff --git a/drivers/scsi/mac53c94.c b/drivers/scsi/mac53c94.c index 177701dfdfcb..c8e6ae98a4a6 100644 --- a/drivers/scsi/mac53c94.c +++ b/drivers/scsi/mac53c94.c @@ -403,7 +403,7 @@ static struct scsi_host_template mac53c94_template = { .can_queue = 1, .this_id = 7, .sg_tablesize = SG_ALL, - .use_clustering = DISABLE_CLUSTERING, + .max_segment_size = 65535, }; static int mac53c94_probe(struct macio_dev *mdev, const struct of_device_id *match) diff --git a/drivers/scsi/mac_esp.c b/drivers/scsi/mac_esp.c index 764d320bb2ca..ee741207fd4e 100644 --- a/drivers/scsi/mac_esp.c +++ b/drivers/scsi/mac_esp.c @@ -307,7 +307,7 @@ static int esp_mac_probe(struct platform_device *dev) goto fail; host->max_id = 8; - host->use_clustering = DISABLE_CLUSTERING; + host->dma_boundary = PAGE_SIZE - 1; esp = shost_priv(host); esp->host = host; diff --git a/drivers/scsi/mac_scsi.c b/drivers/scsi/mac_scsi.c index dd6057359d7c..8b4b5b1a13d7 100644 --- a/drivers/scsi/mac_scsi.c +++ b/drivers/scsi/mac_scsi.c @@ -333,7 +333,7 @@ static struct scsi_host_template mac_scsi_template = { .this_id = 7, .sg_tablesize = 1, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .cmd_size = NCR5380_CMD_SIZE, .max_sectors = 128, }; diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c 
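The mac53c94/mac_esp/mac_scsi hunks above follow the tree-wide retirement of use_clustering: instead of the blunt DISABLE_CLUSTERING flag, each host now states its actual merging constraint, while ENABLE_CLUSTERING users (as in the megaraid templates below) simply drop the line, since segment merging is the new default. In scsi_host_template terms the translation looks like this sketch (the template name is illustrative; values taken from the hunks above):

static struct scsi_host_template example_template = {
	/* formerly: .use_clustering = DISABLE_CLUSTERING */
	.dma_boundary	  = PAGE_SIZE - 1,  /* no segment crosses a page */
	/* or, where a length cap was the real requirement: */
	.max_segment_size = 65535,
};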
index 8c7154143a4e..4862f65ec3e8 100644 --- a/drivers/scsi/megaraid.c +++ b/drivers/scsi/megaraid.c @@ -4148,7 +4148,6 @@ static struct scsi_host_template megaraid_template = { .this_id = DEFAULT_INITIATOR_ID, .sg_tablesize = MAX_SGLIST, .cmd_per_lun = DEF_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .eh_abort_handler = megaraid_abort, .eh_device_reset_handler = megaraid_reset, .eh_bus_reset_handler = megaraid_reset, diff --git a/drivers/scsi/megaraid/megaraid_mbox.c b/drivers/scsi/megaraid/megaraid_mbox.c index 3b7abe5ca7f5..e836392b75e8 100644 --- a/drivers/scsi/megaraid/megaraid_mbox.c +++ b/drivers/scsi/megaraid/megaraid_mbox.c @@ -336,7 +336,6 @@ static struct scsi_host_template megaraid_template_g = { .eh_abort_handler = megaraid_abort_handler, .eh_host_reset_handler = megaraid_reset_handler, .change_queue_depth = scsi_change_queue_depth, - .use_clustering = ENABLE_CLUSTERING, .no_write_same = 1, .sdev_attrs = megaraid_sdev_attrs, .shost_attrs = megaraid_shost_attrs, @@ -1243,8 +1242,7 @@ megaraid_mbox_teardown_dma_pools(adapter_t *adapter) dma_pool_free(raid_dev->sg_pool_handle, sg_pci_blk[i].vaddr, sg_pci_blk[i].dma_addr); } - if (raid_dev->sg_pool_handle) - dma_pool_destroy(raid_dev->sg_pool_handle); + dma_pool_destroy(raid_dev->sg_pool_handle); epthru_pci_blk = raid_dev->epthru_pool; @@ -1252,8 +1250,7 @@ megaraid_mbox_teardown_dma_pools(adapter_t *adapter) dma_pool_free(raid_dev->epthru_pool_handle, epthru_pci_blk[i].vaddr, epthru_pci_blk[i].dma_addr); } - if (raid_dev->epthru_pool_handle) - dma_pool_destroy(raid_dev->epthru_pool_handle); + dma_pool_destroy(raid_dev->epthru_pool_handle); mbox_pci_blk = raid_dev->mbox_pool; @@ -1261,8 +1258,7 @@ megaraid_mbox_teardown_dma_pools(adapter_t *adapter) dma_pool_free(raid_dev->mbox_pool_handle, mbox_pci_blk[i].vaddr, mbox_pci_blk[i].dma_addr); } - if (raid_dev->mbox_pool_handle) - dma_pool_destroy(raid_dev->mbox_pool_handle); + dma_pool_destroy(raid_dev->mbox_pool_handle); return; } diff --git a/drivers/scsi/megaraid/megaraid_mm.c b/drivers/scsi/megaraid/megaraid_mm.c index 8428247015db..3ce837e4b24c 100644 --- a/drivers/scsi/megaraid/megaraid_mm.c +++ b/drivers/scsi/megaraid/megaraid_mm.c @@ -1017,8 +1017,7 @@ memalloc_error: kfree(adapter->kioc_list); kfree(adapter->mbox_list); - if (adapter->pthru_dma_pool) - dma_pool_destroy(adapter->pthru_dma_pool); + dma_pool_destroy(adapter->pthru_dma_pool); kfree(adapter); diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h index 67d356d84717..16536c41f0c5 100644 --- a/drivers/scsi/megaraid/megaraid_sas.h +++ b/drivers/scsi/megaraid/megaraid_sas.h @@ -2,7 +2,8 @@ * Linux MegaRAID driver for SAS based RAID controllers * * Copyright (c) 2003-2013 LSI Corporation - * Copyright (c) 2013-2014 Avago Technologies + * Copyright (c) 2013-2016 Avago Technologies + * Copyright (c) 2016-2018 Broadcom Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License @@ -19,14 +20,11 @@ * * FILE: megaraid_sas.h * - * Authors: Avago Technologies - * Kashyap Desai - * Sumit Saxena + * Authors: Broadcom Inc. 
+ * Kashyap Desai + * Sumit Saxena * - * Send feedback to: megaraidlinux.pdl@avagotech.com - * - * Mail to: Avago Technologies, 350 West Trimble Road, Building 90, - * San Jose, California 95131 + * Send feedback to: megaraidlinux.pdl@broadcom.com */ #ifndef LSI_MEGARAID_SAS_H @@ -35,8 +33,8 @@ /* * MegaRAID SAS Driver meta data */ -#define MEGASAS_VERSION "07.706.03.00-rc1" -#define MEGASAS_RELDATE "May 21, 2018" +#define MEGASAS_VERSION "07.707.50.00-rc1" +#define MEGASAS_RELDATE "December 18, 2018" /* * Device IDs @@ -62,6 +60,10 @@ #define PCI_DEVICE_ID_LSI_TOMCAT 0x0017 #define PCI_DEVICE_ID_LSI_VENTURA_4PORT 0x001B #define PCI_DEVICE_ID_LSI_CRUSADER_4PORT 0x001C +#define PCI_DEVICE_ID_LSI_AERO_10E1 0x10e1 +#define PCI_DEVICE_ID_LSI_AERO_10E2 0x10e2 +#define PCI_DEVICE_ID_LSI_AERO_10E5 0x10e5 +#define PCI_DEVICE_ID_LSI_AERO_10E6 0x10e6 /* * Intel HBA SSDIDs @@ -142,6 +144,7 @@ * CLR_HANDSHAKE: FW is waiting for HANDSHAKE from BIOS or Driver * HOTPLUG : Resume from Hotplug * MFI_STOP_ADP : Send signal to FW to stop processing + * MFI_ADP_TRIGGER_SNAP_DUMP: Inform firmware to initiate snap dump */ #define WRITE_SEQUENCE_OFFSET (0x0000000FC) /* I20 */ #define HOST_DIAGNOSTIC_OFFSET (0x000000F8) /* I20 */ @@ -158,6 +161,7 @@ #define MFI_RESET_FLAGS MFI_INIT_READY| \ MFI_INIT_MFIMODE| \ MFI_INIT_ABORT +#define MFI_ADP_TRIGGER_SNAP_DUMP 0x00000100 #define MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE (0x01) /* @@ -860,8 +864,22 @@ struct megasas_ctrl_prop { u32 reserved:18; #endif } OnOffProperties; - u8 autoSnapVDSpace; - u8 viewSpace; + + union { + u8 autoSnapVDSpace; + u8 viewSpace; + struct { +#if defined(__BIG_ENDIAN_BITFIELD) + u16 reserved2:11; + u16 enable_snap_dump:1; + u16 reserved1:4; +#else + u16 reserved1:4; + u16 enable_snap_dump:1; + u16 reserved2:11; +#endif + } on_off_properties2; + }; __le16 spinDownTime; u8 reserved[24]; } __packed; @@ -1485,7 +1503,6 @@ enum FW_BOOT_CONTEXT { #define MEGASAS_IOCTL_CMD 0 #define MEGASAS_DEFAULT_CMD_TIMEOUT 90 #define MEGASAS_THROTTLE_QUEUE_DEPTH 16 -#define MEGASAS_BLOCKED_CMD_TIMEOUT 60 #define MEGASAS_DEFAULT_TM_TIMEOUT 50 /* * FW reports the maximum of number of commands that it can accept (maximum @@ -1544,11 +1561,16 @@ enum FW_BOOT_CONTEXT { #define MR_CAN_HANDLE_64_BIT_DMA_OFFSET (1 << 25) +#define MEGASAS_WATCHDOG_THREAD_INTERVAL 1000 +#define MEGASAS_WAIT_FOR_NEXT_DMA_MSECS 20 +#define MEGASAS_WATCHDOG_WAIT_COUNT 50 + enum MR_ADAPTER_TYPE { MFI_SERIES = 1, THUNDERBOLT_SERIES = 2, INVADER_SERIES = 3, VENTURA_SERIES = 4, + AERO_SERIES = 5, }; /* @@ -1588,11 +1610,10 @@ struct megasas_register_set { u32 reserved_3[3]; /*00A4h*/ - u32 outbound_scratch_pad ; /*00B0h*/ - u32 outbound_scratch_pad_2; /*00B4h*/ - u32 outbound_scratch_pad_3; /*00B8h*/ - u32 outbound_scratch_pad_4; /*00BCh*/ - + u32 outbound_scratch_pad_0; /*00B0h*/ + u32 outbound_scratch_pad_1; /*00B4h*/ + u32 outbound_scratch_pad_2; /*00B8h*/ + u32 outbound_scratch_pad_3; /*00BCh*/ u32 inbound_low_queue_port ; /*00C0h*/ @@ -2181,6 +2202,9 @@ struct megasas_instance { struct MR_LD_TARGETID_LIST *ld_targetid_list_buf; dma_addr_t ld_targetid_list_buf_h; + struct MR_SNAPDUMP_PROPERTIES *snapdump_prop; + dma_addr_t snapdump_prop_h; + void *crash_buf[MAX_CRASH_DUMP_SIZE]; unsigned int fw_crash_buffer_size; unsigned int fw_crash_state; @@ -2250,7 +2274,9 @@ struct megasas_instance { struct megasas_instance_template *instancet; struct tasklet_struct isr_tasklet; struct work_struct work_init; - struct work_struct crash_init; + struct delayed_work fw_fault_work; + struct workqueue_struct 
*fw_fault_work_q; + char fault_handler_work_q_name[48]; u8 flag; u8 unload; @@ -2310,6 +2336,7 @@ struct megasas_instance { bool support_nvme_passthru; u8 task_abort_tmo; u8 max_reset_tmo; + u8 snapdump_wait_time; }; struct MR_LD_VF_MAP { u32 size; @@ -2386,9 +2413,9 @@ struct megasas_instance_template { void (*enable_intr)(struct megasas_instance *); void (*disable_intr)(struct megasas_instance *); - int (*clear_intr)(struct megasas_register_set __iomem *); + int (*clear_intr)(struct megasas_instance *); - u32 (*read_fw_status_reg)(struct megasas_register_set __iomem *); + u32 (*read_fw_status_reg)(struct megasas_instance *); int (*adp_reset)(struct megasas_instance *, \ struct megasas_register_set __iomem *); int (*check_reset)(struct megasas_instance *, \ @@ -2535,11 +2562,11 @@ void megasas_set_dynamic_target_properties(struct scsi_device *sdev, bool is_target_prop); int megasas_get_target_prop(struct megasas_instance *instance, struct scsi_device *sdev); +void megasas_get_snapdump_properties(struct megasas_instance *instance); int megasas_set_crash_dump_params(struct megasas_instance *instance, u8 crash_buf_state); void megasas_free_host_crash_buffer(struct megasas_instance *instance); -void megasas_fusion_crash_dump_wq(struct work_struct *work); void megasas_return_cmd_fusion(struct megasas_instance *instance, struct megasas_cmd_fusion *cmd); @@ -2560,6 +2587,9 @@ int megasas_reset_target_fusion(struct scsi_cmnd *scmd); u32 mega_mod64(u64 dividend, u32 divisor); int megasas_alloc_fusion_context(struct megasas_instance *instance); void megasas_free_fusion_context(struct megasas_instance *instance); +int megasas_fusion_start_watchdog(struct megasas_instance *instance); +void megasas_fusion_stop_watchdog(struct megasas_instance *instance); + void megasas_set_dma_settings(struct megasas_instance *instance, struct megasas_dcmd_frame *dcmd, dma_addr_t dma_addr, u32 dma_len); diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c index 9b90c716f06d..f7bdd783360a 100644 --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c @@ -2,7 +2,8 @@ * Linux MegaRAID driver for SAS based RAID controllers * * Copyright (c) 2003-2013 LSI Corporation - * Copyright (c) 2013-2014 Avago Technologies + * Copyright (c) 2013-2016 Avago Technologies + * Copyright (c) 2016-2018 Broadcom Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License @@ -17,18 +18,15 @@ * You should have received a copy of the GNU General Public License * along with this program. If not, see . * - * Authors: Avago Technologies + * Authors: Broadcom Inc. * Sreenivas Bagalkote * Sumant Patro * Bo Yang * Adam Radford - * Kashyap Desai - * Sumit Saxena + * Kashyap Desai + * Sumit Saxena * - * Send feedback to: megaraidlinux.pdl@avagotech.com - * - * Mail to: Avago Technologies, 350 West Trimble Road, Building 90, - * San Jose, California 95131 + * Send feedback to: megaraidlinux.pdl@broadcom.com */ #include @@ -87,8 +85,7 @@ MODULE_PARM_DESC(throttlequeuedepth, unsigned int resetwaittime = MEGASAS_RESET_WAIT_TIME; module_param(resetwaittime, int, S_IRUGO); -MODULE_PARM_DESC(resetwaittime, "Wait time in seconds after I/O timeout " - "before resetting adapter. Default: 180"); +MODULE_PARM_DESC(resetwaittime, "Wait time in (1-180s) after I/O timeout before resetting adapter. 
Default: 180s"); int smp_affinity_enable = 1; module_param(smp_affinity_enable, int, S_IRUGO); @@ -96,7 +93,7 @@ MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Defau int rdpq_enable = 1; module_param(rdpq_enable, int, S_IRUGO); -MODULE_PARM_DESC(rdpq_enable, " Allocate reply queue in chunks for large queue depth enable/disable Default: disable(0)"); +MODULE_PARM_DESC(rdpq_enable, "Allocate reply queue in chunks for large queue depth enable/disable Default: enable(1)"); unsigned int dual_qdepth_disable; module_param(dual_qdepth_disable, int, S_IRUGO); @@ -108,8 +105,8 @@ MODULE_PARM_DESC(scmd_timeout, "scsi command timeout (10-90s), default 90s. See MODULE_LICENSE("GPL"); MODULE_VERSION(MEGASAS_VERSION); -MODULE_AUTHOR("megaraidlinux.pdl@avagotech.com"); -MODULE_DESCRIPTION("Avago MegaRAID SAS Driver"); +MODULE_AUTHOR("megaraidlinux.pdl@broadcom.com"); +MODULE_DESCRIPTION("Broadcom MegaRAID SAS Driver"); int megasas_transition_to_ready(struct megasas_instance *instance, int ocr); static int megasas_get_pd_list(struct megasas_instance *instance); @@ -165,6 +162,10 @@ static struct pci_device_id megasas_pci_table[] = { {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_TOMCAT)}, {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_VENTURA_4PORT)}, {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_CRUSADER_4PORT)}, + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E1)}, + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E2)}, + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E5)}, + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E6)}, {} }; @@ -189,7 +190,7 @@ void megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd, u8 alt_status); static u32 -megasas_read_fw_status_reg_gen2(struct megasas_register_set __iomem *regs); +megasas_read_fw_status_reg_gen2(struct megasas_instance *instance); static int megasas_adp_reset_gen2(struct megasas_instance *instance, struct megasas_register_set __iomem *reg_set); @@ -219,6 +220,28 @@ megasas_free_ctrl_dma_buffers(struct megasas_instance *instance); static inline void megasas_init_ctrl_params(struct megasas_instance *instance); +u32 megasas_readl(struct megasas_instance *instance, + const volatile void __iomem *addr) +{ + u32 i = 0, ret_val; + /* + * Due to a HW errata in Aero controllers, reads to certain + * Fusion registers could intermittently return all zeroes. + * This behavior is transient in nature and subsequent reads will + * return valid value. As a workaround in driver, retry readl for + * upto three times until a non-zero value is read. 
+ */ + if (instance->adapter_type == AERO_SERIES) { + do { + ret_val = readl(addr); + i++; + } while (ret_val == 0 && i < 3); + return ret_val; + } else { + return readl(addr); + } +} + /** * megasas_set_dma_settings - Populate DMA address, length and flags for DCMDs * @instance: Adapter soft state @@ -419,19 +442,21 @@ megasas_disable_intr_xscale(struct megasas_instance *instance) * @regs: MFI register set */ static u32 -megasas_read_fw_status_reg_xscale(struct megasas_register_set __iomem * regs) +megasas_read_fw_status_reg_xscale(struct megasas_instance *instance) { - return readl(&(regs)->outbound_msg_0); + return readl(&instance->reg_set->outbound_msg_0); } /** * megasas_clear_interrupt_xscale - Check & clear interrupt * @regs: MFI register set */ static int -megasas_clear_intr_xscale(struct megasas_register_set __iomem * regs) +megasas_clear_intr_xscale(struct megasas_instance *instance) { u32 status; u32 mfiStatus = 0; + struct megasas_register_set __iomem *regs; + regs = instance->reg_set; /* * Check if it is our interrupt @@ -596,9 +621,9 @@ megasas_disable_intr_ppc(struct megasas_instance *instance) * @regs: MFI register set */ static u32 -megasas_read_fw_status_reg_ppc(struct megasas_register_set __iomem * regs) +megasas_read_fw_status_reg_ppc(struct megasas_instance *instance) { - return readl(&(regs)->outbound_scratch_pad); + return readl(&instance->reg_set->outbound_scratch_pad_0); } /** @@ -606,9 +631,11 @@ megasas_read_fw_status_reg_ppc(struct megasas_register_set __iomem * regs) * @regs: MFI register set */ static int -megasas_clear_intr_ppc(struct megasas_register_set __iomem * regs) +megasas_clear_intr_ppc(struct megasas_instance *instance) { u32 status, mfiStatus = 0; + struct megasas_register_set __iomem *regs; + regs = instance->reg_set; /* * Check if it is our interrupt @@ -721,9 +748,9 @@ megasas_disable_intr_skinny(struct megasas_instance *instance) * @regs: MFI register set */ static u32 -megasas_read_fw_status_reg_skinny(struct megasas_register_set __iomem *regs) +megasas_read_fw_status_reg_skinny(struct megasas_instance *instance) { - return readl(&(regs)->outbound_scratch_pad); + return readl(&instance->reg_set->outbound_scratch_pad_0); } /** @@ -731,10 +758,12 @@ megasas_read_fw_status_reg_skinny(struct megasas_register_set __iomem *regs) * @regs: MFI register set */ static int -megasas_clear_intr_skinny(struct megasas_register_set __iomem *regs) +megasas_clear_intr_skinny(struct megasas_instance *instance) { u32 status; u32 mfiStatus = 0; + struct megasas_register_set __iomem *regs; + regs = instance->reg_set; /* * Check if it is our interrupt @@ -748,7 +777,7 @@ megasas_clear_intr_skinny(struct megasas_register_set __iomem *regs) /* * Check if it is our interrupt */ - if ((megasas_read_fw_status_reg_skinny(regs) & MFI_STATE_MASK) == + if ((megasas_read_fw_status_reg_skinny(instance) & MFI_STATE_MASK) == MFI_STATE_FAULT) { mfiStatus = MFI_INTR_FLAG_FIRMWARE_STATE_CHANGE; } else @@ -866,9 +895,9 @@ megasas_disable_intr_gen2(struct megasas_instance *instance) * @regs: MFI register set */ static u32 -megasas_read_fw_status_reg_gen2(struct megasas_register_set __iomem *regs) +megasas_read_fw_status_reg_gen2(struct megasas_instance *instance) { - return readl(&(regs)->outbound_scratch_pad); + return readl(&instance->reg_set->outbound_scratch_pad_0); } /** @@ -876,10 +905,12 @@ megasas_read_fw_status_reg_gen2(struct megasas_register_set __iomem *regs) * @regs: MFI register set */ static int -megasas_clear_intr_gen2(struct megasas_register_set __iomem *regs) 
+megasas_clear_intr_gen2(struct megasas_instance *instance) { u32 status; u32 mfiStatus = 0; + struct megasas_register_set __iomem *regs; + regs = instance->reg_set; /* * Check if it is our interrupt @@ -2079,9 +2110,11 @@ void megaraid_sas_kill_hba(struct megasas_instance *instance) if ((instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0073SKINNY) || (instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0071SKINNY) || (instance->adapter_type != MFI_SERIES)) { - writel(MFI_STOP_ADP, &instance->reg_set->doorbell); - /* Flush */ - readl(&instance->reg_set->doorbell); + if (!instance->requestorId) { + writel(MFI_STOP_ADP, &instance->reg_set->doorbell); + /* Flush */ + readl(&instance->reg_set->doorbell); + } if (instance->requestorId && instance->peerIsPresent) memset(instance->ld_ids, 0xff, MEGASAS_MAX_LD_IDS); } else { @@ -2682,7 +2715,7 @@ static int megasas_wait_for_outstanding(struct megasas_instance *instance) i = 0; outstanding = atomic_read(&instance->fw_outstanding); - fw_state = instance->instancet->read_fw_status_reg(instance->reg_set) & MFI_STATE_MASK; + fw_state = instance->instancet->read_fw_status_reg(instance) & MFI_STATE_MASK; if ((!outstanding && (fw_state == MFI_STATE_OPERATIONAL))) goto no_outstanding; @@ -2711,7 +2744,7 @@ static int megasas_wait_for_outstanding(struct megasas_instance *instance) outstanding = atomic_read(&instance->fw_outstanding); - fw_state = instance->instancet->read_fw_status_reg(instance->reg_set) & MFI_STATE_MASK; + fw_state = instance->instancet->read_fw_status_reg(instance) & MFI_STATE_MASK; if ((!outstanding && (fw_state == MFI_STATE_OPERATIONAL))) goto no_outstanding; } @@ -3186,7 +3219,6 @@ static struct scsi_host_template megasas_template = { .eh_timed_out = megasas_reset_timer, .shost_attrs = megaraid_host_attrs, .bios_param = megasas_bios_param, - .use_clustering = ENABLE_CLUSTERING, .change_queue_depth = scsi_change_queue_depth, .no_write_same = 1, }; @@ -3278,6 +3310,7 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd, megasas_complete_int_cmd(instance, cmd); break; } + /* fall through */ case MFI_CMD_LD_READ: case MFI_CMD_LD_WRITE: @@ -3665,9 +3698,8 @@ megasas_deplete_reply_queue(struct megasas_instance *instance, return IRQ_HANDLED; } - if ((mfiStatus = instance->instancet->clear_intr( - instance->reg_set) - ) == 0) { + mfiStatus = instance->instancet->clear_intr(instance); + if (mfiStatus == 0) { /* Hardware may not set outbound_intr_status in MSI-X mode */ if (!instance->msix_vectors) return IRQ_NONE; @@ -3677,7 +3709,7 @@ megasas_deplete_reply_queue(struct megasas_instance *instance, if ((mfiStatus & MFI_INTR_FLAG_FIRMWARE_STATE_CHANGE)) { fw_state = instance->instancet->read_fw_status_reg( - instance->reg_set) & MFI_STATE_MASK; + instance) & MFI_STATE_MASK; if (fw_state != MFI_STATE_FAULT) { dev_notice(&instance->pdev->dev, "fw state:%x\n", @@ -3760,7 +3792,7 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr) u32 cur_state; u32 abs_state, curr_abs_state; - abs_state = instance->instancet->read_fw_status_reg(instance->reg_set); + abs_state = instance->instancet->read_fw_status_reg(instance); fw_state = abs_state & MFI_STATE_MASK; if (fw_state != MFI_STATE_READY) @@ -3832,7 +3864,8 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr) if (instance->adapter_type != MFI_SERIES) { for (i = 0; i < (10 * 1000); i += 20) { - if (readl( + if (megasas_readl( + instance, &instance-> reg_set-> doorbell) & 1) @@ -3891,12 +3924,12 @@ megasas_transition_to_ready(struct 
megasas_instance *instance, int ocr) /* * The cur_state should not last for more than max_wait secs */ - for (i = 0; i < (max_wait * 1000); i++) { + for (i = 0; i < max_wait; i++) { curr_abs_state = instance->instancet-> - read_fw_status_reg(instance->reg_set); + read_fw_status_reg(instance); if (abs_state == curr_abs_state) { - msleep(1); + msleep(1000); } else break; } @@ -4634,9 +4667,9 @@ static void megasas_update_ext_vd_details(struct megasas_instance *instance) } dev_info(&instance->pdev->dev, - "firmware type\t: %s\n", - instance->supportmax256vd ? "Extended VD(240 VD)firmware" : - "Legacy(64 VD) firmware"); + "FW provided supportMaxExtLDs: %d\tmax_lds: %d\n", + instance->ctrl_info_buf->adapterOperations3.supportMaxExtLDs ? 1 : 0, + instance->ctrl_info_buf->max_lds); if (instance->max_raid_mapsize) { ventura_map_sz = instance->max_raid_mapsize * @@ -4661,6 +4694,87 @@ static void megasas_update_ext_vd_details(struct megasas_instance *instance) fusion->drv_map_sz = sizeof(struct MR_DRV_RAID_MAP_ALL); } +/* + * dcmd.opcode - MR_DCMD_CTRL_SNAPDUMP_GET_PROPERTIES + * dcmd.hdr.length - number of bytes to read + * dcmd.sge - Ptr to MR_SNAPDUMP_PROPERTIES + * Desc: Fill in snapdump properties + * Status: MFI_STAT_OK- Command successful + */ +void megasas_get_snapdump_properties(struct megasas_instance *instance) +{ + int ret = 0; + struct megasas_cmd *cmd; + struct megasas_dcmd_frame *dcmd; + struct MR_SNAPDUMP_PROPERTIES *ci; + dma_addr_t ci_h = 0; + + ci = instance->snapdump_prop; + ci_h = instance->snapdump_prop_h; + + if (!ci) + return; + + cmd = megasas_get_cmd(instance); + + if (!cmd) { + dev_dbg(&instance->pdev->dev, "Failed to get a free cmd\n"); + return; + } + + dcmd = &cmd->frame->dcmd; + + memset(ci, 0, sizeof(*ci)); + memset(dcmd->mbox.b, 0, MFI_MBOX_SIZE); + + dcmd->cmd = MFI_CMD_DCMD; + dcmd->cmd_status = MFI_STAT_INVALID_STATUS; + dcmd->sge_count = 1; + dcmd->flags = MFI_FRAME_DIR_READ; + dcmd->timeout = 0; + dcmd->pad_0 = 0; + dcmd->data_xfer_len = cpu_to_le32(sizeof(struct MR_SNAPDUMP_PROPERTIES)); + dcmd->opcode = cpu_to_le32(MR_DCMD_CTRL_SNAPDUMP_GET_PROPERTIES); + + megasas_set_dma_settings(instance, dcmd, ci_h, + sizeof(struct MR_SNAPDUMP_PROPERTIES)); + + if (!instance->mask_interrupts) { + ret = megasas_issue_blocked_cmd(instance, cmd, + MFI_IO_TIMEOUT_SECS); + } else { + ret = megasas_issue_polled(instance, cmd); + cmd->flags |= DRV_DCMD_SKIP_REFIRE; + } + + switch (ret) { + case DCMD_SUCCESS: + instance->snapdump_wait_time = + min_t(u8, ci->trigger_min_num_sec_before_ocr, + MEGASAS_MAX_SNAP_DUMP_WAIT_TIME); + break; + + case DCMD_TIMEOUT: + switch (dcmd_timeout_ocr_possible(instance)) { + case INITIATE_OCR: + cmd->flags |= DRV_DCMD_SKIP_REFIRE; + megasas_reset_fusion(instance->host, + MFI_IO_TIMEOUT_OCR); + break; + case KILL_ADAPTER: + megaraid_sas_kill_hba(instance); + break; + case IGNORE_TIMEOUT: + dev_info(&instance->pdev->dev, "Ignore DCMD timeout: %s %d\n", + __func__, __LINE__); + break; + } + } + + if (ret != DCMD_TIMEOUT) + megasas_return_cmd(instance, cmd); +} + /** * megasas_get_controller_info - Returns FW's controller structure * @instance: Adapter soft state @@ -4720,6 +4834,7 @@ megasas_get_ctrl_info(struct megasas_instance *instance) * CPU endianness format. 
*/ le32_to_cpus((u32 *)&ci->properties.OnOffProperties); + le16_to_cpus((u16 *)&ci->properties.on_off_properties2); le32_to_cpus((u32 *)&ci->adapterOperations2); le32_to_cpus((u32 *)&ci->adapterOperations3); le16_to_cpus((u16 *)&ci->adapter_operations4); @@ -4741,6 +4856,11 @@ megasas_get_ctrl_info(struct megasas_instance *instance) /*Check whether controller is iMR or MR */ instance->is_imr = (ci->memory_size ? 0 : 1); + + instance->snapdump_wait_time = + (ci->properties.on_off_properties2.enable_snap_dump ? + MEGASAS_DEFAULT_SNAP_DUMP_WAIT_TIME : 0); + dev_info(&instance->pdev->dev, "controller type\t: %s(%dMB)\n", instance->is_imr ? "iMR" : "MR", @@ -4942,16 +5062,13 @@ fail_fw_init: static u32 megasas_init_adapter_mfi(struct megasas_instance *instance) { - struct megasas_register_set __iomem *reg_set; u32 context_sz; u32 reply_q_sz; - reg_set = instance->reg_set; - /* * Get various operational parameters from status register */ - instance->max_fw_cmds = instance->instancet->read_fw_status_reg(reg_set) & 0x00FFFF; + instance->max_fw_cmds = instance->instancet->read_fw_status_reg(instance) & 0x00FFFF; /* * Reduce the max supported cmds by 1. This is to ensure that the * reply_q_sz (1 more than the max cmd that driver may send) @@ -4959,7 +5076,7 @@ megasas_init_adapter_mfi(struct megasas_instance *instance) */ instance->max_fw_cmds = instance->max_fw_cmds-1; instance->max_mfi_cmds = instance->max_fw_cmds; - instance->max_num_sge = (instance->instancet->read_fw_status_reg(reg_set) & 0xFF0000) >> + instance->max_num_sge = (instance->instancet->read_fw_status_reg(instance) & 0xFF0000) >> 0x10; /* * For MFI skinny adapters, MEGASAS_SKINNY_INT_CMDS commands @@ -5015,7 +5132,7 @@ megasas_init_adapter_mfi(struct megasas_instance *instance) instance->fw_support_ieee = 0; instance->fw_support_ieee = - (instance->instancet->read_fw_status_reg(reg_set) & + (instance->instancet->read_fw_status_reg(instance) & 0x04000000); dev_notice(&instance->pdev->dev, "megasas_init_mfi: fw_support_ieee=%d", @@ -5213,14 +5330,14 @@ static int megasas_init_fw(struct megasas_instance *instance) { u32 max_sectors_1; u32 max_sectors_2, tmp_sectors, msix_enable; - u32 scratch_pad_2, scratch_pad_3, scratch_pad_4; + u32 scratch_pad_1, scratch_pad_2, scratch_pad_3, status_reg; resource_size_t base_addr; - struct megasas_register_set __iomem *reg_set; struct megasas_ctrl_info *ctrl_info = NULL; unsigned long bar_list; int i, j, loop, fw_msix_count = 0; struct IOV_111 *iovPtr; struct fusion_context *fusion; + bool do_adp_reset = true; fusion = instance->ctrl_context; @@ -5241,8 +5358,6 @@ static int megasas_init_fw(struct megasas_instance *instance) goto fail_ioremap; } - reg_set = instance->reg_set; - if (instance->adapter_type != MFI_SERIES) instance->instancet = &megasas_instance_template_fusion; else { @@ -5269,19 +5384,29 @@ static int megasas_init_fw(struct megasas_instance *instance) } if (megasas_transition_to_ready(instance, 0)) { - atomic_set(&instance->fw_reset_no_pci_access, 1); - instance->instancet->adp_reset - (instance, instance->reg_set); - atomic_set(&instance->fw_reset_no_pci_access, 0); - dev_info(&instance->pdev->dev, - "FW restarted successfully from %s!\n", - __func__); + if (instance->adapter_type >= INVADER_SERIES) { + status_reg = instance->instancet->read_fw_status_reg( + instance); + do_adp_reset = status_reg & MFI_RESET_ADAPTER; + } - /*waitting for about 30 second before retry*/ - ssleep(30); + if (do_adp_reset) { + atomic_set(&instance->fw_reset_no_pci_access, 1); + 
instance->instancet->adp_reset
+ (instance, instance->reg_set);
+ atomic_set(&instance->fw_reset_no_pci_access, 0);
+ dev_info(&instance->pdev->dev,
+ "FW restarted successfully from %s!\n",
+ __func__);
+
+ /*waiting for about 30 seconds before retry*/
+ ssleep(30);
- if (megasas_transition_to_ready(instance, 0))
+ if (megasas_transition_to_ready(instance, 0))
+ goto fail_ready_state;
+ } else {
goto fail_ready_state;
+ }
}
megasas_init_ctrl_params(instance);
@@ -5297,38 +5422,57 @@ static int megasas_init_fw(struct megasas_instance *instance)
fusion = instance->ctrl_context;
- if (instance->adapter_type == VENTURA_SERIES) {
- scratch_pad_3 =
- readl(&instance->reg_set->outbound_scratch_pad_3);
- instance->max_raid_mapsize = ((scratch_pad_3 >>
+ if (instance->adapter_type >= VENTURA_SERIES) {
+ scratch_pad_2 =
+ megasas_readl(instance,
+ &instance->reg_set->outbound_scratch_pad_2);
+ instance->max_raid_mapsize = ((scratch_pad_2 >>
MR_MAX_RAID_MAP_SIZE_OFFSET_SHIFT) &
MR_MAX_RAID_MAP_SIZE_MASK);
}
/* Check if MSI-X is supported while in ready state */
- msix_enable = (instance->instancet->read_fw_status_reg(reg_set) &
+ msix_enable = (instance->instancet->read_fw_status_reg(instance) &
0x4000000) >> 0x1a;
if (msix_enable && !msix_disable) {
int irq_flags = PCI_IRQ_MSIX;
- scratch_pad_2 = readl
- (&instance->reg_set->outbound_scratch_pad_2);
+ scratch_pad_1 = megasas_readl
+ (instance, &instance->reg_set->outbound_scratch_pad_1);
/* Check max MSI-X vectors */
if (fusion) {
if (instance->adapter_type == THUNDERBOLT_SERIES) {
/* Thunderbolt Series*/
- instance->msix_vectors = (scratch_pad_2
+ instance->msix_vectors = (scratch_pad_1
& MR_MAX_REPLY_QUEUES_OFFSET) + 1;
fw_msix_count = instance->msix_vectors;
- } else { /* Invader series supports more than 8 MSI-x vectors*/
- instance->msix_vectors = ((scratch_pad_2
+ } else {
+ instance->msix_vectors = ((scratch_pad_1
& MR_MAX_REPLY_QUEUES_EXT_OFFSET)
>> MR_MAX_REPLY_QUEUES_EXT_OFFSET_SHIFT) + 1;
- if (instance->msix_vectors > 16)
- instance->msix_combined = true;
+
+ /*
+ * For Invader series, > 8 MSI-x vectors
+ * supported by FW/HW implies combined
+ * reply queue mode is enabled.
+ * For Ventura series, > 16 MSI-x vectors
+ * supported by FW/HW implies combined
+ * reply queue mode is enabled.
+ */
+ switch (instance->adapter_type) {
+ case INVADER_SERIES:
+ if (instance->msix_vectors > 8)
+ instance->msix_combined = true;
+ break;
+ case AERO_SERIES:
+ case VENTURA_SERIES:
+ if (instance->msix_vectors > 16)
+ instance->msix_combined = true;
+ break;
+ }
if (rdpq_enable)
- instance->is_rdpq = (scratch_pad_2 & MR_RDPQ_MODE_OFFSET) ?
+ instance->is_rdpq = (scratch_pad_1 & MR_RDPQ_MODE_OFFSET) ?
1 : 0; fw_msix_count = instance->msix_vectors; /* Save 1-15 reply post index address to local memory @@ -5377,7 +5521,7 @@ static int megasas_init_fw(struct megasas_instance *instance) if (!instance->msix_vectors) { i = pci_alloc_irq_vectors(instance->pdev, 1, 1, PCI_IRQ_LEGACY); if (i < 0) - goto fail_setup_irqs; + goto fail_init_adapter; } megasas_setup_reply_map(instance); @@ -5403,13 +5547,14 @@ static int megasas_init_fw(struct megasas_instance *instance) if (instance->instancet->init_adapter(instance)) goto fail_init_adapter; - if (instance->adapter_type == VENTURA_SERIES) { - scratch_pad_4 = - readl(&instance->reg_set->outbound_scratch_pad_4); - if ((scratch_pad_4 & MR_NVME_PAGE_SIZE_MASK) >= + if (instance->adapter_type >= VENTURA_SERIES) { + scratch_pad_3 = + megasas_readl(instance, + &instance->reg_set->outbound_scratch_pad_3); + if ((scratch_pad_3 & MR_NVME_PAGE_SIZE_MASK) >= MR_DEFAULT_NVME_PAGE_SHIFT) instance->nvme_page_size = - (1 << (scratch_pad_4 & MR_NVME_PAGE_SIZE_MASK)); + (1 << (scratch_pad_3 & MR_NVME_PAGE_SIZE_MASK)); dev_info(&instance->pdev->dev, "NVME page size\t: (%d)\n", instance->nvme_page_size); @@ -5439,7 +5584,7 @@ static int megasas_init_fw(struct megasas_instance *instance) memset(instance->ld_ids, 0xff, MEGASAS_MAX_LD_IDS); /* stream detection initialization */ - if (instance->adapter_type == VENTURA_SERIES) { + if (instance->adapter_type >= VENTURA_SERIES) { fusion->stream_detect_by_ld = kcalloc(MAX_LOGICAL_DRIVES_EXT, sizeof(struct LD_STREAM_DETECT *), @@ -5539,6 +5684,11 @@ static int megasas_init_fw(struct megasas_instance *instance) instance->crash_dump_buf = NULL; } + if (instance->snapdump_wait_time) { + megasas_get_snapdump_properties(instance); + dev_info(&instance->pdev->dev, "Snap dump wait time\t: %d\n", + instance->snapdump_wait_time); + } dev_info(&instance->pdev->dev, "pci id\t\t: (0x%04x)/(0x%04x)/(0x%04x)/(0x%04x)\n", @@ -5553,7 +5703,6 @@ static int megasas_init_fw(struct megasas_instance *instance) dev_info(&instance->pdev->dev, "jbod sync map : %s\n", instance->use_seqnum_jbod_fp ? "yes" : "no"); - instance->max_sectors_per_req = instance->max_num_sge * SGE_BUFFER_SIZE / 512; if (tmp_sectors && (instance->max_sectors_per_req > tmp_sectors)) @@ -5576,19 +5725,32 @@ static int megasas_init_fw(struct megasas_instance *instance) /* Launch SR-IOV heartbeat timer */ if (instance->requestorId) { - if (!megasas_sriov_start_heartbeat(instance, 1)) + if (!megasas_sriov_start_heartbeat(instance, 1)) { megasas_start_timer(instance); - else + } else { instance->skip_heartbeat_timer_del = 1; + goto fail_get_ld_pd_list; + } } + /* + * Create and start watchdog thread which will monitor + * controller state every 1 sec and trigger OCR when + * it enters fault state + */ + if (instance->adapter_type != MFI_SERIES) + if (megasas_fusion_start_watchdog(instance) != SUCCESS) + goto fail_start_watchdog; + return 0; +fail_start_watchdog: + if (instance->requestorId && !instance->skip_heartbeat_timer_del) + del_timer_sync(&instance->sriov_heartbeat_timer); fail_get_ld_pd_list: instance->instancet->disable_intr(instance); -fail_init_adapter: megasas_destroy_irqs(instance); -fail_setup_irqs: +fail_init_adapter: if (instance->msix_vectors) pci_free_irq_vectors(instance->pdev); instance->msix_vectors = 0; @@ -6022,13 +6184,13 @@ static int megasas_io_attach(struct megasas_instance *instance) * @instance: Adapter soft state * Description: * - * For Ventura, driver/FW will operate in 64bit DMA addresses. 
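An aside on the MSI-X probe just above: the vector count comes from scratch_pad_1, and the combined-reply-queue thresholds differ per generation (more than 8 vectors on Invader, more than 16 on Ventura and Aero). A condensed sketch of that decode, using the mask and shift names from this patch (the surrounding PCI plumbing is omitted):

    /* Derive the MSI-X vector count from the FW scratch pad register. */
    static int decode_msix_vectors(u32 scratch_pad_1, bool is_thunderbolt)
    {
            if (is_thunderbolt)
                    return (scratch_pad_1 & MR_MAX_REPLY_QUEUES_OFFSET) + 1;

            /* Invader and later publish an extended vector count field. */
            return ((scratch_pad_1 & MR_MAX_REPLY_QUEUES_EXT_OFFSET) >>
                    MR_MAX_REPLY_QUEUES_EXT_OFFSET_SHIFT) + 1;
    }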
+ * For Ventura, driver/FW will operate in 63bit DMA addresses.
*
* For invader-
* By default, driver/FW will operate in 32bit DMA addresses
* for consistent DMA mapping but if 32 bit consistent
- * DMA mask fails, driver will try with 64 bit consistent
- * mask provided FW is true 64bit DMA capable
+ * DMA mask fails, driver will try with 63 bit consistent
+ * mask provided FW is true 63bit DMA capable
*
* For older controllers(Thunderbolt and MFI based adapters)-
* driver/FW will operate in 32 bit consistent DMA addresses.
@@ -6038,31 +6200,31 @@ megasas_set_dma_mask(struct megasas_instance *instance)
{
u64 consistent_mask;
struct pci_dev *pdev;
- u32 scratch_pad_2;
+ u32 scratch_pad_1;
pdev = instance->pdev;
- consistent_mask = (instance->adapter_type == VENTURA_SERIES) ?
- DMA_BIT_MASK(64) : DMA_BIT_MASK(32);
+ consistent_mask = (instance->adapter_type >= VENTURA_SERIES) ?
+ DMA_BIT_MASK(63) : DMA_BIT_MASK(32);
if (IS_DMA64) {
- if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
+ if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(63)) &&
dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
goto fail_set_dma_mask;
- if ((*pdev->dev.dma_mask == DMA_BIT_MASK(64)) &&
+ if ((*pdev->dev.dma_mask == DMA_BIT_MASK(63)) &&
(dma_set_coherent_mask(&pdev->dev, consistent_mask) &&
dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))) {
/*
* If 32 bit DMA mask fails, then try for 64 bit mask
* for FW capable of handling 64 bit DMA.
*/
- scratch_pad_2 = readl
- (&instance->reg_set->outbound_scratch_pad_2);
+ scratch_pad_1 = megasas_readl
+ (instance, &instance->reg_set->outbound_scratch_pad_1);
- if (!(scratch_pad_2 & MR_CAN_HANDLE_64_BIT_DMA_OFFSET))
+ if (!(scratch_pad_1 & MR_CAN_HANDLE_64_BIT_DMA_OFFSET))
goto fail_set_dma_mask;
else if (dma_set_mask_and_coherent(&pdev->dev,
- DMA_BIT_MASK(64)))
+ DMA_BIT_MASK(63)))
goto fail_set_dma_mask;
}
} else if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
@@ -6074,8 +6236,8 @@ megasas_set_dma_mask(struct megasas_instance *instance)
instance->consistent_mask_64bit = true;
dev_info(&pdev->dev, "%s bit DMA mask and %s bit consistent mask\n",
- ((*pdev->dev.dma_mask == DMA_BIT_MASK(64)) ? "64" : "32"),
- (instance->consistent_mask_64bit ? "64" : "32"));
+ ((*pdev->dev.dma_mask == DMA_BIT_MASK(63)) ? "63" : "32"),
+ (instance->consistent_mask_64bit ? "63" : "32"));
return 0;
@@ -6088,12 +6250,14 @@ fail_set_dma_mask:
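The megasas_set_dma_mask() hunk above narrows the wide mask from 64 to 63 bits. Reduced to its core, the policy is: try the widest mask the controller generation supports, and fall back to 32-bit if it is refused. A minimal sketch under that reading (the firmware capability check on scratch_pad_1 and the failure path are omitted; dev is any struct device):

    static int set_dma_policy(struct device *dev, bool ventura_or_later)
    {
            u64 wide = ventura_or_later ? DMA_BIT_MASK(63) : DMA_BIT_MASK(32);

            if (!dma_set_mask_and_coherent(dev, wide))
                    return 0;
            /* Wide mask refused: fall back to 32-bit addressing. */
            return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
    }

/*
* megasas_set_adapter_type - Set adapter type.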
* Supported controllers can be divided in - * 4 categories- enum MR_ADAPTER_TYPE { - * MFI_SERIES = 1, - * THUNDERBOLT_SERIES = 2, - * INVADER_SERIES = 3, - * VENTURA_SERIES = 4, - * }; + * different categories- + * enum MR_ADAPTER_TYPE { + * MFI_SERIES = 1, + * THUNDERBOLT_SERIES = 2, + * INVADER_SERIES = 3, + * VENTURA_SERIES = 4, + * AERO_SERIES = 5, + * }; * @instance: Adapter soft state * return: void */ @@ -6104,6 +6268,12 @@ static inline void megasas_set_adapter_type(struct megasas_instance *instance) instance->adapter_type = MFI_SERIES; } else { switch (instance->pdev->device) { + case PCI_DEVICE_ID_LSI_AERO_10E1: + case PCI_DEVICE_ID_LSI_AERO_10E2: + case PCI_DEVICE_ID_LSI_AERO_10E5: + case PCI_DEVICE_ID_LSI_AERO_10E6: + instance->adapter_type = AERO_SERIES; + break; case PCI_DEVICE_ID_LSI_VENTURA: case PCI_DEVICE_ID_LSI_CRUSADER: case PCI_DEVICE_ID_LSI_HARPOON: @@ -6171,6 +6341,7 @@ static int megasas_alloc_ctrl_mem(struct megasas_instance *instance) if (megasas_alloc_mfi_ctrl_mem(instance)) goto fail; break; + case AERO_SERIES: case VENTURA_SERIES: case THUNDERBOLT_SERIES: case INVADER_SERIES: @@ -6245,6 +6416,14 @@ int megasas_alloc_ctrl_dma_buffers(struct megasas_instance *instance) "Failed to allocate PD list buffer\n"); return -ENOMEM; } + + instance->snapdump_prop = dma_alloc_coherent(&pdev->dev, + sizeof(struct MR_SNAPDUMP_PROPERTIES), + &instance->snapdump_prop_h, GFP_KERNEL); + + if (!instance->snapdump_prop) + dev_err(&pdev->dev, + "Failed to allocate snapdump properties buffer\n"); } instance->pd_list_buf = @@ -6388,6 +6567,12 @@ void megasas_free_ctrl_dma_buffers(struct megasas_instance *instance) dma_free_coherent(&pdev->dev, CRASH_DMA_BUF_SIZE, instance->crash_dump_buf, instance->crash_dump_h); + + if (instance->snapdump_prop) + dma_free_coherent(&pdev->dev, + sizeof(struct MR_SNAPDUMP_PROPERTIES), + instance->snapdump_prop, + instance->snapdump_prop_h); } /* @@ -6434,12 +6619,10 @@ static inline void megasas_init_ctrl_params(struct megasas_instance *instance) instance->disableOnlineCtrlReset = 1; instance->UnevenSpanSupport = 0; - if (instance->adapter_type != MFI_SERIES) { + if (instance->adapter_type != MFI_SERIES) INIT_WORK(&instance->work_init, megasas_fusion_ocr_wq); - INIT_WORK(&instance->crash_init, megasas_fusion_crash_dump_wq); - } else { + else INIT_WORK(&instance->work_init, process_fw_state_change_wq); - } } /** @@ -6455,6 +6638,13 @@ static int megasas_probe_one(struct pci_dev *pdev, struct megasas_instance *instance; u16 control = 0; + switch (pdev->device) { + case PCI_DEVICE_ID_LSI_AERO_10E1: + case PCI_DEVICE_ID_LSI_AERO_10E5: + dev_info(&pdev->dev, "Adapter is in configurable secure mode\n"); + break; + } + /* Reset MSI-X in the kdump kernel */ if (reset_devices) { pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX); @@ -6708,6 +6898,10 @@ megasas_suspend(struct pci_dev *pdev, pm_message_t state) if (instance->requestorId && !instance->skip_heartbeat_timer_del) del_timer_sync(&instance->sriov_heartbeat_timer); + /* Stop the FW fault detection watchdog */ + if (instance->adapter_type != MFI_SERIES) + megasas_fusion_stop_watchdog(instance); + megasas_flush_cache(instance); megasas_shutdown_controller(instance, MR_DCMD_HIBERNATE_SHUTDOWN); @@ -6843,8 +7037,16 @@ megasas_resume(struct pci_dev *pdev) if (megasas_start_aen(instance)) dev_err(&instance->pdev->dev, "Start AEN failed\n"); + /* Re-launch FW fault watchdog */ + if (instance->adapter_type != MFI_SERIES) + if (megasas_fusion_start_watchdog(instance) != SUCCESS) + goto fail_start_watchdog; + 
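megasas_resume() above re-arms the FW fault watchdog added by this patch (megasas_fusion_start_watchdog() and megasas_fusion_stop_watchdog() appear later, in megaraid_sas_fusion.c). The underlying pattern is a self-re-arming delayed work on a dedicated single-threaded workqueue; a generic sketch with illustrative names:

    #include <linux/workqueue.h>

    static struct workqueue_struct *fault_wq;
    static struct delayed_work fault_work;

    static void fault_poll_fn(struct work_struct *work)
    {
            /* ... read the FW state register, start recovery on FAULT ... */
            queue_delayed_work(fault_wq, &fault_work,
                               msecs_to_jiffies(1000));    /* re-arm */
    }

    static int fault_watchdog_start(void)
    {
            fault_wq = create_singlethread_workqueue("fault_poll");
            if (!fault_wq)
                    return -ENOMEM;
            INIT_DELAYED_WORK(&fault_work, fault_poll_fn);
            queue_delayed_work(fault_wq, &fault_work,
                               msecs_to_jiffies(1000));
            return 0;
    }

    static void fault_watchdog_stop(void)
    {
            cancel_delayed_work_sync(&fault_work);
            destroy_workqueue(fault_wq);
    }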
return 0; +fail_start_watchdog: + if (instance->requestorId && !instance->skip_heartbeat_timer_del) + del_timer_sync(&instance->sriov_heartbeat_timer); fail_init_mfi: megasas_free_ctrl_dma_buffers(instance); megasas_free_ctrl_mem(instance); @@ -6912,6 +7114,10 @@ static void megasas_detach_one(struct pci_dev *pdev) if (instance->requestorId && !instance->skip_heartbeat_timer_del) del_timer_sync(&instance->sriov_heartbeat_timer); + /* Stop the FW fault detection watchdog */ + if (instance->adapter_type != MFI_SERIES) + megasas_fusion_stop_watchdog(instance); + if (instance->fw_crash_state != UNAVAILABLE) megasas_free_host_crash_buffer(instance); scsi_remove_host(instance->host); @@ -6956,7 +7162,7 @@ skip_firing_dcmds: if (instance->msix_vectors) pci_free_irq_vectors(instance->pdev); - if (instance->adapter_type == VENTURA_SERIES) { + if (instance->adapter_type >= VENTURA_SERIES) { for (i = 0; i < MAX_LOGICAL_DRIVES_EXT; ++i) kfree(fusion->stream_detect_by_ld[i]); kfree(fusion->stream_detect_by_ld); @@ -7737,8 +7943,15 @@ megasas_aen_polling(struct work_struct *work) break; case MR_EVT_CTRL_PROP_CHANGED: - dcmd_ret = megasas_get_ctrl_info(instance); - break; + dcmd_ret = megasas_get_ctrl_info(instance); + if (dcmd_ret == DCMD_SUCCESS && + instance->snapdump_wait_time) { + megasas_get_snapdump_properties(instance); + dev_info(&instance->pdev->dev, + "Snap dump wait time\t: %d\n", + instance->snapdump_wait_time); + } + break; default: doscan = 0; break; diff --git a/drivers/scsi/megaraid/megaraid_sas_fp.c b/drivers/scsi/megaraid/megaraid_sas_fp.c index 59ecbb3b53b5..87c2c0472c8f 100644 --- a/drivers/scsi/megaraid/megaraid_sas_fp.c +++ b/drivers/scsi/megaraid/megaraid_sas_fp.c @@ -2,7 +2,8 @@ * Linux MegaRAID driver for SAS based RAID controllers * * Copyright (c) 2009-2013 LSI Corporation - * Copyright (c) 2013-2014 Avago Technologies + * Copyright (c) 2013-2016 Avago Technologies + * Copyright (c) 2016-2018 Broadcom Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License @@ -19,17 +20,14 @@ * * FILE: megaraid_sas_fp.c * - * Authors: Avago Technologies + * Authors: Broadcom Inc. 
* Sumant Patro * Varad Talamacki * Manoj Jose - * Kashyap Desai - * Sumit Saxena + * Kashyap Desai + * Sumit Saxena * - * Send feedback to: megaraidlinux.pdl@avagotech.com - * - * Mail to: Avago Technologies, 350 West Trimble Road, Building 90, - * San Jose, California 95131 + * Send feedback to: megaraidlinux.pdl@broadcom.com */ #include @@ -745,7 +743,7 @@ static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld, *pDevHandle = MR_PdDevHandleGet(pd, map); *pPdInterface = MR_PdInterfaceTypeGet(pd, map); /* get second pd also for raid 1/10 fast path writes*/ - if ((instance->adapter_type == VENTURA_SERIES) && + if ((instance->adapter_type >= VENTURA_SERIES) && (raid->level == 1) && !io_info->isRead) { r1_alt_pd = MR_ArPdGet(arRef, physArm + 1, map); @@ -770,7 +768,7 @@ static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld, } *pdBlock += stripRef + le64_to_cpu(MR_LdSpanPtrGet(ld, span, map)->startBlk); - if (instance->adapter_type == VENTURA_SERIES) { + if (instance->adapter_type >= VENTURA_SERIES) { ((struct RAID_CONTEXT_G35 *)pRAID_Context)->span_arm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm; io_info->span_arm = @@ -861,7 +859,7 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow, *pDevHandle = MR_PdDevHandleGet(pd, map); *pPdInterface = MR_PdInterfaceTypeGet(pd, map); /* get second pd also for raid 1/10 fast path writes*/ - if ((instance->adapter_type == VENTURA_SERIES) && + if ((instance->adapter_type >= VENTURA_SERIES) && (raid->level == 1) && !io_info->isRead) { r1_alt_pd = MR_ArPdGet(arRef, physArm + 1, map); @@ -888,7 +886,7 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow, } *pdBlock += stripRef + le64_to_cpu(MR_LdSpanPtrGet(ld, span, map)->startBlk); - if (instance->adapter_type == VENTURA_SERIES) { + if (instance->adapter_type >= VENTURA_SERIES) { ((struct RAID_CONTEXT_G35 *)pRAID_Context)->span_arm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm; io_info->span_arm = @@ -1266,7 +1264,7 @@ void mr_update_load_balance_params(struct MR_DRV_RAID_MAP_ALL *drv_map, for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES_EXT; ldCount++) { ld = MR_TargetIdToLdGet(ldCount, drv_map); - if (ld >= MAX_LOGICAL_DRIVES_EXT) { + if (ld >= MAX_LOGICAL_DRIVES_EXT - 1) { lbInfo[ldCount].loadBalanceFlag = 0; continue; } diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c index f74b5ea24f0f..211c17c33aa0 100644 --- a/drivers/scsi/megaraid/megaraid_sas_fusion.c +++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c @@ -2,7 +2,8 @@ * Linux MegaRAID driver for SAS based RAID controllers * * Copyright (c) 2009-2013 LSI Corporation - * Copyright (c) 2013-2014 Avago Technologies + * Copyright (c) 2013-2016 Avago Technologies + * Copyright (c) 2016-2018 Broadcom Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License @@ -19,16 +20,13 @@ * * FILE: megaraid_sas_fusion.c * - * Authors: Avago Technologies + * Authors: Broadcom Inc. 
* Sumant Patro * Adam Radford - * Kashyap Desai - * Sumit Saxena + * Kashyap Desai + * Sumit Saxena * - * Send feedback to: megaraidlinux.pdl@avagotech.com - * - * Mail to: Avago Technologies, 350 West Trimble Road, Building 90, - * San Jose, California 95131 + * Send feedback to: megaraidlinux.pdl@broadcom.com */ #include @@ -48,6 +46,7 @@ #include #include #include +#include #include #include @@ -74,7 +73,7 @@ void megasas_return_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd); int megasas_alloc_cmds(struct megasas_instance *instance); int -megasas_clear_intr_fusion(struct megasas_register_set __iomem *regs); +megasas_clear_intr_fusion(struct megasas_instance *instance); int megasas_issue_polled(struct megasas_instance *instance, struct megasas_cmd *cmd); @@ -95,6 +94,9 @@ static void megasas_free_rdpq_fusion(struct megasas_instance *instance); static void megasas_free_reply_fusion(struct megasas_instance *instance); static inline void megasas_configure_queue_sizes(struct megasas_instance *instance); +static void megasas_fusion_crash_dump(struct megasas_instance *instance); +extern u32 megasas_readl(struct megasas_instance *instance, + const volatile void __iomem *addr); /** * megasas_check_same_4gb_region - check if allocation @@ -165,9 +167,11 @@ megasas_disable_intr_fusion(struct megasas_instance *instance) } int -megasas_clear_intr_fusion(struct megasas_register_set __iomem *regs) +megasas_clear_intr_fusion(struct megasas_instance *instance) { u32 status; + struct megasas_register_set __iomem *regs; + regs = instance->reg_set; /* * Check if it is our interrupt */ @@ -262,16 +266,17 @@ megasas_fusion_update_can_queue(struct megasas_instance *instance, int fw_boot_c reg_set = instance->reg_set; - /* ventura FW does not fill outbound_scratch_pad_3 with queue depth */ + /* ventura FW does not fill outbound_scratch_pad_2 with queue depth */ if (instance->adapter_type < VENTURA_SERIES) cur_max_fw_cmds = - readl(&instance->reg_set->outbound_scratch_pad_3) & 0x00FFFF; + megasas_readl(instance, + &instance->reg_set->outbound_scratch_pad_2) & 0x00FFFF; if (dual_qdepth_disable || !cur_max_fw_cmds) - cur_max_fw_cmds = instance->instancet->read_fw_status_reg(reg_set) & 0x00FFFF; + cur_max_fw_cmds = instance->instancet->read_fw_status_reg(instance) & 0x00FFFF; else ldio_threshold = - (instance->instancet->read_fw_status_reg(reg_set) & 0x00FFFF) - MEGASAS_FUSION_IOCTL_CMDS; + (instance->instancet->read_fw_status_reg(instance) & 0x00FFFF) - MEGASAS_FUSION_IOCTL_CMDS; dev_info(&instance->pdev->dev, "Current firmware supports maximum commands: %d\t LDIO threshold: %d\n", @@ -807,10 +812,8 @@ megasas_free_rdpq_fusion(struct megasas_instance *instance) { } - if (fusion->reply_frames_desc_pool) - dma_pool_destroy(fusion->reply_frames_desc_pool); - if (fusion->reply_frames_desc_pool_align) - dma_pool_destroy(fusion->reply_frames_desc_pool_align); + dma_pool_destroy(fusion->reply_frames_desc_pool); + dma_pool_destroy(fusion->reply_frames_desc_pool_align); if (fusion->rdpq_virt) dma_free_coherent(&instance->pdev->dev, @@ -830,8 +833,7 @@ megasas_free_reply_fusion(struct megasas_instance *instance) { fusion->reply_frames_desc[0], fusion->reply_frames_desc_phys[0]); - if (fusion->reply_frames_desc_pool) - dma_pool_destroy(fusion->reply_frames_desc_pool); + dma_pool_destroy(fusion->reply_frames_desc_pool); } @@ -974,7 +976,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance) struct megasas_header *frame_hdr; const char *sys_info; MFI_CAPABILITIES *drv_ops; - u32 scratch_pad_2; + u32 
scratch_pad_1; ktime_t time; bool cur_fw_64bit_dma_capable; @@ -985,14 +987,14 @@ megasas_ioc_init_fusion(struct megasas_instance *instance) cmd = fusion->ioc_init_cmd; - scratch_pad_2 = readl - (&instance->reg_set->outbound_scratch_pad_2); + scratch_pad_1 = megasas_readl + (instance, &instance->reg_set->outbound_scratch_pad_1); - cur_rdpq_mode = (scratch_pad_2 & MR_RDPQ_MODE_OFFSET) ? 1 : 0; + cur_rdpq_mode = (scratch_pad_1 & MR_RDPQ_MODE_OFFSET) ? 1 : 0; if (instance->adapter_type == INVADER_SERIES) { cur_fw_64bit_dma_capable = - (scratch_pad_2 & MR_CAN_HANDLE_64_BIT_DMA_OFFSET) ? true : false; + (scratch_pad_1 & MR_CAN_HANDLE_64_BIT_DMA_OFFSET) ? true : false; if (instance->consistent_mask_64bit && !cur_fw_64bit_dma_capable) { dev_err(&instance->pdev->dev, "Driver was operating on 64bit " @@ -1010,7 +1012,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance) goto fail_fw_init; } - instance->fw_sync_cache_support = (scratch_pad_2 & + instance->fw_sync_cache_support = (scratch_pad_1 & MR_CAN_HANDLE_SYNC_CACHE_OFFSET) ? 1 : 0; dev_info(&instance->pdev->dev, "FW supports sync cache\t: %s\n", instance->fw_sync_cache_support ? "Yes" : "No"); @@ -1043,9 +1045,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance) frame_hdr = &cmd->frame->hdr; frame_hdr->cmd_status = 0xFF; - frame_hdr->flags = cpu_to_le16( - le16_to_cpu(frame_hdr->flags) | - MFI_FRAME_DONT_POST_IN_REPLY_QUEUE); + frame_hdr->flags |= cpu_to_le16(MFI_FRAME_DONT_POST_IN_REPLY_QUEUE); init_frame->cmd = MFI_CMD_INIT; init_frame->cmd_status = 0xFF; @@ -1107,7 +1107,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance) instance->instancet->disable_intr(instance); for (i = 0; i < (10 * 1000); i += 20) { - if (readl(&instance->reg_set->doorbell) & 1) + if (megasas_readl(instance, &instance->reg_set->doorbell) & 1) msleep(20); else break; @@ -1115,7 +1115,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance) megasas_fire_cmd_fusion(instance, &req_desc); - wait_and_poll(instance, cmd, MFI_POLL_TIMEOUT_SECS); + wait_and_poll(instance, cmd, MFI_IO_TIMEOUT_SECS); frame_hdr = &cmd->frame->hdr; if (frame_hdr->cmd_status != 0) { @@ -1559,14 +1559,12 @@ void megasas_configure_queue_sizes(struct megasas_instance *instance) fusion = instance->ctrl_context; max_cmd = instance->max_fw_cmds; - if (instance->adapter_type == VENTURA_SERIES) + if (instance->adapter_type >= VENTURA_SERIES) instance->max_mpt_cmds = instance->max_fw_cmds * RAID_1_PEER_CMDS; else instance->max_mpt_cmds = instance->max_fw_cmds; - instance->max_scsi_cmds = instance->max_fw_cmds - - (MEGASAS_FUSION_INTERNAL_CMDS + - MEGASAS_FUSION_IOCTL_CMDS); + instance->max_scsi_cmds = instance->max_fw_cmds - instance->max_mfi_cmds; instance->cur_can_queue = instance->max_scsi_cmds; instance->host->can_queue = instance->cur_can_queue; @@ -1627,8 +1625,7 @@ static inline void megasas_free_ioc_init_cmd(struct megasas_instance *instance) fusion->ioc_init_cmd->frame, fusion->ioc_init_cmd->frame_phys_addr); - if (fusion->ioc_init_cmd) - kfree(fusion->ioc_init_cmd); + kfree(fusion->ioc_init_cmd); } /** @@ -1642,7 +1639,7 @@ megasas_init_adapter_fusion(struct megasas_instance *instance) { struct megasas_register_set __iomem *reg_set; struct fusion_context *fusion; - u32 scratch_pad_2; + u32 scratch_pad_1; int i = 0, count; fusion = instance->ctrl_context; @@ -1659,20 +1656,21 @@ megasas_init_adapter_fusion(struct megasas_instance *instance) megasas_configure_queue_sizes(instance); - scratch_pad_2 = readl(&instance->reg_set->outbound_scratch_pad_2); - /* If 
scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK is set, + scratch_pad_1 = megasas_readl(instance, + &instance->reg_set->outbound_scratch_pad_1); + /* If scratch_pad_1 & MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK is set, * Firmware support extended IO chain frame which is 4 times more than * legacy Firmware. * Legacy Firmware - Frame size is (8 * 128) = 1K * 1M IO Firmware - Frame size is (8 * 128 * 4) = 4K */ - if (scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK) + if (scratch_pad_1 & MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK) instance->max_chain_frame_sz = - ((scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_MASK) >> + ((scratch_pad_1 & MEGASAS_MAX_CHAIN_SIZE_MASK) >> MEGASAS_MAX_CHAIN_SHIFT) * MEGASAS_1MB_IO; else instance->max_chain_frame_sz = - ((scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_MASK) >> + ((scratch_pad_1 & MEGASAS_MAX_CHAIN_SIZE_MASK) >> MEGASAS_MAX_CHAIN_SHIFT) * MEGASAS_256K_IO; if (instance->max_chain_frame_sz < MEGASAS_CHAIN_FRAME_SZ_MIN) { @@ -1759,6 +1757,90 @@ fail_alloc_mfi_cmds: return 1; } +/** + * megasas_fault_detect_work - Worker function of + * FW fault handling workqueue. + */ +static void +megasas_fault_detect_work(struct work_struct *work) +{ + struct megasas_instance *instance = + container_of(work, struct megasas_instance, + fw_fault_work.work); + u32 fw_state, dma_state, status; + + /* Check the fw state */ + fw_state = instance->instancet->read_fw_status_reg(instance) & + MFI_STATE_MASK; + + if (fw_state == MFI_STATE_FAULT) { + dma_state = instance->instancet->read_fw_status_reg(instance) & + MFI_STATE_DMADONE; + /* Start collecting crash, if DMA bit is done */ + if (instance->crash_dump_drv_support && + instance->crash_dump_app_support && dma_state) { + megasas_fusion_crash_dump(instance); + } else { + if (instance->unload == 0) { + status = megasas_reset_fusion(instance->host, 0); + if (status != SUCCESS) { + dev_err(&instance->pdev->dev, + "Failed from %s %d, do not re-arm timer\n", + __func__, __LINE__); + return; + } + } + } + } + + if (instance->fw_fault_work_q) + queue_delayed_work(instance->fw_fault_work_q, + &instance->fw_fault_work, + msecs_to_jiffies(MEGASAS_WATCHDOG_THREAD_INTERVAL)); +} + +int +megasas_fusion_start_watchdog(struct megasas_instance *instance) +{ + /* Check if the Fault WQ is already started */ + if (instance->fw_fault_work_q) + return SUCCESS; + + INIT_DELAYED_WORK(&instance->fw_fault_work, megasas_fault_detect_work); + + snprintf(instance->fault_handler_work_q_name, + sizeof(instance->fault_handler_work_q_name), + "poll_megasas%d_status", instance->host->host_no); + + instance->fw_fault_work_q = + create_singlethread_workqueue(instance->fault_handler_work_q_name); + if (!instance->fw_fault_work_q) { + dev_err(&instance->pdev->dev, "Failed from %s %d\n", + __func__, __LINE__); + return FAILED; + } + + queue_delayed_work(instance->fw_fault_work_q, + &instance->fw_fault_work, + msecs_to_jiffies(MEGASAS_WATCHDOG_THREAD_INTERVAL)); + + return SUCCESS; +} + +void +megasas_fusion_stop_watchdog(struct megasas_instance *instance) +{ + struct workqueue_struct *wq; + + if (instance->fw_fault_work_q) { + wq = instance->fw_fault_work_q; + instance->fw_fault_work_q = NULL; + if (!cancel_delayed_work_sync(&instance->fw_fault_work)) + flush_workqueue(wq); + destroy_workqueue(wq); + } +} + /** * map_cmd_status - Maps FW cmd status to OS cmd status * @cmd : Pointer to cmd @@ -2543,19 +2625,22 @@ megasas_build_ldio_fusion(struct megasas_instance *instance, struct MR_DRV_RAID_MAP_ALL *local_map_ptr; u8 *raidLUN; unsigned long spinlock_flags; - union RAID_CONTEXT_UNION 
*praid_context; struct MR_LD_RAID *raid = NULL; struct MR_PRIV_DEVICE *mrdev_priv; + struct RAID_CONTEXT *rctx; + struct RAID_CONTEXT_G35 *rctx_g35; device_id = MEGASAS_DEV_INDEX(scp); fusion = instance->ctrl_context; io_request = cmd->io_request; - io_request->RaidContext.raid_context.virtual_disk_tgt_id = - cpu_to_le16(device_id); - io_request->RaidContext.raid_context.status = 0; - io_request->RaidContext.raid_context.ex_status = 0; + rctx = &io_request->RaidContext.raid_context; + rctx_g35 = &io_request->RaidContext.raid_context_g35; + + rctx->virtual_disk_tgt_id = cpu_to_le16(device_id); + rctx->status = 0; + rctx->ex_status = 0; req_desc = (union MEGASAS_REQUEST_DESCRIPTOR_UNION *)cmd->request_desc; @@ -2631,11 +2716,10 @@ megasas_build_ldio_fusion(struct megasas_instance *instance, raid = MR_LdRaidGet(ld, local_map_ptr); if (!raid || (!fusion->fast_path_io)) { - io_request->RaidContext.raid_context.reg_lock_flags = 0; + rctx->reg_lock_flags = 0; fp_possible = false; } else { - if (MR_BuildRaidContext(instance, &io_info, - &io_request->RaidContext.raid_context, + if (MR_BuildRaidContext(instance, &io_info, rctx, local_map_ptr, &raidLUN)) fp_possible = (io_info.fpOkForIo > 0) ? true : false; } @@ -2643,9 +2727,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance, cmd->request_desc->SCSIIO.MSIxIndex = instance->reply_map[raw_smp_processor_id()]; - praid_context = &io_request->RaidContext; - - if (instance->adapter_type == VENTURA_SERIES) { + if (instance->adapter_type >= VENTURA_SERIES) { /* FP for Optimal raid level 1. * All large RAID-1 writes (> 32 KiB, both WT and WB modes) * are built by the driver as LD I/Os. @@ -2681,17 +2763,17 @@ megasas_build_ldio_fusion(struct megasas_instance *instance, /* In ventura if stream detected for a read and it is * read ahead capable make this IO as LDIO */ - if (is_stream_detected(&io_request->RaidContext.raid_context_g35)) + if (is_stream_detected(rctx_g35)) fp_possible = false; } /* If raid is NULL, set CPU affinity to default CPU0 */ if (raid) - megasas_set_raidflag_cpu_affinity(praid_context, + megasas_set_raidflag_cpu_affinity(&io_request->RaidContext, raid, fp_possible, io_info.isRead, scsi_buff_len); else - praid_context->raid_context_g35.routing_flags |= + rctx_g35->routing_flags |= (MR_RAID_CTX_CPUSEL_0 << MR_RAID_CTX_ROUTINGFLAGS_CPUSEL_SHIFT); } @@ -2703,25 +2785,20 @@ megasas_build_ldio_fusion(struct megasas_instance *instance, (MPI2_REQ_DESCRIPT_FLAGS_FP_IO << MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); if (instance->adapter_type == INVADER_SERIES) { - if (io_request->RaidContext.raid_context.reg_lock_flags == - REGION_TYPE_UNUSED) + if (rctx->reg_lock_flags == REGION_TYPE_UNUSED) cmd->request_desc->SCSIIO.RequestFlags = (MEGASAS_REQ_DESCRIPT_FLAGS_NO_LOCK << MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); - io_request->RaidContext.raid_context.type - = MPI2_TYPE_CUDA; - io_request->RaidContext.raid_context.nseg = 0x1; + rctx->type = MPI2_TYPE_CUDA; + rctx->nseg = 0x1; io_request->IoFlags |= cpu_to_le16(MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH); - io_request->RaidContext.raid_context.reg_lock_flags |= + rctx->reg_lock_flags |= (MR_RL_FLAGS_GRANT_DESTINATION_CUDA | MR_RL_FLAGS_SEQ_NUM_ENABLE); - } else if (instance->adapter_type == VENTURA_SERIES) { - io_request->RaidContext.raid_context_g35.nseg_type |= - (1 << RAID_CONTEXT_NSEG_SHIFT); - io_request->RaidContext.raid_context_g35.nseg_type |= - (MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT); - io_request->RaidContext.raid_context_g35.routing_flags |= - (1 << 
MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT); + } else if (instance->adapter_type >= VENTURA_SERIES) { + rctx_g35->nseg_type |= (1 << RAID_CONTEXT_NSEG_SHIFT); + rctx_g35->nseg_type |= (MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT); + rctx_g35->routing_flags |= (1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT); io_request->IoFlags |= cpu_to_le16(MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH); } @@ -2734,17 +2811,15 @@ megasas_build_ldio_fusion(struct megasas_instance *instance, &io_info, local_map_ptr); scp->SCp.Status |= MEGASAS_LOAD_BALANCE_FLAG; cmd->pd_r1_lb = io_info.pd_after_lb; - if (instance->adapter_type == VENTURA_SERIES) - io_request->RaidContext.raid_context_g35.span_arm - = io_info.span_arm; + if (instance->adapter_type >= VENTURA_SERIES) + rctx_g35->span_arm = io_info.span_arm; else - io_request->RaidContext.raid_context.span_arm - = io_info.span_arm; + rctx->span_arm = io_info.span_arm; } else scp->SCp.Status &= ~MEGASAS_LOAD_BALANCE_FLAG; - if (instance->adapter_type == VENTURA_SERIES) + if (instance->adapter_type >= VENTURA_SERIES) cmd->r1_alt_dev_handle = io_info.r1_alt_dev_handle; else cmd->r1_alt_dev_handle = MR_DEVHANDLE_INVALID; @@ -2762,31 +2837,26 @@ megasas_build_ldio_fusion(struct megasas_instance *instance, /* populate the LUN field */ memcpy(io_request->LUN, raidLUN, 8); } else { - io_request->RaidContext.raid_context.timeout_value = + rctx->timeout_value = cpu_to_le16(local_map_ptr->raidMap.fpPdIoTimeoutSec); cmd->request_desc->SCSIIO.RequestFlags = (MEGASAS_REQ_DESCRIPT_FLAGS_LD_IO << MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); if (instance->adapter_type == INVADER_SERIES) { if (io_info.do_fp_rlbypass || - (io_request->RaidContext.raid_context.reg_lock_flags - == REGION_TYPE_UNUSED)) + (rctx->reg_lock_flags == REGION_TYPE_UNUSED)) cmd->request_desc->SCSIIO.RequestFlags = (MEGASAS_REQ_DESCRIPT_FLAGS_NO_LOCK << MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); - io_request->RaidContext.raid_context.type - = MPI2_TYPE_CUDA; - io_request->RaidContext.raid_context.reg_lock_flags |= + rctx->type = MPI2_TYPE_CUDA; + rctx->reg_lock_flags |= (MR_RL_FLAGS_GRANT_DESTINATION_CPU0 | - MR_RL_FLAGS_SEQ_NUM_ENABLE); - io_request->RaidContext.raid_context.nseg = 0x1; - } else if (instance->adapter_type == VENTURA_SERIES) { - io_request->RaidContext.raid_context_g35.routing_flags |= - (1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT); - io_request->RaidContext.raid_context_g35.nseg_type |= - (1 << RAID_CONTEXT_NSEG_SHIFT); - io_request->RaidContext.raid_context_g35.nseg_type |= - (MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT); + MR_RL_FLAGS_SEQ_NUM_ENABLE); + rctx->nseg = 0x1; + } else if (instance->adapter_type >= VENTURA_SERIES) { + rctx_g35->routing_flags |= (1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT); + rctx_g35->nseg_type |= (1 << RAID_CONTEXT_NSEG_SHIFT); + rctx_g35->nseg_type |= (MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT); } io_request->Function = MEGASAS_MPI2_FUNCTION_LD_IO_REQUEST; io_request->DevHandle = cpu_to_le16(device_id); @@ -2832,7 +2902,7 @@ static void megasas_build_ld_nonrw_fusion(struct megasas_instance *instance, device_id < instance->fw_supported_vd_count)) { ld = MR_TargetIdToLdGet(device_id, local_map_ptr); - if (ld >= instance->fw_supported_vd_count) + if (ld >= instance->fw_supported_vd_count - 1) fp_possible = 0; else { raid = MR_LdRaidGet(ld, local_map_ptr); @@ -2855,7 +2925,7 @@ static void megasas_build_ld_nonrw_fusion(struct megasas_instance *instance, /* set RAID context values */ pRAID_Context->config_seq_num = raid->seqNum; - if (instance->adapter_type != VENTURA_SERIES) + if 
(instance->adapter_type < VENTURA_SERIES) pRAID_Context->reg_lock_flags = REGION_TYPE_SHARED_READ; pRAID_Context->timeout_value = cpu_to_le16(raid->fpIoTimeoutForLd); @@ -2940,7 +3010,7 @@ megasas_build_syspd_fusion(struct megasas_instance *instance, cpu_to_le16(device_id + (MAX_PHYSICAL_DEVICES - 1)); pRAID_Context->config_seq_num = pd_sync->seq[pd_index].seqNum; io_request->DevHandle = pd_sync->seq[pd_index].devHandle; - if (instance->adapter_type == VENTURA_SERIES) { + if (instance->adapter_type >= VENTURA_SERIES) { io_request->RaidContext.raid_context_g35.routing_flags |= (1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT); io_request->RaidContext.raid_context_g35.nseg_type |= @@ -3073,7 +3143,7 @@ megasas_build_io_fusion(struct megasas_instance *instance, return 1; } - if (instance->adapter_type == VENTURA_SERIES) { + if (instance->adapter_type >= VENTURA_SERIES) { set_num_sge(&io_request->RaidContext.raid_context_g35, sge_count); cpu_to_le16s(&io_request->RaidContext.raid_context_g35.routing_flags); cpu_to_le16s(&io_request->RaidContext.raid_context_g35.nseg_type); @@ -3385,7 +3455,7 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex) atomic_dec(&lbinfo->scsi_pending_cmds[cmd_fusion->pd_r1_lb]); cmd_fusion->scmd->SCp.Status &= ~MEGASAS_LOAD_BALANCE_FLAG; } - //Fall thru and complete IO + /* Fall through - and complete IO */ case MEGASAS_MPI2_FUNCTION_LD_IO_REQUEST: /* LD-IO Path */ atomic_dec(&instance->fw_outstanding); if (cmd_fusion->r1_alt_dev_handle == MR_DEVHANDLE_INVALID) { @@ -3501,18 +3571,13 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr) { struct megasas_instance *instance = (struct megasas_instance *)instance_addr; - unsigned long flags; u32 count, MSIxIndex; count = instance->msix_vectors > 0 ? instance->msix_vectors : 1; /* If we have already declared adapter dead, donot complete cmds */ - spin_lock_irqsave(&instance->hba_lock, flags); - if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) { - spin_unlock_irqrestore(&instance->hba_lock, flags); + if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) return; - } - spin_unlock_irqrestore(&instance->hba_lock, flags); for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++) complete_cmd_fusion(instance, MSIxIndex); @@ -3525,48 +3590,24 @@ irqreturn_t megasas_isr_fusion(int irq, void *devp) { struct megasas_irq_context *irq_context = devp; struct megasas_instance *instance = irq_context->instance; - u32 mfiStatus, fw_state, dma_state; + u32 mfiStatus; if (instance->mask_interrupts) return IRQ_NONE; if (!instance->msix_vectors) { - mfiStatus = instance->instancet->clear_intr(instance->reg_set); + mfiStatus = instance->instancet->clear_intr(instance); if (!mfiStatus) return IRQ_NONE; } /* If we are resetting, bail */ if (test_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags)) { - instance->instancet->clear_intr(instance->reg_set); + instance->instancet->clear_intr(instance); return IRQ_HANDLED; } - if (!complete_cmd_fusion(instance, irq_context->MSIxIndex)) { - instance->instancet->clear_intr(instance->reg_set); - /* If we didn't complete any commands, check for FW fault */ - fw_state = instance->instancet->read_fw_status_reg( - instance->reg_set) & MFI_STATE_MASK; - dma_state = instance->instancet->read_fw_status_reg - (instance->reg_set) & MFI_STATE_DMADONE; - if (instance->crash_dump_drv_support && - instance->crash_dump_app_support) { - /* Start collecting crash, if DMA bit is done */ - if ((fw_state == MFI_STATE_FAULT) && dma_state) - 
schedule_work(&instance->crash_init); - else if (fw_state == MFI_STATE_FAULT) { - if (instance->unload == 0) - schedule_work(&instance->work_init); - } - } else if (fw_state == MFI_STATE_FAULT) { - dev_warn(&instance->pdev->dev, "Iop2SysDoorbellInt" - "for scsi%d\n", instance->host->host_no); - if (instance->unload == 0) - schedule_work(&instance->work_init); - } - } - - return IRQ_HANDLED; + return complete_cmd_fusion(instance, irq_context->MSIxIndex); } /** @@ -3692,9 +3733,9 @@ megasas_release_fusion(struct megasas_instance *instance) * @regs: MFI register set */ static u32 -megasas_read_fw_status_reg_fusion(struct megasas_register_set __iomem *regs) +megasas_read_fw_status_reg_fusion(struct megasas_instance *instance) { - return readl(&(regs)->outbound_scratch_pad); + return megasas_readl(instance, &instance->reg_set->outbound_scratch_pad_0); } /** @@ -3756,11 +3797,12 @@ megasas_adp_reset_fusion(struct megasas_instance *instance, writel(MPI2_WRSEQ_6TH_KEY_VALUE, &instance->reg_set->fusion_seq_offset); /* Check that the diag write enable (DRWE) bit is on */ - host_diag = readl(&instance->reg_set->fusion_host_diag); + host_diag = megasas_readl(instance, &instance->reg_set->fusion_host_diag); retry = 0; while (!(host_diag & HOST_DIAG_WRITE_ENABLE)) { msleep(100); - host_diag = readl(&instance->reg_set->fusion_host_diag); + host_diag = megasas_readl(instance, + &instance->reg_set->fusion_host_diag); if (retry++ == 100) { dev_warn(&instance->pdev->dev, "Host diag unlock failed from %s %d\n", @@ -3777,11 +3819,12 @@ megasas_adp_reset_fusion(struct megasas_instance *instance, msleep(3000); /* Make sure reset adapter bit is cleared */ - host_diag = readl(&instance->reg_set->fusion_host_diag); + host_diag = megasas_readl(instance, &instance->reg_set->fusion_host_diag); retry = 0; while (host_diag & HOST_DIAG_RESET_ADAPTER) { msleep(100); - host_diag = readl(&instance->reg_set->fusion_host_diag); + host_diag = megasas_readl(instance, + &instance->reg_set->fusion_host_diag); if (retry++ == 1000) { dev_warn(&instance->pdev->dev, "Diag reset adapter never cleared %s %d\n", @@ -3792,14 +3835,14 @@ megasas_adp_reset_fusion(struct megasas_instance *instance, if (host_diag & HOST_DIAG_RESET_ADAPTER) return -1; - abs_state = instance->instancet->read_fw_status_reg(instance->reg_set) + abs_state = instance->instancet->read_fw_status_reg(instance) & MFI_STATE_MASK; retry = 0; while ((abs_state <= MFI_STATE_FW_INIT) && (retry++ < 1000)) { msleep(100); abs_state = instance->instancet-> - read_fw_status_reg(instance->reg_set) & MFI_STATE_MASK; + read_fw_status_reg(instance) & MFI_STATE_MASK; } if (abs_state <= MFI_STATE_FW_INIT) { dev_warn(&instance->pdev->dev, @@ -3822,17 +3865,60 @@ megasas_check_reset_fusion(struct megasas_instance *instance, return 0; } +/** + * megasas_trigger_snap_dump - Trigger snap dump in FW + * @instance: Soft instance of adapter + */ +static inline void megasas_trigger_snap_dump(struct megasas_instance *instance) +{ + int j; + u32 fw_state; + + if (!instance->disableOnlineCtrlReset) { + dev_info(&instance->pdev->dev, "Trigger snap dump\n"); + writel(MFI_ADP_TRIGGER_SNAP_DUMP, + &instance->reg_set->doorbell); + readl(&instance->reg_set->doorbell); + } + + for (j = 0; j < instance->snapdump_wait_time; j++) { + fw_state = instance->instancet->read_fw_status_reg(instance) & + MFI_STATE_MASK; + if (fw_state == MFI_STATE_FAULT) { + dev_err(&instance->pdev->dev, + "Found FW in FAULT state, after snap dump trigger\n"); + return; + } + msleep(1000); + } +} + /* This function waits for 
outstanding commands on fusion to complete */
int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
int reason, int *convert)
{
int i, outstanding, retval = 0, hb_seconds_missed = 0;
u32 fw_state;
+ u32 waittime_for_io_completion;
+
+ waittime_for_io_completion =
+ min_t(u32, resetwaittime,
+ (resetwaittime - instance->snapdump_wait_time));
- for (i = 0; i < resetwaittime; i++) {
+ if (reason == MFI_IO_TIMEOUT_OCR) {
+ dev_info(&instance->pdev->dev,
+ "MFI command timed out\n");
+ megasas_complete_cmd_dpc_fusion((unsigned long)instance);
+ if (instance->snapdump_wait_time)
+ megasas_trigger_snap_dump(instance);
+ retval = 1;
+ goto out;
+ }
+
+ for (i = 0; i < waittime_for_io_completion; i++) {
/* Check if firmware is in fault state */
- fw_state = instance->instancet->read_fw_status_reg(
- instance->reg_set) & MFI_STATE_MASK;
+ fw_state = instance->instancet->read_fw_status_reg(instance) &
+ MFI_STATE_MASK;
if (fw_state == MFI_STATE_FAULT) {
dev_warn(&instance->pdev->dev, "Found FW in FAULT state,"
" will reset adapter scsi%d.\n",
@@ -3850,13 +3936,6 @@ int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
goto out;
}
- if (reason == MFI_IO_TIMEOUT_OCR) {
- dev_info(&instance->pdev->dev,
- "MFI IO is timed out, initiating OCR\n");
- megasas_complete_cmd_dpc_fusion((unsigned long)instance);
- retval = 1;
- goto out;
- }
/* If SR-IOV VF mode & heartbeat timeout, don't wait */
if (instance->requestorId && !reason) {
@@ -3901,6 +3980,12 @@ int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
msleep(1000);
}
+ if (instance->snapdump_wait_time) {
+ megasas_trigger_snap_dump(instance);
+ retval = 1;
+ goto out;
+ }
+
if (atomic_read(&instance->fw_outstanding)) {
dev_err(&instance->pdev->dev, "pending commands remain after waiting, "
"will reset adapter scsi%d.\n",
@@ -3908,6 +3993,7 @@ int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
*convert = 1;
retval = 1;
}
+
out:
return retval;
}
@@ -4518,7 +4604,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
mutex_unlock(&instance->reset_mutex);
return FAILED;
}
- status_reg = instance->instancet->read_fw_status_reg(instance->reg_set);
+ status_reg = instance->instancet->read_fw_status_reg(instance);
abs_state = status_reg & MFI_STATE_MASK;
/* IO timeout detected, forcibly put FW in FAULT state */
@@ -4527,7 +4613,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
dev_info(&instance->pdev->dev, "IO/DCMD timeout is detected, "
"forcibly FAULT Firmware\n");
atomic_set(&instance->adprecovery, MEGASAS_ADPRESET_SM_INFAULT);
- status_reg = readl(&instance->reg_set->doorbell);
+ status_reg = megasas_readl(instance, &instance->reg_set->doorbell);
writel(status_reg | MFI_STATE_FORCE_OCR,
&instance->reg_set->doorbell);
readl(&instance->reg_set->doorbell);
@@ -4578,7 +4664,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
for (i = 0 ; i < instance->max_scsi_cmds; i++) {
cmd_fusion = fusion->cmd_list[i];
/*check for extra commands issued by driver*/
- if (instance->adapter_type == VENTURA_SERIES) {
+ if (instance->adapter_type >= VENTURA_SERIES) {
r1_cmd = fusion->cmd_list[i + instance->max_fw_cmds];
megasas_return_cmd_fusion(instance, r1_cmd);
}
@@ -4605,8 +4691,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
atomic_set(&instance->fw_outstanding, 0);
- status_reg = instance->instancet->read_fw_status_reg(
- instance->reg_set);
+ status_reg = instance->instancet->read_fw_status_reg(instance);
abs_state =
status_reg & MFI_STATE_MASK;
reset_adapter = status_reg & MFI_RESET_ADAPTER;
if (instance->disableOnlineCtrlReset ||
@@ -4677,7 +4762,7 @@ transition_to_ready:
megasas_setup_jbod_map(instance);
/* reset stream detection array */
- if (instance->adapter_type == VENTURA_SERIES) {
+ if (instance->adapter_type >= VENTURA_SERIES) {
for (j = 0; j < MAX_LOGICAL_DRIVES_EXT; ++j) {
memset(fusion->stream_detect_by_ld[j], 0,
sizeof(struct LD_STREAM_DETECT));
@@ -4721,6 +4806,13 @@ transition_to_ready:
megasas_set_crash_dump_params(instance,
MR_CRASH_BUF_TURN_OFF);
+ if (instance->snapdump_wait_time) {
+ megasas_get_snapdump_properties(instance);
+ dev_info(&instance->pdev->dev,
+ "Snap dump wait time\t: %d\n",
+ instance->snapdump_wait_time);
+ }
+
retval = SUCCESS;
/* Adapter reset completed successfully */
@@ -4752,16 +4844,15 @@ out:
return retval;
}
-/* Fusion Crash dump collection work queue */
-void megasas_fusion_crash_dump_wq(struct work_struct *work)
+/* Fusion Crash dump collection */
+void megasas_fusion_crash_dump(struct megasas_instance *instance)
{
- struct megasas_instance *instance =
- container_of(work, struct megasas_instance, crash_init);
u32 status_reg;
u8 partial_copy = 0;
+ int wait = 0;
- status_reg = instance->instancet->read_fw_status_reg(instance->reg_set);
+ status_reg = instance->instancet->read_fw_status_reg(instance);
/*
* Allocate host crash buffers to copy data from 1 MB DMA crash buffer
@@ -4777,8 +4868,8 @@ void megasas_fusion_crash_dump_wq(struct work_struct *work)
"crash dump and initiating OCR\n");
status_reg |= MFI_STATE_CRASH_DUMP_DONE;
writel(status_reg,
- &instance->reg_set->outbound_scratch_pad);
- readl(&instance->reg_set->outbound_scratch_pad);
+ &instance->reg_set->outbound_scratch_pad_0);
+ readl(&instance->reg_set->outbound_scratch_pad_0);
return;
}
megasas_alloc_host_crash_buffer(instance);
@@ -4786,21 +4877,41 @@ void megasas_fusion_crash_dump_wq(struct work_struct *work)
"allocated: %d\n", instance->drv_buf_alloc);
}
- /*
- * Driver has allocated max buffers, which can be allocated
- * and FW has more crash dump data, then driver will
- * ignore the data.
- */
- if (instance->drv_buf_index >= (instance->drv_buf_alloc)) {
- dev_info(&instance->pdev->dev, "Driver is done copying "
- "the buffer: %d\n", instance->drv_buf_alloc);
- status_reg |= MFI_STATE_CRASH_DUMP_DONE;
- partial_copy = 1;
- } else {
- memcpy(instance->crash_buf[instance->drv_buf_index],
- instance->crash_dump_buf, CRASH_DMA_BUF_SIZE);
- instance->drv_buf_index++;
- status_reg &= ~MFI_STATE_DMADONE;
+ while (!(status_reg & MFI_STATE_CRASH_DUMP_DONE) &&
+ (wait < MEGASAS_WATCHDOG_WAIT_COUNT)) {
+ if (!(status_reg & MFI_STATE_DMADONE)) {
+ /*
+ * Next crash dump buffer is not yet DMA'd by FW
+ * Check after 10ms. Wait for 1 second for FW to
+ * post the next buffer. If not, bail out.
+ */ + wait++; + msleep(MEGASAS_WAIT_FOR_NEXT_DMA_MSECS); + status_reg = instance->instancet->read_fw_status_reg( + instance); + continue; + } + + wait = 0; + if (instance->drv_buf_index >= instance->drv_buf_alloc) { + dev_info(&instance->pdev->dev, + "Driver is done copying the buffer: %d\n", + instance->drv_buf_alloc); + status_reg |= MFI_STATE_CRASH_DUMP_DONE; + partial_copy = 1; + break; + } else { + memcpy(instance->crash_buf[instance->drv_buf_index], + instance->crash_dump_buf, CRASH_DMA_BUF_SIZE); + instance->drv_buf_index++; + status_reg &= ~MFI_STATE_DMADONE; + } + + writel(status_reg, &instance->reg_set->outbound_scratch_pad_0); + readl(&instance->reg_set->outbound_scratch_pad_0); + + msleep(MEGASAS_WAIT_FOR_NEXT_DMA_MSECS); + status_reg = instance->instancet->read_fw_status_reg(instance); } if (status_reg & MFI_STATE_CRASH_DUMP_DONE) { @@ -4809,13 +4920,10 @@ void megasas_fusion_crash_dump_wq(struct work_struct *work) instance->fw_crash_buffer_size = instance->drv_buf_index; instance->fw_crash_state = AVAILABLE; instance->drv_buf_index = 0; - writel(status_reg, &instance->reg_set->outbound_scratch_pad); - readl(&instance->reg_set->outbound_scratch_pad); + writel(status_reg, &instance->reg_set->outbound_scratch_pad_0); + readl(&instance->reg_set->outbound_scratch_pad_0); if (!partial_copy) megasas_reset_fusion(instance->host, 0); - } else { - writel(status_reg, &instance->reg_set->outbound_scratch_pad); - readl(&instance->reg_set->outbound_scratch_pad); } } diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.h b/drivers/scsi/megaraid/megaraid_sas_fusion.h index 8e5ebee6517f..ca73c50fe723 100644 --- a/drivers/scsi/megaraid/megaraid_sas_fusion.h +++ b/drivers/scsi/megaraid/megaraid_sas_fusion.h @@ -2,7 +2,8 @@ * Linux MegaRAID driver for SAS based RAID controllers * * Copyright (c) 2009-2013 LSI Corporation - * Copyright (c) 2013-2014 Avago Technologies + * Copyright (c) 2013-2016 Avago Technologies + * Copyright (c) 2016-2018 Broadcom Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License @@ -19,16 +20,13 @@ * * FILE: megaraid_sas_fusion.h * - * Authors: Avago Technologies + * Authors: Broadcom Inc. 
* Manoj Jose * Sumant Patro - * Kashyap Desai - * Sumit Saxena + * Kashyap Desai + * Sumit Saxena * - * Send feedback to: megaraidlinux.pdl@avagotech.com - * - * Mail to: Avago Technologies, 350 West Trimble Road, Building 90, - * San Jose, California 95131 + * Send feedback to: megaraidlinux.pdl@broadcom.com */ #ifndef _MEGARAID_SAS_FUSION_H_ @@ -725,6 +723,7 @@ struct MPI2_IOC_INIT_REQUEST { #define MR_DCMD_CTRL_SHARED_HOST_MEM_ALLOC 0x010e8485 /* SR-IOV HB alloc*/ #define MR_DCMD_LD_VF_MAP_GET_ALL_LDS_111 0x03200200 #define MR_DCMD_LD_VF_MAP_GET_ALL_LDS 0x03150200 +#define MR_DCMD_CTRL_SNAPDUMP_GET_PROPERTIES 0x01200100 struct MR_DEV_HANDLE_INFO { __le16 curDevHdl; @@ -1063,6 +1062,9 @@ struct MR_FW_RAID_MAP_DYNAMIC { #define MPI26_IEEE_SGE_FLAGS_NSF_NVME_PRP (0x08) #define MPI26_IEEE_SGE_FLAGS_NSF_NVME_SGL (0x10) +#define MEGASAS_DEFAULT_SNAP_DUMP_WAIT_TIME 15 +#define MEGASAS_MAX_SNAP_DUMP_WAIT_TIME 60 + struct megasas_register_set; struct megasas_instance; @@ -1350,6 +1352,14 @@ enum CMD_RET_VALUES { RETURN_CMD = 3, }; +struct MR_SNAPDUMP_PROPERTIES { + u8 offload_num; + u8 max_num_supported; + u8 cur_num_supported; + u8 trigger_min_num_sec_before_ocr; + u8 reserved[12]; +}; + void megasas_free_cmds_fusion(struct megasas_instance *instance); int megasas_ioc_init_fusion(struct megasas_instance *instance); u8 megasas_get_map_info(struct megasas_instance *instance); diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c index ec6940f2fcb3..f3e182eb0970 100644 --- a/drivers/scsi/mesh.c +++ b/drivers/scsi/mesh.c @@ -1838,7 +1838,7 @@ static struct scsi_host_template mesh_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .max_segment_size = 65535, }; static int mesh_probe(struct macio_dev *mdev, const struct of_device_id *match) diff --git a/drivers/scsi/mpt3sas/mpi/mpi2.h b/drivers/scsi/mpt3sas/mpi/mpi2.h index 1e45268a78fc..7efd17a3c25b 100644 --- a/drivers/scsi/mpt3sas/mpi/mpi2.h +++ b/drivers/scsi/mpt3sas/mpi/mpi2.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* - * Copyright 2000-2015 Avago Technologies. All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2.h @@ -9,7 +9,7 @@ * scatter/gather formats. * Creation Date: June 21, 2006 * - * mpi2.h Version: 02.00.50 + * mpi2.h Version: 02.00.53 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used @@ -116,7 +116,12 @@ * 02-03-17 02.00.48 Bumped MPI2_HEADER_VERSION_UNIT. * 06-13-17 02.00.49 Bumped MPI2_HEADER_VERSION_UNIT. * 09-29-17 02.00.50 Bumped MPI2_HEADER_VERSION_UNIT. - * -------------------------------------------------------------------------- + * 07-22-18 02.00.51 Added SECURE_BOOT define. + * Bumped MPI2_HEADER_VERSION_UNIT + * 08-15-18 02.00.52 Bumped MPI2_HEADER_VERSION_UNIT. + * 08-28-18 02.00.53 Bumped MPI2_HEADER_VERSION_UNIT. 
+ * Added MPI2_IOCSTATUS_FAILURE + * -------------------------------------------------------------------------- */ #ifndef MPI2_H @@ -156,7 +161,7 @@ /* Unit and Dev versioning for this MPI header set */ -#define MPI2_HEADER_VERSION_UNIT (0x32) +#define MPI2_HEADER_VERSION_UNIT (0x35) #define MPI2_HEADER_VERSION_DEV (0x00) #define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00) #define MPI2_HEADER_VERSION_UNIT_SHIFT (8) @@ -257,6 +262,8 @@ typedef volatile struct _MPI2_SYSTEM_INTERFACE_REGS { */ #define MPI2_HOST_DIAGNOSTIC_OFFSET (0x00000008) +#define MPI26_DIAG_SECURE_BOOT (0x80000000) + #define MPI2_DIAG_SBR_RELOAD (0x00002000) #define MPI2_DIAG_BOOT_DEVICE_SELECT_MASK (0x00001800) @@ -687,7 +694,9 @@ typedef union _MPI2_REPLY_DESCRIPTORS_UNION { #define MPI2_IOCSTATUS_INVALID_FIELD (0x0007) #define MPI2_IOCSTATUS_INVALID_STATE (0x0008) #define MPI2_IOCSTATUS_OP_STATE_NOT_SUPPORTED (0x0009) +/*MPI v2.6 and later */ #define MPI2_IOCSTATUS_INSUFFICIENT_POWER (0x000A) +#define MPI2_IOCSTATUS_FAILURE (0x000F) /**************************************************************************** * Config IOCStatus values diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h b/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h index 5122920a961a..398fa6fde960 100644 --- a/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h +++ b/drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h @@ -1,13 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* - * Copyright 2000-2015 Avago Technologies. All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_cnfg.h * Title: MPI Configuration messages and pages * Creation Date: November 10, 2006 * - * mpi2_cnfg.h Version: 02.00.42 + * mpi2_cnfg.h Version: 02.00.46 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used @@ -231,6 +231,18 @@ * Added NOIOB field to PCIe Device Page 2. * Added MPI26_PCIEDEV2_CAP_DATA_BLK_ALIGN_AND_GRAN to * the Capabilities field of PCIe Device Page 2. + * 07-22-18 02.00.43 Added defines for SAS3916 and SAS3816. + * Added WRiteCache defines to IO Unit Page 1. + * Added MaxEnclosureLevel to BIOS Page 1. + * Added OEMRD to SAS Enclosure Page 1. + * Added DMDReportPCIe to PCIe IO Unit Page 1. + * Added Flags field and flags for Retimers to + * PCIe Switch Page 1. + * 08-02-18 02.00.44 Added Slotx2, Slotx4 to ManPage 7. 
+ * 08-15-18 02.00.45 Added ProductSpecific field at end of IOC Page 1 + * 08-28-18 02.00.46 Added NVMs Write Cache flag to IOUnitPage1 + * Added DMDReport Delay Time defines to + * PCIeIOUnitPage1 * -------------------------------------------------------------------------- */ @@ -568,8 +580,17 @@ typedef struct _MPI2_CONFIG_REPLY { #define MPI26_MFGPAGE_DEVID_SAS3616 (0x00D1) #define MPI26_MFGPAGE_DEVID_SAS3708 (0x00D2) -#define MPI26_MFGPAGE_DEVID_SAS3816 (0x00A1) -#define MPI26_MFGPAGE_DEVID_SAS3916 (0x00A0) +#define MPI26_MFGPAGE_DEVID_SEC_MASK_3916 (0x0003) +#define MPI26_MFGPAGE_DEVID_INVALID0_3916 (0x00E0) +#define MPI26_MFGPAGE_DEVID_CFG_SEC_3916 (0x00E1) +#define MPI26_MFGPAGE_DEVID_HARD_SEC_3916 (0x00E2) +#define MPI26_MFGPAGE_DEVID_INVALID1_3916 (0x00E3) + +#define MPI26_MFGPAGE_DEVID_SEC_MASK_3816 (0x0003) +#define MPI26_MFGPAGE_DEVID_INVALID0_3816 (0x00E4) +#define MPI26_MFGPAGE_DEVID_CFG_SEC_3816 (0x00E5) +#define MPI26_MFGPAGE_DEVID_HARD_SEC_3816 (0x00E6) +#define MPI26_MFGPAGE_DEVID_INVALID1_3816 (0x00E7) /*Manufacturing Page 0 */ @@ -932,7 +953,11 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_1 { #define MPI2_IOUNITPAGE1_PAGEVERSION (0x04) -/*IO Unit Page 1 Flags defines */ +/* IO Unit Page 1 Flags defines */ +#define MPI26_IOUNITPAGE1_NVME_WRCACHE_MASK (0x00030000) +#define MPI26_IOUNITPAGE1_NVME_WRCACHE_ENABLE (0x00000000) +#define MPI26_IOUNITPAGE1_NVME_WRCACHE_DISABLE (0x00010000) +#define MPI26_IOUNITPAGE1_NVME_WRCACHE_NO_CHANGE (0x00020000) #define MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK (0x00004000) #define MPI25_IOUNITPAGE1_NEW_DEVICE_FAST_PATH_DISABLE (0x00002000) #define MPI25_IOUNITPAGE1_DISABLE_FAST_PATH (0x00001000) @@ -1511,7 +1536,7 @@ typedef struct _MPI2_CONFIG_PAGE_BIOS_1 { U32 BiosOptions; /*0x04 */ U32 IOCSettings; /*0x08 */ U8 SSUTimeout; /*0x0C */ - U8 Reserved1; /*0x0D */ + U8 MaxEnclosureLevel; /*0x0D */ U16 Reserved2; /*0x0E */ U32 DeviceSettings; /*0x10 */ U16 NumberOfDevices; /*0x14 */ @@ -1530,7 +1555,6 @@ typedef struct _MPI2_CONFIG_PAGE_BIOS_1 { #define MPI2_BIOSPAGE1_OPTIONS_BOOT_LIST_ADD_ALT_BOOT_DEVICE (0x00008000) #define MPI2_BIOSPAGE1_OPTIONS_ADVANCED_CONFIG (0x00004000) -#define MPI2_BIOSPAGE1_OPTIONS_PNS_MASK (0x00003800) #define MPI2_BIOSPAGE1_OPTIONS_PNS_MASK (0x00003800) #define MPI2_BIOSPAGE1_OPTIONS_PNS_PBDHL (0x00000000) #define MPI2_BIOSPAGE1_OPTIONS_PNS_ENCSLOSURE (0x00000800) @@ -3271,10 +3295,12 @@ typedef struct _MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0 { U16 NumSlots; /*0x18 */ U16 StartSlot; /*0x1A */ U8 ChassisSlot; /*0x1C */ - U8 EnclosureLeve; /*0x1D */ + U8 EnclosureLevel; /*0x1D */ U16 SEPDevHandle; /*0x1E */ - U32 Reserved3; /*0x20 */ - U32 Reserved4; /*0x24 */ + U8 OEMRD; /*0x20 */ + U8 Reserved1a; /*0x21 */ + U16 Reserved2; /*0x22 */ + U32 Reserved3; /*0x24 */ } MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0, *PTR_MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0, Mpi2SasEnclosurePage0_t, *pMpi2SasEnclosurePage0_t, @@ -3285,6 +3311,8 @@ typedef struct _MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0 { #define MPI2_SASENCLOSURE0_PAGEVERSION (0x04) /*values for SAS Enclosure Page 0 Flags field */ +#define MPI26_SAS_ENCLS0_FLAGS_OEMRD_VALID (0x0080) +#define MPI26_SAS_ENCLS0_FLAGS_OEMRD_COLLECTING (0x0040) #define MPI2_SAS_ENCLS0_FLAGS_CHASSIS_SLOT_VALID (0x0020) #define MPI2_SAS_ENCLS0_FLAGS_ENCL_LEVEL_VALID (0x0010) #define MPI2_SAS_ENCLS0_FLAGS_MNG_MASK (0x000F) @@ -3298,6 +3326,8 @@ typedef struct _MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0 { #define MPI26_ENCLOSURE0_PAGEVERSION (0x04) /*Values for Enclosure Page 0 Flags field */ +#define MPI26_ENCLS0_FLAGS_OEMRD_VALID 
(0x0080) +#define MPI26_ENCLS0_FLAGS_OEMRD_COLLECTING (0x0040) #define MPI26_ENCLS0_FLAGS_CHASSIS_SLOT_VALID (0x0020) #define MPI26_ENCLS0_FLAGS_ENCL_LEVEL_VALID (0x0010) #define MPI26_ENCLS0_FLAGS_MNG_MASK (0x000F) @@ -3696,8 +3726,9 @@ typedef struct _MPI26_PCIE_IO_UNIT1_PHY_DATA { Mpi26PCIeIOUnit1PhyData_t, *pMpi26PCIeIOUnit1PhyData_t; /*values for LinkFlags */ -#define MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SRIS (0x00) -#define MPI26_PCIEIOUNIT1_LINKFLAGS_EN_SRIS (0x01) +#define MPI26_PCIEIOUNIT1_LINKFLAGS_DIS_SEPARATE_REFCLK (0x00) +#define MPI26_PCIEIOUNIT1_LINKFLAGS_SRIS_EN (0x01) +#define MPI26_PCIEIOUNIT1_LINKFLAGS_SRNS_EN (0x02) /* *Host code (drivers, BIOS, utilities, etc.) should leave this define set to @@ -3714,7 +3745,7 @@ typedef struct _MPI26_CONFIG_PAGE_PIOUNIT_1 { U16 AdditionalControlFlags; /*0x0C */ U16 NVMeMaxQueueDepth; /*0x0E */ U8 NumPhys; /*0x10 */ - U8 Reserved1; /*0x11 */ + U8 DMDReportPCIe; /*0x11 */ U16 Reserved2; /*0x12 */ MPI26_PCIE_IO_UNIT1_PHY_DATA PhyData[MPI26_PCIE_IOUNIT1_PHY_MAX];/*0x14 */ @@ -3736,6 +3767,12 @@ typedef struct _MPI26_CONFIG_PAGE_PIOUNIT_1 { #define MPI26_PCIEIOUNIT1_MAX_RATE_8_0 (0x40) #define MPI26_PCIEIOUNIT1_MAX_RATE_16_0 (0x50) +/*values for PCIe IO Unit Page 1 DMDReportPCIe */ +#define MPI26_PCIEIOUNIT1_DMDRPT_UNIT_MASK (0x80) +#define MPI26_PCIEIOUNIT1_DMDRPT_UNIT_1_SEC (0x00) +#define MPI26_PCIEIOUNIT1_DMDRPT_UNIT_16_SEC (0x80) +#define MPI26_PCIEIOUNIT1_DMDRPT_DELAY_TIME_MASK (0x7F) + /*see mpi2_pci.h for values for PCIe IO Unit Page 0 ControllerPhyDeviceInfo *values */ @@ -3788,6 +3825,9 @@ typedef struct _MPI26_CONFIG_PAGE_PSWITCH_1 { /*use MPI26_PCIE_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */ +/* defines for the Flags field */ +#define MPI26_PCIESWITCH1_2_RETIMER_PRESENCE (0x0002) +#define MPI26_PCIESWITCH1_RETIMER_PRESENCE (0x0001) /**************************************************************************** * PCIe Device Config Pages (MPI v2.6 and later) @@ -3849,19 +3889,21 @@ typedef struct _MPI26_CONFIG_PAGE_PCIEDEV_0 { *field */ -/*values for PCIe Device Page 0 Flags field */ -#define MPI26_PCIEDEV0_FLAGS_UNAUTHORIZED_DEVICE (0x8000) -#define MPI26_PCIEDEV0_FLAGS_ENABLED_FAST_PATH (0x4000) -#define MPI26_PCIEDEV0_FLAGS_FAST_PATH_CAPABLE (0x2000) -#define MPI26_PCIEDEV0_FLAGS_ASYNCHRONOUS_NOTIFICATION (0x0400) -#define MPI26_PCIEDEV0_FLAGS_ATA_SW_PRESERVATION (0x0200) -#define MPI26_PCIEDEV0_FLAGS_UNSUPPORTED_DEVICE (0x0100) -#define MPI26_PCIEDEV0_FLAGS_ATA_48BIT_LBA_SUPPORTED (0x0080) -#define MPI26_PCIEDEV0_FLAGS_ATA_SMART_SUPPORTED (0x0040) -#define MPI26_PCIEDEV0_FLAGS_ATA_NCQ_SUPPORTED (0x0020) -#define MPI26_PCIEDEV0_FLAGS_ATA_FUA_SUPPORTED (0x0010) -#define MPI26_PCIEDEV0_FLAGS_ENCL_LEVEL_VALID (0x0002) -#define MPI26_PCIEDEV0_FLAGS_DEVICE_PRESENT (0x0001) +/*values for PCIe Device Page 0 Flags field*/ +#define MPI26_PCIEDEV0_FLAGS_2_RETIMER_PRESENCE (0x00020000) +#define MPI26_PCIEDEV0_FLAGS_RETIMER_PRESENCE (0x00010000) +#define MPI26_PCIEDEV0_FLAGS_UNAUTHORIZED_DEVICE (0x00008000) +#define MPI26_PCIEDEV0_FLAGS_ENABLED_FAST_PATH (0x00004000) +#define MPI26_PCIEDEV0_FLAGS_FAST_PATH_CAPABLE (0x00002000) +#define MPI26_PCIEDEV0_FLAGS_ASYNCHRONOUS_NOTIFICATION (0x00000400) +#define MPI26_PCIEDEV0_FLAGS_ATA_SW_PRESERVATION (0x00000200) +#define MPI26_PCIEDEV0_FLAGS_UNSUPPORTED_DEVICE (0x00000100) +#define MPI26_PCIEDEV0_FLAGS_ATA_48BIT_LBA_SUPPORTED (0x00000080) +#define MPI26_PCIEDEV0_FLAGS_ATA_SMART_SUPPORTED (0x00000040) +#define MPI26_PCIEDEV0_FLAGS_ATA_NCQ_SUPPORTED (0x00000020) +#define 
MPI26_PCIEDEV0_FLAGS_ATA_FUA_SUPPORTED (0x00000010) +#define MPI26_PCIEDEV0_FLAGS_ENCL_LEVEL_VALID (0x00000002) +#define MPI26_PCIEDEV0_FLAGS_DEVICE_PRESENT (0x00000001) /* values for PCIe Device Page 0 SupportedLinkRates field */ #define MPI26_PCIEDEV0_LINK_RATE_16_0_SUPPORTED (0x08) diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_image.h b/drivers/scsi/mpt3sas/mpi/mpi2_image.h new file mode 100644 index 000000000000..4959585f029d --- /dev/null +++ b/drivers/scsi/mpt3sas/mpi/mpi2_image.h @@ -0,0 +1,506 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2016-2020 Broadcom Limited. All rights reserved. + * + * Name: mpi2_image.h + * Description: Contains definitions for firmware and other component images + * Creation Date: 04/02/2018 + * Version: 02.06.03 + * + * + * Version History + * --------------- + * + * Date Version Description + * -------- -------- ------------------------------------------------------ + * 08-01-18 02.06.00 Initial version for MPI 2.6.5. + * 08-14-18 02.06.01 Corrected define for MPI26_IMAGE_HEADER_SIGNATURE0_MPI26 + * 08-28-18 02.06.02 Added MPI2_EXT_IMAGE_TYPE_RDE + * 09-07-18 02.06.03 Added MPI26_EVENT_PCIE_TOPO_PI_16_LANES + */ +#ifndef MPI2_IMAGE_H +#define MPI2_IMAGE_H + + +/*FW Image Header */ +typedef struct _MPI2_FW_IMAGE_HEADER { + U32 Signature; /*0x00 */ + U32 Signature0; /*0x04 */ + U32 Signature1; /*0x08 */ + U32 Signature2; /*0x0C */ + MPI2_VERSION_UNION MPIVersion; /*0x10 */ + MPI2_VERSION_UNION FWVersion; /*0x14 */ + MPI2_VERSION_UNION NVDATAVersion; /*0x18 */ + MPI2_VERSION_UNION PackageVersion; /*0x1C */ + U16 VendorID; /*0x20 */ + U16 ProductID; /*0x22 */ + U16 ProtocolFlags; /*0x24 */ + U16 Reserved26; /*0x26 */ + U32 IOCCapabilities; /*0x28 */ + U32 ImageSize; /*0x2C */ + U32 NextImageHeaderOffset; /*0x30 */ + U32 Checksum; /*0x34 */ + U32 Reserved38; /*0x38 */ + U32 Reserved3C; /*0x3C */ + U32 Reserved40; /*0x40 */ + U32 Reserved44; /*0x44 */ + U32 Reserved48; /*0x48 */ + U32 Reserved4C; /*0x4C */ + U32 Reserved50; /*0x50 */ + U32 Reserved54; /*0x54 */ + U32 Reserved58; /*0x58 */ + U32 Reserved5C; /*0x5C */ + U32 BootFlags; /*0x60 */ + U32 FirmwareVersionNameWhat; /*0x64 */ + U8 FirmwareVersionName[32]; /*0x68 */ + U32 VendorNameWhat; /*0x88 */ + U8 VendorName[32]; /*0x8C */ + U32 PackageNameWhat; /*0x88 */ + U8 PackageName[32]; /*0x8C */ + U32 ReservedD0; /*0xD0 */ + U32 ReservedD4; /*0xD4 */ + U32 ReservedD8; /*0xD8 */ + U32 ReservedDC; /*0xDC */ + U32 ReservedE0; /*0xE0 */ + U32 ReservedE4; /*0xE4 */ + U32 ReservedE8; /*0xE8 */ + U32 ReservedEC; /*0xEC */ + U32 ReservedF0; /*0xF0 */ + U32 ReservedF4; /*0xF4 */ + U32 ReservedF8; /*0xF8 */ + U32 ReservedFC; /*0xFC */ +} MPI2_FW_IMAGE_HEADER, *PTR_MPI2_FW_IMAGE_HEADER, + Mpi2FWImageHeader_t, *pMpi2FWImageHeader_t; + +/*Signature field */ +#define MPI2_FW_HEADER_SIGNATURE_OFFSET (0x00) +#define MPI2_FW_HEADER_SIGNATURE_MASK (0xFF000000) +#define MPI2_FW_HEADER_SIGNATURE (0xEA000000) +#define MPI26_FW_HEADER_SIGNATURE (0xEB000000) + +/*Signature0 field */ +#define MPI2_FW_HEADER_SIGNATURE0_OFFSET (0x04) +#define MPI2_FW_HEADER_SIGNATURE0 (0x5AFAA55A) +/*Last byte is defined by architecture */ +#define MPI26_FW_HEADER_SIGNATURE0_BASE (0x5AEAA500) +#define MPI26_FW_HEADER_SIGNATURE0_ARC_0 (0x5A) +#define MPI26_FW_HEADER_SIGNATURE0_ARC_1 (0x00) +#define MPI26_FW_HEADER_SIGNATURE0_ARC_2 (0x01) +/*legacy (0x5AEAA55A) */ +#define MPI26_FW_HEADER_SIGNATURE0_ARC_3 (0x02) +#define MPI26_FW_HEADER_SIGNATURE0 \ + (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_0) +#define 
MPI26_FW_HEADER_SIGNATURE0_3516 \ + (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_1) +#define MPI26_FW_HEADER_SIGNATURE0_4008 \ + (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_3) + +/*Signature1 field */ +#define MPI2_FW_HEADER_SIGNATURE1_OFFSET (0x08) +#define MPI2_FW_HEADER_SIGNATURE1 (0xA55AFAA5) +#define MPI26_FW_HEADER_SIGNATURE1 (0xA55AEAA5) + +/*Signature2 field */ +#define MPI2_FW_HEADER_SIGNATURE2_OFFSET (0x0C) +#define MPI2_FW_HEADER_SIGNATURE2 (0x5AA55AFA) +#define MPI26_FW_HEADER_SIGNATURE2 (0x5AA55AEA) + +/*defines for using the ProductID field */ +#define MPI2_FW_HEADER_PID_TYPE_MASK (0xF000) +#define MPI2_FW_HEADER_PID_TYPE_SAS (0x2000) + +#define MPI2_FW_HEADER_PID_PROD_MASK (0x0F00) +#define MPI2_FW_HEADER_PID_PROD_A (0x0000) +#define MPI2_FW_HEADER_PID_PROD_TARGET_INITIATOR_SCSI (0x0200) +#define MPI2_FW_HEADER_PID_PROD_IR_SCSI (0x0700) + +#define MPI2_FW_HEADER_PID_FAMILY_MASK (0x00FF) +/*SAS ProductID Family bits */ +#define MPI2_FW_HEADER_PID_FAMILY_2108_SAS (0x0013) +#define MPI2_FW_HEADER_PID_FAMILY_2208_SAS (0x0014) +#define MPI25_FW_HEADER_PID_FAMILY_3108_SAS (0x0021) +#define MPI26_FW_HEADER_PID_FAMILY_3324_SAS (0x0028) +#define MPI26_FW_HEADER_PID_FAMILY_3516_SAS (0x0031) + +/*use MPI2_IOCFACTS_PROTOCOL_ defines for ProtocolFlags field */ + +/*use MPI2_IOCFACTS_CAPABILITY_ defines for IOCCapabilities field */ + +#define MPI2_FW_HEADER_IMAGESIZE_OFFSET (0x2C) +#define MPI2_FW_HEADER_NEXTIMAGE_OFFSET (0x30) + +#define MPI26_FW_HEADER_BOOTFLAGS_OFFSET (0x60) +#define MPI2_FW_HEADER_BOOTFLAGS_ISSI32M_FLAG (0x00000001) +#define MPI2_FW_HEADER_BOOTFLAGS_W25Q256JW_FLAG (0x00000002) +/*This image has a auto-discovery version of SPI */ +#define MPI2_FW_HEADER_BOOTFLAGS_AUTO_SPI_FLAG (0x00000004) + + +#define MPI2_FW_HEADER_VERNMHWAT_OFFSET (0x64) + +#define MPI2_FW_HEADER_WHAT_SIGNATURE (0x29232840) + +#define MPI2_FW_HEADER_SIZE (0x100) + + +/**************************************************************************** + * Component Image Format and related defines * + ****************************************************************************/ + +/*Maximum number of Hash Exclusion entries in a Component Image Header */ +#define MPI26_COMP_IMG_HDR_NUM_HASH_EXCL (4) + +/*Hash Exclusion Format */ +typedef struct _MPI26_HASH_EXCLUSION_FORMAT { + U32 Offset; /*0x00 */ + U32 Size; /*0x04 */ +} MPI26_HASH_EXCLUSION_FORMAT, + *PTR_MPI26_HASH_EXCLUSION_FORMAT, + Mpi26HashSxclusionFormat_t, + *pMpi26HashExclusionFormat_t; + +/*FW Image Header */ +typedef struct _MPI26_COMPONENT_IMAGE_HEADER { + U32 Signature0; /*0x00 */ + U32 LoadAddress; /*0x04 */ + U32 DataSize; /*0x08 */ + U32 StartAddress; /*0x0C */ + U32 Signature1; /*0x10 */ + U32 FlashOffset; /*0x14 */ + U32 FlashSize; /*0x18 */ + U32 VersionStringOffset; /*0x1C */ + U32 BuildDateStringOffset; /*0x20 */ + U32 BuildTimeStringOffset; /*0x24 */ + U32 EnvironmentVariableOffset; /*0x28 */ + U32 ApplicationSpecific; /*0x2C */ + U32 Signature2; /*0x30 */ + U32 HeaderSize; /*0x34 */ + U32 Crc; /*0x38 */ + U8 NotFlashImage; /*0x3C */ + U8 Compressed; /*0x3D */ + U16 Reserved3E; /*0x3E */ + U32 SecondaryFlashOffset; /*0x40 */ + U32 Reserved44; /*0x44 */ + U32 Reserved48; /*0x48 */ + MPI2_VERSION_UNION RMCInterfaceVersion; /*0x4C */ + MPI2_VERSION_UNION Reserved50; /*0x50 */ + MPI2_VERSION_UNION FWVersion; /*0x54 */ + MPI2_VERSION_UNION NvdataVersion; /*0x58 */ + MPI26_HASH_EXCLUSION_FORMAT + HashExclusion[MPI26_COMP_IMG_HDR_NUM_HASH_EXCL];/*0x5C */ + U32 NextImageHeaderOffset; /*0x7C */ + U32 
Reserved80[32]; /*0x80 -- 0xFC */ +} MPI26_COMPONENT_IMAGE_HEADER, + *PTR_MPI26_COMPONENT_IMAGE_HEADER, + Mpi26ComponentImageHeader_t, + *pMpi26ComponentImageHeader_t; + + +/**** Definitions for Signature0 field ****/ +#define MPI26_IMAGE_HEADER_SIGNATURE0_MPI26 (0xEB000042) + +/**** Definitions for Signature1 field ****/ +#define MPI26_IMAGE_HEADER_SIGNATURE1_APPLICATION (0x20505041) +#define MPI26_IMAGE_HEADER_SIGNATURE1_CBB (0x20424243) +#define MPI26_IMAGE_HEADER_SIGNATURE1_MFG (0x2047464D) +#define MPI26_IMAGE_HEADER_SIGNATURE1_BIOS (0x534F4942) +#define MPI26_IMAGE_HEADER_SIGNATURE1_HIIM (0x4D494948) +#define MPI26_IMAGE_HEADER_SIGNATURE1_HIIA (0x41494948) +#define MPI26_IMAGE_HEADER_SIGNATURE1_CPLD (0x444C5043) +#define MPI26_IMAGE_HEADER_SIGNATURE1_SPD (0x20445053) +#define MPI26_IMAGE_HEADER_SIGNATURE1_NVDATA (0x5444564E) +#define MPI26_IMAGE_HEADER_SIGNATURE1_GAS_GAUGE (0x20534147) +#define MPI26_IMAGE_HEADER_SIGNATURE1_PBLP (0x50424C50) + +/**** Definitions for Signature2 field ****/ +#define MPI26_IMAGE_HEADER_SIGNATURE2_VALUE (0x50584546) + +/**** Offsets for Image Header Fields ****/ +#define MPI26_IMAGE_HEADER_SIGNATURE0_OFFSET (0x00) +#define MPI26_IMAGE_HEADER_LOAD_ADDRESS_OFFSET (0x04) +#define MPI26_IMAGE_HEADER_DATA_SIZE_OFFSET (0x08) +#define MPI26_IMAGE_HEADER_START_ADDRESS_OFFSET (0x0C) +#define MPI26_IMAGE_HEADER_SIGNATURE1_OFFSET (0x10) +#define MPI26_IMAGE_HEADER_FLASH_OFFSET_OFFSET (0x14) +#define MPI26_IMAGE_HEADER_FLASH_SIZE_OFFSET (0x18) +#define MPI26_IMAGE_HEADER_VERSION_STRING_OFFSET_OFFSET (0x1C) +#define MPI26_IMAGE_HEADER_BUILD_DATE_STRING_OFFSET_OFFSET (0x20) +#define MPI26_IMAGE_HEADER_BUILD_TIME_OFFSET_OFFSET (0x24) +#define MPI26_IMAGE_HEADER_ENVIROMENT_VAR_OFFSET_OFFSET (0x28) +#define MPI26_IMAGE_HEADER_APPLICATION_SPECIFIC_OFFSET (0x2C) +#define MPI26_IMAGE_HEADER_SIGNATURE2_OFFSET (0x30) +#define MPI26_IMAGE_HEADER_HEADER_SIZE_OFFSET (0x34) +#define MPI26_IMAGE_HEADER_CRC_OFFSET (0x38) +#define MPI26_IMAGE_HEADER_NOT_FLASH_IMAGE_OFFSET (0x3C) +#define MPI26_IMAGE_HEADER_COMPRESSED_OFFSET (0x3D) +#define MPI26_IMAGE_HEADER_SECONDARY_FLASH_OFFSET_OFFSET (0x40) +#define MPI26_IMAGE_HEADER_RMC_INTERFACE_VER_OFFSET (0x4C) +#define MPI26_IMAGE_HEADER_COMPONENT_IMAGE_VER_OFFSET (0x54) +#define MPI26_IMAGE_HEADER_HASH_EXCLUSION_OFFSET (0x5C) +#define MPI26_IMAGE_HEADER_NEXT_IMAGE_HEADER_OFFSET_OFFSET (0x7C) + + +#define MPI26_IMAGE_HEADER_SIZE (0x100) + + +/*Extended Image Header */ +typedef struct _MPI2_EXT_IMAGE_HEADER { + U8 ImageType; /*0x00 */ + U8 Reserved1; /*0x01 */ + U16 Reserved2; /*0x02 */ + U32 Checksum; /*0x04 */ + U32 ImageSize; /*0x08 */ + U32 NextImageHeaderOffset; /*0x0C */ + U32 PackageVersion; /*0x10 */ + U32 Reserved3; /*0x14 */ + U32 Reserved4; /*0x18 */ + U32 Reserved5; /*0x1C */ + U8 IdentifyString[32]; /*0x20 */ +} MPI2_EXT_IMAGE_HEADER, *PTR_MPI2_EXT_IMAGE_HEADER, + Mpi2ExtImageHeader_t, *pMpi2ExtImageHeader_t; + +/*useful offsets */ +#define MPI2_EXT_IMAGE_IMAGETYPE_OFFSET (0x00) +#define MPI2_EXT_IMAGE_IMAGESIZE_OFFSET (0x08) +#define MPI2_EXT_IMAGE_NEXTIMAGE_OFFSET (0x0C) +#define MPI2_EXT_IMAGE_PACKAGEVERSION_OFFSET (0x10) + +#define MPI2_EXT_IMAGE_HEADER_SIZE (0x40) + +/*defines for the ImageType field */ +#define MPI2_EXT_IMAGE_TYPE_UNSPECIFIED (0x00) +#define MPI2_EXT_IMAGE_TYPE_FW (0x01) +#define MPI2_EXT_IMAGE_TYPE_NVDATA (0x03) +#define MPI2_EXT_IMAGE_TYPE_BOOTLOADER (0x04) +#define MPI2_EXT_IMAGE_TYPE_INITIALIZATION (0x05) +#define MPI2_EXT_IMAGE_TYPE_FLASH_LAYOUT (0x06) +#define MPI2_EXT_IMAGE_TYPE_SUPPORTED_DEVICES 
(0x07) +#define MPI2_EXT_IMAGE_TYPE_MEGARAID (0x08) +#define MPI2_EXT_IMAGE_TYPE_ENCRYPTED_HASH (0x09) +#define MPI2_EXT_IMAGE_TYPE_RDE (0x0A) +#define MPI2_EXT_IMAGE_TYPE_MIN_PRODUCT_SPECIFIC (0x80) +#define MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC (0xFF) + +#define MPI2_EXT_IMAGE_TYPE_MAX (MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC) + +/*FLASH Layout Extended Image Data */ + +/* + *Host code (drivers, BIOS, utilities, etc.) should leave this define set to + *one and check RegionsPerLayout at runtime. + */ +#ifndef MPI2_FLASH_NUMBER_OF_REGIONS +#define MPI2_FLASH_NUMBER_OF_REGIONS (1) +#endif + +/* + *Host code (drivers, BIOS, utilities, etc.) should leave this define set to + *one and check NumberOfLayouts at runtime. + */ +#ifndef MPI2_FLASH_NUMBER_OF_LAYOUTS +#define MPI2_FLASH_NUMBER_OF_LAYOUTS (1) +#endif + +typedef struct _MPI2_FLASH_REGION { + U8 RegionType; /*0x00 */ + U8 Reserved1; /*0x01 */ + U16 Reserved2; /*0x02 */ + U32 RegionOffset; /*0x04 */ + U32 RegionSize; /*0x08 */ + U32 Reserved3; /*0x0C */ +} MPI2_FLASH_REGION, *PTR_MPI2_FLASH_REGION, + Mpi2FlashRegion_t, *pMpi2FlashRegion_t; + +typedef struct _MPI2_FLASH_LAYOUT { + U32 FlashSize; /*0x00 */ + U32 Reserved1; /*0x04 */ + U32 Reserved2; /*0x08 */ + U32 Reserved3; /*0x0C */ + MPI2_FLASH_REGION Region[MPI2_FLASH_NUMBER_OF_REGIONS]; /*0x10 */ +} MPI2_FLASH_LAYOUT, *PTR_MPI2_FLASH_LAYOUT, + Mpi2FlashLayout_t, *pMpi2FlashLayout_t; + +typedef struct _MPI2_FLASH_LAYOUT_DATA { + U8 ImageRevision; /*0x00 */ + U8 Reserved1; /*0x01 */ + U8 SizeOfRegion; /*0x02 */ + U8 Reserved2; /*0x03 */ + U16 NumberOfLayouts; /*0x04 */ + U16 RegionsPerLayout; /*0x06 */ + U16 MinimumSectorAlignment; /*0x08 */ + U16 Reserved3; /*0x0A */ + U32 Reserved4; /*0x0C */ + MPI2_FLASH_LAYOUT Layout[MPI2_FLASH_NUMBER_OF_LAYOUTS]; /*0x10 */ +} MPI2_FLASH_LAYOUT_DATA, *PTR_MPI2_FLASH_LAYOUT_DATA, + Mpi2FlashLayoutData_t, *pMpi2FlashLayoutData_t; + +/*defines for the RegionType field */ +#define MPI2_FLASH_REGION_UNUSED (0x00) +#define MPI2_FLASH_REGION_FIRMWARE (0x01) +#define MPI2_FLASH_REGION_BIOS (0x02) +#define MPI2_FLASH_REGION_NVDATA (0x03) +#define MPI2_FLASH_REGION_FIRMWARE_BACKUP (0x05) +#define MPI2_FLASH_REGION_MFG_INFORMATION (0x06) +#define MPI2_FLASH_REGION_CONFIG_1 (0x07) +#define MPI2_FLASH_REGION_CONFIG_2 (0x08) +#define MPI2_FLASH_REGION_MEGARAID (0x09) +#define MPI2_FLASH_REGION_COMMON_BOOT_BLOCK (0x0A) +#define MPI2_FLASH_REGION_INIT (MPI2_FLASH_REGION_COMMON_BOOT_BLOCK) +#define MPI2_FLASH_REGION_CBB_BACKUP (0x0D) +#define MPI2_FLASH_REGION_SBR (0x0E) +#define MPI2_FLASH_REGION_SBR_BACKUP (0x0F) +#define MPI2_FLASH_REGION_HIIM (0x10) +#define MPI2_FLASH_REGION_HIIA (0x11) +#define MPI2_FLASH_REGION_CTLR (0x12) +#define MPI2_FLASH_REGION_IMR_FIRMWARE (0x13) +#define MPI2_FLASH_REGION_MR_NVDATA (0x14) +#define MPI2_FLASH_REGION_CPLD (0x15) +#define MPI2_FLASH_REGION_PSOC (0x16) + +/*ImageRevision */ +#define MPI2_FLASH_LAYOUT_IMAGE_REVISION (0x00) + +/*Supported Devices Extended Image Data */ + +/* + *Host code (drivers, BIOS, utilities, etc.) should leave this define set to + *one and check NumberOfDevices at runtime. 
+ */ +#ifndef MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES +#define MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES (1) +#endif + +typedef struct _MPI2_SUPPORTED_DEVICE { + U16 DeviceID; /*0x00 */ + U16 VendorID; /*0x02 */ + U16 DeviceIDMask; /*0x04 */ + U16 Reserved1; /*0x06 */ + U8 LowPCIRev; /*0x08 */ + U8 HighPCIRev; /*0x09 */ + U16 Reserved2; /*0x0A */ + U32 Reserved3; /*0x0C */ +} MPI2_SUPPORTED_DEVICE, *PTR_MPI2_SUPPORTED_DEVICE, + Mpi2SupportedDevice_t, *pMpi2SupportedDevice_t; + +typedef struct _MPI2_SUPPORTED_DEVICES_DATA { + U8 ImageRevision; /*0x00 */ + U8 Reserved1; /*0x01 */ + U8 NumberOfDevices; /*0x02 */ + U8 Reserved2; /*0x03 */ + U32 Reserved3; /*0x04 */ + MPI2_SUPPORTED_DEVICE + SupportedDevice[MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES];/*0x08 */ +} MPI2_SUPPORTED_DEVICES_DATA, *PTR_MPI2_SUPPORTED_DEVICES_DATA, + Mpi2SupportedDevicesData_t, *pMpi2SupportedDevicesData_t; + +/*ImageRevision */ +#define MPI2_SUPPORTED_DEVICES_IMAGE_REVISION (0x00) + +/*Init Extended Image Data */ + +typedef struct _MPI2_INIT_IMAGE_FOOTER { + U32 BootFlags; /*0x00 */ + U32 ImageSize; /*0x04 */ + U32 Signature0; /*0x08 */ + U32 Signature1; /*0x0C */ + U32 Signature2; /*0x10 */ + U32 ResetVector; /*0x14 */ +} MPI2_INIT_IMAGE_FOOTER, *PTR_MPI2_INIT_IMAGE_FOOTER, + Mpi2InitImageFooter_t, *pMpi2InitImageFooter_t; + +/*defines for the BootFlags field */ +#define MPI2_INIT_IMAGE_BOOTFLAGS_OFFSET (0x00) + +/*defines for the ImageSize field */ +#define MPI2_INIT_IMAGE_IMAGESIZE_OFFSET (0x04) + +/*defines for the Signature0 field */ +#define MPI2_INIT_IMAGE_SIGNATURE0_OFFSET (0x08) +#define MPI2_INIT_IMAGE_SIGNATURE0 (0x5AA55AEA) + +/*defines for the Signature1 field */ +#define MPI2_INIT_IMAGE_SIGNATURE1_OFFSET (0x0C) +#define MPI2_INIT_IMAGE_SIGNATURE1 (0xA55AEAA5) + +/*defines for the Signature2 field */ +#define MPI2_INIT_IMAGE_SIGNATURE2_OFFSET (0x10) +#define MPI2_INIT_IMAGE_SIGNATURE2 (0x5AEAA55A) + +/*Signature fields as individual bytes */ +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_0 (0xEA) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_1 (0x5A) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_2 (0xA5) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_3 (0x5A) + +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_4 (0xA5) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_5 (0xEA) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_6 (0x5A) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_7 (0xA5) + +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_8 (0x5A) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_9 (0xA5) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_A (0xEA) +#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_B (0x5A) + +/*defines for the ResetVector field */ +#define MPI2_INIT_IMAGE_RESETVECTOR_OFFSET (0x14) + + +/* Encrypted Hash Extended Image Data */ + +typedef struct _MPI25_ENCRYPTED_HASH_ENTRY { + U8 HashImageType; /*0x00 */ + U8 HashAlgorithm; /*0x01 */ + U8 EncryptionAlgorithm; /*0x02 */ + U8 Reserved1; /*0x03 */ + U32 Reserved2; /*0x04 */ + U32 EncryptedHash[1]; /*0x08 */ /* variable length */ +} MPI25_ENCRYPTED_HASH_ENTRY, *PTR_MPI25_ENCRYPTED_HASH_ENTRY, +Mpi25EncryptedHashEntry_t, *pMpi25EncryptedHashEntry_t; + +/* values for HashImageType */ +#define MPI25_HASH_IMAGE_TYPE_UNUSED (0x00) +#define MPI25_HASH_IMAGE_TYPE_FIRMWARE (0x01) +#define MPI25_HASH_IMAGE_TYPE_BIOS (0x02) + +#define MPI26_HASH_IMAGE_TYPE_UNUSED (0x00) +#define MPI26_HASH_IMAGE_TYPE_FIRMWARE (0x01) +#define MPI26_HASH_IMAGE_TYPE_BIOS (0x02) +#define MPI26_HASH_IMAGE_TYPE_KEY_HASH (0x03) + +/* values for HashAlgorithm */ +#define MPI25_HASH_ALGORITHM_UNUSED (0x00) +#define MPI25_HASH_ALGORITHM_SHA256 (0x01) + +#define 
MPI26_HASH_ALGORITHM_VERSION_MASK (0xE0) +#define MPI26_HASH_ALGORITHM_VERSION_NONE (0x00) +#define MPI26_HASH_ALGORITHM_VERSION_SHA1 (0x20) +#define MPI26_HASH_ALGORITHM_VERSION_SHA2 (0x40) +#define MPI26_HASH_ALGORITHM_VERSION_SHA3 (0x60) +#define MPI26_HASH_ALGORITHM_SIZE_MASK (0x1F) +#define MPI26_HASH_ALGORITHM_SIZE_256 (0x01) +#define MPI26_HASH_ALGORITHM_SIZE_512 (0x02) + + +/* values for EncryptionAlgorithm */ +#define MPI25_ENCRYPTION_ALG_UNUSED (0x00) +#define MPI25_ENCRYPTION_ALG_RSA256 (0x01) + +#define MPI26_ENCRYPTION_ALG_UNUSED (0x00) +#define MPI26_ENCRYPTION_ALG_RSA256 (0x01) +#define MPI26_ENCRYPTION_ALG_RSA512 (0x02) +#define MPI26_ENCRYPTION_ALG_RSA1024 (0x03) +#define MPI26_ENCRYPTION_ALG_RSA2048 (0x04) +#define MPI26_ENCRYPTION_ALG_RSA4096 (0x05) + +typedef struct _MPI25_ENCRYPTED_HASH_DATA { + U8 ImageVersion; /*0x00 */ + U8 NumHash; /*0x01 */ + U16 Reserved1; /*0x02 */ + U32 Reserved2; /*0x04 */ + MPI25_ENCRYPTED_HASH_ENTRY EncryptedHashEntry[1]; /*0x08 */ +} MPI25_ENCRYPTED_HASH_DATA, *PTR_MPI25_ENCRYPTED_HASH_DATA, +Mpi25EncryptedHashData_t, *pMpi25EncryptedHashData_t; + + +#endif /* MPI2_IMAGE_H */ diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_init.h b/drivers/scsi/mpt3sas/mpi/mpi2_init.h index 6213ce6791ac..8f1b903fe0a9 100644 --- a/drivers/scsi/mpt3sas/mpi/mpi2_init.h +++ b/drivers/scsi/mpt3sas/mpi/mpi2_init.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* - * Copyright 2000-2015 Avago Technologies. All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_init.h diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_ioc.h b/drivers/scsi/mpt3sas/mpi/mpi2_ioc.h index 1faec3a93e69..68ea408cd5c5 100644 --- a/drivers/scsi/mpt3sas/mpi/mpi2_ioc.h +++ b/drivers/scsi/mpt3sas/mpi/mpi2_ioc.h @@ -1,13 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* - * Copyright 2000-2015 Avago Technologies. All rights reserved. + * Copyright 2000-2020 Broadcom Inc. All rights reserved. * * * Name: mpi2_ioc.h * Title: MPI IOC, Port, Event, FW Download, and FW Upload messages * Creation Date: October 11, 2006 * - * mpi2_ioc.h Version: 02.00.34 + * mpi2_ioc.h Version: 02.00.37 * * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25 * prefix are for use only on MPI v2.5 products, and must not be used @@ -171,6 +171,10 @@ * 09-29-17 02.00.34 Added MPI26_EVENT_PCIDEV_STAT_RC_PCIE_HOT_RESET_FAILED * to the ReasonCode field in PCIe Device Status Change * Event Data. + * 07-22-18 02.00.35 Added FW_DOWNLOAD_ITYPE_CPLD and _PSOC. 
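With mpi2_image.h in place above, component and extended images are located by following NextImageHeaderOffset from one header to the next. A minimal, bounds-checked walk over such a chain could look like the sketch below; the trimmed struct and the assumption that a zero offset terminates the chain are illustrative, not the full MPI2_EXT_IMAGE_HEADER contract.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Trimmed stand-in: just the two fields the walk needs, not the full
 * 0x40-byte MPI2_EXT_IMAGE_HEADER. */
struct ext_hdr {
	uint32_t image_size;   /* ImageSize */
	uint32_t next_offset;  /* NextImageHeaderOffset; 0 assumed to end */
};

static void walk_images(const uint8_t *flash, size_t len, uint32_t off)
{
	while (off && off + sizeof(struct ext_hdr) <= len) {
		struct ext_hdr h;

		memcpy(&h, flash + off, sizeof(h)); /* avoid unaligned reads */
		printf("image at 0x%x, size 0x%x\n",
		       (unsigned int)off, (unsigned int)h.image_size);
		if (h.next_offset <= off)           /* refuse cycles/rewinds */
			break;
		off = h.next_offset;
	}
}

int main(void)
{
	uint8_t flash[64] = { 0 };
	struct ext_hdr first = { 0x20, 0x30 }, second = { 0x10, 0 };

	memcpy(flash + 0x10, &first, sizeof(first));
	memcpy(flash + 0x30, &second, sizeof(second));
	walk_images(flash, sizeof(flash), 0x10);
	return 0;
}

The forward-only check matters because the offsets come from flash contents the host does not control; without it a corrupt image could send the walker into a loop.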
+ *                   Moved FW image definitions into new mpi2_image.h
+ * 08-14-18 02.00.36 Fixed definition of MPI2_FW_DOWNLOAD_ITYPE_PSOC (0x16)
+ * 09-07-18 02.00.37 Added MPI26_EVENT_PCIE_TOPO_PI_16_LANES
+ * --------------------------------------------------------------------------
 */
@@ -1255,6 +1259,7 @@ typedef struct _MPI26_EVENT_PCIE_TOPO_PORT_ENTRY {
 #define MPI26_EVENT_PCIE_TOPO_PI_2_LANES (0x20)
 #define MPI26_EVENT_PCIE_TOPO_PI_4_LANES (0x30)
 #define MPI26_EVENT_PCIE_TOPO_PI_8_LANES (0x40)
+#define MPI26_EVENT_PCIE_TOPO_PI_16_LANES (0x50)
 #define MPI26_EVENT_PCIE_TOPO_PI_RATE_MASK (0x0F)
 #define MPI26_EVENT_PCIE_TOPO_PI_RATE_UNKNOWN (0x00)
@@ -1450,7 +1455,11 @@ typedef struct _MPI2_FW_DOWNLOAD_REQUEST {
 #define MPI2_FW_DOWNLOAD_ITYPE_CTLR (0x12)
 #define MPI2_FW_DOWNLOAD_ITYPE_IMR_FIRMWARE (0x13)
 #define MPI2_FW_DOWNLOAD_ITYPE_MR_NVDATA (0x14)
+/*MPI v2.6 and newer */
+#define MPI2_FW_DOWNLOAD_ITYPE_CPLD (0x15)
+#define MPI2_FW_DOWNLOAD_ITYPE_PSOC (0x16)
 #define MPI2_FW_DOWNLOAD_ITYPE_MIN_PRODUCT_SPECIFIC (0xF0)
+#define MPI2_FW_DOWNLOAD_ITYPE_TERMINATE (0xFF)
 /*MPI v2.0 FWDownload TransactionContext Element */
 typedef struct _MPI2_FW_DOWNLOAD_TCSGE {
@@ -1597,352 +1606,6 @@ typedef struct _MPI2_FW_UPLOAD_REPLY {
 } MPI2_FW_UPLOAD_REPLY, *PTR_MPI2_FW_UPLOAD_REPLY,
 Mpi2FWUploadReply_t, *pMPi2FWUploadReply_t;
 
-/*FW Image Header */
-typedef struct _MPI2_FW_IMAGE_HEADER {
-	U32 Signature; /*0x00 */
-	U32 Signature0; /*0x04 */
-	U32 Signature1; /*0x08 */
-	U32 Signature2; /*0x0C */
-	MPI2_VERSION_UNION MPIVersion; /*0x10 */
-	MPI2_VERSION_UNION FWVersion; /*0x14 */
-	MPI2_VERSION_UNION NVDATAVersion; /*0x18 */
-	MPI2_VERSION_UNION PackageVersion; /*0x1C */
-	U16 VendorID; /*0x20 */
-	U16 ProductID; /*0x22 */
-	U16 ProtocolFlags; /*0x24 */
-	U16 Reserved26; /*0x26 */
-	U32 IOCCapabilities; /*0x28 */
-	U32 ImageSize; /*0x2C */
-	U32 NextImageHeaderOffset; /*0x30 */
-	U32 Checksum; /*0x34 */
-	U32 Reserved38; /*0x38 */
-	U32 Reserved3C; /*0x3C */
-	U32 Reserved40; /*0x40 */
-	U32 Reserved44; /*0x44 */
-	U32 Reserved48; /*0x48 */
-	U32 Reserved4C; /*0x4C */
-	U32 Reserved50; /*0x50 */
-	U32 Reserved54; /*0x54 */
-	U32 Reserved58; /*0x58 */
-	U32 Reserved5C; /*0x5C */
-	U32 BootFlags; /*0x60 */
-	U32 FirmwareVersionNameWhat; /*0x64 */
-	U8 FirmwareVersionName[32]; /*0x68 */
-	U32 VendorNameWhat; /*0x88 */
-	U8 VendorName[32]; /*0x8C */
-	U32 PackageNameWhat; /*0x88 */
-	U8 PackageName[32]; /*0x8C */
-	U32 ReservedD0; /*0xD0 */
-	U32 ReservedD4; /*0xD4 */
-	U32 ReservedD8; /*0xD8 */
-	U32 ReservedDC; /*0xDC */
-	U32 ReservedE0; /*0xE0 */
-	U32 ReservedE4; /*0xE4 */
-	U32 ReservedE8; /*0xE8 */
-	U32 ReservedEC; /*0xEC */
-	U32 ReservedF0; /*0xF0 */
-	U32 ReservedF4; /*0xF4 */
-	U32 ReservedF8; /*0xF8 */
-	U32 ReservedFC; /*0xFC */
-} MPI2_FW_IMAGE_HEADER, *PTR_MPI2_FW_IMAGE_HEADER,
-	Mpi2FWImageHeader_t, *pMpi2FWImageHeader_t;
-
-/*Signature field */
-#define MPI2_FW_HEADER_SIGNATURE_OFFSET (0x00)
-#define MPI2_FW_HEADER_SIGNATURE_MASK (0xFF000000)
-#define MPI2_FW_HEADER_SIGNATURE (0xEA000000)
-#define MPI26_FW_HEADER_SIGNATURE (0xEB000000)
-
-/*Signature0 field */
-#define MPI2_FW_HEADER_SIGNATURE0_OFFSET (0x04)
-#define MPI2_FW_HEADER_SIGNATURE0 (0x5AFAA55A)
-/* Last byte is defined by architecture */
-#define MPI26_FW_HEADER_SIGNATURE0_BASE (0x5AEAA500)
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_0 (0x5A)
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_1 (0x00)
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_2 (0x01)
-/* legacy (0x5AEAA55A) */
-#define MPI26_FW_HEADER_SIGNATURE0_ARC_3 (0x02)
-#define
MPI26_FW_HEADER_SIGNATURE0 \ - (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_0) -#define MPI26_FW_HEADER_SIGNATURE0_3516 \ - (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_1) -#define MPI26_FW_HEADER_SIGNATURE0_4008 \ - (MPI26_FW_HEADER_SIGNATURE0_BASE+MPI26_FW_HEADER_SIGNATURE0_ARC_3) - -/*Signature1 field */ -#define MPI2_FW_HEADER_SIGNATURE1_OFFSET (0x08) -#define MPI2_FW_HEADER_SIGNATURE1 (0xA55AFAA5) -#define MPI26_FW_HEADER_SIGNATURE1 (0xA55AEAA5) - -/*Signature2 field */ -#define MPI2_FW_HEADER_SIGNATURE2_OFFSET (0x0C) -#define MPI2_FW_HEADER_SIGNATURE2 (0x5AA55AFA) -#define MPI26_FW_HEADER_SIGNATURE2 (0x5AA55AEA) - -/*defines for using the ProductID field */ -#define MPI2_FW_HEADER_PID_TYPE_MASK (0xF000) -#define MPI2_FW_HEADER_PID_TYPE_SAS (0x2000) - -#define MPI2_FW_HEADER_PID_PROD_MASK (0x0F00) -#define MPI2_FW_HEADER_PID_PROD_A (0x0000) -#define MPI2_FW_HEADER_PID_PROD_TARGET_INITIATOR_SCSI (0x0200) -#define MPI2_FW_HEADER_PID_PROD_IR_SCSI (0x0700) - -#define MPI2_FW_HEADER_PID_FAMILY_MASK (0x00FF) -/*SAS ProductID Family bits */ -#define MPI2_FW_HEADER_PID_FAMILY_2108_SAS (0x0013) -#define MPI2_FW_HEADER_PID_FAMILY_2208_SAS (0x0014) -#define MPI25_FW_HEADER_PID_FAMILY_3108_SAS (0x0021) -#define MPI26_FW_HEADER_PID_FAMILY_3324_SAS (0x0028) -#define MPI26_FW_HEADER_PID_FAMILY_3516_SAS (0x0031) - -/*use MPI2_IOCFACTS_PROTOCOL_ defines for ProtocolFlags field */ - -/*use MPI2_IOCFACTS_CAPABILITY_ defines for IOCCapabilities field */ - -#define MPI2_FW_HEADER_IMAGESIZE_OFFSET (0x2C) -#define MPI2_FW_HEADER_NEXTIMAGE_OFFSET (0x30) -#define MPI26_FW_HEADER_BOOTFLAGS_OFFSET (0x60) -#define MPI2_FW_HEADER_VERNMHWAT_OFFSET (0x64) - -#define MPI2_FW_HEADER_WHAT_SIGNATURE (0x29232840) - -#define MPI2_FW_HEADER_SIZE (0x100) - -/*Extended Image Header */ -typedef struct _MPI2_EXT_IMAGE_HEADER { - U8 ImageType; /*0x00 */ - U8 Reserved1; /*0x01 */ - U16 Reserved2; /*0x02 */ - U32 Checksum; /*0x04 */ - U32 ImageSize; /*0x08 */ - U32 NextImageHeaderOffset; /*0x0C */ - U32 PackageVersion; /*0x10 */ - U32 Reserved3; /*0x14 */ - U32 Reserved4; /*0x18 */ - U32 Reserved5; /*0x1C */ - U8 IdentifyString[32]; /*0x20 */ -} MPI2_EXT_IMAGE_HEADER, *PTR_MPI2_EXT_IMAGE_HEADER, - Mpi2ExtImageHeader_t, *pMpi2ExtImageHeader_t; - -/*useful offsets */ -#define MPI2_EXT_IMAGE_IMAGETYPE_OFFSET (0x00) -#define MPI2_EXT_IMAGE_IMAGESIZE_OFFSET (0x08) -#define MPI2_EXT_IMAGE_NEXTIMAGE_OFFSET (0x0C) - -#define MPI2_EXT_IMAGE_HEADER_SIZE (0x40) - -/*defines for the ImageType field */ -#define MPI2_EXT_IMAGE_TYPE_UNSPECIFIED (0x00) -#define MPI2_EXT_IMAGE_TYPE_FW (0x01) -#define MPI2_EXT_IMAGE_TYPE_NVDATA (0x03) -#define MPI2_EXT_IMAGE_TYPE_BOOTLOADER (0x04) -#define MPI2_EXT_IMAGE_TYPE_INITIALIZATION (0x05) -#define MPI2_EXT_IMAGE_TYPE_FLASH_LAYOUT (0x06) -#define MPI2_EXT_IMAGE_TYPE_SUPPORTED_DEVICES (0x07) -#define MPI2_EXT_IMAGE_TYPE_MEGARAID (0x08) -#define MPI2_EXT_IMAGE_TYPE_ENCRYPTED_HASH (0x09) -#define MPI2_EXT_IMAGE_TYPE_MIN_PRODUCT_SPECIFIC (0x80) -#define MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC (0xFF) - -#define MPI2_EXT_IMAGE_TYPE_MAX (MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC) - -/*FLASH Layout Extended Image Data */ - -/* - *Host code (drivers, BIOS, utilities, etc.) should leave this define set to - *one and check RegionsPerLayout at runtime. - */ -#ifndef MPI2_FLASH_NUMBER_OF_REGIONS -#define MPI2_FLASH_NUMBER_OF_REGIONS (1) -#endif - -/* - *Host code (drivers, BIOS, utilities, etc.) should leave this define set to - *one and check NumberOfLayouts at runtime. 
- */ -#ifndef MPI2_FLASH_NUMBER_OF_LAYOUTS -#define MPI2_FLASH_NUMBER_OF_LAYOUTS (1) -#endif - -typedef struct _MPI2_FLASH_REGION { - U8 RegionType; /*0x00 */ - U8 Reserved1; /*0x01 */ - U16 Reserved2; /*0x02 */ - U32 RegionOffset; /*0x04 */ - U32 RegionSize; /*0x08 */ - U32 Reserved3; /*0x0C */ -} MPI2_FLASH_REGION, *PTR_MPI2_FLASH_REGION, - Mpi2FlashRegion_t, *pMpi2FlashRegion_t; - -typedef struct _MPI2_FLASH_LAYOUT { - U32 FlashSize; /*0x00 */ - U32 Reserved1; /*0x04 */ - U32 Reserved2; /*0x08 */ - U32 Reserved3; /*0x0C */ - MPI2_FLASH_REGION Region[MPI2_FLASH_NUMBER_OF_REGIONS]; /*0x10 */ -} MPI2_FLASH_LAYOUT, *PTR_MPI2_FLASH_LAYOUT, - Mpi2FlashLayout_t, *pMpi2FlashLayout_t; - -typedef struct _MPI2_FLASH_LAYOUT_DATA { - U8 ImageRevision; /*0x00 */ - U8 Reserved1; /*0x01 */ - U8 SizeOfRegion; /*0x02 */ - U8 Reserved2; /*0x03 */ - U16 NumberOfLayouts; /*0x04 */ - U16 RegionsPerLayout; /*0x06 */ - U16 MinimumSectorAlignment; /*0x08 */ - U16 Reserved3; /*0x0A */ - U32 Reserved4; /*0x0C */ - MPI2_FLASH_LAYOUT Layout[MPI2_FLASH_NUMBER_OF_LAYOUTS]; /*0x10 */ -} MPI2_FLASH_LAYOUT_DATA, *PTR_MPI2_FLASH_LAYOUT_DATA, - Mpi2FlashLayoutData_t, *pMpi2FlashLayoutData_t; - -/*defines for the RegionType field */ -#define MPI2_FLASH_REGION_UNUSED (0x00) -#define MPI2_FLASH_REGION_FIRMWARE (0x01) -#define MPI2_FLASH_REGION_BIOS (0x02) -#define MPI2_FLASH_REGION_NVDATA (0x03) -#define MPI2_FLASH_REGION_FIRMWARE_BACKUP (0x05) -#define MPI2_FLASH_REGION_MFG_INFORMATION (0x06) -#define MPI2_FLASH_REGION_CONFIG_1 (0x07) -#define MPI2_FLASH_REGION_CONFIG_2 (0x08) -#define MPI2_FLASH_REGION_MEGARAID (0x09) -#define MPI2_FLASH_REGION_COMMON_BOOT_BLOCK (0x0A) -#define MPI2_FLASH_REGION_INIT (MPI2_FLASH_REGION_COMMON_BOOT_BLOCK) -#define MPI2_FLASH_REGION_CBB_BACKUP (0x0D) -#define MPI2_FLASH_REGION_SBR (0x0E) -#define MPI2_FLASH_REGION_SBR_BACKUP (0x0F) -#define MPI2_FLASH_REGION_HIIM (0x10) -#define MPI2_FLASH_REGION_HIIA (0x11) -#define MPI2_FLASH_REGION_CTLR (0x12) -#define MPI2_FLASH_REGION_IMR_FIRMWARE (0x13) -#define MPI2_FLASH_REGION_MR_NVDATA (0x14) - -/*ImageRevision */ -#define MPI2_FLASH_LAYOUT_IMAGE_REVISION (0x00) - -/*Supported Devices Extended Image Data */ - -/* - *Host code (drivers, BIOS, utilities, etc.) should leave this define set to - *one and check NumberOfDevices at runtime. 
- */ -#ifndef MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES -#define MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES (1) -#endif - -typedef struct _MPI2_SUPPORTED_DEVICE { - U16 DeviceID; /*0x00 */ - U16 VendorID; /*0x02 */ - U16 DeviceIDMask; /*0x04 */ - U16 Reserved1; /*0x06 */ - U8 LowPCIRev; /*0x08 */ - U8 HighPCIRev; /*0x09 */ - U16 Reserved2; /*0x0A */ - U32 Reserved3; /*0x0C */ -} MPI2_SUPPORTED_DEVICE, *PTR_MPI2_SUPPORTED_DEVICE, - Mpi2SupportedDevice_t, *pMpi2SupportedDevice_t; - -typedef struct _MPI2_SUPPORTED_DEVICES_DATA { - U8 ImageRevision; /*0x00 */ - U8 Reserved1; /*0x01 */ - U8 NumberOfDevices; /*0x02 */ - U8 Reserved2; /*0x03 */ - U32 Reserved3; /*0x04 */ - MPI2_SUPPORTED_DEVICE - SupportedDevice[MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES];/*0x08 */ -} MPI2_SUPPORTED_DEVICES_DATA, *PTR_MPI2_SUPPORTED_DEVICES_DATA, - Mpi2SupportedDevicesData_t, *pMpi2SupportedDevicesData_t; - -/*ImageRevision */ -#define MPI2_SUPPORTED_DEVICES_IMAGE_REVISION (0x00) - -/*Init Extended Image Data */ - -typedef struct _MPI2_INIT_IMAGE_FOOTER { - U32 BootFlags; /*0x00 */ - U32 ImageSize; /*0x04 */ - U32 Signature0; /*0x08 */ - U32 Signature1; /*0x0C */ - U32 Signature2; /*0x10 */ - U32 ResetVector; /*0x14 */ -} MPI2_INIT_IMAGE_FOOTER, *PTR_MPI2_INIT_IMAGE_FOOTER, - Mpi2InitImageFooter_t, *pMpi2InitImageFooter_t; - -/*defines for the BootFlags field */ -#define MPI2_INIT_IMAGE_BOOTFLAGS_OFFSET (0x00) - -/*defines for the ImageSize field */ -#define MPI2_INIT_IMAGE_IMAGESIZE_OFFSET (0x04) - -/*defines for the Signature0 field */ -#define MPI2_INIT_IMAGE_SIGNATURE0_OFFSET (0x08) -#define MPI2_INIT_IMAGE_SIGNATURE0 (0x5AA55AEA) - -/*defines for the Signature1 field */ -#define MPI2_INIT_IMAGE_SIGNATURE1_OFFSET (0x0C) -#define MPI2_INIT_IMAGE_SIGNATURE1 (0xA55AEAA5) - -/*defines for the Signature2 field */ -#define MPI2_INIT_IMAGE_SIGNATURE2_OFFSET (0x10) -#define MPI2_INIT_IMAGE_SIGNATURE2 (0x5AEAA55A) - -/*Signature fields as individual bytes */ -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_0 (0xEA) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_1 (0x5A) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_2 (0xA5) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_3 (0x5A) - -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_4 (0xA5) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_5 (0xEA) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_6 (0x5A) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_7 (0xA5) - -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_8 (0x5A) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_9 (0xA5) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_A (0xEA) -#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_B (0x5A) - -/*defines for the ResetVector field */ -#define MPI2_INIT_IMAGE_RESETVECTOR_OFFSET (0x14) - - -/* Encrypted Hash Extended Image Data */ - -typedef struct _MPI25_ENCRYPTED_HASH_ENTRY { - U8 HashImageType; /* 0x00 */ - U8 HashAlgorithm; /* 0x01 */ - U8 EncryptionAlgorithm; /* 0x02 */ - U8 Reserved1; /* 0x03 */ - U32 Reserved2; /* 0x04 */ - U32 EncryptedHash[1]; /* 0x08 */ /* variable length */ -} MPI25_ENCRYPTED_HASH_ENTRY, *PTR_MPI25_ENCRYPTED_HASH_ENTRY, -Mpi25EncryptedHashEntry_t, *pMpi25EncryptedHashEntry_t; - -/* values for HashImageType */ -#define MPI25_HASH_IMAGE_TYPE_UNUSED (0x00) -#define MPI25_HASH_IMAGE_TYPE_FIRMWARE (0x01) -#define MPI25_HASH_IMAGE_TYPE_BIOS (0x02) - -/* values for HashAlgorithm */ -#define MPI25_HASH_ALGORITHM_UNUSED (0x00) -#define MPI25_HASH_ALGORITHM_SHA256 (0x01) - -/* values for EncryptionAlgorithm */ -#define MPI25_ENCRYPTION_ALG_UNUSED (0x00) -#define MPI25_ENCRYPTION_ALG_RSA256 (0x01) - -typedef struct _MPI25_ENCRYPTED_HASH_DATA { - U8 ImageVersion; 
/* 0x00 */
-	U8 NumHash; /* 0x01 */
-	U16 Reserved1; /* 0x02 */
-	U32 Reserved2; /* 0x04 */
-	MPI25_ENCRYPTED_HASH_ENTRY EncryptedHashEntry[1]; /* 0x08 */
-} MPI25_ENCRYPTED_HASH_DATA, *PTR_MPI25_ENCRYPTED_HASH_DATA,
-Mpi25EncryptedHashData_t, *pMpi25EncryptedHashData_t;
-
 /****************************************************************************
 * PowerManagementControl message
diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_pci.h b/drivers/scsi/mpt3sas/mpi/mpi2_pci.h
index f0281f943ec9..63a09509d7d1 100644
--- a/drivers/scsi/mpt3sas/mpi/mpi2_pci.h
+++ b/drivers/scsi/mpt3sas/mpi/mpi2_pci.h
@@ -1,12 +1,12 @@
 /*
- * Copyright 2012-2015 Avago Technologies. All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 *
 * Name: mpi2_pci.h
 * Title: MPI PCIe Attached Devices structures and definitions.
 * Creation Date: October 9, 2012
 *
- * mpi2_pci.h Version: 02.00.02
+ * mpi2_pci.h Version: 02.00.03
 *
 * NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
 * prefix are for use only on MPI v2.5 products, and must not be used
@@ -23,6 +23,7 @@
 * Removed SOP support.
 * 07-01-16 02.00.02 Added MPI26_NVME_FLAGS_FORCE_ADMIN_ERR_RESP to
 * NVME Encapsulated Request.
+ * 07-22-18 02.00.03 Updated flags field for NVME Encapsulated req
 * --------------------------------------------------------------------------
 */
@@ -75,10 +76,10 @@ typedef struct _MPI26_NVME_ENCAPSULATED_REQUEST {
 #define MPI26_NVME_FLAGS_SUBMISSIONQ_ADMIN (0x0010)
 /*Error Response Address Space */
 #define MPI26_NVME_FLAGS_MASK_ERROR_RSP_ADDR (0x000C)
+#define MPI26_NVME_FLAGS_MASK_ERROR_RSP_ADDR_MASK (0x000C)
 #define MPI26_NVME_FLAGS_SYSTEM_RSP_ADDR (0x0000)
-#define MPI26_NVME_FLAGS_IOCPLB_RSP_ADDR (0x0008)
-#define MPI26_NVME_FLAGS_IOCPLBNTA_RSP_ADDR (0x000C)
-/*Data Direction*/
+#define MPI26_NVME_FLAGS_IOCCTL_RSP_ADDR (0x0008)
+/* Data Direction*/
 #define MPI26_NVME_FLAGS_DATADIRECTION_MASK (0x0003)
 #define MPI26_NVME_FLAGS_NODATATRANSFER (0x0000)
 #define MPI26_NVME_FLAGS_WRITE (0x0001)
diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_raid.h b/drivers/scsi/mpt3sas/mpi/mpi2_raid.h
index b9bb1c178f12..b770eb516c14 100644
--- a/drivers/scsi/mpt3sas/mpi/mpi2_raid.h
+++ b/drivers/scsi/mpt3sas/mpi/mpi2_raid.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Copyright 2000-2014 Avago Technologies. All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 *
 * Name: mpi2_raid.h
diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_sas.h b/drivers/scsi/mpt3sas/mpi/mpi2_sas.h
index afa17ff246b4..16c922a8a02b 100644
--- a/drivers/scsi/mpt3sas/mpi/mpi2_sas.h
+++ b/drivers/scsi/mpt3sas/mpi/mpi2_sas.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Copyright 2000-2015 Avago Technologies. All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 *
 * Name: mpi2_sas.h
diff --git a/drivers/scsi/mpt3sas/mpi/mpi2_tool.h b/drivers/scsi/mpt3sas/mpi/mpi2_tool.h
index 629296ee9236..3f966b6796b3 100644
--- a/drivers/scsi/mpt3sas/mpi/mpi2_tool.h
+++ b/drivers/scsi/mpt3sas/mpi/mpi2_tool.h
@@ -1,13 +1,13 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Copyright 2000-2014 Avago Technologies. All rights reserved.
+ * Copyright 2000-2020 Broadcom Inc. All rights reserved.
 *
 *
 * Name: mpi2_tool.h
 * Title: MPI diagnostic tool structures and definitions
 * Creation Date: March 26, 2007
 *
- * mpi2_tool.h Version: 02.00.14
+ * mpi2_tool.h Version: 02.00.15
 *
 * Version History
 * ---------------
@@ -38,6 +38,8 @@
 * 11-18-14 02.00.13 Updated copyright information.
* 08-25-16 02.00.14 Added new values for the Flags field of Toolbox Clean * Tool Request Message. + * 07-22-18 02.00.15 Added defines for new TOOLBOX_PCIE_LANE_MARGINING tool. + * Added option for DeviceInfo field in ISTWI tool. * -------------------------------------------------------------------------- */ @@ -58,6 +60,7 @@ #define MPI2_TOOLBOX_BEACON_TOOL (0x05) #define MPI2_TOOLBOX_DIAGNOSTIC_CLI_TOOL (0x06) #define MPI2_TOOLBOX_TEXT_DISPLAY_TOOL (0x07) +#define MPI26_TOOLBOX_BACKEND_PCIE_LANE_MARGIN (0x08) /**************************************************************************** * Toolbox reply @@ -226,6 +229,13 @@ typedef struct _MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST { #define MPI2_TOOL_ISTWI_FLAG_AUTO_RESERVE_RELEASE (0x80) #define MPI2_TOOL_ISTWI_FLAG_PAGE_ADDR_MASK (0x07) +/*MPI26 TOOLBOX Request MsgFlags defines */ +#define MPI26_TOOLBOX_REQ_MSGFLAGS_ADDRESSING_MASK (0x01) +/*Request uses Man Page 43 device index addressing */ +#define MPI26_TOOLBOX_REQ_MSGFLAGS_ADDRESSING_DEVINDEX (0x00) +/*Request uses Man Page 43 device info struct addressing */ +#define MPI26_TOOLBOX_REQ_MSGFLAGS_ADDRESSING_DEVINFO (0x01) + /*Toolbox ISTWI Read Write Tool reply message */ typedef struct _MPI2_TOOLBOX_ISTWI_REPLY { U8 Tool; /*0x00 */ @@ -387,6 +397,64 @@ Mpi2ToolboxTextDisplayRequest_t, #define MPI2_TOOLBOX_CONSOLE_FLAG_TIMESTAMP (0x01) +/*************************************************************************** + * Toolbox Backend Lane Margining Tool + *************************************************************************** + */ + +/*Toolbox Backend Lane Margining Tool request message */ +typedef struct _MPI26_TOOLBOX_LANE_MARGINING_REQUEST { + U8 Tool; /*0x00 */ + U8 Reserved1; /*0x01 */ + U8 ChainOffset; /*0x02 */ + U8 Function; /*0x03 */ + U16 Reserved2; /*0x04 */ + U8 Reserved3; /*0x06 */ + U8 MsgFlags; /*0x07 */ + U8 VP_ID; /*0x08 */ + U8 VF_ID; /*0x09 */ + U16 Reserved4; /*0x0A */ + U8 Command; /*0x0C */ + U8 SwitchPort; /*0x0D */ + U16 DevHandle; /*0x0E */ + U8 RegisterOffset; /*0x10 */ + U8 Reserved5; /*0x11 */ + U16 DataLength; /*0x12 */ + MPI25_SGE_IO_UNION SGL; /*0x14 */ +} MPI26_TOOLBOX_LANE_MARGINING_REQUEST, + *PTR_MPI2_TOOLBOX_LANE_MARGINING_REQUEST, + Mpi26ToolboxLaneMarginingRequest_t, + *pMpi2ToolboxLaneMarginingRequest_t; + +/* defines for the Command field */ +#define MPI26_TOOL_MARGIN_COMMAND_ENTER_MARGIN_MODE (0x01) +#define MPI26_TOOL_MARGIN_COMMAND_READ_REGISTER_DATA (0x02) +#define MPI26_TOOL_MARGIN_COMMAND_WRITE_REGISTER_DATA (0x03) +#define MPI26_TOOL_MARGIN_COMMAND_EXIT_MARGIN_MODE (0x04) + + +/*Toolbox Backend Lane Margining Tool reply message */ +typedef struct _MPI26_TOOLBOX_LANE_MARGINING_REPLY { + U8 Tool; /*0x00 */ + U8 Reserved1; /*0x01 */ + U8 MsgLength; /*0x02 */ + U8 Function; /*0x03 */ + U16 Reserved2; /*0x04 */ + U8 Reserved3; /*0x06 */ + U8 MsgFlags; /*0x07 */ + U8 VP_ID; /*0x08 */ + U8 VF_ID; /*0x09 */ + U16 Reserved4; /*0x0A */ + U16 Reserved5; /*0x0C */ + U16 IOCStatus; /*0x0E */ + U32 IOCLogInfo; /*0x10 */ + U16 ReturnedDataLength; /*0x14 */ + U16 Reserved6; /*0x16 */ +} MPI26_TOOLBOX_LANE_MARGINING_REPLY, + *PTR_MPI26_TOOLBOX_LANE_MARGINING_REPLY, + Mpi26ToolboxLaneMarginingReply_t, + *pMpi26ToolboxLaneMarginingReply_t; + /***************************************************************************** * diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c index 2500377d0723..0a6cb8f0680c 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_base.c +++ b/drivers/scsi/mpt3sas/mpt3sas_base.c @@ -156,6 +156,32 @@ 
_scsih_set_fwfault_debug(const char *val, const struct kernel_param *kp) module_param_call(mpt3sas_fwfault_debug, _scsih_set_fwfault_debug, param_get_int, &mpt3sas_fwfault_debug, 0644); +/** + * _base_readl_aero - retry readl for max three times. + * @addr - MPT Fusion system interface register address + * + * Retry the readl() for max three times if it gets zero value + * while reading the system interface register. + */ +static inline u32 +_base_readl_aero(const volatile void __iomem *addr) +{ + u32 i = 0, ret_val; + + do { + ret_val = readl(addr); + i++; + } while (ret_val == 0 && i < 3); + + return ret_val; +} + +static inline u32 +_base_readl(const volatile void __iomem *addr) +{ + return readl(addr); +} + /** * _base_clone_reply_to_sys_mem - copies reply to reply free iomem * in BAR0 space. @@ -716,7 +742,7 @@ mpt3sas_halt_firmware(struct MPT3SAS_ADAPTER *ioc) dump_stack(); - doorbell = readl(&ioc->chip->Doorbell); + doorbell = ioc->base_readl(&ioc->chip->Doorbell); if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) mpt3sas_base_fault_info(ioc , doorbell); else { @@ -1325,10 +1351,10 @@ _base_mask_interrupts(struct MPT3SAS_ADAPTER *ioc) u32 him_register; ioc->mask_interrupts = 1; - him_register = readl(&ioc->chip->HostInterruptMask); + him_register = ioc->base_readl(&ioc->chip->HostInterruptMask); him_register |= MPI2_HIM_DIM + MPI2_HIM_RIM + MPI2_HIM_RESET_IRQ_MASK; writel(him_register, &ioc->chip->HostInterruptMask); - readl(&ioc->chip->HostInterruptMask); + ioc->base_readl(&ioc->chip->HostInterruptMask); } /** @@ -1342,7 +1368,7 @@ _base_unmask_interrupts(struct MPT3SAS_ADAPTER *ioc) { u32 him_register; - him_register = readl(&ioc->chip->HostInterruptMask); + him_register = ioc->base_readl(&ioc->chip->HostInterruptMask); him_register &= ~MPI2_HIM_RIM; writel(him_register, &ioc->chip->HostInterruptMask); ioc->mask_interrupts = 0; @@ -3319,8 +3345,9 @@ _base_mpi_ep_writeq(__u64 b, volatile void __iomem *addr, static inline void _base_writeq(__u64 b, volatile void __iomem *addr, spinlock_t *writeq_lock) { + wmb(); __raw_writeq(b, addr); - mmiowb(); + barrier(); } #else static inline void @@ -4060,7 +4087,7 @@ _base_static_config_pages(struct MPT3SAS_ADAPTER *ioc) * flag unset in NVDATA. */ mpt3sas_config_get_manufacturing_pg11(ioc, &mpi_reply, &ioc->manu_pg11); - if (ioc->manu_pg11.EEDPTagMode == 0) { + if (!ioc->is_gen35_ioc && ioc->manu_pg11.EEDPTagMode == 0) { pr_err("%s: overriding NVDATA EEDPTagMode setting\n", ioc->name); ioc->manu_pg11.EEDPTagMode &= ~0x3; @@ -4854,7 +4881,7 @@ mpt3sas_base_get_iocstate(struct MPT3SAS_ADAPTER *ioc, int cooked) { u32 s, sc; - s = readl(&ioc->chip->Doorbell); + s = ioc->base_readl(&ioc->chip->Doorbell); sc = s & MPI2_IOC_STATE_MASK; return cooked ? 
sc : s;
 }
@@ -4910,7 +4937,7 @@ _base_wait_for_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout)
 	count = 0;
 	cntdn = 1000 * timeout;
 	do {
-		int_status = readl(&ioc->chip->HostInterruptStatus);
+		int_status = ioc->base_readl(&ioc->chip->HostInterruptStatus);
 		if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
 			dhsprintk(ioc,
 				  ioc_info(ioc, "%s: successful count(%d), timeout(%d)\n",
@@ -4936,7 +4963,7 @@ _base_spin_on_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout)
 	count = 0;
 	cntdn = 2000 * timeout;
 	do {
-		int_status = readl(&ioc->chip->HostInterruptStatus);
+		int_status = ioc->base_readl(&ioc->chip->HostInterruptStatus);
 		if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
 			dhsprintk(ioc,
 				  ioc_info(ioc, "%s: successful count(%d), timeout(%d)\n",
@@ -4974,14 +5001,14 @@ _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout)
 	count = 0;
 	cntdn = 1000 * timeout;
 	do {
-		int_status = readl(&ioc->chip->HostInterruptStatus);
+		int_status = ioc->base_readl(&ioc->chip->HostInterruptStatus);
 		if (!(int_status & MPI2_HIS_SYS2IOC_DB_STATUS)) {
 			dhsprintk(ioc,
 				  ioc_info(ioc, "%s: successful count(%d), timeout(%d)\n",
 					   __func__, count, timeout));
 			return 0;
 		} else if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
-			doorbell = readl(&ioc->chip->Doorbell);
+			doorbell = ioc->base_readl(&ioc->chip->Doorbell);
 			if ((doorbell & MPI2_IOC_STATE_MASK) ==
 			    MPI2_IOC_STATE_FAULT) {
 				mpt3sas_base_fault_info(ioc , doorbell);
@@ -5016,7 +5043,7 @@ _base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout)
 	count = 0;
 	cntdn = 1000 * timeout;
 	do {
-		doorbell_reg = readl(&ioc->chip->Doorbell);
+		doorbell_reg = ioc->base_readl(&ioc->chip->Doorbell);
 		if (!(doorbell_reg & MPI2_DOORBELL_USED)) {
 			dhsprintk(ioc,
 				  ioc_info(ioc, "%s: successful count(%d), timeout(%d)\n",
@@ -5077,6 +5104,39 @@ _base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout)
 	return r;
 }
 
+/**
+ * mpt3sas_wait_for_ioc - IOC's operational state is checked here.
+ * @ioc: per adapter object
+ * @timeout: timeout in seconds
+ *
+ * Return: Waits up to timeout seconds for the IOC to
+ * become operational. Returns 0 if IOC is present
+ * and operational; otherwise returns -EFAULT.
+ */ + +int +mpt3sas_wait_for_ioc(struct MPT3SAS_ADAPTER *ioc, int timeout) +{ + int wait_state_count = 0; + u32 ioc_state; + + do { + ioc_state = mpt3sas_base_get_iocstate(ioc, 1); + if (ioc_state == MPI2_IOC_STATE_OPERATIONAL) + break; + ssleep(1); + ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", + __func__, ++wait_state_count); + } while (--timeout); + if (!timeout) { + ioc_err(ioc, "%s: failed due to ioc not operational\n", __func__); + return -EFAULT; + } + if (wait_state_count) + ioc_info(ioc, "ioc is operational\n"); + return 0; +} + /** * _base_handshake_req_reply_wait - send request thru doorbell interface * @ioc: per adapter object @@ -5098,13 +5158,13 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, __le32 *mfp; /* make sure doorbell is not in use */ - if ((readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) { + if ((ioc->base_readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) { ioc_err(ioc, "doorbell is in use (line=%d)\n", __LINE__); return -EFAULT; } /* clear pending doorbell interrupts from previous state changes */ - if (readl(&ioc->chip->HostInterruptStatus) & + if (ioc->base_readl(&ioc->chip->HostInterruptStatus) & MPI2_HIS_IOC2SYS_DB_STATUS) writel(0, &ioc->chip->HostInterruptStatus); @@ -5147,7 +5207,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, } /* read the first two 16-bits, it gives the total length of the reply */ - reply[0] = le16_to_cpu(readl(&ioc->chip->Doorbell) + reply[0] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_DATA_MASK); writel(0, &ioc->chip->HostInterruptStatus); if ((_base_wait_for_doorbell_int(ioc, 5))) { @@ -5155,7 +5215,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, __LINE__); return -EFAULT; } - reply[1] = le16_to_cpu(readl(&ioc->chip->Doorbell) + reply[1] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_DATA_MASK); writel(0, &ioc->chip->HostInterruptStatus); @@ -5166,9 +5226,10 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, return -EFAULT; } if (i >= reply_bytes/2) /* overflow case */ - readl(&ioc->chip->Doorbell); + ioc->base_readl(&ioc->chip->Doorbell); else - reply[i] = le16_to_cpu(readl(&ioc->chip->Doorbell) + reply[i] = le16_to_cpu( + ioc->base_readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_DATA_MASK); writel(0, &ioc->chip->HostInterruptStatus); } @@ -5211,11 +5272,9 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc, Mpi2SasIoUnitControlRequest_t *mpi_request) { u16 smid; - u32 ioc_state; u8 issue_reset = 0; int rc; void *request; - u16 wait_state_count; dinitprintk(ioc, ioc_info(ioc, "%s\n", __func__)); @@ -5227,20 +5286,9 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc, goto out; } - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == 10) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - rc = -EFAULT; - goto out; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } + rc = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); + if (rc) + goto out; smid = mpt3sas_base_get_smid(ioc, ioc->base_cb_idx); if (!smid) { @@ -5306,11 +5354,9 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc, Mpi2SepReply_t *mpi_reply, Mpi2SepRequest_t *mpi_request) { u16 smid; - u32 ioc_state; 
u8 issue_reset = 0; int rc; void *request; - u16 wait_state_count; dinitprintk(ioc, ioc_info(ioc, "%s\n", __func__)); @@ -5322,20 +5368,9 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc, goto out; } - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == 10) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - rc = -EFAULT; - goto out; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } + rc = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); + if (rc) + goto out; smid = mpt3sas_base_get_smid(ioc, ioc->base_cb_idx); if (!smid) { @@ -6020,14 +6055,14 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc) if (count++ > 20) goto out; - host_diagnostic = readl(&ioc->chip->HostDiagnostic); + host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic); drsprintk(ioc, ioc_info(ioc, "wrote magic sequence: count(%d), host_diagnostic(0x%08x)\n", count, host_diagnostic)); } while ((host_diagnostic & MPI2_DIAG_DIAG_WRITE_ENABLE) == 0); - hcb_size = readl(&ioc->chip->HCBSize); + hcb_size = ioc->base_readl(&ioc->chip->HCBSize); drsprintk(ioc, ioc_info(ioc, "diag reset: issued\n")); writel(host_diagnostic | MPI2_DIAG_RESET_ADAPTER, @@ -6040,7 +6075,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc) for (count = 0; count < (300000000 / MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC); count++) { - host_diagnostic = readl(&ioc->chip->HostDiagnostic); + host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic); if (host_diagnostic == 0xFFFFFFFF) goto out; @@ -6391,6 +6426,10 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc) ioc->rdpq_array_enable_assigned = 0; ioc->dma_mask = 0; + if (ioc->is_aero_ioc) + ioc->base_readl = &_base_readl_aero; + else + ioc->base_readl = &_base_readl; r = mpt3sas_base_map_resources(ioc); if (r) goto out_free_resources; diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h index 8f1d6b071b39..800351932cc3 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_base.h +++ b/drivers/scsi/mpt3sas/mpt3sas_base.h @@ -55,6 +55,7 @@ #include "mpi/mpi2_tool.h" #include "mpi/mpi2_sas.h" #include "mpi/mpi2_pci.h" +#include "mpi/mpi2_image.h" #include #include @@ -74,9 +75,9 @@ #define MPT3SAS_DRIVER_NAME "mpt3sas" #define MPT3SAS_AUTHOR "Avago Technologies " #define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver" -#define MPT3SAS_DRIVER_VERSION "26.100.00.00" -#define MPT3SAS_MAJOR_VERSION 26 -#define MPT3SAS_MINOR_VERSION 100 +#define MPT3SAS_DRIVER_VERSION "27.101.00.00" +#define MPT3SAS_MAJOR_VERSION 27 +#define MPT3SAS_MINOR_VERSION 101 #define MPT3SAS_BUILD_VERSION 0 #define MPT3SAS_RELEASE_VERSION 00 @@ -139,6 +140,9 @@ #define DEFAULT_NUM_FWCHAIN_ELEMTS 8 #define FW_IMG_HDR_READ_TIMEOUT 15 + +#define IOC_OPERATIONAL_WAIT_COUNT 10 + /* * NVMe defines */ @@ -908,6 +912,7 @@ typedef void (*NVME_BUILD_PRP)(struct MPT3SAS_ADAPTER *ioc, u16 smid, typedef void (*PUT_SMID_IO_FP_HIP) (struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 funcdep); typedef void (*PUT_SMID_DEFAULT) (struct MPT3SAS_ADAPTER *ioc, u16 smid); +typedef u32 (*BASE_READ_REG) (const volatile void __iomem *addr); /* IOC Facts and Port Facts converted from little endian to cpu */ union mpi3_version_union { @@ -1388,6 +1393,7 @@ struct MPT3SAS_ADAPTER { u8 hide_drives; spinlock_t diag_trigger_lock; u8 diag_trigger_active; + BASE_READ_REG 
base_readl; struct SL_WH_MASTER_TRIGGER_T diag_trigger_master; struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event; struct SL_WH_SCSI_TRIGGERS_T diag_trigger_scsi; @@ -1395,6 +1401,7 @@ struct MPT3SAS_ADAPTER { void *device_remove_in_progress; u16 device_remove_in_progress_sz; u8 is_gen35_ioc; + u8 is_aero_ioc; PUT_SMID_IO_FP_HIP put_smid_scsi_io; }; @@ -1487,6 +1494,7 @@ mpt3sas_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc); u8 mpt3sas_base_check_cmd_timeout(struct MPT3SAS_ADAPTER *ioc, u8 status, void *mpi_request, int sz); +int mpt3sas_wait_for_ioc(struct MPT3SAS_ADAPTER *ioc, int wait_count); /* scsih shared API */ struct scsi_cmnd *mpt3sas_scsih_scsi_lookup_get(struct MPT3SAS_ADAPTER *ioc, diff --git a/drivers/scsi/mpt3sas/mpt3sas_config.c b/drivers/scsi/mpt3sas/mpt3sas_config.c index 02209447f4ef..fb0a17252f86 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_config.c +++ b/drivers/scsi/mpt3sas/mpt3sas_config.c @@ -119,7 +119,7 @@ _config_display_some_debug(struct MPT3SAS_ADAPTER *ioc, u16 smid, desc = "raid_volume"; break; case MPI2_CONFIG_PAGETYPE_MANUFACTURING: - desc = "manufaucturing"; + desc = "manufacturing"; break; case MPI2_CONFIG_PAGETYPE_RAID_PHYSDISK: desc = "physdisk"; @@ -300,11 +300,9 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t void *config_page, u16 config_page_sz) { u16 smid; - u32 ioc_state; Mpi2ConfigRequest_t *config_request; int r; u8 retry_count, issue_host_reset = 0; - u16 wait_state_count; struct config_request mem; u32 ioc_status = UINT_MAX; @@ -361,23 +359,10 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t ioc_info(ioc, "%s: attempting retry (%d)\n", __func__, retry_count); } - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - ioc->config_cmds.status = MPT3_CMD_NOT_USED; - r = -EFAULT; - goto free_mem; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } - if (wait_state_count) - ioc_info(ioc, "%s: ioc is operational\n", __func__); + + r = mpt3sas_wait_for_ioc(ioc, MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT); + if (r) + goto free_mem; smid = mpt3sas_base_get_smid(ioc, ioc->config_cb_idx); if (!smid) { @@ -673,10 +658,6 @@ mpt3sas_config_set_manufacturing_pg11(struct MPT3SAS_ADAPTER *ioc, r = _config_request(ioc, &mpi_request, mpi_reply, MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page, sizeof(*config_page)); - mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_WRITE_NVRAM; - r = _config_request(ioc, &mpi_request, mpi_reply, - MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page, - sizeof(*config_page)); out: return r; } diff --git a/drivers/scsi/mpt3sas/mpt3sas_ctl.c b/drivers/scsi/mpt3sas/mpt3sas_ctl.c index 4afa597cbfba..b2bb47c14d35 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_ctl.c +++ b/drivers/scsi/mpt3sas/mpt3sas_ctl.c @@ -641,7 +641,6 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg, MPI2DefaultReply_t *mpi_reply; Mpi26NVMeEncapsulatedRequest_t *nvme_encap_request = NULL; struct _pcie_device *pcie_device = NULL; - u32 ioc_state; u16 smid; u8 timeout; u8 issue_reset; @@ -654,7 +653,6 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg, dma_addr_t data_in_dma = 0; size_t data_in_sz = 0; long ret; - u16 wait_state_count; u16 device_handle = 
MPT3SAS_INVALID_DEVICE_HANDLE; u8 tr_method = MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE; @@ -666,22 +664,9 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg, goto out; } - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == 10) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - ret = -EFAULT; - goto out; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } - if (wait_state_count) - ioc_info(ioc, "%s: ioc is operational\n", __func__); + ret = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); + if (ret) + goto out; mpi_request = kzalloc(ioc->request_sz, GFP_KERNEL); if (!mpi_request) { diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c index 03c52847ed07..6be39dc27103 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c +++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c @@ -3748,6 +3748,40 @@ _scsih_tm_tr_complete(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, return _scsih_check_for_pending_tm(ioc, smid); } +/** + * _scsih_allow_scmd_to_device - check whether the scmd can be issued to the IOC. + * @ioc: per adapter object + * @scmd: pointer to scsi command object + * + * Return: true if the scmd can be issued to the IOC, false otherwise. + */ +inline bool _scsih_allow_scmd_to_device(struct MPT3SAS_ADAPTER *ioc, + struct scsi_cmnd *scmd) +{ + + if (ioc->pci_error_recovery) + return false; + + if (ioc->hba_mpi_version_belonged == MPI2_VERSION) { + if (ioc->remove_host) + return false; + + return true; + } + + if (ioc->remove_host) { + + switch (scmd->cmnd[0]) { + case SYNCHRONIZE_CACHE: + case START_STOP: + return true; + default: + return false; + } + } + + return true; +} /** * _scsih_sas_control_complete - completion routine @@ -4571,7 +4605,7 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd) return 0; } - if (ioc->pci_error_recovery || ioc->remove_host) { + if (!(_scsih_allow_scmd_to_device(ioc, scmd))) { scmd->result = DID_NO_CONNECT << 16; scmd->scsi_done(scmd); return 0; @@ -9641,6 +9675,7 @@ static void scsih_remove(struct pci_dev *pdev) /* release all the volumes */ _scsih_ir_shutdown(ioc); + sas_remove_host(shost); list_for_each_entry_safe(raid_device, next, &ioc->raid_device_list, list) { if (raid_device->starget) { @@ -9682,7 +9717,6 @@ static void scsih_remove(struct pci_dev *pdev) ioc->sas_hba.num_phys = 0; } - sas_remove_host(shost); mpt3sas_base_detach(ioc); spin_lock(&gioc_lock); list_del(&ioc->list); @@ -10139,7 +10173,6 @@ static struct scsi_host_template mpt2sas_driver_template = { .sg_tablesize = MPT2SAS_SG_DEPTH, .max_sectors = 32767, .cmd_per_lun = 7, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = mpt3sas_host_attrs, .sdev_attrs = mpt3sas_dev_attrs, .track_queue_depth = 1, @@ -10178,7 +10211,6 @@ static struct scsi_host_template mpt3sas_driver_template = { .sg_tablesize = MPT3SAS_SG_DEPTH, .max_sectors = 32767, .cmd_per_lun = 7, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = mpt3sas_host_attrs, .sdev_attrs = mpt3sas_dev_attrs, .track_queue_depth = 1, @@ -10250,6 +10282,10 @@ _scsih_determine_hba_mpi_version(struct pci_dev *pdev) case MPI26_MFGPAGE_DEVID_SAS3516_1: case MPI26_MFGPAGE_DEVID_SAS3416: case MPI26_MFGPAGE_DEVID_SAS3616: + case MPI26_MFGPAGE_DEVID_CFG_SEC_3916: + case MPI26_MFGPAGE_DEVID_HARD_SEC_3916: + case
MPI26_MFGPAGE_DEVID_CFG_SEC_3816: + case MPI26_MFGPAGE_DEVID_HARD_SEC_3816: return MPI26_VERSION; } return 0; @@ -10337,8 +10373,17 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id) case MPI26_MFGPAGE_DEVID_SAS3616: ioc->is_gen35_ioc = 1; break; + case MPI26_MFGPAGE_DEVID_CFG_SEC_3816: + case MPI26_MFGPAGE_DEVID_CFG_SEC_3916: + dev_info(&pdev->dev, + "HBA is in Configurable Secure mode\n"); + /* fall through */ + case MPI26_MFGPAGE_DEVID_HARD_SEC_3816: + case MPI26_MFGPAGE_DEVID_HARD_SEC_3916: + ioc->is_aero_ioc = ioc->is_gen35_ioc = 1; + break; default: - ioc->is_gen35_ioc = 0; + ioc->is_gen35_ioc = ioc->is_aero_ioc = 0; } if ((ioc->hba_mpi_version_belonged == MPI25_VERSION && pdev->revision >= SAS3_PCI_DEVICE_C0_REVISION) || @@ -10795,6 +10840,23 @@ static const struct pci_device_id mpt3sas_pci_table[] = { /* Mercator ~ 3616*/ { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_SAS3616, PCI_ANY_ID, PCI_ANY_ID }, + + /* Aero SI 0x00E1 Configurable Secure + * 0x00E2 Hard Secure + */ + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_CFG_SEC_3916, + PCI_ANY_ID, PCI_ANY_ID }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_HARD_SEC_3916, + PCI_ANY_ID, PCI_ANY_ID }, + + /* Sea SI 0x00E5 Configurable Secure + * 0x00E6 Hard Secure + */ + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_CFG_SEC_3816, + PCI_ANY_ID, PCI_ANY_ID }, + { MPI2_MFGPAGE_VENDORID_LSI, MPI26_MFGPAGE_DEVID_HARD_SEC_3816, + PCI_ANY_ID, PCI_ANY_ID }, + {0} /* Terminating entry */ }; MODULE_DEVICE_TABLE(pci, mpt3sas_pci_table); diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c index 6a8a3c09b4b1..60ae2d0feb2b 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_transport.c +++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c @@ -296,7 +296,6 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc, struct rep_manu_request *manufacture_request; int rc; u16 smid; - u32 ioc_state; void *psge; u8 issue_reset = 0; void *data_out = NULL; @@ -304,7 +303,6 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc, dma_addr_t data_in_dma; size_t data_in_sz; size_t data_out_sz; - u16 wait_state_count; if (ioc->shost_recovery || ioc->pci_error_recovery) { ioc_info(ioc, "%s: host reset in progress!\n", __func__); @@ -320,22 +318,9 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc, } ioc->transport_cmds.status = MPT3_CMD_PENDING; - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == 10) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - rc = -EFAULT; - goto out; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } - if (wait_state_count) - ioc_info(ioc, "%s: ioc is operational\n", __func__); + rc = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); + if (rc) + goto out; smid = mpt3sas_base_get_smid(ioc, ioc->transport_cb_idx); if (!smid) { @@ -821,10 +806,13 @@ mpt3sas_transport_port_remove(struct MPT3SAS_ADAPTER *ioc, u64 sas_address, mpt3sas_port->remote_identify.sas_address, mpt3sas_phy->phy_id); mpt3sas_phy->phy_belongs_to_port = 0; - sas_port_delete_phy(mpt3sas_port->port, mpt3sas_phy->phy); + if (!ioc->remove_host) + sas_port_delete_phy(mpt3sas_port->port, + mpt3sas_phy->phy); list_del(&mpt3sas_phy->port_siblings); } - sas_port_delete(mpt3sas_port->port); + if (!ioc->remove_host) + 
sas_port_delete(mpt3sas_port->port); kfree(mpt3sas_port); } @@ -1076,13 +1064,11 @@ _transport_get_expander_phy_error_log(struct MPT3SAS_ADAPTER *ioc, struct phy_error_log_reply *phy_error_log_reply; int rc; u16 smid; - u32 ioc_state; void *psge; u8 issue_reset = 0; void *data_out = NULL; dma_addr_t data_out_dma; u32 sz; - u16 wait_state_count; if (ioc->shost_recovery || ioc->pci_error_recovery) { ioc_info(ioc, "%s: host reset in progress!\n", __func__); @@ -1098,22 +1084,9 @@ _transport_get_expander_phy_error_log(struct MPT3SAS_ADAPTER *ioc, } ioc->transport_cmds.status = MPT3_CMD_PENDING; - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == 10) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - rc = -EFAULT; - goto out; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } - if (wait_state_count) - ioc_info(ioc, "%s: ioc is operational\n", __func__); + rc = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); + if (rc) + goto out; smid = mpt3sas_base_get_smid(ioc, ioc->transport_cb_idx); if (!smid) { @@ -1381,13 +1354,11 @@ _transport_expander_phy_control(struct MPT3SAS_ADAPTER *ioc, struct phy_control_reply *phy_control_reply; int rc; u16 smid; - u32 ioc_state; void *psge; u8 issue_reset = 0; void *data_out = NULL; dma_addr_t data_out_dma; u32 sz; - u16 wait_state_count; if (ioc->shost_recovery || ioc->pci_error_recovery) { ioc_info(ioc, "%s: host reset in progress!\n", __func__); @@ -1403,22 +1374,9 @@ _transport_expander_phy_control(struct MPT3SAS_ADAPTER *ioc, } ioc->transport_cmds.status = MPT3_CMD_PENDING; - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == 10) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - rc = -EFAULT; - goto out; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } - if (wait_state_count) - ioc_info(ioc, "%s: ioc is operational\n", __func__); + rc = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); + if (rc) + goto out; smid = mpt3sas_base_get_smid(ioc, ioc->transport_cb_idx); if (!smid) { @@ -1880,7 +1838,6 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost, Mpi2SmpPassthroughReply_t *mpi_reply; int rc; u16 smid; - u32 ioc_state; void *psge; dma_addr_t dma_addr_in; dma_addr_t dma_addr_out; @@ -1888,7 +1845,6 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost, void *addr_out = NULL; size_t dma_len_in; size_t dma_len_out; - u16 wait_state_count; unsigned int reslen = 0; if (ioc->shost_recovery || ioc->pci_error_recovery) { @@ -1924,22 +1880,9 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost, if (rc) goto unmap_out; - wait_state_count = 0; - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) { - if (wait_state_count++ == 10) { - ioc_err(ioc, "%s: failed due to ioc not operational\n", - __func__); - rc = -EFAULT; - goto unmap_in; - } - ssleep(1); - ioc_state = mpt3sas_base_get_iocstate(ioc, 1); - ioc_info(ioc, "%s: waiting for operational state(count=%d)\n", - __func__, wait_state_count); - } - if (wait_state_count) - ioc_info(ioc, "%s: ioc is operational\n", __func__); + rc = 
mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); + if (rc) + goto unmap_in; smid = mpt3sas_base_get_smid(ioc, ioc->transport_cb_idx); if (!smid) { diff --git a/drivers/scsi/mvme147.c b/drivers/scsi/mvme147.c index 7d1ab414b78f..ca96d6d9c350 100644 --- a/drivers/scsi/mvme147.c +++ b/drivers/scsi/mvme147.c @@ -78,7 +78,6 @@ static struct scsi_host_template mvme147_host_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING }; static struct Scsi_Host *mvme147_shost; diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c index 3ac34373746c..030d911ee374 100644 --- a/drivers/scsi/mvsas/mv_init.c +++ b/drivers/scsi/mvsas/mv_init.c @@ -59,7 +59,6 @@ static struct scsi_host_template mvs_sht = { .this_id = -1, .sg_tablesize = SG_ALL, .max_sectors = SCSI_DEFAULT_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = sas_eh_device_reset_handler, .eh_target_reset_handler = sas_eh_target_reset_handler, .target_destroy = sas_target_destroy, diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c index 2458974d1af6..dbe753fba486 100644 --- a/drivers/scsi/mvumi.c +++ b/drivers/scsi/mvumi.c @@ -2197,6 +2197,7 @@ static struct scsi_host_template mvumi_template = { .eh_timed_out = mvumi_timed_out, .eh_host_reset_handler = mvumi_host_reset, .bios_param = mvumi_bios_param, + .dma_boundary = PAGE_SIZE - 1, .this_id = -1, }; @@ -2620,7 +2621,7 @@ static int __maybe_unused mvumi_resume(struct pci_dev *pdev) } ret = mvumi_pci_set_master(pdev); - ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); if (ret) goto fail; ret = pci_request_regions(mhba->pdev, MV_DRIVER_NAME); diff --git a/drivers/scsi/myrb.c b/drivers/scsi/myrb.c index 0642f2d0a3bb..539ac8ce4fcd 100644 --- a/drivers/scsi/myrb.c +++ b/drivers/scsi/myrb.c @@ -1528,6 +1528,7 @@ static int myrb_ldev_queuecommand(struct Scsi_Host *shost, scmd->scsi_done(scmd); return 0; } + /* fall through */ case WRITE_6: lba = (((scmd->cmnd[1] & 0x1F) << 16) | (scmd->cmnd[2] << 8) | @@ -1544,6 +1545,7 @@ static int myrb_ldev_queuecommand(struct Scsi_Host *shost, scmd->scsi_done(scmd); return 0; } + /* fall through */ case WRITE_10: case VERIFY: /* 0x2F */ case WRITE_VERIFY: /* 0x2E */ @@ -1560,6 +1562,7 @@ static int myrb_ldev_queuecommand(struct Scsi_Host *shost, scmd->scsi_done(scmd); return 0; } + /* fall through */ case WRITE_12: case VERIFY_12: /* 0xAF */ case WRITE_VERIFY_12: /* 0xAE */ diff --git a/drivers/scsi/ncr53c8xx.c b/drivers/scsi/ncr53c8xx.c index 6cd3e289ef99..1a236a3dfd51 100644 --- a/drivers/scsi/ncr53c8xx.c +++ b/drivers/scsi/ncr53c8xx.c @@ -8313,7 +8313,6 @@ struct Scsi_Host * __init ncr_attach(struct scsi_host_template *tpnt, tpnt->this_id = 7; tpnt->sg_tablesize = SCSI_NCR_SG_TABLESIZE; tpnt->cmd_per_lun = SCSI_NCR_CMD_PER_LUN; - tpnt->use_clustering = ENABLE_CLUSTERING; if (device->differential) driver_setup.diff_support = device->differential; diff --git a/drivers/scsi/nsp32.c b/drivers/scsi/nsp32.c index 5aac3e801903..00e3cbee55b8 100644 --- a/drivers/scsi/nsp32.c +++ b/drivers/scsi/nsp32.c @@ -274,7 +274,7 @@ static struct scsi_host_template nsp32_template = { .sg_tablesize = NSP32_SG_SIZE, .max_sectors = 128, .this_id = NSP32_HOST_SCSIID, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .eh_abort_handler = nsp32_eh_abort, .eh_host_reset_handler = nsp32_eh_host_reset, /* .highmem_io = 1, */ diff --git a/drivers/scsi/osd/osd_initiator.c 
b/drivers/scsi/osd/osd_initiator.c index e19fa883376f..60cf7c5eb880 100644 --- a/drivers/scsi/osd/osd_initiator.c +++ b/drivers/scsi/osd/osd_initiator.c @@ -506,11 +506,11 @@ static void osd_request_async_done(struct request *req, blk_status_t error) _set_error_resid(or, req, error); if (req->next_rq) { - __blk_put_request(req->q, req->next_rq); + blk_put_request(req->next_rq); req->next_rq = NULL; } - __blk_put_request(req->q, req); + blk_put_request(req); or->request = NULL; or->in.req = NULL; or->out.req = NULL; diff --git a/drivers/scsi/osst.c b/drivers/scsi/osst.c index 7a1a1edde35d..664c1238a87f 100644 --- a/drivers/scsi/osst.c +++ b/drivers/scsi/osst.c @@ -341,7 +341,7 @@ static void osst_end_async(struct request *req, blk_status_t status) blk_rq_unmap_user(SRpnt->bio); } - __blk_put_request(req->q, req); + blk_put_request(req); } /* osst_request memory management */ diff --git a/drivers/scsi/pcmcia/nsp_cs.c b/drivers/scsi/pcmcia/nsp_cs.c index f3230494a8c9..1bd6825a4f14 100644 --- a/drivers/scsi/pcmcia/nsp_cs.c +++ b/drivers/scsi/pcmcia/nsp_cs.c @@ -86,7 +86,7 @@ static struct scsi_host_template nsp_driver_template = { .can_queue = 1, .this_id = NSP_INITIATOR_ID, .sg_tablesize = SG_ALL, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, }; static nsp_hw_data nsp_data_base; /* attach <-> detect glue */ diff --git a/drivers/scsi/pcmcia/qlogic_stub.c b/drivers/scsi/pcmcia/qlogic_stub.c index 173351a8554b..828d53faf09a 100644 --- a/drivers/scsi/pcmcia/qlogic_stub.c +++ b/drivers/scsi/pcmcia/qlogic_stub.c @@ -72,7 +72,7 @@ static struct scsi_host_template qlogicfas_driver_template = { .can_queue = 1, .this_id = -1, .sg_tablesize = SG_ALL, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, }; /*====================================================================*/ diff --git a/drivers/scsi/pcmcia/sym53c500_cs.c b/drivers/scsi/pcmcia/sym53c500_cs.c index a3b63bea0e50..d1e98a6ea28f 100644 --- a/drivers/scsi/pcmcia/sym53c500_cs.c +++ b/drivers/scsi/pcmcia/sym53c500_cs.c @@ -680,7 +680,6 @@ static struct scsi_host_template sym53c500_driver_template = { .can_queue = 1, .this_id = 7, .sg_tablesize = 32, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = SYM53C500_shost_attrs }; diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c index d71e7e4ec29c..a36060c23b37 100644 --- a/drivers/scsi/pm8001/pm8001_init.c +++ b/drivers/scsi/pm8001/pm8001_init.c @@ -84,7 +84,6 @@ static struct scsi_host_template pm8001_sht = { .this_id = -1, .sg_tablesize = SG_ALL, .max_sectors = SCSI_DEFAULT_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = sas_eh_device_reset_handler, .eh_target_reset_handler = sas_eh_target_reset_handler, .target_destroy = sas_target_destroy, diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c index 4e86994e10e8..7c4673308f5b 100644 --- a/drivers/scsi/pmcraid.c +++ b/drivers/scsi/pmcraid.c @@ -846,16 +846,9 @@ static void pmcraid_erp_done(struct pmcraid_cmd *cmd) cmd->ioa_cb->ioarcb.cdb[0], ioasc); } - /* if we had allocated sense buffers for request sense, copy the sense - * release the buffers - */ - if (cmd->sense_buffer != NULL) { - memcpy(scsi_cmd->sense_buffer, - cmd->sense_buffer, - SCSI_SENSE_BUFFERSIZE); - pci_free_consistent(pinstance->pdev, - SCSI_SENSE_BUFFERSIZE, - cmd->sense_buffer, cmd->sense_buffer_dma); + if (cmd->sense_buffer) { + dma_unmap_single(&pinstance->pdev->dev, cmd->sense_buffer_dma, + SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE); cmd->sense_buffer = 
NULL; cmd->sense_buffer_dma = 0; } @@ -2444,13 +2437,12 @@ static void pmcraid_request_sense(struct pmcraid_cmd *cmd) { struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl; + struct device *dev = &cmd->drv_inst->pdev->dev; - /* allocate DMAable memory for sense buffers */ - cmd->sense_buffer = pci_alloc_consistent(cmd->drv_inst->pdev, - SCSI_SENSE_BUFFERSIZE, - &cmd->sense_buffer_dma); - - if (cmd->sense_buffer == NULL) { + cmd->sense_buffer = cmd->scsi_cmd->sense_buffer; + cmd->sense_buffer_dma = dma_map_single(dev, cmd->sense_buffer, + SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE); + if (dma_mapping_error(dev, cmd->sense_buffer_dma)) { pmcraid_err ("couldn't allocate sense buffer for request sense\n"); pmcraid_erp_done(cmd); @@ -2491,17 +2483,15 @@ static void pmcraid_request_sense(struct pmcraid_cmd *cmd) /** * pmcraid_cancel_all - cancel all outstanding IOARCBs as part of error recovery * @cmd: command that failed - * @sense: true if request_sense is required after cancel all + * @need_sense: true if request_sense is required after cancel all * * This function sends a cancel all to a device to clear the queue. */ -static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, u32 sense) +static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, bool need_sense) { struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd; struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; struct pmcraid_resource_entry *res = scsi_cmd->device->hostdata; - void (*cmd_done) (struct pmcraid_cmd *) = sense ? pmcraid_erp_done - : pmcraid_request_sense; memset(ioarcb->cdb, 0, PMCRAID_MAX_CDB_LEN); ioarcb->request_flags0 = SYNC_OVERRIDE; @@ -2519,7 +2509,8 @@ static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, u32 sense) /* writing to IOARRIN must be protected by host_lock, as mid-layer * schedule queuecommand while we are doing this */ - pmcraid_send_cmd(cmd, cmd_done, + pmcraid_send_cmd(cmd, need_sense ? 
+ pmcraid_erp_done : pmcraid_request_sense, PMCRAID_REQUEST_SENSE_TIMEOUT, pmcraid_timeout_handler); } @@ -2612,7 +2603,7 @@ static int pmcraid_error_handler(struct pmcraid_cmd *cmd) struct pmcraid_ioasa *ioasa = &cmd->ioa_cb->ioasa; u32 ioasc = le32_to_cpu(ioasa->ioasc); u32 masked_ioasc = ioasc & PMCRAID_IOASC_SENSE_MASK; - u32 sense_copied = 0; + bool sense_copied = false; if (!res) { pmcraid_info("resource pointer is NULL\n"); @@ -2684,7 +2675,7 @@ static int pmcraid_error_handler(struct pmcraid_cmd *cmd) memcpy(scsi_cmd->sense_buffer, ioasa->sense_data, data_size); - sense_copied = 1; + sense_copied = true; } if (RES_IS_GSCSI(res->cfg_entry)) @@ -3523,7 +3514,7 @@ static int pmcraid_build_passthrough_ioadls( return -ENOMEM; } - sglist->num_dma_sg = pci_map_sg(cmd->drv_inst->pdev, + sglist->num_dma_sg = dma_map_sg(&cmd->drv_inst->pdev->dev, sglist->scatterlist, sglist->num_sg, direction); @@ -3572,7 +3563,7 @@ static void pmcraid_release_passthrough_ioadls( struct pmcraid_sglist *sglist = cmd->sglist; if (buflen > 0) { - pci_unmap_sg(cmd->drv_inst->pdev, + dma_unmap_sg(&cmd->drv_inst->pdev->dev, sglist->scatterlist, sglist->num_sg, direction); @@ -4158,7 +4149,6 @@ static struct scsi_host_template pmcraid_host_template = { .max_sectors = PMCRAID_IOA_MAX_SECTORS, .no_write_same = 1, .cmd_per_lun = PMCRAID_MAX_CMD_PER_LUN, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = pmcraid_host_attrs, .proc_name = PMCRAID_DRIVER_NAME, }; @@ -4708,9 +4698,9 @@ static void pmcraid_release_host_rrqs(struct pmcraid_instance *pinstance, int maxindex) { int i; - for (i = 0; i < maxindex; i++) { - pci_free_consistent(pinstance->pdev, + for (i = 0; i < maxindex; i++) { + dma_free_coherent(&pinstance->pdev->dev, HRRQ_ENTRY_SIZE * PMCRAID_MAX_CMD, pinstance->hrrq_start[i], pinstance->hrrq_start_bus_addr[i]); @@ -4737,11 +4727,9 @@ static int pmcraid_allocate_host_rrqs(struct pmcraid_instance *pinstance) for (i = 0; i < pinstance->num_hrrq; i++) { pinstance->hrrq_start[i] = - pci_alloc_consistent( - pinstance->pdev, - buffer_size, - &(pinstance->hrrq_start_bus_addr[i])); - + dma_alloc_coherent(&pinstance->pdev->dev, buffer_size, + &pinstance->hrrq_start_bus_addr[i], + GFP_KERNEL); if (!pinstance->hrrq_start[i]) { pmcraid_err("pci_alloc failed for hrrq vector : %d\n", i); @@ -4770,7 +4758,7 @@ static int pmcraid_allocate_host_rrqs(struct pmcraid_instance *pinstance) static void pmcraid_release_hcams(struct pmcraid_instance *pinstance) { if (pinstance->ccn.msg != NULL) { - pci_free_consistent(pinstance->pdev, + dma_free_coherent(&pinstance->pdev->dev, PMCRAID_AEN_HDR_SIZE + sizeof(struct pmcraid_hcam_ccn_ext), pinstance->ccn.msg, @@ -4782,7 +4770,7 @@ static void pmcraid_release_hcams(struct pmcraid_instance *pinstance) } if (pinstance->ldn.msg != NULL) { - pci_free_consistent(pinstance->pdev, + dma_free_coherent(&pinstance->pdev->dev, PMCRAID_AEN_HDR_SIZE + sizeof(struct pmcraid_hcam_ldn), pinstance->ldn.msg, @@ -4803,17 +4791,15 @@ static void pmcraid_release_hcams(struct pmcraid_instance *pinstance) */ static int pmcraid_allocate_hcams(struct pmcraid_instance *pinstance) { - pinstance->ccn.msg = pci_alloc_consistent( - pinstance->pdev, + pinstance->ccn.msg = dma_alloc_coherent(&pinstance->pdev->dev, PMCRAID_AEN_HDR_SIZE + sizeof(struct pmcraid_hcam_ccn_ext), - &(pinstance->ccn.baddr)); + &pinstance->ccn.baddr, GFP_KERNEL); - pinstance->ldn.msg = pci_alloc_consistent( - pinstance->pdev, + pinstance->ldn.msg = dma_alloc_coherent(&pinstance->pdev->dev, PMCRAID_AEN_HDR_SIZE + sizeof(struct 
pmcraid_hcam_ldn), - &(pinstance->ldn.baddr)); + &pinstance->ldn.baddr, GFP_KERNEL); if (pinstance->ldn.msg == NULL || pinstance->ccn.msg == NULL) { pmcraid_release_hcams(pinstance); @@ -4841,7 +4827,7 @@ static void pmcraid_release_config_buffers(struct pmcraid_instance *pinstance) { if (pinstance->cfg_table != NULL && pinstance->cfg_table_bus_addr != 0) { - pci_free_consistent(pinstance->pdev, + dma_free_coherent(&pinstance->pdev->dev, sizeof(struct pmcraid_config_table), pinstance->cfg_table, pinstance->cfg_table_bus_addr); @@ -4886,10 +4872,10 @@ static int pmcraid_allocate_config_buffers(struct pmcraid_instance *pinstance) list_add_tail(&pinstance->res_entries[i].queue, &pinstance->free_res_q); - pinstance->cfg_table = - pci_alloc_consistent(pinstance->pdev, + pinstance->cfg_table = dma_alloc_coherent(&pinstance->pdev->dev, sizeof(struct pmcraid_config_table), - &pinstance->cfg_table_bus_addr); + &pinstance->cfg_table_bus_addr, + GFP_KERNEL); if (NULL == pinstance->cfg_table) { pmcraid_err("couldn't alloc DMA memory for config table\n"); @@ -4954,7 +4940,7 @@ static void pmcraid_release_buffers(struct pmcraid_instance *pinstance) pmcraid_release_host_rrqs(pinstance, pinstance->num_hrrq); if (pinstance->inq_data != NULL) { - pci_free_consistent(pinstance->pdev, + dma_free_coherent(&pinstance->pdev->dev, sizeof(struct pmcraid_inquiry_data), pinstance->inq_data, pinstance->inq_data_baddr); @@ -4964,7 +4950,7 @@ static void pmcraid_release_buffers(struct pmcraid_instance *pinstance) } if (pinstance->timestamp_data != NULL) { - pci_free_consistent(pinstance->pdev, + dma_free_coherent(&pinstance->pdev->dev, sizeof(struct pmcraid_timestamp_data), pinstance->timestamp_data, pinstance->timestamp_data_baddr); @@ -4981,8 +4967,8 @@ static void pmcraid_release_buffers(struct pmcraid_instance *pinstance) * This routine pre-allocates memory based on the type of block as below: * cmdblocks(PMCRAID_MAX_CMD): kernel memory using kernel's slab_allocator, * IOARCBs(PMCRAID_MAX_CMD) : DMAable memory, using pci pool allocator - * config-table entries : DMAable memory using pci_alloc_consistent - * HostRRQs : DMAable memory, using pci_alloc_consistent + * config-table entries : DMAable memory using dma_alloc_coherent + * HostRRQs : DMAable memory, using dma_alloc_coherent * * Return Value * 0 in case all of the blocks are allocated, -ENOMEM otherwise. 
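The pmcraid hunks in this range all apply the same mechanical conversion from the deprecated PCI DMA wrappers to the generic DMA API: the struct pci_dev argument becomes its embedded struct device, and the allocation gains an explicit gfp_t. A minimal sketch of the pattern, with illustrative names that are not pmcraid symbols:

	/*
	 * Sketch of the pci_alloc_consistent -> dma_alloc_coherent
	 * conversion; "shared_buf"/"bus_addr" are illustrative names.
	 */
	#include <linux/dma-mapping.h>
	#include <linux/pci.h>

	static void *alloc_shared_buf(struct pci_dev *pdev, size_t size,
				      dma_addr_t *bus_addr)
	{
		/* old: return pci_alloc_consistent(pdev, size, bus_addr); */
		return dma_alloc_coherent(&pdev->dev, size, bus_addr,
					  GFP_KERNEL);
	}

	static void free_shared_buf(struct pci_dev *pdev, size_t size,
				    void *buf, dma_addr_t bus_addr)
	{
		/* old: pci_free_consistent(pdev, size, buf, bus_addr); */
		dma_free_coherent(&pdev->dev, size, buf, bus_addr);
	}

Note that pci_alloc_consistent() implied GFP_ATOMIC; these init-time paths can safely pass GFP_KERNEL instead.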
@@ -5019,11 +5005,9 @@ static int pmcraid_init_buffers(struct pmcraid_instance *pinstance) } /* allocate DMAable memory for page D0 INQUIRY buffer */ - pinstance->inq_data = pci_alloc_consistent( - pinstance->pdev, + pinstance->inq_data = dma_alloc_coherent(&pinstance->pdev->dev, sizeof(struct pmcraid_inquiry_data), - &pinstance->inq_data_baddr); - + &pinstance->inq_data_baddr, GFP_KERNEL); if (pinstance->inq_data == NULL) { pmcraid_err("couldn't allocate DMA memory for INQUIRY\n"); pmcraid_release_buffers(pinstance); @@ -5031,11 +5015,10 @@ static int pmcraid_init_buffers(struct pmcraid_instance *pinstance) } /* allocate DMAable memory for set timestamp data buffer */ - pinstance->timestamp_data = pci_alloc_consistent( - pinstance->pdev, + pinstance->timestamp_data = dma_alloc_coherent(&pinstance->pdev->dev, sizeof(struct pmcraid_timestamp_data), - &pinstance->timestamp_data_baddr); - + &pinstance->timestamp_data_baddr, + GFP_KERNEL); if (pinstance->timestamp_data == NULL) { pmcraid_err("couldn't allocate DMA memory for \ set time_stamp \n"); @@ -5324,12 +5307,12 @@ static int pmcraid_resume(struct pci_dev *pdev) pci_set_master(pdev); - if ((sizeof(dma_addr_t) == 4) || - pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) - rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + if (sizeof(dma_addr_t) == 4 || + dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) + rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); if (rc == 0) - rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); + rc = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); if (rc != 0) { dev_err(&pdev->dev, "resume: Failed to set PCI DMA mask\n"); @@ -5733,19 +5716,19 @@ static int pmcraid_probe(struct pci_dev *pdev, /* Firmware requires the system bus address of IOARCB to be within * 32-bit addressable range though it has 64-bit IOARRIN register. * However, firmware supports 64-bit streaming DMA buffers, whereas - * coherent buffers are to be 32-bit. Since pci_alloc_consistent always + * coherent buffers are to be 32-bit. Since dma_alloc_coherent always * returns memory within 4GB (if not, change this logic), coherent * buffers are within firmware acceptable address ranges. 
*/ - if ((sizeof(dma_addr_t) == 4) || - pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) - rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + if (sizeof(dma_addr_t) == 4 || + dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) + rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); /* firmware expects 32-bit DMA addresses for IOARRIN register; set 32 - * bit mask for pci_alloc_consistent to return addresses within 4GB + * bit mask for dma_alloc_coherent to return addresses within 4GB */ if (rc == 0) - rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); + rc = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); if (rc != 0) { dev_err(&pdev->dev, "Failed to set PCI DMA mask\n"); diff --git a/drivers/scsi/ppa.c b/drivers/scsi/ppa.c index ee86a0c62dbf..c182b5458f98 100644 --- a/drivers/scsi/ppa.c +++ b/drivers/scsi/ppa.c @@ -978,7 +978,6 @@ static struct scsi_host_template ppa_template = { .bios_param = ppa_biosparam, .this_id = -1, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, .can_queue = 1, .slave_alloc = ppa_adjust_queue, }; diff --git a/drivers/scsi/ps3rom.c b/drivers/scsi/ps3rom.c index 4924424d20fe..8d769138c01c 100644 --- a/drivers/scsi/ps3rom.c +++ b/drivers/scsi/ps3rom.c @@ -349,7 +349,6 @@ static struct scsi_host_template ps3rom_host_template = { .sg_tablesize = SG_ALL, .emulated = 1, /* only sg driver uses this */ .max_sectors = PS3ROM_MAX_SECTORS, - .use_clustering = ENABLE_CLUSTERING, .module = THIS_MODULE, }; diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c index d5a4f17fce51..edcaf4b0cb0b 100644 --- a/drivers/scsi/qedf/qedf_main.c +++ b/drivers/scsi/qedf/qedf_main.c @@ -785,7 +785,6 @@ static struct scsi_host_template qedf_host_template = { .name = QEDF_MODULE_NAME, .this_id = -1, .cmd_per_lun = 32, - .use_clustering = ENABLE_CLUSTERING, .max_sectors = 0xffff, .queuecommand = qedf_queuecommand, .shost_attrs = qedf_host_attrs, @@ -2935,8 +2934,7 @@ static void qedf_free_fcoe_pf_param(struct qedf_ctx *qedf) qedf_free_global_queues(qedf); - if (qedf->global_queues) - kfree(qedf->global_queues); + kfree(qedf->global_queues); } /* diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h index a6f96b35e971..a26bb5066b90 100644 --- a/drivers/scsi/qedi/qedi.h +++ b/drivers/scsi/qedi/qedi.h @@ -45,7 +45,7 @@ struct qedi_endpoint; #define QEDI_MAX_TASK_NUM 0x0FFF #define QEDI_MAX_ISCSI_CONNS_PER_HBA 1024 #define QEDI_ISCSI_MAX_BDS_PER_CMD 255 /* Firmware max BDs is 255 */ -#define MAX_OUSTANDING_TASKS_PER_CON 1024 +#define MAX_OUTSTANDING_TASKS_PER_CON 1024 #define QEDI_MAX_BD_LEN 0xffff #define QEDI_BD_SPLIT_SZ 0x1000 @@ -63,12 +63,9 @@ struct qedi_endpoint; #define QEDI_LOCAL_PORT_INVALID 0xffff #define TX_RX_RING 16 #define RX_RING (TX_RX_RING - 1) -#define LL2_SINGLE_BUF_SIZE 0x400 -#define QEDI_PAGE_SIZE 4096 #define QEDI_PAGE_ALIGN(addr) ALIGN(addr, QEDI_PAGE_SIZE) #define QEDI_PAGE_MASK (~((QEDI_PAGE_SIZE) - 1)) -#define QEDI_PAGE_SIZE 4096 #define QEDI_HW_DMA_BOUNDARY 0xfff #define QEDI_PATH_HANDLE 0xFE0000000UL @@ -146,7 +143,7 @@ struct skb_work_list { }; /* Queue sizes in number of elements */ -#define QEDI_SQ_SIZE MAX_OUSTANDING_TASKS_PER_CON +#define QEDI_SQ_SIZE MAX_OUTSTANDING_TASKS_PER_CON #define QEDI_CQ_SIZE 2048 #define QEDI_CMDQ_SIZE QEDI_MAX_ISCSI_TASK #define QEDI_PROTO_CQ_PROD_IDX 0 diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c index 2f0a4f2c5ff8..4da660c1c431 100644 --- a/drivers/scsi/qedi/qedi_iscsi.c +++ b/drivers/scsi/qedi/qedi_iscsi.c @@ -61,7 +61,6 @@ struct scsi_host_template 
qedi_host_template = { .max_sectors = 0xffff, .dma_boundary = QEDI_HW_DMA_BOUNDARY, .cmd_per_lun = 128, - .use_clustering = ENABLE_CLUSTERING, .shost_attrs = qedi_shost_attrs, }; diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index 105b0e4d7818..5c53409a8cea 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -44,6 +44,11 @@ module_param(qedi_io_tracing, uint, 0644); MODULE_PARM_DESC(qedi_io_tracing, " Enable logging of SCSI requests/completions into trace buffer. (default off)."); +uint qedi_ll2_buf_size = 0x400; +module_param(qedi_ll2_buf_size, uint, 0644); +MODULE_PARM_DESC(qedi_ll2_buf_size, + "Set the ping packet size: default 0x400, jumbo packets 0x2400."); + const struct qed_iscsi_ops *qedi_ops; static struct scsi_transport_template *qedi_scsi_transport; static struct pci_driver qedi_pci_driver; @@ -228,7 +233,7 @@ static int __qedi_alloc_uio_rings(struct qedi_uio_dev *udev) } /* Allocating memory for Tx/Rx pkt buffer */ - udev->ll2_buf_size = TX_RX_RING * LL2_SINGLE_BUF_SIZE; + udev->ll2_buf_size = TX_RX_RING * qedi_ll2_buf_size; udev->ll2_buf_size = QEDI_PAGE_ALIGN(udev->ll2_buf_size); udev->ll2_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_COMP | __GFP_ZERO, 2); @@ -283,7 +288,7 @@ static int qedi_alloc_uio_rings(struct qedi_ctx *qedi) qedi->udev = udev; udev->tx_pkt = udev->ll2_buf; - udev->rx_pkt = udev->ll2_buf + LL2_SINGLE_BUF_SIZE; + udev->rx_pkt = udev->ll2_buf + qedi_ll2_buf_size; return 0; err_uctrl: @@ -644,8 +649,7 @@ static struct qedi_ctx *qedi_host_alloc(struct pci_dev *pdev) qedi->max_active_conns = ISCSI_MAX_SESS_PER_HBA; qedi->max_sqes = QEDI_SQ_SIZE; - if (shost_use_blk_mq(shost)) - shost->nr_hw_queues = MIN_NUM_CPUS_MSIX(qedi); + shost->nr_hw_queues = MIN_NUM_CPUS_MSIX(qedi); pci_set_drvdata(pdev, qedi); @@ -659,7 +663,7 @@ static int qedi_ll2_rx(void *cookie, struct sk_buff *skb, u32 arg1, u32 arg2) struct qedi_uio_dev *udev; struct qedi_uio_ctrl *uctrl; struct skb_work_list *work; - u32 prod; + struct ethhdr *eh; if (!qedi) { QEDI_ERR(NULL, "qedi is NULL\n"); @@ -673,6 +677,29 @@ static int qedi_ll2_rx(void *cookie, struct sk_buff *skb, u32 arg1, u32 arg2) return 0; } + eh = (struct ethhdr *)skb->data; + /* Undo VLAN encapsulation */ + if (eh->h_proto == htons(ETH_P_8021Q)) { + memmove((u8 *)eh + VLAN_HLEN, eh, ETH_ALEN * 2); + eh = (struct ethhdr *)skb_pull(skb, VLAN_HLEN); + skb_reset_mac_header(skb); + } + + /* Pass only ARP and IPv4/IPv6 frames up; drop anything else here to free it faster */ + if (eh->h_proto != htons(ETH_P_ARP) && + eh->h_proto != htons(ETH_P_IP) && + eh->h_proto != htons(ETH_P_IPV6)) { + QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_LL2, + "Dropping frame ethertype [0x%x] len [0x%x].\n", + eh->h_proto, skb->len); + kfree_skb(skb); + return 0; + } + + QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_LL2, + "Allowed frame ethertype [0x%x] len [0x%x].\n", + eh->h_proto, skb->len); + udev = qedi->udev; uctrl = udev->uctrl; @@ -695,17 +722,10 @@ static int qedi_ll2_rx(void *cookie, struct sk_buff *skb, u32 arg1, u32 arg2) spin_lock_bh(&qedi->ll2_lock); list_add_tail(&work->list, &qedi->ll2_skb_list); + spin_unlock_bh(&qedi->ll2_lock); - ++uctrl->hw_rx_prod_cnt; - prod = (uctrl->hw_rx_prod + 1) % RX_RING; - if (prod != uctrl->host_rx_cons) { - uctrl->hw_rx_prod = prod; - spin_unlock_bh(&qedi->ll2_lock); - wake_up_process(qedi->ll2_recv_thread); - return 0; - } + wake_up_process(qedi->ll2_recv_thread); - spin_unlock_bh(&qedi->ll2_lock); return 0; } @@ -720,6 +740,7 @@ static int qedi_ll2_process_skb(struct qedi_ctx
*qedi, struct sk_buff *skb, u32 rx_bd_prod; void *pkt; int len = 0; + u32 prod; if (!qedi) { QEDI_ERR(NULL, "qedi is NULL\n"); @@ -728,12 +749,16 @@ static int qedi_ll2_process_skb(struct qedi_ctx *qedi, struct sk_buff *skb, udev = qedi->udev; uctrl = udev->uctrl; - pkt = udev->rx_pkt + (uctrl->hw_rx_prod * LL2_SINGLE_BUF_SIZE); - len = min_t(u32, skb->len, (u32)LL2_SINGLE_BUF_SIZE); + + ++uctrl->hw_rx_prod_cnt; + prod = (uctrl->hw_rx_prod + 1) % RX_RING; + + pkt = udev->rx_pkt + (prod * qedi_ll2_buf_size); + len = min_t(u32, skb->len, (u32)qedi_ll2_buf_size); memcpy(pkt, skb->data, len); memset(&rxbd, 0, sizeof(rxbd)); - rxbd.rx_pkt_index = uctrl->hw_rx_prod; + rxbd.rx_pkt_index = prod; rxbd.rx_pkt_len = len; rxbd.vlan_id = vlan_id; @@ -744,6 +769,16 @@ static int qedi_ll2_process_skb(struct qedi_ctx *qedi, struct sk_buff *skb, memcpy(p_rxbd, &rxbd, sizeof(rxbd)); + QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_LL2, + "hw_rx_prod [%d] prod [%d] hw_rx_bd_prod [%d] rx_pkt_idx [%d] rx_len [%d].\n", + uctrl->hw_rx_prod, prod, uctrl->hw_rx_bd_prod, + rxbd.rx_pkt_index, rxbd.rx_pkt_len); + QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_LL2, + "host_rx_cons [%d] hw_rx_bd_cons [%d].\n", + uctrl->host_rx_cons, uctrl->host_rx_bd_cons); + + uctrl->hw_rx_prod = prod; + /* notify the iscsiuio about new packet */ uio_event_notify(&udev->qedi_uinfo); @@ -796,7 +831,7 @@ static int qedi_set_iscsi_pf_param(struct qedi_ctx *qedi) int rval = 0; - num_sq_pages = (MAX_OUSTANDING_TASKS_PER_CON * 8) / PAGE_SIZE; + num_sq_pages = (MAX_OUTSTANDING_TASKS_PER_CON * 8) / QEDI_PAGE_SIZE; qedi->num_queues = MIN_NUM_CPUS_MSIX(qedi); @@ -834,7 +869,7 @@ static int qedi_set_iscsi_pf_param(struct qedi_ctx *qedi) qedi->pf_params.iscsi_pf_params.max_fin_rt = 2; for (log_page_size = 0 ; log_page_size < 32 ; log_page_size++) { - if ((1 << log_page_size) == PAGE_SIZE) + if ((1 << log_page_size) == QEDI_PAGE_SIZE) break; } qedi->pf_params.iscsi_pf_params.log_page_size = log_page_size; @@ -952,6 +987,9 @@ static int qedi_find_boot_info(struct qedi_ctx *qedi, cls_sess = iscsi_conn_to_session(cls_conn); sess = cls_sess->dd_data; + if (!iscsi_is_session_online(cls_sess)) + continue; + if (pri_ctrl_flags) { if (!strcmp(pri_tgt->iscsi_name, sess->targetname) && !strcmp(pri_tgt->ip_addr, ep_ip_addr)) { @@ -1298,7 +1336,7 @@ static int qedi_request_msix_irq(struct qedi_ctx *qedi) int i, rc, cpu; cpu = cpumask_first(cpu_online_mask); - for (i = 0; i < MIN_NUM_CPUS_MSIX(qedi); i++) { + for (i = 0; i < qedi->int_info.msix_cnt; i++) { rc = request_irq(qedi->int_info.msix[i].vector, qedi_msix_handler, 0, "qedi", &qedi->fp_array[i]); @@ -1376,7 +1414,7 @@ static void qedi_free_bdq(struct qedi_ctx *qedi) int i; if (qedi->bdq_pbl_list) - dma_free_coherent(&qedi->pdev->dev, PAGE_SIZE, + dma_free_coherent(&qedi->pdev->dev, QEDI_PAGE_SIZE, qedi->bdq_pbl_list, qedi->bdq_pbl_list_dma); if (qedi->bdq_pbl) @@ -1437,7 +1475,7 @@ static int qedi_alloc_bdq(struct qedi_ctx *qedi) /* Alloc dma memory for BDQ page buffer list */ qedi->bdq_pbl_mem_size = QEDI_BDQ_NUM * sizeof(struct scsi_bd); - qedi->bdq_pbl_mem_size = ALIGN(qedi->bdq_pbl_mem_size, PAGE_SIZE); + qedi->bdq_pbl_mem_size = ALIGN(qedi->bdq_pbl_mem_size, QEDI_PAGE_SIZE); qedi->rq_num_entries = qedi->bdq_pbl_mem_size / sizeof(struct scsi_bd); QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN, "rq_num_entries = %d.\n", @@ -1472,7 +1510,8 @@ static int qedi_alloc_bdq(struct qedi_ctx *qedi) } /* Allocate list of PBL pages */ - qedi->bdq_pbl_list = dma_zalloc_coherent(&qedi->pdev->dev, PAGE_SIZE, + qedi->bdq_pbl_list = 
dma_zalloc_coherent(&qedi->pdev->dev, + QEDI_PAGE_SIZE, &qedi->bdq_pbl_list_dma, GFP_KERNEL); if (!qedi->bdq_pbl_list) { @@ -1485,13 +1524,14 @@ static int qedi_alloc_bdq(struct qedi_ctx *qedi) * Now populate PBL list with pages that contain pointers to the * individual buffers. */ - qedi->bdq_pbl_list_num_entries = qedi->bdq_pbl_mem_size / PAGE_SIZE; + qedi->bdq_pbl_list_num_entries = qedi->bdq_pbl_mem_size / + QEDI_PAGE_SIZE; list = (u64 *)qedi->bdq_pbl_list; page = qedi->bdq_pbl_list_dma; for (i = 0; i < qedi->bdq_pbl_list_num_entries; i++) { *list = qedi->bdq_pbl_dma; list++; - page += PAGE_SIZE; + page += QEDI_PAGE_SIZE; } return 0; diff --git a/drivers/scsi/qedi/qedi_version.h b/drivers/scsi/qedi/qedi_version.h index 8a0e523fc089..41bcbbafebd4 100644 --- a/drivers/scsi/qedi/qedi_version.h +++ b/drivers/scsi/qedi/qedi_version.h @@ -7,8 +7,8 @@ * this source tree. */ -#define QEDI_MODULE_VERSION "8.33.0.20" +#define QEDI_MODULE_VERSION "8.33.0.21" #define QEDI_DRIVER_MAJOR_VER 8 #define QEDI_DRIVER_MINOR_VER 33 #define QEDI_DRIVER_REV_VER 0 -#define QEDI_DRIVER_ENG_VER 20 +#define QEDI_DRIVER_ENG_VER 21 diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c index 15a50cc7e4b3..a414f51302b7 100644 --- a/drivers/scsi/qla1280.c +++ b/drivers/scsi/qla1280.c @@ -383,20 +383,10 @@ #include "qla1280.h" -#ifndef BITS_PER_LONG -#error "BITS_PER_LONG not defined!" -#endif -#if (BITS_PER_LONG == 64) || defined CONFIG_HIGHMEM +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT #define QLA_64BIT_PTR 1 #endif -#ifdef QLA_64BIT_PTR -#define pci_dma_hi32(a) ((a >> 16) >> 16) -#else -#define pci_dma_hi32(a) 0 -#endif -#define pci_dma_lo32(a) (a & 0xffffffff) - #define NVRAM_DELAY() udelay(500) /* 2 microseconds */ #if defined(__ia64__) && !defined(ia64_platform_is) @@ -1790,8 +1780,8 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha) mb[4] = cnt; mb[3] = ha->request_dma & 0xffff; mb[2] = (ha->request_dma >> 16) & 0xffff; - mb[7] = pci_dma_hi32(ha->request_dma) & 0xffff; - mb[6] = pci_dma_hi32(ha->request_dma) >> 16; + mb[7] = upper_32_bits(ha->request_dma) & 0xffff; + mb[6] = upper_32_bits(ha->request_dma) >> 16; dprintk(2, "%s: op=%d 0x%p = 0x%4x,0x%4x,0x%4x,0x%4x\n", __func__, mb[0], (void *)(long)ha->request_dma, @@ -1810,8 +1800,8 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha) mb[4] = cnt; mb[3] = p_tbuf & 0xffff; mb[2] = (p_tbuf >> 16) & 0xffff; - mb[7] = pci_dma_hi32(p_tbuf) & 0xffff; - mb[6] = pci_dma_hi32(p_tbuf) >> 16; + mb[7] = upper_32_bits(p_tbuf) & 0xffff; + mb[6] = upper_32_bits(p_tbuf) >> 16; err = qla1280_mailbox_command(ha, BIT_4 | BIT_3 | BIT_2 | BIT_1 | BIT_0, mb); @@ -1933,8 +1923,8 @@ qla1280_init_rings(struct scsi_qla_host *ha) mb[3] = ha->request_dma & 0xffff; mb[2] = (ha->request_dma >> 16) & 0xffff; mb[4] = 0; - mb[7] = pci_dma_hi32(ha->request_dma) & 0xffff; - mb[6] = pci_dma_hi32(ha->request_dma) >> 16; + mb[7] = upper_32_bits(ha->request_dma) & 0xffff; + mb[6] = upper_32_bits(ha->request_dma) >> 16; if (!(status = qla1280_mailbox_command(ha, BIT_7 | BIT_6 | BIT_4 | BIT_3 | BIT_2 | BIT_1 | BIT_0, &mb[0]))) { @@ -1947,8 +1937,8 @@ qla1280_init_rings(struct scsi_qla_host *ha) mb[3] = ha->response_dma & 0xffff; mb[2] = (ha->response_dma >> 16) & 0xffff; mb[5] = 0; - mb[7] = pci_dma_hi32(ha->response_dma) & 0xffff; - mb[6] = pci_dma_hi32(ha->response_dma) >> 16; + mb[7] = upper_32_bits(ha->response_dma) & 0xffff; + mb[6] = upper_32_bits(ha->response_dma) >> 16; status = qla1280_mailbox_command(ha, BIT_7 | BIT_6 | BIT_5 | BIT_3 | BIT_2 | BIT_1 | BIT_0, &mb[0]); @@ -2914,13 
+2904,13 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp) SCSI_BUS_32(cmd)); #endif *dword_ptr++ = - cpu_to_le32(pci_dma_lo32(dma_handle)); + cpu_to_le32(lower_32_bits(dma_handle)); *dword_ptr++ = - cpu_to_le32(pci_dma_hi32(dma_handle)); + cpu_to_le32(upper_32_bits(dma_handle)); *dword_ptr++ = cpu_to_le32(sg_dma_len(s)); dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n", - cpu_to_le32(pci_dma_hi32(dma_handle)), - cpu_to_le32(pci_dma_lo32(dma_handle)), + cpu_to_le32(upper_32_bits(dma_handle)), + cpu_to_le32(lower_32_bits(dma_handle)), cpu_to_le32(sg_dma_len(sg_next(s)))); remseg--; } @@ -2976,14 +2966,14 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp) SCSI_BUS_32(cmd)); #endif *dword_ptr++ = - cpu_to_le32(pci_dma_lo32(dma_handle)); + cpu_to_le32(lower_32_bits(dma_handle)); *dword_ptr++ = - cpu_to_le32(pci_dma_hi32(dma_handle)); + cpu_to_le32(upper_32_bits(dma_handle)); *dword_ptr++ = cpu_to_le32(sg_dma_len(s)); dprintk(3, "S/G Segment Cont. phys_addr=%x %x, len=0x%x\n", - cpu_to_le32(pci_dma_hi32(dma_handle)), - cpu_to_le32(pci_dma_lo32(dma_handle)), + cpu_to_le32(upper_32_bits(dma_handle)), + cpu_to_le32(lower_32_bits(dma_handle)), cpu_to_le32(sg_dma_len(s))); } remseg -= cnt; @@ -3178,10 +3168,10 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp) if (cnt == 4) break; *dword_ptr++ = - cpu_to_le32(pci_dma_lo32(sg_dma_address(s))); + cpu_to_le32(lower_32_bits(sg_dma_address(s))); *dword_ptr++ = cpu_to_le32(sg_dma_len(s)); dprintk(3, "S/G Segment phys_addr=0x%lx, len=0x%x\n", - (pci_dma_lo32(sg_dma_address(s))), + (lower_32_bits(sg_dma_address(s))), (sg_dma_len(s))); remseg--; } @@ -3224,13 +3214,13 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp) if (cnt == 7) break; *dword_ptr++ = - cpu_to_le32(pci_dma_lo32(sg_dma_address(s))); + cpu_to_le32(lower_32_bits(sg_dma_address(s))); *dword_ptr++ = cpu_to_le32(sg_dma_len(s)); dprintk(1, "S/G Segment Cont. 
phys_addr=0x%x, " "len=0x%x\n", - cpu_to_le32(pci_dma_lo32(sg_dma_address(s))), + cpu_to_le32(lower_32_bits(sg_dma_address(s))), cpu_to_le32(sg_dma_len(s))); } remseg -= cnt; @@ -4213,7 +4203,6 @@ static struct scsi_host_template qla1280_driver_template = { .can_queue = MAX_OUTSTANDING_COMMANDS, .this_id = -1, .sg_tablesize = SG_ALL, - .use_clustering = ENABLE_CLUSTERING, }; diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c index 0bb9ac6ece92..00444dc79756 100644 --- a/drivers/scsi/qla2xxx/qla_attr.c +++ b/drivers/scsi/qla2xxx/qla_attr.c @@ -2712,6 +2712,8 @@ qla24xx_vport_delete(struct fc_vport *fc_vport) test_bit(FCPORT_UPDATE_NEEDED, &vha->dpc_flags)) msleep(1000); + qla_nvme_delete(vha); + qla24xx_disable_vp(vha); qla2x00_wait_for_sess_deletion(vha); diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c index eb59c796a795..364bb52ed2a6 100644 --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -237,15 +237,13 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport, qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2); sp->done = qla2x00_async_login_sp_done; - if (N2N_TOPO(fcport->vha->hw) && fcport_is_bigger(fcport)) { + if (N2N_TOPO(fcport->vha->hw) && fcport_is_bigger(fcport)) lio->u.logio.flags |= SRB_LOGIN_PRLI_ONLY; - } else { + else lio->u.logio.flags |= SRB_LOGIN_COND_PLOGI; - if (fcport->fc4f_nvme) - lio->u.logio.flags |= SRB_LOGIN_SKIP_PRLI; - - } + if (fcport->fc4f_nvme) + lio->u.logio.flags |= SRB_LOGIN_SKIP_PRLI; ql_dbg(ql_dbg_disc, vha, 0x2072, "Async-login - %8phC hdl=%x, loopid=%x portid=%02x%02x%02x " diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c index d620f4bebcd0..099d8e9851cb 100644 --- a/drivers/scsi/qla2xxx/qla_mid.c +++ b/drivers/scsi/qla2xxx/qla_mid.c @@ -507,6 +507,7 @@ qla24xx_create_vhost(struct fc_vport *fc_vport) qla2x00_start_timer(vha, WATCH_INTERVAL); vha->req = base_vha->req; + vha->flags.nvme_enabled = base_vha->flags.nvme_enabled; host->can_queue = base_vha->req->length + 128; host->cmd_per_lun = 3; if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c index 7e78e7eff783..39d892bbd219 100644 --- a/drivers/scsi/qla2xxx/qla_nvme.c +++ b/drivers/scsi/qla2xxx/qla_nvme.c @@ -272,17 +272,6 @@ static void qla_nvme_fcp_abort(struct nvme_fc_local_port *lport, schedule_work(&priv->abort_work); } -static void qla_nvme_poll(struct nvme_fc_local_port *lport, void *hw_queue_handle) -{ - struct qla_qpair *qpair = hw_queue_handle; - unsigned long flags; - struct scsi_qla_host *vha = lport->private; - - spin_lock_irqsave(&qpair->qp_lock, flags); - qla24xx_process_response_queue(vha, qpair->rsp); - spin_unlock_irqrestore(&qpair->qp_lock, flags); -} - static inline int qla2x00_start_nvme_mq(srb_t *sp) { unsigned long flags; @@ -474,21 +463,10 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport, int rval = -ENODEV; srb_t *sp; struct qla_qpair *qpair = hw_queue_handle; - struct nvme_private *priv; + struct nvme_private *priv = fd->private; struct qla_nvme_rport *qla_rport = rport->private; - if (!fd || !qpair) { - ql_log(ql_log_warn, NULL, 0x2134, - "NO NVMe request or Queue Handle\n"); - return rval; - } - - priv = fd->private; fcport = qla_rport->fcport; - if (!fcport) { - ql_log(ql_log_warn, NULL, 0x210e, "No fcport ptr\n"); - return rval; - } vha = fcport->vha; @@ -517,6 +495,7 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport, sp->name = 
"nvme_cmd"; sp->done = qla_nvme_sp_done; sp->qpair = qpair; + sp->vha = vha; nvme = &sp->u.iocb_cmd; nvme->u.nvme.desc = fd; @@ -564,7 +543,7 @@ static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport) schedule_work(&fcport->free_work); } - fcport->nvme_flag &= ~(NVME_FLAG_REGISTERED | NVME_FLAG_DELETING); + fcport->nvme_flag &= ~NVME_FLAG_DELETING; ql_log(ql_log_info, fcport->vha, 0x2110, "remoteport_delete of %p completed.\n", fcport); } @@ -578,7 +557,6 @@ static struct nvme_fc_port_template qla_nvme_fc_transport = { .ls_abort = qla_nvme_ls_abort, .fcp_io = qla_nvme_post_cmd, .fcp_abort = qla_nvme_fcp_abort, - .poll_queue = qla_nvme_poll, .max_hw_queues = 8, .max_sgl_segments = 128, .max_dif_sgl_segments = 64, diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index d0ecc729a90a..ea69dafc9774 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -328,7 +328,6 @@ struct scsi_host_template qla2xxx_driver_template = { .map_queues = qla2xxx_map_queues, .this_id = -1, .cmd_per_lun = 3, - .use_clustering = ENABLE_CLUSTERING, .sg_tablesize = SG_ALL, .max_sectors = 0xFFFF, @@ -857,13 +856,9 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) } if (ha->mqenable) { - if (shost_use_blk_mq(vha->host)) { - tag = blk_mq_unique_tag(cmd->request); - hwq = blk_mq_unique_tag_to_hwq(tag); - qpair = ha->queue_pair_map[hwq]; - } else if (vha->vp_idx && vha->qpair) { - qpair = vha->qpair; - } + tag = blk_mq_unique_tag(cmd->request); + hwq = blk_mq_unique_tag_to_hwq(tag); + qpair = ha->queue_pair_map[hwq]; if (qpair) return qla2xxx_mqueuecommand(host, cmd, qpair); @@ -1464,7 +1459,7 @@ __qla2xxx_eh_generic_reset(char *name, enum nexus_wait_type type, goto eh_reset_failed; } err = 2; - if (do_reset(fcport, cmd->device->lun, cmd->request->cpu + 1) + if (do_reset(fcport, cmd->device->lun, blk_mq_rq_cpu(cmd->request) + 1) != QLA_SUCCESS) { ql_log(ql_log_warn, vha, 0x800c, "do_reset failed for cmd=%p.\n", cmd); @@ -1746,10 +1741,53 @@ qla2x00_loop_reset(scsi_qla_host_t *vha) return QLA_SUCCESS; } +static void qla2x00_abort_srb(struct qla_qpair *qp, srb_t *sp, const int res, + unsigned long *flags) + __releases(qp->qp_lock_ptr) + __acquires(qp->qp_lock_ptr) +{ + scsi_qla_host_t *vha = qp->vha; + struct qla_hw_data *ha = vha->hw; + + if (sp->type == SRB_NVME_CMD || sp->type == SRB_NVME_LS) { + if (!sp_get(sp)) { + /* got sp */ + spin_unlock_irqrestore(qp->qp_lock_ptr, *flags); + qla_nvme_abort(ha, sp, res); + spin_lock_irqsave(qp->qp_lock_ptr, *flags); + } + } else if (GET_CMD_SP(sp) && !ha->flags.eeh_busy && + !test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags) && + !qla2x00_isp_reg_stat(ha) && sp->type == SRB_SCSI_CMD) { + /* + * Don't abort commands in adapter during EEH recovery as it's + * not accessible/responding. + * + * Get a reference to the sp and drop the lock. The reference + * ensures this sp->done() call and not the call in + * qla2xxx_eh_abort() ends the SCSI cmd (with result 'res'). 
+ */ + if (!sp_get(sp)) { + int status; + + spin_unlock_irqrestore(qp->qp_lock_ptr, *flags); + status = qla2xxx_eh_abort(GET_CMD_SP(sp)); + spin_lock_irqsave(qp->qp_lock_ptr, *flags); + /* + * Get rid of extra reference caused + * by early exit from qla2xxx_eh_abort + */ + if (status == FAST_IO_FAIL) + atomic_dec(&sp->ref_count); + } + } + sp->done(sp, res); +} + static void __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res) { - int cnt, status; + int cnt; unsigned long flags; srb_t *sp; scsi_qla_host_t *vha = qp->vha; @@ -1768,50 +1806,7 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res) req->outstanding_cmds[cnt] = NULL; switch (sp->cmd_type) { case TYPE_SRB: - if (sp->type == SRB_NVME_CMD || - sp->type == SRB_NVME_LS) { - if (!sp_get(sp)) { - /* got sp */ - spin_unlock_irqrestore - (qp->qp_lock_ptr, - flags); - qla_nvme_abort(ha, sp, res); - spin_lock_irqsave - (qp->qp_lock_ptr, flags); - } - } else if (GET_CMD_SP(sp) && - !ha->flags.eeh_busy && - (!test_bit(ABORT_ISP_ACTIVE, - &vha->dpc_flags)) && - !qla2x00_isp_reg_stat(ha) && - (sp->type == SRB_SCSI_CMD)) { - /* - * Don't abort commands in adapter - * during EEH recovery as it's not - * accessible/responding. - * - * Get a reference to the sp and drop - * the lock. The reference ensures this - * sp->done() call and not the call in - * qla2xxx_eh_abort() ends the SCSI cmd - * (with result 'res'). - */ - if (!sp_get(sp)) { - spin_unlock_irqrestore - (qp->qp_lock_ptr, flags); - status = qla2xxx_eh_abort( - GET_CMD_SP(sp)); - spin_lock_irqsave - (qp->qp_lock_ptr, flags); - /* - * Get rid of extra reference caused - * by early exit from qla2xxx_eh_abort - */ - if (status == FAST_IO_FAIL) - atomic_dec(&sp->ref_count); - } - } - sp->done(sp, res); + qla2x00_abort_srb(qp, sp, res, &flags); break; case TYPE_TGT_CMD: if (!vha->hw->tgt.tgt_ops || !tgt || @@ -3159,7 +3154,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) goto probe_failed; } - if (ha->mqenable && shost_use_blk_mq(host)) { + if (ha->mqenable) { /* number of hardware queues supported by blk/scsi-mq*/ host->nr_hw_queues = ha->max_qpairs; @@ -3271,25 +3266,17 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) base_vha->mgmt_svr_loop_id, host->sg_tablesize); if (ha->mqenable) { - bool mq = false; bool startit = false; - if (QLA_TGT_MODE_ENABLED()) { - mq = true; + if (QLA_TGT_MODE_ENABLED()) startit = false; - } - if ((ql2x_ini_mode == QLA2XXX_INI_MODE_ENABLED) && - shost_use_blk_mq(host)) { - mq = true; + if (ql2x_ini_mode == QLA2XXX_INI_MODE_ENABLED) startit = true; - } - if (mq) { - /* Create start of day qpairs for Block MQ */ - for (i = 0; i < ha->max_qpairs; i++) - qla2xxx_create_qpair(base_vha, 5, 0, startit); - } + /* Create start of day qpairs for Block MQ */ + for (i = 0; i < ha->max_qpairs; i++) + qla2xxx_create_qpair(base_vha, 5, 0, startit); } if (ha->flags.running_gold_fw) @@ -3570,6 +3557,8 @@ qla2x00_delete_all_vps(struct qla_hw_data *ha, scsi_qla_host_t *base_vha) spin_unlock_irqrestore(&ha->vport_slock, flags); mutex_unlock(&ha->vport_lock); + qla_nvme_delete(vha); + fc_vport_terminate(vha->fc_vport); scsi_host_put(vha->host); @@ -4191,12 +4180,10 @@ fail_free_nvram: kfree(ha->nvram); ha->nvram = NULL; fail_free_ctx_mempool: - if (ha->ctx_mempool) - mempool_destroy(ha->ctx_mempool); + mempool_destroy(ha->ctx_mempool); ha->ctx_mempool = NULL; fail_free_srb_mempool: - if (ha->srb_mempool) - mempool_destroy(ha->srb_mempool); + mempool_destroy(ha->srb_mempool); ha->srb_mempool = NULL; fail_free_gid_list: 
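Note: the new qla2x00_abort_srb() helper factors a subtle pattern out of __qla2x00_abort_all_cmds(): the abort calls can sleep or complete the command, so they must run without the qpair lock held, and the srb has to be pinned first so it cannot be freed during the unlocked window (the __releases/__acquires sparse annotations document the lock juggling). A minimal sketch of that pattern, with hypothetical names (try_get_ref, put_ref and do_blocking_abort stand in for sp_get() and friends):

    /*
     * Illustrative only: pin the object, drop the lock around the
     * blocking call, re-take the lock, then drop the pin.
     */
    static void abort_one_locked(spinlock_t *lock, unsigned long *flags,
                                 struct my_cmd *cmd)
    {
            if (!try_get_ref(cmd))  /* hypothetical; false: already dying */
                    return;

            spin_unlock_irqrestore(lock, *flags);
            do_blocking_abort(cmd); /* may sleep; lock must not be held */
            spin_lock_irqsave(lock, *flags);

            put_ref(cmd);
    }

The reference also guarantees that this path's sp->done(sp, res) call, and not a concurrent completion, is the one that finishes the command.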
dma_free_coherent(&ha->pdev->dev, qla2x00_gid_list_size(ha), @@ -4498,8 +4485,7 @@ qla2x00_mem_free(struct qla_hw_data *ha) dma_free_coherent(&ha->pdev->dev, MCTP_DUMP_SIZE, ha->mctp_dump, ha->mctp_dump_dma); - if (ha->srb_mempool) - mempool_destroy(ha->srb_mempool); + mempool_destroy(ha->srb_mempool); if (ha->dcbx_tlv) dma_free_coherent(&ha->pdev->dev, DCBX_TLV_DATA_SIZE, @@ -4531,8 +4517,7 @@ qla2x00_mem_free(struct qla_hw_data *ha) if (ha->async_pd) dma_pool_free(ha->s_dma_pool, ha->async_pd, ha->async_pd_dma); - if (ha->s_dma_pool) - dma_pool_destroy(ha->s_dma_pool); + dma_pool_destroy(ha->s_dma_pool); if (ha->gid_list) dma_free_coherent(&ha->pdev->dev, qla2x00_gid_list_size(ha), @@ -4553,14 +4538,11 @@ qla2x00_mem_free(struct qla_hw_data *ha) } } - if (ha->dl_dma_pool) - dma_pool_destroy(ha->dl_dma_pool); + dma_pool_destroy(ha->dl_dma_pool); - if (ha->fcp_cmnd_dma_pool) - dma_pool_destroy(ha->fcp_cmnd_dma_pool); + dma_pool_destroy(ha->fcp_cmnd_dma_pool); - if (ha->ctx_mempool) - mempool_destroy(ha->ctx_mempool); + mempool_destroy(ha->ctx_mempool); qlt_mem_free(ha); @@ -6952,11 +6934,12 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost) { int rc; scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata; + struct blk_mq_queue_map *qmap = &shost->tag_set.map[0]; if (USER_CTRL_IRQ(vha->hw)) - rc = blk_mq_map_queues(&shost->tag_set); + rc = blk_mq_map_queues(qmap); else - rc = blk_mq_pci_map_queues(&shost->tag_set, vha->hw->pdev, 0); + rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, 0); return rc; } @@ -7106,8 +7089,7 @@ qla2x00_module_exit(void) qla2x00_release_firmware(); kmem_cache_destroy(srb_cachep); qlt_exit(); - if (ctx_cachep) - kmem_cache_destroy(ctx_cachep); + kmem_cache_destroy(ctx_cachep); fc_release_transport(qla2xxx_transport_template); fc_release_transport(qla2xxx_transport_vport_template); } diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index c4504740f0e2..510337eac106 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -2379,20 +2379,20 @@ void qlt_xmit_tm_rsp(struct qla_tgt_mgmt_cmd *mcmd) } if (mcmd->flags == QLA24XX_MGMT_SEND_NACK) { - if (mcmd->orig_iocb.imm_ntfy.u.isp24.status_subcode == - ELS_LOGO || - mcmd->orig_iocb.imm_ntfy.u.isp24.status_subcode == - ELS_PRLO || - mcmd->orig_iocb.imm_ntfy.u.isp24.status_subcode == - ELS_TPRLO) { + switch (mcmd->orig_iocb.imm_ntfy.u.isp24.status_subcode) { + case ELS_LOGO: + case ELS_PRLO: + case ELS_TPRLO: ql_dbg(ql_dbg_disc, vha, 0x2106, "TM response logo %phC status %#x state %#x", mcmd->sess->port_name, mcmd->fc_tm_rsp, mcmd->flags); qlt_schedule_sess_for_deletion(mcmd->sess); - } else { + break; + default: qlt_send_notify_ack(vha->hw->base_qpair, &mcmd->orig_iocb.imm_ntfy, 0, 0, 0, 0, 0, 0); + break; } } else { if (mcmd->orig_iocb.atio.u.raw.entry_type == ABTS_RECV_24XX) { @@ -2660,9 +2660,9 @@ static void qlt_load_cont_data_segments(struct qla_tgt_prm *prm) cnt < QLA_TGT_DATASEGS_PER_CONT_24XX && prm->seg_cnt; cnt++, prm->seg_cnt--) { *dword_ptr++ = - cpu_to_le32(pci_dma_lo32 + cpu_to_le32(lower_32_bits (sg_dma_address(prm->sg))); - *dword_ptr++ = cpu_to_le32(pci_dma_hi32 + *dword_ptr++ = cpu_to_le32(upper_32_bits (sg_dma_address(prm->sg))); *dword_ptr++ = cpu_to_le32(sg_dma_len(prm->sg)); @@ -2704,9 +2704,9 @@ static void qlt_load_data_segments(struct qla_tgt_prm *prm) (cnt < QLA_TGT_DATASEGS_PER_CMD_24XX) && prm->seg_cnt; cnt++, prm->seg_cnt--) { *dword_ptr++ = - cpu_to_le32(pci_dma_lo32(sg_dma_address(prm->sg))); + 
cpu_to_le32(lower_32_bits(sg_dma_address(prm->sg))); - *dword_ptr++ = cpu_to_le32(pci_dma_hi32( + *dword_ptr++ = cpu_to_le32(upper_32_bits( sg_dma_address(prm->sg))); *dword_ptr++ = cpu_to_le32(sg_dma_len(prm->sg)); diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h index 721da593b1bc..577e1786a3f1 100644 --- a/drivers/scsi/qla2xxx/qla_target.h +++ b/drivers/scsi/qla2xxx/qla_target.h @@ -771,14 +771,6 @@ int qla2x00_wait_for_hba_online(struct scsi_qla_host *); #define FC_TM_REJECT 4 #define FC_TM_FAILED 5 -#if (BITS_PER_LONG > 32) || defined(CONFIG_HIGHMEM64G) -#define pci_dma_lo32(a) (a & 0xffffffff) -#define pci_dma_hi32(a) ((((a) >> 16)>>16) & 0xffffffff) -#else -#define pci_dma_lo32(a) (a & 0xffffffff) -#define pci_dma_hi32(a) 0 -#endif - #define QLA_TGT_SENSE_VALID(sense) ((sense != NULL) && \ (((const uint8_t *)(sense))[0] & 0x70) == 0x70) diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h index 12bafff71a1a..ca7945cb959b 100644 --- a/drivers/scsi/qla2xxx/qla_version.h +++ b/drivers/scsi/qla2xxx/qla_version.h @@ -7,7 +7,7 @@ /* * Driver version */ -#define QLA2XXX_VERSION "10.00.00.11-k" +#define QLA2XXX_VERSION "10.00.00.12-k" #define QLA_DRIVER_MAJOR_VER 10 #define QLA_DRIVER_MINOR_VER 0 diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c index 65053c066680..283e6b80abb5 100644 --- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c +++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c @@ -108,11 +108,6 @@ static ssize_t tcm_qla2xxx_format_wwn(char *buf, size_t len, u64 wwn) b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7]); } -static char *tcm_qla2xxx_get_fabric_name(void) -{ - return "qla2xxx"; -} - /* * From drivers/scsi/scsi_transport_fc.c:fc_parse_wwn */ @@ -178,11 +173,6 @@ static int tcm_qla2xxx_npiv_parse_wwn( return 0; } -static char *tcm_qla2xxx_npiv_get_fabric_name(void) -{ - return "qla2xxx_npiv"; -} - static char *tcm_qla2xxx_get_fabric_wwn(struct se_portal_group *se_tpg) { struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg, @@ -964,38 +954,14 @@ static ssize_t tcm_qla2xxx_tpg_enable_show(struct config_item *item, atomic_read(&tpg->lport_tpg_enabled)); } -static void tcm_qla2xxx_depend_tpg(struct work_struct *work) -{ - struct tcm_qla2xxx_tpg *base_tpg = container_of(work, - struct tcm_qla2xxx_tpg, tpg_base_work); - struct se_portal_group *se_tpg = &base_tpg->se_tpg; - struct scsi_qla_host *base_vha = base_tpg->lport->qla_vha; - - if (!target_depend_item(&se_tpg->tpg_group.cg_item)) { - atomic_set(&base_tpg->lport_tpg_enabled, 1); - qlt_enable_vha(base_vha); - } - complete(&base_tpg->tpg_base_comp); -} - -static void tcm_qla2xxx_undepend_tpg(struct work_struct *work) -{ - struct tcm_qla2xxx_tpg *base_tpg = container_of(work, - struct tcm_qla2xxx_tpg, tpg_base_work); - struct se_portal_group *se_tpg = &base_tpg->se_tpg; - struct scsi_qla_host *base_vha = base_tpg->lport->qla_vha; - - if (!qlt_stop_phase1(base_vha->vha_tgt.qla_tgt)) { - atomic_set(&base_tpg->lport_tpg_enabled, 0); - target_undepend_item(&se_tpg->tpg_group.cg_item); - } - complete(&base_tpg->tpg_base_comp); -} - static ssize_t tcm_qla2xxx_tpg_enable_store(struct config_item *item, const char *page, size_t count) { struct se_portal_group *se_tpg = to_tpg(item); + struct se_wwn *se_wwn = se_tpg->se_tpg_wwn; + struct tcm_qla2xxx_lport *lport = container_of(se_wwn, + struct tcm_qla2xxx_lport, lport_wwn); + struct scsi_qla_host *vha = lport->qla_vha; struct tcm_qla2xxx_tpg *tpg = container_of(se_tpg, struct tcm_qla2xxx_tpg, se_tpg); unsigned 
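Note: the pci_dma_lo32()/pci_dma_hi32() macros removed from qla_target.h below duplicated what lower_32_bits()/upper_32_bits() from linux/kernel.h already provide; the odd double ">> 16" in the old hi32 macro existed only to keep the shift defined when dma_addr_t is 32 bits wide, and the generic helpers use the same trick. A self-contained userspace illustration (the address value is made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Same definitions as the kernel's lower_32_bits()/upper_32_bits(). */
    #define lower_32_bits(n) ((uint32_t)((n) & 0xffffffff))
    #define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))

    int main(void)
    {
            uint64_t dma_handle = 0x0000001234abcd00ULL;

            /* Splitting a 64-bit DMA address for two 32-bit IOCB fields. */
            printf("hi=%08x lo=%08x\n",
                   upper_32_bits(dma_handle), lower_32_bits(dma_handle));
            return 0;
    }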
long op; @@ -1014,24 +980,16 @@ static ssize_t tcm_qla2xxx_tpg_enable_store(struct config_item *item, if (atomic_read(&tpg->lport_tpg_enabled)) return -EEXIST; - INIT_WORK(&tpg->tpg_base_work, tcm_qla2xxx_depend_tpg); + atomic_set(&tpg->lport_tpg_enabled, 1); + qlt_enable_vha(vha); } else { if (!atomic_read(&tpg->lport_tpg_enabled)) return count; - INIT_WORK(&tpg->tpg_base_work, tcm_qla2xxx_undepend_tpg); + atomic_set(&tpg->lport_tpg_enabled, 0); + qlt_stop_phase1(vha->vha_tgt.qla_tgt); } - init_completion(&tpg->tpg_base_comp); - schedule_work(&tpg->tpg_base_work); - wait_for_completion(&tpg->tpg_base_comp); - if (op) { - if (!atomic_read(&tpg->lport_tpg_enabled)) - return -ENODEV; - } else { - if (atomic_read(&tpg->lport_tpg_enabled)) - return -EPERM; - } return count; } @@ -1920,14 +1878,13 @@ static struct configfs_attribute *tcm_qla2xxx_wwn_attrs[] = { static const struct target_core_fabric_ops tcm_qla2xxx_ops = { .module = THIS_MODULE, - .name = "qla2xxx", + .fabric_name = "qla2xxx", .node_acl_size = sizeof(struct tcm_qla2xxx_nacl), /* * XXX: Limit assumes single page per scatter-gather-list entry. * Current maximum is ~4.9 MB per se_cmd->t_data_sg with PAGE_SIZE=4096 */ .max_data_sg_nents = 1200, - .get_fabric_name = tcm_qla2xxx_get_fabric_name, .tpg_get_wwn = tcm_qla2xxx_get_fabric_wwn, .tpg_get_tag = tcm_qla2xxx_get_tag, .tpg_check_demo_mode = tcm_qla2xxx_check_demo_mode, @@ -1969,9 +1926,8 @@ static const struct target_core_fabric_ops tcm_qla2xxx_ops = { static const struct target_core_fabric_ops tcm_qla2xxx_npiv_ops = { .module = THIS_MODULE, - .name = "qla2xxx_npiv", + .fabric_name = "qla2xxx_npiv", .node_acl_size = sizeof(struct tcm_qla2xxx_nacl), - .get_fabric_name = tcm_qla2xxx_npiv_get_fabric_name, .tpg_get_wwn = tcm_qla2xxx_get_fabric_wwn, .tpg_get_tag = tcm_qla2xxx_get_tag, .tpg_check_demo_mode = tcm_qla2xxx_check_demo_mode, diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.h b/drivers/scsi/qla2xxx/tcm_qla2xxx.h index 7550ba2831c3..147cf6c90366 100644 --- a/drivers/scsi/qla2xxx/tcm_qla2xxx.h +++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.h @@ -48,9 +48,6 @@ struct tcm_qla2xxx_tpg { struct tcm_qla2xxx_tpg_attrib tpg_attrib; /* Returned by tcm_qla2xxx_make_tpg() */ struct se_portal_group se_tpg; - /* Items for dealing with configfs_depend_item */ - struct completion tpg_base_comp; - struct work_struct tpg_base_work; }; struct tcm_qla2xxx_fc_loopid { diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c index 051164f755a4..949e186cc5d7 100644 --- a/drivers/scsi/qla4xxx/ql4_os.c +++ b/drivers/scsi/qla4xxx/ql4_os.c @@ -205,7 +205,6 @@ static struct scsi_host_template qla4xxx_driver_template = { .this_id = -1, .cmd_per_lun = 3, - .use_clustering = ENABLE_CLUSTERING, .sg_tablesize = SG_ALL, .max_sectors = 0xFFFF, @@ -4160,20 +4159,16 @@ static void qla4xxx_mem_free(struct scsi_qla_host *ha) ha->fw_dump_size = 0; /* Free srb pool. 
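Note: the hunks just above (qla2xxx) and just below (qla4xxx) drop the "if (ptr)" guards around mempool_destroy(), dma_pool_destroy() and kmem_cache_destroy(). Like kfree(), all of these destructors are no-ops when passed NULL, so the guards were dead code; cleanup paths can destroy unconditionally and then clear the pointer:

    /* Before: redundant NULL check. */
    if (ha->srb_mempool)
            mempool_destroy(ha->srb_mempool);
    ha->srb_mempool = NULL;

    /* After: mempool_destroy(NULL) is a no-op, like kfree(NULL). */
    mempool_destroy(ha->srb_mempool);
    ha->srb_mempool = NULL;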
*/ - if (ha->srb_mempool) - mempool_destroy(ha->srb_mempool); - + mempool_destroy(ha->srb_mempool); ha->srb_mempool = NULL; - if (ha->chap_dma_pool) - dma_pool_destroy(ha->chap_dma_pool); + dma_pool_destroy(ha->chap_dma_pool); if (ha->chap_list) vfree(ha->chap_list); ha->chap_list = NULL; - if (ha->fw_ddb_dma_pool) - dma_pool_destroy(ha->fw_ddb_dma_pool); + dma_pool_destroy(ha->fw_ddb_dma_pool); /* release io space registers */ if (is_qla8022(ha)) { diff --git a/drivers/scsi/qlogicfas.c b/drivers/scsi/qlogicfas.c index 95431d605c24..8f709002f746 100644 --- a/drivers/scsi/qlogicfas.c +++ b/drivers/scsi/qlogicfas.c @@ -193,7 +193,7 @@ static struct scsi_host_template qlogicfas_driver_template = { .can_queue = 1, .this_id = -1, .sg_tablesize = SG_ALL, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, }; static __init int qlogicfas_init(void) diff --git a/drivers/scsi/qlogicpti.c b/drivers/scsi/qlogicpti.c index 9d09228eee28..e35ce762d454 100644 --- a/drivers/scsi/qlogicpti.c +++ b/drivers/scsi/qlogicpti.c @@ -1287,7 +1287,6 @@ static struct scsi_host_template qpti_template = { .can_queue = QLOGICPTI_REQ_QUEUE_LEN, .this_id = 7, .sg_tablesize = QLOGICPTI_MAX_SG(QLOGICPTI_REQ_QUEUE_LEN), - .use_clustering = ENABLE_CLUSTERING, }; static const struct of_device_id qpti_match[]; diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c index fc1356d101b0..7675ff0ca2ea 100644 --- a/drivers/scsi/scsi.c +++ b/drivers/scsi/scsi.c @@ -780,11 +780,8 @@ MODULE_LICENSE("GPL"); module_param(scsi_logging_level, int, S_IRUGO|S_IWUSR); MODULE_PARM_DESC(scsi_logging_level, "a bit mask of logging levels"); -#ifdef CONFIG_SCSI_MQ_DEFAULT +/* This should go away in the future, it doesn't do anything anymore */ bool scsi_use_blk_mq = true; -#else -bool scsi_use_blk_mq = false; -#endif module_param_named(use_blk_mq, scsi_use_blk_mq, bool, S_IWUSR | S_IRUGO); static int __init init_scsi(void) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 60bcc6df97a9..661512bec3ac 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -3973,7 +3973,6 @@ static int scsi_debug_slave_configure(struct scsi_device *sdp) return 1; /* no resources, will be marked offline */ } sdp->hostdata = devip; - blk_queue_max_segment_size(sdp->request_queue, -1U); if (sdebug_no_uld) sdp->no_uld_attach = 1; config_cdb_len(sdp); @@ -5851,7 +5850,7 @@ static struct scsi_host_template sdebug_driver_template = { .sg_tablesize = SG_MAX_SEGMENTS, .cmd_per_lun = DEF_CMD_PER_LUN, .max_sectors = -1U, - .use_clustering = DISABLE_CLUSTERING, + .max_segment_size = -1U, .module = THIS_MODULE, .track_queue_depth = 1, }; @@ -5866,8 +5865,9 @@ static int sdebug_driver_probe(struct device *dev) sdbg_host = to_sdebug_host(dev); sdebug_driver_template.can_queue = sdebug_max_queue; - if (sdebug_clustering) - sdebug_driver_template.use_clustering = ENABLE_CLUSTERING; + if (!sdebug_clustering) + sdebug_driver_template.dma_boundary = PAGE_SIZE - 1; + hpnt = scsi_host_alloc(&sdebug_driver_template, sizeof(sdbg_host)); if (NULL == hpnt) { pr_err("scsi_host_alloc failed\n"); @@ -5881,8 +5881,7 @@ static int sdebug_driver_probe(struct device *dev) } /* Decide whether to tell scsi subsystem that we want mq */ /* Following should give the same answer for each host */ - if (shost_use_blk_mq(hpnt)) - hpnt->nr_hw_queues = submit_queues; + hpnt->nr_hw_queues = submit_queues; sdbg_host->shost = hpnt; *((struct sdebug_host_info **)hpnt->hostdata) = sdbg_host; diff --git a/drivers/scsi/scsi_error.c 
b/drivers/scsi/scsi_error.c index c736d61b1648..16eef068e9e9 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -297,19 +297,19 @@ enum blk_eh_timer_return scsi_times_out(struct request *req) if (rtn == BLK_EH_DONE) { /* - * For blk-mq, we must set the request state to complete now - * before sending the request to the scsi error handler. This - * will prevent a use-after-free in the event the LLD manages - * to complete the request before the error handler finishes - * processing this timed out request. + * Set the command to complete first in order to prevent a real + * completion from releasing the command while error handling + * is using it. If the command was already completed, then the + * lower level driver beat the timeout handler, and it is safe + * to return without escalating error recovery. * - * If the request was already completed, then the LLD beat the - * time out handler from transferring the request to the scsi - * error handler. In that case we can return immediately as no - * further action is required. + * If timeout handling lost the race to a real completion, the + * block layer may ignore that due to a fake timeout injection, + * so return RESET_TIMER to allow error handling another shot + * at this command. */ - if (req->q->mq_ops && !blk_mq_mark_complete(req)) - return rtn; + if (test_and_set_bit(SCMD_STATE_COMPLETE, &scmd->state)) + return BLK_EH_RESET_TIMER; if (scsi_abort_command(scmd) != SUCCESS) { set_host_byte(scmd, DID_TIME_OUT); scsi_eh_scmd_add(scmd); @@ -1932,7 +1932,7 @@ maybe_retry: static void eh_lock_door_done(struct request *req, blk_status_t status) { - __blk_put_request(req->q, req); + blk_put_request(req); } /** diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index fa6e0c3b3aa6..b13cc9288ba0 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -168,8 +168,6 @@ static void scsi_mq_requeue_cmd(struct scsi_cmnd *cmd) static void __scsi_queue_insert(struct scsi_cmnd *cmd, int reason, bool unbusy) { struct scsi_device *device = cmd->device; - struct request_queue *q = device->request_queue; - unsigned long flags; SCSI_LOG_MLQUEUE(1, scmd_printk(KERN_INFO, cmd, "Inserting command %p into mlqueue\n", cmd)); @@ -190,26 +188,20 @@ static void __scsi_queue_insert(struct scsi_cmnd *cmd, int reason, bool unbusy) * before blk_cleanup_queue() finishes. */ cmd->result = 0; - if (q->mq_ops) { - /* - * Before a SCSI command is dispatched, - * get_device(&sdev->sdev_gendev) is called and the host, - * target and device busy counters are increased. Since - * requeuing a request causes these actions to be repeated and - * since scsi_device_unbusy() has already been called, - * put_device(&device->sdev_gendev) must still be called. Call - * put_device() after blk_mq_requeue_request() to avoid that - * removal of the SCSI device can start before requeueing has - * happened. - */ - blk_mq_requeue_request(cmd->request, true); - put_device(&device->sdev_gendev); - return; - } - spin_lock_irqsave(q->queue_lock, flags); - blk_requeue_request(q, cmd->request); - kblockd_schedule_work(&device->requeue_work); - spin_unlock_irqrestore(q->queue_lock, flags); + + /* + * Before a SCSI command is dispatched, + * get_device(&sdev->sdev_gendev) is called and the host, + * target and device busy counters are increased. Since + * requeuing a request causes these actions to be repeated and + * since scsi_device_unbusy() has already been called, + * put_device(&device->sdev_gendev) must still be called. 
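Note: the scsi_times_out() rewrite above replaces the blk_mq_mark_complete() call with an SCMD_STATE_COMPLETE bit in the command itself: the timeout handler and the normal completion (see scsi_mq_done() later in this patch) race on test_and_set_bit(), and whichever sets the bit first owns the command. If the timeout path loses, it returns BLK_EH_RESET_TIMER rather than BLK_EH_DONE, because a fault-injected "fake" timeout may have made the block layer ignore the real completion. Sketched side by side:

    /* Both paths race to claim the command; test_and_set_bit() is atomic. */

    /* completion side (scsi_mq_done) */
    if (test_and_set_bit(SCMD_STATE_COMPLETE, &scmd->state))
            return;                      /* timeout handler won the race */
    /* ... complete the request ... */

    /* timeout side (scsi_times_out) */
    if (test_and_set_bit(SCMD_STATE_COMPLETE, &scmd->state))
            return BLK_EH_RESET_TIMER;   /* real completion won; re-arm */
    /* ... escalate to the SCSI error handler ... */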
Call + * put_device() after blk_mq_requeue_request() to avoid that + * removal of the SCSI device can start before requeueing has + * happened. + */ + blk_mq_requeue_request(cmd->request, true); + put_device(&device->sdev_gendev); } /* @@ -370,10 +362,7 @@ void scsi_device_unbusy(struct scsi_device *sdev) static void scsi_kick_queue(struct request_queue *q) { - if (q->mq_ops) - blk_mq_run_hw_queues(q, false); - else - blk_run_queue(q); + blk_mq_run_hw_queues(q, false); } /* @@ -534,10 +523,7 @@ static void scsi_run_queue(struct request_queue *q) if (!list_empty(&sdev->host->starved_list)) scsi_starved_list_run(sdev->host); - if (q->mq_ops) - blk_mq_run_hw_queues(q, false); - else - blk_run_queue(q); + blk_mq_run_hw_queues(q, false); } void scsi_requeue_run_queue(struct work_struct *work) @@ -550,42 +536,6 @@ void scsi_requeue_run_queue(struct work_struct *work) scsi_run_queue(q); } -/* - * Function: scsi_requeue_command() - * - * Purpose: Handle post-processing of completed commands. - * - * Arguments: q - queue to operate on - * cmd - command that may need to be requeued. - * - * Returns: Nothing - * - * Notes: After command completion, there may be blocks left - * over which weren't finished by the previous command - * this can be for a number of reasons - the main one is - * I/O errors in the middle of the request, in which case - * we need to request the blocks that come after the bad - * sector. - * Notes: Upon return, cmd is a stale pointer. - */ -static void scsi_requeue_command(struct request_queue *q, struct scsi_cmnd *cmd) -{ - struct scsi_device *sdev = cmd->device; - struct request *req = cmd->request; - unsigned long flags; - - spin_lock_irqsave(q->queue_lock, flags); - blk_unprep_request(req); - req->special = NULL; - scsi_put_command(cmd); - blk_requeue_request(q, req); - spin_unlock_irqrestore(q->queue_lock, flags); - - scsi_run_queue(q); - - put_device(&sdev->sdev_gendev); -} - void scsi_run_host_queues(struct Scsi_Host *shost) { struct scsi_device *sdev; @@ -626,42 +576,6 @@ static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd) scsi_del_cmd_from_list(cmd); } -/* - * Function: scsi_release_buffers() - * - * Purpose: Free resources allocate for a scsi_command. - * - * Arguments: cmd - command that we are bailing. - * - * Lock status: Assumed that no lock is held upon entry. - * - * Returns: Nothing - * - * Notes: In the event that an upper level driver rejects a - * command, we must release resources allocated during - * the __init_io() function. Primarily this would involve - * the scatter-gather table. 
- */ -static void scsi_release_buffers(struct scsi_cmnd *cmd) -{ - if (cmd->sdb.table.nents) - sg_free_table_chained(&cmd->sdb.table, false); - - memset(&cmd->sdb, 0, sizeof(cmd->sdb)); - - if (scsi_prot_sg_count(cmd)) - sg_free_table_chained(&cmd->prot_sdb->table, false); -} - -static void scsi_release_bidi_buffers(struct scsi_cmnd *cmd) -{ - struct scsi_data_buffer *bidi_sdb = cmd->request->next_rq->special; - - sg_free_table_chained(&bidi_sdb->table, false); - kmem_cache_free(scsi_sdb_cache, bidi_sdb); - cmd->request->next_rq->special = NULL; -} - /* Returns false when no more bytes to process, true if there are more */ static bool scsi_end_request(struct request *req, blk_status_t error, unsigned int bytes, unsigned int bidi_bytes) @@ -687,46 +601,30 @@ static bool scsi_end_request(struct request *req, blk_status_t error, destroy_rcu_head(&cmd->rcu); } - if (req->mq_ctx) { - /* - * In the MQ case the command gets freed by __blk_mq_end_request, - * so we have to do all cleanup that depends on it earlier. - * - * We also can't kick the queues from irq context, so we - * will have to defer it to a workqueue. - */ - scsi_mq_uninit_cmd(cmd); - - /* - * queue is still alive, so grab the ref for preventing it - * from being cleaned up during running queue. - */ - percpu_ref_get(&q->q_usage_counter); - - __blk_mq_end_request(req, error); - - if (scsi_target(sdev)->single_lun || - !list_empty(&sdev->host->starved_list)) - kblockd_schedule_work(&sdev->requeue_work); - else - blk_mq_run_hw_queues(q, true); - - percpu_ref_put(&q->q_usage_counter); - } else { - unsigned long flags; + /* + * In the MQ case the command gets freed by __blk_mq_end_request, + * so we have to do all cleanup that depends on it earlier. + * + * We also can't kick the queues from irq context, so we + * will have to defer it to a workqueue. + */ + scsi_mq_uninit_cmd(cmd); - if (bidi_bytes) - scsi_release_bidi_buffers(cmd); - scsi_release_buffers(cmd); - scsi_put_command(cmd); + /* + * queue is still alive, so grab the ref for preventing it + * from being cleaned up during running queue. + */ + percpu_ref_get(&q->q_usage_counter); - spin_lock_irqsave(q->queue_lock, flags); - blk_finish_request(req, error); - spin_unlock_irqrestore(q->queue_lock, flags); + __blk_mq_end_request(req, error); - scsi_run_queue(q); - } + if (scsi_target(sdev)->single_lun || + !list_empty(&sdev->host->starved_list)) + kblockd_schedule_work(&sdev->requeue_work); + else + blk_mq_run_hw_queues(q, true); + percpu_ref_put(&q->q_usage_counter); put_device(&sdev->sdev_gendev); return false; } @@ -774,13 +672,7 @@ static void scsi_io_completion_reprep(struct scsi_cmnd *cmd, struct request_queue *q) { /* A new command will be prepared and issued. */ - if (q->mq_ops) { - scsi_mq_requeue_cmd(cmd); - } else { - /* Unprep request and put it back at head of the queue. */ - scsi_release_buffers(cmd); - scsi_requeue_command(q, cmd); - } + scsi_mq_requeue_cmd(cmd); } /* Helper for scsi_io_completion() when special action required. 
*/ @@ -1120,7 +1012,8 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes) scsi_io_completion_action(cmd, result); } -static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb) +static blk_status_t scsi_init_sgtable(struct request *req, + struct scsi_data_buffer *sdb) { int count; @@ -1129,7 +1022,7 @@ static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb) */ if (unlikely(sg_alloc_table_chained(&sdb->table, blk_rq_nr_phys_segments(req), sdb->table.sgl))) - return BLKPREP_DEFER; + return BLK_STS_RESOURCE; /* * Next, walk the list, and fill in the addresses and sizes of @@ -1139,7 +1032,7 @@ static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb) BUG_ON(count > sdb->table.nents); sdb->table.nents = count; sdb->length = blk_rq_payload_bytes(req); - return BLKPREP_OK; + return BLK_STS_OK; } /* @@ -1149,62 +1042,48 @@ static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb) * * Arguments: cmd - Command descriptor we wish to initialize * - * Returns: 0 on success - * BLKPREP_DEFER if the failure is retryable - * BLKPREP_KILL if the failure is fatal + * Returns: BLK_STS_OK on success + * BLK_STS_RESOURCE if the failure is retryable + * BLK_STS_IOERR if the failure is fatal */ -int scsi_init_io(struct scsi_cmnd *cmd) +blk_status_t scsi_init_io(struct scsi_cmnd *cmd) { - struct scsi_device *sdev = cmd->device; struct request *rq = cmd->request; - bool is_mq = (rq->mq_ctx != NULL); - int error = BLKPREP_KILL; + blk_status_t ret; if (WARN_ON_ONCE(!blk_rq_nr_phys_segments(rq))) - goto err_exit; + return BLK_STS_IOERR; - error = scsi_init_sgtable(rq, &cmd->sdb); - if (error) - goto err_exit; + ret = scsi_init_sgtable(rq, &cmd->sdb); + if (ret) + return ret; if (blk_bidi_rq(rq)) { - if (!rq->q->mq_ops) { - struct scsi_data_buffer *bidi_sdb = - kmem_cache_zalloc(scsi_sdb_cache, GFP_ATOMIC); - if (!bidi_sdb) { - error = BLKPREP_DEFER; - goto err_exit; - } - - rq->next_rq->special = bidi_sdb; - } - - error = scsi_init_sgtable(rq->next_rq, rq->next_rq->special); - if (error) - goto err_exit; + ret = scsi_init_sgtable(rq->next_rq, rq->next_rq->special); + if (ret) + goto out_free_sgtables; } if (blk_integrity_rq(rq)) { struct scsi_data_buffer *prot_sdb = cmd->prot_sdb; int ivecs, count; - if (prot_sdb == NULL) { + if (WARN_ON_ONCE(!prot_sdb)) { /* * This can happen if someone (e.g. multipath) * queues a command to a device on an adapter * that does not support DIX. 
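Note: from this hunk onward the prep helpers (scsi_init_sgtable(), scsi_init_io(), the scsi_setup_*_cmnd() family, and the sd.c init_command paths further down) return blk_status_t directly instead of the old BLKPREP_* codes, so the prep_to_mq() translation helper removed later in this patch becomes unnecessary. The correspondence, shown as a converted function with hypothetical condition names:

    /* Returns BLK_STS_* directly; no prep_to_mq() translation step. */
    static blk_status_t my_prep(struct scsi_cmnd *cmd)
    {
            if (out_of_resources())  /* hypothetical; was BLKPREP_DEFER */
                    return BLK_STS_RESOURCE;
            if (fatal_error())       /* hypothetical; was BLKPREP_KILL  */
                    return BLK_STS_IOERR;
            return BLK_STS_OK;       /* was BLKPREP_OK */
    }

BLKPREP_INVALID maps to BLK_STS_IOERR as well, per the default case of the removed prep_to_mq().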
*/ - WARN_ON_ONCE(1); - error = BLKPREP_KILL; - goto err_exit; + ret = BLK_STS_IOERR; + goto out_free_sgtables; } ivecs = blk_rq_count_integrity_sg(rq->q, rq->bio); if (sg_alloc_table_chained(&prot_sdb->table, ivecs, prot_sdb->table.sgl)) { - error = BLKPREP_DEFER; - goto err_exit; + ret = BLK_STS_RESOURCE; + goto out_free_sgtables; } count = blk_rq_map_integrity_sg(rq->q, rq->bio, @@ -1216,17 +1095,10 @@ int scsi_init_io(struct scsi_cmnd *cmd) cmd->prot_sdb->table.nents = count; } - return BLKPREP_OK; -err_exit: - if (is_mq) { - scsi_mq_free_sgtables(cmd); - } else { - scsi_release_buffers(cmd); - cmd->request->special = NULL; - scsi_put_command(cmd); - put_device(&sdev->sdev_gendev); - } - return error; + return BLK_STS_OK; +out_free_sgtables: + scsi_mq_free_sgtables(cmd); + return ret; } EXPORT_SYMBOL(scsi_init_io); @@ -1312,7 +1184,8 @@ void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd) scsi_add_cmd_to_list(cmd); } -static int scsi_setup_scsi_cmnd(struct scsi_device *sdev, struct request *req) +static blk_status_t scsi_setup_scsi_cmnd(struct scsi_device *sdev, + struct request *req) { struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); @@ -1323,8 +1196,8 @@ static int scsi_setup_scsi_cmnd(struct scsi_device *sdev, struct request *req) * submit a request without an attached bio. */ if (req->bio) { - int ret = scsi_init_io(cmd); - if (unlikely(ret)) + blk_status_t ret = scsi_init_io(cmd); + if (unlikely(ret != BLK_STS_OK)) return ret; } else { BUG_ON(blk_rq_bytes(req)); @@ -1336,20 +1209,21 @@ static int scsi_setup_scsi_cmnd(struct scsi_device *sdev, struct request *req) cmd->cmnd = scsi_req(req)->cmd; cmd->transfersize = blk_rq_bytes(req); cmd->allowed = scsi_req(req)->retries; - return BLKPREP_OK; + return BLK_STS_OK; } /* * Setup a normal block command. These are simple request from filesystems * that still need to be translated to SCSI CDBs from the ULD. */ -static int scsi_setup_fs_cmnd(struct scsi_device *sdev, struct request *req) +static blk_status_t scsi_setup_fs_cmnd(struct scsi_device *sdev, + struct request *req) { struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); if (unlikely(sdev->handler && sdev->handler->prep_fn)) { - int ret = sdev->handler->prep_fn(sdev, req); - if (ret != BLKPREP_OK) + blk_status_t ret = sdev->handler->prep_fn(sdev, req); + if (ret != BLK_STS_OK) return ret; } @@ -1358,7 +1232,8 @@ static int scsi_setup_fs_cmnd(struct scsi_device *sdev, struct request *req) return scsi_cmd_to_driver(cmd)->init_command(cmd); } -static int scsi_setup_cmnd(struct scsi_device *sdev, struct request *req) +static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev, + struct request *req) { struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); @@ -1375,129 +1250,48 @@ static int scsi_setup_cmnd(struct scsi_device *sdev, struct request *req) return scsi_setup_fs_cmnd(sdev, req); } -static int +static blk_status_t scsi_prep_state_check(struct scsi_device *sdev, struct request *req) { - int ret = BLKPREP_OK; - - /* - * If the device is not in running state we will reject some - * or all commands. - */ - if (unlikely(sdev->sdev_state != SDEV_RUNNING)) { - switch (sdev->sdev_state) { - case SDEV_OFFLINE: - case SDEV_TRANSPORT_OFFLINE: - /* - * If the device is offline we refuse to process any - * commands. The device must be brought online - * before trying any recovery commands. 
- */ - sdev_printk(KERN_ERR, sdev, - "rejecting I/O to offline device\n"); - ret = BLKPREP_KILL; - break; - case SDEV_DEL: - /* - * If the device is fully deleted, we refuse to - * process any commands as well. - */ - sdev_printk(KERN_ERR, sdev, - "rejecting I/O to dead device\n"); - ret = BLKPREP_KILL; - break; - case SDEV_BLOCK: - case SDEV_CREATED_BLOCK: - ret = BLKPREP_DEFER; - break; - case SDEV_QUIESCE: - /* - * If the devices is blocked we defer normal commands. - */ - if (req && !(req->rq_flags & RQF_PREEMPT)) - ret = BLKPREP_DEFER; - break; - default: - /* - * For any other not fully online state we only allow - * special commands. In particular any user initiated - * command is not allowed. - */ - if (req && !(req->rq_flags & RQF_PREEMPT)) - ret = BLKPREP_KILL; - break; - } - } - return ret; -} - -static int -scsi_prep_return(struct request_queue *q, struct request *req, int ret) -{ - struct scsi_device *sdev = q->queuedata; - - switch (ret) { - case BLKPREP_KILL: - case BLKPREP_INVALID: - scsi_req(req)->result = DID_NO_CONNECT << 16; - /* release the command and kill it */ - if (req->special) { - struct scsi_cmnd *cmd = req->special; - scsi_release_buffers(cmd); - scsi_put_command(cmd); - put_device(&sdev->sdev_gendev); - req->special = NULL; - } - break; - case BLKPREP_DEFER: + switch (sdev->sdev_state) { + case SDEV_OFFLINE: + case SDEV_TRANSPORT_OFFLINE: /* - * If we defer, the blk_peek_request() returns NULL, but the - * queue must be restarted, so we schedule a callback to happen - * shortly. + * If the device is offline we refuse to process any + * commands. The device must be brought online + * before trying any recovery commands. */ - if (atomic_read(&sdev->device_busy) == 0) - blk_delay_queue(q, SCSI_QUEUE_DELAY); - break; + sdev_printk(KERN_ERR, sdev, + "rejecting I/O to offline device\n"); + return BLK_STS_IOERR; + case SDEV_DEL: + /* + * If the device is fully deleted, we refuse to + * process any commands as well. + */ + sdev_printk(KERN_ERR, sdev, + "rejecting I/O to dead device\n"); + return BLK_STS_IOERR; + case SDEV_BLOCK: + case SDEV_CREATED_BLOCK: + return BLK_STS_RESOURCE; + case SDEV_QUIESCE: + /* + * If the devices is blocked we defer normal commands. + */ + if (req && !(req->rq_flags & RQF_PREEMPT)) + return BLK_STS_RESOURCE; + return BLK_STS_OK; default: - req->rq_flags |= RQF_DONTPREP; - } - - return ret; -} - -static int scsi_prep_fn(struct request_queue *q, struct request *req) -{ - struct scsi_device *sdev = q->queuedata; - struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); - int ret; - - ret = scsi_prep_state_check(sdev, req); - if (ret != BLKPREP_OK) - goto out; - - if (!req->special) { - /* Bail if we can't get a reference to the device */ - if (unlikely(!get_device(&sdev->sdev_gendev))) { - ret = BLKPREP_DEFER; - goto out; - } - - scsi_init_command(sdev, cmd); - req->special = cmd; + /* + * For any other not fully online state we only allow + * special commands. In particular any user initiated + * command is not allowed. 
+ */ + if (req && !(req->rq_flags & RQF_PREEMPT)) + return BLK_STS_IOERR; + return BLK_STS_OK; } - - cmd->tag = req->tag; - cmd->request = req; - cmd->prot_op = SCSI_PROT_NORMAL; - - ret = scsi_setup_cmnd(sdev, req); -out: - return scsi_prep_return(q, req, ret); -} - -static void scsi_unprep_fn(struct request_queue *q, struct request *req) -{ - scsi_uninit_cmd(blk_mq_rq_to_pdu(req)); } /* @@ -1519,14 +1313,8 @@ static inline int scsi_dev_queue_ready(struct request_queue *q, /* * unblock after device_blocked iterates to zero */ - if (atomic_dec_return(&sdev->device_blocked) > 0) { - /* - * For the MQ case we take care of this in the caller. - */ - if (!q->mq_ops) - blk_delay_queue(q, SCSI_QUEUE_DELAY); + if (atomic_dec_return(&sdev->device_blocked) > 0) goto out_dec; - } SCSI_LOG_MLQUEUE(3, sdev_printk(KERN_INFO, sdev, "unblocking device at zero depth\n")); } @@ -1661,13 +1449,13 @@ out_dec: * needs to return 'not busy'. Otherwise, request stacking drivers * may hold requests forever. */ -static int scsi_lld_busy(struct request_queue *q) +static bool scsi_mq_lld_busy(struct request_queue *q) { struct scsi_device *sdev = q->queuedata; struct Scsi_Host *shost; if (blk_queue_dying(q)) - return 0; + return false; shost = sdev->host; @@ -1678,43 +1466,9 @@ static int scsi_lld_busy(struct request_queue *q) * in SCSI layer. */ if (scsi_host_in_recovery(shost) || scsi_device_is_busy(sdev)) - return 1; - - return 0; -} - -/* - * Kill a request for a dead device - */ -static void scsi_kill_request(struct request *req, struct request_queue *q) -{ - struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); - struct scsi_device *sdev; - struct scsi_target *starget; - struct Scsi_Host *shost; - - blk_start_request(req); - - scmd_printk(KERN_INFO, cmd, "killing request\n"); - - sdev = cmd->device; - starget = scsi_target(sdev); - shost = sdev->host; - scsi_init_cmd_errh(cmd); - cmd->result = DID_NO_CONNECT << 16; - atomic_inc(&cmd->device->iorequest_cnt); - - /* - * SCSI request completion path will do scsi_device_unbusy(), - * bump busy counts. To bump the counters, we need to dance - * with the locks as normal issue path does. - */ - atomic_inc(&sdev->device_busy); - atomic_inc(&shost->host_busy); - if (starget->can_queue > 0) - atomic_inc(&starget->target_busy); + return true; - blk_complete_request(req); + return false; } static void scsi_softirq_done(struct request *rq) @@ -1837,170 +1591,6 @@ static int scsi_dispatch_cmd(struct scsi_cmnd *cmd) return 0; } -/** - * scsi_done - Invoke completion on finished SCSI command. - * @cmd: The SCSI Command for which a low-level device driver (LLDD) gives - * ownership back to SCSI Core -- i.e. the LLDD has finished with it. - * - * Description: This function is the mid-level's (SCSI Core) interrupt routine, - * which regains ownership of the SCSI command (de facto) from a LLDD, and - * calls blk_complete_request() for further processing. - * - * This function is interrupt context safe. - */ -static void scsi_done(struct scsi_cmnd *cmd) -{ - trace_scsi_dispatch_cmd_done(cmd); - blk_complete_request(cmd->request); -} - -/* - * Function: scsi_request_fn() - * - * Purpose: Main strategy routine for SCSI. - * - * Arguments: q - Pointer to actual queue. - * - * Returns: Nothing - * - * Lock status: request queue lock assumed to be held when called. - * - * Note: See sd_zbc.c sd_zbc_write_lock_zone() for write order - * protection for ZBC disks. 
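Note: the largest deletion in this file follows: scsi_request_fn(), the legacy strategy routine that looped pulling requests off the queue under queue_lock, goes away along with scsi_kill_request() and the old scsi_done(). Dispatch now happens only through the blk-mq .queue_rq hook (scsi_queue_rq(), visible further down), which receives one request at a time and signals busy conditions by returning BLK_STS_RESOURCE instead of requeueing by hand. A heavily abbreviated sketch of that shape, with a hypothetical readiness check; the real scsi_queue_rq() also handles budget, prep and busy accounting:

    static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
                                    const struct blk_mq_queue_data *bd)
    {
            struct request *req = bd->rq;

            if (!target_ready())             /* hypothetical check */
                    return BLK_STS_RESOURCE; /* blk-mq re-runs the hw queue */

            blk_mq_start_request(req);
            /* ... build and issue the command; completion arrives via IRQ ... */
            return BLK_STS_OK;
    }

    static const struct blk_mq_ops my_mq_ops = {
            .queue_rq = my_queue_rq,
    };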
- */ -static void scsi_request_fn(struct request_queue *q) - __releases(q->queue_lock) - __acquires(q->queue_lock) -{ - struct scsi_device *sdev = q->queuedata; - struct Scsi_Host *shost; - struct scsi_cmnd *cmd; - struct request *req; - - /* - * To start with, we keep looping until the queue is empty, or until - * the host is no longer able to accept any more requests. - */ - shost = sdev->host; - for (;;) { - int rtn; - /* - * get next queueable request. We do this early to make sure - * that the request is fully prepared even if we cannot - * accept it. - */ - req = blk_peek_request(q); - if (!req) - break; - - if (unlikely(!scsi_device_online(sdev))) { - sdev_printk(KERN_ERR, sdev, - "rejecting I/O to offline device\n"); - scsi_kill_request(req, q); - continue; - } - - if (!scsi_dev_queue_ready(q, sdev)) - break; - - /* - * Remove the request from the request list. - */ - if (!(blk_queue_tagged(q) && !blk_queue_start_tag(q, req))) - blk_start_request(req); - - spin_unlock_irq(q->queue_lock); - cmd = blk_mq_rq_to_pdu(req); - if (cmd != req->special) { - printk(KERN_CRIT "impossible request in %s.\n" - "please mail a stack trace to " - "linux-scsi@vger.kernel.org\n", - __func__); - blk_dump_rq_flags(req, "foo"); - BUG(); - } - - /* - * We hit this when the driver is using a host wide - * tag map. For device level tag maps the queue_depth check - * in the device ready fn would prevent us from trying - * to allocate a tag. Since the map is a shared host resource - * we add the dev to the starved list so it eventually gets - * a run when a tag is freed. - */ - if (blk_queue_tagged(q) && !(req->rq_flags & RQF_QUEUED)) { - spin_lock_irq(shost->host_lock); - if (list_empty(&sdev->starved_entry)) - list_add_tail(&sdev->starved_entry, - &shost->starved_list); - spin_unlock_irq(shost->host_lock); - goto not_ready; - } - - if (!scsi_target_queue_ready(shost, sdev)) - goto not_ready; - - if (!scsi_host_queue_ready(q, shost, sdev)) - goto host_not_ready; - - if (sdev->simple_tags) - cmd->flags |= SCMD_TAGGED; - else - cmd->flags &= ~SCMD_TAGGED; - - /* - * Finally, initialize any error handling parameters, and set up - * the timers for timeouts. - */ - scsi_init_cmd_errh(cmd); - - /* - * Dispatch the command to the low-level driver. - */ - cmd->scsi_done = scsi_done; - rtn = scsi_dispatch_cmd(cmd); - if (rtn) { - scsi_queue_insert(cmd, rtn); - spin_lock_irq(q->queue_lock); - goto out_delay; - } - spin_lock_irq(q->queue_lock); - } - - return; - - host_not_ready: - if (scsi_target(sdev)->can_queue > 0) - atomic_dec(&scsi_target(sdev)->target_busy); - not_ready: - /* - * lock q, handle tag, requeue req, and decrement device_busy. We - * must return with queue_lock held. - * - * Decrementing device_busy without checking it is OK, as all such - * cases (host limits or settings) should run the queue at some - * later time. - */ - spin_lock_irq(q->queue_lock); - blk_requeue_request(q, req); - atomic_dec(&sdev->device_busy); -out_delay: - if (!atomic_read(&sdev->device_busy) && !scsi_device_blocked(sdev)) - blk_delay_queue(q, SCSI_QUEUE_DELAY); -} - -static inline blk_status_t prep_to_mq(int ret) -{ - switch (ret) { - case BLKPREP_OK: - return BLK_STS_OK; - case BLKPREP_DEFER: - return BLK_STS_RESOURCE; - default: - return BLK_STS_IOERR; - } -} - /* Size in bytes of the sg-list stored in the scsi-mq command-private data. 
*/ static unsigned int scsi_mq_sgl_size(struct Scsi_Host *shost) { @@ -2008,7 +1598,7 @@ static unsigned int scsi_mq_sgl_size(struct Scsi_Host *shost) sizeof(struct scatterlist); } -static int scsi_mq_prep_fn(struct request *req) +static blk_status_t scsi_mq_prep_fn(struct request *req) { struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); struct scsi_device *sdev = req->q->queuedata; @@ -2052,8 +1642,18 @@ static int scsi_mq_prep_fn(struct request *req) static void scsi_mq_done(struct scsi_cmnd *cmd) { + if (unlikely(test_and_set_bit(SCMD_STATE_COMPLETE, &cmd->state))) + return; trace_scsi_dispatch_cmd_done(cmd); - blk_mq_complete_request(cmd->request); + + /* + * If the block layer didn't complete the request due to a timeout + * injection, scsi must clear its internal completed state so that the + * timeout handler will see it needs to escalate its own error + * recovery. + */ + if (unlikely(!blk_mq_complete_request(cmd->request))) + clear_bit(SCMD_STATE_COMPLETE, &cmd->state); } static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx) @@ -2096,9 +1696,15 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, blk_status_t ret; int reason; - ret = prep_to_mq(scsi_prep_state_check(sdev, req)); - if (ret != BLK_STS_OK) - goto out_put_budget; + /* + * If the device is not in running state we will reject some or all + * commands. + */ + if (unlikely(sdev->sdev_state != SDEV_RUNNING)) { + ret = scsi_prep_state_check(sdev, req); + if (ret != BLK_STS_OK) + goto out_put_budget; + } ret = BLK_STS_RESOURCE; if (!scsi_target_queue_ready(shost, sdev)) @@ -2106,8 +1712,9 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, if (!scsi_host_queue_ready(q, shost, sdev)) goto out_dec_target_busy; + clear_bit(SCMD_STATE_COMPLETE, &cmd->state); if (!(req->rq_flags & RQF_DONTPREP)) { - ret = prep_to_mq(scsi_mq_prep_fn(req)); + ret = scsi_mq_prep_fn(req); if (ret != BLK_STS_OK) goto out_dec_host_busy; req->rq_flags |= RQF_DONTPREP; @@ -2208,7 +1815,7 @@ static int scsi_map_queues(struct blk_mq_tag_set *set) if (shost->hostt->map_queues) return shost->hostt->map_queues(shost); - return blk_mq_map_queues(set); + return blk_mq_map_queues(&set->map[0]); } void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q) @@ -2235,10 +1842,8 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q) blk_queue_segment_boundary(q, shost->dma_boundary); dma_set_seg_boundary(dev, shost->dma_boundary); - blk_queue_max_segment_size(q, dma_get_max_seg_size(dev)); - - if (!shost->use_clustering) - q->limits.cluster = 0; + blk_queue_max_segment_size(q, + min(shost->max_segment_size, dma_get_max_seg_size(dev))); /* * Set a reasonable default alignment: The larger of 32-byte (dword), @@ -2251,77 +1856,6 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q) } EXPORT_SYMBOL_GPL(__scsi_init_queue); -static int scsi_old_init_rq(struct request_queue *q, struct request *rq, - gfp_t gfp) -{ - struct Scsi_Host *shost = q->rq_alloc_data; - const bool unchecked_isa_dma = shost->unchecked_isa_dma; - struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq); - - memset(cmd, 0, sizeof(*cmd)); - - if (unchecked_isa_dma) - cmd->flags |= SCMD_UNCHECKED_ISA_DMA; - cmd->sense_buffer = scsi_alloc_sense_buffer(unchecked_isa_dma, gfp, - NUMA_NO_NODE); - if (!cmd->sense_buffer) - goto fail; - cmd->req.sense = cmd->sense_buffer; - - if (scsi_host_get_prot(shost) >= SHOST_DIX_TYPE0_PROTECTION) { - cmd->prot_sdb = kmem_cache_zalloc(scsi_sdb_cache, gfp); - if (!cmd->prot_sdb) - goto fail_free_sense; 
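Note: the __scsi_init_queue() hunk above is where the use_clustering flag finally dies: instead of a boolean that zeroed q->limits.cluster, hosts now express merging constraints as limits the block layer already understands. Templates that used DISABLE_CLUSTERING grow a dma_boundary of PAGE_SIZE - 1 (no segment may straddle a page, which is what disabled clustering effectively guaranteed) or an explicit max_segment_size, and the queue's segment size is clamped to min(shost->max_segment_size, dma_get_max_seg_size(dev)). Expressed as a template, following the qlogicfas conversion earlier in this patch:

    static struct scsi_host_template my_template = {
            /* ... */
            .sg_tablesize   = SG_ALL,
            /*
             * Was: .use_clustering = DISABLE_CLUSTERING.
             * Segments must not cross a page boundary.
             */
            .dma_boundary   = PAGE_SIZE - 1,
    };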
- } - - return 0; - -fail_free_sense: - scsi_free_sense_buffer(unchecked_isa_dma, cmd->sense_buffer); -fail: - return -ENOMEM; -} - -static void scsi_old_exit_rq(struct request_queue *q, struct request *rq) -{ - struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq); - - if (cmd->prot_sdb) - kmem_cache_free(scsi_sdb_cache, cmd->prot_sdb); - scsi_free_sense_buffer(cmd->flags & SCMD_UNCHECKED_ISA_DMA, - cmd->sense_buffer); -} - -struct request_queue *scsi_old_alloc_queue(struct scsi_device *sdev) -{ - struct Scsi_Host *shost = sdev->host; - struct request_queue *q; - - q = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE, NULL); - if (!q) - return NULL; - q->cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size; - q->rq_alloc_data = shost; - q->request_fn = scsi_request_fn; - q->init_rq_fn = scsi_old_init_rq; - q->exit_rq_fn = scsi_old_exit_rq; - q->initialize_rq_fn = scsi_initialize_rq; - - if (blk_init_allocated_queue(q) < 0) { - blk_cleanup_queue(q); - return NULL; - } - - __scsi_init_queue(shost, q); - blk_queue_flag_set(QUEUE_FLAG_SCSI_PASSTHROUGH, q); - blk_queue_prep_rq(q, scsi_prep_fn); - blk_queue_unprep_rq(q, scsi_unprep_fn); - blk_queue_softirq_done(q, scsi_softirq_done); - blk_queue_rq_timed_out(q, scsi_times_out); - blk_queue_lld_busy(q, scsi_lld_busy); - return q; -} - static const struct blk_mq_ops scsi_mq_ops = { .get_budget = scsi_mq_get_budget, .put_budget = scsi_mq_put_budget, @@ -2334,6 +1868,7 @@ static const struct blk_mq_ops scsi_mq_ops = { .init_request = scsi_mq_init_request, .exit_request = scsi_mq_exit_request, .initialize_rq_fn = scsi_initialize_rq, + .busy = scsi_mq_lld_busy, .map_queues = scsi_map_queues, }; @@ -2388,10 +1923,7 @@ struct scsi_device *scsi_device_from_queue(struct request_queue *q) { struct scsi_device *sdev = NULL; - if (q->mq_ops) { - if (q->mq_ops == &scsi_mq_ops) - sdev = q->queuedata; - } else if (q->request_fn == scsi_request_fn) + if (q->mq_ops == &scsi_mq_ops) sdev = q->queuedata; if (!sdev || !get_device(&sdev->sdev_gendev)) sdev = NULL; @@ -2994,39 +2526,6 @@ void sdev_evt_send_simple(struct scsi_device *sdev, } EXPORT_SYMBOL_GPL(sdev_evt_send_simple); -/** - * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn() - * @sdev: SCSI device to count the number of scsi_request_fn() callers for. - */ -static int scsi_request_fn_active(struct scsi_device *sdev) -{ - struct request_queue *q = sdev->request_queue; - int request_fn_active; - - WARN_ON_ONCE(sdev->host->use_blk_mq); - - spin_lock_irq(q->queue_lock); - request_fn_active = q->request_fn_active; - spin_unlock_irq(q->queue_lock); - - return request_fn_active; -} - -/** - * scsi_wait_for_queuecommand() - wait for ongoing queuecommand() calls - * @sdev: SCSI device pointer. - * - * Wait until the ongoing shost->hostt->queuecommand() calls that are - * invoked from scsi_request_fn() have finished. - */ -static void scsi_wait_for_queuecommand(struct scsi_device *sdev) -{ - WARN_ON_ONCE(sdev->host->use_blk_mq); - - while (scsi_request_fn_active(sdev)) - msleep(20); -} - /** * scsi_device_quiesce - Block user issued commands. * @sdev: scsi device to quiesce. @@ -3150,7 +2649,6 @@ EXPORT_SYMBOL(scsi_target_resume); int scsi_internal_device_block_nowait(struct scsi_device *sdev) { struct request_queue *q = sdev->request_queue; - unsigned long flags; int err = 0; err = scsi_device_set_state(sdev, SDEV_BLOCK); @@ -3166,14 +2664,7 @@ int scsi_internal_device_block_nowait(struct scsi_device *sdev) * block layer from calling the midlayer with this device's * request queue. 
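Note: with the legacy path gone, blocking a device no longer needs blk_stop_queue() under queue_lock or the removed scsi_wait_for_queuecommand() polling loop. blk_mq_quiesce_queue_nowait() stops new dispatches immediately, and the synchronous blk_mq_quiesce_queue() additionally waits for .queue_rq calls already in flight, which is what the old msleep() loop only approximated. The resulting block/unblock sequence:

    /* Stop new ->queue_rq() invocations; returns without waiting. */
    blk_mq_quiesce_queue_nowait(q);

    /* Or: stop and also wait for in-flight ->queue_rq() to finish. */
    blk_mq_quiesce_queue(q);

    /* ... device is blocked; to resume: ... */
    blk_mq_unquiesce_queue(q);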
*/ - if (q->mq_ops) { - blk_mq_quiesce_queue_nowait(q); - } else { - spin_lock_irqsave(q->queue_lock, flags); - blk_stop_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); - } - + blk_mq_quiesce_queue_nowait(q); return 0; } EXPORT_SYMBOL_GPL(scsi_internal_device_block_nowait); @@ -3204,12 +2695,8 @@ static int scsi_internal_device_block(struct scsi_device *sdev) mutex_lock(&sdev->state_mutex); err = scsi_internal_device_block_nowait(sdev); - if (err == 0) { - if (q->mq_ops) - blk_mq_quiesce_queue(q); - else - scsi_wait_for_queuecommand(sdev); - } + if (err == 0) + blk_mq_quiesce_queue(q); mutex_unlock(&sdev->state_mutex); return err; @@ -3218,15 +2705,8 @@ static int scsi_internal_device_block(struct scsi_device *sdev) void scsi_start_queue(struct scsi_device *sdev) { struct request_queue *q = sdev->request_queue; - unsigned long flags; - if (q->mq_ops) { - blk_mq_unquiesce_queue(q); - } else { - spin_lock_irqsave(q->queue_lock, flags); - blk_start_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); - } + blk_mq_unquiesce_queue(q); } /** diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h index 99f1db5e467e..5f21547b2ad2 100644 --- a/drivers/scsi/scsi_priv.h +++ b/drivers/scsi/scsi_priv.h @@ -92,7 +92,6 @@ extern void scsi_queue_insert(struct scsi_cmnd *cmd, int reason); extern void scsi_io_completion(struct scsi_cmnd *, unsigned int); extern void scsi_run_host_queues(struct Scsi_Host *shost); extern void scsi_requeue_run_queue(struct work_struct *work); -extern struct request_queue *scsi_old_alloc_queue(struct scsi_device *sdev); extern struct request_queue *scsi_mq_alloc_queue(struct scsi_device *sdev); extern void scsi_start_queue(struct scsi_device *sdev); extern int scsi_mq_setup_tags(struct Scsi_Host *shost); diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c index 78ca63dfba4a..dd0d516f65e2 100644 --- a/drivers/scsi/scsi_scan.c +++ b/drivers/scsi/scsi_scan.c @@ -266,10 +266,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget, */ sdev->borken = 1; - if (shost_use_blk_mq(shost)) - sdev->request_queue = scsi_mq_alloc_queue(sdev); - else - sdev->request_queue = scsi_old_alloc_queue(sdev); + sdev->request_queue = scsi_mq_alloc_queue(sdev); if (!sdev->request_queue) { /* release fn is set up in scsi_sysfs_device_initialise, so * have to free and put manually here */ @@ -280,11 +277,6 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget, WARN_ON_ONCE(!blk_get_queue(sdev->request_queue)); sdev->request_queue->queuedata = sdev; - if (!shost_use_blk_mq(sdev->host)) { - blk_queue_init_tags(sdev->request_queue, - sdev->host->cmd_per_lun, shost->bqt, - shost->hostt->tag_alloc_policy); - } scsi_change_queue_depth(sdev, sdev->host->cmd_per_lun ? 
sdev->host->cmd_per_lun : 1); diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c index 3aee9464a7bf..6a9040faed00 100644 --- a/drivers/scsi/scsi_sysfs.c +++ b/drivers/scsi/scsi_sysfs.c @@ -367,7 +367,6 @@ store_shost_eh_deadline(struct device *dev, struct device_attribute *attr, static DEVICE_ATTR(eh_deadline, S_IRUGO | S_IWUSR, show_shost_eh_deadline, store_shost_eh_deadline); -shost_rd_attr(use_blk_mq, "%d\n"); shost_rd_attr(unique_id, "%u\n"); shost_rd_attr(cmd_per_lun, "%hd\n"); shost_rd_attr(can_queue, "%hd\n"); @@ -386,6 +385,13 @@ show_host_busy(struct device *dev, struct device_attribute *attr, char *buf) } static DEVICE_ATTR(host_busy, S_IRUGO, show_host_busy, NULL); +static ssize_t +show_use_blk_mq(struct device *dev, struct device_attribute *attr, char *buf) +{ + return sprintf(buf, "1\n"); +} +static DEVICE_ATTR(use_blk_mq, S_IRUGO, show_use_blk_mq, NULL); + static struct attribute *scsi_sysfs_shost_attrs[] = { &dev_attr_use_blk_mq.attr, &dev_attr_unique_id.attr, diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c index 381668fa135d..d7035270d274 100644 --- a/drivers/scsi/scsi_transport_fc.c +++ b/drivers/scsi/scsi_transport_fc.c @@ -3592,7 +3592,7 @@ fc_bsg_job_timeout(struct request *req) /* the blk_end_sync_io() doesn't check the error */ if (inflight) - __blk_complete_request(req); + blk_mq_end_request(req, BLK_STS_IOERR); return BLK_EH_DONE; } @@ -3684,14 +3684,9 @@ static void fc_bsg_goose_queue(struct fc_rport *rport) { struct request_queue *q = rport->rqst_q; - unsigned long flags; - - if (!q) - return; - spin_lock_irqsave(q->queue_lock, flags); - blk_run_queue_async(q); - spin_unlock_irqrestore(q->queue_lock, flags); + if (q) + blk_mq_run_hw_queues(q, true); } /** @@ -3759,6 +3754,37 @@ static int fc_bsg_dispatch(struct bsg_job *job) return fc_bsg_host_dispatch(shost, job); } +static blk_status_t fc_bsg_rport_prep(struct fc_rport *rport) +{ + if (rport->port_state == FC_PORTSTATE_BLOCKED && + !(rport->flags & FC_RPORT_FAST_FAIL_TIMEDOUT)) + return BLK_STS_RESOURCE; + + if (rport->port_state != FC_PORTSTATE_ONLINE) + return BLK_STS_IOERR; + + return BLK_STS_OK; +} + + +static int fc_bsg_dispatch_prep(struct bsg_job *job) +{ + struct fc_rport *rport = fc_bsg_to_rport(job); + blk_status_t ret; + + ret = fc_bsg_rport_prep(rport); + switch (ret) { + case BLK_STS_OK: + break; + case BLK_STS_RESOURCE: + return -EAGAIN; + default: + return -EIO; + } + + return fc_bsg_dispatch(job); +} + /** * fc_bsg_hostadd - Create and add the bsg hooks so we can receive requests * @shost: shost for fc_host @@ -3780,7 +3806,8 @@ fc_bsg_hostadd(struct Scsi_Host *shost, struct fc_host_attrs *fc_host) snprintf(bsg_name, sizeof(bsg_name), "fc_host%d", shost->host_no); - q = bsg_setup_queue(dev, bsg_name, fc_bsg_dispatch, i->f->dd_bsg_size); + q = bsg_setup_queue(dev, bsg_name, fc_bsg_dispatch, fc_bsg_job_timeout, + i->f->dd_bsg_size); if (IS_ERR(q)) { dev_err(dev, "fc_host%d: bsg interface failed to initialize - setup queue\n", @@ -3788,26 +3815,11 @@ fc_bsg_hostadd(struct Scsi_Host *shost, struct fc_host_attrs *fc_host) return PTR_ERR(q); } __scsi_init_queue(shost, q); - blk_queue_rq_timed_out(q, fc_bsg_job_timeout); blk_queue_rq_timeout(q, FC_DEFAULT_BSG_TIMEOUT); fc_host->rqst_q = q; return 0; } -static int fc_bsg_rport_prep(struct request_queue *q, struct request *req) -{ - struct fc_rport *rport = dev_to_rport(q->queuedata); - - if (rport->port_state == FC_PORTSTATE_BLOCKED && - !(rport->flags & FC_RPORT_FAST_FAIL_TIMEDOUT)) - return 
BLKPREP_DEFER; - - if (rport->port_state != FC_PORTSTATE_ONLINE) - return BLKPREP_KILL; - - return BLKPREP_OK; -} - /** * fc_bsg_rportadd - Create and add the bsg hooks so we can receive requests * @shost: shost that rport is attached to @@ -3825,15 +3837,13 @@ fc_bsg_rportadd(struct Scsi_Host *shost, struct fc_rport *rport) if (!i->f->bsg_request) return -ENOTSUPP; - q = bsg_setup_queue(dev, dev_name(dev), fc_bsg_dispatch, - i->f->dd_bsg_size); + q = bsg_setup_queue(dev, dev_name(dev), fc_bsg_dispatch_prep, + fc_bsg_job_timeout, i->f->dd_bsg_size); if (IS_ERR(q)) { dev_err(dev, "failed to setup bsg queue\n"); return PTR_ERR(q); } __scsi_init_queue(shost, q); - blk_queue_prep_rq(q, fc_bsg_rport_prep); - blk_queue_rq_timed_out(q, fc_bsg_job_timeout); blk_queue_rq_timeout(q, BLK_DEFAULT_SG_TIMEOUT); rport->rqst_q = q; return 0; @@ -3852,10 +3862,7 @@ fc_bsg_rportadd(struct Scsi_Host *shost, struct fc_rport *rport) static void fc_bsg_remove(struct request_queue *q) { - if (q) { - bsg_unregister_queue(q); - blk_cleanup_queue(q); - } + bsg_remove_queue(q); } diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c index 6fd2fe210fc3..0508831d6fb9 100644 --- a/drivers/scsi/scsi_transport_iscsi.c +++ b/drivers/scsi/scsi_transport_iscsi.c @@ -37,6 +37,18 @@ #define ISCSI_TRANSPORT_VERSION "2.0-870" +#define CREATE_TRACE_POINTS +#include + +/* + * Export tracepoint symbols to be used by other modules. + */ +EXPORT_TRACEPOINT_SYMBOL_GPL(iscsi_dbg_conn); +EXPORT_TRACEPOINT_SYMBOL_GPL(iscsi_dbg_eh); +EXPORT_TRACEPOINT_SYMBOL_GPL(iscsi_dbg_session); +EXPORT_TRACEPOINT_SYMBOL_GPL(iscsi_dbg_tcp); +EXPORT_TRACEPOINT_SYMBOL_GPL(iscsi_dbg_sw_tcp); + static int dbg_session; module_param_named(debug_session, dbg_session, int, S_IRUGO | S_IWUSR); @@ -59,6 +71,9 @@ MODULE_PARM_DESC(debug_conn, iscsi_cls_session_printk(KERN_INFO, _session, \ "%s: " dbg_fmt, \ __func__, ##arg); \ + iscsi_dbg_trace(trace_iscsi_dbg_trans_session, \ + &(_session)->dev, \ + "%s " dbg_fmt, __func__, ##arg); \ } while (0); #define ISCSI_DBG_TRANS_CONN(_conn, dbg_fmt, arg...) \ @@ -66,7 +81,10 @@ MODULE_PARM_DESC(debug_conn, if (dbg_conn) \ iscsi_cls_conn_printk(KERN_INFO, _conn, \ "%s: " dbg_fmt, \ - __func__, ##arg); \ + __func__, ##arg); \ + iscsi_dbg_trace(trace_iscsi_dbg_trans_conn, \ + &(_conn)->dev, \ + "%s " dbg_fmt, __func__, ##arg); \ } while (0); struct iscsi_internal { @@ -1542,7 +1560,7 @@ iscsi_bsg_host_add(struct Scsi_Host *shost, struct iscsi_cls_host *ihost) return -ENOTSUPP; snprintf(bsg_name, sizeof(bsg_name), "iscsi_host%d", shost->host_no); - q = bsg_setup_queue(dev, bsg_name, iscsi_bsg_host_dispatch, 0); + q = bsg_setup_queue(dev, bsg_name, iscsi_bsg_host_dispatch, NULL, 0); if (IS_ERR(q)) { shost_printk(KERN_ERR, shost, "bsg interface failed to " "initialize - no request queue\n"); @@ -1576,10 +1594,7 @@ static int iscsi_remove_host(struct transport_container *tc, struct Scsi_Host *shost = dev_to_shost(dev); struct iscsi_cls_host *ihost = shost->shost_data; - if (ihost->bsg_q) { - bsg_unregister_queue(ihost->bsg_q); - blk_cleanup_queue(ihost->bsg_q); - } + bsg_remove_queue(ihost->bsg_q); return 0; } @@ -4497,6 +4512,20 @@ int iscsi_unregister_transport(struct iscsi_transport *tt) } EXPORT_SYMBOL_GPL(iscsi_unregister_transport); +void iscsi_dbg_trace(void (*trace)(struct device *dev, struct va_format *), + struct device *dev, const char *fmt, ...) 
+{ + struct va_format vaf; + va_list args; + + va_start(args, fmt); + vaf.fmt = fmt; + vaf.va = &args; + trace(dev, &vaf); + va_end(args); +} +EXPORT_SYMBOL_GPL(iscsi_dbg_trace); + static __init int iscsi_transport_init(void) { int err; diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c index 0a165b2b3e81..692b46937e52 100644 --- a/drivers/scsi/scsi_transport_sas.c +++ b/drivers/scsi/scsi_transport_sas.c @@ -198,7 +198,7 @@ static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy) if (rphy) { q = bsg_setup_queue(&rphy->dev, dev_name(&rphy->dev), - sas_smp_dispatch, 0); + sas_smp_dispatch, NULL, 0); if (IS_ERR(q)) return PTR_ERR(q); rphy->q = q; @@ -207,7 +207,7 @@ static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy) snprintf(name, sizeof(name), "sas_host%d", shost->host_no); q = bsg_setup_queue(&shost->shost_gendev, name, - sas_smp_dispatch, 0); + sas_smp_dispatch, NULL, 0); if (IS_ERR(q)) return PTR_ERR(q); to_sas_host_attrs(shost)->q = q; @@ -246,11 +246,7 @@ static int sas_host_remove(struct transport_container *tc, struct device *dev, struct Scsi_Host *shost = dev_to_shost(dev); struct request_queue *q = to_sas_host_attrs(shost)->q; - if (q) { - bsg_unregister_queue(q); - blk_cleanup_queue(q); - } - + bsg_remove_queue(q); return 0; } diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index bd0a5c694a97..a1a44f52e0e8 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -114,7 +114,7 @@ static int sd_suspend_system(struct device *); static int sd_suspend_runtime(struct device *); static int sd_resume(struct device *); static void sd_rescan(struct device *); -static int sd_init_command(struct scsi_cmnd *SCpnt); +static blk_status_t sd_init_command(struct scsi_cmnd *SCpnt); static void sd_uninit_command(struct scsi_cmnd *SCpnt); static int sd_done(struct scsi_cmnd *); static void sd_eh_reset(struct scsi_cmnd *); @@ -751,7 +751,7 @@ static void sd_config_discard(struct scsi_disk *sdkp, unsigned int mode) blk_queue_flag_set(QUEUE_FLAG_DISCARD, q); } -static int sd_setup_unmap_cmnd(struct scsi_cmnd *cmd) +static blk_status_t sd_setup_unmap_cmnd(struct scsi_cmnd *cmd) { struct scsi_device *sdp = cmd->device; struct request *rq = cmd->request; @@ -762,7 +762,7 @@ static int sd_setup_unmap_cmnd(struct scsi_cmnd *cmd) rq->special_vec.bv_page = mempool_alloc(sd_page_pool, GFP_ATOMIC); if (!rq->special_vec.bv_page) - return BLKPREP_DEFER; + return BLK_STS_RESOURCE; clear_highpage(rq->special_vec.bv_page); rq->special_vec.bv_offset = 0; rq->special_vec.bv_len = data_len; @@ -786,7 +786,8 @@ static int sd_setup_unmap_cmnd(struct scsi_cmnd *cmd) return scsi_init_io(cmd); } -static int sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd, bool unmap) +static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd, + bool unmap) { struct scsi_device *sdp = cmd->device; struct request *rq = cmd->request; @@ -796,7 +797,7 @@ static int sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd, bool unmap) rq->special_vec.bv_page = mempool_alloc(sd_page_pool, GFP_ATOMIC); if (!rq->special_vec.bv_page) - return BLKPREP_DEFER; + return BLK_STS_RESOURCE; clear_highpage(rq->special_vec.bv_page); rq->special_vec.bv_offset = 0; rq->special_vec.bv_len = data_len; @@ -817,7 +818,8 @@ static int sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd, bool unmap) return scsi_init_io(cmd); } -static int sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd, bool unmap) +static blk_status_t sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd, + bool unmap) { 
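/*
 * The sd.c hunks that follow convert every command-setup helper from
 * the BLKPREP_* codes to blk_status_t. Translation table as applied in
 * this patch (one exception noted):
 *
 *   BLKPREP_OK      -> BLK_STS_OK        ready to be queued
 *   BLKPREP_DEFER   -> BLK_STS_RESOURCE  transient, e.g. empty mempool
 *   BLKPREP_KILL    -> BLK_STS_IOERR     fatal for this request
 *   BLKPREP_INVALID -> BLK_STS_TARGET    not supported by the target
 *
 * The unreachable default in sd_init_command() maps to BLK_STS_NOTSUPP
 * rather than BLK_STS_IOERR.
 */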
struct scsi_device *sdp = cmd->device; struct request *rq = cmd->request; @@ -827,7 +829,7 @@ static int sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd, bool unmap) rq->special_vec.bv_page = mempool_alloc(sd_page_pool, GFP_ATOMIC); if (!rq->special_vec.bv_page) - return BLKPREP_DEFER; + return BLK_STS_RESOURCE; clear_highpage(rq->special_vec.bv_page); rq->special_vec.bv_offset = 0; rq->special_vec.bv_len = data_len; @@ -848,7 +850,7 @@ static int sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd, bool unmap) return scsi_init_io(cmd); } -static int sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd) +static blk_status_t sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd) { struct request *rq = cmd->request; struct scsi_device *sdp = cmd->device; @@ -866,7 +868,7 @@ static int sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd) } if (sdp->no_write_same) - return BLKPREP_INVALID; + return BLK_STS_TARGET; if (sdkp->ws16 || sector > 0xffffffff || nr_sectors > 0xffff) return sd_setup_write_same16_cmnd(cmd, false); @@ -943,7 +945,7 @@ out: * Will set up either WRITE SAME(10) or WRITE SAME(16) depending on * the preference indicated by the target device. **/ -static int sd_setup_write_same_cmnd(struct scsi_cmnd *cmd) +static blk_status_t sd_setup_write_same_cmnd(struct scsi_cmnd *cmd) { struct request *rq = cmd->request; struct scsi_device *sdp = cmd->device; @@ -952,10 +954,10 @@ static int sd_setup_write_same_cmnd(struct scsi_cmnd *cmd) sector_t sector = blk_rq_pos(rq); unsigned int nr_sectors = blk_rq_sectors(rq); unsigned int nr_bytes = blk_rq_bytes(rq); - int ret; + blk_status_t ret; if (sdkp->device->no_write_same) - return BLKPREP_INVALID; + return BLK_STS_TARGET; BUG_ON(bio_offset(bio) || bio_iovec(bio).bv_len != sdp->sector_size); @@ -996,7 +998,7 @@ static int sd_setup_write_same_cmnd(struct scsi_cmnd *cmd) return ret; } -static int sd_setup_flush_cmnd(struct scsi_cmnd *cmd) +static blk_status_t sd_setup_flush_cmnd(struct scsi_cmnd *cmd) { struct request *rq = cmd->request; @@ -1009,10 +1011,10 @@ static int sd_setup_flush_cmnd(struct scsi_cmnd *cmd) cmd->allowed = SD_MAX_RETRIES; rq->timeout = rq->q->rq_timeout * SD_FLUSH_TIMEOUT_MULTIPLIER; - return BLKPREP_OK; + return BLK_STS_OK; } -static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) +static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) { struct request *rq = SCpnt->request; struct scsi_device *sdp = SCpnt->device; @@ -1022,18 +1024,14 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) sector_t threshold; unsigned int this_count = blk_rq_sectors(rq); unsigned int dif, dix; - int ret; unsigned char protect; + blk_status_t ret; ret = scsi_init_io(SCpnt); - if (ret != BLKPREP_OK) + if (ret != BLK_STS_OK) return ret; WARN_ON_ONCE(SCpnt != rq->special); - /* from here on until we're complete, any goto out - * is used for a killable error condition */ - ret = BLKPREP_KILL; - SCSI_LOG_HLQUEUE(1, scmd_printk(KERN_INFO, SCpnt, "%s: block=%llu, count=%d\n", @@ -1046,7 +1044,7 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) blk_rq_sectors(rq))); SCSI_LOG_HLQUEUE(2, scmd_printk(KERN_INFO, SCpnt, "Retry with 0x%p\n", SCpnt)); - goto out; + return BLK_STS_IOERR; } if (sdp->changed) { @@ -1055,7 +1053,7 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) * the changed bit has been reset */ /* printk("SCSI disk has been changed or is not present. 
Prohibiting further I/O.\n"); */ - goto out; + return BLK_STS_IOERR; } /* @@ -1093,31 +1091,28 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) if ((block & 1) || (blk_rq_sectors(rq) & 1)) { scmd_printk(KERN_ERR, SCpnt, "Bad block number requested\n"); - goto out; - } else { - block = block >> 1; - this_count = this_count >> 1; + return BLK_STS_IOERR; } + block = block >> 1; + this_count = this_count >> 1; } if (sdp->sector_size == 2048) { if ((block & 3) || (blk_rq_sectors(rq) & 3)) { scmd_printk(KERN_ERR, SCpnt, "Bad block number requested\n"); - goto out; - } else { - block = block >> 2; - this_count = this_count >> 2; + return BLK_STS_IOERR; } + block = block >> 2; + this_count = this_count >> 2; } if (sdp->sector_size == 4096) { if ((block & 7) || (blk_rq_sectors(rq) & 7)) { scmd_printk(KERN_ERR, SCpnt, "Bad block number requested\n"); - goto out; - } else { - block = block >> 3; - this_count = this_count >> 3; + return BLK_STS_IOERR; } + block = block >> 3; + this_count = this_count >> 3; } if (rq_data_dir(rq) == WRITE) { SCpnt->cmnd[0] = WRITE_6; @@ -1129,7 +1124,7 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) SCpnt->cmnd[0] = READ_6; } else { scmd_printk(KERN_ERR, SCpnt, "Unknown command %d\n", req_op(rq)); - goto out; + return BLK_STS_IOERR; } SCSI_LOG_HLQUEUE(2, scmd_printk(KERN_INFO, SCpnt, @@ -1149,10 +1144,8 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) { SCpnt->cmnd = mempool_alloc(sd_cdb_pool, GFP_ATOMIC); - if (unlikely(SCpnt->cmnd == NULL)) { - ret = BLKPREP_DEFER; - goto out; - } + if (unlikely(!SCpnt->cmnd)) + return BLK_STS_RESOURCE; SCpnt->cmd_len = SD_EXT_CDB_SIZE; memset(SCpnt->cmnd, 0, SCpnt->cmd_len); @@ -1220,7 +1213,7 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) */ scmd_printk(KERN_ERR, SCpnt, "FUA write on READ/WRITE(6) drive\n"); - goto out; + return BLK_STS_IOERR; } SCpnt->cmnd[1] |= (unsigned char) ((block >> 16) & 0x1f); @@ -1244,12 +1237,10 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt) * This indicates that the command is ready from our end to be * queued. 
*/ - ret = BLKPREP_OK; - out: - return ret; + return BLK_STS_OK; } -static int sd_init_command(struct scsi_cmnd *cmd) +static blk_status_t sd_init_command(struct scsi_cmnd *cmd) { struct request *rq = cmd->request; @@ -1265,7 +1256,7 @@ static int sd_init_command(struct scsi_cmnd *cmd) case SD_LBP_ZERO: return sd_setup_write_same10_cmnd(cmd, false); default: - return BLKPREP_INVALID; + return BLK_STS_TARGET; } case REQ_OP_WRITE_ZEROES: return sd_setup_write_zeroes_cmnd(cmd); @@ -1280,7 +1271,7 @@ static int sd_init_command(struct scsi_cmnd *cmd) return sd_zbc_setup_reset_cmnd(cmd); default: WARN_ON_ONCE(1); - return BLKPREP_KILL; + return BLK_STS_NOTSUPP; } } diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h index 1d63f3a23ffb..7f43e6839bce 100644 --- a/drivers/scsi/sd.h +++ b/drivers/scsi/sd.h @@ -271,7 +271,7 @@ static inline int sd_is_zoned(struct scsi_disk *sdkp) extern int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buffer); extern void sd_zbc_print_zones(struct scsi_disk *sdkp); -extern int sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd); +extern blk_status_t sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd); extern void sd_zbc_complete(struct scsi_cmnd *cmd, unsigned int good_bytes, struct scsi_sense_hdr *sshdr); extern int sd_zbc_report_zones(struct gendisk *disk, sector_t sector, @@ -288,9 +288,9 @@ static inline int sd_zbc_read_zones(struct scsi_disk *sdkp, static inline void sd_zbc_print_zones(struct scsi_disk *sdkp) {} -static inline int sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd) +static inline blk_status_t sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd) { - return BLKPREP_INVALID; + return BLK_STS_TARGET; } static inline void sd_zbc_complete(struct scsi_cmnd *cmd, diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index e06c48c866e4..83365b29a4d8 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -185,7 +185,7 @@ static inline sector_t sd_zbc_zone_sectors(struct scsi_disk *sdkp) * * Called from sd_init_command() for a REQ_OP_ZONE_RESET request. 
*/ -int sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd) +blk_status_t sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd) { struct request *rq = cmd->request; struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); @@ -194,14 +194,14 @@ int sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd) if (!sd_is_zoned(sdkp)) /* Not a zoned device */ - return BLKPREP_KILL; + return BLK_STS_IOERR; if (sdkp->device->changed) - return BLKPREP_KILL; + return BLK_STS_IOERR; if (sector & (sd_zbc_zone_sectors(sdkp) - 1)) /* Unaligned request */ - return BLKPREP_KILL; + return BLK_STS_IOERR; cmd->cmd_len = 16; memset(cmd->cmnd, 0, cmd->cmd_len); @@ -214,7 +214,7 @@ int sd_zbc_setup_reset_cmnd(struct scsi_cmnd *cmd) cmd->transfersize = 0; cmd->allowed = 0; - return BLKPREP_OK; + return BLK_STS_OK; } /** diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c index c6ad00703c5b..4e27460ec926 100644 --- a/drivers/scsi/sg.c +++ b/drivers/scsi/sg.c @@ -1390,7 +1390,7 @@ sg_rq_end_io(struct request *rq, blk_status_t status) */ srp->rq = NULL; scsi_req_free_cmd(scsi_req(rq)); - __blk_put_request(rq->q, rq); + blk_put_request(rq); write_lock_irqsave(&sfp->rq_list_lock, iflags); if (unlikely(srp->orphan)) { diff --git a/drivers/scsi/sgiwd93.c b/drivers/scsi/sgiwd93.c index 5ed696dc9bbd..713bce998b0e 100644 --- a/drivers/scsi/sgiwd93.c +++ b/drivers/scsi/sgiwd93.c @@ -208,7 +208,7 @@ static struct scsi_host_template sgiwd93_template = { .this_id = 7, .sg_tablesize = SG_ALL, .cmd_per_lun = 8, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, }; static int sgiwd93_probe(struct platform_device *pdev) diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h index e97bf2670315..af962368818b 100644 --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h @@ -21,6 +21,9 @@ #if !defined(_SMARTPQI_H) #define _SMARTPQI_H +#include +#include + #pragma pack(1) #define PQI_DEVICE_SIGNATURE "PQI DREG" @@ -97,6 +100,12 @@ struct pqi_ctrl_registers { struct pqi_device_registers pqi_registers; /* 4000h */ }; +#if ((HZ) < 1000) +#define PQI_HZ 1000 +#else +#define PQI_HZ (HZ) +#endif + #define PQI_DEVICE_REGISTERS_OFFSET 0x4000 enum pqi_io_path { @@ -347,6 +356,10 @@ struct pqi_event_config { #define PQI_MAX_EVENT_DESCRIPTORS 255 +#define PQI_EVENT_OFA_MEMORY_ALLOCATION 0x0 +#define PQI_EVENT_OFA_QUIESCE 0x1 +#define PQI_EVENT_OFA_CANCELLED 0x2 + struct pqi_event_response { struct pqi_iu_header header; u8 event_type; @@ -354,7 +367,17 @@ struct pqi_event_response { u8 request_acknowlege : 1; __le16 event_id; __le32 additional_event_id; - u8 data[16]; + union { + struct { + __le32 bytes_requested; + u8 reserved[12]; + } ofa_memory_allocation; + + struct { + __le16 reason; /* reason for cancellation */ + u8 reserved[14]; + } ofa_cancelled; + } data; }; struct pqi_event_acknowledge_request { @@ -389,6 +412,54 @@ struct pqi_task_management_response { u8 response_code; }; +struct pqi_vendor_general_request { + struct pqi_iu_header header; + __le16 request_id; + __le16 function_code; + union { + struct { + __le16 first_section; + __le16 last_section; + u8 reserved[48]; + } config_table_update; + + struct { + __le64 buffer_address; + __le32 buffer_length; + u8 reserved[40]; + } ofa_memory_allocation; + } data; +}; + +struct pqi_vendor_general_response { + struct pqi_iu_header header; + __le16 request_id; + __le16 function_code; + __le16 status; + u8 reserved[2]; +}; + +#define PQI_VENDOR_GENERAL_CONFIG_TABLE_UPDATE 0 +#define PQI_VENDOR_GENERAL_HOST_MEMORY_UPDATE 1 + +#define PQI_OFA_VERSION 1 
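/*
 * PQI_HZ, defined earlier in this smartpqi.h hunk, evaluates to
 * max(HZ, 1000), so a timeout written as N * PQI_HZ covers at least
 * N seconds of wall time and stretches on low-HZ kernels. Worked
 * example, assuming HZ=100 (one jiffy = 10 ms):
 *
 *   10 * HZ     =  1000 jiffies  ~  10 s
 *   10 * PQI_HZ = 10000 jiffies  ~ 100 s
 *
 * The HZ -> PQI_HZ substitutions in the smartpqi_init.c hunks below
 * therefore buy extra headroom for slow firmware on low-HZ builds.
 */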
+#define PQI_OFA_SIGNATURE "OFA_QRM" +#define PQI_OFA_MAX_SG_DESCRIPTORS 64 + +#define PQI_OFA_MEMORY_DESCRIPTOR_LENGTH \ + (offsetof(struct pqi_ofa_memory, sg_descriptor) + \ + (PQI_OFA_MAX_SG_DESCRIPTORS * sizeof(struct pqi_sg_descriptor))) + +struct pqi_ofa_memory { + __le64 signature; /* "OFA_QRM" */ + __le16 version; /* version of this struct(1 = 1st version) */ + u8 reserved[62]; + __le32 bytes_allocated; /* total allocated memory in bytes */ + __le16 num_memory_descriptors; + u8 reserved1[2]; + struct pqi_sg_descriptor sg_descriptor[1]; +}; + struct pqi_aio_error_info { u8 status; u8 service_response; @@ -419,6 +490,7 @@ struct pqi_raid_error_info { #define PQI_REQUEST_IU_GENERAL_ADMIN 0x60 #define PQI_REQUEST_IU_REPORT_VENDOR_EVENT_CONFIG 0x72 #define PQI_REQUEST_IU_SET_VENDOR_EVENT_CONFIG 0x73 +#define PQI_REQUEST_IU_VENDOR_GENERAL 0x75 #define PQI_REQUEST_IU_ACKNOWLEDGE_VENDOR_EVENT 0xf6 #define PQI_RESPONSE_IU_GENERAL_MANAGEMENT 0x81 @@ -430,6 +502,7 @@ struct pqi_raid_error_info { #define PQI_RESPONSE_IU_AIO_PATH_IO_ERROR 0xf3 #define PQI_RESPONSE_IU_AIO_PATH_DISABLED 0xf4 #define PQI_RESPONSE_IU_VENDOR_EVENT 0xf5 +#define PQI_RESPONSE_IU_VENDOR_GENERAL 0xf7 #define PQI_GENERAL_ADMIN_FUNCTION_REPORT_DEVICE_CAPABILITY 0x0 #define PQI_GENERAL_ADMIN_FUNCTION_CREATE_IQ 0x10 @@ -492,6 +565,7 @@ struct pqi_raid_error_info { #define PQI_EVENT_TYPE_HARDWARE 0x2 #define PQI_EVENT_TYPE_PHYSICAL_DEVICE 0x4 #define PQI_EVENT_TYPE_LOGICAL_DEVICE 0x5 +#define PQI_EVENT_TYPE_OFA 0xfb #define PQI_EVENT_TYPE_AIO_STATE_CHANGE 0xfd #define PQI_EVENT_TYPE_AIO_CONFIG_CHANGE 0xfe @@ -556,6 +630,7 @@ typedef u32 pqi_index_t; #define SOP_TASK_ATTRIBUTE_ACA 4 #define SOP_TMF_COMPLETE 0x0 +#define SOP_TMF_REJECTED 0x4 #define SOP_TMF_FUNCTION_SUCCEEDED 0x8 /* additional CDB bytes usage field codes */ @@ -644,11 +719,13 @@ struct pqi_encryption_info { #define PQI_CONFIG_TABLE_MAX_LENGTH ((u16)~0) /* configuration table section IDs */ +#define PQI_CONFIG_TABLE_ALL_SECTIONS (-1) #define PQI_CONFIG_TABLE_SECTION_GENERAL_INFO 0 #define PQI_CONFIG_TABLE_SECTION_FIRMWARE_FEATURES 1 #define PQI_CONFIG_TABLE_SECTION_FIRMWARE_ERRATA 2 #define PQI_CONFIG_TABLE_SECTION_DEBUG 3 #define PQI_CONFIG_TABLE_SECTION_HEARTBEAT 4 +#define PQI_CONFIG_TABLE_SECTION_SOFT_RESET 5 struct pqi_config_table { u8 signature[8]; /* "CFGTABLE" */ @@ -680,6 +757,18 @@ struct pqi_config_table_general_info { /* command */ }; +struct pqi_config_table_firmware_features { + struct pqi_config_table_section_header header; + __le16 num_elements; + u8 features_supported[]; +/* u8 features_requested_by_host[]; */ +/* u8 features_enabled[]; */ +}; + +#define PQI_FIRMWARE_FEATURE_OFA 0 +#define PQI_FIRMWARE_FEATURE_SMP 1 +#define PQI_FIRMWARE_FEATURE_SOFT_RESET_HANDSHAKE 11 + struct pqi_config_table_debug { struct pqi_config_table_section_header header; __le32 scratchpad; @@ -690,6 +779,22 @@ struct pqi_config_table_heartbeat { __le32 heartbeat_counter; }; +struct pqi_config_table_soft_reset { + struct pqi_config_table_section_header header; + u8 soft_reset_status; +}; + +#define PQI_SOFT_RESET_INITIATE 0x1 +#define PQI_SOFT_RESET_ABORT 0x2 + +enum pqi_soft_reset_status { + RESET_INITIATE_FIRMWARE, + RESET_INITIATE_DRIVER, + RESET_ABORT, + RESET_NORESPONSE, + RESET_TIMEDOUT +}; + union pqi_reset_register { struct { u32 reset_type : 3; @@ -808,8 +913,10 @@ struct pqi_scsi_dev { u8 scsi3addr[8]; __be64 wwid; u8 volume_id[16]; + u8 unique_id[16]; u8 is_physical_device : 1; u8 is_external_raid_device : 1; + u8 is_expander_smp_device : 1; u8 
target_lun_valid : 1; u8 device_gone : 1; u8 new_device : 1; @@ -817,6 +924,7 @@ struct pqi_scsi_dev { u8 volume_offline : 1; bool aio_enabled; /* only valid for physical disks */ bool in_reset; + bool in_remove; bool device_offline; u8 vendor[8]; /* bytes 8-15 of inquiry data */ u8 model[16]; /* bytes 16-31 of inquiry data */ @@ -854,6 +962,8 @@ struct pqi_scsi_dev { #define CISS_VPD_LV_DEVICE_GEOMETRY 0xc1 /* vendor-specific page */ #define CISS_VPD_LV_BYPASS_STATUS 0xc2 /* vendor-specific page */ #define CISS_VPD_LV_STATUS 0xc3 /* vendor-specific page */ +#define SCSI_VPD_HEADER_SZ 4 +#define SCSI_VPD_DEVICE_ID_IDX 8 /* Index of page id in page */ #define VPD_PAGE (1 << 8) @@ -916,6 +1026,7 @@ struct pqi_sas_node { struct pqi_sas_port { struct list_head port_list_entry; u64 sas_address; + struct pqi_scsi_dev *device; struct sas_port *port; int next_phy_index; struct list_head phy_list_head; @@ -947,13 +1058,15 @@ struct pqi_io_request { struct list_head request_list_entry; }; -#define PQI_NUM_SUPPORTED_EVENTS 6 +#define PQI_NUM_SUPPORTED_EVENTS 7 struct pqi_event { bool pending; u8 event_type; __le16 event_id; __le32 additional_event_id; + __le32 ofa_bytes_requested; + __le16 ofa_cancel_reason; }; #define PQI_RESERVED_IO_SLOTS_LUN_RESET 1 @@ -1014,12 +1127,16 @@ struct pqi_ctrl_info { struct mutex scan_mutex; struct mutex lun_reset_mutex; + struct mutex ofa_mutex; /* serialize ofa */ bool controller_online; bool block_requests; + bool in_shutdown; + bool in_ofa; u8 inbound_spanning_supported : 1; u8 outbound_spanning_supported : 1; u8 pqi_mode_enabled : 1; u8 pqi_reset_quiesce_supported : 1; + u8 soft_reset_handshake_supported : 1; struct list_head scsi_device_list; spinlock_t scsi_device_list_lock; @@ -1040,6 +1157,7 @@ struct pqi_ctrl_info { int previous_num_interrupts; u32 previous_heartbeat_count; __le32 __iomem *heartbeat_counter; + u8 __iomem *soft_reset_status; struct timer_list heartbeat_timer; struct work_struct ctrl_offline_work; @@ -1051,6 +1169,10 @@ struct pqi_ctrl_info { struct list_head raid_bypass_retry_list; spinlock_t raid_bypass_retry_list_lock; struct work_struct raid_bypass_retry_work; + + struct pqi_ofa_memory *pqi_ofa_mem_virt_addr; + dma_addr_t pqi_ofa_mem_dma_handle; + void **pqi_ofa_chunk_virt_addr; }; enum pqi_ctrl_mode { @@ -1080,8 +1202,13 @@ enum pqi_ctrl_mode { #define BMIC_WRITE 0x27 #define BMIC_SENSE_CONTROLLER_PARAMETERS 0x64 #define BMIC_SENSE_SUBSYSTEM_INFORMATION 0x66 +#define BMIC_CSMI_PASSTHRU 0x68 #define BMIC_WRITE_HOST_WELLNESS 0xa5 #define BMIC_FLUSH_CACHE 0xc2 +#define BMIC_SET_DIAG_OPTIONS 0xf4 +#define BMIC_SENSE_DIAG_OPTIONS 0xf5 + +#define CSMI_CC_SAS_SMP_PASSTHRU 0X17 #define SA_FLUSH_CACHE 0x1 @@ -1109,6 +1236,10 @@ struct bmic_identify_controller { u8 reserved3[32]; }; +#define SA_EXPANDER_SMP_DEVICE 0x05 +/*SCSI Invalid Device Type for SAS devices*/ +#define PQI_SAS_SCSI_INVALID_DEVTYPE 0xff + struct bmic_identify_physical_device { u8 scsi_bus; /* SCSI Bus number on controller */ u8 scsi_id; /* SCSI ID on this bus */ @@ -1189,6 +1320,50 @@ struct bmic_identify_physical_device { u8 padding_to_multiple_of_512[9]; }; +struct bmic_smp_request { + u8 frame_type; + u8 function; + u8 allocated_response_length; + u8 request_length; + u8 additional_request_bytes[1016]; +}; + +struct bmic_smp_response { + u8 frame_type; + u8 function; + u8 function_result; + u8 response_length; + u8 additional_response_bytes[1016]; +}; + +struct bmic_csmi_ioctl_header { + __le32 header_length; + u8 signature[8]; + __le32 timeout; + __le32 control_code; + __le32 
return_code; + __le32 length; +}; + +struct bmic_csmi_smp_passthru { + u8 phy_identifier; + u8 port_identifier; + u8 connection_rate; + u8 reserved; + __be64 destination_sas_address; + __le32 request_length; + struct bmic_smp_request request; + u8 connection_status; + u8 reserved1[3]; + __le32 response_length; + struct bmic_smp_response response; +}; + +struct bmic_csmi_smp_passthru_buffer { + struct bmic_csmi_ioctl_header ioctl_header; + struct bmic_csmi_smp_passthru parameters; +}; + struct bmic_flush_cache { u8 disable_flag; u8 system_power_action; @@ -1206,8 +1381,42 @@ enum bmic_flush_cache_shutdown_event { RESTART = 4 }; +struct bmic_diag_options { + __le32 options; +}; + #pragma pack() +static inline struct pqi_ctrl_info *shost_to_hba(struct Scsi_Host *shost) +{ + void *hostdata = shost_priv(shost); + + return *((struct pqi_ctrl_info **)hostdata); +} + +static inline bool pqi_ctrl_offline(struct pqi_ctrl_info *ctrl_info) +{ + return !ctrl_info->controller_online; +} + +static inline void pqi_ctrl_busy(struct pqi_ctrl_info *ctrl_info) +{ + atomic_inc(&ctrl_info->num_busy_threads); +} + +static inline void pqi_ctrl_unbusy(struct pqi_ctrl_info *ctrl_info) +{ + atomic_dec(&ctrl_info->num_busy_threads); +} + +static inline bool pqi_ctrl_blocked(struct pqi_ctrl_info *ctrl_info) +{ + return ctrl_info->block_requests; +} + +void pqi_sas_smp_handler(struct bsg_job *job, struct Scsi_Host *shost, + struct sas_rphy *rphy); + int pqi_add_sas_host(struct Scsi_Host *shost, struct pqi_ctrl_info *ctrl_info); void pqi_delete_sas_host(struct pqi_ctrl_info *ctrl_info); int pqi_add_sas_device(struct pqi_sas_node *pqi_sas_node, @@ -1216,6 +1425,9 @@ void pqi_remove_sas_device(struct pqi_scsi_dev *device); struct pqi_scsi_dev *pqi_find_device_by_sas_rphy( struct pqi_ctrl_info *ctrl_info, struct sas_rphy *rphy); void pqi_prep_for_scsi_done(struct scsi_cmnd *scmd); +int pqi_csmi_smp_passthru(struct pqi_ctrl_info *ctrl_info, + struct bmic_csmi_smp_passthru_buffer *buffer, size_t buffer_length, + struct pqi_raid_error_info *error_info); extern struct sas_function_template pqi_sas_transport_functions; diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index a25a07a0b7f0..e2fa3f476227 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -40,11 +40,11 @@ #define BUILD_TIMESTAMP #endif -#define DRIVER_VERSION "1.1.4-130" +#define DRIVER_VERSION "1.2.4-070" #define DRIVER_MAJOR 1 -#define DRIVER_MINOR 1 +#define DRIVER_MINOR 2 #define DRIVER_RELEASE 4 -#define DRIVER_REVISION 130 +#define DRIVER_REVISION 70 #define DRIVER_NAME "Microsemi PQI Driver (v" \ DRIVER_VERSION BUILD_TIMESTAMP ")" @@ -74,6 +74,15 @@ static int pqi_aio_submit_io(struct pqi_ctrl_info *ctrl_info, struct scsi_cmnd *scmd, u32 aio_handle, u8 *cdb, unsigned int cdb_length, struct pqi_queue_group *queue_group, struct pqi_encryption_info *encryption_info, bool raid_bypass); +static void pqi_ofa_ctrl_quiesce(struct pqi_ctrl_info *ctrl_info); +static void pqi_ofa_ctrl_unquiesce(struct pqi_ctrl_info *ctrl_info); +static int pqi_ofa_ctrl_restart(struct pqi_ctrl_info *ctrl_info); +static void pqi_ofa_setup_host_buffer(struct pqi_ctrl_info *ctrl_info, + u32 bytes_requested); +static void pqi_ofa_free_host_buffer(struct pqi_ctrl_info *ctrl_info); +static int pqi_ofa_host_memory_update(struct pqi_ctrl_info *ctrl_info); +static int pqi_device_wait_for_pending_io(struct pqi_ctrl_info *ctrl_info, + struct pqi_scsi_dev *device, unsigned long timeout_secs); /* for flags 
argument to pqi_submit_raid_request_synchronous() */ #define PQI_SYNC_FLAGS_INTERRUPTABLE 0x1 @@ -113,6 +122,7 @@ static unsigned int pqi_supported_event_types[] = { PQI_EVENT_TYPE_HARDWARE, PQI_EVENT_TYPE_PHYSICAL_DEVICE, PQI_EVENT_TYPE_LOGICAL_DEVICE, + PQI_EVENT_TYPE_OFA, PQI_EVENT_TYPE_AIO_STATE_CHANGE, PQI_EVENT_TYPE_AIO_CONFIG_CHANGE, }; @@ -176,16 +186,14 @@ static inline void pqi_scsi_done(struct scsi_cmnd *scmd) scmd->scsi_done(scmd); } -static inline bool pqi_scsi3addr_equal(u8 *scsi3addr1, u8 *scsi3addr2) +static inline void pqi_disable_write_same(struct scsi_device *sdev) { - return memcmp(scsi3addr1, scsi3addr2, 8) == 0; + sdev->no_write_same = 1; } -static inline struct pqi_ctrl_info *shost_to_hba(struct Scsi_Host *shost) +static inline bool pqi_scsi3addr_equal(u8 *scsi3addr1, u8 *scsi3addr2) { - void *hostdata = shost_priv(shost); - - return *((struct pqi_ctrl_info **)hostdata); + return memcmp(scsi3addr1, scsi3addr2, 8) == 0; } static inline bool pqi_is_logical_device(struct pqi_scsi_dev *device) @@ -198,11 +206,6 @@ static inline bool pqi_is_external_raid_addr(u8 *scsi3addr) return scsi3addr[2] != 0; } -static inline bool pqi_ctrl_offline(struct pqi_ctrl_info *ctrl_info) -{ - return !ctrl_info->controller_online; -} - static inline void pqi_check_ctrl_health(struct pqi_ctrl_info *ctrl_info) { if (ctrl_info->controller_online) @@ -241,11 +244,6 @@ static inline void pqi_ctrl_unblock_requests(struct pqi_ctrl_info *ctrl_info) scsi_unblock_requests(ctrl_info->scsi_host); } -static inline bool pqi_ctrl_blocked(struct pqi_ctrl_info *ctrl_info) -{ - return ctrl_info->block_requests; -} - static unsigned long pqi_wait_if_ctrl_blocked(struct pqi_ctrl_info *ctrl_info, unsigned long timeout_msecs) { @@ -275,16 +273,6 @@ static unsigned long pqi_wait_if_ctrl_blocked(struct pqi_ctrl_info *ctrl_info, return remaining_msecs; } -static inline void pqi_ctrl_busy(struct pqi_ctrl_info *ctrl_info) -{ - atomic_inc(&ctrl_info->num_busy_threads); -} - -static inline void pqi_ctrl_unbusy(struct pqi_ctrl_info *ctrl_info) -{ - atomic_dec(&ctrl_info->num_busy_threads); -} - static inline void pqi_ctrl_wait_until_quiesced(struct pqi_ctrl_info *ctrl_info) { while (atomic_read(&ctrl_info->num_busy_threads) > @@ -312,11 +300,39 @@ static inline bool pqi_device_in_reset(struct pqi_scsi_dev *device) return device->in_reset; } +static inline void pqi_ctrl_ofa_start(struct pqi_ctrl_info *ctrl_info) +{ + ctrl_info->in_ofa = true; +} + +static inline void pqi_ctrl_ofa_done(struct pqi_ctrl_info *ctrl_info) +{ + ctrl_info->in_ofa = false; +} + +static inline bool pqi_ctrl_in_ofa(struct pqi_ctrl_info *ctrl_info) +{ + return ctrl_info->in_ofa; +} + +static inline void pqi_device_remove_start(struct pqi_scsi_dev *device) +{ + device->in_remove = true; +} + +static inline bool pqi_device_in_remove(struct pqi_ctrl_info *ctrl_info, + struct pqi_scsi_dev *device) +{ + return device->in_remove & !ctrl_info->in_shutdown; +} + static inline void pqi_schedule_rescan_worker_with_delay( struct pqi_ctrl_info *ctrl_info, unsigned long delay) { if (pqi_ctrl_offline(ctrl_info)) return; + if (pqi_ctrl_in_ofa(ctrl_info)) + return; schedule_delayed_work(&ctrl_info->rescan_work, delay); } @@ -326,7 +342,7 @@ static inline void pqi_schedule_rescan_worker(struct pqi_ctrl_info *ctrl_info) pqi_schedule_rescan_worker_with_delay(ctrl_info, 0); } -#define PQI_RESCAN_WORK_DELAY (10 * HZ) +#define PQI_RESCAN_WORK_DELAY (10 * PQI_HZ) static inline void pqi_schedule_rescan_worker_delayed( struct pqi_ctrl_info *ctrl_info) @@ -347,6 +363,27 @@ 
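/*
 * Nit in pqi_device_in_remove() above: the bitwise '&' happens to work
 * because both operands are 0/1 bool values, but logical '&&' states
 * the intent directly and is presumably what was meant:
 *
 *	return device->in_remove && !ctrl_info->in_shutdown;
 */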
static inline u32 pqi_read_heartbeat_counter(struct pqi_ctrl_info *ctrl_info) return readl(ctrl_info->heartbeat_counter); } +static inline u8 pqi_read_soft_reset_status(struct pqi_ctrl_info *ctrl_info) +{ + if (!ctrl_info->soft_reset_status) + return 0; + + return readb(ctrl_info->soft_reset_status); +} + +static inline void pqi_clear_soft_reset_status(struct pqi_ctrl_info *ctrl_info, + u8 clear) +{ + u8 status; + + if (!ctrl_info->soft_reset_status) + return; + + status = pqi_read_soft_reset_status(ctrl_info); + status &= ~clear; + writeb(status, ctrl_info->soft_reset_status); +} + static int pqi_map_single(struct pci_dev *pci_dev, struct pqi_sg_descriptor *sg_descriptor, void *buffer, size_t buffer_length, enum dma_data_direction data_direction) @@ -390,6 +427,7 @@ static int pqi_build_raid_path_request(struct pqi_ctrl_info *ctrl_info, u16 vpd_page, enum dma_data_direction *dir) { u8 *cdb; + size_t cdb_length = buffer_length; memset(request, 0, sizeof(*request)); @@ -412,7 +450,7 @@ static int pqi_build_raid_path_request(struct pqi_ctrl_info *ctrl_info, cdb[1] = 0x1; cdb[2] = (u8)vpd_page; } - cdb[4] = (u8)buffer_length; + cdb[4] = (u8)cdb_length; break; case CISS_REPORT_LOG: case CISS_REPORT_PHYS: @@ -422,32 +460,45 @@ static int pqi_build_raid_path_request(struct pqi_ctrl_info *ctrl_info, cdb[1] = CISS_REPORT_PHYS_EXTENDED; else cdb[1] = CISS_REPORT_LOG_EXTENDED; - put_unaligned_be32(buffer_length, &cdb[6]); + put_unaligned_be32(cdb_length, &cdb[6]); break; case CISS_GET_RAID_MAP: request->data_direction = SOP_READ_FLAG; cdb[0] = CISS_READ; cdb[1] = CISS_GET_RAID_MAP; - put_unaligned_be32(buffer_length, &cdb[6]); + put_unaligned_be32(cdb_length, &cdb[6]); break; case SA_FLUSH_CACHE: request->data_direction = SOP_WRITE_FLAG; cdb[0] = BMIC_WRITE; cdb[6] = BMIC_FLUSH_CACHE; - put_unaligned_be16(buffer_length, &cdb[7]); + put_unaligned_be16(cdb_length, &cdb[7]); break; + case BMIC_SENSE_DIAG_OPTIONS: + cdb_length = 0; + /* fall through */ case BMIC_IDENTIFY_CONTROLLER: case BMIC_IDENTIFY_PHYSICAL_DEVICE: request->data_direction = SOP_READ_FLAG; cdb[0] = BMIC_READ; cdb[6] = cmd; - put_unaligned_be16(buffer_length, &cdb[7]); + put_unaligned_be16(cdb_length, &cdb[7]); break; + case BMIC_SET_DIAG_OPTIONS: + cdb_length = 0; + /* fall through */ case BMIC_WRITE_HOST_WELLNESS: request->data_direction = SOP_WRITE_FLAG; cdb[0] = BMIC_WRITE; cdb[6] = cmd; - put_unaligned_be16(buffer_length, &cdb[7]); + put_unaligned_be16(cdb_length, &cdb[7]); + break; + case BMIC_CSMI_PASSTHRU: + request->data_direction = SOP_BIDIRECTIONAL; + cdb[0] = BMIC_WRITE; + cdb[5] = CSMI_CC_SAS_SMP_PASSTHRU; + cdb[6] = cmd; + put_unaligned_be16(cdb_length, &cdb[7]); break; default: dev_err(&ctrl_info->pci_dev->dev, "unknown command 0x%c\n", @@ -509,43 +560,130 @@ static void pqi_free_io_request(struct pqi_io_request *io_request) atomic_dec(&io_request->refcount); } -static int pqi_identify_controller(struct pqi_ctrl_info *ctrl_info, - struct bmic_identify_controller *buffer) +static int pqi_send_scsi_raid_request(struct pqi_ctrl_info *ctrl_info, u8 cmd, + u8 *scsi3addr, void *buffer, size_t buffer_length, u16 vpd_page, + struct pqi_raid_error_info *error_info, + unsigned long timeout_msecs) { int rc; enum dma_data_direction dir; struct pqi_raid_path_request request; rc = pqi_build_raid_path_request(ctrl_info, &request, - BMIC_IDENTIFY_CONTROLLER, RAID_CTLR_LUNID, buffer, - sizeof(*buffer), 0, &dir); + cmd, scsi3addr, buffer, + buffer_length, vpd_page, &dir); if (rc) return rc; - rc = 
pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, - NULL, NO_TIMEOUT); + rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, + 0, error_info, timeout_msecs); pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); return rc; } -static int pqi_scsi_inquiry(struct pqi_ctrl_info *ctrl_info, +/* Helper functions for pqi_send_scsi_raid_request */ + +static inline int pqi_send_ctrl_raid_request(struct pqi_ctrl_info *ctrl_info, + u8 cmd, void *buffer, size_t buffer_length) +{ + return pqi_send_scsi_raid_request(ctrl_info, cmd, RAID_CTLR_LUNID, + buffer, buffer_length, 0, NULL, NO_TIMEOUT); +} + +static inline int pqi_send_ctrl_raid_with_error(struct pqi_ctrl_info *ctrl_info, + u8 cmd, void *buffer, size_t buffer_length, + struct pqi_raid_error_info *error_info) +{ + return pqi_send_scsi_raid_request(ctrl_info, cmd, RAID_CTLR_LUNID, + buffer, buffer_length, 0, error_info, NO_TIMEOUT); +} + + +static inline int pqi_identify_controller(struct pqi_ctrl_info *ctrl_info, + struct bmic_identify_controller *buffer) +{ + return pqi_send_ctrl_raid_request(ctrl_info, BMIC_IDENTIFY_CONTROLLER, + buffer, sizeof(*buffer)); +} + +static inline int pqi_scsi_inquiry(struct pqi_ctrl_info *ctrl_info, u8 *scsi3addr, u16 vpd_page, void *buffer, size_t buffer_length) +{ + return pqi_send_scsi_raid_request(ctrl_info, INQUIRY, scsi3addr, + buffer, buffer_length, vpd_page, NULL, NO_TIMEOUT); +} + +static bool pqi_vpd_page_supported(struct pqi_ctrl_info *ctrl_info, + u8 *scsi3addr, u16 vpd_page) { int rc; - enum dma_data_direction dir; - struct pqi_raid_path_request request; + int i; + int pages; + unsigned char *buf, bufsize; - rc = pqi_build_raid_path_request(ctrl_info, &request, - INQUIRY, scsi3addr, buffer, buffer_length, vpd_page, - &dir); - if (rc) - return rc; + buf = kzalloc(256, GFP_KERNEL); + if (!buf) + return false; - rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, - NULL, NO_TIMEOUT); + /* Get the size of the page list first */ + rc = pqi_scsi_inquiry(ctrl_info, scsi3addr, + VPD_PAGE | SCSI_VPD_SUPPORTED_PAGES, + buf, SCSI_VPD_HEADER_SZ); + if (rc != 0) + goto exit_unsupported; + + pages = buf[3]; + if ((pages + SCSI_VPD_HEADER_SZ) <= 255) + bufsize = pages + SCSI_VPD_HEADER_SZ; + else + bufsize = 255; + + /* Get the whole VPD page list */ + rc = pqi_scsi_inquiry(ctrl_info, scsi3addr, + VPD_PAGE | SCSI_VPD_SUPPORTED_PAGES, + buf, bufsize); + if (rc != 0) + goto exit_unsupported; + + pages = buf[3]; + for (i = 1; i <= pages; i++) + if (buf[3 + i] == vpd_page) + goto exit_supported; + +exit_unsupported: + kfree(buf); + return false; + +exit_supported: + kfree(buf); + return true; +} + +static int pqi_get_device_id(struct pqi_ctrl_info *ctrl_info, + u8 *scsi3addr, u8 *device_id, int buflen) +{ + int rc; + unsigned char *buf; + + if (!pqi_vpd_page_supported(ctrl_info, scsi3addr, SCSI_VPD_DEVICE_ID)) + return 1; /* function not supported */ + + buf = kzalloc(64, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + rc = pqi_scsi_inquiry(ctrl_info, scsi3addr, + VPD_PAGE | SCSI_VPD_DEVICE_ID, + buf, 64); + if (rc == 0) { + if (buflen > 16) + buflen = 16; + memcpy(device_id, &buf[SCSI_VPD_DEVICE_ID_IDX], buflen); + } + + kfree(buf); - pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); return rc; } @@ -580,9 +718,7 @@ static int pqi_flush_cache(struct pqi_ctrl_info *ctrl_info, enum bmic_flush_cache_shutdown_event shutdown_event) { int rc; - struct pqi_raid_path_request request; struct bmic_flush_cache *flush_cache; - enum dma_data_direction 
dir; /* * Don't bother trying to flush the cache if the controller is @@ -597,42 +733,55 @@ static int pqi_flush_cache(struct pqi_ctrl_info *ctrl_info, flush_cache->shutdown_event = shutdown_event; - rc = pqi_build_raid_path_request(ctrl_info, &request, - SA_FLUSH_CACHE, RAID_CTLR_LUNID, flush_cache, - sizeof(*flush_cache), 0, &dir); - if (rc) - goto out; - - rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, - 0, NULL, NO_TIMEOUT); + rc = pqi_send_ctrl_raid_request(ctrl_info, SA_FLUSH_CACHE, flush_cache, + sizeof(*flush_cache)); - pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); -out: kfree(flush_cache); return rc; } -static int pqi_write_host_wellness(struct pqi_ctrl_info *ctrl_info, - void *buffer, size_t buffer_length) +int pqi_csmi_smp_passthru(struct pqi_ctrl_info *ctrl_info, + struct bmic_csmi_smp_passthru_buffer *buffer, size_t buffer_length, + struct pqi_raid_error_info *error_info) +{ + return pqi_send_ctrl_raid_with_error(ctrl_info, BMIC_CSMI_PASSTHRU, + buffer, buffer_length, error_info); +} + +#define PQI_FETCH_PTRAID_DATA (1UL<<31) + +static int pqi_set_diag_rescan(struct pqi_ctrl_info *ctrl_info) { int rc; - struct pqi_raid_path_request request; - enum dma_data_direction dir; + struct bmic_diag_options *diag; - rc = pqi_build_raid_path_request(ctrl_info, &request, - BMIC_WRITE_HOST_WELLNESS, RAID_CTLR_LUNID, buffer, - buffer_length, 0, &dir); + diag = kzalloc(sizeof(*diag), GFP_KERNEL); + if (!diag) + return -ENOMEM; + + rc = pqi_send_ctrl_raid_request(ctrl_info, BMIC_SENSE_DIAG_OPTIONS, + diag, sizeof(*diag)); if (rc) - return rc; + goto out; - rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, - 0, NULL, NO_TIMEOUT); + diag->options |= cpu_to_le32(PQI_FETCH_PTRAID_DATA); + + rc = pqi_send_ctrl_raid_request(ctrl_info, BMIC_SET_DIAG_OPTIONS, + diag, sizeof(*diag)); +out: + kfree(diag); - pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); return rc; } +static inline int pqi_write_host_wellness(struct pqi_ctrl_info *ctrl_info, + void *buffer, size_t buffer_length) +{ + return pqi_send_ctrl_raid_request(ctrl_info, BMIC_WRITE_HOST_WELLNESS, + buffer, buffer_length); +} + #pragma pack(1) struct bmic_host_wellness_driver_version { @@ -640,6 +789,7 @@ struct bmic_host_wellness_driver_version { u8 driver_version_tag[2]; __le16 driver_version_length; char driver_version[32]; + u8 dont_write_tag[2]; u8 end_tag[2]; }; @@ -669,6 +819,8 @@ static int pqi_write_driver_version_to_host_wellness( strncpy(buffer->driver_version, "Linux " DRIVER_VERSION, sizeof(buffer->driver_version) - 1); buffer->driver_version[sizeof(buffer->driver_version) - 1] = '\0'; + buffer->dont_write_tag[0] = 'D'; + buffer->dont_write_tag[1] = 'W'; buffer->end_tag[0] = 'Z'; buffer->end_tag[1] = 'Z'; @@ -742,7 +894,7 @@ static int pqi_write_current_time_to_host_wellness( return rc; } -#define PQI_UPDATE_TIME_WORK_INTERVAL (24UL * 60 * 60 * HZ) +#define PQI_UPDATE_TIME_WORK_INTERVAL (24UL * 60 * 60 * PQI_HZ) static void pqi_update_time_worker(struct work_struct *work) { @@ -776,23 +928,11 @@ static inline void pqi_cancel_update_time_worker( cancel_delayed_work_sync(&ctrl_info->update_time_work); } -static int pqi_report_luns(struct pqi_ctrl_info *ctrl_info, u8 cmd, +static inline int pqi_report_luns(struct pqi_ctrl_info *ctrl_info, u8 cmd, void *buffer, size_t buffer_length) { - int rc; - enum dma_data_direction dir; - struct pqi_raid_path_request request; - - rc = pqi_build_raid_path_request(ctrl_info, &request, - cmd, RAID_CTLR_LUNID, buffer, 
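/*
 * Call layering after the consolidation in the hunks above (all names
 * from this patch; sketch only):
 *
 *   pqi_identify_controller(), pqi_report_luns(), pqi_flush_cache(), ...
 *     -> pqi_send_ctrl_raid_request()       RAID_CTLR_LUNID, no error info
 *     -> pqi_send_scsi_raid_request()       build, map, submit, unmap
 *     -> pqi_submit_raid_request_synchronous()
 *
 * The wrappers remove the open-coded build/map/submit/unmap sequences
 * that each caller previously duplicated.
 */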
buffer_length, 0, &dir); - if (rc) - return rc; - - rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, - NULL, NO_TIMEOUT); - - pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); - return rc; + return pqi_send_ctrl_raid_request(ctrl_info, cmd, buffer, + buffer_length); } static int pqi_report_phys_logical_luns(struct pqi_ctrl_info *ctrl_info, u8 cmd, @@ -1010,8 +1150,6 @@ static int pqi_validate_raid_map(struct pqi_ctrl_info *ctrl_info, char *err_msg; u32 raid_map_size; u32 r5or6_blocks_per_row; - unsigned int num_phys_disks; - unsigned int num_raid_map_entries; raid_map_size = get_unaligned_le32(&raid_map->structure_size); @@ -1020,22 +1158,6 @@ static int pqi_validate_raid_map(struct pqi_ctrl_info *ctrl_info, goto bad_raid_map; } - if (raid_map_size > sizeof(*raid_map)) { - err_msg = "RAID map too large"; - goto bad_raid_map; - } - - num_phys_disks = get_unaligned_le16(&raid_map->layout_map_count) * - (get_unaligned_le16(&raid_map->data_disks_per_row) + - get_unaligned_le16(&raid_map->metadata_disks_per_row)); - num_raid_map_entries = num_phys_disks * - get_unaligned_le16(&raid_map->row_cnt); - - if (num_raid_map_entries > RAID_MAP_MAX_ENTRIES) { - err_msg = "invalid number of map entries in RAID map"; - goto bad_raid_map; - } - if (device->raid_level == SA_RAID_1) { if (get_unaligned_le16(&raid_map->layout_map_count) != 2) { err_msg = "invalid RAID-1 map"; @@ -1074,27 +1196,45 @@ static int pqi_get_raid_map(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device) { int rc; - enum dma_data_direction dir; - struct pqi_raid_path_request request; + u32 raid_map_size; struct raid_map *raid_map; raid_map = kmalloc(sizeof(*raid_map), GFP_KERNEL); if (!raid_map) return -ENOMEM; - rc = pqi_build_raid_path_request(ctrl_info, &request, - CISS_GET_RAID_MAP, device->scsi3addr, raid_map, - sizeof(*raid_map), 0, &dir); + rc = pqi_send_scsi_raid_request(ctrl_info, CISS_GET_RAID_MAP, + device->scsi3addr, raid_map, sizeof(*raid_map), + 0, NULL, NO_TIMEOUT); + if (rc) goto error; - rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, - NULL, NO_TIMEOUT); + raid_map_size = get_unaligned_le32(&raid_map->structure_size); - pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); + if (raid_map_size > sizeof(*raid_map)) { - if (rc) - goto error; + kfree(raid_map); + + raid_map = kmalloc(raid_map_size, GFP_KERNEL); + if (!raid_map) + return -ENOMEM; + + rc = pqi_send_scsi_raid_request(ctrl_info, CISS_GET_RAID_MAP, + device->scsi3addr, raid_map, raid_map_size, + 0, NULL, NO_TIMEOUT); + if (rc) + goto error; + + if (get_unaligned_le32(&raid_map->structure_size) + != raid_map_size) { + dev_warn(&ctrl_info->pci_dev->dev, + "Requested %d bytes, received %d bytes", + raid_map_size, + get_unaligned_le32(&raid_map->structure_size)); + goto error; + } + } rc = pqi_validate_raid_map(ctrl_info, device, raid_map); if (rc) @@ -1165,6 +1305,9 @@ static void pqi_get_volume_status(struct pqi_ctrl_info *ctrl_info, if (rc) goto out; + if (vpd->page_code != CISS_VPD_LV_STATUS) + goto out; + page_length = offsetof(struct ciss_vpd_logical_volume_status, volume_status) + vpd->page_length; if (page_length < sizeof(*vpd)) @@ -1190,6 +1333,9 @@ static int pqi_get_device_info(struct pqi_ctrl_info *ctrl_info, u8 *buffer; unsigned int retries; + if (device->is_expander_smp_device) + return 0; + buffer = kmalloc(64, GFP_KERNEL); if (!buffer) return -ENOMEM; @@ -1225,6 +1371,14 @@ static int pqi_get_device_info(struct pqi_ctrl_info *ctrl_info, } } + if 
(pqi_get_device_id(ctrl_info, device->scsi3addr, + device->unique_id, sizeof(device->unique_id)) < 0) + dev_warn(&ctrl_info->pci_dev->dev, + "Can't get device id for scsi %d:%d:%d:%d\n", + ctrl_info->scsi_host->host_no, + device->bus, device->target, + device->lun); + out: kfree(buffer); @@ -1387,9 +1541,24 @@ static int pqi_add_device(struct pqi_ctrl_info *ctrl_info, return rc; } +#define PQI_PENDING_IO_TIMEOUT_SECS 20 + static inline void pqi_remove_device(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device) { + int rc; + + pqi_device_remove_start(device); + + rc = pqi_device_wait_for_pending_io(ctrl_info, device, + PQI_PENDING_IO_TIMEOUT_SECS); + if (rc) + dev_err(&ctrl_info->pci_dev->dev, + "scsi %d:%d:%d:%d removing device with %d outstanding commands\n", + ctrl_info->scsi_host->host_no, device->bus, + device->target, device->lun, + atomic_read(&device->scsi_cmds_outstanding)); + if (pqi_is_logical_device(device)) scsi_remove_device(device->sdev); else @@ -1454,6 +1623,14 @@ static enum pqi_find_result pqi_scsi_find_entry(struct pqi_ctrl_info *ctrl_info, return DEVICE_NOT_FOUND; } +static inline const char *pqi_device_type(struct pqi_scsi_dev *device) +{ + if (device->is_expander_smp_device) + return "Enclosure SMP "; + + return scsi_device_type(device->devtype); +} + #define PQI_DEV_INFO_BUFFER_LENGTH 128 static void pqi_dev_info(struct pqi_ctrl_info *ctrl_info, @@ -1489,7 +1666,7 @@ static void pqi_dev_info(struct pqi_ctrl_info *ctrl_info, count += snprintf(buffer + count, PQI_DEV_INFO_BUFFER_LENGTH - count, " %s %.8s %.16s ", - scsi_device_type(device->devtype), + pqi_device_type(device), device->vendor, device->model); @@ -1534,6 +1711,8 @@ static void pqi_scsi_update_device(struct pqi_scsi_dev *existing_device, existing_device->is_physical_device = new_device->is_physical_device; existing_device->is_external_raid_device = new_device->is_external_raid_device; + existing_device->is_expander_smp_device = + new_device->is_expander_smp_device; existing_device->aio_enabled = new_device->aio_enabled; memcpy(existing_device->vendor, new_device->vendor, sizeof(existing_device->vendor)); @@ -1558,6 +1737,7 @@ static void pqi_scsi_update_device(struct pqi_scsi_dev *existing_device, new_device->raid_bypass_configured; existing_device->raid_bypass_enabled = new_device->raid_bypass_enabled; + existing_device->device_offline = false; /* To prevent this from being freed later. */ new_device->raid_map = NULL; @@ -1589,6 +1769,14 @@ static inline void pqi_fixup_botched_add(struct pqi_ctrl_info *ctrl_info, device->keep_device = false; } +static inline bool pqi_is_device_added(struct pqi_scsi_dev *device) +{ + if (device->is_expander_smp_device) + return device->sas_port != NULL; + + return device->sdev != NULL; +} + static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *new_device_list[], unsigned int num_new_devices) { @@ -1674,6 +1862,9 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info, spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); + if (pqi_ctrl_in_ofa(ctrl_info)) + pqi_ctrl_ofa_done(ctrl_info); + /* Remove all devices that have gone away. 
*/ list_for_each_entry_safe(device, next, &delete_list, delete_list_entry) { @@ -1683,7 +1874,7 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info, } else { pqi_dev_info(ctrl_info, "removed", device); } - if (device->sdev) + if (pqi_is_device_added(device)) pqi_remove_device(ctrl_info, device); list_del(&device->delete_list_entry); pqi_free_device(device); @@ -1705,7 +1896,7 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info, /* Expose any new devices. */ list_for_each_entry_safe(device, next, &add_list, add_list_entry) { - if (!device->sdev) { + if (!pqi_is_device_added(device)) { pqi_dev_info(ctrl_info, "added", device); rc = pqi_add_device(ctrl_info, device); if (rc) { @@ -1722,7 +1913,12 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info, static bool pqi_is_supported_device(struct pqi_scsi_dev *device) { - bool is_supported = false; + bool is_supported; + + if (device->is_expander_smp_device) + return true; + + is_supported = false; switch (device->devtype) { case TYPE_DISK: @@ -1756,6 +1952,30 @@ static inline bool pqi_skip_device(u8 *scsi3addr) return false; } +static inline bool pqi_is_device_with_sas_address(struct pqi_scsi_dev *device) +{ + if (!device->is_physical_device) + return false; + + if (device->is_expander_smp_device) + return true; + + switch (device->devtype) { + case TYPE_DISK: + case TYPE_ZBC: + case TYPE_ENCLOSURE: + return true; + } + + return false; +} + +static inline bool pqi_expose_device(struct pqi_scsi_dev *device) +{ + return !device->is_physical_device || + !pqi_skip_device(device->scsi3addr); +} + static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info) { int i; @@ -1864,9 +2084,14 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info) memcpy(device->scsi3addr, scsi3addr, sizeof(device->scsi3addr)); device->is_physical_device = is_physical_device; - if (!is_physical_device) + if (is_physical_device) { + if (phys_lun_ext_entry->device_type == + SA_EXPANDER_SMP_DEVICE) + device->is_expander_smp_device = true; + } else { device->is_external_raid_device = pqi_is_external_raid_addr(scsi3addr); + } /* Gather information about the device. 
*/ rc = pqi_get_device_info(ctrl_info, device); @@ -1899,30 +2124,23 @@ static int pqi_update_scsi_devices(struct pqi_ctrl_info *ctrl_info) device->wwid = phys_lun_ext_entry->wwid; if ((phys_lun_ext_entry->device_flags & REPORT_PHYS_LUN_DEV_FLAG_AIO_ENABLED) && - phys_lun_ext_entry->aio_handle) + phys_lun_ext_entry->aio_handle) { device->aio_enabled = true; + device->aio_handle = + phys_lun_ext_entry->aio_handle; + } + if (device->devtype == TYPE_DISK || + device->devtype == TYPE_ZBC) { + pqi_get_physical_disk_info(ctrl_info, + device, id_phys); + } } else { memcpy(device->volume_id, log_lun_ext_entry->volume_id, sizeof(device->volume_id)); } - switch (device->devtype) { - case TYPE_DISK: - case TYPE_ZBC: - case TYPE_ENCLOSURE: - if (device->is_physical_device) { - device->sas_address = - get_unaligned_be64(&device->wwid); - if (device->devtype == TYPE_DISK || - device->devtype == TYPE_ZBC) { - device->aio_handle = - phys_lun_ext_entry->aio_handle; - pqi_get_physical_disk_info(ctrl_info, - device, id_phys); - } - } - break; - } + if (pqi_is_device_with_sas_address(device)) + device->sas_address = get_unaligned_be64(&device->wwid); new_device_list[num_valid_devices++] = device; } @@ -1965,7 +2183,7 @@ static void pqi_remove_all_scsi_devices(struct pqi_ctrl_info *ctrl_info) if (!device) break; - if (device->sdev) + if (pqi_is_device_added(device)) pqi_remove_device(ctrl_info, device); pqi_free_device(device); } @@ -1991,7 +2209,13 @@ static int pqi_scan_scsi_devices(struct pqi_ctrl_info *ctrl_info) static void pqi_scan_start(struct Scsi_Host *shost) { - pqi_scan_scsi_devices(shost_to_hba(shost)); + struct pqi_ctrl_info *ctrl_info; + + ctrl_info = shost_to_hba(shost); + if (pqi_ctrl_in_ofa(ctrl_info)) + return; + + pqi_scan_scsi_devices(ctrl_info); } /* Returns TRUE if scan is finished. 
*/ @@ -2018,6 +2242,12 @@ static void pqi_wait_until_lun_reset_finished(struct pqi_ctrl_info *ctrl_info) mutex_unlock(&ctrl_info->lun_reset_mutex); } +static void pqi_wait_until_ofa_finished(struct pqi_ctrl_info *ctrl_info) +{ + mutex_lock(&ctrl_info->ofa_mutex); + mutex_unlock(&ctrl_info->ofa_mutex); +} + static inline void pqi_set_encryption_info( struct pqi_encryption_info *encryption_info, struct raid_map *raid_map, u64 first_block) @@ -2325,9 +2555,6 @@ static int pqi_raid_bypass_submit_scsi_cmd(struct pqi_ctrl_info *ctrl_info, (map_row * total_disks_per_row) + first_column; } - if (unlikely(map_index >= RAID_MAP_MAX_ENTRIES)) - return PQI_RAID_BYPASS_INELIGIBLE; - aio_handle = raid_map->disk_data[map_index].aio_handle; disk_block = get_unaligned_le64(&raid_map->disk_starting_blk) + first_row * strip_size + @@ -2397,7 +2624,7 @@ static int pqi_wait_for_pqi_mode_ready(struct pqi_ctrl_info *ctrl_info) u8 status; pqi_registers = ctrl_info->pqi_registers; - timeout = (PQI_MODE_READY_TIMEOUT_SECS * HZ) + jiffies; + timeout = (PQI_MODE_READY_TIMEOUT_SECS * PQI_HZ) + jiffies; while (1) { signature = readq(&pqi_registers->signature); @@ -2458,10 +2685,9 @@ static inline void pqi_take_device_offline(struct scsi_device *sdev, char *path) return; device->device_offline = true; - scsi_device_set_state(sdev, SDEV_OFFLINE); ctrl_info = shost_to_hba(sdev->host); pqi_schedule_rescan_worker(ctrl_info); - dev_err(&ctrl_info->pci_dev->dev, "offlined %s scsi %d:%d:%d:%d\n", + dev_err(&ctrl_info->pci_dev->dev, "re-scanning %s scsi %d:%d:%d:%d\n", path, ctrl_info->scsi_host->host_no, device->bus, device->target, device->lun); } @@ -2665,6 +2891,9 @@ static int pqi_interpret_task_management_response( case SOP_TMF_FUNCTION_SUCCEEDED: rc = 0; break; + case SOP_TMF_REJECTED: + rc = -EAGAIN; + break; default: rc = -EIO; break; @@ -2704,8 +2933,17 @@ static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, switch (response->header.iu_type) { case PQI_RESPONSE_IU_RAID_PATH_IO_SUCCESS: case PQI_RESPONSE_IU_AIO_PATH_IO_SUCCESS: + if (io_request->scmd) + io_request->scmd->result = 0; + /* fall through */ case PQI_RESPONSE_IU_GENERAL_MANAGEMENT: break; + case PQI_RESPONSE_IU_VENDOR_GENERAL: + io_request->status = + get_unaligned_le16( + &((struct pqi_vendor_general_response *) + response)->status); + break; case PQI_RESPONSE_IU_TASK_MANAGEMENT: io_request->status = pqi_interpret_task_management_response( @@ -2825,16 +3063,121 @@ static void pqi_acknowledge_event(struct pqi_ctrl_info *ctrl_info, pqi_send_event_ack(ctrl_info, &request, sizeof(request)); } -static void pqi_event_worker(struct work_struct *work) +#define PQI_SOFT_RESET_STATUS_TIMEOUT_SECS 30 +#define PQI_SOFT_RESET_STATUS_POLL_INTERVAL_SECS 1 + +static enum pqi_soft_reset_status pqi_poll_for_soft_reset_status( + struct pqi_ctrl_info *ctrl_info) { - unsigned int i; - struct pqi_ctrl_info *ctrl_info; - struct pqi_event *event; + unsigned long timeout; + u8 status; - ctrl_info = container_of(work, struct pqi_ctrl_info, event_work); + timeout = (PQI_SOFT_RESET_STATUS_TIMEOUT_SECS * PQI_HZ) + jiffies; - pqi_ctrl_busy(ctrl_info); - pqi_wait_if_ctrl_blocked(ctrl_info, NO_TIMEOUT); + while (1) { + status = pqi_read_soft_reset_status(ctrl_info); + if (status & PQI_SOFT_RESET_INITIATE) + return RESET_INITIATE_DRIVER; + + if (status & PQI_SOFT_RESET_ABORT) + return RESET_ABORT; + + if (time_after(jiffies, timeout)) { + dev_err(&ctrl_info->pci_dev->dev, + "timed out waiting for soft reset status\n"); + return RESET_TIMEDOUT; + } + + if 
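/*
 * OFA event flow handled below, in outline (a reading of the handlers
 * in this hunk together with the soft-reset helpers that follow):
 *
 *   MEMORY_ALLOCATION: ack, allocate the host buffer, push it to the
 *                      firmware via pqi_ofa_host_memory_update()
 *   QUIESCE:           quiesce I/O, ack, then poll the soft reset
 *                      status register when the handshake is
 *                      supported, else assume RESET_INITIATE_FIRMWARE
 *   CANCELLED:         free the host buffer, ack, log the reason
 */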
(!sis_is_firmware_running(ctrl_info)) + return RESET_NORESPONSE; + + ssleep(PQI_SOFT_RESET_STATUS_POLL_INTERVAL_SECS); + } +} + +static void pqi_process_soft_reset(struct pqi_ctrl_info *ctrl_info, + enum pqi_soft_reset_status reset_status) +{ + int rc; + + switch (reset_status) { + case RESET_INITIATE_DRIVER: + /* fall through */ + case RESET_TIMEDOUT: + dev_info(&ctrl_info->pci_dev->dev, + "resetting controller %u\n", ctrl_info->ctrl_id); + sis_soft_reset(ctrl_info); + /* fall through */ + case RESET_INITIATE_FIRMWARE: + rc = pqi_ofa_ctrl_restart(ctrl_info); + pqi_ofa_free_host_buffer(ctrl_info); + dev_info(&ctrl_info->pci_dev->dev, + "Online Firmware Activation for controller %u: %s\n", + ctrl_info->ctrl_id, rc == 0 ? "SUCCESS" : "FAILED"); + break; + case RESET_ABORT: + pqi_ofa_ctrl_unquiesce(ctrl_info); + dev_info(&ctrl_info->pci_dev->dev, + "Online Firmware Activation for controller %u: %s\n", + ctrl_info->ctrl_id, "ABORTED"); + break; + case RESET_NORESPONSE: + pqi_ofa_free_host_buffer(ctrl_info); + pqi_take_ctrl_offline(ctrl_info); + break; + } +} + +static void pqi_ofa_process_event(struct pqi_ctrl_info *ctrl_info, + struct pqi_event *event) +{ + u16 event_id; + enum pqi_soft_reset_status status; + + event_id = get_unaligned_le16(&event->event_id); + + mutex_lock(&ctrl_info->ofa_mutex); + + if (event_id == PQI_EVENT_OFA_QUIESCE) { + dev_info(&ctrl_info->pci_dev->dev, + "Received Online Firmware Activation quiesce event for controller %u\n", + ctrl_info->ctrl_id); + pqi_ofa_ctrl_quiesce(ctrl_info); + pqi_acknowledge_event(ctrl_info, event); + if (ctrl_info->soft_reset_handshake_supported) { + status = pqi_poll_for_soft_reset_status(ctrl_info); + pqi_process_soft_reset(ctrl_info, status); + } else { + pqi_process_soft_reset(ctrl_info, + RESET_INITIATE_FIRMWARE); + } + + } else if (event_id == PQI_EVENT_OFA_MEMORY_ALLOCATION) { + pqi_acknowledge_event(ctrl_info, event); + pqi_ofa_setup_host_buffer(ctrl_info, + le32_to_cpu(event->ofa_bytes_requested)); + pqi_ofa_host_memory_update(ctrl_info); + } else if (event_id == PQI_EVENT_OFA_CANCELLED) { + pqi_ofa_free_host_buffer(ctrl_info); + pqi_acknowledge_event(ctrl_info, event); + dev_info(&ctrl_info->pci_dev->dev, + "Online Firmware Activation(%u) cancel reason : %u\n", + ctrl_info->ctrl_id, event->ofa_cancel_reason); + } + + mutex_unlock(&ctrl_info->ofa_mutex); +} + +static void pqi_event_worker(struct work_struct *work) +{ + unsigned int i; + struct pqi_ctrl_info *ctrl_info; + struct pqi_event *event; + + ctrl_info = container_of(work, struct pqi_ctrl_info, event_work); + + pqi_ctrl_busy(ctrl_info); + pqi_wait_if_ctrl_blocked(ctrl_info, NO_TIMEOUT); if (pqi_ctrl_offline(ctrl_info)) goto out; @@ -2844,6 +3187,11 @@ static void pqi_event_worker(struct work_struct *work) for (i = 0; i < PQI_NUM_SUPPORTED_EVENTS; i++) { if (event->pending) { event->pending = false; + if (event->event_type == PQI_EVENT_TYPE_OFA) { + pqi_ctrl_unbusy(ctrl_info); + pqi_ofa_process_event(ctrl_info, event); + return; + } pqi_acknowledge_event(ctrl_info, event); } event++; @@ -2853,7 +3201,7 @@ out: pqi_ctrl_unbusy(ctrl_info); } -#define PQI_HEARTBEAT_TIMER_INTERVAL (10 * HZ) +#define PQI_HEARTBEAT_TIMER_INTERVAL (10 * PQI_HZ) static void pqi_heartbeat_timer_handler(struct timer_list *t) { @@ -2922,6 +3270,24 @@ static inline bool pqi_is_supported_event(unsigned int event_type) return pqi_event_type_to_event_index(event_type) != -1; } +static void pqi_ofa_capture_event_payload(struct pqi_event *event, + struct pqi_event_response *response) +{ + u16 event_id; + + 
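+	/* Only two OFA events carry extra payload: a memory-allocation request (bytes needed) and a cancellation (reason code); capture it here before the response queue slot is reused. */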
event_id = get_unaligned_le16(&event->event_id); + + if (event->event_type == PQI_EVENT_TYPE_OFA) { + if (event_id == PQI_EVENT_OFA_MEMORY_ALLOCATION) { + event->ofa_bytes_requested = + response->data.ofa_memory_allocation.bytes_requested; + } else if (event_id == PQI_EVENT_OFA_CANCELLED) { + event->ofa_cancel_reason = + response->data.ofa_cancelled.reason; + } + } +} + static unsigned int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info) { unsigned int num_events; @@ -2956,6 +3322,7 @@ static unsigned int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info) event->event_id = response->event_id; event->additional_event_id = response->additional_event_id; + pqi_ofa_capture_event_payload(event, response); } } @@ -3389,7 +3756,7 @@ static int pqi_alloc_admin_queues(struct pqi_ctrl_info *ctrl_info) return 0; } -#define PQI_ADMIN_QUEUE_CREATE_TIMEOUT_JIFFIES HZ +#define PQI_ADMIN_QUEUE_CREATE_TIMEOUT_JIFFIES PQI_HZ #define PQI_ADMIN_QUEUE_CREATE_POLL_INTERVAL_MSECS 1 static int pqi_create_admin_queues(struct pqi_ctrl_info *ctrl_info) @@ -3482,7 +3849,7 @@ static int pqi_poll_for_admin_response(struct pqi_ctrl_info *ctrl_info, admin_queues = &ctrl_info->admin_queues; oq_ci = admin_queues->oq_ci_copy; - timeout = (PQI_ADMIN_REQUEST_TIMEOUT_SECS * HZ) + jiffies; + timeout = (PQI_ADMIN_REQUEST_TIMEOUT_SECS * PQI_HZ) + jiffies; while (1) { oq_pi = readl(admin_queues->oq_pi); @@ -3597,7 +3964,7 @@ static int pqi_wait_for_completion_io(struct pqi_ctrl_info *ctrl_info, while (1) { if (wait_for_completion_io_timeout(wait, - PQI_WAIT_FOR_COMPLETION_IO_TIMEOUT_SECS * HZ)) { + PQI_WAIT_FOR_COMPLETION_IO_TIMEOUT_SECS * PQI_HZ)) { rc = 0; break; } @@ -4927,7 +5294,17 @@ void pqi_prep_for_scsi_done(struct scsi_cmnd *scmd) { struct pqi_scsi_dev *device; + if (!scmd->device) { + set_host_byte(scmd, DID_NO_CONNECT); + return; + } + device = scmd->device->hostdata; + if (!device) { + set_host_byte(scmd, DID_NO_CONNECT); + return; + } + atomic_dec(&device->scsi_cmds_outstanding); } @@ -4944,16 +5321,24 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, device = scmd->device->hostdata; ctrl_info = shost_to_hba(shost); + if (!device) { + set_host_byte(scmd, DID_NO_CONNECT); + pqi_scsi_done(scmd); + return 0; + } + atomic_inc(&device->scsi_cmds_outstanding); - if (pqi_ctrl_offline(ctrl_info)) { + if (pqi_ctrl_offline(ctrl_info) || pqi_device_in_remove(ctrl_info, + device)) { set_host_byte(scmd, DID_NO_CONNECT); pqi_scsi_done(scmd); return 0; } pqi_ctrl_busy(ctrl_info); - if (pqi_ctrl_blocked(ctrl_info) || pqi_device_in_reset(device)) { + if (pqi_ctrl_blocked(ctrl_info) || pqi_device_in_reset(device) || + pqi_ctrl_in_ofa(ctrl_info)) { rc = SCSI_MLQUEUE_HOST_BUSY; goto out; } @@ -5098,25 +5483,75 @@ static void pqi_fail_io_queued_for_device(struct pqi_ctrl_info *ctrl_info, } } +static void pqi_fail_io_queued_for_all_devices(struct pqi_ctrl_info *ctrl_info) +{ + unsigned int i; + unsigned int path; + struct pqi_queue_group *queue_group; + unsigned long flags; + struct pqi_io_request *io_request; + struct pqi_io_request *next; + struct scsi_cmnd *scmd; + + for (i = 0; i < ctrl_info->num_queue_groups; i++) { + queue_group = &ctrl_info->queue_groups[i]; + + for (path = 0; path < 2; path++) { + spin_lock_irqsave(&queue_group->submit_lock[path], + flags); + + list_for_each_entry_safe(io_request, next, + &queue_group->request_list[path], + request_list_entry) { + + scmd = io_request->scmd; + if (!scmd) + continue; + + list_del(&io_request->request_list_entry); + set_host_byte(scmd, DID_RESET); + 
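+				/* DID_RESET tells the midlayer the command was terminated by a reset, so it can be retried once the controller recovers. */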
pqi_scsi_done(scmd); + } + + spin_unlock_irqrestore( + &queue_group->submit_lock[path], flags); + } + } +} + static int pqi_device_wait_for_pending_io(struct pqi_ctrl_info *ctrl_info, - struct pqi_scsi_dev *device) + struct pqi_scsi_dev *device, unsigned long timeout_secs) { + unsigned long timeout; + + timeout = (timeout_secs * PQI_HZ) + jiffies; + while (atomic_read(&device->scsi_cmds_outstanding)) { pqi_check_ctrl_health(ctrl_info); if (pqi_ctrl_offline(ctrl_info)) return -ENXIO; + if (timeout_secs != NO_TIMEOUT) { + if (time_after(jiffies, timeout)) { + dev_err(&ctrl_info->pci_dev->dev, + "timed out waiting for pending IO\n"); + return -ETIMEDOUT; + } + } usleep_range(1000, 2000); } return 0; } -static int pqi_ctrl_wait_for_pending_io(struct pqi_ctrl_info *ctrl_info) +static int pqi_ctrl_wait_for_pending_io(struct pqi_ctrl_info *ctrl_info, + unsigned long timeout_secs) { bool io_pending; unsigned long flags; + unsigned long timeout; struct pqi_scsi_dev *device; + timeout = (timeout_secs * PQI_HZ) + jiffies; while (1) { io_pending = false; @@ -5138,6 +5573,13 @@ static int pqi_ctrl_wait_for_pending_io(struct pqi_ctrl_info *ctrl_info) if (pqi_ctrl_offline(ctrl_info)) return -ENXIO; + if (timeout_secs != NO_TIMEOUT) { + if (time_after(jiffies, timeout)) { + dev_err(&ctrl_info->pci_dev->dev, + "timed out waiting for pending IO\n"); + return -ETIMEDOUT; + } + } usleep_range(1000, 2000); } @@ -5161,7 +5603,7 @@ static int pqi_wait_for_lun_reset_completion(struct pqi_ctrl_info *ctrl_info, while (1) { if (wait_for_completion_io_timeout(wait, - PQI_LUN_RESET_TIMEOUT_SECS * HZ)) { + PQI_LUN_RESET_TIMEOUT_SECS * PQI_HZ)) { rc = 0; break; } @@ -5212,20 +5654,56 @@ static int pqi_lun_reset(struct pqi_ctrl_info *ctrl_info, return rc; } +#define PQI_LUN_RESET_RETRIES 3 +#define PQI_LUN_RESET_RETRY_INTERVAL_MSECS 10000 /* Performs a reset at the LUN level. */ -static int pqi_device_reset(struct pqi_ctrl_info *ctrl_info, +static int _pqi_device_reset(struct pqi_ctrl_info *ctrl_info, struct pqi_scsi_dev *device) { int rc; + unsigned int retries; + unsigned long timeout_secs; - rc = pqi_lun_reset(ctrl_info, device); - if (rc == 0) - rc = pqi_device_wait_for_pending_io(ctrl_info, device); + for (retries = 0;;) { + rc = pqi_lun_reset(ctrl_info, device); + if (rc != -EAGAIN || + ++retries > PQI_LUN_RESET_RETRIES) + break; + msleep(PQI_LUN_RESET_RETRY_INTERVAL_MSECS); + } + timeout_secs = rc ? PQI_LUN_RESET_TIMEOUT_SECS : NO_TIMEOUT; + + rc |= pqi_device_wait_for_pending_io(ctrl_info, device, timeout_secs); return rc == 0 ? 
SUCCESS : FAILED; } +static int pqi_device_reset(struct pqi_ctrl_info *ctrl_info, + struct pqi_scsi_dev *device) +{ + int rc; + + mutex_lock(&ctrl_info->lun_reset_mutex); + + pqi_ctrl_block_requests(ctrl_info); + pqi_ctrl_wait_until_quiesced(ctrl_info); + pqi_fail_io_queued_for_device(ctrl_info, device); + rc = pqi_wait_until_inbound_queues_empty(ctrl_info); + pqi_device_reset_start(device); + pqi_ctrl_unblock_requests(ctrl_info); + + if (rc) + rc = FAILED; + else + rc = _pqi_device_reset(ctrl_info, device); + + pqi_device_reset_done(device); + + mutex_unlock(&ctrl_info->lun_reset_mutex); + return rc; +} + static int pqi_eh_device_reset_handler(struct scsi_cmnd *scmd) { int rc; @@ -5243,28 +5721,16 @@ static int pqi_eh_device_reset_handler(struct scsi_cmnd *scmd) pqi_check_ctrl_health(ctrl_info); if (pqi_ctrl_offline(ctrl_info)) { + dev_err(&ctrl_info->pci_dev->dev, + "controller %u offlined - cannot send device reset\n", + ctrl_info->ctrl_id); rc = FAILED; goto out; } - mutex_lock(&ctrl_info->lun_reset_mutex); - - pqi_ctrl_block_requests(ctrl_info); - pqi_ctrl_wait_until_quiesced(ctrl_info); - pqi_fail_io_queued_for_device(ctrl_info, device); - rc = pqi_wait_until_inbound_queues_empty(ctrl_info); - pqi_device_reset_start(device); - pqi_ctrl_unblock_requests(ctrl_info); - - if (rc) - rc = FAILED; - else - rc = pqi_device_reset(ctrl_info, device); - - pqi_device_reset_done(device); - - mutex_unlock(&ctrl_info->lun_reset_mutex); + pqi_wait_until_ofa_finished(ctrl_info); + rc = pqi_device_reset(ctrl_info, device); out: dev_err(&ctrl_info->pci_dev->dev, "reset of scsi %d:%d:%d:%d: %s\n", @@ -5308,6 +5774,10 @@ static int pqi_slave_alloc(struct scsi_device *sdev) scsi_change_queue_depth(sdev, device->advertised_queue_depth); } + if (pqi_is_logical_device(device)) + pqi_disable_write_same(sdev); + else + sdev->allow_restart = 1; } spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); @@ -5319,7 +5789,8 @@ static int pqi_map_queues(struct Scsi_Host *shost) { struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); - return blk_mq_pci_map_queues(&shost->tag_set, ctrl_info->pci_dev, 0); + return blk_mq_pci_map_queues(&shost->tag_set.map[0], + ctrl_info->pci_dev, 0); } static int pqi_getpciinfo_ioctl(struct pqi_ctrl_info *ctrl_info, @@ -5579,6 +6050,9 @@ static int pqi_ioctl(struct scsi_device *sdev, int cmd, void __user *arg) ctrl_info = shost_to_hba(sdev->host); + if (pqi_ctrl_in_ofa(ctrl_info)) + return -EBUSY; + switch (cmd) { case CCISS_DEREGDISK: case CCISS_REGNEWDISK: @@ -5683,6 +6157,150 @@ static struct device_attribute *pqi_shost_attrs[] = { NULL }; +static ssize_t pqi_unique_id_show(struct device *dev, + struct device_attribute *attr, char *buffer) +{ + struct pqi_ctrl_info *ctrl_info; + struct scsi_device *sdev; + struct pqi_scsi_dev *device; + unsigned long flags; + unsigned char uid[16]; + + sdev = to_scsi_device(dev); + ctrl_info = shost_to_hba(sdev->host); + + spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags); + + device = sdev->hostdata; + if (!device) { + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, + flags); + return -ENODEV; + } + memcpy(uid, device->unique_id, sizeof(uid)); + + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); + + return snprintf(buffer, PAGE_SIZE, + "%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X\n", + uid[0], uid[1], uid[2], uid[3], + uid[4], uid[5], uid[6], uid[7], + uid[8], uid[9], uid[10], uid[11], + uid[12], uid[13], uid[14], uid[15]); +} + +static ssize_t pqi_lunid_show(struct device *dev, 
+ struct device_attribute *attr, char *buffer) +{ + struct pqi_ctrl_info *ctrl_info; + struct scsi_device *sdev; + struct pqi_scsi_dev *device; + unsigned long flags; + u8 lunid[8]; + + sdev = to_scsi_device(dev); + ctrl_info = shost_to_hba(sdev->host); + + spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags); + + device = sdev->hostdata; + if (!device) { + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, + flags); + return -ENODEV; + } + memcpy(lunid, device->scsi3addr, sizeof(lunid)); + + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); + + return snprintf(buffer, PAGE_SIZE, "0x%8phN\n", lunid); +} + +#define MAX_PATHS 8 +static ssize_t pqi_path_info_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct pqi_ctrl_info *ctrl_info; + struct scsi_device *sdev; + struct pqi_scsi_dev *device; + unsigned long flags; + int i; + int output_len = 0; + u8 box; + u8 bay; + u8 path_map_index = 0; + char *active; + unsigned char phys_connector[2]; + + sdev = to_scsi_device(dev); + ctrl_info = shost_to_hba(sdev->host); + + spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags); + + device = sdev->hostdata; + if (!device) { + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, + flags); + return -ENODEV; + } + + bay = device->bay; + for (i = 0; i < MAX_PATHS; i++) { + path_map_index = 1 << i; + if (i == device->active_path_index) + active = "Active"; + else if (device->path_map & path_map_index) + active = "Inactive"; + else + continue; + + output_len += scnprintf(buf + output_len, + PAGE_SIZE - output_len, + "[%d:%d:%d:%d] %20.20s ", + ctrl_info->scsi_host->host_no, + device->bus, device->target, + device->lun, + scsi_device_type(device->devtype)); + + if (device->devtype == TYPE_RAID || + pqi_is_logical_device(device)) + goto end_buffer; + + memcpy(&phys_connector, &device->phys_connector[i], + sizeof(phys_connector)); + if (phys_connector[0] < '0') + phys_connector[0] = '0'; + if (phys_connector[1] < '0') + phys_connector[1] = '0'; + + output_len += scnprintf(buf + output_len, + PAGE_SIZE - output_len, + "PORT: %.2s ", phys_connector); + + box = device->box[i]; + if (box != 0 && box != 0xFF) + output_len += scnprintf(buf + output_len, + PAGE_SIZE - output_len, + "BOX: %hhu ", box); + + if ((device->devtype == TYPE_DISK || + device->devtype == TYPE_ZBC) && + pqi_expose_device(device)) + output_len += scnprintf(buf + output_len, + PAGE_SIZE - output_len, + "BAY: %hhu ", bay); + +end_buffer: + output_len += scnprintf(buf + output_len, + PAGE_SIZE - output_len, + "%s\n", active); + } + + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); + return output_len; +} + + static ssize_t pqi_sas_address_show(struct device *dev, struct device_attribute *attr, char *buffer) { @@ -5759,12 +6377,18 @@ static ssize_t pqi_raid_level_show(struct device *dev, return snprintf(buffer, PAGE_SIZE, "%s\n", raid_level); } +static DEVICE_ATTR(lunid, 0444, pqi_lunid_show, NULL); +static DEVICE_ATTR(unique_id, 0444, pqi_unique_id_show, NULL); +static DEVICE_ATTR(path_info, 0444, pqi_path_info_show, NULL); static DEVICE_ATTR(sas_address, 0444, pqi_sas_address_show, NULL); static DEVICE_ATTR(ssd_smart_path_enabled, 0444, pqi_ssd_smart_path_enabled_show, NULL); static DEVICE_ATTR(raid_level, 0444, pqi_raid_level_show, NULL); static struct device_attribute *pqi_sdev_attrs[] = { + &dev_attr_lunid, + &dev_attr_unique_id, + &dev_attr_path_info, &dev_attr_sas_address, &dev_attr_ssd_smart_path_enabled, &dev_attr_raid_level, @@ -5779,7 +6403,6 @@ static struct scsi_host_template
pqi_driver_template = { .scan_start = pqi_scan_start, .scan_finished = pqi_scan_finished, .this_id = -1, - .use_clustering = ENABLE_CLUSTERING, .eh_device_reset_handler = pqi_eh_device_reset_handler, .ioctl = pqi_ioctl, .slave_alloc = pqi_slave_alloc, @@ -5947,38 +6570,288 @@ out: return rc; } -static int pqi_process_config_table(struct pqi_ctrl_info *ctrl_info) +struct pqi_config_table_section_info { + struct pqi_ctrl_info *ctrl_info; + void *section; + u32 section_offset; + void __iomem *section_iomem_addr; +}; + +static inline bool pqi_is_firmware_feature_supported( + struct pqi_config_table_firmware_features *firmware_features, + unsigned int bit_position) { - u32 table_length; - u32 section_offset; - void __iomem *table_iomem_addr; - struct pqi_config_table *config_table; - struct pqi_config_table_section_header *section; + unsigned int byte_index; - table_length = ctrl_info->config_table_length; + byte_index = bit_position / BITS_PER_BYTE; - config_table = kmalloc(table_length, GFP_KERNEL); - if (!config_table) { - dev_err(&ctrl_info->pci_dev->dev, - "failed to allocate memory for PQI configuration table\n"); - return -ENOMEM; - } + if (byte_index >= le16_to_cpu(firmware_features->num_elements)) + return false; - /* - * Copy the config table contents from I/O memory space into the - * temporary buffer. - */ + return firmware_features->features_supported[byte_index] & + (1 << (bit_position % BITS_PER_BYTE)) ? true : false; +} + +static inline bool pqi_is_firmware_feature_enabled( + struct pqi_config_table_firmware_features *firmware_features, + void __iomem *firmware_features_iomem_addr, + unsigned int bit_position) +{ + unsigned int byte_index; + u8 __iomem *features_enabled_iomem_addr; + + byte_index = (bit_position / BITS_PER_BYTE) + + (le16_to_cpu(firmware_features->num_elements) * 2); + + features_enabled_iomem_addr = firmware_features_iomem_addr + + offsetof(struct pqi_config_table_firmware_features, + features_supported) + byte_index; + + return *((__force u8 *)features_enabled_iomem_addr) & + (1 << (bit_position % BITS_PER_BYTE)) ? 
true : false; +} + +static inline void pqi_request_firmware_feature( + struct pqi_config_table_firmware_features *firmware_features, + unsigned int bit_position) +{ + unsigned int byte_index; + + byte_index = (bit_position / BITS_PER_BYTE) + + le16_to_cpu(firmware_features->num_elements); + + firmware_features->features_supported[byte_index] |= + (1 << (bit_position % BITS_PER_BYTE)); +} + +static int pqi_config_table_update(struct pqi_ctrl_info *ctrl_info, + u16 first_section, u16 last_section) +{ + struct pqi_vendor_general_request request; + + memset(&request, 0, sizeof(request)); + + request.header.iu_type = PQI_REQUEST_IU_VENDOR_GENERAL; + put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH, + &request.header.iu_length); + put_unaligned_le16(PQI_VENDOR_GENERAL_CONFIG_TABLE_UPDATE, + &request.function_code); + put_unaligned_le16(first_section, + &request.data.config_table_update.first_section); + put_unaligned_le16(last_section, + &request.data.config_table_update.last_section); + + return pqi_submit_raid_request_synchronous(ctrl_info, &request.header, + 0, NULL, NO_TIMEOUT); +} + +static int pqi_enable_firmware_features(struct pqi_ctrl_info *ctrl_info, + struct pqi_config_table_firmware_features *firmware_features, + void __iomem *firmware_features_iomem_addr) +{ + void *features_requested; + void __iomem *features_requested_iomem_addr; + + features_requested = firmware_features->features_supported + + le16_to_cpu(firmware_features->num_elements); + + features_requested_iomem_addr = firmware_features_iomem_addr + + (features_requested - (void *)firmware_features); + + memcpy_toio(features_requested_iomem_addr, features_requested, + le16_to_cpu(firmware_features->num_elements)); + + return pqi_config_table_update(ctrl_info, + PQI_CONFIG_TABLE_SECTION_FIRMWARE_FEATURES, + PQI_CONFIG_TABLE_SECTION_FIRMWARE_FEATURES); +} + +struct pqi_firmware_feature { + char *feature_name; + unsigned int feature_bit; + bool supported; + bool enabled; + void (*feature_status)(struct pqi_ctrl_info *ctrl_info, + struct pqi_firmware_feature *firmware_feature); +}; + +static void pqi_firmware_feature_status(struct pqi_ctrl_info *ctrl_info, + struct pqi_firmware_feature *firmware_feature) +{ + if (!firmware_feature->supported) { + dev_info(&ctrl_info->pci_dev->dev, "%s not supported by controller\n", + firmware_feature->feature_name); + return; + } + + if (firmware_feature->enabled) { + dev_info(&ctrl_info->pci_dev->dev, + "%s enabled\n", firmware_feature->feature_name); + return; + } + + dev_err(&ctrl_info->pci_dev->dev, "failed to enable %s\n", + firmware_feature->feature_name); +} + +static inline void pqi_firmware_feature_update(struct pqi_ctrl_info *ctrl_info, + struct pqi_firmware_feature *firmware_feature) +{ + if (firmware_feature->feature_status) + firmware_feature->feature_status(ctrl_info, firmware_feature); +} + +static DEFINE_MUTEX(pqi_firmware_features_mutex); + +static struct pqi_firmware_feature pqi_firmware_features[] = { + { + .feature_name = "Online Firmware Activation", + .feature_bit = PQI_FIRMWARE_FEATURE_OFA, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = "Serial Management Protocol", + .feature_bit = PQI_FIRMWARE_FEATURE_SMP, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = "New Soft Reset Handshake", + .feature_bit = PQI_FIRMWARE_FEATURE_SOFT_RESET_HANDSHAKE, + .feature_status = pqi_firmware_feature_status, + }, +}; + +static void pqi_process_firmware_features( + struct pqi_config_table_section_info *section_info) +{ + 
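+	/* Feature negotiation, inferred from the byte-index math above: the section holds three num_elements-byte arrays back to back - features_supported (read-only), features_requested at offset +num_elements (written by the driver), and features_enabled at +2*num_elements (read back after the table update). E.g. feature bit 13 lives in byte 1, mask 1 << 5, of each array. */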
int rc; + struct pqi_ctrl_info *ctrl_info; + struct pqi_config_table_firmware_features *firmware_features; + void __iomem *firmware_features_iomem_addr; + unsigned int i; + unsigned int num_features_supported; + + ctrl_info = section_info->ctrl_info; + firmware_features = section_info->section; + firmware_features_iomem_addr = section_info->section_iomem_addr; + + for (i = 0, num_features_supported = 0; + i < ARRAY_SIZE(pqi_firmware_features); i++) { + if (pqi_is_firmware_feature_supported(firmware_features, + pqi_firmware_features[i].feature_bit)) { + pqi_firmware_features[i].supported = true; + num_features_supported++; + } else { + pqi_firmware_feature_update(ctrl_info, + &pqi_firmware_features[i]); + } + } + + if (num_features_supported == 0) + return; + + for (i = 0; i < ARRAY_SIZE(pqi_firmware_features); i++) { + if (!pqi_firmware_features[i].supported) + continue; + pqi_request_firmware_feature(firmware_features, + pqi_firmware_features[i].feature_bit); + } + + rc = pqi_enable_firmware_features(ctrl_info, firmware_features, + firmware_features_iomem_addr); + if (rc) { + dev_err(&ctrl_info->pci_dev->dev, + "failed to enable firmware features in PQI configuration table\n"); + for (i = 0; i < ARRAY_SIZE(pqi_firmware_features); i++) { + if (!pqi_firmware_features[i].supported) + continue; + pqi_firmware_feature_update(ctrl_info, + &pqi_firmware_features[i]); + } + return; + } + + ctrl_info->soft_reset_handshake_supported = false; + for (i = 0; i < ARRAY_SIZE(pqi_firmware_features); i++) { + if (!pqi_firmware_features[i].supported) + continue; + if (pqi_is_firmware_feature_enabled(firmware_features, + firmware_features_iomem_addr, + pqi_firmware_features[i].feature_bit)) { + pqi_firmware_features[i].enabled = true; + if (pqi_firmware_features[i].feature_bit == + PQI_FIRMWARE_FEATURE_SOFT_RESET_HANDSHAKE) + ctrl_info->soft_reset_handshake_supported = + true; + } + pqi_firmware_feature_update(ctrl_info, + &pqi_firmware_features[i]); + } +} + +static void pqi_init_firmware_features(void) +{ + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(pqi_firmware_features); i++) { + pqi_firmware_features[i].supported = false; + pqi_firmware_features[i].enabled = false; + } +} + +static void pqi_process_firmware_features_section( + struct pqi_config_table_section_info *section_info) +{ + mutex_lock(&pqi_firmware_features_mutex); + pqi_init_firmware_features(); + pqi_process_firmware_features(section_info); + mutex_unlock(&pqi_firmware_features_mutex); +} + +static int pqi_process_config_table(struct pqi_ctrl_info *ctrl_info) +{ + u32 table_length; + u32 section_offset; + void __iomem *table_iomem_addr; + struct pqi_config_table *config_table; + struct pqi_config_table_section_header *section; + struct pqi_config_table_section_info section_info; + + table_length = ctrl_info->config_table_length; + if (table_length == 0) + return 0; + + config_table = kmalloc(table_length, GFP_KERNEL); + if (!config_table) { + dev_err(&ctrl_info->pci_dev->dev, + "failed to allocate memory for PQI configuration table\n"); + return -ENOMEM; + } + + /* + * Copy the config table contents from I/O memory space into the + * temporary buffer. 
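+	 * Parsing then works on plain pointers; writes back to the table (feature requests) still go through the saved __iomem section addresses via memcpy_toio().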
+ */ table_iomem_addr = ctrl_info->iomem_base + ctrl_info->config_table_offset; memcpy_fromio(config_table, table_iomem_addr, table_length); + section_info.ctrl_info = ctrl_info; section_offset = get_unaligned_le32(&config_table->first_section_offset); while (section_offset) { section = (void *)config_table + section_offset; + section_info.section = section; + section_info.section_offset = section_offset; + section_info.section_iomem_addr = + table_iomem_addr + section_offset; + switch (get_unaligned_le16(&section->section_id)) { + case PQI_CONFIG_TABLE_SECTION_FIRMWARE_FEATURES: + pqi_process_firmware_features_section(&section_info); + break; case PQI_CONFIG_TABLE_SECTION_HEARTBEAT: if (pqi_disable_heartbeat) dev_warn(&ctrl_info->pci_dev->dev, @@ -5991,6 +6864,13 @@ static int pqi_process_config_table(struct pqi_ctrl_info *ctrl_info) struct pqi_config_table_heartbeat, heartbeat_counter); break; + case PQI_CONFIG_TABLE_SECTION_SOFT_RESET: + ctrl_info->soft_reset_status = + table_iomem_addr + + section_offset + + offsetof(struct pqi_config_table_soft_reset, + soft_reset_status); + break; } section_offset = @@ -6123,10 +7003,6 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info) ctrl_info->pqi_mode_enabled = true; pqi_save_ctrl_mode(ctrl_info, PQI_MODE); - rc = pqi_process_config_table(ctrl_info); - if (rc) - return rc; - rc = pqi_alloc_admin_queues(ctrl_info); if (rc) { dev_err(&ctrl_info->pci_dev->dev, @@ -6188,6 +7064,11 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info) pqi_change_irq_mode(ctrl_info, IRQ_MODE_MSIX); ctrl_info->controller_online = true; + + rc = pqi_process_config_table(ctrl_info); + if (rc) + return rc; + pqi_start_heartbeat_timer(ctrl_info); rc = pqi_enable_events(ctrl_info); @@ -6209,6 +7090,13 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info) return rc; } + rc = pqi_set_diag_rescan(ctrl_info); + if (rc) { + dev_err(&ctrl_info->pci_dev->dev, + "error enabling multi-lun rescan\n"); + return rc; + } + rc = pqi_write_driver_version_to_host_wellness(ctrl_info); if (rc) { dev_err(&ctrl_info->pci_dev->dev, @@ -6265,6 +7153,24 @@ static int pqi_ctrl_init_resume(struct pqi_ctrl_info *ctrl_info) if (rc) return rc; + /* + * Get the controller properties. This allows us to determine + * whether or not it supports PQI mode.
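+	 * (The resume path re-runs the same SIS queries done at initial probe.)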
+ */ + rc = sis_get_ctrl_properties(ctrl_info); + if (rc) { + dev_err(&ctrl_info->pci_dev->dev, + "error obtaining controller properties\n"); + return rc; + } + + rc = sis_get_pqi_capabilities(ctrl_info); + if (rc) { + dev_err(&ctrl_info->pci_dev->dev, + "error obtaining controller capabilities\n"); + return rc; + } + /* * If the function we are about to call succeeds, the * controller will transition from legacy SIS mode @@ -6305,9 +7211,14 @@ static int pqi_ctrl_init_resume(struct pqi_ctrl_info *ctrl_info) pqi_change_irq_mode(ctrl_info, IRQ_MODE_MSIX); ctrl_info->controller_online = true; - pqi_start_heartbeat_timer(ctrl_info); pqi_ctrl_unblock_requests(ctrl_info); + rc = pqi_process_config_table(ctrl_info); + if (rc) + return rc; + + pqi_start_heartbeat_timer(ctrl_info); + rc = pqi_enable_events(ctrl_info); if (rc) { dev_err(&ctrl_info->pci_dev->dev, @@ -6315,6 +7226,20 @@ static int pqi_ctrl_init_resume(struct pqi_ctrl_info *ctrl_info) return rc; } + rc = pqi_get_ctrl_firmware_version(ctrl_info); + if (rc) { + dev_err(&ctrl_info->pci_dev->dev, + "error obtaining firmware version\n"); + return rc; + } + + rc = pqi_set_diag_rescan(ctrl_info); + if (rc) { + dev_err(&ctrl_info->pci_dev->dev, + "error enabling multi-lun rescan\n"); + return rc; + } + rc = pqi_write_driver_version_to_host_wellness(ctrl_info); if (rc) { dev_err(&ctrl_info->pci_dev->dev, @@ -6425,6 +7350,7 @@ static struct pqi_ctrl_info *pqi_alloc_ctrl_info(int numa_node) mutex_init(&ctrl_info->scan_mutex); mutex_init(&ctrl_info->lun_reset_mutex); + mutex_init(&ctrl_info->ofa_mutex); INIT_LIST_HEAD(&ctrl_info->scsi_device_list); spin_lock_init(&ctrl_info->scsi_device_list_lock); @@ -6501,6 +7427,217 @@ static void pqi_remove_ctrl(struct pqi_ctrl_info *ctrl_info) pqi_free_ctrl_resources(ctrl_info); } +static void pqi_ofa_ctrl_quiesce(struct pqi_ctrl_info *ctrl_info) +{ + pqi_cancel_update_time_worker(ctrl_info); + pqi_cancel_rescan_worker(ctrl_info); + pqi_wait_until_lun_reset_finished(ctrl_info); + pqi_wait_until_scan_finished(ctrl_info); + pqi_ctrl_ofa_start(ctrl_info); + pqi_ctrl_block_requests(ctrl_info); + pqi_ctrl_wait_until_quiesced(ctrl_info); + pqi_ctrl_wait_for_pending_io(ctrl_info, PQI_PENDING_IO_TIMEOUT_SECS); + pqi_fail_io_queued_for_all_devices(ctrl_info); + pqi_wait_until_inbound_queues_empty(ctrl_info); + pqi_stop_heartbeat_timer(ctrl_info); + ctrl_info->pqi_mode_enabled = false; + pqi_save_ctrl_mode(ctrl_info, SIS_MODE); +} + +static void pqi_ofa_ctrl_unquiesce(struct pqi_ctrl_info *ctrl_info) +{ + pqi_ofa_free_host_buffer(ctrl_info); + ctrl_info->pqi_mode_enabled = true; + pqi_save_ctrl_mode(ctrl_info, PQI_MODE); + ctrl_info->controller_online = true; + pqi_ctrl_unblock_requests(ctrl_info); + pqi_start_heartbeat_timer(ctrl_info); + pqi_schedule_update_time_worker(ctrl_info); + pqi_clear_soft_reset_status(ctrl_info, + PQI_SOFT_RESET_ABORT); + pqi_scan_scsi_devices(ctrl_info); +} + +static int pqi_ofa_alloc_mem(struct pqi_ctrl_info *ctrl_info, + u32 total_size, u32 chunk_size) +{ + u32 sg_count; + u32 size; + int i; + struct pqi_sg_descriptor *mem_descriptor = NULL; + struct device *dev; + struct pqi_ofa_memory *ofap; + + dev = &ctrl_info->pci_dev->dev; + + sg_count = (total_size + chunk_size - 1); + sg_count /= chunk_size; + + ofap = ctrl_info->pqi_ofa_mem_virt_addr; + + if (sg_count*chunk_size < total_size) + goto out; + + ctrl_info->pqi_ofa_chunk_virt_addr = + kcalloc(sg_count, sizeof(void *), GFP_KERNEL); + if (!ctrl_info->pqi_ofa_chunk_virt_addr) + goto out; + + for (size = 0, i = 0; size < total_size; size 
+= chunk_size, i++) { + dma_addr_t dma_handle; + + ctrl_info->pqi_ofa_chunk_virt_addr[i] = + dma_zalloc_coherent(dev, chunk_size, &dma_handle, + GFP_KERNEL); + + if (!ctrl_info->pqi_ofa_chunk_virt_addr[i]) + break; + + mem_descriptor = &ofap->sg_descriptor[i]; + put_unaligned_le64 ((u64) dma_handle, &mem_descriptor->address); + put_unaligned_le32 (chunk_size, &mem_descriptor->length); + } + + if (!size || size < total_size) + goto out_free_chunks; + + put_unaligned_le32(CISS_SG_LAST, &mem_descriptor->flags); + put_unaligned_le16(sg_count, &ofap->num_memory_descriptors); + put_unaligned_le32(size, &ofap->bytes_allocated); + + return 0; + +out_free_chunks: + while (--i >= 0) { + mem_descriptor = &ofap->sg_descriptor[i]; + dma_free_coherent(dev, chunk_size, + ctrl_info->pqi_ofa_chunk_virt_addr[i], + get_unaligned_le64(&mem_descriptor->address)); + } + kfree(ctrl_info->pqi_ofa_chunk_virt_addr); + +out: + put_unaligned_le32 (0, &ofap->bytes_allocated); + return -ENOMEM; +} + +static int pqi_ofa_alloc_host_buffer(struct pqi_ctrl_info *ctrl_info) +{ + u32 total_size; + u32 min_chunk_size; + u32 chunk_sz; + + total_size = le32_to_cpu( + ctrl_info->pqi_ofa_mem_virt_addr->bytes_allocated); + min_chunk_size = total_size / PQI_OFA_MAX_SG_DESCRIPTORS; + + for (chunk_sz = total_size; chunk_sz >= min_chunk_size; chunk_sz /= 2) + if (!pqi_ofa_alloc_mem(ctrl_info, total_size, chunk_sz)) + return 0; + + return -ENOMEM; +} + +static void pqi_ofa_setup_host_buffer(struct pqi_ctrl_info *ctrl_info, + u32 bytes_requested) +{ + struct pqi_ofa_memory *pqi_ofa_memory; + struct device *dev; + + dev = &ctrl_info->pci_dev->dev; + pqi_ofa_memory = dma_zalloc_coherent(dev, + PQI_OFA_MEMORY_DESCRIPTOR_LENGTH, + &ctrl_info->pqi_ofa_mem_dma_handle, + GFP_KERNEL); + + if (!pqi_ofa_memory) + return; + + put_unaligned_le16(PQI_OFA_VERSION, &pqi_ofa_memory->version); + memcpy(&pqi_ofa_memory->signature, PQI_OFA_SIGNATURE, + sizeof(pqi_ofa_memory->signature)); + pqi_ofa_memory->bytes_allocated = cpu_to_le32(bytes_requested); + + ctrl_info->pqi_ofa_mem_virt_addr = pqi_ofa_memory; + + if (pqi_ofa_alloc_host_buffer(ctrl_info) < 0) { + dev_err(dev, "Failed to allocate host buffer of size = %u", + bytes_requested); + } +} + +static void pqi_ofa_free_host_buffer(struct pqi_ctrl_info *ctrl_info) +{ + int i; + struct pqi_sg_descriptor *mem_descriptor; + struct pqi_ofa_memory *ofap; + + ofap = ctrl_info->pqi_ofa_mem_virt_addr; + + if (!ofap) + return; + + if (!ofap->bytes_allocated) + goto out; + + mem_descriptor = ofap->sg_descriptor; + + for (i = 0; i < get_unaligned_le16(&ofap->num_memory_descriptors); + i++) { + dma_free_coherent(&ctrl_info->pci_dev->dev, + get_unaligned_le32(&mem_descriptor[i].length), + ctrl_info->pqi_ofa_chunk_virt_addr[i], + get_unaligned_le64(&mem_descriptor[i].address)); + } + kfree(ctrl_info->pqi_ofa_chunk_virt_addr); + +out: + dma_free_coherent(&ctrl_info->pci_dev->dev, + PQI_OFA_MEMORY_DESCRIPTOR_LENGTH, ofap, + ctrl_info->pqi_ofa_mem_dma_handle); + ctrl_info->pqi_ofa_mem_virt_addr = NULL; +} + +static int pqi_ofa_host_memory_update(struct pqi_ctrl_info *ctrl_info) +{ + struct pqi_vendor_general_request request; + size_t size; + struct pqi_ofa_memory *ofap; + + memset(&request, 0, sizeof(request)); + + ofap = ctrl_info->pqi_ofa_mem_virt_addr; + + request.header.iu_type = PQI_REQUEST_IU_VENDOR_GENERAL; + put_unaligned_le16(sizeof(request) - PQI_REQUEST_HEADER_LENGTH, + &request.header.iu_length); + put_unaligned_le16(PQI_VENDOR_GENERAL_HOST_MEMORY_UPDATE, + &request.function_code); + + if (ofap) { + size = 
offsetof(struct pqi_ofa_memory, sg_descriptor) + + get_unaligned_le16(&ofap->num_memory_descriptors) * + sizeof(struct pqi_sg_descriptor); + + put_unaligned_le64((u64)ctrl_info->pqi_ofa_mem_dma_handle, + &request.data.ofa_memory_allocation.buffer_address); + put_unaligned_le32(size, + &request.data.ofa_memory_allocation.buffer_length); + + } + + return pqi_submit_raid_request_synchronous(ctrl_info, &request.header, + 0, NULL, NO_TIMEOUT); +} + +#define PQI_POST_RESET_DELAY_B4_MSGU_READY 5000 + +static int pqi_ofa_ctrl_restart(struct pqi_ctrl_info *ctrl_info) +{ + msleep(PQI_POST_RESET_DELAY_B4_MSGU_READY); + return pqi_ctrl_init_resume(ctrl_info); +} + static void pqi_perform_lockup_action(void) { switch (pqi_lockup_action) { @@ -6599,7 +7736,7 @@ static int pqi_pci_probe(struct pci_dev *pci_dev, const struct pci_device_id *id) { int rc; - int node; + int node, cp_node; struct pqi_ctrl_info *ctrl_info; pqi_print_ctrl_info(pci_dev, id); @@ -6617,8 +7754,12 @@ static int pqi_pci_probe(struct pci_dev *pci_dev, "controller device ID matched using wildcards\n"); node = dev_to_node(&pci_dev->dev); - if (node == NUMA_NO_NODE) - set_dev_node(&pci_dev->dev, 0); + if (node == NUMA_NO_NODE) { + cp_node = cpu_to_node(0); + if (cp_node == NUMA_NO_NODE) + cp_node = 0; + set_dev_node(&pci_dev->dev, cp_node); + } ctrl_info = pqi_alloc_ctrl_info(node); if (!ctrl_info) { @@ -6653,6 +7794,8 @@ static void pqi_pci_remove(struct pci_dev *pci_dev) if (!ctrl_info) return; + ctrl_info->in_shutdown = true; + pqi_remove_ctrl(ctrl_info); } @@ -6670,6 +7813,7 @@ static void pqi_shutdown(struct pci_dev *pci_dev) * storage. */ rc = pqi_flush_cache(ctrl_info, SHUTDOWN); + pqi_free_interrupts(ctrl_info); pqi_reset(ctrl_info); if (rc == 0) return; @@ -6714,11 +7858,12 @@ static __maybe_unused int pqi_suspend(struct pci_dev *pci_dev, pm_message_t stat pqi_cancel_rescan_worker(ctrl_info); pqi_wait_until_scan_finished(ctrl_info); pqi_wait_until_lun_reset_finished(ctrl_info); + pqi_wait_until_ofa_finished(ctrl_info); pqi_flush_cache(ctrl_info, SUSPEND); pqi_ctrl_block_requests(ctrl_info); pqi_ctrl_wait_until_quiesced(ctrl_info); pqi_wait_until_inbound_queues_empty(ctrl_info); - pqi_ctrl_wait_for_pending_io(ctrl_info); + pqi_ctrl_wait_for_pending_io(ctrl_info, NO_TIMEOUT); pqi_stop_heartbeat_timer(ctrl_info); if (state.event == PM_EVENT_FREEZE) @@ -6802,6 +7947,14 @@ static const struct pci_device_id pqi_pci_id_table[] = { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 0x193d, 0x8461) }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x193d, 0xc460) + }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x193d, 0xc461) + }, { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 0x193d, 0xf460) @@ -6838,6 +7991,30 @@ static const struct pci_device_id pqi_pci_id_table[] = { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 0x1bd4, 0x004c) }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x19e5, 0xd227) + }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x19e5, 0xd228) + }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x19e5, 0xd229) + }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x19e5, 0xd22a) + }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x19e5, 0xd22b) + }, + { + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, + 0x19e5, 0xd22c) + }, { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, PCI_VENDOR_ID_ADAPTEC2, 0x0110) diff --git a/drivers/scsi/smartpqi/smartpqi_sas_transport.c b/drivers/scsi/smartpqi/smartpqi_sas_transport.c index b209a35e482e..0e4ef215115f 100644 --- 
a/drivers/scsi/smartpqi/smartpqi_sas_transport.c +++ b/drivers/scsi/smartpqi/smartpqi_sas_transport.c @@ -17,9 +17,11 @@ */ #include <linux/kernel.h> +#include <linux/bsg-lib.h> #include <scsi/scsi_host.h> #include <scsi/scsi_transport_sas.h> +#include <asm/unaligned.h> #include "smartpqi.h" static struct pqi_sas_phy *pqi_alloc_sas_phy(struct pqi_sas_port *pqi_sas_port) @@ -97,14 +99,32 @@ static int pqi_sas_port_add_rphy(struct pqi_sas_port *pqi_sas_port, identify = &rphy->identify; identify->sas_address = pqi_sas_port->sas_address; - identify->initiator_port_protocols = SAS_PROTOCOL_STP; - identify->target_port_protocols = SAS_PROTOCOL_STP; + + if (pqi_sas_port->device && + pqi_sas_port->device->is_expander_smp_device) { + identify->initiator_port_protocols = SAS_PROTOCOL_SMP; + identify->target_port_protocols = SAS_PROTOCOL_SMP; + } else { + identify->initiator_port_protocols = SAS_PROTOCOL_STP; + identify->target_port_protocols = SAS_PROTOCOL_STP; + } return sas_rphy_add(rphy); } +static struct sas_rphy *pqi_sas_rphy_alloc(struct pqi_sas_port *pqi_sas_port) +{ + if (pqi_sas_port->device && + pqi_sas_port->device->is_expander_smp_device) + return sas_expander_alloc(pqi_sas_port->port, + SAS_FANOUT_EXPANDER_DEVICE); + + return sas_end_device_alloc(pqi_sas_port->port); +} + static struct pqi_sas_port *pqi_alloc_sas_port( - struct pqi_sas_node *pqi_sas_node, u64 sas_address) + struct pqi_sas_node *pqi_sas_node, u64 sas_address, + struct pqi_scsi_dev *device) { int rc; struct pqi_sas_port *pqi_sas_port; @@ -127,6 +147,7 @@ static struct pqi_sas_port *pqi_alloc_sas_port( pqi_sas_port->port = port; pqi_sas_port->sas_address = sas_address; + pqi_sas_port->device = device; list_add_tail(&pqi_sas_port->port_list_entry, &pqi_sas_node->port_list_head); @@ -146,7 +167,7 @@ static void pqi_free_sas_port(struct pqi_sas_port *pqi_sas_port) struct pqi_sas_phy *next; list_for_each_entry_safe(pqi_sas_phy, next, - &pqi_sas_port->phy_list_head, phy_list_entry) + &pqi_sas_port->phy_list_head, phy_list_entry) pqi_free_sas_phy(pqi_sas_phy); sas_port_delete(pqi_sas_port->port); @@ -176,7 +197,7 @@ static void pqi_free_sas_node(struct pqi_sas_node *pqi_sas_node) return; list_for_each_entry_safe(pqi_sas_port, next, - &pqi_sas_node->port_list_head, port_list_entry) + &pqi_sas_node->port_list_head, port_list_entry) pqi_free_sas_port(pqi_sas_port); kfree(pqi_sas_node); @@ -206,13 +227,14 @@ int pqi_add_sas_host(struct Scsi_Host *shost, struct pqi_ctrl_info *ctrl_info) struct pqi_sas_port *pqi_sas_port; struct pqi_sas_phy *pqi_sas_phy; - parent_dev = &shost->shost_gendev; + parent_dev = &shost->shost_dev; pqi_sas_node = pqi_alloc_sas_node(parent_dev); if (!pqi_sas_node) return -ENOMEM; - pqi_sas_port = pqi_alloc_sas_port(pqi_sas_node, ctrl_info->sas_address); + pqi_sas_port = pqi_alloc_sas_port(pqi_sas_node, + ctrl_info->sas_address, NULL); if (!pqi_sas_port) { rc = -ENODEV; goto free_sas_node; @@ -254,11 +276,12 @@ int pqi_add_sas_device(struct pqi_sas_node *pqi_sas_node, struct pqi_sas_port *pqi_sas_port; struct sas_rphy *rphy; - pqi_sas_port = pqi_alloc_sas_port(pqi_sas_node, device->sas_address); + pqi_sas_port = pqi_alloc_sas_port(pqi_sas_node, + device->sas_address, device); if (!pqi_sas_port) return -ENOMEM; - rphy = sas_end_device_alloc(pqi_sas_port->port); + rphy = pqi_sas_rphy_alloc(pqi_sas_port); if (!rphy) { rc = -ENODEV; goto free_sas_port; @@ -329,6 +352,128 @@ static int pqi_sas_phy_speed(struct sas_phy *phy, return -EINVAL; } +#define CSMI_IOCTL_TIMEOUT 60 +#define SMP_CRC_FIELD_LENGTH 4 + +static struct bmic_csmi_smp_passthru_buffer * +pqi_build_csmi_smp_passthru_buffer(struct sas_rphy
*rphy, + struct bsg_job *job) +{ + struct bmic_csmi_smp_passthru_buffer *smp_buf; + struct bmic_csmi_ioctl_header *ioctl_header; + struct bmic_csmi_smp_passthru *parameters; + u32 req_size; + u32 resp_size; + + smp_buf = kzalloc(sizeof(*smp_buf), GFP_KERNEL); + if (!smp_buf) + return NULL; + + req_size = job->request_payload.payload_len; + resp_size = job->reply_payload.payload_len; + + ioctl_header = &smp_buf->ioctl_header; + put_unaligned_le32(sizeof(smp_buf->ioctl_header), + &ioctl_header->header_length); + put_unaligned_le32(CSMI_IOCTL_TIMEOUT, &ioctl_header->timeout); + put_unaligned_le32(CSMI_CC_SAS_SMP_PASSTHRU, + &ioctl_header->control_code); + put_unaligned_le32(sizeof(smp_buf->parameters), &ioctl_header->length); + + parameters = &smp_buf->parameters; + parameters->phy_identifier = rphy->identify.phy_identifier; + parameters->port_identifier = 0; + parameters->connection_rate = 0; + put_unaligned_be64(rphy->identify.sas_address, + &parameters->destination_sas_address); + + if (req_size > SMP_CRC_FIELD_LENGTH) + req_size -= SMP_CRC_FIELD_LENGTH; + + put_unaligned_le32(req_size, &parameters->request_length); + + put_unaligned_le32(resp_size, &parameters->response_length); + + sg_copy_to_buffer(job->request_payload.sg_list, + job->reply_payload.sg_cnt, &parameters->request, + req_size); + + return smp_buf; +} + +static unsigned int pqi_build_sas_smp_handler_reply( + struct bmic_csmi_smp_passthru_buffer *smp_buf, struct bsg_job *job, + struct pqi_raid_error_info *error_info) +{ + sg_copy_from_buffer(job->reply_payload.sg_list, + job->reply_payload.sg_cnt, &smp_buf->parameters.response, + le32_to_cpu(smp_buf->parameters.response_length)); + + job->reply_len = le16_to_cpu(error_info->sense_data_length); + memcpy(job->reply, error_info->data, + le16_to_cpu(error_info->sense_data_length)); + + return job->reply_payload.payload_len - + get_unaligned_le32(&error_info->data_in_transferred); +} + +void pqi_sas_smp_handler(struct bsg_job *job, struct Scsi_Host *shost, + struct sas_rphy *rphy) +{ + int rc; + struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); + struct bmic_csmi_smp_passthru_buffer *smp_buf; + struct pqi_raid_error_info error_info; + unsigned int reslen = 0; + + pqi_ctrl_busy(ctrl_info); + + if (job->reply_payload.payload_len == 0) { + rc = -ENOMEM; + goto out; + } + + if (!rphy) { + rc = -EINVAL; + goto out; + } + + if (rphy->identify.device_type != SAS_FANOUT_EXPANDER_DEVICE) { + rc = -EINVAL; + goto out; + } + + if (job->request_payload.sg_cnt > 1 || job->reply_payload.sg_cnt > 1) { + rc = -EINVAL; + goto out; + } + + if (pqi_ctrl_offline(ctrl_info)) { + rc = -ENXIO; + goto out; + } + + if (pqi_ctrl_blocked(ctrl_info)) { + rc = -EBUSY; + goto out; + } + + smp_buf = pqi_build_csmi_smp_passthru_buffer(rphy, job); + if (!smp_buf) { + rc = -ENOMEM; + goto out; + } + + rc = pqi_csmi_smp_passthru(ctrl_info, smp_buf, sizeof(*smp_buf), + &error_info); + if (rc) + goto out; + + reslen = pqi_build_sas_smp_handler_reply(smp_buf, job, &error_info); +out: + bsg_job_done(job, rc, reslen); + pqi_ctrl_unbusy(ctrl_info); +} struct sas_function_template pqi_sas_transport_functions = { .get_linkerrors = pqi_sas_get_linkerrors, .get_enclosure_identifier = pqi_sas_get_enclosure_identifier, @@ -338,4 +483,5 @@ struct sas_function_template pqi_sas_transport_functions = { .phy_setup = pqi_sas_phy_setup, .phy_release = pqi_sas_phy_release, .set_phy_speed = pqi_sas_phy_speed, + .smp_handler = pqi_sas_smp_handler, }; diff --git a/drivers/scsi/smartpqi/smartpqi_sis.c b/drivers/scsi/smartpqi/smartpqi_sis.c index
ea91658c7060..dcd11c6418cc 100644 --- a/drivers/scsi/smartpqi/smartpqi_sis.c +++ b/drivers/scsi/smartpqi/smartpqi_sis.c @@ -34,6 +34,7 @@ #define SIS_REENABLE_SIS_MODE 0x1 #define SIS_ENABLE_MSIX 0x40 #define SIS_ENABLE_INTX 0x80 +#define SIS_SOFT_RESET 0x100 #define SIS_CMD_READY 0x200 #define SIS_TRIGGER_SHUTDOWN 0x800000 #define SIS_PQI_RESET_QUIESCE 0x1000000 @@ -59,7 +60,7 @@ #define SIS_CTRL_KERNEL_UP 0x80 #define SIS_CTRL_KERNEL_PANIC 0x100 -#define SIS_CTRL_READY_TIMEOUT_SECS 30 +#define SIS_CTRL_READY_TIMEOUT_SECS 180 #define SIS_CTRL_READY_RESUME_TIMEOUT_SECS 90 #define SIS_CTRL_READY_POLL_INTERVAL_MSECS 10 @@ -90,7 +91,7 @@ static int sis_wait_for_ctrl_ready_with_timeout(struct pqi_ctrl_info *ctrl_info, unsigned long timeout; u32 status; - timeout = (timeout_secs * HZ) + jiffies; + timeout = (timeout_secs * PQI_HZ) + jiffies; while (1) { status = readl(&ctrl_info->registers->sis_firmware_status); @@ -202,7 +203,7 @@ static int sis_send_sync_cmd(struct pqi_ctrl_info *ctrl_info, * the top of the loop in order to give the controller time to start * processing the command before we start polling. */ - timeout = (SIS_CMD_COMPLETE_TIMEOUT_SECS * HZ) + jiffies; + timeout = (SIS_CMD_COMPLETE_TIMEOUT_SECS * PQI_HZ) + jiffies; while (1) { msleep(SIS_CMD_COMPLETE_POLL_INTERVAL_MSECS); doorbell = readl(&registers->sis_ctrl_to_host_doorbell); @@ -348,7 +349,7 @@ static int sis_wait_for_doorbell_bit_to_clear( u32 doorbell_register; unsigned long timeout; - timeout = (SIS_DOORBELL_BIT_CLEAR_TIMEOUT_SECS * HZ) + jiffies; + timeout = (SIS_DOORBELL_BIT_CLEAR_TIMEOUT_SECS * PQI_HZ) + jiffies; while (1) { doorbell_register = @@ -420,6 +421,12 @@ u32 sis_read_driver_scratch(struct pqi_ctrl_info *ctrl_info) return readl(&ctrl_info->registers->sis_driver_scratch); } +void sis_soft_reset(struct pqi_ctrl_info *ctrl_info) +{ + writel(SIS_SOFT_RESET, + &ctrl_info->registers->sis_host_to_ctrl_doorbell); +} + static void __attribute__((unused)) verify_structures(void) { BUILD_BUG_ON(offsetof(struct sis_base_struct, diff --git a/drivers/scsi/smartpqi/smartpqi_sis.h b/drivers/scsi/smartpqi/smartpqi_sis.h index 2bf889dbf5ab..d018cb9c3f82 100644 --- a/drivers/scsi/smartpqi/smartpqi_sis.h +++ b/drivers/scsi/smartpqi/smartpqi_sis.h @@ -33,5 +33,6 @@ int sis_pqi_reset_quiesce(struct pqi_ctrl_info *ctrl_info); int sis_reenable_sis_mode(struct pqi_ctrl_info *ctrl_info); void sis_write_driver_scratch(struct pqi_ctrl_info *ctrl_info, u32 value); u32 sis_read_driver_scratch(struct pqi_ctrl_info *ctrl_info); +void sis_soft_reset(struct pqi_ctrl_info *ctrl_info); #endif /* _SMARTPQI_SIS_H */ diff --git a/drivers/scsi/snic/snic_main.c b/drivers/scsi/snic/snic_main.c index 5295277d6325..5e824fd6047a 100644 --- a/drivers/scsi/snic/snic_main.c +++ b/drivers/scsi/snic/snic_main.c @@ -127,7 +127,6 @@ static struct scsi_host_template snic_host_template = { .this_id = -1, .cmd_per_lun = SNIC_DFLT_QUEUE_DEPTH, .can_queue = SNIC_MAX_IO_REQ, - .use_clustering = ENABLE_CLUSTERING, .sg_tablesize = SNIC_MAX_SG_DESC_CNT, .max_sectors = 0x800, .shost_attrs = snic_attrs, diff --git a/drivers/scsi/snic/snic_trc.c b/drivers/scsi/snic/snic_trc.c index fc60c933d6c0..458eaba24c78 100644 --- a/drivers/scsi/snic/snic_trc.c +++ b/drivers/scsi/snic/snic_trc.c @@ -126,7 +126,7 @@ snic_trc_init(void) int tbuf_sz = 0, ret; tbuf_sz = (snic_trace_max_pages * PAGE_SIZE); - tbuf = vmalloc(tbuf_sz); + tbuf = vzalloc(tbuf_sz); if (!tbuf) { SNIC_ERR("Failed to Allocate Trace Buffer Size.
%d\n", tbuf_sz); SNIC_ERR("Trace Facility not enabled.\n"); @@ -135,7 +135,6 @@ snic_trc_init(void) return ret; } - memset(tbuf, 0, tbuf_sz); trc->buf = (struct snic_trc_data *) tbuf; spin_lock_init(&trc->lock); diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c index 54dd70ae9731..38ddbbfe5f3c 100644 --- a/drivers/scsi/sr.c +++ b/drivers/scsi/sr.c @@ -80,7 +80,7 @@ MODULE_ALIAS_SCSI_DEVICE(TYPE_WORM); static DEFINE_MUTEX(sr_mutex); static int sr_probe(struct device *); static int sr_remove(struct device *); -static int sr_init_command(struct scsi_cmnd *SCpnt); +static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt); static int sr_done(struct scsi_cmnd *); static int sr_runtime_suspend(struct device *dev); @@ -384,22 +384,22 @@ static int sr_done(struct scsi_cmnd *SCpnt) return good_bytes; } -static int sr_init_command(struct scsi_cmnd *SCpnt) +static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt) { int block = 0, this_count, s_size; struct scsi_cd *cd; struct request *rq = SCpnt->request; - int ret; + blk_status_t ret; ret = scsi_init_io(SCpnt); - if (ret != BLKPREP_OK) + if (ret != BLK_STS_OK) goto out; WARN_ON_ONCE(SCpnt != rq->special); cd = scsi_cd(rq->rq_disk); /* from here on until we're complete, any goto out * is used for a killable error condition */ - ret = BLKPREP_KILL; + ret = BLK_STS_IOERR; SCSI_LOG_HLQUEUE(1, scmd_printk(KERN_INFO, SCpnt, "Doing sr request, block = %d\n", block)); @@ -516,7 +516,7 @@ static int sr_init_command(struct scsi_cmnd *SCpnt) * This indicates that the command is ready from our end to be * queued. */ - ret = BLKPREP_OK; + ret = BLK_STS_OK; out: return ret; } diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c index 307df2fa39a3..7ff22d3f03e3 100644 --- a/drivers/scsi/st.c +++ b/drivers/scsi/st.c @@ -530,7 +530,7 @@ static void st_scsi_execute_end(struct request *req, blk_status_t status) complete(SRpnt->waiting); blk_rq_unmap_user(tmp); - __blk_put_request(req->q, req); + blk_put_request(req); } static int st_scsi_execute(struct st_request *SRpnt, const unsigned char *cmd, diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c index 9b20643ab49d..f6bef7ad65e7 100644 --- a/drivers/scsi/stex.c +++ b/drivers/scsi/stex.c @@ -1489,6 +1489,7 @@ static struct scsi_host_template driver_template = { .eh_abort_handler = stex_abort, .eh_host_reset_handler = stex_reset, .this_id = -1, + .dma_boundary = PAGE_SIZE - 1, }; static struct pci_device_id stex_pci_tbl[] = { @@ -1617,19 +1618,6 @@ static struct st_card_info stex_card_info[] = { }, }; -static int stex_set_dma_mask(struct pci_dev * pdev) -{ - int ret; - - if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) - && !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) - return 0; - ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); - if (!ret) - ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); - return ret; -} - static int stex_request_irq(struct st_hba *hba) { struct pci_dev *pdev = hba->pdev; @@ -1710,7 +1698,9 @@ static int stex_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto out_release_regions; } - err = stex_set_dma_mask(pdev); + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (err) + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); if (err) { printk(KERN_ERR DRV_NAME "(%s): set dma mask failed\n", pci_name(pdev)); diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c index 8f88348ebe42..84380bae20f1 100644 --- a/drivers/scsi/storvsc_drv.c +++ b/drivers/scsi/storvsc_drv.c @@ -1698,7 +1698,6 @@ static struct scsi_host_template 
scsi_driver = { .slave_configure = storvsc_device_configure, .cmd_per_lun = 2048, .this_id = -1, - .use_clustering = ENABLE_CLUSTERING, /* Make sure we dont get a sg segment crosses a page boundary */ .dma_boundary = PAGE_SIZE-1, .no_write_same = 1, diff --git a/drivers/scsi/sun3_scsi.c b/drivers/scsi/sun3_scsi.c index 9492638296c8..95a7ea7eefa0 100644 --- a/drivers/scsi/sun3_scsi.c +++ b/drivers/scsi/sun3_scsi.c @@ -500,7 +500,7 @@ static struct scsi_host_template sun3_scsi_template = { .this_id = 7, .sg_tablesize = SG_NONE, .cmd_per_lun = 2, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .cmd_size = NCR5380_CMD_SIZE, }; diff --git a/drivers/scsi/sun_esp.c b/drivers/scsi/sun_esp.c index a11efbcb7f8b..c71bd01fef94 100644 --- a/drivers/scsi/sun_esp.c +++ b/drivers/scsi/sun_esp.c @@ -529,11 +529,10 @@ static int esp_sbus_probe(struct platform_device *op) int hme = 0; int ret; - if (dp->parent && - (!strcmp(dp->parent->name, "espdma") || - !strcmp(dp->parent->name, "dma"))) + if (of_node_name_eq(dp->parent, "espdma") || + of_node_name_eq(dp->parent, "dma")) dma_node = dp->parent; - else if (!strcmp(dp->name, "SUNW,fas")) { + else if (of_node_name_eq(dp, "SUNW,fas")) { dma_node = op->dev.of_node; hme = 1; } diff --git a/drivers/scsi/sym53c8xx_2/sym_glue.c b/drivers/scsi/sym53c8xx_2/sym_glue.c index 5f10aa9bad9b..57f6d63e4c40 100644 --- a/drivers/scsi/sym53c8xx_2/sym_glue.c +++ b/drivers/scsi/sym53c8xx_2/sym_glue.c @@ -1312,9 +1312,9 @@ static struct Scsi_Host *sym_attach(struct scsi_host_template *tpnt, int unit, sprintf(np->s.inst_name, "sym%d", np->s.unit); if ((SYM_CONF_DMA_ADDRESSING_MODE > 0) && (np->features & FE_DAC) && - !pci_set_dma_mask(pdev, DMA_DAC_MASK)) { + !dma_set_mask(&pdev->dev, DMA_DAC_MASK)) { set_dac(np); - } else if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) { + } else if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) { printf_warning("%s: No suitable DMA available\n", sym_name(np)); goto attach_failed; } @@ -1660,7 +1660,6 @@ static struct scsi_host_template sym2_template = { .eh_bus_reset_handler = sym53c8xx_eh_bus_reset_handler, .eh_host_reset_handler = sym53c8xx_eh_host_reset_handler, .this_id = 7, - .use_clustering = ENABLE_CLUSTERING, .max_sectors = 0xFFFF, #ifdef SYM_LINUX_PROC_INFO_SUPPORT .show_info = sym_show_info, diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig index 2ddd426323e9..2ddbb26d9c26 100644 --- a/drivers/scsi/ufs/Kconfig +++ b/drivers/scsi/ufs/Kconfig @@ -80,6 +80,14 @@ config SCSI_UFSHCD_PLATFORM If unsure, say N. +config SCSI_UFS_CDNS_PLATFORM + tristate "Cadence UFS Controller platform driver" + depends on SCSI_UFSHCD_PLATFORM + help + This selects the Cadence-specific additions to UFSHCD platform driver. + + If unsure, say N. 
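# Assumed usage, not part of this patch: CONFIG_SCSI_UFSHCD_PLATFORM=y with CONFIG_SCSI_UFS_CDNS_PLATFORM=m builds the cdns-pltfrm.o object added in the Makefile hunk below.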
+ config SCSI_UFS_DWC_TC_PLATFORM tristate "DesignWare platform support using a G210 Test Chip" depends on SCSI_UFSHCD_PLATFORM diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile index aca481329828..a3bd70c3652c 100644 --- a/drivers/scsi/ufs/Makefile +++ b/drivers/scsi/ufs/Makefile @@ -2,6 +2,7 @@ # UFSHCD makefile obj-$(CONFIG_SCSI_UFS_DWC_TC_PCI) += tc-dwc-g210-pci.o ufshcd-dwc.o tc-dwc-g210.o obj-$(CONFIG_SCSI_UFS_DWC_TC_PLATFORM) += tc-dwc-g210-pltfrm.o ufshcd-dwc.o tc-dwc-g210.o +obj-$(CONFIG_SCSI_UFS_CDNS_PLATFORM) += cdns-pltfrm.o obj-$(CONFIG_SCSI_UFS_QCOM) += ufs-qcom.o obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o ufshcd-core-y += ufshcd.o ufs-sysfs.o diff --git a/drivers/scsi/ufs/cdns-pltfrm.c b/drivers/scsi/ufs/cdns-pltfrm.c new file mode 100644 index 000000000000..4a37b4f57164 --- /dev/null +++ b/drivers/scsi/ufs/cdns-pltfrm.c @@ -0,0 +1,148 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Platform UFS Host driver for Cadence controller + * + * Copyright (C) 2018 Cadence Design Systems, Inc. + * + * Authors: + * Jan Kotas <jank@cadence.com> + * + */ + +#include <linux/clk.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/platform_device.h> +#include <linux/time.h> + +#include "ufshcd-pltfrm.h" + +#define CDNS_UFS_REG_HCLKDIV 0xFC + +/** + * Sets HCLKDIV register value based on the core_clk + * @hba: host controller instance + * + * Return zero for success and non-zero for failure + */ +static int cdns_ufs_set_hclkdiv(struct ufs_hba *hba) +{ + struct ufs_clk_info *clki; + struct list_head *head = &hba->clk_list_head; + unsigned long core_clk_rate = 0; + u32 core_clk_div = 0; + + if (list_empty(head)) + return 0; + + list_for_each_entry(clki, head, list) { + if (IS_ERR_OR_NULL(clki->clk)) + continue; + if (!strcmp(clki->name, "core_clk")) + core_clk_rate = clk_get_rate(clki->clk); + } + + if (!core_clk_rate) { + dev_err(hba->dev, "%s: unable to find core_clk rate\n", + __func__); + return -EINVAL; + } + + core_clk_div = core_clk_rate / USEC_PER_SEC; + + ufshcd_writel(hba, core_clk_div, CDNS_UFS_REG_HCLKDIV); + /** + * Make sure the register was updated, + * UniPro layer will not work with an incorrect value.
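+	 * (the mb() below forces the posted register write to complete before link bring-up continues)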
+ */ + mb(); + + return 0; +} + +/** + * Sets clocks used by the controller + * @hba: host controller instance + * @on: if true, enable clocks, otherwise disable + * @status: notify stage (pre, post change) + * + * Return zero for success and non-zero for failure + */ +static int cdns_ufs_setup_clocks(struct ufs_hba *hba, bool on, + enum ufs_notify_change_status status) +{ + if ((!on) || (status == PRE_CHANGE)) + return 0; + + return cdns_ufs_set_hclkdiv(hba); +} + +static struct ufs_hba_variant_ops cdns_pltfm_hba_vops = { + .name = "cdns-ufs-pltfm", + .setup_clocks = cdns_ufs_setup_clocks, +}; + +/** + * cdns_ufs_pltfrm_probe - probe routine of the driver + * @pdev: pointer to platform device handle + * + * Return zero for success and non-zero for failure + */ +static int cdns_ufs_pltfrm_probe(struct platform_device *pdev) +{ + int err; + struct device *dev = &pdev->dev; + + /* Perform generic probe */ + err = ufshcd_pltfrm_init(pdev, &cdns_pltfm_hba_vops); + if (err) + dev_err(dev, "ufshcd_pltfrm_init() failed %d\n", err); + + return err; +} + +/** + * cdns_ufs_pltfrm_remove - removes the ufs driver + * @pdev: pointer to platform device handle + * + * Always returns 0 + */ +static int cdns_ufs_pltfrm_remove(struct platform_device *pdev) +{ + struct ufs_hba *hba = platform_get_drvdata(pdev); + + ufshcd_remove(hba); + return 0; +} + +static const struct of_device_id cdns_ufs_of_match[] = { + { .compatible = "cdns,ufshc" }, + {}, +}; + +MODULE_DEVICE_TABLE(of, cdns_ufs_of_match); + +static const struct dev_pm_ops cdns_ufs_dev_pm_ops = { + .suspend = ufshcd_pltfrm_suspend, + .resume = ufshcd_pltfrm_resume, + .runtime_suspend = ufshcd_pltfrm_runtime_suspend, + .runtime_resume = ufshcd_pltfrm_runtime_resume, + .runtime_idle = ufshcd_pltfrm_runtime_idle, +}; + +static struct platform_driver cdns_ufs_pltfrm_driver = { + .probe = cdns_ufs_pltfrm_probe, + .remove = cdns_ufs_pltfrm_remove, + .driver = { + .name = "cdns-ufshcd", + .pm = &cdns_ufs_dev_pm_ops, + .of_match_table = cdns_ufs_of_match, + }, +}; + +module_platform_driver(cdns_ufs_pltfrm_driver); + +MODULE_AUTHOR("Jan Kotas "); +MODULE_DESCRIPTION("Cadence UFS host controller platform driver"); +MODULE_LICENSE("GPL v2"); +MODULE_VERSION(UFSHCD_DRIVER_VERSION); diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h index 58087d3916d0..dd65fea07687 100644 --- a/drivers/scsi/ufs/ufs.h +++ b/drivers/scsi/ufs/ufs.h @@ -46,7 +46,7 @@ #define QUERY_DESC_HDR_SIZE 2 #define QUERY_OSF_SIZE (GENERAL_UPIU_REQUEST_SIZE - \ (sizeof(struct utp_upiu_header))) -#define RESPONSE_UPIU_SENSE_DATA_LENGTH 18 +#define UFS_SENSE_SIZE 18 #define UPIU_HEADER_DWORD(byte3, byte2, byte1, byte0)\ cpu_to_be32((byte3 << 24) | (byte2 << 16) |\ @@ -378,6 +378,20 @@ enum query_opcode { UPIU_QUERY_OPCODE_TOGGLE_FLAG = 0x8, }; +/* bRefClkFreq attribute values */ +enum ufs_ref_clk_freq { + REF_CLK_FREQ_19_2_MHZ = 0, + REF_CLK_FREQ_26_MHZ = 1, + REF_CLK_FREQ_38_4_MHZ = 2, + REF_CLK_FREQ_52_MHZ = 3, + REF_CLK_FREQ_INVAL = -1, +}; + +struct ufs_ref_clk { + unsigned long freq_hz; + enum ufs_ref_clk_freq val; +}; + /* Query response result code */ enum { QUERY_RESULT_SUCCESS = 0x00, @@ -444,7 +458,7 @@ struct utp_cmd_rsp { __be32 residual_transfer_count; __be32 reserved[4]; __be16 sense_data_len; - u8 sense_data[RESPONSE_UPIU_SENSE_DATA_LENGTH]; + u8 sense_data[UFS_SENSE_SIZE]; }; /** diff --git a/drivers/scsi/ufs/ufs_bsg.c b/drivers/scsi/ufs/ufs_bsg.c index e5f8e54bf644..775bb4e5e36e 100644 --- a/drivers/scsi/ufs/ufs_bsg.c +++ b/drivers/scsi/ufs/ufs_bsg.c @@ -157,7 +157,7 @@ 
void ufs_bsg_remove(struct ufs_hba *hba) if (!hba->bsg_queue) return; - bsg_unregister_queue(hba->bsg_queue); + bsg_remove_queue(hba->bsg_queue); device_del(bsg_dev); put_device(bsg_dev); @@ -193,7 +193,7 @@ int ufs_bsg_probe(struct ufs_hba *hba) if (ret) goto out; - q = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), ufs_bsg_request, 0); + q = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), ufs_bsg_request, NULL, 0); if (IS_ERR(q)) { ret = PTR_ERR(q); goto out; diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index f1c57cd33b5b..9ba7671b84f8 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -51,8 +51,6 @@ #define CREATE_TRACE_POINTS #include -#define UFSHCD_REQ_SENSE_SIZE 18 - #define UFSHCD_ENABLE_INTRS (UTP_TRANSFER_REQ_COMPL |\ UTP_TASK_REQ_COMPL |\ UFSHCD_ERROR_MASK) @@ -1551,6 +1549,7 @@ start: * currently running. Hence, fall through to cancel gating * work and to enable clocks. */ + /* fallthrough */ case CLKS_OFF: ufshcd_scsi_block_requests(hba); hba->clk_gating.state = REQ_CLKS_ON; @@ -1562,6 +1561,7 @@ start: * fall through to check if we should wait for this * work to be done or not. */ + /* fallthrough */ case REQ_CLKS_ON: if (async) { rc = -EAGAIN; @@ -1890,11 +1890,10 @@ static inline void ufshcd_copy_sense_data(struct ufshcd_lrb *lrbp) int len_to_copy; len = be16_to_cpu(lrbp->ucd_rsp_ptr->sr.sense_data_len); - len_to_copy = min_t(int, RESPONSE_UPIU_SENSE_DATA_LENGTH, len); + len_to_copy = min_t(int, UFS_SENSE_SIZE, len); - memcpy(lrbp->sense_buffer, - lrbp->ucd_rsp_ptr->sr.sense_data, - min_t(int, len_to_copy, UFSHCD_REQ_SENSE_SIZE)); + memcpy(lrbp->sense_buffer, lrbp->ucd_rsp_ptr->sr.sense_data, + len_to_copy); } } @@ -2456,7 +2455,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) WARN_ON(lrbp->cmd); lrbp->cmd = cmd; - lrbp->sense_bufflen = UFSHCD_REQ_SENSE_SIZE; + lrbp->sense_bufflen = UFS_SENSE_SIZE; lrbp->sense_buffer = cmd->sense_buffer; lrbp->task_tag = tag; lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun); @@ -4620,6 +4619,7 @@ ufshcd_scsi_cmd_status(struct ufshcd_lrb *lrbp, int scsi_status) switch (scsi_status) { case SAM_STAT_CHECK_CONDITION: ufshcd_copy_sense_data(lrbp); + /* fallthrough */ case SAM_STAT_GOOD: result |= DID_OK << 16 | COMMAND_COMPLETE << 8 | @@ -6701,6 +6701,74 @@ static void ufshcd_def_desc_sizes(struct ufs_hba *hba) hba->desc_size.hlth_desc = QUERY_DESC_HEALTH_DEF_SIZE; } +static struct ufs_ref_clk ufs_ref_clk_freqs[] = { + {19200000, REF_CLK_FREQ_19_2_MHZ}, + {26000000, REF_CLK_FREQ_26_MHZ}, + {38400000, REF_CLK_FREQ_38_4_MHZ}, + {52000000, REF_CLK_FREQ_52_MHZ}, + {0, REF_CLK_FREQ_INVAL}, +}; + +static enum ufs_ref_clk_freq +ufs_get_bref_clk_from_hz(unsigned long freq) +{ + int i; + + for (i = 0; ufs_ref_clk_freqs[i].freq_hz; i++) + if (ufs_ref_clk_freqs[i].freq_hz == freq) + return ufs_ref_clk_freqs[i].val; + + return REF_CLK_FREQ_INVAL; +} + +void ufshcd_parse_dev_ref_clk_freq(struct ufs_hba *hba, struct clk *refclk) +{ + unsigned long freq; + + freq = clk_get_rate(refclk); + + hba->dev_ref_clk_freq = + ufs_get_bref_clk_from_hz(freq); + + if (hba->dev_ref_clk_freq == REF_CLK_FREQ_INVAL) + dev_err(hba->dev, + "invalid ref_clk setting = %ld\n", freq); +} + +static int ufshcd_set_dev_ref_clk(struct ufs_hba *hba) +{ + int err; + u32 ref_clk; + u32 freq = hba->dev_ref_clk_freq; + + err = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_READ_ATTR, + QUERY_ATTR_IDN_REF_CLK_FREQ, 0, 0, &ref_clk); + + if (err) { + dev_err(hba->dev, "failed reading bRefClkFreq. 
err = %d\n", + err); + goto out; + } + + if (ref_clk == freq) + goto out; /* nothing to update */ + + err = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_WRITE_ATTR, + QUERY_ATTR_IDN_REF_CLK_FREQ, 0, 0, &freq); + + if (err) { + dev_err(hba->dev, "bRefClkFreq setting to %lu Hz failed\n", + ufs_ref_clk_freqs[freq].freq_hz); + goto out; + } + + dev_dbg(hba->dev, "bRefClkFreq setting to %lu Hz succeeded\n", + ufs_ref_clk_freqs[freq].freq_hz); + +out: + return err; +} + /** * ufshcd_probe_hba - probe hba to detect device and initialize * @hba: per-adapter instance @@ -6766,6 +6834,12 @@ static int ufshcd_probe_hba(struct ufs_hba *hba) "%s: Failed getting max supported power mode\n", __func__); } else { + /* + * Set the right value to bRefClkFreq before attempting to + * switch to HS gears. + */ + if (hba->dev_ref_clk_freq != REF_CLK_FREQ_INVAL) + ufshcd_set_dev_ref_clk(hba); ret = ufshcd_config_pwr_mode(hba, &hba->max_pwr_info.info); if (ret) { dev_err(hba->dev, "%s: Failed setting power mode, err = %d\n", @@ -6910,6 +6984,7 @@ static struct scsi_host_template ufshcd_driver_template = { .max_host_blocked = 1, .track_queue_depth = 1, .sdev_groups = ufshcd_driver_groups, + .dma_boundary = PAGE_SIZE - 1, }; static int ufshcd_config_vreg_load(struct device *dev, struct ufs_vreg *vreg, @@ -7252,6 +7327,14 @@ static int ufshcd_init_clocks(struct ufs_hba *hba) goto out; } + /* + * Parse device ref clk freq as per device tree "ref_clk". + * Default dev_ref_clk_freq is set to REF_CLK_FREQ_INVAL + * in ufshcd_alloc_host(). + */ + if (!strcmp(clki->name, "ref_clk")) + ufshcd_parse_dev_ref_clk_freq(hba, clki->clk); + if (clki->max_freq) { ret = clk_set_rate(clki->clk, clki->max_freq); if (ret) { @@ -7379,19 +7462,19 @@ ufshcd_send_request_sense(struct ufs_hba *hba, struct scsi_device *sdp) 0, 0, 0, - UFSHCD_REQ_SENSE_SIZE, + UFS_SENSE_SIZE, 0}; char *buffer; int ret; - buffer = kzalloc(UFSHCD_REQ_SENSE_SIZE, GFP_KERNEL); + buffer = kzalloc(UFS_SENSE_SIZE, GFP_KERNEL); if (!buffer) { ret = -ENOMEM; goto out; } ret = scsi_execute(sdp, cmd, DMA_FROM_DEVICE, buffer, - UFSHCD_REQ_SENSE_SIZE, NULL, NULL, + UFS_SENSE_SIZE, NULL, NULL, msecs_to_jiffies(1000), 3, 0, RQF_PM, NULL); if (ret) pr_err("%s: failed with err %d\n", __func__, ret); @@ -8105,6 +8188,7 @@ int ufshcd_alloc_host(struct device *dev, struct ufs_hba **hba_handle) hba->host = host; hba->dev = dev; *hba_handle = hba; + hba->dev_ref_clk_freq = REF_CLK_FREQ_INVAL; INIT_LIST_HEAD(&hba->clk_list_head); diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 1a1c2b487a4e..69ba7445d2b3 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -550,6 +550,7 @@ struct ufs_hba { void *priv; unsigned int irq; bool is_irq_enabled; + enum ufs_ref_clk_freq dev_ref_clk_freq; /* Interrupt aggregation support is broken */ #define UFSHCD_QUIRK_BROKEN_INTR_AGGR 0x1 @@ -768,6 +769,7 @@ void ufshcd_remove(struct ufs_hba *); int ufshcd_wait_for_register(struct ufs_hba *hba, u32 reg, u32 mask, u32 val, unsigned long interval_us, unsigned long timeout_ms, bool can_sleep); +void ufshcd_parse_dev_ref_clk_freq(struct ufs_hba *hba, struct clk *refclk); static inline void check_upiu_size(void) { diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c index 1c72db94270e..772b976e4ee4 100644 --- a/drivers/scsi/virtio_scsi.c +++ b/drivers/scsi/virtio_scsi.c @@ -68,33 +68,6 @@ struct virtio_scsi_vq { struct virtqueue *vq; }; -/* - * Per-target queue state. - * - * This struct holds the data needed by the queue steering policy. 
When a - * target is sent multiple requests, we need to drive them to the same queue so - * that FIFO processing order is kept. However, if a target was idle, we can - * choose a queue arbitrarily. In this case the queue is chosen according to - * the current VCPU, so the driver expects the number of request queues to be - * equal to the number of VCPUs. This makes it easy and fast to select the - * queue, and also lets the driver optimize the IRQ affinity for the virtqueues - * (each virtqueue's affinity is set to the CPU that "owns" the queue). - * - * tgt_seq is held to serialize reading and writing req_vq. - * - * Decrements of reqs are never concurrent with writes of req_vq: before the - * decrement reqs will be != 0; after the decrement the virtqueue completion - * routine will not use the req_vq so it can be changed by a new request. - * Thus they can happen outside the tgt_seq, provided of course we make reqs - * an atomic_t. - */ -struct virtio_scsi_target_state { - seqcount_t tgt_seq; - - /* Currently active virtqueue for requests sent to this target. */ - struct virtio_scsi_vq *req_vq; -}; - /* Driver instance state */ struct virtio_scsi { struct virtio_device *vdev; @@ -693,34 +666,12 @@ static int virtscsi_abort(struct scsi_cmnd *sc) return virtscsi_tmf(vscsi, cmd); } -static int virtscsi_target_alloc(struct scsi_target *starget) -{ - struct Scsi_Host *sh = dev_to_shost(starget->dev.parent); - struct virtio_scsi *vscsi = shost_priv(sh); - - struct virtio_scsi_target_state *tgt = - kmalloc(sizeof(*tgt), GFP_KERNEL); - if (!tgt) - return -ENOMEM; - - seqcount_init(&tgt->tgt_seq); - tgt->req_vq = &vscsi->req_vqs[0]; - - starget->hostdata = tgt; - return 0; -} - -static void virtscsi_target_destroy(struct scsi_target *starget) -{ - struct virtio_scsi_target_state *tgt = starget->hostdata; - kfree(tgt); -} - static int virtscsi_map_queues(struct Scsi_Host *shost) { struct virtio_scsi *vscsi = shost_priv(shost); + struct blk_mq_queue_map *qmap = &shost->tag_set.map[0]; - return blk_mq_virtio_map_queues(&shost->tag_set, vscsi->vdev, 2); + return blk_mq_virtio_map_queues(qmap, vscsi->vdev, 2); } /* @@ -747,9 +698,6 @@ static struct scsi_host_template virtscsi_host_template = { .slave_alloc = virtscsi_device_alloc, .dma_boundary = UINT_MAX, - .use_clustering = ENABLE_CLUSTERING, - .target_alloc = virtscsi_target_alloc, - .target_destroy = virtscsi_target_destroy, .map_queues = virtscsi_map_queues, .track_queue_depth = 1, .force_blk_mq = 1, diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c index 0d6b2a88fc8e..ecee4b3ff073 100644 --- a/drivers/scsi/vmw_pvscsi.c +++ b/drivers/scsi/vmw_pvscsi.c @@ -1007,7 +1007,6 @@ static struct scsi_host_template pvscsi_template = { .sg_tablesize = PVSCSI_MAX_NUM_SG_ENTRIES_PER_SEGMENT, .dma_boundary = UINT_MAX, .max_sectors = 0xffff, - .use_clustering = ENABLE_CLUSTERING, .change_queue_depth = pvscsi_change_queue_depth, .eh_abort_handler = pvscsi_abort, .eh_device_reset_handler = pvscsi_device_reset, diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c index 974bfb3f30f4..e3310e9488d2 100644 --- a/drivers/scsi/wd719x.c +++ b/drivers/scsi/wd719x.c @@ -153,8 +153,6 @@ static int wd719x_direct_cmd(struct wd719x *wd, u8 opcode, u8 dev, u8 lun, static void wd719x_destroy(struct wd719x *wd) { - struct wd719x_scb *scb; - /* stop the RISC */ if (wd719x_direct_cmd(wd, WD719X_CMD_SLEEP, 0, 0, 0, 0, WD719X_WAIT_FOR_RISC)) @@ -162,37 +160,35 @@ static void wd719x_destroy(struct wd719x *wd) /* disable RISC */ wd719x_writeb(wd, 
WD719X_PCI_MODE_SELECT, 0); - /* free all SCBs */ - list_for_each_entry(scb, &wd->active_scbs, list) - pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb, - scb->phys); - list_for_each_entry(scb, &wd->free_scbs, list) - pci_free_consistent(wd->pdev, sizeof(struct wd719x_scb), scb, - scb->phys); + WARN_ON_ONCE(!list_empty(&wd->active_scbs)); + /* free internal buffers */ - pci_free_consistent(wd->pdev, wd->fw_size, wd->fw_virt, wd->fw_phys); + dma_free_coherent(&wd->pdev->dev, wd->fw_size, wd->fw_virt, + wd->fw_phys); wd->fw_virt = NULL; - pci_free_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE, wd->hash_virt, - wd->hash_phys); + dma_free_coherent(&wd->pdev->dev, WD719X_HASH_TABLE_SIZE, wd->hash_virt, + wd->hash_phys); wd->hash_virt = NULL; - pci_free_consistent(wd->pdev, sizeof(struct wd719x_host_param), - wd->params, wd->params_phys); + dma_free_coherent(&wd->pdev->dev, sizeof(struct wd719x_host_param), + wd->params, wd->params_phys); wd->params = NULL; free_irq(wd->pdev->irq, wd); } -/* finish a SCSI command, mark SCB (if any) as free, unmap buffers */ -static void wd719x_finish_cmd(struct scsi_cmnd *cmd, int result) +/* finish a SCSI command, unmap buffers */ +static void wd719x_finish_cmd(struct wd719x_scb *scb, int result) { + struct scsi_cmnd *cmd = scb->cmd; struct wd719x *wd = shost_priv(cmd->device->host); - struct wd719x_scb *scb = (struct wd719x_scb *) cmd->host_scribble; - if (scb) { - list_move(&scb->list, &wd->free_scbs); - dma_unmap_single(&wd->pdev->dev, cmd->SCp.dma_handle, - SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE); - scsi_dma_unmap(cmd); - } + list_del(&scb->list); + + dma_unmap_single(&wd->pdev->dev, scb->phys, + sizeof(struct wd719x_scb), DMA_BIDIRECTIONAL); + scsi_dma_unmap(cmd); + dma_unmap_single(&wd->pdev->dev, cmd->SCp.dma_handle, + SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE); + cmd->result = result << 16; cmd->scsi_done(cmd); } @@ -202,36 +198,10 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) { int i, count_sg; unsigned long flags; - struct wd719x_scb *scb; + struct wd719x_scb *scb = scsi_cmd_priv(cmd); struct wd719x *wd = shost_priv(sh); - dma_addr_t phys; - cmd->host_scribble = NULL; - - /* get a free SCB - either from existing ones or allocate a new one */ - spin_lock_irqsave(wd->sh->host_lock, flags); - scb = list_first_entry_or_null(&wd->free_scbs, struct wd719x_scb, list); - if (scb) { - list_del(&scb->list); - phys = scb->phys; - } else { - spin_unlock_irqrestore(wd->sh->host_lock, flags); - scb = pci_alloc_consistent(wd->pdev, sizeof(struct wd719x_scb), - &phys); - spin_lock_irqsave(wd->sh->host_lock, flags); - if (!scb) { - dev_err(&wd->pdev->dev, "unable to allocate SCB\n"); - wd719x_finish_cmd(cmd, DID_ERROR); - spin_unlock_irqrestore(wd->sh->host_lock, flags); - return 0; - } - } - memset(scb, 0, sizeof(struct wd719x_scb)); - list_add(&scb->list, &wd->active_scbs); - - scb->phys = phys; scb->cmd = cmd; - cmd->host_scribble = (char *) scb; scb->CDB_tag = 0; /* Tagged queueing not supported yet */ scb->devid = cmd->device->id; @@ -240,10 +210,19 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) /* copy the command */ memcpy(scb->CDB, cmd->cmnd, cmd->cmd_len); + /* map SCB */ + scb->phys = dma_map_single(&wd->pdev->dev, scb, sizeof(*scb), + DMA_BIDIRECTIONAL); + + if (dma_mapping_error(&wd->pdev->dev, scb->phys)) + goto out_error; + /* map sense buffer */ scb->sense_buf_length = SCSI_SENSE_BUFFERSIZE; cmd->SCp.dma_handle = dma_map_single(&wd->pdev->dev, cmd->sense_buffer, 
SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE); + if (dma_mapping_error(&wd->pdev->dev, cmd->SCp.dma_handle)) + goto out_unmap_scb; scb->sense_buf = cpu_to_le32(cmd->SCp.dma_handle); /* request autosense */ @@ -258,11 +237,8 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) /* Scather/gather */ count_sg = scsi_dma_map(cmd); - if (count_sg < 0) { - wd719x_finish_cmd(cmd, DID_ERROR); - spin_unlock_irqrestore(wd->sh->host_lock, flags); - return 0; - } + if (count_sg < 0) + goto out_unmap_sense; BUG_ON(count_sg > WD719X_SG); if (count_sg) { @@ -283,19 +259,33 @@ static int wd719x_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *cmd) scb->data_p = 0; } + spin_lock_irqsave(wd->sh->host_lock, flags); + /* check if the Command register is free */ if (wd719x_readb(wd, WD719X_AMR_COMMAND) != WD719X_CMD_READY) { spin_unlock_irqrestore(wd->sh->host_lock, flags); return SCSI_MLQUEUE_HOST_BUSY; } + list_add(&scb->list, &wd->active_scbs); + /* write pointer to the AMR */ wd719x_writel(wd, WD719X_AMR_SCB_IN, scb->phys); /* send SCB opcode */ wd719x_writeb(wd, WD719X_AMR_COMMAND, WD719X_CMD_PROCESS_SCB); spin_unlock_irqrestore(wd->sh->host_lock, flags); + return 0; +out_unmap_sense: + dma_unmap_single(&wd->pdev->dev, cmd->SCp.dma_handle, + SCSI_SENSE_BUFFERSIZE, DMA_FROM_DEVICE); +out_unmap_scb: + dma_unmap_single(&wd->pdev->dev, scb->phys, sizeof(*scb), + DMA_BIDIRECTIONAL); +out_error: + cmd->result = DID_ERROR << 16; + cmd->scsi_done(cmd); return 0; } @@ -327,8 +317,8 @@ static int wd719x_chip_init(struct wd719x *wd) wd->fw_size = ALIGN(fw_wcs->size, 4) + fw_risc->size; if (!wd->fw_virt) - wd->fw_virt = pci_alloc_consistent(wd->pdev, wd->fw_size, - &wd->fw_phys); + wd->fw_virt = dma_alloc_coherent(&wd->pdev->dev, wd->fw_size, + &wd->fw_phys, GFP_KERNEL); if (!wd->fw_virt) { ret = -ENOMEM; goto wd719x_init_end; @@ -464,7 +454,7 @@ static int wd719x_abort(struct scsi_cmnd *cmd) { int action, result; unsigned long flags; - struct wd719x_scb *scb = (struct wd719x_scb *)cmd->host_scribble; + struct wd719x_scb *scb = scsi_cmd_priv(cmd); struct wd719x *wd = shost_priv(cmd->device->host); dev_info(&wd->pdev->dev, "abort command, tag: %x\n", cmd->tag); @@ -526,10 +516,8 @@ static int wd719x_host_reset(struct scsi_cmnd *cmd) result = FAILED; /* flush all SCBs */ - list_for_each_entry_safe(scb, tmp, &wd->active_scbs, list) { - struct scsi_cmnd *tmp_cmd = scb->cmd; - wd719x_finish_cmd(tmp_cmd, result); - } + list_for_each_entry_safe(scb, tmp, &wd->active_scbs, list) + wd719x_finish_cmd(scb, result); spin_unlock_irqrestore(wd->sh->host_lock, flags); return result; @@ -555,7 +543,6 @@ static inline void wd719x_interrupt_SCB(struct wd719x *wd, union wd719x_regs regs, struct wd719x_scb *scb) { - struct scsi_cmnd *cmd; int result; /* now have to find result from card */ @@ -643,9 +630,8 @@ static inline void wd719x_interrupt_SCB(struct wd719x *wd, result = DID_ERROR; break; } - cmd = scb->cmd; - wd719x_finish_cmd(cmd, result); + wd719x_finish_cmd(scb, result); } static irqreturn_t wd719x_interrupt(int irq, void *dev_id) @@ -809,7 +795,6 @@ static int wd719x_board_found(struct Scsi_Host *sh) int ret; INIT_LIST_HEAD(&wd->active_scbs); - INIT_LIST_HEAD(&wd->free_scbs); sh->base = pci_resource_start(wd->pdev, 0); @@ -820,17 +805,18 @@ static int wd719x_board_found(struct Scsi_Host *sh) wd->fw_virt = NULL; /* memory area for host (EEPROM) parameters */ - wd->params = pci_alloc_consistent(wd->pdev, - sizeof(struct wd719x_host_param), - &wd->params_phys); + wd->params = 
dma_alloc_coherent(&wd->pdev->dev, + sizeof(struct wd719x_host_param), + &wd->params_phys, GFP_KERNEL); if (!wd->params) { dev_warn(&wd->pdev->dev, "unable to allocate parameter buffer\n"); return -ENOMEM; } /* memory area for the RISC for hash table of outstanding requests */ - wd->hash_virt = pci_alloc_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE, - &wd->hash_phys); + wd->hash_virt = dma_alloc_coherent(&wd->pdev->dev, + WD719X_HASH_TABLE_SIZE, + &wd->hash_phys, GFP_KERNEL); if (!wd->hash_virt) { dev_warn(&wd->pdev->dev, "unable to allocate hash buffer\n"); ret = -ENOMEM; @@ -862,10 +848,10 @@ static int wd719x_board_found(struct Scsi_Host *sh) fail_free_irq: free_irq(wd->pdev->irq, wd); fail_free_hash: - pci_free_consistent(wd->pdev, WD719X_HASH_TABLE_SIZE, wd->hash_virt, + dma_free_coherent(&wd->pdev->dev, WD719X_HASH_TABLE_SIZE, wd->hash_virt, wd->hash_phys); fail_free_params: - pci_free_consistent(wd->pdev, sizeof(struct wd719x_host_param), + dma_free_coherent(&wd->pdev->dev, sizeof(struct wd719x_host_param), wd->params, wd->params_phys); return ret; @@ -874,6 +860,7 @@ fail_free_params: static struct scsi_host_template wd719x_template = { .module = THIS_MODULE, .name = "Western Digital 719x", + .cmd_size = sizeof(struct wd719x_scb), .queuecommand = wd719x_queuecommand, .eh_abort_handler = wd719x_abort, .eh_device_reset_handler = wd719x_dev_reset, @@ -884,7 +871,6 @@ static struct scsi_host_template wd719x_template = { .can_queue = 255, .this_id = 7, .sg_tablesize = WD719X_SG, - .use_clustering = ENABLE_CLUSTERING, }; static int wd719x_pci_probe(struct pci_dev *pdev, const struct pci_device_id *d) @@ -897,7 +883,7 @@ static int wd719x_pci_probe(struct pci_dev *pdev, const struct pci_device_id *d) if (err) goto fail; - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) { + if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) { dev_warn(&pdev->dev, "Unable to set 32-bit DMA mask\n"); goto disable_device; } diff --git a/drivers/scsi/wd719x.h b/drivers/scsi/wd719x.h index 0455b1633ca7..abaabd419a54 100644 --- a/drivers/scsi/wd719x.h +++ b/drivers/scsi/wd719x.h @@ -74,7 +74,6 @@ struct wd719x { void *hash_virt; /* hash table CPU address */ dma_addr_t hash_phys; /* hash table bus address */ struct list_head active_scbs; - struct list_head free_scbs; }; /* timeout delays in microsecs */ diff --git a/drivers/scsi/xen-scsifront.c b/drivers/scsi/xen-scsifront.c index 61389bdc7926..f0068e96a177 100644 --- a/drivers/scsi/xen-scsifront.c +++ b/drivers/scsi/xen-scsifront.c @@ -696,7 +696,6 @@ static struct scsi_host_template scsifront_sht = { .this_id = -1, .cmd_size = sizeof(struct vscsifrnt_shadow), .sg_tablesize = VSCSIIF_SG_TABLESIZE, - .use_clustering = DISABLE_CLUSTERING, .proc_name = "scsifront", }; @@ -1112,7 +1111,7 @@ static void scsifront_backend_changed(struct xenbus_device *dev, case XenbusStateClosed: if (dev->state == XenbusStateClosed) break; - /* Missed the backend's Closing state -- fallthrough */ + /* fall through - Missed the backend's Closing state */ case XenbusStateClosing: scsifront_disconnect(info); break; diff --git a/drivers/slimbus/Kconfig b/drivers/slimbus/Kconfig index 9d73ad806698..8cd595148d17 100644 --- a/drivers/slimbus/Kconfig +++ b/drivers/slimbus/Kconfig @@ -22,8 +22,9 @@ config SLIM_QCOM_CTRL config SLIM_QCOM_NGD_CTRL tristate "Qualcomm SLIMbus Satellite Non-Generic Device Component" - depends on QCOM_QMI_HELPERS - depends on HAS_IOMEM && DMA_ENGINE + depends on HAS_IOMEM && DMA_ENGINE && NET + depends on ARCH_QCOM || COMPILE_TEST + select QCOM_QMI_HELPERS help Select 
driver if Qualcomm's SLIMbus Satellite Non-Generic Device Component is programmed using Linux kernel. diff --git a/drivers/slimbus/qcom-ctrl.c b/drivers/slimbus/qcom-ctrl.c index db1f5135846a..ad3e2e23f56e 100644 --- a/drivers/slimbus/qcom-ctrl.c +++ b/drivers/slimbus/qcom-ctrl.c @@ -654,8 +654,7 @@ static int qcom_slim_remove(struct platform_device *pdev) #ifdef CONFIG_PM static int qcom_slim_runtime_suspend(struct device *device) { - struct platform_device *pdev = to_platform_device(device); - struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); + struct qcom_slim_ctrl *ctrl = dev_get_drvdata(device); int ret; dev_dbg(device, "pm_runtime: suspending...\n"); @@ -672,8 +671,7 @@ static int qcom_slim_runtime_suspend(struct device *device) static int qcom_slim_runtime_resume(struct device *device) { - struct platform_device *pdev = to_platform_device(device); - struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); + struct qcom_slim_ctrl *ctrl = dev_get_drvdata(device); int ret = 0; dev_dbg(device, "pm_runtime: resuming...\n"); diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c index 1382a8df6c75..71f094c9ec68 100644 --- a/drivers/slimbus/qcom-ngd-ctrl.c +++ b/drivers/slimbus/qcom-ngd-ctrl.c @@ -787,7 +787,7 @@ static int qcom_slim_ngd_xfer_msg(struct slim_controller *sctrl, if (txn->msg->num_bytes > SLIM_MSGQ_BUF_LEN || txn->rl > SLIM_MSGQ_BUF_LEN) { - dev_err(ctrl->dev, "msg exeeds HW limit\n"); + dev_err(ctrl->dev, "msg exceeds HW limit\n"); return -EINVAL; } @@ -1327,11 +1327,12 @@ static int of_qcom_slim_ngd_register(struct device *parent, { const struct ngd_reg_offset_data *data; struct qcom_slim_ngd *ngd; + const struct of_device_id *match; struct device_node *node; u32 id; - data = of_match_node(qcom_slim_ngd_dt_match, parent->of_node)->data; - + match = of_match_node(qcom_slim_ngd_dt_match, parent->of_node); + data = match->data; for_each_available_child_of_node(parent->of_node, node) { if (of_property_read_u32(node, "reg", &id)) continue; diff --git a/drivers/soc/fsl/dpio/dpio-service.c b/drivers/soc/fsl/dpio/dpio-service.c index 321a92613a7e..ec0837ff039a 100644 --- a/drivers/soc/fsl/dpio/dpio-service.c +++ b/drivers/soc/fsl/dpio/dpio-service.c @@ -601,3 +601,71 @@ struct dpaa2_dq *dpaa2_io_store_next(struct dpaa2_io_store *s, int *is_last) return ret; } EXPORT_SYMBOL_GPL(dpaa2_io_store_next); + +/** + * dpaa2_io_query_fq_count() - Get the frame and byte count for a given fq. + * @d: the given DPIO object. + * @fqid: the id of frame queue to be queried. + * @fcnt: the queried frame count. + * @bcnt: the queried byte count. + * + * Knowing the FQ count at run-time can be useful in debugging situations. + * The instantaneous frame- and byte-count are hereby returned. + * + * Return 0 for a successful query, and negative error code if query fails. + */ +int dpaa2_io_query_fq_count(struct dpaa2_io *d, u32 fqid, + u32 *fcnt, u32 *bcnt) +{ + struct qbman_fq_query_np_rslt state; + struct qbman_swp *swp; + unsigned long irqflags; + int ret; + + d = service_select(d); + if (!d) + return -ENODEV; + + swp = d->swp; + spin_lock_irqsave(&d->lock_mgmt_cmd, irqflags); + ret = qbman_fq_query_state(swp, fqid, &state); + spin_unlock_irqrestore(&d->lock_mgmt_cmd, irqflags); + if (ret) + return ret; + *fcnt = qbman_fq_state_frame_count(&state); + *bcnt = qbman_fq_state_byte_count(&state); + + return 0; +} +EXPORT_SYMBOL_GPL(dpaa2_io_query_fq_count); + +/** + * dpaa2_io_query_bp_count() - Query the number of buffers currently in a + * buffer pool. 
+ * @d: the given DPIO object. + * @bpid: the index of buffer pool to be queried. + * @num: the queried number of buffers in the buffer pool. + * + * Return 0 for a successful query, and negative error code if query fails. + */ +int dpaa2_io_query_bp_count(struct dpaa2_io *d, u16 bpid, u32 *num) +{ + struct qbman_bp_query_rslt state; + struct qbman_swp *swp; + unsigned long irqflags; + int ret; + + d = service_select(d); + if (!d) + return -ENODEV; + + swp = d->swp; + spin_lock_irqsave(&d->lock_mgmt_cmd, irqflags); + ret = qbman_bp_query(swp, bpid, &state); + spin_unlock_irqrestore(&d->lock_mgmt_cmd, irqflags); + if (ret) + return ret; + *num = qbman_bp_info_num_free_bufs(&state); + return 0; +} +EXPORT_SYMBOL_GPL(dpaa2_io_query_bp_count); diff --git a/drivers/soc/fsl/dpio/qbman-portal.c b/drivers/soc/fsl/dpio/qbman-portal.c index cf1d448ea468..0bddb85c0ae5 100644 --- a/drivers/soc/fsl/dpio/qbman-portal.c +++ b/drivers/soc/fsl/dpio/qbman-portal.c @@ -1003,3 +1003,99 @@ int qbman_swp_CDAN_set(struct qbman_swp *s, u16 channelid, return 0; } + +#define QBMAN_RESPONSE_VERB_MASK 0x7f +#define QBMAN_FQ_QUERY_NP 0x45 +#define QBMAN_BP_QUERY 0x32 + +struct qbman_fq_query_desc { + u8 verb; + u8 reserved[3]; + __le32 fqid; + u8 reserved2[56]; +}; + +int qbman_fq_query_state(struct qbman_swp *s, u32 fqid, + struct qbman_fq_query_np_rslt *r) +{ + struct qbman_fq_query_desc *p; + void *resp; + + p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s); + if (!p) + return -EBUSY; + + /* FQID is a 24 bit value */ + p->fqid = cpu_to_le32(fqid & 0x00FFFFFF); + resp = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP); + if (!resp) { + pr_err("qbman: Query FQID %d NP fields failed, no response\n", + fqid); + return -EIO; + } + *r = *(struct qbman_fq_query_np_rslt *)resp; + /* Decode the outcome */ + WARN_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP); + + /* Determine success or failure */ + if (r->rslt != QBMAN_MC_RSLT_OK) { + pr_err("Query NP fields of FQID 0x%x failed, code=0x%02x\n", + p->fqid, r->rslt); + return -EIO; + } + + return 0; +} + +u32 qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r) +{ + return (le32_to_cpu(r->frm_cnt) & 0x00FFFFFF); +} + +u32 qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r) +{ + return le32_to_cpu(r->byte_cnt); +} + +struct qbman_bp_query_desc { + u8 verb; + u8 reserved; + __le16 bpid; + u8 reserved2[60]; +}; + +int qbman_bp_query(struct qbman_swp *s, u16 bpid, + struct qbman_bp_query_rslt *r) +{ + struct qbman_bp_query_desc *p; + void *resp; + + p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s); + if (!p) + return -EBUSY; + + p->bpid = cpu_to_le16(bpid); + resp = qbman_swp_mc_complete(s, p, QBMAN_BP_QUERY); + if (!resp) { + pr_err("qbman: Query BPID %d fields failed, no response\n", + bpid); + return -EIO; + } + *r = *(struct qbman_bp_query_rslt *)resp; + /* Decode the outcome */ + WARN_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY); + + /* Determine success or failure */ + if (r->rslt != QBMAN_MC_RSLT_OK) { + pr_err("Query fields of BPID 0x%x failed, code=0x%02x\n", + bpid, r->rslt); + return -EIO; + } + + return 0; +} + +u32 qbman_bp_info_num_free_bufs(struct qbman_bp_query_rslt *a) +{ + return le32_to_cpu(a->fill); +} diff --git a/drivers/soc/fsl/dpio/qbman-portal.h b/drivers/soc/fsl/dpio/qbman-portal.h index 89d1dd9969b6..fa35fc1afeaa 100644 --- a/drivers/soc/fsl/dpio/qbman-portal.h +++ b/drivers/soc/fsl/dpio/qbman-portal.h @@ -441,4 +441,62 @@ static inline void *qbman_swp_mc_complete(struct qbman_swp 
*swp, void *cmd, return cmd; } +/* Query APIs */ +struct qbman_fq_query_np_rslt { + u8 verb; + u8 rslt; + u8 st1; + u8 st2; + u8 reserved[2]; + __le16 od1_sfdr; + __le16 od2_sfdr; + __le16 od3_sfdr; + __le16 ra1_sfdr; + __le16 ra2_sfdr; + __le32 pfdr_hptr; + __le32 pfdr_tptr; + __le32 frm_cnt; + __le32 byte_cnt; + __le16 ics_surp; + u8 is; + u8 reserved2[29]; +}; + +int qbman_fq_query_state(struct qbman_swp *s, u32 fqid, + struct qbman_fq_query_np_rslt *r); +u32 qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r); +u32 qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r); + +struct qbman_bp_query_rslt { + u8 verb; + u8 rslt; + u8 reserved[4]; + u8 bdi; + u8 state; + __le32 fill; + __le32 hdotr; + __le16 swdet; + __le16 swdxt; + __le16 hwdet; + __le16 hwdxt; + __le16 swset; + __le16 swsxt; + __le16 vbpid; + __le16 icid; + __le64 bpscn_addr; + __le64 bpscn_ctx; + __le16 hw_targ; + u8 dbe; + u8 reserved2; + u8 sdcnt; + u8 hdcnt; + u8 sscnt; + u8 reserved3[9]; +}; + +int qbman_bp_query(struct qbman_swp *s, u16 bpid, + struct qbman_bp_query_rslt *r); + +u32 qbman_bp_info_num_free_bufs(struct qbman_bp_query_rslt *a); + #endif /* __FSL_QBMAN_PORTAL_H */ diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c index 5ce24718c2fd..52c153cd795a 100644 --- a/drivers/soc/fsl/qbman/qman.c +++ b/drivers/soc/fsl/qbman/qman.c @@ -36,6 +36,8 @@ #define MAX_IRQNAME 16 /* big enough for "QMan portal %d" */ #define QMAN_POLL_LIMIT 32 #define QMAN_PIRQ_DQRR_ITHRESH 12 +#define QMAN_DQRR_IT_MAX 15 +#define QMAN_ITP_MAX 0xFFF #define QMAN_PIRQ_MR_ITHRESH 4 #define QMAN_PIRQ_IPERIOD 100 @@ -727,9 +729,15 @@ static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr) qm_out(portal, QM_REG_DQRR_VDQCR, vdqcr); } -static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh) +static inline int qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh) { + + if (ithresh > QMAN_DQRR_IT_MAX) + return -EINVAL; + qm_out(portal, QM_REG_DQRR_ITR, ithresh); + + return 0; } /* --- MR API --- */ @@ -1012,20 +1020,27 @@ static inline void put_affine_portal(void) static struct workqueue_struct *qm_portal_wq; -void qman_dqrr_set_ithresh(struct qman_portal *portal, u8 ithresh) +int qman_dqrr_set_ithresh(struct qman_portal *portal, u8 ithresh) { + int res; + if (!portal) - return; + return -EINVAL; + + res = qm_dqrr_set_ithresh(&portal->p, ithresh); + if (res) + return res; - qm_dqrr_set_ithresh(&portal->p, ithresh); portal->p.dqrr.ithresh = ithresh; + + return 0; } EXPORT_SYMBOL(qman_dqrr_set_ithresh); void qman_dqrr_get_ithresh(struct qman_portal *portal, u8 *ithresh) { if (portal && ithresh) - *ithresh = portal->p.dqrr.ithresh; + *ithresh = qm_in(&portal->p, QM_REG_DQRR_ITR); } EXPORT_SYMBOL(qman_dqrr_get_ithresh); @@ -1036,10 +1051,14 @@ void qman_portal_get_iperiod(struct qman_portal *portal, u32 *iperiod) } EXPORT_SYMBOL(qman_portal_get_iperiod); -void qman_portal_set_iperiod(struct qman_portal *portal, u32 iperiod) +int qman_portal_set_iperiod(struct qman_portal *portal, u32 iperiod) { - if (portal) - qm_out(&portal->p, QM_REG_ITPR, iperiod); + if (!portal || iperiod > QMAN_ITP_MAX) + return -EINVAL; + + qm_out(&portal->p, QM_REG_ITPR, iperiod); + + return 0; } EXPORT_SYMBOL(qman_portal_set_iperiod); diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c index c5ee97ee7886..fd8d034cfec1 100644 --- a/drivers/soundwire/intel.c +++ b/drivers/soundwire/intel.c @@ -654,14 +654,14 @@ static int intel_pdm_set_sdw_stream(struct snd_soc_dai *dai, 
return cdns_set_sdw_stream(dai, stream, false, direction); } -static struct snd_soc_dai_ops intel_pcm_dai_ops = { +static const struct snd_soc_dai_ops intel_pcm_dai_ops = { .hw_params = intel_hw_params, .hw_free = intel_hw_free, .shutdown = sdw_cdns_shutdown, .set_sdw_stream = intel_pcm_set_sdw_stream, }; -static struct snd_soc_dai_ops intel_pdm_dai_ops = { +static const struct snd_soc_dai_ops intel_pdm_dai_ops = { .hw_params = intel_hw_params, .hw_free = intel_hw_free, .shutdown = sdw_cdns_shutdown, diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c index a880b5c6c6c3..90a8a9f1ac7d 100644 --- a/drivers/staging/android/ashmem.c +++ b/drivers/staging/android/ashmem.c @@ -195,7 +195,7 @@ static int range_alloc(struct ashmem_area *asma, } /** - * range_del() - Deletes and dealloctes an ashmem_range structure + * range_del() - Deletes and deallocates an ashmem_range structure * @range: The associated ashmem_range that has previously been allocated */ static void range_del(struct ashmem_range *range) @@ -521,7 +521,7 @@ static int set_name(struct ashmem_area *asma, void __user *name) * an data abort which would try to access mmap_sem. If another * thread has invoked ashmem_mmap then it will be holding the * semaphore and will be waiting for ashmem_mutex, there by leading to - * deadlock. We'll release the mutex and take the name to a local + * deadlock. We'll release the mutex and take the name to a local * variable that does not need protection and later copy the local * variable to the structure member with lock held. */ diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c index 99073325b0c0..a0802de8c3a1 100644 --- a/drivers/staging/android/ion/ion.c +++ b/drivers/staging/android/ion/ion.c @@ -23,7 +23,6 @@ #include #include #include -#include #include #include #include @@ -95,6 +94,13 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap, goto err1; } + spin_lock(&heap->stat_lock); + heap->num_of_buffers++; + heap->num_of_alloc_bytes += len; + if (heap->num_of_alloc_bytes > heap->alloc_bytes_wm) + heap->alloc_bytes_wm = heap->num_of_alloc_bytes; + spin_unlock(&heap->stat_lock); + INIT_LIST_HEAD(&buffer->attachments); mutex_init(&buffer->lock); mutex_lock(&dev->buffer_lock); @@ -117,6 +123,11 @@ void ion_buffer_destroy(struct ion_buffer *buffer) buffer->heap->ops->unmap_kernel(buffer->heap, buffer); } buffer->heap->ops->free(buffer); + spin_lock(&buffer->heap->stat_lock); + buffer->heap->num_of_buffers--; + buffer->heap->num_of_alloc_bytes -= buffer->size; + spin_unlock(&buffer->heap->stat_lock); + kfree(buffer); } @@ -528,12 +539,15 @@ void ion_device_add_heap(struct ion_heap *heap) { struct ion_device *dev = internal_dev; int ret; + struct dentry *heap_root; + char debug_name[64]; if (!heap->ops->allocate || !heap->ops->free) pr_err("%s: can not add heap with invalid ops struct.\n", __func__); spin_lock_init(&heap->free_lock); + spin_lock_init(&heap->stat_lock); heap->free_list_size = 0; if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) @@ -546,6 +560,33 @@ void ion_device_add_heap(struct ion_heap *heap) } heap->dev = dev; + heap->num_of_buffers = 0; + heap->num_of_alloc_bytes = 0; + heap->alloc_bytes_wm = 0; + + heap_root = debugfs_create_dir(heap->name, dev->debug_root); + debugfs_create_u64("num_of_buffers", + 0444, heap_root, + &heap->num_of_buffers); + debugfs_create_u64("num_of_alloc_bytes", + 0444, + heap_root, + &heap->num_of_alloc_bytes); + debugfs_create_u64("alloc_bytes_wm", + 0444, + heap_root, + 
&heap->alloc_bytes_wm); + + if (heap->shrinker.count_objects && + heap->shrinker.scan_objects) { + snprintf(debug_name, 64, "%s_shrink", heap->name); + debugfs_create_file(debug_name, + 0644, + heap_root, + heap, + &debug_shrink_fops); + } + down_write(&dev->lock); heap->id = heap_id++; /* @@ -555,14 +596,6 @@ void ion_device_add_heap(struct ion_heap *heap) plist_node_init(&heap->node, -heap->id); plist_add(&heap->node, &dev->heaps); - if (heap->shrinker.count_objects && heap->shrinker.scan_objects) { - char debug_name[64]; - - snprintf(debug_name, 64, "%s_shrink", heap->name); - debugfs_create_file(debug_name, 0644, dev->debug_root, - heap, &debug_shrink_fops); - } - dev->heap_cnt++; up_write(&dev->lock); } diff --git a/drivers/staging/android/ion/ion.h b/drivers/staging/android/ion/ion.h index c006fc1e5a16..47b594cf1ac9 100644 --- a/drivers/staging/android/ion/ion.h +++ b/drivers/staging/android/ion/ion.h @@ -157,6 +157,9 @@ struct ion_heap_ops { * @lock: protects the free list * @waitqueue: queue to wait on from deferred free thread * @task: task struct of deferred free thread + * @num_of_buffers the number of currently allocated buffers + * @num_of_alloc_bytes the number of allocated bytes + * @alloc_bytes_wm the number of allocated bytes watermark * * Represents a pool of memory from which buffers can be made. In some * systems the only heap is regular system memory allocated via vmalloc. @@ -177,6 +180,12 @@ struct ion_heap { spinlock_t free_lock; wait_queue_head_t waitqueue; struct task_struct *task; + u64 num_of_buffers; + u64 num_of_alloc_bytes; + u64 alloc_bytes_wm; + + /* protect heap statistics */ + spinlock_t stat_lock; }; /** diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c index 548bb02c0ca6..0383f7548d48 100644 --- a/drivers/staging/android/ion/ion_system_heap.c +++ b/drivers/staging/android/ion/ion_system_heap.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include "ion.h" @@ -110,7 +109,7 @@ static int ion_system_heap_allocate(struct ion_heap *heap, unsigned long size_remaining = PAGE_ALIGN(size); unsigned int max_order = orders[0]; - if (size / PAGE_SIZE > totalram_pages / 2) + if (size / PAGE_SIZE > totalram_pages() / 2) return -ENOMEM; INIT_LIST_HEAD(&pages); diff --git a/drivers/staging/axis-fifo/axis-fifo.c b/drivers/staging/axis-fifo/axis-fifo.c index c18bf31f55b6..805437fa249a 100644 --- a/drivers/staging/axis-fifo/axis-fifo.c +++ b/drivers/staging/axis-fifo/axis-fifo.c @@ -485,7 +485,8 @@ static ssize_t axis_fifo_write(struct file *f, const char __user *buf, ioread32(fifo->base_addr + XLLF_TDFV_OFFSET) >= words_to_write, fifo->write_queue_lock, - (write_timeout >= 0) ? msecs_to_jiffies(write_timeout) : + (write_timeout >= 0) ? 
+ msecs_to_jiffies(write_timeout) : MAX_SCHEDULE_TIMEOUT); spin_unlock_irq(&fifo->write_queue_lock); diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c index c1c6b2b4ab91..5d2fcbfe02af 100644 --- a/drivers/staging/comedi/comedi_fops.c +++ b/drivers/staging/comedi/comedi_fops.c @@ -1209,6 +1209,7 @@ static int check_insn_config_length(struct comedi_insn *insn, break; case INSN_CONFIG_PWM_OUTPUT: case INSN_CONFIG_ANALOG_TRIG: + case INSN_CONFIG_TIMER_1: if (insn->n == 5) return 0; break; @@ -1500,25 +1501,21 @@ out: * data (for reads) to insns[].data pointers */ /* arbitrary limits */ -#define MAX_SAMPLES 256 +#define MIN_SAMPLES 16 +#define MAX_SAMPLES 65536 static int do_insnlist_ioctl(struct comedi_device *dev, struct comedi_insnlist __user *arg, void *file) { struct comedi_insnlist insnlist; struct comedi_insn *insns = NULL; unsigned int *data = NULL; + unsigned int max_n_data_required = MIN_SAMPLES; int i = 0; int ret = 0; if (copy_from_user(&insnlist, arg, sizeof(insnlist))) return -EFAULT; - data = kmalloc_array(MAX_SAMPLES, sizeof(unsigned int), GFP_KERNEL); - if (!data) { - ret = -ENOMEM; - goto error; - } - insns = kcalloc(insnlist.n_insns, sizeof(*insns), GFP_KERNEL); if (!insns) { ret = -ENOMEM; @@ -1532,13 +1529,26 @@ static int do_insnlist_ioctl(struct comedi_device *dev, goto error; } - for (i = 0; i < insnlist.n_insns; i++) { + /* Determine maximum memory needed for all instructions. */ + for (i = 0; i < insnlist.n_insns; ++i) { if (insns[i].n > MAX_SAMPLES) { dev_dbg(dev->class_dev, "number of samples too large\n"); ret = -EINVAL; goto error; } + max_n_data_required = max(max_n_data_required, insns[i].n); + } + + /* Allocate scratch space for all instruction data. */ + data = kmalloc_array(max_n_data_required, sizeof(unsigned int), + GFP_KERNEL); + if (!data) { + ret = -ENOMEM; + goto error; + } + + for (i = 0; i < insnlist.n_insns; ++i) { if (insns[i].insn & INSN_MASK_WRITE) { if (copy_from_user(data, insns[i].data, insns[i].n * sizeof(unsigned int))) { @@ -1592,22 +1602,27 @@ static int do_insn_ioctl(struct comedi_device *dev, { struct comedi_insn insn; unsigned int *data = NULL; + unsigned int n_data = MIN_SAMPLES; int ret = 0; - data = kmalloc_array(MAX_SAMPLES, sizeof(unsigned int), GFP_KERNEL); - if (!data) { - ret = -ENOMEM; - goto error; - } - if (copy_from_user(&insn, arg, sizeof(insn))) { - ret = -EFAULT; - goto error; + return -EFAULT; } + n_data = max(n_data, insn.n); + /* This is where the behavior of insn and insnlist deviate. 
*/ - if (insn.n > MAX_SAMPLES) + if (insn.n > MAX_SAMPLES) { insn.n = MAX_SAMPLES; + n_data = MAX_SAMPLES; + } + + data = kmalloc_array(n_data, sizeof(unsigned int), GFP_KERNEL); + if (!data) { + ret = -ENOMEM; + goto error; + } + if (insn.insn & INSN_MASK_WRITE) { if (copy_from_user(data, insn.data, diff --git a/drivers/staging/comedi/drivers/8255.h b/drivers/staging/comedi/drivers/8255.h index 6cd1339ab83e..ceae3ca52e60 100644 --- a/drivers/staging/comedi/drivers/8255.h +++ b/drivers/staging/comedi/drivers/8255.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * module/8255.h * Header file for 8255 diff --git a/drivers/staging/comedi/drivers/addi_apci_3501.c b/drivers/staging/comedi/drivers/addi_apci_3501.c index a38267928e5e..b4aa588975df 100644 --- a/drivers/staging/comedi/drivers/addi_apci_3501.c +++ b/drivers/staging/comedi/drivers/addi_apci_3501.c @@ -258,8 +258,15 @@ static int apci3501_eeprom_insn_read(struct comedi_device *dev, { struct apci3501_private *devpriv = dev->private; unsigned short addr = CR_CHAN(insn->chanspec); + unsigned int val; + unsigned int i; - data[0] = apci3501_eeprom_readw(devpriv->amcc, 2 * addr); + if (insn->n) { + /* No point reading the same EEPROM location more than once. */ + val = apci3501_eeprom_readw(devpriv->amcc, 2 * addr); + for (i = 0; i < insn->n; i++) + data[i] = val; + } return insn->n; } diff --git a/drivers/staging/comedi/drivers/amplc_dio200.h b/drivers/staging/comedi/drivers/amplc_dio200.h index 4c3e4c37c4c5..1d81393be6df 100644 --- a/drivers/staging/comedi/drivers/amplc_dio200.h +++ b/drivers/staging/comedi/drivers/amplc_dio200.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * comedi/drivers/amplc_dio.h * diff --git a/drivers/staging/comedi/drivers/amplc_pc236.h b/drivers/staging/comedi/drivers/amplc_pc236.h index 4b67090aab54..45e933ee8735 100644 --- a/drivers/staging/comedi/drivers/amplc_pc236.h +++ b/drivers/staging/comedi/drivers/amplc_pc236.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * comedi/drivers/amplc_pc236.h * Header for "amplc_pc236", "amplc_pci236" and "amplc_pc236_common". diff --git a/drivers/staging/comedi/drivers/cb_pcidas.c b/drivers/staging/comedi/drivers/cb_pcidas.c index 8429d57087fd..02ae00c95313 100644 --- a/drivers/staging/comedi/drivers/cb_pcidas.c +++ b/drivers/staging/comedi/drivers/cb_pcidas.c @@ -116,7 +116,7 @@ #define PCIDAS_TRIG_SEL_ANALOG PCIDAS_TRIG_SEL(3) /* ext. 
analog trigger */ #define PCIDAS_TRIG_SEL_MASK PCIDAS_TRIG_SEL(3) /* start trigger mask */ #define PCIDAS_TRIG_POL BIT(2) /* invert trigger (1602 only) */ -#define PCIDAS_TRIG_MODE BIT(3) /* edge/level trigerred (1602 only) */ +#define PCIDAS_TRIG_MODE BIT(3) /* edge/level triggered (1602 only) */ #define PCIDAS_TRIG_EN BIT(4) /* enable external start trigger */ #define PCIDAS_TRIG_BURSTE BIT(5) /* burst mode enable */ #define PCIDAS_TRIG_CLR BIT(7) /* clear external trigger */ diff --git a/drivers/staging/comedi/drivers/cb_pcidas64.c b/drivers/staging/comedi/drivers/cb_pcidas64.c index 631a703b345d..e1774e09a320 100644 --- a/drivers/staging/comedi/drivers/cb_pcidas64.c +++ b/drivers/staging/comedi/drivers/cb_pcidas64.c @@ -3097,8 +3097,10 @@ static int ao_winsn(struct comedi_device *dev, struct comedi_subdevice *s, { const struct pcidas64_board *board = dev->board_ptr; struct pcidas64_private *devpriv = dev->private; - int chan = CR_CHAN(insn->chanspec); - int range = CR_RANGE(insn->chanspec); + unsigned int chan = CR_CHAN(insn->chanspec); + unsigned int range = CR_RANGE(insn->chanspec); + unsigned int val = s->readback[chan]; + unsigned int i; /* do some initializing */ writew(0, devpriv->main_iobase + DAC_CONTROL0_REG); @@ -3108,20 +3110,24 @@ static int ao_winsn(struct comedi_device *dev, struct comedi_subdevice *s, writew(devpriv->dac_control1_bits, devpriv->main_iobase + DAC_CONTROL1_REG); - /* write to channel */ - if (board->layout == LAYOUT_4020) { - writew(data[0] & 0xff, - devpriv->main_iobase + dac_lsb_4020_reg(chan)); - writew((data[0] >> 8) & 0xf, - devpriv->main_iobase + dac_msb_4020_reg(chan)); - } else { - writew(data[0], devpriv->main_iobase + dac_convert_reg(chan)); + for (i = 0; i < insn->n; i++) { + /* write to channel */ + val = data[i]; + if (board->layout == LAYOUT_4020) { + writew(val & 0xff, + devpriv->main_iobase + dac_lsb_4020_reg(chan)); + writew((val >> 8) & 0xf, + devpriv->main_iobase + dac_msb_4020_reg(chan)); + } else { + writew(val, + devpriv->main_iobase + dac_convert_reg(chan)); + } } - /* remember output value */ - s->readback[chan] = data[0]; + /* remember last output value */ + s->readback[chan] = val; - return 1; + return insn->n; } static void set_dac_control0_reg(struct comedi_device *dev, @@ -3762,9 +3768,17 @@ static int eeprom_read_insn(struct comedi_device *dev, struct comedi_subdevice *s, struct comedi_insn *insn, unsigned int *data) { - data[0] = read_eeprom(dev, CR_CHAN(insn->chanspec)); + unsigned int val; + unsigned int i; - return 1; + if (insn->n) { + /* No point reading the same EEPROM location more than once. */ + val = read_eeprom(dev, CR_CHAN(insn->chanspec)); + for (i = 0; i < insn->n; i++) + data[i] = val; + } + + return insn->n; } /* Allocate and initialize the subdevice structures. 
*/ diff --git a/drivers/staging/comedi/drivers/cb_pcidda.c b/drivers/staging/comedi/drivers/cb_pcidda.c index 807a31e75883..1d09dd265ab7 100644 --- a/drivers/staging/comedi/drivers/cb_pcidda.c +++ b/drivers/staging/comedi/drivers/cb_pcidda.c @@ -291,6 +291,7 @@ static int cb_pcidda_ao_insn_write(struct comedi_device *dev, unsigned int channel = CR_CHAN(insn->chanspec); unsigned int range = CR_RANGE(insn->chanspec); unsigned int ctrl; + unsigned int i; if (range != devpriv->ao_range[channel]) cb_pcidda_calibrate(dev, channel, range); @@ -317,7 +318,8 @@ static int cb_pcidda_ao_insn_write(struct comedi_device *dev, outw(ctrl, devpriv->daqio + CB_DDA_DA_CTRL_REG); - outw(data[0], devpriv->daqio + CB_DDA_DA_DATA_REG(channel)); + for (i = 0; i < insn->n; i++) + outw(data[i], devpriv->daqio + CB_DDA_DA_DATA_REG(channel)); return insn->n; } diff --git a/drivers/staging/comedi/drivers/comedi_8254.h b/drivers/staging/comedi/drivers/comedi_8254.h index 7faa2185282e..d8264417e53c 100644 --- a/drivers/staging/comedi/drivers/comedi_8254.h +++ b/drivers/staging/comedi/drivers/comedi_8254.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * comedi_8254.h * Generic 8254 timer/counter support diff --git a/drivers/staging/comedi/drivers/comedi_isadma.h b/drivers/staging/comedi/drivers/comedi_isadma.h index ccef7c9c0a4d..2bd1329d727f 100644 --- a/drivers/staging/comedi/drivers/comedi_isadma.h +++ b/drivers/staging/comedi/drivers/comedi_isadma.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * COMEDI ISA DMA support functions * Copyright (c) 2014 H Hartley Sweeten diff --git a/drivers/staging/comedi/drivers/das08.h b/drivers/staging/comedi/drivers/das08.h index 235d32f7c817..ef65a7e504ee 100644 --- a/drivers/staging/comedi/drivers/das08.h +++ b/drivers/staging/comedi/drivers/das08.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * das08.h * diff --git a/drivers/staging/comedi/drivers/dt9812.c b/drivers/staging/comedi/drivers/dt9812.c index 75cc9e8e5b94..9f165f1cefa5 100644 --- a/drivers/staging/comedi/drivers/dt9812.c +++ b/drivers/staging/comedi/drivers/dt9812.c @@ -40,7 +40,7 @@ #define DT9812_MAX_WRITE_CMD_PIPE_SIZE 32 #define DT9812_MAX_READ_CMD_PIPE_SIZE 32 -/* usb_bulk_msg() timout in milliseconds */ +/* usb_bulk_msg() timeout in milliseconds */ #define DT9812_USB_TIMEOUT 1000 /* diff --git a/drivers/staging/comedi/drivers/mite.h b/drivers/staging/comedi/drivers/mite.h index d5e27ac25df8..c6c056069bb7 100644 --- a/drivers/staging/comedi/drivers/mite.h +++ b/drivers/staging/comedi/drivers/mite.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * module/mite.h * Hardware driver for NI Mite PCI interface chip diff --git a/drivers/staging/comedi/drivers/ni_labpc.h b/drivers/staging/comedi/drivers/ni_labpc.h index f685047efb67..728e901f53cd 100644 --- a/drivers/staging/comedi/drivers/ni_labpc.h +++ b/drivers/staging/comedi/drivers/ni_labpc.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * Header for ni_labpc ISA/PCMCIA/PCI drivers * diff --git a/drivers/staging/comedi/drivers/ni_labpc_common.c b/drivers/staging/comedi/drivers/ni_labpc_common.c index 7fa2d39562db..406952f5521d 100644 --- a/drivers/staging/comedi/drivers/ni_labpc_common.c +++ b/drivers/staging/comedi/drivers/ni_labpc_common.c @@ -906,7 +906,9 @@ static int labpc_ao_insn_write(struct comedi_device 
*dev, { const struct labpc_boardinfo *board = dev->board_ptr; struct labpc_private *devpriv = dev->private; - int channel, range; + unsigned int channel; + unsigned int range; + unsigned int i; unsigned long flags; channel = CR_CHAN(insn->chanspec); @@ -932,9 +934,10 @@ static int labpc_ao_insn_write(struct comedi_device *dev, devpriv->write_byte(dev, devpriv->cmd6, CMD6_REG); } /* send data */ - labpc_ao_write(dev, s, channel, data[0]); + for (i = 0; i < insn->n; i++) + labpc_ao_write(dev, s, channel, data[i]); - return 1; + return insn->n; } /* lowlevel write to eeprom/dac */ diff --git a/drivers/staging/comedi/drivers/ni_stc.h b/drivers/staging/comedi/drivers/ni_stc.h index 6c023b40fb53..35427f8bf8f7 100644 --- a/drivers/staging/comedi/drivers/ni_stc.h +++ b/drivers/staging/comedi/drivers/ni_stc.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * Register descriptions for NI DAQ-STC chip * diff --git a/drivers/staging/comedi/drivers/ni_tio.h b/drivers/staging/comedi/drivers/ni_tio.h index 340d63c74467..1e85d0a53715 100644 --- a/drivers/staging/comedi/drivers/ni_tio.h +++ b/drivers/staging/comedi/drivers/ni_tio.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * Header file for NI general purpose counter support code (ni_tio.c) * diff --git a/drivers/staging/comedi/drivers/ni_tio_internal.h b/drivers/staging/comedi/drivers/ni_tio_internal.h index 652a28990132..20fcd60038cd 100644 --- a/drivers/staging/comedi/drivers/ni_tio_internal.h +++ b/drivers/staging/comedi/drivers/ni_tio_internal.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * Header file for NI general purpose counter support code (ni_tio.c and * ni_tiocmd.c) diff --git a/drivers/staging/comedi/drivers/plx9052.h b/drivers/staging/comedi/drivers/plx9052.h index 7950d1f57db6..8ec5a5f2837d 100644 --- a/drivers/staging/comedi/drivers/plx9052.h +++ b/drivers/staging/comedi/drivers/plx9052.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * Definitions for the PLX-9052 PCI interface chip * diff --git a/drivers/staging/comedi/drivers/plx9080.h b/drivers/staging/comedi/drivers/plx9080.h index 469a9573acdc..aa0eda5a8093 100644 --- a/drivers/staging/comedi/drivers/plx9080.h +++ b/drivers/staging/comedi/drivers/plx9080.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * plx9080.h * diff --git a/drivers/staging/comedi/drivers/s626.h b/drivers/staging/comedi/drivers/s626.h index 4bdc4fba736f..749252b1d26b 100644 --- a/drivers/staging/comedi/drivers/s626.h +++ b/drivers/staging/comedi/drivers/s626.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * comedi/drivers/s626.h * Sensoray s626 Comedi driver, header file diff --git a/drivers/staging/comedi/drivers/tests/ni_routes_test.c b/drivers/staging/comedi/drivers/tests/ni_routes_test.c index a1eda035f270..c6dc18f346e8 100644 --- a/drivers/staging/comedi/drivers/tests/ni_routes_test.c +++ b/drivers/staging/comedi/drivers/tests/ni_routes_test.c @@ -372,7 +372,7 @@ void test_ni_lookup_route_register(void) unittest(ni_lookup_route_register(O(8), O(9), T) == 8, "validate last destination\n"); unittest(ni_lookup_route_register(O(10), O(9), T) == -EINVAL, - "lookup invalid desination\n"); + "lookup invalid destination\n"); unittest(ni_lookup_route_register(rgout0_src0, TRIGGER_LINE(0), T) == -EINVAL, diff 
--git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c index 65cc3d9af972..8e8f57c4f029 100644 --- a/drivers/staging/emxx_udc/emxx_udc.c +++ b/drivers/staging/emxx_udc/emxx_udc.c @@ -56,25 +56,25 @@ static void _nbu2ss_fifo_flush(struct nbu2ss_udc *, struct nbu2ss_ep *); /*===========================================================================*/ /* Global */ -struct nbu2ss_udc udc_controller; +static struct nbu2ss_udc udc_controller; /*-------------------------------------------------------------------------*/ /* Read */ -static inline u32 _nbu2ss_readl(void *address) +static inline u32 _nbu2ss_readl(void __iomem *address) { return __raw_readl(address); } /*-------------------------------------------------------------------------*/ /* Write */ -static inline void _nbu2ss_writel(void *address, u32 udata) +static inline void _nbu2ss_writel(void __iomem *address, u32 udata) { __raw_writel(udata, address); } /*-------------------------------------------------------------------------*/ /* Set Bit */ -static inline void _nbu2ss_bitset(void *address, u32 udata) +static inline void _nbu2ss_bitset(void __iomem *address, u32 udata) { u32 reg_dt = __raw_readl(address) | (udata); @@ -83,7 +83,7 @@ static inline void _nbu2ss_bitset(void *address, u32 udata) /*-------------------------------------------------------------------------*/ /* Clear Bit */ -static inline void _nbu2ss_bitclr(void *address, u32 udata) +static inline void _nbu2ss_bitclr(void __iomem *address, u32 udata) { u32 reg_dt = __raw_readl(address) & ~(udata); @@ -108,20 +108,16 @@ static void _nbu2ss_dump_register(struct nbu2ss_udc *udc) dev_dbg(&udc->dev, "\n-USB REG-\n"); for (i = 0x0 ; i < USB_BASE_SIZE ; i += 16) { - reg_data = _nbu2ss_readl( - (u32 *)IO_ADDRESS(USB_BASE_ADDRESS + i)); + reg_data = _nbu2ss_readl(IO_ADDRESS(USB_BASE_ADDRESS + i)); dev_dbg(&udc->dev, "USB%04x =%08x", i, (int)reg_data); - reg_data = _nbu2ss_readl( - (u32 *)IO_ADDRESS(USB_BASE_ADDRESS + i + 4)); + reg_data = _nbu2ss_readl(IO_ADDRESS(USB_BASE_ADDRESS + i + 4)); dev_dbg(&udc->dev, " %08x", (int)reg_data); - reg_data = _nbu2ss_readl( - (u32 *)IO_ADDRESS(USB_BASE_ADDRESS + i + 8)); + reg_data = _nbu2ss_readl(IO_ADDRESS(USB_BASE_ADDRESS + i + 8)); dev_dbg(&udc->dev, " %08x", (int)reg_data); - reg_data = _nbu2ss_readl( - (u32 *)IO_ADDRESS(USB_BASE_ADDRESS + i + 12)); + reg_data = _nbu2ss_readl(IO_ADDRESS(USB_BASE_ADDRESS + i + 12)); dev_dbg(&udc->dev, " %08x\n", (int)reg_data); } @@ -135,6 +131,7 @@ static void _nbu2ss_ep0_complete(struct usb_ep *_ep, struct usb_request *_req) { u8 recipient; u16 selector; + u16 wIndex; u32 test_mode; struct usb_ctrlrequest *p_ctrl; struct nbu2ss_udc *udc; @@ -149,10 +146,11 @@ static void _nbu2ss_ep0_complete(struct usb_ep *_ep, struct usb_request *_req) /*-------------------------------------------------*/ /* SET_FEATURE */ recipient = (u8)(p_ctrl->bRequestType & USB_RECIP_MASK); - selector = p_ctrl->wValue; + selector = le16_to_cpu(p_ctrl->wValue); if ((recipient == USB_RECIP_DEVICE) && (selector == USB_DEVICE_TEST_MODE)) { - test_mode = (u32)(p_ctrl->wIndex >> 8); + wIndex = le16_to_cpu(p_ctrl->wIndex); + test_mode = (u32)(wIndex >> 8); _nbu2ss_set_test_mode(udc, test_mode); } } @@ -161,11 +159,8 @@ static void _nbu2ss_ep0_complete(struct usb_ep *_ep, struct usb_request *_req) /*-------------------------------------------------------------------------*/ /* Initialization usb_request */ -static void _nbu2ss_create_ep0_packet( - struct nbu2ss_udc *udc, - void *p_buf, - unsigned length -) 
+static void _nbu2ss_create_ep0_packet(struct nbu2ss_udc *udc, + void *p_buf, unsigned int length) { udc->ep0_req.req.buf = p_buf; udc->ep0_req.req.length = length; @@ -184,7 +179,7 @@ static u32 _nbu2ss_get_begin_ram_address(struct nbu2ss_udc *udc) u32 num, buf_type; u32 data, last_ram_adr, use_ram_size; - struct ep_regs *p_ep_regs; + struct ep_regs __iomem *p_ep_regs; last_ram_adr = (D_RAM_SIZE_CTRL / sizeof(u32)) * 2; use_ram_size = 0; @@ -377,7 +372,7 @@ static void _nbu2ss_ep_dma_exit(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep) { u32 num; u32 data; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (udc->vbus_active == 0) return; /* VBUS OFF */ @@ -408,7 +403,7 @@ static void _nbu2ss_ep_dma_exit(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep) /* Abort DMA */ static void _nbu2ss_ep_dma_abort(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep) { - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; _nbu2ss_bitclr(&preg->EP_DCR[ep->epnum - 1].EP_DCR1, DCR1_EPN_REQEN); mdelay(DMA_DISABLE_TIME); /* DCR1_EPN_REQEN Clear */ @@ -417,16 +412,12 @@ static void _nbu2ss_ep_dma_abort(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep) /*-------------------------------------------------------------------------*/ /* Start IN Transfer */ -static void _nbu2ss_ep_in_end( - struct nbu2ss_udc *udc, - u32 epnum, - u32 data32, - u32 length -) +static void _nbu2ss_ep_in_end(struct nbu2ss_udc *udc, + u32 epnum, u32 data32, u32 length) { u32 data; u32 num; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (length >= sizeof(u32)) return; @@ -460,12 +451,9 @@ static void _nbu2ss_ep_in_end( #ifdef USE_DMA /*-------------------------------------------------------------------------*/ -static void _nbu2ss_dma_map_single( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - u8 direct -) +static void _nbu2ss_dma_map_single(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req, u8 direct) { if (req->req.dma == DMA_ADDR_INVALID) { if (req->unaligned) { @@ -493,12 +481,9 @@ static void _nbu2ss_dma_map_single( } /*-------------------------------------------------------------------------*/ -static void _nbu2ss_dma_unmap_single( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - u8 direct -) +static void _nbu2ss_dma_unmap_single(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req, u8 direct) { u8 data[4]; u8 *p; @@ -669,10 +654,8 @@ static int EP0_receive_NULL(struct nbu2ss_udc *udc, bool pid_flag) } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_ep0_in_transfer( - struct nbu2ss_udc *udc, - struct nbu2ss_req *req -) +static int _nbu2ss_ep0_in_transfer(struct nbu2ss_udc *udc, + struct nbu2ss_req *req) { u8 *p_buffer; /* IN Data Buffer */ u32 data; @@ -726,10 +709,8 @@ static int _nbu2ss_ep0_in_transfer( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_ep0_out_transfer( - struct nbu2ss_udc *udc, - struct nbu2ss_req *req -) +static int _nbu2ss_ep0_out_transfer(struct nbu2ss_udc *udc, + struct nbu2ss_req *req) { u8 *p_buffer; u32 i_remain_size; @@ -803,12 +784,8 @@ static int _nbu2ss_ep0_out_transfer( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_out_dma( - struct nbu2ss_udc *udc, - struct nbu2ss_req *req, - u32 num, - u32 length -) +static int _nbu2ss_out_dma(struct nbu2ss_udc *udc, struct 
nbu2ss_req *req, + u32 num, u32 length) { dma_addr_t p_buffer; u32 mpkt; @@ -817,7 +794,7 @@ static int _nbu2ss_out_dma( u32 burst = 1; u32 data; int result = -EINVAL; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (req->dma_flag) return 1; /* DMA is forwarded */ @@ -866,12 +843,8 @@ static int _nbu2ss_out_dma( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_epn_out_pio( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - u32 length -) +static int _nbu2ss_epn_out_pio(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep, + struct nbu2ss_req *req, u32 length) { u8 *p_buffer; u32 i; @@ -880,7 +853,7 @@ static int _nbu2ss_epn_out_pio( union usb_reg_access temp_32; union usb_reg_access *p_buf_32; int result = 0; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (req->dma_flag) return 1; /* DMA is forwarded */ @@ -925,12 +898,8 @@ static int _nbu2ss_epn_out_pio( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_epn_out_data( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - u32 data_size -) +static int _nbu2ss_epn_out_data(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep, + struct nbu2ss_req *req, u32 data_size) { u32 num; u32 i_buf_size; @@ -955,16 +924,14 @@ static int _nbu2ss_epn_out_data( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_epn_out_transfer( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req -) +static int _nbu2ss_epn_out_transfer(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req) { u32 num; u32 i_recv_length; int result = 1; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (ep->epnum == 0) return -EINVAL; @@ -1011,13 +978,8 @@ static int _nbu2ss_epn_out_transfer( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_in_dma( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - u32 num, - u32 length -) +static int _nbu2ss_in_dma(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep, + struct nbu2ss_req *req, u32 num, u32 length) { dma_addr_t p_buffer; u32 mpkt; /* MaxPacketSize */ @@ -1026,7 +988,7 @@ static int _nbu2ss_in_dma( u32 i_write_length; u32 data; int result = -EINVAL; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (req->dma_flag) return 1; /* DMA is forwarded */ @@ -1087,12 +1049,8 @@ static int _nbu2ss_in_dma( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_epn_in_pio( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - u32 length -) +static int _nbu2ss_epn_in_pio(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep, + struct nbu2ss_req *req, u32 length) { u8 *p_buffer; u32 i; @@ -1101,7 +1059,7 @@ static int _nbu2ss_epn_in_pio( union usb_reg_access temp_32; union usb_reg_access *p_buf_32 = NULL; int result = 0; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (req->dma_flag) return 1; /* DMA is forwarded */ @@ -1140,12 +1098,8 @@ static int _nbu2ss_epn_in_pio( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_epn_in_data( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - u32 data_size -) +static int _nbu2ss_epn_in_data(struct 
nbu2ss_udc *udc, struct nbu2ss_ep *ep, + struct nbu2ss_req *req, u32 data_size) { u32 num; int nret = 1; @@ -1167,11 +1121,8 @@ static int _nbu2ss_epn_in_data( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_epn_in_transfer( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req -) +static int _nbu2ss_epn_in_transfer(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, struct nbu2ss_req *req) { u32 num; u32 i_buf_size; @@ -1208,11 +1159,10 @@ static int _nbu2ss_epn_in_transfer( } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_start_transfer( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - bool bflag) +static int _nbu2ss_start_transfer(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req, + bool bflag) { int nret = -EINVAL; @@ -1287,9 +1237,7 @@ static void _nbu2ss_restert_transfer(struct nbu2ss_ep *ep) /*-------------------------------------------------------------------------*/ /* Endpoint Toggle Reset */ -static void _nbu2ss_endpoint_toggle_reset( - struct nbu2ss_udc *udc, - u8 ep_adrs) +static void _nbu2ss_endpoint_toggle_reset(struct nbu2ss_udc *udc, u8 ep_adrs) { u8 num; u32 data; @@ -1309,15 +1257,13 @@ static void _nbu2ss_endpoint_toggle_reset( /*-------------------------------------------------------------------------*/ /* Endpoint STALL set */ -static void _nbu2ss_set_endpoint_stall( - struct nbu2ss_udc *udc, - u8 ep_adrs, - bool bstall) +static void _nbu2ss_set_endpoint_stall(struct nbu2ss_udc *udc, + u8 ep_adrs, bool bstall) { u8 num, epnum; u32 data; struct nbu2ss_ep *ep; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if ((ep_adrs == 0) || (ep_adrs == 0x80)) { if (bstall) { @@ -1387,11 +1333,8 @@ static void _nbu2ss_set_test_mode(struct nbu2ss_udc *udc, u32 mode) } /*-------------------------------------------------------------------------*/ -static int _nbu2ss_set_feature_device( - struct nbu2ss_udc *udc, - u16 selector, - u16 wIndex -) +static int _nbu2ss_set_feature_device(struct nbu2ss_udc *udc, + u16 selector, u16 wIndex) { int result = -EOPNOTSUPP; @@ -1421,7 +1364,7 @@ static int _nbu2ss_get_ep_stall(struct nbu2ss_udc *udc, u8 ep_adrs) { u8 epnum; u32 data = 0, bit_data; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; epnum = ep_adrs & ~USB_ENDPOINT_DIR_MASK; if (epnum == 0) { @@ -1449,8 +1392,8 @@ static inline int _nbu2ss_req_feature(struct nbu2ss_udc *udc, bool bset) { u8 recipient = (u8)(udc->ctrl.bRequestType & USB_RECIP_MASK); u8 direction = (u8)(udc->ctrl.bRequestType & USB_DIR_IN); - u16 selector = udc->ctrl.wValue; - u16 wIndex = udc->ctrl.wIndex; + u16 selector = le16_to_cpu(udc->ctrl.wValue); + u16 wIndex = le16_to_cpu(udc->ctrl.wIndex); u8 ep_adrs; int result = -EOPNOTSUPP; @@ -1507,16 +1450,14 @@ static inline enum usb_device_speed _nbu2ss_get_speed(struct nbu2ss_udc *udc) } /*-------------------------------------------------------------------------*/ -static void _nbu2ss_epn_set_stall( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep -) +static void _nbu2ss_epn_set_stall(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep) { u8 ep_adrs; u32 regdata; int limit_cnt = 0; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (ep->direct == USB_DIR_IN) { for (limit_cnt = 0 @@ -1549,8 +1490,8 @@ static int std_req_get_status(struct nbu2ss_udc *udc) if ((udc->ctrl.wValue != 0x0000) || (direction != 
USB_DIR_IN)) return result; - length = min_t(u16, udc->ctrl.wLength, sizeof(status_data)); - + length = + min_t(u16, le16_to_cpu(udc->ctrl.wLength), sizeof(status_data)); switch (recipient) { case USB_RECIP_DEVICE: if (udc->ctrl.wIndex == 0x0000) { @@ -1565,8 +1506,8 @@ static int std_req_get_status(struct nbu2ss_udc *udc) break; case USB_RECIP_ENDPOINT: - if (0x0000 == (udc->ctrl.wIndex & 0xFF70)) { - ep_adrs = (u8)(udc->ctrl.wIndex & 0xFF); + if (0x0000 == (le16_to_cpu(udc->ctrl.wIndex) & 0xFF70)) { + ep_adrs = (u8)(le16_to_cpu(udc->ctrl.wIndex) & 0xFF); result = _nbu2ss_get_ep_stall(udc, ep_adrs); if (result > 0) @@ -1606,7 +1547,7 @@ static int std_req_set_feature(struct nbu2ss_udc *udc) static int std_req_set_address(struct nbu2ss_udc *udc) { int result = 0; - u32 wValue = udc->ctrl.wValue; + u32 wValue = le16_to_cpu(udc->ctrl.wValue); if ((udc->ctrl.bRequestType != 0x00) || (udc->ctrl.wIndex != 0x0000) || @@ -1628,7 +1569,7 @@ static int std_req_set_address(struct nbu2ss_udc *udc) /*-------------------------------------------------------------------------*/ static int std_req_set_configuration(struct nbu2ss_udc *udc) { - u32 config_value = (u32)(udc->ctrl.wValue & 0x00ff); + u32 config_value = (u32)(le16_to_cpu(udc->ctrl.wValue) & 0x00ff); if ((udc->ctrl.wIndex != 0x0000) || (udc->ctrl.wLength != 0x0000) || @@ -1882,10 +1823,9 @@ static inline void _nbu2ss_ep0_int(struct nbu2ss_udc *udc) } /*-------------------------------------------------------------------------*/ -static void _nbu2ss_ep_done( - struct nbu2ss_ep *ep, - struct nbu2ss_req *req, - int status) +static void _nbu2ss_ep_done(struct nbu2ss_ep *ep, + struct nbu2ss_req *req, + int status) { struct nbu2ss_udc *udc = ep->udc; @@ -1916,15 +1856,14 @@ static void _nbu2ss_ep_done( } /*-------------------------------------------------------------------------*/ -static inline void _nbu2ss_epn_in_int( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req) +static inline void _nbu2ss_epn_in_int(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req) { int result = 0; u32 status; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (req->dma_flag) return; /* DMA is forwarded */ @@ -1960,10 +1899,9 @@ static inline void _nbu2ss_epn_in_int( } /*-------------------------------------------------------------------------*/ -static inline void _nbu2ss_epn_out_int( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req) +static inline void _nbu2ss_epn_out_int(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req) { int result; @@ -1973,10 +1911,9 @@ static inline void _nbu2ss_epn_out_int( } /*-------------------------------------------------------------------------*/ -static inline void _nbu2ss_epn_in_dma_int( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req) +static inline void _nbu2ss_epn_in_dma_int(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req) { u32 mpkt; u32 size; @@ -2010,16 +1947,15 @@ static inline void _nbu2ss_epn_in_dma_int( } /*-------------------------------------------------------------------------*/ -static inline void _nbu2ss_epn_out_dma_int( - struct nbu2ss_udc *udc, - struct nbu2ss_ep *ep, - struct nbu2ss_req *req) +static inline void _nbu2ss_epn_out_dma_int(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + struct nbu2ss_req *req) { int i; u32 num; u32 dmacnt, ep_dmacnt; u32 mpkt; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; 
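/*
 * Aside on the le16_to_cpu() conversions in the emxx_udc hunks above:
 * the wValue/wIndex/wLength fields of a USB setup packet are
 * little-endian on the wire (__le16 in struct usb_ctrlrequest), so
 * they must be byte-swapped before any host-order comparison or
 * shift. A minimal sketch of the pattern follows; the helper name is
 * hypothetical, not part of the patch.
 */
#include <linux/usb/ch9.h>
#include <asm/byteorder.h>

static u8 demo_setup_test_mode(const struct usb_ctrlrequest *ctrl)
{
	u16 windex = le16_to_cpu(ctrl->wIndex);

	/*
	 * le16_to_cpu() is a no-op on little-endian CPUs and a swab16()
	 * on big-endian ones; shifting the raw __le16 directly would
	 * pick the wrong byte on big-endian hosts.
	 */
	return (u8)(windex >> 8);
}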
num = ep->epnum - 1; @@ -2195,7 +2131,7 @@ static int _nbu2ss_pullup(struct nbu2ss_udc *udc, int is_on) /*-------------------------------------------------------------------------*/ static void _nbu2ss_fifo_flush(struct nbu2ss_udc *udc, struct nbu2ss_ep *ep) { - struct fc_regs *p = udc->p_regs; + struct fc_regs __iomem *p = udc->p_regs; if (udc->vbus_active == 0) return; @@ -2413,7 +2349,7 @@ static irqreturn_t _nbu2ss_udc_irq(int irq, void *_udc) u32 epnum, int_bit; struct nbu2ss_udc *udc = (struct nbu2ss_udc *)_udc; - struct fc_regs *preg = udc->p_regs; + struct fc_regs __iomem *preg = udc->p_regs; if (gpio_get_value(VBUS_VALUE) == 0) { _nbu2ss_writel(&preg->USB_INT_STA, ~USB_INT_STA_RW); @@ -2478,9 +2414,8 @@ static irqreturn_t _nbu2ss_udc_irq(int irq, void *_udc) /*-------------------------------------------------------------------------*/ /* usb_ep_ops */ -static int nbu2ss_ep_enable( - struct usb_ep *_ep, - const struct usb_endpoint_descriptor *desc) +static int nbu2ss_ep_enable(struct usb_ep *_ep, + const struct usb_endpoint_descriptor *desc) { u8 ep_type; unsigned long flags; @@ -2568,9 +2503,8 @@ static int nbu2ss_ep_disable(struct usb_ep *_ep) } /*-------------------------------------------------------------------------*/ -static struct usb_request *nbu2ss_ep_alloc_request( - struct usb_ep *ep, - gfp_t gfp_flags) +static struct usb_request *nbu2ss_ep_alloc_request(struct usb_ep *ep, + gfp_t gfp_flags) { struct nbu2ss_req *req; @@ -2587,9 +2521,8 @@ static struct usb_request *nbu2ss_ep_alloc_request( } /*-------------------------------------------------------------------------*/ -static void nbu2ss_ep_free_request( - struct usb_ep *_ep, - struct usb_request *_req) +static void nbu2ss_ep_free_request(struct usb_ep *_ep, + struct usb_request *_req) { struct nbu2ss_req *req; @@ -2601,10 +2534,8 @@ static void nbu2ss_ep_free_request( } /*-------------------------------------------------------------------------*/ -static int nbu2ss_ep_queue( - struct usb_ep *_ep, - struct usb_request *_req, - gfp_t gfp_flags) +static int nbu2ss_ep_queue(struct usb_ep *_ep, + struct usb_request *_req, gfp_t gfp_flags) { struct nbu2ss_req *req; struct nbu2ss_ep *ep; @@ -2708,9 +2639,7 @@ static int nbu2ss_ep_queue( } /*-------------------------------------------------------------------------*/ -static int nbu2ss_ep_dequeue( - struct usb_ep *_ep, - struct usb_request *_req) +static int nbu2ss_ep_dequeue(struct usb_ep *_ep, struct usb_request *_req) { struct nbu2ss_req *req; struct nbu2ss_ep *ep; @@ -2804,7 +2733,7 @@ static int nbu2ss_ep_fifo_status(struct usb_ep *_ep) struct nbu2ss_ep *ep; struct nbu2ss_udc *udc; unsigned long flags; - struct fc_regs *preg; + struct fc_regs __iomem *preg; if (!_ep) { pr_err("%s, bad param\n", __func__); @@ -3021,10 +2950,8 @@ static int nbu2ss_gad_pullup(struct usb_gadget *pgadget, int is_on) } /*-------------------------------------------------------------------------*/ -static int nbu2ss_gad_ioctl( - struct usb_gadget *pgadget, - unsigned int code, - unsigned long param) +static int nbu2ss_gad_ioctl(struct usb_gadget *pgadget, + unsigned int code, unsigned long param) { return 0; } @@ -3113,9 +3040,8 @@ static void nbu2ss_drv_ep_init(struct nbu2ss_udc *udc) /*-------------------------------------------------------------------------*/ /* platform_driver */ -static int nbu2ss_drv_contest_init( - struct platform_device *pdev, - struct nbu2ss_udc *udc) +static int nbu2ss_drv_contest_init(struct platform_device *pdev, + struct nbu2ss_udc *udc) { spin_lock_init(&udc->lock); 
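/*
 * Aside on the recurring "struct fc_regs __iomem *" conversions: the
 * __iomem attribute lets sparse flag plain dereferences of MMIO
 * pointers, and all access is funneled through readl()/writel() (or
 * the __raw_ variants when the caller handles byte order itself, as
 * _nbu2ss_readl()/_nbu2ss_writel() do). A minimal sketch of the
 * annotated accessor pattern; the register offset is made up.
 */
#include <linux/io.h>

#define DEMO_STATUS_OFS	0x10	/* hypothetical register offset */

static u32 demo_read_status(void __iomem *base)
{
	return readl(base + DEMO_STATUS_OFS);
}

/* read-modify-write with the same shape as _nbu2ss_bitset() */
static void demo_set_bits(void __iomem *reg, u32 mask)
{
	writel(readl(reg) | mask, reg);
}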
udc->dev = &pdev->dev; @@ -3177,7 +3103,7 @@ static int nbu2ss_drv_probe(struct platform_device *pdev) 0, driver_name, udc); /* IO Memory */ - udc->p_regs = (struct fc_regs *)mmio_base; + udc->p_regs = (struct fc_regs __iomem *)mmio_base; /* USB Function Controller Interrupt */ if (status != 0) { diff --git a/drivers/staging/emxx_udc/emxx_udc.h b/drivers/staging/emxx_udc/emxx_udc.h index 8337e38c238a..e28a74da9633 100644 --- a/drivers/staging/emxx_udc/emxx_udc.h +++ b/drivers/staging/emxx_udc/emxx_udc.h @@ -582,7 +582,7 @@ struct nbu2ss_udc { u32 curr_config; /* Current Configuration Number */ - struct fc_regs *p_regs; + struct fc_regs __iomem *p_regs; }; /* USB register access structure */ diff --git a/drivers/staging/erofs/Kconfig b/drivers/staging/erofs/Kconfig index c8521d71039b..d04b798a8efb 100644 --- a/drivers/staging/erofs/Kconfig +++ b/drivers/staging/erofs/Kconfig @@ -90,8 +90,9 @@ config EROFS_FS_IO_MAX_RETRIES config EROFS_FS_ZIP bool "EROFS Data Compresssion Support" depends on EROFS_FS + select LZ4_DECOMPRESS help - Currently we support VLE Compression only. + Currently we support LZ4 VLE Compression only. Play at your own risk. If you don't want to use compression feature, say N. diff --git a/drivers/staging/erofs/Makefile b/drivers/staging/erofs/Makefile index 9a766eb7ed75..c91b65223f99 100644 --- a/drivers/staging/erofs/Makefile +++ b/drivers/staging/erofs/Makefile @@ -9,5 +9,5 @@ obj-$(CONFIG_EROFS_FS) += erofs.o ccflags-y += -I$(src)/include erofs-objs := super.o inode.o data.o namei.o dir.o utils.o erofs-$(CONFIG_EROFS_FS_XATTR) += xattr.o -erofs-$(CONFIG_EROFS_FS_ZIP) += unzip_vle.o unzip_lz4.o unzip_vle_lz4.o +erofs-$(CONFIG_EROFS_FS_ZIP) += unzip_vle.o unzip_vle_lz4.o diff --git a/drivers/staging/erofs/TODO b/drivers/staging/erofs/TODO index f99ddb842f99..a8608b2f72bd 100644 --- a/drivers/staging/erofs/TODO +++ b/drivers/staging/erofs/TODO @@ -23,13 +23,14 @@ TODO List: - data deduplication and other useful features. -erofs-mkfs (preview version) binaries for i386 / x86_64 are available at: - - https://github.com/hsiangkao/erofs_mkfs_binary - -It is still in progress opening mkfs source code to public, -in any case an open-source mkfs will be released in the near future. - +The following git tree provides the file system user-space +tools under development (ex, formatting tool mkfs.erofs): +>> git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git + +The open-source development of erofs-utils is at the early stage. +Contact the original author Li Guifu and +the co-maintainer Fang Wei for the latest news +and more details. Code, suggestions, etc, are welcome. Please feel free to ask and send patches, diff --git a/drivers/staging/erofs/data.c b/drivers/staging/erofs/data.c index 6384f73e5418..5a55f0bfdfbb 100644 --- a/drivers/staging/erofs/data.c +++ b/drivers/staging/erofs/data.c @@ -40,7 +40,7 @@ static inline void read_endio(struct bio *bio) /* prio -- true is used for dir */ struct page *__erofs_get_meta_page(struct super_block *sb, - erofs_blk_t blkaddr, bool prio, bool nofail) + erofs_blk_t blkaddr, bool prio, bool nofail) { struct inode *const bd_inode = sb->s_bdev->bd_inode; struct address_space *const mapping = bd_inode->i_mapping; @@ -53,7 +53,7 @@ struct page *__erofs_get_meta_page(struct super_block *sb, repeat: page = find_or_create_page(mapping, blkaddr, gfp); - if (unlikely(page == NULL)) { + if (unlikely(!page)) { DBG_BUGON(nofail); return ERR_PTR(-ENOMEM); } @@ -76,7 +76,7 @@ repeat: } __submit_bio(bio, REQ_OP_READ, - REQ_META | (prio ? 
REQ_PRIO : 0)); + REQ_META | (prio ? REQ_PRIO : 0)); lock_page(page); @@ -107,8 +107,8 @@ err_out: } static int erofs_map_blocks_flatmode(struct inode *inode, - struct erofs_map_blocks *map, - int flags) + struct erofs_map_blocks *map, + int flags) { int err = 0; erofs_blk_t nblocks, lastblk; @@ -151,7 +151,7 @@ static int erofs_map_blocks_flatmode(struct inode *inode, map->m_flags |= EROFS_MAP_META; } else { errln("internal error @ nid: %llu (size %llu), m_la 0x%llx", - vi->nid, inode->i_size, map->m_la); + vi->nid, inode->i_size, map->m_la); DBG_BUGON(1); err = -EIO; goto err_out; @@ -167,16 +167,17 @@ err_out: #ifdef CONFIG_EROFS_FS_ZIP extern int z_erofs_map_blocks_iter(struct inode *, - struct erofs_map_blocks *, struct page **, int); + struct erofs_map_blocks *, + struct page **, int); #endif int erofs_map_blocks_iter(struct inode *inode, - struct erofs_map_blocks *map, - struct page **mpage_ret, int flags) + struct erofs_map_blocks *map, + struct page **mpage_ret, int flags) { /* by default, reading raw data never use erofs_map_blocks_iter */ if (unlikely(!is_inode_layout_compression(inode))) { - if (*mpage_ret != NULL) + if (*mpage_ret) put_page(*mpage_ret); *mpage_ret = NULL; @@ -192,27 +193,26 @@ int erofs_map_blocks_iter(struct inode *inode, } int erofs_map_blocks(struct inode *inode, - struct erofs_map_blocks *map, int flags) + struct erofs_map_blocks *map, int flags) { if (unlikely(is_inode_layout_compression(inode))) { struct page *mpage = NULL; int err; err = erofs_map_blocks_iter(inode, map, &mpage, flags); - if (mpage != NULL) + if (mpage) put_page(mpage); return err; } return erofs_map_blocks_flatmode(inode, map, flags); } -static inline struct bio *erofs_read_raw_page( - struct bio *bio, - struct address_space *mapping, - struct page *page, - erofs_off_t *last_block, - unsigned int nblocks, - bool ra) +static inline struct bio *erofs_read_raw_page(struct bio *bio, + struct address_space *mapping, + struct page *page, + erofs_off_t *last_block, + unsigned int nblocks, + bool ra) { struct inode *inode = mapping->host; erofs_off_t current_block = (erofs_off_t)page->index; @@ -232,15 +232,15 @@ static inline struct bio *erofs_read_raw_page( } /* note that for readpage case, bio also equals to NULL */ - if (bio != NULL && - /* not continuous */ - *last_block + 1 != current_block) { + if (bio && + /* not continuous */ + *last_block + 1 != current_block) { submit_bio_retry: __submit_bio(bio, REQ_OP_READ, 0); bio = NULL; } - if (bio == NULL) { + if (!bio) { struct erofs_map_blocks map = { .m_la = blknr_to_addr(current_block), }; @@ -307,7 +307,7 @@ submit_bio_retry: nblocks = BIO_MAX_PAGES; bio = erofs_grab_bio(inode->i_sb, - blknr, nblocks, read_endio, false); + blknr, nblocks, read_endio, false); if (IS_ERR(bio)) { err = PTR_ERR(bio); @@ -342,7 +342,7 @@ has_updated: unlock_page(page); /* if updated manually, continuous pages has a gap */ - if (bio != NULL) + if (bio) submit_bio_out: __submit_bio(bio, REQ_OP_READ, 0); @@ -361,7 +361,7 @@ static int erofs_raw_access_readpage(struct file *file, struct page *page) trace_erofs_readpage(page, true); bio = erofs_read_raw_page(NULL, page->mapping, - page, &last_block, 1, false); + page, &last_block, 1, false); if (IS_ERR(bio)) return PTR_ERR(bio); @@ -371,8 +371,9 @@ static int erofs_raw_access_readpage(struct file *file, struct page *page) } static int erofs_raw_access_readpages(struct file *filp, - struct address_space *mapping, - struct list_head *pages, unsigned int nr_pages) + struct address_space *mapping, + struct list_head 
*pages, + unsigned int nr_pages) { erofs_off_t last_block; struct bio *bio = NULL; @@ -389,13 +390,13 @@ static int erofs_raw_access_readpages(struct file *filp, if (!add_to_page_cache_lru(page, mapping, page->index, gfp)) { bio = erofs_read_raw_page(bio, mapping, page, - &last_block, nr_pages, true); + &last_block, nr_pages, true); /* all the page errors are ignored when readahead */ if (IS_ERR(bio)) { pr_err("%s, readahead error at page %lu of nid %llu\n", - __func__, page->index, - EROFS_V(mapping->host)->nid); + __func__, page->index, + EROFS_V(mapping->host)->nid); bio = NULL; } @@ -407,7 +408,7 @@ static int erofs_raw_access_readpages(struct file *filp, DBG_BUGON(!list_empty(pages)); /* the rare case (end in gaps) */ - if (unlikely(bio != NULL)) + if (unlikely(bio)) __submit_bio(bio, REQ_OP_READ, 0); return 0; } diff --git a/drivers/staging/erofs/dir.c b/drivers/staging/erofs/dir.c index d1cb0d78ab84..833f052f79d0 100644 --- a/drivers/staging/erofs/dir.c +++ b/drivers/staging/erofs/dir.c @@ -53,8 +53,11 @@ static int erofs_fill_dentries(struct dir_context *ctx, strnlen(de_name, maxsize - nameoff) : le16_to_cpu(de[1].nameoff) - nameoff; - /* the corrupted directory found */ - BUG_ON(de_namelen < 0); + /* a corrupted entry is found */ + if (unlikely(de_namelen < 0)) { + DBG_BUGON(1); + return -EIO; + } #ifdef CONFIG_EROFS_FS_DEBUG dbg_namelen = min(EROFS_NAME_LEN - 1, de_namelen); @@ -66,8 +69,8 @@ static int erofs_fill_dentries(struct dir_context *ctx, #endif if (!dir_emit(ctx, de_name, de_namelen, - le64_to_cpu(de->nid), d_type)) - /* stoped by some reason */ + le64_to_cpu(de->nid), d_type)) + /* stopped by some reason */ return 1; ++de; *ofs += sizeof(struct erofs_dirent); diff --git a/drivers/staging/erofs/erofs_fs.h b/drivers/staging/erofs/erofs_fs.h index d4bffa2852b3..fa52898df006 100644 --- a/drivers/staging/erofs/erofs_fs.h +++ b/drivers/staging/erofs/erofs_fs.h @@ -38,10 +38,6 @@ struct erofs_super_block { /* 80 */__u8 reserved2[48]; /* 128 bytes */ } __packed; -#define __EROFS_BIT(_prefix, _cur, _pre) enum { \ - _prefix ## _cur ## _BIT = _prefix ## _pre ## _BIT + \ - _prefix ## _pre ## _BITS } - /* * erofs inode data mapping: * 0 - inode plain without inline data A: @@ -58,11 +54,13 @@ enum { EROFS_INODE_LAYOUT_INLINE, EROFS_INODE_LAYOUT_MAX }; + +/* bit definitions of inode i_advise */ #define EROFS_I_VERSION_BITS 1 #define EROFS_I_DATA_MAPPING_BITS 3 #define EROFS_I_VERSION_BIT 0 -__EROFS_BIT(EROFS_I_, DATA_MAPPING, VERSION); +#define EROFS_I_DATA_MAPPING_BIT 1 struct erofs_inode_v1 { /* 0 */__le16 i_advise; diff --git a/drivers/staging/erofs/inode.c b/drivers/staging/erofs/inode.c index 04c61a9d7b76..d7fbf5f4600f 100644 --- a/drivers/staging/erofs/inode.c +++ b/drivers/staging/erofs/inode.c @@ -133,7 +133,13 @@ static int fill_inline_data(struct inode *inode, void *data, return -ENOMEM; m_pofs += vi->inode_isize + vi->xattr_isize; - BUG_ON(m_pofs + inode->i_size > PAGE_SIZE); + + /* inline symlink data shouldn't across page boundary as well */ + if (unlikely(m_pofs + inode->i_size > PAGE_SIZE)) { + DBG_BUGON(1); + kfree(lnk); + return -EIO; + } /* get in-page inline data */ memcpy(lnk, data + m_pofs, inode->i_size); @@ -171,7 +177,7 @@ static int fill_inode(struct inode *inode, int isdir) return PTR_ERR(page); } - BUG_ON(!PageUptodate(page)); + DBG_BUGON(!PageUptodate(page)); data = page_address(page); err = read_inode(inode, data + ofs); diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h index 57575c7f5635..e049d00c087a 100644 --- 
a/drivers/staging/erofs/internal.h +++ b/drivers/staging/erofs/internal.h @@ -39,7 +39,7 @@ #define debugln(x, ...) ((void)0) #define dbg_might_sleep() ((void)0) -#define DBG_BUGON(...) ((void)0) +#define DBG_BUGON(x) ((void)(x)) #endif enum { @@ -194,50 +194,70 @@ struct erofs_workgroup { #define EROFS_LOCKED_MAGIC (INT_MIN | 0xE0F510CCL) -static inline bool erofs_workgroup_try_to_freeze( - struct erofs_workgroup *grp, int v) +#if defined(CONFIG_SMP) +static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp, + int val) { -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) - if (v != atomic_cmpxchg(&grp->refcount, - v, EROFS_LOCKED_MAGIC)) - return false; preempt_disable(); + if (val != atomic_cmpxchg(&grp->refcount, val, EROFS_LOCKED_MAGIC)) { + preempt_enable(); + return false; + } + return true; +} + +static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp, + int orig_val) +{ + /* + * other observers should notice all modifications + * in the freezing period. + */ + smp_mb(); + atomic_set(&grp->refcount, orig_val); + preempt_enable(); +} + +static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp) +{ + return atomic_cond_read_relaxed(&grp->refcount, + VAL != EROFS_LOCKED_MAGIC); +} #else +static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp, + int val) +{ preempt_disable(); - if (atomic_read(&grp->refcount) != v) { + /* no need to spin on UP platforms, let's just disable preemption. */ + if (val != atomic_read(&grp->refcount)) { preempt_enable(); return false; } -#endif return true; } -static inline void erofs_workgroup_unfreeze( - struct erofs_workgroup *grp, int v) +static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp, + int orig_val) { -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) - atomic_set(&grp->refcount, v); -#endif preempt_enable(); } +static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp) +{ + int v = atomic_read(&grp->refcount); + + /* workgroup is never freezed on uniprocessor systems */ + DBG_BUGON(v == EROFS_LOCKED_MAGIC); + return v; +} +#endif + static inline bool erofs_workgroup_get(struct erofs_workgroup *grp, int *ocnt) { - const int locked = (int)EROFS_LOCKED_MAGIC; int o; repeat: - o = atomic_read(&grp->refcount); - - /* spin if it is temporarily locked at the reclaim path */ - if (unlikely(o == locked)) { -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) - do - cpu_relax(); - while (atomic_read(&grp->refcount) == locked); -#endif - goto repeat; - } + o = erofs_wait_on_workgroup_freezed(grp); if (unlikely(o <= 0)) return -1; @@ -250,6 +270,7 @@ repeat: } #define __erofs_workgroup_get(grp) atomic_inc(&(grp)->refcount) +#define __erofs_workgroup_put(grp) atomic_dec(&(grp)->refcount) extern int erofs_workgroup_put(struct erofs_workgroup *grp); @@ -268,12 +289,14 @@ static inline void erofs_workstation_cleanup_all(struct super_block *sb) } #ifdef EROFS_FS_HAS_MANAGED_CACHE -#define EROFS_UNALLOCATED_CACHED_PAGE ((void *)0x5F0EF00D) - extern int erofs_try_to_free_all_cached_pages(struct erofs_sb_info *sbi, struct erofs_workgroup *egrp); extern int erofs_try_to_free_cached_page(struct address_space *mapping, struct page *page); + +#define MNGD_MAPPING(sbi) ((sbi)->managed_cache->i_mapping) +#else +#define MNGD_MAPPING(sbi) (NULL) #endif #define DEFAULT_MAX_SYNC_DECOMPRESS_PAGES 3 diff --git a/drivers/staging/erofs/lz4defs.h b/drivers/staging/erofs/lz4defs.h deleted file mode 100644 index 00a0b58a0871..000000000000 --- 
a/drivers/staging/erofs/lz4defs.h +++ /dev/null @@ -1,227 +0,0 @@ -#ifndef __LZ4DEFS_H__ -#define __LZ4DEFS_H__ - -/* - * lz4defs.h -- common and architecture specific defines for the kernel usage - - * LZ4 - Fast LZ compression algorithm - * Copyright (C) 2011-2016, Yann Collet. - * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are - * met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above - * copyright notice, this list of conditions and the following disclaimer - * in the documentation and/or other materials provided with the - * distribution. - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * You can contact the author at : - * - LZ4 homepage : http://www.lz4.org - * - LZ4 source repository : https://github.com/lz4/lz4 - * - * Changed for kernel usage by: - * Sven Schmidt <4sschmid@informatik.uni-hamburg.de> - */ - -#include -#include /* memset, memcpy */ - -#define FORCE_INLINE __always_inline - -/*-************************************ - * Basic Types - **************************************/ -#include - -typedef uint8_t BYTE; -typedef uint16_t U16; -typedef uint32_t U32; -typedef int32_t S32; -typedef uint64_t U64; -typedef uintptr_t uptrval; - -/*-************************************ - * Architecture specifics - **************************************/ -#if defined(CONFIG_64BIT) -#define LZ4_ARCH64 1 -#else -#define LZ4_ARCH64 0 -#endif - -#if defined(__LITTLE_ENDIAN) -#define LZ4_LITTLE_ENDIAN 1 -#else -#define LZ4_LITTLE_ENDIAN 0 -#endif - -/*-************************************ - * Constants - **************************************/ -#define MINMATCH 4 - -#define WILDCOPYLENGTH 8 -#define LASTLITERALS 5 -#define MFLIMIT (WILDCOPYLENGTH + MINMATCH) - -/* Increase this value ==> compression run slower on incompressible data */ -#define LZ4_SKIPTRIGGER 6 - -#define HASH_UNIT sizeof(size_t) - -#define KB (1 << 10) -#define MB (1 << 20) -#define GB (1U << 30) - -#define MAXD_LOG 16 -#define MAX_DISTANCE ((1 << MAXD_LOG) - 1) -#define STEPSIZE sizeof(size_t) - -#define ML_BITS 4 -#define ML_MASK ((1U << ML_BITS) - 1) -#define RUN_BITS (8 - ML_BITS) -#define RUN_MASK ((1U << RUN_BITS) - 1) - -/*-************************************ - * Reading and writing into memory - **************************************/ -static FORCE_INLINE U16 LZ4_read16(const void *ptr) -{ - return get_unaligned((const U16 *)ptr); -} - -static FORCE_INLINE U32 LZ4_read32(const void *ptr) -{ - return get_unaligned((const U32 *)ptr); -} - -static FORCE_INLINE size_t 
LZ4_read_ARCH(const void *ptr) -{ - return get_unaligned((const size_t *)ptr); -} - -static FORCE_INLINE void LZ4_write16(void *memPtr, U16 value) -{ - put_unaligned(value, (U16 *)memPtr); -} - -static FORCE_INLINE void LZ4_write32(void *memPtr, U32 value) -{ - put_unaligned(value, (U32 *)memPtr); -} - -static FORCE_INLINE U16 LZ4_readLE16(const void *memPtr) -{ - return get_unaligned_le16(memPtr); -} - -static FORCE_INLINE void LZ4_writeLE16(void *memPtr, U16 value) -{ - return put_unaligned_le16(value, memPtr); -} - -static FORCE_INLINE void LZ4_copy8(void *dst, const void *src) -{ -#if LZ4_ARCH64 - U64 a = get_unaligned((const U64 *)src); - - put_unaligned(a, (U64 *)dst); -#else - U32 a = get_unaligned((const U32 *)src); - U32 b = get_unaligned((const U32 *)src + 1); - - put_unaligned(a, (U32 *)dst); - put_unaligned(b, (U32 *)dst + 1); -#endif -} - -/* - * customized variant of memcpy, - * which can overwrite up to 7 bytes beyond dstEnd - */ -static FORCE_INLINE void LZ4_wildCopy(void *dstPtr, - const void *srcPtr, void *dstEnd) -{ - BYTE *d = (BYTE *)dstPtr; - const BYTE *s = (const BYTE *)srcPtr; - BYTE *const e = (BYTE *)dstEnd; - - do { - LZ4_copy8(d, s); - d += 8; - s += 8; - } while (d < e); -} - -static FORCE_INLINE unsigned int LZ4_NbCommonBytes(register size_t val) -{ -#if LZ4_LITTLE_ENDIAN - return __ffs(val) >> 3; -#else - return (BITS_PER_LONG - 1 - __fls(val)) >> 3; -#endif -} - -static FORCE_INLINE unsigned int LZ4_count( - const BYTE *pIn, - const BYTE *pMatch, - const BYTE *pInLimit) -{ - const BYTE *const pStart = pIn; - - while (likely(pIn < pInLimit - (STEPSIZE - 1))) { - size_t const diff = LZ4_read_ARCH(pMatch) ^ LZ4_read_ARCH(pIn); - - if (!diff) { - pIn += STEPSIZE; - pMatch += STEPSIZE; - continue; - } - - pIn += LZ4_NbCommonBytes(diff); - - return (unsigned int)(pIn - pStart); - } - -#if LZ4_ARCH64 - if ((pIn < (pInLimit - 3)) - && (LZ4_read32(pMatch) == LZ4_read32(pIn))) { - pIn += 4; - pMatch += 4; - } -#endif - - if ((pIn < (pInLimit - 1)) - && (LZ4_read16(pMatch) == LZ4_read16(pIn))) { - pIn += 2; - pMatch += 2; - } - - if ((pIn < pInLimit) && (*pMatch == *pIn)) - pIn++; - - return (unsigned int)(pIn - pStart); -} - -typedef enum { noLimit = 0, limitedOutput = 1 } limitedOutput_directive; -typedef enum { byPtr, byU32, byU16 } tableType_t; - -typedef enum { noDict = 0, withPrefix64k, usingExtDict } dict_directive; -typedef enum { noDictIssue = 0, dictSmall } dictIssue_directive; - -typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive; -typedef enum { full = 0, partial = 1 } earlyEnd_directive; - -#endif diff --git a/drivers/staging/erofs/super.c b/drivers/staging/erofs/super.c index f69e619807a1..1c2eb69682ef 100644 --- a/drivers/staging/erofs/super.c +++ b/drivers/staging/erofs/super.c @@ -40,7 +40,6 @@ static int __init erofs_init_inode_cache(void) static void erofs_exit_inode_cache(void) { - BUG_ON(erofs_inode_cachep == NULL); kmem_cache_destroy(erofs_inode_cachep); } @@ -303,8 +302,8 @@ static int managed_cache_releasepage(struct page *page, gfp_t gfp_mask) int ret = 1; /* 0 - busy */ struct address_space *const mapping = page->mapping; - BUG_ON(!PageLocked(page)); - BUG_ON(mapping->a_ops != &managed_cache_aops); + DBG_BUGON(!PageLocked(page)); + DBG_BUGON(mapping->a_ops != &managed_cache_aops); if (PagePrivate(page)) ret = erofs_try_to_free_cached_page(mapping, page); @@ -317,10 +316,10 @@ static void managed_cache_invalidatepage(struct page *page, { const unsigned int stop = length + offset; - BUG_ON(!PageLocked(page)); + 
DBG_BUGON(!PageLocked(page)); - /* Check for overflow */ - BUG_ON(stop > PAGE_SIZE || stop < length); + /* Check for potential overflow in debug mode */ + DBG_BUGON(stop > PAGE_SIZE || stop < length); if (offset == 0 && stop == PAGE_SIZE) while (!managed_cache_releasepage(page, GFP_NOFS)) @@ -442,12 +441,6 @@ static int erofs_read_super(struct super_block *sb, erofs_register_super(sb); - /* - * We already have a positive dentry, which was instantiated - * by d_make_root. Just need to d_rehash it. - */ - d_rehash(sb->s_root); - if (!silent) infoln("mounted on %s with opts: %s.", dev_name, (char *)data); @@ -655,7 +648,7 @@ static int erofs_remount(struct super_block *sb, int *flags, char *data) unsigned int org_inject_rate = erofs_get_fault_rate(sbi); int err; - BUG_ON(!sb_rdonly(sb)); + DBG_BUGON(!sb_rdonly(sb)); err = parse_options(sb, data); if (err) goto out; diff --git a/drivers/staging/erofs/unzip_lz4.c b/drivers/staging/erofs/unzip_lz4.c deleted file mode 100644 index b1ea23f66c4e..000000000000 --- a/drivers/staging/erofs/unzip_lz4.c +++ /dev/null @@ -1,251 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause -/* - * linux/drivers/staging/erofs/unzip_lz4.c - * - * Copyright (C) 2018 HUAWEI, Inc. - * http://www.huawei.com/ - * Created by Gao Xiang - * - * Original code taken from 'linux/lib/lz4/lz4_decompress.c' - */ - -/* - * LZ4 - Fast LZ compression algorithm - * Copyright (C) 2011 - 2016, Yann Collet. - * BSD 2 - Clause License (http://www.opensource.org/licenses/bsd - license.php) - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are - * met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above - * copyright notice, this list of conditions and the following disclaimer - * in the documentation and/or other materials provided with the - * distribution. - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * You can contact the author at : - * - LZ4 homepage : http://www.lz4.org - * - LZ4 source repository : https://github.com/lz4/lz4 - * - * Changed for kernel usage by: - * Sven Schmidt <4sschmid@informatik.uni-hamburg.de> - */ -#include "internal.h" -#include -#include "lz4defs.h" - -/* - * no public solution to solve our requirement yet. 
- * see: - * https://groups.google.com/forum/#!topic/lz4c/_3kkz5N6n00 - */ -static FORCE_INLINE int customized_lz4_decompress_safe_partial( - const void * const source, - void * const dest, - int inputSize, - int outputSize) -{ - /* Local Variables */ - const BYTE *ip = (const BYTE *) source; - const BYTE * const iend = ip + inputSize; - - BYTE *op = (BYTE *) dest; - BYTE * const oend = op + outputSize; - BYTE *cpy; - - static const unsigned int dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 }; - static const int dec64table[] = { 0, 0, 0, -1, 0, 1, 2, 3 }; - - /* Empty output buffer */ - if (unlikely(outputSize == 0)) - return ((inputSize == 1) && (*ip == 0)) ? 0 : -1; - - /* Main Loop : decode sequences */ - while (1) { - size_t length; - const BYTE *match; - size_t offset; - - /* get literal length */ - unsigned int const token = *ip++; - - length = token>>ML_BITS; - - if (length == RUN_MASK) { - unsigned int s; - - do { - s = *ip++; - length += s; - } while ((ip < iend - RUN_MASK) & (s == 255)); - - if (unlikely((size_t)(op + length) < (size_t)(op))) { - /* overflow detection */ - goto _output_error; - } - if (unlikely((size_t)(ip + length) < (size_t)(ip))) { - /* overflow detection */ - goto _output_error; - } - } - - /* copy literals */ - cpy = op + length; - if ((cpy > oend - WILDCOPYLENGTH) || - (ip + length > iend - (2 + 1 + LASTLITERALS))) { - if (cpy > oend) { - memcpy(op, ip, length = oend - op); - op += length; - break; - } - - if (unlikely(ip + length > iend)) { - /* - * Error : - * read attempt beyond - * end of input buffer - */ - goto _output_error; - } - - memcpy(op, ip, length); - ip += length; - op += length; - - if (ip > iend - 2) - break; - /* Necessarily EOF, due to parsing restrictions */ - /* break; */ - } else { - LZ4_wildCopy(op, ip, cpy); - ip += length; - op = cpy; - } - - /* get offset */ - offset = LZ4_readLE16(ip); - ip += 2; - match = op - offset; - - if (unlikely(match < (const BYTE *)dest)) { - /* Error : offset outside buffers */ - goto _output_error; - } - - /* get matchlength */ - length = token & ML_MASK; - if (length == ML_MASK) { - unsigned int s; - - do { - s = *ip++; - - if (ip > iend - LASTLITERALS) - goto _output_error; - - length += s; - } while (s == 255); - - if (unlikely((size_t)(op + length) < (size_t)op)) { - /* overflow detection */ - goto _output_error; - } - } - - length += MINMATCH; - - /* copy match within block */ - cpy = op + length; - - if (unlikely(cpy >= oend - WILDCOPYLENGTH)) { - if (cpy >= oend) { - while (op < oend) - *op++ = *match++; - break; - } - goto __match; - } - - /* costs ~1%; silence an msan warning when offset == 0 */ - LZ4_write32(op, (U32)offset); - - if (unlikely(offset < 8)) { - const int dec64 = dec64table[offset]; - - op[0] = match[0]; - op[1] = match[1]; - op[2] = match[2]; - op[3] = match[3]; - match += dec32table[offset]; - memcpy(op + 4, match, 4); - match -= dec64; - } else { - LZ4_copy8(op, match); - match += 8; - } - - op += 8; - - if (unlikely(cpy > oend - 12)) { - BYTE * const oCopyLimit = oend - (WILDCOPYLENGTH - 1); - - if (op < oCopyLimit) { - LZ4_wildCopy(op, match, oCopyLimit); - match += oCopyLimit - op; - op = oCopyLimit; - } -__match: - while (op < cpy) - *op++ = *match++; - } else { - LZ4_copy8(op, match); - - if (length > 16) - LZ4_wildCopy(op + 8, match + 8, cpy); - } - - op = cpy; /* correction */ - } - DBG_BUGON((void *)ip - source > inputSize); - DBG_BUGON((void *)op - dest > outputSize); - - /* Nb of output bytes decoded */ - return (int) ((void *)op - dest); - - /* Overflow error detected */ 
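/*
 * Aside on this file removal: the staging driver carried a private
 * copy of the LZ4 decompressor; with the Kconfig hunk above now
 * selecting LZ4_DECOMPRESS, callers can use the shared lib/lz4
 * implementation instead. A minimal sketch of the replacement call
 * (the erofs-specific partial-output handling is not reproduced here;
 * names are illustrative):
 */
#include <linux/lz4.h>
#include <linux/printk.h>

static int demo_lz4_decompress(const char *in, int inlen,
			       char *out, int outcap)
{
	/* returns the decompressed byte count, or < 0 on bad input */
	int ret = LZ4_decompress_safe(in, out, inlen, outcap);

	if (ret < 0)
		pr_err("demo: corrupted LZ4 stream (err %d)\n", ret);
	return ret;
}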
-_output_error: - return -ERANGE; -} - -int z_erofs_unzip_lz4(void *in, void *out, size_t inlen, size_t outlen) -{ - int ret = customized_lz4_decompress_safe_partial(in, - out, inlen, outlen); - - if (ret >= 0) - return ret; - - /* - * LZ4_decompress_safe will return an error code - * (< 0) if decompression failed - */ - errln("%s, failed to decompress, in[%p, %zu] outlen[%p, %zu]", - __func__, in, inlen, out, outlen); - WARN_ON(1); - print_hex_dump(KERN_DEBUG, "raw data [in]: ", DUMP_PREFIX_OFFSET, - 16, 1, in, inlen, true); - print_hex_dump(KERN_DEBUG, "raw data [out]: ", DUMP_PREFIX_OFFSET, - 16, 1, out, outlen, true); - return -EIO; -} - diff --git a/drivers/staging/erofs/unzip_pagevec.h b/drivers/staging/erofs/unzip_pagevec.h index 0956615b86f7..23856ba2742d 100644 --- a/drivers/staging/erofs/unzip_pagevec.h +++ b/drivers/staging/erofs/unzip_pagevec.h @@ -150,7 +150,7 @@ z_erofs_pagevec_ctor_dequeue(struct z_erofs_pagevec_ctor *ctor, erofs_vtptr_t t; if (unlikely(ctor->index >= ctor->nr)) { - BUG_ON(ctor->next == NULL); + DBG_BUGON(!ctor->next); z_erofs_pagevec_ctor_pagedown(ctor, true); } diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c index 79d3ba62b298..4ac1099a39c6 100644 --- a/drivers/staging/erofs/unzip_vle.c +++ b/drivers/staging/erofs/unzip_vle.c @@ -15,14 +15,32 @@ #include +/* + * a compressed_pages[] placeholder in order to avoid + * being filled with file pages for in-place decompression. + */ +#define PAGE_UNALLOCATED ((void *)0x5F0E4B1D) + +/* how to allocate cached pages for a workgroup */ +enum z_erofs_cache_alloctype { + DONTALLOC, /* don't allocate any cached pages */ + DELAYEDALLOC, /* delayed allocation (at the time of submitting io) */ +}; + +/* + * tagged pointer with 1-bit tag for all compressed pages + * tag 0 - the page is just found with an extra page reference + */ +typedef tagptr1_t compressed_page_t; + +#define tag_compressed_page_justfound(page) \ + tagptr_fold(compressed_page_t, page, 1) + static struct workqueue_struct *z_erofs_workqueue __read_mostly; static struct kmem_cache *z_erofs_workgroup_cachep __read_mostly; void z_erofs_exit_zip_subsystem(void) { - BUG_ON(z_erofs_workqueue == NULL); - BUG_ON(z_erofs_workgroup_cachep == NULL); - destroy_workqueue(z_erofs_workqueue); kmem_cache_destroy(z_erofs_workgroup_cachep); } @@ -35,21 +53,48 @@ static inline int init_unzip_workqueue(void) * we don't need too many threads, limiting threads * could improve scheduling performance. */ - z_erofs_workqueue = alloc_workqueue("erofs_unzipd", - WQ_UNBOUND | WQ_HIGHPRI | WQ_CPU_INTENSIVE, - onlinecpus + onlinecpus / 4); + z_erofs_workqueue = + alloc_workqueue("erofs_unzipd", + WQ_UNBOUND | WQ_HIGHPRI | WQ_CPU_INTENSIVE, + onlinecpus + onlinecpus / 4); - return z_erofs_workqueue != NULL ? 0 : -ENOMEM; + return z_erofs_workqueue ? 
0 : -ENOMEM; +} + +static void init_once(void *ptr) +{ + struct z_erofs_vle_workgroup *grp = ptr; + struct z_erofs_vle_work *const work = + z_erofs_vle_grab_primary_work(grp); + unsigned int i; + + mutex_init(&work->lock); + work->nr_pages = 0; + work->vcnt = 0; + for (i = 0; i < Z_EROFS_CLUSTER_MAX_PAGES; ++i) + grp->compressed_pages[i] = NULL; +} + +static void init_always(struct z_erofs_vle_workgroup *grp) +{ + struct z_erofs_vle_work *const work = + z_erofs_vle_grab_primary_work(grp); + + atomic_set(&grp->obj.refcount, 1); + grp->flags = 0; + + DBG_BUGON(work->nr_pages); + DBG_BUGON(work->vcnt); } int __init z_erofs_init_zip_subsystem(void) { z_erofs_workgroup_cachep = kmem_cache_create("erofs_compress", - Z_EROFS_WORKGROUP_SIZE, 0, - SLAB_RECLAIM_ACCOUNT, NULL); + Z_EROFS_WORKGROUP_SIZE, 0, + SLAB_RECLAIM_ACCOUNT, init_once); - if (z_erofs_workgroup_cachep != NULL) { + if (z_erofs_workgroup_cachep) { if (!init_unzip_workqueue()) return 0; @@ -98,38 +143,58 @@ struct z_erofs_vle_work_builder { { .work = NULL, .role = Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED } #ifdef EROFS_FS_HAS_MANAGED_CACHE - -static bool grab_managed_cache_pages(struct address_space *mapping, - erofs_blk_t start, - struct page **compressed_pages, - int clusterblks, - bool reserve_allocation) +static void preload_compressed_pages(struct z_erofs_vle_work_builder *bl, + struct address_space *mc, + pgoff_t index, + unsigned int clusterpages, + enum z_erofs_cache_alloctype type, + struct list_head *pagepool, + gfp_t gfp) { - bool noio = true; - unsigned int i; + struct page **const pages = bl->compressed_pages; + const unsigned int remaining = bl->compressed_deficit; + bool standalone = true; + unsigned int i, j = 0; - /* TODO: optimize by introducing find_get_pages_range */ - for (i = 0; i < clusterblks; ++i) { - struct page *page, *found; + if (bl->role < Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED) + return; + + gfp = mapping_gfp_constraint(mc, gfp) & ~__GFP_RECLAIM; + + index += clusterpages - remaining; + + for (i = 0; i < remaining; ++i) { + struct page *page; + compressed_page_t t; - if (READ_ONCE(compressed_pages[i]) != NULL) + /* the compressed page was loaded before */ + if (READ_ONCE(pages[i])) continue; - page = found = find_get_page(mapping, start + i); - if (found == NULL) { - noio = false; - if (!reserve_allocation) - continue; - page = EROFS_UNALLOCATED_CACHED_PAGE; + page = find_get_page(mc, index + i); + + if (page) { + t = tag_compressed_page_justfound(page); + } else if (type == DELAYEDALLOC) { + t = tagptr_init(compressed_page_t, PAGE_UNALLOCATED); + } else { /* DONTALLOC */ + if (standalone) + j = i; + standalone = false; + continue; } - if (NULL == cmpxchg(compressed_pages + i, NULL, page)) + if (!cmpxchg_relaxed(&pages[i], NULL, tagptr_cast_ptr(t))) continue; - if (found != NULL) - put_page(found); + if (page) + put_page(page); } - return noio; + bl->compressed_pages += j; + bl->compressed_deficit = remaining - j; + + if (standalone) + bl->role = Z_EROFS_VLE_WORK_PRIMARY; } /* called by erofs_shrinker to get rid of all compressed_pages */ @@ -138,7 +203,7 @@ int erofs_try_to_free_all_cached_pages(struct erofs_sb_info *sbi, { struct z_erofs_vle_workgroup *const grp = container_of(egrp, struct z_erofs_vle_workgroup, obj); - struct address_space *const mapping = sbi->managed_cache->i_mapping; + struct address_space *const mapping = MNGD_MAPPING(sbi); const int clusterpages = erofs_clusterpages(sbi); int i; @@ -149,7 +214,7 @@ int erofs_try_to_free_all_cached_pages(struct erofs_sb_info *sbi, for (i = 0; i < 
clusterpages; ++i) { struct page *page = grp->compressed_pages[i]; - if (page == NULL || page->mapping != mapping) + if (!page || page->mapping != mapping) continue; /* block other users from reclaiming or migrating the page */ @@ -201,6 +266,17 @@ int erofs_try_to_free_cached_page(struct address_space *mapping, } return ret; } +#else +static void preload_compressed_pages(struct z_erofs_vle_work_builder *bl, + struct address_space *mc, + pgoff_t index, + unsigned int clusterpages, + enum z_erofs_cache_alloctype type, + struct list_head *pagepool, + gfp_t gfp) +{ + /* nowhere to load compressed pages from */ +} #endif /* page_type must be Z_EROFS_PAGE_TYPE_EXCLUSIVE */ @@ -210,7 +286,7 @@ static inline bool try_to_reuse_as_compressed_page( { while (b->compressed_deficit) { --b->compressed_deficit; - if (NULL == cmpxchg(b->compressed_pages++, NULL, page)) + if (!cmpxchg(b->compressed_pages++, NULL, page)) return true; } @@ -250,11 +326,11 @@ static inline bool try_to_claim_workgroup( retry: if (grp->next == Z_EROFS_VLE_WORKGRP_NIL) { /* type 1, nil workgroup */ - if (Z_EROFS_VLE_WORKGRP_NIL != cmpxchg(&grp->next, - Z_EROFS_VLE_WORKGRP_NIL, *owned_head)) + if (cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_NIL, + *owned_head) != Z_EROFS_VLE_WORKGRP_NIL) goto retry; - *owned_head = grp; + *owned_head = &grp->next; *hosted = true; } else if (grp->next == Z_EROFS_VLE_WORKGRP_TAIL) { /* @@ -262,8 +338,8 @@ retry: * be careful that its submission itself is governed * by the original owned chain. */ - if (Z_EROFS_VLE_WORKGRP_TAIL != cmpxchg(&grp->next, - Z_EROFS_VLE_WORKGRP_TAIL, *owned_head)) + if (cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL, + *owned_head) != Z_EROFS_VLE_WORKGRP_TAIL) goto retry; *owned_head = Z_EROFS_VLE_WORKGRP_TAIL; @@ -293,7 +369,7 @@ z_erofs_vle_work_lookup(const struct z_erofs_vle_work_finder *f) struct z_erofs_vle_work *work; egrp = erofs_find_workgroup(f->sb, f->idx, &tag); - if (egrp == NULL) { + if (!egrp) { *f->grp_ret = NULL; return NULL; } @@ -366,13 +442,17 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f, struct z_erofs_vle_work *work; /* if multiref is disabled, grp should never be nullptr */ - BUG_ON(grp != NULL); + if (unlikely(grp)) { + DBG_BUGON(1); + return ERR_PTR(-EINVAL); + } /* no available workgroup, let's allocate one */ - grp = kmem_cache_zalloc(z_erofs_workgroup_cachep, GFP_NOFS); - if (unlikely(grp == NULL)) + grp = kmem_cache_alloc(z_erofs_workgroup_cachep, GFP_NOFS); + if (unlikely(!grp)) return ERR_PTR(-ENOMEM); + init_always(grp); grp->obj.index = f->idx; grp->llen = map->m_llen; @@ -380,7 +460,6 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f, (map->m_flags & EROFS_MAP_ZIPPED) ? Z_EROFS_VLE_WORKGRP_FMT_LZ4 : Z_EROFS_VLE_WORKGRP_FMT_PLAIN); - atomic_set(&grp->obj.refcount, 1); /* new workgrps have been claimed as type 1 */ WRITE_ONCE(grp->next, *f->owned_head); @@ -393,20 +472,24 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f, work = z_erofs_vle_grab_primary_work(grp); work->pageofs = f->pageofs; - mutex_init(&work->lock); + /* + * lock all primary followed works before visible to others + * and mutex_trylock *never* fails for a new workgroup. 
+ */ + mutex_trylock(&work->lock); if (gnew) { int err = erofs_register_workgroup(f->sb, &grp->obj, 0); if (err) { + mutex_unlock(&work->lock); kmem_cache_free(z_erofs_workgroup_cachep, grp); return ERR_PTR(-EAGAIN); } } - *f->owned_head = *f->grp_ret = grp; - - mutex_lock(&work->lock); + *f->owned_head = &grp->next; + *f->grp_ret = grp; return work; } @@ -431,7 +514,7 @@ static int z_erofs_vle_work_iter_begin(struct z_erofs_vle_work_builder *builder, }; struct z_erofs_vle_work *work; - DBG_BUGON(builder->work != NULL); + DBG_BUGON(builder->work); /* must be Z_EROFS_WORK_TAIL or the next chained work */ DBG_BUGON(*owned_head == Z_EROFS_VLE_WORKGRP_NIL); @@ -441,7 +524,7 @@ static int z_erofs_vle_work_iter_begin(struct z_erofs_vle_work_builder *builder, repeat: work = z_erofs_vle_work_lookup(&finder); - if (work != NULL) { + if (work) { unsigned int orig_llen; /* increase workgroup `llen' if needed */ @@ -519,7 +602,7 @@ z_erofs_vle_work_iter_end(struct z_erofs_vle_work_builder *builder) { struct z_erofs_vle_work *work = builder->work; - if (work == NULL) + if (!work) return false; z_erofs_pagevec_ctor_exit(&builder->vector, false); @@ -542,7 +625,7 @@ static inline struct page *__stagingpage_alloc(struct list_head *pagepool, { struct page *page = erofs_allocpage(pagepool, gfp); - if (unlikely(page == NULL)) + if (unlikely(!page)) return NULL; page->mapping = Z_EROFS_MAPPING_STAGING; @@ -557,10 +640,9 @@ struct z_erofs_vle_frontend { z_erofs_vle_owned_workgrp_t owned_head; - bool initial; -#if (EROFS_FS_ZIP_CACHE_LVL >= 2) - erofs_off_t cachedzone_la; -#endif + /* used for applying cache strategy on the fly */ + bool backmost; + erofs_off_t headoffset; }; #define VLE_FRONTEND_INIT(__i) { \ @@ -571,7 +653,27 @@ struct z_erofs_vle_frontend { }, \ .builder = VLE_WORK_BUILDER_INIT(), \ .owned_head = Z_EROFS_VLE_WORKGRP_TAIL, \ - .initial = true, } + .backmost = true, } + +#ifdef EROFS_FS_HAS_MANAGED_CACHE +static inline bool +should_alloc_managed_pages(struct z_erofs_vle_frontend *fe, erofs_off_t la) +{ + if (fe->backmost) + return true; + + if (EROFS_FS_ZIP_CACHE_LVL >= 2) + return la < fe->headoffset; + + return false; +} +#else +static inline bool +should_alloc_managed_pages(struct z_erofs_vle_frontend *fe, erofs_off_t la) +{ + return false; +} +#endif static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe, struct page *page, @@ -587,18 +689,11 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe, bool tight = builder_is_followed(builder); struct z_erofs_vle_work *work = builder->work; -#ifdef EROFS_FS_HAS_MANAGED_CACHE - struct address_space *const mngda = sbi->managed_cache->i_mapping; - struct z_erofs_vle_workgroup *grp; - bool noio_outoforder; -#endif - + enum z_erofs_cache_alloctype cache_strategy; enum z_erofs_page_type page_type; unsigned int cur, end, spiltted, index; int err = 0; - trace_erofs_readpage(page, false); - /* register locked file pages as online pages in pack */ z_erofs_onlinepage_init(page); @@ -616,7 +711,7 @@ repeat: debugln("%s: [out-of-range] pos %llu", __func__, offset + cur); if (z_erofs_vle_work_iter_end(builder)) - fe->initial = false; + fe->backmost = false; map->m_la = offset + cur; map->m_llen = 0; @@ -634,20 +729,16 @@ repeat: if (unlikely(err)) goto err_out; -#ifdef EROFS_FS_HAS_MANAGED_CACHE - grp = fe->builder.grp; - - /* let's do out-of-order decompression for noio */ - noio_outoforder = grab_managed_cache_pages(mngda, - erofs_blknr(map->m_pa), - grp->compressed_pages, erofs_blknr(map->m_plen), - /* compressed page caching 
selection strategy */ - fe->initial | (EROFS_FS_ZIP_CACHE_LVL >= 2 ? - map->m_la < fe->cachedzone_la : 0)); - - if (noio_outoforder && builder_is_followed(builder)) - builder->role = Z_EROFS_VLE_WORK_PRIMARY; -#endif + /* preload all compressed pages (maybe downgrade role if necessary) */ + if (should_alloc_managed_pages(fe, map->m_la)) + cache_strategy = DELAYEDALLOC; + else + cache_strategy = DONTALLOC; + + preload_compressed_pages(builder, MNGD_MAPPING(sbi), + map->m_pa / PAGE_SIZE, + map->m_plen / PAGE_SIZE, + cache_strategy, page_pool, GFP_KERNEL); tight &= builder_is_followed(builder); work = builder->work; @@ -717,13 +808,18 @@ static void z_erofs_vle_unzip_kickoff(void *ptr, int bios) struct z_erofs_vle_unzip_io *io = tagptr_unfold_ptr(t); bool background = tagptr_unfold_tags(t); - if (atomic_add_return(bios, &io->pending_bios)) + if (!background) { + unsigned long flags; + + spin_lock_irqsave(&io->u.wait.lock, flags); + if (!atomic_add_return(bios, &io->pending_bios)) + wake_up_locked(&io->u.wait); + spin_unlock_irqrestore(&io->u.wait.lock, flags); return; + } - if (background) + if (!atomic_add_return(bios, &io->pending_bios)) queue_work(z_erofs_workqueue, &io->u.work); - else - wake_up(&io->u.wait); } static inline void z_erofs_vle_read_endio(struct bio *bio) @@ -732,7 +828,7 @@ static inline void z_erofs_vle_read_endio(struct bio *bio) unsigned int i; struct bio_vec *bvec; #ifdef EROFS_FS_HAS_MANAGED_CACHE - struct address_space *mngda = NULL; + struct address_space *mc = NULL; #endif bio_for_each_segment_all(bvec, bio, i) { @@ -740,21 +836,21 @@ static inline void z_erofs_vle_read_endio(struct bio *bio) bool cachemngd = false; DBG_BUGON(PageUptodate(page)); - BUG_ON(page->mapping == NULL); + DBG_BUGON(!page->mapping); #ifdef EROFS_FS_HAS_MANAGED_CACHE - if (unlikely(mngda == NULL && !z_erofs_is_stagingpage(page))) { + if (unlikely(!mc && !z_erofs_is_stagingpage(page))) { struct inode *const inode = page->mapping->host; struct super_block *const sb = inode->i_sb; - mngda = EROFS_SB(sb)->managed_cache->i_mapping; + mc = MNGD_MAPPING(EROFS_SB(sb)); } /* - * If mngda has not gotten, it equals NULL, + * If mc has not been grabbed yet, it equals NULL, * however, page->mapping can never be NULL if working properly. 
*/ - cachemngd = (page->mapping == mngda); + cachemngd = (page->mapping == mc); #endif if (unlikely(err)) @@ -778,9 +874,6 @@ static int z_erofs_vle_unzip(struct super_block *sb, struct list_head *page_pool) { struct erofs_sb_info *const sbi = EROFS_SB(sb); -#ifdef EROFS_FS_HAS_MANAGED_CACHE - struct address_space *const mngda = sbi->managed_cache->i_mapping; -#endif const unsigned int clusterpages = erofs_clusterpages(sbi); struct z_erofs_pagevec_ctor ctor; @@ -798,7 +891,7 @@ static int z_erofs_vle_unzip(struct super_block *sb, might_sleep(); work = z_erofs_vle_grab_primary_work(grp); - BUG_ON(!READ_ONCE(work->nr_pages)); + DBG_BUGON(!READ_ONCE(work->nr_pages)); mutex_lock(&work->lock); nr_pages = work->nr_pages; @@ -814,7 +907,7 @@ repeat: sizeof(struct page *), GFP_KERNEL); /* fallback to global pagemap for the lowmem scenario */ - if (unlikely(pages == NULL)) { + if (unlikely(!pages)) { if (nr_pages > Z_EROFS_VLE_VMAP_GLOBAL_PAGES) goto repeat; else { @@ -836,8 +929,8 @@ repeat: page = z_erofs_pagevec_ctor_dequeue(&ctor, &page_type); /* all pages in pagevec ought to be valid */ - DBG_BUGON(page == NULL); - DBG_BUGON(page->mapping == NULL); + DBG_BUGON(!page); + DBG_BUGON(!page->mapping); if (z_erofs_gather_if_stagingpage(page_pool, page)) continue; @@ -847,8 +940,8 @@ repeat: else pagenr = z_erofs_onlinepage_index(page); - BUG_ON(pagenr >= nr_pages); - BUG_ON(pages[pagenr] != NULL); + DBG_BUGON(pagenr >= nr_pages); + DBG_BUGON(pages[pagenr]); pages[pagenr] = page; } @@ -865,15 +958,14 @@ repeat: page = compressed_pages[i]; /* all compressed pages ought to be valid */ - DBG_BUGON(page == NULL); - DBG_BUGON(page->mapping == NULL); + DBG_BUGON(!page); + DBG_BUGON(!page->mapping); if (z_erofs_is_stagingpage(page)) continue; #ifdef EROFS_FS_HAS_MANAGED_CACHE - else if (page->mapping == mngda) { - BUG_ON(PageLocked(page)); - BUG_ON(!PageUptodate(page)); + if (page->mapping == MNGD_MAPPING(sbi)) { + DBG_BUGON(!PageUptodate(page)); continue; } #endif @@ -881,8 +973,8 @@ repeat: /* only non-head page could be reused as a compressed page */ pagenr = z_erofs_onlinepage_index(page); - BUG_ON(pagenr >= nr_pages); - BUG_ON(pages[pagenr] != NULL); + DBG_BUGON(pagenr >= nr_pages); + DBG_BUGON(pages[pagenr]); ++sparsemem_pages; pages[pagenr] = page; @@ -892,9 +984,6 @@ repeat: llen = (nr_pages << PAGE_SHIFT) - work->pageofs; if (z_erofs_vle_workgrp_fmt(grp) == Z_EROFS_VLE_WORKGRP_FMT_PLAIN) { - /* FIXME! 
this should be fixed in the future */ - BUG_ON(grp->llen != llen); - err = z_erofs_vle_plain_copy(compressed_pages, clusterpages, pages, nr_pages, work->pageofs); goto out; @@ -909,13 +998,11 @@ repeat: if (err != -ENOTSUPP) goto out_percpu; - if (sparsemem_pages >= nr_pages) { - BUG_ON(sparsemem_pages > nr_pages); + if (sparsemem_pages >= nr_pages) goto skip_allocpage; - } for (i = 0; i < nr_pages; ++i) { - if (pages[i] != NULL) + if (pages[i]) continue; pages[i] = __stagingpage_alloc(page_pool, GFP_NOFS); @@ -932,7 +1019,7 @@ skip_allocpage: out: for (i = 0; i < nr_pages; ++i) { page = pages[i]; - DBG_BUGON(page->mapping == NULL); + DBG_BUGON(!page->mapping); /* recycle all individual staging pages */ if (z_erofs_gather_if_stagingpage(page_pool, page)) @@ -949,7 +1036,7 @@ out_percpu: page = compressed_pages[i]; #ifdef EROFS_FS_HAS_MANAGED_CACHE - if (page->mapping == mngda) + if (page->mapping == MNGD_MAPPING(sbi)) continue; #endif /* recycle all individual staging pages */ @@ -992,7 +1079,7 @@ static void z_erofs_vle_unzip_all(struct super_block *sb, /* it's not possible that 'owned' equals NULL */ DBG_BUGON(owned == Z_EROFS_VLE_WORKGRP_NIL); - grp = owned; + grp = container_of(owned, struct z_erofs_vle_workgroup, next); owned = READ_ONCE(grp->next); z_erofs_vle_unzip(sb, grp, page_pool); @@ -1005,40 +1092,159 @@ static void z_erofs_vle_unzip_wq(struct work_struct *work) struct z_erofs_vle_unzip_io_sb, io.u.work); LIST_HEAD(page_pool); - BUG_ON(iosb->io.head == Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); + DBG_BUGON(iosb->io.head == Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); z_erofs_vle_unzip_all(iosb->sb, &iosb->io, &page_pool); put_pages_list(&page_pool); kvfree(iosb); } -static inline struct z_erofs_vle_unzip_io * -prepare_io_handler(struct super_block *sb, - struct z_erofs_vle_unzip_io *io, - bool background) +static struct page * +pickup_page_for_submission(struct z_erofs_vle_workgroup *grp, + unsigned int nr, + struct list_head *pagepool, + struct address_space *mc, + gfp_t gfp) +{ + /* determined at compile time to avoid too many #ifdefs */ + const bool nocache = __builtin_constant_p(mc) ? !mc : false; + const pgoff_t index = grp->obj.index; + bool tocache = false; + + struct address_space *mapping; + struct page *oldpage, *page; + + compressed_page_t t; + int justfound; + +repeat: + page = READ_ONCE(grp->compressed_pages[nr]); + oldpage = page; + + if (!page) + goto out_allocpage; + + /* + * the cached page has not been allocated and + * a placeholder is out there, prepare it now. + */ + if (!nocache && page == PAGE_UNALLOCATED) { + tocache = true; + goto out_allocpage; + } + + /* process the target tagged pointer */ + t = tagptr_init(compressed_page_t, page); + justfound = tagptr_unfold_tags(t); + page = tagptr_unfold_ptr(t); + + mapping = READ_ONCE(page->mapping); + + /* + * if managed cache is disabled, there is no way to + * get such a cached-like page. + */ + if (nocache) { + /* if managed cache is disabled, `justfound' is impossible */ + DBG_BUGON(justfound); + + /* and it should be locked, not uptodate, and not truncated */ + DBG_BUGON(!PageLocked(page)); + DBG_BUGON(PageUptodate(page)); + DBG_BUGON(!mapping); + goto out; + } + + /* + * unmanaged (file) pages are all locked solidly, + * therefore `mapping' can never be NULL. + */ + if (mapping && mapping != mc) + /* ought to be unmanaged pages */ + goto out; + + lock_page(page); + + /* only true if page reclaim goes wrong, should never happen */ + DBG_BUGON(justfound && PagePrivate(page)); + + /* the page is still in managed cache */ + if (page->mapping == mc) { + WRITE_ONCE(grp->compressed_pages[nr], page); + + if (!PagePrivate(page)) { + /* + * impossible to be !PagePrivate(page) under + * the current restriction if the page is + * already in compressed_pages[]. + */ + DBG_BUGON(!justfound); + + justfound = 0; + set_page_private(page, (unsigned long)grp); + SetPagePrivate(page); + } + + /* no need to submit io if it is already up-to-date */ + if (PageUptodate(page)) { + unlock_page(page); + page = NULL; + } + goto out; + } + + /* + * the managed page has been truncated; it's unsafe to + * reuse this one, let's allocate a new cache-managed page. + */ + DBG_BUGON(page->mapping); + DBG_BUGON(!justfound); + + tocache = true; + unlock_page(page); + put_page(page); +out_allocpage: + page = __stagingpage_alloc(pagepool, gfp); + if (oldpage != cmpxchg(&grp->compressed_pages[nr], oldpage, page)) { + list_add(&page->lru, pagepool); + cpu_relax(); + goto repeat; + } + if (nocache || !tocache) + goto out; + if (add_to_page_cache_lru(page, mc, index + nr, gfp)) { + page->mapping = Z_EROFS_MAPPING_STAGING; + goto out; + } + + set_page_private(page, (unsigned long)grp); + SetPagePrivate(page); +out: /* the only exit (for tracing and debugging) */ + return page; +} + +static struct z_erofs_vle_unzip_io * +jobqueue_init(struct super_block *sb, + struct z_erofs_vle_unzip_io *io, + bool foreground) { struct z_erofs_vle_unzip_io_sb *iosb; - if (!background) { + if (foreground) { /* waitqueue available for foreground io */ - BUG_ON(io == NULL); + DBG_BUGON(!io); init_waitqueue_head(&io->u.wait); atomic_set(&io->pending_bios, 0); goto out; } - if (io != NULL) - BUG(); - else { - /* allocate extra io descriptor for background io */ - iosb = kvzalloc(sizeof(struct z_erofs_vle_unzip_io_sb), + iosb = kvzalloc(sizeof(struct z_erofs_vle_unzip_io_sb), GFP_KERNEL | __GFP_NOFAIL); - BUG_ON(iosb == NULL); - - io = &iosb->io; - } + DBG_BUGON(!iosb); + /* initialize fields in the allocated descriptor */ + io = &iosb->io; iosb->sb = sb; INIT_WORK(&io->u.work, z_erofs_vle_unzip_wq); out: @@ -1046,48 +1252,105 @@ out: return io; } +/* define workgroup jobqueue types */ +enum { +#ifdef EROFS_FS_HAS_MANAGED_CACHE + JQ_BYPASS, +#endif + JQ_SUBMIT, + NR_JOBQUEUES, +};
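The qtail[] arrays introduced with these jobqueues keep a pointer to each queue's tail *link* rather than to its tail node, so appending a workgroup is two pointer stores with no list traversal. A minimal user-space sketch of the same two-queue split, assuming made-up job/queue types (only the JQ_BYPASS/JQ_SUBMIT naming and the tail-link idea mirror the jobqueueset_init()/move_to_bypass_jobqueue() code below):

#include <stdbool.h>
#include <stdio.h>

enum { JQ_BYPASS, JQ_SUBMIT, NR_JOBQUEUES };

struct job {
	int id;
	bool fully_cached;	/* all compressed pages already up-to-date */
	struct job *next;
};

int main(void)
{
	struct job jobs[4] = {
		{ 0, true,  NULL }, { 1, false, NULL },
		{ 2, true,  NULL }, { 3, false, NULL },
	};
	struct job *head[NR_JOBQUEUES] = { NULL, NULL };
	/* each qtail entry points at the link to overwrite next */
	struct job **qtail[NR_JOBQUEUES] = {
		&head[JQ_BYPASS], &head[JQ_SUBMIT],
	};

	for (int i = 0; i < 4; i++) {
		/* fully-cached jobs need no device io: bypass submission */
		int jq = jobs[i].fully_cached ? JQ_BYPASS : JQ_SUBMIT;

		*qtail[jq] = &jobs[i];		/* link at the current tail */
		qtail[jq] = &jobs[i].next;	/* advance the tail link */
	}

	for (int jq = 0; jq < NR_JOBQUEUES; jq++)
		for (struct job *j = head[jq]; j; j = j->next)
			printf("queue %d: job %d\n", jq, j->id);
	return 0;
}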
+ +static void *jobqueueset_init(struct super_block *sb, + z_erofs_vle_owned_workgrp_t qtail[], + struct z_erofs_vle_unzip_io *q[], + struct z_erofs_vle_unzip_io *fgq, + bool forcefg) +{ +#ifdef EROFS_FS_HAS_MANAGED_CACHE + /* + * if managed cache is enabled, a bypass jobqueue is needed: + * no read from the device is needed for any workgroup in this queue. + */ + q[JQ_BYPASS] = jobqueue_init(sb, fgq + JQ_BYPASS, true); + qtail[JQ_BYPASS] = &q[JQ_BYPASS]->head; +#endif + + q[JQ_SUBMIT] = jobqueue_init(sb, fgq + JQ_SUBMIT, forcefg); + qtail[JQ_SUBMIT] = &q[JQ_SUBMIT]->head; + + return tagptr_cast_ptr(tagptr_fold(tagptr1_t, q[JQ_SUBMIT], !forcefg)); +} + #ifdef EROFS_FS_HAS_MANAGED_CACHE -/* true - unlocked (noio), false - locked (need submit io) */ -static inline bool recover_managed_page(struct z_erofs_vle_workgroup *grp, - struct page *page) +static void move_to_bypass_jobqueue(struct z_erofs_vle_workgroup *grp, + z_erofs_vle_owned_workgrp_t qtail[], + z_erofs_vle_owned_workgrp_t owned_head) { - wait_on_page_locked(page); - if (PagePrivate(page) && PageUptodate(page)) - return true; + z_erofs_vle_owned_workgrp_t *const submit_qtail = qtail[JQ_SUBMIT]; + z_erofs_vle_owned_workgrp_t *const bypass_qtail = qtail[JQ_BYPASS]; - lock_page(page); - if (unlikely(!PagePrivate(page))) { - set_page_private(page, (unsigned long)grp); - SetPagePrivate(page); - } - if (unlikely(PageUptodate(page))) { - unlock_page(page); - return true; - } - return false; + DBG_BUGON(owned_head == Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); + if (owned_head == Z_EROFS_VLE_WORKGRP_TAIL) + owned_head = Z_EROFS_VLE_WORKGRP_TAIL_CLOSED; + + WRITE_ONCE(grp->next, Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); + + WRITE_ONCE(*submit_qtail, owned_head); + WRITE_ONCE(*bypass_qtail, &grp->next); + + qtail[JQ_BYPASS] = &grp->next; } -#define __FSIO_1 1 +static bool postsubmit_is_all_bypassed(struct z_erofs_vle_unzip_io *q[], + unsigned int nr_bios, + bool force_fg) +{ + /* + * although background decompression is preferred, no one is + * pending for submission; don't issue a workqueue for + * decompression but drop it directly instead. + */ + if (force_fg || nr_bios) + return false; + + kvfree(container_of(q[JQ_SUBMIT], + struct z_erofs_vle_unzip_io_sb, + io)); + return true; +} #else -#define __FSIO_1 0 +static void move_to_bypass_jobqueue(struct z_erofs_vle_workgroup *grp, + z_erofs_vle_owned_workgrp_t qtail[], + z_erofs_vle_owned_workgrp_t owned_head) +{ + /* impossible to bypass submission with managed cache disabled */ + DBG_BUGON(1); +} + +static bool postsubmit_is_all_bypassed(struct z_erofs_vle_unzip_io *q[], + unsigned int nr_bios, + bool force_fg) +{ + /* nr_bios should be > 0 if managed cache is disabled */ + DBG_BUGON(!nr_bios); + return false; +} #endif static bool z_erofs_vle_submit_all(struct super_block *sb, z_erofs_vle_owned_workgrp_t owned_head, struct list_head *pagepool, - struct z_erofs_vle_unzip_io *fg_io, + struct z_erofs_vle_unzip_io *fgq, bool force_fg) { struct erofs_sb_info *const sbi = EROFS_SB(sb); const unsigned int clusterpages = erofs_clusterpages(sbi); const gfp_t gfp = GFP_NOFS; -#ifdef EROFS_FS_HAS_MANAGED_CACHE - struct address_space *const mngda = sbi->managed_cache->i_mapping; - struct z_erofs_vle_workgroup *lstgrp_noio = NULL, *lstgrp_io = NULL; -#endif - struct z_erofs_vle_unzip_io *ios[1 + __FSIO_1]; + + z_erofs_vle_owned_workgrp_t qtail[NR_JOBQUEUES]; + struct z_erofs_vle_unzip_io *q[NR_JOBQUEUES]; struct bio *bio; - tagptr1_t bi_private; + void *bi_private; /* since bio will be NULL, no need to initialize last_index */ pgoff_t uninitialized_var(last_index); bool force_submit = false; @@ -1096,105 +1359,55 @@ static bool z_erofs_vle_submit_all(struct super_block *sb, if (unlikely(owned_head == Z_EROFS_VLE_WORKGRP_TAIL)) return false; - /* - * force_fg == 1, (io, fg_io[0]) no io, (io, fg_io[1]) need submit io - * force_fg == 0, (io, fg_io[0]) no io; (io[1], bg_io) need submit io - */
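jobqueueset_init() above returns the submit-queue descriptor with the foreground/background flag folded into the otherwise-unused low bit of the pointer (tagptr1_t), so the io completion path can recover both the descriptor and the flag from a single bi_private value. A self-contained sketch of that low-bit tagging trick, with hypothetical tag_fold()/tag_unfold_*() helpers standing in for the kernel's tagptr API:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* fold a 1-bit tag into the low bit of an aligned pointer */
static void *tag_fold(void *ptr, unsigned int tag)
{
	uintptr_t v = (uintptr_t)ptr;

	/* the pointer must be at least 2-byte aligned for the bit to be free */
	assert(!(v & 1) && tag <= 1);
	return (void *)(v | tag);
}

static void *tag_unfold_ptr(void *tagged)
{
	return (void *)((uintptr_t)tagged & ~(uintptr_t)1);
}

static unsigned int tag_unfold_tag(void *tagged)
{
	return (uintptr_t)tagged & 1;
}

int main(void)
{
	int io;	/* stands in for the io descriptor carried by bi_private */
	void *bi_private = tag_fold(&io, 1);	/* tag 1: background */

	printf("ptr ok: %d, background: %u\n",
	       tag_unfold_ptr(bi_private) == (void *)&io,
	       tag_unfold_tag(bi_private));
	return 0;
}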
-#ifdef EROFS_FS_HAS_MANAGED_CACHE - ios[0] = prepare_io_handler(sb, fg_io + 0, false); -#endif - - if (force_fg) { - ios[__FSIO_1] = prepare_io_handler(sb, fg_io + __FSIO_1, false); - bi_private = tagptr_fold(tagptr1_t, ios[__FSIO_1], 0); - } else { - ios[__FSIO_1] = prepare_io_handler(sb, NULL, true); - bi_private = tagptr_fold(tagptr1_t, ios[__FSIO_1], 1); - } - - nr_bios = 0; force_submit = false; bio = NULL; + nr_bios = 0; + bi_private = jobqueueset_init(sb, qtail, q, fgq, force_fg); /* by default, all need io submission */ - ios[__FSIO_1]->head = owned_head; + q[JQ_SUBMIT]->head = owned_head; do { struct z_erofs_vle_workgroup *grp; - struct page **compressed_pages, *oldpage, *page; pgoff_t first_index; - unsigned int i = 0; -#ifdef EROFS_FS_HAS_MANAGED_CACHE - unsigned int noio = 0; - bool cachemngd; -#endif + struct page *page; + unsigned int i = 0, bypass = 0; int err; /* no possible 'owned_head' equals the following */ DBG_BUGON(owned_head == Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); DBG_BUGON(owned_head == Z_EROFS_VLE_WORKGRP_NIL); - grp = owned_head; + grp = container_of(owned_head, + struct z_erofs_vle_workgroup, next); /* close the main owned chain at first */ owned_head = cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL, - Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); + Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); first_index = grp->obj.index; - compressed_pages = grp->compressed_pages; - force_submit |= (first_index != last_index + 1); -repeat: - /* fulfill all compressed pages */ - oldpage = page = READ_ONCE(compressed_pages[i]); -#ifdef EROFS_FS_HAS_MANAGED_CACHE - cachemngd = false; - - if (page == EROFS_UNALLOCATED_CACHED_PAGE) { - cachemngd = true; - goto do_allocpage; - } else if (page != NULL) { - if (page->mapping != mngda) - BUG_ON(PageUptodate(page)); - else if (recover_managed_page(grp, page)) { - /* page is uptodate, skip io submission */ - force_submit = true; - ++noio; - goto skippage; - } - } else { -do_allocpage: -#else - if (page != NULL) - BUG_ON(PageUptodate(page)); - else { -#endif - page = __stagingpage_alloc(pagepool, gfp); - - if (oldpage != cmpxchg(compressed_pages + i, - oldpage, page)) { - list_add(&page->lru, pagepool); - goto repeat; -#ifdef EROFS_FS_HAS_MANAGED_CACHE - } else if (cachemngd && !add_to_page_cache_lru(page, - mngda, first_index + i, gfp)) { - set_page_private(page, (unsigned long)grp); - SetPagePrivate(page); -#endif - } +repeat: + page = pickup_page_for_submission(grp, i, pagepool, + MNGD_MAPPING(sbi), gfp); + if (!page) { + force_submit = true; + ++bypass; + goto skippage; } - if (bio != NULL && force_submit) { + if (bio && force_submit) { submit_bio_retry: __submit_bio(bio, REQ_OP_READ, 0); bio = NULL; } - if (bio == NULL) { + if (!bio) { bio = erofs_grab_bio(sb, first_index + i, - BIO_MAX_PAGES, z_erofs_vle_read_endio, true); - bio->bi_private = tagptr_cast_ptr(bi_private); + BIO_MAX_PAGES, + z_erofs_vle_read_endio, true); + bio->bi_private = bi_private; ++nr_bios; } @@ -1205,53 +1418,23 @@ submit_bio_retry: force_submit = false; last_index = first_index + i; -#ifdef EROFS_FS_HAS_MANAGED_CACHE skippage: -#endif if (++i < clusterpages) goto repeat; -#ifdef EROFS_FS_HAS_MANAGED_CACHE - if (noio < clusterpages) { - lstgrp_io = grp; - } else { - z_erofs_vle_owned_workgrp_t iogrp_next = - owned_head == Z_EROFS_VLE_WORKGRP_TAIL ? 
- Z_EROFS_VLE_WORKGRP_TAIL_CLOSED : - owned_head; - - if (lstgrp_io == NULL) - ios[1]->head = iogrp_next; - else - WRITE_ONCE(lstgrp_io->next, iogrp_next); - - if (lstgrp_noio == NULL) - ios[0]->head = grp; - else - WRITE_ONCE(lstgrp_noio->next, grp); - - lstgrp_noio = grp; - } -#endif + if (bypass < clusterpages) + qtail[JQ_SUBMIT] = &grp->next; + else + move_to_bypass_jobqueue(grp, qtail, owned_head); } while (owned_head != Z_EROFS_VLE_WORKGRP_TAIL); - if (bio != NULL) + if (bio) __submit_bio(bio, REQ_OP_READ, 0); -#ifndef EROFS_FS_HAS_MANAGED_CACHE - BUG_ON(!nr_bios); -#else - if (lstgrp_noio != NULL) - WRITE_ONCE(lstgrp_noio->next, Z_EROFS_VLE_WORKGRP_TAIL_CLOSED); - - if (!force_fg && !nr_bios) { - kvfree(container_of(ios[1], - struct z_erofs_vle_unzip_io_sb, io)); + if (postsubmit_is_all_bypassed(q, nr_bios, force_fg)) return true; - } -#endif - z_erofs_vle_unzip_kickoff(tagptr_cast_ptr(bi_private), nr_bios); + z_erofs_vle_unzip_kickoff(bi_private, nr_bios); return true; } @@ -1260,23 +1443,23 @@ static void z_erofs_submit_and_unzip(struct z_erofs_vle_frontend *f, bool force_fg) { struct super_block *sb = f->inode->i_sb; - struct z_erofs_vle_unzip_io io[1 + __FSIO_1]; + struct z_erofs_vle_unzip_io io[NR_JOBQUEUES]; if (!z_erofs_vle_submit_all(sb, f->owned_head, pagepool, io, force_fg)) return; #ifdef EROFS_FS_HAS_MANAGED_CACHE - z_erofs_vle_unzip_all(sb, &io[0], pagepool); + z_erofs_vle_unzip_all(sb, &io[JQ_BYPASS], pagepool); #endif if (!force_fg) return; /* wait until all bios are completed */ - wait_event(io[__FSIO_1].u.wait, - !atomic_read(&io[__FSIO_1].pending_bios)); + wait_event(io[JQ_SUBMIT].u.wait, + !atomic_read(&io[JQ_SUBMIT].pending_bios)); /* let's synchronous decompression */ - z_erofs_vle_unzip_all(sb, &io[__FSIO_1], pagepool); + z_erofs_vle_unzip_all(sb, &io[JQ_SUBMIT], pagepool); } static int z_erofs_vle_normalaccess_readpage(struct file *file, @@ -1287,9 +1470,10 @@ static int z_erofs_vle_normalaccess_readpage(struct file *file, int err; LIST_HEAD(pagepool); -#if (EROFS_FS_ZIP_CACHE_LVL >= 2) - f.cachedzone_la = (erofs_off_t)page->index << PAGE_SHIFT; -#endif + trace_erofs_readpage(page, false); + + f.headoffset = (erofs_off_t)page->index << PAGE_SHIFT; + err = z_erofs_do_read_page(&f, page, &pagepool); (void)z_erofs_vle_work_iter_end(&f.builder); @@ -1300,7 +1484,7 @@ static int z_erofs_vle_normalaccess_readpage(struct file *file, z_erofs_submit_and_unzip(&f, &pagepool, true); out: - if (f.m_iter.mpage != NULL) + if (f.m_iter.mpage) put_page(f.m_iter.mpage); /* clean up the remaining free pages */ @@ -1315,8 +1499,8 @@ static int z_erofs_vle_normalaccess_readpages(struct file *filp, { struct inode *const inode = mapping->host; struct erofs_sb_info *const sbi = EROFS_I_SB(inode); - const bool sync = __should_decompress_synchronously(sbi, nr_pages); + bool sync = __should_decompress_synchronously(sbi, nr_pages); struct z_erofs_vle_frontend f = VLE_FRONTEND_INIT(inode); gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL); struct page *head = NULL; @@ -1325,26 +1509,31 @@ static int z_erofs_vle_normalaccess_readpages(struct file *filp, trace_erofs_readpages(mapping->host, lru_to_page(pages), nr_pages, false); -#if (EROFS_FS_ZIP_CACHE_LVL >= 2) - f.cachedzone_la = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT; -#endif + f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT; + for (; nr_pages; --nr_pages) { struct page *page = lru_to_page(pages); prefetchw(&page->flags); list_del(&page->lru); + /* + * A pure asynchronous readahead is indicated if 
+ * a PG_readahead marked page is hit first. + * Let's also do asynchronous decompression for this case. + */ + sync &= !(PageReadahead(page) && !head); + if (add_to_page_cache_lru(page, mapping, page->index, gfp)) { list_add(&page->lru, &pagepool); continue; } - BUG_ON(PagePrivate(page)); set_page_private(page, (unsigned long)head); head = page; } - while (head != NULL) { + while (head) { struct page *page = head; int err; @@ -1366,7 +1555,7 @@ static int z_erofs_vle_normalaccess_readpages(struct file *filp, z_erofs_submit_and_unzip(&f, &pagepool, sync); - if (f.m_iter.mpage != NULL) + if (f.m_iter.mpage) put_page(f.m_iter.mpage); /* clean up the remaining free pages */ @@ -1561,7 +1750,7 @@ int z_erofs_map_blocks_iter(struct inode *inode, mblk = vle_extent_blkaddr(inode, lcn); if (!mpage || mpage->index != mblk) { - if (mpage != NULL) + if (mpage) put_page(mpage); mpage = erofs_get_meta_page(ctx.sb, mblk, false); diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h index 3316bc36965d..5a4e1b62c0d1 100644 --- a/drivers/staging/erofs/unzip_vle.h +++ b/drivers/staging/erofs/unzip_vle.h @@ -67,13 +67,13 @@ struct z_erofs_vle_work { #define Z_EROFS_VLE_WORKGRP_FMT_LZ4 1 #define Z_EROFS_VLE_WORKGRP_FMT_MASK 1 -typedef struct z_erofs_vle_workgroup *z_erofs_vle_owned_workgrp_t; +typedef void *z_erofs_vle_owned_workgrp_t; struct z_erofs_vle_workgroup { struct erofs_workgroup obj; struct z_erofs_vle_work work; - /* next owned workgroup */ + /* points to the next owned_workgrp_t */ z_erofs_vle_owned_workgrp_t next; /* compressed pages (including multi-usage pages) */ diff --git a/drivers/staging/erofs/unzip_vle_lz4.c b/drivers/staging/erofs/unzip_vle_lz4.c index 1a428658cbea..52797bd89da1 100644 --- a/drivers/staging/erofs/unzip_vle_lz4.c +++ b/drivers/staging/erofs/unzip_vle_lz4.c @@ -11,6 +11,28 @@ * distribution for more details. */
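The new z_erofs_unzip_lz4() added below is a thin wrapper around LZ4_decompress_safe_partial(), which returns the number of bytes produced on success and a negative value on corrupt input. A function of the same name and contract exists in the user-space liblz4, so the calling pattern can be sketched outside the kernel (a sketch assuming lz4.h and linking with -llz4; the sample string is made up):

#include <lz4.h>
#include <stdio.h>

int main(void)
{
	const char src[] = "erofs decompresses fixed-size blocks of compressed data";
	char comp[128], dst[128];

	int clen = LZ4_compress_default(src, comp, sizeof(src), sizeof(comp));
	if (clen <= 0)
		return 1;

	/* regenerate only the first 16 bytes of the original buffer */
	int ret = LZ4_decompress_safe_partial(comp, dst, clen, 16, sizeof(dst));
	if (ret < 0) {
		/* negative return: corrupted input, mirroring the -EIO path */
		fprintf(stderr, "decompression failed: %d\n", ret);
		return 1;
	}
	printf("%.*s\n", ret, dst);	/* ret <= 16 bytes were written */
	return 0;
}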
*/ #include "unzip_vle.h" +#include + +int z_erofs_unzip_lz4(void *in, void *out, size_t inlen, size_t outlen) +{ + int ret = LZ4_decompress_safe_partial(in, out, inlen, outlen, outlen); + + if (ret >= 0) + return ret; + + /* + * LZ4_decompress_safe_partial will return an error code + * (< 0) if decompression failed + */ + errln("%s, failed to decompress, in[%p, %zu] outlen[%p, %zu]", + __func__, in, inlen, out, outlen); + WARN_ON(1); + print_hex_dump(KERN_DEBUG, "raw data [in]: ", DUMP_PREFIX_OFFSET, + 16, 1, in, inlen, true); + print_hex_dump(KERN_DEBUG, "raw data [out]: ", DUMP_PREFIX_OFFSET, + 16, 1, out, outlen, true); + return -EIO; +} #if Z_EROFS_CLUSTER_MAX_PAGES > Z_EROFS_VLE_INLINE_PAGEVECS #define EROFS_PERCPU_NR_PAGES Z_EROFS_CLUSTER_MAX_PAGES @@ -57,7 +79,7 @@ int z_erofs_vle_plain_copy(struct page **compressed_pages, if (compressed_pages[j] != page) continue; - BUG_ON(mirrored[j]); + DBG_BUGON(mirrored[j]); memcpy(percpu_data + j * PAGE_SIZE, dst, PAGE_SIZE); mirrored[j] = true; break; @@ -99,8 +121,6 @@ int z_erofs_vle_plain_copy(struct page **compressed_pages, return 0; } -extern int z_erofs_unzip_lz4(void *in, void *out, size_t inlen, size_t outlen); - int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages, unsigned int clusterpages, struct page **pages, @@ -206,3 +226,4 @@ int z_erofs_vle_unzip_vmap(struct page **compressed_pages, } return ret; } + diff --git a/drivers/staging/erofs/utils.c b/drivers/staging/erofs/utils.c index ea8a962e5c95..b535898ca753 100644 --- a/drivers/staging/erofs/utils.c +++ b/drivers/staging/erofs/utils.c @@ -23,9 +23,6 @@ struct page *erofs_allocpage(struct list_head *pool, gfp_t gfp) list_del(&page->lru); } else { page = alloc_pages(gfp | __GFP_NOFAIL, 0); - - BUG_ON(page == NULL); - BUG_ON(page->mapping != NULL); } return page; } @@ -58,7 +55,7 @@ repeat: /* decrease refcount added by erofs_workgroup_put */ if (unlikely(oldcount == 1)) atomic_long_dec(&erofs_global_shrink_cnt); - BUG_ON(index != grp->index); + DBG_BUGON(index != grp->index); } rcu_read_unlock(); return grp; @@ -71,8 +68,11 @@ int erofs_register_workgroup(struct super_block *sb, struct erofs_sb_info *sbi; int err; - /* grp->refcount should not < 1 */ - BUG_ON(!atomic_read(&grp->refcount)); + /* grp shouldn't be broken or used before */ + if (unlikely(atomic_read(&grp->refcount) != 1)) { + DBG_BUGON(1); + return -EINVAL; + } err = radix_tree_preload(GFP_NOFS); if (err) @@ -83,12 +83,21 @@ int erofs_register_workgroup(struct super_block *sb, grp = xa_tag_pointer(grp, tag); - err = radix_tree_insert(&sbi->workstn_tree, - grp->index, grp); + /* + * Bump up reference count before making this workgroup + * visible to other users in order to avoid potential UAF + * without serialized by erofs_workstn_lock. + */ + __erofs_workgroup_get(grp); - if (!err) { - __erofs_workgroup_get(grp); - } + err = radix_tree_insert(&sbi->workstn_tree, + grp->index, grp); + if (unlikely(err)) + /* + * it's safe to decrease since the workgroup isn't visible + * and refcount >= 2 (cannot be freezed). 
+ */ + __erofs_workgroup_put(grp); erofs_workstn_unlock(sbi); radix_tree_preload_end(); @@ -97,19 +106,94 @@ int erofs_register_workgroup(struct super_block *sb, extern void erofs_workgroup_free_rcu(struct erofs_workgroup *grp); +static void __erofs_workgroup_free(struct erofs_workgroup *grp) +{ + atomic_long_dec(&erofs_global_shrink_cnt); + erofs_workgroup_free_rcu(grp); +} + int erofs_workgroup_put(struct erofs_workgroup *grp) { int count = atomic_dec_return(&grp->refcount); if (count == 1) atomic_long_inc(&erofs_global_shrink_cnt); - else if (!count) { - atomic_long_dec(&erofs_global_shrink_cnt); - erofs_workgroup_free_rcu(grp); - } + else if (!count) + __erofs_workgroup_free(grp); return count; } +#ifdef EROFS_FS_HAS_MANAGED_CACHE +/* for cache-managed case, customized reclaim paths exist */ +static void erofs_workgroup_unfreeze_final(struct erofs_workgroup *grp) +{ + erofs_workgroup_unfreeze(grp, 0); + __erofs_workgroup_free(grp); +} + +bool erofs_try_to_release_workgroup(struct erofs_sb_info *sbi, + struct erofs_workgroup *grp, + bool cleanup) +{ + /* + * for managed cache enabled, the refcount of workgroups + * themselves could be < 0 (freezed). So there is no guarantee + * that all refcount > 0 if managed cache is enabled. + */ + if (!erofs_workgroup_try_to_freeze(grp, 1)) + return false; + + /* + * note that all cached pages should be unlinked + * before delete it from the radix tree. + * Otherwise some cached pages of an orphan old workgroup + * could be still linked after the new one is available. + */ + if (erofs_try_to_free_all_cached_pages(sbi, grp)) { + erofs_workgroup_unfreeze(grp, 1); + return false; + } + + /* + * it is impossible to fail after the workgroup is freezed, + * however in order to avoid some race conditions, add a + * DBG_BUGON to observe this in advance. + */ + DBG_BUGON(xa_untag_pointer(radix_tree_delete(&sbi->workstn_tree, + grp->index)) != grp); + + /* + * if managed cache is enable, the last refcount + * should indicate the related workstation. 
+ */ + erofs_workgroup_unfreeze_final(grp); + return true; +} + +#else +/* for nocache case, no customized reclaim path at all */ +bool erofs_try_to_release_workgroup(struct erofs_sb_info *sbi, + struct erofs_workgroup *grp, + bool cleanup) +{ + int cnt = atomic_read(&grp->refcount); + + DBG_BUGON(cnt <= 0); + DBG_BUGON(cleanup && cnt != 1); + + if (cnt > 1) + return false; + + DBG_BUGON(xa_untag_pointer(radix_tree_delete(&sbi->workstn_tree, + grp->index)) != grp); + + /* (rarely) could be grabbed again when freeing */ + erofs_workgroup_put(grp); + return true; +} + +#endif + unsigned long erofs_shrink_workstation(struct erofs_sb_info *sbi, unsigned long nr_shrink, bool cleanup) @@ -126,41 +210,13 @@ repeat: batch, first_index, PAGEVEC_SIZE); for (i = 0; i < found; ++i) { - int cnt; struct erofs_workgroup *grp = xa_untag_pointer(batch[i]); first_index = grp->index + 1; - cnt = atomic_read(&grp->refcount); - BUG_ON(cnt <= 0); - - if (cleanup) - BUG_ON(cnt != 1); - -#ifndef EROFS_FS_HAS_MANAGED_CACHE - else if (cnt > 1) -#else - if (!erofs_workgroup_try_to_freeze(grp, 1)) -#endif - continue; - - if (xa_untag_pointer(radix_tree_delete(&sbi->workstn_tree, - grp->index)) != grp) { -#ifdef EROFS_FS_HAS_MANAGED_CACHE -skip: - erofs_workgroup_unfreeze(grp, 1); -#endif + /* try to shrink each valid workgroup */ + if (!erofs_try_to_release_workgroup(sbi, grp, cleanup)) continue; - } - -#ifdef EROFS_FS_HAS_MANAGED_CACHE - if (erofs_try_to_free_all_cached_pages(sbi, grp)) - goto skip; - - erofs_workgroup_unfreeze(grp, 1); -#endif - /* (rarely) grabbed again when freeing */ - erofs_workgroup_put(grp); ++freed; if (unlikely(!--nr_shrink)) diff --git a/drivers/staging/fbtft/fbtft_device.c b/drivers/staging/fbtft/fbtft_device.c index 50e97da993e7..046f9d355ecb 100644 --- a/drivers/staging/fbtft/fbtft_device.c +++ b/drivers/staging/fbtft/fbtft_device.c @@ -1455,7 +1455,7 @@ static int __init fbtft_device_init(void) } /* name=list lists all supported displays */ - if (strncmp(name, "list", FBTFT_GPIO_NAME_SIZE) == 0) { + if (strcmp(name, "list") == 0) { pr_info("Supported displays:\n"); for (i = 0; i < ARRAY_SIZE(displays); i++) diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c index 7a7ca67822c5..daabaceeea52 100644 --- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c +++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c @@ -719,9 +719,6 @@ static int port_vlans_add(struct net_device *netdev, struct ethsw_port_priv *port_priv = netdev_priv(netdev); int vid, err = 0; - if (netif_is_bridge_master(vlan->obj.orig_dev)) - return -EOPNOTSUPP; - if (switchdev_trans_ph_prepare(trans)) return 0; @@ -930,8 +927,6 @@ static int swdev_port_obj_del(struct net_device *netdev, static const struct switchdev_ops ethsw_port_switchdev_ops = { .switchdev_port_attr_get = swdev_port_attr_get, .switchdev_port_attr_set = swdev_port_attr_set, - .switchdev_port_obj_add = swdev_port_obj_add, - .switchdev_port_obj_del = swdev_port_obj_del, }; /* For the moment, only flood setting needs to be updated */ @@ -972,6 +967,11 @@ static int port_bridge_leave(struct net_device *netdev) return err; } +static bool ethsw_port_dev_check(const struct net_device *netdev) +{ + return netdev->netdev_ops == ðsw_port_ops; +} + static int port_netdevice_event(struct notifier_block *unused, unsigned long event, void *ptr) { @@ -980,7 +980,7 @@ static int port_netdevice_event(struct notifier_block *unused, struct net_device *upper_dev; int err = 0; - if (netdev->netdev_ops != ðsw_port_ops) + if 
(!ethsw_port_dev_check(netdev)) return NOTIFY_DONE; /* Handle just upper dev link/unlink for the moment */ @@ -1083,10 +1083,51 @@ err_addr_alloc: return NOTIFY_BAD; } +static int +ethsw_switchdev_port_obj_event(unsigned long event, struct net_device *netdev, + struct switchdev_notifier_port_obj_info *port_obj_info) +{ + int err = -EOPNOTSUPP; + + switch (event) { + case SWITCHDEV_PORT_OBJ_ADD: + err = swdev_port_obj_add(netdev, port_obj_info->obj, + port_obj_info->trans); + break; + case SWITCHDEV_PORT_OBJ_DEL: + err = swdev_port_obj_del(netdev, port_obj_info->obj); + break; + } + + port_obj_info->handled = true; + return notifier_from_errno(err); +} + +static int port_switchdev_blocking_event(struct notifier_block *unused, + unsigned long event, void *ptr) +{ + struct net_device *dev = switchdev_notifier_info_to_dev(ptr); + + if (!ethsw_port_dev_check(dev)) + return NOTIFY_DONE; + + switch (event) { + case SWITCHDEV_PORT_OBJ_ADD: /* fall through */ + case SWITCHDEV_PORT_OBJ_DEL: + return ethsw_switchdev_port_obj_event(event, dev, ptr); + } + + return NOTIFY_DONE; +} + static struct notifier_block port_switchdev_nb = { .notifier_call = port_switchdev_event, }; +static struct notifier_block port_switchdev_blocking_nb = { + .notifier_call = port_switchdev_blocking_event, +}; + static int ethsw_register_notifier(struct device *dev) { int err; @@ -1103,8 +1144,16 @@ static int ethsw_register_notifier(struct device *dev) goto err_switchdev_nb; } + err = register_switchdev_blocking_notifier(&port_switchdev_blocking_nb); + if (err) { + dev_err(dev, "Failed to register switchdev blocking notifier\n"); + goto err_switchdev_blocking_nb; + } + return 0; +err_switchdev_blocking_nb: + unregister_switchdev_notifier(&port_switchdev_nb); err_switchdev_nb: unregister_netdevice_notifier(&port_nb); return err; @@ -1123,7 +1172,7 @@ static int ethsw_open(struct ethsw_core *ethsw) for (i = 0; i < ethsw->sw_attr.num_ifs; i++) { port_priv = ethsw->ports[i]; - err = dev_open(port_priv->netdev); + err = dev_open(port_priv->netdev, NULL); if (err) { netdev_err(port_priv->netdev, "dev_open err %d\n", err); return err; @@ -1291,8 +1340,15 @@ static int ethsw_port_init(struct ethsw_port_priv *port_priv, u16 port) static void ethsw_unregister_notifier(struct device *dev) { + struct notifier_block *nb; int err; + nb = &port_switchdev_blocking_nb; + err = unregister_switchdev_blocking_notifier(nb); + if (err) + dev_err(dev, + "Failed to unregister switchdev blocking notifier (%d)\n", err); + err = unregister_switchdev_notifier(&port_switchdev_nb); if (err) dev_err(dev, diff --git a/drivers/staging/fwserial/fwserial.c b/drivers/staging/fwserial/fwserial.c index 173f451b86b7..3e416f5bbcba 100644 --- a/drivers/staging/fwserial/fwserial.c +++ b/drivers/staging/fwserial/fwserial.c @@ -1458,7 +1458,7 @@ static int fwtty_proc_show(struct seq_file *m, void *v) return 0; } -static int fwtty_debugfs_stats_show(struct seq_file *m, void *v) +static int fwtty_stats_show(struct seq_file *m, void *v) { struct fw_serial *serial = m->private; struct fwtty_port *port; @@ -1476,8 +1476,9 @@ static int fwtty_debugfs_stats_show(struct seq_file *m, void *v) } return 0; } +DEFINE_SHOW_ATTRIBUTE(fwtty_stats); -static int fwtty_debugfs_peers_show(struct seq_file *m, void *v) +static int fwtty_peers_show(struct seq_file *m, void *v) { struct fw_serial *serial = m->private; struct fwtty_peer *peer; @@ -1491,32 +1492,7 @@ static int fwtty_debugfs_peers_show(struct seq_file *m, void *v) rcu_read_unlock(); return 0; } - -static int 
fwtty_stats_open(struct inode *inode, struct file *fp) -{ - return single_open(fp, fwtty_debugfs_stats_show, inode->i_private); -} - -static int fwtty_peers_open(struct inode *inode, struct file *fp) -{ - return single_open(fp, fwtty_debugfs_peers_show, inode->i_private); -} - -static const struct file_operations fwtty_stats_fops = { - .owner = THIS_MODULE, - .open = fwtty_stats_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; - -static const struct file_operations fwtty_peers_fops = { - .owner = THIS_MODULE, - .open = fwtty_peers_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(fwtty_peers); static const struct tty_port_operations fwtty_port_ops = { .dtr_rts = fwtty_port_dtr_rts, diff --git a/drivers/staging/gasket/gasket_interrupt.c b/drivers/staging/gasket/gasket_interrupt.c index 49d47afad64f..ad5657d213f0 100644 --- a/drivers/staging/gasket/gasket_interrupt.c +++ b/drivers/staging/gasket/gasket_interrupt.c @@ -184,7 +184,7 @@ gasket_interrupt_msix_init(struct gasket_interrupt_data *interrupt_data) interrupt_data->msix_entries = kcalloc(interrupt_data->num_interrupts, - sizeof(struct msix_entry), GFP_KERNEL); + sizeof(*interrupt_data->msix_entries), GFP_KERNEL); if (!interrupt_data->msix_entries) return -ENOMEM; @@ -322,8 +322,7 @@ int gasket_interrupt_init(struct gasket_dev *gasket_dev) const struct gasket_driver_desc *driver_desc = gasket_get_driver_desc(gasket_dev); - interrupt_data = kzalloc(sizeof(struct gasket_interrupt_data), - GFP_KERNEL); + interrupt_data = kzalloc(sizeof(*interrupt_data), GFP_KERNEL); if (!interrupt_data) return -ENOMEM; gasket_dev->interrupt_data = interrupt_data; @@ -336,17 +335,17 @@ int gasket_interrupt_init(struct gasket_dev *gasket_dev) interrupt_data->pack_width = driver_desc->interrupt_pack_width; interrupt_data->num_configured = 0; - interrupt_data->eventfd_ctxs = kcalloc(driver_desc->num_interrupts, - sizeof(struct eventfd_ctx *), - GFP_KERNEL); + interrupt_data->eventfd_ctxs = + kcalloc(driver_desc->num_interrupts, + sizeof(*interrupt_data->eventfd_ctxs), GFP_KERNEL); if (!interrupt_data->eventfd_ctxs) { kfree(interrupt_data); return -ENOMEM; } - interrupt_data->interrupt_counts = kcalloc(driver_desc->num_interrupts, - sizeof(ulong), - GFP_KERNEL); + interrupt_data->interrupt_counts = + kcalloc(driver_desc->num_interrupts, + sizeof(*interrupt_data->interrupt_counts), GFP_KERNEL); if (!interrupt_data->interrupt_counts) { kfree(interrupt_data->eventfd_ctxs); kfree(interrupt_data); diff --git a/drivers/staging/gasket/gasket_page_table.c b/drivers/staging/gasket/gasket_page_table.c index b7d460cf15fb..26755d9ca41d 100644 --- a/drivers/staging/gasket/gasket_page_table.c +++ b/drivers/staging/gasket/gasket_page_table.c @@ -1088,9 +1088,9 @@ void gasket_page_table_reset(struct gasket_page_table *pg_tbl) } /* See gasket_page_table.h for description. */ -int gasket_page_table_lookup_page( - struct gasket_page_table *pg_tbl, ulong dev_addr, struct page **ppage, - ulong *poffset) +int gasket_page_table_lookup_page(struct gasket_page_table *pg_tbl, + ulong dev_addr, struct page **ppage, + ulong *poffset) { uint page_num; struct gasket_page_table_entry *pte; @@ -1134,9 +1134,9 @@ fail: } /* See gasket_page_table.h for description. 
*/ -bool gasket_page_table_are_addrs_bad( - struct gasket_page_table *pg_tbl, ulong host_addr, ulong dev_addr, - ulong bytes) +bool gasket_page_table_are_addrs_bad(struct gasket_page_table *pg_tbl, + ulong host_addr, ulong dev_addr, + ulong bytes) { if (host_addr & (PAGE_SIZE - 1)) { dev_err(pg_tbl->device, @@ -1150,8 +1150,8 @@ bool gasket_page_table_are_addrs_bad( EXPORT_SYMBOL(gasket_page_table_are_addrs_bad); /* See gasket_page_table.h for description. */ -bool gasket_page_table_is_dev_addr_bad( - struct gasket_page_table *pg_tbl, ulong dev_addr, ulong bytes) +bool gasket_page_table_is_dev_addr_bad(struct gasket_page_table *pg_tbl, + ulong dev_addr, ulong bytes) { uint num_pages = bytes / PAGE_SIZE; @@ -1226,9 +1226,8 @@ int gasket_page_table_system_status(struct gasket_page_table *page_table) } /* Record the host_addr to coherent dma memory mapping. */ -int gasket_set_user_virt( - struct gasket_dev *gasket_dev, u64 size, dma_addr_t dma_address, - ulong vma) +int gasket_set_user_virt(struct gasket_dev *gasket_dev, u64 size, + dma_addr_t dma_address, ulong vma) { int j; struct gasket_page_table *pg_tbl; @@ -1278,7 +1277,8 @@ int gasket_alloc_coherent_memory(struct gasket_dev *gasket_dev, u64 size, /* allocate the physical memory block */ gasket_dev->page_table[index]->coherent_pages = - kcalloc(num_pages, sizeof(struct gasket_coherent_page_entry), + kcalloc(num_pages, + sizeof(*gasket_dev->page_table[index]->coherent_pages), GFP_KERNEL); if (!gasket_dev->page_table[index]->coherent_pages) goto nomem; @@ -1345,8 +1345,7 @@ int gasket_free_coherent_memory(struct gasket_dev *gasket_dev, u64 size, } /* Release all coherent memory. */ -void gasket_free_coherent_memory_all( - struct gasket_dev *gasket_dev, u64 index) +void gasket_free_coherent_memory_all(struct gasket_dev *gasket_dev, u64 index) { if (!gasket_dev->page_table[index]) return; diff --git a/drivers/staging/goldfish/goldfish_audio.c b/drivers/staging/goldfish/goldfish_audio.c index 3a75df1d2a0a..d4520490cf6d 100644 --- a/drivers/staging/goldfish/goldfish_audio.c +++ b/drivers/staging/goldfish/goldfish_audio.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * drivers/misc/goldfish_audio.c * diff --git a/drivers/staging/greybus/arche-apb-ctrl.c b/drivers/staging/greybus/arche-apb-ctrl.c index cc8d6fc831b4..be5ffed90bcf 100644 --- a/drivers/staging/greybus/arche-apb-ctrl.c +++ b/drivers/staging/greybus/arche-apb-ctrl.c @@ -20,7 +20,6 @@ #include #include "arche_platform.h" - static void apb_bootret_deassert(struct device *dev); struct arche_apb_ctrl_drvdata { diff --git a/drivers/staging/greybus/arche_platform.h b/drivers/staging/greybus/arche_platform.h index 02056351d25a..9d997f2d6517 100644 --- a/drivers/staging/greybus/arche_platform.h +++ b/drivers/staging/greybus/arche_platform.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ /* * Arche Platform driver to enable Unipro link. * diff --git a/drivers/staging/greybus/arpc.h b/drivers/staging/greybus/arpc.h index 3534ba1a4e6c..3dab6375909c 100644 --- a/drivers/staging/greybus/arpc.h +++ b/drivers/staging/greybus/arpc.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ /* * This file is provided under a dual BSD/GPLv2 license. When using or * redistributing this file, you may do so under either license. 
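The header hunks that follow all make the same one-line fix: in kernel sources, .c files carry their SPDX line as a // comment, while .h files must use the /* */ form so the identifier stays valid in every context a header can be pulled into (the convention documented in Documentation/process/license-rules.rst). A minimal sketch of the two conventions, using a hypothetical example.h/example.c pair:

/* SPDX-License-Identifier: GPL-2.0 */
/* example.h -- headers use the C-style comment for the SPDX line */
#ifndef EXAMPLE_H
#define EXAMPLE_H

int example_init(void);

#endif /* EXAMPLE_H */

// SPDX-License-Identifier: GPL-2.0
// example.c -- C source files use the C++-style comment instead
#include "example.h"

int example_init(void)
{
	return 0;
}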
diff --git a/drivers/staging/greybus/audio_apbridgea.h b/drivers/staging/greybus/audio_apbridgea.h index 42ac6059bfc7..330fc7a397eb 100644 --- a/drivers/staging/greybus/audio_apbridgea.h +++ b/drivers/staging/greybus/audio_apbridgea.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: BSD-3-Clause +/* SPDX-License-Identifier: BSD-3-Clause */ /** * Copyright (c) 2015-2016 Google Inc. * All rights reserved. diff --git a/drivers/staging/greybus/audio_codec.h b/drivers/staging/greybus/audio_codec.h index 4efd8b3ebe07..d36f8cd35e06 100644 --- a/drivers/staging/greybus/audio_codec.h +++ b/drivers/staging/greybus/audio_codec.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ /* * Greybus audio driver * Copyright 2015 Google Inc. diff --git a/drivers/staging/greybus/audio_manager.h b/drivers/staging/greybus/audio_manager.h index dcb1a20f5ff1..be605485a8ce 100644 --- a/drivers/staging/greybus/audio_manager.h +++ b/drivers/staging/greybus/audio_manager.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ /* * Greybus operations * diff --git a/drivers/staging/greybus/audio_manager_module.c b/drivers/staging/greybus/audio_manager_module.c index 52342e832e3b..2bfd804183c4 100644 --- a/drivers/staging/greybus/audio_manager_module.c +++ b/drivers/staging/greybus/audio_manager_module.c @@ -25,8 +25,8 @@ struct gb_audio_manager_module_attribute { const char *buf, size_t count); }; -static ssize_t gb_audio_module_attr_show( - struct kobject *kobj, struct attribute *attr, char *buf) +static ssize_t gb_audio_module_attr_show(struct kobject *kobj, + struct attribute *attr, char *buf) { struct gb_audio_manager_module_attribute *attribute; struct gb_audio_manager_module *module; diff --git a/drivers/staging/greybus/audio_manager_private.h b/drivers/staging/greybus/audio_manager_private.h index 9d9b58fc54c4..2b3a766c7de7 100644 --- a/drivers/staging/greybus/audio_manager_private.h +++ b/drivers/staging/greybus/audio_manager_private.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ /* * Greybus operations * diff --git a/drivers/staging/greybus/audio_manager_sysfs.c b/drivers/staging/greybus/audio_manager_sysfs.c index 283fbed0c8ed..ab882cc49b41 100644 --- a/drivers/staging/greybus/audio_manager_sysfs.c +++ b/drivers/staging/greybus/audio_manager_sysfs.c @@ -11,9 +11,9 @@ #include "audio_manager.h" #include "audio_manager_private.h" -static ssize_t manager_sysfs_add_store( - struct kobject *kobj, struct kobj_attribute *attr, - const char *buf, size_t count) +static ssize_t manager_sysfs_add_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t count) { struct gb_audio_manager_module_descriptor desc = { {0} }; @@ -36,9 +36,9 @@ static ssize_t manager_sysfs_add_store( static struct kobj_attribute manager_add_attribute = __ATTR(add, 0664, NULL, manager_sysfs_add_store); -static ssize_t manager_sysfs_remove_store( - struct kobject *kobj, struct kobj_attribute *attr, - const char *buf, size_t count) +static ssize_t manager_sysfs_remove_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t count) { int id; @@ -57,9 +57,9 @@ static ssize_t manager_sysfs_remove_store( static struct kobj_attribute manager_remove_attribute = __ATTR(remove, 0664, NULL, manager_sysfs_remove_store); -static ssize_t manager_sysfs_dump_store( - struct kobject *kobj, struct kobj_attribute *attr, - const char *buf, size_t count) +static ssize_t 
manager_sysfs_dump_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t count) { int id; @@ -81,8 +81,8 @@ static ssize_t manager_sysfs_dump_store( static struct kobj_attribute manager_dump_attribute = __ATTR(dump, 0664, NULL, manager_sysfs_dump_store); -static void manager_sysfs_init_attribute( - struct kobject *kobj, struct kobj_attribute *kattr) +static void manager_sysfs_init_attribute(struct kobject *kobj, + struct kobj_attribute *kattr) { int err; diff --git a/drivers/staging/greybus/audio_module.c b/drivers/staging/greybus/audio_module.c index d065334efa23..300a2b4f3fc7 100644 --- a/drivers/staging/greybus/audio_module.c +++ b/drivers/staging/greybus/audio_module.c @@ -18,7 +18,7 @@ */ static int gbaudio_request_jack(struct gbaudio_module_info *module, - struct gb_audio_jack_event_request *req) + struct gb_audio_jack_event_request *req) { int report; struct snd_jack *jack = module->headset_jack.jack; @@ -26,8 +26,8 @@ static int gbaudio_request_jack(struct gbaudio_module_info *module, if (!jack) { dev_err_ratelimited(module->dev, - "Invalid jack event received:type: %u, event: %u\n", - req->jack_attribute, req->event); + "Invalid jack event received:type: %u, event: %u\n", + req->jack_attribute, req->event); return -EINVAL; } @@ -50,8 +50,8 @@ static int gbaudio_request_jack(struct gbaudio_module_info *module, report = req->jack_attribute & module->jack_mask; if (!report) { dev_err_ratelimited(module->dev, - "Invalid jack event received:type: %u, event: %u\n", - req->jack_attribute, req->event); + "Invalid jack event received:type: %u, event: %u\n", + req->jack_attribute, req->event); return -EINVAL; } @@ -74,8 +74,8 @@ static int gbaudio_request_button(struct gbaudio_module_info *module, if (!btn_jack) { dev_err_ratelimited(module->dev, - "Invalid button event received:type: %u, event: %u\n", - req->button_id, req->event); + "Invalid button event received:type: %u, event: %u\n", + req->button_id, req->event); return -EINVAL; } @@ -210,8 +210,8 @@ static int gb_audio_add_data_connection(struct gbaudio_module_info *gbmodule, return -ENOMEM; connection = gb_connection_create_offloaded(bundle, - le16_to_cpu(cport_desc->id), - GB_CONNECTION_FLAG_CSD); + le16_to_cpu(cport_desc->id), + GB_CONNECTION_FLAG_CSD); if (IS_ERR(connection)) { devm_kfree(gbmodule->dev, dai); return PTR_ERR(connection); @@ -317,7 +317,7 @@ static int gb_audio_probe(struct gb_bundle *bundle, ret = gbaudio_tplg_parse_data(gbmodule, topology); if (ret) { dev_err(dev, "%d:Error while parsing topology data\n", - ret); + ret); goto free_topology; } gbmodule->topology = topology; diff --git a/drivers/staging/greybus/audio_topology.c b/drivers/staging/greybus/audio_topology.c index b71078339e86..8bcbc3c4588c 100644 --- a/drivers/staging/greybus/audio_topology.c +++ b/drivers/staging/greybus/audio_topology.c @@ -158,7 +158,7 @@ static const char **gb_generate_enum_strings(struct gbaudio_module_info *gb, } static int gbcodec_mixer_ctl_info(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_info *uinfo) + struct snd_ctl_elem_info *uinfo) { unsigned int max; const char *name; @@ -209,7 +209,7 @@ static int gbcodec_mixer_ctl_info(struct snd_kcontrol *kcontrol, } static int gbcodec_mixer_ctl_get(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_value *ucontrol) + struct snd_ctl_elem_value *ucontrol) { int ret; struct gb_audio_ctl_elem_info *info; @@ -271,7 +271,7 @@ static int gbcodec_mixer_ctl_get(struct snd_kcontrol *kcontrol, } static int gbcodec_mixer_ctl_put(struct snd_kcontrol 
*kcontrol, - struct snd_ctl_elem_value *ucontrol) + struct snd_ctl_elem_value *ucontrol) { int ret = 0; struct gb_audio_ctl_elem_info *info; @@ -347,7 +347,7 @@ static int gbcodec_mixer_ctl_put(struct snd_kcontrol *kcontrol, * of DAPM related sequencing, etc. */ static int gbcodec_mixer_dapm_ctl_info(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_info *uinfo) + struct snd_ctl_elem_info *uinfo) { int platform_max, platform_min; struct gbaudio_ctl_pvt *data; @@ -378,7 +378,7 @@ static int gbcodec_mixer_dapm_ctl_info(struct snd_kcontrol *kcontrol, } static int gbcodec_mixer_dapm_ctl_get(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_value *ucontrol) + struct snd_ctl_elem_value *ucontrol) { int ret; struct gb_audio_ctl_elem_info *info; @@ -427,7 +427,7 @@ static int gbcodec_mixer_dapm_ctl_get(struct snd_kcontrol *kcontrol, } static int gbcodec_mixer_dapm_ctl_put(struct snd_kcontrol *kcontrol, - struct snd_ctl_elem_value *ucontrol) + struct snd_ctl_elem_value *ucontrol) { int ret, wi, max, connect; unsigned int mask, val; @@ -501,7 +501,7 @@ static int gbcodec_mixer_dapm_ctl_put(struct snd_kcontrol *kcontrol, .private_value = (unsigned long)data} static int gbcodec_event_spk(struct snd_soc_dapm_widget *w, - struct snd_kcontrol *k, int event) + struct snd_kcontrol *k, int event) { /* Ensure GB speaker is connected */ @@ -509,7 +509,7 @@ static int gbcodec_event_spk(struct snd_soc_dapm_widget *w, } static int gbcodec_event_hp(struct snd_soc_dapm_widget *w, - struct snd_kcontrol *k, int event) + struct snd_kcontrol *k, int event) { /* Ensure GB module supports jack slot */ @@ -517,7 +517,7 @@ static int gbcodec_event_hp(struct snd_soc_dapm_widget *w, } static int gbcodec_event_int_mic(struct snd_soc_dapm_widget *w, - struct snd_kcontrol *k, int event) + struct snd_kcontrol *k, int event) { /* Ensure GB module supports jack slot */ @@ -664,7 +664,7 @@ static int gbaudio_tplg_create_enum_kctl(struct gbaudio_module_info *gb, /* debug enum info */ dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->max, - le16_to_cpu(gb_enum->names_length)); + le16_to_cpu(gb_enum->names_length)); for (i = 0; i < gbe->max; i++) dev_dbg(gb->dev, "src[%d]: %s\n", i, gbe->texts[i]); @@ -873,7 +873,7 @@ static int gbaudio_tplg_create_enum_ctl(struct gbaudio_module_info *gb, /* debug enum info */ dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->max, - le16_to_cpu(gb_enum->names_length)); + le16_to_cpu(gb_enum->names_length)); for (i = 0; i < gbe->max; i++) dev_dbg(gb->dev, "src[%d]: %s\n", i, gbe->texts[i]); @@ -884,8 +884,8 @@ static int gbaudio_tplg_create_enum_ctl(struct gbaudio_module_info *gb, } static int gbaudio_tplg_create_mixer_ctl(struct gbaudio_module_info *gb, - struct snd_kcontrol_new *kctl, - struct gb_audio_control *ctl) + struct snd_kcontrol_new *kctl, + struct gb_audio_control *ctl) { struct gbaudio_ctl_pvt *ctldata; @@ -905,8 +905,8 @@ static int gbaudio_tplg_create_mixer_ctl(struct gbaudio_module_info *gb, } static int gbaudio_tplg_create_wcontrol(struct gbaudio_module_info *gb, - struct snd_kcontrol_new *kctl, - struct gb_audio_control *ctl) + struct snd_kcontrol_new *kctl, + struct gb_audio_control *ctl) { int ret; @@ -1086,9 +1086,10 @@ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module, case snd_soc_dapm_switch: *dw = (struct snd_soc_dapm_widget) SND_SOC_DAPM_SWITCH_E(w->name, SND_SOC_NOPM, 0, 0, - widget_kctls, gbaudio_widget_event, - SND_SOC_DAPM_PRE_PMU | - SND_SOC_DAPM_POST_PMD); + widget_kctls, + gbaudio_widget_event, + SND_SOC_DAPM_PRE_PMU | + 
SND_SOC_DAPM_POST_PMD); break; case snd_soc_dapm_pga: *dw = (struct snd_soc_dapm_widget) @@ -1100,16 +1101,16 @@ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module, case snd_soc_dapm_mixer: *dw = (struct snd_soc_dapm_widget) SND_SOC_DAPM_MIXER_E(w->name, SND_SOC_NOPM, 0, 0, NULL, - 0, gbaudio_widget_event, - SND_SOC_DAPM_PRE_PMU | - SND_SOC_DAPM_POST_PMD); + 0, gbaudio_widget_event, + SND_SOC_DAPM_PRE_PMU | + SND_SOC_DAPM_POST_PMD); break; case snd_soc_dapm_mux: *dw = (struct snd_soc_dapm_widget) SND_SOC_DAPM_MUX_E(w->name, SND_SOC_NOPM, 0, 0, - widget_kctls, gbaudio_widget_event, - SND_SOC_DAPM_PRE_PMU | - SND_SOC_DAPM_POST_PMD); + widget_kctls, gbaudio_widget_event, + SND_SOC_DAPM_PRE_PMU | + SND_SOC_DAPM_POST_PMD); break; case snd_soc_dapm_aif_in: *dw = (struct snd_soc_dapm_widget) @@ -1145,7 +1146,7 @@ error: } static int gbaudio_tplg_process_kcontrols(struct gbaudio_module_info *module, - struct gb_audio_control *controls) + struct gb_audio_control *controls) { int i, csize, ret; struct snd_kcontrol_new *dapm_kctls; @@ -1215,7 +1216,7 @@ error: } static int gbaudio_tplg_process_widgets(struct gbaudio_module_info *module, - struct gb_audio_widget *widgets) + struct gb_audio_widget *widgets) { int i, ret, w_size; struct snd_soc_dapm_widget *dapm_widgets; @@ -1264,7 +1265,7 @@ error: } static int gbaudio_tplg_process_routes(struct gbaudio_module_info *module, - struct gb_audio_route *routes) + struct gb_audio_route *routes) { int i, ret; struct snd_soc_dapm_route *dapm_routes; @@ -1300,8 +1301,8 @@ static int gbaudio_tplg_process_routes(struct gbaudio_module_info *module, } dapm_routes->control = gbaudio_map_controlid(module, - curr->control_id, - curr->index); + curr->control_id, + curr->index); if ((curr->control_id != GBAUDIO_INVALID_ID) && !dapm_routes->control) { dev_err(module->dev, "%d:%d:%d:%d - Invalid control\n", @@ -1325,7 +1326,7 @@ error: } static int gbaudio_tplg_process_header(struct gbaudio_module_info *module, - struct gb_audio_topology *tplg_data) + struct gb_audio_topology *tplg_data) { /* fetch no. 
of kcontrols, widgets & routes */ module->num_controls = tplg_data->num_controls; @@ -1351,7 +1352,7 @@ static int gbaudio_tplg_process_header(struct gbaudio_module_info *module, } int gbaudio_tplg_parse_data(struct gbaudio_module_info *module, - struct gb_audio_topology *tplg_data) + struct gb_audio_topology *tplg_data) { int ret; struct gb_audio_control *controls; diff --git a/drivers/staging/greybus/bootrom.c b/drivers/staging/greybus/bootrom.c index e85ffae85dff..402e6360834f 100644 --- a/drivers/staging/greybus/bootrom.c +++ b/drivers/staging/greybus/bootrom.c @@ -86,7 +86,8 @@ static void gb_bootrom_timedout(struct work_struct *work) } static void gb_bootrom_set_timeout(struct gb_bootrom *bootrom, - enum next_request_type next, unsigned long timeout) + enum next_request_type next, + unsigned long timeout) { bootrom->next_request = next; schedule_delayed_work(&bootrom->dwork, msecs_to_jiffies(timeout)); @@ -175,7 +176,7 @@ static int find_firmware(struct gb_bootrom *bootrom, u8 stage) firmware_name); rc = request_firmware(&bootrom->fw, firmware_name, - &connection->bundle->dev); + &connection->bundle->dev); if (rc) { dev_err(&connection->bundle->dev, "failed to find %s firmware (%d)\n", firmware_name, rc); @@ -274,7 +275,7 @@ static int gb_bootrom_get_firmware(struct gb_operation *op) if (offset >= fw->size || size > fw->size - offset) { dev_warn(dev, "bad firmware request (offs = %u, size = %u)\n", - offset, size); + offset, size); ret = -EINVAL; goto unlock; } @@ -387,15 +388,15 @@ static int gb_bootrom_get_version(struct gb_bootrom *bootrom) sizeof(response)); if (ret) { dev_err(&bundle->dev, - "failed to get protocol version: %d\n", - ret); + "failed to get protocol version: %d\n", + ret); return ret; } if (response.major > request.major) { dev_err(&bundle->dev, - "unsupported major protocol version (%u > %u)\n", - response.major, request.major); + "unsupported major protocol version (%u > %u)\n", + response.major, request.major); return -ENOTSUPP; } @@ -403,13 +404,13 @@ static int gb_bootrom_get_version(struct gb_bootrom *bootrom) bootrom->protocol_minor = response.minor; dev_dbg(&bundle->dev, "%s - %u.%u\n", __func__, response.major, - response.minor); + response.minor); return 0; } static int gb_bootrom_probe(struct gb_bundle *bundle, - const struct greybus_bundle_id *id) + const struct greybus_bundle_id *id) { struct greybus_descriptor_cport *cport_desc; struct gb_connection *connection; @@ -428,8 +429,8 @@ static int gb_bootrom_probe(struct gb_bundle *bundle, return -ENOMEM; connection = gb_connection_create(bundle, - le16_to_cpu(cport_desc->id), - gb_bootrom_request_handler); + le16_to_cpu(cport_desc->id), + gb_bootrom_request_handler); if (IS_ERR(connection)) { ret = PTR_ERR(connection); goto err_free_bootrom; @@ -466,7 +467,7 @@ static int gb_bootrom_probe(struct gb_bundle *bundle, NULL, 0); if (ret) { dev_err(&connection->bundle->dev, - "failed to send AP READY: %d\n", ret); + "failed to send AP READY: %d\n", ret); goto err_cancel_timeout; } diff --git a/drivers/staging/greybus/bundle.h b/drivers/staging/greybus/bundle.h index ffcfd43802cc..8734d2055657 100644 --- a/drivers/staging/greybus/bundle.h +++ b/drivers/staging/greybus/bundle.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ /* * Greybus bundles * diff --git a/drivers/staging/greybus/camera.c b/drivers/staging/greybus/camera.c index 6dded10f4155..615c8e7fd51e 100644 --- a/drivers/staging/greybus/camera.c +++ b/drivers/staging/greybus/camera.c @@ -841,8 +841,8 @@ done: 
} static int gb_camera_op_capture(void *priv, u32 request_id, - unsigned int streams, unsigned int num_frames, - size_t settings_size, const void *settings) + unsigned int streams, unsigned int num_frames, + size_t settings_size, const void *settings) { struct gb_camera *gcam = priv; @@ -869,7 +869,7 @@ static const struct gb_camera_ops gb_cam_ops = { */ static ssize_t gb_camera_debugfs_capabilities(struct gb_camera *gcam, - char *buf, size_t len) + char *buf, size_t len) { struct gb_camera_debugfs_buffer *buffer = &gcam->debugfs.buffers[GB_CAMERA_DEBUGFS_BUFFER_CAPABILITIES]; @@ -905,7 +905,7 @@ done: } static ssize_t gb_camera_debugfs_configure_streams(struct gb_camera *gcam, - char *buf, size_t len) + char *buf, size_t len) { struct gb_camera_debugfs_buffer *buffer = &gcam->debugfs.buffers[GB_CAMERA_DEBUGFS_BUFFER_STREAMS]; @@ -999,7 +999,7 @@ done: }; static ssize_t gb_camera_debugfs_capture(struct gb_camera *gcam, - char *buf, size_t len) + char *buf, size_t len) { unsigned int request_id; unsigned int streams_mask; @@ -1040,7 +1040,7 @@ static ssize_t gb_camera_debugfs_capture(struct gb_camera *gcam, } static ssize_t gb_camera_debugfs_flush(struct gb_camera *gcam, - char *buf, size_t len) + char *buf, size_t len) { struct gb_camera_debugfs_buffer *buffer = &gcam->debugfs.buffers[GB_CAMERA_DEBUGFS_BUFFER_FLUSH]; @@ -1190,7 +1190,6 @@ static int gb_camera_debugfs_init(struct gb_camera *gcam) debugfs_create_file(entry->name, entry->mask, gcam->debugfs.root, gcam, &gb_camera_debugfs_ops); - } } return 0; diff --git a/drivers/staging/greybus/connection.c b/drivers/staging/greybus/connection.c index 2103168b585e..eda964208cce 100644 --- a/drivers/staging/greybus/connection.c +++ b/drivers/staging/greybus/connection.c @@ -11,17 +11,13 @@ #include "greybus.h" #include "greybus_trace.h" - #define GB_CONNECTION_CPORT_QUIESCE_TIMEOUT 1000 - static void gb_connection_kref_release(struct kref *kref); - static DEFINE_SPINLOCK(gb_connections_lock); static DEFINE_MUTEX(gb_connection_mutex); - /* Caller holds gb_connection_mutex. */ static bool gb_connection_cport_in_use(struct gb_interface *intf, u16 cport_id) { @@ -30,7 +26,7 @@ static bool gb_connection_cport_in_use(struct gb_interface *intf, u16 cport_id) list_for_each_entry(connection, &hd->connections, hd_links) { if (connection->intf == intf && - connection->intf_cport_id == cport_id) + connection->intf_cport_id == cport_id) return true; } @@ -78,7 +74,7 @@ found: * received on the bundle. 
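
gb_connection_cport_in_use() above is a plain guarded-list lookup: every connection is linked into the host device's list, and the comment notes that the caller must hold gb_connection_mutex. A minimal self-contained sketch of the same pattern, with hypothetical stand-ins for the greybus types:

    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    static DEFINE_MUTEX(conn_mutex);    /* stands in for gb_connection_mutex */
    static LIST_HEAD(connections);      /* stands in for hd->connections */

    struct conn {
            struct list_head links;
            void *intf;                 /* owning interface, opaque here */
            u16 cport_id;
    };

    /* Caller holds conn_mutex, as the greybus helper requires. */
    static bool cport_in_use(void *intf, u16 cport_id)
    {
            struct conn *c;

            list_for_each_entry(c, &connections, links) {
                    if (c->intf == intf && c->cport_id == cport_id)
                            return true;
            }
            return false;
    }
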
*/ void greybus_data_rcvd(struct gb_host_device *hd, u16 cport_id, - u8 *data, size_t length) + u8 *data, size_t length) { struct gb_connection *connection; @@ -118,7 +114,7 @@ static void gb_connection_init_name(struct gb_connection *connection) } snprintf(connection->name, sizeof(connection->name), - "%u/%u:%u", hd_cport_id, intf_id, cport_id); + "%u/%u:%u", hd_cport_id, intf_id, cport_id); } /* @@ -146,10 +142,10 @@ static void gb_connection_init_name(struct gb_connection *connection) */ static struct gb_connection * _gb_connection_create(struct gb_host_device *hd, int hd_cport_id, - struct gb_interface *intf, - struct gb_bundle *bundle, int cport_id, - gb_request_handler_t handler, - unsigned long flags) + struct gb_interface *intf, + struct gb_bundle *bundle, int cport_id, + gb_request_handler_t handler, + unsigned long flags) { struct gb_connection *connection; int ret; @@ -230,35 +226,35 @@ err_unlock: struct gb_connection * gb_connection_create_static(struct gb_host_device *hd, u16 hd_cport_id, - gb_request_handler_t handler) + gb_request_handler_t handler) { return _gb_connection_create(hd, hd_cport_id, NULL, NULL, 0, handler, - GB_CONNECTION_FLAG_HIGH_PRIO); + GB_CONNECTION_FLAG_HIGH_PRIO); } struct gb_connection * gb_connection_create_control(struct gb_interface *intf) { return _gb_connection_create(intf->hd, -1, intf, NULL, 0, NULL, - GB_CONNECTION_FLAG_CONTROL | - GB_CONNECTION_FLAG_HIGH_PRIO); + GB_CONNECTION_FLAG_CONTROL | + GB_CONNECTION_FLAG_HIGH_PRIO); } struct gb_connection * gb_connection_create(struct gb_bundle *bundle, u16 cport_id, - gb_request_handler_t handler) + gb_request_handler_t handler) { struct gb_interface *intf = bundle->intf; return _gb_connection_create(intf->hd, -1, intf, bundle, cport_id, - handler, 0); + handler, 0); } EXPORT_SYMBOL_GPL(gb_connection_create); struct gb_connection * gb_connection_create_flags(struct gb_bundle *bundle, u16 cport_id, - gb_request_handler_t handler, - unsigned long flags) + gb_request_handler_t handler, + unsigned long flags) { struct gb_interface *intf = bundle->intf; @@ -266,13 +262,13 @@ gb_connection_create_flags(struct gb_bundle *bundle, u16 cport_id, flags &= ~GB_CONNECTION_FLAG_CORE_MASK; return _gb_connection_create(intf->hd, -1, intf, bundle, cport_id, - handler, flags); + handler, flags); } EXPORT_SYMBOL_GPL(gb_connection_create_flags); struct gb_connection * gb_connection_create_offloaded(struct gb_bundle *bundle, u16 cport_id, - unsigned long flags) + unsigned long flags) { flags |= GB_CONNECTION_FLAG_OFFLOADED; @@ -289,10 +285,10 @@ static int gb_connection_hd_cport_enable(struct gb_connection *connection) return 0; ret = hd->driver->cport_enable(hd, connection->hd_cport_id, - connection->flags); + connection->flags); if (ret) { dev_err(&hd->dev, "%s: failed to enable host cport: %d\n", - connection->name, ret); + connection->name, ret); return ret; } @@ -310,7 +306,7 @@ static void gb_connection_hd_cport_disable(struct gb_connection *connection) ret = hd->driver->cport_disable(hd, connection->hd_cport_id); if (ret) { dev_err(&hd->dev, "%s: failed to disable host cport: %d\n", - connection->name, ret); + connection->name, ret); } } @@ -325,7 +321,7 @@ static int gb_connection_hd_cport_connected(struct gb_connection *connection) ret = hd->driver->cport_connected(hd, connection->hd_cport_id); if (ret) { dev_err(&hd->dev, "%s: failed to set connected state: %d\n", - connection->name, ret); + connection->name, ret); return ret; } @@ -343,7 +339,7 @@ static int gb_connection_hd_cport_flush(struct gb_connection 
*connection) ret = hd->driver->cport_flush(hd, connection->hd_cport_id); if (ret) { dev_err(&hd->dev, "%s: failed to flush host cport: %d\n", - connection->name, ret); + connection->name, ret); return ret; } @@ -373,7 +369,7 @@ static int gb_connection_hd_cport_quiesce(struct gb_connection *connection) GB_CONNECTION_CPORT_QUIESCE_TIMEOUT); if (ret) { dev_err(&hd->dev, "%s: failed to quiesce host cport: %d\n", - connection->name, ret); + connection->name, ret); return ret; } @@ -391,7 +387,7 @@ static int gb_connection_hd_cport_clear(struct gb_connection *connection) ret = hd->driver->cport_clear(hd, connection->hd_cport_id); if (ret) { dev_err(&hd->dev, "%s: failed to clear host cport: %d\n", - connection->name, ret); + connection->name, ret); return ret; } @@ -427,11 +423,11 @@ gb_connection_svc_connection_create(struct gb_connection *connection) } ret = gb_svc_connection_create(hd->svc, - hd->svc->ap_intf_id, - connection->hd_cport_id, - intf->interface_id, - connection->intf_cport_id, - cport_flags); + hd->svc->ap_intf_id, + connection->hd_cport_id, + intf->interface_id, + connection->intf_cport_id, + cport_flags); if (ret) { dev_err(&connection->hd->dev, "%s: failed to create svc connection: %d\n", @@ -495,8 +491,8 @@ gb_connection_control_disconnecting(struct gb_connection *connection) ret = gb_control_disconnecting_operation(control, cport_id); if (ret) { dev_err(&connection->hd->dev, - "%s: failed to send disconnecting: %d\n", - connection->name, ret); + "%s: failed to send disconnecting: %d\n", + connection->name, ret); } } @@ -535,16 +531,16 @@ gb_connection_control_disconnected(struct gb_connection *connection) } static int gb_connection_shutdown_operation(struct gb_connection *connection, - u8 phase) + u8 phase) { struct gb_cport_shutdown_request *req; struct gb_operation *operation; int ret; operation = gb_operation_create_core(connection, - GB_REQUEST_TYPE_CPORT_SHUTDOWN, - sizeof(*req), 0, 0, - GFP_KERNEL); + GB_REQUEST_TYPE_CPORT_SHUTDOWN, + sizeof(*req), 0, 0, + GFP_KERNEL); if (!operation) return -ENOMEM; @@ -573,14 +569,14 @@ static int gb_connection_cport_shutdown(struct gb_connection *connection, return 0; ret = drv->cport_shutdown(hd, connection->hd_cport_id, phase, - GB_OPERATION_TIMEOUT_DEFAULT); + GB_OPERATION_TIMEOUT_DEFAULT); } else { ret = gb_connection_shutdown_operation(connection, phase); } if (ret) { dev_err(&hd->dev, "%s: failed to send cport shutdown (phase %d): %d\n", - connection->name, phase, ret); + connection->name, phase, ret); return ret; } @@ -606,14 +602,14 @@ gb_connection_cport_shutdown_phase_2(struct gb_connection *connection) * DISCONNECTING. 
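
gb_connection_shutdown_operation() above shows the core operation life cycle in miniature: allocate, fill the wire-format request, send synchronously, then drop the creation reference. A condensed sketch of that sequence (the payload fill is implied by the hunk rather than quoted in it):

    struct gb_cport_shutdown_request *req;
    struct gb_operation *operation;
    int ret;

    operation = gb_operation_create_core(connection,
                                         GB_REQUEST_TYPE_CPORT_SHUTDOWN,
                                         sizeof(*req), 0, 0, GFP_KERNEL);
    if (!operation)
            return -ENOMEM;

    req = operation->request->payload;  /* wire-format request */
    req->phase = phase;

    ret = gb_operation_request_send_sync(operation);

    gb_operation_put(operation);        /* drop the creation reference */
    return ret;
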
*/ static void gb_connection_cancel_operations(struct gb_connection *connection, - int errno) + int errno) __must_hold(&connection->lock) { struct gb_operation *operation; while (!list_empty(&connection->operations)) { operation = list_last_entry(&connection->operations, - struct gb_operation, links); + struct gb_operation, links); gb_operation_get(operation); spin_unlock_irq(&connection->lock); @@ -635,7 +631,7 @@ static void gb_connection_cancel_operations(struct gb_connection *connection, */ static void gb_connection_flush_incoming_operations(struct gb_connection *connection, - int errno) + int errno) __must_hold(&connection->lock) { struct gb_operation *operation; @@ -644,7 +640,7 @@ gb_connection_flush_incoming_operations(struct gb_connection *connection, while (!list_empty(&connection->operations)) { incoming = false; list_for_each_entry(operation, &connection->operations, - links) { + links) { if (gb_operation_is_incoming(operation)) { gb_operation_get(operation); incoming = true; diff --git a/drivers/staging/greybus/connection.h b/drivers/staging/greybus/connection.h index ec3f1d3ef3b9..8deeb1d5f008 100644 --- a/drivers/staging/greybus/connection.h +++ b/drivers/staging/greybus/connection.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ /* * Greybus connections * diff --git a/drivers/staging/greybus/control.c b/drivers/staging/greybus/control.c index 35f945a12b11..ffa41d31896d 100644 --- a/drivers/staging/greybus/control.c +++ b/drivers/staging/greybus/control.c @@ -32,15 +32,15 @@ static int gb_control_get_version(struct gb_control *control) sizeof(response)); if (ret) { dev_err(&intf->dev, - "failed to get control-protocol version: %d\n", - ret); + "failed to get control-protocol version: %d\n", + ret); return ret; } if (response.major > request.major) { dev_err(&intf->dev, - "unsupported major control-protocol version (%u > %u)\n", - response.major, request.major); + "unsupported major control-protocol version (%u > %u)\n", + response.major, request.major); return -ENOTSUPP; } @@ -48,13 +48,13 @@ static int gb_control_get_version(struct gb_control *control) control->protocol_minor = response.minor; dev_dbg(&intf->dev, "%s - %u.%u\n", __func__, response.major, - response.minor); + response.minor); return 0; } static int gb_control_get_bundle_version(struct gb_control *control, - struct gb_bundle *bundle) + struct gb_bundle *bundle) { struct gb_interface *intf = control->connection->intf; struct gb_control_bundle_version_request request; @@ -69,8 +69,8 @@ static int gb_control_get_bundle_version(struct gb_control *control, &response, sizeof(response)); if (ret) { dev_err(&intf->dev, - "failed to get bundle %u class version: %d\n", - bundle->id, ret); + "failed to get bundle %u class version: %d\n", + bundle->id, ret); return ret; } @@ -78,7 +78,7 @@ static int gb_control_get_bundle_version(struct gb_control *control, bundle->class_minor = response.minor; dev_dbg(&intf->dev, "%s - %u: %u.%u\n", __func__, bundle->id, - response.major, response.minor); + response.major, response.minor); return 0; } @@ -112,7 +112,7 @@ int gb_control_get_manifest_size_operation(struct gb_interface *intf) NULL, 0, &response, sizeof(response)); if (ret) { dev_err(&connection->intf->dev, - "failed to get manifest size: %d\n", ret); + "failed to get manifest size: %d\n", ret); return ret; } @@ -149,16 +149,16 @@ int gb_control_disconnected_operation(struct gb_control *control, u16 cport_id) } int gb_control_disconnecting_operation(struct gb_control *control, - 
u16 cport_id) + u16 cport_id) { struct gb_control_disconnecting_request *request; struct gb_operation *operation; int ret; operation = gb_operation_create_core(control->connection, - GB_CONTROL_TYPE_DISCONNECTING, - sizeof(*request), 0, 0, - GFP_KERNEL); + GB_CONTROL_TYPE_DISCONNECTING, + sizeof(*request), 0, 0, + GFP_KERNEL); if (!operation) return -ENOMEM; @@ -168,7 +168,7 @@ int gb_control_disconnecting_operation(struct gb_control *control, ret = gb_operation_request_send_sync(operation); if (ret) { dev_err(&control->dev, "failed to send disconnecting: %d\n", - ret); + ret); } gb_operation_put(operation); @@ -182,9 +182,10 @@ int gb_control_mode_switch_operation(struct gb_control *control) int ret; operation = gb_operation_create_core(control->connection, - GB_CONTROL_TYPE_MODE_SWITCH, - 0, 0, GB_OPERATION_FLAG_UNIDIRECTIONAL, - GFP_KERNEL); + GB_CONTROL_TYPE_MODE_SWITCH, + 0, 0, + GB_OPERATION_FLAG_UNIDIRECTIONAL, + GFP_KERNEL); if (!operation) return -ENOMEM; @@ -400,7 +401,7 @@ int gb_control_interface_hibernate_abort(struct gb_control *control) } static ssize_t vendor_string_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, char *buf) { struct gb_control *control = to_gb_control(dev); @@ -409,7 +410,7 @@ static ssize_t vendor_string_show(struct device *dev, static DEVICE_ATTR_RO(vendor_string); static ssize_t product_string_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, char *buf) { struct gb_control *control = to_gb_control(dev); @@ -455,8 +456,8 @@ struct gb_control *gb_control_create(struct gb_interface *intf) connection = gb_connection_create_control(intf); if (IS_ERR(connection)) { dev_err(&intf->dev, - "failed to create control connection: %ld\n", - PTR_ERR(connection)); + "failed to create control connection: %ld\n", + PTR_ERR(connection)); kfree(control); return ERR_CAST(connection); } @@ -485,8 +486,8 @@ int gb_control_enable(struct gb_control *control) ret = gb_connection_enable_tx(control->connection); if (ret) { dev_err(&control->connection->intf->dev, - "failed to enable control connection: %d\n", - ret); + "failed to enable control connection: %d\n", + ret); return ret; } @@ -547,8 +548,8 @@ int gb_control_add(struct gb_control *control) ret = device_add(&control->dev); if (ret) { dev_err(&control->dev, - "failed to register control device: %d\n", - ret); + "failed to register control device: %d\n", + ret); return ret; } diff --git a/drivers/staging/greybus/control.h b/drivers/staging/greybus/control.h index 643ddb9e0f92..3a29ec05f631 100644 --- a/drivers/staging/greybus/control.h +++ b/drivers/staging/greybus/control.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ /* * Greybus CPort control protocol * @@ -40,7 +40,7 @@ int gb_control_get_bundle_versions(struct gb_control *control); int gb_control_connected_operation(struct gb_control *control, u16 cport_id); int gb_control_disconnected_operation(struct gb_control *control, u16 cport_id); int gb_control_disconnecting_operation(struct gb_control *control, - u16 cport_id); + u16 cport_id); int gb_control_mode_switch_operation(struct gb_control *control); void gb_control_mode_switch_prepare(struct gb_control *control); void gb_control_mode_switch_complete(struct gb_control *control); diff --git a/drivers/staging/greybus/core.c b/drivers/staging/greybus/core.c index dafa430d176e..412337daf45c 100644 --- a/drivers/staging/greybus/core.c +++ b/drivers/staging/greybus/core.c 
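
vendor_string_show() and product_string_show() above follow the stock read-only sysfs idiom: DEVICE_ATTR_RO(name) generates dev_attr_name and relies purely on naming to find a callback called name_show. A self-contained sketch with a hypothetical attribute:

    #include <linux/device.h>
    #include <linux/sysfs.h>

    /* Hypothetical "vendor" attribute: DEVICE_ATTR_RO(vendor) emits
     * dev_attr_vendor, wired to vendor_show() by the naming convention. */
    static ssize_t vendor_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
    {
            return sprintf(buf, "%s\n", "example-vendor");
    }
    static DEVICE_ATTR_RO(vendor);

The generated dev_attr_vendor still has to be added to the device's attribute group (or created explicitly) before it appears in sysfs.
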
@@ -28,7 +28,7 @@ int greybus_disabled(void) EXPORT_SYMBOL_GPL(greybus_disabled); static bool greybus_match_one_id(struct gb_bundle *bundle, - const struct greybus_bundle_id *id) + const struct greybus_bundle_id *id) { if ((id->match_flags & GREYBUS_ID_MATCH_VENDOR) && (id->vendor != bundle->intf->vendor_id)) @@ -48,7 +48,7 @@ static bool greybus_match_one_id(struct gb_bundle *bundle, static const struct greybus_bundle_id * greybus_match_id(struct gb_bundle *bundle, const struct greybus_bundle_id *id) { - if (id == NULL) + if (!id) return NULL; for (; id->vendor || id->product || id->class || id->driver_info; diff --git a/drivers/staging/greybus/es2.c b/drivers/staging/greybus/es2.c index b082d81833a0..be6af18cec31 100644 --- a/drivers/staging/greybus/es2.c +++ b/drivers/staging/greybus/es2.c @@ -216,7 +216,7 @@ static int output_async(struct es2_ap_dev *es2, void *req, u16 size, u8 cmd) } static int output(struct gb_host_device *hd, void *req, u16 size, u8 cmd, - bool async) + bool async) { struct es2_ap_dev *es2 = hd_to_es2(hd); @@ -227,7 +227,7 @@ static int output(struct gb_host_device *hd, void *req, u16 size, u8 cmd, } static int es2_cport_in_enable(struct es2_ap_dev *es2, - struct es2_cport_in *cport_in) + struct es2_cport_in *cport_in) { struct urb *urb; int ret; @@ -239,7 +239,7 @@ static int es2_cport_in_enable(struct es2_ap_dev *es2, ret = usb_submit_urb(urb, GFP_KERNEL); if (ret) { dev_err(&es2->usb_dev->dev, - "failed to submit in-urb: %d\n", ret); + "failed to submit in-urb: %d\n", ret); goto err_kill_urbs; } } @@ -256,7 +256,7 @@ err_kill_urbs: } static void es2_cport_in_disable(struct es2_ap_dev *es2, - struct es2_cport_in *cport_in) + struct es2_cport_in *cport_in) { struct urb *urb; int i; @@ -316,8 +316,8 @@ static struct urb *next_free_urb(struct es2_ap_dev *es2, gfp_t gfp_mask) /* Look in our pool of allocated urbs first, as that's the "fastest" */ for (i = 0; i < NUM_CPORT_OUT_URB; ++i) { - if (es2->cport_out_urb_busy[i] == false && - es2->cport_out_urb_cancelled[i] == false) { + if (!es2->cport_out_urb_busy[i] && + !es2->cport_out_urb_cancelled[i]) { es2->cport_out_urb_busy[i] = true; urb = es2->cport_out_urb[i]; break; @@ -487,7 +487,7 @@ static void message_cancel(struct gb_message *message) } static int es2_cport_allocate(struct gb_host_device *hd, int cport_id, - unsigned long flags) + unsigned long flags) { struct es2_ap_dev *es2 = hd_to_es2(hd); struct ida *id_map = &hd->cport_id_map; @@ -501,7 +501,7 @@ static int es2_cport_allocate(struct gb_host_device *hd, int cport_id, } if (flags & GB_CONNECTION_FLAG_OFFLOADED && - flags & GB_CONNECTION_FLAG_CDSI1) { + flags & GB_CONNECTION_FLAG_CDSI1) { if (es2->cdsi1_in_use) { dev_err(&hd->dev, "CDSI1 already in use\n"); return -EBUSY; @@ -561,16 +561,16 @@ static int cport_enable(struct gb_host_device *hd, u16 cport_id, req->flags = cpu_to_le32(connection_flags); dev_dbg(&hd->dev, "%s - cport = %u, flags = %02x\n", __func__, - cport_id, connection_flags); + cport_id, connection_flags); ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), - GB_APB_REQUEST_CPORT_FLAGS, - USB_DIR_OUT | USB_TYPE_VENDOR | - USB_RECIP_INTERFACE, cport_id, 0, - req, sizeof(*req), ES2_USB_CTRL_TIMEOUT); + GB_APB_REQUEST_CPORT_FLAGS, + USB_DIR_OUT | USB_TYPE_VENDOR | + USB_RECIP_INTERFACE, cport_id, 0, + req, sizeof(*req), ES2_USB_CTRL_TIMEOUT); if (ret != sizeof(*req)) { dev_err(&udev->dev, "failed to set cport flags for port %d\n", - cport_id); + cport_id); if (ret >= 0) ret = -EIO; @@ -596,7 +596,7 @@ static int es2_cport_connected(struct 
gb_host_device *hd, u16 cport_id) NULL, ES2_ARPC_CPORT_TIMEOUT); if (ret) { dev_err(dev, "failed to set connected state for cport %u: %d\n", - cport_id, ret); + cport_id, ret); return ret; } @@ -622,7 +622,7 @@ static int es2_cport_flush(struct gb_host_device *hd, u16 cport_id) } static int es2_cport_shutdown(struct gb_host_device *hd, u16 cport_id, - u8 phase, unsigned int timeout) + u8 phase, unsigned int timeout) { struct es2_ap_dev *es2 = hd_to_es2(hd); struct device *dev = &es2->usb_dev->dev; @@ -640,7 +640,7 @@ static int es2_cport_shutdown(struct gb_host_device *hd, u16 cport_id, &result, ES2_ARPC_CPORT_TIMEOUT + timeout); if (ret) { dev_err(dev, "failed to send shutdown over cport %u: %d (%d)\n", - cport_id, ret, result); + cport_id, ret, result); return ret; } @@ -648,7 +648,7 @@ static int es2_cport_shutdown(struct gb_host_device *hd, u16 cport_id, } static int es2_cport_quiesce(struct gb_host_device *hd, u16 cport_id, - size_t peer_space, unsigned int timeout) + size_t peer_space, unsigned int timeout) { struct es2_ap_dev *es2 = hd_to_es2(hd); struct device *dev = &es2->usb_dev->dev; @@ -669,7 +669,7 @@ static int es2_cport_quiesce(struct gb_host_device *hd, u16 cport_id, &result, ES2_ARPC_CPORT_TIMEOUT + timeout); if (ret) { dev_err(dev, "failed to quiesce cport %u: %d (%d)\n", - cport_id, ret, result); + cport_id, ret, result); return ret; } @@ -846,7 +846,7 @@ static void cport_in_callback(struct urb *urb) if (cport_id_valid(hd, cport_id)) { greybus_data_rcvd(hd, cport_id, urb->transfer_buffer, - urb->actual_length); + urb->actual_length); } else { dev_err(dev, "invalid cport id %u received\n", cport_id); } @@ -1083,14 +1083,14 @@ static void apb_log_get(struct es2_ap_dev *es2, char *buf) do { retval = usb_control_msg(es2->usb_dev, - usb_rcvctrlpipe(es2->usb_dev, 0), - GB_APB_REQUEST_LOG, - USB_DIR_IN | USB_TYPE_VENDOR | - USB_RECIP_INTERFACE, - 0x00, 0x00, - buf, - APB1_LOG_MSG_SIZE, - ES2_USB_CTRL_TIMEOUT); + usb_rcvctrlpipe(es2->usb_dev, 0), + GB_APB_REQUEST_LOG, + USB_DIR_IN | USB_TYPE_VENDOR | + USB_RECIP_INTERFACE, + 0x00, 0x00, + buf, + APB1_LOG_MSG_SIZE, + ES2_USB_CTRL_TIMEOUT); if (retval > 0) kfifo_in(&es2->apb_log_fifo, buf, retval); } while (retval > 0); @@ -1116,7 +1116,7 @@ static int apb_log_poll(void *data) } static ssize_t apb_log_read(struct file *f, char __user *buf, - size_t count, loff_t *ppos) + size_t count, loff_t *ppos) { struct es2_ap_dev *es2 = file_inode(f)->i_private; ssize_t ret; @@ -1153,8 +1153,8 @@ static void usb_log_enable(struct es2_ap_dev *es2) return; /* XXX We will need to rename this per APB */ es2->apb_log_dentry = debugfs_create_file("apb_log", 0444, - gb_debugfs_get(), es2, - &apb_log_fops); + gb_debugfs_get(), es2, + &apb_log_fops); } static void usb_log_disable(struct es2_ap_dev *es2) @@ -1170,7 +1170,7 @@ static void usb_log_disable(struct es2_ap_dev *es2) } static ssize_t apb_log_enable_read(struct file *f, char __user *buf, - size_t count, loff_t *ppos) + size_t count, loff_t *ppos) { struct es2_ap_dev *es2 = file_inode(f)->i_private; int enable = !IS_ERR_OR_NULL(es2->apb_log_task); @@ -1181,7 +1181,7 @@ static ssize_t apb_log_enable_read(struct file *f, char __user *buf, } static ssize_t apb_log_enable_write(struct file *f, const char __user *buf, - size_t count, loff_t *ppos) + size_t count, loff_t *ppos) { int enable; ssize_t retval; @@ -1274,7 +1274,7 @@ static int ap_probe(struct usb_interface *interface, } hd = gb_hd_create(&es2_driver, &udev->dev, ES2_GBUF_MSG_SIZE_MAX, - num_cports); + num_cports); if (IS_ERR(hd)) { 
usb_put_dev(udev); return PTR_ERR(hd); @@ -1409,9 +1409,9 @@ static int ap_probe(struct usb_interface *interface, /* XXX We will need to rename this per APB */ es2->apb_log_enable_dentry = debugfs_create_file("apb_log_enable", - 0644, - gb_debugfs_get(), es2, - &apb_log_enable_fops); + 0644, + gb_debugfs_get(), es2, + &apb_log_enable_fops); INIT_LIST_HEAD(&es2->arpcs); spin_lock_init(&es2->arpc_lock); diff --git a/drivers/staging/greybus/gpio.c b/drivers/staging/greybus/gpio.c index b1d4698019a1..e110681e6f86 100644 --- a/drivers/staging/greybus/gpio.c +++ b/drivers/staging/greybus/gpio.c @@ -74,7 +74,7 @@ static int gb_gpio_activate_operation(struct gb_gpio_controller *ggc, u8 which) request.which = which; ret = gb_operation_sync(ggc->connection, GB_GPIO_TYPE_ACTIVATE, - &request, sizeof(request), NULL, 0); + &request, sizeof(request), NULL, 0); if (ret) { gbphy_runtime_put_autosuspend(gbphy_dev); return ret; @@ -86,7 +86,7 @@ static int gb_gpio_activate_operation(struct gb_gpio_controller *ggc, u8 which) } static void gb_gpio_deactivate_operation(struct gb_gpio_controller *ggc, - u8 which) + u8 which) { struct gbphy_device *gbphy_dev = ggc->gbphy_dev; struct device *dev = &gbphy_dev->dev; @@ -95,7 +95,7 @@ static void gb_gpio_deactivate_operation(struct gb_gpio_controller *ggc, request.which = which; ret = gb_operation_sync(ggc->connection, GB_GPIO_TYPE_DEACTIVATE, - &request, sizeof(request), NULL, 0); + &request, sizeof(request), NULL, 0); if (ret) { dev_err(dev, "failed to deactivate gpio %u\n", which); goto out_pm_put; @@ -108,7 +108,7 @@ out_pm_put: } static int gb_gpio_get_direction_operation(struct gb_gpio_controller *ggc, - u8 which) + u8 which) { struct device *dev = &ggc->gbphy_dev->dev; struct gb_gpio_get_direction_request request; @@ -133,7 +133,7 @@ static int gb_gpio_get_direction_operation(struct gb_gpio_controller *ggc, } static int gb_gpio_direction_in_operation(struct gb_gpio_controller *ggc, - u8 which) + u8 which) { struct gb_gpio_direction_in_request request; int ret; @@ -147,7 +147,7 @@ static int gb_gpio_direction_in_operation(struct gb_gpio_controller *ggc, } static int gb_gpio_direction_out_operation(struct gb_gpio_controller *ggc, - u8 which, bool value_high) + u8 which, bool value_high) { struct gb_gpio_direction_out_request request; int ret; @@ -162,7 +162,7 @@ static int gb_gpio_direction_out_operation(struct gb_gpio_controller *ggc, } static int gb_gpio_get_value_operation(struct gb_gpio_controller *ggc, - u8 which) + u8 which) { struct device *dev = &ggc->gbphy_dev->dev; struct gb_gpio_get_value_request request; @@ -214,7 +214,7 @@ static void gb_gpio_set_value_operation(struct gb_gpio_controller *ggc, } static int gb_gpio_set_debounce_operation(struct gb_gpio_controller *ggc, - u8 which, u16 debounce_usec) + u8 which, u16 debounce_usec) { struct gb_gpio_set_debounce_request request; int ret; @@ -257,7 +257,7 @@ static void _gb_gpio_irq_unmask(struct gb_gpio_controller *ggc, u8 hwirq) } static void _gb_gpio_irq_set_type(struct gb_gpio_controller *ggc, - u8 hwirq, u8 type) + u8 hwirq, u8 type) { struct device *dev = &ggc->gbphy_dev->dev; struct gb_gpio_irq_type_request request; @@ -589,10 +589,10 @@ static void gb_gpio_irqchip_remove(struct gb_gpio_controller *ggc) * before calling this function. 
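
The gpio hunks above all reduce to one call shape: fill a small wire-format request and hand it to gb_operation_sync() on the bundle's connection; the trailing NULL, 0 means no response payload is expected. Mirroring gb_gpio_activate_operation() above:

    struct gb_gpio_activate_request request;
    int ret;

    request.which = which;      /* remote gpio line number */
    ret = gb_operation_sync(ggc->connection, GB_GPIO_TYPE_ACTIVATE,
                            &request, sizeof(request),  /* request */
                            NULL, 0);                   /* no response */
    if (ret)
            return ret;
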
*/ static int gb_gpio_irqchip_add(struct gpio_chip *chip, - struct irq_chip *irqchip, - unsigned int first_irq, - irq_flow_handler_t handler, - unsigned int type) + struct irq_chip *irqchip, + unsigned int first_irq, + irq_flow_handler_t handler, + unsigned int type) { struct gb_gpio_controller *ggc; unsigned int offset; @@ -607,8 +607,8 @@ static int gb_gpio_irqchip_add(struct gpio_chip *chip, ggc->irq_handler = handler; ggc->irq_default_type = type; ggc->irqdomain = irq_domain_add_simple(NULL, - ggc->line_max + 1, first_irq, - &gb_gpio_domain_ops, chip); + ggc->line_max + 1, first_irq, + &gb_gpio_domain_ops, chip); if (!ggc->irqdomain) { ggc->irqchip = NULL; return -EINVAL; @@ -648,9 +648,10 @@ static int gb_gpio_probe(struct gbphy_device *gbphy_dev, if (!ggc) return -ENOMEM; - connection = gb_connection_create(gbphy_dev->bundle, - le16_to_cpu(gbphy_dev->cport_desc->id), - gb_gpio_request_handler); + connection = + gb_connection_create(gbphy_dev->bundle, + le16_to_cpu(gbphy_dev->cport_desc->id), + gb_gpio_request_handler); if (IS_ERR(connection)) { ret = PTR_ERR(connection); goto exit_ggc_free; @@ -703,7 +704,7 @@ static int gb_gpio_probe(struct gbphy_device *gbphy_dev, goto exit_line_free; ret = gb_gpio_irqchip_add(gpio, irqc, 0, - handle_level_irq, IRQ_TYPE_NONE); + handle_level_irq, IRQ_TYPE_NONE); if (ret) { dev_err(&gbphy_dev->dev, "failed to add irq chip: %d\n", ret); goto exit_line_free; diff --git a/drivers/staging/greybus/greybus_protocols.h b/drivers/staging/greybus/greybus_protocols.h index 9bd7b6dfb476..ddc73f10eb22 100644 --- a/drivers/staging/greybus/greybus_protocols.h +++ b/drivers/staging/greybus/greybus_protocols.h @@ -878,10 +878,10 @@ struct gb_pwm_disable_request { /* Should match up with modes in linux/spi/spi.h */ #define GB_SPI_MODE_CPHA 0x01 /* clock phase */ #define GB_SPI_MODE_CPOL 0x02 /* clock polarity */ -#define GB_SPI_MODE_MODE_0 (0|0) /* (original MicroWire) */ -#define GB_SPI_MODE_MODE_1 (0|GB_SPI_MODE_CPHA) -#define GB_SPI_MODE_MODE_2 (GB_SPI_MODE_CPOL|0) -#define GB_SPI_MODE_MODE_3 (GB_SPI_MODE_CPOL|GB_SPI_MODE_CPHA) +#define GB_SPI_MODE_MODE_0 (0 | 0) /* (original MicroWire) */ +#define GB_SPI_MODE_MODE_1 (0 | GB_SPI_MODE_CPHA) +#define GB_SPI_MODE_MODE_2 (GB_SPI_MODE_CPOL | 0) +#define GB_SPI_MODE_MODE_3 (GB_SPI_MODE_CPOL | GB_SPI_MODE_CPHA) #define GB_SPI_MODE_CS_HIGH 0x04 /* chipselect active high? 
*/ #define GB_SPI_MODE_LSB_FIRST 0x08 /* per-word bits-on-wire */ #define GB_SPI_MODE_3WIRE 0x10 /* SI/SO signals shared */ diff --git a/drivers/staging/greybus/hid.c b/drivers/staging/greybus/hid.c index 04053ff075a6..0b72e1b9d325 100644 --- a/drivers/staging/greybus/hid.c +++ b/drivers/staging/greybus/hid.c @@ -49,8 +49,8 @@ static int gb_hid_get_report_desc(struct gb_hid *ghid, char *rdesc) return ret; ret = gb_operation_sync(ghid->connection, GB_HID_TYPE_GET_REPORT_DESC, - NULL, 0, rdesc, - le16_to_cpu(ghid->hdesc.wReportDescLength)); + NULL, 0, rdesc, + le16_to_cpu(ghid->hdesc.wReportDescLength)); gb_pm_runtime_put_autosuspend(ghid->bundle); @@ -86,7 +86,7 @@ static int gb_hid_get_report(struct gb_hid *ghid, u8 report_type, u8 report_id, request.report_id = report_id; ret = gb_operation_sync(ghid->connection, GB_HID_TYPE_GET_REPORT, - &request, sizeof(request), buf, len); + &request, sizeof(request), buf, len); gb_pm_runtime_put_autosuspend(ghid->bundle); @@ -211,11 +211,13 @@ static void gb_hid_init_reports(struct gb_hid *ghid) struct hid_report *report; list_for_each_entry(report, - &hid->report_enum[HID_INPUT_REPORT].report_list, list) + &hid->report_enum[HID_INPUT_REPORT].report_list, + list) gb_hid_init_report(ghid, report); list_for_each_entry(report, - &hid->report_enum[HID_FEATURE_REPORT].report_list, list) + &hid->report_enum[HID_FEATURE_REPORT].report_list, + list) gb_hid_init_report(ghid, report); } @@ -259,8 +261,8 @@ static int __gb_hid_output_raw_report(struct hid_device *hid, __u8 *buf, } static int gb_hid_raw_request(struct hid_device *hid, unsigned char reportnum, - __u8 *buf, size_t len, unsigned char rtype, - int reqtype) + __u8 *buf, size_t len, unsigned char rtype, + int reqtype) { switch (reqtype) { case HID_REQ_GET_REPORT: @@ -440,7 +442,7 @@ static int gb_hid_probe(struct gb_bundle *bundle, return -ENOMEM; connection = gb_connection_create(bundle, le16_to_cpu(cport_desc->id), - gb_hid_request_handler); + gb_hid_request_handler); if (IS_ERR(connection)) { ret = PTR_ERR(connection); goto err_free_ghid; diff --git a/drivers/staging/greybus/i2c.c b/drivers/staging/greybus/i2c.c index 58a37deb6579..7bb85a75d3b1 100644 --- a/drivers/staging/greybus/i2c.c +++ b/drivers/staging/greybus/i2c.c @@ -107,7 +107,7 @@ gb_i2c_operation_create(struct gb_connection *connection, /* Response consists only of incoming data */ operation = gb_operation_create(connection, GB_I2C_TYPE_TRANSFER, - request_size, data_in_size, GFP_KERNEL); + request_size, data_in_size, GFP_KERNEL); if (!operation) return NULL; @@ -137,7 +137,7 @@ gb_i2c_operation_create(struct gb_connection *connection, } static void gb_i2c_decode_response(struct i2c_msg *msgs, u32 msg_count, - struct gb_i2c_transfer_response *response) + struct gb_i2c_transfer_response *response) { struct i2c_msg *msg = msgs; u8 *data; @@ -164,7 +164,7 @@ static bool gb_i2c_expected_transfer_error(int errno) } static int gb_i2c_transfer_operation(struct gb_i2c_device *gb_i2c_dev, - struct i2c_msg *msgs, u32 msg_count) + struct i2c_msg *msgs, u32 msg_count) { struct gb_connection *connection = gb_i2c_dev->connection; struct device *dev = &gb_i2c_dev->gbphy_dev->dev; @@ -199,7 +199,7 @@ exit_operation_put: } static int gb_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, - int msg_count) + int msg_count) { struct gb_i2c_device *gb_i2c_dev; @@ -211,8 +211,8 @@ static int gb_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, #if 0 /* Later */ static int gb_i2c_smbus_xfer(struct i2c_adapter *adap, - u16 addr, 
unsigned short flags, char read_write, - u8 command, int size, union i2c_smbus_data *data) + u16 addr, unsigned short flags, char read_write, + u8 command, int size, union i2c_smbus_data *data) { struct gb_i2c_device *gb_i2c_dev; @@ -249,7 +249,7 @@ static int gb_i2c_device_setup(struct gb_i2c_device *gb_i2c_dev) } static int gb_i2c_probe(struct gbphy_device *gbphy_dev, - const struct gbphy_device_id *id) + const struct gbphy_device_id *id) { struct gb_connection *connection; struct gb_i2c_device *gb_i2c_dev; @@ -260,9 +260,10 @@ static int gb_i2c_probe(struct gbphy_device *gbphy_dev, if (!gb_i2c_dev) return -ENOMEM; - connection = gb_connection_create(gbphy_dev->bundle, - le16_to_cpu(gbphy_dev->cport_desc->id), - NULL); + connection = + gb_connection_create(gbphy_dev->bundle, + le16_to_cpu(gbphy_dev->cport_desc->id), + NULL); if (IS_ERR(connection)) { ret = PTR_ERR(connection); goto exit_i2cdev_free; diff --git a/drivers/staging/greybus/loopback.c b/drivers/staging/greybus/loopback.c index 7080294f705c..48d85ebe404a 100644 --- a/drivers/staging/greybus/loopback.c +++ b/drivers/staging/greybus/loopback.c @@ -47,8 +47,6 @@ struct gb_loopback_device { /* We need to take a lock in atomic context */ spinlock_t lock; - struct list_head list; - struct list_head list_op_async; wait_queue_head_t wq; }; @@ -68,7 +66,6 @@ struct gb_loopback { struct kfifo kfifo_lat; struct mutex mutex; struct task_struct *task; - struct list_head entry; struct device *dev; wait_queue_head_t wq; wait_queue_head_t wq_completion; @@ -144,7 +141,7 @@ static ssize_t name##_##field##_show(struct device *dev, \ /* Report 0 for min and max if no transfer successed */ \ if (!gb->requests_completed) \ return sprintf(buf, "0\n"); \ - return sprintf(buf, "%"#type"\n", gb->name.field); \ + return sprintf(buf, "%" #type "\n", gb->name.field); \ } \ static DEVICE_ATTR_RO(name##_##field) @@ -179,7 +176,7 @@ static ssize_t field##_show(struct device *dev, \ char *buf) \ { \ struct gb_loopback *gb = dev_get_drvdata(dev); \ - return sprintf(buf, "%"#type"\n", gb->field); \ + return sprintf(buf, "%" #type "\n", gb->field); \ } \ static ssize_t field##_store(struct device *dev, \ struct device_attribute *attr, \ @@ -215,7 +212,7 @@ static ssize_t field##_show(struct device *dev, \ char *buf) \ { \ struct gb_loopback *gb = dev_get_drvdata(dev); \ - return sprintf(buf, "%"#type"\n", gb->field); \ + return sprintf(buf, "%" #type "\n", gb->field); \ } \ static ssize_t field##_store(struct device *dev, \ struct device_attribute *attr, \ @@ -973,50 +970,7 @@ static int gb_loopback_dbgfs_latency_show(struct seq_file *s, void *unused) return gb_loopback_dbgfs_latency_show_common(s, &gb->kfifo_lat, &gb->mutex); } - -static int gb_loopback_latency_open(struct inode *inode, struct file *file) -{ - return single_open(file, gb_loopback_dbgfs_latency_show, - inode->i_private); -} - -static const struct file_operations gb_loopback_debugfs_latency_ops = { - .open = gb_loopback_latency_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; - -static int gb_loopback_bus_id_compare(void *priv, struct list_head *lha, - struct list_head *lhb) -{ - struct gb_loopback *a = list_entry(lha, struct gb_loopback, entry); - struct gb_loopback *b = list_entry(lhb, struct gb_loopback, entry); - struct gb_connection *ca = a->connection; - struct gb_connection *cb = b->connection; - - if (ca->bundle->intf->interface_id < cb->bundle->intf->interface_id) - return -1; - if (cb->bundle->intf->interface_id < ca->bundle->intf->interface_id) - 
return 1; - if (ca->bundle->id < cb->bundle->id) - return -1; - if (cb->bundle->id < ca->bundle->id) - return 1; - if (ca->intf_cport_id < cb->intf_cport_id) - return -1; - else if (cb->intf_cport_id < ca->intf_cport_id) - return 1; - - return 0; -} - -static void gb_loopback_insert_id(struct gb_loopback *gb) -{ - /* perform an insertion sort */ - list_add_tail(&gb->entry, &gb_dev.list); - list_sort(NULL, &gb_dev.list, gb_loopback_bus_id_compare); -} +DEFINE_SHOW_ATTRIBUTE(gb_loopback_dbgfs_latency); #define DEBUGFS_NAMELEN 32 @@ -1076,7 +1030,7 @@ static int gb_loopback_probe(struct gb_bundle *bundle, snprintf(name, sizeof(name), "raw_latency_%s", dev_name(&connection->bundle->dev)); gb->file = debugfs_create_file(name, S_IFREG | 0444, gb_dev.root, gb, - &gb_loopback_debugfs_latency_ops); + &gb_loopback_dbgfs_latency_fops); gb->id = ida_simple_get(&loopback_ida, 0, 0, GFP_KERNEL); if (gb->id < 0) { @@ -1113,7 +1067,6 @@ static int gb_loopback_probe(struct gb_bundle *bundle, } spin_lock_irqsave(&gb_dev.lock, flags); - gb_loopback_insert_id(gb); gb_dev.count++; spin_unlock_irqrestore(&gb_dev.lock, flags); @@ -1169,7 +1122,6 @@ static void gb_loopback_disconnect(struct gb_bundle *bundle) spin_lock_irqsave(&gb_dev.lock, flags); gb_dev.count--; - list_del(&gb->entry); spin_unlock_irqrestore(&gb_dev.lock, flags); device_unregister(gb->dev); @@ -1196,8 +1148,6 @@ static int loopback_init(void) { int retval; - INIT_LIST_HEAD(&gb_dev.list); - INIT_LIST_HEAD(&gb_dev.list_op_async); spin_lock_init(&gb_dev.lock); gb_dev.root = debugfs_create_dir("gb_loopback", NULL); diff --git a/drivers/staging/greybus/module.c b/drivers/staging/greybus/module.c index 894d02e8d8b7..b251a53d0e8e 100644 --- a/drivers/staging/greybus/module.c +++ b/drivers/staging/greybus/module.c @@ -9,10 +9,9 @@ #include "greybus.h" #include "greybus_trace.h" - static ssize_t eject_store(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t len) + struct device_attribute *attr, + const char *buf, size_t len) { struct gb_module *module = to_gb_module(dev); struct gb_interface *intf; @@ -48,7 +47,7 @@ static ssize_t eject_store(struct device *dev, static DEVICE_ATTR_WO(eject); static ssize_t module_id_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, char *buf) { struct gb_module *module = to_gb_module(dev); @@ -57,7 +56,7 @@ static ssize_t module_id_show(struct device *dev, static DEVICE_ATTR_RO(module_id); static ssize_t num_interfaces_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, char *buf) { struct gb_module *module = to_gb_module(dev); @@ -88,7 +87,7 @@ struct device_type greybus_module_type = { }; struct gb_module *gb_module_create(struct gb_host_device *hd, u8 module_id, - size_t num_interfaces) + size_t num_interfaces) { struct gb_interface *intf; struct gb_module *module; @@ -117,7 +116,7 @@ struct gb_module *gb_module_create(struct gb_host_device *hd, u8 module_id, intf = gb_interface_create(module, module_id + i); if (!intf) { dev_err(&module->dev, "failed to create interface %u\n", - module_id + i); + module_id + i); goto err_put_interfaces; } module->interfaces[i] = intf; @@ -149,8 +148,8 @@ static void gb_module_register_interface(struct gb_interface *intf) if (ret) { if (intf->type != GB_INTERFACE_TYPE_DUMMY) { dev_err(&module->dev, - "failed to activate interface %u: %d\n", - intf_id, ret); + "failed to activate interface %u: %d\n", + intf_id, ret); } gb_interface_add(intf); @@ -164,7 
+163,7 @@ static void gb_module_register_interface(struct gb_interface *intf) ret = gb_interface_enable(intf); if (ret) { dev_err(&module->dev, "failed to enable interface %u: %d\n", - intf_id, ret); + intf_id, ret); goto err_interface_deactivate; } diff --git a/drivers/staging/greybus/operation.c b/drivers/staging/greybus/operation.c index c462b1c046cd..fe268f7b63ed 100644 --- a/drivers/staging/greybus/operation.c +++ b/drivers/staging/greybus/operation.c @@ -31,7 +31,7 @@ static DECLARE_WAIT_QUEUE_HEAD(gb_operation_cancellation_queue); static DEFINE_SPINLOCK(gb_operations_lock); static int gb_operation_response_send(struct gb_operation *operation, - int errno); + int errno); /* * Increment operation active count and add to connection list unless the @@ -202,7 +202,7 @@ gb_operation_find_outgoing(struct gb_connection *connection, u16 operation_id) spin_lock_irqsave(&connection->lock, flags); list_for_each_entry(operation, &connection->operations, links) if (operation->id == operation_id && - !gb_operation_is_incoming(operation)) { + !gb_operation_is_incoming(operation)) { gb_operation_get(operation); found = true; break; @@ -307,8 +307,9 @@ static void gb_operation_timeout(struct timer_list *t) } static void gb_operation_message_init(struct gb_host_device *hd, - struct gb_message *message, u16 operation_id, - size_t payload_size, u8 type) + struct gb_message *message, + u16 operation_id, + size_t payload_size, u8 type) { struct gb_operation_msg_hdr *header; @@ -358,7 +359,7 @@ static void gb_operation_message_init(struct gb_host_device *hd, */ static struct gb_message * gb_operation_message_alloc(struct gb_host_device *hd, u8 type, - size_t payload_size, gfp_t gfp_flags) + size_t payload_size, gfp_t gfp_flags) { struct gb_message *message; struct gb_operation_msg_hdr *header; @@ -366,7 +367,7 @@ gb_operation_message_alloc(struct gb_host_device *hd, u8 type, if (message_size > hd->buffer_size_max) { dev_warn(&hd->dev, "requested message size too big (%zu > %zu)\n", - message_size, hd->buffer_size_max); + message_size, hd->buffer_size_max); return NULL; } @@ -465,7 +466,7 @@ static u8 gb_operation_errno_map(int errno) } bool gb_operation_response_alloc(struct gb_operation *operation, - size_t response_size, gfp_t gfp) + size_t response_size, gfp_t gfp) { struct gb_host_device *hd = operation->connection->hd; struct gb_operation_msg_hdr *request_header; @@ -516,8 +517,8 @@ EXPORT_SYMBOL_GPL(gb_operation_response_alloc); */ static struct gb_operation * gb_operation_create_common(struct gb_connection *connection, u8 type, - size_t request_size, size_t response_size, - unsigned long op_flags, gfp_t gfp_flags) + size_t request_size, size_t response_size, + unsigned long op_flags, gfp_t gfp_flags) { struct gb_host_device *hd = connection->hd; struct gb_operation *operation; @@ -572,9 +573,9 @@ err_cache: */ struct gb_operation * gb_operation_create_flags(struct gb_connection *connection, - u8 type, size_t request_size, - size_t response_size, unsigned long flags, - gfp_t gfp) + u8 type, size_t request_size, + size_t response_size, unsigned long flags, + gfp_t gfp) { struct gb_operation *operation; @@ -587,8 +588,8 @@ gb_operation_create_flags(struct gb_connection *connection, flags &= GB_OPERATION_FLAG_USER_MASK; operation = gb_operation_create_common(connection, type, - request_size, response_size, - flags, gfp); + request_size, response_size, + flags, gfp); if (operation) trace_gb_operation_create(operation); @@ -598,22 +599,23 @@ EXPORT_SYMBOL_GPL(gb_operation_create_flags); struct 
gb_operation * gb_operation_create_core(struct gb_connection *connection, - u8 type, size_t request_size, - size_t response_size, unsigned long flags, - gfp_t gfp) + u8 type, size_t request_size, + size_t response_size, unsigned long flags, + gfp_t gfp) { struct gb_operation *operation; flags |= GB_OPERATION_FLAG_CORE; operation = gb_operation_create_common(connection, type, - request_size, response_size, - flags, gfp); + request_size, response_size, + flags, gfp); if (operation) trace_gb_operation_create_core(operation); return operation; } + /* Do not export this function. */ size_t gb_operation_get_payload_size_max(struct gb_connection *connection) @@ -626,7 +628,7 @@ EXPORT_SYMBOL_GPL(gb_operation_get_payload_size_max); static struct gb_operation * gb_operation_create_incoming(struct gb_connection *connection, u16 id, - u8 type, void *data, size_t size) + u8 type, void *data, size_t size) { struct gb_operation *operation; size_t request_size; @@ -639,9 +641,9 @@ gb_operation_create_incoming(struct gb_connection *connection, u16 id, flags |= GB_OPERATION_FLAG_UNIDIRECTIONAL; operation = gb_operation_create_common(connection, type, - request_size, - GB_REQUEST_TYPE_INVALID, - flags, GFP_ATOMIC); + request_size, + GB_REQUEST_TYPE_INVALID, + flags, GFP_ATOMIC); if (!operation) return NULL; @@ -716,9 +718,9 @@ static void gb_operation_sync_callback(struct gb_operation *operation) * or a negative errno. */ int gb_operation_request_send(struct gb_operation *operation, - gb_operation_callback callback, - unsigned int timeout, - gfp_t gfp) + gb_operation_callback callback, + unsigned int timeout, + gfp_t gfp) { struct gb_connection *connection = operation->connection; struct gb_operation_msg_hdr *header; @@ -790,7 +792,7 @@ EXPORT_SYMBOL_GPL(gb_operation_request_send); * operation. */ int gb_operation_request_send_sync_timeout(struct gb_operation *operation, - unsigned int timeout) + unsigned int timeout) { int ret; @@ -819,13 +821,13 @@ EXPORT_SYMBOL_GPL(gb_operation_request_send_sync_timeout); * allocate the response message if necessary. */ static int gb_operation_response_send(struct gb_operation *operation, - int errno) + int errno) { struct gb_connection *connection = operation->connection; int ret; if (!operation->response && - !gb_operation_is_unidirectional(operation)) { + !gb_operation_is_unidirectional(operation)) { if (!gb_operation_response_alloc(operation, 0, GFP_KERNEL)) return -ENOMEM; } @@ -867,7 +869,7 @@ err_put: * This function is called when a message send request has completed. 
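
gb_operation_request_send() above takes a completion callback and a timeout; the synchronous variants are built on top of it. A sketch of the sync-over-async idea only, assuming the operation embeds a struct completion (as greybus operations do) and glossing over the timeout and cancellation details:

    #include <linux/completion.h>

    static void sync_callback(struct gb_operation *operation)
    {
            complete(&operation->completion);
    }

    static int request_send_sync(struct gb_operation *operation,
                                 unsigned int timeout)
    {
            int ret;

            /* Asynchronous send; sync_callback fires when it finishes. */
            ret = gb_operation_request_send(operation, sync_callback,
                                            timeout, GFP_KERNEL);
            if (ret)
                    return ret;

            wait_for_completion(&operation->completion);

            /* The result was recorded by the completion path. */
            return gb_operation_result(operation);
    }
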
*/ void greybus_message_sent(struct gb_host_device *hd, - struct gb_message *message, int status) + struct gb_message *message, int status) { struct gb_operation *operation = message->operation; struct gb_connection *connection = operation->connection; @@ -895,7 +897,7 @@ void greybus_message_sent(struct gb_host_device *hd, } else if (status || gb_operation_is_unidirectional(operation)) { if (gb_operation_result_set(operation, status)) { queue_work(gb_operation_completion_wq, - &operation->work); + &operation->work); } } } @@ -921,7 +923,7 @@ static void gb_connection_recv_request(struct gb_connection *connection, type = header->type; operation = gb_operation_create_incoming(connection, operation_id, - type, data, size); + type, data, size); if (!operation) { dev_err(&connection->hd->dev, "%s: can't create incoming operation\n", @@ -966,16 +968,16 @@ static void gb_connection_recv_response(struct gb_connection *connection, if (!operation_id) { dev_err_ratelimited(&connection->hd->dev, - "%s: invalid response id 0 received\n", - connection->name); + "%s: invalid response id 0 received\n", + connection->name); return; } operation = gb_operation_find_outgoing(connection, operation_id); if (!operation) { dev_err_ratelimited(&connection->hd->dev, - "%s: unexpected response id 0x%04x received\n", - connection->name, operation_id); + "%s: unexpected response id 0x%04x received\n", + connection->name, operation_id); return; } @@ -984,18 +986,18 @@ static void gb_connection_recv_response(struct gb_connection *connection, message_size = sizeof(*header) + message->payload_size; if (!errno && size > message_size) { dev_err_ratelimited(&connection->hd->dev, - "%s: malformed response 0x%02x received (%zu > %zu)\n", - connection->name, header->type, - size, message_size); + "%s: malformed response 0x%02x received (%zu > %zu)\n", + connection->name, header->type, + size, message_size); errno = -EMSGSIZE; } else if (!errno && size < message_size) { if (gb_operation_short_response_allowed(operation)) { message->payload_size = size - sizeof(*header); } else { dev_err_ratelimited(&connection->hd->dev, - "%s: short response 0x%02x received (%zu < %zu)\n", - connection->name, header->type, - size, message_size); + "%s: short response 0x%02x received (%zu < %zu)\n", + connection->name, header->type, + size, message_size); errno = -EMSGSIZE; } } @@ -1022,22 +1024,22 @@ static void gb_connection_recv_response(struct gb_connection *connection, * with, it's effectively dropped). 
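
The validation in gb_connection_recv_response() above is strict by default: the received byte count must equal the header plus the expected payload exactly, with an explicit opt-in for short responses. Condensed from the hunk (ratelimited error prints trimmed):

    message_size = sizeof(*header) + message->payload_size;
    if (!errno && size > message_size) {
            errno = -EMSGSIZE;          /* trailing bytes: malformed */
    } else if (!errno && size < message_size) {
            if (gb_operation_short_response_allowed(operation))
                    message->payload_size = size - sizeof(*header);
            else
                    errno = -EMSGSIZE;  /* truncated: also malformed */
    }
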
*/ void gb_connection_recv(struct gb_connection *connection, - void *data, size_t size) + void *data, size_t size) { struct gb_operation_msg_hdr header; struct device *dev = &connection->hd->dev; size_t msg_size; if (connection->state == GB_CONNECTION_STATE_DISABLED || - gb_connection_is_offloaded(connection)) { + gb_connection_is_offloaded(connection)) { dev_warn_ratelimited(dev, "%s: dropping %zu received bytes\n", - connection->name, size); + connection->name, size); return; } if (size < sizeof(header)) { dev_err_ratelimited(dev, "%s: short message received\n", - connection->name); + connection->name); return; } @@ -1046,19 +1048,19 @@ void gb_connection_recv(struct gb_connection *connection, msg_size = le16_to_cpu(header.size); if (size < msg_size) { dev_err_ratelimited(dev, - "%s: incomplete message 0x%04x of type 0x%02x received (%zu < %zu)\n", - connection->name, - le16_to_cpu(header.operation_id), - header.type, size, msg_size); + "%s: incomplete message 0x%04x of type 0x%02x received (%zu < %zu)\n", + connection->name, + le16_to_cpu(header.operation_id), + header.type, size, msg_size); return; /* XXX Should still complete operation */ } if (header.type & GB_MESSAGE_TYPE_RESPONSE) { gb_connection_recv_response(connection, &header, data, - msg_size); + msg_size); } else { gb_connection_recv_request(connection, &header, data, - msg_size); + msg_size); } } @@ -1079,7 +1081,7 @@ void gb_operation_cancel(struct gb_operation *operation, int errno) atomic_inc(&operation->waiters); wait_event(gb_operation_cancellation_queue, - !gb_operation_is_active(operation)); + !gb_operation_is_active(operation)); atomic_dec(&operation->waiters); } EXPORT_SYMBOL_GPL(gb_operation_cancel); @@ -1106,7 +1108,7 @@ void gb_operation_cancel_incoming(struct gb_operation *operation, int errno) atomic_inc(&operation->waiters); wait_event(gb_operation_cancellation_queue, - !gb_operation_is_active(operation)); + !gb_operation_is_active(operation)); atomic_dec(&operation->waiters); } @@ -1134,9 +1136,9 @@ void gb_operation_cancel_incoming(struct gb_operation *operation, int errno) * If there is an error, the response buffer is left alone. */ int gb_operation_sync_timeout(struct gb_connection *connection, int type, - void *request, int request_size, - void *response, int response_size, - unsigned int timeout) + void *request, int request_size, + void *response, int response_size, + unsigned int timeout) { struct gb_operation *operation; int ret; @@ -1187,8 +1189,9 @@ EXPORT_SYMBOL_GPL(gb_operation_sync_timeout); * the request as actually reached the remote end of the connection. 
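
gb_connection_recv() above is the single entry point for inbound bytes: peek the fixed header, trust only the length it declares, then route on the response bit. Condensed from the hunk:

    memcpy(&header, data, sizeof(header));
    msg_size = le16_to_cpu(header.size);
    if (size < msg_size)
            return;     /* incomplete message, dropped */

    if (header.type & GB_MESSAGE_TYPE_RESPONSE)
            gb_connection_recv_response(connection, &header, data,
                                        msg_size);
    else
            gb_connection_recv_request(connection, &header, data,
                                       msg_size);
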
*/ int gb_operation_unidirectional_timeout(struct gb_connection *connection, - int type, void *request, int request_size, - unsigned int timeout) + int type, void *request, + int request_size, + unsigned int timeout) { struct gb_operation *operation; int ret; @@ -1197,9 +1200,9 @@ int gb_operation_unidirectional_timeout(struct gb_connection *connection, return -EINVAL; operation = gb_operation_create_flags(connection, type, - request_size, 0, - GB_OPERATION_FLAG_UNIDIRECTIONAL, - GFP_KERNEL); + request_size, 0, + GB_OPERATION_FLAG_UNIDIRECTIONAL, + GFP_KERNEL); if (!operation) return -ENOMEM; @@ -1222,17 +1225,19 @@ EXPORT_SYMBOL_GPL(gb_operation_unidirectional_timeout); int __init gb_operation_init(void) { gb_message_cache = kmem_cache_create("gb_message_cache", - sizeof(struct gb_message), 0, 0, NULL); + sizeof(struct gb_message), 0, 0, + NULL); if (!gb_message_cache) return -ENOMEM; gb_operation_cache = kmem_cache_create("gb_operation_cache", - sizeof(struct gb_operation), 0, 0, NULL); + sizeof(struct gb_operation), 0, + 0, NULL); if (!gb_operation_cache) goto err_destroy_message_cache; gb_operation_completion_wq = alloc_workqueue("greybus_completion", - 0, 0); + 0, 0); if (!gb_operation_completion_wq) goto err_destroy_operation_cache; diff --git a/drivers/staging/greybus/svc.c b/drivers/staging/greybus/svc.c index a2bb7e1a3db3..05bc45287b87 100644 --- a/drivers/staging/greybus/svc.c +++ b/drivers/staging/greybus/svc.c @@ -20,11 +20,10 @@ struct gb_svc_deferred_request { struct gb_operation *operation; }; - static int gb_svc_queue_deferred_request(struct gb_operation *operation); static ssize_t endo_id_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, char *buf) { struct gb_svc *svc = to_gb_svc(dev); @@ -33,7 +32,7 @@ static ssize_t endo_id_show(struct device *dev, static DEVICE_ATTR_RO(endo_id); static ssize_t ap_intf_id_show(struct device *dev, - struct device_attribute *attr, char *buf) + struct device_attribute *attr, char *buf) { struct gb_svc *svc = to_gb_svc(dev); @@ -304,8 +303,8 @@ int gb_svc_intf_vsys_set(struct gb_svc *svc, u8 intf_id, bool enable) type = GB_SVC_TYPE_INTF_VSYS_DISABLE; ret = gb_operation_sync(svc->connection, type, - &request, sizeof(request), - &response, sizeof(response)); + &request, sizeof(request), + &response, sizeof(response)); if (ret < 0) return ret; if (response.result_code != GB_SVC_INTF_VSYS_OK) @@ -327,8 +326,8 @@ int gb_svc_intf_refclk_set(struct gb_svc *svc, u8 intf_id, bool enable) type = GB_SVC_TYPE_INTF_REFCLK_DISABLE; ret = gb_operation_sync(svc->connection, type, - &request, sizeof(request), - &response, sizeof(response)); + &request, sizeof(request), + &response, sizeof(response)); if (ret < 0) return ret; if (response.result_code != GB_SVC_INTF_REFCLK_OK) @@ -350,8 +349,8 @@ int gb_svc_intf_unipro_set(struct gb_svc *svc, u8 intf_id, bool enable) type = GB_SVC_TYPE_INTF_UNIPRO_DISABLE; ret = gb_operation_sync(svc->connection, type, - &request, sizeof(request), - &response, sizeof(response)); + &request, sizeof(request), + &response, sizeof(response)); if (ret < 0) return ret; if (response.result_code != GB_SVC_INTF_UNIPRO_OK) @@ -368,15 +367,15 @@ int gb_svc_intf_activate(struct gb_svc *svc, u8 intf_id, u8 *intf_type) request.intf_id = intf_id; ret = gb_operation_sync_timeout(svc->connection, - GB_SVC_TYPE_INTF_ACTIVATE, - &request, sizeof(request), - &response, sizeof(response), - SVC_INTF_ACTIVATE_TIMEOUT); + GB_SVC_TYPE_INTF_ACTIVATE, + &request, sizeof(request), + &response, 
sizeof(response), + SVC_INTF_ACTIVATE_TIMEOUT); if (ret < 0) return ret; if (response.status != GB_SVC_OP_SUCCESS) { dev_err(&svc->dev, "failed to activate interface %u: %u\n", - intf_id, response.status); + intf_id, response.status); return -EREMOTEIO; } @@ -430,14 +429,14 @@ int gb_svc_dme_peer_get(struct gb_svc *svc, u8 intf_id, u16 attr, u16 selector, &response, sizeof(response)); if (ret) { dev_err(&svc->dev, "failed to get DME attribute (%u 0x%04x %u): %d\n", - intf_id, attr, selector, ret); + intf_id, attr, selector, ret); return ret; } result = le16_to_cpu(response.result_code); if (result) { dev_err(&svc->dev, "UniPro error while getting DME attribute (%u 0x%04x %u): %u\n", - intf_id, attr, selector, result); + intf_id, attr, selector, result); return -EREMOTEIO; } @@ -465,14 +464,14 @@ int gb_svc_dme_peer_set(struct gb_svc *svc, u8 intf_id, u16 attr, u16 selector, &response, sizeof(response)); if (ret) { dev_err(&svc->dev, "failed to set DME attribute (%u 0x%04x %u %u): %d\n", - intf_id, attr, selector, value, ret); + intf_id, attr, selector, value, ret); return ret; } result = le16_to_cpu(response.result_code); if (result) { dev_err(&svc->dev, "UniPro error while setting DME attribute (%u 0x%04x %u %u): %u\n", - intf_id, attr, selector, value, result); + intf_id, attr, selector, value, result); return -EREMOTEIO; } @@ -480,9 +479,9 @@ int gb_svc_dme_peer_set(struct gb_svc *svc, u8 intf_id, u16 attr, u16 selector, } int gb_svc_connection_create(struct gb_svc *svc, - u8 intf1_id, u16 cport1_id, - u8 intf2_id, u16 cport2_id, - u8 cport_flags) + u8 intf1_id, u16 cport1_id, + u8 intf2_id, u16 cport2_id, + u8 cport_flags) { struct gb_svc_conn_create_request request; @@ -513,13 +512,13 @@ void gb_svc_connection_destroy(struct gb_svc *svc, u8 intf1_id, u16 cport1_id, &request, sizeof(request), NULL, 0); if (ret) { dev_err(&svc->dev, "failed to destroy connection (%u:%u %u:%u): %d\n", - intf1_id, cport1_id, intf2_id, cport2_id, ret); + intf1_id, cport1_id, intf2_id, cport2_id, ret); } } /* Creates bi-directional routes between the devices */ int gb_svc_route_create(struct gb_svc *svc, u8 intf1_id, u8 dev1_id, - u8 intf2_id, u8 dev2_id) + u8 intf2_id, u8 dev2_id) { struct gb_svc_route_create_request request; @@ -545,7 +544,7 @@ void gb_svc_route_destroy(struct gb_svc *svc, u8 intf1_id, u8 intf2_id) &request, sizeof(request), NULL, 0); if (ret) { dev_err(&svc->dev, "failed to destroy route (%u %u): %d\n", - intf1_id, intf2_id, ret); + intf1_id, intf2_id, ret); } } @@ -648,8 +647,8 @@ static int gb_svc_version_request(struct gb_operation *op) if (op->request->payload_size < sizeof(*request)) { dev_err(&svc->dev, "short version request (%zu < %zu)\n", - op->request->payload_size, - sizeof(*request)); + op->request->payload_size, + sizeof(*request)); return -EINVAL; } @@ -657,7 +656,7 @@ static int gb_svc_version_request(struct gb_operation *op) if (request->major > GB_SVC_VERSION_MAJOR) { dev_warn(&svc->dev, "unsupported major version (%u > %u)\n", - request->major, GB_SVC_VERSION_MAJOR); + request->major, GB_SVC_VERSION_MAJOR); return -ENOTSUPP; } @@ -845,8 +844,8 @@ static int gb_svc_hello(struct gb_operation *op) if (op->request->payload_size < sizeof(*hello_request)) { dev_warn(&svc->dev, "short hello request (%zu < %zu)\n", - op->request->payload_size, - sizeof(*hello_request)); + op->request->payload_size, + sizeof(*hello_request)); return -EINVAL; } @@ -877,7 +876,7 @@ err_unregister_device: } static struct gb_interface *gb_svc_interface_lookup(struct gb_svc *svc, - u8 intf_id) + u8 
intf_id) { struct gb_host_device *hd = svc->hd; struct gb_module *module; @@ -889,7 +888,7 @@ static struct gb_interface *gb_svc_interface_lookup(struct gb_svc *svc, num_interfaces = module->num_interfaces; if (intf_id >= module_id && - intf_id < module_id + num_interfaces) { + intf_id < module_id + num_interfaces) { return module->interfaces[intf_id - module_id]; } } @@ -938,8 +937,8 @@ static void gb_svc_process_hello_deferred(struct gb_operation *operation) if (ret) dev_warn(&svc->dev, - "power mode change failed on AP to switch link: %d\n", - ret); + "power mode change failed on AP to switch link: %d\n", + ret); } static void gb_svc_process_module_inserted(struct gb_operation *operation) @@ -961,17 +960,17 @@ static void gb_svc_process_module_inserted(struct gb_operation *operation) flags = le16_to_cpu(request->flags); dev_dbg(&svc->dev, "%s - id = %u, num_interfaces = %zu, flags = 0x%04x\n", - __func__, module_id, num_interfaces, flags); + __func__, module_id, num_interfaces, flags); if (flags & GB_SVC_MODULE_INSERTED_FLAG_NO_PRIMARY) { dev_warn(&svc->dev, "no primary interface detected on module %u\n", - module_id); + module_id); } module = gb_svc_module_lookup(svc, module_id); if (module) { dev_warn(&svc->dev, "unexpected module-inserted event %u\n", - module_id); + module_id); return; } @@ -1007,7 +1006,7 @@ static void gb_svc_process_module_removed(struct gb_operation *operation) module = gb_svc_module_lookup(svc, module_id); if (!module) { dev_warn(&svc->dev, "unexpected module-removed event %u\n", - module_id); + module_id); return; } @@ -1066,7 +1065,7 @@ static void gb_svc_process_intf_mailbox_event(struct gb_operation *operation) mailbox = le32_to_cpu(request->mailbox); dev_dbg(&svc->dev, "%s - id = %u, result = 0x%04x, mailbox = 0x%08x\n", - __func__, intf_id, result_code, mailbox); + __func__, intf_id, result_code, mailbox); intf = gb_svc_interface_lookup(svc, intf_id); if (!intf) { @@ -1140,7 +1139,7 @@ static int gb_svc_intf_reset_recv(struct gb_operation *op) if (request->payload_size < sizeof(*reset)) { dev_warn(&svc->dev, "short reset request received (%zu < %zu)\n", - request->payload_size, sizeof(*reset)); + request->payload_size, sizeof(*reset)); return -EINVAL; } reset = request->payload; @@ -1157,14 +1156,14 @@ static int gb_svc_module_inserted_recv(struct gb_operation *op) if (op->request->payload_size < sizeof(*request)) { dev_warn(&svc->dev, "short module-inserted request received (%zu < %zu)\n", - op->request->payload_size, sizeof(*request)); + op->request->payload_size, sizeof(*request)); return -EINVAL; } request = op->request->payload; dev_dbg(&svc->dev, "%s - id = %u\n", __func__, - request->primary_intf_id); + request->primary_intf_id); return gb_svc_queue_deferred_request(op); } @@ -1176,14 +1175,14 @@ static int gb_svc_module_removed_recv(struct gb_operation *op) if (op->request->payload_size < sizeof(*request)) { dev_warn(&svc->dev, "short module-removed request received (%zu < %zu)\n", - op->request->payload_size, sizeof(*request)); + op->request->payload_size, sizeof(*request)); return -EINVAL; } request = op->request->payload; dev_dbg(&svc->dev, "%s - id = %u\n", __func__, - request->primary_intf_id); + request->primary_intf_id); return gb_svc_queue_deferred_request(op); } @@ -1209,7 +1208,7 @@ static int gb_svc_intf_mailbox_event_recv(struct gb_operation *op) if (op->request->payload_size < sizeof(*request)) { dev_warn(&svc->dev, "short mailbox request received (%zu < %zu)\n", - op->request->payload_size, sizeof(*request)); + 
op->request->payload_size, sizeof(*request)); return -EINVAL; } @@ -1254,7 +1253,7 @@ static int gb_svc_request_handler(struct gb_operation *op) if (ret) { dev_warn(&svc->dev, "unexpected request 0x%02x received (state %u)\n", - type, svc->state); + type, svc->state); return ret; } @@ -1329,10 +1328,10 @@ struct gb_svc *gb_svc_create(struct gb_host_device *hd) svc->hd = hd; svc->connection = gb_connection_create_static(hd, GB_SVC_CPORT_ID, - gb_svc_request_handler); + gb_svc_request_handler); if (IS_ERR(svc->connection)) { dev_err(&svc->dev, "failed to create connection: %ld\n", - PTR_ERR(svc->connection)); + PTR_ERR(svc->connection)); goto err_put_device; } diff --git a/drivers/staging/greybus/uart.c b/drivers/staging/greybus/uart.c index 3313cb0b60af..b3bffe91ae99 100644 --- a/drivers/staging/greybus/uart.c +++ b/drivers/staging/greybus/uart.c @@ -805,8 +805,8 @@ static const struct tty_operations gb_ops = { .tiocmget = gb_tty_tiocmget, .tiocmset = gb_tty_tiocmset, .get_icount = gb_tty_get_icount, - .set_serial = set_serial_info, - .get_serial = get_serial_info, + .set_serial = set_serial_info, + .get_serial = get_serial_info, }; static const struct tty_port_operations gb_port_ops = { diff --git a/drivers/staging/iio/adc/Kconfig b/drivers/staging/iio/adc/Kconfig index 9d3062a07460..fc23059f1673 100644 --- a/drivers/staging/iio/adc/Kconfig +++ b/drivers/staging/iio/adc/Kconfig @@ -73,6 +73,7 @@ config AD7192 config AD7280 tristate "Analog Devices AD7280A Lithium Ion Battery Monitoring System" depends on SPI + select CRC8 help Say yes here to build support for Analog Devices AD7280A Lithium Ion Battery Monitoring System. diff --git a/drivers/staging/iio/adc/ad7280a.c b/drivers/staging/iio/adc/ad7280a.c index 58420dcb406d..14f6a3ced060 100644 --- a/drivers/staging/iio/adc/ad7280a.c +++ b/drivers/staging/iio/adc/ad7280a.c @@ -6,6 +6,7 @@ * Licensed under the GPL-2. 
*/ +#include <linux/crc8.h> #include #include #include @@ -121,8 +122,6 @@ static unsigned int ad7280a_devaddr(unsigned int addr) * P(x) = x^8 + x^5 + x^3 + x^2 + x^1 + x^0 = 0b100101111 => 0x2F */ #define POLYNOM 0x2F -#define POLYNOM_ORDER 8 -#define HIGHBIT (1 << (POLYNOM_ORDER - 1)) struct ad7280_state { struct spi_device *spi; @@ -131,7 +130,7 @@ struct ad7280_state { int slave_num; int scan_cnt; int readback_delay_us; - unsigned char crc_tab[256]; + unsigned char crc_tab[CRC8_TABLE_SIZE]; unsigned char ctrl_hb; unsigned char ctrl_lb; unsigned char cell_threshhigh; @@ -144,23 +143,6 @@ struct ad7280_state { __be32 buf[2] ____cacheline_aligned; }; -static void ad7280_crc8_build_table(unsigned char *crc_tab) -{ - unsigned char bit, crc; - int cnt, i; - - for (cnt = 0; cnt < 256; cnt++) { - crc = cnt; - for (i = 0; i < 8; i++) { - bit = crc & HIGHBIT; - crc <<= 1; - if (bit) - crc ^= POLYNOM; - } - crc_tab[cnt] = crc; - } -} - static unsigned char ad7280_calc_crc8(unsigned char *crc_tab, unsigned int val) { unsigned char crc; @@ -256,7 +238,9 @@ static int ad7280_read(struct ad7280_state *st, unsigned int devaddr, if (ret) return ret; - __ad7280_read32(st, &tmp); + ret = __ad7280_read32(st, &tmp); + if (ret) + return ret; if (ad7280_check_crc(st, tmp)) return -EIO; @@ -294,7 +278,9 @@ static int ad7280_read_channel(struct ad7280_state *st, unsigned int devaddr, ad7280_delay(st); - __ad7280_read32(st, &tmp); + ret = __ad7280_read32(st, &tmp); + if (ret) + return ret; if (ad7280_check_crc(st, tmp)) return -EIO; @@ -327,7 +313,9 @@ static int ad7280_read_all_channels(struct ad7280_state *st, unsigned int cnt, ad7280_delay(st); for (i = 0; i < cnt; i++) { - __ad7280_read32(st, &tmp); + ret = __ad7280_read32(st, &tmp); + if (ret) + return ret; if (ad7280_check_crc(st, tmp)) return -EIO; @@ -342,6 +330,14 @@ static int ad7280_read_all_channels(struct ad7280_state *st, unsigned int cnt, return sum; } +static void ad7280_sw_power_down(void *data) +{ + struct ad7280_state *st = data; + + ad7280_write(st, AD7280A_DEVADDR_MASTER, AD7280A_CONTROL_HB, 1, + AD7280A_CTRL_HB_PWRDN_SW | st->ctrl_hb); +} + static int ad7280_chain_setup(struct ad7280_state *st) { unsigned int val, n; @@ -362,26 +358,38 @@ static int ad7280_chain_setup(struct ad7280_state *st) AD7280A_CTRL_LB_MUST_SET | st->ctrl_lb); if (ret) - return ret; + goto error_power_down; ret = ad7280_write(st, AD7280A_DEVADDR_MASTER, AD7280A_READ, 1, AD7280A_CONTROL_LB << 2); if (ret) - return ret; + goto error_power_down; for (n = 0; n <= AD7280A_MAX_CHAIN; n++) { - __ad7280_read32(st, &val); + ret = __ad7280_read32(st, &val); + if (ret) + goto error_power_down; + if (val == 0) return n - 1; - if (ad7280_check_crc(st, val)) - return -EIO; + if (ad7280_check_crc(st, val)) { + ret = -EIO; + goto error_power_down; + } - if (n != ad7280a_devaddr(val >> 27)) - return -EIO; + if (n != ad7280a_devaddr(val >> 27)) { + ret = -EIO; + goto error_power_down; + } } + ret = -EFAULT; + +error_power_down: + ad7280_write(st, AD7280A_DEVADDR_MASTER, AD7280A_CONTROL_HB, 1, + AD7280A_CTRL_HB_PWRDN_SW | st->ctrl_hb); - return -EFAULT; + return ret; } static ssize_t ad7280_show_balance_sw(struct device *dev, @@ -492,8 +500,8 @@ static int ad7280_channel_init(struct ad7280_state *st) { int dev, ch, cnt; - st->channels = kcalloc((st->slave_num + 1) * 12 + 2, - sizeof(*st->channels), GFP_KERNEL); + st->channels = devm_kcalloc(&st->spi->dev, (st->slave_num + 1) * 12 + 2, + sizeof(*st->channels), GFP_KERNEL); if (!st->channels) return -ENOMEM;
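A note on the CRC-8 conversion in the hunks above: crc8_populate_msb() from the kernel's crc8 library (pulled in by the new "select CRC8" in Kconfig, and called from the probe hunk further below) builds the same 256-entry lookup table that the removed ad7280_crc8_build_table() computed by hand. A minimal standalone sketch of that MSB-first table construction for the 0x2F polynomial, written as userspace C with hypothetical names, for illustration only:

#include <stdio.h>

#define POLYNOM 0x2F	/* x^8 + x^5 + x^3 + x^2 + x^1 + x^0 */

/* Build the MSB-first CRC-8 table, as crc8_populate_msb() does in lib/crc8.c. */
static void crc8_table_msb(unsigned char table[256])
{
	for (int byte = 0; byte < 256; byte++) {
		unsigned char crc = byte;

		for (int bit = 0; bit < 8; bit++) {
			int msb_set = crc & 0x80;

			crc <<= 1;		/* shift the CRC register left */
			if (msb_set)		/* reduce modulo the polynomial */
				crc ^= POLYNOM;
		}
		table[byte] = crc;
	}
}

int main(void)
{
	unsigned char tab[256];

	crc8_table_msb(tab);
	/* entry 0x01 of an MSB-first table is the polynomial itself */
	printf("tab[0x01] = 0x%02x\n", (unsigned int)tab[0x01]);	/* prints 0x2f */
	return 0;
}

Using the shared lib/crc8.c implementation also lets the driver drop its private POLYNOM_ORDER and HIGHBIT macros, which is exactly what the first hunk above removes.

@@ -552,48 +560,47 @@ static int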
ad7280_channel_init(struct ad7280_state *st) static int ad7280_attr_init(struct ad7280_state *st) { int dev, ch, cnt; + unsigned int index; + struct iio_dev_attr *iio_attr; - st->iio_attr = kcalloc(2, sizeof(*st->iio_attr) * - (st->slave_num + 1) * AD7280A_CELLS_PER_DEV, - GFP_KERNEL); + st->iio_attr = devm_kcalloc(&st->spi->dev, 2, sizeof(*st->iio_attr) * + (st->slave_num + 1) * AD7280A_CELLS_PER_DEV, + GFP_KERNEL); if (!st->iio_attr) return -ENOMEM; for (dev = 0, cnt = 0; dev <= st->slave_num; dev++) for (ch = AD7280A_CELL_VOLTAGE_1; ch <= AD7280A_CELL_VOLTAGE_6; ch++, cnt++) { - st->iio_attr[cnt].address = - ad7280a_devaddr(dev) << 8 | ch; - st->iio_attr[cnt].dev_attr.attr.mode = - 0644; - st->iio_attr[cnt].dev_attr.show = - ad7280_show_balance_sw; - st->iio_attr[cnt].dev_attr.store = - ad7280_store_balance_sw; - st->iio_attr[cnt].dev_attr.attr.name = - kasprintf(GFP_KERNEL, - "in%d-in%d_balance_switch_en", - dev * AD7280A_CELLS_PER_DEV + ch, - dev * AD7280A_CELLS_PER_DEV + ch + 1); - ad7280_attributes[cnt] = - &st->iio_attr[cnt].dev_attr.attr; + iio_attr = &st->iio_attr[cnt]; + index = dev * AD7280A_CELLS_PER_DEV + ch; + iio_attr->address = ad7280a_devaddr(dev) << 8 | ch; + iio_attr->dev_attr.attr.mode = 0644; + iio_attr->dev_attr.show = ad7280_show_balance_sw; + iio_attr->dev_attr.store = ad7280_store_balance_sw; + iio_attr->dev_attr.attr.name = + devm_kasprintf(&st->spi->dev, GFP_KERNEL, + "in%d-in%d_balance_switch_en", + index, index + 1); + if (!iio_attr->dev_attr.attr.name) + return -ENOMEM; + + ad7280_attributes[cnt] = &iio_attr->dev_attr.attr; cnt++; - st->iio_attr[cnt].address = - ad7280a_devaddr(dev) << 8 | + iio_attr = &st->iio_attr[cnt]; + iio_attr->address = ad7280a_devaddr(dev) << 8 | (AD7280A_CB1_TIMER + ch); - st->iio_attr[cnt].dev_attr.attr.mode = - 0644; - st->iio_attr[cnt].dev_attr.show = - ad7280_show_balance_timer; - st->iio_attr[cnt].dev_attr.store = - ad7280_store_balance_timer; - st->iio_attr[cnt].dev_attr.attr.name = - kasprintf(GFP_KERNEL, - "in%d-in%d_balance_timer", - dev * AD7280A_CELLS_PER_DEV + ch, - dev * AD7280A_CELLS_PER_DEV + ch + 1); - ad7280_attributes[cnt] = - &st->iio_attr[cnt].dev_attr.attr; + iio_attr->dev_attr.attr.mode = 0644; + iio_attr->dev_attr.show = ad7280_show_balance_timer; + iio_attr->dev_attr.store = ad7280_store_balance_timer; + iio_attr->dev_attr.attr.name = + devm_kasprintf(&st->spi->dev, GFP_KERNEL, + "in%d-in%d_balance_timer", + index, index + 1); + if (!iio_attr->dev_attr.attr.name) + return -ENOMEM; + + ad7280_attributes[cnt] = &iio_attr->dev_attr.attr; } ad7280_attributes[cnt] = NULL; @@ -610,7 +617,7 @@ static ssize_t ad7280_read_channel_config(struct device *dev, struct iio_dev_attr *this_attr = to_iio_dev_attr(attr); unsigned int val; - switch ((u32)this_attr->address) { + switch (this_attr->address) { case AD7280A_CELL_OVERVOLTAGE: val = 1000 + (st->cell_threshhigh * 1568) / 100; break; @@ -646,7 +653,7 @@ static ssize_t ad7280_write_channel_config(struct device *dev, if (ret) return ret; - switch ((u32)this_attr->address) { + switch (this_attr->address) { case AD7280A_CELL_OVERVOLTAGE: case AD7280A_CELL_UNDERVOLTAGE: val = ((val - 1000) * 100) / 1568; /* LSB 15.68mV */ @@ -662,7 +669,7 @@ static ssize_t ad7280_write_channel_config(struct device *dev, val = clamp(val, 0L, 0xFFL); mutex_lock(&st->lock); - switch ((u32)this_attr->address) { + switch (this_attr->address) { case AD7280A_CELL_OVERVOLTAGE: st->cell_threshhigh = val; break; @@ -857,7 +864,7 @@ static int ad7280_probe(struct spi_device *spi) if (!pdata) pdata = 
&ad7793_default_pdata; - ad7280_crc8_build_table(st->crc_tab); + crc8_populate_msb(st->crc_tab, POLYNOM); st->spi->max_speed_hz = AD7280A_MAX_SPI_CLK_HZ; st->spi->mode = SPI_MODE_1; @@ -877,6 +884,10 @@ static int ad7280_probe(struct spi_device *spi) st->cell_threshhigh = 0xFF; st->aux_threshhigh = 0xFF; + ret = devm_add_action_or_reset(&spi->dev, ad7280_sw_power_down, st); + if (ret) + return ret; + /* * Total Conversion Time = ((tACQ + tCONV) * * (Number of Conversions per Part)) − @@ -909,65 +920,37 @@ static int ad7280_probe(struct spi_device *spi) ret = ad7280_attr_init(st); if (ret < 0) - goto error_free_channels; + return ret; - ret = iio_device_register(indio_dev); + ret = devm_iio_device_register(&spi->dev, indio_dev); if (ret) - goto error_free_attr; + return ret; if (spi->irq > 0) { ret = ad7280_write(st, AD7280A_DEVADDR_MASTER, AD7280A_ALERT, 1, AD7280A_ALERT_RELAY_SIG_CHAIN_DOWN); if (ret) - goto error_unregister; + return ret; ret = ad7280_write(st, ad7280a_devaddr(st->slave_num), AD7280A_ALERT, 0, AD7280A_ALERT_GEN_STATIC_HIGH | (pdata->chain_last_alert_ignore & 0xF)); if (ret) - goto error_unregister; - - ret = request_threaded_irq(spi->irq, - NULL, - ad7280_event_handler, - IRQF_TRIGGER_FALLING | - IRQF_ONESHOT, - indio_dev->name, - indio_dev); + return ret; + + ret = devm_request_threaded_irq(&spi->dev, spi->irq, + NULL, + ad7280_event_handler, + IRQF_TRIGGER_FALLING | + IRQF_ONESHOT, + indio_dev->name, + indio_dev); if (ret) - goto error_unregister; + return ret; } - return 0; -error_unregister: - iio_device_unregister(indio_dev); - -error_free_attr: - kfree(st->iio_attr); - -error_free_channels: - kfree(st->channels); - - return ret; -} - -static int ad7280_remove(struct spi_device *spi) -{ - struct iio_dev *indio_dev = spi_get_drvdata(spi); - struct ad7280_state *st = iio_priv(indio_dev); - - if (spi->irq > 0) - free_irq(spi->irq, indio_dev); - iio_device_unregister(indio_dev); - - ad7280_write(st, AD7280A_DEVADDR_MASTER, AD7280A_CONTROL_HB, 1, - AD7280A_CTRL_HB_PWRDN_SW | st->ctrl_hb); - - kfree(st->channels); - kfree(st->iio_attr); - return 0; } @@ -982,7 +965,6 @@ static struct spi_driver ad7280_driver = { .name = "ad7280", }, .probe = ad7280_probe, - .remove = ad7280_remove, .id_table = ad7280_id, }; module_spi_driver(ad7280_driver); diff --git a/drivers/staging/iio/adc/ad7606.c b/drivers/staging/iio/adc/ad7606.c index 048c205b979e..7308fa8fbb4c 100644 --- a/drivers/staging/iio/adc/ad7606.c +++ b/drivers/staging/iio/adc/ad7606.c @@ -374,7 +374,7 @@ static int ad7606_request_gpios(struct ad7606_state *st) return 0; st->gpio_os = devm_gpiod_get_array_optional(dev, "oversampling-ratio", - GPIOD_OUT_LOW); + GPIOD_OUT_LOW); return PTR_ERR_OR_ZERO(st->gpio_os); } diff --git a/drivers/staging/iio/adc/ad7780.c b/drivers/staging/iio/adc/ad7780.c index b67412db0318..c4a85789c2db 100644 --- a/drivers/staging/iio/adc/ad7780.c +++ b/drivers/staging/iio/adc/ad7780.c @@ -22,19 +22,28 @@ #include #include -#define AD7780_RDY BIT(7) -#define AD7780_FILTER BIT(6) -#define AD7780_ERR BIT(5) -#define AD7780_ID1 BIT(4) -#define AD7780_ID0 BIT(3) -#define AD7780_GAIN BIT(2) -#define AD7780_PAT1 BIT(1) -#define AD7780_PAT0 BIT(0) +#define AD7780_RDY BIT(7) +#define AD7780_FILTER BIT(6) +#define AD7780_ERR BIT(5) +#define AD7780_ID1 BIT(4) +#define AD7780_ID0 BIT(3) +#define AD7780_GAIN BIT(2) +#define AD7780_PAT1 BIT(1) +#define AD7780_PAT0 BIT(0) + +#define AD7780_PATTERN (AD7780_PAT0) +#define AD7780_PATTERN_MASK (AD7780_PAT0 | AD7780_PAT1) + +#define AD7170_PAT2 BIT(2) + +#define 
AD7170_PATTERN (AD7780_PAT0 | AD7170_PAT2) +#define AD7170_PATTERN_MASK (AD7780_PAT0 | AD7780_PAT1 | AD7170_PAT2) struct ad7780_chip_info { struct iio_chan_spec channel; unsigned int pattern_mask; unsigned int pattern; + bool is_ad778x; }; struct ad7780_state { @@ -42,7 +51,6 @@ struct ad7780_state { struct regulator *reg; struct gpio_desc *powerdown_gpio; unsigned int gain; - u16 int_vref_mv; struct ad_sigma_delta sd; }; @@ -87,16 +95,20 @@ static int ad7780_read_raw(struct iio_dev *indio_dev, long m) { struct ad7780_state *st = iio_priv(indio_dev); + int voltage_uv; switch (m) { case IIO_CHAN_INFO_RAW: return ad_sigma_delta_single_conversion(indio_dev, chan, val); case IIO_CHAN_INFO_SCALE: - *val = st->int_vref_mv * st->gain; + voltage_uv = regulator_get_voltage(st->reg); + if (voltage_uv < 0) + return voltage_uv; + *val = (voltage_uv / 1000) * st->gain; *val2 = chan->scan_type.realbits - 1; return IIO_VAL_FRACTIONAL_LOG2; case IIO_CHAN_INFO_OFFSET: - *val -= (1 << (chan->scan_type.realbits - 1)); + *val = -(1 << (chan->scan_type.realbits - 1)); return IIO_VAL_INT; } @@ -113,10 +125,12 @@ static int ad7780_postprocess_sample(struct ad_sigma_delta *sigma_delta, ((raw_sample & chip_info->pattern_mask) != chip_info->pattern)) return -EIO; - if (raw_sample & AD7780_GAIN) - st->gain = 1; - else - st->gain = 128; + if (chip_info->is_ad778x) { + if (raw_sample & AD7780_GAIN) + st->gain = 1; + else + st->gain = 128; + } return 0; } @@ -133,23 +147,27 @@ static const struct ad_sigma_delta_info ad7780_sigma_delta_info = { static const struct ad7780_chip_info ad7780_chip_info_tbl[] = { [ID_AD7170] = { .channel = AD7780_CHANNEL(12, 24), - .pattern = 0x5, - .pattern_mask = 0x7, + .pattern = AD7170_PATTERN, + .pattern_mask = AD7170_PATTERN_MASK, + .is_ad778x = false, }, [ID_AD7171] = { .channel = AD7780_CHANNEL(16, 24), - .pattern = 0x5, - .pattern_mask = 0x7, + .pattern = AD7170_PATTERN, + .pattern_mask = AD7170_PATTERN_MASK, + .is_ad778x = false, }, [ID_AD7780] = { .channel = AD7780_CHANNEL(24, 32), - .pattern = 0x1, - .pattern_mask = 0x3, + .pattern = AD7780_PATTERN, + .pattern_mask = AD7780_PATTERN_MASK, + .is_ad778x = true, }, [ID_AD7781] = { .channel = AD7780_CHANNEL(20, 32), - .pattern = 0x1, - .pattern_mask = 0x3, + .pattern = AD7780_PATTERN, + .pattern_mask = AD7780_PATTERN_MASK, + .is_ad778x = true, }, }; @@ -161,7 +179,7 @@ static int ad7780_probe(struct spi_device *spi) { struct ad7780_state *st; struct iio_dev *indio_dev; - int ret, voltage_uv = 0; + int ret; indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st)); if (!indio_dev) @@ -181,16 +199,10 @@ static int ad7780_probe(struct spi_device *spi) dev_err(&spi->dev, "Failed to enable specified AVdd supply\n"); return ret; } - voltage_uv = regulator_get_voltage(st->reg); st->chip_info = &ad7780_chip_info_tbl[spi_get_device_id(spi)->driver_data]; - if (voltage_uv) - st->int_vref_mv = voltage_uv / 1000; - else - dev_warn(&spi->dev, "Reference voltage unspecified\n"); - spi_set_drvdata(spi, indio_dev); indio_dev->dev.parent = &spi->dev; diff --git a/drivers/staging/iio/adc/ad7816.c b/drivers/staging/iio/adc/ad7816.c index bf76a8620bdb..5209651a1b25 100644 --- a/drivers/staging/iio/adc/ad7816.c +++ b/drivers/staging/iio/adc/ad7816.c @@ -7,7 +7,7 @@ */ #include -#include +#include #include #include #include @@ -43,15 +43,22 @@ */ struct ad7816_chip_info { + kernel_ulong_t id; struct spi_device *spi_dev; - u16 rdwr_pin; - u16 convert_pin; - u16 busy_pin; + struct gpio_desc *rdwr_pin; + struct gpio_desc *convert_pin; + struct gpio_desc 
*busy_pin; u8 oti_data[AD7816_CS_MAX + 1]; u8 channel_id; /* 0 always be temperature */ u8 mode; }; +enum ad7816_type { + ID_AD7816, + ID_AD7817, + ID_AD7818, +}; + /* * ad7816 data access by SPI */ @@ -61,28 +68,30 @@ static int ad7816_spi_read(struct ad7816_chip_info *chip, u16 *data) int ret = 0; __be16 buf; - gpio_set_value(chip->rdwr_pin, 1); - gpio_set_value(chip->rdwr_pin, 0); + gpiod_set_value(chip->rdwr_pin, 1); + gpiod_set_value(chip->rdwr_pin, 0); ret = spi_write(spi_dev, &chip->channel_id, sizeof(chip->channel_id)); if (ret < 0) { dev_err(&spi_dev->dev, "SPI channel setting error\n"); return ret; } - gpio_set_value(chip->rdwr_pin, 1); + gpiod_set_value(chip->rdwr_pin, 1); if (chip->mode == AD7816_PD) { /* operating mode 2 */ - gpio_set_value(chip->convert_pin, 1); - gpio_set_value(chip->convert_pin, 0); + gpiod_set_value(chip->convert_pin, 1); + gpiod_set_value(chip->convert_pin, 0); } else { /* operating mode 1 */ - gpio_set_value(chip->convert_pin, 0); - gpio_set_value(chip->convert_pin, 1); + gpiod_set_value(chip->convert_pin, 0); + gpiod_set_value(chip->convert_pin, 1); } - while (gpio_get_value(chip->busy_pin)) - cpu_relax(); + if (chip->id == ID_AD7816 || chip->id == ID_AD7817) { + while (gpiod_get_value(chip->busy_pin)) + cpu_relax(); + } - gpio_set_value(chip->rdwr_pin, 0); - gpio_set_value(chip->rdwr_pin, 1); + gpiod_set_value(chip->rdwr_pin, 0); + gpiod_set_value(chip->rdwr_pin, 1); ret = spi_read(spi_dev, &buf, sizeof(*data)); if (ret < 0) { dev_err(&spi_dev->dev, "SPI data read error\n"); @@ -99,8 +108,8 @@ static int ad7816_spi_write(struct ad7816_chip_info *chip, u8 data) struct spi_device *spi_dev = chip->spi_dev; int ret = 0; - gpio_set_value(chip->rdwr_pin, 1); - gpio_set_value(chip->rdwr_pin, 0); + gpiod_set_value(chip->rdwr_pin, 1); + gpiod_set_value(chip->rdwr_pin, 0); ret = spi_write(spi_dev, &data, sizeof(data)); if (ret < 0) dev_err(&spi_dev->dev, "SPI oti data write error\n"); @@ -129,10 +138,10 @@ static ssize_t ad7816_store_mode(struct device *dev, struct ad7816_chip_info *chip = iio_priv(indio_dev); if (strcmp(buf, "full")) { - gpio_set_value(chip->rdwr_pin, 1); + gpiod_set_value(chip->rdwr_pin, 1); chip->mode = AD7816_FULL; } else { - gpio_set_value(chip->rdwr_pin, 0); + gpiod_set_value(chip->rdwr_pin, 0); chip->mode = AD7816_PD; } @@ -345,15 +354,9 @@ static int ad7816_probe(struct spi_device *spi_dev) { struct ad7816_chip_info *chip; struct iio_dev *indio_dev; - unsigned short *pins = dev_get_platdata(&spi_dev->dev); int ret = 0; int i; - if (!pins) { - dev_err(&spi_dev->dev, "No necessary GPIO platform data.\n"); - return -EINVAL; - } - indio_dev = devm_iio_device_alloc(&spi_dev->dev, sizeof(*chip)); if (!indio_dev) return -ENOMEM; @@ -364,34 +367,33 @@ static int ad7816_probe(struct spi_device *spi_dev) chip->spi_dev = spi_dev; for (i = 0; i <= AD7816_CS_MAX; i++) chip->oti_data[i] = 203; - chip->rdwr_pin = pins[0]; - chip->convert_pin = pins[1]; - chip->busy_pin = pins[2]; - - ret = devm_gpio_request(&spi_dev->dev, chip->rdwr_pin, - spi_get_device_id(spi_dev)->name); - if (ret) { - dev_err(&spi_dev->dev, "Fail to request rdwr gpio PIN %d.\n", - chip->rdwr_pin); + + chip->id = spi_get_device_id(spi_dev)->driver_data; + chip->rdwr_pin = devm_gpiod_get(&spi_dev->dev, "rdwr", GPIOD_OUT_HIGH); + if (IS_ERR(chip->rdwr_pin)) { + ret = PTR_ERR(chip->rdwr_pin); + dev_err(&spi_dev->dev, "Failed to request rdwr GPIO: %d\n", + ret); return ret; } - gpio_direction_input(chip->rdwr_pin); - ret = devm_gpio_request(&spi_dev->dev, chip->convert_pin, - 
spi_get_device_id(spi_dev)->name); - if (ret) { - dev_err(&spi_dev->dev, "Fail to request convert gpio PIN %d.\n", - chip->convert_pin); + chip->convert_pin = devm_gpiod_get(&spi_dev->dev, "convert", + GPIOD_OUT_HIGH); + if (IS_ERR(chip->convert_pin)) { + ret = PTR_ERR(chip->convert_pin); + dev_err(&spi_dev->dev, "Failed to request convert GPIO: %d\n", + ret); return ret; } - gpio_direction_input(chip->convert_pin); - ret = devm_gpio_request(&spi_dev->dev, chip->busy_pin, - spi_get_device_id(spi_dev)->name); - if (ret) { - dev_err(&spi_dev->dev, "Fail to request busy gpio PIN %d.\n", - chip->busy_pin); - return ret; + if (chip->id == ID_AD7816 || chip->id == ID_AD7817) { + chip->busy_pin = devm_gpiod_get(&spi_dev->dev, "busy", + GPIOD_IN); + if (IS_ERR(chip->busy_pin)) { + ret = PTR_ERR(chip->busy_pin); + dev_err(&spi_dev->dev, "Failed to request busy GPIO: %d\n", + ret); + return ret; + } } - gpio_direction_input(chip->busy_pin); indio_dev->name = spi_get_device_id(spi_dev)->name; indio_dev->dev.parent = &spi_dev->dev; @@ -420,10 +422,18 @@ static int ad7816_probe(struct spi_device *spi_dev) return 0; } +static const struct of_device_id ad7816_of_match[] = { + { .compatible = "adi,ad7816", }, + { .compatible = "adi,ad7817", }, + { .compatible = "adi,ad7818", }, + { } +}; +MODULE_DEVICE_TABLE(of, ad7816_of_match); + static const struct spi_device_id ad7816_id[] = { - { "ad7816", 0 }, - { "ad7817", 0 }, - { "ad7818", 0 }, + { "ad7816", ID_AD7816 }, + { "ad7817", ID_AD7817 }, + { "ad7818", ID_AD7818 }, {} }; @@ -432,6 +442,7 @@ MODULE_DEVICE_TABLE(spi, ad7816_id); static struct spi_driver ad7816_driver = { .driver = { .name = "ad7816", + .of_match_table = ad7816_of_match, }, .probe = ad7816_probe, .id_table = ad7816_id, diff --git a/drivers/staging/iio/addac/adt7316-i2c.c b/drivers/staging/iio/addac/adt7316-i2c.c index f66dd3ebbab1..2d51bd425662 100644 --- a/drivers/staging/iio/addac/adt7316-i2c.c +++ b/drivers/staging/iio/addac/adt7316-i2c.c @@ -35,6 +35,8 @@ static int adt7316_i2c_read(void *client, u8 reg, u8 *data) return ret; } + *data = ret; + return 0; } @@ -98,7 +100,6 @@ static int adt7316_i2c_probe(struct i2c_client *client, struct adt7316_bus bus = { .client = client, .irq = client->irq, - .irq_flags = IRQF_TRIGGER_LOW, .read = adt7316_i2c_read, .write = adt7316_i2c_write, .multi_read = adt7316_i2c_multi_read, @@ -120,9 +121,22 @@ static const struct i2c_device_id adt7316_i2c_id[] = { MODULE_DEVICE_TABLE(i2c, adt7316_i2c_id); +static const struct of_device_id adt7316_of_match[] = { + { .compatible = "adi,adt7316" }, + { .compatible = "adi,adt7317" }, + { .compatible = "adi,adt7318" }, + { .compatible = "adi,adt7516" }, + { .compatible = "adi,adt7517" }, + { .compatible = "adi,adt7519" }, + { }, +}; + +MODULE_DEVICE_TABLE(of, adt7316_of_match); + static struct i2c_driver adt7316_driver = { .driver = { .name = "adt7316", + .of_match_table = adt7316_of_match, .pm = ADT7316_PM_OPS, }, .probe = adt7316_i2c_probe, diff --git a/drivers/staging/iio/addac/adt7316-spi.c b/drivers/staging/iio/addac/adt7316-spi.c index 5cd22743e140..e75827e326a6 100644 --- a/drivers/staging/iio/addac/adt7316-spi.c +++ b/drivers/staging/iio/addac/adt7316-spi.c @@ -94,7 +94,6 @@ static int adt7316_spi_probe(struct spi_device *spi_dev) struct adt7316_bus bus = { .client = spi_dev, .irq = spi_dev->irq, - .irq_flags = IRQF_TRIGGER_LOW, .read = adt7316_spi_read, .write = adt7316_spi_write, .multi_read = adt7316_spi_multi_read, diff --git a/drivers/staging/iio/addac/adt7316.c b/drivers/staging/iio/addac/adt7316.c 
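The adt7316.c hunks below apply the same legacy-GPIO-to-descriptor migration as the ad7816.c changes above: integer GPIO numbers taken from platform data plus devm_gpio_request()/gpio_set_value() become gpio_desc pointers obtained with the devm_gpiod_* helpers. A minimal probe-time sketch of the pattern (demo_probe() is hypothetical; the "adi,ldac" con_id and GPIOD_OUT_LOW flag are the ones the adt7316.c hunk actually uses):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>

static int demo_probe(struct device *dev)
{
	struct gpio_desc *ldac;

	/* Descriptor lookup by con_id; no GPIO numbers or platform data. */
	ldac = devm_gpiod_get_optional(dev, "adi,ldac", GPIOD_OUT_LOW);
	if (IS_ERR(ldac))
		return PTR_ERR(ldac);	/* propagates -EPROBE_DEFER etc. */

	/* The _optional variant returns NULL when the line is not wired. */
	if (ldac)
		gpiod_set_value(ldac, 1);	/* logical level; polarity comes from the firmware description */

	return 0;
}

One design consequence visible in both drivers: the descriptor API encodes direction and initial level in the request flags (GPIOD_OUT_HIGH, GPIOD_IN, ...), so the separate gpio_direction_input() calls disappear along with the request boilerplate.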
index 3f22d1088713..dc93e85808e0 100644 --- a/drivers/staging/iio/addac/adt7316.c +++ b/drivers/staging/iio/addac/adt7316.c @@ -177,7 +177,7 @@ struct adt7316_chip_info { struct adt7316_bus bus; - u16 ldac_pin; + struct gpio_desc *ldac_pin; u16 int_mask; /* 0x2f */ u8 config1; u8 config2; @@ -217,8 +217,8 @@ struct adt7316_limit_regs { }; static ssize_t adt7316_show_enabled(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -227,7 +227,7 @@ static ssize_t adt7316_show_enabled(struct device *dev, } static ssize_t _adt7316_store_enabled(struct adt7316_chip_info *chip, - int enable) + int enable) { u8 config1; int ret; @@ -247,9 +247,9 @@ static ssize_t _adt7316_store_enabled(struct adt7316_chip_info *chip, } static ssize_t adt7316_store_enabled(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -272,8 +272,8 @@ static IIO_DEVICE_ATTR(enabled, 0644, 0); static ssize_t adt7316_show_select_ex_temp(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -285,9 +285,9 @@ static ssize_t adt7316_show_select_ex_temp(struct device *dev, } static ssize_t adt7316_store_select_ex_temp(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -316,8 +316,8 @@ static IIO_DEVICE_ATTR(select_ex_temp, 0644, 0); static ssize_t adt7316_show_mode(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -329,9 +329,9 @@ static ssize_t adt7316_show_mode(struct device *dev, } static ssize_t adt7316_store_mode(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -357,8 +357,8 @@ static IIO_DEVICE_ATTR(mode, 0644, 0); static ssize_t adt7316_show_all_modes(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { return sprintf(buf, "single_channel\nround_robin\n"); } @@ -366,8 +366,8 @@ static ssize_t adt7316_show_all_modes(struct device *dev, static IIO_DEVICE_ATTR(all_modes, 0444, adt7316_show_all_modes, NULL, 0); static ssize_t adt7316_show_ad_channel(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -382,7 +382,7 @@ static ssize_t adt7316_show_ad_channel(struct device *dev, return sprintf(buf, "1 - Internal Temperature\n"); case ADT7316_AD_SINGLE_CH_EX: if (((chip->id & ID_FAMILY_MASK) == ID_ADT75XX) && - (chip->config1 & ADT7516_SEL_AIN1_2_EX_TEMP_MASK) == 0) + (chip->config1 & ADT7516_SEL_AIN1_2_EX_TEMP_MASK) 
== 0) return sprintf(buf, "2 - AIN1\n"); return sprintf(buf, "2 - External Temperature\n"); @@ -404,9 +404,9 @@ static ssize_t adt7316_show_ad_channel(struct device *dev, } static ssize_t adt7316_store_ad_channel(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -450,8 +450,8 @@ static IIO_DEVICE_ATTR(ad_channel, 0644, 0); static ssize_t adt7316_show_all_ad_channels(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -471,8 +471,8 @@ static IIO_DEVICE_ATTR(all_ad_channels, 0444, adt7316_show_all_ad_channels, NULL, 0); static ssize_t adt7316_show_disable_averaging(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -482,9 +482,9 @@ static ssize_t adt7316_show_disable_averaging(struct device *dev, } static ssize_t adt7316_store_disable_averaging(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -510,8 +510,8 @@ static IIO_DEVICE_ATTR(disable_averaging, 0644, 0); static ssize_t adt7316_show_enable_smbus_timeout(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -521,9 +521,9 @@ static ssize_t adt7316_show_enable_smbus_timeout(struct device *dev, } static ssize_t adt7316_store_enable_smbus_timeout(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -549,8 +549,8 @@ static IIO_DEVICE_ATTR(enable_smbus_timeout, 0644, 0); static ssize_t adt7316_show_powerdown(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -559,9 +559,9 @@ static ssize_t adt7316_show_powerdown(struct device *dev, } static ssize_t adt7316_store_powerdown(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -587,8 +587,8 @@ static IIO_DEVICE_ATTR(powerdown, 0644, 0); static ssize_t adt7316_show_fast_ad_clock(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -597,9 +597,9 @@ static ssize_t adt7316_show_fast_ad_clock(struct device *dev, } static ssize_t adt7316_store_fast_ad_clock(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + 
const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -625,8 +625,8 @@ static IIO_DEVICE_ATTR(fast_ad_clock, 0644, 0); static ssize_t adt7316_show_da_high_resolution(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -642,9 +642,9 @@ static ssize_t adt7316_show_da_high_resolution(struct device *dev, } static ssize_t adt7316_store_da_high_resolution(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -678,8 +678,8 @@ static IIO_DEVICE_ATTR(da_high_resolution, 0644, 0); static ssize_t adt7316_show_AIN_internal_Vref(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -692,9 +692,9 @@ static ssize_t adt7316_show_AIN_internal_Vref(struct device *dev, } static ssize_t adt7316_store_AIN_internal_Vref(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -724,8 +724,8 @@ static IIO_DEVICE_ATTR(AIN_internal_Vref, 0644, 0); static ssize_t adt7316_show_enable_prop_DACA(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -735,9 +735,9 @@ static ssize_t adt7316_show_enable_prop_DACA(struct device *dev, } static ssize_t adt7316_store_enable_prop_DACA(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -758,13 +758,13 @@ static ssize_t adt7316_store_enable_prop_DACA(struct device *dev, } static IIO_DEVICE_ATTR(enable_proportion_DACA, 0644, - adt7316_show_enable_prop_DACA, - adt7316_store_enable_prop_DACA, - 0); + adt7316_show_enable_prop_DACA, + adt7316_store_enable_prop_DACA, + 0); static ssize_t adt7316_show_enable_prop_DACB(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -774,9 +774,9 @@ static ssize_t adt7316_show_enable_prop_DACB(struct device *dev, } static ssize_t adt7316_store_enable_prop_DACB(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -797,13 +797,13 @@ static ssize_t adt7316_store_enable_prop_DACB(struct device *dev, } static IIO_DEVICE_ATTR(enable_proportion_DACB, 0644, - adt7316_show_enable_prop_DACB, - adt7316_store_enable_prop_DACB, - 0); + adt7316_show_enable_prop_DACB, + adt7316_store_enable_prop_DACB, + 0); static ssize_t 
adt7316_show_DAC_2Vref_ch_mask(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -813,9 +813,9 @@ static ssize_t adt7316_show_DAC_2Vref_ch_mask(struct device *dev, } static ssize_t adt7316_store_DAC_2Vref_ch_mask(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -840,13 +840,13 @@ static ssize_t adt7316_store_DAC_2Vref_ch_mask(struct device *dev, } static IIO_DEVICE_ATTR(DAC_2Vref_channels_mask, 0644, - adt7316_show_DAC_2Vref_ch_mask, - adt7316_store_DAC_2Vref_ch_mask, - 0); + adt7316_show_DAC_2Vref_ch_mask, + adt7316_store_DAC_2Vref_ch_mask, + 0); static ssize_t adt7316_show_DAC_update_mode(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -870,9 +870,9 @@ static ssize_t adt7316_show_DAC_update_mode(struct device *dev, } static ssize_t adt7316_store_DAC_update_mode(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -900,13 +900,13 @@ static ssize_t adt7316_store_DAC_update_mode(struct device *dev, } static IIO_DEVICE_ATTR(DAC_update_mode, 0644, - adt7316_show_DAC_update_mode, - adt7316_store_DAC_update_mode, - 0); + adt7316_show_DAC_update_mode, + adt7316_store_DAC_update_mode, + 0); static ssize_t adt7316_show_all_DAC_update_modes(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -920,12 +920,12 @@ static ssize_t adt7316_show_all_DAC_update_modes(struct device *dev, } static IIO_DEVICE_ATTR(all_DAC_update_modes, 0444, - adt7316_show_all_DAC_update_modes, NULL, 0); + adt7316_show_all_DAC_update_modes, NULL, 0); static ssize_t adt7316_store_update_DAC(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -950,21 +950,21 @@ static ssize_t adt7316_store_update_DAC(struct device *dev, if (ret) return -EIO; } else { - gpio_set_value(chip->ldac_pin, 0); - gpio_set_value(chip->ldac_pin, 1); + gpiod_set_value(chip->ldac_pin, 0); + gpiod_set_value(chip->ldac_pin, 1); } return len; } static IIO_DEVICE_ATTR(update_DAC, 0644, - NULL, - adt7316_store_update_DAC, - 0); + NULL, + adt7316_store_update_DAC, + 0); static ssize_t adt7316_show_DA_AB_Vref_bypass(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -977,9 +977,9 @@ static ssize_t adt7316_show_DA_AB_Vref_bypass(struct device *dev, } static ssize_t adt7316_store_DA_AB_Vref_bypass(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct 
device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1003,13 +1003,13 @@ static ssize_t adt7316_store_DA_AB_Vref_bypass(struct device *dev, } static IIO_DEVICE_ATTR(DA_AB_Vref_bypass, 0644, - adt7316_show_DA_AB_Vref_bypass, - adt7316_store_DA_AB_Vref_bypass, - 0); + adt7316_show_DA_AB_Vref_bypass, + adt7316_store_DA_AB_Vref_bypass, + 0); static ssize_t adt7316_show_DA_CD_Vref_bypass(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1022,9 +1022,9 @@ static ssize_t adt7316_show_DA_CD_Vref_bypass(struct device *dev, } static ssize_t adt7316_store_DA_CD_Vref_bypass(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1048,13 +1048,13 @@ static ssize_t adt7316_store_DA_CD_Vref_bypass(struct device *dev, } static IIO_DEVICE_ATTR(DA_CD_Vref_bypass, 0644, - adt7316_show_DA_CD_Vref_bypass, - adt7316_store_DA_CD_Vref_bypass, - 0); + adt7316_show_DA_CD_Vref_bypass, + adt7316_store_DA_CD_Vref_bypass, + 0); static ssize_t adt7316_show_DAC_internal_Vref(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1068,9 +1068,9 @@ static ssize_t adt7316_show_DAC_internal_Vref(struct device *dev, } static ssize_t adt7316_store_DAC_internal_Vref(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1109,12 +1109,12 @@ static ssize_t adt7316_store_DAC_internal_Vref(struct device *dev, } static IIO_DEVICE_ATTR(DAC_internal_Vref, 0644, - adt7316_show_DAC_internal_Vref, - adt7316_store_DAC_internal_Vref, - 0); + adt7316_show_DAC_internal_Vref, + adt7316_store_DAC_internal_Vref, + 0); static ssize_t adt7316_show_ad(struct adt7316_chip_info *chip, - int channel, char *buf) + int channel, char *buf) { u16 data; u8 msb, lsb; @@ -1122,7 +1122,7 @@ static ssize_t adt7316_show_ad(struct adt7316_chip_info *chip, int ret; if ((chip->config2 & ADT7316_AD_SINGLE_CH_MODE) && - channel != (chip->config2 & ADT7516_AD_SINGLE_CH_MASK)) + channel != (chip->config2 & ADT7516_AD_SINGLE_CH_MASK)) return -EPERM; switch (channel) { @@ -1189,8 +1189,8 @@ static ssize_t adt7316_show_ad(struct adt7316_chip_info *chip, } static ssize_t adt7316_show_VDD(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1200,8 +1200,8 @@ static ssize_t adt7316_show_VDD(struct device *dev, static IIO_DEVICE_ATTR(VDD, 0444, adt7316_show_VDD, NULL, 0); static ssize_t adt7316_show_in_temp(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1212,8 +1212,8 @@ static ssize_t 
adt7316_show_in_temp(struct device *dev, static IIO_DEVICE_ATTR(in_temp, 0444, adt7316_show_in_temp, NULL, 0); static ssize_t adt7316_show_ex_temp_AIN1(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1222,12 +1222,12 @@ static ssize_t adt7316_show_ex_temp_AIN1(struct device *dev, } static IIO_DEVICE_ATTR(ex_temp_AIN1, 0444, adt7316_show_ex_temp_AIN1, - NULL, 0); + NULL, 0); static IIO_DEVICE_ATTR(ex_temp, 0444, adt7316_show_ex_temp_AIN1, NULL, 0); static ssize_t adt7316_show_AIN2(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1237,8 +1237,8 @@ static ssize_t adt7316_show_AIN2(struct device *dev, static IIO_DEVICE_ATTR(AIN2, 0444, adt7316_show_AIN2, NULL, 0); static ssize_t adt7316_show_AIN3(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1248,8 +1248,8 @@ static ssize_t adt7316_show_AIN3(struct device *dev, static IIO_DEVICE_ATTR(AIN3, 0444, adt7316_show_AIN3, NULL, 0); static ssize_t adt7316_show_AIN4(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1259,7 +1259,7 @@ static ssize_t adt7316_show_AIN4(struct device *dev, static IIO_DEVICE_ATTR(AIN4, 0444, adt7316_show_AIN4, NULL, 0); static ssize_t adt7316_show_temp_offset(struct adt7316_chip_info *chip, - int offset_addr, char *buf) + int offset_addr, char *buf) { int data; u8 val; @@ -1277,7 +1277,9 @@ static ssize_t adt7316_show_temp_offset(struct adt7316_chip_info *chip, } static ssize_t adt7316_store_temp_offset(struct adt7316_chip_info *chip, - int offset_addr, const char *buf, size_t len) + int offset_addr, + const char *buf, + size_t len) { int data; u8 val; @@ -1300,8 +1302,8 @@ static ssize_t adt7316_store_temp_offset(struct adt7316_chip_info *chip, } static ssize_t adt7316_show_in_temp_offset(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1310,9 +1312,9 @@ static ssize_t adt7316_show_in_temp_offset(struct device *dev, } static ssize_t adt7316_store_in_temp_offset(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1322,12 +1324,12 @@ static ssize_t adt7316_store_in_temp_offset(struct device *dev, } static IIO_DEVICE_ATTR(in_temp_offset, 0644, - adt7316_show_in_temp_offset, - adt7316_store_in_temp_offset, 0); + adt7316_show_in_temp_offset, + adt7316_store_in_temp_offset, 0); static ssize_t adt7316_show_ex_temp_offset(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1336,9 +1338,9 @@ static ssize_t 
adt7316_show_ex_temp_offset(struct device *dev, } static ssize_t adt7316_store_ex_temp_offset(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1348,12 +1350,12 @@ static ssize_t adt7316_store_ex_temp_offset(struct device *dev, } static IIO_DEVICE_ATTR(ex_temp_offset, 0644, - adt7316_show_ex_temp_offset, - adt7316_store_ex_temp_offset, 0); + adt7316_show_ex_temp_offset, + adt7316_store_ex_temp_offset, 0); static ssize_t adt7316_show_in_analog_temp_offset(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1363,9 +1365,9 @@ static ssize_t adt7316_show_in_analog_temp_offset(struct device *dev, } static ssize_t adt7316_store_in_analog_temp_offset(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1375,12 +1377,12 @@ static ssize_t adt7316_store_in_analog_temp_offset(struct device *dev, } static IIO_DEVICE_ATTR(in_analog_temp_offset, 0644, - adt7316_show_in_analog_temp_offset, - adt7316_store_in_analog_temp_offset, 0); + adt7316_show_in_analog_temp_offset, + adt7316_store_in_analog_temp_offset, 0); static ssize_t adt7316_show_ex_analog_temp_offset(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1390,9 +1392,9 @@ static ssize_t adt7316_show_ex_analog_temp_offset(struct device *dev, } static ssize_t adt7316_store_ex_analog_temp_offset(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1402,21 +1404,21 @@ static ssize_t adt7316_store_ex_analog_temp_offset(struct device *dev, } static IIO_DEVICE_ATTR(ex_analog_temp_offset, 0644, - adt7316_show_ex_analog_temp_offset, - adt7316_store_ex_analog_temp_offset, 0); + adt7316_show_ex_analog_temp_offset, + adt7316_store_ex_analog_temp_offset, 0); static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip, - int channel, char *buf) + int channel, char *buf) { u16 data; u8 msb, lsb, offset; int ret; if (channel >= ADT7316_DA_MSB_DATA_REGS || - (channel == 0 && - (chip->config3 & ADT7316_EN_IN_TEMP_PROP_DACA)) || - (channel == 1 && - (chip->config3 & ADT7316_EN_EX_TEMP_PROP_DACB))) + (channel == 0 && + (chip->config3 & ADT7316_EN_IN_TEMP_PROP_DACA)) || + (channel == 1 && + (chip->config3 & ADT7316_EN_EX_TEMP_PROP_DACB))) return -EPERM; offset = chip->dac_bits - 8; @@ -1439,17 +1441,17 @@ static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip, } static ssize_t adt7316_store_DAC(struct adt7316_chip_info *chip, - int channel, const char *buf, size_t len) + int channel, const char *buf, size_t len) { u8 msb, lsb, offset; u16 data; int ret; if (channel >= ADT7316_DA_MSB_DATA_REGS || - (channel == 0 && - (chip->config3 & ADT7316_EN_IN_TEMP_PROP_DACA)) || - (channel == 1 && - (chip->config3 & 
ADT7316_EN_EX_TEMP_PROP_DACB))) + (channel == 0 && + (chip->config3 & ADT7316_EN_IN_TEMP_PROP_DACA)) || + (channel == 1 && + (chip->config3 & ADT7316_EN_EX_TEMP_PROP_DACB))) return -EPERM; offset = chip->dac_bits - 8; @@ -1476,8 +1478,8 @@ static ssize_t adt7316_store_DAC(struct adt7316_chip_info *chip, } static ssize_t adt7316_show_DAC_A(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1486,9 +1488,9 @@ static ssize_t adt7316_show_DAC_A(struct device *dev, } static ssize_t adt7316_store_DAC_A(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1497,11 +1499,11 @@ static ssize_t adt7316_store_DAC_A(struct device *dev, } static IIO_DEVICE_ATTR(DAC_A, 0644, adt7316_show_DAC_A, - adt7316_store_DAC_A, 0); + adt7316_store_DAC_A, 0); static ssize_t adt7316_show_DAC_B(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1510,9 +1512,9 @@ static ssize_t adt7316_show_DAC_B(struct device *dev, } static ssize_t adt7316_store_DAC_B(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1521,11 +1523,11 @@ static ssize_t adt7316_store_DAC_B(struct device *dev, } static IIO_DEVICE_ATTR(DAC_B, 0644, adt7316_show_DAC_B, - adt7316_store_DAC_B, 0); + adt7316_store_DAC_B, 0); static ssize_t adt7316_show_DAC_C(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1534,9 +1536,9 @@ static ssize_t adt7316_show_DAC_C(struct device *dev, } static ssize_t adt7316_store_DAC_C(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1545,11 +1547,11 @@ static ssize_t adt7316_store_DAC_C(struct device *dev, } static IIO_DEVICE_ATTR(DAC_C, 0644, adt7316_show_DAC_C, - adt7316_store_DAC_C, 0); + adt7316_store_DAC_C, 0); static ssize_t adt7316_show_DAC_D(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1558,9 +1560,9 @@ static ssize_t adt7316_show_DAC_D(struct device *dev, } static ssize_t adt7316_store_DAC_D(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1569,11 +1571,11 @@ static ssize_t adt7316_store_DAC_D(struct device *dev, } static IIO_DEVICE_ATTR(DAC_D, 0644, adt7316_show_DAC_D, - adt7316_store_DAC_D, 0); + adt7316_store_DAC_D, 0); static ssize_t 
adt7316_show_device_id(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1590,8 +1592,8 @@ static ssize_t adt7316_show_device_id(struct device *dev, static IIO_DEVICE_ATTR(device_id, 0444, adt7316_show_device_id, NULL, 0); static ssize_t adt7316_show_manufactorer_id(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1606,11 +1608,11 @@ static ssize_t adt7316_show_manufactorer_id(struct device *dev, } static IIO_DEVICE_ATTR(manufactorer_id, 0444, - adt7316_show_manufactorer_id, NULL, 0); + adt7316_show_manufactorer_id, NULL, 0); static ssize_t adt7316_show_device_rev(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1627,8 +1629,8 @@ static ssize_t adt7316_show_device_rev(struct device *dev, static IIO_DEVICE_ATTR(device_rev, 0444, adt7316_show_device_rev, NULL, 0); static ssize_t adt7316_show_bus_type(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1811,8 +1813,8 @@ static irqreturn_t adt7316_event_handler(int irq, void *private) * Show mask of enabled interrupts in Hex. */ static ssize_t adt7316_show_int_mask(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1824,9 +1826,9 @@ static ssize_t adt7316_show_int_mask(struct device *dev, * Set 1 to the mask in Hex to enabled interrupts. 
*/ static ssize_t adt7316_set_int_mask(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1865,8 +1867,8 @@ static ssize_t adt7316_set_int_mask(struct device *dev, } static inline ssize_t adt7316_show_ad_bound(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev_attr *this_attr = to_iio_dev_attr(attr); struct iio_dev *dev_info = dev_to_iio_dev(dev); @@ -1876,7 +1878,7 @@ static inline ssize_t adt7316_show_ad_bound(struct device *dev, int ret; if ((chip->id & ID_FAMILY_MASK) == ID_ADT73XX && - this_attr->address > ADT7316_EX_TEMP_LOW) + this_attr->address > ADT7316_EX_TEMP_LOW) return -EPERM; ret = chip->bus.read(chip->bus.client, this_attr->address, &val); @@ -1886,7 +1888,7 @@ static inline ssize_t adt7316_show_ad_bound(struct device *dev, data = (int)val; if (!((chip->id & ID_FAMILY_MASK) == ID_ADT75XX && - (chip->config1 & ADT7516_SEL_AIN1_2_EX_TEMP_MASK) == 0)) { + (chip->config1 & ADT7516_SEL_AIN1_2_EX_TEMP_MASK) == 0)) { if (data & 0x80) data -= 256; } @@ -1895,9 +1897,9 @@ static inline ssize_t adt7316_show_ad_bound(struct device *dev, } static inline ssize_t adt7316_set_ad_bound(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev_attr *this_attr = to_iio_dev_attr(attr); struct iio_dev *dev_info = dev_to_iio_dev(dev); @@ -1907,7 +1909,7 @@ static inline ssize_t adt7316_set_ad_bound(struct device *dev, int ret; if ((chip->id & ID_FAMILY_MASK) == ID_ADT73XX && - this_attr->address > ADT7316_EX_TEMP_LOW) + this_attr->address > ADT7316_EX_TEMP_LOW) return -EPERM; ret = kstrtoint(buf, 10, &data); @@ -1915,7 +1917,7 @@ static inline ssize_t adt7316_set_ad_bound(struct device *dev, return -EINVAL; if ((chip->id & ID_FAMILY_MASK) == ID_ADT75XX && - (chip->config1 & ADT7516_SEL_AIN1_2_EX_TEMP_MASK) == 0) { + (chip->config1 & ADT7516_SEL_AIN1_2_EX_TEMP_MASK) == 0) { if (data > 255 || data < 0) return -EINVAL; } else { @@ -1936,8 +1938,8 @@ static inline ssize_t adt7316_set_ad_bound(struct device *dev, } static ssize_t adt7316_show_int_enabled(struct device *dev, - struct device_attribute *attr, - char *buf) + struct device_attribute *attr, + char *buf) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -1946,9 +1948,9 @@ static ssize_t adt7316_show_int_enabled(struct device *dev, } static ssize_t adt7316_set_int_enabled(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t len) + struct device_attribute *attr, + const char *buf, + size_t len) { struct iio_dev *dev_info = dev_to_iio_dev(dev); struct adt7316_chip_info *chip = iio_priv(dev_info); @@ -2097,11 +2099,12 @@ static const struct iio_info adt7516_info = { * device probe and remove */ int adt7316_probe(struct device *dev, struct adt7316_bus *bus, - const char *name) + const char *name) { struct adt7316_chip_info *chip; struct iio_dev *indio_dev; unsigned short *adt7316_platform_data = dev->platform_data; + int irq_type = IRQF_TRIGGER_LOW; int ret = 0; indio_dev = devm_iio_device_alloc(dev, sizeof(*chip)); @@ -2120,7 +2123,13 @@ int adt7316_probe(struct device *dev, struct adt7316_bus *bus, else return -ENODEV; - chip->ldac_pin = adt7316_platform_data[1]; + 
chip->ldac_pin = devm_gpiod_get_optional(dev, "adi,ldac", GPIOD_OUT_LOW); + if (IS_ERR(chip->ldac_pin)) { + ret = PTR_ERR(chip->ldac_pin); + dev_err(dev, "Failed to request ldac GPIO: %d\n", ret); + return ret; + } + if (chip->ldac_pin) { chip->config3 |= ADT7316_DA_EN_VIA_DAC_LDCA; if ((chip->id & ID_FAMILY_MASK) == ID_ADT75XX) @@ -2140,19 +2149,18 @@ int adt7316_probe(struct device *dev, struct adt7316_bus *bus, if (chip->bus.irq > 0) { if (adt7316_platform_data[0]) - chip->bus.irq_flags = adt7316_platform_data[0]; + irq_type = adt7316_platform_data[0]; ret = devm_request_threaded_irq(dev, chip->bus.irq, NULL, adt7316_event_handler, - chip->bus.irq_flags | - IRQF_ONESHOT, + irq_type | IRQF_ONESHOT, indio_dev->name, indio_dev); if (ret) return ret; - if (chip->bus.irq_flags & IRQF_TRIGGER_HIGH) + if (irq_type & IRQF_TRIGGER_HIGH) chip->config1 |= ADT7316_INT_POLARITY; } @@ -2169,7 +2177,7 @@ int adt7316_probe(struct device *dev, struct adt7316_bus *bus, return ret; dev_info(dev, "%s temperature sensor, ADC and DAC registered.\n", - indio_dev->name); + indio_dev->name); return 0; } diff --git a/drivers/staging/iio/addac/adt7316.h b/drivers/staging/iio/addac/adt7316.h index ec40fbb698a6..84ca4f6c88f5 100644 --- a/drivers/staging/iio/addac/adt7316.h +++ b/drivers/staging/iio/addac/adt7316.h @@ -17,7 +17,6 @@ struct adt7316_bus { void *client; int irq; - int irq_flags; int (*read)(void *client, u8 reg, u8 *data); int (*write)(void *client, u8 reg, u8 val); int (*multi_read)(void *client, u8 first_reg, u8 count, u8 *data); @@ -31,6 +30,6 @@ extern const struct dev_pm_ops adt7316_pm_ops; #define ADT7316_PM_OPS NULL #endif int adt7316_probe(struct device *dev, struct adt7316_bus *bus, - const char *name); + const char *name); #endif diff --git a/drivers/staging/iio/cdc/ad7150.c b/drivers/staging/iio/cdc/ad7150.c index d16084d7068c..24f74ce60f80 100644 --- a/drivers/staging/iio/cdc/ad7150.c +++ b/drivers/staging/iio/cdc/ad7150.c @@ -102,18 +102,19 @@ static int ad7150_read_raw(struct iio_dev *indio_dev, { int ret; struct ad7150_chip_info *chip = iio_priv(indio_dev); + int channel = chan->channel; switch (mask) { case IIO_CHAN_INFO_RAW: ret = i2c_smbus_read_word_data(chip->client, - ad7150_addresses[chan->channel][0]); + ad7150_addresses[channel][0]); if (ret < 0) return ret; *val = swab16(ret); return IIO_VAL_INT; case IIO_CHAN_INFO_AVERAGE_RAW: ret = i2c_smbus_read_word_data(chip->client, - ad7150_addresses[chan->channel][1]); + ad7150_addresses[channel][1]); if (ret < 0) return ret; *val = swab16(ret); @@ -182,8 +183,8 @@ static int ad7150_write_event_params(struct iio_dev *indio_dev, case IIO_EV_TYPE_THRESH: value = chip->threshold[rising][chan]; return i2c_smbus_write_word_data(chip->client, - ad7150_addresses[chan][3], - swab16(value)); + ad7150_addresses[chan][3], + swab16(value)); case IIO_EV_TYPE_MAG_ADAPTIVE: sens = chip->mag_sensitivity[rising][chan]; timeout = chip->mag_timeout[rising][chan]; diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c index a2370dd1e1a8..9e52384f5370 100644 --- a/drivers/staging/iio/impedance-analyzer/ad5933.c +++ b/drivers/staging/iio/impedance-analyzer/ad5933.c @@ -84,13 +84,13 @@ /** * struct ad5933_platform_data - platform specific data - * @ext_clk_Hz: the external clock frequency in Hz, if not set + * @ext_clk_hz: the external clock frequency in Hz, if not set * the driver uses the internal clock (16.776 MHz) * @vref_mv: the external reference voltage in millivolt */ struct ad5933_platform_data { 
- unsigned long ext_clk_Hz; + unsigned long ext_clk_hz; unsigned short vref_mv; }; @@ -210,7 +210,7 @@ static int ad5933_set_freq(struct ad5933_state *st, u8 d8[4]; } dat; - freqreg = (u64) freq * (u64) (1 << 27); + freqreg = (u64)freq * (u64)(1 << 27); do_div(freqreg, st->mclk_hz / 4); switch (reg) { @@ -267,7 +267,6 @@ static void ad5933_calc_out_ranges(struct ad5933_state *st) for (i = 0; i < 4; i++) st->range_avail[i] = normalized_3v3[i] * st->vref_mv / 3300; - } /* @@ -726,8 +725,8 @@ static int ad5933_probe(struct i2c_client *client, else st->vref_mv = pdata->vref_mv; - if (pdata->ext_clk_Hz) { - st->mclk_hz = pdata->ext_clk_Hz; + if (pdata->ext_clk_hz) { + st->mclk_hz = pdata->ext_clk_hz; st->ctrl_lb = AD5933_CTRL_EXT_SYSCLK; } else { st->mclk_hz = AD5933_INT_OSC_FREQ_Hz; @@ -787,9 +786,18 @@ static const struct i2c_device_id ad5933_id[] = { MODULE_DEVICE_TABLE(i2c, ad5933_id); +static const struct of_device_id ad5933_of_match[] = { + { .compatible = "adi,ad5933" }, + { .compatible = "adi,ad5934" }, + { }, +}; + +MODULE_DEVICE_TABLE(of, ad5933_of_match); + static struct i2c_driver ad5933_driver = { .driver = { .name = "ad5933", + .of_match_table = ad5933_of_match, }, .probe = ad5933_probe, .remove = ad5933_remove, diff --git a/drivers/staging/iio/resolver/Kconfig b/drivers/staging/iio/resolver/Kconfig index 6a469ee6101f..4a727c17bb8f 100644 --- a/drivers/staging/iio/resolver/Kconfig +++ b/drivers/staging/iio/resolver/Kconfig @@ -3,16 +3,6 @@ # menu "Resolver to digital converters" -config AD2S90 - tristate "Analog Devices ad2s90 driver" - depends on SPI - help - Say yes here to build support for Analog Devices spi resolver - to digital converters, ad2s90, provides direct access via sysfs. - - To compile this driver as a module, choose M here: the - module will be called ad2s90. 
- config AD2S1210 tristate "Analog Devices ad2s1210 driver" depends on SPI diff --git a/drivers/staging/iio/resolver/Makefile b/drivers/staging/iio/resolver/Makefile index 8d901dc7500b..b2049f2ce36e 100644 --- a/drivers/staging/iio/resolver/Makefile +++ b/drivers/staging/iio/resolver/Makefile @@ -2,5 +2,4 @@ # Makefile for Resolver/Synchro drivers # -obj-$(CONFIG_AD2S90) += ad2s90.o obj-$(CONFIG_AD2S1210) += ad2s1210.o diff --git a/drivers/staging/iio/resolver/ad2s1210.c b/drivers/staging/iio/resolver/ad2s1210.c index ac13b99bd9cb..cec9d995b3df 100644 --- a/drivers/staging/iio/resolver/ad2s1210.c +++ b/drivers/staging/iio/resolver/ad2s1210.c @@ -15,12 +15,11 @@ #include #include #include -#include +#include #include #include #include -#include "ad2s1210.h" #define DRV_NAME "ad2s1210" @@ -67,12 +66,33 @@ enum ad2s1210_mode { MOD_RESERVED, }; +enum ad2s1210_gpios { + AD2S1210_SAMPLE, + AD2S1210_A0, + AD2S1210_A1, + AD2S1210_RES0, + AD2S1210_RES1, +}; + +struct ad2s1210_gpio { + const char *name; + unsigned long flags; +}; + +static const struct ad2s1210_gpio gpios[] = { + [AD2S1210_SAMPLE] = { .name = "adi,sample", .flags = GPIOD_OUT_LOW }, + [AD2S1210_A0] = { .name = "adi,a0", .flags = GPIOD_OUT_LOW }, + [AD2S1210_A1] = { .name = "adi,a1", .flags = GPIOD_OUT_LOW }, + [AD2S1210_RES0] = { .name = "adi,res0", .flags = GPIOD_OUT_LOW }, + [AD2S1210_RES1] = { .name = "adi,res1", .flags = GPIOD_OUT_LOW }, +}; + static const unsigned int ad2s1210_resolution_value[] = { 10, 12, 14, 16 }; struct ad2s1210_state { - const struct ad2s1210_platform_data *pdata; struct mutex lock; struct spi_device *sdev; + struct gpio_desc *gpios[5]; unsigned int fclkin; unsigned int fexcit; bool hysteresis; @@ -91,8 +111,8 @@ static const int ad2s1210_mode_vals[4][2] = { static inline void ad2s1210_set_mode(enum ad2s1210_mode mode, struct ad2s1210_state *st) { - gpio_set_value(st->pdata->a[0], ad2s1210_mode_vals[mode][0]); - gpio_set_value(st->pdata->a[1], ad2s1210_mode_vals[mode][1]); + gpiod_set_value(st->gpios[AD2S1210_A0], ad2s1210_mode_vals[mode][0]); + gpiod_set_value(st->gpios[AD2S1210_A1], ad2s1210_mode_vals[mode][1]); st->mode = mode; } @@ -150,24 +170,16 @@ int ad2s1210_update_frequency_control_word(struct ad2s1210_state *st) return ad2s1210_config_write(st, fcw); } -static unsigned char ad2s1210_read_resolution_pin(struct ad2s1210_state *st) -{ - int resolution = (gpio_get_value(st->pdata->res[0]) << 1) | - gpio_get_value(st->pdata->res[1]); - - return ad2s1210_resolution_value[resolution]; -} - static const int ad2s1210_res_pins[4][2] = { { 0, 0 }, {0, 1}, {1, 0}, {1, 1} }; static inline void ad2s1210_set_resolution_pin(struct ad2s1210_state *st) { - gpio_set_value(st->pdata->res[0], - ad2s1210_res_pins[(st->resolution - 10) / 2][0]); - gpio_set_value(st->pdata->res[1], - ad2s1210_res_pins[(st->resolution - 10) / 2][1]); + gpiod_set_value(st->gpios[AD2S1210_RES0], + ad2s1210_res_pins[(st->resolution - 10) / 2][0]); + gpiod_set_value(st->gpios[AD2S1210_RES1], + ad2s1210_res_pins[(st->resolution - 10) / 2][1]); } static inline int ad2s1210_soft_reset(struct ad2s1210_state *st) @@ -301,15 +313,9 @@ static ssize_t ad2s1210_store_control(struct device *dev, "ad2s1210: write control register fail\n"); goto error_ret; } - st->resolution - = ad2s1210_resolution_value[data & AD2S1210_SET_RESOLUTION]; - if (st->pdata->gpioin) { - data = ad2s1210_read_resolution_pin(st); - if (data != st->resolution) - dev_warn(dev, "ad2s1210: resolution settings not match\n"); - } else { - ad2s1210_set_resolution_pin(st); - } + 
st->resolution = + ad2s1210_resolution_value[data & AD2S1210_SET_RESOLUTION]; + ad2s1210_set_resolution_pin(st); ret = len; st->hysteresis = !!(data & AD2S1210_ENABLE_HYSTERESIS); @@ -363,15 +369,9 @@ static ssize_t ad2s1210_store_resolution(struct device *dev, dev_err(dev, "ad2s1210: setting resolution fail\n"); goto error_ret; } - st->resolution - = ad2s1210_resolution_value[data & AD2S1210_SET_RESOLUTION]; - if (st->pdata->gpioin) { - data = ad2s1210_read_resolution_pin(st); - if (data != st->resolution) - dev_warn(dev, "ad2s1210: resolution settings not match\n"); - } else { - ad2s1210_set_resolution_pin(st); - } + st->resolution = + ad2s1210_resolution_value[data & AD2S1210_SET_RESOLUTION]; + ad2s1210_set_resolution_pin(st); ret = len; error_ret: mutex_unlock(&st->lock); @@ -401,15 +401,15 @@ static ssize_t ad2s1210_clear_fault(struct device *dev, int ret; mutex_lock(&st->lock); - gpio_set_value(st->pdata->sample, 0); + gpiod_set_value(st->gpios[AD2S1210_SAMPLE], 0); /* delay (2 * tck + 20) nano seconds */ udelay(1); - gpio_set_value(st->pdata->sample, 1); + gpiod_set_value(st->gpios[AD2S1210_SAMPLE], 1); ret = ad2s1210_config_read(st, AD2S1210_REG_FAULT); if (ret < 0) goto error_ret; - gpio_set_value(st->pdata->sample, 0); - gpio_set_value(st->pdata->sample, 1); + gpiod_set_value(st->gpios[AD2S1210_SAMPLE], 0); + gpiod_set_value(st->gpios[AD2S1210_SAMPLE], 1); error_ret: mutex_unlock(&st->lock); @@ -466,7 +466,7 @@ static int ad2s1210_read_raw(struct iio_dev *indio_dev, s16 vel; mutex_lock(&st->lock); - gpio_set_value(st->pdata->sample, 0); + gpiod_set_value(st->gpios[AD2S1210_SAMPLE], 0); /* delay (6 * tck + 20) nano seconds */ udelay(1); @@ -512,7 +512,7 @@ static int ad2s1210_read_raw(struct iio_dev *indio_dev, } error_ret: - gpio_set_value(st->pdata->sample, 1); + gpiod_set_value(st->gpios[AD2S1210_SAMPLE], 1); /* delay (2 * tck + 20) nano seconds */ udelay(1); mutex_unlock(&st->lock); @@ -592,10 +592,7 @@ static int ad2s1210_initial(struct ad2s1210_state *st) int ret; mutex_lock(&st->lock); - if (st->pdata->gpioin) - st->resolution = ad2s1210_read_resolution_pin(st); - else - ad2s1210_set_resolution_pin(st); + ad2s1210_set_resolution_pin(st); ret = ad2s1210_config_write(st, AD2S1210_REG_CONTROL); if (ret < 0) @@ -630,30 +627,22 @@ static const struct iio_info ad2s1210_info = { static int ad2s1210_setup_gpios(struct ad2s1210_state *st) { - unsigned long flags = st->pdata->gpioin ? GPIOF_DIR_IN : GPIOF_DIR_OUT; - struct gpio ad2s1210_gpios[] = { - { st->pdata->sample, GPIOF_DIR_IN, "sample" }, - { st->pdata->a[0], flags, "a0" }, - { st->pdata->a[1], flags, "a1" }, - { st->pdata->res[0], flags, "res0" }, - { st->pdata->res[0], flags, "res1" }, - }; - - return gpio_request_array(ad2s1210_gpios, ARRAY_SIZE(ad2s1210_gpios)); -} - -static void ad2s1210_free_gpios(struct ad2s1210_state *st) -{ - unsigned long flags = st->pdata->gpioin ? 
GPIOF_DIR_IN : GPIOF_DIR_OUT; - struct gpio ad2s1210_gpios[] = { - { st->pdata->sample, GPIOF_DIR_IN, "sample" }, - { st->pdata->a[0], flags, "a0" }, - { st->pdata->a[1], flags, "a1" }, - { st->pdata->res[0], flags, "res0" }, - { st->pdata->res[0], flags, "res1" }, - }; + struct spi_device *spi = st->sdev; + int i, ret; + + for (i = 0; i < ARRAY_SIZE(gpios); i++) { + st->gpios[i] = devm_gpiod_get(&spi->dev, gpios[i].name, + gpios[i].flags); + if (IS_ERR(st->gpios[i])) { + ret = PTR_ERR(st->gpios[i]); + dev_err(&spi->dev, + "ad2s1210: failed to request %s GPIO: %d\n", + gpios[i].name, ret); + return ret; + } + } - gpio_free_array(ad2s1210_gpios, ARRAY_SIZE(ad2s1210_gpios)); + return 0; } static int ad2s1210_probe(struct spi_device *spi) @@ -669,7 +658,6 @@ static int ad2s1210_probe(struct spi_device *spi) if (!indio_dev) return -ENOMEM; st = iio_priv(indio_dev); - st->pdata = spi->dev.platform_data; ret = ad2s1210_setup_gpios(st); if (ret < 0) return ret; @@ -692,7 +680,7 @@ static int ad2s1210_probe(struct spi_device *spi) ret = iio_device_register(indio_dev); if (ret) - goto error_free_gpios; + return ret; st->fclkin = spi->max_speed_hz; spi->mode = SPI_MODE_3; @@ -700,10 +688,6 @@ static int ad2s1210_probe(struct spi_device *spi) ad2s1210_initial(st); return 0; - -error_free_gpios: - ad2s1210_free_gpios(st); - return ret; } static int ad2s1210_remove(struct spi_device *spi) @@ -711,11 +695,16 @@ static int ad2s1210_remove(struct spi_device *spi) struct iio_dev *indio_dev = spi_get_drvdata(spi); iio_device_unregister(indio_dev); - ad2s1210_free_gpios(iio_priv(indio_dev)); return 0; } +static const struct of_device_id ad2s1210_of_match[] = { + { .compatible = "adi,ad2s1210", }, + { } +}; +MODULE_DEVICE_TABLE(of, ad2s1210_of_match); + static const struct spi_device_id ad2s1210_id[] = { { "ad2s1210" }, {} @@ -725,6 +714,7 @@ MODULE_DEVICE_TABLE(spi, ad2s1210_id); static struct spi_driver ad2s1210_driver = { .driver = { .name = DRV_NAME, + .of_match_table = of_match_ptr(ad2s1210_of_match), }, .probe = ad2s1210_probe, .remove = ad2s1210_remove, diff --git a/drivers/staging/iio/resolver/ad2s1210.h b/drivers/staging/iio/resolver/ad2s1210.h deleted file mode 100644 index e9b2147701fc..000000000000 --- a/drivers/staging/iio/resolver/ad2s1210.h +++ /dev/null @@ -1,20 +0,0 @@ -/* - * ad2s1210.h plaform data for the ADI Resolver to Digital Converters: - * AD2S1210 - * - * Copyright (c) 2010-2010 Analog Devices Inc. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ -#ifndef _AD2S1210_H -#define _AD2S1210_H - -struct ad2s1210_platform_data { - unsigned int sample; - unsigned int a[2]; - unsigned int res[2]; - bool gpioin; -}; -#endif /* _AD2S1210_H */ diff --git a/drivers/staging/ks7010/michael_mic.c b/drivers/staging/ks7010/michael_mic.c index e6bd70846e98..3acd79615f98 100644 --- a/drivers/staging/ks7010/michael_mic.c +++ b/drivers/staging/ks7010/michael_mic.c @@ -11,7 +11,6 @@ #include #include "michael_mic.h" - // Reset the state to the empty message. 
static inline void michael_clear(struct michael_mic *mic) { diff --git a/drivers/staging/media/bcm2048/radio-bcm2048.c b/drivers/staging/media/bcm2048/radio-bcm2048.c index debd1122875d..d9b02ff66259 100644 --- a/drivers/staging/media/bcm2048/radio-bcm2048.c +++ b/drivers/staging/media/bcm2048/radio-bcm2048.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * drivers/staging/media/radio-bcm2048.c * diff --git a/drivers/staging/media/bcm2048/radio-bcm2048.h b/drivers/staging/media/bcm2048/radio-bcm2048.h index 4d950c1e2e8b..22887a075257 100644 --- a/drivers/staging/media/bcm2048/radio-bcm2048.h +++ b/drivers/staging/media/bcm2048/radio-bcm2048.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * drivers/staging/media/radio-bcm2048.h * diff --git a/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h b/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h index 7cc115c9ebe6..8d772029c91d 100644 --- a/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h +++ b/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe.c b/drivers/staging/media/davinci_vpfe/dm365_ipipe.c index dcfeac818451..3d910b85905c 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_ipipe.c +++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad @@ -62,7 +59,7 @@ static int ipipe_validate_lutdpc_params(struct vpfe_ipipe_lutdpc *lutdpc) for (i = 0; i < lutdpc->dpc_size; i++) if (lutdpc->table[i].horz_pos > LUT_DPC_H_POS_MASK || - lutdpc->table[i].vert_pos > LUT_DPC_V_POS_MASK) + lutdpc->table[i].vert_pos > LUT_DPC_V_POS_MASK) return -EINVAL; return 0; @@ -102,7 +99,7 @@ static int ipipe_get_lutdpc_params(struct vpfe_ipipe_device *ipipe, void *param) lut_param->repl_white = lutdpc->repl_white; lut_param->dpc_size = lutdpc->dpc_size; memcpy(&lut_param->table, &lutdpc->table, - (lutdpc->dpc_size * sizeof(struct vpfe_ipipe_lutdpc_entry))); + (lutdpc->dpc_size * sizeof(struct vpfe_ipipe_lutdpc_entry))); return 0; } @@ -491,7 +488,7 @@ ipipe_validate_rgb2rgb_params(struct vpfe_ipipe_rgb2rgb *rgb2rgb, } static int ipipe_set_rgb2rgb_params(struct vpfe_ipipe_device *ipipe, - unsigned int id, void *param) + unsigned int id, void *param) { struct vpfe_ipipe_rgb2rgb *rgb2rgb = &ipipe->config.rgb2rgb1; struct device *dev = ipipe->subdev.v4l2_dev->dev; @@ -545,7 +542,7 @@ ipipe_set_rgb2rgb_2_params(struct vpfe_ipipe_device *ipipe, void *param) } static int ipipe_get_rgb2rgb_params(struct vpfe_ipipe_device *ipipe, - unsigned int id, void *param) + unsigned int id, void *param) { struct vpfe_ipipe_rgb2rgb *rgb2rgb = &ipipe->config.rgb2rgb1; struct vpfe_ipipe_rgb2rgb *rgb2rgb_param; @@ -775,44 +772,44 @@ success: static int ipipe_validate_rgb2yuv_params(struct vpfe_ipipe_rgb2yuv *rgb2yuv) { if (rgb2yuv->coef_ry.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_ry.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_ry.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_gy.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_gy.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_gy.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_by.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_by.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_by.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_rcb.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_rcb.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_rcb.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_gcb.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_gcb.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_gcb.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_bcb.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_bcb.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_bcb.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_rcr.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_rcr.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_rcr.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_gcr.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_gcr.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_gcr.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->coef_bcr.decimal > RGB2YCBCR_COEF_DECI_MASK || - rgb2yuv->coef_bcr.integer > RGB2YCBCR_COEF_INT_MASK) + rgb2yuv->coef_bcr.integer > RGB2YCBCR_COEF_INT_MASK) return -EINVAL; if (rgb2yuv->out_ofst_y > RGB2YCBCR_OFST_MASK || - rgb2yuv->out_ofst_cb > RGB2YCBCR_OFST_MASK || - rgb2yuv->out_ofst_cr > RGB2YCBCR_OFST_MASK) + rgb2yuv->out_ofst_cb > 
RGB2YCBCR_OFST_MASK || + rgb2yuv->out_ofst_cr > RGB2YCBCR_OFST_MASK) return -EINVAL; return 0; @@ -918,7 +915,7 @@ static int ipipe_get_gbce_params(struct vpfe_ipipe_device *ipipe, void *param) gbce_param->type = gbce->type; memcpy(gbce_param->table, gbce->table, - (VPFE_IPIPE_MAX_SIZE_GBCE_LUT * sizeof(unsigned short))); + (VPFE_IPIPE_MAX_SIZE_GBCE_LUT * sizeof(unsigned short))); return 0; } @@ -945,7 +942,7 @@ ipipe_set_yuv422_conv_params(struct vpfe_ipipe_device *ipipe, void *param) yuv422_conv->chrom_pos = VPFE_IPIPE_YUV422_CHR_POS_COSITE; } else { memcpy(yuv422_conv, yuv422_conv_param, - sizeof(struct vpfe_ipipe_yuv422_conv)); + sizeof(struct vpfe_ipipe_yuv422_conv)); if (ipipe_validate_yuv422_conv_params(yuv422_conv) < 0) { dev_err(dev, "Invalid yuv422 params\n"); return -EINVAL; @@ -1382,7 +1379,7 @@ static int ipipe_set_stream(struct v4l2_subdev *sd, int enable) struct vpfe_device *vpfe_dev = to_vpfe_device(ipipe); if (enable && ipipe->input != IPIPE_INPUT_NONE && - ipipe->output != IPIPE_OUTPUT_NONE) { + ipipe->output != IPIPE_OUTPUT_NONE) { if (config_ipipe_hw(ipipe) < 0) return -EINVAL; } @@ -1402,8 +1399,8 @@ static int ipipe_set_stream(struct v4l2_subdev *sd, int enable) */ static struct v4l2_mbus_framefmt * __ipipe_get_format(struct vpfe_ipipe_device *ipipe, - struct v4l2_subdev_pad_config *cfg, unsigned int pad, - enum v4l2_subdev_format_whence which) + struct v4l2_subdev_pad_config *cfg, unsigned int pad, + enum v4l2_subdev_format_whence which) { if (which == V4L2_SUBDEV_FORMAT_TRY) return v4l2_subdev_get_try_format(&ipipe->subdev, cfg, pad); @@ -1421,9 +1418,9 @@ __ipipe_get_format(struct vpfe_ipipe_device *ipipe, */ static void ipipe_try_format(struct vpfe_ipipe_device *ipipe, - struct v4l2_subdev_pad_config *cfg, unsigned int pad, - struct v4l2_mbus_framefmt *fmt, - enum v4l2_subdev_format_whence which) + struct v4l2_subdev_pad_config *cfg, unsigned int pad, + struct v4l2_mbus_framefmt *fmt, + enum v4l2_subdev_format_whence which) { unsigned int max_out_height; unsigned int max_out_width; @@ -1463,13 +1460,13 @@ ipipe_try_format(struct vpfe_ipipe_device *ipipe, */ static int ipipe_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg, - struct v4l2_subdev_format *fmt) + struct v4l2_subdev_format *fmt) { struct vpfe_ipipe_device *ipipe = v4l2_get_subdevdata(sd); struct v4l2_mbus_framefmt *format; format = __ipipe_get_format(ipipe, cfg, fmt->pad, fmt->which); - if (format == NULL) + if (!format) return -EINVAL; ipipe_try_format(ipipe, cfg, fmt->pad, &fmt->format, fmt->which); @@ -1479,11 +1476,11 @@ ipipe_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg, return 0; if (fmt->pad == IPIPE_PAD_SINK && - (ipipe->input == IPIPE_INPUT_CCDC || + (ipipe->input == IPIPE_INPUT_CCDC || ipipe->input == IPIPE_INPUT_MEMORY)) ipipe->formats[fmt->pad] = fmt->format; else if (fmt->pad == IPIPE_PAD_SOURCE && - ipipe->output == IPIPE_OUTPUT_RESIZER) + ipipe->output == IPIPE_OUTPUT_RESIZER) ipipe->formats[fmt->pad] = fmt->format; else return -EINVAL; @@ -1499,7 +1496,7 @@ ipipe_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg, */ static int ipipe_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg, - struct v4l2_subdev_format *fmt) + struct v4l2_subdev_format *fmt) { struct vpfe_ipipe_device *ipipe = v4l2_get_subdevdata(sd); @@ -1519,8 +1516,8 @@ ipipe_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg, */ static int ipipe_enum_frame_size(struct v4l2_subdev *sd, - struct v4l2_subdev_pad_config *cfg, 
- struct v4l2_subdev_frame_size_enum *fse) + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_frame_size_enum *fse) { struct vpfe_ipipe_device *ipipe = v4l2_get_subdevdata(sd); struct v4l2_mbus_framefmt format; @@ -1687,7 +1684,7 @@ static const struct v4l2_subdev_ops ipipe_v4l2_ops = { */ static int ipipe_link_setup(struct media_entity *entity, const struct media_pad *local, - const struct media_pad *remote, u32 flags) + const struct media_pad *remote, u32 flags) { struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity); struct vpfe_ipipe_device *ipipe = v4l2_get_subdevdata(sd); @@ -1749,7 +1746,7 @@ void vpfe_ipipe_unregister_entities(struct vpfe_ipipe_device *vpfe_ipipe) */ int vpfe_ipipe_register_entities(struct vpfe_ipipe_device *ipipe, - struct v4l2_device *vdev) + struct v4l2_device *vdev) { int ret; @@ -1820,8 +1817,6 @@ vpfe_ipipe_init(struct vpfe_ipipe_device *ipipe, struct platform_device *pdev) v4l2_ctrl_new_std(&ipipe->ctrls, &ipipe_ctrl_ops, V4L2_CID_CONTRAST, 0, IPIPE_CONTRAST_HIGH, 1, 16); - - v4l2_ctrl_handler_setup(&ipipe->ctrls); sd->ctrl_handler = &ipipe->ctrls; diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe.h b/drivers/staging/media/davinci_vpfe/dm365_ipipe.h index d81b29e19309..174334b53f96 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_ipipe.h +++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c index dbb7ddc70bef..5618c804c7e4 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c +++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h index 7ee157233047..16b6a14b7058 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h +++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c index e3425bf082ae..22fcdbcde96b 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c +++ b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.h b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.h index cea3d61335af..4685d64016de 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.h +++ b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipeif_user.h b/drivers/staging/media/davinci_vpfe/dm365_ipipeif_user.h index e2a69b59554a..046dbdec67d8 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_ipipeif_user.h +++ b/drivers/staging/media/davinci_vpfe/dm365_ipipeif_user.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_isif.c b/drivers/staging/media/davinci_vpfe/dm365_isif.c index 39eb0819ab4e..625d0aa8367f 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_isif.c +++ b/drivers/staging/media/davinci_vpfe/dm365_isif.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_isif.h b/drivers/staging/media/davinci_vpfe/dm365_isif.h index 89e814e9c0d7..0e1fe472fb2b 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_isif.h +++ b/drivers/staging/media/davinci_vpfe/dm365_isif.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_isif_regs.h b/drivers/staging/media/davinci_vpfe/dm365_isif_regs.h index 64fbb459baa2..6695680817b9 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_isif_regs.h +++ b/drivers/staging/media/davinci_vpfe/dm365_isif_regs.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_resizer.c b/drivers/staging/media/davinci_vpfe/dm365_resizer.c index 72bbbc34d18c..6098f43ac51b 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_resizer.c +++ b/drivers/staging/media/davinci_vpfe/dm365_resizer.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/dm365_resizer.h b/drivers/staging/media/davinci_vpfe/dm365_resizer.h index cf560a33d862..5e31de96b2c9 100644 --- a/drivers/staging/media/davinci_vpfe/dm365_resizer.h +++ b/drivers/staging/media/davinci_vpfe/dm365_resizer.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/vpfe.h b/drivers/staging/media/davinci_vpfe/vpfe.h index 0587bc52a840..1f8e011fc162 100644 --- a/drivers/staging/media/davinci_vpfe/vpfe.h +++ b/drivers/staging/media/davinci_vpfe/vpfe.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c b/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c index bdf6ee5ad96c..34d63c2e9199 100644 --- a/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c +++ b/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.h b/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.h index 8ad8d743f4e0..fe4a421b5dba 100644 --- a/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.h +++ b/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/vpfe_video.c b/drivers/staging/media/davinci_vpfe/vpfe_video.c index 5e9769ea8a50..510202a3b091 100644 --- a/drivers/staging/media/davinci_vpfe/vpfe_video.c +++ b/drivers/staging/media/davinci_vpfe/vpfe_video.c @@ -1,3 +1,4 @@ +// SPDX-License-Identifier: GPL-2.0 /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/davinci_vpfe/vpfe_video.h b/drivers/staging/media/davinci_vpfe/vpfe_video.h index 4bbd219e8329..5d01c4883ab4 100644 --- a/drivers/staging/media/davinci_vpfe/vpfe_video.h +++ b/drivers/staging/media/davinci_vpfe/vpfe_video.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright (C) 2012 Texas Instruments Inc * @@ -10,10 +11,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * * Contributors: * Manjunath Hadli * Prabhakar Lad diff --git a/drivers/staging/media/tegra-vde/uapi.h b/drivers/staging/media/tegra-vde/uapi.h index a50c7bcae057..4bce08a7a54c 100644 --- a/drivers/staging/media/tegra-vde/uapi.h +++ b/drivers/staging/media/tegra-vde/uapi.h @@ -13,8 +13,8 @@ #include #include -#define FLAG_B_FRAME (1 << 0) -#define FLAG_REFERENCE (1 << 1) +#define FLAG_B_FRAME BIT(0) +#define FLAG_REFERENCE BIT(1) struct tegra_vde_h264_frame { __s32 y_fd; diff --git a/drivers/staging/most/Documentation/driver_usage.txt b/drivers/staging/most/Documentation/driver_usage.txt index bb9b4e870199..da7a8f405b9b 100644 --- a/drivers/staging/most/Documentation/driver_usage.txt +++ b/drivers/staging/most/Documentation/driver_usage.txt @@ -142,8 +142,9 @@ Cdev component example: Sound component example: -The sound component needs an additional parameter to determine the audio -resolution that is going to be used. The following formats are available: +The sound component needs additional parameters to determine the audio +resolution that is going to be used and to trigger the registration of a +sound card with ALSA. The following audio formats are available: - "1x8" (Mono) - "2x16" (16-bit stereo) @@ -151,9 +152,18 @@ resolution that is going to be used. The following formats are available: - "2x32" (32-bit stereo) - "6x16" (16-bit surround 5.1) - $ echo "mdev0:ep_81:sound:most51_playback.6x16" >$(DRV_DIR)/add_link +To make the sound module create a sound card and register it with ALSA, the +string "create" needs to be attached to the module parameter section of the +configuration string. To create a sound card with two playback devices +(linked to channels ep01 and ep02) and one capture device (linked to channel +ep83) the following is written to the add_link file: + $ echo "mdev0:ep01:sound:most51_playback.6x16" >$(DRV_DIR)/add_link + $ echo "mdev0:ep02:sound:most_playback.2x16" >$(DRV_DIR)/add_link + $ echo "mdev0:ep83:sound:most_capture.2x16.create" >$(DRV_DIR)/add_link +The link names (most51_playback, most_playback and most_capture) will +become the names of the PCM devices of the sound card.
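As an aside, each add_link entry above ends in a parameter section of the form <link_name>.<channels>x<resolution>[.create]. The following stand-alone C sketch is illustrative only and not part of the patch; under that assumption about the string format, it mirrors in user space the strsep()-based decomposition that split_arg_list() in sound.c performs:

    #define _DEFAULT_SOURCE /* for strsep() with glibc */
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical demo, not driver code: decompose "name.CxR[.create]"
     * by splitting on '.' with strsep(), as the driver's split_arg_list()
     * does, then pull the channel count and sample resolution out of "CxR".
     */
    int main(void)
    {
            char buf[] = "most_capture.2x16.create";
            char *s = buf;
            char *name = strsep(&s, ".");   /* PCM device name */
            char *fmt = strsep(&s, ".");    /* "2x16": channels x bits */
            int ch, res;

            if (!name || !fmt || sscanf(fmt, "%dx%d", &ch, &res) != 2)
                    return 1;
            printf("device=%s channels=%d resolution=%d create=%d\n",
                   name, ch, res, s && !strcmp(s, "create"));
            return 0;
    }

Run against "most_capture.2x16.create" this prints device=most_capture channels=2 resolution=16 create=1; without a trailing ".create" token, strsep() leaves the remainder of the buffer NULL and create stays 0, matching the behaviour described above.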
Section 2.3 USB Padding diff --git a/drivers/staging/most/sound/sound.c b/drivers/staging/most/sound/sound.c index 89b02fc305b8..79ab3a78c5ec 100644 --- a/drivers/staging/most/sound/sound.c +++ b/drivers/staging/most/sound/sound.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -20,7 +21,6 @@ #define DRIVER_NAME "sound" -static struct list_head dev_list; static struct core_component comp; /** @@ -56,6 +56,17 @@ struct channel { void (*copy_fn)(void *alsa, void *most, unsigned int bytes); }; +struct sound_adapter { + struct list_head dev_list; + struct most_interface *iface; + struct snd_card *card; + struct list_head list; + bool registered; + int pcm_dev_idx; +}; + +static struct list_head adpt_list; + #define MOST_PCM_INFO (SNDRV_PCM_INFO_MMAP | \ SNDRV_PCM_INFO_MMAP_VALID | \ SNDRV_PCM_INFO_BATCH | \ @@ -157,9 +168,10 @@ static void most_to_alsa_copy32(void *alsa, void *most, unsigned int bytes) static struct channel *get_channel(struct most_interface *iface, int channel_id) { + struct sound_adapter *adpt = iface->priv; struct channel *channel, *tmp; - list_for_each_entry_safe(channel, tmp, &dev_list, list) { + list_for_each_entry_safe(channel, tmp, &adpt->dev_list, list) { if ((channel->iface == iface) && (channel->id == channel_id)) return channel; } @@ -459,14 +471,14 @@ static const struct snd_pcm_ops pcm_ops = { .page = snd_pcm_lib_get_vmalloc_page, }; -static int split_arg_list(char *buf, char **card_name, u16 *ch_num, - char **sample_res) +static int split_arg_list(char *buf, char **device_name, u16 *ch_num, + char **sample_res, u8 *create) { char *num; int ret; - *card_name = strsep(&buf, "."); - if (!*card_name) { + *device_name = strsep(&buf, "."); + if (!*device_name) { pr_err("Missing sound card name\n"); return -EIO; } @@ -479,6 +491,9 @@ static int split_arg_list(char *buf, char **card_name, u16 *ch_num, *sample_res = strsep(&buf, ".\n"); if (!*sample_res) goto err; + + if (buf && !strcmp(buf, "create")) + *create = 1; return 0; err: @@ -536,6 +551,20 @@ found: return 0; } +static void release_adapter(struct sound_adapter *adpt) +{ + struct channel *channel, *tmp; + + list_for_each_entry_safe(channel, tmp, &adpt->dev_list, list) { + list_del(&channel->list); + kfree(channel); + } + if (adpt->card) + snd_card_free(adpt->card); + list_del(&adpt->list); + kfree(adpt); +} + /** * audio_probe_channel - probe function of the driver module * @iface: pointer to interface instance @@ -553,14 +582,15 @@ static int audio_probe_channel(struct most_interface *iface, int channel_id, char *arg_list) { struct channel *channel; - struct snd_card *card; + struct sound_adapter *adpt; struct snd_pcm *pcm; int playback_count = 0; int capture_count = 0; int ret; int direction; - char *card_name; + char *device_name; u16 ch_num; + u8 create = 0; char *sample_res; if (!iface) @@ -571,6 +601,38 @@ static int audio_probe_channel(struct most_interface *iface, int channel_id, return -EINVAL; } + ret = split_arg_list(arg_list, &device_name, &ch_num, &sample_res, + &create); + if (ret < 0) + return ret; + + list_for_each_entry(adpt, &adpt_list, list) { + if (adpt->iface != iface) + continue; + if (adpt->registered) + return -ENOSPC; + adpt->pcm_dev_idx++; + goto skip_adpt_alloc; + } + adpt = kzalloc(sizeof(*adpt), GFP_KERNEL); + if (!adpt) + return -ENOMEM; + + adpt->iface = iface; + INIT_LIST_HEAD(&adpt->dev_list); + iface->priv = adpt; + list_add_tail(&adpt->list, &adpt_list); + ret = snd_card_new(&iface->dev, -1, "INIC", THIS_MODULE, + sizeof(*channel), 
&adpt->card); + if (ret < 0) + goto err_free_adpt; + snprintf(adpt->card->driver, sizeof(adpt->card->driver), + "%s", DRIVER_NAME); + snprintf(adpt->card->shortname, sizeof(adpt->card->shortname), + "Microchip INIC"); + snprintf(adpt->card->longname, sizeof(adpt->card->longname), + "%s at %s", adpt->card->shortname, iface->description); +skip_adpt_alloc: if (get_channel(iface, channel_id)) { pr_err("channel (%s:%d) is already linked\n", iface->description, channel_id); @@ -584,53 +646,43 @@ static int audio_probe_channel(struct most_interface *iface, int channel_id, capture_count = 1; direction = SNDRV_PCM_STREAM_CAPTURE; } - - ret = split_arg_list(arg_list, &card_name, &ch_num, &sample_res); - if (ret < 0) - return ret; - - ret = snd_card_new(&iface->dev, -1, card_name, THIS_MODULE, - sizeof(*channel), &card); - if (ret < 0) - return ret; - - channel = card->private_data; - channel->card = card; + channel = kzalloc(sizeof(*channel), GFP_KERNEL); + if (!channel) { + ret = -ENOMEM; + goto err_free_adpt; + } + channel->card = adpt->card; channel->cfg = cfg; channel->iface = iface; channel->id = channel_id; init_waitqueue_head(&channel->playback_waitq); + list_add_tail(&channel->list, &adpt->dev_list); ret = audio_set_hw_params(&channel->pcm_hardware, ch_num, sample_res, cfg); if (ret) - goto err_free_card; + goto err_free_adpt; - snprintf(card->driver, sizeof(card->driver), "%s", DRIVER_NAME); - snprintf(card->shortname, sizeof(card->shortname), "Microchip MOST:%d", - card->number); - snprintf(card->longname, sizeof(card->longname), "%s at %s, ch %d", - card->shortname, iface->description, channel_id); + ret = snd_pcm_new(adpt->card, device_name, adpt->pcm_dev_idx, + playback_count, capture_count, &pcm); - ret = snd_pcm_new(card, card_name, 0, playback_count, - capture_count, &pcm); if (ret < 0) - goto err_free_card; + goto err_free_adpt; pcm->private_data = channel; - + strscpy(pcm->name, device_name, sizeof(pcm->name)); snd_pcm_set_ops(pcm, direction, &pcm_ops); - ret = snd_card_register(card); - if (ret < 0) - goto err_free_card; - - list_add_tail(&channel->list, &dev_list); - + if (create) { + ret = snd_card_register(adpt->card); + if (ret < 0) + goto err_free_adpt; + adpt->registered = true; + } return 0; -err_free_card: - snd_card_free(card); +err_free_adpt: + release_adapter(adpt); return ret; } @@ -647,6 +699,7 @@ static int audio_disconnect_channel(struct most_interface *iface, int channel_id) { struct channel *channel; + struct sound_adapter *adpt = iface->priv; channel = get_channel(iface, channel_id); if (!channel) { @@ -656,8 +709,10 @@ static int audio_disconnect_channel(struct most_interface *iface, } list_del(&channel->list); - snd_card_free(channel->card); + kfree(channel); + if (list_empty(&adpt->dev_list)) + release_adapter(adpt); return 0; } @@ -733,22 +788,14 @@ static int __init audio_init(void) { pr_info("init()\n"); - INIT_LIST_HEAD(&dev_list); + INIT_LIST_HEAD(&adpt_list); return most_register_component(&comp); } static void __exit audio_exit(void) { - struct channel *channel, *tmp; - pr_info("exit()\n"); - - list_for_each_entry_safe(channel, tmp, &dev_list, list) { - list_del(&channel->list); - snd_card_free(channel->card); - } - most_deregister_component(&comp); } diff --git a/drivers/staging/mt7621-dma/mtk-hsdma.c b/drivers/staging/mt7621-dma/mtk-hsdma.c index 5831f816c17b..d67a2504adb1 100644 --- a/drivers/staging/mt7621-dma/mtk-hsdma.c +++ b/drivers/staging/mt7621-dma/mtk-hsdma.c @@ -419,8 +419,9 @@ static void mtk_hsdma_chan_done(struct mtk_hsdam_engine *hsdma, 
vchan_cookie_complete(&desc->vdesc); chan_issued = gdma_next_desc(chan); } - } else + } else { dev_dbg(hsdma->ddev.dev, "no desc to complete\n"); + } if (chan_issued) set_bit(chan->id, &hsdma->chan_issued); @@ -457,8 +458,9 @@ static void mtk_hsdma_issue_pending(struct dma_chan *c) if (gdma_next_desc(chan)) { set_bit(chan->id, &hsdma->chan_issued); tasklet_schedule(&hsdma->task); - } else + } else { dev_dbg(hsdma->ddev.dev, "no desc to issue\n"); + } } spin_unlock_bh(&chan->vchan.lock); } diff --git a/drivers/staging/mt7621-dma/ralink-gdma.c b/drivers/staging/mt7621-dma/ralink-gdma.c index 73dbc7fe38a2..792a63bd55d4 100644 --- a/drivers/staging/mt7621-dma/ralink-gdma.c +++ b/drivers/staging/mt7621-dma/ralink-gdma.c @@ -459,12 +459,14 @@ static void gdma_dma_chan_irq(struct gdma_dma_dev *dma_dev, list_del(&desc->vdesc.node); vchan_cookie_complete(&desc->vdesc); chan_issued = gdma_next_desc(chan); - } else + } else { chan_issued = 1; + } } - } else + } else { dev_dbg(dma_dev->ddev.dev, "chan %d no desc to complete\n", chan->id); + } if (chan_issued) set_bit(chan->id, &dma_dev->chan_issued); spin_unlock_irqrestore(&chan->vchan.lock, flags); @@ -512,9 +514,10 @@ static void gdma_dma_issue_pending(struct dma_chan *c) if (gdma_next_desc(chan)) { set_bit(chan->id, &dma_dev->chan_issued); tasklet_schedule(&dma_dev->task); - } else + } else { dev_dbg(dma_dev->ddev.dev, "chan %d no desc to issue\n", chan->id); + } } spin_unlock_irqrestore(&chan->vchan.lock, flags); } @@ -537,11 +540,11 @@ static struct dma_async_tx_descriptor *gdma_dma_prep_slave_sg( desc->residue = 0; for_each_sg(sgl, sg, sg_len, i) { - if (direction == DMA_MEM_TO_DEV) + if (direction == DMA_MEM_TO_DEV) { desc->sg[i].src_addr = sg_dma_address(sg); - else if (direction == DMA_DEV_TO_MEM) + } else if (direction == DMA_DEV_TO_MEM) { desc->sg[i].dst_addr = sg_dma_address(sg); - else { + } else { dev_err(c->device->dev, "direction type %d error\n", direction); goto free_desc; @@ -637,11 +640,11 @@ static struct dma_async_tx_descriptor *gdma_dma_prep_dma_cyclic( desc->residue = buf_len; for (i = 0; i < num_periods; i++) { - if (direction == DMA_MEM_TO_DEV) + if (direction == DMA_MEM_TO_DEV) { desc->sg[i].src_addr = buf_addr; - else if (direction == DMA_DEV_TO_MEM) + } else if (direction == DMA_DEV_TO_MEM) { desc->sg[i].dst_addr = buf_addr; - else { + } else { dev_err(c->device->dev, "direction type %d error\n", direction); goto free_desc; @@ -737,9 +740,9 @@ static void gdma_dma_tasklet(unsigned long arg) if (chan->desc) { atomic_inc(&dma_dev->cnt); gdma_start_transfer(dma_dev, chan); - } else + } else { dev_dbg(dma_dev->ddev.dev, "chan %d no desc to issue\n", chan->id); - + } if (!dma_dev->chan_issued) break; } diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts index d5b27e224b56..6a1699ce9455 100644 --- a/drivers/staging/mt7621-dts/gbpc1.dts +++ b/drivers/staging/mt7621-dts/gbpc1.dts @@ -72,6 +72,7 @@ compatible = "jedec,spi-nor"; reg = <0>; spi-max-frequency = <50000000>; + broken-flash-reset; partition@0 { label = "u-boot"; diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi index 2e837e60663a..71f069d59ad8 100644 --- a/drivers/staging/mt7621-dts/mt7621.dtsi +++ b/drivers/staging/mt7621-dts/mt7621.dtsi @@ -202,15 +202,15 @@ state_default: pinctrl0 { }; - i2c_pins: i2c { - i2c { + i2c_pins: i2c0 { + i2c0 { group = "i2c"; function = "i2c"; }; }; - spi_pins: spi { - spi { + spi_pins: spi0 { + spi0 { group = "spi"; function = "spi"; }; @@ -251,21 +251,21 @@ }; 
}; - mdio_pins: mdio { - mdio { + mdio_pins: mdio0 { + mdio0 { group = "mdio"; function = "mdio"; }; }; - pcie_pins: pcie { - pcie { + pcie_pins: pcie0 { + pcie0 { group = "pcie"; function = "pcie rst"; }; }; - nand_pins: nand { + nand_pins: nand0 { spi-nand { group = "spi"; function = "nand1"; @@ -277,8 +277,8 @@ }; }; - sdhci_pins: sdhci { - sdhci { + sdhci_pins: sdhci0 { + sdhci0 { group = "sdhci"; function = "sdhci"; }; @@ -398,7 +398,6 @@ 0x1e142000 0x100 /* pcie port 0 RC control registers */ 0x1e143000 0x100 /* pcie port 1 RC control registers */ 0x1e144000 0x100>; /* pcie port 2 RC control registers */ - #address-cells = <3>; #size-cells = <2>; diff --git a/drivers/staging/mt7621-eth/mdio.c b/drivers/staging/mt7621-eth/mdio.c index ee851281b657..5fea6a447eed 100644 --- a/drivers/staging/mt7621-eth/mdio.c +++ b/drivers/staging/mt7621-eth/mdio.c @@ -89,7 +89,7 @@ int mtk_connect_phy_node(struct mtk_eth *eth, struct mtk_mac *mac, return -ENODEV; } - phydev->supported &= PHY_GBIT_FEATURES; + phydev->supported &= PHY_1000BT_FEATURES; phydev->advertising = phydev->supported; dev_info(eth->dev, diff --git a/drivers/staging/mt7621-eth/mtk_eth_soc.c b/drivers/staging/mt7621-eth/mtk_eth_soc.c index 363d3c978e02..21a76a8ccc26 100644 --- a/drivers/staging/mt7621-eth/mtk_eth_soc.c +++ b/drivers/staging/mt7621-eth/mtk_eth_soc.c @@ -1689,6 +1689,8 @@ static int mtk_open(struct net_device *dev) struct mtk_mac *mac = netdev_priv(dev); struct mtk_eth *eth = mac->hw; + dma_coerce_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)); + if (!atomic_read(ð->dma_refcnt)) { int err = mtk_start_dma(eth); @@ -2062,9 +2064,6 @@ static int mtk_probe(struct platform_device *pdev) struct clk *sysclk; int err; - pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); - pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask; - device_reset(&pdev->dev); match = of_match_device(of_mtk_match, &pdev->dev); diff --git a/drivers/staging/mt7621-mmc/dbg.c b/drivers/staging/mt7621-mmc/dbg.c index 829d3d0e895e..eabe0595978b 100644 --- a/drivers/staging/mt7621-mmc/dbg.c +++ b/drivers/staging/mt7621-mmc/dbg.c @@ -51,7 +51,6 @@ #include "mt6575_sd.h" #include -static char cmd_buf[256]; /* for debug zone */ unsigned int sd_debug_zone[4] = { @@ -62,6 +61,7 @@ unsigned int sd_debug_zone[4] = { }; #if defined(MT6575_SD_DEBUG) +static char cmd_buf[256]; /* for driver profile */ #define TICKS_ONE_MS (13000) u32 gpt_enable; diff --git a/drivers/staging/mt7621-mmc/sd.c b/drivers/staging/mt7621-mmc/sd.c index 0379f9c96f2a..4b26ec896a96 100644 --- a/drivers/staging/mt7621-mmc/sd.c +++ b/drivers/staging/mt7621-mmc/sd.c @@ -38,6 +38,7 @@ #include #include #include +#include #include #include @@ -216,26 +217,6 @@ static void msdc_tasklet_card(struct work_struct *work) spin_unlock(&host->lock); } -static void msdc_select_clksrc(struct msdc_host *host, unsigned char clksrc) -{ - u32 val; - - BUG_ON(clksrc > 3); - - val = readl(host->base + MSDC_CLKSRC_REG); - if (readl(host->base + MSDC_ECO_VER) >= 4) { - val &= ~(0x3 << clk_src_bit[host->id]); - val |= clksrc << clk_src_bit[host->id]; - } else { - val &= ~0x3; val |= clksrc; - } - writel(val, host->base + MSDC_CLKSRC_REG); - - host->hclk = hclks[clksrc]; - host->hw->clk_src = clksrc; -} -#endif /* end of --- */ - static void msdc_set_mclk(struct msdc_host *host, int ddr, unsigned int hz) { //struct msdc_hw *hw = host->hw; @@ -318,9 +299,9 @@ static void msdc_abort_data(struct msdc_host *host) #ifdef CONFIG_PM /* - register as callback function of WIFI(combo_sdio_register_pm) . 
- can called by msdc_drv_suspend/resume too. -*/ + * register as callback function of WIFI(combo_sdio_register_pm) . + * can called by msdc_drv_suspend/resume too. + */ static void msdc_pm(pm_message_t state, void *data) { struct msdc_host *host = (struct msdc_host *)data; @@ -351,7 +332,6 @@ static void msdc_pm(pm_message_t state, void *data) host->suspend = 0; host->pm_state = state; - } } #endif @@ -685,7 +665,7 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq) if (read) { if ((host->timeout_ns != data->timeout_ns) || - (host->timeout_clks != data->timeout_clks)) { + (host->timeout_clks != data->timeout_clks)) { msdc_set_timeout(host, data->timeout_ns, data->timeout_clks); } } @@ -711,7 +691,8 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq) goto done; /* for read, the data coming too fast, then CRC error - start DMA no business with CRC. */ + * start DMA no business with CRC. + */ //init_completion(&host->xfer_done); msdc_dma_start(host); @@ -794,9 +775,10 @@ static int msdc_tune_cmdrsp(struct msdc_host *host, struct mmc_command *cmd) u32 skip = 1; /* ==== don't support 3.0 now ==== - 1: R_SMPL[1] - 2: PAD_CMD_RESP_RXDLY[26:22] - ==========================*/ + * 1: R_SMPL[1] + * 2: PAD_CMD_RESP_RXDLY[26:22] + * ========================== + */ // save the previous tune result sdr_get_field(host->base + MSDC_IOCON, MSDC_IOCON_RSPL, &orig_rsmpl); @@ -1188,8 +1170,6 @@ static void msdc_ops_request(struct mmc_host *mmc, struct mmc_request *mrq) spin_unlock(&host->lock); mmc_request_done(mmc, mrq); - - return; } /* called by ops.set_ios */ @@ -1223,19 +1203,19 @@ static void msdc_ops_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) u32 ddr = 0; #ifdef MT6575_SD_DEBUG - static char *vdd[] = { + static const char * const vdd[] = { "1.50v", "1.55v", "1.60v", "1.65v", "1.70v", "1.80v", "1.90v", "2.00v", "2.10v", "2.20v", "2.30v", "2.40v", "2.50v", "2.60v", "2.70v", "2.80v", "2.90v", "3.00v", "3.10v", "3.20v", "3.30v", "3.40v", "3.50v", "3.60v" }; - static char *power_mode[] = { + static const char * const power_mode[] = { "OFF", "UP", "ON" }; - static char *bus_mode[] = { + static const char * const bus_mode[] = { "UNKNOWN", "OPENDRAIN", "PUSHPULL" }; - static char *timing[] = { + static const char * const timing[] = { "LEGACY", "MMC_HS", "SD_HS" }; @@ -1398,7 +1378,7 @@ static irqreturn_t msdc_irq(int irq, void *dev_id) /* command interrupts */ if (cmd && (intsts & cmdsts)) { if ((intsts & MSDC_INT_CMDRDY) || (intsts & MSDC_INT_ACMDRDY) || - (intsts & MSDC_INT_ACMD19_DONE)) { + (intsts & MSDC_INT_ACMD19_DONE)) { u32 *rsp = &cmd->resp[0]; switch (host->cmd_rsp) { @@ -1431,13 +1411,13 @@ static irqreturn_t msdc_irq(int irq, void *dev_id) /* mmc irq interrupts */ if (intsts & MSDC_INT_MMCIRQ) dev_info(mmc_dev(host->mmc), "msdc[%d] MMCIRQ: SDC_CSTS=0x%.8x\r\n", - host->id, readl(host->base + SDC_CSTS)); + host->id, readl(host->base + SDC_CSTS)); return IRQ_HANDLED; } /*--------------------------------------------------------------------------*/ -/* platform_driver members */ +/* platform_driver members */ /*--------------------------------------------------------------------------*/ /* called by msdc_drv_probe/remove */ static void msdc_enable_cd_irq(struct msdc_host *host, int enable) @@ -1446,11 +1426,11 @@ static void msdc_enable_cd_irq(struct msdc_host *host, int enable) /* for sdio, not set */ if ((hw->flags & MSDC_CD_PIN_EN) == 0) { - /* Pull down card detection pin since it is not avaiable */ + /* Pull down card detection pin since it 
is not available */ /* - if (hw->config_gpio_pin) - hw->config_gpio_pin(MSDC_CD_PIN, GPIO_PULL_DOWN); - */ + * if (hw->config_gpio_pin) + * hw->config_gpio_pin(MSDC_CD_PIN, GPIO_PULL_DOWN); + */ sdr_clr_bits(host->base + MSDC_PS, MSDC_PS_CDEN); sdr_clr_bits(host->base + MSDC_INTEN, MSDC_INTEN_CDSC); sdr_clr_bits(host->base + SDC_CFG, SDC_CFG_INSWKUP); @@ -1490,7 +1470,6 @@ static void msdc_enable_cd_irq(struct msdc_host *host, int enable) /* called by msdc_drv_probe */ static void msdc_init_hw(struct msdc_host *host) { - /* Configure to MMC/SD mode */ sdr_set_field(host->base + MSDC_CFG, MSDC_CFG_MODE, MSDC_SDMMC); @@ -1538,12 +1517,13 @@ static void msdc_init_hw(struct msdc_host *host) #endif /* for safety, should clear SDC_CFG.SDIO_INT_DET_EN & set SDC_CFG.SDIO in - pre-loader,uboot,kernel drivers. and SDC_CFG.SDIO_INT_DET_EN will be only - set when kernel driver wants to use SDIO bus interrupt */ + * pre-loader,uboot,kernel drivers. and SDC_CFG.SDIO_INT_DET_EN will be only + * set when kernel driver wants to use SDIO bus interrupt + */ /* Configure to enable SDIO mode. it's must otherwise sdio cmd5 failed */ sdr_set_bits(host->base + SDC_CFG, SDC_CFG_SDIO); - /* disable detect SDIO device interupt function */ + /* disable detect SDIO device interrupt function */ sdr_clr_bits(host->base + SDC_CFG, SDC_CFG_SDIOIDE); /* eneable SMT for glitch filter */ @@ -1693,7 +1673,7 @@ static int msdc_drv_probe(struct platform_device *pdev) host->mrq = NULL; //init_MUTEX(&host->sem); /* we don't need to support multiple threads access */ - mmc_dev(mmc)->dma_mask = NULL; + dma_coerce_mask_and_coherent(mmc_dev(mmc), DMA_BIT_MASK(32)); /* using dma_alloc_coherent*/ /* todo: using 1, for all 4 slots */ host->dma.gpd = dma_alloc_coherent(&pdev->dev, @@ -1797,6 +1777,7 @@ static void msdc_drv_pm(struct platform_device *pdev, pm_message_t state) if (mmc) { struct msdc_host *host = mmc_priv(mmc); + msdc_pm(state, (void *)host); } } diff --git a/drivers/staging/mt7621-pci/mediatek,mt7621-pci.txt b/drivers/staging/mt7621-pci/mediatek,mt7621-pci.txt new file mode 100644 index 000000000000..5a6ee4103cd5 --- /dev/null +++ b/drivers/staging/mt7621-pci/mediatek,mt7621-pci.txt @@ -0,0 +1,99 @@ +MediaTek MT7621 PCIe controller + +Required properties: +- compatible: "mediatek,mt7621-pci" +- device_type: Must be "pci" +- reg: Base addresses and lengths of the PCIe subsys and root ports. +- bus-range: Range of bus numbers associated with this controller. +- #address-cells: Address representation for root ports (must be 3) +- pinctrl-names : The pin control state names. +- pinctrl-0: The "default" pinctrl state. +- #size-cells: Size representation for root ports (must be 2) +- ranges: Ranges for the PCI memory and I/O regions. +- #interrupt-cells: Must be 1 +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties. + Please refer to the standard PCI bus binding document for a more detailed + explanation. +- status: either "disabled" or "okay". +- resets: Must contain an entry for each entry in reset-names. + See ../reset/reset.txt for details. +- reset-names: Must be "pcie0", "pcie1", "pcieN"... based on the number of + root ports. +- clocks: Must contain an entry for each entry in clock-names. + See ../clocks/clock-bindings.txt for details. +- clock-names: Must be "pcie0", "pcie1", "pcieN"... based on the number of + root ports. 
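+
+These reset names are what the driver looks up for each root port at
+probe time. A minimal sketch of the lookup, mirroring
+mt7621_pcie_parse_port() later in this patch (error handling
+simplified):
+
+	char name[6];	/* "pcie0".."pcie2" plus the terminating NUL */
+
+	snprintf(name, sizeof(name), "pcie%d", slot);
+	port->pcie_rst = devm_reset_control_get_exclusive(dev, name);
+	if (IS_ERR(port->pcie_rst))
+		return PTR_ERR(port->pcie_rst);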
+ +In addition, the device tree node must have sub-nodes describing each PCIe port +interface, having the following mandatory properties: + +Required properties: +- reg: Only the first four bytes are used to refer to the correct bus number + and device number. +- #address-cells: Must be 3 +- #size-cells: Must be 2 +- ranges: Sub-ranges distributed from the PCIe controller node. An empty + property is sufficient. +- bus-range: Range of bus numbers associated with this port. + +Example for MT7621: + + pcie: pcie@1e140000 { + compatible = "mediatek,mt7621-pci"; + reg = <0x1e140000 0x100 /* host-pci bridge registers */ + 0x1e142000 0x100 /* pcie port 0 RC control registers */ + 0x1e143000 0x100 /* pcie port 1 RC control registers */ + 0x1e144000 0x100>; /* pcie port 2 RC control registers */ + + #address-cells = <3>; + #size-cells = <2>; + + pinctrl-names = "default"; + pinctrl-0 = <&pcie_pins>; + + device_type = "pci"; + + bus-range = <0 255>; + ranges = < + 0x02000000 0 0x00000000 0x60000000 0 0x10000000 /* pci memory */ + 0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */ + >; + + #interrupt-cells = <1>; + interrupt-map-mask = <0xF0000 0 0 1>; + interrupt-map = <0x10000 0 0 1 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>, + <0x20000 0 0 1 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>, + <0x30000 0 0 1 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>; + + status = "disabled"; + + resets = <&rstctrl 24 &rstctrl 25 &rstctrl 26>; + reset-names = "pcie0", "pcie1", "pcie2"; + clocks = <&clkctrl 24 &clkctrl 25 &clkctrl 26>; + clock-names = "pcie0", "pcie1", "pcie2"; + + pcie@0,0 { + reg = <0x0000 0 0 0 0>; + #address-cells = <3>; + #size-cells = <2>; + ranges; + bus-range = <0x00 0xff>; + }; + + pcie@1,0 { + reg = <0x0800 0 0 0 0>; + #address-cells = <3>; + #size-cells = <2>; + ranges; + bus-range = <0x00 0xff>; + }; + + pcie@2,0 { + reg = <0x1000 0 0 0 0>; + #address-cells = <3>; + #size-cells = <2>; + ranges; + bus-range = <0x00 0xff>; + }; + }; + diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c index 8371a9cdb164..31310b6fb7db 100644 --- a/drivers/staging/mt7621-pci/pci-mt7621.c +++ b/drivers/staging/mt7621-pci/pci-mt7621.c @@ -1,33 +1,10 @@ // SPDX-License-Identifier: GPL-2.0+ -/************************************************************************** - * - * BRIEF MODULE DESCRIPTION +/* + * BRIEF MODULE DESCRIPTION * PCI init for Ralink RT2880 solution * - * Copyright 2007 Ralink Inc. (bruce_chang@ralinktech.com.tw) - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2 of the License, or (at your - * option) any later version. - * - * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED - * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN - * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF - * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON - * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * Copyright 2007 Ralink Inc. 
(bruce_chang@ralinktech.com.tw) * - * You should have received a copy of the GNU General Public License along - * with this program; if not, write to the Free Software Foundation, Inc., - * 675 Mass Ave, Cambridge, MA 02139, USA. - * - * - ************************************************************************** * May 2007 Bruce Chang * Initial Release * @@ -36,13 +13,11 @@ * * May 2011 Bruce Chang * support RT6855/MT7620 PCIe - * - ************************************************************************** */ #include -#include #include +#include #include #include #include @@ -57,29 +32,44 @@ #include "../../pci/pci.h" -/* - * These functions and structures provide the BIOS scan and mapping of the PCI - * devices. - */ +/* sysctl */ +#define MT7621_CHIP_REV_ID 0x0c +#define RALINK_CLKCFG1 0x30 +#define RALINK_RSTCTRL 0x34 +#define CHIP_REV_MT7621_E2 0x0101 -#define RALINK_PCIE0_CLK_EN BIT(24) -#define RALINK_PCIE1_CLK_EN BIT(25) -#define RALINK_PCIE2_CLK_EN BIT(26) +/* RALINK_RSTCTRL bits */ +#define RALINK_PCIE_RST BIT(23) + +/* MediaTek specific configuration registers */ +#define PCIE_FTS_NUM 0x70c +#define PCIE_FTS_NUM_MASK GENMASK(15, 8) +#define PCIE_FTS_NUM_L0(x) ((x) & 0xff << 8) -#define RALINK_PCI_CONFIG_ADDR 0x20 -#define RALINK_PCI_CONFIG_DATA 0x24 -#define RALINK_PCI_MEMBASE 0x28 -#define RALINK_PCI_IOBASE 0x2C -#define RALINK_PCIE0_RST BIT(24) -#define RALINK_PCIE1_RST BIT(25) -#define RALINK_PCIE2_RST BIT(26) +/* rt_sysc_membase relative registers */ +#define RALINK_PCIE_CLK_GEN 0x7c +#define RALINK_PCIE_CLK_GEN1 0x80 +/* Host-PCI bridge registers */ #define RALINK_PCI_PCICFG_ADDR 0x0000 #define RALINK_PCI_PCIMSK_ADDR 0x000C - -#define RT6855_PCIE0_OFFSET 0x2000 -#define RT6855_PCIE1_OFFSET 0x3000 -#define RT6855_PCIE2_OFFSET 0x4000 +#define RALINK_PCI_CONFIG_ADDR 0x0020 +#define RALINK_PCI_CONFIG_DATA 0x0024 +#define RALINK_PCI_MEMBASE 0x0028 +#define RALINK_PCI_IOBASE 0x002C + +/* PCICFG virtual bridges */ +#define MT7621_BR0_MASK GENMASK(19, 16) +#define MT7621_BR1_MASK GENMASK(23, 20) +#define MT7621_BR2_MASK GENMASK(27, 24) +#define MT7621_BR_ALL_MASK GENMASK(27, 16) +#define MT7621_BR0_SHIFT 16 +#define MT7621_BR1_SHIFT 20 +#define MT7621_BR2_SHIFT 24 + +/* PCIe RC control registers */ +#define MT7621_PCIE_OFFSET 0x2000 +#define MT7621_NEXT_PORT 0x1000 #define RALINK_PCI_BAR0SETUP_ADDR 0x0010 #define RALINK_PCI_IMBASEBAR0_ADDR 0x0018 @@ -88,54 +78,105 @@ #define RALINK_PCI_SUBID 0x0038 #define RALINK_PCI_STATUS 0x0050 -#define RALINK_PCIEPHY_P0P1_CTL_OFFSET 0x9000 -#define RALINK_PCIEPHY_P2_CTL_OFFSET 0xA000 - -#define RALINK_PCI_MM_MAP_BASE 0x60000000 +/* Some definition values */ +#define PCIE_REVISION_ID BIT(0) +#define PCIE_CLASS_CODE (0x60400 << 8) +#define PCIE_BAR_MAP_MAX GENMASK(30, 16) +#define PCIE_BAR_ENABLE BIT(0) +#define PCIE_PORT_INT_EN(x) BIT(20 + (x)) +#define PCIE_PORT_CLK_EN(x) BIT(24 + (x)) +#define PCIE_PORT_PERST(x) BIT(1 + (x)) +#define PCIE_PORT_LINKUP BIT(0) + +#define PCIE_CLK_GEN_EN BIT(31) +#define PCIE_CLK_GEN_DIS 0 +#define PCIE_CLK_GEN1_DIS GENMASK(30,24) +#define PCIE_CLK_GEN1_EN (BIT(27) | BIT(25)) #define RALINK_PCI_IO_MAP_BASE 0x1e160000 +#define MEMORY_BASE 0x0 -#define ASSERT_SYSRST_PCIE(val) \ - do { \ - if (rt_sysc_r32(SYSC_REG_CHIP_REV) == 0x00030101) \ - rt_sysc_m32(0, val, RALINK_RSTCTRL); \ - else \ - rt_sysc_m32(val, 0, RALINK_RSTCTRL); \ - } while (0) -#define DEASSERT_SYSRST_PCIE(val) \ - do { \ - if (rt_sysc_r32(SYSC_REG_CHIP_REV) == 0x00030101) \ - rt_sysc_m32(val, 0, RALINK_RSTCTRL); \ - else \ - rt_sysc_m32(0, val, 
RALINK_RSTCTRL); \ - } while (0) - -#define RALINK_CLKCFG1 0x30 -#define RALINK_RSTCTRL 0x34 -#define RALINK_GPIOMODE 0x60 -#define RALINK_PCIE_CLK_GEN 0x7c -#define RALINK_PCIE_CLK_GEN1 0x80 -//RALINK_RSTCTRL bit -#define RALINK_PCIE_RST BIT(23) -#define RALINK_PCI_RST BIT(24) -//RALINK_CLKCFG1 bit -#define RALINK_PCI_CLK_EN BIT(19) -#define RALINK_PCIE_CLK_EN BIT(21) +/* pcie phy related macros */ +#define RALINK_PCIEPHY_P0P1_CTL_OFFSET 0x9000 +#define RALINK_PCIEPHY_P2_CTL_OFFSET 0xA000 -#define MEMORY_BASE 0x0 -static int pcie_link_status; +#define RG_P0_TO_P1_WIDTH 0x100 + +#define RG_PE1_PIPE_REG 0x02c +#define RG_PE1_PIPE_RST BIT(12) +#define RG_PE1_PIPE_CMD_FRC BIT(4) + +#define RG_PE1_H_LCDDS_REG 0x49c +#define RG_PE1_H_LCDDS_PCW GENMASK(30, 0) +#define RG_PE1_H_LCDDS_PCW_VAL(x) ((0x7fffffff & (x)) << 0) + +#define RG_PE1_FRC_H_XTAL_REG 0x400 +#define RG_PE1_FRC_H_XTAL_TYPE BIT(8) +#define RG_PE1_H_XTAL_TYPE GENMASK(10, 9) +#define RG_PE1_H_XTAL_TYPE_VAL(x) ((0x3 & (x)) << 9) + +#define RG_PE1_FRC_PHY_REG 0x000 +#define RG_PE1_FRC_PHY_EN BIT(4) +#define RG_PE1_PHY_EN BIT(5) + +#define RG_PE1_H_PLL_REG 0x490 +#define RG_PE1_H_PLL_BC GENMASK(23, 22) +#define RG_PE1_H_PLL_BC_VAL(x) ((0x3 & (x)) << 22) +#define RG_PE1_H_PLL_BP GENMASK(21, 18) +#define RG_PE1_H_PLL_BP_VAL(x) ((0xf & (x)) << 18) +#define RG_PE1_H_PLL_IR GENMASK(15, 12) +#define RG_PE1_H_PLL_IR_VAL(x) ((0xf & (x)) << 12) +#define RG_PE1_H_PLL_IC GENMASK(11, 8) +#define RG_PE1_H_PLL_IC_VAL(x) ((0xf & (x)) << 8) +#define RG_PE1_H_PLL_PREDIV GENMASK(7, 6) +#define RG_PE1_H_PLL_PREDIV_VAL(x) ((0x3 & (x)) << 6) +#define RG_PE1_PLL_DIVEN GENMASK(3, 1) +#define RG_PE1_PLL_DIVEN_VAL(x) ((0x7 & (x)) << 1) + +#define RG_PE1_H_PLL_FBKSEL_REG 0x4bc +#define RG_PE1_H_PLL_FBKSEL GENMASK(5, 4) +#define RG_PE1_H_PLL_FBKSEL_VAL(x) ((0x3 & (x)) << 4) + +#define RG_PE1_H_LCDDS_SSC_PRD_REG 0x4a4 +#define RG_PE1_H_LCDDS_SSC_PRD GENMASK(15, 0) +#define RG_PE1_H_LCDDS_SSC_PRD_VAL(x) ((0xffff & (x)) << 0) + +#define RG_PE1_H_LCDDS_SSC_DELTA_REG 0x4a8 +#define RG_PE1_H_LCDDS_SSC_DELTA GENMASK(11, 0) +#define RG_PE1_H_LCDDS_SSC_DELTA_VAL(x) ((0xfff & (x)) << 0) +#define RG_PE1_H_LCDDS_SSC_DELTA1 GENMASK(27, 16) +#define RG_PE1_H_LCDDS_SSC_DELTA1_VAL(x) ((0xff & (x)) << 16) + +#define RG_PE1_LCDDS_CLK_PH_INV_REG 0x4a0 +#define RG_PE1_LCDDS_CLK_PH_INV BIT(5) + +#define RG_PE1_H_PLL_BR_REG 0x4ac +#define RG_PE1_H_PLL_BR GENMASK(18, 16) +#define RG_PE1_H_PLL_BR_VAL(x) ((0x7 & (x)) << 16) + +#define RG_PE1_MSTCKDIV_REG 0x414 +#define RG_PE1_MSTCKDIV GENMASK(7, 6) +#define RG_PE1_MSTCKDIV_VAL(x) ((0x3 & (x)) << 6) + +#define RG_PE1_FRC_MSTCKDIV BIT(5) /** * struct mt7621_pcie_port - PCIe port information - * @base: IO mapped register base + * @base: I/O mapped register base * @list: port list * @pcie: pointer to PCIe host info - * @reset: pointer to port reset control + * @phy_reg_offset: offset to related phy registers + * @pcie_rst: pointer to port reset control + * @slot: port slot + * @enabled: indicates if port is enabled */ struct mt7621_pcie_port { void __iomem *base; struct list_head list; struct mt7621_pcie *pcie; - struct reset_control *reset; + u32 phy_reg_offset; + struct reset_control *pcie_rst; + u32 slot; + bool enabled; }; /** @@ -171,6 +212,17 @@ static inline void pcie_write(struct mt7621_pcie *pcie, u32 val, u32 reg) writel(val, pcie->base + reg); } +static inline u32 pcie_port_read(struct mt7621_pcie_port *port, u32 reg) +{ + return readl(port->base + reg); +} + +static inline void pcie_port_write(struct mt7621_pcie_port *port, + 
u32 val, u32 reg) +{ + writel(val, port->base + reg); +} + static inline u32 mt7621_pci_get_cfgaddr(unsigned int bus, unsigned int slot, unsigned int func, unsigned int where) { @@ -196,8 +248,7 @@ struct pci_ops mt7621_pci_ops = { .write = pci_generic_config_write, }; -static u32 -read_config(struct mt7621_pcie *pcie, unsigned int dev, u32 reg) +static u32 read_config(struct mt7621_pcie *pcie, unsigned int dev, u32 reg) { u32 address = mt7621_pci_get_cfgaddr(0, dev, 0, reg); @@ -205,8 +256,8 @@ read_config(struct mt7621_pcie *pcie, unsigned int dev, u32 reg) return pcie_read(pcie, RALINK_PCI_CONFIG_DATA); } -static void -write_config(struct mt7621_pcie *pcie, unsigned int dev, u32 reg, u32 val) +static void write_config(struct mt7621_pcie *pcie, unsigned int dev, + u32 reg, u32 val) { u32 address = mt7621_pci_get_cfgaddr(0, dev, 0, reg); @@ -214,125 +265,194 @@ write_config(struct mt7621_pcie *pcie, unsigned int dev, u32 reg, u32 val) pcie_write(pcie, val, RALINK_PCI_CONFIG_DATA); } -static void -set_pcie_phy(struct mt7621_pcie *pcie, u32 offset, - int start_b, int bits, int val) +static void bypass_pipe_rst(struct mt7621_pcie_port *port) { + struct mt7621_pcie *pcie = port->pcie; + u32 phy_offset = port->phy_reg_offset; + u32 offset = (port->slot != 1) ? + phy_offset + RG_PE1_PIPE_REG : + phy_offset + RG_PE1_PIPE_REG + RG_P0_TO_P1_WIDTH; u32 reg = pcie_read(pcie, offset); - reg &= ~(((1 << bits) - 1) << start_b); - reg |= val << start_b; + reg &= ~(RG_PE1_PIPE_RST | RG_PE1_PIPE_CMD_FRC); + reg |= (RG_PE1_PIPE_RST | RG_PE1_PIPE_CMD_FRC); pcie_write(pcie, reg, offset); } -static void -bypass_pipe_rst(struct mt7621_pcie *pcie) -{ - /* PCIe Port 0 */ - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x02c), 12, 1, 0x01); // rg_pe1_pipe_rst_b - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x02c), 4, 1, 0x01); // rg_pe1_pipe_cmd_frc[4] - /* PCIe Port 1 */ - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x12c), 12, 1, 0x01); // rg_pe1_pipe_rst_b - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x12c), 4, 1, 0x01); // rg_pe1_pipe_cmd_frc[4] - /* PCIe Port 2 */ - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x02c), 12, 1, 0x01); // rg_pe1_pipe_rst_b - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x02c), 4, 1, 0x01); // rg_pe1_pipe_cmd_frc[4] -} - -static void -set_phy_for_ssc(struct mt7621_pcie *pcie) +static void set_phy_for_ssc(struct mt7621_pcie_port *port) { - unsigned long reg = rt_sysc_r32(SYSC_REG_SYSTEM_CONFIG0); + struct mt7621_pcie *pcie = port->pcie; + struct device *dev = pcie->dev; + u32 phy_offset = port->phy_reg_offset; + u32 reg = rt_sysc_r32(SYSC_REG_SYSTEM_CONFIG0); + u32 offset; + u32 val; reg = (reg >> 6) & 0x7; - /* Set PCIe Port0 & Port1 PHY to disable SSC */ + /* Set PCIe Port PHY to disable SSC */ /* Debug Xtal Type */ - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x400), 8, 1, 0x01); // rg_pe1_frc_h_xtal_type - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x400), 9, 2, 0x00); // rg_pe1_h_xtal_type - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x000), 4, 1, 0x01); // rg_pe1_frc_phy_en //Force Port 0 enable control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x100), 4, 1, 0x01); // rg_pe1_frc_phy_en //Force Port 1 enable control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x000), 5, 1, 0x00); // rg_pe1_phy_en //Port 0 disable - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x100), 5, 1, 0x00); // rg_pe1_phy_en //Port 1 disable - if (reg <= 5 && reg >= 3) { // 40MHz Xtal - 
set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x490), 6, 2, 0x01); // RG_PE1_H_PLL_PREDIV //Pre-divider ratio (for host mode) - printk("***** Xtal 40MHz *****\n"); - } else { // 25MHz | 20MHz Xtal - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x490), 6, 2, 0x00); // RG_PE1_H_PLL_PREDIV //Pre-divider ratio (for host mode) + offset = phy_offset + RG_PE1_FRC_H_XTAL_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_FRC_H_XTAL_TYPE | RG_PE1_H_XTAL_TYPE); + val |= RG_PE1_FRC_H_XTAL_TYPE; + val |= RG_PE1_H_XTAL_TYPE_VAL(0x00); + pcie_write(pcie, val, offset); + + /* disable port */ + offset = (port->slot != 1) ? + phy_offset + RG_PE1_FRC_PHY_REG : + phy_offset + RG_PE1_FRC_PHY_REG + RG_P0_TO_P1_WIDTH; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_FRC_PHY_EN | RG_PE1_PHY_EN); + val |= RG_PE1_FRC_PHY_EN; + pcie_write(pcie, val, offset); + + /* Set Pre-divider ratio (for host mode) */ + offset = phy_offset + RG_PE1_H_PLL_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_H_PLL_PREDIV); + + if (reg <= 5 && reg >= 3) { /* 40MHz Xtal */ + val |= RG_PE1_H_PLL_PREDIV_VAL(0x01); + pcie_write(pcie, val, offset); + dev_info(dev, "Xtal is 40MHz\n"); + } else { /* 25MHz | 20MHz Xtal */ + val |= RG_PE1_H_PLL_PREDIV_VAL(0x00); + pcie_write(pcie, val, offset); if (reg >= 6) { - printk("***** Xtal 25MHz *****\n"); - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x4bc), 4, 2, 0x01); // RG_PE1_H_PLL_FBKSEL //Feedback clock select - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x49c), 0, 31, 0x18000000); // RG_PE1_H_LCDDS_PCW_NCPO //DDS NCPO PCW (for host mode) - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x4a4), 0, 16, 0x18d); // RG_PE1_H_LCDDS_SSC_PRD //DDS SSC dither period control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x4a8), 0, 12, 0x4a); // RG_PE1_H_LCDDS_SSC_DELTA //DDS SSC dither amplitude control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x4a8), 16, 12, 0x4a); // RG_PE1_H_LCDDS_SSC_DELTA1 //DDS SSC dither amplitude control for initial + dev_info(dev, "Xtal is 25MHz\n"); + + /* Select feedback clock */ + offset = phy_offset + RG_PE1_H_PLL_FBKSEL_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_H_PLL_FBKSEL); + val |= RG_PE1_H_PLL_FBKSEL_VAL(0x01); + pcie_write(pcie, val, offset); + + /* DDS NCPO PCW (for host mode) */ + offset = phy_offset + RG_PE1_H_LCDDS_SSC_PRD_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_H_LCDDS_SSC_PRD); + val |= RG_PE1_H_LCDDS_SSC_PRD_VAL(0x18000000); + pcie_write(pcie, val, offset); + + /* DDS SSC dither period control */ + offset = phy_offset + RG_PE1_H_LCDDS_SSC_PRD_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_H_LCDDS_SSC_PRD); + val |= RG_PE1_H_LCDDS_SSC_PRD_VAL(0x18d); + pcie_write(pcie, val, offset); + + /* DDS SSC dither amplitude control */ + offset = phy_offset + RG_PE1_H_LCDDS_SSC_DELTA_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_H_LCDDS_SSC_DELTA | + RG_PE1_H_LCDDS_SSC_DELTA1); + val |= RG_PE1_H_LCDDS_SSC_DELTA_VAL(0x4a); + val |= RG_PE1_H_LCDDS_SSC_DELTA1_VAL(0x4a); + pcie_write(pcie, val, offset); } else { - printk("***** Xtal 20MHz *****\n"); + dev_info(dev, "Xtal is 20MHz\n"); } } - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x4a0), 5, 1, 0x01); // RG_PE1_LCDDS_CLK_PH_INV //DDS clock inversion - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x490), 22, 2, 0x02); // RG_PE1_H_PLL_BC - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x490), 18, 4, 0x06); // RG_PE1_H_PLL_BP - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 
0x490), 12, 4, 0x02); // RG_PE1_H_PLL_IR - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x490), 8, 4, 0x01); // RG_PE1_H_PLL_IC - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x4ac), 16, 3, 0x00); // RG_PE1_H_PLL_BR - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x490), 1, 3, 0x02); // RG_PE1_PLL_DIVEN - if (reg <= 5 && reg >= 3) { // 40MHz Xtal - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x414), 6, 2, 0x01); // rg_pe1_mstckdiv //value of da_pe1_mstckdiv when force mode enable - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x414), 5, 1, 0x01); // rg_pe1_frc_mstckdiv //force mode enable of da_pe1_mstckdiv - } - /* Enable PHY and disable force mode */ - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x000), 5, 1, 0x01); // rg_pe1_phy_en //Port 0 enable - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x100), 5, 1, 0x01); // rg_pe1_phy_en //Port 1 enable - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x000), 4, 1, 0x00); // rg_pe1_frc_phy_en //Force Port 0 disable control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P0P1_CTL_OFFSET + 0x100), 4, 1, 0x00); // rg_pe1_frc_phy_en //Force Port 1 disable control - /* Set PCIe Port2 PHY to disable SSC */ - /* Debug Xtal Type */ - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x400), 8, 1, 0x01); // rg_pe1_frc_h_xtal_type - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x400), 9, 2, 0x00); // rg_pe1_h_xtal_type - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x000), 4, 1, 0x01); // rg_pe1_frc_phy_en //Force Port 0 enable control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x000), 5, 1, 0x00); // rg_pe1_phy_en //Port 0 disable - if (reg <= 5 && reg >= 3) { // 40MHz Xtal - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x490), 6, 2, 0x01); // RG_PE1_H_PLL_PREDIV //Pre-divider ratio (for host mode) - } else { // 25MHz | 20MHz Xtal - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x490), 6, 2, 0x00); // RG_PE1_H_PLL_PREDIV //Pre-divider ratio (for host mode) - if (reg >= 6) { // 25MHz Xtal - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x4bc), 4, 2, 0x01); // RG_PE1_H_PLL_FBKSEL //Feedback clock select - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x49c), 0, 31, 0x18000000); // RG_PE1_H_LCDDS_PCW_NCPO //DDS NCPO PCW (for host mode) - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x4a4), 0, 16, 0x18d); // RG_PE1_H_LCDDS_SSC_PRD //DDS SSC dither period control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x4a8), 0, 12, 0x4a); // RG_PE1_H_LCDDS_SSC_DELTA //DDS SSC dither amplitude control - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x4a8), 16, 12, 0x4a); // RG_PE1_H_LCDDS_SSC_DELTA1 //DDS SSC dither amplitude control for initial - } - } - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x4a0), 5, 1, 0x01); // RG_PE1_LCDDS_CLK_PH_INV //DDS clock inversion - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x490), 22, 2, 0x02); // RG_PE1_H_PLL_BC - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x490), 18, 4, 0x06); // RG_PE1_H_PLL_BP - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x490), 12, 4, 0x02); // RG_PE1_H_PLL_IR - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x490), 8, 4, 0x01); // RG_PE1_H_PLL_IC - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x4ac), 16, 3, 0x00); // RG_PE1_H_PLL_BR - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x490), 1, 3, 0x02); // RG_PE1_PLL_DIVEN - if (reg <= 5 && reg >= 3) { // 40MHz Xtal - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x414), 6, 2, 0x01); // 
rg_pe1_mstckdiv //value of da_pe1_mstckdiv when force mode enable - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x414), 5, 1, 0x01); // rg_pe1_frc_mstckdiv //force mode enable of da_pe1_mstckdiv + /* DDS clock inversion */ + offset = phy_offset + RG_PE1_LCDDS_CLK_PH_INV_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_LCDDS_CLK_PH_INV); + val |= RG_PE1_LCDDS_CLK_PH_INV; + pcie_write(pcie, val, offset); + + /* Set PLL bits */ + offset = phy_offset + RG_PE1_H_PLL_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_H_PLL_BC | RG_PE1_H_PLL_BP | RG_PE1_H_PLL_IR | + RG_PE1_H_PLL_IC | RG_PE1_PLL_DIVEN); + val |= RG_PE1_H_PLL_BC_VAL(0x02); + val |= RG_PE1_H_PLL_BP_VAL(0x06); + val |= RG_PE1_H_PLL_IR_VAL(0x02); + val |= RG_PE1_H_PLL_IC_VAL(0x01); + val |= RG_PE1_PLL_DIVEN_VAL(0x02); + pcie_write(pcie, val, offset); + + offset = phy_offset + RG_PE1_H_PLL_BR_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_H_PLL_BR); + val |= RG_PE1_H_PLL_BR_VAL(0x00); + pcie_write(pcie, val, offset); + + if (reg <= 5 && reg >= 3) { /* 40MHz Xtal */ + /* set force mode enable of da_pe1_mstckdiv */ + offset = phy_offset + RG_PE1_MSTCKDIV_REG; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_MSTCKDIV | RG_PE1_FRC_MSTCKDIV); + val |= (RG_PE1_MSTCKDIV_VAL(0x01) | RG_PE1_FRC_MSTCKDIV); + pcie_write(pcie, val, offset); } + /* Enable PHY and disable force mode */ - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x000), 5, 1, 0x01); // rg_pe1_phy_en //Port 0 enable - set_pcie_phy(pcie, (RALINK_PCIEPHY_P2_CTL_OFFSET + 0x000), 4, 1, 0x00); // rg_pe1_frc_phy_en //Force Port 0 disable control + offset = (port->slot != 1) ? + phy_offset + RG_PE1_FRC_PHY_REG : + phy_offset + RG_PE1_FRC_PHY_REG + RG_P0_TO_P1_WIDTH; + val = pcie_read(pcie, offset); + val &= ~(RG_PE1_FRC_PHY_EN | RG_PE1_PHY_EN); + val |= (RG_PE1_FRC_PHY_EN | RG_PE1_PHY_EN); + pcie_write(pcie, val, offset); +} + +static void mt7621_enable_phy(struct mt7621_pcie_port *port) +{ + u32 chip_rev_id = rt_sysc_r32(MT7621_CHIP_REV_ID); + + if ((chip_rev_id & 0xFFFF) == CHIP_REV_MT7621_E2) + bypass_pipe_rst(port); + set_phy_for_ssc(port); +} + +static inline void mt7621_control_assert(struct mt7621_pcie_port *port) +{ + u32 chip_rev_id = rt_sysc_r32(MT7621_CHIP_REV_ID); + + if ((chip_rev_id & 0xFFFF) == CHIP_REV_MT7621_E2) + reset_control_assert(port->pcie_rst); + else + reset_control_deassert(port->pcie_rst); +} + +static inline void mt7621_control_deassert(struct mt7621_pcie_port *port) +{ + u32 chip_rev_id = rt_sysc_r32(MT7621_CHIP_REV_ID); + + if ((chip_rev_id & 0xFFFF) == CHIP_REV_MT7621_E2) + reset_control_deassert(port->pcie_rst); + else + reset_control_assert(port->pcie_rst); } -static void setup_cm_memory_region(struct resource *mem_resource) +static void mt7621_reset_port(struct mt7621_pcie_port *port) { + mt7621_control_assert(port); + msleep(100); + mt7621_control_deassert(port); +} + +static void setup_cm_memory_region(struct mt7621_pcie *pcie) +{ + struct resource *mem_resource = &pcie->mem; + struct device *dev = pcie->dev; resource_size_t mask; if (mips_cps_numiocu(0)) { - /* FIXME: hardware doesn't accept mask values with 1s after + /* + * FIXME: hardware doesn't accept mask values with 1s after * 0s (e.g. 
0xffef), so it would be great to warn if that's - * about to happen */ + * about to happen + */ mask = ~(mem_resource->end - mem_resource->start); write_gcr_reg1_base(mem_resource->start); write_gcr_reg1_mask(mask | CM_GCR_REGn_MASK_CMTGT_IOCU0); - printk("PCI coherence region base: 0x%08llx, mask/settings: 0x%08llx\n", + dev_info(dev, "PCI coherence region base: 0x%08llx, mask/settings: 0x%08llx\n", (unsigned long long)read_gcr_reg1_base(), (unsigned long long)read_gcr_reg1_mask()); } @@ -382,10 +502,54 @@ static int mt7621_pci_parse_request_of_pci_ranges(struct mt7621_pcie *pcie) return 0; } +static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie, + struct device_node *node, + int slot) +{ + struct mt7621_pcie_port *port; + struct device *dev = pcie->dev; + struct device_node *pnode = dev->of_node; + struct resource regs; + char name[6]; + int err; + + port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); + if (!port) + return -ENOMEM; + + err = of_address_to_resource(pnode, slot + 1, ®s); + if (err) { + dev_err(dev, "missing \"reg\" property\n"); + return err; + } + + port->base = devm_ioremap_resource(dev, ®s); + if (IS_ERR(port->base)) + return PTR_ERR(port->base); + + snprintf(name, sizeof(name), "pcie%d", slot); + port->pcie_rst = devm_reset_control_get_exclusive(dev, name); + if (PTR_ERR(port->pcie_rst) == -EPROBE_DEFER) { + dev_err(dev, "failed to get pcie%d reset control\n", slot); + return PTR_ERR(port->pcie_rst); + } + + port->slot = slot; + port->pcie = pcie; + port->phy_reg_offset = (slot != 2) ? + RALINK_PCIEPHY_P0P1_CTL_OFFSET : + RALINK_PCIEPHY_P2_CTL_OFFSET; + + INIT_LIST_HEAD(&port->list); + list_add_tail(&port->list, &pcie->ports); + + return 0; +} + static int mt7621_pcie_parse_dt(struct mt7621_pcie *pcie) { struct device *dev = pcie->dev; - struct device_node *node = dev->of_node; + struct device_node *node = dev->of_node, *child; struct resource regs; int err; @@ -399,6 +563,215 @@ static int mt7621_pcie_parse_dt(struct mt7621_pcie *pcie) if (IS_ERR(pcie->base)) return PTR_ERR(pcie->base); + for_each_available_child_of_node(node, child) { + int slot; + + err = of_pci_get_devfn(child); + if (err < 0) { + dev_err(dev, "failed to parse devfn: %d\n", err); + return err; + } + + slot = PCI_SLOT(err); + + err = mt7621_pcie_parse_port(pcie, child, slot); + if (err) + return err; + } + + return 0; +} + +static int mt7621_pcie_init_port(struct mt7621_pcie_port *port) +{ + struct mt7621_pcie *pcie = port->pcie; + struct device *dev = pcie->dev; + u32 slot = port->slot; + u32 val = 0; + + /* + * Any MT7621 Ralink pcie controller that doesn't have 0x0101 at + * the end of the chip_id has inverted PCI resets. 
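+	 * That is why mt7621_control_assert()/mt7621_control_deassert()
+	 * above swap reset_control_assert() and reset_control_deassert()
+	 * for every revision other than CHIP_REV_MT7621_E2 (0x0101).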
+ */ + mt7621_reset_port(port); + + val = read_config(pcie, slot, PCIE_FTS_NUM); + dev_info(dev, "Port %d N_FTS = %x\n", (unsigned int)val, slot); + + if ((pcie_port_read(port, RALINK_PCI_STATUS) & PCIE_PORT_LINKUP) == 0) { + dev_err(dev, "pcie%d no card, disable it (RST & CLK)\n", slot); + mt7621_control_assert(port); + rt_sysc_m32(PCIE_PORT_CLK_EN(slot), 0, RALINK_CLKCFG1); + port->enabled = false; + } else { + port->enabled = true; + } + + mt7621_enable_phy(port); + + return 0; +} + +static void mt7621_pcie_init_ports(struct mt7621_pcie *pcie) +{ + struct device *dev = pcie->dev; + struct mt7621_pcie_port *port, *tmp; + int err; + + list_for_each_entry_safe(port, tmp, &pcie->ports, list) { + u32 slot = port->slot; + + err = mt7621_pcie_init_port(port); + if (err) { + dev_err(dev, "Initiating port %d failed\n", slot); + list_del(&port->list); + } + } + + rt_sysc_m32(0, RALINK_PCIE_RST, RALINK_RSTCTRL); + rt_sysc_m32(0x30, 2 << 4, SYSC_REG_SYSTEM_CONFIG1); + rt_sysc_m32(PCIE_CLK_GEN_EN, PCIE_CLK_GEN_DIS, RALINK_PCIE_CLK_GEN); + rt_sysc_m32(PCIE_CLK_GEN1_DIS, PCIE_CLK_GEN1_EN, RALINK_PCIE_CLK_GEN1); + rt_sysc_m32(PCIE_CLK_GEN_DIS, PCIE_CLK_GEN_EN, RALINK_PCIE_CLK_GEN); + msleep(50); + rt_sysc_m32(RALINK_PCIE_RST, 0, RALINK_RSTCTRL); +} + +static int mt7621_pcie_enable_port(struct mt7621_pcie_port *port) +{ + struct mt7621_pcie *pcie = port->pcie; + u32 slot = port->slot; + u32 offset = MT7621_PCIE_OFFSET + (slot * MT7621_NEXT_PORT); + u32 val; + int err; + + /* assert port PERST_N */ + val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); + val |= PCIE_PORT_PERST(slot); + pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); + + /* de-assert port PERST_N */ + val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); + val &= ~PCIE_PORT_PERST(slot); + pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); + + /* 100ms timeout value should be enough for Gen1 training */ + err = readl_poll_timeout(port->base + RALINK_PCI_STATUS, + val, !!(val & PCIE_PORT_LINKUP), + 20, 100 * USEC_PER_MSEC); + if (err) + return -ETIMEDOUT; + + /* enable pcie interrupt */ + val = pcie_read(pcie, RALINK_PCI_PCIMSK_ADDR); + val |= PCIE_PORT_INT_EN(slot); + pcie_write(pcie, val, RALINK_PCI_PCIMSK_ADDR); + + /* map 2G DDR region */ + pcie_write(pcie, PCIE_BAR_MAP_MAX | PCIE_BAR_ENABLE, + offset + RALINK_PCI_BAR0SETUP_ADDR); + pcie_write(pcie, MEMORY_BASE, + offset + RALINK_PCI_IMBASEBAR0_ADDR); + + /* configure class code and revision ID */ + pcie_write(pcie, PCIE_CLASS_CODE | PCIE_REVISION_ID, + offset + RALINK_PCI_CLASS); + + return 0; +} + +static void mt7621_pcie_enable_ports(struct mt7621_pcie *pcie) +{ + struct device *dev = pcie->dev; + struct mt7621_pcie_port *port; + u8 num_slots_enabled = 0; + u32 slot; + u32 val; + + list_for_each_entry(port, &pcie->ports, list) { + if (port->enabled) { + if (!mt7621_pcie_enable_port(port)) { + dev_err(dev, "de-assert port %d PERST_N\n", + port->slot); + continue; + } + dev_info(dev, "PCIE%d enabled\n", slot); + num_slots_enabled++; + } + } + + for (slot = 0; slot < num_slots_enabled; slot++) { + val = read_config(pcie, slot, 0x4); + write_config(pcie, slot, 0x4, val | 0x4); + /* configure RC FTS number to 250 when it leaves L0s */ + val = read_config(pcie, slot, PCIE_FTS_NUM); + val &= ~PCIE_FTS_NUM_MASK; + val |= PCIE_FTS_NUM_L0(0x50); + write_config(pcie, slot, PCIE_FTS_NUM, val); + } +} + +static int mt7621_pcie_init_virtual_bridges(struct mt7621_pcie *pcie) +{ + u32 pcie_link_status = 0; + u32 val= 0; + struct mt7621_pcie_port *port; + + list_for_each_entry(port, &pcie->ports, list) { + u32 slot = 
port->slot; + + if (port->enabled) + pcie_link_status |= BIT(slot); + } + + if (pcie_link_status == 0) + return -1; + + /* + * pcie(2/1/0) link status pcie2_num pcie1_num pcie0_num + * 3'b000 x x x + * 3'b001 x x 0 + * 3'b010 x 0 x + * 3'b011 x 1 0 + * 3'b100 0 x x + * 3'b101 1 x 0 + * 3'b110 1 0 x + * 3'b111 2 1 0 + */ + switch (pcie_link_status) { + case 2: + val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); + val &= ~(MT7621_BR0_MASK | MT7621_BR1_MASK); + val |= 0x1 << MT7621_BR0_SHIFT; + val |= 0x0 << MT7621_BR1_SHIFT; + pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); + break; + case 4: + val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); + val &= ~MT7621_BR_ALL_MASK; + val |= 0x1 << MT7621_BR0_SHIFT; + val |= 0x2 << MT7621_BR1_SHIFT; + val |= 0x0 << MT7621_BR2_SHIFT; + pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); + break; + case 5: + val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); + val &= ~MT7621_BR_ALL_MASK; + val |= 0x0 << MT7621_BR0_SHIFT; + val |= 0x2 << MT7621_BR1_SHIFT; + val |= 0x1 << MT7621_BR2_SHIFT; + pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); + break; + case 6: + val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); + val &= ~MT7621_BR_ALL_MASK; + val |= 0x2 << MT7621_BR0_SHIFT; + val |= 0x0 << MT7621_BR1_SHIFT; + val |= 0x1 << MT7621_BR2_SHIFT; + pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); + break; + } + return 0; } @@ -441,7 +814,6 @@ static int mt7621_pci_probe(struct platform_device *pdev) struct mt7621_pcie *pcie; struct pci_host_bridge *bridge; int err; - u32 val = 0; LIST_HEAD(res); if (!dev->of_node) @@ -449,7 +821,7 @@ static int mt7621_pci_probe(struct platform_device *pdev) bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); if (!bridge) - return -ENODEV; + return -ENOMEM; pcie = pci_host_bridge_priv(bridge); pcie->dev = dev; @@ -468,204 +840,18 @@ static int mt7621_pci_probe(struct platform_device *pdev) ioport_resource.start = 0; ioport_resource.end = ~0UL; /* no limit */ - val = RALINK_PCIE0_RST; - val |= RALINK_PCIE1_RST; - val |= RALINK_PCIE2_RST; - - ASSERT_SYSRST_PCIE(RALINK_PCIE0_RST | RALINK_PCIE1_RST | RALINK_PCIE2_RST); - - *(unsigned int *)(0xbe000060) &= ~(0x3 << 10 | 0x3 << 3); - *(unsigned int *)(0xbe000060) |= BIT(10) | BIT(3); - mdelay(100); - *(unsigned int *)(0xbe000600) |= BIT(19) | BIT(8) | BIT(7); // use GPIO19/GPIO8/GPIO7 (PERST_N/UART_RXD3/UART_TXD3) - mdelay(100); - *(unsigned int *)(0xbe000620) &= ~(BIT(19) | BIT(8) | BIT(7)); // clear DATA - - mdelay(100); - - val = RALINK_PCIE0_RST; - val |= RALINK_PCIE1_RST; - val |= RALINK_PCIE2_RST; - - DEASSERT_SYSRST_PCIE(val); + mt7621_pcie_init_ports(pcie); - if ((*(unsigned int *)(0xbe00000c) & 0xFFFF) == 0x0101) // MT7621 E2 - bypass_pipe_rst(pcie); - set_phy_for_ssc(pcie); - - list_for_each_entry_safe(port, tmp, &pcie->ports, list) { - u32 slot = port->slot; - val = read_config(pcie, slot, 0x70c); - dev_info(dev, "Port %d N_FTS = %x\n", (unsigned int)val, slot); - } - - rt_sysc_m32(0, RALINK_PCIE_RST, RALINK_RSTCTRL); - rt_sysc_m32(0x30, 2 << 4, SYSC_REG_SYSTEM_CONFIG1); - - rt_sysc_m32(0x80000000, 0, RALINK_PCIE_CLK_GEN); - rt_sysc_m32(0x7f000000, 0xa << 24, RALINK_PCIE_CLK_GEN1); - rt_sysc_m32(0, 0x80000000, RALINK_PCIE_CLK_GEN); - - mdelay(50); - rt_sysc_m32(RALINK_PCIE_RST, 0, RALINK_RSTCTRL); - - /* Use GPIO control instead of PERST_N */ - *(unsigned int *)(0xbe000620) |= BIT(19) | BIT(8) | BIT(7); // set DATA - mdelay(1000); - - if ((pcie_read(pcie, RT6855_PCIE0_OFFSET + RALINK_PCI_STATUS) & 0x1) == 0) { - printk("PCIE0 no card, disable it(RST&CLK)\n"); - 
ASSERT_SYSRST_PCIE(RALINK_PCIE0_RST); - rt_sysc_m32(RALINK_PCIE0_CLK_EN, 0, RALINK_CLKCFG1); - pcie_link_status &= ~(BIT(0)); - } else { - pcie_link_status |= BIT(0); - val = pcie_read(pcie, RALINK_PCI_PCIMSK_ADDR); - val |= BIT(20); // enable pcie1 interrupt - pcie_write(pcie, val, RALINK_PCI_PCIMSK_ADDR); - } - - if ((pcie_read(pcie, RT6855_PCIE1_OFFSET + RALINK_PCI_STATUS) & 0x1) == 0) { - printk("PCIE1 no card, disable it(RST&CLK)\n"); - ASSERT_SYSRST_PCIE(RALINK_PCIE1_RST); - rt_sysc_m32(RALINK_PCIE1_CLK_EN, 0, RALINK_CLKCFG1); - pcie_link_status &= ~(BIT(1)); - } else { - pcie_link_status |= BIT(1); - val = pcie_read(pcie, RALINK_PCI_PCIMSK_ADDR); - val |= BIT(21); // enable pcie1 interrupt - pcie_write(pcie, val, RALINK_PCI_PCIMSK_ADDR); - } - - if ((pcie_read(pcie, RT6855_PCIE2_OFFSET + RALINK_PCI_STATUS) & 0x1) == 0) { - printk("PCIE2 no card, disable it(RST&CLK)\n"); - ASSERT_SYSRST_PCIE(RALINK_PCIE2_RST); - rt_sysc_m32(RALINK_PCIE2_CLK_EN, 0, RALINK_CLKCFG1); - pcie_link_status &= ~(BIT(2)); - } else { - pcie_link_status |= BIT(2); - val = pcie_read(pcie, RALINK_PCI_PCIMSK_ADDR); - val |= BIT(22); // enable pcie2 interrupt - pcie_write(pcie, val, RALINK_PCI_PCIMSK_ADDR); - } - - if (pcie_link_status == 0) + err = mt7621_pcie_init_virtual_bridges(pcie); + if (err) { + dev_err(dev, "Nothing is connected in virtual bridges. Exiting..."); return 0; - -/* -pcie(2/1/0) link status pcie2_num pcie1_num pcie0_num -3'b000 x x x -3'b001 x x 0 -3'b010 x 0 x -3'b011 x 1 0 -3'b100 0 x x -3'b101 1 x 0 -3'b110 1 0 x -3'b111 2 1 0 -*/ - switch (pcie_link_status) { - case 2: - val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); - val &= ~0x00ff0000; - val |= 0x1 << 16; // port 0 - val |= 0x0 << 20; // port 1 - pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); - break; - case 4: - val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); - val &= ~0x0fff0000; - val |= 0x1 << 16; //port0 - val |= 0x2 << 20; //port1 - val |= 0x0 << 24; //port2 - pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); - break; - case 5: - val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); - val &= ~0x0fff0000; - val |= 0x0 << 16; //port0 - val |= 0x2 << 20; //port1 - val |= 0x1 << 24; //port2 - pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); - break; - case 6: - val = pcie_read(pcie, RALINK_PCI_PCICFG_ADDR); - val &= ~0x0fff0000; - val |= 0x2 << 16; //port0 - val |= 0x0 << 20; //port1 - val |= 0x1 << 24; //port2 - pcie_write(pcie, val, RALINK_PCI_PCICFG_ADDR); - break; } -/* - ioport_resource.start = mt7621_res_pci_io1.start; - ioport_resource.end = mt7621_res_pci_io1.end; -*/ - pcie_write(pcie, 0xffffffff, RALINK_PCI_MEMBASE); pcie_write(pcie, RALINK_PCI_IO_MAP_BASE, RALINK_PCI_IOBASE); - //PCIe0 - if ((pcie_link_status & 0x1) != 0) { - /* open 7FFF:2G; ENABLE */ - pcie_write(pcie, 0x7FFF0001, - RT6855_PCIE0_OFFSET + RALINK_PCI_BAR0SETUP_ADDR); - pcie_write(pcie, MEMORY_BASE, - RT6855_PCIE0_OFFSET + RALINK_PCI_IMBASEBAR0_ADDR); - pcie_write(pcie, 0x06040001, - RT6855_PCIE0_OFFSET + RALINK_PCI_CLASS); - printk("PCIE0 enabled\n"); - } - - //PCIe1 - if ((pcie_link_status & 0x2) != 0) { - /* open 7FFF:2G; ENABLE */ - pcie_write(pcie, 0x7FFF0001, - RT6855_PCIE1_OFFSET + RALINK_PCI_BAR0SETUP_ADDR); - pcie_write(pcie, MEMORY_BASE, - RT6855_PCIE1_OFFSET + RALINK_PCI_IMBASEBAR0_ADDR); - pcie_write(pcie, 0x06040001, - RT6855_PCIE1_OFFSET + RALINK_PCI_CLASS); - printk("PCIE1 enabled\n"); - } - - //PCIe2 - if ((pcie_link_status & 0x4) != 0) { - /* open 7FFF:2G; ENABLE */ - pcie_write(pcie, 0x7FFF0001, - RT6855_PCIE2_OFFSET + RALINK_PCI_BAR0SETUP_ADDR); - 
pcie_write(pcie, MEMORY_BASE, - RT6855_PCIE2_OFFSET + RALINK_PCI_IMBASEBAR0_ADDR); - pcie_write(pcie, 0x06040001, - RT6855_PCIE2_OFFSET + RALINK_PCI_CLASS); - printk("PCIE2 enabled\n"); - } - - switch (pcie_link_status) { - case 7: - val = read_config(pcie, 2, 0x4); - write_config(pcie, 2, 0x4, val | 0x4); - val = read_config(pcie, 2, 0x70c); - val &= ~(0xff) << 8; - val |= 0x50 << 8; - write_config(pcie, 2, 0x70c, val); - case 3: - case 5: - case 6: - val = read_config(pcie, 1, 0x4); - write_config(pcie, 1, 0x4, val | 0x4); - val = read_config(pcie, 1, 0x70c); - val &= ~(0xff) << 8; - val |= 0x50 << 8; - write_config(pcie, 1, 0x70c, val); - default: - val = read_config(pcie, 0, 0x4); - write_config(pcie, 0, 0x4, val | 0x4); //bus master enable - val = read_config(pcie, 0, 0x70c); - val &= ~(0xff) << 8; - val |= 0x50 << 8; - write_config(pcie, 0, 0x70c, val); - } + mt7621_pcie_enable_ports(pcie); err = mt7621_pci_parse_request_of_pci_ranges(pcie); if (err) { @@ -673,7 +859,7 @@ pcie(2/1/0) link status pcie2_num pcie1_num pcie0_num return err; } - setup_cm_memory_region(&pcie->mem); + setup_cm_memory_region(pcie); err = mt7621_pcie_request_resources(pcie, &res); if (err) { diff --git a/drivers/staging/mt7621-spi/spi-mt7621.c b/drivers/staging/mt7621-spi/spi-mt7621.c index d045b5568e0f..513b6e79b985 100644 --- a/drivers/staging/mt7621-spi/spi-mt7621.c +++ b/drivers/staging/mt7621-spi/spi-mt7621.c @@ -55,9 +55,6 @@ #define MT7621_CPOL BIT(4) #define MT7621_LSB_FIRST BIT(3) -#define RT2880_SPI_MODE_BITS (SPI_CPOL | SPI_CPHA | \ - SPI_LSB_FIRST | SPI_CS_HIGH) - struct mt7621_spi; struct mt7621_spi { @@ -86,16 +83,13 @@ static inline void mt7621_spi_write(struct mt7621_spi *rs, u32 reg, u32 val) iowrite32(val, rs->base + reg); } -static void mt7621_spi_reset(struct mt7621_spi *rs, int duplex) +static void mt7621_spi_reset(struct mt7621_spi *rs) { u32 master = mt7621_spi_read(rs, MT7621_SPI_MASTER); master |= 7 << 29; master |= 1 << 2; - if (duplex) - master |= 1 << 10; - else - master &= ~(1 << 10); + master &= ~(1 << 10); mt7621_spi_write(rs, MT7621_SPI_MASTER, master); rs->pending_write = 0; @@ -107,7 +101,7 @@ static void mt7621_spi_set_cs(struct spi_device *spi, int enable) int cs = spi->chip_select; u32 polar = 0; - mt7621_spi_reset(rs, cs); + mt7621_spi_reset(rs); if (enable) polar = BIT(cs); mt7621_spi_write(rs, MT7621_SPI_POLAR, polar); @@ -139,20 +133,13 @@ static int mt7621_spi_prepare(struct spi_device *spi, unsigned int speed) if (spi->mode & SPI_LSB_FIRST) reg |= MT7621_LSB_FIRST; + /* This SPI controller seems to be tested on SPI flash only + * and some bits are swizzled under other SPI modes probably + * due to incorrect wiring inside the silicon. Only mode 0 + * works correctly. 
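+	 * CPHA and CPOL are therefore cleared unconditionally below,
+	 * and master->mode_bits no longer advertises SPI_CPOL or
+	 * SPI_CPHA, so the SPI core rejects devices that request
+	 * modes 1-3 at setup time.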
+ */ reg &= ~(MT7621_CPHA | MT7621_CPOL); - switch (spi->mode & (SPI_CPOL | SPI_CPHA)) { - case SPI_MODE_0: - break; - case SPI_MODE_1: - reg |= MT7621_CPHA; - break; - case SPI_MODE_2: - reg |= MT7621_CPOL; - break; - case SPI_MODE_3: - reg |= MT7621_CPOL | MT7621_CPHA; - break; - } + mt7621_spi_write(rs, MT7621_SPI_MASTER, reg); return 0; @@ -261,7 +248,7 @@ static void mt7621_spi_write_half_duplex(struct mt7621_spi *rs, rs->pending_write = len; } -static int mt7621_spi_transfer_half_duplex(struct spi_master *master, +static int mt7621_spi_transfer_one_message(struct spi_master *master, struct spi_message *m) { struct mt7621_spi *rs = spi_master_get_devdata(master); @@ -284,10 +271,20 @@ static int mt7621_spi_transfer_half_duplex(struct spi_master *master, mt7621_spi_set_cs(spi, 1); m->actual_length = 0; list_for_each_entry(t, &m->transfers, transfer_list) { - if (t->rx_buf) + if ((t->rx_buf) && (t->tx_buf)) { + /* This controller will shift some extra data out + * of spi_opcode if (mosi_bit_cnt > 0) && + * (cmd_bit_cnt == 0). So the claimed full-duplex + * support is broken since we have no way to read + * the MISO value during that bit. + */ + status = -EIO; + goto msg_done; + } else if (t->rx_buf) { mt7621_spi_read_half_duplex(rs, t->len, t->rx_buf); - else if (t->tx_buf) + } else if (t->tx_buf) { mt7621_spi_write_half_duplex(rs, t->len, t->tx_buf); + } m->actual_length += t->len; } mt7621_spi_flush(rs); @@ -300,102 +297,6 @@ msg_done: return 0; } -static int mt7621_spi_transfer_full_duplex(struct spi_master *master, - struct spi_message *m) -{ - struct mt7621_spi *rs = spi_master_get_devdata(master); - struct spi_device *spi = m->spi; - unsigned int speed = spi->max_speed_hz; - struct spi_transfer *t = NULL; - int status = 0; - int i, len = 0; - int rx_len = 0; - u32 data[9] = { 0 }; - u32 val = 0; - - mt7621_spi_wait_till_ready(rs); - - list_for_each_entry(t, &m->transfers, transfer_list) { - const u8 *buf = t->tx_buf; - - if (t->rx_buf) - rx_len += t->len; - - if (!buf) - continue; - - if (WARN_ON(len + t->len > 16)) { - status = -EIO; - goto msg_done; - } - - for (i = 0; i < t->len; i++, len++) - data[len / 4] |= buf[i] << (8 * (len & 3)); - if (speed > t->speed_hz) - speed = t->speed_hz; - } - - if (WARN_ON(rx_len > 16)) { - status = -EIO; - goto msg_done; - } - - if (mt7621_spi_prepare(spi, speed)) { - status = -EIO; - goto msg_done; - } - - for (i = 0; i < len; i += 4) - mt7621_spi_write(rs, MT7621_SPI_DATA0 + i, data[i / 4]); - - val |= len * 8; - val |= (rx_len * 8) << 12; - mt7621_spi_write(rs, MT7621_SPI_MOREBUF, val); - - mt7621_spi_set_cs(spi, 1); - - val = mt7621_spi_read(rs, MT7621_SPI_TRANS); - val |= SPI_CTL_START; - mt7621_spi_write(rs, MT7621_SPI_TRANS, val); - - mt7621_spi_wait_till_ready(rs); - - mt7621_spi_set_cs(spi, 0); - - for (i = 0; i < rx_len; i += 4) - data[i / 4] = mt7621_spi_read(rs, MT7621_SPI_DATA4 + i); - - m->actual_length = rx_len; - - len = 0; - list_for_each_entry(t, &m->transfers, transfer_list) { - u8 *buf = t->rx_buf; - - if (!buf) - continue; - - for (i = 0; i < t->len; i++, len++) - buf[i] = data[len / 4] >> (8 * (len & 3)); - } - -msg_done: - m->status = status; - spi_finalize_current_message(master); - - return 0; -} - -static int mt7621_spi_transfer_one_message(struct spi_master *master, - struct spi_message *m) -{ - struct spi_device *spi = m->spi; - int cs = spi->chip_select; - - if (cs) - return mt7621_spi_transfer_full_duplex(master, m); - return mt7621_spi_transfer_half_duplex(master, m); -} - static int mt7621_spi_setup(struct 
spi_device *spi) { struct mt7621_spi *rs = spidev_to_mt7621_spi(spi); @@ -457,7 +358,7 @@ static int mt7621_spi_probe(struct platform_device *pdev) return -ENOMEM; } - master->mode_bits = RT2880_SPI_MODE_BITS; + master->mode_bits = SPI_LSB_FIRST; master->setup = mt7621_spi_setup; master->transfer_one_message = mt7621_spi_transfer_one_message; @@ -478,7 +379,7 @@ static int mt7621_spi_probe(struct platform_device *pdev) device_reset(&pdev->dev); - mt7621_spi_reset(rs, 0); + mt7621_spi_reset(rs); return spi_register_master(master); } diff --git a/drivers/staging/octeon-usb/octeon-hcd.c b/drivers/staging/octeon-usb/octeon-hcd.c index 9c766f5b812f..14982b6472a0 100644 --- a/drivers/staging/octeon-usb/octeon-hcd.c +++ b/drivers/staging/octeon-usb/octeon-hcd.c @@ -50,6 +50,7 @@ #include #include #include +#include #include #include @@ -3606,8 +3607,9 @@ static int octeon_usb_probe(struct platform_device *pdev) * Set the DMA mask to 64bits so we get buffers already translated for * DMA. */ - dev->coherent_dma_mask = ~0; - dev->dma_mask = &dev->coherent_dma_mask; + i = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64)); + if (i) + return i; /* * Only cn52XX and cn56XX have DWC_OTG USB hardware and the diff --git a/drivers/staging/octeon/ethernet-mdio.c b/drivers/staging/octeon/ethernet-mdio.c index f67f95043887..2848fa71a33d 100644 --- a/drivers/staging/octeon/ethernet-mdio.c +++ b/drivers/staging/octeon/ethernet-mdio.c @@ -21,7 +21,6 @@ #include "ethernet-util.h" #include -#include static void cvm_oct_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info) diff --git a/drivers/staging/octeon/ethernet-tx.c b/drivers/staging/octeon/ethernet-tx.c index df3441b815bb..317c9720467c 100644 --- a/drivers/staging/octeon/ethernet-tx.c +++ b/drivers/staging/octeon/ethernet-tx.c @@ -359,8 +359,7 @@ int cvm_oct_xmit(struct sk_buff *skb, struct net_device *dev) dst_release(skb_dst(skb)); skb_dst_set(skb, NULL); #ifdef CONFIG_XFRM - secpath_put(skb->sp); - skb->sp = NULL; + secpath_reset(skb); #endif nf_reset(skb); diff --git a/drivers/staging/octeon/ethernet.c b/drivers/staging/octeon/ethernet.c index 9b15c9ed844b..ce61c5670ef6 100644 --- a/drivers/staging/octeon/ethernet.c +++ b/drivers/staging/octeon/ethernet.c @@ -36,7 +36,6 @@ #include #include #include -#include #define OCTEON_MAX_MTU 65392 @@ -141,8 +140,8 @@ static void cvm_oct_periodic_worker(struct work_struct *work) if (priv->poll) priv->poll(cvm_oct_device[priv->port]); - cvm_oct_device[priv->port]->netdev_ops->ndo_get_stats( - cvm_oct_device[priv->port]); + cvm_oct_device[priv->port]->netdev_ops->ndo_get_stats + (cvm_oct_device[priv->port]); if (!atomic_read(&cvm_oct_poll_queue_stopping)) schedule_delayed_work(&priv->port_periodic_work, HZ); @@ -621,8 +620,8 @@ static const struct net_device_ops cvm_oct_pow_netdev_ops = { #endif }; -static struct device_node *cvm_oct_of_get_child( - const struct device_node *parent, int reg_val) +static struct device_node *cvm_oct_of_get_child + (const struct device_node *parent, int reg_val) { struct device_node *node = NULL; int size; @@ -818,7 +817,7 @@ static int cvm_oct_probe(struct platform_device *pdev) priv = netdev_priv(dev); priv->netdev = dev; priv->of_node = cvm_oct_node_for_port(pip, interface, - port_index); + port_index); INIT_DELAYED_WORK(&priv->port_periodic_work, cvm_oct_periodic_worker); diff --git a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c index ff145d493e1b..80b8d4153414 100644 --- a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c +++ 
b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c @@ -11,35 +11,51 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include -#include +#include #include +#include #include #include "olpc_dcon.h" +enum dcon_gpios { + OLPC_DCON_STAT0, + OLPC_DCON_STAT1, + OLPC_DCON_IRQ, + OLPC_DCON_LOAD, + OLPC_DCON_BLANK, +}; + +struct dcon_gpio { + const char *name; + unsigned long flags; +}; + +static const struct dcon_gpio gpios_asis[] = { + [OLPC_DCON_STAT0] = { .name = "dcon_stat0", .flags = GPIOD_ASIS }, + [OLPC_DCON_STAT1] = { .name = "dcon_stat1", .flags = GPIOD_ASIS }, + [OLPC_DCON_IRQ] = { .name = "dcon_irq", .flags = GPIOD_ASIS }, + [OLPC_DCON_LOAD] = { .name = "dcon_load", .flags = GPIOD_ASIS }, + [OLPC_DCON_BLANK] = { .name = "dcon_blank", .flags = GPIOD_ASIS }, +}; + +struct gpio_desc *gpios[5]; + static int dcon_init_xo_1(struct dcon_priv *dcon) { unsigned char lob; - - if (gpio_request(OLPC_GPIO_DCON_STAT0, "OLPC-DCON")) { - pr_err("failed to request STAT0 GPIO\n"); - return -EIO; - } - if (gpio_request(OLPC_GPIO_DCON_STAT1, "OLPC-DCON")) { - pr_err("failed to request STAT1 GPIO\n"); - goto err_gp_stat1; - } - if (gpio_request(OLPC_GPIO_DCON_IRQ, "OLPC-DCON")) { - pr_err("failed to request IRQ GPIO\n"); - goto err_gp_irq; - } - if (gpio_request(OLPC_GPIO_DCON_LOAD, "OLPC-DCON")) { - pr_err("failed to request LOAD GPIO\n"); - goto err_gp_load; - } - if (gpio_request(OLPC_GPIO_DCON_BLANK, "OLPC-DCON")) { - pr_err("failed to request BLANK GPIO\n"); - goto err_gp_blank; + int ret, i; + struct dcon_gpio *pin = &gpios_asis[0]; + + for (i = 0; i < ARRAY_SIZE(gpios_asis); i++) { + gpios[i] = devm_gpiod_get(&dcon->client->dev, pin[i].name, + pin[i].flags); + if (IS_ERR(gpios[i])) { + ret = PTR_ERR(gpios[i]); + pr_err("failed to request %s GPIO: %d\n", pin[i].name, + ret); + return ret; + } } /* Turn off the event enable for GPIO7 just to be safe */ @@ -61,12 +77,12 @@ static int dcon_init_xo_1(struct dcon_priv *dcon) dcon->pending_src = dcon->curr_src; /* Set the directions for the GPIO pins */ - gpio_direction_input(OLPC_GPIO_DCON_STAT0); - gpio_direction_input(OLPC_GPIO_DCON_STAT1); - gpio_direction_input(OLPC_GPIO_DCON_IRQ); - gpio_direction_input(OLPC_GPIO_DCON_BLANK); - gpio_direction_output(OLPC_GPIO_DCON_LOAD, - dcon->curr_src == DCON_SOURCE_CPU); + gpiod_direction_input(gpios[OLPC_DCON_STAT0]); + gpiod_direction_input(gpios[OLPC_DCON_STAT1]); + gpiod_direction_input(gpios[OLPC_DCON_IRQ]); + gpiod_direction_input(gpios[OLPC_DCON_BLANK]); + gpiod_direction_output(gpios[OLPC_DCON_LOAD], + dcon->curr_src == DCON_SOURCE_CPU); /* Set up the interrupt mappings */ @@ -84,7 +100,7 @@ static int dcon_init_xo_1(struct dcon_priv *dcon) /* Register the interrupt handler */ if (request_irq(DCON_IRQ, &dcon_interrupt, 0, "DCON", dcon)) { pr_err("failed to request DCON's irq\n"); - goto err_req_irq; + return -EIO; } /* Clear INV_EN for GPIO7 (DCONIRQ) */ @@ -125,18 +141,6 @@ static int dcon_init_xo_1(struct dcon_priv *dcon) cs5535_gpio_set(OLPC_GPIO_DCON_BLANK, GPIO_EVENTS_ENABLE); return 0; - -err_req_irq: - gpio_free(OLPC_GPIO_DCON_BLANK); -err_gp_blank: - gpio_free(OLPC_GPIO_DCON_LOAD); -err_gp_load: - gpio_free(OLPC_GPIO_DCON_IRQ); -err_gp_irq: - gpio_free(OLPC_GPIO_DCON_STAT1); -err_gp_stat1: - gpio_free(OLPC_GPIO_DCON_STAT0); - return -EIO; } static void dcon_wiggle_xo_1(void) @@ -180,13 +184,13 @@ static void dcon_wiggle_xo_1(void) static void dcon_set_dconload_1(int val) { - gpio_set_value(OLPC_GPIO_DCON_LOAD, val); + gpiod_set_value(gpios[OLPC_DCON_LOAD], val); } static int dcon_read_status_xo_1(u8 
*status) { - *status = gpio_get_value(OLPC_GPIO_DCON_STAT0); - *status |= gpio_get_value(OLPC_GPIO_DCON_STAT1) << 1; + *status = gpiod_get_value(gpios[OLPC_DCON_STAT0]); + *status |= gpiod_get_value(gpios[OLPC_DCON_STAT1]) << 1; /* Clear the negative edge status for GPIO7 */ cs5535_gpio_set(OLPC_GPIO_DCON_IRQ, GPIO_NEGATIVE_EDGE_STS); diff --git a/drivers/staging/pi433/pi433_if.c b/drivers/staging/pi433/pi433_if.c index c85a805a1243..b2314636dc89 100644 --- a/drivers/staging/pi433/pi433_if.c +++ b/drivers/staging/pi433/pi433_if.c @@ -14,16 +14,6 @@ * * Copyright (C) 2016 Wolf-Entwicklungen * Marcus Wolf - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #undef DEBUG @@ -1255,12 +1245,16 @@ static int pi433_probe(struct spi_device *spi) /* create cdev */ device->cdev = cdev_alloc(); + if (!device->cdev) { + dev_dbg(device->dev, "allocation of cdev failed"); + goto cdev_failed; + } device->cdev->owner = THIS_MODULE; cdev_init(device->cdev, &pi433_fops); retval = cdev_add(device->cdev, device->devt, 1); if (retval) { dev_dbg(device->dev, "register of cdev failed"); - goto cdev_failed; + goto del_cdev; } /* spi setup */ @@ -1268,6 +1262,8 @@ static int pi433_probe(struct spi_device *spi) return 0; +del_cdev: + cdev_del(device->cdev); cdev_failed: kthread_stop(device->tx_task_struct); send_thread_failed: diff --git a/drivers/staging/pi433/pi433_if.h b/drivers/staging/pi433/pi433_if.h index 2d4fa77c793e..9feb95c431cb 100644 --- a/drivers/staging/pi433/pi433_if.h +++ b/drivers/staging/pi433/pi433_if.h @@ -15,16 +15,6 @@ * HopeRf with a similar interace - e. g. RFM69HCW, RFM12, RFM95, ... * Copyright (C) 2016 Wolf-Entwicklungen * Marcus Wolf - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #ifndef PI433_H diff --git a/drivers/staging/pi433/rf69.c b/drivers/staging/pi433/rf69.c index 4fa6c0237e59..e19b9ce794a8 100644 --- a/drivers/staging/pi433/rf69.c +++ b/drivers/staging/pi433/rf69.c @@ -4,16 +4,6 @@ * * Copyright (C) 2016 Wolf-Entwicklungen * Marcus Wolf - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
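The pi433 probe fix above adds the missing NULL check after cdev_alloc() and a del_cdev unwind label, so a failed cdev_add() no longer leaks the allocated cdev. The general shape of that add-then-unwind error handling, as a hedged sketch (demo_* names are invented):

	#include <linux/cdev.h>
	#include <linux/errno.h>
	#include <linux/fs.h>
	#include <linux/module.h>

	struct demo_dev {
		struct cdev *cdev;
	};

	static const struct file_operations demo_fops;

	static int demo_register_cdev(struct demo_dev *d, dev_t devt)
	{
		int ret;

		d->cdev = cdev_alloc();
		if (!d->cdev)
			return -ENOMEM;		/* nothing to unwind yet */

		d->cdev->owner = THIS_MODULE;
		cdev_init(d->cdev, &demo_fops);

		ret = cdev_add(d->cdev, devt, 1);
		if (ret)
			goto del_cdev;		/* undo cdev_alloc() */

		return 0;

	del_cdev:
		cdev_del(d->cdev);
		return ret;
	}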
*/ /* enable prosa debug info */ diff --git a/drivers/staging/pi433/rf69.h b/drivers/staging/pi433/rf69.h index 319b86c14234..d43a8d87d5d3 100644 --- a/drivers/staging/pi433/rf69.h +++ b/drivers/staging/pi433/rf69.h @@ -4,16 +4,6 @@ * * Copyright (C) 2016 Wolf-Entwicklungen * Marcus Wolf - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #ifndef RF69_H #define RF69_H diff --git a/drivers/staging/pi433/rf69_enum.h b/drivers/staging/pi433/rf69_enum.h index de3b7e32dad7..3ee1952245c2 100644 --- a/drivers/staging/pi433/rf69_enum.h +++ b/drivers/staging/pi433/rf69_enum.h @@ -4,16 +4,6 @@ * * Copyright (C) 2016 Wolf-Entwicklungen * Marcus Wolf - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ #ifndef RF69_ENUM_H diff --git a/drivers/staging/pi433/rf69_registers.h b/drivers/staging/pi433/rf69_registers.h index ea19c1ca7509..f925a83c3247 100644 --- a/drivers/staging/pi433/rf69_registers.h +++ b/drivers/staging/pi433/rf69_registers.h @@ -4,16 +4,6 @@ * * Copyright (C) 2016 Wolf-Entwicklungen * Marcus Wolf - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. */ /*******************************************/ diff --git a/drivers/staging/rtl8188eu/core/rtw_ap.c b/drivers/staging/rtl8188eu/core/rtw_ap.c index 1c319c2ca86d..1f232ba6651c 100644 --- a/drivers/staging/rtl8188eu/core/rtw_ap.c +++ b/drivers/staging/rtl8188eu/core/rtw_ap.c @@ -629,7 +629,7 @@ static void start_bss_network(struct adapter *padapter, u8 *pbuf) } /* setting only at first time */ - if (pmlmepriv->cur_network.join_res != true) { + if (!pmlmepriv->cur_network.join_res) { /* WEP Key will be set before this function, do not * clear CAM. 
*/ @@ -756,7 +756,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) DBG_88E("%s, len =%d\n", __func__, len); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return _FAIL; if (len < 0 || len > MAX_IE_SZ) diff --git a/drivers/staging/rtl8188eu/core/rtw_cmd.c b/drivers/staging/rtl8188eu/core/rtw_cmd.c index 9b2a497aa413..407f65cf7150 100644 --- a/drivers/staging/rtl8188eu/core/rtw_cmd.c +++ b/drivers/staging/rtl8188eu/core/rtw_cmd.c @@ -214,11 +214,8 @@ _next: pcmdpriv->cmdthd_running = false; /* free all cmd_obj resources */ - while ((pcmd = rtw_dequeue_cmd(&pcmdpriv->cmd_queue))) { - /* DBG_88E("%s: leaving... drop cmdcode:%u\n", __func__, pcmd->cmdcode); */ - + while ((pcmd = rtw_dequeue_cmd(&pcmdpriv->cmd_queue))) rtw_free_cmd_obj(pcmd); - } complete(&pcmdpriv->terminate_cmdthread_comp); @@ -240,7 +237,7 @@ u8 rtw_sitesurvey_cmd(struct adapter *padapter, struct ndis_802_11_ssid *ssid, struct cmd_priv *pcmdpriv = &padapter->cmdpriv; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - if (check_fwstate(pmlmepriv, _FW_LINKED) == true) + if (check_fwstate(pmlmepriv, _FW_LINKED)) rtw_lps_ctrl_wk_cmd(padapter, LPS_CTRL_SCAN, 1); ph2c = kzalloc(sizeof(*ph2c), GFP_ATOMIC); @@ -259,7 +256,6 @@ u8 rtw_sitesurvey_cmd(struct adapter *padapter, struct ndis_802_11_ssid *ssid, init_h2fwcmd_w_parm_no_rsp(ph2c, psurveyPara, _SiteSurvey_CMD_); - /* psurveyPara->bsslimit = 48; */ psurveyPara->scan_mode = pmlmepriv->scan_mode; /* prepare ssid list */ @@ -294,7 +290,7 @@ u8 rtw_sitesurvey_cmd(struct adapter *padapter, struct ndis_802_11_ssid *ssid, mod_timer(&pmlmepriv->scan_to_timer, jiffies + msecs_to_jiffies(SCANNING_TIMEOUT)); - LedControl8188eu(padapter, LED_CTL_SITE_SURVEY); + led_control_8188eu(padapter, LED_CTL_SITE_SURVEY); pmlmepriv->scan_interval = SCAN_INTERVAL;/* 30*2 sec = 60sec */ } else { @@ -318,7 +314,7 @@ u8 rtw_createbss_cmd(struct adapter *padapter) struct wlan_bssid_ex *pdev_network = &padapter->registrypriv.dev_network; u8 res = _SUCCESS; - LedControl8188eu(padapter, LED_CTL_START_TO_LINK); + led_control_8188eu(padapter, LED_CTL_START_TO_LINK); if (pmlmepriv->assoc_ssid.SsidLength == 0) RT_TRACE(_module_rtl871x_cmd_c_, _drv_info_, (" createbss for Any SSid:%s\n", pmlmepriv->assoc_ssid.Ssid)); @@ -360,7 +356,7 @@ u8 rtw_joinbss_cmd(struct adapter *padapter, struct wlan_network *pnetwork) struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; - LedControl8188eu(padapter, LED_CTL_START_TO_LINK); + led_control_8188eu(padapter, LED_CTL_START_TO_LINK); if (pmlmepriv->assoc_ssid.SsidLength == 0) RT_TRACE(_module_rtl871x_cmd_c_, _drv_info_, ("+Join cmd: Any SSid\n")); @@ -660,9 +656,6 @@ u8 rtw_addbareq_cmd(struct adapter *padapter, u8 tid, u8 *addr) init_h2fwcmd_w_parm_no_rsp(ph2c, paddbareq_parm, _AddBAReq_CMD_); - /* DBG_88E("rtw_addbareq_cmd, tid =%d\n", tid); */ - - /* rtw_enqueue_cmd(pcmdpriv, ph2c); */ res = rtw_enqueue_cmd(pcmdpriv, ph2c); exit: @@ -696,7 +689,6 @@ u8 rtw_dynamic_chk_wk_cmd(struct adapter *padapter) init_h2fwcmd_w_parm_no_rsp(ph2c, pdrvextra_cmd_parm, _Set_Drv_Extra_CMD_); - /* rtw_enqueue_cmd(pcmdpriv, ph2c); */ res = rtw_enqueue_cmd(pcmdpriv, ph2c); exit: return res; @@ -820,7 +812,7 @@ static void dynamic_chk_wk_hdl(struct adapter *padapter, u8 *pbuf, int sz) pmlmepriv = &padapter->mlmepriv; #ifdef CONFIG_88EU_AP_MODE - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) == true) + if (check_fwstate(pmlmepriv, WIFI_AP_STATE)) 
expire_timeout_chk(padapter); #endif @@ -836,13 +828,13 @@ static void lps_ctrl_wk_hdl(struct adapter *padapter, u8 lps_ctrl_type) struct mlme_priv *pmlmepriv = &padapter->mlmepriv; u8 mstatus; - if ((check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) == true) || - (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true)) + if (check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) || + check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)) return; switch (lps_ctrl_type) { case LPS_CTRL_SCAN: - if (check_fwstate(pmlmepriv, _FW_LINKED) == true) { + if (check_fwstate(pmlmepriv, _FW_LINKED)) { /* connect */ LPS_Leave(padapter); } @@ -862,7 +854,6 @@ static void lps_ctrl_wk_hdl(struct adapter *padapter, u8 lps_ctrl_type) rtw_hal_set_hwreg(padapter, HW_VAR_H2C_FW_JOINBSSRPT, (u8 *)(&mstatus)); break; case LPS_CTRL_SPECIAL_PACKET: - /* DBG_88E("LPS_CTRL_SPECIAL_PACKET\n"); */ pwrpriv->DelayLPSLastTimeStamp = jiffies; LPS_Leave(padapter); break; @@ -879,7 +870,6 @@ u8 rtw_lps_ctrl_wk_cmd(struct adapter *padapter, u8 lps_ctrl_type, u8 enqueue) struct cmd_obj *ph2c; struct drvextra_cmd_parm *pdrvextra_cmd_parm; struct cmd_priv *pcmdpriv = &padapter->cmdpriv; - /* struct pwrctrl_priv *pwrctrlpriv = &padapter->pwrctrlpriv; */ u8 res = _SUCCESS; if (enqueue) { @@ -1029,9 +1019,6 @@ static void rtw_chk_hi_queue_hdl(struct adapter *padapter) if (psta_bmc->sleepq_len == 0) { u8 val = 0; - /* while ((rtw_read32(padapter, 0x414)&0x00ffff00)!= 0) */ - /* while ((rtw_read32(padapter, 0x414)&0x0000ff00)!= 0) */ - rtw_hal_get_hwreg(padapter, HW_VAR_CHK_HI_QUEUE_EMPTY, &val); while (!val) { diff --git a/drivers/staging/rtl8188eu/core/rtw_led.c b/drivers/staging/rtl8188eu/core/rtw_led.c index a2e7789aecbd..d1406cc99768 100644 --- a/drivers/staging/rtl8188eu/core/rtw_led.c +++ b/drivers/staging/rtl8188eu/core/rtw_led.c @@ -33,7 +33,7 @@ void BlinkWorkItemCallback(struct work_struct *work) struct LED_871x *pLed = container_of(work, struct LED_871x, BlinkWorkItem); - BlinkHandler(pLed); + blink_handler(pLed); } /* */ @@ -94,17 +94,17 @@ static void SwLedBlink1(struct LED_871x *pLed) /* Change LED according to BlinkingLedState specified. */ if (pLed->BlinkingLedState == RTW_LED_ON) { - SwLedOn(padapter, pLed); + sw_led_on(padapter, pLed); RT_TRACE(_module_rtl8712_led_c_, _drv_info_, ("Blinktimes (%d): turn on\n", pLed->BlinkTimes)); } else { - SwLedOff(padapter, pLed); + sw_led_off(padapter, pLed); RT_TRACE(_module_rtl8712_led_c_, _drv_info_, ("Blinktimes (%d): turn off\n", pLed->BlinkTimes)); } if (padapter->pwrctrlpriv.rf_pwrstate != rf_on) { - SwLedOff(padapter, pLed); + sw_led_off(padapter, pLed); ResetLedStatus(pLed); return; } @@ -240,7 +240,7 @@ static void SwLedBlink1(struct LED_871x *pLed) static void SwLedControlMode1(struct adapter *padapter, enum LED_CTL_MODE LedAction) { struct led_priv *ledpriv = &padapter->ledpriv; - struct LED_871x *pLed = &ledpriv->SwLed0; + struct LED_871x *pLed = &ledpriv->sw_led; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; switch (LedAction) { @@ -445,7 +445,7 @@ static void SwLedControlMode1(struct adapter *padapter, enum LED_CTL_MODE LedAct del_timer_sync(&pLed->BlinkTimer); pLed->bLedScanBlinkInProgress = false; } - SwLedOff(padapter, pLed); + sw_led_off(padapter, pLed); break; default: break; @@ -455,11 +455,7 @@ static void SwLedControlMode1(struct adapter *padapter, enum LED_CTL_MODE LedAct ("Led %d\n", pLed->CurrLedState)); } -/* */ -/* Description: */ -/* Handler function of LED Blinking. 
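BlinkWorkItemCallback() in the rtw_led.c hunk above is the standard work-queue idiom: the work_struct is embedded in the owning object, and the handler recovers that object with container_of(). A compact sketch under invented demo_* names:

	#include <linux/kernel.h>
	#include <linux/workqueue.h>

	struct demo_led {
		int on;
		struct work_struct blink_work;
	};

	static void demo_blink(struct work_struct *work)
	{
		struct demo_led *led =
			container_of(work, struct demo_led, blink_work);

		led->on = !led->on;	/* act on the recovered object */
	}

	/* setup elsewhere:
	 *	INIT_WORK(&led->blink_work, demo_blink);
	 *	schedule_work(&led->blink_work);
	 */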
*/ -/* */ -void BlinkHandler(struct LED_871x *pLed) +void blink_handler(struct LED_871x *pLed) { struct adapter *padapter = pLed->padapter; @@ -469,7 +465,7 @@ void BlinkHandler(struct LED_871x *pLed) SwLedBlink1(pLed); } -void LedControl8188eu(struct adapter *padapter, enum LED_CTL_MODE LedAction) +void led_control_8188eu(struct adapter *padapter, enum LED_CTL_MODE LedAction) { if (padapter->bSurpriseRemoved || padapter->bDriverStopped || !padapter->hw_init_completed) diff --git a/drivers/staging/rtl8188eu/core/rtw_mlme.c b/drivers/staging/rtl8188eu/core/rtw_mlme.c index b9bd864f323c..714f7a70ed64 100644 --- a/drivers/staging/rtl8188eu/core/rtw_mlme.c +++ b/drivers/staging/rtl8188eu/core/rtw_mlme.c @@ -24,11 +24,11 @@ extern const u8 MCS_rate_1R[16]; int rtw_init_mlme_priv(struct adapter *padapter) { - int i; - u8 *pbuf; - struct wlan_network *pnetwork; - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - int res = _SUCCESS; + int i; + u8 *pbuf; + struct wlan_network *pnetwork; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + int res = _SUCCESS; /* We don't need to memset padapter->XXX to zero, because adapter is allocated by vzalloc(). */ @@ -39,9 +39,9 @@ int rtw_init_mlme_priv(struct adapter *padapter) pmlmepriv->cur_network.network.InfrastructureMode = Ndis802_11AutoUnknown; pmlmepriv->scan_mode = SCAN_ACTIVE;/* 1: active, 0: pasive. Maybe someday we should rename this varable to "active_mode" (Jeff) */ - spin_lock_init(&(pmlmepriv->lock)); - _rtw_init_queue(&(pmlmepriv->free_bss_pool)); - _rtw_init_queue(&(pmlmepriv->scanned_queue)); + spin_lock_init(&pmlmepriv->lock); + _rtw_init_queue(&pmlmepriv->free_bss_pool); + _rtw_init_queue(&pmlmepriv->scanned_queue); memset(&pmlmepriv->assoc_ssid, 0, sizeof(struct ndis_802_11_ssid)); @@ -56,9 +56,9 @@ int rtw_init_mlme_priv(struct adapter *padapter) pnetwork = (struct wlan_network *)pbuf; for (i = 0; i < MAX_BSS_CNT; i++) { - INIT_LIST_HEAD(&(pnetwork->list)); + INIT_LIST_HEAD(&pnetwork->list); - list_add_tail(&(pnetwork->list), &(pmlmepriv->free_bss_pool.queue)); + list_add_tail(&pnetwork->list, &pmlmepriv->free_bss_pool.queue); pnetwork++; } @@ -137,7 +137,7 @@ static void _rtw_free_network(struct mlme_priv *pmlmepriv, struct wlan_network * unsigned long curr_time; u32 delta_time; u32 lifetime = SCANQUEUE_LIFETIME; - struct __queue *free_queue = &(pmlmepriv->free_bss_pool); + struct __queue *free_queue = &pmlmepriv->free_bss_pool; if (!pnetwork) return; @@ -154,21 +154,21 @@ static void _rtw_free_network(struct mlme_priv *pmlmepriv, struct wlan_network * return; } spin_lock_bh(&free_queue->lock); - list_del_init(&(pnetwork->list)); - list_add_tail(&(pnetwork->list), &(free_queue->queue)); + list_del_init(&pnetwork->list); + list_add_tail(&pnetwork->list, &free_queue->queue); spin_unlock_bh(&free_queue->lock); } void _rtw_free_network_nolock(struct mlme_priv *pmlmepriv, struct wlan_network *pnetwork) { - struct __queue *free_queue = &(pmlmepriv->free_bss_pool); + struct __queue *free_queue = &pmlmepriv->free_bss_pool; if (!pnetwork) return; if (pnetwork->fixed) return; - list_del_init(&(pnetwork->list)); - list_add_tail(&(pnetwork->list), get_list_head(free_queue)); + list_del_init(&pnetwork->list); + list_add_tail(&pnetwork->list, get_list_head(free_queue)); } /* @@ -179,7 +179,7 @@ void _rtw_free_network_nolock(struct mlme_priv *pmlmepriv, struct wlan_network * struct wlan_network *rtw_find_network(struct __queue *scanned_queue, u8 *addr) { struct list_head *phead, *plist; - struct wlan_network *pnetwork = NULL; + struct 
wlan_network *pnetwork = NULL; u8 zero_addr[ETH_ALEN] = {0, 0, 0, 0, 0, 0}; if (!memcmp(zero_addr, addr, ETH_ALEN)) { @@ -230,8 +230,9 @@ int rtw_if_up(struct adapter *padapter) if (padapter->bDriverStopped || padapter->bSurpriseRemoved || !check_fwstate(&padapter->mlmepriv, _FW_LINKED)) { RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, - ("rtw_if_up:bDriverStopped(%d) OR bSurpriseRemoved(%d)", - padapter->bDriverStopped, padapter->bSurpriseRemoved)); + ("%s:bDriverStopped(%d) OR bSurpriseRemoved(%d)", + __func__, padapter->bDriverStopped, + padapter->bSurpriseRemoved)); res = false; } else { res = true; @@ -258,7 +259,7 @@ u8 *rtw_get_capability_from_ie(u8 *ie) u16 rtw_get_capability(struct wlan_bssid_ex *bss) { - __le16 val; + __le16 val; memcpy((u8 *)&val, rtw_get_capability_from_ie(bss->ies), 2); @@ -315,19 +316,19 @@ int is_same_network(struct wlan_bssid_ex *src, struct wlan_bssid_ex *dst) d_cap = le16_to_cpu(le_dcap); return ((src->Ssid.SsidLength == dst->Ssid.SsidLength) && - ((!memcmp(src->MacAddress, dst->MacAddress, ETH_ALEN)) == true) && - ((!memcmp(src->Ssid.Ssid, dst->Ssid.Ssid, src->Ssid.SsidLength)) == true) && + (!memcmp(src->MacAddress, dst->MacAddress, ETH_ALEN)) && + (!memcmp(src->Ssid.Ssid, dst->Ssid.Ssid, src->Ssid.SsidLength)) && ((s_cap & WLAN_CAPABILITY_IBSS) == (d_cap & WLAN_CAPABILITY_IBSS)) && ((s_cap & WLAN_CAPABILITY_ESS) == (d_cap & WLAN_CAPABILITY_ESS))); } -struct wlan_network *rtw_get_oldest_wlan_network(struct __queue *scanned_queue) +struct wlan_network *rtw_get_oldest_wlan_network(struct __queue *scanned_queue) { struct list_head *plist, *phead; - struct wlan_network *pwlan = NULL; - struct wlan_network *oldest = NULL; + struct wlan_network *pwlan = NULL; + struct wlan_network *oldest = NULL; phead = get_list_head(scanned_queue); @@ -354,7 +355,8 @@ void update_network(struct wlan_bssid_ex *dst, struct wlan_bssid_ex *src, rtw_hal_antdiv_rssi_compared(padapter, dst, src); /* this will update src.Rssi, need consider again */ /* The rule below is 1/5 for sample value, 4/5 for history value */ - if (check_fwstate(&padapter->mlmepriv, _FW_LINKED) && is_same_network(&(padapter->mlmepriv.cur_network.network), src)) { + if (check_fwstate(&padapter->mlmepriv, _FW_LINKED) && + is_same_network(&padapter->mlmepriv.cur_network.network, src)) { /* Take the recvpriv's value for the connected AP*/ ss_final = padapter->recvpriv.signal_strength; sq_final = padapter->recvpriv.signal_qual; @@ -384,11 +386,11 @@ void update_network(struct wlan_bssid_ex *dst, struct wlan_bssid_ex *src, static void update_current_network(struct adapter *adapter, struct wlan_bssid_ex *pnetwork) { - struct mlme_priv *pmlmepriv = &(adapter->mlmepriv); + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; - if ((check_fwstate(pmlmepriv, _FW_LINKED) == true) && - (is_same_network(&(pmlmepriv->cur_network.network), pnetwork))) { - update_network(&(pmlmepriv->cur_network.network), pnetwork, adapter, true); + if (check_fwstate(pmlmepriv, _FW_LINKED) && + is_same_network(&pmlmepriv->cur_network.network, pnetwork)) { + update_network(&pmlmepriv->cur_network.network, pnetwork, adapter, true); rtw_update_protection(adapter, (pmlmepriv->cur_network.network.ies) + sizeof(struct ndis_802_11_fixed_ie), pmlmepriv->cur_network.network.ie_length); } @@ -400,20 +402,20 @@ static void update_current_network(struct adapter *adapter, struct wlan_bssid_ex void rtw_update_scanned_network(struct adapter *adapter, struct wlan_bssid_ex *target) { struct list_head *plist, *phead; - u32 bssid_ex_sz; - struct mlme_priv 
*pmlmepriv = &(adapter->mlmepriv); - struct __queue *queue = &(pmlmepriv->scanned_queue); - struct wlan_network *pnetwork = NULL; - struct wlan_network *oldest = NULL; + u32 bssid_ex_sz; + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct __queue *queue = &pmlmepriv->scanned_queue; + struct wlan_network *pnetwork = NULL; + struct wlan_network *oldest = NULL; spin_lock_bh(&queue->lock); phead = get_list_head(queue); plist = phead->next; while (phead != plist) { - pnetwork = container_of(plist, struct wlan_network, list); + pnetwork = container_of(plist, struct wlan_network, list); - if (is_same_network(&(pnetwork->network), target)) + if (is_same_network(&pnetwork->network, target)) break; if ((oldest == ((struct wlan_network *)0)) || time_after(oldest->last_scanned, pnetwork->last_scanned)) @@ -424,12 +426,14 @@ void rtw_update_scanned_network(struct adapter *adapter, struct wlan_bssid_ex *t * with this beacon's information */ if (phead == plist) { - if (list_empty(&(pmlmepriv->free_bss_pool.queue))) { + if (list_empty(&pmlmepriv->free_bss_pool.queue)) { /* If there are no more slots, expire the oldest */ pnetwork = oldest; - rtw_hal_get_def_var(adapter, HAL_DEF_CURRENT_ANTENNA, &(target->PhyInfo.Optimum_antenna)); - memcpy(&(pnetwork->network), target, get_wlan_bssid_ex_sz(target)); + rtw_hal_get_def_var(adapter, HAL_DEF_CURRENT_ANTENNA, + &target->PhyInfo.Optimum_antenna); + memcpy(&pnetwork->network, target, + get_wlan_bssid_ex_sz(target)); /* variable initialize */ pnetwork->fixed = false; pnetwork->last_scanned = jiffies; @@ -447,26 +451,28 @@ void rtw_update_scanned_network(struct adapter *adapter, struct wlan_bssid_ex *t pnetwork = rtw_alloc_network(pmlmepriv); /* will update scan_time */ if (!pnetwork) { - RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("\n\n\nsomething wrong here\n\n\n")); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, + ("\n\n\nsomething wrong here\n\n\n")); goto exit; } bssid_ex_sz = get_wlan_bssid_ex_sz(target); target->Length = bssid_ex_sz; - rtw_hal_get_def_var(adapter, HAL_DEF_CURRENT_ANTENNA, &(target->PhyInfo.Optimum_antenna)); - memcpy(&(pnetwork->network), target, bssid_ex_sz); + rtw_hal_get_def_var(adapter, HAL_DEF_CURRENT_ANTENNA, + &target->PhyInfo.Optimum_antenna); + memcpy(&pnetwork->network, target, bssid_ex_sz); pnetwork->last_scanned = jiffies; /* bss info not receiving from the right channel */ if (pnetwork->network.PhyInfo.SignalQuality == 101) pnetwork->network.PhyInfo.SignalQuality = 0; - list_add_tail(&(pnetwork->list), &(queue->queue)); + list_add_tail(&pnetwork->list, &queue->queue); } } else { - /* we have an entry and we are going to update it. But this entry may - * be already expired. In this case we do the same as we found a new - * net and call the new_net handler + /* we have an entry and we are going to update it. But this + * entry may be already expired. 
In this case we do the same + * as we found a new net and call the new_net handler */ bool update_ie = true; @@ -476,7 +482,7 @@ void rtw_update_scanned_network(struct adapter *adapter, struct wlan_bssid_ex *t if ((pnetwork->network.ie_length > target->ie_length) && (target->Reserved[0] == 1)) update_ie = false; - update_network(&(pnetwork->network), target, adapter, update_ie); + update_network(&pnetwork->network, target, adapter, update_ie); } exit: @@ -529,7 +535,7 @@ static int rtw_is_desired_network(struct adapter *adapter, struct wlan_network * bselected = false; } - if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)) { if (pnetwork->network.InfrastructureMode != pmlmepriv->cur_network.network.InfrastructureMode) bselected = false; } @@ -547,26 +553,28 @@ void rtw_survey_event_callback(struct adapter *adapter, u8 *pbuf) { u32 len; struct wlan_bssid_ex *pnetwork; - struct mlme_priv *pmlmepriv = &(adapter->mlmepriv); + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; pnetwork = (struct wlan_bssid_ex *)pbuf; - RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("rtw_survey_event_callback, ssid=%s\n", pnetwork->Ssid.Ssid)); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, + ("%s, ssid=%s\n", __func__, pnetwork->Ssid.Ssid)); len = get_wlan_bssid_ex_sz(pnetwork); if (len > (sizeof(struct wlan_bssid_ex))) { - RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("\n****rtw_survey_event_callback: return a wrong bss ***\n")); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, + ("\n****%s: return a wrong bss ***\n", __func__)); return; } spin_lock_bh(&pmlmepriv->lock); /* update IBSS_network 's timestamp */ - if ((check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE)) == true) { - if (!memcmp(&(pmlmepriv->cur_network.network.MacAddress), pnetwork->MacAddress, ETH_ALEN)) { + if (check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE)) { + if (!memcmp(&pmlmepriv->cur_network.network.MacAddress, pnetwork->MacAddress, ETH_ALEN)) { struct wlan_network *ibss_wlan = NULL; memcpy(pmlmepriv->cur_network.network.ies, pnetwork->ies, 8); - spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); + spin_lock_bh(&pmlmepriv->scanned_queue.lock); ibss_wlan = rtw_find_network(&pmlmepriv->scanned_queue, pnetwork->MacAddress); if (ibss_wlan) { memcpy(ibss_wlan->network.ies, pnetwork->ies, 8); @@ -585,14 +593,12 @@ void rtw_survey_event_callback(struct adapter *adapter, u8 *pbuf) } exit: - spin_unlock_bh(&pmlmepriv->lock); - return; } void rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf) { - struct mlme_priv *pmlmepriv = &(adapter->mlmepriv); + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; spin_lock_bh(&pmlmepriv->lock); @@ -602,7 +608,8 @@ void rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf) pmlmepriv->wps_probe_req_ie = NULL; } - RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("rtw_surveydone_event_callback: fw_state:%x\n\n", get_fwstate(pmlmepriv))); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, + ("%s: fw_state:%x\n\n", __func__, get_fwstate(pmlmepriv))); if (check_fwstate(pmlmepriv, _FW_UNDER_SURVEY)) { del_timer_sync(&pmlmepriv->scan_to_timer); @@ -614,7 +621,7 @@ void rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf) rtw_set_signal_stat_timer(&adapter->recvpriv); if (pmlmepriv->to_join) { - if ((check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true)) { + if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)) { if (!check_fwstate(pmlmepriv, _FW_LINKED)) { set_fwstate(pmlmepriv, _FW_UNDER_LINKING); @@ -622,7 +629,7 @@ void 
rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf) mod_timer(&pmlmepriv->assoc_timer, jiffies + msecs_to_jiffies(MAX_JOIN_TIMEOUT)); } else { - struct wlan_bssid_ex *pdev_network = &(adapter->registrypriv.dev_network); + struct wlan_bssid_ex *pdev_network = &adapter->registrypriv.dev_network; u8 *pibss = adapter->registrypriv.dev_network.MacAddress; _clr_fwstate_(pmlmepriv, _FW_UNDER_SURVEY); @@ -691,7 +698,7 @@ static void free_scanqueue(struct mlme_priv *pmlmepriv) struct __queue *scan_queue = &pmlmepriv->scanned_queue; struct list_head *plist, *phead, *ptemp; - RT_TRACE(_module_rtl871x_mlme_c_, _drv_notice_, ("+free_scanqueue\n")); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_notice_, ("+%s\n", __func__)); spin_lock_bh(&scan_queue->lock); spin_lock_bh(&free_queue->lock); @@ -714,7 +721,7 @@ static void free_scanqueue(struct mlme_priv *pmlmepriv) */ void rtw_free_assoc_resources(struct adapter *adapter) { - struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; spin_lock_bh(&pmlmepriv->scanned_queue.lock); rtw_free_assoc_resources_locked(adapter); @@ -727,8 +734,8 @@ void rtw_free_assoc_resources(struct adapter *adapter) void rtw_free_assoc_resources_locked(struct adapter *adapter) { struct wlan_network *pwlan = NULL; - struct mlme_priv *pmlmepriv = &adapter->mlmepriv; - struct sta_priv *pstapriv = &adapter->stapriv; + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct sta_priv *pstapriv = &adapter->stapriv; struct wlan_network *tgt_network = &pmlmepriv->cur_network; RT_TRACE(_module_rtl871x_mlme_c_, _drv_notice_, ("+rtw_free_assoc_resources\n")); @@ -741,7 +748,7 @@ void rtw_free_assoc_resources_locked(struct adapter *adapter) psta = rtw_get_stainfo(&adapter->stapriv, tgt_network->network.MacAddress); - spin_lock_bh(&(pstapriv->sta_hash_lock)); + spin_lock_bh(&pstapriv->sta_hash_lock); rtw_free_stainfo(adapter, psta); spin_unlock_bh(&pstapriv->sta_hash_lock); } @@ -752,7 +759,7 @@ void rtw_free_assoc_resources_locked(struct adapter *adapter) rtw_free_all_stainfo(adapter); psta = rtw_get_bcmc_stainfo(adapter); - spin_lock_bh(&(pstapriv->sta_hash_lock)); + spin_lock_bh(&pstapriv->sta_hash_lock); rtw_free_stainfo(adapter, psta); spin_unlock_bh(&pstapriv->sta_hash_lock); @@ -776,7 +783,7 @@ void rtw_free_assoc_resources_locked(struct adapter *adapter) */ void rtw_indicate_connect(struct adapter *padapter) { - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("+%s\n", __func__)); @@ -785,7 +792,7 @@ void rtw_indicate_connect(struct adapter *padapter) if (!check_fwstate(&padapter->mlmepriv, _FW_LINKED)) { set_fwstate(pmlmepriv, _FW_LINKED); - LedControl8188eu(padapter, LED_CTL_LINK); + led_control_8188eu(padapter, LED_CTL_LINK); rtw_os_indicate_connect(padapter); } @@ -802,9 +809,9 @@ void rtw_indicate_connect(struct adapter *padapter) */ void rtw_indicate_disconnect(struct adapter *padapter) { - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("+rtw_indicate_disconnect\n")); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("+%s\n", __func__)); _clr_fwstate_(pmlmepriv, _FW_UNDER_LINKING | WIFI_UNDER_WPS); @@ -816,7 +823,7 @@ void rtw_indicate_disconnect(struct adapter *padapter) rtw_os_indicate_disconnect(padapter); _clr_fwstate_(pmlmepriv, _FW_LINKED); - LedControl8188eu(padapter, LED_CTL_NO_LINK); + led_control_8188eu(padapter, 
LED_CTL_NO_LINK); rtw_clear_scan_deny(padapter); } @@ -900,8 +907,8 @@ static struct sta_info *rtw_joinbss_update_stainfo(struct adapter *padapter, str /* ptarget_wlan: found from scanned_queue */ static void rtw_joinbss_update_network(struct adapter *padapter, struct wlan_network *ptarget_wlan, struct wlan_network *pnetwork) { - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct wlan_network *cur_network = &(pmlmepriv->cur_network); + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct wlan_network *cur_network = &pmlmepriv->cur_network; DBG_88E("%s\n", __func__); @@ -957,12 +964,12 @@ static void rtw_joinbss_update_network(struct adapter *padapter, struct wlan_net void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) { struct sta_info *ptarget_sta = NULL, *pcur_sta = NULL; - struct sta_priv *pstapriv = &adapter->stapriv; - struct mlme_priv *pmlmepriv = &(adapter->mlmepriv); - struct wlan_network *pnetwork = (struct wlan_network *)pbuf; - struct wlan_network *cur_network = &(pmlmepriv->cur_network); - struct wlan_network *pcur_wlan = NULL, *ptarget_wlan = NULL; - unsigned int the_same_macaddr = false; + struct sta_priv *pstapriv = &adapter->stapriv; + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct wlan_network *pnetwork = (struct wlan_network *)pbuf; + struct wlan_network *cur_network = &pmlmepriv->cur_network; + struct wlan_network *pcur_wlan = NULL, *ptarget_wlan = NULL; + unsigned int the_same_macaddr = false; RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("joinbss event call back received with res=%d\n", pnetwork->join_res)); @@ -986,7 +993,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("\nrtw_joinbss_event_callback!! _enter_critical\n")); if (pnetwork->join_res > 0) { - spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); + spin_lock_bh(&pmlmepriv->scanned_queue.lock); if (check_fwstate(pmlmepriv, _FW_UNDER_LINKING)) { /* s1. find ptarget_wlan */ if (check_fwstate(pmlmepriv, _FW_LINKED)) { @@ -999,20 +1006,20 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) pcur_sta = rtw_get_stainfo(pstapriv, cur_network->network.MacAddress); if (pcur_sta) { - spin_lock_bh(&(pstapriv->sta_hash_lock)); + spin_lock_bh(&pstapriv->sta_hash_lock); rtw_free_stainfo(adapter, pcur_sta); spin_unlock_bh(&pstapriv->sta_hash_lock); } ptarget_wlan = rtw_find_network(&pmlmepriv->scanned_queue, pnetwork->network.MacAddress); - if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_STATION_STATE)) { if (ptarget_wlan) ptarget_wlan->fixed = true; } } } else { ptarget_wlan = rtw_find_network(&pmlmepriv->scanned_queue, pnetwork->network.MacAddress); - if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_STATION_STATE)) { if (ptarget_wlan) ptarget_wlan->fixed = true; } @@ -1028,7 +1035,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) } /* s3. find ptarget_sta & update ptarget_sta after update cur_network only for station mode */ - if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_STATION_STATE)) { ptarget_sta = rtw_joinbss_update_stainfo(adapter, pnetwork); if (!ptarget_sta) { RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("Can't update stainfo when joinbss_event callback\n")); @@ -1038,7 +1045,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) } /* s4. 
indicate connect */ - if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_STATION_STATE)) { rtw_indicate_connect(adapter); } else { /* adhoc mode will rtw_indicate_connect when rtw_stassoc_event_callback */ @@ -1063,7 +1070,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) mod_timer(&pmlmepriv->assoc_timer, jiffies + msecs_to_jiffies(1)); - if ((check_fwstate(pmlmepriv, _FW_UNDER_LINKING)) == true) { + if (check_fwstate(pmlmepriv, _FW_UNDER_LINKING)) { RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("fail! clear _FW_UNDER_LINKING ^^^fw_state=%x\n", get_fwstate(pmlmepriv))); _clr_fwstate_(pmlmepriv, _FW_UNDER_LINKING); } @@ -1079,7 +1086,7 @@ ignore_joinbss_callback: void rtw_joinbss_event_callback(struct adapter *adapter, u8 *pbuf) { - struct wlan_network *pnetwork = (struct wlan_network *)pbuf; + struct wlan_network *pnetwork = (struct wlan_network *)pbuf; mlmeext_joinbss_event_callback(adapter, pnetwork->join_res); @@ -1091,11 +1098,11 @@ static u8 search_max_mac_id(struct adapter *padapter) u8 mac_id; #if defined(CONFIG_88EU_AP_MODE) u8 aid; - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; struct sta_priv *pstapriv = &padapter->stapriv; #endif - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; #if defined(CONFIG_88EU_AP_MODE) if (check_fwstate(pmlmepriv, WIFI_AP_STATE)) { @@ -1133,10 +1140,10 @@ void rtw_stassoc_hw_rpt(struct adapter *adapter, struct sta_info *psta) void rtw_stassoc_event_callback(struct adapter *adapter, u8 *pbuf) { struct sta_info *psta; - struct mlme_priv *pmlmepriv = &(adapter->mlmepriv); - struct stassoc_event *pstassoc = (struct stassoc_event *)pbuf; - struct wlan_network *cur_network = &(pmlmepriv->cur_network); - struct wlan_network *ptarget_wlan = NULL; + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct stassoc_event *pstassoc = (struct stassoc_event *)pbuf; + struct wlan_network *cur_network = &pmlmepriv->cur_network; + struct wlan_network *ptarget_wlan = NULL; if (!rtw_access_ctrl(adapter, pstassoc->macaddr)) return; @@ -1155,12 +1162,15 @@ void rtw_stassoc_event_callback(struct adapter *adapter, u8 *pbuf) psta = rtw_get_stainfo(&adapter->stapriv, pstassoc->macaddr); if (psta) { /* the sta have been in sta_info_queue => do nothing */ - RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("Error: rtw_stassoc_event_callback: sta has been in sta_hash_queue\n")); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, + ("Error: %s: sta has been in sta_hash_queue\n", + __func__)); return; /* between drv has received this event before and fw have not yet to set key to CAM_ENTRY) */ } psta = rtw_alloc_stainfo(&adapter->stapriv, pstassoc->macaddr); if (!psta) { - RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("Can't alloc sta_info when rtw_stassoc_event_callback\n")); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, + ("Can't alloc sta_info when %s\n", __func__)); return; } /* to do: init sta_info variable */ @@ -1177,7 +1187,7 @@ void rtw_stassoc_event_callback(struct adapter *adapter, u8 *pbuf) if ((check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE)) || (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE))) { if (adapter->stapriv.asoc_sta_count == 2) { - spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); + spin_lock_bh(&pmlmepriv->scanned_queue.lock); ptarget_wlan = 
rtw_find_network(&pmlmepriv->scanned_queue, cur_network->network.MacAddress); if (ptarget_wlan) ptarget_wlan->fixed = true; @@ -1197,10 +1207,10 @@ void rtw_stadel_event_callback(struct adapter *adapter, u8 *pbuf) struct wlan_network *pwlan = NULL; struct wlan_bssid_ex *pdev_network = NULL; u8 *pibss = NULL; - struct mlme_priv *pmlmepriv = &(adapter->mlmepriv); - struct stadel_event *pstadel = (struct stadel_event *)pbuf; - struct sta_priv *pstapriv = &adapter->stapriv; - struct wlan_network *tgt_network = &(pmlmepriv->cur_network); + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct stadel_event *pstadel = (struct stadel_event *)pbuf; + struct sta_priv *pstapriv = &adapter->stapriv; + struct wlan_network *tgt_network = &pmlmepriv->cur_network; psta = rtw_get_stainfo(&adapter->stapriv, pstadel->macaddr); if (psta) @@ -1238,7 +1248,7 @@ void rtw_stadel_event_callback(struct adapter *adapter, u8 *pbuf) rtw_free_assoc_resources(adapter); rtw_indicate_disconnect(adapter); - spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); + spin_lock_bh(&pmlmepriv->scanned_queue.lock); /* remove the network entry in scanned_queue */ pwlan = rtw_find_network(&pmlmepriv->scanned_queue, tgt_network->network.MacAddress); if (pwlan) { @@ -1250,12 +1260,12 @@ void rtw_stadel_event_callback(struct adapter *adapter, u8 *pbuf) } if (check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) || check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)) { - spin_lock_bh(&(pstapriv->sta_hash_lock)); + spin_lock_bh(&pstapriv->sta_hash_lock); rtw_free_stainfo(adapter, psta); spin_unlock_bh(&pstapriv->sta_hash_lock); if (adapter->stapriv.asoc_sta_count == 1) { /* a sta + bc/mc_stainfo (not Ibss_stainfo) */ - spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); + spin_lock_bh(&pmlmepriv->scanned_queue.lock); /* free old ibss network */ pwlan = rtw_find_network(&pmlmepriv->scanned_queue, tgt_network->network.MacAddress); if (pwlan) { @@ -1264,7 +1274,7 @@ void rtw_stadel_event_callback(struct adapter *adapter, u8 *pbuf) } spin_unlock_bh(&pmlmepriv->scanned_queue.lock); /* re-create ibss */ - pdev_network = &(adapter->registrypriv.dev_network); + pdev_network = &adapter->registrypriv.dev_network; pibss = adapter->registrypriv.dev_network.MacAddress; memcpy(pdev_network, &tgt_network->network, get_wlan_bssid_ex_sz(&tgt_network->network)); @@ -1289,7 +1299,7 @@ void rtw_stadel_event_callback(struct adapter *adapter, u8 *pbuf) void rtw_cpwm_event_callback(struct adapter *padapter, u8 *pbuf) { - RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("+rtw_cpwm_event_callback !!!\n")); + RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("+%s !!!\n", __func__)); } /* @@ -1298,9 +1308,8 @@ void rtw_cpwm_event_callback(struct adapter *padapter, u8 *pbuf) */ void _rtw_join_timeout_handler (struct timer_list *t) { - struct adapter *adapter = - from_timer(adapter, t, mlmepriv.assoc_timer); - struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct adapter *adapter = from_timer(adapter, t, mlmepriv.assoc_timer); + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; int do_join_r; DBG_88E("%s, fw_state=%x\n", __func__, get_fwstate(pmlmepriv)); @@ -1340,9 +1349,9 @@ void _rtw_join_timeout_handler (struct timer_list *t) */ void rtw_scan_timeout_handler (struct timer_list *t) { - struct adapter *adapter = - from_timer(adapter, t, mlmepriv.scan_to_timer); - struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + struct adapter *adapter = from_timer(adapter, t, + mlmepriv.scan_to_timer); + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; DBG_88E(FUNC_ADPT_FMT" fw_state=%x\n", 
FUNC_ADPT_ARG(adapter), get_fwstate(pmlmepriv)); spin_lock_bh(&pmlmepriv->lock); @@ -1368,8 +1377,8 @@ static void rtw_auto_scan_handler(struct adapter *padapter) void rtw_dynamic_check_timer_handlder(struct timer_list *t) { - struct adapter *adapter = - from_timer(adapter, t, mlmepriv.dynamic_chk_timer); + struct adapter *adapter = from_timer(adapter, t, + mlmepriv.dynamic_chk_timer); struct registry_priv *pregistrypriv = &adapter->registrypriv; if (!adapter) @@ -1403,7 +1412,8 @@ static int rtw_check_join_candidate(struct mlme_priv *pmlmepriv { int updated = false; unsigned long since_scan; - struct adapter *adapter = container_of(pmlmepriv, struct adapter, mlmepriv); + struct adapter *adapter = container_of(pmlmepriv, struct adapter, + mlmepriv); /* check bssid, if needed */ if (pmlmepriv->assoc_by_bssid) { @@ -1458,12 +1468,12 @@ int rtw_select_and_join_from_scanned_queue(struct mlme_priv *pmlmepriv) int ret; struct list_head *phead; struct adapter *adapter; - struct __queue *queue = &(pmlmepriv->scanned_queue); - struct wlan_network *pnetwork = NULL; - struct wlan_network *candidate = NULL; - u8 supp_ant_div = false; + struct __queue *queue = &pmlmepriv->scanned_queue; + struct wlan_network *pnetwork = NULL; + struct wlan_network *candidate = NULL; + u8 supp_ant_div = false; - spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); + spin_lock_bh(&pmlmepriv->scanned_queue.lock); phead = get_list_head(queue); adapter = (struct adapter *)pmlmepriv->nic_hdl; pmlmepriv->pscanned = phead->next; @@ -1488,7 +1498,7 @@ int rtw_select_and_join_from_scanned_queue(struct mlme_priv *pmlmepriv) } /* check for situation of _FW_LINKED */ - if (check_fwstate(pmlmepriv, _FW_LINKED) == true) { + if (check_fwstate(pmlmepriv, _FW_LINKED)) { DBG_88E("%s: _FW_LINKED while ask_for_joinbss!!!\n", __func__); rtw_disassoc_cmd(adapter, 0, true); @@ -1516,10 +1526,10 @@ exit: int rtw_set_auth(struct adapter *adapter, struct security_priv *psecuritypriv) { - struct cmd_obj *pcmd; - struct setauth_parm *psetauthparm; - struct cmd_priv *pcmdpriv = &(adapter->cmdpriv); - int res = _SUCCESS; + struct cmd_obj *pcmd; + struct setauth_parm *psetauthparm; + struct cmd_priv *pcmdpriv = &adapter->cmdpriv; + int res = _SUCCESS; pcmd = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); if (!pcmd) { @@ -1550,12 +1560,12 @@ exit: int rtw_set_key(struct adapter *adapter, struct security_priv *psecuritypriv, int keyid, u8 set_tx) { - u8 keylen; - struct cmd_obj *pcmd; - struct setkey_parm *psetkeyparm; - struct cmd_priv *pcmdpriv = &(adapter->cmdpriv); - struct mlme_priv *pmlmepriv = &(adapter->mlmepriv); - int res = _SUCCESS; + u8 keylen; + struct cmd_obj *pcmd; + struct setkey_parm *psetkeyparm; + struct cmd_priv *pcmdpriv = &adapter->cmdpriv; + struct mlme_priv *pmlmepriv = &adapter->mlmepriv; + int res = _SUCCESS; pcmd = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); if (!pcmd) @@ -1570,31 +1580,34 @@ int rtw_set_key(struct adapter *adapter, struct security_priv *psecuritypriv, in if (psecuritypriv->dot11AuthAlgrthm == dot11AuthAlgrthm_8021X) { psetkeyparm->algorithm = (unsigned char)psecuritypriv->dot118021XGrpPrivacy; RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, - ("\n rtw_set_key: psetkeyparm->algorithm=(unsigned char)psecuritypriv->dot118021XGrpPrivacy=%d\n", - psetkeyparm->algorithm)); + ("\n %s: psetkeyparm->algorithm=(unsigned char)psecuritypriv->dot118021XGrpPrivacy=%d\n", + __func__, psetkeyparm->algorithm)); } else { psetkeyparm->algorithm = (u8)psecuritypriv->dot11PrivacyAlgrthm; RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, - 
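The timer handlers reformatted above (_rtw_join_timeout_handler() and friends) all rely on the from_timer() idiom that came with timer_setup(): the timer_list is embedded in the adapter, and the handler recovers the adapter by field name rather than through an opaque data pointer. A minimal sketch, demo_* names invented:

	#include <linux/timer.h>

	struct demo_adapter {
		struct timer_list assoc_timer;
		int join_attempts;
	};

	static void demo_join_timeout(struct timer_list *t)
	{
		struct demo_adapter *adapter =
			from_timer(adapter, t, assoc_timer);

		adapter->join_attempts++;	/* act on the owning adapter */
	}

	/* setup: timer_setup(&adapter->assoc_timer, demo_join_timeout, 0);
	 * arm:   mod_timer(&adapter->assoc_timer, jiffies + HZ);
	 */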
("\n rtw_set_key: psetkeyparm->algorithm=(u8)psecuritypriv->dot11PrivacyAlgrthm=%d\n", - psetkeyparm->algorithm)); + ("\n %s: psetkeyparm->algorithm=(u8)psecuritypriv->dot11PrivacyAlgrthm=%d\n", + __func__, psetkeyparm->algorithm)); } psetkeyparm->keyid = (u8)keyid;/* 0~3 */ psetkeyparm->set_tx = set_tx; pmlmepriv->key_mask |= BIT(psetkeyparm->keyid); - DBG_88E("==> rtw_set_key algorithm(%x), keyid(%x), key_mask(%x)\n", - psetkeyparm->algorithm, psetkeyparm->keyid, pmlmepriv->key_mask); + DBG_88E("==> %s algorithm(%x), keyid(%x), key_mask(%x)\n", + __func__, psetkeyparm->algorithm, psetkeyparm->keyid, + pmlmepriv->key_mask); RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, - ("\n rtw_set_key: psetkeyparm->algorithm=%d psetkeyparm->keyid=(u8)keyid=%d\n", - psetkeyparm->algorithm, keyid)); + ("\n %s: psetkeyparm->algorithm=%d psetkeyparm->keyid=(u8)keyid=%d\n", + __func__, psetkeyparm->algorithm, keyid)); switch (psetkeyparm->algorithm) { case _WEP40_: keylen = 5; - memcpy(&(psetkeyparm->key[0]), &(psecuritypriv->dot11DefKey[keyid].skey[0]), keylen); + memcpy(&psetkeyparm->key[0], + &psecuritypriv->dot11DefKey[keyid].skey[0], keylen); break; case _WEP104_: keylen = 13; - memcpy(&(psetkeyparm->key[0]), &(psecuritypriv->dot11DefKey[keyid].skey[0]), keylen); + memcpy(&psetkeyparm->key[0], + &psecuritypriv->dot11DefKey[keyid].skey[0], keylen); break; case _TKIP_: keylen = 16; @@ -1608,8 +1621,8 @@ int rtw_set_key(struct adapter *adapter, struct security_priv *psecuritypriv, in break; default: RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, - ("\n rtw_set_key:psecuritypriv->dot11PrivacyAlgrthm=%x (must be 1 or 2 or 4 or 5)\n", - psecuritypriv->dot11PrivacyAlgrthm)); + ("\n %s:psecuritypriv->dot11PrivacyAlgrthm=%x (must be 1 or 2 or 4 or 5)\n", + __func__, psecuritypriv->dot11PrivacyAlgrthm)); res = _FAIL; goto err_free_parm; } @@ -1632,7 +1645,7 @@ err_free_cmd: /* adjust ies for rtw_joinbss_cmd in WMM */ int rtw_restruct_wmm_ie(struct adapter *adapter, u8 *in_ie, u8 *out_ie, uint in_len, uint initial_out_len) { - unsigned int ielength = 0; + unsigned int ielength = 0; unsigned int i, j; /* i = 12; after the fixed IE */ @@ -1711,12 +1724,11 @@ static int rtw_append_pmkid(struct adapter *Adapter, int iEntry, u8 *ie, uint ie int rtw_restruct_sec_ie(struct adapter *adapter, u8 *in_ie, u8 *out_ie, uint in_len) { u8 authmode; - uint ielength; + uint ielength; int iEntry; - struct mlme_priv *pmlmepriv = &adapter->mlmepriv; struct security_priv *psecuritypriv = &adapter->securitypriv; - uint ndisauthmode = psecuritypriv->ndisauthtype; + uint ndisauthmode = psecuritypriv->ndisauthtype; uint ndissecuritytype = psecuritypriv->ndisencryptstatus; RT_TRACE(_module_rtl871x_mlme_c_, _drv_notice_, @@ -1728,7 +1740,7 @@ int rtw_restruct_sec_ie(struct adapter *adapter, u8 *in_ie, u8 *out_ie, uint in_ ielength = 12; if ((ndisauthmode == Ndis802_11AuthModeWPA) || (ndisauthmode == Ndis802_11AuthModeWPAPSK)) - authmode = _WPA_IE_ID_; + authmode = _WPA_IE_ID_; if ((ndisauthmode == Ndis802_11AuthModeWPA2) || (ndisauthmode == Ndis802_11AuthModeWPA2PSK)) authmode = _WPA2_IE_ID_; @@ -1745,12 +1757,9 @@ int rtw_restruct_sec_ie(struct adapter *adapter, u8 *in_ie, u8 *out_ie, uint in_ } iEntry = SecIsInPMKIDList(adapter, pmlmepriv->assoc_bssid); - if (iEntry < 0) { - return ielength; - } else { - if (authmode == _WPA2_IE_ID_) - ielength = rtw_append_pmkid(adapter, iEntry, out_ie, ielength); - } + if (iEntry >= 0 && authmode == _WPA2_IE_ID_) + ielength = rtw_append_pmkid(adapter, iEntry, out_ie, ielength); + return ielength; } @@ 
-1758,7 +1767,7 @@ void rtw_init_registrypriv_dev_network(struct adapter *adapter) { struct registry_priv *pregistrypriv = &adapter->registrypriv; struct eeprom_priv *peepriv = &adapter->eeprompriv; - struct wlan_bssid_ex *pdev_network = &pregistrypriv->dev_network; + struct wlan_bssid_ex *pdev_network = &pregistrypriv->dev_network; u8 *myhwaddr = myid(peepriv); memcpy(pdev_network->MacAddress, myhwaddr, ETH_ALEN); @@ -1777,9 +1786,9 @@ void rtw_update_registrypriv_dev_network(struct adapter *adapter) { int sz = 0; struct registry_priv *pregistrypriv = &adapter->registrypriv; - struct wlan_bssid_ex *pdev_network = &pregistrypriv->dev_network; - struct security_priv *psecuritypriv = &adapter->securitypriv; - struct wlan_network *cur_network = &adapter->mlmepriv.cur_network; + struct wlan_bssid_ex *pdev_network = &pregistrypriv->dev_network; + struct security_priv *psecuritypriv = &adapter->securitypriv; + struct wlan_network *cur_network = &adapter->mlmepriv.cur_network; pdev_network->Privacy = psecuritypriv->dot11PrivacyAlgrthm > 0 ? 1 : 0; /* adhoc no 802.1x */ @@ -1829,9 +1838,9 @@ void rtw_get_encrypt_decrypt_from_registrypriv(struct adapter *adapter) /* the function is at passive_level */ void rtw_joinbss_reset(struct adapter *padapter) { - u8 threshold; - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct ht_priv *phtpriv = &pmlmepriv->htpriv; + u8 threshold; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct ht_priv *phtpriv = &pmlmepriv->htpriv; /* todo: if you want to do something io/reg/hw setting before join_bss, please add code here */ pmlmepriv->num_FortyMHzIntolerant = 0; @@ -1861,9 +1870,9 @@ unsigned int rtw_restructure_ht_ie(struct adapter *padapter, u8 *in_ie, u8 *out_ enum ht_cap_ampdu_factor max_rx_ampdu_factor; unsigned char *p; unsigned char WMM_IE[] = {0x00, 0x50, 0xf2, 0x02, 0x00, 0x01, 0x00}; - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct qos_priv *pqospriv = &pmlmepriv->qospriv; - struct ht_priv *phtpriv = &pmlmepriv->htpriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct qos_priv *pqospriv = &pmlmepriv->qospriv; + struct ht_priv *phtpriv = &pmlmepriv->htpriv; u32 rx_packet_offset, max_recvbuf_sz; phtpriv->ht_option = false; @@ -1925,11 +1934,11 @@ unsigned int rtw_restructure_ht_ie(struct adapter *padapter, u8 *in_ie, u8 *out_ /* the function is > passive_level (in critical_section) */ void rtw_update_ht_cap(struct adapter *padapter, u8 *pie, uint ie_len) { - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct ht_priv *phtpriv = &pmlmepriv->htpriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct ht_priv *phtpriv = &pmlmepriv->htpriv; struct registry_priv *pregistrypriv = &padapter->registrypriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (!phtpriv->ht_option) return; @@ -1987,7 +1996,7 @@ void rtw_issue_addbareq_cmd(struct adapter *padapter, struct xmit_frame *pxmitfr u8 issued; int priority; struct sta_info *psta = NULL; - struct ht_priv *phtpriv; + struct ht_priv *phtpriv; struct pkt_attrib *pattrib = &pxmitframe->attrib; if (is_multicast_ether_addr(pattrib->ra) || @@ -2020,7 +2029,7 @@ void rtw_issue_addbareq_cmd(struct adapter *padapter, struct xmit_frame *pxmitfr void rtw_roaming(struct adapter *padapter, struct wlan_network *tgt_network) { - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; 
+ struct mlme_priv *pmlmepriv = &padapter->mlmepriv; spin_lock_bh(&pmlmepriv->lock); _rtw_roaming(padapter, tgt_network); @@ -2029,9 +2038,8 @@ void rtw_roaming(struct adapter *padapter, struct wlan_network *tgt_network) void _rtw_roaming(struct adapter *padapter, struct wlan_network *tgt_network) { - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; int do_join_r; - struct wlan_network *pnetwork; if (tgt_network) diff --git a/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c b/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c index 6790b840aef8..7a36661ebbed 100644 --- a/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c +++ b/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c @@ -17,24 +17,20 @@ #include #include -static u8 null_addr[ETH_ALEN] = {0, 0, 0, 0, 0, 0}; +static u8 null_addr[ETH_ALEN] = {}; /************************************************** OUI definitions for the vendor specific IE ***************************************************/ -unsigned char RTW_WPA_OUI[] = {0x00, 0x50, 0xf2, 0x01}; -unsigned char WMM_OUI[] = {0x00, 0x50, 0xf2, 0x02}; -unsigned char WPS_OUI[] = {0x00, 0x50, 0xf2, 0x04}; -unsigned char P2P_OUI[] = {0x50, 0x6F, 0x9A, 0x09}; -unsigned char WFD_OUI[] = {0x50, 0x6F, 0x9A, 0x0A}; +const u8 RTW_WPA_OUI[] = {0x00, 0x50, 0xf2, 0x01}; +const u8 WPS_OUI[] = {0x00, 0x50, 0xf2, 0x04}; +static const u8 WMM_OUI[] = {0x00, 0x50, 0xf2, 0x02}; +static const u8 P2P_OUI[] = {0x50, 0x6F, 0x9A, 0x09}; -unsigned char WMM_INFO_OUI[] = {0x00, 0x50, 0xf2, 0x02, 0x00, 0x01}; -unsigned char WMM_PARA_OUI[] = {0x00, 0x50, 0xf2, 0x02, 0x01, 0x01}; +static const u8 WMM_PARA_OUI[] = {0x00, 0x50, 0xf2, 0x02, 0x01, 0x01}; -unsigned char WPA_TKIP_CIPHER[4] = {0x00, 0x50, 0xf2, 0x02}; -unsigned char RSN_TKIP_CIPHER[4] = {0x00, 0x0f, 0xac, 0x02}; - -extern unsigned char REALTEK_96B_IE[]; +const u8 WPA_TKIP_CIPHER[4] = {0x00, 0x50, 0xf2, 0x02}; +const u8 RSN_TKIP_CIPHER[4] = {0x00, 0x0f, 0xac, 0x02}; /******************************************************** MCS rate definitions @@ -56,7 +52,7 @@ static struct rt_channel_plan_2g RTW_ChannelPlan2G[RT_CHANNEL_DOMAIN_2G_MAX] = { {{}, 0}, /* 0x05, RT_CHANNEL_DOMAIN_2G_NULL */ }; -static struct rt_channel_plan_map RTW_ChannelPlanMap[RT_CHANNEL_DOMAIN_MAX] = { +static struct rt_channel_plan_map RTW_ChannelPlanMap[RT_CHANNEL_DOMAIN_MAX] = { /* 0x00 ~ 0x1F , Old Define ===== */ {0x02}, /* 0x00, RT_CHANNEL_DOMAIN_FCC */ {0x02}, /* 0x01, RT_CHANNEL_DOMAIN_IC */ @@ -154,8 +150,8 @@ int rtw_ch_set_search_ch(struct rt_channel_info *ch_set, const u32 ch) struct xmit_frame *alloc_mgtxmitframe(struct xmit_priv *pxmitpriv) { - struct xmit_frame *pmgntframe; - struct xmit_buf *pxmitbuf; + struct xmit_frame *pmgntframe; + struct xmit_buf *pxmitbuf; pmgntframe = rtw_alloc_xmitframe(pxmitpriv); if (!pmgntframe) { @@ -184,7 +180,7 @@ Following are some TX functions for WiFi MLME void update_mgnt_tx_rate(struct adapter *padapter, u8 rate) { - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; pmlmeext->tx_rate = rate; DBG_88E("%s(): rate = %x\n", __func__, rate); @@ -192,7 +188,7 @@ void update_mgnt_tx_rate(struct adapter *padapter, u8 rate) void update_mgntframe_attrib(struct adapter *padapter, struct pkt_attrib *pattrib) { - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; memset((u8 *)(pattrib), 0, sizeof(struct pkt_attrib)); @@ -259,7 +255,7 @@ static s32 dump_mgntframe_and_wait_ack(struct 
adapter *padapter, { s32 ret = _FAIL; u32 timeout_ms = 500;/* 500ms */ - struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; if (padapter->bSurpriseRemoved || padapter->bDriverStopped) return -1; @@ -313,18 +309,18 @@ static int update_hidden_ssid(u8 *ies, u32 ies_len, u8 hidden_ssid_mode) static void issue_beacon(struct adapter *padapter, int timeout_ms) { - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe; + struct xmit_frame *pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - unsigned int rate_len; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); - u8 bc_addr[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; + unsigned int rate_len; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *cur_network = &pmlmeinfo->network; + u8 bc_addr[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; pmgntframe = alloc_mgtxmitframe(pxmitpriv); if (!pmgntframe) { @@ -333,7 +329,7 @@ static void issue_beacon(struct adapter *padapter, int timeout_ms) } #if defined(CONFIG_88EU_AP_MODE) spin_lock_bh(&pmlmepriv->bcn_update_lock); -#endif /* if defined (CONFIG_88EU_AP_MODE) */ +#endif /* update attribute */ pattrib = &pmgntframe->attrib; @@ -349,7 +345,7 @@ static void issue_beacon(struct adapter *padapter, int timeout_ms) *(fctrl) = 0; ether_addr_copy(pwlanhdr->addr1, bc_addr); - ether_addr_copy(pwlanhdr->addr2, myid(&(padapter->eeprompriv))); + ether_addr_copy(pwlanhdr->addr2, myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, cur_network->MacAddress); SetSeqNum(pwlanhdr, 0/*pmlmeext->mgnt_seq*/); @@ -359,7 +355,7 @@ static void issue_beacon(struct adapter *padapter, int timeout_ms) pframe += sizeof(struct ieee80211_hdr_3addr); pattrib->pktlen = sizeof(struct ieee80211_hdr_3addr); - if ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE) { int len_diff; u8 *wps_ie; uint wps_ielen; @@ -413,7 +409,7 @@ static void issue_beacon(struct adapter *padapter, int timeout_ms) pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, min_t(unsigned int, rate_len, 8), cur_network->SupportedRates, &pattrib->pktlen); /* DS parameter set */ - pframe = rtw_set_ie(pframe, _DSSET_IE_, 1, (unsigned char *)&(cur_network->Configuration.DSConfig), &pattrib->pktlen); + pframe = rtw_set_ie(pframe, _DSSET_IE_, 1, (unsigned char *)&cur_network->Configuration.DSConfig, &pattrib->pktlen); { u8 erpinfo = 0; @@ -436,7 +432,7 @@ _issue_bcn: pmlmepriv->update_bcn = false; spin_unlock_bh(&pmlmepriv->bcn_update_lock); -#endif /* if defined (CONFIG_88EU_AP_MODE) */ +#endif if ((pattrib->pktlen + TXDESC_SIZE) > 512) { DBG_88E("beacon frame too large\n"); @@ -454,22 +450,22 @@ _issue_bcn: static void issue_probersp(struct adapter *padapter, unsigned char *da) { - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe; + struct xmit_frame *pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - unsigned char *mac, *bssid; - struct xmit_priv 
*pxmitpriv = &(padapter->xmitpriv); + unsigned char *mac, *bssid; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; #if defined(CONFIG_88EU_AP_MODE) u8 *pwps_ie; uint wps_ielen; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; -#endif /* if defined (CONFIG_88EU_AP_MODE) */ - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); - unsigned int rate_len; +#endif + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *cur_network = &pmlmeinfo->network; + unsigned int rate_len; pmgntframe = alloc_mgtxmitframe(pxmitpriv); if (!pmgntframe) { @@ -486,7 +482,7 @@ static void issue_probersp(struct adapter *padapter, unsigned char *da) pframe = (u8 *)(pmgntframe->buf_addr) + TXDESC_OFFSET; pwlanhdr = (struct ieee80211_hdr *)pframe; - mac = myid(&(padapter->eeprompriv)); + mac = myid(&padapter->eeprompriv); bssid = cur_network->MacAddress; fctrl = &pwlanhdr->frame_control; @@ -507,7 +503,7 @@ static void issue_probersp(struct adapter *padapter, unsigned char *da) return; #if defined(CONFIG_88EU_AP_MODE) - if ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE) { pwps_ie = rtw_get_wps_ie(cur_network->ies+_FIXED_IE_LENGTH_, cur_network->ie_length-_FIXED_IE_LENGTH_, NULL, &wps_ielen); /* inerset & update wps_probe_resp_ie */ @@ -573,9 +569,9 @@ static void issue_probersp(struct adapter *padapter, unsigned char *da) pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, min_t(unsigned int, rate_len, 8), cur_network->SupportedRates, &pattrib->pktlen); /* DS parameter set */ - pframe = rtw_set_ie(pframe, _DSSET_IE_, 1, (unsigned char *)&(cur_network->Configuration.DSConfig), &pattrib->pktlen); + pframe = rtw_set_ie(pframe, _DSSET_IE_, 1, (unsigned char *)&cur_network->Configuration.DSConfig, &pattrib->pktlen); - if ((pmlmeinfo->state&0x03) == WIFI_FW_ADHOC_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) { u8 erpinfo = 0; u32 ATIMWindow; /* IBSS Parameter Set... 
*/ @@ -603,18 +599,18 @@ static int issue_probereq(struct adapter *padapter, bool wait_ack) { int ret = _FAIL; - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe; + struct xmit_frame *pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - unsigned char *mac; - unsigned char bssrate[NumRates]; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - int bssrate_len = 0; - u8 bc_addr[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; + unsigned char *mac; + unsigned char bssrate[NumRates]; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + int bssrate_len = 0; + u8 bc_addr[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; RT_TRACE(_module_rtl871x_mlme_c_, _drv_notice_, ("+%s\n", __func__)); @@ -631,7 +627,7 @@ static int issue_probereq(struct adapter *padapter, pframe = (u8 *)(pmgntframe->buf_addr) + TXDESC_OFFSET; pwlanhdr = (struct ieee80211_hdr *)pframe; - mac = myid(&(padapter->eeprompriv)); + mac = myid(&padapter->eeprompriv); fctrl = &pwlanhdr->frame_control; *(fctrl) = 0; @@ -656,17 +652,17 @@ static int issue_probereq(struct adapter *padapter, pattrib->pktlen = sizeof(struct ieee80211_hdr_3addr); if (pssid) - pframe = rtw_set_ie(pframe, _SSID_IE_, pssid->SsidLength, pssid->Ssid, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SSID_IE_, pssid->SsidLength, pssid->Ssid, &pattrib->pktlen); else - pframe = rtw_set_ie(pframe, _SSID_IE_, 0, NULL, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SSID_IE_, 0, NULL, &pattrib->pktlen); get_rate_set(padapter, bssrate, &bssrate_len); if (bssrate_len > 8) { - pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, bssrate, &(pattrib->pktlen)); - pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, (bssrate_len - 8), (bssrate + 8), &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, bssrate, &pattrib->pktlen); + pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, bssrate_len - 8, bssrate + 8, &pattrib->pktlen); } else { - pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, bssrate_len, bssrate, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, bssrate_len, bssrate, &pattrib->pktlen); } /* add wps_ie for wps2.0 */ @@ -749,10 +745,10 @@ static void issue_auth(struct adapter *padapter, struct sta_info *psta, __le16 le_val16; #endif int use_shared_key = 0; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; pmgntframe = alloc_mgtxmitframe(pxmitpriv); if (pmgntframe == NULL) @@ -782,9 +778,9 @@ static void issue_auth(struct adapter *padapter, struct sta_info *psta, ether_addr_copy(pwlanhdr->addr1, psta->hwaddr); ether_addr_copy(pwlanhdr->addr2, - myid(&(padapter->eeprompriv))); + myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, - myid(&(padapter->eeprompriv))); + myid(&padapter->eeprompriv)); /* setting auth algo number */ val16 = (u16)psta->authalg; @@ -816,7 +812,7 @@ static void 
issue_auth(struct adapter *padapter, struct sta_info *psta, /* added challenging text... */ if ((psta->auth_seq == 2) && (psta->state & WIFI_FW_AUTH_STATE) && (use_shared_key == 1)) - pframe = rtw_set_ie(pframe, _CHLGETXT_IE_, 128, psta->chg_txt, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _CHLGETXT_IE_, 128, psta->chg_txt, &pattrib->pktlen); #endif } else { __le32 le_tmp32; @@ -858,7 +854,7 @@ static void issue_auth(struct adapter *padapter, struct sta_info *psta, /* then checking to see if sending challenging text... */ if ((pmlmeinfo->auth_seq == 3) && (pmlmeinfo->state & WIFI_FW_AUTH_STATE) && (use_shared_key == 1)) { - pframe = rtw_set_ie(pframe, _CHLGETXT_IE_, 128, pmlmeinfo->chg_txt, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _CHLGETXT_IE_, 128, pmlmeinfo->chg_txt, &pattrib->pktlen); SetPrivacy(fctrl); @@ -883,17 +879,17 @@ static void issue_auth(struct adapter *padapter, struct sta_info *psta, static void issue_asocrsp(struct adapter *padapter, unsigned short status, struct sta_info *pstat, int pkt_type) { - struct xmit_frame *pmgntframe; + struct xmit_frame *pmgntframe; struct ieee80211_hdr *pwlanhdr; struct pkt_attrib *pattrib; - unsigned char *pbuf, *pframe; + unsigned char *pbuf, *pframe; unsigned short val; __le16 *fctrl; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; u8 *ie = pnetwork->ies; __le16 lestatus, leval; @@ -917,7 +913,7 @@ static void issue_asocrsp(struct adapter *padapter, unsigned short status, ether_addr_copy((void *)GetAddr1Ptr(pwlanhdr), pstat->hwaddr); ether_addr_copy((void *)GetAddr2Ptr(pwlanhdr), - myid(&(padapter->eeprompriv))); + myid(&padapter->eeprompriv)); ether_addr_copy((void *)GetAddr3Ptr(pwlanhdr), pnetwork->MacAddress); SetSeqNum(pwlanhdr, pmlmeext->mgnt_seq); @@ -944,10 +940,10 @@ static void issue_asocrsp(struct adapter *padapter, unsigned short status, pframe = rtw_set_fixed_ie(pframe, _ASOC_ID_, &leval, &pattrib->pktlen); if (pstat->bssratelen <= 8) { - pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, pstat->bssratelen, pstat->bssrateset, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, pstat->bssratelen, pstat->bssrateset, &pattrib->pktlen); } else { - pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, pstat->bssrateset, &(pattrib->pktlen)); - pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, (pstat->bssratelen-8), pstat->bssrateset+8, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, pstat->bssrateset, &pattrib->pktlen); + pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, pstat->bssratelen-8, pstat->bssrateset+8, &pattrib->pktlen); } if ((pstat->flags & WLAN_STA_HT) && (pmlmepriv->htpriv.ht_option)) { @@ -990,7 +986,7 @@ static void issue_asocrsp(struct adapter *padapter, unsigned short status, } if (pmlmeinfo->assoc_AP_vendor == HT_IOT_PEER_REALTEK) - pframe = rtw_set_ie(pframe, _VENDOR_SPECIFIC_IE_, 6, REALTEK_96B_IE, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _VENDOR_SPECIFIC_IE_, 6, REALTEK_96B_IE, &pattrib->pktlen); /* add WPS IE ie for wps 2.0 */ if (pmlmepriv->wps_assoc_resp_ie && 
pmlmepriv->wps_assoc_resp_ie_len > 0) { @@ -1008,21 +1004,21 @@ static void issue_asocrsp(struct adapter *padapter, unsigned short status, static void issue_assocreq(struct adapter *padapter) { int ret = _FAIL; - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe, *p; + struct xmit_frame *pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe, *p; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - unsigned int i, j, ie_len, index = 0; + unsigned int i, j, ie_len, index = 0; unsigned char bssrate[NumRates], sta_bssrate[NumRates]; struct ndis_802_11_var_ie *pIE; - struct registry_priv *pregpriv = &padapter->registrypriv; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - int bssrate_len = 0, sta_bssrate_len = 0; - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct registry_priv *pregpriv = &padapter->registrypriv; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + int bssrate_len = 0, sta_bssrate_len = 0; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; pmgntframe = alloc_mgtxmitframe(pxmitpriv); if (!pmgntframe) @@ -1039,7 +1035,7 @@ static void issue_assocreq(struct adapter *padapter) fctrl = &pwlanhdr->frame_control; *(fctrl) = 0; ether_addr_copy(pwlanhdr->addr1, pnetwork->MacAddress); - ether_addr_copy(pwlanhdr->addr2, myid(&(padapter->eeprompriv))); + ether_addr_copy(pwlanhdr->addr2, myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, pnetwork->MacAddress); SetSeqNum(pwlanhdr, pmlmeext->mgnt_seq); @@ -1063,7 +1059,7 @@ static void issue_assocreq(struct adapter *padapter) pattrib->pktlen += 2; /* SSID */ - pframe = rtw_set_ie(pframe, _SSID_IE_, pmlmeinfo->network.Ssid.SsidLength, pmlmeinfo->network.Ssid.Ssid, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SSID_IE_, pmlmeinfo->network.Ssid.SsidLength, pmlmeinfo->network.Ssid.Ssid, &pattrib->pktlen); /* supported rate & extended supported rate */ @@ -1110,16 +1106,16 @@ static void issue_assocreq(struct adapter *padapter) } if (bssrate_len > 8) { - pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, bssrate, &(pattrib->pktlen)); - pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, (bssrate_len - 8), (bssrate + 8), &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, bssrate, &pattrib->pktlen); + pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, bssrate_len - 8, bssrate + 8, &pattrib->pktlen); } else { - pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, bssrate_len, bssrate, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, bssrate_len, bssrate, &pattrib->pktlen); } /* RSN */ p = rtw_get_ie((pmlmeinfo->network.ies + sizeof(struct ndis_802_11_fixed_ie)), _RSN_IE_2_, &ie_len, (pmlmeinfo->network.ie_length - sizeof(struct ndis_802_11_fixed_ie))); if (p) - pframe = rtw_set_ie(pframe, _RSN_IE_2_, ie_len, (p + 2), &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _RSN_IE_2_, ie_len, p + 2, &pattrib->pktlen); /* HT caps */ if (padapter->mlmepriv.htpriv.ht_option) { @@ -1139,7 +1135,7 @@ static void issue_assocreq(struct adapter *padapter) if (pregpriv->rx_stbc) pmlmeinfo->HT_caps.cap_info |= cpu_to_le16(0x0100);/* RX STBC One spatial stream */ memcpy((u8 
*)&pmlmeinfo->HT_caps.mcs, MCS_rate_1R, 16); - pframe = rtw_set_ie(pframe, _HT_CAPABILITY_IE_, ie_len, (u8 *)(&(pmlmeinfo->HT_caps)), &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _HT_CAPABILITY_IE_, ie_len, (u8 *)(&pmlmeinfo->HT_caps), &pattrib->pktlen); } } @@ -1159,7 +1155,7 @@ static void issue_assocreq(struct adapter *padapter) if (!memcmp(pIE->data, WPS_OUI, 4)) pIE->Length = 14; } - pframe = rtw_set_ie(pframe, _VENDOR_SPECIFIC_IE_, pIE->Length, pIE->data, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _VENDOR_SPECIFIC_IE_, pIE->Length, pIE->data, &pattrib->pktlen); } break; default: @@ -1168,7 +1164,7 @@ static void issue_assocreq(struct adapter *padapter) } if (pmlmeinfo->assoc_AP_vendor == HT_IOT_PEER_REALTEK) - pframe = rtw_set_ie(pframe, _VENDOR_SPECIFIC_IE_, 6, REALTEK_96B_IE, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, _VENDOR_SPECIFIC_IE_, 6, REALTEK_96B_IE, &pattrib->pktlen); pattrib->last_txcmdsz = pattrib->pktlen; dump_mgntframe(padapter, pmgntframe); @@ -1187,23 +1183,23 @@ static int _issue_nulldata(struct adapter *padapter, unsigned char *da, unsigned int power_mode, bool wait_ack) { int ret = _FAIL; - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe; + struct xmit_frame *pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - struct xmit_priv *pxmitpriv; - struct mlme_ext_priv *pmlmeext; - struct mlme_ext_info *pmlmeinfo; - struct wlan_bssid_ex *pnetwork; + struct xmit_priv *pxmitpriv; + struct mlme_ext_priv *pmlmeext; + struct mlme_ext_info *pmlmeinfo; + struct wlan_bssid_ex *pnetwork; if (!padapter) goto exit; - pxmitpriv = &(padapter->xmitpriv); - pmlmeext = &(padapter->mlmeextpriv); - pmlmeinfo = &(pmlmeext->mlmext_info); - pnetwork = &(pmlmeinfo->network); + pxmitpriv = &padapter->xmitpriv; + pmlmeext = &padapter->mlmeextpriv; + pmlmeinfo = &pmlmeext->mlmext_info; + pnetwork = &pmlmeinfo->network; pmgntframe = alloc_mgtxmitframe(pxmitpriv); if (!pmgntframe) @@ -1222,16 +1218,16 @@ static int _issue_nulldata(struct adapter *padapter, unsigned char *da, fctrl = &pwlanhdr->frame_control; *(fctrl) = 0; - if ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE) + if ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE) SetFrDs(fctrl); - else if ((pmlmeinfo->state&0x03) == WIFI_FW_STATION_STATE) + else if ((pmlmeinfo->state & 0x03) == WIFI_FW_STATION_STATE) SetToDs(fctrl); if (power_mode) SetPwrMgt(fctrl); ether_addr_copy(pwlanhdr->addr1, da); - ether_addr_copy(pwlanhdr->addr2, myid(&(padapter->eeprompriv))); + ether_addr_copy(pwlanhdr->addr2, myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, pnetwork->MacAddress); SetSeqNum(pwlanhdr, pmlmeext->mgnt_seq); @@ -1262,9 +1258,9 @@ int issue_nulldata(struct adapter *padapter, unsigned char *da, int ret; int i = 0; unsigned long start = jiffies; - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; /* da == NULL, assume it's null data for sta to ap*/ if (da == NULL) @@ -1308,16 +1304,16 @@ static int _issue_qos_nulldata(struct adapter *padapter, unsigned char *da, u16 tid, bool wait_ack) { int ret = _FAIL; - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe; + struct xmit_frame 
*pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; unsigned short *qc; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; DBG_88E("%s\n", __func__); @@ -1343,9 +1339,9 @@ static int _issue_qos_nulldata(struct adapter *padapter, unsigned char *da, fctrl = &pwlanhdr->frame_control; *(fctrl) = 0; - if ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE) + if ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE) SetFrDs(fctrl); - else if ((pmlmeinfo->state&0x03) == WIFI_FW_STATION_STATE) + else if ((pmlmeinfo->state & 0x03) == WIFI_FW_STATION_STATE) SetToDs(fctrl); if (pattrib->mdata) @@ -1360,7 +1356,7 @@ static int _issue_qos_nulldata(struct adapter *padapter, unsigned char *da, SetAckpolicy(qc, pattrib->ack_policy); ether_addr_copy(pwlanhdr->addr1, da); - ether_addr_copy(pwlanhdr->addr2, myid(&(padapter->eeprompriv))); + ether_addr_copy(pwlanhdr->addr2, myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, pnetwork->MacAddress); SetSeqNum(pwlanhdr, pmlmeext->mgnt_seq); @@ -1391,9 +1387,9 @@ int issue_qos_nulldata(struct adapter *padapter, unsigned char *da, int ret; int i = 0; unsigned long start = jiffies; - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; /* da == NULL, assume it's null data for sta to ap*/ if (da == NULL) @@ -1435,15 +1431,15 @@ exit: static int _issue_deauth(struct adapter *padapter, unsigned char *da, unsigned short reason, bool wait_ack) { - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe; + struct xmit_frame *pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; int ret = _FAIL; __le16 le_tmp; @@ -1465,7 +1461,7 @@ static int _issue_deauth(struct adapter *padapter, unsigned char *da, *(fctrl) = 0; ether_addr_copy(pwlanhdr->addr1, da); - ether_addr_copy(pwlanhdr->addr2, myid(&(padapter->eeprompriv))); + ether_addr_copy(pwlanhdr->addr2, myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, pnetwork->MacAddress); SetSeqNum(pwlanhdr, pmlmeext->mgnt_seq); @@ -1548,7 +1544,7 @@ static void issue_action_BA(struct adapter *padapter, unsigned char *raddr, u16 BA_para_set; u16 reason_code; u16 BA_timeout_value; - __le16 le_tmp; + __le16 le_tmp; u16 BA_starting_seqctrl = 0; enum ht_cap_ampdu_factor max_rx_ampdu_factor; struct xmit_frame *pmgntframe; @@ 
-1556,13 +1552,13 @@ static void issue_action_BA(struct adapter *padapter, unsigned char *raddr, u8 *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; struct sta_info *psta; struct sta_priv *pstapriv = &padapter->stapriv; struct registry_priv *pregpriv = &padapter->registrypriv; - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; DBG_88E("%s, category=%d, action=%d, status=%d\n", __func__, category, action, status); @@ -1583,7 +1579,7 @@ static void issue_action_BA(struct adapter *padapter, unsigned char *raddr, *(fctrl) = 0; ether_addr_copy(pwlanhdr->addr1, raddr); - ether_addr_copy(pwlanhdr->addr2, myid(&(padapter->eeprompriv))); + ether_addr_copy(pwlanhdr->addr2, myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, pnetwork->MacAddress); SetSeqNum(pwlanhdr, pmlmeext->mgnt_seq); @@ -1593,8 +1589,8 @@ static void issue_action_BA(struct adapter *padapter, unsigned char *raddr, pframe += sizeof(struct ieee80211_hdr_3addr); pattrib->pktlen = sizeof(struct ieee80211_hdr_3addr); - pframe = rtw_set_fixed_ie(pframe, 1, &(category), &(pattrib->pktlen)); - pframe = rtw_set_fixed_ie(pframe, 1, &(action), &(pattrib->pktlen)); + pframe = rtw_set_fixed_ie(pframe, 1, &category, &pattrib->pktlen); + pframe = rtw_set_fixed_ie(pframe, 1, &action, &pattrib->pktlen); if (category == 3) { switch (action) { @@ -1602,7 +1598,7 @@ static void issue_action_BA(struct adapter *padapter, unsigned char *raddr, do { pmlmeinfo->dialogToken++; } while (pmlmeinfo->dialogToken == 0); - pframe = rtw_set_fixed_ie(pframe, 1, &(pmlmeinfo->dialogToken), &(pattrib->pktlen)); + pframe = rtw_set_fixed_ie(pframe, 1, &pmlmeinfo->dialogToken, &pattrib->pktlen); BA_para_set = 0x1002 | ((status & 0xf) << 2); /* immediate ack & 64 buffer size */ le_tmp = cpu_to_le16(BA_para_set); @@ -1616,7 +1612,7 @@ static void issue_action_BA(struct adapter *padapter, unsigned char *raddr, psta = rtw_get_stainfo(pstapriv, raddr); if (psta) { - start_seq = (psta->sta_xmitpriv.txseq_tid[status & 0x07]&0xfff) + 1; + start_seq = (psta->sta_xmitpriv.txseq_tid[status & 0x07] & 0xfff) + 1; DBG_88E("BA_starting_seqctrl=%d for TID=%d\n", start_seq, status & 0x07); @@ -1697,20 +1693,20 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter) { struct list_head *plist, *phead; unsigned char category, action; - struct xmit_frame *pmgntframe; - struct pkt_attrib *pattrib; - unsigned char *pframe; + struct xmit_frame *pmgntframe; + struct pkt_attrib *pattrib; + unsigned char *pframe; struct ieee80211_hdr *pwlanhdr; __le16 *fctrl; - struct wlan_network *pnetwork = NULL; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); + struct wlan_network *pnetwork = NULL; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct __queue *queue = &(pmlmepriv->scanned_queue); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct __queue *queue = &pmlmepriv->scanned_queue; u8 InfoContent[16] = 
{0}; u8 ICS[8][15]; - struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); + struct wlan_bssid_ex *cur_network = &pmlmeinfo->network; if ((pmlmepriv->num_FortyMHzIntolerant == 0) || (pmlmepriv->num_sta_no_ht == 0)) return; @@ -1740,7 +1736,7 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter) *(fctrl) = 0; ether_addr_copy(pwlanhdr->addr1, cur_network->MacAddress); - ether_addr_copy(pwlanhdr->addr2, myid(&(padapter->eeprompriv))); + ether_addr_copy(pwlanhdr->addr2, myid(&padapter->eeprompriv)); ether_addr_copy(pwlanhdr->addr3, cur_network->MacAddress); SetSeqNum(pwlanhdr, pmlmeext->mgnt_seq); @@ -1750,8 +1746,8 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter) pframe += sizeof(struct ieee80211_hdr_3addr); pattrib->pktlen = sizeof(struct ieee80211_hdr_3addr); - pframe = rtw_set_fixed_ie(pframe, 1, &(category), &(pattrib->pktlen)); - pframe = rtw_set_fixed_ie(pframe, 1, &(action), &(pattrib->pktlen)); + pframe = rtw_set_fixed_ie(pframe, 1, &category, &pattrib->pktlen); + pframe = rtw_set_fixed_ie(pframe, 1, &action, &pattrib->pktlen); /* */ if (pmlmepriv->num_FortyMHzIntolerant > 0) { @@ -1759,7 +1755,7 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter) iedata |= BIT(2);/* 20 MHz BSS Width Request */ - pframe = rtw_set_ie(pframe, EID_BSSCoexistence, 1, &iedata, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, EID_BSSCoexistence, 1, &iedata, &pattrib->pktlen); } /* */ @@ -1767,7 +1763,7 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter) if (pmlmepriv->num_sta_no_ht > 0) { int i; - spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); + spin_lock_bh(&pmlmepriv->scanned_queue.lock); phead = get_list_head(queue); plist = phead->next; @@ -1814,7 +1810,7 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter) } } - pframe = rtw_set_ie(pframe, EID_BSSIntolerantChlReport, k, InfoContent, &(pattrib->pktlen)); + pframe = rtw_set_ie(pframe, EID_BSSIntolerantChlReport, k, InfoContent, &pattrib->pktlen); } } } @@ -1828,12 +1824,11 @@ unsigned int send_delba(struct adapter *padapter, u8 initiator, u8 *addr) { struct sta_priv *pstapriv = &padapter->stapriv; struct sta_info *psta = NULL; - /* struct recv_reorder_ctrl *preorder_ctrl; */ - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u16 tid; - if ((pmlmeinfo->state&0x03) != WIFI_FW_AP_STATE) + if ((pmlmeinfo->state & 0x03) != WIFI_FW_AP_STATE) if (!(pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS)) return _SUCCESS; @@ -1845,7 +1840,7 @@ unsigned int send_delba(struct adapter *padapter, u8 initiator, u8 *addr) for (tid = 0; tid < MAXTID; tid++) { if (psta->recvreorder_ctrl[tid].enable) { DBG_88E("rx agg disable tid(%d)\n", tid); - issue_action_BA(padapter, addr, RTW_WLAN_ACTION_DELBA, (((tid << 1) | initiator)&0x1F)); + issue_action_BA(padapter, addr, RTW_WLAN_ACTION_DELBA, (((tid << 1) | initiator) & 0x1F)); psta->recvreorder_ctrl[tid].enable = false; psta->recvreorder_ctrl[tid].indicate_seq = 0xffff; } @@ -1854,7 +1849,7 @@ unsigned int send_delba(struct adapter *padapter, u8 initiator, u8 *addr) for (tid = 0; tid < MAXTID; tid++) { if (psta->htpriv.agg_enable_bitmap & BIT(tid)) { DBG_88E("tx agg disable tid(%d)\n", tid); - issue_action_BA(padapter, addr, RTW_WLAN_ACTION_DELBA, (((tid << 1) | initiator)&0x1F)); + issue_action_BA(padapter, addr, RTW_WLAN_ACTION_DELBA, (((tid << 1) 
| initiator) & 0x1F)); psta->htpriv.agg_enable_bitmap &= ~BIT(tid); psta->htpriv.candidate_tid_bitmap &= ~BIT(tid); } @@ -1867,9 +1862,8 @@ unsigned int send_delba(struct adapter *padapter, u8 initiator, u8 *addr) unsigned int send_beacon(struct adapter *padapter) { u8 bxmitok = false; - int issue = 0; + int issue = 0; int poll = 0; - unsigned long start = jiffies; rtw_hal_set_hwreg(padapter, HW_VAR_BCN_VALID, NULL); @@ -1908,10 +1902,10 @@ Following are some utility functions for WiFi MLME static void site_survey(struct adapter *padapter) { - unsigned char survey_channel = 0, val8; + unsigned char survey_channel = 0, val8; enum rt_scan_type ScanType = SCAN_PASSIVE; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u32 initialgain = 0; struct rtw_ieee80211_channel *ch; @@ -2016,16 +2010,16 @@ static u8 collect_bss_info(struct adapter *padapter, struct recv_frame *precv_frame, struct wlan_bssid_ex *bssid) { - int i; - u32 len; + int i; + u32 len; u8 *p; u16 val16, subtype; u8 *pframe = precv_frame->pkt->data; u32 packet_len = precv_frame->pkt->len; u8 ie_offset; - struct registry_priv *pregistrypriv = &padapter->registrypriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct registry_priv *pregistrypriv = &padapter->registrypriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; len = packet_len - sizeof(struct ieee80211_hdr_3addr); @@ -2184,12 +2178,12 @@ static u8 collect_bss_info(struct adapter *padapter, static void start_create_ibss(struct adapter *padapter) { - unsigned short caps; + unsigned short caps; u8 val8; u8 join_type; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&(pmlmeinfo->network)); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&pmlmeinfo->network); pmlmeext->cur_channel = (u8)pnetwork->Configuration.DSConfig; pmlmeinfo->bcn_interval = get_beacon_interval(pnetwork); @@ -2200,7 +2194,7 @@ static void start_create_ibss(struct adapter *padapter) /* update capability */ caps = rtw_get_capability((struct wlan_bssid_ex *)pnetwork); update_capinfo(padapter, caps); - if (caps&cap_IBSS) {/* adhoc master */ + if (caps & cap_IBSS) {/* adhoc master */ val8 = 0xcf; rtw_hal_set_hwreg(padapter, HW_VAR_SEC_CFG, (u8 *)(&val8)); @@ -2236,11 +2230,11 @@ static void start_create_ibss(struct adapter *padapter) static void start_clnt_join(struct adapter *padapter) { - unsigned short caps; + unsigned short caps; u8 val8; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&(pmlmeinfo->network)); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&pmlmeinfo->network); int beacon_timeout; pmlmeext->cur_channel = (u8)pnetwork->Configuration.DSConfig; @@ -2252,7 +2246,7 @@ static void start_clnt_join(struct adapter *padapter) /* update capability */ 
caps = rtw_get_capability((struct wlan_bssid_ex *)pnetwork); update_capinfo(padapter, caps); - if (caps&cap_ESS) { + if (caps & cap_ESS) { Set_MSR(padapter, WIFI_FW_STATION_STATE); val8 = (pmlmeinfo->auth_algo == dot11AuthAlgrthm_8021X) ? 0xcc : 0xcf; @@ -2270,7 +2264,7 @@ static void start_clnt_join(struct adapter *padapter) msecs_to_jiffies((REAUTH_TO * REAUTH_LIMIT) + (REASSOC_TO * REASSOC_LIMIT) + beacon_timeout)); pmlmeinfo->state = WIFI_FW_AUTH_NULL | WIFI_FW_STATION_STATE; - } else if (caps&cap_IBSS) { /* adhoc client */ + } else if (caps & cap_IBSS) { /* adhoc client */ Set_MSR(padapter, WIFI_FW_ADHOC_STATE); val8 = 0xcf; @@ -2291,8 +2285,8 @@ static void start_clnt_join(struct adapter *padapter) static void start_clnt_auth(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; del_timer_sync(&pmlmeext->link_timer); @@ -2310,7 +2304,7 @@ static void start_clnt_auth(struct adapter *padapter) /* issue deauth before issuing auth to deal with the situation */ /* Commented by Albert 2012/07/21 */ /* For the Win8 P2P connection, it will be hard to have a successful connection if this Wi-Fi doesn't connect to it. */ - issue_deauth(padapter, (&(pmlmeinfo->network))->MacAddress, WLAN_REASON_DEAUTH_LEAVING); + issue_deauth(padapter, (&pmlmeinfo->network)->MacAddress, WLAN_REASON_DEAUTH_LEAVING); DBG_88E_LEVEL(_drv_info_, "start auth\n"); issue_auth(padapter, NULL, 0); @@ -2320,8 +2314,8 @@ static void start_clnt_auth(struct adapter *padapter) static void start_clnt_assoc(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; del_timer_sync(&pmlmeext->link_timer); @@ -2337,9 +2331,9 @@ static unsigned int receive_disconnect(struct adapter *padapter, unsigned char *MacAddr, unsigned short reason) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; /* check A3 */ if (memcmp(MacAddr, pnetwork->MacAddress, ETH_ALEN)) @@ -2347,7 +2341,7 @@ static unsigned int receive_disconnect(struct adapter *padapter, DBG_88E("%s\n", __func__); - if ((pmlmeinfo->state&0x03) == WIFI_FW_STATION_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_STATION_STATE) { if (pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) { pmlmeinfo->state = WIFI_FW_NULL_STATE; report_del_sta_event(padapter, MacAddr, reason); @@ -2511,12 +2505,12 @@ Following are the callback functions for each subtype of the management frames static unsigned int OnProbeReq(struct adapter *padapter, struct recv_frame *precv_frame) { - unsigned int ielen; - unsigned char *p; + unsigned int ielen; + unsigned char *p; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *cur = &(pmlmeinfo->network); + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *cur = &pmlmeinfo->network; u8 
*pframe = precv_frame->pkt->data; uint len = precv_frame->pkt->len; @@ -2546,7 +2540,7 @@ static unsigned int OnProbeReq(struct adapter *padapter, static unsigned int OnProbeRsp(struct adapter *padapter, struct recv_frame *precv_frame) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; if (pmlmeext->sitesurvey_res.state == SCAN_PROCESS) { report_survey_event(padapter, precv_frame); @@ -2560,16 +2554,16 @@ static unsigned int OnBeacon(struct adapter *padapter, struct recv_frame *precv_frame) { int cam_idx; - struct sta_info *psta; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct sta_info *psta; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct sta_priv *pstapriv = &padapter->stapriv; + struct sta_priv *pstapriv = &padapter->stapriv; u8 *pframe = precv_frame->pkt->data; uint len = precv_frame->pkt->len; struct wlan_bssid_ex *pbss; int ret = _SUCCESS; - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; if (pmlmeext->sitesurvey_res.state == SCAN_PROCESS) { report_survey_event(padapter, precv_frame); @@ -2582,8 +2576,8 @@ static unsigned int OnBeacon(struct adapter *padapter, pbss = (struct wlan_bssid_ex *)rtw_malloc(sizeof(struct wlan_bssid_ex)); if (pbss) { if (collect_bss_info(padapter, precv_frame, pbss) == _SUCCESS) { - update_network(&(pmlmepriv->cur_network.network), pbss, padapter, true); - rtw_get_bcn_info(&(pmlmepriv->cur_network)); + update_network(&pmlmepriv->cur_network.network, pbss, padapter, true); + rtw_get_bcn_info(&pmlmepriv->cur_network); } kfree(pbss); } @@ -2600,7 +2594,7 @@ static unsigned int OnBeacon(struct adapter *padapter, return _SUCCESS; } - if (((pmlmeinfo->state&0x03) == WIFI_FW_STATION_STATE) && (pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS)) { + if (((pmlmeinfo->state & 0x03) == WIFI_FW_STATION_STATE) && (pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS)) { psta = rtw_get_stainfo(pstapriv, GetAddr2Ptr(pframe)); if (psta != NULL) { ret = rtw_check_bcn_info(padapter, pframe, len); @@ -2614,7 +2608,7 @@ static unsigned int OnBeacon(struct adapter *padapter, if ((sta_rx_pkts(psta) & 0xf) == 0) update_beacon_info(padapter, pframe, len, psta); } - } else if ((pmlmeinfo->state&0x03) == WIFI_FW_ADHOC_STATE) { + } else if ((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) { psta = rtw_get_stainfo(pstapriv, GetAddr2Ptr(pframe)); if (psta != NULL) { /* update WMM, ERP in the beacon */ @@ -2651,21 +2645,21 @@ _END_ONBEACON_: static unsigned int OnAuth(struct adapter *padapter, struct recv_frame *precv_frame) { - unsigned int auth_mode, ie_len; + unsigned int auth_mode, ie_len; u16 seq; - unsigned char *sa, *p; + unsigned char *sa, *p; u16 algorithm; - int status; + int status; static struct sta_info stat; - struct sta_info *pstat = NULL; - struct sta_priv *pstapriv = &padapter->stapriv; + struct sta_info *pstat = NULL; + struct sta_priv *pstapriv = &padapter->stapriv; struct security_priv *psecuritypriv = &padapter->securitypriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 *pframe = precv_frame->pkt->data; uint len = precv_frame->pkt->len; - if 
((pmlmeinfo->state&0x03) != WIFI_FW_AP_STATE) + if ((pmlmeinfo->state & 0x03) != WIFI_FW_AP_STATE) return _FAIL; DBG_88E("+%s\n", __func__); @@ -2820,18 +2814,18 @@ auth_fail: static unsigned int OnAuthClient(struct adapter *padapter, struct recv_frame *precv_frame) { - unsigned int seq, len, status, offset; - unsigned char *p; - unsigned int go2asoc = 0; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + unsigned int seq, len, status, offset; + unsigned char *p; + unsigned int go2asoc = 0; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 *pframe = precv_frame->pkt->data; uint pkt_len = precv_frame->pkt->len; DBG_88E("%s\n", __func__); /* check A1 matches or not */ - if (memcmp(myid(&(padapter->eeprompriv)), get_da(pframe), ETH_ALEN)) + if (memcmp(myid(&padapter->eeprompriv), get_da(pframe), ETH_ALEN)) return _SUCCESS; if (!(pmlmeinfo->state & WIFI_FW_AUTH_STATE)) @@ -2899,24 +2893,24 @@ static unsigned int OnAssocReq(struct adapter *padapter, #ifdef CONFIG_88EU_AP_MODE u16 capab_info; struct rtw_ieee802_11_elems elems; - struct sta_info *pstat; - unsigned char reassoc, *p, *pos, *wpa_ie; + struct sta_info *pstat; + unsigned char reassoc, *p, *pos, *wpa_ie; unsigned char WMM_IE[] = {0x00, 0x50, 0xf2, 0x02, 0x00, 0x01}; - int i, wpa_ie_len, left; - unsigned char supportRate[16]; - int supportRateNum; - unsigned short status = _STATS_SUCCESSFUL_; - unsigned short frame_type, ie_offset = 0; + int i, wpa_ie_len, left; + unsigned char supportRate[16]; + int supportRateNum; + unsigned short status = _STATS_SUCCESSFUL_; + unsigned short frame_type, ie_offset = 0; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; struct security_priv *psecuritypriv = &padapter->securitypriv; struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *cur = &(pmlmeinfo->network); + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *cur = &pmlmeinfo->network; struct sta_priv *pstapriv = &padapter->stapriv; u8 *pframe = precv_frame->pkt->data; uint ie_len, pkt_len = precv_frame->pkt->len; - if ((pmlmeinfo->state&0x03) != WIFI_FW_AP_STATE) + if ((pmlmeinfo->state & 0x03) != WIFI_FW_AP_STATE) return _FAIL; frame_type = GetFrameSubType(pframe); @@ -3041,8 +3035,8 @@ static unsigned int OnAssocReq(struct adapter *padapter, pstat->dot8021xalg = 1;/* psk, todo:802.1x */ pstat->wpa_psk |= BIT(1); - pstat->wpa2_group_cipher = group_cipher&psecuritypriv->wpa2_group_cipher; - pstat->wpa2_pairwise_cipher = pairwise_cipher&psecuritypriv->wpa2_pairwise_cipher; + pstat->wpa2_group_cipher = group_cipher & psecuritypriv->wpa2_group_cipher; + pstat->wpa2_pairwise_cipher = pairwise_cipher & psecuritypriv->wpa2_pairwise_cipher; if (!pstat->wpa2_group_cipher) status = WLAN_STATUS_INVALID_GROUP_CIPHER; @@ -3062,8 +3056,8 @@ static unsigned int OnAssocReq(struct adapter *padapter, pstat->dot8021xalg = 1;/* psk, todo:802.1x */ pstat->wpa_psk |= BIT(0); - pstat->wpa_group_cipher = group_cipher&psecuritypriv->wpa_group_cipher; - pstat->wpa_pairwise_cipher = pairwise_cipher&psecuritypriv->wpa_pairwise_cipher; + pstat->wpa_group_cipher = group_cipher & psecuritypriv->wpa_group_cipher; + pstat->wpa_pairwise_cipher = pairwise_cipher & psecuritypriv->wpa_pairwise_cipher; if (!pstat->wpa_group_cipher) status = WLAN_STATUS_INVALID_GROUP_CIPHER; @@ -3159,30 +3153,30 @@ static unsigned int 
OnAssocReq(struct adapter *padapter, pstat->qos_option = 1; pstat->qos_info = *(p+8); - pstat->max_sp_len = (pstat->qos_info>>5)&0x3; + pstat->max_sp_len = (pstat->qos_info>>5) & 0x3; - if ((pstat->qos_info&0xf) != 0xf) + if ((pstat->qos_info & 0xf) != 0xf) pstat->has_legacy_ac = true; else pstat->has_legacy_ac = false; - if (pstat->qos_info&0xf) { - if (pstat->qos_info&BIT(0)) + if (pstat->qos_info & 0xf) { + if (pstat->qos_info & BIT(0)) pstat->uapsd_vo = BIT(0)|BIT(1); else pstat->uapsd_vo = 0; - if (pstat->qos_info&BIT(1)) + if (pstat->qos_info & BIT(1)) pstat->uapsd_vi = BIT(0)|BIT(1); else pstat->uapsd_vi = 0; - if (pstat->qos_info&BIT(2)) + if (pstat->qos_info & BIT(2)) pstat->uapsd_bk = BIT(0)|BIT(1); else pstat->uapsd_bk = 0; - if (pstat->qos_info&BIT(3)) + if (pstat->qos_info & BIT(3)) pstat->uapsd_be = BIT(0)|BIT(1); else pstat->uapsd_be = 0; @@ -3209,14 +3203,14 @@ static unsigned int OnAssocReq(struct adapter *padapter, } else { pstat->flags &= ~WLAN_STA_HT; } - if ((!pmlmepriv->htpriv.ht_option) && (pstat->flags&WLAN_STA_HT)) { + if ((!pmlmepriv->htpriv.ht_option) && (pstat->flags & WLAN_STA_HT)) { status = _STATS_FAILURE_; goto OnAssocReqFail; } if ((pstat->flags & WLAN_STA_HT) && - ((pstat->wpa2_pairwise_cipher&WPA_CIPHER_TKIP) || - (pstat->wpa_pairwise_cipher&WPA_CIPHER_TKIP))) { + ((pstat->wpa2_pairwise_cipher & WPA_CIPHER_TKIP) || + (pstat->wpa_pairwise_cipher & WPA_CIPHER_TKIP))) { DBG_88E("HT: %pM tried to " "use TKIP with HT association\n", pstat->hwaddr); @@ -3333,19 +3327,18 @@ static unsigned int OnAssocRsp(struct adapter *padapter, { uint i; int res; - unsigned short status; + unsigned short status; struct ndis_802_11_var_ie *pIE; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - /* struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); */ + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 *pframe = precv_frame->pkt->data; uint pkt_len = precv_frame->pkt->len; DBG_88E("%s\n", __func__); /* check A1 matches or not */ - if (memcmp(myid(&(padapter->eeprompriv)), get_da(pframe), ETH_ALEN)) + if (memcmp(myid(&padapter->eeprompriv), get_da(pframe), ETH_ALEN)) return _SUCCESS; if (!(pmlmeinfo->state & (WIFI_FW_AUTH_SUCCESS | WIFI_FW_ASSOC_STATE))) @@ -3372,7 +3365,7 @@ static unsigned int OnAssocRsp(struct adapter *padapter, pmlmeinfo->slotTime = (pmlmeinfo->capability & BIT(10)) ? 
9 : 20; /* AID */ - pmlmeinfo->aid = (int)(le16_to_cpu(*(__le16 *)(pframe + WLAN_HDR_A3_LEN + 4))&0x3fff); + pmlmeinfo->aid = (int)(le16_to_cpu(*(__le16 *)(pframe + WLAN_HDR_A3_LEN + 4)) & 0x3fff); res = pmlmeinfo->aid; /* following are moved to join event callback function */ @@ -3420,12 +3413,12 @@ report_assoc_result: static unsigned int OnDeAuth(struct adapter *padapter, struct recv_frame *precv_frame) { - unsigned short reason; + unsigned short reason; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 *pframe = precv_frame->pkt->data; - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; /* check A3 */ if (memcmp(GetAddr3Ptr(pframe), pnetwork->MacAddress, ETH_ALEN)) @@ -3476,10 +3469,10 @@ static unsigned int OnDisassoc(struct adapter *padapter, { u16 reason; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 *pframe = precv_frame->pkt->data; - struct wlan_bssid_ex *pnetwork = &(pmlmeinfo->network); + struct wlan_bssid_ex *pnetwork = &pmlmeinfo->network; /* check A3 */ if (memcmp(GetAddr3Ptr(pframe), pnetwork->MacAddress, ETH_ALEN)) @@ -3588,21 +3581,22 @@ static unsigned int OnAction_back(struct adapter *padapter, u8 *addr; struct sta_info *psta = NULL; struct recv_reorder_ctrl *preorder_ctrl; - unsigned char *frame_body; - unsigned char category, action; - unsigned short tid, status, reason_code = 0; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + unsigned char *frame_body; + unsigned char category, action; + unsigned short tid, status, reason_code = 0; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 *pframe = precv_frame->pkt->data; struct sta_priv *pstapriv = &padapter->stapriv; + /* check RA matches or not */ - if (memcmp(myid(&(padapter->eeprompriv)), GetAddr1Ptr(pframe), + if (memcmp(myid(&padapter->eeprompriv), GetAddr1Ptr(pframe), ETH_ALEN))/* for if1, sta/ap mode */ return _SUCCESS; DBG_88E("%s\n", __func__); - if ((pmlmeinfo->state&0x03) != WIFI_FW_AP_STATE) + if ((pmlmeinfo->state & 0x03) != WIFI_FW_AP_STATE) if (!(pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS)) return _SUCCESS; @@ -3622,8 +3616,8 @@ static unsigned int OnAction_back(struct adapter *padapter, DBG_88E("%s, action=%d\n", __func__, action); switch (action) { case RTW_WLAN_ACTION_ADDBA_REQ: /* ADDBA request */ - memcpy(&(pmlmeinfo->ADDBA_req), &(frame_body[2]), sizeof(struct ADDBA_request)); - process_addba_req(padapter, (u8 *)&(pmlmeinfo->ADDBA_req), addr); + memcpy(&pmlmeinfo->ADDBA_req, &frame_body[2], sizeof(struct ADDBA_request)); + process_addba_req(padapter, (u8 *)&pmlmeinfo->ADDBA_req, addr); /* 37 = reject ADDBA Req */ issue_action_BA(padapter, addr, @@ -3665,9 +3659,9 @@ static unsigned int OnAction_back(struct adapter *padapter, static s32 rtw_action_public_decache(struct recv_frame *recv_frame, s32 token) { struct adapter *adapter = recv_frame->adapter; - struct mlme_ext_priv *mlmeext = &(adapter->mlmeextpriv); + struct 
mlme_ext_priv *mlmeext = &adapter->mlmeextpriv; u8 *frame = recv_frame->pkt->data; - u16 seq_ctrl = ((recv_frame->attrib.seq_num&0xffff) << 4) | + u16 seq_ctrl = ((recv_frame->attrib.seq_num & 0xffff) << 4) | (recv_frame->attrib.frag_num & 0xf); if (GetRetry(frame)) { @@ -3749,7 +3743,7 @@ static unsigned int on_action_public(struct adapter *padapter, u8 category, action; /* check RA matches or not */ - if (memcmp(myid(&(padapter->eeprompriv)), GetAddr1Ptr(pframe), ETH_ALEN)) + if (memcmp(myid(&padapter->eeprompriv), GetAddr1Ptr(pframe), ETH_ALEN)) goto exit; category = frame_body[0]; @@ -3812,9 +3806,9 @@ static unsigned int OnAction(struct adapter *padapter, struct recv_frame *precv_frame) { int i; - unsigned char category; + unsigned char category; struct action_handler *ptable; - unsigned char *frame_body; + unsigned char *frame_body; u8 *pframe = precv_frame->pkt->data; frame_body = (unsigned char *)(pframe + sizeof(struct ieee80211_hdr_3addr)); @@ -3854,7 +3848,7 @@ static struct mlme_handler mlme_sta_tbl[] = { int init_hw_mlme_ext(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; set_channel_bwmode(padapter, pmlmeext->cur_channel, pmlmeext->cur_ch_offset, pmlmeext->cur_bwmode); return _SUCCESS; @@ -3862,14 +3856,14 @@ int init_hw_mlme_ext(struct adapter *padapter) static void init_mlme_ext_priv_value(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - unsigned char mixed_datarate[NumRates] = { + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + unsigned char mixed_datarate[NumRates] = { _1M_RATE_, _2M_RATE_, _5M_RATE_, _11M_RATE_, _6M_RATE_, _9M_RATE_, _12M_RATE_, _18M_RATE_, _24M_RATE_, _36M_RATE_, - _48M_RATE_, _54M_RATE_, 0xff + _48M_RATE_, _54M_RATE_, 0xff }; - unsigned char mixed_basicrate[NumRates] = { + unsigned char mixed_basicrate[NumRates] = { _1M_RATE_, _2M_RATE_, _5M_RATE_, _11M_RATE_, _6M_RATE_, _12M_RATE_, _24M_RATE_, 0xff, }; @@ -4025,12 +4019,12 @@ static u8 init_channel_set(struct adapter *padapter, u8 ChannelPlan, return chanset_size; } -int init_mlme_ext_priv(struct adapter *padapter) +int init_mlme_ext_priv(struct adapter *padapter) { struct registry_priv *pregistrypriv = &padapter->registrypriv; struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; pmlmeext->padapter = padapter; @@ -4089,7 +4083,7 @@ void mgt_dispatcher(struct adapter *padapter, struct recv_frame *precv_frame) struct mlme_handler *ptable; #ifdef CONFIG_88EU_AP_MODE struct mlme_priv *pmlmepriv = &padapter->mlmepriv; -#endif /* CONFIG_88EU_AP_MODE */ +#endif u8 bc_addr[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; u8 *pframe = precv_frame->pkt->data; struct sta_info *psta = rtw_get_stainfo(&padapter->stapriv, GetAddr2Ptr(pframe)); @@ -4170,7 +4164,7 @@ void report_survey_event(struct adapter *padapter, struct cmd_obj *pcmd_obj; u8 *pevtcmd; u32 cmdsz; - struct survey_event *psurvey_evt; + struct survey_event *psurvey_evt; struct C2HEvent_Header *pc2h_evt_hdr; struct mlme_ext_priv *pmlmeext; struct cmd_priv *pcmdpriv; @@ -4227,8 +4221,8 @@ void report_surveydone_event(struct adapter *padapter) u8 *pevtcmd; 
u32 cmdsz; struct surveydone_event *psurveydone_evt; - struct C2HEvent_Header *pc2h_evt_hdr; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct C2HEvent_Header *pc2h_evt_hdr; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; struct cmd_priv *pcmdpriv = &padapter->cmdpriv; pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); @@ -4269,10 +4263,10 @@ void report_join_res(struct adapter *padapter, int res) struct cmd_obj *pcmd_obj; u8 *pevtcmd; u32 cmdsz; - struct joinbss_event *pjoinbss_evt; - struct C2HEvent_Header *pc2h_evt_hdr; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct joinbss_event *pjoinbss_evt; + struct C2HEvent_Header *pc2h_evt_hdr; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; struct cmd_priv *pcmdpriv = &padapter->cmdpriv; pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); @@ -4301,7 +4295,7 @@ void report_join_res(struct adapter *padapter, int res) pc2h_evt_hdr->seq = atomic_inc_return(&pmlmeext->event_seq); pjoinbss_evt = (struct joinbss_event *)(pevtcmd + sizeof(struct C2HEvent_Header)); - memcpy((unsigned char *)(&(pjoinbss_evt->network.network)), &(pmlmeinfo->network), sizeof(struct wlan_bssid_ex)); + memcpy((unsigned char *)(&pjoinbss_evt->network.network), &pmlmeinfo->network, sizeof(struct wlan_bssid_ex)); pjoinbss_evt->network.join_res = res; pjoinbss_evt->network.aid = res; @@ -4319,10 +4313,10 @@ void report_del_sta_event(struct adapter *padapter, unsigned char *MacAddr, u8 *pevtcmd; u32 cmdsz; struct sta_info *psta; - int mac_id; - struct stadel_event *pdel_sta_evt; - struct C2HEvent_Header *pc2h_evt_hdr; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + int mac_id; + struct stadel_event *pdel_sta_evt; + struct C2HEvent_Header *pc2h_evt_hdr; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; struct cmd_priv *pcmdpriv = &padapter->cmdpriv; pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); @@ -4351,7 +4345,7 @@ void report_del_sta_event(struct adapter *padapter, unsigned char *MacAddr, pc2h_evt_hdr->seq = atomic_inc_return(&pmlmeext->event_seq); pdel_sta_evt = (struct stadel_event *)(pevtcmd + sizeof(struct C2HEvent_Header)); - ether_addr_copy((unsigned char *)(&(pdel_sta_evt->macaddr)), MacAddr); + ether_addr_copy((unsigned char *)(&pdel_sta_evt->macaddr), MacAddr); memcpy((unsigned char *)(pdel_sta_evt->rsvd), (unsigned char *)(&reason), 2); psta = rtw_get_stainfo(&padapter->stapriv, MacAddr); @@ -4373,9 +4367,9 @@ void report_add_sta_event(struct adapter *padapter, unsigned char *MacAddr, struct cmd_obj *pcmd_obj; u8 *pevtcmd; u32 cmdsz; - struct stassoc_event *padd_sta_evt; - struct C2HEvent_Header *pc2h_evt_hdr; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct stassoc_event *padd_sta_evt; + struct C2HEvent_Header *pc2h_evt_hdr; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; struct cmd_priv *pcmdpriv = &padapter->cmdpriv; pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); @@ -4404,7 +4398,7 @@ void report_add_sta_event(struct adapter *padapter, unsigned char *MacAddr, pc2h_evt_hdr->seq = atomic_inc_return(&pmlmeext->event_seq); padd_sta_evt = (struct stassoc_event *)(pevtcmd + sizeof(struct C2HEvent_Header)); - ether_addr_copy((unsigned char *)(&(padd_sta_evt->macaddr)), MacAddr); + ether_addr_copy((unsigned char *)(&padd_sta_evt->macaddr), MacAddr); padd_sta_evt->cam_id = cam_idx; DBG_88E("%s: add STA\n", __func__); 
@@ -4421,9 +4415,9 @@ Following are the event callback functions /* for sta/adhoc mode */ void update_sta_info(struct adapter *padapter, struct sta_info *psta) { - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; /* ERP */ VCS_update(padapter, psta); @@ -4461,11 +4455,11 @@ void update_sta_info(struct adapter *padapter, struct sta_info *psta) void mlmeext_joinbss_event_callback(struct adapter *padapter, int join_res) { - struct sta_info *psta, *psta_bmc; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); - struct sta_priv *pstapriv = &padapter->stapriv; + struct sta_info *psta, *psta_bmc; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *cur_network = &pmlmeinfo->network; + struct sta_priv *pstapriv = &padapter->stapriv; u8 join_type; u16 media_status; @@ -4480,7 +4474,7 @@ void mlmeext_joinbss_event_callback(struct adapter *padapter, int join_res) goto exit_mlmeext_joinbss_event_callback; } - if ((pmlmeinfo->state&0x03) == WIFI_FW_ADHOC_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) { /* for bc/mc */ psta_bmc = rtw_get_bcmc_stainfo(padapter); if (psta_bmc) { @@ -4528,7 +4522,7 @@ void mlmeext_joinbss_event_callback(struct adapter *padapter, int join_res) join_type = 2; rtw_hal_set_hwreg(padapter, HW_VAR_MLME_JOIN, (u8 *)(&join_type)); - if ((pmlmeinfo->state&0x03) == WIFI_FW_STATION_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_STATION_STATE) { /* correcting TSF */ correct_TSF(padapter, pmlmeext); } @@ -4541,13 +4535,13 @@ exit_mlmeext_joinbss_event_callback: void mlmeext_sta_add_event_callback(struct adapter *padapter, struct sta_info *psta) { - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 join_type; DBG_88E("%s\n", __func__); - if ((pmlmeinfo->state&0x03) == WIFI_FW_ADHOC_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) { if (pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) {/* adhoc master or sta_count>1 */ /* nothing to do */ } else { /* adhoc client */ @@ -4578,8 +4572,8 @@ void mlmeext_sta_add_event_callback(struct adapter *padapter, struct sta_info *p void mlmeext_sta_del_event_callback(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (is_client_associated_to_ap(padapter) || is_IBSS_empty(padapter)) { rtw_hal_set_hwreg(padapter, HW_VAR_MLME_DISCONNECT, NULL); @@ -4630,12 +4624,12 @@ static u8 chk_ap_is_alive(struct adapter *padapter, struct sta_info *psta) void linked_status_chk(struct adapter *padapter) { - u32 i; - struct sta_info *psta; - struct xmit_priv *pxmitpriv = &(padapter->xmitpriv); - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct 
sta_priv *pstapriv = &padapter->stapriv; + u32 i; + struct sta_info *psta; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct sta_priv *pstapriv = &padapter->stapriv; if (is_client_associated_to_ap(padapter)) { /* linked infrastructure client mode */ @@ -4750,10 +4744,10 @@ void survey_timer_hdl(struct timer_list *t) { struct adapter *padapter = from_timer(padapter, t, mlmeextpriv.survey_timer); - struct cmd_obj *ph2c; - struct sitesurvey_parm *psurveyPara; - struct cmd_priv *pcmdpriv = &padapter->cmdpriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct cmd_obj *ph2c; + struct sitesurvey_parm *psurveyPara; + struct cmd_priv *pcmdpriv = &padapter->cmdpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; /* issue rtw_sitesurvey_cmd */ if (pmlmeext->sitesurvey_res.state > SCAN_START) { @@ -4790,8 +4784,8 @@ void link_timer_hdl(struct timer_list *t) { struct adapter *padapter = from_timer(padapter, t, mlmeextpriv.link_timer); - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (pmlmeinfo->state & WIFI_FW_AUTH_NULL) { DBG_88E("%s:no beacon while connecting\n", __func__); @@ -4826,7 +4820,7 @@ void link_timer_hdl(struct timer_list *t) void addba_timer_hdl(struct timer_list *t) { struct sta_info *psta = from_timer(psta, t, addba_retry_timer); - struct ht_priv *phtpriv; + struct ht_priv *phtpriv; if (!psta) return; @@ -4842,8 +4836,8 @@ void addba_timer_hdl(struct timer_list *t) u8 setopmode_hdl(struct adapter *padapter, u8 *pbuf) { u8 type; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; struct setopmode_parm *psetop = (struct setopmode_parm *)pbuf; if (psetop->mode == Ndis802_11APMode) { @@ -4867,11 +4861,10 @@ u8 setopmode_hdl(struct adapter *padapter, u8 *pbuf) u8 createbss_hdl(struct adapter *padapter, u8 *pbuf) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&(pmlmeinfo->network)); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&pmlmeinfo->network); struct wlan_bssid_ex *pparm = (struct wlan_bssid_ex *)pbuf; - /* u32 initialgain; */ if (pparm->InfrastructureMode == Ndis802_11APMode) { #ifdef CONFIG_88EU_AP_MODE @@ -4929,10 +4922,10 @@ u8 join_cmd_hdl(struct adapter *padapter, u8 *pbuf) { u8 join_type; struct ndis_802_11_var_ie *pIE; - struct registry_priv *pregpriv = &padapter->registrypriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&(pmlmeinfo->network)); + struct registry_priv *pregpriv = &padapter->registrypriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&pmlmeinfo->network); struct wlan_bssid_ex *pparm = (struct 
wlan_bssid_ex *)pbuf; u32 i; @@ -5041,9 +5034,9 @@ u8 join_cmd_hdl(struct adapter *padapter, u8 *pbuf) u8 disconnect_hdl(struct adapter *padapter, unsigned char *pbuf) { struct disconnect_parm *param = (struct disconnect_parm *)pbuf; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&(pmlmeinfo->network)); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&pmlmeinfo->network); u8 val8; if (is_client_associated_to_ap(padapter)) @@ -5055,7 +5048,7 @@ u8 disconnect_hdl(struct adapter *padapter, unsigned char *pbuf) /* restore to initial setting. */ update_tx_basic_rate(padapter, padapter->registrypriv.wireless_mode); - if (((pmlmeinfo->state&0x03) == WIFI_FW_ADHOC_STATE) || ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE)) { + if (((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) || ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE)) { /* Stop BCN */ val8 = 0; rtw_hal_set_hwreg(padapter, HW_VAR_BCN_FUNC, (u8 *)(&val8)); @@ -5088,7 +5081,7 @@ static int rtw_scan_ch_decision(struct adapter *padapter, { int i, j; int set_idx; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; /* clear out first */ memset(out, 0, sizeof(struct rtw_ieee80211_channel)*out_num); @@ -5127,12 +5120,12 @@ static int rtw_scan_ch_decision(struct adapter *padapter, u8 sitesurvey_cmd_hdl(struct adapter *padapter, u8 *pbuf) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct sitesurvey_parm *pparm = (struct sitesurvey_parm *)pbuf; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct sitesurvey_parm *pparm = (struct sitesurvey_parm *)pbuf; u8 bdelayscan = false; u8 val8; - u32 initialgain; - u32 i; + u32 initialgain; + u32 i; if (pmlmeext->sitesurvey_res.state == SCAN_DISABLE) { /* for first time sitesurvey_cmd */ @@ -5199,22 +5192,22 @@ u8 sitesurvey_cmd_hdl(struct adapter *padapter, u8 *pbuf) u8 setauth_hdl(struct adapter *padapter, unsigned char *pbuf) { - struct setauth_parm *pparm = (struct setauth_parm *)pbuf; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct setauth_parm *pparm = (struct setauth_parm *)pbuf; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (pparm->mode < 4) pmlmeinfo->auth_algo = pparm->mode; - return H2C_SUCCESS; + return H2C_SUCCESS; } u8 setkey_hdl(struct adapter *padapter, u8 *pbuf) { - unsigned short ctrl; - struct setkey_parm *pparm = (struct setkey_parm *)pbuf; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - unsigned char null_sta[] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; + unsigned short ctrl; + struct setkey_parm *pparm = (struct setkey_parm *)pbuf; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + u8 null_sta[ETH_ALEN] = {}; /* main tx key for wep. 
*/ if (pparm->set_tx) @@ -5234,9 +5227,9 @@ u8 set_stakey_hdl(struct adapter *padapter, u8 *pbuf) { u16 ctrl = 0; u8 cam_id;/* cam_entry */ - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct set_stakey_parm *pparm = (struct set_stakey_parm *)pbuf; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct set_stakey_parm *pparm = (struct set_stakey_parm *)pbuf; /* cam_entry: */ /* 0~3 for default key */ @@ -5255,7 +5248,7 @@ u8 set_stakey_hdl(struct adapter *padapter, u8 *pbuf) DBG_88E_LEVEL(_drv_info_, "set pairwise key to hw: alg:%d(WEP40-1 WEP104-5 TKIP-2 AES-4) camid:%d\n", pparm->algorithm, cam_id); - if ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE) { + if ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE) { struct sta_info *psta; struct sta_priv *pstapriv = &padapter->stapriv; @@ -5303,33 +5296,32 @@ u8 set_stakey_hdl(struct adapter *padapter, u8 *pbuf) u8 add_ba_hdl(struct adapter *padapter, unsigned char *pbuf) { - struct addBaReq_parm *pparm = (struct addBaReq_parm *)pbuf; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - + struct addBaReq_parm *pparm = (struct addBaReq_parm *)pbuf; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; struct sta_info *psta = rtw_get_stainfo(&padapter->stapriv, pparm->addr); if (!psta) - return H2C_SUCCESS; + return H2C_SUCCESS; if (((pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) && (pmlmeinfo->HT_enable)) || - ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE)) { + ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE)) { issue_action_BA(padapter, pparm->addr, RTW_WLAN_ACTION_ADDBA_REQ, (u16)pparm->tid); mod_timer(&psta->addba_retry_timer, jiffies + msecs_to_jiffies(ADDBA_TO)); } else { psta->htpriv.candidate_tid_bitmap &= ~BIT(pparm->tid); } - return H2C_SUCCESS; + return H2C_SUCCESS; } u8 set_tx_beacon_cmd(struct adapter *padapter) { - struct cmd_obj *ph2c; - struct wlan_bssid_ex *ptxBeacon_parm; - struct cmd_priv *pcmdpriv = &(padapter->cmdpriv); - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct cmd_obj *ph2c; + struct wlan_bssid_ex *ptxBeacon_parm; + struct cmd_priv *pcmdpriv = &padapter->cmdpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; u8 res = _SUCCESS; int len_diff = 0; @@ -5339,7 +5331,7 @@ u8 set_tx_beacon_cmd(struct adapter *padapter) goto exit; } - ptxBeacon_parm = kmemdup(&(pmlmeinfo->network), + ptxBeacon_parm = kmemdup(&pmlmeinfo->network, sizeof(struct wlan_bssid_ex), GFP_ATOMIC); if (ptxBeacon_parm == NULL) { kfree(ph2c); @@ -5364,12 +5356,12 @@ u8 mlme_evt_hdl(struct adapter *padapter, unsigned char *pbuf) { u8 evt_code; u16 evt_sz; - uint *peventbuf; + uint *peventbuf; void (*event_callback)(struct adapter *dev, u8 *pbuf); peventbuf = (uint *)pbuf; - evt_sz = (u16)(*peventbuf&0xffff); - evt_code = (u8)((*peventbuf>>16)&0xff); + evt_sz = (u16)(*peventbuf & 0xffff); + evt_code = (u8)((*peventbuf>>16) & 0xff); /* checking if event code is valid */ if (evt_code >= MAX_C2HEVT) { @@ -5408,14 +5400,14 @@ u8 tx_beacon_hdl(struct adapter *padapter, unsigned char *pbuf) struct sta_info *psta_bmc; struct list_head *xmitframe_plist, *xmitframe_phead; struct xmit_frame *pxmitframe = NULL; - struct sta_priv *pstapriv = 
&padapter->stapriv; + struct sta_priv *pstapriv = &padapter->stapriv; /* for BC/MC Frames */ psta_bmc = rtw_get_bcmc_stainfo(padapter); if (!psta_bmc) return H2C_SUCCESS; - if ((pstapriv->tim_bitmap&BIT(0)) && (psta_bmc->sleepq_len > 0)) { + if ((pstapriv->tim_bitmap & BIT(0)) && (psta_bmc->sleepq_len > 0)) { msleep(10);/* 10ms, ATIM(HIQ) Windows */ spin_lock_bh(&psta_bmc->sleep_q.lock); @@ -5454,7 +5446,7 @@ u8 tx_beacon_hdl(struct adapter *padapter, unsigned char *pbuf) u8 set_ch_hdl(struct adapter *padapter, u8 *pbuf) { struct set_ch_parm *set_ch_parm; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; if (!pbuf) return H2C_PARAMETERS_ERROR; @@ -5471,13 +5463,13 @@ u8 set_ch_hdl(struct adapter *padapter, u8 *pbuf) set_channel_bwmode(padapter, set_ch_parm->ch, set_ch_parm->ch_offset, set_ch_parm->bw); - return H2C_SUCCESS; + return H2C_SUCCESS; } u8 set_chplan_hdl(struct adapter *padapter, unsigned char *pbuf) { struct SetChannelPlan_param *setChannelPlan_param; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; if (!pbuf) return H2C_PARAMETERS_ERROR; @@ -5487,5 +5479,5 @@ u8 set_chplan_hdl(struct adapter *padapter, unsigned char *pbuf) pmlmeext->max_chan_nums = init_channel_set(padapter, setChannelPlan_param->channel_plan, pmlmeext->channel_set); init_channel_list(padapter, pmlmeext->channel_set, pmlmeext->max_chan_nums, &pmlmeext->channel_list); - return H2C_SUCCESS; + return H2C_SUCCESS; } diff --git a/drivers/staging/rtl8188eu/core/rtw_pwrctrl.c b/drivers/staging/rtl8188eu/core/rtw_pwrctrl.c index 9764e85c000c..6a846d08d449 100644 --- a/drivers/staging/rtl8188eu/core/rtw_pwrctrl.c +++ b/drivers/staging/rtl8188eu/core/rtw_pwrctrl.c @@ -47,7 +47,7 @@ static int rtw_hw_suspend(struct adapter *padapter) if (check_fwstate(pmlmepriv, _FW_LINKED)) { _clr_fwstate_(pmlmepriv, _FW_LINKED); - LedControl8188eu(padapter, LED_CTL_NO_LINK); + led_control_8188eu(padapter, LED_CTL_NO_LINK); rtw_os_indicate_disconnect(padapter); diff --git a/drivers/staging/rtl8188eu/core/rtw_recv.c b/drivers/staging/rtl8188eu/core/rtw_recv.c index dc447cc78c32..1d83affc08ce 100644 --- a/drivers/staging/rtl8188eu/core/rtw_recv.c +++ b/drivers/staging/rtl8188eu/core/rtw_recv.c @@ -650,8 +650,8 @@ int sta2sta_data_frame(struct adapter *adapter, struct recv_frame *precv_frame, u8 *sta_addr = NULL; bool mcast = is_multicast_ether_addr(pattrib->dst); - if ((check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true) || - (check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) == true)) { + if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) || + check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE)) { /* filter packets that SA is myself or multicast or broadcast */ if (!memcmp(myhwaddr, pattrib->src, ETH_ALEN)) { RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, (" SA==myself\n")); @@ -729,9 +729,9 @@ static int ap2sta_data_frame( u8 *myhwaddr = myid(&adapter->eeprompriv); bool mcast = is_multicast_ether_addr(pattrib->dst); - if ((check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) && - (check_fwstate(pmlmepriv, _FW_LINKED) == true || - check_fwstate(pmlmepriv, _FW_UNDER_LINKING))) { + if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) && + (check_fwstate(pmlmepriv, _FW_LINKED) || + check_fwstate(pmlmepriv, _FW_UNDER_LINKING))) { /* filter packets that SA is myself or multicast or broadcast */ if (!memcmp(myhwaddr, pattrib->src, ETH_ALEN)) { RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, (" SA==myself\n")); @@ -817,7 
+817,7 @@ static int sta2ap_data_frame(struct adapter *adapter, unsigned char *mybssid = get_bssid(pmlmepriv); int ret = _SUCCESS; - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_AP_STATE)) { /* For AP mode, RA = BSSID, TX = STA(SRC_ADDR), A3 = DST_ADDR */ if (memcmp(pattrib->bssid, mybssid, ETH_ALEN)) { ret = _FAIL; @@ -949,7 +949,7 @@ static int validate_recv_ctrl_frame(struct adapter *padapter, pxmitframe->attrib.triggered = 1; spin_unlock_bh(&psta->sleep_q.lock); - if (rtw_hal_xmit(padapter, pxmitframe) == true) + if (rtw_hal_xmit(padapter, pxmitframe)) rtw_os_xmit_complete(padapter, pxmitframe); spin_lock_bh(&psta->sleep_q.lock); @@ -1229,7 +1229,7 @@ static int validate_recv_frame(struct adapter *adapter, retval = _FAIL; /* only data frame return _SUCCESS */ break; case WIFI_DATA_TYPE: /* data */ - LedControl8188eu(adapter, LED_CTL_RX); + led_control_8188eu(adapter, LED_CTL_RX); pattrib->qos = (subtype & BIT(7)) ? 1 : 0; retval = validate_recv_data_frame(adapter, precv_frame); if (retval == _FAIL) { @@ -1838,7 +1838,7 @@ void rtw_reordering_ctrl_timeout_handler(struct timer_list *t) spin_lock_bh(&ppending_recvframe_queue->lock); - if (recv_indicatepkts_in_order(padapter, preorder_ctrl, true) == true) + if (recv_indicatepkts_in_order(padapter, preorder_ctrl, true)) mod_timer(&preorder_ctrl->reordering_ctrl_timer, jiffies + msecs_to_jiffies(REORDER_WAIT_TIME)); @@ -1912,7 +1912,7 @@ static int recv_func_posthandle(struct adapter *padapter, struct __queue *pfree_recv_queue = &padapter->recvpriv.free_recv_queue; /* DATA FRAME */ - LedControl8188eu(padapter, LED_CTL_RX); + led_control_8188eu(padapter, LED_CTL_RX); prframe = decryptor(padapter, prframe); if (!prframe) { diff --git a/drivers/staging/rtl8188eu/core/rtw_security.c b/drivers/staging/rtl8188eu/core/rtw_security.c index f7407632e80b..364d6ea14bf8 100644 --- a/drivers/staging/rtl8188eu/core/rtw_security.c +++ b/drivers/staging/rtl8188eu/core/rtw_security.c @@ -1259,7 +1259,7 @@ u32 rtw_aes_encrypt(struct adapter *padapter, u8 *pxmitframe) length = pattrib->last_txcmdsz-pattrib->hdrlen-pattrib->iv_len-pattrib->icv_len; aes_cipher(prwskey, pattrib->hdrlen, pframe, length); - } else{ + } else { length = pxmitpriv->frag_len-pattrib->hdrlen-pattrib->iv_len-pattrib->icv_len; aes_cipher(prwskey, pattrib->hdrlen, pframe, length); @@ -1267,7 +1267,7 @@ u32 rtw_aes_encrypt(struct adapter *padapter, u8 *pxmitframe) pframe = (u8 *)round_up((size_t)(pframe), 8); } } - } else{ + } else { RT_TRACE(_module_rtl871x_security_c_, _drv_err_, ("%s: stainfo==NULL!!!\n", __func__)); res = _FAIL; } diff --git a/drivers/staging/rtl8188eu/core/rtw_sreset.c b/drivers/staging/rtl8188eu/core/rtw_sreset.c index fb5adaf4a42c..a8397b132002 100644 --- a/drivers/staging/rtl8188eu/core/rtw_sreset.c +++ b/drivers/staging/rtl8188eu/core/rtw_sreset.c @@ -12,10 +12,10 @@ void rtw_hal_sreset_init(struct adapter *padapter) { struct sreset_priv *psrtpriv = &padapter->HalData->srestpriv; - psrtpriv->Wifi_Error_Status = WIFI_STATUS_SUCCESS; + psrtpriv->wifi_error_status = WIFI_STATUS_SUCCESS; } void sreset_set_wifi_error_status(struct adapter *padapter, u32 status) { - padapter->HalData->srestpriv.Wifi_Error_Status = status; + padapter->HalData->srestpriv.wifi_error_status = status; } diff --git a/drivers/staging/rtl8188eu/core/rtw_sta_mgt.c b/drivers/staging/rtl8188eu/core/rtw_sta_mgt.c index f12a12b19d3f..91a30142c567 100644 --- a/drivers/staging/rtl8188eu/core/rtw_sta_mgt.c +++ b/drivers/staging/rtl8188eu/core/rtw_sta_mgt.c 
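The rtw_recv.c hunks above and the rtw_sta_mgt.c hunk that follows all drop "== true" from integer predicates. The "(!memcmp(...)) == true" form happens to work, since "!" always yields 0 or 1, but the general pattern is fragile: C predicates promise only zero versus non-zero, while "== true" tests for exactly 1. A stand-alone illustration, assuming nothing from the driver:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const unsigned char a[6] = { 0x00, 0xe0, 0x4c, 0x01, 0x02, 0x03 };
	const unsigned char b[6] = { 0x00, 0xe0, 0x4c, 0x01, 0x02, 0x04 };

	/* memcmp() promises only <0, 0 or >0; here it returns a negative
	 * value, which "== true" (i.e. == 1) fails to catch */
	int diff = memcmp(a, b, sizeof(a));

	if (diff == true)
		printf("'== true' test: arrays differ\n");
	else
		printf("'== true' test: difference missed\n");

	if (memcmp(a, b, sizeof(a)))	/* idiomatic: any non-zero means differ */
		printf("plain test:     arrays differ\n");
	return 0;
}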
@@ -429,7 +429,7 @@ struct sta_info *rtw_get_stainfo(struct sta_priv *pstapriv, u8 *hwaddr) while (phead != plist) { psta = container_of(plist, struct sta_info, hash_list); - if ((!memcmp(psta->hwaddr, addr, ETH_ALEN)) == true) { + if (!memcmp(psta->hwaddr, addr, ETH_ALEN)) { /* if found the matched address */ break; } diff --git a/drivers/staging/rtl8188eu/core/rtw_wlan_util.c b/drivers/staging/rtl8188eu/core/rtw_wlan_util.c index 3e05e2c7f61b..5f9c9de1f1da 100644 --- a/drivers/staging/rtl8188eu/core/rtw_wlan_util.c +++ b/drivers/staging/rtl8188eu/core/rtw_wlan_util.c @@ -12,20 +12,20 @@ #include #include -static unsigned char ARTHEROS_OUI1[] = {0x00, 0x03, 0x7f}; -static unsigned char ARTHEROS_OUI2[] = {0x00, 0x13, 0x74}; +static const u8 ARTHEROS_OUI1[] = {0x00, 0x03, 0x7f}; +static const u8 ARTHEROS_OUI2[] = {0x00, 0x13, 0x74}; -static unsigned char BROADCOM_OUI1[] = {0x00, 0x10, 0x18}; -static unsigned char BROADCOM_OUI2[] = {0x00, 0x0a, 0xf7}; +static const u8 BROADCOM_OUI1[] = {0x00, 0x10, 0x18}; +static const u8 BROADCOM_OUI2[] = {0x00, 0x0a, 0xf7}; -static unsigned char CISCO_OUI[] = {0x00, 0x40, 0x96}; -static unsigned char MARVELL_OUI[] = {0x00, 0x50, 0x43}; -static unsigned char RALINK_OUI[] = {0x00, 0x0c, 0x43}; -static unsigned char REALTEK_OUI[] = {0x00, 0xe0, 0x4c}; -static unsigned char AIRGOCAP_OUI[] = {0x00, 0x0a, 0xf5}; -static unsigned char EPIGRAM_OUI[] = {0x00, 0x90, 0x4c}; +static const u8 CISCO_OUI[] = {0x00, 0x40, 0x96}; +static const u8 MARVELL_OUI[] = {0x00, 0x50, 0x43}; +static const u8 RALINK_OUI[] = {0x00, 0x0c, 0x43}; +static const u8 REALTEK_OUI[] = {0x00, 0xe0, 0x4c}; +static const u8 AIRGOCAP_OUI[] = {0x00, 0x0a, 0xf5}; +static const u8 EPIGRAM_OUI[] = {0x00, 0x90, 0x4c}; -unsigned char REALTEK_96B_IE[] = {0x00, 0xe0, 0x4c, 0x02, 0x01, 0x20}; +u8 REALTEK_96B_IE[] = {0x00, 0xe0, 0x4c, 0x02, 0x01, 0x20}; #define R2T_PHY_DELAY (0) @@ -33,20 +33,20 @@ unsigned char REALTEK_96B_IE[] = {0x00, 0xe0, 0x4c, 0x02, 0x01, 0x20}; #define WAIT_FOR_BCN_TO_MIN (6000) #define WAIT_FOR_BCN_TO_MAX (20000) -static u8 rtw_basic_rate_cck[4] = { +static const u8 rtw_basic_rate_cck[4] = { IEEE80211_CCK_RATE_1MB | IEEE80211_BASIC_RATE_MASK, IEEE80211_CCK_RATE_2MB | IEEE80211_BASIC_RATE_MASK, IEEE80211_CCK_RATE_5MB | IEEE80211_BASIC_RATE_MASK, IEEE80211_CCK_RATE_11MB | IEEE80211_BASIC_RATE_MASK }; -static u8 rtw_basic_rate_ofdm[3] = { +static const u8 rtw_basic_rate_ofdm[3] = { IEEE80211_OFDM_RATE_6MB | IEEE80211_BASIC_RATE_MASK, IEEE80211_OFDM_RATE_12MB | IEEE80211_BASIC_RATE_MASK, IEEE80211_OFDM_RATE_24MB | IEEE80211_BASIC_RATE_MASK }; -static u8 rtw_basic_rate_mix[7] = { +static const u8 rtw_basic_rate_mix[7] = { IEEE80211_CCK_RATE_1MB | IEEE80211_BASIC_RATE_MASK, IEEE80211_CCK_RATE_2MB | IEEE80211_BASIC_RATE_MASK, IEEE80211_CCK_RATE_5MB | IEEE80211_BASIC_RATE_MASK, @@ -56,28 +56,29 @@ static u8 rtw_basic_rate_mix[7] = { IEEE80211_OFDM_RATE_24MB | IEEE80211_BASIC_RATE_MASK }; -int cckrates_included(unsigned char *rate, int ratelen) +bool cckrates_included(unsigned char *rate, int ratelen) { - int i; + int i; for (i = 0; i < ratelen; i++) { - if ((((rate[i]) & 0x7f) == 2) || (((rate[i]) & 0x7f) == 4) || - (((rate[i]) & 0x7f) == 11) || (((rate[i]) & 0x7f) == 22)) + u8 r = rate[i] & 0x7f; + + if (r == 2 || r == 4 || r == 11 || r == 22) return true; } return false; } -int cckratesonly_included(unsigned char *rate, int ratelen) +bool cckratesonly_included(unsigned char *rate, int ratelen) { - int i; + int i; for (i = 0; i < ratelen; i++) { - if ((((rate[i]) & 0x7f) != 2) && 
(((rate[i]) & 0x7f) != 4) && - (((rate[i]) & 0x7f) != 11) && (((rate[i]) & 0x7f) != 22)) + u8 r = rate[i] & 0x7f; + + if (r != 2 && r != 4 && r != 11 && r != 22) return false; } - return true; } @@ -155,7 +156,7 @@ static unsigned char ratetbl_val_2wifirate(unsigned char rate) } } -static int is_basicrate(struct adapter *padapter, unsigned char rate) +static bool is_basicrate(struct adapter *padapter, unsigned char rate) { int i; unsigned char val; @@ -176,7 +177,7 @@ static unsigned int ratetbl2rateset(struct adapter *padapter, unsigned char *rat { int i; unsigned char rate; - unsigned int len = 0; + unsigned int len = 0; struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; for (i = 0; i < NumRates; i++) { @@ -190,7 +191,7 @@ static unsigned int ratetbl2rateset(struct adapter *padapter, unsigned char *rat default: rate = ratetbl_val_2wifirate(rate); - if (is_basicrate(padapter, rate) == true) + if (is_basicrate(padapter, rate)) rate |= IEEE80211_BASIC_RATE_MASK; rateset[len] = rate; @@ -212,8 +213,8 @@ void get_rate_set(struct adapter *padapter, unsigned char *pbssrate, int *bssrat void UpdateBrateTbl(struct adapter *Adapter, u8 *mbrate) { - u8 i; - u8 rate; + u8 i; + u8 rate; /* 1M, 2M, 5.5M, 11M, 6M, 12M, 24M are mandatory. */ for (i = 0; i < NDIS_802_11_LENGTH_RATES_EX; i++) { @@ -234,8 +235,8 @@ void UpdateBrateTbl(struct adapter *Adapter, u8 *mbrate) void UpdateBrateTblForSoftAP(u8 *bssrateset, u32 bssratelen) { - u8 i; - u8 rate; + u8 i; + u8 rate; for (i = 0; i < bssratelen; i++) { rate = bssrateset[i] & 0x7f; @@ -252,14 +253,14 @@ void UpdateBrateTblForSoftAP(u8 *bssrateset, u32 bssratelen) void Save_DM_Func_Flag(struct adapter *padapter) { - u8 saveflag = true; + u8 saveflag = true; rtw_hal_set_hwreg(padapter, HW_VAR_DM_FUNC_OP, (u8 *)(&saveflag)); } void Restore_DM_Func_Flag(struct adapter *padapter) { - u8 saveflag = false; + u8 saveflag = false; rtw_hal_set_hwreg(padapter, HW_VAR_DM_FUNC_OP, (u8 *)(&saveflag)); } @@ -369,16 +370,17 @@ u16 get_beacon_interval(struct wlan_bssid_ex *bss) int is_client_associated_to_ap(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext; - struct mlme_ext_info *pmlmeinfo; + struct mlme_ext_priv *pmlmeext; + struct mlme_ext_info *pmlmeinfo; if (!padapter) return _FAIL; pmlmeext = &padapter->mlmeextpriv; - pmlmeinfo = &(pmlmeext->mlmext_info); + pmlmeinfo = &pmlmeext->mlmext_info; - if ((pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) && ((pmlmeinfo->state&0x03) == WIFI_FW_STATION_STATE)) + if ((pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) && + (pmlmeinfo->state & 0x03) == WIFI_FW_STATION_STATE) return true; else return _FAIL; @@ -386,10 +388,11 @@ int is_client_associated_to_ap(struct adapter *padapter) int is_client_associated_to_ibss(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; - if ((pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) && ((pmlmeinfo->state&0x03) == WIFI_FW_ADHOC_STATE)) + if ((pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) && + (pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) return true; else return _FAIL; @@ -398,8 +401,8 @@ int is_client_associated_to_ibss(struct adapter *padapter) int is_IBSS_empty(struct adapter *padapter) { unsigned int i; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = 
&padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; for (i = IBSS_START_MAC_ID; i < NUM_STA; i++) { if (pmlmeinfo->FW_sta_info[i].status == 1) @@ -425,9 +428,9 @@ void invalidate_cam_all(struct adapter *padapter) void write_cam(struct adapter *padapter, u8 entry, u16 ctrl, u8 *mac, u8 *key) { - unsigned int i, val, addr; + unsigned int i, val, addr; int j; - u32 cam_val[2]; + u32 cam_val[2]; addr = entry << 3; @@ -441,7 +444,8 @@ void write_cam(struct adapter *padapter, u8 entry, u16 ctrl, u8 *mac, u8 *key) break; default: i = (j - 2) << 2; - val = key[i] | (key[i+1] << 8) | (key[i+2] << 16) | (key[i+3] << 24); + val = key[i] | (key[i + 1] << 8) | (key[i + 2] << 16) | + (key[i + 3] << 24); break; } @@ -454,9 +458,8 @@ void write_cam(struct adapter *padapter, u8 entry, u16 ctrl, u8 *mac, u8 *key) void clear_cam_entry(struct adapter *padapter, u8 entry) { - u8 null_sta[] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; - u8 null_key[] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; + u8 null_sta[ETH_ALEN] = {}; + u8 null_key[16] = {}; write_cam(padapter, entry, 0, null_sta, null_key); } @@ -464,8 +467,8 @@ void clear_cam_entry(struct adapter *padapter, u8 entry) int allocate_fw_sta_entry(struct adapter *padapter) { unsigned int mac_id; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; for (mac_id = IBSS_START_MAC_ID; mac_id < NUM_STA; mac_id++) { if (pmlmeinfo->FW_sta_info[mac_id].status == 0) { @@ -480,8 +483,8 @@ int allocate_fw_sta_entry(struct adapter *padapter) void flush_all_cam_entry(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; rtw_hal_set_hwreg(padapter, HW_VAR_CAM_INVALID_ALL, NULL); @@ -490,10 +493,9 @@ void flush_all_cam_entry(struct adapter *padapter) int WMM_param_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) { - /* struct registry_priv *pregpriv = &padapter->registrypriv; */ - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (pmlmepriv->qospriv.qos_option == 0) { pmlmeinfo->WMM_enable = 0; @@ -501,21 +503,21 @@ int WMM_param_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) } pmlmeinfo->WMM_enable = 1; - memcpy(&(pmlmeinfo->WMM_param), (pIE->data + 6), sizeof(struct WMM_para_element)); + memcpy(&pmlmeinfo->WMM_param, pIE->data + 6, sizeof(struct WMM_para_element)); return true; } void WMMOnAssocRsp(struct adapter *padapter) { - u8 ACI, ACM, AIFS, ECWMin, ECWMax, aSifsTime; - u8 acm_mask; - u16 TXOP; - u32 acParm, i; - u32 edca[4], inx[4]; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct xmit_priv *pxmitpriv = &padapter->xmitpriv; - struct registry_priv *pregpriv = &padapter->registrypriv; + u8 ACI, ACM, AIFS, ECWMin, ECWMax, aSifsTime; + u8 acm_mask; + u16 TXOP; + u32 acParm, i; + u32 
edca[4], inx[4]; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct registry_priv *pregpriv = &padapter->registrypriv; if (pmlmeinfo->WMM_enable == 0) { padapter->mlmepriv.acm_mask = 0; @@ -575,11 +577,11 @@ void WMMOnAssocRsp(struct adapter *padapter) inx[0] = 0; inx[1] = 1; inx[2] = 2; inx[3] = 3; if (pregpriv->wifi_spec == 1) { - u32 j, change_inx = false; + u32 j, change_inx = false; /* entry indx: 0->vo, 1->vi, 2->be, 3->bk. */ for (i = 0; i < 4; i++) { - for (j = i+1; j < 4; j++) { + for (j = i + 1; j < 4; j++) { /* compare CW and AIFS */ if ((edca[j] & 0xFFFF) < (edca[i] & 0xFFFF)) { change_inx = true; @@ -606,14 +608,14 @@ void WMMOnAssocRsp(struct adapter *padapter) static void bwmode_update_check(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) { - unsigned char new_bwmode; - unsigned char new_ch_offset; - struct HT_info_element *pHT_info; - struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + unsigned char new_bwmode; + unsigned char new_ch_offset; + struct HT_info_element *pHT_info; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; struct registry_priv *pregistrypriv = &padapter->registrypriv; - struct ht_priv *phtpriv = &pmlmepriv->htpriv; + struct ht_priv *phtpriv = &pmlmepriv->htpriv; if (!pIE) return; @@ -660,11 +662,9 @@ static void bwmode_update_check(struct adapter *padapter, struct ndis_802_11_var if (pmlmeinfo->bwmode_updated) { struct sta_info *psta; - struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); + struct wlan_bssid_ex *cur_network = &pmlmeinfo->network; struct sta_priv *pstapriv = &padapter->stapriv; - /* set_channel_bwmode(padapter, pmlmeext->cur_channel, pmlmeext->cur_ch_offset, pmlmeext->cur_bwmode); */ - /* update ap's stainfo */ psta = rtw_get_stainfo(pstapriv, cur_network->MacAddress); if (psta) { @@ -684,12 +684,12 @@ static void bwmode_update_check(struct adapter *padapter, struct ndis_802_11_var void HT_caps_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) { - unsigned int i; - u8 max_AMPDU_len, min_MPDU_spacing; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct ht_priv *phtpriv = &pmlmepriv->htpriv; + unsigned int i; + u8 max_AMPDU_len, min_MPDU_spacing; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct ht_priv *phtpriv = &pmlmepriv->htpriv; u8 *HT_cap = (u8 *)(&pmlmeinfo->HT_caps); if (!pIE) @@ -727,10 +727,10 @@ void HT_caps_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) void HT_info_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - struct ht_priv *phtpriv = &pmlmepriv->htpriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct mlme_priv *pmlmepriv = &padapter->mlmepriv; + struct ht_priv *phtpriv = 
&pmlmepriv->htpriv; if (!pIE) return; @@ -742,16 +742,15 @@ void HT_info_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) return; pmlmeinfo->HT_info_enable = 1; - memcpy(&(pmlmeinfo->HT_info), pIE->data, pIE->Length); + memcpy(&pmlmeinfo->HT_info, pIE->data, pIE->Length); } void HTOnAssocRsp(struct adapter *padapter) { - unsigned char max_AMPDU_len; - unsigned char min_MPDU_spacing; - /* struct registry_priv *pregpriv = &padapter->registrypriv; */ - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + unsigned char max_AMPDU_len; + unsigned char min_MPDU_spacing; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; DBG_88E("%s\n", __func__); @@ -762,11 +761,11 @@ void HTOnAssocRsp(struct adapter *padapter) return; } - /* handle A-MPDU parameter field */ - /* - AMPDU_para [1:0]:Max AMPDU Len => 0:8k , 1:16k, 2:32k, 3:64k - AMPDU_para [4:2]:Min MPDU Start Spacing - */ + /* handle A-MPDU parameter field + * + * AMPDU_para [1:0]:Max AMPDU Len => 0:8k , 1:16k, 2:32k, 3:64k + * AMPDU_para [4:2]:Min MPDU Start Spacing + */ max_AMPDU_len = pmlmeinfo->HT_caps.ampdu_params_info & 0x03; min_MPDU_spacing = (pmlmeinfo->HT_caps.ampdu_params_info & 0x1c) >> 2; @@ -778,21 +777,21 @@ void HTOnAssocRsp(struct adapter *padapter) void ERP_IE_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (pIE->Length > 1) return; pmlmeinfo->ERP_enable = 1; - memcpy(&(pmlmeinfo->ERP_IE), pIE->data, pIE->Length); + memcpy(&pmlmeinfo->ERP_IE, pIE->data, pIE->Length); } void VCS_update(struct adapter *padapter, struct sta_info *psta) { - struct registry_priv *pregpriv = &padapter->registrypriv; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct registry_priv *pregpriv = &padapter->registrypriv; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; switch (pregpriv->vrtl_carrier_sense) { /* 0:off 1:on 2:auto */ case 0: /* off */ @@ -828,11 +827,10 @@ void VCS_update(struct adapter *padapter, struct sta_info *psta) int rtw_check_bcn_info(struct adapter *Adapter, u8 *pframe, u32 packet_len) { - unsigned int len; - unsigned char *p; - unsigned short val16, subtype; - struct wlan_network *cur_network = &(Adapter->mlmepriv.cur_network); - /* u8 wpa_ie[255], rsn_ie[255]; */ + unsigned int len; + unsigned char *p; + unsigned short val16, subtype; + struct wlan_network *cur_network = &Adapter->mlmepriv.cur_network; u16 wpa_len = 0, rsn_len = 0; u8 encryp_protocol = 0; struct wlan_bssid_ex *bssid; @@ -842,8 +840,8 @@ int rtw_check_bcn_info(struct adapter *Adapter, u8 *pframe, u32 packet_len) u8 *pbssid = GetAddr3Ptr(pframe); struct HT_info_element *pht_info = NULL; u32 bcn_channel; - unsigned short ht_cap_info; - unsigned char ht_info_infos_0; + unsigned short ht_cap_info; + unsigned char ht_info_infos_0; int ssid_len; if (!is_client_associated_to_ap(Adapter)) @@ -897,7 +895,7 @@ int rtw_check_bcn_info(struct adapter *Adapter, u8 *pframe, u32 packet_len) ht_info_infos_0 = 0; } if (ht_cap_info != cur_network->BcnInfo.ht_cap_info || - ((ht_info_infos_0&0x03) != 
(cur_network->BcnInfo.ht_info_infos_0&0x03))) { + ((ht_info_infos_0 & 0x03) != (cur_network->BcnInfo.ht_info_infos_0 & 0x03))) { DBG_88E("%s bcn now: ht_cap_info:%x ht_info_infos_0:%x\n", __func__, ht_cap_info, ht_info_infos_0); DBG_88E("%s bcn link: ht_cap_info:%x ht_info_infos_0:%x\n", __func__, @@ -986,18 +984,20 @@ int rtw_check_bcn_info(struct adapter *Adapter, u8 *pframe, u32 packet_len) } if (encryp_protocol == ENCRYP_PROTOCOL_WPA || encryp_protocol == ENCRYP_PROTOCOL_WPA2) { - pbuf = rtw_get_wpa_ie(&bssid->ies[12], &wpa_ielen, bssid->ie_length-12); + pbuf = rtw_get_wpa_ie(&bssid->ies[12], &wpa_ielen, + bssid->ie_length - 12); if (pbuf && (wpa_ielen > 0)) { - if (_SUCCESS == rtw_parse_wpa_ie(pbuf, wpa_ielen+2, &group_cipher, &pairwise_cipher, &is_8021x)) { + if (_SUCCESS == rtw_parse_wpa_ie(pbuf, wpa_ielen + 2, &group_cipher, &pairwise_cipher, &is_8021x)) { RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("%s pnetwork->pairwise_cipher: %d, group_cipher is %d, is_8021x is %d\n", __func__, pairwise_cipher, group_cipher, is_8021x)); } } else { - pbuf = rtw_get_wpa2_ie(&bssid->ies[12], &wpa_ielen, bssid->ie_length-12); + pbuf = rtw_get_wpa2_ie(&bssid->ies[12], &wpa_ielen, + bssid->ie_length - 12); if (pbuf && (wpa_ielen > 0)) { - if (_SUCCESS == rtw_parse_wpa2_ie(pbuf, wpa_ielen+2, &group_cipher, &pairwise_cipher, &is_8021x)) { + if (_SUCCESS == rtw_parse_wpa2_ie(pbuf, wpa_ielen + 2, &group_cipher, &pairwise_cipher, &is_8021x)) { RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("%s pnetwork->pairwise_cipher: %d, pnetwork->group_cipher is %d, is_802x is %d\n", __func__, pairwise_cipher, group_cipher, is_8021x)); @@ -1061,8 +1061,8 @@ unsigned int is_ap_in_tkip(struct adapter *padapter) u32 i; struct ndis_802_11_var_ie *pIE; struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *cur_network = &pmlmeinfo->network; if (rtw_get_capability((struct wlan_bssid_ex *)cur_network) & WLAN_CAPABILITY_PRIVACY) { for (i = sizeof(struct ndis_802_11_fixed_ie); i < pmlmeinfo->network.ie_length;) { @@ -1093,8 +1093,8 @@ unsigned int is_ap_in_wep(struct adapter *padapter) u32 i; struct ndis_802_11_var_ie *pIE; struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network); + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + struct wlan_bssid_ex *cur_network = &pmlmeinfo->network; if (rtw_get_capability((struct wlan_bssid_ex *)cur_network) & WLAN_CAPABILITY_PRIVACY) { for (i = sizeof(struct ndis_802_11_fixed_ie); i < pmlmeinfo->network.ie_length;) { @@ -1123,29 +1123,29 @@ static int wifirate2_ratetbl_inx(unsigned char rate) rate = rate & 0x7f; switch (rate) { - case 54*2: + case 108: return 11; - case 48*2: + case 96: return 10; - case 36*2: + case 72: return 9; - case 24*2: + case 48: return 8; - case 18*2: + case 36: return 7; - case 12*2: + case 24: return 6; - case 9*2: + case 18: return 5; - case 6*2: + case 12: return 4; - case 11*2: + case 22: return 3; case 11: return 2; - case 2*2: + case 4: return 1; - case 1*2: + case 2: return 0; default: return 0; @@ -1190,9 +1190,9 @@ unsigned int update_MSC_rate(struct ieee80211_ht_cap *pHT_caps) int support_short_GI(struct adapter *padapter, struct ieee80211_ht_cap *pHT_caps) { - unsigned char bit_offset; - struct mlme_ext_priv 
*pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + unsigned char bit_offset; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (!(pmlmeinfo->HT_enable)) return _FAIL; @@ -1264,8 +1264,8 @@ unsigned char check_assoc_AP(u8 *pframe, uint len) { unsigned int i; struct ndis_802_11_var_ie *pIE; - u8 epigram_vendor_flag; - u8 ralink_vendor_flag; + u8 epigram_vendor_flag; + u8 ralink_vendor_flag; epigram_vendor_flag = 0; ralink_vendor_flag = 0; @@ -1334,8 +1334,8 @@ unsigned char check_assoc_AP(u8 *pframe, uint len) void update_IOT_info(struct adapter *padapter) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; switch (pmlmeinfo->assoc_AP_vendor) { case HT_IOT_PEER_MARVELL: @@ -1365,9 +1365,9 @@ void update_IOT_info(struct adapter *padapter) void update_capinfo(struct adapter *Adapter, u16 updateCap) { - struct mlme_ext_priv *pmlmeext = &Adapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); - bool ShortPreamble; + struct mlme_ext_priv *pmlmeext = &Adapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; + bool ShortPreamble; /* Check preamble mode, 2005.01.06, by rcnjko. */ /* Mark to update preamble value forever, 2008.03.18 by lanhsin */ @@ -1440,16 +1440,15 @@ void update_wireless_mode(struct adapter *padapter) rtw_hal_set_hwreg(padapter, HW_VAR_RESP_SIFS, (u8 *)&SIFS_Timer); - if (pmlmeext->cur_wireless_mode & WIRELESS_11B) - update_mgnt_tx_rate(padapter, IEEE80211_CCK_RATE_1MB); - else - update_mgnt_tx_rate(padapter, IEEE80211_OFDM_RATE_6MB); + update_mgnt_tx_rate(padapter, + pmlmeext->cur_wireless_mode & WIRELESS_11B ? 
+ IEEE80211_CCK_RATE_1MB : IEEE80211_OFDM_RATE_6MB); } void update_bmc_sta_support_rate(struct adapter *padapter, u32 mac_id) { - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; if (pmlmeext->cur_wireless_mode & WIRELESS_11B) { /* Only B, B/G, and B/G/N AP could use CCK rate */ @@ -1461,11 +1460,11 @@ void update_bmc_sta_support_rate(struct adapter *padapter, u32 mac_id) int update_sta_support_rate(struct adapter *padapter, u8 *pvar_ie, uint var_ie_len, int cam_idx) { - unsigned int ie_len; + unsigned int ie_len; struct ndis_802_11_var_ie *pIE; - int supportRateNum = 0; - struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + int supportRateNum = 0; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; pIE = (struct ndis_802_11_var_ie *)rtw_get_ie(pvar_ie, _SUPPORTEDRATES_IE_, &ie_len, var_ie_len); if (!pIE) @@ -1493,19 +1492,18 @@ void process_addba_req(struct adapter *padapter, u8 *paddba_req, u8 *addr) u16 param; struct recv_reorder_ctrl *preorder_ctrl; struct sta_priv *pstapriv = &padapter->stapriv; - struct ADDBA_request *preq = (struct ADDBA_request *)paddba_req; - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; - struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); + struct ADDBA_request *preq = (struct ADDBA_request *)paddba_req; + struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; + struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info; psta = rtw_get_stainfo(pstapriv, addr); if (psta) { param = le16_to_cpu(preq->BA_para_set); - tid = (param>>2)&0x0f; + tid = (param >> 2) & 0x0f; preorder_ctrl = &psta->recvreorder_ctrl[tid]; preorder_ctrl->indicate_seq = 0xffff; - preorder_ctrl->enable = (pmlmeinfo->accept_addba_req) ? 
true - : false; + preorder_ctrl->enable = pmlmeinfo->accept_addba_req; } } @@ -1517,7 +1515,7 @@ void update_TSF(struct mlme_ext_priv *pmlmeext, u8 *pframe, uint len) pIE = pframe + sizeof(struct ieee80211_hdr_3addr); pbuf = (__le32 *)pIE; - pmlmeext->TSFValue = le32_to_cpu(*(pbuf+1)); + pmlmeext->TSFValue = le32_to_cpu(*(pbuf + 1)); pmlmeext->TSFValue = pmlmeext->TSFValue << 32; diff --git a/drivers/staging/rtl8188eu/core/rtw_xmit.c b/drivers/staging/rtl8188eu/core/rtw_xmit.c index 0a3e710590ed..3b1ccd138c3f 100644 --- a/drivers/staging/rtl8188eu/core/rtw_xmit.c +++ b/drivers/staging/rtl8188eu/core/rtw_xmit.c @@ -467,7 +467,8 @@ static s32 update_attrib(struct adapter *padapter, struct sk_buff *pkt, struct p RT_TRACE(_module_rtl871x_xmit_c_, _drv_alert_, ("\nupdate_attrib => get sta_info fail, ra: %pM\n", (pattrib->ra))); res = _FAIL; goto exit; - } else if ((check_fwstate(pmlmepriv, WIFI_AP_STATE) == true) && (!(psta->state & _FW_LINKED))) { + } else if (check_fwstate(pmlmepriv, WIFI_AP_STATE) && + !(psta->state & _FW_LINKED)) { res = _FAIL; goto exit; } @@ -591,7 +592,7 @@ static s32 xmitframe_addmic(struct adapter *padapter, struct xmit_frame *pxmitfr struct pkt_attrib *pattrib = &pxmitframe->attrib; struct security_priv *psecuritypriv = &padapter->securitypriv; struct xmit_priv *pxmitpriv = &padapter->xmitpriv; - u8 priority[4] = {0x0, 0x0, 0x0, 0x0}; + u8 priority[4] = {}; u8 hw_hdr_offset = 0; if (pattrib->psta) @@ -604,9 +605,7 @@ static s32 xmitframe_addmic(struct adapter *padapter, struct xmit_frame *pxmitfr if (pattrib->encrypt == _TKIP_) { /* encode mic code */ if (stainfo) { - u8 null_key[16] = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, - 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, - 0x0, 0x0}; + u8 null_key[16] = {}; pframe = pxmitframe->buf_addr + hw_hdr_offset; @@ -758,7 +757,7 @@ s32 rtw_make_wlanhdr(struct adapter *padapter, u8 *hdr, struct pkt_attrib *pattr SetFrameSubType(fctrl, pattrib->subtype); if (pattrib->subtype & WIFI_DATA_TYPE) { - if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_STATION_STATE)) { /* to_ds = 1, fr_ds = 0; */ /* Data transfer to AP */ SetToDs(fctrl); @@ -1606,7 +1605,7 @@ s32 rtw_xmit(struct adapter *padapter, struct sk_buff **ppkt) } pxmitframe->pkt = *ppkt; - LedControl8188eu(padapter, LED_CTL_TX); + led_control_8188eu(padapter, LED_CTL_TX); pxmitframe->attrib.qsel = pxmitframe->attrib.priority; @@ -1984,7 +1983,7 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst pxmitframe->attrib.triggered = 1; - if (rtw_hal_xmit(padapter, pxmitframe) == true) + if (rtw_hal_xmit(padapter, pxmitframe)) rtw_os_xmit_complete(padapter, pxmitframe); if ((psta->sleepq_ac_len == 0) && (!psta->has_legacy_ac) && (wmmps_ac)) { @@ -2029,7 +2028,7 @@ int rtw_sctx_wait(struct submit_ctx *sctx) return ret; } -static bool rtw_sctx_chk_waring_status(int status) +static bool rtw_sctx_chk_warning_status(int status) { switch (status) { case RTW_SCTX_DONE_UNKNOWN: @@ -2047,7 +2046,7 @@ static bool rtw_sctx_chk_waring_status(int status) void rtw_sctx_done_err(struct submit_ctx **sctx, int status) { if (*sctx) { - if (rtw_sctx_chk_waring_status(status)) + if (rtw_sctx_chk_warning_status(status)) DBG_88E("%s status:%d\n", __func__, status); (*sctx)->status = status; complete(&((*sctx)->done)); diff --git a/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c b/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c index 6dbd7d261f1e..9ddd51685063 100644 --- a/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c 
+++ b/drivers/staging/rtl8188eu/hal/hal8188e_rate_adaptive.c @@ -1,24 +1,13 @@ // SPDX-License-Identifier: GPL-2.0 -/*++ -Copyright (c) Realtek Semiconductor Corp. All rights reserved. +/* + * Copyright (c) Realtek Semiconductor Corp. All rights reserved. + */ -Module Name: - RateAdaptive.c - -Abstract: - Implement Rate Adaptive functions for common operations. - -Major Change History: - When Who What - ---------- --------------- ------------------------------- - 2011-08-12 Page Create. - ---*/ #include "odm_precomp.h" /* Rate adaptive parameters */ -static u8 RETRY_PENALTY[PERENTRY][RETRYSIZE+1] = { +static u8 RETRY_PENALTY[PERENTRY][RETRYSIZE + 1] = { {5, 4, 3, 2, 0, 3}, /* 92 , idx = 0 */ {6, 5, 4, 3, 0, 4}, /* 86 , idx = 1 */ {6, 5, 4, 2, 0, 4}, /* 81 , idx = 2 */ @@ -44,7 +33,7 @@ static u8 RETRY_PENALTY[PERENTRY][RETRYSIZE+1] = { {49, 16, 16, 0, 0, 48} }; /* 3, idx = 0x16 */ -static u8 PT_PENALTY[RETRYSIZE+1] = {34, 31, 30, 24, 0, 32}; +static u8 PT_PENALTY[RETRYSIZE + 1] = {34, 31, 30, 24, 0, 32}; /* wilson modify */ static u8 RETRY_PENALTY_IDX[2][RATESIZE] = { @@ -92,11 +81,8 @@ static u16 DynamicTxRPTTiming[6] = { /* End Rate adaptive parameters */ -static void odm_SetTxRPTTiming_8188E( - struct odm_dm_struct *dm_odm, - struct odm_ra_info *pRaInfo, - u8 extend - ) +static void odm_SetTxRPTTiming_8188E(struct odm_dm_struct *dm_odm, + struct odm_ra_info *pRaInfo, u8 extend) { u8 idx = 0; @@ -117,20 +103,20 @@ static void odm_SetTxRPTTiming_8188E( pRaInfo->RptTime = DynamicTxRPTTiming[idx]; ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("pRaInfo->RptTime = 0x%x\n", pRaInfo->RptTime)); + ("pRaInfo->RptTime = 0x%x\n", pRaInfo->RptTime)); } static int odm_RateDown_8188E(struct odm_dm_struct *dm_odm, - struct odm_ra_info *pRaInfo) + struct odm_ra_info *pRaInfo) { u8 RateID, LowestRate, HighestRate; u8 i; ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, - ODM_DBG_TRACE, ("=====>odm_RateDown_8188E()\n")); + ODM_DBG_TRACE, ("=====>%s()\n", __func__)); if (!pRaInfo) { ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("odm_RateDown_8188E(): pRaInfo is NULL\n")); + ("%s(): pRaInfo is NULL\n", __func__)); return -1; } RateID = pRaInfo->PreRate; @@ -146,7 +132,7 @@ static int odm_RateDown_8188E(struct odm_dm_struct *dm_odm, pRaInfo->RateSGI = 0; } else if (RateID > LowestRate) { if (RateID > 0) { - for (i = RateID-1; i > LowestRate; i--) { + for (i = RateID - 1; i > LowestRate; i--) { if (pRaInfo->RAUseRate & BIT(i)) { RateID = i; goto RateDownFinish; @@ -173,30 +159,28 @@ RateDownFinish: pRaInfo->DecisionRate = RateID; odm_SetTxRPTTiming_8188E(dm_odm, pRaInfo, 2); ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, - ODM_DBG_LOUD, ("Rate down, RPT Timing default\n")); + ODM_DBG_LOUD, ("Rate down, RPT Timing default\n")); ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, - ("RAWaitingCounter %d, RAPendingCounter %d", + ("RAWaitingCounter %d, RAPendingCounter %d", pRaInfo->RAWaitingCounter, pRaInfo->RAPendingCounter)); ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("Rate down to RateID %d RateSGI %d\n", RateID, pRaInfo->RateSGI)); + ("Rate down to RateID %d RateSGI %d\n", RateID, pRaInfo->RateSGI)); ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, - ("<===== odm_RateDown_8188E()\n")); + ("<===== %s()\n", __func__)); return 0; } -static int odm_RateUp_8188E( - struct odm_dm_struct *dm_odm, - struct odm_ra_info *pRaInfo - ) +static int odm_RateUp_8188E(struct odm_dm_struct *dm_odm, + struct odm_ra_info *pRaInfo) { u8 RateID, HighestRate; u8 i; 
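/*
 * Aside, as a hedged sketch rather than driver code: odm_RateUp_8188E
 * above and odm_RateDown_8188E before it both scan the RAUseRate
 * bitmask for the nearest rate the peer actually supports -- upward
 * from the current index on rate-up, downward toward the lowest on
 * rate-down.  A standalone model of that scan, with BIT() spelled out
 * locally and the RATR_INX_WIRELESS_NGB mask value borrowed from
 * odm_ARFBRefresh_8188E below:
 */
#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1u << (n))

/* next enabled rate index in the given direction, or cur if none */
static unsigned int next_rate(uint32_t use_mask, unsigned int cur,
			      unsigned int highest, int up)
{
	unsigned int i;

	if (up) {
		for (i = cur + 1; i <= highest; i++)
			if (use_mask & BIT(i))
				return i;
	} else {
		for (i = cur; i-- > 0;)	/* cur-1 down to 0 */
			if (use_mask & BIT(i))
				return i;
	}
	return cur;
}

int main(void)
{
	uint32_t mask = 0x0f8ff015;	/* NGB mask: CCK+OFDM subset plus MCS */

	printf("up from 4:   %u\n", next_rate(mask, 4, 0x13, 1));	/* 12 */
	printf("down from 4: %u\n", next_rate(mask, 4, 0x13, 0));	/* 2 */
	return 0;
}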
ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, - ODM_DBG_TRACE, ("=====>odm_RateUp_8188E()\n")); + ODM_DBG_TRACE, ("=====>%s()\n", __func__)); if (!pRaInfo) { ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("odm_RateUp_8188E(): pRaInfo is NULL\n")); + ("%s(): pRaInfo is NULL\n", __func__)); return -1; } RateID = pRaInfo->PreRate; @@ -213,10 +197,10 @@ static int odm_RateUp_8188E( } odm_SetTxRPTTiming_8188E(dm_odm, pRaInfo, 0); ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("odm_RateUp_8188E():Decrease RPT Timing\n")); + ("%s():Decrease RPT Timing\n", __func__)); if (RateID < HighestRate) { - for (i = RateID+1; i <= HighestRate; i++) { + for (i = RateID + 1; i <= HighestRate; i++) { if (pRaInfo->RAUseRate & BIT(i)) { RateID = i; goto RateUpfinish; @@ -232,19 +216,19 @@ static int odm_RateUp_8188E( } RateUpfinish: if (pRaInfo->RAWaitingCounter == - (4+PendingForRateUpFail[pRaInfo->RAPendingCounter])) + (4 + PendingForRateUpFail[pRaInfo->RAPendingCounter])) pRaInfo->RAWaitingCounter = 0; else pRaInfo->RAWaitingCounter++; pRaInfo->DecisionRate = RateID; ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("Rate up to RateID %d\n", RateID)); + ("Rate up to RateID %d\n", RateID)); ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, - ("RAWaitingCounter %d, RAPendingCounter %d", - pRaInfo->RAWaitingCounter, pRaInfo->RAPendingCounter)); + ("RAWaitingCounter %d, RAPendingCounter %d", + pRaInfo->RAWaitingCounter, pRaInfo->RAPendingCounter)); ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, - ODM_DBG_TRACE, ("<===== odm_RateUp_8188E()\n")); + ODM_DBG_TRACE, ("<===== %s()\n", __func__)); return 0; } @@ -253,20 +237,21 @@ static void odm_ResetRaCounter_8188E(struct odm_ra_info *pRaInfo) u8 RateID; RateID = pRaInfo->DecisionRate; - pRaInfo->NscUp = (N_THRESHOLD_HIGH[RateID]+N_THRESHOLD_LOW[RateID])>>1; - pRaInfo->NscDown = (N_THRESHOLD_HIGH[RateID]+N_THRESHOLD_LOW[RateID])>>1; + pRaInfo->NscUp = (N_THRESHOLD_HIGH[RateID] + + N_THRESHOLD_LOW[RateID]) >> 1; + pRaInfo->NscDown = (N_THRESHOLD_HIGH[RateID] + + N_THRESHOLD_LOW[RateID]) >> 1; } static void odm_RateDecision_8188E(struct odm_dm_struct *dm_odm, - struct odm_ra_info *pRaInfo - ) + struct odm_ra_info *pRaInfo) { u8 RateID = 0, RtyPtID = 0, PenaltyID1 = 0, PenaltyID2 = 0, i = 0; /* u32 pool_retry; */ static u8 DynamicTxRPTTimingCounter; ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, - ("=====>odm_RateDecision_8188E()\n")); + ("=====>%s()\n", __func__)); if (pRaInfo->Active && (pRaInfo->TOTAL > 0)) { /* STA used and data packet exits */ if ((pRaInfo->RssiStaRA < (pRaInfo->PreRssiStaRA - 3)) || @@ -317,7 +302,7 @@ static void odm_RateDecision_8188E(struct odm_dm_struct *dm_odm, else pRaInfo->NscUp = 0; - ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE|ODM_COMP_INIT, ODM_DBG_LOUD, + ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE | ODM_COMP_INIT, ODM_DBG_LOUD, (" RssiStaRa = %d RtyPtID =%d PenaltyID1 = 0x%x PenaltyID2 = 0x%x RateID =%d NscDown =%d NscUp =%d SGI =%d\n", pRaInfo->RssiStaRA, RtyPtID, PenaltyID1, PenaltyID2, RateID, pRaInfo->NscDown, pRaInfo->NscUp, pRaInfo->RateSGI)); if ((pRaInfo->NscDown < N_THRESHOLD_LOW[RateID]) || @@ -345,7 +330,8 @@ static void odm_RateDecision_8188E(struct odm_dm_struct *dm_odm, odm_ResetRaCounter_8188E(pRaInfo); } - ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, ("<===== odm_RateDecision_8188E()\n")); + ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, + ("<===== %s()\n", __func__)); } static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, 
struct odm_ra_info *pRaInfo) @@ -356,41 +342,41 @@ static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, struct odm_ra_inf switch (pRaInfo->RateID) { case RATR_INX_WIRELESS_NGB: - pRaInfo->RAUseRate = (pRaInfo->RateMask)&0x0f8ff015; + pRaInfo->RAUseRate = pRaInfo->RateMask & 0x0f8ff015; break; case RATR_INX_WIRELESS_NG: - pRaInfo->RAUseRate = (pRaInfo->RateMask)&0x0f8ff010; + pRaInfo->RAUseRate = pRaInfo->RateMask & 0x0f8ff010; break; case RATR_INX_WIRELESS_NB: - pRaInfo->RAUseRate = (pRaInfo->RateMask)&0x0f8ff005; + pRaInfo->RAUseRate = pRaInfo->RateMask & 0x0f8ff005; break; case RATR_INX_WIRELESS_N: - pRaInfo->RAUseRate = (pRaInfo->RateMask)&0x0f8ff000; + pRaInfo->RAUseRate = pRaInfo->RateMask & 0x0f8ff000; break; case RATR_INX_WIRELESS_GB: - pRaInfo->RAUseRate = (pRaInfo->RateMask)&0x00000ff5; + pRaInfo->RAUseRate = pRaInfo->RateMask & 0x00000ff5; break; case RATR_INX_WIRELESS_G: - pRaInfo->RAUseRate = (pRaInfo->RateMask)&0x00000ff0; + pRaInfo->RAUseRate = pRaInfo->RateMask & 0x00000ff0; break; case RATR_INX_WIRELESS_B: - pRaInfo->RAUseRate = (pRaInfo->RateMask)&0x0000000d; + pRaInfo->RAUseRate = pRaInfo->RateMask & 0x0000000d; break; case 12: MaskFromReg = usb_read32(adapt, REG_ARFR0); - pRaInfo->RAUseRate = (pRaInfo->RateMask)&MaskFromReg; + pRaInfo->RAUseRate = pRaInfo->RateMask & MaskFromReg; break; case 13: MaskFromReg = usb_read32(adapt, REG_ARFR1); - pRaInfo->RAUseRate = (pRaInfo->RateMask)&MaskFromReg; + pRaInfo->RAUseRate = pRaInfo->RateMask & MaskFromReg; break; case 14: MaskFromReg = usb_read32(adapt, REG_ARFR2); - pRaInfo->RAUseRate = (pRaInfo->RateMask)&MaskFromReg; + pRaInfo->RAUseRate = pRaInfo->RateMask & MaskFromReg; break; case 15: MaskFromReg = usb_read32(adapt, REG_ARFR3); - pRaInfo->RAUseRate = (pRaInfo->RateMask)&MaskFromReg; + pRaInfo->RAUseRate = pRaInfo->RateMask & MaskFromReg; break; default: pRaInfo->RAUseRate = (pRaInfo->RateMask); @@ -399,7 +385,7 @@ static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, struct odm_ra_inf /* Highest rate */ if (pRaInfo->RAUseRate) { for (i = RATESIZE; i >= 0; i--) { - if ((pRaInfo->RAUseRate)&BIT(i)) { + if (pRaInfo->RAUseRate & BIT(i)) { pRaInfo->HighestRate = i; break; } @@ -511,15 +497,17 @@ static void odm_PTDecision_8188E(struct odm_ra_info *pRaInfo) j >>= 1; temp_stage = (pRaInfo->PTStage + 1) >> 1; if (temp_stage > j) - stage_id = temp_stage-j; + stage_id = temp_stage - j; else stage_id = 0; - pRaInfo->PTSmoothFactor = (pRaInfo->PTSmoothFactor>>1) + (pRaInfo->PTSmoothFactor>>2) + stage_id*16+2; + pRaInfo->PTSmoothFactor = (pRaInfo->PTSmoothFactor >> 1) + + (pRaInfo->PTSmoothFactor >> 2) + + stage_id * 16 + 2; if (pRaInfo->PTSmoothFactor > 192) pRaInfo->PTSmoothFactor = 192; stage_id = pRaInfo->PTSmoothFactor >> 6; - temp_stage = stage_id*2; + temp_stage = stage_id * 2; if (temp_stage != 0) temp_stage -= 1; if (pRaInfo->DROP > 3) @@ -527,13 +515,11 @@ static void odm_PTDecision_8188E(struct odm_ra_info *pRaInfo) pRaInfo->PTStage = temp_stage; } -static void -odm_RATxRPTTimerSetting( - struct odm_dm_struct *dm_odm, - u16 minRptTime -) +static void odm_RATxRPTTimerSetting(struct odm_dm_struct *dm_odm, + u16 minRptTime) { - ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, (" =====>odm_RATxRPTTimerSetting()\n")); + ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, + (" =====>%s()\n", __func__)); if (dm_odm->CurrminRptTime != minRptTime) { ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, @@ -541,7 +527,8 @@ odm_RATxRPTTimerSetting( rtw_rpt_timer_cfg_cmd(dm_odm->Adapter, 
minRptTime); dm_odm->CurrminRptTime = minRptTime; } - ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, (" <===== odm_RATxRPTTimerSetting()\n")); + ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, + (" <===== %s()\n", __func__)); } int ODM_RAInfo_Init(struct odm_dm_struct *dm_odm, u8 macid) @@ -551,7 +538,7 @@ int ODM_RAInfo_Init(struct odm_dm_struct *dm_odm, u8 macid) u8 max_rate_idx = 0x13; /* MCS7 */ if (dm_odm->pWirelessMode) - WirelessMode = *(dm_odm->pWirelessMode); + WirelessMode = *dm_odm->pWirelessMode; if (WirelessMode != 0xFF) { if (WirelessMode & ODM_WM_N24G) @@ -563,8 +550,8 @@ int ODM_RAInfo_Init(struct odm_dm_struct *dm_odm, u8 macid) } ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("ODM_RAInfo_Init(): WirelessMode:0x%08x , max_raid_idx:0x%02x\n", - WirelessMode, max_rate_idx)); + ("%s(): WirelessMode:0x%08x , max_raid_idx:0x%02x\n", + __func__, WirelessMode, max_rate_idx)); pRaInfo->DecisionRate = max_rate_idx; pRaInfo->PreRate = max_rate_idx; @@ -576,8 +563,8 @@ int ODM_RAInfo_Init(struct odm_dm_struct *dm_odm, u8 macid) pRaInfo->PreRssiStaRA = 0; pRaInfo->SGIEnable = 0; pRaInfo->RAUseRate = 0xffffffff; - pRaInfo->NscDown = (N_THRESHOLD_HIGH[0x13]+N_THRESHOLD_LOW[0x13])/2; - pRaInfo->NscUp = (N_THRESHOLD_HIGH[0x13]+N_THRESHOLD_LOW[0x13])/2; + pRaInfo->NscDown = (N_THRESHOLD_HIGH[0x13] + N_THRESHOLD_LOW[0x13]) / 2; + pRaInfo->NscUp = (N_THRESHOLD_HIGH[0x13] + N_THRESHOLD_LOW[0x13]) / 2; pRaInfo->RateSGI = 0; pRaInfo->Active = 1; /* Active is not used at present. by page, 110819 */ pRaInfo->RptTime = 0x927c; @@ -632,7 +619,7 @@ u8 ODM_RA_GetDecisionRate_8188E(struct odm_dm_struct *dm_odm, u8 macid) return 0; DecisionRate = dm_odm->RAInfo[macid].DecisionRate; ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, - (" macid =%d DecisionRate = 0x%x\n", macid, DecisionRate)); + (" macid =%d DecisionRate = 0x%x\n", macid, DecisionRate)); return DecisionRate; } @@ -658,7 +645,7 @@ void ODM_RA_UpdateRateInfo_8188E(struct odm_dm_struct *dm_odm, u8 macid, u8 Rate ("macid =%d RateID = 0x%x RateMask = 0x%x SGIEnable =%d\n", macid, RateID, RateMask, SGIEnable)); - pRaInfo = &(dm_odm->RAInfo[macid]); + pRaInfo = &dm_odm->RAInfo[macid]; pRaInfo->RateID = RateID; pRaInfo->RateMask = RateMask; pRaInfo->SGIEnable = SGIEnable; @@ -674,7 +661,7 @@ void ODM_RA_SetRSSI_8188E(struct odm_dm_struct *dm_odm, u8 macid, u8 Rssi) ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_TRACE, (" macid =%d Rssi =%d\n", macid, Rssi)); - pRaInfo = &(dm_odm->RAInfo[macid]); + pRaInfo = &dm_odm->RAInfo[macid]; pRaInfo->RssiStaRA = Rssi; } @@ -694,8 +681,8 @@ void ODM_RA_TxRPT2Handle_8188E(struct odm_dm_struct *dm_odm, u8 *TxRPT_Buf, u16 u16 minRptTime = 0x927c; ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, - ("=====>ODM_RA_TxRPT2Handle_8188E(): valid0 =%d valid1 =%d BufferLength =%d\n", - macid_entry0, macid_entry1, TxRPT_Len)); + ("=====>%s(): valid0 =%d valid1 =%d BufferLength =%d\n", + __func__, macid_entry0, macid_entry1, TxRPT_Len)); ItemNum = TxRPT_Len >> 3; pBuffer = TxRPT_Buf; @@ -708,7 +695,7 @@ void ODM_RA_TxRPT2Handle_8188E(struct odm_dm_struct *dm_odm, u8 *TxRPT_Buf, u16 else valid = (1 << MacId) & macid_entry0; - pRAInfo = &(dm_odm->RAInfo[MacId]); + pRAInfo = &dm_odm->RAInfo[MacId]; if (valid) { pRAInfo->RTY[0] = (u16)GET_TX_REPORT_TYPE1_RERTY_0(pBuffer); pRAInfo->RTY[1] = (u16)GET_TX_REPORT_TYPE1_RERTY_1(pBuffer); @@ -769,5 +756,6 @@ void ODM_RA_TxRPT2Handle_8188E(struct odm_dm_struct *dm_odm, u8 *TxRPT_Buf, u16 
odm_RATxRPTTimerSetting(dm_odm, minRptTime); - ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, ("<===== ODM_RA_TxRPT2Handle_8188E()\n")); + ODM_RT_TRACE(dm_odm, ODM_COMP_RATE_ADAPTIVE, ODM_DBG_LOUD, + ("<===== %s()\n", __func__)); } diff --git a/drivers/staging/rtl8188eu/hal/odm.c b/drivers/staging/rtl8188eu/hal/odm.c index 4ab490c1c13b..1165ee278536 100644 --- a/drivers/staging/rtl8188eu/hal/odm.c +++ b/drivers/staging/rtl8188eu/hal/odm.c @@ -1025,10 +1025,10 @@ void ODM_EdcaTurboInit(struct odm_dm_struct *pDM_Odm) pDM_Odm->DM_EDCA_Table.bIsCurRDLState = false; Adapter->recvpriv.bIsAnyNonBEPkts = false; - ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Orginial VO PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_VO_PARAM))); - ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Orginial VI PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_VI_PARAM))); - ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Orginial BE PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_BE_PARAM))); - ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Orginial BK PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_BK_PARAM))); + ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Original VO PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_VO_PARAM))); + ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Original VI PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_VI_PARAM))); + ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Original BE PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_BE_PARAM))); + ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, ("Original BK PARAM: 0x%x\n", usb_read32(Adapter, ODM_EDCA_BK_PARAM))); } /* ODM_InitEdcaTurbo */ void odm_EdcaTurboCheck(struct odm_dm_struct *pDM_Odm) diff --git a/drivers/staging/rtl8188eu/hal/odm_hwconfig.c b/drivers/staging/rtl8188eu/hal/odm_hwconfig.c index 82d6b2e18b29..7ae476ffcd8c 100644 --- a/drivers/staging/rtl8188eu/hal/odm_hwconfig.c +++ b/drivers/staging/rtl8188eu/hal/odm_hwconfig.c @@ -53,20 +53,11 @@ static s32 odm_signal_scale_mapping(struct odm_dm_struct *dm_odm, s32 currsig) static u8 odm_evm_db_to_percentage(s8 value) { /* -33dB~0dB to 0%~99% */ - s8 ret_val; - - ret_val = value; - - if (ret_val >= 0) - ret_val = 0; - if (ret_val <= -33) - ret_val = -33; - - ret_val = 0 - ret_val; - ret_val *= 3; + s8 ret_val = clamp(-value, 0, 33) * 3; if (ret_val == 99) ret_val = 100; + return ret_val; } @@ -76,23 +67,24 @@ static void odm_RxPhyStatus92CSeries_Parsing(struct odm_dm_struct *dm_odm, struct odm_per_pkt_info *pPktinfo) { struct sw_ant_switch *pDM_SWAT_Table = &dm_odm->DM_SWAT_Table; - u8 i, Max_spatial_stream; + u8 i, max_spatial_stream; s8 rx_pwr[4], rx_pwr_all = 0; u8 EVM, PWDB_ALL = 0, PWDB_ALL_BT; u8 RSSI, total_rssi = 0; - u8 isCCKrate = 0; + bool is_cck_rate; u8 rf_rx_num = 0; u8 cck_highpwr = 0; u8 LNA_idx, VGA_idx; struct phy_status_rpt *pPhyStaRpt = (struct phy_status_rpt *)pPhyStatus; - isCCKrate = ((pPktinfo->Rate >= DESC92C_RATE1M) && (pPktinfo->Rate <= DESC92C_RATE11M)) ? 
true : false; + is_cck_rate = pPktinfo->Rate >= DESC92C_RATE1M && + pPktinfo->Rate <= DESC92C_RATE11M; pPhyInfo->RxMIMOSignalQuality[RF_PATH_A] = -1; pPhyInfo->RxMIMOSignalQuality[RF_PATH_B] = -1; - if (isCCKrate) { + if (is_cck_rate) { u8 cck_agc_rpt; dm_odm->PhyDbgInfo.NumQryPhyStatusCCK++; @@ -224,11 +216,11 @@ static void odm_RxPhyStatus92CSeries_Parsing(struct odm_dm_struct *dm_odm, /* (3)EVM of HT rate */ if (pPktinfo->Rate >= DESC92C_RATEMCS8 && pPktinfo->Rate <= DESC92C_RATEMCS15) - Max_spatial_stream = 2; /* both spatial stream make sense */ + max_spatial_stream = 2; /* both spatial stream make sense */ else - Max_spatial_stream = 1; /* only spatial stream 1 makes sense */ + max_spatial_stream = 1; /* only spatial stream 1 makes sense */ - for (i = 0; i < Max_spatial_stream; i++) { + for (i = 0; i < max_spatial_stream; i++) { /* Do not use shift operation like "rx_evmX >>= 1" because the compilor of free build environment */ /* fill most significant bit to "zero" when doing shifting operation which may change a negative */ /* value to positive one, then the dbm value (which is supposed to be negative) is not correct anymore. */ @@ -243,7 +235,7 @@ static void odm_RxPhyStatus92CSeries_Parsing(struct odm_dm_struct *dm_odm, } /* UI BSS List signal strength(in percentage), make it good looking, from 0~100. */ /* It is assigned to the BSS List in GetValueFromBeaconOrProbeRsp(). */ - if (isCCKrate) { + if (is_cck_rate) { pPhyInfo->SignalStrength = (u8)(odm_signal_scale_mapping(dm_odm, PWDB_ALL));/* PWDB_ALL; */ } else { if (rf_rx_num != 0) @@ -264,7 +256,7 @@ static void odm_Process_RSSIForDM(struct odm_dm_struct *dm_odm, { s32 UndecoratedSmoothedPWDB, UndecoratedSmoothedCCK; s32 UndecoratedSmoothedOFDM, RSSI_Ave; - u8 isCCKrate = 0; + bool is_cck_rate; u8 RSSI_max, RSSI_min, i; u32 OFDM_pkt = 0; u32 Weighting = 0; @@ -280,7 +272,8 @@ static void odm_Process_RSSIForDM(struct odm_dm_struct *dm_odm, if ((!pPktinfo->bPacketMatchBSSID)) return; - isCCKrate = ((pPktinfo->Rate >= DESC92C_RATE1M) && (pPktinfo->Rate <= DESC92C_RATE11M)) ? 
true : false; + is_cck_rate = pPktinfo->Rate >= DESC92C_RATE1M && + pPktinfo->Rate <= DESC92C_RATE11M; /* Smart Antenna Debug Message------------------ */ @@ -308,7 +301,7 @@ static void odm_Process_RSSIForDM(struct odm_dm_struct *dm_odm, UndecoratedSmoothedPWDB = pEntry->rssi_stat.UndecoratedSmoothedPWDB; if (pPktinfo->bPacketToSelf || pPktinfo->bPacketBeacon) { - if (!isCCKrate) { /* ofdm rate */ + if (!is_cck_rate) { /* ofdm rate */ if (pPhyInfo->RxMIMOSignalStrength[RF_PATH_B] == 0) { RSSI_Ave = pPhyInfo->RxMIMOSignalStrength[RF_PATH_A]; } else { diff --git a/drivers/staging/rtl8188eu/hal/phy.c b/drivers/staging/rtl8188eu/hal/phy.c index 482d48e003b7..51c40abfafaa 100644 --- a/drivers/staging/rtl8188eu/hal/phy.c +++ b/drivers/staging/rtl8188eu/hal/phy.c @@ -437,9 +437,9 @@ void rtl88eu_dm_txpower_tracking_callback_thermalmeter(struct adapter *adapt) thermal_val = (u8)(thermal_avg / thermal_avg_count); if (dm_odm->RFCalibrateInfo.bDoneTxpower && - !dm_odm->RFCalibrateInfo.bReloadtxpowerindex) + !dm_odm->RFCalibrateInfo.bReloadtxpowerindex) { delta = abs(thermal_val - dm_odm->RFCalibrateInfo.ThermalValue); - else { + } else { delta = abs(thermal_val - hal_data->EEPROMThermalMeter); if (dm_odm->RFCalibrateInfo.bReloadtxpowerindex) { dm_odm->RFCalibrateInfo.bReloadtxpowerindex = false; @@ -1225,15 +1225,10 @@ void rtl88eu_phy_iq_calibrate(struct adapter *adapt, bool recovery) return; } - for (i = 0; i < 8; i++) { - result[0][i] = 0; - result[1][i] = 0; - result[2][i] = 0; - if ((i == 0) || (i == 2) || (i == 4) || (i == 6)) - result[3][i] = 0x100; - else - result[3][i] = 0; - } + memset(result, 0, sizeof(result)); + for (i = 0; i < 8; i += 2) + result[3][i] = 0x100; + final = 0xff; pathaok = false; pathbok = false; diff --git a/drivers/staging/rtl8188eu/hal/rf.c b/drivers/staging/rtl8188eu/hal/rf.c index 81e30a1a6bfd..6fe4daea8fd5 100644 --- a/drivers/staging/rtl8188eu/hal/rf.c +++ b/drivers/staging/rtl8188eu/hal/rf.c @@ -164,20 +164,9 @@ static void get_rx_power_val_by_reg(struct adapter *adapt, u8 channel, /* increase power diff defined by Realtek for regulatory */ if (hal_data->pwrGroupCnt == 1) chnlGroup = 0; - if (hal_data->pwrGroupCnt >= hal_data->PGMaxGroup) { - if (channel < 3) - chnlGroup = 0; - else if (channel < 6) - chnlGroup = 1; - else if (channel < 9) - chnlGroup = 2; - else if (channel < 12) - chnlGroup = 3; - else if (channel < 14) - chnlGroup = 4; - else if (channel == 14) - chnlGroup = 5; - } + if (hal_data->pwrGroupCnt >= hal_data->PGMaxGroup) + Hal_GetChnlGroup88E(channel, &chnlGroup); + write_val = hal_data->MCSTxPowerLevelOriginalOffset[chnlGroup][index+(rf ? 8 : 0)] + ((index < 2) ? 
powerbase0[rf] : powerbase1[rf]); break; diff --git a/drivers/staging/rtl8188eu/hal/rtl8188e_cmd.c b/drivers/staging/rtl8188eu/hal/rtl8188e_cmd.c index b832bbf202a5..7022221136f6 100644 --- a/drivers/staging/rtl8188eu/hal/rtl8188e_cmd.c +++ b/drivers/staging/rtl8188eu/hal/rtl8188e_cmd.c @@ -90,15 +90,14 @@ static s32 FillH2CCmd_88E(struct adapter *adapt, u8 ElementID, u32 CmdLen, u8 *p /* Write Ext command */ msgbox_ex_addr = REG_HMEBOX_EXT_0 + (h2c_box_num * RTL88E_EX_MESSAGE_BOX_SIZE); - for (cmd_idx = 0; cmd_idx < ext_cmd_len; cmd_idx++) { + for (cmd_idx = 0; cmd_idx < ext_cmd_len; cmd_idx++) usb_write8(adapt, msgbox_ex_addr+cmd_idx, *((u8 *)(&h2c_cmd_ex)+cmd_idx)); - } } /* Write command */ msgbox_addr = REG_HMEBOX_0 + (h2c_box_num * RTL88E_MESSAGE_BOX_SIZE); - for (cmd_idx = 0; cmd_idx < RTL88E_MESSAGE_BOX_SIZE; cmd_idx++) { + for (cmd_idx = 0; cmd_idx < RTL88E_MESSAGE_BOX_SIZE; cmd_idx++) usb_write8(adapt, msgbox_addr+cmd_idx, *((u8 *)(&h2c_cmd)+cmd_idx)); - } + bcmd_down = true; adapt->HalData->LastHMEBoxNum = diff --git a/drivers/staging/rtl8188eu/hal/rtl8188e_hal_init.c b/drivers/staging/rtl8188eu/hal/rtl8188e_hal_init.c index 31e80d693f32..086f98d38cba 100644 --- a/drivers/staging/rtl8188eu/hal/rtl8188e_hal_init.c +++ b/drivers/staging/rtl8188eu/hal/rtl8188e_hal_init.c @@ -232,9 +232,8 @@ s32 InitLLTTable(struct adapter *padapter, u8 txpktbuf_bndy) /* Let last entry point to the start entry of ring buffer */ status = _LLTWrite(padapter, Last_Entry_Of_TxPktBuf, txpktbuf_bndy); - if (status != _SUCCESS) { + if (status != _SUCCESS) return status; - } } return status; @@ -373,7 +372,7 @@ static void Hal_ReadPowerValueFromPROM_8188E(struct txpowerinfo24g *pwrInfo24G, } } -static void Hal_GetChnlGroup88E(u8 chnl, u8 *group) +void Hal_GetChnlGroup88E(u8 chnl, u8 *group) { if (chnl < 3) /* Channel 1-2 */ *group = 0; diff --git a/drivers/staging/rtl8188eu/hal/rtl8188eu_led.c b/drivers/staging/rtl8188eu/hal/rtl8188eu_led.c index 412b76271a3d..35806b27fdee 100644 --- a/drivers/staging/rtl8188eu/hal/rtl8188eu_led.c +++ b/drivers/staging/rtl8188eu/hal/rtl8188eu_led.c @@ -10,60 +10,46 @@ #include #include -/* LED object. */ - -/* LED_819xUsb routines. */ -/* Description: */ -/* Turn on LED according to LedPin specified. */ -void SwLedOn(struct adapter *padapter, struct LED_871x *pLed) +void sw_led_on(struct adapter *padapter, struct LED_871x *pLed) { - u8 LedCfg; + u8 led_cfg; if (padapter->bSurpriseRemoved || padapter->bDriverStopped) return; - LedCfg = usb_read8(padapter, REG_LEDCFG2); - usb_write8(padapter, REG_LEDCFG2, (LedCfg&0xf0) | BIT(5) | BIT(6)); /* SW control led0 on. */ + led_cfg = usb_read8(padapter, REG_LEDCFG2); + usb_write8(padapter, REG_LEDCFG2, (led_cfg & 0xf0) | BIT(5) | BIT(6)); pLed->bLedOn = true; } -/* Description: */ -/* Turn off LED according to LedPin specified. */ -void SwLedOff(struct adapter *padapter, struct LED_871x *pLed) +void sw_led_off(struct adapter *padapter, struct LED_871x *pLed) { - u8 LedCfg; + u8 led_cfg; if (padapter->bSurpriseRemoved || padapter->bDriverStopped) goto exit; - LedCfg = usb_read8(padapter, REG_LEDCFG2);/* 0x4E */ + led_cfg = usb_read8(padapter, REG_LEDCFG2);/* 0x4E */ /* Open-drain arrangement for controlling the LED) */ - LedCfg &= 0x90; /* Set to software control. */ - usb_write8(padapter, REG_LEDCFG2, (LedCfg | BIT(3))); - LedCfg = usb_read8(padapter, REG_MAC_PINMUX_CFG); - LedCfg &= 0xFE; - usb_write8(padapter, REG_MAC_PINMUX_CFG, LedCfg); + led_cfg &= 0x90; /* Set to software control. 
*/ + usb_write8(padapter, REG_LEDCFG2, (led_cfg | BIT(3))); + led_cfg = usb_read8(padapter, REG_MAC_PINMUX_CFG); + led_cfg &= 0xFE; + usb_write8(padapter, REG_MAC_PINMUX_CFG, led_cfg); exit: pLed->bLedOn = false; } -/* Interface to manipulate LED objects. */ -/* Default LED behavior. */ - -/* Description: */ -/* Initialize all LED_871x objects. */ void rtw_hal_sw_led_init(struct adapter *padapter) { - struct led_priv *pledpriv = &(padapter->ledpriv); + struct led_priv *pledpriv = &padapter->ledpriv; - InitLed871x(padapter, &(pledpriv->SwLed0)); + InitLed871x(padapter, &pledpriv->sw_led); } -/* Description: */ -/* DeInitialize all LED_819xUsb objects. */ void rtw_hal_sw_led_deinit(struct adapter *padapter) { - struct led_priv *ledpriv = &(padapter->ledpriv); + struct led_priv *ledpriv = &padapter->ledpriv; - DeInitLed871x(&(ledpriv->SwLed0)); + DeInitLed871x(&ledpriv->sw_led); } diff --git a/drivers/staging/rtl8188eu/hal/rtl8188eu_xmit.c b/drivers/staging/rtl8188eu/hal/rtl8188eu_xmit.c index a11bee16d070..a72e069269b8 100644 --- a/drivers/staging/rtl8188eu/hal/rtl8188eu_xmit.c +++ b/drivers/staging/rtl8188eu/hal/rtl8188eu_xmit.c @@ -412,7 +412,8 @@ static u32 xmitframe_need_length(struct xmit_frame *pxmitframe) return len; } -s32 rtl8188eu_xmitframe_complete(struct adapter *adapt, struct xmit_priv *pxmitpriv) +bool rtl8188eu_xmitframe_complete(struct adapter *adapt, + struct xmit_priv *pxmitpriv) { struct xmit_frame *pxmitframe = NULL; struct xmit_frame *pfirstframe = NULL; @@ -597,7 +598,7 @@ s32 rtl8188eu_xmitframe_complete(struct adapter *adapt, struct xmit_priv *pxmitp * true dump packet directly * false enqueue packet */ -s32 rtw_hal_xmit(struct adapter *adapt, struct xmit_frame *pxmitframe) +bool rtw_hal_xmit(struct adapter *adapt, struct xmit_frame *pxmitframe) { s32 res; struct xmit_buf *pxmitbuf = NULL; @@ -610,7 +611,7 @@ s32 rtw_hal_xmit(struct adapter *adapt, struct xmit_frame *pxmitframe) if (rtw_txframes_sta_ac_pending(adapt, pattrib) > 0) goto enqueue; - if (check_fwstate(pmlmepriv, _FW_UNDER_SURVEY|_FW_UNDER_LINKING) == true) + if (check_fwstate(pmlmepriv, _FW_UNDER_SURVEY | _FW_UNDER_LINKING)) goto enqueue; pxmitbuf = rtw_alloc_xmitbuf(pxmitpriv); diff --git a/drivers/staging/rtl8188eu/include/hal_intf.h b/drivers/staging/rtl8188eu/include/hal_intf.h index e5be27af7bf5..8b65fcba1967 100644 --- a/drivers/staging/rtl8188eu/include/hal_intf.h +++ b/drivers/staging/rtl8188eu/include/hal_intf.h @@ -185,7 +185,7 @@ u32 rtw_hal_inirp_init(struct adapter *padapter); void rtw_hal_inirp_deinit(struct adapter *padapter); void usb_intf_stop(struct adapter *padapter); -s32 rtw_hal_xmit(struct adapter *padapter, struct xmit_frame *pxmitframe); +bool rtw_hal_xmit(struct adapter *padapter, struct xmit_frame *pxmitframe); s32 rtw_hal_mgnt_xmit(struct adapter *padapter, struct xmit_frame *pmgntframe); diff --git a/drivers/staging/rtl8188eu/include/rtl8188e_hal.h b/drivers/staging/rtl8188eu/include/rtl8188e_hal.h index a86b07d3c82a..eb4655ecf6e0 100644 --- a/drivers/staging/rtl8188eu/include/rtl8188e_hal.h +++ b/drivers/staging/rtl8188eu/include/rtl8188e_hal.h @@ -329,6 +329,8 @@ struct hal_data_8188e { u8 UsbRxAggPageTimeout; }; +void Hal_GetChnlGroup88E(u8 chnl, u8 *group); + /* rtl8188e_hal_init.c */ void _8051Reset88E(struct adapter *padapter); void rtl8188e_InitializeFirmwareVars(struct adapter *padapter); diff --git a/drivers/staging/rtl8188eu/include/rtl8188e_xmit.h b/drivers/staging/rtl8188eu/include/rtl8188e_xmit.h index 20d35480dab8..421e9f45306f 100644 --- 
a/drivers/staging/rtl8188eu/include/rtl8188e_xmit.h +++ b/drivers/staging/rtl8188eu/include/rtl8188e_xmit.h @@ -149,8 +149,8 @@ s32 rtl8188eu_init_xmit_priv(struct adapter *padapter); s32 rtl8188eu_xmit_buf_handler(struct adapter *padapter); #define hal_xmit_handler rtl8188eu_xmit_buf_handler void rtl8188eu_xmit_tasklet(void *priv); -s32 rtl8188eu_xmitframe_complete(struct adapter *padapter, - struct xmit_priv *pxmitpriv); +bool rtl8188eu_xmitframe_complete(struct adapter *padapter, + struct xmit_priv *pxmitpriv); void handle_txrpt_ccx_88e(struct adapter *adapter, u8 *buf); diff --git a/drivers/staging/rtl8188eu/include/rtw_led.h b/drivers/staging/rtl8188eu/include/rtw_led.h index e50237ab05c4..ee62ed76a465 100644 --- a/drivers/staging/rtl8188eu/include/rtw_led.h +++ b/drivers/staging/rtl8188eu/include/rtw_led.h @@ -76,12 +76,10 @@ struct LED_871x { ((struct LED_871x *)_LED_871x)->CurrLedState == LED_BLINK_WPS_STOP || \ ((struct LED_871x *)_LED_871x)->bLedWPSBlinkInProgress) -void LedControl8188eu(struct adapter *padapter, enum LED_CTL_MODE LedAction); +void led_control_8188eu(struct adapter *padapter, enum LED_CTL_MODE LedAction); struct led_priv { - /* add for led control */ - struct LED_871x SwLed0; - /* add for led control */ + struct LED_871x sw_led; }; void BlinkWorkItemCallback(struct work_struct *work); @@ -93,8 +91,8 @@ void InitLed871x(struct adapter *padapter, struct LED_871x *pLed); void DeInitLed871x(struct LED_871x *pLed); /* hal... */ -void BlinkHandler(struct LED_871x *pLed); -void SwLedOn(struct adapter *padapter, struct LED_871x *pLed); -void SwLedOff(struct adapter *padapter, struct LED_871x *pLed); +void blink_handler(struct LED_871x *pLed); +void sw_led_on(struct adapter *padapter, struct LED_871x *pLed); +void sw_led_off(struct adapter *padapter, struct LED_871x *pLed); #endif /* __RTW_LED_H_ */ diff --git a/drivers/staging/rtl8188eu/include/rtw_mlme.h b/drivers/staging/rtl8188eu/include/rtw_mlme.h index 8d9d663f0645..bfef66525944 100644 --- a/drivers/staging/rtl8188eu/include/rtw_mlme.h +++ b/drivers/staging/rtl8188eu/include/rtw_mlme.h @@ -211,9 +211,9 @@ int hostapd_mode_init(struct adapter *padapter); void hostapd_mode_unload(struct adapter *padapter); #endif -extern unsigned char WPA_TKIP_CIPHER[4]; -extern unsigned char RSN_TKIP_CIPHER[4]; -extern unsigned char REALTEK_96B_IE[]; +extern const u8 WPA_TKIP_CIPHER[4]; +extern const u8 RSN_TKIP_CIPHER[4]; +extern u8 REALTEK_96B_IE[]; extern const u8 MCS_rate_1R[16]; void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf); @@ -285,7 +285,7 @@ static inline void _clr_fwstate_(struct mlme_priv *pmlmepriv, int state) static inline void clr_fwstate(struct mlme_priv *pmlmepriv, int state) { spin_lock_bh(&pmlmepriv->lock); - if (check_fwstate(pmlmepriv, state) == true) + if (check_fwstate(pmlmepriv, state)) pmlmepriv->fw_state ^= state; spin_unlock_bh(&pmlmepriv->lock); } diff --git a/drivers/staging/rtl8188eu/include/rtw_mlme_ext.h b/drivers/staging/rtl8188eu/include/rtw_mlme_ext.h index 9526da3efcc4..1fb2349bd0a0 100644 --- a/drivers/staging/rtl8188eu/include/rtw_mlme_ext.h +++ b/drivers/staging/rtl8188eu/include/rtw_mlme_ext.h @@ -80,15 +80,8 @@ #define _48M_RATE_ 10 #define _54M_RATE_ 11 - -extern unsigned char RTW_WPA_OUI[]; -extern unsigned char WMM_OUI[]; -extern unsigned char WPS_OUI[]; -extern unsigned char WFD_OUI[]; -extern unsigned char P2P_OUI[]; - -extern unsigned char WMM_INFO_OUI[]; -extern unsigned char WMM_PARA_OUI[]; +extern const u8 RTW_WPA_OUI[]; +extern const u8 WPS_OUI[]; /* Channel Plan 
Type. */ /* Note: */ @@ -580,8 +573,8 @@ void addba_timer_hdl(struct timer_list *t); mod_timer(&mlmeext->link_timer, jiffies + \ msecs_to_jiffies(ms)) -int cckrates_included(unsigned char *rate, int ratelen); -int cckratesonly_included(unsigned char *rate, int ratelen); +bool cckrates_included(unsigned char *rate, int ratelen); +bool cckratesonly_included(unsigned char *rate, int ratelen); void process_addba_req(struct adapter *padapter, u8 *paddba_req, u8 *addr); diff --git a/drivers/staging/rtl8188eu/include/rtw_recv.h b/drivers/staging/rtl8188eu/include/rtw_recv.h index 54b7ba367293..8fc500496f92 100644 --- a/drivers/staging/rtl8188eu/include/rtw_recv.h +++ b/drivers/staging/rtl8188eu/include/rtw_recv.h @@ -27,7 +27,7 @@ /* for Rx reordering buffer control */ struct recv_reorder_ctrl { struct adapter *padapter; - u8 enable; + bool enable; u16 indicate_seq;/* wstart_b, init_value=0xffff */ u16 wend_b; u8 wsize_b; diff --git a/drivers/staging/rtl8188eu/include/rtw_sreset.h b/drivers/staging/rtl8188eu/include/rtw_sreset.h index 3ee6a4a7847d..ea3c0d93bf0b 100644 --- a/drivers/staging/rtl8188eu/include/rtw_sreset.h +++ b/drivers/staging/rtl8188eu/include/rtw_sreset.h @@ -11,7 +11,7 @@ #include struct sreset_priv { - u8 Wifi_Error_Status; + u8 wifi_error_status; }; #include diff --git a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c index 4ecd2ff48c41..efaa1c779f4b 100644 --- a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c +++ b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c @@ -296,7 +296,7 @@ static char *translate_scan(struct adapter *padapter, iwe.cmd = IWEVQUAL; iwe.u.qual.updated = IW_QUAL_QUAL_UPDATED | IW_QUAL_LEVEL_UPDATED | IW_QUAL_NOISE_INVALID; - if (check_fwstate(pmlmepriv, _FW_LINKED) == true && + if (check_fwstate(pmlmepriv, _FW_LINKED) && is_same_network(&pmlmepriv->cur_network.network, &pnetwork->network)) { ss = padapter->recvpriv.signal_strength; sq = padapter->recvpriv.signal_qual; @@ -631,7 +631,7 @@ static int rtw_wx_get_name(struct net_device *dev, RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("cmd_code =%x\n", info->cmd)); - if (check_fwstate(pmlmepriv, _FW_LINKED|WIFI_ADHOC_MASTER_STATE) == true) { + if (check_fwstate(pmlmepriv, _FW_LINKED | WIFI_ADHOC_MASTER_STATE)) { /* parsing HT_CAP_IE */ p = rtw_get_ie(&pcur_bss->ies[12], _HT_CAPABILITY_IE_, &ht_ielen, pcur_bss->ie_length-12); if (p && ht_ielen > 0) @@ -1024,9 +1024,9 @@ static int rtw_wx_get_wap(struct net_device *dev, RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("rtw_wx_get_wap\n")); - if (((check_fwstate(pmlmepriv, _FW_LINKED)) == true) || - ((check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE)) == true) || - ((check_fwstate(pmlmepriv, WIFI_AP_STATE)) == true)) + if (check_fwstate(pmlmepriv, _FW_LINKED) || + check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) || + check_fwstate(pmlmepriv, WIFI_AP_STATE)) memcpy(wrqu->ap_addr.sa_data, pcur_bss->MacAddress, ETH_ALEN); else eth_zero_addr(wrqu->ap_addr.sa_data); @@ -1335,7 +1335,7 @@ static int rtw_wx_set_essid(struct net_device *dev, RT_TRACE(_module_rtl871x_ioctl_os_c, _drv_info_, ("rtw_wx_set_essid: find match, set infra mode\n")); - if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true) { + if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)) { if (pnetwork->network.InfrastructureMode != pmlmepriv->cur_network.network.InfrastructureMode) continue; } @@ -1691,7 +1691,7 @@ static int rtw_wx_get_enc(struct net_device *dev, struct iw_point *erq = &(wrqu->encoding); struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - 
if (check_fwstate(pmlmepriv, _FW_LINKED) != true) { + if (!check_fwstate(pmlmepriv, _FW_LINKED)) { if (!check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE)) { erq->length = 0; erq->flags |= IW_ENCODE_DISABLED; @@ -2425,7 +2425,7 @@ static int rtw_set_beacon(struct net_device *dev, struct ieee_param *param, int DBG_88E("%s, len =%d\n", __func__, len); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; memcpy(&pstapriv->max_num_sta, param->u.bcn_ie.reserved, 2); @@ -2517,7 +2517,7 @@ static int rtw_del_sta(struct net_device *dev, struct ieee_param *param) DBG_88E("rtw_del_sta =%pM\n", (param->sta_addr)); - if (check_fwstate(pmlmepriv, (_FW_LINKED|WIFI_AP_STATE)) != true) + if (!check_fwstate(pmlmepriv, _FW_LINKED | WIFI_AP_STATE)) return -EINVAL; if (is_broadcast_ether_addr(param->sta_addr)) @@ -2553,7 +2553,7 @@ static int rtw_ioctl_get_sta_data(struct net_device *dev, struct ieee_param *par DBG_88E("rtw_ioctl_get_sta_info, sta_addr: %pM\n", (param_ex->sta_addr)); - if (check_fwstate(pmlmepriv, (_FW_LINKED|WIFI_AP_STATE)) != true) + if (!check_fwstate(pmlmepriv, _FW_LINKED | WIFI_AP_STATE)) return -EINVAL; if (is_broadcast_ether_addr(param_ex->sta_addr)) @@ -2607,7 +2607,7 @@ static int rtw_get_sta_wpaie(struct net_device *dev, struct ieee_param *param) DBG_88E("rtw_get_sta_wpaie, sta_addr: %pM\n", (param->sta_addr)); - if (check_fwstate(pmlmepriv, (_FW_LINKED|WIFI_AP_STATE)) != true) + if (!check_fwstate(pmlmepriv, _FW_LINKED | WIFI_AP_STATE)) return -EINVAL; if (is_broadcast_ether_addr(param->sta_addr)) @@ -2645,7 +2645,7 @@ static int rtw_set_wps_beacon(struct net_device *dev, struct ieee_param *param, DBG_88E("%s, len =%d\n", __func__, len); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; ie_len = len-12-2;/* 12 = param header, 2:no packed */ @@ -2680,7 +2680,7 @@ static int rtw_set_wps_probe_resp(struct net_device *dev, struct ieee_param *par DBG_88E("%s, len =%d\n", __func__, len); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; ie_len = len-12-2;/* 12 = param header, 2:no packed */ @@ -2710,7 +2710,7 @@ static int rtw_set_wps_assoc_resp(struct net_device *dev, struct ieee_param *par DBG_88E("%s, len =%d\n", __func__, len); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; ie_len = len-12-2;/* 12 = param header, 2:no packed */ @@ -2742,7 +2742,7 @@ static int rtw_set_hidden_ssid(struct net_device *dev, struct ieee_param *param, u8 value; - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; if (param->u.wpa_param.name != 0) /* dummy test... 
*/ @@ -2762,7 +2762,7 @@ static int rtw_ioctl_acl_remove_sta(struct net_device *dev, struct ieee_param *p struct adapter *padapter = (struct adapter *)rtw_netdev_priv(dev); struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; if (is_broadcast_ether_addr(param->sta_addr)) @@ -2776,7 +2776,7 @@ static int rtw_ioctl_acl_add_sta(struct net_device *dev, struct ieee_param *para struct adapter *padapter = (struct adapter *)rtw_netdev_priv(dev); struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; if (is_broadcast_ether_addr(param->sta_addr)) @@ -2791,7 +2791,7 @@ static int rtw_ioctl_set_macaddr_acl(struct net_device *dev, struct ieee_param * struct adapter *padapter = (struct adapter *)rtw_netdev_priv(dev); struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - if (check_fwstate(pmlmepriv, WIFI_AP_STATE) != true) + if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return -EINVAL; rtw_set_macaddr_acl(padapter, param->u.mlme.command); diff --git a/drivers/staging/rtl8188eu/os_dep/os_intfs.c b/drivers/staging/rtl8188eu/os_dep/os_intfs.c index dac9f98b4808..abcce2240f15 100644 --- a/drivers/staging/rtl8188eu/os_dep/os_intfs.c +++ b/drivers/staging/rtl8188eu/os_dep/os_intfs.c @@ -131,7 +131,7 @@ MODULE_PARM_DESC(debug, "Set debug level (1-9) (default 1)"); static bool rtw_monitor_enable; module_param_named(monitor_enable, rtw_monitor_enable, bool, 0444); -MODULE_PARM_DESC(monitor_enable, "Enable monitor inferface (default: false)"); +MODULE_PARM_DESC(monitor_enable, "Enable monitor interface (default: false)"); static int netdev_close(struct net_device *pnetdev); @@ -578,7 +578,7 @@ static int _netdev_open(struct net_device *pnetdev) } rtw_hal_inirp_init(padapter); - LedControl8188eu(padapter, LED_CTL_NO_LINK); + led_control_8188eu(padapter, LED_CTL_NO_LINK); padapter->bup = true; } @@ -661,7 +661,7 @@ int rtw_ips_pwr_up(struct adapter *padapter) result = ips_netdrv_open(padapter); - LedControl8188eu(padapter, LED_CTL_NO_LINK); + led_control_8188eu(padapter, LED_CTL_NO_LINK); DBG_88E("<=== rtw_ips_pwr_up.............. in %dms\n", jiffies_to_msecs(jiffies - start_time)); @@ -676,7 +676,7 @@ void rtw_ips_pwr_down(struct adapter *padapter) padapter->net_closed = true; - LedControl8188eu(padapter, LED_CTL_POWER_OFF); + led_control_8188eu(padapter, LED_CTL_POWER_OFF); rtw_ips_dev_unload(padapter); DBG_88E("<=== rtw_ips_pwr_down..................... in %dms\n", @@ -728,7 +728,7 @@ static int netdev_close(struct net_device *pnetdev) /* s2-4. 
*/ rtw_free_network_queue(padapter, true); /* Close LED */ - LedControl8188eu(padapter, LED_CTL_POWER_OFF); + led_control_8188eu(padapter, LED_CTL_POWER_OFF); } RT_TRACE(_module_os_intfs_c_, _drv_info_, ("-88eu_drv - drv_close\n")); diff --git a/drivers/staging/rtl8188eu/os_dep/recv_linux.c b/drivers/staging/rtl8188eu/os_dep/recv_linux.c index 6f74f49bf3ab..9c9339863a4a 100644 --- a/drivers/staging/rtl8188eu/os_dep/recv_linux.c +++ b/drivers/staging/rtl8188eu/os_dep/recv_linux.c @@ -38,7 +38,7 @@ void rtw_handle_tkip_mic_err(struct adapter *padapter, u8 bgroup) } else { cur_time = jiffies; - if (cur_time - psecuritypriv->last_mic_err_time < 60*HZ) { + if (cur_time - psecuritypriv->last_mic_err_time < 60 * HZ) { psecuritypriv->btkip_countermeasure = true; psecuritypriv->last_mic_err_time = 0; psecuritypriv->btkip_countermeasure_time = cur_time; @@ -69,13 +69,13 @@ int rtw_recv_indicatepkt(struct adapter *padapter, struct sk_buff *skb; struct mlme_priv *pmlmepriv = &padapter->mlmepriv; - precvpriv = &(padapter->recvpriv); - pfree_recv_queue = &(precvpriv->free_recv_queue); + precvpriv = &padapter->recvpriv; + pfree_recv_queue = &precvpriv->free_recv_queue; skb = precv_frame->pkt; if (!skb) { RT_TRACE(_module_recv_osdep_c_, _drv_err_, - ("rtw_recv_indicatepkt():skb == NULL something wrong!!!!\n")); + ("%s():skb == NULL something wrong!!!!\n", __func__)); goto _recv_indicatepkt_drop; } @@ -126,7 +126,7 @@ _recv_indicatepkt_end: rtw_free_recvframe(precv_frame, pfree_recv_queue); RT_TRACE(_module_recv_osdep_c_, _drv_info_, - ("\n rtw_recv_indicatepkt :after netif_rx!!!!\n")); + ("\n %s :after netif_rx!!!!\n", __func__)); return _SUCCESS; diff --git a/drivers/staging/rtl8188eu/os_dep/rtw_android.c b/drivers/staging/rtl8188eu/os_dep/rtw_android.c index 34080c0ce14a..2ea2af3286bc 100644 --- a/drivers/staging/rtl8188eu/os_dep/rtw_android.c +++ b/drivers/staging/rtl8188eu/os_dep/rtw_android.c @@ -127,12 +127,6 @@ static int android_get_p2p_addr(struct net_device *net, char *command, return ETH_ALEN; } -static int rtw_android_set_block(struct net_device *net, char *command, - int total_len) -{ - return 0; -} - int rtw_android_priv_cmd(struct net_device *net, struct ifreq *ifr, int cmd) { int ret = 0; @@ -186,8 +180,6 @@ int rtw_android_priv_cmd(struct net_device *net, struct ifreq *ifr, int cmd) priv_cmd.total_len); break; case ANDROID_WIFI_CMD_BLOCK: - bytes_written = rtw_android_set_block(net, command, - priv_cmd.total_len); break; case ANDROID_WIFI_CMD_RXFILTER_START: break; diff --git a/drivers/staging/rtl8188eu/os_dep/usb_ops_linux.c b/drivers/staging/rtl8188eu/os_dep/usb_ops_linux.c index d6a499692e96..e4f2af2974ed 100644 --- a/drivers/staging/rtl8188eu/os_dep/usb_ops_linux.c +++ b/drivers/staging/rtl8188eu/os_dep/usb_ops_linux.c @@ -25,23 +25,24 @@ static void interrupt_handler_8188eu(struct adapter *adapt, u16 pkt_len, u8 *pbu /* C2H Event */ if (pbuf[0] != 0) - memcpy(&(haldata->C2hArray[0]), &(pbuf[USB_INTR_CONTENT_C2H_OFFSET]), 16); + memcpy(&haldata->C2hArray[0], + &pbuf[USB_INTR_CONTENT_C2H_OFFSET], 16); } static int recvbuf2recvframe(struct adapter *adapt, struct sk_buff *pskb) { - u8 *pbuf; - u8 shift_sz = 0; - u16 pkt_cnt; - u32 pkt_offset, skb_len, alloc_sz; - s32 transfer_len; - struct recv_stat *prxstat; - struct phy_stat *pphy_status = NULL; + u8 *pbuf; + u8 shift_sz = 0; + u16 pkt_cnt; + u32 pkt_offset, skb_len, alloc_sz; + s32 transfer_len; + struct recv_stat *prxstat; + struct phy_stat *pphy_status = NULL; struct sk_buff *pkt_copy = NULL; - struct recv_frame *precvframe = NULL; 
- struct rx_pkt_attrib *pattrib = NULL; + struct recv_frame *precvframe = NULL; + struct rx_pkt_attrib *pattrib = NULL; struct hal_data_8188e *haldata = adapt->HalData; - struct recv_priv *precvpriv = &adapt->recvpriv; + struct recv_priv *precvpriv = &adapt->recvpriv; struct __queue *pfree_recv_queue = &precvpriv->free_recv_queue; transfer_len = (s32)pskb->len; @@ -52,14 +53,16 @@ static int recvbuf2recvframe(struct adapter *adapt, struct sk_buff *pskb) do { RT_TRACE(_module_rtl871x_recv_c_, _drv_info_, - ("recvbuf2recvframe: rxdesc=offsset 0:0x%08x, 4:0x%08x, 8:0x%08x, C:0x%08x\n", - prxstat->rxdw0, prxstat->rxdw1, prxstat->rxdw2, prxstat->rxdw4)); + ("%s: rxdesc=offsset 0:0x%08x, 4:0x%08x, 8:0x%08x, C:0x%08x\n", + __func__, prxstat->rxdw0, prxstat->rxdw1, + prxstat->rxdw2, prxstat->rxdw4)); prxstat = (struct recv_stat *)pbuf; precvframe = rtw_alloc_recvframe(pfree_recv_queue); if (!precvframe) { - RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, ("recvbuf2recvframe: precvframe==NULL\n")); + RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, + ("%s: precvframe==NULL\n", __func__)); DBG_88E("%s()-%d: rtw_alloc_recvframe() failed! RX Drop!\n", __func__, __LINE__); goto _exit_recvbuf2recvframe; } @@ -83,7 +86,8 @@ static int recvbuf2recvframe(struct adapter *adapt, struct sk_buff *pskb) pkt_offset = RXDESC_SIZE + pattrib->drvinfo_sz + pattrib->shift_sz + pattrib->pkt_len; if ((pattrib->pkt_len <= 0) || (pkt_offset > transfer_len)) { - RT_TRACE(_module_rtl871x_recv_c_, _drv_info_, ("recvbuf2recvframe: pkt_len<=0\n")); + RT_TRACE(_module_rtl871x_recv_c_, _drv_info_, + ("%s: pkt_len<=0\n", __func__)); DBG_88E("%s()-%d: RX Warning!,pkt_len<=0 or pkt_offset> transfer_len\n", __func__, __LINE__); rtw_free_recvframe(precvframe, pfree_recv_queue); goto _exit_recvbuf2recvframe; @@ -121,7 +125,8 @@ static int recvbuf2recvframe(struct adapter *adapt, struct sk_buff *pskb) memcpy(pkt_copy->data, (pbuf + pattrib->drvinfo_sz + RXDESC_SIZE), skb_len); skb_put(precvframe->pkt, skb_len); } else { - DBG_88E("recvbuf2recvframe: alloc_skb fail , drop frag frame\n"); + DBG_88E("%s: alloc_skb fail , drop frag frame\n", + __func__); rtw_free_recvframe(precvframe, pfree_recv_queue); goto _exit_recvbuf2recvframe; } @@ -143,20 +148,19 @@ static int recvbuf2recvframe(struct adapter *adapt, struct sk_buff *pskb) update_recvframe_phyinfo_88e(precvframe, pphy_status); if (rtw_recv_entry(precvframe) != _SUCCESS) { RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, - ("recvbuf2recvframe: rtw_recv_entry(precvframe) != _SUCCESS\n")); + ("%s: rtw_recv_entry(precvframe) != _SUCCESS\n", + __func__)); } } else if (pattrib->pkt_rpt_type == TX_REPORT1) { /* CCX-TXRPT ack for xmit mgmt frames. 
*/ handle_txrpt_ccx_88e(adapt, precvframe->pkt->data); rtw_free_recvframe(precvframe, pfree_recv_queue); } else if (pattrib->pkt_rpt_type == TX_REPORT2) { - ODM_RA_TxRPT2Handle_8188E( - &haldata->odmpriv, - precvframe->pkt->data, - pattrib->pkt_len, - pattrib->MacIDValidEntry[0], - pattrib->MacIDValidEntry[1] - ); + ODM_RA_TxRPT2Handle_8188E(&haldata->odmpriv, + precvframe->pkt->data, + pattrib->pkt_len, + pattrib->MacIDValidEntry[0], + pattrib->MacIDValidEntry[1]); rtw_free_recvframe(precvframe, pfree_recv_queue); } else if (pattrib->pkt_rpt_type == HIS_REPORT) { interrupt_handler_8188eu(adapt, pattrib->pkt_len, precvframe->pkt->data); @@ -169,7 +173,7 @@ static int recvbuf2recvframe(struct adapter *adapt, struct sk_buff *pskb) pkt_copy = NULL; if (transfer_len > 0 && pkt_cnt == 0) - pkt_cnt = (le32_to_cpu(prxstat->rxdw2)>>16) & 0xff; + pkt_cnt = (le32_to_cpu(prxstat->rxdw2) >> 16) & 0xff; } while ((transfer_len > 0) && (pkt_cnt > 0)); @@ -197,7 +201,7 @@ unsigned int ffaddr2pipehdl(struct dvobj_priv *pdvobj, u32 addr) static int usbctrl_vendorreq(struct adapter *adapt, u8 request, u16 value, u16 index, void *pdata, u16 len, u8 requesttype) { - struct dvobj_priv *dvobjpriv = adapter_to_dvobj(adapt); + struct dvobj_priv *dvobjpriv = adapter_to_dvobj(adapt); struct usb_device *udev = dvobjpriv->pusbdev; unsigned int pipe; int status = 0; @@ -206,7 +210,9 @@ static int usbctrl_vendorreq(struct adapter *adapt, u8 request, u16 value, u16 i int vendorreq_times = 0; if ((adapt->bSurpriseRemoved) || (adapt->pwrctrlpriv.pnp_bstop_trx)) { - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usbctrl_vendorreq:(adapt->bSurpriseRemoved ||adapter->pwrctrlpriv.pnp_bstop_trx)!!!\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s:(adapt->bSurpriseRemoved ||adapter->pwrctrlpriv.pnp_bstop_trx)!!!\n", + __func__)); status = -EPERM; goto exit; } @@ -254,11 +260,10 @@ static int usbctrl_vendorreq(struct adapter *adapt, u8 request, u16 value, u16 i len, status, *(u32 *)pdata, vendorreq_times); if (status < 0) { - if (status == (-ESHUTDOWN) || status == -ENODEV) { + if (status == (-ESHUTDOWN) || status == -ENODEV) adapt->bSurpriseRemoved = true; - } else { - adapt->HalData->srestpriv.Wifi_Error_Status = USB_VEN_REQ_CMD_FAIL; - } + else + adapt->HalData->srestpriv.wifi_error_status = USB_VEN_REQ_CMD_FAIL; } else { /* status != len && status >= 0 */ if (status > 0) { if (requesttype == 0x01) { @@ -269,7 +274,7 @@ static int usbctrl_vendorreq(struct adapter *adapt, u8 request, u16 value, u16 i } } - /* firmware download is checksumed, don't retry */ + /* firmware download is checksummed, don't retry */ if ((value >= FW_8188E_START_ADDRESS && value <= FW_8188E_END_ADDRESS) || status == len) break; } @@ -294,7 +299,7 @@ u8 usb_read8(struct adapter *adapter, u32 addr) requesttype = 0x01;/* read_in */ index = 0;/* n/a */ - wvalue = (u16)(addr&0x0000ffff); + wvalue = (u16)(addr & 0x0000ffff); len = 1; usbctrl_vendorreq(adapter, request, wvalue, index, &data, len, requesttype); @@ -314,11 +319,11 @@ u16 usb_read16(struct adapter *adapter, u32 addr) request = 0x05; requesttype = 0x01;/* read_in */ index = 0;/* n/a */ - wvalue = (u16)(addr&0x0000ffff); + wvalue = (u16)(addr & 0x0000ffff); len = 2; usbctrl_vendorreq(adapter, request, wvalue, index, &data, len, requesttype); - return (u16)(le32_to_cpu(data)&0xffff); + return (u16)(le32_to_cpu(data) & 0xffff); } u32 usb_read32(struct adapter *adapter, u32 addr) @@ -334,7 +339,7 @@ u32 usb_read32(struct adapter *adapter, u32 addr) requesttype = 0x01;/* read_in */ index = 0;/* 
n/a */ - wvalue = (u16)(addr&0x0000ffff); + wvalue = (u16)(addr & 0x0000ffff); len = 4; usbctrl_vendorreq(adapter, request, wvalue, index, &data, len, requesttype); @@ -344,16 +349,17 @@ u32 usb_read32(struct adapter *adapter, u32 addr) static void usb_read_port_complete(struct urb *purb, struct pt_regs *regs) { - struct recv_buf *precvbuf = (struct recv_buf *)purb->context; - struct adapter *adapt = (struct adapter *)precvbuf->adapter; + struct recv_buf *precvbuf = (struct recv_buf *)purb->context; + struct adapter *adapt = (struct adapter *)precvbuf->adapter; struct recv_priv *precvpriv = &adapt->recvpriv; - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_read_port_complete!!!\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("%s!!!\n", __func__)); if (adapt->bSurpriseRemoved || adapt->bDriverStopped || adapt->bReadPortCancel) { RT_TRACE(_module_hci_ops_os_c_, _drv_err_, - ("usb_read_port_complete:bDriverStopped(%d) OR bSurpriseRemoved(%d)\n", - adapt->bDriverStopped, adapt->bSurpriseRemoved)); + ("%s:bDriverStopped(%d) OR bSurpriseRemoved(%d)\n", + __func__, adapt->bDriverStopped, + adapt->bSurpriseRemoved)); precvbuf->reuse = true; DBG_88E("%s() RX Warning! bDriverStopped(%d) OR bSurpriseRemoved(%d) bReadPortCancel(%d)\n", @@ -365,7 +371,8 @@ static void usb_read_port_complete(struct urb *purb, struct pt_regs *regs) if (purb->status == 0) { /* SUCCESS */ if ((purb->actual_length > MAX_RECVBUF_SZ) || (purb->actual_length < RXDESC_SIZE)) { RT_TRACE(_module_hci_ops_os_c_, _drv_err_, - ("usb_read_port_complete: (purb->actual_length > MAX_RECVBUF_SZ) || (purb->actual_length < RXDESC_SIZE)\n")); + ("%s: (purb->actual_length > MAX_RECVBUF_SZ) || (purb->actual_length < RXDESC_SIZE)\n", + __func__)); precvbuf->reuse = true; usb_read_port(adapt, RECV_BULK_IN_ADDR, precvbuf); DBG_88E("%s()-%d: RX Warning!\n", __func__, __LINE__); @@ -381,9 +388,11 @@ static void usb_read_port_complete(struct urb *purb, struct pt_regs *regs) usb_read_port(adapt, RECV_BULK_IN_ADDR, precvbuf); } } else { - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_read_port_complete : purb->status(%d) != 0\n", purb->status)); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s : purb->status(%d) != 0\n", + __func__, purb->status)); - DBG_88E("###=> usb_read_port_complete => urb status(%d)\n", purb->status); + DBG_88E("###=> %s => urb status(%d)\n", __func__, purb->status); skb_put(precvbuf->pskb, purb->actual_length); precvbuf->pskb = NULL; @@ -396,11 +405,12 @@ static void usb_read_port_complete(struct urb *purb, struct pt_regs *regs) /* fall through */ case -ENOENT: adapt->bDriverStopped = true; - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_read_port_complete:bDriverStopped=true\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s:bDriverStopped=true\n", __func__)); break; case -EPROTO: case -EOVERFLOW: - adapt->HalData->srestpriv.Wifi_Error_Status = USB_READ_PORT_FAIL; + adapt->HalData->srestpriv.wifi_error_status = USB_READ_PORT_FAIL; precvbuf->reuse = true; usb_read_port(adapt, RECV_BULK_IN_ADDR, precvbuf); break; @@ -416,9 +426,9 @@ static void usb_read_port_complete(struct urb *purb, struct pt_regs *regs) u32 usb_read_port(struct adapter *adapter, u32 addr, struct recv_buf *precvbuf) { struct urb *purb = NULL; - struct dvobj_priv *pdvobj = adapter_to_dvobj(adapter); - struct recv_priv *precvpriv = &adapter->recvpriv; - struct usb_device *pusbd = pdvobj->pusbdev; + struct dvobj_priv *pdvobj = adapter_to_dvobj(adapter); + struct recv_priv *precvpriv = &adapter->recvpriv; + struct usb_device *pusbd = 
pdvobj->pusbdev; int err; unsigned int pipe; u32 ret = _SUCCESS; @@ -426,13 +436,14 @@ u32 usb_read_port(struct adapter *adapter, u32 addr, struct recv_buf *precvbuf) if (adapter->bDriverStopped || adapter->bSurpriseRemoved || adapter->pwrctrlpriv.pnp_bstop_trx) { RT_TRACE(_module_hci_ops_os_c_, _drv_err_, - ("usb_read_port:(adapt->bDriverStopped ||adapt->bSurpriseRemoved ||adapter->pwrctrlpriv.pnp_bstop_trx)!!!\n")); + ("%s:(adapt->bDriverStopped ||adapt->bSurpriseRemoved ||adapter->pwrctrlpriv.pnp_bstop_trx)!!!\n", + __func__)); return _FAIL; } if (!precvbuf) { RT_TRACE(_module_hci_ops_os_c_, _drv_err_, - ("usb_read_port:precvbuf==NULL\n")); + ("%s:precvbuf==NULL\n", __func__)); return _FAIL; } @@ -447,7 +458,7 @@ u32 usb_read_port(struct adapter *adapter, u32 addr, struct recv_buf *precvbuf) precvbuf->pskb = netdev_alloc_skb(adapter->pnetdev, MAX_RECVBUF_SZ); if (!precvbuf->pskb) { RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("init_recvbuf(): alloc_skb fail!\n")); - DBG_88E("#### usb_read_port() alloc_skb fail!#####\n"); + DBG_88E("#### %s() alloc_skb fail!#####\n", __func__); return _FAIL; } } else { /* reuse skb */ @@ -509,7 +520,7 @@ int usb_write8(struct adapter *adapter, u32 addr, u8 val) request = 0x05; requesttype = 0x00;/* write_out */ index = 0;/* n/a */ - wvalue = (u16)(addr&0x0000ffff); + wvalue = (u16)(addr & 0x0000ffff); len = 1; data = val; return usbctrl_vendorreq(adapter, request, wvalue, @@ -529,7 +540,7 @@ int usb_write16(struct adapter *adapter, u32 addr, u16 val) requesttype = 0x00;/* write_out */ index = 0;/* n/a */ - wvalue = (u16)(addr&0x0000ffff); + wvalue = (u16)(addr & 0x0000ffff); len = 2; data = cpu_to_le32(val & 0x0000ffff); @@ -551,7 +562,7 @@ int usb_write32(struct adapter *adapter, u32 addr, u32 val) requesttype = 0x00;/* write_out */ index = 0;/* n/a */ - wvalue = (u16)(addr&0x0000ffff); + wvalue = (u16)(addr & 0x0000ffff); len = 4; data = cpu_to_le32(val); @@ -562,8 +573,8 @@ int usb_write32(struct adapter *adapter, u32 addr, u32 val) static void usb_write_port_complete(struct urb *purb, struct pt_regs *regs) { struct xmit_buf *pxmitbuf = (struct xmit_buf *)purb->context; - struct adapter *padapter = pxmitbuf->padapter; - struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct adapter *padapter = pxmitbuf->padapter; + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; switch (pxmitbuf->flags) { case VO_QUEUE_INX: @@ -590,8 +601,9 @@ static void usb_write_port_complete(struct urb *purb, struct pt_regs *regs) if (padapter->bSurpriseRemoved || padapter->bDriverStopped || padapter->bWritePortCancel) { RT_TRACE(_module_hci_ops_os_c_, _drv_err_, - ("usb_write_port_complete:bDriverStopped(%d) OR bSurpriseRemoved(%d)", - padapter->bDriverStopped, padapter->bSurpriseRemoved)); + ("%s:bDriverStopped(%d) OR bSurpriseRemoved(%d)", + __func__, padapter->bDriverStopped, + padapter->bSurpriseRemoved)); DBG_88E("%s(): TX Warning! 
bDriverStopped(%d) OR bSurpriseRemoved(%d) bWritePortCancel(%d) pxmitbuf->ext_tag(%x)\n", __func__, padapter->bDriverStopped, padapter->bSurpriseRemoved, padapter->bReadPortCancel, @@ -601,12 +613,15 @@ static void usb_write_port_complete(struct urb *purb, struct pt_regs *regs) } if (purb->status) { - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_write_port_complete : purb->status(%d) != 0\n", purb->status)); - DBG_88E("###=> urb_write_port_complete status(%d)\n", purb->status); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s : purb->status(%d) != 0\n", + __func__, purb->status)); + DBG_88E("###=> %s status(%d)\n", __func__, purb->status); if ((purb->status == -EPIPE) || (purb->status == -EPROTO)) { sreset_set_wifi_error_status(padapter, USB_WRITE_PORT_FAIL); } else if (purb->status == -EINPROGRESS) { - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_write_port_complete: EINPROGRESS\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s: EINPROGRESS\n", __func__)); goto check_completion; } else if (purb->status == -ENOENT) { DBG_88E("%s: -ENOENT\n", __func__); @@ -615,15 +630,17 @@ static void usb_write_port_complete(struct urb *purb, struct pt_regs *regs) DBG_88E("%s: -ECONNRESET\n", __func__); goto check_completion; } else if (purb->status == -ESHUTDOWN) { - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_write_port_complete: ESHUTDOWN\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s: ESHUTDOWN\n", __func__)); padapter->bDriverStopped = true; - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_write_port_complete:bDriverStopped = true\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s:bDriverStopped = true\n", __func__)); goto check_completion; } else { padapter->bSurpriseRemoved = true; DBG_88E("bSurpriseRemoved = true\n"); - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_write_port_complete:bSurpriseRemoved = true\n")); - + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s:bSurpriseRemoved = true\n", __func__)); goto check_completion; } } @@ -645,17 +662,18 @@ u32 usb_write_port(struct adapter *padapter, u32 addr, u32 cnt, struct xmit_buf int status; u32 ret = _FAIL; struct urb *purb = NULL; - struct dvobj_priv *pdvobj = adapter_to_dvobj(padapter); - struct xmit_priv *pxmitpriv = &padapter->xmitpriv; + struct dvobj_priv *pdvobj = adapter_to_dvobj(padapter); + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; struct xmit_frame *pxmitframe = (struct xmit_frame *)xmitbuf->priv_data; struct usb_device *pusbd = pdvobj->pusbdev; - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("+usb_write_port\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("+%s\n", __func__)); if ((padapter->bDriverStopped) || (padapter->bSurpriseRemoved) || (padapter->pwrctrlpriv.pnp_bstop_trx)) { RT_TRACE(_module_hci_ops_os_c_, _drv_err_, - ("usb_write_port:( padapter->bDriverStopped ||padapter->bSurpriseRemoved ||adapter->pwrctrlpriv.pnp_bstop_trx)!!!\n")); + ("%s:( padapter->bDriverStopped ||padapter->bSurpriseRemoved ||adapter->pwrctrlpriv.pnp_bstop_trx)!!!\n", + __func__)); rtw_sctx_done_err(&xmitbuf->sctx, RTW_SCTX_DONE_TX_DENY); goto exit; } @@ -703,8 +721,10 @@ u32 usb_write_port(struct adapter *padapter, u32 addr, u32 cnt, struct xmit_buf status = usb_submit_urb(purb, GFP_ATOMIC); if (status) { rtw_sctx_done_err(&xmitbuf->sctx, RTW_SCTX_DONE_WRITE_PORT_ERR); - DBG_88E("usb_write_port, status =%d\n", status); - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("usb_write_port(): usb_submit_urb, status =%x\n", status)); + DBG_88E("%s, status =%d\n", __func__, status); + 
RT_TRACE(_module_hci_ops_os_c_, _drv_err_, + ("%s(): usb_submit_urb, status =%x\n", + __func__, status)); switch (status) { case -ENODEV: @@ -720,7 +740,7 @@ u32 usb_write_port(struct adapter *padapter, u32 addr, u32 cnt, struct xmit_buf /* We add the URB_ZERO_PACKET flag to urb so that the host will send the zero packet automatically. */ - RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("-usb_write_port\n")); + RT_TRACE(_module_hci_ops_os_c_, _drv_err_, ("-%s\n", __func__)); exit: if (ret != _SUCCESS) @@ -776,7 +796,6 @@ void rtl8188eu_recv_tasklet(void *priv) void rtl8188eu_xmit_tasklet(void *priv) { - int ret = false; struct adapter *adapt = priv; struct xmit_priv *pxmitpriv = &adapt->xmitpriv; @@ -791,9 +810,7 @@ void rtl8188eu_xmit_tasklet(void *priv) break; } - ret = rtl8188eu_xmitframe_complete(adapt, pxmitpriv); - - if (!ret) + if (!rtl8188eu_xmitframe_complete(adapt, pxmitpriv)) break; } } diff --git a/drivers/staging/rtl8192e/rtllib_crypt_ccmp.c b/drivers/staging/rtl8192e/rtllib_crypt_ccmp.c index bc45cf098b04..2581ed6d14fa 100644 --- a/drivers/staging/rtl8192e/rtllib_crypt_ccmp.c +++ b/drivers/staging/rtl8192e/rtllib_crypt_ccmp.c @@ -1,12 +1,7 @@ -/* - * Host AP crypt: host-based CCMP encryption implementation for Host AP driver +// SPDX-License-Identifier: GPL-2.0 +/* Host AP crypt: host-based CCMP encryption implementation for Host AP driver * * Copyright (c) 2003-2004, Jouni Malinen - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. See README and COPYING for - * more details. */ #include @@ -67,7 +62,7 @@ static void *rtllib_ccmp_init(int key_idx) goto fail; priv->key_idx = key_idx; - priv->tfm = (void *)crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC); + priv->tfm = (void *)crypto_alloc_cipher("aes", 0, 0); if (IS_ERR(priv->tfm)) { pr_debug("Could not allocate crypto API aes\n"); priv->tfm = NULL; @@ -377,10 +372,11 @@ static int rtllib_ccmp_set_key(void *key, int len, u8 *seq, void *priv) data->rx_pn[5] = seq[0]; } crypto_cipher_setkey((void *)data->tfm, data->key, CCMP_TK_LEN); - } else if (len == 0) + } else if (len == 0) { data->key_set = 0; - else + } else { return -1; + } return 0; } diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_ccmp.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_ccmp.c index 041f1b123888..3534ddb900d1 100644 --- a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_ccmp.c +++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_ccmp.c @@ -71,7 +71,7 @@ static void *ieee80211_ccmp_init(int key_idx) goto fail; priv->key_idx = key_idx; - priv->tfm = (void *)crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC); + priv->tfm = (void *)crypto_alloc_cipher("aes", 0, 0); if (IS_ERR(priv->tfm)) { pr_debug("ieee80211_crypt_ccmp: could not allocate crypto API aes\n"); priv->tfm = NULL; diff --git a/drivers/staging/rtl8192u/r8192U.h b/drivers/staging/rtl8192u/r8192U.h index e65a893fd084..ec33fb9122e9 100644 --- a/drivers/staging/rtl8192u/r8192U.h +++ b/drivers/staging/rtl8192u/r8192U.h @@ -364,13 +364,13 @@ typedef enum _firmware_status { FW_STATUS_5_READY = 5, } firmware_status_e; -typedef struct _rt_firmare_seg_container { +typedef struct _fw_seg_container { u16 seg_size; u8 *seg_ptr; } fw_seg_container, *pfw_seg_container; typedef struct _rt_firmware { firmware_status_e firmware_status; - u16 cmdpacket_frag_thresold; + u16 cmdpacket_frag_threshold; #define RTL8190_MAX_FIRMWARE_CODE_SIZE 64000 u8 
firmware_buf[RTL8190_MAX_FIRMWARE_CODE_SIZE]; u16 firmware_buf_size; diff --git a/drivers/staging/rtl8192u/r8192U_dm.c b/drivers/staging/rtl8192u/r8192U_dm.c index 5fb5f583f703..6c9f9d82477d 100644 --- a/drivers/staging/rtl8192u/r8192U_dm.c +++ b/drivers/staging/rtl8192u/r8192U_dm.c @@ -2983,7 +2983,7 @@ static void dm_init_dynamic_txpower(struct net_device *dev) static void dm_dynamic_txpower(struct net_device *dev) { struct r8192_priv *priv = ieee80211_priv(dev); - unsigned int txhipower_threshhold = 0; + unsigned int txhipower_threshold = 0; unsigned int txlowpower_threshold = 0; if (priv->ieee80211->bdynamic_txpower_enable != true) { @@ -2993,18 +2993,18 @@ static void dm_dynamic_txpower(struct net_device *dev) } /*printk("priv->ieee80211->current_network.unknown_cap_exist is %d , priv->ieee80211->current_network.broadcom_cap_exist is %d\n", priv->ieee80211->current_network.unknown_cap_exist, priv->ieee80211->current_network.broadcom_cap_exist);*/ if ((priv->ieee80211->current_network.atheros_cap_exist) && (priv->ieee80211->mode == IEEE_G)) { - txhipower_threshhold = TX_POWER_ATHEROAP_THRESH_HIGH; + txhipower_threshold = TX_POWER_ATHEROAP_THRESH_HIGH; txlowpower_threshold = TX_POWER_ATHEROAP_THRESH_LOW; } else { - txhipower_threshhold = TX_POWER_NEAR_FIELD_THRESH_HIGH; + txhipower_threshold = TX_POWER_NEAR_FIELD_THRESH_HIGH; txlowpower_threshold = TX_POWER_NEAR_FIELD_THRESH_LOW; } - /*printk("=======>%s(): txhipower_threshhold is %d, txlowpower_threshold is %d\n", __func__, txhipower_threshhold, txlowpower_threshold);*/ + /*printk("=======>%s(): txhipower_threshold is %d, txlowpower_threshold is %d\n", __func__, txhipower_threshold, txlowpower_threshold);*/ RT_TRACE(COMP_TXAGC, "priv->undecorated_smoothed_pwdb = %ld\n", priv->undecorated_smoothed_pwdb); if (priv->ieee80211->state == IEEE80211_LINKED) { - if (priv->undecorated_smoothed_pwdb >= txhipower_threshhold) { + if (priv->undecorated_smoothed_pwdb >= txhipower_threshold) { priv->bDynamicTxHighPower = true; priv->bDynamicTxLowPower = false; } else { diff --git a/drivers/staging/rtl8192u/r819xU_cmdpkt.c b/drivers/staging/rtl8192u/r819xU_cmdpkt.c index 900f7866d381..e064f43fd8b6 100644 --- a/drivers/staging/rtl8192u/r819xU_cmdpkt.c +++ b/drivers/staging/rtl8192u/r819xU_cmdpkt.c @@ -243,7 +243,7 @@ static void cmpk_handle_interrupt_status(struct net_device *dev, u8 *pmsg) cmdpkt_beacontimerinterrupt_819xusb(dev); } - /* Other informations in interrupt status we need? */ + /* Other information in interrupt status we need? 
*/ DMESG("<---- cmpk_handle_interrupt_status()\n"); } diff --git a/drivers/staging/rtl8192u/r819xU_firmware.c b/drivers/staging/rtl8192u/r819xU_firmware.c index c3ea906f3af3..153d4ee0ec07 100644 --- a/drivers/staging/rtl8192u/r819xU_firmware.c +++ b/drivers/staging/rtl8192u/r819xU_firmware.c @@ -24,7 +24,7 @@ static void firmware_init_param(struct net_device *dev) struct r8192_priv *priv = ieee80211_priv(dev); rt_firmware *pfirmware = priv->pFirmware; - pfirmware->cmdpacket_frag_thresold = GET_COMMAND_PACKET_FRAG_THRESHOLD(MAX_TRANSMIT_BUFFER_SIZE); + pfirmware->cmdpacket_frag_threshold = GET_COMMAND_PACKET_FRAG_THRESHOLD(MAX_TRANSMIT_BUFFER_SIZE); } /* @@ -49,7 +49,7 @@ static bool fw_download_code(struct net_device *dev, u8 *code_virtual_address, firmware_init_param(dev); /* Fragmentation might be required */ - frag_threshold = pfirmware->cmdpacket_frag_thresold; + frag_threshold = pfirmware->cmdpacket_frag_threshold; do { if ((buffer_len - frag_offset) > frag_threshold) { frag_length = frag_threshold; diff --git a/drivers/staging/rtl8192u/r819xU_phyreg.h b/drivers/staging/rtl8192u/r819xU_phyreg.h index 65ee6088324c..dc9ddf100eab 100644 --- a/drivers/staging/rtl8192u/r819xU_phyreg.h +++ b/drivers/staging/rtl8192u/r819xU_phyreg.h @@ -53,7 +53,7 @@ /* page c */ #define rOFDM0_TRxPathEnable 0xc04 #define rOFDM0_XARxAFE 0xc10 /* RxIQ DC offset, Rx digital filter, DC notch filter */ -#define rOFDM0_XARxIQImbalance 0xc14 /* RxIQ imblance matrix */ +#define rOFDM0_XARxIQImbalance 0xc14 /* RxIQ imbalance matrix */ #define rOFDM0_XBRxAFE 0xc18 #define rOFDM0_XBRxIQImbalance 0xc1c #define rOFDM0_XCRxAFE 0xc20 diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c index 7cdd609cab6c..4c6519ccab30 100644 --- a/drivers/staging/rtl8712/hal_init.c +++ b/drivers/staging/rtl8712/hal_init.c @@ -100,11 +100,11 @@ static void fill_fwpriv(struct _adapter *padapter, struct fw_priv *pfwpriv) pfwpriv->rf_config = RTL8712_RFC_1T2R; } pfwpriv->mp_mode = (pregpriv->mp_mode == 1) ? 1 : 0; - pfwpriv->vcsType = pregpriv->vrtl_carrier_sense; /* 0:off 1:on 2:auto */ - pfwpriv->vcsMode = pregpriv->vcs_type; /* 1:RTS/CTS 2:CTS to self */ - /* default enable turboMode */ - pfwpriv->turboMode = ((pregpriv->wifi_test == 1) ? 0 : 1); - pfwpriv->lowPowerMode = pregpriv->low_power; + pfwpriv->vcs_type = pregpriv->vrtl_carrier_sense; /* 0:off 1:on 2:auto */ + pfwpriv->vcs_mode = pregpriv->vcs_type; /* 1:RTS/CTS 2:CTS to self */ + /* default enable turbo_mode */ + pfwpriv->turbo_mode = ((pregpriv->wifi_test == 1) ? 
0 : 1); + pfwpriv->low_power_mode = pregpriv->low_power; } static void update_fwhdr(struct fw_hdr *pfwhdr, const u8 *pmappedfw) diff --git a/drivers/staging/rtl8712/rtl8712_hal.h b/drivers/staging/rtl8712/rtl8712_hal.h index 42f519739128..66cc4645e2d1 100644 --- a/drivers/staging/rtl8712/rtl8712_hal.h +++ b/drivers/staging/rtl8712/rtl8712_hal.h @@ -72,13 +72,13 @@ struct fw_priv { /*8-bytes alignment required*/ unsigned char regulatory_class_3; /*regulatory class bit map 3*/ unsigned char rfintfs; /* 0:SWSI, 1:HWSI, 2:HWPI*/ unsigned char def_nettype; - unsigned char turboMode; - unsigned char lowPowerMode;/* 0: normal mode, 1: low power mode*/ + unsigned char turbo_mode; + unsigned char low_power_mode;/* 0: normal mode, 1: low power mode*/ /*--- long word 2 ----*/ unsigned char lbk_mode; /*0x00: normal, 0x03: MACLBK, 0x01: PHYLBK*/ unsigned char mp_mode; /* 1: for MP use, 0: for normal driver */ - unsigned char vcsType; /* 0:off 1:on 2:auto */ - unsigned char vcsMode; /* 1:RTS/CTS 2:CTS to self */ + unsigned char vcs_type; /* 0:off 1:on 2:auto */ + unsigned char vcs_mode; /* 1:RTS/CTS 2:CTS to self */ unsigned char rsvd022; unsigned char rsvd023; unsigned char rsvd024; diff --git a/drivers/staging/rtl8712/rtl871x_cmd.h b/drivers/staging/rtl8712/rtl871x_cmd.h index 75a126d8e26c..262984c58efb 100644 --- a/drivers/staging/rtl8712/rtl871x_cmd.h +++ b/drivers/staging/rtl8712/rtl871x_cmd.h @@ -422,7 +422,7 @@ struct getrfintfs_parm { * * mac[0] == 0 * ==> CMD mode, return H2C_SUCCESS. - * The following condition must be ture under CMD mode + * The following condition must be true under CMD mode * mac[1] == mac[4], mac[2] == mac[3], mac[0]=mac[5]= 0; * s0 == 0x1234, s1 == 0xabcd, w0 == 0x78563412, w1 == 0x5aa5def7; * s2 == (b1 << 8 | b0); diff --git a/drivers/staging/rtl8723bs/TODO b/drivers/staging/rtl8723bs/TODO index 80dbdaca3a8f..58e02f944b6d 100644 --- a/drivers/staging/rtl8723bs/TODO +++ b/drivers/staging/rtl8723bs/TODO @@ -1,6 +1,6 @@ TODO: - find and remove code blocks guarded by never set CONFIG_FOO defines -- find and remove remaining code valid only for 5 HGz. Most of the obvious +- find and remove remaining code valid only for 5 GHz. Most of the obvious ones have been removed, but things like channel > 14 still exist. 
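[Editorial aside, hedged -- not part of this patch.] The struct fw_priv renames a few hunks above (vcsType -> vcs_type, vcsMode -> vcs_mode, turboMode -> turbo_mode, lowPowerMode -> low_power_mode) look risky at first glance, because that structure is handed straight to the RTL8712 firmware and its declaration carries the "8-bytes alignment required" note. They are safe precisely because only identifiers change: every field keeps its width, order and offset, so the blob the firmware sees is bit-identical. Below is a sketch of how such a rename could be pinned down at build time; the field subset and byte offsets are illustrative, not the real layout.

#include <assert.h>
#include <stddef.h>

struct fw_priv_sketch {			/* stand-in for the firmware-visible blob */
	unsigned char signature_0;	/* byte 0 */
	unsigned char signature_1;	/* byte 1 */
	unsigned char hci_sel;		/* byte 2 */
	unsigned char chip_version;	/* byte 3 */
	unsigned char turbo_mode;	/* byte 4, was turboMode */
	unsigned char low_power_mode;	/* byte 5, was lowPowerMode */
	unsigned char vcs_type;		/* byte 6, was vcsType */
	unsigned char vcs_mode;		/* byte 7, was vcsMode */
};

/* If a later rename reorders or resizes a field, these trip at build time. */
static_assert(offsetof(struct fw_priv_sketch, vcs_type) == 6,
	      "firmware ABI: vcs_type moved");
static_assert(sizeof(struct fw_priv_sketch) == 8,
	      "firmware ABI: blob size changed");

Inside the kernel tree the equivalent guard would normally be written with BUILD_BUG_ON() in a C file rather than C11 static_assert.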
- find and remove any code for other chips that is left over - convert any remaining unusual variable types diff --git a/drivers/staging/rtl8723bs/core/rtw_ap.c b/drivers/staging/rtl8723bs/core/rtw_ap.c index 2691241bfd84..cbbfef389874 100644 --- a/drivers/staging/rtl8723bs/core/rtw_ap.c +++ b/drivers/staging/rtl8723bs/core/rtw_ap.c @@ -21,7 +21,6 @@ void init_mlme_ap_info(struct adapter *padapter) struct sta_priv *pstapriv = &padapter->stapriv; struct wlan_acl_pool *pacl_list = &pstapriv->acl_list; - spin_lock_init(&pmlmepriv->bcn_update_lock); /* for ACL */ @@ -69,7 +68,6 @@ static void update_BCNTIM(struct adapter *padapter) /* update TIM IE */ /* if (pstapriv->tim_bitmap) */ if (true) { - u8 *p, *dst_ie, *premainder_ie = NULL, *pbackup_remainder_ie = NULL; __le16 tim_bitmap_le; uint offset, tmp_len, tim_ielen, tim_ie_offset, remainder_ielen; @@ -83,7 +81,6 @@ static void update_BCNTIM(struct adapter *padapter) pnetwork_mlmeext->IELength - _FIXED_IE_LENGTH_ ); if (p != NULL && tim_ielen > 0) { - tim_ielen += 2; premainder_ie = p+tim_ielen; @@ -94,9 +91,7 @@ static void update_BCNTIM(struct adapter *padapter) /* append TIM IE from dst_ie offset */ dst_ie = p; - } else{ - - + } else { tim_ielen = 0; /* calucate head_len */ @@ -121,7 +116,6 @@ static void update_BCNTIM(struct adapter *padapter) if (p != NULL) offset += tmp_len+2; - /* DS Parameter Set IE, len =3 */ offset += 3; @@ -131,12 +125,9 @@ static void update_BCNTIM(struct adapter *padapter) /* append TIM IE from offset */ dst_ie = pie + offset; - } - if (remainder_ielen > 0) { - pbackup_remainder_ie = rtw_malloc(remainder_ielen); if (pbackup_remainder_ie && premainder_ie) memcpy(pbackup_remainder_ie, premainder_ie, remainder_ielen); @@ -160,7 +151,6 @@ static void update_BCNTIM(struct adapter *padapter) *dst_ie++ = 0; if (tim_ielen == 4) { - __le16 pvb; if (pstapriv->tim_bitmap&0xff00) @@ -171,15 +161,12 @@ static void update_BCNTIM(struct adapter *padapter) *dst_ie++ = le16_to_cpu(pvb); } else if (tim_ielen == 5) { - - memcpy(dst_ie, &tim_bitmap_le, 2); dst_ie += 2; } /* copy remainder IE */ if (pbackup_remainder_ie) { - memcpy(dst_ie, pbackup_remainder_ie, remainder_ielen); kfree(pbackup_remainder_ie); @@ -187,7 +174,6 @@ static void update_BCNTIM(struct adapter *padapter) offset = (uint)(dst_ie - pie); pnetwork_mlmeext->IELength = offset + remainder_ielen; - } } @@ -223,7 +209,6 @@ void expire_timeout_chk(struct adapter *padapter) char chk_alive_list[NUM_STA]; int i; - spin_lock_bh(&pstapriv->auth_list_lock); phead = &pstapriv->auth_list; @@ -237,16 +222,13 @@ void expire_timeout_chk(struct adapter *padapter) } #endif while (phead != plist) { - psta = LIST_CONTAINOR(plist, struct sta_info, auth_list); plist = get_next(plist); if (psta->expire_to > 0) { - psta->expire_to--; if (psta->expire_to == 0) { - list_del_init(&psta->auth_list); pstapriv->auth_list_cnt--; @@ -267,13 +249,11 @@ void expire_timeout_chk(struct adapter *padapter) spin_lock_bh(&pstapriv->auth_list_lock); } } - } spin_unlock_bh(&pstapriv->auth_list_lock); psta = NULL; - spin_lock_bh(&pstapriv->asoc_list_lock); phead = &pstapriv->asoc_list; @@ -287,7 +267,6 @@ void expire_timeout_chk(struct adapter *padapter) } #endif while (phead != plist) { - psta = LIST_CONTAINOR(plist, struct sta_info, asoc_list); plist = get_next(plist); #ifdef CONFIG_AUTO_AP_MODE @@ -304,11 +283,9 @@ void expire_timeout_chk(struct adapter *padapter) } if (psta->expire_to == 0) { - struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; if (padapter->registrypriv.wifi_spec == 1) { - 
psta->expire_to = pstapriv->expire_to; continue; } @@ -336,7 +313,6 @@ void expire_timeout_chk(struct adapter *padapter) if (stainfo_offset_valid(stainfo_offset)) chk_alive_list[chk_alive_num++] = stainfo_offset; - continue; } list_del_init(&psta->asoc_list); @@ -347,9 +323,7 @@ void expire_timeout_chk(struct adapter *padapter) psta->state ); updated = ap_free_sta(padapter, psta, false, WLAN_REASON_DEAUTH_LEAVING); - } else{ - - + } else { /* TODO: Aging mechanism to digest frames in sleep_q to avoid running out of xmitframe */ if (psta->sleepq_len > (NR_XMITFRAME/pstapriv->asoc_list_cnt) && padapter->xmitpriv.free_xmitframe_cnt < (( @@ -404,7 +378,6 @@ void expire_timeout_chk(struct adapter *padapter) psta->keep_alive_trycnt = 0; continue; } else if (psta->keep_alive_trycnt <= 3) { - DBG_871X( "ack check for asoc expire, keep_alive_trycnt =%d\n", psta->keep_alive_trycnt); @@ -453,7 +426,6 @@ void add_RATid(struct adapter *padapter, struct sta_info *psta, u8 rssi_level) shortGIrate = query_ra_short_GI(psta); if (pcur_network->Configuration.DSConfig > 14) { - if (tx_ra_bitmap & 0xffff000) sta_band |= WIRELESS_11_5N; @@ -474,7 +446,6 @@ void add_RATid(struct adapter *padapter, struct sta_info *psta, u8 rssi_level) psta->raid = rtw_hal_networktype_to_raid(padapter, psta); if (psta->aid < NUM_STA) { - u8 arg[4] = {0}; arg[0] = psta->mac_id; @@ -486,12 +457,9 @@ void add_RATid(struct adapter *padapter, struct sta_info *psta, u8 rssi_level) __func__, psta->mac_id, psta->raid, shortGIrate, tx_ra_bitmap); rtw_hal_add_ra_tid(padapter, tx_ra_bitmap, arg, rssi_level); - } else{ - - + } else { DBG_871X("station aid %d exceed the max number\n", psta->aid); } - } void update_bmc_sta(struct adapter *padapter) @@ -507,7 +475,6 @@ void update_bmc_sta(struct adapter *padapter) struct sta_info *psta = rtw_get_bcmc_stainfo(padapter); if (psta) { - psta->aid = 0;/* default set to 0 */ /* psta->mac_id = psta->aid+4; */ psta->mac_id = psta->aid + 1;/* mac_id = 1 for bc/mc stainfo */ @@ -571,12 +538,9 @@ void update_bmc_sta(struct adapter *padapter) psta->state = _FW_LINKED; spin_unlock_bh(&psta->lock); - } else{ - - + } else { DBG_871X("add_RATid_bmc_sta error!\n"); } - } /* notes: */ @@ -611,7 +575,6 @@ void update_sta_info_apmode(struct adapter *padapter, struct sta_info *psta) else psta->ieee8021x_blocked = false; - /* update sta's cap */ /* ERP */ @@ -619,7 +582,6 @@ void update_sta_info_apmode(struct adapter *padapter, struct sta_info *psta) /* HT related cap */ if (phtpriv_sta->ht_option) { - /* check if sta supports rx ampdu */ phtpriv_sta->ampdu_enable = phtpriv_ap->ampdu_enable; @@ -640,7 +602,6 @@ void update_sta_info_apmode(struct adapter *padapter, struct sta_info *psta) phtpriv_sta->ch_offset = pmlmeext->cur_ch_offset; - /* check if sta support s Short GI 20M */ if (( phtpriv_sta->ht_cap.cap_info & phtpriv_ap->ht_cap.cap_info @@ -651,7 +612,6 @@ void update_sta_info_apmode(struct adapter *padapter, struct sta_info *psta) if (( phtpriv_sta->ht_cap.cap_info & phtpriv_ap->ht_cap.cap_info ) & cpu_to_le16(IEEE80211_HT_CAP_SGI_40)) { - if (psta->bw_mode == CHANNEL_WIDTH_40) /* according to psta->bw_mode */ phtpriv_sta->sgi_40m = true; else @@ -663,7 +623,6 @@ void update_sta_info_apmode(struct adapter *padapter, struct sta_info *psta) /* B0 Config LDPC Coding Capability */ if (TEST_FLAG(phtpriv_ap->ldpc_cap, LDPC_HT_ENABLE_TX) && GET_HT_CAPABILITY_ELE_LDPC_CAP((u8 *)(&phtpriv_sta->ht_cap))) { - SET_FLAG(cur_ldpc_cap, (LDPC_HT_ENABLE_TX | LDPC_HT_CAP_TX)); DBG_871X("Enable HT Tx LDPC for STA(%d)\n", 
psta->aid); } @@ -671,13 +630,10 @@ void update_sta_info_apmode(struct adapter *padapter, struct sta_info *psta) /* B7 B8 B9 Config STBC setting */ if (TEST_FLAG(phtpriv_ap->stbc_cap, STBC_HT_ENABLE_TX) && GET_HT_CAPABILITY_ELE_RX_STBC((u8 *)(&phtpriv_sta->ht_cap))) { - SET_FLAG(cur_stbc_cap, (STBC_HT_ENABLE_TX | STBC_HT_CAP_TX)); DBG_871X("Enable HT Tx STBC for STA(%d)\n", psta->aid); } - } else{ - - + } else { phtpriv_sta->ampdu_enable = false; phtpriv_sta->sgi_20m = false; @@ -704,16 +660,12 @@ void update_sta_info_apmode(struct adapter *padapter, struct sta_info *psta) memset((void *)&psta->sta_stats, 0, sizeof(struct stainfo_stats)); - /* add ratid */ /* add_RATid(padapter, psta);//move to ap_sta_info_defer_update() */ - spin_lock_bh(&psta->lock); psta->state |= _FW_LINKED; spin_unlock_bh(&psta->lock); - - } static void update_ap_info(struct adapter *padapter, struct sta_info *psta) @@ -731,7 +683,6 @@ static void update_ap_info(struct adapter *padapter, struct sta_info *psta) /* HT related cap */ if (phtpriv_ap->ht_option) { - /* check if sta supports rx ampdu */ /* phtpriv_ap->ampdu_enable = phtpriv_ap->ampdu_enable; */ @@ -743,11 +694,8 @@ static void update_ap_info(struct adapter *padapter, struct sta_info *psta) if ((phtpriv_ap->ht_cap.cap_info) & cpu_to_le16(IEEE80211_HT_CAP_SGI_40)) phtpriv_ap->sgi_40m = true; - psta->qos_option = true; - } else{ - - + } else { phtpriv_ap->ampdu_enable = false; phtpriv_ap->sgi_20m = false; @@ -772,7 +720,6 @@ static void update_hw_ht_param(struct adapter *padapter) DBG_871X("%s\n", __func__); - /* handle A-MPDU parameter field */ /* AMPDU_para [1:0]:Max AMPDU Len => 0:8k , 1:16k, 2:32k, 3:64k @@ -805,7 +752,6 @@ static void update_hw_ht_param(struct adapter *padapter) /* Config current HT Protection mode. */ /* */ /* pmlmeinfo->HT_protection = pmlmeinfo->HT_info.infos[1] & 0x3; */ - } void start_bss_network(struct adapter *padapter, u8 *pbuf) @@ -833,7 +779,6 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) cur_bwmode = CHANNEL_WIDTH_20; cur_ch_offset = HAL_PRIME_CHNL_OFFSET_DONT_CARE; - /* check if there is wps ie, */ /* if there is wpsie in beacon, the hostapd will update beacon twice when stating hostapd, */ /* and at first time the security ie (RSN/WPA IE) will not include in beacon. 
*/ @@ -845,14 +790,12 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) )) pmlmeext->bstart_bss = true; - /* todo: update wmm, ht cap */ /* pmlmeinfo->WMM_enable; */ /* pmlmeinfo->HT_enable; */ if (pmlmepriv->qospriv.qos_option) pmlmeinfo->WMM_enable = true; if (pmlmepriv->htpriv.ht_option) { - pmlmeinfo->WMM_enable = true; pmlmeinfo->HT_enable = true; /* pmlmeinfo->HT_info_enable = true; */ @@ -900,12 +843,10 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) rtw_hal_set_hwreg(padapter, HW_VAR_DO_IQK, NULL); if (!pmlmepriv->cur_network.join_res) { /* setting only at first time */ - /* u32 initialgain; */ /* initialgain = 0x1e; */ - /* disable dynamic functions, such as high power, DIG */ /* Save_DM_Func_Flag(padapter); */ /* Switch_DM_Func(padapter, DYNAMIC_FUNC_DISABLE, false); */ @@ -914,7 +855,6 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) Switch_DM_Func(padapter, DYNAMIC_ALL_FUNC_ENABLE, true); /* rtw_hal_set_hwreg(padapter, HW_VAR_INITIAL_GAIN, (u8 *)(&initialgain)); */ - } /* set channel, bwmode */ @@ -925,7 +865,6 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) (pnetwork->IELength - sizeof(struct ndis_802_11_fix_ie)) ); if (p && ie_len) { - pht_info = (struct HT_info_element *)(p+2); if (cur_channel > 14) { @@ -937,12 +876,10 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) } if ((cbw40_enable) && (pht_info->infos[0] & BIT(2))) { - /* switch to the 40M Hz mode */ /* pmlmeext->cur_bwmode = CHANNEL_WIDTH_40; */ cur_bwmode = CHANNEL_WIDTH_40; switch (pht_info->infos[0] & 0x3) { - case 1: /* pmlmeext->cur_ch_offset = HAL_PRIME_CHNL_OFFSET_LOWER; */ cur_ch_offset = HAL_PRIME_CHNL_OFFSET_LOWER; @@ -958,9 +895,7 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) cur_ch_offset = HAL_PRIME_CHNL_OFFSET_DONT_CARE; break; } - } - } set_channel_bwmode(padapter, cur_channel, cur_ch_offset, cur_bwmode); @@ -991,9 +926,7 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) rtw_get_capability((struct wlan_bssid_ex *)pnetwork) ); - if (pmlmeext->bstart_bss) { - update_beacon(padapter, _TIM_IE_, NULL, true); #ifndef CONFIG_INTERRUPT_BASED_TXBCN /* other case will tx beacon when bcn interrupt coming in. 
*/ @@ -1002,15 +935,12 @@ void start_bss_network(struct adapter *padapter, u8 *pbuf) DBG_871X("issue_beacon, fail!\n"); #endif /* CONFIG_INTERRUPT_BASED_TXBCN */ - } - /* update bc/mc sta_info */ update_bmc_sta(padapter); /* pmlmeext->bstart_bss = true; */ - } int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) @@ -1050,7 +980,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) if (!check_fwstate(pmlmepriv, WIFI_AP_STATE)) return _FAIL; - if (len < 0 || len > MAX_IE_SZ) return _FAIL; @@ -1060,7 +989,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) memcpy(ie, pbuf, pbss_network->IELength); - if (pbss_network->InfrastructureMode != Ndis802_11APMode) return _FAIL; @@ -1086,7 +1014,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) (pbss_network->IELength - _BEACON_IE_OFFSET_) ); if (p && ie_len > 0) { - memset(&pbss_network->Ssid, 0, sizeof(struct ndis_802_11_ssid)); memcpy(pbss_network->Ssid.Ssid, (p + 2), ie_len); pbss_network->Ssid.SsidLength = ie_len; @@ -1105,7 +1032,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) pbss_network->Configuration.DSConfig = channel; - memset(supportRate, 0, NDIS_802_11_LENGTH_RATES_EX); /* get supported rates */ p = rtw_get_ie( @@ -1115,7 +1041,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) (pbss_network->IELength - _BEACON_IE_OFFSET_) ); if (p != NULL) { - memcpy(supportRate, p+2, ie_len); supportRateNum = ie_len; } @@ -1128,17 +1053,14 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) pbss_network->IELength - _BEACON_IE_OFFSET_ ); if (p != NULL) { - memcpy(supportRate+supportRateNum, p+2, ie_len); supportRateNum += ie_len; - } network_type = rtw_check_network_type(supportRate, supportRateNum, channel); rtw_set_supported_rate(pbss_network->SupportedRates, network_type); - /* parsing ERP_IE */ p = rtw_get_ie( ie + _BEACON_IE_OFFSET_, @@ -1168,7 +1090,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) (pbss_network->IELength - _BEACON_IE_OFFSET_) ); if (p && ie_len > 0) { - if (rtw_parse_wpa2_ie( p, ie_len+2, @@ -1176,7 +1097,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) &pairwise_cipher, NULL ) == _SUCCESS) { - psecuritypriv->dot11AuthAlgrthm = dot11AuthAlgrthm_8021X; psecuritypriv->dot8021xalg = 1;/* psk, todo:802.1x */ @@ -1185,7 +1105,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) psecuritypriv->wpa2_group_cipher = group_cipher; psecuritypriv->wpa2_pairwise_cipher = pairwise_cipher; } - } /* wpa */ @@ -1194,7 +1113,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) psecuritypriv->wpa_group_cipher = _NO_PRIVACY_; psecuritypriv->wpa_pairwise_cipher = _NO_PRIVACY_; for (p = ie + _BEACON_IE_OFFSET_; ; p += (ie_len + 2)) { - p = rtw_get_ie( p, _SSN_IE_1_, @@ -1202,7 +1120,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) (pbss_network->IELength - _BEACON_IE_OFFSET_ - (ie_len + 2)) ); if ((p) && (!memcmp(p+2, OUI1, 4))) { - if (rtw_parse_wpa_ie( p, ie_len+2, @@ -1210,7 +1127,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) &pairwise_cipher, NULL ) == _SUCCESS) { - psecuritypriv->dot11AuthAlgrthm = dot11AuthAlgrthm_8021X; psecuritypriv->dot8021xalg = 1;/* psk, todo:802.1x */ @@ -1222,22 +1138,17 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) } break; - } if ((p == NULL) || (ie_len == 0)) break; - - } /* 
wmm */ ie_len = 0; pmlmepriv->qospriv.qos_option = 0; if (pregistrypriv->wmm_enable) { - for (p = ie + _BEACON_IE_OFFSET_; ; p += (ie_len + 2)) { - p = rtw_get_ie( p, _VENDOR_SPECIFIC_IE_, @@ -1245,7 +1156,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) (pbss_network->IELength - _BEACON_IE_OFFSET_ - (ie_len + 2)) ); if ((p) && !memcmp(p+2, WMM_PARA_IE, 6)) { - pmlmepriv->qospriv.qos_option = 1; *(p+8) |= BIT(7);/* QoS Info, support U-APSD */ @@ -1261,7 +1171,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) if ((p == NULL) || (ie_len == 0)) break; - } } @@ -1273,7 +1182,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) (pbss_network->IELength - _BEACON_IE_OFFSET_) ); if (p && ie_len > 0) { - u8 rf_type = 0; u8 max_rx_ampdu_factor = 0; struct rtw_ieee80211_ht_cap *pht_cap = (struct rtw_ieee80211_ht_cap *)(p+2); @@ -1294,26 +1202,20 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) if (!TEST_FLAG(pmlmepriv->htpriv.ldpc_cap, LDPC_HT_ENABLE_RX)) pht_cap->cap_info &= cpu_to_le16(~(IEEE80211_HT_CAP_LDPC_CODING)); - if (!TEST_FLAG(pmlmepriv->htpriv.stbc_cap, STBC_HT_ENABLE_TX)) pht_cap->cap_info &= cpu_to_le16(~(IEEE80211_HT_CAP_TX_STBC)); - if (!TEST_FLAG(pmlmepriv->htpriv.stbc_cap, STBC_HT_ENABLE_RX)) pht_cap->cap_info &= cpu_to_le16(~(IEEE80211_HT_CAP_RX_STBC_3R)); - pht_cap->ampdu_params_info &= ~( IEEE80211_HT_CAP_AMPDU_FACTOR|IEEE80211_HT_CAP_AMPDU_DENSITY ); if ((psecuritypriv->wpa_pairwise_cipher & WPA_CIPHER_CCMP) || (psecuritypriv->wpa2_pairwise_cipher & WPA_CIPHER_CCMP)) { - pht_cap->ampdu_params_info |= (IEEE80211_HT_CAP_AMPDU_DENSITY&(0x07<<2)); - } else{ - - + } else { pht_cap->ampdu_params_info |= (IEEE80211_HT_CAP_AMPDU_DENSITY&0x00); } @@ -1328,13 +1230,11 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) rtw_hal_get_hwreg(padapter, HW_VAR_RF_TYPE, (u8 *)(&rf_type)); if (rf_type == RF_1T1R) { - pht_cap->supp_mcs_set[0] = 0xff; pht_cap->supp_mcs_set[1] = 0x0; } memcpy(&pmlmepriv->htpriv.ht_cap, p+2, ie_len); - } /* parsing HT_INFO_IE */ @@ -1347,9 +1247,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) if (p && ie_len > 0) pHT_info_ie = p; - switch (network_type) { - case WIRELESS_11B: pbss_network->NetworkTypeInUse = Ndis802_11DS; break; @@ -1373,21 +1271,18 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) if ((psecuritypriv->wpa2_pairwise_cipher&WPA_CIPHER_TKIP) || (psecuritypriv->wpa_pairwise_cipher&WPA_CIPHER_TKIP)) { - /* todo: */ /* ht_cap = false; */ } /* ht_cap */ if (pregistrypriv->ht_enable && ht_cap) { - pmlmepriv->htpriv.ht_option = true; pmlmepriv->qospriv.qos_option = 1; if (pregistrypriv->ampdu_enable == 1) pmlmepriv->htpriv.ampdu_enable = true; - HT_caps_handler(padapter, (struct ndis_80211_var_ie *)pHT_caps_ie); HT_info_handler(padapter, (struct ndis_80211_var_ie *)pHT_info_ie); @@ -1401,15 +1296,12 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) /* start_bss_network(padapter, (u8 *)pbss_network); */ rtw_startbss_cmd(padapter, RTW_CMDF_WAIT_ACK); - /* alloc sta_info for ap itself */ psta = rtw_get_stainfo(&padapter->stapriv, pbss_network->MacAddress); if (!psta) { - psta = rtw_alloc_stainfo(&padapter->stapriv, pbss_network->MacAddress); if (psta == NULL) return _FAIL; - } /* update AP's sta info */ @@ -1424,7 +1316,6 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len) /* update_bmc_sta(padapter); */ return ret; - } void 
rtw_set_macaddr_acl(struct adapter *padapter, int mode) @@ -1457,21 +1348,17 @@ int rtw_acl_add_sta(struct adapter *padapter, u8 *addr) if ((NUM_ACL-1) < pacl_list->num) return (-1); - spin_lock_bh(&(pacl_node_q->lock)); phead = get_list_head(pacl_node_q); plist = get_next(phead); while (phead != plist) { - paclnode = LIST_CONTAINOR(plist, struct rtw_wlan_acl_node, list); plist = get_next(plist); if (!memcmp(paclnode->addr, addr, ETH_ALEN)) { - if (paclnode->valid == true) { - added = true; DBG_871X("%s, sta has been added\n", __func__); break; @@ -1481,19 +1368,15 @@ int rtw_acl_add_sta(struct adapter *padapter, u8 *addr) spin_unlock_bh(&(pacl_node_q->lock)); - if (added) return ret; - spin_lock_bh(&(pacl_node_q->lock)); for (i = 0; i < NUM_ACL; i++) { - paclnode = &pacl_list->aclnode[i]; if (!paclnode->valid) { - INIT_LIST_HEAD(&paclnode->list); memcpy(paclnode->addr, addr, ETH_ALEN); @@ -1538,7 +1421,6 @@ int rtw_acl_remove_sta(struct adapter *padapter, u8 *addr) plist = get_next(phead); while (phead != plist) { - paclnode = LIST_CONTAINOR(plist, struct rtw_wlan_acl_node, list); plist = get_next(plist); @@ -1546,9 +1428,7 @@ int rtw_acl_remove_sta(struct adapter *padapter, u8 *addr) !memcmp(paclnode->addr, addr, ETH_ALEN) || !memcmp(baddr, addr, ETH_ALEN) ) { - if (paclnode->valid) { - paclnode->valid = false; list_del_init(&paclnode->list); @@ -1563,7 +1443,6 @@ int rtw_acl_remove_sta(struct adapter *padapter, u8 *addr) DBG_871X("%s, acl_num =%d\n", __func__, pacl_list->num); return ret; - } u8 rtw_ap_set_pairwise_key(struct adapter *padapter, struct sta_info *psta) @@ -1588,20 +1467,17 @@ u8 rtw_ap_set_pairwise_key(struct adapter *padapter, struct sta_info *psta) init_h2fwcmd_w_parm_no_rsp(ph2c, psetstakey_para, _SetStaKey_CMD_); - psetstakey_para->algorithm = (u8)psta->dot118021XPrivacy; memcpy(psetstakey_para->addr, psta->hwaddr, ETH_ALEN); memcpy(psetstakey_para->key, &psta->dot118021x_UncstKey, 16); - res = rtw_enqueue_cmd(pcmdpriv, ph2c); exit: return res; - } static int rtw_ap_set_key( @@ -1643,7 +1519,6 @@ static int rtw_ap_set_key( psetkeyparm->set_tx = set_tx; switch (alg) { - case _WEP40_: keylen = 5; break; @@ -1665,7 +1540,6 @@ static int rtw_ap_set_key( pcmd->rsp = NULL; pcmd->rspsz = 0; - INIT_LIST_HEAD(&pcmd->list); res = rtw_enqueue_cmd(pcmdpriv, pcmd); @@ -1693,7 +1567,6 @@ int rtw_ap_set_wep_key( u8 alg; switch (keylen) { - case 5: alg = _WEP40_; break; @@ -1712,7 +1585,6 @@ int rtw_ap_set_wep_key( static void update_bcn_fixed_ie(struct adapter *padapter) { DBG_871X("%s\n", __func__); - } static void update_bcn_erpinfo_ie(struct adapter *padapter) @@ -1737,7 +1609,6 @@ static void update_bcn_erpinfo_ie(struct adapter *padapter) (pnetwork->IELength - _BEACON_IE_OFFSET_) ); if (p && len > 0) { - struct ndis_80211_var_ie *pIE = (struct ndis_80211_var_ie *)p; if (pmlmepriv->num_sta_non_erp == 1) @@ -1754,37 +1625,31 @@ static void update_bcn_erpinfo_ie(struct adapter *padapter) ERP_IE_handler(padapter, pIE); } - } static void update_bcn_htcap_ie(struct adapter *padapter) { DBG_871X("%s\n", __func__); - } static void update_bcn_htinfo_ie(struct adapter *padapter) { DBG_871X("%s\n", __func__); - } static void update_bcn_rsn_ie(struct adapter *padapter) { DBG_871X("%s\n", __func__); - } static void update_bcn_wpa_ie(struct adapter *padapter) { DBG_871X("%s\n", __func__); - } static void update_bcn_wmm_ie(struct adapter *padapter) { DBG_871X("%s\n", __func__); - } static void update_bcn_wps_ie(struct adapter *padapter) @@ -1802,7 +1667,6 @@ static void 
update_bcn_wps_ie(struct adapter *padapter) unsigned char *ie = pnetwork->IEs; u32 ielen = pnetwork->IELength; - DBG_871X("%s\n", __func__); pwps_ie = rtw_get_wps_ie( @@ -1826,7 +1690,6 @@ static void update_bcn_wps_ie(struct adapter *padapter) remainder_ielen = ielen - wps_offset - wps_ielen; if (remainder_ielen > 0) { - pbackup_remainder_ie = rtw_malloc(remainder_ielen); if (pbackup_remainder_ie) memcpy(pbackup_remainder_ie, premainder_ie, remainder_ielen); @@ -1834,7 +1697,6 @@ static void update_bcn_wps_ie(struct adapter *padapter) wps_ielen = (uint)pwps_ie_src[1];/* to get ie data len */ if ((wps_offset+wps_ielen+2+remainder_ielen) <= MAX_IE_SZ) { - memcpy(pwps_ie, pwps_ie_src, wps_ielen+2); pwps_ie += (wps_ielen+2); @@ -1850,7 +1712,6 @@ static void update_bcn_wps_ie(struct adapter *padapter) /* deal with the case without set_tx_beacon_cmd() in update_beacon() */ #if defined(CONFIG_INTERRUPT_BASED_TXBCN) if ((pmlmeinfo->state&0x03) == WIFI_FW_AP_STATE) { - u8 sr = 0; rtw_get_wps_attr_content( @@ -1871,7 +1732,6 @@ static void update_bcn_wps_ie(struct adapter *padapter) static void update_bcn_p2p_ie(struct adapter *padapter) { - } static void update_bcn_vendor_spec_ie(struct adapter *padapter, u8 *oui) @@ -1892,9 +1752,6 @@ static void update_bcn_vendor_spec_ie(struct adapter *padapter, u8 *oui) else DBG_871X("unknown OUI type!\n"); - - - } void update_beacon(struct adapter *padapter, u8 ie_id, u8 *oui, u8 tx) @@ -1918,7 +1775,6 @@ void update_beacon(struct adapter *padapter, u8 ie_id, u8 *oui, u8 tx) spin_lock_bh(&pmlmepriv->bcn_update_lock); switch (ie_id) { - case 0xFF: update_bcn_fixed_ie(padapter);/* 8: TimeStamp, 2: Beacon Interval 2:Capability */ @@ -1971,12 +1827,10 @@ void update_beacon(struct adapter *padapter, u8 ie_id, u8 *oui, u8 tx) #ifndef CONFIG_INTERRUPT_BASED_TXBCN if (tx) { - /* send_beacon(padapter);//send_beacon must execute on TSR level */ set_tx_beacon_cmd(padapter); } #endif /* CONFIG_INTERRUPT_BASED_TXBCN */ - } /* @@ -2060,14 +1914,12 @@ static int rtw_ht_operation_update(struct adapter *padapter) __func__, pmlmepriv->ht_op_mode, op_mode_changes); return op_mode_changes; - } void associated_clients_update(struct adapter *padapter, u8 updated) { /* update associcated stations cap. 
*/ if (updated) { - struct list_head *phead, *plist; struct sta_info *psta = NULL; struct sta_priv *pstapriv = &padapter->stapriv; @@ -2079,7 +1931,6 @@ void associated_clients_update(struct adapter *padapter, u8 updated) /* check asoc_queue */ while (phead != plist) { - psta = LIST_CONTAINOR(plist, struct sta_info, asoc_list); plist = get_next(plist); @@ -2088,9 +1939,7 @@ void associated_clients_update(struct adapter *padapter, u8 updated) } spin_unlock_bh(&pstapriv->asoc_list_lock); - } - } /* called > TSR LEVEL for USB or SDIO Interface*/ @@ -2101,102 +1950,75 @@ void bss_cap_update_on_sta_join(struct adapter *padapter, struct sta_info *psta) struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv); if (!(psta->flags & WLAN_STA_SHORT_PREAMBLE)) { - if (!psta->no_short_preamble_set) { - psta->no_short_preamble_set = 1; pmlmepriv->num_sta_no_short_preamble++; if ((pmlmeext->cur_wireless_mode > WIRELESS_11B) && (pmlmepriv->num_sta_no_short_preamble == 1)) { - beacon_updated = true; update_beacon(padapter, 0xFF, NULL, true); } - } - } else{ - - + } else { if (psta->no_short_preamble_set) { - psta->no_short_preamble_set = 0; pmlmepriv->num_sta_no_short_preamble--; if ((pmlmeext->cur_wireless_mode > WIRELESS_11B) && (pmlmepriv->num_sta_no_short_preamble == 0)) { - beacon_updated = true; update_beacon(padapter, 0xFF, NULL, true); } - } } if (psta->flags & WLAN_STA_NONERP) { - if (!psta->nonerp_set) { - psta->nonerp_set = 1; pmlmepriv->num_sta_non_erp++; if (pmlmepriv->num_sta_non_erp == 1) { - beacon_updated = true; update_beacon(padapter, _ERPINFO_IE_, NULL, true); } } - - } else{ - - + } else { if (psta->nonerp_set) { - psta->nonerp_set = 0; pmlmepriv->num_sta_non_erp--; if (pmlmepriv->num_sta_non_erp == 0) { - beacon_updated = true; update_beacon(padapter, _ERPINFO_IE_, NULL, true); } } - } - if (!(psta->capability & WLAN_CAPABILITY_SHORT_SLOT)) { - if (!psta->no_short_slot_time_set) { - psta->no_short_slot_time_set = 1; pmlmepriv->num_sta_no_short_slot_time++; if ((pmlmeext->cur_wireless_mode > WIRELESS_11B) && (pmlmepriv->num_sta_no_short_slot_time == 1)) { - beacon_updated = true; update_beacon(padapter, 0xFF, NULL, true); } - } - } else{ - - + } else { if (psta->no_short_slot_time_set) { - psta->no_short_slot_time_set = 0; pmlmepriv->num_sta_no_short_slot_time--; if ((pmlmeext->cur_wireless_mode > WIRELESS_11B) && (pmlmepriv->num_sta_no_short_slot_time == 0)) { - beacon_updated = true; update_beacon(padapter, 0xFF, NULL, true); } @@ -2204,7 +2026,6 @@ void bss_cap_update_on_sta_join(struct adapter *padapter, struct sta_info *psta) } if (psta->flags & WLAN_STA_HT) { - u16 ht_capab = le16_to_cpu(psta->htpriv.ht_cap.cap_info); DBG_871X("HT: STA " MAC_FMT " HT Capabilities " @@ -2237,9 +2058,7 @@ void bss_cap_update_on_sta_join(struct adapter *padapter, struct sta_info *psta) pmlmepriv->num_sta_ht_20mhz); } - } else{ - - + } else { if (!psta->no_ht_set) { psta->no_ht_set = 1; pmlmepriv->num_sta_no_ht++; @@ -2253,7 +2072,6 @@ void bss_cap_update_on_sta_join(struct adapter *padapter, struct sta_info *psta) } if (rtw_ht_operation_update(padapter) > 0) { - update_beacon(padapter, _HT_CAPABILITY_IE_, NULL, false); update_beacon(padapter, _HT_ADD_INFO_IE_, NULL, true); } @@ -2262,7 +2080,6 @@ void bss_cap_update_on_sta_join(struct adapter *padapter, struct sta_info *psta) associated_clients_update(padapter, beacon_updated); DBG_871X("%s, updated =%d\n", __func__, beacon_updated); - } u8 bss_cap_update_on_sta_leave(struct adapter *padapter, struct sta_info *psta) @@ -2279,7 +2096,6 @@ u8 
bss_cap_update_on_sta_leave(struct adapter *padapter, struct sta_info *psta) pmlmepriv->num_sta_no_short_preamble--; if (pmlmeext->cur_wireless_mode > WIRELESS_11B && pmlmepriv->num_sta_no_short_preamble == 0){ - beacon_updated = true; update_beacon(padapter, 0xFF, NULL, true); } @@ -2289,7 +2105,6 @@ u8 bss_cap_update_on_sta_leave(struct adapter *padapter, struct sta_info *psta) psta->nonerp_set = 0; pmlmepriv->num_sta_non_erp--; if (pmlmepriv->num_sta_non_erp == 0) { - beacon_updated = true; update_beacon(padapter, _ERPINFO_IE_, NULL, true); } @@ -2300,7 +2115,6 @@ u8 bss_cap_update_on_sta_leave(struct adapter *padapter, struct sta_info *psta) pmlmepriv->num_sta_no_short_slot_time--; if (pmlmeext->cur_wireless_mode > WIRELESS_11B && pmlmepriv->num_sta_no_short_slot_time == 0){ - beacon_updated = true; update_beacon(padapter, 0xFF, NULL, true); } @@ -2322,7 +2136,6 @@ u8 bss_cap_update_on_sta_leave(struct adapter *padapter, struct sta_info *psta) } if (rtw_ht_operation_update(padapter) > 0) { - update_beacon(padapter, _HT_CAPABILITY_IE_, NULL, false); update_beacon(padapter, _HT_ADD_INFO_IE_, NULL, true); } @@ -2333,7 +2146,6 @@ u8 bss_cap_update_on_sta_leave(struct adapter *padapter, struct sta_info *psta) DBG_871X("%s, updated =%d\n", __func__, beacon_updated); return beacon_updated; - } u8 ap_free_sta( @@ -2349,7 +2161,6 @@ u8 ap_free_sta( return beacon_updated; if (active == true) { - /* tear down Rx AMPDU */ send_delba(padapter, 0, psta->hwaddr);/* recipient */ @@ -2362,13 +2173,11 @@ u8 ap_free_sta( psta->htpriv.agg_enable_bitmap = 0x0;/* reset */ psta->htpriv.candidate_tid_bitmap = 0x0;/* reset */ - /* report_del_sta_event(padapter, psta->hwaddr, reason); */ /* clear cam entry / key */ rtw_clearstakey_cmd(padapter, psta, true); - spin_lock_bh(&psta->lock); psta->state &= ~_FW_LINKED; spin_unlock_bh(&psta->lock); @@ -2381,9 +2190,7 @@ u8 ap_free_sta( rtw_free_stainfo(padapter, psta); - return beacon_updated; - } int rtw_sta_flush(struct adapter *padapter) @@ -2401,14 +2208,12 @@ int rtw_sta_flush(struct adapter *padapter) if ((pmlmeinfo->state&0x03) != WIFI_FW_AP_STATE) return ret; - spin_lock_bh(&pstapriv->asoc_list_lock); phead = &pstapriv->asoc_list; plist = get_next(phead); /* free sta asoc_queue */ while (phead != plist) { - psta = LIST_CONTAINOR(plist, struct sta_info, asoc_list); plist = get_next(plist); @@ -2422,13 +2227,11 @@ int rtw_sta_flush(struct adapter *padapter) } spin_unlock_bh(&pstapriv->asoc_list_lock); - issue_deauth(padapter, bc_addr, WLAN_REASON_DEAUTH_LEAVING); associated_clients_update(padapter, true); return ret; - } /* called > TSR LEVEL for USB or SDIO Interface*/ @@ -2437,7 +2240,6 @@ void sta_info_update(struct adapter *padapter, struct sta_info *psta) int flags = psta->flags; struct mlme_priv *pmlmepriv = &(padapter->mlmepriv); - /* update wmm cap. */ if (WLAN_STA_WME&flags) psta->qos_option = 1; @@ -2449,12 +2251,9 @@ void sta_info_update(struct adapter *padapter, struct sta_info *psta) /* update 802.11n ht cap. 
*/ if (WLAN_STA_HT&flags) { - psta->htpriv.ht_option = true; psta->qos_option = 1; - } else{ - - + } else { psta->htpriv.ht_option = false; } @@ -2462,8 +2261,6 @@ void sta_info_update(struct adapter *padapter, struct sta_info *psta) psta->htpriv.ht_option = false; update_sta_info_apmode(padapter, psta); - - } /* called >= TSR LEVEL for USB or SDIO Interface*/ @@ -2473,7 +2270,6 @@ void ap_sta_info_defer_update(struct adapter *padapter, struct sta_info *psta) struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); if (psta->state & _FW_LINKED) { - pmlmeinfo->FW_sta_info[psta->mac_id].psta = psta; /* add ratid */ @@ -2506,7 +2302,6 @@ void rtw_ap_restore_network(struct adapter *padapter) if ((padapter->securitypriv.dot11PrivacyAlgrthm == _TKIP_) || (padapter->securitypriv.dot11PrivacyAlgrthm == _AES_)) { - /* restore group key, WEP keys is restored in ips_leave() */ rtw_set_key( padapter, @@ -2531,7 +2326,6 @@ void rtw_ap_restore_network(struct adapter *padapter) stainfo_offset = rtw_stainfo_offset(pstapriv, psta); if (stainfo_offset_valid(stainfo_offset)) chk_alive_list[chk_alive_num++] = stainfo_offset; - } spin_unlock_bh(&pstapriv->asoc_list_lock); @@ -2548,12 +2342,10 @@ void rtw_ap_restore_network(struct adapter *padapter) /* per sta pairwise key and settings */ if ((padapter->securitypriv.dot11PrivacyAlgrthm == _TKIP_) || (padapter->securitypriv.dot11PrivacyAlgrthm == _AES_)) { - rtw_setstakey_cmd(padapter, psta, true, false); } } } - } void start_ap_mode(struct adapter *padapter) @@ -2595,7 +2387,6 @@ void start_ap_mode(struct adapter *padapter) pmlmepriv->p2p_beacon_ie = NULL; pmlmepriv->p2p_probe_resp_ie = NULL; - /* for ACL */ INIT_LIST_HEAD(&(pacl_list->acl_node_q.queue)); pacl_list->num = 0; @@ -2604,7 +2395,6 @@ void start_ap_mode(struct adapter *padapter) INIT_LIST_HEAD(&pacl_list->aclnode[i].list); pacl_list->aclnode[i].valid = false; } - } void stop_ap_mode(struct adapter *padapter) @@ -2635,12 +2425,10 @@ void stop_ap_mode(struct adapter *padapter) phead = get_list_head(pacl_node_q); plist = get_next(phead); while (phead != plist) { - paclnode = LIST_CONTAINOR(plist, struct rtw_wlan_acl_node, list); plist = get_next(plist); if (paclnode->valid == true) { - paclnode->valid = false; list_del_init(&paclnode->list); diff --git a/drivers/staging/rtl8723bs/core/rtw_cmd.c b/drivers/staging/rtl8723bs/core/rtw_cmd.c index 830be63391b7..ea2c187e56bd 100644 --- a/drivers/staging/rtl8723bs/core/rtw_cmd.c +++ b/drivers/staging/rtl8723bs/core/rtw_cmd.c @@ -166,10 +166,8 @@ sint _rtw_init_cmd_priv(struct cmd_priv *pcmdpriv) { sint res = _SUCCESS; - sema_init(&(pcmdpriv->cmd_queue_sema), 0); - /* sema_init(&(pcmdpriv->cmd_done_sema), 0); */ - sema_init(&(pcmdpriv->terminate_cmdthread_sema), 0); - + init_completion(&pcmdpriv->cmd_queue_comp); + init_completion(&pcmdpriv->terminate_cmdthread_comp); _rtw_init_queue(&(pcmdpriv->cmd_queue)); @@ -286,7 +284,7 @@ struct cmd_obj *_rtw_dequeue_cmd(struct __queue *queue) spin_lock_irqsave(&queue->lock, irqL); if (list_empty(&(queue->queue))) obj = NULL; - else{ + else { obj = LIST_CONTAINOR(get_next(&(queue->queue)), struct cmd_obj, list); list_del_init(&obj->list); } @@ -369,7 +367,7 @@ u32 rtw_enqueue_cmd(struct cmd_priv *pcmdpriv, struct cmd_obj *cmd_obj) res = _rtw_enqueue_cmd(&pcmdpriv->cmd_queue, cmd_obj); if (res == _SUCCESS) - up(&pcmdpriv->cmd_queue_sema); + complete(&pcmdpriv->cmd_queue_comp); exit: return res; @@ -410,8 +408,8 @@ void rtw_stop_cmd_thread(struct adapter *adapter) atomic_read(&(adapter->cmdpriv.cmdthd_running)) == true 
&& adapter->cmdpriv.stop_req == 0) { adapter->cmdpriv.stop_req = 1; - up(&adapter->cmdpriv.cmd_queue_sema); - down(&adapter->cmdpriv.terminate_cmdthread_sema); + complete(&adapter->cmdpriv.cmd_queue_comp); + wait_for_completion(&adapter->cmdpriv.terminate_cmdthread_comp); } } @@ -435,13 +433,13 @@ int rtw_cmd_thread(void *context) pcmdpriv->stop_req = 0; atomic_set(&(pcmdpriv->cmdthd_running), true); - up(&pcmdpriv->terminate_cmdthread_sema); + complete(&pcmdpriv->terminate_cmdthread_comp); RT_TRACE(_module_rtl871x_cmd_c_, _drv_info_, ("start r871x rtw_cmd_thread !!!!\n")); while (1) { - if (down_interruptible(&pcmdpriv->cmd_queue_sema)) { - DBG_871X_LEVEL(_drv_always_, FUNC_ADPT_FMT" down_interruptible(&pcmdpriv->cmd_queue_sema) return != 0, break\n", FUNC_ADPT_ARG(padapter)); + if (wait_for_completion_interruptible(&pcmdpriv->cmd_queue_comp)) { + DBG_871X_LEVEL(_drv_always_, FUNC_ADPT_FMT" wait_for_completion_interruptible(&pcmdpriv->cmd_queue_comp) return != 0, break\n", FUNC_ADPT_ARG(padapter)); break; } @@ -502,7 +500,7 @@ _next: } pcmdpriv->cmd_seq++; - } else{ + } else { pcmd->res = H2C_PARAMETERS_ERROR; } @@ -546,11 +544,11 @@ post_process: if (pcmd_callback == NULL) { RT_TRACE(_module_rtl871x_cmd_c_, _drv_info_, ("mlme_cmd_hdl(): pcmd_callback = 0x%p, cmdcode = 0x%x\n", pcmd_callback, pcmd->cmdcode)); rtw_free_cmd_obj(pcmd); - } else{ + } else { /* todo: !!! fill rsp_buf to pcmd->rsp if (pcmd->rsp!= NULL) */ pcmd_callback(pcmd->padapter, pcmd);/* need conider that free cmd_obj in rtw_cmd_callback */ } - } else{ + } else { RT_TRACE(_module_rtl871x_cmd_c_, _drv_err_, ("%s: cmdcode = 0x%x callback not defined!\n", __func__, pcmd->cmdcode)); rtw_free_cmd_obj(pcmd); } @@ -581,7 +579,7 @@ post_process: rtw_free_cmd_obj(pcmd); } while (1); - up(&pcmdpriv->terminate_cmdthread_sema); + complete(&pcmdpriv->terminate_cmdthread_comp); atomic_set(&(pcmdpriv->cmdthd_running), false); thread_exit(); @@ -880,7 +878,7 @@ u8 rtw_joinbss_cmd(struct adapter *padapter, struct wlan_network *pnetwork) if (psecnetwork->IELength != tmp_len) { psecnetwork->IELength = tmp_len; pqospriv->qos_option = 1; /* There is WMM IE in this corresp. beacon */ - } else{ + } else { pqospriv->qos_option = 0;/* There is no WMM IE in this corresp. 
beacon */ } } @@ -987,7 +985,7 @@ u8 rtw_setopmode_cmd(struct adapter *padapter, enum NDIS_802_11_NETWORK_INFRAST init_h2fwcmd_w_parm_no_rsp(ph2c, psetop, _SetOpMode_CMD_); res = rtw_enqueue_cmd(pcmdpriv, ph2c); - } else{ + } else { setopmode_hdl(padapter, (u8 *)psetop); kfree(psetop); } @@ -1022,7 +1020,7 @@ u8 rtw_setstakey_cmd(struct adapter *padapter, struct sta_info *sta, u8 unicast_ if (unicast_key == true) { memcpy(&psetstakey_para->key, &sta->dot118021x_UncstKey, 16); - } else{ + } else { memcpy(&psetstakey_para->key, &psecuritypriv->dot118021XGrpKey[psecuritypriv->dot118021XGrpKeyid].skey, 16); } @@ -1049,7 +1047,7 @@ u8 rtw_setstakey_cmd(struct adapter *padapter, struct sta_info *sta, u8 unicast_ ph2c->rsp = (u8 *) psetstakey_rsp; ph2c->rspsz = sizeof(struct set_stakey_rsp); res = rtw_enqueue_cmd(pcmdpriv, ph2c); - } else{ + } else { set_stakey_hdl(padapter, (u8 *)psetstakey_para); kfree(psetstakey_para); } @@ -1072,7 +1070,7 @@ u8 rtw_clearstakey_cmd(struct adapter *padapter, struct sta_info *sta, u8 enqueu clear_cam_entry(padapter, cam_id); rtw_camid_free(padapter, cam_id); } - } else{ + } else { ph2c = rtw_zmalloc(sizeof(struct cmd_obj)); if (ph2c == NULL) { res = _FAIL; @@ -1291,7 +1289,7 @@ u8 rtw_set_chplan_cmd(struct adapter *padapter, u8 chplan, u8 enqueue, u8 swconf init_h2fwcmd_w_parm_no_rsp(pcmdobj, setChannelPlan_param, GEN_CMD_CODE(_SetChannelPlan)); res = rtw_enqueue_cmd(pcmdpriv, pcmdobj); - } else{ + } else { /* no need to enqueue, do the cmd hdl directly and free cmd parameter */ if (H2C_SUCCESS != set_chplan_hdl(padapter, (unsigned char *)setChannelPlan_param)) res = _FAIL; @@ -1392,7 +1390,7 @@ u8 traffic_status_watchdog(struct adapter *padapter, u8 from_timer) pmlmepriv->LinkDetectInfo.TrafficTransitionCount = 30; } } - } else{ + } else { /* DBG_871X("(+)Tx = %d, Rx = %d\n", pmlmepriv->LinkDetectInfo.NumTxOkInPeriod, pmlmepriv->LinkDetectInfo.NumRxUnicastOkInPeriod); */ if (pmlmepriv->LinkDetectInfo.TrafficTransitionCount >= 2) @@ -1414,7 +1412,7 @@ u8 traffic_status_watchdog(struct adapter *padapter, u8 from_timer) else rtw_lps_ctrl_wk_cmd(padapter, LPS_CTRL_TRAFFIC_BUSY, 1); } - } else{ + } else { struct dvobj_priv *dvobj = adapter_to_dvobj(padapter); int n_assoc_iface = 0; @@ -1564,7 +1562,7 @@ u8 rtw_lps_ctrl_wk_cmd(struct adapter *padapter, u8 lps_ctrl_type, u8 enqueue) init_h2fwcmd_w_parm_no_rsp(ph2c, pdrvextra_cmd_parm, GEN_CMD_CODE(_Set_Drv_Extra)); res = rtw_enqueue_cmd(pcmdpriv, ph2c); - } else{ + } else { lps_ctrl_wk_hdl(padapter, lps_ctrl_type); } @@ -1623,7 +1621,7 @@ static void rtw_lps_change_dtim_hdl(struct adapter *padapter, u8 dtim) if (rtw_btcoex_IsBtControlLps(padapter) == true) return; - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); if (pwrpriv->dtim != dtim) { DBG_871X("change DTIM from %d to %d, bFwCurrentInPSMode =%d, ps_mode =%d\n", pwrpriv->dtim, dtim, @@ -1640,7 +1638,7 @@ static void rtw_lps_change_dtim_hdl(struct adapter *padapter, u8 dtim) rtw_hal_set_hwreg(padapter, HW_VAR_H2C_FW_PWRMODE, (u8 *)(&ps_mode)); } - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); } static void rtw_dm_ra_mask_hdl(struct adapter *padapter, struct sta_info *psta) @@ -1766,7 +1764,7 @@ static void rtw_chk_hi_queue_hdl(struct adapter *padapter) if (update_tim == true) update_beacon(padapter, _TIM_IE_, NULL, true); - } else{/* re check again */ + } else {/* re check again */ rtw_chk_hi_queue_cmd(padapter); } @@ -1956,7 +1954,7 @@ static void c2h_wk_callback(_workitem *work) if (c2h_evt != NULL) { /* This C2H event is read, clear it */ 
c2h_evt_clear(adapter); - } else{ + } else { c2h_evt = rtw_malloc(16); if (c2h_evt != NULL) { /* This C2H event is not read, read & clear now */ @@ -1980,7 +1978,7 @@ static void c2h_wk_callback(_workitem *work) /* Handle CCX report here */ rtw_hal_c2h_handler(adapter, c2h_evt); kfree(c2h_evt); - } else{ + } else { /* Enqueue into cmd_thread for others */ rtw_c2h_wk_cmd(adapter, c2h_evt); } @@ -2130,7 +2128,7 @@ void rtw_createbss_cmd_callback(struct adapter *padapter, struct cmd_obj *pcmd) } rtw_indicate_connect(padapter); - } else{ + } else { pwlan = _rtw_alloc_network(pmlmepriv); spin_lock_bh(&(pmlmepriv->scanned_queue.lock)); if (pwlan == NULL) { @@ -2141,7 +2139,7 @@ void rtw_createbss_cmd_callback(struct adapter *padapter, struct cmd_obj *pcmd) goto createbss_cmd_fail; } pwlan->last_scanned = jiffies; - } else{ + } else { list_add_tail(&(pwlan->list), &pmlmepriv->scanned_queue.queue); } diff --git a/drivers/staging/rtl8723bs/core/rtw_debug.c b/drivers/staging/rtl8723bs/core/rtw_debug.c index a2a2cefd1786..a3c8a712bdc2 100644 --- a/drivers/staging/rtl8723bs/core/rtw_debug.c +++ b/drivers/staging/rtl8723bs/core/rtw_debug.c @@ -519,7 +519,7 @@ int proc_get_ap_info(struct seq_file *m, void *v) } } - } else{ + } else { DBG_871X_SEL_NL(m, "can't get sta's macaddr, cur_network's macaddr:" MAC_FMT "\n", MAC_ARG(cur_network->network.MacAddress)); } diff --git a/drivers/staging/rtl8723bs/core/rtw_efuse.c b/drivers/staging/rtl8723bs/core/rtw_efuse.c index b3247c9642ea..eea3bece59a1 100644 --- a/drivers/staging/rtl8723bs/core/rtw_efuse.c +++ b/drivers/staging/rtl8723bs/core/rtw_efuse.c @@ -70,7 +70,7 @@ Efuse_Write1ByteToFakeContent( } if (fakeEfuseBank == 0) fakeEfuseContent[Offset] = Value; - else{ + else { fakeBTEfuseContent[fakeEfuseBank-1][Offset] = Value; } return true; @@ -303,7 +303,7 @@ bool bPseudoTest) if (tmpidx < 100) { *data = rtw_read8(padapter, EFUSE_CTRL); bResult = true; - } else{ + } else { *data = 0xff; bResult = false; DBG_871X("%s: [ERROR] addr = 0x%x bResult =%d time out 1s !!!\n", __func__, addr, bResult); @@ -359,7 +359,7 @@ bool bPseudoTest) if (tmpidx < 100) { bResult = true; - } else{ + } else { bResult = false; DBG_871X("%s: [ERROR] addr = 0x%x , efuseValue = 0x%x , bResult =%d time out 1s !!!\n", __func__, addr, efuseValue, bResult); diff --git a/drivers/staging/rtl8723bs/core/rtw_ieee80211.c b/drivers/staging/rtl8723bs/core/rtw_ieee80211.c index 33f2649ba2ec..ac203c01c098 100644 --- a/drivers/staging/rtl8723bs/core/rtw_ieee80211.c +++ b/drivers/staging/rtl8723bs/core/rtw_ieee80211.c @@ -99,7 +99,7 @@ int rtw_check_network_type(unsigned char *rate, int ratelen, int channel) return WIRELESS_INVALID; else return WIRELESS_11A; - } else{ /* could be pure B, pure G, or B/G */ + } else { /* could be pure B, pure G, or B/G */ if (rtw_is_cckratesonly_included(rate)) return WIRELESS_11B; else if (rtw_is_cckrates_included(rate)) @@ -157,7 +157,7 @@ u8 *rtw_get_ie(u8 *pbuf, sint index, sint *len, sint limit) if (*p == index) { *len = *(p + 1); return p; - } else{ + } else { tmp = *(p + 1); p += (tmp + 2); i += (tmp + 2); @@ -205,7 +205,7 @@ u8 *rtw_get_ie_ex(u8 *in_ie, uint in_len, u8 eid, u8 *oui, u8 oui_len, u8 *ie, u *ielen = in_ie[cnt+1]+2; break; - } else{ + } else { cnt += in_ie[cnt+1]+2; /* goto next */ } } @@ -336,7 +336,7 @@ int rtw_generate_ie(struct registry_priv *pregistrypriv) wireless_mode = WIRELESS_11A_5N; else wireless_mode = WIRELESS_11BG_24N; - } else{ + } else { wireless_mode = pregistrypriv->wireless_mode; } @@ -347,7 +347,7 @@ int rtw_generate_ie(struct 
registry_priv *pregistrypriv) if (rateLen > 8) { ie = rtw_set_ie(ie, _SUPPORTEDRATES_IE_, 8, pdev_network->SupportedRates, &sz); /* ie = rtw_set_ie(ie, _EXT_SUPPORTEDRATES_IE_, (rateLen - 8), (pdev_network->SupportedRates + 8), &sz); */ - } else{ + } else { ie = rtw_set_ie(ie, _SUPPORTEDRATES_IE_, rateLen, pdev_network->SupportedRates, &sz); } @@ -404,7 +404,7 @@ unsigned char *rtw_get_wpa_ie(unsigned char *pie, int *wpa_ie_len, int limit) return pbuf; - } else{ + } else { *wpa_ie_len = 0; return NULL; } @@ -642,7 +642,7 @@ int rtw_get_wapi_ie(u8 *in_ie, uint in_len, u8 *wapi_ie, u16 *wapi_len) *wapi_len = in_ie[cnt+1]+2; cnt += in_ie[cnt+1]+2; /* get next */ - } else{ + } else { cnt += in_ie[cnt+1]+2; /* get next */ } } @@ -684,7 +684,7 @@ int rtw_get_sec_ie(u8 *in_ie, uint in_len, u8 *rsn_ie, u16 *rsn_len, u8 *wpa_ie, *wpa_len = in_ie[cnt+1]+2; cnt += in_ie[cnt+1]+2; /* get next */ - } else{ + } else { if (authmode == _WPA2_IE_ID_) { RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("\n get_rsn_ie: sec_idx =%d in_ie[cnt+1]+2 =%d\n", sec_idx, in_ie[cnt+1]+2)); @@ -700,7 +700,7 @@ int rtw_get_sec_ie(u8 *in_ie, uint in_len, u8 *rsn_ie, u16 *rsn_len, u8 *wpa_ie, *rsn_len = in_ie[cnt+1]+2; cnt += in_ie[cnt+1]+2; /* get next */ - } else{ + } else { cnt += in_ie[cnt+1]+2; /* get next */ } } @@ -765,7 +765,7 @@ u8 *rtw_get_wps_ie(u8 *in_ie, uint in_len, u8 *wps_ie, uint *wps_ielen) cnt += in_ie[cnt+1]+2; break; - } else{ + } else { cnt += in_ie[cnt+1]+2; /* goto next */ } } @@ -817,7 +817,7 @@ u8 *rtw_get_wps_attr(u8 *wps_ie, uint wps_ielen, u16 target_attr_id, u8 *buf_att *len_attr = attr_len; break; - } else{ + } else { attr_ptr += attr_len; /* goto next */ } } @@ -1252,7 +1252,7 @@ u16 rtw_mcs_rate(u8 rf_type, u8 bw_40MHz, u8 short_GI, unsigned char *MCS_rate) max_rate = (bw_40MHz) ? ((short_GI)?300:270):((short_GI)?144:130); else if (MCS_rate[0] & BIT(0)) max_rate = (bw_40MHz) ? ((short_GI)?150:135):((short_GI)?72:65); - } else{ + } else { if (MCS_rate[1]) { if (MCS_rate[1] & BIT(7)) max_rate = (bw_40MHz) ? ((short_GI)?3000:2700):((short_GI)?1444:1300); @@ -1270,7 +1270,7 @@ u16 rtw_mcs_rate(u8 rf_type, u8 bw_40MHz, u8 short_GI, unsigned char *MCS_rate) max_rate = (bw_40MHz) ? ((short_GI)?600:540):((short_GI)?289:260); else if (MCS_rate[1] & BIT(0)) max_rate = (bw_40MHz) ? ((short_GI)?300:270):((short_GI)?144:130); - } else{ + } else { if (MCS_rate[0] & BIT(7)) max_rate = (bw_40MHz) ? 
((short_GI)?1500:1350):((short_GI)?722:650); else if (MCS_rate[0] & BIT(6)) diff --git a/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c b/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c index 35ca825a7e5a..0c8a050b2a81 100644 --- a/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c +++ b/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c @@ -95,20 +95,20 @@ u8 rtw_do_join(struct adapter *padapter) pmlmepriv->to_join = false; RT_TRACE(_module_rtl871x_ioctl_set_c_, _drv_err_, ("rtw_do_join(): site survey return error\n.")); } - } else{ + } else { pmlmepriv->to_join = false; ret = _FAIL; } goto exit; - } else{ + } else { int select_ret; spin_unlock_bh(&(pmlmepriv->scanned_queue.lock)); select_ret = rtw_select_and_join_from_scanned_queue(pmlmepriv); if (select_ret == _SUCCESS) { pmlmepriv->to_join = false; _set_timer(&pmlmepriv->assoc_timer, MAX_JOIN_TIMEOUT); - } else{ + } else { if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true) { /* submit createbss_cmd to change to a ADHOC_MASTER */ @@ -135,7 +135,7 @@ u8 rtw_do_join(struct adapter *padapter) RT_TRACE(_module_rtl871x_ioctl_set_c_, _drv_info_, ("***Error => rtw_select_and_join_from_scanned_queue FAIL under STA_Mode***\n ")); - } else{ + } else { /* can't associate ; reset under-linking */ _clr_fwstate_(pmlmepriv, _FW_UNDER_LINKING); @@ -150,7 +150,7 @@ u8 rtw_do_join(struct adapter *padapter) pmlmepriv->to_join = false; RT_TRACE(_module_rtl871x_ioctl_set_c_, _drv_err_, ("do_join(): site survey return error\n.")); } - } else{ + } else { ret = _FAIL; pmlmepriv->to_join = false; } @@ -289,13 +289,13 @@ u8 rtw_set_802_11_ssid(struct adapter *padapter, struct ndis_802_11_ssid *ssid) _clr_fwstate_(pmlmepriv, WIFI_ADHOC_MASTER_STATE); set_fwstate(pmlmepriv, WIFI_ADHOC_STATE); } - } else{ + } else { goto release_mlme_lock;/* it means driver is in WIFI_ADHOC_MASTER_STATE, we needn't create bss again. 
*/ } } else { rtw_lps_ctrl_wk_cmd(padapter, LPS_CTRL_JOINBSS, 1); } - } else{ + } else { RT_TRACE(_module_rtl871x_ioctl_set_c_, _drv_info_, ("Set SSID not the same ssid\n")); RT_TRACE(_module_rtl871x_ioctl_set_c_, _drv_info_, ("set_ssid =[%s] len = 0x%x\n", ssid->Ssid, (unsigned int)ssid->SsidLength)); RT_TRACE(_module_rtl871x_ioctl_set_c_, _drv_info_, ("assoc_ssid =[%s] len = 0x%x\n", pmlmepriv->assoc_ssid.Ssid, (unsigned int)pmlmepriv->assoc_ssid.SsidLength)); @@ -670,7 +670,7 @@ u16 rtw_get_cur_max_rate(struct adapter *adapter) short_GI, psta->htpriv.ht_cap.supp_mcs_set ); - } else{ + } else { while ((pcur_bss->SupportedRates[i] != 0) && (pcur_bss->SupportedRates[i] != 0xFF)) { rate = pcur_bss->SupportedRates[i]&0x7F; if (rate > max_rate) diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme.c b/drivers/staging/rtl8723bs/core/rtw_mlme.c index 4c5d5cf9dfe0..406e313477aa 100644 --- a/drivers/staging/rtl8723bs/core/rtw_mlme.c +++ b/drivers/staging/rtl8723bs/core/rtw_mlme.c @@ -902,7 +902,7 @@ void rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf) if (rtw_select_and_join_from_scanned_queue(pmlmepriv) == _SUCCESS) { _set_timer(&pmlmepriv->assoc_timer, MAX_JOIN_TIMEOUT); - } else{ + } else { struct wlan_bssid_ex *pdev_network = &(adapter->registrypriv.dev_network); u8 *pibss = adapter->registrypriv.dev_network.MacAddress; @@ -925,7 +925,7 @@ void rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf) pmlmepriv->to_join = false; } } - } else{ + } else { int s_ret; set_fwstate(pmlmepriv, _FW_UNDER_LINKING); pmlmepriv->to_join = false; @@ -935,7 +935,7 @@ void rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf) } else if (s_ret == 2) {/* there is no need to wait for join */ _clr_fwstate_(pmlmepriv, _FW_UNDER_LINKING); rtw_indicate_connect(adapter); - } else{ + } else { DBG_871X("try_to_join, but select scanning queue fail, to_roam:%d\n", rtw_to_roam(adapter)); if (rtw_to_roam(adapter) != 0) { @@ -1389,7 +1389,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) if (pmlmepriv->assoc_ssid.SsidLength == 0) { RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("@@@@@ joinbss event call back for Any SSid\n")); - } else{ + } else { RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("@@@@@ rtw_joinbss_event_callback for SSid:%s\n", pmlmepriv->assoc_ssid.Ssid)); } @@ -1416,7 +1416,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) if (check_fwstate(pmlmepriv, _FW_LINKED)) { if (the_same_macaddr == true) { ptarget_wlan = rtw_find_network(&pmlmepriv->scanned_queue, cur_network->network.MacAddress); - } else{ + } else { pcur_wlan = rtw_find_network(&pmlmepriv->scanned_queue, cur_network->network.MacAddress); if (pcur_wlan) pcur_wlan->fixed = false; @@ -1432,7 +1432,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) } } - } else{ + } else { ptarget_wlan = _rtw_find_same_network(&pmlmepriv->scanned_queue, pnetwork); if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) { if (ptarget_wlan) @@ -1443,7 +1443,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) /* s2. 
update cur_network */ if (ptarget_wlan) { rtw_joinbss_update_network(adapter, ptarget_wlan, pnetwork); - } else{ + } else { DBG_871X_LEVEL(_drv_always_, "Can't find ptarget_wlan when joinbss_event callback\n"); spin_unlock_bh(&(pmlmepriv->scanned_queue.lock)); goto ignore_joinbss_callback; @@ -1464,7 +1464,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) { pmlmepriv->cur_network_scanned = ptarget_wlan; rtw_indicate_connect(adapter); - } else{ + } else { /* adhoc mode will rtw_indicate_connect when rtw_stassoc_event_callback */ RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("adhoc mode, fw_state:%x", get_fwstate(pmlmepriv))); } @@ -1475,7 +1475,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) RT_TRACE(_module_rtl871x_mlme_c_, _drv_info_, ("Cancel assoc_timer\n")); - } else{ + } else { RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("rtw_joinbss_event_callback err: fw_state:%x", get_fwstate(pmlmepriv))); spin_unlock_bh(&(pmlmepriv->scanned_queue.lock)); goto ignore_joinbss_callback; @@ -1494,7 +1494,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) _clr_fwstate_(pmlmepriv, _FW_UNDER_LINKING); } - } else{/* if join_res < 0 (join fails), then try again */ + } else {/* if join_res < 0 (join fails), then try again */ #ifdef REJOIN res = _FAIL; @@ -1510,7 +1510,7 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf) } else if (res == 2) {/* there is no need to wait for join */ _clr_fwstate_(pmlmepriv, _FW_UNDER_LINKING); rtw_indicate_connect(adapter); - } else{ + } else { RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_, ("Set Assoc_Timer = 1; can't find match ssid in scanned_q\n")); #endif @@ -1846,7 +1846,7 @@ void _rtw_join_timeout_handler(struct timer_list *t) } } - } else{ + } else { rtw_indicate_disconnect(adapter); free_scanqueue(pmlmepriv);/* */ @@ -1956,11 +1956,11 @@ void rtw_dynamic_check_timer_handler(struct adapter *adapter) if (bEnterPS) { /* rtw_lps_ctrl_wk_cmd(adapter, LPS_CTRL_ENTER, 1); */ rtw_hal_dm_watchdog_in_lps(adapter); - } else{ + } else { /* call rtw_lps_ctrl_wk_cmd(padapter, LPS_CTRL_LEAVE, 1) in traffic_status_watchdog() */ } - } else{ + } else { if (is_primary_adapter(adapter)) { rtw_dynamic_chk_wk_cmd(adapter); } @@ -2373,10 +2373,8 @@ sint rtw_set_key(struct adapter *adapter, struct security_priv *psecuritypriv, s INIT_LIST_HEAD(&pcmd->list); - /* sema_init(&(pcmd->cmd_sem), 0); */ - res = rtw_enqueue_cmd(pcmdpriv, pcmd); - } else{ + } else { setkey_hdl(adapter, (u8 *)psetkeyparm); kfree((u8 *) psetkeyparm); } @@ -2434,7 +2432,7 @@ static int SecIsInPMKIDList(struct adapter *Adapter, u8 *bssid) if ((psecuritypriv->PMKIDList[i].bUsed) && (!memcmp(psecuritypriv->PMKIDList[i].Bssid, bssid, ETH_ALEN))) { break; - } else{ + } else { i++; /* continue; */ } @@ -2521,7 +2519,7 @@ sint rtw_restruct_sec_ie(struct adapter *adapter, u8 *in_ie, u8 *out_ie, uint in iEntry = SecIsInPMKIDList(adapter, pmlmepriv->assoc_bssid); if (iEntry < 0) { return ielength; - } else{ + } else { if (authmode == _WPA2_IE_ID_) ielength = rtw_append_pmkid(adapter, iEntry, out_ie, ielength); } @@ -2636,7 +2634,7 @@ void rtw_joinbss_reset(struct adapter *padapter) else threshold = 0; rtw_hal_set_hwreg(padapter, HW_VAR_RXDMA_AGG_PG_TH, (u8 *)(&threshold)); - } else{ + } else { threshold = 1; rtw_hal_set_hwreg(padapter, HW_VAR_RXDMA_AGG_PG_TH, (u8 *)(&threshold)); } @@ -3132,7 +3130,7 @@ sint rtw_linked_check(struct adapter *padapter) 
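The rtw_cmd.c hunks above replace the driver's counting semaphores (cmd_queue_sema, terminate_cmdthread_sema) with struct completion, and rtw_mlme.c likewise drops a commented-out sema_init(); rtw_lps_change_dtim_hdl() now takes pwrpriv->lock with mutex_lock()/mutex_unlock() instead of down()/up(), the standard binary-semaphore-to-mutex swap (which also gains lockdep coverage). A minimal sketch of the completion-based producer/consumer handshake those hunks adopt — hypothetical names, not the driver's exact code:

	#include <linux/completion.h>
	#include <linux/kthread.h>

	static struct completion work_ready;   /* one complete() per queued item */
	static struct completion thread_done;  /* signalled once at thread exit  */
	static int stop_req;                   /* set by the stopper             */

	static int cmd_thread(void *data)
	{
		/* wait_for_completion_interruptible() returns 0 when
		 * complete() was called, -ERESTARTSYS on a signal --
		 * mirroring the old down_interruptible() error path. */
		while (!wait_for_completion_interruptible(&work_ready)) {
			if (stop_req)
				break;
			/* dequeue and handle one command here */
		}
		complete(&thread_done);            /* unblock the stopper */
		return 0;
	}

	static void enqueue_cmd(void)
	{
		/* add the command to the queue, then: */
		complete(&work_ready);             /* was: up(&cmd_queue_sema) */
	}

	static void stop_cmd_thread(void)
	{
		stop_req = 1;
		complete(&work_ready);             /* wake the thread      */
		wait_for_completion(&thread_done); /* was: down(&term_sema) */
	}

	static void start_cmd_thread(void)
	{
		init_completion(&work_ready);
		init_completion(&thread_done);
		kthread_run(cmd_thread, NULL, "cmd_thread");
	}

Each complete() increments the completion's done count and each wait consumes one, so the counting semantics of the old up()/down() pairs are preserved; completions are preferred here because they cannot be used (or misused) as locks, which is exactly what the separate pwrpriv->lock mutex conversion addresses.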
(check_fwstate(&padapter->mlmepriv, WIFI_ADHOC_STATE|WIFI_ADHOC_MASTER_STATE) == true)) { if (padapter->stapriv.asoc_sta_count > 2) return true; - } else{ /* Station mode */ + } else { /* Station mode */ if (check_fwstate(&padapter->mlmepriv, _FW_LINKED) == true) return true; } diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c index 8445d516c93d..ebb45acb4446 100644 --- a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c +++ b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c @@ -865,7 +865,7 @@ unsigned int OnBeacon(struct adapter *padapter, union recv_frame *precv_frame) /* DBG_871X("update_bcn_info\n"); */ update_beacon_info(padapter, pframe, len, psta); } - } else{ + } else { /* allocate a new CAM entry for IBSS station */ cam_idx = allocate_fw_sta_entry(padapter); if (cam_idx == NUM_STA) @@ -977,7 +977,7 @@ unsigned int OnAuth(struct adapter *padapter, union recv_frame *precv_frame) /* pstat->flags = 0; */ /* pstat->capability = 0; */ - } else{ + } else { spin_lock_bh(&pstapriv->asoc_list_lock); if (list_empty(&pstat->asoc_list) == false) { @@ -1019,13 +1019,13 @@ unsigned int OnAuth(struct adapter *padapter, union recv_frame *precv_frame) pstat->state |= WIFI_FW_AUTH_SUCCESS; pstat->expire_to = pstapriv->assoc_to; pstat->authalg = algorithm; - } else{ + } else { DBG_871X("(2)auth rejected because out of seq [rx_seq =%d, exp_seq =%d]!\n", seq, pstat->auth_seq+1); status = _STATS_OUT_OF_AUTH_SEQ_; goto auth_fail; } - } else{ /* shared system or auto authentication */ + } else { /* shared system or auto authentication */ if (seq == 1) { /* prepare for the challenging txt... */ memset((void *)pstat->chg_txt, 78, 128); @@ -1052,12 +1052,12 @@ unsigned int OnAuth(struct adapter *padapter, union recv_frame *precv_frame) pstat->state |= WIFI_FW_AUTH_SUCCESS; /* challenging txt is correct... 
*/ pstat->expire_to = pstapriv->assoc_to; - } else{ + } else { DBG_871X("auth rejected because challenge failure!\n"); status = _STATS_CHALLENGE_FAIL_; goto auth_fail; } - } else{ + } else { DBG_871X("(3)auth rejected because out of seq [rx_seq =%d, exp_seq =%d]!\n", seq, pstat->auth_seq+1); status = _STATS_OUT_OF_AUTH_SEQ_; @@ -1149,17 +1149,17 @@ unsigned int OnAuthClient(struct adapter *padapter, union recv_frame *precv_fram set_link_timer(pmlmeext, REAUTH_TO); return _SUCCESS; - } else{ + } else { /* open system */ go2asoc = 1; } } else if (seq == 4) { if (pmlmeinfo->auth_algo == dot11AuthAlgrthm_Shared) { go2asoc = 1; - } else{ + } else { goto authclnt_fail; } - } else{ + } else { /* this is also illegal */ /* DBG_871X("marc: clnt auth failed due to illegal seq =%x\n", seq); */ goto authclnt_fail; @@ -1207,7 +1207,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame) if (frame_type == WIFI_ASSOCREQ) { reassoc = 0; ie_offset = _ASOCREQ_IE_OFFSET_; - } else{ /* WIFI_REASSOCREQ */ + } else { /* WIFI_REASSOCREQ */ reassoc = 1; ie_offset = _REASOCREQ_IE_OFFSET_; } @@ -1241,11 +1241,11 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame) if (!((pstat->state) & WIFI_FW_ASSOC_SUCCESS)) { status = _RSON_CLS2_; goto asoc_class2_error; - } else{ + } else { pstat->state &= (~WIFI_FW_ASSOC_SUCCESS); pstat->state |= WIFI_FW_ASSOC_STATE; } - } else{ + } else { pstat->state &= (~WIFI_FW_AUTH_SUCCESS); pstat->state |= WIFI_FW_ASSOC_STATE; } @@ -1344,7 +1344,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame) if (!pstat->wpa2_pairwise_cipher) status = WLAN_STATUS_PAIRWISE_CIPHER_NOT_VALID; - } else{ + } else { status = WLAN_STATUS_INVALID_IE; } @@ -1368,7 +1368,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame) if (!pstat->wpa_pairwise_cipher) status = WLAN_STATUS_PAIRWISE_CIPHER_NOT_VALID; - } else{ + } else { status = WLAN_STATUS_INVALID_IE; } @@ -1417,7 +1417,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame) } } - } else{ + } else { int copy_len; if (psecuritypriv->wpa_psk == 0) { @@ -1436,7 +1436,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame) "used\n"); pstat->flags |= WLAN_STA_WPS; copy_len = 0; - } else{ + } else { copy_len = ((wpa_ie_len+2) > sizeof(pstat->wpa_ie)) ? 
(sizeof(pstat->wpa_ie)):(wpa_ie_len+2); } @@ -1793,7 +1793,7 @@ unsigned int OnDeAuth(struct adapter *padapter, union recv_frame *precv_frame) return _SUCCESS; - } else{ + } else { int ignore_received_deauth = 0; /* Commented by Albert 20130604 */ @@ -1867,7 +1867,7 @@ unsigned int OnDisassoc(struct adapter *padapter, union recv_frame *precv_frame) } return _SUCCESS; - } else{ + } else { DBG_871X_LEVEL(_drv_always_, "sta recv disassoc reason code(%d) sta:%pM\n", reason, GetAddr3Ptr(pframe)); @@ -1969,7 +1969,7 @@ unsigned int OnAction_back(struct adapter *padapter, union recv_frame *precv_fra if (pmlmeinfo->accept_addba_req) { issue_action_BA(padapter, addr, RTW_WLAN_ACTION_ADDBA_RESP, 0); - } else{ + } else { issue_action_BA(padapter, addr, RTW_WLAN_ACTION_ADDBA_RESP, 37);/* reject ADDBA Req */ } @@ -1984,7 +1984,7 @@ unsigned int OnAction_back(struct adapter *padapter, union recv_frame *precv_fra DBG_871X("agg_enable for TID =%d\n", tid); psta->htpriv.agg_enable_bitmap |= 1 << tid; psta->htpriv.candidate_tid_bitmap &= ~BIT(tid); - } else{ + } else { psta->htpriv.agg_enable_bitmap &= ~BIT(tid); } @@ -2692,7 +2692,7 @@ void issue_probersp(struct adapter *padapter, unsigned char *da, u8 is_valid_p2p pframe += remainder_ielen; pattrib->pktlen += remainder_ielen; } - } else{ + } else { memcpy(pframe, cur_network->IEs, cur_network->IELength); pframe += cur_network->IELength; pattrib->pktlen += cur_network->IELength; @@ -2731,7 +2731,7 @@ void issue_probersp(struct adapter *padapter, unsigned char *da, u8 is_valid_p2p pattrib->pktlen += ssid_ielen_diff; } } - } else{ + } else { /* timestamp will be inserted by hardware */ pframe += 8; pattrib->pktlen += 8; @@ -2867,7 +2867,7 @@ static int _issue_probereq(struct adapter *padapter, /* unicast probe request frame */ memcpy(pwlanhdr->addr1, da, ETH_ALEN); memcpy(pwlanhdr->addr3, da, ETH_ALEN); - } else{ + } else { /* broadcast probe request frame */ memcpy(pwlanhdr->addr1, bc_addr, ETH_ALEN); memcpy(pwlanhdr->addr3, bc_addr, ETH_ALEN); @@ -2892,7 +2892,7 @@ static int _issue_probereq(struct adapter *padapter, if (bssrate_len > 8) { pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, bssrate, &(pattrib->pktlen)); pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, (bssrate_len - 8), (bssrate + 8), &(pattrib->pktlen)); - } else{ + } else { pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, bssrate_len, bssrate, &(pattrib->pktlen)); } @@ -3040,7 +3040,7 @@ void issue_auth(struct adapter *padapter, struct sta_info *psta, unsigned short if ((psta->auth_seq == 2) && (psta->state & WIFI_FW_AUTH_STATE) && (use_shared_key == 1)) pframe = rtw_set_ie(pframe, _CHLGETXT_IE_, 128, psta->chg_txt, &(pattrib->pktlen)); - } else{ + } else { memcpy(pwlanhdr->addr1, get_my_bssid(&pmlmeinfo->network), ETH_ALEN); memcpy(pwlanhdr->addr2, myid(&padapter->eeprompriv), ETH_ALEN); memcpy(pwlanhdr->addr3, get_my_bssid(&pmlmeinfo->network), ETH_ALEN); @@ -3168,7 +3168,7 @@ void issue_asocrsp(struct adapter *padapter, unsigned short status, struct sta_i if (pstat->bssratelen <= 8) { pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, pstat->bssratelen, pstat->bssrateset, &(pattrib->pktlen)); - } else{ + } else { pframe = rtw_set_ie(pframe, _SUPPORTEDRATES_IE_, 8, pstat->bssrateset, &(pattrib->pktlen)); pframe = rtw_set_ie(pframe, _EXT_SUPPORTEDRATES_IE_, (pstat->bssratelen-8), pstat->bssrateset+8, &(pattrib->pktlen)); } @@ -3487,7 +3487,7 @@ static int _issue_nulldata(struct adapter *padapter, unsigned char *da, if (wait_ack) { ret = dump_mgntframe_and_wait_ack(padapter, 
pmgntframe); - } else{ + } else { dump_mgntframe(padapter, pmgntframe); ret = _SUCCESS; } @@ -3656,7 +3656,7 @@ static int _issue_qos_nulldata(struct adapter *padapter, unsigned char *da, if (wait_ack) { ret = dump_mgntframe_and_wait_ack(padapter, pmgntframe); - } else{ + } else { dump_mgntframe(padapter, pmgntframe); ret = _SUCCESS; } @@ -3765,7 +3765,7 @@ static int _issue_deauth(struct adapter *padapter, unsigned char *da, if (wait_ack) { ret = dump_mgntframe_and_wait_ack(padapter, pmgntframe); - } else{ + } else { dump_mgntframe(padapter, pmgntframe); ret = _SUCCESS; } @@ -4271,7 +4271,7 @@ unsigned int send_beacon(struct adapter *padapter) if (false == bxmitok) { DBG_871X("%s fail! %u ms\n", __func__, jiffies_to_msecs(jiffies - start)); return _FAIL; - } else{ + } else { unsigned long passing_time = jiffies_to_msecs(jiffies - start); if (passing_time > 100 || issue > 3) @@ -4331,7 +4331,7 @@ void site_survey(struct adapter *padapter) else #endif set_channel_bwmode(padapter, survey_channel, HAL_PRIME_CHNL_OFFSET_DONT_CARE, CHANNEL_WIDTH_20); - } else{ + } else { #ifdef DBG_FIXED_CHAN if (pmlmeext->fixed_chan != 0xff) SelectChannel(padapter, pmlmeext->fixed_chan); @@ -4379,7 +4379,7 @@ void site_survey(struct adapter *padapter) } #endif - } else{ + } else { /* channel number is 0 or this channel is not valid. */ @@ -4641,7 +4641,7 @@ void start_create_ibss(struct adapter *padapter) report_join_res(padapter, -1); pmlmeinfo->state = WIFI_FW_NULL_STATE; - } else{ + } else { rtw_hal_set_hwreg(padapter, HW_VAR_BSSID, padapter->registrypriv.dev_network.MacAddress); join_type = 0; rtw_hal_set_hwreg(padapter, HW_VAR_MLME_JOIN, (u8 *)(&join_type)); @@ -4650,7 +4650,7 @@ void start_create_ibss(struct adapter *padapter) pmlmeinfo->state |= WIFI_FW_ASSOC_SUCCESS; rtw_indicate_connect(padapter); } - } else{ + } else { DBG_871X("start_create_ibss, invalid cap:%x\n", caps); return; } @@ -4711,7 +4711,7 @@ void start_clnt_join(struct adapter *padapter) pmlmeinfo->state = WIFI_FW_ADHOC_STATE; report_join_res(padapter, 1); - } else{ + } else { /* DBG_871X("marc: invalid cap:%x\n", caps); */ return; } @@ -4914,7 +4914,7 @@ static void process_80211d(struct adapter *padapter, struct wlan_bssid_ex *bssid j++; k++; } - } else{ + } else { /* keep original STA 2.4G channel plan */ while ((i < MAX_CHANNEL_NUM) && (chplan_sta[i].ChannelNum != 0) && @@ -4976,7 +4976,7 @@ static void process_80211d(struct adapter *padapter, struct wlan_bssid_ex *bssid j++; k++; } - } else{ + } else { /* keep original STA 5G channel plan */ while ((i < MAX_CHANNEL_NUM) && (chplan_sta[i].ChannelNum != 0)) { chplan_new[k].ChannelNum = chplan_sta[i].ChannelNum; @@ -5334,13 +5334,6 @@ void report_add_sta_event(struct adapter *padapter, unsigned char *MacAddr, int return; } - -bool rtw_port_switch_chk(struct adapter *adapter) -{ - bool switch_needed = false; - return switch_needed; -} - /**************************************************************************** Following are the event callback functions @@ -5378,7 +5371,7 @@ void update_sta_info(struct adapter *padapter, struct sta_info *psta) psta->htpriv.beamform_cap = pmlmepriv->htpriv.beamform_cap; memcpy(&psta->htpriv.ht_cap, &pmlmeinfo->HT_caps, sizeof(struct rtw_ieee80211_ht_cap)); - } else{ + } else { psta->htpriv.ht_option = false; psta->htpriv.ampdu_enable = false; @@ -5414,7 +5407,6 @@ static void rtw_mlmeext_disconnect(struct adapter *padapter) struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv; struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); struct 
wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&(pmlmeinfo->network)); - u8 state_backup = (pmlmeinfo->state&0x03); /* set_opmode_cmd(padapter, infra_client_with_mlme); */ @@ -5440,17 +5432,6 @@ static void rtw_mlmeext_disconnect(struct adapter *padapter) pmlmeinfo->state = WIFI_FW_NULL_STATE; - if (state_backup == WIFI_FW_STATION_STATE) { - if (rtw_port_switch_chk(padapter)) { - rtw_hal_set_hwreg(padapter, HW_VAR_PORT_SWITCH, NULL); - { - struct adapter *port0_iface = dvobj_get_port0_adapter(adapter_to_dvobj(padapter)); - if (port0_iface) - rtw_lps_ctrl_wk_cmd(port0_iface, LPS_CTRL_CONNECT, 0); - } - } - } - /* switch to the 20M Hz mode after disconnect */ pmlmeext->cur_bwmode = CHANNEL_WIDTH_20; pmlmeext->cur_ch_offset = HAL_PRIME_CHNL_OFFSET_DONT_CARE; @@ -5530,9 +5511,6 @@ void mlmeext_joinbss_event_callback(struct adapter *padapter, int join_res) rtw_hal_macid_wakeup(padapter, psta->mac_id); } - if (rtw_port_switch_chk(padapter)) - rtw_hal_set_hwreg(padapter, HW_VAR_PORT_SWITCH, NULL); - join_type = 2; rtw_hal_set_hwreg(padapter, HW_VAR_MLME_JOIN, (u8 *)(&join_type)); @@ -5565,7 +5543,7 @@ void mlmeext_sta_add_event_callback(struct adapter *padapter, struct sta_info *p if (pmlmeinfo->state & WIFI_FW_ASSOC_SUCCESS) { /* adhoc master or sta_count>1 */ /* nothing to do */ - } else{ /* adhoc client */ + } else { /* adhoc client */ /* update TSF Value */ /* update_TSF(pmlmeext, pframe, len); */ @@ -5699,7 +5677,7 @@ static u8 chk_ap_is_alive(struct adapter *padapter, struct sta_info *psta) && sta_rx_probersp_pkts(psta) == sta_last_rx_probersp_pkts(psta) ) { ret = false; - } else{ + } else { ret = true; } @@ -5803,14 +5781,14 @@ void linked_status_chk(struct adapter *padapter) if (pmlmeinfo->FW_sta_info[i].retry < 3) { pmlmeinfo->FW_sta_info[i].retry++; - } else{ + } else { pmlmeinfo->FW_sta_info[i].retry = 0; pmlmeinfo->FW_sta_info[i].status = 0; report_del_sta_event(padapter, psta->hwaddr , 65535/* indicate disconnect caused by no rx */ ); } - } else{ + } else { pmlmeinfo->FW_sta_info[i].retry = 0; pmlmeinfo->FW_sta_info[i].rx_pkt = (u32)sta_rx_pkts(psta); } @@ -6018,7 +5996,7 @@ static int rtw_auto_ap_start_beacon(struct adapter *adapter) rateLen = rtw_get_rateset_len(supportRate); if (rateLen > 8) { ie = rtw_set_ie(ie, _SUPPORTEDRATES_IE_, 8, supportRate, &sz); - } else{ + } else { ie = rtw_set_ie(ie, _SUPPORTEDRATES_IE_, rateLen, supportRate, &sz); } @@ -6030,7 +6008,7 @@ static int rtw_auto_ap_start_beacon(struct adapter *adapter) struct mlme_ext_priv *pbuddy_mlmeext = &pbuddystruct adapter->mlmeextpriv; oper_channel = pbuddy_mlmeext->cur_channel; - } else{ + } else { oper_channel = adapter_to_dvobj(adapter)->oper_channel; } ie = rtw_set_ie(ie, _DSSET_IE_, 1, &oper_channel, &sz); @@ -6045,7 +6023,7 @@ static int rtw_auto_ap_start_beacon(struct adapter *adapter) /* lunch ap mode & start to issue beacon */ if (rtw_check_beacon_data(adapter, pbuf, sz) == _SUCCESS) { - } else{ + } else { ret = -EINVAL; } @@ -6074,7 +6052,7 @@ u8 setopmode_hdl(struct adapter *padapter, u8 *pbuf) type = _HW_STATE_STATION_; } else if (psetop->mode == Ndis802_11IBSS) { type = _HW_STATE_ADHOC_; - } else{ + } else { type = _HW_STATE_NOLINK_; } @@ -6087,18 +6065,6 @@ u8 setopmode_hdl(struct adapter *padapter, u8 *pbuf) rtw_auto_ap_start_beacon(padapter); #endif - if (rtw_port_switch_chk(padapter)) { - rtw_hal_set_hwreg(padapter, HW_VAR_PORT_SWITCH, NULL); - - if (psetop->mode == Ndis802_11APMode) - adapter_to_pwrctl(padapter)->fw_psmode_iface_id = 0xff; /* ap mode won't dowload rsvd pages */ - else if 
(psetop->mode == Ndis802_11Infrastructure) { - struct adapter *port0_iface = dvobj_get_port0_adapter(adapter_to_dvobj(padapter)); - if (port0_iface) - rtw_lps_ctrl_wk_cmd(port0_iface, LPS_CTRL_CONNECT, 0); - } - } - if (psetop->mode == Ndis802_11APMode) { /* Do this after port switch to */ /* prevent from downloading rsvd page to wrong port */ @@ -6592,7 +6558,7 @@ u8 add_ba_hdl(struct adapter *padapter, unsigned char *pbuf) issue_action_BA(padapter, pparm->addr, RTW_WLAN_ACTION_ADDBA_REQ, (u16)pparm->tid); /* _set_timer(&pmlmeext->ADDBA_timer, ADDBA_TO); */ _set_timer(&psta->addba_retry_timer, ADDBA_TO); - } else{ + } else { psta->htpriv.candidate_tid_bitmap &= ~BIT(pparm->tid); } return H2C_SUCCESS; diff --git a/drivers/staging/rtl8723bs/core/rtw_pwrctrl.c b/drivers/staging/rtl8723bs/core/rtw_pwrctrl.c index 59a667753266..5c468c5057b1 100644 --- a/drivers/staging/rtl8723bs/core/rtw_pwrctrl.c +++ b/drivers/staging/rtl8723bs/core/rtw_pwrctrl.c @@ -45,9 +45,9 @@ void ips_enter(struct adapter *padapter) rtw_btcoex_IpsNotify(padapter, pwrpriv->ips_mode_req); - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); _ips_enter(padapter); - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); } int _ips_leave(struct adapter *padapter) @@ -85,9 +85,9 @@ int ips_leave(struct adapter *padapter) if (!is_primary_adapter(padapter)) return _SUCCESS; - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); ret = _ips_leave(padapter); - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); if (_SUCCESS == ret) rtw_btcoex_IpsNotify(padapter, IPS_NONE); @@ -158,9 +158,9 @@ void rtw_ps_processor(struct adapter *padapter) struct debug_priv *pdbgpriv = &psdpriv->drv_dbg; u32 ps_deny = 0; - down(&adapter_to_pwrctl(padapter)->lock); + mutex_lock(&adapter_to_pwrctl(padapter)->lock); ps_deny = rtw_ps_deny_get(padapter); - up(&adapter_to_pwrctl(padapter)->lock); + mutex_unlock(&adapter_to_pwrctl(padapter)->lock); if (ps_deny != 0) { DBG_871X(FUNC_ADPT_FMT ": ps_deny = 0x%08X, skip power save!\n", FUNC_ADPT_ARG(padapter), ps_deny); @@ -269,7 +269,7 @@ void rtw_set_rpwm(struct adapter *padapter, u8 pslv) if (pwrpriv->brpwmtimeout == true) { DBG_871X("%s: RPWM timeout, force to set RPWM(0x%02X) again!\n", __func__, pslv); - } else{ + } else { if ((pwrpriv->rpwm == pslv) || ((pwrpriv->rpwm >= PS_STATE_S2) && (pslv >= PS_STATE_S2))) { RT_TRACE(_module_rtl871x_pwrctrl_c_, _drv_err_, @@ -413,7 +413,7 @@ void rtw_set_ps_mode(struct adapter *padapter, u8 ps_mode, u8 smart_ps, u8 bcn_a return; - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); /* if (pwrpriv->pwr_mode == PS_MODE_ACTIVE) */ if (ps_mode == PS_MODE_ACTIVE) { @@ -459,7 +459,7 @@ void rtw_set_ps_mode(struct adapter *padapter, u8 ps_mode, u8 smart_ps, u8 bcn_a rtw_btcoex_LpsNotify(padapter, ps_mode); } - } else{ + } else { if ((PS_RDY_CHECK(padapter) && check_fwstate(&padapter->mlmepriv, WIFI_ASOC_STATE)) || ((rtw_btcoex_IsBtControlLps(padapter) == true) && (rtw_btcoex_IsLpsOn(padapter) == true)) @@ -494,7 +494,7 @@ void rtw_set_ps_mode(struct adapter *padapter, u8 ps_mode, u8 smart_ps, u8 bcn_a } } - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); } /* @@ -628,14 +628,14 @@ void LeaveAllPowerSaveModeDirect(struct adapter *Adapter) return; } - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); rtw_set_rpwm(Adapter, PS_STATE_S4); - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); rtw_lps_ctrl_wk_cmd(pri_padapter, LPS_CTRL_LEAVE, 0); - } else{ + } else { if (pwrpriv->rf_pwrstate == rf_off) if (false == ips_leave(pri_padapter)) DBG_871X("======> ips_leave 
fail.............\n"); @@ -696,7 +696,7 @@ void LPS_Leave_check( cond_resched(); while (1) { - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); if ((padapter->bSurpriseRemoved == true) || (padapter->hw_init_completed == false) @@ -704,7 +704,7 @@ void LPS_Leave_check( ) bReady = true; - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); if (true == bReady) break; @@ -732,11 +732,10 @@ void cpwm_int_hdl( pwrpriv = adapter_to_pwrctl(padapter); - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); if (pwrpriv->rpwm < PS_STATE_S2) { DBG_871X("%s: Redundant CPWM Int. RPWM = 0x%02X CPWM = 0x%02x\n", __func__, pwrpriv->rpwm, pwrpriv->cpwm); - up(&pwrpriv->lock); goto exit; } @@ -745,15 +744,15 @@ void cpwm_int_hdl( if (pwrpriv->cpwm >= PS_STATE_S2) { if (pwrpriv->alives & CMD_ALIVE) - up(&padapter->cmdpriv.cmd_queue_sema); + complete(&padapter->cmdpriv.cmd_queue_comp); if (pwrpriv->alives & XMIT_ALIVE) - up(&padapter->xmitpriv.xmit_sema); + complete(&padapter->xmitpriv.xmit_comp); } - up(&pwrpriv->lock); - exit: + mutex_unlock(&pwrpriv->lock); + RT_TRACE(_module_rtl871x_pwrctrl_c_, _drv_notice_, ("cpwm_int_hdl: cpwm = 0x%02x\n", pwrpriv->cpwm)); } @@ -783,12 +782,12 @@ static void rpwmtimeout_workitem_callback(struct work_struct *work) padapter = dvobj->if1; /* DBG_871X("+%s: rpwm = 0x%02X cpwm = 0x%02X\n", __func__, pwrpriv->rpwm, pwrpriv->cpwm); */ - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); if ((pwrpriv->rpwm == pwrpriv->cpwm) || (pwrpriv->cpwm >= PS_STATE_S2)) { DBG_871X("%s: rpwm = 0x%02X cpwm = 0x%02X CPWM done!\n", __func__, pwrpriv->rpwm, pwrpriv->cpwm); goto exit; } - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); if (rtw_read8(padapter, 0x100) != 0xEA) { struct reportpwrstate_parm report; @@ -800,7 +799,7 @@ static void rpwmtimeout_workitem_callback(struct work_struct *work) return; } - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); if ((pwrpriv->rpwm == pwrpriv->cpwm) || (pwrpriv->cpwm >= PS_STATE_S2)) { DBG_871X("%s: cpwm =%d, nothing to do!\n", __func__, pwrpriv->cpwm); @@ -811,7 +810,7 @@ static void rpwmtimeout_workitem_callback(struct work_struct *work) pwrpriv->brpwmtimeout = false; exit: - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); } /* @@ -867,7 +866,7 @@ s32 rtw_register_task_alive(struct adapter *padapter, u32 task) pwrctrl = adapter_to_pwrctl(padapter); pslv = PS_STATE_S2; - down(&pwrctrl->lock); + mutex_lock(&pwrctrl->lock); register_task_alive(pwrctrl, task); @@ -884,7 +883,7 @@ s32 rtw_register_task_alive(struct adapter *padapter, u32 task) } } - up(&pwrctrl->lock); + mutex_unlock(&pwrctrl->lock); if (_FAIL == res) if (pwrctrl->cpwm >= PS_STATE_S2) @@ -920,7 +919,7 @@ void rtw_unregister_task_alive(struct adapter *padapter, u32 task) pslv = PS_STATE_S2; } - down(&pwrctrl->lock); + mutex_lock(&pwrctrl->lock); unregister_task_alive(pwrctrl, task); @@ -936,7 +935,7 @@ void rtw_unregister_task_alive(struct adapter *padapter, u32 task) } - up(&pwrctrl->lock); + mutex_unlock(&pwrctrl->lock); } /* @@ -962,7 +961,7 @@ s32 rtw_register_tx_alive(struct adapter *padapter) pwrctrl = adapter_to_pwrctl(padapter); pslv = PS_STATE_S2; - down(&pwrctrl->lock); + mutex_lock(&pwrctrl->lock); register_task_alive(pwrctrl, XMIT_ALIVE); @@ -979,7 +978,7 @@ s32 rtw_register_tx_alive(struct adapter *padapter) } } - up(&pwrctrl->lock); + mutex_unlock(&pwrctrl->lock); if (_FAIL == res) if (pwrctrl->cpwm >= PS_STATE_S2) @@ -1011,7 +1010,7 @@ s32 rtw_register_cmd_alive(struct adapter *padapter) pwrctrl = adapter_to_pwrctl(padapter); pslv = PS_STATE_S2; - 
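/*
 * Editor's aside on the rtw_pwrctrl.c hunks around this point: the
 * power-control lock was a semaphore initialized to 1 and taken/released
 * with down()/up(), i.e. it was only ever used for mutual exclusion, so
 * the patch converts it to a struct mutex (which adds owner tracking and
 * lockdep checking, and cannot be released by a task that does not hold
 * it). A minimal sketch of the conversion pattern; example_priv and its
 * fields are hypothetical stand-ins for pwrctrl_priv, not patch code:
 */
#include <linux/mutex.h>
#include <linux/types.h>

struct example_priv {
	struct mutex lock;	/* was: _sema lock, i.e. struct semaphore */
	u32 ps_deny;
};

static void example_init(struct example_priv *p)
{
	mutex_init(&p->lock);		/* was: sema_init(&p->lock, 1) */
}

static void example_ps_deny(struct example_priv *p, u32 reason_bit)
{
	mutex_lock(&p->lock);		/* was: down(&p->lock) */
	p->ps_deny |= reason_bit;
	mutex_unlock(&p->lock);		/* was: up(&p->lock) */
}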
down(&pwrctrl->lock); + mutex_lock(&pwrctrl->lock); register_task_alive(pwrctrl, CMD_ALIVE); @@ -1028,7 +1027,7 @@ s32 rtw_register_cmd_alive(struct adapter *padapter) } } - up(&pwrctrl->lock); + mutex_unlock(&pwrctrl->lock); if (_FAIL == res) if (pwrctrl->cpwm >= PS_STATE_S2) @@ -1061,7 +1060,7 @@ void rtw_unregister_tx_alive(struct adapter *padapter) pslv = PS_STATE_S2; } - down(&pwrctrl->lock); + mutex_lock(&pwrctrl->lock); unregister_task_alive(pwrctrl, XMIT_ALIVE); @@ -1076,7 +1075,7 @@ void rtw_unregister_tx_alive(struct adapter *padapter) rtw_set_rpwm(padapter, pslv); } - up(&pwrctrl->lock); + mutex_unlock(&pwrctrl->lock); } /* @@ -1103,7 +1102,7 @@ void rtw_unregister_cmd_alive(struct adapter *padapter) pslv = PS_STATE_S2; } - down(&pwrctrl->lock); + mutex_lock(&pwrctrl->lock); unregister_task_alive(pwrctrl, CMD_ALIVE); @@ -1119,15 +1118,14 @@ void rtw_unregister_cmd_alive(struct adapter *padapter) } } - up(&pwrctrl->lock); + mutex_unlock(&pwrctrl->lock); } void rtw_init_pwrctrl_priv(struct adapter *padapter) { struct pwrctrl_priv *pwrctrlpriv = adapter_to_pwrctl(padapter); - sema_init(&pwrctrlpriv->lock, 1); - sema_init(&pwrctrlpriv->check_32k_lock, 1); + mutex_init(&pwrctrlpriv->lock); pwrctrlpriv->rf_pwrstate = rf_on; pwrctrlpriv->ips_enter_cnts = 0; pwrctrlpriv->ips_leave_cnts = 0; @@ -1357,13 +1355,13 @@ void rtw_ps_deny(struct adapter *padapter, enum PS_DENY_REASON reason) pwrpriv = adapter_to_pwrctl(padapter); - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); if (pwrpriv->ps_deny & BIT(reason)) { DBG_871X(FUNC_ADPT_FMT ": [WARNING] Reason %d had been set before!!\n", FUNC_ADPT_ARG(padapter), reason); } pwrpriv->ps_deny |= BIT(reason); - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); /* DBG_871X("-" FUNC_ADPT_FMT ": Now PS deny for 0x%08X\n", */ /* FUNC_ADPT_ARG(padapter), pwrpriv->ps_deny); */ @@ -1383,13 +1381,13 @@ void rtw_ps_deny_cancel(struct adapter *padapter, enum PS_DENY_REASON reason) pwrpriv = adapter_to_pwrctl(padapter); - down(&pwrpriv->lock); + mutex_lock(&pwrpriv->lock); if ((pwrpriv->ps_deny & BIT(reason)) == 0) { DBG_871X(FUNC_ADPT_FMT ": [ERROR] Reason %d had been canceled before!!\n", FUNC_ADPT_ARG(padapter), reason); } pwrpriv->ps_deny &= ~BIT(reason); - up(&pwrpriv->lock); + mutex_unlock(&pwrpriv->lock); /* DBG_871X("-" FUNC_ADPT_FMT ": Now PS deny for 0x%08X\n", */ /* FUNC_ADPT_ARG(padapter), pwrpriv->ps_deny); */ diff --git a/drivers/staging/rtl8723bs/core/rtw_recv.c b/drivers/staging/rtl8723bs/core/rtw_recv.c index 02f821d42658..b543e9768e88 100644 --- a/drivers/staging/rtl8723bs/core/rtw_recv.c +++ b/drivers/staging/rtl8723bs/core/rtw_recv.c @@ -113,7 +113,7 @@ union recv_frame *_rtw_alloc_recvframe(struct __queue *pfree_recv_queue) if (list_empty(&pfree_recv_queue->queue)) precvframe = NULL; - else{ + else { phead = get_list_head(pfree_recv_queue); plist = get_next(phead); @@ -291,7 +291,7 @@ struct recv_buf *rtw_dequeue_recvbuf(struct __queue *queue) if (list_empty(&queue->queue)) precvbuf = NULL; - else{ + else { phead = get_list_head(queue); plist = get_next(phead); @@ -411,7 +411,7 @@ sint recvframe_chkmic(struct adapter *adapter, union recv_frame *precvframe) rtw_handle_tkip_mic_err(adapter, (u8)IS_MCAST(prxattrib->ra)); RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, (" mic error :prxattrib->bdecrypted =%d ", prxattrib->bdecrypted)); DBG_871X(" mic error :prxattrib->bdecrypted =%d\n", prxattrib->bdecrypted); - } else{ + } else { RT_TRACE(_module_rtl871x_recv_c_, _drv_err_, (" mic error :prxattrib->bdecrypted =%d ", 
prxattrib->bdecrypted)); DBG_871X(" mic error :prxattrib->bdecrypted =%d\n", prxattrib->bdecrypted); } @@ -591,7 +591,7 @@ union recv_frame *portctrl(struct adapter *adapter, union recv_frame *precv_fram rtw_free_recvframe(precv_frame, &adapter->recvpriv.free_recv_queue); prtnframe = NULL; } - } else{ + } else { /* allowed */ /* check decryption status, and decrypt the frame if needed */ RT_TRACE(_module_rtl871x_recv_c_, _drv_info_, ("########portctrl:psta->ieee8021x_blocked == 0\n")); @@ -669,7 +669,7 @@ void process_pwrbit_data(struct adapter *padapter, union recv_frame *precv_frame /* DBG_871X("to sleep, sta_dz_bitmap =%x\n", pstapriv->sta_dz_bitmap); */ } - } else{ + } else { if (psta->state & WIFI_SLEEP_STATE) { /* psta->state ^= WIFI_SLEEP_STATE; */ /* pstapriv->sta_dz_bitmap &= ~BIT(psta->aid); */ @@ -831,7 +831,7 @@ sint sta2sta_data_frame( ret = _FAIL; goto exit; } - } else{ /* not mc-frame */ + } else { /* not mc-frame */ /* For AP mode, if DA is non-MCAST, then it must be BSSID, and bssid == BSSID */ if (memcmp(pattrib->bssid, pattrib->dst, ETH_ALEN)) { ret = _FAIL; @@ -988,7 +988,7 @@ sint ap2sta_data_frame( /* Special case */ ret = RTW_RX_HANDLED; goto exit; - } else{ + } else { if (!memcmp(myhwaddr, pattrib->dst, ETH_ALEN) && (!bmcast)) { *psta = rtw_get_stainfo(pstapriv, pattrib->bssid); /* get sta_info */ if (*psta == NULL) { @@ -1187,7 +1187,7 @@ sint validate_recv_ctrl_frame(struct adapter *padapter, union recv_frame *precv_ /* spin_unlock_bh(&psta->sleep_q.lock); */ spin_unlock_bh(&pxmitpriv->lock); - } else{ + } else { /* spin_unlock_bh(&psta->sleep_q.lock); */ spin_unlock_bh(&pxmitpriv->lock); @@ -1198,7 +1198,7 @@ sint validate_recv_ctrl_frame(struct adapter *padapter, union recv_frame *precv_ /* issue nulldata with More data bit = 0 to indicate we have no buffered packets */ issue_nulldata_in_interrupt(padapter, psta->hwaddr); - } else{ + } else { DBG_871X("error!psta->sleepq_len =%d\n", psta->sleepq_len); psta->sleepq_len = 0; } @@ -1355,7 +1355,7 @@ sint validate_recv_data_frame(struct adapter *adapter, union recv_frame *precv_f if (pattrib->priority != 0 && pattrib->priority != 3) adapter->recvpriv.bIsAnyNonBEPkts = true; - } else{ + } else { pattrib->priority = 0; pattrib->hdrlen = pattrib->to_fr_ds == 3 ? 
30 : 24; } @@ -1386,7 +1386,7 @@ sint validate_recv_data_frame(struct adapter *adapter, union recv_frame *precv_f RT_TRACE(_module_rtl871x_recv_c_, _drv_info_, ("\n pattrib->encrypt =%d\n", pattrib->encrypt)); SET_ICE_IV_LEN(pattrib->iv_len, pattrib->icv_len, pattrib->encrypt); - } else{ + } else { pattrib->encrypt = 0; pattrib->iv_len = pattrib->icv_len = 0; } @@ -1847,7 +1847,7 @@ union recv_frame *recvframe_chk_defrag(struct adapter *padapter, union recv_fram prtnframe = NULL; - } else{ + } else { /* can't find this ta's defrag_queue, so free this recv_frame */ rtw_free_recvframe(precv_frame, pfree_recv_queue); prtnframe = NULL; @@ -1870,7 +1870,7 @@ union recv_frame *recvframe_chk_defrag(struct adapter *padapter, union recv_fram precv_frame = recvframe_defrag(padapter, pdefrag_q); prtnframe = precv_frame; - } else{ + } else { /* can't find this ta's defrag_queue, so free this recv_frame */ rtw_free_recvframe(precv_frame, pfree_recv_queue); prtnframe = NULL; @@ -2190,7 +2190,7 @@ int recv_indicatepkts_in_order(struct adapter *padapter, struct recv_reorder_ctr if (amsdu_to_msdu(padapter, prframe) != _SUCCESS) rtw_free_recvframe(prframe, &precvpriv->free_recv_queue); - } else{ + } else { /* error condition; */ } @@ -2198,7 +2198,7 @@ int recv_indicatepkts_in_order(struct adapter *padapter, struct recv_reorder_ctr /* Update local variables. */ bPktInBuf = false; - } else{ + } else { bPktInBuf = true; break; } @@ -2333,7 +2333,7 @@ int recv_indicatepkt_reorder(struct adapter *padapter, union recv_frame *prframe if (recv_indicatepkts_in_order(padapter, preorder_ctrl, false) == true) { _set_timer(&preorder_ctrl->reordering_ctrl_timer, REORDER_WAIT_TIME); spin_unlock_bh(&ppending_recvframe_queue->lock); - } else{ + } else { spin_unlock_bh(&ppending_recvframe_queue->lock); del_timer_sync(&preorder_ctrl->reordering_ctrl_timer); } @@ -2410,7 +2410,7 @@ int process_recv_indicatepkts(struct adapter *padapter, union recv_frame *prfram rtw_recv_indicatepkt(padapter, prframe); - } else{ + } else { RT_TRACE(_module_rtl871x_recv_c_, _drv_notice_, ("@@@@ process_recv_indicatepkts- recv_func free_indicatepkt\n")); RT_TRACE(_module_rtl871x_recv_c_, _drv_notice_, ("recv_func:bDriverStopped(%d) OR bSurpriseRemoved(%d)", padapter->bDriverStopped, padapter->bSurpriseRemoved)); diff --git a/drivers/staging/rtl8723bs/core/rtw_security.c b/drivers/staging/rtl8723bs/core/rtw_security.c index 240818b4a2c9..979056c3d397 100644 --- a/drivers/staging/rtl8723bs/core/rtw_security.c +++ b/drivers/staging/rtl8723bs/core/rtw_security.c @@ -252,7 +252,7 @@ void rtw_wep_encrypt(struct adapter *padapter, u8 *pxmitframe) arcfour_encrypt(&mycontext, payload, payload, length); arcfour_encrypt(&mycontext, payload+length, crc, 4); - } else{ + } else { length = pxmitpriv->frag_len-pattrib->hdrlen-pattrib->iv_len-pattrib->icv_len; *((__le32 *)crc) = getcrc32(payload, length); arcfour_init(&mycontext, wepkey, 3+keylength); @@ -821,7 +821,7 @@ u32 rtw_tkip_decrypt(struct adapter *padapter, u8 *precvframe) /* prwskey = psecuritypriv->dot118021XGrpKey[psecuritypriv->dot118021XGrpKeyid].skey; */ prwskey = psecuritypriv->dot118021XGrpKey[prxattrib->key_index].skey; prwskeylen = 16; - } else{ + } else { prwskey = &stainfo->dot118021x_UncstKey.skey[0]; prwskeylen = 16; } @@ -1117,7 +1117,7 @@ static void aes128k128d(u8 *key, u8 *data, u8 *ciphertext) byte_sub(ciphertext, intermediatea); shift_row(intermediatea, intermediateb); xor_128(intermediateb, round_key, ciphertext); - } else{ /* 1 - 9 */ + } else { /* 1 - 9 */ byte_sub(ciphertext, 
intermediatea); shift_row(intermediatea, intermediateb); mix_column(&intermediateb[0], &intermediatea[0]); diff --git a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c index 82716fd5c6f0..b9d36db762a9 100644 --- a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c +++ b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c @@ -206,7 +206,7 @@ struct sta_info *rtw_alloc_stainfo(struct sta_priv *pstapriv, u8 *hwaddr) spin_unlock_bh(&(pstapriv->sta_hash_lock)); psta = NULL; return psta; - } else{ + } else { psta = LIST_CONTAINOR(get_next(&pfree_sta_queue->queue), struct sta_info, list); list_del_init(&(psta->list)); diff --git a/drivers/staging/rtl8723bs/core/rtw_wlan_util.c b/drivers/staging/rtl8723bs/core/rtw_wlan_util.c index 2c65af319a60..2c172966ea60 100644 --- a/drivers/staging/rtl8723bs/core/rtw_wlan_util.c +++ b/drivers/staging/rtl8723bs/core/rtw_wlan_util.c @@ -889,7 +889,7 @@ void WMMOnAssocRsp(struct adapter *padapter) TXOP = 0x2f; acParm = AIFS | (ECWMin << 8) | (ECWMax << 12) | (TXOP << 16); rtw_hal_set_hwreg(padapter, HW_VAR_AC_PARAM_VO, (u8 *)(&acParm)); - } else{ + } else { edca[0] = edca[1] = edca[2] = edca[3] = 0; for (i = 0; i < 4; i++) { @@ -1028,7 +1028,7 @@ static void bwmode_update_check(struct adapter *padapter, struct ndis_80211_var_ new_ch_offset = HAL_PRIME_CHNL_OFFSET_DONT_CARE; break; } - } else{ + } else { new_bwmode = CHANNEL_WIDTH_20; new_ch_offset = HAL_PRIME_CHNL_OFFSET_DONT_CARE; } @@ -1060,7 +1060,7 @@ static void bwmode_update_check(struct adapter *padapter, struct ndis_80211_var_ /* bwmode */ psta->bw_mode = pmlmeext->cur_bwmode; phtpriv_sta->ch_offset = pmlmeext->cur_ch_offset; - } else{ + } else { psta->bw_mode = CHANNEL_WIDTH_20; phtpriv_sta->ch_offset = HAL_PRIME_CHNL_OFFSET_DONT_CARE; } @@ -1094,7 +1094,7 @@ void HT_caps_handler(struct adapter *padapter, struct ndis_80211_var_ie *pIE) /* Commented by Albert 2010/07/12 */ /* Got the endian issue here. 
*/ pmlmeinfo->HT_caps.u.HT_cap[i] &= (pIE->data[i]); - } else{ + } else { /* modify from fw by Thomas 2010/11/17 */ if ((pmlmeinfo->HT_caps.u.HT_cap_element.AMPDU_para & 0x3) > (pIE->data[i] & 0x3)) max_AMPDU_len = (pIE->data[i] & 0x3); @@ -1191,7 +1191,7 @@ void HTOnAssocRsp(struct adapter *padapter) if ((pmlmeinfo->HT_info_enable) && (pmlmeinfo->HT_caps_enable)) { pmlmeinfo->HT_enable = 1; - } else{ + } else { pmlmeinfo->HT_enable = 0; /* set_channel_bwmode(padapter, pmlmeext->cur_channel, pmlmeext->cur_ch_offset, pmlmeext->cur_bwmode); */ return; @@ -1239,7 +1239,7 @@ void VCS_update(struct adapter *padapter, struct sta_info *psta) if (pregpriv->vcs_type == 1) { /* 1:RTS/CTS 2:CTS to self */ psta->rtsen = 1; psta->cts2self = 0; - } else{ + } else { psta->rtsen = 0; psta->cts2self = 1; } @@ -1251,11 +1251,11 @@ void VCS_update(struct adapter *padapter, struct sta_info *psta) if (pregpriv->vcs_type == 1) { psta->rtsen = 1; psta->cts2self = 0; - } else{ + } else { psta->rtsen = 0; psta->cts2self = 1; } - } else{ + } else { psta->rtsen = 0; psta->cts2self = 0; } @@ -1751,7 +1751,7 @@ void update_capinfo(struct adapter *Adapter, u16 updateCap) pmlmeinfo->preamble_mode = PREAMBLE_SHORT; rtw_hal_set_hwreg(Adapter, HW_VAR_ACK_PREAMBLE, (u8 *)&ShortPreamble); } - } else{ + } else { /* Long Preamble */ if (pmlmeinfo->preamble_mode != PREAMBLE_LONG) { /* PREAMBLE_SHORT or PREAMBLE_AUTO */ ShortPreamble = false; @@ -1804,7 +1804,7 @@ void update_wireless_mode(struct adapter *padapter) network_type = WIRELESS_11_5N; network_type |= WIRELESS_11A; - } else{ + } else { if (pmlmeinfo->VHT_enable) network_type = WIRELESS_11AC; else if (pmlmeinfo->HT_enable) @@ -1839,7 +1839,7 @@ void update_sta_basic_rate(struct sta_info *psta, u8 wireless_mode) /* Only B, B/G, and B/G/N AP could use CCK rate */ memcpy(psta->bssrateset, rtw_basic_rate_cck, 4); psta->bssratelen = 4; - } else{ + } else { memcpy(psta->bssrateset, rtw_basic_rate_ofdm, 3); psta->bssratelen = 3; } @@ -2038,7 +2038,7 @@ void rtw_alloc_macid(struct adapter *padapter, struct sta_info *psta) if (i > (NUM_STA-1)) { psta->mac_id = NUM_STA; DBG_871X(" no room for more MACIDs\n"); - } else{ + } else { psta->mac_id = i; DBG_871X("%s = %d\n", __func__, psta->mac_id); } @@ -2147,7 +2147,7 @@ int rtw_set_gpio_output_value(struct net_device *netdev, int gpio_num, bool isH DBG_871X("%s Set gpio %x[%d]=%d\n", __func__, REG_GPIO_PIN_CTRL+1, gpio_num, isHigh); res = 0; - } else{ + } else { DBG_871X("%s The gpio is input, not be set!\n", __func__); res = -1; } diff --git a/drivers/staging/rtl8723bs/core/rtw_xmit.c b/drivers/staging/rtl8723bs/core/rtw_xmit.c index edb678190b4b..625e67f39889 100644 --- a/drivers/staging/rtl8723bs/core/rtw_xmit.c +++ b/drivers/staging/rtl8723bs/core/rtw_xmit.c @@ -45,8 +45,8 @@ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter) spin_lock_init(&pxmitpriv->lock); spin_lock_init(&pxmitpriv->lock_sctx); - sema_init(&pxmitpriv->xmit_sema, 0); - sema_init(&pxmitpriv->terminate_xmitthread_sema, 0); + init_completion(&pxmitpriv->xmit_comp); + init_completion(&pxmitpriv->terminate_xmitthread_comp); /* Please insert all the queue initializaiton using _rtw_init_queue below @@ -385,7 +385,7 @@ static void update_attrib_vcs_info(struct adapter *padapter, struct xmit_frame * if (pmlmeext->cur_wireless_mode < WIRELESS_11_24N || padapter->registrypriv.wifi_spec) { if (sz > padapter->registrypriv.rts_thresh) pattrib->vcs_mode = RTS_CTS; - else{ + else { if (pattrib->rtsen) pattrib->vcs_mode = RTS_CTS; else if 
(pattrib->cts2self) @@ -393,7 +393,7 @@ static void update_attrib_vcs_info(struct adapter *padapter, struct xmit_frame * else pattrib->vcs_mode = NONE_VCS; } - } else{ + } else { while (true) { /* IOT action */ if ((pmlmeinfo->assoc_AP_vendor == HT_IOT_PEER_ATHEROS) && (pattrib->ampdu_en == true) && @@ -523,7 +523,7 @@ static s32 update_attrib_sec_info(struct adapter *padapter, struct pkt_attrib *p res = _FAIL; goto exit; } - } else{ + } else { GET_ENCRY_ALGO(psecuritypriv, psta, pattrib->encrypt, bmcast); switch (psecuritypriv->dot11AuthAlgrthm) { @@ -833,7 +833,7 @@ static s32 update_attrib(struct adapter *padapter, _pkt *pkt, struct pkt_attrib if (check_fwstate(pmlmepriv, WIFI_AP_STATE|WIFI_ADHOC_STATE|WIFI_ADHOC_MASTER_STATE)) { if (pattrib->qos_en) set_qos(&pktfile, pattrib); - } else{ + } else { if (pqospriv->qos_option) { set_qos(&pktfile, pattrib); @@ -949,7 +949,7 @@ static s32 xmitframe_addmic(struct adapter *padapter, struct xmit_frame *pxmitfr length = pattrib->last_txcmdsz-pattrib->hdrlen-pattrib->iv_len-((pattrib->bswenc) ? pattrib->icv_len : 0); rtw_secmicappend(&micdata, payload, length); payload = payload+length; - } else{ + } else { length = pxmitpriv->frag_len-pattrib->hdrlen-pattrib->iv_len-((pattrib->bswenc) ? pattrib->icv_len : 0); rtw_secmicappend(&micdata, payload, length); payload = payload+length+pattrib->icv_len; @@ -1134,7 +1134,7 @@ s32 rtw_make_wlanhdr(struct adapter *padapter, u8 *hdr, struct pkt_attrib *pattr psta->BA_starting_seqctrl[pattrib->priority & 0x0f] = (tx_seq+1)&0xfff; pattrib->ampdu_en = true;/* AGG EN */ - } else{ + } else { /* DBG_871X("tx ampdu over run\n"); */ psta->BA_starting_seqctrl[pattrib->priority & 0x0f] = (pattrib->seqnum+1)&0xfff; pattrib->ampdu_en = true;/* AGG EN */ @@ -1144,7 +1144,7 @@ s32 rtw_make_wlanhdr(struct adapter *padapter, u8 *hdr, struct pkt_attrib *pattr } } - } else{ + } else { } @@ -1570,7 +1570,7 @@ void rtw_update_protection(struct adapter *padapter, u8 *ie, uint ie_len) perp = rtw_get_ie(ie, _ERPINFO_IE_, &erp_len, ie_len); if (perp == NULL) pxmitpriv->vcs = NONE_VCS; - else{ + else { protection = (*(perp + 2)) & BIT(1); if (protection) { if (pregistrypriv->vcs_type == RTS_CTS) @@ -1810,7 +1810,7 @@ s32 rtw_free_xmitbuf(struct xmit_priv *pxmitpriv, struct xmit_buf *pxmitbuf) if (pxmitbuf->buf_tag == XMITBUF_CMD) { } else if (pxmitbuf->buf_tag == XMITBUF_MGNT) { rtw_free_xmitbuf_ext(pxmitpriv, pxmitbuf); - } else{ + } else { spin_lock_irqsave(&pfree_xmitbuf_queue->lock, irqL); list_del_init(&pxmitbuf->list); @@ -2684,7 +2684,7 @@ void wakeup_sta_to_xmit(struct adapter *padapter, struct sta_info *psta) if (psta->sleepq_ac_len > 0) { pxmitframe->attrib.mdata = 1; pxmitframe->attrib.eosp = 0; - } else{ + } else { pxmitframe->attrib.mdata = 0; pxmitframe->attrib.eosp = 1; } @@ -2839,7 +2839,7 @@ void xmit_delivery_enabled_frames(struct adapter *padapter, struct sta_info *pst if (psta->sleepq_ac_len > 0) { pxmitframe->attrib.mdata = 1; pxmitframe->attrib.eosp = 0; - } else{ + } else { pxmitframe->attrib.mdata = 0; pxmitframe->attrib.eosp = 1; } @@ -2879,7 +2879,7 @@ void enqueue_pending_xmitbuf( list_add_tail(&pxmitbuf->list, get_list_head(pqueue)); spin_unlock_bh(&pqueue->lock); - up(&(pri_adapter->xmitpriv.xmit_sema)); + complete(&(pri_adapter->xmitpriv.xmit_comp)); } void enqueue_pending_xmitbuf_to_head( @@ -2998,7 +2998,7 @@ int rtw_xmit_thread(void *context) flush_signals_thread(); } while (_SUCCESS == err); - up(&padapter->xmitpriv.terminate_xmitthread_sema); + 
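/*
 * Editor's aside on the thread-teardown hunks here: a one-shot event such
 * as "the xmit thread has exited" is exactly what struct completion is
 * for, which is why the xmit_sema/terminate_xmitthread_sema pair becomes
 * xmit_comp/terminate_xmitthread_comp. The waker calls complete(), the
 * sleeper calls wait_for_completion(); unlike a counting semaphore, the
 * primitive documents the one-way signalling intent. A self-contained
 * sketch of the stop handshake; the example_* names are hypothetical
 * stand-ins, and a production version would also need proper ordering
 * (e.g. kthread_should_stop()) around the stop flag:
 */
#include <linux/completion.h>
#include <linux/types.h>

struct example_ctx {
	struct completion wake;		/* stands in for xmit_comp */
	struct completion terminated;	/* stands in for terminate_xmitthread_comp */
	bool stop;
};

static int example_xmit_thread(void *data)
{
	struct example_ctx *ctx = data;

	while (!ctx->stop) {
		/* Sleep until woken for work or for a stop request. */
		if (wait_for_completion_interruptible(&ctx->wake))
			break;
		/* ... drain the pending xmit queue here ... */
	}
	complete(&ctx->terminated);	/* tell the stopper we are done */
	return 0;
}

static void example_stop_xmit_thread(struct example_ctx *ctx)
{
	ctx->stop = true;
	complete(&ctx->wake);			/* kick the thread awake */
	wait_for_completion(&ctx->terminated);	/* wait for its exit */
}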
complete(&padapter->xmitpriv.terminate_xmitthread_comp); thread_exit(); } @@ -3033,7 +3033,7 @@ int rtw_sctx_wait(struct submit_ctx *sctx, const char *msg) return ret; } -static bool rtw_sctx_chk_waring_status(int status) +static bool rtw_sctx_chk_warning_status(int status) { switch (status) { case RTW_SCTX_DONE_UNKNOWN: @@ -3051,7 +3051,7 @@ static bool rtw_sctx_chk_waring_status(int status) void rtw_sctx_done_err(struct submit_ctx **sctx, int status) { if (*sctx) { - if (rtw_sctx_chk_waring_status(status)) + if (rtw_sctx_chk_warning_status(status)) DBG_871X("%s status:%d\n", __func__, status); (*sctx)->status = status; complete(&((*sctx)->done)); diff --git a/drivers/staging/rtl8723bs/hal/hal_btcoex.c b/drivers/staging/rtl8723bs/hal/hal_btcoex.c index 14284b1867f3..2b43f6d3c762 100644 --- a/drivers/staging/rtl8723bs/hal/hal_btcoex.c +++ b/drivers/staging/rtl8723bs/hal/hal_btcoex.c @@ -259,7 +259,7 @@ static void halbtcoutsrc_AggregationCheck(PBTC_COEXIST pBtCoexist) if (pBtCoexist->btInfo.bRejectAggPkt) rtw_btcoex_RejectApAggregatedPacket(padapter, true); - else{ + else { if (pBtCoexist->btInfo.bPreBtCtrlAggBufSize != pBtCoexist->btInfo.bBtCtrlAggBufSize){ @@ -1209,7 +1209,7 @@ void EXhalbtcoutsrc_SpecialPacketNotify(PBTC_COEXIST pBtCoexist, u8 pktType) packetType = BTC_PACKET_EAPOL; else if (PACKET_ARP == pktType) packetType = BTC_PACKET_ARP; - else{ + else { packetType = BTC_PACKET_UNKNOWN; return; } diff --git a/drivers/staging/rtl8723bs/hal/odm_EdcaTurboCheck.c b/drivers/staging/rtl8723bs/hal/odm_EdcaTurboCheck.c index 0e674f39ef03..b7ebce7a6ff9 100644 --- a/drivers/staging/rtl8723bs/hal/odm_EdcaTurboCheck.c +++ b/drivers/staging/rtl8723bs/hal/odm_EdcaTurboCheck.c @@ -39,16 +39,16 @@ void ODM_EdcaTurboInit(void *pDM_VOID) Adapter->recvpriv.bIsAnyNonBEPkts = false; ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, - ("Orginial VO PARAM: 0x%x\n", + ("Original VO PARAM: 0x%x\n", rtw_read32(pDM_Odm->Adapter, ODM_EDCA_VO_PARAM))); ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, - ("Orginial VI PARAM: 0x%x\n", + ("Original VI PARAM: 0x%x\n", rtw_read32(pDM_Odm->Adapter, ODM_EDCA_VI_PARAM))); ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, - ("Orginial BE PARAM: 0x%x\n", + ("Original BE PARAM: 0x%x\n", rtw_read32(pDM_Odm->Adapter, ODM_EDCA_BE_PARAM))); ODM_RT_TRACE(pDM_Odm, ODM_COMP_EDCA_TURBO, ODM_DBG_LOUD, - ("Orginial BK PARAM: 0x%x\n", + ("Original BK PARAM: 0x%x\n", rtw_read32(pDM_Odm->Adapter, ODM_EDCA_BK_PARAM))); } /* ODM_InitEdcaTurbo */ diff --git a/drivers/staging/rtl8723bs/hal/rtl8723b_hal_init.c b/drivers/staging/rtl8723bs/hal/rtl8723b_hal_init.c index c7e55618b9a8..85fd12cca4ae 100644 --- a/drivers/staging/rtl8723bs/hal/rtl8723b_hal_init.c +++ b/drivers/staging/rtl8723bs/hal/rtl8723b_hal_init.c @@ -4502,8 +4502,8 @@ void rtl8723b_stop_thread(struct adapter *padapter) /* stop xmit_buf_thread */ if (xmitpriv->SdioXmitThread) { - up(&xmitpriv->SdioXmitSema); - down(&xmitpriv->SdioXmitTerminateSema); + complete(&xmitpriv->SdioXmitStart); + wait_for_completion(&xmitpriv->SdioXmitTerminate); xmitpriv->SdioXmitThread = NULL; } #endif diff --git a/drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c b/drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c index 85aba8a503cd..26742960ed65 100644 --- a/drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c +++ b/drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c @@ -262,7 +262,7 @@ static void rtl8723bs_recv_tasklet(void *priv) while (ptr < precvbuf->ptail) { precvframe = try_alloc_recvframe(precvpriv, precvbuf); - if(!precvframe) + if 
(!precvframe) return; /* rx desc parsing */ @@ -271,8 +271,8 @@ static void rtl8723bs_recv_tasklet(void *priv) pattrib = &precvframe->u.hdr.attrib; - if(rx_crc_err(precvpriv, p_hal_data, - pattrib, precvframe)) + if (rx_crc_err(precvpriv, p_hal_data, + pattrib, precvframe)) break; rx_report_sz = RXDESC_SIZE + pattrib->drvinfo_sz; @@ -280,8 +280,8 @@ static void rtl8723bs_recv_tasklet(void *priv) pattrib->shift_sz + pattrib->pkt_len; - if(pkt_exceeds_tail(precvpriv, ptr + pkt_offset, - precvbuf->ptail, precvframe)) + if (pkt_exceeds_tail(precvpriv, ptr + pkt_offset, + precvbuf->ptail, precvframe)) break; if ((pattrib->crc_err) || (pattrib->icv_err)) { diff --git a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c index 10b3f9733bad..69c4db5b5b0c 100644 --- a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c +++ b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c @@ -148,7 +148,7 @@ s32 rtl8723bs_xmit_buf_handler(struct adapter *padapter) pxmitpriv = &padapter->xmitpriv; - if (down_interruptible(&pxmitpriv->xmit_sema)) { + if (wait_for_completion_interruptible(&pxmitpriv->xmit_comp)) { DBG_871X_LEVEL(_drv_emerg_, "%s: down SdioXmitBufSema fail!\n", __func__); return _FAIL; } @@ -312,7 +312,7 @@ static s32 xmit_xmitframes(struct adapter *padapter, struct xmit_priv *pxmitpriv DBG_871X_LEVEL(_drv_err_, "%s: xmit_buf is not enough!\n", __func__); #endif err = -2; - up(&(pxmitpriv->xmit_sema)); + complete(&(pxmitpriv->xmit_comp)); break; } k = 0; @@ -420,8 +420,8 @@ static s32 rtl8723bs_xmit_handler(struct adapter *padapter) pxmitpriv = &padapter->xmitpriv; - if (down_interruptible(&pxmitpriv->SdioXmitSema)) { - DBG_871X_LEVEL(_drv_emerg_, "%s: down sema fail!\n", __func__); + if (wait_for_completion_interruptible(&pxmitpriv->SdioXmitStart)) { + DBG_871X_LEVEL(_drv_emerg_, "%s: SdioXmitStart fail!\n", __func__); return _FAIL; } @@ -490,10 +490,6 @@ int rtl8723bs_xmit_thread(void *context) DBG_871X("start "FUNC_ADPT_FMT"\n", FUNC_ADPT_ARG(padapter)); - /* For now, no one would down sema to check thread is running, */ - /* so mark this temporary, Lucas@20130820 */ -/* up(&pxmitpriv->SdioXmitTerminateSema); */ - do { ret = rtl8723bs_xmit_handler(padapter); if (signal_pending(current)) { @@ -501,7 +497,7 @@ int rtl8723bs_xmit_thread(void *context) } } while (_SUCCESS == ret); - up(&pxmitpriv->SdioXmitTerminateSema); + complete(&pxmitpriv->SdioXmitTerminate); RT_TRACE(_module_hal_xmit_c_, _drv_notice_, ("-%s\n", __func__)); @@ -590,7 +586,7 @@ s32 rtl8723bs_hal_xmit( return true; } - up(&pxmitpriv->SdioXmitSema); + complete(&pxmitpriv->SdioXmitStart); return false; } @@ -611,7 +607,7 @@ s32 rtl8723bs_hal_xmitframe_enqueue( #ifdef CONFIG_SDIO_TX_TASKLET tasklet_hi_schedule(&pxmitpriv->xmit_tasklet); #else - up(&pxmitpriv->SdioXmitSema); + complete(&pxmitpriv->SdioXmitStart); #endif } @@ -634,8 +630,8 @@ s32 rtl8723bs_init_xmit_priv(struct adapter *padapter) phal = GET_HAL_DATA(padapter); spin_lock_init(&phal->SdioTxFIFOFreePageLock); - sema_init(&xmitpriv->SdioXmitSema, 0); - sema_init(&xmitpriv->SdioXmitTerminateSema, 0); + init_completion(&xmitpriv->SdioXmitStart); + init_completion(&xmitpriv->SdioXmitTerminate); return _SUCCESS; } diff --git a/drivers/staging/rtl8723bs/hal/sdio_ops.c b/drivers/staging/rtl8723bs/hal/sdio_ops.c index d6b93e1f78d8..3fee34484577 100644 --- a/drivers/staging/rtl8723bs/hal/sdio_ops.c +++ b/drivers/staging/rtl8723bs/hal/sdio_ops.c @@ -1023,7 +1023,7 @@ void sd_int_dpc(struct adapter *adapter) u8 freepage[4]; _sdio_local_read(adapter, 
SDIO_REG_FREE_TXPG, 4, freepage); - up(&(adapter->xmitpriv.xmit_sema)); + complete(&(adapter->xmitpriv.xmit_comp)); } if (hal->sdio_hisr & SDIO_HISR_CPWM1) { diff --git a/drivers/staging/rtl8723bs/include/osdep_service_linux.h b/drivers/staging/rtl8723bs/include/osdep_service_linux.h index 58d1e1019241..2f1b51e614fb 100644 --- a/drivers/staging/rtl8723bs/include/osdep_service_linux.h +++ b/drivers/staging/rtl8723bs/include/osdep_service_linux.h @@ -22,7 +22,6 @@ #include #include #include - #include <linux/semaphore.h> #include #include #include @@ -41,7 +40,6 @@ #include #include - typedef struct semaphore _sema; typedef spinlock_t _lock; typedef struct mutex _mutex; typedef struct timer_list _timer; diff --git a/drivers/staging/rtl8723bs/include/rtw_cmd.h b/drivers/staging/rtl8723bs/include/rtw_cmd.h index 1530d0ea1d51..299b55538788 100644 --- a/drivers/staging/rtl8723bs/include/rtw_cmd.h +++ b/drivers/staging/rtl8723bs/include/rtw_cmd.h @@ -7,6 +7,7 @@ #ifndef __RTW_CMD_H_ #define __RTW_CMD_H_ +#include <linux/completion.h> #define C2H_MEM_SZ (16*1024) @@ -27,7 +28,6 @@ u8 *rsp; u32 rspsz; struct submit_ctx *sctx; - /* _sema cmd_sem; */ struct list_head list; }; @@ -38,9 +38,8 @@ }; struct cmd_priv { - _sema cmd_queue_sema; - /* _sema cmd_done_sema; */ - _sema terminate_cmdthread_sema; + struct completion cmd_queue_comp; + struct completion terminate_cmdthread_comp; struct __queue cmd_queue; u8 cmd_seq; u8 *cmd_buf; /* shall be non-paged, and 4 bytes aligned */ @@ -543,7 +542,7 @@ struct Tx_Beacon_param mac[0] == 0 ==> CMD mode, return H2C_SUCCESS. - The following condition must be ture under CMD mode + The following condition must be true under CMD mode mac[1] == mac[4], mac[2] == mac[3], mac[0]=mac[5]= 0; s0 == 0x1234, s1 == 0xabcd, w0 == 0x78563412, w1 == 0x5aa5def7; s2 == (b1 << 8 | b0); diff --git a/drivers/staging/rtl8723bs/include/rtw_io.h b/drivers/staging/rtl8723bs/include/rtw_io.h index 4f8be55da65d..99d104b3647a 100644 --- a/drivers/staging/rtl8723bs/include/rtw_io.h +++ b/drivers/staging/rtl8723bs/include/rtw_io.h @@ -115,7 +115,6 @@ struct io_req { u32 command; u32 status; u8 *pbuf; - _sema sema; void (*_async_io_callback)(struct adapter *padater, struct io_req *pio_req, u8 *cnxt); u8 *cnxt; diff --git a/drivers/staging/rtl8723bs/include/rtw_mlme_ext.h b/drivers/staging/rtl8723bs/include/rtw_mlme_ext.h index 4c3141882143..f6eabad4bbe0 100644 --- a/drivers/staging/rtl8723bs/include/rtw_mlme_ext.h +++ b/drivers/staging/rtl8723bs/include/rtw_mlme_ext.h @@ -648,7 +648,6 @@ void report_survey_event(struct adapter *padapter, union recv_frame *precv_frame void report_surveydone_event(struct adapter *padapter); void report_del_sta_event(struct adapter *padapter, unsigned char* MacAddr, unsigned short reason); void report_add_sta_event(struct adapter *padapter, unsigned char* MacAddr, int cam_idx); -bool rtw_port_switch_chk(struct adapter *adapter); void report_wmm_edca_update(struct adapter *padapter); void beacon_timing_control(struct adapter *padapter); diff --git a/drivers/staging/rtl8723bs/include/rtw_mp.h b/drivers/staging/rtl8723bs/include/rtw_mp.h index 839084733201..bb3970d58573 100644 --- a/drivers/staging/rtl8723bs/include/rtw_mp.h +++ b/drivers/staging/rtl8723bs/include/rtw_mp.h @@ -62,7 +62,6 @@ typedef struct _MPT_CONTEXT /* Indicate if the driver is unloading or unloaded. 
*/ bool bMptDrvUnload; - _sema MPh2c_Sema; _timer MPh2c_timeout_timer; /* Event used to sync H2c for BT control */ diff --git a/drivers/staging/rtl8723bs/include/rtw_pwrctrl.h b/drivers/staging/rtl8723bs/include/rtw_pwrctrl.h index 72df6cffe62e..e2a4c680125f 100644 --- a/drivers/staging/rtl8723bs/include/rtw_pwrctrl.h +++ b/drivers/staging/rtl8723bs/include/rtw_pwrctrl.h @@ -7,6 +7,7 @@ #ifndef __RTW_PWRCTRL_H_ #define __RTW_PWRCTRL_H_ +#include <linux/mutex.h> #define FW_PWR0 0 #define FW_PWR1 1 @@ -93,10 +94,6 @@ struct reportpwrstate_parm { unsigned short rsvd; }; - -typedef _sema _pwrlock; - - #define LPS_DELAY_TIME 1*HZ /* 1 sec */ #define EXE_PWR_NONE 0x01 @@ -207,8 +204,7 @@ typedef struct pno_scan_info struct pwrctrl_priv { - _pwrlock lock; - _pwrlock check_32k_lock; + struct mutex lock; volatile u8 rpwm; /* requested power state for fw */ volatile u8 cpwm; /* fw current power state. updated when 1. read from HCPWM 2. driver lowers power level */ volatile u8 tog; /* toggling */ diff --git a/drivers/staging/rtl8723bs/include/rtw_xmit.h b/drivers/staging/rtl8723bs/include/rtw_xmit.h index a75b668d09a6..1b38b9182b31 100644 --- a/drivers/staging/rtl8723bs/include/rtw_xmit.h +++ b/drivers/staging/rtl8723bs/include/rtw_xmit.h @@ -7,6 +7,7 @@ #ifndef _RTW_XMIT_H_ #define _RTW_XMIT_H_ +#include <linux/completion.h> #define MAX_XMITBUF_SZ (20480) /* 20k */ @@ -364,8 +365,8 @@ struct xmit_priv { _lock lock; - _sema xmit_sema; - _sema terminate_xmitthread_sema; + struct completion xmit_comp; + struct completion terminate_xmitthread_comp; /* struct __queue blk_strms[MAX_NUMBLKS]; */ struct __queue be_pending; @@ -419,8 +420,8 @@ struct xmit_priv { struct tasklet_struct xmit_tasklet; #else void *SdioXmitThread; - _sema SdioXmitSema; - _sema SdioXmitTerminateSema; + struct completion SdioXmitStart; + struct completion SdioXmitTerminate; #endif /* CONFIG_SDIO_TX_TASKLET */ struct __queue free_xmitbuf_queue; diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c index b8631baf128d..8fb03efd588b 100644 --- a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c +++ b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c @@ -2975,7 +2975,7 @@ static int rtw_dbg_port(struct net_device *dev, struct registry_priv *pregpriv = &padapter->registrypriv; /* 0: disable, bit(0):enable 2.4g, bit(1):enable 5g, 0x3: enable both 2.4g and 5g */ /* default is set to enable 2.4GHZ for IOT issue with bufflao's AP at 5GHZ */ - if (pregpriv && (extra_arg == 0 || extra_arg == 1|| extra_arg == 2 || extra_arg == 3)) { + if (extra_arg == 0 || extra_arg == 1 || extra_arg == 2 || extra_arg == 3) { pregpriv->rx_stbc = extra_arg; DBG_871X("set rx_stbc =%d\n", pregpriv->rx_stbc); } else @@ -2987,7 +2987,7 @@ static int rtw_dbg_port(struct net_device *dev, { struct registry_priv *pregpriv = &padapter->registrypriv; /* 0: disable, 0x1:enable (but wifi_spec should be 0), 0x2: force enable (don't care wifi_spec) */ - if (pregpriv && extra_arg < 3) { + if (extra_arg < 3) { pregpriv->ampdu_enable = extra_arg; DBG_871X("set ampdu_enable =%d\n", pregpriv->ampdu_enable); } else diff --git a/drivers/staging/rtl8723bs/os_dep/os_intfs.c b/drivers/staging/rtl8723bs/os_dep/os_intfs.c index 181642358e3f..143e3f9b31aa 100644 --- a/drivers/staging/rtl8723bs/os_dep/os_intfs.c +++ b/drivers/staging/rtl8723bs/os_dep/os_intfs.c @@ -585,7 +585,7 @@ u32 rtw_start_drv_threads(struct adapter *padapter) if (IS_ERR(padapter->cmdThread)) _status = _FAIL; else - down(&padapter->cmdpriv.terminate_cmdthread_sema); /* wait for cmd_thread to run */ + 
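/*
 * Editor's aside on the os_intfs.c hunk at this point: note that the
 * completion named for thread *termination* also serves as a startup
 * handshake: the "wait for cmd_thread to run" wait below returns only
 * once the freshly created command thread has signalled it. A sketch of
 * that startup handshake with completions; the example_* names are
 * hypothetical, not from the patch:
 */
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/kthread.h>

struct example_cmdpriv {
	struct completion started;
};

static int example_cmd_thread(void *data)
{
	struct example_cmdpriv *priv = data;

	complete(&priv->started);	/* creator may proceed; we are live */
	/* ... command processing loop would run here ... */
	return 0;
}

static int example_start_cmd_thread(struct example_cmdpriv *priv)
{
	struct task_struct *task;

	init_completion(&priv->started);
	task = kthread_run(example_cmd_thread, priv, "example_cmd");
	if (IS_ERR(task))
		return PTR_ERR(task);
	wait_for_completion(&priv->started);	/* block until thread runs */
	return 0;
}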
wait_for_completion(&padapter->cmdpriv.terminate_cmdthread_comp); /* wait for cmd_thread to run */ rtw_hal_start_thread(padapter); return _status; @@ -598,8 +598,8 @@ void rtw_stop_drv_threads (struct adapter *padapter) rtw_stop_cmd_thread(padapter); /* Below is to termindate tx_thread... */ - up(&padapter->xmitpriv.xmit_sema); - down(&padapter->xmitpriv.terminate_xmitthread_sema); + complete(&padapter->xmitpriv.xmit_comp); + wait_for_completion(&padapter->xmitpriv.terminate_xmitthread_comp); RT_TRACE(_module_os_intfs_c_, _drv_info_, ("\n drv_halt: rtw_xmit_thread can be terminated !\n")); rtw_hal_stop_thread(padapter); diff --git a/drivers/staging/rtl8723bs/os_dep/xmit_linux.c b/drivers/staging/rtl8723bs/os_dep/xmit_linux.c index 2cf903c66854..4e4e565d991c 100644 --- a/drivers/staging/rtl8723bs/os_dep/xmit_linux.c +++ b/drivers/staging/rtl8723bs/os_dep/xmit_linux.c @@ -101,7 +101,7 @@ void rtw_os_xmit_schedule(struct adapter *padapter) return; if (!list_empty(&padapter->xmitpriv.pending_xmitbuf_queue.queue)) - up(&pri_adapter->xmitpriv.xmit_sema); + complete(&pri_adapter->xmitpriv.xmit_comp); } static void rtw_check_xmit_resource(struct adapter *padapter, _pkt *pkt) diff --git a/drivers/staging/rtlwifi/base.c b/drivers/staging/rtlwifi/base.c index 50b1c187a920..35df2cf60619 100644 --- a/drivers/staging/rtlwifi/base.c +++ b/drivers/staging/rtlwifi/base.c @@ -756,21 +756,30 @@ u8 rtl_mrate_idx_to_arfr_id(struct ieee80211_hw *hw, u8 rate_index, return ret; } +static inline u8 _rtl_rate_id(struct ieee80211_hw *hw, + struct rtl_sta_info *sta_entry, + int rate_index) +{ + struct rtl_priv *rtlpriv = rtl_priv(hw); + + if (rtlpriv->cfg->spec_ver & RTL_SPEC_NEW_RATEID) { + int wireless_mode = sta_entry ? + sta_entry->wireless_mode : WIRELESS_MODE_G; + + return rtl_mrate_idx_to_arfr_id(hw, rate_index, wireless_mode); + } else { + return rate_index; + } +} + static void _rtl_txrate_selectmode(struct ieee80211_hw *hw, struct ieee80211_sta *sta, struct rtl_tcb_desc *tcb_desc) { -#define SET_RATE_ID(rate_id) \ - ((rtlpriv->cfg->spec_ver & RTL_SPEC_NEW_RATEID) ? \ - rtl_mrate_idx_to_arfr_id(hw, rate_id, \ - (sta_entry ? 
sta_entry->wireless_mode : \ - WIRELESS_MODE_G)) : \ - rate_id) - struct rtl_priv *rtlpriv = rtl_priv(hw); struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); struct rtl_sta_info *sta_entry = NULL; - u8 ratr_index = SET_RATE_ID(RATR_INX_WIRELESS_MC); + u8 ratr_index = _rtl_rate_id(hw, sta_entry, RATR_INX_WIRELESS_MC); if (sta) { sta_entry = (struct rtl_sta_info *)sta->drv_priv; @@ -786,7 +795,8 @@ static void _rtl_txrate_selectmode(struct ieee80211_hw *hw, rtlpriv->cfg->maps[RTL_RC_CCK_RATE2M]; tcb_desc->use_driver_rate = 1; tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_MC); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_MC); } else { tcb_desc->ratr_index = ratr_index; } @@ -807,25 +817,32 @@ static void _rtl_txrate_selectmode(struct ieee80211_hw *hw, ; /* use sta_entry->ratr_index */ else if (mac->mode == WIRELESS_MODE_AC_5G) tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_AC_5N); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_AC_5N); else if (mac->mode == WIRELESS_MODE_AC_24G) tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_AC_24N); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_AC_24N); else if (mac->mode == WIRELESS_MODE_N_24G) tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_NGB); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_NGB); else if (mac->mode == WIRELESS_MODE_N_5G) tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_NG); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_NG); else if (mac->mode & WIRELESS_MODE_G) tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_GB); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_GB); else if (mac->mode & WIRELESS_MODE_B) tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_B); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_B); else if (mac->mode & WIRELESS_MODE_A) tcb_desc->ratr_index = - SET_RATE_ID(RATR_INX_WIRELESS_G); + _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_G); } else if (mac->opmode == NL80211_IFTYPE_AP || mac->opmode == NL80211_IFTYPE_ADHOC) { @@ -839,7 +856,6 @@ static void _rtl_txrate_selectmode(struct ieee80211_hw *hw, } } } -#undef SET_RATE_ID } static void _rtl_query_bandwidth_mode(struct ieee80211_hw *hw, @@ -1216,13 +1232,6 @@ void rtl_get_tcb_desc(struct ieee80211_hw *hw, struct ieee80211_sta *sta, struct sk_buff *skb, struct rtl_tcb_desc *tcb_desc) { -#define SET_RATE_ID(rate_id) \ - ((rtlpriv->cfg->spec_ver & RTL_SPEC_NEW_RATEID) ? \ - rtl_mrate_idx_to_arfr_id(hw, rate_id, \ - (sta_entry ? 
sta_entry->wireless_mode : \ - WIRELESS_MODE_G)) : \ - rate_id) - struct rtl_priv *rtlpriv = rtl_priv(hw); struct rtl_mac *rtlmac = rtl_mac(rtl_priv(hw)); struct ieee80211_hdr *hdr = rtl_get_hdr(skb); @@ -1238,7 +1247,8 @@ void rtl_get_tcb_desc(struct ieee80211_hw *hw, if (!ieee80211_is_data(fc)) { tcb_desc->use_driver_rate = true; - tcb_desc->ratr_index = SET_RATE_ID(RATR_INX_WIRELESS_MC); + tcb_desc->ratr_index = _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_MC); tcb_desc->disable_ratefallback = 1; tcb_desc->mac_id = 0; tcb_desc->packet_bw = false; @@ -1258,7 +1268,8 @@ void rtl_get_tcb_desc(struct ieee80211_hw *hw, */ if (info->control.rates[0].idx == 0 || ieee80211_is_nullfunc(fc)) { tcb_desc->use_driver_rate = true; - tcb_desc->ratr_index = SET_RATE_ID(RATR_INX_WIRELESS_MC); + tcb_desc->ratr_index = _rtl_rate_id(hw, sta_entry, + RATR_INX_WIRELESS_MC); tcb_desc->disable_ratefallback = 1; } else if (sta && sta->vht_cap.vht_supported) { @@ -1291,7 +1302,6 @@ void rtl_get_tcb_desc(struct ieee80211_hw *hw, _rtl_qurey_shortpreamble_mode(hw, tcb_desc, info); _rtl_query_shortgi(hw, sta, tcb_desc, info); _rtl_query_protection_mode(hw, tcb_desc, info); -#undef SET_RATE_ID } bool rtl_tx_mgmt_proc(struct ieee80211_hw *hw, struct sk_buff *skb) @@ -1432,8 +1442,8 @@ static void setup_special_tx(struct rtl_priv *rtlpriv, struct rtl_ps_ctl *ppsc, rtlpriv->ra.is_special_data = true; if (rtlpriv->cfg->ops->get_btc_status()) - rtlpriv->btcoexist.btc_ops->btc_special_packet_notify( - rtlpriv, type); + rtlpriv->btcoexist.btc_ops->btc_special_packet_notify(rtlpriv, + type); rtl_lps_leave(hw); ppsc->last_delaylps_stamp_jiffies = jiffies; } @@ -2120,8 +2130,7 @@ label_lps_done: if (rtlpriv->link_info.roam_times >= 5) { pr_err("AP off, try to reconnect now\n"); rtlpriv->link_info.roam_times = 0; - ieee80211_connection_loss( - rtlpriv->mac80211.vif); + ieee80211_connection_loss(rtlpriv->mac80211.vif); } } else { rtlpriv->link_info.roam_times = 0; diff --git a/drivers/staging/rtlwifi/base.h b/drivers/staging/rtlwifi/base.h index 4299ca181365..591f433be1c3 100644 --- a/drivers/staging/rtlwifi/base.h +++ b/drivers/staging/rtlwifi/base.h @@ -151,9 +151,9 @@ void rtl_c2hcmd_wq_callback(void *data); void rtl_c2hcmd_launcher(struct ieee80211_hw *hw, int exec); void rtl_c2hcmd_enqueue(struct ieee80211_hw *hw, u8 tag, u8 len, u8 *val); -u8 rtl_mrate_idx_to_arfr_id( - struct ieee80211_hw *hw, u8 rate_index, - enum wireless_mode wirelessmode); +u8 rtl_mrate_idx_to_arfr_id(struct ieee80211_hw *hw, + u8 rate_index, + enum wireless_mode wirelessmode); void rtl_get_tcb_desc(struct ieee80211_hw *hw, struct ieee80211_tx_info *info, struct ieee80211_sta *sta, diff --git a/drivers/staging/rtlwifi/btcoexist/halbtcoutsrc.c b/drivers/staging/rtlwifi/btcoexist/halbtcoutsrc.c index 24e19ffd4431..b519d181b419 100644 --- a/drivers/staging/rtlwifi/btcoexist/halbtcoutsrc.c +++ b/drivers/staging/rtlwifi/btcoexist/halbtcoutsrc.c @@ -476,17 +476,6 @@ static u32 halbtc_get_wifi_link_status(struct btc_coexist *btcoexist) return ret_val; } -static s32 halbtc_get_wifi_rssi(struct rtl_priv *rtlpriv) -{ - int undec_sm_pwdb = 0; - - if (rtlpriv->mac80211.link_state >= MAC80211_LINKED) - undec_sm_pwdb = rtlpriv->dm.undec_sm_pwdb; - else /* associated entry pwdb */ - undec_sm_pwdb = rtlpriv->dm.undec_sm_pwdb; - return undec_sm_pwdb; -} - static bool halbtc_get(void *void_btcoexist, u8 get_type, void *out_buf) { struct btc_coexist *btcoexist = (struct btc_coexist *)void_btcoexist; @@ -585,7 +574,7 @@ static bool halbtc_get(void *void_btcoexist, u8 
get_type, void *out_buf) *bool_tmp = false; break; case BTC_GET_S4_WIFI_RSSI: - *s32_tmp = halbtc_get_wifi_rssi(rtlpriv); + *s32_tmp = rtlpriv->dm.undec_sm_pwdb; break; case BTC_GET_S4_HS_RSSI: *s32_tmp = 0; diff --git a/drivers/staging/rtlwifi/core.c b/drivers/staging/rtlwifi/core.c index ca37f7511c4d..a9902818ae7e 100644 --- a/drivers/staging/rtlwifi/core.c +++ b/drivers/staging/rtlwifi/core.c @@ -1882,7 +1882,8 @@ bool rtl_hal_pwrseqcmdparsing(struct rtl_priv *rtlpriv, u8 cut_version, return true; default: WARN_ONCE(true, - "rtlwifi: %s(): Unknown CMD!!\n", __func__); + "rtlwifi: %s(): Unknown CMD!!\n", + __func__); break; } } diff --git a/drivers/staging/rtlwifi/phydm/phydm.c b/drivers/staging/rtlwifi/phydm/phydm.c index 27635feedba2..52b85b3ddb34 100644 --- a/drivers/staging/rtlwifi/phydm/phydm.c +++ b/drivers/staging/rtlwifi/phydm/phydm.c @@ -477,7 +477,7 @@ static void phydm_supportability_init(void *dm_void) } else { ODM_RT_TRACE(dm, ODM_COMP_INIT, - "ODM adaptivity is set to disnabled!!!\n"); + "ODM adaptivity is set to disabled!!!\n"); /**/ } diff --git a/drivers/staging/rtlwifi/phydm/phydm_adc_sampling.c b/drivers/staging/rtlwifi/phydm/phydm_adc_sampling.c index d6cea73fa185..f2468b82f745 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_adc_sampling.c +++ b/drivers/staging/rtlwifi/phydm/phydm_adc_sampling.c @@ -551,11 +551,10 @@ void phydm_lamode_trigger_setting(void *dm_void, char input[][16], u32 *_used, /*dbg_print("echo cmd input_num = %d\n", input_num);*/ if ((strcmp(input[1], help) == 0)) { - PHYDM_SNPRINTF( - output + used, out_len - used, - "{En} {0:BB,1:BB_MAC,2:RF0,3:RF1,4:MAC}\n {BB:dbg_port[bit],BB_MAC:0-ok/1-fail/2-cca,MAC:ref} {DMA type} {TrigTime}\n {polling_time/ref_mask} {dbg_port} {0:P_Edge, 1:N_Edge} {SpRate:0-80M,1-40M,2-20M} {Capture num}\n"); - /**/ - } else if (is_enable_la_mode == 1) { + PHYDM_SNPRINTF(output + used, + out_len - used, + "{En} {0:BB,1:BB_MAC,2:RF0,3:RF1,4:MAC}\n {BB:dbg_port[bit],BB_MAC:0-ok/1-fail/2-cca,MAC:ref} {DMA type} {TrigTime}\n {polling_time/ref_mask} {dbg_port} {0:P_Edge, 1:N_Edge} {SpRate:0-80M,1-40M,2-20M} {Capture num}\n"); + } else if (is_enable_la_mode) { PHYDM_SSCANF(input[2], DCMD_DECIMAL, &var1[1]); trig_mode = (u8)var1[1]; @@ -575,7 +574,7 @@ void phydm_lamode_trigger_setting(void *dm_void, char input[][16], u32 *_used, PHYDM_SSCANF(input[10], DCMD_DECIMAL, &var1[9]); dma_data_sig_sel = (u8)var1[3]; - trigger_time_mu_sec = var1[4]; /*unit: us*/ + trigger_time_mu_sec = var1[4]; /* unit: us */ adc_smp->la_mac_ref_mask = var1[5]; adc_smp->la_dbg_port = var1[6]; diff --git a/drivers/staging/rtlwifi/phydm/phydm_ccx.c b/drivers/staging/rtlwifi/phydm/phydm_ccx.c index 57138606d9be..5e01ea8f4b68 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_ccx.c +++ b/drivers/staging/rtlwifi/phydm/phydm_ccx.c @@ -317,7 +317,7 @@ void phydm_get_nhm_result(void *dm_void) ccx_info->NHM_result[10] = (u8)((value32 & MASKBYTE2) >> 16); ccx_info->NHM_result[11] = (u8)((value32 & MASKBYTE3) >> 24); - /*Get NHM duration*/ + /* Get NHM duration */ value32 = odm_read_4byte(dm, ODM_REG_NHM_CNT10_TO_CNT11_11N); ccx_info->NHM_duration = (u16)(value32 & MASKLWORD); } @@ -331,14 +331,15 @@ bool phydm_check_nhm_ready(void *dm_void) bool ret = false; if (dm->support_ic_type & ODM_IC_11AC_SERIES) { - value32 = - odm_get_bb_reg(dm, ODM_REG_CLM_RESULT_11AC, MASKDWORD); + value32 = odm_get_bb_reg(dm, + ODM_REG_CLM_RESULT_11AC, + MASKDWORD); for (i = 0; i < 200; i++) { ODM_delay_ms(1); if (odm_get_bb_reg(dm, ODM_REG_NHM_DUR_READY_11AC, BIT(17))) { - ret = 1; + ret 
= true; break; } } @@ -351,7 +352,7 @@ bool phydm_check_nhm_ready(void *dm_void) ODM_delay_ms(1); if (odm_get_bb_reg(dm, ODM_REG_NHM_DUR_READY_11AC, BIT(17))) { - ret = 1; + ret = true; break; } } diff --git a/drivers/staging/rtlwifi/phydm/phydm_debug.c b/drivers/staging/rtlwifi/phydm/phydm_debug.c index b5b69d5f1a41..91f2c054d83b 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_debug.c +++ b/drivers/staging/rtlwifi/phydm/phydm_debug.c @@ -140,26 +140,17 @@ static inline void phydm_print_csi(struct phy_dm_struct *dm, u32 used, dword_h = odm_get_bb_reg(dm, 0xF74, MASKDWORD); dword_l = odm_get_bb_reg(dm, 0xF5C, MASKDWORD); - if (index % 2 == 0) - PHYDM_SNPRINTF( - output + used, out_len - used, - "%02x %02x %02x %02x %02x %02x %02x %02x\n", - dword_l & MASKBYTE0, (dword_l & MASKBYTE1) >> 8, - (dword_l & MASKBYTE2) >> 16, - (dword_l & MASKBYTE3) >> 24, - dword_h & MASKBYTE0, (dword_h & MASKBYTE1) >> 8, - (dword_h & MASKBYTE2) >> 16, - (dword_h & MASKBYTE3) >> 24); - else - PHYDM_SNPRINTF( - output + used, out_len - used, - "%02x %02x %02x %02x %02x %02x %02x %02x\n", - dword_l & MASKBYTE0, (dword_l & MASKBYTE1) >> 8, - (dword_l & MASKBYTE2) >> 16, - (dword_l & MASKBYTE3) >> 24, - dword_h & MASKBYTE0, (dword_h & MASKBYTE1) >> 8, - (dword_h & MASKBYTE2) >> 16, - (dword_h & MASKBYTE3) >> 24); + PHYDM_SNPRINTF(output + used, + out_len - used, + "%02x %02x %02x %02x %02x %02x %02x %02x\n", + dword_l & MASKBYTE0, + (dword_l & MASKBYTE1) >> 8, + (dword_l & MASKBYTE2) >> 16, + (dword_l & MASKBYTE3) >> 24, + dword_h & MASKBYTE0, + (dword_h & MASKBYTE1) >> 8, + (dword_h & MASKBYTE2) >> 16, + (dword_h & MASKBYTE3) >> 24); } } @@ -168,9 +159,7 @@ void phydm_init_debug_setting(struct phy_dm_struct *dm) dm->debug_level = ODM_DBG_TRACE; dm->fw_debug_components = 0; - dm->debug_components = - - 0; + dm->debug_components = 0; dm->fw_buff_is_enpty = true; dm->pre_c2h_seq = 0; diff --git a/drivers/staging/rtlwifi/phydm/phydm_dig.c b/drivers/staging/rtlwifi/phydm/phydm_dig.c index f10776fbe2d9..99c805cc380b 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_dig.c +++ b/drivers/staging/rtlwifi/phydm/phydm_dig.c @@ -599,13 +599,8 @@ void odm_dig_init(void *dm_void) (DM_DIG_MAX_PAUSE_TYPE + 1)); dig_tab->pause_cckpd_level = 0; - if (dm->board_type & (ODM_BOARD_EXT_PA | ODM_BOARD_EXT_LNA)) { - dig_tab->rx_gain_range_max = DM_DIG_MAX_NIC; - dig_tab->rx_gain_range_min = DM_DIG_MIN_NIC; - } else { - dig_tab->rx_gain_range_max = DM_DIG_MAX_NIC; - dig_tab->rx_gain_range_min = DM_DIG_MIN_NIC; - } + dig_tab->rx_gain_range_max = DM_DIG_MAX_NIC; + dig_tab->rx_gain_range_min = DM_DIG_MIN_NIC; dig_tab->enable_adjust_big_jump = 1; if (dm->support_ic_type & ODM_RTL8822B) { diff --git a/drivers/staging/rtlwifi/phydm/phydm_edcaturbocheck.c b/drivers/staging/rtlwifi/phydm/phydm_edcaturbocheck.c index cd12512628c0..b5bd4fb20c90 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_edcaturbocheck.c +++ b/drivers/staging/rtlwifi/phydm/phydm_edcaturbocheck.c @@ -25,13 +25,13 @@ void odm_edca_turbo_init(void *dm_void) dm->dm_edca_table.is_current_turbo_edca = false; dm->dm_edca_table.is_cur_rdl_state = false; - ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Orginial VO PARAM: 0x%x\n", + ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Original VO PARAM: 0x%x\n", odm_read_4byte(dm, ODM_EDCA_VO_PARAM)); - ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Orginial VI PARAM: 0x%x\n", + ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Original VI PARAM: 0x%x\n", odm_read_4byte(dm, ODM_EDCA_VI_PARAM)); - ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Orginial BE PARAM: 0x%x\n", + 
ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Original BE PARAM: 0x%x\n", odm_read_4byte(dm, ODM_EDCA_BE_PARAM)); - ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Orginial BK PARAM: 0x%x\n", + ODM_RT_TRACE(dm, ODM_COMP_EDCA_TURBO, "Original BK PARAM: 0x%x\n", odm_read_4byte(dm, ODM_EDCA_BK_PARAM)); } /* ODM_InitEdcaTurbo */ diff --git a/drivers/staging/rtlwifi/phydm/phydm_hwconfig.c b/drivers/staging/rtlwifi/phydm/phydm_hwconfig.c index 4bf86e5a451f..a4ad39ab3ddf 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_hwconfig.c +++ b/drivers/staging/rtlwifi/phydm/phydm_hwconfig.c @@ -45,8 +45,8 @@ static u32 phydm_process_rssi_pwdb(struct phy_dm_struct *dm, u32 weighting = 0, undecorated_smoothed_pwdb; /* 2011.07.28 LukeLee: modified to prevent unstable CCK RSSI */ - if (entry->rssi_stat.ofdm_pkt == - 64) { /* speed up when all packets are OFDM*/ + if (entry->rssi_stat.ofdm_pkt == 64) { + /* speed up when all packets are OFDM */ undecorated_smoothed_pwdb = undecorated_smoothed_ofdm; ODM_RT_TRACE(dm, ODM_COMP_RSSI_MONITOR, "PWDB_0[%d] = (( %d ))\n", pktinfo->station_id, @@ -477,26 +477,6 @@ static u8 odm_query_rx_pwr_percentage(s8 ant_power) return 100 + ant_power; } -/* - * 2012/01/12 MH MOve some signal strength smooth method to MP HAL layer. - * IF other SW team do not support the feature, remove this section.?? - */ - -s32 odm_signal_scale_mapping(struct phy_dm_struct *dm, s32 curr_sig) -{ - { - return curr_sig; - } -} - -static u8 odm_sq_process_patch_rt_cid_819x_lenovo(struct phy_dm_struct *dm, - u8 is_cck_rate, u8 pwdb_all, - u8 path, u8 RSSI) -{ - u8 sq = 0; - return sq; -} - static u8 odm_evm_db_to_percentage(s8 value) { /* -33dB~0dB to 0%~99% */ @@ -748,16 +728,10 @@ static void odm_rx_phy_status92c_series_parsing( * from 0~100. */ /* It is assigned to the BSS List in GetValueFromBeaconOrProbeRsp(). */ - if (is_cck_rate) { - phy_info->signal_strength = (u8)( - odm_signal_scale_mapping(dm, pwdb_all)); /*pwdb_all;*/ - } else { - if (rf_rx_num != 0) { - phy_info->signal_strength = - (u8)(odm_signal_scale_mapping(dm, total_rssi /= - rf_rx_num)); - } - } + if (is_cck_rate) + phy_info->signal_strength = pwdb_all; + else if (rf_rx_num != 0) + phy_info->signal_strength = (total_rssi /= rf_rx_num); /* For 92C/92D HW (Hybrid) Antenna Diversity */ } @@ -901,13 +875,10 @@ static void odm_rx_phy_status_jaguar_series_parsing( phy_info->recv_signal_power = rx_pwr_all; /*(3) Get Signal Quality (EVM)*/ { - u8 sq; + u8 sq = 0; - if ((dm->support_platform == ODM_WIN) && - (dm->patch_id == RT_CID_819X_LENOVO)) - sq = odm_sq_process_patch_rt_cid_819x_lenovo( - dm, is_cck_rate, pwdb_all, 0, 0); - else + if (!(dm->support_platform == ODM_WIN && + dm->patch_id == RT_CID_819X_LENOVO)) sq = phydm_get_signal_quality_8812(phy_info, dm, phy_sta_rpt); @@ -1051,21 +1022,19 @@ static void odm_rx_phy_status_jaguar_series_parsing( */ /*It is assigned to the BSS List in GetValueFromBeaconOrProbeRsp().*/ if (is_cck_rate) { - phy_info->signal_strength = (u8)( - odm_signal_scale_mapping(dm, pwdb_all)); /*pwdb_all;*/ - } else { - if (rf_rx_num != 0) { - /* 2015/01 Sean, use the best two RSSI only, - * suggested by Ynlin and ChenYu. - */ - if (rf_rx_num == 1) - avg_rssi = best_rssi; - else - avg_rssi = (best_rssi + second_rssi) / 2; - phy_info->signal_strength = - (u8)(odm_signal_scale_mapping(dm, avg_rssi)); - } + phy_info->signal_strength = pwdb_all; + } else if (rf_rx_num != 0) { + /* 2015/01 Sean, use the best two RSSI only, + * suggested by Ynlin and ChenYu. 
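Editorial note on the rtlwifi/phydm hunks above: besides converting the SET_RATE_ID() macro into the _rtl_rate_id() helper and fixing "ret = 1" to "ret = true" for a bool, the series deletes if/else chains whose branches are identical. halbtc_get_wifi_rssi() returned rtlpriv->dm.undec_sm_pwdb on both paths, and odm_dig_init() assigned the same gain range either way. A minimal sketch of that branch collapse, using the names from the hunks above:

    /* before: the condition cannot change the result */
    if (rtlpriv->mac80211.link_state >= MAC80211_LINKED)
        undec_sm_pwdb = rtlpriv->dm.undec_sm_pwdb;
    else /* associated entry pwdb */
        undec_sm_pwdb = rtlpriv->dm.undec_sm_pwdb;

    /* after: drop the helper and read the field at the single call site */
    *s32_tmp = rtlpriv->dm.undec_sm_pwdb;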
+ */ + if (rf_rx_num == 1) + avg_rssi = best_rssi; + else + avg_rssi = (best_rssi + second_rssi) / 2; + + phy_info->signal_strength = avg_rssi; } + dm->rx_pwdb_ave = dm->rx_pwdb_ave + phy_info->rx_pwdb_all; dm->dm_fat_table.antsel_rx_keep_0 = phy_sta_rpt->antidx_anta; @@ -1750,8 +1719,6 @@ static void phydm_get_rx_phy_status_type2(struct phy_dm_struct *dm, ODM_RTL8710B)) { /* JJ ADD 20161014 */ if (rxsc == 3) bw = ODM_BW40M; - else if ((rxsc == 1) || (rxsc == 2)) - bw = ODM_BW20M; else bw = ODM_BW20M; } @@ -1874,44 +1841,8 @@ void phydm_rx_phy_status_new_type(struct phy_dm_struct *phydm, u8 *phy_status, /* Update signal strength to UI, and phy_info->rx_pwdb_all is the * maximum RSSI of all path */ - phy_info->signal_strength = - (u8)(odm_signal_scale_mapping(phydm, phy_info->rx_pwdb_all)); + phy_info->signal_strength = phy_info->rx_pwdb_all; /* Calculate average RSSI and smoothed RSSI */ phydm_process_rssi_for_dm_new_type(phydm, phy_info, pktinfo); } - -u32 query_phydm_trx_capability(struct phy_dm_struct *dm) -{ - u32 value32 = 0xFFFFFFFF; - - return value32; -} - -u32 query_phydm_stbc_capability(struct phy_dm_struct *dm) -{ - u32 value32 = 0xFFFFFFFF; - - return value32; -} - -u32 query_phydm_ldpc_capability(struct phy_dm_struct *dm) -{ - u32 value32 = 0xFFFFFFFF; - - return value32; -} - -u32 query_phydm_txbf_parameters(struct phy_dm_struct *dm) -{ - u32 value32 = 0xFFFFFFFF; - - return value32; -} - -u32 query_phydm_txbf_capability(struct phy_dm_struct *dm) -{ - u32 value32 = 0xFFFFFFFF; - - return value32; -} diff --git a/drivers/staging/rtlwifi/phydm/phydm_hwconfig.h b/drivers/staging/rtlwifi/phydm/phydm_hwconfig.h index 6ad5e0292a97..ee4b9f0af2a1 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_hwconfig.h +++ b/drivers/staging/rtlwifi/phydm/phydm_hwconfig.h @@ -216,8 +216,6 @@ odm_config_fw_with_header_file(struct phy_dm_struct *dm, u32 odm_get_hw_img_version(struct phy_dm_struct *dm); -s32 odm_signal_scale_mapping(struct phy_dm_struct *dm, s32 curr_sig); - /*For 8822B only!! 
need to move to FW finally */ /*==============================================*/ void phydm_rx_phy_status_new_type(struct phy_dm_struct *phydm, u8 *phy_status, @@ -486,14 +484,4 @@ struct phy_status_rpt_jaguar2_type2 { #endif }; -u32 query_phydm_trx_capability(struct phy_dm_struct *dm); - -u32 query_phydm_stbc_capability(struct phy_dm_struct *dm); - -u32 query_phydm_ldpc_capability(struct phy_dm_struct *dm); - -u32 query_phydm_txbf_parameters(struct phy_dm_struct *dm); - -u32 query_phydm_txbf_capability(struct phy_dm_struct *dm); - #endif /*#ifndef __HALHWOUTSRC_H__*/ diff --git a/drivers/staging/rtlwifi/phydm/phydm_psd.c b/drivers/staging/rtlwifi/phydm/phydm_psd.c index badc514ac0be..c93d871f1eb6 100644 --- a/drivers/staging/rtlwifi/phydm/phydm_psd.c +++ b/drivers/staging/rtlwifi/phydm/phydm_psd.c @@ -336,12 +336,7 @@ void phydm_psd_init(void *dm_void) 2; /*2b'11: 20MHz, 2b'10: 40MHz, 2b'01: 80MHz */ } - if (dm->support_ic_type == ODM_RTL8812) - dm_psd_table->psd_pwr_common_offset = 0; - else if (dm->support_ic_type == ODM_RTL8821) - dm_psd_table->psd_pwr_common_offset = 0; - else - dm_psd_table->psd_pwr_common_offset = 0; + dm_psd_table->psd_pwr_common_offset = 0; phydm_psd_para_setting(dm, 1, 2, 3, 128, 0, 0, 7, 0); /*phydm_psd(dm, 0x3c, 0, 127);*/ /* target at -50dBm */ diff --git a/drivers/staging/rtlwifi/ps.c b/drivers/staging/rtlwifi/ps.c index 0ca0532c73da..5118773ea6f7 100644 --- a/drivers/staging/rtlwifi/ps.c +++ b/drivers/staging/rtlwifi/ps.c @@ -245,7 +245,7 @@ void rtl_ips_nic_off_wq_callback(void *data) /* call before RF off */ if (rtlpriv->cfg->ops->get_btc_status()) rtlpriv->btcoexist.btc_ops->btc_ips_notify(rtlpriv, - ppsc->inactive_pwrstate); + ppsc->inactive_pwrstate); /*rtl_pci_reset_trx_ring(hw); */ _rtl_ps_inactive_ps(hw); @@ -291,7 +291,7 @@ void rtl_ips_nic_on(struct ieee80211_hw *hw) rtlpriv->phydm.ops->phydm_reset_dm(rtlpriv); if (rtlpriv->cfg->ops->get_btc_status()) rtlpriv->btcoexist.btc_ops->btc_ips_notify(rtlpriv, - ppsc->inactive_pwrstate); + ppsc->inactive_pwrstate); } } mutex_unlock(&rtlpriv->locks.ips_mutex); diff --git a/drivers/staging/rts5208/general.c b/drivers/staging/rts5208/general.c index 79d245877264..0f912b011064 100644 --- a/drivers/staging/rts5208/general.c +++ b/drivers/staging/rts5208/general.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . 
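Editorial note: several hunks above delete odm_signal_scale_mapping(), which simply returned its argument, together with the query_phydm_*() stubs that only ever returned 0xFFFFFFFF. With the identity wrapper gone, callers assign the raw value directly; the before/after at one call site, taken from the hunks above:

    /* before: the wrapper returned curr_sig unchanged */
    phy_info->signal_strength =
        (u8)odm_signal_scale_mapping(dm, pwdb_all);

    /* after: same value, no detour */
    phy_info->signal_strength = pwdb_all;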
- * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/general.h b/drivers/staging/rts5208/general.h index 90a1f9297f5e..53e2dbabf04b 100644 --- a/drivers/staging/rts5208/general.h +++ b/drivers/staging/rts5208/general.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/ms.c b/drivers/staging/rts5208/ms.c index f53adf15c685..e43f92080c20 100644 --- a/drivers/staging/rts5208/ms.c +++ b/drivers/staging/rts5208/ms.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/ms.h b/drivers/staging/rts5208/ms.h index 71f98cc03eed..952cc14dd079 100644 --- a/drivers/staging/rts5208/ms.h +++ b/drivers/staging/rts5208/ms.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . 
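Editorial note: the rts5208 hunks here and below all make the same change, replacing the full GPL notice with an SPDX identifier on the first line plus the copyright line. Kernel convention uses C++-style comments in source files and C-style comments in headers, since headers may be pulled into contexts (linker scripts, assembly) where // comments are not valid. In a source file such as ms.c:

    // SPDX-License-Identifier: GPL-2.0+

In a header such as ms.h:

    /* SPDX-License-Identifier: GPL-2.0+ */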
- * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx.c b/drivers/staging/rts5208/rtsx.c index 69e6abe14abf..fa597953e9a0 100644 --- a/drivers/staging/rts5208/rtsx.c +++ b/drivers/staging/rts5208/rtsx.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) @@ -237,12 +226,6 @@ static struct scsi_host_template rtsx_host_template = { /* limit the total size of a transfer to 120 KB */ .max_sectors = 240, - /* merge commands... this seems to help performance, but - * periodically someone should test to see which setting is more - * optimal. - */ - .use_clustering = 1, - /* emulated HBA */ .emulated = 1, diff --git a/drivers/staging/rts5208/rtsx.h b/drivers/staging/rts5208/rtsx.h index 514536a6f92b..2e101da83220 100644 --- a/drivers/staging/rts5208/rtsx.h +++ b/drivers/staging/rts5208/rtsx.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_card.c b/drivers/staging/rts5208/rtsx_card.c index 6dc541e06fb9..294f381518fa 100644 --- a/drivers/staging/rts5208/rtsx_card.c +++ b/drivers/staging/rts5208/rtsx_card.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. 
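Editorial note: the rtsx.c hunk above also drops ".use_clustering = 1" from the SCSI host template. This tracks a SCSI core change in the same merge window that removed the field altogether, with segment merging now governed by queue limits such as dma_boundary and max_segment_size rather than a per-template flag (hedged summary; this patch only shows the field's removal). A partial sketch of the template after the change, keeping only the members visible in the hunk:

    #include <scsi/scsi_host.h>

    static struct scsi_host_template rtsx_host_template = {
        /* limit the total size of a transfer to 120 KB */
        .max_sectors = 240,

        /* emulated HBA */
        .emulated = 1,

        /* .use_clustering no longer exists; no replacement entry needed */
    };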
- * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_card.h b/drivers/staging/rts5208/rtsx_card.h index 820b1113ea89..39727371cd7a 100644 --- a/drivers/staging/rts5208/rtsx_card.h +++ b/drivers/staging/rts5208/rtsx_card.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_chip.c b/drivers/staging/rts5208/rtsx_chip.c index 94fb35429bf1..76c35f3c0208 100644 --- a/drivers/staging/rts5208/rtsx_chip.c +++ b/drivers/staging/rts5208/rtsx_chip.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_chip.h b/drivers/staging/rts5208/rtsx_chip.h index ec1c3d96d31a..7325362f835f 100644 --- a/drivers/staging/rts5208/rtsx_chip.h +++ b/drivers/staging/rts5208/rtsx_chip.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. 
- * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c index 9c594a778425..1deb74112ad4 100644 --- a/drivers/staging/rts5208/rtsx_scsi.c +++ b/drivers/staging/rts5208/rtsx_scsi.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_scsi.h b/drivers/staging/rts5208/rtsx_scsi.h index 30f3724848fe..df6138c97aaa 100644 --- a/drivers/staging/rts5208/rtsx_scsi.h +++ b/drivers/staging/rts5208/rtsx_scsi.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_sys.h b/drivers/staging/rts5208/rtsx_sys.h index 817700c0d794..77094809c814 100644 --- a/drivers/staging/rts5208/rtsx_sys.h +++ b/drivers/staging/rts5208/rtsx_sys.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. 
- * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_transport.c b/drivers/staging/rts5208/rtsx_transport.c index b4a796c570c2..8277d7895608 100644 --- a/drivers/staging/rts5208/rtsx_transport.c +++ b/drivers/staging/rts5208/rtsx_transport.c @@ -1,21 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0+ /* * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/rtsx_transport.h b/drivers/staging/rts5208/rtsx_transport.h index 99740c33f2fb..097efed24b79 100644 --- a/drivers/staging/rts5208/rtsx_transport.h +++ b/drivers/staging/rts5208/rtsx_transport.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/sd.c b/drivers/staging/rts5208/sd.c index ff1a9aa152ce..2c47ae613ea1 100644 --- a/drivers/staging/rts5208/sd.c +++ b/drivers/staging/rts5208/sd.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. 
- * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/sd.h b/drivers/staging/rts5208/sd.h index 900be444acf9..e124526360b2 100644 --- a/drivers/staging/rts5208/sd.h +++ b/drivers/staging/rts5208/sd.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/spi.c b/drivers/staging/rts5208/spi.c index 110cb9093f30..f1e9e80044ed 100644 --- a/drivers/staging/rts5208/spi.c +++ b/drivers/staging/rts5208/spi.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/spi.h b/drivers/staging/rts5208/spi.h index c8d2beacd9e5..dcf93c80b2d5 100644 --- a/drivers/staging/rts5208/spi.h +++ b/drivers/staging/rts5208/spi.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. 
- * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/xd.c b/drivers/staging/rts5208/xd.c index d71f19ceb6fa..c5ee04ecd1c9 100644 --- a/drivers/staging/rts5208/xd.c +++ b/drivers/staging/rts5208/xd.c @@ -1,20 +1,9 @@ -/* Driver for Realtek PCI-Express card reader +// SPDX-License-Identifier: GPL-2.0+ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . - * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) diff --git a/drivers/staging/rts5208/xd.h b/drivers/staging/rts5208/xd.h index d5f10880efb7..57b94129b26f 100644 --- a/drivers/staging/rts5208/xd.h +++ b/drivers/staging/rts5208/xd.h @@ -1,21 +1,9 @@ -/* Driver for Realtek PCI-Express card reader - * Header file +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Driver for Realtek PCI-Express card reader * * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the - * Free Software Foundation; either version 2, or (at your option) any - * later version. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . 
- * * Author: * Wei WANG (wei_wang@realsil.com.cn) * Micky Ching (micky_ching@realsil.com.cn) @@ -102,7 +90,7 @@ #define NO_OFFSET 0x0 #define WITH_OFFSET 0x1 -#define Sect_Per_Page 4 +#define SECT_PER_PAGE 4 #define XD_ADDR_MODE_2C XD_ADDR_MODE_2A #define ZONE0_BAD_BLOCK 23 diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 1035e91e7cd3..eed840b251da 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -383,7 +383,8 @@ int sm750_hw_imageblit(struct lynx_accel *accel, write_dpPort(accel, *(unsigned int *)(pSrcbuf + (j * 4))); if (ulBytesRemain) { - memcpy(ajRemain, pSrcbuf+ul4BytesPerScan, ulBytesRemain); + memcpy(ajRemain, pSrcbuf + ul4BytesPerScan, + ulBytesRemain); write_dpPort(accel, *(unsigned int *)ajRemain); } diff --git a/drivers/staging/speakup/i18n.c b/drivers/staging/speakup/i18n.c index cea8707653f5..ee240d36f947 100644 --- a/drivers/staging/speakup/i18n.c +++ b/drivers/staging/speakup/i18n.c @@ -336,7 +336,7 @@ static char *speakup_default_msgs[MSG_LAST_INDEX] = { [MSG_FUNCNAME_SPELL_DELAY_DEC] = "spell delay decrement", [MSG_FUNCNAME_SPELL_DELAY_INC] = "spell delay increment", [MSG_FUNCNAME_SPELL_WORD] = "spell word", - [MSG_FUNCNAME_SPELL_WORD_PHONETICALLY] = "spell word phoneticly", + [MSG_FUNCNAME_SPELL_WORD_PHONETICALLY] = "spell word phonetically", [MSG_FUNCNAME_TONE_DEC] = "tone decrement", [MSG_FUNCNAME_TONE_INC] = "tone increment", [MSG_FUNCNAME_VOICE_DEC] = "voice decrement", diff --git a/drivers/staging/speakup/kobjects.c b/drivers/staging/speakup/kobjects.c index 08f11cc17371..2e36d872662c 100644 --- a/drivers/staging/speakup/kobjects.c +++ b/drivers/staging/speakup/kobjects.c @@ -545,7 +545,7 @@ ssize_t spk_var_show(struct kobject *kobj, struct kobj_attribute *attr, int rv = 0; struct st_var_header *param; struct var_t *var; - char *cp1; + char *cp1; char *cp; char ch; unsigned long flags; diff --git a/drivers/staging/speakup/speakup_acntpc.c b/drivers/staging/speakup/speakup_acntpc.c index 28519754b2f0..c94328a5bd4a 100644 --- a/drivers/staging/speakup/speakup_acntpc.c +++ b/drivers/staging/speakup/speakup_acntpc.c @@ -298,7 +298,8 @@ static void accent_release(void) { spk_stop_serial_interrupt(); if (speakup_info.port_tts) - synth_release_region(speakup_info.port_tts-1, SYNTH_IO_EXTENT); + synth_release_region(speakup_info.port_tts - 1, + SYNTH_IO_EXTENT); speakup_info.port_tts = 0; } diff --git a/drivers/staging/speakup/speakup_decpc.c b/drivers/staging/speakup/speakup_decpc.c index 6649309e0342..459ee0c0bd57 100644 --- a/drivers/staging/speakup/speakup_decpc.c +++ b/drivers/staging/speakup/speakup_decpc.c @@ -302,12 +302,12 @@ static void synth_flush(struct spk_synth *synth) while (dt_ctrl(CTRL_flush)) { if (--timeout == 0) break; -udelay(50); + udelay(50); } for (timeout = 0; timeout < 10; timeout++) { if (dt_waitbit(STAT_dma_ready)) break; -udelay(50); + udelay(50); } outb_p(DMA_sync, speakup_info.port_tts + 4); outb_p(0, speakup_info.port_tts + 4); @@ -315,7 +315,7 @@ udelay(50); for (timeout = 0; timeout < 10; timeout++) { if (!(dt_getstatus() & STAT_flushing)) break; -udelay(50); + udelay(50); } dma_state = dt_getstatus() & STAT_dma_state; dma_state ^= STAT_dma_state; diff --git a/drivers/staging/speakup/speakup_keypc.c b/drivers/staging/speakup/speakup_keypc.c index 3901734982a4..b788272da4f9 100644 --- a/drivers/staging/speakup/speakup_keypc.c +++ b/drivers/staging/speakup/speakup_keypc.c @@ -177,7 +177,7 @@ static void do_catch_up(struct spk_synth *synth) 
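Editorial note: the xd.h hunk above renames the mixed-case macro Sect_Per_Page to the conventional all-caps SECT_PER_PAGE, and the speakup hunks re-indent statements (udelay(), spin_lock_irqsave()) that had drifted to column 0 without changing the logic. For reference, the intended shape of the flush poll in speakup_decpc.c after the whitespace fix:

    while (dt_ctrl(CTRL_flush)) {
        if (--timeout == 0)
            break;
        udelay(50);
    }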
jiffy_delta = spk_get_var(JIFFY); delay_time = spk_get_var(DELAY); full_time = spk_get_var(FULL); -spin_lock_irqsave(&speakup_info.spinlock, flags); + spin_lock_irqsave(&speakup_info.spinlock, flags); jiffy_delta_val = jiffy_delta->u.n.value; spin_unlock_irqrestore(&speakup_info.spinlock, flags); diff --git a/drivers/staging/speakup/spk_priv.h b/drivers/staging/speakup/spk_priv.h index 7b3a16e1fa23..c8e688878fc7 100644 --- a/drivers/staging/speakup/spk_priv.h +++ b/drivers/staging/speakup/spk_priv.h @@ -54,8 +54,10 @@ ssize_t spk_var_store(struct kobject *kobj, struct kobj_attribute *attr, int spk_serial_synth_probe(struct spk_synth *synth); int spk_ttyio_synth_probe(struct spk_synth *synth); -const char *spk_serial_synth_immediate(struct spk_synth *synth, const char *buff); -const char *spk_ttyio_synth_immediate(struct spk_synth *synth, const char *buff); +const char *spk_serial_synth_immediate(struct spk_synth *synth, + const char *buff); +const char *spk_ttyio_synth_immediate(struct spk_synth *synth, + const char *buff); void spk_do_catch_up(struct spk_synth *synth); void spk_do_catch_up_unicode(struct spk_synth *synth); void spk_synth_flush(struct spk_synth *synth); diff --git a/drivers/staging/speakup/spk_ttyio.c b/drivers/staging/speakup/spk_ttyio.c index 979e3ae249c1..c92bbd05516e 100644 --- a/drivers/staging/speakup/spk_ttyio.c +++ b/drivers/staging/speakup/spk_ttyio.c @@ -10,7 +10,7 @@ struct spk_ldisc_data { char buf; - struct semaphore sem; + struct completion completion; bool buf_free; }; @@ -55,7 +55,7 @@ static int spk_ttyio_ldisc_open(struct tty_struct *tty) if (!ldisc_data) return -ENOMEM; - sema_init(&ldisc_data->sem, 0); + init_completion(&ldisc_data->completion); ldisc_data->buf_free = true; speakup_tty->disc_data = ldisc_data; @@ -95,7 +95,7 @@ static int spk_ttyio_receive_buf2(struct tty_struct *tty, ldisc_data->buf = cp[0]; ldisc_data->buf_free = false; - up(&ldisc_data->sem); + complete(&ldisc_data->completion); return 1; } @@ -286,7 +286,8 @@ static unsigned char ttyio_in(int timeout) struct spk_ldisc_data *ldisc_data = speakup_tty->disc_data; char rv; - if (down_timeout(&ldisc_data->sem, usecs_to_jiffies(timeout)) == -ETIME) { + if (wait_for_completion_timeout(&ldisc_data->completion, + usecs_to_jiffies(timeout)) == 0) { if (timeout) pr_warn("spk_ttyio: timeout (%d) while waiting for input\n", timeout); diff --git a/drivers/staging/unisys/visorhba/visorhba_main.c b/drivers/staging/unisys/visorhba/visorhba_main.c index 4fc521c51c0e..2dad36a05518 100644 --- a/drivers/staging/unisys/visorhba/visorhba_main.c +++ b/drivers/staging/unisys/visorhba/visorhba_main.c @@ -645,7 +645,6 @@ static struct scsi_host_template visorhba_driver_template = { .this_id = -1, .slave_alloc = visorhba_slave_alloc, .slave_destroy = visorhba_slave_destroy, - .use_clustering = ENABLE_CLUSTERING, }; /* @@ -681,19 +680,7 @@ static int info_debugfs_show(struct seq_file *seq, void *v) return 0; } - -static int info_debugfs_open(struct inode *inode, struct file *file) -{ - return single_open(file, info_debugfs_show, inode->i_private); -} - -static const struct file_operations info_debugfs_fops = { - .owner = THIS_MODULE, - .open = info_debugfs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(info_debugfs); /* * complete_taskmgmt_command - Complete task management diff --git a/drivers/staging/unisys/visornic/visornic_main.c b/drivers/staging/unisys/visornic/visornic_main.c index 3647b8f1ed28..5eeb4b93b45b 100644 --- 
a/drivers/staging/unisys/visornic/visornic_main.c +++ b/drivers/staging/unisys/visornic/visornic_main.c @@ -2095,7 +2095,7 @@ static int visornic_resume(struct visor_device *dev, mod_timer(&devdata->irq_poll_timer, msecs_to_jiffies(2)); rtnl_lock(); - dev_open(netdev); + dev_open(netdev, NULL); rtnl_unlock(); complete_func(dev, 0); diff --git a/drivers/staging/vboxvideo/Makefile b/drivers/staging/vboxvideo/Makefile index 3f6094aa9cdf..1224f313af0c 100644 --- a/drivers/staging/vboxvideo/Makefile +++ b/drivers/staging/vboxvideo/Makefile @@ -1,6 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 -ccflags-y := -Iinclude/drm - vboxvideo-y := hgsmi_base.o modesetting.o vbva_base.o \ vbox_drv.o vbox_fb.o vbox_hgsmi.o vbox_irq.o vbox_main.o \ vbox_mode.o vbox_prime.o vbox_ttm.o diff --git a/drivers/staging/vboxvideo/hgsmi_base.c b/drivers/staging/vboxvideo/hgsmi_base.c index 15ff5f42e2cd..361d3193258e 100644 --- a/drivers/staging/vboxvideo/hgsmi_base.c +++ b/drivers/staging/vboxvideo/hgsmi_base.c @@ -1,27 +1,8 @@ -/* - * Copyright (C) 2006-2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. - */ +// SPDX-License-Identifier: MIT +/* Copyright (C) 2006-2017 Oracle Corporation */ +#include #include "vbox_drv.h" -#include "vbox_err.h" #include "vboxvideo_guest.h" #include "vboxvideo_vbe.h" #include "hgsmi_channels.h" @@ -29,9 +10,9 @@ /** * Inform the host of the location of the host flags in VRAM via an HGSMI cmd. - * @param ctx the context of the guest heap to use. - * @param location the offset chosen for the flags within guest VRAM. - * @returns 0 on success, -errno on failure + * Return: 0 or negative errno value. + * @ctx: The context of the guest heap to use. + * @location: The offset chosen for the flags within guest VRAM. */ int hgsmi_report_flags_location(struct gen_pool *ctx, u32 location) { @@ -53,9 +34,9 @@ int hgsmi_report_flags_location(struct gen_pool *ctx, u32 location) /** * Notify the host of HGSMI-related guest capabilities via an HGSMI command. - * @param ctx the context of the guest heap to use. - * @param caps the capabilities to report, see vbva_caps. - * @returns 0 on success, -errno on failure + * Return: 0 or negative errno value. + * @ctx: The context of the guest heap to use. + * @caps: The capabilities to report, see vbva_caps. 
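Editorial note: the spk_ttyio.c hunks above swap a semaphore used purely as a wait-for-one-character signal for a struct completion, which states that intent directly. Note the changed timeout convention: wait_for_completion_timeout() returns 0 on timeout (and the remaining jiffies otherwise), where down_timeout() returned -ETIME. A self-contained sketch of the pattern; the helper names ldisc_push_char() and ldisc_wait_char() are hypothetical, the real driver does this inline:

    #include <linux/completion.h>
    #include <linux/jiffies.h>

    struct spk_ldisc_data {
        char buf;
        struct completion completion;  /* init_completion() at open time */
        bool buf_free;
    };

    /* receive side: publish one character, then wake the reader */
    static void ldisc_push_char(struct spk_ldisc_data *d, char c)
    {
        d->buf = c;
        d->buf_free = false;
        complete(&d->completion);
    }

    /* reader side: true if a character arrived before the timeout */
    static bool ldisc_wait_char(struct spk_ldisc_data *d, int timeout_us)
    {
        return wait_for_completion_timeout(&d->completion,
                                           usecs_to_jiffies(timeout_us)) != 0;
    }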
*/ int hgsmi_send_caps_info(struct gen_pool *ctx, u32 caps) { @@ -70,7 +51,7 @@ int hgsmi_send_caps_info(struct gen_pool *ctx, u32 caps) hgsmi_buffer_submit(ctx, p); - WARN_ON_ONCE(RT_FAILURE(p->rc)); + WARN_ON_ONCE(p->rc < 0); hgsmi_buffer_free(ctx, p); @@ -91,11 +72,10 @@ int hgsmi_test_query_conf(struct gen_pool *ctx) /** * Query the host for an HGSMI configuration parameter via an HGSMI command. - * @param ctx the context containing the heap used - * @param index the index of the parameter to query, - * @see vbva_conf32::index - * @param value_ret where to store the value of the parameter on success - * @returns 0 on success, -errno on failure + * Return: 0 or negative errno value. + * @ctx: The context containing the heap used. + * @index: The index of the parameter to query. + * @value_ret: Where to store the value of the parameter on success. */ int hgsmi_query_conf(struct gen_pool *ctx, u32 index, u32 *value_ret) { @@ -120,16 +100,15 @@ int hgsmi_query_conf(struct gen_pool *ctx, u32 index, u32 *value_ret) /** * Pass the host a new mouse pointer shape via an HGSMI command. - * - * @param ctx the context containing the heap to be used - * @param flags cursor flags, @see VMMDevReqMousePointer::flags - * @param hot_x horizontal position of the hot spot - * @param hot_y vertical position of the hot spot - * @param width width in pixels of the cursor - * @param height height in pixels of the cursor - * @param pixels pixel data, @see VMMDevReqMousePointer for the format - * @param len size in bytes of the pixel data - * @returns 0 on success, -errno on failure + * Return: 0 or negative errno value. + * @ctx: The context containing the heap to be used. + * @flags: Cursor flags. + * @hot_x: Horizontal position of the hot spot. + * @hot_y: Vertical position of the hot spot. + * @width: Width in pixels of the cursor. + * @height: Height in pixels of the cursor. + * @pixels: Pixel data, @see VMMDevReqMousePointer for the format. + * @len: Size in bytes of the pixel data. */ int hgsmi_update_pointer_shape(struct gen_pool *ctx, u32 flags, u32 hot_x, u32 hot_y, u32 width, u32 height, @@ -195,13 +174,13 @@ int hgsmi_update_pointer_shape(struct gen_pool *ctx, u32 flags, * Report the guest cursor position. The host may wish to use this information * to re-position its own cursor (though this is currently unlikely). The * current host cursor position is returned. - * @param ctx The context containing the heap used. - * @param report_position Are we reporting a position? - * @param x Guest cursor X position. - * @param y Guest cursor Y position. - * @param x_host Host cursor X position is stored here. Optional. - * @param y_host Host cursor Y position is stored here. Optional. - * @returns 0 on success, -errno on failure + * Return: 0 or negative errno value. + * @ctx: The context containing the heap used. + * @report_position: Are we reporting a position? + * @x: Guest cursor X position. + * @y: Guest cursor Y position. + * @x_host: Host cursor X position is stored here. Optional. + * @y_host: Host cursor Y position is stored here. Optional. */ int hgsmi_cursor_position(struct gen_pool *ctx, bool report_position, u32 x, u32 y, u32 *x_host, u32 *y_host) @@ -226,21 +205,3 @@ int hgsmi_cursor_position(struct gen_pool *ctx, bool report_position, return 0; } - -/** - * @todo Mouse pointer position to be read from VMMDev memory, address of the - * memory region can be queried from VMMDev via an IOCTL. This VMMDev memory - * region will contain host information which is needed by the guest. 
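Editorial note: the comment rewrites in hgsmi_base.c above move from Doxygen-style @param/@returns tags to kernel-doc @name: lines and a Return: section (conventionally placed after the parameter list, though this patch writes it first). The canonical kernel-doc layout, for reference:

    /**
     * hgsmi_query_conf() - Query an HGSMI configuration parameter.
     * @ctx: The context containing the heap used.
     * @index: The index of the parameter to query.
     * @value_ret: Where to store the value of the parameter on success.
     *
     * Return: 0 or a negative errno value.
     */
    int hgsmi_query_conf(struct gen_pool *ctx, u32 index, u32 *value_ret);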
- * - * Reading will not cause a switch to the host. - * - * Have to take into account: - * * synchronization: host must write to the memory only from EMT, - * large structures must be read under flag, which tells the host - * that the guest is currently reading the memory (OWNER flag?). - * * guest writes: may be allocate a page for the host info and make - * the page readonly for the guest. - * * the information should be available only for additions drivers. - * * VMMDev additions driver will inform the host which version of the info - * it expects, host must support all versions. - */ diff --git a/drivers/staging/vboxvideo/hgsmi_ch_setup.h b/drivers/staging/vboxvideo/hgsmi_ch_setup.h index 8e6d9e11a69c..4e93418d6a13 100644 --- a/drivers/staging/vboxvideo/hgsmi_ch_setup.h +++ b/drivers/staging/vboxvideo/hgsmi_ch_setup.h @@ -1,24 +1,5 @@ -/* - * Copyright (C) 2006-2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. - */ +/* SPDX-License-Identifier: MIT */ +/* Copyright (C) 2006-2017 Oracle Corporation */ #ifndef __HGSMI_CH_SETUP_H__ #define __HGSMI_CH_SETUP_H__ @@ -36,29 +17,14 @@ struct hgsmi_buffer_location { } __packed; /* HGSMI setup and configuration data structures. */ -/* host->guest commands pending, should be accessed under FIFO lock only */ + #define HGSMIHOSTFLAGS_COMMANDS_PENDING 0x01u -/* IRQ is fired, should be accessed under VGAState::lock only */ #define HGSMIHOSTFLAGS_IRQ 0x02u -/* vsync interrupt flag, should be accessed under VGAState::lock only */ #define HGSMIHOSTFLAGS_VSYNC 0x10u -/** monitor hotplug flag, should be accessed under VGAState::lock only */ #define HGSMIHOSTFLAGS_HOTPLUG 0x20u -/** - * Cursor capability state change flag, should be accessed under - * VGAState::lock only. @see vbva_conf32. - */ #define HGSMIHOSTFLAGS_CURSOR_CAPABILITIES 0x40u struct hgsmi_host_flags { - /* - * Host flags can be accessed and modified in multiple threads - * concurrently, e.g. CrOpenGL HGCM and GUI threads when completing - * HGSMI 3D and Video Accel respectively, EMT thread when dealing with - * HGSMI command processing, etc. - * Besides settings/cleaning flags atomically, some flags have their - * own special sync restrictions, see comments for flags above. 
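Editorial note: hgsmi_base.c above also drops the local "vbox_err.h" in favour of a bracketed system include (its name is elided in this rendering) and open-codes the RT_FAILURE() test, which simply checks for a negative status code:

    hgsmi_buffer_submit(ctx, p);

    /* was: WARN_ON_ONCE(RT_FAILURE(p->rc)); */
    WARN_ON_ONCE(p->rc < 0);

    hgsmi_buffer_free(ctx, p);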
- */ u32 host_flags; u32 reserved[3]; } __packed; diff --git a/drivers/staging/vboxvideo/hgsmi_channels.h b/drivers/staging/vboxvideo/hgsmi_channels.h index a2a34b2167b4..9b83f4ff3faf 100644 --- a/drivers/staging/vboxvideo/hgsmi_channels.h +++ b/drivers/staging/vboxvideo/hgsmi_channels.h @@ -1,24 +1,5 @@ -/* - * Copyright (C) 2006-2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. - */ +/* SPDX-License-Identifier: MIT */ +/* Copyright (C) 2006-2017 Oracle Corporation */ #ifndef __HGSMI_CHANNELS_H__ #define __HGSMI_CHANNELS_H__ diff --git a/drivers/staging/vboxvideo/hgsmi_defs.h b/drivers/staging/vboxvideo/hgsmi_defs.h index 5b21fb974d20..6c8df1cdb087 100644 --- a/drivers/staging/vboxvideo/hgsmi_defs.h +++ b/drivers/staging/vboxvideo/hgsmi_defs.h @@ -1,24 +1,5 @@ -/* - * Copyright (C) 2006-2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. 
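Editorial note: earlier in this patch, visorhba_main.c replaces a hand-written debugfs open/read/release trio with DEFINE_SHOW_ATTRIBUTE(), which generates <name>_open and <name>_fops from an existing <name>_show() function. A minimal sketch of the pattern:

    #include <linux/debugfs.h>
    #include <linux/seq_file.h>

    static int info_debugfs_show(struct seq_file *seq, void *v)
    {
        seq_puts(seq, "example state dump\n");  /* driver-specific body */
        return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(info_debugfs);  /* emits info_debugfs_fops */

    /* usage sketch:
     *   debugfs_create_file("info", 0444, parent, NULL, &info_debugfs_fops);
     */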
- */
+/* SPDX-License-Identifier: MIT */
+/* Copyright (C) 2006-2017 Oracle Corporation */
 
 #ifndef __HGSMI_DEFS_H__
 #define __HGSMI_DEFS_H__
diff --git a/drivers/staging/vboxvideo/modesetting.c b/drivers/staging/vboxvideo/modesetting.c
index 7616b8aab23a..7580b9002379 100644
--- a/drivers/staging/vboxvideo/modesetting.c
+++ b/drivers/staging/vboxvideo/modesetting.c
@@ -1,27 +1,8 @@
-/*
- * Copyright (C) 2006-2017 Oracle Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- */
+// SPDX-License-Identifier: MIT
+/* Copyright (C) 2006-2017 Oracle Corporation */
 
+#include <linux/vbox_err.h>
 #include "vbox_drv.h"
-#include "vbox_err.h"
 #include "vboxvideo_guest.h"
 #include "vboxvideo_vbe.h"
 #include "hgsmi_channels.h"
@@ -30,18 +11,18 @@
  * Set a video mode via an HGSMI request. The views must have been
  * initialised first using @a VBoxHGSMISendViewInfo and if the mode is being
  * set on the first display then it must be set first using registers.
- * @param ctx The context containing the heap to use
- * @param display The screen number
- * @param origin_x The horizontal displacement relative to the first scrn
- * @param origin_y The vertical displacement relative to the first screen
- * @param start_offset The offset of the visible area of the framebuffer
- *                     relative to the framebuffer start
- * @param pitch The offset in bytes between the starts of two adjecent
- *              scan lines in video RAM
- * @param width The mode width
- * @param height The mode height
- * @param bpp The colour depth of the mode
- * @param flags Flags
+ * @ctx: The context containing the heap to use.
+ * @display: The screen number.
+ * @origin_x: The horizontal displacement relative to the first scrn.
+ * @origin_y: The vertical displacement relative to the first screen.
+ * @start_offset: The offset of the visible area of the framebuffer
+ *                relative to the framebuffer start.
+ * @pitch: The offset in bytes between the starts of two adjecent
+ *         scan lines in video RAM.
+ * @width: The mode width.
+ * @height: The mode height.
+ * @bpp: The colour depth of the mode.
+ * @flags: Flags.
  */
 void hgsmi_process_display_info(struct gen_pool *ctx, u32 display,
 				s32 origin_x, s32 origin_y, u32 start_offset,
@@ -74,12 +55,12 @@ void hgsmi_process_display_info(struct gen_pool *ctx, u32 display,
  * expressed. This information remains valid until the next VBVA resize event
  * for any screen, at which time it is reset to the bounding rectangle of all
  * virtual screens.
- * @param ctx The context containing the heap to use.
- * @param origin_x Upper left X co-ordinate relative to the first screen. - * @param origin_y Upper left Y co-ordinate relative to the first screen. - * @param width Rectangle width. - * @param height Rectangle height. - * @returns 0 on success, -errno on failure + * Return: 0 or negative errno value. + * @ctx: The context containing the heap to use. + * @origin_x: Upper left X co-ordinate relative to the first screen. + * @origin_y: Upper left Y co-ordinate relative to the first screen. + * @width: Rectangle width. + * @height: Rectangle height. */ int hgsmi_update_input_mapping(struct gen_pool *ctx, s32 origin_x, s32 origin_y, u32 width, u32 height) @@ -104,10 +85,10 @@ int hgsmi_update_input_mapping(struct gen_pool *ctx, s32 origin_x, s32 origin_y, /** * Get most recent video mode hints. - * @param ctx The context containing the heap to use. - * @param screens The number of screens to query hints for, starting at 0. - * @param hints Array of vbva_modehint structures for receiving the hints. - * @returns 0 on success, -errno on failure + * Return: 0 or negative errno value. + * @ctx: The context containing the heap to use. + * @screens: The number of screens to query hints for, starting at 0. + * @hints: Array of vbva_modehint structures for receiving the hints. */ int hgsmi_get_mode_hints(struct gen_pool *ctx, unsigned int screens, struct vbva_modehint *hints) @@ -130,7 +111,7 @@ int hgsmi_get_mode_hints(struct gen_pool *ctx, unsigned int screens, hgsmi_buffer_submit(ctx, p); - if (RT_FAILURE(p->rc)) { + if (p->rc < 0) { hgsmi_buffer_free(ctx, p); return -EIO; } diff --git a/drivers/staging/vboxvideo/vbox_drv.c b/drivers/staging/vboxvideo/vbox_drv.c index d3e23dd70c1b..cc6532d8c2fa 100644 --- a/drivers/staging/vboxvideo/vbox_drv.c +++ b/drivers/staging/vboxvideo/vbox_drv.c @@ -1,28 +1,8 @@ +// SPDX-License-Identifier: MIT /* * Copyright (C) 2013-2017 Oracle Corporation * This file is based on ast_drv.c * Copyright 2012 Red Hat Inc. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. 
- * * Authors: Dave Airlie * Michael Thayer @@ -31,7 +11,6 @@ #include #include -#include #include #include "vbox_drv.h" @@ -44,8 +23,8 @@ module_param_named(modeset, vbox_modeset, int, 0400); static struct drm_driver driver; static const struct pci_device_id pciidlist[] = { - { 0x80ee, 0xbeef, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, - { 0, 0, 0}, + { PCI_DEVICE(0x80ee, 0xbeef) }, + { } }; MODULE_DEVICE_TABLE(pci, pciidlist); @@ -138,6 +117,7 @@ static void vbox_pci_remove(struct pci_dev *pdev) drm_dev_put(&vbox->ddev); } +#ifdef CONFIG_PM_SLEEP static int vbox_pm_suspend(struct device *dev) { struct vbox_private *vbox = dev_get_drvdata(dev); @@ -193,13 +173,16 @@ static const struct dev_pm_ops vbox_pm_ops = { .poweroff = vbox_pm_poweroff, .restore = vbox_pm_resume, }; +#endif static struct pci_driver vbox_pci_driver = { .name = DRIVER_NAME, .id_table = pciidlist, .probe = vbox_pci_probe, .remove = vbox_pci_remove, +#ifdef CONFIG_PM_SLEEP .driver.pm = &vbox_pm_ops, +#endif }; static const struct file_operations vbox_fops = { @@ -207,11 +190,9 @@ static const struct file_operations vbox_fops = { .open = drm_open, .release = drm_release, .unlocked_ioctl = drm_ioctl, + .compat_ioctl = drm_compat_ioctl, .mmap = vbox_mmap, .poll = drm_poll, -#ifdef CONFIG_COMPAT - .compat_ioctl = drm_compat_ioctl, -#endif .read = drm_read, }; @@ -227,21 +208,6 @@ static int vbox_master_set(struct drm_device *dev, */ vbox->initial_mode_queried = false; - mutex_lock(&vbox->hw_mutex); - /* - * Disable VBVA when someone releases master in case the next person - * tries tries to do VESA. - */ - /** @todo work out if anyone is likely to and whether it will work. */ - /* - * Update: we also disable it because if the new master does not do - * dirty rectangle reporting (e.g. old versions of Plymouth) then at - * least the first screen will still be updated. We enable it as soon - * as we receive a dirty rectangle report. - */ - vbox_disable_accel(vbox); - mutex_unlock(&vbox->hw_mutex); - return 0; } @@ -251,10 +217,6 @@ static void vbox_master_drop(struct drm_device *dev, struct drm_file *file_priv) /* See vbox_master_set() */ vbox->initial_mode_queried = false; - - mutex_lock(&vbox->hw_mutex); - vbox_disable_accel(vbox); - mutex_unlock(&vbox->hw_mutex); } static struct drm_driver driver = { @@ -314,5 +276,6 @@ module_init(vbox_init); module_exit(vbox_exit); MODULE_AUTHOR("Oracle Corporation"); +MODULE_AUTHOR("Hans de Goede "); MODULE_DESCRIPTION(DRIVER_DESC); MODULE_LICENSE("GPL and additional rights"); diff --git a/drivers/staging/vboxvideo/vbox_drv.h b/drivers/staging/vboxvideo/vbox_drv.h index fa933d422951..aa40e5cc2861 100644 --- a/drivers/staging/vboxvideo/vbox_drv.h +++ b/drivers/staging/vboxvideo/vbox_drv.h @@ -1,28 +1,8 @@ +/* SPDX-License-Identifier: MIT */ /* * Copyright (C) 2013-2017 Oracle Corporation * This file is based on ast_drv.h * Copyright 2012 Red Hat Inc. 
- * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. - * * Authors: Dave Airlie * Michael Thayer @@ -32,10 +12,10 @@ #include #include +#include #include #include -#include #include #include #include @@ -89,11 +69,11 @@ struct vbox_private { struct vbva_buf_ctx *vbva_info; bool any_pitch; u32 num_crtcs; - /** Amount of available VRAM, including space used for buffers. */ + /* Amount of available VRAM, including space used for buffers. */ u32 full_vram_size; - /** Amount of available VRAM, not including space used for buffers. */ + /* Amount of available VRAM, not including space used for buffers. */ u32 available_vram_size; - /** Array of structures for receiving mode hints. */ + /* Array of structures for receiving mode hints. */ struct vbva_modehint *last_mode_hints; int fb_mtrr; @@ -103,7 +83,7 @@ struct vbox_private { } ttm; struct mutex hw_mutex; /* protects modeset and accel/vbva accesses */ - /** + /* * We decide whether or not user-space supports display hot-plug * depending on whether they react to a hot-plug event after the initial * mode query. @@ -112,7 +92,7 @@ struct vbox_private { struct work_struct hotplug_work; u32 input_mapping_width; u32 input_mapping_height; - /** + /* * Is user-space using an X.Org-style layout of one large frame-buffer * encompassing all screen ones or is the fbdev console active? 
 */
@@ -182,10 +162,6 @@ void vbox_hw_fini(struct vbox_private *vbox);
 int vbox_mode_init(struct vbox_private *vbox);
 void vbox_mode_fini(struct vbox_private *vbox);
 
-#define DRM_MODE_FB_CMD drm_mode_fb_cmd2
-
-void vbox_enable_accel(struct vbox_private *vbox);
-void vbox_disable_accel(struct vbox_private *vbox);
 void vbox_report_caps(struct vbox_private *vbox);
 
 void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb,
@@ -194,7 +170,7 @@ void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb,
 
 int vbox_framebuffer_init(struct vbox_private *vbox,
 			  struct vbox_framebuffer *vbox_fb,
-			  const struct DRM_MODE_FB_CMD *mode_cmd,
+			  const struct drm_mode_fb_cmd2 *mode_cmd,
 			  struct drm_gem_object *obj);
 
 int vboxfb_create(struct drm_fb_helper *helper,
diff --git a/drivers/staging/vboxvideo/vbox_err.h b/drivers/staging/vboxvideo/vbox_err.h
deleted file mode 100644
index 562db8630eb0..000000000000
--- a/drivers/staging/vboxvideo/vbox_err.h
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Copyright (C) 2017 Oracle Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __VBOX_ERR_H__
-#define __VBOX_ERR_H__
-
-/**
- * @name VirtualBox virtual-hardware error macros
- * @{
- */
-
-#define VINF_SUCCESS 0
-#define VERR_INVALID_PARAMETER (-2)
-#define VERR_INVALID_POINTER (-6)
-#define VERR_NO_MEMORY (-8)
-#define VERR_NOT_IMPLEMENTED (-12)
-#define VERR_INVALID_FUNCTION (-36)
-#define VERR_NOT_SUPPORTED (-37)
-#define VERR_TOO_MUCH_DATA (-42)
-#define VERR_INVALID_STATE (-79)
-#define VERR_OUT_OF_RESOURCES (-80)
-#define VERR_ALREADY_EXISTS (-105)
-#define VERR_INTERNAL_ERROR (-225)
-
-#define RT_SUCCESS_NP(rc) ((int)(rc) >= VINF_SUCCESS)
-#define RT_SUCCESS(rc) (likely(RT_SUCCESS_NP(rc)))
-#define RT_FAILURE(rc) (unlikely(!RT_SUCCESS_NP(rc)))
-
-/** @} */
-
-#endif
diff --git a/drivers/staging/vboxvideo/vbox_fb.c b/drivers/staging/vboxvideo/vbox_fb.c
index d1a1f74c8de3..6b7aa23dfc0a 100644
--- a/drivers/staging/vboxvideo/vbox_fb.c
+++ b/drivers/staging/vboxvideo/vbox_fb.c
@@ -1,28 +1,8 @@
+// SPDX-License-Identifier: MIT
 /*
  * Copyright (C) 2013-2017 Oracle Corporation
  * This file is based on ast_fb.c
  * Copyright 2012 Red Hat Inc.
- * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. - * * Authors: Dave Airlie * Michael Thayer #include -#include #include #include #include @@ -54,16 +33,10 @@ static struct fb_deferred_io vbox_defio = { static struct fb_ops vboxfb_ops = { .owner = THIS_MODULE, - .fb_check_var = drm_fb_helper_check_var, - .fb_set_par = drm_fb_helper_set_par, + DRM_FB_HELPER_DEFAULT_OPS, .fb_fillrect = drm_fb_helper_sys_fillrect, .fb_copyarea = drm_fb_helper_sys_copyarea, .fb_imageblit = drm_fb_helper_sys_imageblit, - .fb_pan_display = drm_fb_helper_pan_display, - .fb_blank = drm_fb_helper_blank, - .fb_setcmap = drm_fb_helper_setcmap, - .fb_debug_enter = drm_fb_helper_debug_enter, - .fb_debug_leave = drm_fb_helper_debug_leave, }; int vboxfb_create(struct drm_fb_helper *helper, @@ -72,7 +45,7 @@ int vboxfb_create(struct drm_fb_helper *helper, struct vbox_private *vbox = container_of(helper, struct vbox_private, fb_helper); struct pci_dev *pdev = vbox->ddev.pdev; - struct DRM_MODE_FB_CMD mode_cmd; + struct drm_mode_fb_cmd2 mode_cmd; struct drm_framebuffer *fb; struct fb_info *info; struct drm_gem_object *gobj; diff --git a/drivers/staging/vboxvideo/vbox_hgsmi.c b/drivers/staging/vboxvideo/vbox_hgsmi.c index 822fd31121cb..94b60654a012 100644 --- a/drivers/staging/vboxvideo/vbox_hgsmi.c +++ b/drivers/staging/vboxvideo/vbox_hgsmi.c @@ -1,26 +1,6 @@ +// SPDX-License-Identifier: MIT /* * Copyright (C) 2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. 
- * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. - * * Authors: Hans de Goede */ diff --git a/drivers/staging/vboxvideo/vbox_irq.c b/drivers/staging/vboxvideo/vbox_irq.c index 09f858ec1369..f3d9895c79d8 100644 --- a/drivers/staging/vboxvideo/vbox_irq.c +++ b/drivers/staging/vboxvideo/vbox_irq.c @@ -1,26 +1,8 @@ +// SPDX-License-Identifier: MIT /* * Copyright (C) 2016-2017 Oracle Corporation * This file is based on qxl_irq.c * Copyright 2013 Red Hat Inc. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. - * * Authors: Dave Airlie * Alon Levy * Michael Thayer ddev; diff --git a/drivers/staging/vboxvideo/vbox_main.c b/drivers/staging/vboxvideo/vbox_main.c index 7466c1103ff6..e1fb70a42d32 100644 --- a/drivers/staging/vboxvideo/vbox_main.c +++ b/drivers/staging/vboxvideo/vbox_main.c @@ -1,37 +1,18 @@ +// SPDX-License-Identifier: MIT /* * Copyright (C) 2013-2017 Oracle Corporation * This file is based on ast_main.c * Copyright 2012 Red Hat Inc. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. 
- * * Authors: Dave Airlie , * Michael Thayer */ + +#include #include #include #include "vbox_drv.h" -#include "vbox_err.h" #include "vboxvideo_guest.h" #include "vboxvideo_vbe.h" @@ -46,40 +27,6 @@ static void vbox_user_framebuffer_destroy(struct drm_framebuffer *fb) kfree(fb); } -void vbox_enable_accel(struct vbox_private *vbox) -{ - unsigned int i; - struct vbva_buffer *vbva; - - if (!vbox->vbva_info || !vbox->vbva_buffers) { - /* Should never happen... */ - DRM_ERROR("vboxvideo: failed to set up VBVA.\n"); - return; - } - - for (i = 0; i < vbox->num_crtcs; ++i) { - if (vbox->vbva_info[i].vbva) - continue; - - vbva = (void __force *)vbox->vbva_buffers + - i * VBVA_MIN_BUFFER_SIZE; - if (!vbva_enable(&vbox->vbva_info[i], - vbox->guest_pool, vbva, i)) { - /* very old host or driver error. */ - DRM_ERROR("vboxvideo: vbva_enable failed\n"); - return; - } - } -} - -void vbox_disable_accel(struct vbox_private *vbox) -{ - unsigned int i; - - for (i = 0; i < vbox->num_crtcs; ++i) - vbva_disable(&vbox->vbva_info[i], vbox->guest_pool, i); -} - void vbox_report_caps(struct vbox_private *vbox) { u32 caps = VBVACAPS_DISABLE_CURSOR_INTEGRATION | @@ -91,12 +38,7 @@ void vbox_report_caps(struct vbox_private *vbox) hgsmi_send_caps_info(vbox->guest_pool, caps); } -/** - * Send information about dirty rectangles to VBVA. If necessary we enable - * VBVA first, as this is normally disabled after a change of master in case - * the new master does not send dirty rectangle information (is this even - * allowed?) - */ +/* Send information about dirty rectangles to VBVA. */ void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb, struct drm_clip_rect *rects, unsigned int num_rects) @@ -116,16 +58,14 @@ void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb, crtc_x = crtc->primary->state->src_x >> 16; crtc_y = crtc->primary->state->src_y >> 16; - vbox_enable_accel(vbox); - for (i = 0; i < num_rects; ++i) { struct vbva_cmd_hdr cmd_hdr; unsigned int crtc_id = to_vbox_crtc(crtc)->crtc_id; - if ((rects[i].x1 > crtc_x + mode->hdisplay) || - (rects[i].y1 > crtc_y + mode->vdisplay) || - (rects[i].x2 < crtc_x) || - (rects[i].y2 < crtc_y)) + if (rects[i].x1 > crtc_x + mode->hdisplay || + rects[i].y1 > crtc_y + mode->vdisplay || + rects[i].x2 < crtc_x || + rects[i].y2 < crtc_y) continue; cmd_hdr.x = (s16)rects[i].x1; @@ -163,7 +103,7 @@ static const struct drm_framebuffer_funcs vbox_fb_funcs = { int vbox_framebuffer_init(struct vbox_private *vbox, struct vbox_framebuffer *vbox_fb, - const struct DRM_MODE_FB_CMD *mode_cmd, + const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object *obj) { int ret; @@ -181,6 +121,7 @@ int vbox_framebuffer_init(struct vbox_private *vbox, static int vbox_accel_init(struct vbox_private *vbox) { + struct vbva_buffer *vbva; unsigned int i; vbox->vbva_info = devm_kcalloc(vbox->ddev.dev, vbox->num_crtcs, @@ -198,22 +139,34 @@ static int vbox_accel_init(struct vbox_private *vbox) if (!vbox->vbva_buffers) return -ENOMEM; - for (i = 0; i < vbox->num_crtcs; ++i) + for (i = 0; i < vbox->num_crtcs; ++i) { vbva_setup_buffer_context(&vbox->vbva_info[i], vbox->available_vram_size + i * VBVA_MIN_BUFFER_SIZE, VBVA_MIN_BUFFER_SIZE); + vbva = (void __force *)vbox->vbva_buffers + + i * VBVA_MIN_BUFFER_SIZE; + if (!vbva_enable(&vbox->vbva_info[i], + vbox->guest_pool, vbva, i)) { + /* very old host or driver error. 
*/ + DRM_ERROR("vboxvideo: vbva_enable failed\n"); + } + } return 0; } static void vbox_accel_fini(struct vbox_private *vbox) { - vbox_disable_accel(vbox); + unsigned int i; + + for (i = 0; i < vbox->num_crtcs; ++i) + vbva_disable(&vbox->vbva_info[i], vbox->guest_pool, i); + pci_iounmap(vbox->ddev.pdev, vbox->vbva_buffers); } -/** Do we support the 4.3 plus mode hint reporting interface? */ +/* Do we support the 4.3 plus mode hint reporting interface? */ static bool have_hgsmi_mode_hints(struct vbox_private *vbox) { u32 have_hints, have_cursor; @@ -244,10 +197,6 @@ bool vbox_check_supported(u16 id) return dispi_id == id; } -/** - * Set up our heaps and data exchange buffers in VRAM before handing the rest - * to the memory manager. - */ int vbox_hw_init(struct vbox_private *vbox) { int ret = -ENOMEM; diff --git a/drivers/staging/vboxvideo/vbox_mode.c b/drivers/staging/vboxvideo/vbox_mode.c index 6acc965247ff..c43bec4628ae 100644 --- a/drivers/staging/vboxvideo/vbox_mode.c +++ b/drivers/staging/vboxvideo/vbox_mode.c @@ -1,32 +1,10 @@ +// SPDX-License-Identifier: MIT /* * Copyright (C) 2013-2017 Oracle Corporation * This file is based on ast_mode.c * Copyright 2012 Red Hat Inc. * Parts based on xf86-video-ast * Copyright (c) 2005 ASPEED Technology Inc. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. - * - */ -/* * Authors: Dave Airlie * Michael Thayer @@ -41,7 +19,7 @@ #include "vboxvideo.h" #include "hgsmi_channels.h" -/** +/* * Set a graphics mode. Poke any required values into registers, do an HGSMI * mode set and tell the host we support advanced graphics functions. */ @@ -84,7 +62,7 @@ static void vbox_do_modeset(struct drm_crtc *crtc) } flags = VBVA_SCREEN_F_ACTIVE; - flags |= (fb && crtc->state->active) ? 0 : VBVA_SCREEN_F_BLANK; + flags |= (fb && crtc->state->enable) ? 0 : VBVA_SCREEN_F_BLANK; flags |= vbox_crtc->disconnected ? 
VBVA_SCREEN_F_DISABLED : 0; hgsmi_process_display_info(vbox->guest_pool, vbox_crtc->crtc_id, x_offset, y_offset, @@ -190,7 +168,6 @@ static bool vbox_set_up_input_mapping(struct vbox_private *vbox) static void vbox_crtc_set_base_and_mode(struct drm_crtc *crtc, struct drm_framebuffer *fb, - struct drm_display_mode *mode, int x, int y) { struct vbox_bo *bo = gem_to_vbox_bo(to_vbox_framebuffer(fb)->obj); @@ -200,8 +177,11 @@ static void vbox_crtc_set_base_and_mode(struct drm_crtc *crtc, mutex_lock(&vbox->hw_mutex); - vbox_crtc->width = mode->hdisplay; - vbox_crtc->height = mode->vdisplay; + if (crtc->state->enable) { + vbox_crtc->width = crtc->state->mode.hdisplay; + vbox_crtc->height = crtc->state->mode.vdisplay; + } + vbox_crtc->x = x; vbox_crtc->y = y; vbox_crtc->fb_offset = vbox_bo_gpu_offset(bo); @@ -301,7 +281,7 @@ static void vbox_primary_atomic_update(struct drm_plane *plane, struct drm_crtc *crtc = plane->state->crtc; struct drm_framebuffer *fb = plane->state->fb; - vbox_crtc_set_base_and_mode(crtc, fb, &crtc->state->mode, + vbox_crtc_set_base_and_mode(crtc, fb, plane->state->src_x >> 16, plane->state->src_y >> 16); } @@ -312,7 +292,7 @@ static void vbox_primary_atomic_disable(struct drm_plane *plane, struct drm_crtc *crtc = old_state->crtc; /* vbox_do_modeset checks plane->state->fb and will disable if NULL */ - vbox_crtc_set_base_and_mode(crtc, old_state->fb, &crtc->state->mode, + vbox_crtc_set_base_and_mode(crtc, old_state->fb, old_state->src_x >> 16, old_state->src_y >> 16); } @@ -378,7 +358,7 @@ static int vbox_cursor_atomic_check(struct drm_plane *plane, return 0; } -/** +/* * Copy the ARGB image and generate the mask, which is needed in case the host * does not support ARGB cursors. The mask is a 1BPP bitmap with the bit set * if the corresponding alpha value in the ARGB image is greater than 0xF0. @@ -499,7 +479,7 @@ static void vbox_cursor_cleanup_fb(struct drm_plane *plane, vbox_bo_unpin(bo); } -static const uint32_t vbox_cursor_plane_formats[] = { +static const u32 vbox_cursor_plane_formats[] = { DRM_FORMAT_ARGB8888, }; @@ -520,7 +500,7 @@ static const struct drm_plane_funcs vbox_cursor_plane_funcs = { .atomic_destroy_state = drm_atomic_helper_plane_destroy_state, }; -static const uint32_t vbox_primary_plane_formats[] = { +static const u32 vbox_primary_plane_formats[] = { DRM_FORMAT_XRGB8888, DRM_FORMAT_ARGB8888, }; @@ -549,7 +529,7 @@ static struct drm_plane *vbox_create_plane(struct vbox_private *vbox, const struct drm_plane_helper_funcs *helper_funcs = NULL; const struct drm_plane_funcs *funcs; struct drm_plane *plane; - const uint32_t *formats; + const u32 *formats; int num_formats; int err; @@ -672,11 +652,11 @@ static struct drm_encoder *vbox_encoder_init(struct drm_device *dev, return &vbox_encoder->base; } -/** +/* * Generate EDID data with a mode-unique serial number for the virtual - * monitor to try to persuade Unity that different modes correspond to - * different monitors and it should not try to force the same resolution on - * them. + * monitor to try to persuade Unity that different modes correspond to + * different monitors and it should not try to force the same resolution on + * them. 
*/ static void vbox_set_edid(struct drm_connector *connector, int width, int height) diff --git a/drivers/staging/vboxvideo/vbox_prime.c b/drivers/staging/vboxvideo/vbox_prime.c index b7453e427a1d..d61985b0c6eb 100644 --- a/drivers/staging/vboxvideo/vbox_prime.c +++ b/drivers/staging/vboxvideo/vbox_prime.c @@ -1,25 +1,7 @@ +// SPDX-License-Identifier: MIT /* * Copyright (C) 2017 Oracle Corporation * Copyright 2017 Canonical - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. - * * Authors: Andreas Pokorny */ diff --git a/drivers/staging/vboxvideo/vbox_ttm.c b/drivers/staging/vboxvideo/vbox_ttm.c index b36ec019c332..30f270027acf 100644 --- a/drivers/staging/vboxvideo/vbox_ttm.c +++ b/drivers/staging/vboxvideo/vbox_ttm.c @@ -1,34 +1,15 @@ +// SPDX-License-Identifier: MIT /* * Copyright (C) 2013-2017 Oracle Corporation * This file is based on ast_ttm.c * Copyright 2012 Red Hat Inc. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. 
- * - * * Authors: Dave Airlie * Michael Thayer */ +#include +#include +#include #include "vbox_drv.h" -#include static inline struct vbox_private *vbox_bdev(struct ttm_bo_device *bd) { diff --git a/drivers/staging/vboxvideo/vboxvideo.h b/drivers/staging/vboxvideo/vboxvideo.h index d835d75d761c..0592004f71aa 100644 --- a/drivers/staging/vboxvideo/vboxvideo.h +++ b/drivers/staging/vboxvideo/vboxvideo.h @@ -1,33 +1,9 @@ -/* - * Copyright (C) 2006-2016 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - */ +/* SPDX-License-Identifier: MIT */ +/* Copyright (C) 2006-2016 Oracle Corporation */ #ifndef __VBOXVIDEO_H__ #define __VBOXVIDEO_H__ -/* - * This should be in sync with monitorCount in - * src/VBox/Main/xml/VirtualBox-settings-common.xsd - */ #define VBOX_VIDEO_MAX_SCREENS 64 /* @@ -77,21 +53,14 @@ * read 32 bit value result of the last vbox command is returned */ -/** - * VBVA command header. - * - * @todo Where does this fit in? - */ struct vbva_cmd_hdr { - /** Coordinates of affected rectangle. */ s16 x; s16 y; u16 w; u16 h; } __packed; -/** @name VBVA ring defines. - * +/* * The VBVA ring buffer is suitable for transferring large (< 2GB) amount of * data. For example big bitmaps which do not fit to the buffer. * @@ -106,8 +75,8 @@ struct vbva_cmd_hdr { * VBVA_RING_BUFFER_THRESHOLD, the host fetched all record data and updates * data_offset. After that on each flush the host continues fetching the data * until the record is completed. - * */ + #define VBVA_RING_BUFFER_SIZE (4194304 - 1024) #define VBVA_RING_BUFFER_THRESHOLD (4096) @@ -122,11 +91,7 @@ struct vbva_cmd_hdr { #define VBVA_F_RECORD_PARTIAL 0x80000000u -/** - * VBVA record. - */ struct vbva_record { - /** The length of the record. Changed by guest. */ u32 len_and_flags; } __packed; @@ -144,7 +109,8 @@ struct vbva_record { /* The value for port IO to let the adapter to interpret the adapter memory. */ #define VBOX_VIDEO_INTERPRET_ADAPTER_MEMORY 0x00000000 -/* The value for port IO to let the adapter to interpret the display memory. +/* + * The value for port IO to let the adapter to interpret the display memory. * The display number is encoded in low 16 bits. 
*/ #define VBOX_VIDEO_INTERPRET_DISPLAY_MEMORY_BASE 0x00010000 @@ -200,12 +166,12 @@ struct vbva_buffer { #define VBVA_CMDVBVA_CTL 18 /* Query most recent mode hints sent */ #define VBVA_QUERY_MODE_HINTS 19 -/** +/* * Report the guest virtual desktop position and size for mapping host and * guest pointer positions. */ #define VBVA_REPORT_INPUT_MAPPING 20 -/** Report the guest cursor position and query the host position. */ +/* Report the guest cursor position and query the host position. */ #define VBVA_CURSOR_POSITION 21 /* host->guest commands */ @@ -215,25 +181,24 @@ struct vbva_buffer { /* vbva_conf32::index */ #define VBOX_VBVA_CONF32_MONITOR_COUNT 0 #define VBOX_VBVA_CONF32_HOST_HEAP_SIZE 1 -/** +/* * Returns VINF_SUCCESS if the host can report mode hints via VBVA. * Set value to VERR_NOT_SUPPORTED before calling. */ #define VBOX_VBVA_CONF32_MODE_HINT_REPORTING 2 -/** +/* * Returns VINF_SUCCESS if the host can report guest cursor enabled status via * VBVA. Set value to VERR_NOT_SUPPORTED before calling. */ #define VBOX_VBVA_CONF32_GUEST_CURSOR_REPORTING 3 -/** +/* * Returns the currently available host cursor capabilities. Available if - * vbva_conf32::VBOX_VBVA_CONF32_GUEST_CURSOR_REPORTING returns success. - * @see VMMDevReqMouseStatus::mouseFeatures. + * VBOX_VBVA_CONF32_GUEST_CURSOR_REPORTING returns success. */ #define VBOX_VBVA_CONF32_CURSOR_CAPABILITIES 4 -/** Returns the supported flags in vbva_infoscreen::flags. */ +/* Returns the supported flags in vbva_infoscreen.flags. */ #define VBOX_VBVA_CONF32_SCREEN_FLAGS 5 -/** Returns the max size of VBVA record. */ +/* Returns the max size of VBVA record. */ #define VBOX_VBVA_CONF32_MAX_RECORD_SIZE 6 struct vbva_conf32 { @@ -241,20 +206,20 @@ struct vbva_conf32 { u32 value; } __packed; -/** Reserved for historical reasons. */ +/* Reserved for historical reasons. */ #define VBOX_VBVA_CURSOR_CAPABILITY_RESERVED0 BIT(0) -/** +/* * Guest cursor capability: can the host show a hardware cursor at the host * pointer location? */ #define VBOX_VBVA_CURSOR_CAPABILITY_HARDWARE BIT(1) -/** Reserved for historical reasons. */ +/* Reserved for historical reasons. */ #define VBOX_VBVA_CURSOR_CAPABILITY_RESERVED2 BIT(2) -/** Reserved for historical reasons. Must always be unset. */ +/* Reserved for historical reasons. Must always be unset. */ #define VBOX_VBVA_CURSOR_CAPABILITY_RESERVED3 BIT(3) -/** Reserved for historical reasons. */ +/* Reserved for historical reasons. */ #define VBOX_VBVA_CURSOR_CAPABILITY_RESERVED4 BIT(4) -/** Reserved for historical reasons. */ +/* Reserved for historical reasons. */ #define VBOX_VBVA_CURSOR_CAPABILITY_RESERVED5 BIT(5) struct vbva_infoview { @@ -275,21 +240,21 @@ struct vbva_flush { u32 reserved; } __packed; -/* vbva_infoscreen::flags */ +/* vbva_infoscreen.flags */ #define VBVA_SCREEN_F_NONE 0x0000 #define VBVA_SCREEN_F_ACTIVE 0x0001 -/** +/* * The virtual monitor has been disabled by the guest and should be removed * by the host and ignored for purposes of pointer position calculation. */ #define VBVA_SCREEN_F_DISABLED 0x0002 -/** +/* * The virtual monitor has been blanked by the guest and should be blacked * out by the host using width, height, etc values from the vbva_infoscreen * request. */ #define VBVA_SCREEN_F_BLANK 0x0004 -/** +/* * The virtual monitor has been blanked by the guest and should be blacked * out by the host using the previous mode values for width. height, etc. 
*/ @@ -324,7 +289,7 @@ struct vbva_infoscreen { u16 flags; } __packed; -/* vbva_enable::flags */ +/* vbva_enable.flags */ #define VBVA_F_NONE 0x00000000 #define VBVA_F_ENABLE 0x00000001 #define VBVA_F_DISABLE 0x00000002 @@ -365,7 +330,6 @@ struct vbva_mouse_pointer_shape { /* Pointer data. * - **** * The data consists of 1 bpp AND mask followed by 32 bpp XOR (color) * mask. * @@ -387,30 +351,19 @@ struct vbva_mouse_pointer_shape { * Bytes in the gap between the AND and the XOR mask are undefined. * XOR mask scanlines have no gap between them and size of XOR mask is: * xor_len = width * 4 * height. - **** * * Preallocate 4 bytes for accessing actual data as p->data. */ u8 data[4]; } __packed; -/** - * @name vbva_mouse_pointer_shape::flags - * @note The VBOX_MOUSE_POINTER_* flags are used in the guest video driver, - * values must be <= 0x8000 and must not be changed. (try make more sense - * of this, please). - * @{ - */ - -/** pointer is visible */ +/* pointer is visible */ #define VBOX_MOUSE_POINTER_VISIBLE 0x0001 -/** pointer has alpha channel */ +/* pointer has alpha channel */ #define VBOX_MOUSE_POINTER_ALPHA 0x0002 -/** pointerData contains new pointer shape */ +/* pointerData contains new pointer shape */ #define VBOX_MOUSE_POINTER_SHAPE 0x0004 -/** @} */ - /* * The guest driver can handle asynch guest cmd completion by reading the * command offset from io port. @@ -418,11 +371,11 @@ struct vbva_mouse_pointer_shape { #define VBVACAPS_COMPLETEGCMD_BY_IOREAD 0x00000001 /* the guest driver can handle video adapter IRQs */ #define VBVACAPS_IRQ 0x00000002 -/** The guest can read video mode hints sent via VBVA. */ +/* The guest can read video mode hints sent via VBVA. */ #define VBVACAPS_VIDEO_MODE_HINTS 0x00000004 -/** The guest can switch to a software cursor on demand. */ +/* The guest can switch to a software cursor on demand. */ #define VBVACAPS_DISABLE_CURSOR_INTEGRATION 0x00000008 -/** The guest does not depend on host handling the VBE registers. */ +/* The guest does not depend on host handling the VBE registers. */ #define VBVACAPS_USE_VBVA_ONLY 0x00000010 struct vbva_caps { @@ -430,17 +383,17 @@ struct vbva_caps { u32 caps; } __packed; -/** Query the most recent mode hints received from the host. */ +/* Query the most recent mode hints received from the host. */ struct vbva_query_mode_hints { - /** The maximum number of screens to return hints for. */ + /* The maximum number of screens to return hints for. */ u16 hints_queried_count; - /** The size of the mode hint structures directly following this one. */ + /* The size of the mode hint structures directly following this one. */ u16 hint_structure_guest_size; - /** Return code for the operation. Initialise to VERR_NOT_SUPPORTED. */ + /* Return code for the operation. Initialise to VERR_NOT_SUPPORTED. */ s32 rc; } __packed; -/** +/* * Structure in which a mode hint is returned. The guest allocates an array * of these immediately after the vbva_query_mode_hints structure. * To accommodate future extensions, the vbva_query_mode_hints structure @@ -455,37 +408,35 @@ struct vbva_modehint { u32 cy; u32 bpp; /* Which has never been used... */ u32 display; - u32 dx; /**< X offset into the virtual frame-buffer. */ - u32 dy; /**< Y offset into the virtual frame-buffer. */ + u32 dx; /* X offset into the virtual frame-buffer. */ + u32 dy; /* Y offset into the virtual frame-buffer. */ u32 enabled; /* Not flags. Add new members for new flags. 
*/ } __packed; #define VBVAMODEHINT_MAGIC 0x0801add9u -/** +/* * Report the rectangle relative to which absolute pointer events should be * expressed. This information remains valid until the next VBVA resize event * for any screen, at which time it is reset to the bounding rectangle of all * virtual screens and must be re-set. - * @see VBVA_REPORT_INPUT_MAPPING. */ struct vbva_report_input_mapping { - s32 x; /**< Upper left X co-ordinate relative to the first screen. */ - s32 y; /**< Upper left Y co-ordinate relative to the first screen. */ - u32 cx; /**< Rectangle width. */ - u32 cy; /**< Rectangle height. */ + s32 x; /* Upper left X co-ordinate relative to the first screen. */ + s32 y; /* Upper left Y co-ordinate relative to the first screen. */ + u32 cx; /* Rectangle width. */ + u32 cy; /* Rectangle height. */ } __packed; -/** +/* * Report the guest cursor position and query the host one. The host may wish * to use the guest information to re-position its own cursor (though this is * currently unlikely). - * @see VBVA_CURSOR_POSITION */ struct vbva_cursor_position { - u32 report_position; /**< Are we reporting a position? */ - u32 x; /**< Guest cursor X position */ - u32 y; /**< Guest cursor Y position */ + u32 report_position; /* Are we reporting a position? */ + u32 x; /* Guest cursor X position */ + u32 y; /* Guest cursor Y position */ } __packed; #endif diff --git a/drivers/staging/vboxvideo/vboxvideo_guest.h b/drivers/staging/vboxvideo/vboxvideo_guest.h index d09da841711a..55fcee3a6470 100644 --- a/drivers/staging/vboxvideo/vboxvideo_guest.h +++ b/drivers/staging/vboxvideo/vboxvideo_guest.h @@ -1,24 +1,5 @@ -/* - * Copyright (C) 2006-2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. - */ +/* SPDX-License-Identifier: MIT */ +/* Copyright (C) 2006-2016 Oracle Corporation */ #ifndef __VBOXVIDEO_GUEST_H__ #define __VBOXVIDEO_GUEST_H__ @@ -26,30 +7,26 @@ #include #include "vboxvideo.h" -/** +/* * Structure grouping the context needed for sending graphics acceleration * information to the host via VBVA. Each screen has its own VBVA buffer. 
*/ struct vbva_buf_ctx { - /** Offset of the buffer in the VRAM section for the screen */ + /* Offset of the buffer in the VRAM section for the screen */ u32 buffer_offset; - /** Length of the buffer in bytes */ + /* Length of the buffer in bytes */ u32 buffer_length; - /** Set if we wrote to the buffer faster than the host could read it */ + /* Set if we wrote to the buffer faster than the host could read it */ bool buffer_overflow; - /** VBVA record that we are currently preparing for the host, or NULL */ + /* VBVA record that we are currently preparing for the host, or NULL */ struct vbva_record *record; - /** + /* * Pointer to the VBVA buffer mapped into the current address space. * Will be NULL if VBVA is not enabled. */ struct vbva_buffer *vbva; }; -/** - * @name Base HGSMI APIs - * @{ - */ int hgsmi_report_flags_location(struct gen_pool *ctx, u32 location); int hgsmi_send_caps_info(struct gen_pool *ctx, u32 caps); int hgsmi_test_query_conf(struct gen_pool *ctx); @@ -59,12 +36,7 @@ int hgsmi_update_pointer_shape(struct gen_pool *ctx, u32 flags, u8 *pixels, u32 len); int hgsmi_cursor_position(struct gen_pool *ctx, bool report_position, u32 x, u32 y, u32 *x_host, u32 *y_host); -/** @} */ -/** - * @name VBVA APIs - * @{ - */ bool vbva_enable(struct vbva_buf_ctx *vbva_ctx, struct gen_pool *ctx, struct vbva_buffer *vbva, s32 screen); void vbva_disable(struct vbva_buf_ctx *vbva_ctx, struct gen_pool *ctx, @@ -76,12 +48,7 @@ bool vbva_write(struct vbva_buf_ctx *vbva_ctx, struct gen_pool *ctx, const void *p, u32 len); void vbva_setup_buffer_context(struct vbva_buf_ctx *vbva_ctx, u32 buffer_offset, u32 buffer_length); -/** @} */ -/** - * @name Modesetting APIs - * @{ - */ void hgsmi_process_display_info(struct gen_pool *ctx, u32 display, s32 origin_x, s32 origin_y, u32 start_offset, u32 pitch, u32 width, u32 height, @@ -90,6 +57,5 @@ int hgsmi_update_input_mapping(struct gen_pool *ctx, s32 origin_x, s32 origin_y, u32 width, u32 height); int hgsmi_get_mode_hints(struct gen_pool *ctx, unsigned int screens, struct vbva_modehint *hints); -/** @} */ #endif diff --git a/drivers/staging/vboxvideo/vboxvideo_vbe.h b/drivers/staging/vboxvideo/vboxvideo_vbe.h index f842f4d9c80a..427235869297 100644 --- a/drivers/staging/vboxvideo/vboxvideo_vbe.h +++ b/drivers/staging/vboxvideo/vboxvideo_vbe.h @@ -1,35 +1,11 @@ -/* - * Copyright (C) 2006-2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. 
- */ +/* SPDX-License-Identifier: MIT */ +/* Copyright (C) 2006-2016 Oracle Corporation */ #ifndef __VBOXVIDEO_VBE_H__ #define __VBOXVIDEO_VBE_H__ /* GUEST <-> HOST Communication API */ -/** - * @todo FIXME: Either dynamicly ask host for this or put somewhere high in - * physical memory like 0xE0000000. - */ - #define VBE_DISPI_BANK_ADDRESS 0xA0000 #define VBE_DISPI_BANK_SIZE_KB 64 @@ -71,12 +47,6 @@ #define VBE_DISPI_ENABLED 0x01 #define VBE_DISPI_GETCAPS 0x02 #define VBE_DISPI_8BIT_DAC 0x20 -/** - * @note this definition is a BOCHS legacy, used only in the video BIOS - * code and ignored by the emulated hardware. - */ -#define VBE_DISPI_LFB_ENABLED 0x40 -#define VBE_DISPI_NOCLEARMEM 0x80 #define VGA_PORT_HGSMI_HOST 0x3b0 #define VGA_PORT_HGSMI_GUEST 0x3d0 diff --git a/drivers/staging/vboxvideo/vbva_base.c b/drivers/staging/vboxvideo/vbva_base.c index c10c782f94e1..36bc9824ec3f 100644 --- a/drivers/staging/vboxvideo/vbva_base.c +++ b/drivers/staging/vboxvideo/vbva_base.c @@ -1,27 +1,8 @@ -/* - * Copyright (C) 2006-2017 Oracle Corporation - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the "Software"), - * to deal in the Software without restriction, including without limitation - * the rights to use, copy, modify, merge, publish, distribute, sublicense, - * and/or sell copies of the Software, and to permit persons to whom the - * Software is furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR - * OTHER DEALINGS IN THE SOFTWARE. 
- */
+// SPDX-License-Identifier: MIT
+/* Copyright (C) 2006-2017 Oracle Corporation */
 
+#include <linux/vbox_err.h>
 #include "vbox_drv.h"
-#include "vbox_err.h"
 #include "vboxvideo_guest.h"
 #include "hgsmi_channels.h"
@@ -144,7 +125,7 @@ static bool vbva_inform_host(struct vbva_buf_ctx *vbva_ctx,
 	hgsmi_buffer_submit(ctx, p);
 
 	if (enable)
-		ret = RT_SUCCESS(p->base.result);
+		ret = p->base.result >= 0;
 	else
 		ret = true;
diff --git a/drivers/staging/vc04_services/bcm2835-audio/Kconfig b/drivers/staging/vc04_services/bcm2835-audio/Kconfig
index 9f536533c257..62c1c8ba4ad4 100644
--- a/drivers/staging/vc04_services/bcm2835-audio/Kconfig
+++ b/drivers/staging/vc04_services/bcm2835-audio/Kconfig
@@ -1,6 +1,6 @@
 config SND_BCM2835
 	tristate "BCM2835 Audio"
-	depends on ARCH_BCM2835 && SND
+	depends on (ARCH_BCM2835 || COMPILE_TEST) && SND
 	select SND_PCM
 	select BCM2835_VCHIQ
 	help
diff --git a/drivers/staging/vc04_services/bcm2835-audio/TODO b/drivers/staging/vc04_services/bcm2835-audio/TODO
index 73d41fa631ac..cb8ead3e9108 100644
--- a/drivers/staging/vc04_services/bcm2835-audio/TODO
+++ b/drivers/staging/vc04_services/bcm2835-audio/TODO
@@ -4,26 +4,7 @@
 *                                                                           *
 *****************************************************************************
 
+1) Revisit multi-cards options and PCM route mixer control (as per comment
+https://lkml.org/lkml/2018/9/8/200)
 
-1) Document the device tree node
-
-The downstream tree(the tree that the driver was imported from) at
-http://www.github.com/raspberrypi/linux uses this node:
-
-audio: audio {
-	compatible = "brcm,bcm2835-audio";
-	brcm,pwm-channels = <8>;
-};
-
-Since the driver requires the use of VCHIQ, it may be useful to have a link
-in the device tree to the VCHIQ driver.
-
-2) Gracefully handle the case where VCHIQ is missing from the device tree or
-it has not been initialized yet.
-
-3) Review error handling and remove duplicate code.
-
-4) Cleanup the logging mechanism. The driver should probably be using the
-standard kernel logging mechanisms such as dev_info, dev_dbg, and friends.
-
-5) Fix the remaining checkpatch.pl errors and warnings.
+2) Fix the remaining checkpatch.pl errors and warnings.
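The RT_SUCCESS()/RT_FAILURE() conversions in the vboxvideo hunks above (for example "ret = p->base.result >= 0;") rely on the VBox status-code convention: success and informational codes (VINF_*) are zero or positive, error codes (VERR_*) are negative, as the table in the vbox_err.h deleted earlier in this patch shows. Below is a minimal standalone C sketch of that convention, added for illustration and not part of the patch; vbox_status_ok() is a made-up helper name, and the two constants are copied from the deleted header.

#include <stdio.h>

/* Constants copied from the vbox_err.h that this patch deletes. */
#define VINF_SUCCESS 0
#define VERR_NOT_SUPPORTED (-37)

/* The open-coded sign test that replaces RT_SUCCESS()/RT_FAILURE(). */
static int vbox_status_ok(int rc)
{
	return rc >= 0;
}

int main(void)
{
	printf("%d\n", vbox_status_ok(VINF_SUCCESS));       /* prints 1 */
	printf("%d\n", vbox_status_ok(VERR_NOT_SUPPORTED)); /* prints 0 */
	return 0;
}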
diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c index e66da11af5cf..bc1eaa3a0773 100644 --- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c +++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-pcm.c @@ -74,6 +74,7 @@ void bcm2835_playback_fifo(struct bcm2835_alsa_stream *alsa_stream, atomic_set(&alsa_stream->pos, pos); alsa_stream->period_offset += bytes; + alsa_stream->interpolate_start = ktime_get(); if (alsa_stream->period_offset >= alsa_stream->period_size) { alsa_stream->period_offset %= alsa_stream->period_size; snd_pcm_period_elapsed(substream); @@ -164,14 +165,11 @@ static int snd_bcm2835_playback_spdif_open(struct snd_pcm_substream *substream) return snd_bcm2835_playback_open_generic(substream, 1); } -/* close callback */ static int snd_bcm2835_playback_close(struct snd_pcm_substream *substream) { - /* the hardware-specific codes will be here */ - - struct bcm2835_chip *chip; - struct snd_pcm_runtime *runtime; struct bcm2835_alsa_stream *alsa_stream; + struct snd_pcm_runtime *runtime; + struct bcm2835_chip *chip; chip = snd_pcm_substream_chip(substream); mutex_lock(&chip->audio_mutex); @@ -195,20 +193,17 @@ static int snd_bcm2835_playback_close(struct snd_pcm_substream *substream) return 0; } -/* hw_params callback */ static int snd_bcm2835_pcm_hw_params(struct snd_pcm_substream *substream, struct snd_pcm_hw_params *params) { return snd_pcm_lib_malloc_pages(substream, params_buffer_bytes(params)); } -/* hw_free callback */ static int snd_bcm2835_pcm_hw_free(struct snd_pcm_substream *substream) { return snd_pcm_lib_free_pages(substream); } -/* prepare callback */ static int snd_bcm2835_pcm_prepare(struct snd_pcm_substream *substream) { struct bcm2835_chip *chip = snd_pcm_substream_chip(substream); @@ -243,6 +238,7 @@ static int snd_bcm2835_pcm_prepare(struct snd_pcm_substream *substream) atomic_set(&alsa_stream->pos, 0); alsa_stream->period_offset = 0; alsa_stream->draining = false; + alsa_stream->interpolate_start = ktime_get(); return 0; } @@ -292,6 +288,24 @@ snd_bcm2835_pcm_pointer(struct snd_pcm_substream *substream) { struct snd_pcm_runtime *runtime = substream->runtime; struct bcm2835_alsa_stream *alsa_stream = runtime->private_data; + ktime_t now = ktime_get(); + + /* Give userspace better delay reporting by interpolating between GPU + * notifications, assuming audio speed is close enough to the clock + * used for ktime + */ + + if ((ktime_to_ns(alsa_stream->interpolate_start)) && + (ktime_compare(alsa_stream->interpolate_start, now) < 0)) { + u64 interval = + (ktime_to_ns(ktime_sub(now, + alsa_stream->interpolate_start))); + u64 frames_output_in_interval = + div_u64((interval * runtime->rate), 1000000000); + snd_pcm_sframes_t frames_output_in_interval_sized = + -frames_output_in_interval; + runtime->delay = frames_output_in_interval_sized; + } return snd_pcm_indirect_playback_pointer(substream, &alsa_stream->pcm_indirect, diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c index 781754f36da7..23fba01107b9 100644 --- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c +++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c @@ -89,19 +89,14 @@ static int bcm2835_audio_send_simple(struct bcm2835_audio_instance *instance, return bcm2835_audio_send_msg(instance, &m, wait); } -static const u32 BCM2835_AUDIO_WRITE_COOKIE1 = ('B' << 24 | 'C' << 16 | - 'M' << 8 | 'A'); -static const 
diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
index 781754f36da7..23fba01107b9 100644
--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
@@ -89,19 +89,14 @@ static int bcm2835_audio_send_simple(struct bcm2835_audio_instance *instance,
 	return bcm2835_audio_send_msg(instance, &m, wait);
 }
-static const u32 BCM2835_AUDIO_WRITE_COOKIE1 = ('B' << 24 | 'C' << 16 |
-						'M' << 8 | 'A');
-static const u32 BCM2835_AUDIO_WRITE_COOKIE2 = ('D' << 24 | 'A' << 16 |
-						'T' << 8 | 'A');
-
 static void audio_vchi_callback(void *param,
 				const VCHI_CALLBACK_REASON_T reason,
 				void *msg_handle)
 {
 	struct bcm2835_audio_instance *instance = param;
-	int status;
-	int msg_len;
 	struct vc_audio_msg m;
+	int msg_len;
+	int status;
 	if (reason != VCHI_CALLBACK_MSG_AVAILABLE)
 		return;
@@ -109,15 +104,15 @@ static void audio_vchi_callback(void *param,
 	status = vchi_msg_dequeue(instance->vchi_handle, &m, sizeof(m),
 				  &msg_len, VCHI_FLAGS_NONE);
 	if (m.type == VC_AUDIO_MSG_TYPE_RESULT) {
-		instance->result = m.u.result.success;
+		instance->result = m.result.success;
 		complete(&instance->msg_avail_comp);
 	} else if (m.type == VC_AUDIO_MSG_TYPE_COMPLETE) {
-		if (m.u.complete.cookie1 != BCM2835_AUDIO_WRITE_COOKIE1 ||
-		    m.u.complete.cookie2 != BCM2835_AUDIO_WRITE_COOKIE2)
+		if (m.complete.cookie1 != VC_AUDIO_WRITE_COOKIE1 ||
+		    m.complete.cookie2 != VC_AUDIO_WRITE_COOKIE2)
 			dev_err(instance->dev, "invalid cookie\n");
 		else
 			bcm2835_playback_fifo(instance->alsa_stream,
-					      m.u.complete.count);
+					      m.complete.count);
 	} else {
 		dev_err(instance->dev, "unexpected callback type=%d\n", m.type);
 	}
@@ -127,7 +122,7 @@ static int
 vc_vchi_audio_init(VCHI_INSTANCE_T vchi_instance,
 		   struct bcm2835_audio_instance *instance)
 {
-	SERVICE_CREATION_T params = {
+	struct service_creation params = {
 		.version = VCHI_VERSION_EX(VC_AUDIOSERV_VER, VC_AUDIOSERV_MIN_VER),
 		.service_id = VC_AUDIO_SERVER_NAME,
 		.callback = audio_vchi_callback,
@@ -143,7 +138,6 @@ vc_vchi_audio_init(VCHI_INSTANCE_T vchi_instance,
 		dev_err(instance->dev,
 			"failed to open VCHI service connection (status=%d)\n",
 			status);
-		kfree(instance);
 		return -EPERM;
 	}
@@ -254,11 +248,11 @@ int bcm2835_audio_set_ctls(struct bcm2835_alsa_stream *alsa_stream)
 	struct vc_audio_msg m = {};
 	m.type = VC_AUDIO_MSG_TYPE_CONTROL;
-	m.u.control.dest = chip->dest;
+	m.control.dest = chip->dest;
 	if (!chip->mute)
-		m.u.control.volume = CHIP_MIN_VOLUME;
+		m.control.volume = CHIP_MIN_VOLUME;
 	else
-		m.u.control.volume = alsa2chip(chip->volume);
+		m.control.volume = alsa2chip(chip->volume);
 	return bcm2835_audio_send_msg(alsa_stream->instance, &m, true);
 }
@@ -269,9 +263,9 @@ int bcm2835_audio_set_params(struct bcm2835_alsa_stream *alsa_stream,
 {
 	struct vc_audio_msg m = {
 		.type = VC_AUDIO_MSG_TYPE_CONFIG,
-		.u.config.channels = channels,
-		.u.config.samplerate = samplerate,
-		.u.config.bps = bps,
+		.config.channels = channels,
+		.config.samplerate = samplerate,
+		.config.bps = bps,
 	};
 	int err;
@@ -299,7 +293,7 @@ int bcm2835_audio_drain(struct bcm2835_alsa_stream *alsa_stream)
 {
 	struct vc_audio_msg m = {
 		.type = VC_AUDIO_MSG_TYPE_STOP,
-		.u.stop.draining = 1,
+		.stop.draining = 1,
 	};
 	return bcm2835_audio_send_msg(alsa_stream->instance, &m, false);
 }
@@ -327,10 +321,10 @@ int bcm2835_audio_write(struct bcm2835_alsa_stream *alsa_stream,
 	struct bcm2835_audio_instance *instance = alsa_stream->instance;
 	struct vc_audio_msg m = {
 		.type = VC_AUDIO_MSG_TYPE_WRITE,
-		.u.write.count = size,
-		.u.write.max_packet = instance->max_packet,
-		.u.write.cookie1 = BCM2835_AUDIO_WRITE_COOKIE1,
-		.u.write.cookie2 = BCM2835_AUDIO_WRITE_COOKIE2,
+		.write.count = size,
+		.write.max_packet = instance->max_packet,
+		.write.cookie1 = VC_AUDIO_WRITE_COOKIE1,
+		.write.cookie2 = VC_AUDIO_WRITE_COOKIE2,
 	};
 	unsigned int count;
 	int err, status;
diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c
index 87d56ab1ffa0..cf5f80f5ca6b 100644
--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c
+++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c
@@ -6,13 +6,13 @@
 #include
 #include
 #include
-#include
 #include "bcm2835.h"
 static bool enable_hdmi;
 static bool enable_headphones;
 static bool enable_compat_alsa = true;
+static int num_channels = MAX_SUBSTREAMS;
 module_param(enable_hdmi, bool, 0444);
 MODULE_PARM_DESC(enable_hdmi, "Enables HDMI virtual audio device");
@@ -21,6 +21,8 @@ MODULE_PARM_DESC(enable_headphones, "Enables Headphones virtual audio device");
 module_param(enable_compat_alsa, bool, 0444);
 MODULE_PARM_DESC(enable_compat_alsa, "Enables ALSA compatibility virtual audio device");
+module_param(num_channels, int, 0644);
+MODULE_PARM_DESC(num_channels, "Number of audio channels (default: 8)");
 static void bcm2835_devm_free_vchi_ctx(struct device *dev, void *res)
 {
@@ -39,8 +41,6 @@ static int bcm2835_devm_add_vchi_ctx(struct device *dev)
 	if (!vchi_ctx)
 		return -ENOMEM;
-	memset(vchi_ctx, 0, sizeof(*vchi_ctx));
-
 	ret = bcm2835_new_vchi_ctx(dev, vchi_ctx);
 	if (ret) {
 		devres_free(vchi_ctx);
@@ -163,8 +163,8 @@ static int snd_add_child_device(struct device *dev,
 				struct bcm2835_audio_driver *audio_driver,
 				u32 numchans)
 {
-	struct snd_card *card;
 	struct bcm2835_chip *chip;
+	struct snd_card *card;
 	int err;
 	err = snd_card_new(dev, -1, NULL, THIS_MODULE, sizeof(*chip), &card);
@@ -227,12 +227,12 @@ static int snd_add_child_device(struct device *dev,
 static int snd_add_child_devices(struct device *device, u32 numchans)
 {
-	int i;
-	int count_devices = 0;
-	int minchannels = 0;
-	int extrachannels = 0;
 	int extrachannels_per_driver = 0;
 	int extrachannels_remainder = 0;
+	int count_devices = 0;
+	int extrachannels = 0;
+	int minchannels = 0;
+	int i;
 	for (i = 0; i < ARRAY_SIZE(children_devices); i++)
 		if (*children_devices[i].is_enabled)
@@ -260,9 +260,9 @@ static int snd_add_child_devices(struct device *device, u32 numchans)
 			extrachannels_remainder);
 	for (i = 0; i < ARRAY_SIZE(children_devices); i++) {
-		int err;
-		int numchannels_this_device;
 		struct bcm2835_audio_driver *audio_driver;
+		int numchannels_this_device;
+		int err;
 		if (!*children_devices[i].is_enabled)
 			continue;
@@ -293,31 +293,22 @@ static int snd_add_child_devices(struct device *device, u32 numchans)
 	return 0;
 }
-static int snd_bcm2835_alsa_probe_dt(struct platform_device *pdev)
+static int snd_bcm2835_alsa_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	u32 numchans;
 	int err;
-	err = of_property_read_u32(dev->of_node, "brcm,pwm-channels",
-				   &numchans);
-	if (err) {
-		dev_err(dev, "Failed to get DT property 'brcm,pwm-channels'");
-		return err;
-	}
-
-	if (numchans == 0 || numchans > MAX_SUBSTREAMS) {
-		numchans = MAX_SUBSTREAMS;
-		dev_warn(dev,
-			 "Illegal 'brcm,pwm-channels' value, will use %u\n",
-			 numchans);
+	if (num_channels <= 0 || num_channels > MAX_SUBSTREAMS) {
+		num_channels = MAX_SUBSTREAMS;
+		dev_warn(dev, "Illegal num_channels value, will use %u\n",
+			 num_channels);
 	}
 	err = bcm2835_devm_add_vchi_ctx(dev);
 	if (err)
 		return err;
-	err = snd_add_child_devices(dev, numchans);
+	err = snd_add_child_devices(dev, num_channels);
 	if (err)
 		return err;
@@ -339,43 +330,19 @@ static int snd_bcm2835_alsa_resume(struct platform_device *pdev)
 #endif
-static const struct of_device_id snd_bcm2835_of_match_table[] = {
-	{ .compatible = "brcm,bcm2835-audio",},
-	{},
-};
-MODULE_DEVICE_TABLE(of, snd_bcm2835_of_match_table);
-
-static struct platform_driver bcm2835_alsa0_driver = {
-	.probe = snd_bcm2835_alsa_probe_dt,
+static struct platform_driver bcm2835_alsa_driver = {
+	.probe = snd_bcm2835_alsa_probe,
 #ifdef CONFIG_PM
 	.suspend = snd_bcm2835_alsa_suspend,
 	.resume = snd_bcm2835_alsa_resume,
 #endif
 	.driver = {
 		.name = "bcm2835_audio",
-		.of_match_table = snd_bcm2835_of_match_table,
 	},
 };
-
-static int bcm2835_alsa_device_init(void)
-{
-	int retval;
-
-	retval = platform_driver_register(&bcm2835_alsa0_driver);
-	if (retval)
-		pr_err("Error registering bcm2835_audio driver %d .\n", retval);
-
-	return retval;
-}
-
-static void bcm2835_alsa_device_exit(void)
-{
-	platform_driver_unregister(&bcm2835_alsa0_driver);
-}
-
-late_initcall(bcm2835_alsa_device_init);
-module_exit(bcm2835_alsa_device_exit);
+module_platform_driver(bcm2835_alsa_driver);
 MODULE_AUTHOR("Dom Cobley");
 MODULE_DESCRIPTION("Alsa driver for BCM2835 chip");
 MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:bcm2835_audio");
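The conversion above drops the open-coded late_initcall()/module_exit() pair in
favour of module_platform_driver(). For reference, the macro generates
essentially the boilerplate being deleted (a sketch of its expansion; see
module_driver() in the kernel headers for the exact definition), with one
behavioural difference: registration now happens at the default module_init()
level rather than as a late_initcall:

	/* Roughly what module_platform_driver(bcm2835_alsa_driver) expands to: */
	static int __init bcm2835_alsa_driver_init(void)
	{
		return platform_driver_register(&bcm2835_alsa_driver);
	}
	module_init(bcm2835_alsa_driver_init);

	static void __exit bcm2835_alsa_driver_exit(void)
	{
		platform_driver_unregister(&bcm2835_alsa_driver);
	}
	module_exit(bcm2835_alsa_driver_exit);

The new MODULE_ALIAS("platform:bcm2835_audio") line keeps auto-loading working
now that the of_match_table is gone: the module is loaded when a platform
device with that name is registered (here, by the VCHIQ code).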
diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h
index 34a0125ce646..ed0feb34b6c8 100644
--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h
+++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835.h
@@ -77,6 +77,7 @@ struct bcm2835_alsa_stream {
 	unsigned int period_offset;
 	unsigned int buffer_size;
 	unsigned int period_size;
+	ktime_t interpolate_start;
 	struct bcm2835_audio_instance *instance;
 	int idx;
diff --git a/drivers/staging/vc04_services/bcm2835-audio/vc_vchi_audioserv_defs.h b/drivers/staging/vc04_services/bcm2835-audio/vc_vchi_audioserv_defs.h
index 1a7f0884ac9c..d6401e914ac9 100644
--- a/drivers/staging/vc04_services/bcm2835-audio/vc_vchi_audioserv_defs.h
+++ b/drivers/staging/vc04_services/bcm2835-audio/vc_vchi_audioserv_defs.h
@@ -7,8 +7,10 @@
 #define VC_AUDIOSERV_MIN_VER 1
 #define VC_AUDIOSERV_VER 2
-/* FourCC code used for VCHI connection */
+/* FourCC codes used for VCHI communication */
 #define VC_AUDIO_SERVER_NAME MAKE_FOURCC("AUDS")
+#define VC_AUDIO_WRITE_COOKIE1 MAKE_FOURCC("BCMA")
+#define VC_AUDIO_WRITE_COOKIE2 MAKE_FOURCC("DATA")
 /*
  * List of screens that are currently supported
@@ -91,7 +93,7 @@ struct vc_audio_msg {
 		struct vc_audio_write write;
 		struct vc_audio_result result;
 		struct vc_audio_complete complete;
-	} u;
+	};
 };
 #endif /* _VC_AUDIO_DEFS_H_ */
diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
index c04bdf070c87..611a6ee2943a 100644
--- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
@@ -43,11 +43,6 @@
 #define MAX_BCM2835_CAMERAS 2
-MODULE_DESCRIPTION("Broadcom 2835 MMAL video capture");
-MODULE_AUTHOR("Vincent Sanders");
-MODULE_LICENSE("GPL");
-MODULE_VERSION(BM2835_MMAL_VERSION);
-
 int bcm2835_v4l2_debug;
 module_param_named(debug, bcm2835_v4l2_debug, int, 0644);
 MODULE_PARM_DESC(bcm2835_v4l2_debug, "Debug level 0-2");
@@ -1544,8 +1539,11 @@ static int mmal_init(struct bm2835_mmal_dev *dev)
 	struct vchiq_mmal_component *camera;
 	ret = vchiq_mmal_init(&dev->instance);
-	if (ret < 0)
+	if (ret < 0) {
+		v4l2_err(&dev->v4l2_dev, "%s: vchiq mmal init failed %d\n",
+			 __func__, ret);
 		return ret;
+	}
 	/* get the camera component ready */
 	ret = vchiq_mmal_component_init(dev->instance, "ril.camera",
@@ -1554,7 +1552,9 @@ static int mmal_init(struct bm2835_mmal_dev *dev)
 		goto unreg_mmal;
 	camera = dev->component[MMAL_COMPONENT_CAMERA];
-	if (camera->outputs < MMAL_CAMERA_PORT_COUNT) {
+	if (camera->outputs < MMAL_CAMERA_PORT_COUNT) {
+		v4l2_err(&dev->v4l2_dev, "%s:
too few camera outputs %d needed %d\n", + __func__, camera->outputs, MMAL_CAMERA_PORT_COUNT); ret = -EINVAL; goto unreg_camera; } @@ -1562,8 +1562,11 @@ static int mmal_init(struct bm2835_mmal_dev *dev) ret = set_camera_parameters(dev->instance, camera, dev); - if (ret < 0) + if (ret < 0) { + v4l2_err(&dev->v4l2_dev, "%s: unable to set camera parameters: %d\n", + __func__, ret); goto unreg_camera; + } /* There was an error in the firmware that meant the camera component * produced BGR instead of RGB. @@ -1652,8 +1655,8 @@ static int mmal_init(struct bm2835_mmal_dev *dev) if (dev->component[MMAL_COMPONENT_PREVIEW]->inputs < 1) { ret = -EINVAL; - pr_debug("too few input ports %d needed %d\n", - dev->component[MMAL_COMPONENT_PREVIEW]->inputs, 1); + v4l2_err(&dev->v4l2_dev, "%s: too few input ports %d needed %d\n", + __func__, dev->component[MMAL_COMPONENT_PREVIEW]->inputs, 1); goto unreg_preview; } @@ -1666,8 +1669,8 @@ static int mmal_init(struct bm2835_mmal_dev *dev) if (dev->component[MMAL_COMPONENT_IMAGE_ENCODE]->inputs < 1) { ret = -EINVAL; - v4l2_err(&dev->v4l2_dev, "too few input ports %d needed %d\n", - dev->component[MMAL_COMPONENT_IMAGE_ENCODE]->inputs, + v4l2_err(&dev->v4l2_dev, "%s: too few input ports %d needed %d\n", + __func__, dev->component[MMAL_COMPONENT_IMAGE_ENCODE]->inputs, 1); goto unreg_image_encoder; } @@ -1681,8 +1684,8 @@ static int mmal_init(struct bm2835_mmal_dev *dev) if (dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->inputs < 1) { ret = -EINVAL; - v4l2_err(&dev->v4l2_dev, "too few input ports %d needed %d\n", - dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->inputs, + v4l2_err(&dev->v4l2_dev, "%s: too few input ports %d needed %d\n", + __func__, dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->inputs, 1); goto unreg_vid_encoder; } @@ -1711,8 +1714,11 @@ static int mmal_init(struct bm2835_mmal_dev *dev) sizeof(enable)); } ret = bm2835_mmal_set_all_camera_controls(dev); - if (ret < 0) + if (ret < 0) { + v4l2_err(&dev->v4l2_dev, "%s: failed to set all camera controls: %d\n", + __func__, ret); goto unreg_vid_encoder; + } return 0; @@ -1841,6 +1847,12 @@ static int bcm2835_mmal_probe(struct platform_device *pdev) num_cameras = get_num_cameras(instance, resolutions, MAX_BCM2835_CAMERAS); + + if (num_cameras < 1) { + ret = -ENODEV; + goto cleanup_mmal; + } + if (num_cameras > MAX_BCM2835_CAMERAS) num_cameras = MAX_BCM2835_CAMERAS; @@ -1872,21 +1884,29 @@ static int bcm2835_mmal_probe(struct platform_device *pdev) snprintf(dev->v4l2_dev.name, sizeof(dev->v4l2_dev.name), "%s", BM2835_MMAL_MODULE_NAME); ret = v4l2_device_register(NULL, &dev->v4l2_dev); - if (ret) + if (ret) { + dev_err(&pdev->dev, "%s: could not register V4L2 device: %d\n", + __func__, ret); goto free_dev; + } /* setup v4l controls */ ret = bm2835_mmal_init_controls(dev, &dev->ctrl_handler); - if (ret < 0) + if (ret < 0) { + v4l2_err(&dev->v4l2_dev, "%s: could not init controls: %d\n", + __func__, ret); goto unreg_dev; + } dev->v4l2_dev.ctrl_handler = &dev->ctrl_handler; /* mmal init */ dev->instance = instance; ret = mmal_init(dev); - if (ret < 0) + if (ret < 0) { + v4l2_err(&dev->v4l2_dev, "%s: mmal init failed: %d\n", + __func__, ret); goto unreg_dev; - + } /* initialize queue */ q = &dev->capture.vb_vidq; memset(q, 0, sizeof(*q)); @@ -1904,16 +1924,19 @@ static int bcm2835_mmal_probe(struct platform_device *pdev) /* initialise video devices */ ret = bm2835_mmal_init_device(dev, &dev->vdev); - if (ret < 0) + if (ret < 0) { + v4l2_err(&dev->v4l2_dev, "%s: could not init device: %d\n", + __func__, ret); goto 
unreg_dev; + } /* Really want to call vidioc_s_fmt_vid_cap with the default * format, but currently the APIs don't join up. */ ret = mmal_setup_components(dev, &default_v4l2_format); if (ret < 0) { - v4l2_err(&dev->v4l2_dev, - "%s: could not setup components\n", __func__); + v4l2_err(&dev->v4l2_dev, "%s: could not setup components: %d\n", + __func__, ret); goto unreg_dev; } @@ -1937,8 +1960,9 @@ cleanup_gdev: bcm2835_cleanup_instance(gdev[i]); gdev[i] = NULL; } - pr_info("%s: error %d while loading driver\n", - BM2835_MMAL_MODULE_NAME, ret); + +cleanup_mmal: + vchiq_mmal_finalise(instance); return ret; } @@ -1966,3 +1990,9 @@ static struct platform_driver bcm2835_camera_driver = { }; module_platform_driver(bcm2835_camera_driver) + +MODULE_DESCRIPTION("Broadcom 2835 MMAL video capture"); +MODULE_AUTHOR("Vincent Sanders"); +MODULE_LICENSE("GPL"); +MODULE_VERSION(BM2835_MMAL_VERSION); +MODULE_ALIAS("platform:bcm2835-camera"); diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c index cc2d9933b969..16af735af5c3 100644 --- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c +++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c @@ -139,7 +139,7 @@ struct mmal_msg_context { struct { /* message handle to release */ - VCHI_HELD_MSG_T msg_handle; + struct vchi_held_msg msg_handle; /* pointer to received message */ struct mmal_msg *msg; /* received message length */ @@ -527,7 +527,7 @@ static void service_callback(void *param, int status; u32 msg_len; struct mmal_msg *msg; - VCHI_HELD_MSG_T msg_handle; + struct vchi_held_msg msg_handle; struct mmal_msg_context *msg_context; if (!instance) { @@ -625,7 +625,7 @@ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance, struct mmal_msg *msg, unsigned int payload_len, struct mmal_msg **msg_out, - VCHI_HELD_MSG_T *msg_handle_out) + struct vchi_held_msg *msg_handle_out) { struct mmal_msg_context *msg_context; int ret; @@ -751,7 +751,7 @@ static int port_info_set(struct vchiq_mmal_instance *instance, int ret; struct mmal_msg m; struct mmal_msg *rmsg; - VCHI_HELD_MSG_T rmsg_handle; + struct vchi_held_msg rmsg_handle; pr_debug("setting port info port %p\n", port); if (!port) @@ -812,7 +812,7 @@ static int port_info_get(struct vchiq_mmal_instance *instance, int ret; struct mmal_msg m; struct mmal_msg *rmsg; - VCHI_HELD_MSG_T rmsg_handle; + struct vchi_held_msg rmsg_handle; /* port info time */ m.h.type = MMAL_MSG_TYPE_PORT_INFO_GET; @@ -908,7 +908,7 @@ static int create_component(struct vchiq_mmal_instance *instance, int ret; struct mmal_msg m; struct mmal_msg *rmsg; - VCHI_HELD_MSG_T rmsg_handle; + struct vchi_held_msg rmsg_handle; /* build component create message */ m.h.type = MMAL_MSG_TYPE_COMPONENT_CREATE; @@ -955,7 +955,7 @@ static int destroy_component(struct vchiq_mmal_instance *instance, int ret; struct mmal_msg m; struct mmal_msg *rmsg; - VCHI_HELD_MSG_T rmsg_handle; + struct vchi_held_msg rmsg_handle; m.h.type = MMAL_MSG_TYPE_COMPONENT_DESTROY; m.u.component_destroy.component_handle = component->handle; @@ -988,7 +988,7 @@ static int enable_component(struct vchiq_mmal_instance *instance, int ret; struct mmal_msg m; struct mmal_msg *rmsg; - VCHI_HELD_MSG_T rmsg_handle; + struct vchi_held_msg rmsg_handle; m.h.type = MMAL_MSG_TYPE_COMPONENT_ENABLE; m.u.component_enable.component_handle = component->handle; @@ -1020,7 +1020,7 @@ static int disable_component(struct vchiq_mmal_instance *instance, int ret; struct mmal_msg m; struct mmal_msg 
*rmsg;
-	VCHI_HELD_MSG_T rmsg_handle;
+	struct vchi_held_msg rmsg_handle;
 	m.h.type = MMAL_MSG_TYPE_COMPONENT_DISABLE;
 	m.u.component_disable.component_handle = component->handle;
@@ -1053,7 +1053,7 @@ static int get_version(struct vchiq_mmal_instance *instance,
 	int ret;
 	struct mmal_msg m;
 	struct mmal_msg *rmsg;
-	VCHI_HELD_MSG_T rmsg_handle;
+	struct vchi_held_msg rmsg_handle;
 	m.h.type = MMAL_MSG_TYPE_GET_VERSION;
@@ -1086,7 +1086,7 @@ static int port_action_port(struct vchiq_mmal_instance *instance,
 	int ret;
 	struct mmal_msg m;
 	struct mmal_msg *rmsg;
-	VCHI_HELD_MSG_T rmsg_handle;
+	struct vchi_held_msg rmsg_handle;
 	m.h.type = MMAL_MSG_TYPE_PORT_ACTION;
 	m.u.port_action_port.component_handle = port->component->handle;
@@ -1130,7 +1130,7 @@ static int port_action_handle(struct vchiq_mmal_instance *instance,
 	int ret;
 	struct mmal_msg m;
 	struct mmal_msg *rmsg;
-	VCHI_HELD_MSG_T rmsg_handle;
+	struct vchi_held_msg rmsg_handle;
 	m.h.type = MMAL_MSG_TYPE_PORT_ACTION;
@@ -1175,7 +1175,7 @@ static int port_parameter_set(struct vchiq_mmal_instance *instance,
 	int ret;
 	struct mmal_msg m;
 	struct mmal_msg *rmsg;
-	VCHI_HELD_MSG_T rmsg_handle;
+	struct vchi_held_msg rmsg_handle;
 	m.h.type = MMAL_MSG_TYPE_PORT_PARAMETER_SET;
@@ -1216,7 +1216,7 @@ static int port_parameter_get(struct vchiq_mmal_instance *instance,
 	int ret;
 	struct mmal_msg m;
 	struct mmal_msg *rmsg;
-	VCHI_HELD_MSG_T rmsg_handle;
+	struct vchi_held_msg rmsg_handle;
 	m.h.type = MMAL_MSG_TYPE_PORT_PARAMETER_GET;
@@ -1623,8 +1623,11 @@ int vchiq_mmal_component_init(struct vchiq_mmal_instance *instance,
 	component = &instance->component[instance->component_idx];
 	ret = create_component(instance, component, name);
-	if (ret < 0)
+	if (ret < 0) {
+		pr_err("%s: failed to create component %d (Not enough GPU mem?)\n",
+		       __func__, ret);
 		goto unlock;
+	}
 	/* ports info needs gathering */
 	component->control.type = MMAL_PORT_TYPE_CONTROL;
@@ -1803,7 +1806,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
 	int status;
 	struct vchiq_mmal_instance *instance;
 	static VCHI_INSTANCE_T vchi_instance;
-	SERVICE_CREATION_T params = {
+	struct service_creation params = {
 		.version = VCHI_VERSION_EX(VC_MMAL_VER, VC_MMAL_MIN_VER),
 		.service_id = VC_MMAL_SERVER_NAME,
 		.callback = service_callback,
diff --git a/drivers/staging/vc04_services/interface/vchi/TODO b/drivers/staging/vc04_services/interface/vchi/TODO
index 0b3ec75ff627..fc2752bc95b2 100644
--- a/drivers/staging/vc04_services/interface/vchi/TODO
+++ b/drivers/staging/vc04_services/interface/vchi/TODO
@@ -49,3 +49,45 @@ such as dev_info, dev_dbg, and friends.
 
 A short top-down description of this driver's architecture (function of
 kthreads, userspace, limitations) could be very helpful for reviewers.
+
+7) Review and comment memory barriers
+
+There is heavy use of memory barriers in this driver; it would be very
+beneficial to go over all of them and, if correct, comment on their merits.
+Extra points to whoever confidently reviews the remote_event_*() family of
+functions.
+
+8) Get rid of custom function return values
+
+Most functions use a custom set of return values; we should move to proper
+Linux error numbers (see the sketch after this list). Special care is needed
+for VCHIQ_RETRY.
+
+9) Reformat core code with more sane indentations
+
+The code respects the 80-character limit yet tends to go 3 or 4 levels of
+indentation deep, making it very unpleasant to read. This is especially
+relevant in the character driver ioctl code and in the core thread functions.
+
+10) Reorganize file structure: move the char driver to its own file and join
+both platform files
+
+The cdev is defined alongside the platform code in vchiq_arm.c. It would be
+nice to completely decouple it from the actual core code, for instance to be
+able to use bcm2835-audio without having /dev/vchiq created. One could argue
+it's better for security reasons or general cleanliness. It could even be
+interesting to create two different kernel modules, something like
+vchiq-core.ko and vchiq-dev.ko. This would also ease the upstreaming process.
+
+The code in vchiq_bcm2835_arm.c should fit in the generic platform file.
+
+11) Get rid of all the struct typedefs
+
+Most structs are typedef'd, which is discouraged in the kernel.
+
+12) Get rid of all non-essential global structures and create a proper
+per-device structure
+
+The first thing one generally sees in a probe function is a memory allocation
+for all the device specific data. This structure is then passed all over the
+driver. This is good practice, since it makes the driver work regardless of
+the number of devices probed.
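To make item 8 in the TODO above concrete, the usual direction is a single
translation helper at the boundary between the driver and its callers. This is
a hedged sketch only: the helper is hypothetical, the enum is a stand-in
mirroring the VCHIQ_SUCCESS/VCHIQ_RETRY/VCHIQ_ERROR names used elsewhere in
this patch, and the VCHIQ_RETRY mapping is exactly the "special care" case the
item warns about:

	#include <errno.h>

	/* Stand-in for the driver's custom VCHIQ_STATUS_T return values. */
	enum vchiq_status { VCHIQ_ERROR = -1, VCHIQ_SUCCESS = 0, VCHIQ_RETRY = 1 };

	/* Hypothetical helper: translate a custom status to a Linux errno. */
	static int vchiq_status_to_errno(enum vchiq_status status)
	{
		switch (status) {
		case VCHIQ_SUCCESS:
			return 0;
		case VCHIQ_RETRY:
			/* may instead need -ERESTARTSYS, depending on caller */
			return -EAGAIN;
		case VCHIQ_ERROR:
		default:
			return -EIO;
		}
	}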
diff --git a/drivers/staging/vc04_services/interface/vchi/vchi.h b/drivers/staging/vc04_services/interface/vchi/vchi.h
index 01381904775d..0b6fc0d31f4c 100644
--- a/drivers/staging/vc04_services/interface/vchi/vchi.h
+++ b/drivers/staging/vc04_services/interface/vchi/vchi.h
@@ -36,7 +36,6 @@
 #include "interface/vchi/vchi_cfg.h"
 #include "interface/vchi/vchi_common.h"
-#include "vchi_mh.h"
 /******************************************************************************
  Global defs
 *****************************************************************************/
@@ -67,18 +66,18 @@ struct opaque_vchi_service_t;
 // Descriptor for a held message. Allocated by client, initialised by vchi_msg_hold,
 // vchi_msg_iter_hold or vchi_msg_iter_hold_next. Fields are for internal VCHI use only.
-typedef struct {
+struct vchi_held_msg {
 	struct opaque_vchi_service_t *service;
 	void *message;
-} VCHI_HELD_MSG_T;
+};
 // structure used to provide the information needed to open a server or a client
-typedef struct {
+struct service_creation {
 	struct vchi_version version;
 	int32_t service_id;
 	VCHI_CALLBACK_T callback;
 	void *callback_param;
-} SERVICE_CREATION_T;
+};
 // Opaque handle for a VCHI instance
 typedef struct opaque_vchi_instance_handle_t *VCHI_INSTANCE_T;
@@ -113,17 +112,12 @@ extern uint32_t vchi_current_time(VCHI_INSTANCE_T instance_handle);
 /******************************************************************************
  Global service API
 *****************************************************************************/
-// Routine to create a named service
-extern int32_t vchi_service_create(VCHI_INSTANCE_T instance_handle,
-				   SERVICE_CREATION_T *setup,
-				   VCHI_SERVICE_HANDLE_T *handle);
-
 // Routine to destroy a service
 extern int32_t vchi_service_destroy(const VCHI_SERVICE_HANDLE_T handle);
 // Routine to open a named service
 extern int32_t vchi_service_open(VCHI_INSTANCE_T instance_handle,
-				 SERVICE_CREATION_T *setup,
+				 struct service_creation *setup,
 				 VCHI_SERVICE_HANDLE_T *handle);
 extern int32_t vchi_get_peer_version(const VCHI_SERVICE_HANDLE_T handle,
@@ -182,11 +176,11 @@ extern int32_t vchi_msg_hold(VCHI_SERVICE_HANDLE_T handle,
			     void **data,	// } may be NULL, as info can be
			     uint32_t *msg_size,	// } obtained from HELD_MSG_T
			     VCHI_FLAGS_T flags,
-			     VCHI_HELD_MSG_T *message_descriptor);
+			     struct vchi_held_msg *message_descriptor);
 // Initialise an iterator to look through messages in place
 extern int32_t vchi_msg_look_ahead(VCHI_SERVICE_HANDLE_T handle,
-				   VCHI_MSG_ITER_T *iter,
+				   struct vchi_msg_iter *iter,
				   VCHI_FLAGS_T flags);
 /******************************************************************************
@@ -194,42 +188,42 @@ extern int32_t vchi_msg_look_ahead(VCHI_SERVICE_HANDLE_T handle,
 *****************************************************************************/
 // Routine to get the address of a held message
-extern void *vchi_held_msg_ptr(const VCHI_HELD_MSG_T *message);
+extern void *vchi_held_msg_ptr(const struct vchi_held_msg *message);
 // Routine to get the size of a held message
-extern int32_t vchi_held_msg_size(const VCHI_HELD_MSG_T *message);
+extern int32_t vchi_held_msg_size(const struct vchi_held_msg *message);
 // Routine to get the transmit timestamp as written into the header by the peer
-extern uint32_t vchi_held_msg_tx_timestamp(const VCHI_HELD_MSG_T *message);
+extern uint32_t vchi_held_msg_tx_timestamp(const struct vchi_held_msg *message);
 // Routine to get the reception timestamp, written as we parsed the header
-extern uint32_t vchi_held_msg_rx_timestamp(const VCHI_HELD_MSG_T *message);
+extern uint32_t vchi_held_msg_rx_timestamp(const struct vchi_held_msg *message);
 // Routine to release a held message after it has been processed
-extern int32_t vchi_held_msg_release(VCHI_HELD_MSG_T *message);
+extern int32_t vchi_held_msg_release(struct vchi_held_msg *message);
 // Indicates whether the iterator has a next message.
-extern int32_t vchi_msg_iter_has_next(const VCHI_MSG_ITER_T *iter);
+extern int32_t vchi_msg_iter_has_next(const struct vchi_msg_iter *iter);
 // Return the pointer and length for the next message and advance the iterator.
-extern int32_t vchi_msg_iter_next(VCHI_MSG_ITER_T *iter,
+extern int32_t vchi_msg_iter_next(struct vchi_msg_iter *iter,
				  void **data,
				  uint32_t *msg_size);
 // Remove the last message returned by vchi_msg_iter_next.
// Can only be called once after each call to vchi_msg_iter_next. -extern int32_t vchi_msg_iter_remove(VCHI_MSG_ITER_T *iter); +extern int32_t vchi_msg_iter_remove(struct vchi_msg_iter *iter); // Hold the last message returned by vchi_msg_iter_next. // Can only be called once after each call to vchi_msg_iter_next. -extern int32_t vchi_msg_iter_hold(VCHI_MSG_ITER_T *iter, - VCHI_HELD_MSG_T *message); +extern int32_t vchi_msg_iter_hold(struct vchi_msg_iter *iter, + struct vchi_held_msg *message); // Return information for the next message, and hold it, advancing the iterator. -extern int32_t vchi_msg_iter_hold_next(VCHI_MSG_ITER_T *iter, +extern int32_t vchi_msg_iter_hold_next(struct vchi_msg_iter *iter, void **data, // } may be NULL uint32_t *msg_size, // } - VCHI_HELD_MSG_T *message); + struct vchi_held_msg *message); /****************************************************************************** Global bulk API @@ -244,7 +238,6 @@ extern int32_t vchi_bulk_queue_receive(VCHI_SERVICE_HANDLE_T handle, // Prepare interface for a transfer from the other side into relocatable memory. int32_t vchi_bulk_queue_receive_reloc(const VCHI_SERVICE_HANDLE_T handle, - VCHI_MEM_HANDLE_T h_dst, uint32_t offset, uint32_t data_size, const VCHI_FLAGS_T flags, @@ -266,7 +259,6 @@ extern int32_t vchi_bulk_queue_transmit(VCHI_SERVICE_HANDLE_T handle, #endif extern int32_t vchi_bulk_queue_transmit_reloc(VCHI_SERVICE_HANDLE_T handle, - VCHI_MEM_HANDLE_T h_src, uint32_t offset, uint32_t data_size, VCHI_FLAGS_T flags, diff --git a/drivers/staging/vc04_services/interface/vchi/vchi_common.h b/drivers/staging/vc04_services/interface/vchi/vchi_common.h index 8eb2bb9f0fe2..35f331f80812 100644 --- a/drivers/staging/vc04_services/interface/vchi/vchi_common.h +++ b/drivers/staging/vc04_services/interface/vchi/vchi_common.h @@ -127,9 +127,9 @@ typedef void (*VCHI_CALLBACK_T)(void *callback_param, //my service local param * '-vec_len' elements. Thus to append a header onto an existing vector, * you can do this: * - * void foo(const VCHI_MSG_VECTOR_T *v, int n) + * void foo(const struct vchi_msg_vector *v, int n) * { - * VCHI_MSG_VECTOR_T nv[2]; + * struct vchi_msg_vector nv[2]; * nv[0].vec_base = my_header; * nv[0].vec_len = sizeof my_header; * nv[1].vec_base = v; @@ -137,10 +137,10 @@ typedef void (*VCHI_CALLBACK_T)(void *callback_param, //my service local param * ... * */ -typedef struct vchi_msg_vector { +struct vchi_msg_vector { const void *vec_base; int32_t vec_len; -} VCHI_MSG_VECTOR_T; +}; // Opaque type for a connection API typedef struct opaque_vchi_connection_api_t VCHI_CONNECTION_API_T; @@ -154,11 +154,11 @@ typedef struct opaque_vchi_message_driver_t VCHI_MESSAGE_DRIVER_T; // will not proceed to messages received since. Behaviour is undefined if an iterator // is used again after messages for that service are removed/dequeued by any // means other than vchi_msg_iter_... calls on the iterator itself. -typedef struct { +struct vchi_msg_iter { struct opaque_vchi_service_t *service; void *last; void *next; void *remove; -} VCHI_MSG_ITER_T; +}; #endif // VCHI_COMMON_H_ diff --git a/drivers/staging/vc04_services/interface/vchi/vchi_mh.h b/drivers/staging/vc04_services/interface/vchi/vchi_mh.h deleted file mode 100644 index 198bd076b666..000000000000 --- a/drivers/staging/vc04_services/interface/vchi/vchi_mh.h +++ /dev/null @@ -1,42 +0,0 @@ -/** - * Copyright (c) 2010-2012 Broadcom. All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions, and the following disclaimer, - * without modification. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. The names of the above-listed copyright holders may not be used - * to endorse or promote products derived from this software without - * specific prior written permission. - * - * ALTERNATIVELY, this software may be distributed under the terms of the - * GNU General Public License ("GPL") version 2, as published by the Free - * Software Foundation. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS - * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, - * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR - * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, - * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, - * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR - * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF - * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING - * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef VCHI_MH_H_ -#define VCHI_MH_H_ - -#include - -typedef int32_t VCHI_MEM_HANDLE_T; -#define VCHI_MEM_HANDLE_INVALID 0 - -#endif diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c index 83d740feab96..338b6e952515 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c @@ -48,7 +48,6 @@ #include "vchiq_arm.h" #include "vchiq_connected.h" -#include "vchiq_killable.h" #include "vchiq_pagelist.h" #define MAX_FRAGMENTS (VCHIQ_NUM_CURRENT_BULKS * 2) @@ -61,11 +60,11 @@ struct vchiq_2835_state { int inited; - VCHIQ_ARM_STATE_T arm_state; + struct vchiq_arm_state arm_state; }; struct vchiq_pagelist_info { - PAGELIST_T *pagelist; + struct pagelist *pagelist; size_t pagelist_buffer_size; dma_addr_t dma_addr; enum dma_data_direction dma_dir; @@ -106,12 +105,12 @@ static void free_pagelist(struct vchiq_pagelist_info *pagelistinfo, int actual); -int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state) +int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state *state) { struct device *dev = &pdev->dev; struct vchiq_drvdata *drvdata = platform_get_drvdata(pdev); struct rpi_firmware *fw = drvdata->fw; - VCHIQ_SLOT_ZERO_T *vchiq_slot_zero; + struct vchiq_slot_zero *vchiq_slot_zero; struct resource *res; void *slot_mem; dma_addr_t slot_phys; @@ -163,7 +162,7 @@ int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state) *(char **)&g_fragments_base[i * g_fragments_size] = NULL; sema_init(&g_free_fragments_sema, MAX_FRAGMENTS); - if (vchiq_init_state(state, vchiq_slot_zero, 0) != VCHIQ_SUCCESS) + if (vchiq_init_state(state, vchiq_slot_zero) != VCHIQ_SUCCESS) return -EINVAL; res = 
platform_get_resource(pdev, IORESOURCE_MEM, 0); @@ -204,7 +203,7 @@ int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state) } VCHIQ_STATUS_T -vchiq_platform_init_state(VCHIQ_STATE_T *state) +vchiq_platform_init_state(struct vchiq_state *state) { VCHIQ_STATUS_T status = VCHIQ_SUCCESS; struct vchiq_2835_state *platform_state; @@ -221,8 +220,8 @@ vchiq_platform_init_state(VCHIQ_STATE_T *state) return status; } -VCHIQ_ARM_STATE_T* -vchiq_platform_get_arm_state(VCHIQ_STATE_T *state) +struct vchiq_arm_state* +vchiq_platform_get_arm_state(struct vchiq_state *state) { struct vchiq_2835_state *platform_state; @@ -234,7 +233,7 @@ vchiq_platform_get_arm_state(VCHIQ_STATE_T *state) } void -remote_event_signal(REMOTE_EVENT_T *event) +remote_event_signal(struct remote_event *event) { wmb(); @@ -247,13 +246,11 @@ remote_event_signal(REMOTE_EVENT_T *event) } VCHIQ_STATUS_T -vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk, VCHI_MEM_HANDLE_T memhandle, - void *offset, int size, int dir) +vchiq_prepare_bulk_data(struct vchiq_bulk *bulk, void *offset, int size, + int dir) { struct vchiq_pagelist_info *pagelistinfo; - WARN_ON(memhandle != VCHI_MEM_HANDLE_INVALID); - pagelistinfo = create_pagelist((char __user *)offset, size, (dir == VCHIQ_BULK_RECEIVE) ? PAGELIST_READ @@ -262,7 +259,6 @@ vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk, VCHI_MEM_HANDLE_T memhandle, if (!pagelistinfo) return VCHIQ_ERROR; - bulk->handle = memhandle; bulk->data = (void *)(unsigned long)pagelistinfo->dma_addr; /* @@ -275,23 +271,13 @@ vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk, VCHI_MEM_HANDLE_T memhandle, } void -vchiq_complete_bulk(VCHIQ_BULK_T *bulk) +vchiq_complete_bulk(struct vchiq_bulk *bulk) { if (bulk && bulk->remote_data && bulk->actual) free_pagelist((struct vchiq_pagelist_info *)bulk->remote_data, bulk->actual); } -void -vchiq_transfer_bulk(VCHIQ_BULK_T *bulk) -{ - /* - * This should only be called on the master (VideoCore) side, but - * provide an implementation to avoid the need for ifdefery. 
- */ - BUG(); -} - void vchiq_dump_platform_state(void *dump_context) { @@ -304,29 +290,29 @@ vchiq_dump_platform_state(void *dump_context) } VCHIQ_STATUS_T -vchiq_platform_suspend(VCHIQ_STATE_T *state) +vchiq_platform_suspend(struct vchiq_state *state) { return VCHIQ_ERROR; } VCHIQ_STATUS_T -vchiq_platform_resume(VCHIQ_STATE_T *state) +vchiq_platform_resume(struct vchiq_state *state) { return VCHIQ_SUCCESS; } void -vchiq_platform_paused(VCHIQ_STATE_T *state) +vchiq_platform_paused(struct vchiq_state *state) { } void -vchiq_platform_resumed(VCHIQ_STATE_T *state) +vchiq_platform_resumed(struct vchiq_state *state) { } int -vchiq_platform_videocore_wanted(VCHIQ_STATE_T *state) +vchiq_platform_videocore_wanted(struct vchiq_state *state) { return 1; // autosuspend not supported - videocore always wanted } @@ -337,12 +323,12 @@ vchiq_platform_use_suspend_timer(void) return 0; } void -vchiq_dump_platform_use_state(VCHIQ_STATE_T *state) +vchiq_dump_platform_use_state(struct vchiq_state *state) { vchiq_log_info(vchiq_arm_log_level, "Suspend timer not in use"); } void -vchiq_platform_handle_timeout(VCHIQ_STATE_T *state) +vchiq_platform_handle_timeout(struct vchiq_state *state) { (void)state; } @@ -353,7 +339,7 @@ vchiq_platform_handle_timeout(VCHIQ_STATE_T *state) static irqreturn_t vchiq_doorbell_irq(int irq, void *dev_id) { - VCHIQ_STATE_T *state = dev_id; + struct vchiq_state *state = dev_id; irqreturn_t ret = IRQ_NONE; unsigned int status; @@ -398,7 +384,7 @@ cleanup_pagelistinfo(struct vchiq_pagelist_info *pagelistinfo) static struct vchiq_pagelist_info * create_pagelist(char __user *buf, size_t count, unsigned short type) { - PAGELIST_T *pagelist; + struct pagelist *pagelist; struct vchiq_pagelist_info *pagelistinfo; struct page **pages; u32 *addrs; @@ -412,7 +398,7 @@ create_pagelist(char __user *buf, size_t count, unsigned short type) offset = ((unsigned int)(unsigned long)buf & (PAGE_SIZE - 1)); num_pages = DIV_ROUND_UP(count + offset, PAGE_SIZE); - pagelist_size = sizeof(PAGELIST_T) + + pagelist_size = sizeof(struct pagelist) + (num_pages * sizeof(u32)) + (num_pages * sizeof(pages[0]) + (num_pages * sizeof(struct scatterlist))) + @@ -557,7 +543,7 @@ create_pagelist(char __user *buf, size_t count, unsigned short type) (g_cache_line_size - 1)))) { char *fragments; - if (down_interruptible(&g_free_fragments_sema) != 0) { + if (down_killable(&g_free_fragments_sema)) { cleanup_pagelistinfo(pagelistinfo); return NULL; } @@ -580,8 +566,8 @@ static void free_pagelist(struct vchiq_pagelist_info *pagelistinfo, int actual) { - PAGELIST_T *pagelist = pagelistinfo->pagelist; - struct page **pages = pagelistinfo->pages; + struct pagelist *pagelist = pagelistinfo->pagelist; + struct page **pages = pagelistinfo->pages; unsigned int num_pages = pagelistinfo->num_pages; vchiq_log_trace(vchiq_arm_log_level, "%s - %pK, %d", diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c index 45de21c210c1..804daf83be35 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c @@ -44,18 +44,18 @@ #include #include #include -#include +#include #include #include #include #include +#include #include #include "vchiq_core.h" #include "vchiq_ioctl.h" #include "vchiq_arm.h" #include "vchiq_debugfs.h" -#include "vchiq_killable.h" #define DEVICE_NAME "vchiq" @@ -63,8 +63,6 @@ #undef MODULE_PARAM_PREFIX #define MODULE_PARAM_PREFIX DEVICE_NAME "." 
-#define VCHIQ_MINOR 0 - /* Some per-instance constants */ #define MAX_COMPLETIONS 128 #define MAX_SERVICES 64 @@ -111,8 +109,8 @@ static const char *const resume_state_names[] = { static void suspend_timer_callback(struct timer_list *t); -typedef struct user_service_struct { - VCHIQ_SERVICE_T *service; +struct user_service { + struct vchiq_service *service; void *userdata; VCHIQ_INSTANCE_T instance; char is_vchi; @@ -121,11 +119,11 @@ typedef struct user_service_struct { int message_available_pos; int msg_insert; int msg_remove; - struct semaphore insert_event; - struct semaphore remove_event; - struct semaphore close_event; - VCHIQ_HEADER_T * msg_queue[MSG_QUEUE_SIZE]; -} USER_SERVICE_T; + struct completion insert_event; + struct completion remove_event; + struct completion close_event; + struct vchiq_header *msg_queue[MSG_QUEUE_SIZE]; +}; struct bulk_waiter_node { struct bulk_waiter bulk_waiter; @@ -134,12 +132,12 @@ struct bulk_waiter_node { }; struct vchiq_instance_struct { - VCHIQ_STATE_T *state; - VCHIQ_COMPLETION_DATA_T completions[MAX_COMPLETIONS]; + struct vchiq_state *state; + struct vchiq_completion_data completions[MAX_COMPLETIONS]; int completion_insert; int completion_remove; - struct semaphore insert_event; - struct semaphore remove_event; + struct completion insert_event; + struct completion remove_event; struct mutex completion_mutex; int connected; @@ -152,23 +150,23 @@ struct vchiq_instance_struct { struct list_head bulk_waiter_list; struct mutex bulk_waiter_list_mutex; - VCHIQ_DEBUGFS_NODE_T debugfs_node; + struct vchiq_debugfs_node debugfs_node; }; -typedef struct dump_context_struct { +struct dump_context { char __user *buf; size_t actual; size_t space; loff_t offset; -} DUMP_CONTEXT_T; +}; static struct cdev vchiq_cdev; static dev_t vchiq_devid; -static VCHIQ_STATE_T g_state; +static struct vchiq_state g_state; static struct class *vchiq_class; -static struct device *vchiq_dev; static DEFINE_SPINLOCK(msg_queue_spinlock); static struct platform_device *bcm2835_camera; +static struct platform_device *bcm2835_audio; static struct vchiq_drvdata bcm2835_drvdata = { .cache_line_size = 32, @@ -210,7 +208,7 @@ vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data, VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *instance_out) { VCHIQ_STATUS_T status = VCHIQ_ERROR; - VCHIQ_STATE_T *state; + struct vchiq_state *state; VCHIQ_INSTANCE_T instance = NULL; int i; @@ -263,7 +261,7 @@ EXPORT_SYMBOL(vchiq_initialise); VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance) { VCHIQ_STATUS_T status; - VCHIQ_STATE_T *state = instance->state; + struct vchiq_state *state = instance->state; vchiq_log_trace(vchiq_core_log_level, "%s(%p) called", __func__, instance); @@ -280,16 +278,11 @@ VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance) "%s(%p): returning %d", __func__, instance, status); if (status == VCHIQ_SUCCESS) { - struct list_head *pos, *next; + struct bulk_waiter_node *waiter, *next; - list_for_each_safe(pos, next, - &instance->bulk_waiter_list) { - struct bulk_waiter_node *waiter; - - waiter = list_entry(pos, - struct bulk_waiter_node, - list); - list_del(pos); + list_for_each_entry_safe(waiter, next, + &instance->bulk_waiter_list, list) { + list_del(&waiter->list); vchiq_log_info(vchiq_arm_log_level, "bulk_waiter - cleaned up %pK for pid %d", waiter, waiter->pid); @@ -310,7 +303,7 @@ static int vchiq_is_connected(VCHIQ_INSTANCE_T instance) VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance) { VCHIQ_STATUS_T status; - VCHIQ_STATE_T *state = 
instance->state; + struct vchiq_state *state = instance->state; vchiq_log_trace(vchiq_core_log_level, "%s(%p) called", __func__, instance); @@ -338,12 +331,12 @@ EXPORT_SYMBOL(vchiq_connect); VCHIQ_STATUS_T vchiq_add_service( VCHIQ_INSTANCE_T instance, - const VCHIQ_SERVICE_PARAMS_T *params, + const struct vchiq_service_params *params, VCHIQ_SERVICE_HANDLE_T *phandle) { VCHIQ_STATUS_T status; - VCHIQ_STATE_T *state = instance->state; - VCHIQ_SERVICE_T *service = NULL; + struct vchiq_state *state = instance->state; + struct vchiq_service *service = NULL; int srvstate; vchiq_log_trace(vchiq_core_log_level, @@ -377,12 +370,12 @@ EXPORT_SYMBOL(vchiq_add_service); VCHIQ_STATUS_T vchiq_open_service( VCHIQ_INSTANCE_T instance, - const VCHIQ_SERVICE_PARAMS_T *params, + const struct vchiq_service_params *params, VCHIQ_SERVICE_HANDLE_T *phandle) { VCHIQ_STATUS_T status = VCHIQ_ERROR; - VCHIQ_STATE_T *state = instance->state; - VCHIQ_SERVICE_T *service = NULL; + struct vchiq_state *state = instance->state; + struct vchiq_service *service = NULL; vchiq_log_trace(vchiq_core_log_level, "%s(%p) called", __func__, instance); @@ -424,9 +417,9 @@ vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T handle, const void *data, switch (mode) { case VCHIQ_BULK_MODE_NOCALLBACK: case VCHIQ_BULK_MODE_CALLBACK: - status = vchiq_bulk_transfer(handle, - VCHI_MEM_HANDLE_INVALID, (void *)data, size, userdata, - mode, VCHIQ_BULK_TRANSMIT); + status = vchiq_bulk_transfer(handle, (void *)data, size, + userdata, mode, + VCHIQ_BULK_TRANSMIT); break; case VCHIQ_BULK_MODE_BLOCKING: status = vchiq_blocking_bulk_transfer(handle, @@ -449,9 +442,8 @@ vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T handle, void *data, switch (mode) { case VCHIQ_BULK_MODE_NOCALLBACK: case VCHIQ_BULK_MODE_CALLBACK: - status = vchiq_bulk_transfer(handle, - VCHI_MEM_HANDLE_INVALID, data, size, userdata, - mode, VCHIQ_BULK_RECEIVE); + status = vchiq_bulk_transfer(handle, data, size, userdata, + mode, VCHIQ_BULK_RECEIVE); break; case VCHIQ_BULK_MODE_BLOCKING: status = vchiq_blocking_bulk_transfer(handle, @@ -470,10 +462,9 @@ vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data, unsigned int size, VCHIQ_BULK_DIR_T dir) { VCHIQ_INSTANCE_T instance; - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; VCHIQ_STATUS_T status; struct bulk_waiter_node *waiter = NULL; - struct list_head *pos; service = find_service_by_handle(handle); if (!service) @@ -484,20 +475,16 @@ vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data, unlock_service(service); mutex_lock(&instance->bulk_waiter_list_mutex); - list_for_each(pos, &instance->bulk_waiter_list) { - if (list_entry(pos, struct bulk_waiter_node, - list)->pid == current->pid) { - waiter = list_entry(pos, - struct bulk_waiter_node, - list); - list_del(pos); + list_for_each_entry(waiter, &instance->bulk_waiter_list, list) { + if (waiter->pid == current->pid) { + list_del(&waiter->list); break; } } mutex_unlock(&instance->bulk_waiter_list_mutex); if (waiter) { - VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk; + struct vchiq_bulk *bulk = waiter->bulk_waiter.bulk; if (bulk) { /* This thread has an outstanding bulk transfer. 
*/ @@ -523,12 +510,11 @@ vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data, } } - status = vchiq_bulk_transfer(handle, VCHI_MEM_HANDLE_INVALID, - data, size, &waiter->bulk_waiter, VCHIQ_BULK_MODE_BLOCKING, - dir); + status = vchiq_bulk_transfer(handle, data, size, &waiter->bulk_waiter, + VCHIQ_BULK_MODE_BLOCKING, dir); if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) || !waiter->bulk_waiter.bulk) { - VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk; + struct vchiq_bulk *bulk = waiter->bulk_waiter.bulk; if (bulk) { /* Cancel the signal when the transfer @@ -559,10 +545,10 @@ vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data, static VCHIQ_STATUS_T add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason, - VCHIQ_HEADER_T *header, USER_SERVICE_T *user_service, - void *bulk_userdata) + struct vchiq_header *header, struct user_service *user_service, + void *bulk_userdata) { - VCHIQ_COMPLETION_DATA_T *completion; + struct vchiq_completion_data *completion; int insert; DEBUG_INITIALISE(g_state.local) @@ -574,7 +560,7 @@ add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason, vchiq_log_trace(vchiq_arm_log_level, "%s - completion queue full", __func__); DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT); - if (down_interruptible(&instance->remove_event) != 0) { + if (wait_for_completion_killable( &instance->remove_event)) { vchiq_log_info(vchiq_arm_log_level, "service_callback interrupted"); return VCHIQ_RETRY; @@ -612,7 +598,7 @@ add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason, insert++; instance->completion_insert = insert; - up(&instance->insert_event); + complete(&instance->insert_event); return VCHIQ_SUCCESS; } @@ -624,16 +610,16 @@ add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason, ***************************************************************************/ static VCHIQ_STATUS_T -service_callback(VCHIQ_REASON_T reason, VCHIQ_HEADER_T *header, - VCHIQ_SERVICE_HANDLE_T handle, void *bulk_userdata) +service_callback(VCHIQ_REASON_T reason, struct vchiq_header *header, + VCHIQ_SERVICE_HANDLE_T handle, void *bulk_userdata) { /* How do we ensure the callback goes to the right client? - ** The service_user data points to a USER_SERVICE_T record containing - ** the original callback and the user state structure, which contains a - ** circular buffer for completion records. + ** The service_user data points to a user_service record + ** containing the original callback and the user state structure, which + ** contains a circular buffer for completion records. 
*/ - USER_SERVICE_T *user_service; - VCHIQ_SERVICE_T *service; + struct user_service *user_service; + struct vchiq_service *service; VCHIQ_INSTANCE_T instance; bool skip_completion = false; @@ -643,7 +629,7 @@ service_callback(VCHIQ_REASON_T reason, VCHIQ_HEADER_T *header, service = handle_to_service(handle); BUG_ON(!service); - user_service = (USER_SERVICE_T *)service->base.userdata; + user_service = (struct user_service *)service->base.userdata; instance = user_service->instance; if (!instance || instance->closing) @@ -685,7 +671,8 @@ service_callback(VCHIQ_REASON_T reason, VCHIQ_HEADER_T *header, } DEBUG_TRACE(SERVICE_CALLBACK_LINE); - if (down_interruptible(&user_service->remove_event) + if (wait_for_completion_killable( + &user_service->remove_event) != 0) { vchiq_log_info(vchiq_arm_log_level, "%s interrupted", __func__); @@ -717,7 +704,7 @@ service_callback(VCHIQ_REASON_T reason, VCHIQ_HEADER_T *header, } spin_unlock(&msg_queue_spinlock); - up(&user_service->insert_event); + complete(&user_service->insert_event); header = NULL; } @@ -746,7 +733,7 @@ user_service_free(void *userdata) * close_delivered * ***************************************************************************/ -static void close_delivered(USER_SERVICE_T *user_service) +static void close_delivered(struct user_service *user_service) { vchiq_log_info(vchiq_arm_log_level, "%s(handle=%x)", @@ -757,81 +744,55 @@ static void close_delivered(USER_SERVICE_T *user_service) unlock_service(user_service->service); /* Wake the user-thread blocked in close_ or remove_service */ - up(&user_service->close_event); + complete(&user_service->close_event); user_service->close_pending = 0; } } struct vchiq_io_copy_callback_context { - struct vchiq_element *current_element; - size_t current_element_offset; + struct vchiq_element *element; + size_t element_offset; unsigned long elements_to_go; - size_t current_offset; }; -static ssize_t -vchiq_ioc_copy_element_data( - void *context, - void *dest, - size_t offset, - size_t maxsize) +static ssize_t vchiq_ioc_copy_element_data(void *context, void *dest, + size_t offset, size_t maxsize) { - long res; + struct vchiq_io_copy_callback_context *cc = context; + size_t total_bytes_copied = 0; size_t bytes_this_round; - struct vchiq_io_copy_callback_context *copy_context = - (struct vchiq_io_copy_callback_context *)context; - if (offset != copy_context->current_offset) - return 0; - - if (!copy_context->elements_to_go) - return 0; - - /* - * Complex logic here to handle the case of 0 size elements - * in the middle of the array of elements. - * - * Need to skip over these 0 size elements. 
- */ - while (1) { - bytes_this_round = min(copy_context->current_element->size - - copy_context->current_element_offset, - maxsize); + while (total_bytes_copied < maxsize) { + if (!cc->elements_to_go) + return total_bytes_copied; - if (bytes_this_round) - break; - - copy_context->elements_to_go--; - copy_context->current_element++; - copy_context->current_element_offset = 0; - - if (!copy_context->elements_to_go) - return 0; - } + if (!cc->element->size) { + cc->elements_to_go--; + cc->element++; + cc->element_offset = 0; + continue; + } - res = copy_from_user(dest, - copy_context->current_element->data + - copy_context->current_element_offset, - bytes_this_round); + bytes_this_round = min(cc->element->size - cc->element_offset, + maxsize - total_bytes_copied); - if (res != 0) - return -EFAULT; + if (copy_from_user(dest + total_bytes_copied, + cc->element->data + cc->element_offset, + bytes_this_round)) + return -EFAULT; - copy_context->current_element_offset += bytes_this_round; - copy_context->current_offset += bytes_this_round; + cc->element_offset += bytes_this_round; + total_bytes_copied += bytes_this_round; - /* - * Check if done with current element, and if so advance to the next. - */ - if (copy_context->current_element_offset == - copy_context->current_element->size) { - copy_context->elements_to_go--; - copy_context->current_element++; - copy_context->current_element_offset = 0; + if (cc->element_offset == cc->element->size) { + cc->elements_to_go--; + cc->element++; + cc->element_offset = 0; + } } - return bytes_this_round; + return maxsize; } /************************************************************************** @@ -848,10 +809,9 @@ vchiq_ioc_queue_message(VCHIQ_SERVICE_HANDLE_T handle, unsigned long i; size_t total_size = 0; - context.current_element = elements; - context.current_element_offset = 0; + context.element = elements; + context.element_offset = 0; context.elements_to_go = count; - context.current_offset = 0; for (i = 0; i < count; i++) { if (!elements[i].data && elements[i].size != 0) @@ -874,7 +834,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { VCHIQ_INSTANCE_T instance = file->private_data; VCHIQ_STATUS_T status = VCHIQ_SUCCESS; - VCHIQ_SERVICE_T *service = NULL; + struct vchiq_service *service = NULL; long ret = 0; int i, rc; @@ -906,7 +866,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) if (status == VCHIQ_SUCCESS) { /* Wake the completion thread and ask it to exit */ instance->closing = 1; - up(&instance->insert_event); + complete(&instance->insert_event); } break; @@ -936,8 +896,8 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) break; case VCHIQ_IOC_CREATE_SERVICE: { - VCHIQ_CREATE_SERVICE_T args; - USER_SERVICE_T *user_service = NULL; + struct vchiq_create_service args; + struct user_service *user_service = NULL; void *userdata; int srvstate; @@ -948,7 +908,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) break; } - user_service = kmalloc(sizeof(USER_SERVICE_T), GFP_KERNEL); + user_service = kmalloc(sizeof(*user_service), GFP_KERNEL); if (!user_service) { ret = -ENOMEM; break; @@ -987,9 +947,9 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) instance->completion_remove - 1; user_service->msg_insert = 0; user_service->msg_remove = 0; - sema_init(&user_service->insert_event, 0); - sema_init(&user_service->remove_event, 0); - sema_init(&user_service->close_event, 0); + init_completion(&user_service->insert_event); + 
init_completion(&user_service->remove_event); + init_completion(&user_service->close_event); if (args.is_open) { status = vchiq_open_service_internal @@ -1004,7 +964,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } if (copy_to_user((void __user *) - &(((VCHIQ_CREATE_SERVICE_T __user *) + &(((struct vchiq_create_service __user *) arg)->handle), (const void *)&service->handle, sizeof(service->handle)) != 0) { @@ -1019,55 +979,38 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } } break; - case VCHIQ_IOC_CLOSE_SERVICE: { + case VCHIQ_IOC_CLOSE_SERVICE: + case VCHIQ_IOC_REMOVE_SERVICE: { VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg; + struct user_service *user_service; service = find_service_for_instance(instance, handle); - if (service != NULL) { - USER_SERVICE_T *user_service = - (USER_SERVICE_T *)service->base.userdata; - /* close_pending is false on first entry, and when the - wait in vchiq_close_service has been interrupted. */ - if (!user_service->close_pending) { - status = vchiq_close_service(service->handle); - if (status != VCHIQ_SUCCESS) - break; - } - - /* close_pending is true once the underlying service - has been closed until the client library calls the - CLOSE_DELIVERED ioctl, signalling close_event. */ - if (user_service->close_pending && - down_interruptible(&user_service->close_event)) - status = VCHIQ_RETRY; - } else + if (!service) { ret = -EINVAL; - } break; + break; + } - case VCHIQ_IOC_REMOVE_SERVICE: { - VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg; + user_service = service->base.userdata; - service = find_service_for_instance(instance, handle); - if (service != NULL) { - USER_SERVICE_T *user_service = - (USER_SERVICE_T *)service->base.userdata; - /* close_pending is false on first entry, and when the - wait in vchiq_close_service has been interrupted. */ - if (!user_service->close_pending) { - status = vchiq_remove_service(service->handle); - if (status != VCHIQ_SUCCESS) - break; - } + /* close_pending is false on first entry, and when the + wait in vchiq_close_service has been interrupted. */ + if (!user_service->close_pending) { + status = (cmd == VCHIQ_IOC_CLOSE_SERVICE) ? + vchiq_close_service(service->handle) : + vchiq_remove_service(service->handle); + if (status != VCHIQ_SUCCESS) + break; + } - /* close_pending is true once the underlying service - has been closed until the client library calls the - CLOSE_DELIVERED ioctl, signalling close_event. */ - if (user_service->close_pending && - down_interruptible(&user_service->close_event)) - status = VCHIQ_RETRY; - } else - ret = -EINVAL; - } break; + /* close_pending is true once the underlying service + has been closed until the client library calls the + CLOSE_DELIVERED ioctl, signalling close_event. 
*/ + if (user_service->close_pending && + wait_for_completion_killable( + &user_service->close_event)) + status = VCHIQ_RETRY; + break; + } case VCHIQ_IOC_USE_SERVICE: case VCHIQ_IOC_RELEASE_SERVICE: { @@ -1097,7 +1040,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } break; case VCHIQ_IOC_QUEUE_MESSAGE: { - VCHIQ_QUEUE_MESSAGE_T args; + struct vchiq_queue_message args; if (copy_from_user (&args, (const void __user *)arg, @@ -1126,7 +1069,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) case VCHIQ_IOC_QUEUE_BULK_TRANSMIT: case VCHIQ_IOC_QUEUE_BULK_RECEIVE: { - VCHIQ_QUEUE_BULK_TRANSFER_T args; + struct vchiq_queue_bulk_transfer args; struct bulk_waiter_node *waiter = NULL; VCHIQ_BULK_DIR_T dir = @@ -1153,21 +1096,16 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ret = -ENOMEM; break; } + args.userdata = &waiter->bulk_waiter; } else if (args.mode == VCHIQ_BULK_MODE_WAITING) { - struct list_head *pos; - mutex_lock(&instance->bulk_waiter_list_mutex); - list_for_each(pos, &instance->bulk_waiter_list) { - if (list_entry(pos, struct bulk_waiter_node, - list)->pid == current->pid) { - waiter = list_entry(pos, - struct bulk_waiter_node, - list); - list_del(pos); + list_for_each_entry(waiter, &instance->bulk_waiter_list, + list) { + if (waiter->pid == current->pid) { + list_del(&waiter->list); break; } - } mutex_unlock(&instance->bulk_waiter_list_mutex); if (!waiter) { @@ -1182,14 +1120,13 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) current->pid); args.userdata = &waiter->bulk_waiter; } - status = vchiq_bulk_transfer - (args.handle, - VCHI_MEM_HANDLE_INVALID, - args.data, args.size, - args.userdata, args.mode, - dir); + + status = vchiq_bulk_transfer(args.handle, args.data, args.size, + args.userdata, args.mode, dir); + if (!waiter) break; + if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) || !waiter->bulk_waiter.bulk) { if (waiter->bulk_waiter.bulk) { @@ -1212,7 +1149,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) waiter, current->pid); if (copy_to_user((void __user *) - &(((VCHIQ_QUEUE_BULK_TRANSFER_T __user *) + &(((struct vchiq_queue_bulk_transfer __user *) arg)->mode), (const void *)&mode_waiting, sizeof(mode_waiting)) != 0) @@ -1221,7 +1158,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } break; case VCHIQ_IOC_AWAIT_COMPLETION: { - VCHIQ_AWAIT_COMPLETION_T args; + struct vchiq_await_completion args; DEBUG_TRACE(AWAIT_COMPLETION_LINE); if (!instance->connected) { @@ -1245,7 +1182,8 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) DEBUG_TRACE(AWAIT_COMPLETION_LINE); mutex_unlock(&instance->completion_mutex); - rc = down_interruptible(&instance->insert_event); + rc = wait_for_completion_killable( + &instance->insert_event); mutex_lock(&instance->completion_mutex); if (rc != 0) { DEBUG_TRACE(AWAIT_COMPLETION_LINE); @@ -1262,10 +1200,10 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) int remove = instance->completion_remove; for (ret = 0; ret < args.count; ret++) { - VCHIQ_COMPLETION_DATA_T *completion; - VCHIQ_SERVICE_T *service; - USER_SERVICE_T *user_service; - VCHIQ_HEADER_T *header; + struct vchiq_completion_data *completion; + struct vchiq_service *service; + struct user_service *user_service; + struct vchiq_header *header; if (remove == instance->completion_insert) break; @@ -1290,7 +1228,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) int msglen; msglen = 
header->size + - sizeof(VCHIQ_HEADER_T); + sizeof(struct vchiq_header); /* This must be a VCHIQ-style service */ if (args.msgbufsize < msglen) { vchiq_log_error( @@ -1343,10 +1281,11 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) unlock_service(service); if (copy_to_user((void __user *)( - (size_t)args.buf + - ret * sizeof(VCHIQ_COMPLETION_DATA_T)), + (size_t)args.buf + ret * + sizeof(struct vchiq_completion_data)), completion, - sizeof(VCHIQ_COMPLETION_DATA_T)) != 0) { + sizeof(struct vchiq_completion_data)) + != 0) { if (ret == 0) ret = -EFAULT; break; @@ -1363,8 +1302,8 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) if (msgbufcount != args.msgbufcount) { if (copy_to_user((void __user *) - &((VCHIQ_AWAIT_COMPLETION_T *)arg)-> - msgbufcount, + &((struct vchiq_await_completion *)arg) + ->msgbufcount, &msgbufcount, sizeof(msgbufcount)) != 0) { ret = -EFAULT; @@ -1373,15 +1312,15 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } if (ret != 0) - up(&instance->remove_event); + complete(&instance->remove_event); mutex_unlock(&instance->completion_mutex); DEBUG_TRACE(AWAIT_COMPLETION_LINE); } break; case VCHIQ_IOC_DEQUEUE_MESSAGE: { - VCHIQ_DEQUEUE_MESSAGE_T args; - USER_SERVICE_T *user_service; - VCHIQ_HEADER_T *header; + struct vchiq_dequeue_message args; + struct user_service *user_service; + struct vchiq_header *header; DEBUG_TRACE(DEQUEUE_MESSAGE_LINE); if (copy_from_user @@ -1395,7 +1334,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ret = -EINVAL; break; } - user_service = (USER_SERVICE_T *)service->base.userdata; + user_service = (struct user_service *)service->base.userdata; if (user_service->is_vchi == 0) { ret = -EINVAL; break; @@ -1413,8 +1352,8 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) do { spin_unlock(&msg_queue_spinlock); DEBUG_TRACE(DEQUEUE_MESSAGE_LINE); - if (down_interruptible( - &user_service->insert_event) != 0) { + if (wait_for_completion_killable( + &user_service->insert_event)) { vchiq_log_info(vchiq_arm_log_level, "DEQUEUE_MESSAGE interrupted"); ret = -EINTR; @@ -1436,7 +1375,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) user_service->msg_remove++; spin_unlock(&msg_queue_spinlock); - up(&user_service->remove_event); + complete(&user_service->remove_event); if (header == NULL) ret = -ENOTCONN; else if (header->size <= args.bufsize) { @@ -1468,8 +1407,8 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } break; case VCHIQ_IOC_GET_CONFIG: { - VCHIQ_GET_CONFIG_T args; - VCHIQ_CONFIG_T config; + struct vchiq_get_config args; + struct vchiq_config config; if (copy_from_user(&args, (const void __user *)arg, sizeof(args)) != 0) { @@ -1480,18 +1419,16 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ret = -EINVAL; break; } - status = vchiq_get_config(instance, args.config_size, &config); - if (status == VCHIQ_SUCCESS) { - if (copy_to_user((void __user *)args.pconfig, - &config, args.config_size) != 0) { - ret = -EFAULT; - break; - } + + vchiq_get_config(&config); + if (copy_to_user(args.pconfig, &config, args.config_size)) { + ret = -EFAULT; + break; } } break; case VCHIQ_IOC_SET_SERVICE_OPTION: { - VCHIQ_SET_SERVICE_OPTION_T args; + struct vchiq_set_service_option args; if (copy_from_user( &args, (const void __user *)arg, @@ -1524,8 +1461,8 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) service = find_closed_service_for_instance(instance, handle); if (service != 
NULL) { - USER_SERVICE_T *user_service = - (USER_SERVICE_T *)service->base.userdata; + struct user_service *user_service = + (struct user_service *)service->base.userdata; close_delivered(user_service); } else ret = -EINVAL; @@ -1593,7 +1530,7 @@ vchiq_compat_ioctl_create_service( unsigned int cmd, unsigned long arg) { - VCHIQ_CREATE_SERVICE_T __user *args; + struct vchiq_create_service __user *args; struct vchiq_create_service32 __user *ptrargs32 = (struct vchiq_create_service32 __user *)arg; struct vchiq_create_service32 args32; @@ -1656,8 +1593,8 @@ vchiq_compat_ioctl_queue_message(struct file *file, unsigned int cmd, unsigned long arg) { - VCHIQ_QUEUE_MESSAGE_T *args; - struct vchiq_element *elements; + struct vchiq_queue_message *args; + struct vchiq_element __user *elements; struct vchiq_queue_message32 args32; unsigned int count; @@ -1723,7 +1660,7 @@ vchiq_compat_ioctl_queue_bulk(struct file *file, unsigned int cmd, unsigned long arg) { - VCHIQ_QUEUE_BULK_TRANSFER_T *args; + struct vchiq_queue_bulk_transfer __user *args; struct vchiq_queue_bulk_transfer32 args32; struct vchiq_queue_bulk_transfer32 *ptrargs32 = (struct vchiq_queue_bulk_transfer32 *)arg; @@ -1789,16 +1726,16 @@ vchiq_compat_ioctl_await_completion(struct file *file, unsigned int cmd, unsigned long arg) { - VCHIQ_AWAIT_COMPLETION_T *args; - VCHIQ_COMPLETION_DATA_T *completion; - VCHIQ_COMPLETION_DATA_T completiontemp; + struct vchiq_await_completion __user *args; + struct vchiq_completion_data __user *completion; + struct vchiq_completion_data completiontemp; struct vchiq_await_completion32 args32; struct vchiq_completion_data32 completion32; - unsigned int *msgbufcount32; + unsigned int __user *msgbufcount32; unsigned int msgbufcount_native; compat_uptr_t msgbuf32; - void *msgbuf; - void **msgbufptr; + void __user *msgbuf; + void * __user *msgbufptr; long ret; args = compat_alloc_user_space(sizeof(*args) + @@ -1807,11 +1744,11 @@ vchiq_compat_ioctl_await_completion(struct file *file, if (!args) return -EFAULT; - completion = (VCHIQ_COMPLETION_DATA_T *)(args + 1); - msgbufptr = (void __user **)(completion + 1); + completion = (struct vchiq_completion_data __user *)(args + 1); + msgbufptr = (void * __user *)(completion + 1); if (copy_from_user(&args32, - (struct vchiq_completion_data32 *)arg, + (struct vchiq_completion_data32 __user *)arg, sizeof(args32))) return -EFAULT; @@ -1939,7 +1876,7 @@ vchiq_compat_ioctl_dequeue_message(struct file *file, unsigned int cmd, unsigned long arg) { - VCHIQ_DEQUEUE_MESSAGE_T *args; + struct vchiq_dequeue_message __user *args; struct vchiq_dequeue_message32 args32; args = compat_alloc_user_space(sizeof(*args)); @@ -1947,7 +1884,7 @@ vchiq_compat_ioctl_dequeue_message(struct file *file, return -EFAULT; if (copy_from_user(&args32, - (struct vchiq_dequeue_message32 *)arg, + (struct vchiq_dequeue_message32 __user *)arg, sizeof(args32))) return -EFAULT; @@ -1974,7 +1911,7 @@ vchiq_compat_ioctl_get_config(struct file *file, unsigned int cmd, unsigned long arg) { - VCHIQ_GET_CONFIG_T *args; + struct vchiq_get_config __user *args; struct vchiq_get_config32 args32; args = compat_alloc_user_space(sizeof(*args)); @@ -1982,7 +1919,7 @@ vchiq_compat_ioctl_get_config(struct file *file, return -EFAULT; if (copy_from_user(&args32, - (struct vchiq_get_config32 *)arg, + (struct vchiq_get_config32 __user *)arg, sizeof(args32))) return -EFAULT; @@ -2017,201 +1954,152 @@ vchiq_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg) #endif 
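A note on the conversion pattern running through the hunks above: event semaphores become completions (sema_init() -> init_completion(), up() -> complete()) and interruptible waits become killable ones (down_interruptible() -> wait_for_completion_killable()), so only a fatal signal can abort a wait. A minimal sketch of the idiom in isolation; the demo_* names are illustrative and not part of the driver:

/*
 * Sketch (not driver code) of the semaphore-to-completion conversion.
 * A semaphore initialised to 0 and used as a binary event maps onto
 * struct completion; interruptible waits become killable ones.
 */
#include <linux/completion.h>

struct demo_event {
        struct completion ev;           /* was: struct semaphore sem */
};

static void demo_event_init(struct demo_event *d)
{
        init_completion(&d->ev);        /* was: sema_init(&d->sem, 0) */
}

static void demo_event_signal(struct demo_event *d)
{
        complete(&d->ev);               /* was: up(&d->sem) */
}

static int demo_event_wait(struct demo_event *d)
{
        /* was: down_interruptible(&d->sem) */
        if (wait_for_completion_killable(&d->ev))
                return -EINTR;          /* fatal signal pending */
        return 0;
}

With the killable variants, a routine signal delivered to the calling process no longer forces the VCHIQ_RETRY paths above; only a fatal signal does.
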
-/**************************************************************************** -* -* vchiq_open -* -***************************************************************************/ - -static int -vchiq_open(struct inode *inode, struct file *file) +static int vchiq_open(struct inode *inode, struct file *file) { - int dev = iminor(inode) & 0x0f; + struct vchiq_state *state = vchiq_get_state(); + VCHIQ_INSTANCE_T instance; vchiq_log_info(vchiq_arm_log_level, "vchiq_open"); - switch (dev) { - case VCHIQ_MINOR: { - VCHIQ_STATE_T *state = vchiq_get_state(); - VCHIQ_INSTANCE_T instance; - if (!state) { - vchiq_log_error(vchiq_arm_log_level, + if (!state) { + vchiq_log_error(vchiq_arm_log_level, "vchiq has no connection to VideoCore"); - return -ENOTCONN; - } - - instance = kzalloc(sizeof(*instance), GFP_KERNEL); - if (!instance) - return -ENOMEM; + return -ENOTCONN; + } - instance->state = state; - instance->pid = current->tgid; + instance = kzalloc(sizeof(*instance), GFP_KERNEL); + if (!instance) + return -ENOMEM; - vchiq_debugfs_add_instance(instance); + instance->state = state; + instance->pid = current->tgid; - sema_init(&instance->insert_event, 0); - sema_init(&instance->remove_event, 0); - mutex_init(&instance->completion_mutex); - mutex_init(&instance->bulk_waiter_list_mutex); - INIT_LIST_HEAD(&instance->bulk_waiter_list); + vchiq_debugfs_add_instance(instance); - file->private_data = instance; - } break; + init_completion(&instance->insert_event); + init_completion(&instance->remove_event); + mutex_init(&instance->completion_mutex); + mutex_init(&instance->bulk_waiter_list_mutex); + INIT_LIST_HEAD(&instance->bulk_waiter_list); - default: - vchiq_log_error(vchiq_arm_log_level, - "Unknown minor device: %d", dev); - return -ENXIO; - } + file->private_data = instance; return 0; } -/**************************************************************************** -* -* vchiq_release -* -***************************************************************************/ - -static int -vchiq_release(struct inode *inode, struct file *file) +static int vchiq_release(struct inode *inode, struct file *file) { - int dev = iminor(inode) & 0x0f; + VCHIQ_INSTANCE_T instance = file->private_data; + struct vchiq_state *state = vchiq_get_state(); + struct vchiq_service *service; int ret = 0; + int i; - switch (dev) { - case VCHIQ_MINOR: { - VCHIQ_INSTANCE_T instance = file->private_data; - VCHIQ_STATE_T *state = vchiq_get_state(); - VCHIQ_SERVICE_T *service; - int i; - - vchiq_log_info(vchiq_arm_log_level, - "%s: instance=%lx", - __func__, (unsigned long)instance); - - if (!state) { - ret = -EPERM; - goto out; - } + vchiq_log_info(vchiq_arm_log_level, "%s: instance=%lx", __func__, + (unsigned long)instance); - /* Ensure videocore is awake to allow termination. */ - vchiq_use_internal(instance->state, NULL, - USE_TYPE_VCHIQ); + if (!state) { + ret = -EPERM; + goto out; + } - mutex_lock(&instance->completion_mutex); + /* Ensure videocore is awake to allow termination. */ + vchiq_use_internal(instance->state, NULL, USE_TYPE_VCHIQ); - /* Wake the completion thread and ask it to exit */ - instance->closing = 1; - up(&instance->insert_event); + mutex_lock(&instance->completion_mutex); - mutex_unlock(&instance->completion_mutex); + /* Wake the completion thread and ask it to exit */ + instance->closing = 1; + complete(&instance->insert_event); - /* Wake the slot handler if the completion queue is full. */ - up(&instance->remove_event); + mutex_unlock(&instance->completion_mutex); - /* Mark all services for termination... 
*/ - i = 0; - while ((service = next_service_by_instance(state, instance, - &i)) != NULL) { - USER_SERVICE_T *user_service = service->base.userdata; + /* Wake the slot handler if the completion queue is full. */ + complete(&instance->remove_event); - /* Wake the slot handler if the msg queue is full. */ - up(&user_service->remove_event); + /* Mark all services for termination... */ + i = 0; + while ((service = next_service_by_instance(state, instance, &i))) { + struct user_service *user_service = service->base.userdata; - vchiq_terminate_service_internal(service); - unlock_service(service); - } + /* Wake the slot handler if the msg queue is full. */ + complete(&user_service->remove_event); - /* ...and wait for them to die */ - i = 0; - while ((service = next_service_by_instance(state, instance, &i)) - != NULL) { - USER_SERVICE_T *user_service = service->base.userdata; + vchiq_terminate_service_internal(service); + unlock_service(service); + } - down(&service->remove_event); + /* ...and wait for them to die */ + i = 0; + while ((service = next_service_by_instance(state, instance, &i))) { + struct user_service *user_service = service->base.userdata; - BUG_ON(service->srvstate != VCHIQ_SRVSTATE_FREE); + wait_for_completion(&service->remove_event); - spin_lock(&msg_queue_spinlock); + BUG_ON(service->srvstate != VCHIQ_SRVSTATE_FREE); - while (user_service->msg_remove != - user_service->msg_insert) { - VCHIQ_HEADER_T *header; - int m = user_service->msg_remove & - (MSG_QUEUE_SIZE - 1); + spin_lock(&msg_queue_spinlock); - header = user_service->msg_queue[m]; - user_service->msg_remove++; - spin_unlock(&msg_queue_spinlock); - - if (header) - vchiq_release_message( - service->handle, - header); - spin_lock(&msg_queue_spinlock); - } + while (user_service->msg_remove != user_service->msg_insert) { + struct vchiq_header *header; + int m = user_service->msg_remove & (MSG_QUEUE_SIZE - 1); + header = user_service->msg_queue[m]; + user_service->msg_remove++; spin_unlock(&msg_queue_spinlock); - unlock_service(service); + if (header) + vchiq_release_message(service->handle, header); + spin_lock(&msg_queue_spinlock); } - /* Release any closed services */ - while (instance->completion_remove != - instance->completion_insert) { - VCHIQ_COMPLETION_DATA_T *completion; - VCHIQ_SERVICE_T *service; - - completion = &instance->completions[ - instance->completion_remove & - (MAX_COMPLETIONS - 1)]; - service = completion->service_userdata; - if (completion->reason == VCHIQ_SERVICE_CLOSED) { - USER_SERVICE_T *user_service = - service->base.userdata; - - /* Wake any blocked user-thread */ - if (instance->use_close_delivered) - up(&user_service->close_event); - unlock_service(service); - } - instance->completion_remove++; - } + spin_unlock(&msg_queue_spinlock); - /* Release the PEER service count. 
*/ - vchiq_release_internal(instance->state, NULL); + unlock_service(service); + } - { - struct list_head *pos, *next; + /* Release any closed services */ + while (instance->completion_remove != + instance->completion_insert) { + struct vchiq_completion_data *completion; + struct vchiq_service *service; - list_for_each_safe(pos, next, - &instance->bulk_waiter_list) { - struct bulk_waiter_node *waiter; + completion = &instance->completions[ + instance->completion_remove & (MAX_COMPLETIONS - 1)]; + service = completion->service_userdata; + if (completion->reason == VCHIQ_SERVICE_CLOSED) { + struct user_service *user_service = + service->base.userdata; - waiter = list_entry(pos, - struct bulk_waiter_node, - list); - list_del(pos); - vchiq_log_info(vchiq_arm_log_level, - "bulk_waiter - cleaned up %pK for pid %d", - waiter, waiter->pid); - kfree(waiter); - } + /* Wake any blocked user-thread */ + if (instance->use_close_delivered) + complete(&user_service->close_event); + unlock_service(service); } + instance->completion_remove++; + } - vchiq_debugfs_remove_instance(instance); + /* Release the PEER service count. */ + vchiq_release_internal(instance->state, NULL); - kfree(instance); - file->private_data = NULL; - } break; + { + struct bulk_waiter_node *waiter, *next; - default: - vchiq_log_error(vchiq_arm_log_level, - "Unknown minor device: %d", dev); - ret = -ENXIO; + list_for_each_entry_safe(waiter, next, + &instance->bulk_waiter_list, list) { + list_del(&waiter->list); + vchiq_log_info(vchiq_arm_log_level, + "bulk_waiter - cleaned up %pK for pid %d", + waiter, waiter->pid); + kfree(waiter); + } } + vchiq_debugfs_remove_instance(instance); + + kfree(instance); + file->private_data = NULL; + out: return ret; } @@ -2225,7 +2113,7 @@ out: void vchiq_dump(void *dump_context, const char *str, int len) { - DUMP_CONTEXT_T *context = (DUMP_CONTEXT_T *)dump_context; + struct dump_context *context = (struct dump_context *)dump_context; if (context->actual < context->space) { int copy_bytes; @@ -2270,7 +2158,7 @@ vchiq_dump(void *dump_context, const char *str, int len) void vchiq_dump_platform_instances(void *dump_context) { - VCHIQ_STATE_T *state = vchiq_get_state(); + struct vchiq_state *state = vchiq_get_state(); char buf[80]; int len; int i; @@ -2279,7 +2167,7 @@ vchiq_dump_platform_instances(void *dump_context) marking those that have been dumped. 
*/ for (i = 0; i < state->unused_service; i++) { - VCHIQ_SERVICE_T *service = state->services[i]; + struct vchiq_service *service = state->services[i]; VCHIQ_INSTANCE_T instance; if (service && (service->base.callback == service_callback)) { @@ -2290,7 +2178,7 @@ vchiq_dump_platform_instances(void *dump_context) } for (i = 0; i < state->unused_service; i++) { - VCHIQ_SERVICE_T *service = state->services[i]; + struct vchiq_service *service = state->services[i]; VCHIQ_INSTANCE_T instance; if (service && (service->base.callback == service_callback)) { @@ -2320,9 +2208,11 @@ vchiq_dump_platform_instances(void *dump_context) ***************************************************************************/ void -vchiq_dump_platform_service_state(void *dump_context, VCHIQ_SERVICE_T *service) +vchiq_dump_platform_service_state(void *dump_context, + struct vchiq_service *service) { - USER_SERVICE_T *user_service = (USER_SERVICE_T *)service->base.userdata; + struct user_service *user_service = + (struct user_service *)service->base.userdata; char buf[80]; int len; @@ -2353,7 +2243,7 @@ static ssize_t vchiq_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { - DUMP_CONTEXT_T context; + struct dump_context context; context.buf = buf; context.actual = 0; @@ -2367,7 +2257,7 @@ vchiq_read(struct file *file, char __user *buf, return context.actual; } -VCHIQ_STATE_T * +struct vchiq_state * vchiq_get_state(void) { @@ -2398,9 +2288,9 @@ vchiq_fops = { */ int -vchiq_videocore_wanted(VCHIQ_STATE_T *state) +vchiq_videocore_wanted(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); if (!arm_state) /* autosuspend not supported - always return wanted */ @@ -2420,7 +2310,7 @@ vchiq_videocore_wanted(VCHIQ_STATE_T *state) static VCHIQ_STATUS_T vchiq_keepalive_vchiq_callback(VCHIQ_REASON_T reason, - VCHIQ_HEADER_T *header, + struct vchiq_header *header, VCHIQ_SERVICE_HANDLE_T service_user, void *bulk_user) { @@ -2432,14 +2322,14 @@ vchiq_keepalive_vchiq_callback(VCHIQ_REASON_T reason, static int vchiq_keepalive_thread_func(void *v) { - VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v; - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_state *state = (struct vchiq_state *)v; + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); VCHIQ_STATUS_T status; VCHIQ_INSTANCE_T instance; VCHIQ_SERVICE_HANDLE_T ka_handle; - VCHIQ_SERVICE_PARAMS_T params = { + struct vchiq_service_params params = { .fourcc = VCHIQ_MAKE_FOURCC('K', 'E', 'E', 'P'), .callback = vchiq_keepalive_vchiq_callback, .version = KEEPALIVE_VER, @@ -2470,7 +2360,7 @@ vchiq_keepalive_thread_func(void *v) while (1) { long rc = 0, uc = 0; - if (wait_for_completion_interruptible(&arm_state->ka_evt) + if (wait_for_completion_killable(&arm_state->ka_evt) != 0) { vchiq_log_error(vchiq_susp_log_level, "%s interrupted", __func__); @@ -2511,7 +2401,8 @@ exit: } VCHIQ_STATUS_T -vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state) +vchiq_arm_init_state(struct vchiq_state *state, + struct vchiq_arm_state *arm_state) { if (arm_state) { rwlock_init(&arm_state->susp_res_lock); @@ -2607,8 +2498,8 @@ vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state) */ void -set_suspend_state(VCHIQ_ARM_STATE_T *arm_state, - enum vc_suspend_status new_state) +set_suspend_state(struct vchiq_arm_state *arm_state, + enum vc_suspend_status new_state) { /* set the state in all 
cases */ arm_state->vc_suspend_state = new_state; @@ -2644,8 +2535,8 @@ set_suspend_state(VCHIQ_ARM_STATE_T *arm_state, } void -set_resume_state(VCHIQ_ARM_STATE_T *arm_state, - enum vc_resume_status new_state) +set_resume_state(struct vchiq_arm_state *arm_state, + enum vc_resume_status new_state) { /* set the state in all cases */ arm_state->vc_resume_state = new_state; @@ -2673,7 +2564,7 @@ set_resume_state(VCHIQ_ARM_STATE_T *arm_state, /* should be called with the write lock held */ inline void -start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state) +start_suspend_timer(struct vchiq_arm_state *arm_state) { del_timer(&arm_state->suspend_timer); arm_state->suspend_timer.expires = jiffies + @@ -2684,7 +2575,7 @@ start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state) /* should be called with the write lock held */ static inline void -stop_suspend_timer(VCHIQ_ARM_STATE_T *arm_state) +stop_suspend_timer(struct vchiq_arm_state *arm_state) { if (arm_state->suspend_timer_running) { del_timer(&arm_state->suspend_timer); @@ -2693,9 +2584,9 @@ stop_suspend_timer(VCHIQ_ARM_STATE_T *arm_state) } static inline int -need_resume(VCHIQ_STATE_T *state) +need_resume(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); return (arm_state->vc_suspend_state > VC_SUSPEND_IDLE) && (arm_state->vc_resume_state < VC_RESUME_REQUESTED) && @@ -2703,7 +2594,7 @@ need_resume(VCHIQ_STATE_T *state) } static int -block_resume(VCHIQ_ARM_STATE_T *arm_state) +block_resume(struct vchiq_arm_state *arm_state) { int status = VCHIQ_SUCCESS; const unsigned long timeout_val = @@ -2720,7 +2611,7 @@ block_resume(VCHIQ_ARM_STATE_T *arm_state) write_unlock_bh(&arm_state->susp_res_lock); vchiq_log_info(vchiq_susp_log_level, "%s wait for previously " "blocked clients", __func__); - if (wait_for_completion_interruptible_timeout( + if (wait_for_completion_killable_timeout( &arm_state->blocked_blocker, timeout_val) <= 0) { vchiq_log_error(vchiq_susp_log_level, "%s wait for " @@ -2746,7 +2637,7 @@ block_resume(VCHIQ_ARM_STATE_T *arm_state) write_unlock_bh(&arm_state->susp_res_lock); vchiq_log_info(vchiq_susp_log_level, "%s wait for resume", __func__); - if (wait_for_completion_interruptible_timeout( + if (wait_for_completion_killable_timeout( &arm_state->vc_resume_complete, timeout_val) <= 0) { vchiq_log_error(vchiq_susp_log_level, "%s wait for " @@ -2769,7 +2660,7 @@ out: } static inline void -unblock_resume(VCHIQ_ARM_STATE_T *arm_state) +unblock_resume(struct vchiq_arm_state *arm_state) { complete_all(&arm_state->resume_blocker); arm_state->resume_blocked = 0; @@ -2778,10 +2669,10 @@ unblock_resume(VCHIQ_ARM_STATE_T *arm_state) /* Initiate suspend via slot handler. 
Should be called with the write lock * held */ VCHIQ_STATUS_T -vchiq_arm_vcsuspend(VCHIQ_STATE_T *state) +vchiq_arm_vcsuspend(struct vchiq_state *state) { VCHIQ_STATUS_T status = VCHIQ_ERROR; - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); if (!arm_state) goto out; @@ -2827,9 +2718,9 @@ out: } void -vchiq_platform_check_suspend(VCHIQ_STATE_T *state) +vchiq_platform_check_suspend(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); int susp = 0; if (!arm_state) @@ -2854,9 +2745,9 @@ out: } static void -output_timeout_error(VCHIQ_STATE_T *state) +output_timeout_error(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); char err[50] = ""; int vc_use_count = arm_state->videocore_use_count; int active_services = state->unused_service; @@ -2867,7 +2758,7 @@ output_timeout_error(VCHIQ_STATE_T *state) goto output_msg; } for (i = 0; i < active_services; i++) { - VCHIQ_SERVICE_T *service_ptr = state->services[i]; + struct vchiq_service *service_ptr = state->services[i]; if (service_ptr && service_ptr->service_use_count && (service_ptr->srvstate != VCHIQ_SRVSTATE_FREE)) { @@ -2899,9 +2790,9 @@ output_msg: ** videocore failed to suspend in time or VCHIQ_ERROR if interrupted. */ VCHIQ_STATUS_T -vchiq_arm_force_suspend(VCHIQ_STATE_T *state) +vchiq_arm_force_suspend(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); VCHIQ_STATUS_T status = VCHIQ_ERROR; long rc = 0; int repeat = -1; @@ -2953,7 +2844,7 @@ vchiq_arm_force_suspend(VCHIQ_STATE_T *state) do { write_unlock_bh(&arm_state->susp_res_lock); - rc = wait_for_completion_interruptible_timeout( + rc = wait_for_completion_killable_timeout( &arm_state->vc_suspend_complete, msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS)); @@ -3010,9 +2901,9 @@ out: } void -vchiq_check_suspend(VCHIQ_STATE_T *state) +vchiq_check_suspend(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); if (!arm_state) goto out; @@ -3032,9 +2923,9 @@ out: } int -vchiq_arm_allow_resume(VCHIQ_STATE_T *state) +vchiq_arm_allow_resume(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); int resume = 0; int ret = -1; @@ -3049,7 +2940,7 @@ vchiq_arm_allow_resume(VCHIQ_STATE_T *state) write_unlock_bh(&arm_state->susp_res_lock); if (resume) { - if (wait_for_completion_interruptible( + if (wait_for_completion_killable( &arm_state->vc_resume_complete) < 0) { vchiq_log_error(vchiq_susp_log_level, "%s interrupted", __func__); @@ -3076,9 +2967,9 @@ out: /* This function should be called with the write lock held */ int -vchiq_check_resume(VCHIQ_STATE_T *state) +vchiq_check_resume(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); int resume = 0; if (!arm_state) @@ -3098,10 +2989,10 @@ out: } VCHIQ_STATUS_T -vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, - enum USE_TYPE_E 
use_type) +vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service, + enum USE_TYPE_E use_type) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); VCHIQ_STATUS_T ret = VCHIQ_SUCCESS; char entity[16]; int *entity_uc; @@ -3231,9 +3122,9 @@ out: } VCHIQ_STATUS_T -vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service) +vchiq_release_internal(struct vchiq_state *state, struct vchiq_service *service) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); VCHIQ_STATUS_T ret = VCHIQ_SUCCESS; char entity[16]; int *entity_uc; @@ -3293,9 +3184,9 @@ out: } void -vchiq_on_remote_use(VCHIQ_STATE_T *state) +vchiq_on_remote_use(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); vchiq_log_trace(vchiq_susp_log_level, "%s", __func__); atomic_inc(&arm_state->ka_use_count); @@ -3303,9 +3194,9 @@ vchiq_on_remote_use(VCHIQ_STATE_T *state) } void -vchiq_on_remote_release(VCHIQ_STATE_T *state) +vchiq_on_remote_release(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); vchiq_log_trace(vchiq_susp_log_level, "%s", __func__); atomic_inc(&arm_state->ka_release_count); @@ -3313,18 +3204,18 @@ vchiq_on_remote_release(VCHIQ_STATE_T *state) } VCHIQ_STATUS_T -vchiq_use_service_internal(VCHIQ_SERVICE_T *service) +vchiq_use_service_internal(struct vchiq_service *service) { return vchiq_use_internal(service->state, service, USE_TYPE_SERVICE); } VCHIQ_STATUS_T -vchiq_release_service_internal(VCHIQ_SERVICE_T *service) +vchiq_release_service_internal(struct vchiq_service *service) { return vchiq_release_internal(service->state, service); } -VCHIQ_DEBUGFS_NODE_T * +struct vchiq_debugfs_node * vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance) { return &instance->debugfs_node; @@ -3333,7 +3224,7 @@ vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance) int vchiq_instance_get_use_count(VCHIQ_INSTANCE_T instance) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; int use_count = 0, i; i = 0; @@ -3360,7 +3251,7 @@ vchiq_instance_get_trace(VCHIQ_INSTANCE_T instance) void vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; int i; i = 0; @@ -3374,8 +3265,9 @@ vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace) static void suspend_timer_callback(struct timer_list *t) { - VCHIQ_ARM_STATE_T *arm_state = from_timer(arm_state, t, suspend_timer); - VCHIQ_STATE_T *state = arm_state->state; + struct vchiq_arm_state *arm_state = + from_timer(arm_state, t, suspend_timer); + struct vchiq_state *state = arm_state->state; vchiq_log_info(vchiq_susp_log_level, "%s - suspend timer expired - check suspend", __func__); @@ -3386,7 +3278,7 @@ VCHIQ_STATUS_T vchiq_use_service_no_resume(VCHIQ_SERVICE_HANDLE_T handle) { VCHIQ_STATUS_T ret = VCHIQ_ERROR; - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); if (service) { ret = vchiq_use_internal(service->state, service, @@ -3400,7 +3292,7 @@ VCHIQ_STATUS_T vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle) { VCHIQ_STATUS_T ret = VCHIQ_ERROR; - VCHIQ_SERVICE_T *service = 
find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); if (service) { ret = vchiq_use_internal(service->state, service, @@ -3414,7 +3306,7 @@ VCHIQ_STATUS_T vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle) { VCHIQ_STATUS_T ret = VCHIQ_ERROR; - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); if (service) { ret = vchiq_release_internal(service->state, service); @@ -3430,9 +3322,9 @@ struct service_data_struct { }; void -vchiq_dump_service_use_state(VCHIQ_STATE_T *state) +vchiq_dump_service_use_state(struct vchiq_state *state) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); struct service_data_struct *service_data; int i, found = 0; /* If there's more than 64 services, only dump ones with @@ -3464,7 +3356,7 @@ vchiq_dump_service_use_state(VCHIQ_STATE_T *state) only_nonzero = 1; for (i = 0; i < active_services; i++) { - VCHIQ_SERVICE_T *service_ptr = state->services[i]; + struct vchiq_service *service_ptr = state->services[i]; if (!service_ptr) continue; @@ -3516,9 +3408,9 @@ vchiq_dump_service_use_state(VCHIQ_STATE_T *state) } VCHIQ_STATUS_T -vchiq_check_service(VCHIQ_SERVICE_T *service) +vchiq_check_service(struct vchiq_service *service) { - VCHIQ_ARM_STATE_T *arm_state; + struct vchiq_arm_state *arm_state; VCHIQ_STATUS_T ret = VCHIQ_ERROR; if (!service || !service->state) @@ -3549,15 +3441,16 @@ out: } /* stub functions */ -void vchiq_on_remote_use_active(VCHIQ_STATE_T *state) +void vchiq_on_remote_use_active(struct vchiq_state *state) { (void)state; } -void vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state, - VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate) +void vchiq_platform_conn_state_changed(struct vchiq_state *state, + VCHIQ_CONNSTATE_T oldstate, + VCHIQ_CONNSTATE_T newstate) { - VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state); + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); vchiq_log_info(vchiq_susp_log_level, "%d: %s->%s", state->id, get_conn_state_name(oldstate), get_conn_state_name(newstate)); @@ -3593,6 +3486,28 @@ static const struct of_device_id vchiq_of_match[] = { }; MODULE_DEVICE_TABLE(of, vchiq_of_match); +static struct platform_device * +vchiq_register_child(struct platform_device *pdev, const char *name) +{ + struct platform_device_info pdevinfo; + struct platform_device *child; + + memset(&pdevinfo, 0, sizeof(pdevinfo)); + + pdevinfo.parent = &pdev->dev; + pdevinfo.name = name; + pdevinfo.id = PLATFORM_DEVID_NONE; + pdevinfo.dma_mask = DMA_BIT_MASK(32); + + child = platform_device_register_full(&pdevinfo); + if (IS_ERR(child)) { + dev_warn(&pdev->dev, "%s not registered\n", name); + child = NULL; + } + + return child; +} + static int vchiq_probe(struct platform_device *pdev) { struct device_node *fw_node; @@ -3623,34 +3538,19 @@ static int vchiq_probe(struct platform_device *pdev) if (err != 0) goto failed_platform_init; - err = alloc_chrdev_region(&vchiq_devid, VCHIQ_MINOR, 1, DEVICE_NAME); - if (err != 0) { - vchiq_log_error(vchiq_arm_log_level, - "Unable to allocate device number"); - goto failed_platform_init; - } cdev_init(&vchiq_cdev, &vchiq_fops); vchiq_cdev.owner = THIS_MODULE; err = cdev_add(&vchiq_cdev, vchiq_devid, 1); if (err != 0) { vchiq_log_error(vchiq_arm_log_level, "Unable to register device"); - goto failed_cdev_add; + goto failed_platform_init; } - /* create sysfs entries */ 
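The vchiq_register_child() helper introduced above wraps platform_device_register_full() so that both children ("bcm2835-camera" and "bcm2835_audio") are created the same way, with the 32-bit DMA mask set explicitly. A hedged sketch of that registration idiom outside the driver, using a hypothetical "demo-child" device name:

/*
 * Sketch of registering a child platform device with an inherited
 * parent and an explicit DMA mask, as vchiq_register_child() does.
 * A missing child is logged rather than treated as a fatal error.
 */
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static struct platform_device *
demo_register_child(struct platform_device *parent)
{
        struct platform_device_info info = {
                .parent   = &parent->dev,
                .name     = "demo-child",       /* illustrative name */
                .id       = PLATFORM_DEVID_NONE,
                .dma_mask = DMA_BIT_MASK(32),
        };
        struct platform_device *child;

        child = platform_device_register_full(&info);
        if (IS_ERR(child)) {
                dev_warn(&parent->dev, "demo-child not registered\n");
                return NULL;    /* caller treats NULL as "absent" */
        }
        return child;
}

Returning NULL rather than the ERR_PTR keeps the eventual platform_device_unregister() in the remove path safe, since that function ignores NULL pointers.
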
- vchiq_class = class_create(THIS_MODULE, DEVICE_NAME); - err = PTR_ERR(vchiq_class); - if (IS_ERR(vchiq_class)) - goto failed_class_create; - - vchiq_dev = device_create(vchiq_class, NULL, - vchiq_devid, NULL, "vchiq"); - err = PTR_ERR(vchiq_dev); - if (IS_ERR(vchiq_dev)) + if (IS_ERR(device_create(vchiq_class, &pdev->dev, vchiq_devid, + NULL, "vchiq"))) goto failed_device_create; - /* create debugfs entries */ vchiq_debugfs_init(); vchiq_log_info(vchiq_arm_log_level, @@ -3658,18 +3558,13 @@ static int vchiq_probe(struct platform_device *pdev) VCHIQ_VERSION, VCHIQ_VERSION_MIN, MAJOR(vchiq_devid), MINOR(vchiq_devid)); - bcm2835_camera = platform_device_register_data(&pdev->dev, - "bcm2835-camera", -1, - NULL, 0); + bcm2835_camera = vchiq_register_child(pdev, "bcm2835-camera"); + bcm2835_audio = vchiq_register_child(pdev, "bcm2835_audio"); return 0; failed_device_create: - class_destroy(vchiq_class); -failed_class_create: cdev_del(&vchiq_cdev); -failed_cdev_add: - unregister_chrdev_region(vchiq_devid, 1); failed_platform_init: vchiq_log_warning(vchiq_arm_log_level, "could not load vchiq"); return err; @@ -3680,9 +3575,7 @@ static int vchiq_remove(struct platform_device *pdev) platform_device_unregister(bcm2835_camera); vchiq_debugfs_deinit(); device_destroy(vchiq_class, vchiq_devid); - class_destroy(vchiq_class); cdev_del(&vchiq_cdev); - unregister_chrdev_region(vchiq_devid, 1); return 0; } @@ -3695,7 +3588,48 @@ static struct platform_driver vchiq_driver = { .probe = vchiq_probe, .remove = vchiq_remove, }; -module_platform_driver(vchiq_driver); + +static int __init vchiq_driver_init(void) +{ + int ret; + + vchiq_class = class_create(THIS_MODULE, DEVICE_NAME); + if (IS_ERR(vchiq_class)) { + pr_err("Failed to create vchiq class\n"); + return PTR_ERR(vchiq_class); + } + + ret = alloc_chrdev_region(&vchiq_devid, 0, 1, DEVICE_NAME); + if (ret) { + pr_err("Failed to allocate vchiq's chrdev region\n"); + goto class_destroy; + } + + ret = platform_driver_register(&vchiq_driver); + if (ret) { + pr_err("Failed to register vchiq driver\n"); + goto region_unregister; + } + + return 0; + +region_unregister: + platform_driver_unregister(&vchiq_driver); + +class_destroy: + class_destroy(vchiq_class); + + return ret; +} +module_init(vchiq_driver_init); + +static void __exit vchiq_driver_exit(void) +{ + platform_driver_unregister(&vchiq_driver); + unregister_chrdev_region(vchiq_devid, 1); + class_destroy(vchiq_class); +} +module_exit(vchiq_driver_exit); MODULE_LICENSE("Dual BSD/GPL"); MODULE_DESCRIPTION("Videocore VCHIQ driver"); diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h index 2f3ebc99cbcf..cdb963054975 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h @@ -66,7 +66,7 @@ enum USE_TYPE_E { USE_TYPE_VCHIQ }; -typedef struct vchiq_arm_state_struct { +struct vchiq_arm_state { /* Keepalive-related data */ struct task_struct *ka_thread; struct completion ka_evt; @@ -83,7 +83,7 @@ typedef struct vchiq_arm_state_struct { unsigned int wake_address; - VCHIQ_STATE_T *state; + struct vchiq_state *state; struct timer_list suspend_timer; int suspend_timer_timeout; int suspend_timer_running; @@ -121,7 +121,7 @@ typedef struct vchiq_arm_state_struct { unsigned long long resume_start_time; unsigned long long last_wake_time; -} VCHIQ_ARM_STATE_T; +}; struct vchiq_drvdata { const unsigned int cache_line_size; @@ -131,31 +131,33 @@ struct 
vchiq_drvdata { extern int vchiq_arm_log_level; extern int vchiq_susp_log_level; -int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state); +int vchiq_platform_init(struct platform_device *pdev, + struct vchiq_state *state); -extern VCHIQ_STATE_T * +extern struct vchiq_state * vchiq_get_state(void); extern VCHIQ_STATUS_T -vchiq_arm_vcsuspend(VCHIQ_STATE_T *state); +vchiq_arm_vcsuspend(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_arm_force_suspend(VCHIQ_STATE_T *state); +vchiq_arm_force_suspend(struct vchiq_state *state); extern int -vchiq_arm_allow_resume(VCHIQ_STATE_T *state); +vchiq_arm_allow_resume(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_arm_vcresume(VCHIQ_STATE_T *state); +vchiq_arm_vcresume(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state); +vchiq_arm_init_state(struct vchiq_state *state, + struct vchiq_arm_state *arm_state); extern int -vchiq_check_resume(VCHIQ_STATE_T *state); +vchiq_check_resume(struct vchiq_state *state); extern void -vchiq_check_suspend(VCHIQ_STATE_T *state); +vchiq_check_suspend(struct vchiq_state *state); VCHIQ_STATUS_T vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle); @@ -163,36 +165,37 @@ extern VCHIQ_STATUS_T vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle); extern VCHIQ_STATUS_T -vchiq_check_service(VCHIQ_SERVICE_T *service); +vchiq_check_service(struct vchiq_service *service); extern VCHIQ_STATUS_T -vchiq_platform_suspend(VCHIQ_STATE_T *state); +vchiq_platform_suspend(struct vchiq_state *state); extern int -vchiq_platform_videocore_wanted(VCHIQ_STATE_T *state); +vchiq_platform_videocore_wanted(struct vchiq_state *state); extern int vchiq_platform_use_suspend_timer(void); extern void -vchiq_dump_platform_use_state(VCHIQ_STATE_T *state); +vchiq_dump_platform_use_state(struct vchiq_state *state); extern void -vchiq_dump_service_use_state(VCHIQ_STATE_T *state); +vchiq_dump_service_use_state(struct vchiq_state *state); -extern VCHIQ_ARM_STATE_T* -vchiq_platform_get_arm_state(VCHIQ_STATE_T *state); +extern struct vchiq_arm_state* +vchiq_platform_get_arm_state(struct vchiq_state *state); extern int -vchiq_videocore_wanted(VCHIQ_STATE_T *state); +vchiq_videocore_wanted(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, - enum USE_TYPE_E use_type); +vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service, + enum USE_TYPE_E use_type); extern VCHIQ_STATUS_T -vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service); +vchiq_release_internal(struct vchiq_state *state, + struct vchiq_service *service); -extern VCHIQ_DEBUGFS_NODE_T * +extern struct vchiq_debugfs_node * vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance); extern int @@ -208,14 +211,14 @@ extern void vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace); extern void -set_suspend_state(VCHIQ_ARM_STATE_T *arm_state, - enum vc_suspend_status new_state); +set_suspend_state(struct vchiq_arm_state *arm_state, + enum vc_suspend_status new_state); extern void -set_resume_state(VCHIQ_ARM_STATE_T *arm_state, - enum vc_resume_status new_state); +set_resume_state(struct vchiq_arm_state *arm_state, + enum vc_resume_status new_state); extern void -start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state); +start_suspend_timer(struct vchiq_arm_state *arm_state); #endif /* VCHIQ_ARM_H */ diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c 
b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c index 7ea29665bd0c..7d64e2ed7b42 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c @@ -33,7 +33,6 @@ #include "vchiq_connected.h" #include "vchiq_core.h" -#include "vchiq_killable.h" #include #include diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c index 7642ced31436..9e17ec651bde 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c @@ -32,7 +32,6 @@ */ #include "vchiq_core.h" -#include "vchiq_killable.h" #define VCHIQ_SLOT_HANDLER_STACK 8192 @@ -73,8 +72,8 @@ enum { }; /* we require this for consistency between endpoints */ -vchiq_static_assert(sizeof(VCHIQ_HEADER_T) == 8); -vchiq_static_assert(IS_POW2(sizeof(VCHIQ_HEADER_T))); +vchiq_static_assert(sizeof(struct vchiq_header) == 8); +vchiq_static_assert(IS_POW2(sizeof(struct vchiq_header))); vchiq_static_assert(IS_POW2(VCHIQ_NUM_CURRENT_BULKS)); vchiq_static_assert(IS_POW2(VCHIQ_NUM_SERVICE_BULKS)); vchiq_static_assert(IS_POW2(VCHIQ_MAX_SERVICES)); @@ -85,13 +84,11 @@ int vchiq_core_log_level = VCHIQ_LOG_DEFAULT; int vchiq_core_msg_log_level = VCHIQ_LOG_DEFAULT; int vchiq_sync_log_level = VCHIQ_LOG_DEFAULT; -static atomic_t pause_bulks_count = ATOMIC_INIT(0); - static DEFINE_SPINLOCK(service_spinlock); DEFINE_SPINLOCK(bulk_waiter_spinlock); static DEFINE_SPINLOCK(quota_spinlock); -VCHIQ_STATE_T *vchiq_states[VCHIQ_MAX_STATES]; +struct vchiq_state *vchiq_states[VCHIQ_MAX_STATES]; static unsigned int handle_seq; static const char *const srvstate_names[] = { @@ -130,7 +127,7 @@ static const char *const conn_state_names[] = { }; static void -release_message_sync(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header); +release_message_sync(struct vchiq_state *state, struct vchiq_header *header); static const char *msg_type_str(unsigned int msg_type) { @@ -155,7 +152,7 @@ static const char *msg_type_str(unsigned int msg_type) } static inline void -vchiq_set_service_state(VCHIQ_SERVICE_T *service, int newstate) +vchiq_set_service_state(struct vchiq_service *service, int newstate) { vchiq_log_info(vchiq_core_log_level, "%d: srv:%d %s->%s", service->state->id, service->localport, @@ -164,10 +161,10 @@ vchiq_set_service_state(VCHIQ_SERVICE_T *service, int newstate) service->srvstate = newstate; } -VCHIQ_SERVICE_T * +struct vchiq_service * find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; spin_lock(&service_spinlock); service = handle_to_service(handle); @@ -186,10 +183,10 @@ find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle) return service; } -VCHIQ_SERVICE_T * -find_service_by_port(VCHIQ_STATE_T *state, int localport) +struct vchiq_service * +find_service_by_port(struct vchiq_state *state, int localport) { - VCHIQ_SERVICE_T *service = NULL; + struct vchiq_service *service = NULL; if ((unsigned int)localport <= VCHIQ_PORT_MAX) { spin_lock(&service_spinlock); @@ -209,11 +206,11 @@ find_service_by_port(VCHIQ_STATE_T *state, int localport) return service; } -VCHIQ_SERVICE_T * +struct vchiq_service * find_service_for_instance(VCHIQ_INSTANCE_T instance, VCHIQ_SERVICE_HANDLE_T handle) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; spin_lock(&service_spinlock); service = handle_to_service(handle); @@ -233,11 +230,11 @@ 
find_service_for_instance(VCHIQ_INSTANCE_T instance, return service; } -VCHIQ_SERVICE_T * +struct vchiq_service * find_closed_service_for_instance(VCHIQ_INSTANCE_T instance, VCHIQ_SERVICE_HANDLE_T handle) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; spin_lock(&service_spinlock); service = handle_to_service(handle); @@ -259,16 +256,16 @@ find_closed_service_for_instance(VCHIQ_INSTANCE_T instance, return service; } -VCHIQ_SERVICE_T * -next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance, - int *pidx) +struct vchiq_service * +next_service_by_instance(struct vchiq_state *state, VCHIQ_INSTANCE_T instance, + int *pidx) { - VCHIQ_SERVICE_T *service = NULL; + struct vchiq_service *service = NULL; int idx = *pidx; spin_lock(&service_spinlock); while (idx < state->unused_service) { - VCHIQ_SERVICE_T *srv = state->services[idx++]; + struct vchiq_service *srv = state->services[idx++]; if (srv && (srv->srvstate != VCHIQ_SRVSTATE_FREE) && (srv->instance == instance)) { @@ -286,7 +283,7 @@ next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance, } void -lock_service(VCHIQ_SERVICE_T *service) +lock_service(struct vchiq_service *service) { spin_lock(&service_spinlock); WARN_ON(!service); @@ -298,7 +295,7 @@ lock_service(VCHIQ_SERVICE_T *service) } void -unlock_service(VCHIQ_SERVICE_T *service) +unlock_service(struct vchiq_service *service) { spin_lock(&service_spinlock); if (!service) { @@ -311,7 +308,7 @@ unlock_service(VCHIQ_SERVICE_T *service) } service->ref_count--; if (!service->ref_count) { - VCHIQ_STATE_T *state = service->state; + struct vchiq_state *state = service->state; WARN_ON(service->srvstate != VCHIQ_SRVSTATE_FREE); state->services[service->localport] = NULL; @@ -330,7 +327,7 @@ unlock: int vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T handle) { - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); int id; id = service ? service->client_id : 0; @@ -343,7 +340,7 @@ vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T handle) void * vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T handle) { - VCHIQ_SERVICE_T *service = handle_to_service(handle); + struct vchiq_service *service = handle_to_service(handle); return service ? service->base.userdata : NULL; } @@ -351,16 +348,16 @@ vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T handle) int vchiq_get_service_fourcc(VCHIQ_SERVICE_HANDLE_T handle) { - VCHIQ_SERVICE_T *service = handle_to_service(handle); + struct vchiq_service *service = handle_to_service(handle); return service ? service->base.fourcc : 0; } static void -mark_service_closing_internal(VCHIQ_SERVICE_T *service, int sh_thread) +mark_service_closing_internal(struct vchiq_service *service, int sh_thread) { - VCHIQ_STATE_T *state = service->state; - VCHIQ_SERVICE_QUOTA_T *service_quota; + struct vchiq_state *state = service->state; + struct vchiq_service_quota *service_quota; service->closing = 1; @@ -378,18 +375,18 @@ mark_service_closing_internal(VCHIQ_SERVICE_T *service, int sh_thread) /* Unblock any sending thread. 
*/ service_quota = &state->service_quotas[service->localport]; - up(&service_quota->quota_event); + complete(&service_quota->quota_event); } static void -mark_service_closing(VCHIQ_SERVICE_T *service) +mark_service_closing(struct vchiq_service *service) { mark_service_closing_internal(service, 0); } static inline VCHIQ_STATUS_T -make_service_callback(VCHIQ_SERVICE_T *service, VCHIQ_REASON_T reason, - VCHIQ_HEADER_T *header, void *bulk_userdata) +make_service_callback(struct vchiq_service *service, VCHIQ_REASON_T reason, + struct vchiq_header *header, void *bulk_userdata) { VCHIQ_STATUS_T status; @@ -408,7 +405,7 @@ make_service_callback(VCHIQ_SERVICE_T *service, VCHIQ_REASON_T reason, } inline void -vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate) +vchiq_set_conn_state(struct vchiq_state *state, VCHIQ_CONNSTATE_T newstate) { VCHIQ_CONNSTATE_T oldstate = state->conn_state; @@ -420,27 +417,23 @@ vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate) } static inline void -remote_event_create(VCHIQ_STATE_T *state, REMOTE_EVENT_T *event) +remote_event_create(wait_queue_head_t *wq, struct remote_event *event) { event->armed = 0; /* Don't clear the 'fired' flag because it may already have been set ** by the other side. */ - sema_init((struct semaphore *)((char *)state + event->event), 0); + init_waitqueue_head(wq); } static inline int -remote_event_wait(VCHIQ_STATE_T *state, REMOTE_EVENT_T *event) +remote_event_wait(wait_queue_head_t *wq, struct remote_event *event) { if (!event->fired) { event->armed = 1; dsb(sy); - if (!event->fired) { - if (down_interruptible( - (struct semaphore *) - ((char *)state + event->event)) != 0) { - event->armed = 0; - return 0; - } + if (wait_event_killable(*wq, event->fired)) { + event->armed = 0; + return 0; } event->armed = 0; wmb(); @@ -451,26 +444,26 @@ remote_event_wait(VCHIQ_STATE_T *state, REMOTE_EVENT_T *event) } static inline void -remote_event_signal_local(VCHIQ_STATE_T *state, REMOTE_EVENT_T *event) +remote_event_signal_local(wait_queue_head_t *wq, struct remote_event *event) { event->armed = 0; - up((struct semaphore *)((char *)state + event->event)); + wake_up_all(wq); } static inline void -remote_event_poll(VCHIQ_STATE_T *state, REMOTE_EVENT_T *event) +remote_event_poll(wait_queue_head_t *wq, struct remote_event *event) { if (event->fired && event->armed) - remote_event_signal_local(state, event); + remote_event_signal_local(wq, event); } void -remote_event_pollall(VCHIQ_STATE_T *state) +remote_event_pollall(struct vchiq_state *state) { - remote_event_poll(state, &state->local->sync_trigger); - remote_event_poll(state, &state->local->sync_release); - remote_event_poll(state, &state->local->trigger); - remote_event_poll(state, &state->local->recycle); + remote_event_poll(&state->sync_trigger_event, &state->local->sync_trigger); + remote_event_poll(&state->sync_release_event, &state->local->sync_release); + remote_event_poll(&state->trigger_event, &state->local->trigger); + remote_event_poll(&state->recycle_event, &state->local->recycle); } /* Round up message sizes so that any space at the end of a slot is always big @@ -481,23 +474,23 @@ static inline size_t calc_stride(size_t size) { /* Allow room for the header */ - size += sizeof(VCHIQ_HEADER_T); + size += sizeof(struct vchiq_header); /* Round up */ - return (size + sizeof(VCHIQ_HEADER_T) - 1) & ~(sizeof(VCHIQ_HEADER_T) - - 1); + return (size + sizeof(struct vchiq_header) - 1) & + ~(sizeof(struct vchiq_header) - 1); } /* Called by the slot handler thread */ 
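Also visible above is the remote_event rework: the semaphore that used to be located through an offset stored in the shared state (the '(struct semaphore *)((char *)state + event->event)' casts) is replaced by a wait_queue_head_t kept in the local vchiq_state, leaving only the armed/fired flags in memory the VideoCore can see. A sketch of the wait side under those assumptions; the demo names are illustrative, and smp_mb() stands in for the driver's ARM-specific dsb(sy) barrier:

/*
 * Sketch of the reworked remote-event wait. The fired/armed flags
 * live in memory shared with the remote side; the waitqueue is purely
 * local. wait_event_killable() replaces down_interruptible() on the
 * old semaphore.
 */
#include <linux/wait.h>
#include <asm/barrier.h>

struct demo_remote_event {
        int armed;      /* remote side should signal when set */
        int fired;      /* set by the remote side */
};

static int demo_remote_event_wait(wait_queue_head_t *wq,
                                  struct demo_remote_event *event)
{
        if (!event->fired) {
                event->armed = 1;
                smp_mb();       /* publish 'armed' before waiting */
                if (wait_event_killable(*wq, event->fired)) {
                        event->armed = 0;
                        return 0;       /* fatal signal: caller bails out */
                }
                event->armed = 0;
                smp_mb();       /* order flag updates after wake-up */
        }
        event->fired = 0;
        return 1;
}

The signalling side only needs wake_up_all() on the local waitqueue, which is what remote_event_signal_local() above reduces to.
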
-static VCHIQ_SERVICE_T * -get_listening_service(VCHIQ_STATE_T *state, int fourcc) +static struct vchiq_service * +get_listening_service(struct vchiq_state *state, int fourcc) { int i; WARN_ON(fourcc == VCHIQ_FOURCC_INVALID); for (i = 0; i < state->unused_service; i++) { - VCHIQ_SERVICE_T *service = state->services[i]; + struct vchiq_service *service = state->services[i]; if (service && (service->public_fourcc == fourcc) && @@ -513,13 +506,13 @@ get_listening_service(VCHIQ_STATE_T *state, int fourcc) } /* Called by the slot handler thread */ -static VCHIQ_SERVICE_T * -get_connected_service(VCHIQ_STATE_T *state, unsigned int port) +static struct vchiq_service * +get_connected_service(struct vchiq_state *state, unsigned int port) { int i; for (i = 0; i < state->unused_service; i++) { - VCHIQ_SERVICE_T *service = state->services[i]; + struct vchiq_service *service = state->services[i]; if (service && (service->srvstate == VCHIQ_SRVSTATE_OPEN) && (service->remoteport == port)) { @@ -531,7 +524,8 @@ get_connected_service(VCHIQ_STATE_T *state, unsigned int port) } inline void -request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type) +request_poll(struct vchiq_state *state, struct vchiq_service *service, + int poll_type) { u32 value; @@ -554,26 +548,26 @@ request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type) wmb(); /* ... and ensure the slot handler runs. */ - remote_event_signal_local(state, &state->local->trigger); + remote_event_signal_local(&state->trigger_event, &state->local->trigger); } /* Called from queue_message, by the slot handler and application threads, ** with slot_mutex held */ -static VCHIQ_HEADER_T * -reserve_space(VCHIQ_STATE_T *state, size_t space, int is_blocking) +static struct vchiq_header * +reserve_space(struct vchiq_state *state, size_t space, int is_blocking) { - VCHIQ_SHARED_STATE_T *local = state->local; + struct vchiq_shared_state *local = state->local; int tx_pos = state->local_tx_pos; int slot_space = VCHIQ_SLOT_SIZE - (tx_pos & VCHIQ_SLOT_MASK); if (space > slot_space) { - VCHIQ_HEADER_T *header; + struct vchiq_header *header; /* Fill the remaining space with padding */ WARN_ON(state->tx_data == NULL); - header = (VCHIQ_HEADER_T *) + header = (struct vchiq_header *) (state->tx_data + (tx_pos & VCHIQ_SLOT_MASK)); header->msgid = VCHIQ_MSGID_PADDING; - header->size = slot_space - sizeof(VCHIQ_HEADER_T); + header->size = slot_space - sizeof(struct vchiq_header); tx_pos += slot_space; } @@ -584,7 +578,7 @@ reserve_space(VCHIQ_STATE_T *state, size_t space, int is_blocking) /* If there is no free slot... */ - if (down_trylock(&state->slot_available_event) != 0) { + if (!try_wait_for_completion(&state->slot_available_event)) { /* ...wait for one. 
*/ VCHIQ_STATS_INC(state, slot_stalls); @@ -595,13 +589,13 @@ reserve_space(VCHIQ_STATE_T *state, size_t space, int is_blocking) remote_event_signal(&state->remote->trigger); if (!is_blocking || - (down_interruptible( - &state->slot_available_event) != 0)) + (wait_for_completion_killable( + &state->slot_available_event))) return NULL; /* No space available */ } if (tx_pos == (state->slot_queue_available * VCHIQ_SLOT_SIZE)) { - up(&state->slot_available_event); + complete(&state->slot_available_event); pr_warn("%s: invalid tx_pos: %d\n", __func__, tx_pos); return NULL; } @@ -615,14 +609,16 @@ reserve_space(VCHIQ_STATE_T *state, size_t space, int is_blocking) state->local_tx_pos = tx_pos + space; - return (VCHIQ_HEADER_T *)(state->tx_data + (tx_pos & VCHIQ_SLOT_MASK)); + return (struct vchiq_header *)(state->tx_data + + (tx_pos & VCHIQ_SLOT_MASK)); } /* Called by the recycle thread. */ static void -process_free_queue(VCHIQ_STATE_T *state, BITSET_T *service_found, size_t length) +process_free_queue(struct vchiq_state *state, BITSET_T *service_found, + size_t length) { - VCHIQ_SHARED_STATE_T *local = state->local; + struct vchiq_shared_state *local = state->local; int slot_queue_available; /* Find slots which have been freed by the other side, and return them @@ -660,13 +656,13 @@ process_free_queue(VCHIQ_STATE_T *state, BITSET_T *service_found, size_t length) pos = 0; while (pos < VCHIQ_SLOT_SIZE) { - VCHIQ_HEADER_T *header = - (VCHIQ_HEADER_T *)(data + pos); + struct vchiq_header *header = + (struct vchiq_header *)(data + pos); int msgid = header->msgid; if (VCHIQ_MSG_TYPE(msgid) == VCHIQ_MSG_DATA) { int port = VCHIQ_MSG_SRCPORT(msgid); - VCHIQ_SERVICE_QUOTA_T *service_quota = + struct vchiq_service_quota *service_quota = &state->service_quotas[port]; int count; @@ -681,7 +677,7 @@ process_free_queue(VCHIQ_STATE_T *state, BITSET_T *service_found, size_t length) /* Signal the service that it ** has dropped below its quota */ - up(&service_quota->quota_event); + complete(&service_quota->quota_event); else if (count == 0) { vchiq_log_error(vchiq_core_log_level, "service %d message_use_count=%d (header %pK, msgid %x, header->msgid %x, header->size %x)", @@ -706,7 +702,7 @@ process_free_queue(VCHIQ_STATE_T *state, BITSET_T *service_found, size_t length) /* Signal the service in case ** it has dropped below its ** quota */ - up(&service_quota->quota_event); + complete(&service_quota->quota_event); vchiq_log_trace( vchiq_core_log_level, "%d: pfq:%d %x@%pK - slot_use->%d", @@ -747,7 +743,7 @@ process_free_queue(VCHIQ_STATE_T *state, BITSET_T *service_found, size_t length) count - 1; spin_unlock("a_spinlock); if (count == state->data_quota) - up(&state->data_quota_event); + complete(&state->data_quota_event); } /* @@ -757,7 +753,7 @@ process_free_queue(VCHIQ_STATE_T *state, BITSET_T *service_found, size_t length) mb(); state->slot_queue_available = slot_queue_available; - up(&state->slot_available_event); + complete(&state->slot_available_event); } } @@ -805,17 +801,15 @@ copy_message_data( /* Called by the slot handler and application threads */ static VCHIQ_STATUS_T -queue_message(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, - int msgid, - ssize_t (*copy_callback)(void *context, void *dest, - size_t offset, size_t maxsize), - void *context, - size_t size, - int flags) -{ - VCHIQ_SHARED_STATE_T *local; - VCHIQ_SERVICE_QUOTA_T *service_quota = NULL; - VCHIQ_HEADER_T *header; +queue_message(struct vchiq_state *state, struct vchiq_service *service, + int msgid, + ssize_t (*copy_callback)(void 
*context, void *dest, + size_t offset, size_t maxsize), + void *context, size_t size, int flags) +{ + struct vchiq_shared_state *local; + struct vchiq_service_quota *service_quota = NULL; + struct vchiq_header *header; int type = VCHIQ_MSG_TYPE(msgid); size_t stride; @@ -865,8 +859,8 @@ queue_message(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, spin_unlock("a_spinlock); mutex_unlock(&state->slot_mutex); - if (down_interruptible(&state->data_quota_event) - != 0) + if (wait_for_completion_killable( + &state->data_quota_event)) return VCHIQ_RETRY; mutex_lock(&state->slot_mutex); @@ -876,7 +870,7 @@ queue_message(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, if ((tx_end_index == state->previous_data_index) || (state->data_use_count < state->data_quota)) { /* Pass the signal on to other waiters */ - up(&state->data_quota_event); + complete(&state->data_quota_event); break; } } @@ -896,8 +890,8 @@ queue_message(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, service_quota->slot_use_count); VCHIQ_SERVICE_STATS_INC(service, quota_stalls); mutex_unlock(&state->slot_mutex); - if (down_interruptible(&service_quota->quota_event) - != 0) + if (wait_for_completion_killable( + &service_quota->quota_event)) return VCHIQ_RETRY; if (service->closing) return VCHIQ_ERROR; @@ -1055,16 +1049,14 @@ queue_message(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, /* Called by the slot handler and application threads */ static VCHIQ_STATUS_T -queue_message_sync(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, - int msgid, - ssize_t (*copy_callback)(void *context, void *dest, - size_t offset, size_t maxsize), - void *context, - int size, - int is_blocking) -{ - VCHIQ_SHARED_STATE_T *local; - VCHIQ_HEADER_T *header; +queue_message_sync(struct vchiq_state *state, struct vchiq_service *service, + int msgid, + ssize_t (*copy_callback)(void *context, void *dest, + size_t offset, size_t maxsize), + void *context, int size, int is_blocking) +{ + struct vchiq_shared_state *local; + struct vchiq_header *header; ssize_t callback_result; local = state->local; @@ -1073,11 +1065,11 @@ queue_message_sync(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, (mutex_lock_killable(&state->sync_mutex) != 0)) return VCHIQ_RETRY; - remote_event_wait(state, &local->sync_release); + remote_event_wait(&state->sync_release_event, &local->sync_release); rmb(); - header = (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state, + header = (struct vchiq_header *)SLOT_DATA_FROM_INDEX(state, local->slot_sync); { @@ -1140,9 +1132,6 @@ queue_message_sync(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, size); } - /* Make sure the new header is visible to the peer. 
*/ - wmb(); - remote_event_signal(&state->remote->sync_trigger); if (VCHIQ_MSG_TYPE(msgid) != VCHIQ_MSG_PAUSE) @@ -1152,14 +1141,14 @@ queue_message_sync(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, } static inline void -claim_slot(VCHIQ_SLOT_INFO_T *slot) +claim_slot(struct vchiq_slot_info *slot) { slot->use_count++; } static void -release_slot(VCHIQ_STATE_T *state, VCHIQ_SLOT_INFO_T *slot_info, - VCHIQ_HEADER_T *header, VCHIQ_SERVICE_T *service) +release_slot(struct vchiq_state *state, struct vchiq_slot_info *slot_info, + struct vchiq_header *header, struct vchiq_service *service) { int release_count; @@ -1211,8 +1200,8 @@ release_slot(VCHIQ_STATE_T *state, VCHIQ_SLOT_INFO_T *slot_info, /* Called by the slot handler - don't hold the bulk mutex */ static VCHIQ_STATUS_T -notify_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue, - int retry_poll) +notify_bulks(struct vchiq_service *service, struct vchiq_bulk_queue *queue, + int retry_poll) { VCHIQ_STATUS_T status = VCHIQ_SUCCESS; @@ -1222,36 +1211,11 @@ notify_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue, (queue == &service->bulk_tx) ? 't' : 'r', queue->process, queue->remote_notify, queue->remove); - if (service->state->is_master) { - while (queue->remote_notify != queue->process) { - VCHIQ_BULK_T *bulk = - &queue->bulks[BULK_INDEX(queue->remote_notify)]; - int msgtype = (bulk->dir == VCHIQ_BULK_TRANSMIT) ? - VCHIQ_MSG_BULK_RX_DONE : VCHIQ_MSG_BULK_TX_DONE; - int msgid = VCHIQ_MAKE_MSG(msgtype, service->localport, - service->remoteport); - /* Only reply to non-dummy bulk requests */ - if (bulk->remote_data) { - status = queue_message( - service->state, - NULL, - msgid, - memcpy_copy_callback, - &bulk->actual, - 4, - 0); - if (status != VCHIQ_SUCCESS) - break; - } - queue->remote_notify++; - } - } else { - queue->remote_notify = queue->process; - } + queue->remote_notify = queue->process; if (status == VCHIQ_SUCCESS) { while (queue->remove != queue->remote_notify) { - VCHIQ_BULK_T *bulk = + struct vchiq_bulk *bulk = &queue->bulks[BULK_INDEX(queue->remove)]; /* Only generate callbacks for non-dummy bulk @@ -1282,7 +1246,7 @@ notify_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue, waiter = bulk->userdata; if (waiter) { waiter->actual = bulk->actual; - up(&waiter->event); + complete(&waiter->event); } spin_unlock(&bulk_waiter_spinlock); } else if (bulk->mode == @@ -1305,7 +1269,7 @@ notify_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue, } queue->remove++; - up(&service->bulk_remove_event); + complete(&service->bulk_remove_event); } if (!retry_poll) status = VCHIQ_SUCCESS; @@ -1321,7 +1285,7 @@ notify_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue, /* Called by the slot handler thread */ static void -poll_services(VCHIQ_STATE_T *state) +poll_services(struct vchiq_state *state) { int group, i; @@ -1331,7 +1295,7 @@ poll_services(VCHIQ_STATE_T *state) flags = atomic_xchg(&state->poll_services[group], 0); for (i = 0; flags; i++) { if (flags & (1 << i)) { - VCHIQ_SERVICE_T *service = + struct vchiq_service *service = find_service_by_port(state, (group<<5) + i); u32 service_flags; @@ -1385,66 +1349,10 @@ poll_services(VCHIQ_STATE_T *state) } } -/* Called by the slot handler or application threads, holding the bulk mutex. 
*/ -static int -resolve_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue) -{ - VCHIQ_STATE_T *state = service->state; - int resolved = 0; - - while ((queue->process != queue->local_insert) && - (queue->process != queue->remote_insert)) { - VCHIQ_BULK_T *bulk = &queue->bulks[BULK_INDEX(queue->process)]; - - vchiq_log_trace(vchiq_core_log_level, - "%d: rb:%d %cx - li=%x ri=%x p=%x", - state->id, service->localport, - (queue == &service->bulk_tx) ? 't' : 'r', - queue->local_insert, queue->remote_insert, - queue->process); - - WARN_ON(!((int)(queue->local_insert - queue->process) > 0)); - WARN_ON(!((int)(queue->remote_insert - queue->process) > 0)); - - if (mutex_lock_killable(&state->bulk_transfer_mutex)) - break; - - vchiq_transfer_bulk(bulk); - mutex_unlock(&state->bulk_transfer_mutex); - - if (SRVTRACE_ENABLED(service, VCHIQ_LOG_INFO)) { - const char *header = (queue == &service->bulk_tx) ? - "Send Bulk to" : "Recv Bulk from"; - if (bulk->actual != VCHIQ_BULK_ACTUAL_ABORTED) - vchiq_log_info(SRVTRACE_LEVEL(service), - "%s %c%c%c%c d:%d len:%d %pK<->%pK", - header, - VCHIQ_FOURCC_AS_4CHARS( - service->base.fourcc), - service->remoteport, bulk->size, - bulk->data, bulk->remote_data); - else - vchiq_log_info(SRVTRACE_LEVEL(service), - "%s %c%c%c%c d:%d ABORTED - tx len:%d," - " rx len:%d %pK<->%pK", - header, - VCHIQ_FOURCC_AS_4CHARS( - service->base.fourcc), - service->remoteport, - bulk->size, bulk->remote_size, - bulk->data, bulk->remote_data); - } - - vchiq_complete_bulk(bulk); - queue->process++; - resolved++; - } - return resolved; -} - /* Called with the bulk_mutex held */ static void -abort_outstanding_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue) +abort_outstanding_bulks(struct vchiq_service *service, + struct vchiq_bulk_queue *queue) { int is_tx = (queue == &service->bulk_tx); @@ -1458,7 +1366,8 @@ abort_outstanding_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue) while ((queue->process != queue->local_insert) || (queue->process != queue->remote_insert)) { - VCHIQ_BULK_T *bulk = &queue->bulks[BULK_INDEX(queue->process)]; + struct vchiq_bulk *bulk = + &queue->bulks[BULK_INDEX(queue->process)]; if (queue->process == queue->remote_insert) { /* fabricate a matching dummy bulk */ @@ -1492,69 +1401,10 @@ abort_outstanding_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue) } } -/* Called from the slot handler thread */ -static void -pause_bulks(VCHIQ_STATE_T *state) -{ - if (unlikely(atomic_inc_return(&pause_bulks_count) != 1)) { - WARN_ON_ONCE(1); - atomic_set(&pause_bulks_count, 1); - return; - } - - /* Block bulk transfers from all services */ - mutex_lock(&state->bulk_transfer_mutex); -} - -/* Called from the slot handler thread */ -static void -resume_bulks(VCHIQ_STATE_T *state) -{ - int i; - - if (unlikely(atomic_dec_return(&pause_bulks_count) != 0)) { - WARN_ON_ONCE(1); - atomic_set(&pause_bulks_count, 0); - return; - } - - /* Allow bulk transfers from all services */ - mutex_unlock(&state->bulk_transfer_mutex); - - if (state->deferred_bulks == 0) - return; - - /* Deal with any bulks which had to be deferred due to being in - * paused state. 
Don't try to match up to number of deferred bulks - * in case we've had something come and close the service in the - * interim - just process all bulk queues for all services */ - vchiq_log_info(vchiq_core_log_level, "%s: processing %d deferred bulks", - __func__, state->deferred_bulks); - - for (i = 0; i < state->unused_service; i++) { - VCHIQ_SERVICE_T *service = state->services[i]; - int resolved_rx = 0; - int resolved_tx = 0; - - if (!service || (service->srvstate != VCHIQ_SRVSTATE_OPEN)) - continue; - - mutex_lock(&service->bulk_mutex); - resolved_rx = resolve_bulks(service, &service->bulk_rx); - resolved_tx = resolve_bulks(service, &service->bulk_tx); - mutex_unlock(&service->bulk_mutex); - if (resolved_rx) - notify_bulks(service, &service->bulk_rx, 1); - if (resolved_tx) - notify_bulks(service, &service->bulk_tx, 1); - } - state->deferred_bulks = 0; -} - static int -parse_open(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header) +parse_open(struct vchiq_state *state, struct vchiq_header *header) { - VCHIQ_SERVICE_T *service = NULL; + struct vchiq_service *service = NULL; int msgid, size; unsigned int localport, remoteport; @@ -1608,9 +1458,7 @@ parse_open(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header) service->sync = 0; /* Acknowledge the OPEN */ - if (service->sync && - (state->version_common >= - VCHIQ_VERSION_SYNCHRONOUS_MODE)) { + if (service->sync) { if (queue_message_sync( state, NULL, @@ -1676,10 +1524,10 @@ bail_not_ready: /* Called by the slot handler thread */ static void -parse_rx_slots(VCHIQ_STATE_T *state) +parse_rx_slots(struct vchiq_state *state) { - VCHIQ_SHARED_STATE_T *remote = state->remote; - VCHIQ_SERVICE_T *service = NULL; + struct vchiq_shared_state *remote = state->remote; + struct vchiq_service *service = NULL; int tx_pos; DEBUG_INITIALISE(state->local) @@ -1687,7 +1535,7 @@ parse_rx_slots(VCHIQ_STATE_T *state) tx_pos = remote->tx_pos; while (state->rx_pos != tx_pos) { - VCHIQ_HEADER_T *header; + struct vchiq_header *header; int msgid, size; int type; unsigned int localport, remoteport; @@ -1711,7 +1559,7 @@ parse_rx_slots(VCHIQ_STATE_T *state) state->rx_info->release_count = 0; } - header = (VCHIQ_HEADER_T *)(state->rx_data + + header = (struct vchiq_header *)(state->rx_data + (state->rx_pos & VCHIQ_SLOT_MASK)); DEBUG_VALUE(PARSE_HEADER, (int)(long)header); msgid = header->msgid; @@ -1814,7 +1662,7 @@ parse_rx_slots(VCHIQ_STATE_T *state) service->remoteport = remoteport; vchiq_set_service_state(service, VCHIQ_SRVSTATE_OPEN); - up(&service->remove_event); + complete(&service->remove_event); } else vchiq_log_error(vchiq_core_log_level, "OPENACK received in state %s", @@ -1866,90 +1714,32 @@ parse_rx_slots(VCHIQ_STATE_T *state) case VCHIQ_MSG_CONNECT: vchiq_log_info(vchiq_core_log_level, "%d: prs CONNECT@%pK", state->id, header); - state->version_common = ((VCHIQ_SLOT_ZERO_T *) + state->version_common = ((struct vchiq_slot_zero *) state->slot_data)->version; - up(&state->connect); + complete(&state->connect); break; case VCHIQ_MSG_BULK_RX: - case VCHIQ_MSG_BULK_TX: { - VCHIQ_BULK_QUEUE_T *queue; - - WARN_ON(!state->is_master); - queue = (type == VCHIQ_MSG_BULK_RX) ? 
- &service->bulk_tx : &service->bulk_rx; - if ((service->remoteport == remoteport) - && (service->srvstate == - VCHIQ_SRVSTATE_OPEN)) { - VCHIQ_BULK_T *bulk; - int resolved = 0; - - DEBUG_TRACE(PARSE_LINE); - if (mutex_lock_killable( - &service->bulk_mutex) != 0) { - DEBUG_TRACE(PARSE_LINE); - goto bail_not_ready; - } - - WARN_ON(!(queue->remote_insert < queue->remove + - VCHIQ_NUM_SERVICE_BULKS)); - bulk = &queue->bulks[ - BULK_INDEX(queue->remote_insert)]; - bulk->remote_data = - (void *)(long)((int *)header->data)[0]; - bulk->remote_size = ((int *)header->data)[1]; - wmb(); - - vchiq_log_info(vchiq_core_log_level, - "%d: prs %s@%pK (%d->%d) %x@%pK", - state->id, msg_type_str(type), - header, remoteport, localport, - bulk->remote_size, bulk->remote_data); - - queue->remote_insert++; - - if (atomic_read(&pause_bulks_count)) { - state->deferred_bulks++; - vchiq_log_info(vchiq_core_log_level, - "%s: deferring bulk (%d)", - __func__, - state->deferred_bulks); - if (state->conn_state != - VCHIQ_CONNSTATE_PAUSE_SENT) - vchiq_log_error( - vchiq_core_log_level, - "%s: bulks paused in " - "unexpected state %s", - __func__, - conn_state_names[ - state->conn_state]); - } else if (state->conn_state == - VCHIQ_CONNSTATE_CONNECTED) { - DEBUG_TRACE(PARSE_LINE); - resolved = resolve_bulks(service, - queue); - } - - mutex_unlock(&service->bulk_mutex); - if (resolved) - notify_bulks(service, queue, - 1/*retry_poll*/); - } - } break; + case VCHIQ_MSG_BULK_TX: + /* + * We should never receive a bulk request from the + * other side since we're not setup to perform as the + * master. + */ + WARN_ON(1); + break; case VCHIQ_MSG_BULK_RX_DONE: case VCHIQ_MSG_BULK_TX_DONE: - WARN_ON(state->is_master); if ((service->remoteport == remoteport) && (service->srvstate != VCHIQ_SRVSTATE_FREE)) { - VCHIQ_BULK_QUEUE_T *queue; - VCHIQ_BULK_T *bulk; + struct vchiq_bulk_queue *queue; + struct vchiq_bulk *bulk; queue = (type == VCHIQ_MSG_BULK_RX_DONE) ? 
&service->bulk_rx : &service->bulk_tx; DEBUG_TRACE(PARSE_LINE); - if (mutex_lock_killable( - &service->bulk_mutex) != 0) { + if (mutex_lock_killable(&service->bulk_mutex)) { DEBUG_TRACE(PARSE_LINE); goto bail_not_ready; } @@ -2026,8 +1816,6 @@ parse_rx_slots(VCHIQ_STATE_T *state) NULL, NULL, 0, QMFLAGS_NO_MUTEX_UNLOCK) == VCHIQ_RETRY) goto bail_not_ready; - if (state->is_master) - pause_bulks(state); } /* At this point slot_mutex is held */ vchiq_set_conn_state(state, VCHIQ_CONNSTATE_PAUSED); @@ -2039,8 +1827,6 @@ parse_rx_slots(VCHIQ_STATE_T *state) state->id, header, size); /* Release the slot mutex */ mutex_unlock(&state->slot_mutex); - if (state->is_master) - resume_bulks(state); vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED); vchiq_platform_resumed(state); break; @@ -2090,15 +1876,15 @@ bail_not_ready: static int slot_handler_func(void *v) { - VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v; - VCHIQ_SHARED_STATE_T *local = state->local; + struct vchiq_state *state = (struct vchiq_state *)v; + struct vchiq_shared_state *local = state->local; DEBUG_INITIALISE(local) while (1) { DEBUG_COUNT(SLOT_HANDLER_COUNT); DEBUG_TRACE(SLOT_HANDLER_LINE); - remote_event_wait(state, &local->trigger); + remote_event_wait(&state->trigger_event, &local->trigger); rmb(); @@ -2119,8 +1905,6 @@ slot_handler_func(void *v) break; case VCHIQ_CONNSTATE_PAUSING: - if (state->is_master) - pause_bulks(state); if (queue_message(state, NULL, VCHIQ_MAKE_MSG(VCHIQ_MSG_PAUSE, 0, 0), NULL, NULL, 0, @@ -2129,8 +1913,6 @@ slot_handler_func(void *v) vchiq_set_conn_state(state, VCHIQ_CONNSTATE_PAUSE_SENT); } else { - if (state->is_master) - resume_bulks(state); /* Retry later */ state->poll_needed = 1; } @@ -2145,8 +1927,6 @@ slot_handler_func(void *v) VCHIQ_MAKE_MSG(VCHIQ_MSG_RESUME, 0, 0), NULL, NULL, 0, QMFLAGS_NO_MUTEX_LOCK) != VCHIQ_RETRY) { - if (state->is_master) - resume_bulks(state); vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED); vchiq_platform_resumed(state); @@ -2180,8 +1960,8 @@ slot_handler_func(void *v) static int recycle_func(void *v) { - VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v; - VCHIQ_SHARED_STATE_T *local = state->local; + struct vchiq_state *state = (struct vchiq_state *)v; + struct vchiq_shared_state *local = state->local; BITSET_T *found; size_t length; @@ -2193,7 +1973,7 @@ recycle_func(void *v) return -ENOMEM; while (1) { - remote_event_wait(state, &local->recycle); + remote_event_wait(&state->recycle_event, &local->recycle); process_free_queue(state, found, length); } @@ -2204,18 +1984,19 @@ recycle_func(void *v) static int sync_func(void *v) { - VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v; - VCHIQ_SHARED_STATE_T *local = state->local; - VCHIQ_HEADER_T *header = (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state, - state->remote->slot_sync); + struct vchiq_state *state = (struct vchiq_state *)v; + struct vchiq_shared_state *local = state->local; + struct vchiq_header *header = + (struct vchiq_header *)SLOT_DATA_FROM_INDEX(state, + state->remote->slot_sync); while (1) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; int msgid, size; int type; unsigned int localport, remoteport; - remote_event_wait(state, &local->sync_trigger); + remote_event_wait(&state->sync_trigger_event, &local->sync_trigger); rmb(); @@ -2269,7 +2050,7 @@ sync_func(void *v) vchiq_set_service_state(service, VCHIQ_SRVSTATE_OPENSYNC); service->sync = 1; - up(&service->remove_event); + complete(&service->remove_event); } release_message_sync(state, header); break; @@ -2308,7 +2089,7 @@ sync_func(void *v) } static 
void -init_bulk_queue(VCHIQ_BULK_QUEUE_T *queue) +init_bulk_queue(struct vchiq_bulk_queue *queue) { queue->local_insert = 0; queue->remote_insert = 0; @@ -2323,13 +2104,13 @@ get_conn_state_name(VCHIQ_CONNSTATE_T conn_state) return conn_state_names[conn_state]; } -VCHIQ_SLOT_ZERO_T * +struct vchiq_slot_zero * vchiq_init_slots(void *mem_base, int mem_size) { int mem_align = (int)((VCHIQ_SLOT_SIZE - (long)mem_base) & VCHIQ_SLOT_MASK); - VCHIQ_SLOT_ZERO_T *slot_zero = - (VCHIQ_SLOT_ZERO_T *)((char *)mem_base + mem_align); + struct vchiq_slot_zero *slot_zero = + (struct vchiq_slot_zero *)((char *)mem_base + mem_align); int num_slots = (mem_size - mem_align)/VCHIQ_SLOT_SIZE; int first_data_slot = VCHIQ_SLOT_ZERO_SLOTS; @@ -2343,12 +2124,12 @@ vchiq_init_slots(void *mem_base, int mem_size) return NULL; } - memset(slot_zero, 0, sizeof(VCHIQ_SLOT_ZERO_T)); + memset(slot_zero, 0, sizeof(struct vchiq_slot_zero)); slot_zero->magic = VCHIQ_MAGIC; slot_zero->version = VCHIQ_VERSION; slot_zero->version_min = VCHIQ_VERSION_MIN; - slot_zero->slot_zero_size = sizeof(VCHIQ_SLOT_ZERO_T); + slot_zero->slot_zero_size = sizeof(struct vchiq_slot_zero); slot_zero->slot_size = VCHIQ_SLOT_SIZE; slot_zero->max_slots = VCHIQ_MAX_SLOTS; slot_zero->max_slots_per_side = VCHIQ_MAX_SLOTS_PER_SIDE; @@ -2364,90 +2145,24 @@ vchiq_init_slots(void *mem_base, int mem_size) } VCHIQ_STATUS_T -vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero, - int is_master) +vchiq_init_state(struct vchiq_state *state, struct vchiq_slot_zero *slot_zero) { - VCHIQ_SHARED_STATE_T *local; - VCHIQ_SHARED_STATE_T *remote; + struct vchiq_shared_state *local; + struct vchiq_shared_state *remote; VCHIQ_STATUS_T status; char threadname[16]; int i; vchiq_log_warning(vchiq_core_log_level, - "%s: slot_zero = %pK, is_master = %d", - __func__, slot_zero, is_master); + "%s: slot_zero = %pK", __func__, slot_zero); if (vchiq_states[0]) { pr_err("%s: VCHIQ state already initialized\n", __func__); return VCHIQ_ERROR; } - /* Check the input configuration */ - - if (slot_zero->magic != VCHIQ_MAGIC) { - vchiq_loud_error_header(); - vchiq_loud_error("Invalid VCHIQ magic value found."); - vchiq_loud_error("slot_zero=%pK: magic=%x (expected %x)", - slot_zero, slot_zero->magic, VCHIQ_MAGIC); - vchiq_loud_error_footer(); - return VCHIQ_ERROR; - } - - if (slot_zero->version < VCHIQ_VERSION_MIN) { - vchiq_loud_error_header(); - vchiq_loud_error("Incompatible VCHIQ versions found."); - vchiq_loud_error("slot_zero=%pK: VideoCore version=%d (minimum %d)", - slot_zero, slot_zero->version, VCHIQ_VERSION_MIN); - vchiq_loud_error("Restart with a newer VideoCore image."); - vchiq_loud_error_footer(); - return VCHIQ_ERROR; - } - - if (VCHIQ_VERSION < slot_zero->version_min) { - vchiq_loud_error_header(); - vchiq_loud_error("Incompatible VCHIQ versions found."); - vchiq_loud_error("slot_zero=%pK: version=%d (VideoCore minimum %d)", - slot_zero, VCHIQ_VERSION, slot_zero->version_min); - vchiq_loud_error("Restart with a newer kernel."); - vchiq_loud_error_footer(); - return VCHIQ_ERROR; - } - - if ((slot_zero->slot_zero_size != sizeof(VCHIQ_SLOT_ZERO_T)) || - (slot_zero->slot_size != VCHIQ_SLOT_SIZE) || - (slot_zero->max_slots != VCHIQ_MAX_SLOTS) || - (slot_zero->max_slots_per_side != VCHIQ_MAX_SLOTS_PER_SIDE)) { - vchiq_loud_error_header(); - if (slot_zero->slot_zero_size != sizeof(VCHIQ_SLOT_ZERO_T)) - vchiq_loud_error("slot_zero=%pK: slot_zero_size=%d (expected %d)", - slot_zero, slot_zero->slot_zero_size, - (int)sizeof(VCHIQ_SLOT_ZERO_T)); - if 
(slot_zero->slot_size != VCHIQ_SLOT_SIZE) - vchiq_loud_error("slot_zero=%pK: slot_size=%d (expected %d)", - slot_zero, slot_zero->slot_size, - VCHIQ_SLOT_SIZE); - if (slot_zero->max_slots != VCHIQ_MAX_SLOTS) - vchiq_loud_error("slot_zero=%pK: max_slots=%d (expected %d)", - slot_zero, slot_zero->max_slots, - VCHIQ_MAX_SLOTS); - if (slot_zero->max_slots_per_side != VCHIQ_MAX_SLOTS_PER_SIDE) - vchiq_loud_error("slot_zero=%pK: max_slots_per_side=%d (expected %d)", - slot_zero, slot_zero->max_slots_per_side, - VCHIQ_MAX_SLOTS_PER_SIDE); - vchiq_loud_error_footer(); - return VCHIQ_ERROR; - } - - if (VCHIQ_VERSION < slot_zero->version) - slot_zero->version = VCHIQ_VERSION; - - if (is_master) { - local = &slot_zero->master; - remote = &slot_zero->slave; - } else { - local = &slot_zero->slave; - remote = &slot_zero->master; - } + local = &slot_zero->slave; + remote = &slot_zero->master; if (local->initialised) { vchiq_loud_error_header(); @@ -2455,15 +2170,12 @@ vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero, vchiq_loud_error("local state has already been " "initialised"); else - vchiq_loud_error("master/slave mismatch - two %ss", - is_master ? "master" : "slave"); + vchiq_loud_error("master/slave mismatch two slaves"); vchiq_loud_error_footer(); return VCHIQ_ERROR; } - memset(state, 0, sizeof(VCHIQ_STATE_T)); - - state->is_master = is_master; + memset(state, 0, sizeof(struct vchiq_state)); /* initialize shared state pointers @@ -2471,39 +2183,34 @@ vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero, state->local = local; state->remote = remote; - state->slot_data = (VCHIQ_SLOT_T *)slot_zero; + state->slot_data = (struct vchiq_slot *)slot_zero; /* initialize events and mutexes */ - sema_init(&state->connect, 0); + init_completion(&state->connect); mutex_init(&state->mutex); - sema_init(&state->trigger_event, 0); - sema_init(&state->recycle_event, 0); - sema_init(&state->sync_trigger_event, 0); - sema_init(&state->sync_release_event, 0); - mutex_init(&state->slot_mutex); mutex_init(&state->recycle_mutex); mutex_init(&state->sync_mutex); mutex_init(&state->bulk_transfer_mutex); - sema_init(&state->slot_available_event, 0); - sema_init(&state->slot_remove_event, 0); - sema_init(&state->data_quota_event, 0); + init_completion(&state->slot_available_event); + init_completion(&state->slot_remove_event); + init_completion(&state->data_quota_event); state->slot_queue_available = 0; for (i = 0; i < VCHIQ_MAX_SERVICES; i++) { - VCHIQ_SERVICE_QUOTA_T *service_quota = + struct vchiq_service_quota *service_quota = &state->service_quotas[i]; - sema_init(&service_quota->quota_event, 0); + init_completion(&service_quota->quota_event); } for (i = local->slot_first; i <= local->slot_last; i++) { local->slot_queue[state->slot_queue_available++] = i; - up(&state->slot_available_event); + complete(&state->slot_available_event); } state->default_slot_quota = state->slot_queue_available/2; @@ -2515,24 +2222,18 @@ vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero, state->data_use_count = 0; state->data_quota = state->slot_queue_available - 1; - local->trigger.event = offsetof(VCHIQ_STATE_T, trigger_event); - remote_event_create(state, &local->trigger); + remote_event_create(&state->trigger_event, &local->trigger); local->tx_pos = 0; - - local->recycle.event = offsetof(VCHIQ_STATE_T, recycle_event); - remote_event_create(state, &local->recycle); + remote_event_create(&state->recycle_event, &local->recycle); local->slot_queue_recycle = state->slot_queue_available; - - 
local->sync_trigger.event = offsetof(VCHIQ_STATE_T, sync_trigger_event); - remote_event_create(state, &local->sync_trigger); - - local->sync_release.event = offsetof(VCHIQ_STATE_T, sync_release_event); - remote_event_create(state, &local->sync_release); + remote_event_create(&state->sync_trigger_event, &local->sync_trigger); + remote_event_create(&state->sync_release_event, &local->sync_release); /* At start-of-day, the slot is empty and available */ - ((VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state, local->slot_sync))->msgid - = VCHIQ_MSGID_PADDING; - remote_event_signal_local(state, &local->sync_release); + ((struct vchiq_header *) + SLOT_DATA_FROM_INDEX(state, local->slot_sync))->msgid = + VCHIQ_MSGID_PADDING; + remote_event_signal_local(&state->sync_release_event, &local->sync_release); local->debug[DEBUG_ENTRIES] = DEBUG_MAX; @@ -2598,17 +2299,18 @@ fail_free_handler_thread: } /* Called from application thread when a client or server service is created. */ -VCHIQ_SERVICE_T * -vchiq_add_service_internal(VCHIQ_STATE_T *state, - const VCHIQ_SERVICE_PARAMS_T *params, int srvstate, - VCHIQ_INSTANCE_T instance, VCHIQ_USERDATA_TERM_T userdata_term) -{ - VCHIQ_SERVICE_T *service; - VCHIQ_SERVICE_T **pservice = NULL; - VCHIQ_SERVICE_QUOTA_T *service_quota; +struct vchiq_service * +vchiq_add_service_internal(struct vchiq_state *state, + const struct vchiq_service_params *params, + int srvstate, VCHIQ_INSTANCE_T instance, + VCHIQ_USERDATA_TERM_T userdata_term) +{ + struct vchiq_service *service; + struct vchiq_service **pservice = NULL; + struct vchiq_service_quota *service_quota; int i; - service = kmalloc(sizeof(VCHIQ_SERVICE_T), GFP_KERNEL); + service = kmalloc(sizeof(*service), GFP_KERNEL); if (!service) return service; @@ -2637,8 +2339,8 @@ vchiq_add_service_internal(VCHIQ_STATE_T *state, service->service_use_count = 0; init_bulk_queue(&service->bulk_tx); init_bulk_queue(&service->bulk_rx); - sema_init(&service->remove_event, 0); - sema_init(&service->bulk_remove_event, 0); + init_completion(&service->remove_event); + init_completion(&service->bulk_remove_event); mutex_init(&service->bulk_mutex); memset(&service->stats, 0, sizeof(service->stats)); @@ -2660,7 +2362,7 @@ vchiq_add_service_internal(VCHIQ_STATE_T *state, if (srvstate == VCHIQ_SRVSTATE_OPENING) { for (i = 0; i < state->unused_service; i++) { - VCHIQ_SERVICE_T *srv = state->services[i]; + struct vchiq_service *srv = state->services[i]; if (!srv) { pservice = &state->services[i]; @@ -2669,7 +2371,7 @@ vchiq_add_service_internal(VCHIQ_STATE_T *state, } } else { for (i = (state->unused_service - 1); i >= 0; i--) { - VCHIQ_SERVICE_T *srv = state->services[i]; + struct vchiq_service *srv = state->services[i]; if (!srv) pservice = &state->services[i]; @@ -2730,7 +2432,7 @@ vchiq_add_service_internal(VCHIQ_STATE_T *state, } VCHIQ_STATUS_T -vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id) +vchiq_open_service_internal(struct vchiq_service *service, int client_id) { struct vchiq_open_payload payload = { service->base.fourcc, @@ -2753,7 +2455,7 @@ vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id) QMFLAGS_IS_BLOCKING); if (status == VCHIQ_SUCCESS) { /* Wait for the ACK/NAK */ - if (down_interruptible(&service->remove_event) != 0) { + if (wait_for_completion_killable(&service->remove_event)) { status = VCHIQ_RETRY; vchiq_release_service_internal(service); } else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) && @@ -2773,17 +2475,17 @@ vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id) } 
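The recurring conversion in the hunks above and below replaces the driver's counting semaphores with struct completion, and swaps the interruptible waits for killable ones. A minimal sketch of that pattern follows; it is illustrative only — demo_state and the demo_* functions are hypothetical names that do not appear in the driver, though the completion API calls themselves are the real ones used by this patch.

#include <linux/completion.h>

/*
 * Sketch of the semaphore -> completion substitution used throughout
 * this patch. demo_state and the demo_* helpers are hypothetical.
 */
struct demo_state {
	struct completion slot_available_event;
};

static void demo_init(struct demo_state *state)
{
	/* replaces: sema_init(&state->slot_available_event, 0); */
	init_completion(&state->slot_available_event);
}

static void demo_signal(struct demo_state *state)
{
	/*
	 * replaces: up(...); a completion keeps a "done" count, so
	 * signals posted before anyone waits are not lost.
	 */
	complete(&state->slot_available_event);
}

static int demo_wait(struct demo_state *state)
{
	/* replaces: down_trylock(...) == 0 */
	if (try_wait_for_completion(&state->slot_available_event))
		return 0;
	/*
	 * replaces: down_interruptible(...). A killable wait is only
	 * woken by fatal signals, so transient signals no longer
	 * abort the operation mid-transfer.
	 */
	if (wait_for_completion_killable(&state->slot_available_event))
		return -EINTR;	/* callers here map this to VCHIQ_RETRY */
	return 0;
}

The same substitution drives the remove_event, bulk_remove_event, quota_event, connect and data_quota_event changes in this file; the one behavioural difference to note is that wait_for_completion_killable(), unlike down_interruptible(), returns early only for fatal signals.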
static void -release_service_messages(VCHIQ_SERVICE_T *service) +release_service_messages(struct vchiq_service *service) { - VCHIQ_STATE_T *state = service->state; + struct vchiq_state *state = service->state; int slot_last = state->remote->slot_last; int i; /* Release any claimed messages aimed at this service */ if (service->sync) { - VCHIQ_HEADER_T *header = - (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state, + struct vchiq_header *header = + (struct vchiq_header *)SLOT_DATA_FROM_INDEX(state, state->remote->slot_sync); if (VCHIQ_MSG_DSTPORT(header->msgid) == service->localport) release_message_sync(state, header); @@ -2792,7 +2494,7 @@ release_service_messages(VCHIQ_SERVICE_T *service) } for (i = state->remote->slot_first; i <= slot_last; i++) { - VCHIQ_SLOT_INFO_T *slot_info = + struct vchiq_slot_info *slot_info = SLOT_INFO_FROM_INDEX(state, i); if (slot_info->release_count != slot_info->use_count) { char *data = @@ -2808,8 +2510,8 @@ release_service_messages(VCHIQ_SERVICE_T *service) pos = 0; while (pos < end) { - VCHIQ_HEADER_T *header = - (VCHIQ_HEADER_T *)(data + pos); + struct vchiq_header *header = + (struct vchiq_header *)(data + pos); int msgid = header->msgid; int port = VCHIQ_MSG_DSTPORT(msgid); @@ -2834,7 +2536,7 @@ release_service_messages(VCHIQ_SERVICE_T *service) } static int -do_abort_bulks(VCHIQ_SERVICE_T *service) +do_abort_bulks(struct vchiq_service *service) { VCHIQ_STATUS_T status; @@ -2853,7 +2555,7 @@ do_abort_bulks(VCHIQ_SERVICE_T *service) } static VCHIQ_STATUS_T -close_service_complete(VCHIQ_SERVICE_T *service, int failstate) +close_service_complete(struct vchiq_service *service, int failstate) { VCHIQ_STATUS_T status; int is_server = (service->public_fourcc != VCHIQ_FOURCC_INVALID); @@ -2905,7 +2607,7 @@ close_service_complete(VCHIQ_SERVICE_T *service, int failstate) if (is_server) service->closing = 0; - up(&service->remove_event); + complete(&service->remove_event); } } else vchiq_set_service_state(service, failstate); @@ -2915,9 +2617,9 @@ close_service_complete(VCHIQ_SERVICE_T *service, int failstate) /* Called by the slot handler */ VCHIQ_STATUS_T -vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd) +vchiq_close_service_internal(struct vchiq_service *service, int close_recvd) { - VCHIQ_STATE_T *state = service->state; + struct vchiq_state *state = service->state; VCHIQ_STATUS_T status = VCHIQ_SUCCESS; int is_server = (service->public_fourcc != VCHIQ_FOURCC_INVALID); @@ -2946,7 +2648,7 @@ vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd) vchiq_set_service_state(service, VCHIQ_SRVSTATE_LISTENING); } - up(&service->remove_event); + complete(&service->remove_event); } else vchiq_free_service_internal(service); break; @@ -2955,7 +2657,7 @@ vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd) /* The open was rejected - tell the user */ vchiq_set_service_state(service, VCHIQ_SRVSTATE_CLOSEWAIT); - up(&service->remove_event); + complete(&service->remove_event); } else { /* Shutdown mid-open - let the other side know */ status = queue_message(state, service, @@ -2971,7 +2673,7 @@ vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd) mutex_lock(&state->sync_mutex); /* fall through */ case VCHIQ_SRVSTATE_OPEN: - if (state->is_master || close_recvd) { + if (close_recvd) { if (!do_abort_bulks(service)) status = VCHIQ_RETRY; } @@ -3018,11 +2720,9 @@ vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd) /* This happens when a process is killed mid-close */ break; - if 
(!state->is_master) { - if (!do_abort_bulks(service)) { - status = VCHIQ_RETRY; - break; - } + if (!do_abort_bulks(service)) { + status = VCHIQ_RETRY; + break; } if (status == VCHIQ_SUCCESS) @@ -3051,9 +2751,9 @@ vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd) /* Called from the application process upon process death */ void -vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service) +vchiq_terminate_service_internal(struct vchiq_service *service) { - VCHIQ_STATE_T *state = service->state; + struct vchiq_state *state = service->state; vchiq_log_info(vchiq_core_log_level, "%d: tsi - (%d<->%d)", state->id, service->localport, service->remoteport); @@ -3066,9 +2766,9 @@ vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service) /* Called from the slot handler */ void -vchiq_free_service_internal(VCHIQ_SERVICE_T *service) +vchiq_free_service_internal(struct vchiq_service *service) { - VCHIQ_STATE_T *state = service->state; + struct vchiq_state *state = service->state; vchiq_log_info(vchiq_core_log_level, "%d: fsi - (%d)", state->id, service->localport); @@ -3090,16 +2790,16 @@ vchiq_free_service_internal(VCHIQ_SERVICE_T *service) vchiq_set_service_state(service, VCHIQ_SRVSTATE_FREE); - up(&service->remove_event); + complete(&service->remove_event); /* Release the initial lock */ unlock_service(service); } VCHIQ_STATUS_T -vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance) +vchiq_connect_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; int i; /* Find all services registered to this client and enable them. */ @@ -3122,20 +2822,20 @@ vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance) } if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) { - if (down_interruptible(&state->connect) != 0) + if (wait_for_completion_killable(&state->connect)) return VCHIQ_RETRY; vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED); - up(&state->connect); + complete(&state->connect); } return VCHIQ_SUCCESS; } VCHIQ_STATUS_T -vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance) +vchiq_shutdown_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance) { - VCHIQ_SERVICE_T *service; + struct vchiq_service *service; int i; /* Find all services registered to this client and enable them. 
*/ @@ -3150,7 +2850,7 @@ vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance) } VCHIQ_STATUS_T -vchiq_pause_internal(VCHIQ_STATE_T *state) +vchiq_pause_internal(struct vchiq_state *state) { VCHIQ_STATUS_T status = VCHIQ_SUCCESS; @@ -3173,7 +2873,7 @@ vchiq_pause_internal(VCHIQ_STATE_T *state) } VCHIQ_STATUS_T -vchiq_resume_internal(VCHIQ_STATE_T *state) +vchiq_resume_internal(struct vchiq_state *state) { VCHIQ_STATUS_T status = VCHIQ_SUCCESS; @@ -3192,7 +2892,7 @@ VCHIQ_STATUS_T vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle) { /* Unregister the service */ - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); VCHIQ_STATUS_T status = VCHIQ_SUCCESS; if (!service) @@ -3221,7 +2921,7 @@ vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle) } while (1) { - if (down_interruptible(&service->remove_event) != 0) { + if (wait_for_completion_killable(&service->remove_event)) { status = VCHIQ_RETRY; break; } @@ -3251,7 +2951,7 @@ VCHIQ_STATUS_T vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle) { /* Unregister the service */ - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); VCHIQ_STATUS_T status = VCHIQ_SUCCESS; if (!service) @@ -3282,7 +2982,7 @@ vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle) request_poll(service->state, service, VCHIQ_POLL_REMOVE); } while (1) { - if (down_interruptible(&service->remove_event) != 0) { + if (wait_for_completion_killable(&service->remove_event)) { status = VCHIQ_RETRY; break; } @@ -3313,25 +3013,24 @@ vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle) * When called in blocking mode, the userdata field points to a bulk_waiter * structure. */ -VCHIQ_STATUS_T -vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, - VCHI_MEM_HANDLE_T memhandle, void *offset, int size, void *userdata, - VCHIQ_BULK_MODE_T mode, VCHIQ_BULK_DIR_T dir) -{ - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); - VCHIQ_BULK_QUEUE_T *queue; - VCHIQ_BULK_T *bulk; - VCHIQ_STATE_T *state; +VCHIQ_STATUS_T vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, + void *offset, int size, void *userdata, + VCHIQ_BULK_MODE_T mode, + VCHIQ_BULK_DIR_T dir) +{ + struct vchiq_service *service = find_service_by_handle(handle); + struct vchiq_bulk_queue *queue; + struct vchiq_bulk *bulk; + struct vchiq_state *state; struct bulk_waiter *bulk_waiter = NULL; const char dir_char = (dir == VCHIQ_BULK_TRANSMIT) ? 't' : 'r'; const int dir_msgtype = (dir == VCHIQ_BULK_TRANSMIT) ? 
VCHIQ_MSG_BULK_TX : VCHIQ_MSG_BULK_RX; VCHIQ_STATUS_T status = VCHIQ_ERROR; + int payload[2]; - if (!service || - (service->srvstate != VCHIQ_SRVSTATE_OPEN) || - ((memhandle == VCHI_MEM_HANDLE_INVALID) && (offset == NULL)) || - (vchiq_check_service(service) != VCHIQ_SUCCESS)) + if (!service || service->srvstate != VCHIQ_SRVSTATE_OPEN || + !offset || vchiq_check_service(service) != VCHIQ_SUCCESS) goto error_exit; switch (mode) { @@ -3340,7 +3039,7 @@ vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, break; case VCHIQ_BULK_MODE_BLOCKING: bulk_waiter = (struct bulk_waiter *)userdata; - sema_init(&bulk_waiter->event, 0); + init_completion(&bulk_waiter->event); bulk_waiter->actual = 0; bulk_waiter->bulk = NULL; break; @@ -3366,8 +3065,8 @@ vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_SERVICE_STATS_INC(service, bulk_stalls); do { mutex_unlock(&service->bulk_mutex); - if (down_interruptible(&service->bulk_remove_event) - != 0) { + if (wait_for_completion_killable( + &service->bulk_remove_event)) { status = VCHIQ_RETRY; goto error_exit; } @@ -3388,8 +3087,7 @@ vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, bulk->size = size; bulk->actual = VCHIQ_BULK_ACTUAL_ABORTED; - if (vchiq_prepare_bulk_data(bulk, memhandle, offset, size, dir) != - VCHIQ_SUCCESS) + if (vchiq_prepare_bulk_data(bulk, offset, size, dir) != VCHIQ_SUCCESS) goto unlock_error_exit; wmb(); @@ -3409,32 +3107,25 @@ vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, if (service->srvstate != VCHIQ_SRVSTATE_OPEN) goto unlock_both_error_exit; - if (state->is_master) { - queue->local_insert++; - if (resolve_bulks(service, queue)) - request_poll(state, service, - (dir == VCHIQ_BULK_TRANSMIT) ? - VCHIQ_POLL_TXNOTIFY : VCHIQ_POLL_RXNOTIFY); - } else { - int payload[2] = { (int)(long)bulk->data, bulk->size }; - - status = queue_message(state, - NULL, - VCHIQ_MAKE_MSG(dir_msgtype, - service->localport, - service->remoteport), - memcpy_copy_callback, - &payload, - sizeof(payload), - QMFLAGS_IS_BLOCKING | - QMFLAGS_NO_MUTEX_LOCK | - QMFLAGS_NO_MUTEX_UNLOCK); - if (status != VCHIQ_SUCCESS) { - goto unlock_both_error_exit; - } - queue->local_insert++; + payload[0] = (int)(long)bulk->data; + payload[1] = bulk->size; + status = queue_message(state, + NULL, + VCHIQ_MAKE_MSG(dir_msgtype, + service->localport, + service->remoteport), + memcpy_copy_callback, + &payload, + sizeof(payload), + QMFLAGS_IS_BLOCKING | + QMFLAGS_NO_MUTEX_LOCK | + QMFLAGS_NO_MUTEX_UNLOCK); + if (status != VCHIQ_SUCCESS) { + goto unlock_both_error_exit; } + queue->local_insert++; + mutex_unlock(&state->slot_mutex); mutex_unlock(&service->bulk_mutex); @@ -3451,7 +3142,7 @@ waiting: if (bulk_waiter) { bulk_waiter->bulk = bulk; - if (down_interruptible(&bulk_waiter->event) != 0) + if (wait_for_completion_killable(&bulk_waiter->event)) status = VCHIQ_RETRY; else if (bulk_waiter->actual == VCHIQ_BULK_ACTUAL_ABORTED) status = VCHIQ_ERROR; @@ -3479,7 +3170,7 @@ vchiq_queue_message(VCHIQ_SERVICE_HANDLE_T handle, void *context, size_t size) { - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); VCHIQ_STATUS_T status = VCHIQ_ERROR; if (!service || @@ -3525,11 +3216,12 @@ error_exit: } void -vchiq_release_message(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_HEADER_T *header) +vchiq_release_message(VCHIQ_SERVICE_HANDLE_T handle, + struct vchiq_header *header) { - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); - VCHIQ_SHARED_STATE_T *remote; - VCHIQ_STATE_T *state; + struct vchiq_service *service = 
find_service_by_handle(handle); + struct vchiq_shared_state *remote; + struct vchiq_state *state; int slot_index; if (!service) @@ -3545,7 +3237,7 @@ vchiq_release_message(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_HEADER_T *header) int msgid = header->msgid; if (msgid & VCHIQ_MSGID_CLAIMED) { - VCHIQ_SLOT_INFO_T *slot_info = + struct vchiq_slot_info *slot_info = SLOT_INFO_FROM_INDEX(state, slot_index); release_slot(state, slot_info, header, service); @@ -3557,10 +3249,9 @@ vchiq_release_message(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_HEADER_T *header) } static void -release_message_sync(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header) +release_message_sync(struct vchiq_state *state, struct vchiq_header *header) { header->msgid = VCHIQ_MSGID_PADDING; - wmb(); remote_event_signal(&state->remote->sync_release); } @@ -3568,7 +3259,7 @@ VCHIQ_STATUS_T vchiq_get_peer_version(VCHIQ_SERVICE_HANDLE_T handle, short *peer_version) { VCHIQ_STATUS_T status = VCHIQ_ERROR; - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); if (!service || (vchiq_check_service(service) != VCHIQ_SUCCESS) || @@ -3583,35 +3274,21 @@ exit: return status; } -VCHIQ_STATUS_T -vchiq_get_config(VCHIQ_INSTANCE_T instance, - int config_size, VCHIQ_CONFIG_T *pconfig) +void vchiq_get_config(struct vchiq_config *config) { - VCHIQ_CONFIG_T config; - - (void)instance; - - config.max_msg_size = VCHIQ_MAX_MSG_SIZE; - config.bulk_threshold = VCHIQ_MAX_MSG_SIZE; - config.max_outstanding_bulks = VCHIQ_NUM_SERVICE_BULKS; - config.max_services = VCHIQ_MAX_SERVICES; - config.version = VCHIQ_VERSION; - config.version_min = VCHIQ_VERSION_MIN; - - if (config_size > sizeof(VCHIQ_CONFIG_T)) - return VCHIQ_ERROR; - - memcpy(pconfig, &config, - min(config_size, (int)(sizeof(VCHIQ_CONFIG_T)))); - - return VCHIQ_SUCCESS; + config->max_msg_size = VCHIQ_MAX_MSG_SIZE; + config->bulk_threshold = VCHIQ_MAX_MSG_SIZE; + config->max_outstanding_bulks = VCHIQ_NUM_SERVICE_BULKS; + config->max_services = VCHIQ_MAX_SERVICES; + config->version = VCHIQ_VERSION; + config->version_min = VCHIQ_VERSION_MIN; } VCHIQ_STATUS_T vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_SERVICE_OPTION_T option, int value) { - VCHIQ_SERVICE_T *service = find_service_by_handle(handle); + struct vchiq_service *service = find_service_by_handle(handle); VCHIQ_STATUS_T status = VCHIQ_ERROR; if (service) { @@ -3622,7 +3299,7 @@ vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T handle, break; case VCHIQ_SERVICE_OPTION_SLOT_QUOTA: { - VCHIQ_SERVICE_QUOTA_T *service_quota = + struct vchiq_service_quota *service_quota = &service->state->service_quotas[ service->localport]; if (value == 0) @@ -3635,14 +3312,14 @@ vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T handle, service_quota->message_use_count)) { /* Signal the service that it may have ** dropped below its quota */ - up(&service_quota->quota_event); + complete(&service_quota->quota_event); } status = VCHIQ_SUCCESS; } } break; case VCHIQ_SERVICE_OPTION_MESSAGE_QUOTA: { - VCHIQ_SERVICE_QUOTA_T *service_quota = + struct vchiq_service_quota *service_quota = &service->state->service_quotas[ service->localport]; if (value == 0) @@ -3656,7 +3333,7 @@ vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T handle, service_quota->slot_use_count)) /* Signal the service that it may have ** dropped below its quota */ - up(&service_quota->quota_event); + complete(&service_quota->quota_event); status = VCHIQ_SUCCESS; } } break; @@ -3685,8 +3362,8 @@ 
vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T handle, } static void -vchiq_dump_shared_state(void *dump_context, VCHIQ_STATE_T *state, - VCHIQ_SHARED_STATE_T *shared, const char *label) +vchiq_dump_shared_state(void *dump_context, struct vchiq_state *state, + struct vchiq_shared_state *shared, const char *label) { static const char *const debug_names[] = { "", @@ -3716,7 +3393,8 @@ vchiq_dump_shared_state(void *dump_context, VCHIQ_STATE_T *state, vchiq_dump(dump_context, buf, len + 1); for (i = shared->slot_first; i <= shared->slot_last; i++) { - VCHIQ_SLOT_INFO_T slot_info = *SLOT_INFO_FROM_INDEX(state, i); + struct vchiq_slot_info slot_info = + *SLOT_INFO_FROM_INDEX(state, i); if (slot_info.use_count != slot_info.release_count) { len = snprintf(buf, sizeof(buf), " %d: %d/%d", i, slot_info.use_count, @@ -3733,7 +3411,7 @@ vchiq_dump_shared_state(void *dump_context, VCHIQ_STATE_T *state, } void -vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state) +vchiq_dump_state(void *dump_context, struct vchiq_state *state) { char buf[80]; int len; @@ -3783,7 +3461,7 @@ vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state) vchiq_dump_platform_instances(dump_context); for (i = 0; i < state->unused_service; i++) { - VCHIQ_SERVICE_T *service = find_service_by_port(state, i); + struct vchiq_service *service = find_service_by_port(state, i); if (service) { vchiq_dump_service_state(dump_context, service); @@ -3793,7 +3471,7 @@ vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state) } void -vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service) +vchiq_dump_service_state(void *dump_context, struct vchiq_service *service) { char buf[80]; int len; @@ -3804,7 +3482,7 @@ vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service) if (service->srvstate != VCHIQ_SRVSTATE_FREE) { char remoteport[30]; - VCHIQ_SERVICE_QUOTA_T *service_quota = + struct vchiq_service_quota *service_quota = &service->state->service_quotas[service->localport]; int fourcc = service->base.fourcc; int tx_pending, rx_pending; @@ -3909,7 +3587,7 @@ vchiq_loud_error_footer(void) "================"); } -VCHIQ_STATUS_T vchiq_send_remote_use(VCHIQ_STATE_T *state) +VCHIQ_STATUS_T vchiq_send_remote_use(struct vchiq_state *state) { VCHIQ_STATUS_T status = VCHIQ_RETRY; @@ -3920,7 +3598,7 @@ VCHIQ_STATUS_T vchiq_send_remote_use(VCHIQ_STATE_T *state) return status; } -VCHIQ_STATUS_T vchiq_send_remote_release(VCHIQ_STATE_T *state) +VCHIQ_STATUS_T vchiq_send_remote_release(struct vchiq_state *state) { VCHIQ_STATUS_T status = VCHIQ_RETRY; @@ -3931,7 +3609,7 @@ VCHIQ_STATUS_T vchiq_send_remote_release(VCHIQ_STATE_T *state) return status; } -VCHIQ_STATUS_T vchiq_send_remote_use_active(VCHIQ_STATE_T *state) +VCHIQ_STATUS_T vchiq_send_remote_use_active(struct vchiq_state *state) { VCHIQ_STATUS_T status = VCHIQ_RETRY; diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h index 10deb5745dda..5f07db519633 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h @@ -35,8 +35,9 @@ #define VCHIQ_CORE_H #include -#include +#include #include +#include #include "vchiq_cfg.h" @@ -89,7 +90,7 @@ vchiq_static_assert(IS_POW2(VCHIQ_MAX_SLOTS_PER_SIDE)); #define VCHIQ_SLOT_MASK (VCHIQ_SLOT_SIZE - 1) #define VCHIQ_SLOT_QUEUE_MASK (VCHIQ_MAX_SLOTS_PER_SIDE - 1) -#define VCHIQ_SLOT_ZERO_SLOTS ((sizeof(VCHIQ_SLOT_ZERO_T) + \ +#define VCHIQ_SLOT_ZERO_SLOTS 
((sizeof(struct vchiq_slot_zero) + \ VCHIQ_SLOT_SIZE - 1) / VCHIQ_SLOT_SIZE) #define VCHIQ_MSG_PADDING 0 /* - */ @@ -238,51 +239,47 @@ typedef enum { typedef void (*VCHIQ_USERDATA_TERM_T)(void *userdata); -typedef struct vchiq_bulk_struct { +struct vchiq_bulk { short mode; short dir; void *userdata; - VCHI_MEM_HANDLE_T handle; void *data; int size; void *remote_data; int remote_size; int actual; -} VCHIQ_BULK_T; +}; -typedef struct vchiq_bulk_queue_struct { +struct vchiq_bulk_queue { int local_insert; /* Where to insert the next local bulk */ int remote_insert; /* Where to insert the next remote bulk (master) */ int process; /* Bulk to transfer next */ int remote_notify; /* Bulk to notify the remote client of next (mstr) */ int remove; /* Bulk to notify the local client of, and remove, ** next */ - VCHIQ_BULK_T bulks[VCHIQ_NUM_SERVICE_BULKS]; -} VCHIQ_BULK_QUEUE_T; + struct vchiq_bulk bulks[VCHIQ_NUM_SERVICE_BULKS]; +}; -typedef struct remote_event_struct { +struct remote_event { int armed; int fired; - /* Contains offset from the beginning of the VCHIQ_STATE_T structure */ - u32 event; -} REMOTE_EVENT_T; + u32 __unused; +}; typedef struct opaque_platform_state_t *VCHIQ_PLATFORM_STATE_T; -typedef struct vchiq_state_struct VCHIQ_STATE_T; - -typedef struct vchiq_slot_struct { +struct vchiq_slot { char data[VCHIQ_SLOT_SIZE]; -} VCHIQ_SLOT_T; +}; -typedef struct vchiq_slot_info_struct { +struct vchiq_slot_info { /* Use two counters rather than one to avoid the need for a mutex. */ short use_count; short release_count; -} VCHIQ_SLOT_INFO_T; +}; -typedef struct vchiq_service_struct { - VCHIQ_SERVICE_BASE_T base; +struct vchiq_service { + struct vchiq_service_base base; VCHIQ_SERVICE_HANDLE_T handle; unsigned int ref_count; int srvstate; @@ -300,16 +297,16 @@ typedef struct vchiq_service_struct { short version_min; short peer_version; - VCHIQ_STATE_T *state; + struct vchiq_state *state; VCHIQ_INSTANCE_T instance; int service_use_count; - VCHIQ_BULK_QUEUE_T bulk_tx; - VCHIQ_BULK_QUEUE_T bulk_rx; + struct vchiq_bulk_queue bulk_tx; + struct vchiq_bulk_queue bulk_rx; - struct semaphore remove_event; - struct semaphore bulk_remove_event; + struct completion remove_event; + struct completion bulk_remove_event; struct mutex bulk_mutex; struct service_stats_struct { @@ -327,22 +324,22 @@ typedef struct vchiq_service_struct { uint64_t bulk_tx_bytes; uint64_t bulk_rx_bytes; } stats; -} VCHIQ_SERVICE_T; +}; -/* The quota information is outside VCHIQ_SERVICE_T so that it can be - statically allocated, since for accounting reasons a service's slot - usage is carried over between users of the same port number. +/* The quota information is outside struct vchiq_service so that it can + * be statically allocated, since for accounting reasons a service's slot + * usage is carried over between users of the same port number. */ -typedef struct vchiq_service_quota_struct { +struct vchiq_service_quota { unsigned short slot_quota; unsigned short slot_use_count; unsigned short message_quota; unsigned short message_use_count; - struct semaphore quota_event; + struct completion quota_event; int previous_tx_index; -} VCHIQ_SERVICE_QUOTA_T; +}; -typedef struct vchiq_shared_state_struct { +struct vchiq_shared_state { /* A non-zero value here indicates that the content is valid. */ int initialised; @@ -356,7 +353,7 @@ typedef struct vchiq_shared_state_struct { /* Signalling this event indicates that owner's slot handler thread ** should run. 
*/ - REMOTE_EVENT_T trigger; + struct remote_event trigger; /* Indicates the byte position within the stream where the next message ** will be written. The least significant bits are an index into the @@ -364,26 +361,26 @@ typedef struct vchiq_shared_state_struct { int tx_pos; /* This event should be signalled when a slot is recycled. */ - REMOTE_EVENT_T recycle; + struct remote_event recycle; /* The slot_queue index where the next recycled slot will be written. */ int slot_queue_recycle; /* This event should be signalled when a synchronous message is sent. */ - REMOTE_EVENT_T sync_trigger; + struct remote_event sync_trigger; /* This event should be signalled when a synchronous message has been ** released. */ - REMOTE_EVENT_T sync_release; + struct remote_event sync_release; /* A circular buffer of slot indexes. */ int slot_queue[VCHIQ_MAX_SLOTS_PER_SIDE]; /* Debugging state */ int debug[DEBUG_MAX]; -} VCHIQ_SHARED_STATE_T; +}; -typedef struct vchiq_slot_zero_struct { +struct vchiq_slot_zero { int magic; short version; short version_min; @@ -392,27 +389,26 @@ typedef struct vchiq_slot_zero_struct { int max_slots; int max_slots_per_side; int platform_data[2]; - VCHIQ_SHARED_STATE_T master; - VCHIQ_SHARED_STATE_T slave; - VCHIQ_SLOT_INFO_T slots[VCHIQ_MAX_SLOTS]; -} VCHIQ_SLOT_ZERO_T; + struct vchiq_shared_state master; + struct vchiq_shared_state slave; + struct vchiq_slot_info slots[VCHIQ_MAX_SLOTS]; +}; -struct vchiq_state_struct { +struct vchiq_state { int id; int initialised; VCHIQ_CONNSTATE_T conn_state; - int is_master; short version_common; - VCHIQ_SHARED_STATE_T *local; - VCHIQ_SHARED_STATE_T *remote; - VCHIQ_SLOT_T *slot_data; + struct vchiq_shared_state *local; + struct vchiq_shared_state *remote; + struct vchiq_slot *slot_data; unsigned short default_slot_quota; unsigned short default_message_quota; /* Event indicating connect message received */ - struct semaphore connect; + struct completion connect; /* Mutex protecting services */ struct mutex mutex; @@ -428,20 +424,20 @@ struct vchiq_state_struct { struct task_struct *sync_thread; /* Local implementation of the trigger remote event */ - struct semaphore trigger_event; + wait_queue_head_t trigger_event; /* Local implementation of the recycle remote event */ - struct semaphore recycle_event; + wait_queue_head_t recycle_event; /* Local implementation of the sync trigger remote event */ - struct semaphore sync_trigger_event; + wait_queue_head_t sync_trigger_event; /* Local implementation of the sync release remote event */ - struct semaphore sync_release_event; + wait_queue_head_t sync_release_event; char *tx_data; char *rx_data; - VCHIQ_SLOT_INFO_T *rx_info; + struct vchiq_slot_info *rx_info; struct mutex slot_mutex; @@ -483,16 +479,12 @@ struct vchiq_state_struct { int unused_service; /* Signalled when a free slot becomes available. */ - struct semaphore slot_available_event; + struct completion slot_available_event; - struct semaphore slot_remove_event; + struct completion slot_remove_event; /* Signalled when a free data slot becomes available. 
*/ - struct semaphore data_quota_event; - - /* Incremented when there are bulk transfers which cannot be processed - * whilst paused and must be processed on resume */ - int deferred_bulks; + struct completion data_quota_event; struct state_stats_struct { int slot_stalls; @@ -502,16 +494,16 @@ struct vchiq_state_struct { int error_count; } stats; - VCHIQ_SERVICE_T * services[VCHIQ_MAX_SERVICES]; - VCHIQ_SERVICE_QUOTA_T service_quotas[VCHIQ_MAX_SERVICES]; - VCHIQ_SLOT_INFO_T slot_info[VCHIQ_MAX_SLOTS]; + struct vchiq_service *services[VCHIQ_MAX_SERVICES]; + struct vchiq_service_quota service_quotas[VCHIQ_MAX_SERVICES]; + struct vchiq_slot_info slot_info[VCHIQ_MAX_SLOTS]; VCHIQ_PLATFORM_STATE_T platform_state; }; struct bulk_waiter { - VCHIQ_BULK_T *bulk; - struct semaphore event; + struct vchiq_bulk *bulk; + struct completion event; int actual; }; @@ -521,60 +513,60 @@ extern int vchiq_core_log_level; extern int vchiq_core_msg_log_level; extern int vchiq_sync_log_level; -extern VCHIQ_STATE_T *vchiq_states[VCHIQ_MAX_STATES]; +extern struct vchiq_state *vchiq_states[VCHIQ_MAX_STATES]; extern const char * get_conn_state_name(VCHIQ_CONNSTATE_T conn_state); -extern VCHIQ_SLOT_ZERO_T * +extern struct vchiq_slot_zero * vchiq_init_slots(void *mem_base, int mem_size); extern VCHIQ_STATUS_T -vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero, - int is_master); +vchiq_init_state(struct vchiq_state *state, struct vchiq_slot_zero *slot_zero); extern VCHIQ_STATUS_T -vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance); +vchiq_connect_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance); -extern VCHIQ_SERVICE_T * -vchiq_add_service_internal(VCHIQ_STATE_T *state, - const VCHIQ_SERVICE_PARAMS_T *params, int srvstate, - VCHIQ_INSTANCE_T instance, VCHIQ_USERDATA_TERM_T userdata_term); +extern struct vchiq_service * +vchiq_add_service_internal(struct vchiq_state *state, + const struct vchiq_service_params *params, + int srvstate, VCHIQ_INSTANCE_T instance, + VCHIQ_USERDATA_TERM_T userdata_term); extern VCHIQ_STATUS_T -vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id); +vchiq_open_service_internal(struct vchiq_service *service, int client_id); extern VCHIQ_STATUS_T -vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd); +vchiq_close_service_internal(struct vchiq_service *service, int close_recvd); extern void -vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service); +vchiq_terminate_service_internal(struct vchiq_service *service); extern void -vchiq_free_service_internal(VCHIQ_SERVICE_T *service); +vchiq_free_service_internal(struct vchiq_service *service); extern VCHIQ_STATUS_T -vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance); +vchiq_shutdown_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance); extern VCHIQ_STATUS_T -vchiq_pause_internal(VCHIQ_STATE_T *state); +vchiq_pause_internal(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_resume_internal(VCHIQ_STATE_T *state); +vchiq_resume_internal(struct vchiq_state *state); extern void -remote_event_pollall(VCHIQ_STATE_T *state); +remote_event_pollall(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, - VCHI_MEM_HANDLE_T memhandle, void *offset, int size, void *userdata, - VCHIQ_BULK_MODE_T mode, VCHIQ_BULK_DIR_T dir); +vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *offset, int size, + void *userdata, VCHIQ_BULK_MODE_T mode, + VCHIQ_BULK_DIR_T dir); extern void 
-vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state); +vchiq_dump_state(void *dump_context, struct vchiq_state *state); extern void -vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service); +vchiq_dump_service_state(void *dump_context, struct vchiq_service *service); extern void vchiq_loud_error_header(void); @@ -583,12 +575,13 @@ extern void vchiq_loud_error_footer(void); extern void -request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type); +request_poll(struct vchiq_state *state, struct vchiq_service *service, + int poll_type); -static inline VCHIQ_SERVICE_T * +static inline struct vchiq_service * handle_to_service(VCHIQ_SERVICE_HANDLE_T handle) { - VCHIQ_STATE_T *state = vchiq_states[(handle / VCHIQ_MAX_SERVICES) & + struct vchiq_state *state = vchiq_states[(handle / VCHIQ_MAX_SERVICES) & (VCHIQ_MAX_STATES - 1)]; if (!state) return NULL; @@ -596,57 +589,54 @@ handle_to_service(VCHIQ_SERVICE_HANDLE_T handle) return state->services[handle & (VCHIQ_MAX_SERVICES - 1)]; } -extern VCHIQ_SERVICE_T * +extern struct vchiq_service * find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle); -extern VCHIQ_SERVICE_T * -find_service_by_port(VCHIQ_STATE_T *state, int localport); +extern struct vchiq_service * +find_service_by_port(struct vchiq_state *state, int localport); -extern VCHIQ_SERVICE_T * +extern struct vchiq_service * find_service_for_instance(VCHIQ_INSTANCE_T instance, VCHIQ_SERVICE_HANDLE_T handle); -extern VCHIQ_SERVICE_T * +extern struct vchiq_service * find_closed_service_for_instance(VCHIQ_INSTANCE_T instance, VCHIQ_SERVICE_HANDLE_T handle); -extern VCHIQ_SERVICE_T * -next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance, - int *pidx); +extern struct vchiq_service * +next_service_by_instance(struct vchiq_state *state, VCHIQ_INSTANCE_T instance, + int *pidx); extern void -lock_service(VCHIQ_SERVICE_T *service); +lock_service(struct vchiq_service *service); extern void -unlock_service(VCHIQ_SERVICE_T *service); +unlock_service(struct vchiq_service *service); /* The following functions are called from vchiq_core, and external ** implementations must be provided. 
*/ extern VCHIQ_STATUS_T -vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk, - VCHI_MEM_HANDLE_T memhandle, void *offset, int size, int dir); - -extern void -vchiq_transfer_bulk(VCHIQ_BULK_T *bulk); +vchiq_prepare_bulk_data(struct vchiq_bulk *bulk, void *offset, int size, + int dir); extern void -vchiq_complete_bulk(VCHIQ_BULK_T *bulk); +vchiq_complete_bulk(struct vchiq_bulk *bulk); extern void -remote_event_signal(REMOTE_EVENT_T *event); +remote_event_signal(struct remote_event *event); void -vchiq_platform_check_suspend(VCHIQ_STATE_T *state); +vchiq_platform_check_suspend(struct vchiq_state *state); extern void -vchiq_platform_paused(VCHIQ_STATE_T *state); +vchiq_platform_paused(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_platform_resume(VCHIQ_STATE_T *state); +vchiq_platform_resume(struct vchiq_state *state); extern void -vchiq_platform_resumed(VCHIQ_STATE_T *state); +vchiq_platform_resumed(struct vchiq_state *state); extern void vchiq_dump(void *dump_context, const char *str, int len); @@ -659,47 +649,48 @@ vchiq_dump_platform_instances(void *dump_context); extern void vchiq_dump_platform_service_state(void *dump_context, - VCHIQ_SERVICE_T *service); + struct vchiq_service *service); extern VCHIQ_STATUS_T -vchiq_use_service_internal(VCHIQ_SERVICE_T *service); +vchiq_use_service_internal(struct vchiq_service *service); extern VCHIQ_STATUS_T -vchiq_release_service_internal(VCHIQ_SERVICE_T *service); +vchiq_release_service_internal(struct vchiq_service *service); extern void -vchiq_on_remote_use(VCHIQ_STATE_T *state); +vchiq_on_remote_use(struct vchiq_state *state); extern void -vchiq_on_remote_release(VCHIQ_STATE_T *state); +vchiq_on_remote_release(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_platform_init_state(VCHIQ_STATE_T *state); +vchiq_platform_init_state(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_check_service(VCHIQ_SERVICE_T *service); +vchiq_check_service(struct vchiq_service *service); extern void -vchiq_on_remote_use_active(VCHIQ_STATE_T *state); +vchiq_on_remote_use_active(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_send_remote_use(VCHIQ_STATE_T *state); +vchiq_send_remote_use(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_send_remote_release(VCHIQ_STATE_T *state); +vchiq_send_remote_release(struct vchiq_state *state); extern VCHIQ_STATUS_T -vchiq_send_remote_use_active(VCHIQ_STATE_T *state); +vchiq_send_remote_use_active(struct vchiq_state *state); extern void -vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state, - VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate); +vchiq_platform_conn_state_changed(struct vchiq_state *state, + VCHIQ_CONNSTATE_T oldstate, + VCHIQ_CONNSTATE_T newstate); extern void -vchiq_platform_handle_timeout(VCHIQ_STATE_T *state); +vchiq_platform_handle_timeout(struct vchiq_state *state); extern void -vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate); +vchiq_set_conn_state(struct vchiq_state *state, VCHIQ_CONNSTATE_T newstate); extern void vchiq_log_dump_mem(const char *label, uint32_t addr, const void *voidMem, diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.c index 6a9e71a61142..3928287cf5f7 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.c @@ -153,19 +153,7 @@ static int debugfs_usecount_show(struct seq_file *f, void *offset) return 0; } - -static int 
debugfs_usecount_open(struct inode *inode, struct file *file) -{ - return single_open(file, debugfs_usecount_show, inode->i_private); -} - -static const struct file_operations debugfs_usecount_fops = { - .owner = THIS_MODULE, - .open = debugfs_usecount_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(debugfs_usecount); static int debugfs_trace_show(struct seq_file *f, void *offset) { @@ -243,7 +231,8 @@ void vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance) void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance) { - VCHIQ_DEBUGFS_NODE_T *node = vchiq_instance_get_debugfs_node(instance); + struct vchiq_debugfs_node *node = + vchiq_instance_get_debugfs_node(instance); debugfs_remove_recursive(node->dentry); } diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.h index 3af6397ada19..5be1a5663f51 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.h +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.h @@ -36,9 +36,9 @@ #include "vchiq_core.h" -typedef struct vchiq_debugfs_node_struct { +struct vchiq_debugfs_node { struct dentry *dentry; -} VCHIQ_DEBUGFS_NODE_T; +}; void vchiq_debugfs_init(void); diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_if.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_if.h index e4109a83e628..13ed23eacfae 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_if.h +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_if.h @@ -34,12 +34,10 @@ #ifndef VCHIQ_IF_H #define VCHIQ_IF_H -#include "interface/vchi/vchi_mh.h" - #define VCHIQ_SERVICE_HANDLE_INVALID 0 #define VCHIQ_SLOT_SIZE 4096 -#define VCHIQ_MAX_MSG_SIZE (VCHIQ_SLOT_SIZE - sizeof(VCHIQ_HEADER_T)) +#define VCHIQ_MAX_MSG_SIZE (VCHIQ_SLOT_SIZE - sizeof(struct vchiq_header)) #define VCHIQ_CHANNEL_SIZE VCHIQ_MAX_MSG_SIZE /* For backwards compatibility */ #define VCHIQ_MAKE_FOURCC(x0, x1, x2, x3) \ @@ -78,7 +76,7 @@ typedef enum { VCHIQ_SERVICE_OPTION_TRACE } VCHIQ_SERVICE_OPTION_T; -typedef struct vchiq_header_struct { +struct vchiq_header { /* The message identifier - opaque to applications. 
*/ int msgid; @@ -86,33 +84,34 @@ typedef struct vchiq_header_struct { unsigned int size; char data[0]; /* message */ -} VCHIQ_HEADER_T; +}; struct vchiq_element { - const void *data; + const void __user *data; unsigned int size; }; typedef unsigned int VCHIQ_SERVICE_HANDLE_T; -typedef VCHIQ_STATUS_T (*VCHIQ_CALLBACK_T)(VCHIQ_REASON_T, VCHIQ_HEADER_T *, - VCHIQ_SERVICE_HANDLE_T, void *); +typedef VCHIQ_STATUS_T (*VCHIQ_CALLBACK_T)(VCHIQ_REASON_T, + struct vchiq_header *, + VCHIQ_SERVICE_HANDLE_T, void *); -typedef struct vchiq_service_base_struct { +struct vchiq_service_base { int fourcc; VCHIQ_CALLBACK_T callback; void *userdata; -} VCHIQ_SERVICE_BASE_T; +}; -typedef struct vchiq_service_params_struct { +struct vchiq_service_params { int fourcc; VCHIQ_CALLBACK_T callback; void *userdata; short version; /* Increment for non-trivial changes */ short version_min; /* Update for incompatible changes */ -} VCHIQ_SERVICE_PARAMS_T; +}; -typedef struct vchiq_config_struct { +struct vchiq_config { unsigned int max_msg_size; unsigned int bulk_threshold; /* The message size above which it is better to use a bulk transfer @@ -121,7 +120,7 @@ typedef struct vchiq_config_struct { unsigned int max_services; short version; /* The version of VCHIQ */ short version_min; /* The minimum compatible version of VCHIQ */ -} VCHIQ_CONFIG_T; +}; typedef struct vchiq_instance_struct *VCHIQ_INSTANCE_T; typedef void (*VCHIQ_REMOTE_USE_CALLBACK_T)(void *cb_arg); @@ -130,10 +129,10 @@ extern VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *pinstance); extern VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance); extern VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance); extern VCHIQ_STATUS_T vchiq_add_service(VCHIQ_INSTANCE_T instance, - const VCHIQ_SERVICE_PARAMS_T *params, + const struct vchiq_service_params *params, VCHIQ_SERVICE_HANDLE_T *pservice); extern VCHIQ_STATUS_T vchiq_open_service(VCHIQ_INSTANCE_T instance, - const VCHIQ_SERVICE_PARAMS_T *params, + const struct vchiq_service_params *params, VCHIQ_SERVICE_HANDLE_T *pservice); extern VCHIQ_STATUS_T vchiq_close_service(VCHIQ_SERVICE_HANDLE_T service); extern VCHIQ_STATUS_T vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T service); @@ -148,7 +147,7 @@ vchiq_queue_message(VCHIQ_SERVICE_HANDLE_T handle, void *context, size_t size); extern void vchiq_release_message(VCHIQ_SERVICE_HANDLE_T service, - VCHIQ_HEADER_T *header); + struct vchiq_header *header); extern VCHIQ_STATUS_T vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T service, const void *data, unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode); @@ -156,16 +155,15 @@ extern VCHIQ_STATUS_T vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T service, void *data, unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode); extern VCHIQ_STATUS_T vchiq_bulk_transmit_handle(VCHIQ_SERVICE_HANDLE_T service, - VCHI_MEM_HANDLE_T handle, const void *offset, unsigned int size, + const void *offset, unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode); extern VCHIQ_STATUS_T vchiq_bulk_receive_handle(VCHIQ_SERVICE_HANDLE_T service, - VCHI_MEM_HANDLE_T handle, void *offset, unsigned int size, - void *userdata, VCHIQ_BULK_MODE_T mode); + void *offset, unsigned int size, void *userdata, + VCHIQ_BULK_MODE_T mode); extern int vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T service); extern void *vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T service); extern int vchiq_get_service_fourcc(VCHIQ_SERVICE_HANDLE_T service); -extern VCHIQ_STATUS_T vchiq_get_config(VCHIQ_INSTANCE_T instance, - int config_size, VCHIQ_CONFIG_T 
*pconfig); +extern void vchiq_get_config(struct vchiq_config *config); extern VCHIQ_STATUS_T vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T service, VCHIQ_SERVICE_OPTION_T option, int value); diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_ioctl.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_ioctl.h index 9f859953f45c..56aef490e870 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_ioctl.h +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_ioctl.h @@ -40,90 +40,90 @@ #define VCHIQ_IOC_MAGIC 0xc4 #define VCHIQ_INVALID_HANDLE (~0) -typedef struct { - VCHIQ_SERVICE_PARAMS_T params; +struct vchiq_create_service { + struct vchiq_service_params params; int is_open; int is_vchi; unsigned int handle; /* OUT */ -} VCHIQ_CREATE_SERVICE_T; +}; -typedef struct { +struct vchiq_queue_message { unsigned int handle; unsigned int count; - const struct vchiq_element *elements; -} VCHIQ_QUEUE_MESSAGE_T; + const struct vchiq_element __user *elements; +}; -typedef struct { +struct vchiq_queue_bulk_transfer { unsigned int handle; void *data; unsigned int size; void *userdata; VCHIQ_BULK_MODE_T mode; -} VCHIQ_QUEUE_BULK_TRANSFER_T; +}; -typedef struct { +struct vchiq_completion_data { VCHIQ_REASON_T reason; - VCHIQ_HEADER_T *header; + struct vchiq_header *header; void *service_userdata; void *bulk_userdata; -} VCHIQ_COMPLETION_DATA_T; +}; -typedef struct { +struct vchiq_await_completion { unsigned int count; - VCHIQ_COMPLETION_DATA_T *buf; + struct vchiq_completion_data *buf; unsigned int msgbufsize; unsigned int msgbufcount; /* IN/OUT */ void **msgbufs; -} VCHIQ_AWAIT_COMPLETION_T; +}; -typedef struct { +struct vchiq_dequeue_message { unsigned int handle; int blocking; unsigned int bufsize; void *buf; -} VCHIQ_DEQUEUE_MESSAGE_T; +}; -typedef struct { +struct vchiq_get_config { unsigned int config_size; - VCHIQ_CONFIG_T *pconfig; -} VCHIQ_GET_CONFIG_T; + struct vchiq_config __user *pconfig; +}; -typedef struct { +struct vchiq_set_service_option { unsigned int handle; VCHIQ_SERVICE_OPTION_T option; int value; -} VCHIQ_SET_SERVICE_OPTION_T; +}; -typedef struct { +struct vchiq_dump_mem { void *virt_addr; size_t num_bytes; -} VCHIQ_DUMP_MEM_T; +}; #define VCHIQ_IOC_CONNECT _IO(VCHIQ_IOC_MAGIC, 0) #define VCHIQ_IOC_SHUTDOWN _IO(VCHIQ_IOC_MAGIC, 1) #define VCHIQ_IOC_CREATE_SERVICE \ - _IOWR(VCHIQ_IOC_MAGIC, 2, VCHIQ_CREATE_SERVICE_T) + _IOWR(VCHIQ_IOC_MAGIC, 2, struct vchiq_create_service) #define VCHIQ_IOC_REMOVE_SERVICE _IO(VCHIQ_IOC_MAGIC, 3) #define VCHIQ_IOC_QUEUE_MESSAGE \ - _IOW(VCHIQ_IOC_MAGIC, 4, VCHIQ_QUEUE_MESSAGE_T) + _IOW(VCHIQ_IOC_MAGIC, 4, struct vchiq_queue_message) #define VCHIQ_IOC_QUEUE_BULK_TRANSMIT \ - _IOWR(VCHIQ_IOC_MAGIC, 5, VCHIQ_QUEUE_BULK_TRANSFER_T) + _IOWR(VCHIQ_IOC_MAGIC, 5, struct vchiq_queue_bulk_transfer) #define VCHIQ_IOC_QUEUE_BULK_RECEIVE \ - _IOWR(VCHIQ_IOC_MAGIC, 6, VCHIQ_QUEUE_BULK_TRANSFER_T) + _IOWR(VCHIQ_IOC_MAGIC, 6, struct vchiq_queue_bulk_transfer) #define VCHIQ_IOC_AWAIT_COMPLETION \ - _IOWR(VCHIQ_IOC_MAGIC, 7, VCHIQ_AWAIT_COMPLETION_T) + _IOWR(VCHIQ_IOC_MAGIC, 7, struct vchiq_await_completion) #define VCHIQ_IOC_DEQUEUE_MESSAGE \ - _IOWR(VCHIQ_IOC_MAGIC, 8, VCHIQ_DEQUEUE_MESSAGE_T) + _IOWR(VCHIQ_IOC_MAGIC, 8, struct vchiq_dequeue_message) #define VCHIQ_IOC_GET_CLIENT_ID _IO(VCHIQ_IOC_MAGIC, 9) #define VCHIQ_IOC_GET_CONFIG \ - _IOWR(VCHIQ_IOC_MAGIC, 10, VCHIQ_GET_CONFIG_T) + _IOWR(VCHIQ_IOC_MAGIC, 10, struct vchiq_get_config) #define VCHIQ_IOC_CLOSE_SERVICE _IO(VCHIQ_IOC_MAGIC, 11) #define VCHIQ_IOC_USE_SERVICE 
_IO(VCHIQ_IOC_MAGIC, 12) #define VCHIQ_IOC_RELEASE_SERVICE _IO(VCHIQ_IOC_MAGIC, 13) #define VCHIQ_IOC_SET_SERVICE_OPTION \ - _IOW(VCHIQ_IOC_MAGIC, 14, VCHIQ_SET_SERVICE_OPTION_T) + _IOW(VCHIQ_IOC_MAGIC, 14, struct vchiq_set_service_option) #define VCHIQ_IOC_DUMP_PHYS_MEM \ - _IOW(VCHIQ_IOC_MAGIC, 15, VCHIQ_DUMP_MEM_T) + _IOW(VCHIQ_IOC_MAGIC, 15, struct vchiq_dump_mem) #define VCHIQ_IOC_LIB_VERSION _IO(VCHIQ_IOC_MAGIC, 16) #define VCHIQ_IOC_CLOSE_DELIVERED _IO(VCHIQ_IOC_MAGIC, 17) #define VCHIQ_IOC_MAX 17 diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_killable.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_killable.h deleted file mode 100644 index 778063ba312a..000000000000 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_killable.h +++ /dev/null @@ -1,55 +0,0 @@ -/** - * Copyright (c) 2010-2012 Broadcom. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions, and the following disclaimer, - * without modification. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. The names of the above-listed copyright holders may not be used - * to endorse or promote products derived from this software without - * specific prior written permission. - * - * ALTERNATIVELY, this software may be distributed under the terms of the - * GNU General Public License ("GPL") version 2, as published by the Free - * Software Foundation. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS - * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, - * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR - * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR - * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, - * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, - * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR - * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF - * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING - * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef VCHIQ_KILLABLE_H -#define VCHIQ_KILLABLE_H - -#include -#include - -#define SHUTDOWN_SIGS (sigmask(SIGKILL) | sigmask(SIGINT) | sigmask(SIGQUIT) | sigmask(SIGTRAP) | sigmask(SIGSTOP) | sigmask(SIGCONT)) - -static inline int __must_check down_interruptible_killable(struct semaphore *sem) -{ - /* Allow interception of killable signals only. 
We don't want to be interrupted by harmless signals like SIGALRM */ - int ret; - sigset_t blocked, oldset; - siginitsetinv(&blocked, SHUTDOWN_SIGS); - sigprocmask(SIG_SETMASK, &blocked, &oldset); - ret = down_interruptible(sem); - sigprocmask(SIG_SETMASK, &oldset, NULL); - return ret; -} -#define down_interruptible down_interruptible_killable - -#endif diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_pagelist.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_pagelist.h index bec411061554..4eaf7398cf2e 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_pagelist.h +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_pagelist.h @@ -38,7 +38,7 @@ #define PAGELIST_READ 1 #define PAGELIST_READ_WITH_FRAGMENTS 2 -typedef struct pagelist_struct { +struct pagelist { u32 length; u16 type; u16 offset; @@ -46,6 +46,6 @@ typedef struct pagelist_struct { * of following pages at consecutive * addresses. */ -} PAGELIST_T; +}; #endif /* VCHIQ_PAGELIST_H */ diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_shim.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_shim.c index c3223fcdaf87..ab6ca8fd6f14 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_shim.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_shim.c @@ -44,7 +44,7 @@ struct shim_service { VCHIQ_SERVICE_HANDLE_T handle; - VCHIU_QUEUE_T queue; + struct vchiu_queue queue; VCHI_CALLBACK_T callback; void *callback_param; @@ -72,7 +72,7 @@ int32_t vchi_msg_peek(VCHI_SERVICE_HANDLE_T handle, VCHI_FLAGS_T flags) { struct shim_service *service = (struct shim_service *)handle; - VCHIQ_HEADER_T *header; + struct vchiq_header *header; WARN_ON((flags != VCHI_FLAGS_NONE) && (flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE)); @@ -104,7 +104,7 @@ EXPORT_SYMBOL(vchi_msg_peek); int32_t vchi_msg_remove(VCHI_SERVICE_HANDLE_T handle) { struct shim_service *service = (struct shim_service *)handle; - VCHIQ_HEADER_T *header; + struct vchiq_header *header; header = vchiu_queue_pop(&service->queue); @@ -357,7 +357,7 @@ int32_t vchi_msg_dequeue(VCHI_SERVICE_HANDLE_T handle, VCHI_FLAGS_T flags) { struct shim_service *service = (struct shim_service *)handle; - VCHIQ_HEADER_T *header; + struct vchiq_header *header; WARN_ON((flags != VCHI_FLAGS_NONE) && (flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE)); @@ -382,7 +382,7 @@ EXPORT_SYMBOL(vchi_msg_dequeue); /*********************************************************** * Name: vchi_held_msg_release * - * Arguments: VCHI_HELD_MSG_T *message + * Arguments: struct vchi_held_msg *message * * Description: Routine to release a held message (after it has been read with * vchi_msg_hold) @@ -390,7 +390,7 @@ EXPORT_SYMBOL(vchi_msg_dequeue); * Returns: int32_t - success == 0 * ***********************************************************/ -int32_t vchi_held_msg_release(VCHI_HELD_MSG_T *message) +int32_t vchi_held_msg_release(struct vchi_held_msg *message) { /* * Convert the service field pointer back to an @@ -401,7 +401,7 @@ int32_t vchi_held_msg_release(VCHI_HELD_MSG_T *message) */ vchiq_release_message((VCHIQ_SERVICE_HANDLE_T)(long)message->service, - (VCHIQ_HEADER_T *)message->message); + (struct vchiq_header *)message->message); return 0; } @@ -414,7 +414,7 @@ EXPORT_SYMBOL(vchi_held_msg_release); * void **data, * uint32_t *msg_size, * VCHI_FLAGS_T flags, - * VCHI_HELD_MSG_T *message_handle + * struct vchi_held_msg *message_handle * * Description: Routine to return a pointer to the current message (to allow * in place processing). 
The message is dequeued - don't forget @@ -428,10 +428,10 @@ int32_t vchi_msg_hold(VCHI_SERVICE_HANDLE_T handle, void **data, uint32_t *msg_size, VCHI_FLAGS_T flags, - VCHI_HELD_MSG_T *message_handle) + struct vchi_held_msg *message_handle) { struct shim_service *service = (struct shim_service *)handle; - VCHIQ_HEADER_T *header; + struct vchiq_header *header; WARN_ON((flags != VCHI_FLAGS_NONE) && (flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE)); @@ -530,7 +530,7 @@ EXPORT_SYMBOL(vchi_disconnect); * Name: vchi_service_create * * Arguments: VCHI_INSTANCE_T *instance_handle - * SERVICE_CREATION_T *setup, + * struct service_creation *setup, * VCHI_SERVICE_HANDLE_T *handle * * Description: Routine to open a service @@ -540,7 +540,9 @@ EXPORT_SYMBOL(vchi_disconnect); ***********************************************************/ static VCHIQ_STATUS_T shim_callback(VCHIQ_REASON_T reason, - VCHIQ_HEADER_T *header, VCHIQ_SERVICE_HANDLE_T handle, void *bulk_user) + struct vchiq_header *header, + VCHIQ_SERVICE_HANDLE_T handle, + void *bulk_user) { struct shim_service *service = (struct shim_service *)VCHIQ_GET_SERVICE_USERDATA(handle); @@ -600,7 +602,7 @@ done: } static struct shim_service *service_alloc(VCHIQ_INSTANCE_T instance, - SERVICE_CREATION_T *setup) + struct service_creation *setup) { struct shim_service *service = kzalloc(sizeof(struct shim_service), GFP_KERNEL); @@ -628,7 +630,7 @@ static void service_free(struct shim_service *service) } int32_t vchi_service_open(VCHI_INSTANCE_T instance_handle, - SERVICE_CREATION_T *setup, + struct service_creation *setup, VCHI_SERVICE_HANDLE_T *handle) { VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle; @@ -637,7 +639,7 @@ int32_t vchi_service_open(VCHI_INSTANCE_T instance_handle, *handle = (VCHI_SERVICE_HANDLE_T)service; if (service) { - VCHIQ_SERVICE_PARAMS_T params; + struct vchiq_service_params params; VCHIQ_STATUS_T status; memset(¶ms, 0, sizeof(params)); @@ -660,38 +662,6 @@ int32_t vchi_service_open(VCHI_INSTANCE_T instance_handle, } EXPORT_SYMBOL(vchi_service_open); -int32_t vchi_service_create(VCHI_INSTANCE_T instance_handle, - SERVICE_CREATION_T *setup, - VCHI_SERVICE_HANDLE_T *handle) -{ - VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle; - struct shim_service *service = service_alloc(instance, setup); - - *handle = (VCHI_SERVICE_HANDLE_T)service; - - if (service) { - VCHIQ_SERVICE_PARAMS_T params; - VCHIQ_STATUS_T status; - - memset(¶ms, 0, sizeof(params)); - params.fourcc = setup->service_id; - params.callback = shim_callback; - params.userdata = service; - params.version = setup->version.version; - params.version_min = setup->version.version_min; - status = vchiq_add_service(instance, ¶ms, &service->handle); - - if (status != VCHIQ_SUCCESS) { - service_free(service); - service = NULL; - *handle = NULL; - } - } - - return (service != NULL) ? 
0 : -1; -} -EXPORT_SYMBOL(vchi_service_create); - int32_t vchi_service_close(const VCHI_SERVICE_HANDLE_T handle) { int32_t ret = -1; diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c index 2e52f07bbaa9..55c5fd82b911 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c @@ -32,14 +32,13 @@ */ #include "vchiq_util.h" -#include "vchiq_killable.h" static inline int is_pow2(int i) { return i && !(i & (i - 1)); } -int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size) +int vchiu_queue_init(struct vchiu_queue *queue, int size) { WARN_ON(!is_pow2(size)); @@ -48,10 +47,11 @@ int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size) queue->write = 0; queue->initialized = 1; - sema_init(&queue->pop, 0); - sema_init(&queue->push, 0); + init_completion(&queue->pop); + init_completion(&queue->push); - queue->storage = kcalloc(size, sizeof(VCHIQ_HEADER_T *), GFP_KERNEL); + queue->storage = kcalloc(size, sizeof(struct vchiq_header *), + GFP_KERNEL); if (!queue->storage) { vchiu_queue_delete(queue); return 0; @@ -59,94 +59,62 @@ int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size) return 1; } -void vchiu_queue_delete(VCHIU_QUEUE_T *queue) +void vchiu_queue_delete(struct vchiu_queue *queue) { kfree(queue->storage); } -int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue) +int vchiu_queue_is_empty(struct vchiu_queue *queue) { return queue->read == queue->write; } -int vchiu_queue_is_full(VCHIU_QUEUE_T *queue) +int vchiu_queue_is_full(struct vchiu_queue *queue) { return queue->write == queue->read + queue->size; } -void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header) +void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header) { if (!queue->initialized) return; while (queue->write == queue->read + queue->size) { - if (down_interruptible(&queue->pop) != 0) + if (wait_for_completion_killable(&queue->pop)) flush_signals(current); } - /* - * Write to queue->storage must be visible after read from - * queue->read - */ - smp_mb(); - queue->storage[queue->write & (queue->size - 1)] = header; - - /* - * Write to queue->storage must be visible before write to - * queue->write - */ - smp_wmb(); - queue->write++; - up(&queue->push); + complete(&queue->push); } -VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue) +struct vchiq_header *vchiu_queue_peek(struct vchiu_queue *queue) { while (queue->write == queue->read) { - if (down_interruptible(&queue->push) != 0) + if (wait_for_completion_killable(&queue->push)) flush_signals(current); } - up(&queue->push); // We haven't removed anything from the queue. - - /* - * Read from queue->storage must be visible after read from - * queue->write - */ - smp_rmb(); + complete(&queue->push); // We haven't removed anything from the queue. 
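	/*
	 * The re-signal above is deliberate: peek leaves the entry queued,
	 * so the push completion must stay raised for the eventual pop. The
	 * wider semaphore-to-completion conversion is also what lets
	 * wait_for_completion_killable() stand in for the
	 * down_interruptible() wrapper in the now-deleted vchiq_killable.h;
	 * the killable variant already wakes only for fatal signals, so no
	 * signal-mask juggling is needed.
	 */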
return queue->storage[queue->read & (queue->size - 1)]; } -VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue) +struct vchiq_header *vchiu_queue_pop(struct vchiu_queue *queue) { - VCHIQ_HEADER_T *header; + struct vchiq_header *header; while (queue->write == queue->read) { - if (down_interruptible(&queue->push) != 0) + if (wait_for_completion_killable(&queue->push)) flush_signals(current); } - /* - * Read from queue->storage must be visible after read from - * queue->write - */ - smp_rmb(); - header = queue->storage[queue->read & (queue->size - 1)]; - - /* - * Read from queue->storage must be visible before write to - * queue->read - */ - smp_mb(); - queue->read++; - up(&queue->pop); + complete(&queue->pop); return header; } diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.h index 5a1540d349d3..d842194b4023 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.h +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.h @@ -35,7 +35,7 @@ #define VCHIQ_UTIL_H #include -#include +#include #include #include #include @@ -54,27 +54,28 @@ #include "vchiq_if.h" -typedef struct { +struct vchiu_queue { int size; int read; int write; int initialized; - struct semaphore pop; - struct semaphore push; + struct completion pop; + struct completion push; - VCHIQ_HEADER_T **storage; -} VCHIU_QUEUE_T; + struct vchiq_header **storage; +}; -extern int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size); -extern void vchiu_queue_delete(VCHIU_QUEUE_T *queue); +extern int vchiu_queue_init(struct vchiu_queue *queue, int size); +extern void vchiu_queue_delete(struct vchiu_queue *queue); -extern int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue); -extern int vchiu_queue_is_full(VCHIU_QUEUE_T *queue); +extern int vchiu_queue_is_empty(struct vchiu_queue *queue); +extern int vchiu_queue_is_full(struct vchiu_queue *queue); -extern void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header); +extern void vchiu_queue_push(struct vchiu_queue *queue, + struct vchiq_header *header); -extern VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue); -extern VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue); +extern struct vchiq_header *vchiu_queue_peek(struct vchiu_queue *queue); +extern struct vchiq_header *vchiu_queue_pop(struct vchiu_queue *queue); #endif diff --git a/drivers/staging/vt6655/baseband.c b/drivers/staging/vt6655/baseband.c index f0b163473426..b5ba0c76fb43 100644 --- a/drivers/staging/vt6655/baseband.c +++ b/drivers/staging/vt6655/baseband.c @@ -13,7 +13,7 @@ * * Functions: * BBuGetFrameTime - Calculate data frame transmitting time - * BBvCaculateParameter - Caculate PhyLength, PhyService and Phy Signal + * BBvCalculateParameter - Calculate PhyLength, PhyService and Phy Signal * parameter for baseband Tx * BBbReadEmbedded - Embedded read baseband register via MAC * BBbWriteEmbedded - Embedded write baseband register via MAC diff --git a/drivers/staging/wilc1000/Makefile b/drivers/staging/wilc1000/Makefile index 37e8560e501e..72a4daa05fdb 100644 --- a/drivers/staging/wilc1000/Makefile +++ b/drivers/staging/wilc1000/Makefile @@ -5,8 +5,7 @@ ccflags-y += -DFIRMWARE_1002=\"atmel/wilc1002_firmware.bin\" \ -DFIRMWARE_1003=\"atmel/wilc1003_firmware.bin\" wilc1000-objs := wilc_wfi_cfgoperations.o linux_wlan.o linux_mon.o \ - coreconfigurator.o host_interface.o \ - wilc_wlan_cfg.o wilc_wlan.o + host_interface.o wilc_wlan_cfg.o wilc_wlan.o obj-$(CONFIG_WILC1000_SDIO) += wilc1000-sdio.o 
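# coreconfigurator.o drops out of wilc1000-objs above because the two hunks
# that follow delete coreconfigurator.c and coreconfigurator.h outright.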
wilc1000-sdio-objs += wilc_sdio.o diff --git a/drivers/staging/wilc1000/coreconfigurator.c b/drivers/staging/wilc1000/coreconfigurator.c deleted file mode 100644 index d6d3a971be43..000000000000 --- a/drivers/staging/wilc1000/coreconfigurator.c +++ /dev/null @@ -1,287 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright (c) 2012 - 2018 Microchip Technology Inc., and its subsidiaries. - * All rights reserved. - */ - -#include - -#include "coreconfigurator.h" - -#define TAG_PARAM_OFFSET (MAC_HDR_LEN + TIME_STAMP_LEN + \ - BEACON_INTERVAL_LEN + CAP_INFO_LEN) - -enum sub_frame_type { - ASSOC_REQ = 0x00, - ASSOC_RSP = 0x10, - REASSOC_REQ = 0x20, - REASSOC_RSP = 0x30, - PROBE_REQ = 0x40, - PROBE_RSP = 0x50, - BEACON = 0x80, - ATIM = 0x90, - DISASOC = 0xA0, - AUTH = 0xB0, - DEAUTH = 0xC0, - ACTION = 0xD0, - PS_POLL = 0xA4, - RTS = 0xB4, - CTS = 0xC4, - ACK = 0xD4, - CFEND = 0xE4, - CFEND_ACK = 0xF4, - DATA = 0x08, - DATA_ACK = 0x18, - DATA_POLL = 0x28, - DATA_POLL_ACK = 0x38, - NULL_FRAME = 0x48, - CFACK = 0x58, - CFPOLL = 0x68, - CFPOLL_ACK = 0x78, - QOS_DATA = 0x88, - QOS_DATA_ACK = 0x98, - QOS_DATA_POLL = 0xA8, - QOS_DATA_POLL_ACK = 0xB8, - QOS_NULL_FRAME = 0xC8, - QOS_CFPOLL = 0xE8, - QOS_CFPOLL_ACK = 0xF8, - BLOCKACK_REQ = 0x84, - BLOCKACK = 0x94, - FRAME_SUBTYPE_FORCE_32BIT = 0xFFFFFFFF -}; - -static inline u16 get_beacon_period(u8 *data) -{ - u16 bcn_per; - - bcn_per = data[0]; - bcn_per |= (data[1] << 8); - - return bcn_per; -} - -static inline u32 get_beacon_timestamp_lo(u8 *data) -{ - u32 time_stamp = 0; - u32 index = MAC_HDR_LEN; - - time_stamp |= data[index++]; - time_stamp |= (data[index++] << 8); - time_stamp |= (data[index++] << 16); - time_stamp |= (data[index] << 24); - - return time_stamp; -} - -static inline u32 get_beacon_timestamp_hi(u8 *data) -{ - u32 time_stamp = 0; - u32 index = (MAC_HDR_LEN + 4); - - time_stamp |= data[index++]; - time_stamp |= (data[index++] << 8); - time_stamp |= (data[index++] << 16); - time_stamp |= (data[index] << 24); - - return time_stamp; -} - -static inline enum sub_frame_type get_sub_type(u8 *header) -{ - return ((enum sub_frame_type)(header[0] & 0xFC)); -} - -static inline u8 get_to_ds(u8 *header) -{ - return (header[1] & 0x01); -} - -static inline u8 get_from_ds(u8 *header) -{ - return ((header[1] & 0x02) >> 1); -} - -static inline void get_address1(u8 *msa, u8 *addr) -{ - memcpy(addr, msa + 4, 6); -} - -static inline void get_address2(u8 *msa, u8 *addr) -{ - memcpy(addr, msa + 10, 6); -} - -static inline void get_address3(u8 *msa, u8 *addr) -{ - memcpy(addr, msa + 16, 6); -} - -static inline void get_bssid(u8 *data, u8 *bssid) -{ - if (get_from_ds(data) == 1) - get_address2(data, bssid); - else if (get_to_ds(data) == 1) - get_address1(data, bssid); - else - get_address3(data, bssid); -} - -static inline void get_ssid(u8 *data, u8 *ssid, u8 *p_ssid_len) -{ - u8 i, j, len; - - len = data[TAG_PARAM_OFFSET + 1]; - j = TAG_PARAM_OFFSET + 2; - - if (len >= MAX_SSID_LEN) - len = 0; - - for (i = 0; i < len; i++, j++) - ssid[i] = data[j]; - - ssid[len] = '\0'; - - *p_ssid_len = len; -} - -static inline u16 get_cap_info(u8 *data) -{ - u16 cap_info = 0; - u16 index = MAC_HDR_LEN; - enum sub_frame_type st; - - st = get_sub_type(data); - - if (st == BEACON || st == PROBE_RSP) - index += TIME_STAMP_LEN + BEACON_INTERVAL_LEN; - - cap_info = data[index]; - cap_info |= (data[index + 1] << 8); - - return cap_info; -} - -static inline u16 get_asoc_status(u8 *data) -{ - u16 asoc_status; - - asoc_status = data[3]; - return (asoc_status << 8) | data[2]; 
-} - -static u8 *get_tim_elm(u8 *msa, u16 rx_len, u16 tag_param_offset) -{ - u16 index; - - index = tag_param_offset; - - while (index < (rx_len - FCS_LEN)) { - if (msa[index] == WLAN_EID_TIM) - return &msa[index]; - index += (IE_HDR_LEN + msa[index + 1]); - } - - return NULL; -} - -static u8 get_current_channel_802_11n(u8 *msa, u16 rx_len) -{ - u16 index; - - index = TAG_PARAM_OFFSET; - while (index < (rx_len - FCS_LEN)) { - if (msa[index] == WLAN_EID_DS_PARAMS) - return msa[index + 2]; - index += msa[index + 1] + IE_HDR_LEN; - } - - return 0; -} - -s32 wilc_parse_network_info(u8 *msg_buffer, - struct network_info **ret_network_info) -{ - struct network_info *network_info; - u8 *wid_val, *msa, *tim_elm, *ies; - u32 tsf_lo, tsf_hi; - u16 wid_len, rx_len, ies_len; - u8 msg_type, index; - - msg_type = msg_buffer[0]; - - if ('N' != msg_type) - return -EFAULT; - - wid_len = MAKE_WORD16(msg_buffer[6], msg_buffer[7]); - wid_val = &msg_buffer[8]; - - network_info = kzalloc(sizeof(*network_info), GFP_KERNEL); - if (!network_info) - return -ENOMEM; - - network_info->rssi = wid_val[0]; - - msa = &wid_val[1]; - - rx_len = wid_len - 1; - network_info->cap_info = get_cap_info(msa); - network_info->tsf_lo = get_beacon_timestamp_lo(msa); - - tsf_lo = get_beacon_timestamp_lo(msa); - tsf_hi = get_beacon_timestamp_hi(msa); - - network_info->tsf_hi = tsf_lo | ((u64)tsf_hi << 32); - - get_ssid(msa, network_info->ssid, &network_info->ssid_len); - get_bssid(msa, network_info->bssid); - - network_info->ch = get_current_channel_802_11n(msa, rx_len - + FCS_LEN); - - index = MAC_HDR_LEN + TIME_STAMP_LEN; - - network_info->beacon_period = get_beacon_period(msa + index); - - index += BEACON_INTERVAL_LEN + CAP_INFO_LEN; - - tim_elm = get_tim_elm(msa, rx_len + FCS_LEN, index); - if (tim_elm) - network_info->dtim_period = tim_elm[3]; - ies = &msa[TAG_PARAM_OFFSET]; - ies_len = rx_len - TAG_PARAM_OFFSET; - - if (ies_len > 0) { - network_info->ies = kmemdup(ies, ies_len, GFP_KERNEL); - if (!network_info->ies) { - kfree(network_info); - return -ENOMEM; - } - } - network_info->ies_len = ies_len; - - *ret_network_info = network_info; - - return 0; -} - -s32 wilc_parse_assoc_resp_info(u8 *buffer, u32 buffer_len, - struct connect_info *ret_conn_info) -{ - u8 *ies; - u16 ies_len; - - ret_conn_info->status = get_asoc_status(buffer); - if (ret_conn_info->status == WLAN_STATUS_SUCCESS) { - ies = &buffer[CAP_INFO_LEN + STATUS_CODE_LEN + AID_LEN]; - ies_len = buffer_len - (CAP_INFO_LEN + STATUS_CODE_LEN + - AID_LEN); - - ret_conn_info->resp_ies = kmemdup(ies, ies_len, GFP_KERNEL); - if (!ret_conn_info->resp_ies) - return -ENOMEM; - - ret_conn_info->resp_ies_len = ies_len; - } - - return 0; -} diff --git a/drivers/staging/wilc1000/coreconfigurator.h b/drivers/staging/wilc1000/coreconfigurator.h deleted file mode 100644 index b62acb447383..000000000000 --- a/drivers/staging/wilc1000/coreconfigurator.h +++ /dev/null @@ -1,81 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Copyright (c) 2012 - 2018 Microchip Technology Inc., and its subsidiaries. - * All rights reserved. 
- */ - -#ifndef CORECONFIGURATOR_H -#define CORECONFIGURATOR_H - -#include "wilc_wlan_if.h" - -#define NUM_RSSI 5 - -#define MAC_HDR_LEN 24 -#define FCS_LEN 4 -#define TIME_STAMP_LEN 8 -#define BEACON_INTERVAL_LEN 2 -#define CAP_INFO_LEN 2 -#define STATUS_CODE_LEN 2 -#define AID_LEN 2 -#define IE_HDR_LEN 2 - -#define SET_CFG 0 -#define GET_CFG 1 - -#define MAX_STRING_LEN 256 -#define MAX_ASSOC_RESP_FRAME_SIZE MAX_STRING_LEN - -#define MAKE_WORD16(lsb, msb) ((((u16)(msb) << 8) & 0xFF00) | (lsb)) -#define MAKE_WORD32(lsw, msw) ((((u32)(msw) << 16) & 0xFFFF0000) | (lsw)) - -struct rssi_history_buffer { - bool full; - u8 index; - s8 samples[NUM_RSSI]; -}; - -struct network_info { - s8 rssi; - u16 cap_info; - u8 ssid[MAX_SSID_LEN]; - u8 ssid_len; - u8 bssid[6]; - u16 beacon_period; - u8 dtim_period; - u8 ch; - unsigned long time_scan_cached; - unsigned long time_scan; - bool new_network; - u8 found; - u32 tsf_lo; - u8 *ies; - u16 ies_len; - void *join_params; - struct rssi_history_buffer rssi_history; - u64 tsf_hi; -}; - -struct connect_info { - u8 bssid[6]; - u8 *req_ies; - size_t req_ies_len; - u8 *resp_ies; - u16 resp_ies_len; - u16 status; -}; - -struct disconnect_info { - u16 reason; - u8 *ie; - size_t ie_len; -}; - -s32 wilc_parse_network_info(u8 *msg_buffer, - struct network_info **ret_network_info); -s32 wilc_parse_assoc_resp_info(u8 *buffer, u32 buffer_len, - struct connect_info *ret_conn_info); -void wilc_scan_complete_received(struct wilc *wilc, u8 *buffer, u32 length); -void wilc_network_info_received(struct wilc *wilc, u8 *buffer, u32 length); -void wilc_gnrl_async_info_received(struct wilc *wilc, u8 *buffer, u32 length); -#endif diff --git a/drivers/staging/wilc1000/host_interface.c b/drivers/staging/wilc1000/host_interface.c index 01db8999335e..70c854d939ce 100644 --- a/drivers/staging/wilc1000/host_interface.c +++ b/drivers/staging/wilc1000/host_interface.c @@ -13,80 +13,11 @@ #define REAL_JOIN_REQ 0 -struct host_if_wpa_attr { - u8 *key; - const u8 *mac_addr; - u8 *seq; - u8 seq_len; - u8 index; - u8 key_len; - u8 mode; -}; - -struct host_if_wep_attr { - u8 *key; - u8 key_len; - u8 index; - u8 mode; - enum authtype auth_type; -}; - -union host_if_key_attr { - struct host_if_wep_attr wep; - struct host_if_wpa_attr wpa; - struct host_if_pmkid_attr pmkid; -}; - -struct key_attr { - enum KEY_TYPE type; - u8 action; - union host_if_key_attr attr; -}; - -struct scan_attr { - u8 src; - u8 type; - u8 *ch_freq_list; - u8 ch_list_len; - u8 *ies; - size_t ies_len; - wilc_scan_result result; - void *arg; - struct hidden_network hidden_network; -}; - -struct connect_attr { - u8 *bssid; - u8 *ssid; - size_t ssid_len; - u8 *ies; - size_t ies_len; - u8 security; - wilc_connect_result result; - void *arg; - enum authtype auth_type; - u8 ch; - void *params; -}; - struct rcvd_async_info { u8 *buffer; u32 len; }; -struct channel_attr { - u8 set_ch; -}; - -struct beacon_attr { - u32 interval; - u32 dtim_period; - u32 head_len; - u8 *head; - u32 tail_len; - u8 *tail; -}; - struct set_multicast { bool enabled; u32 cnt; @@ -94,58 +25,58 @@ struct set_multicast { }; struct del_all_sta { - u8 del_all_sta[MAX_NUM_STA][ETH_ALEN]; u8 assoc_sta; + u8 mac[WILC_MAX_NUM_STA][ETH_ALEN]; }; -struct del_sta { - u8 mac_addr[ETH_ALEN]; +struct wilc_op_mode { + __le32 mode; }; -struct power_mgmt_param { - bool enabled; - u32 timeout; -}; +struct wilc_reg_frame { + bool reg; + u8 reg_id; + __le32 frame_type; +} __packed; -struct set_ip_addr { - u8 *ip_addr; - u8 idx; -}; +struct wilc_drv_handler { + __le32 handler; + 
u8 mode; +} __packed; -struct sta_inactive_t { - u32 inactive_time; - u8 mac[6]; -}; +struct wilc_wep_key { + u8 index; + u8 key_len; + u8 key[0]; +} __packed; -struct tx_power { - u8 tx_pwr; -}; +struct wilc_sta_wpa_ptk { + u8 mac_addr[ETH_ALEN]; + u8 key_len; + u8 key[0]; +} __packed; + +struct wilc_ap_wpa_ptk { + u8 mac_addr[ETH_ALEN]; + u8 index; + u8 key_len; + u8 key[0]; +} __packed; + +struct wilc_gtk_key { + u8 mac_addr[ETH_ALEN]; + u8 rsc[8]; + u8 index; + u8 key_len; + u8 key[0]; +} __packed; union message_body { - struct scan_attr scan_info; - struct connect_attr con_info; struct rcvd_net_info net_info; struct rcvd_async_info async_info; - struct key_attr key_info; - struct cfg_param_attr cfg_info; - struct channel_attr channel_info; - struct beacon_attr beacon_info; - struct add_sta_param add_sta_info; - struct del_sta del_sta_info; - struct add_sta_param edit_sta_info; - struct power_mgmt_param pwr_mgmt_info; - struct sta_inactive_t mac_info; - struct set_ip_addr ip_info; - struct drv_handler drv; struct set_multicast multicast_info; - struct op_mode mode; - struct get_mac_addr get_mac_info; - struct ba_session_info session_info; struct remain_ch remain_on_ch; - struct reg_frame reg_frame; char *data; - struct del_all_sta del_all_sta_info; - struct tx_power tx_power; }; struct host_if_msg { @@ -242,422 +173,12 @@ static struct wilc_vif *wilc_get_vif_from_idx(struct wilc *wilc, int idx) { int index = idx - 1; - if (index < 0 || index >= NUM_CONCURRENT_IFC) + if (index < 0 || index >= WILC_NUM_CONCURRENT_IFC) return NULL; return wilc->vif[index]; } -static void handle_set_channel(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct channel_attr *hif_set_ch = &msg->body.channel_info; - int ret; - struct wid wid; - - wid.id = WID_CURRENT_CHANNEL; - wid.type = WID_CHAR; - wid.val = (char *)&hif_set_ch->set_ch; - wid.size = sizeof(char); - - ret = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - - if (ret) - netdev_err(vif->ndev, "Failed to set channel\n"); - kfree(msg); -} - -static void handle_set_wfi_drv_handler(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct drv_handler *hif_drv_handler = &msg->body.drv; - int ret; - struct wid wid; - u8 *currbyte, *buffer; - struct host_if_drv *hif_drv; - - if (!vif->hif_drv || !hif_drv_handler) - goto free_msg; - - hif_drv = vif->hif_drv; - - buffer = kzalloc(DRV_HANDLER_SIZE, GFP_KERNEL); - if (!buffer) - goto free_msg; - - currbyte = buffer; - *currbyte = hif_drv->driver_handler_id & DRV_HANDLER_MASK; - currbyte++; - *currbyte = (u32)0 & DRV_HANDLER_MASK; - currbyte++; - *currbyte = (u32)0 & DRV_HANDLER_MASK; - currbyte++; - *currbyte = (u32)0 & DRV_HANDLER_MASK; - currbyte++; - *currbyte = (hif_drv_handler->name | (hif_drv_handler->mode << 1)); - - wid.id = WID_SET_DRV_HANDLER; - wid.type = WID_STR; - wid.val = (s8 *)buffer; - wid.size = DRV_HANDLER_SIZE; - - ret = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - hif_drv->driver_handler_id); - if (ret) - netdev_err(vif->ndev, "Failed to set driver handler\n"); - - kfree(buffer); - -free_msg: - if (msg->is_sync) - complete(&msg->work_comp); - - kfree(msg); -} - -static void handle_set_operation_mode(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct op_mode *hif_op_mode = &msg->body.mode; 
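	/*
	 * These deleted handlers all follow one template: unpack a queued
	 * host_if_msg, build one or more struct wid entries, call
	 * wilc_send_config_pkt(), then free the message. The patch removes
	 * that work-queue indirection in favour of direct calls
	 * (handle_scan() becomes wilc_scan() and handle_connect() becomes
	 * wilc_send_connect_wid() further down), which is also why the
	 * per-request attr structs above can go.
	 */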
- int ret; - struct wid wid; - - wid.id = WID_SET_OPERATION_MODE; - wid.type = WID_INT; - wid.val = (s8 *)&hif_op_mode->mode; - wid.size = sizeof(u32); - - ret = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - - if (ret) - netdev_err(vif->ndev, "Failed to set operation mode\n"); - - kfree(msg); -} - -static void handle_get_mac_address(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct get_mac_addr *get_mac_addr = &msg->body.get_mac_info; - int ret; - struct wid wid; - - wid.id = WID_MAC_ADDR; - wid.type = WID_STR; - wid.val = get_mac_addr->mac_addr; - wid.size = ETH_ALEN; - - ret = wilc_send_config_pkt(vif, GET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - - if (ret) - netdev_err(vif->ndev, "Failed to get mac address\n"); - complete(&msg->work_comp); - /* free 'msg' data later, in caller */ -} - -static void handle_cfg_param(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct cfg_param_attr *param = &msg->body.cfg_info; - int ret; - struct wid wid_list[32]; - struct host_if_drv *hif_drv = vif->hif_drv; - int i = 0; - - mutex_lock(&hif_drv->cfg_values_lock); - - if (param->flag & BSS_TYPE) { - u8 bss_type = param->bss_type; - - if (bss_type < 6) { - wid_list[i].id = WID_BSS_TYPE; - wid_list[i].val = (s8 *)¶m->bss_type; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.bss_type = bss_type; - } else { - netdev_err(vif->ndev, "check value 6 over\n"); - goto unlock; - } - i++; - } - if (param->flag & AUTH_TYPE) { - u8 auth_type = param->auth_type; - - if (auth_type == 1 || auth_type == 2 || auth_type == 5) { - wid_list[i].id = WID_AUTH_TYPE; - wid_list[i].val = (s8 *)¶m->auth_type; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.auth_type = auth_type; - } else { - netdev_err(vif->ndev, "Impossible value\n"); - goto unlock; - } - i++; - } - if (param->flag & AUTHEN_TIMEOUT) { - if (param->auth_timeout > 0) { - wid_list[i].id = WID_AUTH_TIMEOUT; - wid_list[i].val = (s8 *)¶m->auth_timeout; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.auth_timeout = param->auth_timeout; - } else { - netdev_err(vif->ndev, "Range(1 ~ 65535) over\n"); - goto unlock; - } - i++; - } - if (param->flag & POWER_MANAGEMENT) { - u8 pm_mode = param->power_mgmt_mode; - - if (pm_mode < 5) { - wid_list[i].id = WID_POWER_MANAGEMENT; - wid_list[i].val = (s8 *)¶m->power_mgmt_mode; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.power_mgmt_mode = pm_mode; - } else { - netdev_err(vif->ndev, "Invalid power mode\n"); - goto unlock; - } - i++; - } - if (param->flag & RETRY_SHORT) { - u16 retry_limit = param->short_retry_limit; - - if (retry_limit > 0 && retry_limit < 256) { - wid_list[i].id = WID_SHORT_RETRY_LIMIT; - wid_list[i].val = (s8 *)¶m->short_retry_limit; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.short_retry_limit = retry_limit; - } else { - netdev_err(vif->ndev, "Range(1~256) over\n"); - goto unlock; - } - i++; - } - if (param->flag & RETRY_LONG) { - u16 limit = param->long_retry_limit; - - if (limit > 0 && limit < 256) { - wid_list[i].id = WID_LONG_RETRY_LIMIT; - wid_list[i].val = (s8 *)¶m->long_retry_limit; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.long_retry_limit = 
limit; - } else { - netdev_err(vif->ndev, "Range(1~256) over\n"); - goto unlock; - } - i++; - } - if (param->flag & FRAG_THRESHOLD) { - u16 frag_th = param->frag_threshold; - - if (frag_th > 255 && frag_th < 7937) { - wid_list[i].id = WID_FRAG_THRESHOLD; - wid_list[i].val = (s8 *)¶m->frag_threshold; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.frag_threshold = frag_th; - } else { - netdev_err(vif->ndev, "Threshold Range fail\n"); - goto unlock; - } - i++; - } - if (param->flag & RTS_THRESHOLD) { - u16 rts_th = param->rts_threshold; - - if (rts_th > 255) { - wid_list[i].id = WID_RTS_THRESHOLD; - wid_list[i].val = (s8 *)¶m->rts_threshold; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.rts_threshold = rts_th; - } else { - netdev_err(vif->ndev, "Threshold Range fail\n"); - goto unlock; - } - i++; - } - if (param->flag & PREAMBLE) { - u16 preamble_type = param->preamble_type; - - if (param->preamble_type < 3) { - wid_list[i].id = WID_PREAMBLE; - wid_list[i].val = (s8 *)¶m->preamble_type; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.preamble_type = preamble_type; - } else { - netdev_err(vif->ndev, "Preamble Range(0~2) over\n"); - goto unlock; - } - i++; - } - if (param->flag & SHORT_SLOT_ALLOWED) { - u8 slot_allowed = param->short_slot_allowed; - - if (slot_allowed < 2) { - wid_list[i].id = WID_SHORT_SLOT_ALLOWED; - wid_list[i].val = (s8 *)¶m->short_slot_allowed; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.short_slot_allowed = slot_allowed; - } else { - netdev_err(vif->ndev, "Short slot(2) over\n"); - goto unlock; - } - i++; - } - if (param->flag & TXOP_PROT_DISABLE) { - u8 prot_disabled = param->txop_prot_disabled; - - if (param->txop_prot_disabled < 2) { - wid_list[i].id = WID_11N_TXOP_PROT_DISABLE; - wid_list[i].val = (s8 *)¶m->txop_prot_disabled; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.txop_prot_disabled = prot_disabled; - } else { - netdev_err(vif->ndev, "TXOP prot disable\n"); - goto unlock; - } - i++; - } - if (param->flag & BEACON_INTERVAL) { - u16 beacon_interval = param->beacon_interval; - - if (beacon_interval > 0) { - wid_list[i].id = WID_BEACON_INTERVAL; - wid_list[i].val = (s8 *)¶m->beacon_interval; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.beacon_interval = beacon_interval; - } else { - netdev_err(vif->ndev, "Beacon interval(1~65535)fail\n"); - goto unlock; - } - i++; - } - if (param->flag & DTIM_PERIOD) { - if (param->dtim_period > 0 && param->dtim_period < 256) { - wid_list[i].id = WID_DTIM_PERIOD; - wid_list[i].val = (s8 *)¶m->dtim_period; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.dtim_period = param->dtim_period; - } else { - netdev_err(vif->ndev, "DTIM range(1~255) fail\n"); - goto unlock; - } - i++; - } - if (param->flag & SITE_SURVEY) { - enum site_survey enabled = param->site_survey_enabled; - - if (enabled < 3) { - wid_list[i].id = WID_SITE_SURVEY; - wid_list[i].val = (s8 *)¶m->site_survey_enabled; - wid_list[i].type = WID_CHAR; - wid_list[i].size = sizeof(char); - hif_drv->cfg_values.site_survey_enabled = enabled; - } else { - netdev_err(vif->ndev, "Site survey disable\n"); - goto unlock; - } - i++; - } - if (param->flag & SITE_SURVEY_SCAN_TIME) { - u16 scan_time = param->site_survey_scan_time; - - if (scan_time > 0) { - wid_list[i].id = WID_SITE_SURVEY_SCAN_TIME; - 
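	/*
	 * Pattern worth noting in this deleted handler: each validated
	 * parameter appends one entry to wid_list[] and advances i, and a
	 * single wilc_send_config_pkt(vif, SET_CFG, wid_list, i, ...) at
	 * the end pushes the whole batch to the firmware in one
	 * configuration packet.
	 */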
wid_list[i].val = (s8 *)¶m->site_survey_scan_time; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.site_survey_scan_time = scan_time; - } else { - netdev_err(vif->ndev, "Site scan time(1~65535) over\n"); - goto unlock; - } - i++; - } - if (param->flag & ACTIVE_SCANTIME) { - u16 active_scan_time = param->active_scan_time; - - if (active_scan_time > 0) { - wid_list[i].id = WID_ACTIVE_SCAN_TIME; - wid_list[i].val = (s8 *)¶m->active_scan_time; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.active_scan_time = active_scan_time; - } else { - netdev_err(vif->ndev, "Active time(1~65535) over\n"); - goto unlock; - } - i++; - } - if (param->flag & PASSIVE_SCANTIME) { - u16 time = param->passive_scan_time; - - if (time > 0) { - wid_list[i].id = WID_PASSIVE_SCAN_TIME; - wid_list[i].val = (s8 *)¶m->passive_scan_time; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.passive_scan_time = time; - } else { - netdev_err(vif->ndev, "Passive time(1~65535) over\n"); - goto unlock; - } - i++; - } - if (param->flag & CURRENT_TX_RATE) { - enum current_tx_rate curr_tx_rate = param->curr_tx_rate; - - if (curr_tx_rate == AUTORATE || curr_tx_rate == MBPS_1 || - curr_tx_rate == MBPS_2 || curr_tx_rate == MBPS_5_5 || - curr_tx_rate == MBPS_11 || curr_tx_rate == MBPS_6 || - curr_tx_rate == MBPS_9 || curr_tx_rate == MBPS_12 || - curr_tx_rate == MBPS_18 || curr_tx_rate == MBPS_24 || - curr_tx_rate == MBPS_36 || curr_tx_rate == MBPS_48 || - curr_tx_rate == MBPS_54) { - wid_list[i].id = WID_CURRENT_TX_RATE; - wid_list[i].val = (s8 *)&curr_tx_rate; - wid_list[i].type = WID_SHORT; - wid_list[i].size = sizeof(u16); - hif_drv->cfg_values.curr_tx_rate = (u8)curr_tx_rate; - } else { - netdev_err(vif->ndev, "out of TX rate\n"); - goto unlock; - } - i++; - } - - ret = wilc_send_config_pkt(vif, SET_CFG, wid_list, - i, wilc_get_vif_idx(vif)); - - if (ret) - netdev_err(vif->ndev, "Error in setting CFG params\n"); - -unlock: - mutex_unlock(&hif_drv->cfg_values_lock); - kfree(msg); -} - static int handle_scan_done(struct wilc_vif *vif, enum scan_event evt) { int result = 0; @@ -673,7 +194,7 @@ static int handle_scan_done(struct wilc_vif *vif, enum scan_event evt) wid.val = (s8 *)&abort_running_scan; wid.size = sizeof(char); - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, wilc_get_vif_idx(vif)); if (result) { @@ -696,11 +217,11 @@ static int handle_scan_done(struct wilc_vif *vif, enum scan_event evt) return result; } -static void handle_scan(struct work_struct *work) +int wilc_scan(struct wilc_vif *vif, u8 scan_source, u8 scan_type, + u8 *ch_freq_list, u8 ch_list_len, const u8 *ies, + size_t ies_len, wilc_scan_result scan_result, void *user_arg, + struct hidden_network *hidden_net) { - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct scan_attr *scan_info = &msg->body.scan_info; int result = 0; struct wid wid_list[5]; u32 index = 0; @@ -709,10 +230,6 @@ static void handle_scan(struct work_struct *work) u8 valuesize = 0; u8 *hdn_ntwk_wid_val = NULL; struct host_if_drv *hif_drv = vif->hif_drv; - struct hidden_network *hidden_net = &scan_info->hidden_network; - - hif_drv->usr_scan_req.scan_result = scan_info->result; - hif_drv->usr_scan_req.arg = scan_info->arg; if (hif_drv->hif_state >= HOST_IF_SCANNING && hif_drv->hif_state < HOST_IF_CONNECTED) { @@ -729,162 +246,101 @@ static void 
handle_scan(struct work_struct *work) hif_drv->usr_scan_req.ch_cnt = 0; - wid_list[index].id = WID_SSID_PROBE_REQ; - wid_list[index].type = WID_STR; + if (hidden_net) { + wid_list[index].id = WID_SSID_PROBE_REQ; + wid_list[index].type = WID_STR; - for (i = 0; i < hidden_net->n_ssids; i++) - valuesize += ((hidden_net->net_info[i].ssid_len) + 1); - hdn_ntwk_wid_val = kmalloc(valuesize + 1, GFP_KERNEL); - wid_list[index].val = hdn_ntwk_wid_val; - if (wid_list[index].val) { - buffer = wid_list[index].val; + for (i = 0; i < hidden_net->n_ssids; i++) + valuesize += ((hidden_net->net_info[i].ssid_len) + 1); + hdn_ntwk_wid_val = kmalloc(valuesize + 1, GFP_KERNEL); + wid_list[index].val = hdn_ntwk_wid_val; + if (wid_list[index].val) { + buffer = wid_list[index].val; - *buffer++ = hidden_net->n_ssids; + *buffer++ = hidden_net->n_ssids; - for (i = 0; i < hidden_net->n_ssids; i++) { - *buffer++ = hidden_net->net_info[i].ssid_len; - memcpy(buffer, hidden_net->net_info[i].ssid, - hidden_net->net_info[i].ssid_len); - buffer += hidden_net->net_info[i].ssid_len; - } + for (i = 0; i < hidden_net->n_ssids; i++) { + *buffer++ = hidden_net->net_info[i].ssid_len; + memcpy(buffer, hidden_net->net_info[i].ssid, + hidden_net->net_info[i].ssid_len); + buffer += hidden_net->net_info[i].ssid_len; + } - wid_list[index].size = (s32)(valuesize + 1); - index++; + wid_list[index].size = (s32)(valuesize + 1); + index++; + } } wid_list[index].id = WID_INFO_ELEMENT_PROBE; wid_list[index].type = WID_BIN_DATA; - wid_list[index].val = scan_info->ies; - wid_list[index].size = scan_info->ies_len; + wid_list[index].val = (s8 *)ies; + wid_list[index].size = ies_len; index++; wid_list[index].id = WID_SCAN_TYPE; wid_list[index].type = WID_CHAR; wid_list[index].size = sizeof(char); - wid_list[index].val = (s8 *)&scan_info->type; + wid_list[index].val = (s8 *)&scan_type; index++; wid_list[index].id = WID_SCAN_CHANNEL_LIST; wid_list[index].type = WID_BIN_DATA; - if (scan_info->ch_freq_list && - scan_info->ch_list_len > 0) { - int i; - - for (i = 0; i < scan_info->ch_list_len; i++) { - if (scan_info->ch_freq_list[i] > 0) - scan_info->ch_freq_list[i] -= 1; + if (ch_freq_list && ch_list_len > 0) { + for (i = 0; i < ch_list_len; i++) { + if (ch_freq_list[i] > 0) + ch_freq_list[i] -= 1; } } - wid_list[index].val = scan_info->ch_freq_list; - wid_list[index].size = scan_info->ch_list_len; + wid_list[index].val = ch_freq_list; + wid_list[index].size = ch_list_len; index++; wid_list[index].id = WID_START_SCAN_REQ; wid_list[index].type = WID_CHAR; wid_list[index].size = sizeof(char); - wid_list[index].val = (s8 *)&scan_info->src; + wid_list[index].val = (s8 *)&scan_source; index++; - result = wilc_send_config_pkt(vif, SET_CFG, wid_list, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, wid_list, index, wilc_get_vif_idx(vif)); - - if (result) - netdev_err(vif->ndev, "Failed to send scan parameters\n"); - -error: if (result) { - del_timer(&hif_drv->scan_timer); - handle_scan_done(vif, SCAN_EVENT_ABORTED); + netdev_err(vif->ndev, "Failed to send scan parameters\n"); + goto error; } - kfree(scan_info->ch_freq_list); - scan_info->ch_freq_list = NULL; - - kfree(scan_info->ies); - scan_info->ies = NULL; - kfree(scan_info->hidden_network.net_info); - scan_info->hidden_network.net_info = NULL; + hif_drv->usr_scan_req.scan_result = scan_result; + hif_drv->usr_scan_req.arg = user_arg; + hif_drv->scan_timer_vif = vif; + mod_timer(&hif_drv->scan_timer, + jiffies + msecs_to_jiffies(HOST_IF_SCAN_TIMEOUT)); - kfree(hdn_ntwk_wid_val); +error: + if (hidden_net) { 
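		/* The cleanup is conditional for the same reason the
		 * allocation is: wilc_scan() only builds the hidden-network
		 * probe-request WID and its scratch buffer when a hidden_net
		 * descriptor was supplied. */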
+ kfree(hidden_net->net_info); + kfree(hdn_ntwk_wid_val); + } - kfree(msg); + return result; } -static void handle_connect(struct work_struct *work) +static int wilc_send_connect_wid(struct wilc_vif *vif) { - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct connect_attr *conn_attr = &msg->body.con_info; int result = 0; struct wid wid_list[8]; u32 wid_cnt = 0, dummyval = 0; u8 *cur_byte = NULL; - struct join_bss_param *bss_param; struct host_if_drv *hif_drv = vif->hif_drv; + struct user_conn_req *conn_attr = &hif_drv->usr_conn_req; + struct join_bss_param *bss_param = hif_drv->usr_conn_req.param; - if (msg->vif->hif_drv->usr_scan_req.scan_result) { - result = wilc_enqueue_work(msg); - if (result) - goto error; - - usleep_range(2 * 1000, 2 * 1000); - return; - } - - bss_param = conn_attr->params; - if (!bss_param) { - netdev_err(vif->ndev, "Required BSSID not found\n"); - result = -ENOENT; - goto error; - } - - if (conn_attr->bssid) { - hif_drv->usr_conn_req.bssid = kmemdup(conn_attr->bssid, 6, - GFP_KERNEL); - if (!hif_drv->usr_conn_req.bssid) { - result = -ENOMEM; - goto error; - } - } - - hif_drv->usr_conn_req.ssid_len = conn_attr->ssid_len; - if (conn_attr->ssid) { - hif_drv->usr_conn_req.ssid = kmalloc(conn_attr->ssid_len + 1, - GFP_KERNEL); - if (!hif_drv->usr_conn_req.ssid) { - result = -ENOMEM; - goto error; - } - memcpy(hif_drv->usr_conn_req.ssid, - conn_attr->ssid, - conn_attr->ssid_len); - hif_drv->usr_conn_req.ssid[conn_attr->ssid_len] = '\0'; - } - - hif_drv->usr_conn_req.ies_len = conn_attr->ies_len; - if (conn_attr->ies) { - hif_drv->usr_conn_req.ies = kmemdup(conn_attr->ies, - conn_attr->ies_len, - GFP_KERNEL); - if (!hif_drv->usr_conn_req.ies) { - result = -ENOMEM; - goto error; - } - } - - hif_drv->usr_conn_req.security = conn_attr->security; - hif_drv->usr_conn_req.auth_type = conn_attr->auth_type; - hif_drv->usr_conn_req.conn_result = conn_attr->result; - hif_drv->usr_conn_req.arg = conn_attr->arg; - - wid_list[wid_cnt].id = WID_SUCCESS_FRAME_COUNT; - wid_list[wid_cnt].type = WID_INT; - wid_list[wid_cnt].size = sizeof(u32); - wid_list[wid_cnt].val = (s8 *)(&(dummyval)); - wid_cnt++; + wid_list[wid_cnt].id = WID_SUCCESS_FRAME_COUNT; + wid_list[wid_cnt].type = WID_INT; + wid_list[wid_cnt].size = sizeof(u32); + wid_list[wid_cnt].val = (s8 *)(&(dummyval)); + wid_cnt++; wid_list[wid_cnt].id = WID_RECEIVED_FRAGMENT_COUNT; wid_list[wid_cnt].type = WID_INT; @@ -900,20 +356,20 @@ static void handle_connect(struct work_struct *work) wid_list[wid_cnt].id = WID_INFO_ELEMENT_ASSOCIATE; wid_list[wid_cnt].type = WID_BIN_DATA; - wid_list[wid_cnt].val = hif_drv->usr_conn_req.ies; - wid_list[wid_cnt].size = hif_drv->usr_conn_req.ies_len; + wid_list[wid_cnt].val = conn_attr->ies; + wid_list[wid_cnt].size = conn_attr->ies_len; wid_cnt++; wid_list[wid_cnt].id = WID_11I_MODE; wid_list[wid_cnt].type = WID_CHAR; wid_list[wid_cnt].size = sizeof(char); - wid_list[wid_cnt].val = (s8 *)&hif_drv->usr_conn_req.security; + wid_list[wid_cnt].val = (s8 *)&conn_attr->security; wid_cnt++; wid_list[wid_cnt].id = WID_AUTH_TYPE; wid_list[wid_cnt].type = WID_CHAR; wid_list[wid_cnt].size = sizeof(char); - wid_list[wid_cnt].val = (s8 *)&hif_drv->usr_conn_req.auth_type; + wid_list[wid_cnt].val = (s8 *)&conn_attr->auth_type; wid_cnt++; wid_list[wid_cnt].id = WID_JOIN_REQ_EXTENDED; @@ -933,7 +389,7 @@ static void handle_connect(struct work_struct *work) cur_byte[conn_attr->ssid_len] = '\0'; } cur_byte += MAX_SSID_LEN; - *(cur_byte++) = 
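/*
 * [Sketch] The conversion of handle_connect() into wilc_send_connect_wid()
 * above shows the pattern used throughout this diff: fill an array of
 * id/type/size/val descriptors, bump an index, then hand the whole batch to
 * wilc_send_config_pkt() in one call. A simplified stand-in for that
 * descriptor follows (the enum, struct and id value are illustrative, not
 * the driver's definitions):
 */
#include <stdint.h>

enum wid_type { WID_CHAR, WID_SHORT, WID_INT, WID_STR, WID_BIN_DATA };

struct wid {			/* modelled on the driver's struct wid */
	uint16_t id;		/* firmware-defined setting identifier */
	enum wid_type type;
	int size;
	void *val;		/* borrowed pointer, valid until the send */
};

#define WID_SCAN_TYPE_ID 0x0007	/* hypothetical numeric id */

static int add_scan_type_wid(struct wid *list, int index, uint8_t *scan_type)
{
	list[index].id = WID_SCAN_TYPE_ID;
	list[index].type = WID_CHAR;
	list[index].size = sizeof(*scan_type);
	list[index].val = scan_type;
	return index + 1;	/* caller sends (list, count) in one packet */
}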
INFRASTRUCTURE; + *(cur_byte++) = WILC_FW_BSS_TYPE_INFRA; if (conn_attr->ch >= 1 && conn_attr->ch <= 14) { *(cur_byte++) = conn_attr->ch; @@ -941,8 +397,8 @@ static void handle_connect(struct work_struct *work) netdev_err(vif->ndev, "Channel out of range\n"); *(cur_byte++) = 0xFF; } - *(cur_byte++) = (bss_param->cap_info) & 0xFF; - *(cur_byte++) = ((bss_param->cap_info) >> 8) & 0xFF; + put_unaligned_le16(bss_param->cap_info, cur_byte); + cur_byte += 2; if (conn_attr->bssid) memcpy(cur_byte, conn_attr->bssid, 6); @@ -952,8 +408,8 @@ static void handle_connect(struct work_struct *work) memcpy(cur_byte, conn_attr->bssid, 6); cur_byte += 6; - *(cur_byte++) = (bss_param->beacon_period) & 0xFF; - *(cur_byte++) = ((bss_param->beacon_period) >> 8) & 0xFF; + put_unaligned_le16(bss_param->beacon_period, cur_byte); + cur_byte += 2; *(cur_byte++) = bss_param->dtim_period; memcpy(cur_byte, bss_param->supp_rates, MAX_RATES_SUPPORTED + 1); @@ -963,7 +419,7 @@ static void handle_connect(struct work_struct *work) *(cur_byte++) = bss_param->uapsd_cap; *(cur_byte++) = bss_param->ht_capable; - hif_drv->usr_conn_req.ht_capable = bss_param->ht_capable; + conn_attr->ht_capable = bss_param->ht_capable; *(cur_byte++) = bss_param->rsn_found; *(cur_byte++) = bss_param->rsn_grp_policy; @@ -984,10 +440,8 @@ static void handle_connect(struct work_struct *work) *(cur_byte++) = bss_param->noa_enabled; if (bss_param->noa_enabled) { - *(cur_byte++) = (bss_param->tsf) & 0xFF; - *(cur_byte++) = ((bss_param->tsf) >> 8) & 0xFF; - *(cur_byte++) = ((bss_param->tsf) >> 16) & 0xFF; - *(cur_byte++) = ((bss_param->tsf) >> 24) & 0xFF; + put_unaligned_le32(bss_param->tsf, cur_byte); + cur_byte += 4; *(cur_byte++) = bss_param->opp_enabled; *(cur_byte++) = bss_param->idx; @@ -1013,49 +467,21 @@ static void handle_connect(struct work_struct *work) cur_byte = wid_list[wid_cnt].val; wid_cnt++; - result = wilc_send_config_pkt(vif, SET_CFG, wid_list, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, wid_list, wid_cnt, wilc_get_vif_idx(vif)); if (result) { netdev_err(vif->ndev, "failed to send config packet\n"); - result = -EFAULT; + kfree(cur_byte); goto error; } else { hif_drv->hif_state = HOST_IF_WAITING_CONN_RESP; } -error: - if (result) { - struct connect_info conn_info; - - del_timer(&hif_drv->connect_timer); - - memset(&conn_info, 0, sizeof(struct connect_info)); - - if (conn_attr->result) { - if (conn_attr->bssid) - memcpy(conn_info.bssid, conn_attr->bssid, 6); - - if (conn_attr->ies) { - conn_info.req_ies_len = conn_attr->ies_len; - conn_info.req_ies = kmalloc(conn_attr->ies_len, - GFP_KERNEL); - memcpy(conn_info.req_ies, - conn_attr->ies, - conn_attr->ies_len); - } - - conn_attr->result(CONN_DISCONN_EVENT_CONN_RESP, - &conn_info, MAC_STATUS_DISCONNECTED, - NULL, conn_attr->arg); - hif_drv->hif_state = HOST_IF_IDLE; - kfree(conn_info.req_ies); - conn_info.req_ies = NULL; + kfree(cur_byte); + return 0; - } else { - netdev_err(vif->ndev, "Connect callback is NULL\n"); - } - } +error: kfree(conn_attr->bssid); conn_attr->bssid = NULL; @@ -1066,8 +492,7 @@ error: kfree(conn_attr->ies); conn_attr->ies = NULL; - kfree(cur_byte); - kfree(msg); + return result; } static void handle_connect_timeout(struct work_struct *work) @@ -1106,7 +531,7 @@ static void handle_connect_timeout(struct work_struct *work) hif_drv->usr_conn_req.conn_result(CONN_DISCONN_EVENT_CONN_RESP, &info, - MAC_STATUS_DISCONNECTED, + WILC_MAC_STATUS_DISCONNECTED, NULL, hif_drv->usr_conn_req.arg); @@ -1121,7 +546,7 @@ static void handle_connect_timeout(struct work_struct 
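/*
 * [Sketch] Several hunks above replace open-coded "& 0xFF / >> 8" stores
 * with put_unaligned_le16()/put_unaligned_le32(). The two spellings emit
 * identical bytes; the helper below stands in for the kernel's version
 * (memcpy keeps the store safe on alignment-strict CPUs):
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

static void put_le16(uint16_t v, uint8_t *p)
{
	uint8_t b[2] = { (uint8_t)(v & 0xff), (uint8_t)(v >> 8) };

	memcpy(p, b, sizeof(b));
}

int main(void)
{
	uint16_t cap_info = 0x1234;	/* e.g. bss_param->cap_info */
	uint8_t old_way[2], new_way[2];

	old_way[0] = cap_info & 0xff;		/* removed spelling */
	old_way[1] = (cap_info >> 8) & 0xff;
	put_le16(cap_info, new_way);		/* added spelling */
	assert(memcmp(old_way, new_way, 2) == 0);
	return 0;
}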
*work) wid.val = (s8 *)&dummy_reason_code; wid.size = sizeof(char); - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, wilc_get_vif_idx(vif)); if (result) netdev_err(vif->ndev, "Failed to send disconnect\n"); @@ -1305,6 +730,101 @@ static void *host_int_parse_join_bss_param(struct network_info *info) return (void *)param; } +static inline u8 *get_bssid(struct ieee80211_mgmt *mgmt) +{ + if (ieee80211_has_fromds(mgmt->frame_control)) + return mgmt->sa; + else if (ieee80211_has_tods(mgmt->frame_control)) + return mgmt->da; + else + return mgmt->bssid; +} + +static s32 wilc_parse_network_info(u8 *msg_buffer, + struct network_info **ret_network_info) +{ + struct network_info *info; + struct ieee80211_mgmt *mgt; + u8 *wid_val, *msa, *ies; + u16 wid_len, rx_len, ies_len; + u8 msg_type; + size_t offset; + const u8 *ch_elm, *tim_elm, *ssid_elm; + + msg_type = msg_buffer[0]; + if ('N' != msg_type) + return -EFAULT; + + wid_len = get_unaligned_le16(&msg_buffer[6]); + wid_val = &msg_buffer[8]; + + info = kzalloc(sizeof(*info), GFP_KERNEL); + if (!info) + return -ENOMEM; + + info->rssi = wid_val[0]; + + msa = &wid_val[1]; + mgt = (struct ieee80211_mgmt *)&wid_val[1]; + rx_len = wid_len - 1; + + if (ieee80211_is_probe_resp(mgt->frame_control)) { + info->cap_info = le16_to_cpu(mgt->u.probe_resp.capab_info); + info->beacon_period = le16_to_cpu(mgt->u.probe_resp.beacon_int); + info->tsf = le64_to_cpu(mgt->u.probe_resp.timestamp); + info->tsf_lo = (u32)info->tsf; + offset = offsetof(struct ieee80211_mgmt, u.probe_resp.variable); + } else if (ieee80211_is_beacon(mgt->frame_control)) { + info->cap_info = le16_to_cpu(mgt->u.beacon.capab_info); + info->beacon_period = le16_to_cpu(mgt->u.beacon.beacon_int); + info->tsf = le64_to_cpu(mgt->u.beacon.timestamp); + info->tsf_lo = (u32)info->tsf; + offset = offsetof(struct ieee80211_mgmt, u.beacon.variable); + } else { + /* only process probe response and beacon frame */ + kfree(info); + return -EIO; + } + + ether_addr_copy(info->bssid, get_bssid(mgt)); + + ies = mgt->u.beacon.variable; + ies_len = rx_len - offset; + if (ies_len <= 0) { + kfree(info); + return -EIO; + } + + info->ies = kmemdup(ies, ies_len, GFP_KERNEL); + if (!info->ies) { + kfree(info); + return -ENOMEM; + } + + info->ies_len = ies_len; + + ssid_elm = cfg80211_find_ie(WLAN_EID_SSID, ies, ies_len); + if (ssid_elm) { + info->ssid_len = ssid_elm[1]; + if (info->ssid_len <= IEEE80211_MAX_SSID_LEN) + memcpy(info->ssid, ssid_elm + 2, info->ssid_len); + else + info->ssid_len = 0; + } + + ch_elm = cfg80211_find_ie(WLAN_EID_DS_PARAMS, ies, ies_len); + if (ch_elm && ch_elm[1] > 0) + info->ch = ch_elm[2]; + + tim_elm = cfg80211_find_ie(WLAN_EID_TIM, ies, ies_len); + if (tim_elm && tim_elm[1] >= 2) + info->dtim_period = tim_elm[3]; + + *ret_network_info = info; + + return 0; +} + static void handle_rcvd_ntwrk_info(struct work_struct *work) { struct host_if_msg *msg = container_of(work, struct host_if_msg, work); @@ -1387,7 +907,7 @@ static void host_int_get_assoc_res_info(struct wilc_vif *vif, wid.val = assoc_resp_info; wid.size = max_assoc_resp_info_len; - result = wilc_send_config_pkt(vif, GET_CFG, &wid, 1, + result = wilc_send_config_pkt(vif, WILC_GET_CFG, &wid, 1, wilc_get_vif_idx(vif)); if (result) { *rcvd_assoc_resp_info_len = 0; @@ -1410,6 +930,28 @@ static inline void host_int_free_user_conn_req(struct host_if_drv *hif_drv) hif_drv->usr_conn_req.ies = NULL; } +static s32 wilc_parse_assoc_resp_info(u8 *buffer, u32 buffer_len, + struct 
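/*
 * [Sketch] The new get_bssid() above selects which address field of a
 * received frame carries the BSSID from the To-DS/From-DS bits in
 * frame_control. A trimmed-down model using the standard 802.11 bit values
 * (0x0100 and 0x0200); the struct is a simplification and frame_control is
 * assumed to be in host order here:
 */
#include <stdint.h>

#define FCTL_TODS   0x0100
#define FCTL_FROMDS 0x0200

struct mgmt_hdr {
	uint16_t frame_control;
	uint8_t da[6];		/* addr1 */
	uint8_t sa[6];		/* addr2 */
	uint8_t bssid[6];	/* addr3 */
};

static const uint8_t *pick_bssid(const struct mgmt_hdr *h)
{
	if (h->frame_control & FCTL_FROMDS)
		return h->sa;	/* AP-to-station: transmitter is the BSS */
	if (h->frame_control & FCTL_TODS)
		return h->da;	/* station-to-AP: receiver is the BSS */
	return h->bssid;	/* beacon/probe response: addr3 */
}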
connect_info *ret_conn_info) +{ + u8 *ies; + u16 ies_len; + struct assoc_resp *res = (struct assoc_resp *)buffer; + + ret_conn_info->status = le16_to_cpu(res->status_code); + if (ret_conn_info->status == WLAN_STATUS_SUCCESS) { + ies = &buffer[sizeof(*res)]; + ies_len = buffer_len - sizeof(*res); + + ret_conn_info->resp_ies = kmemdup(ies, ies_len, GFP_KERNEL); + if (!ret_conn_info->resp_ies) + return -ENOMEM; + + ret_conn_info->resp_ies_len = ies_len; + } + + return 0; +} + static inline void host_int_parse_assoc_resp_info(struct wilc_vif *vif, u8 mac_status) { @@ -1418,13 +960,13 @@ static inline void host_int_parse_assoc_resp_info(struct wilc_vif *vif, memset(&conn_info, 0, sizeof(struct connect_info)); - if (mac_status == MAC_STATUS_CONNECTED) { + if (mac_status == WILC_MAC_STATUS_CONNECTED) { u32 assoc_resp_info_len; - memset(hif_drv->assoc_resp, 0, MAX_ASSOC_RESP_FRAME_SIZE); + memset(hif_drv->assoc_resp, 0, WILC_MAX_ASSOC_RESP_FRAME_SIZE); host_int_get_assoc_res_info(vif, hif_drv->assoc_resp, - MAX_ASSOC_RESP_FRAME_SIZE, + WILC_MAX_ASSOC_RESP_FRAME_SIZE, &assoc_resp_info_len); if (assoc_resp_info_len != 0) { @@ -1443,7 +985,7 @@ static inline void host_int_parse_assoc_resp_info(struct wilc_vif *vif, if (hif_drv->usr_conn_req.bssid) { memcpy(conn_info.bssid, hif_drv->usr_conn_req.bssid, 6); - if (mac_status == MAC_STATUS_CONNECTED && + if (mac_status == WILC_MAC_STATUS_CONNECTED && conn_info.status == WLAN_STATUS_SUCCESS) { memcpy(hif_drv->assoc_bssid, hif_drv->usr_conn_req.bssid, ETH_ALEN); @@ -1463,7 +1005,7 @@ static inline void host_int_parse_assoc_resp_info(struct wilc_vif *vif, &conn_info, mac_status, NULL, hif_drv->usr_conn_req.arg); - if (mac_status == MAC_STATUS_CONNECTED && + if (mac_status == WILC_MAC_STATUS_CONNECTED && conn_info.status == WLAN_STATUS_SUCCESS) { wilc_set_power_mgmt(vif, 0, 0); @@ -1555,10 +1097,10 @@ static void handle_rcvd_gnrl_async_info(struct work_struct *work) mac_status = rcvd_info->buffer[7]; if (hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP) { host_int_parse_assoc_resp_info(vif, mac_status); - } else if ((mac_status == MAC_STATUS_DISCONNECTED) && + } else if ((mac_status == WILC_MAC_STATUS_DISCONNECTED) && (hif_drv->hif_state == HOST_IF_CONNECTED)) { host_int_handle_disconnect(vif); - } else if ((mac_status == MAC_STATUS_DISCONNECTED) && + } else if ((mac_status == WILC_MAC_STATUS_DISCONNECTED) && (hif_drv->usr_scan_req.scan_result)) { del_timer(&hif_drv->scan_timer); if (hif_drv->usr_scan_req.scan_result) @@ -1574,268 +1116,8 @@ free_msg: kfree(msg); } -static int wilc_pmksa_key_copy(struct wilc_vif *vif, struct key_attr *hif_key) -{ - int i; - int ret; - struct wid wid; - u8 *key_buf; - - key_buf = kmalloc((hif_key->attr.pmkid.numpmkid * PMKSA_KEY_LEN) + 1, - GFP_KERNEL); - if (!key_buf) - return -ENOMEM; - - key_buf[0] = hif_key->attr.pmkid.numpmkid; - - for (i = 0; i < hif_key->attr.pmkid.numpmkid; i++) { - memcpy(key_buf + ((PMKSA_KEY_LEN * i) + 1), - hif_key->attr.pmkid.pmkidlist[i].bssid, ETH_ALEN); - memcpy(key_buf + ((PMKSA_KEY_LEN * i) + ETH_ALEN + 1), - hif_key->attr.pmkid.pmkidlist[i].pmkid, PMKID_LEN); - } - - wid.id = WID_PMKID_INFO; - wid.type = WID_STR; - wid.val = (s8 *)key_buf; - wid.size = (hif_key->attr.pmkid.numpmkid * PMKSA_KEY_LEN) + 1; - - ret = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - - kfree(key_buf); - - return ret; -} - -static void handle_key(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct 
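/*
 * [Sketch] wilc_parse_assoc_resp_info() above treats the response body as
 * three fixed little-endian u16 fields followed by the variable IEs, which
 * is the standard association-response layout. A compilable model of that
 * split (struct name and helper are illustrative):
 */
#include <stddef.h>
#include <stdint.h>

struct assoc_resp_fixed {	/* capab_info, status_code, aid */
	uint8_t capab[2];
	uint8_t status[2];
	uint8_t aid[2];
};

static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
}

/* Returns the status code and points *ies/*ies_len at the trailing
 * elements, the same split done before the kmemdup() above. */
static uint16_t split_assoc_resp(const uint8_t *buf, size_t len,
				 const uint8_t **ies, size_t *ies_len)
{
	*ies = buf + sizeof(struct assoc_resp_fixed);
	*ies_len = len - sizeof(struct assoc_resp_fixed);
	return get_le16(buf + offsetof(struct assoc_resp_fixed, status));
}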
key_attr *hif_key = &msg->body.key_info; - int result = 0; - struct wid wid; - struct wid wid_list[5]; - u8 *key_buf; - struct host_if_drv *hif_drv = vif->hif_drv; - - switch (hif_key->type) { - case WEP: - - if (hif_key->action & ADDKEY_AP) { - wid_list[0].id = WID_11I_MODE; - wid_list[0].type = WID_CHAR; - wid_list[0].size = sizeof(char); - wid_list[0].val = (s8 *)&hif_key->attr.wep.mode; - - wid_list[1].id = WID_AUTH_TYPE; - wid_list[1].type = WID_CHAR; - wid_list[1].size = sizeof(char); - wid_list[1].val = (s8 *)&hif_key->attr.wep.auth_type; - - key_buf = kmalloc(hif_key->attr.wep.key_len + 2, - GFP_KERNEL); - if (!key_buf) { - result = -ENOMEM; - goto out_wep; - } - - key_buf[0] = hif_key->attr.wep.index; - key_buf[1] = hif_key->attr.wep.key_len; - - memcpy(&key_buf[2], hif_key->attr.wep.key, - hif_key->attr.wep.key_len); - - wid_list[2].id = WID_WEP_KEY_VALUE; - wid_list[2].type = WID_STR; - wid_list[2].size = hif_key->attr.wep.key_len + 2; - wid_list[2].val = (s8 *)key_buf; - - result = wilc_send_config_pkt(vif, SET_CFG, - wid_list, 3, - wilc_get_vif_idx(vif)); - kfree(key_buf); - } else if (hif_key->action & ADDKEY) { - key_buf = kmalloc(hif_key->attr.wep.key_len + 2, - GFP_KERNEL); - if (!key_buf) { - result = -ENOMEM; - goto out_wep; - } - key_buf[0] = hif_key->attr.wep.index; - memcpy(key_buf + 1, &hif_key->attr.wep.key_len, 1); - memcpy(key_buf + 2, hif_key->attr.wep.key, - hif_key->attr.wep.key_len); - - wid.id = WID_ADD_WEP_KEY; - wid.type = WID_STR; - wid.val = (s8 *)key_buf; - wid.size = hif_key->attr.wep.key_len + 2; - - result = wilc_send_config_pkt(vif, SET_CFG, - &wid, 1, - wilc_get_vif_idx(vif)); - kfree(key_buf); - } else if (hif_key->action & REMOVEKEY) { - wid.id = WID_REMOVE_WEP_KEY; - wid.type = WID_STR; - - wid.val = (s8 *)&hif_key->attr.wep.index; - wid.size = 1; - - result = wilc_send_config_pkt(vif, SET_CFG, - &wid, 1, - wilc_get_vif_idx(vif)); - } else if (hif_key->action & DEFAULTKEY) { - wid.id = WID_KEY_ID; - wid.type = WID_CHAR; - wid.val = (s8 *)&hif_key->attr.wep.index; - wid.size = sizeof(char); - - result = wilc_send_config_pkt(vif, SET_CFG, - &wid, 1, - wilc_get_vif_idx(vif)); - } -out_wep: - complete(&msg->work_comp); - break; - - case WPA_RX_GTK: - if (hif_key->action & ADDKEY_AP) { - key_buf = kzalloc(RX_MIC_KEY_MSG_LEN, GFP_KERNEL); - if (!key_buf) { - result = -ENOMEM; - goto out_wpa_rx_gtk; - } - - if (hif_key->attr.wpa.seq) - memcpy(key_buf + 6, hif_key->attr.wpa.seq, 8); - - memcpy(key_buf + 14, &hif_key->attr.wpa.index, 1); - memcpy(key_buf + 15, &hif_key->attr.wpa.key_len, 1); - memcpy(key_buf + 16, hif_key->attr.wpa.key, - hif_key->attr.wpa.key_len); - - wid_list[0].id = WID_11I_MODE; - wid_list[0].type = WID_CHAR; - wid_list[0].size = sizeof(char); - wid_list[0].val = (s8 *)&hif_key->attr.wpa.mode; - - wid_list[1].id = WID_ADD_RX_GTK; - wid_list[1].type = WID_STR; - wid_list[1].val = (s8 *)key_buf; - wid_list[1].size = RX_MIC_KEY_MSG_LEN; - - result = wilc_send_config_pkt(vif, SET_CFG, - wid_list, 2, - wilc_get_vif_idx(vif)); - - kfree(key_buf); - } else if (hif_key->action & ADDKEY) { - key_buf = kzalloc(RX_MIC_KEY_MSG_LEN, GFP_KERNEL); - if (!key_buf) { - result = -ENOMEM; - goto out_wpa_rx_gtk; - } - - if (hif_drv->hif_state == HOST_IF_CONNECTED) - memcpy(key_buf, hif_drv->assoc_bssid, ETH_ALEN); - else - netdev_err(vif->ndev, "Couldn't handle\n"); - - memcpy(key_buf + 6, hif_key->attr.wpa.seq, 8); - memcpy(key_buf + 14, &hif_key->attr.wpa.index, 1); - memcpy(key_buf + 15, &hif_key->attr.wpa.key_len, 1); - memcpy(key_buf + 16, 
hif_key->attr.wpa.key, - hif_key->attr.wpa.key_len); - - wid.id = WID_ADD_RX_GTK; - wid.type = WID_STR; - wid.val = (s8 *)key_buf; - wid.size = RX_MIC_KEY_MSG_LEN; - - result = wilc_send_config_pkt(vif, SET_CFG, - &wid, 1, - wilc_get_vif_idx(vif)); - - kfree(key_buf); - } -out_wpa_rx_gtk: - complete(&msg->work_comp); - break; - - case WPA_PTK: - if (hif_key->action & ADDKEY_AP) { - key_buf = kmalloc(PTK_KEY_MSG_LEN + 1, GFP_KERNEL); - if (!key_buf) { - result = -ENOMEM; - goto out_wpa_ptk; - } - - memcpy(key_buf, hif_key->attr.wpa.mac_addr, 6); - memcpy(key_buf + 6, &hif_key->attr.wpa.index, 1); - memcpy(key_buf + 7, &hif_key->attr.wpa.key_len, 1); - memcpy(key_buf + 8, hif_key->attr.wpa.key, - hif_key->attr.wpa.key_len); - - wid_list[0].id = WID_11I_MODE; - wid_list[0].type = WID_CHAR; - wid_list[0].size = sizeof(char); - wid_list[0].val = (s8 *)&hif_key->attr.wpa.mode; - - wid_list[1].id = WID_ADD_PTK; - wid_list[1].type = WID_STR; - wid_list[1].val = (s8 *)key_buf; - wid_list[1].size = PTK_KEY_MSG_LEN + 1; - - result = wilc_send_config_pkt(vif, SET_CFG, - wid_list, 2, - wilc_get_vif_idx(vif)); - kfree(key_buf); - } else if (hif_key->action & ADDKEY) { - key_buf = kmalloc(PTK_KEY_MSG_LEN, GFP_KERNEL); - if (!key_buf) { - result = -ENOMEM; - goto out_wpa_ptk; - } - - memcpy(key_buf, hif_key->attr.wpa.mac_addr, 6); - memcpy(key_buf + 6, &hif_key->attr.wpa.key_len, 1); - memcpy(key_buf + 7, hif_key->attr.wpa.key, - hif_key->attr.wpa.key_len); - - wid.id = WID_ADD_PTK; - wid.type = WID_STR; - wid.val = (s8 *)key_buf; - wid.size = PTK_KEY_MSG_LEN; - - result = wilc_send_config_pkt(vif, SET_CFG, - &wid, 1, - wilc_get_vif_idx(vif)); - kfree(key_buf); - } - -out_wpa_ptk: - complete(&msg->work_comp); - break; - - case PMKSA: - result = wilc_pmksa_key_copy(vif, hif_key); - /*free 'msg', this case it not a sync call*/ - kfree(msg); - break; - } - - if (result) - netdev_err(vif->ndev, "Failed to send key config packet\n"); - - /* free 'msg' data in caller sync call */ -} - -static void handle_disconnect(struct work_struct *work) +int wilc_disconnect(struct wilc_vif *vif) { - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; struct wid wid; struct host_if_drv *hif_drv = vif->hif_drv; struct disconnect_info disconn_info; @@ -1852,12 +1134,11 @@ static void handle_disconnect(struct work_struct *work) vif->obtaining_ip = false; wilc_set_power_mgmt(vif, 0, 0); - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, wilc_get_vif_idx(vif)); - if (result) { netdev_err(vif->ndev, "Failed to send dissconect\n"); - goto out; + return result; } memset(&disconn_info, 0, sizeof(struct disconnect_info)); @@ -1898,10 +1179,7 @@ static void handle_disconnect(struct work_struct *work) kfree(conn_req->ies); conn_req->ies = NULL; -out: - - complete(&msg->work_comp); - /* free 'msg' in caller after receiving completion */ + return 0; } void wilc_resolve_disconnect_aberration(struct wilc_vif *vif) @@ -1910,37 +1188,13 @@ void wilc_resolve_disconnect_aberration(struct wilc_vif *vif) return; if (vif->hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP || vif->hif_drv->hif_state == HOST_IF_CONNECTING) - wilc_disconnect(vif, 1); -} - -static void handle_get_rssi(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - int result; - struct wid wid; - - wid.id = WID_RSSI; - wid.type = WID_CHAR; - wid.val = msg->body.data; - wid.size = 
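/*
 * [Sketch] The station-parameter packing helper a little further below
 * (wilc_hif_pack_sta_param) emits a fixed-shape record: the HT slot is
 * advanced even when no HT capability is present, only the flag byte in
 * front of it changes. So the record length depends solely on the rates:
 * 6 (mac) + 2 (aid) + 1 + rates + 1 (ht flag) + 26 (ht cap) + 2 + 2.
 * With zero rates that is 40 bytes, which lines up with the
 * WILC_ADD_STA_LENGTH constant used for the WID size elsewhere in this
 * diff. Compilable check (HT_CAP_LEN assumed to be the 26-byte wire size
 * of struct ieee80211_ht_cap):
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define HT_CAP_LEN 26

static size_t sta_param_record_len(uint8_t rates_len)
{
	return 6 + 2 + 1 + rates_len + 1 + HT_CAP_LEN + 2 + 2;
}

int main(void)
{
	assert(sta_param_record_len(0) == 40);	/* WILC_ADD_STA_LENGTH */
	return 0;
}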
sizeof(char);
-
-        result = wilc_send_config_pkt(vif, GET_CFG, &wid, 1,
-                                      wilc_get_vif_idx(vif));
-        if (result)
-                netdev_err(vif->ndev, "Failed to get RSSI value\n");
-
-        complete(&msg->work_comp);
-        /* free 'msg' data in caller */
+                wilc_disconnect(vif);
 }
 
-static void handle_get_statistics(struct work_struct *work)
+int wilc_get_statistics(struct wilc_vif *vif, struct rf_info *stats)
 {
-        struct host_if_msg *msg = container_of(work, struct host_if_msg, work);
-        struct wilc_vif *vif = msg->vif;
         struct wid wid_list[5];
         u32 wid_cnt = 0, result;
-        struct rf_info *stats = (struct rf_info *)msg->body.data;
 
         wid_list[wid_cnt].id = WID_LINKSPEED;
         wid_list[wid_cnt].type = WID_CHAR;
@@ -1972,12 +1226,14 @@ static void handle_get_statistics(struct work_struct *work)
         wid_list[wid_cnt].val = (s8 *)&stats->tx_fail_cnt;
         wid_cnt++;
 
-        result = wilc_send_config_pkt(vif, GET_CFG, wid_list,
+        result = wilc_send_config_pkt(vif, WILC_GET_CFG, wid_list,
                                       wid_cnt,
                                       wilc_get_vif_idx(vif));
 
-        if (result)
+        if (result) {
                 netdev_err(vif->ndev, "Failed to send scan parameters\n");
+                return result;
+        }
 
         if (stats->link_speed > TCP_ACK_FILTER_LINK_SPEED_THRESH &&
             stats->link_speed != DEFAULT_LINK_SPEED)
@@ -1985,327 +1241,81 @@ static void handle_get_statistics(struct work_struct *work)
         else if (stats->link_speed != DEFAULT_LINK_SPEED)
                 wilc_enable_tcp_ack_filter(vif, false);
 
-        /* free 'msg' for async command, for sync caller will free it */
-        if (msg->is_sync)
-                complete(&msg->work_comp);
-        else
-                kfree(msg);
+        return result;
 }
 
-static void handle_get_inactive_time(struct work_struct *work)
+static void handle_get_statistics(struct work_struct *work)
 {
         struct host_if_msg *msg = container_of(work, struct host_if_msg, work);
         struct wilc_vif *vif = msg->vif;
-        struct sta_inactive_t *hif_sta_inactive = &msg->body.mac_info;
-        int result;
-        struct wid wid;
-
-        wid.id = WID_SET_STA_MAC_INACTIVE_TIME;
-        wid.type = WID_STR;
-        wid.size = ETH_ALEN;
-        wid.val = kmalloc(wid.size, GFP_KERNEL);
-        if (!wid.val)
-                goto out;
+        struct rf_info *stats = (struct rf_info *)msg->body.data;
 
-        ether_addr_copy(wid.val, hif_sta_inactive->mac);
+        wilc_get_statistics(vif, stats);
 
-        result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
-                                      wilc_get_vif_idx(vif));
-        kfree(wid.val);
+        kfree(msg);
+}
 
-        if (result) {
-                netdev_err(vif->ndev, "Failed to set inactive mac\n");
-                goto out;
-        }
+static void wilc_hif_pack_sta_param(u8 *cur_byte, const u8 *mac,
+                                    struct station_parameters *params)
+{
+        ether_addr_copy(cur_byte, mac);
+        cur_byte += ETH_ALEN;
 
-        wid.id = WID_GET_INACTIVE_TIME;
-        wid.type = WID_INT;
-        wid.val = (s8 *)&hif_sta_inactive->inactive_time;
-        wid.size = sizeof(u32);
+        put_unaligned_le16(params->aid, cur_byte);
+        cur_byte += 2;
 
-        result = wilc_send_config_pkt(vif, GET_CFG, &wid, 1,
-                                      wilc_get_vif_idx(vif));
+        *cur_byte++ = params->supported_rates_len;
+        if (params->supported_rates_len > 0)
+                memcpy(cur_byte, params->supported_rates,
+                       params->supported_rates_len);
+        cur_byte += params->supported_rates_len;
 
-        if (result)
-                netdev_err(vif->ndev, "Failed to get inactive time\n");
+        if (params->ht_capa) {
+                *cur_byte++ = true;
+                memcpy(cur_byte, &params->ht_capa,
+                       sizeof(struct ieee80211_ht_cap));
+        } else {
+                *cur_byte++ = false;
+        }
+        cur_byte += sizeof(struct ieee80211_ht_cap);
 
-out:
-        /* free 'msg' data in caller */
-        complete(&msg->work_comp);
+        put_unaligned_le16(params->sta_flags_mask, cur_byte);
+        cur_byte += 2;
+        put_unaligned_le16(params->sta_flags_set, cur_byte);
 }
 
-static void handle_add_beacon(struct work_struct *work)
+static int
handle_remain_on_chan(struct wilc_vif *vif, + struct remain_ch *hif_remain_ch) { - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct beacon_attr *param = &msg->body.beacon_info; int result; + u8 remain_on_chan_flag; struct wid wid; - u8 *cur_byte; + struct host_if_drv *hif_drv = vif->hif_drv; - wid.id = WID_ADD_BEACON; - wid.type = WID_BIN; - wid.size = param->head_len + param->tail_len + 16; - wid.val = kmalloc(wid.size, GFP_KERNEL); - if (!wid.val) - goto error; + if (!hif_drv->remain_on_ch_pending) { + hif_drv->remain_on_ch.arg = hif_remain_ch->arg; + hif_drv->remain_on_ch.expired = hif_remain_ch->expired; + hif_drv->remain_on_ch.ready = hif_remain_ch->ready; + hif_drv->remain_on_ch.ch = hif_remain_ch->ch; + hif_drv->remain_on_ch.id = hif_remain_ch->id; + } else { + hif_remain_ch->ch = hif_drv->remain_on_ch.ch; + } - cur_byte = wid.val; - *cur_byte++ = (param->interval & 0xFF); - *cur_byte++ = ((param->interval >> 8) & 0xFF); - *cur_byte++ = ((param->interval >> 16) & 0xFF); - *cur_byte++ = ((param->interval >> 24) & 0xFF); - - *cur_byte++ = (param->dtim_period & 0xFF); - *cur_byte++ = ((param->dtim_period >> 8) & 0xFF); - *cur_byte++ = ((param->dtim_period >> 16) & 0xFF); - *cur_byte++ = ((param->dtim_period >> 24) & 0xFF); - - *cur_byte++ = (param->head_len & 0xFF); - *cur_byte++ = ((param->head_len >> 8) & 0xFF); - *cur_byte++ = ((param->head_len >> 16) & 0xFF); - *cur_byte++ = ((param->head_len >> 24) & 0xFF); - - memcpy(cur_byte, param->head, param->head_len); - cur_byte += param->head_len; - - *cur_byte++ = (param->tail_len & 0xFF); - *cur_byte++ = ((param->tail_len >> 8) & 0xFF); - *cur_byte++ = ((param->tail_len >> 16) & 0xFF); - *cur_byte++ = ((param->tail_len >> 24) & 0xFF); - - if (param->tail) - memcpy(cur_byte, param->tail, param->tail_len); - cur_byte += param->tail_len; - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (result) - netdev_err(vif->ndev, "Failed to send add beacon\n"); + if (hif_drv->usr_scan_req.scan_result) { + hif_drv->remain_on_ch_pending = 1; + result = -EBUSY; + goto error; + } + if (hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP) { + result = -EBUSY; + goto error; + } -error: - kfree(wid.val); - kfree(param->head); - kfree(param->tail); - kfree(msg); -} - -static void handle_del_beacon(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - int result; - struct wid wid; - u8 del_beacon = 0; - - wid.id = WID_DEL_BEACON; - wid.type = WID_CHAR; - wid.size = sizeof(char); - wid.val = &del_beacon; - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (result) - netdev_err(vif->ndev, "Failed to send delete beacon\n"); - kfree(msg); -} - -static u32 wilc_hif_pack_sta_param(u8 *buff, struct add_sta_param *param) -{ - u8 *cur_byte; - - cur_byte = buff; - - memcpy(cur_byte, param->bssid, ETH_ALEN); - cur_byte += ETH_ALEN; - - *cur_byte++ = param->aid & 0xFF; - *cur_byte++ = (param->aid >> 8) & 0xFF; - - *cur_byte++ = param->rates_len; - if (param->rates_len > 0) - memcpy(cur_byte, param->rates, param->rates_len); - cur_byte += param->rates_len; - - *cur_byte++ = param->ht_supported; - memcpy(cur_byte, ¶m->ht_capa, sizeof(struct ieee80211_ht_cap)); - cur_byte += sizeof(struct ieee80211_ht_cap); - - *cur_byte++ = param->flags_mask & 0xFF; - *cur_byte++ = (param->flags_mask >> 8) & 0xFF; - - *cur_byte++ = param->flags_set & 0xFF; - *cur_byte++ = 
(param->flags_set >> 8) & 0xFF; - - return cur_byte - buff; -} - -static void handle_add_station(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct add_sta_param *param = &msg->body.add_sta_info; - int result; - struct wid wid; - u8 *cur_byte; - - wid.id = WID_ADD_STA; - wid.type = WID_BIN; - wid.size = WILC_ADD_STA_LENGTH + param->rates_len; - - wid.val = kmalloc(wid.size, GFP_KERNEL); - if (!wid.val) - goto error; - - cur_byte = wid.val; - cur_byte += wilc_hif_pack_sta_param(cur_byte, param); - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (result != 0) - netdev_err(vif->ndev, "Failed to send add station\n"); - -error: - kfree(param->rates); - kfree(wid.val); - kfree(msg); -} - -static void handle_del_all_sta(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct del_all_sta *param = &msg->body.del_all_sta_info; - int result; - struct wid wid; - u8 *curr_byte; - u8 i; - u8 zero_buff[6] = {0}; - - wid.id = WID_DEL_ALL_STA; - wid.type = WID_STR; - wid.size = (param->assoc_sta * ETH_ALEN) + 1; - - wid.val = kmalloc((param->assoc_sta * ETH_ALEN) + 1, GFP_KERNEL); - if (!wid.val) - goto error; - - curr_byte = wid.val; - - *(curr_byte++) = param->assoc_sta; - - for (i = 0; i < MAX_NUM_STA; i++) { - if (memcmp(param->del_all_sta[i], zero_buff, ETH_ALEN)) - memcpy(curr_byte, param->del_all_sta[i], ETH_ALEN); - else - continue; - - curr_byte += ETH_ALEN; - } - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (result) - netdev_err(vif->ndev, "Failed to send delete all station\n"); - -error: - kfree(wid.val); - - /* free 'msg' data in caller */ - complete(&msg->work_comp); -} - -static void handle_del_station(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct del_sta *param = &msg->body.del_sta_info; - int result; - struct wid wid; - - wid.id = WID_REMOVE_STA; - wid.type = WID_BIN; - wid.size = ETH_ALEN; - - wid.val = kmalloc(wid.size, GFP_KERNEL); - if (!wid.val) - goto error; - - ether_addr_copy(wid.val, param->mac_addr); - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (result) - netdev_err(vif->ndev, "Failed to del station\n"); - -error: - kfree(wid.val); - kfree(msg); -} - -static void handle_edit_station(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct add_sta_param *param = &msg->body.edit_sta_info; - int result; - struct wid wid; - u8 *cur_byte; - - wid.id = WID_EDIT_STA; - wid.type = WID_BIN; - wid.size = WILC_ADD_STA_LENGTH + param->rates_len; - - wid.val = kmalloc(wid.size, GFP_KERNEL); - if (!wid.val) - goto error; - - cur_byte = wid.val; - cur_byte += wilc_hif_pack_sta_param(cur_byte, param); - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (result) - netdev_err(vif->ndev, "Failed to send edit station\n"); - -error: - kfree(param->rates); - kfree(wid.val); - kfree(msg); -} - -static int handle_remain_on_chan(struct wilc_vif *vif, - struct remain_ch *hif_remain_ch) -{ - int result; - u8 remain_on_chan_flag; - struct wid wid; - struct host_if_drv *hif_drv = vif->hif_drv; - - if (!hif_drv->remain_on_ch_pending) { - hif_drv->remain_on_ch.arg = hif_remain_ch->arg; 
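/*
 * [Sketch] The removed handle_del_all_sta() above still documents the
 * firmware message for WID_DEL_ALL_STA: one count byte, then only the
 * non-zero MAC entries packed densely. Modelled below; MAX_NUM_STA is an
 * assumption for the sketch, the driver defines its own limit elsewhere:
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ETH_ALEN 6
#define MAX_NUM_STA 9

static size_t pack_del_all_sta(uint8_t *out, uint8_t assoc_sta,
			       const uint8_t macs[][ETH_ALEN])
{
	static const uint8_t zero[ETH_ALEN];
	uint8_t *p = out;
	int i;

	*p++ = assoc_sta;
	for (i = 0; i < MAX_NUM_STA; i++) {
		if (!memcmp(macs[i], zero, ETH_ALEN))
			continue;		/* skip empty table slots */
		memcpy(p, macs[i], ETH_ALEN);
		p += ETH_ALEN;
	}
	return p - out;		/* == (assoc_sta * ETH_ALEN) + 1 */
}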
- hif_drv->remain_on_ch.expired = hif_remain_ch->expired; - hif_drv->remain_on_ch.ready = hif_remain_ch->ready; - hif_drv->remain_on_ch.ch = hif_remain_ch->ch; - hif_drv->remain_on_ch.id = hif_remain_ch->id; - } else { - hif_remain_ch->ch = hif_drv->remain_on_ch.ch; - } - - if (hif_drv->usr_scan_req.scan_result) { - hif_drv->remain_on_ch_pending = 1; - result = -EBUSY; - goto error; - } - if (hif_drv->hif_state == HOST_IF_WAITING_CONN_RESP) { - result = -EBUSY; - goto error; - } - - if (vif->obtaining_ip || vif->connecting) { - result = -EBUSY; - goto error; - } + if (vif->obtaining_ip || vif->connecting) { + result = -EBUSY; + goto error; + } remain_on_chan_flag = true; wid.id = WID_REMAIN_ON_CHAN; @@ -2320,7 +1330,7 @@ static int handle_remain_on_chan(struct wilc_vif *vif, wid.val[0] = remain_on_chan_flag; wid.val[1] = (s8)hif_remain_ch->ch; - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, wilc_get_vif_idx(vif)); kfree(wid.val); if (result != 0) @@ -2340,39 +1350,6 @@ error: return result; } -static void handle_register_frame(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct reg_frame *hif_reg_frame = &msg->body.reg_frame; - int result; - struct wid wid; - u8 *cur_byte; - - wid.id = WID_REGISTER_FRAME; - wid.type = WID_STR; - wid.val = kmalloc(sizeof(u16) + 2, GFP_KERNEL); - if (!wid.val) - goto out; - - cur_byte = wid.val; - - *cur_byte++ = hif_reg_frame->reg; - *cur_byte++ = hif_reg_frame->reg_id; - memcpy(cur_byte, &hif_reg_frame->frame_type, sizeof(u16)); - - wid.size = sizeof(u16) + 2; - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - kfree(wid.val); - if (result) - netdev_err(vif->ndev, "Failed to frame register\n"); - -out: - kfree(msg); -} - static void handle_listen_state_expired(struct work_struct *work) { struct host_if_msg *msg = container_of(work, struct host_if_msg, work); @@ -2397,7 +1374,7 @@ static void handle_listen_state_expired(struct work_struct *work) wid.val[0] = remain_on_chan_flag; wid.val[1] = FALSE_FRMWR_CHANNEL; - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, wilc_get_vif_idx(vif)); kfree(wid.val); if (result != 0) { @@ -2440,32 +1417,6 @@ static void listen_timer_cb(struct timer_list *t) } } -static void handle_power_management(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - struct power_mgmt_param *pm_param = &msg->body.pwr_mgmt_info; - int result; - struct wid wid; - s8 power_mode; - - wid.id = WID_POWER_MANAGEMENT; - - if (pm_param->enabled) - power_mode = MIN_FAST_PS; - else - power_mode = NO_POWERSAVE; - - wid.val = &power_mode; - wid.size = sizeof(char); - - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (result) - netdev_err(vif->ndev, "Failed to send power management\n"); - kfree(msg); -} - static void handle_set_mcast_filter(struct work_struct *work) { struct host_if_msg *msg = container_of(work, struct host_if_msg, work); @@ -2497,7 +1448,7 @@ static void handle_set_mcast_filter(struct work_struct *work) memcpy(cur_byte, hif_set_mc->mc_list, ((hif_set_mc->cnt) * ETH_ALEN)); - result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, wilc_get_vif_idx(vif)); if (result) netdev_err(vif->ndev, "Failed to send 
setup multicast\n"); @@ -2508,48 +1459,6 @@ error: kfree(msg); } -static void handle_set_tx_pwr(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - u8 tx_pwr = msg->body.tx_power.tx_pwr; - int ret; - struct wid wid; - - wid.id = WID_TX_POWER; - wid.type = WID_CHAR; - wid.val = &tx_pwr; - wid.size = sizeof(char); - - ret = wilc_send_config_pkt(vif, SET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (ret) - netdev_err(vif->ndev, "Failed to set TX PWR\n"); - kfree(msg); -} - -/* Note: 'msg' will be free after using data */ -static void handle_get_tx_pwr(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - struct wilc_vif *vif = msg->vif; - u8 *tx_pwr = &msg->body.tx_power.tx_pwr; - int ret; - struct wid wid; - - wid.id = WID_TX_POWER; - wid.type = WID_CHAR; - wid.val = (s8 *)tx_pwr; - wid.size = sizeof(char); - - ret = wilc_send_config_pkt(vif, GET_CFG, &wid, 1, - wilc_get_vif_idx(vif)); - if (ret) - netdev_err(vif->ndev, "Failed to get TX PWR\n"); - - complete(&msg->work_comp); -} - static void handle_scan_timer(struct work_struct *work) { struct host_if_msg *msg = container_of(work, struct host_if_msg, work); @@ -2558,14 +1467,6 @@ static void handle_scan_timer(struct work_struct *work) kfree(msg); } -static void handle_remain_on_chan_work(struct work_struct *work) -{ - struct host_if_msg *msg = container_of(work, struct host_if_msg, work); - - handle_remain_on_chan(msg->vif, &msg->body.remain_on_ch); - kfree(msg); -} - static void handle_scan_complete(struct work_struct *work) { struct host_if_msg *msg = container_of(work, struct host_if_msg, work); @@ -2579,7 +1480,8 @@ static void handle_scan_complete(struct work_struct *work) handle_scan_done(msg->vif, SCAN_EVENT_DONE); if (msg->vif->hif_drv->remain_on_ch_pending) - handle_remain_on_chan(msg->vif, &msg->body.remain_on_ch); + handle_remain_on_chan(msg->vif, + &msg->vif->hif_drv->remain_on_ch); kfree(msg); } @@ -2618,145 +1520,107 @@ static void timer_connect_cb(struct timer_list *t) int wilc_remove_wep_key(struct wilc_vif *vif, u8 index) { + struct wid wid; int result; - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; - - if (!hif_drv) { - result = -EFAULT; - netdev_err(vif->ndev, "%s: hif driver is NULL", __func__); - return result; - } - - msg = wilc_alloc_work(vif, handle_key, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); - msg->body.key_info.type = WEP; - msg->body.key_info.action = REMOVEKEY; - msg->body.key_info.attr.wep.index = index; + wid.id = WID_REMOVE_WEP_KEY; + wid.type = WID_STR; + wid.size = sizeof(char); + wid.val = &index; - result = wilc_enqueue_work(msg); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); if (result) - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - else - wait_for_completion(&msg->work_comp); - - kfree(msg); + netdev_err(vif->ndev, + "Failed to send remove wep key config packet\n"); return result; } int wilc_set_wep_default_keyid(struct wilc_vif *vif, u8 index) { + struct wid wid; int result; - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; - if (!hif_drv) { - result = -EFAULT; - netdev_err(vif->ndev, "%s: hif driver is NULL\n", __func__); - return result; - } - - msg = wilc_alloc_work(vif, handle_key, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - msg->body.key_info.type = WEP; - msg->body.key_info.action = DEFAULTKEY; - msg->body.key_info.attr.wep.index = 
index; - - result = wilc_enqueue_work(msg); + wid.id = WID_KEY_ID; + wid.type = WID_CHAR; + wid.size = sizeof(char); + wid.val = &index; + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); if (result) - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - else - wait_for_completion(&msg->work_comp); + netdev_err(vif->ndev, + "Failed to send wep default key config packet\n"); - kfree(msg); return result; } int wilc_add_wep_key_bss_sta(struct wilc_vif *vif, const u8 *key, u8 len, u8 index) { + struct wid wid; int result; - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; + struct wilc_wep_key *wep_key; - if (!hif_drv) { - netdev_err(vif->ndev, "%s: hif driver is NULL", __func__); - return -EFAULT; - } - - msg = wilc_alloc_work(vif, handle_key, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_ADD_WEP_KEY; + wid.type = WID_STR; + wid.size = sizeof(*wep_key) + len; + wep_key = kzalloc(wid.size, GFP_KERNEL); + if (!wep_key) + return -ENOMEM; - msg->body.key_info.type = WEP; - msg->body.key_info.action = ADDKEY; - msg->body.key_info.attr.wep.key = kmemdup(key, len, GFP_KERNEL); - if (!msg->body.key_info.attr.wep.key) { - result = -ENOMEM; - goto free_msg; - } + wid.val = (u8 *)wep_key; - msg->body.key_info.attr.wep.key_len = len; - msg->body.key_info.attr.wep.index = index; + wep_key->index = index; + wep_key->key_len = len; + memcpy(wep_key->key, key, len); - result = wilc_enqueue_work(msg); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); if (result) - goto free_key; - - wait_for_completion(&msg->work_comp); - -free_key: - kfree(msg->body.key_info.attr.wep.key); + netdev_err(vif->ndev, + "Failed to add wep key config packet\n"); -free_msg: - kfree(msg); + kfree(wep_key); return result; } int wilc_add_wep_key_bss_ap(struct wilc_vif *vif, const u8 *key, u8 len, u8 index, u8 mode, enum authtype auth_type) { + struct wid wid_list[3]; int result; - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; - - if (!hif_drv) { - netdev_err(vif->ndev, "%s: hif driver is NULL\n", __func__); - return -EFAULT; - } - - msg = wilc_alloc_work(vif, handle_key, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - msg->body.key_info.type = WEP; - msg->body.key_info.action = ADDKEY_AP; - msg->body.key_info.attr.wep.key = kmemdup(key, len, GFP_KERNEL); - if (!msg->body.key_info.attr.wep.key) { - result = -ENOMEM; - goto free_msg; - } + struct wilc_wep_key *wep_key; + + wid_list[0].id = WID_11I_MODE; + wid_list[0].type = WID_CHAR; + wid_list[0].size = sizeof(char); + wid_list[0].val = &mode; + + wid_list[1].id = WID_AUTH_TYPE; + wid_list[1].type = WID_CHAR; + wid_list[1].size = sizeof(char); + wid_list[1].val = (s8 *)&auth_type; + + wid_list[2].id = WID_WEP_KEY_VALUE; + wid_list[2].type = WID_STR; + wid_list[2].size = sizeof(*wep_key) + len; + wep_key = kzalloc(wid_list[2].size, GFP_KERNEL); + if (!wep_key) + return -ENOMEM; - msg->body.key_info.attr.wep.key_len = len; - msg->body.key_info.attr.wep.index = index; - msg->body.key_info.attr.wep.mode = mode; - msg->body.key_info.attr.wep.auth_type = auth_type; + wid_list[2].val = (u8 *)wep_key; - result = wilc_enqueue_work(msg); + wep_key->index = index; + wep_key->key_len = len; + memcpy(wep_key->key, key, len); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, wid_list, + ARRAY_SIZE(wid_list), + wilc_get_vif_idx(vif)); if (result) - goto free_key; - - wait_for_completion(&msg->work_comp); - -free_key: - 
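/*
 * [Sketch] The new WEP paths above replace a hand-packed kmalloc buffer
 * with a struct ending in a flexible array member, allocated in one shot as
 * sizeof(*key) + key_len and handed to the WID as-is. Userspace model of
 * that idiom (the struct mirrors the shape of the driver's wilc_wep_key):
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct wep_key_msg {
	uint8_t index;
	uint8_t key_len;
	uint8_t key[];		/* flexible array member */
};

static struct wep_key_msg *make_wep_key_msg(uint8_t index,
					    const uint8_t *key, uint8_t len)
{
	struct wep_key_msg *k = calloc(1, sizeof(*k) + len);

	if (!k)
		return NULL;
	k->index = index;
	k->key_len = len;
	memcpy(k->key, key, len);
	return k;	/* wire size is sizeof(*k) + len; caller frees */
}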
kfree(msg->body.key_info.attr.wep.key); + netdev_err(vif->ndev, + "Failed to add wep ap key config packet\n"); -free_msg: - kfree(msg); + kfree(wep_key); return result; } @@ -2764,65 +1628,72 @@ int wilc_add_ptk(struct wilc_vif *vif, const u8 *ptk, u8 ptk_key_len, const u8 *mac_addr, const u8 *rx_mic, const u8 *tx_mic, u8 mode, u8 cipher_mode, u8 index) { - int result; - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; - u8 key_len = ptk_key_len; - - if (!hif_drv) { - netdev_err(vif->ndev, "%s: hif driver is NULL", __func__); - return -EFAULT; - } + int result = 0; + u8 t_key_len = ptk_key_len + RX_MIC_KEY_LEN + TX_MIC_KEY_LEN; - if (rx_mic) - key_len += RX_MIC_KEY_LEN; + if (mode == WILC_AP_MODE) { + struct wid wid_list[2]; + struct wilc_ap_wpa_ptk *key_buf; - if (tx_mic) - key_len += TX_MIC_KEY_LEN; + wid_list[0].id = WID_11I_MODE; + wid_list[0].type = WID_CHAR; + wid_list[0].size = sizeof(char); + wid_list[0].val = (s8 *)&cipher_mode; - msg = wilc_alloc_work(vif, handle_key, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); + key_buf = kzalloc(sizeof(*key_buf) + t_key_len, GFP_KERNEL); + if (!key_buf) + return -ENOMEM; - msg->body.key_info.type = WPA_PTK; - if (mode == AP_MODE) { - msg->body.key_info.action = ADDKEY_AP; - msg->body.key_info.attr.wpa.index = index; - } - if (mode == STATION_MODE) - msg->body.key_info.action = ADDKEY; + ether_addr_copy(key_buf->mac_addr, mac_addr); + key_buf->index = index; + key_buf->key_len = t_key_len; + memcpy(&key_buf->key[0], ptk, ptk_key_len); + + if (rx_mic) + memcpy(&key_buf->key[ptk_key_len], rx_mic, + RX_MIC_KEY_LEN); + + if (tx_mic) + memcpy(&key_buf->key[ptk_key_len + RX_MIC_KEY_LEN], + tx_mic, TX_MIC_KEY_LEN); + + wid_list[1].id = WID_ADD_PTK; + wid_list[1].type = WID_STR; + wid_list[1].size = sizeof(*key_buf) + t_key_len; + wid_list[1].val = (u8 *)key_buf; + result = wilc_send_config_pkt(vif, WILC_SET_CFG, wid_list, + ARRAY_SIZE(wid_list), + wilc_get_vif_idx(vif)); + kfree(key_buf); + } else if (mode == WILC_STATION_MODE) { + struct wid wid; + struct wilc_sta_wpa_ptk *key_buf; - msg->body.key_info.attr.wpa.key = kmemdup(ptk, ptk_key_len, GFP_KERNEL); - if (!msg->body.key_info.attr.wpa.key) { - result = -ENOMEM; - goto free_msg; - } + key_buf = kzalloc(sizeof(*key_buf) + t_key_len, GFP_KERNEL); + if (!key_buf) + return -ENOMEM; - if (rx_mic) - memcpy(msg->body.key_info.attr.wpa.key + 16, rx_mic, - RX_MIC_KEY_LEN); + ether_addr_copy(key_buf->mac_addr, mac_addr); + key_buf->key_len = t_key_len; + memcpy(&key_buf->key[0], ptk, ptk_key_len); - if (tx_mic) - memcpy(msg->body.key_info.attr.wpa.key + 24, tx_mic, - TX_MIC_KEY_LEN); + if (rx_mic) + memcpy(&key_buf->key[ptk_key_len], rx_mic, + RX_MIC_KEY_LEN); - msg->body.key_info.attr.wpa.key_len = key_len; - msg->body.key_info.attr.wpa.mac_addr = mac_addr; - msg->body.key_info.attr.wpa.mode = cipher_mode; + if (tx_mic) + memcpy(&key_buf->key[ptk_key_len + RX_MIC_KEY_LEN], + tx_mic, TX_MIC_KEY_LEN); - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - goto free_key; + wid.id = WID_ADD_PTK; + wid.type = WID_STR; + wid.size = sizeof(*key_buf) + t_key_len; + wid.val = (s8 *)key_buf; + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + kfree(key_buf); } - wait_for_completion(&msg->work_comp); - -free_key: - kfree(msg->body.key_info.attr.wpa.key); - -free_msg: - kfree(msg); return result; } @@ -2831,108 +1702,76 @@ int wilc_add_rx_gtk(struct wilc_vif *vif, const u8 *rx_gtk, u8 
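/*
 * [Sketch] wilc_add_ptk() above always sizes the message for both MIC
 * halves (t_key_len = ptk + 8 + 8) and lays the key out as a single
 * concatenation [ptk | rx_mic | tx_mic]. Minimal model of that layout,
 * using the 8-byte MIC lengths implied by the old +16/+24 offsets:
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RX_MIC_KEY_LEN 8
#define TX_MIC_KEY_LEN 8

static size_t pack_ptk(uint8_t *key, const uint8_t *ptk, uint8_t ptk_len,
		       const uint8_t *rx_mic, const uint8_t *tx_mic)
{
	memcpy(key, ptk, ptk_len);
	if (rx_mic)
		memcpy(key + ptk_len, rx_mic, RX_MIC_KEY_LEN);
	if (tx_mic)
		memcpy(key + ptk_len + RX_MIC_KEY_LEN, tx_mic,
		       TX_MIC_KEY_LEN);
	return (size_t)ptk_len + RX_MIC_KEY_LEN + TX_MIC_KEY_LEN;
}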
gtk_key_len,
                        const u8 *rx_mic, const u8 *tx_mic, u8 mode,
                        u8 cipher_mode)
 {
-        int result;
-        struct host_if_msg *msg;
-        struct host_if_drv *hif_drv = vif->hif_drv;
-        u8 key_len = gtk_key_len;
-
-        if (!hif_drv) {
-                netdev_err(vif->ndev, "%s: hif driver is NULL", __func__);
-                return -EFAULT;
-        }
-
-        msg = wilc_alloc_work(vif, handle_key, true);
-        if (IS_ERR(msg))
-                return PTR_ERR(msg);
-
-        if (rx_mic)
-                key_len += RX_MIC_KEY_LEN;
-
-        if (tx_mic)
-                key_len += TX_MIC_KEY_LEN;
+        int result = 0;
+        struct wilc_gtk_key *gtk_key;
+        int t_key_len = gtk_key_len + RX_MIC_KEY_LEN + TX_MIC_KEY_LEN;
 
-        if (key_rsc) {
-                msg->body.key_info.attr.wpa.seq = kmemdup(key_rsc,
-                                                          key_rsc_len,
-                                                          GFP_KERNEL);
-                if (!msg->body.key_info.attr.wpa.seq) {
-                        result = -ENOMEM;
-                        goto free_msg;
-                }
-        }
+        gtk_key = kzalloc(sizeof(*gtk_key) + t_key_len, GFP_KERNEL);
+        if (!gtk_key)
+                return -ENOMEM;
 
-        msg->body.key_info.type = WPA_RX_GTK;
+        /* fill bssid value only in station mode */
+        if (mode == WILC_STATION_MODE &&
+            vif->hif_drv->hif_state == HOST_IF_CONNECTED)
+                memcpy(gtk_key->mac_addr, vif->hif_drv->assoc_bssid, ETH_ALEN);
 
-        if (mode == AP_MODE) {
-                msg->body.key_info.action = ADDKEY_AP;
-                msg->body.key_info.attr.wpa.mode = cipher_mode;
-        }
-        if (mode == STATION_MODE)
-                msg->body.key_info.action = ADDKEY;
-
-        msg->body.key_info.attr.wpa.key = kmemdup(rx_gtk, key_len, GFP_KERNEL);
-        if (!msg->body.key_info.attr.wpa.key) {
-                result = -ENOMEM;
-                goto free_seq;
-        }
+        if (key_rsc)
+                memcpy(gtk_key->rsc, key_rsc, 8);
+        gtk_key->index = index;
+        gtk_key->key_len = t_key_len;
+        memcpy(&gtk_key->key[0], rx_gtk, gtk_key_len);
 
         if (rx_mic)
-                memcpy(msg->body.key_info.attr.wpa.key + 16, rx_mic,
-                       RX_MIC_KEY_LEN);
+                memcpy(&gtk_key->key[gtk_key_len], rx_mic, RX_MIC_KEY_LEN);
 
         if (tx_mic)
-                memcpy(msg->body.key_info.attr.wpa.key + 24, tx_mic,
-                       TX_MIC_KEY_LEN);
+                memcpy(&gtk_key->key[gtk_key_len + RX_MIC_KEY_LEN],
+                       tx_mic, TX_MIC_KEY_LEN);
 
-        msg->body.key_info.attr.wpa.index = index;
-        msg->body.key_info.attr.wpa.key_len = key_len;
-        msg->body.key_info.attr.wpa.seq_len = key_rsc_len;
+        if (mode == WILC_AP_MODE) {
+                struct wid wid_list[2];
 
-        result = wilc_enqueue_work(msg);
-        if (result) {
-                netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__);
-                goto free_key;
-        }
+                wid_list[0].id = WID_11I_MODE;
+                wid_list[0].type = WID_CHAR;
+                wid_list[0].size = sizeof(char);
+                wid_list[0].val = (s8 *)&cipher_mode;
 
-        wait_for_completion(&msg->work_comp);
+                wid_list[1].id = WID_ADD_RX_GTK;
+                wid_list[1].type = WID_STR;
+                wid_list[1].size = sizeof(*gtk_key) + t_key_len;
+                wid_list[1].val = (u8 *)gtk_key;
 
-free_key:
-        kfree(msg->body.key_info.attr.wpa.key);
+                result = wilc_send_config_pkt(vif, WILC_SET_CFG, wid_list,
+                                              ARRAY_SIZE(wid_list),
+                                              wilc_get_vif_idx(vif));
+                kfree(gtk_key);
+        } else if (mode == WILC_STATION_MODE) {
+                struct wid wid;
 
-free_seq:
-        kfree(msg->body.key_info.attr.wpa.seq);
+                wid.id = WID_ADD_RX_GTK;
+                wid.type = WID_STR;
+                wid.size = sizeof(*gtk_key) + t_key_len;
+                wid.val = (u8 *)gtk_key;
+                result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1,
+                                              wilc_get_vif_idx(vif));
+                kfree(gtk_key);
+        }
 
-free_msg:
-        kfree(msg);
         return result;
 }
 
-int wilc_set_pmkid_info(struct wilc_vif *vif,
-                        struct host_if_pmkid_attr *pmkid)
+int wilc_set_pmkid_info(struct wilc_vif *vif, struct wilc_pmkid_attr *pmkid)
 {
+        struct wid wid;
         int result;
-        struct host_if_msg *msg;
-        int i;
-
-        msg = wilc_alloc_work(vif, handle_key, false);
-        if (IS_ERR(msg))
-                return PTR_ERR(msg);
-
-        msg->body.key_info.type = PMKSA;
-        msg->body.key_info.action = ADDKEY;
-
for (i = 0; i < pmkid->numpmkid; i++) { - memcpy(msg->body.key_info.attr.pmkid.pmkidlist[i].bssid, - &pmkid->pmkidlist[i].bssid, ETH_ALEN); - memcpy(msg->body.key_info.attr.pmkid.pmkidlist[i].pmkid, - &pmkid->pmkidlist[i].pmkid, PMKID_LEN); - } + wid.id = WID_PMKID_INFO; + wid.type = WID_STR; + wid.size = (pmkid->numpmkid * sizeof(struct wilc_pmkid)) + 1; + wid.val = (u8 *)pmkid; - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); return result; } @@ -2940,21 +1779,17 @@ int wilc_set_pmkid_info(struct wilc_vif *vif, int wilc_get_mac_address(struct wilc_vif *vif, u8 *mac_addr) { int result; - struct host_if_msg *msg; - - msg = wilc_alloc_work(vif, handle_get_mac_address, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); + struct wid wid; - msg->body.get_mac_info.mac_addr = mac_addr; + wid.id = WID_MAC_ADDR; + wid.type = WID_STR; + wid.size = ETH_ALEN; + wid.val = mac_addr; - result = wilc_enqueue_work(msg); + result = wilc_send_config_pkt(vif, WILC_GET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); if (result) - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - else - wait_for_completion(&msg->work_comp); - - kfree(msg); + netdev_err(vif->ndev, "Failed to get mac address\n"); return result; } @@ -2966,8 +1801,8 @@ int wilc_set_join_req(struct wilc_vif *vif, u8 *bssid, const u8 *ssid, u8 channel, void *join_params) { int result; - struct host_if_msg *msg; struct host_if_drv *hif_drv = vif->hif_drv; + struct user_conn_req *con_info = &hif_drv->usr_conn_req; if (!hif_drv || !connect_result) { netdev_err(vif->ndev, @@ -2981,50 +1816,45 @@ int wilc_set_join_req(struct wilc_vif *vif, u8 *bssid, const u8 *ssid, return -EFAULT; } - msg = wilc_alloc_work(vif, handle_connect, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + if (hif_drv->usr_scan_req.scan_result) { + netdev_err(vif->ndev, "%s: Scan in progress\n", __func__); + return -EBUSY; + } - msg->body.con_info.security = security; - msg->body.con_info.auth_type = auth_type; - msg->body.con_info.ch = channel; - msg->body.con_info.result = connect_result; - msg->body.con_info.arg = user_arg; - msg->body.con_info.params = join_params; + con_info->security = security; + con_info->auth_type = auth_type; + con_info->ch = channel; + con_info->conn_result = connect_result; + con_info->arg = user_arg; + con_info->param = join_params; if (bssid) { - msg->body.con_info.bssid = kmemdup(bssid, 6, GFP_KERNEL); - if (!msg->body.con_info.bssid) { - result = -ENOMEM; - goto free_msg; - } + con_info->bssid = kmemdup(bssid, 6, GFP_KERNEL); + if (!con_info->bssid) + return -ENOMEM; } if (ssid) { - msg->body.con_info.ssid_len = ssid_len; - msg->body.con_info.ssid = kmemdup(ssid, ssid_len, GFP_KERNEL); - if (!msg->body.con_info.ssid) { + con_info->ssid_len = ssid_len; + con_info->ssid = kmemdup(ssid, ssid_len, GFP_KERNEL); + if (!con_info->ssid) { result = -ENOMEM; goto free_bssid; } } if (ies) { - msg->body.con_info.ies_len = ies_len; - msg->body.con_info.ies = kmemdup(ies, ies_len, GFP_KERNEL); - if (!msg->body.con_info.ies) { + con_info->ies_len = ies_len; + con_info->ies = kmemdup(ies, ies_len, GFP_KERNEL); + if (!con_info->ies) { result = -ENOMEM; goto free_ssid; } } - if (hif_drv->hif_state < HOST_IF_CONNECTING) - hif_drv->hif_state = HOST_IF_CONNECTING; - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); + result = 
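/*
 * [Sketch] wilc_set_pmkid_info() above now points wid.val straight at the
 * caller's structure, so the struct layout must equal the wire format: one
 * count byte, then 22-byte (bssid + pmkid) entries with no padding.
 * Compilable check of that invariant (names are stand-ins for the driver's
 * wilc_pmkid/wilc_pmkid_attr):
 */
#include <assert.h>
#include <stdint.h>

struct pmkid_entry {
	uint8_t bssid[6];
	uint8_t pmkid[16];
} __attribute__((packed));

struct pmkid_attr {
	uint8_t numpmkid;
	struct pmkid_entry list[2];	/* two entries, just for the test */
} __attribute__((packed));

int main(void)
{
	/* wid.size above is numpmkid * sizeof(struct wilc_pmkid) + 1 */
	assert(sizeof(struct pmkid_entry) == 22);
	assert(sizeof(struct pmkid_attr) == 1 + 2 * 22);
	return 0;
}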
wilc_send_connect_wid(vif); + if (result) goto free_ies; - } hif_drv->connect_timer_vif = vif; mod_timer(&hif_drv->connect_timer, @@ -3033,291 +1863,193 @@ int wilc_set_join_req(struct wilc_vif *vif, u8 *bssid, const u8 *ssid, return 0; free_ies: - kfree(msg->body.con_info.ies); + kfree(con_info->ies); free_ssid: - kfree(msg->body.con_info.ssid); + kfree(con_info->ssid); free_bssid: - kfree(msg->body.con_info.bssid); - -free_msg: - kfree(msg); - return result; -} - -int wilc_disconnect(struct wilc_vif *vif, u16 reason_code) -{ - int result; - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; + kfree(con_info->bssid); - if (!hif_drv) { - netdev_err(vif->ndev, "%s: hif driver is NULL", __func__); - return -EFAULT; - } - - msg = wilc_alloc_work(vif, handle_disconnect, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - result = wilc_enqueue_work(msg); - if (result) - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - else - wait_for_completion(&msg->work_comp); - - kfree(msg); return result; } int wilc_set_mac_chnl_num(struct wilc_vif *vif, u8 channel) { + struct wid wid; int result; - struct host_if_msg *msg; - msg = wilc_alloc_work(vif, handle_set_channel, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - msg->body.channel_info.set_ch = channel; + wid.id = WID_CURRENT_CHANNEL; + wid.type = WID_CHAR; + wid.size = sizeof(char); + wid.val = &channel; - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to set channel\n"); return result; } int wilc_set_wfi_drv_handler(struct wilc_vif *vif, int index, u8 mode, - u8 ifc_id, bool is_sync) + u8 ifc_id) { + struct wid wid; + struct host_if_drv *hif_drv = vif->hif_drv; int result; - struct host_if_msg *msg; + struct wilc_drv_handler drv; - msg = wilc_alloc_work(vif, handle_set_wfi_drv_handler, is_sync); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_SET_DRV_HANDLER; + wid.type = WID_STR; + wid.size = sizeof(drv); + wid.val = (u8 *)&drv; - msg->body.drv.handler = index; - msg->body.drv.mode = mode; - msg->body.drv.name = ifc_id; + drv.handler = cpu_to_le32(index); + drv.mode = (ifc_id | (mode << 1)); - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - return result; - } - - if (is_sync) - wait_for_completion(&msg->work_comp); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + hif_drv->driver_handler_id); + if (result) + netdev_err(vif->ndev, "Failed to set driver handler\n"); return result; } int wilc_set_operation_mode(struct wilc_vif *vif, u32 mode) { + struct wid wid; + struct wilc_op_mode op_mode; int result; - struct host_if_msg *msg; - msg = wilc_alloc_work(vif, handle_set_operation_mode, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_SET_OPERATION_MODE; + wid.type = WID_INT; + wid.size = sizeof(op_mode); + wid.val = (u8 *)&op_mode; - msg->body.mode.mode = mode; - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } + op_mode.mode = cpu_to_le32(mode); + + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to set operation mode\n"); return result; } -s32 wilc_get_inactive_time(struct wilc_vif *vif, const u8 *mac, - u32 *out_val) 
+s32 wilc_get_inactive_time(struct wilc_vif *vif, const u8 *mac, u32 *out_val) { + struct wid wid; s32 result; - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; - if (!hif_drv) { - netdev_err(vif->ndev, "%s: hif driver is NULL", __func__); - return -EFAULT; - } - - msg = wilc_alloc_work(vif, handle_get_inactive_time, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_SET_STA_MAC_INACTIVE_TIME; + wid.type = WID_STR; + wid.size = ETH_ALEN; + wid.val = kzalloc(wid.size, GFP_KERNEL); + if (!wid.val) + return -ENOMEM; - memcpy(msg->body.mac_info.mac, mac, ETH_ALEN); + ether_addr_copy(wid.val, mac); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + kfree(wid.val); + if (result) { + netdev_err(vif->ndev, "Failed to set inactive mac\n"); + return result; + } - result = wilc_enqueue_work(msg); + wid.id = WID_GET_INACTIVE_TIME; + wid.type = WID_INT; + wid.val = (s8 *)out_val; + wid.size = sizeof(u32); + result = wilc_send_config_pkt(vif, WILC_GET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); if (result) - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - else - wait_for_completion(&msg->work_comp); - - *out_val = msg->body.mac_info.inactive_time; - kfree(msg); + netdev_err(vif->ndev, "Failed to get inactive time\n"); return result; } int wilc_get_rssi(struct wilc_vif *vif, s8 *rssi_level) { + struct wid wid; int result; - struct host_if_msg *msg; if (!rssi_level) { netdev_err(vif->ndev, "%s: RSSI level is NULL\n", __func__); return -EFAULT; } - msg = wilc_alloc_work(vif, handle_get_rssi, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - msg->body.data = kzalloc(sizeof(s8), GFP_KERNEL); - if (!msg->body.data) { - kfree(msg); - return -ENOMEM; - } - - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - } else { - wait_for_completion(&msg->work_comp); - *rssi_level = *msg->body.data; - } - - kfree(msg->body.data); - kfree(msg); - - return result; -} - -int -wilc_get_statistics(struct wilc_vif *vif, struct rf_info *stats, bool is_sync) -{ - int result; - struct host_if_msg *msg; - - msg = wilc_alloc_work(vif, handle_get_statistics, is_sync); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - msg->body.data = (char *)stats; - - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - return result; - } - - if (is_sync) { - wait_for_completion(&msg->work_comp); - kfree(msg); - } + wid.id = WID_RSSI; + wid.type = WID_CHAR; + wid.size = sizeof(char); + wid.val = rssi_level; + result = wilc_send_config_pkt(vif, WILC_GET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to get RSSI value\n"); return result; } - -int wilc_scan(struct wilc_vif *vif, u8 scan_source, u8 scan_type, - u8 *ch_freq_list, u8 ch_list_len, const u8 *ies, - size_t ies_len, wilc_scan_result scan_result, void *user_arg, - struct hidden_network *hidden_network) + +int wilc_get_stats_async(struct wilc_vif *vif, struct rf_info *stats) { int result; struct host_if_msg *msg; - struct scan_attr *scan_info; - struct host_if_drv *hif_drv = vif->hif_drv; - - if (!hif_drv || !scan_result) { - netdev_err(vif->ndev, "hif_drv or scan_result = NULL\n"); - return -EFAULT; - } - msg = wilc_alloc_work(vif, handle_scan, false); + msg = wilc_alloc_work(vif, handle_get_statistics, false); if (IS_ERR(msg)) return PTR_ERR(msg); - scan_info = &msg->body.scan_info; - - if (hidden_network) { - 
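/*
 * [Sketch] The inactive-time query above is a two-step WID exchange: first
 * SET the station MAC into WID_SET_STA_MAC_INACTIVE_TIME, then GET a u32
 * back from WID_GET_INACTIVE_TIME. The transport below is entirely
 * hypothetical stub code standing in for wilc_send_config_pkt():
 */
#include <stdint.h>
#include <string.h>

static uint8_t selected_mac[6];

static int wid_set(uint16_t id, const void *val, int len)
{
	(void)id;
	memcpy(selected_mac, val, (size_t)len);	/* firmware latches MAC */
	return 0;
}

static int wid_get(uint16_t id, void *val, int len)
{
	(void)id;
	(void)len;
	*(uint32_t *)val = 42;			/* canned answer */
	return 0;
}

static int get_inactive_time(const uint8_t mac[6], uint32_t *out)
{
	int ret = wid_set(/* WID_SET_STA_MAC_INACTIVE_TIME */ 0, mac, 6);

	if (ret)
		return ret;	/* must select the station first */
	return wid_get(/* WID_GET_INACTIVE_TIME */ 0, out, 4);
}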
scan_info->hidden_network.net_info = hidden_network->net_info; - scan_info->hidden_network.n_ssids = hidden_network->n_ssids; - } - - scan_info->src = scan_source; - scan_info->type = scan_type; - scan_info->result = scan_result; - scan_info->arg = user_arg; - - scan_info->ch_list_len = ch_list_len; - scan_info->ch_freq_list = kmemdup(ch_freq_list, - ch_list_len, - GFP_KERNEL); - if (!scan_info->ch_freq_list) { - result = -ENOMEM; - goto free_msg; - } - - scan_info->ies_len = ies_len; - scan_info->ies = kmemdup(ies, ies_len, GFP_KERNEL); - if (!scan_info->ies) { - result = -ENOMEM; - goto free_freq_list; - } + msg->body.data = (char *)stats; result = wilc_enqueue_work(msg); if (result) { netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - goto free_ies; + kfree(msg); + return result; } - hif_drv->scan_timer_vif = vif; - mod_timer(&hif_drv->scan_timer, - jiffies + msecs_to_jiffies(HOST_IF_SCAN_TIMEOUT)); - - return 0; - -free_ies: - kfree(scan_info->ies); - -free_freq_list: - kfree(scan_info->ch_freq_list); - -free_msg: - kfree(msg); return result; } -int wilc_hif_set_cfg(struct wilc_vif *vif, - struct cfg_param_attr *cfg_param) +int wilc_hif_set_cfg(struct wilc_vif *vif, struct cfg_param_attr *param) { - struct host_if_msg *msg; - struct host_if_drv *hif_drv = vif->hif_drv; + struct wid wid_list[4]; + int i = 0; int result; - if (!hif_drv) { - netdev_err(vif->ndev, "%s: hif driver is NULL", __func__); - return -EFAULT; + if (param->flag & WILC_CFG_PARAM_RETRY_SHORT) { + wid_list[i].id = WID_SHORT_RETRY_LIMIT; + wid_list[i].val = (s8 *)&param->short_retry_limit; + wid_list[i].type = WID_SHORT; + wid_list[i].size = sizeof(u16); + i++; + } + if (param->flag & WILC_CFG_PARAM_RETRY_LONG) { + wid_list[i].id = WID_LONG_RETRY_LIMIT; + wid_list[i].val = (s8 *)&param->long_retry_limit; + wid_list[i].type = WID_SHORT; + wid_list[i].size = sizeof(u16); + i++; + } + if (param->flag & WILC_CFG_PARAM_FRAG_THRESHOLD) { + wid_list[i].id = WID_FRAG_THRESHOLD; + wid_list[i].val = (s8 *)&param->frag_threshold; + wid_list[i].type = WID_SHORT; + wid_list[i].size = sizeof(u16); + i++; + } + if (param->flag & WILC_CFG_PARAM_RTS_THRESHOLD) { + wid_list[i].id = WID_RTS_THRESHOLD; + wid_list[i].val = (s8 *)&param->rts_threshold; + wid_list[i].type = WID_SHORT; + wid_list[i].size = sizeof(u16); + i++; } - msg = wilc_alloc_work(vif, handle_cfg_param, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - msg->body.cfg_info = *cfg_param; - result = wilc_enqueue_work(msg); - if (result) - kfree(msg); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, wid_list, + i, wilc_get_vif_idx(vif)); return result; } @@ -3332,7 +2064,7 @@ static void get_periodic_rssi(struct timer_list *t) } if (vif->hif_drv->hif_state == HOST_IF_CONNECTED) - wilc_get_statistics(vif, &vif->periodic_stat, false); + wilc_get_stats_async(vif, &vif->periodic_stat); mod_timer(&vif->periodic_rssi, jiffies + msecs_to_jiffies(5000)); } @@ -3368,20 +2100,10 @@ int wilc_init(struct net_device *dev, struct host_if_drv **hif_drv_handler) timer_setup(&hif_drv->connect_timer, timer_connect_cb, 0); timer_setup(&hif_drv->remain_on_ch_timer, listen_timer_cb, 0); - mutex_init(&hif_drv->cfg_values_lock); - mutex_lock(&hif_drv->cfg_values_lock); - hif_drv->hif_state = HOST_IF_IDLE; - hif_drv->cfg_values.site_survey_enabled = SITE_SURVEY_OFF; - hif_drv->cfg_values.scan_source = DEFAULT_SCAN; - hif_drv->cfg_values.active_scan_time = ACTIVE_SCAN_TIME; - hif_drv->cfg_values.passive_scan_time = PASSIVE_SCAN_TIME; - hif_drv->cfg_values.curr_tx_rate = AUTORATE;
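The wilc_hif_set_cfg() conversion above is the template for the whole file: the allocate/enqueue/wait round trip through a host_if_msg is replaced by a struct wid filled on the stack and pushed synchronously with wilc_send_config_pkt(). A minimal sketch of that shape, built only from identifiers that appear in this diff (the helper name itself is hypothetical, not part of the commit):

/*
 * Sketch of the synchronous WID-setter pattern used throughout this
 * refactor: fill a struct wid in the caller's context and send it,
 * instead of queueing a host_if_msg work item and waiting on it.
 */
static int wilc_set_byte_wid_sketch(struct wilc_vif *vif, u16 id, u8 val)
{
	struct wid wid;
	int result;

	wid.id = id;			/* e.g. WID_CURRENT_CHANNEL */
	wid.type = WID_CHAR;		/* single-byte configuration value */
	wid.size = sizeof(val);
	wid.val = &val;			/* stack storage, no allocation */

	result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1,
				      wilc_get_vif_idx(vif));
	if (result)
		netdev_err(vif->ndev, "Failed to set WID 0x%x\n", id);

	return result;
}

Compare with wilc_set_mac_chnl_num() and wilc_hif_set_cfg() above: no host_if_msg allocation, no workqueue hop, and the error path is a plain return.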
hif_drv->p2p_timeout = 0; - mutex_unlock(&hif_drv->cfg_values_lock); - wilc->clients_count++; return 0; @@ -3406,7 +2128,7 @@ int wilc_deinit(struct wilc_vif *vif) del_timer_sync(&vif->periodic_rssi); del_timer_sync(&hif_drv->remain_on_ch_timer); - wilc_set_wfi_drv_handler(vif, 0, 0, 0, true); + wilc_set_wfi_drv_handler(vif, 0, 0, 0); if (hif_drv->usr_scan_req.scan_result) { hif_drv->usr_scan_req.scan_result(SCAN_EVENT_ABORTED, NULL, @@ -3564,25 +2286,19 @@ int wilc_remain_on_channel(struct wilc_vif *vif, u32 session_id, wilc_remain_on_chan_ready ready, void *user_arg) { + struct remain_ch roc; int result; - struct host_if_msg *msg; - - msg = wilc_alloc_work(vif, handle_remain_on_chan_work, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - msg->body.remain_on_ch.ch = chan; - msg->body.remain_on_ch.expired = expired; - msg->body.remain_on_ch.ready = ready; - msg->body.remain_on_ch.arg = user_arg; - msg->body.remain_on_ch.duration = duration; - msg->body.remain_on_ch.id = session_id; - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } + roc.ch = chan; + roc.expired = expired; + roc.ready = ready; + roc.arg = user_arg; + roc.duration = duration; + roc.id = session_id; + result = handle_remain_on_chan(vif, &roc); + if (result) + netdev_err(vif->ndev, "%s: failed to set remain on channel\n", + __func__); return result; } @@ -3617,77 +2333,75 @@ int wilc_listen_state_expired(struct wilc_vif *vif, u32 session_id) void wilc_frame_register(struct wilc_vif *vif, u16 frame_type, bool reg) { + struct wid wid; int result; - struct host_if_msg *msg; + struct wilc_reg_frame reg_frame; - msg = wilc_alloc_work(vif, handle_register_frame, false); - if (IS_ERR(msg)) - return; + wid.id = WID_REGISTER_FRAME; + wid.type = WID_STR; + wid.size = sizeof(reg_frame); + wid.val = (u8 *)&reg_frame; + + memset(&reg_frame, 0x0, sizeof(reg_frame)); + reg_frame.reg = reg; switch (frame_type) { - case ACTION: - msg->body.reg_frame.reg_id = ACTION_FRM_IDX; + case IEEE80211_STYPE_ACTION: + reg_frame.reg_id = WILC_FW_ACTION_FRM_IDX; break; - case PROBE_REQ: - msg->body.reg_frame.reg_id = PROBE_REQ_IDX; + case IEEE80211_STYPE_PROBE_REQ: + reg_frame.reg_id = WILC_FW_PROBE_REQ_IDX; break; default: break; } - msg->body.reg_frame.frame_type = frame_type; - msg->body.reg_frame.reg = reg; - - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } + reg_frame.frame_type = cpu_to_le16(frame_type); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to frame register\n"); } int wilc_add_beacon(struct wilc_vif *vif, u32 interval, u32 dtim_period, - u32 head_len, u8 *head, u32 tail_len, u8 *tail) + struct cfg80211_beacon_data *params) { + struct wid wid; int result; - struct host_if_msg *msg; - struct beacon_attr *beacon_info; + u8 *cur_byte; - msg = wilc_alloc_work(vif, handle_add_beacon, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_ADD_BEACON; + wid.type = WID_BIN; + wid.size = params->head_len + params->tail_len + 16; + wid.val = kzalloc(wid.size, GFP_KERNEL); + if (!wid.val) + return -ENOMEM; - beacon_info = &msg->body.beacon_info; - beacon_info->interval = interval; - beacon_info->dtim_period = dtim_period; - beacon_info->head_len = head_len; - beacon_info->head = kmemdup(head, head_len, GFP_KERNEL); - if (!beacon_info->head) { - result = -ENOMEM; - goto error; - } -
beacon_info->tail_len = tail_len; + cur_byte = wid.val; + put_unaligned_le32(interval, cur_byte); + cur_byte += 4; + put_unaligned_le32(dtim_period, cur_byte); + cur_byte += 4; + put_unaligned_le32(params->head_len, cur_byte); + cur_byte += 4; - if (tail_len > 0) { - beacon_info->tail = kmemdup(tail, tail_len, GFP_KERNEL); - if (!beacon_info->tail) { - result = -ENOMEM; - goto error; - } - } else { - beacon_info->tail = NULL; - } + if (params->head_len > 0) + memcpy(cur_byte, params->head, params->head_len); + cur_byte += params->head_len; - result = wilc_enqueue_work(msg); + put_unaligned_le32(params->tail_len, cur_byte); + cur_byte += 4; + + if (params->tail_len > 0) + memcpy(cur_byte, params->tail, params->tail_len); + + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); if (result) - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); + netdev_err(vif->ndev, "Failed to send add beacon\n"); -error: - if (result) { - kfree(beacon_info->head); - kfree(beacon_info->tail); - kfree(msg); - } + kfree(wid.val); return result; } @@ -3695,170 +2409,158 @@ error: int wilc_del_beacon(struct wilc_vif *vif) { int result; - struct host_if_msg *msg; + struct wid wid; + u8 del_beacon = 0; - msg = wilc_alloc_work(vif, handle_del_beacon, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_DEL_BEACON; + wid.type = WID_CHAR; + wid.size = sizeof(char); + wid.val = &del_beacon; - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to send delete beacon\n"); return result; } -int wilc_add_station(struct wilc_vif *vif, struct add_sta_param *sta_param) +int wilc_add_station(struct wilc_vif *vif, const u8 *mac, + struct station_parameters *params) { + struct wid wid; int result; - struct host_if_msg *msg; - struct add_sta_param *add_sta_info; + u8 *cur_byte; - msg = wilc_alloc_work(vif, handle_add_station, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_ADD_STA; + wid.type = WID_BIN; + wid.size = WILC_ADD_STA_LENGTH + params->supported_rates_len; + wid.val = kmalloc(wid.size, GFP_KERNEL); + if (!wid.val) + return -ENOMEM; - add_sta_info = &msg->body.add_sta_info; - memcpy(add_sta_info, sta_param, sizeof(struct add_sta_param)); - if (add_sta_info->rates_len > 0) { - add_sta_info->rates = kmemdup(sta_param->rates, - add_sta_info->rates_len, - GFP_KERNEL); - if (!add_sta_info->rates) { - kfree(msg); - return -ENOMEM; - } - } + cur_byte = wid.val; + wilc_hif_pack_sta_param(cur_byte, mac, params); + + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result != 0) + netdev_err(vif->ndev, "Failed to send add station\n"); + + kfree(wid.val); - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(add_sta_info->rates); - kfree(msg); - } return result; } int wilc_del_station(struct wilc_vif *vif, const u8 *mac_addr) { + struct wid wid; int result; - struct host_if_msg *msg; - struct del_sta *del_sta_info; - - msg = wilc_alloc_work(vif, handle_del_station, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); - del_sta_info = &msg->body.del_sta_info; + wid.id = WID_REMOVE_STA; + wid.type = WID_BIN; + wid.size = ETH_ALEN; + wid.val = kzalloc(wid.size, GFP_KERNEL); + if (!wid.val) + return -ENOMEM; if (!mac_addr) - 
eth_broadcast_addr(del_sta_info->mac_addr); + eth_broadcast_addr(wid.val); else - memcpy(del_sta_info->mac_addr, mac_addr, ETH_ALEN); + ether_addr_copy(wid.val, mac_addr); + + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to del station\n"); + + kfree(wid.val); - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } return result; } int wilc_del_allstation(struct wilc_vif *vif, u8 mac_addr[][ETH_ALEN]) { + struct wid wid; int result; - struct host_if_msg *msg; - struct del_all_sta *del_all_sta_info; - u8 zero_addr[ETH_ALEN] = {0}; int i; u8 assoc_sta = 0; + struct del_all_sta del_sta; - msg = wilc_alloc_work(vif, handle_del_all_sta, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); - - del_all_sta_info = &msg->body.del_all_sta_info; - - for (i = 0; i < MAX_NUM_STA; i++) { - if (memcmp(mac_addr[i], zero_addr, ETH_ALEN)) { - memcpy(del_all_sta_info->del_all_sta[i], mac_addr[i], - ETH_ALEN); + memset(&del_sta, 0x0, sizeof(del_sta)); + for (i = 0; i < WILC_MAX_NUM_STA; i++) { + if (!is_zero_ether_addr(mac_addr[i])) { assoc_sta++; + ether_addr_copy(del_sta.mac[i], mac_addr[i]); } } - if (!assoc_sta) { - kfree(msg); + + if (!assoc_sta) return 0; - } - del_all_sta_info->assoc_sta = assoc_sta; - result = wilc_enqueue_work(msg); + del_sta.assoc_sta = assoc_sta; - if (result) - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - else - wait_for_completion(&msg->work_comp); + wid.id = WID_DEL_ALL_STA; + wid.type = WID_STR; + wid.size = (assoc_sta * ETH_ALEN) + 1; + wid.val = (u8 *)&del_sta; - kfree(msg); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to send delete all station\n"); return result; } -int wilc_edit_station(struct wilc_vif *vif, - struct add_sta_param *sta_param) +int wilc_edit_station(struct wilc_vif *vif, const u8 *mac, + struct station_parameters *params) { + struct wid wid; int result; - struct host_if_msg *msg; - struct add_sta_param *add_sta_info; + u8 *cur_byte; - msg = wilc_alloc_work(vif, handle_edit_station, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_EDIT_STA; + wid.type = WID_BIN; + wid.size = WILC_ADD_STA_LENGTH + params->supported_rates_len; + wid.val = kmalloc(wid.size, GFP_KERNEL); + if (!wid.val) + return -ENOMEM; - add_sta_info = &msg->body.add_sta_info; - memcpy(add_sta_info, sta_param, sizeof(*add_sta_info)); - if (add_sta_info->rates_len > 0) { - add_sta_info->rates = kmemdup(sta_param->rates, - add_sta_info->rates_len, - GFP_KERNEL); - if (!add_sta_info->rates) { - kfree(msg); - return -ENOMEM; - } - } + cur_byte = wid.val; + wilc_hif_pack_sta_param(cur_byte, mac, params); - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(add_sta_info->rates); - kfree(msg); - } + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to send edit station\n"); + kfree(wid.val); return result; } int wilc_set_power_mgmt(struct wilc_vif *vif, bool enabled, u32 timeout) { + struct wid wid; int result; - struct host_if_msg *msg; + s8 power_mode; if (wilc_wlan_get_num_conn_ifcs(vif->wilc) == 2 && enabled) return 0; - msg = wilc_alloc_work(vif, handle_power_management, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + if (enabled) + power_mode = WILC_FW_MIN_FAST_PS; + 
else + power_mode = WILC_FW_NO_POWERSAVE; - msg->body.pwr_mgmt_info.enabled = enabled; - msg->body.pwr_mgmt_info.timeout = timeout; + wid.id = WID_POWER_MANAGEMENT; + wid.val = &power_mode; + wid.size = sizeof(char); + result = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); + if (result) + netdev_err(vif->ndev, "Failed to send power management\n"); - result = wilc_enqueue_work(msg); - if (result) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } return result; } @@ -3887,19 +2589,15 @@ int wilc_setup_multicast_filter(struct wilc_vif *vif, bool enabled, u32 count, int wilc_set_tx_power(struct wilc_vif *vif, u8 tx_power) { int ret; - struct host_if_msg *msg; - - msg = wilc_alloc_work(vif, handle_set_tx_pwr, false); - if (IS_ERR(msg)) - return PTR_ERR(msg); + struct wid wid; - msg->body.tx_power.tx_pwr = tx_power; + wid.id = WID_TX_POWER; + wid.type = WID_CHAR; + wid.val = &tx_power; + wid.size = sizeof(char); - ret = wilc_enqueue_work(msg); - if (ret) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - kfree(msg); - } + ret = wilc_send_config_pkt(vif, WILC_SET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); return ret; } @@ -3907,21 +2605,15 @@ int wilc_set_tx_power(struct wilc_vif *vif, u8 tx_power) int wilc_get_tx_power(struct wilc_vif *vif, u8 *tx_power) { int ret; - struct host_if_msg *msg; + struct wid wid; - msg = wilc_alloc_work(vif, handle_get_tx_pwr, true); - if (IS_ERR(msg)) - return PTR_ERR(msg); + wid.id = WID_TX_POWER; + wid.type = WID_CHAR; + wid.val = tx_power; + wid.size = sizeof(char); - ret = wilc_enqueue_work(msg); - if (ret) { - netdev_err(vif->ndev, "%s: enqueue work failed\n", __func__); - } else { - wait_for_completion(&msg->work_comp); - *tx_power = msg->body.tx_power.tx_pwr; - } + ret = wilc_send_config_pkt(vif, WILC_GET_CFG, &wid, 1, + wilc_get_vif_idx(vif)); - /* free 'msg' after copying data */ - kfree(msg); return ret; } diff --git a/drivers/staging/wilc1000/host_interface.h b/drivers/staging/wilc1000/host_interface.h index 33fb7318734b..9b396a79b144 100644 --- a/drivers/staging/wilc1000/host_interface.h +++ b/drivers/staging/wilc1000/host_interface.h @@ -7,54 +7,84 @@ #ifndef HOST_INT_H #define HOST_INT_H #include -#include "coreconfigurator.h" - -#define IDLE_MODE 0x00 -#define AP_MODE 0x01 -#define STATION_MODE 0x02 -#define GO_MODE 0x03 -#define CLIENT_MODE 0x04 -#define ACTION 0xD0 -#define PROBE_REQ 0x40 -#define PROBE_RESP 0x50 - -#define ACTION_FRM_IDX 0 -#define PROBE_REQ_IDX 1 -#define MAX_NUM_STA 9 -#define ACTIVE_SCAN_TIME 10 -#define PASSIVE_SCAN_TIME 1200 -#define MIN_SCAN_TIME 10 -#define MAX_SCAN_TIME 1200 -#define DEFAULT_SCAN 0 -#define USER_SCAN BIT(0) -#define OBSS_PERIODIC_SCAN BIT(1) -#define OBSS_ONETIME_SCAN BIT(2) -#define GTK_RX_KEY_BUFF_LEN 24 -#define ADDKEY 0x1 -#define REMOVEKEY 0x2 -#define DEFAULTKEY 0x4 -#define ADDKEY_AP 0x8 +#include "wilc_wlan_if.h" + +enum { + WILC_IDLE_MODE = 0x0, + WILC_AP_MODE = 0x1, + WILC_STATION_MODE = 0x2, + WILC_GO_MODE = 0x3, + WILC_CLIENT_MODE = 0x4 +}; + +#define WILC_MAX_NUM_STA 9 #define MAX_NUM_SCANNED_NETWORKS 100 #define MAX_NUM_SCANNED_NETWORKS_SHADOW 130 -#define MAX_NUM_PROBED_SSID 10 -#define CHANNEL_SCAN_TIME 250 +#define WILC_MAX_NUM_PROBED_SSID 10 #define TX_MIC_KEY_LEN 8 #define RX_MIC_KEY_LEN 8 -#define PTK_KEY_LEN 16 - -#define TX_MIC_KEY_MSG_LEN 26 -#define RX_MIC_KEY_MSG_LEN 48 -#define PTK_KEY_MSG_LEN 39 -#define PMKSA_KEY_LEN 22 -#define ETH_ALEN 6 -#define PMKID_LEN 16 #define WILC_MAX_NUM_PMKIDS 16 #define 
WILC_ADD_STA_LENGTH 40 -#define NUM_CONCURRENT_IFC 2 -#define DRV_HANDLER_SIZE 5 -#define DRV_HANDLER_MASK 0x000000FF +#define WILC_NUM_CONCURRENT_IFC 2 + +#define NUM_RSSI 5 + +enum { + WILC_SET_CFG = 0, + WILC_GET_CFG +}; + +#define WILC_MAX_ASSOC_RESP_FRAME_SIZE 256 + +struct rssi_history_buffer { + bool full; + u8 index; + s8 samples[NUM_RSSI]; +}; + +struct network_info { + s8 rssi; + u16 cap_info; + u8 ssid[MAX_SSID_LEN]; + u8 ssid_len; + u8 bssid[6]; + u16 beacon_period; + u8 dtim_period; + u8 ch; + unsigned long time_scan_cached; + unsigned long time_scan; + bool new_network; + u8 found; + u32 tsf_lo; + u8 *ies; + u16 ies_len; + void *join_params; + struct rssi_history_buffer rssi_history; + u64 tsf; +}; + +struct connect_info { + u8 bssid[6]; + u8 *req_ies; + size_t req_ies_len; + u8 *resp_ies; + u16 resp_ies_len; + u16 status; +}; + +struct disconnect_info { + u16 reason; + u8 *ie; + size_t ie_len; +}; + +struct assoc_resp { + __le16 capab_info; + __le16 status_code; + __le16 aid; +} __packed; struct rf_info { u8 link_speed; @@ -74,77 +104,29 @@ enum host_if_state { HOST_IF_FORCE_32BIT = 0xFFFFFFFF }; -struct host_if_pmkid { +struct wilc_pmkid { u8 bssid[ETH_ALEN]; - u8 pmkid[PMKID_LEN]; -}; + u8 pmkid[WLAN_PMKID_LEN]; +} __packed; -struct host_if_pmkid_attr { +struct wilc_pmkid_attr { u8 numpmkid; - struct host_if_pmkid pmkidlist[WILC_MAX_NUM_PMKIDS]; -}; - -enum current_tx_rate { - AUTORATE = 0, - MBPS_1 = 1, - MBPS_2 = 2, - MBPS_5_5 = 5, - MBPS_11 = 11, - MBPS_6 = 6, - MBPS_9 = 9, - MBPS_12 = 12, - MBPS_18 = 18, - MBPS_24 = 24, - MBPS_36 = 36, - MBPS_48 = 48, - MBPS_54 = 54 -}; + struct wilc_pmkid pmkidlist[WILC_MAX_NUM_PMKIDS]; +} __packed; struct cfg_param_attr { u32 flag; - u8 ht_enable; - u8 bss_type; - u8 auth_type; - u16 auth_timeout; - u8 power_mgmt_mode; u16 short_retry_limit; u16 long_retry_limit; u16 frag_threshold; u16 rts_threshold; - u16 preamble_type; - u8 short_slot_allowed; - u8 txop_prot_disabled; - u16 beacon_interval; - u16 dtim_period; - enum site_survey site_survey_enabled; - u16 site_survey_scan_time; - u8 scan_source; - u16 active_scan_time; - u16 passive_scan_time; - enum current_tx_rate curr_tx_rate; - }; enum cfg_param { - RETRY_SHORT = BIT(0), - RETRY_LONG = BIT(1), - FRAG_THRESHOLD = BIT(2), - RTS_THRESHOLD = BIT(3), - BSS_TYPE = BIT(4), - AUTH_TYPE = BIT(5), - AUTHEN_TIMEOUT = BIT(6), - POWER_MANAGEMENT = BIT(7), - PREAMBLE = BIT(8), - SHORT_SLOT_ALLOWED = BIT(9), - TXOP_PROT_DISABLE = BIT(10), - BEACON_INTERVAL = BIT(11), - DTIM_PERIOD = BIT(12), - SITE_SURVEY = BIT(13), - SITE_SURVEY_SCAN_TIME = BIT(14), - ACTIVE_SCANTIME = BIT(15), - PASSIVE_SCANTIME = BIT(16), - CURRENT_TX_RATE = BIT(17), - HT_ENABLE = BIT(18), + WILC_CFG_PARAM_RETRY_SHORT = BIT(0), + WILC_CFG_PARAM_RETRY_LONG = BIT(1), + WILC_CFG_PARAM_FRAG_THRESHOLD = BIT(2), + WILC_CFG_PARAM_RTS_THRESHOLD = BIT(3) }; struct found_net_info { @@ -165,13 +147,6 @@ enum conn_event { CONN_DISCONN_EVENT_FORCE_32BIT = 0xFFFFFFFF }; -enum KEY_TYPE { - WEP, - WPA_RX_GTK, - WPA_PTK, - PMKSA, -}; - typedef void (*wilc_scan_result)(enum scan_event, struct network_info *, void *, void *); @@ -216,28 +191,9 @@ struct user_conn_req { size_t ies_len; wilc_connect_result conn_result; bool ht_capable; + u8 ch; void *arg; -}; - -struct drv_handler { - u32 handler; - u8 mode; - u8 name; -}; - -struct op_mode { - u32 mode; -}; - -struct get_mac_addr { - u8 *mac_addr; -}; - -struct ba_session_info { - u8 bssid[ETH_ALEN]; - u8 tid; - u16 buf_size; - u16 time_out; + void *param; }; struct remain_ch { @@ -249,12 
+205,6 @@ struct remain_ch { u32 id; }; -struct reg_frame { - bool reg; - u16 frame_type; - u8 reg_id; -}; - struct wilc; struct host_if_drv { struct user_scan_req usr_scan_req; @@ -267,9 +217,6 @@ struct host_if_drv { enum host_if_state hif_state; u8 assoc_bssid[ETH_ALEN]; - struct cfg_param_attr cfg_values; - /*lock to protect concurrent setting of cfg params*/ - struct mutex cfg_values_lock; struct timer_list scan_timer; struct wilc_vif *scan_timer_vif; @@ -282,7 +229,7 @@ struct host_if_drv { bool ifc_up; int driver_handler_id; - u8 assoc_resp[MAX_ASSOC_RESP_FRAME_SIZE]; + u8 assoc_resp[WILC_MAX_ASSOC_RESP_FRAME_SIZE]; }; struct add_sta_param { @@ -312,15 +259,14 @@ int wilc_add_rx_gtk(struct wilc_vif *vif, const u8 *rx_gtk, u8 gtk_key_len, u8 index, u32 key_rsc_len, const u8 *key_rsc, const u8 *rx_mic, const u8 *tx_mic, u8 mode, u8 cipher_mode); -int wilc_set_pmkid_info(struct wilc_vif *vif, - struct host_if_pmkid_attr *pmkid); +int wilc_set_pmkid_info(struct wilc_vif *vif, struct wilc_pmkid_attr *pmkid); int wilc_get_mac_address(struct wilc_vif *vif, u8 *mac_addr); int wilc_set_join_req(struct wilc_vif *vif, u8 *bssid, const u8 *ssid, size_t ssid_len, const u8 *ies, size_t ies_len, wilc_connect_result connect_result, void *user_arg, u8 security, enum authtype auth_type, u8 channel, void *join_params); -int wilc_disconnect(struct wilc_vif *vif, u16 reason_code); +int wilc_disconnect(struct wilc_vif *vif); int wilc_set_mac_chnl_num(struct wilc_vif *vif, u8 channel); int wilc_get_rssi(struct wilc_vif *vif, s8 *rssi_level); int wilc_scan(struct wilc_vif *vif, u8 scan_source, u8 scan_type, @@ -332,13 +278,14 @@ int wilc_hif_set_cfg(struct wilc_vif *vif, int wilc_init(struct net_device *dev, struct host_if_drv **hif_drv_handler); int wilc_deinit(struct wilc_vif *vif); int wilc_add_beacon(struct wilc_vif *vif, u32 interval, u32 dtim_period, - u32 head_len, u8 *head, u32 tail_len, u8 *tail); + struct cfg80211_beacon_data *params); int wilc_del_beacon(struct wilc_vif *vif); -int wilc_add_station(struct wilc_vif *vif, struct add_sta_param *sta_param); +int wilc_add_station(struct wilc_vif *vif, const u8 *mac, + struct station_parameters *params); int wilc_del_allstation(struct wilc_vif *vif, u8 mac_addr[][ETH_ALEN]); int wilc_del_station(struct wilc_vif *vif, const u8 *mac_addr); -int wilc_edit_station(struct wilc_vif *vif, - struct add_sta_param *sta_param); +int wilc_edit_station(struct wilc_vif *vif, const u8 *mac, + struct station_parameters *params); int wilc_set_power_mgmt(struct wilc_vif *vif, bool enabled, u32 timeout); int wilc_setup_multicast_filter(struct wilc_vif *vif, bool enabled, u32 count, u8 *mc_list); @@ -350,13 +297,14 @@ int wilc_remain_on_channel(struct wilc_vif *vif, u32 session_id, int wilc_listen_state_expired(struct wilc_vif *vif, u32 session_id); void wilc_frame_register(struct wilc_vif *vif, u16 frame_type, bool reg); int wilc_set_wfi_drv_handler(struct wilc_vif *vif, int index, u8 mode, - u8 ifc_id, bool is_sync); + u8 ifc_id); int wilc_set_operation_mode(struct wilc_vif *vif, u32 mode); -int wilc_get_statistics(struct wilc_vif *vif, struct rf_info *stats, - bool is_sync); +int wilc_get_statistics(struct wilc_vif *vif, struct rf_info *stats); void wilc_resolve_disconnect_aberration(struct wilc_vif *vif); int wilc_get_vif_idx(struct wilc_vif *vif); int wilc_set_tx_power(struct wilc_vif *vif, u8 tx_power); int wilc_get_tx_power(struct wilc_vif *vif, u8 *tx_power); - +void wilc_scan_complete_received(struct wilc *wilc, u8 *buffer, u32 length); +void 
wilc_network_info_received(struct wilc *wilc, u8 *buffer, u32 length); +void wilc_gnrl_async_info_received(struct wilc *wilc, u8 *buffer, u32 length); #endif diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c index 76c901235e93..721689048648 100644 --- a/drivers/staging/wilc1000/linux_wlan.c +++ b/drivers/staging/wilc1000/linux_wlan.c @@ -46,7 +46,8 @@ static int dev_state_ev_handler(struct notifier_block *this, switch (event) { case NETDEV_UP: - if (vif->iftype == STATION_MODE || vif->iftype == CLIENT_MODE) { + if (vif->iftype == WILC_STATION_MODE || + vif->iftype == WILC_CLIENT_MODE) { hif_drv->ifc_up = 1; vif->obtaining_ip = false; del_timer(&vif->during_ip_timer); @@ -65,7 +66,8 @@ static int dev_state_ev_handler(struct notifier_block *this, break; case NETDEV_DOWN: - if (vif->iftype == STATION_MODE || vif->iftype == CLIENT_MODE) { + if (vif->iftype == WILC_STATION_MODE || + vif->iftype == WILC_CLIENT_MODE) { hif_drv->ifc_up = 0; vif->obtaining_ip = false; } @@ -162,7 +164,7 @@ void wilc_mac_indicate(struct wilc *wilc) s8 status; wilc_wlan_cfg_get_val(wilc, WID_STATUS, &status, 1); - if (wilc->mac_status == MAC_STATUS_INIT) { + if (wilc->mac_status == WILC_MAC_STATUS_INIT) { wilc->mac_status = status; complete(&wilc->sync_event); } else { @@ -179,11 +181,11 @@ static struct net_device *get_if_handler(struct wilc *wilc, u8 *mac_header) bssid1 = mac_header + 4; for (i = 0; i < wilc->vif_num; i++) { - if (wilc->vif[i]->mode == STATION_MODE) + if (wilc->vif[i]->mode == WILC_STATION_MODE) if (ether_addr_equal_unaligned(bssid, wilc->vif[i]->bssid)) return wilc->vif[i]->ndev; - if (wilc->vif[i]->mode == AP_MODE) + if (wilc->vif[i]->mode == WILC_AP_MODE) if (ether_addr_equal_unaligned(bssid1, wilc->vif[i]->bssid)) return wilc->vif[i]->ndev; @@ -203,11 +205,10 @@ void wilc_wlan_set_bssid(struct net_device *wilc_netdev, u8 *bssid, u8 mode) int wilc_wlan_get_num_conn_ifcs(struct wilc *wilc) { u8 i = 0; - u8 null_bssid[6] = {0}; u8 ret_val = 0; for (i = 0; i < wilc->vif_num; i++) - if (memcmp(wilc->vif[i]->bssid, null_bssid, 6)) + if (!is_zero_ether_addr(wilc->vif[i]->bssid)) ret_val++; return ret_val; @@ -240,7 +241,7 @@ static int linux_wlan_txq_task(void *vp) if (netif_queue_stopped(wl->vif[1]->ndev)) netif_wake_queue(wl->vif[1]->ndev); } - } while (ret == WILC_TX_ERR_NO_BUF && !wl->close); + } while (ret == -ENOBUFS && !wl->close); } return 0; } @@ -338,15 +339,15 @@ static int linux_wlan_init_test_config(struct net_device *dev, if (!wilc_wlan_cfg_set(vif, 0, WID_PC_TEST_MODE, c_val, 1, 0, 0)) goto fail; - c_val[0] = INFRASTRUCTURE; + c_val[0] = WILC_FW_BSS_TYPE_INFRA; if (!wilc_wlan_cfg_set(vif, 0, WID_BSS_TYPE, c_val, 1, 0, 0)) goto fail; - c_val[0] = AUTORATE; + c_val[0] = WILC_FW_TX_RATE_AUTO; if (!wilc_wlan_cfg_set(vif, 0, WID_CURRENT_TX_RATE, c_val, 1, 0, 0)) goto fail; - c_val[0] = G_MIXED_11B_2_MODE; + c_val[0] = WILC_FW_OPER_MODE_G_MIXED_11B_2; if (!wilc_wlan_cfg_set(vif, 0, WID_11G_OPERATING_MODE, c_val, 1, 0, 0)) goto fail; @@ -355,19 +356,19 @@ static int linux_wlan_init_test_config(struct net_device *dev, if (!wilc_wlan_cfg_set(vif, 0, WID_CURRENT_CHANNEL, c_val, 1, 0, 0)) goto fail; - c_val[0] = G_SHORT_PREAMBLE; + c_val[0] = WILC_FW_PREAMBLE_SHORT; if (!wilc_wlan_cfg_set(vif, 0, WID_PREAMBLE, c_val, 1, 0, 0)) goto fail; - c_val[0] = AUTO_PROT; + c_val[0] = WILC_FW_11N_PROT_AUTO; if (!wilc_wlan_cfg_set(vif, 0, WID_11N_PROT_MECH, c_val, 1, 0, 0)) goto fail; - c_val[0] = ACTIVE_SCAN; + c_val[0] = WILC_FW_ACTIVE_SCAN; if 
(!wilc_wlan_cfg_set(vif, 0, WID_SCAN_TYPE, c_val, 1, 0, 0)) goto fail; - c_val[0] = SITE_SURVEY_OFF; + c_val[0] = WILC_FW_SITE_SURVEY_OFF; if (!wilc_wlan_cfg_set(vif, 0, WID_SITE_SURVEY, c_val, 1, 0, 0)) goto fail; @@ -387,15 +388,15 @@ static int linux_wlan_init_test_config(struct net_device *dev, if (!wilc_wlan_cfg_set(vif, 0, WID_QOS_ENABLE, c_val, 1, 0, 0)) goto fail; - c_val[0] = NO_POWERSAVE; + c_val[0] = WILC_FW_NO_POWERSAVE; if (!wilc_wlan_cfg_set(vif, 0, WID_POWER_MANAGEMENT, c_val, 1, 0, 0)) goto fail; - c_val[0] = NO_SECURITY; /* NO_ENCRYPT, 0x79 */ + c_val[0] = WILC_FW_SEC_NO; if (!wilc_wlan_cfg_set(vif, 0, WID_11I_MODE, c_val, 1, 0, 0)) goto fail; - c_val[0] = OPEN_SYSTEM; + c_val[0] = WILC_FW_AUTH_OPEN_SYSTEM; if (!wilc_wlan_cfg_set(vif, 0, WID_AUTH_TYPE, c_val, 1, 0, 0)) goto fail; @@ -429,7 +430,7 @@ static int linux_wlan_init_test_config(struct net_device *dev, if (!wilc_wlan_cfg_set(vif, 0, WID_DTIM_PERIOD, c_val, 1, 0, 0)) goto fail; - c_val[0] = NORMAL_ACK; + c_val[0] = WILC_FW_ACK_POLICY_NORMAL; if (!wilc_wlan_cfg_set(vif, 0, WID_ACK_POLICY, c_val, 1, 0, 0)) goto fail; @@ -452,7 +453,7 @@ static int linux_wlan_init_test_config(struct net_device *dev, if (!wilc_wlan_cfg_set(vif, 0, WID_BEACON_INTERVAL, c_val, 2, 0, 0)) goto fail; - c_val[0] = REKEY_DISABLE; + c_val[0] = WILC_FW_REKEY_POLICY_DISABLE; if (!wilc_wlan_cfg_set(vif, 0, WID_REKEY_POLICY, c_val, 1, 0, 0)) goto fail; @@ -470,7 +471,7 @@ static int linux_wlan_init_test_config(struct net_device *dev, 0)) goto fail; - c_val[0] = G_SELF_CTS_PROT; + c_val[0] = WILC_FW_ERP_PROT_SELF_CTS; if (!wilc_wlan_cfg_set(vif, 0, WID_11N_ERP_PROT_TYPE, c_val, 1, 0, 0)) goto fail; @@ -478,7 +479,7 @@ static int linux_wlan_init_test_config(struct net_device *dev, if (!wilc_wlan_cfg_set(vif, 0, WID_11N_ENABLE, c_val, 1, 0, 0)) goto fail; - c_val[0] = HT_MIXED_MODE; + c_val[0] = WILC_FW_11N_OP_MODE_HT_MIXED; if (!wilc_wlan_cfg_set(vif, 0, WID_11N_OPERATING_MODE, c_val, 1, 0, 0)) goto fail; @@ -488,12 +489,12 @@ static int linux_wlan_init_test_config(struct net_device *dev, 0)) goto fail; - c_val[0] = DETECT_PROTECT_REPORT; + c_val[0] = WILC_FW_OBBS_NONHT_DETECT_PROTECT_REPORT; if (!wilc_wlan_cfg_set(vif, 0, WID_11N_OBSS_NONHT_DETECTION, c_val, 1, 0, 0)) goto fail; - c_val[0] = RTS_CTS_NONHT_PROT; + c_val[0] = WILC_FW_HT_PROT_RTS_CTS_NONHT; if (!wilc_wlan_cfg_set(vif, 0, WID_11N_HT_PROT_TYPE, c_val, 1, 0, 0)) goto fail; @@ -502,7 +503,7 @@ static int linux_wlan_init_test_config(struct net_device *dev, 0)) goto fail; - c_val[0] = MIMO_MODE; + c_val[0] = WILC_FW_SMPS_MODE_MIMO; if (!wilc_wlan_cfg_set(vif, 0, WID_11N_SMPS_MODE, c_val, 1, 0, 0)) goto fail; @@ -529,6 +530,7 @@ static void wlan_deinit_locks(struct net_device *dev) mutex_destroy(&wilc->hif_cs); mutex_destroy(&wilc->rxq_cs); + mutex_destroy(&wilc->cfg_cmd_lock); mutex_destroy(&wilc->txq_add_to_head_cs); } @@ -590,6 +592,7 @@ static void wlan_init_locks(struct net_device *dev) mutex_init(&wl->hif_cs); mutex_init(&wl->rxq_cs); + mutex_init(&wl->cfg_cmd_lock); spin_lock_init(&wl->txq_spinlock); mutex_init(&wl->txq_add_to_head_cs); @@ -624,7 +627,7 @@ static int wilc_wlan_initialize(struct net_device *dev, struct wilc_vif *vif) struct wilc *wl = vif->wilc; if (!wl->initialized) { - wl->mac_status = MAC_STATUS_INIT; + wl->mac_status = WILC_MAC_STATUS_INIT; wl->close = 0; wlan_init_locks(dev); @@ -751,8 +754,7 @@ static int wilc_mac_open(struct net_device *ndev) for (i = 0; i < wl->vif_num; i++) { if (ndev == wl->vif[i]->ndev) { wilc_set_wfi_drv_handler(vif, wilc_get_vif_idx(vif), 
- vif->iftype, vif->ifc_id, - false); + vif->iftype, vif->ifc_id); wilc_set_operation_mode(vif, vif->iftype); break; } @@ -894,31 +896,11 @@ netdev_tx_t wilc_mac_xmit(struct sk_buff *skb, struct net_device *ndev) static int wilc_mac_close(struct net_device *ndev) { - struct wilc_priv *priv; struct wilc_vif *vif = netdev_priv(ndev); - struct host_if_drv *hif_drv; - struct wilc *wl; - - if (!vif || !vif->ndev || !vif->ndev->ieee80211_ptr || - !vif->ndev->ieee80211_ptr->wiphy) - return 0; - - priv = wiphy_priv(vif->ndev->ieee80211_ptr->wiphy); - wl = vif->wilc; - - if (!priv) - return 0; - - hif_drv = (struct host_if_drv *)priv->hif_drv; + struct wilc *wl = vif->wilc; netdev_dbg(ndev, "Mac close\n"); - if (!wl) - return 0; - - if (!hif_drv) - return 0; - if (wl->open_ifcs > 0) wl->open_ifcs--; else @@ -1021,12 +1003,12 @@ void wilc_netdev_cleanup(struct wilc *wilc) } if (wilc->vif[0]->ndev || wilc->vif[1]->ndev) { - for (i = 0; i < NUM_CONCURRENT_IFC; i++) + for (i = 0; i < WILC_NUM_CONCURRENT_IFC; i++) if (wilc->vif[i]->ndev) if (wilc->vif[i]->mac_opened) wilc_mac_close(wilc->vif[i]->ndev); - for (i = 0; i < NUM_CONCURRENT_IFC; i++) { + for (i = 0; i < WILC_NUM_CONCURRENT_IFC; i++) { unregister_netdev(wilc->vif[i]->ndev); wilc_free_wiphy(wilc->vif[i]->ndev); free_netdev(wilc->vif[i]->ndev); @@ -1070,7 +1052,7 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type, wl->io_type = io_type; wl->hif_func = ops; wl->enable_ps = true; - wl->chip_ps_state = CHIP_WAKEDUP; + wl->chip_ps_state = WILC_CHIP_WAKEDUP; INIT_LIST_HEAD(&wl->txq_head.list); INIT_LIST_HEAD(&wl->rxq_head.list); @@ -1082,7 +1064,7 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type, register_inetaddr_notifier(&g_dev_notifier); - for (i = 0; i < NUM_CONCURRENT_IFC; i++) { + for (i = 0; i < WILC_NUM_CONCURRENT_IFC; i++) { struct wireless_dev *wdev; ndev = alloc_etherdev(sizeof(struct wilc_vif)); @@ -1130,7 +1112,7 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type, if (ret) goto free_ndev; - vif->iftype = STATION_MODE; + vif->iftype = WILC_STATION_MODE; vif->mac_opened = 0; } @@ -1139,7 +1121,7 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type, free_ndev: for (; i >= 0; i--) { if (wl->vif[i]) { - if (wl->vif[i]->iftype == STATION_MODE) + if (wl->vif[i]->iftype == WILC_STATION_MODE) unregister_netdev(wl->vif[i]->ndev); if (wl->vif[i]->ndev) { diff --git a/drivers/staging/wilc1000/wilc_sdio.c b/drivers/staging/wilc1000/wilc_sdio.c index ca351c950344..e2f739fef21c 100644 --- a/drivers/staging/wilc1000/wilc_sdio.c +++ b/drivers/staging/wilc1000/wilc_sdio.c @@ -30,6 +30,25 @@ struct wilc_sdio { int has_thrpt_enh3; }; +struct sdio_cmd52 { + u32 read_write: 1; + u32 function: 3; + u32 raw: 1; + u32 address: 17; + u32 data: 8; +}; + +struct sdio_cmd53 { + u32 read_write: 1; + u32 function: 3; + u32 block_mode: 1; + u32 increment: 1; + u32 address: 17; + u32 count: 9; + u8 *buffer; + u32 block_size; +}; + static const struct wilc_hif_func wilc_hif_sdio; static int sdio_write_reg(struct wilc *wilc, u32 addr, u32 data); @@ -125,7 +144,8 @@ static int linux_sdio_probe(struct sdio_func *func, } dev_dbg(&func->dev, "Initializing netdev\n"); - ret = wilc_netdev_init(&wilc, &func->dev, HIF_SDIO, &wilc_hif_sdio); + ret = wilc_netdev_init(&wilc, &func->dev, WILC_HIF_SDIO, + &wilc_hif_sdio); if (ret) { dev_err(&func->dev, "Couldn't initialize netdev\n"); kfree(sdio_priv); @@ -841,6 +861,7 @@ static int sdio_read_int(struct wilc *wilc, u32 *int_status) if 
(!sdio_priv->irq_gpio) { int i; + cmd.read_write = 0; cmd.function = 1; cmd.address = 0x04; cmd.data = 0; diff --git a/drivers/staging/wilc1000/wilc_spi.c b/drivers/staging/wilc1000/wilc_spi.c index cef127b249fb..153e120eff00 100644 --- a/drivers/staging/wilc1000/wilc_spi.c +++ b/drivers/staging/wilc1000/wilc_spi.c @@ -120,7 +120,7 @@ static int wilc_bus_probe(struct spi_device *spi) dev_err(&spi->dev, "failed to get the irq gpio\n"); } - ret = wilc_netdev_init(&wilc, NULL, HIF_SPI, &wilc_hif_spi); + ret = wilc_netdev_init(&wilc, NULL, WILC_HIF_SPI, &wilc_hif_spi); if (ret) { kfree(spi_priv); return ret; @@ -927,7 +927,8 @@ static int wilc_spi_read_int(struct wilc *wilc, u32 *int_status) int ret; u32 tmp; u32 byte_cnt; - int happened, j; + bool unexpected_irq; + int j; u32 unknown_mask; u32 irq_flags; int k = IRG_FLAGS_OFFSET + 5; @@ -947,8 +948,6 @@ static int wilc_spi_read_int(struct wilc *wilc, u32 *int_status) j = 0; do { - happened = 0; - wilc_spi_read_reg(wilc, 0x1a90, &irq_flags); tmp |= ((irq_flags >> 27) << IRG_FLAGS_OFFSET); @@ -959,15 +958,15 @@ static int wilc_spi_read_int(struct wilc *wilc, u32 *int_status) unknown_mask = ~((1ul << spi_priv->nint) - 1); - if ((tmp >> IRG_FLAGS_OFFSET) & unknown_mask) { + unexpected_irq = (tmp >> IRG_FLAGS_OFFSET) & unknown_mask; + if (unexpected_irq) { dev_err(&spi->dev, "Unexpected interrupt(2):j=%d,tmp=%x,mask=%x\n", j, tmp, unknown_mask); - happened = 1; } j++; - } while (happened); + } while (unexpected_irq); *int_status = tmp; diff --git a/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c b/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c index 4fbbbbd5a64b..ac47dda510e0 100644 --- a/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c +++ b/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c @@ -6,15 +6,6 @@ #include "wilc_wfi_cfgoperations.h" -#define NO_ENCRYPT 0 -#define ENCRYPT_ENABLED BIT(0) -#define WEP BIT(1) -#define WEP_EXTENDED BIT(2) -#define WPA BIT(3) -#define WPA2 BIT(4) -#define AES BIT(5) -#define TKIP BIT(6) - #define FRAME_TYPE_ID 0 #define ACTION_CAT_ID 24 #define ACTION_SUBTYPE_ID 25 @@ -41,14 +32,6 @@ #define nl80211_SCAN_RESULT_EXPIRE (3 * HZ) #define SCAN_RESULT_EXPIRE (40 * HZ) -static const u32 cipher_suites[] = { - WLAN_CIPHER_SUITE_WEP40, - WLAN_CIPHER_SUITE_WEP104, - WLAN_CIPHER_SUITE_TKIP, - WLAN_CIPHER_SUITE_CCMP, - WLAN_CIPHER_SUITE_AES_CMAC, -}; - static const struct ieee80211_txrx_stypes wilc_wfi_cfg80211_mgmt_types[NUM_NL80211_IFTYPES] = { [NL80211_IFTYPE_STATION] = { @@ -82,53 +65,6 @@ static const struct wiphy_wowlan_support wowlan_support = { .flags = WIPHY_WOWLAN_ANY }; -#define CHAN2G(_channel, _freq, _flags) { \ - .band = NL80211_BAND_2GHZ, \ - .center_freq = (_freq), \ - .hw_value = (_channel), \ - .flags = (_flags), \ - .max_antenna_gain = 0, \ - .max_power = 30, \ -} - -static struct ieee80211_channel ieee80211_2ghz_channels[] = { - CHAN2G(1, 2412, 0), - CHAN2G(2, 2417, 0), - CHAN2G(3, 2422, 0), - CHAN2G(4, 2427, 0), - CHAN2G(5, 2432, 0), - CHAN2G(6, 2437, 0), - CHAN2G(7, 2442, 0), - CHAN2G(8, 2447, 0), - CHAN2G(9, 2452, 0), - CHAN2G(10, 2457, 0), - CHAN2G(11, 2462, 0), - CHAN2G(12, 2467, 0), - CHAN2G(13, 2472, 0), - CHAN2G(14, 2484, 0), -}; - -#define RATETAB_ENT(_rate, _hw_value, _flags) { \ - .bitrate = (_rate), \ - .hw_value = (_hw_value), \ - .flags = (_flags), \ -} - -static struct ieee80211_rate ieee80211_bitrates[] = { - RATETAB_ENT(10, 0, 0), - RATETAB_ENT(20, 1, 0), - RATETAB_ENT(55, 2, 0), - RATETAB_ENT(110, 3, 0), - RATETAB_ENT(60, 9, 0), - RATETAB_ENT(90, 6, 0), - RATETAB_ENT(120, 7, 
0), - RATETAB_ENT(180, 8, 0), - RATETAB_ENT(240, 9, 0), - RATETAB_ENT(360, 10, 0), - RATETAB_ENT(480, 11, 0), - RATETAB_ENT(540, 12, 0), -}; - struct p2p_mgmt_data { int size; u8 *buff; @@ -139,13 +75,6 @@ static u8 curr_channel; static u8 p2p_oui[] = {0x50, 0x6f, 0x9A, 0x09}; static u8 p2p_vendor_spec[] = {0xdd, 0x05, 0x00, 0x08, 0x40, 0x03}; -static struct ieee80211_supported_band wilc_band_2ghz = { - .channels = ieee80211_2ghz_channels, - .n_channels = ARRAY_SIZE(ieee80211_2ghz_channels), - .bitrates = ieee80211_bitrates, - .n_bitrates = ARRAY_SIZE(ieee80211_bitrates), -}; - #define AGING_TIME (9 * 1000) #define DURING_IP_TIME_OUT 15000 @@ -202,7 +131,7 @@ static void refresh_scan(struct wilc_priv *priv, bool direct_scan) channel, CFG80211_BSS_FTYPE_UNKNOWN, network_info->bssid, - network_info->tsf_hi, + network_info->tsf, network_info->cap_info, network_info->beacon_period, (const u8 *)network_info->ies, @@ -317,7 +246,7 @@ static void add_network_to_shadow(struct network_info *nw_info, shadow_nw_info->beacon_period = nw_info->beacon_period; shadow_nw_info->dtim_period = nw_info->dtim_period; shadow_nw_info->ch = nw_info->ch; - shadow_nw_info->tsf_hi = nw_info->tsf_hi; + shadow_nw_info->tsf = nw_info->tsf; if (ap_found != -1) kfree(shadow_nw_info->ies); shadow_nw_info->ies = kmemdup(nw_info->ies, nw_info->ies_len, @@ -381,7 +310,7 @@ static void cfg_scan_result(enum scan_event scan_event, channel, CFG80211_BSS_FTYPE_UNKNOWN, network_info->bssid, - network_info->tsf_hi, + network_info->tsf, network_info->cap_info, network_info->beacon_period, (const u8 *)network_info->ies, @@ -470,11 +399,11 @@ static void cfg_connect_result(enum conn_event conn_disconn_evt, connect_status = conn_info->status; - if (mac_status == MAC_STATUS_DISCONNECTED && + if (mac_status == WILC_MAC_STATUS_DISCONNECTED && conn_info->status == WLAN_STATUS_SUCCESS) { connect_status = WLAN_STATUS_UNSPECIFIED_FAILURE; wilc_wlan_set_bssid(priv->dev, null_bssid, - STATION_MODE); + WILC_STATION_MODE); if (!wfi_drv->p2p_connect) wlan_channel = INVALID_CHANNEL; @@ -516,7 +445,7 @@ static void cfg_connect_result(enum conn_event conn_disconn_evt, priv->p2p.recv_random = 0x00; priv->p2p.is_wilc_ie = false; eth_zero_addr(priv->associated_bss); - wilc_wlan_set_bssid(priv->dev, null_bssid, STATION_MODE); + wilc_wlan_set_bssid(priv->dev, null_bssid, WILC_STATION_MODE); if (!wfi_drv->p2p_connect) wlan_channel = INVALID_CHANNEL; @@ -618,18 +547,20 @@ static int scan(struct wiphy *wiphy, struct cfg80211_scan_request *request) if (request->n_ssids >= 1) { if (wilc_wfi_cfg_alloc_fill_ssid(request, - &hidden_ntwk)) - return -ENOMEM; + &hidden_ntwk)) { + ret = -ENOMEM; + goto out; + } - ret = wilc_scan(vif, USER_SCAN, ACTIVE_SCAN, - scan_ch_list, + ret = wilc_scan(vif, WILC_FW_USER_SCAN, + WILC_FW_ACTIVE_SCAN, scan_ch_list, request->n_channels, (const u8 *)request->ie, request->ie_len, cfg_scan_result, (void *)priv, &hidden_ntwk); } else { - ret = wilc_scan(vif, USER_SCAN, ACTIVE_SCAN, - scan_ch_list, + ret = wilc_scan(vif, WILC_FW_USER_SCAN, + WILC_FW_ACTIVE_SCAN, scan_ch_list, request->n_channels, (const u8 *)request->ie, request->ie_len, cfg_scan_result, @@ -639,8 +570,11 @@ static int scan(struct wiphy *wiphy, struct cfg80211_scan_request *request) netdev_err(priv->dev, "Requested scanned channels over\n"); } - if (ret != 0) - ret = -EBUSY; +out: + if (ret) { + priv->scan_req = NULL; + priv->cfg_scanning = false; + } return ret; } @@ -655,8 +589,8 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, int ret; u32 i; u32 
sel_bssi_idx = UINT_MAX; - u8 security = NO_ENCRYPT; - enum authtype auth_type = ANY; + u8 security = WILC_FW_SEC_NO; + enum authtype auth_type = WILC_FW_AUTH_ANY; u32 cipher_group; vif->connecting = true; @@ -703,9 +637,9 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, memset(priv->wep_key_len, 0, sizeof(priv->wep_key_len)); cipher_group = sme->crypto.cipher_group; - if (cipher_group != NO_ENCRYPT) { + if (cipher_group != 0) { if (cipher_group == WLAN_CIPHER_SUITE_WEP40) { - security = ENCRYPT_ENABLED | WEP; + security = WILC_FW_SEC_WEP; priv->wep_key_len[sme->key_idx] = sme->key_len; memcpy(priv->wep_key[sme->key_idx], sme->key, @@ -715,7 +649,7 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, wilc_add_wep_key_bss_sta(vif, sme->key, sme->key_len, sme->key_idx); } else if (cipher_group == WLAN_CIPHER_SUITE_WEP104) { - security = ENCRYPT_ENABLED | WEP | WEP_EXTENDED; + security = WILC_FW_SEC_WEP_EXTENDED; priv->wep_key_len[sme->key_idx] = sme->key_len; memcpy(priv->wep_key[sme->key_idx], sme->key, @@ -726,14 +660,14 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, sme->key_idx); } else if (sme->crypto.wpa_versions & NL80211_WPA_VERSION_2) { if (cipher_group == WLAN_CIPHER_SUITE_TKIP) - security = ENCRYPT_ENABLED | WPA2 | TKIP; + security = WILC_FW_SEC_WPA2_TKIP; else - security = ENCRYPT_ENABLED | WPA2 | AES; + security = WILC_FW_SEC_WPA2_AES; } else if (sme->crypto.wpa_versions & NL80211_WPA_VERSION_1) { if (cipher_group == WLAN_CIPHER_SUITE_TKIP) - security = ENCRYPT_ENABLED | WPA | TKIP; + security = WILC_FW_SEC_WPA_TKIP; else - security = ENCRYPT_ENABLED | WPA | AES; + security = WILC_FW_SEC_WPA_AES; } else { ret = -ENOTSUPP; netdev_err(dev, "%s: Unsupported cipher\n", @@ -748,19 +682,19 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, u32 ciphers_pairwise = sme->crypto.ciphers_pairwise[i]; if (ciphers_pairwise == WLAN_CIPHER_SUITE_TKIP) - security = security | TKIP; + security |= WILC_FW_TKIP; else - security = security | AES; + security |= WILC_FW_AES; } } switch (sme->auth_type) { case NL80211_AUTHTYPE_OPEN_SYSTEM: - auth_type = OPEN_SYSTEM; + auth_type = WILC_FW_AUTH_OPEN_SYSTEM; break; case NL80211_AUTHTYPE_SHARED_KEY: - auth_type = SHARED_KEY; + auth_type = WILC_FW_AUTH_SHARED_KEY; break; default: @@ -769,7 +703,7 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, if (sme->crypto.n_akm_suites) { if (sme->crypto.akm_suites[0] == WLAN_AKM_SUITE_8021X) - auth_type = IEEE8021; + auth_type = WILC_FW_AUTH_IEEE8021; } curr_channel = nw_info->ch; @@ -777,7 +711,7 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, if (!wfi_drv->p2p_connect) wlan_channel = nw_info->ch; - wilc_wlan_set_bssid(dev, nw_info->bssid, STATION_MODE); + wilc_wlan_set_bssid(dev, nw_info->bssid, WILC_STATION_MODE); ret = wilc_set_join_req(vif, nw_info->bssid, sme->ssid, sme->ssid_len, sme->ie, sme->ie_len, @@ -790,7 +724,9 @@ static int connect(struct wiphy *wiphy, struct net_device *dev, netdev_err(dev, "wilc_set_join_req(): Error\n"); ret = -ENOENT; - wilc_wlan_set_bssid(dev, null_bssid, STATION_MODE); + if (!wfi_drv->p2p_connect) + wlan_channel = INVALID_CHANNEL; + wilc_wlan_set_bssid(dev, null_bssid, WILC_STATION_MODE); goto out_error; } return 0; @@ -824,14 +760,14 @@ static int disconnect(struct wiphy *wiphy, struct net_device *dev, wfi_drv = (struct host_if_drv *)priv->hif_drv; if (!wfi_drv->p2p_connect) wlan_channel = INVALID_CHANNEL; - wilc_wlan_set_bssid(priv->dev, null_bssid, STATION_MODE); + 
wilc_wlan_set_bssid(priv->dev, null_bssid, WILC_STATION_MODE); priv->p2p.local_random = 0x01; priv->p2p.recv_random = 0x00; priv->p2p.is_wilc_ie = false; wfi_drv->p2p_timeout = 0; - ret = wilc_disconnect(vif, reason_code); + ret = wilc_disconnect(vif); if (ret != 0) { netdev_err(priv->dev, "Error in disconnecting\n"); ret = -EINVAL; @@ -900,7 +836,7 @@ static int add_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index, struct wilc_priv *priv = wiphy_priv(wiphy); const u8 *rx_mic = NULL; const u8 *tx_mic = NULL; - u8 mode = NO_ENCRYPT; + u8 mode = WILC_FW_SEC_NO; u8 op_mode; struct wilc_vif *vif = netdev_priv(netdev); @@ -911,14 +847,14 @@ static int add_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index, wilc_wfi_cfg_copy_wep_info(priv, key_index, params); if (params->cipher == WLAN_CIPHER_SUITE_WEP40) - mode = ENCRYPT_ENABLED | WEP; + mode = WILC_FW_SEC_WEP; else - mode = ENCRYPT_ENABLED | WEP | WEP_EXTENDED; + mode = WILC_FW_SEC_WEP_EXTENDED; ret = wilc_add_wep_key_bss_ap(vif, params->key, params->key_len, key_index, mode, - OPEN_SYSTEM); + WILC_FW_AUTH_OPEN_SYSTEM); break; } if (memcmp(params->key, priv->wep_key[key_index], @@ -951,18 +887,18 @@ static int add_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index, if (!pairwise) { if (params->cipher == WLAN_CIPHER_SUITE_TKIP) - mode = ENCRYPT_ENABLED | WPA | TKIP; + mode = WILC_FW_SEC_WPA_TKIP; else - mode = ENCRYPT_ENABLED | WPA2 | AES; + mode = WILC_FW_SEC_WPA2_AES; priv->wilc_groupkey = mode; key = priv->wilc_gtk[key_index]; } else { if (params->cipher == WLAN_CIPHER_SUITE_TKIP) - mode = ENCRYPT_ENABLED | WPA | TKIP; + mode = WILC_FW_SEC_WPA_TKIP; else - mode = priv->wilc_groupkey | AES; + mode = priv->wilc_groupkey | WILC_FW_AES; key = priv->wilc_ptk[key_index]; } @@ -970,7 +906,7 @@ static int add_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index, if (ret) return -ENOMEM; - op_mode = AP_MODE; + op_mode = WILC_AP_MODE; } else { if (params->key_len > 16 && params->cipher == WLAN_CIPHER_SUITE_TKIP) { @@ -979,7 +915,7 @@ static int add_key(struct wiphy *wiphy, struct net_device *netdev, u8 key_index, keylen = params->key_len - 16; } - op_mode = STATION_MODE; + op_mode = WILC_STATION_MODE; } if (!pairwise) @@ -1088,7 +1024,7 @@ static int get_station(struct wiphy *wiphy, struct net_device *dev, u32 associatedsta = ~0; u32 inactive_time = 0; - if (vif->iftype == AP_MODE || vif->iftype == GO_MODE) { + if (vif->iftype == WILC_AP_MODE || vif->iftype == WILC_GO_MODE) { for (i = 0; i < NUM_STA_ASSOCIATED; i++) { if (!(memcmp(mac, priv->assoc_stainfo.sta_associated_bss[i], @@ -1107,10 +1043,10 @@ static int get_station(struct wiphy *wiphy, struct net_device *dev, wilc_get_inactive_time(vif, mac, &inactive_time); sinfo->inactive_time = 1000 * inactive_time; - } else if (vif->iftype == STATION_MODE) { + } else if (vif->iftype == WILC_STATION_MODE) { struct rf_info stats; - wilc_get_statistics(vif, &stats, true); + wilc_get_statistics(vif, &stats); sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL) | BIT_ULL(NL80211_STA_INFO_RX_PACKETS) | @@ -1149,21 +1085,45 @@ static int set_wiphy_params(struct wiphy *wiphy, u32 changed) cfg_param_val.flag = 0; if (changed & WIPHY_PARAM_RETRY_SHORT) { - cfg_param_val.flag |= RETRY_SHORT; + netdev_dbg(vif->ndev, + "Setting WIPHY_PARAM_RETRY_SHORT %d\n", + wiphy->retry_short); + cfg_param_val.flag |= WILC_CFG_PARAM_RETRY_SHORT; cfg_param_val.short_retry_limit = wiphy->retry_short; } if (changed & WIPHY_PARAM_RETRY_LONG) { - cfg_param_val.flag |= RETRY_LONG; + 
netdev_dbg(vif->ndev, + "Setting WIPHY_PARAM_RETRY_LONG %d\n", + wiphy->retry_long); + cfg_param_val.flag |= WILC_CFG_PARAM_RETRY_LONG; cfg_param_val.long_retry_limit = wiphy->retry_long; } if (changed & WIPHY_PARAM_FRAG_THRESHOLD) { - cfg_param_val.flag |= FRAG_THRESHOLD; - cfg_param_val.frag_threshold = wiphy->frag_threshold; + if (wiphy->frag_threshold > 255 && + wiphy->frag_threshold < 7937) { + netdev_dbg(vif->ndev, + "Setting WIPHY_PARAM_FRAG_THRESHOLD %d\n", + wiphy->frag_threshold); + cfg_param_val.flag |= WILC_CFG_PARAM_FRAG_THRESHOLD; + cfg_param_val.frag_threshold = wiphy->frag_threshold; + } else { + netdev_err(vif->ndev, + "Fragmentation threshold out of range\n"); + return -EINVAL; + } } if (changed & WIPHY_PARAM_RTS_THRESHOLD) { - cfg_param_val.flag |= RTS_THRESHOLD; - cfg_param_val.rts_threshold = wiphy->rts_threshold; + if (wiphy->rts_threshold > 255) { + netdev_dbg(vif->ndev, + "Setting WIPHY_PARAM_RTS_THRESHOLD %d\n", + wiphy->rts_threshold); + cfg_param_val.flag |= WILC_CFG_PARAM_RTS_THRESHOLD; + cfg_param_val.rts_threshold = wiphy->rts_threshold; + } else { + netdev_err(vif->ndev, "RTS threshold out of range\n"); + return -EINVAL; + } } ret = wilc_hif_set_cfg(vif, &cfg_param_val); @@ -1193,7 +1153,7 @@ static int set_pmksa(struct wiphy *wiphy, struct net_device *netdev, memcpy(priv->pmkid_list.pmkidlist[i].bssid, pmksa->bssid, ETH_ALEN); memcpy(priv->pmkid_list.pmkidlist[i].pmkid, pmksa->pmkid, - PMKID_LEN); + WLAN_PMKID_LEN); if (!(flag == PMKID_FOUND)) priv->pmkid_list.numpmkid++; } else { @@ -1218,7 +1178,7 @@ static int del_pmksa(struct wiphy *wiphy, struct net_device *netdev, if (!memcmp(pmksa->bssid, priv->pmkid_list.pmkidlist[i].bssid, ETH_ALEN)) { memset(&priv->pmkid_list.pmkidlist[i], 0, - sizeof(struct host_if_pmkid)); + sizeof(struct wilc_pmkid)); break; } } @@ -1230,7 +1190,7 @@ static int del_pmksa(struct wiphy *wiphy, struct net_device *netdev, ETH_ALEN); memcpy(priv->pmkid_list.pmkidlist[i].pmkid, priv->pmkid_list.pmkidlist[i + 1].pmkid, - PMKID_LEN); + WLAN_PMKID_LEN); } priv->pmkid_list.numpmkid--; } else { @@ -1244,7 +1204,7 @@ static int flush_pmksa(struct wiphy *wiphy, struct net_device *netdev) { struct wilc_priv *priv = wiphy_priv(wiphy); - memset(&priv->pmkid_list, 0, sizeof(struct host_if_pmkid_attr)); + memset(&priv->pmkid_list, 0, sizeof(struct wilc_pmkid_attr)); return 0; } @@ -1676,12 +1636,12 @@ void wilc_mgmt_frame_register(struct wiphy *wiphy, struct wireless_dev *wdev, return; switch (frame_type) { - case PROBE_REQ: + case IEEE80211_STYPE_PROBE_REQ: vif->frame_reg[0].type = frame_type; vif->frame_reg[0].reg = reg; break; - case ACTION: + case IEEE80211_STYPE_ACTION: vif->frame_reg[1].type = frame_type; vif->frame_reg[1].reg = reg; break; @@ -1706,13 +1666,16 @@ static int dump_station(struct wiphy *wiphy, struct net_device *dev, { struct wilc_priv *priv = wiphy_priv(wiphy); struct wilc_vif *vif = netdev_priv(priv->dev); + int ret; if (idx != 0) return -ENOENT; sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL); - wilc_get_rssi(vif, &sinfo->signal); + ret = wilc_get_rssi(vif, &sinfo->signal); + if (ret) + return ret; memcpy(mac, priv->associated_bss, ETH_ALEN); return 0; @@ -1753,11 +1716,11 @@ static int change_virtual_intf(struct wiphy *wiphy, struct net_device *dev, dev->ieee80211_ptr->iftype = type; priv->wdev->iftype = type; vif->monitor_flag = 0; - vif->iftype = STATION_MODE; - wilc_set_operation_mode(vif, STATION_MODE); + vif->iftype = WILC_STATION_MODE; + wilc_set_operation_mode(vif, WILC_STATION_MODE); 
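The set_wiphy_params() hunk above now validates before it forwards: a fragmentation threshold is accepted only in the 256..7936 range and an RTS threshold only above 255, otherwise -EINVAL is returned without touching firmware. A condensed sketch of that gating, using only identifiers from this diff (the helper name is hypothetical):

/*
 * Condensed sketch of the range checks added to set_wiphy_params():
 * each changed parameter is validated before its WILC_CFG_PARAM_* flag
 * is set in the cfg_param_attr handed to wilc_hif_set_cfg().
 */
static int wilc_check_wiphy_params_sketch(struct wiphy *wiphy,
					  struct cfg_param_attr *cfg,
					  u32 changed)
{
	if (changed & WIPHY_PARAM_FRAG_THRESHOLD) {
		if (wiphy->frag_threshold <= 255 ||
		    wiphy->frag_threshold >= 7937)
			return -EINVAL;	/* outside the valid 256..7936 range */
		cfg->flag |= WILC_CFG_PARAM_FRAG_THRESHOLD;
		cfg->frag_threshold = wiphy->frag_threshold;
	}

	if (changed & WIPHY_PARAM_RTS_THRESHOLD) {
		if (wiphy->rts_threshold <= 255)
			return -EINVAL;	/* firmware expects values above 255 */
		cfg->flag |= WILC_CFG_PARAM_RTS_THRESHOLD;
		cfg->rts_threshold = wiphy->rts_threshold;
	}

	return 0;
}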
memset(priv->assoc_stainfo.sta_associated_bss, 0, - MAX_NUM_STA * ETH_ALEN); + WILC_MAX_NUM_STA * ETH_ALEN); wl->enable_ps = true; wilc_set_power_mgmt(vif, 1, 0); @@ -1768,8 +1731,8 @@ static int change_virtual_intf(struct wiphy *wiphy, struct net_device *dev, dev->ieee80211_ptr->iftype = type; priv->wdev->iftype = type; vif->monitor_flag = 0; - vif->iftype = CLIENT_MODE; - wilc_set_operation_mode(vif, STATION_MODE); + vif->iftype = WILC_CLIENT_MODE; + wilc_set_operation_mode(vif, WILC_STATION_MODE); wl->enable_ps = false; wilc_set_power_mgmt(vif, 0, 0); @@ -1779,12 +1742,12 @@ static int change_virtual_intf(struct wiphy *wiphy, struct net_device *dev, wl->enable_ps = false; dev->ieee80211_ptr->iftype = type; priv->wdev->iftype = type; - vif->iftype = AP_MODE; + vif->iftype = WILC_AP_MODE; if (wl->initialized) { wilc_set_wfi_drv_handler(vif, wilc_get_vif_idx(vif), - 0, vif->ifc_id, false); - wilc_set_operation_mode(vif, AP_MODE); + 0, vif->ifc_id); + wilc_set_operation_mode(vif, WILC_AP_MODE); wilc_set_power_mgmt(vif, 0, 0); } break; @@ -1793,10 +1756,10 @@ static int change_virtual_intf(struct wiphy *wiphy, struct net_device *dev, vif->obtaining_ip = true; mod_timer(&vif->during_ip_timer, jiffies + msecs_to_jiffies(DURING_IP_TIME_OUT)); - wilc_set_operation_mode(vif, AP_MODE); + wilc_set_operation_mode(vif, WILC_AP_MODE); dev->ieee80211_ptr->iftype = type; priv->wdev->iftype = type; - vif->iftype = GO_MODE; + vif->iftype = WILC_GO_MODE; wl->enable_ps = false; wilc_set_power_mgmt(vif, 0, 0); @@ -1815,21 +1778,17 @@ static int start_ap(struct wiphy *wiphy, struct net_device *dev, { struct wilc_vif *vif = netdev_priv(dev); struct wilc *wl = vif->wilc; - struct cfg80211_beacon_data *beacon = &settings->beacon; int ret; ret = set_channel(wiphy, &settings->chandef); - if (ret != 0) netdev_err(dev, "Error in setting channel\n"); - wilc_wlan_set_bssid(dev, wl->vif[vif->idx]->src_addr, AP_MODE); + wilc_wlan_set_bssid(dev, wl->vif[vif->idx]->src_addr, WILC_AP_MODE); wilc_set_power_mgmt(vif, 0, 0); return wilc_add_beacon(vif, settings->beacon_interval, - settings->dtim_period, beacon->head_len, - (u8 *)beacon->head, beacon->tail_len, - (u8 *)beacon->tail); + settings->dtim_period, &settings->beacon); } static int change_beacon(struct wiphy *wiphy, struct net_device *dev, @@ -1838,9 +1797,7 @@ static int change_beacon(struct wiphy *wiphy, struct net_device *dev, struct wilc_priv *priv = wiphy_priv(wiphy); struct wilc_vif *vif = netdev_priv(priv->dev); - return wilc_add_beacon(vif, 0, 0, beacon->head_len, - (u8 *)beacon->head, beacon->tail_len, - (u8 *)beacon->tail); + return wilc_add_beacon(vif, 0, 0, beacon); } static int stop_ap(struct wiphy *wiphy, struct net_device *dev) @@ -1850,7 +1807,7 @@ static int stop_ap(struct wiphy *wiphy, struct net_device *dev) struct wilc_vif *vif = netdev_priv(priv->dev); u8 null_bssid[ETH_ALEN] = {0}; - wilc_wlan_set_bssid(dev, null_bssid, AP_MODE); + wilc_wlan_set_bssid(dev, null_bssid, WILC_AP_MODE); ret = wilc_del_beacon(vif); @@ -1865,28 +1822,13 @@ static int add_station(struct wiphy *wiphy, struct net_device *dev, { int ret = 0; struct wilc_priv *priv = wiphy_priv(wiphy); - struct add_sta_param sta_params = { {0} }; struct wilc_vif *vif = netdev_priv(dev); - if (vif->iftype == AP_MODE || vif->iftype == GO_MODE) { - memcpy(sta_params.bssid, mac, ETH_ALEN); + if (vif->iftype == WILC_AP_MODE || vif->iftype == WILC_GO_MODE) { memcpy(priv->assoc_stainfo.sta_associated_bss[params->aid], mac, ETH_ALEN); - sta_params.aid = params->aid; - sta_params.rates_len = 
params->supported_rates_len; - sta_params.rates = params->supported_rates; - - if (!params->ht_capa) { - sta_params.ht_supported = false; - } else { - sta_params.ht_supported = true; - sta_params.ht_capa = *params->ht_capa; - } - - sta_params.flags_mask = params->sta_flags_mask; - sta_params.flags_set = params->sta_flags_set; - ret = wilc_add_station(vif, &sta_params); + ret = wilc_add_station(vif, mac, params); if (ret) netdev_err(dev, "Host add station fail\n"); } @@ -1903,7 +1845,7 @@ static int del_station(struct wiphy *wiphy, struct net_device *dev, struct wilc_vif *vif = netdev_priv(dev); struct sta_info *info; - if (!(vif->iftype == AP_MODE || vif->iftype == GO_MODE)) + if (!(vif->iftype == WILC_AP_MODE || vif->iftype == WILC_GO_MODE)) return ret; info = &priv->assoc_stainfo; @@ -1921,26 +1863,10 @@ static int change_station(struct wiphy *wiphy, struct net_device *dev, const u8 *mac, struct station_parameters *params) { int ret = 0; - struct add_sta_param sta_params = { {0} }; struct wilc_vif *vif = netdev_priv(dev); - if (vif->iftype == AP_MODE || vif->iftype == GO_MODE) { - memcpy(sta_params.bssid, mac, ETH_ALEN); - sta_params.aid = params->aid; - sta_params.rates_len = params->supported_rates_len; - sta_params.rates = params->supported_rates; - - if (!params->ht_capa) { - sta_params.ht_supported = false; - } else { - sta_params.ht_supported = true; - sta_params.ht_capa = *params->ht_capa; - } - - sta_params.flags_mask = params->sta_flags_mask; - sta_params.flags_set = params->sta_flags_set; - - ret = wilc_edit_station(vif, &sta_params); + if (vif->iftype == WILC_AP_MODE || vif->iftype == WILC_GO_MODE) { + ret = wilc_edit_station(vif, mac, params); if (ret) netdev_err(dev, "Host edit station fail\n"); } @@ -2095,14 +2021,6 @@ static struct wireless_dev *wilc_wfi_cfg_alloc(void) if (!wdev->wiphy) goto free_mem; - wilc_band_2ghz.ht_cap.ht_supported = 1; - wilc_band_2ghz.ht_cap.cap |= (1 << IEEE80211_HT_CAP_RX_STBC_SHIFT); - wilc_band_2ghz.ht_cap.mcs.rx_mask[0] = 0xff; - wilc_band_2ghz.ht_cap.ampdu_factor = IEEE80211_HT_MAX_AMPDU_8K; - wilc_band_2ghz.ht_cap.ampdu_density = IEEE80211_HT_MPDU_DENSITY_NONE; - - wdev->wiphy->bands[NL80211_BAND_2GHZ] = &wilc_band_2ghz; - return wdev; free_mem: @@ -2126,15 +2044,33 @@ struct wireless_dev *wilc_create_wiphy(struct net_device *net, priv = wdev_priv(wdev); priv->wdev = wdev; - wdev->wiphy->max_scan_ssids = MAX_NUM_PROBED_SSID; + + memcpy(priv->bitrates, wilc_bitrates, sizeof(wilc_bitrates)); + memcpy(priv->channels, wilc_2ghz_channels, sizeof(wilc_2ghz_channels)); + priv->band.bitrates = priv->bitrates; + priv->band.n_bitrates = ARRAY_SIZE(priv->bitrates); + priv->band.channels = priv->channels; + priv->band.n_channels = ARRAY_SIZE(wilc_2ghz_channels); + + priv->band.ht_cap.ht_supported = 1; + priv->band.ht_cap.cap |= (1 << IEEE80211_HT_CAP_RX_STBC_SHIFT); + priv->band.ht_cap.mcs.rx_mask[0] = 0xff; + priv->band.ht_cap.ampdu_factor = IEEE80211_HT_MAX_AMPDU_8K; + priv->band.ht_cap.ampdu_density = IEEE80211_HT_MPDU_DENSITY_NONE; + + wdev->wiphy->bands[NL80211_BAND_2GHZ] = &priv->band; + + wdev->wiphy->max_scan_ssids = WILC_MAX_NUM_PROBED_SSID; #ifdef CONFIG_PM wdev->wiphy->wowlan = &wowlan_support; #endif wdev->wiphy->max_num_pmkids = WILC_MAX_NUM_PMKIDS; wdev->wiphy->max_scan_ie_len = 1000; wdev->wiphy->signal_type = CFG80211_SIGNAL_TYPE_MBM; - wdev->wiphy->cipher_suites = cipher_suites; - wdev->wiphy->n_cipher_suites = ARRAY_SIZE(cipher_suites); + memcpy(priv->cipher_suites, wilc_cipher_suites, + sizeof(wilc_cipher_suites)); + 
wdev->wiphy->cipher_suites = priv->cipher_suites; + wdev->wiphy->n_cipher_suites = ARRAY_SIZE(wilc_cipher_suites); wdev->wiphy->mgmt_stypes = wilc_wfi_cfg80211_mgmt_types; wdev->wiphy->max_remain_on_channel_duration = 500; diff --git a/drivers/staging/wilc1000/wilc_wfi_netdevice.h b/drivers/staging/wilc1000/wilc_wfi_netdevice.h index 4f05a16c778e..c6685c0c238b 100644 --- a/drivers/staging/wilc1000/wilc_wfi_netdevice.h +++ b/drivers/staging/wilc1000/wilc_wfi_netdevice.h @@ -22,7 +22,6 @@ #define FLOW_CONTROL_UPPER_THRESHOLD 256 #define WILC_MAX_NUM_PMKIDS 16 -#define PMKID_LEN 16 #define PMKID_FOUND 1 #define NUM_STA_ASSOCIATED 8 @@ -58,7 +57,7 @@ struct wilc_wfi_wep_key { }; struct sta_info { - u8 sta_associated_bss[MAX_NUM_STA][ETH_ALEN]; + u8 sta_associated_bss[WILC_MAX_NUM_STA][ETH_ALEN]; }; /*Parameters needed for host interface for remaining on channel*/ @@ -75,6 +74,61 @@ struct wilc_p2p_var { bool is_wilc_ie; }; +static const u32 wilc_cipher_suites[] = { + WLAN_CIPHER_SUITE_WEP40, + WLAN_CIPHER_SUITE_WEP104, + WLAN_CIPHER_SUITE_TKIP, + WLAN_CIPHER_SUITE_CCMP, + WLAN_CIPHER_SUITE_AES_CMAC +}; + +#define CHAN2G(_channel, _freq, _flags) { \ + .band = NL80211_BAND_2GHZ, \ + .center_freq = (_freq), \ + .hw_value = (_channel), \ + .flags = (_flags), \ + .max_antenna_gain = 0, \ + .max_power = 30, \ +} + +static const struct ieee80211_channel wilc_2ghz_channels[] = { + CHAN2G(1, 2412, 0), + CHAN2G(2, 2417, 0), + CHAN2G(3, 2422, 0), + CHAN2G(4, 2427, 0), + CHAN2G(5, 2432, 0), + CHAN2G(6, 2437, 0), + CHAN2G(7, 2442, 0), + CHAN2G(8, 2447, 0), + CHAN2G(9, 2452, 0), + CHAN2G(10, 2457, 0), + CHAN2G(11, 2462, 0), + CHAN2G(12, 2467, 0), + CHAN2G(13, 2472, 0), + CHAN2G(14, 2484, 0) +}; + +#define RATETAB_ENT(_rate, _hw_value, _flags) { \ + .bitrate = (_rate), \ + .hw_value = (_hw_value), \ + .flags = (_flags), \ +} + +static struct ieee80211_rate wilc_bitrates[] = { + RATETAB_ENT(10, 0, 0), + RATETAB_ENT(20, 1, 0), + RATETAB_ENT(55, 2, 0), + RATETAB_ENT(110, 3, 0), + RATETAB_ENT(60, 9, 0), + RATETAB_ENT(90, 6, 0), + RATETAB_ENT(120, 7, 0), + RATETAB_ENT(180, 8, 0), + RATETAB_ENT(240, 9, 0), + RATETAB_ENT(360, 10, 0), + RATETAB_ENT(480, 11, 0), + RATETAB_ENT(540, 12, 0) +}; + struct wilc_priv { struct wireless_dev *wdev; struct cfg80211_scan_request *scan_req; @@ -90,13 +144,13 @@ struct wilc_priv { struct sk_buff *skb; struct net_device *dev; struct host_if_drv *hif_drv; - struct host_if_pmkid_attr pmkid_list; + struct wilc_pmkid_attr pmkid_list; u8 wep_key[4][WLAN_KEY_LEN_WEP104]; u8 wep_key_len[4]; /* The real interface that the monitor is on */ struct net_device *real_ndev; - struct wilc_wfi_key *wilc_gtk[MAX_NUM_STA]; - struct wilc_wfi_key *wilc_ptk[MAX_NUM_STA]; + struct wilc_wfi_key *wilc_gtk[WILC_MAX_NUM_STA]; + struct wilc_wfi_key *wilc_ptk[WILC_MAX_NUM_STA]; u8 wilc_groupkey; /* mutexes */ struct mutex scan_req_lock; @@ -105,6 +159,11 @@ struct wilc_priv { struct network_info scanned_shadow[MAX_NUM_SCANNED_NETWORKS_SHADOW]; int scanned_cnt; struct wilc_p2p_var p2p; + + struct ieee80211_channel channels[ARRAY_SIZE(wilc_2ghz_channels)]; + struct ieee80211_rate bitrates[ARRAY_SIZE(wilc_bitrates)]; + struct ieee80211_supported_band band; + u32 cipher_suites[ARRAY_SIZE(wilc_cipher_suites)]; }; struct frame_reg { @@ -169,7 +228,7 @@ struct wilc { int dev_irq_num; int close; u8 vif_num; - struct wilc_vif *vif[NUM_CONCURRENT_IFC]; + struct wilc_vif *vif[WILC_NUM_CONCURRENT_IFC]; u8 open_ifcs; /*protect head of transmit queue*/ struct mutex txq_add_to_head_cs; @@ -188,7 +247,8 @@ struct wilc { 
 	struct task_struct *txq_thread;
 
 	int quit;
-	int cfg_frame_in_use;
+	/* lock to protect issue of wid command to firmware */
+	struct mutex cfg_cmd_lock;
 	struct wilc_cfg_frame cfg_frame;
 	u32 cfg_frame_offset;
 	int cfg_seq_no;
diff --git a/drivers/staging/wilc1000/wilc_wlan.c b/drivers/staging/wilc1000/wilc_wlan.c
index a48c906b2443..3c5e9e030cad 100644
--- a/drivers/staging/wilc1000/wilc_wlan.c
+++ b/drivers/staging/wilc1000/wilc_wlan.c
@@ -17,13 +17,13 @@ static inline bool is_wilc1000(u32 id)
 static inline void acquire_bus(struct wilc *wilc, enum bus_acquire acquire)
 {
 	mutex_lock(&wilc->hif_cs);
-	if (acquire == ACQUIRE_AND_WAKEUP)
+	if (acquire == WILC_BUS_ACQUIRE_AND_WAKEUP)
 		chip_wakeup(wilc);
 }
 
 static inline void release_bus(struct wilc *wilc, enum bus_release release)
 {
-	if (release == RELEASE_ALLOW_SLEEP)
+	if (release == WILC_BUS_RELEASE_ALLOW_SLEEP)
 		chip_allow_sleep(wilc);
 	mutex_unlock(&wilc->hif_cs);
 }
@@ -399,7 +399,7 @@ void chip_wakeup(struct wilc *wilc)
 {
 	u32 reg, clk_status_reg;
 
-	if ((wilc->io_type & 0x1) == HIF_SPI) {
+	if ((wilc->io_type & 0x1) == WILC_HIF_SPI) {
 		do {
 			wilc->hif_func->hif_read_reg(wilc, 1, &reg);
 			wilc->hif_func->hif_write_reg(wilc, 1, reg | BIT(1));
@@ -410,7 +410,7 @@ void chip_wakeup(struct wilc *wilc)
 				wilc_get_chipid(wilc, true);
 			} while (wilc_get_chipid(wilc, true) == 0);
 		} while (wilc_get_chipid(wilc, true) == 0);
-	} else if ((wilc->io_type & 0x1) == HIF_SDIO) {
+	} else if ((wilc->io_type & 0x1) == WILC_HIF_SDIO) {
 		wilc->hif_func->hif_write_reg(wilc, 0xfa, 1);
 		usleep_range(200, 400);
 		wilc->hif_func->hif_read_reg(wilc, 0xf0, &reg);
@@ -433,7 +433,7 @@ void chip_wakeup(struct wilc *wilc)
 		} while ((clk_status_reg & 0x1) == 0);
 	}
 
-	if (wilc->chip_ps_state == CHIP_SLEEPING_MANUAL) {
+	if (wilc->chip_ps_state == WILC_CHIP_SLEEPING_MANUAL) {
 		if (wilc_get_chipid(wilc, false) < 0x1002b0) {
 			u32 val32;
 
@@ -446,37 +446,37 @@ void chip_wakeup(struct wilc *wilc)
 			wilc->hif_func->hif_write_reg(wilc, 0x1e9c, val32);
 		}
 	}
-	wilc->chip_ps_state = CHIP_WAKEDUP;
+	wilc->chip_ps_state = WILC_CHIP_WAKEDUP;
 }
 EXPORT_SYMBOL_GPL(chip_wakeup);
 
 void wilc_chip_sleep_manually(struct wilc *wilc)
 {
-	if (wilc->chip_ps_state != CHIP_WAKEDUP)
+	if (wilc->chip_ps_state != WILC_CHIP_WAKEDUP)
 		return;
-	acquire_bus(wilc, ACQUIRE_ONLY);
+	acquire_bus(wilc, WILC_BUS_ACQUIRE_ONLY);
 
 	chip_allow_sleep(wilc);
 	wilc->hif_func->hif_write_reg(wilc, 0x10a8, 1);
 
-	wilc->chip_ps_state = CHIP_SLEEPING_MANUAL;
-	release_bus(wilc, RELEASE_ONLY);
+	wilc->chip_ps_state = WILC_CHIP_SLEEPING_MANUAL;
+	release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 }
 EXPORT_SYMBOL_GPL(wilc_chip_sleep_manually);
 
 void host_wakeup_notify(struct wilc *wilc)
 {
-	acquire_bus(wilc, ACQUIRE_ONLY);
+	acquire_bus(wilc, WILC_BUS_ACQUIRE_ONLY);
 	wilc->hif_func->hif_write_reg(wilc, 0x10b0, 1);
-	release_bus(wilc, RELEASE_ONLY);
+	release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 }
 EXPORT_SYMBOL_GPL(host_wakeup_notify);
 
 void host_sleep_notify(struct wilc *wilc)
 {
-	acquire_bus(wilc, ACQUIRE_ONLY);
+	acquire_bus(wilc, WILC_BUS_ACQUIRE_ONLY);
 	wilc->hif_func->hif_write_reg(wilc, 0x10ac, 1);
-	release_bus(wilc, RELEASE_ONLY);
+	release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 }
 EXPORT_SYMBOL_GPL(host_sleep_notify);
 
@@ -541,7 +541,7 @@ int wilc_wlan_handle_txq(struct net_device *dev, u32 *txq_count)
 			goto out;
 		vmm_table[i] = 0x0;
 
-		acquire_bus(wilc, ACQUIRE_AND_WAKEUP);
+		acquire_bus(wilc, WILC_BUS_ACQUIRE_AND_WAKEUP);
 		counter = 0;
 		func = wilc->hif_func;
 		do {
@@ -584,7 +584,7 @@ int wilc_wlan_handle_txq(struct net_device *dev, u32 *txq_count)
 					entries = ((reg >> 3) & 0x3f);
 					break;
 				}
-				release_bus(wilc, RELEASE_ALLOW_SLEEP);
+				release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 			} while (--timeout);
 			if (timeout <= 0) {
 				ret = func->hif_write_reg(wilc, WILC_HOST_VMM_CTL, 0x0);
@@ -611,11 +611,11 @@ int wilc_wlan_handle_txq(struct net_device *dev, u32 *txq_count)
 			goto out_release_bus;
 
 		if (entries == 0) {
-			ret = WILC_TX_ERR_NO_BUF;
+			ret = -ENOBUFS;
 			goto out_release_bus;
 		}
 
-		release_bus(wilc, RELEASE_ALLOW_SLEEP);
+		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 
 		offset = 0;
 		i = 0;
@@ -667,7 +667,7 @@ int wilc_wlan_handle_txq(struct net_device *dev, u32 *txq_count)
 			kfree(tqe);
 		} while (--entries);
 
-		acquire_bus(wilc, ACQUIRE_AND_WAKEUP);
+		acquire_bus(wilc, WILC_BUS_ACQUIRE_AND_WAKEUP);
 
 		ret = func->hif_clear_int_ext(wilc, ENABLE_TX_VMM);
 		if (!ret)
@@ -676,7 +676,7 @@ int wilc_wlan_handle_txq(struct net_device *dev, u32 *txq_count)
 		ret = func->hif_block_tx_ext(wilc, 0, txb, offset);
 
 out_release_bus:
-	release_bus(wilc, RELEASE_ALLOW_SLEEP);
+	release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 
 out:
 	mutex_unlock(&wilc->txq_add_to_head_cs);
@@ -775,7 +775,7 @@ static void wilc_pllupdate_isr_ext(struct wilc *wilc, u32 int_stats)
 	wilc->hif_func->hif_clear_int_ext(wilc, PLL_INT_CLR);
 
-	if (wilc->io_type == HIF_SDIO)
+	if (wilc->io_type == WILC_HIF_SDIO)
 		mdelay(WILC_PLL_TO_SDIO);
 	else
 		mdelay(WILC_PLL_TO_SPI);
@@ -835,7 +835,7 @@ void wilc_handle_isr(struct wilc *wilc)
 {
 	u32 int_status;
 
-	acquire_bus(wilc, ACQUIRE_AND_WAKEUP);
+	acquire_bus(wilc, WILC_BUS_ACQUIRE_AND_WAKEUP);
 	wilc->hif_func->hif_read_int(wilc, &int_status);
 
 	if (int_status & PLL_INT_EXT)
@@ -850,7 +850,7 @@ void wilc_handle_isr(struct wilc *wilc)
 	if (!(int_status & (ALL_INT_EXT)))
 		wilc_unknown_isr_ext(wilc);
 
-	release_bus(wilc, RELEASE_ALLOW_SLEEP);
+	release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 }
 EXPORT_SYMBOL_GPL(wilc_handle_isr);
 
@@ -874,7 +874,7 @@ int wilc_wlan_firmware_download(struct wilc *wilc, const u8 *buffer,
 		memcpy(&size, &buffer[offset + 4], 4);
 		le32_to_cpus(&addr);
 		le32_to_cpus(&size);
-		acquire_bus(wilc, ACQUIRE_ONLY);
+		acquire_bus(wilc, WILC_BUS_ACQUIRE_ONLY);
 		offset += 8;
 		while (((int)size) && (offset < buffer_size)) {
 			if (size <= blksz)
@@ -892,7 +892,7 @@ int wilc_wlan_firmware_download(struct wilc *wilc, const u8 *buffer,
 			offset += size2;
 			size -= size2;
 		}
-		release_bus(wilc, RELEASE_ONLY);
+		release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 
 		if (!ret) {
 			ret = -EIO;
@@ -913,20 +913,20 @@ int wilc_wlan_start(struct wilc *wilc)
 	int ret;
 	u32 chipid;
 
-	if (wilc->io_type == HIF_SDIO) {
+	if (wilc->io_type == WILC_HIF_SDIO) {
 		reg = 0;
 		reg |= BIT(3);
-	} else if (wilc->io_type == HIF_SPI) {
+	} else if (wilc->io_type == WILC_HIF_SPI) {
 		reg = 1;
 	}
-	acquire_bus(wilc, ACQUIRE_ONLY);
+	acquire_bus(wilc, WILC_BUS_ACQUIRE_ONLY);
 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_VMM_CORE_CFG, reg);
 	if (!ret) {
-		release_bus(wilc, RELEASE_ONLY);
+		release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 		return -EIO;
 	}
 	reg = 0;
-	if (wilc->io_type == HIF_SDIO && wilc->dev_irq_num)
+	if (wilc->io_type == WILC_HIF_SDIO && wilc->dev_irq_num)
 		reg |= WILC_HAVE_SDIO_IRQ_GPIO;
 
 #ifdef WILC_DISABLE_PMU
@@ -954,7 +954,7 @@ int wilc_wlan_start(struct wilc *wilc)
 
 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_GP_REG_1, reg);
 	if (!ret) {
-		release_bus(wilc, RELEASE_ONLY);
+		release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 		return -EIO;
 	}
@@ -962,7 +962,7 @@ int wilc_wlan_start(struct wilc *wilc)
 
 	ret = wilc->hif_func->hif_read_reg(wilc, 0x1000, &chipid);
 	if (!ret) {
-		release_bus(wilc, RELEASE_ONLY);
+		release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 		return -EIO;
 	}
@@ -976,7 +976,7 @@ int wilc_wlan_start(struct wilc *wilc)
 	reg |= BIT(10);
 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_GLB_RESET_0, reg);
 	wilc->hif_func->hif_read_reg(wilc, WILC_GLB_RESET_0, &reg);
-	release_bus(wilc, RELEASE_ONLY);
+	release_bus(wilc, WILC_BUS_RELEASE_ONLY);
 
 	return (ret < 0) ? ret : 0;
 }
@@ -987,18 +987,18 @@ int wilc_wlan_stop(struct wilc *wilc)
 	int ret;
 	u8 timeout = 10;
 
-	acquire_bus(wilc, ACQUIRE_AND_WAKEUP);
+	acquire_bus(wilc, WILC_BUS_ACQUIRE_AND_WAKEUP);
 
 	ret = wilc->hif_func->hif_read_reg(wilc, WILC_GLB_RESET_0, &reg);
 	if (!ret) {
-		release_bus(wilc, RELEASE_ALLOW_SLEEP);
+		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 		return ret;
 	}
 
 	reg &= ~BIT(10);
 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_GLB_RESET_0, reg);
 	if (!ret) {
-		release_bus(wilc, RELEASE_ALLOW_SLEEP);
+		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 		return ret;
 	}
@@ -1006,7 +1006,7 @@ int wilc_wlan_stop(struct wilc *wilc)
 
 		ret = wilc->hif_func->hif_read_reg(wilc, WILC_GLB_RESET_0, &reg);
 		if (!ret) {
-			release_bus(wilc, RELEASE_ALLOW_SLEEP);
+			release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 			return ret;
 		}
@@ -1021,7 +1021,7 @@ int wilc_wlan_stop(struct wilc *wilc)
 							WILC_GLB_RESET_0, &reg);
 			if (!ret) {
-				release_bus(wilc, RELEASE_ALLOW_SLEEP);
+				release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 				return ret;
 			}
 			break;
@@ -1036,7 +1036,7 @@ int wilc_wlan_stop(struct wilc *wilc)
 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_GLB_RESET_0, reg);
 
-	release_bus(wilc, RELEASE_ALLOW_SLEEP);
+	release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 
 	return ret;
 }
@@ -1072,18 +1072,18 @@ void wilc_wlan_cleanup(struct net_device *dev)
 	kfree(wilc->tx_buffer);
 	wilc->tx_buffer = NULL;
 
-	acquire_bus(wilc, ACQUIRE_AND_WAKEUP);
+	acquire_bus(wilc, WILC_BUS_ACQUIRE_AND_WAKEUP);
 
 	ret = wilc->hif_func->hif_read_reg(wilc, WILC_GP_REG_0, &reg);
 	if (!ret)
-		release_bus(wilc, RELEASE_ALLOW_SLEEP);
+		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 
 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_GP_REG_0, (reg | ABORT_INT));
 	if (!ret)
-		release_bus(wilc, RELEASE_ALLOW_SLEEP);
+		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 
-	release_bus(wilc, RELEASE_ALLOW_SLEEP);
+	release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
 	wilc->hif_func->hif_deinit(NULL);
 }
@@ -1122,8 +1122,7 @@ int wilc_wlan_cfg_set(struct wilc_vif *vif, int start, u16 wid, u8 *buffer,
 	int ret_size;
 	struct wilc *wilc = vif->wilc;
 
-	if (wilc->cfg_frame_in_use)
-		return 0;
+	mutex_lock(&wilc->cfg_cmd_lock);
 
 	if (start)
 		wilc->cfg_frame_offset = 0;
@@ -1134,11 +1133,12 @@ int wilc_wlan_cfg_set(struct wilc_vif *vif, int start, u16 wid, u8 *buffer,
 	offset += ret_size;
 	wilc->cfg_frame_offset = offset;
 
-	if (!commit)
+	if (!commit) {
+		mutex_unlock(&wilc->cfg_cmd_lock);
 		return ret_size;
+	}
 
 	netdev_dbg(vif->ndev, "%s: seqno[%d]\n", __func__, wilc->cfg_seq_no);
 
-	wilc->cfg_frame_in_use = 1;
 	if (wilc_wlan_cfg_commit(vif, WILC_CFG_SET, drv_handler))
 		ret_size = 0;
@@ -1149,9 +1149,9 @@ int wilc_wlan_cfg_set(struct wilc_vif *vif, int start, u16 wid, u8 *buffer,
 		ret_size = 0;
 	}
 
-	wilc->cfg_frame_in_use = 0;
 	wilc->cfg_frame_offset = 0;
 	wilc->cfg_seq_no += 1;
+	mutex_unlock(&wilc->cfg_cmd_lock);
 
 	return ret_size;
 }
@@ -1163,8 +1163,7 @@ int wilc_wlan_cfg_get(struct wilc_vif *vif, int start, u16 wid, int commit,
 	int ret_size;
 	struct wilc *wilc = vif->wilc;
 
-	if (wilc->cfg_frame_in_use)
-		return 0;
+	mutex_lock(&wilc->cfg_cmd_lock);
 
 	if (start)
 		wilc->cfg_frame_offset = 0;
@@ -1174,10 +1173,10 @@ int wilc_wlan_cfg_get(struct wilc_vif *vif, int start, u16 wid, int commit,
 	offset += ret_size;
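/* Annotation: with this series, wilc_wlan_cfg_set()/wilc_wlan_cfg_get()
 * take cfg_cmd_lock at entry and release it on every return path, so a
 * concurrent wid command now blocks on the mutex instead of hitting the
 * old cfg_frame_in_use early-return of 0. */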
wilc->cfg_frame_offset = offset; - if (!commit) + if (!commit) { + mutex_unlock(&wilc->cfg_cmd_lock); return ret_size; - - wilc->cfg_frame_in_use = 1; + } if (wilc_wlan_cfg_commit(vif, WILC_CFG_QUERY, drv_handler)) ret_size = 0; @@ -1187,9 +1186,9 @@ int wilc_wlan_cfg_get(struct wilc_vif *vif, int start, u16 wid, int commit, netdev_dbg(vif->ndev, "%s: Timed Out\n", __func__); ret_size = 0; } - wilc->cfg_frame_in_use = 0; wilc->cfg_frame_offset = 0; wilc->cfg_seq_no += 1; + mutex_unlock(&wilc->cfg_cmd_lock); return ret_size; } @@ -1205,7 +1204,7 @@ int wilc_send_config_pkt(struct wilc_vif *vif, u8 mode, struct wid *wids, int i; int ret = 0; - if (mode == GET_CFG) { + if (mode == WILC_GET_CFG) { for (i = 0; i < count; i++) { if (!wilc_wlan_cfg_get(vif, !i, wids[i].id, @@ -1221,7 +1220,7 @@ int wilc_send_config_pkt(struct wilc_vif *vif, u8 mode, struct wid *wids, wids[i].val, wids[i].size); } - } else if (mode == SET_CFG) { + } else if (mode == WILC_SET_CFG) { for (i = 0; i < count; i++) { if (!wilc_wlan_cfg_set(vif, !i, wids[i].id, @@ -1245,7 +1244,7 @@ static u32 init_chip(struct net_device *dev) struct wilc_vif *vif = netdev_priv(dev); struct wilc *wilc = vif->wilc; - acquire_bus(wilc, ACQUIRE_ONLY); + acquire_bus(wilc, WILC_BUS_ACQUIRE_ONLY); chipid = wilc_get_chipid(wilc, true); @@ -1268,7 +1267,7 @@ static u32 init_chip(struct net_device *dev) } } - release_bus(wilc, RELEASE_ONLY); + release_bus(wilc, WILC_BUS_RELEASE_ONLY); return ret; } diff --git a/drivers/staging/wilc1000/wilc_wlan_cfg.c b/drivers/staging/wilc1000/wilc_wlan_cfg.c index faa001c75681..8390766358da 100644 --- a/drivers/staging/wilc1000/wilc_wlan_cfg.c +++ b/drivers/staging/wilc1000/wilc_wlan_cfg.c @@ -7,7 +7,6 @@ #include "wilc_wlan_if.h" #include "wilc_wlan.h" #include "wilc_wlan_cfg.h" -#include "coreconfigurator.h" #include "wilc_wfi_netdevice.h" enum cfg_cmd_type { diff --git a/drivers/staging/wilc1000/wilc_wlan_if.h b/drivers/staging/wilc1000/wilc_wlan_if.h index ce2066b74287..e2310d860291 100644 --- a/drivers/staging/wilc1000/wilc_wlan_if.h +++ b/drivers/staging/wilc1000/wilc_wlan_if.h @@ -15,8 +15,10 @@ * ********************************************/ -#define HIF_SDIO (0) -#define HIF_SPI BIT(0) +enum { + WILC_HIF_SDIO = 0, + WILC_HIF_SPI = BIT(0) +}; /******************************************** * @@ -24,29 +26,12 @@ * ********************************************/ -struct sdio_cmd52 { - u32 read_write: 1; - u32 function: 3; - u32 raw: 1; - u32 address: 17; - u32 data: 8; -}; - -struct sdio_cmd53 { - u32 read_write: 1; - u32 function: 3; - u32 block_mode: 1; - u32 increment: 1; - u32 address: 17; - u32 count: 9; - u8 *buffer; - u32 block_size; +enum { + WILC_MAC_STATUS_INIT = -1, + WILC_MAC_STATUS_DISCONNECTED = 0, + WILC_MAC_STATUS_CONNECTED = 1 }; -#define MAC_STATUS_INIT -1 -#define MAC_STATUS_CONNECTED 1 -#define MAC_STATUS_DISCONNECTED 0 - struct tx_complete_data { int size; void *buff; @@ -56,8 +41,6 @@ struct tx_complete_data { typedef void (*wilc_tx_complete_func_t)(void *, int); -#define WILC_TX_ERR_NO_BUF (-2) - /******************************************** * * Wlan Configuration ID @@ -68,133 +51,172 @@ typedef void (*wilc_tx_complete_func_t)(void *, int); #define MAX_RATES_SUPPORTED 12 enum bss_types { - INFRASTRUCTURE = 0, - INDEPENDENT, - AP, + WILC_FW_BSS_TYPE_INFRA = 0, + WILC_FW_BSS_TYPE_INDEPENDENT, + WILC_FW_BSS_TYPE_AP, }; enum { - B_ONLY_MODE = 0, /* 1, 2 M, otherwise 5, 11 M */ - G_ONLY_MODE, /* 6,12,24 otherwise 9,18,36,48,54 */ - G_MIXED_11B_1_MODE, /* 1,2,5.5,11 otherwise all on */ - 
G_MIXED_11B_2_MODE, /* 1,2,5,11,6,12,24 otherwise all on */ + WILC_FW_OPER_MODE_B_ONLY = 0, /* 1, 2 M, otherwise 5, 11 M */ + WILC_FW_OPER_MODE_G_ONLY, /* 6,12,24 otherwise 9,18,36,48,54 */ + WILC_FW_OPER_MODE_G_MIXED_11B_1, /* 1,2,5.5,11 otherwise all on */ + WILC_FW_OPER_MODE_G_MIXED_11B_2, /* 1,2,5,11,6,12,24 otherwise all on */ }; enum { - G_SHORT_PREAMBLE = 0, /* Short Preamble */ - G_LONG_PREAMBLE = 1, /* Long Preamble */ - G_AUTO_PREAMBLE = 2, /* Auto Preamble Selection */ + WILC_FW_PREAMBLE_SHORT = 0, /* Short Preamble */ + WILC_FW_PREAMBLE_LONG = 1, /* Long Preamble */ + WILC_FW_PREAMBLE_AUTO = 2, /* Auto Preamble Selection */ }; enum { - PASSIVE_SCAN = 0, - ACTIVE_SCAN = 1, + WILC_FW_PASSIVE_SCAN = 0, + WILC_FW_ACTIVE_SCAN = 1, }; enum { - NO_POWERSAVE = 0, - MIN_FAST_PS = 1, - MAX_FAST_PS = 2, - MIN_PSPOLL_PS = 3, - MAX_PSPOLL_PS = 4 + WILC_FW_NO_POWERSAVE = 0, + WILC_FW_MIN_FAST_PS = 1, + WILC_FW_MAX_FAST_PS = 2, + WILC_FW_MIN_PSPOLL_PS = 3, + WILC_FW_MAX_PSPOLL_PS = 4 }; enum chip_ps_states { - CHIP_WAKEDUP = 0, - CHIP_SLEEPING_AUTO = 1, - CHIP_SLEEPING_MANUAL = 2 + WILC_CHIP_WAKEDUP = 0, + WILC_CHIP_SLEEPING_AUTO = 1, + WILC_CHIP_SLEEPING_MANUAL = 2 }; enum bus_acquire { - ACQUIRE_ONLY = 0, - ACQUIRE_AND_WAKEUP = 1, + WILC_BUS_ACQUIRE_ONLY = 0, + WILC_BUS_ACQUIRE_AND_WAKEUP = 1, }; enum bus_release { - RELEASE_ONLY = 0, - RELEASE_ALLOW_SLEEP = 1, + WILC_BUS_RELEASE_ONLY = 0, + WILC_BUS_RELEASE_ALLOW_SLEEP = 1, +}; + +enum { + WILC_FW_NO_ENCRYPT = 0, + WILC_FW_ENCRYPT_ENABLED = BIT(0), + WILC_FW_WEP = BIT(1), + WILC_FW_WEP_EXTENDED = BIT(2), + WILC_FW_WPA = BIT(3), + WILC_FW_WPA2 = BIT(4), + WILC_FW_AES = BIT(5), + WILC_FW_TKIP = BIT(6) }; enum { - NO_SECURITY = 0, - WEP_40 = 0x3, - WEP_104 = 0x7, - WPA_AES = 0x29, - WPA_TKIP = 0x49, - WPA_AES_TKIP = 0x69, /* Aes or Tkip */ - WPA2_AES = 0x31, - WPA2_TKIP = 0x51, - WPA2_AES_TKIP = 0x71, /* Aes or Tkip */ + WILC_FW_SEC_NO = WILC_FW_NO_ENCRYPT, + WILC_FW_SEC_WEP = WILC_FW_WEP | WILC_FW_ENCRYPT_ENABLED, + WILC_FW_SEC_WEP_EXTENDED = WILC_FW_WEP_EXTENDED | WILC_FW_SEC_WEP, + WILC_FW_SEC_WPA = WILC_FW_WPA | WILC_FW_ENCRYPT_ENABLED, + WILC_FW_SEC_WPA_AES = WILC_FW_AES | WILC_FW_SEC_WPA, + WILC_FW_SEC_WPA_TKIP = WILC_FW_TKIP | WILC_FW_SEC_WPA, + WILC_FW_SEC_WPA2 = WILC_FW_WPA2 | WILC_FW_ENCRYPT_ENABLED, + WILC_FW_SEC_WPA2_AES = WILC_FW_AES | WILC_FW_SEC_WPA2, + WILC_FW_SEC_WPA2_TKIP = WILC_FW_TKIP | WILC_FW_SEC_WPA2 }; enum authtype { - OPEN_SYSTEM = 1, - SHARED_KEY = 2, - ANY = 3, - IEEE8021 = 5 + WILC_FW_AUTH_OPEN_SYSTEM = 1, + WILC_FW_AUTH_SHARED_KEY = 2, + WILC_FW_AUTH_ANY = 3, + WILC_FW_AUTH_IEEE8021 = 5 }; enum site_survey { - SITE_SURVEY_1CH = 0, - SITE_SURVEY_ALL_CH = 1, - SITE_SURVEY_OFF = 2 + WILC_FW_SITE_SURVEY_1CH = 0, + WILC_FW_SITE_SURVEY_ALL_CH = 1, + WILC_FW_SITE_SURVEY_OFF = 2 }; enum { - NORMAL_ACK = 0, - NO_ACK, + WILC_FW_ACK_POLICY_NORMAL = 0, + WILC_FW_ACK_NO_POLICY, }; enum { - REKEY_DISABLE = 1, - REKEY_TIME_BASE, - REKEY_PKT_BASE, - REKEY_TIME_PKT_BASE + WILC_FW_REKEY_POLICY_DISABLE = 1, + WILC_FW_REKEY_POLICY_TIME_BASE, + WILC_FW_REKEY_POLICY_PKT_BASE, + WILC_FW_REKEY_POLICY_TIME_PKT_BASE }; enum { - FILTER_NO = 0x00, - FILTER_AP_ONLY = 0x01, - FILTER_STA_ONLY = 0x02 + WILC_FW_FILTER_NO = 0x00, + WILC_FW_FILTER_AP_ONLY = 0x01, + WILC_FW_FILTER_STA_ONLY = 0x02 }; enum { - AUTO_PROT = 0, /* Auto */ - NO_PROT, /* Do not use any protection */ - ERP_PROT, /* Protect all ERP frame exchanges */ - HT_PROT, /* Protect all HT frame exchanges */ - GF_PROT, /* Protect all GF frame exchanges */ + WILC_FW_11N_PROT_AUTO = 0, 
/* Auto */ + WILC_FW_11N_NO_PROT, /* Do not use any protection */ + WILC_FW_11N_PROT_ERP, /* Protect all ERP frame exchanges */ + WILC_FW_11N_PROT_HT, /* Protect all HT frame exchanges */ + WILC_FW_11N_PROT_GF /* Protect all GF frame exchanges */ }; enum { - G_SELF_CTS_PROT, - G_RTS_CTS_PROT, + WILC_FW_ERP_PROT_SELF_CTS, + WILC_FW_ERP_PROT_RTS_CTS, }; enum { - HT_MIXED_MODE = 1, - HT_ONLY_20MHZ_MODE, - HT_ONLY_20_40MHZ_MODE, + WILC_FW_11N_OP_MODE_HT_MIXED = 1, + WILC_FW_11N_OP_MODE_HT_ONLY_20MHZ, + WILC_FW_11N_OP_MODE_HT_ONLY_20_40MHZ, }; enum { - NO_DETECT = 0, - DETECT_ONLY = 1, - DETECT_PROTECT = 2, - DETECT_PROTECT_REPORT = 3, + WILC_FW_OBBS_NONHT_NO_DETECT = 0, + WILC_FW_OBBS_NONHT_DETECT_ONLY = 1, + WILC_FW_OBBS_NONHT_DETECT_PROTECT = 2, + WILC_FW_OBBS_NONHT_DETECT_PROTECT_REPORT = 3, }; enum { - RTS_CTS_NONHT_PROT = 0, /* RTS-CTS at non-HT rate */ - FIRST_FRAME_NONHT_PROT, /* First frame at non-HT rate */ - LSIG_TXOP_PROT, /* LSIG TXOP Protection */ - FIRST_FRAME_MIXED_PROT, /* First frame at Mixed format */ + WILC_FW_HT_PROT_RTS_CTS_NONHT = 0, /* RTS-CTS at non-HT rate */ + WILC_FW_HT_PROT_FIRST_FRAME_NONHT, /* First frame at non-HT rate */ + WILC_FW_HT_PROT_LSIG_TXOP, /* LSIG TXOP Protection */ + WILC_FW_HT_PROT_FIRST_FRAME_MIXED, /* First frame at Mixed format */ }; enum { - STATIC_MODE = 1, - DYNAMIC_MODE = 2, - MIMO_MODE = 3, /* power save disable */ + WILC_FW_SMPS_MODE_STATIC = 1, + WILC_FW_SMPS_MODE_DYNAMIC = 2, + WILC_FW_SMPS_MODE_MIMO = 3, /* power save disable */ +}; + +enum { + WILC_FW_TX_RATE_AUTO = 0, + WILC_FW_TX_RATE_MBPS_1 = 1, + WILC_FW_TX_RATE_MBPS_2 = 2, + WILC_FW_TX_RATE_MBPS_5_5 = 5, + WILC_FW_TX_RATE_MBPS_11 = 11, + WILC_FW_TX_RATE_MBPS_6 = 6, + WILC_FW_TX_RATE_MBPS_9 = 9, + WILC_FW_TX_RATE_MBPS_12 = 12, + WILC_FW_TX_RATE_MBPS_18 = 18, + WILC_FW_TX_RATE_MBPS_24 = 24, + WILC_FW_TX_RATE_MBPS_36 = 36, + WILC_FW_TX_RATE_MBPS_48 = 48, + WILC_FW_TX_RATE_MBPS_54 = 54 +}; + +enum { + WILC_FW_DEFAULT_SCAN = 0, + WILC_FW_USER_SCAN = BIT(0), + WILC_FW_OBSS_PERIODIC_SCAN = BIT(1), + WILC_FW_OBSS_ONETIME_SCAN = BIT(2) +}; + +enum { + WILC_FW_ACTION_FRM_IDX = 0, + WILC_FW_PROBE_REQ_IDX = 1 }; enum wid_type { @@ -698,13 +720,8 @@ enum { WID_LONG_RETRY_LIMIT = 0x1003, WID_BEACON_INTERVAL = 0x1006, WID_MEMORY_ACCESS_16BIT = 0x1008, - WID_RX_SENSE = 0x100B, - WID_ACTIVE_SCAN_TIME = 0x100C, - WID_PASSIVE_SCAN_TIME = 0x100D, - WID_SITE_SURVEY_SCAN_TIME = 0x100E, WID_JOIN_START_TIMEOUT = 0x100F, - WID_AUTH_TIMEOUT = 0x1010, WID_ASOC_TIMEOUT = 0x1011, WID_11I_PROTOCOL_TIMEOUT = 0x1012, WID_EAPOL_RESPONSE_TIMEOUT = 0x1013, @@ -739,11 +756,8 @@ enum { WID_HW_RX_COUNT = 0x2015, WID_MEMORY_ADDRESS = 0x201E, WID_MEMORY_ACCESS_32BIT = 0x201F, - WID_RF_REG_VAL = 0x2021, /* NMAC Integer WID list */ - WID_11N_PHY_ACTIVE_REG_VAL = 0x2080, - /* Custom Integer WID list */ WID_GET_INACTIVE_TIME = 0x2084, WID_SET_OPERATION_MODE = 0X2086, @@ -764,7 +778,6 @@ enum { WID_SUPP_PASSWORD = 0x3011, WID_SITE_SURVEY_RESULTS = 0x3012, WID_RX_POWER_LEVEL = 0x3013, - WID_DEL_ALL_RX_BA = 0x3014, WID_SET_STA_MAC_INACTIVE_TIME = 0x3017, WID_ADD_WEP_KEY = 0x3019, WID_REMOVE_WEP_KEY = 0x301A, diff --git a/drivers/staging/wlan-ng/cfg80211.c b/drivers/staging/wlan-ng/cfg80211.c index 47f2ee926a77..e5d7def1f366 100644 --- a/drivers/staging/wlan-ng/cfg80211.c +++ b/drivers/staging/wlan-ng/cfg80211.c @@ -678,7 +678,8 @@ static const struct cfg80211_ops prism2_usb_cfg_ops = { }; /* Functions to create/free wiphy interface */ -static struct wiphy *wlan_create_wiphy(struct device *dev, struct wlandevice *wlandev) +static 
struct wiphy *wlan_create_wiphy(struct device *dev, + struct wlandevice *wlandev) { struct wiphy *wiphy; struct prism2_wiphy_private *priv; diff --git a/drivers/staging/wlan-ng/prism2fw.c b/drivers/staging/wlan-ng/prism2fw.c index f99626ca6bdc..bb572b7fdfee 100644 --- a/drivers/staging/wlan-ng/prism2fw.c +++ b/drivers/staging/wlan-ng/prism2fw.c @@ -406,7 +406,6 @@ static int crcimage(struct imgchunk *fchunk, unsigned int nfchunks, int i; int c; u32 crcstart; - u32 crcend; u32 cstart = 0; u32 cend; u8 *dest; @@ -416,7 +415,6 @@ static int crcimage(struct imgchunk *fchunk, unsigned int nfchunks, if (!s3crc[i].dowrite) continue; crcstart = s3crc[i].addr; - crcend = s3crc[i].addr + s3crc[i].len; /* Find chunk */ for (c = 0; c < nfchunks; c++) { cstart = fchunk[c].addr; @@ -559,7 +557,7 @@ static int mkimage(struct imgchunk *clist, unsigned int *ccnt) for (i = 0; i < *ccnt; i++) { clist[i].data = kzalloc(clist[i].len, GFP_KERNEL); if (!clist[i].data) { - pr_err("failed to allocate image space, exitting.\n"); + pr_err("failed to allocate image space, exiting.\n"); return 1; } pr_debug("chunk[%d]: addr=0x%06x len=%d\n", diff --git a/drivers/staging/wlan-ng/prism2mib.c b/drivers/staging/wlan-ng/prism2mib.c index 5c0dad42f523..1eba5fa28d8f 100644 --- a/drivers/staging/wlan-ng/prism2mib.c +++ b/drivers/staging/wlan-ng/prism2mib.c @@ -133,12 +133,13 @@ static int prism2mib_excludeunencrypted(struct mibrec *mib, struct p80211msg_dot11req_mibset *msg, void *data); -static int prism2mib_fragmentationthreshold(struct mibrec *mib, - int isget, - struct wlandevice *wlandev, - struct hfa384x *hw, - struct p80211msg_dot11req_mibset *msg, - void *data); +static int +prism2mib_fragmentationthreshold(struct mibrec *mib, + int isget, + struct wlandevice *wlandev, + struct hfa384x *hw, + struct p80211msg_dot11req_mibset *msg, + void *data); static int prism2mib_priv(struct mibrec *mib, int isget, @@ -652,12 +653,13 @@ static int prism2mib_excludeunencrypted(struct mibrec *mib, * */ -static int prism2mib_fragmentationthreshold(struct mibrec *mib, - int isget, - struct wlandevice *wlandev, - struct hfa384x *hw, - struct p80211msg_dot11req_mibset *msg, - void *data) +static int +prism2mib_fragmentationthreshold(struct mibrec *mib, + int isget, + struct wlandevice *wlandev, + struct hfa384x *hw, + struct p80211msg_dot11req_mibset *msg, + void *data) { u32 *uint32 = data; diff --git a/drivers/staging/xgifb/XGI_main_26.c b/drivers/staging/xgifb/XGI_main_26.c index eca0b50f0df6..217c6bb82c0d 100644 --- a/drivers/staging/xgifb/XGI_main_26.c +++ b/drivers/staging/xgifb/XGI_main_26.c @@ -923,8 +923,9 @@ static int XGIfb_do_set_var(struct fb_var_screeninfo *var, int isactive, if (!htotal || !vtotal) { pr_debug("Invalid 'var' information\n"); return -EINVAL; - } pr_debug("var->pixclock=%d, htotal=%d, vtotal=%d\n", - var->pixclock, htotal, vtotal); + } + pr_debug("var->pixclock=%d, htotal=%d, vtotal=%d\n", + var->pixclock, htotal, vtotal); if (var->pixclock) { drate = 1000000000 / var->pixclock; diff --git a/drivers/staging/xgifb/vb_setmode.c b/drivers/staging/xgifb/vb_setmode.c index 1fa0dc66406e..3782f8641bf2 100644 --- a/drivers/staging/xgifb/vb_setmode.c +++ b/drivers/staging/xgifb/vb_setmode.c @@ -654,10 +654,11 @@ static void XGI_UpdateXG21CRTC(unsigned short ModeNo, xgifb_reg_and(pVBInfo->P3d4, 0x11, 0x7F); /* Unlock CR0~7 */ if (ModeNo == 0x2E && (XGI330_RefIndex[RefreshRateTableIndex].Ext_CRT1CRTC == - RES640x480x60)) + RES640x480x60)) index = 12; - else if (ModeNo == 0x2E && (XGI330_RefIndex[RefreshRateTableIndex]. 
- Ext_CRT1CRTC == RES640x480x72)) + else if (ModeNo == 0x2E && + (XGI330_RefIndex[RefreshRateTableIndex].Ext_CRT1CRTC == + RES640x480x72)) index = 13; else if (ModeNo == 0x2F) index = 14; diff --git a/drivers/target/iscsi/cxgbit/cxgbit_cm.c b/drivers/target/iscsi/cxgbit/cxgbit_cm.c index b19c960d5490..dab09b610723 100644 --- a/drivers/target/iscsi/cxgbit/cxgbit_cm.c +++ b/drivers/target/iscsi/cxgbit/cxgbit_cm.c @@ -935,8 +935,8 @@ cxgbit_offload_init(struct cxgbit_sock *csk, int iptype, __u8 *peer_ip, goto out; csk->mtu = ndev->mtu; csk->tx_chan = cxgb4_port_chan(ndev); - csk->smac_idx = cxgb4_tp_smt_idx(cdev->lldi.adapter_type, - cxgb4_port_viid(ndev)); + csk->smac_idx = + ((struct port_info *)netdev_priv(ndev))->smt_idx; step = cdev->lldi.ntxq / cdev->lldi.nchan; csk->txq_idx = cxgb4_port_idx(ndev) * step; @@ -971,8 +971,8 @@ cxgbit_offload_init(struct cxgbit_sock *csk, int iptype, __u8 *peer_ip, port_id = cxgb4_port_idx(ndev); csk->mtu = dst_mtu(dst); csk->tx_chan = cxgb4_port_chan(ndev); - csk->smac_idx = cxgb4_tp_smt_idx(cdev->lldi.adapter_type, - cxgb4_port_viid(ndev)); + csk->smac_idx = + ((struct port_info *)netdev_priv(ndev))->smt_idx; step = cdev->lldi.ntxq / cdev->lldi.nports; csk->txq_idx = (port_id * step) + diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c index c1d5a173553d..984941e036c8 100644 --- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -1493,8 +1493,6 @@ __iscsit_check_dataout_hdr(struct iscsi_conn *conn, void *buf, if (hdr->flags & ISCSI_FLAG_CMD_FINAL) iscsit_stop_dataout_timer(cmd); - transport_check_aborted_status(se_cmd, - (hdr->flags & ISCSI_FLAG_CMD_FINAL)); return iscsit_dump_data_payload(conn, payload_length, 1); } } else { @@ -1509,12 +1507,9 @@ __iscsit_check_dataout_hdr(struct iscsi_conn *conn, void *buf, * TASK_ABORTED status. 
*/ if (se_cmd->transport_state & CMD_T_ABORTED) { - if (hdr->flags & ISCSI_FLAG_CMD_FINAL) - if (--cmd->outstanding_r2ts < 1) { - iscsit_stop_dataout_timer(cmd); - transport_check_aborted_status( - se_cmd, 1); - } + if (hdr->flags & ISCSI_FLAG_CMD_FINAL && + --cmd->outstanding_r2ts < 1) + iscsit_stop_dataout_timer(cmd); return iscsit_dump_data_payload(conn, payload_length, 1); } diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c index 95d0a22b2ad6..a5481dfeae8d 100644 --- a/drivers/target/iscsi/iscsi_target_configfs.c +++ b/drivers/target/iscsi/iscsi_target_configfs.c @@ -1343,11 +1343,6 @@ static struct configfs_attribute *lio_target_discovery_auth_attrs[] = { /* Start functions for target_core_fabric_ops */ -static char *iscsi_get_fabric_name(void) -{ - return "iSCSI"; -} - static int iscsi_get_cmd_state(struct se_cmd *se_cmd) { struct iscsi_cmd *cmd = container_of(se_cmd, struct iscsi_cmd, se_cmd); @@ -1549,9 +1544,9 @@ static void lio_release_cmd(struct se_cmd *se_cmd) const struct target_core_fabric_ops iscsi_ops = { .module = THIS_MODULE, - .name = "iscsi", + .fabric_alias = "iscsi", + .fabric_name = "iSCSI", .node_acl_size = sizeof(struct iscsi_node_acl), - .get_fabric_name = iscsi_get_fabric_name, .tpg_get_wwn = lio_tpg_get_endpoint_wwn, .tpg_get_tag = lio_tpg_get_tag, .tpg_get_default_depth = lio_tpg_get_default_depth, @@ -1596,4 +1591,6 @@ const struct target_core_fabric_ops iscsi_ops = { .tfc_tpg_nacl_attrib_attrs = lio_target_nacl_attrib_attrs, .tfc_tpg_nacl_auth_attrs = lio_target_nacl_auth_attrs, .tfc_tpg_nacl_param_attrs = lio_target_nacl_param_attrs, + + .write_pending_must_be_called = true, }; diff --git a/drivers/target/iscsi/iscsi_target_erl1.c b/drivers/target/iscsi/iscsi_target_erl1.c index a211e8154f4c..1b54a9c70851 100644 --- a/drivers/target/iscsi/iscsi_target_erl1.c +++ b/drivers/target/iscsi/iscsi_target_erl1.c @@ -943,20 +943,8 @@ int iscsit_execute_cmd(struct iscsi_cmd *cmd, int ooo) return 0; } spin_unlock_bh(&cmd->istate_lock); - /* - * Determine if delayed TASK_ABORTED status for WRITEs - * should be sent now if no unsolicited data out - * payloads are expected, or if the delayed status - * should be sent after unsolicited data out with - * ISCSI_FLAG_CMD_FINAL set in iscsi_handle_data_out() - */ - if (transport_check_aborted_status(se_cmd, - (cmd->unsolicited_data == 0)) != 0) + if (cmd->se_cmd.transport_state & CMD_T_ABORTED) return 0; - /* - * Otherwise send CHECK_CONDITION and sense for - * exception - */ return transport_send_check_condition_and_sense(se_cmd, cmd->sense_reason, 0); } @@ -974,13 +962,7 @@ int iscsit_execute_cmd(struct iscsi_cmd *cmd, int ooo) if (!(cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA)) { - /* - * Send the delayed TASK_ABORTED status for - * WRITEs if no more unsolicitied data is - * expected. - */ - if (transport_check_aborted_status(se_cmd, 1) - != 0) + if (cmd->se_cmd.transport_state & CMD_T_ABORTED) return 0; iscsit_set_dataout_sequence_values(cmd); @@ -995,11 +977,7 @@ int iscsit_execute_cmd(struct iscsi_cmd *cmd, int ooo) if ((cmd->data_direction == DMA_TO_DEVICE) && !(cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA)) { - /* - * Send the delayed TASK_ABORTED status for WRITEs if - * no more nsolicitied data is expected. 
- */ - if (transport_check_aborted_status(se_cmd, 1) != 0) + if (cmd->se_cmd.transport_state & CMD_T_ABORTED) return 0; iscsit_set_unsoliticed_dataout(cmd); diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c index 36b742932c72..86987da86dd6 100644 --- a/drivers/target/iscsi/iscsi_target_util.c +++ b/drivers/target/iscsi/iscsi_target_util.c @@ -150,24 +150,26 @@ void iscsit_free_r2ts_from_list(struct iscsi_cmd *cmd) static int iscsit_wait_for_tag(struct se_session *se_sess, int state, int *cpup) { int tag = -1; - DEFINE_WAIT(wait); + DEFINE_SBQ_WAIT(wait); struct sbq_wait_state *ws; + struct sbitmap_queue *sbq; if (state == TASK_RUNNING) return tag; - ws = &se_sess->sess_tag_pool.ws[0]; + sbq = &se_sess->sess_tag_pool; + ws = &sbq->ws[0]; for (;;) { - prepare_to_wait_exclusive(&ws->wait, &wait, state); + sbitmap_prepare_to_wait(sbq, ws, &wait, state); if (signal_pending_state(state, current)) break; - tag = sbitmap_queue_get(&se_sess->sess_tag_pool, cpup); + tag = sbitmap_queue_get(sbq, cpup); if (tag >= 0) break; schedule(); } - finish_wait(&ws->wait, &wait); + sbitmap_finish_wait(sbq, ws, &wait); return tag; } diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c index bc8918f382e4..7bd7c0c0db6f 100644 --- a/drivers/target/loopback/tcm_loop.c +++ b/drivers/target/loopback/tcm_loop.c @@ -324,7 +324,7 @@ static struct scsi_host_template tcm_loop_driver_template = { .sg_tablesize = 256, .cmd_per_lun = 1024, .max_sectors = 0xFFFF, - .use_clustering = DISABLE_CLUSTERING, + .dma_boundary = PAGE_SIZE - 1, .slave_alloc = tcm_loop_slave_alloc, .module = THIS_MODULE, .track_queue_depth = 1, @@ -460,11 +460,6 @@ static void tcm_loop_release_core_bus(void) pr_debug("Releasing TCM Loop Core BUS\n"); } -static char *tcm_loop_get_fabric_name(void) -{ - return "loopback"; -} - static inline struct tcm_loop_tpg *tl_tpg(struct se_portal_group *se_tpg) { return container_of(se_tpg, struct tcm_loop_tpg, tl_se_tpg); @@ -1149,8 +1144,7 @@ static struct configfs_attribute *tcm_loop_wwn_attrs[] = { static const struct target_core_fabric_ops loop_ops = { .module = THIS_MODULE, - .name = "loopback", - .get_fabric_name = tcm_loop_get_fabric_name, + .fabric_name = "loopback", .tpg_get_wwn = tcm_loop_get_endpoint_wwn, .tpg_get_tag = tcm_loop_get_tag, .tpg_check_demo_mode = tcm_loop_check_demo_mode, diff --git a/drivers/target/sbp/sbp_target.c b/drivers/target/sbp/sbp_target.c index 3d10189ecedc..08cee13dfb9a 100644 --- a/drivers/target/sbp/sbp_target.c +++ b/drivers/target/sbp/sbp_target.c @@ -1694,11 +1694,6 @@ static int sbp_check_false(struct se_portal_group *se_tpg) return 0; } -static char *sbp_get_fabric_name(void) -{ - return "sbp"; -} - static char *sbp_get_fabric_wwn(struct se_portal_group *se_tpg) { struct sbp_tpg *tpg = container_of(se_tpg, struct sbp_tpg, se_tpg); @@ -2323,8 +2318,7 @@ static struct configfs_attribute *sbp_tpg_attrib_attrs[] = { static const struct target_core_fabric_ops sbp_ops = { .module = THIS_MODULE, - .name = "sbp", - .get_fabric_name = sbp_get_fabric_name, + .fabric_name = "sbp", .tpg_get_wwn = sbp_get_fabric_wwn, .tpg_get_tag = sbp_get_tag, .tpg_check_demo_mode = sbp_check_true, diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c index 4f134b0c3e29..6b0d9beacf90 100644 --- a/drivers/target/target_core_alua.c +++ b/drivers/target/target_core_alua.c @@ -451,7 +451,7 @@ static inline void set_ascq(struct se_cmd *cmd, u8 alua_ascq) pr_debug("[%s]: ALUA TG Port not available, " 
"SenseKey: NOT_READY, ASC/ASCQ: " "0x04/0x%02x\n", - cmd->se_tfo->get_fabric_name(), alua_ascq); + cmd->se_tfo->fabric_name, alua_ascq); cmd->scsi_asc = 0x04; cmd->scsi_ascq = alua_ascq; @@ -1229,13 +1229,13 @@ static int core_alua_update_tpg_secondary_metadata(struct se_lun *lun) if (se_tpg->se_tpg_tfo->tpg_get_tag != NULL) { path = kasprintf(GFP_KERNEL, "%s/alua/%s/%s+%hu/lun_%llu", - db_root, se_tpg->se_tpg_tfo->get_fabric_name(), + db_root, se_tpg->se_tpg_tfo->fabric_name, se_tpg->se_tpg_tfo->tpg_get_wwn(se_tpg), se_tpg->se_tpg_tfo->tpg_get_tag(se_tpg), lun->unpacked_lun); } else { path = kasprintf(GFP_KERNEL, "%s/alua/%s/%s/lun_%llu", - db_root, se_tpg->se_tpg_tfo->get_fabric_name(), + db_root, se_tpg->se_tpg_tfo->fabric_name, se_tpg->se_tpg_tfo->tpg_get_wwn(se_tpg), lun->unpacked_lun); } diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c index f6b1549f4142..72016d0dfca5 100644 --- a/drivers/target/target_core_configfs.c +++ b/drivers/target/target_core_configfs.c @@ -172,7 +172,10 @@ static struct target_fabric_configfs *target_core_get_fabric( mutex_lock(&g_tf_lock); list_for_each_entry(tf, &g_tf_list, tf_list) { - if (!strcmp(tf->tf_ops->name, name)) { + const char *cmp_name = tf->tf_ops->fabric_alias; + if (!cmp_name) + cmp_name = tf->tf_ops->fabric_name; + if (!strcmp(cmp_name, name)) { atomic_inc(&tf->tf_access_cnt); mutex_unlock(&g_tf_lock); return tf; @@ -249,7 +252,7 @@ static struct config_group *target_core_register_fabric( return ERR_PTR(-EINVAL); } pr_debug("Target_Core_ConfigFS: REGISTER -> Located fabric:" - " %s\n", tf->tf_ops->name); + " %s\n", tf->tf_ops->fabric_name); /* * On a successful target_core_get_fabric() look, the returned * struct target_fabric_configfs *tf will contain a usage reference. 
@@ -282,7 +285,7 @@ static void target_core_deregister_fabric( " tf list\n", config_item_name(item)); pr_debug("Target_Core_ConfigFS: DEREGISTER -> located fabric:" - " %s\n", tf->tf_ops->name); + " %s\n", tf->tf_ops->fabric_name); atomic_dec(&tf->tf_access_cnt); pr_debug("Target_Core_ConfigFS: DEREGISTER -> Releasing ci" @@ -342,17 +345,20 @@ EXPORT_SYMBOL(target_undepend_item); static int target_fabric_tf_ops_check(const struct target_core_fabric_ops *tfo) { - if (!tfo->name) { - pr_err("Missing tfo->name\n"); - return -EINVAL; + if (tfo->fabric_alias) { + if (strlen(tfo->fabric_alias) >= TARGET_FABRIC_NAME_SIZE) { + pr_err("Passed alias: %s exceeds " + "TARGET_FABRIC_NAME_SIZE\n", tfo->fabric_alias); + return -EINVAL; + } } - if (strlen(tfo->name) >= TARGET_FABRIC_NAME_SIZE) { - pr_err("Passed name: %s exceeds TARGET_FABRIC" - "_NAME_SIZE\n", tfo->name); + if (!tfo->fabric_name) { + pr_err("Missing tfo->fabric_name\n"); return -EINVAL; } - if (!tfo->get_fabric_name) { - pr_err("Missing tfo->get_fabric_name()\n"); + if (strlen(tfo->fabric_name) >= TARGET_FABRIC_NAME_SIZE) { + pr_err("Passed name: %s exceeds " + "TARGET_FABRIC_NAME_SIZE\n", tfo->fabric_name); return -EINVAL; } if (!tfo->tpg_get_wwn) { @@ -486,7 +492,7 @@ void target_unregister_template(const struct target_core_fabric_ops *fo) mutex_lock(&g_tf_lock); list_for_each_entry(t, &g_tf_list, tf_list) { - if (!strcmp(t->tf_ops->name, fo->name)) { + if (!strcmp(t->tf_ops->fabric_name, fo->fabric_name)) { BUG_ON(atomic_read(&t->tf_access_cnt)); list_del(&t->tf_list); mutex_unlock(&g_tf_lock); @@ -532,9 +538,9 @@ DEF_CONFIGFS_ATTRIB_SHOW(emulate_tpu); DEF_CONFIGFS_ATTRIB_SHOW(emulate_tpws); DEF_CONFIGFS_ATTRIB_SHOW(emulate_caw); DEF_CONFIGFS_ATTRIB_SHOW(emulate_3pc); +DEF_CONFIGFS_ATTRIB_SHOW(emulate_pr); DEF_CONFIGFS_ATTRIB_SHOW(pi_prot_type); DEF_CONFIGFS_ATTRIB_SHOW(hw_pi_prot_type); -DEF_CONFIGFS_ATTRIB_SHOW(pi_prot_format); DEF_CONFIGFS_ATTRIB_SHOW(pi_prot_verify); DEF_CONFIGFS_ATTRIB_SHOW(enforce_pr_isids); DEF_CONFIGFS_ATTRIB_SHOW(is_nonrot); @@ -592,6 +598,7 @@ static ssize_t _name##_store(struct config_item *item, const char *page, \ DEF_CONFIGFS_ATTRIB_STORE_BOOL(emulate_fua_write); DEF_CONFIGFS_ATTRIB_STORE_BOOL(emulate_caw); DEF_CONFIGFS_ATTRIB_STORE_BOOL(emulate_3pc); +DEF_CONFIGFS_ATTRIB_STORE_BOOL(emulate_pr); DEF_CONFIGFS_ATTRIB_STORE_BOOL(enforce_pr_isids); DEF_CONFIGFS_ATTRIB_STORE_BOOL(is_nonrot); @@ -613,12 +620,17 @@ static void dev_set_t10_wwn_model_alias(struct se_device *dev) const char *configname; configname = config_item_name(&dev->dev_group.cg_item); - if (strlen(configname) >= 16) { + if (strlen(configname) >= INQUIRY_MODEL_LEN) { pr_warn("dev[%p]: Backstore name '%s' is too long for " - "INQUIRY_MODEL, truncating to 16 bytes\n", dev, + "INQUIRY_MODEL, truncating to 15 characters\n", dev, configname); } - snprintf(&dev->t10_wwn.model[0], 16, "%s", configname); + /* + * XXX We can't use sizeof(dev->t10_wwn.model) (INQUIRY_MODEL_LEN + 1) + * here without potentially breaking existing setups, so continue to + * truncate one byte shorter than what can be carried in INQUIRY. 
+ */ + strlcpy(dev->t10_wwn.model, configname, INQUIRY_MODEL_LEN); } static ssize_t emulate_model_alias_store(struct config_item *item, @@ -640,11 +652,12 @@ static ssize_t emulate_model_alias_store(struct config_item *item, if (ret < 0) return ret; + BUILD_BUG_ON(sizeof(dev->t10_wwn.model) != INQUIRY_MODEL_LEN + 1); if (flag) { dev_set_t10_wwn_model_alias(dev); } else { - strncpy(&dev->t10_wwn.model[0], - dev->transport->inquiry_prod, 16); + strlcpy(dev->t10_wwn.model, dev->transport->inquiry_prod, + sizeof(dev->t10_wwn.model)); } da->emulate_model_alias = flag; return count; @@ -1116,9 +1129,10 @@ CONFIGFS_ATTR(, emulate_tpu); CONFIGFS_ATTR(, emulate_tpws); CONFIGFS_ATTR(, emulate_caw); CONFIGFS_ATTR(, emulate_3pc); +CONFIGFS_ATTR(, emulate_pr); CONFIGFS_ATTR(, pi_prot_type); CONFIGFS_ATTR_RO(, hw_pi_prot_type); -CONFIGFS_ATTR(, pi_prot_format); +CONFIGFS_ATTR_WO(, pi_prot_format); CONFIGFS_ATTR(, pi_prot_verify); CONFIGFS_ATTR(, enforce_pr_isids); CONFIGFS_ATTR(, is_nonrot); @@ -1156,6 +1170,7 @@ struct configfs_attribute *sbc_attrib_attrs[] = { &attr_emulate_tpws, &attr_emulate_caw, &attr_emulate_3pc, + &attr_emulate_pr, &attr_pi_prot_type, &attr_hw_pi_prot_type, &attr_pi_prot_format, @@ -1210,6 +1225,74 @@ static struct t10_wwn *to_t10_wwn(struct config_item *item) return container_of(to_config_group(item), struct t10_wwn, t10_wwn_group); } +/* + * STANDARD and VPD page 0x83 T10 Vendor Identification + */ +static ssize_t target_wwn_vendor_id_show(struct config_item *item, + char *page) +{ + return sprintf(page, "%s\n", &to_t10_wwn(item)->vendor[0]); +} + +static ssize_t target_wwn_vendor_id_store(struct config_item *item, + const char *page, size_t count) +{ + struct t10_wwn *t10_wwn = to_t10_wwn(item); + struct se_device *dev = t10_wwn->t10_dev; + /* +2 to allow for a trailing (stripped) '\n' and null-terminator */ + unsigned char buf[INQUIRY_VENDOR_LEN + 2]; + char *stripped = NULL; + size_t len; + int i; + + len = strlcpy(buf, page, sizeof(buf)); + if (len < sizeof(buf)) { + /* Strip any newline added from userspace. */ + stripped = strstrip(buf); + len = strlen(stripped); + } + if (len > INQUIRY_VENDOR_LEN) { + pr_err("Emulated T10 Vendor Identification exceeds" + " INQUIRY_VENDOR_LEN: " __stringify(INQUIRY_VENDOR_LEN) + "\n"); + return -EOVERFLOW; + } + + /* + * SPC 4.3.1: + * ASCII data fields shall contain only ASCII printable characters (i.e., + * code values 20h to 7Eh) and may be terminated with one or more ASCII + * null (00h) characters. + */ + for (i = 0; i < len; i++) { + if ((stripped[i] < 0x20) || (stripped[i] > 0x7E)) { + pr_err("Emulated T10 Vendor Identification contains" + " non-ASCII-printable characters\n"); + return -EINVAL; + } + } + + /* + * Check to see if any active exports exist. If they do exist, fail + * here as changing this information on the fly (underneath the + * initiator side OS dependent multipath code) could cause negative + * effects. 
+ */ + if (dev->export_count) { + pr_err("Unable to set T10 Vendor Identification while" + " active %d exports exist\n", dev->export_count); + return -EINVAL; + } + + BUILD_BUG_ON(sizeof(dev->t10_wwn.vendor) != INQUIRY_VENDOR_LEN + 1); + strlcpy(dev->t10_wwn.vendor, stripped, sizeof(dev->t10_wwn.vendor)); + + pr_debug("Target_Core_ConfigFS: Set emulated T10 Vendor Identification:" + " %s\n", dev->t10_wwn.vendor); + + return count; +} + /* * VPD page 0x80 Unit serial */ @@ -1356,6 +1439,7 @@ DEF_DEV_WWN_ASSOC_SHOW(vpd_assoc_target_port, 0x10); /* VPD page 0x83 Association: SCSI Target Device */ DEF_DEV_WWN_ASSOC_SHOW(vpd_assoc_scsi_target_device, 0x20); +CONFIGFS_ATTR(target_wwn_, vendor_id); CONFIGFS_ATTR(target_wwn_, vpd_unit_serial); CONFIGFS_ATTR_RO(target_wwn_, vpd_protocol_identifier); CONFIGFS_ATTR_RO(target_wwn_, vpd_assoc_logical_unit); @@ -1363,6 +1447,7 @@ CONFIGFS_ATTR_RO(target_wwn_, vpd_assoc_target_port); CONFIGFS_ATTR_RO(target_wwn_, vpd_assoc_scsi_target_device); static struct configfs_attribute *target_core_dev_wwn_attrs[] = { + &target_wwn_attr_vendor_id, &target_wwn_attr_vpd_unit_serial, &target_wwn_attr_vpd_protocol_identifier, &target_wwn_attr_vpd_assoc_logical_unit, @@ -1400,7 +1485,7 @@ static ssize_t target_core_dev_pr_show_spc3_res(struct se_device *dev, core_pr_dump_initiator_port(pr_reg, i_buf, PR_REG_ISID_ID_LEN); return sprintf(page, "SPC-3 Reservation: %s Initiator: %s%s\n", - se_nacl->se_tpg->se_tpg_tfo->get_fabric_name(), + se_nacl->se_tpg->se_tpg_tfo->fabric_name, se_nacl->initiatorname, i_buf); } @@ -1414,7 +1499,7 @@ static ssize_t target_core_dev_pr_show_spc2_res(struct se_device *dev, if (se_nacl) { len = sprintf(page, "SPC-2 Reservation: %s Initiator: %s\n", - se_nacl->se_tpg->se_tpg_tfo->get_fabric_name(), + se_nacl->se_tpg->se_tpg_tfo->fabric_name, se_nacl->initiatorname); } else { len = sprintf(page, "No SPC-2 Reservation holder\n"); @@ -1427,6 +1512,9 @@ static ssize_t target_pr_res_holder_show(struct config_item *item, char *page) struct se_device *dev = pr_to_dev(item); int ret; + if (!dev->dev_attrib.emulate_pr) + return sprintf(page, "SPC_RESERVATIONS_DISABLED\n"); + if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) return sprintf(page, "Passthrough\n"); @@ -1489,13 +1577,13 @@ static ssize_t target_pr_res_pr_holder_tg_port_show(struct config_item *item, tfo = se_tpg->se_tpg_tfo; len += sprintf(page+len, "SPC-3 Reservation: %s" - " Target Node Endpoint: %s\n", tfo->get_fabric_name(), + " Target Node Endpoint: %s\n", tfo->fabric_name, tfo->tpg_get_wwn(se_tpg)); len += sprintf(page+len, "SPC-3 Reservation: Relative Port" " Identifier Tag: %hu %s Portal Group Tag: %hu" " %s Logical Unit: %llu\n", pr_reg->tg_pt_sep_rtpi, - tfo->get_fabric_name(), tfo->tpg_get_tag(se_tpg), - tfo->get_fabric_name(), pr_reg->pr_aptpl_target_lun); + tfo->fabric_name, tfo->tpg_get_tag(se_tpg), + tfo->fabric_name, pr_reg->pr_aptpl_target_lun); out_unlock: spin_unlock(&dev->dev_reservation_lock); @@ -1526,7 +1614,7 @@ static ssize_t target_pr_res_pr_registered_i_pts_show(struct config_item *item, core_pr_dump_initiator_port(pr_reg, i_buf, PR_REG_ISID_ID_LEN); sprintf(buf, "%s Node: %s%s Key: 0x%016Lx PRgen: 0x%08x\n", - tfo->get_fabric_name(), + tfo->fabric_name, pr_reg->pr_reg_nacl->initiatorname, i_buf, pr_reg->pr_res_key, pr_reg->pr_res_generation); @@ -1567,12 +1655,14 @@ static ssize_t target_pr_res_type_show(struct config_item *item, char *page) { struct se_device *dev = pr_to_dev(item); + if (!dev->dev_attrib.emulate_pr) + return sprintf(page, 
"SPC_RESERVATIONS_DISABLED\n"); if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) return sprintf(page, "SPC_PASSTHROUGH\n"); - else if (dev->dev_reservation_flags & DRF_SPC2_RESERVATIONS) + if (dev->dev_reservation_flags & DRF_SPC2_RESERVATIONS) return sprintf(page, "SPC2_RESERVATIONS\n"); - else - return sprintf(page, "SPC3_PERSISTENT_RESERVATIONS\n"); + + return sprintf(page, "SPC3_PERSISTENT_RESERVATIONS\n"); } static ssize_t target_pr_res_aptpl_active_show(struct config_item *item, @@ -1580,7 +1670,8 @@ static ssize_t target_pr_res_aptpl_active_show(struct config_item *item, { struct se_device *dev = pr_to_dev(item); - if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) + if (!dev->dev_attrib.emulate_pr || + (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR)) return 0; return sprintf(page, "APTPL Bit Status: %s\n", @@ -1592,7 +1683,8 @@ static ssize_t target_pr_res_aptpl_metadata_show(struct config_item *item, { struct se_device *dev = pr_to_dev(item); - if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) + if (!dev->dev_attrib.emulate_pr || + (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR)) return 0; return sprintf(page, "Ready to process PR APTPL metadata..\n"); @@ -1638,7 +1730,8 @@ static ssize_t target_pr_res_aptpl_metadata_store(struct config_item *item, u16 tpgt = 0; u8 type = 0; - if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) + if (!dev->dev_attrib.emulate_pr || + (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR)) return count; if (dev->dev_reservation_flags & DRF_SPC2_RESERVATIONS) return count; @@ -2746,7 +2839,7 @@ static ssize_t target_tg_pt_gp_members_show(struct config_item *item, struct se_portal_group *tpg = lun->lun_tpg; cur_len = snprintf(buf, TG_PT_GROUP_NAME_BUF, "%s/%s/tpgt_%hu" - "/%s\n", tpg->se_tpg_tfo->get_fabric_name(), + "/%s\n", tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_wwn(tpg), tpg->se_tpg_tfo->tpg_get_tag(tpg), config_item_name(&lun->lun_group.cg_item)); diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c index 47b5ef153135..93c56f4a9911 100644 --- a/drivers/target/target_core_device.c +++ b/drivers/target/target_core_device.c @@ -95,7 +95,7 @@ transport_lookup_cmd_lun(struct se_cmd *se_cmd, u64 unpacked_lun) deve->lun_access_ro) { pr_err("TARGET_CORE[%s]: Detected WRITE_PROTECTED LUN" " Access for 0x%08llx\n", - se_cmd->se_tfo->get_fabric_name(), + se_cmd->se_tfo->fabric_name, unpacked_lun); rcu_read_unlock(); ret = TCM_WRITE_PROTECTED; @@ -114,7 +114,7 @@ out_unlock: if (unpacked_lun != 0) { pr_err("TARGET_CORE[%s]: Detected NON_EXISTENT_LUN" " Access for 0x%08llx\n", - se_cmd->se_tfo->get_fabric_name(), + se_cmd->se_tfo->fabric_name, unpacked_lun); return TCM_NON_EXISTENT_LUN; } @@ -188,7 +188,7 @@ out_unlock: if (!se_lun) { pr_debug("TARGET_CORE[%s]: Detected NON_EXISTENT_LUN" " Access for 0x%08llx\n", - se_cmd->se_tfo->get_fabric_name(), + se_cmd->se_tfo->fabric_name, unpacked_lun); return -ENODEV; } @@ -237,7 +237,7 @@ struct se_dev_entry *core_get_se_deve_from_rtpi( if (!lun) { pr_err("%s device entries device pointer is" " NULL, but Initiator has access.\n", - tpg->se_tpg_tfo->get_fabric_name()); + tpg->se_tpg_tfo->fabric_name); continue; } if (lun->lun_rtpi != rtpi) @@ -571,9 +571,9 @@ int core_dev_add_lun( return rc; pr_debug("%s_TPG[%u]_LUN[%llu] - Activated %s Logical Unit from" - " CORE HBA: %u\n", tpg->se_tpg_tfo->get_fabric_name(), + " CORE HBA: %u\n", 
tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), lun->unpacked_lun, - tpg->se_tpg_tfo->get_fabric_name(), dev->se_hba->hba_id); + tpg->se_tpg_tfo->fabric_name, dev->se_hba->hba_id); /* * Update LUN maps for dynamically added initiators when * generate_node_acl is enabled. @@ -604,9 +604,9 @@ void core_dev_del_lun( struct se_lun *lun) { pr_debug("%s_TPG[%u]_LUN[%llu] - Deactivating %s Logical Unit from" - " device object\n", tpg->se_tpg_tfo->get_fabric_name(), + " device object\n", tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), lun->unpacked_lun, - tpg->se_tpg_tfo->get_fabric_name()); + tpg->se_tpg_tfo->fabric_name); core_tpg_remove_lun(tpg, lun); } @@ -621,7 +621,7 @@ struct se_lun_acl *core_dev_init_initiator_node_lun_acl( if (strlen(nacl->initiatorname) >= TRANSPORT_IQN_LEN) { pr_err("%s InitiatorName exceeds maximum size.\n", - tpg->se_tpg_tfo->get_fabric_name()); + tpg->se_tpg_tfo->fabric_name); *ret = -EOVERFLOW; return NULL; } @@ -664,7 +664,7 @@ int core_dev_add_initiator_node_lun_acl( return -EINVAL; pr_debug("%s_TPG[%hu]_LUN[%llu->%llu] - Added %s ACL for " - " InitiatorNode: %s\n", tpg->se_tpg_tfo->get_fabric_name(), + " InitiatorNode: %s\n", tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), lun->unpacked_lun, lacl->mapped_lun, lun_access_ro ? "RO" : "RW", nacl->initiatorname); @@ -697,7 +697,7 @@ int core_dev_del_initiator_node_lun_acl( pr_debug("%s_TPG[%hu]_LUN[%llu] - Removed ACL for" " InitiatorNode: %s Mapped LUN: %llu\n", - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), lun->unpacked_lun, nacl->initiatorname, lacl->mapped_lun); @@ -709,9 +709,9 @@ void core_dev_free_initiator_node_lun_acl( struct se_lun_acl *lacl) { pr_debug("%s_TPG[%hu] - Freeing ACL for %s InitiatorNode: %s" - " Mapped LUN: %llu\n", tpg->se_tpg_tfo->get_fabric_name(), + " Mapped LUN: %llu\n", tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, lacl->se_lun_nacl->initiatorname, lacl->mapped_lun); kfree(lacl); @@ -720,36 +720,17 @@ void core_dev_free_initiator_node_lun_acl( static void scsi_dump_inquiry(struct se_device *dev) { struct t10_wwn *wwn = &dev->t10_wwn; - char buf[17]; - int i, device_type; + int device_type = dev->transport->get_device_type(dev); + /* * Print Linux/SCSI style INQUIRY formatting to the kernel ring buffer */ - for (i = 0; i < 8; i++) - if (wwn->vendor[i] >= 0x20) - buf[i] = wwn->vendor[i]; - else - buf[i] = ' '; - buf[i] = '\0'; - pr_debug(" Vendor: %s\n", buf); - - for (i = 0; i < 16; i++) - if (wwn->model[i] >= 0x20) - buf[i] = wwn->model[i]; - else - buf[i] = ' '; - buf[i] = '\0'; - pr_debug(" Model: %s\n", buf); - - for (i = 0; i < 4; i++) - if (wwn->revision[i] >= 0x20) - buf[i] = wwn->revision[i]; - else - buf[i] = ' '; - buf[i] = '\0'; - pr_debug(" Revision: %s\n", buf); - - device_type = dev->transport->get_device_type(dev); + pr_debug(" Vendor: %-" __stringify(INQUIRY_VENDOR_LEN) "s\n", + wwn->vendor); + pr_debug(" Model: %-" __stringify(INQUIRY_MODEL_LEN) "s\n", + wwn->model); + pr_debug(" Revision: %-" __stringify(INQUIRY_REVISION_LEN) "s\n", + wwn->revision); pr_debug(" Type: %s ", scsi_device_type(device_type)); } @@ -805,6 +786,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name) dev->dev_attrib.emulate_tpws = DA_EMULATE_TPWS; dev->dev_attrib.emulate_caw = DA_EMULATE_CAW; dev->dev_attrib.emulate_3pc = DA_EMULATE_3PC; + dev->dev_attrib.emulate_pr = 
DA_EMULATE_PR; dev->dev_attrib.pi_prot_type = TARGET_DIF_TYPE0_PROT; dev->dev_attrib.enforce_pr_isids = DA_ENFORCE_PR_ISIDS; dev->dev_attrib.force_pr_aptpl = DA_FORCE_PR_APTPL; @@ -822,13 +804,19 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name) xcopy_lun = &dev->xcopy_lun; rcu_assign_pointer(xcopy_lun->lun_se_dev, dev); - init_completion(&xcopy_lun->lun_ref_comp); init_completion(&xcopy_lun->lun_shutdown_comp); INIT_LIST_HEAD(&xcopy_lun->lun_deve_list); INIT_LIST_HEAD(&xcopy_lun->lun_dev_link); mutex_init(&xcopy_lun->lun_tg_pt_md_mutex); xcopy_lun->lun_tpg = &xcopy_pt_tpg; + /* Preload the default INQUIRY const values */ + strlcpy(dev->t10_wwn.vendor, "LIO-ORG", sizeof(dev->t10_wwn.vendor)); + strlcpy(dev->t10_wwn.model, dev->transport->inquiry_prod, + sizeof(dev->t10_wwn.model)); + strlcpy(dev->t10_wwn.revision, dev->transport->inquiry_rev, + sizeof(dev->t10_wwn.revision)); + return dev; } @@ -986,36 +974,11 @@ int target_configure_device(struct se_device *dev) if (ret) goto out_destroy_device; - /* - * Startup the struct se_device processing thread - */ - dev->tmr_wq = alloc_workqueue("tmr-%s", WQ_MEM_RECLAIM | WQ_UNBOUND, 1, - dev->transport->name); - if (!dev->tmr_wq) { - pr_err("Unable to create tmr workqueue for %s\n", - dev->transport->name); - ret = -ENOMEM; - goto out_free_alua; - } - /* * Setup work_queue for QUEUE_FULL */ INIT_WORK(&dev->qf_work_queue, target_qf_do_work); - /* - * Preload the initial INQUIRY const values if we are doing - * anything virtual (IBLOCK, FILEIO, RAMDISK), but not for TCM/pSCSI - * passthrough because this is being provided by the backend LLD. - */ - if (!(dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)) { - strncpy(&dev->t10_wwn.vendor[0], "LIO-ORG", 8); - strncpy(&dev->t10_wwn.model[0], - dev->transport->inquiry_prod, 16); - strncpy(&dev->t10_wwn.revision[0], - dev->transport->inquiry_rev, 4); - } - scsi_dump_inquiry(dev); spin_lock(&hba->device_lock); @@ -1026,8 +989,6 @@ int target_configure_device(struct se_device *dev) return 0; -out_free_alua: - core_alua_free_lu_gp_mem(dev); out_destroy_device: dev->transport->destroy_device(dev); out_free_index: @@ -1046,8 +1007,6 @@ void target_free_device(struct se_device *dev) WARN_ON(!list_empty(&dev->dev_sep_list)); if (target_dev_configured(dev)) { - destroy_workqueue(dev->tmr_wq); - dev->transport->destroy_device(dev); mutex_lock(&device_mutex); @@ -1158,6 +1117,18 @@ passthrough_parse_cdb(struct se_cmd *cmd, return TCM_NO_SENSE; } + /* + * With emulate_pr disabled, all reservation requests should fail, + * regardless of whether or not TRANSPORT_FLAG_PASSTHROUGH_PGR is set. 
+ */ + if (!dev->dev_attrib.emulate_pr && + ((cdb[0] == PERSISTENT_RESERVE_IN) || + (cdb[0] == PERSISTENT_RESERVE_OUT) || + (cdb[0] == RELEASE || cdb[0] == RELEASE_10) || + (cdb[0] == RESERVE || cdb[0] == RESERVE_10))) { + return TCM_UNSUPPORTED_SCSI_OPCODE; + } + /* * For PERSISTENT RESERVE IN/OUT, RELEASE, and RESERVE we need to * emulate the response, since tcmu does not have the information diff --git a/drivers/target/target_core_fabric_configfs.c b/drivers/target/target_core_fabric_configfs.c index aa2f4f632ebe..9a6e20a2af7d 100644 --- a/drivers/target/target_core_fabric_configfs.c +++ b/drivers/target/target_core_fabric_configfs.c @@ -203,7 +203,7 @@ static ssize_t target_fabric_mappedlun_write_protect_store( pr_debug("%s_ConfigFS: Changed Initiator ACL: %s" " Mapped LUN: %llu Write Protect bit to %s\n", - se_tpg->se_tpg_tfo->get_fabric_name(), + se_tpg->se_tpg_tfo->fabric_name, se_nacl->initiatorname, lacl->mapped_lun, (wp) ? "ON" : "OFF"); return count; diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h index 0c6635587930..853344415963 100644 --- a/drivers/target/target_core_internal.h +++ b/drivers/target/target_core_internal.h @@ -138,7 +138,6 @@ int init_se_kmem_caches(void); void release_se_kmem_caches(void); u32 scsi_get_new_index(scsi_index_t); void transport_subsystem_check_init(void); -int transport_cmd_finish_abort(struct se_cmd *); unsigned char *transport_dump_cmd_direction(struct se_cmd *); void transport_dump_dev_state(struct se_device *, char *, int *); void transport_dump_dev_info(struct se_device *, struct se_lun *, @@ -148,7 +147,6 @@ int transport_dump_vpd_assoc(struct t10_vpd *, unsigned char *, int); int transport_dump_vpd_ident_type(struct t10_vpd *, unsigned char *, int); int transport_dump_vpd_ident(struct t10_vpd *, unsigned char *, int); void transport_clear_lun_ref(struct se_lun *); -void transport_send_task_abort(struct se_cmd *); sense_reason_t target_cmd_size_check(struct se_cmd *cmd, unsigned int size); void target_qf_do_work(struct work_struct *work); bool target_check_wce(struct se_device *dev); diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c index 10db5656fd5d..397f38cb7f4e 100644 --- a/drivers/target/target_core_pr.c +++ b/drivers/target/target_core_pr.c @@ -235,7 +235,7 @@ target_scsi2_reservation_release(struct se_cmd *cmd) tpg = sess->se_tpg; pr_debug("SCSI-2 Released reservation for %s LUN: %llu ->" " MAPPED LUN: %llu for %s\n", - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, cmd->se_lun->unpacked_lun, cmd->orig_fe_lun, sess->se_node_acl->initiatorname); @@ -278,7 +278,7 @@ target_scsi2_reservation_reserve(struct se_cmd *cmd) if (dev->dev_reserved_node_acl && (dev->dev_reserved_node_acl != sess->se_node_acl)) { pr_err("SCSI-2 RESERVATION CONFLIFT for %s fabric\n", - tpg->se_tpg_tfo->get_fabric_name()); + tpg->se_tpg_tfo->fabric_name); pr_err("Original reserver LUN: %llu %s\n", cmd->se_lun->unpacked_lun, dev->dev_reserved_node_acl->initiatorname); @@ -297,7 +297,7 @@ target_scsi2_reservation_reserve(struct se_cmd *cmd) dev->dev_reservation_flags |= DRF_SPC2_RESERVATIONS_WITH_ISID; } pr_debug("SCSI-2 Reserved %s LUN: %llu -> MAPPED LUN: %llu" - " for %s\n", tpg->se_tpg_tfo->get_fabric_name(), + " for %s\n", tpg->se_tpg_tfo->fabric_name, cmd->se_lun->unpacked_lun, cmd->orig_fe_lun, sess->se_node_acl->initiatorname); @@ -914,11 +914,11 @@ static void core_scsi3_aptpl_reserve( pr_debug("SPC-3 PR [%s] Service Action: APTPL RESERVE created" " new reservation 
holder TYPE: %s ALL_TG_PT: %d\n", - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, core_scsi3_pr_dump_type(pr_reg->pr_res_type), (pr_reg->pr_reg_all_tg_pt) ? 1 : 0); pr_debug("SPC-3 PR [%s] RESERVE Node: %s%s\n", - tpg->se_tpg_tfo->get_fabric_name(), node_acl->initiatorname, + tpg->se_tpg_tfo->fabric_name, node_acl->initiatorname, i_buf); } @@ -1036,19 +1036,19 @@ static void __core_scsi3_dump_registration( core_pr_dump_initiator_port(pr_reg, i_buf, PR_REG_ISID_ID_LEN); pr_debug("SPC-3 PR [%s] Service Action: REGISTER%s Initiator" - " Node: %s%s\n", tfo->get_fabric_name(), (register_type == REGISTER_AND_MOVE) ? + " Node: %s%s\n", tfo->fabric_name, (register_type == REGISTER_AND_MOVE) ? "_AND_MOVE" : (register_type == REGISTER_AND_IGNORE_EXISTING_KEY) ? "_AND_IGNORE_EXISTING_KEY" : "", nacl->initiatorname, i_buf); pr_debug("SPC-3 PR [%s] registration on Target Port: %s,0x%04x\n", - tfo->get_fabric_name(), tfo->tpg_get_wwn(se_tpg), + tfo->fabric_name, tfo->tpg_get_wwn(se_tpg), tfo->tpg_get_tag(se_tpg)); pr_debug("SPC-3 PR [%s] for %s TCM Subsystem %s Object Target" - " Port(s)\n", tfo->get_fabric_name(), + " Port(s)\n", tfo->fabric_name, (pr_reg->pr_reg_all_tg_pt) ? "ALL" : "SINGLE", dev->transport->name); pr_debug("SPC-3 PR [%s] SA Res Key: 0x%016Lx PRgeneration:" - " 0x%08x APTPL: %d\n", tfo->get_fabric_name(), + " 0x%08x APTPL: %d\n", tfo->fabric_name, pr_reg->pr_res_key, pr_reg->pr_res_generation, pr_reg->pr_reg_aptpl); } @@ -1329,7 +1329,7 @@ static void __core_scsi3_free_registration( */ while (atomic_read(&pr_reg->pr_res_holders) != 0) { pr_debug("SPC-3 PR [%s] waiting for pr_res_holders\n", - tfo->get_fabric_name()); + tfo->fabric_name); cpu_relax(); } @@ -1341,15 +1341,15 @@ static void __core_scsi3_free_registration( spin_lock(&pr_tmpl->registration_lock); pr_debug("SPC-3 PR [%s] Service Action: UNREGISTER Initiator" - " Node: %s%s\n", tfo->get_fabric_name(), + " Node: %s%s\n", tfo->fabric_name, pr_reg->pr_reg_nacl->initiatorname, i_buf); pr_debug("SPC-3 PR [%s] for %s TCM Subsystem %s Object Target" - " Port(s)\n", tfo->get_fabric_name(), + " Port(s)\n", tfo->fabric_name, (pr_reg->pr_reg_all_tg_pt) ? 
"ALL" : "SINGLE", dev->transport->name); pr_debug("SPC-3 PR [%s] SA Res Key: 0x%016Lx PRgeneration:" - " 0x%08x\n", tfo->get_fabric_name(), pr_reg->pr_res_key, + " 0x%08x\n", tfo->fabric_name, pr_reg->pr_res_key, pr_reg->pr_res_generation); if (!preempt_and_abort_list) { @@ -1645,7 +1645,7 @@ core_scsi3_decode_spec_i_port( dest_tpg = tmp_tpg; pr_debug("SPC-3 PR SPEC_I_PT: Located %s Node:" " %s Port RTPI: %hu\n", - dest_tpg->se_tpg_tfo->get_fabric_name(), + dest_tpg->se_tpg_tfo->fabric_name, dest_node_acl->initiatorname, dest_rtpi); spin_lock(&dev->se_port_lock); @@ -1662,7 +1662,7 @@ core_scsi3_decode_spec_i_port( pr_debug("SPC-3 PR SPEC_I_PT: Got %s data_length: %u tpdl: %u" " tid_len: %d for %s + %s\n", - dest_tpg->se_tpg_tfo->get_fabric_name(), cmd->data_length, + dest_tpg->se_tpg_tfo->fabric_name, cmd->data_length, tpdl, tid_len, i_str, iport_ptr); if (tid_len > tpdl) { @@ -1683,7 +1683,7 @@ core_scsi3_decode_spec_i_port( if (!dest_se_deve) { pr_err("Unable to locate %s dest_se_deve" " from destination RTPI: %hu\n", - dest_tpg->se_tpg_tfo->get_fabric_name(), + dest_tpg->se_tpg_tfo->fabric_name, dest_rtpi); core_scsi3_nodeacl_undepend_item(dest_node_acl); @@ -1704,7 +1704,7 @@ core_scsi3_decode_spec_i_port( pr_debug("SPC-3 PR SPEC_I_PT: Located %s Node: %s" " dest_se_deve mapped_lun: %llu\n", - dest_tpg->se_tpg_tfo->get_fabric_name(), + dest_tpg->se_tpg_tfo->fabric_name, dest_node_acl->initiatorname, dest_se_deve->mapped_lun); /* @@ -1815,7 +1815,7 @@ core_scsi3_decode_spec_i_port( pr_debug("SPC-3 PR [%s] SPEC_I_PT: Successfully" " registered Transport ID for Node: %s%s Mapped LUN:" - " %llu\n", dest_tpg->se_tpg_tfo->get_fabric_name(), + " %llu\n", dest_tpg->se_tpg_tfo->fabric_name, dest_node_acl->initiatorname, i_buf, (dest_se_deve) ? dest_se_deve->mapped_lun : 0); @@ -1913,7 +1913,7 @@ static int core_scsi3_update_aptpl_buf( "res_holder=1\nres_type=%02x\n" "res_scope=%02x\nres_all_tg_pt=%d\n" "mapped_lun=%llu\n", reg_count, - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, pr_reg->pr_reg_nacl->initiatorname, isid_buf, pr_reg->pr_res_key, pr_reg->pr_res_type, pr_reg->pr_res_scope, pr_reg->pr_reg_all_tg_pt, @@ -1923,7 +1923,7 @@ static int core_scsi3_update_aptpl_buf( "initiator_fabric=%s\ninitiator_node=%s\n%s" "sa_res_key=%llu\nres_holder=0\n" "res_all_tg_pt=%d\nmapped_lun=%llu\n", - reg_count, tpg->se_tpg_tfo->get_fabric_name(), + reg_count, tpg->se_tpg_tfo->fabric_name, pr_reg->pr_reg_nacl->initiatorname, isid_buf, pr_reg->pr_res_key, pr_reg->pr_reg_all_tg_pt, pr_reg->pr_res_mapped_lun); @@ -1942,7 +1942,7 @@ static int core_scsi3_update_aptpl_buf( */ snprintf(tmp, 512, "target_fabric=%s\ntarget_node=%s\n" "tpgt=%hu\nport_rtpi=%hu\ntarget_lun=%llu\nPR_REG_END:" - " %d\n", tpg->se_tpg_tfo->get_fabric_name(), + " %d\n", tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_wwn(tpg), tpg->se_tpg_tfo->tpg_get_tag(tpg), pr_reg->tg_pt_sep_rtpi, pr_reg->pr_aptpl_target_lun, @@ -2168,7 +2168,7 @@ core_scsi3_emulate_pro_register(struct se_cmd *cmd, u64 res_key, u64 sa_res_key, pr_reg->pr_res_key = sa_res_key; pr_debug("SPC-3 PR [%s] REGISTER%s: Changed Reservation" " Key for %s to: 0x%016Lx PRgeneration:" - " 0x%08x\n", cmd->se_tfo->get_fabric_name(), + " 0x%08x\n", cmd->se_tfo->fabric_name, (register_type == REGISTER_AND_IGNORE_EXISTING_KEY) ? 
"_AND_IGNORE_EXISTING_KEY" : "", pr_reg->pr_reg_nacl->initiatorname, pr_reg->pr_res_key, pr_reg->pr_res_generation); @@ -2356,9 +2356,9 @@ core_scsi3_pro_reserve(struct se_cmd *cmd, int type, int scope, u64 res_key) pr_err("SPC-3 PR: Attempted RESERVE from" " [%s]: %s while reservation already held by" " [%s]: %s, returning RESERVATION_CONFLICT\n", - cmd->se_tfo->get_fabric_name(), + cmd->se_tfo->fabric_name, se_sess->se_node_acl->initiatorname, - pr_res_nacl->se_tpg->se_tpg_tfo->get_fabric_name(), + pr_res_nacl->se_tpg->se_tpg_tfo->fabric_name, pr_res_holder->pr_reg_nacl->initiatorname); spin_unlock(&dev->dev_reservation_lock); @@ -2379,9 +2379,9 @@ core_scsi3_pro_reserve(struct se_cmd *cmd, int type, int scope, u64 res_key) " [%s]: %s trying to change TYPE and/or SCOPE," " while reservation already held by [%s]: %s," " returning RESERVATION_CONFLICT\n", - cmd->se_tfo->get_fabric_name(), + cmd->se_tfo->fabric_name, se_sess->se_node_acl->initiatorname, - pr_res_nacl->se_tpg->se_tpg_tfo->get_fabric_name(), + pr_res_nacl->se_tpg->se_tpg_tfo->fabric_name, pr_res_holder->pr_reg_nacl->initiatorname); spin_unlock(&dev->dev_reservation_lock); @@ -2414,10 +2414,10 @@ core_scsi3_pro_reserve(struct se_cmd *cmd, int type, int scope, u64 res_key) pr_debug("SPC-3 PR [%s] Service Action: RESERVE created new" " reservation holder TYPE: %s ALL_TG_PT: %d\n", - cmd->se_tfo->get_fabric_name(), core_scsi3_pr_dump_type(type), + cmd->se_tfo->fabric_name, core_scsi3_pr_dump_type(type), (pr_reg->pr_reg_all_tg_pt) ? 1 : 0); pr_debug("SPC-3 PR [%s] RESERVE Node: %s%s\n", - cmd->se_tfo->get_fabric_name(), + cmd->se_tfo->fabric_name, se_sess->se_node_acl->initiatorname, i_buf); spin_unlock(&dev->dev_reservation_lock); @@ -2506,12 +2506,12 @@ out: if (!dev->dev_pr_res_holder) { pr_debug("SPC-3 PR [%s] Service Action: %s RELEASE cleared" " reservation holder TYPE: %s ALL_TG_PT: %d\n", - tfo->get_fabric_name(), (explicit) ? "explicit" : + tfo->fabric_name, (explicit) ? "explicit" : "implicit", core_scsi3_pr_dump_type(pr_res_type), (pr_reg->pr_reg_all_tg_pt) ? 1 : 0); } pr_debug("SPC-3 PR [%s] RELEASE Node: %s%s\n", - tfo->get_fabric_name(), se_nacl->initiatorname, + tfo->fabric_name, se_nacl->initiatorname, i_buf); /* * Clear TYPE and SCOPE for the next PROUT Service Action: RESERVE @@ -2609,9 +2609,9 @@ core_scsi3_emulate_pro_release(struct se_cmd *cmd, int type, int scope, " reservation from [%s]: %s with different TYPE " "and/or SCOPE while reservation already held by" " [%s]: %s, returning RESERVATION_CONFLICT\n", - cmd->se_tfo->get_fabric_name(), + cmd->se_tfo->fabric_name, se_sess->se_node_acl->initiatorname, - pr_res_nacl->se_tpg->se_tpg_tfo->get_fabric_name(), + pr_res_nacl->se_tpg->se_tpg_tfo->fabric_name, pr_res_holder->pr_reg_nacl->initiatorname); spin_unlock(&dev->dev_reservation_lock); @@ -2752,7 +2752,7 @@ core_scsi3_emulate_pro_clear(struct se_cmd *cmd, u64 res_key) spin_unlock(&pr_tmpl->registration_lock); pr_debug("SPC-3 PR [%s] Service Action: CLEAR complete\n", - cmd->se_tfo->get_fabric_name()); + cmd->se_tfo->fabric_name); core_scsi3_update_and_write_aptpl(cmd->se_dev, false); @@ -2791,11 +2791,11 @@ static void __core_scsi3_complete_pro_preempt( pr_debug("SPC-3 PR [%s] Service Action: PREEMPT%s created new" " reservation holder TYPE: %s ALL_TG_PT: %d\n", - tfo->get_fabric_name(), (preempt_type == PREEMPT_AND_ABORT) ? "_AND_ABORT" : "", + tfo->fabric_name, (preempt_type == PREEMPT_AND_ABORT) ? "_AND_ABORT" : "", core_scsi3_pr_dump_type(type), (pr_reg->pr_reg_all_tg_pt) ? 
1 : 0); pr_debug("SPC-3 PR [%s] PREEMPT%s from Node: %s%s\n", - tfo->get_fabric_name(), (preempt_type == PREEMPT_AND_ABORT) ? "_AND_ABORT" : "", + tfo->fabric_name, (preempt_type == PREEMPT_AND_ABORT) ? "_AND_ABORT" : "", nacl->initiatorname, i_buf); /* * For PREEMPT_AND_ABORT, add the preempting reservation's @@ -3282,7 +3282,7 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key, " proto_ident: 0x%02x does not match ident: 0x%02x" " from fabric: %s\n", proto_ident, dest_se_tpg->proto_id, - dest_tf_ops->get_fabric_name()); + dest_tf_ops->fabric_name); ret = TCM_INVALID_PARAMETER_LIST; goto out; } @@ -3299,7 +3299,7 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key, buf = NULL; pr_debug("SPC-3 PR [%s] Extracted initiator %s identifier: %s" - " %s\n", dest_tf_ops->get_fabric_name(), (iport_ptr != NULL) ? + " %s\n", dest_tf_ops->fabric_name, (iport_ptr != NULL) ? "port" : "device", initiator_str, (iport_ptr != NULL) ? iport_ptr : ""); /* @@ -3344,7 +3344,7 @@ after_iport_check: if (!dest_node_acl) { pr_err("Unable to locate %s dest_node_acl for" - " TransportID%s\n", dest_tf_ops->get_fabric_name(), + " TransportID%s\n", dest_tf_ops->fabric_name, initiator_str); ret = TCM_INVALID_PARAMETER_LIST; goto out; @@ -3360,7 +3360,7 @@ after_iport_check: } pr_debug("SPC-3 PR REGISTER_AND_MOVE: Found %s dest_node_acl:" - " %s from TransportID\n", dest_tf_ops->get_fabric_name(), + " %s from TransportID\n", dest_tf_ops->fabric_name, dest_node_acl->initiatorname); /* @@ -3370,7 +3370,7 @@ after_iport_check: dest_se_deve = core_get_se_deve_from_rtpi(dest_node_acl, rtpi); if (!dest_se_deve) { pr_err("Unable to locate %s dest_se_deve from RTPI:" - " %hu\n", dest_tf_ops->get_fabric_name(), rtpi); + " %hu\n", dest_tf_ops->fabric_name, rtpi); ret = TCM_INVALID_PARAMETER_LIST; goto out; } @@ -3385,7 +3385,7 @@ after_iport_check: pr_debug("SPC-3 PR REGISTER_AND_MOVE: Located %s node %s LUN" " ACL for dest_se_deve->mapped_lun: %llu\n", - dest_tf_ops->get_fabric_name(), dest_node_acl->initiatorname, + dest_tf_ops->fabric_name, dest_node_acl->initiatorname, dest_se_deve->mapped_lun); /* @@ -3501,13 +3501,13 @@ after_iport_check: pr_debug("SPC-3 PR [%s] Service Action: REGISTER_AND_MOVE" " created new reservation holder TYPE: %s on object RTPI:" - " %hu PRGeneration: 0x%08x\n", dest_tf_ops->get_fabric_name(), + " %hu PRGeneration: 0x%08x\n", dest_tf_ops->fabric_name, core_scsi3_pr_dump_type(type), rtpi, dest_pr_reg->pr_res_generation); pr_debug("SPC-3 PR Successfully moved reservation from" " %s Fabric Node: %s%s -> %s Fabric Node: %s %s\n", - tf_ops->get_fabric_name(), pr_reg_nacl->initiatorname, - i_buf, dest_tf_ops->get_fabric_name(), + tf_ops->fabric_name, pr_reg_nacl->initiatorname, + i_buf, dest_tf_ops->fabric_name, dest_node_acl->initiatorname, (iport_ptr != NULL) ? 
iport_ptr : ""); /* @@ -4095,6 +4095,8 @@ target_check_reservation(struct se_cmd *cmd) return 0; if (dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE) return 0; + if (!dev->dev_attrib.emulate_pr) + return 0; if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) return 0; diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c index 47d76c862014..b5388a106567 100644 --- a/drivers/target/target_core_pscsi.c +++ b/drivers/target/target_core_pscsi.c @@ -179,20 +179,20 @@ out_free: static void pscsi_set_inquiry_info(struct scsi_device *sdev, struct t10_wwn *wwn) { - unsigned char *buf; - if (sdev->inquiry_len < INQUIRY_LEN) return; - - buf = sdev->inquiry; - if (!buf) - return; /* - * Use sdev->inquiry from drivers/scsi/scsi_scan.c:scsi_alloc_sdev() + * Use sdev->inquiry data from drivers/scsi/scsi_scan.c:scsi_add_lun() */ - memcpy(&wwn->vendor[0], &buf[8], sizeof(wwn->vendor)); - memcpy(&wwn->model[0], &buf[16], sizeof(wwn->model)); - memcpy(&wwn->revision[0], &buf[32], sizeof(wwn->revision)); + BUILD_BUG_ON(sizeof(wwn->vendor) != INQUIRY_VENDOR_LEN + 1); + snprintf(wwn->vendor, sizeof(wwn->vendor), + "%." __stringify(INQUIRY_VENDOR_LEN) "s", sdev->vendor); + BUILD_BUG_ON(sizeof(wwn->model) != INQUIRY_MODEL_LEN + 1); + snprintf(wwn->model, sizeof(wwn->model), + "%." __stringify(INQUIRY_MODEL_LEN) "s", sdev->model); + BUILD_BUG_ON(sizeof(wwn->revision) != INQUIRY_REVISION_LEN + 1); + snprintf(wwn->revision, sizeof(wwn->revision), + "%." __stringify(INQUIRY_REVISION_LEN) "s", sdev->rev); } static int @@ -811,7 +811,6 @@ static ssize_t pscsi_show_configfs_dev_params(struct se_device *dev, char *b) struct scsi_device *sd = pdv->pdv_sd; unsigned char host_id[16]; ssize_t bl; - int i; if (phv->phv_mode == PHV_VIRTUAL_HOST_ID) snprintf(host_id, 16, "%d", pdv->pdv_host_id); @@ -824,29 +823,12 @@ static ssize_t pscsi_show_configfs_dev_params(struct se_device *dev, char *b) host_id); if (sd) { - bl += sprintf(b + bl, " "); - bl += sprintf(b + bl, "Vendor: "); - for (i = 0; i < 8; i++) { - if (ISPRINT(sd->vendor[i])) /* printable character? */ - bl += sprintf(b + bl, "%c", sd->vendor[i]); - else - bl += sprintf(b + bl, " "); - } - bl += sprintf(b + bl, " Model: "); - for (i = 0; i < 16; i++) { - if (ISPRINT(sd->model[i])) /* printable character ? */ - bl += sprintf(b + bl, "%c", sd->model[i]); - else - bl += sprintf(b + bl, " "); - } - bl += sprintf(b + bl, " Rev: "); - for (i = 0; i < 4; i++) { - if (ISPRINT(sd->rev[i])) /* printable character ? */ - bl += sprintf(b + bl, "%c", sd->rev[i]); - else - bl += sprintf(b + bl, " "); - } - bl += sprintf(b + bl, "\n"); + bl += sprintf(b + bl, " Vendor: %." + __stringify(INQUIRY_VENDOR_LEN) "s", sd->vendor); + bl += sprintf(b + bl, " Model: %." + __stringify(INQUIRY_MODEL_LEN) "s", sd->model); + bl += sprintf(b + bl, " Rev: %." 
+ __stringify(INQUIRY_REVISION_LEN) "s\n", sd->rev); } return bl; } @@ -1094,7 +1076,7 @@ static void pscsi_req_done(struct request *req, blk_status_t status) break; } - __blk_put_request(req->q, req); + blk_put_request(req); kfree(pt); } diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c index f459118bc11b..47094ae01c04 100644 --- a/drivers/target/target_core_spc.c +++ b/drivers/target/target_core_spc.c @@ -108,12 +108,19 @@ spc_emulate_inquiry_std(struct se_cmd *cmd, unsigned char *buf) buf[7] = 0x2; /* CmdQue=1 */ - memcpy(&buf[8], "LIO-ORG ", 8); - memset(&buf[16], 0x20, 16); + /* + * ASCII data fields described as being left-aligned shall have any + * unused bytes at the end of the field (i.e., highest offset) and the + * unused bytes shall be filled with ASCII space characters (20h). + */ + memset(&buf[8], 0x20, + INQUIRY_VENDOR_LEN + INQUIRY_MODEL_LEN + INQUIRY_REVISION_LEN); + memcpy(&buf[8], dev->t10_wwn.vendor, + strnlen(dev->t10_wwn.vendor, INQUIRY_VENDOR_LEN)); memcpy(&buf[16], dev->t10_wwn.model, - min_t(size_t, strlen(dev->t10_wwn.model), 16)); + strnlen(dev->t10_wwn.model, INQUIRY_MODEL_LEN)); memcpy(&buf[32], dev->t10_wwn.revision, - min_t(size_t, strlen(dev->t10_wwn.revision), 4)); + strnlen(dev->t10_wwn.revision, INQUIRY_REVISION_LEN)); buf[4] = 31; /* Set additional length to 31 */ return 0; @@ -251,7 +258,10 @@ check_t10_vend_desc: buf[off] = 0x2; /* ASCII */ buf[off+1] = 0x1; /* T10 Vendor ID */ buf[off+2] = 0x0; - memcpy(&buf[off+4], "LIO-ORG", 8); + /* left align Vendor ID and pad with spaces */ + memset(&buf[off+4], 0x20, INQUIRY_VENDOR_LEN); + memcpy(&buf[off+4], dev->t10_wwn.vendor, + strnlen(dev->t10_wwn.vendor, INQUIRY_VENDOR_LEN)); /* Extra Byte for NULL Terminator */ id_len++; /* Identifier Length */ @@ -1281,6 +1291,14 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size) struct se_device *dev = cmd->se_dev; unsigned char *cdb = cmd->t_task_cdb; + if (!dev->dev_attrib.emulate_pr && + ((cdb[0] == PERSISTENT_RESERVE_IN) || + (cdb[0] == PERSISTENT_RESERVE_OUT) || + (cdb[0] == RELEASE || cdb[0] == RELEASE_10) || + (cdb[0] == RESERVE || cdb[0] == RESERVE_10))) { + return TCM_UNSUPPORTED_SCSI_OPCODE; + } + switch (cdb[0]) { case MODE_SELECT: *size = cdb[4]; diff --git a/drivers/target/target_core_stat.c b/drivers/target/target_core_stat.c index f0db91ebd735..8d9ceedfd455 100644 --- a/drivers/target/target_core_stat.c +++ b/drivers/target/target_core_stat.c @@ -246,43 +246,25 @@ static ssize_t target_stat_lu_lu_name_show(struct config_item *item, char *page) static ssize_t target_stat_lu_vend_show(struct config_item *item, char *page) { struct se_device *dev = to_stat_lu_dev(item); - int i; - char str[sizeof(dev->t10_wwn.vendor)+1]; - /* scsiLuVendorId */ - for (i = 0; i < sizeof(dev->t10_wwn.vendor); i++) - str[i] = ISPRINT(dev->t10_wwn.vendor[i]) ? - dev->t10_wwn.vendor[i] : ' '; - str[i] = '\0'; - return snprintf(page, PAGE_SIZE, "%s\n", str); + return snprintf(page, PAGE_SIZE, "%-" __stringify(INQUIRY_VENDOR_LEN) + "s\n", dev->t10_wwn.vendor); } static ssize_t target_stat_lu_prod_show(struct config_item *item, char *page) { struct se_device *dev = to_stat_lu_dev(item); - int i; - char str[sizeof(dev->t10_wwn.model)+1]; - /* scsiLuProductId */ - for (i = 0; i < sizeof(dev->t10_wwn.model); i++) - str[i] = ISPRINT(dev->t10_wwn.model[i]) ? 
- dev->t10_wwn.model[i] : ' '; - str[i] = '\0'; - return snprintf(page, PAGE_SIZE, "%s\n", str); + return snprintf(page, PAGE_SIZE, "%-" __stringify(INQUIRY_MODEL_LEN) + "s\n", dev->t10_wwn.model); } static ssize_t target_stat_lu_rev_show(struct config_item *item, char *page) { struct se_device *dev = to_stat_lu_dev(item); - int i; - char str[sizeof(dev->t10_wwn.revision)+1]; - /* scsiLuRevisionId */ - for (i = 0; i < sizeof(dev->t10_wwn.revision); i++) - str[i] = ISPRINT(dev->t10_wwn.revision[i]) ? - dev->t10_wwn.revision[i] : ' '; - str[i] = '\0'; - return snprintf(page, PAGE_SIZE, "%s\n", str); + return snprintf(page, PAGE_SIZE, "%-" __stringify(INQUIRY_REVISION_LEN) + "s\n", dev->t10_wwn.revision); } static ssize_t target_stat_lu_dev_type_show(struct config_item *item, char *page) @@ -612,7 +594,7 @@ static ssize_t target_stat_tgt_port_name_show(struct config_item *item, dev = rcu_dereference(lun->lun_se_dev); if (dev) ret = snprintf(page, PAGE_SIZE, "%sPort#%u\n", - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, lun->lun_rtpi); rcu_read_unlock(); return ret; @@ -767,7 +749,7 @@ static ssize_t target_stat_transport_device_show(struct config_item *item, if (dev) { /* scsiTransportType */ ret = snprintf(page, PAGE_SIZE, "scsiTransport%s\n", - tpg->se_tpg_tfo->get_fabric_name()); + tpg->se_tpg_tfo->fabric_name); } rcu_read_unlock(); return ret; diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c index 6d1179a7f043..ad0061e09d4c 100644 --- a/drivers/target/target_core_tmr.c +++ b/drivers/target/target_core_tmr.c @@ -163,18 +163,23 @@ void core_tmr_abort_task( continue; printk("ABORT_TASK: Found referenced %s task_tag: %llu\n", - se_cmd->se_tfo->get_fabric_name(), ref_tag); + se_cmd->se_tfo->fabric_name, ref_tag); - if (!__target_check_io_state(se_cmd, se_sess, 0)) + if (!__target_check_io_state(se_cmd, se_sess, + dev->dev_attrib.emulate_tas)) continue; spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); - cancel_work_sync(&se_cmd->work); - transport_wait_for_tasks(se_cmd); + /* + * Ensure that this ABORT request is visible to the LU RESET + * code. + */ + if (!tmr->tmr_dev) + WARN_ON_ONCE(transport_lookup_tmr_lun(tmr->task_cmd, + se_cmd->orig_fe_lun) < 0); - if (!transport_cmd_finish_abort(se_cmd)) - target_put_sess_cmd(se_cmd); + target_put_cmd_and_wait(se_cmd); printk("ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for" " ref_tag: %llu\n", ref_tag); @@ -268,14 +273,28 @@ static void core_tmr_drain_tmr_list( (preempt_and_abort_list) ? "Preempt" : "", tmr_p, tmr_p->function, tmr_p->response, cmd->t_state); - cancel_work_sync(&cmd->work); - transport_wait_for_tasks(cmd); - - if (!transport_cmd_finish_abort(cmd)) - target_put_sess_cmd(cmd); + target_put_cmd_and_wait(cmd); } } +/** + * core_tmr_drain_state_list() - abort SCSI commands associated with a device + * + * @dev: Device for which to abort outstanding SCSI commands. + * @prout_cmd: Pointer to the SCSI PREEMPT AND ABORT if this function is called + * to realize the PREEMPT AND ABORT functionality. + * @tmr_sess: Session through which the LUN RESET has been received. + * @tas: Task Aborted Status (TAS) bit from the SCSI control mode page. + * A quote from SPC-4, paragraph "7.5.10 Control mode page": + * "A task aborted status (TAS) bit set to zero specifies that + * aborted commands shall be terminated by the device server + * without any response to the application client. 
A TAS bit set + * to one specifies that commands aborted by the actions of an I_T + * nexus other than the I_T nexus on which the command was + * received shall be completed with TASK ABORTED status." + * @preempt_and_abort_list: For the PREEMPT AND ABORT functionality, a list + * with registrations that will be preempted. + */ static void core_tmr_drain_state_list( struct se_device *dev, struct se_cmd *prout_cmd, @@ -350,18 +369,7 @@ static void core_tmr_drain_state_list( cmd->tag, (preempt_and_abort_list) ? "preempt" : "", cmd->pr_res_key); - /* - * If the command may be queued onto a workqueue cancel it now. - * - * This is equivalent to removal from the execute queue in the - * loop above, but we do it down here given that - * cancel_work_sync may block. - */ - cancel_work_sync(&cmd->work); - transport_wait_for_tasks(cmd); - - if (!transport_cmd_finish_abort(cmd)) - target_put_sess_cmd(cmd); + target_put_cmd_and_wait(cmd); } } @@ -398,7 +406,7 @@ int core_tmr_lun_reset( if (tmr_nacl && tmr_tpg) { pr_debug("LUN_RESET: TMR caller fabric: %s" " initiator port %s\n", - tmr_tpg->se_tpg_tfo->get_fabric_name(), + tmr_tpg->se_tpg_tfo->fabric_name, tmr_nacl->initiatorname); } } diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c index 02e8a5d86658..e2ace1059437 100644 --- a/drivers/target/target_core_tpg.c +++ b/drivers/target/target_core_tpg.c @@ -151,7 +151,7 @@ void core_tpg_add_node_to_devs( pr_debug("TARGET_CORE[%s]->TPG[%u]_LUN[%llu] - Adding %s" " access for LUN in Demo Mode\n", - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), lun->unpacked_lun, lun_access_ro ? "READ-ONLY" : "READ-WRITE"); @@ -176,7 +176,7 @@ target_set_nacl_queue_depth(struct se_portal_group *tpg, if (!acl->queue_depth) { pr_warn("Queue depth for %s Initiator Node: %s is 0," - "defaulting to 1.\n", tpg->se_tpg_tfo->get_fabric_name(), + "defaulting to 1.\n", tpg->se_tpg_tfo->fabric_name, acl->initiatorname); acl->queue_depth = 1; } @@ -227,11 +227,11 @@ static void target_add_node_acl(struct se_node_acl *acl) pr_debug("%s_TPG[%hu] - Added %s ACL with TCQ Depth: %d for %s" " Initiator Node: %s\n", - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), acl->dynamic_node_acl ? 
"DYNAMIC" : "", acl->queue_depth, - tpg->se_tpg_tfo->get_fabric_name(), + tpg->se_tpg_tfo->fabric_name, acl->initiatorname); } @@ -313,7 +313,7 @@ struct se_node_acl *core_tpg_add_initiator_node_acl( if (acl->dynamic_node_acl) { acl->dynamic_node_acl = 0; pr_debug("%s_TPG[%u] - Replacing dynamic ACL" - " for %s\n", tpg->se_tpg_tfo->get_fabric_name(), + " for %s\n", tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), initiatorname); mutex_unlock(&tpg->acl_node_mutex); return acl; @@ -321,7 +321,7 @@ struct se_node_acl *core_tpg_add_initiator_node_acl( pr_err("ACL entry for %s Initiator" " Node %s already exists for TPG %u, ignoring" - " request.\n", tpg->se_tpg_tfo->get_fabric_name(), + " request.\n", tpg->se_tpg_tfo->fabric_name, initiatorname, tpg->se_tpg_tfo->tpg_get_tag(tpg)); mutex_unlock(&tpg->acl_node_mutex); return ERR_PTR(-EEXIST); @@ -380,9 +380,9 @@ void core_tpg_del_initiator_node_acl(struct se_node_acl *acl) core_free_device_list_for_node(acl, tpg); pr_debug("%s_TPG[%hu] - Deleted ACL with TCQ Depth: %d for %s" - " Initiator Node: %s\n", tpg->se_tpg_tfo->get_fabric_name(), + " Initiator Node: %s\n", tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg), acl->queue_depth, - tpg->se_tpg_tfo->get_fabric_name(), acl->initiatorname); + tpg->se_tpg_tfo->fabric_name, acl->initiatorname); kfree(acl); } @@ -418,7 +418,7 @@ int core_tpg_set_initiator_node_queue_depth( pr_debug("Successfully changed queue depth to: %d for Initiator" " Node: %s on %s Target Portal Group: %u\n", acl->queue_depth, - acl->initiatorname, tpg->se_tpg_tfo->get_fabric_name(), + acl->initiatorname, tpg->se_tpg_tfo->fabric_name, tpg->se_tpg_tfo->tpg_get_tag(tpg)); return 0; @@ -512,7 +512,7 @@ int core_tpg_register( spin_unlock_bh(&tpg_lock); pr_debug("TARGET_CORE[%s]: Allocated portal_group for endpoint: %s, " - "Proto: %d, Portal Tag: %u\n", se_tpg->se_tpg_tfo->get_fabric_name(), + "Proto: %d, Portal Tag: %u\n", se_tpg->se_tpg_tfo->fabric_name, se_tpg->se_tpg_tfo->tpg_get_wwn(se_tpg) ? se_tpg->se_tpg_tfo->tpg_get_wwn(se_tpg) : NULL, se_tpg->proto_id, se_tpg->se_tpg_tfo->tpg_get_tag(se_tpg)); @@ -528,7 +528,7 @@ int core_tpg_deregister(struct se_portal_group *se_tpg) LIST_HEAD(node_list); pr_debug("TARGET_CORE[%s]: Deallocating portal_group for endpoint: %s, " - "Proto: %d, Portal Tag: %u\n", tfo->get_fabric_name(), + "Proto: %d, Portal Tag: %u\n", tfo->fabric_name, tfo->tpg_get_wwn(se_tpg) ? tfo->tpg_get_wwn(se_tpg) : NULL, se_tpg->proto_id, tfo->tpg_get_tag(se_tpg)); @@ -577,7 +577,6 @@ struct se_lun *core_tpg_alloc_lun( } lun->unpacked_lun = unpacked_lun; atomic_set(&lun->lun_acl_count, 0); - init_completion(&lun->lun_ref_comp); init_completion(&lun->lun_shutdown_comp); INIT_LIST_HEAD(&lun->lun_deve_list); INIT_LIST_HEAD(&lun->lun_dev_link); diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c index 2cfd61d62e97..ef9e75b359d4 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -224,19 +224,28 @@ void transport_subsystem_check_init(void) sub_api_initialized = 1; } +static void target_release_sess_cmd_refcnt(struct percpu_ref *ref) +{ + struct se_session *sess = container_of(ref, typeof(*sess), cmd_count); + + wake_up(&sess->cmd_list_wq); +} + /** * transport_init_session - initialize a session object * @se_sess: Session object pointer. * * The caller must have zero-initialized @se_sess before calling this function. 
*/ -void transport_init_session(struct se_session *se_sess) +int transport_init_session(struct se_session *se_sess) { INIT_LIST_HEAD(&se_sess->sess_list); INIT_LIST_HEAD(&se_sess->sess_acl_list); INIT_LIST_HEAD(&se_sess->sess_cmd_list); spin_lock_init(&se_sess->sess_cmd_lock); init_waitqueue_head(&se_sess->cmd_list_wq); + return percpu_ref_init(&se_sess->cmd_count, + target_release_sess_cmd_refcnt, 0, GFP_KERNEL); } EXPORT_SYMBOL(transport_init_session); @@ -247,6 +256,7 @@ EXPORT_SYMBOL(transport_init_session); struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops) { struct se_session *se_sess; + int ret; se_sess = kmem_cache_zalloc(se_sess_cache, GFP_KERNEL); if (!se_sess) { @@ -254,7 +264,11 @@ struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops) " se_sess_cache\n"); return ERR_PTR(-ENOMEM); } - transport_init_session(se_sess); + ret = transport_init_session(se_sess); + if (ret < 0) { + kmem_cache_free(se_sess_cache, se_sess); + return ERR_PTR(ret); + } se_sess->sup_prot_ops = sup_prot_ops; return se_sess; @@ -273,14 +287,11 @@ int transport_alloc_session_tags(struct se_session *se_sess, { int rc; - se_sess->sess_cmd_map = kcalloc(tag_size, tag_num, - GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL); + se_sess->sess_cmd_map = kvcalloc(tag_size, tag_num, + GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!se_sess->sess_cmd_map) { - se_sess->sess_cmd_map = vzalloc(array_size(tag_size, tag_num)); - if (!se_sess->sess_cmd_map) { - pr_err("Unable to allocate se_sess->sess_cmd_map\n"); - return -ENOMEM; - } + pr_err("Unable to allocate se_sess->sess_cmd_map\n"); + return -ENOMEM; } rc = sbitmap_queue_init_node(&se_sess->sess_tag_pool, tag_num, -1, @@ -397,7 +408,7 @@ void __transport_register_session( list_add_tail(&se_sess->sess_list, &se_tpg->tpg_sess_list); pr_debug("TARGET_CORE[%s]: Registered fabric_sess_ptr: %p\n", - se_tpg->se_tpg_tfo->get_fabric_name(), se_sess->fabric_sess_ptr); + se_tpg->se_tpg_tfo->fabric_name, se_sess->fabric_sess_ptr); } EXPORT_SYMBOL(__transport_register_session); @@ -581,6 +592,7 @@ void transport_free_session(struct se_session *se_sess) sbitmap_queue_free(&se_sess->sess_tag_pool); kvfree(se_sess->sess_cmd_map); } + percpu_ref_exit(&se_sess->cmd_count); kmem_cache_free(se_sess_cache, se_sess); } EXPORT_SYMBOL(transport_free_session); @@ -602,7 +614,7 @@ void transport_deregister_session(struct se_session *se_sess) spin_unlock_irqrestore(&se_tpg->session_lock, flags); pr_debug("TARGET_CORE[%s]: Deregistered fabric_sess\n", - se_tpg->se_tpg_tfo->get_fabric_name()); + se_tpg->se_tpg_tfo->fabric_name); /* * If last kref is dropping now for an explicit NodeACL, awake sleeping * ->acl_free_comp caller to wakeup configfs se_node_acl->acl_group @@ -695,32 +707,6 @@ static void transport_lun_remove_cmd(struct se_cmd *cmd) percpu_ref_put(&lun->lun_ref); } -int transport_cmd_finish_abort(struct se_cmd *cmd) -{ - bool send_tas = cmd->transport_state & CMD_T_TAS; - bool ack_kref = (cmd->se_cmd_flags & SCF_ACK_KREF); - int ret = 0; - - if (send_tas) - transport_send_task_abort(cmd); - - if (cmd->se_cmd_flags & SCF_SE_LUN_CMD) - transport_lun_remove_cmd(cmd); - /* - * Allow the fabric driver to unmap any resources before - * releasing the descriptor via TFO->release_cmd() - */ - if (!send_tas) - cmd->se_tfo->aborted_task(cmd); - - if (transport_cmd_check_stop_to_fabric(cmd)) - return 1; - if (!send_tas && ack_kref) - ret = target_put_sess_cmd(cmd); - - return ret; -} - static void target_complete_failure_work(struct work_struct *work) { 
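/*
 * Completion-workqueue handler for commands that failed in the backend;
 * it feeds the command into transport_generic_request_failure() from
 * process context.
 */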
struct se_cmd *cmd = container_of(work, struct se_cmd, work); @@ -770,12 +756,88 @@ void transport_copy_sense_to_cmd(struct se_cmd *cmd, unsigned char *sense) } EXPORT_SYMBOL(transport_copy_sense_to_cmd); +static void target_handle_abort(struct se_cmd *cmd) +{ + bool tas = cmd->transport_state & CMD_T_TAS; + bool ack_kref = cmd->se_cmd_flags & SCF_ACK_KREF; + int ret; + + pr_debug("tag %#llx: send_abort_response = %d\n", cmd->tag, tas); + + if (tas) { + if (!(cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)) { + cmd->scsi_status = SAM_STAT_TASK_ABORTED; + pr_debug("Setting SAM_STAT_TASK_ABORTED status for CDB: 0x%02x, ITT: 0x%08llx\n", + cmd->t_task_cdb[0], cmd->tag); + trace_target_cmd_complete(cmd); + ret = cmd->se_tfo->queue_status(cmd); + if (ret) { + transport_handle_queue_full(cmd, cmd->se_dev, + ret, false); + return; + } + } else { + cmd->se_tmr_req->response = TMR_FUNCTION_REJECTED; + cmd->se_tfo->queue_tm_rsp(cmd); + } + } else { + /* + * Allow the fabric driver to unmap any resources before + * releasing the descriptor via TFO->release_cmd(). + */ + cmd->se_tfo->aborted_task(cmd); + if (ack_kref) + WARN_ON_ONCE(target_put_sess_cmd(cmd) != 0); + /* + * To do: establish a unit attention condition on the I_T + * nexus associated with cmd. See also the paragraph "Aborting + * commands" in SAM. + */ + } + + WARN_ON_ONCE(kref_read(&cmd->cmd_kref) == 0); + + transport_lun_remove_cmd(cmd); + + transport_cmd_check_stop_to_fabric(cmd); +} + +static void target_abort_work(struct work_struct *work) +{ + struct se_cmd *cmd = container_of(work, struct se_cmd, work); + + target_handle_abort(cmd); +} + +static bool target_cmd_interrupted(struct se_cmd *cmd) +{ + int post_ret; + + if (cmd->transport_state & CMD_T_ABORTED) { + if (cmd->transport_complete_callback) + cmd->transport_complete_callback(cmd, false, &post_ret); + INIT_WORK(&cmd->work, target_abort_work); + queue_work(target_completion_wq, &cmd->work); + return true; + } else if (cmd->transport_state & CMD_T_STOP) { + if (cmd->transport_complete_callback) + cmd->transport_complete_callback(cmd, false, &post_ret); + complete_all(&cmd->t_transport_stop_comp); + return true; + } + + return false; +} + +/* May be called from interrupt context so must not sleep. */ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status) { - struct se_device *dev = cmd->se_dev; int success; unsigned long flags; + if (target_cmd_interrupted(cmd)) + return; + cmd->scsi_status = scsi_status; spin_lock_irqsave(&cmd->t_state_lock, flags); @@ -791,34 +853,12 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status) break; } - /* - * Check for case where an explicit ABORT_TASK has been received - * and transport_wait_for_tasks() will be waiting for completion.. - */ - if (cmd->transport_state & CMD_T_ABORTED || - cmd->transport_state & CMD_T_STOP) { - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - /* - * If COMPARE_AND_WRITE was stopped by __transport_wait_for_tasks(), - * release se_device->caw_sem obtained by sbc_compare_and_write() - * since target_complete_ok_work() or target_complete_failure_work() - * won't be called to invoke the normal CAW completion callbacks. 
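target_cmd_interrupted() above never handles an abort inline: target_complete_cmd() may run in interrupt context, so the abort is queued to the completion workqueue and target_handle_abort() runs later in process context. The underlying container_of()-based deferral idiom, in isolation (demo_* names are hypothetical):

    #include <linux/printk.h>
    #include <linux/workqueue.h>

    struct demo_cmd {
            struct work_struct work;
            u64 tag;
    };

    static void demo_abort_work(struct work_struct *work)
    {
            /* Recover the owning command from its embedded work item. */
            struct demo_cmd *cmd = container_of(work, struct demo_cmd, work);

            pr_debug("aborting tag %llu in process context\n", cmd->tag);
    }

    /* Safe to call from any context, including hard interrupt. */
    static void demo_defer_abort(struct demo_cmd *cmd)
    {
            INIT_WORK(&cmd->work, demo_abort_work);
            schedule_work(&cmd->work);
    }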
- */ - if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) { - up(&dev->caw_sem); - } - complete_all(&cmd->t_transport_stop_comp); - return; - } else if (!success) { - INIT_WORK(&cmd->work, target_complete_failure_work); - } else { - INIT_WORK(&cmd->work, target_complete_ok_work); - } - cmd->t_state = TRANSPORT_COMPLETE; cmd->transport_state |= (CMD_T_COMPLETE | CMD_T_ACTIVE); spin_unlock_irqrestore(&cmd->t_state_lock, flags); + INIT_WORK(&cmd->work, success ? target_complete_ok_work : + target_complete_failure_work); if (cmd->se_cmd_flags & SCF_USE_CPUID) queue_work_on(cmd->cpuid, target_completion_wq, &cmd->work); else @@ -880,7 +920,7 @@ void target_qf_do_work(struct work_struct *work) atomic_dec_mb(&dev->dev_qf_count); pr_debug("Processing %s cmd: %p QUEUE_FULL in work queue" - " context: %s\n", cmd->se_tfo->get_fabric_name(), cmd, + " context: %s\n", cmd->se_tfo->fabric_name, cmd, (cmd->t_state == TRANSPORT_COMPLETE_QF_OK) ? "COMPLETE_OK" : (cmd->t_state == TRANSPORT_COMPLETE_QF_WP) ? "WRITE_PENDING" : "UNKNOWN"); @@ -1244,7 +1284,7 @@ target_cmd_size_check(struct se_cmd *cmd, unsigned int size) } else if (size != cmd->data_length) { pr_warn_ratelimited("TARGET_CORE[%s]: Expected Transfer Length:" " %u does not match SCSI CDB Length: %u for SAM Opcode:" - " 0x%02x\n", cmd->se_tfo->get_fabric_name(), + " 0x%02x\n", cmd->se_tfo->fabric_name, cmd->data_length, size, cmd->t_task_cdb[0]); if (cmd->data_direction == DMA_TO_DEVICE) { @@ -1316,7 +1356,8 @@ void transport_init_se_cmd( INIT_LIST_HEAD(&cmd->se_cmd_list); INIT_LIST_HEAD(&cmd->state_list); init_completion(&cmd->t_transport_stop_comp); - cmd->compl = NULL; + cmd->free_compl = NULL; + cmd->abrt_compl = NULL; spin_lock_init(&cmd->t_state_lock); INIT_WORK(&cmd->work, NULL); kref_init(&cmd->cmd_kref); @@ -1396,7 +1437,7 @@ target_setup_cmd_from_cdb(struct se_cmd *cmd, unsigned char *cdb) ret = dev->transport->parse_cdb(cmd); if (ret == TCM_UNSUPPORTED_SCSI_OPCODE) pr_warn_ratelimited("%s/%s: Unsupported SCSI Opcode 0x%02x, sending CHECK_CONDITION.\n", - cmd->se_tfo->get_fabric_name(), + cmd->se_tfo->fabric_name, cmd->se_sess->se_node_acl->initiatorname, cmd->t_task_cdb[0]); if (ret) @@ -1792,8 +1833,11 @@ void transport_generic_request_failure(struct se_cmd *cmd, if (cmd->transport_complete_callback) cmd->transport_complete_callback(cmd, false, &post_ret); - if (transport_check_aborted_status(cmd, 1)) + if (cmd->transport_state & CMD_T_ABORTED) { + INIT_WORK(&cmd->work, target_abort_work); + queue_work(target_completion_wq, &cmd->work); return; + } switch (sense_reason) { case TCM_NON_EXISTENT_LUN: @@ -1999,8 +2043,6 @@ static bool target_handle_task_attr(struct se_cmd *cmd) return true; } -static int __transport_check_aborted_status(struct se_cmd *, int); - void target_execute_cmd(struct se_cmd *cmd) { /* @@ -2009,20 +2051,10 @@ void target_execute_cmd(struct se_cmd *cmd) * * If the received CDB has already been aborted stop processing it here. 
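Earlier in this file, target_complete_cmd() now chooses the completion handler with a single INIT_WORK() and, when SCF_USE_CPUID is set, queues it with queue_work_on() so completion runs on the submitting CPU. A generic sketch of that affinity choice (all names here are illustrative, not from the patch):

    #include <linux/workqueue.h>

    static void demo_queue_completion(struct workqueue_struct *wq,
                                      struct work_struct *work,
                                      int cpuid, bool pin_to_cpu)
    {
            if (pin_to_cpu) {
                    /* Run on the issuing CPU, e.g. to keep caches warm. */
                    queue_work_on(cpuid, wq, work);
            } else {
                    queue_work(wq, work);
            }
    }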
*/ - spin_lock_irq(&cmd->t_state_lock); - if (__transport_check_aborted_status(cmd, 1)) { - spin_unlock_irq(&cmd->t_state_lock); - return; - } - if (cmd->transport_state & CMD_T_STOP) { - pr_debug("%s:%d CMD_T_STOP for ITT: 0x%08llx\n", - __func__, __LINE__, cmd->tag); - - spin_unlock_irq(&cmd->t_state_lock); - complete_all(&cmd->t_transport_stop_comp); + if (target_cmd_interrupted(cmd)) return; - } + spin_lock_irq(&cmd->t_state_lock); cmd->t_state = TRANSPORT_PROCESSING; cmd->transport_state &= ~CMD_T_PRE_EXECUTE; cmd->transport_state |= CMD_T_ACTIVE | CMD_T_SENT; @@ -2571,7 +2603,8 @@ transport_generic_new_cmd(struct se_cmd *cmd) * Determine if frontend context caller is requesting the stopping of * this command for frontend exceptions. */ - if (cmd->transport_state & CMD_T_STOP) { + if (cmd->transport_state & CMD_T_STOP && + !cmd->se_tfo->write_pending_must_be_called) { pr_debug("%s:%d CMD_T_STOP for ITT: 0x%08llx\n", __func__, __LINE__, cmd->tag); @@ -2634,14 +2667,30 @@ static void target_wait_free_cmd(struct se_cmd *cmd, bool *aborted, bool *tas) spin_unlock_irqrestore(&cmd->t_state_lock, flags); } +/* + * Call target_put_sess_cmd() and wait until target_release_cmd_kref(@cmd) has + * finished. + */ +void target_put_cmd_and_wait(struct se_cmd *cmd) +{ + DECLARE_COMPLETION_ONSTACK(compl); + + WARN_ON_ONCE(cmd->abrt_compl); + cmd->abrt_compl = &compl; + target_put_sess_cmd(cmd); + wait_for_completion(&compl); +} + /* * This function is called by frontend drivers after processing of a command * has finished. * - * The protocol for ensuring that either the regular flow or the TMF - * code drops one reference is as follows: + * The protocol for ensuring that either the regular frontend command + * processing flow or target_handle_abort() code drops one reference is as + * follows: * - Calling .queue_data_in(), .queue_status() or queue_tm_rsp() will cause - * the frontend driver to drop one reference, synchronously or asynchronously. + * the frontend driver to call this function synchronously or asynchronously. + * That will cause one reference to be dropped. * - During regular command processing the target core sets CMD_T_COMPLETE * before invoking one of the .queue_*() functions. * - The code that aborts commands skips commands and TMFs for which @@ -2653,7 +2702,7 @@ static void target_wait_free_cmd(struct se_cmd *cmd, bool *aborted, bool *tas) * - For aborted commands for which CMD_T_TAS has been set .queue_status() will * be called and will drop a reference. * - For aborted commands for which CMD_T_TAS has not been set .aborted_task() - * will be called. transport_cmd_finish_abort() will drop the final reference. + * will be called. target_handle_abort() will drop the final reference. 
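target_put_cmd_and_wait() above captures this protocol: the caller parks an on-stack completion in the command (abrt_compl), drops its reference, and sleeps until target_release_cmd_kref() completes it. The same put-and-wait pattern in isolation (demo_* names are hypothetical):

    #include <linux/completion.h>
    #include <linux/kref.h>
    #include <linux/slab.h>

    struct demo_obj {
            struct kref kref;
            struct completion *done;
    };

    static void demo_release(struct kref *kref)
    {
            struct demo_obj *obj = container_of(kref, struct demo_obj, kref);
            struct completion *done = obj->done;

            kfree(obj);
            if (done)
                    complete(done);  /* lives on the waiter's stack */
    }

    static void demo_put_and_wait(struct demo_obj *obj)
    {
            DECLARE_COMPLETION_ONSTACK(done);

            obj->done = &done;
            kref_put(&obj->kref, demo_release);
            wait_for_completion(&done);
    }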
*/ int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks) { @@ -2677,9 +2726,8 @@ int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks) transport_lun_remove_cmd(cmd); } if (aborted) - cmd->compl = &compl; - if (!aborted || tas) - ret = target_put_sess_cmd(cmd); + cmd->free_compl = &compl; + ret = target_put_sess_cmd(cmd); if (aborted) { pr_debug("Detected CMD_T_ABORTED for ITT: %llu\n", cmd->tag); wait_for_completion(&compl); @@ -2719,6 +2767,7 @@ int target_get_sess_cmd(struct se_cmd *se_cmd, bool ack_kref) } se_cmd->transport_state |= CMD_T_PRE_EXECUTE; list_add_tail(&se_cmd->se_cmd_list, &se_sess->sess_cmd_list); + percpu_ref_get(&se_sess->cmd_count); out: spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); @@ -2743,21 +2792,24 @@ static void target_release_cmd_kref(struct kref *kref) { struct se_cmd *se_cmd = container_of(kref, struct se_cmd, cmd_kref); struct se_session *se_sess = se_cmd->se_sess; - struct completion *compl = se_cmd->compl; + struct completion *free_compl = se_cmd->free_compl; + struct completion *abrt_compl = se_cmd->abrt_compl; unsigned long flags; if (se_sess) { spin_lock_irqsave(&se_sess->sess_cmd_lock, flags); list_del_init(&se_cmd->se_cmd_list); - if (se_sess->sess_tearing_down && list_empty(&se_sess->sess_cmd_list)) - wake_up(&se_sess->cmd_list_wq); spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); } target_free_cmd_mem(se_cmd); se_cmd->se_tfo->release_cmd(se_cmd); - if (compl) - complete(compl); + if (free_compl) + complete(free_compl); + if (abrt_compl) + complete(abrt_compl); + + percpu_ref_put(&se_sess->cmd_count); } /** @@ -2886,6 +2938,8 @@ void target_sess_cmd_list_set_waiting(struct se_session *se_sess) spin_lock_irqsave(&se_sess->sess_cmd_lock, flags); se_sess->sess_tearing_down = 1; spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); + + percpu_ref_kill(&se_sess->cmd_count); } EXPORT_SYMBOL(target_sess_cmd_list_set_waiting); @@ -2900,52 +2954,24 @@ void target_wait_for_sess_cmds(struct se_session *se_sess) WARN_ON_ONCE(!se_sess->sess_tearing_down); - spin_lock_irq(&se_sess->sess_cmd_lock); do { - ret = wait_event_lock_irq_timeout( - se_sess->cmd_list_wq, - list_empty(&se_sess->sess_cmd_list), - se_sess->sess_cmd_lock, 180 * HZ); + ret = wait_event_timeout(se_sess->cmd_list_wq, + percpu_ref_is_zero(&se_sess->cmd_count), + 180 * HZ); list_for_each_entry(cmd, &se_sess->sess_cmd_list, se_cmd_list) target_show_cmd("session shutdown: still waiting for ", cmd); } while (ret <= 0); - spin_unlock_irq(&se_sess->sess_cmd_lock); } EXPORT_SYMBOL(target_wait_for_sess_cmds); -static void target_lun_confirm(struct percpu_ref *ref) -{ - struct se_lun *lun = container_of(ref, struct se_lun, lun_ref); - - complete(&lun->lun_ref_comp); -} - +/* + * Prevent that new percpu_ref_tryget_live() calls succeed and wait until + * all references to the LUN have been released. Called during LUN shutdown. + */ void transport_clear_lun_ref(struct se_lun *lun) { - /* - * Mark the percpu-ref as DEAD, switch to atomic_t mode, drop - * the initial reference and schedule confirm kill to be - * executed after one full RCU grace period has completed. - */ - percpu_ref_kill_and_confirm(&lun->lun_ref, target_lun_confirm); - /* - * The first completion waits for percpu_ref_switch_to_atomic_rcu() - * to call target_lun_confirm after lun->lun_ref has been marked - * as __PERCPU_REF_DEAD on all CPUs, and switches to atomic_t - * mode so that percpu_ref_tryget_live() lookup of lun->lun_ref - * fails for all new incoming I/O. 
- */ - wait_for_completion(&lun->lun_ref_comp); - /* - * The second completion waits for percpu_ref_put_many() to - * invoke ->release() after lun->lun_ref has switched to - * atomic_t mode, and lun->lun_ref.count has reached zero. - * - * At this point all target-core lun->lun_ref references have - * been dropped via transport_lun_remove_cmd(), and it's safe - * to proceed with the remaining LUN shutdown. - */ + percpu_ref_kill(&lun->lun_ref); wait_for_completion(&lun->lun_shutdown_comp); } @@ -3229,6 +3255,8 @@ transport_send_check_condition_and_sense(struct se_cmd *cmd, { unsigned long flags; + WARN_ON_ONCE(cmd->se_cmd_flags & SCF_SCSI_TMR_CDB); + spin_lock_irqsave(&cmd->t_state_lock, flags); if (cmd->se_cmd_flags & SCF_SENT_CHECK_CONDITION) { spin_unlock_irqrestore(&cmd->t_state_lock, flags); @@ -3245,114 +3273,15 @@ transport_send_check_condition_and_sense(struct se_cmd *cmd, } EXPORT_SYMBOL(transport_send_check_condition_and_sense); -static int __transport_check_aborted_status(struct se_cmd *cmd, int send_status) - __releases(&cmd->t_state_lock) - __acquires(&cmd->t_state_lock) -{ - int ret; - - assert_spin_locked(&cmd->t_state_lock); - WARN_ON_ONCE(!irqs_disabled()); - - if (!(cmd->transport_state & CMD_T_ABORTED)) - return 0; - /* - * If cmd has been aborted but either no status is to be sent or it has - * already been sent, just return - */ - if (!send_status || !(cmd->se_cmd_flags & SCF_SEND_DELAYED_TAS)) { - if (send_status) - cmd->se_cmd_flags |= SCF_SEND_DELAYED_TAS; - return 1; - } - - pr_debug("Sending delayed SAM_STAT_TASK_ABORTED status for CDB:" - " 0x%02x ITT: 0x%08llx\n", cmd->t_task_cdb[0], cmd->tag); - - cmd->se_cmd_flags &= ~SCF_SEND_DELAYED_TAS; - cmd->scsi_status = SAM_STAT_TASK_ABORTED; - trace_target_cmd_complete(cmd); - - spin_unlock_irq(&cmd->t_state_lock); - ret = cmd->se_tfo->queue_status(cmd); - if (ret) - transport_handle_queue_full(cmd, cmd->se_dev, ret, false); - spin_lock_irq(&cmd->t_state_lock); - - return 1; -} - -int transport_check_aborted_status(struct se_cmd *cmd, int send_status) -{ - int ret; - - spin_lock_irq(&cmd->t_state_lock); - ret = __transport_check_aborted_status(cmd, send_status); - spin_unlock_irq(&cmd->t_state_lock); - - return ret; -} -EXPORT_SYMBOL(transport_check_aborted_status); - -void transport_send_task_abort(struct se_cmd *cmd) -{ - unsigned long flags; - int ret; - - spin_lock_irqsave(&cmd->t_state_lock, flags); - if (cmd->se_cmd_flags & (SCF_SENT_CHECK_CONDITION)) { - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - return; - } - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - - /* - * If there are still expected incoming fabric WRITEs, we wait - * until until they have completed before sending a TASK_ABORTED - * response. This response with TASK_ABORTED status will be - * queued back to fabric module by transport_check_aborted_status(). 
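transport_clear_lun_ref() above now reduces to percpu_ref_kill() plus one wait_for_completion(): killing the ref makes percpu_ref_tryget_live() fail for new I/O while in-flight references drain, and the ref's release side completes lun_shutdown_comp. The consumer side of that contract, as a hypothetical submit path (using the se_lun definition from target_core_base.h):

    /* Hypothetical I/O entry point guarded by the LUN's percpu_ref. */
    static int demo_submit_io(struct se_lun *lun)
    {
            if (!percpu_ref_tryget_live(&lun->lun_ref))
                    return -ENODEV;         /* LUN shutdown in progress */

            /* ... build and queue the command ... */

            percpu_ref_put(&lun->lun_ref);  /* last put fires ->release() */
            return 0;
    }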
- */ - if (cmd->data_direction == DMA_TO_DEVICE) { - if (cmd->se_tfo->write_pending_status(cmd) != 0) { - spin_lock_irqsave(&cmd->t_state_lock, flags); - if (cmd->se_cmd_flags & SCF_SEND_DELAYED_TAS) { - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - goto send_abort; - } - cmd->se_cmd_flags |= SCF_SEND_DELAYED_TAS; - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - return; - } - } -send_abort: - cmd->scsi_status = SAM_STAT_TASK_ABORTED; - - transport_lun_remove_cmd(cmd); - - pr_debug("Setting SAM_STAT_TASK_ABORTED status for CDB: 0x%02x, ITT: 0x%08llx\n", - cmd->t_task_cdb[0], cmd->tag); - - trace_target_cmd_complete(cmd); - ret = cmd->se_tfo->queue_status(cmd); - if (ret) - transport_handle_queue_full(cmd, cmd->se_dev, ret, false); -} - static void target_tmr_work(struct work_struct *work) { struct se_cmd *cmd = container_of(work, struct se_cmd, work); struct se_device *dev = cmd->se_dev; struct se_tmr_req *tmr = cmd->se_tmr_req; - unsigned long flags; int ret; - spin_lock_irqsave(&cmd->t_state_lock, flags); - if (cmd->transport_state & CMD_T_ABORTED) { - tmr->response = TMR_FUNCTION_REJECTED; - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - goto check_stop; - } - spin_unlock_irqrestore(&cmd->t_state_lock, flags); + if (cmd->transport_state & CMD_T_ABORTED) + goto aborted; switch (tmr->function) { case TMR_ABORT_TASK: @@ -3386,18 +3315,16 @@ static void target_tmr_work(struct work_struct *work) break; } - spin_lock_irqsave(&cmd->t_state_lock, flags); - if (cmd->transport_state & CMD_T_ABORTED) { - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - goto check_stop; - } - spin_unlock_irqrestore(&cmd->t_state_lock, flags); + if (cmd->transport_state & CMD_T_ABORTED) + goto aborted; cmd->se_tfo->queue_tm_rsp(cmd); -check_stop: - transport_lun_remove_cmd(cmd); transport_cmd_check_stop_to_fabric(cmd); + return; + +aborted: + target_handle_abort(cmd); } int transport_generic_handle_tmr( @@ -3416,16 +3343,15 @@ int transport_generic_handle_tmr( spin_unlock_irqrestore(&cmd->t_state_lock, flags); if (aborted) { - pr_warn_ratelimited("handle_tmr caught CMD_T_ABORTED TMR %d" - "ref_tag: %llu tag: %llu\n", cmd->se_tmr_req->function, - cmd->se_tmr_req->ref_task_tag, cmd->tag); - transport_lun_remove_cmd(cmd); - transport_cmd_check_stop_to_fabric(cmd); + pr_warn_ratelimited("handle_tmr caught CMD_T_ABORTED TMR %d ref_tag: %llu tag: %llu\n", + cmd->se_tmr_req->function, + cmd->se_tmr_req->ref_task_tag, cmd->tag); + target_handle_abort(cmd); return 0; } INIT_WORK(&cmd->work, target_tmr_work); - queue_work(cmd->se_dev->tmr_wq, &cmd->work); + schedule_work(&cmd->work); return 0; } EXPORT_SYMBOL(transport_generic_handle_tmr); diff --git a/drivers/target/target_core_ua.c b/drivers/target/target_core_ua.c index c8ac242ce888..ced1c10364eb 100644 --- a/drivers/target/target_core_ua.c +++ b/drivers/target/target_core_ua.c @@ -266,7 +266,7 @@ bool core_scsi3_ua_for_check_condition(struct se_cmd *cmd, u8 *key, u8 *asc, pr_debug("[%s]: %s UNIT ATTENTION condition with" " INTLCK_CTRL: %d, mapped LUN: %llu, got CDB: 0x%02x" " reported ASC: 0x%02x, ASCQ: 0x%02x\n", - nacl->se_tpg->se_tpg_tfo->get_fabric_name(), + nacl->se_tpg->se_tpg_tfo->fabric_name, (dev->dev_attrib.emulate_ua_intlck_ctrl != 0) ? 
"Reporting" : "Releasing", dev->dev_attrib.emulate_ua_intlck_ctrl, cmd->orig_fe_lun, cmd->t_task_cdb[0], *asc, *ascq); @@ -327,7 +327,7 @@ int core_scsi3_ua_clear_for_request_sense( pr_debug("[%s]: Released UNIT ATTENTION condition, mapped" " LUN: %llu, got REQUEST_SENSE reported ASC: 0x%02x," - " ASCQ: 0x%02x\n", nacl->se_tpg->se_tpg_tfo->get_fabric_name(), + " ASCQ: 0x%02x\n", nacl->se_tpg->se_tpg_tfo->fabric_name, cmd->orig_fe_lun, *asc, *ascq); return (head) ? -EPERM : 0; diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c index 9cd404acdb82..1e6d24943565 100644 --- a/drivers/target/target_core_user.c +++ b/drivers/target/target_core_user.c @@ -958,7 +958,7 @@ static int add_to_cmdr_queue(struct tcmu_cmd *tcmu_cmd) * 0 success * 1 internally queued to wait for ring memory to free. */ -static sense_reason_t queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, int *scsi_err) +static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err) { struct tcmu_dev *udev = tcmu_cmd->tcmu_dev; struct se_cmd *se_cmd = tcmu_cmd->se_cmd; diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c index 70adcfdca8d1..c2e1fc927fdf 100644 --- a/drivers/target/target_core_xcopy.c +++ b/drivers/target/target_core_xcopy.c @@ -399,11 +399,6 @@ struct se_portal_group xcopy_pt_tpg; static struct se_session xcopy_pt_sess; static struct se_node_acl xcopy_pt_nacl; -static char *xcopy_pt_get_fabric_name(void) -{ - return "xcopy-pt"; -} - static int xcopy_pt_get_cmd_state(struct se_cmd *se_cmd) { return 0; @@ -463,7 +458,7 @@ static int xcopy_pt_queue_status(struct se_cmd *se_cmd) } static const struct target_core_fabric_ops xcopy_pt_tfo = { - .get_fabric_name = xcopy_pt_get_fabric_name, + .fabric_name = "xcopy-pt", .get_cmd_state = xcopy_pt_get_cmd_state, .release_cmd = xcopy_pt_release_cmd, .check_stop_free = xcopy_pt_check_stop_free, @@ -479,6 +474,8 @@ static const struct target_core_fabric_ops xcopy_pt_tfo = { int target_xcopy_setup_pt(void) { + int ret; + xcopy_wq = alloc_workqueue("xcopy_wq", WQ_MEM_RECLAIM, 0); if (!xcopy_wq) { pr_err("Unable to allocate xcopy_wq\n"); @@ -496,7 +493,9 @@ int target_xcopy_setup_pt(void) INIT_LIST_HEAD(&xcopy_pt_nacl.acl_list); INIT_LIST_HEAD(&xcopy_pt_nacl.acl_sess_list); memset(&xcopy_pt_sess, 0, sizeof(struct se_session)); - transport_init_session(&xcopy_pt_sess); + ret = transport_init_session(&xcopy_pt_sess); + if (ret < 0) + return ret; xcopy_pt_nacl.se_tpg = &xcopy_pt_tpg; xcopy_pt_nacl.nacl_sess = &xcopy_pt_sess; diff --git a/drivers/target/tcm_fc/tfc_conf.c b/drivers/target/tcm_fc/tfc_conf.c index e55c4d537592..1ce49518d440 100644 --- a/drivers/target/tcm_fc/tfc_conf.c +++ b/drivers/target/tcm_fc/tfc_conf.c @@ -392,11 +392,6 @@ static inline struct ft_tpg *ft_tpg(struct se_portal_group *se_tpg) return container_of(se_tpg, struct ft_tpg, se_tpg); } -static char *ft_get_fabric_name(void) -{ - return "fc"; -} - static char *ft_get_fabric_wwn(struct se_portal_group *se_tpg) { return ft_tpg(se_tpg)->lport_wwn->name; @@ -427,9 +422,8 @@ static u32 ft_tpg_get_inst_index(struct se_portal_group *se_tpg) static const struct target_core_fabric_ops ft_fabric_ops = { .module = THIS_MODULE, - .name = "fc", + .fabric_name = "fc", .node_acl_size = sizeof(struct ft_node_acl), - .get_fabric_name = ft_get_fabric_name, .tpg_get_wwn = ft_get_fabric_wwn, .tpg_get_tag = ft_get_tag, .tpg_check_demo_mode = ft_check_false, diff --git a/drivers/thermal/Kconfig b/drivers/thermal/Kconfig index 5422523c03f8..5fbfabbf627b 100644 --- 
a/drivers/thermal/Kconfig +++ b/drivers/thermal/Kconfig @@ -383,7 +383,7 @@ config INTEL_QUARK_DTS_THERMAL underlying BIOS/Firmware. menu "ACPI INT340X thermal drivers" -source drivers/thermal/int340x_thermal/Kconfig +source "drivers/thermal/int340x_thermal/Kconfig" endmenu config INTEL_BXT_PMIC_THERMAL diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c index 93e562f18d40..7416bdbd8576 100644 --- a/drivers/thunderbolt/domain.c +++ b/drivers/thunderbolt/domain.c @@ -7,7 +7,9 @@ */ #include +#include #include +#include #include #include #include @@ -236,6 +238,20 @@ err_free_str: } static DEVICE_ATTR_RW(boot_acl); +static ssize_t iommu_dma_protection_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + /* + * Kernel DMA protection is a feature where Thunderbolt security is + * handled natively using IOMMU. It is enabled when IOMMU is + * enabled and ACPI DMAR table has DMAR_PLATFORM_OPT_IN set. + */ + return sprintf(buf, "%d\n", + iommu_present(&pci_bus_type) && dmar_platform_optin()); +} +static DEVICE_ATTR_RO(iommu_dma_protection); + static ssize_t security_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -251,6 +267,7 @@ static DEVICE_ATTR_RO(security); static struct attribute *domain_attrs[] = { &dev_attr_boot_acl.attr, + &dev_attr_iommu_dma_protection.attr, &dev_attr_security.attr, NULL, }; diff --git a/drivers/tty/hvc/hvc_opal.c b/drivers/tty/hvc/hvc_opal.c index 77baf895108f..c66412566efc 100644 --- a/drivers/tty/hvc/hvc_opal.c +++ b/drivers/tty/hvc/hvc_opal.c @@ -353,7 +353,7 @@ void __init hvc_opal_init_early(void) if (!opal) return; for_each_child_of_node(opal, np) { - if (!strcmp(np->name, "serial")) { + if (of_node_name_eq(np, "serial")) { stdout_node = np; break; } diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c index 59eaa620bf13..6de6d4a1a221 100644 --- a/drivers/tty/hvc/hvc_vio.c +++ b/drivers/tty/hvc/hvc_vio.c @@ -371,20 +371,11 @@ device_initcall(hvc_vio_init); /* after drivers/tty/hvc/hvc_console.c */ void __init hvc_vio_init_early(void) { const __be32 *termno; - const char *name; const struct hv_ops *ops; /* find the boot console from /chosen/stdout */ - if (!of_stdout) - return; - name = of_get_property(of_stdout, "name", NULL); - if (!name) { - printk(KERN_WARNING "stdout node missing 'name' property!\n"); - return; - } - /* Check if it's a virtual terminal */ - if (strncmp(name, "vty", 3) != 0) + if (!of_node_name_prefix(of_stdout, "vty")) return; termno = of_get_property(of_stdout, "reg", NULL); if (termno == NULL) diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c index dabb391909aa..99460af61b77 100644 --- a/drivers/tty/n_hdlc.c +++ b/drivers/tty/n_hdlc.c @@ -612,7 +612,7 @@ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file, } /* no data */ - if (file->f_flags & O_NONBLOCK) { + if (tty_io_nonblock(tty, file)) { ret = -EAGAIN; break; } @@ -679,7 +679,7 @@ static ssize_t n_hdlc_tty_write(struct tty_struct *tty, struct file *file, if (tbuf) break; - if (file->f_flags & O_NONBLOCK) { + if (tty_io_nonblock(tty, file)) { error = -EAGAIN; break; } diff --git a/drivers/tty/n_r3964.c b/drivers/tty/n_r3964.c index 749a608c40b0..f75696f0ee2d 100644 --- a/drivers/tty/n_r3964.c +++ b/drivers/tty/n_r3964.c @@ -1085,7 +1085,7 @@ static ssize_t r3964_read(struct tty_struct *tty, struct file *file, pMsg = remove_msg(pInfo, pClient); if (pMsg == NULL) { /* no messages available. 
*/ - if (file->f_flags & O_NONBLOCK) { + if (tty_io_nonblock(tty, file)) { ret = -EAGAIN; goto unlock; } diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c index 3ad460219fd6..5dc9686697cf 100644 --- a/drivers/tty/n_tty.c +++ b/drivers/tty/n_tty.c @@ -1702,7 +1702,7 @@ n_tty_receive_buf_common(struct tty_struct *tty, const unsigned char *cp, down_read(&tty->termios_rwsem); - while (1) { + do { /* * When PARMRK is set, each input char may take up to 3 chars * in the read buf; reduce the buffer space avail by 3x @@ -1744,7 +1744,7 @@ n_tty_receive_buf_common(struct tty_struct *tty, const unsigned char *cp, fp += n; count -= n; rcvd += n; - } + } while (!test_bit(TTY_LDISC_CHANGING, &tty->flags)); tty->receive_room = room; @@ -2211,7 +2211,7 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, break; if (!timeout) break; - if (file->f_flags & O_NONBLOCK) { + if (tty_io_nonblock(tty, file)) { retval = -EAGAIN; break; } @@ -2365,7 +2365,7 @@ static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, } if (!nr) break; - if (file->f_flags & O_NONBLOCK) { + if (tty_io_nonblock(tty, file)) { retval = -EAGAIN; break; } diff --git a/drivers/tty/serdev/core.c b/drivers/tty/serdev/core.c index 9db93f500b4e..a0ac16ee6575 100644 --- a/drivers/tty/serdev/core.c +++ b/drivers/tty/serdev/core.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include @@ -216,6 +217,21 @@ void serdev_device_write_wakeup(struct serdev_device *serdev) } EXPORT_SYMBOL_GPL(serdev_device_write_wakeup); +/** + * serdev_device_write_buf() - write data asynchronously + * @serdev: serdev device + * @buf: data to be written + * @count: number of bytes to write + * + * Write data to the device asynchronously. + * + * Note that any accepted data has only been buffered by the controller; use + * serdev_device_wait_until_sent() to make sure the controller write buffer + * has actually been emptied. + * + * Return: The number of bytes written (less than count if not enough room in + * the write buffer), or a negative errno on errors. + */ int serdev_device_write_buf(struct serdev_device *serdev, const unsigned char *buf, size_t count) { @@ -228,17 +244,42 @@ int serdev_device_write_buf(struct serdev_device *serdev, } EXPORT_SYMBOL_GPL(serdev_device_write_buf); +/** + * serdev_device_write() - write data synchronously + * @serdev: serdev device + * @buf: data to be written + * @count: number of bytes to write + * @timeout: timeout in jiffies, or 0 to wait indefinitely + * + * Write data to the device synchronously by repeatedly calling + * serdev_device_write() until the controller has accepted all data (unless + * interrupted by a timeout or a signal). + * + * Note that any accepted data has only been buffered by the controller; use + * serdev_device_wait_until_sent() to make sure the controller write buffer + * has actually been emptied. + * + * Note that this function depends on serdev_device_write_wakeup() being + * called in the serdev driver write_wakeup() callback. + * + * Return: The number of bytes written (less than count if interrupted), + * -ETIMEDOUT or -ERESTARTSYS if interrupted before any bytes were written, or + * a negative errno on errors. 
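Given the semantics documented above, a client might drive the call as follows. This is only a sketch of a hypothetical serdev user; foo_send_cmd, the -EIO policy on partial writes, and the 100 ms timeouts are illustrative, not part of the API (assumes <linux/serdev.h> and <linux/jiffies.h>):

static int foo_send_cmd(struct serdev_device *serdev,
                        const unsigned char *cmd, size_t len)
{
        int ret;

        /* blocks until the controller has accepted everything, or times out */
        ret = serdev_device_write(serdev, cmd, len, msecs_to_jiffies(100));
        if (ret < 0)
                return ret;             /* -ETIMEDOUT, -ERESTARTSYS, ... */
        if (ret < (int)len)
                return -EIO;            /* partial write: caller's policy */

        /* accepted data may still sit in the controller's write buffer */
        serdev_device_wait_until_sent(serdev, msecs_to_jiffies(100));
        return 0;
}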
+ */ int serdev_device_write(struct serdev_device *serdev, const unsigned char *buf, size_t count, - unsigned long timeout) + long timeout) { struct serdev_controller *ctrl = serdev->ctrl; + int written = 0; int ret; - if (!ctrl || !ctrl->ops->write_buf || - (timeout && !serdev->ops->write_wakeup)) + if (!ctrl || !ctrl->ops->write_buf || !serdev->ops->write_wakeup) return -EINVAL; + if (timeout == 0) + timeout = MAX_SCHEDULE_TIMEOUT; + mutex_lock(&serdev->write_lock); do { reinit_completion(&serdev->write_comp); @@ -247,14 +288,29 @@ int serdev_device_write(struct serdev_device *serdev, if (ret < 0) break; + written += ret; buf += ret; count -= ret; - } while (count && - (timeout = wait_for_completion_timeout(&serdev->write_comp, - timeout))); + if (count == 0) + break; + + timeout = wait_for_completion_interruptible_timeout(&serdev->write_comp, + timeout); + } while (timeout > 0); mutex_unlock(&serdev->write_lock); - return ret < 0 ? ret : (count ? -ETIMEDOUT : 0); + + if (ret < 0) + return ret; + + if (timeout <= 0 && written == 0) { + if (timeout == -ERESTARTSYS) + return -ERESTARTSYS; + else + return -ETIMEDOUT; + } + + return written; } EXPORT_SYMBOL_GPL(serdev_device_write); diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c index 435bec40dee6..0438d9a905ce 100644 --- a/drivers/tty/serial/8250/8250_aspeed_vuart.c +++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c @@ -5,6 +5,10 @@ * Copyright (C) 2016 Jeremy Kerr , IBM Corp. * Copyright (C) 2006 Arnd Bergmann , IBM Corp. */ +#if defined(CONFIG_SERIAL_8250_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ) +#define SUPPORT_SYSRQ +#endif + #include #include #include @@ -293,7 +297,7 @@ static int aspeed_vuart_handle_irq(struct uart_port *port) if (lsr & UART_LSR_THRE) serial8250_tx_chars(up); - spin_unlock_irqrestore(&port->lock, flags); + uart_unlock_and_check_sysrq(port, flags); return 1; } diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c index 94f3e1c64490..189ab1212d9a 100644 --- a/drivers/tty/serial/8250/8250_core.c +++ b/drivers/tty/serial/8250/8250_core.c @@ -942,6 +942,21 @@ static struct uart_8250_port *serial8250_find_match_or_unused(struct uart_port * return NULL; } +static void serial_8250_overrun_backoff_work(struct work_struct *work) +{ + struct uart_8250_port *up = + container_of(to_delayed_work(work), struct uart_8250_port, + overrun_backoff); + struct uart_port *port = &up->port; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + up->ier |= UART_IER_RLSI | UART_IER_RDI; + up->port.read_status_mask |= UART_LSR_DR; + serial_out(up, UART_IER, up->ier); + spin_unlock_irqrestore(&port->lock, flags); +} + /** * serial8250_register_8250_port - register a serial port * @up: serial port template @@ -1056,6 +1071,16 @@ int serial8250_register_8250_port(struct uart_8250_port *up) ret = 0; } } + + /* Initialise interrupt backoff work if required */ + if (up->overrun_backoff_time_ms > 0) { + uart->overrun_backoff_time_ms = up->overrun_backoff_time_ms; + INIT_DELAYED_WORK(&uart->overrun_backoff, + serial_8250_overrun_backoff_work); + } else { + uart->overrun_backoff_time_ms = 0; + } + mutex_unlock(&serial_mutex); return ret; diff --git a/drivers/tty/serial/8250/8250_fsl.c b/drivers/tty/serial/8250/8250_fsl.c index 6640a4c7ddd1..aa0e216d5ead 100644 --- a/drivers/tty/serial/8250/8250_fsl.c +++ b/drivers/tty/serial/8250/8250_fsl.c @@ -1,4 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 +#if defined(CONFIG_SERIAL_8250_CONSOLE) && 
defined(CONFIG_MAGIC_SYSRQ) +#define SUPPORT_SYSRQ +#endif + #include #include @@ -45,8 +49,29 @@ int fsl8250_handle_irq(struct uart_port *port) lsr = orig_lsr = up->port.serial_in(&up->port, UART_LSR); - if (lsr & (UART_LSR_DR | UART_LSR_BI)) + /* Process incoming characters first */ + if ((lsr & (UART_LSR_DR | UART_LSR_BI)) && + (up->ier & (UART_IER_RLSI | UART_IER_RDI))) { lsr = serial8250_rx_chars(up, lsr); + } + + /* Stop processing interrupts on input overrun */ + if ((orig_lsr & UART_LSR_OE) && (up->overrun_backoff_time_ms > 0)) { + unsigned long delay; + + up->ier = port->serial_in(port, UART_IER); + if (up->ier & (UART_IER_RLSI | UART_IER_RDI)) { + port->ops->stop_rx(port); + } else { + /* Keep restarting the timer until + * the input overrun subsides. + */ + cancel_delayed_work(&up->overrun_backoff); + } + + delay = msecs_to_jiffies(up->overrun_backoff_time_ms); + schedule_delayed_work(&up->overrun_backoff, delay); + } serial8250_modem_status(up); @@ -54,7 +79,7 @@ int fsl8250_handle_irq(struct uart_port *port) serial8250_tx_chars(up); up->lsr_saved_flags = orig_lsr; - spin_unlock_irqrestore(&up->port.lock, flags); + uart_unlock_and_check_sysrq(&up->port, flags); return 1; } EXPORT_SYMBOL_GPL(fsl8250_handle_irq); diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c index c3f933d10295..e2c407656fa6 100644 --- a/drivers/tty/serial/8250/8250_mtk.c +++ b/drivers/tty/serial/8250/8250_mtk.c @@ -14,6 +14,10 @@ #include #include #include +#include +#include +#include +#include #include "8250.h" @@ -22,12 +26,172 @@ #define UART_MTK_SAMPLE_POINT 0x0b /* Sample point register */ #define MTK_UART_RATE_FIX 0x0d /* UART Rate Fix Register */ +#define MTK_UART_DMA_EN 0x13 /* DMA Enable register */ +#define MTK_UART_DMA_EN_TX 0x2 +#define MTK_UART_DMA_EN_RX 0x5 + +#define MTK_UART_TX_SIZE UART_XMIT_SIZE +#define MTK_UART_RX_SIZE 0x8000 +#define MTK_UART_TX_TRIGGER 1 +#define MTK_UART_RX_TRIGGER MTK_UART_RX_SIZE + +#ifdef CONFIG_SERIAL_8250_DMA +enum dma_rx_status { + DMA_RX_START = 0, + DMA_RX_RUNNING = 1, + DMA_RX_SHUTDOWN = 2, +}; +#endif + struct mtk8250_data { int line; + unsigned int rx_pos; struct clk *uart_clk; struct clk *bus_clk; + struct uart_8250_dma *dma; +#ifdef CONFIG_SERIAL_8250_DMA + enum dma_rx_status rx_status; +#endif }; +#ifdef CONFIG_SERIAL_8250_DMA +static void mtk8250_rx_dma(struct uart_8250_port *up); + +static void mtk8250_dma_rx_complete(void *param) +{ + struct uart_8250_port *up = param; + struct uart_8250_dma *dma = up->dma; + struct mtk8250_data *data = up->port.private_data; + struct tty_port *tty_port = &up->port.state->port; + struct dma_tx_state state; + unsigned char *ptr; + int copied; + + dma_sync_single_for_cpu(dma->rxchan->device->dev, dma->rx_addr, + dma->rx_size, DMA_FROM_DEVICE); + + dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); + + if (data->rx_status == DMA_RX_SHUTDOWN) + return; + + if ((data->rx_pos + state.residue) <= dma->rx_size) { + ptr = (unsigned char *)(data->rx_pos + dma->rx_buf); + copied = tty_insert_flip_string(tty_port, ptr, state.residue); + } else { + ptr = (unsigned char *)(data->rx_pos + dma->rx_buf); + copied = tty_insert_flip_string(tty_port, ptr, + dma->rx_size - data->rx_pos); + ptr = (unsigned char *)(dma->rx_buf); + copied += tty_insert_flip_string(tty_port, ptr, + data->rx_pos + state.residue - dma->rx_size); + } + up->port.icount.rx += copied; + + tty_flip_buffer_push(tty_port); + + mtk8250_rx_dma(up); +} + +static void mtk8250_rx_dma(struct uart_8250_port *up) +{ + struct 
uart_8250_dma *dma = up->dma; + struct mtk8250_data *data = up->port.private_data; + struct dma_async_tx_descriptor *desc; + struct dma_tx_state state; + + desc = dmaengine_prep_slave_single(dma->rxchan, dma->rx_addr, + dma->rx_size, DMA_DEV_TO_MEM, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!desc) { + pr_err("failed to prepare rx slave single\n"); + return; + } + + desc->callback = mtk8250_dma_rx_complete; + desc->callback_param = up; + + dma->rx_cookie = dmaengine_submit(desc); + + dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); + data->rx_pos = state.residue; + + dma_sync_single_for_device(dma->rxchan->device->dev, dma->rx_addr, + dma->rx_size, DMA_FROM_DEVICE); + + dma_async_issue_pending(dma->rxchan); +} + +static void mtk8250_dma_enable(struct uart_8250_port *up) +{ + struct uart_8250_dma *dma = up->dma; + struct mtk8250_data *data = up->port.private_data; + int lcr = serial_in(up, UART_LCR); + + if (data->rx_status != DMA_RX_START) + return; + + dma->rxconf.direction = DMA_DEV_TO_MEM; + dma->rxconf.src_addr_width = dma->rx_size / 1024; + dma->rxconf.src_addr = dma->rx_addr; + + dma->txconf.direction = DMA_MEM_TO_DEV; + dma->txconf.dst_addr_width = MTK_UART_TX_SIZE / 1024; + dma->txconf.dst_addr = dma->tx_addr; + + serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR_CLEAR_RCVR | + UART_FCR_CLEAR_XMIT); + serial_out(up, MTK_UART_DMA_EN, + MTK_UART_DMA_EN_RX | MTK_UART_DMA_EN_TX); + + serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B); + serial_out(up, UART_EFR, UART_EFR_ECB); + serial_out(up, UART_LCR, lcr); + + if (dmaengine_slave_config(dma->rxchan, &dma->rxconf) != 0) + pr_err("failed to configure rx dma channel\n"); + if (dmaengine_slave_config(dma->txchan, &dma->txconf) != 0) + pr_err("failed to configure tx dma channel\n"); + + data->rx_status = DMA_RX_RUNNING; + data->rx_pos = 0; + mtk8250_rx_dma(up); +} +#endif + +static int mtk8250_startup(struct uart_port *port) +{ +#ifdef CONFIG_SERIAL_8250_DMA + struct uart_8250_port *up = up_to_u8250p(port); + struct mtk8250_data *data = port->private_data; + + /* disable DMA for console */ + if (uart_console(port)) + up->dma = NULL; + + if (up->dma) { + data->rx_status = DMA_RX_START; + uart_circ_clear(&port->state->xmit); + } +#endif + memset(&port->icount, 0, sizeof(port->icount)); + + return serial8250_do_startup(port); +} + +static void mtk8250_shutdown(struct uart_port *port) +{ +#ifdef CONFIG_SERIAL_8250_DMA + struct uart_8250_port *up = up_to_u8250p(port); + struct mtk8250_data *data = port->private_data; + + if (up->dma) + data->rx_status = DMA_RX_SHUTDOWN; +#endif + + return serial8250_do_shutdown(port); +} + static void mtk8250_set_termios(struct uart_port *port, struct ktermios *termios, struct ktermios *old) @@ -36,6 +200,17 @@ mtk8250_set_termios(struct uart_port *port, struct ktermios *termios, unsigned long flags; unsigned int baud, quot; +#ifdef CONFIG_SERIAL_8250_DMA + if (up->dma) { + if (uart_console(port)) { + devm_kfree(up->port.dev, up->dma); + up->dma = NULL; + } else { + mtk8250_dma_enable(up); + } + } +#endif + serial8250_do_set_termios(port, termios, old); /* @@ -143,9 +318,20 @@ mtk8250_do_pm(struct uart_port *port, unsigned int state, unsigned int old) pm_runtime_put_sync_suspend(port->dev); } +#ifdef CONFIG_SERIAL_8250_DMA +static bool mtk8250_dma_filter(struct dma_chan *chan, void *param) +{ + return false; +} +#endif + static int mtk8250_probe_of(struct platform_device *pdev, struct uart_port *p, struct mtk8250_data *data) { +#ifdef CONFIG_SERIAL_8250_DMA + int dmacnt; +#endif + data->uart_clk = 
devm_clk_get(&pdev->dev, "baud"); if (IS_ERR(data->uart_clk)) { /* @@ -162,7 +348,23 @@ static int mtk8250_probe_of(struct platform_device *pdev, struct uart_port *p, } data->bus_clk = devm_clk_get(&pdev->dev, "bus"); - return PTR_ERR_OR_ZERO(data->bus_clk); + if (IS_ERR(data->bus_clk)) + return PTR_ERR(data->bus_clk); + + data->dma = NULL; +#ifdef CONFIG_SERIAL_8250_DMA + dmacnt = of_property_count_strings(pdev->dev.of_node, "dma-names"); + if (dmacnt == 2) { + data->dma = devm_kzalloc(&pdev->dev, sizeof(*data->dma), + GFP_KERNEL); + data->dma->fn = mtk8250_dma_filter; + data->dma->rx_size = MTK_UART_RX_SIZE; + data->dma->rxconf.src_maxburst = MTK_UART_RX_TRIGGER; + data->dma->txconf.dst_maxburst = MTK_UART_TX_TRIGGER; + } +#endif + + return 0; } static int mtk8250_probe(struct platform_device *pdev) @@ -204,8 +406,14 @@ static int mtk8250_probe(struct platform_device *pdev) uart.port.iotype = UPIO_MEM32; uart.port.regshift = 2; uart.port.private_data = data; + uart.port.shutdown = mtk8250_shutdown; + uart.port.startup = mtk8250_startup; uart.port.set_termios = mtk8250_set_termios; uart.port.uartclk = clk_get_rate(data->uart_clk); +#ifdef CONFIG_SERIAL_8250_DMA + if (data->dma) + uart.dma = data->dma; +#endif /* Disable Rate Fix function */ writel(0x0, uart.port.membase + diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c index 877fd7f8a8ed..a1a85805d010 100644 --- a/drivers/tty/serial/8250/8250_of.c +++ b/drivers/tty/serial/8250/8250_of.c @@ -240,6 +240,11 @@ static int of_platform_serial_probe(struct platform_device *ofdev) if (of_property_read_bool(ofdev->dev.of_node, "auto-flow-control")) port8250.capabilities |= UART_CAP_AFE; + if (of_property_read_u32(ofdev->dev.of_node, + "overrun-throttle-ms", + &port8250.overrun_backoff_time_ms) != 0) + port8250.overrun_backoff_time_ms = 0; + ret = serial8250_register_8250_port(&port8250); if (ret < 0) goto err_dispose; diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c index a019286f8bb6..ad7ba7d0f28d 100644 --- a/drivers/tty/serial/8250/8250_omap.c +++ b/drivers/tty/serial/8250/8250_omap.c @@ -8,6 +8,10 @@ * */ +#if defined(CONFIG_SERIAL_8250_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ) +#define SUPPORT_SYSRQ +#endif + #include #include #include @@ -1085,7 +1089,7 @@ static int omap_8250_dma_handle_irq(struct uart_port *port) } } - spin_unlock_irqrestore(&port->lock, flags); + uart_unlock_and_check_sysrq(port, flags); serial8250_rpm_put(up); return 1; } diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c index 3f779d25ec0c..d2f3310abe54 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -1736,7 +1736,7 @@ void serial8250_read_char(struct uart_8250_port *up, unsigned char lsr) else if (lsr & UART_LSR_FE) flag = TTY_FRAME; } - if (uart_handle_sysrq_char(port, ch)) + if (uart_prepare_sysrq_char(port, ch)) return; uart_insert_char(port, lsr, UART_LSR_OE, ch, flag); @@ -1878,7 +1878,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) if ((!up->dma || up->dma->tx_err) && (status & UART_LSR_THRE)) serial8250_tx_chars(up); - spin_unlock_irqrestore(&port->lock, flags); + uart_unlock_and_check_sysrq(port, flags); return 1; } EXPORT_SYMBOL_GPL(serial8250_handle_irq); @@ -3239,9 +3239,7 @@ void serial8250_console_write(struct uart_8250_port *up, const char *s, serial8250_rpm_get(up); - if (port->sysrq) - locked = 0; - else if (oops_in_progress) + if (oops_in_progress) locked = 
spin_trylock_irqsave(&port->lock, flags); else spin_lock_irqsave(&port->lock, flags); diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c index ebd33c0232e6..89ade213a1a9 100644 --- a/drivers/tty/serial/amba-pl011.c +++ b/drivers/tty/serial/amba-pl011.c @@ -2780,6 +2780,7 @@ static struct platform_driver arm_sbsa_uart_platform_driver = { .name = "sbsa-uart", .of_match_table = of_match_ptr(sbsa_uart_of_match), .acpi_match_table = ACPI_PTR(sbsa_uart_acpi_match), + .suppress_bind_attrs = IS_BUILTIN(CONFIG_SERIAL_AMBA_PL011), }, }; @@ -2808,6 +2809,7 @@ static struct amba_driver pl011_driver = { .drv = { .name = "uart-pl011", .pm = &pl011_dev_pm_ops, + .suppress_bind_attrs = IS_BUILTIN(CONFIG_SERIAL_AMBA_PL011), }, .id_table = pl011_ids, .probe = pl011_probe, diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c index 00c220e4f43c..241a48e5052c 100644 --- a/drivers/tty/serial/fsl_lpuart.c +++ b/drivers/tty/serial/fsl_lpuart.c @@ -1479,6 +1479,8 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios, else cr1 &= ~UARTCR1_PT; } + } else { + cr1 &= ~UARTCR1_PE; } /* ask the core to calculate the divisor */ @@ -1682,7 +1684,7 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios, ctrl &= ~UARTCTRL_PE; ctrl |= UARTCTRL_M; } else { - ctrl |= UARTCR1_PE; + ctrl |= UARTCTRL_PE; if ((termios->c_cflag & CSIZE) == CS8) ctrl |= UARTCTRL_M; if (termios->c_cflag & PARODD) @@ -1690,6 +1692,8 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios, else ctrl &= ~UARTCTRL_PT; } + } else { + ctrl &= ~UARTCTRL_PE; } /* ask the core to calculate the divisor */ diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c index d4e051b578f6..dff75dc94731 100644 --- a/drivers/tty/serial/imx.c +++ b/drivers/tty/serial/imx.c @@ -2064,7 +2064,7 @@ imx_uart_console_setup(struct console *co, char *options) retval = clk_prepare(sport->clk_per); if (retval) - clk_disable_unprepare(sport->clk_ipg); + clk_unprepare(sport->clk_ipg); error_console: return retval; diff --git a/drivers/tty/serial/lantiq.c b/drivers/tty/serial/lantiq.c index 044128277248..e052b69ceb98 100644 --- a/drivers/tty/serial/lantiq.c +++ b/drivers/tty/serial/lantiq.c @@ -8,24 +8,23 @@ * Copyright (C) 2010 Thomas Langer, */ -#include -#include -#include +#include #include -#include #include -#include -#include -#include -#include -#include +#include +#include +#include +#include +#include #include #include -#include -#include -#include - -#include +#include +#include +#include +#include +#include +#include +#include #define PORT_LTQ_ASC 111 #define MAXPORTS 2 @@ -105,7 +104,7 @@ static DEFINE_SPINLOCK(ltq_asc_lock); struct ltq_uart_port { struct uart_port port; /* clock used to derive divider */ - struct clk *fpiclk; + struct clk *freqclk; /* clock gating of the ASC core */ struct clk *clk; unsigned int tx_irq; @@ -113,6 +112,13 @@ struct ltq_uart_port { unsigned int err_irq; }; +static inline void asc_update_bits(u32 clear, u32 set, void __iomem *reg) +{ + u32 tmp = readl(reg); + + writel((tmp & ~clear) | set, reg); +} + static inline struct ltq_uart_port *to_ltq_uart_port(struct uart_port *port) { @@ -138,7 +144,7 @@ lqasc_start_tx(struct uart_port *port) static void lqasc_stop_rx(struct uart_port *port) { - ltq_w32(ASCWHBSTATE_CLRREN, port->membase + LTQ_ASC_WHBSTATE); + writel(ASCWHBSTATE_CLRREN, port->membase + LTQ_ASC_WHBSTATE); } static int @@ -147,11 +153,11 @@ lqasc_rx_chars(struct uart_port *port) struct tty_port *tport = 
&port->state->port; unsigned int ch = 0, rsr = 0, fifocnt; - fifocnt = ltq_r32(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_RXFFLMASK; + fifocnt = readl(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_RXFFLMASK; while (fifocnt--) { u8 flag = TTY_NORMAL; - ch = ltq_r8(port->membase + LTQ_ASC_RBUF); - rsr = (ltq_r32(port->membase + LTQ_ASC_STATE) + ch = readb(port->membase + LTQ_ASC_RBUF); + rsr = (readl(port->membase + LTQ_ASC_STATE) & ASCSTATE_ANY) | UART_DUMMY_UER_RX; tty_flip_buffer_push(tport); port->icount.rx++; @@ -163,16 +169,16 @@ lqasc_rx_chars(struct uart_port *port) if (rsr & ASCSTATE_ANY) { if (rsr & ASCSTATE_PE) { port->icount.parity++; - ltq_w32_mask(0, ASCWHBSTATE_CLRPE, + asc_update_bits(0, ASCWHBSTATE_CLRPE, port->membase + LTQ_ASC_WHBSTATE); } else if (rsr & ASCSTATE_FE) { port->icount.frame++; - ltq_w32_mask(0, ASCWHBSTATE_CLRFE, + asc_update_bits(0, ASCWHBSTATE_CLRFE, port->membase + LTQ_ASC_WHBSTATE); } if (rsr & ASCSTATE_ROE) { port->icount.overrun++; - ltq_w32_mask(0, ASCWHBSTATE_CLRROE, + asc_update_bits(0, ASCWHBSTATE_CLRROE, port->membase + LTQ_ASC_WHBSTATE); } @@ -211,10 +217,10 @@ lqasc_tx_chars(struct uart_port *port) return; } - while (((ltq_r32(port->membase + LTQ_ASC_FSTAT) & + while (((readl(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_TXFREEMASK) >> ASCFSTAT_TXFREEOFF) != 0) { if (port->x_char) { - ltq_w8(port->x_char, port->membase + LTQ_ASC_TBUF); + writeb(port->x_char, port->membase + LTQ_ASC_TBUF); port->icount.tx++; port->x_char = 0; continue; @@ -223,7 +229,7 @@ lqasc_tx_chars(struct uart_port *port) if (uart_circ_empty(xmit)) break; - ltq_w8(port->state->xmit.buf[port->state->xmit.tail], + writeb(port->state->xmit.buf[port->state->xmit.tail], port->membase + LTQ_ASC_TBUF); xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); port->icount.tx++; @@ -239,7 +245,7 @@ lqasc_tx_int(int irq, void *_port) unsigned long flags; struct uart_port *port = (struct uart_port *)_port; spin_lock_irqsave(<q_asc_lock, flags); - ltq_w32(ASC_IRNCR_TIR, port->membase + LTQ_ASC_IRNCR); + writel(ASC_IRNCR_TIR, port->membase + LTQ_ASC_IRNCR); spin_unlock_irqrestore(<q_asc_lock, flags); lqasc_start_tx(port); return IRQ_HANDLED; @@ -252,7 +258,7 @@ lqasc_err_int(int irq, void *_port) struct uart_port *port = (struct uart_port *)_port; spin_lock_irqsave(<q_asc_lock, flags); /* clear any pending interrupts */ - ltq_w32_mask(0, ASCWHBSTATE_CLRPE | ASCWHBSTATE_CLRFE | + asc_update_bits(0, ASCWHBSTATE_CLRPE | ASCWHBSTATE_CLRFE | ASCWHBSTATE_CLRROE, port->membase + LTQ_ASC_WHBSTATE); spin_unlock_irqrestore(<q_asc_lock, flags); return IRQ_HANDLED; @@ -264,7 +270,7 @@ lqasc_rx_int(int irq, void *_port) unsigned long flags; struct uart_port *port = (struct uart_port *)_port; spin_lock_irqsave(<q_asc_lock, flags); - ltq_w32(ASC_IRNCR_RIR, port->membase + LTQ_ASC_IRNCR); + writel(ASC_IRNCR_RIR, port->membase + LTQ_ASC_IRNCR); lqasc_rx_chars(port); spin_unlock_irqrestore(<q_asc_lock, flags); return IRQ_HANDLED; @@ -274,7 +280,7 @@ static unsigned int lqasc_tx_empty(struct uart_port *port) { int status; - status = ltq_r32(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_TXFFLMASK; + status = readl(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_TXFFLMASK; return status ? 
0 : TIOCSER_TEMT; } @@ -301,18 +307,18 @@ lqasc_startup(struct uart_port *port) int retval; if (!IS_ERR(ltq_port->clk)) - clk_enable(ltq_port->clk); - port->uartclk = clk_get_rate(ltq_port->fpiclk); + clk_prepare_enable(ltq_port->clk); + port->uartclk = clk_get_rate(ltq_port->freqclk); - ltq_w32_mask(ASCCLC_DISS | ASCCLC_RMCMASK, (1 << ASCCLC_RMCOFFSET), + asc_update_bits(ASCCLC_DISS | ASCCLC_RMCMASK, (1 << ASCCLC_RMCOFFSET), port->membase + LTQ_ASC_CLC); - ltq_w32(0, port->membase + LTQ_ASC_PISEL); - ltq_w32( + writel(0, port->membase + LTQ_ASC_PISEL); + writel( ((TXFIFO_FL << ASCTXFCON_TXFITLOFF) & ASCTXFCON_TXFITLMASK) | ASCTXFCON_TXFEN | ASCTXFCON_TXFFLU, port->membase + LTQ_ASC_TXFCON); - ltq_w32( + writel( ((RXFIFO_FL << ASCRXFCON_RXFITLOFF) & ASCRXFCON_RXFITLMASK) | ASCRXFCON_RXFEN | ASCRXFCON_RXFFLU, port->membase + LTQ_ASC_RXFCON); @@ -320,7 +326,7 @@ lqasc_startup(struct uart_port *port) * setting enable bits */ wmb(); - ltq_w32_mask(0, ASCCON_M_8ASYNC | ASCCON_FEN | ASCCON_TOEN | + asc_update_bits(0, ASCCON_M_8ASYNC | ASCCON_FEN | ASCCON_TOEN | ASCCON_ROEN, port->membase + LTQ_ASC_CON); retval = request_irq(ltq_port->tx_irq, lqasc_tx_int, @@ -344,7 +350,7 @@ lqasc_startup(struct uart_port *port) goto err2; } - ltq_w32(ASC_IRNREN_RX | ASC_IRNREN_ERR | ASC_IRNREN_TX, + writel(ASC_IRNREN_RX | ASC_IRNREN_ERR | ASC_IRNREN_TX, port->membase + LTQ_ASC_IRNREN); return 0; @@ -363,13 +369,13 @@ lqasc_shutdown(struct uart_port *port) free_irq(ltq_port->rx_irq, port); free_irq(ltq_port->err_irq, port); - ltq_w32(0, port->membase + LTQ_ASC_CON); - ltq_w32_mask(ASCRXFCON_RXFEN, ASCRXFCON_RXFFLU, + writel(0, port->membase + LTQ_ASC_CON); + asc_update_bits(ASCRXFCON_RXFEN, ASCRXFCON_RXFFLU, port->membase + LTQ_ASC_RXFCON); - ltq_w32_mask(ASCTXFCON_TXFEN, ASCTXFCON_TXFFLU, + asc_update_bits(ASCTXFCON_TXFEN, ASCTXFCON_TXFFLU, port->membase + LTQ_ASC_TXFCON); if (!IS_ERR(ltq_port->clk)) - clk_disable(ltq_port->clk); + clk_disable_unprepare(ltq_port->clk); } static void @@ -438,7 +444,7 @@ lqasc_set_termios(struct uart_port *port, spin_lock_irqsave(<q_asc_lock, flags); /* set up CON */ - ltq_w32_mask(0, con, port->membase + LTQ_ASC_CON); + asc_update_bits(0, con, port->membase + LTQ_ASC_CON); /* Set baud rate - take a divider of 2 into account */ baud = uart_get_baud_rate(port, new, old, 0, port->uartclk / 16); @@ -446,22 +452,22 @@ lqasc_set_termios(struct uart_port *port, divisor = divisor / 2 - 1; /* disable the baudrate generator */ - ltq_w32_mask(ASCCON_R, 0, port->membase + LTQ_ASC_CON); + asc_update_bits(ASCCON_R, 0, port->membase + LTQ_ASC_CON); /* make sure the fractional divider is off */ - ltq_w32_mask(ASCCON_FDE, 0, port->membase + LTQ_ASC_CON); + asc_update_bits(ASCCON_FDE, 0, port->membase + LTQ_ASC_CON); /* set up to use divisor of 2 */ - ltq_w32_mask(ASCCON_BRS, 0, port->membase + LTQ_ASC_CON); + asc_update_bits(ASCCON_BRS, 0, port->membase + LTQ_ASC_CON); /* now we can write the new baudrate into the register */ - ltq_w32(divisor, port->membase + LTQ_ASC_BG); + writel(divisor, port->membase + LTQ_ASC_BG); /* turn the baudrate generator back on */ - ltq_w32_mask(0, ASCCON_R, port->membase + LTQ_ASC_CON); + asc_update_bits(0, ASCCON_R, port->membase + LTQ_ASC_CON); /* enable rx */ - ltq_w32(ASCWHBSTATE_SETREN, port->membase + LTQ_ASC_WHBSTATE); + writel(ASCWHBSTATE_SETREN, port->membase + LTQ_ASC_WHBSTATE); spin_unlock_irqrestore(<q_asc_lock, flags); @@ -572,10 +578,10 @@ lqasc_console_putchar(struct uart_port *port, int ch) return; do { - fifofree = (ltq_r32(port->membase + 
LTQ_ASC_FSTAT) + fifofree = (readl(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_TXFREEMASK) >> ASCFSTAT_TXFREEOFF; } while (fifofree == 0); - ltq_w8(ch, port->membase + LTQ_ASC_TBUF); + writeb(ch, port->membase + LTQ_ASC_TBUF); } static void lqasc_serial_port_write(struct uart_port *port, const char *s, @@ -623,9 +629,9 @@ lqasc_console_setup(struct console *co, char *options) port = <q_port->port; if (!IS_ERR(ltq_port->clk)) - clk_enable(ltq_port->clk); + clk_prepare_enable(ltq_port->clk); - port->uartclk = clk_get_rate(ltq_port->fpiclk); + port->uartclk = clk_get_rate(ltq_port->freqclk); if (options) uart_parse_options(options, &baud, &parity, &bits, &flow); @@ -688,7 +694,7 @@ lqasc_probe(struct platform_device *pdev) struct ltq_uart_port *ltq_port; struct uart_port *port; struct resource *mmres, irqres[3]; - int line = 0; + int line; int ret; mmres = platform_get_resource(pdev, IORESOURCE_MEM, 0); @@ -699,9 +705,20 @@ lqasc_probe(struct platform_device *pdev) return -ENODEV; } - /* check if this is the console port */ - if (mmres->start != CPHYSADDR(LTQ_EARLY_ASC)) - line = 1; + /* get serial id */ + line = of_alias_get_id(node, "serial"); + if (line < 0) { + if (IS_ENABLED(CONFIG_LANTIQ)) { + if (mmres->start == CPHYSADDR(LTQ_EARLY_ASC)) + line = 0; + else + line = 1; + } else { + dev_err(&pdev->dev, "failed to get alias id, errno %d\n", + line); + return line; + } + } if (lqasc_port[line]) { dev_err(&pdev->dev, "port %d already allocated\n", line); @@ -726,14 +743,22 @@ lqasc_probe(struct platform_device *pdev) port->irq = irqres[0].start; port->mapbase = mmres->start; - ltq_port->fpiclk = clk_get_fpi(); - if (IS_ERR(ltq_port->fpiclk)) { + if (IS_ENABLED(CONFIG_LANTIQ) && !IS_ENABLED(CONFIG_COMMON_CLK)) + ltq_port->freqclk = clk_get_fpi(); + else + ltq_port->freqclk = devm_clk_get(&pdev->dev, "freq"); + + + if (IS_ERR(ltq_port->freqclk)) { pr_err("failed to get fpi clk\n"); return -ENOENT; } /* not all asc ports have clock gates, lets ignore the return code */ - ltq_port->clk = clk_get(&pdev->dev, NULL); + if (IS_ENABLED(CONFIG_LANTIQ) && !IS_ENABLED(CONFIG_COMMON_CLK)) + ltq_port->clk = clk_get(&pdev->dev, NULL); + else + ltq_port->clk = devm_clk_get(&pdev->dev, "asc"); ltq_port->tx_irq = irqres[0].start; ltq_port->rx_irq = irqres[1].start; @@ -759,7 +784,7 @@ static struct platform_driver lqasc_driver = { }, }; -int __init +static int __init init_lqasc(void) { int ret; diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c index 3db48fcd6068..4f479841769a 100644 --- a/drivers/tty/serial/max310x.c +++ b/drivers/tty/serial/max310x.c @@ -833,12 +833,9 @@ static void max310x_wq_proc(struct work_struct *ws) static unsigned int max310x_tx_empty(struct uart_port *port) { - unsigned int lvl, sts; + u8 lvl = max310x_port_read(port, MAX310X_TXFIFOLVL_REG); - lvl = max310x_port_read(port, MAX310X_TXFIFOLVL_REG); - sts = max310x_port_read(port, MAX310X_IRQSTS_REG); - - return ((sts & MAX310X_IRQ_TXEMPTY_BIT) && !lvl) ? TIOCSER_TEMT : 0; + return lvl ? 
0 : TIOCSER_TEMT; } static unsigned int max310x_get_mctrl(struct uart_port *port) diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c index 170e446a2f62..231f751d1ef4 100644 --- a/drivers/tty/serial/mvebu-uart.c +++ b/drivers/tty/serial/mvebu-uart.c @@ -72,6 +72,8 @@ #define BRDV_BAUD_MASK 0x3FF #define UART_OSAMP 0x14 +#define OSAMP_DEFAULT_DIVISOR 16 +#define OSAMP_DIVISORS_MASK 0x3F3F3F3F #define MVEBU_NR_UARTS 2 @@ -444,25 +446,34 @@ static void mvebu_uart_shutdown(struct uart_port *port) static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud) { struct mvebu_uart *mvuart = to_mvuart(port); - unsigned int baud_rate_div; - u32 brdv; + unsigned int d_divisor, m_divisor; + u32 brdv, osamp; if (IS_ERR(mvuart->clk)) return -PTR_ERR(mvuart->clk); /* - * The UART clock is divided by the value of the divisor to generate - * UCLK_OUT clock, which is 16 times faster than the baudrate. - * This prescaler can achieve all standard baudrates until 230400. - * Higher baudrates could be achieved for the extended UART by using the - * programmable oversampling stack (also called fractional divisor). + * The baudrate is derived from the UART clock thanks to two divisors: + * > D ("baud generator"): can divide the clock from 2 to 2^10 - 1. + * > M ("fractional divisor"): allows a better accuracy for + * baudrates higher than 230400. + * + * As the derivation of M is rather complicated, the code sticks to its + * default value (x16) when all the prescalers are zeroed, and only + * makes use of D to configure the desired baudrate. */ - baud_rate_div = DIV_ROUND_UP(port->uartclk, baud * 16); + m_divisor = OSAMP_DEFAULT_DIVISOR; + d_divisor = DIV_ROUND_UP(port->uartclk, baud * m_divisor); + brdv = readl(port->membase + UART_BRDV); brdv &= ~BRDV_BAUD_MASK; - brdv |= baud_rate_div; + brdv |= d_divisor; writel(brdv, port->membase + UART_BRDV); + osamp = readl(port->membase + UART_OSAMP); + osamp &= ~OSAMP_DIVISORS_MASK; + writel(osamp, port->membase + UART_OSAMP); + return 0; } diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c index cb85002a10d8..9ed121f08a54 100644 --- a/drivers/tty/serial/pch_uart.c +++ b/drivers/tty/serial/pch_uart.c @@ -933,7 +933,6 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv) struct scatterlist *sg; int nent; int fifo_size; - int tx_empty; struct dma_async_tx_descriptor *desc; int num; int i; @@ -958,11 +957,9 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv) } fifo_size = max(priv->fifo_size, 1); - tx_empty = 1; if (pop_tx_x(priv, xmit->buf)) { pch_uart_hal_write(priv, xmit->buf, 1); port->icount.tx++; - tx_empty = 0; fifo_size--; } diff --git a/drivers/tty/serial/pic32_uart.c b/drivers/tty/serial/pic32_uart.c index fd80d999308d..0bdf1687983f 100644 --- a/drivers/tty/serial/pic32_uart.c +++ b/drivers/tty/serial/pic32_uart.c @@ -919,6 +919,7 @@ static struct platform_driver pic32_uart_platform_driver = { .driver = { .name = PIC32_DEV_NAME, .of_match_table = of_match_ptr(pic32_serial_dt_ids), + .suppress_bind_attrs = IS_BUILTIN(CONFIG_SERIAL_PIC32), }, }; diff --git a/drivers/tty/serial/pmac_zilog.c b/drivers/tty/serial/pmac_zilog.c index a9d40988e1c8..bcb5bf70534e 100644 --- a/drivers/tty/serial/pmac_zilog.c +++ b/drivers/tty/serial/pmac_zilog.c @@ -1648,9 +1648,9 @@ static int __init pmz_probe(void) */ node_a = node_b = NULL; for (np = NULL; (np = of_get_next_child(node_p, np)) != NULL;) { - if (strncmp(np->name, "ch-a", 4) == 0) + if (of_node_name_prefix(np, "ch-a")) node_a = 
of_node_get(np); - else if (strncmp(np->name, "ch-b", 4) == 0) + else if (of_node_name_prefix(np, "ch-b")) node_b = of_node_get(np); } if (!node_a && !node_b) { diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c index d3b5261ee80a..a72d6d9fb983 100644 --- a/drivers/tty/serial/qcom_geni_serial.c +++ b/drivers/tty/serial/qcom_geni_serial.c @@ -1,6 +1,10 @@ // SPDX-License-Identifier: GPL-2.0 // Copyright (c) 2017-2018, The Linux foundation. All rights reserved. +#if defined(CONFIG_SERIAL_QCOM_GENI_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ) +# define SUPPORT_SYSRQ +#endif + #include #include #include @@ -89,9 +93,9 @@ #define MAX_LOOPBACK_CFG 3 #ifdef CONFIG_CONSOLE_POLL -#define RX_BYTES_PW 1 +#define CONSOLE_RX_BYTES_PW 1 #else -#define RX_BYTES_PW 4 +#define CONSOLE_RX_BYTES_PW 4 #endif struct qcom_geni_serial_port { @@ -113,6 +117,8 @@ struct qcom_geni_serial_port { u32 *rx_fifo; u32 loopback; bool brk; + + unsigned int tx_remaining; }; static const struct uart_ops qcom_geni_console_pops; @@ -162,8 +168,7 @@ static struct qcom_geni_serial_port qcom_geni_uart_ports[GENI_UART_PORTS] = { static ssize_t loopback_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct platform_device *pdev = to_platform_device(dev); - struct qcom_geni_serial_port *port = platform_get_drvdata(pdev); + struct qcom_geni_serial_port *port = dev_get_drvdata(dev); return snprintf(buf, sizeof(u32), "%d\n", port->loopback); } @@ -172,8 +177,7 @@ static ssize_t loopback_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t size) { - struct platform_device *pdev = to_platform_device(dev); - struct qcom_geni_serial_port *port = platform_get_drvdata(pdev); + struct qcom_geni_serial_port *port = dev_get_drvdata(dev); u32 loopback; if (kstrtoint(buf, 0, &loopback) || loopback > MAX_LOOPBACK_CFG) { @@ -435,6 +439,8 @@ static void qcom_geni_serial_console_write(struct console *co, const char *s, struct qcom_geni_serial_port *port; bool locked = true; unsigned long flags; + u32 geni_status; + u32 irq_en; WARN_ON(co->index < 0 || co->index >= GENI_UART_CONS_PORTS); @@ -448,6 +454,8 @@ static void qcom_geni_serial_console_write(struct console *co, const char *s, else spin_lock_irqsave(&uport->lock, flags); + geni_status = readl_relaxed(uport->membase + SE_GENI_STATUS); + /* Cancel the current write to log the fault */ if (!locked) { geni_se_cancel_m_cmd(&port->se); @@ -461,9 +469,26 @@ static void qcom_geni_serial_console_write(struct console *co, const char *s, } writel_relaxed(M_CMD_CANCEL_EN, uport->membase + SE_GENI_M_IRQ_CLEAR); + } else if ((geni_status & M_GENI_CMD_ACTIVE) && !port->tx_remaining) { + /* + * It seems we can't interrupt existing transfers if all data + * has been sent, in which case we need to look for done first. 
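Because the console path may run with interrupts off, waiting for command completion has to be a busy poll; the driver open-codes this in qcom_geni_serial_poll_tx_done(). The same pattern can be expressed with readl_poll_timeout_atomic() from <linux/iopoll.h>. A hedged sketch that reuses the driver's register and bit names but with illustrative poll interval and timeout:

/* busy-wait until the current main-sequencer command reports done */
static int example_wait_m_cmd_done(void __iomem *base)
{
        u32 status;

        /* poll every 10 us, give up after 100 ms */
        return readl_poll_timeout_atomic(base + SE_GENI_M_IRQ_STATUS, status,
                                         status & M_CMD_DONE_EN, 10, 100000);
}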
+ */ + qcom_geni_serial_poll_tx_done(uport); + + if (uart_circ_chars_pending(&uport->state->xmit)) { + irq_en = readl_relaxed(uport->membase + + SE_GENI_M_IRQ_EN); + writel_relaxed(irq_en | M_TX_FIFO_WATERMARK_EN, + uport->membase + SE_GENI_M_IRQ_EN); + } } __qcom_geni_serial_console_write(uport, s, count); + + if (port->tx_remaining) + qcom_geni_serial_setup_tx(uport, port->tx_remaining); + if (locked) spin_unlock_irqrestore(&uport->lock, flags); } @@ -495,7 +520,8 @@ static int handle_rx_console(struct uart_port *uport, u32 bytes, bool drop) continue; } - sysrq = uart_handle_sysrq_char(uport, buf[c]); + sysrq = uart_prepare_sysrq_char(uport, buf[c]); + if (!sysrq) tty_insert_flip_char(tport, buf[c], TTY_NORMAL); } @@ -694,40 +720,51 @@ static void qcom_geni_serial_handle_rx(struct uart_port *uport, bool drop) port->handle_rx(uport, total_bytes, drop); } -static void qcom_geni_serial_handle_tx(struct uart_port *uport) +static void qcom_geni_serial_handle_tx(struct uart_port *uport, bool done, + bool active) { struct qcom_geni_serial_port *port = to_dev_port(uport, uport); struct circ_buf *xmit = &uport->state->xmit; size_t avail; size_t remaining; + size_t pending; int i; u32 status; + u32 irq_en; unsigned int chunk; int tail; - u32 irq_en; - chunk = uart_circ_chars_pending(xmit); status = readl_relaxed(uport->membase + SE_GENI_TX_FIFO_STATUS); - /* Both FIFO and framework buffer are drained */ - if (!chunk && !status) { + + /* Complete the current tx command before taking newly added data */ + if (active) + pending = port->tx_remaining; + else + pending = uart_circ_chars_pending(xmit); + + /* All data has been transmitted and acknowledged as received */ + if (!pending && !status && done) { qcom_geni_serial_stop_tx(uport); goto out_write_wakeup; } - if (!uart_console(uport)) { - irq_en = readl_relaxed(uport->membase + SE_GENI_M_IRQ_EN); - irq_en &= ~(M_TX_FIFO_WATERMARK_EN); - writel_relaxed(0, uport->membase + SE_GENI_TX_WATERMARK_REG); - writel_relaxed(irq_en, uport->membase + SE_GENI_M_IRQ_EN); - } + avail = port->tx_fifo_depth - (status & TX_FIFO_WC); + avail *= port->tx_bytes_pw; - avail = (port->tx_fifo_depth - port->tx_wm) * port->tx_bytes_pw; tail = xmit->tail; - chunk = min3((size_t)chunk, (size_t)(UART_XMIT_SIZE - tail), avail); + chunk = min(avail, pending); if (!chunk) goto out_write_wakeup; - qcom_geni_serial_setup_tx(uport, chunk); + if (!port->tx_remaining) { + qcom_geni_serial_setup_tx(uport, pending); + port->tx_remaining = pending; + + irq_en = readl_relaxed(uport->membase + SE_GENI_M_IRQ_EN); + if (!(irq_en & M_TX_FIFO_WATERMARK_EN)) + writel_relaxed(irq_en | M_TX_FIFO_WATERMARK_EN, + uport->membase + SE_GENI_M_IRQ_EN); + } remaining = chunk; for (i = 0; i < chunk; ) { @@ -737,21 +774,38 @@ static void qcom_geni_serial_handle_tx(struct uart_port *uport) memset(buf, 0, ARRAY_SIZE(buf)); tx_bytes = min_t(size_t, remaining, port->tx_bytes_pw); - for (c = 0; c < tx_bytes ; c++) - buf[c] = xmit->buf[tail + c]; + + for (c = 0; c < tx_bytes ; c++) { + buf[c] = xmit->buf[tail++]; + tail &= UART_XMIT_SIZE - 1; + } iowrite32_rep(uport->membase + SE_GENI_TX_FIFOn, buf, 1); i += tx_bytes; - tail += tx_bytes; uport->icount.tx += tx_bytes; remaining -= tx_bytes; + port->tx_remaining -= tx_bytes; } - xmit->tail = tail & (UART_XMIT_SIZE - 1); - if (uart_console(uport)) - qcom_geni_serial_poll_tx_done(uport); + xmit->tail = tail; + + /* + * The tx fifo watermark is level triggered and latched. 
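The reworked FIFO fill just above advances xmit->tail one byte at a time and wraps it with a power-of-two mask, instead of clamping each chunk at the end of the circular buffer. The arithmetic in isolation, as a standalone runnable example (RING_SIZE stands in for UART_XMIT_SIZE, which is likewise a power of two):

#include <stdio.h>

#define RING_SIZE 16    /* must be a power of two for the mask to work */

int main(void)
{
        const char ring[RING_SIZE + 1] = "abcdefghijklmnop";
        unsigned int tail = 14; /* two bytes before the wrap point */
        char out[5] = { 0 };
        int i;

        /* consume 4 bytes across the wrap, as the FIFO fill loop does */
        for (i = 0; i < 4; i++) {
                out[i] = ring[tail++];
                tail &= RING_SIZE - 1;  /* wrap without a branch */
        }

        printf("%s, tail=%u\n", out, tail);     /* prints "opab, tail=2" */
        return 0;
}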
Though we had + * cleared it in qcom_geni_serial_isr it will have already reasserted + * so we must clear it again here after our writes. + */ + writel_relaxed(M_TX_FIFO_WATERMARK_EN, + uport->membase + SE_GENI_M_IRQ_CLEAR); + out_write_wakeup: + if (!port->tx_remaining) { + irq_en = readl_relaxed(uport->membase + SE_GENI_M_IRQ_EN); + if (irq_en & M_TX_FIFO_WATERMARK_EN) + writel_relaxed(irq_en & ~M_TX_FIFO_WATERMARK_EN, + uport->membase + SE_GENI_M_IRQ_EN); + } + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) uart_write_wakeup(uport); } @@ -760,6 +814,7 @@ static irqreturn_t qcom_geni_serial_isr(int isr, void *dev) { unsigned int m_irq_status; unsigned int s_irq_status; + unsigned int geni_status; struct uart_port *uport = dev; unsigned long flags; unsigned int m_irq_en; @@ -773,6 +828,7 @@ static irqreturn_t qcom_geni_serial_isr(int isr, void *dev) spin_lock_irqsave(&uport->lock, flags); m_irq_status = readl_relaxed(uport->membase + SE_GENI_M_IRQ_STATUS); s_irq_status = readl_relaxed(uport->membase + SE_GENI_S_IRQ_STATUS); + geni_status = readl_relaxed(uport->membase + SE_GENI_STATUS); m_irq_en = readl_relaxed(uport->membase + SE_GENI_M_IRQ_EN); writel_relaxed(m_irq_status, uport->membase + SE_GENI_M_IRQ_CLEAR); writel_relaxed(s_irq_status, uport->membase + SE_GENI_S_IRQ_CLEAR); @@ -785,9 +841,9 @@ static irqreturn_t qcom_geni_serial_isr(int isr, void *dev) tty_insert_flip_char(tport, 0, TTY_OVERRUN); } - if (m_irq_status & (M_TX_FIFO_WATERMARK_EN | M_CMD_DONE_EN) && - m_irq_en & (M_TX_FIFO_WATERMARK_EN | M_CMD_DONE_EN)) - qcom_geni_serial_handle_tx(uport); + if (m_irq_status & m_irq_en & (M_TX_FIFO_WATERMARK_EN | M_CMD_DONE_EN)) + qcom_geni_serial_handle_tx(uport, m_irq_status & M_CMD_DONE_EN, + geni_status & M_GENI_CMD_ACTIVE); if (s_irq_status & S_GP_IRQ_0_EN || s_irq_status & S_GP_IRQ_1_EN) { if (s_irq_status & S_GP_IRQ_0_EN) @@ -804,7 +860,8 @@ static irqreturn_t qcom_geni_serial_isr(int isr, void *dev) qcom_geni_serial_handle_rx(uport, drop_rx); out_unlock: - spin_unlock_irqrestore(&uport->lock, flags); + uart_unlock_and_check_sysrq(uport, flags); + return IRQ_HANDLED; } @@ -853,11 +910,13 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport) unsigned int rxstale = DEFAULT_BITS_PER_CHAR * STALE_TIMEOUT; u32 proto; - if (uart_console(uport)) + if (uart_console(uport)) { port->tx_bytes_pw = 1; - else + port->rx_bytes_pw = CONSOLE_RX_BYTES_PW; + } else { port->tx_bytes_pw = 4; - port->rx_bytes_pw = RX_BYTES_PW; + port->rx_bytes_pw = 4; + } proto = geni_se_read_proto(&port->se); if (proto != GENI_SE_UART) { @@ -1322,49 +1381,25 @@ static int qcom_geni_serial_remove(struct platform_device *pdev) return 0; } -static int __maybe_unused qcom_geni_serial_sys_suspend_noirq(struct device *dev) +static int __maybe_unused qcom_geni_serial_sys_suspend(struct device *dev) { struct qcom_geni_serial_port *port = dev_get_drvdata(dev); struct uart_port *uport = &port->uport; - if (uart_console(uport)) { - uart_suspend_port(uport->private_data, uport); - } else { - struct uart_state *state = uport->state; - /* - * If the port is open, deny system suspend. 
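The replacement below drops this console/non-console special-casing: both paths now defer entirely to the serial core. Any UART driver can use the same shape; a minimal sketch with hypothetical foo_* names (assumes <linux/serial_core.h> and <linux/pm.h>):

static int __maybe_unused foo_uart_suspend(struct device *dev)
{
        struct foo_uart *fu = dev_get_drvdata(dev);

        return uart_suspend_port(&foo_uart_driver, &fu->port);
}

static int __maybe_unused foo_uart_resume(struct device *dev)
{
        struct foo_uart *fu = dev_get_drvdata(dev);

        return uart_resume_port(&foo_uart_driver, &fu->port);
}

static SIMPLE_DEV_PM_OPS(foo_uart_pm_ops, foo_uart_suspend, foo_uart_resume);

The diff itself uses SET_SYSTEM_SLEEP_PM_OPS() inside an explicit dev_pm_ops; the two forms are equivalent for sleep-only callbacks.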
- */ - if (state->pm_state == UART_PM_STATE_ON) - return -EBUSY; - } - - return 0; + return uart_suspend_port(uport->private_data, uport); } -static int __maybe_unused qcom_geni_serial_sys_resume_noirq(struct device *dev) +static int __maybe_unused qcom_geni_serial_sys_resume(struct device *dev) { struct qcom_geni_serial_port *port = dev_get_drvdata(dev); struct uart_port *uport = &port->uport; - if (uart_console(uport) && - console_suspend_enabled && uport->suspended) { - uart_resume_port(uport->private_data, uport); - /* - * uart_suspend_port() invokes port shutdown which in turn - * frees the irq. uart_resume_port invokes port startup which - * performs request_irq. The request_irq auto-enables the IRQ. - * In addition, resume_noirq implicitly enables the IRQ and - * leads to an unbalanced IRQ enable warning. Disable the IRQ - * before returning so that the warning is suppressed. - */ - disable_irq(uport->irq); - } - return 0; + return uart_resume_port(uport->private_data, uport); } static const struct dev_pm_ops qcom_geni_serial_pm_ops = { - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_geni_serial_sys_suspend_noirq, - qcom_geni_serial_sys_resume_noirq) + SET_SYSTEM_SLEEP_PM_OPS(qcom_geni_serial_sys_suspend, + qcom_geni_serial_sys_resume) }; static const struct of_device_id qcom_geni_serial_match_table[] = { diff --git a/drivers/tty/serial/samsung.c b/drivers/tty/serial/samsung.c index da1bd4bba8a9..9fc3559f80d9 100644 --- a/drivers/tty/serial/samsung.c +++ b/drivers/tty/serial/samsung.c @@ -1287,7 +1287,7 @@ static void s3c24xx_serial_set_termios(struct uart_port *port, * Ask the core to calculate the divisor for us. */ - baud = uart_get_baud_rate(port, termios, old, 0, 115200*8); + baud = uart_get_baud_rate(port, termios, old, 0, 3000000); quot = s3c24xx_serial_getclk(ourport, baud, &clk, &clk_sel); if (baud == 38400 && (port->flags & UPF_SPD_MASK) == UPF_SPD_CUST) quot = port->custom_divisor; @@ -1365,11 +1365,14 @@ static void s3c24xx_serial_set_termios(struct uart_port *port, wr_regl(port, S3C2410_ULCON, ulcon); wr_regl(port, S3C2410_UBRDIV, quot); + port->status &= ~UPSTAT_AUTOCTS; + umcon = rd_regl(port, S3C2410_UMCON); if (termios->c_cflag & CRTSCTS) { umcon |= S3C2410_UMCOM_AFC; /* Disable RTS when RX FIFO contains 63 bytes */ umcon &= ~S3C2412_UMCON_AFC_8; + port->status = UPSTAT_AUTOCTS; } else { umcon &= ~S3C2410_UMCOM_AFC; } diff --git a/drivers/tty/serial/sccnxp.c b/drivers/tty/serial/sccnxp.c index 339befdd2f4d..68a24a14f6b7 100644 --- a/drivers/tty/serial/sccnxp.c +++ b/drivers/tty/serial/sccnxp.c @@ -12,6 +12,7 @@ #endif #include +#include #include #include #include @@ -47,7 +48,6 @@ # define MR2_STOP1 (7 << 0) # define MR2_STOP2 (0xf << 0) #define SCCNXP_SR_REG (0x01) -#define SCCNXP_CSR_REG SCCNXP_SR_REG # define SR_RXRDY (1 << 0) # define SR_FULL (1 << 1) # define SR_TXRDY (1 << 2) @@ -56,6 +56,8 @@ # define SR_PE (1 << 5) # define SR_FE (1 << 6) # define SR_BRK (1 << 7) +#define SCCNXP_CSR_REG (SCCNXP_SR_REG) +# define CSR_TIMER_MODE (0x0d) #define SCCNXP_CR_REG (0x02) # define CR_RX_ENABLE (1 << 0) # define CR_RX_DISABLE (1 << 1) @@ -82,9 +84,12 @@ # define IMR_RXRDY (1 << 1) # define ISR_TXRDY(x) (1 << ((x * 4) + 0)) # define ISR_RXRDY(x) (1 << ((x * 4) + 1)) +#define SCCNXP_CTPU_REG (0x06) +#define SCCNXP_CTPL_REG (0x07) #define SCCNXP_IPR_REG (0x0d) #define SCCNXP_OPCR_REG SCCNXP_IPR_REG #define SCCNXP_SOP_REG (0x0e) +#define SCCNXP_START_COUNTER_REG SCCNXP_SOP_REG #define SCCNXP_ROP_REG (0x0f) /* Route helpers */ @@ -103,6 +108,8 @@ struct sccnxp_chip { unsigned long 
freq_max; unsigned int flags; unsigned int fifosize; + /* Time between read/write cycles */ + unsigned int trwd; }; struct sccnxp_port { @@ -137,6 +144,7 @@ static const struct sccnxp_chip sc2681 = { .freq_max = 4000000, .flags = SCCNXP_HAVE_IO, .fifosize = 3, + .trwd = 200, }; static const struct sccnxp_chip sc2691 = { @@ -147,6 +155,7 @@ static const struct sccnxp_chip sc2691 = { .freq_max = 4000000, .flags = 0, .fifosize = 3, + .trwd = 150, }; static const struct sccnxp_chip sc2692 = { @@ -157,6 +166,7 @@ static const struct sccnxp_chip sc2692 = { .freq_max = 4000000, .flags = SCCNXP_HAVE_IO, .fifosize = 3, + .trwd = 30, }; static const struct sccnxp_chip sc2891 = { @@ -167,6 +177,7 @@ static const struct sccnxp_chip sc2891 = { .freq_max = 8000000, .flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0, .fifosize = 16, + .trwd = 27, }; static const struct sccnxp_chip sc2892 = { @@ -177,6 +188,7 @@ static const struct sccnxp_chip sc2892 = { .freq_max = 8000000, .flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0, .fifosize = 16, + .trwd = 17, }; static const struct sccnxp_chip sc28202 = { @@ -187,6 +199,7 @@ static const struct sccnxp_chip sc28202 = { .freq_max = 50000000, .flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0, .fifosize = 256, + .trwd = 10, }; static const struct sccnxp_chip sc68681 = { @@ -197,6 +210,7 @@ static const struct sccnxp_chip sc68681 = { .freq_max = 4000000, .flags = SCCNXP_HAVE_IO, .fifosize = 3, + .trwd = 200, }; static const struct sccnxp_chip sc68692 = { @@ -207,24 +221,36 @@ static const struct sccnxp_chip sc68692 = { .freq_max = 4000000, .flags = SCCNXP_HAVE_IO, .fifosize = 3, + .trwd = 200, }; -static inline u8 sccnxp_read(struct uart_port *port, u8 reg) +static u8 sccnxp_read(struct uart_port *port, u8 reg) { - return readb(port->membase + (reg << port->regshift)); + struct sccnxp_port *s = dev_get_drvdata(port->dev); + u8 ret; + + ret = readb(port->membase + (reg << port->regshift)); + + ndelay(s->chip->trwd); + + return ret; } -static inline void sccnxp_write(struct uart_port *port, u8 reg, u8 v) +static void sccnxp_write(struct uart_port *port, u8 reg, u8 v) { + struct sccnxp_port *s = dev_get_drvdata(port->dev); + writeb(v, port->membase + (reg << port->regshift)); + + ndelay(s->chip->trwd); } -static inline u8 sccnxp_port_read(struct uart_port *port, u8 reg) +static u8 sccnxp_port_read(struct uart_port *port, u8 reg) { return sccnxp_read(port, (port->line << 3) + reg); } -static inline void sccnxp_port_write(struct uart_port *port, u8 reg, u8 v) +static void sccnxp_port_write(struct uart_port *port, u8 reg, u8 v) { sccnxp_write(port, (port->line << 3) + reg, v); } @@ -233,7 +259,7 @@ static int sccnxp_update_best_err(int a, int b, int *besterr) { int err = abs(a - b); - if ((*besterr < 0) || (*besterr > err)) { + if (*besterr > err) { *besterr = err; return 0; } @@ -281,10 +307,22 @@ static const struct { static int sccnxp_set_baud(struct uart_port *port, int baud) { struct sccnxp_port *s = dev_get_drvdata(port->dev); - int div_std, tmp_baud, bestbaud = baud, besterr = -1; + int div_std, tmp_baud, bestbaud = INT_MAX, besterr = INT_MAX; struct sccnxp_chip *chip = s->chip; u8 i, acr = 0, csr = 0, mr0 = 0; + /* Find divisor to load to the timer preset registers */ + div_std = DIV_ROUND_CLOSEST(port->uartclk, 2 * 16 * baud); + if ((div_std >= 2) && (div_std <= 0xffff)) { + bestbaud = DIV_ROUND_CLOSEST(port->uartclk, 2 * 16 * div_std); + sccnxp_update_best_err(baud, bestbaud, &besterr); + csr = CSR_TIMER_MODE; + sccnxp_port_write(port, SCCNXP_CTPU_REG, div_std >> 8); + 
sccnxp_port_write(port, SCCNXP_CTPL_REG, div_std); + /* Issue start timer/counter command */ + sccnxp_port_read(port, SCCNXP_START_COUNTER_REG); + } + /* Find best baud from table */ for (i = 0; baud_std[i].baud && besterr; i++) { if (baud_std[i].mr0 && !(chip->flags & SCCNXP_HAVE_MR0)) diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c index af2a29cfbbe9..d5269aaaf9b2 100644 --- a/drivers/tty/serial/serial-tegra.c +++ b/drivers/tty/serial/serial-tegra.c @@ -746,7 +746,7 @@ static void tegra_uart_stop_rx(struct uart_port *u) if (!tup->rx_in_progress) return; - tegra_uart_wait_sym_time(tup, 1); /* wait a character interval */ + tegra_uart_wait_sym_time(tup, 1); /* wait one character interval */ ier = tup->ier_shadow; ier &= ~(UART_IER_RDI | UART_IER_RLSI | UART_IER_RTOIE | @@ -887,7 +887,7 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup) * * EORD is different interrupt than RX_TIMEOUT - RX_TIMEOUT occurs when * the DATA is sitting in the FIFO and couldn't be transferred to the - * DMA as the DMA size alignment(4 bytes) is not met. EORD will be + * DMA as the DMA size alignment (4 bytes) is not met. EORD will be + triggered when there is a pause of the incoming data stream for 4 + characters long. + * @@ -1079,7 +1079,7 @@ static void tegra_uart_set_termios(struct uart_port *u, if (tup->rts_active) set_rts(tup, false); - /* Clear all interrupts as configuration is going to be change */ + /* Clear all interrupts as configuration is going to be changed */ tegra_uart_write(tup, tup->ier_shadow | UART_IER_RDI, UART_IER); tegra_uart_read(tup, UART_IER); tegra_uart_write(tup, 0, UART_IER); @@ -1165,10 +1165,10 @@ static void tegra_uart_set_termios(struct uart_port *u, /* update the port timeout based on new settings */ uart_update_timeout(u, termios->c_cflag, baud); - /* Make sure all write has completed */ + /* Make sure all writes have completed */ tegra_uart_read(tup, UART_IER); - /* Reenable interrupt */ + /* Re-enable interrupt */ tegra_uart_write(tup, tup->ier_shadow, UART_IER); tegra_uart_read(tup, UART_IER); diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c index c439a5a1e6c0..d4cca5bdaf1c 100644 --- a/drivers/tty/serial/serial_core.c +++ b/drivers/tty/serial/serial_core.c @@ -205,10 +205,15 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state, if (!state->xmit.buf) { state->xmit.buf = (unsigned char *) page; uart_circ_clear(&state->xmit); + uart_port_unlock(uport, flags); } else { + uart_port_unlock(uport, flags); + /* + * Do not free() the page under the port lock, see + * uart_shutdown(). + */ free_page(page); } - uart_port_unlock(uport, flags); retval = uport->ops->startup(uport); if (retval == 0) { @@ -268,6 +273,7 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state) struct uart_port *uport = uart_port_check(state); struct tty_port *port = &state->port; unsigned long flags = 0; + char *xmit_buf = NULL; /* * Set the TTY IO error marker */ @@ -298,14 +304,18 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state) tty_port_set_suspended(port, 0); /* - * Free the transmit buffer page. + * Do not free() the transmit buffer page under the port lock since + * this can create various circular locking scenarios. For instance, + * the console driver may need to allocate/free a debug object, which + * can end up in printk() recursion.
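The serial_core change here is an instance of a general locking rule: detach the pointer under the lock, free it after unlocking, because the allocator itself may take locks or printk(). A generic, hedged sketch (struct example_state is hypothetical; assumes <linux/slab.h> and <linux/spinlock.h>):

struct example_state {
        spinlock_t lock;
        char *buf;
};

static void example_release_buf(struct example_state *st)
{
        unsigned long flags;
        char *buf;

        spin_lock_irqsave(&st->lock, flags);
        buf = st->buf;          /* detach under the lock... */
        st->buf = NULL;
        spin_unlock_irqrestore(&st->lock, flags);

        kfree(buf);             /* ...free outside it; kfree(NULL) is a no-op */
}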
*/ uart_port_lock(state, flags); - if (state->xmit.buf) { - free_page((unsigned long)state->xmit.buf); - state->xmit.buf = NULL; - } + xmit_buf = state->xmit.buf; + state->xmit.buf = NULL; uart_port_unlock(uport, flags); + + if (xmit_buf) + free_page((unsigned long)xmit_buf); } /** diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index cc56cb3b3eca..8df0fd824520 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -1331,7 +1331,7 @@ static void sci_tx_dma_release(struct sci_port *s) dma_release_channel(chan); } -static void sci_submit_rx(struct sci_port *s) +static int sci_submit_rx(struct sci_port *s, bool port_lock_held) { struct dma_chan *chan = s->chan_rx; struct uart_port *port = &s->port; @@ -1359,19 +1359,22 @@ static void sci_submit_rx(struct sci_port *s) s->active_rx = s->cookie_rx[0]; dma_async_issue_pending(chan); - return; + return 0; fail: + /* Switch to PIO */ + if (!port_lock_held) + spin_lock_irqsave(&port->lock, flags); if (i) dmaengine_terminate_async(chan); for (i = 0; i < 2; i++) s->cookie_rx[i] = -EINVAL; - s->active_rx = -EINVAL; - /* Switch to PIO */ - spin_lock_irqsave(&port->lock, flags); + s->active_rx = 0; s->chan_rx = NULL; sci_start_rx(port); - spin_unlock_irqrestore(&port->lock, flags); + if (!port_lock_held) + spin_unlock_irqrestore(&port->lock, flags); + return -EAGAIN; } static void work_fn_tx(struct work_struct *work) @@ -1491,7 +1494,7 @@ static enum hrtimer_restart rx_timer_fn(struct hrtimer *t) } if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) - sci_submit_rx(s); + sci_submit_rx(s, true); /* Direct new serial port interrupts back to CPU */ scr = serial_port_in(port, SCSCR); @@ -1617,7 +1620,7 @@ static void sci_request_dma(struct uart_port *port) s->chan_rx_saved = s->chan_rx = chan; if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) - sci_submit_rx(s); + sci_submit_rx(s, false); } } @@ -1666,8 +1669,10 @@ static irqreturn_t sci_rx_interrupt(int irq, void *ptr) disable_irq_nosync(irq); scr |= SCSCR_RDRQE; } else { + if (sci_submit_rx(s, false) < 0) + goto handle_pio; + scr &= ~SCSCR_RIE; - sci_submit_rx(s); } serial_port_out(port, SCSCR, scr); /* Clear current interrupt */ @@ -1679,6 +1684,8 @@ static irqreturn_t sci_rx_interrupt(int irq, void *ptr) return IRQ_HANDLED; } + +handle_pio: #endif if (s->rx_trigger > 1 && s->rx_fifo_timeout > 0) { @@ -1693,7 +1700,7 @@ static irqreturn_t sci_rx_interrupt(int irq, void *ptr) * of whether the I_IXOFF is set, otherwise, how is the interrupt * to be disabled? 
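The sh-sci hunks above make sci_submit_rx() report failure instead of silently leaving reception dead, so its callers can fall back to PIO. The resulting control flow, reduced to a skeleton with hypothetical names:

struct example_port {
        struct dma_chan *chan_rx;
};

int example_submit_rx_dma(struct example_port *p);      /* arms DMA, 0 on success */
void example_receive_chars_pio(struct example_port *p); /* drains the FIFO by hand */

static void example_handle_rx(struct example_port *p)
{
        /* prefer DMA; on setup failure fall back to PIO so RX stays alive */
        if (p->chan_rx && example_submit_rx_dma(p) == 0)
                return;         /* data arrives via the DMA completion callback */

        example_receive_chars_pio(p);
}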
*/ - sci_receive_chars(ptr); + sci_receive_chars(port); return IRQ_HANDLED; } @@ -1749,7 +1756,7 @@ static irqreturn_t sci_er_interrupt(int irq, void *ptr) } else { sci_handle_fifo_overrun(port); if (!s->chan_rx) - sci_receive_chars(ptr); + sci_receive_chars(port); } sci_clear_SCxSR(port, SCxSR_ERROR_CLEAR(port)); diff --git a/drivers/tty/serial/suncore.c b/drivers/tty/serial/suncore.c index 990376576970..2491551c293e 100644 --- a/drivers/tty/serial/suncore.c +++ b/drivers/tty/serial/suncore.c @@ -89,14 +89,14 @@ void sunserial_console_termios(struct console *con, struct device_node *uart_dp) int baud, bits, stop, cflag; char parity; - if (!strcmp(uart_dp->name, "rsc") || - !strcmp(uart_dp->name, "rsc-console") || - !strcmp(uart_dp->name, "rsc-control")) { + if (of_node_name_eq(uart_dp, "rsc") || + of_node_name_eq(uart_dp, "rsc-console") || + of_node_name_eq(uart_dp, "rsc-control")) { mode = of_get_property(uart_dp, "ssp-console-modes", NULL); if (!mode) mode = "115200,8,n,1,-"; - } else if (!strcmp(uart_dp->name, "lom-console")) { + } else if (of_node_name_eq(uart_dp, "lom-console")) { mode = "9600,8,n,1,-"; } else { struct device_node *dp; diff --git a/drivers/tty/serial/sunsu.c b/drivers/tty/serial/sunsu.c index 3e77475668c0..4db6aaa330b2 100644 --- a/drivers/tty/serial/sunsu.c +++ b/drivers/tty/serial/sunsu.c @@ -1503,8 +1503,8 @@ static int su_probe(struct platform_device *op) up->port.ops = &sunsu_pops; ignore_line = false; - if (!strcmp(dp->name, "rsc-console") || - !strcmp(dp->name, "lom-console")) + if (of_node_name_eq(dp, "rsc-console") || + of_node_name_eq(dp, "lom-console")) ignore_line = true; sunserial_console_match(SUNSU_CONSOLE(), dp, diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c index f0344adc86db..b8b912b5a8b9 100644 --- a/drivers/tty/serial/uartlite.c +++ b/drivers/tty/serial/uartlite.c @@ -22,6 +22,7 @@ #include #include #include +#include #define ULITE_NAME "ttyUL" #define ULITE_MAJOR 204 @@ -54,6 +55,7 @@ #define ULITE_CONTROL_RST_TX 0x01 #define ULITE_CONTROL_RST_RX 0x02 #define ULITE_CONTROL_IE 0x10 +#define UART_AUTOSUSPEND_TIMEOUT 3000 /* Static pointer to console port */ #ifdef CONFIG_SERIAL_UARTLITE_CONSOLE @@ -63,6 +65,7 @@ static struct uart_port *console_port; struct uartlite_data { const struct uartlite_reg_ops *reg_ops; struct clk *clk; + struct uart_driver *ulite_uart_driver; }; struct uartlite_reg_ops { @@ -390,12 +393,12 @@ static int ulite_verify_port(struct uart_port *port, struct serial_struct *ser) static void ulite_pm(struct uart_port *port, unsigned int state, unsigned int oldstate) { - struct uartlite_data *pdata = port->private_data; - - if (!state) - clk_enable(pdata->clk); - else - clk_disable(pdata->clk); + if (!state) { + pm_runtime_get_sync(port->dev); + } else { + pm_runtime_mark_last_busy(port->dev); + pm_runtime_put_autosuspend(port->dev); + } } #ifdef CONFIG_CONSOLE_POLL @@ -694,7 +697,9 @@ static int ulite_release(struct device *dev) int rc = 0; if (port) { - rc = uart_remove_one_port(&ulite_uart_driver, port); + struct uartlite_data *pdata = port->private_data; + + rc = uart_remove_one_port(pdata->ulite_uart_driver, port); dev_set_drvdata(dev, NULL); port->mapbase = 0; } @@ -712,8 +717,11 @@ static int __maybe_unused ulite_suspend(struct device *dev) { struct uart_port *port = dev_get_drvdata(dev); - if (port) - uart_suspend_port(&ulite_uart_driver, port); + if (port) { + struct uartlite_data *pdata = port->private_data; + + uart_suspend_port(pdata->ulite_uart_driver, port); + } return 0; } @@ -728,17 
+736,41 @@ static int __maybe_unused ulite_resume(struct device *dev) { struct uart_port *port = dev_get_drvdata(dev); - if (port) - uart_resume_port(&ulite_uart_driver, port); + if (port) { + struct uartlite_data *pdata = port->private_data; + + uart_resume_port(pdata->ulite_uart_driver, port); + } return 0; } +static int __maybe_unused ulite_runtime_suspend(struct device *dev) +{ + struct uart_port *port = dev_get_drvdata(dev); + struct uartlite_data *pdata = port->private_data; + + clk_disable(pdata->clk); + return 0; +}; + +static int __maybe_unused ulite_runtime_resume(struct device *dev) +{ + struct uart_port *port = dev_get_drvdata(dev); + struct uartlite_data *pdata = port->private_data; + + clk_enable(pdata->clk); + return 0; +} /* --------------------------------------------------------------------- * Platform bus binding */ -static SIMPLE_DEV_PM_OPS(ulite_pm_ops, ulite_suspend, ulite_resume); +static const struct dev_pm_ops ulite_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(ulite_suspend, ulite_resume) + SET_RUNTIME_PM_OPS(ulite_runtime_suspend, + ulite_runtime_resume, NULL) +}; #if defined(CONFIG_OF) /* Match table for of_platform binding */ @@ -763,6 +795,22 @@ static int ulite_probe(struct platform_device *pdev) if (prop) id = be32_to_cpup(prop); #endif + if (id < 0) { + /* Look for a serialN alias */ + id = of_alias_get_id(pdev->dev.of_node, "serial"); + if (id < 0) + id = 0; + } + + if (!ulite_uart_driver.state) { + dev_dbg(&pdev->dev, "uartlite: calling uart_register_driver()\n"); + ret = uart_register_driver(&ulite_uart_driver); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to register driver\n"); + return ret; + } + } + pdata = devm_kzalloc(&pdev->dev, sizeof(struct uartlite_data), GFP_KERNEL); if (!pdata) @@ -788,24 +836,22 @@ static int ulite_probe(struct platform_device *pdev) pdata->clk = NULL; } + pdata->ulite_uart_driver = &ulite_uart_driver; ret = clk_prepare_enable(pdata->clk); if (ret) { dev_err(&pdev->dev, "Failed to prepare clock\n"); return ret; } - if (!ulite_uart_driver.state) { - dev_dbg(&pdev->dev, "uartlite: calling uart_register_driver()\n"); - ret = uart_register_driver(&ulite_uart_driver); - if (ret < 0) { - dev_err(&pdev->dev, "Failed to register driver\n"); - return ret; - } - } + pm_runtime_use_autosuspend(&pdev->dev); + pm_runtime_set_autosuspend_delay(&pdev->dev, UART_AUTOSUSPEND_TIMEOUT); + pm_runtime_set_active(&pdev->dev); + pm_runtime_enable(&pdev->dev); ret = ulite_assign(&pdev->dev, id, res->start, irq, pdata); - clk_disable(pdata->clk); + pm_runtime_mark_last_busy(&pdev->dev); + pm_runtime_put_autosuspend(&pdev->dev); return ret; } @@ -814,9 +860,14 @@ static int ulite_remove(struct platform_device *pdev) { struct uart_port *port = dev_get_drvdata(&pdev->dev); struct uartlite_data *pdata = port->private_data; + int rc; - clk_disable_unprepare(pdata->clk); - return ulite_release(&pdev->dev); + clk_unprepare(pdata->clk); + rc = ulite_release(&pdev->dev); + pm_runtime_disable(&pdev->dev); + pm_runtime_set_suspended(&pdev->dev); + pm_runtime_dont_use_autosuspend(&pdev->dev); + return rc; } /* work with hotplug and coldplug */ diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c index 57c66d2c3471..094f2958cb2b 100644 --- a/drivers/tty/serial/xilinx_uartps.c +++ b/drivers/tty/serial/xilinx_uartps.c @@ -123,7 +123,7 @@ MODULE_PARM_DESC(rx_timeout, "Rx timeout, 1-255"); #define CDNS_UART_IXR_RXTRIG 0x00000001 /* RX FIFO trigger interrupt */ #define CDNS_UART_IXR_RXFULL 0x00000004 /* RX FIFO full interrupt. 
*/ #define CDNS_UART_IXR_RXEMPTY 0x00000002 /* RX FIFO empty interrupt. */ -#define CDNS_UART_IXR_MASK 0x00001FFF /* Valid bit mask */ +#define CDNS_UART_IXR_RXMASK 0x000021e7 /* Valid RX bit mask */ /* * Do not enable parity error interrupt for the following @@ -364,7 +364,7 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id) cdns_uart_handle_tx(dev_id); isrstatus &= ~CDNS_UART_IXR_TXEMPTY; } - if (isrstatus & CDNS_UART_IXR_MASK) + if (isrstatus & CDNS_UART_IXR_RXMASK) cdns_uart_handle_rx(dev_id, isrstatus); spin_unlock(&port->lock); @@ -1255,7 +1255,7 @@ static int cdns_uart_suspend(struct device *device) may_wake = device_may_wakeup(device); - if (console_suspend_enabled && may_wake) { + if (console_suspend_enabled && uart_console(port) && may_wake) { unsigned long flags = 0; spin_lock_irqsave(&port->lock, flags); @@ -1293,7 +1293,7 @@ static int cdns_uart_resume(struct device *device) may_wake = device_may_wakeup(device); - if (console_suspend_enabled && !may_wake) { + if (console_suspend_enabled && uart_console(port) && !may_wake) { clk_enable(cdns_uart->pclk); clk_enable(cdns_uart->uartclk); @@ -1508,8 +1508,10 @@ static int cdns_uart_probe(struct platform_device *pdev) #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE cdns_uart_console = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_console), GFP_KERNEL); - if (!cdns_uart_console) - return -ENOMEM; + if (!cdns_uart_console) { + rc = -ENOMEM; + goto err_out_id; + } strncpy(cdns_uart_console->name, CDNS_UART_TTY_NAME, sizeof(cdns_uart_console->name)); @@ -1624,6 +1626,7 @@ static int cdns_uart_probe(struct platform_device *pdev) pm_runtime_set_autosuspend_delay(&pdev->dev, UART_AUTOSUSPEND_TIMEOUT); pm_runtime_set_active(&pdev->dev); pm_runtime_enable(&pdev->dev); + device_init_wakeup(port->dev, true); #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE /* @@ -1702,6 +1705,7 @@ static int cdns_uart_remove(struct platform_device *pdev) pm_runtime_disable(&pdev->dev); pm_runtime_set_suspended(&pdev->dev); pm_runtime_dont_use_autosuspend(&pdev->dev); + device_init_wakeup(&pdev->dev, false); #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE if (console_port == port) @@ -1719,6 +1723,7 @@ static struct platform_driver cdns_uart_platform_driver = { .name = CDNS_UART_NAME, .of_match_table = cdns_uart_of_match, .pm = &cdns_uart_dev_pm_ops, + .suppress_bind_attrs = IS_BUILTIN(CONFIG_SERIAL_XILINX_PS_UART), }, }; diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c index ad1ee5d01b53..1f03078ec352 100644 --- a/drivers/tty/sysrq.c +++ b/drivers/tty/sysrq.c @@ -134,17 +134,10 @@ static struct sysrq_key_op sysrq_unraw_op = { static void sysrq_handle_crash(int key) { - char *killer = NULL; - - /* we need to release the RCU read lock here, - * otherwise we get an annoying - * 'BUG: sleeping function called from invalid context' - * complaint from the kernel before the panic. - */ + /* release the RCU read lock before crashing */ rcu_read_unlock(); - panic_on_oops = 1; /* force panic */ - wmb(); - *killer = 1; + + panic("sysrq triggered crash\n"); } static struct sysrq_key_op sysrq_crash_op = { .handler = sysrq_handle_crash, @@ -660,8 +653,7 @@ static void sysrq_do_reset(struct timer_list *t) state->reset_requested = true; - ksys_sync(); - kernel_restart(NULL); + orderly_reboot(); } static void sysrq_handle_reset_request(struct sysrq_state *state) @@ -736,6 +728,8 @@ static void sysrq_of_get_keyreset_config(void) /* Get reset timeout if any. 
*/ of_property_read_u32(np, "timeout-ms", &sysrq_reset_downtime_ms); + + of_node_put(np); } #else static void sysrq_of_get_keyreset_config(void) diff --git a/drivers/tty/tty_audit.c b/drivers/tty/tty_audit.c index 50f567b6a66e..28f87fd6a28e 100644 --- a/drivers/tty/tty_audit.c +++ b/drivers/tty/tty_audit.c @@ -61,20 +61,19 @@ static void tty_audit_log(const char *description, dev_t dev, unsigned char *data, size_t size) { struct audit_buffer *ab; - struct task_struct *tsk = current; - pid_t pid = task_pid_nr(tsk); - uid_t uid = from_kuid(&init_user_ns, task_uid(tsk)); - uid_t loginuid = from_kuid(&init_user_ns, audit_get_loginuid(tsk)); - unsigned int sessionid = audit_get_sessionid(tsk); + pid_t pid = task_pid_nr(current); + uid_t uid = from_kuid(&init_user_ns, task_uid(current)); + uid_t loginuid = from_kuid(&init_user_ns, audit_get_loginuid(current)); + unsigned int sessionid = audit_get_sessionid(current); ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_TTY); if (ab) { - char name[sizeof(tsk->comm)]; + char name[sizeof(current->comm)]; audit_log_format(ab, "%s pid=%u uid=%u auid=%u ses=%u major=%d" " minor=%d comm=", description, pid, uid, loginuid, sessionid, MAJOR(dev), MINOR(dev)); - get_task_comm(name, tsk); + get_task_comm(name, current); audit_log_untrustedstring(ab, name); audit_log_format(ab, " data="); audit_log_n_hex(ab, data, size); diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c index 687250ec8032..bfe9ad85b362 100644 --- a/drivers/tty/tty_io.c +++ b/drivers/tty/tty_io.c @@ -1268,14 +1268,16 @@ static int tty_reopen(struct tty_struct *tty) if (test_bit(TTY_EXCLUSIVE, &tty->flags) && !capable(CAP_SYS_ADMIN)) return -EBUSY; - tty->count++; + retval = tty_ldisc_lock(tty, 5 * HZ); + if (retval) + return retval; - if (tty->ldisc) - return 0; + if (!tty->ldisc) + retval = tty_ldisc_reinit(tty, tty->termios.c_line); + tty_ldisc_unlock(tty); - retval = tty_ldisc_reinit(tty, tty->termios.c_line); - if (retval) - tty->count--; + if (retval == 0) + tty->count++; return retval; } diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c index fc4c97cae01e..45eda69b150c 100644 --- a/drivers/tty/tty_ldisc.c +++ b/drivers/tty/tty_ldisc.c @@ -327,6 +327,11 @@ int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout) { int ret; + /* Kindly asking blocked readers to release the read side */ + set_bit(TTY_LDISC_CHANGING, &tty->flags); + wake_up_interruptible_all(&tty->read_wait); + wake_up_interruptible_all(&tty->write_wait); + ret = __tty_ldisc_lock(tty, timeout); if (!ret) return -EBUSY; @@ -337,6 +342,8 @@ int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout) void tty_ldisc_unlock(struct tty_struct *tty) { clear_bit(TTY_LDISC_HALTED, &tty->flags); + /* Can be cleared here - ldisc_unlock will wake up writers firstly */ + clear_bit(TTY_LDISC_CHANGING, &tty->flags); __tty_ldisc_unlock(tty); } @@ -471,6 +478,7 @@ static int tty_ldisc_open(struct tty_struct *tty, struct tty_ldisc *ld) static void tty_ldisc_close(struct tty_struct *tty, struct tty_ldisc *ld) { + lockdep_assert_held_exclusive(&tty->ldisc_sem); WARN_ON(!test_bit(TTY_LDISC_OPEN, &tty->flags)); clear_bit(TTY_LDISC_OPEN, &tty->flags); if (ld->ops->close) @@ -492,6 +500,7 @@ static int tty_ldisc_failto(struct tty_struct *tty, int ld) struct tty_ldisc *disc = tty_ldisc_get(tty, ld); int r; + lockdep_assert_held_exclusive(&tty->ldisc_sem); if (IS_ERR(disc)) return PTR_ERR(disc); tty->ldisc = disc; @@ -615,6 +624,7 @@ EXPORT_SYMBOL_GPL(tty_set_ldisc); */ static void tty_ldisc_kill(struct tty_struct *tty) { 
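/*
 * The lockdep_assert_held_exclusive() annotations added throughout
 * tty_ldisc.c below all document the same locking contract: the
 * function must be entered with tty->ldisc_sem held for writing.
 * A minimal sketch of the pattern follows; my_ldisc_helper() is a
 * hypothetical name used for illustration only, not part of this patch.
 */
static void my_ldisc_helper(struct tty_struct *tty)
{
	/*
	 * A no-op unless lockdep is enabled; with lockdep it warns if the
	 * caller does not hold ldisc_sem in write (exclusive) mode.
	 */
	lockdep_assert_held_exclusive(&tty->ldisc_sem);

	/* tty->ldisc cannot change or be freed while we are in here. */
}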
+ lockdep_assert_held_exclusive(&tty->ldisc_sem); if (!tty->ldisc) return; /* @@ -662,6 +672,7 @@ int tty_ldisc_reinit(struct tty_struct *tty, int disc) struct tty_ldisc *ld; int retval; + lockdep_assert_held_exclusive(&tty->ldisc_sem); ld = tty_ldisc_get(tty, disc); if (IS_ERR(ld)) { BUG_ON(disc == N_TTY); @@ -760,6 +771,10 @@ int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty) return retval; if (o_tty) { + /* + * Called without o_tty->ldisc_sem held, as o_tty has been + * just allocated and no one has a reference to it. + */ retval = tty_ldisc_open(o_tty, o_tty->ldisc); if (retval) { tty_ldisc_close(tty, tty->ldisc); @@ -825,6 +840,7 @@ int tty_ldisc_init(struct tty_struct *tty) */ void tty_ldisc_deinit(struct tty_struct *tty) { + /* no ldisc_sem, tty is being destroyed */ if (tty->ldisc) tty_ldisc_put(tty->ldisc); tty->ldisc = NULL; diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c index 0c98d88f795a..717292c1c0df 100644 --- a/drivers/tty/tty_ldsem.c +++ b/drivers/tty/tty_ldsem.c @@ -34,29 +34,6 @@ #include -#ifdef CONFIG_DEBUG_LOCK_ALLOC -# define __acq(l, s, t, r, c, n, i) \ - lock_acquire(&(l)->dep_map, s, t, r, c, n, i) -# define __rel(l, n, i) \ - lock_release(&(l)->dep_map, n, i) -#define lockdep_acquire(l, s, t, i) __acq(l, s, t, 0, 1, NULL, i) -#define lockdep_acquire_nest(l, s, t, n, i) __acq(l, s, t, 0, 1, n, i) -#define lockdep_acquire_read(l, s, t, i) __acq(l, s, t, 1, 1, NULL, i) -#define lockdep_release(l, n, i) __rel(l, n, i) -#else -# define lockdep_acquire(l, s, t, i) do { } while (0) -# define lockdep_acquire_nest(l, s, t, n, i) do { } while (0) -# define lockdep_acquire_read(l, s, t, i) do { } while (0) -# define lockdep_release(l, n, i) do { } while (0) -#endif - -#ifdef CONFIG_LOCK_STAT -# define lock_stat(_lock, stat) lock_##stat(&(_lock)->dep_map, _RET_IP_) -#else -# define lock_stat(_lock, stat) do { } while (0) -#endif - - #if BITS_PER_LONG == 64 # define LDSEM_ACTIVE_MASK 0xffffffffL #else @@ -235,6 +212,7 @@ down_read_failed(struct ld_semaphore *sem, long count, long timeout) raw_spin_lock_irq(&sem->wait_lock); if (waiter.task) { atomic_long_add_return(-LDSEM_WAIT_BIAS, &sem->count); + sem->wait_readers--; list_del(&waiter.list); raw_spin_unlock_irq(&sem->wait_lock); put_task_struct(waiter.task); @@ -293,6 +271,16 @@ down_write_failed(struct ld_semaphore *sem, long count, long timeout) if (!locked) atomic_long_add_return(-LDSEM_WAIT_BIAS, &sem->count); list_del(&waiter.list); + + /* + * In case of timeout, wake up every reader who gave the right of way + * to the writer. Prevent separating readers into two groups: + * one that holds the semaphore and another that sleeps 
+ * (in case of no contention with a writer) + */ + if (!locked && list_empty(&sem->write_wait)) + __ldsem_wake_readers(sem); + raw_spin_unlock_irq(&sem->wait_lock); __set_current_state(TASK_RUNNING); @@ -310,17 +298,17 @@ static int __ldsem_down_read_nested(struct ld_semaphore *sem, { long count; - lockdep_acquire_read(sem, subclass, 0, _RET_IP_); + rwsem_acquire_read(&sem->dep_map, subclass, 0, _RET_IP_); count = atomic_long_add_return(LDSEM_READ_BIAS, &sem->count); if (count <= 0) { - lock_stat(sem, contended); + lock_contended(&sem->dep_map, _RET_IP_); if (!down_read_failed(sem, count, timeout)) { - lockdep_release(sem, 1, _RET_IP_); + rwsem_release(&sem->dep_map, 1, _RET_IP_); return 0; } } - lock_stat(sem, acquired); + lock_acquired(&sem->dep_map, _RET_IP_); return 1; } @@ -329,17 +317,17 @@ static int __ldsem_down_write_nested(struct ld_semaphore *sem, { long count; - lockdep_acquire(sem, subclass, 0, _RET_IP_); + rwsem_acquire(&sem->dep_map, subclass, 0, _RET_IP_); count = atomic_long_add_return(LDSEM_WRITE_BIAS, &sem->count); if ((count & LDSEM_ACTIVE_MASK) != LDSEM_ACTIVE_BIAS) { - lock_stat(sem, contended); + lock_contended(&sem->dep_map, _RET_IP_); if (!down_write_failed(sem, count, timeout)) { - lockdep_release(sem, 1, _RET_IP_); + rwsem_release(&sem->dep_map, 1, _RET_IP_); return 0; } } - lock_stat(sem, acquired); + lock_acquired(&sem->dep_map, _RET_IP_); return 1; } @@ -362,8 +350,8 @@ int ldsem_down_read_trylock(struct ld_semaphore *sem) while (count >= 0) { if (atomic_long_try_cmpxchg(&sem->count, &count, count + LDSEM_READ_BIAS)) { - lockdep_acquire_read(sem, 0, 1, _RET_IP_); - lock_stat(sem, acquired); + rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_); + lock_acquired(&sem->dep_map, _RET_IP_); return 1; } } @@ -388,8 +376,8 @@ int ldsem_down_write_trylock(struct ld_semaphore *sem) while ((count & LDSEM_ACTIVE_MASK) == 0) { if (atomic_long_try_cmpxchg(&sem->count, &count, count + LDSEM_WRITE_BIAS)) { - lockdep_acquire(sem, 0, 1, _RET_IP_); - lock_stat(sem, acquired); + rwsem_acquire(&sem->dep_map, 0, 1, _RET_IP_); + lock_acquired(&sem->dep_map, _RET_IP_); return 1; } } @@ -403,7 +391,7 @@ void ldsem_up_read(struct ld_semaphore *sem) { long count; - lockdep_release(sem, 1, _RET_IP_); + rwsem_release(&sem->dep_map, 1, _RET_IP_); count = atomic_long_add_return(-LDSEM_READ_BIAS, &sem->count); if (count < 0 && (count & LDSEM_ACTIVE_MASK) == 0) @@ -417,7 +405,7 @@ void ldsem_up_write(struct ld_semaphore *sem) { long count; - lockdep_release(sem, 1, _RET_IP_); + rwsem_release(&sem->dep_map, 1, _RET_IP_); count = atomic_long_add_return(-LDSEM_WRITE_BIAS, &sem->count); if (count < 0) diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c index 0a357db4b31b..131342280b46 100644 --- a/drivers/uio/uio.c +++ b/drivers/uio/uio.c @@ -569,20 +569,20 @@ static ssize_t uio_read(struct file *filep, char __user *buf, ssize_t retval = 0; s32 event_count; - mutex_lock(&idev->info_lock); - if (!idev->info || !idev->info->irq) - retval = -EIO; - mutex_unlock(&idev->info_lock); - - if (retval) - return retval; - if (count != sizeof(s32)) return -EINVAL; add_wait_queue(&idev->wait, &wait); do { + mutex_lock(&idev->info_lock); + if (!idev->info || !idev->info->irq) { + retval = -EIO; + mutex_unlock(&idev->info_lock); + break; + } + mutex_unlock(&idev->info_lock); + set_current_state(TASK_INTERRUPTIBLE); event_count = atomic_read(&idev->event); @@ -1017,6 +1017,9 @@ void uio_unregister_device(struct uio_info *info) idev->info = NULL; mutex_unlock(&idev->info_lock); + 
wake_up_interruptible(&idev->wait); + kill_fasync(&idev->async_queue, SIGIO, POLL_HUP); + device_unregister(&idev->dev); return; diff --git a/drivers/uio/uio_fsl_elbc_gpcm.c b/drivers/uio/uio_fsl_elbc_gpcm.c index 9cc37fe07d35..0ee3cd3c25ee 100644 --- a/drivers/uio/uio_fsl_elbc_gpcm.c +++ b/drivers/uio/uio_fsl_elbc_gpcm.c @@ -74,8 +74,7 @@ DEVICE_ATTR(reg_or, S_IRUGO|S_IWUSR|S_IWGRP, reg_show, reg_store); static ssize_t reg_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct platform_device *pdev = to_platform_device(dev); - struct uio_info *info = platform_get_drvdata(pdev); + struct uio_info *info = dev_get_drvdata(dev); struct fsl_elbc_gpcm *priv = info->priv; struct fsl_lbc_bank *bank = &priv->lbc->bank[priv->bank]; @@ -94,8 +93,7 @@ static ssize_t reg_show(struct device *dev, struct device_attribute *attr, static ssize_t reg_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { - struct platform_device *pdev = to_platform_device(dev); - struct uio_info *info = platform_get_drvdata(pdev); + struct uio_info *info = dev_get_drvdata(dev); struct fsl_elbc_gpcm *priv = info->priv; struct fsl_lbc_bank *bank = &priv->lbc->bank[priv->bank]; unsigned long val; diff --git a/drivers/usb/Kconfig b/drivers/usb/Kconfig index 987fc5ba6321..70e6c956c23c 100644 --- a/drivers/usb/Kconfig +++ b/drivers/usb/Kconfig @@ -205,8 +205,4 @@ config USB_ULPI_BUS To compile this driver as a module, choose M here: the module will be called ulpi. -config USB_ROLE_SWITCH - tristate - select USB_COMMON - endif # USB_SUPPORT diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c index 09b37c0d075d..e81de9ca8729 100644 --- a/drivers/usb/chipidea/ci_hdrc_imx.c +++ b/drivers/usb/chipidea/ci_hdrc_imx.c @@ -14,6 +14,7 @@ #include #include #include +#include #include "ci.h" #include "ci_hdrc_imx.h" @@ -85,6 +86,9 @@ struct ci_hdrc_imx_data { bool supports_runtime_pm; bool override_phy_control; bool in_lpm; + struct pinctrl *pinctrl; + struct pinctrl_state *pinctrl_hsic_active; + struct regulator *hsic_pad_regulator; /* SoC before i.mx6 (except imx23/imx28) needs three clks */ bool need_three_clks; struct clk *clk_ipg; @@ -132,11 +136,21 @@ static struct imx_usbmisc_data *usbmisc_get_init_data(struct device *dev) data->dev = &misc_pdev->dev; - if (of_find_property(np, "disable-over-current", NULL)) + /* + * Check the various over current related properties. If over current + * detection is disabled we're not interested in the polarity. 
+ */ + if (of_find_property(np, "disable-over-current", NULL)) { data->disable_oc = 1; - - if (of_find_property(np, "over-current-active-high", NULL)) - data->oc_polarity = 1; + } else if (of_find_property(np, "over-current-active-high", NULL)) { + data->oc_pol_active_low = 0; + data->oc_pol_configured = 1; + } else if (of_find_property(np, "over-current-active-low", NULL)) { + data->oc_pol_active_low = 1; + data->oc_pol_configured = 1; + } else { + dev_warn(dev, "No over current polarity defined\n"); + } if (of_find_property(np, "external-vbus-divider", NULL)) data->evdo = 1; @@ -245,19 +259,49 @@ static void imx_disable_unprepare_clks(struct device *dev) } } +static int ci_hdrc_imx_notify_event(struct ci_hdrc *ci, unsigned int event) +{ + struct device *dev = ci->dev->parent; + struct ci_hdrc_imx_data *data = dev_get_drvdata(dev); + int ret = 0; + + switch (event) { + case CI_HDRC_IMX_HSIC_ACTIVE_EVENT: + ret = pinctrl_select_state(data->pinctrl, + data->pinctrl_hsic_active); + if (ret) + dev_err(dev, "hsic_active select failed, err=%d\n", + ret); + break; + case CI_HDRC_IMX_HSIC_SUSPEND_EVENT: + ret = imx_usbmisc_hsic_set_connect(data->usbmisc_data); + if (ret) + dev_err(dev, + "hsic_set_connect failed, err=%d\n", ret); + break; + default: + break; + } + + return ret; +} + static int ci_hdrc_imx_probe(struct platform_device *pdev) { struct ci_hdrc_imx_data *data; struct ci_hdrc_platform_data pdata = { .name = dev_name(&pdev->dev), .capoffset = DEF_CAPOFFSET, + .notify_event = ci_hdrc_imx_notify_event, }; int ret; const struct of_device_id *of_id; const struct ci_hdrc_imx_platform_flag *imx_platform_flag; struct device_node *np = pdev->dev.of_node; + struct device *dev = &pdev->dev; + struct pinctrl_state *pinctrl_hsic_idle; - of_id = of_match_device(ci_hdrc_imx_dt_ids, &pdev->dev); + of_id = of_match_device(ci_hdrc_imx_dt_ids, dev); if (!of_id) return -ENODEV; @@ -268,19 +312,73 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev) return -ENOMEM; platform_set_drvdata(pdev, data); - data->usbmisc_data = usbmisc_get_init_data(&pdev->dev); + data->usbmisc_data = usbmisc_get_init_data(dev); if (IS_ERR(data->usbmisc_data)) return PTR_ERR(data->usbmisc_data); - ret = imx_get_clks(&pdev->dev); + if (of_usb_get_phy_mode(dev->of_node) == USBPHY_INTERFACE_MODE_HSIC) { + pdata.flags |= CI_HDRC_IMX_IS_HSIC; + data->usbmisc_data->hsic = 1; + data->pinctrl = devm_pinctrl_get(dev); + if (IS_ERR(data->pinctrl)) { + dev_err(dev, "pinctrl get failed, err=%ld\n", + PTR_ERR(data->pinctrl)); + return PTR_ERR(data->pinctrl); + } + + pinctrl_hsic_idle = pinctrl_lookup_state(data->pinctrl, "idle"); + if (IS_ERR(pinctrl_hsic_idle)) { + dev_err(dev, + "pinctrl_hsic_idle lookup failed, err=%ld\n", + PTR_ERR(pinctrl_hsic_idle)); + return PTR_ERR(pinctrl_hsic_idle); + } + + ret = pinctrl_select_state(data->pinctrl, pinctrl_hsic_idle); + if (ret) { + dev_err(dev, "hsic_idle select failed, err=%d\n", ret); + return ret; + } + + data->pinctrl_hsic_active = pinctrl_lookup_state(data->pinctrl, + "active"); + if (IS_ERR(data->pinctrl_hsic_active)) { + dev_err(dev, + "pinctrl_hsic_active lookup failed, err=%ld\n", + PTR_ERR(data->pinctrl_hsic_active)); + return PTR_ERR(data->pinctrl_hsic_active); + } + + data->hsic_pad_regulator = devm_regulator_get(dev, "hsic"); + if (PTR_ERR(data->hsic_pad_regulator) == -EPROBE_DEFER) { + return -EPROBE_DEFER; + } else if (PTR_ERR(data->hsic_pad_regulator) == -ENODEV) { + /* no pad regulator is needed */ + data->hsic_pad_regulator = NULL; + } else if 
(IS_ERR(data->hsic_pad_regulator)) { + dev_err(dev, "Get HSIC pad regulator error: %ld\n", + PTR_ERR(data->hsic_pad_regulator)); + return PTR_ERR(data->hsic_pad_regulator); + } + + if (data->hsic_pad_regulator) { + ret = regulator_enable(data->hsic_pad_regulator); + if (ret) { + dev_err(dev, + "Failed to enable HSIC pad regulator\n"); + return ret; + } + } + } + ret = imx_get_clks(dev); if (ret) - return ret; + goto disable_hsic_regulator; - ret = imx_prepare_enable_clks(&pdev->dev); + ret = imx_prepare_enable_clks(dev); if (ret) - return ret; + goto disable_hsic_regulator; - data->phy = devm_usb_get_phy_by_phandle(&pdev->dev, "fsl,usbphy", 0); + data->phy = devm_usb_get_phy_by_phandle(dev, "fsl,usbphy", 0); if (IS_ERR(data->phy)) { ret = PTR_ERR(data->phy); /* Return -EINVAL if no usbphy is available */ @@ -305,40 +403,43 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev) ret = imx_usbmisc_init(data->usbmisc_data); if (ret) { - dev_err(&pdev->dev, "usbmisc init failed, ret=%d\n", ret); + dev_err(dev, "usbmisc init failed, ret=%d\n", ret); goto err_clk; } - data->ci_pdev = ci_hdrc_add_device(&pdev->dev, + data->ci_pdev = ci_hdrc_add_device(dev, pdev->resource, pdev->num_resources, &pdata); if (IS_ERR(data->ci_pdev)) { ret = PTR_ERR(data->ci_pdev); if (ret != -EPROBE_DEFER) - dev_err(&pdev->dev, - "ci_hdrc_add_device failed, err=%d\n", ret); + dev_err(dev, "ci_hdrc_add_device failed, err=%d\n", + ret); goto err_clk; } ret = imx_usbmisc_init_post(data->usbmisc_data); if (ret) { - dev_err(&pdev->dev, "usbmisc post failed, ret=%d\n", ret); + dev_err(dev, "usbmisc post failed, ret=%d\n", ret); goto disable_device; } if (data->supports_runtime_pm) { - pm_runtime_set_active(&pdev->dev); - pm_runtime_enable(&pdev->dev); + pm_runtime_set_active(dev); + pm_runtime_enable(dev); } - device_set_wakeup_capable(&pdev->dev, true); + device_set_wakeup_capable(dev, true); return 0; disable_device: ci_hdrc_remove_device(data->ci_pdev); err_clk: - imx_disable_unprepare_clks(&pdev->dev); + imx_disable_unprepare_clks(dev); +disable_hsic_regulator: + if (data->hsic_pad_regulator) + ret = regulator_disable(data->hsic_pad_regulator); return ret; } @@ -355,6 +456,8 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev) if (data->override_phy_control) usb_phy_shutdown(data->phy); imx_disable_unprepare_clks(&pdev->dev); + if (data->hsic_pad_regulator) + regulator_disable(data->hsic_pad_regulator); return 0; } @@ -367,9 +470,16 @@ static void ci_hdrc_imx_shutdown(struct platform_device *pdev) static int __maybe_unused imx_controller_suspend(struct device *dev) { struct ci_hdrc_imx_data *data = dev_get_drvdata(dev); + int ret = 0; dev_dbg(dev, "at %s\n", __func__); + ret = imx_usbmisc_hsic_set_clk(data->usbmisc_data, false); + if (ret) { + dev_err(dev, "usbmisc hsic_set_clk failed, ret=%d\n", ret); + return ret; + } + imx_disable_unprepare_clks(dev); data->in_lpm = true; @@ -400,8 +510,16 @@ static int __maybe_unused imx_controller_resume(struct device *dev) goto clk_disable; } + ret = imx_usbmisc_hsic_set_clk(data->usbmisc_data, true); + if (ret) { + dev_err(dev, "usbmisc hsic_set_clk failed, ret=%d\n", ret); + goto hsic_set_clk_fail; + } + return 0; +hsic_set_clk_fail: + imx_usbmisc_set_wakeup(data->usbmisc_data, true); clk_disable: imx_disable_unprepare_clks(dev); return ret; diff --git a/drivers/usb/chipidea/ci_hdrc_imx.h b/drivers/usb/chipidea/ci_hdrc_imx.h index 204275f47573..7cc53e2ce564 100644 --- a/drivers/usb/chipidea/ci_hdrc_imx.h +++ b/drivers/usb/chipidea/ci_hdrc_imx.h @@ -11,13 +11,22 
@@ struct imx_usbmisc_data { int index; unsigned int disable_oc:1; /* over current detect disabled */ - unsigned int oc_polarity:1; /* over current polarity if oc enabled */ + + /* true if over-current polarity is active low */ + unsigned int oc_pol_active_low:1; + + /* true if dt specifies polarity */ + unsigned int oc_pol_configured:1; + unsigned int evdo:1; /* set external vbus divider option */ unsigned int ulpi:1; /* connected to an ULPI phy */ + unsigned int hsic:1; /* HSIC controller */ }; -int imx_usbmisc_init(struct imx_usbmisc_data *); -int imx_usbmisc_init_post(struct imx_usbmisc_data *); -int imx_usbmisc_set_wakeup(struct imx_usbmisc_data *, bool); +int imx_usbmisc_init(struct imx_usbmisc_data *data); +int imx_usbmisc_init_post(struct imx_usbmisc_data *data); +int imx_usbmisc_set_wakeup(struct imx_usbmisc_data *data, bool enabled); +int imx_usbmisc_hsic_set_connect(struct imx_usbmisc_data *data); +int imx_usbmisc_hsic_set_clk(struct imx_usbmisc_data *data, bool on); #endif /* __DRIVER_USB_CHIPIDEA_CI_HDRC_IMX_H */ diff --git a/drivers/usb/chipidea/host.c b/drivers/usb/chipidea/host.c index d858a82c4f44..b45ceb91c735 100644 --- a/drivers/usb/chipidea/host.c +++ b/drivers/usb/chipidea/host.c @@ -170,6 +170,11 @@ static int host_start(struct ci_hdrc *ci) otg->host = &hcd->self; hcd->self.otg_port = 1; } + + if (ci->platdata->notify_event && + (ci->platdata->flags & CI_HDRC_IMX_IS_HSIC)) + ci->platdata->notify_event + (ci, CI_HDRC_IMX_HSIC_ACTIVE_EVENT); } return ret; @@ -215,9 +220,85 @@ void ci_hdrc_host_destroy(struct ci_hdrc *ci) host_stop(ci); } +/* The code below is based on the tegra ehci driver */ +static int ci_ehci_hub_control( + struct usb_hcd *hcd, + u16 typeReq, + u16 wValue, + u16 wIndex, + char *buf, + u16 wLength +) +{ + struct ehci_hcd *ehci = hcd_to_ehci(hcd); + u32 __iomem *status_reg; + u32 temp; + unsigned long flags; + int retval = 0; + struct device *dev = hcd->self.controller; + struct ci_hdrc *ci = dev_get_drvdata(dev); + + status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1]; + + spin_lock_irqsave(&ehci->lock, flags); + + if (typeReq == SetPortFeature && wValue == USB_PORT_FEAT_SUSPEND) { + temp = ehci_readl(ehci, status_reg); + if ((temp & PORT_PE) == 0 || (temp & PORT_RESET) != 0) { + retval = -EPIPE; + goto done; + } + + temp &= ~(PORT_RWC_BITS | PORT_WKCONN_E); + temp |= PORT_WKDISC_E | PORT_WKOC_E; + ehci_writel(ehci, temp | PORT_SUSPEND, status_reg); + + /* + * If a transaction is in progress, there may be a delay in + * suspending the port. Poll until the port is suspended. + */ + if (ehci_handshake(ehci, status_reg, PORT_SUSPEND, + PORT_SUSPEND, 5000)) + ehci_err(ehci, "timeout waiting for SUSPEND\n"); + + if (ci->platdata->flags & CI_HDRC_IMX_IS_HSIC) { + if (ci->platdata->notify_event) + ci->platdata->notify_event(ci, + CI_HDRC_IMX_HSIC_SUSPEND_EVENT); + + temp = ehci_readl(ehci, status_reg); + temp &= ~(PORT_WKDISC_E | PORT_WKCONN_E); + ehci_writel(ehci, temp, status_reg); + } + + set_bit((wIndex & 0xff) - 1, &ehci->suspended_ports); + goto done; + } + + /* + * After resume has finished, it needs to do some post-resume + * operations for some SoCs. 
+ */ + else if (typeReq == ClearPortFeature && + wValue == USB_PORT_FEAT_C_SUSPEND) { + /* Make sure the resume has finished; it should have by now */ + if (ehci_handshake(ehci, status_reg, PORT_RESUME, 0, 25000)) + ehci_err(ehci, "timeout waiting for resume\n"); + } + + spin_unlock_irqrestore(&ehci->lock, flags); + + /* Handle the hub control events here */ + return ehci_hub_control(hcd, typeReq, wValue, wIndex, buf, wLength); +done: + spin_unlock_irqrestore(&ehci->lock, flags); + return retval; +} static int ci_ehci_bus_suspend(struct usb_hcd *hcd) { struct ehci_hcd *ehci = hcd_to_ehci(hcd); + struct device *dev = hcd->self.controller; + struct ci_hdrc *ci = dev_get_drvdata(dev); int port; u32 tmp; @@ -249,6 +330,16 @@ static int ci_ehci_bus_suspend(struct usb_hcd *hcd) * It needs a short delay between setting the RS bit and PHCD. */ usleep_range(150, 200); + /* + * Need to clear WKCN and WKOC for imx HSIC, + * otherwise, there will be a wakeup event. + */ + if (ci->platdata->flags & CI_HDRC_IMX_IS_HSIC) { + tmp = ehci_readl(ehci, reg); + tmp &= ~(PORT_WKDISC_E | PORT_WKCONN_E); + ehci_writel(ehci, tmp, reg); + } + break; } } @@ -281,4 +372,5 @@ void ci_hdrc_host_driver_init(void) ehci_init_driver(&ci_ehci_hc_driver, &ehci_ci_overrides); orig_bus_suspend = ci_ehci_hc_driver.bus_suspend; ci_ehci_hc_driver.bus_suspend = ci_ehci_bus_suspend; + ci_ehci_hc_driver.hub_control = ci_ehci_hub_control; } diff --git a/drivers/usb/chipidea/usbmisc_imx.c b/drivers/usb/chipidea/usbmisc_imx.c index def80ff547e4..097ffbca0bd9 100644 --- a/drivers/usb/chipidea/usbmisc_imx.c +++ b/drivers/usb/chipidea/usbmisc_imx.c @@ -64,10 +64,22 @@ #define MX6_BM_OVER_CUR_DIS BIT(7) #define MX6_BM_OVER_CUR_POLARITY BIT(8) #define MX6_BM_WAKEUP_ENABLE BIT(10) +#define MX6_BM_UTMI_ON_CLOCK BIT(13) #define MX6_BM_ID_WAKEUP BIT(16) #define MX6_BM_VBUS_WAKEUP BIT(17) #define MX6SX_BM_DPDM_WAKEUP_EN BIT(29) #define MX6_BM_WAKEUP_INTR BIT(31) + +#define MX6_USB_HSIC_CTRL_OFFSET 0x10 /* Send resume signal without 480MHz PHY clock */ +#define MX6SX_BM_HSIC_AUTO_RESUME BIT(23) /* set before portsc.suspendM = 1 */ +#define MX6_BM_HSIC_DEV_CONN BIT(21) /* HSIC enable */ +#define MX6_BM_HSIC_EN BIT(12) /* Force HSIC module 480M clock on, even when the Host is in suspend mode */ +#define MX6_BM_HSIC_CLK_ON BIT(11) + #define MX6_USB_OTG1_PHY_CTRL 0x18 /* For imx6dql, it is a host-only controller; for later imx6, it is the otg's */ #define MX6_USB_OTG2_PHY_CTRL 0x1c @@ -94,6 +106,10 @@ struct usbmisc_ops { int (*post)(struct imx_usbmisc_data *data); /* It's called when we need to enable/disable usb wakeup */ int (*set_wakeup)(struct imx_usbmisc_data *data, bool enabled); + /* It's called before setting portsc.suspendM */ + int (*hsic_set_connect)(struct imx_usbmisc_data *data); + /* It's called during suspend/resume */ + int (*hsic_set_clk)(struct imx_usbmisc_data *data, bool enabled); }; struct imx_usbmisc { @@ -120,6 +136,14 @@ static int usbmisc_imx25_init(struct imx_usbmisc_data *data) val &= ~(MX25_OTG_SIC_MASK | MX25_OTG_PP_BIT); val |= (MX25_EHCI_INTERFACE_DIFF_UNI & MX25_EHCI_INTERFACE_MASK) << MX25_OTG_SIC_SHIFT; val |= (MX25_OTG_PM_BIT | MX25_OTG_OCPOL_BIT); + + /* + * If the polarity is not configured, assume active high for + * historical reasons. 
+ */ + if (data->oc_pol_configured && data->oc_pol_active_low) + val &= ~MX25_OTG_OCPOL_BIT; + writel(val, usbmisc->base); break; case 1: @@ -129,6 +153,13 @@ static int usbmisc_imx25_init(struct imx_usbmisc_data *data) val |= (MX25_H1_PM_BIT | MX25_H1_OCPOL_BIT | MX25_H1_TLL_BIT | MX25_H1_USBTE_BIT | MX25_H1_IPPUE_DOWN_BIT); + /* + * If the polarity is not configured assume active high for + * historical reasons. + */ + if (data->oc_pol_configured && data->oc_pol_active_low) + val &= ~MX25_H1_OCPOL_BIT; + writel(val, usbmisc->base); break; @@ -340,11 +371,17 @@ static int usbmisc_imx6q_init(struct imx_usbmisc_data *data) reg = readl(usbmisc->base + data->index * 4); if (data->disable_oc) { reg |= MX6_BM_OVER_CUR_DIS; - } else if (data->oc_polarity == 1) { - /* High active */ - reg &= ~(MX6_BM_OVER_CUR_DIS | MX6_BM_OVER_CUR_POLARITY); } else { - reg &= ~(MX6_BM_OVER_CUR_DIS); + reg &= ~MX6_BM_OVER_CUR_DIS; + + /* + * If the polarity is not configured keep it as setup by the + * bootloader. + */ + if (data->oc_pol_configured && data->oc_pol_active_low) + reg |= MX6_BM_OVER_CUR_POLARITY; + else if (data->oc_pol_configured) + reg &= ~MX6_BM_OVER_CUR_POLARITY; } writel(reg, usbmisc->base + data->index * 4); @@ -353,6 +390,18 @@ static int usbmisc_imx6q_init(struct imx_usbmisc_data *data) writel(reg | MX6_BM_NON_BURST_SETTING, usbmisc->base + data->index * 4); + /* For HSIC controller */ + if (data->hsic) { + reg = readl(usbmisc->base + data->index * 4); + writel(reg | MX6_BM_UTMI_ON_CLOCK, + usbmisc->base + data->index * 4); + reg = readl(usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET + + (data->index - 2) * 4); + reg |= MX6_BM_HSIC_EN | MX6_BM_HSIC_CLK_ON; + writel(reg, usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET + + (data->index - 2) * 4); + } + spin_unlock_irqrestore(&usbmisc->lock, flags); usbmisc_imx6q_set_wakeup(data, false); @@ -360,6 +409,79 @@ static int usbmisc_imx6q_init(struct imx_usbmisc_data *data) return 0; } +static int usbmisc_imx6_hsic_get_reg_offset(struct imx_usbmisc_data *data) +{ + int offset, ret = 0; + + if (data->index == 2 || data->index == 3) { + offset = (data->index - 2) * 4; + } else if (data->index == 0) { + /* + * For SoCs like i.MX7D and later, each USB controller has + * its own non-core register region. For SoCs before i.MX7D, + * the first two USB controllers are non-HSIC controllers. + */ + offset = 0; + } else { + dev_err(data->dev, "index is error for usbmisc\n"); + ret = -EINVAL; + } + + return ret ? 
ret : offset; +} + +static int usbmisc_imx6_hsic_set_connect(struct imx_usbmisc_data *data) +{ + unsigned long flags; + u32 val; + struct imx_usbmisc *usbmisc = dev_get_drvdata(data->dev); + int offset; + + spin_lock_irqsave(&usbmisc->lock, flags); + offset = usbmisc_imx6_hsic_get_reg_offset(data); + if (offset < 0) { + spin_unlock_irqrestore(&usbmisc->lock, flags); + return offset; + } + + val = readl(usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET + offset); + if (!(val & MX6_BM_HSIC_DEV_CONN)) + writel(val | MX6_BM_HSIC_DEV_CONN, + usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET + offset); + + spin_unlock_irqrestore(&usbmisc->lock, flags); + + return 0; +} + +static int usbmisc_imx6_hsic_set_clk(struct imx_usbmisc_data *data, bool on) +{ + unsigned long flags; + u32 val; + struct imx_usbmisc *usbmisc = dev_get_drvdata(data->dev); + int offset; + + spin_lock_irqsave(&usbmisc->lock, flags); + offset = usbmisc_imx6_hsic_get_reg_offset(data); + if (offset < 0) { + spin_unlock_irqrestore(&usbmisc->lock, flags); + return offset; + } + + val = readl(usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET + offset); + val |= MX6_BM_HSIC_EN | MX6_BM_HSIC_CLK_ON; + if (on) + val |= MX6_BM_HSIC_CLK_ON; + else + val &= ~MX6_BM_HSIC_CLK_ON; + + writel(val, usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET + offset); + spin_unlock_irqrestore(&usbmisc->lock, flags); + + return 0; +} + + static int usbmisc_imx6sx_init(struct imx_usbmisc_data *data) { void __iomem *reg = NULL; @@ -385,6 +507,13 @@ static int usbmisc_imx6sx_init(struct imx_usbmisc_data *data) spin_unlock_irqrestore(&usbmisc->lock, flags); } + /* For HSIC controller */ + if (data->hsic) { + val = readl(usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET); + val |= MX6SX_BM_HSIC_AUTO_RESUME; + writel(val, usbmisc->base + MX6_USB_HSIC_CTRL_OFFSET); + } + return 0; } @@ -444,9 +573,17 @@ static int usbmisc_imx7d_init(struct imx_usbmisc_data *data) reg = readl(usbmisc->base); if (data->disable_oc) { reg |= MX6_BM_OVER_CUR_DIS; - } else if (data->oc_polarity == 1) { - /* High active */ - reg &= ~(MX6_BM_OVER_CUR_DIS | MX6_BM_OVER_CUR_POLARITY); + } else { + reg &= ~MX6_BM_OVER_CUR_DIS; + + /* + * If the polarity is not configured keep it as setup by the + * bootloader. 
+ */ + if (data->oc_pol_configured && data->oc_pol_active_low) + reg |= MX6_BM_OVER_CUR_POLARITY; + else if (data->oc_pol_configured) + reg &= ~MX6_BM_OVER_CUR_POLARITY; } writel(reg, usbmisc->base); @@ -454,6 +591,7 @@ static int usbmisc_imx7d_init(struct imx_usbmisc_data *data) reg &= ~MX7D_USB_VBUS_WAKEUP_SOURCE_MASK; writel(reg | MX7D_USB_VBUS_WAKEUP_SOURCE_BVALID, usbmisc->base + MX7D_USBNC_USB_CTRL2); + spin_unlock_irqrestore(&usbmisc->lock, flags); usbmisc_imx7d_set_wakeup(data, false); @@ -481,6 +619,8 @@ static const struct usbmisc_ops imx53_usbmisc_ops = { static const struct usbmisc_ops imx6q_usbmisc_ops = { .set_wakeup = usbmisc_imx6q_set_wakeup, .init = usbmisc_imx6q_init, + .hsic_set_connect = usbmisc_imx6_hsic_set_connect, + .hsic_set_clk = usbmisc_imx6_hsic_set_clk, }; static const struct usbmisc_ops vf610_usbmisc_ops = { @@ -490,6 +630,8 @@ static const struct usbmisc_ops vf610_usbmisc_ops = { static const struct usbmisc_ops imx6sx_usbmisc_ops = { .set_wakeup = usbmisc_imx6q_set_wakeup, .init = usbmisc_imx6sx_init, + .hsic_set_connect = usbmisc_imx6_hsic_set_connect, + .hsic_set_clk = usbmisc_imx6_hsic_set_clk, }; static const struct usbmisc_ops imx7d_usbmisc_ops = { @@ -546,6 +688,33 @@ int imx_usbmisc_set_wakeup(struct imx_usbmisc_data *data, bool enabled) } EXPORT_SYMBOL_GPL(imx_usbmisc_set_wakeup); +int imx_usbmisc_hsic_set_connect(struct imx_usbmisc_data *data) +{ + struct imx_usbmisc *usbmisc; + + if (!data) + return 0; + + usbmisc = dev_get_drvdata(data->dev); + if (!usbmisc->ops->hsic_set_connect || !data->hsic) + return 0; + return usbmisc->ops->hsic_set_connect(data); +} +EXPORT_SYMBOL_GPL(imx_usbmisc_hsic_set_connect); + +int imx_usbmisc_hsic_set_clk(struct imx_usbmisc_data *data, bool on) +{ + struct imx_usbmisc *usbmisc; + + if (!data) + return 0; + + usbmisc = dev_get_drvdata(data->dev); + if (!usbmisc->ops->hsic_set_clk || !data->hsic) + return 0; + return usbmisc->ops->hsic_set_clk(data, on); +} +EXPORT_SYMBOL_GPL(imx_usbmisc_hsic_set_clk); static const struct of_device_id usbmisc_imx_dt_ids[] = { { .compatible = "fsl,imx25-usbmisc", diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c index 1b68fed464cb..ed8c62b2d9d1 100644 --- a/drivers/usb/class/cdc-acm.c +++ b/drivers/usb/class/cdc-acm.c @@ -581,6 +581,13 @@ static int acm_tty_install(struct tty_driver *driver, struct tty_struct *tty) if (retval) goto error_init_termios; + /* + * Suppress initial echoing for some devices which might send data + * immediately after acm driver has been installed. 
+ */ + if (acm->quirks & DISABLE_ECHO) + tty->termios.c_lflag &= ~ECHO; + tty->driver_data = acm; return 0; @@ -1657,6 +1664,9 @@ static const struct usb_device_id acm_ids[] = { { USB_DEVICE(0x0e8d, 0x0003), /* FIREFLY, MediaTek Inc; andrey.arapov@gmail.com */ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ }, + { USB_DEVICE(0x0e8d, 0x2000), /* MediaTek Inc Preloader */ + .driver_info = DISABLE_ECHO, /* DISABLE ECHO in termios flag */ + }, { USB_DEVICE(0x0e8d, 0x3329), /* MediaTek Inc GPS */ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ }, diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h index ca06b20d7af9..515aad0847ee 100644 --- a/drivers/usb/class/cdc-acm.h +++ b/drivers/usb/class/cdc-acm.h @@ -140,3 +140,4 @@ struct acm { #define QUIRK_CONTROL_LINE_STATE BIT(6) #define CLEAR_HALT_CONDITIONS BIT(7) #define SEND_ZERO_PACKET BIT(8) +#define DISABLE_ECHO BIT(9) diff --git a/drivers/usb/common/Makefile b/drivers/usb/common/Makefile index fb4d5ef4165c..0a7c45e85481 100644 --- a/drivers/usb/common/Makefile +++ b/drivers/usb/common/Makefile @@ -9,4 +9,3 @@ usb-common-$(CONFIG_USB_LED_TRIG) += led.o obj-$(CONFIG_USB_OTG_FSM) += usb-otg-fsm.o obj-$(CONFIG_USB_ULPI_BUS) += ulpi.o -obj-$(CONFIG_USB_ROLE_SWITCH) += roles.o diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c index 487025d31d44..015b126ce455 100644 --- a/drivers/usb/core/hcd.c +++ b/drivers/usb/core/hcd.c @@ -1074,8 +1074,6 @@ static int register_root_hub(struct usb_hcd *hcd) usb_dev->devnum = devnum; usb_dev->bus->devnum_next = devnum + 1; - memset (&usb_dev->bus->devmap.devicemap, 0, - sizeof usb_dev->bus->devmap.devicemap); set_bit (devnum, usb_dev->bus->devmap.devicemap); usb_set_device_state(usb_dev, USB_STATE_ADDRESS); diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index f76b2e0aba9d..1d1e61e980f3 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -1112,6 +1112,16 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type) USB_PORT_FEAT_ENABLE); } + /* + * Add debounce if USB3 link is in polling/link training state. + * Link will automatically transition to Enabled state after + * link training completes. + */ + if (hub_is_superspeed(hdev) && + ((portstatus & USB_PORT_STAT_LINK_STATE) == + USB_SS_PORT_LS_POLLING)) + need_debounce_delay = true; + /* Clear status-change flags; we'll debounce later */ if (portchange & USB_PORT_STAT_C_CONNECTION) { need_debounce_delay = true; diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c index 2d6d2c8244de..68ad75a7460d 100644 --- a/drivers/usb/dwc2/gadget.c +++ b/drivers/usb/dwc2/gadget.c @@ -262,7 +262,7 @@ static void dwc2_gadget_wkup_alert_handler(struct dwc2_hsotg *hsotg) if (gintsts2 & GINTSTS2_WKUP_ALERT_INT) { dev_dbg(hsotg->dev, "%s: Wkup_Alert_Int\n", __func__); dwc2_clear_bit(hsotg, GINTSTS2, GINTSTS2_WKUP_ALERT_INT); - dwc2_set_bit(hsotg, DCFG, DCTL_RMTWKUPSIG); + dwc2_set_bit(hsotg, DCTL, DCTL_RMTWKUPSIG); } } @@ -3165,8 +3165,6 @@ static void kill_all_requests(struct dwc2_hsotg *hsotg, dwc2_hsotg_txfifo_flush(hsotg, ep->fifo_index); } -static int dwc2_hsotg_ep_disable(struct usb_ep *ep); - /** * dwc2_hsotg_disconnect - disconnect service * @hsotg: The device state. 
@@ -3188,9 +3186,11 @@ void dwc2_hsotg_disconnect(struct dwc2_hsotg *hsotg) /* all endpoints should be shutdown */ for (ep = 0; ep < hsotg->num_of_eps; ep++) { if (hsotg->eps_in[ep]) - dwc2_hsotg_ep_disable(&hsotg->eps_in[ep]->ep); + kill_all_requests(hsotg, hsotg->eps_in[ep], + -ESHUTDOWN); if (hsotg->eps_out[ep]) - dwc2_hsotg_ep_disable(&hsotg->eps_out[ep]->ep); + kill_all_requests(hsotg, hsotg->eps_out[ep], + -ESHUTDOWN); } call_gadget(hsotg, disconnect); @@ -3234,6 +3234,7 @@ static void dwc2_hsotg_irq_fifoempty(struct dwc2_hsotg *hsotg, bool periodic) GINTSTS_PTXFEMP | \ GINTSTS_RXFLVL) +static int dwc2_hsotg_ep_disable(struct usb_ep *ep); /** * dwc2_hsotg_core_init - issue softreset to the core * @hsotg: The device state @@ -4069,10 +4070,8 @@ static int dwc2_hsotg_ep_disable(struct usb_ep *ep) struct dwc2_hsotg *hsotg = hs_ep->parent; int dir_in = hs_ep->dir_in; int index = hs_ep->index; - unsigned long flags; u32 epctrl_reg; u32 ctrl; - int locked; dev_dbg(hsotg->dev, "%s(ep %p)\n", __func__, ep); @@ -4088,10 +4087,6 @@ static int dwc2_hsotg_ep_disable(struct usb_ep *ep) epctrl_reg = dir_in ? DIEPCTL(index) : DOEPCTL(index); - locked = spin_is_locked(&hsotg->lock); - if (!locked) - spin_lock_irqsave(&hsotg->lock, flags); - ctrl = dwc2_readl(hsotg, epctrl_reg); if (ctrl & DXEPCTL_EPENA) @@ -4114,12 +4109,22 @@ static int dwc2_hsotg_ep_disable(struct usb_ep *ep) hs_ep->fifo_index = 0; hs_ep->fifo_size = 0; - if (!locked) - spin_unlock_irqrestore(&hsotg->lock, flags); - return 0; } +static int dwc2_hsotg_ep_disable_lock(struct usb_ep *ep) +{ + struct dwc2_hsotg_ep *hs_ep = our_ep(ep); + struct dwc2_hsotg *hsotg = hs_ep->parent; + unsigned long flags; + int ret; + + spin_lock_irqsave(&hsotg->lock, flags); + ret = dwc2_hsotg_ep_disable(ep); + spin_unlock_irqrestore(&hsotg->lock, flags); + return ret; +} + /** * on_list - check request is on the given endpoint * @ep: The endpoint to check. @@ -4267,7 +4272,7 @@ static int dwc2_hsotg_ep_sethalt_lock(struct usb_ep *ep, int value) static const struct usb_ep_ops dwc2_hsotg_ep_ops = { .enable = dwc2_hsotg_ep_enable, - .disable = dwc2_hsotg_ep_disable, + .disable = dwc2_hsotg_ep_disable_lock, .alloc_request = dwc2_hsotg_ep_alloc_request, .free_request = dwc2_hsotg_ep_free_request, .queue = dwc2_hsotg_ep_queue_lock, @@ -4407,9 +4412,9 @@ static int dwc2_hsotg_udc_stop(struct usb_gadget *gadget) /* all endpoints should be shutdown */ for (ep = 1; ep < hsotg->num_of_eps; ep++) { if (hsotg->eps_in[ep]) - dwc2_hsotg_ep_disable(&hsotg->eps_in[ep]->ep); + dwc2_hsotg_ep_disable_lock(&hsotg->eps_in[ep]->ep); if (hsotg->eps_out[ep]) - dwc2_hsotg_ep_disable(&hsotg->eps_out[ep]->ep); + dwc2_hsotg_ep_disable_lock(&hsotg->eps_out[ep]->ep); } spin_lock_irqsave(&hsotg->lock, flags); @@ -4857,9 +4862,9 @@ int dwc2_hsotg_suspend(struct dwc2_hsotg *hsotg) for (ep = 0; ep < hsotg->num_of_eps; ep++) { if (hsotg->eps_in[ep]) - dwc2_hsotg_ep_disable(&hsotg->eps_in[ep]->ep); + dwc2_hsotg_ep_disable_lock(&hsotg->eps_in[ep]->ep); if (hsotg->eps_out[ep]) - dwc2_hsotg_ep_disable(&hsotg->eps_out[ep]->ep); + dwc2_hsotg_ep_disable_lock(&hsotg->eps_out[ep]->ep); } } @@ -5026,6 +5031,7 @@ void dwc2_gadget_init_lpm(struct dwc2_hsotg *hsotg) val |= hsotg->params.lpm_clock_gating ? GLPMCFG_ENBLSLPM : 0; val |= hsotg->params.hird_threshold << GLPMCFG_HIRD_THRES_SHIFT; val |= hsotg->params.besl ? 
GLPMCFG_ENBESL : 0; + val |= GLPMCFG_LPM_ACCEPT_CTRL_ISOC; dwc2_writel(hsotg, val, GLPMCFG); dev_dbg(hsotg->dev, "GLPMCFG=0x%08x\n", dwc2_readl(hsotg, GLPMCFG)); diff --git a/drivers/usb/dwc2/hcd.h b/drivers/usb/dwc2/hcd.h index 3f9bccc95add..c089ffa1f0a8 100644 --- a/drivers/usb/dwc2/hcd.h +++ b/drivers/usb/dwc2/hcd.h @@ -366,7 +366,7 @@ struct dwc2_qh { u32 desc_list_sz; u32 *n_bytes; struct timer_list unreserve_timer; - struct timer_list wait_timer; + struct hrtimer wait_timer; struct dwc2_tt *dwc_tt; int ttport; unsigned tt_buffer_dirty:1; diff --git a/drivers/usb/dwc2/hcd_queue.c b/drivers/usb/dwc2/hcd_queue.c index 40839591d2ec..ea3aa640c15c 100644 --- a/drivers/usb/dwc2/hcd_queue.c +++ b/drivers/usb/dwc2/hcd_queue.c @@ -59,7 +59,7 @@ #define DWC2_UNRESERVE_DELAY (msecs_to_jiffies(5)) /* If we get a NAK, wait this long before retrying */ -#define DWC2_RETRY_WAIT_DELAY (msecs_to_jiffies(1)) +#define DWC2_RETRY_WAIT_DELAY 1*1E6L /** * dwc2_periodic_channel_available() - Checks that a channel is available for a @@ -1464,10 +1464,12 @@ static void dwc2_deschedule_periodic(struct dwc2_hsotg *hsotg, * qh back to the "inactive" list, then queues transactions. * * @t: Pointer to wait_timer in a qh. + * + * Return: HRTIMER_NORESTART to not automatically restart this timer. */ -static void dwc2_wait_timer_fn(struct timer_list *t) +static enum hrtimer_restart dwc2_wait_timer_fn(struct hrtimer *t) { - struct dwc2_qh *qh = from_timer(qh, t, wait_timer); + struct dwc2_qh *qh = container_of(t, struct dwc2_qh, wait_timer); struct dwc2_hsotg *hsotg = qh->hsotg; unsigned long flags; @@ -1491,6 +1493,7 @@ static void dwc2_wait_timer_fn(struct timer_list *t) } spin_unlock_irqrestore(&hsotg->lock, flags); + return HRTIMER_NORESTART; } /** @@ -1521,7 +1524,8 @@ static void dwc2_qh_init(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh, /* Initialize QH */ qh->hsotg = hsotg; timer_setup(&qh->unreserve_timer, dwc2_unreserve_timer_fn, 0); - timer_setup(&qh->wait_timer, dwc2_wait_timer_fn, 0); + hrtimer_init(&qh->wait_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + qh->wait_timer.function = &dwc2_wait_timer_fn; qh->ep_type = ep_type; qh->ep_is_in = ep_is_in; @@ -1690,7 +1694,7 @@ void dwc2_hcd_qh_free(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh) * won't do anything anyway, but we want it to finish before we free * memory. 
*/ - del_timer_sync(&qh->wait_timer); + hrtimer_cancel(&qh->wait_timer); dwc2_host_put_tt_info(hsotg, qh->dwc_tt); @@ -1716,6 +1720,7 @@ int dwc2_hcd_qh_add(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh) { int status; u32 intr_mask; + ktime_t delay; if (dbg_qh(qh)) dev_vdbg(hsotg->dev, "%s()\n", __func__); @@ -1734,8 +1739,8 @@ int dwc2_hcd_qh_add(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh) list_add_tail(&qh->qh_list_entry, &hsotg->non_periodic_sched_waiting); qh->wait_timer_cancel = false; - mod_timer(&qh->wait_timer, - jiffies + DWC2_RETRY_WAIT_DELAY + 1); + delay = ktime_set(0, DWC2_RETRY_WAIT_DELAY); + hrtimer_start(&qh->wait_timer, delay, HRTIMER_MODE_REL); } else { list_add_tail(&qh->qh_list_entry, &hsotg->non_periodic_sched_inactive); diff --git a/drivers/usb/dwc2/hw.h b/drivers/usb/dwc2/hw.h index 2b1ea441b7d4..98af924a9a5c 100644 --- a/drivers/usb/dwc2/hw.h +++ b/drivers/usb/dwc2/hw.h @@ -333,6 +333,8 @@ #define GLPMCFG_SNDLPM BIT(24) #define GLPMCFG_RETRY_CNT_MASK (0x7 << 21) #define GLPMCFG_RETRY_CNT_SHIFT 21 +#define GLPMCFG_LPM_ACCEPT_CTRL_CONTROL BIT(21) +#define GLPMCFG_LPM_ACCEPT_CTRL_ISOC BIT(22) #define GLPMCFG_LPM_CHNL_INDX_MASK (0xf << 17) #define GLPMCFG_LPM_CHNL_INDX_SHIFT 17 #define GLPMCFG_L1RESUMEOK BIT(16) diff --git a/drivers/usb/dwc2/params.c b/drivers/usb/dwc2/params.c index 7c1b6938f212..24ff5f21cb25 100644 --- a/drivers/usb/dwc2/params.c +++ b/drivers/usb/dwc2/params.c @@ -71,6 +71,13 @@ static void dwc2_set_his_params(struct dwc2_hsotg *hsotg) p->power_down = false; } +static void dwc2_set_s3c6400_params(struct dwc2_hsotg *hsotg) +{ + struct dwc2_core_params *p = &hsotg->params; + + p->power_down = 0; +} + static void dwc2_set_rk_params(struct dwc2_hsotg *hsotg) { struct dwc2_core_params *p = &hsotg->params; @@ -111,6 +118,7 @@ static void dwc2_set_amlogic_params(struct dwc2_hsotg *hsotg) p->phy_type = DWC2_PHY_TYPE_PARAM_UTMI; p->ahbcfg = GAHBCFG_HBSTLEN_INCR8 << GAHBCFG_HBSTLEN_SHIFT; + p->power_down = DWC2_POWER_DOWN_PARAM_NONE; } static void dwc2_set_amcc_params(struct dwc2_hsotg *hsotg) @@ -151,7 +159,8 @@ const struct of_device_id dwc2_of_match_table[] = { { .compatible = "lantiq,arx100-usb", .data = dwc2_set_ltq_params }, { .compatible = "lantiq,xrx200-usb", .data = dwc2_set_ltq_params }, { .compatible = "snps,dwc2" }, - { .compatible = "samsung,s3c6400-hsotg" }, + { .compatible = "samsung,s3c6400-hsotg", + .data = dwc2_set_s3c6400_params }, { .compatible = "amlogic,meson8-usb", .data = dwc2_set_amlogic_params }, { .compatible = "amlogic,meson8b-usb", diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c index 2f2048aa5fde..a1b126f90261 100644 --- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -80,11 +80,12 @@ static int dwc3_get_dr_mode(struct dwc3 *dwc) mode = USB_DR_MODE_PERIPHERAL; /* - * dwc_usb31 does not support OTG mode. If the controller - * supports DRD but the dr_mode is not specified or set to OTG, - * then set the mode to peripheral. + * DWC_usb31 and DWC_usb3 v3.30a and higher do not support OTG + * mode. If the controller supports DRD but the dr_mode is not + * specified or set to OTG, then set the mode to peripheral. 
*/ - if (mode == USB_DR_MODE_OTG && dwc3_is_usb31(dwc)) + if (mode == USB_DR_MODE_OTG && + dwc->revision >= DWC3_REVISION_330A) mode = USB_DR_MODE_PERIPHERAL; } @@ -661,6 +662,8 @@ static int dwc3_phy_setup(struct dwc3 *dwc) if (dwc->dis_enblslpm_quirk) reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM; + else + reg |= DWC3_GUSB2PHYCFG_ENBLSLPM; if (dwc->dis_u2_freeclk_exists_quirk) reg &= ~DWC3_GUSB2PHYCFG_U2_FREECLK_EXISTS; @@ -702,6 +705,7 @@ static bool dwc3_core_is_valid(struct dwc3 *dwc) /* Detected DWC_usb31 IP */ dwc->revision = dwc3_readl(dwc->regs, DWC3_VER_NUMBER); dwc->revision |= DWC3_REVISION_IS_DWC31; + dwc->version_type = dwc3_readl(dwc->regs, DWC3_VER_TYPE); } else { return false; } @@ -1244,8 +1248,12 @@ static void dwc3_get_properties(struct dwc3 *dwc) "snps,is-utmi-l1-suspend"); device_property_read_u8(dev, "snps,hird-threshold", &hird_threshold); + dwc->dis_start_transfer_quirk = device_property_read_bool(dev, + "snps,dis-start-transfer-quirk"); dwc->usb3_lpm_capable = device_property_read_bool(dev, "snps,usb3_lpm_capable"); + dwc->usb2_lpm_disable = device_property_read_bool(dev, + "snps,usb2-lpm-disable"); device_property_read_u8(dev, "snps,rx-thr-num-pkt-prd", &rx_thr_num_pkt_prd); device_property_read_u8(dev, "snps,rx-max-burst-prd", @@ -1482,7 +1490,8 @@ static int dwc3_probe(struct platform_device *pdev) ret = dwc3_core_init(dwc); if (ret) { - dev_err(dev, "failed to initialize core\n"); + if (ret != -EPROBE_DEFER) + dev_err(dev, "failed to initialize core: %d\n", ret); goto err4; } diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h index 5bfb62533e0f..df876418cb78 100644 --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -37,6 +37,7 @@ #define DWC3_EP0_SETUP_SIZE 512 #define DWC3_ENDPOINTS_NUM 32 #define DWC3_XHCI_RESOURCES_NUM 2 +#define DWC3_ISOC_MAX_RETRIES 5 #define DWC3_SCRATCHBUF_SIZE 4096 /* each buffer is assumed to be 4KiB */ #define DWC3_EVENT_BUFFERS_SIZE 4096 @@ -174,13 +175,19 @@ #define DWC3_GSBUSCFG0_INCRBRSTENA (1 << 0) /* undefined length enable */ #define DWC3_GSBUSCFG0_INCRBRST_MASK 0xff +/* Global Debug LSP MUX Select */ +#define DWC3_GDBGLSPMUX_ENDBC BIT(15) /* Host only */ +#define DWC3_GDBGLSPMUX_HOSTSELECT(n) ((n) & 0x3fff) +#define DWC3_GDBGLSPMUX_DEVSELECT(n) (((n) & 0xf) << 4) +#define DWC3_GDBGLSPMUX_EPSELECT(n) ((n) & 0xf) + /* Global Debug Queue/FIFO Space Available Register */ #define DWC3_GDBGFIFOSPACE_NUM(n) ((n) & 0x1f) #define DWC3_GDBGFIFOSPACE_TYPE(n) (((n) << 5) & 0x1e0) #define DWC3_GDBGFIFOSPACE_SPACE_AVAILABLE(n) (((n) >> 16) & 0xffff) -#define DWC3_TXFIFOQ 0 -#define DWC3_RXFIFOQ 1 +#define DWC3_TXFIFO 0 +#define DWC3_RXFIFO 1 #define DWC3_TXREQQ 2 #define DWC3_RXREQQ 3 #define DWC3_RXINFOQ 4 @@ -253,6 +260,9 @@ #define DWC3_GSTS_DEVICE_IP BIT(6) #define DWC3_GSTS_CSR_TIMEOUT BIT(5) #define DWC3_GSTS_BUS_ERR_ADDR_VLD BIT(4) +#define DWC3_GSTS_CURMOD(n) ((n) & 0x3) +#define DWC3_GSTS_CURMOD_DEVICE 0 +#define DWC3_GSTS_CURMOD_HOST 1 /* Global USB2 PHY Configuration Register */ #define DWC3_GUSB2PHYCFG_PHYSOFTRST BIT(31) @@ -321,6 +331,7 @@ #define DWC3_GHWPARAMS1_EN_PWROPT_HIB 2 #define DWC3_GHWPARAMS1_PWROPT(n) ((n) << 24) #define DWC3_GHWPARAMS1_PWROPT_MASK DWC3_GHWPARAMS1_PWROPT(3) +#define DWC3_GHWPARAMS1_ENDBC BIT(31) /* Global HWPARAMS3 Register */ #define DWC3_GHWPARAMS3_SSPHY_IFC(n) ((n) & 3) @@ -636,9 +647,9 @@ struct dwc3_event_buffer { /** * struct dwc3_ep - device side endpoint representation * @endpoint: usb endpoint + * @cancelled_list: list of cancelled requests for this endpoint * @pending_list: 
list of pending requests for this endpoint * @started_list: list of started requests on this endpoint - * @wait_end_transfer: wait_queue_head_t for waiting on End Transfer complete * @lock: spinlock for endpoint request queue traversal * @regs: pointer to first endpoint register * @trb_pool: array of transaction buffers @@ -656,14 +667,17 @@ struct dwc3_event_buffer { * @name: a human readable name e.g. ep1out-bulk * @direction: true for TX, false for RX * @stream_capable: true when streams are enabled + * @combo_num: the test combination BIT[15:14] of the frame number to test + * isochronous START TRANSFER command failure workaround + * @start_cmd_status: the status of testing START TRANSFER command with + * combo_num = 'b00 */ struct dwc3_ep { struct usb_ep endpoint; + struct list_head cancelled_list; struct list_head pending_list; struct list_head started_list; - wait_queue_head_t wait_end_transfer; - spinlock_t lock; void __iomem *regs; @@ -705,6 +719,10 @@ struct dwc3_ep { unsigned direction:1; unsigned stream_capable:1; + + /* For isochronous START TRANSFER workaround only */ + u8 combo_num; + int start_cmd_status; }; enum dwc3_phy { @@ -766,6 +784,7 @@ enum dwc3_link_state { #define DWC3_TRB_CTRL_ISP_IMI BIT(10) #define DWC3_TRB_CTRL_IOC BIT(11) #define DWC3_TRB_CTRL_SID_SOFN(n) (((n) & 0xffff) << 14) +#define DWC3_TRB_CTRL_GET_SID_SOFN(n) (((n) & (0xffff << 14)) >> 14) #define DWC3_TRBCTL_TYPE(n) ((n) & (0x3f << 4)) #define DWC3_TRBCTL_NORMAL DWC3_TRB_CTRL_TRBCTL(1) @@ -847,11 +866,12 @@ struct dwc3_hwparams { * @epnum: endpoint number to which this request refers * @trb: pointer to struct dwc3_trb * @trb_dma: DMA address of @trb - * @unaligned: true for OUT endpoints with length not divisible by maxp + * @num_trbs: number of TRBs used by this request + * @needs_extra_trb: true when request needs one extra TRB (either due to ZLP + * or unaligned OUT) * @direction: IN or OUT direction flag * @mapped: true when request has been dma-mapped * @started: request is started - * @zero: wants a ZLP */ struct dwc3_request { struct usb_request request; @@ -867,11 +887,12 @@ struct dwc3_request { struct dwc3_trb *trb; dma_addr_t trb_dma; - unsigned unaligned:1; + unsigned num_trbs; + + unsigned needs_extra_trb:1; unsigned direction:1; unsigned mapped:1; unsigned started:1; - unsigned zero:1; }; /* @@ -918,6 +939,7 @@ struct dwc3_scratchpad_array { * @u1u2: only used on revisions <1.83a for workaround * @maximum_speed: maximum speed requested (mainly for testing purposes) * @revision: revision register contents + * @version_type: VERSIONTYPE register contents, a sub release of a revision * @dr_mode: requested mode of operation * @current_dr_role: current role of operation when in dual-role mode * @desired_dr_role: desired role of operation when in dual-role mode @@ -945,6 +967,7 @@ struct dwc3_scratchpad_array { * @hwparams: copy of hwparams registers * @root: debugfs root folder pointer * @regset: debugfs pointer to regdump file + * @dbg_lsp_select: current debug lsp mux register selection * @test_mode: true when we're entering a USB test mode * @test_mode_nr: test feature selector * @lpm_nyet_threshold: LPM NYET response threshold @@ -970,7 +993,10 @@ struct dwc3_scratchpad_array { * @pullups_connected: true when Run/Stop bit is set * @setup_packet_pending: true when there's a Setup Packet in FIFO. 
Workaround * @three_stage_setup: set if we perform a three phase setup + * @dis_start_transfer_quirk: set if start_transfer failure SW workaround is + * not needed for DWC_usb31 version 1.70a-ea06 and below * @usb3_lpm_capable: set if hardware supports Link Power Management + * @usb2_lpm_disable: set to disable usb2 lpm * @disable_scramble_quirk: set if we enable the disable scramble quirk * @u2exit_lfps_quirk: set if we enable u2exit lfps quirk * @u2ss_inp3_quirk: set if we enable P3 OK for U2/SS Inactive quirk @@ -1095,6 +1121,7 @@ struct dwc3 { #define DWC3_REVISION_290A 0x5533290a #define DWC3_REVISION_300A 0x5533300a #define DWC3_REVISION_310A 0x5533310a +#define DWC3_REVISION_330A 0x5533330a /* * NOTICE: we're using bit 31 as a "is usb 3.1" flag. This is really * @@ -1103,6 +1130,17 @@ struct dwc3 { #define DWC3_REVISION_IS_DWC31 0x80000000 #define DWC3_USB31_REVISION_110A (0x3131302a | DWC3_REVISION_IS_DWC31) #define DWC3_USB31_REVISION_120A (0x3132302a | DWC3_REVISION_IS_DWC31) +#define DWC3_USB31_REVISION_160A (0x3136302a | DWC3_REVISION_IS_DWC31) +#define DWC3_USB31_REVISION_170A (0x3137302a | DWC3_REVISION_IS_DWC31) + + u32 version_type; + +#define DWC31_VERSIONTYPE_EA01 0x65613031 +#define DWC31_VERSIONTYPE_EA02 0x65613032 +#define DWC31_VERSIONTYPE_EA03 0x65613033 +#define DWC31_VERSIONTYPE_EA04 0x65613034 +#define DWC31_VERSIONTYPE_EA05 0x65613035 +#define DWC31_VERSIONTYPE_EA06 0x65613036 enum dwc3_ep0_next ep0_next_event; enum dwc3_ep0_state ep0state; @@ -1121,6 +1159,8 @@ struct dwc3 { struct dentry *root; struct debugfs_regset32 *regset; + u32 dbg_lsp_select; + u8 test_mode; u8 test_mode_nr; u8 lpm_nyet_threshold; @@ -1145,7 +1185,9 @@ struct dwc3 { unsigned pullups_connected:1; unsigned setup_packet_pending:1; unsigned three_stage_setup:1; + unsigned dis_start_transfer_quirk:1; unsigned usb3_lpm_capable:1; + unsigned usb2_lpm_disable:1; unsigned disable_scramble_quirk:1; unsigned u2exit_lfps_quirk:1; diff --git a/drivers/usb/dwc3/debug.h b/drivers/usb/dwc3/debug.h index c66d216dcc30..4f75ab3505b7 100644 --- a/drivers/usb/dwc3/debug.h +++ b/drivers/usb/dwc3/debug.h @@ -116,6 +116,35 @@ dwc3_gadget_link_string(enum dwc3_link_state link_state) } } +/** + * dwc3_gadget_hs_link_string - returns highspeed and below link name + * @link_state: link state code + */ +static inline const char * +dwc3_gadget_hs_link_string(enum dwc3_link_state link_state) +{ + switch (link_state) { + case DWC3_LINK_STATE_U0: + return "On"; + case DWC3_LINK_STATE_U2: + return "Sleep"; + case DWC3_LINK_STATE_U3: + return "Suspend"; + case DWC3_LINK_STATE_SS_DIS: + return "Disconnected"; + case DWC3_LINK_STATE_RX_DET: + return "Early Suspend"; + case DWC3_LINK_STATE_RECOV: + return "Recovery"; + case DWC3_LINK_STATE_RESET: + return "Reset"; + case DWC3_LINK_STATE_RESUME: + return "Resume"; + default: + return "UNKNOWN link state\n"; + } +} + /** * dwc3_trb_type_string - returns TRB type as a string * @type: the type of the TRB diff --git a/drivers/usb/dwc3/debugfs.c b/drivers/usb/dwc3/debugfs.c index df8e73ec3342..1c792710348f 100644 --- a/drivers/usb/dwc3/debugfs.c +++ b/drivers/usb/dwc3/debugfs.c @@ -25,6 +25,8 @@ #include "io.h" #include "debug.h" +#define DWC3_LSP_MUX_UNSELECTED 0xfffff + #define dump_register(nm) \ { \ .name = __stringify(nm), \ @@ -82,10 +84,6 @@ static const struct debugfs_reg32 dwc3_regs[] = { dump_register(GDBGFIFOSPACE), dump_register(GDBGLTSSM), dump_register(GDBGBMU), - dump_register(GDBGLSPMUX), - dump_register(GDBGLSP), - dump_register(GDBGEPINFO0), -
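As an aside on the constants above: both the DWC3_USB31_REVISION_* and DWC31_VERSIONTYPE_* values are simply ASCII characters packed into a u32 (0x65613031 is "ea01"), which is why the revision gating later in this series can use plain integer comparisons. A minimal standalone sketch of the decoding; the helper name is illustrative, not from the patch:

#include <stdint.h>
#include <stdio.h>

/* Unpack an ASCII-coded tag such as 0x65613031 ("ea01"). */
static void decode_version_type(uint32_t vt, char out[5])
{
	out[0] = (vt >> 24) & 0xff;
	out[1] = (vt >> 16) & 0xff;
	out[2] = (vt >> 8) & 0xff;
	out[3] = vt & 0xff;
	out[4] = '\0';
}

int main(void)
{
	char tag[5];

	decode_version_type(0x65613031, tag); /* DWC31_VERSIONTYPE_EA01 */
	printf("%s\n", tag);                  /* prints "ea01" */
	return 0;
}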
dump_register(GDBGEPINFO1), dump_register(GPRTBIMAP_HS0), dump_register(GPRTBIMAP_HS1), dump_register(GPRTBIMAP_FS0), @@ -279,6 +277,114 @@ static const struct debugfs_reg32 dwc3_regs[] = { dump_register(OSTS), }; +static void dwc3_host_lsp(struct seq_file *s) +{ + struct dwc3 *dwc = s->private; + bool dbc_enabled; + u32 sel; + u32 reg; + u32 val; + + dbc_enabled = !!(dwc->hwparams.hwparams1 & DWC3_GHWPARAMS1_ENDBC); + + sel = dwc->dbg_lsp_select; + if (sel == DWC3_LSP_MUX_UNSELECTED) { + seq_puts(s, "Write LSP selection to print for host\n"); + return; + } + + reg = DWC3_GDBGLSPMUX_HOSTSELECT(sel); + + dwc3_writel(dwc->regs, DWC3_GDBGLSPMUX, reg); + val = dwc3_readl(dwc->regs, DWC3_GDBGLSP); + seq_printf(s, "GDBGLSP[%d] = 0x%08x\n", sel, val); + + if (dbc_enabled && sel < 256) { + reg |= DWC3_GDBGLSPMUX_ENDBC; + dwc3_writel(dwc->regs, DWC3_GDBGLSPMUX, reg); + val = dwc3_readl(dwc->regs, DWC3_GDBGLSP); + seq_printf(s, "GDBGLSP_DBC[%d] = 0x%08x\n", sel, val); + } +} + +static void dwc3_gadget_lsp(struct seq_file *s) +{ + struct dwc3 *dwc = s->private; + int i; + u32 reg; + + for (i = 0; i < 16; i++) { + reg = DWC3_GDBGLSPMUX_DEVSELECT(i); + dwc3_writel(dwc->regs, DWC3_GDBGLSPMUX, reg); + reg = dwc3_readl(dwc->regs, DWC3_GDBGLSP); + seq_printf(s, "GDBGLSP[%d] = 0x%08x\n", i, reg); + } +} + +static int dwc3_lsp_show(struct seq_file *s, void *unused) +{ + struct dwc3 *dwc = s->private; + unsigned int current_mode; + unsigned long flags; + u32 reg; + + spin_lock_irqsave(&dwc->lock, flags); + reg = dwc3_readl(dwc->regs, DWC3_GSTS); + current_mode = DWC3_GSTS_CURMOD(reg); + + switch (current_mode) { + case DWC3_GSTS_CURMOD_HOST: + dwc3_host_lsp(s); + break; + case DWC3_GSTS_CURMOD_DEVICE: + dwc3_gadget_lsp(s); + break; + default: + seq_puts(s, "Mode is unknown, no LSP register printed\n"); + break; + } + spin_unlock_irqrestore(&dwc->lock, flags); + + return 0; +} + +static int dwc3_lsp_open(struct inode *inode, struct file *file) +{ + return single_open(file, dwc3_lsp_show, inode->i_private); +} + +static ssize_t dwc3_lsp_write(struct file *file, const char __user *ubuf, + size_t count, loff_t *ppos) +{ + struct seq_file *s = file->private_data; + struct dwc3 *dwc = s->private; + unsigned long flags; + char buf[32] = { 0 }; + u32 sel; + int ret; + + if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count))) + return -EFAULT; + + ret = kstrtouint(buf, 0, &sel); + if (ret) + return ret; + + spin_lock_irqsave(&dwc->lock, flags); + dwc->dbg_lsp_select = sel; + spin_unlock_irqrestore(&dwc->lock, flags); + + return count; +} + +static const struct file_operations dwc3_lsp_fops = { + .open = dwc3_lsp_open, + .write = dwc3_lsp_write, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + static int dwc3_mode_show(struct seq_file *s, void *unused) { struct dwc3 *dwc = s->private; @@ -433,13 +539,24 @@ static int dwc3_link_state_show(struct seq_file *s, void *unused) unsigned long flags; enum dwc3_link_state state; u32 reg; + u8 speed; spin_lock_irqsave(&dwc->lock, flags); + reg = dwc3_readl(dwc->regs, DWC3_GSTS); + if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) { + seq_puts(s, "Not available\n"); + spin_unlock_irqrestore(&dwc->lock, flags); + return 0; + } + reg = dwc3_readl(dwc->regs, DWC3_DSTS); state = DWC3_DSTS_USBLNKST(reg); - spin_unlock_irqrestore(&dwc->lock, flags); + speed = reg & DWC3_DSTS_CONNECTSPD; - seq_printf(s, "%s\n", dwc3_gadget_link_string(state)); + seq_printf(s, "%s\n", (speed >= DWC3_DSTS_SUPERSPEED) ? 
+ dwc3_gadget_link_string(state) : + dwc3_gadget_hs_link_string(state)); + spin_unlock_irqrestore(&dwc->lock, flags); return 0; } @@ -457,6 +574,8 @@ static ssize_t dwc3_link_state_write(struct file *file, unsigned long flags; enum dwc3_link_state state = 0; char buf[32]; + u32 reg; + u8 speed; if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count))) return -EFAULT; @@ -477,6 +596,21 @@ static ssize_t dwc3_link_state_write(struct file *file, return -EINVAL; spin_lock_irqsave(&dwc->lock, flags); + reg = dwc3_readl(dwc->regs, DWC3_GSTS); + if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) { + spin_unlock_irqrestore(&dwc->lock, flags); + return -EINVAL; + } + + reg = dwc3_readl(dwc->regs, DWC3_DSTS); + speed = reg & DWC3_DSTS_CONNECTSPD; + + if (speed < DWC3_DSTS_SUPERSPEED && + state != DWC3_LINK_STATE_RECOV) { + spin_unlock_irqrestore(&dwc->lock, flags); + return -EINVAL; + } + dwc3_gadget_set_link_state(dwc, state); spin_unlock_irqrestore(&dwc->lock, flags); @@ -496,7 +630,7 @@ struct dwc3_ep_file_map { const struct file_operations *const fops; }; -static int dwc3_tx_fifo_queue_show(struct seq_file *s, void *unused) +static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused) { struct dwc3_ep *dep = s->private; struct dwc3 *dwc = dep->dwc; @@ -504,14 +638,18 @@ static int dwc3_tx_fifo_queue_show(struct seq_file *s, void *unused) u32 val; spin_lock_irqsave(&dwc->lock, flags); - val = dwc3_core_fifo_space(dep, DWC3_TXFIFOQ); + val = dwc3_core_fifo_space(dep, DWC3_TXFIFO); + + /* Convert to bytes */ + val *= DWC3_MDWIDTH(dwc->hwparams.hwparams0); + val >>= 3; seq_printf(s, "%u\n", val); spin_unlock_irqrestore(&dwc->lock, flags); return 0; } -static int dwc3_rx_fifo_queue_show(struct seq_file *s, void *unused) +static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused) { struct dwc3_ep *dep = s->private; struct dwc3 *dwc = dep->dwc; @@ -519,7 +657,11 @@ static int dwc3_rx_fifo_queue_show(struct seq_file *s, void *unused) u32 val; spin_lock_irqsave(&dwc->lock, flags); - val = dwc3_core_fifo_space(dep, DWC3_RXFIFOQ); + val = dwc3_core_fifo_space(dep, DWC3_RXFIFO); + + /* Convert to bytes */ + val *= DWC3_MDWIDTH(dwc->hwparams.hwparams0); + val >>= 3; seq_printf(s, "%u\n", val); spin_unlock_irqrestore(&dwc->lock, flags); @@ -675,8 +817,32 @@ out: return 0; } -DEFINE_SHOW_ATTRIBUTE(dwc3_tx_fifo_queue); -DEFINE_SHOW_ATTRIBUTE(dwc3_rx_fifo_queue); +static int dwc3_ep_info_register_show(struct seq_file *s, void *unused) +{ + struct dwc3_ep *dep = s->private; + struct dwc3 *dwc = dep->dwc; + unsigned long flags; + u64 ep_info; + u32 lower_32_bits; + u32 upper_32_bits; + u32 reg; + + spin_lock_irqsave(&dwc->lock, flags); + reg = DWC3_GDBGLSPMUX_EPSELECT(dep->number); + dwc3_writel(dwc->regs, DWC3_GDBGLSPMUX, reg); + + lower_32_bits = dwc3_readl(dwc->regs, DWC3_GDBGEPINFO0); + upper_32_bits = dwc3_readl(dwc->regs, DWC3_GDBGEPINFO1); + + ep_info = ((u64)upper_32_bits << 32) | lower_32_bits; + seq_printf(s, "0x%016llx\n", ep_info); + spin_unlock_irqrestore(&dwc->lock, flags); + + return 0; +} + +DEFINE_SHOW_ATTRIBUTE(dwc3_tx_fifo_size); +DEFINE_SHOW_ATTRIBUTE(dwc3_rx_fifo_size); DEFINE_SHOW_ATTRIBUTE(dwc3_tx_request_queue); DEFINE_SHOW_ATTRIBUTE(dwc3_rx_request_queue); DEFINE_SHOW_ATTRIBUTE(dwc3_rx_info_queue); @@ -684,10 +850,11 @@ DEFINE_SHOW_ATTRIBUTE(dwc3_descriptor_fetch_queue); DEFINE_SHOW_ATTRIBUTE(dwc3_event_queue); DEFINE_SHOW_ATTRIBUTE(dwc3_transfer_type); DEFINE_SHOW_ATTRIBUTE(dwc3_trb_ring); +DEFINE_SHOW_ATTRIBUTE(dwc3_ep_info_register); static const struct 
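The fifo-size conversion added above relies on GDBGFIFOSPACE reporting space in units of the controller's internal bus width: DWC3_MDWIDTH() yields that width in bits, so bytes = words * mdwidth / 8, which is exactly what the "val *= DWC3_MDWIDTH(...); val >>= 3;" lines compute. A minimal sketch of the arithmetic, with the 64-bit bus width chosen purely for the example:

#include <stdint.h>
#include <stdio.h>

/* FIFO space is reported in bus-width words; MDWIDTH is in bits. */
static uint32_t fifo_words_to_bytes(uint32_t words, uint32_t mdwidth_bits)
{
	return (words * mdwidth_bits) >> 3;
}

int main(void)
{
	/* e.g. 128 words on a 64-bit internal bus -> 1024 bytes */
	printf("%u\n", fifo_words_to_bytes(128, 64));
	return 0;
}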
dwc3_ep_file_map dwc3_ep_file_map[] = { - { "tx_fifo_queue", &dwc3_tx_fifo_queue_fops, }, - { "rx_fifo_queue", &dwc3_rx_fifo_queue_fops, }, + { "tx_fifo_size", &dwc3_tx_fifo_size_fops, }, + { "rx_fifo_size", &dwc3_rx_fifo_size_fops, }, { "tx_request_queue", &dwc3_tx_request_queue_fops, }, { "rx_request_queue", &dwc3_rx_request_queue_fops, }, { "rx_info_queue", &dwc3_rx_info_queue_fops, }, @@ -695,6 +862,7 @@ static const struct dwc3_ep_file_map dwc3_ep_file_map[] = { { "event_queue", &dwc3_event_queue_fops, }, { "transfer_type", &dwc3_transfer_type_fops, }, { "trb_ring", &dwc3_trb_ring_fops, }, + { "GDBGEPINFO", &dwc3_ep_info_register_fops, }, }; static void dwc3_debugfs_create_endpoint_files(struct dwc3_ep *dep, @@ -742,6 +910,8 @@ void dwc3_debugfs_init(struct dwc3 *dwc) if (!dwc->regset) return; + dwc->dbg_lsp_select = DWC3_LSP_MUX_UNSELECTED; + dwc->regset->regs = dwc3_regs; dwc->regset->nregs = ARRAY_SIZE(dwc3_regs); dwc->regset->base = dwc->regs - DWC3_GLOBALS_REGS_START; @@ -751,6 +921,9 @@ void dwc3_debugfs_init(struct dwc3 *dwc) debugfs_create_regset32("regdump", S_IRUGO, root, dwc->regset); + debugfs_create_file("lsp_dump", S_IRUGO | S_IWUSR, root, dwc, + &dwc3_lsp_fops); + if (IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)) { debugfs_create_file("mode", S_IRUGO | S_IWUSR, root, dwc, &dwc3_mode_fops); diff --git a/drivers/usb/dwc3/drd.c b/drivers/usb/dwc3/drd.c index 218371f985ca..869725d15c74 100644 --- a/drivers/usb/dwc3/drd.c +++ b/drivers/usb/dwc3/drd.c @@ -10,6 +10,7 @@ #include #include #include +#include #include "debug.h" #include "core.h" @@ -445,9 +446,19 @@ static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc) struct device *dev = dwc->dev; struct device_node *np_phy, *np_conn; struct extcon_dev *edev; + const char *name; - if (of_property_read_bool(dev->of_node, "extcon")) - return extcon_get_edev_by_phandle(dwc->dev, 0); + if (device_property_read_bool(dev, "extcon")) + return extcon_get_edev_by_phandle(dev, 0); + + /* + * Device tree platforms should get extcon via phandle. + * On ACPI platforms, we get the name from a device property. + * This device property is for kernel internal use only and + * is expected to be set by the glue code. + */ + if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) + return extcon_get_extcon_dev(name); np_phy = of_parse_phandle(dev->of_node, "phys", 0); np_conn = of_graph_get_remote_node(np_phy, -1, -1); diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c index 842795856bf4..fdc6e4e403e8 100644 --- a/drivers/usb/dwc3/dwc3-pci.c +++ b/drivers/usb/dwc3/dwc3-pci.c @@ -170,20 +170,20 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc) * put the gpio descriptors again here because the phy driver * might want to grab them, too. 
*/ - gpio = devm_gpiod_get_optional(&pdev->dev, "cs", - GPIOD_OUT_LOW); + gpio = gpiod_get_optional(&pdev->dev, "cs", GPIOD_OUT_LOW); if (IS_ERR(gpio)) return PTR_ERR(gpio); gpiod_set_value_cansleep(gpio, 1); + gpiod_put(gpio); - gpio = devm_gpiod_get_optional(&pdev->dev, "reset", - GPIOD_OUT_LOW); + gpio = gpiod_get_optional(&pdev->dev, "reset", GPIOD_OUT_LOW); if (IS_ERR(gpio)) return PTR_ERR(gpio); if (gpio) { gpiod_set_value_cansleep(gpio, 1); + gpiod_put(gpio); usleep_range(10000, 11000); } } diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 9f92ee03dde7..07bd31bb2f8a 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -27,7 +27,7 @@ #include "gadget.h" #include "io.h" -#define DWC3_ALIGN_FRAME(d) (((d)->frame_number + (d)->interval) \ +#define DWC3_ALIGN_FRAME(d, n) (((d)->frame_number + ((d)->interval * (n))) \ & ~((d)->interval - 1)) /** @@ -647,8 +647,6 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep, unsigned int action) reg |= DWC3_DALEPENA_EP(dep->number); dwc3_writel(dwc->regs, DWC3_DALEPENA, reg); - init_waitqueue_head(&dep->wait_end_transfer); - if (usb_endpoint_xfer_control(desc)) goto out; @@ -672,7 +670,7 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep, unsigned int action) * Issue StartTransfer here with no-op TRB so we can always rely on No * Response Update Transfer command. */ - if (usb_endpoint_xfer_bulk(desc) || + if ((usb_endpoint_xfer_bulk(desc) && !dep->stream_capable) || usb_endpoint_xfer_int(desc)) { struct dwc3_gadget_ep_cmd_params params; struct dwc3_trb *trb; @@ -919,8 +917,6 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb, struct usb_gadget *gadget = &dwc->gadget; enum usb_device_speed speed = gadget->speed; - dwc3_ep_inc_enq(dep); - trb->size = DWC3_TRB_SIZE_LENGTH(length); trb->bpl = lower_32_bits(dma); trb->bph = upper_32_bits(dma); @@ -990,16 +986,20 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb, usb_endpoint_type(dep->endpoint.desc)); } - /* always enable Continue on Short Packet */ + /* + * Enable Continue on Short Packet + * when the endpoint is not stream capable + */ if (usb_endpoint_dir_out(dep->endpoint.desc)) { - trb->ctrl |= DWC3_TRB_CTRL_CSP; + if (!dep->stream_capable) + trb->ctrl |= DWC3_TRB_CTRL_CSP; if (short_not_ok) trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI; } if ((!no_interrupt && !chain) || - (dwc3_calc_trbs_left(dep) == 0)) + (dwc3_calc_trbs_left(dep) == 1)) trb->ctrl |= DWC3_TRB_CTRL_IOC; if (chain) @@ -1010,6 +1010,8 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb, trb->ctrl |= DWC3_TRB_CTRL_HWO; + dwc3_ep_inc_enq(dep); + trace_dwc3_prepare_trb(dep, trb); } @@ -1046,6 +1048,8 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep, req->trb_dma = dwc3_trb_dma_offset(dep, trb); } + req->num_trbs++; + __dwc3_prepare_one_trb(dep, trb, dma, length, chain, node, stream_id, short_not_ok, no_interrupt); } @@ -1073,13 +1077,14 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep, struct dwc3 *dwc = dep->dwc; struct dwc3_trb *trb; - req->unaligned = true; + req->needs_extra_trb = true; /* prepare normal TRB */ dwc3_prepare_one_trb(dep, req, true, i); /* Now prepare one extra TRB to align transfer size */ trb = &dep->trb_pool[dep->trb_enqueue]; + req->num_trbs++; __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp - rem, false, 1, req->request.stream_id, @@ -1117,13 +1122,14 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep, struct dwc3 *dwc = dep->dwc; struct dwc3_trb
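The reworked DWC3_ALIGN_FRAME(d, n) above schedules n service intervals ahead and then aligns down to an interval boundary; the mask trick only holds when the interval is a power of two, which it is for USB (2^(bInterval-1)). A standalone sketch of the same arithmetic (helper name is the editor's):

#include <assert.h>
#include <stdint.h>

/*
 * Schedule n intervals ahead of frame_number, then align down to an
 * interval boundary; interval must be a power of two.
 */
static uint32_t align_frame(uint32_t frame_number, uint32_t interval,
			    uint32_t n)
{
	return (frame_number + interval * n) & ~(interval - 1);
}

int main(void)
{
	/* frame 0x105, interval 8: one interval ahead lands on 0x108 */
	assert(align_frame(0x105, 8, 1) == 0x108);
	/* retrying with n = 2, as the new retry loop does, pushes one more out */
	assert(align_frame(0x105, 8, 2) == 0x110);
	return 0;
}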
*trb; - req->unaligned = true; + req->needs_extra_trb = true; /* prepare normal TRB */ dwc3_prepare_one_trb(dep, req, true, 0); /* Now prepare one extra TRB to align transfer size */ trb = &dep->trb_pool[dep->trb_enqueue]; + req->num_trbs++; __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp - rem, false, 1, req->request.stream_id, req->request.short_not_ok, @@ -1133,13 +1139,14 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep, struct dwc3 *dwc = dep->dwc; struct dwc3_trb *trb; - req->zero = true; + req->needs_extra_trb = true; /* prepare normal TRB */ dwc3_prepare_one_trb(dep, req, true, 0); /* Now prepare one extra TRB to handle ZLP */ trb = &dep->trb_pool[dep->trb_enqueue]; + req->num_trbs++; __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0, false, 1, req->request.stream_id, req->request.short_not_ok, @@ -1232,6 +1239,9 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep) params.param1 = lower_32_bits(req->trb_dma); cmd = DWC3_DEPCMD_STARTTRANSFER; + if (dep->stream_capable) + cmd |= DWC3_DEPCMD_PARAM(req->request.stream_id); + if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) cmd |= DWC3_DEPCMD_PARAM(dep->frame_number); } else { @@ -1263,17 +1273,151 @@ static int __dwc3_gadget_get_frame(struct dwc3 *dwc) return DWC3_DSTS_SOFFN(reg); } -static void __dwc3_gadget_start_isoc(struct dwc3_ep *dep) +/** + * dwc3_gadget_start_isoc_quirk - workaround invalid frame number + * @dep: isoc endpoint + * + * This function tests for the correct combination of BIT[15:14] from the 16-bit + * microframe number reported by the XferNotReady event for the future frame + * number to start the isoc transfer. + * + * In DWC_usb31 version 1.70a-ea06 and prior, for highspeed and fullspeed + * isochronous IN, BIT[15:14] of the 16-bit microframe number reported by the + * XferNotReady event are invalid. The driver uses this number to schedule the + * isochronous transfer and passes it to the START TRANSFER command. Because + * this number is invalid, the command may fail. If BIT[15:14] matches the + * internal 16-bit microframe, the START TRANSFER command will pass and the + * transfer will start at the scheduled time; if it is off by 1, the command + * will still pass, but the transfer will start 2 seconds in the future. For all + * other conditions, the START TRANSFER command will fail with bus-expiry. + * + * In order to work around this issue, we can test for the correct combination of + * BIT[15:14] by sending START TRANSFER commands with different values of + * BIT[15:14]: 'b00, 'b01, 'b10, and 'b11. Each combination is 2^14 uframes apart + * (or 2 seconds). 4 seconds into the future will result in a bus-expiry status. + * As a result, within the 4 possible combinations for BIT[15:14], there will + * be 2 successful and 2 failing START TRANSFER command statuses. One of the 2 + * successful command statuses will correspond to a 2-second delayed start. The + * smaller BIT[15:14] value is the correct combination. + * + * Since there are only 4 outcomes and the results are ordered, we can simply + * test 2 START TRANSFER commands with BIT[15:14] combinations 'b00 and 'b01 to + * deduce the smaller successful combination. + * + * Let test0 = test status for combination 'b00 and test1 = test status for 'b01 + * of BIT[15:14].
The correct combination is as follows: + * + * if test0 fails and test1 passes, BIT[15:14] is 'b01 + * if test0 fails and test1 fails, BIT[15:14] is 'b10 + * if test0 passes and test1 fails, BIT[15:14] is 'b11 + * if test0 passes and test1 passes, BIT[15:14] is 'b00 + * + * Synopsys STAR 9001202023: Wrong microframe number for isochronous IN + * endpoints. + */ +static int dwc3_gadget_start_isoc_quirk(struct dwc3_ep *dep) +{ + int cmd_status = 0; + bool test0; + bool test1; + + while (dep->combo_num < 2) { + struct dwc3_gadget_ep_cmd_params params; + u32 test_frame_number; + u32 cmd; + + /* + * Check if we can start isoc transfer on the next interval or + * 4 uframes in the future with BIT[15:14] as dep->combo_num + */ + test_frame_number = dep->frame_number & 0x3fff; + test_frame_number |= dep->combo_num << 14; + test_frame_number += max_t(u32, 4, dep->interval); + + params.param0 = upper_32_bits(dep->dwc->bounce_addr); + params.param1 = lower_32_bits(dep->dwc->bounce_addr); + + cmd = DWC3_DEPCMD_STARTTRANSFER; + cmd |= DWC3_DEPCMD_PARAM(test_frame_number); + cmd_status = dwc3_send_gadget_ep_cmd(dep, cmd, &params); + + /* Redo if some other failure besides bus-expiry is received */ + if (cmd_status && cmd_status != -EAGAIN) { + dep->start_cmd_status = 0; + dep->combo_num = 0; + return 0; + } + + /* Store the first test status */ + if (dep->combo_num == 0) + dep->start_cmd_status = cmd_status; + + dep->combo_num++; + + /* + * If the START_TRANSFER command succeeds, end the transfer and + * wait for the next XferNotReady to test the command again + */ + if (cmd_status == 0) { + dwc3_stop_active_transfer(dep, true); + return 0; + } + } + + /* test0 and test1 are both completed at this point */ + test0 = (dep->start_cmd_status == 0); + test1 = (cmd_status == 0); + + if (!test0 && test1) + dep->combo_num = 1; + else if (!test0 && !test1) + dep->combo_num = 2; + else if (test0 && !test1) + dep->combo_num = 3; + else if (test0 && test1) + dep->combo_num = 0; + + dep->frame_number &= 0x3fff; + dep->frame_number |= dep->combo_num << 14; + dep->frame_number += max_t(u32, 4, dep->interval); + + /* Reinitialize test variables */ + dep->start_cmd_status = 0; + dep->combo_num = 0; + + return __dwc3_gadget_kick_transfer(dep); +} + +static int __dwc3_gadget_start_isoc(struct dwc3_ep *dep) { + struct dwc3 *dwc = dep->dwc; + int ret; + int i; + if (list_empty(&dep->pending_list)) { - dev_info(dep->dwc->dev, "%s: ran out of requests\n", - dep->name); dep->flags |= DWC3_EP_PENDING_REQUEST; - return; + return -EAGAIN; + } + + if (!dwc->dis_start_transfer_quirk && dwc3_is_usb31(dwc) && + (dwc->revision <= DWC3_USB31_REVISION_160A || + (dwc->revision == DWC3_USB31_REVISION_170A && + dwc->version_type >= DWC31_VERSIONTYPE_EA01 && + dwc->version_type <= DWC31_VERSIONTYPE_EA06))) { + + if (dwc->gadget.speed <= USB_SPEED_HIGH && dep->direction) + return dwc3_gadget_start_isoc_quirk(dep); } - dep->frame_number = DWC3_ALIGN_FRAME(dep); - __dwc3_gadget_kick_transfer(dep); + for (i = 0; i < DWC3_ISOC_MAX_RETRIES; i++) { + dep->frame_number = DWC3_ALIGN_FRAME(dep, i + 1); + + ret = __dwc3_gadget_kick_transfer(dep); + if (ret != -EAGAIN) + break; + } + + return ret; } static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req) @@ -1314,8 +1458,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req) if ((dep->flags & DWC3_EP_PENDING_REQUEST)) { if (!(dep->flags & DWC3_EP_TRANSFER_STARTED)) { - __dwc3_gadget_start_isoc(dep); - return 0; + return __dwc3_gadget_start_isoc(dep);
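The deduction table in the kernel-doc above reduces to a small pure function over the two probe results. This sketch (names are the editor's, not the driver's) mirrors the combo_num assignment in dwc3_gadget_start_isoc_quirk() and checks two of the four cases:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * test0/test1 are the pass/fail results of START TRANSFER probes with
 * BIT[15:14] = 'b00 and 'b01; the return value is the combination the
 * controller's internal microframe counter actually expects.
 */
static uint32_t combo_from_tests(bool test0, bool test1)
{
	if (!test0 && test1)
		return 1;	/* 'b01 */
	if (!test0 && !test1)
		return 2;	/* 'b10 */
	if (test0 && !test1)
		return 3;	/* 'b11 */
	return 0;		/* 'b00: both probes passed */
}

int main(void)
{
	assert(combo_from_tests(false, true) == 1);
	assert(combo_from_tests(true, true) == 0);
	return 0;
}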
} } } @@ -1341,6 +1484,40 @@ static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request, return ret; } +static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *req) +{ + int i; + + /* + * If request was already started, this means we had to + * stop the transfer. With that we also need to ignore + * all TRBs used by the request, however TRBs can only + * be modified after completion of END_TRANSFER + * command. So what we do here is that we wait for + * END_TRANSFER completion and only after that, we jump + * over TRBs by clearing HWO and incrementing dequeue + * pointer. + */ + for (i = 0; i < req->num_trbs; i++) { + struct dwc3_trb *trb; + + trb = req->trb + i; + trb->ctrl &= ~DWC3_TRB_CTRL_HWO; + dwc3_ep_inc_deq(dep); + } +} + +static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep) +{ + struct dwc3_request *req; + struct dwc3_request *tmp; + + list_for_each_entry_safe(req, tmp, &dep->cancelled_list, list) { + dwc3_gadget_ep_skip_trbs(dep, req); + dwc3_gadget_giveback(dep, req, -ECONNRESET); + } +} + static int dwc3_gadget_ep_dequeue(struct usb_ep *ep, struct usb_request *request) { @@ -1371,68 +1548,11 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep, /* wait until it is processed */ dwc3_stop_active_transfer(dep, true); - /* - * If request was already started, this means we had to - * stop the transfer. With that we also need to ignore - * all TRBs used by the request, however TRBs can only - * be modified after completion of END_TRANSFER - * command. So what we do here is that we wait for - * END_TRANSFER completion and only after that, we jump - * over TRBs by clearing HWO and incrementing dequeue - * pointer. - * - * Note that we have 2 possible types of transfers here: - * - * i) Linear buffer request - * ii) SG-list based request - * - * SG-list based requests will have r->num_pending_sgs - * set to a valid number (> 0). Linear requests, - * normally use a single TRB. - * - * For each of these two cases, if r->unaligned flag is - * set, one extra TRB has been used to align transfer - * size to wMaxPacketSize. - * - * All of these cases need to be taken into - * consideration so we don't mess up our TRB ring - * pointers. 
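dwc3_gadget_ep_skip_trbs() above is the piece that hands a cancelled request's ring slots back to software once END_TRANSFER has completed: clear HWO on each TRB the request owned, then step the dequeue pointer past it. A simplified model of that walk, using a flat ring instead of the driver's link-TRB-aware dwc3_ep_inc_deq():

#include <stdint.h>
#include <stdio.h>

#define TRB_CTRL_HWO (1u << 0)	/* stand-in for DWC3_TRB_CTRL_HWO */

struct trb { uint32_t ctrl; };

/* Reclaim num_trbs slots: drop hardware ownership, advance dequeue. */
static void skip_trbs(struct trb *ring, unsigned int *deq,
		      unsigned int ring_size, unsigned int num_trbs)
{
	unsigned int i;

	for (i = 0; i < num_trbs; i++) {
		ring[*deq].ctrl &= ~TRB_CTRL_HWO;
		*deq = (*deq + 1) % ring_size;
	}
}

int main(void)
{
	struct trb ring[8] = { { TRB_CTRL_HWO }, { TRB_CTRL_HWO } };
	unsigned int deq = 0;

	skip_trbs(ring, &deq, 8, 2);	/* request used two TRBs */
	printf("deq=%u hwo0=%u\n", deq, ring[0].ctrl & TRB_CTRL_HWO);
	return 0;
}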
- */ - wait_event_lock_irq(dep->wait_end_transfer, - !(dep->flags & DWC3_EP_END_TRANSFER_PENDING), - dwc->lock); - if (!r->trb) goto out0; - if (r->num_pending_sgs) { - struct dwc3_trb *trb; - int i = 0; - - for (i = 0; i < r->num_pending_sgs; i++) { - trb = r->trb + i; - trb->ctrl &= ~DWC3_TRB_CTRL_HWO; - dwc3_ep_inc_deq(dep); - } - - if (r->unaligned || r->zero) { - trb = r->trb + r->num_pending_sgs + 1; - trb->ctrl &= ~DWC3_TRB_CTRL_HWO; - dwc3_ep_inc_deq(dep); - } - } else { - struct dwc3_trb *trb = r->trb; - - trb->ctrl &= ~DWC3_TRB_CTRL_HWO; - dwc3_ep_inc_deq(dep); - - if (r->unaligned || r->zero) { - trb = r->trb + 1; - trb->ctrl &= ~DWC3_TRB_CTRL_HWO; - dwc3_ep_inc_deq(dep); - } - } - goto out1; + dwc3_gadget_move_cancelled_request(req); + goto out0; } dev_err(dwc->dev, "request %pK was not queued to %s\n", request, ep->name); @@ -1440,9 +1560,6 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep, goto out0; } -out1: - /* giveback the request */ - dwc3_gadget_giveback(dep, req, -ECONNRESET); out0: @@ -1934,8 +2051,6 @@ static int dwc3_gadget_stop(struct usb_gadget *g) { struct dwc3 *dwc = gadget_to_dwc(g); unsigned long flags; - int epnum; - u32 tmo_eps = 0; spin_lock_irqsave(&dwc->lock, flags); @@ -1944,36 +2059,6 @@ static int dwc3_gadget_stop(struct usb_gadget *g) __dwc3_gadget_stop(dwc); - for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) { - struct dwc3_ep *dep = dwc->eps[epnum]; - int ret; - - if (!dep) - continue; - - if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) - continue; - - ret = wait_event_interruptible_lock_irq_timeout(dep->wait_end_transfer, - !(dep->flags & DWC3_EP_END_TRANSFER_PENDING), - dwc->lock, msecs_to_jiffies(5)); - - if (ret <= 0) { - /* Timed out or interrupted! There's nothing much - * we can do so we just log here and print which - * endpoints timed out at the end. - */ - tmo_eps |= 1 << epnum; - dep->flags &= DWC3_EP_END_TRANSFER_PENDING; - } - } - - if (tmo_eps) { - dev_err(dwc->dev, - "end transfer timed out on endpoints 0x%x [bitmap]\n", - tmo_eps); - } - out: dwc->gadget_driver = NULL; spin_unlock_irqrestore(&dwc->lock, flags); @@ -2148,6 +2233,8 @@ static int dwc3_gadget_init_endpoint(struct dwc3 *dwc, u8 epnum) dep->direction = direction; dep->regs = dwc->regs + DWC3_DEP_BASE(epnum); dwc->eps[epnum] = dep; + dep->combo_num = 0; + dep->start_cmd_status = 0; snprintf(dep->name, sizeof(dep->name), "ep%u%s", num, direction ? "in" : "out"); @@ -2176,6 +2263,7 @@ static int dwc3_gadget_init_endpoint(struct dwc3 *dwc, u8 epnum) INIT_LIST_HEAD(&dep->pending_list); INIT_LIST_HEAD(&dep->started_list); + INIT_LIST_HEAD(&dep->cancelled_list); return 0; } @@ -2235,6 +2323,7 @@ static int dwc3_gadget_ep_reclaim_completed_trb(struct dwc3_ep *dep, dwc3_ep_inc_deq(dep); trace_dwc3_complete_trb(dep, trb); + req->num_trbs--; /* * If we're in the middle of series of chained TRBs and we @@ -2249,12 +2338,26 @@ static int dwc3_gadget_ep_reclaim_completed_trb(struct dwc3_ep *dep, if (chain && (trb->ctrl & DWC3_TRB_CTRL_HWO)) trb->ctrl &= ~DWC3_TRB_CTRL_HWO; + /* + * For isochronous transfers, the first TRB in a service interval must + * have the Isoc-First type. Track and report its interval frame number. 
+ */ + if (usb_endpoint_xfer_isoc(dep->endpoint.desc) && + (trb->ctrl & DWC3_TRBCTL_ISOCHRONOUS_FIRST)) { + unsigned int frame_number; + + frame_number = DWC3_TRB_CTRL_GET_SID_SOFN(trb->ctrl); + frame_number &= ~(dep->interval - 1); + req->request.frame_number = frame_number; + } + /* * If we're dealing with unaligned size OUT transfer, we will be left * with one TRB pending in the ring. We need to manually clear HWO bit * from that TRB. */ - if ((req->zero || req->unaligned) && !(trb->ctrl & DWC3_TRB_CTRL_CHN)) { + + if (req->needs_extra_trb && !(trb->ctrl & DWC3_TRB_CTRL_CHN)) { trb->ctrl &= ~DWC3_TRB_CTRL_HWO; return 1; } @@ -2331,11 +2434,10 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep, ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event, status); - if (req->unaligned || req->zero) { + if (req->needs_extra_trb) { ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event, status); - req->unaligned = false; - req->zero = false; + req->needs_extra_trb = false; } req->request.actual = req->request.length - req->remaining; @@ -2430,7 +2532,7 @@ static void dwc3_gadget_endpoint_transfer_not_ready(struct dwc3_ep *dep, const struct dwc3_event_depevt *event) { dwc3_gadget_endpoint_frame_from_event(dep, event); - __dwc3_gadget_start_isoc(dep); + (void) __dwc3_gadget_start_isoc(dep); } static void dwc3_endpoint_interrupt(struct dwc3 *dwc, @@ -2468,7 +2570,7 @@ static void dwc3_endpoint_interrupt(struct dwc3 *dwc, if (cmd == DWC3_DEPCMD_ENDTRANSFER) { dep->flags &= ~DWC3_EP_END_TRANSFER_PENDING; - wake_up(&dep->wait_end_transfer); + dwc3_gadget_ep_cleanup_cancelled_requests(dep); } break; case DWC3_DEPEVT_STREAMEVT: diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h index 2aacd1afd9ff..023a473648eb 100644 --- a/drivers/usb/dwc3/gadget.h +++ b/drivers/usb/dwc3/gadget.h @@ -79,6 +79,21 @@ static inline void dwc3_gadget_move_started_request(struct dwc3_request *req) list_move_tail(&req->list, &dep->started_list); } +/** + * dwc3_gadget_move_cancelled_request - move @req to the cancelled_list + * @req: the request to be moved + * + * Caller should take care of locking. This function will move @req from its + * current list to the endpoint's cancelled_list. + */ +static inline void dwc3_gadget_move_cancelled_request(struct dwc3_request *req) +{ + struct dwc3_ep *dep = req->dep; + + req->started = false; + list_move_tail(&req->list, &dep->cancelled_list); +} + void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req, int status); diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c index 1a3878a3be78..f55947294f7c 100644 --- a/drivers/usb/dwc3/host.c +++ b/drivers/usb/dwc3/host.c @@ -46,7 +46,7 @@ out: int dwc3_host_init(struct dwc3 *dwc) { - struct property_entry props[3]; + struct property_entry props[4]; struct platform_device *xhci; int ret, irq; struct resource *res; @@ -93,6 +93,9 @@ int dwc3_host_init(struct dwc3 *dwc) if (dwc->usb3_lpm_capable) props[prop_idx++].name = "usb3-lpm-capable"; + if (dwc->usb2_lpm_disable) + props[prop_idx++].name = "usb2-lpm-disable"; + /** * WORKAROUND: dwc3 revisions <=3.00a have a limitation * where Port Disable command doesn't work. 
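The SID_SOFN handling added to the reclaim path above undoes the DWC3_TRB_CTRL_SID_SOFN(n) packing from core.h (the 16-bit frame number sits in ctrl bits 29:14) and then masks down to the start of the service interval before reporting it through req->request.frame_number. A standalone check of that arithmetic:

#include <assert.h>
#include <stdint.h>

/* Mirror of DWC3_TRB_CTRL_GET_SID_SOFN(n). */
static uint32_t get_sid_sofn(uint32_t ctrl)
{
	return (ctrl & (0xffffu << 14)) >> 14;
}

int main(void)
{
	uint32_t ctrl = (0x1234u << 14) | 0x7; /* frame 0x1234 + low ctrl bits */

	assert(get_sid_sofn(ctrl) == 0x1234);
	/* align to an 8-uframe service interval, as the handler does */
	assert((get_sid_sofn(ctrl) & ~(8u - 1)) == 0x1230);
	return 0;
}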
diff --git a/drivers/usb/dwc3/trace.h b/drivers/usb/dwc3/trace.h index f22714cce070..e97a00593dda 100644 --- a/drivers/usb/dwc3/trace.h +++ b/drivers/usb/dwc3/trace.h @@ -199,7 +199,7 @@ DECLARE_EVENT_CLASS(dwc3_log_gadget_ep_cmd, __entry->param2 = params->param2; __entry->cmd_status = cmd_status; ), - TP_printk("%s: cmd '%s' [%d] params %08x %08x %08x --> status: %s", + TP_printk("%s: cmd '%s' [%x] params %08x %08x %08x --> status: %s", __get_str(name), dwc3_gadget_ep_cmd_string(__entry->cmd), __entry->cmd, __entry->param0, __entry->param1, __entry->param2, @@ -251,9 +251,11 @@ DECLARE_EVENT_CLASS(dwc3_log_trb, s = "2x "; break; case 3: + default: s = "3x "; break; } + break; default: s = ""; } s; }), diff --git a/drivers/usb/early/ehci-dbgp.c b/drivers/usb/early/ehci-dbgp.c index d633c2abe5a4..ea0d531c63e2 100644 --- a/drivers/usb/early/ehci-dbgp.c +++ b/drivers/usb/early/ehci-dbgp.c @@ -631,28 +631,28 @@ static int ehci_reset_port(int port) if (!(portsc & PORT_RESET)) break; } - if (portsc & PORT_RESET) { - /* force reset to complete */ - loop = 100 * 1000; - writel(portsc & ~(PORT_RWC_BITS | PORT_RESET), - &ehci_regs->port_status[port - 1]); - do { - udelay(1); - portsc = readl(&ehci_regs->port_status[port-1]); - } while ((portsc & PORT_RESET) && (--loop > 0)); - } + if (portsc & PORT_RESET) { + /* force reset to complete */ + loop = 100 * 1000; + writel(portsc & ~(PORT_RWC_BITS | PORT_RESET), + &ehci_regs->port_status[port - 1]); + do { + udelay(1); + portsc = readl(&ehci_regs->port_status[port-1]); + } while ((portsc & PORT_RESET) && (--loop > 0)); + } - /* Device went away? */ - if (!(portsc & PORT_CONNECT)) - return -ENOTCONN; + /* Device went away? */ + if (!(portsc & PORT_CONNECT)) + return -ENOTCONN; - /* bomb out completely if something weird happened */ - if ((portsc & PORT_CSC)) - return -EINVAL; + /* bomb out completely if something weird happened */ + if ((portsc & PORT_CSC)) + return -EINVAL; - /* If we've finished resetting, then break out of the loop */ - if (!(portsc & PORT_RESET) && (portsc & PORT_PE)) - return 0; + /* If we've finished resetting, then break out of the loop */ + if (!(portsc & PORT_RESET) && (portsc & PORT_PE)) + return 0; return -EBUSY; } diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index 31e8bf3578c8..1e5430438703 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -18,11 +18,15 @@ #include #include #include +#include #include +#include #include #include +#include #include +#include #include #include @@ -218,6 +222,8 @@ struct ffs_io_data { struct usb_ep *ep; struct usb_request *req; + struct sg_table sgt; + bool use_sg; struct ffs_data *ffs; }; @@ -749,6 +755,65 @@ static ssize_t ffs_copy_to_iter(void *data, int data_len, struct iov_iter *iter) return ret; } +/* + * allocate a virtually contiguous buffer and create a scatterlist describing it + * @sg_table - pointer to a place to be filled with sg_table contents + * @size - required buffer size + */ +static void *ffs_build_sg_list(struct sg_table *sgt, size_t sz) +{ + struct page **pages; + void *vaddr, *ptr; + unsigned int n_pages; + int i; + + vaddr = vmalloc(sz); + if (!vaddr) + return NULL; + + n_pages = PAGE_ALIGN(sz) >> PAGE_SHIFT; + pages = kvmalloc_array(n_pages, sizeof(struct page *), GFP_KERNEL); + if (!pages) { + vfree(vaddr); + + return NULL; + } + for (i = 0, ptr = vaddr; i < n_pages; ++i, ptr += PAGE_SIZE) + pages[i] = vmalloc_to_page(ptr); + + if (sg_alloc_table_from_pages(sgt, pages, n_pages, 0, sz, 
GFP_KERNEL)) { + kvfree(pages); + vfree(vaddr); + + return NULL; + } + kvfree(pages); + + return vaddr; +} + +static inline void *ffs_alloc_buffer(struct ffs_io_data *io_data, + size_t data_len) +{ + if (io_data->use_sg) + return ffs_build_sg_list(&io_data->sgt, data_len); + + return kmalloc(data_len, GFP_KERNEL); +} + +static inline void ffs_free_buffer(struct ffs_io_data *io_data) +{ + if (!io_data->buf) + return; + + if (io_data->use_sg) { + sg_free_table(&io_data->sgt); + vfree(io_data->buf); + } else { + kfree(io_data->buf); + } +} + static void ffs_user_copy_worker(struct work_struct *work) { struct ffs_io_data *io_data = container_of(work, struct ffs_io_data, @@ -776,7 +841,7 @@ static void ffs_user_copy_worker(struct work_struct *work) if (io_data->read) kfree(io_data->to_free); - kfree(io_data->buf); + ffs_free_buffer(io_data); kfree(io_data); } @@ -932,6 +997,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) * earlier */ gadget = epfile->ffs->gadget; + io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE; spin_lock_irq(&epfile->ffs->eps_lock); /* In the meantime, endpoint got disabled or changed. */ @@ -948,7 +1014,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) data_len = usb_ep_align_maybe(gadget, ep->ep, data_len); spin_unlock_irq(&epfile->ffs->eps_lock); - data = kmalloc(data_len, GFP_KERNEL); + data = ffs_alloc_buffer(io_data, data_len); if (unlikely(!data)) { ret = -ENOMEM; goto error_mutex; @@ -988,8 +1054,16 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) bool interrupted = false; req = ep->req; - req->buf = data; - req->length = data_len; + if (io_data->use_sg) { + req->buf = NULL; + req->sg = io_data->sgt.sgl; + req->num_sgs = io_data->sgt.nents; + } else { + req->buf = data; + } + req->length = data_len; + + io_data->buf = data; req->context = &done; req->complete = ffs_epfile_io_complete; @@ -1022,8 +1096,14 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) } else if (!(req = usb_ep_alloc_request(ep->ep, GFP_ATOMIC))) { ret = -ENOMEM; } else { - req->buf = data; - req->length = data_len; + if (io_data->use_sg) { + req->buf = NULL; + req->sg = io_data->sgt.sgl; + req->num_sgs = io_data->sgt.nents; + } else { + req->buf = data; + } + req->length = data_len; io_data->buf = data; io_data->ep = ep->ep; @@ -1052,7 +1132,7 @@ error_lock: error_mutex: mutex_unlock(&epfile->mutex); error: - kfree(data); + ffs_free_buffer(io_data); return ret; } @@ -1926,7 +2006,7 @@ typedef int (*ffs_os_desc_callback)(enum ffs_os_desc_type entity, static int __must_check ffs_do_single_desc(char *data, unsigned len, ffs_entity_callback entity, - void *priv) + void *priv, int *current_class) { struct usb_descriptor_header *_ds = (void *)data; u8 length; @@ -1984,6 +2064,7 @@ static int __must_check ffs_do_single_desc(char *data, unsigned len, __entity(INTERFACE, ds->bInterfaceNumber); if (ds->iInterface) __entity(STRING, ds->iInterface); + *current_class = ds->bInterfaceClass; } break; @@ -1997,11 +2078,22 @@ static int __must_check ffs_do_single_desc(char *data, unsigned len, } break; - case HID_DT_HID: - pr_vdebug("hid descriptor\n"); - if (length != sizeof(struct hid_descriptor)) - goto inv_length; - break; + case USB_TYPE_CLASS | 0x01: + if (*current_class == USB_INTERFACE_CLASS_HID) { + pr_vdebug("hid descriptor\n"); + if (length != sizeof(struct hid_descriptor)) + goto inv_length; + break; + } else if (*current_class == USB_INTERFACE_CLASS_CCID) { + 
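The descriptor-parsing change above exists because descriptor type 0x21 (USB_TYPE_CLASS | 0x01) has no fixed meaning on its own: it is a HID descriptor after a HID interface and a CCID descriptor after a CCID one, hence the new *current_class parameter threaded through ffs_do_single_desc(). A minimal sketch of that class-scoped dispatch; the class codes follow the USB-IF assignments, while the helper itself is illustrative:

#include <stdio.h>

#define USB_TYPE_CLASS           0x20	/* (0x01 << 5) */
#define USB_INTERFACE_CLASS_HID  3
#define USB_INTERFACE_CLASS_CCID 0x0b	/* smart card */

/* Interpretation of a type-0x21 descriptor depends on the interface class. */
static const char *class_desc_name(int current_class)
{
	switch (current_class) {
	case USB_INTERFACE_CLASS_HID:
		return "HID descriptor";
	case USB_INTERFACE_CLASS_CCID:
		return "CCID descriptor";
	default:
		return "unknown class-specific descriptor";
	}
}

int main(void)
{
	printf("type 0x%02x after a HID interface: %s\n",
	       USB_TYPE_CLASS | 0x01,
	       class_desc_name(USB_INTERFACE_CLASS_HID));
	return 0;
}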
pr_vdebug("ccid descriptor\n"); + if (length != sizeof(struct ccid_descriptor)) + goto inv_length; + break; + } else { + pr_vdebug("unknown descriptor: %d for class %d\n", + _ds->bDescriptorType, *current_class); + return -EINVAL; + } case USB_DT_OTG: if (length != sizeof(struct usb_otg_descriptor)) @@ -2058,6 +2150,7 @@ static int __must_check ffs_do_descs(unsigned count, char *data, unsigned len, { const unsigned _len = len; unsigned long num = 0; + int current_class = -1; ENTER(); @@ -2078,7 +2171,8 @@ static int __must_check ffs_do_descs(unsigned count, char *data, unsigned len, if (!data) return _len - len; - ret = ffs_do_single_desc(data, len, entity, priv); + ret = ffs_do_single_desc(data, len, entity, priv, + ¤t_class); if (unlikely(ret < 0)) { pr_debug("%s returns %d\n", __func__, ret); return ret; diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c index 106988a6661a..34f5982cab78 100644 --- a/drivers/usb/gadget/function/f_tcm.c +++ b/drivers/usb/gadget/function/f_tcm.c @@ -1256,11 +1256,6 @@ static int usbg_check_false(struct se_portal_group *se_tpg) return 0; } -static char *usbg_get_fabric_name(void) -{ - return "usb_gadget"; -} - static char *usbg_get_fabric_wwn(struct se_portal_group *se_tpg) { struct usbg_tpg *tpg = container_of(se_tpg, @@ -1718,8 +1713,7 @@ static int usbg_check_stop_free(struct se_cmd *se_cmd) static const struct target_core_fabric_ops usbg_ops = { .module = THIS_MODULE, - .name = "usb_gadget", - .get_fabric_name = usbg_get_fabric_name, + .fabric_name = "usb_gadget", .tpg_get_wwn = usbg_get_fabric_wwn, .tpg_get_tag = usbg_get_tag, .tpg_check_demo_mode = usbg_check_true, diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c index 0f026d445e31..737bd77a575d 100644 --- a/drivers/usb/gadget/function/u_ether.c +++ b/drivers/usb/gadget/function/u_ether.c @@ -879,7 +879,7 @@ int gether_register_netdev(struct net_device *net) sa.sa_family = net->type; memcpy(sa.sa_data, dev->dev_mac, ETH_ALEN); rtnl_lock(); - status = dev_set_mac_address(net, &sa); + status = dev_set_mac_address(net, &sa, NULL); rtnl_unlock(); if (status) pr_warn("cannot set self ethernet address: %d\n", status); diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c index f2497cb96abb..61e2c94cc0b0 100644 --- a/drivers/usb/gadget/function/uvc_queue.c +++ b/drivers/usb/gadget/function/uvc_queue.c @@ -102,7 +102,7 @@ static void uvc_buffer_queue(struct vb2_buffer *vb) spin_unlock_irqrestore(&queue->irqlock, flags); } -static struct vb2_ops uvc_queue_qops = { +static const struct vb2_ops uvc_queue_qops = { .queue_setup = uvc_queue_setup, .buf_prepare = uvc_buffer_prepare, .buf_queue = uvc_buffer_queue, diff --git a/drivers/usb/gadget/udc/aspeed-vhub/dev.c b/drivers/usb/gadget/udc/aspeed-vhub/dev.c index f0233912bace..6b1b16b17d7d 100644 --- a/drivers/usb/gadget/udc/aspeed-vhub/dev.c +++ b/drivers/usb/gadget/udc/aspeed-vhub/dev.c @@ -438,7 +438,7 @@ static int ast_vhub_udc_stop(struct usb_gadget *gadget) return 0; } -static struct usb_gadget_ops ast_vhub_udc_ops = { +static const struct usb_gadget_ops ast_vhub_udc_ops = { .get_frame = ast_vhub_udc_get_frame, .wakeup = ast_vhub_udc_wakeup, .pullup = ast_vhub_udc_pullup, diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c index afaea11ec771..55c8c8abeacd 100644 --- a/drivers/usb/gadget/udc/pch_udc.c +++ b/drivers/usb/gadget/udc/pch_udc.c @@ -1330,7 +1330,7 @@ static void pch_vbus_gpio_work_rise(struct work_struct 
*irq_work) } /** - * pch_vbus_gpio_irq() - IRQ handler for GPIO intrerrupt for changing VBUS + * pch_vbus_gpio_irq() - IRQ handler for GPIO interrupt for changing VBUS * @irq: Interrupt request number * @dev: Reference to the device structure * diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c index cdffbd1e0316..6e34f9594159 100644 --- a/drivers/usb/gadget/udc/renesas_usb3.c +++ b/drivers/usb/gadget/udc/renesas_usb3.c @@ -358,6 +358,7 @@ struct renesas_usb3 { bool extcon_host; /* check id and set EXTCON_USB_HOST */ bool extcon_usb; /* check vbus and set EXTCON_USB */ bool forced_b_device; + bool start_to_connect; }; #define gadget_to_renesas_usb3(_gadget) \ @@ -476,7 +477,8 @@ static void usb3_init_axi_bridge(struct renesas_usb3 *usb3) static void usb3_init_epc_registers(struct renesas_usb3 *usb3) { usb3_write(usb3, ~0, USB3_USB_INT_STA_1); - usb3_enable_irq_1(usb3, USB_INT_1_VBUS_CNG); + if (!usb3->workaround_for_vbus) + usb3_enable_irq_1(usb3, USB_INT_1_VBUS_CNG); } static bool usb3_wakeup_usb2_phy(struct renesas_usb3 *usb3) @@ -700,8 +702,7 @@ static void usb3_mode_config(struct renesas_usb3 *usb3, bool host, bool a_dev) usb3_set_mode_by_role_sw(usb3, host); usb3_vbus_out(usb3, a_dev); /* for A-Peripheral or forced B-device mode */ - if ((!host && a_dev) || - (usb3->workaround_for_vbus && usb3->forced_b_device)) + if ((!host && a_dev) || usb3->start_to_connect) usb3_connect(usb3); spin_unlock_irqrestore(&usb3->lock, flags); } @@ -2432,7 +2433,11 @@ static ssize_t renesas_usb3_b_device_write(struct file *file, if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count))) return -EFAULT; - if (!strncmp(buf, "1", 1)) + usb3->start_to_connect = false; + if (usb3->workaround_for_vbus && usb3->forced_b_device && + !strncmp(buf, "2", 1)) + usb3->start_to_connect = true; + else if (!strncmp(buf, "1", 1)) usb3->forced_b_device = true; else usb3->forced_b_device = false; @@ -2440,7 +2445,7 @@ static ssize_t renesas_usb3_b_device_write(struct file *file, if (usb3->workaround_for_vbus) usb3_disconnect(usb3); - /* Let this driver call usb3_connect() anyway */ + /* Let this driver call usb3_connect() if needed */ usb3_check_id(usb3); return count; diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c index 8bf5ad7a59ad..af3e63316ace 100644 --- a/drivers/usb/gadget/udc/s3c2410_udc.c +++ b/drivers/usb/gadget/udc/s3c2410_udc.c @@ -119,7 +119,7 @@ static void dprintk(int level, const char *fmt, ...) 
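With the renesas_usb3 change above, the b_device debugfs file effectively grows a third value: writing "1" still forces B-device mode, while "2" (accepted only when the VBUS workaround is active and B-device mode is already forced) additionally requests an immediate connect. A small standalone model of the new parse, with stub state in place of struct renesas_usb3:

#include <stdbool.h>
#include <string.h>

struct usb3_state {
	bool workaround_for_vbus;
	bool forced_b_device;
	bool start_to_connect;
};

/* Mirrors the rewritten renesas_usb3_b_device_write() decision. */
static void b_device_write(struct usb3_state *s, const char *buf)
{
	s->start_to_connect = false;
	if (s->workaround_for_vbus && s->forced_b_device &&
	    !strncmp(buf, "2", 1))
		s->start_to_connect = true;
	else if (!strncmp(buf, "1", 1))
		s->forced_b_device = true;
	else
		s->forced_b_device = false;
}

int main(void)
{
	struct usb3_state s = { .workaround_for_vbus = true };

	b_device_write(&s, "1");	/* like: echo 1 > .../b_device */
	b_device_write(&s, "2");	/* like: echo 2 > .../b_device */
	return s.start_to_connect ? 0 : 1;
}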
} #endif -static int s3c2410_udc_debugfs_seq_show(struct seq_file *m, void *p) +static int s3c2410_udc_debugfs_show(struct seq_file *m, void *p) { u32 addr_reg, pwr_reg, ep_int_reg, usb_int_reg; u32 ep_int_en_reg, usb_int_en_reg, ep0_csr; @@ -168,20 +168,7 @@ static int s3c2410_udc_debugfs_seq_show(struct seq_file *m, void *p) return 0; } - -static int s3c2410_udc_debugfs_fops_open(struct inode *inode, - struct file *file) -{ - return single_open(file, s3c2410_udc_debugfs_seq_show, NULL); -} - -static const struct file_operations s3c2410_udc_debugfs_fops = { - .open = s3c2410_udc_debugfs_fops_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, - .owner = THIS_MODULE, -}; +DEFINE_SHOW_ATTRIBUTE(s3c2410_udc_debugfs); /* io macros */ diff --git a/drivers/usb/host/ehci-omap.c b/drivers/usb/host/ehci-omap.c index 7e4c13346a1e..7d20296cbe9f 100644 --- a/drivers/usb/host/ehci-omap.c +++ b/drivers/usb/host/ehci-omap.c @@ -159,11 +159,12 @@ static int ehci_hcd_omap_probe(struct platform_device *pdev) /* get the PHY device */ phy = devm_usb_get_phy_by_phandle(dev, "phys", i); if (IS_ERR(phy)) { - /* Don't bail out if PHY is not absolutely necessary */ - if (pdata->port_mode[i] != OMAP_EHCI_PORT_MODE_PHY) + ret = PTR_ERR(phy); + if (ret == -ENODEV) { /* no PHY */ + phy = NULL; continue; + } - ret = PTR_ERR(phy); if (ret != -EPROBE_DEFER) dev_err(dev, "Can't get PHY for port %d: %d\n", i, ret); diff --git a/drivers/usb/host/isp1362-hcd.c b/drivers/usb/host/isp1362-hcd.c index b21c386e6a46..28bf8bfb091e 100644 --- a/drivers/usb/host/isp1362-hcd.c +++ b/drivers/usb/host/isp1362-hcd.c @@ -2159,25 +2159,15 @@ static int isp1362_show(struct seq_file *s, void *unused) return 0; } - -static int isp1362_open(struct inode *inode, struct file *file) -{ - return single_open(file, isp1362_show, inode); -} - -static const struct file_operations debug_ops = { - .open = isp1362_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(isp1362); /* expect just one isp1362_hcd per system */ static void create_debug_file(struct isp1362_hcd *isp1362_hcd) { isp1362_hcd->debug_file = debugfs_create_file("isp1362", S_IRUGO, usb_debug_root, - isp1362_hcd, &debug_ops); + isp1362_hcd, + &isp1362_fops); } static void remove_debug_file(struct isp1362_hcd *isp1362_hcd) diff --git a/drivers/usb/host/ohci-mem.c b/drivers/usb/host/ohci-mem.c index b3da3f12e5b1..3965ac0341eb 100644 --- a/drivers/usb/host/ohci-mem.c +++ b/drivers/usb/host/ohci-mem.c @@ -57,14 +57,10 @@ static int ohci_mem_init (struct ohci_hcd *ohci) static void ohci_mem_cleanup (struct ohci_hcd *ohci) { - if (ohci->td_cache) { - dma_pool_destroy (ohci->td_cache); - ohci->td_cache = NULL; - } - if (ohci->ed_cache) { - dma_pool_destroy (ohci->ed_cache); - ohci->ed_cache = NULL; - } + dma_pool_destroy(ohci->td_cache); + ohci->td_cache = NULL; + dma_pool_destroy(ohci->ed_cache); + ohci->ed_cache = NULL; } /*-------------------------------------------------------------------------*/ diff --git a/drivers/usb/host/r8a66597-hcd.c b/drivers/usb/host/r8a66597-hcd.c index 984892dd72f5..42668aeca57c 100644 --- a/drivers/usb/host/r8a66597-hcd.c +++ b/drivers/usb/host/r8a66597-hcd.c @@ -1979,6 +1979,8 @@ static int r8a66597_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, static void r8a66597_endpoint_disable(struct usb_hcd *hcd, struct usb_host_endpoint *hep) +__acquires(r8a66597->lock) +__releases(r8a66597->lock) { struct r8a66597 *r8a66597 = hcd_to_r8a66597(hcd); struct r8a66597_pipe *pipe = (struct 
r8a66597_pipe *)hep->hcpriv; @@ -1991,13 +1993,14 @@ static void r8a66597_endpoint_disable(struct usb_hcd *hcd, return; pipenum = pipe->info.pipenum; + spin_lock_irqsave(&r8a66597->lock, flags); if (pipenum == 0) { kfree(hep->hcpriv); hep->hcpriv = NULL; + spin_unlock_irqrestore(&r8a66597->lock, flags); return; } - spin_lock_irqsave(&r8a66597->lock, flags); pipe_stop(r8a66597, pipe); pipe_irq_disable(r8a66597, pipenum); disable_irq_empty(r8a66597, pipenum); diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c index 01b5818a4be5..e2eece693655 100644 --- a/drivers/usb/host/xhci-hub.c +++ b/drivers/usb/host/xhci-hub.c @@ -714,13 +714,6 @@ void xhci_test_and_clear_bit(struct xhci_hcd *xhci, struct xhci_port *port, } } -/* Updates Link Status for USB 2.1 port */ -static void xhci_hub_report_usb2_link_state(u32 *status, u32 status_reg) -{ - if ((status_reg & PORT_PLS_MASK) == XDEV_U2) - *status |= USB_PORT_STAT_L1; -} - /* Updates Link Status for super Speed port */ static void xhci_hub_report_usb3_link_state(struct xhci_hcd *xhci, u32 *status, u32 status_reg) @@ -802,6 +795,100 @@ static void xhci_del_comp_mod_timer(struct xhci_hcd *xhci, u32 status, } } +static int xhci_handle_usb2_port_link_resume(struct xhci_port *port, + u32 *status, u32 portsc, + unsigned long flags) +{ + struct xhci_bus_state *bus_state; + struct xhci_hcd *xhci; + struct usb_hcd *hcd; + int slot_id; + u32 wIndex; + + hcd = port->rhub->hcd; + bus_state = &port->rhub->bus_state; + xhci = hcd_to_xhci(hcd); + wIndex = port->hcd_portnum; + + if ((portsc & PORT_RESET) || !(portsc & PORT_PE)) { + *status = 0xffffffff; + return -EINVAL; + } + /* did port event handler already start resume timing? */ + if (!bus_state->resume_done[wIndex]) { + /* If not, maybe we are in a host initiated resume? */ + if (test_bit(wIndex, &bus_state->resuming_ports)) { + /* Host initiated resume doesn't time the resume + * signalling using resume_done[]. + * It manually sets RESUME state, sleeps 20ms + * and sets U0 state. This should probably be + * changed, but not right now. + */ + } else { + /* port resume was discovered now and here, + * start resume timing + */ + unsigned long timeout = jiffies + + msecs_to_jiffies(USB_RESUME_TIMEOUT); + + set_bit(wIndex, &bus_state->resuming_ports); + bus_state->resume_done[wIndex] = timeout; + mod_timer(&hcd->rh_timer, timeout); + usb_hcd_start_port_resume(&hcd->self, wIndex); + } + /* Has resume been signalled for USB_RESUME_TIME yet?
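The helper factored out above preserves the pre-existing resume protocol: the first sighting of XDEV_RESUME arms a USB_RESUME_TIMEOUT deadline, and until that deadline passes the port keeps reporting SUSPEND so usbcore will poll again; only afterwards is the link driven to U0. A reduced model of that state machine, with simplified types and tick units standing in for jiffies:

#include <stdbool.h>
#include <stdint.h>

struct resume_track {
	uint64_t resume_deadline;	/* 0 == not timing a resume */
	bool resuming;
};

/* Returns true while the port should still be reported as SUSPEND. */
static bool usb2_resume_status_suspend(struct resume_track *t, uint64_t now)
{
	const uint64_t USB_RESUME_TIMEOUT_TICKS = 20;	/* illustrative */

	if (!t->resume_deadline) {
		/* first sighting: start timing the resume signalling */
		t->resume_deadline = now + USB_RESUME_TIMEOUT_TICKS;
		t->resuming = true;
		return true;
	}
	if (now >= t->resume_deadline) {
		/* timing done: caller may now move the link to U0 */
		t->resume_deadline = 0;
		t->resuming = false;
		return false;
	}
	return true;	/* deadline not reached yet: report SUSPEND */
}

int main(void)
{
	struct resume_track t = { 0, false };

	usb2_resume_status_suspend(&t, 100);	/* arms deadline at 120 */
	return usb2_resume_status_suspend(&t, 125) ? 1 : 0;
}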
*/ + } else if (time_after_eq(jiffies, bus_state->resume_done[wIndex])) { + int time_left; + + xhci_dbg(xhci, "Resume USB2 port %d\n", wIndex + 1); + bus_state->resume_done[wIndex] = 0; + clear_bit(wIndex, &bus_state->resuming_ports); + + set_bit(wIndex, &bus_state->rexit_ports); + + xhci_test_and_clear_bit(xhci, port, PORT_PLC); + xhci_set_link_state(xhci, port, XDEV_U0); + + spin_unlock_irqrestore(&xhci->lock, flags); + time_left = wait_for_completion_timeout( + &bus_state->rexit_done[wIndex], + msecs_to_jiffies(XHCI_MAX_REXIT_TIMEOUT_MS)); + spin_lock_irqsave(&xhci->lock, flags); + + if (time_left) { + slot_id = xhci_find_slot_id_by_port(hcd, xhci, + wIndex + 1); + if (!slot_id) { + xhci_dbg(xhci, "slot_id is zero\n"); + *status = 0xffffffff; + return -ENODEV; + } + xhci_ring_device(xhci, slot_id); + } else { + int port_status = readl(port->addr); + + xhci_warn(xhci, "Port resume %i msec timed out, portsc = 0x%x\n", + XHCI_MAX_REXIT_TIMEOUT_MS, + port_status); + *status |= USB_PORT_STAT_SUSPEND; + clear_bit(wIndex, &bus_state->rexit_ports); + } + + usb_hcd_end_port_resume(&hcd->self, wIndex); + bus_state->port_c_suspend |= 1 << wIndex; + bus_state->suspended_ports &= ~(1 << wIndex); + } else { + /* + * The resume has been signaling for less than + * USB_RESUME_TIME. Report the port status as SUSPEND, + * let the usbcore check port status again and clear + * resume signaling later. + */ + *status |= USB_PORT_STAT_SUSPEND; + } + return 0; +} + static u32 xhci_get_ext_port_status(u32 raw_port_status, u32 port_li) { u32 ext_stat = 0; @@ -818,6 +905,85 @@ static u32 xhci_get_ext_port_status(u32 raw_port_status, u32 port_li) return ext_stat; } +static void xhci_get_usb3_port_status(struct xhci_port *port, u32 *status, + u32 portsc) +{ + struct xhci_bus_state *bus_state; + struct xhci_hcd *xhci; + u32 link_state; + u32 portnum; + + bus_state = &port->rhub->bus_state; + xhci = hcd_to_xhci(port->rhub->hcd); + link_state = portsc & PORT_PLS_MASK; + portnum = port->hcd_portnum; + + /* USB3 specific wPortChange bits + * + * Port link change with port in resume state should not be + * reported to usbcore, as this is an internal state to be + * handled by xhci driver. Reporting PLC to usbcore may + * cause usbcore clearing PLC first and port change event + * irq won't be generated. 
+ */ + + if (portsc & PORT_PLC && (link_state != XDEV_RESUME)) + *status |= USB_PORT_STAT_C_LINK_STATE << 16; + if (portsc & PORT_WRC) + *status |= USB_PORT_STAT_C_BH_RESET << 16; + if (portsc & PORT_CEC) + *status |= USB_PORT_STAT_C_CONFIG_ERROR << 16; + + /* USB3 specific wPortStatus bits */ + if (portsc & PORT_POWER) { + *status |= USB_SS_PORT_STAT_POWER; + /* link state handling */ + if (link_state == XDEV_U0) + bus_state->suspended_ports &= ~(1 << portnum); + } + + xhci_hub_report_usb3_link_state(xhci, status, portsc); + xhci_del_comp_mod_timer(xhci, portsc, portnum); +} + +static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status, + u32 portsc, unsigned long flags) +{ + struct xhci_bus_state *bus_state; + u32 link_state; + u32 portnum; + int ret; + + bus_state = &port->rhub->bus_state; + link_state = portsc & PORT_PLS_MASK; + portnum = port->hcd_portnum; + + /* USB2 wPortStatus bits */ + if (portsc & PORT_POWER) { + *status |= USB_PORT_STAT_POWER; + + /* link state is only valid if port is powered */ + if (link_state == XDEV_U3) + *status |= USB_PORT_STAT_SUSPEND; + if (link_state == XDEV_U2) + *status |= USB_PORT_STAT_L1; + if (link_state == XDEV_U0) { + bus_state->resume_done[portnum] = 0; + clear_bit(portnum, &bus_state->resuming_ports); + if (bus_state->suspended_ports & (1 << portnum)) { + bus_state->suspended_ports &= ~(1 << portnum); + bus_state->port_c_suspend |= 1 << portnum; + } + } + if (link_state == XDEV_RESUME) { + ret = xhci_handle_usb2_port_link_resume(port, status, + portsc, flags); + if (ret) + return; + } + } +} + /* * Converts a raw xHCI port status into the format that external USB 2.0 or USB * 3.0 hubs use. @@ -835,16 +1001,14 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd, __releases(&xhci->lock) __acquires(&xhci->lock) { - struct xhci_hcd *xhci = hcd_to_xhci(hcd); u32 status = 0; - int slot_id; struct xhci_hub *rhub; struct xhci_port *port; rhub = xhci_get_rhub(hcd); port = rhub->ports[wIndex]; - /* wPortChange bits */ + /* common wPortChange bits */ if (raw_port_status & PORT_CSC) status |= USB_PORT_STAT_C_CONNECTION << 16; if (raw_port_status & PORT_PEC) @@ -853,107 +1017,25 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd, status |= USB_PORT_STAT_C_OVERCURRENT << 16; if ((raw_port_status & PORT_RC)) status |= USB_PORT_STAT_C_RESET << 16; - /* USB3.0 only */ - if (hcd->speed >= HCD_USB3) { - /* Port link change with port in resume state should not be - * reported to usbcore, as this is an internal state to be - * handled by xhci driver. Reporting PLC to usbcore may - * cause usbcore clearing PLC first and port change event - * irq won't be generated. - */ - if ((raw_port_status & PORT_PLC) && - (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) - status |= USB_PORT_STAT_C_LINK_STATE << 16; - if ((raw_port_status & PORT_WRC)) - status |= USB_PORT_STAT_C_BH_RESET << 16; - if ((raw_port_status & PORT_CEC)) - status |= USB_PORT_STAT_C_CONFIG_ERROR << 16; - } - if (hcd->speed < HCD_USB3) { - if ((raw_port_status & PORT_PLS_MASK) == XDEV_U3 - && (raw_port_status & PORT_POWER)) - status |= USB_PORT_STAT_SUSPEND; + /* common wPortStatus bits */ + if (raw_port_status & PORT_CONNECT) { + status |= USB_PORT_STAT_CONNECTION; + status |= xhci_port_speed(raw_port_status); } - if ((raw_port_status & PORT_PLS_MASK) == XDEV_RESUME && - !DEV_SUPERSPEED_ANY(raw_port_status) && hcd->speed < HCD_USB3) { - if ((raw_port_status & PORT_RESET) || - !(raw_port_status & PORT_PE)) - return 0xffffffff; - /* did port event handler already start resume timing? 
*/ - if (!bus_state->resume_done[wIndex]) { - /* If not, maybe we are in a host initated resume? */ - if (test_bit(wIndex, &bus_state->resuming_ports)) { - /* Host initated resume doesn't time the resume - * signalling using resume_done[]. - * It manually sets RESUME state, sleeps 20ms - * and sets U0 state. This should probably be - * changed, but not right now. - */ - } else { - /* port resume was discovered now and here, - * start resume timing - */ - unsigned long timeout = jiffies + - msecs_to_jiffies(USB_RESUME_TIMEOUT); - - set_bit(wIndex, &bus_state->resuming_ports); - bus_state->resume_done[wIndex] = timeout; - mod_timer(&hcd->rh_timer, timeout); - usb_hcd_start_port_resume(&hcd->self, wIndex); - } - /* Has resume been signalled for USB_RESUME_TIME yet? */ - } else if (time_after_eq(jiffies, - bus_state->resume_done[wIndex])) { - int time_left; - - xhci_dbg(xhci, "Resume USB2 port %d\n", - wIndex + 1); - bus_state->resume_done[wIndex] = 0; - clear_bit(wIndex, &bus_state->resuming_ports); - - set_bit(wIndex, &bus_state->rexit_ports); - - xhci_test_and_clear_bit(xhci, port, PORT_PLC); - xhci_set_link_state(xhci, port, XDEV_U0); - - spin_unlock_irqrestore(&xhci->lock, flags); - time_left = wait_for_completion_timeout( - &bus_state->rexit_done[wIndex], - msecs_to_jiffies( - XHCI_MAX_REXIT_TIMEOUT_MS)); - spin_lock_irqsave(&xhci->lock, flags); - - if (time_left) { - slot_id = xhci_find_slot_id_by_port(hcd, - xhci, wIndex + 1); - if (!slot_id) { - xhci_dbg(xhci, "slot_id is zero\n"); - return 0xffffffff; - } - xhci_ring_device(xhci, slot_id); - } else { - int port_status = readl(port->addr); - xhci_warn(xhci, "Port resume took longer than %i msec, port status = 0x%x\n", - XHCI_MAX_REXIT_TIMEOUT_MS, - port_status); - status |= USB_PORT_STAT_SUSPEND; - clear_bit(wIndex, &bus_state->rexit_ports); - } + if (raw_port_status & PORT_PE) + status |= USB_PORT_STAT_ENABLE; + if (raw_port_status & PORT_OC) + status |= USB_PORT_STAT_OVERCURRENT; + if (raw_port_status & PORT_RESET) + status |= USB_PORT_STAT_RESET; - usb_hcd_end_port_resume(&hcd->self, wIndex); - bus_state->port_c_suspend |= 1 << wIndex; - bus_state->suspended_ports &= ~(1 << wIndex); - } else { - /* - * The resume has been signaling for less than - * USB_RESUME_TIME. Report the port status as SUSPEND, - * let the usbcore check port status again and clear - * resume signaling later. - */ - status |= USB_PORT_STAT_SUSPEND; - } - } + /* USB2 and USB3 specific bits, including Port Link State */ + if (hcd->speed >= HCD_USB3) + xhci_get_usb3_port_status(port, &status, raw_port_status); + else + xhci_get_usb2_port_status(port, &status, raw_port_status, + flags); /* * Clear stale usb2 resume signalling variables in case port changed * state during resume signalling. 
For example on error @@ -967,44 +1049,6 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd, usb_hcd_end_port_resume(&hcd->self, wIndex); } - - if ((raw_port_status & PORT_PLS_MASK) == XDEV_U0 && - (raw_port_status & PORT_POWER)) { - if (bus_state->suspended_ports & (1 << wIndex)) { - bus_state->suspended_ports &= ~(1 << wIndex); - if (hcd->speed < HCD_USB3) - bus_state->port_c_suspend |= 1 << wIndex; - } - bus_state->resume_done[wIndex] = 0; - clear_bit(wIndex, &bus_state->resuming_ports); - } - if (raw_port_status & PORT_CONNECT) { - status |= USB_PORT_STAT_CONNECTION; - status |= xhci_port_speed(raw_port_status); - } - if (raw_port_status & PORT_PE) - status |= USB_PORT_STAT_ENABLE; - if (raw_port_status & PORT_OC) - status |= USB_PORT_STAT_OVERCURRENT; - if (raw_port_status & PORT_RESET) - status |= USB_PORT_STAT_RESET; - if (raw_port_status & PORT_POWER) { - if (hcd->speed >= HCD_USB3) - status |= USB_SS_PORT_STAT_POWER; - else - status |= USB_PORT_STAT_POWER; - } - /* Update Port Link State */ - if (hcd->speed >= HCD_USB3) { - xhci_hub_report_usb3_link_state(xhci, &status, raw_port_status); - /* - * Verify if all USB3 Ports Have entered U0 already. - * Delete Compliance Mode Timer if so. - */ - xhci_del_comp_mod_timer(xhci, raw_port_status, wIndex); - } else { - xhci_hub_report_usb2_link_state(&status, raw_port_status); - } if (bus_state->port_c_suspend & (1 << wIndex)) status |= USB_PORT_STAT_C_SUSPEND << 16; @@ -1031,7 +1075,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, rhub = xhci_get_rhub(hcd); ports = rhub->ports; max_ports = rhub->num_ports; - bus_state = &xhci->bus_state[hcd_index(hcd)]; + bus_state = &rhub->bus_state; spin_lock_irqsave(&xhci->lock, flags); switch (typeReq) { @@ -1421,7 +1465,7 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf) rhub = xhci_get_rhub(hcd); ports = rhub->ports; max_ports = rhub->num_ports; - bus_state = &xhci->bus_state[hcd_index(hcd)]; + bus_state = &rhub->bus_state; /* Initial status is no changes */ retval = (max_ports + 8) / 8; @@ -1480,7 +1524,7 @@ int xhci_bus_suspend(struct usb_hcd *hcd) rhub = xhci_get_rhub(hcd); ports = rhub->ports; max_ports = rhub->num_ports; - bus_state = &xhci->bus_state[hcd_index(hcd)]; + bus_state = &rhub->bus_state; wake_enabled = hcd->self.root_hub->do_remote_wakeup; spin_lock_irqsave(&xhci->lock, flags); @@ -1623,7 +1667,7 @@ int xhci_bus_resume(struct usb_hcd *hcd) rhub = xhci_get_rhub(hcd); ports = rhub->ports; max_ports = rhub->num_ports; - bus_state = &xhci->bus_state[hcd_index(hcd)]; + bus_state = &rhub->bus_state; if (time_before(jiffies, bus_state->next_statechange)) msleep(5); @@ -1724,13 +1768,10 @@ int xhci_bus_resume(struct usb_hcd *hcd) unsigned long xhci_get_resuming_ports(struct usb_hcd *hcd) { - struct xhci_hcd *xhci = hcd_to_xhci(hcd); - struct xhci_bus_state *bus_state; - - bus_state = &xhci->bus_state[hcd_index(hcd)]; + struct xhci_hub *rhub = xhci_get_rhub(hcd); /* USB3 port wakeups are reported via usb_wakeup_notification() */ - return bus_state->resuming_ports; /* USB2 ports only */ + return rhub->bus_state.resuming_ports; /* USB2 ports only */ } #endif /* CONFIG_PM */ diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index b1f27aa38b10..36a3eb8849f1 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -1922,8 +1922,8 @@ no_bw: xhci->page_size = 0; xhci->page_shift = 0; - xhci->bus_state[0].bus_suspended = 0; - xhci->bus_state[1].bus_suspended = 0; + xhci->usb2_rhub.bus_state.bus_suspended = 0; + 
xhci->usb3_rhub.bus_state.bus_suspended = 0; } static int xhci_test_trb_in_td(struct xhci_hcd *xhci, @@ -2181,23 +2181,11 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports, if (major_revision < 0x03 && xhci->num_ext_caps < max_caps) xhci->ext_caps[xhci->num_ext_caps++] = temp; - /* Check the host's USB2 LPM capability */ - if ((xhci->hci_version == 0x96) && (major_revision != 0x03) && - (temp & XHCI_L1C)) { + if ((xhci->hci_version >= 0x100) && (major_revision != 0x03) && + (temp & XHCI_HLC)) { xhci_dbg_trace(xhci, trace_xhci_dbg_init, - "xHCI 0.96: support USB2 software lpm"); - xhci->sw_lpm_support = 1; - } - - if ((xhci->hci_version >= 0x100) && (major_revision != 0x03)) { - xhci_dbg_trace(xhci, trace_xhci_dbg_init, - "xHCI 1.0: support USB2 software lpm"); - xhci->sw_lpm_support = 1; - if (temp & XHCI_HLC) { - xhci_dbg_trace(xhci, trace_xhci_dbg_init, - "xHCI 1.0: support USB2 hardware lpm"); - xhci->hw_lpm_support = 1; - } + "xHCI 1.0: support USB2 hardware lpm"); + xhci->hw_lpm_support = 1; } port_offset--; @@ -2536,10 +2524,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags) for (i = 0; i < MAX_HC_SLOTS; i++) xhci->devs[i] = NULL; for (i = 0; i < USB_MAXCHILDREN; i++) { - xhci->bus_state[0].resume_done[i] = 0; - xhci->bus_state[1].resume_done[i] = 0; + xhci->usb2_rhub.bus_state.resume_done[i] = 0; + xhci->usb3_rhub.bus_state.resume_done[i] = 0; /* Only the USB 2.0 completions will ever be used. */ - init_completion(&xhci->bus_state[1].rexit_done[i]); + init_completion(&xhci->usb2_rhub.bus_state.rexit_done[i]); } if (scratchpad_alloc(xhci, flags)) diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 65750582133f..40fa25c4d041 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -1593,7 +1593,7 @@ static void handle_port_status(struct xhci_hcd *xhci, } hcd = port->rhub->hcd; - bus_state = &xhci->bus_state[hcd_index(hcd)]; + bus_state = &port->rhub->bus_state; hcd_portnum = port->hcd_portnum; portsc = readl(port->addr); diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index dae3be1b9c8f..46ab9c041091 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -169,7 +169,7 @@ int xhci_reset(struct xhci_hcd *xhci) { u32 command; u32 state; - int ret, i; + int ret; state = readl(&xhci->op_regs->status); @@ -215,11 +215,12 @@ int xhci_reset(struct xhci_hcd *xhci) ret = xhci_handshake(&xhci->op_regs->status, STS_CNR, 0, 10 * 1000 * 1000); - for (i = 0; i < 2; i++) { - xhci->bus_state[i].port_c_suspend = 0; - xhci->bus_state[i].suspended_ports = 0; - xhci->bus_state[i].resuming_ports = 0; - } + xhci->usb2_rhub.bus_state.port_c_suspend = 0; + xhci->usb2_rhub.bus_state.suspended_ports = 0; + xhci->usb2_rhub.bus_state.resuming_ports = 0; + xhci->usb3_rhub.bus_state.port_c_suspend = 0; + xhci->usb3_rhub.bus_state.suspended_ports = 0; + xhci->usb3_rhub.bus_state.resuming_ports = 0; return ret; } @@ -1087,9 +1088,9 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) /* Wait a bit if either of the roothubs need to settle from the * transition into bus suspend. 
*/ - if (time_before(jiffies, xhci->bus_state[0].next_statechange) || - time_before(jiffies, - xhci->bus_state[1].next_statechange)) + + if (time_before(jiffies, xhci->usb2_rhub.bus_state.next_statechange) || + time_before(jiffies, xhci->usb3_rhub.bus_state.next_statechange)) msleep(100); set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); @@ -4388,8 +4389,7 @@ static int xhci_update_device(struct usb_hcd *hcd, struct usb_device *udev) struct xhci_hcd *xhci = hcd_to_xhci(hcd); int portnum = udev->portnum - 1; - if (hcd->speed >= HCD_USB3 || !xhci->sw_lpm_support || - !udev->lpm_capable) + if (hcd->speed >= HCD_USB3 || !udev->lpm_capable) return 0; /* we only support lpm for non-hub device connected to root hub yet */ diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 011dd45f8718..652dc36e3012 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1682,13 +1682,6 @@ struct xhci_bus_state { */ #define XHCI_MAX_REXIT_TIMEOUT_MS 20 -static inline unsigned int hcd_index(struct usb_hcd *hcd) -{ - if (hcd->speed >= HCD_USB3) - return 0; - else - return 1; -} struct xhci_port { __le32 __iomem *addr; int hw_portnum; @@ -1700,6 +1693,8 @@ struct xhci_hub { struct xhci_port **ports; unsigned int num_ports; struct usb_hcd *hcd; + /* keep track of bus suspend info */ + struct xhci_bus_state bus_state; /* supported prococol extended capabiliy values */ u8 maj_rev; u8 min_rev; @@ -1854,13 +1849,9 @@ struct xhci_hcd { unsigned int num_active_eps; unsigned int limit_active_eps; - /* There are two roothubs to keep track of bus suspend info for */ - struct xhci_bus_state bus_state[2]; struct xhci_port *hw_ports; struct xhci_hub usb2_rhub; struct xhci_hub usb3_rhub; - /* support xHCI 0.96 spec USB2 software LPM */ - unsigned sw_lpm_support:1; /* support xHCI 1.0 spec USB2 hardware LPM */ unsigned hw_lpm_support:1; /* Broken Suspend flag for SNPS Suspend resume issue */ diff --git a/drivers/usb/image/microtek.c b/drivers/usb/image/microtek.c index 9f2f563c82ed..607be1f4fe27 100644 --- a/drivers/usb/image/microtek.c +++ b/drivers/usb/image/microtek.c @@ -632,7 +632,6 @@ static struct scsi_host_template mts_scsi_host_template = { .sg_tablesize = SG_ALL, .can_queue = 1, .this_id = -1, - .use_clustering = 1, .emulated = 1, .slave_alloc = mts_slave_alloc, .slave_configure = mts_slave_configure, diff --git a/drivers/usb/misc/appledisplay.c b/drivers/usb/misc/appledisplay.c index 39ca31b4de46..ac92725458b5 100644 --- a/drivers/usb/misc/appledisplay.c +++ b/drivers/usb/misc/appledisplay.c @@ -69,7 +69,6 @@ struct appledisplay { struct delayed_work work; int button_pressed; - spinlock_t lock; struct mutex sysfslock; /* concurrent read and write */ }; @@ -79,7 +78,6 @@ static void appledisplay_complete(struct urb *urb) { struct appledisplay *pdata = urb->context; struct device *dev = &pdata->udev->dev; - unsigned long flags; int status = urb->status; int retval; @@ -105,8 +103,6 @@ static void appledisplay_complete(struct urb *urb) goto exit; } - spin_lock_irqsave(&pdata->lock, flags); - switch(pdata->urbdata[1]) { case ACD_BTN_BRIGHT_UP: case ACD_BTN_BRIGHT_DOWN: @@ -119,8 +115,6 @@ static void appledisplay_complete(struct urb *urb) break; } - spin_unlock_irqrestore(&pdata->lock, flags); - exit: retval = usb_submit_urb(pdata->urb, GFP_ATOMIC); if (retval) { @@ -229,7 +223,6 @@ static int appledisplay_probe(struct usb_interface *iface, pdata->udev = udev; - spin_lock_init(&pdata->lock); INIT_DELAYED_WORK(&pdata->work, appledisplay_work); mutex_init(&pdata->sysfslock); @@ -261,6 +254,7 @@ 
static int appledisplay_probe(struct usb_interface *iface, usb_rcvintpipe(udev, int_in_endpointAddr), pdata->urbdata, ACD_URB_BUFFER_LEN, appledisplay_complete, pdata, 1); + pdata->urb->transfer_flags = URB_NO_TRANSFER_DMA_MAP; if (usb_submit_urb(pdata->urb, GFP_KERNEL)) { retval = -EIO; dev_err(&iface->dev, "Submitting URB failed\n"); diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c index ae70b9bfd797..4fee200795a5 100644 --- a/drivers/usb/mtu3/mtu3_core.c +++ b/drivers/usb/mtu3/mtu3_core.c @@ -180,7 +180,7 @@ static void mtu3_intr_enable(struct mtu3 *mtu) mtu3_writel(mbase, U3D_LV1IESR, value); /* Enable U2 common USB interrupts */ - value = SUSPEND_INTR | RESUME_INTR | RESET_INTR | LPM_RESUME_INTR; + value = SUSPEND_INTR | RESUME_INTR | RESET_INTR; mtu3_writel(mbase, U3D_COMMON_USB_INTR_ENABLE, value); if (mtu->is_u3_ip) { @@ -484,7 +484,7 @@ void mtu3_ep0_setup(struct mtu3 *mtu) mtu3_writel(mtu->mac_base, U3D_EP0CSR, csr); /* Enable EP0 interrupt */ - mtu3_writel(mtu->mac_base, U3D_EPIESR, EP0ISR); + mtu3_writel(mtu->mac_base, U3D_EPIESR, EP0ISR | SETUPENDISR); } static int mtu3_mem_alloc(struct mtu3 *mtu) @@ -578,12 +578,16 @@ static void mtu3_regs_init(struct mtu3 *mtu) if (mtu->is_u3_ip) { /* disable LGO_U1/U2 by default */ mtu3_clrbits(mbase, U3D_LINK_POWER_CONTROL, - SW_U1_ACCEPT_ENABLE | SW_U2_ACCEPT_ENABLE | SW_U1_REQUEST_ENABLE | SW_U2_REQUEST_ENABLE); + /* enable accepting LGO_U1/U2 link commands from the host */ + mtu3_setbits(mbase, U3D_LINK_POWER_CONTROL, + SW_U1_ACCEPT_ENABLE | SW_U2_ACCEPT_ENABLE); /* device responds to u3_exit from the host automatically */ mtu3_clrbits(mbase, U3D_LTSSM_CTRL, SOFT_U3_EXIT_EN); /* automatically build the U2 link when U3 detection fails */ mtu3_setbits(mbase, U3D_USB2_TEST_MODE, U2U3_AUTO_SWITCH); + /* auto-clear SOFT_CONN when clearing USB3_EN while working in HS mode */ + mtu3_setbits(mbase, U3D_U3U2_SWITCH_CTRL, SOFTCON_CLR_AUTO_EN); } mtu3_set_speed(mtu); @@ -592,10 +596,10 @@ static void mtu3_regs_init(struct mtu3 *mtu) mtu3_clrbits(mbase, U3D_LINK_RESET_INFO, WTCHRP_MSK); /* U2/U3 detected by HW */ mtu3_writel(mbase, U3D_DEVICE_CONF, 0); - /* enable QMU 16B checksum */ - mtu3_setbits(mbase, U3D_QCR0, QMU_CS16B_EN); /* vbus detected by HW */ mtu3_clrbits(mbase, U3D_MISC_CTRL, VBUS_FRC_EN | VBUS_ON); + /* enable automatic HW remote wakeup (HRWE) from L1 */ + mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, LPM_HRWE); } static irqreturn_t mtu3_link_isr(struct mtu3 *mtu) @@ -708,12 +712,6 @@ static irqreturn_t mtu3_u2_common_isr(struct mtu3 *mtu) if (u2comm & RESET_INTR) mtu3_gadget_reset(mtu); - if (u2comm & LPM_RESUME_INTR) { - if (!(mtu3_readl(mbase, U3D_POWER_MANAGEMENT) & LPM_HRWE)) - mtu3_setbits(mbase, U3D_USB20_MISC_CONTROL, - LPM_U3_ACK_EN); - } - return IRQ_HANDLED; } diff --git a/drivers/usb/mtu3/mtu3_gadget_ep0.c b/drivers/usb/mtu3/mtu3_gadget_ep0.c index 25216e79cd6e..7cb7ac980446 100644 --- a/drivers/usb/mtu3/mtu3_gadget_ep0.c +++ b/drivers/usb/mtu3/mtu3_gadget_ep0.c @@ -336,9 +336,9 @@ static int ep0_handle_feature_dev(struct mtu3 *mtu, lpc = mtu3_readl(mbase, U3D_LINK_POWER_CONTROL); if (set) - lpc |= SW_U1_ACCEPT_ENABLE; + lpc |= SW_U1_REQUEST_ENABLE; else - lpc &= ~SW_U1_ACCEPT_ENABLE; + lpc &= ~SW_U1_REQUEST_ENABLE; mtu3_writel(mbase, U3D_LINK_POWER_CONTROL, lpc); mtu->u1_enable = !!set; @@ -351,9 +351,9 @@ static int ep0_handle_feature_dev(struct mtu3 *mtu, lpc = mtu3_readl(mbase, U3D_LINK_POWER_CONTROL); if (set) - lpc |= SW_U2_ACCEPT_ENABLE; + lpc |= SW_U2_REQUEST_ENABLE; else - lpc &= ~SW_U2_ACCEPT_ENABLE; + lpc &= ~SW_U2_REQUEST_ENABLE; 
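/*
 * SetFeature(U1_ENABLE)/SetFeature(U2_ENABLE) control whether the device
 * may *initiate* U1/U2 entry, hence the SW_Ux_REQUEST_ENABLE bits here;
 * accepting host-initiated LGO_U1/U2 is governed separately by the
 * SW_Ux_ACCEPT_ENABLE bits, which mtu3_regs_init() above now enables by
 * default.
 */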
mtu3_writel(mbase, U3D_LINK_POWER_CONTROL, lpc); mtu->u2_enable = !!set; @@ -692,9 +692,13 @@ irqreturn_t mtu3_ep0_isr(struct mtu3 *mtu) mtu3_writel(mbase, U3D_EPISR, int_status); /* W1C */ /* only handle ep0's */ - if (!(int_status & EP0ISR)) + if (!(int_status & (EP0ISR | SETUPENDISR))) return IRQ_NONE; + /* abort current SETUP, and process new one */ + if (int_status & SETUPENDISR) + mtu->ep0_state = MU3D_EP0_STATE_SETUP; + csr = mtu3_readl(mbase, U3D_EP0CSR); dev_dbg(mtu->dev, "%s csr=0x%x\n", __func__, csr); diff --git a/drivers/usb/mtu3/mtu3_hw_regs.h b/drivers/usb/mtu3/mtu3_hw_regs.h index a45bb253939f..1d65b7476f23 100644 --- a/drivers/usb/mtu3/mtu3_hw_regs.h +++ b/drivers/usb/mtu3/mtu3_hw_regs.h @@ -104,6 +104,7 @@ /* U3D_EPISR */ #define EPRISR(x) (BIT(16) << (x)) +#define SETUPENDISR BIT(16) #define EPTISR(x) (BIT(0) << (x)) #define EP0ISR BIT(0) @@ -267,6 +268,8 @@ #define U3D_LTSSM_INTR_ENABLE (SSUSB_USB3_MAC_CSR_BASE + 0x013C) #define U3D_LTSSM_INTR (SSUSB_USB3_MAC_CSR_BASE + 0x0140) +#define U3D_U3U2_SWITCH_CTRL (SSUSB_USB3_MAC_CSR_BASE + 0x0170) + /*---------------- SSUSB_USB3_MAC_CSR FIELD DEFINITION ----------------*/ /* U3D_LTSSM_CTRL */ @@ -301,6 +304,9 @@ #define SS_DISABLE_INTR BIT(1) #define SS_INACTIVE_INTR BIT(0) +/* U3D_U3U2_SWITCH_CTRL */ +#define SOFTCON_CLR_AUTO_EN BIT(0) + /*---------------- SSUSB_USB3_SYS_CSR REGISTER DEFINITION ----------------*/ #define U3D_LINK_UX_INACT_TIMER (SSUSB_USB3_SYS_CSR_BASE + 0x020C) diff --git a/drivers/usb/mtu3/mtu3_plat.c b/drivers/usb/mtu3/mtu3_plat.c index 46551f6d16fd..e086630e41a9 100644 --- a/drivers/usb/mtu3/mtu3_plat.c +++ b/drivers/usb/mtu3/mtu3_plat.c @@ -200,6 +200,14 @@ static void ssusb_ip_sw_reset(struct ssusb_mtk *ssusb) mtu3_setbits(ssusb->ippc_base, U3D_SSUSB_IP_PW_CTRL0, SSUSB_IP_SW_RST); udelay(1); mtu3_clrbits(ssusb->ippc_base, U3D_SSUSB_IP_PW_CTRL0, SSUSB_IP_SW_RST); + + /* + * device ip may be powered on in firmware/BROM stage before entering + * kernel stage; + * power down device ip, otherwise ip-sleep will fail when working as + * host only mode + */ + mtu3_setbits(ssusb->ippc_base, U3D_SSUSB_IP_PW_CTRL2, SSUSB_IP_DEV_PDN); } /* ignore the error if the clock does not exist */ diff --git a/drivers/usb/mtu3/mtu3_qmu.c b/drivers/usb/mtu3/mtu3_qmu.c index ff62ba232177..09f19f70fe8f 100644 --- a/drivers/usb/mtu3/mtu3_qmu.c +++ b/drivers/usb/mtu3/mtu3_qmu.c @@ -154,27 +154,6 @@ void mtu3_gpd_ring_free(struct mtu3_ep *mep) memset(ring, 0, sizeof(*ring)); } -/* - * calculate check sum of a gpd or bd - * add "noinline" and "mb" to prevent wrong calculation - */ -static noinline u8 qmu_calc_checksum(u8 *data) -{ - u8 chksum = 0; - int i; - - data[1] = 0x0; /* set checksum to 0 */ - - mb(); /* ensure the gpd/bd is really up-to-date */ - for (i = 0; i < QMU_CHECKSUM_LEN; i++) - chksum += data[i]; - - /* Default: HWO=1, @flag[bit0] */ - chksum += 1; - - return 0xFF - chksum; -} - void mtu3_qmu_resume(struct mtu3_ep *mep) { struct mtu3 *mtu = mep->mtu; @@ -260,7 +239,6 @@ static int mtu3_prepare_tx_gpd(struct mtu3_ep *mep, struct mtu3_request *mreq) if (req->zero) gpd->ext_flag |= GPD_EXT_FLAG_ZLP; - gpd->chksum = qmu_calc_checksum((u8 *)gpd); gpd->flag |= GPD_FLAGS_HWO; mreq->gpd = gpd; @@ -295,7 +273,6 @@ static int mtu3_prepare_rx_gpd(struct mtu3_ep *mep, struct mtu3_request *mreq) gpd->next_gpd = cpu_to_le32(lower_32_bits(enq_dma)); ext_addr |= GPD_EXT_NGP(upper_32_bits(enq_dma)); gpd->rx_ext_addr = cpu_to_le16(ext_addr); - gpd->chksum = qmu_calc_checksum((u8 *)gpd); gpd->flag |= GPD_FLAGS_HWO; mreq->gpd = gpd; 
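With the 16-byte checksum disabled, the HWO flag is the only driver/hardware hand-off left for a GPD; the removed qmu_calc_checksum() had also supplied the mb() that published the descriptor contents before ownership flipped. A minimal sketch of that ordering requirement, using a hypothetical helper (gpd_hand_to_hw() and the explicit wmb() are illustrative, not part of this patch):

static void gpd_hand_to_hw(struct qmu_gpd *gpd)
{
	/* buffer address, length and extension fields are assumed filled */
	wmb();				/* publish GPD contents first */
	gpd->flag |= GPD_FLAGS_HWO;	/* the QMU hardware now owns this GPD */
}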
@@ -323,7 +300,6 @@ int mtu3_qmu_start(struct mtu3_ep *mep) /* set QMU start address */ write_txq_start_addr(mbase, epnum, ring->dma); mtu3_setbits(mbase, MU3D_EP_TXCR0(epnum), TX_DMAREQEN); - mtu3_setbits(mbase, U3D_QCR0, QMU_TX_CS_EN(epnum)); /* send a zero-length packet according to the ZLP flag in the GPD */ mtu3_setbits(mbase, U3D_QCR1, QMU_TX_ZLP(epnum)); mtu3_writel(mbase, U3D_TQERRIESR0, @@ -338,7 +314,6 @@ int mtu3_qmu_start(struct mtu3_ep *mep) } else { write_rxq_start_addr(mbase, epnum, ring->dma); mtu3_setbits(mbase, MU3D_EP_RXCR0(epnum), RX_DMAREQEN); - mtu3_setbits(mbase, U3D_QCR0, QMU_RX_CS_EN(epnum)); /* don't expect ZLP */ mtu3_clrbits(mbase, U3D_QCR3, QMU_RX_ZLP(epnum)); /* move to the next GPD when receiving a ZLP */ @@ -427,7 +402,7 @@ static void qmu_tx_zlp_error_handler(struct mtu3 *mtu, u8 epnum) return; } - dev_dbg(mtu->dev, "%s send ZLP for req=%p\n", __func__, mreq); + dev_dbg(mtu->dev, "%s send ZLP for req=%p\n", __func__, req); mtu3_clrbits(mbase, MU3D_EP_TXCR0(mep->epnum), TX_DMAREQEN); @@ -441,7 +416,6 @@ static void qmu_tx_zlp_error_handler(struct mtu3 *mtu, u8 epnum) /* bypass the current GPD */ gpd_current->flag |= GPD_FLAGS_BPS; - gpd_current->chksum = qmu_calc_checksum((u8 *)gpd_current); gpd_current->flag |= GPD_FLAGS_HWO; /* enable DMAREQEN, switch back to QMU mode */ diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c index 23a0df79ef21..403eb97915f8 100644 --- a/drivers/usb/musb/musb_dsps.c +++ b/drivers/usb/musb/musb_dsps.c @@ -181,9 +181,11 @@ static void dsps_musb_enable(struct musb *musb) musb_writel(reg_base, wrp->epintr_set, epmask); musb_writel(reg_base, wrp->coreintr_set, coremask); - /* start polling for ID change in dual-role idle mode */ - if (musb->xceiv->otg->state == OTG_STATE_B_IDLE && - musb->port_mode == MUSB_OTG) + /* + * start polling for runtime PM active and idle, + * and for ID change in dual-role idle mode. 
+ */ + if (musb->xceiv->otg->state == OTG_STATE_B_IDLE) dsps_mod_timer(glue, -1); } @@ -227,8 +229,13 @@ static int dsps_check_status(struct musb *musb, void *unused) switch (musb->xceiv->otg->state) { case OTG_STATE_A_WAIT_VRISE: - dsps_mod_timer_optional(glue); - break; + if (musb->port_mode == MUSB_HOST) { + musb->xceiv->otg->state = OTG_STATE_A_WAIT_BCON; + dsps_mod_timer_optional(glue); + break; + } + /* fall through */ + case OTG_STATE_A_WAIT_BCON: /* keep VBUS on for host-only mode */ if (musb->port_mode == MUSB_HOST) { @@ -249,6 +256,10 @@ static int dsps_check_status(struct musb *musb, void *unused) musb->xceiv->otg->state = OTG_STATE_A_IDLE; MUSB_HST_MODE(musb); } + + if (musb->port_mode == MUSB_PERIPHERAL) + skip_session = 1; + if (!(devctl & MUSB_DEVCTL_SESSION) && !skip_session) musb_writeb(mregs, MUSB_DEVCTL, MUSB_DEVCTL_SESSION); diff --git a/drivers/usb/renesas_usbhs/common.c b/drivers/usb/renesas_usbhs/common.c index a3e1290d682d..249fbee97f3f 100644 --- a/drivers/usb/renesas_usbhs/common.c +++ b/drivers/usb/renesas_usbhs/common.c @@ -539,6 +539,10 @@ static int usbhsc_drvcllbck_notify_hotplug(struct platform_device *pdev) * platform functions */ static const struct of_device_id usbhs_of_match[] = { + { + .compatible = "renesas,usbhs-r8a774c0", + .data = (void *)USBHS_TYPE_RCAR_GEN3_WITH_PLL, + }, { .compatible = "renesas,usbhs-r8a7790", .data = (void *)USBHS_TYPE_RCAR_GEN2, @@ -841,7 +845,7 @@ static int usbhs_remove(struct platform_device *pdev) return 0; } -static int usbhsc_suspend(struct device *dev) +static __maybe_unused int usbhsc_suspend(struct device *dev) { struct usbhs_priv *priv = dev_get_drvdata(dev); struct usbhs_mod *mod = usbhs_mod_get_current(priv); @@ -857,7 +861,7 @@ static int usbhsc_suspend(struct device *dev) return 0; } -static int usbhsc_resume(struct device *dev) +static __maybe_unused int usbhsc_resume(struct device *dev) { struct usbhs_priv *priv = dev_get_drvdata(dev); struct platform_device *pdev = usbhs_priv_to_pdev(priv); @@ -874,24 +878,7 @@ static int usbhsc_resume(struct device *dev) return 0; } -static int usbhsc_runtime_nop(struct device *dev) -{ - /* Runtime PM callback shared between ->runtime_suspend() - * and ->runtime_resume(). Simply returns success. - * - * This driver re-initializes all registers after - * pm_runtime_get_sync() anyway so there is no need - * to save and restore registers here. - */ - return 0; -} - -static const struct dev_pm_ops usbhsc_pm_ops = { - .suspend = usbhsc_suspend, - .resume = usbhsc_resume, - .runtime_suspend = usbhsc_runtime_nop, - .runtime_resume = usbhsc_runtime_nop, -}; +static SIMPLE_DEV_PM_OPS(usbhsc_pm_ops, usbhsc_suspend, usbhsc_resume); static struct platform_driver renesas_usbhs_driver = { .driver = { diff --git a/drivers/usb/roles/Kconfig b/drivers/usb/roles/Kconfig index f5a5e6f79f1b..e4194ac94510 100644 --- a/drivers/usb/roles/Kconfig +++ b/drivers/usb/roles/Kconfig @@ -1,3 +1,16 @@ +config USB_ROLE_SWITCH + tristate "USB Role Switch Support" + help + A USB Role Switch is a device that can select the USB role - host or + device - for a USB port (connector). In most cases a dual-role capable + USB controller will also represent the switch, but on some platforms + a multiplexer/demultiplexer switch is used to route the data lines on + the USB connector between separate USB host and device controllers. + + Say Y here if your USB connectors support both device and host roles. + To compile the driver as a module, choose M here: the module will be + called roles.ko. 
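The role-switch class itself lands below as drivers/usb/roles/class.c (moved from drivers/usb/common/roles.c). As a rough sketch of the consumer side, assuming the usb_role_switch API from include/linux/usb/role.h (example_set_port_role() and its device argument are illustrative, not part of this patch):

#include <linux/err.h>
#include <linux/usb/role.h>

/* Push a detected role onto the role switch wired to this port. */
static int example_set_port_role(struct device *dev, bool host_connected)
{
	struct usb_role_switch *sw;
	int ret;

	sw = usb_role_switch_get(dev);	/* takes a reference on the switch */
	if (IS_ERR_OR_NULL(sw))
		return sw ? PTR_ERR(sw) : -ENODEV;

	ret = usb_role_switch_set_role(sw, host_connected ?
				       USB_ROLE_HOST : USB_ROLE_DEVICE);
	usb_role_switch_put(sw);	/* drop the reference when done */
	return ret;
}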
+ if USB_ROLE_SWITCH config USB_ROLES_INTEL_XHCI diff --git a/drivers/usb/roles/Makefile b/drivers/usb/roles/Makefile index e44b179ba275..c02873206fc1 100644 --- a/drivers/usb/roles/Makefile +++ b/drivers/usb/roles/Makefile @@ -1 +1,3 @@ -obj-$(CONFIG_USB_ROLES_INTEL_XHCI) += intel-xhci-usb-role-switch.o +obj-$(CONFIG_USB_ROLE_SWITCH) += roles.o +roles-y := class.o +obj-$(CONFIG_USB_ROLES_INTEL_XHCI) += intel-xhci-usb-role-switch.o diff --git a/drivers/usb/common/roles.c b/drivers/usb/roles/class.c similarity index 100% rename from drivers/usb/common/roles.c rename to drivers/usb/roles/class.c diff --git a/drivers/usb/serial/f81534.c b/drivers/usb/serial/f81534.c index 380933db34dd..2b39bda035c7 100644 --- a/drivers/usb/serial/f81534.c +++ b/drivers/usb/serial/f81534.c @@ -45,14 +45,17 @@ #define F81534_CONFIG1_REG (0x09 + F81534_UART_BASE_ADDRESS) #define F81534_DEF_CONF_ADDRESS_START 0x3000 -#define F81534_DEF_CONF_SIZE 8 +#define F81534_DEF_CONF_SIZE 12 #define F81534_CUSTOM_ADDRESS_START 0x2f00 #define F81534_CUSTOM_DATA_SIZE 0x10 #define F81534_CUSTOM_NO_CUSTOM_DATA 0xff #define F81534_CUSTOM_VALID_TOKEN 0xf0 #define F81534_CONF_OFFSET 1 -#define F81534_CONF_GPIO_OFFSET 4 +#define F81534_CONF_INIT_GPIO_OFFSET 4 +#define F81534_CONF_WORK_GPIO_OFFSET 8 +#define F81534_CONF_GPIO_SHUTDOWN 7 +#define F81534_CONF_GPIO_RS232 1 #define F81534_MAX_DATA_BLOCK 64 #define F81534_MAX_BUS_RETRY 20 @@ -1337,8 +1340,19 @@ static int f81534_set_port_output_pin(struct usb_serial_port *port) serial_priv = usb_get_serial_data(serial); port_priv = usb_get_serial_port_data(port); - idx = F81534_CONF_GPIO_OFFSET + port_priv->phy_num; + idx = F81534_CONF_INIT_GPIO_OFFSET + port_priv->phy_num; value = serial_priv->conf_data[idx]; + if (value >= F81534_CONF_GPIO_SHUTDOWN) { + /* + * Newer IC revisions put the transceiver in shutdown mode on + * initial power on. We need to enable it before using the UARTs. + */ + idx = F81534_CONF_WORK_GPIO_OFFSET + port_priv->phy_num; + value = serial_priv->conf_data[idx]; + if (value >= F81534_CONF_GPIO_SHUTDOWN) + value = F81534_CONF_GPIO_RS232; + } + pins = &f81534_port_out_pins[port_priv->phy_num]; for (i = 0; i < ARRAY_SIZE(pins->pin); ++i) { diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index 609198d9594c..1ab2a6191013 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -1130,7 +1130,7 @@ static unsigned short int ftdi_232am_baud_base_to_divisor(int baud, int base) { unsigned short int divisor; /* divisor shifted 3 bits to the left */ - int divisor3 = base / 2 / baud; + int divisor3 = DIV_ROUND_CLOSEST(base, 2 * baud); if ((divisor3 & 0x7) == 7) divisor3++; /* round x.7/8 up to x+1 */ divisor = divisor3 >> 3; @@ -1156,7 +1156,7 @@ static u32 ftdi_232bm_baud_base_to_divisor(int baud, int base) static const unsigned char divfrac[8] = { 0, 3, 2, 4, 1, 5, 6, 7 }; u32 divisor; /* divisor shifted 3 bits to the left */ - int divisor3 = base / 2 / baud; + int divisor3 = DIV_ROUND_CLOSEST(base, 2 * baud); divisor = divisor3 >> 3; divisor |= (u32)divfrac[divisor3 & 0x7] << 14; /* Deal with special cases for highest baud rates. 
*/ @@ -1179,7 +1179,7 @@ static u32 ftdi_2232h_baud_base_to_divisor(int baud, int base) int divisor3; /* hi-speed baud rate is 10-bit sampling instead of 16-bit */ - divisor3 = base * 8 / (baud * 10); + divisor3 = DIV_ROUND_CLOSEST(8 * base, 10 * baud); divisor = divisor3 >> 3; divisor |= (u32)divfrac[divisor3 & 0x7] << 14; diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c index 88828b4b8c44..a698d46ba773 100644 --- a/drivers/usb/serial/mos7840.c +++ b/drivers/usb/serial/mos7840.c @@ -94,6 +94,7 @@ /* The native mos7840/7820 component */ #define USB_VENDOR_ID_MOSCHIP 0x9710 #define MOSCHIP_DEVICE_ID_7840 0x7840 +#define MOSCHIP_DEVICE_ID_7843 0x7843 #define MOSCHIP_DEVICE_ID_7820 0x7820 #define MOSCHIP_DEVICE_ID_7810 0x7810 /* The native component can have its vendor/device id's overridden @@ -176,6 +177,7 @@ enum mos7840_flag { static const struct usb_device_id id_table[] = { {USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7840)}, + {USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7843)}, {USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7820)}, {USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7810)}, {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_2)}, @@ -298,15 +300,10 @@ static int mos7840_set_uart_reg(struct usb_serial_port *port, __u16 reg, val = val & 0x00ff; /* For the UART control registers, the application number need to be Or'ed */ - if (port->serial->num_ports == 4) { + if (port->serial->num_ports == 2 && port->port_number != 0) + val |= ((__u16)port->port_number + 2) << 8; + else val |= ((__u16)port->port_number + 1) << 8; - } else { - if (port->port_number == 0) { - val |= ((__u16)port->port_number + 1) << 8; - } else { - val |= ((__u16)port->port_number + 2) << 8; - } - } dev_dbg(&port->dev, "%s application number is %x\n", __func__, val); return usb_control_msg(dev, usb_sndctrlpipe(dev, 0), MCS_WRREQ, MCS_WR_RTYPE, val, reg, NULL, 0, @@ -332,15 +329,10 @@ static int mos7840_get_uart_reg(struct usb_serial_port *port, __u16 reg, return -ENOMEM; /* Wval is same as application number */ - if (port->serial->num_ports == 4) { + if (port->serial->num_ports == 2 && port->port_number != 0) + Wval = ((__u16)port->port_number + 2) << 8; + else Wval = ((__u16)port->port_number + 1) << 8; - } else { - if (port->port_number == 0) { - Wval = ((__u16)port->port_number + 1) << 8; - } else { - Wval = ((__u16)port->port_number + 2) << 8; - } - } dev_dbg(&port->dev, "%s application number is %x\n", __func__, Wval); ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), MCS_RDREQ, MCS_RD_RTYPE, Wval, reg, buf, VENDOR_READ_LENGTH, @@ -601,7 +593,7 @@ static void mos7840_interrupt_callback(struct urb *urb) struct usb_serial *serial; __u16 Data; unsigned char *data; - __u8 sp[5], st; + __u8 sp[5]; int i, rv = 0; __u16 wval, wreg = 0; int status = urb->status; @@ -644,7 +636,6 @@ static void mos7840_interrupt_callback(struct urb *urb) sp[1] = (__u8) data[1]; sp[2] = (__u8) data[2]; sp[3] = (__u8) data[3]; - st = (__u8) data[4]; for (i = 0; i < serial->num_ports; i++) { mos7840_port = mos7840_get_port_private(serial->port[i]); @@ -1300,7 +1291,6 @@ static int mos7840_write(struct tty_struct *tty, struct usb_serial_port *port, struct urb *urb; /* __u16 Data; */ const unsigned char *current_position = data; - unsigned char *data1; if (mos7840_port_paranoia_check(port, __func__)) return -1; @@ -1361,7 +1351,6 @@ static int mos7840_write(struct tty_struct *tty, struct usb_serial_port *port, mos7840_bulk_out_data_callback, mos7840_port); } - data1 = 
urb->transfer_buffer; dev_dbg(&port->dev, "bulkout endpoint is %d\n", port->bulk_out_endpointAddress); if (mos7840_port->has_led) @@ -1592,7 +1581,6 @@ static int mos7840_send_cmd_write_baud_rate(struct moschip_port *mos7840_port, int divisor = 0; int status; __u16 Data; - unsigned char number; __u16 clk_sel_val; struct usb_serial_port *port; @@ -1606,8 +1594,6 @@ static int mos7840_send_cmd_write_baud_rate(struct moschip_port *mos7840_port, if (mos7840_serial_paranoia_check(port->serial, __func__)) return -1; - number = mos7840_port->port->port_number; - dev_dbg(&port->dev, "%s - baud = %d\n", __func__, baudRate); /* reset clk_uart_sel in spregOffset */ if (baudRate > 115200) { @@ -1697,14 +1683,12 @@ static void mos7840_change_port_settings(struct tty_struct *tty, { int baud; unsigned cflag; - unsigned iflag; __u8 lData; __u8 lParity; __u8 lStop; int status; __u16 Data; struct usb_serial_port *port; - struct usb_serial *serial; if (mos7840_port == NULL) return; @@ -1717,8 +1701,6 @@ static void mos7840_change_port_settings(struct tty_struct *tty, if (mos7840_serial_paranoia_check(port->serial, __func__)) return; - serial = port->serial; - if (!mos7840_port->open) { dev_dbg(&port->dev, "%s - port not opened\n", __func__); return; @@ -1729,7 +1711,6 @@ static void mos7840_change_port_settings(struct tty_struct *tty, lParity = LCR_PAR_NONE; cflag = tty->termios.c_cflag; - iflag = tty->termios.c_iflag; /* Change the number of bits */ switch (cflag & CSIZE) { @@ -2043,7 +2024,8 @@ static int mos7840_probe(struct usb_serial *serial, int device_type; if (product == MOSCHIP_DEVICE_ID_7810 || - product == MOSCHIP_DEVICE_ID_7820) { + product == MOSCHIP_DEVICE_ID_7820 || + product == MOSCHIP_DEVICE_ID_7843) { device_type = product; goto out; } @@ -2077,7 +2059,10 @@ static int mos7840_calc_num_ports(struct usb_serial *serial, int device_type = (unsigned long)usb_get_serial_data(serial); int num_ports; - num_ports = (device_type >> 4) & 0x000F; + if (device_type == MOSCHIP_DEVICE_ID_7843) + num_ports = 3; + else + num_ports = (device_type >> 4) & 0x000F; /* * num_ports is currently never zero as device_type is one of @@ -2132,22 +2117,16 @@ static int mos7840_port_probe(struct usb_serial_port *port) mos7840_port->SpRegOffset = 0x0; mos7840_port->ControlRegOffset = 0x1; mos7840_port->DcrRegOffset = 0x4; - } else if ((mos7840_port->port_num == 2) && (serial->num_ports == 4)) { - mos7840_port->SpRegOffset = 0x8; - mos7840_port->ControlRegOffset = 0x9; - mos7840_port->DcrRegOffset = 0x16; - } else if ((mos7840_port->port_num == 2) && (serial->num_ports == 2)) { - mos7840_port->SpRegOffset = 0xa; - mos7840_port->ControlRegOffset = 0xb; - mos7840_port->DcrRegOffset = 0x19; - } else if ((mos7840_port->port_num == 3) && (serial->num_ports == 4)) { - mos7840_port->SpRegOffset = 0xa; - mos7840_port->ControlRegOffset = 0xb; - mos7840_port->DcrRegOffset = 0x19; - } else if ((mos7840_port->port_num == 4) && (serial->num_ports == 4)) { - mos7840_port->SpRegOffset = 0xc; - mos7840_port->ControlRegOffset = 0xd; - mos7840_port->DcrRegOffset = 0x1c; + } else { + u8 phy_num = mos7840_port->port_num; + + /* Port 2 in the 2-port case uses registers of port 3 */ + if (serial->num_ports == 2) + phy_num = 3; + + mos7840_port->SpRegOffset = 0x8 + 2 * (phy_num - 2); + mos7840_port->ControlRegOffset = 0x9 + 2 * (phy_num - 2); + mos7840_port->DcrRegOffset = 0x16 + 3 * (phy_num - 2); } mos7840_dump_serial_port(port, mos7840_port); mos7840_set_port_private(port, mos7840_port); diff --git a/drivers/usb/serial/option.c 
b/drivers/usb/serial/option.c index 1ce27f3ff7a7..aef15497ff31 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1955,6 +1955,10 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x1b) }, { USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 */ .driver_info = RSVD(4) | RSVD(5) | RSVD(6) }, + { USB_DEVICE(0x2cb7, 0x0104), /* Fibocom NL678 series */ + .driver_info = RSVD(4) | RSVD(5) }, + { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff), /* Fibocom NL678 series */ + .driver_info = RSVD(6) }, { } /* Terminating entry */ }; MODULE_DEVICE_TABLE(usb, option_ids); diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c index a4e0d13fc121..98e7a5df0f6d 100644 --- a/drivers/usb/serial/pl2303.c +++ b/drivers/usb/serial/pl2303.c @@ -91,9 +91,14 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(YCCABLE_VENDOR_ID, YCCABLE_PRODUCT_ID) }, { USB_DEVICE(SUPERIAL_VENDOR_ID, SUPERIAL_PRODUCT_ID) }, { USB_DEVICE(HP_VENDOR_ID, HP_LD220_PRODUCT_ID) }, + { USB_DEVICE(HP_VENDOR_ID, HP_LD220TA_PRODUCT_ID) }, { USB_DEVICE(HP_VENDOR_ID, HP_LD960_PRODUCT_ID) }, + { USB_DEVICE(HP_VENDOR_ID, HP_LD960TA_PRODUCT_ID) }, { USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) }, { USB_DEVICE(HP_VENDOR_ID, HP_LCM960_PRODUCT_ID) }, + { USB_DEVICE(HP_VENDOR_ID, HP_LM920_PRODUCT_ID) }, + { USB_DEVICE(HP_VENDOR_ID, HP_LM940_PRODUCT_ID) }, + { USB_DEVICE(HP_VENDOR_ID, HP_TD620_PRODUCT_ID) }, { USB_DEVICE(CRESSI_VENDOR_ID, CRESSI_EDY_PRODUCT_ID) }, { USB_DEVICE(ZEAGLE_VENDOR_ID, ZEAGLE_N2ITION3_PRODUCT_ID) }, { USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) }, diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h index 26965cc23c17..4e2554d55362 100644 --- a/drivers/usb/serial/pl2303.h +++ b/drivers/usb/serial/pl2303.h @@ -119,10 +119,15 @@ /* Hewlett-Packard POS Pole Displays */ #define HP_VENDOR_ID 0x03f0 +#define HP_LM920_PRODUCT_ID 0x026b +#define HP_TD620_PRODUCT_ID 0x0956 #define HP_LD960_PRODUCT_ID 0x0b39 #define HP_LCM220_PRODUCT_ID 0x3139 #define HP_LCM960_PRODUCT_ID 0x3239 #define HP_LD220_PRODUCT_ID 0x3524 +#define HP_LD220TA_PRODUCT_ID 0x4349 +#define HP_LD960TA_PRODUCT_ID 0x4439 +#define HP_LM940_PRODUCT_ID 0x5039 /* Cressi Edy (diving computer) PC interface */ #define CRESSI_VENDOR_ID 0x04b8 diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c index f2fbe1ec9701..a62981ca7a73 100644 --- a/drivers/usb/serial/quatech2.c +++ b/drivers/usb/serial/quatech2.c @@ -500,7 +500,6 @@ static void qt2_process_read_urb(struct urb *urb) struct usb_serial *serial; struct qt2_serial_private *serial_priv; struct usb_serial_port *port; - struct qt2_port_private *port_priv; bool escapeflag; unsigned char *ch; int i; @@ -514,7 +513,6 @@ static void qt2_process_read_urb(struct urb *urb) serial = urb->context; serial_priv = usb_get_serial_data(serial); port = serial->port[serial_priv->current_port]; - port_priv = usb_get_serial_port_data(port); for (i = 0; i < urb->actual_length; i++) { ch = (unsigned char *)urb->transfer_buffer + i; @@ -566,7 +564,6 @@ static void qt2_process_read_urb(struct urb *urb) serial_priv->current_port = newport; port = serial->port[serial_priv->current_port]; - port_priv = usb_get_serial_port_data(port); i += 3; escapeflag = true; break; diff --git a/drivers/usb/storage/ene_ub6250.c b/drivers/usb/storage/ene_ub6250.c index 4d261e4de9ad..c26129d5b943 100644 --- a/drivers/usb/storage/ene_ub6250.c +++ b/drivers/usb/storage/ene_ub6250.c @@ -1131,7 +1131,7 @@ static 
int ms_lib_alloc_writebuf(struct us_data *us) ms_lib_clear_writebuf(us); -return 0; + return 0; } static int ms_lib_force_setlogical_pair(struct us_data *us, u16 logblk, u16 phyblk) diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c index e227bb5b794f..fde2e71a6ade 100644 --- a/drivers/usb/storage/scsiglue.c +++ b/drivers/usb/storage/scsiglue.c @@ -639,13 +639,6 @@ static const struct scsi_host_template usb_stor_host_template = { */ .max_sectors = 240, - /* - * merge commands... this seems to help performance, but - * periodically someone should test to see which setting is more - * optimal. - */ - .use_clustering = 1, - /* emulated HBA */ .emulated = 1, diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c index 1f7b401c4d04..36742e8e7edc 100644 --- a/drivers/usb/storage/uas.c +++ b/drivers/usb/storage/uas.c @@ -879,6 +879,7 @@ static struct scsi_host_template uas_host_template = { .this_id = -1, .sg_tablesize = SG_NONE, .skip_settle_delay = 1, + .dma_boundary = PAGE_SIZE - 1, }; #define UNUSUAL_DEV(id_vendor, id_product, bcdDeviceMin, bcdDeviceMax, \ diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c index 43b64d9309d0..e9344997329c 100644 --- a/drivers/usb/typec/tcpm/fusb302.c +++ b/drivers/usb/typec/tcpm/fusb302.c @@ -1765,7 +1765,7 @@ static int fusb302_probe(struct i2c_client *client, * to be set by the platform code which also registers the i2c client * for the fusb302. */ - if (device_property_read_string(dev, "fcs,extcon-name", &name) == 0) { + if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) { chip->extcon = extcon_get_extcon_dev(name); if (!chip->extcon) return -EPROBE_DEFER; diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c index dbbd71f754d0..4bc29b586698 100644 --- a/drivers/usb/typec/tcpm/tcpm.c +++ b/drivers/usb/typec/tcpm/tcpm.c @@ -317,6 +317,9 @@ struct tcpm_port { /* Deadline in jiffies to exit src_try_wait state */ unsigned long max_wait; + /* port belongs to a self powered device */ + bool self_powered; + #ifdef CONFIG_DEBUG_FS struct dentry *dentry; struct mutex logbuffer_lock; /* log buffer access lock */ @@ -2210,7 +2213,8 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port) unsigned int i, j, max_mw = 0, max_mv = 0; unsigned int min_src_mv, max_src_mv, src_ma, src_mw; unsigned int min_snk_mv, max_snk_mv; - u32 pdo; + unsigned int max_op_mv; + u32 pdo, src, snk; unsigned int src_pdo = 0, snk_pdo = 0; /* @@ -2260,16 +2264,18 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port) continue; } - if (max_src_mv <= max_snk_mv && - min_src_mv >= min_snk_mv) { + if (min_src_mv <= max_snk_mv && + max_src_mv >= min_snk_mv) { + max_op_mv = min(max_src_mv, max_snk_mv); + src_mw = (max_op_mv * src_ma) / 1000; /* Prefer higher voltages if available */ if ((src_mw == max_mw && - min_src_mv > max_mv) || + max_op_mv > max_mv) || src_mw > max_mw) { src_pdo = i; snk_pdo = j; max_mw = src_mw; - max_mv = max_src_mv; + max_mv = max_op_mv; } } } @@ -2282,16 +2288,18 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port) } if (src_pdo) { - pdo = port->source_caps[src_pdo]; - - port->pps_data.min_volt = pdo_pps_apdo_min_voltage(pdo); - port->pps_data.max_volt = pdo_pps_apdo_max_voltage(pdo); - port->pps_data.max_curr = - min_pps_apdo_current(pdo, port->snk_pdo[snk_pdo]); - port->pps_data.out_volt = - min(pdo_pps_apdo_max_voltage(pdo), port->pps_data.out_volt); - port->pps_data.op_curr = - min(port->pps_data.max_curr, 
port->pps_data.op_curr); + src = port->source_caps[src_pdo]; + snk = port->snk_pdo[snk_pdo]; + + port->pps_data.min_volt = max(pdo_pps_apdo_min_voltage(src), + pdo_pps_apdo_min_voltage(snk)); + port->pps_data.max_volt = min(pdo_pps_apdo_max_voltage(src), + pdo_pps_apdo_max_voltage(snk)); + port->pps_data.max_curr = min_pps_apdo_current(src, snk); + port->pps_data.out_volt = min(port->pps_data.max_volt, + port->pps_data.out_volt); + port->pps_data.op_curr = min(port->pps_data.max_curr, + port->pps_data.op_curr); } return src_pdo; @@ -3254,7 +3262,8 @@ static void run_state_machine(struct tcpm_port *port) case SRC_HARD_RESET_VBUS_OFF: tcpm_set_vconn(port, true); tcpm_set_vbus(port, false); - tcpm_set_roles(port, false, TYPEC_SOURCE, TYPEC_HOST); + tcpm_set_roles(port, port->self_powered, TYPEC_SOURCE, + TYPEC_HOST); tcpm_set_state(port, SRC_HARD_RESET_VBUS_ON, PD_T_SRC_RECOVER); break; case SRC_HARD_RESET_VBUS_ON: @@ -3266,8 +3275,10 @@ static void run_state_machine(struct tcpm_port *port) case SNK_HARD_RESET_SINK_OFF: memset(&port->pps_data, 0, sizeof(port->pps_data)); tcpm_set_vconn(port, false); - tcpm_set_charge(port, false); - tcpm_set_roles(port, false, TYPEC_SINK, TYPEC_DEVICE); + if (port->pd_capable) + tcpm_set_charge(port, false); + tcpm_set_roles(port, port->self_powered, TYPEC_SINK, + TYPEC_DEVICE); /* * VBUS may or may not toggle, depending on the adapter. * If it doesn't toggle, transition to SNK_HARD_RESET_SINK_ON @@ -3297,6 +3308,12 @@ static void run_state_machine(struct tcpm_port *port) * Similar, dual-mode ports in source mode should transition * to PE_SNK_Transition_to_default. */ + if (port->pd_capable) { + tcpm_set_current_limit(port, + tcpm_get_current_limit(port), + 5000); + tcpm_set_charge(port, true); + } tcpm_set_attached_state(port, true); tcpm_set_state(port, SNK_STARTUP, 0); break; @@ -4412,6 +4429,8 @@ sink: return -EINVAL; port->operating_snk_mw = mw / 1000; + port->self_powered = fwnode_property_read_bool(fwnode, "self-powered"); + return 0; } @@ -4720,6 +4739,7 @@ static int tcpm_copy_caps(struct tcpm_port *port, port->typec_caps.prefer_role = tcfg->default_role; port->typec_caps.type = tcfg->type; port->typec_caps.data = tcfg->data; + port->self_powered = port->tcpc->config->self_powered; return 0; } diff --git a/drivers/usb/wusbcore/crypto.c b/drivers/usb/wusbcore/crypto.c index 68ddee86a886..edb7263bff40 100644 --- a/drivers/usb/wusbcore/crypto.c +++ b/drivers/usb/wusbcore/crypto.c @@ -316,7 +316,7 @@ ssize_t wusb_prf(void *out, size_t out_size, goto error_setkey_cbc; } - tfm_aes = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC); + tfm_aes = crypto_alloc_cipher("aes", 0, 0); if (IS_ERR(tfm_aes)) { result = PTR_ERR(tfm_aes); printk(KERN_ERR "E: can't load AES: %d\n", (int)result); diff --git a/drivers/uwb/i1480/dfu/usb.c b/drivers/uwb/i1480/dfu/usb.c index c0430a41e24b..b1b466cbc33e 100644 --- a/drivers/uwb/i1480/dfu/usb.c +++ b/drivers/uwb/i1480/dfu/usb.c @@ -309,7 +309,7 @@ int i1480_usb_cmd(struct i1480 *i1480, const char *cmd_name, size_t cmd_size) if (result < 0) { dev_err(dev, "%s: cannot submit NEEP read: %d\n", cmd_name, result); - goto error_submit_ep1; + goto error_submit_ep1; } /* Now post the command on EP0 */ result = usb_control_msg( diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c index 249472f05509..ce5dd219f2c8 100644 --- a/drivers/vfio/mdev/mdev_sysfs.c +++ b/drivers/vfio/mdev/mdev_sysfs.c @@ -92,8 +92,8 @@ static struct kobj_type mdev_type_ktype = { .release = mdev_type_release, }; -struct mdev_type 
*add_mdev_supported_type(struct mdev_parent *parent, - struct attribute_group *group) +static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent, + struct attribute_group *group) { struct mdev_type *type; int ret; diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig index 42dc1d3d71cf..d0f8e4f5a039 100644 --- a/drivers/vfio/pci/Kconfig +++ b/drivers/vfio/pci/Kconfig @@ -38,3 +38,9 @@ config VFIO_PCI_IGD and LPC bridge config space. To enable Intel IGD assignment through vfio-pci, say Y. + +config VFIO_PCI_NVLINK2 + def_bool y + depends on VFIO_PCI && PPC_POWERNV + help + VFIO PCI support for P9 Witherspoon machine with NVIDIA V100 GPUs diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile index 76d8ec058edd..9662c063a6b1 100644 --- a/drivers/vfio/pci/Makefile +++ b/drivers/vfio/pci/Makefile @@ -1,5 +1,6 @@ vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o +vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o obj-$(CONFIG_VFIO_PCI) += vfio-pci.o diff --git a/drivers/vfio/pci/trace.h b/drivers/vfio/pci/trace.h new file mode 100644 index 000000000000..228ccdb8d1c8 --- /dev/null +++ b/drivers/vfio/pci/trace.h @@ -0,0 +1,102 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * VFIO PCI mmap/mmap_fault tracepoints + * + * Copyright (C) 2018 IBM Corp. All rights reserved. + * Author: Alexey Kardashevskiy + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#undef TRACE_SYSTEM +#define TRACE_SYSTEM vfio_pci + +#if !defined(_TRACE_VFIO_PCI_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_VFIO_PCI_H + +#include <linux/tracepoint.h> + +TRACE_EVENT(vfio_pci_nvgpu_mmap_fault, + TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua, + vm_fault_t ret), + TP_ARGS(pdev, hpa, ua, ret), + + TP_STRUCT__entry( + __field(const char *, name) + __field(unsigned long, hpa) + __field(unsigned long, ua) + __field(int, ret) + ), + + TP_fast_assign( + __entry->name = dev_name(&pdev->dev), + __entry->hpa = hpa; + __entry->ua = ua; + __entry->ret = ret; + ), + + TP_printk("%s: %lx -> %lx ret=%d", __entry->name, __entry->hpa, + __entry->ua, __entry->ret) +); + +TRACE_EVENT(vfio_pci_nvgpu_mmap, + TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua, + unsigned long size, int ret), + TP_ARGS(pdev, hpa, ua, size, ret), + + TP_STRUCT__entry( + __field(const char *, name) + __field(unsigned long, hpa) + __field(unsigned long, ua) + __field(unsigned long, size) + __field(int, ret) + ), + + TP_fast_assign( + __entry->name = dev_name(&pdev->dev), + __entry->hpa = hpa; + __entry->ua = ua; + __entry->size = size; + __entry->ret = ret; + ), + + TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa, + __entry->ua, __entry->size, __entry->ret) +); + +TRACE_EVENT(vfio_pci_npu2_mmap, + TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua, + unsigned long size, int ret), + TP_ARGS(pdev, hpa, ua, size, ret), + + TP_STRUCT__entry( + __field(const char *, name) + __field(unsigned long, hpa) + __field(unsigned long, ua) + __field(unsigned long, size) + __field(int, ret) + ), + + TP_fast_assign( + __entry->name = dev_name(&pdev->dev), + __entry->hpa = hpa; + __entry->ua = ua; + __entry->size = size; + __entry->ret = ret; + ), + + TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa, + __entry->ua, __entry->size, 
__entry->ret) +); + +#endif /* _TRACE_VFIO_PCI_H */ + +#undef TRACE_INCLUDE_PATH +#define TRACE_INCLUDE_PATH . +#undef TRACE_INCLUDE_FILE +#define TRACE_INCLUDE_FILE trace + +/* This part must be outside protection */ +#include <trace/define_trace.h> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index 50cdedfca9fe..ff60bd1ea587 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -56,8 +56,6 @@ module_param(disable_idle_d3, bool, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(disable_idle_d3, "Disable using the PCI D3 low power state for idle, unused devices"); -static DEFINE_MUTEX(driver_lock); - static inline bool vfio_vga_disabled(void) { #ifdef CONFIG_VFIO_PCI_VGA @@ -289,14 +287,37 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev) if (ret) { dev_warn(&vdev->pdev->dev, "Failed to setup Intel IGD regions\n"); - vfio_pci_disable(vdev); - return ret; + goto disable_exit; + } + } + + if (pdev->vendor == PCI_VENDOR_ID_NVIDIA && + IS_ENABLED(CONFIG_VFIO_PCI_NVLINK2)) { + ret = vfio_pci_nvdia_v100_nvlink2_init(vdev); + if (ret && ret != -ENODEV) { + dev_warn(&vdev->pdev->dev, + "Failed to setup NVIDIA NV2 RAM region\n"); + goto disable_exit; + } + } + + if (pdev->vendor == PCI_VENDOR_ID_IBM && + IS_ENABLED(CONFIG_VFIO_PCI_NVLINK2)) { + ret = vfio_pci_ibm_npu2_init(vdev); + if (ret && ret != -ENODEV) { + dev_warn(&vdev->pdev->dev, + "Failed to setup NVIDIA NV2 ATSD region\n"); + goto disable_exit; } } vfio_pci_probe_mmaps(vdev); return 0; + +disable_exit: + vfio_pci_disable(vdev); + return ret; } static void vfio_pci_disable(struct vfio_pci_device *vdev) @@ -393,14 +414,14 @@ static void vfio_pci_release(void *device_data) { struct vfio_pci_device *vdev = device_data; - mutex_lock(&driver_lock); + mutex_lock(&vdev->reflck->lock); if (!(--vdev->refcnt)) { vfio_spapr_pci_eeh_release(vdev->pdev); vfio_pci_disable(vdev); } - mutex_unlock(&driver_lock); + mutex_unlock(&vdev->reflck->lock); module_put(THIS_MODULE); } @@ -413,7 +434,7 @@ static int vfio_pci_open(void *device_data) if (!try_module_get(THIS_MODULE)) return -ENODEV; - mutex_lock(&driver_lock); + mutex_lock(&vdev->reflck->lock); if (!vdev->refcnt) { ret = vfio_pci_enable(vdev); @@ -424,7 +445,7 @@ static int vfio_pci_open(void *device_data) } vdev->refcnt++; error: - mutex_unlock(&driver_lock); + mutex_unlock(&vdev->reflck->lock); if (ret) module_put(THIS_MODULE); return ret; @@ -750,6 +771,12 @@ static long vfio_pci_ioctl(void *device_data, if (ret) return ret; + if (vdev->region[i].ops->add_capability) { + ret = vdev->region[i].ops->add_capability(vdev, + &vdev->region[i], &caps); + if (ret) + return ret; + } } } @@ -1117,6 +1144,15 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma) return -EINVAL; if ((vma->vm_flags & VM_SHARED) == 0) return -EINVAL; + if (index >= VFIO_PCI_NUM_REGIONS) { + int regnum = index - VFIO_PCI_NUM_REGIONS; + struct vfio_pci_region *region = vdev->region + regnum; + + if (region && region->ops && region->ops->mmap && + (region->flags & VFIO_REGION_INFO_FLAG_MMAP)) + return region->ops->mmap(vdev, region, vma); + return -EINVAL; + } if (index >= VFIO_PCI_ROM_REGION_INDEX) return -EINVAL; if (!vdev->bar_mmap_supported[index]) @@ -1187,6 +1223,9 @@ static const struct vfio_device_ops vfio_pci_ops = { .request = vfio_pci_request, }; +static int vfio_pci_reflck_attach(struct vfio_pci_device *vdev); +static void vfio_pci_reflck_put(struct vfio_pci_reflck *reflck); + static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct 
vfio_pci_device *vdev; @@ -1233,6 +1272,14 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) return ret; } + ret = vfio_pci_reflck_attach(vdev); + if (ret) { + vfio_del_group_dev(&pdev->dev); + vfio_iommu_group_put(group, &pdev->dev); + kfree(vdev); + return ret; + } + if (vfio_pci_is_vga(pdev)) { vga_client_register(pdev, vdev, NULL, vfio_pci_set_vga_decode); vga_set_legacy_decoding(pdev, @@ -1264,6 +1311,8 @@ static void vfio_pci_remove(struct pci_dev *pdev) if (!vdev) return; + vfio_pci_reflck_put(vdev->reflck); + vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev); kfree(vdev->region); mutex_destroy(&vdev->ioeventfds_lock); @@ -1320,16 +1369,97 @@ static struct pci_driver vfio_pci_driver = { .err_handler = &vfio_err_handlers, }; +static DEFINE_MUTEX(reflck_lock); + +static struct vfio_pci_reflck *vfio_pci_reflck_alloc(void) +{ + struct vfio_pci_reflck *reflck; + + reflck = kzalloc(sizeof(*reflck), GFP_KERNEL); + if (!reflck) + return ERR_PTR(-ENOMEM); + + kref_init(&reflck->kref); + mutex_init(&reflck->lock); + + return reflck; +} + +static void vfio_pci_reflck_get(struct vfio_pci_reflck *reflck) +{ + kref_get(&reflck->kref); +} + +static int vfio_pci_reflck_find(struct pci_dev *pdev, void *data) +{ + struct vfio_pci_reflck **preflck = data; + struct vfio_device *device; + struct vfio_pci_device *vdev; + + device = vfio_device_get_from_dev(&pdev->dev); + if (!device) + return 0; + + if (pci_dev_driver(pdev) != &vfio_pci_driver) { + vfio_device_put(device); + return 0; + } + + vdev = vfio_device_data(device); + + if (vdev->reflck) { + vfio_pci_reflck_get(vdev->reflck); + *preflck = vdev->reflck; + vfio_device_put(device); + return 1; + } + + vfio_device_put(device); + return 0; +} + +static int vfio_pci_reflck_attach(struct vfio_pci_device *vdev) +{ + bool slot = !pci_probe_reset_slot(vdev->pdev->slot); + + mutex_lock(&reflck_lock); + + if (pci_is_root_bus(vdev->pdev->bus) || + vfio_pci_for_each_slot_or_bus(vdev->pdev, vfio_pci_reflck_find, + &vdev->reflck, slot) <= 0) + vdev->reflck = vfio_pci_reflck_alloc(); + + mutex_unlock(&reflck_lock); + + return PTR_ERR_OR_ZERO(vdev->reflck); +} + +static void vfio_pci_reflck_release(struct kref *kref) +{ + struct vfio_pci_reflck *reflck = container_of(kref, + struct vfio_pci_reflck, + kref); + + kfree(reflck); + mutex_unlock(&reflck_lock); +} + +static void vfio_pci_reflck_put(struct vfio_pci_reflck *reflck) +{ + kref_put_mutex(&reflck->kref, vfio_pci_reflck_release, &reflck_lock); +} + struct vfio_devices { struct vfio_device **devices; int cur_index; int max_index; }; -static int vfio_pci_get_devs(struct pci_dev *pdev, void *data) +static int vfio_pci_get_unused_devs(struct pci_dev *pdev, void *data) { struct vfio_devices *devs = data; struct vfio_device *device; + struct vfio_pci_device *vdev; if (devs->cur_index == devs->max_index) return -ENOSPC; @@ -1343,16 +1473,28 @@ static int vfio_pci_get_devs(struct pci_dev *pdev, void *data) return -EBUSY; } + vdev = vfio_device_data(device); + + /* Fault if the device is not unused */ + if (vdev->refcnt) { + vfio_device_put(device); + return -EBUSY; + } + devs->devices[devs->cur_index++] = device; return 0; } /* - * Attempt to do a bus/slot reset if there are devices affected by a reset for - * this device that are needs_reset and all of the affected devices are unused - * (!refcnt). Callers are required to hold driver_lock when calling this to - * prevent device opens and concurrent bus reset attempts. 
We prevent device - unbinds by acquiring and holding a reference to the vfio_device. + * If a bus or slot reset is available for the provided device and: + * - All of the devices affected by that bus or slot reset are unused + * (!refcnt) + * - At least one of the affected devices is marked dirty via + * needs_reset (such as by lack of FLR support) + * Then attempt to perform that bus or slot reset. Callers are required + * to hold vdev->reflck->lock, protecting the bus/slot reset group from + * concurrent opens. A vfio_device reference is acquired for each device + * to prevent unbinds during the reset operation. * * NB: vfio-core considers a group to be viable even if some devices are * bound to drivers like pci-stub or pcieport. Here we require all devices @@ -1363,7 +1505,7 @@ static void vfio_pci_try_bus_reset(struct vfio_pci_device *vdev) { struct vfio_devices devs = { .cur_index = 0 }; int i = 0, ret = -EINVAL; - bool needs_reset = false, slot = false; + bool slot = false; struct vfio_pci_device *tmp; if (!pci_probe_reset_slot(vdev->pdev->slot)) @@ -1381,28 +1523,36 @@ static void vfio_pci_try_bus_reset(struct vfio_pci_device *vdev) return; if (vfio_pci_for_each_slot_or_bus(vdev->pdev, - vfio_pci_get_devs, &devs, slot)) + vfio_pci_get_unused_devs, + &devs, slot)) goto put_devs; + /* Does at least one need a reset? */ for (i = 0; i < devs.cur_index; i++) { tmp = vfio_device_data(devs.devices[i]); - if (tmp->needs_reset) - needs_reset = true; - if (tmp->refcnt) - goto put_devs; + if (tmp->needs_reset) { + ret = pci_reset_bus(vdev->pdev); + break; + } } - if (needs_reset) - ret = pci_reset_bus(vdev->pdev); - put_devs: for (i = 0; i < devs.cur_index; i++) { tmp = vfio_device_data(devs.devices[i]); + + /* + * If reset was successful, affected devices no longer need + * a reset and we should return all the collateral devices + * to low power. If not successful, we either didn't reset + * the bus or timed out waiting for it, so let's not touch + * the power state. + */ + if (!ret) { tmp->needs_reset = false; - if (!tmp->refcnt && !disable_idle_d3) - pci_set_power_state(tmp->pdev, PCI_D3hot); + if (tmp != vdev && !disable_idle_d3) + pci_set_power_state(tmp->pdev, PCI_D3hot); + } vfio_device_put(devs.devices[i]); } diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c new file mode 100644 index 000000000000..054a2cf9dd8e --- /dev/null +++ b/drivers/vfio/pci/vfio_pci_nvlink2.c @@ -0,0 +1,482 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * VFIO PCI NVIDIA Witherspoon GPU support a.k.a. NVLink2. + * + * Copyright (C) 2018 IBM Corp. All rights reserved. + * Author: Alexey Kardashevskiy + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * Register an on-GPU RAM region for cacheable access. + * + * Derived from original vfio_pci_igd.c: + * Copyright (C) 2016 Red Hat, Inc. All rights reserved.
+ * Author: Alex Williamson + */ + +#include +#include +#include +#include +#include +#include +#include +#include "vfio_pci_private.h" + +#define CREATE_TRACE_POINTS +#include "trace.h" + +EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_nvgpu_mmap_fault); +EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_nvgpu_mmap); +EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_npu2_mmap); + +struct vfio_pci_nvgpu_data { + unsigned long gpu_hpa; /* GPU RAM physical address */ + unsigned long gpu_tgt; /* TGT address of corresponding GPU RAM */ + unsigned long useraddr; /* GPU RAM userspace address */ + unsigned long size; /* Size of the GPU RAM window (usually 128GB) */ + struct mm_struct *mm; + struct mm_iommu_table_group_mem_t *mem; /* Pre-registered RAM descr. */ + struct pci_dev *gpdev; + struct notifier_block group_notifier; +}; + +static size_t vfio_pci_nvgpu_rw(struct vfio_pci_device *vdev, + char __user *buf, size_t count, loff_t *ppos, bool iswrite) +{ + unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS; + struct vfio_pci_nvgpu_data *data = vdev->region[i].data; + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK; + loff_t posaligned = pos & PAGE_MASK, posoff = pos & ~PAGE_MASK; + size_t sizealigned; + void __iomem *ptr; + + if (pos >= vdev->region[i].size) + return -EINVAL; + + count = min(count, (size_t)(vdev->region[i].size - pos)); + + /* + * We map only a bit of GPU RAM for a short time instead of mapping it + * for the guest lifetime as: + * + * 1) we do not know GPU RAM size, only aperture which is 4-8 times + * bigger than actual RAM size (16/32GB RAM vs. 128GB aperture); + * 2) mapping GPU RAM allows CPU to prefetch and if this happens + * before NVLink bridge is reset (which fences GPU RAM), + * hardware management interrupts (HMI) might happen, this + * will freeze NVLink bridge. + * + * This is not fast path anyway. + */ + sizealigned = _ALIGN_UP(posoff + count, PAGE_SIZE); + ptr = ioremap_cache(data->gpu_hpa + posaligned, sizealigned); + if (!ptr) + return -EFAULT; + + if (iswrite) { + if (copy_from_user(ptr + posoff, buf, count)) + count = -EFAULT; + else + *ppos += count; + } else { + if (copy_to_user(buf, ptr + posoff, count)) + count = -EFAULT; + else + *ppos += count; + } + + iounmap(ptr); + + return count; +} + +static void vfio_pci_nvgpu_release(struct vfio_pci_device *vdev, + struct vfio_pci_region *region) +{ + struct vfio_pci_nvgpu_data *data = region->data; + long ret; + + /* If there were any mappings at all... 
*/ + if (data->mm) { + ret = mm_iommu_put(data->mm, data->mem); + WARN_ON(ret); + + mmdrop(data->mm); + } + + vfio_unregister_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY, + &data->group_notifier); + + pnv_npu2_unmap_lpar_dev(data->gpdev); + + kfree(data); +} + +static vm_fault_t vfio_pci_nvgpu_mmap_fault(struct vm_fault *vmf) +{ + vm_fault_t ret; + struct vm_area_struct *vma = vmf->vma; + struct vfio_pci_region *region = vma->vm_private_data; + struct vfio_pci_nvgpu_data *data = region->data; + unsigned long vmf_off = (vmf->address - vma->vm_start) >> PAGE_SHIFT; + unsigned long nv2pg = data->gpu_hpa >> PAGE_SHIFT; + unsigned long vm_pgoff = vma->vm_pgoff & + ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1); + unsigned long pfn = nv2pg + vm_pgoff + vmf_off; + + ret = vmf_insert_pfn(vma, vmf->address, pfn); + trace_vfio_pci_nvgpu_mmap_fault(data->gpdev, pfn << PAGE_SHIFT, + vmf->address, ret); + + return ret; +} + +static const struct vm_operations_struct vfio_pci_nvgpu_mmap_vmops = { + .fault = vfio_pci_nvgpu_mmap_fault, +}; + +static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev, + struct vfio_pci_region *region, struct vm_area_struct *vma) +{ + int ret; + struct vfio_pci_nvgpu_data *data = region->data; + + if (data->useraddr) + return -EPERM; + + if (vma->vm_end - vma->vm_start > data->size) + return -EINVAL; + + vma->vm_private_data = region; + vma->vm_flags |= VM_PFNMAP; + vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops; + + /* + * Calling mm_iommu_newdev() here once as the region is not + * registered yet and therefore the right initialization will happen now. + * Other places will use mm_iommu_find() which returns + * registered @mem and does not go through gup(). + */ + data->useraddr = vma->vm_start; + data->mm = current->mm; + + atomic_inc(&data->mm->mm_count); + ret = (int) mm_iommu_newdev(data->mm, data->useraddr, + (vma->vm_end - vma->vm_start) >> PAGE_SHIFT, + data->gpu_hpa, &data->mem); + + trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr, + vma->vm_end - vma->vm_start, ret); + + return ret; +} + +static int vfio_pci_nvgpu_add_capability(struct vfio_pci_device *vdev, + struct vfio_pci_region *region, struct vfio_info_cap *caps) +{ + struct vfio_pci_nvgpu_data *data = region->data; + struct vfio_region_info_cap_nvlink2_ssatgt cap = { 0 }; + + cap.header.id = VFIO_REGION_INFO_CAP_NVLINK2_SSATGT; + cap.header.version = 1; + cap.tgt = data->gpu_tgt; + + return vfio_info_add_capability(caps, &cap.header, sizeof(cap)); +} + +static const struct vfio_pci_regops vfio_pci_nvgpu_regops = { + .rw = vfio_pci_nvgpu_rw, + .release = vfio_pci_nvgpu_release, + .mmap = vfio_pci_nvgpu_mmap, + .add_capability = vfio_pci_nvgpu_add_capability, +}; + +static int vfio_pci_nvgpu_group_notifier(struct notifier_block *nb, + unsigned long action, void *opaque) +{ + struct kvm *kvm = opaque; + struct vfio_pci_nvgpu_data *data = container_of(nb, + struct vfio_pci_nvgpu_data, + group_notifier); + + if (action == VFIO_GROUP_NOTIFY_SET_KVM && kvm && + pnv_npu2_map_lpar_dev(data->gpdev, + kvm->arch.lpid, MSR_DR | MSR_PR)) + return NOTIFY_BAD; + + return NOTIFY_OK; +} + +int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev) +{ + int ret; + u64 reg[2]; + u64 tgt = 0; + struct device_node *npu_node, *mem_node; + struct pci_dev *npu_dev; + struct vfio_pci_nvgpu_data *data; + uint32_t mem_phandle = 0; + unsigned long events = VFIO_GROUP_NOTIFY_SET_KVM; + + /* + * PCI config space does not tell us about NVLink presence but + * platform does, use this.
+ */ + npu_dev = pnv_pci_get_npu_dev(vdev->pdev, 0); + if (!npu_dev) + return -ENODEV; + + npu_node = pci_device_to_OF_node(npu_dev); + if (!npu_node) + return -EINVAL; + + if (of_property_read_u32(npu_node, "memory-region", &mem_phandle)) + return -EINVAL; + + mem_node = of_find_node_by_phandle(mem_phandle); + if (!mem_node) + return -EINVAL; + + if (of_property_read_variable_u64_array(mem_node, "reg", reg, + ARRAY_SIZE(reg), ARRAY_SIZE(reg)) != + ARRAY_SIZE(reg)) + return -EINVAL; + + if (of_property_read_u64(npu_node, "ibm,device-tgt-addr", &tgt)) { + dev_warn(&vdev->pdev->dev, "No ibm,device-tgt-addr found\n"); + return -EFAULT; + } + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return -ENOMEM; + + data->gpu_hpa = reg[0]; + data->gpu_tgt = tgt; + data->size = reg[1]; + + dev_dbg(&vdev->pdev->dev, "%lx..%lx\n", data->gpu_hpa, + data->gpu_hpa + data->size - 1); + + data->gpdev = vdev->pdev; + data->group_notifier.notifier_call = vfio_pci_nvgpu_group_notifier; + + ret = vfio_register_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY, + &events, &data->group_notifier); + if (ret) + goto free_exit; + + /* + * We have just set KVM, we do not need the listener anymore. + * Also, keeping it registered means that if more than one GPU is + * assigned, we will get several similar notifiers notifying about + * the same device again which does not help with anything. + */ + vfio_unregister_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY, + &data->group_notifier); + + ret = vfio_pci_register_dev_region(vdev, + PCI_VENDOR_ID_NVIDIA | VFIO_REGION_TYPE_PCI_VENDOR_TYPE, + VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM, + &vfio_pci_nvgpu_regops, + data->size, + VFIO_REGION_INFO_FLAG_READ | + VFIO_REGION_INFO_FLAG_WRITE | + VFIO_REGION_INFO_FLAG_MMAP, + data); + if (ret) + goto free_exit; + + return 0; +free_exit: + kfree(data); + + return ret; +} + +/* + * IBM NPU2 bridge + */ +struct vfio_pci_npu2_data { + void *base; /* ATSD register virtual address, for emulated access */ + unsigned long mmio_atsd; /* ATSD physical address */ + unsigned long gpu_tgt; /* TGT address of corresponding GPU RAM */ + unsigned int link_speed; /* The link speed from DT's ibm,nvlink-speed */ +}; + +static size_t vfio_pci_npu2_rw(struct vfio_pci_device *vdev, + char __user *buf, size_t count, loff_t *ppos, bool iswrite) +{ + unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS; + struct vfio_pci_npu2_data *data = vdev->region[i].data; + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK; + + if (pos >= vdev->region[i].size) + return -EINVAL; + + count = min(count, (size_t)(vdev->region[i].size - pos)); + + if (iswrite) { + if (copy_from_user(data->base + pos, buf, count)) + return -EFAULT; + } else { + if (copy_to_user(buf, data->base + pos, count)) + return -EFAULT; + } + *ppos += count; + + return count; +} + +static int vfio_pci_npu2_mmap(struct vfio_pci_device *vdev, + struct vfio_pci_region *region, struct vm_area_struct *vma) +{ + int ret; + struct vfio_pci_npu2_data *data = region->data; + unsigned long req_len = vma->vm_end - vma->vm_start; + + if (req_len != PAGE_SIZE) + return -EINVAL; + + vma->vm_flags |= VM_PFNMAP; + vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); + + ret = remap_pfn_range(vma, vma->vm_start, data->mmio_atsd >> PAGE_SHIFT, + req_len, vma->vm_page_prot); + trace_vfio_pci_npu2_mmap(vdev->pdev, data->mmio_atsd, vma->vm_start, + vma->vm_end - vma->vm_start, ret); + + return ret; +} + +static void vfio_pci_npu2_release(struct vfio_pci_device *vdev, + struct vfio_pci_region *region) +{ 
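+ /* + * Note: data->base stays NULL when no ATSD register was assigned + * (mmio_atsd == 0); memunmap(NULL) is a no-op, so the unconditional + * unmap below is safe either way. + */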
+ struct vfio_pci_npu2_data *data = region->data; + + memunmap(data->base); + kfree(data); +} + +static int vfio_pci_npu2_add_capability(struct vfio_pci_device *vdev, + struct vfio_pci_region *region, struct vfio_info_cap *caps) +{ + struct vfio_pci_npu2_data *data = region->data; + struct vfio_region_info_cap_nvlink2_ssatgt captgt = { 0 }; + struct vfio_region_info_cap_nvlink2_lnkspd capspd = { 0 }; + int ret; + + captgt.header.id = VFIO_REGION_INFO_CAP_NVLINK2_SSATGT; + captgt.header.version = 1; + captgt.tgt = data->gpu_tgt; + + capspd.header.id = VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD; + capspd.header.version = 1; + capspd.link_speed = data->link_speed; + + ret = vfio_info_add_capability(caps, &captgt.header, sizeof(captgt)); + if (ret) + return ret; + + return vfio_info_add_capability(caps, &capspd.header, sizeof(capspd)); +} + +static const struct vfio_pci_regops vfio_pci_npu2_regops = { + .rw = vfio_pci_npu2_rw, + .mmap = vfio_pci_npu2_mmap, + .release = vfio_pci_npu2_release, + .add_capability = vfio_pci_npu2_add_capability, +}; + +int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev) +{ + int ret; + struct vfio_pci_npu2_data *data; + struct device_node *nvlink_dn; + u32 nvlink_index = 0; + struct pci_dev *npdev = vdev->pdev; + struct device_node *npu_node = pci_device_to_OF_node(npdev); + struct pci_controller *hose = pci_bus_to_host(npdev->bus); + u64 mmio_atsd = 0; + u64 tgt = 0; + u32 link_speed = 0xff; + + /* + * PCI config space does not tell us about NVLink presence but + * platform does, use this. + */ + if (!pnv_pci_get_gpu_dev(vdev->pdev)) + return -ENODEV; + + /* + * NPU2 normally has 8 ATSD registers (for concurrency) and 6 links, + * so we can allocate one register per link, using nvlink index as + * a key. + * There is always at least one ATSD register, so as long as at least + * NVLink bridge #0 is passed to the guest, ATSD will be available. + */ + nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0); + if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index", + &nvlink_index))) + return -ENODEV; + + if (of_property_read_u64_index(hose->dn, "ibm,mmio-atsd", nvlink_index, + &mmio_atsd)) { + dev_warn(&vdev->pdev->dev, "No available ATSD found\n"); + mmio_atsd = 0; + } + + if (of_property_read_u64(npu_node, "ibm,device-tgt-addr", &tgt)) { + dev_warn(&vdev->pdev->dev, "No ibm,device-tgt-addr found\n"); + return -EFAULT; + } + + if (of_property_read_u32(npu_node, "ibm,nvlink-speed", &link_speed)) { + dev_warn(&vdev->pdev->dev, "No ibm,nvlink-speed found\n"); + return -EFAULT; + } + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return -ENOMEM; + + data->mmio_atsd = mmio_atsd; + data->gpu_tgt = tgt; + data->link_speed = link_speed; + if (data->mmio_atsd) { + data->base = memremap(data->mmio_atsd, SZ_64K, MEMREMAP_WT); + if (!data->base) { + ret = -ENOMEM; + goto free_exit; + } + } + + /* + * We want to expose the capability even if this specific NVLink + * did not get its own ATSD register because capabilities + * belong to VFIO regions and normally there will be an ATSD register + * assigned to the NVLink bridge. + */ + ret = vfio_pci_register_dev_region(vdev, + PCI_VENDOR_ID_IBM | + VFIO_REGION_TYPE_PCI_VENDOR_TYPE, + VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD, + &vfio_pci_npu2_regops, + data->mmio_atsd ?
PAGE_SIZE : 0, + VFIO_REGION_INFO_FLAG_READ | + VFIO_REGION_INFO_FLAG_WRITE | + VFIO_REGION_INFO_FLAG_MMAP, + data); + if (ret) + goto free_exit; + + return 0; + +free_exit: + kfree(data); + + return ret; +} diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h index cde3b5d3441a..8c0009f00818 100644 --- a/drivers/vfio/pci/vfio_pci_private.h +++ b/drivers/vfio/pci/vfio_pci_private.h @@ -59,6 +59,12 @@ struct vfio_pci_regops { size_t count, loff_t *ppos, bool iswrite); void (*release)(struct vfio_pci_device *vdev, struct vfio_pci_region *region); + int (*mmap)(struct vfio_pci_device *vdev, + struct vfio_pci_region *region, + struct vm_area_struct *vma); + int (*add_capability)(struct vfio_pci_device *vdev, + struct vfio_pci_region *region, + struct vfio_info_cap *caps); }; struct vfio_pci_region { @@ -76,6 +82,11 @@ struct vfio_pci_dummy_resource { struct list_head res_next; }; +struct vfio_pci_reflck { + struct kref kref; + struct mutex lock; +}; + struct vfio_pci_device { struct pci_dev *pdev; void __iomem *barmap[PCI_STD_RESOURCE_END + 1]; @@ -104,6 +115,7 @@ struct vfio_pci_device { bool needs_reset; bool nointx; struct pci_saved_state *pci_saved_state; + struct vfio_pci_reflck *reflck; int refcnt; int ioeventfds_nr; struct eventfd_ctx *err_trigger; @@ -157,4 +169,18 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev) return -ENODEV; } #endif +#ifdef CONFIG_VFIO_PCI_NVLINK2 +extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev); +extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev); +#else +static inline int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev) +{ + return -ENODEV; +} + +static inline int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev) +{ + return -ENODEV; +} +#endif #endif /* VFIO_PCI_PRIVATE_H */ diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c index b30926e11d87..c424913324e3 100644 --- a/drivers/vfio/vfio_iommu_spapr_tce.c +++ b/drivers/vfio/vfio_iommu_spapr_tce.c @@ -152,11 +152,12 @@ static long tce_iommu_unregister_pages(struct tce_container *container, struct mm_iommu_table_group_mem_t *mem; struct tce_iommu_prereg *tcemem; bool found = false; + long ret; if ((vaddr & ~PAGE_MASK) || (size & ~PAGE_MASK)) return -EINVAL; - mem = mm_iommu_find(container->mm, vaddr, size >> PAGE_SHIFT); + mem = mm_iommu_get(container->mm, vaddr, size >> PAGE_SHIFT); if (!mem) return -ENOENT; @@ -168,9 +169,13 @@ static long tce_iommu_unregister_pages(struct tce_container *container, } if (!found) - return -ENOENT; + ret = -ENOENT; + else + ret = tce_iommu_prereg_free(container, tcemem); - return tce_iommu_prereg_free(container, tcemem); + mm_iommu_put(container->mm, mem); + + return ret; } static long tce_iommu_register_pages(struct tce_container *container, @@ -185,22 +190,24 @@ static long tce_iommu_register_pages(struct tce_container *container, ((vaddr + size) < vaddr)) return -EINVAL; - mem = mm_iommu_find(container->mm, vaddr, entries); + mem = mm_iommu_get(container->mm, vaddr, entries); if (mem) { list_for_each_entry(tcemem, &container->prereg_list, next) { - if (tcemem->mem == mem) - return -EBUSY; + if (tcemem->mem == mem) { + ret = -EBUSY; + goto put_exit; + } } + } else { + ret = mm_iommu_new(container->mm, vaddr, entries, &mem); + if (ret) + return ret; } - ret = mm_iommu_get(container->mm, vaddr, entries, &mem); - if (ret) - return ret; - tcemem = kzalloc(sizeof(*tcemem), GFP_KERNEL); if (!tcemem) { - mm_iommu_put(container->mm, mem); - 
return -ENOMEM; + ret = -ENOMEM; + goto put_exit; } tcemem->mem = mem; @@ -209,10 +216,22 @@ static long tce_iommu_register_pages(struct tce_container *container, container->enabled = true; return 0; + +put_exit: + mm_iommu_put(container->mm, mem); + return ret; } -static bool tce_page_is_contained(struct page *page, unsigned page_shift) +static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa, + unsigned int page_shift) { + struct page *page; + unsigned long size = 0; + + if (mm_iommu_is_devmem(mm, hpa, page_shift, &size)) + return size == (1UL << page_shift); + + page = pfn_to_page(hpa >> PAGE_SHIFT); /* * Check that the TCE table granularity is not bigger than the size of * a page we just found. Otherwise the hardware can get access to @@ -371,6 +390,7 @@ static void tce_iommu_release(void *iommu_data) { struct tce_container *container = iommu_data; struct tce_iommu_group *tcegrp; + struct tce_iommu_prereg *tcemem, *tmtmp; long i; while (tce_groups_attached(container)) { @@ -393,13 +413,8 @@ static void tce_iommu_release(void *iommu_data) tce_iommu_free_table(container, tbl); } - while (!list_empty(&container->prereg_list)) { - struct tce_iommu_prereg *tcemem; - - tcemem = list_first_entry(&container->prereg_list, - struct tce_iommu_prereg, next); - WARN_ON_ONCE(tce_iommu_prereg_free(container, tcemem)); - } + list_for_each_entry_safe(tcemem, tmtmp, &container->prereg_list, next) + WARN_ON(tce_iommu_prereg_free(container, tcemem)); tce_iommu_disable(container); if (container->mm) @@ -492,7 +507,8 @@ static int tce_iommu_clear(struct tce_container *container, direction = DMA_NONE; oldhpa = 0; - ret = iommu_tce_xchg(tbl, entry, &oldhpa, &direction); + ret = iommu_tce_xchg(container->mm, tbl, entry, &oldhpa, + &direction); if (ret) continue; @@ -530,7 +546,6 @@ static long tce_iommu_build(struct tce_container *container, enum dma_data_direction direction) { long i, ret = 0; - struct page *page; unsigned long hpa; enum dma_data_direction dirtmp; @@ -541,15 +556,16 @@ static long tce_iommu_build(struct tce_container *container, if (ret) break; - page = pfn_to_page(hpa >> PAGE_SHIFT); - if (!tce_page_is_contained(page, tbl->it_page_shift)) { + if (!tce_page_is_contained(container->mm, hpa, + tbl->it_page_shift)) { ret = -EPERM; break; } hpa |= offset; dirtmp = direction; - ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp); + ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa, + &dirtmp); if (ret) { tce_iommu_unuse_page(container, hpa); pr_err("iommu_tce: %s failed ioba=%lx, tce=%lx, ret=%ld\n", @@ -576,7 +592,6 @@ static long tce_iommu_build_v2(struct tce_container *container, enum dma_data_direction direction) { long i, ret = 0; - struct page *page; unsigned long hpa; enum dma_data_direction dirtmp; @@ -589,8 +604,8 @@ static long tce_iommu_build_v2(struct tce_container *container, if (ret) break; - page = pfn_to_page(hpa >> PAGE_SHIFT); - if (!tce_page_is_contained(page, tbl->it_page_shift)) { + if (!tce_page_is_contained(container->mm, hpa, + tbl->it_page_shift)) { ret = -EPERM; break; } @@ -603,7 +618,8 @@ static long tce_iommu_build_v2(struct tce_container *container, if (mm_iommu_mapped_inc(mem)) break; - ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp); + ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa, + &dirtmp); if (ret) { /* dirtmp cannot be DMA_NONE here */ tce_iommu_unuse_page_v2(container, tbl, entry + i); diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index e87ef969a0fd..36f3d0f49e60 100644 --- a/drivers/vhost/net.c +++ 
b/drivers/vhost/net.c @@ -141,6 +141,10 @@ struct vhost_net { unsigned tx_zcopy_err; /* Flush in progress. Protected by tx vq lock. */ bool tx_flush; + /* Private page frag */ + struct page_frag page_frag; + /* Refcount bias of page frag */ + int refcnt_bias; }; static unsigned vhost_net_zcopy_mask __read_mostly; @@ -643,14 +647,53 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len) !vhost_vq_avail_empty(vq->dev, vq); } +#define SKB_FRAG_PAGE_ORDER get_order(32768) + +static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz, + struct page_frag *pfrag, gfp_t gfp) +{ + if (pfrag->page) { + if (pfrag->offset + sz <= pfrag->size) + return true; + __page_frag_cache_drain(pfrag->page, net->refcnt_bias); + } + + pfrag->offset = 0; + net->refcnt_bias = 0; + if (SKB_FRAG_PAGE_ORDER) { + /* Avoid direct reclaim but allow kswapd to wake */ + pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) | + __GFP_COMP | __GFP_NOWARN | + __GFP_NORETRY, + SKB_FRAG_PAGE_ORDER); + if (likely(pfrag->page)) { + pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER; + goto done; + } + } + pfrag->page = alloc_page(gfp); + if (likely(pfrag->page)) { + pfrag->size = PAGE_SIZE; + goto done; + } + return false; + +done: + net->refcnt_bias = USHRT_MAX; + page_ref_add(pfrag->page, USHRT_MAX - 1); + return true; +} + #define VHOST_NET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD) static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, struct iov_iter *from) { struct vhost_virtqueue *vq = &nvq->vq; + struct vhost_net *net = container_of(vq->dev, struct vhost_net, + dev); struct socket *sock = vq->private_data; - struct page_frag *alloc_frag = &current->task_frag; + struct page_frag *alloc_frag = &net->page_frag; struct virtio_net_hdr *gso; struct xdp_buff *xdp = &nvq->xdp[nvq->batched_xdp]; struct tun_xdp_hdr *hdr; @@ -671,7 +714,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, buflen += SKB_DATA_ALIGN(len + pad); alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES); - if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL))) + if (unlikely(!vhost_net_page_frag_refill(net, buflen, + alloc_frag, GFP_KERNEL))) return -ENOMEM; buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset; @@ -709,7 +753,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, xdp->data_end = xdp->data + len; hdr->buflen = buflen; - get_page(alloc_frag->page); + --net->refcnt_bias; alloc_frag->offset += buflen; ++nvq->batched_xdp; @@ -1298,6 +1342,8 @@ static int vhost_net_open(struct inode *inode, struct file *f) vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev); f->private_data = n; + n->page_frag.page = NULL; + n->refcnt_bias = 0; return 0; } @@ -1372,6 +1418,8 @@ static int vhost_net_release(struct inode *inode, struct file *f) kfree(n->vqs[VHOST_NET_VQ_RX].rxq.queue); kfree(n->vqs[VHOST_NET_VQ_TX].xdp); kfree(n->dev.vqs); + if (n->page_frag.page) + __page_frag_cache_drain(n->page_frag.page, n->refcnt_bias); kvfree(n); return 0; } diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 50dffe83714c..a08472ae5b1b 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -285,11 +285,6 @@ static int vhost_scsi_check_false(struct se_portal_group *se_tpg) return 0; } -static char *vhost_scsi_get_fabric_name(void) -{ - return "vhost"; -} - static char *vhost_scsi_get_fabric_wwn(struct se_portal_group *se_tpg) { struct vhost_scsi_tpg *tpg = container_of(se_tpg, @@ -2289,8 +2284,7 @@ static struct configfs_attribute
*vhost_scsi_wwn_attrs[] = { static const struct target_core_fabric_ops vhost_scsi_ops = { .module = THIS_MODULE, - .name = "vhost", - .get_fabric_name = vhost_scsi_get_fabric_name, + .fabric_name = "vhost", .tpg_get_wwn = vhost_scsi_get_fabric_wwn, .tpg_get_tag = vhost_scsi_get_tpgt, .tpg_check_demo_mode = vhost_scsi_check_true, diff --git a/drivers/video/backlight/pm8941-wled.c b/drivers/video/backlight/pm8941-wled.c index 0b6d21955d91..da6aab20fdcb 100644 --- a/drivers/video/backlight/pm8941-wled.c +++ b/drivers/video/backlight/pm8941-wled.c @@ -330,7 +330,7 @@ static int pm8941_wled_configure(struct pm8941_wled *wled, struct device *dev) rc = of_property_read_string(dev->of_node, "label", &wled->name); if (rc) - wled->name = dev->of_node->name; + wled->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node); *cfg = pm8941_wled_config_defaults; for (i = 0; i < ARRAY_SIZE(u32_opts); ++i) { diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c index 3093655c7b92..1475ed5ffcde 100644 --- a/drivers/virt/vboxguest/vboxguest_core.c +++ b/drivers/virt/vboxguest/vboxguest_core.c @@ -1312,7 +1312,7 @@ static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev, return -EINVAL; } - if (f32bit) + if (IS_ENABLED(CONFIG_COMPAT) && f32bit) ret = vbg_hgcm_call32(gdev, client_id, call->function, call->timeout_ms, VBG_IOCTL_HGCM_CALL_PARMS32(call), diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 814b395007b2..cd7e755484e3 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -44,6 +44,26 @@ } while (0) #define END_USE(_vq) \ do { BUG_ON(!(_vq)->in_use); (_vq)->in_use = 0; } while(0) +#define LAST_ADD_TIME_UPDATE(_vq) \ + do { \ + ktime_t now = ktime_get(); \ + \ + /* No kick or get, with .1 second between? Warn. */ \ + if ((_vq)->last_add_time_valid) \ + WARN_ON(ktime_to_ms(ktime_sub(now, \ + (_vq)->last_add_time)) > 100); \ + (_vq)->last_add_time = now; \ + (_vq)->last_add_time_valid = true; \ + } while (0) +#define LAST_ADD_TIME_CHECK(_vq) \ + do { \ + if ((_vq)->last_add_time_valid) { \ + WARN_ON(ktime_to_ms(ktime_sub(ktime_get(), \ + (_vq)->last_add_time)) > 100); \ + } \ + } while (0) +#define LAST_ADD_TIME_INVALID(_vq) \ + ((_vq)->last_add_time_valid = false) #else #define BAD_RING(_vq, fmt, args...) \ do { \ @@ -53,18 +73,38 @@ } while (0) #define START_USE(vq) #define END_USE(vq) +#define LAST_ADD_TIME_UPDATE(vq) +#define LAST_ADD_TIME_CHECK(vq) +#define LAST_ADD_TIME_INVALID(vq) #endif -struct vring_desc_state { +struct vring_desc_state_split { void *data; /* Data for callback. */ struct vring_desc *indir_desc; /* Indirect descriptor, if any. */ }; +struct vring_desc_state_packed { + void *data; /* Data for callback. */ + struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */ + u16 num; /* Descriptor list length. */ + u16 next; /* The next desc state in a list. */ + u16 last; /* The last desc state in a list. */ +}; + +struct vring_desc_extra_packed { + dma_addr_t addr; /* Buffer DMA addr. */ + u32 len; /* Buffer length. */ + u16 flags; /* Descriptor flags. */ +}; + struct vring_virtqueue { struct virtqueue vq; - /* Actual memory layout for this queue */ - struct vring vring; + /* Is this a packed ring? */ + bool packed_ring; + + /* Is DMA API used? */ + bool use_dma_api; /* Can we use weak barriers? */ bool weak_barriers; @@ -86,19 +126,70 @@ struct vring_virtqueue { /* Last used index we've seen. 
*/ u16 last_used_idx; - /* Last written value to avail->flags */ - u16 avail_flags_shadow; + union { + /* Available for split ring */ + struct { + /* Actual memory layout for this queue. */ + struct vring vring; + + /* Last written value to avail->flags */ + u16 avail_flags_shadow; + + /* + * Last written value to avail->idx in + * guest byte order. + */ + u16 avail_idx_shadow; + + /* Per-descriptor state. */ + struct vring_desc_state_split *desc_state; - /* Last written value to avail->idx in guest byte order */ - u16 avail_idx_shadow; + /* DMA address and size information */ + dma_addr_t queue_dma_addr; + size_t queue_size_in_bytes; + } split; + + /* Available for packed ring */ + struct { + /* Actual memory layout for this queue. */ + struct vring_packed vring; + + /* Driver ring wrap counter. */ + bool avail_wrap_counter; + + /* Device ring wrap counter. */ + bool used_wrap_counter; + + /* Avail used flags. */ + u16 avail_used_flags; + + /* Index of the next avail descriptor. */ + u16 next_avail_idx; + + /* + * Last written value to driver->flags in + * guest byte order. + */ + u16 event_flags_shadow; + + /* Per-descriptor state. */ + struct vring_desc_state_packed *desc_state; + struct vring_desc_extra_packed *desc_extra; + + /* DMA address and size information */ + dma_addr_t ring_dma_addr; + dma_addr_t driver_event_dma_addr; + dma_addr_t device_event_dma_addr; + size_t ring_size_in_bytes; + size_t event_size_in_bytes; + } packed; + }; /* How to notify other side. FIXME: commonalize hcalls! */ bool (*notify)(struct virtqueue *vq); /* DMA, allocation, and size information */ bool we_own_ring; - size_t queue_size_in_bytes; - dma_addr_t queue_dma_addr; #ifdef DEBUG /* They're supposed to lock for us. */ @@ -108,13 +199,27 @@ struct vring_virtqueue { bool last_add_time_valid; ktime_t last_add_time; #endif - - /* Per-descriptor state. */ - struct vring_desc_state desc_state[]; }; + +/* + * Helpers. + */ + #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq) +static inline bool virtqueue_use_indirect(struct virtqueue *_vq, + unsigned int total_sg) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + /* + * If the host supports indirect descriptor tables, and we have multiple + * buffers, then go indirect. FIXME: tune this threshold + */ + return (vq->indirect && total_sg > 1 && vq->vq.num_free); +} + /* * Modern virtio devices have feature bits to specify whether they need a * quirk and bypass the IOMMU. If not there, just use the DMA API. @@ -161,6 +266,48 @@ static bool vring_use_dma_api(struct virtio_device *vdev) return false; } +static void *vring_alloc_queue(struct virtio_device *vdev, size_t size, + dma_addr_t *dma_handle, gfp_t flag) +{ + if (vring_use_dma_api(vdev)) { + return dma_alloc_coherent(vdev->dev.parent, size, + dma_handle, flag); + } else { + void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag); + + if (queue) { + phys_addr_t phys_addr = virt_to_phys(queue); + *dma_handle = (dma_addr_t)phys_addr; + + /* + * Sanity check: make sure we didn't truncate + * the address. The only arches I can find that + * have 64-bit phys_addr_t but 32-bit dma_addr_t + * are certain non-highmem MIPS and x86 + * configurations, but these configurations + * should never allocate physical pages above 32 + * bits, so this is fine. Just in case, throw a + * warning and abort if we end up with an + * unrepresentable address.
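To make the truncation hazard concrete: with a 64-bit phys_addr_t and a 32-bit dma_addr_t, an allocation landing at physical address 0x100000000 (the 4 GiB boundary) would come back as a dma_addr_t of 0; the WARN_ON_ONCE that follows catches exactly that case before a bogus address can be handed to the device.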
+ */ + if (WARN_ON_ONCE(*dma_handle != phys_addr)) { + free_pages_exact(queue, PAGE_ALIGN(size)); + return NULL; + } + } + return queue; + } +} + +static void vring_free_queue(struct virtio_device *vdev, size_t size, + void *queue, dma_addr_t dma_handle) +{ + if (vring_use_dma_api(vdev)) + dma_free_coherent(vdev->dev.parent, size, queue, dma_handle); + else + free_pages_exact(queue, PAGE_ALIGN(size)); +} + /* * The DMA ops on various arches are rather gnarly right now, and * making all of the arch DMA ops work on the vring device itself @@ -176,7 +323,7 @@ static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg, enum dma_data_direction direction) { - if (!vring_use_dma_api(vq->vq.vdev)) + if (!vq->use_dma_api) return (dma_addr_t)sg_phys(sg); /* @@ -193,19 +340,33 @@ static dma_addr_t vring_map_single(const struct vring_virtqueue *vq, void *cpu_addr, size_t size, enum dma_data_direction direction) { - if (!vring_use_dma_api(vq->vq.vdev)) + if (!vq->use_dma_api) return (dma_addr_t)virt_to_phys(cpu_addr); return dma_map_single(vring_dma_dev(vq), cpu_addr, size, direction); } -static void vring_unmap_one(const struct vring_virtqueue *vq, - struct vring_desc *desc) +static int vring_mapping_error(const struct vring_virtqueue *vq, + dma_addr_t addr) +{ + if (!vq->use_dma_api) + return 0; + + return dma_mapping_error(vring_dma_dev(vq), addr); +} + + +/* + * Split ring specific functions - *_split(). + */ + +static void vring_unmap_one_split(const struct vring_virtqueue *vq, + struct vring_desc *desc) { u16 flags; - if (!vring_use_dma_api(vq->vq.vdev)) + if (!vq->use_dma_api) return; flags = virtio16_to_cpu(vq->vq.vdev, desc->flags); @@ -225,17 +386,9 @@ static void vring_unmap_one(const struct vring_virtqueue *vq, } } -static int vring_mapping_error(const struct vring_virtqueue *vq, - dma_addr_t addr) -{ - if (!vring_use_dma_api(vq->vq.vdev)) - return 0; - - return dma_mapping_error(vring_dma_dev(vq), addr); -} - -static struct vring_desc *alloc_indirect(struct virtqueue *_vq, - unsigned int total_sg, gfp_t gfp) +static struct vring_desc *alloc_indirect_split(struct virtqueue *_vq, + unsigned int total_sg, + gfp_t gfp) { struct vring_desc *desc; unsigned int i; @@ -256,14 +409,14 @@ static struct vring_desc *alloc_indirect(struct virtqueue *_vq, return desc; } -static inline int virtqueue_add(struct virtqueue *_vq, - struct scatterlist *sgs[], - unsigned int total_sg, - unsigned int out_sgs, - unsigned int in_sgs, - void *data, - void *ctx, - gfp_t gfp) +static inline int virtqueue_add_split(struct virtqueue *_vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs, + void *data, + void *ctx, + gfp_t gfp) { struct vring_virtqueue *vq = to_vvq(_vq); struct scatterlist *sg; @@ -282,30 +435,17 @@ static inline int virtqueue_add(struct virtqueue *_vq, return -EIO; } -#ifdef DEBUG - { - ktime_t now = ktime_get(); - - /* No kick or get, with .1 second between? Warn. */ - if (vq->last_add_time_valid) - WARN_ON(ktime_to_ms(ktime_sub(now, vq->last_add_time)) - > 100); - vq->last_add_time = now; - vq->last_add_time_valid = true; - } -#endif + LAST_ADD_TIME_UPDATE(vq); BUG_ON(total_sg == 0); head = vq->free_head; - /* If the host supports indirect descriptor tables, and we have multiple - * buffers, then go indirect. 
FIXME: tune this threshold */ - if (vq->indirect && total_sg > 1 && vq->vq.num_free) - desc = alloc_indirect(_vq, total_sg, gfp); + if (virtqueue_use_indirect(_vq, total_sg)) + desc = alloc_indirect_split(_vq, total_sg, gfp); else { desc = NULL; - WARN_ON_ONCE(total_sg > vq->vring.num && !vq->indirect); + WARN_ON_ONCE(total_sg > vq->split.vring.num && !vq->indirect); } if (desc) { @@ -316,7 +456,7 @@ static inline int virtqueue_add(struct virtqueue *_vq, descs_used = 1; } else { indirect = false; - desc = vq->vring.desc; + desc = vq->split.vring.desc; i = head; descs_used = total_sg; } @@ -372,10 +512,13 @@ static inline int virtqueue_add(struct virtqueue *_vq, if (vring_mapping_error(vq, addr)) goto unmap_release; - vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT); - vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, addr); + vq->split.vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, + VRING_DESC_F_INDIRECT); + vq->split.vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, + addr); - vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, total_sg * sizeof(struct vring_desc)); + vq->split.vring.desc[head].len = cpu_to_virtio32(_vq->vdev, + total_sg * sizeof(struct vring_desc)); } /* We're using some buffers from the free list. */ @@ -383,27 +526,29 @@ static inline int virtqueue_add(struct virtqueue *_vq, /* Update free pointer */ if (indirect) - vq->free_head = virtio16_to_cpu(_vq->vdev, vq->vring.desc[head].next); + vq->free_head = virtio16_to_cpu(_vq->vdev, + vq->split.vring.desc[head].next); else vq->free_head = i; /* Store token and indirect buffer state. */ - vq->desc_state[head].data = data; + vq->split.desc_state[head].data = data; if (indirect) - vq->desc_state[head].indir_desc = desc; + vq->split.desc_state[head].indir_desc = desc; else - vq->desc_state[head].indir_desc = ctx; + vq->split.desc_state[head].indir_desc = ctx; /* Put entry in available array (but don't update avail->idx until they * do sync). */ - avail = vq->avail_idx_shadow & (vq->vring.num - 1); - vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); + avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1); + vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); /* Descriptors and available array need to be set before we expose the * new available array entries. */ virtio_wmb(vq->weak_barriers); - vq->avail_idx_shadow++; - vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow); + vq->split.avail_idx_shadow++; + vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev, + vq->split.avail_idx_shadow); vq->num_added++; pr_debug("Added buffer head %i to %p\n", head, vq); @@ -423,8 +568,8 @@ unmap_release: for (n = 0; n < total_sg; n++) { if (i == err_idx) break; - vring_unmap_one(vq, &desc[i]); - i = virtio16_to_cpu(_vq->vdev, vq->vring.desc[i].next); + vring_unmap_one_split(vq, &desc[i]); + i = virtio16_to_cpu(_vq->vdev, vq->split.vring.desc[i].next); } if (indirect) @@ -434,6 +579,1122 @@ unmap_release: return -EIO; } +static bool virtqueue_kick_prepare_split(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + u16 new, old; + bool needs_kick; + + START_USE(vq); + /* We need to expose available array entries before checking avail + * event. 
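The suppression check just below hinges on vring_need_event(); for reference, that helper from include/uapi/linux/virtio_ring.h is essentially:

	static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
	{
		/* True iff event_idx lies in the half-open window (old, new_idx],
		 * using wrap-safe unsigned arithmetic. */
		return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
	}

So a kick is only needed when the index the device asked to be notified at was crossed by this batch of additions (old through new).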
*/ + virtio_mb(vq->weak_barriers); + + old = vq->split.avail_idx_shadow - vq->num_added; + new = vq->split.avail_idx_shadow; + vq->num_added = 0; + + LAST_ADD_TIME_CHECK(vq); + LAST_ADD_TIME_INVALID(vq); + + if (vq->event) { + needs_kick = vring_need_event(virtio16_to_cpu(_vq->vdev, + vring_avail_event(&vq->split.vring)), + new, old); + } else { + needs_kick = !(vq->split.vring.used->flags & + cpu_to_virtio16(_vq->vdev, + VRING_USED_F_NO_NOTIFY)); + } + END_USE(vq); + return needs_kick; +} + +static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head, + void **ctx) +{ + unsigned int i, j; + __virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT); + + /* Clear data ptr. */ + vq->split.desc_state[head].data = NULL; + + /* Put back on free list: unmap first-level descriptors and find end */ + i = head; + + while (vq->split.vring.desc[i].flags & nextflag) { + vring_unmap_one_split(vq, &vq->split.vring.desc[i]); + i = virtio16_to_cpu(vq->vq.vdev, vq->split.vring.desc[i].next); + vq->vq.num_free++; + } + + vring_unmap_one_split(vq, &vq->split.vring.desc[i]); + vq->split.vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, + vq->free_head); + vq->free_head = head; + + /* Plus final descriptor */ + vq->vq.num_free++; + + if (vq->indirect) { + struct vring_desc *indir_desc = + vq->split.desc_state[head].indir_desc; + u32 len; + + /* Free the indirect table, if any, now that it's unmapped. */ + if (!indir_desc) + return; + + len = virtio32_to_cpu(vq->vq.vdev, + vq->split.vring.desc[head].len); + + BUG_ON(!(vq->split.vring.desc[head].flags & + cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT))); + BUG_ON(len == 0 || len % sizeof(struct vring_desc)); + + for (j = 0; j < len / sizeof(struct vring_desc); j++) + vring_unmap_one_split(vq, &indir_desc[j]); + + kfree(indir_desc); + vq->split.desc_state[head].indir_desc = NULL; + } else if (ctx) { + *ctx = vq->split.desc_state[head].indir_desc; + } +} + +static inline bool more_used_split(const struct vring_virtqueue *vq) +{ + return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, + vq->split.vring.used->idx); +} + +static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq, + unsigned int *len, + void **ctx) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + void *ret; + unsigned int i; + u16 last_used; + + START_USE(vq); + + if (unlikely(vq->broken)) { + END_USE(vq); + return NULL; + } + + if (!more_used_split(vq)) { + pr_debug("No more buffers in queue\n"); + END_USE(vq); + return NULL; + } + + /* Only get used array entries after they have been exposed by host. */ + virtio_rmb(vq->weak_barriers); + + last_used = (vq->last_used_idx & (vq->split.vring.num - 1)); + i = virtio32_to_cpu(_vq->vdev, + vq->split.vring.used->ring[last_used].id); + *len = virtio32_to_cpu(_vq->vdev, + vq->split.vring.used->ring[last_used].len); + + if (unlikely(i >= vq->split.vring.num)) { + BAD_RING(vq, "id %u out of range\n", i); + return NULL; + } + if (unlikely(!vq->split.desc_state[i].data)) { + BAD_RING(vq, "id %u is not a head!\n", i); + return NULL; + } + + /* detach_buf_split clears data, so grab it now. */ + ret = vq->split.desc_state[i].data; + detach_buf_split(vq, i, ctx); + vq->last_used_idx++; + /* If we expect an interrupt for the next entry, tell host + * by writing event index and flush out the write before + * the read in the next get_buf call. 
*/ + if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) + virtio_store_mb(vq->weak_barriers, + &vring_used_event(&vq->split.vring), + cpu_to_virtio16(_vq->vdev, vq->last_used_idx)); + + LAST_ADD_TIME_INVALID(vq); + + END_USE(vq); + return ret; +} + +static void virtqueue_disable_cb_split(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) { + vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT; + if (!vq->event) + vq->split.vring.avail->flags = + cpu_to_virtio16(_vq->vdev, + vq->split.avail_flags_shadow); + } +} + +static unsigned virtqueue_enable_cb_prepare_split(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + u16 last_used_idx; + + START_USE(vq); + + /* We optimistically turn back on interrupts, then check if there was + * more to do. */ + /* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to + * either clear the flags bit or point the event index at the next + * entry. Always do both to keep code simple. */ + if (vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) { + vq->split.avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT; + if (!vq->event) + vq->split.vring.avail->flags = + cpu_to_virtio16(_vq->vdev, + vq->split.avail_flags_shadow); + } + vring_used_event(&vq->split.vring) = cpu_to_virtio16(_vq->vdev, + last_used_idx = vq->last_used_idx); + END_USE(vq); + return last_used_idx; +} + +static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, + vq->split.vring.used->idx); +} + +static bool virtqueue_enable_cb_delayed_split(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + u16 bufs; + + START_USE(vq); + + /* We optimistically turn back on interrupts, then check if there was + * more to do. */ + /* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to + * either clear the flags bit or point the event index at the next + * entry. Always update the event index to keep code simple. */ + if (vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) { + vq->split.avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT; + if (!vq->event) + vq->split.vring.avail->flags = + cpu_to_virtio16(_vq->vdev, + vq->split.avail_flags_shadow); + } + /* TODO: tune this threshold */ + bufs = (u16)(vq->split.avail_idx_shadow - vq->last_used_idx) * 3 / 4; + + virtio_store_mb(vq->weak_barriers, + &vring_used_event(&vq->split.vring), + cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs)); + + if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->split.vring.used->idx) + - vq->last_used_idx) > bufs)) { + END_USE(vq); + return false; + } + + END_USE(vq); + return true; +} + +static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + unsigned int i; + void *buf; + + START_USE(vq); + + for (i = 0; i < vq->split.vring.num; i++) { + if (!vq->split.desc_state[i].data) + continue; + /* detach_buf_split clears data, so grab it now. */ + buf = vq->split.desc_state[i].data; + detach_buf_split(vq, i, NULL); + vq->split.avail_idx_shadow--; + vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev, + vq->split.avail_idx_shadow); + END_USE(vq); + return buf; + } + /* That should have freed everything. 
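For context, virtqueue_detach_unused_buf() (the public wrapper that ends up in the function above) is what drivers call while tearing a device down, after the device has been reset; a typical caller loops along these lines (a sketch, with the free routine depending on the driver):

	void *buf;

	/* Device is reset: nothing in flight, reclaim never-used buffers. */
	while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
		kfree(buf);	/* e.g. virtio-net frees its skbs/pages instead */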
*/ + BUG_ON(vq->vq.num_free != vq->split.vring.num); + + END_USE(vq); + return NULL; +} + +static struct virtqueue *vring_create_virtqueue_split( + unsigned int index, + unsigned int num, + unsigned int vring_align, + struct virtio_device *vdev, + bool weak_barriers, + bool may_reduce_num, + bool context, + bool (*notify)(struct virtqueue *), + void (*callback)(struct virtqueue *), + const char *name) +{ + struct virtqueue *vq; + void *queue = NULL; + dma_addr_t dma_addr; + size_t queue_size_in_bytes; + struct vring vring; + + /* We assume num is a power of 2. */ + if (num & (num - 1)) { + dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num); + return NULL; + } + + /* TODO: allocate each queue chunk individually */ + for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) { + queue = vring_alloc_queue(vdev, vring_size(num, vring_align), + &dma_addr, + GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); + if (queue) + break; + } + + if (!num) + return NULL; + + if (!queue) { + /* Try to get a single page. You are my only hope! */ + queue = vring_alloc_queue(vdev, vring_size(num, vring_align), + &dma_addr, GFP_KERNEL|__GFP_ZERO); + } + if (!queue) + return NULL; + + queue_size_in_bytes = vring_size(num, vring_align); + vring_init(&vring, num, queue, vring_align); + + vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers, context, + notify, callback, name); + if (!vq) { + vring_free_queue(vdev, queue_size_in_bytes, queue, + dma_addr); + return NULL; + } + + to_vvq(vq)->split.queue_dma_addr = dma_addr; + to_vvq(vq)->split.queue_size_in_bytes = queue_size_in_bytes; + to_vvq(vq)->we_own_ring = true; + + return vq; +} + + +/* + * Packed ring specific functions - *_packed(). + */ + +static void vring_unmap_state_packed(const struct vring_virtqueue *vq, + struct vring_desc_extra_packed *state) +{ + u16 flags; + + if (!vq->use_dma_api) + return; + + flags = state->flags; + + if (flags & VRING_DESC_F_INDIRECT) { + dma_unmap_single(vring_dma_dev(vq), + state->addr, state->len, + (flags & VRING_DESC_F_WRITE) ? + DMA_FROM_DEVICE : DMA_TO_DEVICE); + } else { + dma_unmap_page(vring_dma_dev(vq), + state->addr, state->len, + (flags & VRING_DESC_F_WRITE) ? + DMA_FROM_DEVICE : DMA_TO_DEVICE); + } +} + +static void vring_unmap_desc_packed(const struct vring_virtqueue *vq, + struct vring_packed_desc *desc) +{ + u16 flags; + + if (!vq->use_dma_api) + return; + + flags = le16_to_cpu(desc->flags); + + if (flags & VRING_DESC_F_INDIRECT) { + dma_unmap_single(vring_dma_dev(vq), + le64_to_cpu(desc->addr), + le32_to_cpu(desc->len), + (flags & VRING_DESC_F_WRITE) ? + DMA_FROM_DEVICE : DMA_TO_DEVICE); + } else { + dma_unmap_page(vring_dma_dev(vq), + le64_to_cpu(desc->addr), + le32_to_cpu(desc->len), + (flags & VRING_DESC_F_WRITE) ? + DMA_FROM_DEVICE : DMA_TO_DEVICE); + } +} + +static struct vring_packed_desc *alloc_indirect_packed(unsigned int total_sg, + gfp_t gfp) +{ + struct vring_packed_desc *desc; + + /* + * We require lowmem mappings for the descriptors because + * otherwise virt_to_phys will give us bogus addresses in the + * virtqueue. 
+ */ + gfp &= ~__GFP_HIGHMEM; + + desc = kmalloc_array(total_sg, sizeof(struct vring_packed_desc), gfp); + + return desc; +} + +static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs, + void *data, + gfp_t gfp) +{ + struct vring_packed_desc *desc; + struct scatterlist *sg; + unsigned int i, n, err_idx; + u16 head, id; + dma_addr_t addr; + + head = vq->packed.next_avail_idx; + desc = alloc_indirect_packed(total_sg, gfp); + + if (unlikely(vq->vq.num_free < 1)) { + pr_debug("Can't add buf len 1 - avail = 0\n"); + END_USE(vq); + return -ENOSPC; + } + + i = 0; + id = vq->free_head; + BUG_ON(id == vq->packed.vring.num); + + for (n = 0; n < out_sgs + in_sgs; n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + addr = vring_map_one_sg(vq, sg, n < out_sgs ? + DMA_TO_DEVICE : DMA_FROM_DEVICE); + if (vring_mapping_error(vq, addr)) + goto unmap_release; + + desc[i].flags = cpu_to_le16(n < out_sgs ? + 0 : VRING_DESC_F_WRITE); + desc[i].addr = cpu_to_le64(addr); + desc[i].len = cpu_to_le32(sg->length); + i++; + } + } + + /* Now that the indirect table is filled in, map it. */ + addr = vring_map_single(vq, desc, + total_sg * sizeof(struct vring_packed_desc), + DMA_TO_DEVICE); + if (vring_mapping_error(vq, addr)) + goto unmap_release; + + vq->packed.vring.desc[head].addr = cpu_to_le64(addr); + vq->packed.vring.desc[head].len = cpu_to_le32(total_sg * + sizeof(struct vring_packed_desc)); + vq->packed.vring.desc[head].id = cpu_to_le16(id); + + if (vq->use_dma_api) { + vq->packed.desc_extra[id].addr = addr; + vq->packed.desc_extra[id].len = total_sg * + sizeof(struct vring_packed_desc); + vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT | + vq->packed.avail_used_flags; + } + + /* + * A driver MUST NOT make the first descriptor in the list + * available before all subsequent descriptors comprising + * the list are made available. + */ + virtio_wmb(vq->weak_barriers); + vq->packed.vring.desc[head].flags = cpu_to_le16(VRING_DESC_F_INDIRECT | + vq->packed.avail_used_flags); + + /* We're using some buffers from the free list. */ + vq->vq.num_free -= 1; + + /* Update free pointer */ + n = head + 1; + if (n >= vq->packed.vring.num) { + n = 0; + vq->packed.avail_wrap_counter ^= 1; + vq->packed.avail_used_flags ^= + 1 << VRING_PACKED_DESC_F_AVAIL | + 1 << VRING_PACKED_DESC_F_USED; + } + vq->packed.next_avail_idx = n; + vq->free_head = vq->packed.desc_state[id].next; + + /* Store token and indirect buffer state. 
*/ + vq->packed.desc_state[id].num = 1; + vq->packed.desc_state[id].data = data; + vq->packed.desc_state[id].indir_desc = desc; + vq->packed.desc_state[id].last = id; + + vq->num_added += 1; + + pr_debug("Added buffer head %i to %p\n", head, vq); + END_USE(vq); + + return 0; + +unmap_release: + err_idx = i; + + for (i = 0; i < err_idx; i++) + vring_unmap_desc_packed(vq, &desc[i]); + + kfree(desc); + + END_USE(vq); + return -EIO; +} + +static inline int virtqueue_add_packed(struct virtqueue *_vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs, + void *data, + void *ctx, + gfp_t gfp) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + struct vring_packed_desc *desc; + struct scatterlist *sg; + unsigned int i, n, c, descs_used, err_idx; + __le16 uninitialized_var(head_flags), flags; + u16 head, id, uninitialized_var(prev), curr, avail_used_flags; + + START_USE(vq); + + BUG_ON(data == NULL); + BUG_ON(ctx && vq->indirect); + + if (unlikely(vq->broken)) { + END_USE(vq); + return -EIO; + } + + LAST_ADD_TIME_UPDATE(vq); + + BUG_ON(total_sg == 0); + + if (virtqueue_use_indirect(_vq, total_sg)) + return virtqueue_add_indirect_packed(vq, sgs, total_sg, + out_sgs, in_sgs, data, gfp); + + head = vq->packed.next_avail_idx; + avail_used_flags = vq->packed.avail_used_flags; + + WARN_ON_ONCE(total_sg > vq->packed.vring.num && !vq->indirect); + + desc = vq->packed.vring.desc; + i = head; + descs_used = total_sg; + + if (unlikely(vq->vq.num_free < descs_used)) { + pr_debug("Can't add buf len %i - avail = %i\n", + descs_used, vq->vq.num_free); + END_USE(vq); + return -ENOSPC; + } + + id = vq->free_head; + BUG_ON(id == vq->packed.vring.num); + + curr = id; + c = 0; + for (n = 0; n < out_sgs + in_sgs; n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ? + DMA_TO_DEVICE : DMA_FROM_DEVICE); + if (vring_mapping_error(vq, addr)) + goto unmap_release; + + flags = cpu_to_le16(vq->packed.avail_used_flags | + (++c == total_sg ? 0 : VRING_DESC_F_NEXT) | + (n < out_sgs ? 0 : VRING_DESC_F_WRITE)); + if (i == head) + head_flags = flags; + else + desc[i].flags = flags; + + desc[i].addr = cpu_to_le64(addr); + desc[i].len = cpu_to_le32(sg->length); + desc[i].id = cpu_to_le16(id); + + if (unlikely(vq->use_dma_api)) { + vq->packed.desc_extra[curr].addr = addr; + vq->packed.desc_extra[curr].len = sg->length; + vq->packed.desc_extra[curr].flags = + le16_to_cpu(flags); + } + prev = curr; + curr = vq->packed.desc_state[curr].next; + + if ((unlikely(++i >= vq->packed.vring.num))) { + i = 0; + vq->packed.avail_used_flags ^= + 1 << VRING_PACKED_DESC_F_AVAIL | + 1 << VRING_PACKED_DESC_F_USED; + } + } + } + + if (i < head) + vq->packed.avail_wrap_counter ^= 1; + + /* We're using some buffers from the free list. */ + vq->vq.num_free -= descs_used; + + /* Update free pointer */ + vq->packed.next_avail_idx = i; + vq->free_head = curr; + + /* Store token. */ + vq->packed.desc_state[id].num = descs_used; + vq->packed.desc_state[id].data = data; + vq->packed.desc_state[id].indir_desc = ctx; + vq->packed.desc_state[id].last = prev; + + /* + * A driver MUST NOT make the first descriptor in the list + * available before all subsequent descriptors comprising + * the list are made available. 
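A note on the flag encoding this ordering rule protects (VIRTIO 1.1 packed-ring semantics, stated informally):

	/* Each packed descriptor carries two flag bits:
	 *   F_AVAIL - written by the driver, equal to its avail wrap counter
	 *   F_USED  - written by the device, equal to its used wrap counter
	 * avail != used  -> descriptor is available to the device
	 * avail == used  -> descriptor is used (or was never posted)
	 * which is why wrap-around XORs both bits in avail_used_flags, and
	 * why the head descriptor's flags are written only after the
	 * virtio_wmb() below, publishing the whole chain in one store.
	 */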
+ */ + virtio_wmb(vq->weak_barriers); + vq->packed.vring.desc[head].flags = head_flags; + vq->num_added += descs_used; + + pr_debug("Added buffer head %i to %p\n", head, vq); + END_USE(vq); + + return 0; + +unmap_release: + err_idx = i; + i = head; + + vq->packed.avail_used_flags = avail_used_flags; + + for (n = 0; n < total_sg; n++) { + if (i == err_idx) + break; + vring_unmap_desc_packed(vq, &desc[i]); + i++; + if (i >= vq->packed.vring.num) + i = 0; + } + + END_USE(vq); + return -EIO; +} + +static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + u16 new, old, off_wrap, flags, wrap_counter, event_idx; + bool needs_kick; + union { + struct { + __le16 off_wrap; + __le16 flags; + }; + u32 u32; + } snapshot; + + START_USE(vq); + + /* + * We need to expose the new flags value before checking notification + * suppressions. + */ + virtio_mb(vq->weak_barriers); + + old = vq->packed.next_avail_idx - vq->num_added; + new = vq->packed.next_avail_idx; + vq->num_added = 0; + + snapshot.u32 = *(u32 *)vq->packed.vring.device; + flags = le16_to_cpu(snapshot.flags); + + LAST_ADD_TIME_CHECK(vq); + LAST_ADD_TIME_INVALID(vq); + + if (flags != VRING_PACKED_EVENT_FLAG_DESC) { + needs_kick = (flags != VRING_PACKED_EVENT_FLAG_DISABLE); + goto out; + } + + off_wrap = le16_to_cpu(snapshot.off_wrap); + + wrap_counter = off_wrap >> VRING_PACKED_EVENT_F_WRAP_CTR; + event_idx = off_wrap & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR); + if (wrap_counter != vq->packed.avail_wrap_counter) + event_idx -= vq->packed.vring.num; + + needs_kick = vring_need_event(event_idx, new, old); +out: + END_USE(vq); + return needs_kick; +} + +static void detach_buf_packed(struct vring_virtqueue *vq, + unsigned int id, void **ctx) +{ + struct vring_desc_state_packed *state = NULL; + struct vring_packed_desc *desc; + unsigned int i, curr; + + state = &vq->packed.desc_state[id]; + + /* Clear data ptr. */ + state->data = NULL; + + vq->packed.desc_state[state->last].next = vq->free_head; + vq->free_head = id; + vq->vq.num_free += state->num; + + if (unlikely(vq->use_dma_api)) { + curr = id; + for (i = 0; i < state->num; i++) { + vring_unmap_state_packed(vq, + &vq->packed.desc_extra[curr]); + curr = vq->packed.desc_state[curr].next; + } + } + + if (vq->indirect) { + u32 len; + + /* Free the indirect table, if any, now that it's unmapped. 
*/ + desc = state->indir_desc; + if (!desc) + return; + + if (vq->use_dma_api) { + len = vq->packed.desc_extra[id].len; + for (i = 0; i < len / sizeof(struct vring_packed_desc); + i++) + vring_unmap_desc_packed(vq, &desc[i]); + } + kfree(desc); + state->indir_desc = NULL; + } else if (ctx) { + *ctx = state->indir_desc; + } +} + +static inline bool is_used_desc_packed(const struct vring_virtqueue *vq, + u16 idx, bool used_wrap_counter) +{ + bool avail, used; + u16 flags; + + flags = le16_to_cpu(vq->packed.vring.desc[idx].flags); + avail = !!(flags & (1 << VRING_PACKED_DESC_F_AVAIL)); + used = !!(flags & (1 << VRING_PACKED_DESC_F_USED)); + + return avail == used && used == used_wrap_counter; +} + +static inline bool more_used_packed(const struct vring_virtqueue *vq) +{ + return is_used_desc_packed(vq, vq->last_used_idx, + vq->packed.used_wrap_counter); +} + +static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq, + unsigned int *len, + void **ctx) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + u16 last_used, id; + void *ret; + + START_USE(vq); + + if (unlikely(vq->broken)) { + END_USE(vq); + return NULL; + } + + if (!more_used_packed(vq)) { + pr_debug("No more buffers in queue\n"); + END_USE(vq); + return NULL; + } + + /* Only get used elements after they have been exposed by host. */ + virtio_rmb(vq->weak_barriers); + + last_used = vq->last_used_idx; + id = le16_to_cpu(vq->packed.vring.desc[last_used].id); + *len = le32_to_cpu(vq->packed.vring.desc[last_used].len); + + if (unlikely(id >= vq->packed.vring.num)) { + BAD_RING(vq, "id %u out of range\n", id); + return NULL; + } + if (unlikely(!vq->packed.desc_state[id].data)) { + BAD_RING(vq, "id %u is not a head!\n", id); + return NULL; + } + + /* detach_buf_packed clears data, so grab it now. */ + ret = vq->packed.desc_state[id].data; + detach_buf_packed(vq, id, ctx); + + vq->last_used_idx += vq->packed.desc_state[id].num; + if (unlikely(vq->last_used_idx >= vq->packed.vring.num)) { + vq->last_used_idx -= vq->packed.vring.num; + vq->packed.used_wrap_counter ^= 1; + } + + /* + * If we expect an interrupt for the next entry, tell host + * by writing event index and flush out the write before + * the read in the next get_buf call. + */ + if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC) + virtio_store_mb(vq->weak_barriers, + &vq->packed.vring.driver->off_wrap, + cpu_to_le16(vq->last_used_idx | + (vq->packed.used_wrap_counter << + VRING_PACKED_EVENT_F_WRAP_CTR))); + + LAST_ADD_TIME_INVALID(vq); + + END_USE(vq); + return ret; +} + +static void virtqueue_disable_cb_packed(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + if (vq->packed.event_flags_shadow != VRING_PACKED_EVENT_FLAG_DISABLE) { + vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE; + vq->packed.vring.driver->flags = + cpu_to_le16(vq->packed.event_flags_shadow); + } +} + +static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + START_USE(vq); + + /* + * We optimistically turn back on interrupts, then check if there was + * more to do. + */ + + if (vq->event) { + vq->packed.vring.driver->off_wrap = + cpu_to_le16(vq->last_used_idx | + (vq->packed.used_wrap_counter << + VRING_PACKED_EVENT_F_WRAP_CTR)); + /* + * We need to update event offset and event wrap + * counter first before updating event flags. 
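The predicate in is_used_desc_packed() above deserves a gloss: each packed descriptor carries an AVAIL bit and a USED bit; the driver toggles AVAIL against its wrap counter when posting, and the device sets USED equal to AVAIL when completing. A descriptor is therefore used exactly when the two bits agree and match the wrap counter the driver expects. The same rule with the real bit positions, in plain C:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define F_AVAIL (1u << 7)   /* VRING_PACKED_DESC_F_AVAIL */
#define F_USED  (1u << 15)  /* VRING_PACKED_DESC_F_USED */

static bool desc_is_used(uint16_t flags, bool used_wrap_counter)
{
        bool avail = flags & F_AVAIL;
        bool used = flags & F_USED;

        /* Posted by the driver: avail != used.
         * Completed by the device: avail == used == expected counter. */
        return avail == used && used == used_wrap_counter;
}

int main(void)
{
        printf("%d\n", desc_is_used(F_AVAIL | F_USED, true)); /* 1: used */
        printf("%d\n", desc_is_used(F_AVAIL, true));          /* 0: pending */
        return 0;
}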
+ */ + virtio_wmb(vq->weak_barriers); + } + + if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DISABLE) { + vq->packed.event_flags_shadow = vq->event ? + VRING_PACKED_EVENT_FLAG_DESC : + VRING_PACKED_EVENT_FLAG_ENABLE; + vq->packed.vring.driver->flags = + cpu_to_le16(vq->packed.event_flags_shadow); + } + + END_USE(vq); + return vq->last_used_idx | ((u16)vq->packed.used_wrap_counter << + VRING_PACKED_EVENT_F_WRAP_CTR); +} + +static bool virtqueue_poll_packed(struct virtqueue *_vq, u16 off_wrap) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + bool wrap_counter; + u16 used_idx; + + wrap_counter = off_wrap >> VRING_PACKED_EVENT_F_WRAP_CTR; + used_idx = off_wrap & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR); + + return is_used_desc_packed(vq, used_idx, wrap_counter); +} + +static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + u16 used_idx, wrap_counter; + u16 bufs; + + START_USE(vq); + + /* + * We optimistically turn back on interrupts, then check if there was + * more to do. + */ + + if (vq->event) { + /* TODO: tune this threshold */ + bufs = (vq->packed.vring.num - vq->vq.num_free) * 3 / 4; + wrap_counter = vq->packed.used_wrap_counter; + + used_idx = vq->last_used_idx + bufs; + if (used_idx >= vq->packed.vring.num) { + used_idx -= vq->packed.vring.num; + wrap_counter ^= 1; + } + + vq->packed.vring.driver->off_wrap = cpu_to_le16(used_idx | + (wrap_counter << VRING_PACKED_EVENT_F_WRAP_CTR)); + + /* + * We need to update event offset and event wrap + * counter first before updating event flags. + */ + virtio_wmb(vq->weak_barriers); + } else { + used_idx = vq->last_used_idx; + wrap_counter = vq->packed.used_wrap_counter; + } + + if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DISABLE) { + vq->packed.event_flags_shadow = vq->event ? + VRING_PACKED_EVENT_FLAG_DESC : + VRING_PACKED_EVENT_FLAG_ENABLE; + vq->packed.vring.driver->flags = + cpu_to_le16(vq->packed.event_flags_shadow); + } + + /* + * We need to update event suppression structure first + * before re-checking for more used buffers. + */ + virtio_mb(vq->weak_barriers); + + if (is_used_desc_packed(vq, used_idx, wrap_counter)) { + END_USE(vq); + return false; + } + + END_USE(vq); + return true; +} + +static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + unsigned int i; + void *buf; + + START_USE(vq); + + for (i = 0; i < vq->packed.vring.num; i++) { + if (!vq->packed.desc_state[i].data) + continue; + /* detach_buf clears data, so grab it now. */ + buf = vq->packed.desc_state[i].data; + detach_buf_packed(vq, i, NULL); + END_USE(vq); + return buf; + } + /* That should have freed everything. 
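virtqueue_enable_cb_delayed_packed() above requests an interrupt only after roughly three quarters of the outstanding buffers have completed, packing the target index and its wrap counter into the driver area's off_wrap word. A sketch of that encoding (ring size and counts invented for the example):

#include <stdint.h>
#include <stdio.h>

#define WRAP_CTR_BIT 15 /* VRING_PACKED_EVENT_F_WRAP_CTR */

static uint16_t encode_off_wrap(uint16_t idx, unsigned int wrap)
{
        return idx | (uint16_t)(wrap << WRAP_CTR_BIT);
}

int main(void)
{
        uint16_t num = 256, last_used = 40, in_flight = 200;
        unsigned int wrap = 1;

        /* the kernel's "TODO: tune this threshold": 3/4 of in-flight */
        uint16_t idx = last_used + in_flight * 3 / 4;

        if (idx >= num) {  /* target is on the next lap: flip the counter */
                idx -= num;
                wrap ^= 1;
        }
        printf("off_wrap=0x%04x\n", (unsigned int)encode_off_wrap(idx, wrap));
        return 0;
}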
*/ + BUG_ON(vq->vq.num_free != vq->packed.vring.num); + + END_USE(vq); + return NULL; +} + +static struct virtqueue *vring_create_virtqueue_packed( + unsigned int index, + unsigned int num, + unsigned int vring_align, + struct virtio_device *vdev, + bool weak_barriers, + bool may_reduce_num, + bool context, + bool (*notify)(struct virtqueue *), + void (*callback)(struct virtqueue *), + const char *name) +{ + struct vring_virtqueue *vq; + struct vring_packed_desc *ring; + struct vring_packed_desc_event *driver, *device; + dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr; + size_t ring_size_in_bytes, event_size_in_bytes; + unsigned int i; + + ring_size_in_bytes = num * sizeof(struct vring_packed_desc); + + ring = vring_alloc_queue(vdev, ring_size_in_bytes, + &ring_dma_addr, + GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); + if (!ring) + goto err_ring; + + event_size_in_bytes = sizeof(struct vring_packed_desc_event); + + driver = vring_alloc_queue(vdev, event_size_in_bytes, + &driver_event_dma_addr, + GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); + if (!driver) + goto err_driver; + + device = vring_alloc_queue(vdev, event_size_in_bytes, + &device_event_dma_addr, + GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); + if (!device) + goto err_device; + + vq = kmalloc(sizeof(*vq), GFP_KERNEL); + if (!vq) + goto err_vq; + + vq->vq.callback = callback; + vq->vq.vdev = vdev; + vq->vq.name = name; + vq->vq.num_free = num; + vq->vq.index = index; + vq->we_own_ring = true; + vq->notify = notify; + vq->weak_barriers = weak_barriers; + vq->broken = false; + vq->last_used_idx = 0; + vq->num_added = 0; + vq->packed_ring = true; + vq->use_dma_api = vring_use_dma_api(vdev); + list_add_tail(&vq->vq.list, &vdev->vqs); +#ifdef DEBUG + vq->in_use = false; + vq->last_add_time_valid = false; +#endif + + vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) && + !context; + vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX); + + vq->packed.ring_dma_addr = ring_dma_addr; + vq->packed.driver_event_dma_addr = driver_event_dma_addr; + vq->packed.device_event_dma_addr = device_event_dma_addr; + + vq->packed.ring_size_in_bytes = ring_size_in_bytes; + vq->packed.event_size_in_bytes = event_size_in_bytes; + + vq->packed.vring.num = num; + vq->packed.vring.desc = ring; + vq->packed.vring.driver = driver; + vq->packed.vring.device = device; + + vq->packed.next_avail_idx = 0; + vq->packed.avail_wrap_counter = 1; + vq->packed.used_wrap_counter = 1; + vq->packed.event_flags_shadow = 0; + vq->packed.avail_used_flags = 1 << VRING_PACKED_DESC_F_AVAIL; + + vq->packed.desc_state = kmalloc_array(num, + sizeof(struct vring_desc_state_packed), + GFP_KERNEL); + if (!vq->packed.desc_state) + goto err_desc_state; + + memset(vq->packed.desc_state, 0, + num * sizeof(struct vring_desc_state_packed)); + + /* Put everything in free lists. */ + vq->free_head = 0; + for (i = 0; i < num-1; i++) + vq->packed.desc_state[i].next = i + 1; + + vq->packed.desc_extra = kmalloc_array(num, + sizeof(struct vring_desc_extra_packed), + GFP_KERNEL); + if (!vq->packed.desc_extra) + goto err_desc_extra; + + memset(vq->packed.desc_extra, 0, + num * sizeof(struct vring_desc_extra_packed)); + + /* No callback? Tell other side not to bother us. 
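vring_create_virtqueue_packed() above makes three DMA-coherent allocations: the descriptor ring itself plus two single vring_packed_desc_event structures, one written by the driver and one by the device. A back-of-the-envelope sketch of the sizes, using the fixed virtio 1.1 layouts (the queue size is hypothetical):

#include <stdint.h>
#include <stdio.h>

struct vring_packed_desc {       /* 16 bytes per descriptor */
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;
};

struct vring_packed_desc_event { /* 4 bytes: off_wrap + flags */
        uint16_t off_wrap;
        uint16_t flags;
};

int main(void)
{
        unsigned int num = 256;  /* hypothetical queue size */

        /* Mirrors the three vring_alloc_queue() calls above. */
        printf("ring:   %zu bytes\n", num * sizeof(struct vring_packed_desc));
        printf("driver: %zu bytes\n", sizeof(struct vring_packed_desc_event));
        printf("device: %zu bytes\n", sizeof(struct vring_packed_desc_event));
        return 0;
}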
*/ + if (!callback) { + vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE; + vq->packed.vring.driver->flags = + cpu_to_le16(vq->packed.event_flags_shadow); + } + + return &vq->vq; + +err_desc_extra: + kfree(vq->packed.desc_state); +err_desc_state: + kfree(vq); +err_vq: + vring_free_queue(vdev, event_size_in_bytes, device, device_event_dma_addr); +err_device: + vring_free_queue(vdev, event_size_in_bytes, driver, driver_event_dma_addr); +err_driver: + vring_free_queue(vdev, ring_size_in_bytes, ring, ring_dma_addr); +err_ring: + return NULL; +} + + +/* + * Generic functions and exported symbols. + */ + +static inline int virtqueue_add(struct virtqueue *_vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs, + void *data, + void *ctx, + gfp_t gfp) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg, + out_sgs, in_sgs, data, ctx, gfp) : + virtqueue_add_split(_vq, sgs, total_sg, + out_sgs, in_sgs, data, ctx, gfp); +} + /** * virtqueue_add_sgs - expose buffers to other end * @vq: the struct virtqueue we're talking about. @@ -460,6 +1721,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq, /* Count them first. */ for (i = 0; i < out_sgs + in_sgs; i++) { struct scatterlist *sg; + for (sg = sgs[i]; sg; sg = sg_next(sg)) total_sg++; } @@ -550,34 +1812,9 @@ EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx); bool virtqueue_kick_prepare(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - u16 new, old; - bool needs_kick; - - START_USE(vq); - /* We need to expose available array entries before checking avail - * event. */ - virtio_mb(vq->weak_barriers); - old = vq->avail_idx_shadow - vq->num_added; - new = vq->avail_idx_shadow; - vq->num_added = 0; - -#ifdef DEBUG - if (vq->last_add_time_valid) { - WARN_ON(ktime_to_ms(ktime_sub(ktime_get(), - vq->last_add_time)) > 100); - } - vq->last_add_time_valid = false; -#endif - - if (vq->event) { - needs_kick = vring_need_event(virtio16_to_cpu(_vq->vdev, vring_avail_event(&vq->vring)), - new, old); - } else { - needs_kick = !(vq->vring.used->flags & cpu_to_virtio16(_vq->vdev, VRING_USED_F_NO_NOTIFY)); - } - END_USE(vq); - return needs_kick; + return vq->packed_ring ? virtqueue_kick_prepare_packed(_vq) : + virtqueue_kick_prepare_split(_vq); } EXPORT_SYMBOL_GPL(virtqueue_kick_prepare); @@ -625,60 +1862,6 @@ bool virtqueue_kick(struct virtqueue *vq) } EXPORT_SYMBOL_GPL(virtqueue_kick); -static void detach_buf(struct vring_virtqueue *vq, unsigned int head, - void **ctx) -{ - unsigned int i, j; - __virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT); - - /* Clear data ptr. */ - vq->desc_state[head].data = NULL; - - /* Put back on free list: unmap first-level descriptors and find end */ - i = head; - - while (vq->vring.desc[i].flags & nextflag) { - vring_unmap_one(vq, &vq->vring.desc[i]); - i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next); - vq->vq.num_free++; - } - - vring_unmap_one(vq, &vq->vring.desc[i]); - vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head); - vq->free_head = head; - - /* Plus final descriptor */ - vq->vq.num_free++; - - if (vq->indirect) { - struct vring_desc *indir_desc = vq->desc_state[head].indir_desc; - u32 len; - - /* Free the indirect table, if any, now that it's unmapped. 
*/ - if (!indir_desc) - return; - - len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len); - - BUG_ON(!(vq->vring.desc[head].flags & - cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT))); - BUG_ON(len == 0 || len % sizeof(struct vring_desc)); - - for (j = 0; j < len / sizeof(struct vring_desc); j++) - vring_unmap_one(vq, &indir_desc[j]); - - kfree(indir_desc); - vq->desc_state[head].indir_desc = NULL; - } else if (ctx) { - *ctx = vq->desc_state[head].indir_desc; - } -} - -static inline bool more_used(const struct vring_virtqueue *vq) -{ - return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, vq->vring.used->idx); -} - /** * virtqueue_get_buf - get the next used buffer * @vq: the struct virtqueue we're talking about. @@ -699,57 +1882,9 @@ void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len, void **ctx) { struct vring_virtqueue *vq = to_vvq(_vq); - void *ret; - unsigned int i; - u16 last_used; - - START_USE(vq); - - if (unlikely(vq->broken)) { - END_USE(vq); - return NULL; - } - - if (!more_used(vq)) { - pr_debug("No more buffers in queue\n"); - END_USE(vq); - return NULL; - } - - /* Only get used array entries after they have been exposed by host. */ - virtio_rmb(vq->weak_barriers); - - last_used = (vq->last_used_idx & (vq->vring.num - 1)); - i = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].id); - *len = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].len); - - if (unlikely(i >= vq->vring.num)) { - BAD_RING(vq, "id %u out of range\n", i); - return NULL; - } - if (unlikely(!vq->desc_state[i].data)) { - BAD_RING(vq, "id %u is not a head!\n", i); - return NULL; - } - - /* detach_buf clears data, so grab it now. */ - ret = vq->desc_state[i].data; - detach_buf(vq, i, ctx); - vq->last_used_idx++; - /* If we expect an interrupt for the next entry, tell host - * by writing event index and flush out the write before - * the read in the next get_buf call. */ - if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) - virtio_store_mb(vq->weak_barriers, - &vring_used_event(&vq->vring), - cpu_to_virtio16(_vq->vdev, vq->last_used_idx)); - -#ifdef DEBUG - vq->last_add_time_valid = false; -#endif - END_USE(vq); - return ret; + return vq->packed_ring ? virtqueue_get_buf_ctx_packed(_vq, len, ctx) : + virtqueue_get_buf_ctx_split(_vq, len, ctx); } EXPORT_SYMBOL_GPL(virtqueue_get_buf_ctx); @@ -771,12 +1906,10 @@ void virtqueue_disable_cb(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) { - vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT; - if (!vq->event) - vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow); - } - + if (vq->packed_ring) + virtqueue_disable_cb_packed(_vq); + else + virtqueue_disable_cb_split(_vq); } EXPORT_SYMBOL_GPL(virtqueue_disable_cb); @@ -795,23 +1928,9 @@ EXPORT_SYMBOL_GPL(virtqueue_disable_cb); unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - u16 last_used_idx; - - START_USE(vq); - /* We optimistically turn back on interrupts, then check if there was - * more to do. */ - /* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to - * either clear the flags bit or point the event index at the next - * entry. Always do both to keep code simple. 
*/ - if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) { - vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT; - if (!vq->event) - vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow); - } - vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx); - END_USE(vq); - return last_used_idx; + return vq->packed_ring ? virtqueue_enable_cb_prepare_packed(_vq) : + virtqueue_enable_cb_prepare_split(_vq); } EXPORT_SYMBOL_GPL(virtqueue_enable_cb_prepare); @@ -829,7 +1948,8 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx) struct vring_virtqueue *vq = to_vvq(_vq); virtio_mb(vq->weak_barriers); - return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx); + return vq->packed_ring ? virtqueue_poll_packed(_vq, last_used_idx) : + virtqueue_poll_split(_vq, last_used_idx); } EXPORT_SYMBOL_GPL(virtqueue_poll); @@ -847,6 +1967,7 @@ EXPORT_SYMBOL_GPL(virtqueue_poll); bool virtqueue_enable_cb(struct virtqueue *_vq) { unsigned last_used_idx = virtqueue_enable_cb_prepare(_vq); + return !virtqueue_poll(_vq, last_used_idx); } EXPORT_SYMBOL_GPL(virtqueue_enable_cb); @@ -867,34 +1988,9 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb); bool virtqueue_enable_cb_delayed(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - u16 bufs; - - START_USE(vq); - - /* We optimistically turn back on interrupts, then check if there was - * more to do. */ - /* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to - * either clear the flags bit or point the event index at the next - * entry. Always update the event index to keep code simple. */ - if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) { - vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT; - if (!vq->event) - vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow); - } - /* TODO: tune this threshold */ - bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4; - - virtio_store_mb(vq->weak_barriers, - &vring_used_event(&vq->vring), - cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs)); - - if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) { - END_USE(vq); - return false; - } - END_USE(vq); - return true; + return vq->packed_ring ? virtqueue_enable_cb_delayed_packed(_vq) : + virtqueue_enable_cb_delayed_split(_vq); } EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed); @@ -909,30 +2005,17 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed); void *virtqueue_detach_unused_buf(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - unsigned int i; - void *buf; - - START_USE(vq); - - for (i = 0; i < vq->vring.num; i++) { - if (!vq->desc_state[i].data) - continue; - /* detach_buf clears data, so grab it now. */ - buf = vq->desc_state[i].data; - detach_buf(vq, i, NULL); - vq->avail_idx_shadow--; - vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow); - END_USE(vq); - return buf; - } - /* That should have freed everything. */ - BUG_ON(vq->vq.num_free != vq->vring.num); - END_USE(vq); - return NULL; + return vq->packed_ring ? virtqueue_detach_unused_buf_packed(_vq) : + virtqueue_detach_unused_buf_split(_vq); } EXPORT_SYMBOL_GPL(virtqueue_detach_unused_buf); +static inline bool more_used(const struct vring_virtqueue *vq) +{ + return vq->packed_ring ? 
more_used_packed(vq) : more_used_split(vq); +} + irqreturn_t vring_interrupt(int irq, void *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); @@ -953,6 +2036,7 @@ irqreturn_t vring_interrupt(int irq, void *_vq) } EXPORT_SYMBOL_GPL(vring_interrupt); +/* Only available for split ring */ struct virtqueue *__vring_new_virtqueue(unsigned int index, struct vring vring, struct virtio_device *vdev, @@ -965,27 +2049,26 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index, unsigned int i; struct vring_virtqueue *vq; - vq = kmalloc(sizeof(*vq) + vring.num * sizeof(struct vring_desc_state), - GFP_KERNEL); + if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) + return NULL; + + vq = kmalloc(sizeof(*vq), GFP_KERNEL); if (!vq) return NULL; - vq->vring = vring; + vq->packed_ring = false; vq->vq.callback = callback; vq->vq.vdev = vdev; vq->vq.name = name; vq->vq.num_free = vring.num; vq->vq.index = index; vq->we_own_ring = false; - vq->queue_dma_addr = 0; - vq->queue_size_in_bytes = 0; vq->notify = notify; vq->weak_barriers = weak_barriers; vq->broken = false; vq->last_used_idx = 0; - vq->avail_flags_shadow = 0; - vq->avail_idx_shadow = 0; vq->num_added = 0; + vq->use_dma_api = vring_use_dma_api(vdev); list_add_tail(&vq->vq.list, &vdev->vqs); #ifdef DEBUG vq->in_use = false; @@ -996,65 +2079,39 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index, !context; vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX); + vq->split.queue_dma_addr = 0; + vq->split.queue_size_in_bytes = 0; + + vq->split.vring = vring; + vq->split.avail_flags_shadow = 0; + vq->split.avail_idx_shadow = 0; + /* No callback? Tell other side not to bother us. */ if (!callback) { - vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT; + vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT; if (!vq->event) - vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow); + vq->split.vring.avail->flags = cpu_to_virtio16(vdev, + vq->split.avail_flags_shadow); + } + + vq->split.desc_state = kmalloc_array(vring.num, + sizeof(struct vring_desc_state_split), GFP_KERNEL); + if (!vq->split.desc_state) { + kfree(vq); + return NULL; } /* Put everything in free lists. */ vq->free_head = 0; for (i = 0; i < vring.num-1; i++) - vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1); - memset(vq->desc_state, 0, vring.num * sizeof(struct vring_desc_state)); + vq->split.vring.desc[i].next = cpu_to_virtio16(vdev, i + 1); + memset(vq->split.desc_state, 0, vring.num * + sizeof(struct vring_desc_state_split)); return &vq->vq; } EXPORT_SYMBOL_GPL(__vring_new_virtqueue); -static void *vring_alloc_queue(struct virtio_device *vdev, size_t size, - dma_addr_t *dma_handle, gfp_t flag) -{ - if (vring_use_dma_api(vdev)) { - return dma_alloc_coherent(vdev->dev.parent, size, - dma_handle, flag); - } else { - void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag); - if (queue) { - phys_addr_t phys_addr = virt_to_phys(queue); - *dma_handle = (dma_addr_t)phys_addr; - - /* - * Sanity check: make sure we dind't truncate - * the address. The only arches I can find that - * have 64-bit phys_addr_t but 32-bit dma_addr_t - * are certain non-highmem MIPS and x86 - * configurations, but these configurations - * should never allocate physical pages above 32 - * bits, so this is fine. Just in case, throw a - * warning and abort if we end up with an - * unrepresentable address. 
- */ - if (WARN_ON_ONCE(*dma_handle != phys_addr)) { - free_pages_exact(queue, PAGE_ALIGN(size)); - return NULL; - } - } - return queue; - } -} - -static void vring_free_queue(struct virtio_device *vdev, size_t size, - void *queue, dma_addr_t dma_handle) -{ - if (vring_use_dma_api(vdev)) { - dma_free_coherent(vdev->dev.parent, size, queue, dma_handle); - } else { - free_pages_exact(queue, PAGE_ALIGN(size)); - } -} - struct virtqueue *vring_create_virtqueue( unsigned int index, unsigned int num, @@ -1067,57 +2124,19 @@ struct virtqueue *vring_create_virtqueue( void (*callback)(struct virtqueue *), const char *name) { - struct virtqueue *vq; - void *queue = NULL; - dma_addr_t dma_addr; - size_t queue_size_in_bytes; - struct vring vring; - - /* We assume num is a power of 2. */ - if (num & (num - 1)) { - dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num); - return NULL; - } - - /* TODO: allocate each queue chunk individually */ - for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) { - queue = vring_alloc_queue(vdev, vring_size(num, vring_align), - &dma_addr, - GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); - if (queue) - break; - } - if (!num) - return NULL; - - if (!queue) { - /* Try to get a single page. You are my only hope! */ - queue = vring_alloc_queue(vdev, vring_size(num, vring_align), - &dma_addr, GFP_KERNEL|__GFP_ZERO); - } - if (!queue) - return NULL; - - queue_size_in_bytes = vring_size(num, vring_align); - vring_init(&vring, num, queue, vring_align); - - vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers, context, - notify, callback, name); - if (!vq) { - vring_free_queue(vdev, queue_size_in_bytes, queue, - dma_addr); - return NULL; - } - - to_vvq(vq)->queue_dma_addr = dma_addr; - to_vvq(vq)->queue_size_in_bytes = queue_size_in_bytes; - to_vvq(vq)->we_own_ring = true; + if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) + return vring_create_virtqueue_packed(index, num, vring_align, + vdev, weak_barriers, may_reduce_num, + context, notify, callback, name); - return vq; + return vring_create_virtqueue_split(index, num, vring_align, + vdev, weak_barriers, may_reduce_num, + context, notify, callback, name); } EXPORT_SYMBOL_GPL(vring_create_virtqueue); +/* Only available for split ring */ struct virtqueue *vring_new_virtqueue(unsigned int index, unsigned int num, unsigned int vring_align, @@ -1130,6 +2149,10 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, const char *name) { struct vring vring; + + if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) + return NULL; + vring_init(&vring, num, pages, vring_align); return __vring_new_virtqueue(index, vring, vdev, weak_barriers, context, notify, callback, name); @@ -1141,8 +2164,32 @@ void vring_del_virtqueue(struct virtqueue *_vq) struct vring_virtqueue *vq = to_vvq(_vq); if (vq->we_own_ring) { - vring_free_queue(vq->vq.vdev, vq->queue_size_in_bytes, - vq->vring.desc, vq->queue_dma_addr); + if (vq->packed_ring) { + vring_free_queue(vq->vq.vdev, + vq->packed.ring_size_in_bytes, + vq->packed.vring.desc, + vq->packed.ring_dma_addr); + + vring_free_queue(vq->vq.vdev, + vq->packed.event_size_in_bytes, + vq->packed.vring.driver, + vq->packed.driver_event_dma_addr); + + vring_free_queue(vq->vq.vdev, + vq->packed.event_size_in_bytes, + vq->packed.vring.device, + vq->packed.device_event_dma_addr); + + kfree(vq->packed.desc_state); + kfree(vq->packed.desc_extra); + } else { + vring_free_queue(vq->vq.vdev, + vq->split.queue_size_in_bytes, + vq->split.vring.desc, + vq->split.queue_dma_addr); + + 
kfree(vq->split.desc_state); + } } list_del(&_vq->list); kfree(vq); @@ -1164,6 +2211,8 @@ void vring_transport_features(struct virtio_device *vdev) break; case VIRTIO_F_IOMMU_PLATFORM: break; + case VIRTIO_F_RING_PACKED: + break; default: /* We don't understand this bit. */ __virtio_clear_bit(vdev, i); @@ -1184,7 +2233,7 @@ unsigned int virtqueue_get_vring_size(struct virtqueue *_vq) struct vring_virtqueue *vq = to_vvq(_vq); - return vq->vring.num; + return vq->packed_ring ? vq->packed.vring.num : vq->split.vring.num; } EXPORT_SYMBOL_GPL(virtqueue_get_vring_size); @@ -1217,7 +2266,10 @@ dma_addr_t virtqueue_get_desc_addr(struct virtqueue *_vq) BUG_ON(!vq->we_own_ring); - return vq->queue_dma_addr; + if (vq->packed_ring) + return vq->packed.ring_dma_addr; + + return vq->split.queue_dma_addr; } EXPORT_SYMBOL_GPL(virtqueue_get_desc_addr); @@ -1227,8 +2279,11 @@ dma_addr_t virtqueue_get_avail_addr(struct virtqueue *_vq) BUG_ON(!vq->we_own_ring); - return vq->queue_dma_addr + - ((char *)vq->vring.avail - (char *)vq->vring.desc); + if (vq->packed_ring) + return vq->packed.driver_event_dma_addr; + + return vq->split.queue_dma_addr + + ((char *)vq->split.vring.avail - (char *)vq->split.vring.desc); } EXPORT_SYMBOL_GPL(virtqueue_get_avail_addr); @@ -1238,14 +2293,18 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *_vq) BUG_ON(!vq->we_own_ring); - return vq->queue_dma_addr + - ((char *)vq->vring.used - (char *)vq->vring.desc); + if (vq->packed_ring) + return vq->packed.device_event_dma_addr; + + return vq->split.queue_dma_addr + + ((char *)vq->split.vring.used - (char *)vq->split.vring.desc); } EXPORT_SYMBOL_GPL(virtqueue_get_used_addr); +/* Only available for split ring */ const struct vring *virtqueue_get_vring(struct virtqueue *vq) { - return &to_vvq(vq)->vring; + return &to_vvq(vq)->split.vring; } EXPORT_SYMBOL_GPL(virtqueue_get_vring); diff --git a/drivers/w1/Kconfig b/drivers/w1/Kconfig index 6743bde038cc..dbb41f45af8a 100644 --- a/drivers/w1/Kconfig +++ b/drivers/w1/Kconfig @@ -25,7 +25,7 @@ config W1_CON 2. Userspace commands. Includes read/write and search/alarm search commands. 3. Replies to userspace commands. -source drivers/w1/masters/Kconfig -source drivers/w1/slaves/Kconfig +source "drivers/w1/masters/Kconfig" +source "drivers/w1/slaves/Kconfig" endif # W1 diff --git a/drivers/watchdog/scx200_wdt.c b/drivers/watchdog/scx200_wdt.c index 836377cf9271..ec4063ebb41a 100644 --- a/drivers/watchdog/scx200_wdt.c +++ b/drivers/watchdog/scx200_wdt.c @@ -262,10 +262,3 @@ static void __exit scx200_wdt_cleanup(void) module_init(scx200_wdt_init); module_exit(scx200_wdt_cleanup); - -/* - Local variables: - compile-command: "make -k -C ../.. 
SUBDIRS=drivers/char modules" - c-basic-offset: 8 - End: -*/ diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c index 221b7333d067..ceb5048de9a7 100644 --- a/drivers/xen/balloon.c +++ b/drivers/xen/balloon.c @@ -352,7 +352,7 @@ static enum bp_state reserve_additional_memory(void) mutex_unlock(&balloon_mutex); /* add_memory_resource() requires the device_hotplug lock */ lock_device_hotplug(); - rc = add_memory_resource(nid, resource, memhp_auto_online); + rc = add_memory_resource(nid, resource); unlock_device_hotplug(); mutex_lock(&balloon_mutex); diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c index b0b02a501167..5efc5eee9544 100644 --- a/drivers/xen/gntdev.c +++ b/drivers/xen/gntdev.c @@ -520,26 +520,26 @@ static int unmap_if_in_range(struct gntdev_grant_map *map, } static int mn_invl_range_start(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end, - bool blockable) + const struct mmu_notifier_range *range) { struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn); struct gntdev_grant_map *map; int ret = 0; - if (blockable) + if (range->blockable) mutex_lock(&priv->lock); else if (!mutex_trylock(&priv->lock)) return -EAGAIN; list_for_each_entry(map, &priv->maps, next) { - ret = unmap_if_in_range(map, start, end, blockable); + ret = unmap_if_in_range(map, range->start, range->end, + range->blockable); if (ret) goto out_unlock; } list_for_each_entry(map, &priv->freeable_maps, next) { - ret = unmap_if_in_range(map, start, end, blockable); + ret = unmap_if_in_range(map, range->start, range->end, + range->blockable); if (ret) goto out_unlock; } diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 2a7f545bd0b5..989cf872b98c 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -53,8 +53,6 @@ * API. */ -#define XEN_SWIOTLB_ERROR_CODE (~(dma_addr_t)0x0) - static char *xen_io_tlb_start, *xen_io_tlb_end; static unsigned long xen_io_tlb_nslabs; /* @@ -405,8 +403,8 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page, map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir, attrs); - if (map == SWIOTLB_MAP_ERROR) - return XEN_SWIOTLB_ERROR_CODE; + if (map == DMA_MAPPING_ERROR) + return DMA_MAPPING_ERROR; dev_addr = xen_phys_to_bus(map); xen_dma_map_page(dev, pfn_to_page(map >> PAGE_SHIFT), @@ -421,7 +419,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page, attrs |= DMA_ATTR_SKIP_CPU_SYNC; swiotlb_tbl_unmap_single(dev, map, size, dir, attrs); - return XEN_SWIOTLB_ERROR_CODE; + return DMA_MAPPING_ERROR; } /* @@ -443,21 +441,8 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr, xen_dma_unmap_page(hwdev, dev_addr, size, dir, attrs); /* NOTE: We use dev_addr here, not paddr! */ - if (is_xen_swiotlb_buffer(dev_addr)) { + if (is_xen_swiotlb_buffer(dev_addr)) swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs); - return; - } - - if (dir != DMA_FROM_DEVICE) - return; - - /* - * phys_to_virt doesn't work with hihgmem page but we could - * call dma_mark_clean() with hihgmem page here. However, we - * are fine since dma_mark_clean() is null on POWERPC. We can - * make dma_mark_clean() take a physical address if necessary. 
- */ - dma_mark_clean(phys_to_virt(paddr), size); } static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr, @@ -495,11 +480,6 @@ xen_swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr, if (target == SYNC_FOR_DEVICE) xen_dma_sync_single_for_device(hwdev, dev_addr, size, dir); - - if (dir != DMA_FROM_DEVICE) - return; - - dma_mark_clean(phys_to_virt(paddr), size); } void @@ -574,7 +554,7 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, sg_phys(sg), sg->length, dir, attrs); - if (map == SWIOTLB_MAP_ERROR) { + if (map == DMA_MAPPING_ERROR) { dev_warn(hwdev, "swiotlb buffer is full\n"); /* Don't panic here, we expect map_sg users to do proper error handling. */ @@ -700,11 +680,6 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt, return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs); } -static int xen_swiotlb_mapping_error(struct device *dev, dma_addr_t dma_addr) -{ - return dma_addr == XEN_SWIOTLB_ERROR_CODE; -} - const struct dma_map_ops xen_swiotlb_dma_ops = { .alloc = xen_swiotlb_alloc_coherent, .free = xen_swiotlb_free_coherent, @@ -719,5 +694,4 @@ const struct dma_map_ops xen_swiotlb_dma_ops = { .dma_supported = xen_swiotlb_dma_supported, .mmap = xen_swiotlb_dma_mmap, .get_sgtable = xen_swiotlb_get_sgtable, - .mapping_error = xen_swiotlb_mapping_error, }; diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c index 14a3d4cbc2a7..c9e23a126218 100644 --- a/drivers/xen/xen-scsiback.c +++ b/drivers/xen/xen-scsiback.c @@ -1712,11 +1712,6 @@ static struct configfs_attribute *scsiback_wwn_attrs[] = { NULL, }; -static char *scsiback_get_fabric_name(void) -{ - return "xen-pvscsi"; -} - static int scsiback_port_link(struct se_portal_group *se_tpg, struct se_lun *lun) { @@ -1810,8 +1805,7 @@ static int scsiback_check_false(struct se_portal_group *se_tpg) static const struct target_core_fabric_ops scsiback_ops = { .module = THIS_MODULE, - .name = "xen-pvscsi", - .get_fabric_name = scsiback_get_fabric_name, + .fabric_name = "xen-pvscsi", .tpg_get_wwn = scsiback_get_fabric_wwn, .tpg_get_tag = scsiback_get_tag, .tpg_check_demo_mode = scsiback_check_true, diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c index 5165aa82bf7d..246f6122c9ee 100644 --- a/drivers/xen/xen-selfballoon.c +++ b/drivers/xen/xen-selfballoon.c @@ -189,7 +189,7 @@ static void selfballoon_process(struct work_struct *work) bool reset_timer = false; if (xen_selfballooning_enabled) { - cur_pages = totalram_pages; + cur_pages = totalram_pages(); tgt_pages = cur_pages; /* default is no change */ goal_pages = vm_memory_committed() + totalreserve_pages + @@ -227,7 +227,7 @@ static void selfballoon_process(struct work_struct *work) if (tgt_pages < floor_pages) tgt_pages = floor_pages; balloon_set_new_target(tgt_pages + - balloon_stats.current_pages - totalram_pages); + balloon_stats.current_pages - totalram_pages()); reset_timer = true; } #ifdef CONFIG_FRONTSWAP @@ -569,7 +569,7 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink) * much more reliably and response faster in some cases. 
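The swiotlb-xen hunks above retire the driver-private XEN_SWIOTLB_ERROR_CODE cookie and the ->mapping_error() callback in favor of the generic DMA_MAPPING_ERROR sentinel, which callers compare against the returned bus address directly. The shape of that pattern as a toy (types and values invented; DMA_MAPPING_ERROR really is an all-ones dma_addr_t):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;
#define DMA_MAPPING_ERROR (~(dma_addr_t)0) /* generic sentinel */

static dma_addr_t map_page(int fail)
{
        if (fail)
                return DMA_MAPPING_ERROR; /* no per-driver cookie */
        return 0x1000;                    /* hypothetical bus address */
}

int main(void)
{
        dma_addr_t addr = map_page(1);

        if (addr == DMA_MAPPING_ERROR)    /* replaces ->mapping_error() */
                printf("mapping failed\n");
        return 0;
}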
*/ if (!selfballoon_reserved_mb) { - reserve_pages = totalram_pages / 10; + reserve_pages = totalram_pages() / 10; selfballoon_reserved_mb = PAGES2MB(reserve_pages); } schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ); diff --git a/firmware/.gitignore b/firmware/.gitignore index d9c69017bc9a..9c8bdb9fdcc3 100644 --- a/firmware/.gitignore +++ b/firmware/.gitignore @@ -1,6 +1 @@ *.gen.S -*.fw -*.bin -*.csp -*.dsp -ihex2fw diff --git a/firmware/Makefile b/firmware/Makefile index 29641383e136..e2f7dd2f30e0 100644 --- a/firmware/Makefile +++ b/firmware/Makefile @@ -1,61 +1,41 @@ # SPDX-License-Identifier: GPL-2.0 -# -# kbuild file for firmware/ -# -# Create $(fwabs) from $(CONFIG_EXTRA_FIRMWARE_DIR) -- if it doesn't have a +# Create $(fwdir) from $(CONFIG_EXTRA_FIRMWARE_DIR) -- if it doesn't have a # leading /, it's relative to $(srctree). fwdir := $(subst $(quote),,$(CONFIG_EXTRA_FIRMWARE_DIR)) -fwabs := $(addprefix $(srctree)/,$(filter-out /%,$(fwdir)))$(filter /%,$(fwdir)) - -fw-external-y := $(subst $(quote),,$(CONFIG_EXTRA_FIRMWARE)) - -quiet_cmd_fwbin = MK_FW $@ - cmd_fwbin = FWNAME="$(patsubst firmware/%.gen.S,%,$@)"; \ - FWSTR="$(subst /,_,$(subst .,_,$(subst -,_,$(patsubst \ - firmware/%.gen.S,%,$@))))"; \ - ASM_WORD=$(if $(CONFIG_64BIT),.quad,.long); \ - ASM_ALIGN=$(if $(CONFIG_64BIT),3,2); \ - PROGBITS=$(if $(CONFIG_ARM),%,@)progbits; \ - echo "/* Generated by firmware/Makefile */" > $@;\ - echo " .section .rodata" >>$@;\ - echo " .p2align $${ASM_ALIGN}" >>$@;\ - echo "_fw_$${FWSTR}_bin:" >>$@;\ - echo " .incbin \"$(2)\"" >>$@;\ - echo "_fw_end:" >>$@;\ - echo " .section .rodata.str,\"aMS\",$${PROGBITS},1" >>$@;\ - echo " .p2align $${ASM_ALIGN}" >>$@;\ - echo "_fw_$${FWSTR}_name:" >>$@;\ - echo " .string \"$$FWNAME\"" >>$@;\ - echo " .section .builtin_fw,\"a\",$${PROGBITS}" >>$@;\ - echo " .p2align $${ASM_ALIGN}" >>$@;\ - echo " $${ASM_WORD} _fw_$${FWSTR}_name" >>$@;\ - echo " $${ASM_WORD} _fw_$${FWSTR}_bin" >>$@;\ - echo " $${ASM_WORD} _fw_end - _fw_$${FWSTR}_bin" >>$@; - -# One of these files will change, or come into existence, whenever -# the configuration changes between 32-bit and 64-bit. The .S files -# need to change when that happens. 
-wordsize_deps := $(wildcard include/config/64bit.h include/config/32bit.h \ - include/config/ppc32.h include/config/ppc64.h \ - include/config/superh32.h include/config/superh64.h \ - include/config/x86_32.h include/config/x86_64.h \ - firmware/Makefile) - -$(patsubst %,$(obj)/%.gen.S, $(fw-external-y)): %: $(wordsize_deps) \ - include/config/extra/firmware/dir.h - $(call cmd,fwbin,$(fwabs)/$(patsubst $(obj)/%.gen.S,%,$@)) +fwdir := $(addprefix $(srctree)/,$(filter-out /%,$(fwdir)))$(filter /%,$(fwdir)) + +obj-y := $(addsuffix .gen.o, $(subst $(quote),,$(CONFIG_EXTRA_FIRMWARE))) + +FWNAME = $(patsubst $(obj)/%.gen.S,%,$@) +FWSTR = $(subst /,_,$(subst .,_,$(subst -,_,$(FWNAME)))) +ASM_WORD = $(if $(CONFIG_64BIT),.quad,.long) +ASM_ALIGN = $(if $(CONFIG_64BIT),3,2) +PROGBITS = $(if $(CONFIG_ARM),%,@)progbits + +filechk_fwbin = { \ + echo "/* Generated by $(src)/Makefile */" ;\ + echo " .section .rodata" ;\ + echo " .p2align $(ASM_ALIGN)" ;\ + echo "_fw_$(FWSTR)_bin:" ;\ + echo " .incbin \"$(fwdir)/$(FWNAME)\"" ;\ + echo "_fw_end:" ;\ + echo " .section .rodata.str,\"aMS\",$(PROGBITS),1" ;\ + echo " .p2align $(ASM_ALIGN)" ;\ + echo "_fw_$(FWSTR)_name:" ;\ + echo " .string \"$(FWNAME)\"" ;\ + echo " .section .builtin_fw,\"a\",$(PROGBITS)" ;\ + echo " .p2align $(ASM_ALIGN)" ;\ + echo " $(ASM_WORD) _fw_$(FWSTR)_name" ;\ + echo " $(ASM_WORD) _fw_$(FWSTR)_bin" ;\ + echo " $(ASM_WORD) _fw_end - _fw_$(FWSTR)_bin" ;\ +} + +$(obj)/%.gen.S: FORCE + $(call filechk,fwbin) # The .o files depend on the binaries directly; the .S files don't. -$(patsubst %,$(obj)/%.gen.o, $(fw-external-y)): $(obj)/%.gen.o: $(fwdir)/% - -obj-y += $(patsubst %,%.gen.o, $(fw-external-y)) - -ifeq ($(KBUILD_SRC),) -# Makefile.build only creates subdirectories for O= builds, but external -# firmware might live outside the kernel source tree -_dummy := $(foreach d,$(addprefix $(obj)/,$(dir $(fw-external-y))), $(shell [ -d $(d) ] || mkdir -p $(d))) -endif +$(addprefix $(obj)/, $(obj-y)): $(obj)/%.gen.o: $(fwdir)/% targets := $(patsubst $(obj)/%,%, \ $(shell find $(obj) -name \*.gen.S 2>/dev/null)) diff --git a/fs/aio.c b/fs/aio.c index aac9659381d2..b906ff70c90f 100644 --- a/fs/aio.c +++ b/fs/aio.c @@ -70,6 +70,12 @@ struct aio_ring { struct io_event io_events[0]; }; /* 128 bytes + ring size */ +/* + * Plugging is meant to work with larger batches of IOs. If we don't + * have more than the below, then don't bother setting up a plug. + */ +#define AIO_PLUG_THRESHOLD 2 + #define AIO_RING_PAGES 8 struct kioctx_table { @@ -409,7 +415,7 @@ static int aio_migratepage(struct address_space *mapping, struct page *new, BUG_ON(PageWriteback(old)); get_page(new); - rc = migrate_page_move_mapping(mapping, new, old, NULL, mode, 1); + rc = migrate_page_move_mapping(mapping, new, old, mode, 1); if (rc != MIGRATEPAGE_SUCCESS) { put_page(new); goto out_unlock; @@ -902,7 +908,7 @@ static void put_reqs_available(struct kioctx *ctx, unsigned nr) local_irq_restore(flags); } -static bool get_reqs_available(struct kioctx *ctx) +static bool __get_reqs_available(struct kioctx *ctx) { struct kioctx_cpu *kcpu; bool ret = false; @@ -994,6 +1000,14 @@ static void user_refill_reqs_available(struct kioctx *ctx) spin_unlock_irq(&ctx->completion_lock); } +static bool get_reqs_available(struct kioctx *ctx) +{ + if (__get_reqs_available(ctx)) + return true; + user_refill_reqs_available(ctx); + return __get_reqs_available(ctx); +} + /* aio_get_req * Allocate a slot for an aio request. * Returns NULL if no requests are free. 
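The reworked get_reqs_available() above folds the sequence that aio_get_req() used to open-code (try the per-cpu fast path, refill from completed events, retry once) into a single helper. Reduced to a runnable toy, with the pool layout and names invented for illustration:

#include <stdbool.h>
#include <stdio.h>

struct pool {
        int percpu;     /* per-cpu batch, cf. kioctx_cpu->reqs_available */
        int completed;  /* slots recoverable from the completion ring */
};

static bool try_take(struct pool *p) /* cf. __get_reqs_available() */
{
        if (p->percpu > 0) {
                p->percpu--;
                return true;
        }
        return false;
}

static void refill(struct pool *p) /* cf. user_refill_reqs_available() */
{
        p->percpu += p->completed;
        p->completed = 0;
}

static bool take(struct pool *p) /* cf. get_reqs_available() */
{
        if (try_take(p))
                return true;
        refill(p);
        return try_take(p); /* one retry; caller returns -EAGAIN on false */
}

int main(void)
{
        struct pool p = { .percpu = 0, .completed = 1 };

        printf("%d\n", take(&p)); /* 1: the refill recovered a slot */
        return 0;
}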
@@ -1002,24 +1016,16 @@ static inline struct aio_kiocb *aio_get_req(struct kioctx *ctx) { struct aio_kiocb *req; - if (!get_reqs_available(ctx)) { - user_refill_reqs_available(ctx); - if (!get_reqs_available(ctx)) - return NULL; - } - - req = kmem_cache_alloc(kiocb_cachep, GFP_KERNEL|__GFP_ZERO); + req = kmem_cache_alloc(kiocb_cachep, GFP_KERNEL); if (unlikely(!req)) - goto out_put; + return NULL; percpu_ref_get(&ctx->reqs); + req->ki_ctx = ctx; INIT_LIST_HEAD(&req->ki_list); refcount_set(&req->ki_refcnt, 0); - req->ki_ctx = ctx; + req->ki_eventfd = NULL; return req; -out_put: - put_reqs_available(ctx, 1); - return NULL; } static struct kioctx *lookup_ioctx(unsigned long ctx_id) @@ -1059,6 +1065,15 @@ static inline void iocb_put(struct aio_kiocb *iocb) } } +static void aio_fill_event(struct io_event *ev, struct aio_kiocb *iocb, + long res, long res2) +{ + ev->obj = (u64)(unsigned long)iocb->ki_user_iocb; + ev->data = iocb->ki_user_data; + ev->res = res; + ev->res2 = res2; +} + /* aio_complete * Called when the io request on the given iocb is complete. */ @@ -1086,10 +1101,7 @@ static void aio_complete(struct aio_kiocb *iocb, long res, long res2) ev_page = kmap_atomic(ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE]); event = ev_page + pos % AIO_EVENTS_PER_PAGE; - event->obj = (u64)(unsigned long)iocb->ki_user_iocb; - event->data = iocb->ki_user_data; - event->res = res; - event->res2 = res2; + aio_fill_event(event, iocb, res, res2); kunmap_atomic(ev_page); flush_dcache_page(ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE]); @@ -1416,7 +1428,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2) aio_complete(iocb, res, res2); } -static int aio_prep_rw(struct kiocb *req, struct iocb *iocb) +static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb) { int ret; @@ -1438,21 +1450,26 @@ static int aio_prep_rw(struct kiocb *req, struct iocb *iocb) ret = ioprio_check_cap(iocb->aio_reqprio); if (ret) { pr_debug("aio ioprio check cap error: %d\n", ret); - fput(req->ki_filp); - return ret; + goto out_fput; } req->ki_ioprio = iocb->aio_reqprio; } else - req->ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0); + req->ki_ioprio = get_current_ioprio(); ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags); if (unlikely(ret)) - fput(req->ki_filp); + goto out_fput; + + req->ki_flags &= ~IOCB_HIPRI; /* no one is going to poll for this I/O */ + return 0; + +out_fput: + fput(req->ki_filp); return ret; } -static int aio_setup_rw(int rw, struct iocb *iocb, struct iovec **iovec, +static int aio_setup_rw(int rw, const struct iocb *iocb, struct iovec **iovec, bool vectored, bool compat, struct iov_iter *iter) { void __user *buf = (void __user *)(uintptr_t)iocb->aio_buf; @@ -1487,12 +1504,12 @@ static inline void aio_rw_done(struct kiocb *req, ssize_t ret) ret = -EINTR; /*FALLTHRU*/ default: - aio_complete_rw(req, ret, 0); + req->ki_complete(req, ret, 0); } } -static ssize_t aio_read(struct kiocb *req, struct iocb *iocb, bool vectored, - bool compat) +static ssize_t aio_read(struct kiocb *req, const struct iocb *iocb, + bool vectored, bool compat) { struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; struct iov_iter iter; @@ -1524,8 +1541,8 @@ out_fput: return ret; } -static ssize_t aio_write(struct kiocb *req, struct iocb *iocb, bool vectored, - bool compat) +static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb, + bool vectored, bool compat) { struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; struct iov_iter iter; @@ -1580,7 +1597,8 @@ static void 
aio_fsync_work(struct work_struct *work) aio_complete(container_of(req, struct aio_kiocb, fsync), ret, 0); } -static int aio_fsync(struct fsync_iocb *req, struct iocb *iocb, bool datasync) +static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb, + bool datasync) { if (unlikely(iocb->aio_buf || iocb->aio_offset || iocb->aio_nbytes || iocb->aio_rw_flags)) @@ -1708,7 +1726,7 @@ aio_poll_queue_proc(struct file *file, struct wait_queue_head *head, add_wait_queue(head, &pt->iocb->poll.wait); } -static ssize_t aio_poll(struct aio_kiocb *aiocb, struct iocb *iocb) +static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb) { struct kioctx *ctx = aiocb->ki_ctx; struct poll_iocb *req = &aiocb->poll; @@ -1728,6 +1746,10 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, struct iocb *iocb) if (unlikely(!req->file)) return -EBADF; + req->head = NULL; + req->woken = false; + req->cancelled = false; + apt.pt._qproc = aio_poll_queue_proc; apt.pt._key = req->events; apt.iocb = aiocb; @@ -1776,44 +1798,44 @@ out: return 0; } -static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb, - bool compat) +static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb, + struct iocb __user *user_iocb, bool compat) { struct aio_kiocb *req; - struct iocb iocb; ssize_t ret; - if (unlikely(copy_from_user(&iocb, user_iocb, sizeof(iocb)))) - return -EFAULT; - /* enforce forwards compatibility on users */ - if (unlikely(iocb.aio_reserved2)) { + if (unlikely(iocb->aio_reserved2)) { pr_debug("EINVAL: reserve field set\n"); return -EINVAL; } /* prevent overflows */ if (unlikely( - (iocb.aio_buf != (unsigned long)iocb.aio_buf) || - (iocb.aio_nbytes != (size_t)iocb.aio_nbytes) || - ((ssize_t)iocb.aio_nbytes < 0) + (iocb->aio_buf != (unsigned long)iocb->aio_buf) || + (iocb->aio_nbytes != (size_t)iocb->aio_nbytes) || + ((ssize_t)iocb->aio_nbytes < 0) )) { pr_debug("EINVAL: overflow check\n"); return -EINVAL; } + if (!get_reqs_available(ctx)) + return -EAGAIN; + + ret = -EAGAIN; req = aio_get_req(ctx); if (unlikely(!req)) - return -EAGAIN; + goto out_put_reqs_available; - if (iocb.aio_flags & IOCB_FLAG_RESFD) { + if (iocb->aio_flags & IOCB_FLAG_RESFD) { /* * If the IOCB_FLAG_RESFD flag of aio_flags is set, get an * instance of the file* now. The file descriptor must be * an eventfd() fd, and will be signaled for each completed * event using the eventfd_signal() function. 
*/ - req->ki_eventfd = eventfd_ctx_fdget((int) iocb.aio_resfd); + req->ki_eventfd = eventfd_ctx_fdget((int) iocb->aio_resfd); if (IS_ERR(req->ki_eventfd)) { ret = PTR_ERR(req->ki_eventfd); req->ki_eventfd = NULL; @@ -1828,32 +1850,32 @@ static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb, } req->ki_user_iocb = user_iocb; - req->ki_user_data = iocb.aio_data; + req->ki_user_data = iocb->aio_data; - switch (iocb.aio_lio_opcode) { + switch (iocb->aio_lio_opcode) { case IOCB_CMD_PREAD: - ret = aio_read(&req->rw, &iocb, false, compat); + ret = aio_read(&req->rw, iocb, false, compat); break; case IOCB_CMD_PWRITE: - ret = aio_write(&req->rw, &iocb, false, compat); + ret = aio_write(&req->rw, iocb, false, compat); break; case IOCB_CMD_PREADV: - ret = aio_read(&req->rw, &iocb, true, compat); + ret = aio_read(&req->rw, iocb, true, compat); break; case IOCB_CMD_PWRITEV: - ret = aio_write(&req->rw, &iocb, true, compat); + ret = aio_write(&req->rw, iocb, true, compat); break; case IOCB_CMD_FSYNC: - ret = aio_fsync(&req->fsync, &iocb, false); + ret = aio_fsync(&req->fsync, iocb, false); break; case IOCB_CMD_FDSYNC: - ret = aio_fsync(&req->fsync, &iocb, true); + ret = aio_fsync(&req->fsync, iocb, true); break; case IOCB_CMD_POLL: - ret = aio_poll(req, &iocb); + ret = aio_poll(req, iocb); break; default: - pr_debug("invalid aio operation %d\n", iocb.aio_lio_opcode); + pr_debug("invalid aio operation %d\n", iocb->aio_lio_opcode); ret = -EINVAL; break; } @@ -1867,14 +1889,25 @@ static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb, goto out_put_req; return 0; out_put_req: - put_reqs_available(ctx, 1); - percpu_ref_put(&ctx->reqs); if (req->ki_eventfd) eventfd_ctx_put(req->ki_eventfd); - kmem_cache_free(kiocb_cachep, req); + iocb_put(req); +out_put_reqs_available: + put_reqs_available(ctx, 1); return ret; } +static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb, + bool compat) +{ + struct iocb iocb; + + if (unlikely(copy_from_user(&iocb, user_iocb, sizeof(iocb)))) + return -EFAULT; + + return __io_submit_one(ctx, &iocb, user_iocb, compat); +} + /* sys_io_submit: * Queue the nr iocbs pointed to by iocbpp for processing. Returns * the number of iocbs queued. May return -EINVAL if the aio_context @@ -1907,7 +1940,8 @@ SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr, if (nr > ctx->nr_events) nr = ctx->nr_events; - blk_start_plug(&plug); + if (nr > AIO_PLUG_THRESHOLD) + blk_start_plug(&plug); for (i = 0; i < nr; i++) { struct iocb __user *user_iocb; @@ -1920,7 +1954,8 @@ SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr, if (ret) break; } - blk_finish_plug(&plug); + if (nr > AIO_PLUG_THRESHOLD) + blk_finish_plug(&plug); percpu_ref_put(&ctx->users); return i ? i : ret; @@ -1947,7 +1982,8 @@ COMPAT_SYSCALL_DEFINE3(io_submit, compat_aio_context_t, ctx_id, if (nr > ctx->nr_events) nr = ctx->nr_events; - blk_start_plug(&plug); + if (nr > AIO_PLUG_THRESHOLD) + blk_start_plug(&plug); for (i = 0; i < nr; i++) { compat_uptr_t user_iocb; @@ -1960,7 +1996,8 @@ COMPAT_SYSCALL_DEFINE3(io_submit, compat_aio_context_t, ctx_id, if (ret) break; } - blk_finish_plug(&plug); + if (nr > AIO_PLUG_THRESHOLD) + blk_finish_plug(&plug); percpu_ref_put(&ctx->users); return i ? i : ret; @@ -2065,11 +2102,13 @@ static long do_io_getevents(aio_context_t ctx_id, * specifies an infinite timeout. Note that the timeout pointed to by * timeout is relative. Will fail with -ENOSYS if not implemented. 
*/ +#if !defined(CONFIG_64BIT_TIME) || defined(CONFIG_64BIT) + SYSCALL_DEFINE5(io_getevents, aio_context_t, ctx_id, long, min_nr, long, nr, struct io_event __user *, events, - struct timespec __user *, timeout) + struct __kernel_timespec __user *, timeout) { struct timespec64 ts; int ret; @@ -2083,6 +2122,8 @@ SYSCALL_DEFINE5(io_getevents, aio_context_t, ctx_id, return ret; } +#endif + struct __aio_sigset { const sigset_t __user *sigmask; size_t sigsetsize; @@ -2093,7 +2134,7 @@ SYSCALL_DEFINE6(io_pgetevents, long, min_nr, long, nr, struct io_event __user *, events, - struct timespec __user *, timeout, + struct __kernel_timespec __user *, timeout, const struct __aio_sigset __user *, usig) { struct __aio_sigset ksig = { NULL, }; @@ -2107,33 +2148,56 @@ SYSCALL_DEFINE6(io_pgetevents, if (usig && copy_from_user(&ksig, usig, sizeof(ksig))) return -EFAULT; - if (ksig.sigmask) { - if (ksig.sigsetsize != sizeof(sigset_t)) - return -EINVAL; - if (copy_from_user(&ksigmask, ksig.sigmask, sizeof(ksigmask))) - return -EFAULT; - sigdelsetmask(&ksigmask, sigmask(SIGKILL) | sigmask(SIGSTOP)); - sigprocmask(SIG_SETMASK, &ksigmask, &sigsaved); - } + ret = set_user_sigmask(ksig.sigmask, &ksigmask, &sigsaved, ksig.sigsetsize); + if (ret) + return ret; ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &ts : NULL); - if (signal_pending(current)) { - if (ksig.sigmask) { - current->saved_sigmask = sigsaved; - set_restore_sigmask(); - } + restore_user_sigmask(ksig.sigmask, &sigsaved); + if (signal_pending(current) && !ret) + ret = -ERESTARTNOHAND; - if (!ret) - ret = -ERESTARTNOHAND; - } else { - if (ksig.sigmask) - sigprocmask(SIG_SETMASK, &sigsaved, NULL); - } + return ret; +} + +#if defined(CONFIG_COMPAT_32BIT_TIME) && !defined(CONFIG_64BIT) + +SYSCALL_DEFINE6(io_pgetevents_time32, + aio_context_t, ctx_id, + long, min_nr, + long, nr, + struct io_event __user *, events, + struct old_timespec32 __user *, timeout, + const struct __aio_sigset __user *, usig) +{ + struct __aio_sigset ksig = { NULL, }; + sigset_t ksigmask, sigsaved; + struct timespec64 ts; + int ret; + + if (timeout && unlikely(get_old_timespec32(&ts, timeout))) + return -EFAULT; + + if (usig && copy_from_user(&ksig, usig, sizeof(ksig))) + return -EFAULT; + + + ret = set_user_sigmask(ksig.sigmask, &ksigmask, &sigsaved, ksig.sigsetsize); + if (ret) + return ret; + + ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? 
&ts : NULL); + restore_user_sigmask(ksig.sigmask, &sigsaved); + if (signal_pending(current) && !ret) + ret = -ERESTARTNOHAND; return ret; } -#ifdef CONFIG_COMPAT +#endif + +#if defined(CONFIG_COMPAT_32BIT_TIME) + COMPAT_SYSCALL_DEFINE5(io_getevents, compat_aio_context_t, ctx_id, compat_long_t, min_nr, compat_long_t, nr, @@ -2152,12 +2216,17 @@ COMPAT_SYSCALL_DEFINE5(io_getevents, compat_aio_context_t, ctx_id, return ret; } +#endif + +#ifdef CONFIG_COMPAT struct __compat_aio_sigset { compat_sigset_t __user *sigmask; compat_size_t sigsetsize; }; +#if defined(CONFIG_COMPAT_32BIT_TIME) + COMPAT_SYSCALL_DEFINE6(io_pgetevents, compat_aio_context_t, ctx_id, compat_long_t, min_nr, @@ -2177,27 +2246,47 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents, if (usig && copy_from_user(&ksig, usig, sizeof(ksig))) return -EFAULT; - if (ksig.sigmask) { - if (ksig.sigsetsize != sizeof(compat_sigset_t)) - return -EINVAL; - if (get_compat_sigset(&ksigmask, ksig.sigmask)) - return -EFAULT; - sigdelsetmask(&ksigmask, sigmask(SIGKILL) | sigmask(SIGSTOP)); - sigprocmask(SIG_SETMASK, &ksigmask, &sigsaved); - } + ret = set_compat_user_sigmask(ksig.sigmask, &ksigmask, &sigsaved, ksig.sigsetsize); + if (ret) + return ret; ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &t : NULL); - if (signal_pending(current)) { - if (ksig.sigmask) { - current->saved_sigmask = sigsaved; - set_restore_sigmask(); - } - if (!ret) - ret = -ERESTARTNOHAND; - } else { - if (ksig.sigmask) - sigprocmask(SIG_SETMASK, &sigsaved, NULL); - } + restore_user_sigmask(ksig.sigmask, &sigsaved); + if (signal_pending(current) && !ret) + ret = -ERESTARTNOHAND; + + return ret; +} + +#endif + +COMPAT_SYSCALL_DEFINE6(io_pgetevents_time64, + compat_aio_context_t, ctx_id, + compat_long_t, min_nr, + compat_long_t, nr, + struct io_event __user *, events, + struct __kernel_timespec __user *, timeout, + const struct __compat_aio_sigset __user *, usig) +{ + struct __compat_aio_sigset ksig = { NULL, }; + sigset_t ksigmask, sigsaved; + struct timespec64 t; + int ret; + + if (timeout && get_timespec64(&t, timeout)) + return -EFAULT; + + if (usig && copy_from_user(&ksig, usig, sizeof(ksig))) + return -EFAULT; + + ret = set_compat_user_sigmask(ksig.sigmask, &ksigmask, &sigsaved, ksig.sigsetsize); + if (ret) + return ret; + + ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? 
&t : NULL); + restore_user_sigmask(ksig.sigmask, &sigsaved); + if (signal_pending(current) && !ret) + ret = -ERESTARTNOHAND; return ret; } diff --git a/fs/block_dev.c b/fs/block_dev.c index a80b4f0ee7c4..450be88cffef 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -181,7 +181,7 @@ static void blkdev_bio_end_io_simple(struct bio *bio) struct task_struct *waiter = bio->bi_private; WRITE_ONCE(bio->bi_private, NULL); - wake_up_process(waiter); + blk_wake_io_task(waiter); } static ssize_t @@ -232,14 +232,18 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter, bio.bi_opf = dio_bio_write_op(iocb); task_io_account_write(ret); } + if (iocb->ki_flags & IOCB_HIPRI) + bio.bi_opf |= REQ_HIPRI; qc = submit_bio(&bio); for (;;) { - set_current_state(TASK_UNINTERRUPTIBLE); + __set_current_state(TASK_UNINTERRUPTIBLE); + if (!READ_ONCE(bio.bi_private)) break; + if (!(iocb->ki_flags & IOCB_HIPRI) || - !blk_poll(bdev_get_queue(bdev), qc)) + !blk_poll(bdev_get_queue(bdev), qc, true)) io_schedule(); } __set_current_state(TASK_RUNNING); @@ -298,12 +302,13 @@ static void blkdev_bio_end_io(struct bio *bio) } dio->iocb->ki_complete(iocb, ret, 0); - bio_put(&dio->bio); + if (dio->multi_bio) + bio_put(&dio->bio); } else { struct task_struct *waiter = dio->waiter; WRITE_ONCE(dio->waiter, NULL); - wake_up_process(waiter); + blk_wake_io_task(waiter); } } @@ -328,6 +333,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages) struct blk_plug plug; struct blkdev_dio *dio; struct bio *bio; + bool is_poll = (iocb->ki_flags & IOCB_HIPRI) != 0; bool is_read = (iov_iter_rw(iter) == READ), is_sync; loff_t pos = iocb->ki_pos; blk_qc_t qc = BLK_QC_T_NONE; @@ -338,20 +344,27 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages) return -EINVAL; bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool); - bio_get(bio); /* extra ref for the completion handler */ dio = container_of(bio, struct blkdev_dio, bio); dio->is_sync = is_sync = is_sync_kiocb(iocb); - if (dio->is_sync) + if (dio->is_sync) { dio->waiter = current; - else + bio_get(bio); + } else { dio->iocb = iocb; + } dio->size = 0; dio->multi_bio = false; dio->should_dirty = is_read && iter_is_iovec(iter); - blk_start_plug(&plug); + /* + * Don't plug for HIPRI/polled IO, as those should go straight + * to issue + */ + if (!is_poll) + blk_start_plug(&plug); + for (;;) { bio_set_dev(bio, bdev); bio->bi_iter.bi_sector = pos >> 9; @@ -381,11 +394,21 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages) nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES); if (!nr_pages) { + if (iocb->ki_flags & IOCB_HIPRI) + bio->bi_opf |= REQ_HIPRI; + qc = submit_bio(bio); break; } if (!dio->multi_bio) { + /* + * AIO needs an extra reference to ensure the dio + * structure which is embedded into the first bio + * stays around. 
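The comment above pins down a lifetime subtlety in the reworked __blkdev_direct_IO(): the blkdev_dio is embedded in the first bio, so async multi-bio I/O must take an extra reference on that bio to keep the dio alive until the last completion. A toy model of the embed-and-refcount pattern (simplified types, invented names):

#include <stdio.h>
#include <stdlib.h>

struct bio { int ref; };
struct dio { struct bio bio; long bytes; }; /* dio lives inside bio #1 */

static void bio_put(struct bio *b)
{
        if (--b->ref == 0)
                free(b); /* for the first bio this frees the dio too */
}

int main(void)
{
        struct dio *d = calloc(1, sizeof(*d));

        if (!d)
                return 1;
        d->bio.ref = 1; /* reference owned by the bio's completion */
        d->bio.ref++;   /* bio_get(): pin the embedded dio */

        bio_put(&d->bio); /* first bio completes: dio still valid */
        if (d->bio.ref == 1)
                printf("dio alive after first completion\n");
        bio_put(&d->bio); /* last reference: everything freed at once */
        return 0;
}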
+ */ + if (!is_sync) + bio_get(bio); dio->multi_bio = true; atomic_set(&dio->ref, 2); } else { @@ -395,18 +418,21 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages) submit_bio(bio); bio = bio_alloc(GFP_KERNEL, nr_pages); } - blk_finish_plug(&plug); + + if (!is_poll) + blk_finish_plug(&plug); if (!is_sync) return -EIOCBQUEUED; for (;;) { - set_current_state(TASK_UNINTERRUPTIBLE); + __set_current_state(TASK_UNINTERRUPTIBLE); + if (!READ_ONCE(dio->waiter)) break; if (!(iocb->ki_flags & IOCB_HIPRI) || - !blk_poll(bdev_get_queue(bdev), qc)) + !blk_poll(bdev_get_queue(bdev), qc, true)) io_schedule(); } __set_current_state(TASK_RUNNING); @@ -1966,6 +1992,7 @@ static const struct address_space_operations def_blk_aops = { .writepages = blkdev_writepages, .releasepage = blkdev_releasepage, .direct_IO = blkdev_direct_IO, + .migratepage = buffer_migrate_page_norefs, .is_dirty_writeback = buffer_check_dirty_writeback, }; diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c index 68ebe188446a..78556447e1d5 100644 --- a/fs/btrfs/backref.c +++ b/fs/btrfs/backref.c @@ -591,7 +591,7 @@ unode_aux_to_inode_list(struct ulist_node *node) } /* - * We maintain three seperate rbtrees: one for direct refs, one for + * We maintain three separate rbtrees: one for direct refs, one for * indirect refs which have a key, and one for indirect refs which do not * have a key. Each tree does merge on insertion. * @@ -695,7 +695,7 @@ static int resolve_indirect_refs(struct btrfs_fs_info *fs_info, } /* - * Now it's a direct ref, put it in the the direct tree. We must + * Now it's a direct ref, put it in the direct tree. We must * do this last because the ref could be merged/freed here. */ prelim_ref_insert(fs_info, &preftrees->direct, ref, NULL); @@ -2020,9 +2020,6 @@ static int iterate_inode_refs(u64 inum, struct btrfs_root *fs_root, ret = -ENOMEM; break; } - extent_buffer_get(eb); - btrfs_tree_read_lock(eb); - btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK); btrfs_release_path(path); item = btrfs_item_nr(slot); @@ -2042,7 +2039,6 @@ static int iterate_inode_refs(u64 inum, struct btrfs_root *fs_root, len = sizeof(*iref) + name_len; iref = (struct btrfs_inode_ref *)((char *)iref + len); } - btrfs_tree_read_unlock_blocking(eb); free_extent_buffer(eb); } @@ -2083,10 +2079,6 @@ static int iterate_inode_extrefs(u64 inum, struct btrfs_root *fs_root, ret = -ENOMEM; break; } - extent_buffer_get(eb); - - btrfs_tree_read_lock(eb); - btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK); btrfs_release_path(path); item_size = btrfs_item_size_nr(eb, slot); @@ -2107,7 +2099,6 @@ static int iterate_inode_extrefs(u64 inum, struct btrfs_root *fs_root, cur_offset += btrfs_inode_extref_name_len(eb, extref); cur_offset += sizeof(*extref); } - btrfs_tree_read_unlock_blocking(eb); free_extent_buffer(eb); offset++; diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h index 97d91e55b70a..6f5d07415dab 100644 --- a/fs/btrfs/btrfs_inode.h +++ b/fs/btrfs/btrfs_inode.h @@ -20,7 +20,7 @@ * new data the application may have written before commit. 
*/ enum { - BTRFS_INODE_ORDERED_DATA_CLOSE = 0, + BTRFS_INODE_ORDERED_DATA_CLOSE, BTRFS_INODE_DUMMY, BTRFS_INODE_IN_DEFRAG, BTRFS_INODE_HAS_ASYNC_EXTENT, @@ -29,6 +29,7 @@ enum { BTRFS_INODE_IN_DELALLOC_LIST, BTRFS_INODE_READDIO_NEED_LOCK, BTRFS_INODE_HAS_PROPS, + BTRFS_INODE_SNAPSHOT_FLUSH, }; /* in memory btrfs inode */ @@ -146,6 +147,12 @@ struct btrfs_inode { */ u64 last_unlink_trans; + /* + * Track the transaction id of the last transaction used to create a + * hard link for the inode. This is used by the log tree (fsync). + */ + u64 last_link_trans; + /* * Number of bytes outstanding that are going to need csums. This is * used in ENOSPC accounting. @@ -253,6 +260,11 @@ static inline bool btrfs_is_free_space_inode(struct btrfs_inode *inode) return false; } +static inline bool is_data_inode(struct inode *inode) +{ + return btrfs_ino(BTRFS_I(inode)) != BTRFS_BTREE_INODE_OBJECTID; +} + static inline void btrfs_mod_outstanding_extents(struct btrfs_inode *inode, int mod) { diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c index 2e43fba44035..b0c8094528d1 100644 --- a/fs/btrfs/check-integrity.c +++ b/fs/btrfs/check-integrity.c @@ -1202,24 +1202,24 @@ static void btrfsic_read_from_block_data( void *dstv, u32 offset, size_t len) { size_t cur; - size_t offset_in_page; + size_t pgoff; char *kaddr; char *dst = (char *)dstv; - size_t start_offset = block_ctx->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(block_ctx->start); unsigned long i = (start_offset + offset) >> PAGE_SHIFT; WARN_ON(offset + len > block_ctx->len); - offset_in_page = (start_offset + offset) & (PAGE_SIZE - 1); + pgoff = offset_in_page(start_offset + offset); while (len > 0) { - cur = min(len, ((size_t)PAGE_SIZE - offset_in_page)); + cur = min(len, ((size_t)PAGE_SIZE - pgoff)); BUG_ON(i >= DIV_ROUND_UP(block_ctx->len, PAGE_SIZE)); kaddr = block_ctx->datav[i]; - memcpy(dst, kaddr + offset_in_page, cur); + memcpy(dst, kaddr + pgoff, cur); dst += cur; len -= cur; - offset_in_page = 0; + pgoff = 0; i++; } } @@ -1601,7 +1601,7 @@ static int btrfsic_read_block(struct btrfsic_state *state, BUG_ON(block_ctx->datav); BUG_ON(block_ctx->pagev); BUG_ON(block_ctx->mem_to_free); - if (block_ctx->dev_bytenr & ((u64)PAGE_SIZE - 1)) { + if (!PAGE_ALIGNED(block_ctx->dev_bytenr)) { pr_info("btrfsic: read_block() with unaligned bytenr %llu\n", block_ctx->dev_bytenr); return -1; @@ -1720,7 +1720,7 @@ static int btrfsic_test_for_metadata(struct btrfsic_state *state, num_pages = state->metablock_size >> PAGE_SHIFT; h = (struct btrfs_header *)datav[0]; - if (memcmp(h->fsid, fs_info->fsid, BTRFS_FSID_SIZE)) + if (memcmp(h->fsid, fs_info->fs_devices->fsid, BTRFS_FSID_SIZE)) return 1; for (i = 0; i < num_pages; i++) { @@ -1778,7 +1778,7 @@ again: return; } is_metadata = 1; - BUG_ON(BTRFS_SUPER_INFO_SIZE & (PAGE_SIZE - 1)); + BUG_ON(!PAGE_ALIGNED(BTRFS_SUPER_INFO_SIZE)); processed_len = BTRFS_SUPER_INFO_SIZE; if (state->print_mask & BTRFSIC_PRINT_MASK_TREE_BEFORE_SB_WRITE) { @@ -2327,7 +2327,7 @@ static int btrfsic_check_all_ref_blocks(struct btrfsic_state *state, * write operations. Therefore it keeps the linkage * information for a block until a block is * rewritten. This can temporarily cause incorrect - * and even circular linkage informations. This + * and even circular linkage information. This * causes no harm unless such blocks are referenced * by the most recent super block. 
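The conversions in this file are mechanical: each open-coded mask such as block_ctx->start & ((u64)PAGE_SIZE - 1) becomes offset_in_page(), and each alignment test becomes PAGE_ALIGNED(). A self-contained illustration of the equivalence, with userspace macros standing in for the kernel helpers from include/linux/mm.h:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Userspace equivalents of the helpers the patch switches to. */
#define offset_in_page(p)	((uint64_t)(p) & (PAGE_SIZE - 1))
#define PAGE_ALIGNED(x)		(offset_in_page(x) == 0)

int main(void)
{
	uint64_t bytenr = 123 * PAGE_SIZE + 17;
	uint64_t off_old = bytenr & ((uint64_t)PAGE_SIZE - 1);	/* old form */
	uint64_t off_new = offset_in_page(bytenr);		/* new form */

	printf("old=%llu new=%llu aligned=%d\n",
	       (unsigned long long)off_old,
	       (unsigned long long)off_new,
	       PAGE_ALIGNED(bytenr));
	return 0;
}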
*/ @@ -2892,12 +2892,12 @@ int btrfsic_mount(struct btrfs_fs_info *fs_info, struct list_head *dev_head = &fs_devices->devices; struct btrfs_device *device; - if (fs_info->nodesize & ((u64)PAGE_SIZE - 1)) { + if (!PAGE_ALIGNED(fs_info->nodesize)) { pr_info("btrfsic: cannot handle nodesize %d not being a multiple of PAGE_SIZE %ld!\n", fs_info->nodesize, PAGE_SIZE); return -1; } - if (fs_info->sectorsize & ((u64)PAGE_SIZE - 1)) { + if (!PAGE_ALIGNED(fs_info->sectorsize)) { pr_info("btrfsic: cannot handle sectorsize %d not being a multiple of PAGE_SIZE %ld!\n", fs_info->sectorsize, PAGE_SIZE); return -1; diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 2955a4ea2fa8..548057630b69 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -229,7 +229,6 @@ static noinline void end_compressed_writeback(struct inode *inode, */ static void end_compressed_bio_write(struct bio *bio) { - struct extent_io_tree *tree; struct compressed_bio *cb = bio->bi_private; struct inode *inode; struct page *page; @@ -248,14 +247,10 @@ static void end_compressed_bio_write(struct bio *bio) * call back into the FS and do all the end_io operations */ inode = cb->inode; - tree = &BTRFS_I(inode)->io_tree; cb->compressed_pages[0]->mapping = cb->inode->i_mapping; - tree->ops->writepage_end_io_hook(cb->compressed_pages[0], - cb->start, - cb->start + cb->len - 1, - NULL, - bio->bi_status ? - BLK_STS_OK : BLK_STS_NOTSUPP); + btrfs_writepage_endio_finish_ordered(cb->compressed_pages[0], + cb->start, cb->start + cb->len - 1, + bio->bi_status ? BLK_STS_OK : BLK_STS_NOTSUPP); cb->compressed_pages[0]->mapping = NULL; end_compressed_writeback(inode, cb); @@ -306,7 +301,7 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start, blk_status_t ret; int skip_sum = BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM; - WARN_ON(start & ((u64)PAGE_SIZE - 1)); + WARN_ON(!PAGE_ALIGNED(start)); cb = kmalloc(compressed_bio_size(fs_info, compressed_len), GFP_NOFS); if (!cb) return BLK_STS_RESOURCE; @@ -337,7 +332,8 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start, page = compressed_pages[pg_index]; page->mapping = inode->i_mapping; if (bio->bi_iter.bi_size) - submit = btrfs_merge_bio_hook(page, 0, PAGE_SIZE, bio, 0); + submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio, + 0); page->mapping = NULL; if (submit || bio_add_page(bio, page, PAGE_SIZE, 0) < @@ -481,7 +477,7 @@ static noinline int add_ra_bio_pages(struct inode *inode, if (page->index == end_index) { char *userpage; - size_t zero_offset = isize & (PAGE_SIZE - 1); + size_t zero_offset = offset_in_page(isize); if (zero_offset) { int zeros; @@ -615,8 +611,8 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio, page->index = em_start >> PAGE_SHIFT; if (comp_bio->bi_iter.bi_size) - submit = btrfs_merge_bio_hook(page, 0, PAGE_SIZE, - comp_bio, 0); + submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, + comp_bio, 0); page->mapping = NULL; if (submit || bio_add_page(comp_bio, page, PAGE_SIZE, 0) < @@ -1207,7 +1203,7 @@ int btrfs_decompress_buf2page(const char *buf, unsigned long buf_start, /* * Shannon Entropy calculation * - * Pure byte distribution analysis fails to determine compressiability of data. + * Pure byte distribution analysis fails to determine compressibility of data. * Try calculating entropy to estimate the average minimum number of bits * needed to encode the sampled data. 
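The estimate described above is short to state in userspace C; the kernel version avoids floating point and works with an integer log2 approximation, so treat this as a model of the math rather than of the code (link with -lm):

#include <math.h>
#include <stdio.h>
#include <string.h>

/* Bits per byte needed on average, from a byte histogram:
 * 0.0 for constant data, up to 8.0 for uniformly random bytes. */
static double sample_entropy(const unsigned char *buf, size_t len)
{
	unsigned int hist[256] = { 0 };
	double entropy = 0.0;
	size_t i;

	for (i = 0; i < len; i++)
		hist[buf[i]]++;
	for (i = 0; i < 256; i++)
		if (hist[i]) {
			double p = (double)hist[i] / (double)len;
			entropy -= p * log2(p);
		}
	return entropy;
}

int main(void)
{
	unsigned char a[4096], b[4096];
	size_t i;

	memset(a, 'x', sizeof(a));		/* highly compressible */
	for (i = 0; i < sizeof(b); i++)		/* poorly compressible */
		b[i] = (unsigned char)((i * 2654435761u) >> 24);
	printf("entropy: %.2f vs %.2f bits/byte\n",
	       sample_entropy(a, sizeof(a)), sample_entropy(b, sizeof(b)));
	return 0;
}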
* @@ -1271,7 +1267,7 @@ static u8 get4bits(u64 num, int shift) { /* * Use 4 bits as radix base - * Use 16 u32 counters for calculating new possition in buf array + * Use 16 u32 counters for calculating new position in buf array * * @array - array that will be sorted * @array_buf - buffer array to store sorting results diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c index 539901fb5165..d92462fe66c8 100644 --- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -12,6 +12,7 @@ #include "transaction.h" #include "print-tree.h" #include "locking.h" +#include "volumes.h" static int split_node(struct btrfs_trans_handle *trans, struct btrfs_root *root, struct btrfs_path *path, int level); @@ -224,7 +225,7 @@ int btrfs_copy_root(struct btrfs_trans_handle *trans, else btrfs_set_header_owner(cow, new_root_objectid); - write_extent_buffer_fsid(cow, fs_info->fsid); + write_extent_buffer_fsid(cow, fs_info->fs_devices->metadata_uuid); WARN_ON(btrfs_header_generation(buf) > trans->transid); if (new_root_objectid == BTRFS_TREE_RELOC_OBJECTID) @@ -1050,7 +1051,7 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans, else btrfs_set_header_owner(cow, root->root_key.objectid); - write_extent_buffer_fsid(cow, fs_info->fsid); + write_extent_buffer_fsid(cow, fs_info->fs_devices->metadata_uuid); ret = update_ref_for_cow(trans, root, buf, cow, &last_ref); if (ret) { @@ -1290,7 +1291,6 @@ tree_mod_log_rewind(struct btrfs_fs_info *fs_info, struct btrfs_path *path, btrfs_tree_read_unlock_blocking(eb); free_extent_buffer(eb); - extent_buffer_get(eb_rewin); btrfs_tree_read_lock(eb_rewin); __tree_mod_log_rewind(fs_info, eb_rewin, time_seq, tm); WARN_ON(btrfs_header_nritems(eb_rewin) > @@ -1362,7 +1362,6 @@ get_old_root(struct btrfs_root *root, u64 time_seq) if (!eb) return NULL; - extent_buffer_get(eb); btrfs_tree_read_lock(eb); if (old_root) { btrfs_set_header_bytenr(eb, eb->start); @@ -1415,7 +1414,7 @@ static inline int should_cow_block(struct btrfs_trans_handle *trans, * * What is forced COW: * when we create snapshot during committing the transaction, - * after we've finished coping src root, we must COW the shared + * after we've finished copying src root, we must COW the shared * block to ensure the metadata consistency. */ if (btrfs_header_generation(buf) == trans->transid && @@ -1441,6 +1440,10 @@ noinline int btrfs_cow_block(struct btrfs_trans_handle *trans, u64 search_start; int ret; + if (test_bit(BTRFS_ROOT_DELETING, &root->state)) + btrfs_err(fs_info, + "COW'ing blocks on a fs root that's being dropped"); + if (trans->transaction != fs_info->running_transaction) WARN(1, KERN_CRIT "trans %llu running %llu\n", trans->transid, @@ -2584,14 +2587,27 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root, root_lock = BTRFS_READ_LOCK; if (p->search_commit_root) { - /* The commit roots are read only so we always do read locks */ - if (p->need_commit_sem) + /* + * The commit roots are read only so we always do read locks, + * and we always must hold the commit_root_sem when doing + * searches on them, the only exception is send where we don't + * want to block transaction commits for a long time, so + * we need to clone the commit root in order to avoid races + * with transaction commits that create a snapshot of one of + * the roots used by a send operation. 
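The discipline the comment above describes, take the semaphore only long enough to clone and then run the long search on the private copy, reduces to the following userspace sketch (a pthread rwlock stands in for commit_root_sem, and struct eb is an illustrative stand-in for an extent buffer):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_rwlock_t commit_root_sem = PTHREAD_RWLOCK_INITIALIZER;

struct eb { char data[64]; };
static struct eb commit_root = { "committed tree state" };

/* Clone under a short read lock; the caller then searches the clone
 * without blocking anyone who needs the write side. */
static struct eb *clone_commit_root(void)
{
	struct eb *copy = malloc(sizeof(*copy));

	if (!copy)
		return NULL;	/* the kernel code maps this to -ENOMEM */
	pthread_rwlock_rdlock(&commit_root_sem);
	memcpy(copy, &commit_root, sizeof(*copy));
	pthread_rwlock_unlock(&commit_root_sem);
	return copy;
}

int main(void)
{
	struct eb *b = clone_commit_root();

	if (!b)
		return 1;
	printf("searching clone: %s\n", b->data);	/* no lock held */
	free(b);
	return 0;
}

(Build with -pthread.)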
+ */ + if (p->need_commit_sem) { down_read(&fs_info->commit_root_sem); - b = root->commit_root; - extent_buffer_get(b); - level = btrfs_header_level(b); - if (p->need_commit_sem) + b = btrfs_clone_extent_buffer(root->commit_root); up_read(&fs_info->commit_root_sem); + if (!b) + return ERR_PTR(-ENOMEM); + + } else { + b = root->commit_root; + extent_buffer_get(b); + } + level = btrfs_header_level(b); /* * Ensure that all callers have set skip_locking when * p->search_commit_root = 1. @@ -2717,6 +2733,10 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root, again: prev_cmp = -1; b = btrfs_search_slot_get_root(root, p, write_lock_level); + if (IS_ERR(b)) { + ret = PTR_ERR(b); + goto done; + } while (b) { level = btrfs_header_level(b); @@ -3751,7 +3771,7 @@ static int push_leaf_right(struct btrfs_trans_handle *trans, struct btrfs_root /* Key greater than all keys in the leaf, right neighbor has * enough room for it and we're not emptying our leaf to delete * it, therefore use right neighbor to insert the new item and - * no need to touch/dirty our left leaft. */ + * no need to touch/dirty our left leaf. */ btrfs_tree_unlock(left); free_extent_buffer(left); path->nodes[0] = right; @@ -5390,7 +5410,6 @@ int btrfs_compare_trees(struct btrfs_root *left_root, ret = -ENOMEM; goto out; } - extent_buffer_get(left_path->nodes[left_level]); right_level = btrfs_header_level(right_root->commit_root); right_root_level = right_level; @@ -5401,7 +5420,6 @@ int btrfs_compare_trees(struct btrfs_root *left_root, ret = -ENOMEM; goto out; } - extent_buffer_get(right_path->nodes[right_level]); up_read(&fs_info->commit_root_sem); if (left_level == 0) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 68f322f600a0..f031a447a047 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -109,13 +109,26 @@ static inline unsigned long btrfs_chunk_item_size(int num_stripes) } /* - * File system states + * Runtime (in-memory) states of filesystem */ -#define BTRFS_FS_STATE_ERROR 0 -#define BTRFS_FS_STATE_REMOUNTING 1 -#define BTRFS_FS_STATE_TRANS_ABORTED 2 -#define BTRFS_FS_STATE_DEV_REPLACING 3 -#define BTRFS_FS_STATE_DUMMY_FS_INFO 4 +enum { + /* Global indicator of serious filesystem errors */ + BTRFS_FS_STATE_ERROR, + /* + * Filesystem is being remounted, allow to skip some operations, like + * defrag + */ + BTRFS_FS_STATE_REMOUNTING, + /* Track if a transaction abort has been reported on this filesystem */ + BTRFS_FS_STATE_TRANS_ABORTED, + /* + * Bio operations should be blocked on this filesystem because a source + * or target device is being destroyed as part of a device replace + */ + BTRFS_FS_STATE_DEV_REPLACING, + /* The btrfs_fs_info created for self-tests */ + BTRFS_FS_STATE_DUMMY_FS_INFO, +}; #define BTRFS_BACKREF_REV_MAX 256 #define BTRFS_BACKREF_REV_SHIFT 56 @@ -195,9 +208,10 @@ struct btrfs_root_backup { * it currently lacks any block count etc etc */ struct btrfs_super_block { - u8 csum[BTRFS_CSUM_SIZE]; /* the first 4 fields must match struct btrfs_header */ - u8 fsid[BTRFS_FSID_SIZE]; /* FS specific uuid */ + u8 csum[BTRFS_CSUM_SIZE]; + /* FS specific UUID, visible to user */ + u8 fsid[BTRFS_FSID_SIZE]; __le64 bytenr; /* this block number */ __le64 flags; @@ -234,8 +248,11 @@ struct btrfs_super_block { __le64 cache_generation; __le64 uuid_tree_generation; + /* the UUID written into btree blocks */ + u8 metadata_uuid[BTRFS_FSID_SIZE]; + /* future expansion */ - __le64 reserved[30]; + __le64 reserved[28]; u8 sys_chunk_array[BTRFS_SYSTEM_CHUNK_ARRAY_SIZE]; struct 
btrfs_root_backup super_roots[BTRFS_NUM_BACKUP_ROOTS]; } __attribute__ ((__packed__)); @@ -265,7 +282,8 @@ struct btrfs_super_block { BTRFS_FEATURE_INCOMPAT_RAID56 | \ BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF | \ BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA | \ - BTRFS_FEATURE_INCOMPAT_NO_HOLES) + BTRFS_FEATURE_INCOMPAT_NO_HOLES | \ + BTRFS_FEATURE_INCOMPAT_METADATA_UUID) #define BTRFS_FEATURE_INCOMPAT_SAFE_SET \ (BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF) @@ -316,7 +334,7 @@ struct btrfs_node { * The slots array records the index of the item or block pointer * used while walking the tree. */ -enum { READA_NONE = 0, READA_BACK, READA_FORWARD }; +enum { READA_NONE, READA_BACK, READA_FORWARD }; struct btrfs_path { struct extent_buffer *nodes[BTRFS_MAX_LEVEL]; int slots[BTRFS_MAX_LEVEL]; @@ -360,9 +378,7 @@ struct btrfs_dev_replace { struct btrfs_device *tgtdev; struct mutex lock_finishing_cancel_unmount; - rwlock_t lock; - atomic_t blocking_readers; - wait_queue_head_t read_lock_wq; + struct rw_semaphore rwsem; struct btrfs_scrub_progress scrub_progress; @@ -443,13 +459,19 @@ struct btrfs_space_info { struct kobject *block_group_kobjs[BTRFS_NR_RAID_TYPES]; }; -#define BTRFS_BLOCK_RSV_GLOBAL 1 -#define BTRFS_BLOCK_RSV_DELALLOC 2 -#define BTRFS_BLOCK_RSV_TRANS 3 -#define BTRFS_BLOCK_RSV_CHUNK 4 -#define BTRFS_BLOCK_RSV_DELOPS 5 -#define BTRFS_BLOCK_RSV_EMPTY 6 -#define BTRFS_BLOCK_RSV_TEMP 7 +/* + * Types of block reserves + */ +enum { + BTRFS_BLOCK_RSV_GLOBAL, + BTRFS_BLOCK_RSV_DELALLOC, + BTRFS_BLOCK_RSV_TRANS, + BTRFS_BLOCK_RSV_CHUNK, + BTRFS_BLOCK_RSV_DELOPS, + BTRFS_BLOCK_RSV_DELREFS, + BTRFS_BLOCK_RSV_EMPTY, + BTRFS_BLOCK_RSV_TEMP, +}; struct btrfs_block_rsv { u64 size; @@ -509,18 +531,18 @@ struct btrfs_free_cluster { }; enum btrfs_caching_type { - BTRFS_CACHE_NO = 0, - BTRFS_CACHE_STARTED = 1, - BTRFS_CACHE_FAST = 2, - BTRFS_CACHE_FINISHED = 3, - BTRFS_CACHE_ERROR = 4, + BTRFS_CACHE_NO, + BTRFS_CACHE_STARTED, + BTRFS_CACHE_FAST, + BTRFS_CACHE_FINISHED, + BTRFS_CACHE_ERROR, }; enum btrfs_disk_cache_state { - BTRFS_DC_WRITTEN = 0, - BTRFS_DC_ERROR = 1, - BTRFS_DC_CLEAR = 2, - BTRFS_DC_SETUP = 3, + BTRFS_DC_WRITTEN, + BTRFS_DC_ERROR, + BTRFS_DC_CLEAR, + BTRFS_DC_SETUP, }; struct btrfs_caching_control { @@ -712,41 +734,61 @@ struct btrfs_fs_devices; struct btrfs_balance_control; struct btrfs_delayed_root; -#define BTRFS_FS_BARRIER 1 -#define BTRFS_FS_CLOSING_START 2 -#define BTRFS_FS_CLOSING_DONE 3 -#define BTRFS_FS_LOG_RECOVERING 4 -#define BTRFS_FS_OPEN 5 -#define BTRFS_FS_QUOTA_ENABLED 6 -#define BTRFS_FS_UPDATE_UUID_TREE_GEN 9 -#define BTRFS_FS_CREATING_FREE_SPACE_TREE 10 -#define BTRFS_FS_BTREE_ERR 11 -#define BTRFS_FS_LOG1_ERR 12 -#define BTRFS_FS_LOG2_ERR 13 -#define BTRFS_FS_QUOTA_OVERRIDE 14 -/* Used to record internally whether fs has been frozen */ -#define BTRFS_FS_FROZEN 15 - -/* - * Indicate that a whole-filesystem exclusive operation is running - * (device replace, resize, device add/delete, balance) - */ -#define BTRFS_FS_EXCL_OP 16 - /* - * To info transaction_kthread we need an immediate commit so it doesn't - * need to wait for commit_interval + * Block group or device which contains an active swapfile. Used for preventing + * unsafe operations while a swapfile is active. + * + * These are sorted on (ptr, inode) (note that a block group or device can + * contain more than one swapfile). We compare the pointer values because we + * don't actually care what the object is, we just need a quick check whether + * the object exists in the rbtree. 
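Because the tree only ever answers "is this object pinned?", the comparator can order entries by raw pointer value, exactly as stated above. A minimal sketch of that comparison (swapfile_pin_sim is illustrative and omits the rb_node linkage the real struct below carries):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct swapfile_pin_sim {
	const void *ptr;	/* block group or device; we don't care which */
	bool is_block_group;
};

/* Order purely by pointer identity; equality means "already pinned". */
static int pin_cmp(const void *a, const void *b)
{
	uintptr_t ka = (uintptr_t)a, kb = (uintptr_t)b;

	if (ka < kb)
		return -1;
	return ka > kb ? 1 : 0;
}

int main(void)
{
	int block_group, device;	/* any two distinct objects */
	struct swapfile_pin_sim p1 = { &block_group, true };
	struct swapfile_pin_sim p2 = { &device, false };

	printf("same object pinned twice? %d\n", pin_cmp(p1.ptr, p1.ptr) == 0);
	printf("distinct objects equal?   %d\n", pin_cmp(p1.ptr, p2.ptr) == 0);
	return 0;
}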
*/ -#define BTRFS_FS_NEED_ASYNC_COMMIT 17 +struct btrfs_swapfile_pin { + struct rb_node node; + void *ptr; + struct inode *inode; + /* + * If true, ptr points to a struct btrfs_block_group_cache. Otherwise, + * ptr points to a struct btrfs_device. + */ + bool is_block_group; +}; -/* - * Indicate that balance has been set up from the ioctl and is in the main - * phase. The fs_info::balance_ctl is initialized. - */ -#define BTRFS_FS_BALANCE_RUNNING 18 +bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr); + +enum { + BTRFS_FS_BARRIER, + BTRFS_FS_CLOSING_START, + BTRFS_FS_CLOSING_DONE, + BTRFS_FS_LOG_RECOVERING, + BTRFS_FS_OPEN, + BTRFS_FS_QUOTA_ENABLED, + BTRFS_FS_UPDATE_UUID_TREE_GEN, + BTRFS_FS_CREATING_FREE_SPACE_TREE, + BTRFS_FS_BTREE_ERR, + BTRFS_FS_LOG1_ERR, + BTRFS_FS_LOG2_ERR, + BTRFS_FS_QUOTA_OVERRIDE, + /* Used to record internally whether fs has been frozen */ + BTRFS_FS_FROZEN, + /* + * Indicate that a whole-filesystem exclusive operation is running + * (device replace, resize, device add/delete, balance) + */ + BTRFS_FS_EXCL_OP, + /* + * To info transaction_kthread we need an immediate commit so it + * doesn't need to wait for commit_interval + */ + BTRFS_FS_NEED_ASYNC_COMMIT, + /* + * Indicate that balance has been set up from the ioctl and is in the + * main phase. The fs_info::balance_ctl is initialized. + */ + BTRFS_FS_BALANCE_RUNNING, +}; struct btrfs_fs_info { - u8 fsid[BTRFS_FSID_SIZE]; u8 chunk_tree_uuid[BTRFS_UUID_SIZE]; unsigned long flags; struct btrfs_root *extent_root; @@ -790,6 +832,8 @@ struct btrfs_fs_info { struct btrfs_block_rsv chunk_block_rsv; /* block reservation for delayed operations */ struct btrfs_block_rsv delayed_block_rsv; + /* block reservation for delayed refs */ + struct btrfs_block_rsv delayed_refs_rsv; struct btrfs_block_rsv empty_block_rsv; @@ -1114,6 +1158,10 @@ struct btrfs_fs_info { u32 sectorsize; u32 stripesize; + /* Block groups and devices containing active swapfiles. */ + spinlock_t swapfile_pins_lock; + struct rb_root swapfile_pins; + #ifdef CONFIG_BTRFS_FS_REF_VERIFY spinlock_t ref_verify_lock; struct rb_root block_tree; @@ -1133,22 +1181,24 @@ struct btrfs_subvolume_writers { /* * The state of btrfs root */ -/* - * btrfs_record_root_in_trans is a multi-step process, - * and it can race with the balancing code. But the - * race is very small, and only the first time the root - * is added to each transaction. So IN_TRANS_SETUP - * is used to tell us when more checks are required - */ -#define BTRFS_ROOT_IN_TRANS_SETUP 0 -#define BTRFS_ROOT_REF_COWS 1 -#define BTRFS_ROOT_TRACK_DIRTY 2 -#define BTRFS_ROOT_IN_RADIX 3 -#define BTRFS_ROOT_ORPHAN_ITEM_INSERTED 4 -#define BTRFS_ROOT_DEFRAG_RUNNING 5 -#define BTRFS_ROOT_FORCE_COW 6 -#define BTRFS_ROOT_MULTI_LOG_TASKS 7 -#define BTRFS_ROOT_DIRTY 8 +enum { + /* + * btrfs_record_root_in_trans is a multi-step process, and it can race + * with the balancing code. But the race is very small, and only the + * first time the root is added to each transaction. So IN_TRANS_SETUP + * is used to tell us when more checks are required + */ + BTRFS_ROOT_IN_TRANS_SETUP, + BTRFS_ROOT_REF_COWS, + BTRFS_ROOT_TRACK_DIRTY, + BTRFS_ROOT_IN_RADIX, + BTRFS_ROOT_ORPHAN_ITEM_INSERTED, + BTRFS_ROOT_DEFRAG_RUNNING, + BTRFS_ROOT_FORCE_COW, + BTRFS_ROOT_MULTI_LOG_TASKS, + BTRFS_ROOT_DIRTY, + BTRFS_ROOT_DELETING, +}; /* * in ram representation of the tree. 
extent_root is used for all allocations @@ -1274,6 +1324,9 @@ struct btrfs_root { u64 qgroup_meta_rsv_pertrans; u64 qgroup_meta_rsv_prealloc; + /* Number of active swapfiles */ + atomic_t nr_swapfiles; + #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS u64 alloc_bytenr; #endif @@ -2570,10 +2623,10 @@ static inline gfp_t btrfs_alloc_write_mask(struct address_space *mapping) /* extent-tree.c */ enum btrfs_inline_ref_type { - BTRFS_REF_TYPE_INVALID = 0, - BTRFS_REF_TYPE_BLOCK = 1, - BTRFS_REF_TYPE_DATA = 2, - BTRFS_REF_TYPE_ANY = 3, + BTRFS_REF_TYPE_INVALID, + BTRFS_REF_TYPE_BLOCK, + BTRFS_REF_TYPE_DATA, + BTRFS_REF_TYPE_ANY, }; int btrfs_get_extent_inline_ref_type(const struct extent_buffer *eb, @@ -2599,7 +2652,7 @@ static inline u64 btrfs_calc_trunc_metadata_size(struct btrfs_fs_info *fs_info, } int btrfs_should_throttle_delayed_refs(struct btrfs_trans_handle *trans); -int btrfs_check_space_for_delayed_refs(struct btrfs_trans_handle *trans); +bool btrfs_check_space_for_delayed_refs(struct btrfs_fs_info *fs_info); void btrfs_dec_block_group_reservations(struct btrfs_fs_info *fs_info, const u64 start); void btrfs_wait_block_group_reservations(struct btrfs_block_group_cache *bg); @@ -2713,10 +2766,12 @@ enum btrfs_reserve_flush_enum { enum btrfs_flush_state { FLUSH_DELAYED_ITEMS_NR = 1, FLUSH_DELAYED_ITEMS = 2, - FLUSH_DELALLOC = 3, - FLUSH_DELALLOC_WAIT = 4, - ALLOC_CHUNK = 5, - COMMIT_TRANS = 6, + FLUSH_DELAYED_REFS_NR = 3, + FLUSH_DELAYED_REFS = 4, + FLUSH_DELALLOC = 5, + FLUSH_DELALLOC_WAIT = 6, + ALLOC_CHUNK = 7, + COMMIT_TRANS = 8, }; int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode *inode, u64 bytes); @@ -2767,6 +2822,13 @@ int btrfs_cond_migrate_bytes(struct btrfs_fs_info *fs_info, void btrfs_block_rsv_release(struct btrfs_fs_info *fs_info, struct btrfs_block_rsv *block_rsv, u64 num_bytes); +void btrfs_delayed_refs_rsv_release(struct btrfs_fs_info *fs_info, int nr); +void btrfs_update_delayed_refs_rsv(struct btrfs_trans_handle *trans); +int btrfs_delayed_refs_rsv_refill(struct btrfs_fs_info *fs_info, + enum btrfs_reserve_flush_enum flush); +void btrfs_migrate_to_delayed_refs_rsv(struct btrfs_fs_info *fs_info, + struct btrfs_block_rsv *src, + u64 num_bytes); int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache); void btrfs_dec_block_group_ro(struct btrfs_block_group_cache *cache); void btrfs_put_block_group_cache(struct btrfs_fs_info *info); @@ -3141,7 +3203,7 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans, struct inode *inode, u64 new_size, u32 min_type); -int btrfs_start_delalloc_inodes(struct btrfs_root *root); +int btrfs_start_delalloc_snapshot(struct btrfs_root *root); int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, int nr); int btrfs_set_extent_delalloc(struct inode *inode, u64 start, u64 end, unsigned int extra_bits, @@ -3150,9 +3212,16 @@ int btrfs_create_subvol_root(struct btrfs_trans_handle *trans, struct btrfs_root *new_root, struct btrfs_root *parent_root, u64 new_dirid); -int btrfs_merge_bio_hook(struct page *page, unsigned long offset, - size_t size, struct bio *bio, - unsigned long bio_flags); + void btrfs_set_delalloc_extent(struct inode *inode, struct extent_state *state, + unsigned *bits); +void btrfs_clear_delalloc_extent(struct inode *inode, + struct extent_state *state, unsigned *bits); +void btrfs_merge_delalloc_extent(struct inode *inode, struct extent_state *new, + struct extent_state *other); +void btrfs_split_delalloc_extent(struct inode *inode, + struct extent_state *orig, u64 split); +int 
btrfs_bio_fits_in_stripe(struct page *page, size_t size, struct bio *bio, + unsigned long bio_flags); void btrfs_set_range_writeback(struct extent_io_tree *tree, u64 start, u64 end); vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf); int btrfs_readpage(struct file *file, struct page *page); @@ -3189,6 +3258,12 @@ int btrfs_prealloc_file_range_trans(struct inode *inode, struct btrfs_trans_handle *trans, int mode, u64 start, u64 num_bytes, u64 min_size, loff_t actual_len, u64 *alloc_hint); +int btrfs_run_delalloc_range(void *private_data, struct page *locked_page, + u64 start, u64 end, int *page_started, unsigned long *nr_written, + struct writeback_control *wbc); +int btrfs_writepage_cow_fixup(struct page *page, u64 start, u64 end); +void btrfs_writepage_endio_finish_ordered(struct page *page, u64 start, + u64 end, int uptodate); extern const struct dentry_operations btrfs_dentry_operations; /* ioctl.c */ @@ -3428,6 +3503,16 @@ static inline void assfail(const char *expr, const char *file, int line) #define ASSERT(expr) ((void)0) #endif +/* + * Use that for functions that are conditionally exported for sanity tests but + * otherwise static + */ +#ifndef CONFIG_BTRFS_FS_RUN_SANITY_TESTS +#define EXPORT_FOR_TESTS static +#else +#define EXPORT_FOR_TESTS +#endif + __cold static inline void btrfs_print_v0_err(struct btrfs_fs_info *fs_info) { diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c index 9301b3ad9217..cad36c99a483 100644 --- a/fs/btrfs/delayed-ref.c +++ b/fs/btrfs/delayed-ref.c @@ -251,8 +251,6 @@ static inline void drop_delayed_ref(struct btrfs_trans_handle *trans, ref->in_tree = 0; btrfs_put_delayed_ref(ref); atomic_dec(&delayed_refs->num_entries); - if (trans->delayed_ref_updates) - trans->delayed_ref_updates--; } static bool merge_ref(struct btrfs_trans_handle *trans, @@ -400,6 +398,20 @@ again: return head; } +void btrfs_delete_ref_head(struct btrfs_delayed_ref_root *delayed_refs, + struct btrfs_delayed_ref_head *head) +{ + lockdep_assert_held(&delayed_refs->lock); + lockdep_assert_held(&head->lock); + + rb_erase_cached(&head->href_node, &delayed_refs->href_root); + RB_CLEAR_NODE(&head->href_node); + atomic_dec(&delayed_refs->num_entries); + delayed_refs->num_heads--; + if (head->processing == 0) + delayed_refs->num_heads_ready--; +} + /* * Helper to insert the ref_node to the tail or merge with tail. * @@ -453,7 +465,6 @@ inserted: if (ref->action == BTRFS_ADD_DELAYED_REF) list_add_tail(&ref->add_list, &href->ref_add_list); atomic_inc(&root->num_entries); - trans->delayed_ref_updates++; spin_unlock(&href->lock); return ret; } @@ -462,12 +473,14 @@ inserted: * helper function to update the accounting in the head ref * existing and update must have the same bytenr */ -static noinline void -update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs, +static noinline void update_existing_head_ref(struct btrfs_trans_handle *trans, struct btrfs_delayed_ref_head *existing, struct btrfs_delayed_ref_head *update, int *old_ref_mod_ret) { + struct btrfs_delayed_ref_root *delayed_refs = + &trans->transaction->delayed_refs; + struct btrfs_fs_info *fs_info = trans->fs_info; int old_ref_mod; BUG_ON(existing->is_data != update->is_data); @@ -525,10 +538,18 @@ update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs, * versa we need to make sure to adjust pending_csums accordingly. 
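The invariant behind the hunk that follows: pending_csums, and with this patch the csum-leaf portion of the delayed refs reservation, must count only extents whose net ref mod is negative, i.e. extents that will really be deleted. Only transitions of total_ref_mod across zero matter, which a plain C model makes explicit:

#include <stdio.h>

static void account_total_ref_mod(long long *pending_csums,
				  long long num_bytes,
				  int old_ref_mod, int total_ref_mod)
{
	if (total_ref_mod >= 0 && old_ref_mod < 0)
		*pending_csums -= num_bytes;	/* extent survives after all */
	if (total_ref_mod < 0 && old_ref_mod >= 0)
		*pending_csums += num_bytes;	/* extent will be freed */
}

int main(void)
{
	long long pending = 0;

	account_total_ref_mod(&pending, 4096, 1, -1);	/* crosses below 0 */
	printf("pending_csums = %lld\n", pending);	/* 4096 */
	account_total_ref_mod(&pending, 4096, -1, 0);	/* back to >= 0 */
	printf("pending_csums = %lld\n", pending);	/* 0 */
	return 0;
}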
*/ if (existing->is_data) { - if (existing->total_ref_mod >= 0 && old_ref_mod < 0) + u64 csum_leaves = + btrfs_csum_bytes_to_leaves(fs_info, + existing->num_bytes); + + if (existing->total_ref_mod >= 0 && old_ref_mod < 0) { delayed_refs->pending_csums -= existing->num_bytes; - if (existing->total_ref_mod < 0 && old_ref_mod >= 0) + btrfs_delayed_refs_rsv_release(fs_info, csum_leaves); + } + if (existing->total_ref_mod < 0 && old_ref_mod >= 0) { delayed_refs->pending_csums += existing->num_bytes; + trans->delayed_ref_updates += csum_leaves; + } } spin_unlock(&existing->lock); } @@ -634,7 +655,7 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans, && head_ref->qgroup_reserved && existing->qgroup_ref_root && existing->qgroup_reserved); - update_existing_head_ref(delayed_refs, existing, head_ref, + update_existing_head_ref(trans, existing, head_ref, old_ref_mod); /* * we've updated the existing ref, free the newly @@ -645,8 +666,12 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans, } else { if (old_ref_mod) *old_ref_mod = 0; - if (head_ref->is_data && head_ref->ref_mod < 0) + if (head_ref->is_data && head_ref->ref_mod < 0) { delayed_refs->pending_csums += head_ref->num_bytes; + trans->delayed_ref_updates += + btrfs_csum_bytes_to_leaves(trans->fs_info, + head_ref->num_bytes); + } delayed_refs->num_heads++; delayed_refs->num_heads_ready++; atomic_inc(&delayed_refs->num_entries); @@ -782,6 +807,12 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans, ret = insert_delayed_ref(trans, delayed_refs, head_ref, &ref->node); spin_unlock(&delayed_refs->lock); + /* + * Need to update the delayed_refs_rsv with any changes we may have + * made. + */ + btrfs_update_delayed_refs_rsv(trans); + trace_add_delayed_tree_ref(fs_info, &ref->node, ref, action == BTRFS_ADD_DELAYED_EXTENT ? BTRFS_ADD_DELAYED_REF : action); @@ -863,6 +894,12 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans, ret = insert_delayed_ref(trans, delayed_refs, head_ref, &ref->node); spin_unlock(&delayed_refs->lock); + /* + * Need to update the delayed_refs_rsv with any changes we may have + * made. + */ + btrfs_update_delayed_refs_rsv(trans); + trace_add_delayed_data_ref(trans->fs_info, &ref->node, ref, action == BTRFS_ADD_DELAYED_EXTENT ? BTRFS_ADD_DELAYED_REF : action); @@ -899,6 +936,12 @@ int btrfs_add_delayed_extent_op(struct btrfs_fs_info *fs_info, NULL, NULL, NULL); spin_unlock(&delayed_refs->lock); + + /* + * Need to update the delayed_refs_rsv with any changes we may have + * made. 
+ */ + btrfs_update_delayed_refs_rsv(trans); return 0; } diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h index 8e20c5cb5404..d2af974f68a1 100644 --- a/fs/btrfs/delayed-ref.h +++ b/fs/btrfs/delayed-ref.h @@ -261,7 +261,8 @@ static inline void btrfs_delayed_ref_unlock(struct btrfs_delayed_ref_head *head) { mutex_unlock(&head->mutex); } - +void btrfs_delete_ref_head(struct btrfs_delayed_ref_root *delayed_refs, + struct btrfs_delayed_ref_head *head); struct btrfs_delayed_ref_head *btrfs_select_ref_head( struct btrfs_delayed_ref_root *delayed_refs); diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c index 2aa48aecc52b..8750c835f535 100644 --- a/fs/btrfs/dev-replace.c +++ b/fs/btrfs/dev-replace.c @@ -59,7 +59,6 @@ no_valid_dev_replace_entry_found: BTRFS_DEV_REPLACE_ITEM_STATE_NEVER_STARTED; dev_replace->cont_reading_from_srcdev_mode = BTRFS_DEV_REPLACE_ITEM_CONT_READING_FROM_SRCDEV_MODE_ALWAYS; - dev_replace->replace_state = 0; dev_replace->time_started = 0; dev_replace->time_stopped = 0; atomic64_set(&dev_replace->num_write_errors, 0); @@ -285,13 +284,13 @@ int btrfs_run_dev_replace(struct btrfs_trans_handle *trans, struct btrfs_dev_replace_item *ptr; struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace; - btrfs_dev_replace_read_lock(dev_replace); + down_read(&dev_replace->rwsem); if (!dev_replace->is_valid || !dev_replace->item_needs_writeback) { - btrfs_dev_replace_read_unlock(dev_replace); + up_read(&dev_replace->rwsem); return 0; } - btrfs_dev_replace_read_unlock(dev_replace); + up_read(&dev_replace->rwsem); key.objectid = 0; key.type = BTRFS_DEV_REPLACE_KEY; @@ -349,7 +348,7 @@ int btrfs_run_dev_replace(struct btrfs_trans_handle *trans, ptr = btrfs_item_ptr(eb, path->slots[0], struct btrfs_dev_replace_item); - btrfs_dev_replace_write_lock(dev_replace); + down_write(&dev_replace->rwsem); if (dev_replace->srcdev) btrfs_set_dev_replace_src_devid(eb, ptr, dev_replace->srcdev->devid); @@ -372,7 +371,7 @@ int btrfs_run_dev_replace(struct btrfs_trans_handle *trans, btrfs_set_dev_replace_cursor_right(eb, ptr, dev_replace->cursor_right); dev_replace->item_needs_writeback = 0; - btrfs_dev_replace_write_unlock(dev_replace); + up_write(&dev_replace->rwsem); btrfs_mark_buffer_dirty(eb); @@ -390,7 +389,7 @@ static char* btrfs_dev_name(struct btrfs_device *device) return rcu_str_deref(device->name); } -int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, +static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, const char *tgtdev_name, u64 srcdevid, const char *srcdev_name, int read_src) { @@ -407,6 +406,13 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, if (IS_ERR(src_device)) return PTR_ERR(src_device); + if (btrfs_pinned_by_swapfile(fs_info, src_device)) { + btrfs_warn_in_rcu(fs_info, + "cannot replace device %s (devid %llu) due to active swapfile", + btrfs_dev_name(src_device), src_device->devid); + return -ETXTBSY; + } + ret = btrfs_init_dev_replace_tgtdev(fs_info, tgtdev_name, src_device, &tgt_device); if (ret) @@ -426,7 +432,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, } need_unlock = true; - btrfs_dev_replace_write_lock(dev_replace); + down_write(&dev_replace->rwsem); switch (dev_replace->replace_state) { case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED: case BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED: @@ -464,7 +470,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, dev_replace->item_needs_writeback = 1; atomic64_set(&dev_replace->num_write_errors, 0); 
atomic64_set(&dev_replace->num_uncorrectable_read_errors, 0); - btrfs_dev_replace_write_unlock(dev_replace); + up_write(&dev_replace->rwsem); need_unlock = false; ret = btrfs_sysfs_add_device_link(tgt_device->fs_devices, tgt_device); @@ -478,7 +484,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, if (IS_ERR(trans)) { ret = PTR_ERR(trans); need_unlock = true; - btrfs_dev_replace_write_lock(dev_replace); + down_write(&dev_replace->rwsem); dev_replace->replace_state = BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED; dev_replace->srcdev = NULL; @@ -497,7 +503,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, ret = btrfs_dev_replace_finishing(fs_info, ret); if (ret == -EINPROGRESS) { ret = BTRFS_IOCTL_DEV_REPLACE_RESULT_SCRUB_INPROGRESS; - } else { + } else if (ret != -ECANCELED) { WARN_ON(ret); } @@ -505,7 +511,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, leave: if (need_unlock) - btrfs_dev_replace_write_unlock(dev_replace); + up_write(&dev_replace->rwsem); btrfs_destroy_dev_replace_tgtdev(tgt_device); return ret; } @@ -533,8 +539,9 @@ int btrfs_dev_replace_by_ioctl(struct btrfs_fs_info *fs_info, args->start.cont_reading_from_srcdev_mode); args->result = ret; /* don't warn if EINPROGRESS, someone else might be running scrub */ - if (ret == BTRFS_IOCTL_DEV_REPLACE_RESULT_SCRUB_INPROGRESS) - ret = 0; + if (ret == BTRFS_IOCTL_DEV_REPLACE_RESULT_SCRUB_INPROGRESS || + ret == BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_ERROR) + return 0; return ret; } @@ -572,18 +579,18 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, /* don't allow cancel or unmount to disturb the finishing procedure */ mutex_lock(&dev_replace->lock_finishing_cancel_unmount); - btrfs_dev_replace_read_lock(dev_replace); + down_read(&dev_replace->rwsem); /* was the operation canceled, or is it finished? */ if (dev_replace->replace_state != BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED) { - btrfs_dev_replace_read_unlock(dev_replace); + up_read(&dev_replace->rwsem); mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); return 0; } tgt_device = dev_replace->tgtdev; src_device = dev_replace->srcdev; - btrfs_dev_replace_read_unlock(dev_replace); + up_read(&dev_replace->rwsem); /* * flush all outstanding I/O and inode extent mappings before the @@ -607,7 +614,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, /* keep away write_all_supers() during the finishing procedure */ mutex_lock(&fs_info->fs_devices->device_list_mutex); mutex_lock(&fs_info->chunk_mutex); - btrfs_dev_replace_write_lock(dev_replace); + down_write(&dev_replace->rwsem); dev_replace->replace_state = scrub_ret ? 
BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED : BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED; @@ -622,12 +629,13 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, src_device, tgt_device); } else { - btrfs_err_in_rcu(fs_info, + if (scrub_ret != -ECANCELED) + btrfs_err_in_rcu(fs_info, "btrfs_scrub_dev(%s, %llu, %s) failed %d", btrfs_dev_name(src_device), src_device->devid, rcu_str_deref(tgt_device->name), scrub_ret); - btrfs_dev_replace_write_unlock(dev_replace); + up_write(&dev_replace->rwsem); mutex_unlock(&fs_info->chunk_mutex); mutex_unlock(&fs_info->fs_devices->device_list_mutex); btrfs_rm_dev_replace_blocked(fs_info); @@ -663,8 +671,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, list_add(&tgt_device->dev_alloc_list, &fs_info->fs_devices->alloc_list); fs_info->fs_devices->rw_devices++; - btrfs_dev_replace_write_unlock(dev_replace); - + up_write(&dev_replace->rwsem); btrfs_rm_dev_replace_blocked(fs_info); btrfs_rm_dev_replace_remove_srcdev(src_device); @@ -761,7 +768,7 @@ void btrfs_dev_replace_status(struct btrfs_fs_info *fs_info, { struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace; - btrfs_dev_replace_read_lock(dev_replace); + down_read(&dev_replace->rwsem); /* even if !dev_replace_is_valid, the values are good enough for * the replace_status ioctl */ args->result = BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_ERROR; @@ -773,7 +780,7 @@ void btrfs_dev_replace_status(struct btrfs_fs_info *fs_info, args->status.num_uncorrectable_read_errors = atomic64_read(&dev_replace->num_uncorrectable_read_errors); args->status.progress_1000 = btrfs_dev_replace_progress(fs_info); - btrfs_dev_replace_read_unlock(dev_replace); + up_read(&dev_replace->rwsem); } int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info) @@ -790,46 +797,74 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info) return -EROFS; mutex_lock(&dev_replace->lock_finishing_cancel_unmount); - btrfs_dev_replace_write_lock(dev_replace); + down_write(&dev_replace->rwsem); switch (dev_replace->replace_state) { case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED: case BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED: case BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED: result = BTRFS_IOCTL_DEV_REPLACE_RESULT_NOT_STARTED; - btrfs_dev_replace_write_unlock(dev_replace); - goto leave; + up_write(&dev_replace->rwsem); + break; case BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED: + tgt_device = dev_replace->tgtdev; + src_device = dev_replace->srcdev; + up_write(&dev_replace->rwsem); + ret = btrfs_scrub_cancel(fs_info); + if (ret < 0) { + result = BTRFS_IOCTL_DEV_REPLACE_RESULT_NOT_STARTED; + } else { + result = BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_ERROR; + /* + * btrfs_dev_replace_finishing() will handle the + * cleanup part + */ + btrfs_info_in_rcu(fs_info, + "dev_replace from %s (devid %llu) to %s canceled", + btrfs_dev_name(src_device), src_device->devid, + btrfs_dev_name(tgt_device)); + } + break; case BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED: + /* + * Scrub doing the replace isn't running so we need to do the + * cleanup step of btrfs_dev_replace_finishing() here + */ result = BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_ERROR; tgt_device = dev_replace->tgtdev; src_device = dev_replace->srcdev; dev_replace->tgtdev = NULL; dev_replace->srcdev = NULL; - break; - } - dev_replace->replace_state = BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED; - dev_replace->time_stopped = ktime_get_real_seconds(); - dev_replace->item_needs_writeback = 1; - btrfs_dev_replace_write_unlock(dev_replace); - btrfs_scrub_cancel(fs_info); + dev_replace->replace_state = + 
BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED; + dev_replace->time_stopped = ktime_get_real_seconds(); + dev_replace->item_needs_writeback = 1; - trans = btrfs_start_transaction(root, 0); - if (IS_ERR(trans)) { - mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); - return PTR_ERR(trans); - } - ret = btrfs_commit_transaction(trans); - WARN_ON(ret); + up_write(&dev_replace->rwsem); - btrfs_info_in_rcu(fs_info, - "dev_replace from %s (devid %llu) to %s canceled", - btrfs_dev_name(src_device), src_device->devid, - btrfs_dev_name(tgt_device)); + /* Scrub for replace must not be running in suspended state */ + ret = btrfs_scrub_cancel(fs_info); + ASSERT(ret != -ENOTCONN); + + trans = btrfs_start_transaction(root, 0); + if (IS_ERR(trans)) { + mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); + return PTR_ERR(trans); + } + ret = btrfs_commit_transaction(trans); + WARN_ON(ret); - if (tgt_device) - btrfs_destroy_dev_replace_tgtdev(tgt_device); + btrfs_info_in_rcu(fs_info, + "suspended dev_replace from %s (devid %llu) to %s canceled", + btrfs_dev_name(src_device), src_device->devid, + btrfs_dev_name(tgt_device)); + + if (tgt_device) + btrfs_destroy_dev_replace_tgtdev(tgt_device); + break; + default: + result = -EINVAL; + } -leave: mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); return result; } @@ -839,7 +874,8 @@ void btrfs_dev_replace_suspend_for_unmount(struct btrfs_fs_info *fs_info) struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace; mutex_lock(&dev_replace->lock_finishing_cancel_unmount); - btrfs_dev_replace_write_lock(dev_replace); + down_write(&dev_replace->rwsem); + switch (dev_replace->replace_state) { case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED: case BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED: @@ -855,7 +891,7 @@ void btrfs_dev_replace_suspend_for_unmount(struct btrfs_fs_info *fs_info) break; } - btrfs_dev_replace_write_unlock(dev_replace); + up_write(&dev_replace->rwsem); mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); } @@ -865,12 +901,13 @@ int btrfs_resume_dev_replace_async(struct btrfs_fs_info *fs_info) struct task_struct *task; struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace; - btrfs_dev_replace_write_lock(dev_replace); + down_write(&dev_replace->rwsem); + switch (dev_replace->replace_state) { case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED: case BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED: case BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED: - btrfs_dev_replace_write_unlock(dev_replace); + up_write(&dev_replace->rwsem); return 0; case BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED: break; @@ -884,10 +921,12 @@ int btrfs_resume_dev_replace_async(struct btrfs_fs_info *fs_info) "cannot continue dev_replace, tgtdev is missing"); btrfs_info(fs_info, "you may cancel the operation after 'mount -o degraded'"); - btrfs_dev_replace_write_unlock(dev_replace); + dev_replace->replace_state = + BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED; + up_write(&dev_replace->rwsem); return 0; } - btrfs_dev_replace_write_unlock(dev_replace); + up_write(&dev_replace->rwsem); /* * This could collide with a paused balance, but the exclusive op logic @@ -895,6 +934,10 @@ int btrfs_resume_dev_replace_async(struct btrfs_fs_info *fs_info) * dev-replace to start anyway. 
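The guard used right after this comment relies on test_and_set_bit() making "claim the exclusive-operation slot" one atomic step: whoever sees the old bit clear owns the slot. A userspace model with C11 atomics (the bit number and helper names are illustrative):

#include <stdatomic.h>
#include <stdio.h>

#define EXCL_OP_BIT 16

static atomic_ulong fs_flags = 0;

/* Returns 1 if we claimed the slot, 0 if another operation holds it. */
static int try_start_exclusive_op(void)
{
	unsigned long old = atomic_fetch_or(&fs_flags, 1UL << EXCL_OP_BIT);

	return !(old & (1UL << EXCL_OP_BIT));
}

static void end_exclusive_op(void)
{
	atomic_fetch_and(&fs_flags, ~(1UL << EXCL_OP_BIT));
}

int main(void)
{
	printf("%d\n", try_start_exclusive_op());	/* 1: claimed */
	printf("%d\n", try_start_exclusive_op());	/* 0: busy */
	end_exclusive_op();
	printf("%d\n", try_start_exclusive_op());	/* 1: claimed again */
	return 0;
}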
*/ if (test_and_set_bit(BTRFS_FS_EXCL_OP, &fs_info->flags)) { + down_write(&dev_replace->rwsem); + dev_replace->replace_state = + BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED; + up_write(&dev_replace->rwsem); btrfs_info(fs_info, "cannot resume dev-replace, other exclusive operation running"); return 0; @@ -925,7 +968,7 @@ static int btrfs_dev_replace_kthread(void *data) btrfs_device_get_total_bytes(dev_replace->srcdev), &dev_replace->scrub_progress, 0, 1); ret = btrfs_dev_replace_finishing(fs_info, ret); - WARN_ON(ret); + WARN_ON(ret && ret != -ECANCELED); clear_bit(BTRFS_FS_EXCL_OP, &fs_info->flags); return 0; @@ -948,7 +991,7 @@ int btrfs_dev_replace_is_ongoing(struct btrfs_dev_replace *dev_replace) * something that can happen if the dev_replace * procedure is suspended by an umount and then * the tgtdev is missing (or "btrfs dev scan") was - * not called and the the filesystem is remounted + * not called and the filesystem is remounted * in degraded state. This does not stop the * dev_replace procedure. It needs to be canceled * manually if the cancellation is wanted. @@ -958,42 +1001,6 @@ int btrfs_dev_replace_is_ongoing(struct btrfs_dev_replace *dev_replace) return 1; } -void btrfs_dev_replace_read_lock(struct btrfs_dev_replace *dev_replace) -{ - read_lock(&dev_replace->lock); -} - -void btrfs_dev_replace_read_unlock(struct btrfs_dev_replace *dev_replace) -{ - read_unlock(&dev_replace->lock); -} - -void btrfs_dev_replace_write_lock(struct btrfs_dev_replace *dev_replace) -{ -again: - wait_event(dev_replace->read_lock_wq, - atomic_read(&dev_replace->blocking_readers) == 0); - write_lock(&dev_replace->lock); - if (atomic_read(&dev_replace->blocking_readers)) { - write_unlock(&dev_replace->lock); - goto again; - } -} - -void btrfs_dev_replace_write_unlock(struct btrfs_dev_replace *dev_replace) -{ - write_unlock(&dev_replace->lock); -} - -/* inc blocking cnt and release read lock */ -void btrfs_dev_replace_set_lock_blocking( - struct btrfs_dev_replace *dev_replace) -{ - /* only set blocking for read lock */ - atomic_inc(&dev_replace->blocking_readers); - read_unlock(&dev_replace->lock); -} - void btrfs_bio_counter_inc_noblocked(struct btrfs_fs_info *fs_info) { percpu_counter_inc(&fs_info->dev_replace.bio_counter); diff --git a/fs/btrfs/dev-replace.h b/fs/btrfs/dev-replace.h index 795c551f5b5e..4aa40bacc6cc 100644 --- a/fs/btrfs/dev-replace.h +++ b/fs/btrfs/dev-replace.h @@ -13,19 +13,11 @@ int btrfs_run_dev_replace(struct btrfs_trans_handle *trans, struct btrfs_fs_info *fs_info); int btrfs_dev_replace_by_ioctl(struct btrfs_fs_info *fs_info, struct btrfs_ioctl_dev_replace_args *args); -int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info, - const char *tgtdev_name, u64 srcdevid, const char *srcdev_name, - int read_src); void btrfs_dev_replace_status(struct btrfs_fs_info *fs_info, struct btrfs_ioctl_dev_replace_args *args); int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info); void btrfs_dev_replace_suspend_for_unmount(struct btrfs_fs_info *fs_info); int btrfs_resume_dev_replace_async(struct btrfs_fs_info *fs_info); int btrfs_dev_replace_is_ongoing(struct btrfs_dev_replace *dev_replace); -void btrfs_dev_replace_read_lock(struct btrfs_dev_replace *dev_replace); -void btrfs_dev_replace_read_unlock(struct btrfs_dev_replace *dev_replace); -void btrfs_dev_replace_write_lock(struct btrfs_dev_replace *dev_replace); -void btrfs_dev_replace_write_unlock(struct btrfs_dev_replace *dev_replace); -void btrfs_dev_replace_set_lock_blocking(struct btrfs_dev_replace *dev_replace); #endif diff --git 
a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 6d776717d8b3..8da2f380d3c0 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -279,6 +279,12 @@ static int csum_tree_block(struct btrfs_fs_info *fs_info, len = buf->len - offset; while (len > 0) { + /* + * Note: we don't need to check for the err == 1 case here, as + * with the given combination of 'start = BTRFS_CSUM_SIZE (32)' + * and 'min_len = 32' and the currently implemented mapping + * algorithm we cannot cross a page boundary. + */ err = map_private_extent_buffer(buf, offset, 32, &kaddr, &map_start, &map_len); if (err) @@ -542,7 +548,7 @@ static int csum_dirty_buffer(struct btrfs_fs_info *fs_info, struct page *page) if (WARN_ON(!PageUptodate(page))) return -EUCLEAN; - ASSERT(memcmp_extent_buffer(eb, fs_info->fsid, + ASSERT(memcmp_extent_buffer(eb, fs_info->fs_devices->metadata_uuid, btrfs_header_fsid(), BTRFS_FSID_SIZE) == 0); return csum_tree_block(fs_info, eb, 0); @@ -557,7 +563,20 @@ static int check_tree_block_fsid(struct btrfs_fs_info *fs_info, read_extent_buffer(eb, fsid, btrfs_header_fsid(), BTRFS_FSID_SIZE); while (fs_devices) { - if (!memcmp(fsid, fs_devices->fsid, BTRFS_FSID_SIZE)) { + u8 *metadata_uuid; + + /* + * Checking the incompat flag is only valid for the current + * fs. For seed devices it's forbidden to have their uuid + * changed so reading ->fsid in this case is fine + */ + if (fs_devices == fs_info->fs_devices && + btrfs_fs_incompat(fs_info, METADATA_UUID)) + metadata_uuid = fs_devices->metadata_uuid; + else + metadata_uuid = fs_devices->fsid; + + if (!memcmp(fsid, metadata_uuid, BTRFS_FSID_SIZE)) { ret = 0; break; } @@ -660,19 +679,6 @@ out: return ret; } -static int btree_io_failed_hook(struct page *page, int failed_mirror) -{ - struct extent_buffer *eb; - - eb = (struct extent_buffer *)page->private; - set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags); - eb->read_mirror = failed_mirror; - atomic_dec(&eb->io_pages); - if (test_and_clear_bit(EXTENT_BUFFER_READAHEAD, &eb->bflags)) - btree_readahead_hook(eb, -EIO); - return -EIO; /* we fixed nothing */ -} - static void end_workqueue_bio(struct bio *bio) { struct btrfs_end_io_wq *end_io_wq = bio->bi_private; @@ -751,11 +757,22 @@ static void run_one_async_start(struct btrfs_work *work) async->status = ret; } +/* + * In order to insert checksums into the metadata in large chunks, we wait + * until bio submission time. All the pages in the bio are checksummed and + * sums are attached onto the ordered extent record. + * + * At IO completion time the csums attached on the ordered extent record are + * inserted into the tree. 
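run_one_async_done() below now maps the bio itself and, on a failure at either stage, completes the bio with the error instead of bouncing through a separate helper. The shape of that control flow, with illustrative stand-in types (not the kernel's struct bio or async_submit_bio):

#include <stdio.h>

struct bio_s { int status; };
struct async_submit { struct bio_s *bio; int status; int mirror_num; };

static int map_bio(struct bio_s *bio, int mirror)
{
	(void)bio; (void)mirror;
	printf("bio submitted to the device\n");
	return 0;		/* imagine a mapping error code here */
}

static void bio_endio_s(struct bio_s *bio)
{
	printf("bio completed, status=%d\n", bio->status);
}

static void run_one_async_done_s(struct async_submit *async)
{
	int ret;

	if (async->status) {	/* the checksum step failed earlier */
		async->bio->status = async->status;
		bio_endio_s(async->bio);
		return;
	}
	ret = map_bio(async->bio, async->mirror_num);
	if (ret) {		/* mapping failed: fail the bio here */
		async->bio->status = ret;
		bio_endio_s(async->bio);
	}
}

int main(void)
{
	struct bio_s ok = { 0 }, bad = { 0 };
	struct async_submit a = { &ok, 0, 0 };
	struct async_submit b = { &bad, -5, 0 };

	run_one_async_done_s(&a);	/* submits */
	run_one_async_done_s(&b);	/* completes with -5 */
	return 0;
}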
+ */ static void run_one_async_done(struct btrfs_work *work) { struct async_submit_bio *async; + struct inode *inode; + blk_status_t ret; async = container_of(work, struct async_submit_bio, work); + inode = async->private_data; /* If an error occurred we just want to clean up the bio and move on */ if (async->status) { @@ -764,7 +781,12 @@ static void run_one_async_done(struct btrfs_work *work) return; } - btrfs_submit_bio_done(async->private_data, async->bio, async->mirror_num); + ret = btrfs_map_bio(btrfs_sb(inode->i_sb), async->bio, + async->mirror_num, 1); + if (ret) { + async->bio->bi_status = ret; + bio_endio(async->bio); + } } static void run_one_async_free(struct btrfs_work *work) @@ -1178,6 +1200,7 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info, refcount_set(&root->refs, 1); atomic_set(&root->will_be_snapshotted, 0); atomic_set(&root->snapshot_force_cow, 0); + atomic_set(&root->nr_swapfiles, 0); root->log_transid = 0; root->log_transid_committed = -1; root->last_log_commit = 0; @@ -2118,10 +2141,8 @@ static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info) static void btrfs_init_dev_replace_locks(struct btrfs_fs_info *fs_info) { mutex_init(&fs_info->dev_replace.lock_finishing_cancel_unmount); - rwlock_init(&fs_info->dev_replace.lock); - atomic_set(&fs_info->dev_replace.blocking_readers, 0); + init_rwsem(&fs_info->dev_replace.rwsem); init_waitqueue_head(&fs_info->dev_replace.replace_wait); - init_waitqueue_head(&fs_info->dev_replace.read_lock_wq); } static void btrfs_init_qgroup(struct btrfs_fs_info *fs_info) @@ -2442,10 +2463,11 @@ static int validate_super(struct btrfs_fs_info *fs_info, ret = -EINVAL; } - if (memcmp(fs_info->fsid, sb->dev_item.fsid, BTRFS_FSID_SIZE) != 0) { + if (memcmp(fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid, + BTRFS_FSID_SIZE) != 0) { btrfs_err(fs_info, - "dev_item UUID does not match fsid: %pU != %pU", - fs_info->fsid, sb->dev_item.fsid); + "dev_item UUID does not match metadata fsid: %pU != %pU", + fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid); ret = -EINVAL; } @@ -2656,6 +2678,9 @@ int open_ctree(struct super_block *sb, btrfs_init_block_rsv(&fs_info->empty_block_rsv, BTRFS_BLOCK_RSV_EMPTY); btrfs_init_block_rsv(&fs_info->delayed_block_rsv, BTRFS_BLOCK_RSV_DELOPS); + btrfs_init_block_rsv(&fs_info->delayed_refs_rsv, + BTRFS_BLOCK_RSV_DELREFS); + atomic_set(&fs_info->async_delalloc_pages, 0); atomic_set(&fs_info->defrag_running, 0); atomic_set(&fs_info->qgroup_op_seq, 0); @@ -2745,6 +2770,9 @@ int open_ctree(struct super_block *sb, fs_info->sectorsize = 4096; fs_info->stripesize = 4096; + spin_lock_init(&fs_info->swapfile_pins_lock); + fs_info->swapfile_pins = RB_ROOT; + ret = btrfs_alloc_stripe_hash_table(fs_info); if (ret) { err = ret; @@ -2781,11 +2809,29 @@ int open_ctree(struct super_block *sb, * the whole block of INFO_SIZE */ memcpy(fs_info->super_copy, bh->b_data, sizeof(*fs_info->super_copy)); - memcpy(fs_info->super_for_commit, fs_info->super_copy, - sizeof(*fs_info->super_for_commit)); brelse(bh); - memcpy(fs_info->fsid, fs_info->super_copy->fsid, BTRFS_FSID_SIZE); + disk_super = fs_info->super_copy; + + ASSERT(!memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid, + BTRFS_FSID_SIZE)); + + if (btrfs_fs_incompat(fs_info, METADATA_UUID)) { + ASSERT(!memcmp(fs_info->fs_devices->metadata_uuid, + fs_info->super_copy->metadata_uuid, + BTRFS_FSID_SIZE)); + } + + features = btrfs_super_flags(disk_super); + if (features & BTRFS_SUPER_FLAG_CHANGING_FSID_V2) { + features &= 
~BTRFS_SUPER_FLAG_CHANGING_FSID_V2; + btrfs_set_super_flags(disk_super, features); + btrfs_info(fs_info, + "found metadata UUID change in progress flag, clearing"); + } + + memcpy(fs_info->super_for_commit, fs_info->super_copy, + sizeof(*fs_info->super_for_commit)); ret = btrfs_validate_mount_super(fs_info); if (ret) { @@ -2794,7 +2840,6 @@ int open_ctree(struct super_block *sb, goto fail_alloc; } - disk_super = fs_info->super_copy; if (!btrfs_super_root(disk_super)) goto fail_alloc; @@ -2906,7 +2951,7 @@ int open_ctree(struct super_block *sb, sb->s_blocksize = sectorsize; sb->s_blocksize_bits = blksize_bits(sectorsize); - memcpy(&sb->s_uuid, fs_info->fsid, BTRFS_FSID_SIZE); + memcpy(&sb->s_uuid, fs_info->fs_devices->fsid, BTRFS_FSID_SIZE); mutex_lock(&fs_info->chunk_mutex); ret = btrfs_read_sys_array(fs_info); @@ -3055,7 +3100,7 @@ retry_root_backup: if (!sb_rdonly(sb) && !btrfs_check_rw_degradable(fs_info, NULL)) { btrfs_warn(fs_info, - "writeable mount is not allowed due to too many missing devices"); + "writable mount is not allowed due to too many missing devices"); goto fail_sysfs; } @@ -3724,7 +3769,8 @@ int write_all_supers(struct btrfs_fs_info *fs_info, int max_mirrors) btrfs_set_stack_device_io_width(dev_item, dev->io_width); btrfs_set_stack_device_sector_size(dev_item, dev->sector_size); memcpy(dev_item->uuid, dev->uuid, BTRFS_UUID_SIZE); - memcpy(dev_item->fsid, dev->fs_devices->fsid, BTRFS_FSID_SIZE); + memcpy(dev_item->fsid, dev->fs_devices->metadata_uuid, + BTRFS_FSID_SIZE); flags = btrfs_super_flags(sb); btrfs_set_super_flags(sb, flags | BTRFS_HEADER_FLAG_WRITTEN); @@ -4031,7 +4077,7 @@ void btrfs_mark_buffer_dirty(struct extent_buffer *buf) #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS /* * This is a fast path so only do this check if we have sanity tests - * enabled. Normal people shouldn't be using umapped buffers as dirty + * enabled. Normal people shouldn't be using unmapped buffers as dirty * outside of the sanity tests. */ if (unlikely(test_bit(EXTENT_BUFFER_UNMAPPED, &buf->bflags))) @@ -4329,6 +4375,8 @@ static int btrfs_destroy_pinned_extent(struct btrfs_fs_info *fs_info, unpin = pinned_extents; again: while (1) { + struct extent_state *cached_state = NULL; + /* * The btrfs_finish_extent_commit() may get the same range as * ours between find_first_extent_bit and clear_extent_dirty. 
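The hunk that follows threads a cached_state through find_first_extent_bit() and clear_extent_dirty(), letting the clear start from the extent_state the search already found instead of walking the tree again; the extra reference is then dropped with free_extent_state(). A loose userspace model of that contract (illustrative types, simplified refcounting):

#include <stdio.h>
#include <stdlib.h>

struct extent_state_sim { unsigned long start, end; int refs; };

/* The search hands back a referenced hit through *cached. */
static struct extent_state_sim *find_first(struct extent_state_sim **cached)
{
	struct extent_state_sim *st = malloc(sizeof(*st));

	if (!st)
		return NULL;
	st->start = 0;
	st->end = 4095;
	st->refs = 1;
	*cached = st;
	return st;
}

/* The clear starts from the cached hit rather than the tree root. */
static void clear_dirty(struct extent_state_sim *cached)
{
	printf("clearing [%lu, %lu]\n", cached->start, cached->end);
}

static void free_state(struct extent_state_sim *st)
{
	if (st && --st->refs == 0)
		free(st);
}

int main(void)
{
	struct extent_state_sim *cached = NULL;

	if (!find_first(&cached))
		return 1;
	clear_dirty(cached);
	free_state(cached);	/* matches the added free_extent_state() */
	return 0;
}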
@@ -4337,13 +4385,14 @@ again: */ mutex_lock(&fs_info->unused_bg_unpin_mutex); ret = find_first_extent_bit(unpin, 0, &start, &end, - EXTENT_DIRTY, NULL); + EXTENT_DIRTY, &cached_state); if (ret) { mutex_unlock(&fs_info->unused_bg_unpin_mutex); break; } - clear_extent_dirty(unpin, start, end); + clear_extent_dirty(unpin, start, end, &cached_state); + free_extent_state(cached_state); btrfs_error_unpin_extent_range(fs_info, start, end); mutex_unlock(&fs_info->unused_bg_unpin_mutex); cond_resched(); @@ -4400,6 +4449,7 @@ void btrfs_cleanup_dirty_bgs(struct btrfs_transaction *cur_trans, spin_unlock(&cur_trans->dirty_bgs_lock); btrfs_put_block_group(cache); + btrfs_delayed_refs_rsv_release(fs_info, 1); spin_lock(&cur_trans->dirty_bgs_lock); } spin_unlock(&cur_trans->dirty_bgs_lock); @@ -4505,7 +4555,4 @@ static const struct extent_io_ops btree_extent_io_ops = { /* mandatory callbacks */ .submit_bio_hook = btree_submit_bio_hook, .readpage_end_io_hook = btree_readpage_end_io_hook, - .readpage_io_failed_hook = btree_io_failed_hook, - - /* optional callbacks */ }; diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h index 4cccba22640f..987a64bc0c66 100644 --- a/fs/btrfs/disk-io.h +++ b/fs/btrfs/disk-io.h @@ -21,11 +21,11 @@ #define BTRFS_BDEV_BLOCKSIZE (4096) enum btrfs_wq_endio_type { - BTRFS_WQ_ENDIO_DATA = 0, - BTRFS_WQ_ENDIO_METADATA = 1, - BTRFS_WQ_ENDIO_FREE_SPACE = 2, - BTRFS_WQ_ENDIO_RAID56 = 3, - BTRFS_WQ_ENDIO_DIO_REPAIR = 4, + BTRFS_WQ_ENDIO_DATA, + BTRFS_WQ_ENDIO_METADATA, + BTRFS_WQ_ENDIO_FREE_SPACE, + BTRFS_WQ_ENDIO_RAID56, + BTRFS_WQ_ENDIO_DIO_REPAIR, }; static inline u64 btrfs_sb_offset(int mirror) diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c index a1febf155747..b15afeae16df 100644 --- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -51,6 +51,24 @@ enum { CHUNK_ALLOC_FORCE = 2, }; +/* + * Declare a helper function to detect underflow of various space info members + */ +#define DECLARE_SPACE_INFO_UPDATE(name) \ +static inline void update_##name(struct btrfs_space_info *sinfo, \ + s64 bytes) \ +{ \ + if (bytes < 0 && sinfo->name < -bytes) { \ + WARN_ON(1); \ + sinfo->name = 0; \ + return; \ + } \ + sinfo->name += bytes; \ +} + +DECLARE_SPACE_INFO_UPDATE(bytes_may_use); +DECLARE_SPACE_INFO_UPDATE(bytes_pinned); + static int __btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_delayed_ref_node *node, u64 parent, u64 root_objectid, u64 owner_objectid, @@ -1037,7 +1055,7 @@ out_free: /* * is_data == BTRFS_REF_TYPE_BLOCK, tree block type is required, - * is_data == BTRFS_REF_TYPE_DATA, data type is requried, + * is_data == BTRFS_REF_TYPE_DATA, data type is required, * is_data == BTRFS_REF_TYPE_ANY, either type is OK. 
*/ int btrfs_get_extent_inline_ref_type(const struct extent_buffer *eb, @@ -2406,25 +2424,82 @@ static void unselect_delayed_ref_head(struct btrfs_delayed_ref_root *delayed_ref btrfs_delayed_ref_unlock(head); } -static int cleanup_extent_op(struct btrfs_trans_handle *trans, - struct btrfs_delayed_ref_head *head) +static struct btrfs_delayed_extent_op *cleanup_extent_op( + struct btrfs_delayed_ref_head *head) { struct btrfs_delayed_extent_op *extent_op = head->extent_op; - int ret; if (!extent_op) - return 0; - head->extent_op = NULL; + return NULL; + if (head->must_insert_reserved) { + head->extent_op = NULL; btrfs_free_delayed_extent_op(extent_op); - return 0; + return NULL; } + return extent_op; +} + +static int run_and_cleanup_extent_op(struct btrfs_trans_handle *trans, + struct btrfs_delayed_ref_head *head) +{ + struct btrfs_delayed_extent_op *extent_op; + int ret; + + extent_op = cleanup_extent_op(head); + if (!extent_op) + return 0; + head->extent_op = NULL; spin_unlock(&head->lock); ret = run_delayed_extent_op(trans, head, extent_op); btrfs_free_delayed_extent_op(extent_op); return ret ? ret : 1; } +static void cleanup_ref_head_accounting(struct btrfs_trans_handle *trans, + struct btrfs_delayed_ref_head *head) +{ + struct btrfs_fs_info *fs_info = trans->fs_info; + struct btrfs_delayed_ref_root *delayed_refs = + &trans->transaction->delayed_refs; + int nr_items = 1; /* Dropping this ref head update. */ + + if (head->total_ref_mod < 0) { + struct btrfs_space_info *space_info; + u64 flags; + + if (head->is_data) + flags = BTRFS_BLOCK_GROUP_DATA; + else if (head->is_system) + flags = BTRFS_BLOCK_GROUP_SYSTEM; + else + flags = BTRFS_BLOCK_GROUP_METADATA; + space_info = __find_space_info(fs_info, flags); + ASSERT(space_info); + percpu_counter_add_batch(&space_info->total_bytes_pinned, + -head->num_bytes, + BTRFS_TOTAL_BYTES_PINNED_BATCH); + + /* + * We had csum deletions accounted for in our delayed refs rsv, + * we need to drop the csum leaves for this update from our + * delayed_refs_rsv. 
+ */ + if (head->is_data) { + spin_lock(&delayed_refs->lock); + delayed_refs->pending_csums -= head->num_bytes; + spin_unlock(&delayed_refs->lock); + nr_items += btrfs_csum_bytes_to_leaves(fs_info, + head->num_bytes); + } + } + + /* Also free its reserved qgroup space */ + btrfs_qgroup_free_delayed_ref(fs_info, head->qgroup_ref_root, + head->qgroup_reserved); + btrfs_delayed_refs_rsv_release(fs_info, nr_items); +} + static int cleanup_ref_head(struct btrfs_trans_handle *trans, struct btrfs_delayed_ref_head *head) { @@ -2435,7 +2510,7 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans, delayed_refs = &trans->transaction->delayed_refs; - ret = cleanup_extent_op(trans, head); + ret = run_and_cleanup_extent_op(trans, head); if (ret < 0) { unselect_delayed_ref_head(delayed_refs, head); btrfs_debug(fs_info, "run_delayed_extent_op returned %d", ret); @@ -2456,37 +2531,9 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans, spin_unlock(&delayed_refs->lock); return 1; } - delayed_refs->num_heads--; - rb_erase_cached(&head->href_node, &delayed_refs->href_root); - RB_CLEAR_NODE(&head->href_node); + btrfs_delete_ref_head(delayed_refs, head); spin_unlock(&head->lock); spin_unlock(&delayed_refs->lock); - atomic_dec(&delayed_refs->num_entries); - - trace_run_delayed_ref_head(fs_info, head, 0); - - if (head->total_ref_mod < 0) { - struct btrfs_space_info *space_info; - u64 flags; - - if (head->is_data) - flags = BTRFS_BLOCK_GROUP_DATA; - else if (head->is_system) - flags = BTRFS_BLOCK_GROUP_SYSTEM; - else - flags = BTRFS_BLOCK_GROUP_METADATA; - space_info = __find_space_info(fs_info, flags); - ASSERT(space_info); - percpu_counter_add_batch(&space_info->total_bytes_pinned, - -head->num_bytes, - BTRFS_TOTAL_BYTES_PINNED_BATCH); - - if (head->is_data) { - spin_lock(&delayed_refs->lock); - delayed_refs->pending_csums -= head->num_bytes; - spin_unlock(&delayed_refs->lock); - } - } if (head->must_insert_reserved) { btrfs_pin_extent(fs_info, head->bytenr, @@ -2497,9 +2544,9 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans, } } - /* Also free its reserved qgroup space */ - btrfs_qgroup_free_delayed_ref(fs_info, head->qgroup_ref_root, - head->qgroup_reserved); + cleanup_ref_head_accounting(trans, head); + + trace_run_delayed_ref_head(fs_info, head, 0); btrfs_delayed_ref_unlock(head); btrfs_put_delayed_ref_head(head); return 0; @@ -2792,40 +2839,28 @@ u64 btrfs_csum_bytes_to_leaves(struct btrfs_fs_info *fs_info, u64 csum_bytes) return num_csums; } -int btrfs_check_space_for_delayed_refs(struct btrfs_trans_handle *trans) +bool btrfs_check_space_for_delayed_refs(struct btrfs_fs_info *fs_info) { - struct btrfs_fs_info *fs_info = trans->fs_info; - struct btrfs_block_rsv *global_rsv; - u64 num_heads = trans->transaction->delayed_refs.num_heads_ready; - u64 csum_bytes = trans->transaction->delayed_refs.pending_csums; - unsigned int num_dirty_bgs = trans->transaction->num_dirty_bgs; - u64 num_bytes, num_dirty_bgs_bytes; - int ret = 0; + struct btrfs_block_rsv *delayed_refs_rsv = &fs_info->delayed_refs_rsv; + struct btrfs_block_rsv *global_rsv = &fs_info->global_block_rsv; + bool ret = false; + u64 reserved; - num_bytes = btrfs_calc_trans_metadata_size(fs_info, 1); - num_heads = heads_to_leaves(fs_info, num_heads); - if (num_heads > 1) - num_bytes += (num_heads - 1) * fs_info->nodesize; - num_bytes <<= 1; - num_bytes += btrfs_csum_bytes_to_leaves(fs_info, csum_bytes) * - fs_info->nodesize; - num_dirty_bgs_bytes = btrfs_calc_trans_metadata_size(fs_info, - num_dirty_bgs); - global_rsv = 
&fs_info->global_block_rsv; + spin_lock(&global_rsv->lock); + reserved = global_rsv->reserved; + spin_unlock(&global_rsv->lock); /* - * If we can't allocate any more chunks lets make sure we have _lots_ of - * wiggle room since running delayed refs can create more delayed refs. + * Since the global reserve is just kind of magic we don't really want + * to rely on it to save our bacon, so if our size is more than the + * delayed_refs_rsv and the global rsv then it's time to think about + * bailing. */ - if (global_rsv->space_info->full) { - num_dirty_bgs_bytes <<= 1; - num_bytes <<= 1; - } - - spin_lock(&global_rsv->lock); - if (global_rsv->reserved <= num_bytes + num_dirty_bgs_bytes) - ret = 1; - spin_unlock(&global_rsv->lock); + spin_lock(&delayed_refs_rsv->lock); + reserved += delayed_refs_rsv->reserved; + if (delayed_refs_rsv->size >= reserved) + ret = true; + spin_unlock(&delayed_refs_rsv->lock); return ret; } @@ -2844,7 +2879,7 @@ int btrfs_should_throttle_delayed_refs(struct btrfs_trans_handle *trans) if (val >= NSEC_PER_SEC / 2) return 2; - return btrfs_check_space_for_delayed_refs(trans); + return btrfs_check_space_for_delayed_refs(trans->fs_info); } struct async_delayed_refs { @@ -3588,6 +3623,8 @@ again: */ mutex_lock(&trans->transaction->cache_write_mutex); while (!list_empty(&dirty)) { + bool drop_reserve = true; + cache = list_first_entry(&dirty, struct btrfs_block_group_cache, dirty_list); @@ -3660,6 +3697,7 @@ again: list_add_tail(&cache->dirty_list, &cur_trans->dirty_bgs); btrfs_get_block_group(cache); + drop_reserve = false; } spin_unlock(&cur_trans->dirty_bgs_lock); } else if (ret) { @@ -3667,9 +3705,11 @@ again: } } - /* if its not on the io list, we need to put the block group */ + /* if it's not on the io list, we need to put the block group */ if (should_put) btrfs_put_block_group(cache); + if (drop_reserve) + btrfs_delayed_refs_rsv_release(fs_info, 1); if (ret) break; @@ -3818,6 +3858,7 @@ int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans, /* if its not on the io list, we need to put the block group */ if (should_put) btrfs_put_block_group(cache); + btrfs_delayed_refs_rsv_release(fs_info, 1); spin_lock(&cur_trans->dirty_bgs_lock); } spin_unlock(&cur_trans->dirty_bgs_lock); @@ -4256,7 +4297,7 @@ commit_trans: data_sinfo->flags, bytes, 1); return -ENOSPC; } - data_sinfo->bytes_may_use += bytes; + update_bytes_may_use(data_sinfo, bytes); trace_btrfs_space_reservation(fs_info, "space_info", data_sinfo->flags, bytes, 1); spin_unlock(&data_sinfo->lock); @@ -4309,10 +4350,7 @@ void btrfs_free_reserved_data_space_noquota(struct inode *inode, u64 start, data_sinfo = fs_info->data_sinfo; spin_lock(&data_sinfo->lock); - if (WARN_ON(data_sinfo->bytes_may_use < len)) - data_sinfo->bytes_may_use = 0; - else - data_sinfo->bytes_may_use -= len; + update_bytes_may_use(data_sinfo, -len); trace_btrfs_space_reservation(fs_info, "space_info", data_sinfo->flags, len, 0); spin_unlock(&data_sinfo->lock); @@ -4637,7 +4675,7 @@ static int can_overcommit(struct btrfs_fs_info *fs_info, /* * If we have dup, raid1 or raid10 then only half of the free - * space is actually useable. For raid56, the space info used + * space is actually usable. 
For raid56, the space info used * doesn't include the parity drive, so we don't have to * change the math */ @@ -4793,8 +4831,10 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info, { struct reserve_ticket *ticket = NULL; struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_block_rsv; + struct btrfs_block_rsv *delayed_refs_rsv = &fs_info->delayed_refs_rsv; struct btrfs_trans_handle *trans; - u64 bytes; + u64 bytes_needed; + u64 reclaim_bytes = 0; trans = (struct btrfs_trans_handle *)current->journal_info; if (trans) @@ -4807,15 +4847,15 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info, else if (!list_empty(&space_info->tickets)) ticket = list_first_entry(&space_info->tickets, struct reserve_ticket, list); - bytes = (ticket) ? ticket->bytes : 0; + bytes_needed = (ticket) ? ticket->bytes : 0; spin_unlock(&space_info->lock); - if (!bytes) + if (!bytes_needed) return 0; /* See if there is enough pinned space to make this reservation */ if (__percpu_counter_compare(&space_info->total_bytes_pinned, - bytes, + bytes_needed, BTRFS_TOTAL_BYTES_PINNED_BATCH) >= 0) goto commit; @@ -4827,14 +4867,18 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info, return -ENOSPC; spin_lock(&delayed_rsv->lock); - if (delayed_rsv->size > bytes) - bytes = 0; - else - bytes -= delayed_rsv->size; + reclaim_bytes += delayed_rsv->reserved; spin_unlock(&delayed_rsv->lock); + spin_lock(&delayed_refs_rsv->lock); + reclaim_bytes += delayed_refs_rsv->reserved; + spin_unlock(&delayed_refs_rsv->lock); + if (reclaim_bytes >= bytes_needed) + goto commit; + bytes_needed -= reclaim_bytes; + if (__percpu_counter_compare(&space_info->total_bytes_pinned, - bytes, + bytes_needed, BTRFS_TOTAL_BYTES_PINNED_BATCH) < 0) { return -ENOSPC; } @@ -4882,6 +4926,20 @@ static void flush_space(struct btrfs_fs_info *fs_info, shrink_delalloc(fs_info, num_bytes * 2, num_bytes, state == FLUSH_DELALLOC_WAIT); break; + case FLUSH_DELAYED_REFS_NR: + case FLUSH_DELAYED_REFS: + trans = btrfs_join_transaction(root); + if (IS_ERR(trans)) { + ret = PTR_ERR(trans); + break; + } + if (state == FLUSH_DELAYED_REFS_NR) + nr = calc_reclaim_items_nr(fs_info, num_bytes); + else + nr = 0; + btrfs_run_delayed_refs(trans, nr); + btrfs_end_transaction(trans); + break; case ALLOC_CHUNK: trans = btrfs_join_transaction(root); if (IS_ERR(trans)) { @@ -5108,7 +5166,7 @@ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info, list_del_init(&ticket->list); if (ticket->bytes && ticket->bytes < orig_bytes) { u64 num_bytes = orig_bytes - ticket->bytes; - space_info->bytes_may_use -= num_bytes; + update_bytes_may_use(space_info, -num_bytes); trace_btrfs_space_reservation(fs_info, "space_info", space_info->flags, num_bytes, 0); } @@ -5154,13 +5212,13 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info, * If not things get more complicated. 
*/ if (used + orig_bytes <= space_info->total_bytes) { - space_info->bytes_may_use += orig_bytes; + update_bytes_may_use(space_info, orig_bytes); trace_btrfs_space_reservation(fs_info, "space_info", space_info->flags, orig_bytes, 1); ret = 0; } else if (can_overcommit(fs_info, space_info, orig_bytes, flush, system_chunk)) { - space_info->bytes_may_use += orig_bytes; + update_bytes_may_use(space_info, orig_bytes); trace_btrfs_space_reservation(fs_info, "space_info", space_info->flags, orig_bytes, 1); ret = 0; @@ -5223,7 +5281,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info, if (ticket.bytes) { if (ticket.bytes < orig_bytes) { u64 num_bytes = orig_bytes - ticket.bytes; - space_info->bytes_may_use -= num_bytes; + update_bytes_may_use(space_info, -num_bytes); trace_btrfs_space_reservation(fs_info, "space_info", space_info->flags, num_bytes, 0); @@ -5244,7 +5302,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info, * @orig_bytes - the number of bytes we want * @flush - whether or not we can flush to make our reservation * - * This will reserve orgi_bytes number of bytes from the space info associated + * This will reserve orig_bytes number of bytes from the space info associated * with the block_rsv. If there is not enough space it will make an attempt to * flush out space to make room. It will do this by flushing delalloc if * possible or committing the transaction. If flush is 0 then no attempts to @@ -5354,6 +5412,90 @@ int btrfs_cond_migrate_bytes(struct btrfs_fs_info *fs_info, return 0; } +/** + * btrfs_migrate_to_delayed_refs_rsv - transfer bytes to our delayed refs rsv. + * @fs_info - the fs info for our fs. + * @src - the source block rsv to transfer from. + * @num_bytes - the number of bytes to transfer. + * + * This transfers up to the num_bytes amount from the src rsv to the + * delayed_refs_rsv. Any extra bytes are returned to the space info. + */ +void btrfs_migrate_to_delayed_refs_rsv(struct btrfs_fs_info *fs_info, + struct btrfs_block_rsv *src, + u64 num_bytes) +{ + struct btrfs_block_rsv *delayed_refs_rsv = &fs_info->delayed_refs_rsv; + u64 to_free = 0; + + spin_lock(&src->lock); + src->reserved -= num_bytes; + src->size -= num_bytes; + spin_unlock(&src->lock); + + spin_lock(&delayed_refs_rsv->lock); + if (delayed_refs_rsv->size > delayed_refs_rsv->reserved) { + u64 delta = delayed_refs_rsv->size - + delayed_refs_rsv->reserved; + if (num_bytes > delta) { + to_free = num_bytes - delta; + num_bytes = delta; + } + } else { + to_free = num_bytes; + num_bytes = 0; + } + + if (num_bytes) + delayed_refs_rsv->reserved += num_bytes; + if (delayed_refs_rsv->reserved >= delayed_refs_rsv->size) + delayed_refs_rsv->full = 1; + spin_unlock(&delayed_refs_rsv->lock); + + if (num_bytes) + trace_btrfs_space_reservation(fs_info, "delayed_refs_rsv", + 0, num_bytes, 1); + if (to_free) + space_info_add_old_bytes(fs_info, delayed_refs_rsv->space_info, + to_free); +} + +/** + * btrfs_delayed_refs_rsv_refill - refill based on our delayed refs usage. + * @fs_info - the fs_info for our fs. + * @flush - control how we can flush for this reservation. + * + * This will refill the delayed block_rsv up to 1 items size worth of space and + * will return -ENOSPC if we can't make the reservation. 
+ */ +int btrfs_delayed_refs_rsv_refill(struct btrfs_fs_info *fs_info, + enum btrfs_reserve_flush_enum flush) +{ + struct btrfs_block_rsv *block_rsv = &fs_info->delayed_refs_rsv; + u64 limit = btrfs_calc_trans_metadata_size(fs_info, 1); + u64 num_bytes = 0; + int ret = -ENOSPC; + + spin_lock(&block_rsv->lock); + if (block_rsv->reserved < block_rsv->size) { + num_bytes = block_rsv->size - block_rsv->reserved; + num_bytes = min(num_bytes, limit); + } + spin_unlock(&block_rsv->lock); + + if (!num_bytes) + return 0; + + ret = reserve_metadata_bytes(fs_info->extent_root, block_rsv, + num_bytes, flush); + if (ret) + return ret; + block_rsv_add_bytes(block_rsv, num_bytes, 0); + trace_btrfs_space_reservation(fs_info, "delayed_refs_rsv", + 0, num_bytes, 1); + return 0; +} + /* * This is for space we already have accounted in space_info->bytes_may_use, so * basically when we're returning space from block_rsv's. @@ -5407,7 +5549,7 @@ again: flush = BTRFS_RESERVE_FLUSH_ALL; goto again; } - space_info->bytes_may_use -= num_bytes; + update_bytes_may_use(space_info, -num_bytes); trace_btrfs_space_reservation(fs_info, "space_info", space_info->flags, num_bytes, 0); spin_unlock(&space_info->lock); @@ -5435,7 +5577,7 @@ again: ticket->bytes, 1); list_del_init(&ticket->list); num_bytes -= ticket->bytes; - space_info->bytes_may_use += ticket->bytes; + update_bytes_may_use(space_info, ticket->bytes); ticket->bytes = 0; space_info->tickets_id++; wake_up(&ticket->wait); @@ -5443,7 +5585,7 @@ again: trace_btrfs_space_reservation(fs_info, "space_info", space_info->flags, num_bytes, 1); - space_info->bytes_may_use += num_bytes; + update_bytes_may_use(space_info, num_bytes); ticket->bytes -= num_bytes; num_bytes = 0; } @@ -5629,11 +5771,11 @@ int btrfs_block_rsv_refill(struct btrfs_root *root, /** * btrfs_inode_rsv_refill - refill the inode block rsv. * @inode - the inode we are refilling. - * @flush - the flusing restriction. + * @flush - the flushing restriction. * * Essentially the same as btrfs_block_rsv_refill, except it uses the * block_rsv->size as the minimum size. We'll either refill the missing amount - * or return if we already have enough space. This will also handle the resreve + * or return if we already have enough space. This will also handle the reserve * tracepoint for the reserved amount. */ static int btrfs_inode_rsv_refill(struct btrfs_inode *inode, @@ -5674,6 +5816,31 @@ static int btrfs_inode_rsv_refill(struct btrfs_inode *inode, return ret; } +static u64 __btrfs_block_rsv_release(struct btrfs_fs_info *fs_info, + struct btrfs_block_rsv *block_rsv, + u64 num_bytes, u64 *qgroup_to_release) +{ + struct btrfs_block_rsv *global_rsv = &fs_info->global_block_rsv; + struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_refs_rsv; + struct btrfs_block_rsv *target = delayed_rsv; + + if (target->full || target == block_rsv) + target = global_rsv; + + if (block_rsv->space_info != target->space_info) + target = NULL; + + return block_rsv_release_bytes(fs_info, block_rsv, target, num_bytes, + qgroup_to_release); +} + +void btrfs_block_rsv_release(struct btrfs_fs_info *fs_info, + struct btrfs_block_rsv *block_rsv, + u64 num_bytes) +{ + __btrfs_block_rsv_release(fs_info, block_rsv, num_bytes, NULL); +} + /** * btrfs_inode_rsv_release - release any excessive reservation. * @inode - the inode we need to release from. 
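The transfer logic in btrfs_migrate_to_delayed_refs_rsv() above caps the moved bytes at the destination's deficit (size minus reserved) and hands any excess back to the space info. Below is a small self-contained sketch of just that arithmetic, using plain integers in place of the locked block_rsv structures; all names here are hypothetical:

#include <assert.h>

struct toy_rsv {
	unsigned long long size;
	unsigned long long reserved;
};

/*
 * Move at most dst's deficit from src to dst; return the leftover that
 * the caller would give back to the space info.
 */
static unsigned long long migrate_bytes(struct toy_rsv *src,
					struct toy_rsv *dst,
					unsigned long long num_bytes)
{
	unsigned long long to_free = 0;

	src->reserved -= num_bytes;
	src->size -= num_bytes;

	if (dst->size > dst->reserved) {
		unsigned long long delta = dst->size - dst->reserved;

		if (num_bytes > delta) {
			to_free = num_bytes - delta;
			num_bytes = delta;
		}
	} else {
		to_free = num_bytes; /* already full, return everything */
		num_bytes = 0;
	}
	dst->reserved += num_bytes;
	return to_free;
}

int main(void)
{
	struct toy_rsv src = { .size = 100, .reserved = 100 };
	struct toy_rsv dst = { .size = 50, .reserved = 30 };

	/* deficit is 20, so 10 of the 30 bytes come back as excess */
	assert(migrate_bytes(&src, &dst, 30) == 10);
	assert(dst.reserved == 50);
	return 0;
}

The same capped-transfer shape appears in btrfs_delayed_refs_rsv_refill() just above, which tops the reservation up by at most one item's worth of metadata space per call.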
@@ -5688,7 +5855,6 @@ static int btrfs_inode_rsv_refill(struct btrfs_inode *inode, static void btrfs_inode_rsv_release(struct btrfs_inode *inode, bool qgroup_free) { struct btrfs_fs_info *fs_info = inode->root->fs_info; - struct btrfs_block_rsv *global_rsv = &fs_info->global_block_rsv; struct btrfs_block_rsv *block_rsv = &inode->block_rsv; u64 released = 0; u64 qgroup_to_release = 0; @@ -5698,8 +5864,8 @@ static void btrfs_inode_rsv_release(struct btrfs_inode *inode, bool qgroup_free) * are releasing 0 bytes, and then we'll just get the reservation over * the size free'd. */ - released = block_rsv_release_bytes(fs_info, block_rsv, global_rsv, 0, - &qgroup_to_release); + released = __btrfs_block_rsv_release(fs_info, block_rsv, 0, + &qgroup_to_release); if (released > 0) trace_btrfs_space_reservation(fs_info, "delalloc", btrfs_ino(inode), released, 0); @@ -5710,16 +5876,26 @@ static void btrfs_inode_rsv_release(struct btrfs_inode *inode, bool qgroup_free) qgroup_to_release); } -void btrfs_block_rsv_release(struct btrfs_fs_info *fs_info, - struct btrfs_block_rsv *block_rsv, - u64 num_bytes) +/** + * btrfs_delayed_refs_rsv_release - release a ref head's reservation. + * @fs_info - the fs_info for our fs. + * @nr - the number of items to drop. + * + * This drops the delayed ref head's count from the delayed refs rsv and frees + * any excess reservation we had. + */ +void btrfs_delayed_refs_rsv_release(struct btrfs_fs_info *fs_info, int nr) { + struct btrfs_block_rsv *block_rsv = &fs_info->delayed_refs_rsv; struct btrfs_block_rsv *global_rsv = &fs_info->global_block_rsv; + u64 num_bytes = btrfs_calc_trans_metadata_size(fs_info, nr); + u64 released = 0; - if (global_rsv == block_rsv || - block_rsv->space_info != global_rsv->space_info) - global_rsv = NULL; - block_rsv_release_bytes(fs_info, block_rsv, global_rsv, num_bytes, NULL); + released = block_rsv_release_bytes(fs_info, block_rsv, global_rsv, + num_bytes, NULL); + if (released) + trace_btrfs_space_reservation(fs_info, "delayed_refs_rsv", + 0, released, 0); } static void update_global_block_rsv(struct btrfs_fs_info *fs_info) @@ -5750,14 +5926,14 @@ static void update_global_block_rsv(struct btrfs_fs_info *fs_info) num_bytes = min(num_bytes, block_rsv->size - block_rsv->reserved); block_rsv->reserved += num_bytes; - sinfo->bytes_may_use += num_bytes; + update_bytes_may_use(sinfo, num_bytes); trace_btrfs_space_reservation(fs_info, "space_info", sinfo->flags, num_bytes, 1); } } else if (block_rsv->reserved > block_rsv->size) { num_bytes = block_rsv->reserved - block_rsv->size; - sinfo->bytes_may_use -= num_bytes; + update_bytes_may_use(sinfo, -num_bytes); trace_btrfs_space_reservation(fs_info, "space_info", sinfo->flags, num_bytes, 0); block_rsv->reserved = block_rsv->size; @@ -5784,9 +5960,10 @@ static void init_global_block_rsv(struct btrfs_fs_info *fs_info) fs_info->trans_block_rsv.space_info = space_info; fs_info->empty_block_rsv.space_info = space_info; fs_info->delayed_block_rsv.space_info = space_info; + fs_info->delayed_refs_rsv.space_info = space_info; - fs_info->extent_root->block_rsv = &fs_info->global_block_rsv; - fs_info->csum_root->block_rsv = &fs_info->global_block_rsv; + fs_info->extent_root->block_rsv = &fs_info->delayed_refs_rsv; + fs_info->csum_root->block_rsv = &fs_info->delayed_refs_rsv; fs_info->dev_root->block_rsv = &fs_info->global_block_rsv; fs_info->tree_root->block_rsv = &fs_info->global_block_rsv; if (fs_info->quota_root) @@ -5806,8 +5983,34 @@ static void release_global_block_rsv(struct btrfs_fs_info *fs_info) 
WARN_ON(fs_info->chunk_block_rsv.reserved > 0); WARN_ON(fs_info->delayed_block_rsv.size > 0); WARN_ON(fs_info->delayed_block_rsv.reserved > 0); + WARN_ON(fs_info->delayed_refs_rsv.reserved > 0); + WARN_ON(fs_info->delayed_refs_rsv.size > 0); } +/* + * btrfs_update_delayed_refs_rsv - adjust the size of the delayed refs rsv + * @trans - the trans that may have generated delayed refs + * + * This is to be called anytime we may have adjusted trans->delayed_ref_updates, + * it'll calculate the additional size and add it to the delayed_refs_rsv. + */ +void btrfs_update_delayed_refs_rsv(struct btrfs_trans_handle *trans) +{ + struct btrfs_fs_info *fs_info = trans->fs_info; + struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_refs_rsv; + u64 num_bytes; + + if (!trans->delayed_ref_updates) + return; + + num_bytes = btrfs_calc_trans_metadata_size(fs_info, + trans->delayed_ref_updates); + spin_lock(&delayed_rsv->lock); + delayed_rsv->size += num_bytes; + delayed_rsv->full = 0; + spin_unlock(&delayed_rsv->lock); + trans->delayed_ref_updates = 0; +} /* * To be called after all the new block groups attached to the transaction @@ -6100,6 +6303,7 @@ static int update_block_group(struct btrfs_trans_handle *trans, u64 old_val; u64 byte_in_group; int factor; + int ret = 0; /* block accounting for super block */ spin_lock(&info->delalloc_root_lock); @@ -6113,8 +6317,10 @@ static int update_block_group(struct btrfs_trans_handle *trans, while (total) { cache = btrfs_lookup_block_group(info, bytenr); - if (!cache) - return -ENOENT; + if (!cache) { + ret = -ENOENT; + break; + } factor = btrfs_bg_type_to_factor(cache->flags); /* @@ -6151,7 +6357,7 @@ static int update_block_group(struct btrfs_trans_handle *trans, old_val -= num_bytes; btrfs_set_block_group_used(&cache->item, old_val); cache->pinned += num_bytes; - cache->space_info->bytes_pinned += num_bytes; + update_bytes_pinned(cache->space_info, num_bytes); cache->space_info->bytes_used -= num_bytes; cache->space_info->disk_used -= num_bytes * factor; spin_unlock(&cache->lock); @@ -6173,6 +6379,7 @@ static int update_block_group(struct btrfs_trans_handle *trans, list_add_tail(&cache->dirty_list, &trans->transaction->dirty_bgs); trans->transaction->num_dirty_bgs++; + trans->delayed_ref_updates++; btrfs_get_block_group(cache); } spin_unlock(&trans->transaction->dirty_bgs_lock); @@ -6190,7 +6397,10 @@ static int update_block_group(struct btrfs_trans_handle *trans, total -= num_bytes; bytenr += num_bytes; } - return 0; + + /* Modified block groups are accounted for in the delayed_refs_rsv. 
*/ + btrfs_update_delayed_refs_rsv(trans); + return ret; } static u64 first_logical_byte(struct btrfs_fs_info *fs_info, u64 search_start) @@ -6222,7 +6432,7 @@ static int pin_down_extent(struct btrfs_fs_info *fs_info, spin_lock(&cache->space_info->lock); spin_lock(&cache->lock); cache->pinned += num_bytes; - cache->space_info->bytes_pinned += num_bytes; + update_bytes_pinned(cache->space_info, num_bytes); if (reserved) { cache->reserved -= num_bytes; cache->space_info->bytes_reserved -= num_bytes; @@ -6431,7 +6641,7 @@ static int btrfs_add_reserved_bytes(struct btrfs_block_group_cache *cache, } else { cache->reserved += num_bytes; space_info->bytes_reserved += num_bytes; - space_info->bytes_may_use -= ram_bytes; + update_bytes_may_use(space_info, -ram_bytes); if (delalloc) cache->delalloc_bytes += num_bytes; } @@ -6587,7 +6797,7 @@ static int unpin_extent_range(struct btrfs_fs_info *fs_info, spin_lock(&space_info->lock); spin_lock(&cache->lock); cache->pinned -= len; - space_info->bytes_pinned -= len; + update_bytes_pinned(space_info, -len); trace_btrfs_space_reservation(fs_info, "pinned", space_info->flags, len, 0); @@ -6608,7 +6818,7 @@ static int unpin_extent_range(struct btrfs_fs_info *fs_info, to_add = min(len, global_rsv->size - global_rsv->reserved); global_rsv->reserved += to_add; - space_info->bytes_may_use += to_add; + update_bytes_may_use(space_info, to_add); if (global_rsv->reserved >= global_rsv->size) global_rsv->full = 1; trace_btrfs_space_reservation(fs_info, @@ -6647,9 +6857,11 @@ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans) unpin = &fs_info->freed_extents[0]; while (!trans->aborted) { + struct extent_state *cached_state = NULL; + mutex_lock(&fs_info->unused_bg_unpin_mutex); ret = find_first_extent_bit(unpin, 0, &start, &end, - EXTENT_DIRTY, NULL); + EXTENT_DIRTY, &cached_state); if (ret) { mutex_unlock(&fs_info->unused_bg_unpin_mutex); break; @@ -6659,9 +6871,10 @@ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans) ret = btrfs_discard_extent(fs_info, start, end + 1 - start, NULL); - clear_extent_dirty(unpin, start, end); + clear_extent_dirty(unpin, start, end, &cached_state); unpin_extent_range(fs_info, start, end, true); mutex_unlock(&fs_info->unused_bg_unpin_mutex); + free_extent_state(cached_state); cond_resched(); } @@ -6955,12 +7168,8 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans, if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root)) goto out; - if (head->extent_op) { - if (!head->must_insert_reserved) - goto out; - btrfs_free_delayed_extent_op(head->extent_op); - head->extent_op = NULL; - } + if (cleanup_extent_op(head) != NULL) + goto out; /* * waiting for the lock here would deadlock. If someone else has it @@ -6969,22 +7178,9 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans, if (!mutex_trylock(&head->mutex)) goto out; - /* - * at this point we have a head with no other entries. Go - * ahead and process it. - */ - rb_erase_cached(&head->href_node, &delayed_refs->href_root); - RB_CLEAR_NODE(&head->href_node); - atomic_dec(&delayed_refs->num_entries); - - /* - * we don't take a ref on the node because we're removing it from the - * tree, so we just steal the ref the tree was holding. 
- */ - delayed_refs->num_heads--; - if (head->processing == 0) - delayed_refs->num_heads_ready--; + btrfs_delete_ref_head(delayed_refs, head); head->processing = 0; + spin_unlock(&head->lock); spin_unlock(&delayed_refs->lock); @@ -6992,6 +7188,7 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans, if (head->must_insert_reserved) ret = 1; + cleanup_ref_head_accounting(trans, head); mutex_unlock(&head->mutex); btrfs_put_delayed_ref_head(head); return ret; @@ -7238,6 +7435,345 @@ btrfs_release_block_group(struct btrfs_block_group_cache *cache, btrfs_put_block_group(cache); } +/* + * Structure used internally for find_free_extent() function. Wraps needed + * parameters. + */ +struct find_free_extent_ctl { + /* Basic allocation info */ + u64 ram_bytes; + u64 num_bytes; + u64 empty_size; + u64 flags; + int delalloc; + + /* Where to start the search inside the bg */ + u64 search_start; + + /* For clustered allocation */ + u64 empty_cluster; + + bool have_caching_bg; + bool orig_have_caching_bg; + + /* RAID index, converted from flags */ + int index; + + /* + * Current loop number, check find_free_extent_update_loop() for details + */ + int loop; + + /* + * Whether we're refilling a cluster, if true we need to re-search + * current block group but don't try to refill the cluster again. + */ + bool retry_clustered; + + /* + * Whether we're updating free space cache, if true we need to re-search + * current block group but don't try updating free space cache again. + */ + bool retry_unclustered; + + /* If current block group is cached */ + int cached; + + /* Max contiguous hole found */ + u64 max_extent_size; + + /* Total free space from free space cache, not always contiguous */ + u64 total_free_space; + + /* Found result */ + u64 found_offset; +}; + + +/* + * Helper function for find_free_extent(). + * + * Return -ENOENT to inform caller that we need fallback to unclustered mode. + * Return -EAGAIN to inform caller that we need to re-search this block group + * Return >0 to inform caller that we find nothing + * Return 0 means we have found a location and set ffe_ctl->found_offset. + */ +static int find_free_extent_clustered(struct btrfs_block_group_cache *bg, + struct btrfs_free_cluster *last_ptr, + struct find_free_extent_ctl *ffe_ctl, + struct btrfs_block_group_cache **cluster_bg_ret) +{ + struct btrfs_fs_info *fs_info = bg->fs_info; + struct btrfs_block_group_cache *cluster_bg; + u64 aligned_cluster; + u64 offset; + int ret; + + cluster_bg = btrfs_lock_cluster(bg, last_ptr, ffe_ctl->delalloc); + if (!cluster_bg) + goto refill_cluster; + if (cluster_bg != bg && (cluster_bg->ro || + !block_group_bits(cluster_bg, ffe_ctl->flags))) + goto release_cluster; + + offset = btrfs_alloc_from_cluster(cluster_bg, last_ptr, + ffe_ctl->num_bytes, cluster_bg->key.objectid, + &ffe_ctl->max_extent_size); + if (offset) { + /* We have a block, we're done */ + spin_unlock(&last_ptr->refill_lock); + trace_btrfs_reserve_extent_cluster(cluster_bg, + ffe_ctl->search_start, ffe_ctl->num_bytes); + *cluster_bg_ret = cluster_bg; + ffe_ctl->found_offset = offset; + return 0; + } + WARN_ON(last_ptr->block_group != cluster_bg); + +release_cluster: + /* + * If we are on LOOP_NO_EMPTY_SIZE, we can't set up a new clusters, so + * lets just skip it and let the allocator find whatever block it can + * find. If we reach this point, we will have tried the cluster + * allocator plenty of times and not have found anything, so we are + * likely way too fragmented for the clustering stuff to find anything. 
+ * + * However, if the cluster is taken from the current block group, + * release the cluster first, so that we stand a better chance of + * succeeding in the unclustered allocation. + */ + if (ffe_ctl->loop >= LOOP_NO_EMPTY_SIZE && cluster_bg != bg) { + spin_unlock(&last_ptr->refill_lock); + btrfs_release_block_group(cluster_bg, ffe_ctl->delalloc); + return -ENOENT; + } + + /* This cluster didn't work out, free it and start over */ + btrfs_return_cluster_to_free_space(NULL, last_ptr); + + if (cluster_bg != bg) + btrfs_release_block_group(cluster_bg, ffe_ctl->delalloc); + +refill_cluster: + if (ffe_ctl->loop >= LOOP_NO_EMPTY_SIZE) { + spin_unlock(&last_ptr->refill_lock); + return -ENOENT; + } + + aligned_cluster = max_t(u64, + ffe_ctl->empty_cluster + ffe_ctl->empty_size, + bg->full_stripe_len); + ret = btrfs_find_space_cluster(fs_info, bg, last_ptr, + ffe_ctl->search_start, ffe_ctl->num_bytes, + aligned_cluster); + if (ret == 0) { + /* Now pull our allocation out of this cluster */ + offset = btrfs_alloc_from_cluster(bg, last_ptr, + ffe_ctl->num_bytes, ffe_ctl->search_start, + &ffe_ctl->max_extent_size); + if (offset) { + /* We found one, proceed */ + spin_unlock(&last_ptr->refill_lock); + trace_btrfs_reserve_extent_cluster(bg, + ffe_ctl->search_start, + ffe_ctl->num_bytes); + ffe_ctl->found_offset = offset; + return 0; + } + } else if (!ffe_ctl->cached && ffe_ctl->loop > LOOP_CACHING_NOWAIT && + !ffe_ctl->retry_clustered) { + spin_unlock(&last_ptr->refill_lock); + + ffe_ctl->retry_clustered = true; + wait_block_group_cache_progress(bg, ffe_ctl->num_bytes + + ffe_ctl->empty_cluster + ffe_ctl->empty_size); + return -EAGAIN; + } + /* + * At this point we either didn't find a cluster or we weren't able to + * allocate a block from our cluster. Free the cluster we've been + * trying to use, and go to the next block group. + */ + btrfs_return_cluster_to_free_space(NULL, last_ptr); + spin_unlock(&last_ptr->refill_lock); + return 1; +} + +/* + * Return >0 to inform caller that we find nothing + * Return 0 when we found an free extent and set ffe_ctrl->found_offset + * Return -EAGAIN to inform caller that we need to re-search this block group + */ +static int find_free_extent_unclustered(struct btrfs_block_group_cache *bg, + struct btrfs_free_cluster *last_ptr, + struct find_free_extent_ctl *ffe_ctl) +{ + u64 offset; + + /* + * We are doing an unclustered allocation, set the fragmented flag so + * we don't bother trying to setup a cluster again until we get more + * space. + */ + if (unlikely(last_ptr)) { + spin_lock(&last_ptr->lock); + last_ptr->fragmented = 1; + spin_unlock(&last_ptr->lock); + } + if (ffe_ctl->cached) { + struct btrfs_free_space_ctl *free_space_ctl; + + free_space_ctl = bg->free_space_ctl; + spin_lock(&free_space_ctl->tree_lock); + if (free_space_ctl->free_space < + ffe_ctl->num_bytes + ffe_ctl->empty_cluster + + ffe_ctl->empty_size) { + ffe_ctl->total_free_space = max_t(u64, + ffe_ctl->total_free_space, + free_space_ctl->free_space); + spin_unlock(&free_space_ctl->tree_lock); + return 1; + } + spin_unlock(&free_space_ctl->tree_lock); + } + + offset = btrfs_find_space_for_alloc(bg, ffe_ctl->search_start, + ffe_ctl->num_bytes, ffe_ctl->empty_size, + &ffe_ctl->max_extent_size); + + /* + * If we didn't find a chunk, and we haven't failed on this block group + * before, and this block group is in the middle of caching and we are + * ok with waiting, then go ahead and wait for progress to be made, and + * set @retry_unclustered to true. 
+ * + * If @retry_unclustered is true then we've already waited on this + * block group once and should move on to the next block group. + */ + if (!offset && !ffe_ctl->retry_unclustered && !ffe_ctl->cached && + ffe_ctl->loop > LOOP_CACHING_NOWAIT) { + wait_block_group_cache_progress(bg, ffe_ctl->num_bytes + + ffe_ctl->empty_size); + ffe_ctl->retry_unclustered = true; + return -EAGAIN; + } else if (!offset) { + return 1; + } + ffe_ctl->found_offset = offset; + return 0; +} + +/* + * Return >0 means caller needs to re-search for free extent + * Return 0 means we have the needed free extent. + * Return <0 means we failed to locate any free extent. + */ +static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info, + struct btrfs_free_cluster *last_ptr, + struct btrfs_key *ins, + struct find_free_extent_ctl *ffe_ctl, + int full_search, bool use_cluster) +{ + struct btrfs_root *root = fs_info->extent_root; + int ret; + + if ((ffe_ctl->loop == LOOP_CACHING_NOWAIT) && + ffe_ctl->have_caching_bg && !ffe_ctl->orig_have_caching_bg) + ffe_ctl->orig_have_caching_bg = true; + + if (!ins->objectid && ffe_ctl->loop >= LOOP_CACHING_WAIT && + ffe_ctl->have_caching_bg) + return 1; + + if (!ins->objectid && ++(ffe_ctl->index) < BTRFS_NR_RAID_TYPES) + return 1; + + if (ins->objectid) { + if (!use_cluster && last_ptr) { + spin_lock(&last_ptr->lock); + last_ptr->window_start = ins->objectid; + spin_unlock(&last_ptr->lock); + } + return 0; + } + + /* + * LOOP_CACHING_NOWAIT, search partially cached block groups, kicking + * caching kthreads as we move along + * LOOP_CACHING_WAIT, search everything, and wait if our bg is caching + * LOOP_ALLOC_CHUNK, force a chunk allocation and try again + * LOOP_NO_EMPTY_SIZE, set empty_size and empty_cluster to 0 and try + * again + */ + if (ffe_ctl->loop < LOOP_NO_EMPTY_SIZE) { + ffe_ctl->index = 0; + if (ffe_ctl->loop == LOOP_CACHING_NOWAIT) { + /* + * We want to skip the LOOP_CACHING_WAIT step if we + * don't have any uncached bgs and we've already done a + * full search through. + */ + if (ffe_ctl->orig_have_caching_bg || !full_search) + ffe_ctl->loop = LOOP_CACHING_WAIT; + else + ffe_ctl->loop = LOOP_ALLOC_CHUNK; + } else { + ffe_ctl->loop++; + } + + if (ffe_ctl->loop == LOOP_ALLOC_CHUNK) { + struct btrfs_trans_handle *trans; + int exist = 0; + + trans = current->journal_info; + if (trans) + exist = 1; + else + trans = btrfs_join_transaction(root); + + if (IS_ERR(trans)) { + ret = PTR_ERR(trans); + return ret; + } + + ret = do_chunk_alloc(trans, ffe_ctl->flags, + CHUNK_ALLOC_FORCE); + + /* + * If we can't allocate a new chunk we've already looped + * through at least once, move on to the NO_EMPTY_SIZE + * case. + */ + if (ret == -ENOSPC) + ffe_ctl->loop = LOOP_NO_EMPTY_SIZE; + + /* Do not bail out on ENOSPC since we can do more. */ + if (ret < 0 && ret != -ENOSPC) + btrfs_abort_transaction(trans, ret); + else + ret = 0; + if (!exist) + btrfs_end_transaction(trans); + if (ret) + return ret; + } + + if (ffe_ctl->loop == LOOP_NO_EMPTY_SIZE) { + /* + * Don't loop again if we already have no empty_size and + * no empty_cluster. + */ + if (ffe_ctl->empty_size == 0 && + ffe_ctl->empty_cluster == 0) + return -ENOSPC; + ffe_ctl->empty_size = 0; + ffe_ctl->empty_cluster = 0; + } + return 1; + } + return -ENOSPC; +} + /* * walks the btree of allocated extents and find a hole of a given size. 
* The key ins is changed to record the hole: @@ -7248,6 +7784,20 @@ btrfs_release_block_group(struct btrfs_block_group_cache *cache, * * If there is no suitable free space, we will record the max size of * the free space extent currently. + * + * The overall logic and call chain: + * + * find_free_extent() + * |- Iterate through all block groups + * | |- Get a valid block group + * | |- Try to do clustered allocation in that block group + * | |- Try to do unclustered allocation in that block group + * | |- Check if the result is valid + * | | |- If valid, then exit + * | |- Jump to next block group + * | + * |- Push harder to find free extents + * |- If not found, re-iterate all block groups */ static noinline int find_free_extent(struct btrfs_fs_info *fs_info, u64 ram_bytes, u64 num_bytes, u64 empty_size, @@ -7255,24 +7805,28 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info, u64 flags, int delalloc) { int ret = 0; - struct btrfs_root *root = fs_info->extent_root; struct btrfs_free_cluster *last_ptr = NULL; struct btrfs_block_group_cache *block_group = NULL; - u64 search_start = 0; - u64 max_extent_size = 0; - u64 max_free_space = 0; - u64 empty_cluster = 0; + struct find_free_extent_ctl ffe_ctl = {0}; struct btrfs_space_info *space_info; - int loop = 0; - int index = btrfs_bg_flags_to_raid_index(flags); - bool failed_cluster_refill = false; - bool failed_alloc = false; bool use_cluster = true; - bool have_caching_bg = false; - bool orig_have_caching_bg = false; bool full_search = false; WARN_ON(num_bytes < fs_info->sectorsize); + + ffe_ctl.ram_bytes = ram_bytes; + ffe_ctl.num_bytes = num_bytes; + ffe_ctl.empty_size = empty_size; + ffe_ctl.flags = flags; + ffe_ctl.search_start = 0; + ffe_ctl.retry_clustered = false; + ffe_ctl.retry_unclustered = false; + ffe_ctl.delalloc = delalloc; + ffe_ctl.index = btrfs_bg_flags_to_raid_index(flags); + ffe_ctl.have_caching_bg = false; + ffe_ctl.orig_have_caching_bg = false; + ffe_ctl.found_offset = 0; + ins->type = BTRFS_EXTENT_ITEM_KEY; ins->objectid = 0; ins->offset = 0; @@ -7308,7 +7862,8 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info, spin_unlock(&space_info->lock); } - last_ptr = fetch_cluster_info(fs_info, space_info, &empty_cluster); + last_ptr = fetch_cluster_info(fs_info, space_info, + &ffe_ctl.empty_cluster); if (last_ptr) { spin_lock(&last_ptr->lock); if (last_ptr->block_group) @@ -7325,10 +7880,12 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info, spin_unlock(&last_ptr->lock); } - search_start = max(search_start, first_logical_byte(fs_info, 0)); - search_start = max(search_start, hint_byte); - if (search_start == hint_byte) { - block_group = btrfs_lookup_block_group(fs_info, search_start); + ffe_ctl.search_start = max(ffe_ctl.search_start, + first_logical_byte(fs_info, 0)); + ffe_ctl.search_start = max(ffe_ctl.search_start, hint_byte); + if (ffe_ctl.search_start == hint_byte) { + block_group = btrfs_lookup_block_group(fs_info, + ffe_ctl.search_start); /* * we don't want to use the block group if it doesn't match our * allocation bits, or if its not cached. 
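The refactoring above moves the clustered and unclustered allocation paths into helpers that communicate through return codes: 0 means a location was found, a positive value means move on to the next block group, -EAGAIN means re-search the current block group, and -ENOENT (from the clustered helper only) means fall back to unclustered allocation. Here is a compact sketch of that dispatch loop, with canned helpers standing in for the real allocators; everything in it is hypothetical:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Canned results for the demo: clustered allocation fails over to
 * unclustered, which asks for one re-search before succeeding.
 */
static int alloc_clustered(int bg, int attempt)
{
	(void)bg;
	return attempt < 2 ? -ENOENT : 1;
}

static int alloc_unclustered(int bg, int attempt)
{
	(void)bg;
	return attempt == 0 ? -EAGAIN : 0;
}

int main(void)
{
	bool found = false;
	int bg;

	for (bg = 0; bg < 3 && !found; bg++) {
		int attempt = 0;

		while (1) {
			int ret = alloc_clustered(bg, attempt);

			if (ret == 0) { /* found: go to the checks: label */
				found = true;
				break;
			}
			if (ret == -EAGAIN) { /* have_block_group: again */
				attempt++;
				continue;
			}
			if (ret > 0) /* loop: next block group */
				break;

			/* -ENOENT: fall back to unclustered allocation */
			ret = alloc_unclustered(bg, attempt);
			if (ret == 0) {
				found = true;
				break;
			}
			if (ret == -EAGAIN) {
				attempt++;
				continue;
			}
			break; /* ret > 0: next block group */
		}
	}
	printf("found=%d in bg=%d\n", found, bg - 1);
	return 0;
}

Once every group in the current raid index has been tried, find_free_extent_update_loop() decides whether to bump the loop level (caching wait, chunk allocation, no empty size) and search again, which is the "Push harder to find free extents" step in the call-chain comment below.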
@@ -7350,7 +7907,7 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info, btrfs_put_block_group(block_group); up_read(&space_info->groups_sem); } else { - index = btrfs_bg_flags_to_raid_index( + ffe_ctl.index = btrfs_bg_flags_to_raid_index( block_group->flags); btrfs_lock_block_group(block_group, delalloc); goto have_block_group; @@ -7360,21 +7917,19 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info, } } search: - have_caching_bg = false; - if (index == 0 || index == btrfs_bg_flags_to_raid_index(flags)) + ffe_ctl.have_caching_bg = false; + if (ffe_ctl.index == btrfs_bg_flags_to_raid_index(flags) || + ffe_ctl.index == 0) full_search = true; down_read(&space_info->groups_sem); - list_for_each_entry(block_group, &space_info->block_groups[index], - list) { - u64 offset; - int cached; - + list_for_each_entry(block_group, + &space_info->block_groups[ffe_ctl.index], list) { /* If the block group is read-only, we can skip it entirely. */ if (unlikely(block_group->ro)) continue; btrfs_grab_block_group(block_group, delalloc); - search_start = block_group->key.objectid; + ffe_ctl.search_start = block_group->key.objectid; /* * this can happen if we end up cycling through all the @@ -7398,9 +7953,9 @@ search: } have_block_group: - cached = block_group_cache_done(block_group); - if (unlikely(!cached)) { - have_caching_bg = true; + ffe_ctl.cached = block_group_cache_done(block_group); + if (unlikely(!ffe_ctl.cached)) { + ffe_ctl.have_caching_bg = true; ret = cache_block_group(block_group, 0); BUG_ON(ret < 0); ret = 0; @@ -7414,322 +7969,92 @@ have_block_group: * lets look there */ if (last_ptr && use_cluster) { - struct btrfs_block_group_cache *used_block_group; - unsigned long aligned_cluster; - /* - * the refill lock keeps out other - * people trying to start a new cluster - */ - used_block_group = btrfs_lock_cluster(block_group, - last_ptr, - delalloc); - if (!used_block_group) - goto refill_cluster; - - if (used_block_group != block_group && - (used_block_group->ro || - !block_group_bits(used_block_group, flags))) - goto release_cluster; - - offset = btrfs_alloc_from_cluster(used_block_group, - last_ptr, - num_bytes, - used_block_group->key.objectid, - &max_extent_size); - if (offset) { - /* we have a block, we're done */ - spin_unlock(&last_ptr->refill_lock); - trace_btrfs_reserve_extent_cluster( - used_block_group, - search_start, num_bytes); - if (used_block_group != block_group) { - btrfs_release_block_group(block_group, - delalloc); - block_group = used_block_group; - } - goto checks; - } - - WARN_ON(last_ptr->block_group != used_block_group); -release_cluster: - /* If we are on LOOP_NO_EMPTY_SIZE, we can't - * set up a new clusters, so lets just skip it - * and let the allocator find whatever block - * it can find. If we reach this point, we - * will have tried the cluster allocator - * plenty of times and not have found - * anything, so we are likely way too - * fragmented for the clustering stuff to find - * anything. - * - * However, if the cluster is taken from the - * current block group, release the cluster - * first, so that we stand a better chance of - * succeeding in the unclustered - * allocation. 
*/ - if (loop >= LOOP_NO_EMPTY_SIZE && - used_block_group != block_group) { - spin_unlock(&last_ptr->refill_lock); - btrfs_release_block_group(used_block_group, - delalloc); - goto unclustered_alloc; - } + struct btrfs_block_group_cache *cluster_bg = NULL; - /* - * this cluster didn't work out, free it and - * start over - */ - btrfs_return_cluster_to_free_space(NULL, last_ptr); - - if (used_block_group != block_group) - btrfs_release_block_group(used_block_group, - delalloc); -refill_cluster: - if (loop >= LOOP_NO_EMPTY_SIZE) { - spin_unlock(&last_ptr->refill_lock); - goto unclustered_alloc; - } - - aligned_cluster = max_t(unsigned long, - empty_cluster + empty_size, - block_group->full_stripe_len); + ret = find_free_extent_clustered(block_group, last_ptr, + &ffe_ctl, &cluster_bg); - /* allocate a cluster in this block group */ - ret = btrfs_find_space_cluster(fs_info, block_group, - last_ptr, search_start, - num_bytes, - aligned_cluster); if (ret == 0) { - /* - * now pull our allocation out of this - * cluster - */ - offset = btrfs_alloc_from_cluster(block_group, - last_ptr, - num_bytes, - search_start, - &max_extent_size); - if (offset) { - /* we found one, proceed */ - spin_unlock(&last_ptr->refill_lock); - trace_btrfs_reserve_extent_cluster( - block_group, search_start, - num_bytes); - goto checks; + if (cluster_bg && cluster_bg != block_group) { + btrfs_release_block_group(block_group, + delalloc); + block_group = cluster_bg; } - } else if (!cached && loop > LOOP_CACHING_NOWAIT - && !failed_cluster_refill) { - spin_unlock(&last_ptr->refill_lock); - - failed_cluster_refill = true; - wait_block_group_cache_progress(block_group, - num_bytes + empty_cluster + empty_size); + goto checks; + } else if (ret == -EAGAIN) { goto have_block_group; - } - - /* - * at this point we either didn't find a cluster - * or we weren't able to allocate a block from our - * cluster. Free the cluster we've been trying - * to use, and go to the next block group - */ - btrfs_return_cluster_to_free_space(NULL, last_ptr); - spin_unlock(&last_ptr->refill_lock); - goto loop; - } - -unclustered_alloc: - /* - * We are doing an unclustered alloc, set the fragmented flag so - * we don't bother trying to setup a cluster again until we get - * more space. - */ - if (unlikely(last_ptr)) { - spin_lock(&last_ptr->lock); - last_ptr->fragmented = 1; - spin_unlock(&last_ptr->lock); - } - if (cached) { - struct btrfs_free_space_ctl *ctl = - block_group->free_space_ctl; - - spin_lock(&ctl->tree_lock); - if (ctl->free_space < - num_bytes + empty_cluster + empty_size) { - max_free_space = max(max_free_space, - ctl->free_space); - spin_unlock(&ctl->tree_lock); + } else if (ret > 0) { goto loop; } - spin_unlock(&ctl->tree_lock); + /* ret == -ENOENT case falls through */ } - offset = btrfs_find_space_for_alloc(block_group, search_start, - num_bytes, empty_size, - &max_extent_size); - /* - * If we didn't find a chunk, and we haven't failed on this - * block group before, and this block group is in the middle of - * caching and we are ok with waiting, then go ahead and wait - * for progress to be made, and set failed_alloc to true. - * - * If failed_alloc is true then we've already waited on this - * block group once and should move on to the next block group. 
- */ - if (!offset && !failed_alloc && !cached && - loop > LOOP_CACHING_NOWAIT) { - wait_block_group_cache_progress(block_group, - num_bytes + empty_size); - failed_alloc = true; + ret = find_free_extent_unclustered(block_group, last_ptr, + &ffe_ctl); + if (ret == -EAGAIN) goto have_block_group; - } else if (!offset) { + else if (ret > 0) goto loop; - } + /* ret == 0 case falls through */ checks: - search_start = round_up(offset, fs_info->stripesize); + ffe_ctl.search_start = round_up(ffe_ctl.found_offset, + fs_info->stripesize); /* move on to the next group */ - if (search_start + num_bytes > + if (ffe_ctl.search_start + num_bytes > block_group->key.objectid + block_group->key.offset) { - btrfs_add_free_space(block_group, offset, num_bytes); + btrfs_add_free_space(block_group, ffe_ctl.found_offset, + num_bytes); goto loop; } - if (offset < search_start) - btrfs_add_free_space(block_group, offset, - search_start - offset); + if (ffe_ctl.found_offset < ffe_ctl.search_start) + btrfs_add_free_space(block_group, ffe_ctl.found_offset, + ffe_ctl.search_start - ffe_ctl.found_offset); ret = btrfs_add_reserved_bytes(block_group, ram_bytes, num_bytes, delalloc); if (ret == -EAGAIN) { - btrfs_add_free_space(block_group, offset, num_bytes); + btrfs_add_free_space(block_group, ffe_ctl.found_offset, + num_bytes); goto loop; } btrfs_inc_block_group_reservations(block_group); /* we are all good, lets return */ - ins->objectid = search_start; + ins->objectid = ffe_ctl.search_start; ins->offset = num_bytes; - trace_btrfs_reserve_extent(block_group, search_start, num_bytes); + trace_btrfs_reserve_extent(block_group, ffe_ctl.search_start, + num_bytes); btrfs_release_block_group(block_group, delalloc); break; loop: - failed_cluster_refill = false; - failed_alloc = false; + ffe_ctl.retry_clustered = false; + ffe_ctl.retry_unclustered = false; BUG_ON(btrfs_bg_flags_to_raid_index(block_group->flags) != - index); + ffe_ctl.index); btrfs_release_block_group(block_group, delalloc); cond_resched(); } up_read(&space_info->groups_sem); - if ((loop == LOOP_CACHING_NOWAIT) && have_caching_bg - && !orig_have_caching_bg) - orig_have_caching_bg = true; - - if (!ins->objectid && loop >= LOOP_CACHING_WAIT && have_caching_bg) - goto search; - - if (!ins->objectid && ++index < BTRFS_NR_RAID_TYPES) + ret = find_free_extent_update_loop(fs_info, last_ptr, ins, &ffe_ctl, + full_search, use_cluster); + if (ret > 0) goto search; - /* - * LOOP_CACHING_NOWAIT, search partially cached block groups, kicking - * caching kthreads as we move along - * LOOP_CACHING_WAIT, search everything, and wait if our bg is caching - * LOOP_ALLOC_CHUNK, force a chunk allocation and try again - * LOOP_NO_EMPTY_SIZE, set empty_size and empty_cluster to 0 and try - * again - */ - if (!ins->objectid && loop < LOOP_NO_EMPTY_SIZE) { - index = 0; - if (loop == LOOP_CACHING_NOWAIT) { - /* - * We want to skip the LOOP_CACHING_WAIT step if we - * don't have any uncached bgs and we've already done a - * full search through. 
- */ - if (orig_have_caching_bg || !full_search) - loop = LOOP_CACHING_WAIT; - else - loop = LOOP_ALLOC_CHUNK; - } else { - loop++; - } - - if (loop == LOOP_ALLOC_CHUNK) { - struct btrfs_trans_handle *trans; - int exist = 0; - - trans = current->journal_info; - if (trans) - exist = 1; - else - trans = btrfs_join_transaction(root); - - if (IS_ERR(trans)) { - ret = PTR_ERR(trans); - goto out; - } - - ret = do_chunk_alloc(trans, flags, CHUNK_ALLOC_FORCE); - - /* - * If we can't allocate a new chunk we've already looped - * through at least once, move on to the NO_EMPTY_SIZE - * case. - */ - if (ret == -ENOSPC) - loop = LOOP_NO_EMPTY_SIZE; - - /* - * Do not bail out on ENOSPC since we - * can do more things. - */ - if (ret < 0 && ret != -ENOSPC) - btrfs_abort_transaction(trans, ret); - else - ret = 0; - if (!exist) - btrfs_end_transaction(trans); - if (ret) - goto out; - } - - if (loop == LOOP_NO_EMPTY_SIZE) { - /* - * Don't loop again if we already have no empty_size and - * no empty_cluster. - */ - if (empty_size == 0 && - empty_cluster == 0) { - ret = -ENOSPC; - goto out; - } - empty_size = 0; - empty_cluster = 0; - } - - goto search; - } else if (!ins->objectid) { - ret = -ENOSPC; - } else if (ins->objectid) { - if (!use_cluster && last_ptr) { - spin_lock(&last_ptr->lock); - last_ptr->window_start = ins->objectid; - spin_unlock(&last_ptr->lock); - } - ret = 0; - } -out: if (ret == -ENOSPC) { - if (!max_extent_size) - max_extent_size = max_free_space; + /* + * Use ffe_ctl->total_free_space as fallback if we can't find + * any contiguous hole. + */ + if (!ffe_ctl.max_extent_size) + ffe_ctl.max_extent_size = ffe_ctl.total_free_space; spin_lock(&space_info->lock); - space_info->max_extent_size = max_extent_size; + space_info->max_extent_size = ffe_ctl.max_extent_size; spin_unlock(&space_info->lock); - ins->offset = max_extent_size; + ins->offset = ffe_ctl.max_extent_size; } return ret; } @@ -8169,13 +8494,13 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root, btrfs_set_header_generation(buf, trans->transid); btrfs_set_header_backref_rev(buf, BTRFS_MIXED_BACKREF_REV); btrfs_set_header_owner(buf, owner); - write_extent_buffer_fsid(buf, fs_info->fsid); + write_extent_buffer_fsid(buf, fs_info->fs_devices->metadata_uuid); write_extent_buffer_chunk_tree_uuid(buf, fs_info->chunk_tree_uuid); if (root->root_key.objectid == BTRFS_TREE_LOG_OBJECTID) { buf->log_index = root->log_transid % 2; /* * we allow two log transactions at a time, use different - * EXENT bit to differentiate dirty pages. + * EXTENT bit to differentiate dirty pages. */ if (buf->log_index == 0) set_extent_dirty(&root->dirty_log_pages, buf->start, @@ -8221,7 +8546,12 @@ again: goto again; } - if (btrfs_test_opt(fs_info, ENOSPC_DEBUG)) { + /* + * The global reserve still exists to save us from ourselves, so don't + * warn_on if we are short on our delayed refs reserve. 
+ */ + if (block_rsv->type != BTRFS_BLOCK_RSV_DELREFS && + btrfs_test_opt(fs_info, ENOSPC_DEBUG)) { static DEFINE_RATELIMIT_STATE(_rs, DEFAULT_RATELIMIT_INTERVAL * 10, /*DEFAULT_RATELIMIT_BURST*/ 1); @@ -8544,7 +8874,6 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans, u64 bytenr; u64 generation; u64 parent; - u32 blocksize; struct btrfs_key key; struct btrfs_key first_key; struct extent_buffer *next; @@ -8569,7 +8898,6 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans, bytenr = btrfs_node_blockptr(path->nodes[level], path->slots[level]); btrfs_node_key_to_cpu(path->nodes[level], &first_key, path->slots[level]); - blocksize = fs_info->nodesize; next = find_extent_buffer(fs_info, bytenr); if (!next) { @@ -8693,7 +9021,7 @@ skip: ret); } } - ret = btrfs_free_extent(trans, root, bytenr, blocksize, + ret = btrfs_free_extent(trans, root, bytenr, fs_info->nodesize, parent, root->root_key.objectid, level - 1, 0); if (ret) @@ -8944,9 +9272,22 @@ int btrfs_drop_snapshot(struct btrfs_root *root, goto out_free; } + err = btrfs_run_delayed_items(trans); + if (err) + goto out_end_trans; + if (block_rsv) trans->block_rsv = block_rsv; + /* + * This will help us catch people modifying the fs tree while we're + * dropping it. It is unsafe to mess with the fs tree while it's being + * dropped as we unlock the root node and parent nodes as we walk down + * the tree, assuming nothing will change. If something does change + * then we'll have stale information and drop references to blocks we've + * already dropped. + */ + set_bit(BTRFS_ROOT_DELETING, &root->state); if (btrfs_disk_key_objectid(&root_item->drop_progress) == 0) { level = btrfs_header_level(root->node); path->nodes[level] = btrfs_lock_root_node(root); @@ -9421,7 +9762,7 @@ void btrfs_dec_block_group_ro(struct btrfs_block_group_cache *cache) } /* - * checks to see if its even possible to relocate this block group. + * Checks to see if it's even possible to relocate this block group. * * @return - -1 if it's not a good idea to relocate this block group, 0 if its * ok to go ahead and try. @@ -10049,7 +10390,7 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info) * check for two cases, either we are full, and therefore * don't need to bother with the caching work since we won't * find any space, or we are empty, and we can just add all - * the space in and be done with it. This saves us _alot_ of + * the space in and be done with it. This saves us _a_lot_ of * time, particularly in the full case. */ if (found_key.offset == btrfs_block_group_used(&cache->item)) { @@ -10154,6 +10495,7 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans) add_block_group_free_space(trans, block_group); /* already aborted the transaction if it failed. 
*/ next: + btrfs_delayed_refs_rsv_release(fs_info, 1); list_del_init(&block_group->bg_list); } btrfs_trans_release_chunk_metadata(trans); @@ -10231,6 +10573,8 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used, link_block_group(cache); list_add_tail(&cache->bg_list, &trans->new_bgs); + trans->delayed_ref_updates++; + btrfs_update_delayed_refs_rsv(trans); set_avail_alloc_bits(fs_info, type); return 0; @@ -10268,6 +10612,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, int factor; struct btrfs_caching_control *caching_ctl = NULL; bool remove_em; + bool remove_rsv = false; block_group = btrfs_lookup_block_group(fs_info, group_start); BUG_ON(!block_group); @@ -10315,7 +10660,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, mutex_lock(&trans->transaction->cache_write_mutex); /* - * make sure our free spache cache IO is done before remove the + * Make sure our free space cache IO is done before removing the * free space inode */ spin_lock(&trans->transaction->dirty_bgs_lock); @@ -10332,6 +10677,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, if (!list_empty(&block_group->dirty_list)) { list_del_init(&block_group->dirty_list); + remove_rsv = true; btrfs_put_block_group(block_group); } spin_unlock(&trans->transaction->dirty_bgs_lock); @@ -10541,6 +10887,8 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, ret = btrfs_del_item(trans, root, path); out: + if (remove_rsv) + btrfs_delayed_refs_rsv_release(fs_info, 1); btrfs_free_path(path); return ret; } @@ -10698,7 +11046,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info) spin_lock(&space_info->lock); spin_lock(&block_group->lock); - space_info->bytes_pinned -= block_group->pinned; + update_bytes_pinned(space_info, -block_group->pinned); space_info->bytes_readonly += block_group->pinned; percpu_counter_add_batch(&space_info->total_bytes_pinned, -block_group->pinned, @@ -10829,7 +11177,7 @@ static int btrfs_trim_free_extents(struct btrfs_device *device, if (!blk_queue_discard(bdev_get_queue(device->bdev))) return 0; - /* Not writeable = nothing to do. */ + /* Not writable = nothing to do. 
*/ if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) return 0; diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index d228f706ff3e..fc126b92ea59 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -89,9 +89,18 @@ void btrfs_leak_debug_check(void) static inline void __btrfs_debug_check_extent_io_range(const char *caller, struct extent_io_tree *tree, u64 start, u64 end) { - if (tree->ops && tree->ops->check_extent_io_range) - tree->ops->check_extent_io_range(tree->private_data, caller, - start, end); + struct inode *inode = tree->private_data; + u64 isize; + + if (!inode || !is_data_inode(inode)) + return; + + isize = i_size_read(inode); + if (end >= PAGE_SIZE && (end % 2) == 0 && end != isize - 1) { + btrfs_debug_rl(BTRFS_I(inode)->root->fs_info, + "%s: ino %llu isize %llu odd range [%llu,%llu]", + caller, btrfs_ino(BTRFS_I(inode)), isize, start, end); + } } #else #define btrfs_leak_debug_add(new, head) do {} while (0) @@ -344,13 +353,6 @@ static inline struct rb_node *tree_search(struct extent_io_tree *tree, return tree_search_for_insert(tree, offset, NULL, NULL); } -static void merge_cb(struct extent_io_tree *tree, struct extent_state *new, - struct extent_state *other) -{ - if (tree->ops && tree->ops->merge_extent_hook) - tree->ops->merge_extent_hook(tree->private_data, new, other); -} - /* * utility function to look for merge candidates inside a given range. * Any extents with matching state are merged together into a single @@ -374,7 +376,10 @@ static void merge_state(struct extent_io_tree *tree, other = rb_entry(other_node, struct extent_state, rb_node); if (other->end == state->start - 1 && other->state == state->state) { - merge_cb(tree, state, other); + if (tree->private_data && + is_data_inode(tree->private_data)) + btrfs_merge_delalloc_extent(tree->private_data, + state, other); state->start = other->start; rb_erase(&other->rb_node, &tree->state); RB_CLEAR_NODE(&other->rb_node); @@ -386,7 +391,10 @@ static void merge_state(struct extent_io_tree *tree, other = rb_entry(other_node, struct extent_state, rb_node); if (other->start == state->end + 1 && other->state == state->state) { - merge_cb(tree, state, other); + if (tree->private_data && + is_data_inode(tree->private_data)) + btrfs_merge_delalloc_extent(tree->private_data, + state, other); state->end = other->end; rb_erase(&other->rb_node, &tree->state); RB_CLEAR_NODE(&other->rb_node); @@ -395,20 +403,6 @@ static void merge_state(struct extent_io_tree *tree, } } -static void set_state_cb(struct extent_io_tree *tree, - struct extent_state *state, unsigned *bits) -{ - if (tree->ops && tree->ops->set_bit_hook) - tree->ops->set_bit_hook(tree->private_data, state, bits); -} - -static void clear_state_cb(struct extent_io_tree *tree, - struct extent_state *state, unsigned *bits) -{ - if (tree->ops && tree->ops->clear_bit_hook) - tree->ops->clear_bit_hook(tree->private_data, state, bits); -} - static void set_state_bits(struct extent_io_tree *tree, struct extent_state *state, unsigned *bits, struct extent_changeset *changeset); @@ -451,13 +445,6 @@ static int insert_state(struct extent_io_tree *tree, return 0; } -static void split_cb(struct extent_io_tree *tree, struct extent_state *orig, - u64 split) -{ - if (tree->ops && tree->ops->split_extent_hook) - tree->ops->split_extent_hook(tree->private_data, orig, split); -} - /* * split a given extent state struct in two, inserting the preallocated * struct 'prealloc' as the newly created second half. 
'split' indicates an @@ -477,7 +464,8 @@ static int split_state(struct extent_io_tree *tree, struct extent_state *orig, { struct rb_node *node; - split_cb(tree, orig, split); + if (tree->private_data && is_data_inode(tree->private_data)) + btrfs_split_delalloc_extent(tree->private_data, orig, split); prealloc->start = orig->start; prealloc->end = split - 1; @@ -504,7 +492,7 @@ static struct extent_state *next_state(struct extent_state *state) /* * utility function to clear some bits in an extent state struct. - * it will optionally wake up any one waiting on this state (wake == 1). + * it will optionally wake up anyone waiting on this state (wake == 1). * * If no bits are set on the state struct after clearing things, the * struct is freed and removed from the tree @@ -523,7 +511,10 @@ static struct extent_state *clear_state_bit(struct extent_io_tree *tree, WARN_ON(range > tree->dirty_bytes); tree->dirty_bytes -= range; } - clear_state_cb(tree, state, bits); + + if (tree->private_data && is_data_inode(tree->private_data)) + btrfs_clear_delalloc_extent(tree->private_data, state, bits); + ret = add_extent_changeset(state, bits_to_clear, changeset, 0); BUG_ON(ret < 0); state->state &= ~bits_to_clear; @@ -800,7 +791,9 @@ static void set_state_bits(struct extent_io_tree *tree, unsigned bits_to_set = *bits & ~EXTENT_CTLBITS; int ret; - set_state_cb(tree, state, bits); + if (tree->private_data && is_data_inode(tree->private_data)) + btrfs_set_delalloc_extent(tree->private_data, state, bits); + if ((bits_to_set & EXTENT_DIRTY) && !(state->state & EXTENT_DIRTY)) { u64 range = state->end - state->start + 1; tree->dirty_bytes += range; @@ -1459,16 +1452,16 @@ out: * find a contiguous range of bytes in the file marked as delalloc, not * more than 'max_bytes'. start and end are used to return the range, * - * 1 is returned if we find something, 0 if nothing was in the tree + * true is returned if we find something, false if nothing was in the tree */ -static noinline u64 find_delalloc_range(struct extent_io_tree *tree, +static noinline bool find_delalloc_range(struct extent_io_tree *tree, u64 *start, u64 *end, u64 max_bytes, struct extent_state **cached_state) { struct rb_node *node; struct extent_state *state; u64 cur_start = *start; - u64 found = 0; + bool found = false; u64 total_bytes = 0; spin_lock(&tree->lock); @@ -1479,8 +1472,7 @@ static noinline u64 find_delalloc_range(struct extent_io_tree *tree, */ node = tree_search(tree, cur_start); if (!node) { - if (!found) - *end = (u64)-1; + *end = (u64)-1; goto out; } @@ -1500,7 +1492,7 @@ static noinline u64 find_delalloc_range(struct extent_io_tree *tree, *cached_state = state; refcount_inc(&state->refs); } - found++; + found = true; *end = state->end; cur_start = state->end + 1; node = rb_next(node); @@ -1558,19 +1550,22 @@ static noinline int lock_delalloc_pages(struct inode *inode, } /* - * find a contiguous range of bytes in the file marked as delalloc, not - * more than 'max_bytes'. start and end are used to return the range, + * Find and lock a contiguous range of bytes in the file marked as delalloc, no + * more than @max_bytes. 
@start and @end are used to return the range, * - * 1 is returned if we find something, 0 if nothing was in the tree + * Return: true if we find something + * false if nothing was in the tree */ -static noinline_for_stack u64 find_lock_delalloc_range(struct inode *inode, +EXPORT_FOR_TESTS +noinline_for_stack bool find_lock_delalloc_range(struct inode *inode, struct extent_io_tree *tree, struct page *locked_page, u64 *start, - u64 *end, u64 max_bytes) + u64 *end) { + u64 max_bytes = BTRFS_MAX_EXTENT_SIZE; u64 delalloc_start; u64 delalloc_end; - u64 found; + bool found; struct extent_state *cached_state = NULL; int ret; int loops = 0; @@ -1585,7 +1580,7 @@ again: *start = delalloc_start; *end = delalloc_end; free_extent_state(cached_state); - return 0; + return false; } /* @@ -1605,6 +1600,7 @@ again: /* step two, lock all the pages after the page that has start */ ret = lock_delalloc_pages(inode, locked_page, delalloc_start, delalloc_end); + ASSERT(!ret || ret == -EAGAIN); if (ret == -EAGAIN) { /* some of the pages are gone, lets avoid looping by * shortening the size of the delalloc range we're searching */ free_extent_state(cached_state); cached_state = NULL; if (!loops) { max_bytes = PAGE_SIZE; loops = 1; goto again; } else { - found = 0; + found = false; goto out_failed; } } - BUG_ON(ret); /* Only valid values are 0 and -EAGAIN */ /* step three, lock the state bits for the whole range */ lock_extent_bits(tree, delalloc_start, delalloc_end, &cached_state); @@ -1643,17 +1638,6 @@ out_failed: return found; } -#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS -u64 btrfs_find_lock_delalloc_range(struct inode *inode, - struct extent_io_tree *tree, - struct page *locked_page, u64 *start, - u64 *end, u64 max_bytes) -{ - return find_lock_delalloc_range(inode, tree, locked_page, start, end, - max_bytes); -} -#endif - static int __process_pages_contig(struct address_space *mapping, struct page *locked_page, pgoff_t start_index, pgoff_t end_index, @@ -2349,13 +2333,11 @@ struct bio *btrfs_create_repair_bio(struct inode *inode, struct bio *failed_bio, } /* - * this is a generic handler for readpage errors (default - * readpage_io_failed_hook). if other copies exist, read those and write back - * good data to the failed position. does not investigate in remapping the - * failed extent elsewhere, hoping the device will be smart enough to do this as - * needed + * This is a generic handler for readpage errors. If other copies exist, read + * those and write back good data to the failed position.
Does not try + * to remap the failed extent elsewhere, hoping the device will be smart + * enough to do this as needed */ - static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset, struct page *page, u64 start, u64 end, int failed_mirror) @@ -2412,14 +2394,9 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset, void end_extent_writepage(struct page *page, int err, u64 start, u64 end) { int uptodate = (err == 0); - struct extent_io_tree *tree; int ret = 0; - tree = &BTRFS_I(page->mapping->host)->io_tree; - - if (tree->ops && tree->ops->writepage_end_io_hook) - tree->ops->writepage_end_io_hook(page, start, end, NULL, - uptodate); + btrfs_writepage_endio_finish_ordered(page, start, end, uptodate); if (!uptodate) { ClearPageUptodate(page); @@ -2522,6 +2499,8 @@ static void end_bio_extent_readpage(struct bio *bio) struct page *page = bvec->bv_page; struct inode *inode = page->mapping->host; struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); + bool data_inode = btrfs_ino(BTRFS_I(inode)) + != BTRFS_BTREE_INODE_OBJECTID; btrfs_debug(fs_info, "end_bio_extent_readpage: bi_sector=%llu, err=%d, mirror=%u", @@ -2551,7 +2530,7 @@ static void end_bio_extent_readpage(struct bio *bio) len = bvec->bv_len; mirror = io_bio->mirror_num; - if (likely(uptodate && tree->ops)) { + if (likely(uptodate)) { ret = tree->ops->readpage_end_io_hook(io_bio, offset, page, start, end, mirror); @@ -2567,38 +2546,37 @@ static void end_bio_extent_readpage(struct bio *bio) if (likely(uptodate)) goto readpage_ok; - if (tree->ops) { - ret = tree->ops->readpage_io_failed_hook(page, mirror); - if (ret == -EAGAIN) { - /* - * Data inode's readpage_io_failed_hook() always - * returns -EAGAIN. - * - * The generic bio_readpage_error handles errors - * the following way: If possible, new read - * requests are created and submitted and will - * end up in end_bio_extent_readpage as well (if - * we're lucky, not in the !uptodate case). In - * that case it returns 0 and we just go on with - * the next page in our bio. If it can't handle - * the error it will return -EIO and we remain - * responsible for that page. - */ - ret = bio_readpage_error(bio, offset, page, - start, end, mirror); - if (ret == 0) { - uptodate = !bio->bi_status; - offset += len; - continue; - } - } + if (data_inode) { /* - * metadata's readpage_io_failed_hook() always returns - * -EIO and fixes nothing. -EIO is also returned if - * data inode error could not be fixed. + * The generic bio_readpage_error handles errors the + * following way: If possible, new read requests are + * created and submitted and will end up in + * end_bio_extent_readpage as well (if we're lucky, + * not in the !uptodate case). In that case it returns + * 0 and we just go on with the next page in our bio. + * If it can't handle the error it will return -EIO and + * we remain responsible for that page.
*/ - ASSERT(ret == -EIO); + ret = bio_readpage_error(bio, offset, page, start, end, + mirror); + if (ret == 0) { + uptodate = !bio->bi_status; + offset += len; + continue; + } + } else { + struct extent_buffer *eb; + + eb = (struct extent_buffer *)page->private; + set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags); + eb->read_mirror = mirror; + atomic_dec(&eb->io_pages); + if (test_and_clear_bit(EXTENT_BUFFER_READAHEAD, + &eb->bflags)) + btree_readahead_hook(eb, -EIO); + + ret = -EIO; } readpage_ok: if (likely(uptodate)) { @@ -2607,7 +2585,7 @@ readpage_ok: unsigned off; /* Zero out the end if this page straddles i_size */ - off = i_size & (PAGE_SIZE-1); + off = offset_in_page(i_size); if (page->index == end_index && off) zero_user_segment(page, off, PAGE_SIZE); SetPageUptodate(page); @@ -2644,8 +2622,7 @@ readpage_ok: if (extent_len) endio_readpage_release_extent(tree, extent_start, extent_len, uptodate); - if (io_bio->end_io) - io_bio->end_io(io_bio, blk_status_to_errno(bio->bi_status)); + btrfs_io_bio_free_csum(io_bio); bio_put(bio); } @@ -2782,8 +2759,8 @@ static int submit_extent_page(unsigned int opf, struct extent_io_tree *tree, else contig = bio_end_sector(bio) == sector; - if (tree->ops && btrfs_merge_bio_hook(page, offset, page_size, - bio, bio_flags)) + ASSERT(tree->ops); + if (btrfs_bio_fits_in_stripe(page, page_size, bio, bio_flags)) can_merge = false; if (prev_bio_flags != bio_flags || !contig || !can_merge || @@ -2911,7 +2888,7 @@ static int __do_readpage(struct extent_io_tree *tree, if (page->index == last_byte >> PAGE_SHIFT) { char *userpage; - size_t zero_offset = last_byte & (PAGE_SIZE - 1); + size_t zero_offset = offset_in_page(last_byte); if (zero_offset) { iosize = PAGE_SIZE - zero_offset; @@ -3205,7 +3182,7 @@ static void update_nr_written(struct writeback_control *wbc, /* * helper for __extent_writepage, doing all of the delayed allocation setup. * - * This returns 1 if our fill_delalloc function did all the work required + * This returns 1 if the btrfs_run_delalloc_range function did all the work required * to write the page (copy into inline extent). In this case the IO has * been started and the page is already unlocked.
* @@ -3213,44 +3190,37 @@ static void update_nr_written(struct writeback_control *wbc, * This returns < 0 if there were errors (page still locked) */ static noinline_for_stack int writepage_delalloc(struct inode *inode, - struct page *page, struct writeback_control *wbc, - struct extent_page_data *epd, - u64 delalloc_start, - unsigned long *nr_written) + struct page *page, struct writeback_control *wbc, + u64 delalloc_start, unsigned long *nr_written) { - struct extent_io_tree *tree = epd->tree; + struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree; u64 page_end = delalloc_start + PAGE_SIZE - 1; - u64 nr_delalloc; + bool found; u64 delalloc_to_write = 0; u64 delalloc_end = 0; int ret; int page_started = 0; - if (epd->extent_locked || !tree->ops || !tree->ops->fill_delalloc) - return 0; while (delalloc_end < page_end) { - nr_delalloc = find_lock_delalloc_range(inode, tree, + found = find_lock_delalloc_range(inode, tree, page, &delalloc_start, - &delalloc_end, - BTRFS_MAX_EXTENT_SIZE); - if (nr_delalloc == 0) { + &delalloc_end); + if (!found) { delalloc_start = delalloc_end + 1; continue; } - ret = tree->ops->fill_delalloc(inode, page, - delalloc_start, - delalloc_end, - &page_started, - nr_written, wbc); + ret = btrfs_run_delalloc_range(inode, page, delalloc_start, + delalloc_end, &page_started, nr_written, wbc); /* File system has been set read-only */ if (ret) { SetPageError(page); - /* fill_delalloc should be return < 0 for error - * but just in case, we use > 0 here meaning the - * IO is started, so we don't want to return > 0 - * unless things are going well. + /* + * btrfs_run_delalloc_range should return < 0 for error + * but just in case, we use > 0 here meaning the IO is + * started, so we don't want to return > 0 unless + * things are going well. */ ret = ret < 0 ? 
ret : -EIO; goto done; @@ -3323,20 +3293,17 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode, int nr = 0; bool compressed; - if (tree->ops && tree->ops->writepage_start_hook) { - ret = tree->ops->writepage_start_hook(page, start, - page_end); - if (ret) { - /* Fixup worker will requeue */ - if (ret == -EBUSY) - wbc->pages_skipped++; - else - redirty_page_for_writepage(wbc, page); + ret = btrfs_writepage_cow_fixup(page, start, page_end); + if (ret) { + /* Fixup worker will requeue */ + if (ret == -EBUSY) + wbc->pages_skipped++; + else + redirty_page_for_writepage(wbc, page); - update_nr_written(wbc, nr_written); - unlock_page(page); - return 1; - } + update_nr_written(wbc, nr_written); + unlock_page(page); + return 1; } /* @@ -3347,9 +3314,7 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode, end = page_end; if (i_size <= start) { - if (tree->ops && tree->ops->writepage_end_io_hook) - tree->ops->writepage_end_io_hook(page, start, - page_end, NULL, 1); + btrfs_writepage_endio_finish_ordered(page, start, page_end, 1); goto done; } @@ -3360,9 +3325,8 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode, u64 offset; if (cur >= i_size) { - if (tree->ops && tree->ops->writepage_end_io_hook) - tree->ops->writepage_end_io_hook(page, cur, - page_end, NULL, 1); + btrfs_writepage_endio_finish_ordered(page, cur, + page_end, 1); break; } em = btrfs_get_extent(BTRFS_I(inode), page, pg_offset, cur, @@ -3396,11 +3360,10 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode, * end_io notification does not happen here for * compressed extents */ - if (!compressed && tree->ops && - tree->ops->writepage_end_io_hook) - tree->ops->writepage_end_io_hook(page, cur, - cur + iosize - 1, - NULL, 1); + if (!compressed) + btrfs_writepage_endio_finish_ordered(page, cur, + cur + iosize - 1, + 1); else if (compressed) { /* we don't want to end_page_writeback on * a compressed extent. this happens @@ -3469,7 +3432,7 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc, ClearPageError(page); - pg_offset = i_size & (PAGE_SIZE - 1); + pg_offset = offset_in_page(i_size); if (page->index > end_index || (page->index == end_index && !pg_offset)) { page->mapping->a_ops->invalidatepage(page, 0, PAGE_SIZE); @@ -3491,11 +3454,13 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc, set_page_extent_mapped(page); - ret = writepage_delalloc(inode, page, wbc, epd, start, &nr_written); - if (ret == 1) - goto done_unlocked; - if (ret) - goto done; + if (!epd->extent_locked) { + ret = writepage_delalloc(inode, page, wbc, start, &nr_written); + if (ret == 1) + goto done_unlocked; + if (ret) + goto done; + } ret = __extent_writepage_io(inode, page, wbc, epd, i_size, nr_written, write_flags, &nr); @@ -3934,12 +3899,25 @@ static int extent_write_cache_pages(struct address_space *mapping, range_whole = 1; scanned = 1; } - if (wbc->sync_mode == WB_SYNC_ALL) + + /* + * We do the tagged writepage as long as the snapshot flush bit is set + * and we are the first one to do the filemap_flush() on this inode. + * + * The nr_to_write == LONG_MAX is needed to make sure other flushers do + * not race in and drop the bit.
+ */ + if (range_whole && wbc->nr_to_write == LONG_MAX && + test_and_clear_bit(BTRFS_INODE_SNAPSHOT_FLUSH, + &BTRFS_I(inode)->runtime_flags)) + wbc->tagged_writepages = 1; + + if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) tag = PAGECACHE_TAG_TOWRITE; else tag = PAGECACHE_TAG_DIRTY; retry: - if (wbc->sync_mode == WB_SYNC_ALL) + if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) tag_pages_for_writeback(mapping, index, end); done_index = index; while (!done && !nr_to_write_done && (index <= end) && @@ -4084,10 +4062,8 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end, if (clear_page_dirty_for_io(page)) ret = __extent_writepage(page, &wbc_writepages, &epd); else { - if (tree->ops && tree->ops->writepage_end_io_hook) - tree->ops->writepage_end_io_hook(page, start, - start + PAGE_SIZE - 1, - NULL, 1); + btrfs_writepage_endio_finish_ordered(page, start, + start + PAGE_SIZE - 1, 1); unlock_page(page); } put_page(page); @@ -4118,42 +4094,36 @@ int extent_readpages(struct address_space *mapping, struct list_head *pages, unsigned nr_pages) { struct bio *bio = NULL; - unsigned page_idx; unsigned long bio_flags = 0; struct page *pagepool[16]; - struct page *page; struct extent_map *em_cached = NULL; struct extent_io_tree *tree = &BTRFS_I(mapping->host)->io_tree; int nr = 0; u64 prev_em_start = (u64)-1; - for (page_idx = 0; page_idx < nr_pages; page_idx++) { - page = list_entry(pages->prev, struct page, lru); + while (!list_empty(pages)) { + for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) { + struct page *page = list_entry(pages->prev, + struct page, lru); - prefetchw(&page->flags); - list_del(&page->lru); - if (add_to_page_cache_lru(page, mapping, - page->index, - readahead_gfp_mask(mapping))) { - put_page(page); - continue; + prefetchw(&page->flags); + list_del(&page->lru); + if (add_to_page_cache_lru(page, mapping, page->index, + readahead_gfp_mask(mapping))) { + put_page(page); + continue; + } + + pagepool[nr++] = page; } - pagepool[nr++] = page; - if (nr < ARRAY_SIZE(pagepool)) - continue; __extent_readpages(tree, pagepool, nr, &em_cached, &bio, - &bio_flags, &prev_em_start); - nr = 0; + &bio_flags, &prev_em_start); } - if (nr) - __extent_readpages(tree, pagepool, nr, &em_cached, &bio, - &bio_flags, &prev_em_start); if (em_cached) free_extent_map(em_cached); - BUG_ON(!list_empty(pages)); if (bio) return submit_one_bio(bio, 0, bio_flags); return 0; @@ -4342,7 +4312,7 @@ static int emit_fiemap_extent(struct fiemap_extent_info *fieinfo, /* * Sanity check, extent_fiemap() should have ensured that new - * fiemap extent won't overlap with cahced one. + * fiemap extent won't overlap with cached one. * Not recoverable. * * NOTE: Physical address can overlap, due to compression @@ -4914,13 +4884,6 @@ again: check_buffer_tree_ref(eb); set_bit(EXTENT_BUFFER_IN_TREE, &eb->bflags); - /* - * We will free dummy extent buffer's if they come into - * free_extent_buffer with a ref count of 2, but if we are using this we - * want the buffers to stay in memory until we're done with them, so - * bump the ref count again. 
- */ - atomic_inc(&eb->refs); return eb; free_eb: btrfs_release_extent_buffer(eb); @@ -5102,7 +5065,9 @@ void free_extent_buffer(struct extent_buffer *eb) while (1) { refs = atomic_read(&eb->refs); - if (refs <= 3) + if ((!test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) && refs <= 3) + || (test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) && + refs == 1)) break; old = atomic_cmpxchg(&eb->refs, refs, refs - 1); if (old == refs) @@ -5110,10 +5075,6 @@ void free_extent_buffer(struct extent_buffer *eb) } spin_lock(&eb->refs_lock); - if (atomic_read(&eb->refs) == 2 && - test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags)) - atomic_dec(&eb->refs); - if (atomic_read(&eb->refs) == 2 && test_bit(EXTENT_BUFFER_STALE, &eb->bflags) && !extent_buffer_under_io(eb) && @@ -5340,7 +5301,7 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv, struct page *page; char *kaddr; char *dst = (char *)dstv; - size_t start_offset = eb->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(eb->start); unsigned long i = (start_offset + start) >> PAGE_SHIFT; if (start + len > eb->len) { @@ -5350,7 +5311,7 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv, return; } - offset = (start_offset + start) & (PAGE_SIZE - 1); + offset = offset_in_page(start_offset + start); while (len > 0) { page = eb->pages[i]; @@ -5375,14 +5336,14 @@ int read_extent_buffer_to_user(const struct extent_buffer *eb, struct page *page; char *kaddr; char __user *dst = (char __user *)dstv; - size_t start_offset = eb->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(eb->start); unsigned long i = (start_offset + start) >> PAGE_SHIFT; int ret = 0; WARN_ON(start > eb->len); WARN_ON(start + len > eb->start + eb->len); - offset = (start_offset + start) & (PAGE_SIZE - 1); + offset = offset_in_page(start_offset + start); while (len > 0) { page = eb->pages[i]; @@ -5413,10 +5374,10 @@ int map_private_extent_buffer(const struct extent_buffer *eb, char **map, unsigned long *map_start, unsigned long *map_len) { - size_t offset = start & (PAGE_SIZE - 1); + size_t offset; char *kaddr; struct page *p; - size_t start_offset = eb->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(eb->start); unsigned long i = (start_offset + start) >> PAGE_SHIFT; unsigned long end_i = (start_offset + start + min_len - 1) >> PAGE_SHIFT; @@ -5453,14 +5414,14 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv, struct page *page; char *kaddr; char *ptr = (char *)ptrv; - size_t start_offset = eb->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(eb->start); unsigned long i = (start_offset + start) >> PAGE_SHIFT; int ret = 0; WARN_ON(start > eb->len); WARN_ON(start + len > eb->start + eb->len); - offset = (start_offset + start) & (PAGE_SIZE - 1); + offset = offset_in_page(start_offset + start); while (len > 0) { page = eb->pages[i]; @@ -5509,13 +5470,13 @@ void write_extent_buffer(struct extent_buffer *eb, const void *srcv, struct page *page; char *kaddr; char *src = (char *)srcv; - size_t start_offset = eb->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(eb->start); unsigned long i = (start_offset + start) >> PAGE_SHIFT; WARN_ON(start > eb->len); WARN_ON(start + len > eb->start + eb->len); - offset = (start_offset + start) & (PAGE_SIZE - 1); + offset = offset_in_page(start_offset + start); while (len > 0) { page = eb->pages[i]; @@ -5539,13 +5500,13 @@ void memzero_extent_buffer(struct extent_buffer *eb, unsigned long start, size_t offset; struct 
page *page; char *kaddr; - size_t start_offset = eb->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(eb->start); unsigned long i = (start_offset + start) >> PAGE_SHIFT; WARN_ON(start > eb->len); WARN_ON(start + len > eb->start + eb->len); - offset = (start_offset + start) & (PAGE_SIZE - 1); + offset = offset_in_page(start_offset + start); while (len > 0) { page = eb->pages[i]; @@ -5584,13 +5545,12 @@ void copy_extent_buffer(struct extent_buffer *dst, struct extent_buffer *src, size_t offset; struct page *page; char *kaddr; - size_t start_offset = dst->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(dst->start); unsigned long i = (start_offset + dst_offset) >> PAGE_SHIFT; WARN_ON(src->len != dst_len); - offset = (start_offset + dst_offset) & - (PAGE_SIZE - 1); + offset = offset_in_page(start_offset + dst_offset); while (len > 0) { page = dst->pages[i]; @@ -5626,7 +5586,7 @@ static inline void eb_bitmap_offset(struct extent_buffer *eb, unsigned long *page_index, size_t *page_offset) { - size_t start_offset = eb->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(eb->start); size_t byte_offset = BIT_BYTE(nr); size_t offset; @@ -5638,7 +5598,7 @@ static inline void eb_bitmap_offset(struct extent_buffer *eb, offset = start_offset + start + byte_offset; *page_index = offset >> PAGE_SHIFT; - *page_offset = offset & (PAGE_SIZE - 1); + *page_offset = offset_in_page(offset); } /** @@ -5780,7 +5740,7 @@ void memcpy_extent_buffer(struct extent_buffer *dst, unsigned long dst_offset, size_t cur; size_t dst_off_in_page; size_t src_off_in_page; - size_t start_offset = dst->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(dst->start); unsigned long dst_i; unsigned long src_i; @@ -5798,10 +5758,8 @@ void memcpy_extent_buffer(struct extent_buffer *dst, unsigned long dst_offset, } while (len > 0) { - dst_off_in_page = (start_offset + dst_offset) & - (PAGE_SIZE - 1); - src_off_in_page = (start_offset + src_offset) & - (PAGE_SIZE - 1); + dst_off_in_page = offset_in_page(start_offset + dst_offset); + src_off_in_page = offset_in_page(start_offset + src_offset); dst_i = (start_offset + dst_offset) >> PAGE_SHIFT; src_i = (start_offset + src_offset) >> PAGE_SHIFT; @@ -5829,7 +5787,7 @@ void memmove_extent_buffer(struct extent_buffer *dst, unsigned long dst_offset, size_t src_off_in_page; unsigned long dst_end = dst_offset + len - 1; unsigned long src_end = src_offset + len - 1; - size_t start_offset = dst->start & ((u64)PAGE_SIZE - 1); + size_t start_offset = offset_in_page(dst->start); unsigned long dst_i; unsigned long src_i; @@ -5853,10 +5811,8 @@ void memmove_extent_buffer(struct extent_buffer *dst, unsigned long dst_offset, dst_i = (start_offset + dst_end) >> PAGE_SHIFT; src_i = (start_offset + src_end) >> PAGE_SHIFT; - dst_off_in_page = (start_offset + dst_end) & - (PAGE_SIZE - 1); - src_off_in_page = (start_offset + src_end) & - (PAGE_SIZE - 1); + dst_off_in_page = offset_in_page(start_offset + dst_end); + src_off_in_page = offset_in_page(start_offset + src_end); cur = min_t(unsigned long, len, src_off_in_page + 1); cur = min(cur, dst_off_in_page + 1); diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 369daa5d4f73..9673be3f3d1f 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -37,18 +37,22 @@ #define EXTENT_BIO_COMPRESSED 1 #define EXTENT_BIO_FLAG_SHIFT 16 -/* these are bit numbers for test/set bit */ -#define EXTENT_BUFFER_UPTODATE 0 -#define EXTENT_BUFFER_DIRTY 2 -#define EXTENT_BUFFER_CORRUPT 3 
-#define EXTENT_BUFFER_READAHEAD 4 /* this got triggered by readahead */ -#define EXTENT_BUFFER_TREE_REF 5 -#define EXTENT_BUFFER_STALE 6 -#define EXTENT_BUFFER_WRITEBACK 7 -#define EXTENT_BUFFER_READ_ERR 8 /* read IO error */ -#define EXTENT_BUFFER_UNMAPPED 9 -#define EXTENT_BUFFER_IN_TREE 10 -#define EXTENT_BUFFER_WRITE_ERR 11 /* write IO error */ +enum { + EXTENT_BUFFER_UPTODATE, + EXTENT_BUFFER_DIRTY, + EXTENT_BUFFER_CORRUPT, + /* this got triggered by readahead */ + EXTENT_BUFFER_READAHEAD, + EXTENT_BUFFER_TREE_REF, + EXTENT_BUFFER_STALE, + EXTENT_BUFFER_WRITEBACK, + /* read IO error */ + EXTENT_BUFFER_READ_ERR, + EXTENT_BUFFER_UNMAPPED, + EXTENT_BUFFER_IN_TREE, + /* write IO error */ + EXTENT_BUFFER_WRITE_ERR, +}; /* these are flags for __process_pages_contig */ #define PAGE_UNLOCK (1 << 0) @@ -94,38 +98,13 @@ typedef blk_status_t (extent_submit_bio_start_t)(void *private_data, struct extent_io_ops { /* - * The following callbacks must be allways defined, the function + * The following callbacks must always be defined, the function * pointer will be called unconditionally. */ extent_submit_bio_hook_t *submit_bio_hook; int (*readpage_end_io_hook)(struct btrfs_io_bio *io_bio, u64 phy_offset, struct page *page, u64 start, u64 end, int mirror); - int (*readpage_io_failed_hook)(struct page *page, int failed_mirror); - - /* - * Optional hooks, called if the pointer is not NULL - */ - int (*fill_delalloc)(void *private_data, struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written, - struct writeback_control *wbc); - - int (*writepage_start_hook)(struct page *page, u64 start, u64 end); - void (*writepage_end_io_hook)(struct page *page, u64 start, u64 end, - struct extent_state *state, int uptodate); - void (*set_bit_hook)(void *private_data, struct extent_state *state, - unsigned *bits); - void (*clear_bit_hook)(void *private_data, - struct extent_state *state, - unsigned *bits); - void (*merge_extent_hook)(void *private_data, - struct extent_state *new, - struct extent_state *other); - void (*split_extent_hook)(void *private_data, - struct extent_state *orig, u64 split); - void (*check_extent_io_range)(void *private_data, const char *caller, - u64 start, u64 end); }; struct extent_io_tree { @@ -353,11 +332,11 @@ static inline int set_extent_dirty(struct extent_io_tree *tree, u64 start, } static inline int clear_extent_dirty(struct extent_io_tree *tree, u64 start, - u64 end) + u64 end, struct extent_state **cached) { return clear_extent_bit(tree, start, end, EXTENT_DIRTY | EXTENT_DELALLOC | - EXTENT_DO_ACCOUNTING, 0, 0, NULL); + EXTENT_DO_ACCOUNTING, 0, 0, cached); } int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, @@ -546,10 +525,9 @@ int free_io_failure(struct extent_io_tree *failure_tree, struct extent_io_tree *io_tree, struct io_failure_record *rec); #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS -u64 btrfs_find_lock_delalloc_range(struct inode *inode, - struct extent_io_tree *tree, - struct page *locked_page, u64 *start, - u64 *end, u64 max_bytes); +bool find_lock_delalloc_range(struct inode *inode, struct extent_io_tree *tree, + struct page *locked_page, u64 *start, + u64 *end); #endif struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info, u64 start); diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c index 7eea8b6e2cd3..a042a193c120 100644 --- a/fs/btrfs/extent_map.c +++ b/fs/btrfs/extent_map.c @@ -475,7 +475,8 @@ static struct extent_map *prev_extent_map(struct extent_map *em) return
container_of(prev, struct extent_map, rb_node); } -/* helper for btfs_get_extent. Given an existing extent in the tree, +/* + * Helper for btrfs_get_extent. Given an existing extent in the tree, * the existing extent is the nearest extent to map_start, * and an extent that you want to insert, deal with overlap and insert * the best fitted new extent into the tree. diff --git a/fs/btrfs/extent_map.h b/fs/btrfs/extent_map.h index 31977ffd6190..ef05a0121652 100644 --- a/fs/btrfs/extent_map.h +++ b/fs/btrfs/extent_map.h @@ -11,13 +11,20 @@ #define EXTENT_MAP_INLINE ((u64)-2) #define EXTENT_MAP_DELALLOC ((u64)-1) -/* bits for the flags field */ -#define EXTENT_FLAG_PINNED 0 /* this entry not yet on disk, don't free it */ -#define EXTENT_FLAG_COMPRESSED 1 -#define EXTENT_FLAG_PREALLOC 3 /* pre-allocated extent */ -#define EXTENT_FLAG_LOGGING 4 /* Logging this extent */ -#define EXTENT_FLAG_FILLING 5 /* Filling in a preallocated extent */ -#define EXTENT_FLAG_FS_MAPPING 6 /* filesystem extent mapping type */ +/* bits for the extent_map::flags field */ +enum { + /* this entry not yet on disk, don't free it */ + EXTENT_FLAG_PINNED, + EXTENT_FLAG_COMPRESSED, + /* pre-allocated extent */ + EXTENT_FLAG_PREALLOC, + /* Logging this extent */ + EXTENT_FLAG_LOGGING, + /* Filling in a preallocated extent */ + EXTENT_FLAG_FILLING, + /* filesystem extent mapping type */ + EXTENT_FLAG_FS_MAPPING, +}; struct extent_map { struct rb_node rb_node; diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c index ba74827beb32..920bf3b4b0ef 100644 --- a/fs/btrfs/file-item.c +++ b/fs/btrfs/file-item.c @@ -142,11 +142,6 @@ int btrfs_lookup_file_extent(struct btrfs_trans_handle *trans, return ret; } -static void btrfs_io_bio_endio_readpage(struct btrfs_io_bio *bio, int err) -{ - kfree(bio->csum_allocated); -} - static blk_status_t __btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u64 logical_offset, u32 *dst, int dio) { @@ -175,14 +170,12 @@ static blk_status_t __btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio nblocks = bio->bi_iter.bi_size >> inode->i_sb->s_blocksize_bits; if (!dst) { if (nblocks * csum_size > BTRFS_BIO_INLINE_CSUM_SIZE) { - btrfs_bio->csum_allocated = kmalloc_array(nblocks, - csum_size, GFP_NOFS); - if (!btrfs_bio->csum_allocated) { + btrfs_bio->csum = kmalloc_array(nblocks, csum_size, + GFP_NOFS); + if (!btrfs_bio->csum) { btrfs_free_path(path); return BLK_STS_RESOURCE; } - btrfs_bio->csum = btrfs_bio->csum_allocated; - btrfs_bio->end_io = btrfs_io_bio_endio_readpage; } else { btrfs_bio->csum = btrfs_bio->csum_inline; } diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c index 58e93bce3036..d38dc8c31533 100644 --- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -399,7 +399,7 @@ static noinline int btrfs_copy_from_user(loff_t pos, size_t write_bytes, size_t copied = 0; size_t total_copied = 0; int pg = 0; - int offset = pos & (PAGE_SIZE - 1); + int offset = offset_in_page(pos); while (write_bytes > 0) { size_t count = min_t(size_t, @@ -1611,7 +1611,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb, return -ENOMEM; while (iov_iter_count(i) > 0) { - size_t offset = pos & (PAGE_SIZE - 1); + size_t offset = offset_in_page(pos); size_t sector_offset; size_t write_bytes = min(iov_iter_count(i), nrptrs * (size_t)PAGE_SIZE - @@ -2005,7 +2005,7 @@ int btrfs_release_file(struct inode *inode, struct file *filp) filp->private_data = NULL; /* - * ordered_data_close is set by settattr when we are about to truncate + * ordered_data_close is set by setattr when we are about to truncate 
* a file from a non-zero size to a zero size. This tries to * flush down new bytes that may have been written if the * application were using truncate to replace a file in place. @@ -2114,7 +2114,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) /* * We have to do this here to avoid the priority inversion of waiting on - * IO of a lower priority task while holding a transaciton open. + * IO of a lower priority task while holding a transaction open. */ ret = btrfs_wait_ordered_range(inode, start, len); if (ret) { @@ -2154,7 +2154,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) * here we could get into a situation where we're waiting on IO to * happen that is blocked on a transaction trying to commit. With start * we inc the extwriter counter, so we wait for all extwriters to exit - * before we start blocking join'ers. This comment is to keep somebody + * before we start blocking joiners. This comment is to keep somebody * from thinking they are super smart and changing this to * btrfs_join_transaction *cough*Josef*cough*. */ @@ -2186,25 +2186,6 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) up_write(&BTRFS_I(inode)->dio_sem); inode_unlock(inode); - /* - * If any of the ordered extents had an error, just return it to user - * space, so that the application knows some writes didn't succeed and - * can take proper action (retry for e.g.). Blindly committing the - * transaction in this case, would fool userspace that everything was - * successful. And we also want to make sure our log doesn't contain - * file extent items pointing to extents that weren't fully written to - - * just like in the non fast fsync path, where we check for the ordered - * operation's error flag before writing to the log tree and return -EIO - * if any of them had this flag set (btrfs_wait_ordered_range) - - * therefore we need to check for errors in the ordered operations, - * which are indicated by ctx.io_err. 
- */ - if (ctx.io_err) { - btrfs_end_transaction(trans); - ret = ctx.io_err; - goto out; - } - if (ret != BTRFS_NO_LOG_SYNC) { if (!ret) { ret = btrfs_sync_log(trans, root, &ctx); diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c index d6736595ec57..e5089087eaa6 100644 --- a/fs/btrfs/free-space-tree.c +++ b/fs/btrfs/free-space-tree.c @@ -74,11 +74,11 @@ out: return ret; } -struct btrfs_free_space_info * -search_free_space_info(struct btrfs_trans_handle *trans, - struct btrfs_fs_info *fs_info, - struct btrfs_block_group_cache *block_group, - struct btrfs_path *path, int cow) +EXPORT_FOR_TESTS +struct btrfs_free_space_info *search_free_space_info( + struct btrfs_trans_handle *trans, struct btrfs_fs_info *fs_info, + struct btrfs_block_group_cache *block_group, + struct btrfs_path *path, int cow) { struct btrfs_root *root = fs_info->free_space_root; struct btrfs_key key; @@ -176,6 +176,7 @@ static void le_bitmap_set(unsigned long *map, unsigned int start, int len) } } +EXPORT_FOR_TESTS int convert_free_space_to_bitmaps(struct btrfs_trans_handle *trans, struct btrfs_block_group_cache *block_group, struct btrfs_path *path) @@ -315,6 +316,7 @@ out: return ret; } +EXPORT_FOR_TESTS int convert_free_space_to_extents(struct btrfs_trans_handle *trans, struct btrfs_block_group_cache *block_group, struct btrfs_path *path) @@ -487,6 +489,7 @@ out: return ret; } +EXPORT_FOR_TESTS int free_space_test_bit(struct btrfs_block_group_cache *block_group, struct btrfs_path *path, u64 offset) { @@ -775,6 +778,7 @@ out: return ret; } +EXPORT_FOR_TESTS int __remove_from_free_space_tree(struct btrfs_trans_handle *trans, struct btrfs_block_group_cache *block_group, struct btrfs_path *path, u64 start, u64 size) @@ -968,6 +972,7 @@ out: return ret; } +EXPORT_FOR_TESTS int __add_to_free_space_tree(struct btrfs_trans_handle *trans, struct btrfs_block_group_cache *block_group, struct btrfs_path *path, u64 start, u64 size) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 9ea4c6f0352f..43eb4535319d 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include "ctree.h" #include "disk-io.h" @@ -103,23 +104,23 @@ static void __endio_write_update_ordered(struct inode *inode, /* * Cleanup all submitted ordered extents in specified range to handle errors - * from the fill_dellaloc() callback. + * from the btrfs_run_delalloc_range() callback. * * NOTE: caller must ensure that when an error happens, it can not call * extent_clear_unlock_delalloc() to clear both the bits EXTENT_DO_ACCOUNTING * and EXTENT_DELALLOC simultaneously, because that causes the reserved metadata * to be released, which we want to happen only when finishing the ordered - * extent (btrfs_finish_ordered_io()). Also note that the caller of the - * fill_delalloc() callback already does proper cleanup for the first page of - * the range, that is, it invokes the callback writepage_end_io_hook() for the - * range of the first page. + * extent (btrfs_finish_ordered_io()). 
*/ static inline void btrfs_cleanup_ordered_extents(struct inode *inode, - const u64 offset, - const u64 bytes) + struct page *locked_page, + u64 offset, u64 bytes) { unsigned long index = offset >> PAGE_SHIFT; unsigned long end_index = (offset + bytes - 1) >> PAGE_SHIFT; + u64 page_start = page_offset(locked_page); + u64 page_end = page_start + PAGE_SIZE - 1; + struct page *page; while (index <= end_index) { @@ -130,8 +131,18 @@ static inline void btrfs_cleanup_ordered_extents(struct inode *inode, ClearPagePrivate2(page); put_page(page); } - return __endio_write_update_ordered(inode, offset + PAGE_SIZE, - bytes - PAGE_SIZE, false); + + /* + * If this page belongs to the delalloc range being instantiated, + * then skip it, since the first page of a range is going to be + * properly cleaned up by the caller of run_delalloc_range + */ + if (page_start >= offset && page_end <= (offset + bytes - 1)) { + offset += PAGE_SIZE; + bytes -= PAGE_SIZE; + } + + return __endio_write_update_ordered(inode, offset, bytes, false); } static int btrfs_dirty_inode(struct inode *inode); @@ -229,7 +240,7 @@ static int insert_inline_extent(struct btrfs_trans_handle *trans, start >> PAGE_SHIFT); btrfs_set_file_extent_compression(leaf, ei, 0); kaddr = kmap_atomic(page); - offset = start & (PAGE_SIZE - 1); + offset = offset_in_page(start); write_extent_buffer(leaf, kaddr + offset, ptr, size); kunmap_atomic(kaddr); put_page(page); @@ -357,7 +368,7 @@ struct async_extent { struct async_cow { struct inode *inode; - struct btrfs_root *root; + struct btrfs_fs_info *fs_info; struct page *locked_page; u64 start; u64 end; @@ -538,8 +549,7 @@ again: &total_compressed); if (!ret) { - unsigned long offset = total_compressed & - (PAGE_SIZE - 1); + unsigned long offset = offset_in_page(total_compressed); struct page *page = pages[nr_pages - 1]; char *kaddr; @@ -847,14 +857,13 @@ retry: ins.offset, async_extent->pages, async_extent->nr_pages, async_cow->write_flags)) { - struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree; struct page *p = async_extent->pages[0]; const u64 start = async_extent->start; const u64 end = start + async_extent->ram_size - 1; p->mapping = inode->i_mapping; - tree->ops->writepage_end_io_hook(p, start, end, - NULL, 0); + btrfs_writepage_endio_finish_ordered(p, start, end, 0); + p->mapping = NULL; extent_clear_unlock_delalloc(inode, start, end, end, NULL, 0, @@ -1144,13 +1153,11 @@ static noinline void async_cow_submit(struct btrfs_work *work) { struct btrfs_fs_info *fs_info; struct async_cow *async_cow; - struct btrfs_root *root; unsigned long nr_pages; async_cow = container_of(work, struct async_cow, work); - root = async_cow->root; - fs_info = root->fs_info; + fs_info = async_cow->fs_info; nr_pages = (async_cow->end - async_cow->start + PAGE_SIZE) >> PAGE_SHIFT; @@ -1179,7 +1186,6 @@ static int cow_file_range_async(struct inode *inode, struct page *locked_page, { struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); struct async_cow *async_cow; - struct btrfs_root *root = BTRFS_I(inode)->root; unsigned long nr_pages; u64 cur_end; @@ -1189,7 +1195,7 @@ static int cow_file_range_async(struct inode *inode, struct page *locked_page, async_cow = kmalloc(sizeof(*async_cow), GFP_NOFS); BUG_ON(!async_cow); /* -ENOMEM */ async_cow->inode = igrab(inode); - async_cow->root = root; + async_cow->fs_info = fs_info; async_cow->locked_page = locked_page; async_cow->start = start; async_cow->write_flags = write_flags; @@ -1372,7 +1378,8 @@ next_slot: * Do the same check as in btrfs_cross_ref_exist but * without
the unnecessary search. */ - if (btrfs_file_extent_generation(leaf, fi) <= + if (!nolock && + btrfs_file_extent_generation(leaf, fi) <= btrfs_root_last_snapshot(&root->root_item)) goto out_check; if (extent_type == BTRFS_FILE_EXTENT_REG && !force) @@ -1576,12 +1583,12 @@ static inline int need_force_cow(struct inode *inode, u64 start, u64 end) } /* - * extent_io.c call back to do delayed allocation processing + * Function to process delayed allocation (create CoW) for ranges which are + * being touched for the first time. */ -static int run_delalloc_range(void *private_data, struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written, - struct writeback_control *wbc) +int btrfs_run_delalloc_range(void *private_data, struct page *locked_page, + u64 start, u64 end, int *page_started, unsigned long *nr_written, + struct writeback_control *wbc) { struct inode *inode = private_data; int ret; @@ -1605,14 +1612,14 @@ static int run_delalloc_range(void *private_data, struct page *locked_page, write_flags); } if (ret) - btrfs_cleanup_ordered_extents(inode, start, end - start + 1); + btrfs_cleanup_ordered_extents(inode, locked_page, start, + end - start + 1); return ret; } -static void btrfs_split_extent_hook(void *private_data, - struct extent_state *orig, u64 split) +void btrfs_split_delalloc_extent(struct inode *inode, + struct extent_state *orig, u64 split) { - struct inode *inode = private_data; u64 size; /* not delalloc, ignore it */ @@ -1625,7 +1632,7 @@ static void btrfs_split_extent_hook(void *private_data, u64 new_size; /* - * See the explanation in btrfs_merge_extent_hook, the same + * See the explanation in btrfs_merge_delalloc_extent, the same * applies here, just in reverse. */ new_size = orig->end - split + 1; @@ -1642,16 +1649,13 @@ static void btrfs_split_extent_hook(void *private_data, } /* - * extent_io.c merge_extent_hook, used to track merged delayed allocation - * extents so we can keep track of new extents that are just merged onto old - * extents, such as when we are doing sequential writes, so we can properly - * account for the metadata space we'll need. + * Handle merged delayed allocation extents so we can keep track of new extents + * that are just merged onto old extents, such as when we are doing sequential + * writes, so we can properly account for the metadata space we'll need. */ -static void btrfs_merge_extent_hook(void *private_data, - struct extent_state *new, - struct extent_state *other) +void btrfs_merge_delalloc_extent(struct inode *inode, struct extent_state *new, struct extent_state *other) { - struct inode *inode = private_data; u64 new_size, old_size; u32 num_extents; @@ -1755,15 +1759,12 @@ static void btrfs_del_delalloc_inode(struct btrfs_root *root, } /* - * extent_io.c set_bit_hook, used to track delayed allocation - * bytes in this file, and to maintain the list of inodes that - * have pending delalloc work to be done. + * Properly track delayed allocation bytes in the inode and maintain the + * list of inodes that have pending delalloc work to be done.
*/ -static void btrfs_set_bit_hook(void *private_data, - struct extent_state *state, unsigned *bits) +void btrfs_set_delalloc_extent(struct inode *inode, struct extent_state *state, + unsigned *bits) { - struct inode *inode = private_data; - struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); if ((*bits & EXTENT_DEFRAG) && !(*bits & EXTENT_DELALLOC)) @@ -1809,14 +1810,14 @@ static void btrfs_set_bit_hook(void *private_data, } /* - * extent_io.c clear_bit_hook, see set_bit_hook for why + * Once a range is no longer delalloc this function ensures that proper + * accounting happens. */ -static void btrfs_clear_bit_hook(void *private_data, - struct extent_state *state, - unsigned *bits) +void btrfs_clear_delalloc_extent(struct inode *vfs_inode, + struct extent_state *state, unsigned *bits) { - struct btrfs_inode *inode = BTRFS_I((struct inode *)private_data); - struct btrfs_fs_info *fs_info = btrfs_sb(inode->vfs_inode.i_sb); + struct btrfs_inode *inode = BTRFS_I(vfs_inode); + struct btrfs_fs_info *fs_info = btrfs_sb(vfs_inode->i_sb); u64 len = state->end + 1 - state->start; u32 num_extents = count_max_extents(len); @@ -1841,7 +1842,7 @@ static void btrfs_clear_bit_hook(void *private_data, /* * We don't reserve metadata space for space cache inodes so we - * don't need to call dellalloc_release_metadata if there is an + * don't need to call delalloc_release_metadata if there is an * error. */ if (*bits & EXTENT_CLEAR_META_RESV && @@ -1880,16 +1881,21 @@ static void btrfs_clear_bit_hook(void *private_data, } /* - * Merge bio hook, this must check the chunk tree to make sure we don't create - * bios that span stripes or chunks + * btrfs_bio_fits_in_stripe - Checks whether the size of the given bio will fit + * in a chunk's stripe. This function ensures that bios do not span a + * stripe/chunk * - * return 1 if page cannot be merged to bio - * return 0 if page can be merged to bio + * @page - The page we are about to add to the bio + * @size - size we want to add to the bio + * @bio - bio we want to ensure is smaller than a stripe + * @bio_flags - flags of the bio + * + * return 1 if page cannot be added to the bio + * return 0 if page can be added to the bio * return error otherwise */ -int btrfs_merge_bio_hook(struct page *page, unsigned long offset, - size_t size, struct bio *bio, - unsigned long bio_flags) +int btrfs_bio_fits_in_stripe(struct page *page, size_t size, struct bio *bio, + unsigned long bio_flags) { struct inode *inode = page->mapping->host; struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); @@ -1931,29 +1937,6 @@ static blk_status_t btrfs_submit_bio_start(void *private_data, struct bio *bio, return 0; } -/* - * in order to insert checksums into the metadata in large chunks, - * we wait until bio submission time. All the pages in the bio are - * checksummed and sums are attached onto the ordered extent record. - * - * At IO completion time the cums attached on the ordered extent record - * are inserted into the btree - */ -blk_status_t btrfs_submit_bio_done(void *private_data, struct bio *bio, - int mirror_num) -{ - struct inode *inode = private_data; - struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); - blk_status_t ret; - - ret = btrfs_map_bio(fs_info, bio, mirror_num, 1); - if (ret) { - bio->bi_status = ret; - bio_endio(bio); - } - return ret; -} - /* * extent_io.c submission hook. This does the right thing for csum calculation * on write, or reading the csums from the tree before a read. 
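The stripe-fit rule documented above for btrfs_bio_fits_in_stripe comes down to interval arithmetic: a page may be appended to a bio only if the bio's first and last bytes still fall inside the same stripe. The following user-space sketch illustrates that check under simplifying assumptions; the helper name fits_in_stripe and the single fixed stripe_len are illustrative only (the kernel instead derives the geometry from the chunk mapping), so this is not the patch's implementation.

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch only: return true when the byte range
 * [logical, logical + size) lies entirely within one stripe of
 * stripe_len bytes, i.e. appending it cannot make a bio span a
 * stripe/chunk boundary.
 */
static bool fits_in_stripe(uint64_t logical, uint64_t size,
			   uint64_t stripe_len)
{
	if (size == 0)
		return true;
	/* the first and the last byte must land in the same stripe */
	return (logical / stripe_len) == ((logical + size - 1) / stripe_len);
}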
@@ -2056,7 +2039,7 @@ int btrfs_set_extent_delalloc(struct inode *inode, u64 start, u64 end, unsigned int extra_bits, struct extent_state **cached_state, int dedupe) { - WARN_ON((end & (PAGE_SIZE - 1)) == 0); + WARN_ON(PAGE_ALIGNED(end)); return set_extent_delalloc(&BTRFS_I(inode)->io_tree, start, end, extra_bits, cached_state); } @@ -2152,7 +2135,7 @@ out_page: * to fix it up. The async helper will wait for ordered extents, set * the delalloc bit and make it safe to write the page. */ -static int btrfs_writepage_start_hook(struct page *page, u64 start, u64 end) +int btrfs_writepage_cow_fixup(struct page *page, u64 start, u64 end) { struct inode *inode = page->mapping->host; struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); @@ -3159,8 +3142,8 @@ static void finish_ordered_fn(struct btrfs_work *work) { btrfs_finish_ordered_io(ordered_extent); } -static void btrfs_writepage_end_io_hook(struct page *page, u64 start, u64 end, - struct extent_state *state, int uptodate) +void btrfs_writepage_endio_finish_ordered(struct page *page, u64 start, + u64 end, int uptodate) { struct inode *inode = page->mapping->host; struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); @@ -3686,6 +3669,21 @@ cache_index: * inode is not a directory, logging its parent unnecessarily. */ BTRFS_I(inode)->last_unlink_trans = BTRFS_I(inode)->last_trans; + /* + * Similar reasoning for last_link_trans, needs to be set otherwise + * for a case like the following: + * + * mkdir A + * touch foo + * ln foo A/bar + * echo 2 > /proc/sys/vm/drop_caches + * fsync foo + * <power failure> + * + * Would result in link bar and directory A not existing after the power + * failure. + */ + BTRFS_I(inode)->last_link_trans = BTRFS_I(inode)->last_trans; path->slots[0]++; if (inode->i_nlink != 1 || @@ -4444,31 +4442,6 @@ out: return err; } -static int truncate_space_check(struct btrfs_trans_handle *trans, - struct btrfs_root *root, - u64 bytes_deleted) -{ - struct btrfs_fs_info *fs_info = root->fs_info; - int ret; - - /* - * This is only used to apply pressure to the enospc system, we don't - * intend to use this reservation at all. - */ - bytes_deleted = btrfs_csum_bytes_to_leaves(fs_info, bytes_deleted); - bytes_deleted *= fs_info->nodesize; - ret = btrfs_block_rsv_add(root, &fs_info->trans_block_rsv, - bytes_deleted, BTRFS_RESERVE_NO_FLUSH); - if (!ret) { - trace_btrfs_space_reservation(fs_info, "transaction", - trans->transid, - bytes_deleted, 1); - trans->bytes_reserved += bytes_deleted; - } - return ret; - -} - /* * Return this if we need to call truncate_block for the last bit of the * truncate. @@ -4513,7 +4486,6 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans, u64 bytes_deleted = 0; bool be_nice = false; bool should_throttle = false; - bool should_end = false; BUG_ON(new_size > 0 && min_type != BTRFS_EXTENT_DATA_KEY); @@ -4544,7 +4516,7 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans, /* * This function is also used to drop the items in the log tree before * we relog the inode, so if root != BTRFS_I(inode)->root, it means - * it is used to drop the loged items. So we shouldn't kill the delayed + * it is used to drop the logged items. So we shouldn't kill the delayed * items.
*/ if (min_type == 0 && root == BTRFS_I(inode)->root) @@ -4726,15 +4698,7 @@ delete: btrfs_abort_transaction(trans, ret); break; } - if (btrfs_should_throttle_delayed_refs(trans)) - btrfs_async_run_delayed_refs(fs_info, - trans->delayed_ref_updates * 2, - trans->transid, 0); if (be_nice) { - if (truncate_space_check(trans, root, - extent_num_bytes)) { - should_end = true; - } if (btrfs_should_throttle_delayed_refs(trans)) should_throttle = true; } @@ -4745,7 +4709,7 @@ delete: if (path->slots[0] == 0 || path->slots[0] != pending_del_slot || - should_throttle || should_end) { + should_throttle) { if (pending_del_nr) { ret = btrfs_del_items(trans, root, path, pending_del_slot, @@ -4757,23 +4721,24 @@ delete: pending_del_nr = 0; } btrfs_release_path(path); - if (should_throttle) { - unsigned long updates = trans->delayed_ref_updates; - if (updates) { - trans->delayed_ref_updates = 0; - ret = btrfs_run_delayed_refs(trans, - updates * 2); - if (ret) - break; - } - } + /* - * if we failed to refill our space rsv, bail out - * and let the transaction restart + * We can generate a lot of delayed refs, so we need to + * throttle every once in a while and make sure we're + * adding enough space to keep up with the work we are + * generating. Since we hold a transaction here we + * can't flush, and we don't want to FLUSH_LIMIT because + * we could have generated too many delayed refs to + * actually allocate, so just bail if we're short and + * let the normal reservation dance happen higher up. */ - if (should_end) { - ret = -EAGAIN; - break; + if (should_throttle) { + ret = btrfs_delayed_refs_rsv_refill(fs_info, + BTRFS_RESERVE_NO_FLUSH); + if (ret) { + ret = -EAGAIN; + break; + } } goto search_again; } else { @@ -4799,18 +4764,6 @@ out: } btrfs_free_path(path); - - if (be_nice && bytes_deleted > SZ_32M && (ret >= 0 || ret == -EAGAIN)) { - unsigned long updates = trans->delayed_ref_updates; - int err; - - if (updates) { - trans->delayed_ref_updates = 0; - err = btrfs_run_delayed_refs(trans, updates * 2); - if (err) - ret = err; - } - } return ret; } @@ -5155,7 +5108,7 @@ static int btrfs_setsize(struct inode *inode, struct iattr *attr) truncate_setsize(inode, newsize); - /* Disable nonlocked read DIO to avoid the end less truncate */ + /* Disable nonlocked read DIO to avoid the endless truncate */ btrfs_inode_block_unlocked_dio(BTRFS_I(inode)); inode_dio_wait(inode); btrfs_inode_resume_unlocked_dio(BTRFS_I(inode)); @@ -5333,8 +5286,8 @@ static struct btrfs_trans_handle *evict_refill_and_join(struct btrfs_root *root, * Try to steal from the global reserve if there is space for * it. */ - if (!btrfs_check_space_for_delayed_refs(trans) && - !btrfs_block_rsv_migrate(global_rsv, rsv, rsv->size, false)) + if (!btrfs_check_space_for_delayed_refs(fs_info) && + !btrfs_block_rsv_migrate(global_rsv, rsv, rsv->size, 0)) return trans; /* If not, commit and try again.
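The new throttling in btrfs_truncate_inode_items() above collapses the old async-run/space-check logic into one decision: periodically try to refill the delayed-refs reservation and, if that fails, bail with -EAGAIN so the caller can restart the transaction. A minimal sketch of that control flow (standalone C; the budget counter and the every-8-items heuristic are stand-ins for the real reservation machinery):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Stub: pretend the reservation can only be refilled a few times. */
static bool refill_delayed_refs_rsv(int *budget)
{
	if (*budget <= 0)
		return false;
	(*budget)--;
	return true;
}

/* Sketch of the truncate loop's new throttle-or-bail structure. */
static int truncate_items(int nr_items, int budget)
{
	for (int i = 0; i < nr_items; i++) {
		/* ... delete one item, generating delayed refs ... */
		bool should_throttle = (i % 8 == 7); /* placeholder heuristic */

		if (should_throttle && !refill_delayed_refs_rsv(&budget))
			return -EAGAIN; /* let the caller restart the trans */
	}
	return 0;
}

int main(void)
{
	printf("%d\n", truncate_items(100, 4)); /* bails with -EAGAIN */
	return 0;
}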
*/ @@ -6406,14 +6359,19 @@ fail_dir_item: err = btrfs_del_root_ref(trans, key.objectid, root->root_key.objectid, parent_ino, &local_index, name, name_len); - + if (err) + btrfs_abort_transaction(trans, err); } else if (add_backref) { u64 local_index; int err; err = btrfs_del_inode_ref(trans, root, name, name_len, ino, parent_ino, &local_index); + if (err) + btrfs_abort_transaction(trans, err); } + + /* Return the original error code */ return ret; } @@ -6625,6 +6583,7 @@ static int btrfs_link(struct dentry *old_dentry, struct inode *dir, if (err) goto fail; } + BTRFS_I(inode)->last_link_trans = trans->transid; d_instantiate(dentry, inode); ret = btrfs_log_new_name(trans, BTRFS_I(inode), NULL, parent, true, NULL); @@ -6652,7 +6611,6 @@ static int btrfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) struct btrfs_trans_handle *trans; struct btrfs_root *root = BTRFS_I(dir)->root; int err = 0; - int drop_on_err = 0; u64 objectid = 0; u64 index = 0; @@ -6678,7 +6636,6 @@ static int btrfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) goto out_fail; } - drop_on_err = 1; /* these must be set before we unlock the inode */ inode->i_op = &btrfs_dir_inode_operations; inode->i_fop = &btrfs_dir_file_operations; @@ -6699,7 +6656,6 @@ static int btrfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) goto out_fail; d_instantiate_new(dentry, inode); - drop_on_err = 0; out_fail: btrfs_end_transaction(trans); @@ -8053,9 +8009,7 @@ static void btrfs_endio_direct_read(struct bio *bio) dio_bio->bi_status = err; dio_end_io(dio_bio); - - if (io_bio->end_io) - io_bio->end_io(io_bio, blk_status_to_errno(err)); + btrfs_io_bio_free_csum(io_bio); bio_put(bio); } @@ -8098,7 +8052,7 @@ static void __endio_write_update_ordered(struct inode *inode, return; /* * Our bio might span multiple ordered extents. In this case - * we keep goin until we have accounted the whole dio. + * we keep going until we have accounted the whole dio. */ if (ordered_offset < offset + bytes) { ordered_bytes = offset + bytes - ordered_offset; @@ -8408,8 +8362,7 @@ static void btrfs_submit_direct(struct bio *dio_bio, struct inode *inode, if (!ret) return; - if (io_bio->end_io) - io_bio->end_io(io_bio, ret); + btrfs_io_bio_free_csum(io_bio); free_ordered: /* @@ -8912,7 +8865,7 @@ again: /* page is wholly or partially inside EOF */ if (page_start + PAGE_SIZE > size) - zero_start = size & ~PAGE_MASK; + zero_start = offset_in_page(size); else zero_start = PAGE_SIZE; @@ -9157,6 +9110,7 @@ struct inode *btrfs_alloc_inode(struct super_block *sb) ei->index_cnt = (u64)-1; ei->dir_index = 0; ei->last_unlink_trans = 0; + ei->last_link_trans = 0; ei->last_log_commit = 0; spin_lock_init(&ei->lock); @@ -9968,7 +9922,7 @@ static struct btrfs_delalloc_work *btrfs_alloc_delalloc_work(struct inode *inode * some fairly slow code that needs optimization. This walks the list * of all the inodes with pending delalloc and forces them to disk. 
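The fail_dir_item fixes above follow a common error-path pattern: if undoing a failed operation itself fails, report the secondary failure (here by aborting the transaction) but still return the error that started the unwinding. A small sketch of the pattern (plain C; add_link() and its fake error arguments are illustrative only):

#include <errno.h>
#include <stdio.h>

static void abort_transaction(int err)
{
	fprintf(stderr, "aborting transaction: %d\n", err);
}

/*
 * If the rollback of a failed link also fails, surface that (abort),
 * but hand the caller the error that started the unwinding, not the
 * secondary one.
 */
static int add_link(int ret /* primary error */, int err /* rollback result */)
{
	if (err)
		abort_transaction(err);
	/* Return the original error code */
	return ret;
}

int main(void)
{
	printf("%d\n", add_link(-EEXIST, -EIO)); /* prints -EEXIST */
	return 0;
}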
*/ -static int start_delalloc_inodes(struct btrfs_root *root, int nr) +static int start_delalloc_inodes(struct btrfs_root *root, int nr, bool snapshot) { struct btrfs_inode *binode; struct inode *inode; @@ -9996,6 +9950,9 @@ static int start_delalloc_inodes(struct btrfs_root *root, int nr) } spin_unlock(&root->delalloc_lock); + if (snapshot) + set_bit(BTRFS_INODE_SNAPSHOT_FLUSH, + &binode->runtime_flags); work = btrfs_alloc_delalloc_work(inode); if (!work) { iput(inode); @@ -10029,7 +9986,7 @@ out: return ret; } -int btrfs_start_delalloc_inodes(struct btrfs_root *root) +int btrfs_start_delalloc_snapshot(struct btrfs_root *root) { struct btrfs_fs_info *fs_info = root->fs_info; int ret; @@ -10037,7 +9994,7 @@ int btrfs_start_delalloc_inodes(struct btrfs_root *root) if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) return -EROFS; - ret = start_delalloc_inodes(root, -1); + ret = start_delalloc_inodes(root, -1, true); if (ret > 0) ret = 0; return ret; @@ -10066,7 +10023,7 @@ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, int nr) &fs_info->delalloc_roots); spin_unlock(&fs_info->delalloc_root_lock); - ret = start_delalloc_inodes(root, nr); + ret = start_delalloc_inodes(root, nr, false); btrfs_put_fs_root(root); if (ret < 0) goto out; @@ -10445,26 +10402,6 @@ out: return ret; } -__attribute__((const)) -static int btrfs_readpage_io_failed_hook(struct page *page, int failed_mirror) -{ - return -EAGAIN; -} - -static void btrfs_check_extent_io_range(void *private_data, const char *caller, - u64 start, u64 end) -{ - struct inode *inode = private_data; - u64 isize; - - isize = i_size_read(inode); - if (end >= PAGE_SIZE && (end % 2) == 0 && end != isize - 1) { - btrfs_debug_rl(BTRFS_I(inode)->root->fs_info, - "%s: ino %llu isize %llu odd range [%llu,%llu]", - caller, btrfs_ino(BTRFS_I(inode)), isize, start, end); - } -} - void btrfs_set_range_writeback(struct extent_io_tree *tree, u64 start, u64 end) { struct inode *inode = tree->private_data; @@ -10481,6 +10418,343 @@ void btrfs_set_range_writeback(struct extent_io_tree *tree, u64 start, u64 end) } } +#ifdef CONFIG_SWAP +/* + * Add an entry indicating a block group or device which is pinned by a + * swapfile. Returns 0 on success, 1 if there is already an entry for it, or a + * negative errno on failure. + */ +static int btrfs_add_swapfile_pin(struct inode *inode, void *ptr, + bool is_block_group) +{ + struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info; + struct btrfs_swapfile_pin *sp, *entry; + struct rb_node **p; + struct rb_node *parent = NULL; + + sp = kmalloc(sizeof(*sp), GFP_NOFS); + if (!sp) + return -ENOMEM; + sp->ptr = ptr; + sp->inode = inode; + sp->is_block_group = is_block_group; + + spin_lock(&fs_info->swapfile_pins_lock); + p = &fs_info->swapfile_pins.rb_node; + while (*p) { + parent = *p; + entry = rb_entry(parent, struct btrfs_swapfile_pin, node); + if (sp->ptr < entry->ptr || + (sp->ptr == entry->ptr && sp->inode < entry->inode)) { + p = &(*p)->rb_left; + } else if (sp->ptr > entry->ptr || + (sp->ptr == entry->ptr && sp->inode > entry->inode)) { + p = &(*p)->rb_right; + } else { + spin_unlock(&fs_info->swapfile_pins_lock); + kfree(sp); + return 1; + } + } + rb_link_node(&sp->node, parent, p); + rb_insert_color(&sp->node, &fs_info->swapfile_pins); + spin_unlock(&fs_info->swapfile_pins_lock); + return 0; +} + +/* Free all of the entries pinned by this swapfile. 
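The swapfile-pin insert above keys its rb-tree on the pinned pointer first and the owning inode second, and an exact match means the entry is already present. The same two-level ordering as a plain C comparator (the rb-tree plumbing itself is kernel infrastructure and is omitted; pointers are compared via uintptr_t to keep the comparison well defined):

#include <stdint.h>
#include <stdio.h>

struct swapfile_pin {
	const void *ptr;	/* pinned block group or device */
	const void *inode;	/* swapfile owning the pin */
};

/* Compare the pinned pointer first, then the owning inode. */
static int pin_cmp(const struct swapfile_pin *a, const struct swapfile_pin *b)
{
	uintptr_t ap = (uintptr_t)a->ptr, bp = (uintptr_t)b->ptr;
	uintptr_t ai = (uintptr_t)a->inode, bi = (uintptr_t)b->inode;

	if (ap != bp)
		return ap < bp ? -1 : 1;
	if (ai != bi)
		return ai < bi ? -1 : 1;
	return 0;	/* duplicate: the insert reports "already pinned" */
}

int main(void)
{
	int bg, inode;
	struct swapfile_pin a = { &bg, &inode }, b = { &bg, &inode };

	printf("%d\n", pin_cmp(&a, &b));	/* 0: duplicate */
	return 0;
}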
*/ +static void btrfs_free_swapfile_pins(struct inode *inode) +{ + struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info; + struct btrfs_swapfile_pin *sp; + struct rb_node *node, *next; + + spin_lock(&fs_info->swapfile_pins_lock); + node = rb_first(&fs_info->swapfile_pins); + while (node) { + next = rb_next(node); + sp = rb_entry(node, struct btrfs_swapfile_pin, node); + if (sp->inode == inode) { + rb_erase(&sp->node, &fs_info->swapfile_pins); + if (sp->is_block_group) + btrfs_put_block_group(sp->ptr); + kfree(sp); + } + node = next; + } + spin_unlock(&fs_info->swapfile_pins_lock); +} + +struct btrfs_swap_info { + u64 start; + u64 block_start; + u64 block_len; + u64 lowest_ppage; + u64 highest_ppage; + unsigned long nr_pages; + int nr_extents; +}; + +static int btrfs_add_swap_extent(struct swap_info_struct *sis, + struct btrfs_swap_info *bsi) +{ + unsigned long nr_pages; + u64 first_ppage, first_ppage_reported, next_ppage; + int ret; + + first_ppage = ALIGN(bsi->block_start, PAGE_SIZE) >> PAGE_SHIFT; + next_ppage = ALIGN_DOWN(bsi->block_start + bsi->block_len, + PAGE_SIZE) >> PAGE_SHIFT; + + if (first_ppage >= next_ppage) + return 0; + nr_pages = next_ppage - first_ppage; + + first_ppage_reported = first_ppage; + if (bsi->start == 0) + first_ppage_reported++; + if (bsi->lowest_ppage > first_ppage_reported) + bsi->lowest_ppage = first_ppage_reported; + if (bsi->highest_ppage < (next_ppage - 1)) + bsi->highest_ppage = next_ppage - 1; + + ret = add_swap_extent(sis, bsi->nr_pages, nr_pages, first_ppage); + if (ret < 0) + return ret; + bsi->nr_extents += ret; + bsi->nr_pages += nr_pages; + return 0; +} + +static void btrfs_swap_deactivate(struct file *file) +{ + struct inode *inode = file_inode(file); + + btrfs_free_swapfile_pins(inode); + atomic_dec(&BTRFS_I(inode)->root->nr_swapfiles); +} + +static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file, + sector_t *span) +{ + struct inode *inode = file_inode(file); + struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info; + struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree; + struct extent_state *cached_state = NULL; + struct extent_map *em = NULL; + struct btrfs_device *device = NULL; + struct btrfs_swap_info bsi = { + .lowest_ppage = (sector_t)-1ULL, + }; + int ret = 0; + u64 isize; + u64 start; + + /* + * If the swap file was just created, make sure delalloc is done. If the + * file changes again after this, the user is doing something stupid and + * we don't really care. + */ + ret = btrfs_wait_ordered_range(inode, 0, (u64)-1); + if (ret) + return ret; + + /* + * The inode is locked, so these flags won't change after we check them. + */ + if (BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS) { + btrfs_warn(fs_info, "swapfile must not be compressed"); + return -EINVAL; + } + if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW)) { + btrfs_warn(fs_info, "swapfile must not be copy-on-write"); + return -EINVAL; + } + if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) { + btrfs_warn(fs_info, "swapfile must not be checksummed"); + return -EINVAL; + } + + /* + * Balance or device remove/replace/resize can move stuff around from + * under us. The EXCL_OP flag makes sure they aren't running/won't run + * concurrently while we are mapping the swap extents, and + * fs_info->swapfile_pins prevents them from running while the swap file + * is active and moving the extents. Note that this also prevents a + * concurrent device add which isn't actually necessary, but it's not + * really worth the trouble to allow it. 
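btrfs_add_swap_extent() above converts a byte extent into whole swap pages: the start is rounded up and the end rounded down, so only fully covered pages are handed to add_swap_extent(). The arithmetic on its own (self-contained C with ALIGN/ALIGN_DOWN written out; 4K pages assumed, and the alignment must be a power of two):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define PAGE_SHIFT	12

#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	uint64_t block_start = 6144;	/* extent starts mid-page */
	uint64_t block_len = 20480;	/* 20K long */

	/* First fully covered page, and first page past the extent. */
	uint64_t first = ALIGN(block_start, PAGE_SIZE) >> PAGE_SHIFT;
	uint64_t next = ALIGN_DOWN(block_start + block_len, PAGE_SIZE) >> PAGE_SHIFT;

	if (first >= next)
		printf("extent too small to cover even one page\n");
	else
		printf("pages %" PRIu64 "..%" PRIu64 " usable\n",
		       first, next - 1);	/* pages 2..5 usable */
	return 0;
}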
+ */ + if (test_and_set_bit(BTRFS_FS_EXCL_OP, &fs_info->flags)) { + btrfs_warn(fs_info, + "cannot activate swapfile while exclusive operation is running"); + return -EBUSY; + } + /* + * Snapshots can create extents which require COW even if NODATACOW is + * set. We use this counter to prevent snapshots. We must increment it + * before walking the extents because we don't want a concurrent + * snapshot to run after we've already checked the extents. + */ + atomic_inc(&BTRFS_I(inode)->root->nr_swapfiles); + + isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize); + + lock_extent_bits(io_tree, 0, isize - 1, &cached_state); + start = 0; + while (start < isize) { + u64 logical_block_start, physical_block_start; + struct btrfs_block_group_cache *bg; + u64 len = isize - start; + + em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, start, len, 0); + if (IS_ERR(em)) { + ret = PTR_ERR(em); + goto out; + } + + if (em->block_start == EXTENT_MAP_HOLE) { + btrfs_warn(fs_info, "swapfile must not have holes"); + ret = -EINVAL; + goto out; + } + if (em->block_start == EXTENT_MAP_INLINE) { + /* + * It's unlikely we'll ever actually find ourselves + * here, as a file small enough to fit inline won't be + * big enough to store more than the swap header, but in + * case something changes in the future, let's catch it + * here rather than later. + */ + btrfs_warn(fs_info, "swapfile must not be inline"); + ret = -EINVAL; + goto out; + } + if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) { + btrfs_warn(fs_info, "swapfile must not be compressed"); + ret = -EINVAL; + goto out; + } + + logical_block_start = em->block_start + (start - em->start); + len = min(len, em->len - (start - em->start)); + free_extent_map(em); + em = NULL; + + ret = can_nocow_extent(inode, start, &len, NULL, NULL, NULL); + if (ret < 0) { + goto out; + } else if (ret) { + ret = 0; + } else { + btrfs_warn(fs_info, + "swapfile must not be copy-on-write"); + ret = -EINVAL; + goto out; + } + + em = btrfs_get_chunk_map(fs_info, logical_block_start, len); + if (IS_ERR(em)) { + ret = PTR_ERR(em); + goto out; + } + + if (em->map_lookup->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) { + btrfs_warn(fs_info, + "swapfile must have single data profile"); + ret = -EINVAL; + goto out; + } + + if (device == NULL) { + device = em->map_lookup->stripes[0].dev; + ret = btrfs_add_swapfile_pin(inode, device, false); + if (ret == 1) + ret = 0; + else if (ret) + goto out; + } else if (device != em->map_lookup->stripes[0].dev) { + btrfs_warn(fs_info, "swapfile must be on one device"); + ret = -EINVAL; + goto out; + } + + physical_block_start = (em->map_lookup->stripes[0].physical + + (logical_block_start - em->start)); + len = min(len, em->len - (logical_block_start - em->start)); + free_extent_map(em); + em = NULL; + + bg = btrfs_lookup_block_group(fs_info, logical_block_start); + if (!bg) { + btrfs_warn(fs_info, + "could not find block group containing swapfile"); + ret = -EINVAL; + goto out; + } + + ret = btrfs_add_swapfile_pin(inode, bg, true); + if (ret) { + btrfs_put_block_group(bg); + if (ret == 1) + ret = 0; + else + goto out; + } + + if (bsi.block_len && + bsi.block_start + bsi.block_len == physical_block_start) { + bsi.block_len += len; + } else { + if (bsi.block_len) { + ret = btrfs_add_swap_extent(sis, &bsi); + if (ret) + goto out; + } + bsi.start = start; + bsi.block_start = physical_block_start; + bsi.block_len = len; + } + + start += len; + } + + if (bsi.block_len) + ret = btrfs_add_swap_extent(sis, &bsi); + +out: + if (!IS_ERR_OR_NULL(em)) + free_extent_map(em); 
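The activation loop just above ends by coalescing physically contiguous extents into one run (bsi.block_start/bsi.block_len) before flushing each run to btrfs_add_swap_extent(). Stripped of the btrfs types, the merge logic looks like this (standalone C over a made-up extent list):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct extent { uint64_t physical; uint64_t len; };

int main(void)
{
	/* Made-up mapped extents; the middle two touch physically. */
	struct extent ex[] = {
		{ 1000, 100 }, { 5000, 50 }, { 5050, 70 }, { 9000, 10 },
	};
	uint64_t run_start = 0, run_len = 0;

	for (int i = 0; i < 4; i++) {
		if (run_len && run_start + run_len == ex[i].physical) {
			run_len += ex[i].len;	/* contiguous: extend */
			continue;
		}
		if (run_len)	/* flush the previous run */
			printf("add extent %" PRIu64 "+%" PRIu64 "\n",
			       run_start, run_len);
		run_start = ex[i].physical;
		run_len = ex[i].len;
	}
	if (run_len)
		printf("add extent %" PRIu64 "+%" PRIu64 "\n",
		       run_start, run_len);	/* 1000+100, 5000+120, 9000+10 */
	return 0;
}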
+ + unlock_extent_cached(io_tree, 0, isize - 1, &cached_state); + + if (ret) + btrfs_swap_deactivate(file); + + clear_bit(BTRFS_FS_EXCL_OP, &fs_info->flags); + + if (ret) + return ret; + + if (device) + sis->bdev = device->bdev; + *span = bsi.highest_ppage - bsi.lowest_ppage + 1; + sis->max = bsi.nr_pages; + sis->pages = bsi.nr_pages - 1; + sis->highest_bit = bsi.nr_pages - 1; + return bsi.nr_extents; +} +#else +static void btrfs_swap_deactivate(struct file *file) +{ +} + +static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file, + sector_t *span) +{ + return -EOPNOTSUPP; +} +#endif + static const struct inode_operations btrfs_dir_inode_operations = { .getattr = btrfs_getattr, .lookup = btrfs_lookup, @@ -10523,17 +10797,6 @@ static const struct extent_io_ops btrfs_extent_io_ops = { /* mandatory callbacks */ .submit_bio_hook = btrfs_submit_bio_hook, .readpage_end_io_hook = btrfs_readpage_end_io_hook, - .readpage_io_failed_hook = btrfs_readpage_io_failed_hook, - - /* optional callbacks */ - .fill_delalloc = run_delalloc_range, - .writepage_end_io_hook = btrfs_writepage_end_io_hook, - .writepage_start_hook = btrfs_writepage_start_hook, - .set_bit_hook = btrfs_set_bit_hook, - .clear_bit_hook = btrfs_clear_bit_hook, - .merge_extent_hook = btrfs_merge_extent_hook, - .split_extent_hook = btrfs_split_extent_hook, - .check_extent_io_range = btrfs_check_extent_io_range, }; /* @@ -10558,6 +10821,8 @@ static const struct address_space_operations btrfs_aops = { .releasepage = btrfs_releasepage, .set_page_dirty = btrfs_set_page_dirty, .error_remove_page = generic_error_remove_page, + .swap_activate = btrfs_swap_activate, + .swap_deactivate = btrfs_swap_deactivate, }; static const struct inode_operations btrfs_file_inode_operations = { diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index 802a628e9f7d..fab9443f6a42 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -290,6 +290,11 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg) } else if (fsflags & FS_COMPR_FL) { const char *comp; + if (IS_SWAPFILE(inode)) { + ret = -ETXTBSY; + goto out_unlock; + } + binode->flags |= BTRFS_INODE_COMPRESS; binode->flags &= ~BTRFS_INODE_NOCOMPRESS; @@ -754,6 +759,12 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir, if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state)) return -EINVAL; + if (atomic_read(&root->nr_swapfiles)) { + btrfs_warn(fs_info, + "cannot snapshot subvolume with active swapfile"); + return -ETXTBSY; + } + pending_snapshot = kzalloc(sizeof(*pending_snapshot), GFP_KERNEL); if (!pending_snapshot) return -ENOMEM; @@ -777,7 +788,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir, wait_event(root->subv_writers->wait, percpu_counter_sum(&root->subv_writers->counter) == 0); - ret = btrfs_start_delalloc_inodes(root); + ret = btrfs_start_delalloc_snapshot(root); if (ret) goto dec_and_free; @@ -1505,9 +1516,13 @@ int btrfs_defrag_file(struct inode *inode, struct file *file, } inode_lock(inode); - if (do_compress) - BTRFS_I(inode)->defrag_compress = compress_type; - ret = cluster_pages_for_defrag(inode, pages, i, cluster); + if (IS_SWAPFILE(inode)) { + ret = -ETXTBSY; + } else { + if (do_compress) + BTRFS_I(inode)->defrag_compress = compress_type; + ret = cluster_pages_for_defrag(inode, pages, i, cluster); + } if (ret < 0) { inode_unlock(inode); goto out_ra; @@ -3135,7 +3150,7 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info, } rcu_read_unlock(); - memcpy(&fi_args->fsid, fs_info->fsid, 
sizeof(fi_args->fsid)); + memcpy(&fi_args->fsid, fs_devices->fsid, sizeof(fi_args->fsid)); fi_args->nodesize = fs_info->nodesize; fi_args->sectorsize = fs_info->sectorsize; fi_args->clone_alignment = fs_info->sectorsize; @@ -3191,92 +3206,6 @@ out: return ret; } -static struct page *extent_same_get_page(struct inode *inode, pgoff_t index) -{ - struct page *page; - - page = grab_cache_page(inode->i_mapping, index); - if (!page) - return ERR_PTR(-ENOMEM); - - if (!PageUptodate(page)) { - int ret; - - ret = btrfs_readpage(NULL, page); - if (ret) - return ERR_PTR(ret); - lock_page(page); - if (!PageUptodate(page)) { - unlock_page(page); - put_page(page); - return ERR_PTR(-EIO); - } - if (page->mapping != inode->i_mapping) { - unlock_page(page); - put_page(page); - return ERR_PTR(-EAGAIN); - } - } - - return page; -} - -static int gather_extent_pages(struct inode *inode, struct page **pages, - int num_pages, u64 off) -{ - int i; - pgoff_t index = off >> PAGE_SHIFT; - - for (i = 0; i < num_pages; i++) { -again: - pages[i] = extent_same_get_page(inode, index + i); - if (IS_ERR(pages[i])) { - int err = PTR_ERR(pages[i]); - - if (err == -EAGAIN) - goto again; - pages[i] = NULL; - return err; - } - } - return 0; -} - -static int lock_extent_range(struct inode *inode, u64 off, u64 len, - bool retry_range_locking) -{ - /* - * Do any pending delalloc/csum calculations on inode, one way or - * another, and lock file content. - * The locking order is: - * - * 1) pages - * 2) range in the inode's io tree - */ - while (1) { - struct btrfs_ordered_extent *ordered; - lock_extent(&BTRFS_I(inode)->io_tree, off, off + len - 1); - ordered = btrfs_lookup_first_ordered_extent(inode, - off + len - 1); - if ((!ordered || - ordered->file_offset + ordered->len <= off || - ordered->file_offset >= off + len) && - !test_range_bit(&BTRFS_I(inode)->io_tree, off, - off + len - 1, EXTENT_DELALLOC, 0, NULL)) { - if (ordered) - btrfs_put_ordered_extent(ordered); - break; - } - unlock_extent(&BTRFS_I(inode)->io_tree, off, off + len - 1); - if (ordered) - btrfs_put_ordered_extent(ordered); - if (!retry_range_locking) - return -EAGAIN; - btrfs_wait_ordered_range(inode, off, len); - } - return 0; -} - static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2) { inode_unlock(inode1); @@ -3292,261 +3221,32 @@ static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2) inode_lock_nested(inode2, I_MUTEX_CHILD); } -static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1, - struct inode *inode2, u64 loff2, u64 len) -{ - unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1); - unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1); -} - -static int btrfs_double_extent_lock(struct inode *inode1, u64 loff1, - struct inode *inode2, u64 loff2, u64 len, - bool retry_range_locking) -{ - int ret; - - if (inode1 < inode2) { - swap(inode1, inode2); - swap(loff1, loff2); - } - ret = lock_extent_range(inode1, loff1, len, retry_range_locking); - if (ret) - return ret; - ret = lock_extent_range(inode2, loff2, len, retry_range_locking); - if (ret) - unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, - loff1 + len - 1); - return ret; -} - -struct cmp_pages { - int num_pages; - struct page **src_pages; - struct page **dst_pages; -}; - -static void btrfs_cmp_data_free(struct cmp_pages *cmp) -{ - int i; - struct page *pg; - - for (i = 0; i < cmp->num_pages; i++) { - pg = cmp->src_pages[i]; - if (pg) { - unlock_page(pg); - put_page(pg); - cmp->src_pages[i] = NULL; - } - pg = 
cmp->dst_pages[i]; - if (pg) { - unlock_page(pg); - put_page(pg); - cmp->dst_pages[i] = NULL; - } - } -} - -static int btrfs_cmp_data_prepare(struct inode *src, u64 loff, - struct inode *dst, u64 dst_loff, - u64 len, struct cmp_pages *cmp) -{ - int ret; - int num_pages = PAGE_ALIGN(len) >> PAGE_SHIFT; - - cmp->num_pages = num_pages; - - ret = gather_extent_pages(src, cmp->src_pages, num_pages, loff); - if (ret) - goto out; - - ret = gather_extent_pages(dst, cmp->dst_pages, num_pages, dst_loff); - -out: - if (ret) - btrfs_cmp_data_free(cmp); - return ret; -} - -static int btrfs_cmp_data(u64 len, struct cmp_pages *cmp) -{ - int ret = 0; - int i; - struct page *src_page, *dst_page; - unsigned int cmp_len = PAGE_SIZE; - void *addr, *dst_addr; - - i = 0; - while (len) { - if (len < PAGE_SIZE) - cmp_len = len; - - BUG_ON(i >= cmp->num_pages); - - src_page = cmp->src_pages[i]; - dst_page = cmp->dst_pages[i]; - ASSERT(PageLocked(src_page)); - ASSERT(PageLocked(dst_page)); - - addr = kmap_atomic(src_page); - dst_addr = kmap_atomic(dst_page); - - flush_dcache_page(src_page); - flush_dcache_page(dst_page); - - if (memcmp(addr, dst_addr, cmp_len)) - ret = -EBADE; - - kunmap_atomic(addr); - kunmap_atomic(dst_addr); - - if (ret) - break; - - len -= cmp_len; - i++; - } - - return ret; -} - -static int extent_same_check_offsets(struct inode *inode, u64 off, u64 *plen, - u64 olen) -{ - u64 len = *plen; - u64 bs = BTRFS_I(inode)->root->fs_info->sb->s_blocksize; - - if (off + olen > inode->i_size || off + olen < off) - return -EINVAL; - - /* if we extend to eof, continue to block boundary */ - if (off + len == inode->i_size) - *plen = len = ALIGN(inode->i_size, bs) - off; - - /* Check that we are block aligned - btrfs_clone() requires this */ - if (!IS_ALIGNED(off, bs) || !IS_ALIGNED(off + len, bs)) - return -EINVAL; - - return 0; -} - static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen, - struct inode *dst, u64 dst_loff, - struct cmp_pages *cmp) + struct inode *dst, u64 dst_loff) { + u64 bs = BTRFS_I(src)->root->fs_info->sb->s_blocksize; int ret; u64 len = olen; - bool same_inode = (src == dst); - u64 same_lock_start = 0; - u64 same_lock_len = 0; - - ret = extent_same_check_offsets(src, loff, &len, olen); - if (ret) - return ret; - - ret = extent_same_check_offsets(dst, dst_loff, &len, olen); - if (ret) - return ret; - - if (same_inode) { - /* - * Single inode case wants the same checks, except we - * don't want our length pushed out past i_size as - * comparing that data range makes no sense. - * - * extent_same_check_offsets() will do this for an - * unaligned length at i_size, so catch it here and - * reject the request. - * - * This effectively means we require aligned extents - * for the single-inode case, whereas the other cases - * allow an unaligned length so long as it ends at - * i_size. - */ - if (len != olen) - return -EINVAL; - - /* Check for overlapping ranges */ - if (dst_loff + len > loff && dst_loff < loff + len) - return -EINVAL; - - same_lock_start = min_t(u64, loff, dst_loff); - same_lock_len = max_t(u64, loff, dst_loff) + len - same_lock_start; - } else { - /* - * If the source and destination inodes are different, the - * source's range end offset matches the source's i_size, that - * i_size is not a multiple of the sector size, and the - * destination range does not go past the destination's i_size, - * we must round down the length to the nearest sector size - * multiple. 
If we don't do this adjustment we end replacing - * with zeroes the bytes in the range that starts at the - * deduplication range's end offset and ends at the next sector - * size multiple. - */ - if (loff + olen == i_size_read(src) && - dst_loff + len < i_size_read(dst)) { - const u64 sz = BTRFS_I(src)->root->fs_info->sectorsize; - - len = round_down(i_size_read(src), sz) - loff; - if (len == 0) - return 0; - olen = len; - } - } - -again: - ret = btrfs_cmp_data_prepare(src, loff, dst, dst_loff, olen, cmp); - if (ret) - return ret; - if (same_inode) - ret = lock_extent_range(src, same_lock_start, same_lock_len, - false); - else - ret = btrfs_double_extent_lock(src, loff, dst, dst_loff, len, - false); + if (loff + len == src->i_size) + len = ALIGN(src->i_size, bs) - loff; /* - * If one of the inodes has dirty pages in the respective range or - * ordered extents, we need to flush dellaloc and wait for all ordered - * extents in the range. We must unlock the pages and the ranges in the - * io trees to avoid deadlocks when flushing delalloc (requires locking - * pages) and when waiting for ordered extents to complete (they require - * range locking). + * For same inode case we don't want our length pushed out past i_size + * as comparing that data range makes no sense. + * + * This effectively means we require aligned extents for the single + * inode case, whereas the other cases allow an unaligned length so long + * as it ends at i_size. */ - if (ret == -EAGAIN) { - /* - * Ranges in the io trees already unlocked. Now unlock all - * pages before waiting for all IO to complete. - */ - btrfs_cmp_data_free(cmp); - if (same_inode) { - btrfs_wait_ordered_range(src, same_lock_start, - same_lock_len); - } else { - btrfs_wait_ordered_range(src, loff, len); - btrfs_wait_ordered_range(dst, dst_loff, len); - } - goto again; - } - ASSERT(ret == 0); - if (WARN_ON(ret)) { - /* ranges in the io trees already unlocked */ - btrfs_cmp_data_free(cmp); - return ret; - } - - /* pass original length for comparison so we stay within i_size */ - ret = btrfs_cmp_data(olen, cmp); - if (ret == 0) - ret = btrfs_clone(src, dst, loff, olen, len, dst_loff, 1); - - if (same_inode) - unlock_extent(&BTRFS_I(src)->io_tree, same_lock_start, - same_lock_start + same_lock_len - 1); - else - btrfs_double_extent_unlock(src, loff, dst, dst_loff, len); + if (dst == src && len != olen) + return -EINVAL; - btrfs_cmp_data_free(cmp); + /* + * Lock destination range to serialize with concurrent readpages(). 
+ */ + lock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1); + ret = btrfs_clone(src, dst, loff, olen, len, dst_loff, 1); + unlock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1); return ret; } @@ -3557,58 +3257,27 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen, struct inode *dst, u64 dst_loff) { int ret; - struct cmp_pages cmp; int num_pages = PAGE_ALIGN(BTRFS_MAX_DEDUPE_LEN) >> PAGE_SHIFT; - bool same_inode = (src == dst); u64 i, tail_len, chunk_count; - if (olen == 0) - return 0; - - if (same_inode) - inode_lock(src); - else - btrfs_double_inode_lock(src, dst); - /* don't make the dst file partly checksummed */ if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) != - (BTRFS_I(dst)->flags & BTRFS_INODE_NODATASUM)) { - ret = -EINVAL; - goto out_unlock; - } + (BTRFS_I(dst)->flags & BTRFS_INODE_NODATASUM)) + return -EINVAL; + + if (IS_SWAPFILE(src) || IS_SWAPFILE(dst)) + return -ETXTBSY; tail_len = olen % BTRFS_MAX_DEDUPE_LEN; chunk_count = div_u64(olen, BTRFS_MAX_DEDUPE_LEN); if (chunk_count == 0) num_pages = PAGE_ALIGN(tail_len) >> PAGE_SHIFT; - /* - * If deduping ranges in the same inode, locking rules make it - * mandatory to always lock pages in ascending order to avoid deadlocks - * with concurrent tasks (such as starting writeback/delalloc). - */ - if (same_inode && dst_loff < loff) - swap(loff, dst_loff); - - /* - * We must gather up all the pages before we initiate our extent - * locking. We use an array for the page pointers. Size of the array is - * bounded by len, which is in turn bounded by BTRFS_MAX_DEDUPE_LEN. - */ - cmp.src_pages = kvmalloc_array(num_pages, sizeof(struct page *), - GFP_KERNEL | __GFP_ZERO); - cmp.dst_pages = kvmalloc_array(num_pages, sizeof(struct page *), - GFP_KERNEL | __GFP_ZERO); - if (!cmp.src_pages || !cmp.dst_pages) { - ret = -ENOMEM; - goto out_free; - } - for (i = 0; i < chunk_count; i++) { ret = btrfs_extent_same_range(src, loff, BTRFS_MAX_DEDUPE_LEN, - dst, dst_loff, &cmp); + dst, dst_loff); if (ret) - goto out_free; + return ret; loff += BTRFS_MAX_DEDUPE_LEN; dst_loff += BTRFS_MAX_DEDUPE_LEN; @@ -3616,17 +3285,7 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen, if (tail_len > 0) ret = btrfs_extent_same_range(src, loff, tail_len, dst, - dst_loff, &cmp); - -out_free: - kvfree(cmp.src_pages); - kvfree(cmp.dst_pages); - -out_unlock: - if (same_inode) - inode_unlock(src); - else - btrfs_double_inode_unlock(src, dst); + dst_loff); return ret; } @@ -4213,11 +3872,9 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src, struct inode *inode = file_inode(file); struct inode *src = file_inode(file_src); struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); - struct btrfs_root *root = BTRFS_I(inode)->root; int ret; u64 len = olen; u64 bs = fs_info->sb->s_blocksize; - int same_inode = src == inode; /* * TODO: @@ -4230,101 +3887,35 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src, * be either compressed or non-compressed. 
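The simplified btrfs_extent_same() above still walks the range in BTRFS_MAX_DEDUPE_LEN chunks plus a tail, which is plain division and remainder. For example (self-contained C; 16M matches the kernel's BTRFS_MAX_DEDUPE_LEN, while the 40M length is arbitrary):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_DEDUPE_LEN	(16 * 1024 * 1024ULL)	/* 16M, as in btrfs */

int main(void)
{
	uint64_t olen = 40 * 1024 * 1024ULL + 123;	/* arbitrary length */
	uint64_t chunks = olen / MAX_DEDUPE_LEN;
	uint64_t tail = olen % MAX_DEDUPE_LEN;
	uint64_t off = 0;

	for (uint64_t i = 0; i < chunks; i++, off += MAX_DEDUPE_LEN)
		printf("dedupe %llu bytes at offset %" PRIu64 "\n",
		       (unsigned long long)MAX_DEDUPE_LEN, off);
	if (tail)
		printf("dedupe tail of %" PRIu64 " bytes at offset %" PRIu64 "\n",
		       tail, off);
	return 0;
}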
*/ - if (btrfs_root_readonly(root)) - return -EROFS; - - if (file_src->f_path.mnt != file->f_path.mnt || - src->i_sb != inode->i_sb) - return -EXDEV; - - if (S_ISDIR(src->i_mode) || S_ISDIR(inode->i_mode)) - return -EISDIR; - - if (!same_inode) { - btrfs_double_inode_lock(src, inode); - } else { - inode_lock(src); - } - /* don't make the dst file partly checksummed */ if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) != - (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) { - ret = -EINVAL; - goto out_unlock; - } + (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) + return -EINVAL; + + if (IS_SWAPFILE(src) || IS_SWAPFILE(inode)) + return -ETXTBSY; - /* determine range to clone */ - ret = -EINVAL; - if (off + len > src->i_size || off + len < off) - goto out_unlock; - if (len == 0) - olen = len = src->i_size - off; /* - * If we extend to eof, continue to block boundary if and only if the - * destination end offset matches the destination file's size, otherwise - * we would be corrupting data by placing the eof block into the middle - * of a file. + * VFS's generic_remap_file_range_prep() protects us from cloning the + * eof block into the middle of a file, which would result in corruption + * if the file size is not blocksize aligned. So we don't need to check + * for that case here. */ - if (off + len == src->i_size) { - if (!IS_ALIGNED(len, bs) && destoff + len < inode->i_size) - goto out_unlock; + if (off + len == src->i_size) len = ALIGN(src->i_size, bs) - off; - } - - if (len == 0) { - ret = 0; - goto out_unlock; - } - - /* verify the end result is block aligned */ - if (!IS_ALIGNED(off, bs) || !IS_ALIGNED(off + len, bs) || - !IS_ALIGNED(destoff, bs)) - goto out_unlock; - - /* verify if ranges are overlapped within the same file */ - if (same_inode) { - if (destoff + len > off && destoff < off + len) - goto out_unlock; - } if (destoff > inode->i_size) { ret = btrfs_cont_expand(inode, inode->i_size, destoff); if (ret) - goto out_unlock; + return ret; } /* - * Lock the target range too. Right after we replace the file extent - * items in the fs tree (which now point to the cloned data), we might - * have a worker replace them with extent items relative to a write - * operation that was issued before this clone operation (i.e. confront - * with inode.c:btrfs_finish_ordered_io). + * Lock destination range to serialize with concurrent readpages(). */ - if (same_inode) { - u64 lock_start = min_t(u64, off, destoff); - u64 lock_len = max_t(u64, off, destoff) + len - lock_start; - - ret = lock_extent_range(src, lock_start, lock_len, true); - } else { - ret = btrfs_double_extent_lock(src, off, inode, destoff, len, - true); - } - ASSERT(ret == 0); - if (WARN_ON(ret)) { - /* ranges in the io trees already unlocked */ - goto out_unlock; - } - + lock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1); ret = btrfs_clone(src, inode, off, olen, len, destoff, 0); - - if (same_inode) { - u64 lock_start = min_t(u64, off, destoff); - u64 lock_end = max_t(u64, off, destoff) + len - 1; - - unlock_extent(&BTRFS_I(src)->io_tree, lock_start, lock_end); - } else { - btrfs_double_extent_unlock(src, off, inode, destoff, len); - } + unlock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1); /* * Truncate page cache pages so that future reads will see the cloned * data immediately and not the previous data. 
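Several hunks in this patch return -ETXTBSY when a swapfile is involved: setflags, defrag, dedupe and clone all refuse IS_SWAPFILE() inodes, and create_snapshot() refuses roots whose nr_swapfiles counter is non-zero, while swap activation bumps that counter before walking the extents. A toy model of the counter handshake (C11 atomics; this ignores the kernel's real locking and only shows the ordering discipline):

#include <errno.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int nr_swapfiles;

static int create_snapshot(void)
{
	if (atomic_load(&nr_swapfiles) > 0)
		return -ETXTBSY;	/* active swapfile pins the extents */
	/* ... snapshot work ... */
	return 0;
}

/* Activation bumps the counter before walking the file's extents. */
static void swap_activate(void)   { atomic_fetch_add(&nr_swapfiles, 1); }
static void swap_deactivate(void) { atomic_fetch_sub(&nr_swapfiles, 1); }

int main(void)
{
	swap_activate();
	printf("%d\n", create_snapshot());	/* -ETXTBSY */
	swap_deactivate();
	printf("%d\n", create_snapshot());	/* 0 */
	return 0;
}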
@@ -4332,11 +3923,87 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src, truncate_inode_pages_range(&inode->i_data, round_down(destoff, PAGE_SIZE), round_up(destoff + len, PAGE_SIZE) - 1); -out_unlock: + + return ret; +} + +static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in, + struct file *file_out, loff_t pos_out, + loff_t *len, unsigned int remap_flags) +{ + struct inode *inode_in = file_inode(file_in); + struct inode *inode_out = file_inode(file_out); + u64 bs = BTRFS_I(inode_out)->root->fs_info->sb->s_blocksize; + bool same_inode = inode_out == inode_in; + u64 wb_len; + int ret; + + if (!(remap_flags & REMAP_FILE_DEDUP)) { + struct btrfs_root *root_out = BTRFS_I(inode_out)->root; + + if (btrfs_root_readonly(root_out)) + return -EROFS; + + if (file_in->f_path.mnt != file_out->f_path.mnt || + inode_in->i_sb != inode_out->i_sb) + return -EXDEV; + } + + if (same_inode) + inode_lock(inode_in); + else + btrfs_double_inode_lock(inode_in, inode_out); + + /* + * Now that the inodes are locked, we need to start writeback ourselves + * and cannot rely on the writeback from the VFS's generic helper + * generic_remap_file_range_prep() because: + * + * 1) For compression we must call filemap_fdatawrite_range() + * twice (btrfs_fdatawrite_range() does it for us), and the generic + * helper only calls it once; + * + * 2) filemap_fdatawrite_range(), called by the generic helper, only + * waits for the writeback to complete, i.e. for IO to be done, and + * not for the ordered extents to complete. We need to wait for them + * to complete so that new file extent items are in the fs tree. + */ + if (*len == 0 && !(remap_flags & REMAP_FILE_DEDUP)) + wb_len = ALIGN(inode_in->i_size, bs) - ALIGN_DOWN(pos_in, bs); + else + wb_len = ALIGN(*len, bs); + + /* + * Since we don't lock ranges, wait for ongoing lockless dio writes (as + * any in progress could create its ordered extents after we wait for + * existing ordered extents below). + */ + inode_dio_wait(inode_in); if (!same_inode) - btrfs_double_inode_unlock(src, inode); + inode_dio_wait(inode_out); + + ret = btrfs_wait_ordered_range(inode_in, ALIGN_DOWN(pos_in, bs), + wb_len); + if (ret < 0) + goto out_unlock; + ret = btrfs_wait_ordered_range(inode_out, ALIGN_DOWN(pos_out, bs), + wb_len); + if (ret < 0) + goto out_unlock; + + ret = generic_remap_file_range_prep(file_in, pos_in, file_out, pos_out, + len, remap_flags); + if (ret < 0 || *len == 0) + goto out_unlock; + + return 0; + + out_unlock: + if (same_inode) + inode_unlock(inode_in); else - inode_unlock(src); + btrfs_double_inode_unlock(inode_in, inode_out); + return ret; }
- */ - return -EINVAL; - } + ret = btrfs_remap_file_range_prep(src_file, off, dst_file, destoff, + &len, remap_flags); + if (ret < 0 || len == 0) + return ret; - ret = btrfs_extent_same(src, off, len, dst, destoff); - } else { + if (remap_flags & REMAP_FILE_DEDUP) + ret = btrfs_extent_same(src_inode, off, len, dst_inode, destoff); + else ret = btrfs_clone_files(dst_file, src_file, off, len, destoff); - } + + if (same_inode) + inode_unlock(src_inode); + else + btrfs_double_inode_unlock(src_inode, dst_inode); + return ret < 0 ? ret : len; } diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c index b6a4cc178bee..90639140439f 100644 --- a/fs/btrfs/lzo.c +++ b/fs/btrfs/lzo.c @@ -27,7 +27,7 @@ * Records the total size (including the header) of compressed data. * * 2. Segment(s) - * Variable size. Each segment includes one segment header, followd by data + * Variable size. Each segment includes one segment header, followed by data * payload. * One regular LZO compressed extent can have one or more segments. * For inlined LZO compressed extent, only one segment is allowed. diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c index 0c4ef208b8b9..6fde2b2741ef 100644 --- a/fs/btrfs/ordered-data.c +++ b/fs/btrfs/ordered-data.c @@ -460,7 +460,6 @@ void btrfs_remove_ordered_extent(struct inode *inode, struct btrfs_inode *btrfs_inode = BTRFS_I(inode); struct btrfs_root *root = btrfs_inode->root; struct rb_node *node; - bool dec_pending_ordered = false; /* This is paired with btrfs_add_ordered_extent. */ spin_lock(&btrfs_inode->lock); @@ -477,37 +476,8 @@ void btrfs_remove_ordered_extent(struct inode *inode, if (tree->last == node) tree->last = NULL; set_bit(BTRFS_ORDERED_COMPLETE, &entry->flags); - if (test_and_clear_bit(BTRFS_ORDERED_PENDING, &entry->flags)) - dec_pending_ordered = true; spin_unlock_irq(&tree->lock); - /* - * The current running transaction is waiting on us, we need to let it - * know that we're complete and wake it up. - */ - if (dec_pending_ordered) { - struct btrfs_transaction *trans; - - /* - * The checks for trans are just a formality, it should be set, - * but if it isn't we don't want to deref/assert under the spin - * lock, so be nice and check if trans is set, but ASSERT() so - * if it isn't set a developer will notice. - */ - spin_lock(&fs_info->trans_lock); - trans = fs_info->running_transaction; - if (trans) - refcount_inc(&trans->use_count); - spin_unlock(&fs_info->trans_lock); - - ASSERT(trans); - if (trans) { - if (atomic_dec_and_test(&trans->pending_ordered)) - wake_up(&trans->pending_wait); - btrfs_put_transaction(trans); - } - } - spin_lock(&root->ordered_extent_lock); list_del_init(&entry->root_extent_list); root->nr_ordered_extents--; diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h index 02d813aaa261..fb9a161f0215 100644 --- a/fs/btrfs/ordered-data.h +++ b/fs/btrfs/ordered-data.h @@ -37,28 +37,31 @@ struct btrfs_ordered_sum { * rbtree, just before waking any waiters. It is used to indicate the * IO is done and any metadata is inserted into the tree. 
*/ -#define BTRFS_ORDERED_IO_DONE 0 /* set when all the pages are written */ - -#define BTRFS_ORDERED_COMPLETE 1 /* set when removed from the tree */ - -#define BTRFS_ORDERED_NOCOW 2 /* set when we want to write in place */ - -#define BTRFS_ORDERED_COMPRESSED 3 /* writing a zlib compressed extent */ - -#define BTRFS_ORDERED_PREALLOC 4 /* set when writing to preallocated extent */ - -#define BTRFS_ORDERED_DIRECT 5 /* set when we're doing DIO with this extent */ - -#define BTRFS_ORDERED_IOERR 6 /* We had an io error when writing this out */ - -#define BTRFS_ORDERED_UPDATED_ISIZE 7 /* indicates whether this ordered extent - * has done its due diligence in updating - * the isize. */ -#define BTRFS_ORDERED_TRUNCATED 8 /* Set when we have to truncate an extent */ - -#define BTRFS_ORDERED_PENDING 9 /* We are waiting for this ordered extent to - * complete in the current transaction. */ -#define BTRFS_ORDERED_REGULAR 10 /* Regular IO for COW */ +enum { + /* set when all the pages are written */ + BTRFS_ORDERED_IO_DONE, + /* set when removed from the tree */ + BTRFS_ORDERED_COMPLETE, + /* set when we want to write in place */ + BTRFS_ORDERED_NOCOW, + /* writing a zlib compressed extent */ + BTRFS_ORDERED_COMPRESSED, + /* set when writing to preallocated extent */ + BTRFS_ORDERED_PREALLOC, + /* set when we're doing DIO with this extent */ + BTRFS_ORDERED_DIRECT, + /* We had an io error when writing this out */ + BTRFS_ORDERED_IOERR, + /* + * indicates whether this ordered extent has done its due diligence in + * updating the isize + */ + BTRFS_ORDERED_UPDATED_ISIZE, + /* Set when we have to truncate an extent */ + BTRFS_ORDERED_TRUNCATED, + /* Regular IO for COW */ + BTRFS_ORDERED_REGULAR, +}; struct btrfs_ordered_extent { /* logical offset in the file */ diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index f70825af6438..4e473a998219 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -30,7 +30,7 @@ * - sync * - copy also limits on subvol creation * - limit - * - caches fuer ulists + * - caches for ulists * - performance benchmarks * - check all ioctl parameters */ @@ -522,7 +522,7 @@ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info) __del_qgroup_rb(qgroup); } /* - * we call btrfs_free_qgroup_config() when umounting + * We call btrfs_free_qgroup_config() when unmounting * filesystem and disabling quota, so we set qgroup_ulist * to be null here to avoid double free. */ @@ -1013,16 +1013,22 @@ out_add_root: btrfs_abort_transaction(trans, ret); goto out_free_path; } - spin_lock(&fs_info->qgroup_lock); - fs_info->quota_root = quota_root; - set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); - spin_unlock(&fs_info->qgroup_lock); ret = btrfs_commit_transaction(trans); trans = NULL; if (ret) goto out_free_path; + /* + * Set quota enabled flag after committing the transaction, to avoid + * deadlocks on fs_info->qgroup_ioctl_lock with concurrent snapshot + * creation. + */ + spin_lock(&fs_info->qgroup_lock); + fs_info->quota_root = quota_root; + set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); + spin_unlock(&fs_info->qgroup_lock); + ret = qgroup_rescan_init(fs_info, 0, 1); if (!ret) { qgroup_rescan_zero_tracking(fs_info); @@ -1122,7 +1128,7 @@ static void qgroup_dirty(struct btrfs_fs_info *fs_info, * The easy accounting, we're updating qgroup relationship whose child qgroup * only has exclusive extents. 
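The ordered-data flags above move from #define constants to an anonymous enum, which keeps the bit numbers sequential automatically while leaving every use site (set_bit/test_bit style) unchanged. The same pattern in miniature (plain C; set_flag/test_flag are stand-ins for the kernel's bitops):

#include <stdio.h>

/* Enumerators are bit numbers, exactly like the converted flags. */
enum {
	ORDERED_IO_DONE,	/* 0 */
	ORDERED_COMPLETE,	/* 1 */
	ORDERED_NOCOW,		/* 2 */
};

#define set_flag(flags, bit)	((flags) |= 1UL << (bit))
#define test_flag(flags, bit)	(!!((flags) & (1UL << (bit))))

int main(void)
{
	unsigned long flags = 0;

	set_flag(flags, ORDERED_COMPLETE);
	printf("%d %d\n", test_flag(flags, ORDERED_IO_DONE),
	       test_flag(flags, ORDERED_COMPLETE));	/* prints: 0 1 */
	return 0;
}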
* - * In this case, all exclsuive extents will also be exlusive for parent, so + * In this case, all exclusive extents will also be exclusive for parent, so * excl/rfer just get added/removed. * * So is qgroup reservation space, which should also be added/removed to @@ -1749,14 +1755,14 @@ static int adjust_slots_upwards(struct btrfs_path *path, int root_level) * * 2) Mark the final tree blocks in @src_path and @dst_path qgroup dirty * NOTE: In above case, OO(a) and NN(a) won't be marked qgroup dirty. - * They should be marked during preivous (@dst_level = 1) iteration. + * They should be marked during previous (@dst_level = 1) iteration. * * 3) Mark file extents in leaves dirty * We don't have good way to pick out new file extents only. * So we still follow the old method by scanning all file extents in * the leave. * - * This function can free us from keeping two pathes, thus later we only need + * This function can free us from keeping two paths, thus later we only need * to care about how to iterate all new tree blocks in reloc tree. */ static int qgroup_trace_extent_swap(struct btrfs_trans_handle* trans, @@ -1895,7 +1901,7 @@ out: * * We will iterate through tree blocks NN(b), NN(d) and info qgroup to trace * above tree blocks along with their counter parts in file tree. - * While during search, old tree blocsk OO(c) will be skiped as tree block swap + * While during search, old tree blocks OO(c) will be skipped as tree block swap * won't affect OO(c). */ static int qgroup_trace_new_subtree_blocks(struct btrfs_trans_handle* trans, @@ -2020,7 +2026,7 @@ out: * Will go down the tree block pointed by @dst_eb (pointed by @dst_parent and * @dst_slot), and find any tree blocks whose generation is at @last_snapshot, * and then go down @src_eb (pointed by @src_parent and @src_slot) to find - * the conterpart of the tree block, then mark both tree blocks as qgroup dirty, + * the counterpart of the tree block, then mark both tree blocks as qgroup dirty, * and skip all tree blocks whose generation is smaller than last_snapshot. * * This would skip tons of tree blocks of original btrfs_qgroup_trace_subtree(), @@ -3104,9 +3110,6 @@ static int qgroup_rescan_leaf(struct btrfs_trans_handle *trans, mutex_unlock(&fs_info->qgroup_rescan_lock); goto out; } - extent_buffer_get(scratch_leaf); - btrfs_tree_read_lock(scratch_leaf); - btrfs_set_lock_blocking_rw(scratch_leaf, BTRFS_READ_LOCK); slot = path->slots[0]; btrfs_release_path(path); mutex_unlock(&fs_info->qgroup_rescan_lock); @@ -3132,10 +3135,8 @@ static int qgroup_rescan_leaf(struct btrfs_trans_handle *trans, goto out; } out: - if (scratch_leaf) { - btrfs_tree_read_unlock_blocking(scratch_leaf); + if (scratch_leaf) free_extent_buffer(scratch_leaf); - } if (done && !ret) { ret = 1; diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h index d8f78f5ab854..20c6bd5fa701 100644 --- a/fs/btrfs/qgroup.h +++ b/fs/btrfs/qgroup.h @@ -70,7 +70,7 @@ struct btrfs_qgroup_extent_record { * be converted into META_PERTRANS. */ enum btrfs_qgroup_rsv_type { - BTRFS_QGROUP_RSV_DATA = 0, + BTRFS_QGROUP_RSV_DATA, BTRFS_QGROUP_RSV_META_PERTRANS, BTRFS_QGROUP_RSV_META_PREALLOC, BTRFS_QGROUP_RSV_LAST, @@ -81,10 +81,10 @@ enum btrfs_qgroup_rsv_type { * * Each type should have different reservation behavior. * E.g, data follows its io_tree flag modification, while - * *currently* meta is just reserve-and-clear during transcation. + * *currently* meta is just reserve-and-clear during transaction. * * TODO: Add new type for reservation which can survive transaction commit. 
- * Currect metadata reservation behavior is not suitable for such case. + * Current metadata reservation behavior is not suitable for such case. */ struct btrfs_qgroup_rsv { u64 values[BTRFS_QGROUP_RSV_LAST]; diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index df41d7049936..e74455eb42f9 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -1980,7 +1980,7 @@ cleanup_io: * - In case of single failure, where rbio->failb == -1: * * Cache this rbio iff the above read reconstruction is - * excuted without problems. + * executed without problems. */ if (err == BLK_STS_OK && rbio->failb < 0) cache_rbio_pages(rbio); diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c index dec14b739b10..10d9589001a9 100644 --- a/fs/btrfs/reada.c +++ b/fs/btrfs/reada.c @@ -376,26 +376,28 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info, goto error; } + /* Insert extent in reada tree + all per-device trees, all or nothing */ + down_read(&fs_info->dev_replace.rwsem); ret = radix_tree_preload(GFP_KERNEL); - if (ret) + if (ret) { + up_read(&fs_info->dev_replace.rwsem); goto error; + } - /* insert extent in reada_tree + all per-device trees, all or nothing */ - btrfs_dev_replace_read_lock(&fs_info->dev_replace); spin_lock(&fs_info->reada_lock); ret = radix_tree_insert(&fs_info->reada_tree, index, re); if (ret == -EEXIST) { re_exist = radix_tree_lookup(&fs_info->reada_tree, index); re_exist->refcnt++; spin_unlock(&fs_info->reada_lock); - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); radix_tree_preload_end(); + up_read(&fs_info->dev_replace.rwsem); goto error; } if (ret) { spin_unlock(&fs_info->reada_lock); - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); radix_tree_preload_end(); + up_read(&fs_info->dev_replace.rwsem); goto error; } radix_tree_preload_end(); @@ -437,13 +439,13 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info, } radix_tree_delete(&fs_info->reada_tree, index); spin_unlock(&fs_info->reada_lock); - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); + up_read(&fs_info->dev_replace.rwsem); goto error; } have_zone = 1; } spin_unlock(&fs_info->reada_lock); - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); + up_read(&fs_info->dev_replace.rwsem); if (!have_zone) goto error; diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c index d69fbfb30aa9..c3557c12656b 100644 --- a/fs/btrfs/ref-verify.c +++ b/fs/btrfs/ref-verify.c @@ -43,7 +43,7 @@ struct ref_entry { * back to the delayed ref action. We hold the ref we are changing in the * action so we can account for the history properly, and we record the root we * were called with since it could be different from ref_root. We also store - * stack traces because thats how I roll. + * stack traces because that's how I roll. */ struct ref_action { int action; @@ -56,7 +56,7 @@ struct ref_action { /* * One of these for every block we reference, it holds the roots and references - * to it as well as all of the ref actions that have occured to it. We never + * to it as well as all of the ref actions that have occurred to it. We never * free it until we unmount the file system in order to make sure re-allocations * are happening properly. */ @@ -859,7 +859,7 @@ int btrfs_ref_tree_mod(struct btrfs_root *root, u64 bytenr, u64 num_bytes, * This shouldn't happen because we will add our re * above when we lookup the be with !parent, but just in * case catch this case so we don't panic because I - * didn't thik of some other corner case. + * didn't think of some other corner case. 
*/ btrfs_err(fs_info, "failed to find root %llu for %llu", root->root_key.objectid, be->bytenr); diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index a3f75b8926d4..272b287f8cf0 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -2631,7 +2631,7 @@ static int reserve_metadata_space(struct btrfs_trans_handle *trans, * only one thread can access block_rsv at this point, * so we don't need hold lock to protect block_rsv. * we expand more reservation size here to allow enough - * space for relocation and we will return eailer in + * space for relocation and we will return earlier in * enospc case. */ rc->block_rsv->size = tmp + fs_info->nodesize * @@ -4185,37 +4185,13 @@ static struct reloc_control *alloc_reloc_control(void) static void describe_relocation(struct btrfs_fs_info *fs_info, struct btrfs_block_group_cache *block_group) { - char buf[128]; /* prefixed by a '|' that'll be dropped */ - u64 flags = block_group->flags; + char buf[128] = {'\0'}; - /* Shouldn't happen */ - if (!flags) { - strcpy(buf, "|NONE"); - } else { - char *bp = buf; - -#define DESCRIBE_FLAG(f, d) \ - if (flags & BTRFS_BLOCK_GROUP_##f) { \ - bp += snprintf(bp, buf - bp + sizeof(buf), "|%s", d); \ - flags &= ~BTRFS_BLOCK_GROUP_##f; \ - } - DESCRIBE_FLAG(DATA, "data"); - DESCRIBE_FLAG(SYSTEM, "system"); - DESCRIBE_FLAG(METADATA, "metadata"); - DESCRIBE_FLAG(RAID0, "raid0"); - DESCRIBE_FLAG(RAID1, "raid1"); - DESCRIBE_FLAG(DUP, "dup"); - DESCRIBE_FLAG(RAID10, "raid10"); - DESCRIBE_FLAG(RAID5, "raid5"); - DESCRIBE_FLAG(RAID6, "raid6"); - if (flags) - snprintf(bp, buf - bp + sizeof(buf), "|0x%llx", flags); -#undef DESCRIBE_FLAG - } + btrfs_describe_block_groups(block_group->flags, buf, sizeof(buf)); btrfs_info(fs_info, "relocating block group %llu flags %s", - block_group->key.objectid, buf + 1); + block_group->key.objectid, buf); } /* @@ -4223,6 +4199,7 @@ static void describe_relocation(struct btrfs_fs_info *fs_info, */ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start) { + struct btrfs_block_group_cache *bg; struct btrfs_root *extent_root = fs_info->extent_root; struct reloc_control *rc; struct inode *inode; @@ -4231,14 +4208,23 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start) int rw = 0; int err = 0; + bg = btrfs_lookup_block_group(fs_info, group_start); + if (!bg) + return -ENOENT; + + if (btrfs_pinned_by_swapfile(fs_info, bg)) { + btrfs_put_block_group(bg); + return -ETXTBSY; + } + rc = alloc_reloc_control(); - if (!rc) + if (!rc) { + btrfs_put_block_group(bg); return -ENOMEM; + } rc->extent_root = extent_root; - - rc->block_group = btrfs_lookup_block_group(fs_info, group_start); - BUG_ON(!rc->block_group); + rc->block_group = bg; ret = btrfs_inc_block_group_ro(rc->block_group); if (ret) { diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 902819d3cf41..6dcd36d7b849 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -339,7 +339,9 @@ static struct full_stripe_lock *insert_full_stripe_lock( } } - /* Insert new lock */ + /* + * Insert new lock. 
+ */ ret = kmalloc(sizeof(*ret), GFP_KERNEL); if (!ret) return ERR_PTR(-ENOMEM); @@ -568,12 +570,11 @@ static void scrub_put_ctx(struct scrub_ctx *sctx) scrub_free_ctx(sctx); } -static noinline_for_stack -struct scrub_ctx *scrub_setup_ctx(struct btrfs_device *dev, int is_dev_replace) +static noinline_for_stack struct scrub_ctx *scrub_setup_ctx( + struct btrfs_fs_info *fs_info, int is_dev_replace) { struct scrub_ctx *sctx; int i; - struct btrfs_fs_info *fs_info = dev->fs_info; sctx = kzalloc(sizeof(*sctx), GFP_KERNEL); if (!sctx) @@ -582,7 +583,7 @@ struct scrub_ctx *scrub_setup_ctx(struct btrfs_device *dev, int is_dev_replace) sctx->is_dev_replace = is_dev_replace; sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO; sctx->curr = -1; - sctx->fs_info = dev->fs_info; + sctx->fs_info = fs_info; for (i = 0; i < SCRUB_BIOS_PER_SCTX; ++i) { struct scrub_bio *sbio; @@ -832,6 +833,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check) int page_num; int success; bool full_stripe_locked; + unsigned int nofs_flag; static DEFINE_RATELIMIT_STATE(_rs, DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); @@ -856,6 +858,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check) have_csum = sblock_to_check->pagev[0]->have_csum; dev = sblock_to_check->pagev[0]->dev; + /* + * We must use GFP_NOFS because the scrub task might be waiting for a + * worker task executing this function and in turn a transaction commit + * might be waiting for the scrub task to pause (which needs to wait for + * all the worker tasks to complete before pausing). + * We do allocations in the workers through insert_full_stripe_lock() + * and scrub_add_page_to_wr_bio(), which happens down the call chain of + * this function. + */ + nofs_flag = memalloc_nofs_save(); /* * For RAID5/6, race can happen for a different device scrub thread. * For data corruption, Parity and Data threads will both try @@ -865,6 +877,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check) */ ret = lock_full_stripe(fs_info, logical, &full_stripe_locked); if (ret < 0) { + memalloc_nofs_restore(nofs_flag); spin_lock(&sctx->stat_lock); if (ret == -ENOMEM) sctx->stat.malloc_errors++; @@ -904,7 +917,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check) */ sblocks_for_recheck = kcalloc(BTRFS_MAX_MIRRORS, - sizeof(*sblocks_for_recheck), GFP_NOFS); + sizeof(*sblocks_for_recheck), GFP_KERNEL); if (!sblocks_for_recheck) { spin_lock(&sctx->stat_lock); sctx->stat.malloc_errors++; @@ -1202,6 +1215,7 @@ out: } ret = unlock_full_stripe(fs_info, logical, full_stripe_locked); + memalloc_nofs_restore(nofs_flag); if (ret < 0) return ret; return 0; @@ -3540,7 +3554,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx, if (!ret && sctx->is_dev_replace) { /* * If we are doing a device replace wait for any tasks - * that started dellaloc right before we set the block + * that started delalloc right before we set the block * group to RO mode, as they might have just allocated * an extent from it or decided they could do a nocow * write.
And if any such tasks did that, wait for their @@ -3596,11 +3610,12 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx, break; } - btrfs_dev_replace_write_lock(&fs_info->dev_replace); + down_write(&fs_info->dev_replace.rwsem); dev_replace->cursor_right = found_key.offset + length; dev_replace->cursor_left = found_key.offset; dev_replace->item_needs_writeback = 1; - btrfs_dev_replace_write_unlock(&fs_info->dev_replace); + up_write(&dev_replace->rwsem); + ret = scrub_chunk(sctx, scrub_dev, chunk_offset, length, found_key.offset, cache); @@ -3636,10 +3651,10 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx, scrub_pause_off(fs_info); - btrfs_dev_replace_write_lock(&fs_info->dev_replace); + down_write(&fs_info->dev_replace.rwsem); dev_replace->cursor_left = dev_replace->cursor_right; dev_replace->item_needs_writeback = 1; - btrfs_dev_replace_write_unlock(&fs_info->dev_replace); + up_write(&fs_info->dev_replace.rwsem); if (ro_set) btrfs_dec_block_group_ro(cache); @@ -3772,6 +3787,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, struct scrub_ctx *sctx; int ret; struct btrfs_device *dev; + unsigned int nofs_flag; if (btrfs_fs_closing(fs_info)) return -EINVAL; @@ -3813,13 +3829,18 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, return -EINVAL; } + /* Allocate outside of device_list_mutex */ + sctx = scrub_setup_ctx(fs_info, is_dev_replace); + if (IS_ERR(sctx)) + return PTR_ERR(sctx); mutex_lock(&fs_info->fs_devices->device_list_mutex); dev = btrfs_find_device(fs_info, devid, NULL, NULL); if (!dev || (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state) && !is_dev_replace)) { mutex_unlock(&fs_info->fs_devices->device_list_mutex); - return -ENODEV; + ret = -ENODEV; + goto out_free_ctx; } if (!is_dev_replace && !readonly && @@ -3827,7 +3848,8 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, mutex_unlock(&fs_info->fs_devices->device_list_mutex); btrfs_err_in_rcu(fs_info, "scrub: device %s is not writable", rcu_str_deref(dev->name)); - return -EROFS; + ret = -EROFS; + goto out_free_ctx; } mutex_lock(&fs_info->scrub_lock); @@ -3835,34 +3857,29 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &dev->dev_state)) { mutex_unlock(&fs_info->scrub_lock); mutex_unlock(&fs_info->fs_devices->device_list_mutex); - return -EIO; + ret = -EIO; + goto out_free_ctx; } - btrfs_dev_replace_read_lock(&fs_info->dev_replace); + down_read(&fs_info->dev_replace.rwsem); if (dev->scrub_ctx || (!is_dev_replace && btrfs_dev_replace_is_ongoing(&fs_info->dev_replace))) { - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); + up_read(&fs_info->dev_replace.rwsem); mutex_unlock(&fs_info->scrub_lock); mutex_unlock(&fs_info->fs_devices->device_list_mutex); - return -EINPROGRESS; + ret = -EINPROGRESS; + goto out_free_ctx; } - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); + up_read(&fs_info->dev_replace.rwsem); ret = scrub_workers_get(fs_info, is_dev_replace); if (ret) { mutex_unlock(&fs_info->scrub_lock); mutex_unlock(&fs_info->fs_devices->device_list_mutex); - return ret; + goto out_free_ctx; } - sctx = scrub_setup_ctx(dev, is_dev_replace); - if (IS_ERR(sctx)) { - mutex_unlock(&fs_info->scrub_lock); - mutex_unlock(&fs_info->fs_devices->device_list_mutex); - scrub_workers_put(fs_info); - return PTR_ERR(sctx); - } sctx->readonly = readonly; dev->scrub_ctx = sctx; mutex_unlock(&fs_info->fs_devices->device_list_mutex); @@ -3875,6 +3892,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info 
*fs_info, u64 devid, u64 start, atomic_inc(&fs_info->scrubs_running); mutex_unlock(&fs_info->scrub_lock); + /* + * In order to avoid deadlock with reclaim when there is a transaction + * trying to pause scrub, make sure we use GFP_NOFS for all the + * allocations done at btrfs_scrub_pages() and scrub_pages_for_parity() + * invoked by our callees. The pausing request is done when the + * transaction commit starts, and it blocks the transaction until scrub + * is paused (done at specific points at scrub_stripe() or right above + * before incrementing fs_info->scrubs_running). + */ + nofs_flag = memalloc_nofs_save(); if (!is_dev_replace) { /* * by holding device list mutex, we can @@ -3887,6 +3914,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, if (!ret) ret = scrub_enumerate_chunks(sctx, dev, start, end); + memalloc_nofs_restore(nofs_flag); wait_event(sctx->list_wait, atomic_read(&sctx->bios_in_flight) == 0); atomic_dec(&fs_info->scrubs_running); @@ -3904,6 +3932,11 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, scrub_put_ctx(sctx); + return ret; + +out_free_ctx: + scrub_free_ctx(sctx); + return ret; } diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index 5be83b5a1b43..1b15b43905f8 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -2238,7 +2238,7 @@ out: * inodes "orphan" name instead of the real name and stop. Same with new inodes * that were not created yet and overwritten inodes/refs. * - * When do we have have orphan inodes: + * When do we have orphan inodes: * 1. When an inode is freshly created and thus no valid refs are available yet * 2. When a directory lost all it's refs (deleted) but still has dir items * inside which were not processed yet (pending for move/delete). If anyone @@ -3854,7 +3854,7 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move) /* * We may have refs where the parent directory does not exist * yet. This happens if the parent directories inum is higher - * the the current inum. To handle this case, we create the + * than the current inum. To handle this case, we create the * parent directory out of order. But we need to check if this * did already happen before due to other refs in the same dir. */ @@ -4775,7 +4775,7 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len) struct btrfs_key key; pgoff_t index = offset >> PAGE_SHIFT; pgoff_t last_index; - unsigned pg_offset = offset & ~PAGE_MASK; + unsigned pg_offset = offset_in_page(offset); ssize_t ret = 0; key.objectid = sctx->cur_ino; diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 645fc81e2a94..368a5b9e6c13 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -93,7 +93,7 @@ const char *btrfs_decode_error(int errno) /* * __btrfs_handle_fs_error decodes expected errors from the caller and - * invokes the approciate error response. + * invokes the appropriate error response. */ __cold void __btrfs_handle_fs_error(struct btrfs_fs_info *fs_info, const char *function, @@ -151,7 +151,7 @@ void __btrfs_handle_fs_error(struct btrfs_fs_info *fs_info, const char *function * although there is no way to update the progress. It would add the * risk of a deadlock, therefore the canceling is omitted. The only * penalty is that some I/O remains active until the procedure - * completes. The next time when the filesystem is mounted writeable + * completes. The next time when the filesystem is mounted writable * again, the device replace operation continues. 
*/ } @@ -1848,7 +1848,7 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data) if (!btrfs_check_rw_degradable(fs_info, NULL)) { btrfs_warn(fs_info, - "too many missing devices, writeable remount is not allowed"); + "too many missing devices, writable remount is not allowed"); ret = -EACCES; goto restore; } @@ -2090,7 +2090,7 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf) u64 total_free_data = 0; u64 total_free_meta = 0; int bits = dentry->d_sb->s_blocksize_bits; - __be32 *fsid = (__be32 *)fs_info->fsid; + __be32 *fsid = (__be32 *)fs_info->fs_devices->fsid; unsigned factor = 1; struct btrfs_block_rsv *block_rsv = &fs_info->global_block_rsv; int ret; @@ -2312,7 +2312,7 @@ static int btrfs_show_devname(struct seq_file *m, struct dentry *root) * device_list_mutex here as we only read the device data and the list * is protected by RCU. Even if a device is deleted during the list * traversals, we'll get valid data, the freeing callback will wait at - * least until until the rcu_read_unlock. + * least until the rcu_read_unlock. */ rcu_read_lock(); cur_devices = fs_info->fs_devices; diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c index 3717c864ba23..5a5930e3d32b 100644 --- a/fs/btrfs/sysfs.c +++ b/fs/btrfs/sysfs.c @@ -191,6 +191,7 @@ BTRFS_FEAT_ATTR_INCOMPAT(extended_iref, EXTENDED_IREF); BTRFS_FEAT_ATTR_INCOMPAT(raid56, RAID56); BTRFS_FEAT_ATTR_INCOMPAT(skinny_metadata, SKINNY_METADATA); BTRFS_FEAT_ATTR_INCOMPAT(no_holes, NO_HOLES); +BTRFS_FEAT_ATTR_INCOMPAT(metadata_uuid, METADATA_UUID); BTRFS_FEAT_ATTR_COMPAT_RO(free_space_tree, FREE_SPACE_TREE); static struct attribute *btrfs_supported_feature_attrs[] = { @@ -204,6 +205,7 @@ static struct attribute *btrfs_supported_feature_attrs[] = { BTRFS_FEAT_ATTR_PTR(raid56), BTRFS_FEAT_ATTR_PTR(skinny_metadata), BTRFS_FEAT_ATTR_PTR(no_holes), + BTRFS_FEAT_ATTR_PTR(metadata_uuid), BTRFS_FEAT_ATTR_PTR(free_space_tree), NULL }; @@ -505,12 +507,24 @@ static ssize_t quota_override_store(struct kobject *kobj, BTRFS_ATTR_RW(, quota_override, quota_override_show, quota_override_store); +static ssize_t btrfs_metadata_uuid_show(struct kobject *kobj, + struct kobj_attribute *a, char *buf) +{ + struct btrfs_fs_info *fs_info = to_fs_info(kobj); + + return snprintf(buf, PAGE_SIZE, "%pU\n", + fs_info->fs_devices->metadata_uuid); +} + +BTRFS_ATTR(, metadata_uuid, btrfs_metadata_uuid_show); + static const struct attribute *btrfs_attrs[] = { BTRFS_ATTR_PTR(, label), BTRFS_ATTR_PTR(, nodesize), BTRFS_ATTR_PTR(, sectorsize), BTRFS_ATTR_PTR(, clone_alignment), BTRFS_ATTR_PTR(, quota_override), + BTRFS_ATTR_PTR(, metadata_uuid), NULL, }; diff --git a/fs/btrfs/sysfs.h b/fs/btrfs/sysfs.h index c6ee600aff89..40716b357c1d 100644 --- a/fs/btrfs/sysfs.h +++ b/fs/btrfs/sysfs.h @@ -9,7 +9,7 @@ extern u64 btrfs_debugfs_test; enum btrfs_feature_set { - FEAT_COMPAT = 0, + FEAT_COMPAT, FEAT_COMPAT_RO, FEAT_INCOMPAT, FEAT_MAX diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c index db72b3b6209e..8a59597f1883 100644 --- a/fs/btrfs/tests/btrfs-tests.c +++ b/fs/btrfs/tests/btrfs-tests.c @@ -174,8 +174,10 @@ void btrfs_free_dummy_root(struct btrfs_root *root) /* Will be freed by btrfs_free_fs_roots */ if (WARN_ON(test_bit(BTRFS_ROOT_IN_RADIX, &root->state))) return; - if (root->node) + if (root->node) { + /* One for allocate_extent_buffer */ free_extent_buffer(root->node); + } kfree(root); } diff --git a/fs/btrfs/tests/extent-io-tests.c b/fs/btrfs/tests/extent-io-tests.c index 9e0f4a01be14..3c46d7f23456 100644 --- 
a/fs/btrfs/tests/extent-io-tests.c +++ b/fs/btrfs/tests/extent-io-tests.c @@ -62,10 +62,11 @@ static int test_find_delalloc(u32 sectorsize) struct page *page; struct page *locked_page = NULL; unsigned long index = 0; - u64 total_dirty = SZ_256M; - u64 max_bytes = SZ_128M; + /* In this test we need at least 2 file extents at their maximum size */ + u64 max_bytes = BTRFS_MAX_EXTENT_SIZE; + u64 total_dirty = 2 * max_bytes; u64 start, end, test_start; - u64 found; + bool found; int ret = -EINVAL; test_msg("running find delalloc tests"); @@ -76,7 +77,7 @@ static int test_find_delalloc(u32 sectorsize) return -ENOMEM; } - extent_io_tree_init(&tmp, inode); + extent_io_tree_init(&tmp, NULL); /* * First go through and create and mark all of our pages dirty, we pin @@ -106,8 +107,8 @@ static int test_find_delalloc(u32 sectorsize) set_extent_delalloc(&tmp, 0, sectorsize - 1, 0, NULL); start = 0; end = 0; - found = btrfs_find_lock_delalloc_range(inode, &tmp, locked_page, &start, - &end, max_bytes); + found = find_lock_delalloc_range(inode, &tmp, locked_page, &start, + &end); if (!found) { test_err("should have found at least one delalloc"); goto out_bits; @@ -137,8 +138,8 @@ static int test_find_delalloc(u32 sectorsize) set_extent_delalloc(&tmp, sectorsize, max_bytes - 1, 0, NULL); start = test_start; end = 0; - found = btrfs_find_lock_delalloc_range(inode, &tmp, locked_page, &start, - &end, max_bytes); + found = find_lock_delalloc_range(inode, &tmp, locked_page, &start, + &end); if (!found) { test_err("couldn't find delalloc in our range"); goto out_bits; @@ -171,8 +172,8 @@ static int test_find_delalloc(u32 sectorsize) } start = test_start; end = 0; - found = btrfs_find_lock_delalloc_range(inode, &tmp, locked_page, &start, - &end, max_bytes); + found = find_lock_delalloc_range(inode, &tmp, locked_page, &start, + &end); if (found) { test_err("found range when we shouldn't have"); goto out_bits; @@ -192,8 +193,8 @@ static int test_find_delalloc(u32 sectorsize) set_extent_delalloc(&tmp, max_bytes, total_dirty - 1, 0, NULL); start = test_start; end = 0; - found = btrfs_find_lock_delalloc_range(inode, &tmp, locked_page, &start, - &end, max_bytes); + found = find_lock_delalloc_range(inode, &tmp, locked_page, &start, + &end); if (!found) { test_err("didn't find our range"); goto out_bits; @@ -233,8 +234,8 @@ static int test_find_delalloc(u32 sectorsize) * this changes at any point in the future we will need to fix this * tests expected behavior. */ - found = btrfs_find_lock_delalloc_range(inode, &tmp, locked_page, &start, - &end, max_bytes); + found = find_lock_delalloc_range(inode, &tmp, locked_page, &start, + &end); if (!found) { test_err("didn't find our range"); goto out_bits; diff --git a/fs/btrfs/tests/inode-tests.c b/fs/btrfs/tests/inode-tests.c index 64043f028820..af0c8e30d9e2 100644 --- a/fs/btrfs/tests/inode-tests.c +++ b/fs/btrfs/tests/inode-tests.c @@ -254,11 +254,6 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) goto out; } - /* - * We will just free a dummy node if it's ref count is 2 so we need an - * extra ref so our searches don't accidentally release our page.
- */ - extent_buffer_get(root->node); btrfs_set_header_nritems(root->node, 0); btrfs_set_header_level(root->node, 0); ret = -EINVAL; @@ -860,7 +855,6 @@ static int test_hole_first(u32 sectorsize, u32 nodesize) goto out; } - extent_buffer_get(root->node); btrfs_set_header_nritems(root->node, 0); btrfs_set_header_level(root->node, 0); BTRFS_I(inode)->root = root; diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index d1eeef9ec5da..127fa1535f58 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -233,14 +233,12 @@ loop: extwriter_counter_init(cur_trans, type); init_waitqueue_head(&cur_trans->writer_wait); init_waitqueue_head(&cur_trans->commit_wait); - init_waitqueue_head(&cur_trans->pending_wait); cur_trans->state = TRANS_STATE_RUNNING; /* * One for this trans handle, one so it will live on until we * commit the transaction. */ refcount_set(&cur_trans->use_count, 2); - atomic_set(&cur_trans->pending_ordered, 0); cur_trans->flags = 0; cur_trans->start_time = ktime_get_seconds(); @@ -456,7 +454,7 @@ start_transaction(struct btrfs_root *root, unsigned int num_items, bool enforce_qgroups) { struct btrfs_fs_info *fs_info = root->fs_info; - + struct btrfs_block_rsv *delayed_refs_rsv = &fs_info->delayed_refs_rsv; struct btrfs_trans_handle *h; struct btrfs_transaction *cur_trans; u64 num_bytes = 0; @@ -485,13 +483,28 @@ start_transaction(struct btrfs_root *root, unsigned int num_items, * the appropriate flushing if need be. */ if (num_items && root != fs_info->chunk_root) { + struct btrfs_block_rsv *rsv = &fs_info->trans_block_rsv; + u64 delayed_refs_bytes = 0; + qgroup_reserved = num_items * fs_info->nodesize; ret = btrfs_qgroup_reserve_meta_pertrans(root, qgroup_reserved, enforce_qgroups); if (ret) return ERR_PTR(ret); + /* + * We want to reserve all the bytes we may need all at once, so + * we only do 1 enospc flushing cycle per transaction start. We + * accomplish this by simply assuming we'll do 2 x num_items + * worth of delayed refs updates in this trans handle, and + * refill that amount for whatever is missing in the reserve. + */ num_bytes = btrfs_calc_trans_metadata_size(fs_info, num_items); + if (delayed_refs_rsv->full == 0) { + delayed_refs_bytes = num_bytes; + num_bytes <<= 1; + } + /* * Do the reservation for the relocation root creation */ @@ -500,8 +513,24 @@ start_transaction(struct btrfs_root *root, unsigned int num_items, reloc_reserved = true; } - ret = btrfs_block_rsv_add(root, &fs_info->trans_block_rsv, - num_bytes, flush); + ret = btrfs_block_rsv_add(root, rsv, num_bytes, flush); + if (ret) + goto reserve_fail; + if (delayed_refs_bytes) { + btrfs_migrate_to_delayed_refs_rsv(fs_info, rsv, + delayed_refs_bytes); + num_bytes -= delayed_refs_bytes; + } + } else if (num_items == 0 && flush == BTRFS_RESERVE_FLUSH_ALL && + !delayed_refs_rsv->full) { + /* + * Some people call with btrfs_start_transaction(root, 0) + * because they can be throttled, but have some other mechanism + * for reserving space. We still want these guys to refill the + * delayed block_rsv so just add one item's worth of reservation + * here.
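+ * + * E.g. a btrfs_start_transaction(root, 0) caller using + * BTRFS_RESERVE_FLUSH_ALL lands in this branch whenever the delayed + * refs rsv is not full, and tops the reserve up via + * btrfs_delayed_refs_rsv_refill() below.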
+ */ + ret = btrfs_delayed_refs_rsv_refill(fs_info, flush); if (ret) goto reserve_fail; } @@ -670,7 +699,7 @@ struct btrfs_trans_handle *btrfs_attach_transaction(struct btrfs_root *root) /* * btrfs_attach_transaction_barrier() - catch the running transaction * - * It is similar to the above function, the differentia is this one + * It is similar to the above function, the difference is this one * will wait for all the inactive transactions until they fully * complete. */ @@ -760,7 +789,7 @@ static int should_end_transaction(struct btrfs_trans_handle *trans) { struct btrfs_fs_info *fs_info = trans->fs_info; - if (btrfs_check_space_for_delayed_refs(trans)) + if (btrfs_check_space_for_delayed_refs(fs_info)) return 1; return !!btrfs_block_rsv_check(&fs_info->global_block_rsv, 5); @@ -769,22 +798,12 @@ static int should_end_transaction(struct btrfs_trans_handle *trans) int btrfs_should_end_transaction(struct btrfs_trans_handle *trans) { struct btrfs_transaction *cur_trans = trans->transaction; - int updates; - int err; smp_mb(); if (cur_trans->state >= TRANS_STATE_BLOCKED || cur_trans->delayed_refs.flushing) return 1; - updates = trans->delayed_ref_updates; - trans->delayed_ref_updates = 0; - if (updates) { - err = btrfs_run_delayed_refs(trans, updates * 2); - if (err) /* Error code will also eval true */ - return err; - } - return should_end_transaction(trans); } @@ -814,11 +833,8 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans, { struct btrfs_fs_info *info = trans->fs_info; struct btrfs_transaction *cur_trans = trans->transaction; - u64 transid = trans->transid; - unsigned long cur = trans->delayed_ref_updates; int lock = (trans->type != TRANS_JOIN_NOLOCK); int err = 0; - int must_run_delayed_refs = 0; if (refcount_read(&trans->use_count) > 1) { refcount_dec(&trans->use_count); @@ -829,27 +845,6 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans, btrfs_trans_release_metadata(trans); trans->block_rsv = NULL; - if (!list_empty(&trans->new_bgs)) - btrfs_create_pending_block_groups(trans); - - trans->delayed_ref_updates = 0; - if (!trans->sync) { - must_run_delayed_refs = - btrfs_should_throttle_delayed_refs(trans); - cur = max_t(unsigned long, cur, 32); - - /* - * don't make the caller wait if they are from a NOLOCK - * or ATTACH transaction, it will deadlock with commit - */ - if (must_run_delayed_refs == 1 && - (trans->type & (__TRANS_JOIN_NOLOCK | __TRANS_ATTACH))) - must_run_delayed_refs = 2; - } - - btrfs_trans_release_metadata(trans); - trans->block_rsv = NULL; - if (!list_empty(&trans->new_bgs)) btrfs_create_pending_block_groups(trans); @@ -894,10 +889,6 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans, } kmem_cache_free(btrfs_trans_handle_cachep, trans); - if (must_run_delayed_refs) { - btrfs_async_run_delayed_refs(info, cur, transid, - must_run_delayed_refs == 1); - } return err; } @@ -1338,7 +1329,7 @@ static int qgroup_account_snapshot(struct btrfs_trans_handle *trans, return 0; /* - * Ensure dirty @src will be commited. Or, after comming + * Ensure dirty @src will be committed. Or, after coming * commit_fs_roots() and switch_commit_roots(), any dirty but not * recorded root will never be updated again, causing an outdated root * item. 
@@ -1842,7 +1833,6 @@ static void cleanup_transaction(struct btrfs_trans_handle *trans, int err) { struct btrfs_fs_info *fs_info = trans->fs_info; struct btrfs_transaction *cur_trans = trans->transaction; - DEFINE_WAIT(wait); WARN_ON(refcount_read(&trans->use_count) > 1); @@ -1911,13 +1901,6 @@ static inline void btrfs_wait_delalloc_flush(struct btrfs_fs_info *fs_info) btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1); } -static inline void -btrfs_wait_pending_ordered(struct btrfs_transaction *cur_trans) -{ - wait_event(cur_trans->pending_wait, - atomic_read(&cur_trans->pending_ordered) == 0); -} - int btrfs_commit_transaction(struct btrfs_trans_handle *trans) { struct btrfs_fs_info *fs_info = trans->fs_info; @@ -2052,8 +2035,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) btrfs_wait_delalloc_flush(fs_info); - btrfs_wait_pending_ordered(cur_trans); - btrfs_scrub_pause(fs_info); /* * Ok now we need to make sure to block out any other joins while we diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h index 4cbb1b55387d..f1ba78949d1b 100644 --- a/fs/btrfs/transaction.h +++ b/fs/btrfs/transaction.h @@ -12,13 +12,13 @@ #include "ctree.h" enum btrfs_trans_state { - TRANS_STATE_RUNNING = 0, - TRANS_STATE_BLOCKED = 1, - TRANS_STATE_COMMIT_START = 2, - TRANS_STATE_COMMIT_DOING = 3, - TRANS_STATE_UNBLOCKED = 4, - TRANS_STATE_COMPLETED = 5, - TRANS_STATE_MAX = 6, + TRANS_STATE_RUNNING, + TRANS_STATE_BLOCKED, + TRANS_STATE_COMMIT_START, + TRANS_STATE_COMMIT_DOING, + TRANS_STATE_UNBLOCKED, + TRANS_STATE_COMPLETED, + TRANS_STATE_MAX, }; #define BTRFS_TRANS_HAVE_FREE_BGS 0 @@ -39,7 +39,6 @@ struct btrfs_transaction { */ atomic_t num_writers; refcount_t use_count; - atomic_t pending_ordered; unsigned long flags; @@ -51,7 +50,6 @@ struct btrfs_transaction { time64_t start_time; wait_queue_head_t writer_wait; wait_queue_head_t commit_wait; - wait_queue_head_t pending_wait; struct list_head pending_snapshots; struct list_head pending_chunks; struct list_head switch_commits; diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c index 1a4e2b101ef2..a62e1e837a89 100644 --- a/fs/btrfs/tree-checker.c +++ b/fs/btrfs/tree-checker.c @@ -27,10 +27,10 @@ * * @type: leaf or node * @identifier: the necessary info to locate the leaf/node. - * It's recommened to decode key.objecitd/offset if it's + * It's recommended to decode key.objectid/offset if it's * meaningful. * @reason: describe the error - * @bad_value: optional, it's recommened to output bad value and its + * @bad_value: optional, it's recommended to output bad value and its * expected value (range). * * Since comma is used to separate the components, only space is allowed @@ -130,7 +130,7 @@ static int check_extent_data_item(struct btrfs_fs_info *fs_info, } /* - * Support for new compression/encrption must introduce incompat flag, + * Support for new compression/encryption must introduce an incompat flag, * and must be caught in open_ctree().
*/ if (btrfs_file_extent_compression(leaf, fi) > BTRFS_COMPRESS_TYPES) { diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c index a5ce99a6c936..ac232b3d6d7e 100644 --- a/fs/btrfs/tree-log.c +++ b/fs/btrfs/tree-log.c @@ -1144,7 +1144,7 @@ next: } btrfs_release_path(path); - /* look for a conflicing name */ + /* look for a conflicting name */ di = btrfs_lookup_dir_item(trans, root, path, btrfs_ino(dir), name, namelen, 0); if (di && !IS_ERR(di)) { @@ -3149,7 +3149,7 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans, mutex_unlock(&log_root_tree->log_mutex); /* - * nobody else is going to jump in and write the the ctree + * Nobody else is going to jump in and write the ctree * super here because the log_commit atomic below is protecting * us. We must be called with a transaction handle pinning * the running transaction open, so a full commit can't hop @@ -3201,8 +3201,6 @@ static void free_log_tree(struct btrfs_trans_handle *trans, struct btrfs_root *log) { int ret; - u64 start; - u64 end; struct walk_control wc = { .free = 1, .process_func = process_one_buffer }; @@ -3216,18 +3214,8 @@ static void free_log_tree(struct btrfs_trans_handle *trans, btrfs_handle_fs_error(log->fs_info, ret, NULL); } - while (1) { - ret = find_first_extent_bit(&log->dirty_log_pages, - 0, &start, &end, - EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT, - NULL); - if (ret) - break; - - clear_extent_bits(&log->dirty_log_pages, start, end, - EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT); - } - + clear_extent_bits(&log->dirty_log_pages, 0, (u64)-1, + EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT); free_extent_buffer(log->node); kfree(log); } @@ -4383,7 +4371,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans, struct extent_map *em, *n; struct list_head extents; struct extent_map_tree *tree = &inode->extent_tree; - u64 logged_start, logged_end; u64 test_gen; int ret = 0; int num = 0; @@ -4392,8 +4379,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans, write_lock(&tree->lock); test_gen = root->fs_info->last_trans_committed; - logged_start = start; - logged_end = end; list_for_each_entry_safe(em, n, &tree->modified_extents, list) { /* @@ -4434,11 +4419,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans, em->start >= i_size_read(&inode->vfs_inode)) continue; - if (em->start < logged_start) - logged_start = em->start; - if ((em->start + em->len - 1) > logged_end) - logged_end = em->start + em->len - 1; - /* Need a ref to keep it from getting evicted from cache */ refcount_inc(&em->refs); set_bit(EXTENT_FLAG_LOGGING, &em->flags); @@ -5778,6 +5758,22 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans, goto end_trans; } + /* + * If a new hard link was added to the inode in the current transaction + * and its link count is now greater than 1, we need to fall back to a + * transaction commit, otherwise we can end up not logging all its new + * parents for all the hard links. Here just from the dentry used to + * fsync, we cannot visit the ancestor inodes for all the other hard + * links to figure out if any is new, so we fall back to a transaction + * commit (instead of adding a lot of complexity of scanning a btree, + * since this scenario is not a common use case).
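+ * + * E.g. a link(2) call that bumps i_nlink to 2 followed by an fsync(2) + * of the inode in the same transaction trips the check below and falls + * back to a full transaction commit through the -EMLINK return.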
+ */ + if (inode->vfs_inode.i_nlink > 1 && + inode->last_link_trans > last_committed) { + ret = -EMLINK; + goto end_trans; + } + while (1) { if (!parent || d_really_is_negative(parent) || sb != parent->d_sb) break; diff --git a/fs/btrfs/tree-log.h b/fs/btrfs/tree-log.h index 767765031e59..0fab84a8f670 100644 --- a/fs/btrfs/tree-log.h +++ b/fs/btrfs/tree-log.h @@ -15,7 +15,6 @@ struct btrfs_log_ctx { int log_ret; int log_transid; - int io_err; bool log_new_dentries; struct inode *inode; struct list_head list; @@ -26,7 +25,6 @@ static inline void btrfs_init_log_ctx(struct btrfs_log_ctx *ctx, { ctx->log_ret = 0; ctx->log_transid = 0; - ctx->io_err = 0; ctx->log_new_dentries = false; ctx->inode = inode; INIT_LIST_HEAD(&ctx->list); diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index f435d397019e..2576b1a379c9 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -37,6 +37,7 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = { .tolerated_failures = 1, .devs_increment = 2, .ncopies = 2, + .nparity = 0, .raid_name = "raid10", .bg_flag = BTRFS_BLOCK_GROUP_RAID10, .mindev_error = BTRFS_ERROR_DEV_RAID10_MIN_NOT_MET, @@ -49,6 +50,7 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = { .tolerated_failures = 1, .devs_increment = 2, .ncopies = 2, + .nparity = 0, .raid_name = "raid1", .bg_flag = BTRFS_BLOCK_GROUP_RAID1, .mindev_error = BTRFS_ERROR_DEV_RAID1_MIN_NOT_MET, @@ -61,6 +63,7 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = { .tolerated_failures = 0, .devs_increment = 1, .ncopies = 2, + .nparity = 0, .raid_name = "dup", .bg_flag = BTRFS_BLOCK_GROUP_DUP, .mindev_error = 0, @@ -73,6 +76,7 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = { .tolerated_failures = 0, .devs_increment = 1, .ncopies = 1, + .nparity = 0, .raid_name = "raid0", .bg_flag = BTRFS_BLOCK_GROUP_RAID0, .mindev_error = 0, @@ -85,6 +89,7 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = { .tolerated_failures = 0, .devs_increment = 1, .ncopies = 1, + .nparity = 0, .raid_name = "single", .bg_flag = 0, .mindev_error = 0, @@ -96,7 +101,8 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = { .devs_min = 2, .tolerated_failures = 1, .devs_increment = 1, - .ncopies = 2, + .ncopies = 1, + .nparity = 1, .raid_name = "raid5", .bg_flag = BTRFS_BLOCK_GROUP_RAID5, .mindev_error = BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET, @@ -108,7 +114,8 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = { .devs_min = 3, .tolerated_failures = 2, .devs_increment = 1, - .ncopies = 3, + .ncopies = 1, + .nparity = 2, .raid_name = "raid6", .bg_flag = BTRFS_BLOCK_GROUP_RAID6, .mindev_error = BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET, @@ -123,6 +130,60 @@ const char *get_raid_name(enum btrfs_raid_types type) return btrfs_raid_array[type].raid_name; } +/* + * Fill @buf with textual description of @bg_flags, no more than @size_buf + * bytes including terminating null byte. 
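+ * + * For example, a bg_flags value of DATA | RAID1 is rendered as + * "data|raid1"; any leftover bits with no known name are appended as a + * single hex value.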
+ */ +void btrfs_describe_block_groups(u64 bg_flags, char *buf, u32 size_buf) +{ + int i; + int ret; + char *bp = buf; + u64 flags = bg_flags; + u32 size_bp = size_buf; + + if (!flags) { + strcpy(bp, "NONE"); + return; + } + +#define DESCRIBE_FLAG(flag, desc) \ + do { \ + if (flags & (flag)) { \ + ret = snprintf(bp, size_bp, "%s|", (desc)); \ + if (ret < 0 || ret >= size_bp) \ + goto out_overflow; \ + size_bp -= ret; \ + bp += ret; \ + flags &= ~(flag); \ + } \ + } while (0) + + DESCRIBE_FLAG(BTRFS_BLOCK_GROUP_DATA, "data"); + DESCRIBE_FLAG(BTRFS_BLOCK_GROUP_SYSTEM, "system"); + DESCRIBE_FLAG(BTRFS_BLOCK_GROUP_METADATA, "metadata"); + + DESCRIBE_FLAG(BTRFS_AVAIL_ALLOC_BIT_SINGLE, "single"); + for (i = 0; i < BTRFS_NR_RAID_TYPES; i++) + DESCRIBE_FLAG(btrfs_raid_array[i].bg_flag, + btrfs_raid_array[i].raid_name); +#undef DESCRIBE_FLAG + + if (flags) { + ret = snprintf(bp, size_bp, "0x%llx|", flags); + size_bp -= ret; + } + + if (size_bp < size_buf) + buf[size_buf - size_bp - 1] = '\0'; /* remove last | */ + + /* + * The text is trimmed; it's up to the caller to provide a sufficiently + * large buffer + */ +out_overflow:; +} + static int init_first_rw_device(struct btrfs_trans_handle *trans, struct btrfs_fs_info *fs_info); static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info); @@ -151,7 +212,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, * the mutex can be very coarse and can cover long-running operations * * protects: updates to fs_devices counters like missing devices, rw devices, - * seeding, structure cloning, openning/closing devices at mount/umount time + * seeding, structure cloning, opening/closing devices at mount/umount time * * global::fs_devs - add, remove, updates to the global list * @@ -238,13 +299,15 @@ struct list_head *btrfs_get_fs_uuids(void) /* * alloc_fs_devices - allocate struct btrfs_fs_devices - * @fsid: if not NULL, copy the uuid to fs_devices::fsid + * @fsid: if not NULL, copy the UUID to fs_devices::fsid + * @metadata_fsid: if not NULL, copy the UUID to fs_devices::metadata_fsid * * Return a pointer to a new struct btrfs_fs_devices on success, or ERR_PTR(). * The returned struct is not linked onto any lists and can be destroyed with * kfree() right away. */ -static struct btrfs_fs_devices *alloc_fs_devices(const u8 *fsid) +static struct btrfs_fs_devices *alloc_fs_devices(const u8 *fsid, + const u8 *metadata_fsid) { struct btrfs_fs_devices *fs_devs; @@ -261,6 +324,11 @@ static struct btrfs_fs_devices *alloc_fs_devices(const u8 *fsid) if (fsid) memcpy(fs_devs->fsid, fsid, BTRFS_FSID_SIZE); + if (metadata_fsid) + memcpy(fs_devs->metadata_uuid, metadata_fsid, BTRFS_FSID_SIZE); + else if (fsid) + memcpy(fs_devs->metadata_uuid, fsid, BTRFS_FSID_SIZE); + return fs_devs; } @@ -368,13 +436,57 @@ static struct btrfs_device *find_device(struct btrfs_fs_devices *fs_devices, return NULL; } -static noinline struct btrfs_fs_devices *find_fsid(u8 *fsid) +static noinline struct btrfs_fs_devices *find_fsid( + const u8 *fsid, const u8 *metadata_fsid) { struct btrfs_fs_devices *fs_devices; + ASSERT(fsid); + + if (metadata_fsid) { + /* + * Handle scanned device having completed its fsid change but + * belonging to a fs_devices that was created by first scanning + * a device which didn't have its fsid/metadata_uuid changed + * at all and the CHANGING_FSID_V2 flag set.
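+ * In that case the fs_devices has identical fsid and metadata_uuid + * fields, both equal to the metadata_fsid being looked up, which is + * what the pair of memcmp() calls below tests.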
+ */ + list_for_each_entry(fs_devices, &fs_uuids, fs_list) { + if (fs_devices->fsid_change && + memcmp(metadata_fsid, fs_devices->fsid, + BTRFS_FSID_SIZE) == 0 && + memcmp(fs_devices->fsid, fs_devices->metadata_uuid, + BTRFS_FSID_SIZE) == 0) { + return fs_devices; + } + } + /* + * Handle scanned device having completed its fsid change but + * belonging to a fs_devices that was created by a device that + * has an outdated pair of fsid/metadata_uuid and + * CHANGING_FSID_V2 flag set. + */ + list_for_each_entry(fs_devices, &fs_uuids, fs_list) { + if (fs_devices->fsid_change && + memcmp(fs_devices->metadata_uuid, + fs_devices->fsid, BTRFS_FSID_SIZE) != 0 && + memcmp(metadata_fsid, fs_devices->metadata_uuid, + BTRFS_FSID_SIZE) == 0) { + return fs_devices; + } + } + } + + /* Handle non-split brain cases */ list_for_each_entry(fs_devices, &fs_uuids, fs_list) { - if (memcmp(fsid, fs_devices->fsid, BTRFS_FSID_SIZE) == 0) - return fs_devices; + if (metadata_fsid) { + if (memcmp(fsid, fs_devices->fsid, BTRFS_FSID_SIZE) == 0 + && memcmp(metadata_fsid, fs_devices->metadata_uuid, + BTRFS_FSID_SIZE) == 0) + return fs_devices; + } else { + if (memcmp(fsid, fs_devices->fsid, BTRFS_FSID_SIZE) == 0) + return fs_devices; + } } return NULL; } @@ -709,6 +821,13 @@ static int btrfs_open_one_device(struct btrfs_fs_devices *fs_devices, device->generation = btrfs_super_generation(disk_super); if (btrfs_super_flags(disk_super) & BTRFS_SUPER_FLAG_SEEDING) { + if (btrfs_super_incompat_flags(disk_super) & + BTRFS_FEATURE_INCOMPAT_METADATA_UUID) { + pr_err( + "BTRFS: Invalid seeding and uuid-changed device detected\n"); + goto error_brelse; + } + clear_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state); fs_devices->seeding = 1; } else { @@ -743,6 +862,51 @@ error_brelse: return -EINVAL; } +/* + * Handle scanned device having its CHANGING_FSID_V2 flag set and the fs_devices + * being created with a disk that has already completed its fsid change. + */ +static struct btrfs_fs_devices *find_fsid_inprogress( + struct btrfs_super_block *disk_super) +{ + struct btrfs_fs_devices *fs_devices; + + list_for_each_entry(fs_devices, &fs_uuids, fs_list) { + if (memcmp(fs_devices->metadata_uuid, fs_devices->fsid, + BTRFS_FSID_SIZE) != 0 && + memcmp(fs_devices->metadata_uuid, disk_super->fsid, + BTRFS_FSID_SIZE) == 0 && !fs_devices->fsid_change) { + return fs_devices; + } + } + + return NULL; +} + + +static struct btrfs_fs_devices *find_fsid_changed( + struct btrfs_super_block *disk_super) +{ + struct btrfs_fs_devices *fs_devices; + + /* + * Handles the case where scanned device is part of an fs that had + * multiple successful changes of FSID but currently the device didn't + * observe it. Meaning our fsid will be different from theirs.
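+ * Matched below by requiring the in-memory metadata_uuid to equal the + * on-disk metadata_uuid while the two fsid values differ.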
+ */ + list_for_each_entry(fs_devices, &fs_uuids, fs_list) { + if (memcmp(fs_devices->metadata_uuid, fs_devices->fsid, + BTRFS_FSID_SIZE) != 0 && + memcmp(fs_devices->metadata_uuid, disk_super->metadata_uuid, + BTRFS_FSID_SIZE) == 0 && + memcmp(fs_devices->fsid, disk_super->fsid, + BTRFS_FSID_SIZE) != 0) { + return fs_devices; + } + } + + return NULL; +} /* * Add new device to list of registered devices * @@ -755,14 +919,46 @@ static noinline struct btrfs_device *device_list_add(const char *path, bool *new_device_added) { struct btrfs_device *device; - struct btrfs_fs_devices *fs_devices; + struct btrfs_fs_devices *fs_devices = NULL; struct rcu_string *name; u64 found_transid = btrfs_super_generation(disk_super); u64 devid = btrfs_stack_device_id(&disk_super->dev_item); + bool has_metadata_uuid = (btrfs_super_incompat_flags(disk_super) & + BTRFS_FEATURE_INCOMPAT_METADATA_UUID); + bool fsid_change_in_progress = (btrfs_super_flags(disk_super) & + BTRFS_SUPER_FLAG_CHANGING_FSID_V2); + + if (fsid_change_in_progress) { + if (!has_metadata_uuid) { + /* + * When we have an image which has CHANGING_FSID_V2 set + * it might belong to either a filesystem which has + * disks with completed fsid change or it might belong + * to a fs with no UUID changes in effect, handle both. + */ + fs_devices = find_fsid_inprogress(disk_super); + if (!fs_devices) + fs_devices = find_fsid(disk_super->fsid, NULL); + } else { + fs_devices = find_fsid_changed(disk_super); + } + } else if (has_metadata_uuid) { + fs_devices = find_fsid(disk_super->fsid, + disk_super->metadata_uuid); + } else { + fs_devices = find_fsid(disk_super->fsid, NULL); + } + - fs_devices = find_fsid(disk_super->fsid); if (!fs_devices) { + if (has_metadata_uuid) + fs_devices = alloc_fs_devices(disk_super->fsid, + disk_super->metadata_uuid); + else + fs_devices = alloc_fs_devices(disk_super->fsid, NULL); + + fs_devices->fsid_change = fsid_change_in_progress; + if (IS_ERR(fs_devices)) return ERR_CAST(fs_devices); @@ -774,6 +970,21 @@ static noinline struct btrfs_device *device_list_add(const char *path, mutex_lock(&fs_devices->device_list_mutex); device = find_device(fs_devices, devid, disk_super->dev_item.uuid); + + /* + * If this disk has been pulled into a fs_devices created by + * a device which had the CHANGING_FSID_V2 flag then replace the + * metadata_uuid/fsid values of the fs_devices.
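+ * This only happens when the scanned device carries a generation + * newer than fs_devices->latest_generation, see the found_transid + * check below.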
+ */ + if (has_metadata_uuid && fs_devices->fsid_change && + found_transid > fs_devices->latest_generation) { + memcpy(fs_devices->fsid, disk_super->fsid, + BTRFS_FSID_SIZE); + memcpy(fs_devices->metadata_uuid, + disk_super->metadata_uuid, BTRFS_FSID_SIZE); + + fs_devices->fsid_change = false; + } } if (!device) { @@ -850,6 +1061,35 @@ static noinline struct btrfs_device *device_list_add(const char *path, return ERR_PTR(-EEXIST); } + /* + * We are going to replace the device path for a given devid, + * make sure it's the same device if the device is mounted + */ + if (device->bdev) { + struct block_device *path_bdev; + + path_bdev = lookup_bdev(path); + if (IS_ERR(path_bdev)) { + mutex_unlock(&fs_devices->device_list_mutex); + return ERR_CAST(path_bdev); + } + + if (device->bdev != path_bdev) { + bdput(path_bdev); + mutex_unlock(&fs_devices->device_list_mutex); + btrfs_warn_in_rcu(device->fs_info, + "duplicate device fsid:devid for %pU:%llu old:%s new:%s", + disk_super->fsid, devid, + rcu_str_deref(device->name), path); + return ERR_PTR(-EEXIST); + } + bdput(path_bdev); + btrfs_info_in_rcu(device->fs_info, + "device fsid %pU devid %llu moved old:%s new:%s", + disk_super->fsid, devid, + rcu_str_deref(device->name), path); + } + name = rcu_string_strdup(path, GFP_NOFS); if (!name) { mutex_unlock(&fs_devices->device_list_mutex); @@ -869,8 +1109,11 @@ static noinline struct btrfs_device *device_list_add(const char *path, * it back. We need it to pick the disk with largest generation * (as above). */ - if (!fs_devices->opened) + if (!fs_devices->opened) { device->generation = found_transid; + fs_devices->latest_generation = max_t(u64, found_transid, + fs_devices->latest_generation); + } fs_devices->total_devices = btrfs_super_num_devices(disk_super); @@ -884,7 +1127,7 @@ static struct btrfs_fs_devices *clone_fs_devices(struct btrfs_fs_devices *orig) struct btrfs_device *device; struct btrfs_device *orig_dev; - fs_devices = alloc_fs_devices(orig->fsid); + fs_devices = alloc_fs_devices(orig->fsid, NULL); if (IS_ERR(fs_devices)) return fs_devices; @@ -1193,7 +1436,7 @@ static int btrfs_read_disk_super(struct block_device *bdev, u64 bytenr, p = kmap(*page); /* align our pointer to the offset of the super block */ - *disk_super = p + (bytenr & ~PAGE_MASK); + *disk_super = p + offset_in_page(bytenr); if (btrfs_super_bytenr(*disk_super) != bytenr || btrfs_super_magic(*disk_super) != BTRFS_MAGIC) { @@ -1709,7 +1952,8 @@ static int btrfs_add_dev_item(struct btrfs_trans_handle *trans, ptr = btrfs_device_uuid(dev_item); write_extent_buffer(leaf, device->uuid, ptr, BTRFS_UUID_SIZE); ptr = btrfs_device_fsid(dev_item); - write_extent_buffer(leaf, trans->fs_info->fsid, ptr, BTRFS_FSID_SIZE); + write_extent_buffer(leaf, trans->fs_info->fs_devices->metadata_uuid, + ptr, BTRFS_FSID_SIZE); btrfs_mark_buffer_dirty(leaf); ret = 0; @@ -1862,12 +2106,12 @@ static u64 btrfs_num_devices(struct btrfs_fs_info *fs_info) { u64 num_devices = fs_info->fs_devices->num_devices; - btrfs_dev_replace_read_lock(&fs_info->dev_replace); + down_read(&fs_info->dev_replace.rwsem); if (btrfs_dev_replace_is_ongoing(&fs_info->dev_replace)) { ASSERT(num_devices > 1); num_devices--; } - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); + up_read(&fs_info->dev_replace.rwsem); return num_devices; } @@ -1900,6 +2144,14 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path, goto out; } + if (btrfs_pinned_by_swapfile(fs_info, device)) { + btrfs_warn_in_rcu(fs_info, + "cannot remove device %s (devid %llu) due to active 
swapfile", + rcu_str_deref(device->name), device->devid); + ret = -ETXTBSY; + goto out; + } + if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) { ret = BTRFS_ERROR_DEV_TGT_REPLACE; goto out; @@ -2132,7 +2384,13 @@ static struct btrfs_device *btrfs_find_device_by_path( disk_super = (struct btrfs_super_block *)bh->b_data; devid = btrfs_stack_device_id(&disk_super->dev_item); dev_uuid = disk_super->dev_item.uuid; - device = btrfs_find_device(fs_info, devid, dev_uuid, disk_super->fsid); + if (btrfs_fs_incompat(fs_info, METADATA_UUID)) + device = btrfs_find_device(fs_info, devid, dev_uuid, + disk_super->metadata_uuid); + else + device = btrfs_find_device(fs_info, devid, + dev_uuid, disk_super->fsid); + brelse(bh); if (!device) device = ERR_PTR(-ENOENT); @@ -2202,7 +2460,7 @@ static int btrfs_prepare_sprout(struct btrfs_fs_info *fs_info) if (!fs_devices->seeding) return -EINVAL; - seed_devices = alloc_fs_devices(NULL); + seed_devices = alloc_fs_devices(NULL, NULL); if (IS_ERR(seed_devices)) return PTR_ERR(seed_devices); @@ -2238,7 +2496,7 @@ static int btrfs_prepare_sprout(struct btrfs_fs_info *fs_info) fs_devices->seed = seed_devices; generate_random_uuid(fs_devices->fsid); - memcpy(fs_info->fsid, fs_devices->fsid, BTRFS_FSID_SIZE); + memcpy(fs_devices->metadata_uuid, fs_devices->fsid, BTRFS_FSID_SIZE); memcpy(disk_super->fsid, fs_devices->fsid, BTRFS_FSID_SIZE); mutex_unlock(&fs_devices->device_list_mutex); @@ -2480,7 +2738,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path * so rename the fsid on the sysfs */ snprintf(fsid_buf, BTRFS_UUID_UNPARSED_SIZE, "%pU", - fs_info->fsid); + fs_info->fs_devices->fsid); if (kobject_rename(&fs_devices->fsid_kobj, fsid_buf)) btrfs_warn(fs_info, "sysfs: failed to create fsid for sprout"); @@ -2718,8 +2976,15 @@ static int btrfs_del_sys_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset) return ret; } -static struct extent_map *get_chunk_map(struct btrfs_fs_info *fs_info, - u64 logical, u64 length) +/* + * btrfs_get_chunk_map() - Find the mapping containing the given logical extent. + * @logical: Logical block offset in bytes. + * @length: Length of extent in bytes. + * + * Return: Chunk mapping or ERR_PTR. 
+ */ +struct extent_map *btrfs_get_chunk_map(struct btrfs_fs_info *fs_info, + u64 logical, u64 length) { struct extent_map_tree *em_tree; struct extent_map *em; @@ -2756,7 +3021,7 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset) int i, ret = 0; struct btrfs_fs_devices *fs_devices = fs_info->fs_devices; - em = get_chunk_map(fs_info, chunk_offset, 1); + em = btrfs_get_chunk_map(fs_info, chunk_offset, 1); if (IS_ERR(em)) { /* * This is a logic error, but we don't want to just rely on the @@ -2797,13 +3062,11 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset) mutex_unlock(&fs_info->chunk_mutex); } - if (map->stripes[i].dev) { - ret = btrfs_update_device(trans, map->stripes[i].dev); - if (ret) { - mutex_unlock(&fs_devices->device_list_mutex); - btrfs_abort_transaction(trans, ret); - goto out; - } + ret = btrfs_update_device(trans, device); + if (ret) { + mutex_unlock(&fs_devices->device_list_mutex); + btrfs_abort_transaction(trans, ret); + goto out; } } mutex_unlock(&fs_devices->device_list_mutex); @@ -3437,17 +3700,11 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info) { struct btrfs_balance_control *bctl = fs_info->balance_ctl; struct btrfs_root *chunk_root = fs_info->chunk_root; - struct btrfs_root *dev_root = fs_info->dev_root; - struct list_head *devices; - struct btrfs_device *device; - u64 old_size; - u64 size_to_free; u64 chunk_type; struct btrfs_chunk *chunk; struct btrfs_path *path = NULL; struct btrfs_key key; struct btrfs_key found_key; - struct btrfs_trans_handle *trans; struct extent_buffer *leaf; int slot; int ret; @@ -3462,53 +3719,6 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info) u32 count_sys = 0; int chunk_reserved = 0; - /* step one make some room on all the devices */ - devices = &fs_info->fs_devices->devices; - list_for_each_entry(device, devices, dev_list) { - old_size = btrfs_device_get_total_bytes(device); - size_to_free = div_factor(old_size, 1); - size_to_free = min_t(u64, size_to_free, SZ_1M); - if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state) || - btrfs_device_get_total_bytes(device) - - btrfs_device_get_bytes_used(device) > size_to_free || - test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) - continue; - - ret = btrfs_shrink_device(device, old_size - size_to_free); - if (ret == -ENOSPC) - break; - if (ret) { - /* btrfs_shrink_device never returns ret > 0 */ - WARN_ON(ret > 0); - goto error; - } - - trans = btrfs_start_transaction(dev_root, 0); - if (IS_ERR(trans)) { - ret = PTR_ERR(trans); - btrfs_info_in_rcu(fs_info, - "resize: unable to start transaction after shrinking device %s (error %d), old size %llu, new size %llu", - rcu_str_deref(device->name), ret, - old_size, old_size - size_to_free); - goto error; - } - - ret = btrfs_grow_device(trans, device, old_size); - if (ret) { - btrfs_end_transaction(trans); - /* btrfs_grow_device never returns ret > 0 */ - WARN_ON(ret > 0); - btrfs_info_in_rcu(fs_info, - "resize: unable to grow device after shrinking device %s (error %d), old size %llu, new size %llu", - rcu_str_deref(device->name), ret, - old_size, old_size - size_to_free); - goto error; - } - - btrfs_end_transaction(trans); - } - - /* step two, relocate all the chunks */ path = btrfs_alloc_path(); if (!path) { ret = -ENOMEM; @@ -3638,10 +3848,15 @@ again: ret = btrfs_relocate_chunk(fs_info, found_key.offset); mutex_unlock(&fs_info->delete_unused_bgs_mutex); - if (ret && ret != -ENOSPC) - goto error; if (ret == -ENOSPC) { enospc_errors++; + } else if (ret == 
-ETXTBSY) { + btrfs_info(fs_info, + "skipping relocation of block group %llu due to active swapfile", + found_key.offset); + ret = 0; + } else if (ret) { + goto error; } else { spin_lock(&fs_info->balance_lock); bctl->stat.completed++; @@ -3711,6 +3926,162 @@ static inline int validate_convert_profile(struct btrfs_balance_args *bctl_arg, (bctl_arg->target & ~allowed))); } +/* + * Fill @buf with textual description of balance filter flags @bargs, up to + * @size_buf including the terminating null. The output may be trimmed if it + * does not fit into the provided buffer. + */ +static void describe_balance_args(struct btrfs_balance_args *bargs, char *buf, + u32 size_buf) +{ + int ret; + u32 size_bp = size_buf; + char *bp = buf; + u64 flags = bargs->flags; + char tmp_buf[128] = {'\0'}; + + if (!flags) + return; + +#define CHECK_APPEND_NOARG(a) \ + do { \ + ret = snprintf(bp, size_bp, (a)); \ + if (ret < 0 || ret >= size_bp) \ + goto out_overflow; \ + size_bp -= ret; \ + bp += ret; \ + } while (0) + +#define CHECK_APPEND_1ARG(a, v1) \ + do { \ + ret = snprintf(bp, size_bp, (a), (v1)); \ + if (ret < 0 || ret >= size_bp) \ + goto out_overflow; \ + size_bp -= ret; \ + bp += ret; \ + } while (0) + +#define CHECK_APPEND_2ARG(a, v1, v2) \ + do { \ + ret = snprintf(bp, size_bp, (a), (v1), (v2)); \ + if (ret < 0 || ret >= size_bp) \ + goto out_overflow; \ + size_bp -= ret; \ + bp += ret; \ + } while (0) + + if (flags & BTRFS_BALANCE_ARGS_CONVERT) { + int index = btrfs_bg_flags_to_raid_index(bargs->target); + + CHECK_APPEND_1ARG("convert=%s,", get_raid_name(index)); + } + + if (flags & BTRFS_BALANCE_ARGS_SOFT) + CHECK_APPEND_NOARG("soft,"); + + if (flags & BTRFS_BALANCE_ARGS_PROFILES) { + btrfs_describe_block_groups(bargs->profiles, tmp_buf, + sizeof(tmp_buf)); + CHECK_APPEND_1ARG("profiles=%s,", tmp_buf); + } + + if (flags & BTRFS_BALANCE_ARGS_USAGE) + CHECK_APPEND_1ARG("usage=%llu,", bargs->usage); + + if (flags & BTRFS_BALANCE_ARGS_USAGE_RANGE) + CHECK_APPEND_2ARG("usage=%u..%u,", + bargs->usage_min, bargs->usage_max); + + if (flags & BTRFS_BALANCE_ARGS_DEVID) + CHECK_APPEND_1ARG("devid=%llu,", bargs->devid); + + if (flags & BTRFS_BALANCE_ARGS_DRANGE) + CHECK_APPEND_2ARG("drange=%llu..%llu,", + bargs->pstart, bargs->pend); + + if (flags & BTRFS_BALANCE_ARGS_VRANGE) + CHECK_APPEND_2ARG("vrange=%llu..%llu,", + bargs->vstart, bargs->vend); + + if (flags & BTRFS_BALANCE_ARGS_LIMIT) + CHECK_APPEND_1ARG("limit=%llu,", bargs->limit); + + if (flags & BTRFS_BALANCE_ARGS_LIMIT_RANGE) + CHECK_APPEND_2ARG("limit=%u..%u,", + bargs->limit_min, bargs->limit_max); + + if (flags & BTRFS_BALANCE_ARGS_STRIPES_RANGE) + CHECK_APPEND_2ARG("stripes=%u..%u,", + bargs->stripes_min, bargs->stripes_max); + +#undef CHECK_APPEND_2ARG +#undef CHECK_APPEND_1ARG +#undef CHECK_APPEND_NOARG + +out_overflow: + + if (size_bp < size_buf) + buf[size_buf - size_bp - 1] = '\0'; /* remove last , */ + else + buf[0] = '\0'; +} + +static void describe_balance_start_or_resume(struct btrfs_fs_info *fs_info) +{ + u32 size_buf = 1024; + char tmp_buf[192] = {'\0'}; + char *buf; + char *bp; + u32 size_bp = size_buf; + int ret; + struct btrfs_balance_control *bctl = fs_info->balance_ctl; + + buf = kzalloc(size_buf, GFP_KERNEL); + if (!buf) + return; + + bp = buf; + +#define CHECK_APPEND_1ARG(a, v1) \ + do { \ + ret = snprintf(bp, size_bp, (a), (v1)); \ + if (ret < 0 || ret >= size_bp) \ + goto out_overflow; \ + size_bp -= ret; \ + bp += ret; \ + } while (0) + + if (bctl->flags & BTRFS_BALANCE_FORCE) + CHECK_APPEND_1ARG("%s", "-f "); + + if 
(bctl->flags & BTRFS_BALANCE_DATA) { + describe_balance_args(&bctl->data, tmp_buf, sizeof(tmp_buf)); + CHECK_APPEND_1ARG("-d%s ", tmp_buf); + } + + if (bctl->flags & BTRFS_BALANCE_METADATA) { + describe_balance_args(&bctl->meta, tmp_buf, sizeof(tmp_buf)); + CHECK_APPEND_1ARG("-m%s ", tmp_buf); + } + + if (bctl->flags & BTRFS_BALANCE_SYSTEM) { + describe_balance_args(&bctl->sys, tmp_buf, sizeof(tmp_buf)); + CHECK_APPEND_1ARG("-s%s ", tmp_buf); + } + +#undef CHECK_APPEND_1ARG + +out_overflow: + + if (size_bp < size_buf) + buf[size_buf - size_bp - 1] = '\0'; /* remove last " " */ + btrfs_info(fs_info, "balance: %s %s", + (bctl->flags & BTRFS_BALANCE_RESUME) ? + "resume" : "start", buf); + + kfree(buf); +} + /* * Should be called with balance mutexe held */ @@ -3724,6 +4095,7 @@ int btrfs_balance(struct btrfs_fs_info *fs_info, int ret; u64 num_devices; unsigned seq; + bool reducing_integrity; if (btrfs_fs_closing(fs_info) || atomic_read(&fs_info->balance_pause_req) || @@ -3803,24 +4175,30 @@ int btrfs_balance(struct btrfs_fs_info *fs_info, !(bctl->sys.target & allowed)) || ((bctl->meta.flags & BTRFS_BALANCE_ARGS_CONVERT) && (fs_info->avail_metadata_alloc_bits & allowed) && - !(bctl->meta.target & allowed))) { - if (bctl->flags & BTRFS_BALANCE_FORCE) { - btrfs_info(fs_info, - "balance: force reducing metadata integrity"); - } else { - btrfs_err(fs_info, - "balance: reduces metadata integrity, use --force if you want this"); - ret = -EINVAL; - goto out; - } - } + !(bctl->meta.target & allowed))) + reducing_integrity = true; + else + reducing_integrity = false; + + /* if we're not converting, the target field is uninitialized */ + meta_target = (bctl->meta.flags & BTRFS_BALANCE_ARGS_CONVERT) ? + bctl->meta.target : fs_info->avail_metadata_alloc_bits; + data_target = (bctl->data.flags & BTRFS_BALANCE_ARGS_CONVERT) ? + bctl->data.target : fs_info->avail_data_alloc_bits; } while (read_seqretry(&fs_info->profiles_lock, seq)); - /* if we're not converting, the target field is uninitialized */ - meta_target = (bctl->meta.flags & BTRFS_BALANCE_ARGS_CONVERT) ? - bctl->meta.target : fs_info->avail_metadata_alloc_bits; - data_target = (bctl->data.flags & BTRFS_BALANCE_ARGS_CONVERT) ? 
- bctl->data.target : fs_info->avail_data_alloc_bits; + if (reducing_integrity) { + if (bctl->flags & BTRFS_BALANCE_FORCE) { + btrfs_info(fs_info, + "balance: force reducing metadata integrity"); + } else { + btrfs_err(fs_info, + "balance: reduces metadata integrity, use --force if you want this"); + ret = -EINVAL; + goto out; + } + } + if (btrfs_get_num_tolerated_disk_barrier_failures(meta_target) < btrfs_get_num_tolerated_disk_barrier_failures(data_target)) { int meta_index = btrfs_bg_flags_to_raid_index(meta_target); @@ -3850,11 +4228,19 @@ int btrfs_balance(struct btrfs_fs_info *fs_info, ASSERT(!test_bit(BTRFS_FS_BALANCE_RUNNING, &fs_info->flags)); set_bit(BTRFS_FS_BALANCE_RUNNING, &fs_info->flags); + describe_balance_start_or_resume(fs_info); mutex_unlock(&fs_info->balance_mutex); ret = __btrfs_balance(fs_info); mutex_lock(&fs_info->balance_mutex); + if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req)) + btrfs_info(fs_info, "balance: paused"); + else if (ret == -ECANCELED && atomic_read(&fs_info->balance_cancel_req)) + btrfs_info(fs_info, "balance: canceled"); + else + btrfs_info(fs_info, "balance: ended with status: %d", ret); + clear_bit(BTRFS_FS_BALANCE_RUNNING, &fs_info->flags); if (bargs) { @@ -3887,10 +4273,8 @@ static int balance_kthread(void *data) int ret = 0; mutex_lock(&fs_info->balance_mutex); - if (fs_info->balance_ctl) { - btrfs_info(fs_info, "balance: resuming"); + if (fs_info->balance_ctl) ret = btrfs_balance(fs_info, fs_info->balance_ctl, NULL); - } mutex_unlock(&fs_info->balance_mutex); return ret; @@ -4433,10 +4817,16 @@ again: ret = btrfs_relocate_chunk(fs_info, chunk_offset); mutex_unlock(&fs_info->delete_unused_bgs_mutex); - if (ret && ret != -ENOSPC) - goto done; - if (ret == -ENOSPC) + if (ret == -ENOSPC) { failed++; + } else if (ret) { + if (ret == -ETXTBSY) { + btrfs_warn(fs_info, + "could not shrink block group %llu due to active swapfile", + chunk_offset); + } + goto done; + } } while (key.offset-- > 0); if (failed && !retried) { @@ -4602,11 +4992,13 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans, int devs_min; /* min devs needed */ int devs_increment; /* ndevs has to be a multiple of this */ int ncopies; /* how many copies to data has */ + int nparity; /* number of stripes worth of bytes to + store parity information */ int ret; u64 max_stripe_size; u64 max_chunk_size; u64 stripe_size; - u64 num_bytes; + u64 chunk_size; int ndevs; int i; int j; @@ -4628,6 +5020,7 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans, devs_min = btrfs_raid_array[index].devs_min; devs_increment = btrfs_raid_array[index].devs_increment; ncopies = btrfs_raid_array[index].ncopies; + nparity = btrfs_raid_array[index].nparity; if (type & BTRFS_BLOCK_GROUP_DATA) { max_stripe_size = SZ_1G; @@ -4654,7 +5047,7 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans, BUG_ON(1); } - /* we don't want a chunk larger than 10% of writeable space */ + /* We don't want a chunk larger than 10% of writable space */ max_chunk_size = min(div_factor(fs_devices->total_rw_bytes, 1), max_chunk_size); @@ -4757,30 +5150,22 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans, * this will have to be fixed for RAID1 and RAID10 over * more drives */ - data_stripes = num_stripes / ncopies; - - if (type & BTRFS_BLOCK_GROUP_RAID5) - data_stripes = num_stripes - 1; - - if (type & BTRFS_BLOCK_GROUP_RAID6) - data_stripes = num_stripes - 2; + data_stripes = (num_stripes - nparity) / ncopies; /* * Use the number of data stripes to figure out 
how big this chunk * is really going to be in terms of logical address space, - * and compare that answer with the max chunk size + * and compare that answer with the max chunk size. If it's higher, + * we try to reduce stripe_size. */ if (stripe_size * data_stripes > max_chunk_size) { - stripe_size = div_u64(max_chunk_size, data_stripes); - - /* bump the answer up to a 16MB boundary */ - stripe_size = round_up(stripe_size, SZ_16M); - /* - * But don't go higher than the limits we found while searching - * for free extents + * Reduce stripe_size, round it up to a 16MB boundary again and + * then use it, unless it ends up being even bigger than the + * previous value we had already. */ - stripe_size = min(devices_info[ndevs - 1].max_avail, + stripe_size = min(round_up(div_u64(max_chunk_size, + data_stripes), SZ_16M), stripe_size); } @@ -4808,9 +5193,9 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans, map->type = type; map->sub_stripes = sub_stripes; - num_bytes = stripe_size * data_stripes; + chunk_size = stripe_size * data_stripes; - trace_btrfs_chunk_alloc(info, map, start, num_bytes); + trace_btrfs_chunk_alloc(info, map, start, chunk_size); em = alloc_extent_map(); if (!em) { @@ -4821,7 +5206,7 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans, set_bit(EXTENT_FLAG_FS_MAPPING, &em->flags); em->map_lookup = map; em->start = start; - em->len = num_bytes; + em->len = chunk_size; em->block_start = 0; em->block_len = em->len; em->orig_block_len = stripe_size; @@ -4839,14 +5224,13 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans, refcount_inc(&em->refs); write_unlock(&em_tree->lock); - ret = btrfs_make_block_group(trans, 0, type, start, num_bytes); + ret = btrfs_make_block_group(trans, 0, type, start, chunk_size); if (ret) goto error_del_extent; - for (i = 0; i < map->num_stripes; i++) { - num_bytes = map->stripes[i].dev->bytes_used + stripe_size; - btrfs_device_set_bytes_used(map->stripes[i].dev, num_bytes); - } + for (i = 0; i < map->num_stripes; i++) + btrfs_device_set_bytes_used(map->stripes[i].dev, + map->stripes[i].dev->bytes_used + stripe_size); atomic64_sub(stripe_size * map->num_stripes, &info->free_chunk_space); @@ -4890,7 +5274,7 @@ int btrfs_finish_chunk_alloc(struct btrfs_trans_handle *trans, int i = 0; int ret = 0; - em = get_chunk_map(fs_info, chunk_offset, chunk_size); + em = btrfs_get_chunk_map(fs_info, chunk_offset, chunk_size); if (IS_ERR(em)) return PTR_ERR(em); @@ -4971,10 +5355,10 @@ out: } /* - * Chunk allocation falls into two parts. The first part does works - * that make the new allocated chunk useable, but not do any operation - * that modifies the chunk tree. The second part does the works that - * require modifying the chunk tree. This division is important for the + * Chunk allocation falls into two parts. The first part does work + * that makes the new allocated chunk usable, but does not do any operation + * that modifies the chunk tree. The second part does the work that + * requires modifying the chunk tree. This division is important for the * bootstrap process of adding storage to a seed btrfs. 
*/ int btrfs_alloc_chunk(struct btrfs_trans_handle *trans, u64 type) @@ -5032,7 +5416,7 @@ int btrfs_chunk_readonly(struct btrfs_fs_info *fs_info, u64 chunk_offset) int miss_ndevs = 0; int i; - em = get_chunk_map(fs_info, chunk_offset, 1); + em = btrfs_get_chunk_map(fs_info, chunk_offset, 1); if (IS_ERR(em)) return 1; @@ -5092,7 +5476,7 @@ int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len) struct map_lookup *map; int ret; - em = get_chunk_map(fs_info, logical, len); + em = btrfs_get_chunk_map(fs_info, logical, len); if (IS_ERR(em)) /* * We could return errors for these cases, but that could get @@ -5122,11 +5506,11 @@ int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len) ret = 1; free_extent_map(em); - btrfs_dev_replace_read_lock(&fs_info->dev_replace); + down_read(&fs_info->dev_replace.rwsem); if (btrfs_dev_replace_is_ongoing(&fs_info->dev_replace) && fs_info->dev_replace.tgtdev) ret++; - btrfs_dev_replace_read_unlock(&fs_info->dev_replace); + up_read(&fs_info->dev_replace.rwsem); return ret; } @@ -5138,7 +5522,7 @@ unsigned long btrfs_full_stripe_len(struct btrfs_fs_info *fs_info, struct map_lookup *map; unsigned long len = fs_info->sectorsize; - em = get_chunk_map(fs_info, logical, len); + em = btrfs_get_chunk_map(fs_info, logical, len); if (!WARN_ON(IS_ERR(em))) { map = em->map_lookup; @@ -5155,7 +5539,7 @@ int btrfs_is_parity_mirror(struct btrfs_fs_info *fs_info, u64 logical, u64 len) struct map_lookup *map; int ret = 0; - em = get_chunk_map(fs_info, logical, len); + em = btrfs_get_chunk_map(fs_info, logical, len); if(!WARN_ON(IS_ERR(em))) { map = em->map_lookup; @@ -5314,7 +5698,7 @@ static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info, /* discard always return a bbio */ ASSERT(bbio_ret); - em = get_chunk_map(fs_info, logical, length); + em = btrfs_get_chunk_map(fs_info, logical, length); if (IS_ERR(em)) return PTR_ERR(em); @@ -5640,7 +6024,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, return __btrfs_map_block_for_discard(fs_info, logical, *length, bbio_ret); - em = get_chunk_map(fs_info, logical, *length); + em = btrfs_get_chunk_map(fs_info, logical, *length); if (IS_ERR(em)) return PTR_ERR(em); @@ -5699,17 +6083,21 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, *length = em->len - offset; } - /* This is for when we're called from btrfs_merge_bio_hook() and all - it cares about is the length */ + /* + * This is for when we're called from btrfs_bio_fits_in_stripe and all + * it cares about is the length + */ if (!bbio_ret) goto out; - btrfs_dev_replace_read_lock(dev_replace); + down_read(&dev_replace->rwsem); dev_replace_is_ongoing = btrfs_dev_replace_is_ongoing(dev_replace); + /* + * Hold the semaphore for read during the whole operation, write is + * requested at commit time but must wait. 
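The __btrfs_map_block() hunks above replace the custom dev_replace reader locking with a plain rwsem: taken for read, dropped immediately if no replace is running, otherwise held across the whole mapping so the replace target cannot vanish. A pthreads sketch of that conditional-hold pattern; all names here are illustrative, not btrfs's:

#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t replace_lock = PTHREAD_RWLOCK_INITIALIZER;
static bool replace_ongoing;	/* protected by replace_lock */

static void map_block(void)
{
	bool held;

	pthread_rwlock_rdlock(&replace_lock);
	held = replace_ongoing;
	if (!held)
		pthread_rwlock_unlock(&replace_lock);	/* fast path out */

	/* ... do the block mapping; while 'held', writers (the commit
	 * path requesting the lock for write) must wait ... */

	if (held)
		pthread_rwlock_unlock(&replace_lock);	/* let writers in */
}

int main(void)
{
	map_block();
	return 0;
}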
+ */ if (!dev_replace_is_ongoing) - btrfs_dev_replace_read_unlock(dev_replace); - else - btrfs_dev_replace_set_lock_blocking(dev_replace); + up_read(&dev_replace->rwsem); if (dev_replace_is_ongoing && mirror_num == map->num_stripes + 1 && !need_full_stripe(op) && dev_replace->tgtdev != NULL) { @@ -5904,12 +6292,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, } out: if (dev_replace_is_ongoing) { - ASSERT(atomic_read(&dev_replace->blocking_readers) > 0); - btrfs_dev_replace_read_lock(dev_replace); - /* Barrier implied by atomic_dec_and_test */ - if (atomic_dec_and_test(&dev_replace->blocking_readers)) - cond_wake_up_nomb(&dev_replace->read_lock_wq); - btrfs_dev_replace_read_unlock(dev_replace); + lockdep_assert_held(&dev_replace->rwsem); + /* Unlock and let waiting writers proceed */ + up_read(&dev_replace->rwsem); } free_extent_map(em); return ret; @@ -5943,7 +6328,7 @@ int btrfs_rmap_block(struct btrfs_fs_info *fs_info, u64 chunk_start, u64 rmap_len; int i, j, nr = 0; - em = get_chunk_map(fs_info, chunk_start, 1); + em = btrfs_get_chunk_map(fs_info, chunk_start, 1); if (IS_ERR(em)) return -EIO; @@ -6083,12 +6468,6 @@ static noinline void btrfs_schedule_bio(struct btrfs_device *device, int should_queue = 1; struct btrfs_pending_bios *pending_bios; - if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state) || - !device->bdev) { - bio_io_error(bio); - return; - } - /* don't bother with additional async steps for reads, right now */ if (bio_op(bio) == REQ_OP_READ) { btrfsic_submit_bio(bio); @@ -6217,7 +6596,8 @@ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio, for (dev_nr = 0; dev_nr < total_devs; dev_nr++) { dev = bbio->stripes[dev_nr].dev; - if (!dev || !dev->bdev || + if (!dev || !dev->bdev || test_bit(BTRFS_DEV_STATE_MISSING, + &dev->dev_state) || (bio_op(first_bio) == REQ_OP_WRITE && !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))) { bbio_error(bbio, first_bio, logical); @@ -6245,7 +6625,7 @@ struct btrfs_device *btrfs_find_device(struct btrfs_fs_info *fs_info, u64 devid, cur_devices = fs_info->fs_devices; while (cur_devices) { if (!fsid || - !memcmp(cur_devices->fsid, fsid, BTRFS_FSID_SIZE)) { + !memcmp(cur_devices->metadata_uuid, fsid, BTRFS_FSID_SIZE)) { device = find_device(cur_devices, devid, uuid); if (device) return device; @@ -6574,12 +6954,12 @@ static struct btrfs_fs_devices *open_seed_devices(struct btrfs_fs_info *fs_info, fs_devices = fs_devices->seed; } - fs_devices = find_fsid(fsid); + fs_devices = find_fsid(fsid, NULL); if (!fs_devices) { if (!btrfs_test_opt(fs_info, DEGRADED)) return ERR_PTR(-ENOENT); - fs_devices = alloc_fs_devices(fsid); + fs_devices = alloc_fs_devices(fsid, NULL); if (IS_ERR(fs_devices)) return fs_devices; @@ -6629,7 +7009,7 @@ static int read_one_dev(struct btrfs_fs_info *fs_info, read_extent_buffer(leaf, fs_uuid, btrfs_device_fsid(dev_item), BTRFS_FSID_SIZE); - if (memcmp(fs_uuid, fs_info->fsid, BTRFS_FSID_SIZE)) { + if (memcmp(fs_uuid, fs_devices->metadata_uuid, BTRFS_FSID_SIZE)) { fs_devices = open_seed_devices(fs_info, fs_uuid); if (IS_ERR(fs_devices)) return PTR_ERR(fs_devices); @@ -6876,7 +7256,7 @@ bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info, if (missing > max_tolerated) { if (!failing_dev) btrfs_warn(fs_info, - "chunk %llu missing %d devices, max tolerance is %d for writeable mount", + "chunk %llu missing %d devices, max tolerance is %d for writable mount", em->start, missing, max_tolerated); free_extent_map(em); ret = false; @@ -7387,6 +7767,7 @@ static int 
verify_one_dev_extent(struct btrfs_fs_info *fs_info, struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree; struct extent_map *em; struct map_lookup *map; + struct btrfs_device *dev; u64 stripe_len; bool found = false; int ret = 0; @@ -7436,6 +7817,22 @@ static int verify_one_dev_extent(struct btrfs_fs_info *fs_info, physical_offset, devid); ret = -EUCLEAN; } + + /* Make sure no dev extent is beyond device bondary */ + dev = btrfs_find_device(fs_info, devid, NULL, NULL); + if (!dev) { + btrfs_err(fs_info, "failed to find devid %llu", devid); + ret = -EUCLEAN; + goto out; + } + if (physical_offset + physical_len > dev->disk_total_bytes) { + btrfs_err(fs_info, +"dev extent devid %llu physical offset %llu len %llu is beyond device boundary %llu", + devid, physical_offset, physical_len, + dev->disk_total_bytes); + ret = -EUCLEAN; + goto out; + } out: free_extent_map(em); return ret; @@ -7478,6 +7875,8 @@ int btrfs_verify_dev_extents(struct btrfs_fs_info *fs_info) struct btrfs_path *path; struct btrfs_root *root = fs_info->dev_root; struct btrfs_key key; + u64 prev_devid = 0; + u64 prev_dev_ext_end = 0; int ret = 0; key.objectid = 1; @@ -7522,10 +7921,22 @@ int btrfs_verify_dev_extents(struct btrfs_fs_info *fs_info) chunk_offset = btrfs_dev_extent_chunk_offset(leaf, dext); physical_len = btrfs_dev_extent_length(leaf, dext); + /* Check if this dev extent overlaps with the previous one */ + if (devid == prev_devid && physical_offset < prev_dev_ext_end) { + btrfs_err(fs_info, +"dev extent devid %llu physical offset %llu overlap with previous dev extent end %llu", + devid, physical_offset, prev_dev_ext_end); + ret = -EUCLEAN; + goto out; + } + ret = verify_one_dev_extent(fs_info, chunk_offset, devid, physical_offset, physical_len); if (ret < 0) goto out; + prev_devid = devid; + prev_dev_ext_end = physical_offset + physical_len; + ret = btrfs_next_item(root, path); if (ret < 0) goto out; @@ -7541,3 +7952,27 @@ out: btrfs_free_path(path); return ret; } + +/* + * Check whether the given block group or device is pinned by any inode being + * used as a swapfile. + */ +bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr) +{ + struct btrfs_swapfile_pin *sp; + struct rb_node *node; + + spin_lock(&fs_info->swapfile_pins_lock); + node = fs_info->swapfile_pins.rb_node; + while (node) { + sp = rb_entry(node, struct btrfs_swapfile_pin, node); + if (ptr < sp->ptr) + node = node->rb_left; + else if (ptr > sp->ptr) + node = node->rb_right; + else + break; + } + spin_unlock(&fs_info->swapfile_pins_lock); + return node != NULL; +} diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h index aefce895e994..ed806649a473 100644 --- a/fs/btrfs/volumes.h +++ b/fs/btrfs/volumes.h @@ -210,6 +210,8 @@ BTRFS_DEVICE_GETSET_FUNCS(bytes_used); struct btrfs_fs_devices { u8 fsid[BTRFS_FSID_SIZE]; /* FS specific uuid */ + u8 metadata_uuid[BTRFS_FSID_SIZE]; + bool fsid_change; struct list_head fs_list; u64 num_devices; @@ -218,6 +220,10 @@ struct btrfs_fs_devices { u64 missing_devices; u64 total_rw_bytes; u64 total_devices; + + /* Highest generation number of seen devices */ + u64 latest_generation; + struct block_device *latest_bdev; /* all of the devices in the FS, protected by a mutex @@ -261,15 +267,12 @@ struct btrfs_fs_devices { * we allocate are actually btrfs_io_bios. We'll cram as much of * struct btrfs_bio as we can into this over time. 
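btrfs_pinned_by_swapfile() added above is an ordered-tree lookup keyed on the raw pointer value. A compact sketch of the same walk with a hand-rolled binary tree standing in for the kernel's rb_node (ordering by pointer value is the kernel's own choice of key here):

#include <stdbool.h>
#include <stddef.h>

struct pin_node {
	void *ptr;			/* block group or device */
	struct pin_node *left, *right;
};

static bool is_pinned(struct pin_node *root, void *ptr)
{
	while (root) {
		if (ptr < root->ptr)
			root = root->left;
		else if (ptr > root->ptr)
			root = root->right;
		else
			return true;	/* some swapfile pins this object */
	}
	return false;
}

int main(void)
{
	int a, b;
	struct pin_node n1 = { &a, NULL, NULL };
	struct pin_node n2 = { &b, NULL, NULL };

	/* keep the tree ordered whatever the stack layout is */
	if ((void *)&b < (void *)&a)
		n1.left = &n2;
	else
		n1.right = &n2;
	return is_pinned(&n1, &b) ? 0 : 1;
}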
*/ -typedef void (btrfs_io_bio_end_io_t) (struct btrfs_io_bio *bio, int err); struct btrfs_io_bio { unsigned int mirror_num; unsigned int stripe_index; u64 logical; u8 *csum; u8 csum_inline[BTRFS_BIO_INLINE_CSUM_SIZE]; - u8 *csum_allocated; - btrfs_io_bio_end_io_t *end_io; struct bvec_iter iter; /* * This member must come last, bio_alloc_bioset will allocate enough @@ -283,15 +286,20 @@ static inline struct btrfs_io_bio *btrfs_io_bio(struct bio *bio) return container_of(bio, struct btrfs_io_bio, bio); } +static inline void btrfs_io_bio_free_csum(struct btrfs_io_bio *io_bio) +{ + if (io_bio->csum != io_bio->csum_inline) { + kfree(io_bio->csum); + io_bio->csum = NULL; + } +} + struct btrfs_bio_stripe { struct btrfs_device *dev; u64 physical; u64 length; /* only used for discard mappings */ }; -struct btrfs_bio; -typedef void (btrfs_bio_end_io_t) (struct btrfs_bio *bio, int err); - struct btrfs_bio { refcount_t refs; atomic_t stripes_pending; @@ -331,6 +339,8 @@ struct btrfs_raid_attr { int tolerated_failures; /* max tolerated fail devs */ int devs_increment; /* ndevs has to be a multiple of this */ int ncopies; /* how many copies to data has */ + int nparity; /* number of stripes worth of bytes to store + * parity information */ int mindev_error; /* error code if min devs requisite is unmet */ const char raid_name[8]; /* name of the raid */ u64 bg_flag; /* block group flag of the raid */ @@ -430,6 +440,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *path); int btrfs_balance(struct btrfs_fs_info *fs_info, struct btrfs_balance_control *bctl, struct btrfs_ioctl_balance_args *bargs); +void btrfs_describe_block_groups(u64 flags, char *buf, u32 size_buf); int btrfs_resume_balance_async(struct btrfs_fs_info *fs_info); int btrfs_recover_balance(struct btrfs_fs_info *fs_info); int btrfs_pause_balance(struct btrfs_fs_info *fs_info); @@ -462,6 +473,8 @@ unsigned long btrfs_full_stripe_len(struct btrfs_fs_info *fs_info, int btrfs_finish_chunk_alloc(struct btrfs_trans_handle *trans, u64 chunk_offset, u64 chunk_size); int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset); +struct extent_map *btrfs_get_chunk_map(struct btrfs_fs_info *fs_info, + u64 logical, u64 length); static inline void btrfs_dev_stat_inc(struct btrfs_device *dev, int index) diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c index ea78c3d6dcfc..f141b45ce349 100644 --- a/fs/btrfs/xattr.c +++ b/fs/btrfs/xattr.c @@ -11,6 +11,7 @@ #include #include #include +#include #include "ctree.h" #include "btrfs_inode.h" #include "transaction.h" @@ -422,9 +423,15 @@ static int btrfs_initxattrs(struct inode *inode, { const struct xattr *xattr; struct btrfs_trans_handle *trans = fs_info; + unsigned int nofs_flag; char *name; int err = 0; + /* + * We're holding a transaction handle, so use a NOFS memory allocation + * context to avoid deadlock if reclaim happens. 
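The new btrfs_io_bio_free_csum() helper above frees the checksum buffer only when it does not point at the embedded csum_inline array. The inline-or-heap idiom it encodes, as a stand-alone sketch with illustrative sizes and names:

#include <stdlib.h>

#define INLINE_CSUM_SIZE 64

struct io_ctx {
	unsigned char *csum;			/* inline or heap */
	unsigned char csum_inline[INLINE_CSUM_SIZE];
};

static int ctx_alloc_csum(struct io_ctx *c, size_t need)
{
	if (need <= INLINE_CSUM_SIZE) {
		c->csum = c->csum_inline;	/* no allocation at all */
		return 0;
	}
	c->csum = malloc(need);
	return c->csum ? 0 : -1;
}

static void ctx_free_csum(struct io_ctx *c)
{
	/* only heap buffers are freed; the inline array lives in the ctx */
	if (c->csum != c->csum_inline) {
		free(c->csum);
		c->csum = NULL;
	}
}

int main(void)
{
	struct io_ctx c;

	if (ctx_alloc_csum(&c, 32))	/* inline path */
		return 1;
	ctx_free_csum(&c);		/* no-op */
	if (ctx_alloc_csum(&c, 4096))	/* heap path */
		return 1;
	ctx_free_csum(&c);		/* real free */
	return 0;
}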
+ */ + nofs_flag = memalloc_nofs_save(); for (xattr = xattr_array; xattr->name != NULL; xattr++) { name = kmalloc(XATTR_SECURITY_PREFIX_LEN + strlen(xattr->name) + 1, GFP_KERNEL); @@ -440,6 +447,7 @@ static int btrfs_initxattrs(struct inode *inode, if (err < 0) break; } + memalloc_nofs_restore(nofs_flag); return err; } diff --git a/fs/buffer.c b/fs/buffer.c index 1286c2b95498..d60d61e8ed7d 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -3060,11 +3060,6 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, */ bio = bio_alloc(GFP_NOIO, 1); - if (wbc) { - wbc_init_bio(wbc, bio); - wbc_account_io(wbc, bh->b_page, bh->b_size); - } - bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); bio_set_dev(bio, bh->b_bdev); bio->bi_write_hint = write_hint; @@ -3084,6 +3079,11 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, op_flags |= REQ_PRIO; bio_set_op_attrs(bio, op, op_flags); + if (wbc) { + wbc_init_bio(wbc, bio); + wbc_account_io(wbc, bh->b_page, bh->b_size); + } + submit_bio(bio); return 0; } diff --git a/fs/ceph/super.h b/fs/ceph/super.h index 79a265ba9200..dfb64a5211b6 100644 --- a/fs/ceph/super.h +++ b/fs/ceph/super.h @@ -810,7 +810,7 @@ static inline int default_congestion_kb(void) * This allows larger machines to have larger/more transfers. * Limit the default to 256M */ - congestion_kb = (16*int_sqrt(totalram_pages)) << (PAGE_SHIFT-10); + congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10); if (congestion_kb > 256*1024) congestion_kb = 256*1024; diff --git a/fs/cifs/file.c b/fs/cifs/file.c index c9bc56b1baac..6706328ce03f 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -1103,10 +1103,10 @@ try_again: rc = posix_lock_file(file, flock, NULL); up_write(&cinode->lock_sem); if (rc == FILE_LOCK_DEFERRED) { - rc = wait_event_interruptible(flock->fl_wait, !flock->fl_next); + rc = wait_event_interruptible(flock->fl_wait, !flock->fl_blocker); if (!rc) goto try_again; - posix_unblock_lock(flock); + locks_delete_block(flock); } return rc; } diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index e94a8d1d08a3..a568dac7b3a1 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -1724,7 +1724,7 @@ static struct smbd_connection *_smbd_get_connection( info->responder_resources); /* Need to send IRD/ORD in private data for iWARP */ - info->id->device->get_port_immutable( + info->id->device->ops.get_port_immutable( info->id->device, info->id->port_num, &port_immutable); if (port_immutable.core_cap_flags & RDMA_CORE_PORT_IWARP) { ird_ord_hdr[0] = info->responder_resources; diff --git a/fs/dax.c b/fs/dax.c index 48132eca3761..6959837cc465 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -246,18 +246,16 @@ static void wait_entry_unlocked(struct xa_state *xas, void *entry) ewait.wait.func = wake_exceptional_entry_func; wq = dax_entry_waitqueue(xas, entry, &ewait.key); - prepare_to_wait_exclusive(wq, &ewait.wait, TASK_UNINTERRUPTIBLE); + /* + * Unlike get_unlocked_entry() there is no guarantee that this + * path ever successfully retrieves an unlocked entry before an + * inode dies. Perform a non-exclusive wait in case this path + * never successfully performs its own wake up. + */ + prepare_to_wait(wq, &ewait.wait, TASK_UNINTERRUPTIBLE); xas_unlock_irq(xas); schedule(); finish_wait(wq, &ewait.wait); - - /* - * Entry lock waits are exclusive. Wake up the next waiter since - * we aren't sure we will acquire the entry lock and thus wake - * the next waiter up on unlock. 
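btrfs_initxattrs() above brackets its allocations with memalloc_nofs_save()/memalloc_nofs_restore(). The detail worth noting is that restore takes the saved value rather than clearing the flag outright, so nested NOFS sections compose correctly. A user-space model of that contract (the flag word is a stand-in for the task's allocation flags):

#include <stdio.h>

static _Thread_local unsigned int alloc_flags;
#define FLAG_NOFS 0x1u

static unsigned int nofs_save(void)
{
	unsigned int old = alloc_flags;

	alloc_flags |= FLAG_NOFS;
	return old;
}

static void nofs_restore(unsigned int old)
{
	alloc_flags = old;	/* not simply cleared: nesting-safe */
}

static void do_xattr_init(void)
{
	unsigned int flags = nofs_save();

	/* ... allocations here must not recurse into the filesystem ... */
	nofs_restore(flags);
}

int main(void)
{
	do_xattr_init();
	printf("flags after: %#x\n", alloc_flags);	/* back to 0 */
	return 0;
}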
- */ - if (waitqueue_active(wq)) - __wake_up(wq, TASK_NORMAL, 1, &ewait.key); } static void put_unlocked_entry(struct xa_state *xas, void *entry) @@ -779,7 +777,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index, i_mmap_lock_read(mapping); vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) { - unsigned long address, start, end; + struct mmu_notifier_range range; + unsigned long address; cond_resched(); @@ -793,7 +792,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index, * call mmu_notifier_invalidate_range_start() on our behalf * before taking any lock. */ - if (follow_pte_pmd(vma->vm_mm, address, &start, &end, &ptep, &pmdp, &ptl)) + if (follow_pte_pmd(vma->vm_mm, address, &range, + &ptep, &pmdp, &ptl)) continue; /* @@ -835,7 +835,7 @@ unlock_pte: pte_unmap_unlock(ptep, ptl); } - mmu_notifier_invalidate_range_end(vma->vm_mm, start, end); + mmu_notifier_invalidate_range_end(&range); } i_mmap_unlock_read(mapping); } diff --git a/fs/direct-io.c b/fs/direct-io.c index 41a0e97252ae..dbc1a1f080ce 100644 --- a/fs/direct-io.c +++ b/fs/direct-io.c @@ -518,7 +518,7 @@ static struct bio *dio_await_one(struct dio *dio) dio->waiter = current; spin_unlock_irqrestore(&dio->bio_lock, flags); if (!(dio->iocb->ki_flags & IOCB_HIPRI) || - !blk_poll(dio->bio_disk->queue, dio->bio_cookie)) + !blk_poll(dio->bio_disk->queue, dio->bio_cookie, true)) io_schedule(); /* wake up sets us TASK_RUNNING */ spin_lock_irqsave(&dio->bio_lock, flags); @@ -1265,6 +1265,8 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode, } else { dio->op = REQ_OP_READ; } + if (iocb->ki_flags & IOCB_HIPRI) + dio->op_flags |= REQ_HIPRI; /* * For AIO O_(D)SYNC writes we need to defer completions to a workqueue diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c index 562fa8c3edff..47ee66d70109 100644 --- a/fs/dlm/ast.c +++ b/fs/dlm/ast.c @@ -292,6 +292,8 @@ void dlm_callback_suspend(struct dlm_ls *ls) flush_workqueue(ls->ls_callback_wq); } +#define MAX_CB_QUEUE 25 + void dlm_callback_resume(struct dlm_ls *ls) { struct dlm_lkb *lkb, *safe; @@ -302,15 +304,23 @@ void dlm_callback_resume(struct dlm_ls *ls) if (!ls->ls_callback_wq) return; +more: mutex_lock(&ls->ls_cb_mutex); list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) { list_del_init(&lkb->lkb_cb_list); queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work); count++; + if (count == MAX_CB_QUEUE) + break; } mutex_unlock(&ls->ls_cb_mutex); if (count) log_rinfo(ls, "dlm_callback_resume %d", count); + if (count == MAX_CB_QUEUE) { + count = 0; + cond_resched(); + goto more; + } } diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c index cc91963683de..a928ba008d7d 100644 --- a/fs/dlm/lock.c +++ b/fs/dlm/lock.c @@ -1209,6 +1209,7 @@ static int create_lkb(struct dlm_ls *ls, struct dlm_lkb **lkb_ret) if (rv < 0) { log_error(ls, "create_lkb idr error %d", rv); + dlm_free_lkb(lkb); return rv; } @@ -4179,6 +4180,7 @@ static int receive_convert(struct dlm_ls *ls, struct dlm_message *ms) (unsigned long long)lkb->lkb_recover_seq, ms->m_header.h_nodeid, ms->m_lkid); error = -ENOENT; + dlm_put_lkb(lkb); goto fail; } @@ -4232,6 +4234,7 @@ static int receive_unlock(struct dlm_ls *ls, struct dlm_message *ms) lkb->lkb_id, lkb->lkb_remid, ms->m_header.h_nodeid, ms->m_lkid); error = -ENOENT; + dlm_put_lkb(lkb); goto fail; } @@ -5792,20 +5795,20 @@ int dlm_user_request(struct dlm_ls *ls, struct dlm_user_args *ua, goto out; } } - - /* After ua is attached to lkb it will be freed by dlm_free_lkb(). 
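dlm_callback_resume() above now requeues delayed callbacks in batches of MAX_CB_QUEUE and yields between batches, instead of walking an unbounded list under the mutex. The shape of that drain loop, sketched with stand-in locking and a counter in place of the real list:

#include <sched.h>
#include <stdio.h>

#define BATCH 25

static int pending = 103;	/* stand-in for the ls_cb_delay list */

static int drain_batch(void)
{
	int count = 0;

	/* mutex_lock() in the real code */
	while (pending > 0 && count < BATCH) {
		pending--;	/* queue_work() in the real code */
		count++;
	}
	/* mutex_unlock() */
	return count;
}

int main(void)
{
	/* a full batch means there may be more: yield, then go again */
	while (drain_batch() == BATCH)
		sched_yield();	/* cond_resched() equivalent */
	printf("drained, %d left\n", pending);
	return 0;
}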
- When DLM_IFL_USER is set, the dlm knows that this is a userspace - lock and that lkb_astparam is the dlm_user_args structure. */ - error = set_lock_args(mode, &ua->lksb, flags, namelen, timeout_cs, fake_astfn, ua, fake_bastfn, &args); - lkb->lkb_flags |= DLM_IFL_USER; - if (error) { + kfree(ua->lksb.sb_lvbptr); + ua->lksb.sb_lvbptr = NULL; + kfree(ua); __put_lkb(ls, lkb); goto out; } + /* After ua is attached to lkb it will be freed by dlm_free_lkb(). + When DLM_IFL_USER is set, the dlm knows that this is a userspace + lock and that lkb_astparam is the dlm_user_args structure. */ + lkb->lkb_flags |= DLM_IFL_USER; error = request_lock(ls, lkb, name, namelen, &args); switch (error) { diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c index 5ba94be006ee..db43b98c4d64 100644 --- a/fs/dlm/lockspace.c +++ b/fs/dlm/lockspace.c @@ -431,7 +431,7 @@ static int new_lockspace(const char *name, const char *cluster, int do_unreg = 0; int namelen = strlen(name); - if (namelen > DLM_LOCKSPACE_LEN) + if (namelen > DLM_LOCKSPACE_LEN || namelen == 0) return -EINVAL; if (!lvblen || (lvblen % 8)) @@ -680,11 +680,9 @@ static int new_lockspace(const char *name, const char *cluster, kfree(ls->ls_recover_buf); out_lkbidr: idr_destroy(&ls->ls_lkbidr); - for (i = 0; i < DLM_REMOVE_NAMES_MAX; i++) { - if (ls->ls_remove_names[i]) - kfree(ls->ls_remove_names[i]); - } out_rsbtbl: + for (i = 0; i < DLM_REMOVE_NAMES_MAX; i++) + kfree(ls->ls_remove_names[i]); vfree(ls->ls_rsbtbl); out_lsfree: if (do_unreg) @@ -807,6 +805,7 @@ static int release_lockspace(struct dlm_ls *ls, int force) dlm_delete_debug_file(ls); + idr_destroy(&ls->ls_recover_idr); kfree(ls->ls_recover_buf); /* diff --git a/fs/dlm/member.c b/fs/dlm/member.c index 3fda3832cf6a..0bc43b35d2c5 100644 --- a/fs/dlm/member.c +++ b/fs/dlm/member.c @@ -671,7 +671,7 @@ int dlm_ls_stop(struct dlm_ls *ls) int dlm_ls_start(struct dlm_ls *ls) { struct dlm_recover *rv, *rv_old; - struct dlm_config_node *nodes; + struct dlm_config_node *nodes = NULL; int error, count; rv = kzalloc(sizeof(*rv), GFP_NOFS); @@ -680,7 +680,7 @@ int dlm_ls_start(struct dlm_ls *ls) error = dlm_config_nodes(ls->ls_name, &nodes, &count); if (error < 0) - goto fail; + goto fail_rv; spin_lock(&ls->ls_recover_lock); @@ -712,8 +712,9 @@ int dlm_ls_start(struct dlm_ls *ls) return 0; fail: - kfree(rv); kfree(nodes); + fail_rv: + kfree(rv); return error; } diff --git a/fs/dlm/memory.c b/fs/dlm/memory.c index 7cd24bccd4fe..37be29f21d04 100644 --- a/fs/dlm/memory.c +++ b/fs/dlm/memory.c @@ -38,10 +38,8 @@ int __init dlm_memory_init(void) void dlm_memory_exit(void) { - if (lkb_cache) - kmem_cache_destroy(lkb_cache); - if (rsb_cache) - kmem_cache_destroy(rsb_cache); + kmem_cache_destroy(lkb_cache); + kmem_cache_destroy(rsb_cache); } char *dlm_allocate_lvb(struct dlm_ls *ls) @@ -86,8 +84,7 @@ void dlm_free_lkb(struct dlm_lkb *lkb) struct dlm_user_args *ua; ua = lkb->lkb_ua; if (ua) { - if (ua->lksb.sb_lvbptr) - kfree(ua->lksb.sb_lvbptr); + kfree(ua->lksb.sb_lvbptr); kfree(ua); } } diff --git a/fs/dlm/user.c b/fs/dlm/user.c index 2a669390cd7f..3c84c62dadb7 100644 --- a/fs/dlm/user.c +++ b/fs/dlm/user.c @@ -25,6 +25,7 @@ #include "lvb_table.h" #include "user.h" #include "ast.h" +#include "config.h" static const char name_prefix[] = "dlm"; static const struct file_operations device_fops; @@ -404,7 +405,7 @@ static int device_create_lockspace(struct dlm_lspace_params *params) if (!capable(CAP_SYS_ADMIN)) return -EPERM; - error = dlm_new_lockspace(params->name, NULL, params->flags, + error = 
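Several of the dlm cleanups above (memory.c, lockspace.c) drop "if (p) kfree(p)" guards because kfree(NULL) is defined to be a no-op, exactly like free(NULL) in user space. That guarantee is what lets error paths free everything unconditionally:

#include <stdlib.h>

struct caches {
	void *lkb_cache;
	void *rsb_cache;
};

static void caches_exit(struct caches *c)
{
	/* free(NULL) does nothing, so no if() guards are needed */
	free(c->lkb_cache);
	free(c->rsb_cache);
}

int main(void)
{
	struct caches c = { malloc(32), NULL };	/* second alloc "failed" */

	caches_exit(&c);			/* still safe */
	return 0;
}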
dlm_new_lockspace(params->name, dlm_config.ci_cluster_name, params->flags, DLM_USER_LVB_LEN, NULL, NULL, NULL, &lockspace); if (error) @@ -702,7 +703,7 @@ static int copy_result_to_user(struct dlm_user_args *ua, int compat, result.version[0] = DLM_DEVICE_VERSION_MAJOR; result.version[1] = DLM_DEVICE_VERSION_MINOR; result.version[2] = DLM_DEVICE_VERSION_PATCH; - memcpy(&result.lksb, &ua->lksb, sizeof(struct dlm_lksb)); + memcpy(&result.lksb, &ua->lksb, offsetof(struct dlm_lksb, sb_lvbptr)); result.user_lksb = ua->user_lksb; /* FIXME: dlm1 provides for the user's bastparam/addr to not be updated diff --git a/fs/eventpoll.c b/fs/eventpoll.c index 42bbe6824b4b..8a5a1010886b 100644 --- a/fs/eventpoll.c +++ b/fs/eventpoll.c @@ -2223,31 +2223,13 @@ SYSCALL_DEFINE6(epoll_pwait, int, epfd, struct epoll_event __user *, events, * If the caller wants a certain signal mask to be set during the wait, * we apply it here. */ - if (sigmask) { - if (sigsetsize != sizeof(sigset_t)) - return -EINVAL; - if (copy_from_user(&ksigmask, sigmask, sizeof(ksigmask))) - return -EFAULT; - sigsaved = current->blocked; - set_current_blocked(&ksigmask); - } + error = set_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (error) + return error; error = do_epoll_wait(epfd, events, maxevents, timeout); - /* - * If we changed the signal mask, we need to restore the original one. - * In case we've got a signal while waiting, we do not restore the - * signal mask yet, and we allow do_signal() to deliver the signal on - * the way back to userspace, before the signal mask is restored. - */ - if (sigmask) { - if (error == -EINTR) { - memcpy(¤t->saved_sigmask, &sigsaved, - sizeof(sigsaved)); - set_restore_sigmask(); - } else - set_current_blocked(&sigsaved); - } + restore_user_sigmask(sigmask, &sigsaved); return error; } @@ -2266,31 +2248,13 @@ COMPAT_SYSCALL_DEFINE6(epoll_pwait, int, epfd, * If the caller wants a certain signal mask to be set during the wait, * we apply it here. */ - if (sigmask) { - if (sigsetsize != sizeof(compat_sigset_t)) - return -EINVAL; - if (get_compat_sigset(&ksigmask, sigmask)) - return -EFAULT; - sigsaved = current->blocked; - set_current_blocked(&ksigmask); - } + err = set_compat_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (err) + return err; err = do_epoll_wait(epfd, events, maxevents, timeout); - /* - * If we changed the signal mask, we need to restore the original one. - * In case we've got a signal while waiting, we do not restore the - * signal mask yet, and we allow do_signal() to deliver the signal on - * the way back to userspace, before the signal mask is restored. 
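copy_result_to_user() above switches from copying the whole dlm_lksb to copying offsetof(struct dlm_lksb, sb_lvbptr) bytes, so the kernel-side lvb pointer never reaches user space. The offsetof() truncation trick in isolation; the struct layout here is illustrative, not dlm's:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct lksb {
	int sb_status;
	unsigned int sb_lkid;
	char sb_flags;
	char *sb_lvbptr;	/* kernel address: must not be copied out */
};

int main(void)
{
	struct lksb in = { 0, 42, 1, (char *)&in };
	struct lksb out;

	memset(&out, 0, sizeof(out));
	/* copy only the members that precede sb_lvbptr */
	memcpy(&out, &in, offsetof(struct lksb, sb_lvbptr));
	printf("lkid=%u lvbptr=%p (stays NULL)\n", out.sb_lkid,
	       (void *)out.sb_lvbptr);
	return 0;
}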
- */ - if (sigmask) { - if (err == -EINTR) { - memcpy(¤t->saved_sigmask, &sigsaved, - sizeof(sigsaved)); - set_restore_sigmask(); - } else - set_current_blocked(&sigsaved); - } + restore_user_sigmask(sigmask, &sigsaved); return err; } diff --git a/fs/ext2/super.c b/fs/ext2/super.c index eb11502e3fcd..73b2d528237f 100644 --- a/fs/ext2/super.c +++ b/fs/ext2/super.c @@ -73,7 +73,7 @@ void ext2_error(struct super_block *sb, const char *function, if (test_opt(sb, ERRORS_PANIC)) panic("EXT2-fs: panic from previous error\n"); - if (test_opt(sb, ERRORS_RO)) { + if (!sb_rdonly(sb) && test_opt(sb, ERRORS_RO)) { ext2_msg(sb, KERN_CRIT, "error: remounting filesystem read-only"); sb->s_flags |= SB_RDONLY; @@ -148,10 +148,9 @@ static void ext2_put_super (struct super_block * sb) ext2_quota_off_umount(sb); - if (sbi->s_ea_block_cache) { - ext2_xattr_destroy_cache(sbi->s_ea_block_cache); - sbi->s_ea_block_cache = NULL; - } + ext2_xattr_destroy_cache(sbi->s_ea_block_cache); + sbi->s_ea_block_cache = NULL; + if (!sb_rdonly(sb)) { struct ext2_super_block *es = sbi->s_es; @@ -1198,8 +1197,7 @@ cantfind_ext2: sb->s_id); goto failed_mount; failed_mount3: - if (sbi->s_ea_block_cache) - ext2_xattr_destroy_cache(sbi->s_ea_block_cache); + ext2_xattr_destroy_cache(sbi->s_ea_block_cache); percpu_counter_destroy(&sbi->s_freeblocks_counter); percpu_counter_destroy(&sbi->s_freeinodes_counter); percpu_counter_destroy(&sbi->s_dirs_counter); diff --git a/fs/ext2/xattr.c b/fs/ext2/xattr.c index dd8f10db82e9..4f30876ee325 100644 --- a/fs/ext2/xattr.c +++ b/fs/ext2/xattr.c @@ -835,7 +835,8 @@ ext2_xattr_cache_insert(struct mb_cache *cache, struct buffer_head *bh) __u32 hash = le32_to_cpu(HDR(bh)->h_hash); int error; - error = mb_cache_entry_create(cache, GFP_NOFS, hash, bh->b_blocknr, 1); + error = mb_cache_entry_create(cache, GFP_NOFS, hash, bh->b_blocknr, + true); if (error) { if (error == -EBUSY) { ea_bdebug(bh, "already in cache (%d cache entries)", diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c index c1d570ee1d9f..8c7bbf3e566d 100644 --- a/fs/ext4/acl.c +++ b/fs/ext4/acl.c @@ -248,7 +248,8 @@ retry: error = posix_acl_update_mode(inode, &mode, &acl); if (error) goto out_stop; - update_mode = 1; + if (mode != inode->i_mode) + update_mode = 1; } error = __ext4_set_acl(handle, inode, type, acl, 0 /* xattr_flags */); diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 3f89d0ab08fc..185a05d3257e 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -2454,8 +2454,19 @@ int do_journal_get_write_access(handle_t *handle, #define FALL_BACK_TO_NONDELALLOC 1 #define CONVERT_INLINE_DATA 2 -extern struct inode *ext4_iget(struct super_block *, unsigned long); -extern struct inode *ext4_iget_normal(struct super_block *, unsigned long); +typedef enum { + EXT4_IGET_NORMAL = 0, + EXT4_IGET_SPECIAL = 0x0001, /* OK to iget a system inode */ + EXT4_IGET_HANDLE = 0x0002 /* Inode # is from a handle */ +} ext4_iget_flags; + +extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino, + ext4_iget_flags flags, const char *function, + unsigned int line); + +#define ext4_iget(sb, ino, flags) \ + __ext4_iget((sb), (ino), (flags), __func__, __LINE__) + extern int ext4_write_inode(struct inode *, struct writeback_control *); extern int ext4_setattr(struct dentry *, struct iattr *); extern int ext4_getattr(const struct path *, struct kstat *, u32, unsigned int); @@ -2538,6 +2549,8 @@ extern int ext4_group_extend(struct super_block *sb, extern int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count); /* super.c */ +extern struct 
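The ext4_iget() rework above collapses the old ext4_iget/ext4_iget_normal pair into one __ext4_iget() taking flags plus __func__ and __LINE__ supplied by a wrapper macro, so corruption reports can name the call site. The wrapper pattern reduced to its essentials, with illustrative names:

#include <stdio.h>

typedef enum {
	IGET_NORMAL  = 0,
	IGET_SPECIAL = 0x1,	/* OK to open a system inode */
	IGET_HANDLE  = 0x2,	/* inode number came from an NFS handle */
} iget_flags;

static int __iget(unsigned long ino, iget_flags flags,
		  const char *function, unsigned int line)
{
	(void)flags;
	if (ino == 0) {
		fprintf(stderr, "inode #%lu: bad number (from %s:%u)\n",
			ino, function, line);
		return -1;
	}
	return 0;
}

/* every caller gets its own identity stamped in for free */
#define iget(ino, flags) __iget((ino), (flags), __func__, __LINE__)

int main(void)
{
	return iget(0, IGET_NORMAL) ? 1 : 0;	/* reports main:<line> */
}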
buffer_head *ext4_sb_bread(struct super_block *sb, + sector_t block, int op_flags); extern int ext4_seq_options_show(struct seq_file *seq, void *offset); extern int ext4_calculate_overhead(struct super_block *sb); extern void ext4_superblock_csum_set(struct super_block *sb); diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c index 014f6a698cb7..7ff14a1adba3 100644 --- a/fs/ext4/ialloc.c +++ b/fs/ext4/ialloc.c @@ -1225,7 +1225,7 @@ struct inode *ext4_orphan_get(struct super_block *sb, unsigned long ino) if (!ext4_test_bit(bit, bitmap_bh->b_data)) goto bad_orphan; - inode = ext4_iget(sb, ino); + inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL); if (IS_ERR(inode)) { err = PTR_ERR(inode); ext4_error(sb, "couldn't read orphan inode %lu (err %d)", diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c index 9c4bac18cc6c..27373d88b5f0 100644 --- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -705,8 +705,11 @@ int ext4_try_to_write_inline_data(struct address_space *mapping, if (!PageUptodate(page)) { ret = ext4_read_inline_page(inode, page); - if (ret < 0) + if (ret < 0) { + unlock_page(page); + put_page(page); goto out_up_read; + } } ret = 1; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 22a9d8159720..9affabd07682 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4817,7 +4817,9 @@ static inline u64 ext4_inode_peek_iversion(const struct inode *inode) return inode_peek_iversion(inode); } -struct inode *ext4_iget(struct super_block *sb, unsigned long ino) +struct inode *__ext4_iget(struct super_block *sb, unsigned long ino, + ext4_iget_flags flags, const char *function, + unsigned int line) { struct ext4_iloc iloc; struct ext4_inode *raw_inode; @@ -4831,6 +4833,18 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino) gid_t i_gid; projid_t i_projid; + if (((flags & EXT4_IGET_NORMAL) && + (ino < EXT4_FIRST_INO(sb) && ino != EXT4_ROOT_INO)) || + (ino < EXT4_ROOT_INO) || + (ino > le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count))) { + if (flags & EXT4_IGET_HANDLE) + return ERR_PTR(-ESTALE); + __ext4_error(sb, function, line, + "inode #%lu: comm %s: iget: illegal inode #", + ino, current->comm); + return ERR_PTR(-EFSCORRUPTED); + } + inode = iget_locked(sb, ino); if (!inode) return ERR_PTR(-ENOMEM); @@ -4846,18 +4860,26 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino) raw_inode = ext4_raw_inode(&iloc); if ((ino == EXT4_ROOT_INO) && (raw_inode->i_links_count == 0)) { - EXT4_ERROR_INODE(inode, "root inode unallocated"); + ext4_error_inode(inode, function, line, 0, + "iget: root inode unallocated"); ret = -EFSCORRUPTED; goto bad_inode; } + if ((flags & EXT4_IGET_HANDLE) && + (raw_inode->i_links_count == 0) && (raw_inode->i_mode == 0)) { + ret = -ESTALE; + goto bad_inode; + } + if (EXT4_INODE_SIZE(inode->i_sb) > EXT4_GOOD_OLD_INODE_SIZE) { ei->i_extra_isize = le16_to_cpu(raw_inode->i_extra_isize); if (EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize > EXT4_INODE_SIZE(inode->i_sb) || (ei->i_extra_isize & 3)) { - EXT4_ERROR_INODE(inode, - "bad extra_isize %u (inode size %u)", + ext4_error_inode(inode, function, line, 0, + "iget: bad extra_isize %u " + "(inode size %u)", ei->i_extra_isize, EXT4_INODE_SIZE(inode->i_sb)); ret = -EFSCORRUPTED; @@ -4879,7 +4901,8 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino) } if (!ext4_inode_csum_verify(inode, raw_inode, ei)) { - EXT4_ERROR_INODE(inode, "checksum invalid"); + ext4_error_inode(inode, function, line, 0, + "iget: checksum invalid"); ret = -EFSBADCRC; goto bad_inode; } @@ -4936,7 +4959,8 @@ struct inode 
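The up-front check added to __ext4_iget() above rejects inode numbers that are reserved (unless EXT4_IGET_SPECIAL is passed), below the root inode, or beyond s_inodes_count, returning -ESTALE for handle-derived numbers and -EFSCORRUPTED otherwise. The predicate itself, restated with illustrative constants in place of EXT4_FIRST_INO/EXT4_ROOT_INO:

#include <stdbool.h>

#define ROOT_INO	2UL
#define FIRST_INO	11UL	/* first non-reserved inode */

static bool ino_is_illegal(unsigned long ino, unsigned long inodes_count,
			   bool normal_iget)
{
	if (normal_iget && ino < FIRST_INO && ino != ROOT_INO)
		return true;	/* reserved inode via a normal lookup */
	if (ino < ROOT_INO || ino > inodes_count)
		return true;	/* outside the valid range entirely */
	return false;
}

int main(void)
{
	/* inode 5 is reserved: illegal normally, fine for special igets */
	return (ino_is_illegal(5, 65536, true) &&
		!ino_is_illegal(5, 65536, false)) ? 0 : 1;
}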
*ext4_iget(struct super_block *sb, unsigned long ino) ((__u64)le16_to_cpu(raw_inode->i_file_acl_high)) << 32; inode->i_size = ext4_isize(sb, raw_inode); if ((size = i_size_read(inode)) < 0) { - EXT4_ERROR_INODE(inode, "bad i_size value: %lld", size); + ext4_error_inode(inode, function, line, 0, + "iget: bad i_size value: %lld", size); ret = -EFSCORRUPTED; goto bad_inode; } @@ -5012,7 +5036,8 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino) ret = 0; if (ei->i_file_acl && !ext4_data_block_valid(EXT4_SB(sb), ei->i_file_acl, 1)) { - EXT4_ERROR_INODE(inode, "bad extended attribute block %llu", + ext4_error_inode(inode, function, line, 0, + "iget: bad extended attribute block %llu", ei->i_file_acl); ret = -EFSCORRUPTED; goto bad_inode; @@ -5040,8 +5065,9 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino) } else if (S_ISLNK(inode->i_mode)) { /* VFS does not allow setting these so must be corruption */ if (IS_APPEND(inode) || IS_IMMUTABLE(inode)) { - EXT4_ERROR_INODE(inode, - "immutable or append flags not allowed on symlinks"); + ext4_error_inode(inode, function, line, 0, + "iget: immutable or append flags " + "not allowed on symlinks"); ret = -EFSCORRUPTED; goto bad_inode; } @@ -5071,7 +5097,8 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino) make_bad_inode(inode); } else { ret = -EFSCORRUPTED; - EXT4_ERROR_INODE(inode, "bogus i_mode (%o)", inode->i_mode); + ext4_error_inode(inode, function, line, 0, + "iget: bogus i_mode (%o)", inode->i_mode); goto bad_inode; } brelse(iloc.bh); @@ -5085,13 +5112,6 @@ bad_inode: return ERR_PTR(ret); } -struct inode *ext4_iget_normal(struct super_block *sb, unsigned long ino) -{ - if (ino < EXT4_FIRST_INO(sb) && ino != EXT4_ROOT_INO) - return ERR_PTR(-EFSCORRUPTED); - return ext4_iget(sb, ino); -} - static int ext4_inode_blocks_set(handle_t *handle, struct ext4_inode *raw_inode, struct ext4_inode_info *ei) @@ -5380,9 +5400,13 @@ int ext4_write_inode(struct inode *inode, struct writeback_control *wbc) { int err; - if (WARN_ON_ONCE(current->flags & PF_MEMALLOC)) + if (WARN_ON_ONCE(current->flags & PF_MEMALLOC) || + sb_rdonly(inode->i_sb)) return 0; + if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) + return -EIO; + if (EXT4_SB(inode->i_sb)->s_journal) { if (ext4_journal_current_handle()) { jbd_debug(1, "called recursively, non-PF_MEMALLOC!\n"); @@ -5398,7 +5422,8 @@ int ext4_write_inode(struct inode *inode, struct writeback_control *wbc) if (wbc->sync_mode != WB_SYNC_ALL || wbc->for_sync) return 0; - err = ext4_force_commit(inode->i_sb); + err = jbd2_complete_transaction(EXT4_SB(inode->i_sb)->s_journal, + EXT4_I(inode)->i_sync_tid); } else { struct ext4_iloc iloc; diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c index 0edee31913d1..d37dafa1d133 100644 --- a/fs/ext4/ioctl.c +++ b/fs/ext4/ioctl.c @@ -125,7 +125,7 @@ static long swap_inode_boot_loader(struct super_block *sb, !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) return -EPERM; - inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO); + inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL); if (IS_ERR(inode_bl)) return PTR_ERR(inode_bl); ei_bl = EXT4_I(inode_bl); diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c index 61a9d1927817..b1e4d359f73b 100644 --- a/fs/ext4/migrate.c +++ b/fs/ext4/migrate.c @@ -116,9 +116,9 @@ static int update_ind_extent_range(handle_t *handle, struct inode *inode, int i, retval = 0; unsigned long max_entries = inode->i_sb->s_blocksize >> 2; - bh = sb_bread(inode->i_sb, pblock); - if (!bh) - return 
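ext4_write_inode() above gains two early exits: writeback to a read-only superblock becomes a successful no-op, while a forced shutdown returns a hard -EIO; only then does the function fall through to the journal path. A sketch of that guard ordering with stand-in fields, not ext4's real structures:

#include <errno.h>
#include <stdbool.h>

struct sb {
	bool rdonly;
	bool shutdown;
};

static int write_inode(struct sb *sb)
{
	if (sb->rdonly)
		return 0;	/* nothing can be dirty; don't complain */
	if (sb->shutdown)
		return -EIO;	/* fs is dead; surface the error */
	/* ... flush the inode ... */
	return 0;
}

int main(void)
{
	struct sb ro = { true, false }, dead = { false, true };

	return (write_inode(&ro) == 0 && write_inode(&dead) == -EIO) ? 0 : 1;
}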
-EIO; + bh = ext4_sb_bread(inode->i_sb, pblock, 0); + if (IS_ERR(bh)) + return PTR_ERR(bh); i_data = (__le32 *)bh->b_data; for (i = 0; i < max_entries; i++) { @@ -145,9 +145,9 @@ static int update_dind_extent_range(handle_t *handle, struct inode *inode, int i, retval = 0; unsigned long max_entries = inode->i_sb->s_blocksize >> 2; - bh = sb_bread(inode->i_sb, pblock); - if (!bh) - return -EIO; + bh = ext4_sb_bread(inode->i_sb, pblock, 0); + if (IS_ERR(bh)) + return PTR_ERR(bh); i_data = (__le32 *)bh->b_data; for (i = 0; i < max_entries; i++) { @@ -175,9 +175,9 @@ static int update_tind_extent_range(handle_t *handle, struct inode *inode, int i, retval = 0; unsigned long max_entries = inode->i_sb->s_blocksize >> 2; - bh = sb_bread(inode->i_sb, pblock); - if (!bh) - return -EIO; + bh = ext4_sb_bread(inode->i_sb, pblock, 0); + if (IS_ERR(bh)) + return PTR_ERR(bh); i_data = (__le32 *)bh->b_data; for (i = 0; i < max_entries; i++) { @@ -224,9 +224,9 @@ static int free_dind_blocks(handle_t *handle, struct buffer_head *bh; unsigned long max_entries = inode->i_sb->s_blocksize >> 2; - bh = sb_bread(inode->i_sb, le32_to_cpu(i_data)); - if (!bh) - return -EIO; + bh = ext4_sb_bread(inode->i_sb, le32_to_cpu(i_data), 0); + if (IS_ERR(bh)) + return PTR_ERR(bh); tmp_idata = (__le32 *)bh->b_data; for (i = 0; i < max_entries; i++) { @@ -254,9 +254,9 @@ static int free_tind_blocks(handle_t *handle, struct buffer_head *bh; unsigned long max_entries = inode->i_sb->s_blocksize >> 2; - bh = sb_bread(inode->i_sb, le32_to_cpu(i_data)); - if (!bh) - return -EIO; + bh = ext4_sb_bread(inode->i_sb, le32_to_cpu(i_data), 0); + if (IS_ERR(bh)) + return PTR_ERR(bh); tmp_idata = (__le32 *)bh->b_data; for (i = 0; i < max_entries; i++) { @@ -382,9 +382,9 @@ static int free_ext_idx(handle_t *handle, struct inode *inode, struct ext4_extent_header *eh; block = ext4_idx_pblock(ix); - bh = sb_bread(inode->i_sb, block); - if (!bh) - return -EIO; + bh = ext4_sb_bread(inode->i_sb, block, 0); + if (IS_ERR(bh)) + return PTR_ERR(bh); eh = (struct ext4_extent_header *)bh->b_data; if (eh->eh_depth != 0) { @@ -535,22 +535,22 @@ int ext4_ext_migrate(struct inode *inode) if (i_data[EXT4_IND_BLOCK]) { retval = update_ind_extent_range(handle, tmp_inode, le32_to_cpu(i_data[EXT4_IND_BLOCK]), &lb); - if (retval) - goto err_out; + if (retval) + goto err_out; } else lb.curr_block += max_entries; if (i_data[EXT4_DIND_BLOCK]) { retval = update_dind_extent_range(handle, tmp_inode, le32_to_cpu(i_data[EXT4_DIND_BLOCK]), &lb); - if (retval) - goto err_out; + if (retval) + goto err_out; } else lb.curr_block += max_entries * max_entries; if (i_data[EXT4_TIND_BLOCK]) { retval = update_tind_extent_range(handle, tmp_inode, le32_to_cpu(i_data[EXT4_TIND_BLOCK]), &lb); - if (retval) - goto err_out; + if (retval) + goto err_out; } /* * Build the last extent diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 437f71fe83ae..2b928eb07fa2 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -1571,7 +1571,7 @@ static struct dentry *ext4_lookup(struct inode *dir, struct dentry *dentry, unsi dentry); return ERR_PTR(-EFSCORRUPTED); } - inode = ext4_iget_normal(dir->i_sb, ino); + inode = ext4_iget(dir->i_sb, ino, EXT4_IGET_NORMAL); if (inode == ERR_PTR(-ESTALE)) { EXT4_ERROR_INODE(dir, "deleted inode referenced: %u", @@ -1613,7 +1613,7 @@ struct dentry *ext4_get_parent(struct dentry *child) return ERR_PTR(-EFSCORRUPTED); } - return d_obtain_alias(ext4_iget_normal(child->d_sb, ino)); + return d_obtain_alias(ext4_iget(child->d_sb, ino, EXT4_IGET_NORMAL)); } /* diff --git 
a/fs/ext4/page-io.c b/fs/ext4/page-io.c index db7590178dfc..2aa62d58d8dd 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -374,13 +374,13 @@ static int io_submit_init_bio(struct ext4_io_submit *io, bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES); if (!bio) return -ENOMEM; - wbc_init_bio(io->io_wbc, bio); bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); bio_set_dev(bio, bh->b_bdev); bio->bi_end_io = ext4_end_bio; bio->bi_private = ext4_get_io_end(io->io_end); io->io_bio = bio; io->io_next_block = bh->b_blocknr; + wbc_init_bio(io->io_wbc, bio); return 0; } diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c index a5efee34415f..48421de803b7 100644 --- a/fs/ext4/resize.c +++ b/fs/ext4/resize.c @@ -127,10 +127,12 @@ static int verify_group_input(struct super_block *sb, else if (free_blocks_count < 0) ext4_warning(sb, "Bad blocks count %u", input->blocks_count); - else if (!(bh = sb_bread(sb, end - 1))) + else if (IS_ERR(bh = ext4_sb_bread(sb, end - 1, 0))) { + err = PTR_ERR(bh); + bh = NULL; ext4_warning(sb, "Cannot read last block (%llu)", end - 1); - else if (outside(input->block_bitmap, start, end)) + } else if (outside(input->block_bitmap, start, end)) ext4_warning(sb, "Block bitmap not in group (block %llu)", (unsigned long long)input->block_bitmap); else if (outside(input->inode_bitmap, start, end)) @@ -781,11 +783,11 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, struct ext4_super_block *es = EXT4_SB(sb)->s_es; unsigned long gdb_num = group / EXT4_DESC_PER_BLOCK(sb); ext4_fsblk_t gdblock = EXT4_SB(sb)->s_sbh->b_blocknr + 1 + gdb_num; - struct buffer_head **o_group_desc, **n_group_desc; - struct buffer_head *dind; - struct buffer_head *gdb_bh; + struct buffer_head **o_group_desc, **n_group_desc = NULL; + struct buffer_head *dind = NULL; + struct buffer_head *gdb_bh = NULL; int gdbackups; - struct ext4_iloc iloc; + struct ext4_iloc iloc = { .bh = NULL }; __le32 *data; int err; @@ -794,21 +796,22 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, "EXT4-fs: ext4_add_new_gdb: adding group block %lu\n", gdb_num); - gdb_bh = sb_bread(sb, gdblock); - if (!gdb_bh) - return -EIO; + gdb_bh = ext4_sb_bread(sb, gdblock, 0); + if (IS_ERR(gdb_bh)) + return PTR_ERR(gdb_bh); gdbackups = verify_reserved_gdb(sb, group, gdb_bh); if (gdbackups < 0) { err = gdbackups; - goto exit_bh; + goto errout; } data = EXT4_I(inode)->i_data + EXT4_DIND_BLOCK; - dind = sb_bread(sb, le32_to_cpu(*data)); - if (!dind) { - err = -EIO; - goto exit_bh; + dind = ext4_sb_bread(sb, le32_to_cpu(*data), 0); + if (IS_ERR(dind)) { + err = PTR_ERR(dind); + dind = NULL; + goto errout; } data = (__le32 *)dind->b_data; @@ -816,18 +819,18 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, ext4_warning(sb, "new group %u GDT block %llu not reserved", group, gdblock); err = -EINVAL; - goto exit_dind; + goto errout; } BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "get_write_access"); err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh); if (unlikely(err)) - goto exit_dind; + goto errout; BUFFER_TRACE(gdb_bh, "get_write_access"); err = ext4_journal_get_write_access(handle, gdb_bh); if (unlikely(err)) - goto exit_dind; + goto errout; BUFFER_TRACE(dind, "get_write_access"); err = ext4_journal_get_write_access(handle, dind); @@ -837,7 +840,7 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, /* ext4_reserve_inode_write() gets a reference on the iloc */ err = ext4_reserve_inode_write(handle, inode, &iloc); if (unlikely(err)) - goto exit_dind; + goto errout; n_group_desc = 
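The add_new_gdb() hunks above collapse the exit_bh/exit_dind/exit_inode ladder into a single errout label. That only works because every resource pointer is initialized to NULL up front and brelse()/kvfree() tolerate NULL arguments. The same single-exit shape in plain C, with free() standing in for the NULL-safe release helpers:

#include <stdlib.h>

static int setup(void)
{
	char *a = NULL, *b = NULL, *c = NULL;
	int err = -1;

	a = malloc(16);
	if (!a)
		goto errout;
	b = malloc(16);
	if (!b)
		goto errout;
	c = malloc(16);
	if (!c)
		goto errout;
	/* success: a/b/c are handed off to their long-term owners here */
	return 0;
errout:
	free(c);	/* free(NULL) is a no-op, like brelse(NULL) */
	free(b);
	free(a);
	return err;
}

int main(void)
{
	return setup() ? 1 : 0;
}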
ext4_kvmalloc((gdb_num + 1) * sizeof(struct buffer_head *), @@ -846,7 +849,7 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, err = -ENOMEM; ext4_warning(sb, "not enough memory for %lu groups", gdb_num + 1); - goto exit_inode; + goto errout; } /* @@ -862,7 +865,7 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, err = ext4_handle_dirty_metadata(handle, NULL, dind); if (unlikely(err)) { ext4_std_error(sb, err); - goto exit_inode; + goto errout; } inode->i_blocks -= (gdbackups + 1) * sb->s_blocksize >> (9 - EXT4_SB(sb)->s_cluster_bits); @@ -871,8 +874,7 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, err = ext4_handle_dirty_metadata(handle, NULL, gdb_bh); if (unlikely(err)) { ext4_std_error(sb, err); - iloc.bh = NULL; - goto exit_inode; + goto errout; } brelse(dind); @@ -888,15 +890,11 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, err = ext4_handle_dirty_super(handle, sb); if (err) ext4_std_error(sb, err); - return err; - -exit_inode: +errout: kvfree(n_group_desc); brelse(iloc.bh); -exit_dind: brelse(dind); -exit_bh: brelse(gdb_bh); ext4_debug("leaving with error %d\n", err); @@ -916,9 +914,9 @@ static int add_new_gdb_meta_bg(struct super_block *sb, gdblock = ext4_meta_bg_first_block_no(sb, group) + ext4_bg_has_super(sb, group); - gdb_bh = sb_bread(sb, gdblock); - if (!gdb_bh) - return -EIO; + gdb_bh = ext4_sb_bread(sb, gdblock, 0); + if (IS_ERR(gdb_bh)) + return PTR_ERR(gdb_bh); n_group_desc = ext4_kvmalloc((gdb_num + 1) * sizeof(struct buffer_head *), GFP_NOFS); @@ -975,9 +973,10 @@ static int reserve_backup_gdb(handle_t *handle, struct inode *inode, return -ENOMEM; data = EXT4_I(inode)->i_data + EXT4_DIND_BLOCK; - dind = sb_bread(sb, le32_to_cpu(*data)); - if (!dind) { - err = -EIO; + dind = ext4_sb_bread(sb, le32_to_cpu(*data), 0); + if (IS_ERR(dind)) { + err = PTR_ERR(dind); + dind = NULL; goto exit_free; } @@ -996,9 +995,10 @@ static int reserve_backup_gdb(handle_t *handle, struct inode *inode, err = -EINVAL; goto exit_bh; } - primary[res] = sb_bread(sb, blk); - if (!primary[res]) { - err = -EIO; + primary[res] = ext4_sb_bread(sb, blk, 0); + if (IS_ERR(primary[res])) { + err = PTR_ERR(primary[res]); + primary[res] = NULL; goto exit_bh; } gdbackups = verify_reserved_gdb(sb, group, primary[res]); @@ -1631,13 +1631,13 @@ int ext4_group_add(struct super_block *sb, struct ext4_new_group_data *input) } if (reserved_gdb || gdb_off == 0) { - if (ext4_has_feature_resize_inode(sb) || + if (!ext4_has_feature_resize_inode(sb) || !le16_to_cpu(es->s_reserved_gdt_blocks)) { ext4_warning(sb, "No reserved GDT blocks, can't resize"); return -EPERM; } - inode = ext4_iget(sb, EXT4_RESIZE_INO); + inode = ext4_iget(sb, EXT4_RESIZE_INO, EXT4_IGET_SPECIAL); if (IS_ERR(inode)) { ext4_warning(sb, "Error opening resize inode"); return PTR_ERR(inode); @@ -1965,7 +1965,8 @@ retry: } if (!resize_inode) - resize_inode = ext4_iget(sb, EXT4_RESIZE_INO); + resize_inode = ext4_iget(sb, EXT4_RESIZE_INO, + EXT4_IGET_SPECIAL); if (IS_ERR(resize_inode)) { ext4_warning(sb, "Error opening resize inode"); return PTR_ERR(resize_inode); diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 53ff6c2a26ed..d6c142d73d99 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -140,6 +140,29 @@ MODULE_ALIAS_FS("ext3"); MODULE_ALIAS("ext3"); #define IS_EXT3_SB(sb) ((sb)->s_bdev->bd_holder == &ext3_fs_type) +/* + * This works like sb_bread() except it uses ERR_PTR for error + * returns. 
Currently with sb_bread it's impossible to distinguish + * between ENOMEM and EIO situations (since both result in a NULL + * return. + */ +struct buffer_head * +ext4_sb_bread(struct super_block *sb, sector_t block, int op_flags) +{ + struct buffer_head *bh = sb_getblk(sb, block); + + if (bh == NULL) + return ERR_PTR(-ENOMEM); + if (buffer_uptodate(bh)) + return bh; + ll_rw_block(REQ_OP_READ, REQ_META | op_flags, 1, &bh); + wait_on_buffer(bh); + if (buffer_uptodate(bh)) + return bh; + put_bh(bh); + return ERR_PTR(-EIO); +} + static int ext4_verify_csum_type(struct super_block *sb, struct ext4_super_block *es) { @@ -1000,14 +1023,13 @@ static void ext4_put_super(struct super_block *sb) invalidate_bdev(sbi->journal_bdev); ext4_blkdev_remove(sbi); } - if (sbi->s_ea_inode_cache) { - ext4_xattr_destroy_cache(sbi->s_ea_inode_cache); - sbi->s_ea_inode_cache = NULL; - } - if (sbi->s_ea_block_cache) { - ext4_xattr_destroy_cache(sbi->s_ea_block_cache); - sbi->s_ea_block_cache = NULL; - } + + ext4_xattr_destroy_cache(sbi->s_ea_inode_cache); + sbi->s_ea_inode_cache = NULL; + + ext4_xattr_destroy_cache(sbi->s_ea_block_cache); + sbi->s_ea_block_cache = NULL; + if (sbi->s_mmp_tsk) kthread_stop(sbi->s_mmp_tsk); brelse(sbi->s_sbh); @@ -1151,20 +1173,11 @@ static struct inode *ext4_nfs_get_inode(struct super_block *sb, { struct inode *inode; - if (ino < EXT4_FIRST_INO(sb) && ino != EXT4_ROOT_INO) - return ERR_PTR(-ESTALE); - if (ino > le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count)) - return ERR_PTR(-ESTALE); - - /* iget isn't really right if the inode is currently unallocated!! - * - * ext4_read_inode will return a bad_inode if the inode had been - * deleted, so we should be safe. - * + /* * Currently we don't know the generation for parent directory, so * a generation of 0 means "accept any" */ - inode = ext4_iget_normal(sb, ino); + inode = ext4_iget(sb, ino, EXT4_IGET_HANDLE); if (IS_ERR(inode)) return ERR_CAST(inode); if (generation && inode->i_generation != generation) { @@ -1189,6 +1202,16 @@ static struct dentry *ext4_fh_to_parent(struct super_block *sb, struct fid *fid, ext4_nfs_get_inode); } +static int ext4_nfs_commit_metadata(struct inode *inode) +{ + struct writeback_control wbc = { + .sync_mode = WB_SYNC_ALL + }; + + trace_ext4_nfs_commit_metadata(inode); + return ext4_write_inode(inode, &wbc); +} + /* * Try to release metadata pages (indirect blocks, directories) which are * mapped via the block device. Since these pages could have journal heads @@ -1393,6 +1416,7 @@ static const struct export_operations ext4_export_ops = { .fh_to_dentry = ext4_fh_to_dentry, .fh_to_parent = ext4_fh_to_parent, .get_parent = ext4_get_parent, + .commit_metadata = ext4_nfs_commit_metadata, }; enum { @@ -1939,7 +1963,7 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token, #ifdef CONFIG_FS_DAX ext4_msg(sb, KERN_WARNING, "DAX enabled. Warning: EXPERIMENTAL, use at your own risk"); - sbi->s_mount_opt |= m->mount_opt; + sbi->s_mount_opt |= m->mount_opt; #else ext4_msg(sb, KERN_INFO, "dax option not supported"); return -1; @@ -3842,12 +3866,12 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) if (ext4_has_feature_inline_data(sb)) { ext4_msg(sb, KERN_ERR, "Cannot use DAX on a filesystem" " that may contain inline data"); - sbi->s_mount_opt &= ~EXT4_MOUNT_DAX; + goto failed_mount; } if (!bdev_dax_supported(sb->s_bdev, blocksize)) { ext4_msg(sb, KERN_ERR, - "DAX unsupported by block device. 
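ext4_sb_bread() above exists precisely so callers can tell -ENOMEM apart from -EIO, which sb_bread()'s NULL return conflates; the error code travels inside the pointer. A user-space model of the ERR_PTR/IS_ERR/PTR_ERR convention it relies on (the kernel reserves the top 4095 values of the address space for exactly this):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static inline void *ERR_PTR(long err)	{ return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-4095;
}

static void *read_block(int fail_mode)
{
	if (fail_mode == 1)
		return ERR_PTR(-ENOMEM);	/* allocation failed */
	if (fail_mode == 2)
		return ERR_PTR(-EIO);		/* media error */
	return malloc(4096);
}

int main(void)
{
	void *bh = read_block(2);

	if (IS_ERR(bh)) {
		printf("read failed: %ld\n", PTR_ERR(bh));
		return 1;
	}
	free(bh);
	return 0;
}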
Turning off DAX."); - sbi->s_mount_opt &= ~EXT4_MOUNT_DAX; + "DAX unsupported by block device."); + goto failed_mount; } } @@ -4328,7 +4352,7 @@ no_journal: * so we can safely mount the rest of the filesystem now. */ - root = ext4_iget(sb, EXT4_ROOT_INO); + root = ext4_iget(sb, EXT4_ROOT_INO, EXT4_IGET_SPECIAL); if (IS_ERR(root)) { ext4_msg(sb, KERN_ERR, "get root inode failed"); ret = PTR_ERR(root); @@ -4522,14 +4546,12 @@ failed_mount4: if (EXT4_SB(sb)->rsv_conversion_wq) destroy_workqueue(EXT4_SB(sb)->rsv_conversion_wq); failed_mount_wq: - if (sbi->s_ea_inode_cache) { - ext4_xattr_destroy_cache(sbi->s_ea_inode_cache); - sbi->s_ea_inode_cache = NULL; - } - if (sbi->s_ea_block_cache) { - ext4_xattr_destroy_cache(sbi->s_ea_block_cache); - sbi->s_ea_block_cache = NULL; - } + ext4_xattr_destroy_cache(sbi->s_ea_inode_cache); + sbi->s_ea_inode_cache = NULL; + + ext4_xattr_destroy_cache(sbi->s_ea_block_cache); + sbi->s_ea_block_cache = NULL; + if (sbi->s_journal) { jbd2_journal_destroy(sbi->s_journal); sbi->s_journal = NULL; @@ -4598,7 +4620,7 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb, * happen if we iget() an unused inode, as the subsequent iput() * will try to delete it. */ - journal_inode = ext4_iget(sb, journal_inum); + journal_inode = ext4_iget(sb, journal_inum, EXT4_IGET_SPECIAL); if (IS_ERR(journal_inode)) { ext4_msg(sb, KERN_ERR, "no journal found"); return NULL; @@ -5680,7 +5702,7 @@ static int ext4_quota_enable(struct super_block *sb, int type, int format_id, if (!qf_inums[type]) return -EPERM; - qf_inode = ext4_iget(sb, qf_inums[type]); + qf_inode = ext4_iget(sb, qf_inums[type], EXT4_IGET_SPECIAL); if (IS_ERR(qf_inode)) { ext4_error(sb, "Bad quota inode # %lu", qf_inums[type]); return PTR_ERR(qf_inode); @@ -5690,9 +5712,9 @@ static int ext4_quota_enable(struct super_block *sb, int type, int format_id, qf_inode->i_flags |= S_NOQUOTA; lockdep_set_quota_inode(qf_inode, I_DATA_SEM_QUOTA); err = dquot_enable(qf_inode, type, format_id, flags); - iput(qf_inode); if (err) lockdep_set_quota_inode(qf_inode, I_DATA_SEM_NORMAL); + iput(qf_inode); return err; } diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c index 7643d52c776c..86ed9c686249 100644 --- a/fs/ext4/xattr.c +++ b/fs/ext4/xattr.c @@ -384,7 +384,7 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino, struct inode *inode; int err; - inode = ext4_iget(parent->i_sb, ea_ino); + inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL); if (IS_ERR(inode)) { err = PTR_ERR(inode); ext4_error(parent->i_sb, @@ -522,14 +522,13 @@ ext4_xattr_block_get(struct inode *inode, int name_index, const char *name, ea_idebug(inode, "name=%d.%s, buffer=%p, buffer_size=%ld", name_index, name, buffer, (long)buffer_size); - error = -ENODATA; if (!EXT4_I(inode)->i_file_acl) - goto cleanup; + return -ENODATA; ea_idebug(inode, "reading block %llu", (unsigned long long)EXT4_I(inode)->i_file_acl); - bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); - if (!bh) - goto cleanup; + bh = ext4_sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl, REQ_PRIO); + if (IS_ERR(bh)) + return PTR_ERR(bh); ea_bdebug(bh, "b_count=%d, refcount=%d", atomic_read(&(bh->b_count)), le32_to_cpu(BHDR(bh)->h_refcount)); error = ext4_xattr_check_block(inode, bh); @@ -696,26 +695,23 @@ ext4_xattr_block_list(struct dentry *dentry, char *buffer, size_t buffer_size) ea_idebug(inode, "buffer=%p, buffer_size=%ld", buffer, (long)buffer_size); - error = 0; if (!EXT4_I(inode)->i_file_acl) - goto cleanup; + return 0; ea_idebug(inode, "reading block 
%llu", (unsigned long long)EXT4_I(inode)->i_file_acl); - bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); - error = -EIO; - if (!bh) - goto cleanup; + bh = ext4_sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl, REQ_PRIO); + if (IS_ERR(bh)) + return PTR_ERR(bh); ea_bdebug(bh, "b_count=%d, refcount=%d", atomic_read(&(bh->b_count)), le32_to_cpu(BHDR(bh)->h_refcount)); error = ext4_xattr_check_block(inode, bh); if (error) goto cleanup; ext4_xattr_block_cache_insert(EA_BLOCK_CACHE(inode), bh); - error = ext4_xattr_list_entries(dentry, BFIRST(bh), buffer, buffer_size); - + error = ext4_xattr_list_entries(dentry, BFIRST(bh), buffer, + buffer_size); cleanup: brelse(bh); - return error; } @@ -830,9 +826,9 @@ int ext4_get_inode_usage(struct inode *inode, qsize_t *usage) } if (EXT4_I(inode)->i_file_acl) { - bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); - if (!bh) { - ret = -EIO; + bh = ext4_sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl, REQ_PRIO); + if (IS_ERR(bh)) { + ret = PTR_ERR(bh); goto out; } @@ -1486,7 +1482,8 @@ ext4_xattr_inode_cache_find(struct inode *inode, const void *value, } while (ce) { - ea_inode = ext4_iget(inode->i_sb, ce->e_value); + ea_inode = ext4_iget(inode->i_sb, ce->e_value, + EXT4_IGET_NORMAL); if (!IS_ERR(ea_inode) && !is_bad_inode(ea_inode) && (EXT4_I(ea_inode)->i_flags & EXT4_EA_INODE_FL) && @@ -1821,16 +1818,15 @@ ext4_xattr_block_find(struct inode *inode, struct ext4_xattr_info *i, if (EXT4_I(inode)->i_file_acl) { /* The inode already has an extended attribute block. */ - bs->bh = sb_bread(sb, EXT4_I(inode)->i_file_acl); - error = -EIO; - if (!bs->bh) - goto cleanup; + bs->bh = ext4_sb_bread(sb, EXT4_I(inode)->i_file_acl, REQ_PRIO); + if (IS_ERR(bs->bh)) + return PTR_ERR(bs->bh); ea_bdebug(bs->bh, "b_count=%d, refcount=%d", atomic_read(&(bs->bh->b_count)), le32_to_cpu(BHDR(bs->bh)->h_refcount)); error = ext4_xattr_check_block(inode, bs->bh); if (error) - goto cleanup; + return error; /* Find the named attribute. 
*/ bs->s.base = BHDR(bs->bh); bs->s.first = BFIRST(bs->bh); @@ -1839,13 +1835,10 @@ ext4_xattr_block_find(struct inode *inode, struct ext4_xattr_info *i, error = xattr_find_entry(inode, &bs->s.here, bs->s.end, i->name_index, i->name, 1); if (error && error != -ENODATA) - goto cleanup; + return error; bs->s.not_found = error; } - error = 0; - -cleanup: - return error; + return 0; } static int @@ -2274,9 +2267,9 @@ static struct buffer_head *ext4_xattr_get_block(struct inode *inode) if (!EXT4_I(inode)->i_file_acl) return NULL; - bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); - if (!bh) - return ERR_PTR(-EIO); + bh = ext4_sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl, REQ_PRIO); + if (IS_ERR(bh)) + return bh; error = ext4_xattr_check_block(inode, bh); if (error) { brelse(bh); @@ -2729,7 +2722,7 @@ retry: base = IFIRST(header); end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size; min_offs = end - base; - total_ino = sizeof(struct ext4_xattr_ibody_header); + total_ino = sizeof(struct ext4_xattr_ibody_header) + sizeof(u32); error = xattr_check_inode(inode, header, end); if (error) @@ -2746,10 +2739,11 @@ retry: if (EXT4_I(inode)->i_file_acl) { struct buffer_head *bh; - bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); - error = -EIO; - if (!bh) + bh = ext4_sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl, REQ_PRIO); + if (IS_ERR(bh)) { + error = PTR_ERR(bh); goto cleanup; + } error = ext4_xattr_check_block(inode, bh); if (error) { brelse(bh); @@ -2903,11 +2897,12 @@ int ext4_xattr_delete_inode(handle_t *handle, struct inode *inode, } if (EXT4_I(inode)->i_file_acl) { - bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); - if (!bh) { - EXT4_ERROR_INODE(inode, "block %llu read error", - EXT4_I(inode)->i_file_acl); - error = -EIO; + bh = ext4_sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl, REQ_PRIO); + if (IS_ERR(bh)) { + error = PTR_ERR(bh); + if (error == -EIO) + EXT4_ERROR_INODE(inode, "block %llu read error", + EXT4_I(inode)->i_file_acl); goto cleanup; } error = ext4_xattr_check_block(inode, bh); @@ -3060,8 +3055,10 @@ ext4_xattr_block_cache_find(struct inode *inode, while (ce) { struct buffer_head *bh; - bh = sb_bread(inode->i_sb, ce->e_value); - if (!bh) { + bh = ext4_sb_bread(inode->i_sb, ce->e_value, REQ_PRIO); + if (IS_ERR(bh)) { + if (PTR_ERR(bh) == -ENOMEM) + return NULL; EXT4_ERROR_INODE(inode, "block %lu read error", (unsigned long)ce->e_value); } else if (ext4_xattr_cmp(header, BHDR(bh)) == 0) { diff --git a/fs/f2fs/acl.c b/fs/f2fs/acl.c index fa707cdd4120..63e599524085 100644 --- a/fs/f2fs/acl.c +++ b/fs/f2fs/acl.c @@ -160,7 +160,7 @@ static void *f2fs_acl_to_disk(struct f2fs_sb_info *sbi, return (void *)f2fs_acl; fail: - kfree(f2fs_acl); + kvfree(f2fs_acl); return ERR_PTR(-EINVAL); } @@ -190,7 +190,7 @@ static struct posix_acl *__f2fs_get_acl(struct inode *inode, int type, acl = NULL; else acl = ERR_PTR(retval); - kfree(value); + kvfree(value); return acl; } @@ -240,7 +240,7 @@ static int __f2fs_set_acl(struct inode *inode, int type, error = f2fs_setxattr(inode, name_index, "", value, size, ipage, 0); - kfree(value); + kvfree(value); if (!error) set_cached_acl(inode, type, acl); @@ -352,12 +352,14 @@ static int f2fs_acl_create(struct inode *dir, umode_t *mode, return PTR_ERR(p); clone = f2fs_acl_clone(p, GFP_NOFS); - if (!clone) - goto no_mem; + if (!clone) { + ret = -ENOMEM; + goto release_acl; + } ret = f2fs_acl_create_masq(clone, mode); if (ret < 0) - goto no_mem_clone; + goto release_clone; if (ret == 0) posix_acl_release(clone); @@ -371,11 +373,11 @@ 
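The f2fs/acl.c hunks above switch kfree() to kvfree(), presumably because these buffers can come from either kmalloc or the kvmalloc path depending on size; kvfree() frees both kinds. A hedged user-space model of a size-dependent allocator pair with one matching free, where a tag bit stands in for the address-range test the kernel does:

#include <stdint.h>
#include <stdlib.h>

#define VTAG 0x1UL

static void *kvmalloc_ish(size_t n)
{
	void *p = malloc(n);

	/* pretend allocations over a page come from the vmalloc area */
	return (n > 4096 && p) ? (void *)((uintptr_t)p | VTAG) : p;
}

static void kvfree_ish(void *p)
{
	/* strip the tag and free either way; callers need not remember
	 * which allocator the buffer came from */
	free((void *)((uintptr_t)p & ~VTAG));
}

int main(void)
{
	kvfree_ish(kvmalloc_ish(64));		/* kmalloc-style */
	kvfree_ish(kvmalloc_ish(1 << 20));	/* vmalloc-style */
	return 0;
}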
static int f2fs_acl_create(struct inode *dir, umode_t *mode, return 0; -no_mem_clone: +release_clone: posix_acl_release(clone); -no_mem: +release_acl: posix_acl_release(p); - return -ENOMEM; + return ret; } int f2fs_init_acl(struct inode *inode, struct inode *dir, struct page *ipage, diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index 9c28ea439e0b..f955cd3e0677 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -44,7 +44,7 @@ repeat: cond_resched(); goto repeat; } - f2fs_wait_on_page_writeback(page, META, true); + f2fs_wait_on_page_writeback(page, META, true, true); if (!PageUptodate(page)) SetPageUptodate(page); return page; @@ -370,9 +370,8 @@ continue_unlock: goto continue_unlock; } - f2fs_wait_on_page_writeback(page, META, true); + f2fs_wait_on_page_writeback(page, META, true, true); - BUG_ON(PageWriteback(page)); if (!clear_page_dirty_for_io(page)) goto continue_unlock; @@ -911,7 +910,7 @@ free_fail_no_cp: f2fs_put_page(cp1, 1); f2fs_put_page(cp2, 1); fail_no_cp: - kfree(sbi->ckpt); + kvfree(sbi->ckpt); return -EINVAL; } @@ -1290,11 +1289,11 @@ static void commit_checkpoint(struct f2fs_sb_info *sbi, struct page *page = f2fs_grab_meta_page(sbi, blk_addr); int err; + f2fs_wait_on_page_writeback(page, META, true, true); + memcpy(page_address(page), src, PAGE_SIZE); - set_page_dirty(page); - f2fs_wait_on_page_writeback(page, META, true); - f2fs_bug_on(sbi, PageWriteback(page)); + set_page_dirty(page); if (unlikely(!clear_page_dirty_for_io(page))) f2fs_bug_on(sbi, 1); @@ -1328,11 +1327,9 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) int err; /* Flush all the NAT/SIT pages */ - while (get_pages(sbi, F2FS_DIRTY_META)) { - f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO); - if (unlikely(f2fs_cp_error(sbi))) - break; - } + f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO); + f2fs_bug_on(sbi, get_pages(sbi, F2FS_DIRTY_META) && + !f2fs_cp_error(sbi)); /* * modify checkpoint @@ -1405,14 +1402,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) for (i = 0; i < nm_i->nat_bits_blocks; i++) f2fs_update_meta_page(sbi, nm_i->nat_bits + (i << F2FS_BLKSIZE_BITS), blk + i); - - /* Flush all the NAT BITS pages */ - while (get_pages(sbi, F2FS_DIRTY_META)) { - f2fs_sync_meta_pages(sbi, META, LONG_MAX, - FS_CP_META_IO); - if (unlikely(f2fs_cp_error(sbi))) - break; - } } /* write out checkpoint buffer at block 0 */ @@ -1448,6 +1437,8 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) /* Here, we have one bio having CP pack except cp pack 2 page */ f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO); + f2fs_bug_on(sbi, get_pages(sbi, F2FS_DIRTY_META) && + !f2fs_cp_error(sbi)); /* wait for previous submitted meta pages writeback */ f2fs_wait_on_all_pages_writeback(sbi); @@ -1465,7 +1456,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) * invalidate intermediate page cache borrowed from meta inode * which are used for migration of encrypted inode's blocks. 
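
The f2fs_acl_create() hunk above renames the error labels from no_mem/no_mem_clone to release_acl/release_clone and returns the recorded ret rather than a hard-coded -ENOMEM, so a failure inside f2fs_acl_create_masq() is reported with its real errno. A compact sketch of that goto-unwind shape, with purely illustrative resource names:

#include <errno.h>
#include <stdlib.h>

struct res { int dummy; };

static struct res *acquire(void) { return calloc(1, sizeof(struct res)); }
static void release(struct res *r) { free(r); }
static int transform(struct res *r) { (void)r; return 0; /* or -EINVAL */ }

/* Each label undoes exactly the resources held at its goto sites, and
 * the function returns whatever error was recorded, not a fixed code. */
static int create_pair(struct res **out)
{
        struct res *a, *b;
        int ret;

        a = acquire();
        if (!a)
                return -ENOMEM;

        b = acquire();
        if (!b) {
                ret = -ENOMEM;
                goto release_a;
        }

        ret = transform(b);
        if (ret < 0)
                goto release_b; /* propagate transform()'s errno */

        *out = b;
        release(a);
        return 0;

release_b:
        release(b);
release_a:
        release(a);
        return ret;
}

int main(void)
{
        struct res *r;

        return create_pair(&r) ? 1 : (release(r), 0);
}
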
*/ - if (f2fs_sb_has_encrypt(sbi->sb)) + if (f2fs_sb_has_encrypt(sbi)) invalidate_mapping_pages(META_MAPPING(sbi), MAIN_BLKADDR(sbi), MAX_BLKADDR(sbi) - 1); diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index b293cb3e27a2..f91d8630c9a2 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -372,29 +372,6 @@ static bool __has_merged_page(struct f2fs_bio_info *io, struct inode *inode, return false; } -static bool has_merged_page(struct f2fs_sb_info *sbi, struct inode *inode, - struct page *page, nid_t ino, - enum page_type type) -{ - enum page_type btype = PAGE_TYPE_OF_BIO(type); - enum temp_type temp; - struct f2fs_bio_info *io; - bool ret = false; - - for (temp = HOT; temp < NR_TEMP_TYPE; temp++) { - io = sbi->write_io[btype] + temp; - - down_read(&io->io_rwsem); - ret = __has_merged_page(io, inode, page, ino); - up_read(&io->io_rwsem); - - /* TODO: use HOT temp only for meta pages now. */ - if (ret || btype == META) - break; - } - return ret; -} - static void __f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum page_type type, enum temp_type temp) { @@ -420,13 +397,19 @@ static void __submit_merged_write_cond(struct f2fs_sb_info *sbi, nid_t ino, enum page_type type, bool force) { enum temp_type temp; - - if (!force && !has_merged_page(sbi, inode, page, ino, type)) - return; + bool ret = true; for (temp = HOT; temp < NR_TEMP_TYPE; temp++) { + if (!force) { + enum page_type btype = PAGE_TYPE_OF_BIO(type); + struct f2fs_bio_info *io = sbi->write_io[btype] + temp; - __f2fs_submit_merged_write(sbi, type, temp); + down_read(&io->io_rwsem); + ret = __has_merged_page(io, inode, page, ino); + up_read(&io->io_rwsem); + } + if (ret) + __f2fs_submit_merged_write(sbi, type, temp); /* TODO: use HOT temp only for meta pages now. */ if (type >= META) @@ -643,7 +626,7 @@ static void __set_data_blkaddr(struct dnode_of_data *dn) */ void f2fs_set_data_blkaddr(struct dnode_of_data *dn) { - f2fs_wait_on_page_writeback(dn->node_page, NODE, true); + f2fs_wait_on_page_writeback(dn->node_page, NODE, true, true); __set_data_blkaddr(dn); if (set_page_dirty(dn->node_page)) dn->node_changed = true; @@ -673,7 +656,7 @@ int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count) trace_f2fs_reserve_new_blocks(dn->inode, dn->nid, dn->ofs_in_node, count); - f2fs_wait_on_page_writeback(dn->node_page, NODE, true); + f2fs_wait_on_page_writeback(dn->node_page, NODE, true, true); for (; count > 0; dn->ofs_in_node++) { block_t blkaddr = datablock_addr(dn->inode, @@ -957,6 +940,9 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from) return err; } + if (direct_io && allow_outplace_dio(inode, iocb, from)) + return 0; + if (is_inode_flag_set(inode, FI_NO_PREALLOC)) return 0; @@ -970,6 +956,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from) map.m_next_pgofs = NULL; map.m_next_extent = NULL; map.m_seg_type = NO_CHECK_TYPE; + map.m_may_create = true; if (direct_io) { map.m_seg_type = f2fs_rw_hint_to_seg_type(iocb->ki_hint); @@ -1028,7 +1015,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, unsigned int maxblocks = map->m_len; struct dnode_of_data dn; struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - int mode = create ? ALLOC_NODE : LOOKUP_NODE; + int mode = map->m_may_create ? 
ALLOC_NODE : LOOKUP_NODE; pgoff_t pgofs, end_offset, end; int err = 0, ofs = 1; unsigned int ofs_in_node, last_ofs_in_node; @@ -1048,6 +1035,10 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, end = pgofs + maxblocks; if (!create && f2fs_lookup_extent_cache(inode, pgofs, &ei)) { + if (test_opt(sbi, LFS) && flag == F2FS_GET_BLOCK_DIO && + map->m_may_create) + goto next_dnode; + map->m_pblk = ei.blk + pgofs - ei.fofs; map->m_len = min((pgoff_t)maxblocks, ei.fofs + ei.len - pgofs); map->m_flags = F2FS_MAP_MAPPED; @@ -1062,7 +1053,7 @@ } next_dnode: - if (create) + if (map->m_may_create) __do_map_lock(sbi, flag, true); /* When reading holes, we need its node page */ @@ -1099,11 +1090,13 @@ next_block: if (is_valid_data_blkaddr(sbi, blkaddr)) { /* use out-place-update for direct IO under LFS mode */ - if (test_opt(sbi, LFS) && create && - flag == F2FS_GET_BLOCK_DIO) { + if (test_opt(sbi, LFS) && flag == F2FS_GET_BLOCK_DIO && + map->m_may_create) { err = __allocate_data_block(&dn, map->m_seg_type); - if (!err) + if (!err) { + blkaddr = dn.data_blkaddr; set_inode_flag(inode, FI_APPEND_WRITE); + } } } else { if (create) { @@ -1209,7 +1202,7 @@ skip: f2fs_put_dnode(&dn); - if (create) { + if (map->m_may_create) { __do_map_lock(sbi, flag, false); f2fs_balance_fs(sbi, dn.node_changed); } @@ -1235,7 +1228,7 @@ sync_out: } f2fs_put_dnode(&dn); unlock_out: - if (create) { + if (map->m_may_create) { __do_map_lock(sbi, flag, false); f2fs_balance_fs(sbi, dn.node_changed); } @@ -1257,6 +1250,7 @@ bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len) map.m_next_pgofs = NULL; map.m_next_extent = NULL; map.m_seg_type = NO_CHECK_TYPE; + map.m_may_create = false; last_lblk = F2FS_BLK_ALIGN(pos + len); while (map.m_lblk < last_lblk) { @@ -1271,7 +1265,7 @@ bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len) static int __get_data_block(struct inode *inode, sector_t iblock, struct buffer_head *bh, int create, int flag, - pgoff_t *next_pgofs, int seg_type) + pgoff_t *next_pgofs, int seg_type, bool may_write) { struct f2fs_map_blocks map; int err; @@ -1281,6 +1275,7 @@ static int __get_data_block(struct inode *inode, sector_t iblock, map.m_next_pgofs = next_pgofs; map.m_next_extent = NULL; map.m_seg_type = seg_type; + map.m_may_create = may_write; err = f2fs_map_blocks(inode, &map, create, flag); if (!err) { @@ -1297,16 +1292,25 @@ static int get_data_block(struct inode *inode, sector_t iblock, { return __get_data_block(inode, iblock, bh_result, create, flag, next_pgofs, - NO_CHECK_TYPE); + NO_CHECK_TYPE, create); +} + +static int get_data_block_dio_write(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create) +{ + return __get_data_block(inode, iblock, bh_result, create, + F2FS_GET_BLOCK_DIO, NULL, + f2fs_rw_hint_to_seg_type(inode->i_write_hint), + true); } static int get_data_block_dio(struct inode *inode, sector_t iblock, struct buffer_head *bh_result, int create) { return __get_data_block(inode, iblock, bh_result, create, - F2FS_GET_BLOCK_DIO, NULL, - f2fs_rw_hint_to_seg_type( - inode->i_write_hint)); + F2FS_GET_BLOCK_DIO, NULL, + f2fs_rw_hint_to_seg_type(inode->i_write_hint), + false); } static int get_data_block_bmap(struct inode *inode, sector_t iblock, @@ -1318,7 +1322,7 @@ static int get_data_block_bmap(struct inode *inode, sector_t iblock, return __get_data_block(inode, iblock, bh_result, create, F2FS_GET_BLOCK_BMAP, NULL, - NO_CHECK_TYPE,
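
The new m_may_create field threaded through f2fs_map_blocks() above decouples "this caller may allocate blocks" from the VFS create flag: bmap, fiemap, overwrite checks and DIO reads pass false and skip the allocation locking, while write paths pass true. A toy model of a mapping request carrying such a flag (all names hypothetical):

#include <stdbool.h>
#include <stdio.h>

struct map_request {
        unsigned long lblk;     /* logical block to map */
        unsigned long pblk;     /* result: physical block */
        bool may_create;        /* write path: allowed to allocate */
};

static unsigned long lookup(unsigned long lblk)
{
        return lblk < 100 ? lblk + 1000 : 0;    /* 0 = hole */
}

static int map_blocks(struct map_request *m)
{
        unsigned long p = lookup(m->lblk);

        if (!p) {
                if (!m->may_create)
                        return -1;      /* hole; read path reports it */
                /* write path only: take the allocation lock and
                 * reserve a new block here. */
                p = 5000 + m->lblk;
        }
        m->pblk = p;
        return 0;
}

int main(void)
{
        struct map_request rd = { .lblk = 200, .may_create = false };
        struct map_request wr = { .lblk = 200, .may_create = true };

        printf("read-only map: %d\n", map_blocks(&rd));         /* hole */
        printf("write map: %d -> %lu\n", map_blocks(&wr), wr.pblk);
        return 0;
}
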
create); } static inline sector_t logical_to_blk(struct inode *inode, loff_t offset) @@ -1525,6 +1529,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping, map.m_next_pgofs = NULL; map.m_next_extent = NULL; map.m_seg_type = NO_CHECK_TYPE; + map.m_may_create = false; for (; nr_pages; nr_pages--) { if (pages) { @@ -1855,6 +1860,8 @@ got_it: if (fio->need_lock == LOCK_REQ) f2fs_unlock_op(fio->sbi); err = f2fs_inplace_write_data(fio); + if (err && PageWriteback(page)) + end_page_writeback(page); trace_f2fs_do_write_data_page(fio->page, IPU); set_inode_flag(inode, FI_UPDATE_WRITE); return err; @@ -2142,12 +2149,11 @@ continue_unlock: if (PageWriteback(page)) { if (wbc->sync_mode != WB_SYNC_NONE) f2fs_wait_on_page_writeback(page, - DATA, true); + DATA, true, true); else goto continue_unlock; } - BUG_ON(PageWriteback(page)); if (!clear_page_dirty_for_io(page)) goto continue_unlock; @@ -2324,6 +2330,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi, bool locked = false; struct extent_info ei = {0,0,0}; int err = 0; + int flag; /* * we already allocated all the blocks, so we don't need to get @@ -2333,9 +2340,15 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi, !is_inode_flag_set(inode, FI_NO_PREALLOC)) return 0; + /* f2fs_lock_op avoids race between write CP and convert_inline_page */ + if (f2fs_has_inline_data(inode) && pos + len > MAX_INLINE_DATA(inode)) + flag = F2FS_GET_BLOCK_DEFAULT; + else + flag = F2FS_GET_BLOCK_PRE_AIO; + if (f2fs_has_inline_data(inode) || (pos & PAGE_MASK) >= i_size_read(inode)) { - __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true); + __do_map_lock(sbi, flag, true); locked = true; } restart: @@ -2373,6 +2386,7 @@ restart: f2fs_put_dnode(&dn); __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true); + WARN_ON(flag != F2FS_GET_BLOCK_PRE_AIO); locked = true; goto restart; } @@ -2386,7 +2400,7 @@ out: f2fs_put_dnode(&dn); unlock_out: if (locked) - __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false); + __do_map_lock(sbi, flag, false); return err; } @@ -2457,7 +2471,7 @@ repeat: } } - f2fs_wait_on_page_writeback(page, DATA, false); + f2fs_wait_on_page_writeback(page, DATA, false, true); if (len == PAGE_SIZE || PageUptodate(page)) return 0; @@ -2548,6 +2562,53 @@ static int check_direct_IO(struct inode *inode, struct iov_iter *iter, return 0; } +static void f2fs_dio_end_io(struct bio *bio) +{ + struct f2fs_private_dio *dio = bio->bi_private; + + dec_page_count(F2FS_I_SB(dio->inode), + dio->write ? F2FS_DIO_WRITE : F2FS_DIO_READ); + + bio->bi_private = dio->orig_private; + bio->bi_end_io = dio->orig_end_io; + + kvfree(dio); + + bio_endio(bio); +} + +static void f2fs_dio_submit_bio(struct bio *bio, struct inode *inode, + loff_t file_offset) +{ + struct f2fs_private_dio *dio; + bool write = (bio_op(bio) == REQ_OP_WRITE); + int err; + + dio = f2fs_kzalloc(F2FS_I_SB(inode), + sizeof(struct f2fs_private_dio), GFP_NOFS); + if (!dio) { + err = -ENOMEM; + goto out; + } + + dio->inode = inode; + dio->orig_end_io = bio->bi_end_io; + dio->orig_private = bio->bi_private; + dio->write = write; + + bio->bi_end_io = f2fs_dio_end_io; + bio->bi_private = dio; + + inc_page_count(F2FS_I_SB(inode), + write ? 
F2FS_DIO_WRITE : F2FS_DIO_READ); + + submit_bio(bio); + return; +out: + bio->bi_status = BLK_STS_IOERR; + bio_endio(bio); +} + static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) { struct address_space *mapping = iocb->ki_filp->f_mapping; @@ -2594,7 +2655,10 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) down_read(&fi->i_gc_rwsem[READ]); } - err = blockdev_direct_IO(iocb, inode, iter, get_data_block_dio); + err = __blockdev_direct_IO(iocb, inode, inode->i_sb->s_bdev, + iter, rw == WRITE ? get_data_block_dio_write : + get_data_block_dio, NULL, f2fs_dio_submit_bio, + DIO_LOCKING | DIO_SKIP_HOLES); if (do_opu) up_read(&fi->i_gc_rwsem[READ]); @@ -2738,7 +2802,7 @@ int f2fs_migrate_page(struct address_space *mapping, */ extra_count = (atomic_written ? 1 : 0) - page_has_private(page); rc = migrate_page_move_mapping(mapping, newpage, - page, NULL, mode, extra_count); + page, mode, extra_count); if (rc != MIGRATEPAGE_SUCCESS) { if (atomic_written) mutex_unlock(&fi->inmem_lock); diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c index 139b4d5c83d5..ebcc121920ba 100644 --- a/fs/f2fs/debug.c +++ b/fs/f2fs/debug.c @@ -53,6 +53,8 @@ static void update_general_status(struct f2fs_sb_info *sbi) si->vw_cnt = atomic_read(&sbi->vw_cnt); si->max_aw_cnt = atomic_read(&sbi->max_aw_cnt); si->max_vw_cnt = atomic_read(&sbi->max_vw_cnt); + si->nr_dio_read = get_pages(sbi, F2FS_DIO_READ); + si->nr_dio_write = get_pages(sbi, F2FS_DIO_WRITE); si->nr_wb_cp_data = get_pages(sbi, F2FS_WB_CP_DATA); si->nr_wb_data = get_pages(sbi, F2FS_WB_DATA); si->nr_rd_data = get_pages(sbi, F2FS_RD_DATA); @@ -62,7 +64,7 @@ static void update_general_status(struct f2fs_sb_info *sbi) si->nr_flushed = atomic_read(&SM_I(sbi)->fcc_info->issued_flush); si->nr_flushing = - atomic_read(&SM_I(sbi)->fcc_info->issing_flush); + atomic_read(&SM_I(sbi)->fcc_info->queued_flush); si->flush_list_empty = llist_empty(&SM_I(sbi)->fcc_info->issue_list); } @@ -70,7 +72,7 @@ static void update_general_status(struct f2fs_sb_info *sbi) si->nr_discarded = atomic_read(&SM_I(sbi)->dcc_info->issued_discard); si->nr_discarding = - atomic_read(&SM_I(sbi)->dcc_info->issing_discard); + atomic_read(&SM_I(sbi)->dcc_info->queued_discard); si->nr_discard_cmd = atomic_read(&SM_I(sbi)->dcc_info->discard_cmd_cnt); si->undiscard_blks = SM_I(sbi)->dcc_info->undiscard_blks; @@ -197,7 +199,7 @@ static void update_mem_info(struct f2fs_sb_info *sbi) si->base_mem += 2 * SIT_VBLOCK_MAP_SIZE * MAIN_SEGS(sbi); si->base_mem += SIT_VBLOCK_MAP_SIZE * MAIN_SEGS(sbi); si->base_mem += SIT_VBLOCK_MAP_SIZE; - if (sbi->segs_per_sec > 1) + if (__is_large_section(sbi)) si->base_mem += MAIN_SECS(sbi) * sizeof(struct sec_entry); si->base_mem += __bitmap_size(sbi, SIT_BITMAP); @@ -374,6 +376,8 @@ static int stat_show(struct seq_file *s, void *v) seq_printf(s, " - Inner Struct Count: tree: %d(%d), node: %d\n", si->ext_tree, si->zombie_tree, si->ext_node); seq_puts(s, "\nBalancing F2FS Async:\n"); + seq_printf(s, " - DIO (R: %4d, W: %4d)\n", + si->nr_dio_read, si->nr_dio_write); seq_printf(s, " - IO_R (Data: %4d, Node: %4d, Meta: %4d\n", si->nr_rd_data, si->nr_rd_node, si->nr_rd_meta); seq_printf(s, " - IO_W (CP: %4d, Data: %4d, Flush: (%4d %4d %4d), " @@ -444,18 +448,7 @@ static int stat_show(struct seq_file *s, void *v) return 0; } -static int stat_open(struct inode *inode, struct file *file) -{ - return single_open(file, stat_show, inode->i_private); -} - -static const struct file_operations stat_fops = { - .owner = THIS_MODULE, - .open = stat_open, - 
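
f2fs_direct_IO() above switches to __blockdev_direct_IO() so it can pass f2fs_dio_submit_bio(), which stashes the bio's original bi_end_io/bi_private in a small allocation, bumps an in-flight counter, and lets f2fs_dio_end_io() restore the saved fields before chaining to the real completion. A userspace model of that save-and-restore interposition; the struct layout and names below are illustrative, not the block layer's:

#include <stdio.h>
#include <stdlib.h>

struct bio {
        void (*end_io)(struct bio *);
        void *private;
};

static int in_flight;   /* models the F2FS_DIO_* page counters */

struct dio_wrap {
        void (*orig_end_io)(struct bio *);
        void *orig_private;
};

static void wrapped_end_io(struct bio *bio)
{
        struct dio_wrap *w = bio->private;

        in_flight--;                    /* dec_page_count() analogue */
        bio->private = w->orig_private; /* restore before chaining */
        bio->end_io = w->orig_end_io;
        free(w);
        bio->end_io(bio);               /* run the original completion */
}

static void submit(struct bio *bio)
{
        struct dio_wrap *w = malloc(sizeof(*w));

        if (!w) {                       /* on ENOMEM, fail the bio */
                printf("submit failed\n");
                return;
        }
        w->orig_end_io = bio->end_io;
        w->orig_private = bio->private;
        bio->end_io = wrapped_end_io;
        bio->private = w;
        in_flight++;                    /* inc_page_count() analogue */
        bio->end_io(bio);               /* pretend the device completed it */
}

static void done(struct bio *bio)
{
        printf("done, private=%s\n", (char *)bio->private);
}

int main(void)
{
        struct bio b = { .end_io = done, .private = "caller-data" };

        submit(&b);
        printf("in flight: %d\n", in_flight);
        return 0;
}
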
.read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(stat); int f2fs_build_stats(struct f2fs_sb_info *sbi) { @@ -510,7 +503,7 @@ void f2fs_destroy_stats(struct f2fs_sb_info *sbi) list_del(&si->stat_list); mutex_unlock(&f2fs_stat_mutex); - kfree(si); + kvfree(si); } int __init f2fs_create_root_stats(void) diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c index bacc667950b6..50d0d36280fa 100644 --- a/fs/f2fs/dir.c +++ b/fs/f2fs/dir.c @@ -293,7 +293,7 @@ void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de, { enum page_type type = f2fs_has_inline_dentry(dir) ? NODE : DATA; lock_page(page); - f2fs_wait_on_page_writeback(page, type, true); + f2fs_wait_on_page_writeback(page, type, true, true); de->ino = cpu_to_le32(inode->i_ino); set_de_type(de, inode->i_mode); set_page_dirty(page); @@ -307,7 +307,7 @@ static void init_dent_inode(const struct qstr *name, struct page *ipage) { struct f2fs_inode *ri; - f2fs_wait_on_page_writeback(ipage, NODE, true); + f2fs_wait_on_page_writeback(ipage, NODE, true, true); /* copy name info. to this inode page */ ri = F2FS_INODE(ipage); @@ -550,7 +550,7 @@ start: ++level; goto start; add_dentry: - f2fs_wait_on_page_writeback(dentry_page, DATA, true); + f2fs_wait_on_page_writeback(dentry_page, DATA, true, true); if (inode) { down_write(&F2FS_I(inode)->i_sem); @@ -705,7 +705,7 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page, return f2fs_delete_inline_entry(dentry, page, dir, inode); lock_page(page); - f2fs_wait_on_page_writeback(page, DATA, true); + f2fs_wait_on_page_writeback(page, DATA, true, true); dentry_blk = page_address(page); bit_pos = dentry - dentry_blk->dentry; @@ -808,6 +808,17 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, de_name.name = d->filename[bit_pos]; de_name.len = le16_to_cpu(de->name_len); + /* check memory boundary before moving forward */ + bit_pos += GET_DENTRY_SLOTS(le16_to_cpu(de->name_len)); + if (unlikely(bit_pos > d->max)) { + f2fs_msg(sbi->sb, KERN_WARNING, + "%s: corrupted namelen=%d, run fsck to fix.", + __func__, le16_to_cpu(de->name_len)); + set_sbi_flag(sbi, SBI_NEED_FSCK); + err = -EINVAL; + goto out; + } + if (f2fs_encrypted_inode(d->inode)) { int save_len = fstr->len; @@ -830,7 +841,6 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, if (readdir_ra) f2fs_ra_node_page(sbi, le32_to_cpu(de->ino)); - bit_pos += GET_DENTRY_SLOTS(le16_to_cpu(de->name_len)); ctx->pos = start_pos + bit_pos; } out: diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 1e031971a466..12fabd6735dd 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -67,7 +67,7 @@ struct f2fs_fault_info { unsigned int inject_type; }; -extern char *f2fs_fault_name[FAULT_MAX]; +extern const char *f2fs_fault_name[FAULT_MAX]; #define IS_FAULT_SET(fi, type) ((fi)->inject_type & (1 << (type))) #endif @@ -152,12 +152,13 @@ struct f2fs_mount_info { #define F2FS_FEATURE_VERITY 0x0400 /* reserved */ #define F2FS_FEATURE_SB_CHKSUM 0x0800 -#define F2FS_HAS_FEATURE(sb, mask) \ - ((F2FS_SB(sb)->raw_super->feature & cpu_to_le32(mask)) != 0) -#define F2FS_SET_FEATURE(sb, mask) \ - (F2FS_SB(sb)->raw_super->feature |= cpu_to_le32(mask)) -#define F2FS_CLEAR_FEATURE(sb, mask) \ - (F2FS_SB(sb)->raw_super->feature &= ~cpu_to_le32(mask)) +#define __F2FS_HAS_FEATURE(raw_super, mask) \ + ((raw_super->feature & cpu_to_le32(mask)) != 0) +#define F2FS_HAS_FEATURE(sbi, mask) __F2FS_HAS_FEATURE(sbi->raw_super, mask) +#define F2FS_SET_FEATURE(sbi, mask) \ + 
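
The f2fs_fill_dentries() hunk above moves the slot advance derived from the on-disk name_len to before any use and validates it against d->max, so a corrupted directory entry produces -EINVAL and an fsck request instead of an out-of-bounds walk. A small sketch of validate-before-use parsing of variable-length records (sizes and names invented):

#include <stdint.h>
#include <stdio.h>

#define SLOT_SIZE 8
#define MAX_SLOTS 16

struct dent { uint16_t name_len; };

/* Advance through variable-length entries, validating the slot count
 * derived from on-disk name_len BEFORE using it to index anything,
 * the way f2fs_fill_dentries() now does for corrupted directories. */
static int walk(const struct dent *de, unsigned bit_pos, unsigned max)
{
        unsigned slots = (de->name_len + SLOT_SIZE - 1) / SLOT_SIZE;

        bit_pos += slots;
        if (bit_pos > max) {
                fprintf(stderr, "corrupted namelen=%u, run fsck\n",
                        de->name_len);
                return -1;      /* -EINVAL analogue */
        }
        printf("entry ok, next slot %u\n", bit_pos);
        return 0;
}

int main(void)
{
        struct dent good = { .name_len = 12 };
        struct dent bad = { .name_len = 60000 };        /* corrupted */

        walk(&good, 0, MAX_SLOTS);
        walk(&bad, 0, MAX_SLOTS);
        return 0;
}
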
(sbi->raw_super->feature |= cpu_to_le32(mask)) +#define F2FS_CLEAR_FEATURE(sbi, mask) \ + (sbi->raw_super->feature &= ~cpu_to_le32(mask)) /* * Default values for user and/or group using reserved blocks @@ -284,7 +285,7 @@ struct discard_cmd { struct block_device *bdev; /* bdev */ unsigned short ref; /* reference count */ unsigned char state; /* state */ - unsigned char issuing; /* issuing discard */ + unsigned char queued; /* queued discard */ int error; /* bio error */ spinlock_t lock; /* for state/bio_ref updating */ unsigned short bio_ref; /* bio reference count */ @@ -326,7 +327,7 @@ struct discard_cmd_control { unsigned int undiscard_blks; /* # of undiscard blocks */ unsigned int next_pos; /* next discard position */ atomic_t issued_discard; /* # of issued discard */ - atomic_t issing_discard; /* # of issing discard */ + atomic_t queued_discard; /* # of queued discard */ atomic_t discard_cmd_cnt; /* # of cached cmd count */ struct rb_root_cached root; /* root of discard rb-tree */ bool rbtree_check; /* config for consistence check */ @@ -416,6 +417,7 @@ static inline bool __has_cursum_space(struct f2fs_journal *journal, #define F2FS_GOING_DOWN_METASYNC 0x1 /* going down with metadata */ #define F2FS_GOING_DOWN_NOSYNC 0x2 /* going down */ #define F2FS_GOING_DOWN_METAFLUSH 0x3 /* going down with meta flush */ +#define F2FS_GOING_DOWN_NEED_FSCK 0x4 /* going down to trigger fsck */ #if defined(__KERNEL__) && defined(CONFIG_COMPAT) /* @@ -557,16 +559,8 @@ struct extent_info { }; struct extent_node { - struct rb_node rb_node; - union { - struct { - unsigned int fofs; - unsigned int len; - u32 blk; - }; - struct extent_info ei; /* extent info */ - - }; + struct rb_node rb_node; /* rb node located in rb-tree */ + struct extent_info ei; /* extent info */ struct list_head list; /* node in global extent list of sbi */ struct extent_tree *et; /* extent tree pointer */ }; @@ -601,6 +595,7 @@ struct f2fs_map_blocks { pgoff_t *m_next_pgofs; /* point next possible non-hole pgofs */ pgoff_t *m_next_extent; /* point to next possible extent */ int m_seg_type; + bool m_may_create; /* indicate it is from write path */ }; /* for flag in get_data_block */ @@ -889,7 +884,7 @@ struct flush_cmd_control { struct task_struct *f2fs_issue_flush; /* flush thread */ wait_queue_head_t flush_wait_queue; /* waiting queue for wake-up */ atomic_t issued_flush; /* # of issued flushes */ - atomic_t issing_flush; /* # of issing flushes */ + atomic_t queued_flush; /* # of queued flushes */ struct llist_head issue_list; /* list for command issue */ struct llist_node *dispatch_list; /* list for command dispatch */ }; @@ -956,6 +951,8 @@ enum count_type { F2FS_RD_DATA, F2FS_RD_NODE, F2FS_RD_META, + F2FS_DIO_WRITE, + F2FS_DIO_READ, NR_COUNT_TYPE, }; @@ -1170,8 +1167,6 @@ struct f2fs_sb_info { /* for bio operations */ struct f2fs_bio_info *write_io[NR_PAGE_TYPE]; /* for write bios */ - struct mutex wio_mutex[NR_PAGE_TYPE - 1][NR_TEMP_TYPE]; - /* bio ordering for NODE/DATA */ /* keep migration IO order for LFS mode */ struct rw_semaphore io_order_lock; mempool_t *write_io_dummy; /* Dummy pages */ @@ -1263,6 +1258,7 @@ struct f2fs_sb_info { struct f2fs_gc_kthread *gc_thread; /* GC thread */ unsigned int cur_victim_sec; /* current victim section num */ unsigned int gc_mode; /* current GC state */ + unsigned int next_victim_seg[2]; /* next segment in victim section */ /* for skip statistic */ unsigned long long skipped_atomic_files[2]; /* FG_GC and BG_GC */ unsigned long long skipped_gc_rwsem; /* FG_GC only */ @@ -1272,6 +1268,8 @@ 
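
The reworked f2fs.h macros above test feature bits on sbi->raw_super directly (with a __F2FS_HAS_FEATURE() variant for a bare raw superblock), which removes the sbi->sb round trips converted throughout the rest of this patch. A toy model of little-endian feature bitmask predicates generated from one macro, in the spirit of F2FS_FEATURE_FUNCS(); it assumes a little-endian host, so cpu_to_le32() is modeled as a no-op:

#include <stdint.h>
#include <stdio.h>

#define FEATURE_ENCRYPT  0x0001
#define FEATURE_BLKZONED 0x0002

/* On-disk superblock stores feature bits little-endian. */
struct raw_super { uint32_t feature; };
struct sb_info   { struct raw_super *raw_super; };

/* Assumed little-endian host, so cpu_to_le32() is an identity here. */
#define cpu_to_le32(x) (x)

#define __HAS_FEATURE(raw, mask) (((raw)->feature & cpu_to_le32(mask)) != 0)
#define HAS_FEATURE(sbi, mask)   __HAS_FEATURE((sbi)->raw_super, mask)

/* Per-feature predicates generated from one macro, like
 * F2FS_FEATURE_FUNCS() in f2fs.h. */
#define FEATURE_FUNC(name, flag)                                \
static inline int sb_has_##name(struct sb_info *sbi)            \
{ return HAS_FEATURE(sbi, FEATURE_##flag); }

FEATURE_FUNC(encrypt, ENCRYPT)
FEATURE_FUNC(blkzoned, BLKZONED)

int main(void)
{
        struct raw_super raw = { .feature = cpu_to_le32(FEATURE_ENCRYPT) };
        struct sb_info sbi = { .raw_super = &raw };

        printf("encrypt: %d, blkzoned: %d\n",
               sb_has_encrypt(&sbi), sb_has_blkzoned(&sbi));
        return 0;
}
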
struct f2fs_sb_info { /* maximum # of trials to find a victim segment for SSR and GC */ unsigned int max_victim_search; + /* migration granularity of garbage collection, unit: segment */ + unsigned int migration_granularity; /* * for stat information. @@ -1330,6 +1328,13 @@ struct f2fs_sb_info { __u32 s_chksum_seed; }; +struct f2fs_private_dio { + struct inode *inode; + void *orig_private; + bio_end_io_t *orig_end_io; + bool write; +}; + #ifdef CONFIG_F2FS_FAULT_INJECTION #define f2fs_show_injection_info(type) \ printk_ratelimited("%sF2FS-fs : inject %s in %s of %pF\n", \ @@ -1608,12 +1613,16 @@ static inline void disable_nat_bits(struct f2fs_sb_info *sbi, bool lock) { unsigned long flags; - set_sbi_flag(sbi, SBI_NEED_FSCK); + /* + * In order to re-enable nat_bits we need to call fsck.f2fs by + * set_sbi_flag(sbi, SBI_NEED_FSCK). But it may give huge cost, + * so let's rely on regular fsck or unclean shutdown. + */ if (lock) spin_lock_irqsave(&sbi->cp_lock, flags); __clear_ckpt_flags(F2FS_CKPT(sbi), CP_NAT_BITS_FLAG); - kfree(NM_I(sbi)->nat_bits); + kvfree(NM_I(sbi)->nat_bits); NM_I(sbi)->nat_bits = NULL; if (lock) spin_unlock_irqrestore(&sbi->cp_lock, flags); @@ -2146,7 +2155,11 @@ static inline bool is_idle(struct f2fs_sb_info *sbi, int type) { if (get_pages(sbi, F2FS_RD_DATA) || get_pages(sbi, F2FS_RD_NODE) || get_pages(sbi, F2FS_RD_META) || get_pages(sbi, F2FS_WB_DATA) || - get_pages(sbi, F2FS_WB_CP_DATA)) + get_pages(sbi, F2FS_WB_CP_DATA) || + get_pages(sbi, F2FS_DIO_READ) || + get_pages(sbi, F2FS_DIO_WRITE) || + atomic_read(&SM_I(sbi)->dcc_info->queued_discard) || + atomic_read(&SM_I(sbi)->fcc_info->queued_flush)) return false; return f2fs_time_over(sbi, type); } @@ -2370,6 +2383,7 @@ static inline void __mark_inode_dirty_flag(struct inode *inode, case FI_NEW_INODE: if (set) return; + /* fall through */ case FI_DATA_EXIST: case FI_INLINE_DOTS: case FI_PIN_FILE: @@ -2672,22 +2686,37 @@ static inline bool is_dot_dotdot(const struct qstr *str) static inline bool f2fs_may_extent_tree(struct inode *inode) { - if (!test_opt(F2FS_I_SB(inode), EXTENT_CACHE) || + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + + if (!test_opt(sbi, EXTENT_CACHE) || is_inode_flag_set(inode, FI_NO_EXTENT)) return false; + /* + * for recovered files during mount do not create extents + * if shrinker is not registered. 
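
is_idle() above now also counts queued discards, queued flushes, and in-flight direct IO before declaring the filesystem idle, and the issing-to-queued rename reflects that these atomics track accepted-but-not-completed work. A C11 sketch of that idle test (counter names hypothetical):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int queued_discard;
static atomic_int queued_flush;
static atomic_int dio_in_flight;

/* Idle only when nothing is queued anywhere; background GC and
 * discard threads consult this before issuing more work. */
static bool is_idle(void)
{
        return !atomic_load(&queued_discard) &&
               !atomic_load(&queued_flush) &&
               !atomic_load(&dio_in_flight);
}

static void queue_flush(void)    { atomic_fetch_add(&queued_flush, 1); }
static void complete_flush(void) { atomic_fetch_sub(&queued_flush, 1); }

int main(void)
{
        printf("idle: %d\n", is_idle());        /* 1 */
        queue_flush();
        printf("idle: %d\n", is_idle());        /* 0: flush outstanding */
        complete_flush();
        printf("idle: %d\n", is_idle());        /* 1 again */
        return 0;
}
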
+ */ + if (list_empty(&sbi->s_list)) + return false; + return S_ISREG(inode->i_mode); } static inline void *f2fs_kmalloc(struct f2fs_sb_info *sbi, size_t size, gfp_t flags) { + void *ret; + if (time_to_inject(sbi, FAULT_KMALLOC)) { f2fs_show_injection_info(FAULT_KMALLOC); return NULL; } - return kmalloc(size, flags); + ret = kmalloc(size, flags); + if (ret) + return ret; + + return kvmalloc(size, flags); } static inline void *f2fs_kzalloc(struct f2fs_sb_info *sbi, @@ -2762,6 +2791,8 @@ static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi, spin_unlock(&sbi->iostat_lock); } +#define __is_large_section(sbi) ((sbi)->segs_per_sec > 1) + #define __is_meta_io(fio) (PAGE_TYPE_OF_BIO(fio->type) == META && \ (!is_read_io(fio->op) || fio->is_meta)) @@ -3007,7 +3038,7 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page, struct f2fs_summary *sum, int type, struct f2fs_io_info *fio, bool add_list); void f2fs_wait_on_page_writeback(struct page *page, - enum page_type type, bool ordered); + enum page_type type, bool ordered, bool locked); void f2fs_wait_on_block_writeback(struct inode *inode, block_t blkaddr); void f2fs_wait_on_block_writeback_range(struct inode *inode, block_t blkaddr, block_t len); @@ -3147,6 +3178,7 @@ struct f2fs_stat_info { int total_count, utilization; int bg_gc, nr_wb_cp_data, nr_wb_data; int nr_rd_data, nr_rd_node, nr_rd_meta; + int nr_dio_read, nr_dio_write; unsigned int io_skip_bggc, other_skip_bggc; int nr_flushing, nr_flushed, flush_list_empty; int nr_discarding, nr_discarded; @@ -3459,9 +3491,9 @@ static inline bool f2fs_post_read_required(struct inode *inode) } #define F2FS_FEATURE_FUNCS(name, flagname) \ -static inline int f2fs_sb_has_##name(struct super_block *sb) \ +static inline int f2fs_sb_has_##name(struct f2fs_sb_info *sbi) \ { \ - return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_##flagname); \ + return F2FS_HAS_FEATURE(sbi, F2FS_FEATURE_##flagname); \ } F2FS_FEATURE_FUNCS(encrypt, ENCRYPT); @@ -3491,7 +3523,7 @@ static inline int get_blkz_type(struct f2fs_sb_info *sbi, static inline bool f2fs_hw_should_discard(struct f2fs_sb_info *sbi) { - return f2fs_sb_has_blkzoned(sbi->sb); + return f2fs_sb_has_blkzoned(sbi); } static inline bool f2fs_hw_support_discard(struct f2fs_sb_info *sbi) @@ -3566,7 +3598,7 @@ static inline bool f2fs_force_buffered_io(struct inode *inode, * for blkzoned device, fallback direct IO to buffered IO, so * all IOs can be serialized by log-structured write. 
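
f2fs_kmalloc() above gains a fallback: when the slab allocation fails it retries with kvmalloc(), which may satisfy the request from vmalloc space, and the many kfree-to-kvfree conversions in this patch exist because kvfree() is the release routine that copes with either origin. A userspace analogue of try-contiguous-then-fallback allocation paired with an origin-agnostic free; the size threshold below merely simulates fragmentation:

#include <stdio.h>
#include <stdlib.h>

/* Simulated "contiguous" allocator that refuses large requests,
 * standing in for kmalloc() under memory fragmentation. */
static void *alloc_contig(size_t size)
{
        return size <= 4096 ? malloc(size) : NULL;
}

static void *alloc_fallback(size_t size)        /* kvmalloc() analogue */
{
        void *p = alloc_contig(size);

        if (p)
                return p;
        return malloc(size);    /* "vmalloc" path: virtually contiguous */
}

/* kvfree() analogue: one free routine that handles both origins, so
 * callers need not remember which allocator succeeded. */
static void free_any(void *p)
{
        free(p);
}

int main(void)
{
        void *small = alloc_fallback(64);
        void *big = alloc_fallback(1 << 20);

        printf("small=%p big=%p\n", small, big);
        free_any(small);
        free_any(big);
        return 0;
}
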
*/ - if (f2fs_sb_has_blkzoned(sbi->sb)) + if (f2fs_sb_has_blkzoned(sbi)) return true; if (test_opt(sbi, LFS) && (rw == WRITE) && block_unaligned_IO(inode, iocb, iter)) @@ -3589,7 +3621,7 @@ extern void f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned int rate, static inline bool is_journalled_quota(struct f2fs_sb_info *sbi) { #ifdef CONFIG_QUOTA - if (f2fs_sb_has_quota_ino(sbi->sb)) + if (f2fs_sb_has_quota_ino(sbi)) return true; if (F2FS_OPTION(sbi).s_qf_names[USRQUOTA] || F2FS_OPTION(sbi).s_qf_names[GRPQUOTA] || diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index 88b124677189..bba56b39dcc5 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -82,7 +82,7 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf) } /* fill the page */ - f2fs_wait_on_page_writeback(page, DATA, false); + f2fs_wait_on_page_writeback(page, DATA, false, true); /* wait for GCed page writeback via META_MAPPING */ f2fs_wait_on_block_writeback(inode, dn.data_blkaddr); @@ -216,6 +216,9 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end, trace_f2fs_sync_file_enter(inode); + if (S_ISDIR(inode->i_mode)) + goto go_write; + /* if fdatasync is triggered, let's do in-place-update */ if (datasync || get_dirty_pages(inode) <= SM_I(sbi)->min_fsync_blocks) set_inode_flag(inode, FI_NEED_IPU); @@ -575,7 +578,7 @@ static int truncate_partial_data_page(struct inode *inode, u64 from, if (IS_ERR(page)) return PTR_ERR(page) == -ENOENT ? 0 : PTR_ERR(page); truncate_out: - f2fs_wait_on_page_writeback(page, DATA, true); + f2fs_wait_on_page_writeback(page, DATA, true, true); zero_user(page, offset, PAGE_SIZE - offset); /* An encrypted inode should have a key and truncate the last page. */ @@ -696,7 +699,7 @@ int f2fs_getattr(const struct path *path, struct kstat *stat, unsigned int flags; if (f2fs_has_extra_attr(inode) && - f2fs_sb_has_inode_crtime(inode->i_sb) && + f2fs_sb_has_inode_crtime(F2FS_I_SB(inode)) && F2FS_FITS_IN_INODE(ri, fi->i_extra_isize, i_crtime)) { stat->result_mask |= STATX_BTIME; stat->btime.tv_sec = fi->i_crtime.tv_sec; @@ -892,7 +895,7 @@ static int fill_zero(struct inode *inode, pgoff_t index, if (IS_ERR(page)) return PTR_ERR(page); - f2fs_wait_on_page_writeback(page, DATA, true); + f2fs_wait_on_page_writeback(page, DATA, true, true); zero_user(page, start, len); set_page_dirty(page); f2fs_put_page(page, 1); @@ -1496,7 +1499,8 @@ static int expand_inode_data(struct inode *inode, loff_t offset, { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct f2fs_map_blocks map = { .m_next_pgofs = NULL, - .m_next_extent = NULL, .m_seg_type = NO_CHECK_TYPE }; + .m_next_extent = NULL, .m_seg_type = NO_CHECK_TYPE, + .m_may_create = true }; pgoff_t pg_end; loff_t new_size = i_size_read(inode); loff_t off_end; @@ -1681,7 +1685,7 @@ static int __f2fs_ioc_setflags(struct inode *inode, unsigned int flags) inode->i_ctime = current_time(inode); f2fs_set_inode_flags(inode); - f2fs_mark_inode_dirty_sync(inode, false); + f2fs_mark_inode_dirty_sync(inode, true); return 0; } @@ -1962,6 +1966,13 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg) f2fs_stop_checkpoint(sbi, false); set_sbi_flag(sbi, SBI_IS_SHUTDOWN); break; + case F2FS_GOING_DOWN_NEED_FSCK: + set_sbi_flag(sbi, SBI_NEED_FSCK); + /* do checkpoint only */ + ret = f2fs_sync_fs(sb, 1); + if (ret) + goto out; + break; default: ret = -EINVAL; goto out; @@ -2030,7 +2041,7 @@ static int f2fs_ioc_set_encryption_policy(struct file *filp, unsigned long arg) { struct inode *inode = file_inode(filp); - if (!f2fs_sb_has_encrypt(inode->i_sb)) 
+ if (!f2fs_sb_has_encrypt(F2FS_I_SB(inode))) return -EOPNOTSUPP; f2fs_update_time(F2FS_I_SB(inode), REQ_TIME); @@ -2040,7 +2051,7 @@ static int f2fs_ioc_set_encryption_policy(struct file *filp, unsigned long arg) static int f2fs_ioc_get_encryption_policy(struct file *filp, unsigned long arg) { - if (!f2fs_sb_has_encrypt(file_inode(filp)->i_sb)) + if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp)))) return -EOPNOTSUPP; return fscrypt_ioctl_get_policy(filp, (void __user *)arg); } @@ -2051,7 +2062,7 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg) struct f2fs_sb_info *sbi = F2FS_I_SB(inode); int err; - if (!f2fs_sb_has_encrypt(inode->i_sb)) + if (!f2fs_sb_has_encrypt(sbi)) return -EOPNOTSUPP; err = mnt_want_write_file(filp); @@ -2155,7 +2166,7 @@ do_more: } ret = f2fs_gc(sbi, range.sync, true, GET_SEGNO(sbi, range.start)); - range.start += sbi->blocks_per_seg; + range.start += BLKS_PER_SEC(sbi); if (range.start <= end) goto do_more; out: @@ -2197,7 +2208,8 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi, { struct inode *inode = file_inode(filp); struct f2fs_map_blocks map = { .m_next_extent = NULL, - .m_seg_type = NO_CHECK_TYPE }; + .m_seg_type = NO_CHECK_TYPE , + .m_may_create = false }; struct extent_info ei = {0, 0, 0}; pgoff_t pg_start, pg_end, next_pgofs; unsigned int blk_per_seg = sbi->blocks_per_seg; @@ -2560,7 +2572,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg) return -EFAULT; if (sbi->s_ndevs <= 1 || sbi->s_ndevs - 1 <= range.dev_num || - sbi->segs_per_sec != 1) { + __is_large_section(sbi)) { f2fs_msg(sbi->sb, KERN_WARNING, "Can't flush %u in %d for segs_per_sec %u != 1\n", range.dev_num, sbi->s_ndevs, @@ -2635,12 +2647,11 @@ static int f2fs_ioc_setproject(struct file *filp, __u32 projid) struct inode *inode = file_inode(filp); struct f2fs_inode_info *fi = F2FS_I(inode); struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct super_block *sb = sbi->sb; struct page *ipage; kprojid_t kprojid; int err; - if (!f2fs_sb_has_project_quota(sb)) { + if (!f2fs_sb_has_project_quota(sbi)) { if (projid != F2FS_DEF_PROJID) return -EOPNOTSUPP; else @@ -2757,7 +2768,7 @@ static int f2fs_ioc_fsgetxattr(struct file *filp, unsigned long arg) fa.fsx_xflags = f2fs_iflags_to_xflags(fi->i_flags & F2FS_FL_USER_VISIBLE); - if (f2fs_sb_has_project_quota(inode->i_sb)) + if (f2fs_sb_has_project_quota(F2FS_I_SB(inode))) fa.fsx_projid = (__u32)from_kprojid(&init_user_ns, fi->i_projid); @@ -2932,6 +2943,7 @@ int f2fs_precache_extents(struct inode *inode) map.m_next_pgofs = NULL; map.m_next_extent = &m_next_extent; map.m_seg_type = NO_CHECK_TYPE; + map.m_may_create = false; end = F2FS_I_SB(inode)->max_file_blocks; while (map.m_lblk < end) { diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index a07241fb8537..195cf0f9d9ef 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -142,7 +142,7 @@ int f2fs_start_gc_thread(struct f2fs_sb_info *sbi) "f2fs_gc-%u:%u", MAJOR(dev), MINOR(dev)); if (IS_ERR(gc_th->f2fs_gc_task)) { err = PTR_ERR(gc_th->f2fs_gc_task); - kfree(gc_th); + kvfree(gc_th); sbi->gc_thread = NULL; } out: @@ -155,7 +155,7 @@ void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi) if (!gc_th) return; kthread_stop(gc_th->f2fs_gc_task); - kfree(gc_th); + kvfree(gc_th); sbi->gc_thread = NULL; } @@ -323,8 +323,7 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi, p.min_cost = get_max_cost(sbi, &p); if (*result != NULL_SEGNO) { - if (IS_DATASEG(get_seg_entry(sbi, *result)->type) && - get_valid_blocks(sbi, *result, false) && + if 
(get_valid_blocks(sbi, *result, false) && !sec_usage_check(sbi, GET_SEC_FROM_SEG(sbi, *result))) p.min_segno = *result; goto out; @@ -333,6 +332,22 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi, if (p.max_search == 0) goto out; + if (__is_large_section(sbi) && p.alloc_mode == LFS) { + if (sbi->next_victim_seg[BG_GC] != NULL_SEGNO) { + p.min_segno = sbi->next_victim_seg[BG_GC]; + *result = p.min_segno; + sbi->next_victim_seg[BG_GC] = NULL_SEGNO; + goto got_result; + } + if (gc_type == FG_GC && + sbi->next_victim_seg[FG_GC] != NULL_SEGNO) { + p.min_segno = sbi->next_victim_seg[FG_GC]; + *result = p.min_segno; + sbi->next_victim_seg[FG_GC] = NULL_SEGNO; + goto got_result; + } + } + last_victim = sm->last_victim[p.gc_mode]; if (p.alloc_mode == LFS && gc_type == FG_GC) { p.min_segno = check_bg_victims(sbi); @@ -395,6 +410,8 @@ next: } if (p.min_segno != NULL_SEGNO) { got_it: + *result = (p.min_segno / p.ofs_unit) * p.ofs_unit; +got_result: if (p.alloc_mode == LFS) { secno = GET_SEC_FROM_SEG(sbi, p.min_segno); if (gc_type == FG_GC) @@ -402,13 +419,13 @@ got_it: else set_bit(secno, dirty_i->victim_secmap); } - *result = (p.min_segno / p.ofs_unit) * p.ofs_unit; + } +out: + if (p.min_segno != NULL_SEGNO) trace_f2fs_get_victim(sbi->sb, type, gc_type, &p, sbi->cur_victim_sec, prefree_segments(sbi), free_segments(sbi)); - } -out: mutex_unlock(&dirty_i->seglist_lock); return (p.min_segno == NULL_SEGNO) ? 0 : 1; @@ -658,6 +675,14 @@ got_it: fio.page = page; fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr; + /* + * don't cache encrypted data into meta inode until previous dirty + * data were writebacked to avoid racing between GC and flush. + */ + f2fs_wait_on_page_writeback(page, DATA, true, true); + + f2fs_wait_on_block_writeback(inode, dn.data_blkaddr); + fio.encrypted_page = f2fs_pagecache_get_page(META_MAPPING(sbi), dn.data_blkaddr, FGP_LOCK | FGP_CREAT, GFP_NOFS); @@ -743,7 +768,9 @@ static int move_data_block(struct inode *inode, block_t bidx, * don't cache encrypted data into meta inode until previous dirty * data were writebacked to avoid racing between GC and flush. 
*/ - f2fs_wait_on_page_writeback(page, DATA, true); + f2fs_wait_on_page_writeback(page, DATA, true, true); + + f2fs_wait_on_block_writeback(inode, dn.data_blkaddr); err = f2fs_get_node_info(fio.sbi, dn.nid, &ni); if (err) @@ -802,8 +829,8 @@ static int move_data_block(struct inode *inode, block_t bidx, } write_page: + f2fs_wait_on_page_writeback(fio.encrypted_page, DATA, true, true); set_page_dirty(fio.encrypted_page); - f2fs_wait_on_page_writeback(fio.encrypted_page, DATA, true); if (clear_page_dirty_for_io(fio.encrypted_page)) dec_page_count(fio.sbi, F2FS_DIRTY_META); @@ -811,7 +838,7 @@ write_page: ClearPageError(page); /* allocate block address */ - f2fs_wait_on_page_writeback(dn.node_page, NODE, true); + f2fs_wait_on_page_writeback(dn.node_page, NODE, true, true); fio.op = REQ_OP_WRITE; fio.op_flags = REQ_SYNC; @@ -897,8 +924,9 @@ static int move_data_page(struct inode *inode, block_t bidx, int gc_type, bool is_dirty = PageDirty(page); retry: + f2fs_wait_on_page_writeback(page, DATA, true, true); + set_page_dirty(page); - f2fs_wait_on_page_writeback(page, DATA, true); if (clear_page_dirty_for_io(page)) { inode_dec_dirty_pages(inode); f2fs_remove_dirty_inode(inode); @@ -1093,15 +1121,18 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, struct blk_plug plug; unsigned int segno = start_segno; unsigned int end_segno = start_segno + sbi->segs_per_sec; - int seg_freed = 0; + int seg_freed = 0, migrated = 0; unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ? SUM_TYPE_DATA : SUM_TYPE_NODE; int submitted = 0; + if (__is_large_section(sbi)) + end_segno = rounddown(end_segno, sbi->segs_per_sec); + /* readahead multi ssa blocks those have contiguous address */ - if (sbi->segs_per_sec > 1) + if (__is_large_section(sbi)) f2fs_ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, segno), - sbi->segs_per_sec, META_SSA, true); + end_segno - segno, META_SSA, true); /* reference all summary page */ while (segno < end_segno) { @@ -1130,10 +1161,13 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, GET_SUM_BLOCK(sbi, segno)); f2fs_put_page(sum_page, 0); - if (get_valid_blocks(sbi, segno, false) == 0 || - !PageUptodate(sum_page) || - unlikely(f2fs_cp_error(sbi))) - goto next; + if (get_valid_blocks(sbi, segno, false) == 0) + goto freed; + if (__is_large_section(sbi) && + migrated >= sbi->migration_granularity) + goto skip; + if (!PageUptodate(sum_page) || unlikely(f2fs_cp_error(sbi))) + goto skip; sum = page_address(sum_page); if (type != GET_SUM_TYPE((&sum->footer))) { @@ -1141,7 +1175,7 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, "type [%d, %d] in SSA and SIT", segno, type, GET_SUM_TYPE((&sum->footer))); set_sbi_flag(sbi, SBI_NEED_FSCK); - goto next; + goto skip; } /* @@ -1160,10 +1194,15 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, stat_inc_seg_count(sbi, type, gc_type); +freed: if (gc_type == FG_GC && get_valid_blocks(sbi, segno, false) == 0) seg_freed++; -next: + migrated++; + + if (__is_large_section(sbi) && segno + 1 < end_segno) + sbi->next_victim_seg[gc_type] = segno + 1; +skip: f2fs_put_page(sum_page, 0); } @@ -1307,7 +1346,7 @@ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi) sbi->gc_pin_file_threshold = DEF_GC_FAILED_PINNED_FILES; /* give warm/cold data area from slower device */ - if (sbi->s_ndevs && sbi->segs_per_sec == 1) + if (sbi->s_ndevs && !__is_large_section(sbi)) SIT_I(sbi)->last_victim[ALLOC_NEXT] = GET_SEGNO(sbi, FDEV(0).end_blk) + 1; } diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c index 7b0cff7e6051..d636cbcf68f2 100644 --- 
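
For large sections, do_garbage_collect() above now migrates at most migration_granularity segments per cycle and records the next unprocessed segment in sbi->next_victim_seg[], which get_victim_by_default() consumes first on the following cycle, so a partially collected section is finished before a new victim is chosen. A sketch of that bounded, resumable loop (constants and names illustrative):

#include <stdio.h>

#define SEGS_PER_SEC 8
#define NULL_SEGNO ((unsigned)-1)

static unsigned next_victim_seg = NULL_SEGNO;

/* Migrate at most @granularity segments of @victim's section per GC
 * cycle; record the resume point so the next cycle finishes the same
 * section first (the next_victim_seg[] analogue). */
static void gc_cycle(unsigned victim, unsigned granularity)
{
        unsigned seg = victim;
        unsigned end = (victim / SEGS_PER_SEC + 1) * SEGS_PER_SEC;
        unsigned migrated = 0;

        for (; seg < end; seg++) {
                if (migrated >= granularity) {
                        next_victim_seg = seg;  /* resume here next time */
                        return;
                }
                printf("migrating segment %u\n", seg);
                migrated++;
        }
        next_victim_seg = NULL_SEGNO;   /* section fully collected */
}

int main(void)
{
        gc_cycle(96, 3);        /* 96 would come from victim selection */
        while (next_victim_seg != NULL_SEGNO)   /* later cycles resume */
                gc_cycle(next_victim_seg, 3);
        return 0;
}
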
a/fs/f2fs/inline.c +++ b/fs/f2fs/inline.c @@ -72,7 +72,7 @@ void f2fs_truncate_inline_inode(struct inode *inode, addr = inline_data_addr(inode, ipage); - f2fs_wait_on_page_writeback(ipage, NODE, true); + f2fs_wait_on_page_writeback(ipage, NODE, true, true); memset(addr + from, 0, MAX_INLINE_DATA(inode) - from); set_page_dirty(ipage); @@ -161,7 +161,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page) fio.old_blkaddr = dn->data_blkaddr; set_inode_flag(dn->inode, FI_HOT_DATA); f2fs_outplace_write_data(dn, &fio); - f2fs_wait_on_page_writeback(page, DATA, true); + f2fs_wait_on_page_writeback(page, DATA, true, true); if (dirty) { inode_dec_dirty_pages(dn->inode); f2fs_remove_dirty_inode(dn->inode); @@ -236,7 +236,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page) f2fs_bug_on(F2FS_I_SB(inode), page->index); - f2fs_wait_on_page_writeback(dn.inode_page, NODE, true); + f2fs_wait_on_page_writeback(dn.inode_page, NODE, true, true); src_addr = kmap_atomic(page); dst_addr = inline_data_addr(inode, dn.inode_page); memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode)); @@ -277,7 +277,7 @@ process_inline: ipage = f2fs_get_node_page(sbi, inode->i_ino); f2fs_bug_on(sbi, IS_ERR(ipage)); - f2fs_wait_on_page_writeback(ipage, NODE, true); + f2fs_wait_on_page_writeback(ipage, NODE, true, true); src_addr = inline_data_addr(inode, npage); dst_addr = inline_data_addr(inode, ipage); @@ -391,7 +391,7 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, goto out; } - f2fs_wait_on_page_writeback(page, DATA, true); + f2fs_wait_on_page_writeback(page, DATA, true, true); dentry_blk = page_address(page); @@ -501,18 +501,18 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage, stat_dec_inline_dir(dir); clear_inode_flag(dir, FI_INLINE_DENTRY); - kfree(backup_dentry); + kvfree(backup_dentry); return 0; recover: lock_page(ipage); - f2fs_wait_on_page_writeback(ipage, NODE, true); + f2fs_wait_on_page_writeback(ipage, NODE, true, true); memcpy(inline_dentry, backup_dentry, MAX_INLINE_DATA(dir)); f2fs_i_depth_write(dir, 0); f2fs_i_size_write(dir, MAX_INLINE_DATA(dir)); set_page_dirty(ipage); f2fs_put_page(ipage, 1); - kfree(backup_dentry); + kvfree(backup_dentry); return err; } @@ -565,7 +565,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name, } } - f2fs_wait_on_page_writeback(ipage, NODE, true); + f2fs_wait_on_page_writeback(ipage, NODE, true, true); name_hash = f2fs_dentry_hash(new_name, NULL); f2fs_update_dentry(ino, mode, &d, new_name, name_hash, bit_pos); @@ -597,7 +597,7 @@ void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page, int i; lock_page(page); - f2fs_wait_on_page_writeback(page, NODE, true); + f2fs_wait_on_page_writeback(page, NODE, true, true); inline_dentry = inline_data_addr(dir, page); make_dentry_ptr_inline(dir, &d, inline_dentry); diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 91ceee0ed4c4..bec52961630b 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -103,7 +103,7 @@ static void __recover_inline_status(struct inode *inode, struct page *ipage) while (start < end) { if (*start++) { - f2fs_wait_on_page_writeback(ipage, NODE, true); + f2fs_wait_on_page_writeback(ipage, NODE, true, true); set_inode_flag(inode, FI_DATA_EXIST); set_raw_inline(inode, F2FS_INODE(ipage)); @@ -118,7 +118,7 @@ static bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct page *page { struct f2fs_inode *ri = &F2FS_NODE(page)->i; - if (!f2fs_sb_has_inode_chksum(sbi->sb)) + if 
(!f2fs_sb_has_inode_chksum(sbi)) return false; if (!IS_INODE(page) || !(ri->i_inline & F2FS_EXTRA_ATTR)) @@ -218,7 +218,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) return false; } - if (f2fs_sb_has_flexible_inline_xattr(sbi->sb) + if (f2fs_sb_has_flexible_inline_xattr(sbi) && !f2fs_has_extra_attr(inode)) { set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_msg(sbi->sb, KERN_WARNING, @@ -228,7 +228,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) } if (f2fs_has_extra_attr(inode) && - !f2fs_sb_has_extra_attr(sbi->sb)) { + !f2fs_sb_has_extra_attr(sbi)) { set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_msg(sbi->sb, KERN_WARNING, "%s: inode (ino=%lx) is with extra_attr, " @@ -340,7 +340,7 @@ static int do_read_inode(struct inode *inode) fi->i_extra_isize = f2fs_has_extra_attr(inode) ? le16_to_cpu(ri->i_extra_isize) : 0; - if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)) { + if (f2fs_sb_has_flexible_inline_xattr(sbi)) { fi->i_inline_xattr_size = le16_to_cpu(ri->i_inline_xattr_size); } else if (f2fs_has_inline_xattr(inode) || f2fs_has_inline_dentry(inode)) { @@ -390,14 +390,14 @@ static int do_read_inode(struct inode *inode) if (fi->i_flags & F2FS_PROJINHERIT_FL) set_inode_flag(inode, FI_PROJ_INHERIT); - if (f2fs_has_extra_attr(inode) && f2fs_sb_has_project_quota(sbi->sb) && + if (f2fs_has_extra_attr(inode) && f2fs_sb_has_project_quota(sbi) && F2FS_FITS_IN_INODE(ri, fi->i_extra_isize, i_projid)) i_projid = (projid_t)le32_to_cpu(ri->i_projid); else i_projid = F2FS_DEF_PROJID; fi->i_projid = make_kprojid(&init_user_ns, i_projid); - if (f2fs_has_extra_attr(inode) && f2fs_sb_has_inode_crtime(sbi->sb) && + if (f2fs_has_extra_attr(inode) && f2fs_sb_has_inode_crtime(sbi) && F2FS_FITS_IN_INODE(ri, fi->i_extra_isize, i_crtime)) { fi->i_crtime.tv_sec = le64_to_cpu(ri->i_crtime); fi->i_crtime.tv_nsec = le32_to_cpu(ri->i_crtime_nsec); @@ -497,7 +497,7 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page) struct f2fs_inode *ri; struct extent_tree *et = F2FS_I(inode)->extent_tree; - f2fs_wait_on_page_writeback(node_page, NODE, true); + f2fs_wait_on_page_writeback(node_page, NODE, true, true); set_page_dirty(node_page); f2fs_inode_synced(inode); @@ -542,11 +542,11 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page) if (f2fs_has_extra_attr(inode)) { ri->i_extra_isize = cpu_to_le16(F2FS_I(inode)->i_extra_isize); - if (f2fs_sb_has_flexible_inline_xattr(F2FS_I_SB(inode)->sb)) + if (f2fs_sb_has_flexible_inline_xattr(F2FS_I_SB(inode))) ri->i_inline_xattr_size = cpu_to_le16(F2FS_I(inode)->i_inline_xattr_size); - if (f2fs_sb_has_project_quota(F2FS_I_SB(inode)->sb) && + if (f2fs_sb_has_project_quota(F2FS_I_SB(inode)) && F2FS_FITS_IN_INODE(ri, F2FS_I(inode)->i_extra_isize, i_projid)) { projid_t i_projid; @@ -556,7 +556,7 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page) ri->i_projid = cpu_to_le32(i_projid); } - if (f2fs_sb_has_inode_crtime(F2FS_I_SB(inode)->sb) && + if (f2fs_sb_has_inode_crtime(F2FS_I_SB(inode)) && F2FS_FITS_IN_INODE(ri, F2FS_I(inode)->i_extra_isize, i_crtime)) { ri->i_crtime = diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c index 99299ede7429..62d9829f3a6a 100644 --- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -61,7 +61,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) goto fail; } - if (f2fs_sb_has_project_quota(sbi->sb) && + if (f2fs_sb_has_project_quota(sbi) && (F2FS_I(dir)->i_flags & F2FS_PROJINHERIT_FL)) F2FS_I(inode)->i_projid = F2FS_I(dir)->i_projid; else @@ -79,7 +79,7 @@ 
static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) f2fs_may_encrypt(inode)) f2fs_set_encrypted_inode(inode); - if (f2fs_sb_has_extra_attr(sbi->sb)) { + if (f2fs_sb_has_extra_attr(sbi)) { set_inode_flag(inode, FI_EXTRA_ATTR); F2FS_I(inode)->i_extra_isize = F2FS_TOTAL_EXTRA_ATTR_SIZE; } @@ -92,7 +92,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) if (f2fs_may_inline_dentry(inode)) set_inode_flag(inode, FI_INLINE_DENTRY); - if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)) { + if (f2fs_sb_has_flexible_inline_xattr(sbi)) { f2fs_bug_on(sbi, !f2fs_has_extra_attr(inode)); if (f2fs_has_inline_xattr(inode)) xattr_size = F2FS_OPTION(sbi).inline_xattr_size; @@ -635,7 +635,7 @@ out_f2fs_handle_failed_inode: f2fs_handle_failed_inode(inode); out_free_encrypted_link: if (disk_link.name != (unsigned char *)symname) - kfree(disk_link.name); + kvfree(disk_link.name); return err; } diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index d338740d0fda..4f450e573312 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -826,6 +826,7 @@ static int truncate_node(struct dnode_of_data *dn) struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode); struct node_info ni; int err; + pgoff_t index; err = f2fs_get_node_info(sbi, dn->nid, &ni); if (err) @@ -845,10 +846,11 @@ static int truncate_node(struct dnode_of_data *dn) clear_node_page_dirty(dn->node_page); set_sbi_flag(sbi, SBI_IS_DIRTY); + index = dn->node_page->index; f2fs_put_page(dn->node_page, 1); invalidate_mapping_pages(NODE_MAPPING(sbi), - dn->node_page->index, dn->node_page->index); + index, index); dn->node_page = NULL; trace_f2fs_truncate_node(dn->inode, dn->nid, ni.blk_addr); @@ -1104,7 +1106,7 @@ skip_partial: ri->i_nid[offset[0] - NODE_DIR1_BLOCK]) { lock_page(page); BUG_ON(page->mapping != NODE_MAPPING(sbi)); - f2fs_wait_on_page_writeback(page, NODE, true); + f2fs_wait_on_page_writeback(page, NODE, true, true); ri->i_nid[offset[0] - NODE_DIR1_BLOCK] = 0; set_page_dirty(page); unlock_page(page); @@ -1232,7 +1234,7 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs) new_ni.version = 0; set_node_addr(sbi, &new_ni, NEW_ADDR, false); - f2fs_wait_on_page_writeback(page, NODE, true); + f2fs_wait_on_page_writeback(page, NODE, true, true); fill_node_footer(page, dn->nid, dn->inode->i_ino, ofs, true); set_cold_node(page, S_ISDIR(dn->inode->i_mode)); if (!PageUptodate(page)) @@ -1596,10 +1598,10 @@ int f2fs_move_node_page(struct page *node_page, int gc_type) .for_reclaim = 0, }; + f2fs_wait_on_page_writeback(node_page, NODE, true, true); + set_page_dirty(node_page); - f2fs_wait_on_page_writeback(node_page, NODE, true); - f2fs_bug_on(F2FS_P_SB(node_page), PageWriteback(node_page)); if (!clear_page_dirty_for_io(node_page)) { err = -EAGAIN; goto out_page; @@ -1687,8 +1689,7 @@ continue_unlock: goto continue_unlock; } - f2fs_wait_on_page_writeback(page, NODE, true); - BUG_ON(PageWriteback(page)); + f2fs_wait_on_page_writeback(page, NODE, true, true); set_fsync_mark(page, 0); set_dentry_mark(page, 0); @@ -1739,7 +1740,7 @@ continue_unlock: "Retry to write fsync mark: ino=%u, idx=%lx", ino, last_page->index); lock_page(last_page); - f2fs_wait_on_page_writeback(last_page, NODE, true); + f2fs_wait_on_page_writeback(last_page, NODE, true, true); set_page_dirty(last_page); unlock_page(last_page); goto retry; @@ -1820,9 +1821,8 @@ continue_unlock: goto lock_node; } - f2fs_wait_on_page_writeback(page, NODE, true); + f2fs_wait_on_page_writeback(page, NODE, true, true); - BUG_ON(PageWriteback(page)); if 
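
The truncate_node() hunk above fixes a use-after-free: the old code passed dn->node_page->index to invalidate_mapping_pages() after f2fs_put_page() had already dropped the page, so the fix caches the index in a local first. The general shape of the bug and its fix, modeled in userspace:

#include <stdio.h>
#include <stdlib.h>

struct page { unsigned long index; };

static void put_page(struct page *p)    /* final reference drop */
{
        free(p);
}

static void invalidate(unsigned long index)
{
        printf("invalidating page %lu\n", index);
}

int main(void)
{
        struct page *p = malloc(sizeof(*p));
        unsigned long index;

        if (!p)
                return 1;
        p->index = 42;

        /* Buggy order: put_page(p); invalidate(p->index); reads p after
         * its memory is gone. Fixed order: cache what is still needed,
         * then drop the reference. */
        index = p->index;
        put_page(p);
        invalidate(index);
        return 0;
}
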
(!clear_page_dirty_for_io(page)) goto continue_unlock; @@ -1889,7 +1889,7 @@ int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, get_page(page); spin_unlock_irqrestore(&sbi->fsync_node_lock, flags); - f2fs_wait_on_page_writeback(page, NODE, true); + f2fs_wait_on_page_writeback(page, NODE, true, false); if (TestClearPageError(page)) ret = -EIO; @@ -2467,7 +2467,7 @@ void f2fs_recover_inline_xattr(struct inode *inode, struct page *page) src_addr = inline_xattr_addr(inode, page); inline_size = inline_xattr_size(inode); - f2fs_wait_on_page_writeback(ipage, NODE, true); + f2fs_wait_on_page_writeback(ipage, NODE, true, true); memcpy(dst_addr, src_addr, inline_size); update_inode: f2fs_update_inode(inode, ipage); @@ -2561,17 +2561,17 @@ retry: if (dst->i_inline & F2FS_EXTRA_ATTR) { dst->i_extra_isize = src->i_extra_isize; - if (f2fs_sb_has_flexible_inline_xattr(sbi->sb) && + if (f2fs_sb_has_flexible_inline_xattr(sbi) && F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize), i_inline_xattr_size)) dst->i_inline_xattr_size = src->i_inline_xattr_size; - if (f2fs_sb_has_project_quota(sbi->sb) && + if (f2fs_sb_has_project_quota(sbi) && F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize), i_projid)) dst->i_projid = src->i_projid; - if (f2fs_sb_has_inode_crtime(sbi->sb) && + if (f2fs_sb_has_inode_crtime(sbi) && F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize), i_crtime_nsec)) { dst->i_crtime = src->i_crtime; @@ -3113,17 +3113,17 @@ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi) for (i = 0; i < nm_i->nat_blocks; i++) kvfree(nm_i->free_nid_bitmap[i]); - kfree(nm_i->free_nid_bitmap); + kvfree(nm_i->free_nid_bitmap); } kvfree(nm_i->free_nid_count); - kfree(nm_i->nat_bitmap); - kfree(nm_i->nat_bits); + kvfree(nm_i->nat_bitmap); + kvfree(nm_i->nat_bits); #ifdef CONFIG_F2FS_CHECK_FS - kfree(nm_i->nat_bitmap_mir); + kvfree(nm_i->nat_bitmap_mir); #endif sbi->nm_info = NULL; - kfree(nm_i); + kvfree(nm_i); } int __init f2fs_create_node_manager_caches(void) diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h index 1c73d879a9bc..e05af5df5648 100644 --- a/fs/f2fs/node.h +++ b/fs/f2fs/node.h @@ -361,7 +361,7 @@ static inline int set_nid(struct page *p, int off, nid_t nid, bool i) { struct f2fs_node *rn = F2FS_NODE(p); - f2fs_wait_on_page_writeback(p, NODE, true); + f2fs_wait_on_page_writeback(p, NODE, true, true); if (i) rn->i.i_nid[off - NODE_DIR1_BLOCK] = cpu_to_le32(nid); diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c index 1dfb17f9f9ff..e3883db868d8 100644 --- a/fs/f2fs/recovery.c +++ b/fs/f2fs/recovery.c @@ -250,7 +250,7 @@ static int recover_inode(struct inode *inode, struct page *page) i_gid_write(inode, le32_to_cpu(raw->i_gid)); if (raw->i_inline & F2FS_EXTRA_ATTR) { - if (f2fs_sb_has_project_quota(F2FS_I_SB(inode)->sb) && + if (f2fs_sb_has_project_quota(F2FS_I_SB(inode)) && F2FS_FITS_IN_INODE(raw, le16_to_cpu(raw->i_extra_isize), i_projid)) { projid_t i_projid; @@ -539,7 +539,7 @@ retry_dn: goto out; } - f2fs_wait_on_page_writeback(dn.node_page, NODE, true); + f2fs_wait_on_page_writeback(dn.node_page, NODE, true, true); err = f2fs_get_node_info(sbi, dn.nid, &ni); if (err) diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 6edcf8391dd3..9b79056d705d 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -229,7 +229,7 @@ static int __revoke_inmem_pages(struct inode *inode, lock_page(page); - f2fs_wait_on_page_writeback(page, DATA, true); + f2fs_wait_on_page_writeback(page, DATA, true, true); if (recover) { struct dnode_of_data dn; @@ -387,8 +387,9 @@ static int 
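
Many hunks in this patch move f2fs_wait_on_page_writeback() in front of set_page_dirty() and give the helper a locked argument so it can assert, for a page held locked, that no writeback remains after the wait; dirtying first left a window where a page still under IO could be picked up again. A toy model of the invariant the new ordering enforces (single-threaded, so the race itself is only described in the comments):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct page { bool dirty, writeback; };

/* Models f2fs_wait_on_page_writeback(page, type, ordered, locked):
 * drain writeback; when the caller holds the page lock, nothing can
 * start new writeback, so stability may be asserted afterwards. */
static void wait_on_writeback(struct page *p, bool locked)
{
        while (p->writeback)
                p->writeback = false;   /* pretend the IO completes */
        if (locked)
                assert(!p->writeback);  /* f2fs_bug_on() analogue */
}

static void set_page_dirty(struct page *p) { p->dirty = true; }

int main(void)
{
        struct page p = { .dirty = false, .writeback = true };

        /* New order: wait first, then dirty a stable page. Dirtying
         * before the wait allowed a flusher to grab the page while the
         * previous writeback was still in flight. */
        wait_on_writeback(&p, true);
        set_page_dirty(&p);
        printf("dirty=%d writeback=%d\n", p.dirty, p.writeback);
        return 0;
}
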
__f2fs_commit_inmem_pages(struct inode *inode) if (page->mapping == inode->i_mapping) { trace_f2fs_commit_inmem_page(page, INMEM); + f2fs_wait_on_page_writeback(page, DATA, true, true); + set_page_dirty(page); - f2fs_wait_on_page_writeback(page, DATA, true); if (clear_page_dirty_for_io(page)) { inode_dec_dirty_pages(inode); f2fs_remove_dirty_inode(inode); @@ -620,14 +621,16 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino) return 0; if (!test_opt(sbi, FLUSH_MERGE)) { + atomic_inc(&fcc->queued_flush); ret = submit_flush_wait(sbi, ino); + atomic_dec(&fcc->queued_flush); atomic_inc(&fcc->issued_flush); return ret; } - if (atomic_inc_return(&fcc->issing_flush) == 1 || sbi->s_ndevs > 1) { + if (atomic_inc_return(&fcc->queued_flush) == 1 || sbi->s_ndevs > 1) { ret = submit_flush_wait(sbi, ino); - atomic_dec(&fcc->issing_flush); + atomic_dec(&fcc->queued_flush); atomic_inc(&fcc->issued_flush); return ret; @@ -646,14 +649,14 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino) if (fcc->f2fs_issue_flush) { wait_for_completion(&cmd.wait); - atomic_dec(&fcc->issing_flush); + atomic_dec(&fcc->queued_flush); } else { struct llist_node *list; list = llist_del_all(&fcc->issue_list); if (!list) { wait_for_completion(&cmd.wait); - atomic_dec(&fcc->issing_flush); + atomic_dec(&fcc->queued_flush); } else { struct flush_cmd *tmp, *next; @@ -662,7 +665,7 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino) llist_for_each_entry_safe(tmp, next, list, llnode) { if (tmp == &cmd) { cmd.ret = ret; - atomic_dec(&fcc->issing_flush); + atomic_dec(&fcc->queued_flush); continue; } tmp->ret = ret; @@ -691,7 +694,7 @@ int f2fs_create_flush_cmd_control(struct f2fs_sb_info *sbi) if (!fcc) return -ENOMEM; atomic_set(&fcc->issued_flush, 0); - atomic_set(&fcc->issing_flush, 0); + atomic_set(&fcc->queued_flush, 0); init_waitqueue_head(&fcc->flush_wait_queue); init_llist_head(&fcc->issue_list); SM_I(sbi)->fcc_info = fcc; @@ -703,7 +706,7 @@ init_thread: "f2fs_flush-%u:%u", MAJOR(dev), MINOR(dev)); if (IS_ERR(fcc->f2fs_issue_flush)) { err = PTR_ERR(fcc->f2fs_issue_flush); - kfree(fcc); + kvfree(fcc); SM_I(sbi)->fcc_info = NULL; return err; } @@ -722,7 +725,7 @@ void f2fs_destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free) kthread_stop(flush_thread); } if (free) { - kfree(fcc); + kvfree(fcc); SM_I(sbi)->fcc_info = NULL; } } @@ -907,7 +910,7 @@ static struct discard_cmd *__create_discard_cmd(struct f2fs_sb_info *sbi, dc->len = len; dc->ref = 0; dc->state = D_PREP; - dc->issuing = 0; + dc->queued = 0; dc->error = 0; init_completion(&dc->wait); list_add_tail(&dc->list, pend_list); @@ -940,7 +943,7 @@ static void __detach_discard_cmd(struct discard_cmd_control *dcc, struct discard_cmd *dc) { if (dc->state == D_DONE) - atomic_sub(dc->issuing, &dcc->issing_discard); + atomic_sub(dc->queued, &dcc->queued_discard); list_del(&dc->list); rb_erase_cached(&dc->rb_node, &dcc->root); @@ -1143,12 +1146,12 @@ submit: dc->bio_ref++; spin_unlock_irqrestore(&dc->lock, flags); - atomic_inc(&dcc->issing_discard); - dc->issuing++; + atomic_inc(&dcc->queued_discard); + dc->queued++; list_move_tail(&dc->list, wait_list); /* sanity check on discard range */ - __check_sit_bitmap(sbi, start, start + len); + __check_sit_bitmap(sbi, lstart, lstart + len); bio->bi_private = dc; bio->bi_end_io = f2fs_submit_discard_endio; @@ -1649,6 +1652,10 @@ static int issue_discard_thread(void *data) if (dcc->discard_wake) dcc->discard_wake = 0; + /* clean up pending candidates before going to sleep */ + if 
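
f2fs_issue_flush() above keeps its merge scheme under the renamed queued_flush counter: concurrent callers park themselves on a lock-free issue_list and sleep on a completion while a single issuer performs one device flush and completes every queued waiter. A compressed single-threaded model of that batching idea; the kernel uses llist plus struct completion, and a plain array stands in here:

#include <stdio.h>

#define MAX_WAITERS 8

struct flush_cmd { int ret; int done; };

static struct flush_cmd *issue_list[MAX_WAITERS];
static int queued;

static int device_flush(void) { return 0; /* one real cache flush */ }

/* One issuer drains the whole list: a single flush covers every
 * waiter queued before it, so all of them share its result. */
static void issue_all(void)
{
        int ret = device_flush();

        for (int i = 0; i < queued; i++) {
                issue_list[i]->ret = ret;
                issue_list[i]->done = 1;        /* complete() analogue */
        }
        queued = 0;
}

int main(void)
{
        struct flush_cmd a = {0}, b = {0}, c = {0};

        /* Three fsync() callers enqueue... */
        issue_list[queued++] = &a;
        issue_list[queued++] = &b;
        issue_list[queued++] = &c;

        issue_all();    /* ...and one device flush serves all three. */
        printf("a=%d b=%d c=%d (done=%d%d%d)\n",
               a.ret, b.ret, c.ret, a.done, b.done, c.done);
        return 0;
}
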
(atomic_read(&dcc->queued_discard)) + __wait_all_discard_cmd(sbi, NULL); + if (try_to_freeze()) continue; if (f2fs_readonly(sbi->sb)) @@ -1734,7 +1741,7 @@ static int __issue_discard_async(struct f2fs_sb_info *sbi, struct block_device *bdev, block_t blkstart, block_t blklen) { #ifdef CONFIG_BLK_DEV_ZONED - if (f2fs_sb_has_blkzoned(sbi->sb) && + if (f2fs_sb_has_blkzoned(sbi) && bdev_zoned_model(bdev) != BLK_ZONED_NONE) return __f2fs_issue_discard_zone(sbi, bdev, blkstart, blklen); #endif @@ -1882,7 +1889,7 @@ void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi, unsigned int start = 0, end = -1; unsigned int secno, start_segno; bool force = (cpc->reason & CP_DISCARD); - bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1; + bool need_align = test_opt(sbi, LFS) && __is_large_section(sbi); mutex_lock(&dirty_i->seglist_lock); @@ -1914,7 +1921,7 @@ void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi, (end - 1) <= cpc->trim_end) continue; - if (!test_opt(sbi, LFS) || sbi->segs_per_sec == 1) { + if (!test_opt(sbi, LFS) || !__is_large_section(sbi)) { f2fs_issue_discard(sbi, START_BLOCK(sbi, start), (end - start) << sbi->log_blocks_per_seg); continue; @@ -1946,7 +1953,7 @@ find_next: sbi->blocks_per_seg, cur_pos); len = next_pos - cur_pos; - if (f2fs_sb_has_blkzoned(sbi->sb) || + if (f2fs_sb_has_blkzoned(sbi) || (force && len < cpc->trim_minlen)) goto skip; @@ -1994,7 +2001,7 @@ static int create_discard_cmd_control(struct f2fs_sb_info *sbi) INIT_LIST_HEAD(&dcc->fstrim_list); mutex_init(&dcc->cmd_lock); atomic_set(&dcc->issued_discard, 0); - atomic_set(&dcc->issing_discard, 0); + atomic_set(&dcc->queued_discard, 0); atomic_set(&dcc->discard_cmd_cnt, 0); dcc->nr_discards = 0; dcc->max_discards = MAIN_SEGS(sbi) << sbi->log_blocks_per_seg; @@ -2010,7 +2017,7 @@ init_thread: "f2fs_discard-%u:%u", MAJOR(dev), MINOR(dev)); if (IS_ERR(dcc->f2fs_issue_discard)) { err = PTR_ERR(dcc->f2fs_issue_discard); - kfree(dcc); + kvfree(dcc); SM_I(sbi)->dcc_info = NULL; return err; } @@ -2027,7 +2034,7 @@ static void destroy_discard_cmd_control(struct f2fs_sb_info *sbi) f2fs_stop_discard_thread(sbi); - kfree(dcc); + kvfree(dcc); SM_I(sbi)->dcc_info = NULL; } @@ -2146,7 +2153,7 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del) /* update total number of valid blocks to be written in ckpt area */ SIT_I(sbi)->written_valid_blocks += del; - if (sbi->segs_per_sec > 1) + if (__is_large_section(sbi)) get_sec_entry(sbi, segno)->valid_blocks += del; } @@ -2412,7 +2419,7 @@ static void reset_curseg(struct f2fs_sb_info *sbi, int type, int modified) static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type) { /* if segs_per_sec is large than 1, we need to keep original policy. 
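
[Note on the hunks above: the open-coded `sbi->segs_per_sec > 1` tests are folded into a single helper. __is_large_section() itself is defined outside this excerpt (in f2fs.h); judging purely from the one-for-one substitutions in these hunks, it is presumably equivalent to:

    /* sketch of the helper these hunks rely on; the actual definition
     * lives in f2fs.h and is not shown in this diff */
    #define __is_large_section(sbi)	((sbi)->segs_per_sec > 1)

so a "large section" is one spanning more than one segment, and !__is_large_section() takes over from the old `segs_per_sec == 1` checks.]
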
*/ - if (sbi->segs_per_sec != 1) + if (__is_large_section(sbi)) return CURSEG_I(sbi, type)->segno; if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) @@ -2722,7 +2729,7 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range) struct discard_policy dpolicy; unsigned long long trimmed = 0; int err = 0; - bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1; + bool need_align = test_opt(sbi, LFS) && __is_large_section(sbi); if (start >= MAX_BLKADDR(sbi) || range->len < sbi->blocksize) return -EINVAL; @@ -3272,16 +3279,18 @@ void f2fs_replace_block(struct f2fs_sb_info *sbi, struct dnode_of_data *dn, } void f2fs_wait_on_page_writeback(struct page *page, - enum page_type type, bool ordered) + enum page_type type, bool ordered, bool locked) { if (PageWriteback(page)) { struct f2fs_sb_info *sbi = F2FS_P_SB(page); f2fs_submit_merged_write_cond(sbi, NULL, page, 0, type); - if (ordered) + if (ordered) { wait_on_page_writeback(page); - else + f2fs_bug_on(sbi, locked && PageWriteback(page)); + } else { wait_for_stable_page(page); + } } } @@ -3298,7 +3307,7 @@ void f2fs_wait_on_block_writeback(struct inode *inode, block_t blkaddr) cpage = find_lock_page(META_MAPPING(sbi), blkaddr); if (cpage) { - f2fs_wait_on_page_writeback(cpage, DATA, true); + f2fs_wait_on_page_writeback(cpage, DATA, true, true); f2fs_put_page(cpage, 1); } } @@ -3880,7 +3889,7 @@ static int build_sit_info(struct f2fs_sb_info *sbi) if (!sit_i->tmp_map) return -ENOMEM; - if (sbi->segs_per_sec > 1) { + if (__is_large_section(sbi)) { sit_i->sec_entries = f2fs_kvzalloc(sbi, array_size(sizeof(struct sec_entry), MAIN_SECS(sbi)), @@ -4035,7 +4044,7 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) se->valid_blocks; } - if (sbi->segs_per_sec > 1) + if (__is_large_section(sbi)) get_sec_entry(sbi, start)->valid_blocks += se->valid_blocks; } @@ -4079,7 +4088,7 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) sbi->discard_blks -= se->valid_blocks; } - if (sbi->segs_per_sec > 1) { + if (__is_large_section(sbi)) { get_sec_entry(sbi, start)->valid_blocks += se->valid_blocks; get_sec_entry(sbi, start)->valid_blocks -= @@ -4314,7 +4323,7 @@ static void destroy_dirty_segmap(struct f2fs_sb_info *sbi) destroy_victim_secmap(sbi); SM_I(sbi)->dirty_info = NULL; - kfree(dirty_i); + kvfree(dirty_i); } static void destroy_curseg(struct f2fs_sb_info *sbi) @@ -4326,10 +4335,10 @@ static void destroy_curseg(struct f2fs_sb_info *sbi) return; SM_I(sbi)->curseg_array = NULL; for (i = 0; i < NR_CURSEG_TYPE; i++) { - kfree(array[i].sum_blk); - kfree(array[i].journal); + kvfree(array[i].sum_blk); + kvfree(array[i].journal); } - kfree(array); + kvfree(array); } static void destroy_free_segmap(struct f2fs_sb_info *sbi) @@ -4340,7 +4349,7 @@ static void destroy_free_segmap(struct f2fs_sb_info *sbi) SM_I(sbi)->free_info = NULL; kvfree(free_i->free_segmap); kvfree(free_i->free_secmap); - kfree(free_i); + kvfree(free_i); } static void destroy_sit_info(struct f2fs_sb_info *sbi) @@ -4353,26 +4362,26 @@ static void destroy_sit_info(struct f2fs_sb_info *sbi) if (sit_i->sentries) { for (start = 0; start < MAIN_SEGS(sbi); start++) { - kfree(sit_i->sentries[start].cur_valid_map); + kvfree(sit_i->sentries[start].cur_valid_map); #ifdef CONFIG_F2FS_CHECK_FS - kfree(sit_i->sentries[start].cur_valid_map_mir); + kvfree(sit_i->sentries[start].cur_valid_map_mir); #endif - kfree(sit_i->sentries[start].ckpt_valid_map); - kfree(sit_i->sentries[start].discard_map); + kvfree(sit_i->sentries[start].ckpt_valid_map); + 
kvfree(sit_i->sentries[start].discard_map); } } - kfree(sit_i->tmp_map); + kvfree(sit_i->tmp_map); kvfree(sit_i->sentries); kvfree(sit_i->sec_entries); kvfree(sit_i->dirty_sentries_bitmap); SM_I(sbi)->sit_info = NULL; - kfree(sit_i->sit_bitmap); + kvfree(sit_i->sit_bitmap); #ifdef CONFIG_F2FS_CHECK_FS - kfree(sit_i->sit_bitmap_mir); + kvfree(sit_i->sit_bitmap_mir); #endif - kfree(sit_i); + kvfree(sit_i); } void f2fs_destroy_segment_manager(struct f2fs_sb_info *sbi) @@ -4388,7 +4397,7 @@ void f2fs_destroy_segment_manager(struct f2fs_sb_info *sbi) destroy_free_segmap(sbi); destroy_sit_info(sbi); sbi->sm_info = NULL; - kfree(sm_info); + kvfree(sm_info); } int __init f2fs_create_segment_manager_caches(void) diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h index ab3465faddf1..a77f76f528b6 100644 --- a/fs/f2fs/segment.h +++ b/fs/f2fs/segment.h @@ -333,7 +333,7 @@ static inline unsigned int get_valid_blocks(struct f2fs_sb_info *sbi, * In order to get # of valid blocks in a section instantly from many * segments, f2fs manages two counting structures separately. */ - if (use_section && sbi->segs_per_sec > 1) + if (use_section && __is_large_section(sbi)) return get_sec_entry(sbi, segno)->valid_blocks; else return get_seg_entry(sbi, segno)->valid_blocks; diff --git a/fs/f2fs/shrinker.c b/fs/f2fs/shrinker.c index 9e13db994fdf..a467aca29cfe 100644 --- a/fs/f2fs/shrinker.c +++ b/fs/f2fs/shrinker.c @@ -135,6 +135,6 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi) f2fs_shrink_extent_tree(sbi, __count_extent_cache(sbi)); spin_lock(&f2fs_list_lock); - list_del(&sbi->s_list); + list_del_init(&sbi->s_list); spin_unlock(&f2fs_list_lock); } diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index af58b2cc21b8..c46a1d4318d4 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -38,7 +38,7 @@ static struct kmem_cache *f2fs_inode_cachep; #ifdef CONFIG_F2FS_FAULT_INJECTION -char *f2fs_fault_name[FAULT_MAX] = { +const char *f2fs_fault_name[FAULT_MAX] = { [FAULT_KMALLOC] = "kmalloc", [FAULT_KVMALLOC] = "kvmalloc", [FAULT_PAGE_ALLOC] = "page alloc", @@ -259,7 +259,7 @@ static int f2fs_set_qf_name(struct super_block *sb, int qtype, "quota options when quota turned on"); return -EINVAL; } - if (f2fs_sb_has_quota_ino(sb)) { + if (f2fs_sb_has_quota_ino(sbi)) { f2fs_msg(sb, KERN_INFO, "QUOTA feature is enabled, so ignore qf_name"); return 0; @@ -289,7 +289,7 @@ static int f2fs_set_qf_name(struct super_block *sb, int qtype, set_opt(sbi, QUOTA); return 0; errout: - kfree(qname); + kvfree(qname); return ret; } @@ -302,7 +302,7 @@ static int f2fs_clear_qf_name(struct super_block *sb, int qtype) " when quota turned on"); return -EINVAL; } - kfree(F2FS_OPTION(sbi).s_qf_names[qtype]); + kvfree(F2FS_OPTION(sbi).s_qf_names[qtype]); F2FS_OPTION(sbi).s_qf_names[qtype] = NULL; return 0; } @@ -314,7 +314,7 @@ static int f2fs_check_quota_options(struct f2fs_sb_info *sbi) * 'grpquota' mount options are allowed even without quota feature * to support legacy quotas in quota files. */ - if (test_opt(sbi, PRJQUOTA) && !f2fs_sb_has_project_quota(sbi->sb)) { + if (test_opt(sbi, PRJQUOTA) && !f2fs_sb_has_project_quota(sbi)) { f2fs_msg(sbi->sb, KERN_ERR, "Project quota feature not enabled. 
" "Cannot enable project quota enforcement."); return -1; @@ -348,7 +348,7 @@ static int f2fs_check_quota_options(struct f2fs_sb_info *sbi) } } - if (f2fs_sb_has_quota_ino(sbi->sb) && F2FS_OPTION(sbi).s_jquota_fmt) { + if (f2fs_sb_has_quota_ino(sbi) && F2FS_OPTION(sbi).s_jquota_fmt) { f2fs_msg(sbi->sb, KERN_INFO, "QUOTA feature is enabled, so ignore jquota_fmt"); F2FS_OPTION(sbi).s_jquota_fmt = 0; @@ -399,10 +399,10 @@ static int parse_options(struct super_block *sb, char *options) set_opt(sbi, BG_GC); set_opt(sbi, FORCE_FG_GC); } else { - kfree(name); + kvfree(name); return -EINVAL; } - kfree(name); + kvfree(name); break; case Opt_disable_roll_forward: set_opt(sbi, DISABLE_ROLL_FORWARD); @@ -417,7 +417,7 @@ static int parse_options(struct super_block *sb, char *options) set_opt(sbi, DISCARD); break; case Opt_nodiscard: - if (f2fs_sb_has_blkzoned(sb)) { + if (f2fs_sb_has_blkzoned(sbi)) { f2fs_msg(sb, KERN_WARNING, "discard is required for zoned block devices"); return -EINVAL; @@ -566,11 +566,11 @@ static int parse_options(struct super_block *sb, char *options) return -ENOMEM; if (strlen(name) == 8 && !strncmp(name, "adaptive", 8)) { - if (f2fs_sb_has_blkzoned(sb)) { + if (f2fs_sb_has_blkzoned(sbi)) { f2fs_msg(sb, KERN_WARNING, "adaptive mode is not allowed with " "zoned block device feature"); - kfree(name); + kvfree(name); return -EINVAL; } set_opt_mode(sbi, F2FS_MOUNT_ADAPTIVE); @@ -578,10 +578,10 @@ static int parse_options(struct super_block *sb, char *options) !strncmp(name, "lfs", 3)) { set_opt_mode(sbi, F2FS_MOUNT_LFS); } else { - kfree(name); + kvfree(name); return -EINVAL; } - kfree(name); + kvfree(name); break; case Opt_io_size_bits: if (args->from && match_int(args, &arg)) @@ -714,10 +714,10 @@ static int parse_options(struct super_block *sb, char *options) !strncmp(name, "fs-based", 8)) { F2FS_OPTION(sbi).whint_mode = WHINT_MODE_FS; } else { - kfree(name); + kvfree(name); return -EINVAL; } - kfree(name); + kvfree(name); break; case Opt_alloc: name = match_strdup(&args[0]); @@ -731,10 +731,10 @@ static int parse_options(struct super_block *sb, char *options) !strncmp(name, "reuse", 5)) { F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_REUSE; } else { - kfree(name); + kvfree(name); return -EINVAL; } - kfree(name); + kvfree(name); break; case Opt_fsync: name = match_strdup(&args[0]); @@ -751,14 +751,14 @@ static int parse_options(struct super_block *sb, char *options) F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_NOBARRIER; } else { - kfree(name); + kvfree(name); return -EINVAL; } - kfree(name); + kvfree(name); break; case Opt_test_dummy_encryption: #ifdef CONFIG_F2FS_FS_ENCRYPTION - if (!f2fs_sb_has_encrypt(sb)) { + if (!f2fs_sb_has_encrypt(sbi)) { f2fs_msg(sb, KERN_ERR, "Encrypt feature is off"); return -EINVAL; } @@ -783,10 +783,10 @@ static int parse_options(struct super_block *sb, char *options) !strncmp(name, "disable", 7)) { set_opt(sbi, DISABLE_CHECKPOINT); } else { - kfree(name); + kvfree(name); return -EINVAL; } - kfree(name); + kvfree(name); break; default: f2fs_msg(sb, KERN_ERR, @@ -799,13 +799,13 @@ static int parse_options(struct super_block *sb, char *options) if (f2fs_check_quota_options(sbi)) return -EINVAL; #else - if (f2fs_sb_has_quota_ino(sbi->sb) && !f2fs_readonly(sbi->sb)) { + if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sbi->sb)) { f2fs_msg(sbi->sb, KERN_INFO, "Filesystem with quota feature cannot be mounted RDWR " "without CONFIG_QUOTA"); return -EINVAL; } - if (f2fs_sb_has_project_quota(sbi->sb) && !f2fs_readonly(sbi->sb)) { + if (f2fs_sb_has_project_quota(sbi) 
&& !f2fs_readonly(sbi->sb)) { f2fs_msg(sb, KERN_ERR, "Filesystem with project quota feature cannot be " "mounted RDWR without CONFIG_QUOTA"); @@ -821,8 +821,8 @@ static int parse_options(struct super_block *sb, char *options) } if (test_opt(sbi, INLINE_XATTR_SIZE)) { - if (!f2fs_sb_has_extra_attr(sb) || - !f2fs_sb_has_flexible_inline_xattr(sb)) { + if (!f2fs_sb_has_extra_attr(sbi) || + !f2fs_sb_has_flexible_inline_xattr(sbi)) { f2fs_msg(sb, KERN_ERR, "extra_attr or flexible_inline_xattr " "feature is off"); @@ -1017,10 +1017,10 @@ static void destroy_device_list(struct f2fs_sb_info *sbi) for (i = 0; i < sbi->s_ndevs; i++) { blkdev_put(FDEV(i).bdev, FMODE_EXCL); #ifdef CONFIG_BLK_DEV_ZONED - kfree(FDEV(i).blkz_type); + kvfree(FDEV(i).blkz_type); #endif } - kfree(sbi->devs); + kvfree(sbi->devs); } static void f2fs_put_super(struct super_block *sb) @@ -1058,9 +1058,6 @@ static void f2fs_put_super(struct super_block *sb) f2fs_write_checkpoint(sbi, &cpc); } - /* f2fs_write_checkpoint can update stat informaion */ - f2fs_destroy_stats(sbi); - /* * normally superblock is clean, so we need to release this. * In addition, EIO will skip do checkpoint, we need this as well. @@ -1080,29 +1077,35 @@ static void f2fs_put_super(struct super_block *sb) iput(sbi->node_inode); iput(sbi->meta_inode); + /* + * iput() can update stat information, if f2fs_write_checkpoint() + * above failed with error. + */ + f2fs_destroy_stats(sbi); + /* destroy f2fs internal modules */ f2fs_destroy_node_manager(sbi); f2fs_destroy_segment_manager(sbi); - kfree(sbi->ckpt); + kvfree(sbi->ckpt); f2fs_unregister_sysfs(sbi); sb->s_fs_info = NULL; if (sbi->s_chksum_driver) crypto_free_shash(sbi->s_chksum_driver); - kfree(sbi->raw_super); + kvfree(sbi->raw_super); destroy_device_list(sbi); mempool_destroy(sbi->write_io_dummy); #ifdef CONFIG_QUOTA for (i = 0; i < MAXQUOTAS; i++) - kfree(F2FS_OPTION(sbi).s_qf_names[i]); + kvfree(F2FS_OPTION(sbi).s_qf_names[i]); #endif destroy_percpu_info(sbi); for (i = 0; i < NR_PAGE_TYPE; i++) - kfree(sbi->write_io[i]); - kfree(sbi); + kvfree(sbi->write_io[i]); + kvfree(sbi); } int f2fs_sync_fs(struct super_block *sb, int sync) @@ -1431,7 +1434,7 @@ static void default_options(struct f2fs_sb_info *sbi) sbi->sb->s_flags |= SB_LAZYTIME; set_opt(sbi, FLUSH_MERGE); set_opt(sbi, DISCARD); - if (f2fs_sb_has_blkzoned(sbi->sb)) + if (f2fs_sb_has_blkzoned(sbi)) set_opt_mode(sbi, F2FS_MOUNT_LFS); else set_opt_mode(sbi, F2FS_MOUNT_ADAPTIVE); @@ -1457,19 +1460,16 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi) sbi->sb->s_flags |= SB_ACTIVE; - mutex_lock(&sbi->gc_mutex); f2fs_update_time(sbi, DISABLE_TIME); while (!f2fs_time_over(sbi, DISABLE_TIME)) { + mutex_lock(&sbi->gc_mutex); err = f2fs_gc(sbi, true, false, NULL_SEGNO); if (err == -ENODATA) break; - if (err && err != -EAGAIN) { - mutex_unlock(&sbi->gc_mutex); + if (err && err != -EAGAIN) return err; - } } - mutex_unlock(&sbi->gc_mutex); err = sync_filesystem(sbi->sb); if (err) @@ -1531,7 +1531,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) GFP_KERNEL); if (!org_mount_opt.s_qf_names[i]) { for (j = 0; j < i; j++) - kfree(org_mount_opt.s_qf_names[j]); + kvfree(org_mount_opt.s_qf_names[j]); return -ENOMEM; } } else { @@ -1575,7 +1575,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) sb->s_flags &= ~SB_RDONLY; if (sb_any_quota_suspended(sb)) { dquot_resume(sb, -1); - } else if (f2fs_sb_has_quota_ino(sb)) { + } else if (f2fs_sb_has_quota_ino(sbi)) { err = f2fs_enable_quotas(sb); if (err) goto 
restore_opts; @@ -1651,7 +1651,7 @@ skip: #ifdef CONFIG_QUOTA /* Release old quota file names */ for (i = 0; i < MAXQUOTAS; i++) - kfree(org_mount_opt.s_qf_names[i]); + kvfree(org_mount_opt.s_qf_names[i]); #endif /* Update the POSIXACL Flag */ sb->s_flags = (sb->s_flags & ~SB_POSIXACL) | @@ -1672,7 +1672,7 @@ restore_opts: #ifdef CONFIG_QUOTA F2FS_OPTION(sbi).s_jquota_fmt = org_mount_opt.s_jquota_fmt; for (i = 0; i < MAXQUOTAS; i++) { - kfree(F2FS_OPTION(sbi).s_qf_names[i]); + kvfree(F2FS_OPTION(sbi).s_qf_names[i]); F2FS_OPTION(sbi).s_qf_names[i] = org_mount_opt.s_qf_names[i]; } #endif @@ -1817,7 +1817,7 @@ int f2fs_enable_quota_files(struct f2fs_sb_info *sbi, bool rdonly) int enabled = 0; int i, err; - if (f2fs_sb_has_quota_ino(sbi->sb) && rdonly) { + if (f2fs_sb_has_quota_ino(sbi) && rdonly) { err = f2fs_enable_quotas(sbi->sb); if (err) { f2fs_msg(sbi->sb, KERN_ERR, @@ -1848,7 +1848,7 @@ static int f2fs_quota_enable(struct super_block *sb, int type, int format_id, unsigned long qf_inum; int err; - BUG_ON(!f2fs_sb_has_quota_ino(sb)); + BUG_ON(!f2fs_sb_has_quota_ino(F2FS_SB(sb))); qf_inum = f2fs_qf_ino(sb, type); if (!qf_inum) @@ -1993,7 +1993,7 @@ static int f2fs_quota_off(struct super_block *sb, int type) goto out_put; err = dquot_quota_off(sb, type); - if (err || f2fs_sb_has_quota_ino(sb)) + if (err || f2fs_sb_has_quota_ino(F2FS_SB(sb))) goto out_put; inode_lock(inode); @@ -2173,7 +2173,7 @@ static int f2fs_set_context(struct inode *inode, const void *ctx, size_t len, * if LOST_FOUND feature is enabled. * */ - if (f2fs_sb_has_lost_found(sbi->sb) && + if (f2fs_sb_has_lost_found(sbi) && inode->i_ino == F2FS_ROOT_INO(sbi)) return -EPERM; @@ -2396,7 +2396,7 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi, __u32 crc = 0; /* Check checksum_offset and crc in superblock */ - if (le32_to_cpu(raw_super->feature) & F2FS_FEATURE_SB_CHKSUM) { + if (__F2FS_HAS_FEATURE(raw_super, F2FS_FEATURE_SB_CHKSUM)) { crc_offset = le32_to_cpu(raw_super->checksum_offset); if (crc_offset != offsetof(struct f2fs_super_block, crc)) { @@ -2496,10 +2496,10 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi, return 1; } - if (segment_count > (le32_to_cpu(raw_super->block_count) >> 9)) { + if (segment_count > (le64_to_cpu(raw_super->block_count) >> 9)) { f2fs_msg(sb, KERN_INFO, - "Wrong segment_count / block_count (%u > %u)", - segment_count, le32_to_cpu(raw_super->block_count)); + "Wrong segment_count / block_count (%u > %llu)", + segment_count, le64_to_cpu(raw_super->block_count)); return 1; } @@ -2674,7 +2674,7 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi) static void init_sb_info(struct f2fs_sb_info *sbi) { struct f2fs_super_block *raw_super = sbi->raw_super; - int i, j; + int i; sbi->log_sectors_per_block = le32_to_cpu(raw_super->log_sectors_per_block); @@ -2692,7 +2692,10 @@ static void init_sb_info(struct f2fs_sb_info *sbi) sbi->node_ino_num = le32_to_cpu(raw_super->node_ino); sbi->meta_ino_num = le32_to_cpu(raw_super->meta_ino); sbi->cur_victim_sec = NULL_SECNO; + sbi->next_victim_seg[BG_GC] = NULL_SEGNO; + sbi->next_victim_seg[FG_GC] = NULL_SEGNO; sbi->max_victim_search = DEF_MAX_VICTIM_SEARCH; + sbi->migration_granularity = sbi->segs_per_sec; sbi->dir_level = DEF_DIR_LEVEL; sbi->interval_time[CP_TIME] = DEF_CP_INTERVAL; @@ -2710,9 +2713,6 @@ static void init_sb_info(struct f2fs_sb_info *sbi) INIT_LIST_HEAD(&sbi->s_list); mutex_init(&sbi->umount_mutex); - for (i = 0; i < NR_PAGE_TYPE - 1; i++) - for (j = HOT; j < NR_TEMP_TYPE; j++) - mutex_init(&sbi->wio_mutex[i][j]); 
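
[Note: the sanity_check_raw_super() hunk above is more than a format-string cleanup. block_count is a __le64 field, and reading it with le32_to_cpu() silently dropped the upper 32 bits. A worked example with a hypothetical 16 TiB image (4 KiB blocks, so block_count = 1ULL << 32):

    u64 full  = le64_to_cpu(raw_super->block_count) >> 9;      /* 1 << 23 */
    u32 trunc = (u32)le64_to_cpu(raw_super->block_count) >> 9; /* 0       */
    /* the old code compared segment_count against 'trunc' (0), so any
     * nonzero segment_count tripped "Wrong segment_count / block_count" */

With the le64 read, segment_count (1 << 23 here) is compared against the real block-derived limit and a valid large image passes the check.]
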
init_rwsem(&sbi->io_order_lock); spin_lock_init(&sbi->cp_lock); @@ -2749,7 +2749,7 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi) unsigned int n = 0; int err = -EIO; - if (!f2fs_sb_has_blkzoned(sbi->sb)) + if (!f2fs_sb_has_blkzoned(sbi)) return 0; if (sbi->blocks_per_blkz && sbi->blocks_per_blkz != @@ -2800,7 +2800,7 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi) } } - kfree(zones); + kvfree(zones); return err; } @@ -2860,7 +2860,7 @@ static int read_raw_super_block(struct f2fs_sb_info *sbi, /* No valid superblock */ if (!*raw_super) - kfree(super); + kvfree(super); else err = 0; @@ -2880,7 +2880,7 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover) } /* we should update superblock crc here */ - if (!recover && f2fs_sb_has_sb_chksum(sbi->sb)) { + if (!recover && f2fs_sb_has_sb_chksum(sbi)) { crc = f2fs_crc32(sbi, F2FS_RAW_SUPER(sbi), offsetof(struct f2fs_super_block, crc)); F2FS_RAW_SUPER(sbi)->crc = cpu_to_le32(crc); @@ -2968,7 +2968,7 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi) #ifdef CONFIG_BLK_DEV_ZONED if (bdev_zoned_model(FDEV(i).bdev) == BLK_ZONED_HM && - !f2fs_sb_has_blkzoned(sbi->sb)) { + !f2fs_sb_has_blkzoned(sbi)) { f2fs_msg(sbi->sb, KERN_ERR, "Zoned block device feature not enabled\n"); return -EINVAL; @@ -3064,7 +3064,7 @@ try_onemore: sbi->raw_super = raw_super; /* precompute checksum seed for metadata */ - if (f2fs_sb_has_inode_chksum(sb)) + if (f2fs_sb_has_inode_chksum(sbi)) sbi->s_chksum_seed = f2fs_chksum(sbi, ~0, raw_super->uuid, sizeof(raw_super->uuid)); @@ -3074,7 +3074,7 @@ try_onemore: * devices, but mandatory for host-managed zoned block devices. */ #ifndef CONFIG_BLK_DEV_ZONED - if (f2fs_sb_has_blkzoned(sb)) { + if (f2fs_sb_has_blkzoned(sbi)) { f2fs_msg(sb, KERN_ERR, "Zoned block device support is not enabled\n"); err = -EOPNOTSUPP; @@ -3101,13 +3101,13 @@ try_onemore: #ifdef CONFIG_QUOTA sb->dq_op = &f2fs_quota_operations; - if (f2fs_sb_has_quota_ino(sb)) + if (f2fs_sb_has_quota_ino(sbi)) sb->s_qcop = &dquot_quotactl_sysfile_ops; else sb->s_qcop = &f2fs_quotactl_ops; sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP | QTYPE_MASK_PRJ; - if (f2fs_sb_has_quota_ino(sbi->sb)) { + if (f2fs_sb_has_quota_ino(sbi)) { for (i = 0; i < MAXQUOTAS; i++) { if (f2fs_qf_ino(sbi->sb, i)) sbi->nquota_files++; @@ -3259,30 +3259,30 @@ try_onemore: f2fs_build_gc_manager(sbi); + err = f2fs_build_stats(sbi); + if (err) + goto free_nm; + /* get an inode for node space */ sbi->node_inode = f2fs_iget(sb, F2FS_NODE_INO(sbi)); if (IS_ERR(sbi->node_inode)) { f2fs_msg(sb, KERN_ERR, "Failed to read node inode"); err = PTR_ERR(sbi->node_inode); - goto free_nm; + goto free_stats; } - err = f2fs_build_stats(sbi); - if (err) - goto free_node_inode; - /* read root inode and dentry */ root = f2fs_iget(sb, F2FS_ROOT_INO(sbi)); if (IS_ERR(root)) { f2fs_msg(sb, KERN_ERR, "Failed to read root inode"); err = PTR_ERR(root); - goto free_stats; + goto free_node_inode; } if (!S_ISDIR(root->i_mode) || !root->i_blocks || !root->i_size || !root->i_nlink) { iput(root); err = -EINVAL; - goto free_stats; + goto free_node_inode; } sb->s_root = d_make_root(root); /* allocate root dentry */ @@ -3297,7 +3297,7 @@ try_onemore: #ifdef CONFIG_QUOTA /* Enable quota usage during mount */ - if (f2fs_sb_has_quota_ino(sb) && !f2fs_readonly(sb)) { + if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sb)) { err = f2fs_enable_quotas(sb); if (err) f2fs_msg(sb, KERN_ERR, @@ -3369,7 +3369,7 @@ skip_recovery: if (err) goto free_meta; } - kfree(options); + kvfree(options); 
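
[Note: the kfree() to kvfree() sweep through these teardown and error paths pairs each free with allocators that may hand back vmalloc-backed memory: f2fs obtains many of these buffers via f2fs_kvzalloc()/kvmalloc(), which fall back to vmalloc() when a large contiguous allocation fails. A minimal sketch of the general pattern, not taken from this patch:

    void *buf = kvmalloc(size, GFP_KERNEL);  /* kmalloc, vmalloc fallback */
    if (!buf)
            return -ENOMEM;
    /* ... use buf ... */
    kvfree(buf);  /* dispatches to kfree() or vfree() based on the address */

Since kvfree() is also correct for plain kmalloc() memory, converting every call site is safe even where the buffer is known to be small.]
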
/* recover broken superblock */ if (recovery) { @@ -3392,7 +3392,7 @@ skip_recovery: free_meta: #ifdef CONFIG_QUOTA f2fs_truncate_quota_inode_pages(sb); - if (f2fs_sb_has_quota_ino(sb) && !f2fs_readonly(sb)) + if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sb)) f2fs_quota_off_umount(sbi->sb); #endif /* @@ -3406,19 +3406,19 @@ free_meta: free_root_inode: dput(sb->s_root); sb->s_root = NULL; -free_stats: - f2fs_destroy_stats(sbi); free_node_inode: f2fs_release_ino_entry(sbi, true); truncate_inode_pages_final(NODE_MAPPING(sbi)); iput(sbi->node_inode); +free_stats: + f2fs_destroy_stats(sbi); free_nm: f2fs_destroy_node_manager(sbi); free_sm: f2fs_destroy_segment_manager(sbi); free_devices: destroy_device_list(sbi); - kfree(sbi->ckpt); + kvfree(sbi->ckpt); free_meta_inode: make_bad_inode(sbi->meta_inode); iput(sbi->meta_inode); @@ -3428,19 +3428,19 @@ free_percpu: destroy_percpu_info(sbi); free_bio_info: for (i = 0; i < NR_PAGE_TYPE; i++) - kfree(sbi->write_io[i]); + kvfree(sbi->write_io[i]); free_options: #ifdef CONFIG_QUOTA for (i = 0; i < MAXQUOTAS; i++) - kfree(F2FS_OPTION(sbi).s_qf_names[i]); + kvfree(F2FS_OPTION(sbi).s_qf_names[i]); #endif - kfree(options); + kvfree(options); free_sb_buf: - kfree(raw_super); + kvfree(raw_super); free_sbi: if (sbi->s_chksum_driver) crypto_free_shash(sbi->s_chksum_driver); - kfree(sbi); + kvfree(sbi); /* give only one another chance */ if (retry) { diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c index b777cbdd796b..0575edbe3ed6 100644 --- a/fs/f2fs/sysfs.c +++ b/fs/f2fs/sysfs.c @@ -90,34 +90,34 @@ static ssize_t features_show(struct f2fs_attr *a, if (!sb->s_bdev->bd_part) return snprintf(buf, PAGE_SIZE, "0\n"); - if (f2fs_sb_has_encrypt(sb)) + if (f2fs_sb_has_encrypt(sbi)) len += snprintf(buf, PAGE_SIZE - len, "%s", "encryption"); - if (f2fs_sb_has_blkzoned(sb)) + if (f2fs_sb_has_blkzoned(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "blkzoned"); - if (f2fs_sb_has_extra_attr(sb)) + if (f2fs_sb_has_extra_attr(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "extra_attr"); - if (f2fs_sb_has_project_quota(sb)) + if (f2fs_sb_has_project_quota(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "projquota"); - if (f2fs_sb_has_inode_chksum(sb)) + if (f2fs_sb_has_inode_chksum(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "inode_checksum"); - if (f2fs_sb_has_flexible_inline_xattr(sb)) + if (f2fs_sb_has_flexible_inline_xattr(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "flexible_inline_xattr"); - if (f2fs_sb_has_quota_ino(sb)) + if (f2fs_sb_has_quota_ino(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "quota_ino"); - if (f2fs_sb_has_inode_crtime(sb)) + if (f2fs_sb_has_inode_crtime(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "inode_crtime"); - if (f2fs_sb_has_lost_found(sb)) + if (f2fs_sb_has_lost_found(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? ", " : "", "lost_found"); - if (f2fs_sb_has_sb_chksum(sb)) + if (f2fs_sb_has_sb_chksum(sbi)) len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len ? 
", " : "", "sb_checksum"); len += snprintf(buf + len, PAGE_SIZE - len, "\n"); @@ -246,6 +246,11 @@ out: return count; } + if (!strcmp(a->attr.name, "migration_granularity")) { + if (t == 0 || t > sbi->segs_per_sec) + return -EINVAL; + } + if (!strcmp(a->attr.name, "trim_sections")) return -EINVAL; @@ -406,6 +411,7 @@ F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh); F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ra_nid_pages, ra_nid_pages); F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, dirty_nats_ratio, dirty_nats_ratio); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search); +F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, migration_granularity, migration_granularity); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, cp_interval, interval_time[CP_TIME]); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, idle_interval, interval_time[REQ_TIME]); @@ -460,6 +466,7 @@ static struct attribute *f2fs_attrs[] = { ATTR_LIST(min_hot_blocks), ATTR_LIST(min_ssr_sections), ATTR_LIST(max_victim_search), + ATTR_LIST(migration_granularity), ATTR_LIST(dir_level), ATTR_LIST(ram_thresh), ATTR_LIST(ra_nid_pages), diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c index 7261245c208d..18d5ffbc5e8c 100644 --- a/fs/f2fs/xattr.c +++ b/fs/f2fs/xattr.c @@ -288,7 +288,7 @@ static int read_xattr_block(struct inode *inode, void *txattr_addr) static int lookup_all_xattrs(struct inode *inode, struct page *ipage, unsigned int index, unsigned int len, const char *name, struct f2fs_xattr_entry **xe, - void **base_addr) + void **base_addr, int *base_size) { void *cur_addr, *txattr_addr, *last_addr = NULL; nid_t xnid = F2FS_I(inode)->i_xattr_nid; @@ -299,8 +299,8 @@ static int lookup_all_xattrs(struct inode *inode, struct page *ipage, if (!size && !inline_size) return -ENODATA; - txattr_addr = f2fs_kzalloc(F2FS_I_SB(inode), - inline_size + size + XATTR_PADDING_SIZE, GFP_NOFS); + *base_size = inline_size + size + XATTR_PADDING_SIZE; + txattr_addr = f2fs_kzalloc(F2FS_I_SB(inode), *base_size, GFP_NOFS); if (!txattr_addr) return -ENOMEM; @@ -312,8 +312,10 @@ static int lookup_all_xattrs(struct inode *inode, struct page *ipage, *xe = __find_inline_xattr(inode, txattr_addr, &last_addr, index, len, name); - if (*xe) + if (*xe) { + *base_size = inline_size; goto check; + } } /* read from xattr node block */ @@ -415,7 +417,7 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize, } f2fs_wait_on_page_writeback(ipage ? 
ipage : in_page, - NODE, true); + NODE, true, true); /* no need to use xattr node block */ if (hsize <= inline_size) { err = f2fs_truncate_xattr_node(inode); @@ -439,7 +441,7 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize, goto in_page_out; } f2fs_bug_on(sbi, new_nid); - f2fs_wait_on_page_writeback(xpage, NODE, true); + f2fs_wait_on_page_writeback(xpage, NODE, true, true); } else { struct dnode_of_data dn; set_new_dnode(&dn, inode, NULL, NULL, new_nid); @@ -474,6 +476,7 @@ int f2fs_getxattr(struct inode *inode, int index, const char *name, int error = 0; unsigned int size, len; void *base_addr = NULL; + int base_size; if (name == NULL) return -EINVAL; @@ -484,7 +487,7 @@ int f2fs_getxattr(struct inode *inode, int index, const char *name, down_read(&F2FS_I(inode)->i_xattr_sem); error = lookup_all_xattrs(inode, ipage, index, len, name, - &entry, &base_addr); + &entry, &base_addr, &base_size); up_read(&F2FS_I(inode)->i_xattr_sem); if (error) return error; @@ -498,6 +501,11 @@ int f2fs_getxattr(struct inode *inode, int index, const char *name, if (buffer) { char *pval = entry->e_name + entry->e_name_len; + + if (base_size - (pval - (char *)base_addr) < size) { + error = -ERANGE; + goto out; + } memcpy(buffer, pval, size); } error = size; diff --git a/fs/file.c b/fs/file.c index 50304c7525ea..3209ee271c41 100644 --- a/fs/file.c +++ b/fs/file.c @@ -640,6 +640,35 @@ out_unlock: } EXPORT_SYMBOL(__close_fd); /* for ksys_close() */ +/* + * variant of __close_fd that gets a ref on the file for later fput + */ +int __close_fd_get_file(unsigned int fd, struct file **res) +{ + struct files_struct *files = current->files; + struct file *file; + struct fdtable *fdt; + + spin_lock(&files->file_lock); + fdt = files_fdtable(files); + if (fd >= fdt->max_fds) + goto out_unlock; + file = fdt->fd[fd]; + if (!file) + goto out_unlock; + rcu_assign_pointer(fdt->fd[fd], NULL); + __put_unused_fd(files, fd); + spin_unlock(&files->file_lock); + get_file(file); + *res = file; + return filp_close(file, files); + +out_unlock: + spin_unlock(&files->file_lock); + *res = NULL; + return -ENOENT; +} + void do_close_on_exec(struct files_struct *files) { unsigned i; diff --git a/fs/file_table.c b/fs/file_table.c index e49af4caf15d..5679e7fcb6b0 100644 --- a/fs/file_table.c +++ b/fs/file_table.c @@ -380,10 +380,11 @@ void __init files_init(void) void __init files_maxfiles_init(void) { unsigned long n; - unsigned long memreserve = (totalram_pages - nr_free_pages()) * 3/2; + unsigned long nr_pages = totalram_pages(); + unsigned long memreserve = (nr_pages - nr_free_pages()) * 3/2; - memreserve = min(memreserve, totalram_pages - 1); - n = ((totalram_pages - memreserve) * (PAGE_SIZE / 1024)) / 10; + memreserve = min(memreserve, nr_pages - 1); + n = ((nr_pages - memreserve) * (PAGE_SIZE / 1024)) / 10; files_stat.max_files = max_t(unsigned long, n, NR_FILE); } diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c index 568abed20eb2..76baaa6be393 100644 --- a/fs/fuse/inode.c +++ b/fs/fuse/inode.c @@ -824,7 +824,7 @@ static const struct super_operations fuse_super_operations = { static void sanitize_global_limit(unsigned *limit) { if (*limit == 0) - *limit = ((totalram_pages << PAGE_SHIFT) >> 13) / + *limit = ((totalram_pages() << PAGE_SHIFT) >> 13) / sizeof(struct fuse_req); if (*limit >= 1 << 16) diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c index 8afbb35559b9..05dd78f4b2b3 100644 --- a/fs/gfs2/aops.c +++ b/fs/gfs2/aops.c @@ -820,10 +820,10 @@ out: * @page: the page that's being released * @gfp_mask: passed from Linux 
VFS, ignored by us * - * Call try_to_free_buffers() if the buffers in this page can be - * released. + * Calls try_to_free_buffers() to free the buffers and put the page if the + * buffers can be released. * - * Returns: 0 + * Returns: 1 if the page was put or else 0 */ int gfs2_releasepage(struct page *page, gfp_t gfp_mask) @@ -930,14 +930,14 @@ static const struct address_space_operations gfs2_jdata_aops = { void gfs2_set_aops(struct inode *inode) { struct gfs2_inode *ip = GFS2_I(inode); + struct gfs2_sbd *sdp = GFS2_SB(inode); - if (gfs2_is_writeback(ip)) + if (gfs2_is_jdata(ip)) + inode->i_mapping->a_ops = &gfs2_jdata_aops; + else if (gfs2_is_writeback(sdp)) inode->i_mapping->a_ops = &gfs2_writeback_aops; - else if (gfs2_is_ordered(ip)) + else if (gfs2_is_ordered(sdp)) inode->i_mapping->a_ops = &gfs2_ordered_aops; - else if (gfs2_is_jdata(ip)) - inode->i_mapping->a_ops = &gfs2_jdata_aops; else BUG(); } - diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c index 9a4a15d646eb..02b2646d84b3 100644 --- a/fs/gfs2/bmap.c +++ b/fs/gfs2/bmap.c @@ -14,6 +14,7 @@ #include #include #include +#include #include "gfs2.h" #include "incore.h" @@ -2083,6 +2084,8 @@ static int do_grow(struct inode *inode, u64 size) } error = gfs2_trans_begin(sdp, RES_DINODE + RES_STATFS + RES_RG_BIT + + (unstuff && + gfs2_is_jdata(ip) ? RES_JDATA : 0) + (sdp->sd_args.ar_quota == GFS2_QUOTA_OFF ? 0 : RES_QUOTA), 0); if (error) @@ -2248,7 +2251,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd) unsigned int shift = sdp->sd_sb.sb_bsize_shift; u64 size; int rc; + ktime_t start, end; + start = ktime_get(); lblock_stop = i_size_read(jd->jd_inode) >> shift; size = (lblock_stop - lblock) << shift; jd->nr_extents = 0; @@ -2268,8 +2273,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd) lblock += (bh.b_size >> ip->i_inode.i_blkbits); } while(size > 0); - fs_info(sdp, "journal %d mapped with %u extents\n", jd->jd_jid, - jd->nr_extents); + end = ktime_get(); + fs_info(sdp, "journal %d mapped with %u extents in %lldms\n", jd->jd_jid, + jd->nr_extents, ktime_ms_delta(end, start)); return 0; fail: diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c index 45a17b770d97..a2dea5bc0427 100644 --- a/fs/gfs2/file.c +++ b/fs/gfs2/file.c @@ -1199,13 +1199,13 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl) mutex_lock(&fp->f_fl_mutex); if (gfs2_holder_initialized(fl_gh)) { + struct file_lock request; if (fl_gh->gh_state == state) goto out; - locks_lock_file_wait(file, - &(struct file_lock) { - .fl_type = F_UNLCK, - .fl_flags = FL_FLOCK - }); + locks_init_lock(&request); + request.fl_type = F_UNLCK; + request.fl_flags = FL_FLOCK; + locks_lock_file_wait(file, &request); gfs2_glock_dq(fl_gh); gfs2_holder_reinit(state, flags, fl_gh); } else { diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index 05431324b262..b92740edc416 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -1777,7 +1777,7 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl) * */ -void gfs2_dump_glock(struct seq_file *seq, const struct gfs2_glock *gl) +void gfs2_dump_glock(struct seq_file *seq, struct gfs2_glock *gl) { const struct gfs2_glock_operations *glops = gl->gl_ops; unsigned long long dtime; diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h index 5e12220cc0c2..8949bf28b249 100644 --- a/fs/gfs2/glock.h +++ b/fs/gfs2/glock.h @@ -202,7 +202,7 @@ extern int gfs2_glock_nq_num(struct gfs2_sbd *sdp, u64 number, struct gfs2_holder *gh); extern int gfs2_glock_nq_m(unsigned int num_gh, struct 
gfs2_holder *ghs); extern void gfs2_glock_dq_m(unsigned int num_gh, struct gfs2_holder *ghs); -extern void gfs2_dump_glock(struct seq_file *seq, const struct gfs2_glock *gl); +extern void gfs2_dump_glock(struct seq_file *seq, struct gfs2_glock *gl); #define GLOCK_BUG_ON(gl,x) do { if (unlikely(x)) { gfs2_dump_glock(NULL, gl); BUG(); } } while(0) extern __printf(2, 3) void gfs2_print_dbg(struct seq_file *seq, const char *fmt, ...); diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c index c63bee9adb6a..f15b4c57c4bd 100644 --- a/fs/gfs2/glops.c +++ b/fs/gfs2/glops.c @@ -28,6 +28,7 @@ #include "util.h" #include "trans.h" #include "dir.h" +#include "lops.h" struct workqueue_struct *gfs2_freeze_wq; @@ -466,17 +467,25 @@ static int inode_go_lock(struct gfs2_holder *gh) * */ -static void inode_go_dump(struct seq_file *seq, const struct gfs2_glock *gl) +static void inode_go_dump(struct seq_file *seq, struct gfs2_glock *gl) { - const struct gfs2_inode *ip = gl->gl_object; + struct gfs2_inode *ip = gl->gl_object; + struct inode *inode = &ip->i_inode; + unsigned long nrpages; + if (ip == NULL) return; - gfs2_print_dbg(seq, " I: n:%llu/%llu t:%u f:0x%02lx d:0x%08x s:%llu\n", + + xa_lock_irq(&inode->i_data.i_pages); + nrpages = inode->i_data.nrpages; + xa_unlock_irq(&inode->i_data.i_pages); + + gfs2_print_dbg(seq, " I: n:%llu/%llu t:%u f:0x%02lx d:0x%08x s:%llu p:%lu\n", (unsigned long long)ip->i_no_formal_ino, (unsigned long long)ip->i_no_addr, IF2DT(ip->i_inode.i_mode), ip->i_flags, (unsigned int)ip->i_diskflags, - (unsigned long long)i_size_read(&ip->i_inode)); + (unsigned long long)i_size_read(inode), nrpages); } /** diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h index 888b62cfd6d1..e10e0b0a7cd5 100644 --- a/fs/gfs2/incore.h +++ b/fs/gfs2/incore.h @@ -165,7 +165,6 @@ struct gfs2_bufdata { u64 bd_blkno; struct list_head bd_list; - const struct gfs2_log_operations *bd_ops; struct gfs2_trans *bd_tr; struct list_head bd_ail_st_list; @@ -244,7 +243,7 @@ struct gfs2_glock_operations { int (*go_demote_ok) (const struct gfs2_glock *gl); int (*go_lock) (struct gfs2_holder *gh); void (*go_unlock) (struct gfs2_holder *gh); - void (*go_dump)(struct seq_file *seq, const struct gfs2_glock *gl); + void (*go_dump)(struct seq_file *seq, struct gfs2_glock *gl); void (*go_callback)(struct gfs2_glock *gl, bool remote); const int go_type; const unsigned long go_flags; diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c index 648f0ca1ad57..998051c4aea7 100644 --- a/fs/gfs2/inode.c +++ b/fs/gfs2/inode.c @@ -744,17 +744,19 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry, the gfs2 structures. 
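
[Note: the gfs2_create_inode() hunks just below tighten the ACL error handling. Each __gfs2_set_acl() return value is now checked immediately, and the pointer is cleared once released so the shared failure path cannot drop the reference a second time; the pattern leans on posix_acl_release() accepting NULL:

    error = __gfs2_set_acl(inode, default_acl, ACL_TYPE_DEFAULT);
    if (error)
            goto fail_gunlock3;   /* fail_free_acls still owns default_acl */
    posix_acl_release(default_acl);
    default_acl = NULL;           /* the later release becomes a no-op */

which is also why the `if (default_acl)` / `if (acl)` guards in fail_free_acls can be dropped.]
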
*/ if (default_acl) { error = __gfs2_set_acl(inode, default_acl, ACL_TYPE_DEFAULT); + if (error) + goto fail_gunlock3; posix_acl_release(default_acl); + default_acl = NULL; } if (acl) { - if (!error) - error = __gfs2_set_acl(inode, acl, ACL_TYPE_ACCESS); + error = __gfs2_set_acl(inode, acl, ACL_TYPE_ACCESS); + if (error) + goto fail_gunlock3; posix_acl_release(acl); + acl = NULL; } - if (error) - goto fail_gunlock3; - error = security_inode_init_security(&ip->i_inode, &dip->i_inode, name, &gfs2_initxattrs, NULL); if (error) @@ -789,10 +791,8 @@ fail_free_inode: } gfs2_rsqa_delete(ip, NULL); fail_free_acls: - if (default_acl) - posix_acl_release(default_acl); - if (acl) - posix_acl_release(acl); + posix_acl_release(default_acl); + posix_acl_release(acl); fail_gunlock: gfs2_dir_no_add(&da); gfs2_glock_dq_uninit(ghs); diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h index b5b6341a4f5c..793808263c6d 100644 --- a/fs/gfs2/inode.h +++ b/fs/gfs2/inode.h @@ -30,16 +30,14 @@ static inline int gfs2_is_jdata(const struct gfs2_inode *ip) return ip->i_diskflags & GFS2_DIF_JDATA; } -static inline int gfs2_is_writeback(const struct gfs2_inode *ip) +static inline bool gfs2_is_ordered(const struct gfs2_sbd *sdp) { - const struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); - return (sdp->sd_args.ar_data == GFS2_DATA_WRITEBACK) && !gfs2_is_jdata(ip); + return sdp->sd_args.ar_data == GFS2_DATA_ORDERED; } -static inline int gfs2_is_ordered(const struct gfs2_inode *ip) +static inline bool gfs2_is_writeback(const struct gfs2_sbd *sdp) { - const struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); - return (sdp->sd_args.ar_data == GFS2_DATA_ORDERED) && !gfs2_is_jdata(ip); + return sdp->sd_args.ar_data == GFS2_DATA_WRITEBACK; } static inline int gfs2_is_dir(const struct gfs2_inode *ip) diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c index 99dd58694ba1..5bfaf381921a 100644 --- a/fs/gfs2/log.c +++ b/fs/gfs2/log.c @@ -605,7 +605,6 @@ void gfs2_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd) bd->bd_blkno = bh->b_blocknr; gfs2_remove_from_ail(bd); /* drops ref on bh */ bd->bd_bh = NULL; - bd->bd_ops = &gfs2_revoke_lops; sdp->sd_log_num_revoke++; atomic_inc(&gl->gl_revokes); set_bit(GLF_LFLUSH, &gl->gl_flags); @@ -734,7 +733,7 @@ void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd, lh->lh_crc = cpu_to_be32(crc); gfs2_log_write(sdp, page, sb->s_blocksize, 0, addr); - gfs2_log_flush_bio(sdp, REQ_OP_WRITE, op_flags); + gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE | op_flags); log_flush_wait(sdp); } @@ -811,7 +810,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags) gfs2_ordered_write(sdp); lops_before_commit(sdp, tr); - gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0); + gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE); if (sdp->sd_log_head != sdp->sd_log_flush_head) { log_flush_wait(sdp); diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h index 20241436126d..1bc9bd444b28 100644 --- a/fs/gfs2/log.h +++ b/fs/gfs2/log.h @@ -51,12 +51,11 @@ static inline void gfs2_log_pointers_init(struct gfs2_sbd *sdp, static inline void gfs2_ordered_add_inode(struct gfs2_inode *ip) { - struct gfs2_sbd *sdp; + struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); - if (!gfs2_is_ordered(ip)) + if (gfs2_is_jdata(ip) || !gfs2_is_ordered(sdp)) return; - sdp = GFS2_SB(&ip->i_inode); if (!test_bit(GIF_ORDERED, &ip->i_flags)) { spin_lock(&sdp->sd_ordered_lock); if (!test_and_set_bit(GIF_ORDERED, &ip->i_flags)) diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c index 4c7069b8f3c1..94dcab655bc0 100644 --- a/fs/gfs2/lops.c +++ 
b/fs/gfs2/lops.c @@ -17,7 +17,9 @@ #include #include #include +#include +#include "bmap.h" #include "dir.h" #include "gfs2.h" #include "incore.h" @@ -193,7 +195,6 @@ static void gfs2_end_log_write_bh(struct gfs2_sbd *sdp, struct bio_vec *bvec, /** * gfs2_end_log_write - end of i/o to the log * @bio: The bio - * @error: Status of i/o request * * Each bio_vec contains either data from the pagecache or data * relating to the log itself. Here we iterate over the bio_vec @@ -228,83 +229,86 @@ static void gfs2_end_log_write(struct bio *bio) } /** - * gfs2_log_flush_bio - Submit any pending log bio - * @sdp: The superblock - * @op: REQ_OP - * @op_flags: req_flag_bits + * gfs2_log_submit_bio - Submit any pending log bio + * @biop: Address of the bio pointer + * @opf: REQ_OP | op_flags * * Submit any pending part-built or full bio to the block device. If * there is no pending bio, then this is a no-op. */ -void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags) +void gfs2_log_submit_bio(struct bio **biop, int opf) { - if (sdp->sd_log_bio) { + struct bio *bio = *biop; + if (bio) { + struct gfs2_sbd *sdp = bio->bi_private; atomic_inc(&sdp->sd_log_in_flight); - bio_set_op_attrs(sdp->sd_log_bio, op, op_flags); - submit_bio(sdp->sd_log_bio); - sdp->sd_log_bio = NULL; + bio->bi_opf = opf; + submit_bio(bio); + *biop = NULL; } } /** - * gfs2_log_alloc_bio - Allocate a new bio for log writing - * @sdp: The superblock - * @blkno: The next device block number we want to write to + * gfs2_log_alloc_bio - Allocate a bio + * @sdp: The super block + * @blkno: The device block number we want to write to + * @end_io: The bi_end_io callback * - * This should never be called when there is a cached bio in the - * super block. When it returns, there will be a cached bio in the - * super block which will have as many bio_vecs as the device is - * happy to handle. + * Allocate a new bio, initialize it with the given parameters and return it. * - * Returns: Newly allocated bio + * Returns: The newly allocated bio */ -static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno) +static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno, + bio_end_io_t *end_io) { struct super_block *sb = sdp->sd_vfs; - struct bio *bio; - - BUG_ON(sdp->sd_log_bio); + struct bio *bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES); - bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES); bio->bi_iter.bi_sector = blkno * (sb->s_blocksize >> 9); bio_set_dev(bio, sb->s_bdev); - bio->bi_end_io = gfs2_end_log_write; + bio->bi_end_io = end_io; bio->bi_private = sdp; - sdp->sd_log_bio = bio; - return bio; } /** * gfs2_log_get_bio - Get cached log bio, or allocate a new one - * @sdp: The superblock + * @sdp: The super block * @blkno: The device block number we want to write to + * @bio: The bio to get or allocate + * @op: REQ_OP + * @end_io: The bi_end_io callback + * @flush: Always flush the current bio and allocate a new one? * * If there is a cached bio, then if the next block number is sequential * with the previous one, return it, otherwise flush the bio to the - * device. If there is not a cached bio, or we just flushed it, then + * device. If there is no cached bio, or we just flushed it, then * allocate a new one. 
* * Returns: The bio to use for log writes */ -static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno) +static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno, + struct bio **biop, int op, + bio_end_io_t *end_io, bool flush) { - struct bio *bio = sdp->sd_log_bio; - u64 nblk; + struct bio *bio = *biop; if (bio) { + u64 nblk; + nblk = bio_end_sector(bio); nblk >>= sdp->sd_fsb2bb_shift; - if (blkno == nblk) + if (blkno == nblk && !flush) return bio; - gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0); + gfs2_log_submit_bio(biop, op); } - return gfs2_log_alloc_bio(sdp, blkno); + *biop = gfs2_log_alloc_bio(sdp, blkno, end_io); + return *biop; } /** @@ -326,11 +330,12 @@ void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page, struct bio *bio; int ret; - bio = gfs2_log_get_bio(sdp, blkno); + bio = gfs2_log_get_bio(sdp, blkno, &sdp->sd_log_bio, REQ_OP_WRITE, + gfs2_end_log_write, false); ret = bio_add_page(bio, page, size, offset); if (ret == 0) { - gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0); - bio = gfs2_log_alloc_bio(sdp, blkno); + bio = gfs2_log_get_bio(sdp, blkno, &sdp->sd_log_bio, + REQ_OP_WRITE, gfs2_end_log_write, true); ret = bio_add_page(bio, page, size, offset); WARN_ON(ret == 0); } @@ -370,6 +375,184 @@ void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page) gfs2_log_bmap(sdp)); } +/** + * gfs2_end_log_read - end I/O callback for reads from the log + * @bio: The bio + * + * Simply unlock the pages in the bio. The main thread will wait on them and + * process them in order as necessary. + */ + +static void gfs2_end_log_read(struct bio *bio) +{ + struct page *page; + struct bio_vec *bvec; + int i; + + bio_for_each_segment_all(bvec, bio, i) { + page = bvec->bv_page; + if (bio->bi_status) { + int err = blk_status_to_errno(bio->bi_status); + + SetPageError(page); + mapping_set_error(page->mapping, err); + } + unlock_page(page); + } + + bio_put(bio); +} + +/** + * gfs2_jhead_pg_srch - Look for the journal head in a given page. + * @jd: The journal descriptor + * @page: The page to look in + * + * Returns: 1 if found, 0 otherwise. + */ + +static bool gfs2_jhead_pg_srch(struct gfs2_jdesc *jd, + struct gfs2_log_header_host *head, + struct page *page) +{ + struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode); + struct gfs2_log_header_host uninitialized_var(lh); + void *kaddr = kmap_atomic(page); + unsigned int offset; + bool ret = false; + + for (offset = 0; offset < PAGE_SIZE; offset += sdp->sd_sb.sb_bsize) { + if (!__get_log_header(sdp, kaddr + offset, 0, &lh)) { + if (lh.lh_sequence > head->lh_sequence) + *head = lh; + else { + ret = true; + break; + } + } + } + kunmap_atomic(kaddr); + return ret; +} + +/** + * gfs2_jhead_process_page - Search/cleanup a page + * @jd: The journal descriptor + * @index: Index of the page to look into + * @done: If set, perform only cleanup, else search and set if found. + * + * Find the page with 'index' in the journal's mapping. Search the page for + * the journal head if requested (cleanup == false). Release refs on the + * page so the page cache can reclaim it (put_page() twice). We grabbed a + * reference on this page two times, first when we did a find_or_create_page() + * to obtain the page to add it to the bio and second when we do a + * find_get_page() here to get the page to wait on while I/O on it is being + * completed. + * This function is also used to free up a page we might've grabbed but not + * used. Maybe we added it to a bio, but not submitted it for I/O. 
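
[Note on the sector arithmetic in gfs2_log_alloc_bio() above: struct bio positions I/O in 512-byte sectors, so a filesystem block number is scaled by s_blocksize >> 9. For example, with hypothetical 4 KiB blocks:

    u64 blkno = 10;                          /* filesystem block       */
    sector_t sector = blkno * (4096 >> 9);   /* 10 * 8 = sector 80     */
    bio->bi_iter.bi_sector = sector;

The same conversion runs in reverse in gfs2_log_get_bio(), where bio_end_sector() is shifted back down by sd_fsb2bb_shift to decide whether the next block is sequential.]
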
Or we + * submitted the I/O, but we already found the jhead so we only need to drop + * our references to the page. + */ + +static void gfs2_jhead_process_page(struct gfs2_jdesc *jd, unsigned long index, + struct gfs2_log_header_host *head, + bool *done) +{ + struct page *page; + + page = find_get_page(jd->jd_inode->i_mapping, index); + wait_on_page_locked(page); + + if (PageError(page)) + *done = true; + + if (!*done) + *done = gfs2_jhead_pg_srch(jd, head, page); + + put_page(page); /* Once for find_get_page */ + put_page(page); /* Once more for find_or_create_page */ +} + +/** + * gfs2_find_jhead - find the head of a log + * @jd: The journal descriptor + * @head: The log descriptor for the head of the log is returned here + * + * Do a search of a journal by reading it in large chunks using bios and find + * the valid log entry with the highest sequence number. (i.e. the log head) + * + * Returns: 0 on success, errno otherwise + */ + +int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head) +{ + struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode); + struct address_space *mapping = jd->jd_inode->i_mapping; + struct gfs2_journal_extent *je; + u32 block, read_idx = 0, submit_idx = 0, index = 0; + int shift = PAGE_SHIFT - sdp->sd_sb.sb_bsize_shift; + int blocks_per_page = 1 << shift, sz, ret = 0; + struct bio *bio = NULL; + struct page *page; + bool done = false; + errseq_t since; + + memset(head, 0, sizeof(*head)); + if (list_empty(&jd->extent_list)) + gfs2_map_journal_extents(sdp, jd); + + since = filemap_sample_wb_err(mapping); + list_for_each_entry(je, &jd->extent_list, list) { + for (block = 0; block < je->blocks; block += blocks_per_page) { + index = (je->lblock + block) >> shift; + + page = find_or_create_page(mapping, index, GFP_NOFS); + if (!page) { + ret = -ENOMEM; + done = true; + goto out; + } + + if (bio) { + sz = bio_add_page(bio, page, PAGE_SIZE, 0); + if (sz == PAGE_SIZE) + goto page_added; + submit_idx = index; + submit_bio(bio); + bio = NULL; + } + + bio = gfs2_log_alloc_bio(sdp, + je->dblock + (index << shift), + gfs2_end_log_read); + bio->bi_opf = REQ_OP_READ; + sz = bio_add_page(bio, page, PAGE_SIZE, 0); + gfs2_assert_warn(sdp, sz == PAGE_SIZE); + +page_added: + if (submit_idx <= read_idx + BIO_MAX_PAGES) { + /* Keep at least one bio in flight */ + continue; + } + + gfs2_jhead_process_page(jd, read_idx++, head, &done); + if (done) + goto out; /* found */ + } + } + +out: + if (bio) + submit_bio(bio); + while (read_idx <= index) + gfs2_jhead_process_page(jd, read_idx++, head, &done); + + if (!ret) + ret = filemap_check_wb_err(mapping, since); + + return ret; +} + static struct page *gfs2_get_log_desc(struct gfs2_sbd *sdp, u32 ld_type, u32 ld_length, u32 ld_data1) { diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h index e4949394f054..331160fc568b 100644 --- a/fs/gfs2/lops.h +++ b/fs/gfs2/lops.h @@ -30,8 +30,10 @@ extern u64 gfs2_log_bmap(struct gfs2_sbd *sdp); extern void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page, unsigned size, unsigned offset, u64 blkno); extern void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page); -extern void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags); +extern void gfs2_log_submit_bio(struct bio **biop, int opf); extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh); +extern int gfs2_find_jhead(struct gfs2_jdesc *jd, + struct gfs2_log_header_host *head); static inline unsigned int buf_limit(struct gfs2_sbd *sdp) { diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c index 
b041cb8ae383..1179763f6370 100644 --- a/fs/gfs2/ops_fstype.c +++ b/fs/gfs2/ops_fstype.c @@ -41,6 +41,7 @@ #include "dir.h" #include "meta_io.h" #include "trace_gfs2.h" +#include "lops.h" #define DO 0 #define UNDO 1 diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c index 0f501f938d1c..7389e445a7a7 100644 --- a/fs/gfs2/recovery.c +++ b/fs/gfs2/recovery.c @@ -120,6 +120,35 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd) } } +int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh, + unsigned int blkno, struct gfs2_log_header_host *head) +{ + u32 hash, crc; + + if (lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) || + lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) || + (blkno && be32_to_cpu(lh->lh_blkno) != blkno)) + return 1; + + hash = crc32(~0, lh, LH_V1_SIZE - 4); + hash = ~crc32_le_shift(hash, 4); /* assume lh_hash is zero */ + + if (be32_to_cpu(lh->lh_hash) != hash) + return 1; + + crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4, + sdp->sd_sb.sb_bsize - LH_V1_SIZE - 4); + + if ((lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc)) + return 1; + + head->lh_sequence = be64_to_cpu(lh->lh_sequence); + head->lh_flags = be32_to_cpu(lh->lh_flags); + head->lh_tail = be32_to_cpu(lh->lh_tail); + head->lh_blkno = be32_to_cpu(lh->lh_blkno); + + return 0; +} /** * get_log_header - read the log header for a given segment * @jd: the journal @@ -137,159 +166,18 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd) static int get_log_header(struct gfs2_jdesc *jd, unsigned int blk, struct gfs2_log_header_host *head) { - struct gfs2_log_header *lh; + struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode); struct buffer_head *bh; - u32 hash, crc; int error; error = gfs2_replay_read_block(jd, blk, &bh); if (error) return error; - lh = (void *)bh->b_data; - - hash = crc32(~0, lh, LH_V1_SIZE - 4); - hash = ~crc32_le_shift(hash, 4); /* assume lh_hash is zero */ - - crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4, - bh->b_size - LH_V1_SIZE - 4); - - error = lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) || - lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) || - be32_to_cpu(lh->lh_blkno) != blk || - be32_to_cpu(lh->lh_hash) != hash || - (lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc); + error = __get_log_header(sdp, (const struct gfs2_log_header *)bh->b_data, + blk, head); brelse(bh); - if (!error) { - head->lh_sequence = be64_to_cpu(lh->lh_sequence); - head->lh_flags = be32_to_cpu(lh->lh_flags); - head->lh_tail = be32_to_cpu(lh->lh_tail); - head->lh_blkno = be32_to_cpu(lh->lh_blkno); - } - return error; -} - -/** - * find_good_lh - find a good log header - * @jd: the journal - * @blk: the segment to start searching from - * @lh: the log header to fill in - * @forward: if true search forward in the log, else search backward - * - * Call get_log_header() to get a log header for a segment, but if the - * segment is bad, either scan forward or backward until we find a good one. 
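
[Note: the hash check factored into __get_log_header() above avoids copying the header just to zero one field. The on-disk lh_hash was computed with its own four bytes zeroed, and those four bytes sit at the end of the v1 header area (inferred from the offsets used here, not from a definition shown in this diff). crc32_le_shift(hash, 4) extends a running CRC over four zero bytes, so the code can CRC everything before the field and then account for it:

    hash = crc32(~0, lh, LH_V1_SIZE - 4);   /* all bytes before lh_hash    */
    hash = ~crc32_le_shift(hash, 4);        /* as if 4 zero bytes followed */

No temporary buffer, no memcpy, same result as hashing a patched copy.]
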
- * - * Returns: errno - */ - -static int find_good_lh(struct gfs2_jdesc *jd, unsigned int *blk, - struct gfs2_log_header_host *head) -{ - unsigned int orig_blk = *blk; - int error; - - for (;;) { - error = get_log_header(jd, *blk, head); - if (error <= 0) - return error; - - if (++*blk == jd->jd_blocks) - *blk = 0; - - if (*blk == orig_blk) { - gfs2_consist_inode(GFS2_I(jd->jd_inode)); - return -EIO; - } - } -} - -/** - * jhead_scan - make sure we've found the head of the log - * @jd: the journal - * @head: this is filled in with the log descriptor of the head - * - * At this point, seg and lh should be either the head of the log or just - * before. Scan forward until we find the head. - * - * Returns: errno - */ - -static int jhead_scan(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head) -{ - unsigned int blk = head->lh_blkno; - struct gfs2_log_header_host lh; - int error; - - for (;;) { - if (++blk == jd->jd_blocks) - blk = 0; - - error = get_log_header(jd, blk, &lh); - if (error < 0) - return error; - if (error == 1) - continue; - - if (lh.lh_sequence == head->lh_sequence) { - gfs2_consist_inode(GFS2_I(jd->jd_inode)); - return -EIO; - } - if (lh.lh_sequence < head->lh_sequence) - break; - - *head = lh; - } - - return 0; -} - -/** - * gfs2_find_jhead - find the head of a log - * @jd: the journal - * @head: the log descriptor for the head of the log is returned here - * - * Do a binary search of a journal and find the valid log entry with the - * highest sequence number. (i.e. the log head) - * - * Returns: errno - */ - -int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head) -{ - struct gfs2_log_header_host lh_1, lh_m; - u32 blk_1, blk_2, blk_m; - int error; - - blk_1 = 0; - blk_2 = jd->jd_blocks - 1; - - for (;;) { - blk_m = (blk_1 + blk_2) / 2; - - error = find_good_lh(jd, &blk_1, &lh_1); - if (error) - return error; - - error = find_good_lh(jd, &blk_m, &lh_m); - if (error) - return error; - - if (blk_1 == blk_m || blk_m == blk_2) - break; - - if (lh_1.lh_sequence <= lh_m.lh_sequence) - blk_1 = blk_m; - else - blk_2 = blk_m; - } - - error = jhead_scan(jd, &lh_1); - if (error) - return error; - - *head = lh_1; - return error; } @@ -460,6 +348,8 @@ void gfs2_recover_func(struct work_struct *work) if (error) goto fail_gunlock_ji; t_jhd = ktime_get(); + fs_info(sdp, "jid=%u: Journal head lookup took %lldms\n", jd->jd_jid, + ktime_ms_delta(t_jhd, t_jlck)); if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) { fs_info(sdp, "jid=%u: Acquiring the transaction lock...\n", diff --git a/fs/gfs2/recovery.h b/fs/gfs2/recovery.h index 11fdfab4bf99..99575ab81202 100644 --- a/fs/gfs2/recovery.h +++ b/fs/gfs2/recovery.h @@ -27,10 +27,11 @@ extern int gfs2_revoke_add(struct gfs2_jdesc *jd, u64 blkno, unsigned int where) extern int gfs2_revoke_check(struct gfs2_jdesc *jd, u64 blkno, unsigned int where); extern void gfs2_revoke_clean(struct gfs2_jdesc *jd); -extern int gfs2_find_jhead(struct gfs2_jdesc *jd, - struct gfs2_log_header_host *head); extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd, bool wait); extern void gfs2_recover_func(struct work_struct *work); +extern int __get_log_header(struct gfs2_sbd *sdp, + const struct gfs2_log_header *lh, unsigned int blkno, + struct gfs2_log_header_host *head); #endif /* __RECOVERY_DOT_H__ */ diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c index b08a530433ad..831d7cb5a49c 100644 --- a/fs/gfs2/rgrp.c +++ b/fs/gfs2/rgrp.c @@ -1780,9 +1780,9 @@ static int gfs2_rbm_find(struct gfs2_rbm *rbm, u8 state, u32 *minext, goto next_iter; } if 
(ret == -E2BIG) { + n += rbm->bii - initial_bii; rbm->bii = 0; rbm->offset = 0; - n += (rbm->bii - initial_bii); goto res_covered_end_of_rgrp; } return ret; @@ -2256,7 +2256,7 @@ static void rgblk_free(struct gfs2_sbd *sdp, struct gfs2_rgrpd *rgd, * */ -void gfs2_rgrp_dump(struct seq_file *seq, const struct gfs2_glock *gl) +void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_glock *gl) { struct gfs2_rgrpd *rgd = gl->gl_object; struct gfs2_blkreserv *trs; diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h index b596c3d17988..499079a9dbbe 100644 --- a/fs/gfs2/rgrp.h +++ b/fs/gfs2/rgrp.h @@ -72,7 +72,7 @@ extern void gfs2_rlist_add(struct gfs2_inode *ip, struct gfs2_rgrp_list *rlist, extern void gfs2_rlist_alloc(struct gfs2_rgrp_list *rlist); extern void gfs2_rlist_free(struct gfs2_rgrp_list *rlist); extern u64 gfs2_ri_total(struct gfs2_sbd *sdp); -extern void gfs2_rgrp_dump(struct seq_file *seq, const struct gfs2_glock *gl); +extern void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_glock *gl); extern int gfs2_rgrp_send_discards(struct gfs2_sbd *sdp, u64 offset, struct buffer_head *bh, const struct gfs2_bitmap *bi, unsigned minlen, u64 *ptrimmed); diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c index ca71163ff7cf..d4b11c903971 100644 --- a/fs/gfs2/super.c +++ b/fs/gfs2/super.c @@ -45,6 +45,7 @@ #include "util.h" #include "sys.h" #include "xattr.h" +#include "lops.h" #define args_neq(a1, a2, x) ((a1)->ar_##x != (a2)->ar_##x) diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c index 423bc2d03dd8..cd9a94a6b5bb 100644 --- a/fs/gfs2/trans.c +++ b/fs/gfs2/trans.c @@ -124,15 +124,13 @@ void gfs2_trans_end(struct gfs2_sbd *sdp) } static struct gfs2_bufdata *gfs2_alloc_bufdata(struct gfs2_glock *gl, - struct buffer_head *bh, - const struct gfs2_log_operations *lops) + struct buffer_head *bh) { struct gfs2_bufdata *bd; bd = kmem_cache_zalloc(gfs2_bufdata_cachep, GFP_NOFS | __GFP_NOFAIL); bd->bd_bh = bh; bd->bd_gl = gl; - bd->bd_ops = lops; INIT_LIST_HEAD(&bd->bd_list); bh->b_private = bd; return bd; @@ -169,7 +167,7 @@ void gfs2_trans_add_data(struct gfs2_glock *gl, struct buffer_head *bh) gfs2_log_unlock(sdp); unlock_buffer(bh); if (bh->b_private == NULL) - bd = gfs2_alloc_bufdata(gl, bh, &gfs2_databuf_lops); + bd = gfs2_alloc_bufdata(gl, bh); else bd = bh->b_private; lock_buffer(bh); @@ -210,7 +208,7 @@ void gfs2_trans_add_meta(struct gfs2_glock *gl, struct buffer_head *bh) unlock_buffer(bh); lock_page(bh->b_page); if (bh->b_private == NULL) - bd = gfs2_alloc_bufdata(gl, bh, &gfs2_buf_lops); + bd = gfs2_alloc_bufdata(gl, bh); else bd = bh->b_private; unlock_page(bh->b_page); diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c index 32920a10100e..a2fcea5f8225 100644 --- a/fs/hugetlbfs/inode.c +++ b/fs/hugetlbfs/inode.c @@ -383,17 +383,16 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end) * truncation is indicated by end of range being LLONG_MAX * In this case, we first scan the range and release found pages. * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv - * maps and global counts. Page faults can not race with truncation - * in this routine. hugetlb_no_page() prevents page faults in the - * truncated range. It checks i_size before allocation, and again after - * with the page table lock for the page held. The same lock must be - * acquired to unmap a page. + * maps and global counts. * hole punch is indicated if end is not LLONG_MAX * In the hole punch case we scan the range and release found pages. 
* Only when releasing a page is the associated region/reserv map * deleted. The region/reserv map for ranges without associated - * pages are not modified. Page faults can race with hole punch. - * This is indicated if we find a mapped page. + * pages are not modified. + * + * Callers of this routine must hold the i_mmap_rwsem in write mode to prevent + * races with page faults. + * * Note: If the passed end of range value is beyond the end of file, but * not LLONG_MAX this routine still performs a hole punch operation. */ @@ -423,32 +422,14 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart, for (i = 0; i < pagevec_count(&pvec); ++i) { struct page *page = pvec.pages[i]; - u32 hash; index = page->index; - hash = hugetlb_fault_mutex_hash(h, current->mm, - &pseudo_vma, - mapping, index, 0); - mutex_lock(&hugetlb_fault_mutex_table[hash]); - /* - * If page is mapped, it was faulted in after being - * unmapped in caller. Unmap (again) now after taking - * the fault mutex. The mutex will prevent faults - * until we finish removing the page. - * - * This race can only happen in the hole punch case. - * Getting here in a truncate operation is a bug. + * A mapped page is impossible as callers should unmap + * all references before calling. And, i_mmap_rwsem + * prevents the creation of additional mappings. */ - if (unlikely(page_mapped(page))) { - BUG_ON(truncate_op); - - i_mmap_lock_write(mapping); - hugetlb_vmdelete_list(&mapping->i_mmap, - index * pages_per_huge_page(h), - (index + 1) * pages_per_huge_page(h)); - i_mmap_unlock_write(mapping); - } + VM_BUG_ON(page_mapped(page)); lock_page(page); /* @@ -470,7 +451,6 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart, } unlock_page(page); - mutex_unlock(&hugetlb_fault_mutex_table[hash]); } huge_pagevec_release(&pvec); cond_resched(); @@ -482,9 +462,20 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart, static void hugetlbfs_evict_inode(struct inode *inode) { + struct address_space *mapping = inode->i_mapping; struct resv_map *resv_map; + /* + * The vfs layer guarantees that there are no other users of this + * inode. Therefore, it would be safe to call remove_inode_hugepages + * without holding i_mmap_rwsem. We acquire and hold here to be + * consistent with other callers. Since there will be no contention + * on the semaphore, overhead is negligible. 
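[Annotation] The hugetlbfs hunks move remove_inode_hugepages() inside the i_mmap_rwsem write section: truncate, hole punch, and now even evict hold one write lock across both the unmap and the page removal, so a fault can no longer re-map a page in between, and the old per-page fault-mutex dance plus the page_mapped() recheck collapse into a VM_BUG_ON. A compilable userspace analog of that ordering, assuming (as the comment above implies) that the fault path takes the semaphore for read; the work functions are stubs:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t i_mmap_rwsem = PTHREAD_RWLOCK_INITIALIZER;

    static void unmap_pages(void)  { puts("unmap");  }  /* hugetlb_vmdelete_list() */
    static void remove_pages(void) { puts("remove"); }  /* remove_inode_hugepages() */
    static void map_page(void)     { puts("fault");  }

    /* Truncate/hole-punch side: both steps under one write lock, as in the
     * patched hugetlb_vmtruncate() and hugetlbfs_punch_hole(). */
    static void truncate_side(void)
    {
        pthread_rwlock_wrlock(&i_mmap_rwsem);
        unmap_pages();
        remove_pages();             /* may now VM_BUG_ON(page_mapped(page)) */
        pthread_rwlock_unlock(&i_mmap_rwsem);
    }

    /* Fault side: serializes against the writer above. */
    static void fault_side(void)
    {
        pthread_rwlock_rdlock(&i_mmap_rwsem);
        map_page();
        pthread_rwlock_unlock(&i_mmap_rwsem);
    }

    int main(void)
    {
        truncate_side();
        fault_side();
        return 0;
    }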
+ */ + i_mmap_lock_write(mapping); remove_inode_hugepages(inode, 0, LLONG_MAX); + i_mmap_unlock_write(mapping); + resv_map = (struct resv_map *)inode->i_mapping->private_data; /* root inode doesn't have the resv_map, so we should check it */ if (resv_map) @@ -505,8 +496,8 @@ static int hugetlb_vmtruncate(struct inode *inode, loff_t offset) i_mmap_lock_write(mapping); if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)) hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0); - i_mmap_unlock_write(mapping); remove_inode_hugepages(inode, offset, LLONG_MAX); + i_mmap_unlock_write(mapping); return 0; } @@ -540,8 +531,8 @@ static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len) hugetlb_vmdelete_list(&mapping->i_mmap, hole_start >> PAGE_SHIFT, hole_end >> PAGE_SHIFT); - i_mmap_unlock_write(mapping); remove_inode_hugepages(inode, hole_start, hole_end); + i_mmap_unlock_write(mapping); inode_unlock(inode); } @@ -624,7 +615,11 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset, /* addr is the offset within the file (zero based) */ addr = index * hpage_size; - /* mutex taken here, fault path and hole punch */ + /* + * fault mutex taken here, protects against fault path + * and hole punch. inode_lock previously taken protects + * against truncation. + */ hash = hugetlb_fault_mutex_hash(h, mm, &pseudo_vma, mapping, index, addr); mutex_lock(&hugetlb_fault_mutex_table[hash]); diff --git a/fs/inode.c b/fs/inode.c index 35d2108d567c..0cd47fe0dbe5 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -2149,7 +2149,9 @@ EXPORT_SYMBOL(timespec64_trunc); */ struct timespec64 current_time(struct inode *inode) { - struct timespec64 now = current_kernel_time64(); + struct timespec64 now; + + ktime_get_coarse_real_ts64(&now); if (unlikely(!inode->i_sb)) { WARN(1, "current_time() called with uninitialized super_block in the inode"); diff --git a/fs/iomap.c b/fs/iomap.c index d6bc98ae8d35..3a0cd557b4cf 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -492,16 +492,29 @@ done: } EXPORT_SYMBOL_GPL(iomap_readpages); +/* + * iomap_is_partially_uptodate checks whether blocks within a page are + * uptodate or not. + * + * Returns true if all blocks which correspond to a file portion + * we want to read within the page are uptodate. 
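[Annotation] The iomap_is_partially_uptodate() body that follows clamps the queried byte range to one page before converting it to block indices; without the clamp, a count extending past the page could index beyond the per-page uptodate bitmap. A standalone sketch of the index math, with an assumed 4 KiB page (blkbits is log2 of the block size):

    #include <assert.h>

    #define MY_PAGE_SIZE 4096u      /* illustrative; the kernel uses PAGE_SIZE */

    static void blocks_in_page(unsigned from, unsigned count, unsigned blkbits,
                               unsigned *first, unsigned *last)
    {
        /* Limit range to one page */
        unsigned len = count < MY_PAGE_SIZE - from ? count : MY_PAGE_SIZE - from;

        /* First and last blocks in range within page */
        *first = from >> blkbits;
        *last = (from + len - 1) >> blkbits;
    }

    int main(void)
    {
        unsigned first, last;

        /* 1k blocks: a read of 8k starting at byte 512 touches only
         * blocks 0..3 of this page once the length is clamped. */
        blocks_in_page(512, 8192, 10, &first, &last);
        assert(first == 0 && last == 3);
        return 0;
    }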
+ */ int iomap_is_partially_uptodate(struct page *page, unsigned long from, unsigned long count) { struct iomap_page *iop = to_iomap_page(page); struct inode *inode = page->mapping->host; - unsigned first = from >> inode->i_blkbits; - unsigned last = (from + count - 1) >> inode->i_blkbits; + unsigned len, first, last; unsigned i; + /* Limit range to one page */ + len = min_t(unsigned, PAGE_SIZE - from, count); + + /* First and last blocks in range within page */ + first = from >> inode->i_blkbits; + last = (from + len - 1) >> inode->i_blkbits; + if (iop) { for (i = first; i <= last; i++) if (!test_bit(i, iop->uptodate)) @@ -550,7 +563,7 @@ iomap_migrate_page(struct address_space *mapping, struct page *newpage, { int ret; - ret = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); + ret = migrate_page_move_mapping(mapping, newpage, page, mode, 0); if (ret != MIGRATEPAGE_SUCCESS) return ret; @@ -1530,7 +1543,7 @@ static void iomap_dio_bio_end_io(struct bio *bio) if (dio->wait_for_completion) { struct task_struct *waiter = dio->submit.waiter; WRITE_ONCE(dio->submit.waiter, NULL); - wake_up_process(waiter); + blk_wake_io_task(waiter); } else if (dio->flags & IOMAP_DIO_WRITE) { struct inode *inode = file_inode(dio->iocb->ki_filp); @@ -1558,6 +1571,7 @@ iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos, unsigned len) { struct page *page = ZERO_PAGE(0); + int flags = REQ_SYNC | REQ_IDLE; struct bio *bio; bio = bio_alloc(GFP_KERNEL, 1); @@ -1566,9 +1580,12 @@ iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos, bio->bi_private = dio; bio->bi_end_io = iomap_dio_bio_end_io; + if (dio->iocb->ki_flags & IOCB_HIPRI) + flags |= REQ_HIPRI; + get_page(page); __bio_add_page(bio, page, len, 0); - bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_SYNC | REQ_IDLE); + bio_set_op_attrs(bio, REQ_OP_WRITE, flags); atomic_inc(&dio->ref); return submit_bio(bio); @@ -1674,6 +1691,9 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, bio_set_pages_dirty(bio); } + if (dio->iocb->ki_flags & IOCB_HIPRI) + bio->bi_opf |= REQ_HIPRI; + iov_iter_advance(dio->submit.iter, n); dio->size += n; @@ -1901,14 +1921,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter, return -EIOCBQUEUED; for (;;) { - set_current_state(TASK_UNINTERRUPTIBLE); + __set_current_state(TASK_UNINTERRUPTIBLE); + if (!READ_ONCE(dio->submit.waiter)) break; if (!(iocb->ki_flags & IOCB_HIPRI) || !dio->submit.last_queue || !blk_poll(dio->submit.last_queue, - dio->submit.cookie)) + dio->submit.cookie, true)) io_schedule(); } __set_current_state(TASK_RUNNING); diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c index 150cc030b4d7..2eb55c3361a8 100644 --- a/fs/jbd2/commit.c +++ b/fs/jbd2/commit.c @@ -439,6 +439,8 @@ void jbd2_journal_commit_transaction(journal_t *journal) finish_wait(&journal->j_wait_updates, &wait); } spin_unlock(&commit_transaction->t_handle_lock); + commit_transaction->t_state = T_SWITCH; + write_unlock(&journal->j_state_lock); J_ASSERT (atomic_read(&commit_transaction->t_outstanding_credits) <= journal->j_max_transaction_buffers); @@ -505,6 +507,7 @@ void jbd2_journal_commit_transaction(journal_t *journal) atomic_sub(atomic_read(&journal->j_reserved_credits), &commit_transaction->t_outstanding_credits); + write_lock(&journal->j_state_lock); trace_jbd2_commit_flushing(journal, commit_transaction); stats.run.rs_flushing = jiffies; stats.run.rs_locked = jbd2_time_diff(stats.run.rs_locked, diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c index c0b66a7a795b..cc35537232f2 
100644 --- a/fs/jbd2/transaction.c +++ b/fs/jbd2/transaction.c @@ -138,9 +138,9 @@ static inline void update_t_max_wait(transaction_t *transaction, } /* - * Wait until running transaction passes T_LOCKED state. Also starts the commit - * if needed. The function expects running transaction to exist and releases - * j_state_lock. + * Wait until running transaction passes to T_FLUSH state and new transaction + * can thus be started. Also starts the commit if needed. The function expects + * running transaction to exist and releases j_state_lock. */ static void wait_transaction_locked(journal_t *journal) __releases(journal->j_state_lock) @@ -160,6 +160,32 @@ static void wait_transaction_locked(journal_t *journal) finish_wait(&journal->j_wait_transaction_locked, &wait); } +/* + * Wait until running transaction transitions from T_SWITCH to T_FLUSH + * state and new transaction can thus be started. The function releases + * j_state_lock. + */ +static void wait_transaction_switching(journal_t *journal) + __releases(journal->j_state_lock) +{ + DEFINE_WAIT(wait); + + if (WARN_ON(!journal->j_running_transaction || + journal->j_running_transaction->t_state != T_SWITCH)) + return; + prepare_to_wait(&journal->j_wait_transaction_locked, &wait, + TASK_UNINTERRUPTIBLE); + read_unlock(&journal->j_state_lock); + /* + * We don't call jbd2_might_wait_for_commit() here as there's no + * waiting for outstanding handles happening anymore in T_SWITCH state + * and handling of reserved handles actually relies on that for + * correctness. + */ + schedule(); + finish_wait(&journal->j_wait_transaction_locked, &wait); +} + static void sub_reserved_credits(journal_t *journal, int blocks) { atomic_sub(blocks, &journal->j_reserved_credits); @@ -183,7 +209,8 @@ static int add_transaction_credits(journal_t *journal, int blocks, * If the current transaction is locked down for commit, wait * for the lock to be released. */ - if (t->t_state == T_LOCKED) { + if (t->t_state != T_RUNNING) { + WARN_ON_ONCE(t->t_state >= T_FLUSH); wait_transaction_locked(journal); return 1; } @@ -360,8 +387,14 @@ repeat: /* * We have handle reserved so we are allowed to join T_LOCKED * transaction and we don't have to check for transaction size - * and journal space. + * and journal space. But we still have to wait while running + * transaction is being switched to a committing one as it + * won't wait for any handles anymore. 
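[Annotation] The jbd2 hunks insert a T_SWITCH state between T_LOCKED and T_FLUSH: the commit code drops j_state_lock while waiting for outstanding handles to drain, and reserved handles, which may normally join a T_LOCKED transaction, must not join once that switch has begun. A simplified condition-variable analog of the retry in wait_transaction_switching(); the commit side is condensed to its state transitions:

    #include <pthread.h>

    enum tstate { T_RUNNING, T_LOCKED, T_SWITCH, T_FLUSH };

    static pthread_mutex_t j_state_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t j_wait_transaction_locked = PTHREAD_COND_INITIALIZER;
    static enum tstate state = T_RUNNING;

    /* Reserved-handle path: T_LOCKED is joinable, T_SWITCH is not. */
    static void start_reserved_handle(void)
    {
        pthread_mutex_lock(&j_state_lock);
        while (state == T_SWITCH)   /* wait_transaction_switching() analog */
            pthread_cond_wait(&j_wait_transaction_locked, &j_state_lock);
        /* ... consume reserved credits and join the transaction ... */
        pthread_mutex_unlock(&j_state_lock);
    }

    /* Commit path: announce the switch, wake waiters once it completes. */
    static void commit_switch(void)
    {
        pthread_mutex_lock(&j_state_lock);
        state = T_SWITCH;
        /* ... in the kernel, handles drain with j_state_lock dropped ... */
        state = T_FLUSH;
        pthread_cond_broadcast(&j_wait_transaction_locked);
        pthread_mutex_unlock(&j_state_lock);
    }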
*/ + if (transaction->t_state == T_SWITCH) { + wait_transaction_switching(journal); + goto repeat; + } sub_reserved_credits(journal, blocks); handle->h_reserved = 0; } @@ -910,7 +943,7 @@ repeat: * this is the first time this transaction is touching this buffer, * reset the modified flag */ - jh->b_modified = 0; + jh->b_modified = 0; /* * If the buffer is not journaled right now, we need to make sure it diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c index dbf5bc250bfd..f8d5021a652e 100644 --- a/fs/kernfs/file.c +++ b/fs/kernfs/file.c @@ -857,7 +857,6 @@ static __poll_t kernfs_fop_poll(struct file *filp, poll_table *wait) static void kernfs_notify_workfn(struct work_struct *work) { struct kernfs_node *kn; - struct kernfs_open_node *on; struct kernfs_super_info *info; repeat: /* pop one off the notify_list */ @@ -871,17 +870,6 @@ repeat: kn->attr.notify_next = NULL; spin_unlock_irq(&kernfs_notify_lock); - /* kick poll */ - spin_lock_irq(&kernfs_open_node_lock); - - on = kn->attr.open; - if (on) { - atomic_inc(&on->event); - wake_up_interruptible(&on->poll); - } - - spin_unlock_irq(&kernfs_open_node_lock); - /* kick fsnotify */ mutex_lock(&kernfs_mutex); @@ -934,10 +922,21 @@ void kernfs_notify(struct kernfs_node *kn) { static DECLARE_WORK(kernfs_notify_work, kernfs_notify_workfn); unsigned long flags; + struct kernfs_open_node *on; if (WARN_ON(kernfs_type(kn) != KERNFS_FILE)) return; + /* kick poll immediately */ + spin_lock_irqsave(&kernfs_open_node_lock, flags); + on = kn->attr.open; + if (on) { + atomic_inc(&on->event); + wake_up_interruptible(&on->poll); + } + spin_unlock_irqrestore(&kernfs_open_node_lock, flags); + + /* schedule work to kick fsnotify */ spin_lock_irqsave(&kernfs_notify_lock, flags); if (!kn->attr.notify_next) { kernfs_get(kn); diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c index 74330daeab71..ea719cdd6a36 100644 --- a/fs/lockd/svclock.c +++ b/fs/lockd/svclock.c @@ -276,7 +276,7 @@ static int nlmsvc_unlink_block(struct nlm_block *block) dprintk("lockd: unlinking block %p...\n", block); /* Remove block from list */ - status = posix_unblock_lock(&block->b_call->a_args.lock.fl); + status = locks_delete_block(&block->b_call->a_args.lock.fl); nlmsvc_remove_block(block); return status; } diff --git a/fs/locks.c b/fs/locks.c index 2ecb4db8c840..f0b24d98f36b 100644 --- a/fs/locks.c +++ b/fs/locks.c @@ -11,11 +11,11 @@ * * Miscellaneous edits, and a total rewrite of posix_lock_file() code. * Kai Petzke (wpp@marie.physik.tu-berlin.de), 1994 - * + * * Converted file_lock_table to a linked list from an array, which eliminates * the limits on how many active file locks are open. * Chad Page (pageone@netcom.com), November 27, 1994 - * + * * Removed dependency on file descriptors. dup()'ed file descriptors now * get the same locks as the original file descriptors, and a close() on * any file descriptor removes ALL the locks on the file for the current @@ -41,7 +41,7 @@ * with a file pointer (filp). As a result they can be shared by a parent * process and its children after a fork(). They are removed when the last * file descriptor referring to the file pointer is closed (unless explicitly - * unlocked). + * unlocked). * * FL_FLOCK locks never deadlock, an existing lock is always removed before * upgrading from shared to exclusive (or vice versa). When this happens @@ -50,7 +50,7 @@ * Andy Walker (andy@lysaker.kvaerner.no), June 09, 1995 * * Removed some race conditions in flock_lock_file(), marked other possible - * races. Just grep for FIXME to see them. + * races. 
Just grep for FIXME to see them. * Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996. * * Addressed Dmitry's concerns. Deadlock checking no longer recursive. @@ -112,6 +112,46 @@ * Leases and LOCK_MAND * Matthew Wilcox , June, 2000. * Stephen Rothwell , June, 2000. + * + * Locking conflicts and dependencies: + * If multiple threads attempt to lock the same byte (or flock the same file) + * only one can be granted the lock, and other must wait their turn. + * The first lock has been "applied" or "granted", the others are "waiting" + * and are "blocked" by the "applied" lock.. + * + * Waiting and applied locks are all kept in trees whose properties are: + * + * - the root of a tree may be an applied or waiting lock. + * - every other node in the tree is a waiting lock that + * conflicts with every ancestor of that node. + * + * Every such tree begins life as a waiting singleton which obviously + * satisfies the above properties. + * + * The only ways we modify trees preserve these properties: + * + * 1. We may add a new leaf node, but only after first verifying that it + * conflicts with all of its ancestors. + * 2. We may remove the root of a tree, creating a new singleton + * tree from the root and N new trees rooted in the immediate + * children. + * 3. If the root of a tree is not currently an applied lock, we may + * apply it (if possible). + * 4. We may upgrade the root of the tree (either extend its range, + * or upgrade its entire range from read to write). + * + * When an applied lock is modified in a way that reduces or downgrades any + * part of its range, we remove all its children (2 above). This particularly + * happens when a lock is unlocked. + * + * For each of those child trees we "wake up" the thread which is + * waiting for the lock so it can continue handling as follows: if the + * root of the tree applies, we do so (3). If it doesn't, it must + * conflict with some applied lock. We remove (wake up) all of its children + * (2), and add it is a new leaf to the tree rooted in the applied + * lock (1). We then repeat the process recursively with those + * children. + * */ #include @@ -189,9 +229,9 @@ static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS); * This lock protects the blocked_hash. Generally, if you're accessing it, you * want to be holding this lock. * - * In addition, it also protects the fl->fl_block list, and the fl->fl_next - * pointer for file_lock structures that are acting as lock requests (in - * contrast to those that are acting as records of acquired locks). + * In addition, it also protects the fl->fl_blocked_requests list, and the + * fl->fl_blocker pointer for file_lock structures that are acting as lock + * requests (in contrast to those that are acting as records of acquired locks). * * Note that when we acquire this lock in order to change the above fields, * we often hold the flc_lock as well. 
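[Annotation] The rewritten header comment describes waiters kept in trees rather than a flat list: a new waiter is inserted beneath the deepest existing waiter it conflicts with, so a wakeup only has to touch the immediate children of the lock being released. A toy, self-contained model of that insertion walk (the real __locks_insert_block() appears later in this patch; the conflict rule here is invented for illustration):

    #include <stdbool.h>
    #include <stddef.h>

    struct req {
        struct req *blocker;        /* fl_blocker analog */
        struct req *children;       /* fl_blocked_requests analog */
        struct req *sibling;        /* fl_blocked_member analog */
        bool write;                 /* toy conflict criterion */
    };

    /* Two requests conflict if either wants exclusive access. */
    static bool conflict(const struct req *a, const struct req *b)
    {
        return a->write || b->write;
    }

    /* Insert 'waiter' beneath the deepest waiter it conflicts with,
     * mirroring the new_blocker loop in __locks_insert_block(). */
    static void insert_block(struct req *blocker, struct req *waiter)
    {
        struct req *f;

    new_blocker:
        for (f = blocker->children; f; f = f->sibling)
            if (conflict(f, waiter)) {
                blocker = f;
                goto new_blocker;
            }
        waiter->blocker = blocker;
        waiter->sibling = blocker->children;
        blocker->children = waiter;
    }

As the diff notes, anything already queued behind the new waiter still has to be woken after insertion, since those requests may not conflict with the new blocker at all.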
In certain cases, when reading the fields @@ -293,7 +333,8 @@ static void locks_init_lock_heads(struct file_lock *fl) { INIT_HLIST_NODE(&fl->fl_link); INIT_LIST_HEAD(&fl->fl_list); - INIT_LIST_HEAD(&fl->fl_block); + INIT_LIST_HEAD(&fl->fl_blocked_requests); + INIT_LIST_HEAD(&fl->fl_blocked_member); init_waitqueue_head(&fl->fl_wait); } @@ -332,7 +373,8 @@ void locks_free_lock(struct file_lock *fl) { BUG_ON(waitqueue_active(&fl->fl_wait)); BUG_ON(!list_empty(&fl->fl_list)); - BUG_ON(!list_empty(&fl->fl_block)); + BUG_ON(!list_empty(&fl->fl_blocked_requests)); + BUG_ON(!list_empty(&fl->fl_blocked_member)); BUG_ON(!hlist_unhashed(&fl->fl_link)); locks_release_private(fl); @@ -357,7 +399,6 @@ void locks_init_lock(struct file_lock *fl) memset(fl, 0, sizeof(struct file_lock)); locks_init_lock_heads(fl); } - EXPORT_SYMBOL(locks_init_lock); /* @@ -397,9 +438,26 @@ void locks_copy_lock(struct file_lock *new, struct file_lock *fl) fl->fl_ops->fl_copy_lock(new, fl); } } - EXPORT_SYMBOL(locks_copy_lock); +static void locks_move_blocks(struct file_lock *new, struct file_lock *fl) +{ + struct file_lock *f; + + /* + * As ctx->flc_lock is held, new requests cannot be added to + * ->fl_blocked_requests, so we don't need a lock to check if it + * is empty. + */ + if (list_empty(&fl->fl_blocked_requests)) + return; + spin_lock(&blocked_lock_lock); + list_splice_init(&fl->fl_blocked_requests, &new->fl_blocked_requests); + list_for_each_entry(f, &fl->fl_blocked_requests, fl_blocked_member) + f->fl_blocker = new; + spin_unlock(&blocked_lock_lock); +} + static inline int flock_translate_cmd(int cmd) { if (cmd & LOCK_MAND) return cmd & (LOCK_MAND | LOCK_RW); @@ -416,17 +474,20 @@ static inline int flock_translate_cmd(int cmd) { /* Fill in a file_lock structure with an appropriate FLOCK lock. */ static struct file_lock * -flock_make_lock(struct file *filp, unsigned int cmd) +flock_make_lock(struct file *filp, unsigned int cmd, struct file_lock *fl) { - struct file_lock *fl; int type = flock_translate_cmd(cmd); if (type < 0) return ERR_PTR(type); - - fl = locks_alloc_lock(); - if (fl == NULL) - return ERR_PTR(-ENOMEM); + + if (fl == NULL) { + fl = locks_alloc_lock(); + if (fl == NULL) + return ERR_PTR(-ENOMEM); + } else { + locks_init_lock(fl); + } fl->fl_file = filp; fl->fl_owner = filp; @@ -434,7 +495,7 @@ flock_make_lock(struct file *filp, unsigned int cmd) fl->fl_flags = FL_FLOCK; fl->fl_type = type; fl->fl_end = OFFSET_MAX; - + return fl; } @@ -666,16 +727,58 @@ static void locks_delete_global_blocked(struct file_lock *waiter) static void __locks_delete_block(struct file_lock *waiter) { locks_delete_global_blocked(waiter); - list_del_init(&waiter->fl_block); - waiter->fl_next = NULL; + list_del_init(&waiter->fl_blocked_member); + waiter->fl_blocker = NULL; } -static void locks_delete_block(struct file_lock *waiter) +static void __locks_wake_up_blocks(struct file_lock *blocker) { + while (!list_empty(&blocker->fl_blocked_requests)) { + struct file_lock *waiter; + + waiter = list_first_entry(&blocker->fl_blocked_requests, + struct file_lock, fl_blocked_member); + __locks_delete_block(waiter); + if (waiter->fl_lmops && waiter->fl_lmops->lm_notify) + waiter->fl_lmops->lm_notify(waiter); + else + wake_up(&waiter->fl_wait); + } +} + +/** + * locks_delete_lock - stop waiting for a file lock + * @waiter: the lock which was waiting + * + * lockd/nfsd need to disconnect the lock while working on it. 
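[Annotation] __locks_wake_up_blocks() above pops each immediate child, detaches it, and notifies it; a woken child retries its request and either gets applied or re-inserts itself deeper in some tree. Continuing the toy model from the previous sketch:

    /* Detach and wake every request blocked directly on 'blocker';
     * each one retries its lock and re-blocks if it still conflicts. */
    static void wake_up_blocks(struct req *blocker)
    {
        while (blocker->children) {
            struct req *waiter = blocker->children;

            blocker->children = waiter->sibling;
            waiter->sibling = NULL;
            waiter->blocker = NULL;     /* __locks_delete_block() analog */
            /* wake_up(&waiter->fl_wait) or ->lm_notify() would go here */
        }
    }

The exported locks_delete_block() built on this gains a lockless fast path: only the owning thread can re-arm fl_blocker, so a NULL fl_blocker plus an empty fl_blocked_requests list can be tested without taking blocked_lock_lock, as the comment in the next hunk explains.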
+ */ +int locks_delete_block(struct file_lock *waiter) +{ + int status = -ENOENT; + + /* + * If fl_blocker is NULL, it won't be set again as this thread + * "owns" the lock and is the only one that might try to claim + * the lock. So it is safe to test fl_blocker locklessly. + * Also if fl_blocker is NULL, this waiter is not listed on + * fl_blocked_requests for some lock, so no other request can + * be added to the list of fl_blocked_requests for this + * request. So if fl_blocker is NULL, it is safe to + * locklessly check if fl_blocked_requests is empty. If both + * of these checks succeed, there is no need to take the lock. + */ + if (waiter->fl_blocker == NULL && + list_empty(&waiter->fl_blocked_requests)) + return status; spin_lock(&blocked_lock_lock); + if (waiter->fl_blocker) + status = 0; + __locks_wake_up_blocks(waiter); __locks_delete_block(waiter); spin_unlock(&blocked_lock_lock); + return status; } +EXPORT_SYMBOL(locks_delete_block); /* Insert waiter into blocker's block list. * We use a circular list so that processes can be easily woken up in @@ -683,26 +786,49 @@ static void locks_delete_block(struct file_lock *waiter) * it seems like the reasonable thing to do. * * Must be called with both the flc_lock and blocked_lock_lock held. The - * fl_block list itself is protected by the blocked_lock_lock, but by ensuring - * that the flc_lock is also held on insertions we can avoid taking the - * blocked_lock_lock in some cases when we see that the fl_block list is empty. + * fl_blocked_requests list itself is protected by the blocked_lock_lock, + * but by ensuring that the flc_lock is also held on insertions we can avoid + * taking the blocked_lock_lock in some cases when we see that the + * fl_blocked_requests list is empty. + * + * Rather than just adding to the list, we check for conflicts with any existing + * waiters, and add beneath any waiter that blocks the new waiter. + * Thus wakeups don't happen until needed. */ static void __locks_insert_block(struct file_lock *blocker, - struct file_lock *waiter) + struct file_lock *waiter, + bool conflict(struct file_lock *, + struct file_lock *)) { - BUG_ON(!list_empty(&waiter->fl_block)); - waiter->fl_next = blocker; - list_add_tail(&waiter->fl_block, &blocker->fl_block); + struct file_lock *fl; + BUG_ON(!list_empty(&waiter->fl_blocked_member)); + +new_blocker: + list_for_each_entry(fl, &blocker->fl_blocked_requests, fl_blocked_member) + if (conflict(fl, waiter)) { + blocker = fl; + goto new_blocker; + } + waiter->fl_blocker = blocker; + list_add_tail(&waiter->fl_blocked_member, &blocker->fl_blocked_requests); if (IS_POSIX(blocker) && !IS_OFDLCK(blocker)) locks_insert_global_blocked(waiter); + + /* The requests in waiter->fl_blocked are known to conflict with + * waiter, but might not conflict with blocker, or the requests + * and lock which block it. So they all need to be woken. + */ + __locks_wake_up_blocks(waiter); } /* Must be called with flc_lock held. */ static void locks_insert_block(struct file_lock *blocker, - struct file_lock *waiter) + struct file_lock *waiter, + bool conflict(struct file_lock *, + struct file_lock *)) { spin_lock(&blocked_lock_lock); - __locks_insert_block(blocker, waiter); + __locks_insert_block(blocker, waiter, conflict); spin_unlock(&blocked_lock_lock); } @@ -716,25 +842,15 @@ static void locks_wake_up_blocks(struct file_lock *blocker) /* * Avoid taking global lock if list is empty. 
This is safe since new * blocked requests are only added to the list under the flc_lock, and - * the flc_lock is always held here. Note that removal from the fl_block - * list does not require the flc_lock, so we must recheck list_empty() - * after acquiring the blocked_lock_lock. + * the flc_lock is always held here. Note that removal from the + * fl_blocked_requests list does not require the flc_lock, so we must + * recheck list_empty() after acquiring the blocked_lock_lock. */ - if (list_empty(&blocker->fl_block)) + if (list_empty(&blocker->fl_blocked_requests)) return; spin_lock(&blocked_lock_lock); - while (!list_empty(&blocker->fl_block)) { - struct file_lock *waiter; - - waiter = list_first_entry(&blocker->fl_block, - struct file_lock, fl_block); - __locks_delete_block(waiter); - if (waiter->fl_lmops && waiter->fl_lmops->lm_notify) - waiter->fl_lmops->lm_notify(waiter); - else - wake_up(&waiter->fl_wait); - } + __locks_wake_up_blocks(blocker); spin_unlock(&blocked_lock_lock); } @@ -766,47 +882,50 @@ locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose) /* Determine if lock sys_fl blocks lock caller_fl. Common functionality * checks for shared/exclusive status of overlapping locks. */ -static int locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl) +static bool locks_conflict(struct file_lock *caller_fl, + struct file_lock *sys_fl) { if (sys_fl->fl_type == F_WRLCK) - return 1; + return true; if (caller_fl->fl_type == F_WRLCK) - return 1; - return 0; + return true; + return false; } /* Determine if lock sys_fl blocks lock caller_fl. POSIX specific * checking before calling the locks_conflict(). */ -static int posix_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl) +static bool posix_locks_conflict(struct file_lock *caller_fl, + struct file_lock *sys_fl) { /* POSIX locks owned by the same process do not conflict with * each other. */ if (posix_same_owner(caller_fl, sys_fl)) - return (0); + return false; /* Check whether they overlap */ if (!locks_overlap(caller_fl, sys_fl)) - return 0; + return false; - return (locks_conflict(caller_fl, sys_fl)); + return locks_conflict(caller_fl, sys_fl); } /* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific * checking before calling the locks_conflict(). */ -static int flock_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl) +static bool flock_locks_conflict(struct file_lock *caller_fl, + struct file_lock *sys_fl) { /* FLOCK locks referring to the same filp do not conflict with * each other. 
*/ if (caller_fl->fl_file == sys_fl->fl_file) - return (0); + return false; if ((caller_fl->fl_type & LOCK_MAND) || (sys_fl->fl_type & LOCK_MAND)) - return 0; + return false; - return (locks_conflict(caller_fl, sys_fl)); + return locks_conflict(caller_fl, sys_fl); } void @@ -877,8 +996,11 @@ static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl) struct file_lock *fl; hash_for_each_possible(blocked_hash, fl, fl_link, posix_owner_key(block_fl)) { - if (posix_same_owner(fl, block_fl)) - return fl->fl_next; + if (posix_same_owner(fl, block_fl)) { + while (fl->fl_blocker) + fl = fl->fl_blocker; + return fl; + } } return NULL; } @@ -965,12 +1087,13 @@ find_conflict: if (!(request->fl_flags & FL_SLEEP)) goto out; error = FILE_LOCK_DEFERRED; - locks_insert_block(fl, request); + locks_insert_block(fl, request, flock_locks_conflict); goto out; } if (request->fl_flags & FL_ACCESS) goto out; locks_copy_lock(new_fl, request); + locks_move_blocks(new_fl, request); locks_insert_lock_ctx(new_fl, &ctx->flc_flock); new_fl = NULL; error = 0; @@ -1039,12 +1162,13 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request, spin_lock(&blocked_lock_lock); if (likely(!posix_locks_deadlock(request, fl))) { error = FILE_LOCK_DEFERRED; - __locks_insert_block(fl, request); + __locks_insert_block(fl, request, + posix_locks_conflict); } spin_unlock(&blocked_lock_lock); goto out; - } - } + } + } /* If we're just looking for a conflict, we're done. */ error = 0; @@ -1164,6 +1288,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request, goto out; } locks_copy_lock(new_fl, request); + locks_move_blocks(new_fl, request); locks_insert_lock_ctx(new_fl, &fl->fl_list); fl = new_fl; new_fl = NULL; @@ -1237,13 +1362,11 @@ static int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl) error = posix_lock_inode(inode, fl, NULL); if (error != FILE_LOCK_DEFERRED) break; - error = wait_event_interruptible(fl->fl_wait, !fl->fl_next); - if (!error) - continue; - - locks_delete_block(fl); - break; + error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker); + if (error) + break; } + locks_delete_block(fl); return error; } @@ -1324,7 +1447,7 @@ int locks_mandatory_area(struct inode *inode, struct file *filp, loff_t start, error = posix_lock_inode(inode, &fl, NULL); if (error != FILE_LOCK_DEFERRED) break; - error = wait_event_interruptible(fl.fl_wait, !fl.fl_next); + error = wait_event_interruptible(fl.fl_wait, !fl.fl_blocker); if (!error) { /* * If we've been sleeping someone might have @@ -1334,13 +1457,12 @@ int locks_mandatory_area(struct inode *inode, struct file *filp, loff_t start, continue; } - locks_delete_block(&fl); break; } + locks_delete_block(&fl); return error; } - EXPORT_SYMBOL(locks_mandatory_area); #endif /* CONFIG_MANDATORY_FILE_LOCKING */ @@ -1511,14 +1633,14 @@ restart: break_time -= jiffies; if (break_time == 0) break_time++; - locks_insert_block(fl, new_fl); + locks_insert_block(fl, new_fl, leases_conflict); trace_break_lease_block(inode, new_fl); spin_unlock(&ctx->flc_lock); percpu_up_read_preempt_enable(&file_rwsem); locks_dispose_list(&dispose); error = wait_event_interruptible_timeout(new_fl->fl_wait, - !new_fl->fl_next, break_time); + !new_fl->fl_blocker, break_time); percpu_down_read_preempt_disable(&file_rwsem); spin_lock(&ctx->flc_lock); @@ -1542,7 +1664,6 @@ out: locks_free_lock(new_fl); return error; } - EXPORT_SYMBOL(__break_lease); /** @@ -1573,7 +1694,6 @@ void lease_get_mtime(struct inode *inode, struct timespec64 *time) 
if (has_lease) *time = current_time(inode); } - EXPORT_SYMBOL(lease_get_mtime); /** @@ -1628,8 +1748,8 @@ int fcntl_getlease(struct file *filp) /** * check_conflicting_open - see if the given dentry points to a file that has - * an existing open that would conflict with the - * desired lease. + * an existing open that would conflict with the + * desired lease. * @dentry: dentry to check * @arg: type of lease that we're trying to acquire * @flags: current lock flags @@ -1646,7 +1766,7 @@ check_conflicting_open(const struct dentry *dentry, const long arg, int flags) if (flags & FL_LAYOUT) return 0; - if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0)) + if ((arg == F_RDLCK) && inode_is_open_for_write(inode)) return -EAGAIN; if ((arg == F_WRLCK) && ((d_count(dentry) > 1) || @@ -1853,7 +1973,7 @@ EXPORT_SYMBOL(generic_setlease); * @arg: type of lease to obtain * @lease: file_lock to use when adding a lease * @priv: private info for lm_setup when adding a lease (may be - * NULL if lm_setup doesn't require it) + * NULL if lm_setup doesn't require it) * * Call this to establish a lease on the file. The "lease" argument is not * used for F_UNLCK requests and may be NULL. For commands that set or alter @@ -1931,13 +2051,11 @@ static int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl) error = flock_lock_inode(inode, fl); if (error != FILE_LOCK_DEFERRED) break; - error = wait_event_interruptible(fl->fl_wait, !fl->fl_next); - if (!error) - continue; - - locks_delete_block(fl); - break; + error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker); + if (error) + break; } + locks_delete_block(fl); return error; } @@ -2001,7 +2119,7 @@ SYSCALL_DEFINE2(flock, unsigned int, fd, unsigned int, cmd) !(f.file->f_mode & (FMODE_READ|FMODE_WRITE))) goto out_putf; - lock = flock_make_lock(f.file, cmd); + lock = flock_make_lock(f.file, cmd, NULL); if (IS_ERR(lock)) { error = PTR_ERR(lock); goto out_putf; @@ -2143,7 +2261,7 @@ int fcntl_getlk(struct file *filp, unsigned int cmd, struct flock *flock) error = vfs_test_lock(filp, fl); if (error) goto out; - + flock->l_type = fl->fl_type; if (fl->fl_type != F_UNLCK) { error = posix_lock_to_flock(flock, fl); @@ -2210,13 +2328,11 @@ static int do_lock_file_wait(struct file *filp, unsigned int cmd, error = vfs_lock_file(filp, cmd, fl, NULL); if (error != FILE_LOCK_DEFERRED) break; - error = wait_event_interruptible(fl->fl_wait, !fl->fl_next); - if (!error) - continue; - - locks_delete_block(fl); - break; + error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker); + if (error) + break; } + locks_delete_block(fl); return error; } @@ -2476,6 +2592,7 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner) if (!ctx || list_empty(&ctx->flc_posix)) return; + locks_init_lock(&lock); lock.fl_type = F_UNLCK; lock.fl_flags = FL_POSIX | FL_CLOSE; lock.fl_start = 0; @@ -2492,26 +2609,21 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner) lock.fl_ops->fl_release_private(&lock); trace_locks_remove_posix(inode, &lock, error); } - EXPORT_SYMBOL(locks_remove_posix); /* The i_flctx must be valid when calling into here */ static void locks_remove_flock(struct file *filp, struct file_lock_context *flctx) { - struct file_lock fl = { - .fl_owner = filp, - .fl_pid = current->tgid, - .fl_file = filp, - .fl_flags = FL_FLOCK | FL_CLOSE, - .fl_type = F_UNLCK, - .fl_end = OFFSET_MAX, - }; + struct file_lock fl; struct inode *inode = locks_inode(filp); if (list_empty(&flctx->flc_flock)) return; + flock_make_lock(filp, LOCK_UN, &fl); + 
fl.fl_flags |= FL_CLOSE; + if (filp->f_op->flock) filp->f_op->flock(filp, F_SETLKW, &fl); else @@ -2569,27 +2681,6 @@ void locks_remove_file(struct file *filp) spin_unlock(&ctx->flc_lock); } -/** - * posix_unblock_lock - stop waiting for a file lock - * @waiter: the lock which was waiting - * - * lockd needs to block waiting for locks. - */ -int -posix_unblock_lock(struct file_lock *waiter) -{ - int status = 0; - - spin_lock(&blocked_lock_lock); - if (waiter->fl_next) - __locks_delete_block(waiter); - else - status = -ENOENT; - spin_unlock(&blocked_lock_lock); - return status; -} -EXPORT_SYMBOL(posix_unblock_lock); - /** * vfs_cancel_lock - file byte range unblock lock * @filp: The file to apply the unblock to @@ -2603,7 +2694,6 @@ int vfs_cancel_lock(struct file *filp, struct file_lock *fl) return filp->f_op->lock(filp, F_CANCELLK, fl); return 0; } - EXPORT_SYMBOL_GPL(vfs_cancel_lock); #ifdef CONFIG_PROC_FS @@ -2707,7 +2797,7 @@ static int locks_show(struct seq_file *f, void *v) lock_get_status(f, fl, iter->li_pos, ""); - list_for_each_entry(bfl, &fl->fl_block, fl_block) + list_for_each_entry(bfl, &fl->fl_blocked_requests, fl_blocked_member) lock_get_status(f, bfl, iter->li_pos, " ->"); return 0; @@ -2803,7 +2893,6 @@ static int __init filelock_init(void) filelock_cache = kmem_cache_create("file_lock_cache", sizeof(struct file_lock), 0, SLAB_PANIC, NULL); - for_each_possible_cpu(i) { struct file_lock_list_struct *fll = per_cpu_ptr(&file_lock_list, i); @@ -2813,5 +2902,4 @@ static int __init filelock_init(void) return 0; } - core_initcall(filelock_init); diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 867457d6dfbe..0ba2b0fb8ff3 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -6311,7 +6311,8 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl, /* Ensure we don't close file until we're done freeing locks! */ p->ctx = get_nfs_open_context(ctx); p->l_ctx = nfs_get_lock_context(ctx); - memcpy(&p->fl, fl, sizeof(p->fl)); + locks_init_lock(&p->fl); + locks_copy_lock(&p->fl, fl); p->server = NFS_SERVER(inode); return p; } @@ -6533,7 +6534,8 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl, p->server = server; refcount_inc(&lsp->ls_count); p->ctx = get_nfs_open_context(ctx); - memcpy(&p->fl, fl, sizeof(p->fl)); + locks_init_lock(&p->fl); + locks_copy_lock(&p->fl, fl); return p; out_free_seqid: nfs_free_seqid(p->arg.open_seqid); diff --git a/fs/nfs/write.c b/fs/nfs/write.c index 586726a590d8..4f15665f0ad1 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -2121,7 +2121,7 @@ int __init nfs_init_writepagecache(void) * This allows larger machines to have larger/more transfers. 
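[Annotation] flock_make_lock() now accepts an optional caller-supplied file_lock, which lets locks_remove_flock() above build its F_UNLCK request on the stack while still passing through locks_init_lock(); the old designated-initializer struct (and the memcpy()-based copies replaced in the NFS hunks) left the new fl_blocked_requests/fl_blocked_member list heads uninitialized. The init-or-alloc shape, sketched standalone with hypothetical names:

    #include <stdlib.h>
    #include <string.h>

    struct flock_req {
        int type;
        int flags;
        /* ... list heads that must always be initialized ... */
    };

    static void req_init(struct flock_req *fl)  /* locks_init_lock() analog */
    {
        memset(fl, 0, sizeof(*fl));
    }

    /* Fill the caller's request if one is given, else allocate. */
    static struct flock_req *make_req(struct flock_req *fl, int type)
    {
        if (fl == NULL) {
            fl = malloc(sizeof(*fl));
            if (fl == NULL)
                return NULL;
        }
        req_init(fl);
        fl->type = type;
        return fl;
    }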
* Limit the default to 256M */ - nfs_congestion_kb = (16*int_sqrt(totalram_pages)) << (PAGE_SHIFT-10); + nfs_congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10); if (nfs_congestion_kb > 256*1024) nfs_congestion_kb = 256*1024; diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index f093fbe47133..a334828723fa 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -238,7 +238,7 @@ find_blocked_lock(struct nfs4_lockowner *lo, struct knfsd_fh *fh, } spin_unlock(&nn->blocked_locks_lock); if (found) - posix_unblock_lock(&found->nbl_lock); + locks_delete_block(&found->nbl_lock); return found; } @@ -293,7 +293,7 @@ remove_blocked_locks(struct nfs4_lockowner *lo) nbl = list_first_entry(&reaplist, struct nfsd4_blocked_lock, nbl_lru); list_del_init(&nbl->nbl_lru); - posix_unblock_lock(&nbl->nbl_lock); + locks_delete_block(&nbl->nbl_lock); free_blocked_lock(nbl); } } @@ -4863,7 +4863,7 @@ nfs4_laundromat(struct nfsd_net *nn) nbl = list_first_entry(&reaplist, struct nfsd4_blocked_lock, nbl_lru); list_del_init(&nbl->nbl_lru); - posix_unblock_lock(&nbl->nbl_lock); + locks_delete_block(&nbl->nbl_lock); free_blocked_lock(nbl); } out: diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c index e2fe0e9ce0df..da52b594362a 100644 --- a/fs/nfsd/nfscache.c +++ b/fs/nfsd/nfscache.c @@ -99,7 +99,7 @@ static unsigned int nfsd_cache_size_limit(void) { unsigned int limit; - unsigned long low_pages = totalram_pages - totalhigh_pages; + unsigned long low_pages = totalram_pages() - totalhigh_pages(); limit = (16 * int_sqrt(low_pages)) << (PAGE_SHIFT-10); return min_t(unsigned int, limit, 256*1024); diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c index e08a6647267b..3723f3d18d20 100644 --- a/fs/notify/fanotify/fanotify.c +++ b/fs/notify/fanotify/fanotify.c @@ -89,7 +89,13 @@ static int fanotify_get_response(struct fsnotify_group *group, return ret; } -static bool fanotify_should_send_event(struct fsnotify_iter_info *iter_info, +/* + * This function returns a mask for an event that only contains the flags + * that have been specifically requested by the user. Flags that may have + * been included within the event mask, but have not been explicitly + * requested by the user, will not be present in the returned mask. 
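[Annotation] fanotify_should_send_event() becomes fanotify_group_event_mask(): instead of a yes/no answer, it returns only the event bits the group actually subscribed to, so a listener no longer receives flags that merely rode along in the raw event mask. The final expression reduces to a single intersection; illustrative bit values, not the kernel's FAN_* constants:

    #include <stdint.h>

    #define EV_OPEN   0x1u
    #define EV_MODIFY 0x2u
    #define OUTGOING  (EV_OPEN | EV_MODIFY)

    static uint32_t group_event_mask(uint32_t event_mask, uint32_t marks_mask,
                                     uint32_t marks_ignored_mask)
    {
        return event_mask & OUTGOING & marks_mask & ~marks_ignored_mask;
    }

A zero result now means "drop the event", replacing the old boolean false.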
+ */ +static u32 fanotify_group_event_mask(struct fsnotify_iter_info *iter_info, u32 event_mask, const void *data, int data_type) { @@ -101,14 +107,14 @@ static bool fanotify_should_send_event(struct fsnotify_iter_info *iter_info, pr_debug("%s: report_mask=%x mask=%x data=%p data_type=%d\n", __func__, iter_info->report_mask, event_mask, data, data_type); - /* if we don't have enough info to send an event to userspace say no */ + /* If we don't have enough info to send an event to userspace say no */ if (data_type != FSNOTIFY_EVENT_PATH) - return false; + return 0; - /* sorry, fanotify only gives a damn about files and dirs */ + /* Sorry, fanotify only gives a damn about files and dirs */ if (!d_is_reg(path->dentry) && !d_can_lookup(path->dentry)) - return false; + return 0; fsnotify_foreach_obj_type(type) { if (!fsnotify_iter_should_report_type(iter_info, type)) @@ -129,13 +135,10 @@ static bool fanotify_should_send_event(struct fsnotify_iter_info *iter_info, if (d_is_dir(path->dentry) && !(marks_mask & FS_ISDIR & ~marks_ignored_mask)) - return false; - - if (event_mask & FANOTIFY_OUTGOING_EVENTS & - marks_mask & ~marks_ignored_mask) - return true; + return 0; - return false; + return event_mask & FANOTIFY_OUTGOING_EVENTS & marks_mask & + ~marks_ignored_mask; } struct fanotify_event_info *fanotify_alloc_event(struct fsnotify_group *group, @@ -207,10 +210,13 @@ static int fanotify_handle_event(struct fsnotify_group *group, BUILD_BUG_ON(FAN_OPEN_PERM != FS_OPEN_PERM); BUILD_BUG_ON(FAN_ACCESS_PERM != FS_ACCESS_PERM); BUILD_BUG_ON(FAN_ONDIR != FS_ISDIR); + BUILD_BUG_ON(FAN_OPEN_EXEC != FS_OPEN_EXEC); + BUILD_BUG_ON(FAN_OPEN_EXEC_PERM != FS_OPEN_EXEC_PERM); - BUILD_BUG_ON(HWEIGHT32(ALL_FANOTIFY_EVENT_BITS) != 10); + BUILD_BUG_ON(HWEIGHT32(ALL_FANOTIFY_EVENT_BITS) != 12); - if (!fanotify_should_send_event(iter_info, mask, data, data_type)) + mask = fanotify_group_event_mask(iter_info, mask, data, data_type); + if (!mask) return 0; pr_debug("%s: group=%p inode=%p mask=%x\n", __func__, group, inode, diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c index e03be5071362..9c870b0d2b56 100644 --- a/fs/notify/fanotify/fanotify_user.c +++ b/fs/notify/fanotify/fanotify_user.c @@ -206,7 +206,7 @@ static int process_access_response(struct fsnotify_group *group, static ssize_t copy_event_to_user(struct fsnotify_group *group, struct fsnotify_event *event, - char __user *buf) + char __user *buf, size_t count) { struct fanotify_event_metadata fanotify_event_metadata; struct file *f; @@ -220,6 +220,12 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group, fd = fanotify_event_metadata.fd; ret = -EFAULT; + /* + * Sanity check copy size in case get_one_event() and + * fill_event_metadata() event_len sizes ever get out of sync. + */ + if (WARN_ON_ONCE(fanotify_event_metadata.event_len > count)) + goto out_close_fd; if (copy_to_user(buf, &fanotify_event_metadata, fanotify_event_metadata.event_len)) goto out_close_fd; @@ -295,7 +301,7 @@ static ssize_t fanotify_read(struct file *file, char __user *buf, continue; } - ret = copy_event_to_user(group, kevent, buf); + ret = copy_event_to_user(group, kevent, buf, count); if (unlikely(ret == -EOPENSTALE)) { /* * We cannot report events with stale fd so drop it. 
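[Annotation] copy_event_to_user() now receives the reader's count and refuses, with a one-time warning, to copy an event whose self-reported event_len exceeds it, guarding against get_one_event() and fill_event_metadata() ever disagreeing about sizes. The defensive shape, sketched standalone:

    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>

    /* Never trust an internal length field past the destination size. */
    static ssize_t copy_event(char *buf, size_t count,
                              const void *event, size_t event_len)
    {
        if (event_len > count)
            return -1;          /* kernel: WARN_ON_ONCE() then -EFAULT */
        memcpy(buf, event, event_len);
        return (ssize_t)event_len;
    }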
@@ -669,7 +675,7 @@ static int fanotify_add_inode_mark(struct fsnotify_group *group, */ if ((flags & FAN_MARK_IGNORED_MASK) && !(flags & FAN_MARK_IGNORED_SURV_MODIFY) && - (atomic_read(&inode->i_writecount) > 0)) + inode_is_open_for_write(inode)) return 0; return fanotify_add_mark(group, &inode->i_fsnotify_marks, diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c index 348a184bcdda..1e2bfd26b352 100644 --- a/fs/notify/fdinfo.c +++ b/fs/notify/fdinfo.c @@ -15,6 +15,7 @@ #include #include "inotify/inotify.h" +#include "fdinfo.h" #include "fsnotify.h" #if defined(CONFIG_PROC_FS) diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c index d2c34900ae05..ecf09b6243d9 100644 --- a/fs/notify/fsnotify.c +++ b/fs/notify/fsnotify.c @@ -401,7 +401,7 @@ static __init int fsnotify_init(void) { int ret; - BUILD_BUG_ON(HWEIGHT32(ALL_FSNOTIFY_BITS) != 23); + BUILD_BUG_ON(HWEIGHT32(ALL_FSNOTIFY_BITS) != 25); ret = init_srcu_struct(&fsnotify_mark_srcu); if (ret) diff --git a/fs/ntfs/malloc.h b/fs/ntfs/malloc.h index ab172e5f51d9..5becc8acc8f4 100644 --- a/fs/ntfs/malloc.h +++ b/fs/ntfs/malloc.h @@ -47,7 +47,7 @@ static inline void *__ntfs_malloc(unsigned long size, gfp_t gfp_mask) return kmalloc(PAGE_SIZE, gfp_mask & ~__GFP_HIGHMEM); /* return (void *)__get_free_page(gfp_mask); */ } - if (likely((size >> PAGE_SHIFT) < totalram_pages)) + if (likely((size >> PAGE_SHIFT) < totalram_pages())) return __vmalloc(size, gfp_mask, PAGE_KERNEL); return NULL; } diff --git a/fs/ocfs2/Makefile b/fs/ocfs2/Makefile index 99ee093182cb..cc9b32b9db7c 100644 --- a/fs/ocfs2/Makefile +++ b/fs/ocfs2/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -ccflags-y := -Ifs/ocfs2 +ccflags-y := -I$(src) obj-$(CONFIG_OCFS2_FS) += \ ocfs2.o \ diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c index 4ebbd57cbf84..f9b84f7a3e4b 100644 --- a/fs/ocfs2/buffer_head_io.c +++ b/fs/ocfs2/buffer_head_io.c @@ -161,7 +161,6 @@ int ocfs2_read_blocks_sync(struct ocfs2_super *osb, u64 block, #endif } - clear_buffer_uptodate(bh); get_bh(bh); /* for end_buffer_read_sync() */ bh->b_end_io = end_buffer_read_sync; submit_bh(REQ_OP_READ, 0, bh); @@ -341,7 +340,6 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr, continue; } - clear_buffer_uptodate(bh); get_bh(bh); /* for end_buffer_read_sync() */ if (validate) set_buffer_needs_validate(bh); diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c index 9b2ed62dd638..f3c20b279eb2 100644 --- a/fs/ocfs2/cluster/heartbeat.c +++ b/fs/ocfs2/cluster/heartbeat.c @@ -582,9 +582,10 @@ bail: } static int o2hb_read_slots(struct o2hb_region *reg, + unsigned int begin_slot, unsigned int max_slots) { - unsigned int current_slot=0; + unsigned int current_slot = begin_slot; int status; struct o2hb_bio_wait_ctxt wc; struct bio *bio; @@ -1093,9 +1094,14 @@ static int o2hb_highest_node(unsigned long *nodes, int numbits) return find_last_bit(nodes, numbits); } +static int o2hb_lowest_node(unsigned long *nodes, int numbits) +{ + return find_first_bit(nodes, numbits); +} + static int o2hb_do_disk_heartbeat(struct o2hb_region *reg) { - int i, ret, highest_node; + int i, ret, highest_node, lowest_node; int membership_change = 0, own_slot_ok = 0; unsigned long configured_nodes[BITS_TO_LONGS(O2NM_MAX_NODES)]; unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; @@ -1120,7 +1126,8 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg) } highest_node = o2hb_highest_node(configured_nodes, O2NM_MAX_NODES); - if (highest_node >= O2NM_MAX_NODES) { + 
lowest_node = o2hb_lowest_node(configured_nodes, O2NM_MAX_NODES); + if (highest_node >= O2NM_MAX_NODES || lowest_node >= O2NM_MAX_NODES) { mlog(ML_NOTICE, "o2hb: No configured nodes found!\n"); ret = -EINVAL; goto bail; @@ -1130,7 +1137,7 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg) * yet. Of course, if the node definitions have holes in them * then we're reading an empty slot anyway... Consider this * best-effort. */ - ret = o2hb_read_slots(reg, highest_node + 1); + ret = o2hb_read_slots(reg, lowest_node, highest_node + 1); if (ret < 0) { mlog_errno(ret); goto bail; @@ -1801,7 +1808,7 @@ static int o2hb_populate_slot_data(struct o2hb_region *reg) struct o2hb_disk_slot *slot; struct o2hb_disk_heartbeat_block *hb_block; - ret = o2hb_read_slots(reg, reg->hr_blocks); + ret = o2hb_read_slots(reg, 0, reg->hr_blocks); if (ret) goto out; diff --git a/fs/ocfs2/dlm/Makefile b/fs/ocfs2/dlm/Makefile index bd1aab1f49a4..ef2854422a6e 100644 --- a/fs/ocfs2/dlm/Makefile +++ b/fs/ocfs2/dlm/Makefile @@ -1,4 +1,4 @@ -ccflags-y := -Ifs/ocfs2 +ccflags-y := -I$(src)/.. obj-$(CONFIG_OCFS2_FS_O2CB) += ocfs2_dlm.o diff --git a/fs/ocfs2/dlmfs/Makefile b/fs/ocfs2/dlmfs/Makefile index eed3db8c5b49..33431a0296a3 100644 --- a/fs/ocfs2/dlmfs/Makefile +++ b/fs/ocfs2/dlmfs/Makefile @@ -1,4 +1,4 @@ -ccflags-y := -Ifs/ocfs2 +ccflags-y := -I$(src)/.. obj-$(CONFIG_OCFS2_FS) += ocfs2_dlmfs.o diff --git a/fs/ocfs2/dlmfs/dlmfs.c b/fs/ocfs2/dlmfs/dlmfs.c index 602c71f32740..b8fa1487cd85 100644 --- a/fs/ocfs2/dlmfs/dlmfs.c +++ b/fs/ocfs2/dlmfs/dlmfs.c @@ -179,7 +179,7 @@ bail: static int dlmfs_file_release(struct inode *inode, struct file *file) { - int level, status; + int level; struct dlmfs_inode_private *ip = DLMFS_I(inode); struct dlmfs_filp_private *fp = file->private_data; @@ -188,7 +188,6 @@ static int dlmfs_file_release(struct inode *inode, mlog(0, "close called on inode %lu\n", inode->i_ino); - status = 0; if (fp) { level = fp->fp_lock_level; if (level != DLM_LOCK_IV) diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c index b63c97f4318e..46fd3ef2cf21 100644 --- a/fs/ocfs2/journal.c +++ b/fs/ocfs2/journal.c @@ -1017,7 +1017,8 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb) mlog_errno(status); } - if (status == 0) { + /* Shutdown the kernel journal system */ + if (!jbd2_journal_destroy(journal->j_journal) && !status) { /* * Do not toggle if flush was unsuccessful otherwise * will leave dirty metadata in a "clean" journal @@ -1026,9 +1027,6 @@ void ocfs2_journal_shutdown(struct ocfs2_super *osb) if (status < 0) mlog_errno(status); } - - /* Shutdown the kernel journal system */ - jbd2_journal_destroy(journal->j_journal); journal->j_journal = NULL; OCFS2_I(inode)->ip_open_count--; diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c index 7642b6712c39..58973e4d2471 100644 --- a/fs/ocfs2/localalloc.c +++ b/fs/ocfs2/localalloc.c @@ -345,13 +345,18 @@ int ocfs2_load_local_alloc(struct ocfs2_super *osb) if (num_used || alloc->id1.bitmap1.i_used || alloc->id1.bitmap1.i_total - || la->la_bm_off) - mlog(ML_ERROR, "Local alloc hasn't been recovered!\n" + || la->la_bm_off) { + mlog(ML_ERROR, "inconsistent detected, clean journal with" + " unrecovered local alloc, please run fsck.ocfs2!\n" "found = %u, set = %u, taken = %u, off = %u\n", num_used, le32_to_cpu(alloc->id1.bitmap1.i_used), le32_to_cpu(alloc->id1.bitmap1.i_total), OCFS2_LOCAL_ALLOC(alloc)->la_bm_off); + status = -EINVAL; + goto bail; + } + osb->local_alloc_bh = alloc_bh; osb->local_alloc_state = OCFS2_LA_ENABLED; @@ -835,7 +840,7 @@ static 
int ocfs2_local_alloc_find_clear_bits(struct ocfs2_super *osb, u32 *numbits, struct ocfs2_alloc_reservation *resv) { - int numfound = 0, bitoff, left, startoff, lastzero; + int numfound = 0, bitoff, left, startoff; int local_resv = 0; struct ocfs2_alloc_reservation r; void *bitmap = NULL; @@ -873,7 +878,6 @@ static int ocfs2_local_alloc_find_clear_bits(struct ocfs2_super *osb, bitmap = OCFS2_LOCAL_ALLOC(alloc)->la_bitmap; numfound = bitoff = startoff = 0; - lastzero = -1; left = le32_to_cpu(alloc->id1.bitmap1.i_total); while ((bitoff = ocfs2_find_next_zero_bit(bitmap, left, startoff)) != -1) { if (bitoff == left) { diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c index d56f0079b858..b11acd34001a 100644 --- a/fs/ocfs2/locks.c +++ b/fs/ocfs2/locks.c @@ -52,6 +52,7 @@ static int ocfs2_do_flock(struct file *file, struct inode *inode, if (lockres->l_flags & OCFS2_LOCK_ATTACHED && lockres->l_level > LKM_NLMODE) { int old_level = 0; + struct file_lock request; if (lockres->l_level == LKM_EXMODE) old_level = 1; @@ -66,11 +67,10 @@ static int ocfs2_do_flock(struct file *file, struct inode *inode, * level. */ - locks_lock_file_wait(file, - &(struct file_lock) { - .fl_type = F_UNLCK, - .fl_flags = FL_FLOCK - }); + locks_init_lock(&request); + request.fl_type = F_UNLCK; + request.fl_flags = FL_FLOCK; + locks_lock_file_wait(file, &request); ocfs2_file_unlock(file); } diff --git a/fs/proc/array.c b/fs/proc/array.c index 0ceb3b6b37e7..9d428d5a0ac8 100644 --- a/fs/proc/array.c +++ b/fs/proc/array.c @@ -392,6 +392,15 @@ static inline void task_core_dumping(struct seq_file *m, struct mm_struct *mm) seq_putc(m, '\n'); } +static inline void task_thp_status(struct seq_file *m, struct mm_struct *mm) +{ + bool thp_enabled = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE); + + if (thp_enabled) + thp_enabled = !test_bit(MMF_DISABLE_THP, &mm->flags); + seq_printf(m, "THP_enabled:\t%d\n", thp_enabled); +} + int proc_pid_status(struct seq_file *m, struct pid_namespace *ns, struct pid *pid, struct task_struct *task) { @@ -406,6 +415,7 @@ int proc_pid_status(struct seq_file *m, struct pid_namespace *ns, if (mm) { task_mem(m, mm); task_core_dumping(m, mm); + task_thp_status(m, mm); mmput(mm); } task_sig(m, task); diff --git a/fs/proc/base.c b/fs/proc/base.c index ce3465479447..d7fd1ca807d2 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -530,7 +530,7 @@ static const struct file_operations proc_lstats_operations = { static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns, struct pid *pid, struct task_struct *task) { - unsigned long totalpages = totalram_pages + total_swap_pages; + unsigned long totalpages = totalram_pages() + total_swap_pages; unsigned long points = 0; points = oom_badness(task, NULL, NULL, totalpages) * diff --git a/fs/proc/page.c b/fs/proc/page.c index 6c517b11acf8..40b05e0d4274 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -46,7 +46,7 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf, ppage = pfn_to_page(pfn); else ppage = NULL; - if (!ppage || PageSlab(ppage)) + if (!ppage || PageSlab(ppage) || page_has_type(ppage)) pcount = 0; else pcount = page_mapcount(ppage); diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 47c3764c469b..f0ec9edab2f3 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -790,6 +790,8 @@ static int show_smap(struct seq_file *m, void *v) __show_smap(m, &mss); + seq_printf(m, "THPeligible: %d\n", transparent_hugepage_enabled(vma)); + if (arch_pkeys_enabled()) seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma)); 
show_smap_vma_flags(m, vma); @@ -1096,6 +1098,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, return -ESRCH; mm = get_task_mm(task); if (mm) { + struct mmu_notifier_range range; struct clear_refs_private cp = { .type = type, }; @@ -1139,11 +1142,13 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, downgrade_write(&mm->mmap_sem); break; } - mmu_notifier_invalidate_range_start(mm, 0, -1); + + mmu_notifier_range_init(&range, mm, 0, -1UL); + mmu_notifier_invalidate_range_start(&range); } walk_page_range(0, mm->highest_vm_end, &clear_refs_walk); if (type == CLEAR_REFS_SOFT_DIRTY) - mmu_notifier_invalidate_range_end(mm, 0, -1); + mmu_notifier_invalidate_range_end(&range); tlb_finish_mmu(&tlb, 0, -1); up_read(&mm->mmap_sem); out_mm: diff --git a/fs/pstore/ftrace.c b/fs/pstore/ftrace.c index 06aab07b6bb7..b8a0931568f8 100644 --- a/fs/pstore/ftrace.c +++ b/fs/pstore/ftrace.c @@ -148,7 +148,7 @@ void pstore_unregister_ftrace(void) mutex_lock(&pstore_ftrace_lock); if (pstore_ftrace_enabled) { unregister_ftrace_function(&pstore_ftrace_ops); - pstore_ftrace_enabled = 0; + pstore_ftrace_enabled = false; } mutex_unlock(&pstore_ftrace_lock); diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c index 8cf2218b46a7..c60ee46f3e39 100644 --- a/fs/pstore/inode.c +++ b/fs/pstore/inode.c @@ -335,53 +335,10 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record) goto fail_alloc; private->record = record; - switch (record->type) { - case PSTORE_TYPE_DMESG: - scnprintf(name, sizeof(name), "dmesg-%s-%llu%s", - record->psi->name, record->id, - record->compressed ? ".enc.z" : ""); - break; - case PSTORE_TYPE_CONSOLE: - scnprintf(name, sizeof(name), "console-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_FTRACE: - scnprintf(name, sizeof(name), "ftrace-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_MCE: - scnprintf(name, sizeof(name), "mce-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_PPC_RTAS: - scnprintf(name, sizeof(name), "rtas-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_PPC_OF: - scnprintf(name, sizeof(name), "powerpc-ofw-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_PPC_COMMON: - scnprintf(name, sizeof(name), "powerpc-common-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_PMSG: - scnprintf(name, sizeof(name), "pmsg-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_PPC_OPAL: - scnprintf(name, sizeof(name), "powerpc-opal-%s-%llu", - record->psi->name, record->id); - break; - case PSTORE_TYPE_UNKNOWN: - scnprintf(name, sizeof(name), "unknown-%s-%llu", - record->psi->name, record->id); - break; - default: - scnprintf(name, sizeof(name), "type%d-%s-%llu", - record->type, record->psi->name, record->id); - break; - } + scnprintf(name, sizeof(name), "%s-%s-%llu%s", + pstore_type_to_name(record->type), + record->psi->name, record->id, + record->compressed ? 
".enc.z" : ""); dentry = d_alloc_name(root, name); if (!dentry) diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c index b821054ca3ed..2d1066ed3c28 100644 --- a/fs/pstore/platform.c +++ b/fs/pstore/platform.c @@ -59,6 +59,19 @@ MODULE_PARM_DESC(update_ms, "milliseconds before pstore updates its content " "enabling this option is not safe, it may lead to further " "corruption on Oopses)"); +/* Names should be in the same order as the enum pstore_type_id */ +static const char * const pstore_type_names[] = { + "dmesg", + "mce", + "console", + "ftrace", + "rtas", + "powerpc-ofw", + "powerpc-common", + "pmsg", + "powerpc-opal", +}; + static int pstore_new_entry; static void pstore_timefunc(struct timer_list *); @@ -104,6 +117,30 @@ void pstore_set_kmsg_bytes(int bytes) /* Tag each group of saved records with a sequence number */ static int oopscount; +const char *pstore_type_to_name(enum pstore_type_id type) +{ + BUILD_BUG_ON(ARRAY_SIZE(pstore_type_names) != PSTORE_TYPE_MAX); + + if (WARN_ON_ONCE(type >= PSTORE_TYPE_MAX)) + return "unknown"; + + return pstore_type_names[type]; +} +EXPORT_SYMBOL_GPL(pstore_type_to_name); + +enum pstore_type_id pstore_name_to_type(const char *name) +{ + int i; + + for (i = 0; i < PSTORE_TYPE_MAX; i++) { + if (!strcmp(pstore_type_names[i], name)) + return i; + } + + return PSTORE_TYPE_MAX; +} +EXPORT_SYMBOL_GPL(pstore_name_to_type); + static const char *get_reason_str(enum kmsg_dump_reason reason) { switch (reason) { @@ -124,26 +161,27 @@ static const char *get_reason_str(enum kmsg_dump_reason reason) } } -bool pstore_cannot_block_path(enum kmsg_dump_reason reason) +/* + * Should pstore_dump() wait for a concurrent pstore_dump()? If + * not, the current pstore_dump() will report a failure to dump + * and return. + */ +static bool pstore_cannot_wait(enum kmsg_dump_reason reason) { - /* - * In case of NMI path, pstore shouldn't be blocked - * regardless of reason. - */ + /* In NMI path, pstore shouldn't block regardless of reason. */ if (in_nmi()) return true; switch (reason) { /* In panic case, other cpus are stopped by smp_send_stop(). */ case KMSG_DUMP_PANIC: - /* Emergency restart shouldn't be blocked by spin lock. */ + /* Emergency restart shouldn't be blocked. */ case KMSG_DUMP_EMERG: return true; default: return false; } } -EXPORT_SYMBOL_GPL(pstore_cannot_block_path); #if IS_ENABLED(CONFIG_PSTORE_DEFLATE_COMPRESS) static int zbufsize_deflate(size_t size) @@ -258,20 +296,6 @@ static int pstore_compress(const void *in, void *out, return outlen; } -static int pstore_decompress(void *in, void *out, - unsigned int inlen, unsigned int outlen) -{ - int ret; - - ret = crypto_comp_decompress(tfm, in, inlen, out, &outlen); - if (ret) { - pr_err("crypto_comp_decompress failed, ret = %d!\n", ret); - return ret; - } - - return outlen; -} - static void allocate_buf_for_compression(void) { struct crypto_comp *ctx; @@ -318,7 +342,7 @@ static void allocate_buf_for_compression(void) big_oops_buf_sz = size; big_oops_buf = buf; - pr_info("Using compression: %s\n", zbackend->name); + pr_info("Using crash dump compression: %s\n", zbackend->name); } static void free_buf_for_compression(void) @@ -368,9 +392,8 @@ void pstore_record_init(struct pstore_record *record, } /* - * callback from kmsg_dump. (s2,l2) has the most recently - * written bytes, older bytes are in (s1,l1). Save as much - * as we can from the end of the buffer. + * callback from kmsg_dump. Save as much as we can (up to kmsg_bytes) from the + * end of the buffer. 
*/ static void pstore_dump(struct kmsg_dumper *dumper, enum kmsg_dump_reason reason) @@ -378,23 +401,23 @@ static void pstore_dump(struct kmsg_dumper *dumper, unsigned long total = 0; const char *why; unsigned int part = 1; - unsigned long flags = 0; - int is_locked; int ret; why = get_reason_str(reason); - if (pstore_cannot_block_path(reason)) { - is_locked = spin_trylock_irqsave(&psinfo->buf_lock, flags); - if (!is_locked) { - pr_err("pstore dump routine blocked in %s path, may corrupt error record\n" - , in_nmi() ? "NMI" : why); + if (down_trylock(&psinfo->buf_lock)) { + /* Failed to acquire lock: give up if we cannot wait. */ + if (pstore_cannot_wait(reason)) { + pr_err("dump skipped in %s path: may corrupt error record\n", + in_nmi() ? "NMI" : why); + return; + } + if (down_interruptible(&psinfo->buf_lock)) { + pr_err("could not grab semaphore?!\n"); return; } - } else { - spin_lock_irqsave(&psinfo->buf_lock, flags); - is_locked = 1; } + oopscount++; while (total < kmsg_bytes) { char *dst; @@ -411,7 +434,7 @@ static void pstore_dump(struct kmsg_dumper *dumper, record.part = part; record.buf = psinfo->buf; - if (big_oops_buf && is_locked) { + if (big_oops_buf) { dst = big_oops_buf; dst_size = big_oops_buf_sz; } else { @@ -429,7 +452,7 @@ static void pstore_dump(struct kmsg_dumper *dumper, dst_size, &dump_size)) break; - if (big_oops_buf && is_locked) { + if (big_oops_buf) { zipped_len = pstore_compress(dst, psinfo->buf, header_size + dump_size, psinfo->bufsize); @@ -452,8 +475,8 @@ static void pstore_dump(struct kmsg_dumper *dumper, total += record.size; part++; } - if (is_locked) - spin_unlock_irqrestore(&psinfo->buf_lock, flags); + + up(&psinfo->buf_lock); } static struct kmsg_dumper pstore_dumper = { @@ -476,31 +499,14 @@ static void pstore_unregister_kmsg(void) #ifdef CONFIG_PSTORE_CONSOLE static void pstore_console_write(struct console *con, const char *s, unsigned c) { - const char *e = s + c; - - while (s < e) { - struct pstore_record record; - unsigned long flags; + struct pstore_record record; - pstore_record_init(&record, psinfo); - record.type = PSTORE_TYPE_CONSOLE; - - if (c > psinfo->bufsize) - c = psinfo->bufsize; + pstore_record_init(&record, psinfo); + record.type = PSTORE_TYPE_CONSOLE; - if (oops_in_progress) { - if (!spin_trylock_irqsave(&psinfo->buf_lock, flags)) - break; - } else { - spin_lock_irqsave(&psinfo->buf_lock, flags); - } - record.buf = (char *)s; - record.size = c; - psinfo->write(&record); - spin_unlock_irqrestore(&psinfo->buf_lock, flags); - s += c; - c = e - s; - } + record.buf = (char *)s; + record.size = c; + psinfo->write(&record); } static struct console pstore_console = { @@ -589,6 +595,7 @@ int pstore_register(struct pstore_info *psi) psi->write_user = pstore_write_user_compat; psinfo = psi; mutex_init(&psinfo->read_mutex); + sema_init(&psinfo->buf_lock, 1); spin_unlock(&pstore_lock); if (owner && !try_module_get(owner)) { @@ -656,8 +663,9 @@ EXPORT_SYMBOL_GPL(pstore_unregister); static void decompress_record(struct pstore_record *record) { + int ret; int unzipped_len; - char *decompressed; + char *unzipped, *workspace; if (!record->compressed) return; @@ -668,35 +676,42 @@ static void decompress_record(struct pstore_record *record) return; } - /* No compression method has created the common buffer. */ + /* Missing compression buffer means compression was not initialized. 
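 *
 * [Editor's note, not part of the patch: the pstore_dump() rework in
 * this hunk replaces the spinlock trylock dance with a semaphore, so
 * the dump path may sleep when the crash context allows it. The
 * resulting locking shape, kernel context assumed:]
 *
 *	if (down_trylock(&psinfo->buf_lock)) {
 *		if (pstore_cannot_wait(reason))
 *			return;
 *		if (down_interruptible(&psinfo->buf_lock))
 *			return;
 *	}
 *	(write out the records)
 *	up(&psinfo->buf_lock);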
*/ if (!big_oops_buf) { - pr_warn("no decompression buffer allocated\n"); + pr_warn("no decompression method initialized!\n"); return; } - unzipped_len = pstore_decompress(record->buf, big_oops_buf, - record->size, big_oops_buf_sz); - if (unzipped_len <= 0) { - pr_err("decompression failed: %d\n", unzipped_len); + /* Allocate enough space to hold max decompression and ECC. */ + unzipped_len = big_oops_buf_sz; + workspace = kmalloc(unzipped_len + record->ecc_notice_size, + GFP_KERNEL); + if (!workspace) return; - } - /* Build new buffer for decompressed contents. */ - decompressed = kmalloc(unzipped_len + record->ecc_notice_size, - GFP_KERNEL); - if (!decompressed) { - pr_err("decompression ran out of memory\n"); + /* After decompression "unzipped_len" is almost certainly smaller. */ + ret = crypto_comp_decompress(tfm, record->buf, record->size, + workspace, &unzipped_len); + if (ret) { + pr_err("crypto_comp_decompress failed, ret = %d!\n", ret); + kfree(workspace); return; } - memcpy(decompressed, big_oops_buf, unzipped_len); /* Append ECC notice to decompressed buffer. */ - memcpy(decompressed + unzipped_len, record->buf + record->size, + memcpy(workspace + unzipped_len, record->buf + record->size, record->ecc_notice_size); - /* Swap out compresed contents with decompressed contents. */ + /* Copy decompressed contents into a minimum-sized allocation. */ + unzipped = kmemdup(workspace, unzipped_len + record->ecc_notice_size, + GFP_KERNEL); + kfree(workspace); + if (!unzipped) + return; + + /* Swap out compressed contents with decompressed contents. */ kfree(record->buf); - record->buf = decompressed; + record->buf = unzipped; record->size = unzipped_len; record->compressed = false; } diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c index e02a9039b5ea..96f7d32cd184 100644 --- a/fs/pstore/ram.c +++ b/fs/pstore/ram.c @@ -124,19 +124,17 @@ static int ramoops_pstore_open(struct pstore_info *psi) } static struct persistent_ram_zone * -ramoops_get_next_prz(struct persistent_ram_zone *przs[], uint *c, uint max, - u64 *id, - enum pstore_type_id *typep, enum pstore_type_id type, - bool update) +ramoops_get_next_prz(struct persistent_ram_zone *przs[], int id, + struct pstore_record *record) { struct persistent_ram_zone *prz; - int i = (*c)++; + bool update = (record->type == PSTORE_TYPE_DMESG); /* Give up if we never existed or have hit the end. 
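 *
 * [Editor's note, not part of the patch: decompress_record() above now
 * decompresses into a worst-case sized scratch buffer and keeps only a
 * right-sized kmemdup() copy. Reduced to its allocation shape, with
 * hypothetical variable names, kernel context assumed:]
 *
 *	workspace = kmalloc(max_len + extra, GFP_KERNEL);
 *	if (!workspace)
 *		return;
 *	out_len = max_len;
 *	if (crypto_comp_decompress(tfm, src, src_len, workspace, &out_len)) {
 *		kfree(workspace);
 *		return;
 *	}
 *	final = kmemdup(workspace, out_len + extra, GFP_KERNEL);
 *	kfree(workspace);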
*/ - if (!przs || i >= max) + if (!przs) return NULL; - prz = przs[i]; + prz = przs[id]; if (!prz) return NULL; @@ -147,8 +145,8 @@ ramoops_get_next_prz(struct persistent_ram_zone *przs[], uint *c, uint max, if (!persistent_ram_old_size(prz)) return NULL; - *typep = type; - *id = i; + record->type = prz->type; + record->id = id; return prz; } @@ -255,10 +253,8 @@ static ssize_t ramoops_pstore_read(struct pstore_record *record) /* Find the next valid persistent_ram_zone for DMESG */ while (cxt->dump_read_cnt < cxt->max_dump_cnt && !prz) { - prz = ramoops_get_next_prz(cxt->dprzs, &cxt->dump_read_cnt, - cxt->max_dump_cnt, &record->id, - &record->type, - PSTORE_TYPE_DMESG, 1); + prz = ramoops_get_next_prz(cxt->dprzs, cxt->dump_read_cnt++, + record); if (!prz_ok(prz)) continue; header_length = ramoops_read_kmsg_hdr(persistent_ram_old(prz), @@ -272,22 +268,18 @@ static ssize_t ramoops_pstore_read(struct pstore_record *record) } } - if (!prz_ok(prz)) - prz = ramoops_get_next_prz(&cxt->cprz, &cxt->console_read_cnt, - 1, &record->id, &record->type, - PSTORE_TYPE_CONSOLE, 0); + if (!prz_ok(prz) && !cxt->console_read_cnt++) + prz = ramoops_get_next_prz(&cxt->cprz, 0 /* single */, record); - if (!prz_ok(prz)) - prz = ramoops_get_next_prz(&cxt->mprz, &cxt->pmsg_read_cnt, - 1, &record->id, &record->type, - PSTORE_TYPE_PMSG, 0); + if (!prz_ok(prz) && !cxt->pmsg_read_cnt++) + prz = ramoops_get_next_prz(&cxt->mprz, 0 /* single */, record); /* ftrace is last since it may want to dynamically allocate memory. */ if (!prz_ok(prz)) { - if (!(cxt->flags & RAMOOPS_FLAG_FTRACE_PER_CPU)) { - prz = ramoops_get_next_prz(cxt->fprzs, - &cxt->ftrace_read_cnt, 1, &record->id, - &record->type, PSTORE_TYPE_FTRACE, 0); + if (!(cxt->flags & RAMOOPS_FLAG_FTRACE_PER_CPU) && + !cxt->ftrace_read_cnt++) { + prz = ramoops_get_next_prz(cxt->fprzs, 0 /* single */, + record); } else { /* * Build a new dummy record which combines all the @@ -299,15 +291,12 @@ static ssize_t ramoops_pstore_read(struct pstore_record *record) GFP_KERNEL); if (!tmp_prz) return -ENOMEM; + prz = tmp_prz; free_prz = true; while (cxt->ftrace_read_cnt < cxt->max_ftrace_cnt) { prz_next = ramoops_get_next_prz(cxt->fprzs, - &cxt->ftrace_read_cnt, - cxt->max_ftrace_cnt, - &record->id, - &record->type, - PSTORE_TYPE_FTRACE, 0); + cxt->ftrace_read_cnt++, record); if (!prz_ok(prz_next)) continue; @@ -321,7 +310,6 @@ static ssize_t ramoops_pstore_read(struct pstore_record *record) goto out; } record->id = 0; - prz = tmp_prz; } } @@ -611,6 +599,7 @@ static int ramoops_init_przs(const char *name, goto fail; } *paddr += zone_sz; + prz_ar[i]->type = pstore_name_to_type(name); } *przs = prz_ar; @@ -640,7 +629,7 @@ static int ramoops_init_prz(const char *name, label = kasprintf(GFP_KERNEL, "ramoops:%s", name); *prz = persistent_ram_new(*paddr, sz, sig, &cxt->ecc_info, - cxt->memtype, 0, label); + cxt->memtype, PRZ_FLAG_ZAP_OLD, label); if (IS_ERR(*prz)) { int err = PTR_ERR(*prz); @@ -649,9 +638,8 @@ static int ramoops_init_prz(const char *name, return err; } - persistent_ram_zap(*prz); - *paddr += sz; + (*prz)->type = pstore_name_to_type(name); return 0; } @@ -787,7 +775,7 @@ static int ramoops_probe(struct platform_device *pdev) dump_mem_sz = cxt->size - cxt->console_size - cxt->ftrace_size - cxt->pmsg_size; - err = ramoops_init_przs("dump", dev, cxt, &cxt->dprzs, &paddr, + err = ramoops_init_przs("dmesg", dev, cxt, &cxt->dprzs, &paddr, dump_mem_sz, cxt->record_size, &cxt->max_dump_cnt, 0, 0); if (err) @@ -827,7 +815,6 @@ static int ramoops_probe(struct platform_device 
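/*
 * [Editor's sketch, not part of the patch: the ramoops_pstore_read()
 * rework above drops the per-zone counter plumbing and instead gates
 * each single-instance zone with the "!counter++" idiom, which is true
 * exactly once per read cycle. A standalone demonstration:]
 *
 *	#include <stdio.h>
 *
 *	int main(void)
 *	{
 *		unsigned int console_read_cnt = 0;
 *
 *		for (int pass = 0; pass < 3; pass++)
 *			if (!console_read_cnt++)
 *				printf("zone offered on pass %d\n", pass);
 *		return 0;
 *	}
 */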
*pdev) err = -ENOMEM; goto fail_clear; } - spin_lock_init(&cxt->pstore.buf_lock); cxt->pstore.flags = PSTORE_FLAGS_DMESG; if (cxt->console_size) @@ -855,9 +842,9 @@ static int ramoops_probe(struct platform_device *pdev) ramoops_pmsg_size = pdata->pmsg_size; ramoops_ftrace_size = pdata->ftrace_size; - pr_info("attached 0x%lx@0x%llx, ecc: %d/%d\n", + pr_info("using 0x%lx@0x%llx, ecc: %d\n", cxt->size, (unsigned long long)cxt->phys_addr, - cxt->ecc_info.ecc_size, cxt->ecc_info.block_size); + cxt->ecc_info.ecc_size); return 0; diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c index 12e21f789194..c11711c2cc83 100644 --- a/fs/pstore/ram_core.c +++ b/fs/pstore/ram_core.c @@ -12,7 +12,7 @@ * */ -#define pr_fmt(fmt) "persistent_ram: " fmt +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include #include @@ -29,6 +29,16 @@ #include #include +/** + * struct persistent_ram_buffer - persistent circular RAM buffer + * + * @sig: + * signature to indicate header (PERSISTENT_RAM_SIG xor PRZ-type value) + * @start: + * offset into @data where the beginning of the stored bytes begin + * @size: + * number of valid bytes stored in @data + */ struct persistent_ram_buffer { uint32_t sig; atomic_t start; @@ -443,7 +453,8 @@ static void *persistent_ram_iomap(phys_addr_t start, size_t size, void *va; if (!request_mem_region(start, size, label ?: "ramoops")) { - pr_err("request mem region (0x%llx@0x%llx) failed\n", + pr_err("request mem region (%s 0x%llx@0x%llx) failed\n", + label ?: "ramoops", (unsigned long long)size, (unsigned long long)start); return NULL; } @@ -489,32 +500,42 @@ static int persistent_ram_post_init(struct persistent_ram_zone *prz, u32 sig, struct persistent_ram_ecc_info *ecc_info) { int ret; + bool zap = !!(prz->flags & PRZ_FLAG_ZAP_OLD); ret = persistent_ram_init_ecc(prz, ecc_info); - if (ret) + if (ret) { + pr_warn("ECC failed %s\n", prz->label); return ret; + } sig ^= PERSISTENT_RAM_SIG; if (prz->buffer->sig == sig) { + if (buffer_size(prz) == 0) { + pr_debug("found existing empty buffer\n"); + return 0; + } + if (buffer_size(prz) > prz->buffer_size || - buffer_start(prz) > buffer_size(prz)) + buffer_start(prz) > buffer_size(prz)) { pr_info("found existing invalid buffer, size %zu, start %zu\n", buffer_size(prz), buffer_start(prz)); - else { + zap = true; + } else { pr_debug("found existing buffer, size %zu, start %zu\n", buffer_size(prz), buffer_start(prz)); persistent_ram_save_old(prz); - return 0; } } else { pr_debug("no valid data in buffer (sig = 0x%08x)\n", prz->buffer->sig); + prz->buffer->sig = sig; + zap = true; } - /* Rewind missing or invalid memory area. */ - prz->buffer->sig = sig; - persistent_ram_zap(prz); + /* Reset missing, invalid, or single-use memory area. 
*/ + if (zap) + persistent_ram_zap(prz); return 0; } @@ -572,6 +593,12 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size, if (ret) goto err; + pr_debug("attached %s 0x%zx@0x%llx: %zu header, %zu data, %zu ecc (%d/%d)\n", + prz->label, prz->size, (unsigned long long)prz->paddr, + sizeof(*prz->buffer), prz->buffer_size, + prz->size - sizeof(*prz->buffer) - prz->buffer_size, + prz->ecc_info.ecc_size, prz->ecc_info.block_size); + return prz; err: persistent_ram_free(prz); diff --git a/fs/quota/quota.c b/fs/quota/quota.c index f0cbf58ad4da..fd5dd806f1b9 100644 --- a/fs/quota/quota.c +++ b/fs/quota/quota.c @@ -791,7 +791,8 @@ static int quotactl_cmd_write(int cmd) /* Return true if quotactl command is manipulating quota on/off state */ static bool quotactl_cmd_onoff(int cmd) { - return (cmd == Q_QUOTAON) || (cmd == Q_QUOTAOFF); + return (cmd == Q_QUOTAON) || (cmd == Q_QUOTAOFF) || + (cmd == Q_XQUOTAON) || (cmd == Q_XQUOTAOFF); } /* diff --git a/fs/select.c b/fs/select.c index 22b3bf89f051..4c8652390c94 100644 --- a/fs/select.c +++ b/fs/select.c @@ -287,12 +287,18 @@ int poll_select_set_timeout(struct timespec64 *to, time64_t sec, long nsec) return 0; } +enum poll_time_type { + PT_TIMEVAL = 0, + PT_OLD_TIMEVAL = 1, + PT_TIMESPEC = 2, + PT_OLD_TIMESPEC = 3, +}; + static int poll_select_copy_remaining(struct timespec64 *end_time, void __user *p, - int timeval, int ret) + enum poll_time_type pt_type, int ret) { struct timespec64 rts; - struct timeval rtv; if (!p) return ret; @@ -310,18 +316,40 @@ static int poll_select_copy_remaining(struct timespec64 *end_time, rts.tv_sec = rts.tv_nsec = 0; - if (timeval) { - if (sizeof(rtv) > sizeof(rtv.tv_sec) + sizeof(rtv.tv_usec)) - memset(&rtv, 0, sizeof(rtv)); - rtv.tv_sec = rts.tv_sec; - rtv.tv_usec = rts.tv_nsec / NSEC_PER_USEC; + switch (pt_type) { + case PT_TIMEVAL: + { + struct timeval rtv; - if (!copy_to_user(p, &rtv, sizeof(rtv))) + if (sizeof(rtv) > sizeof(rtv.tv_sec) + sizeof(rtv.tv_usec)) + memset(&rtv, 0, sizeof(rtv)); + rtv.tv_sec = rts.tv_sec; + rtv.tv_usec = rts.tv_nsec / NSEC_PER_USEC; + if (!copy_to_user(p, &rtv, sizeof(rtv))) + return ret; + } + break; + case PT_OLD_TIMEVAL: + { + struct old_timeval32 rtv; + + rtv.tv_sec = rts.tv_sec; + rtv.tv_usec = rts.tv_nsec / NSEC_PER_USEC; + if (!copy_to_user(p, &rtv, sizeof(rtv))) + return ret; + } + break; + case PT_TIMESPEC: + if (!put_timespec64(&rts, p)) return ret; - - } else if (!put_timespec64(&rts, p)) - return ret; - + break; + case PT_OLD_TIMESPEC: + if (!put_old_timespec32(&rts, p)) + return ret; + break; + default: + BUG(); + } /* * If an application puts its timeval in read-only memory, we * don't want the Linux-specific update to the timeval to @@ -689,7 +717,7 @@ static int kern_select(int n, fd_set __user *inp, fd_set __user *outp, } ret = core_sys_select(n, inp, outp, exp, to); - ret = poll_select_copy_remaining(&end_time, tvp, 1, ret); + ret = poll_select_copy_remaining(&end_time, tvp, PT_TIMEVAL, ret); return ret; } @@ -701,49 +729,41 @@ SYSCALL_DEFINE5(select, int, n, fd_set __user *, inp, fd_set __user *, outp, } static long do_pselect(int n, fd_set __user *inp, fd_set __user *outp, - fd_set __user *exp, struct timespec __user *tsp, - const sigset_t __user *sigmask, size_t sigsetsize) + fd_set __user *exp, void __user *tsp, + const sigset_t __user *sigmask, size_t sigsetsize, + enum poll_time_type type) { sigset_t ksigmask, sigsaved; struct timespec64 ts, end_time, *to = NULL; int ret; if (tsp) { - if (get_timespec64(&ts, tsp)) - return -EFAULT; + 
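/*
 * [Editor's sketch, not part of the patch: of the four poll_time_type
 * branches introduced above, the two timeval cases narrow the kernel's
 * nanosecond remainder to microseconds before copy-out. That
 * conversion in isolation, as runnable userspace C:]
 *
 *	#include <stdio.h>
 *	#include <sys/time.h>
 *	#include <time.h>
 *
 *	#define NSEC_PER_USEC 1000L
 *
 *	int main(void)
 *	{
 *		struct timespec rts = { .tv_sec = 1, .tv_nsec = 500000000L };
 *		struct timeval rtv = {
 *			.tv_sec = rts.tv_sec,
 *			.tv_usec = rts.tv_nsec / NSEC_PER_USEC,
 *		};
 *
 *		printf("%ld.%06ld\n", (long)rtv.tv_sec, (long)rtv.tv_usec);
 *		return 0;
 *	}
 */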
switch (type) { + case PT_TIMESPEC: + if (get_timespec64(&ts, tsp)) + return -EFAULT; + break; + case PT_OLD_TIMESPEC: + if (get_old_timespec32(&ts, tsp)) + return -EFAULT; + break; + default: + BUG(); + } to = &end_time; if (poll_select_set_timeout(to, ts.tv_sec, ts.tv_nsec)) return -EINVAL; } - if (sigmask) { - /* XXX: Don't preclude handling different sized sigset_t's. */ - if (sigsetsize != sizeof(sigset_t)) - return -EINVAL; - if (copy_from_user(&ksigmask, sigmask, sizeof(ksigmask))) - return -EFAULT; - - sigdelsetmask(&ksigmask, sigmask(SIGKILL)|sigmask(SIGSTOP)); - sigprocmask(SIG_SETMASK, &ksigmask, &sigsaved); - } + ret = set_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (ret) + return ret; ret = core_sys_select(n, inp, outp, exp, to); - ret = poll_select_copy_remaining(&end_time, tsp, 0, ret); + ret = poll_select_copy_remaining(&end_time, tsp, type, ret); - if (ret == -ERESTARTNOHAND) { - /* - * Don't restore the signal mask yet. Let do_signal() deliver - * the signal on the way back to userspace, before the signal - * mask is restored. - */ - if (sigmask) { - memcpy(&current->saved_sigmask, &sigsaved, - sizeof(sigsaved)); - set_restore_sigmask(); - } - } else if (sigmask) - sigprocmask(SIG_SETMASK, &sigsaved, NULL); + restore_user_sigmask(sigmask, &sigsaved); return ret; } @@ -755,7 +775,7 @@ static long do_pselect(int n, fd_set __user *inp, fd_set __user *outp, * the sigset size. */ SYSCALL_DEFINE6(pselect6, int, n, fd_set __user *, inp, fd_set __user *, outp, - fd_set __user *, exp, struct timespec __user *, tsp, + fd_set __user *, exp, struct __kernel_timespec __user *, tsp, void __user *, sig) { size_t sigsetsize = 0; @@ -769,9 +789,31 @@ SYSCALL_DEFINE6(pselect6, int, n, fd_set __user *, inp, fd_set __user *, outp, return -EFAULT; } - return do_pselect(n, inp, outp, exp, tsp, up, sigsetsize); + return do_pselect(n, inp, outp, exp, tsp, up, sigsetsize, PT_TIMESPEC); } +#if defined(CONFIG_COMPAT_32BIT_TIME) && !defined(CONFIG_64BIT) + +SYSCALL_DEFINE6(pselect6_time32, int, n, fd_set __user *, inp, fd_set __user *, outp, + fd_set __user *, exp, struct old_timespec32 __user *, tsp, + void __user *, sig) +{ + size_t sigsetsize = 0; + sigset_t __user *up = NULL; + + if (sig) { + if (!access_ok(VERIFY_READ, sig, sizeof(void *)+sizeof(size_t)) + || __get_user(up, (sigset_t __user * __user *)sig) + || __get_user(sigsetsize, + (size_t __user *)(sig+sizeof(void *)))) + return -EFAULT; + } + + return do_pselect(n, inp, outp, exp, tsp, up, sigsetsize, PT_OLD_TIMESPEC); +} + +#endif + #ifdef __ARCH_WANT_SYS_OLD_SELECT struct sel_arg_struct { unsigned long n; @@ -1045,7 +1087,7 @@ SYSCALL_DEFINE3(poll, struct pollfd __user *, ufds, unsigned int, nfds, } SYSCALL_DEFINE5(ppoll, struct pollfd __user *, ufds, unsigned int, nfds, - struct timespec __user *, tsp, const sigset_t __user *, sigmask, + struct __kernel_timespec __user *, tsp, const sigset_t __user *, sigmask, size_t, sigsetsize) { sigset_t ksigmask, sigsaved; @@ -1061,89 +1103,62 @@ SYSCALL_DEFINE5(ppoll, struct pollfd __user *, ufds, unsigned int, nfds, return -EINVAL; } - if (sigmask) { - /* XXX: Don't preclude handling different sized sigset_t's. 
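 *
 * [Editor's note, not part of the patch: every syscall converted in
 * this file now follows the same three-step signal-mask pattern
 * instead of open-coding the sigprocmask() save/restore shown in the
 * removed lines. In shape, kernel context assumed:]
 *
 *	ret = set_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize);
 *	if (ret)
 *		return ret;
 *	ret = core_sys_select(n, inp, outp, exp, to);
 *	restore_user_sigmask(sigmask, &sigsaved);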
*/ - if (sigsetsize != sizeof(sigset_t)) - return -EINVAL; - if (copy_from_user(&ksigmask, sigmask, sizeof(ksigmask))) - return -EFAULT; - - sigdelsetmask(&ksigmask, sigmask(SIGKILL)|sigmask(SIGSTOP)); - sigprocmask(SIG_SETMASK, &ksigmask, &sigsaved); - } + ret = set_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (ret) + return ret; ret = do_sys_poll(ufds, nfds, to); + restore_user_sigmask(sigmask, &sigsaved); + /* We can restart this syscall, usually */ - if (ret == -EINTR) { - /* - * Don't restore the signal mask yet. Let do_signal() deliver - * the signal on the way back to userspace, before the signal - * mask is restored. - */ - if (sigmask) { - memcpy(&current->saved_sigmask, &sigsaved, - sizeof(sigsaved)); - set_restore_sigmask(); - } + if (ret == -EINTR) ret = -ERESTARTNOHAND; - } else if (sigmask) - sigprocmask(SIG_SETMASK, &sigsaved, NULL); - ret = poll_select_copy_remaining(&end_time, tsp, 0, ret); + ret = poll_select_copy_remaining(&end_time, tsp, PT_TIMESPEC, ret); return ret; } -#ifdef CONFIG_COMPAT -#define __COMPAT_NFDBITS (8 * sizeof(compat_ulong_t)) +#if defined(CONFIG_COMPAT_32BIT_TIME) && !defined(CONFIG_64BIT) -static -int compat_poll_select_copy_remaining(struct timespec64 *end_time, void __user *p, - int timeval, int ret) +SYSCALL_DEFINE5(ppoll_time32, struct pollfd __user *, ufds, unsigned int, nfds, + struct old_timespec32 __user *, tsp, const sigset_t __user *, sigmask, + size_t, sigsetsize) { - struct timespec64 ts; + sigset_t ksigmask, sigsaved; + struct timespec64 ts, end_time, *to = NULL; + int ret; - if (!p) - return ret; + if (tsp) { + if (get_old_timespec32(&ts, tsp)) + return -EFAULT; - if (current->personality & STICKY_TIMEOUTS) - goto sticky; + to = &end_time; + if (poll_select_set_timeout(to, ts.tv_sec, ts.tv_nsec)) + return -EINVAL; + } - /* No update for zero timeout */ - if (!end_time->tv_sec && !end_time->tv_nsec) + ret = set_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (ret) return ret; - ktime_get_ts64(&ts); - ts = timespec64_sub(*end_time, ts); - if (ts.tv_sec < 0) - ts.tv_sec = ts.tv_nsec = 0; + ret = do_sys_poll(ufds, nfds, to); - if (timeval) { - struct old_timeval32 rtv; + restore_user_sigmask(sigmask, &sigsaved); - rtv.tv_sec = ts.tv_sec; - rtv.tv_usec = ts.tv_nsec / NSEC_PER_USEC; + /* We can restart this syscall, usually */ + if (ret == -EINTR) + ret = -ERESTARTNOHAND; - if (!copy_to_user(p, &rtv, sizeof(rtv))) - return ret; - } else { - if (!put_old_timespec32(&ts, p)) - return ret; - } - /* - * If an application puts its timeval in read-only memory, we - * don't want the Linux-specific update to the timeval to - * cause a fault after the select has completed - * successfully. However, because we're not updating the - * timeval, we can't restart the system call. - */ + ret = poll_select_copy_remaining(&end_time, tsp, PT_OLD_TIMESPEC, ret); -sticky: - if (ret == -ERESTARTNOHAND) - ret = -EINTR; return ret; } +#endif + +#ifdef CONFIG_COMPAT +#define __COMPAT_NFDBITS (8 * sizeof(compat_ulong_t)) /* * Ooo, nasty. 
We need here to frob 32-bit unsigned longs to @@ -1275,7 +1290,7 @@ static int do_compat_select(int n, compat_ulong_t __user *inp, } ret = compat_core_sys_select(n, inp, outp, exp, to); - ret = compat_poll_select_copy_remaining(&end_time, tvp, 1, ret); + ret = poll_select_copy_remaining(&end_time, tvp, PT_OLD_TIMEVAL, ret); return ret; } @@ -1307,52 +1322,66 @@ COMPAT_SYSCALL_DEFINE1(old_select, struct compat_sel_arg_struct __user *, arg) static long do_compat_pselect(int n, compat_ulong_t __user *inp, compat_ulong_t __user *outp, compat_ulong_t __user *exp, - struct old_timespec32 __user *tsp, compat_sigset_t __user *sigmask, - compat_size_t sigsetsize) + void __user *tsp, compat_sigset_t __user *sigmask, + compat_size_t sigsetsize, enum poll_time_type type) { sigset_t ksigmask, sigsaved; struct timespec64 ts, end_time, *to = NULL; int ret; if (tsp) { - if (get_old_timespec32(&ts, tsp)) - return -EFAULT; + switch (type) { + case PT_OLD_TIMESPEC: + if (get_old_timespec32(&ts, tsp)) + return -EFAULT; + break; + case PT_TIMESPEC: + if (get_timespec64(&ts, tsp)) + return -EFAULT; + break; + default: + BUG(); + } to = &end_time; if (poll_select_set_timeout(to, ts.tv_sec, ts.tv_nsec)) return -EINVAL; } - if (sigmask) { - if (sigsetsize != sizeof(compat_sigset_t)) - return -EINVAL; - if (get_compat_sigset(&ksigmask, sigmask)) - return -EFAULT; - - sigdelsetmask(&ksigmask, sigmask(SIGKILL)|sigmask(SIGSTOP)); - sigprocmask(SIG_SETMASK, &ksigmask, &sigsaved); - } + ret = set_compat_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (ret) + return ret; ret = compat_core_sys_select(n, inp, outp, exp, to); - ret = compat_poll_select_copy_remaining(&end_time, tsp, 0, ret); + ret = poll_select_copy_remaining(&end_time, tsp, type, ret); - if (ret == -ERESTARTNOHAND) { - /* - * Don't restore the signal mask yet. Let do_signal() deliver - * the signal on the way back to userspace, before the signal - * mask is restored. 
- */ - if (sigmask) { - memcpy(&current->saved_sigmask, &sigsaved, - sizeof(sigsaved)); - set_restore_sigmask(); - } - } else if (sigmask) - sigprocmask(SIG_SETMASK, &sigsaved, NULL); + restore_user_sigmask(sigmask, &sigsaved); return ret; } +COMPAT_SYSCALL_DEFINE6(pselect6_time64, int, n, compat_ulong_t __user *, inp, + compat_ulong_t __user *, outp, compat_ulong_t __user *, exp, + struct __kernel_timespec __user *, tsp, void __user *, sig) +{ + compat_size_t sigsetsize = 0; + compat_uptr_t up = 0; + + if (sig) { + if (!access_ok(VERIFY_READ, sig, + sizeof(compat_uptr_t)+sizeof(compat_size_t)) || + __get_user(up, (compat_uptr_t __user *)sig) || + __get_user(sigsetsize, + (compat_size_t __user *)(sig+sizeof(up)))) + return -EFAULT; + } + + return do_compat_pselect(n, inp, outp, exp, tsp, compat_ptr(up), + sigsetsize, PT_TIMESPEC); +} + +#if defined(CONFIG_COMPAT_32BIT_TIME) + COMPAT_SYSCALL_DEFINE6(pselect6, int, n, compat_ulong_t __user *, inp, compat_ulong_t __user *, outp, compat_ulong_t __user *, exp, struct old_timespec32 __user *, tsp, void __user *, sig) @@ -1368,10 +1397,14 @@ COMPAT_SYSCALL_DEFINE6(pselect6, int, n, compat_ulong_t __user *, inp, (compat_size_t __user *)(sig+sizeof(up)))) return -EFAULT; } + return do_compat_pselect(n, inp, outp, exp, tsp, compat_ptr(up), - sigsetsize); + sigsetsize, PT_OLD_TIMESPEC); } +#endif + +#if defined(CONFIG_COMPAT_32BIT_TIME) COMPAT_SYSCALL_DEFINE5(ppoll, struct pollfd __user *, ufds, unsigned int, nfds, struct old_timespec32 __user *, tsp, const compat_sigset_t __user *, sigmask, compat_size_t, sigsetsize) @@ -1389,36 +1422,57 @@ COMPAT_SYSCALL_DEFINE5(ppoll, struct pollfd __user *, ufds, return -EINVAL; } - if (sigmask) { - if (sigsetsize != sizeof(compat_sigset_t)) - return -EINVAL; - if (get_compat_sigset(&ksigmask, sigmask)) + ret = set_compat_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (ret) + return ret; + + ret = do_sys_poll(ufds, nfds, to); + + restore_user_sigmask(sigmask, &sigsaved); + + /* We can restart this syscall, usually */ + if (ret == -EINTR) + ret = -ERESTARTNOHAND; + + ret = poll_select_copy_remaining(&end_time, tsp, PT_OLD_TIMESPEC, ret); + + return ret; +} +#endif + +/* New compat syscall for 64 bit time_t*/ +COMPAT_SYSCALL_DEFINE5(ppoll_time64, struct pollfd __user *, ufds, + unsigned int, nfds, struct __kernel_timespec __user *, tsp, + const compat_sigset_t __user *, sigmask, compat_size_t, sigsetsize) +{ + sigset_t ksigmask, sigsaved; + struct timespec64 ts, end_time, *to = NULL; + int ret; + + if (tsp) { + if (get_timespec64(&ts, tsp)) return -EFAULT; - sigdelsetmask(&ksigmask, sigmask(SIGKILL)|sigmask(SIGSTOP)); - sigprocmask(SIG_SETMASK, &ksigmask, &sigsaved); + to = &end_time; + if (poll_select_set_timeout(to, ts.tv_sec, ts.tv_nsec)) + return -EINVAL; } + ret = set_compat_user_sigmask(sigmask, &ksigmask, &sigsaved, sigsetsize); + if (ret) + return ret; + ret = do_sys_poll(ufds, nfds, to); + restore_user_sigmask(sigmask, &sigsaved); + /* We can restart this syscall, usually */ - if (ret == -EINTR) { - /* - * Don't restore the signal mask yet. Let do_signal() deliver - * the signal on the way back to userspace, before the signal - * mask is restored. 
- */ - if (sigmask) { - memcpy(&current->saved_sigmask, &sigsaved, - sizeof(sigsaved)); - set_restore_sigmask(); - } + if (ret == -EINTR) ret = -ERESTARTNOHAND; - } else if (sigmask) - sigprocmask(SIG_SETMASK, &sigsaved, NULL); - ret = compat_poll_select_copy_remaining(&end_time, tsp, 0, ret); + ret = poll_select_copy_remaining(&end_time, tsp, PT_TIMESPEC, ret); return ret; } + #endif diff --git a/fs/ubifs/auth.c b/fs/ubifs/auth.c index 124e965a28b3..5bf5fd08879e 100644 --- a/fs/ubifs/auth.c +++ b/fs/ubifs/auth.c @@ -269,8 +269,7 @@ int ubifs_init_authentication(struct ubifs_info *c) goto out; } - c->hash_tfm = crypto_alloc_shash(c->auth_hash_name, 0, - CRYPTO_ALG_ASYNC); + c->hash_tfm = crypto_alloc_shash(c->auth_hash_name, 0, 0); if (IS_ERR(c->hash_tfm)) { err = PTR_ERR(c->hash_tfm); ubifs_err(c, "Can not allocate %s: %d", @@ -286,7 +285,7 @@ int ubifs_init_authentication(struct ubifs_info *c) goto out_free_hash; } - c->hmac_tfm = crypto_alloc_shash(hmac_name, 0, CRYPTO_ALG_ASYNC); + c->hmac_tfm = crypto_alloc_shash(hmac_name, 0, 0); if (IS_ERR(c->hmac_tfm)) { err = PTR_ERR(c->hmac_tfm); ubifs_err(c, "Can not allocate %s: %d", hmac_name, err); diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c index 1b78f2e09218..5d2ffb1a45fc 100644 --- a/fs/ubifs/file.c +++ b/fs/ubifs/file.c @@ -1481,7 +1481,7 @@ static int ubifs_migrate_page(struct address_space *mapping, { int rc; - rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); + rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0); if (rc != MIGRATEPAGE_SUCCESS) return rc; diff --git a/fs/udf/inode.c b/fs/udf/inode.c index 5df554a9f9c9..ae796e10f68b 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -1357,6 +1357,12 @@ reread: iinfo->i_alloc_type = le16_to_cpu(fe->icbTag.flags) & ICBTAG_FLAG_AD_MASK; + if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_SHORT && + iinfo->i_alloc_type != ICBTAG_FLAG_AD_LONG && + iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB) { + ret = -EIO; + goto out; + } iinfo->i_unique = 0; iinfo->i_lenEAttr = 0; iinfo->i_lenExtents = 0; diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 59dc28047030..89800fc7dc9d 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -53,7 +53,7 @@ struct userfaultfd_ctx { /* a refile sequence protected by fault_pending_wqh lock */ struct seqcount refile_seq; /* pseudo fd refcounting */ - atomic_t refcount; + refcount_t refcount; /* userfaultfd syscall flags */ unsigned int flags; /* features requested from the userspace */ @@ -140,8 +140,7 @@ out: */ static void userfaultfd_ctx_get(struct userfaultfd_ctx *ctx) { - if (!atomic_inc_not_zero(&ctx->refcount)) - BUG(); + refcount_inc(&ctx->refcount); } /** @@ -154,7 +153,7 @@ static void userfaultfd_ctx_get(struct userfaultfd_ctx *ctx) */ static void userfaultfd_ctx_put(struct userfaultfd_ctx *ctx) { - if (atomic_dec_and_test(&ctx->refcount)) { + if (refcount_dec_and_test(&ctx->refcount)) { VM_BUG_ON(spin_is_locked(&ctx->fault_pending_wqh.lock)); VM_BUG_ON(waitqueue_active(&ctx->fault_pending_wqh)); VM_BUG_ON(spin_is_locked(&ctx->fault_wqh.lock)); @@ -686,7 +685,7 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) return -ENOMEM; } - atomic_set(&ctx->refcount, 1); + refcount_set(&ctx->refcount, 1); ctx->flags = octx->flags; ctx->state = UFFD_STATE_RUNNING; ctx->features = octx->features; @@ -736,10 +735,18 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma, struct userfaultfd_ctx *ctx; ctx = vma->vm_userfaultfd_ctx.ctx; - if (ctx && (ctx->features & UFFD_FEATURE_EVENT_REMAP)) { + + if (!ctx) + 
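/*
 * [Editor's sketch, not part of the patch: the userfaultfd hunks above
 * are a routine atomic_t -> refcount_t conversion. refcount_t already
 * traps increment-from-zero and overflow, which is why the old
 * "if (!atomic_inc_not_zero()) BUG()" collapses to refcount_inc().
 * The object life cycle, kernel context assumed:]
 *
 *	refcount_set(&ctx->refcount, 1);
 *	refcount_inc(&ctx->refcount);
 *	if (refcount_dec_and_test(&ctx->refcount))
 *		kmem_cache_free(userfaultfd_ctx_cachep, ctx);
 */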
return; + + if (ctx->features & UFFD_FEATURE_EVENT_REMAP) { vm_ctx->ctx = ctx; userfaultfd_ctx_get(ctx); WRITE_ONCE(ctx->mmap_changing, true); + } else { + /* Drop uffd context if remap feature not enabled */ + vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX; + vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING); } } @@ -1927,7 +1934,7 @@ SYSCALL_DEFINE1(userfaultfd, int, flags) if (!ctx) return -ENOMEM; - atomic_set(&ctx->refcount, 1); + refcount_set(&ctx->refcount, 1); ctx->flags = flags; ctx->features = 0; ctx->state = UFFD_STATE_WAIT_API; diff --git a/fs/xfs/libxfs/xfs_ag.c b/fs/xfs/libxfs/xfs_ag.c index 9345802c99f7..999ad8d00d43 100644 --- a/fs/xfs/libxfs/xfs_ag.c +++ b/fs/xfs/libxfs/xfs_ag.c @@ -414,7 +414,6 @@ xfs_ag_extend_space( struct aghdr_init_data *id, xfs_extlen_t len) { - struct xfs_owner_info oinfo; struct xfs_buf *bp; struct xfs_agi *agi; struct xfs_agf *agf; @@ -448,17 +447,17 @@ xfs_ag_extend_space( /* * Free the new space. * - * XFS_RMAP_OWN_NULL is used here to tell the rmap btree that + * XFS_RMAP_OINFO_SKIP_UPDATE is used here to tell the rmap btree that * this doesn't actually exist in the rmap btree. */ - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_NULL); error = xfs_rmap_free(tp, bp, id->agno, be32_to_cpu(agf->agf_length) - len, - len, &oinfo); + len, &XFS_RMAP_OINFO_SKIP_UPDATE); if (error) return error; return xfs_free_extent(tp, XFS_AGB_TO_FSB(mp, id->agno, be32_to_cpu(agf->agf_length) - len), - len, &oinfo, XFS_AG_RESV_NONE); + len, &XFS_RMAP_OINFO_SKIP_UPDATE, + XFS_AG_RESV_NONE); } diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c index e1c0c0d2f1b0..b715668886a4 100644 --- a/fs/xfs/libxfs/xfs_alloc.c +++ b/fs/xfs/libxfs/xfs_alloc.c @@ -1594,7 +1594,6 @@ xfs_alloc_ag_vextent_small( xfs_extlen_t *flenp, /* result length */ int *stat) /* status: 0-freelist, 1-normal/none */ { - struct xfs_owner_info oinfo; int error; xfs_agblock_t fbno; xfs_extlen_t flen; @@ -1648,9 +1647,8 @@ xfs_alloc_ag_vextent_small( * doesn't live in the free space, we need to clear * out the OWN_AG rmap. 
*/ - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG); error = xfs_rmap_free(args->tp, args->agbp, args->agno, - fbno, 1, &oinfo); + fbno, 1, &XFS_RMAP_OINFO_AG); if (error) goto error0; @@ -1694,28 +1692,28 @@ error0: */ STATIC int xfs_free_ag_extent( - xfs_trans_t *tp, - xfs_buf_t *agbp, - xfs_agnumber_t agno, - xfs_agblock_t bno, - xfs_extlen_t len, - struct xfs_owner_info *oinfo, - enum xfs_ag_resv_type type) + struct xfs_trans *tp, + struct xfs_buf *agbp, + xfs_agnumber_t agno, + xfs_agblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo, + enum xfs_ag_resv_type type) { - xfs_btree_cur_t *bno_cur; /* cursor for by-block btree */ - xfs_btree_cur_t *cnt_cur; /* cursor for by-size btree */ - int error; /* error return value */ - xfs_agblock_t gtbno; /* start of right neighbor block */ - xfs_extlen_t gtlen; /* length of right neighbor block */ - int haveleft; /* have a left neighbor block */ - int haveright; /* have a right neighbor block */ - int i; /* temp, result code */ - xfs_agblock_t ltbno; /* start of left neighbor block */ - xfs_extlen_t ltlen; /* length of left neighbor block */ - xfs_mount_t *mp; /* mount point struct for filesystem */ - xfs_agblock_t nbno; /* new starting block of freespace */ - xfs_extlen_t nlen; /* new length of freespace */ - xfs_perag_t *pag; /* per allocation group data */ + struct xfs_mount *mp; + struct xfs_perag *pag; + struct xfs_btree_cur *bno_cur; + struct xfs_btree_cur *cnt_cur; + xfs_agblock_t gtbno; /* start of right neighbor */ + xfs_extlen_t gtlen; /* length of right neighbor */ + xfs_agblock_t ltbno; /* start of left neighbor */ + xfs_extlen_t ltlen; /* length of left neighbor */ + xfs_agblock_t nbno; /* new starting block of freesp */ + xfs_extlen_t nlen; /* new length of freespace */ + int haveleft; /* have a left neighbor */ + int haveright; /* have a right neighbor */ + int i; + int error; bno_cur = cnt_cur = NULL; mp = tp->t_mountp; @@ -2314,10 +2312,11 @@ xfs_alloc_fix_freelist( * repair/rmap.c in xfsprogs for details. */ memset(&targs, 0, sizeof(targs)); + /* struct copy below */ if (flags & XFS_ALLOC_FLAG_NORMAP) - xfs_rmap_skip_owner_update(&targs.oinfo); + targs.oinfo = XFS_RMAP_OINFO_SKIP_UPDATE; else - xfs_rmap_ag_owner(&targs.oinfo, XFS_RMAP_OWN_AG); + targs.oinfo = XFS_RMAP_OINFO_AG; while (!(flags & XFS_ALLOC_FLAG_NOSHRINK) && pag->pagf_flcount > need) { error = xfs_alloc_get_freelist(tp, agbp, &bno, 0); if (error) @@ -2435,7 +2434,6 @@ xfs_alloc_get_freelist( be32_add_cpu(&agf->agf_flcount, -1); xfs_trans_agflist_delta(tp, -1); pag->pagf_flcount--; - xfs_perag_put(pag); logflags = XFS_AGF_FLFIRST | XFS_AGF_FLCOUNT; if (btreeblk) { @@ -2443,6 +2441,7 @@ xfs_alloc_get_freelist( pag->pagf_btreeblks++; logflags |= XFS_AGF_BTREEBLKS; } + xfs_perag_put(pag); xfs_alloc_log_agf(tp, agbp, logflags); *bnop = bno; @@ -3008,21 +3007,21 @@ out: * Just break up the extent address and hand off to xfs_free_ag_extent * after fixing up the freelist. 
*/ -int /* error */ +int __xfs_free_extent( - struct xfs_trans *tp, /* transaction pointer */ - xfs_fsblock_t bno, /* starting block number of extent */ - xfs_extlen_t len, /* length of extent */ - struct xfs_owner_info *oinfo, /* extent owner */ - enum xfs_ag_resv_type type, /* block reservation type */ - bool skip_discard) + struct xfs_trans *tp, + xfs_fsblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo, + enum xfs_ag_resv_type type, + bool skip_discard) { - struct xfs_mount *mp = tp->t_mountp; - struct xfs_buf *agbp; - xfs_agnumber_t agno = XFS_FSB_TO_AGNO(mp, bno); - xfs_agblock_t agbno = XFS_FSB_TO_AGBNO(mp, bno); - int error; - unsigned int busy_flags = 0; + struct xfs_mount *mp = tp->t_mountp; + struct xfs_buf *agbp; + xfs_agnumber_t agno = XFS_FSB_TO_AGNO(mp, bno); + xfs_agblock_t agbno = XFS_FSB_TO_AGBNO(mp, bno); + int error; + unsigned int busy_flags = 0; ASSERT(len != 0); ASSERT(type != XFS_AG_RESV_AGFL); diff --git a/fs/xfs/libxfs/xfs_alloc.h b/fs/xfs/libxfs/xfs_alloc.h index 00cd5ec4cb6b..d6ed5d2c07c2 100644 --- a/fs/xfs/libxfs/xfs_alloc.h +++ b/fs/xfs/libxfs/xfs_alloc.h @@ -182,7 +182,7 @@ __xfs_free_extent( struct xfs_trans *tp, /* transaction pointer */ xfs_fsblock_t bno, /* starting block number of extent */ xfs_extlen_t len, /* length of extent */ - struct xfs_owner_info *oinfo, /* extent owner */ + const struct xfs_owner_info *oinfo, /* extent owner */ enum xfs_ag_resv_type type, /* block reservation type */ bool skip_discard); @@ -191,7 +191,7 @@ xfs_free_extent( struct xfs_trans *tp, xfs_fsblock_t bno, xfs_extlen_t len, - struct xfs_owner_info *oinfo, + const struct xfs_owner_info *oinfo, enum xfs_ag_resv_type type) { return __xfs_free_extent(tp, bno, len, oinfo, type, false); diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 19e921d1586f..332eefa2700b 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -536,7 +536,7 @@ __xfs_bmap_add_free( struct xfs_trans *tp, xfs_fsblock_t bno, xfs_filblks_t len, - struct xfs_owner_info *oinfo, + const struct xfs_owner_info *oinfo, bool skip_discard) { struct xfs_extent_free_item *new; /* new element */ @@ -564,7 +564,7 @@ __xfs_bmap_add_free( if (oinfo) new->xefi_oinfo = *oinfo; else - xfs_rmap_skip_owner_update(&new->xefi_oinfo); + new->xefi_oinfo = XFS_RMAP_OINFO_SKIP_UPDATE; new->xefi_skip_discard = skip_discard; trace_xfs_bmap_free_defer(tp->t_mountp, XFS_FSB_TO_AGNO(tp->t_mountp, bno), 0, @@ -3453,7 +3453,7 @@ xfs_bmap_btalloc( args.tp = ap->tp; args.mp = mp; args.fsbno = ap->blkno; - xfs_rmap_skip_owner_update(&args.oinfo); + args.oinfo = XFS_RMAP_OINFO_SKIP_UPDATE; /* Trim the allocation back to the maximum an AG can fit. 
*/ args.maxlen = min(ap->length, mp->m_ag_max_usable); diff --git a/fs/xfs/libxfs/xfs_bmap.h b/fs/xfs/libxfs/xfs_bmap.h index 488dc8860fd7..09d3ea97cc15 100644 --- a/fs/xfs/libxfs/xfs_bmap.h +++ b/fs/xfs/libxfs/xfs_bmap.h @@ -186,7 +186,7 @@ int xfs_bmap_add_attrfork(struct xfs_inode *ip, int size, int rsvd); int xfs_bmap_set_attrforkoff(struct xfs_inode *ip, int size, int *version); void xfs_bmap_local_to_extents_empty(struct xfs_inode *ip, int whichfork); void __xfs_bmap_add_free(struct xfs_trans *tp, xfs_fsblock_t bno, - xfs_filblks_t len, struct xfs_owner_info *oinfo, + xfs_filblks_t len, const struct xfs_owner_info *oinfo, bool skip_discard); void xfs_bmap_compute_maxlevels(struct xfs_mount *mp, int whichfork); int xfs_bmap_first_unused(struct xfs_trans *tp, struct xfs_inode *ip, @@ -234,7 +234,7 @@ xfs_bmap_add_free( struct xfs_trans *tp, xfs_fsblock_t bno, xfs_filblks_t len, - struct xfs_owner_info *oinfo) + const struct xfs_owner_info *oinfo) { __xfs_bmap_add_free(tp, bno, len, oinfo, false); } diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c index e792b167150a..94f00427de98 100644 --- a/fs/xfs/libxfs/xfs_defer.c +++ b/fs/xfs/libxfs/xfs_defer.c @@ -172,7 +172,13 @@ * reoccur. */ -static const struct xfs_defer_op_type *defer_op_types[XFS_DEFER_OPS_TYPE_MAX]; +static const struct xfs_defer_op_type *defer_op_types[] = { + [XFS_DEFER_OPS_TYPE_BMAP] = &xfs_bmap_update_defer_type, + [XFS_DEFER_OPS_TYPE_REFCOUNT] = &xfs_refcount_update_defer_type, + [XFS_DEFER_OPS_TYPE_RMAP] = &xfs_rmap_update_defer_type, + [XFS_DEFER_OPS_TYPE_FREE] = &xfs_extent_free_defer_type, + [XFS_DEFER_OPS_TYPE_AGFL_FREE] = &xfs_agfl_free_defer_type, +}; /* * For each pending item in the intake list, log its intent item and the @@ -185,15 +191,15 @@ xfs_defer_create_intents( { struct list_head *li; struct xfs_defer_pending *dfp; + const struct xfs_defer_op_type *ops; list_for_each_entry(dfp, &tp->t_dfops, dfp_list) { - dfp->dfp_intent = dfp->dfp_type->create_intent(tp, - dfp->dfp_count); + ops = defer_op_types[dfp->dfp_type]; + dfp->dfp_intent = ops->create_intent(tp, dfp->dfp_count); trace_xfs_defer_create_intent(tp->t_mountp, dfp); - list_sort(tp->t_mountp, &dfp->dfp_work, - dfp->dfp_type->diff_items); + list_sort(tp->t_mountp, &dfp->dfp_work, ops->diff_items); list_for_each(li, &dfp->dfp_work) - dfp->dfp_type->log_item(tp, dfp->dfp_intent, li); + ops->log_item(tp, dfp->dfp_intent, li); } } @@ -204,14 +210,16 @@ xfs_defer_trans_abort( struct list_head *dop_pending) { struct xfs_defer_pending *dfp; + const struct xfs_defer_op_type *ops; trace_xfs_defer_trans_abort(tp, _RET_IP_); /* Abort intent items that don't have a done item. */ list_for_each_entry(dfp, dop_pending, dfp_list) { + ops = defer_op_types[dfp->dfp_type]; trace_xfs_defer_pending_abort(tp->t_mountp, dfp); if (dfp->dfp_intent && !dfp->dfp_done) { - dfp->dfp_type->abort_intent(dfp->dfp_intent); + ops->abort_intent(dfp->dfp_intent); dfp->dfp_intent = NULL; } } @@ -315,18 +323,20 @@ xfs_defer_cancel_list( struct xfs_defer_pending *pli; struct list_head *pwi; struct list_head *n; + const struct xfs_defer_op_type *ops; /* * Free the pending items. Caller should already have arranged * for the intent items to be released. 
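 *
 * [Editor's note, not part of the patch: the xfs_defer rework above
 * retires runtime registration via xfs_defer_init_op_type() in favour
 * of a designated-initializer table indexed by enum
 * xfs_defer_ops_type, size-checked with BUILD_BUG_ON(). A standalone C
 * analogue of the table lookup:]
 *
 *	#include <stdio.h>
 *
 *	enum op_type { OP_BMAP, OP_FREE, OP_TYPE_MAX };
 *
 *	struct op_ops { const char *name; };
 *
 *	static const struct op_ops bmap_ops = { .name = "bmap" };
 *	static const struct op_ops free_ops = { .name = "free" };
 *
 *	static const struct op_ops *op_table[] = {
 *		[OP_BMAP] = &bmap_ops,
 *		[OP_FREE] = &free_ops,
 *	};
 *
 *	int main(void)
 *	{
 *		printf("%s\n", op_table[OP_FREE]->name);
 *		return 0;
 *	}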
*/ list_for_each_entry_safe(dfp, pli, dop_list, dfp_list) { + ops = defer_op_types[dfp->dfp_type]; trace_xfs_defer_cancel_list(mp, dfp); list_del(&dfp->dfp_list); list_for_each_safe(pwi, n, &dfp->dfp_work) { list_del(pwi); dfp->dfp_count--; - dfp->dfp_type->cancel_item(pwi); + ops->cancel_item(pwi); } ASSERT(dfp->dfp_count == 0); kmem_free(dfp); @@ -350,7 +360,7 @@ xfs_defer_finish_noroll( struct list_head *n; void *state; int error = 0; - void (*cleanup_fn)(struct xfs_trans *, void *, int); + const struct xfs_defer_op_type *ops; LIST_HEAD(dop_pending); ASSERT((*tp)->t_flags & XFS_TRANS_PERM_LOG_RES); @@ -373,18 +383,18 @@ xfs_defer_finish_noroll( /* Log an intent-done item for the first pending item. */ dfp = list_first_entry(&dop_pending, struct xfs_defer_pending, dfp_list); + ops = defer_op_types[dfp->dfp_type]; trace_xfs_defer_pending_finish((*tp)->t_mountp, dfp); - dfp->dfp_done = dfp->dfp_type->create_done(*tp, dfp->dfp_intent, + dfp->dfp_done = ops->create_done(*tp, dfp->dfp_intent, dfp->dfp_count); - cleanup_fn = dfp->dfp_type->finish_cleanup; /* Finish the work items. */ state = NULL; list_for_each_safe(li, n, &dfp->dfp_work) { list_del(li); dfp->dfp_count--; - error = dfp->dfp_type->finish_item(*tp, li, - dfp->dfp_done, &state); + error = ops->finish_item(*tp, li, dfp->dfp_done, + &state); if (error == -EAGAIN) { /* * Caller wants a fresh transaction; @@ -400,8 +410,8 @@ xfs_defer_finish_noroll( * xfs_defer_cancel will take care of freeing * all these lists and stuff. */ - if (cleanup_fn) - cleanup_fn(*tp, state, error); + if (ops->finish_cleanup) + ops->finish_cleanup(*tp, state, error); goto out; } } @@ -413,20 +423,19 @@ xfs_defer_finish_noroll( * a Fresh Transaction while Finishing * Deferred Work" above. */ - dfp->dfp_intent = dfp->dfp_type->create_intent(*tp, + dfp->dfp_intent = ops->create_intent(*tp, dfp->dfp_count); dfp->dfp_done = NULL; list_for_each(li, &dfp->dfp_work) - dfp->dfp_type->log_item(*tp, dfp->dfp_intent, - li); + ops->log_item(*tp, dfp->dfp_intent, li); } else { /* Done with the dfp, free it. */ list_del(&dfp->dfp_list); kmem_free(dfp); } - if (cleanup_fn) - cleanup_fn(*tp, state, error); + if (ops->finish_cleanup) + ops->finish_cleanup(*tp, state, error); } out: @@ -486,8 +495,10 @@ xfs_defer_add( struct list_head *li) { struct xfs_defer_pending *dfp = NULL; + const struct xfs_defer_op_type *ops; ASSERT(tp->t_flags & XFS_TRANS_PERM_LOG_RES); + BUILD_BUG_ON(ARRAY_SIZE(defer_op_types) != XFS_DEFER_OPS_TYPE_MAX); /* * Add the item to a pending item at the end of the intake list. @@ -497,15 +508,15 @@ xfs_defer_add( if (!list_empty(&tp->t_dfops)) { dfp = list_last_entry(&tp->t_dfops, struct xfs_defer_pending, dfp_list); - if (dfp->dfp_type->type != type || - (dfp->dfp_type->max_items && - dfp->dfp_count >= dfp->dfp_type->max_items)) + ops = defer_op_types[dfp->dfp_type]; + if (dfp->dfp_type != type || + (ops->max_items && dfp->dfp_count >= ops->max_items)) dfp = NULL; } if (!dfp) { dfp = kmem_alloc(sizeof(struct xfs_defer_pending), KM_SLEEP | KM_NOFS); - dfp->dfp_type = defer_op_types[type]; + dfp->dfp_type = type; dfp->dfp_intent = NULL; dfp->dfp_done = NULL; dfp->dfp_count = 0; @@ -517,14 +528,6 @@ xfs_defer_add( dfp->dfp_count++; } -/* Initialize a deferred operation list. */ -void -xfs_defer_init_op_type( - const struct xfs_defer_op_type *type) -{ - defer_op_types[type->type] = type; -} - /* * Move deferred ops from one transaction to another and reset the source to * initial state. 
This is primarily used to carry state forward across diff --git a/fs/xfs/libxfs/xfs_defer.h b/fs/xfs/libxfs/xfs_defer.h index 2584a5b95b0d..7c28d7608ac6 100644 --- a/fs/xfs/libxfs/xfs_defer.h +++ b/fs/xfs/libxfs/xfs_defer.h @@ -8,20 +8,6 @@ struct xfs_defer_op_type; -/* - * Save a log intent item and a list of extents, so that we can replay - * whatever action had to happen to the extent list and file the log done - * item. - */ -struct xfs_defer_pending { - const struct xfs_defer_op_type *dfp_type; /* function pointers */ - struct list_head dfp_list; /* pending items */ - void *dfp_intent; /* log intent item */ - void *dfp_done; /* log done item */ - struct list_head dfp_work; /* work items */ - unsigned int dfp_count; /* # extent items */ -}; - /* * Header for deferred operation list. */ @@ -34,6 +20,20 @@ enum xfs_defer_ops_type { XFS_DEFER_OPS_TYPE_MAX, }; +/* + * Save a log intent item and a list of extents, so that we can replay + * whatever action had to happen to the extent list and file the log done + * item. + */ +struct xfs_defer_pending { + struct list_head dfp_list; /* pending items */ + struct list_head dfp_work; /* work items */ + void *dfp_intent; /* log intent item */ + void *dfp_done; /* log done item */ + unsigned int dfp_count; /* # extent items */ + enum xfs_defer_ops_type dfp_type; +}; + void xfs_defer_add(struct xfs_trans *tp, enum xfs_defer_ops_type type, struct list_head *h); int xfs_defer_finish_noroll(struct xfs_trans **tp); @@ -43,8 +43,6 @@ void xfs_defer_move(struct xfs_trans *dtp, struct xfs_trans *stp); /* Description of a deferred type. */ struct xfs_defer_op_type { - enum xfs_defer_ops_type type; - unsigned int max_items; void (*abort_intent)(void *); void *(*create_done)(struct xfs_trans *, void *, unsigned int); int (*finish_item)(struct xfs_trans *, struct list_head *, void *, @@ -54,8 +52,13 @@ struct xfs_defer_op_type { int (*diff_items)(void *, struct list_head *, struct list_head *); void *(*create_intent)(struct xfs_trans *, uint); void (*log_item)(struct xfs_trans *, void *, struct list_head *); + unsigned int max_items; }; -void xfs_defer_init_op_type(const struct xfs_defer_op_type *type); +extern const struct xfs_defer_op_type xfs_bmap_update_defer_type; +extern const struct xfs_defer_op_type xfs_refcount_update_defer_type; +extern const struct xfs_defer_op_type xfs_rmap_update_defer_type; +extern const struct xfs_defer_op_type xfs_extent_free_defer_type; +extern const struct xfs_defer_op_type xfs_agfl_free_defer_type; #endif /* __XFS_DEFER_H__ */ diff --git a/fs/xfs/libxfs/xfs_format.h b/fs/xfs/libxfs/xfs_format.h index 9995d5ae380b..9bb3c48843ec 100644 --- a/fs/xfs/libxfs/xfs_format.h +++ b/fs/xfs/libxfs/xfs_format.h @@ -916,6 +916,9 @@ static inline uint xfs_dinode_size(int version) /* * Values for di_format + * + * This enum is used in string mapping in xfs_trace.h; please keep the + * TRACE_DEFINE_ENUMs for it up to date. */ typedef enum xfs_dinode_fmt { XFS_DINODE_FMT_DEV, /* xfs_dev_t */ @@ -925,6 +928,13 @@ typedef enum xfs_dinode_fmt { XFS_DINODE_FMT_UUID /* added long ago, but never used */ } xfs_dinode_fmt_t; +#define XFS_INODE_FORMAT_STR \ + { XFS_DINODE_FMT_DEV, "dev" }, \ + { XFS_DINODE_FMT_LOCAL, "local" }, \ + { XFS_DINODE_FMT_EXTENTS, "extent" }, \ + { XFS_DINODE_FMT_BTREE, "btree" }, \ + { XFS_DINODE_FMT_UUID, "uuid" } + /* * Inode minimum and maximum sizes. 
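 *
 * [Editor's note, not part of the patch: the new XFS_FSB_TO_INO() and
 * XFS_AGB_TO_AGINO() macros above are plain left shifts by the
 * filesystem's log2 of inodes per block, replacing the older
 * XFS_OFFBNO_TO_AGINO(mp, b, 0) spelling. A standalone arithmetic
 * check, assuming a hypothetical 16 inodes per block (log2 = 4):]
 *
 *	#include <stdio.h>
 *
 *	#define INO_OFFSET_BITS 4
 *	#define AGB_TO_AGINO(b) ((unsigned long long)(b) << INO_OFFSET_BITS)
 *
 *	int main(void)
 *	{
 *		printf("agbno 100 -> agino %llu\n", AGB_TO_AGINO(100));
 *		return 0;
 *	}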
*/ @@ -1083,6 +1093,8 @@ static inline void xfs_dinode_put_rdev(struct xfs_dinode *dip, xfs_dev_t rdev) ((i) & XFS_INO_MASK(XFS_INO_OFFSET_BITS(mp))) #define XFS_OFFBNO_TO_AGINO(mp,b,o) \ ((xfs_agino_t)(((b) << XFS_INO_OFFSET_BITS(mp)) | (o))) +#define XFS_FSB_TO_INO(mp, b) ((xfs_ino_t)((b) << XFS_INO_OFFSET_BITS(mp))) +#define XFS_AGB_TO_AGINO(mp, b) ((xfs_agino_t)((b) << XFS_INO_OFFSET_BITS(mp))) #define XFS_MAXINUMBER ((xfs_ino_t)((1ULL << 56) - 1ULL)) #define XFS_MAXINUMBER_32 ((xfs_ino_t)((1ULL << 32) - 1ULL)) diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c index a8f6db735d5d..d32152fc8a6c 100644 --- a/fs/xfs/libxfs/xfs_ialloc.c +++ b/fs/xfs/libxfs/xfs_ialloc.c @@ -288,7 +288,7 @@ xfs_ialloc_inode_init( { struct xfs_buf *fbuf; struct xfs_dinode *free; - int nbufs, blks_per_cluster, inodes_per_cluster; + int nbufs; int version; int i, j; xfs_daddr_t d; @@ -299,9 +299,7 @@ xfs_ialloc_inode_init( * sizes, manipulate the inodes in buffers which are multiples of the * blocks size. */ - blks_per_cluster = xfs_icluster_size_fsb(mp); - inodes_per_cluster = blks_per_cluster << mp->m_sb.sb_inopblog; - nbufs = length / blks_per_cluster; + nbufs = length / mp->m_blocks_per_cluster; /* * Figure out what version number to use in the inodes we create. If @@ -312,7 +310,7 @@ xfs_ialloc_inode_init( * * For v3 inodes, we also need to write the inode number into the inode, * so calculate the first inode number of the chunk here as - * XFS_OFFBNO_TO_AGINO() only works within a filesystem block, not + * XFS_AGB_TO_AGINO() only works within a filesystem block, not * across multiple filesystem blocks (such as a cluster) and so cannot * be used in the cluster buffer loop below. * @@ -324,8 +322,7 @@ xfs_ialloc_inode_init( */ if (xfs_sb_version_hascrc(&mp->m_sb)) { version = 3; - ino = XFS_AGINO_TO_INO(mp, agno, - XFS_OFFBNO_TO_AGINO(mp, agbno, 0)); + ino = XFS_AGINO_TO_INO(mp, agno, XFS_AGB_TO_AGINO(mp, agbno)); /* * log the initialisation that is about to take place as an @@ -345,9 +342,10 @@ xfs_ialloc_inode_init( /* * Get the block. */ - d = XFS_AGB_TO_DADDR(mp, agno, agbno + (j * blks_per_cluster)); + d = XFS_AGB_TO_DADDR(mp, agno, agbno + + (j * mp->m_blocks_per_cluster)); fbuf = xfs_trans_get_buf(tp, mp->m_ddev_targp, d, - mp->m_bsize * blks_per_cluster, + mp->m_bsize * mp->m_blocks_per_cluster, XBF_UNMAPPED); if (!fbuf) return -ENOMEM; @@ -355,7 +353,7 @@ xfs_ialloc_inode_init( /* Initialize the inode buffers and log them appropriately. */ fbuf->b_ops = &xfs_inode_buf_ops; xfs_buf_zero(fbuf, 0, BBTOB(fbuf->b_length)); - for (i = 0; i < inodes_per_cluster; i++) { + for (i = 0; i < mp->m_inodes_per_cluster; i++) { int ioffset = i << mp->m_sb.sb_inodelog; uint isize = xfs_dinode_size(version); @@ -445,7 +443,7 @@ xfs_align_sparse_ino( return; /* calculate the inode offset and align startino */ - offset = mod << mp->m_sb.sb_inopblog; + offset = XFS_AGB_TO_AGINO(mp, mod); *startino -= offset; /* @@ -641,7 +639,7 @@ xfs_ialloc_ag_alloc( args.tp = tp; args.mp = tp->t_mountp; args.fsbno = NULLFSBLOCK; - xfs_rmap_ag_owner(&args.oinfo, XFS_RMAP_OWN_INODES); + args.oinfo = XFS_RMAP_OINFO_INODES; #ifdef DEBUG /* randomly do sparse inode allocations */ @@ -692,7 +690,7 @@ xfs_ialloc_ag_alloc( * but not to use them in the actual exact allocation. */ args.alignment = 1; - args.minalignslop = xfs_ialloc_cluster_alignment(args.mp) - 1; + args.minalignslop = args.mp->m_cluster_align - 1; /* Allow space for the inode btree to split. 
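 *
 * [Editor's note, not part of the patch: the xfs_ialloc.c changes
 * above stop recomputing cluster geometry in every caller and instead
 * read values cached on the mount structure (m_blocks_per_cluster,
 * m_inodes_per_cluster, m_cluster_align). A sketch of the
 * compute-once setup these hunks rely on; the exact mount-time
 * location is assumed, kernel context assumed:]
 *
 *	mp->m_blocks_per_cluster = xfs_icluster_size_fsb(mp);
 *	mp->m_inodes_per_cluster = XFS_FSB_TO_INO(mp,
 *						  mp->m_blocks_per_cluster);
 *	mp->m_cluster_align = xfs_ialloc_cluster_alignment(mp);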
*/ args.minleft = args.mp->m_in_maxlevels - 1; @@ -727,7 +725,7 @@ xfs_ialloc_ag_alloc( args.alignment = args.mp->m_dalign; isaligned = 1; } else - args.alignment = xfs_ialloc_cluster_alignment(args.mp); + args.alignment = args.mp->m_cluster_align; /* * Need to figure out where to allocate the inode blocks. * Ideally they should be spaced out through the a.g. @@ -756,7 +754,7 @@ xfs_ialloc_ag_alloc( args.type = XFS_ALLOCTYPE_NEAR_BNO; args.agbno = be32_to_cpu(agi->agi_root); args.fsbno = XFS_AGB_TO_FSB(args.mp, agno, args.agbno); - args.alignment = xfs_ialloc_cluster_alignment(args.mp); + args.alignment = args.mp->m_cluster_align; if ((error = xfs_alloc_vextent(&args))) return error; } @@ -797,7 +795,7 @@ sparse_alloc: if (error) return error; - newlen = args.len << args.mp->m_sb.sb_inopblog; + newlen = XFS_AGB_TO_AGINO(args.mp, args.len); ASSERT(newlen <= XFS_INODES_PER_CHUNK); allocmask = (1 << (newlen / XFS_INODES_PER_HOLEMASK_BIT)) - 1; } @@ -825,7 +823,7 @@ sparse_alloc: /* * Convert the results. */ - newino = XFS_OFFBNO_TO_AGINO(args.mp, args.agbno, 0); + newino = XFS_AGB_TO_AGINO(args.mp, args.agbno); if (xfs_inobt_issparse(~allocmask)) { /* @@ -1019,7 +1017,7 @@ xfs_ialloc_ag_select( */ ineed = mp->m_ialloc_min_blks; if (flags && ineed > 1) - ineed += xfs_ialloc_cluster_alignment(mp); + ineed += mp->m_cluster_align; longest = pag->pagf_longest; if (!longest) longest = pag->pagf_flcount > 0; @@ -1849,14 +1847,12 @@ xfs_difree_inode_chunk( int nextbit; xfs_agblock_t agbno; int contigblk; - struct xfs_owner_info oinfo; DECLARE_BITMAP(holemask, XFS_INOBT_HOLEMASK_BITS); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES); if (!xfs_inobt_issparse(rec->ir_holemask)) { /* not sparse, calculate extent info directly */ xfs_bmap_add_free(tp, XFS_AGB_TO_FSB(mp, agno, sagbno), - mp->m_ialloc_blks, &oinfo); + mp->m_ialloc_blks, &XFS_RMAP_OINFO_INODES); return; } @@ -1900,7 +1896,7 @@ xfs_difree_inode_chunk( ASSERT(agbno % mp->m_sb.sb_spino_align == 0); ASSERT(contigblk % mp->m_sb.sb_spino_align == 0); xfs_bmap_add_free(tp, XFS_AGB_TO_FSB(mp, agno, agbno), - contigblk, &oinfo); + contigblk, &XFS_RMAP_OINFO_INODES); /* reset range to current bit and carry on... */ startidx = endidx = nextbit; @@ -2292,7 +2288,6 @@ xfs_imap( xfs_agblock_t agbno; /* block number of inode in the alloc group */ xfs_agino_t agino; /* inode number within alloc group */ xfs_agnumber_t agno; /* allocation group number */ - int blks_per_cluster; /* num blocks per inode cluster */ xfs_agblock_t chunk_agbno; /* first block in inode chunk */ xfs_agblock_t cluster_agbno; /* first block in inode cluster */ int error; /* error code */ @@ -2338,8 +2333,6 @@ xfs_imap( return -EINVAL; } - blks_per_cluster = xfs_icluster_size_fsb(mp); - /* * For bulkstat and handle lookups, we have an untrusted inode number * that we have to verify is valid. We cannot do this just by reading @@ -2359,7 +2352,7 @@ xfs_imap( * If the inode cluster size is the same as the blocksize or * smaller we get to the buffer by simple arithmetics. 
*/ - if (blks_per_cluster == 1) { + if (mp->m_blocks_per_cluster == 1) { offset = XFS_INO_TO_OFFSET(mp, ino); ASSERT(offset < mp->m_sb.sb_inopblock); @@ -2388,12 +2381,13 @@ xfs_imap( out_map: ASSERT(agbno >= chunk_agbno); cluster_agbno = chunk_agbno + - ((offset_agbno / blks_per_cluster) * blks_per_cluster); + ((offset_agbno / mp->m_blocks_per_cluster) * + mp->m_blocks_per_cluster); offset = ((agbno - cluster_agbno) * mp->m_sb.sb_inopblock) + XFS_INO_TO_OFFSET(mp, ino); imap->im_blkno = XFS_AGB_TO_DADDR(mp, agno, cluster_agbno); - imap->im_len = XFS_FSB_TO_BB(mp, blks_per_cluster); + imap->im_len = XFS_FSB_TO_BB(mp, mp->m_blocks_per_cluster); imap->im_boffset = (unsigned short)(offset << mp->m_sb.sb_inodelog); /* @@ -2726,8 +2720,8 @@ xfs_ialloc_has_inodes_at_extent( xfs_agino_t low; xfs_agino_t high; - low = XFS_OFFBNO_TO_AGINO(cur->bc_mp, bno, 0); - high = XFS_OFFBNO_TO_AGINO(cur->bc_mp, bno + len, 0) - 1; + low = XFS_AGB_TO_AGINO(cur->bc_mp, bno); + high = XFS_AGB_TO_AGINO(cur->bc_mp, bno + len) - 1; return xfs_ialloc_has_inode_record(cur, low, high, exists); } diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c index 7fbf8af0b159..9b25e7a0df47 100644 --- a/fs/xfs/libxfs/xfs_ialloc_btree.c +++ b/fs/xfs/libxfs/xfs_ialloc_btree.c @@ -84,7 +84,7 @@ __xfs_inobt_alloc_block( memset(&args, 0, sizeof(args)); args.tp = cur->bc_tp; args.mp = cur->bc_mp; - xfs_rmap_ag_owner(&args.oinfo, XFS_RMAP_OWN_INOBT); + args.oinfo = XFS_RMAP_OINFO_INOBT; args.fsbno = XFS_AGB_TO_FSB(args.mp, cur->bc_private.a.agno, sbno); args.minlen = 1; args.maxlen = 1; @@ -136,12 +136,9 @@ __xfs_inobt_free_block( struct xfs_buf *bp, enum xfs_ag_resv_type resv) { - struct xfs_owner_info oinfo; - - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INOBT); return xfs_free_extent(cur->bc_tp, XFS_DADDR_TO_FSB(cur->bc_mp, XFS_BUF_ADDR(bp)), 1, - &oinfo, resv); + &XFS_RMAP_OINFO_INOBT, resv); } STATIC int diff --git a/fs/xfs/libxfs/xfs_refcount_btree.c b/fs/xfs/libxfs/xfs_refcount_btree.c index 1aaa01c97517..d9eab657b63e 100644 --- a/fs/xfs/libxfs/xfs_refcount_btree.c +++ b/fs/xfs/libxfs/xfs_refcount_btree.c @@ -70,7 +70,7 @@ xfs_refcountbt_alloc_block( args.type = XFS_ALLOCTYPE_NEAR_BNO; args.fsbno = XFS_AGB_TO_FSB(cur->bc_mp, cur->bc_private.a.agno, xfs_refc_block(args.mp)); - xfs_rmap_ag_owner(&args.oinfo, XFS_RMAP_OWN_REFC); + args.oinfo = XFS_RMAP_OINFO_REFC; args.minlen = args.maxlen = args.prod = 1; args.resv = XFS_AG_RESV_METADATA; @@ -106,15 +106,13 @@ xfs_refcountbt_free_block( struct xfs_buf *agbp = cur->bc_private.a.agbp; struct xfs_agf *agf = XFS_BUF_TO_AGF(agbp); xfs_fsblock_t fsbno = XFS_DADDR_TO_FSB(mp, XFS_BUF_ADDR(bp)); - struct xfs_owner_info oinfo; int error; trace_xfs_refcountbt_free_block(cur->bc_mp, cur->bc_private.a.agno, XFS_FSB_TO_AGBNO(cur->bc_mp, fsbno), 1); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_REFC); be32_add_cpu(&agf->agf_refcount_blocks, -1); xfs_alloc_log_agf(cur->bc_tp, agbp, XFS_AGF_REFCOUNT_BLOCKS); - error = xfs_free_extent(cur->bc_tp, fsbno, 1, &oinfo, + error = xfs_free_extent(cur->bc_tp, fsbno, 1, &XFS_RMAP_OINFO_REFC, XFS_AG_RESV_METADATA); if (error) return error; diff --git a/fs/xfs/libxfs/xfs_rmap.c b/fs/xfs/libxfs/xfs_rmap.c index 245af452840e..8ed885507dd8 100644 --- a/fs/xfs/libxfs/xfs_rmap.c +++ b/fs/xfs/libxfs/xfs_rmap.c @@ -458,21 +458,21 @@ out: */ STATIC int xfs_rmap_unmap( - struct xfs_btree_cur *cur, - xfs_agblock_t bno, - xfs_extlen_t len, - bool unwritten, - struct xfs_owner_info *oinfo) + struct xfs_btree_cur *cur, + xfs_agblock_t bno, + xfs_extlen_t 
len, + bool unwritten, + const struct xfs_owner_info *oinfo) { - struct xfs_mount *mp = cur->bc_mp; - struct xfs_rmap_irec ltrec; - uint64_t ltoff; - int error = 0; - int i; - uint64_t owner; - uint64_t offset; - unsigned int flags; - bool ignore_off; + struct xfs_mount *mp = cur->bc_mp; + struct xfs_rmap_irec ltrec; + uint64_t ltoff; + int error = 0; + int i; + uint64_t owner; + uint64_t offset; + unsigned int flags; + bool ignore_off; xfs_owner_info_unpack(oinfo, &owner, &offset, &flags); ignore_off = XFS_RMAP_NON_INODE_OWNER(owner) || @@ -653,16 +653,16 @@ out_error: */ int xfs_rmap_free( - struct xfs_trans *tp, - struct xfs_buf *agbp, - xfs_agnumber_t agno, - xfs_agblock_t bno, - xfs_extlen_t len, - struct xfs_owner_info *oinfo) + struct xfs_trans *tp, + struct xfs_buf *agbp, + xfs_agnumber_t agno, + xfs_agblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo) { - struct xfs_mount *mp = tp->t_mountp; - struct xfs_btree_cur *cur; - int error; + struct xfs_mount *mp = tp->t_mountp; + struct xfs_btree_cur *cur; + int error; if (!xfs_sb_version_hasrmapbt(&mp->m_sb)) return 0; @@ -710,23 +710,23 @@ xfs_rmap_is_mergeable( */ STATIC int xfs_rmap_map( - struct xfs_btree_cur *cur, - xfs_agblock_t bno, - xfs_extlen_t len, - bool unwritten, - struct xfs_owner_info *oinfo) + struct xfs_btree_cur *cur, + xfs_agblock_t bno, + xfs_extlen_t len, + bool unwritten, + const struct xfs_owner_info *oinfo) { - struct xfs_mount *mp = cur->bc_mp; - struct xfs_rmap_irec ltrec; - struct xfs_rmap_irec gtrec; - int have_gt; - int have_lt; - int error = 0; - int i; - uint64_t owner; - uint64_t offset; - unsigned int flags = 0; - bool ignore_off; + struct xfs_mount *mp = cur->bc_mp; + struct xfs_rmap_irec ltrec; + struct xfs_rmap_irec gtrec; + int have_gt; + int have_lt; + int error = 0; + int i; + uint64_t owner; + uint64_t offset; + unsigned int flags = 0; + bool ignore_off; xfs_owner_info_unpack(oinfo, &owner, &offset, &flags); ASSERT(owner != 0); @@ -890,16 +890,16 @@ out_error: */ int xfs_rmap_alloc( - struct xfs_trans *tp, - struct xfs_buf *agbp, - xfs_agnumber_t agno, - xfs_agblock_t bno, - xfs_extlen_t len, - struct xfs_owner_info *oinfo) + struct xfs_trans *tp, + struct xfs_buf *agbp, + xfs_agnumber_t agno, + xfs_agblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo) { - struct xfs_mount *mp = tp->t_mountp; - struct xfs_btree_cur *cur; - int error; + struct xfs_mount *mp = tp->t_mountp; + struct xfs_btree_cur *cur; + int error; if (!xfs_sb_version_hasrmapbt(&mp->m_sb)) return 0; @@ -929,16 +929,16 @@ xfs_rmap_alloc( */ STATIC int xfs_rmap_convert( - struct xfs_btree_cur *cur, - xfs_agblock_t bno, - xfs_extlen_t len, - bool unwritten, - struct xfs_owner_info *oinfo) + struct xfs_btree_cur *cur, + xfs_agblock_t bno, + xfs_extlen_t len, + bool unwritten, + const struct xfs_owner_info *oinfo) { - struct xfs_mount *mp = cur->bc_mp; - struct xfs_rmap_irec r[4]; /* neighbor extent entries */ - /* left is 0, right is 1, prev is 2 */ - /* new is 3 */ + struct xfs_mount *mp = cur->bc_mp; + struct xfs_rmap_irec r[4]; /* neighbor extent entries */ + /* left is 0, right is 1, */ + /* prev is 2, new is 3 */ uint64_t owner; uint64_t offset; uint64_t new_endoff; @@ -1354,16 +1354,16 @@ done: */ STATIC int xfs_rmap_convert_shared( - struct xfs_btree_cur *cur, - xfs_agblock_t bno, - xfs_extlen_t len, - bool unwritten, - struct xfs_owner_info *oinfo) + struct xfs_btree_cur *cur, + xfs_agblock_t bno, + xfs_extlen_t len, + bool unwritten, + const struct xfs_owner_info *oinfo) { - struct 
xfs_mount *mp = cur->bc_mp; - struct xfs_rmap_irec r[4]; /* neighbor extent entries */ - /* left is 0, right is 1, prev is 2 */ - /* new is 3 */ + struct xfs_mount *mp = cur->bc_mp; + struct xfs_rmap_irec r[4]; /* neighbor extent entries */ + /* left is 0, right is 1, */ + /* prev is 2, new is 3 */ uint64_t owner; uint64_t offset; uint64_t new_endoff; @@ -1743,20 +1743,20 @@ done: */ STATIC int xfs_rmap_unmap_shared( - struct xfs_btree_cur *cur, - xfs_agblock_t bno, - xfs_extlen_t len, - bool unwritten, - struct xfs_owner_info *oinfo) + struct xfs_btree_cur *cur, + xfs_agblock_t bno, + xfs_extlen_t len, + bool unwritten, + const struct xfs_owner_info *oinfo) { - struct xfs_mount *mp = cur->bc_mp; - struct xfs_rmap_irec ltrec; - uint64_t ltoff; - int error = 0; - int i; - uint64_t owner; - uint64_t offset; - unsigned int flags; + struct xfs_mount *mp = cur->bc_mp; + struct xfs_rmap_irec ltrec; + uint64_t ltoff; + int error = 0; + int i; + uint64_t owner; + uint64_t offset; + unsigned int flags; xfs_owner_info_unpack(oinfo, &owner, &offset, &flags); if (unwritten) @@ -1905,22 +1905,22 @@ out_error: */ STATIC int xfs_rmap_map_shared( - struct xfs_btree_cur *cur, - xfs_agblock_t bno, - xfs_extlen_t len, - bool unwritten, - struct xfs_owner_info *oinfo) + struct xfs_btree_cur *cur, + xfs_agblock_t bno, + xfs_extlen_t len, + bool unwritten, + const struct xfs_owner_info *oinfo) { - struct xfs_mount *mp = cur->bc_mp; - struct xfs_rmap_irec ltrec; - struct xfs_rmap_irec gtrec; - int have_gt; - int have_lt; - int error = 0; - int i; - uint64_t owner; - uint64_t offset; - unsigned int flags = 0; + struct xfs_mount *mp = cur->bc_mp; + struct xfs_rmap_irec ltrec; + struct xfs_rmap_irec gtrec; + int have_gt; + int have_lt; + int error = 0; + int i; + uint64_t owner; + uint64_t offset; + unsigned int flags = 0; xfs_owner_info_unpack(oinfo, &owner, &offset, &flags); if (unwritten) @@ -2459,18 +2459,18 @@ xfs_rmap_has_record( */ int xfs_rmap_record_exists( - struct xfs_btree_cur *cur, - xfs_agblock_t bno, - xfs_extlen_t len, - struct xfs_owner_info *oinfo, - bool *has_rmap) + struct xfs_btree_cur *cur, + xfs_agblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo, + bool *has_rmap) { - uint64_t owner; - uint64_t offset; - unsigned int flags; - int has_record; - struct xfs_rmap_irec irec; - int error; + uint64_t owner; + uint64_t offset; + unsigned int flags; + int has_record; + struct xfs_rmap_irec irec; + int error; xfs_owner_info_unpack(oinfo, &owner, &offset, &flags); ASSERT(XFS_RMAP_NON_INODE_OWNER(owner) || @@ -2530,7 +2530,7 @@ xfs_rmap_has_other_keys( struct xfs_btree_cur *cur, xfs_agblock_t bno, xfs_extlen_t len, - struct xfs_owner_info *oinfo, + const struct xfs_owner_info *oinfo, bool *has_rmap) { struct xfs_rmap_irec low = {0}; @@ -2550,3 +2550,31 @@ xfs_rmap_has_other_keys( *has_rmap = rks.has_rmap; return error; } + +const struct xfs_owner_info XFS_RMAP_OINFO_SKIP_UPDATE = { + .oi_owner = XFS_RMAP_OWN_NULL, +}; +const struct xfs_owner_info XFS_RMAP_OINFO_ANY_OWNER = { + .oi_owner = XFS_RMAP_OWN_UNKNOWN, +}; +const struct xfs_owner_info XFS_RMAP_OINFO_FS = { + .oi_owner = XFS_RMAP_OWN_FS, +}; +const struct xfs_owner_info XFS_RMAP_OINFO_LOG = { + .oi_owner = XFS_RMAP_OWN_LOG, +}; +const struct xfs_owner_info XFS_RMAP_OINFO_AG = { + .oi_owner = XFS_RMAP_OWN_AG, +}; +const struct xfs_owner_info XFS_RMAP_OINFO_INOBT = { + .oi_owner = XFS_RMAP_OWN_INOBT, +}; +const struct xfs_owner_info XFS_RMAP_OINFO_INODES = { + .oi_owner = XFS_RMAP_OWN_INODES, +}; +const struct xfs_owner_info 
XFS_RMAP_OINFO_REFC = { + .oi_owner = XFS_RMAP_OWN_REFC, +}; +const struct xfs_owner_info XFS_RMAP_OINFO_COW = { + .oi_owner = XFS_RMAP_OWN_COW, +}; diff --git a/fs/xfs/libxfs/xfs_rmap.h b/fs/xfs/libxfs/xfs_rmap.h index 157dc722ad35..e21ed0294e5c 100644 --- a/fs/xfs/libxfs/xfs_rmap.h +++ b/fs/xfs/libxfs/xfs_rmap.h @@ -6,16 +6,6 @@ #ifndef __XFS_RMAP_H__ #define __XFS_RMAP_H__ -static inline void -xfs_rmap_ag_owner( - struct xfs_owner_info *oi, - uint64_t owner) -{ - oi->oi_owner = owner; - oi->oi_offset = 0; - oi->oi_flags = 0; -} - static inline void xfs_rmap_ino_bmbt_owner( struct xfs_owner_info *oi, @@ -43,27 +33,13 @@ xfs_rmap_ino_owner( oi->oi_flags |= XFS_OWNER_INFO_ATTR_FORK; } -static inline void -xfs_rmap_skip_owner_update( - struct xfs_owner_info *oi) -{ - xfs_rmap_ag_owner(oi, XFS_RMAP_OWN_NULL); -} - static inline bool xfs_rmap_should_skip_owner_update( - struct xfs_owner_info *oi) + const struct xfs_owner_info *oi) { return oi->oi_owner == XFS_RMAP_OWN_NULL; } -static inline void -xfs_rmap_any_owner_update( - struct xfs_owner_info *oi) -{ - xfs_rmap_ag_owner(oi, XFS_RMAP_OWN_UNKNOWN); -} - /* Reverse mapping functions. */ struct xfs_buf; @@ -103,12 +79,12 @@ xfs_rmap_irec_offset_unpack( static inline void xfs_owner_info_unpack( - struct xfs_owner_info *oinfo, - uint64_t *owner, - uint64_t *offset, - unsigned int *flags) + const struct xfs_owner_info *oinfo, + uint64_t *owner, + uint64_t *offset, + unsigned int *flags) { - unsigned int r = 0; + unsigned int r = 0; *owner = oinfo->oi_owner; *offset = oinfo->oi_offset; @@ -137,10 +113,10 @@ xfs_owner_info_pack( int xfs_rmap_alloc(struct xfs_trans *tp, struct xfs_buf *agbp, xfs_agnumber_t agno, xfs_agblock_t bno, xfs_extlen_t len, - struct xfs_owner_info *oinfo); + const struct xfs_owner_info *oinfo); int xfs_rmap_free(struct xfs_trans *tp, struct xfs_buf *agbp, xfs_agnumber_t agno, xfs_agblock_t bno, xfs_extlen_t len, - struct xfs_owner_info *oinfo); + const struct xfs_owner_info *oinfo); int xfs_rmap_lookup_le(struct xfs_btree_cur *cur, xfs_agblock_t bno, xfs_extlen_t len, uint64_t owner, uint64_t offset, @@ -218,11 +194,21 @@ int xfs_rmap_btrec_to_irec(union xfs_btree_rec *rec, int xfs_rmap_has_record(struct xfs_btree_cur *cur, xfs_agblock_t bno, xfs_extlen_t len, bool *exists); int xfs_rmap_record_exists(struct xfs_btree_cur *cur, xfs_agblock_t bno, - xfs_extlen_t len, struct xfs_owner_info *oinfo, + xfs_extlen_t len, const struct xfs_owner_info *oinfo, bool *has_rmap); int xfs_rmap_has_other_keys(struct xfs_btree_cur *cur, xfs_agblock_t bno, - xfs_extlen_t len, struct xfs_owner_info *oinfo, + xfs_extlen_t len, const struct xfs_owner_info *oinfo, bool *has_rmap); int xfs_rmap_map_raw(struct xfs_btree_cur *cur, struct xfs_rmap_irec *rmap); +extern const struct xfs_owner_info XFS_RMAP_OINFO_SKIP_UPDATE; +extern const struct xfs_owner_info XFS_RMAP_OINFO_ANY_OWNER; +extern const struct xfs_owner_info XFS_RMAP_OINFO_FS; +extern const struct xfs_owner_info XFS_RMAP_OINFO_LOG; +extern const struct xfs_owner_info XFS_RMAP_OINFO_AG; +extern const struct xfs_owner_info XFS_RMAP_OINFO_INOBT; +extern const struct xfs_owner_info XFS_RMAP_OINFO_INODES; +extern const struct xfs_owner_info XFS_RMAP_OINFO_REFC; +extern const struct xfs_owner_info XFS_RMAP_OINFO_COW; + #endif /* __XFS_RMAP_H__ */ diff --git a/fs/xfs/libxfs/xfs_rtbitmap.c b/fs/xfs/libxfs/xfs_rtbitmap.c index b228c821bae6..eaaff67e9626 100644 --- a/fs/xfs/libxfs/xfs_rtbitmap.c +++ b/fs/xfs/libxfs/xfs_rtbitmap.c @@ -505,6 +505,12 @@ xfs_rtmodify_summary_int( uint first = 
(uint)((char *)sp - (char *)bp->b_addr); *sp += delta; + if (mp->m_rsum_cache) { + if (*sp == 0 && log == mp->m_rsum_cache[bbno]) + mp->m_rsum_cache[bbno]++; + if (*sp != 0 && log < mp->m_rsum_cache[bbno]) + mp->m_rsum_cache[bbno] = log; + } xfs_trans_log_buf(tp, bp, first, first + sizeof(*sp) - 1); } if (sum) diff --git a/fs/xfs/libxfs/xfs_symlink_remote.c b/fs/xfs/libxfs/xfs_symlink_remote.c index 95374ab2dee7..77d80106f989 100644 --- a/fs/xfs/libxfs/xfs_symlink_remote.c +++ b/fs/xfs/libxfs/xfs_symlink_remote.c @@ -199,7 +199,10 @@ xfs_symlink_local_to_remote( ifp->if_bytes - 1); } -/* Verify the consistency of an inline symlink. */ +/* + * Verify the in-memory consistency of an inline symlink data fork. This + * does not do on-disk format checks. + */ xfs_failaddr_t xfs_symlink_shortform_verify( struct xfs_inode *ip) @@ -215,9 +218,12 @@ xfs_symlink_shortform_verify( size = ifp->if_bytes; endp = sfp + size; - /* Zero length symlinks can exist while we're deleting a remote one. */ - if (size == 0) - return NULL; + /* + * Zero length symlinks should never occur in memory as they are + * never allowed to exist on disk. + */ + if (!size) + return __this_address; /* No negative sizes or overly long symlink targets. */ if (size < 0 || size > XFS_SYMLINK_MAXLEN) diff --git a/fs/xfs/libxfs/xfs_types.c b/fs/xfs/libxfs/xfs_types.c index 33a5ca346baf..3306fc42cfad 100644 --- a/fs/xfs/libxfs/xfs_types.c +++ b/fs/xfs/libxfs/xfs_types.c @@ -87,16 +87,15 @@ xfs_agino_range( * Calculate the first inode, which will be in the first * cluster-aligned block after the AGFL. */ - bno = round_up(XFS_AGFL_BLOCK(mp) + 1, - xfs_ialloc_cluster_alignment(mp)); - *first = XFS_OFFBNO_TO_AGINO(mp, bno, 0); + bno = round_up(XFS_AGFL_BLOCK(mp) + 1, mp->m_cluster_align); + *first = XFS_AGB_TO_AGINO(mp, bno); /* * Calculate the last inode, which will be at the end of the * last (aligned) cluster that can be allocated in the AG. */ - bno = round_down(eoag, xfs_ialloc_cluster_alignment(mp)); - *last = XFS_OFFBNO_TO_AGINO(mp, bno, 0) - 1; + bno = round_down(eoag, mp->m_cluster_align); + *last = XFS_AGB_TO_AGINO(mp, bno) - 1; } /* diff --git a/fs/xfs/libxfs/xfs_types.h b/fs/xfs/libxfs/xfs_types.h index b9e6c89284c3..8f02855a019a 100644 --- a/fs/xfs/libxfs/xfs_types.h +++ b/fs/xfs/libxfs/xfs_types.h @@ -100,15 +100,37 @@ typedef void * xfs_failaddr_t; */ #define MAXNAMELEN 256 +/* + * This enum is used in string mapping in xfs_trace.h; please keep the + * TRACE_DEFINE_ENUMs for it up to date. + */ typedef enum { XFS_LOOKUP_EQi, XFS_LOOKUP_LEi, XFS_LOOKUP_GEi } xfs_lookup_t; +#define XFS_AG_BTREE_CMP_FORMAT_STR \ + { XFS_LOOKUP_EQi, "eq" }, \ + { XFS_LOOKUP_LEi, "le" }, \ + { XFS_LOOKUP_GEi, "ge" } + +/* + * This enum is used in string mapping in xfs_trace.h and scrub/trace.h; + * please keep the TRACE_DEFINE_ENUMs for it up to date.
+ */ typedef enum { XFS_BTNUM_BNOi, XFS_BTNUM_CNTi, XFS_BTNUM_RMAPi, XFS_BTNUM_BMAPi, XFS_BTNUM_INOi, XFS_BTNUM_FINOi, XFS_BTNUM_REFCi, XFS_BTNUM_MAX } xfs_btnum_t; +#define XFS_BTNUM_STRINGS \ + { XFS_BTNUM_BNOi, "bnobt" }, \ + { XFS_BTNUM_CNTi, "cntbt" }, \ + { XFS_BTNUM_RMAPi, "rmapbt" }, \ + { XFS_BTNUM_BMAPi, "bmbt" }, \ + { XFS_BTNUM_INOi, "inobt" }, \ + { XFS_BTNUM_FINOi, "finobt" }, \ + { XFS_BTNUM_REFCi, "refcbt" } + struct xfs_name { const unsigned char *name; int len; diff --git a/fs/xfs/scrub/agheader.c b/fs/xfs/scrub/agheader.c index 3068a9382feb..90955ab1e895 100644 --- a/fs/xfs/scrub/agheader.c +++ b/fs/xfs/scrub/agheader.c @@ -32,7 +32,6 @@ xchk_superblock_xref( struct xfs_scrub *sc, struct xfs_buf *bp) { - struct xfs_owner_info oinfo; struct xfs_mount *mp = sc->mp; xfs_agnumber_t agno = sc->sm->sm_agno; xfs_agblock_t agbno; @@ -49,8 +48,7 @@ xchk_superblock_xref( xchk_xref_is_used_space(sc, agbno, 1); xchk_xref_is_not_inode_chunk(sc, agbno, 1); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_FS); - xchk_xref_is_owned_by(sc, agbno, 1, &oinfo); + xchk_xref_is_owned_by(sc, agbno, 1, &XFS_RMAP_OINFO_FS); xchk_xref_is_not_shared(sc, agbno, 1); /* scrub teardown will take care of sc->sa for us */ @@ -484,7 +482,6 @@ STATIC void xchk_agf_xref( struct xfs_scrub *sc) { - struct xfs_owner_info oinfo; struct xfs_mount *mp = sc->mp; xfs_agblock_t agbno; int error; @@ -502,8 +499,7 @@ xchk_agf_xref( xchk_agf_xref_freeblks(sc); xchk_agf_xref_cntbt(sc); xchk_xref_is_not_inode_chunk(sc, agbno, 1); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_FS); - xchk_xref_is_owned_by(sc, agbno, 1, &oinfo); + xchk_xref_is_owned_by(sc, agbno, 1, &XFS_RMAP_OINFO_FS); xchk_agf_xref_btreeblks(sc); xchk_xref_is_not_shared(sc, agbno, 1); xchk_agf_xref_refcblks(sc); @@ -598,7 +594,6 @@ out: /* AGFL */ struct xchk_agfl_info { - struct xfs_owner_info oinfo; unsigned int sz_entries; unsigned int nr_entries; xfs_agblock_t *entries; @@ -609,15 +604,14 @@ struct xchk_agfl_info { STATIC void xchk_agfl_block_xref( struct xfs_scrub *sc, - xfs_agblock_t agbno, - struct xfs_owner_info *oinfo) + xfs_agblock_t agbno) { if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT) return; xchk_xref_is_used_space(sc, agbno, 1); xchk_xref_is_not_inode_chunk(sc, agbno, 1); - xchk_xref_is_owned_by(sc, agbno, 1, oinfo); + xchk_xref_is_owned_by(sc, agbno, 1, &XFS_RMAP_OINFO_AG); xchk_xref_is_not_shared(sc, agbno, 1); } @@ -638,7 +632,7 @@ xchk_agfl_block( else xchk_block_set_corrupt(sc, sc->sa.agfl_bp); - xchk_agfl_block_xref(sc, agbno, priv); + xchk_agfl_block_xref(sc, agbno); if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT) return XFS_BTREE_QUERY_RANGE_ABORT; @@ -662,7 +656,6 @@ STATIC void xchk_agfl_xref( struct xfs_scrub *sc) { - struct xfs_owner_info oinfo; struct xfs_mount *mp = sc->mp; xfs_agblock_t agbno; int error; @@ -678,8 +671,7 @@ xchk_agfl_xref( xchk_xref_is_used_space(sc, agbno, 1); xchk_xref_is_not_inode_chunk(sc, agbno, 1); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_FS); - xchk_xref_is_owned_by(sc, agbno, 1, &oinfo); + xchk_xref_is_owned_by(sc, agbno, 1, &XFS_RMAP_OINFO_FS); xchk_xref_is_not_shared(sc, agbno, 1); /* @@ -732,7 +724,6 @@ xchk_agfl( } /* Check the blocks in the AGFL. 
*/ - xfs_rmap_ag_owner(&sai.oinfo, XFS_RMAP_OWN_AG); error = xfs_agfl_walk(sc->mp, XFS_BUF_TO_AGF(sc->sa.agf_bp), sc->sa.agfl_bp, xchk_agfl_block, &sai); if (error == XFS_BTREE_QUERY_RANGE_ABORT) { @@ -791,7 +782,6 @@ STATIC void xchk_agi_xref( struct xfs_scrub *sc) { - struct xfs_owner_info oinfo; struct xfs_mount *mp = sc->mp; xfs_agblock_t agbno; int error; @@ -808,8 +798,7 @@ xchk_agi_xref( xchk_xref_is_used_space(sc, agbno, 1); xchk_xref_is_not_inode_chunk(sc, agbno, 1); xchk_agi_xref_icounts(sc); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_FS); - xchk_xref_is_owned_by(sc, agbno, 1, &oinfo); + xchk_xref_is_owned_by(sc, agbno, 1, &XFS_RMAP_OINFO_FS); xchk_xref_is_not_shared(sc, agbno, 1); /* scrub teardown will take care of sc->sa for us */ diff --git a/fs/xfs/scrub/agheader_repair.c b/fs/xfs/scrub/agheader_repair.c index f7568a4b5fe5..03d1e15cceba 100644 --- a/fs/xfs/scrub/agheader_repair.c +++ b/fs/xfs/scrub/agheader_repair.c @@ -646,7 +646,6 @@ int xrep_agfl( struct xfs_scrub *sc) { - struct xfs_owner_info oinfo; struct xfs_bitmap agfl_extents; struct xfs_mount *mp = sc->mp; struct xfs_buf *agf_bp; @@ -708,8 +707,8 @@ xrep_agfl( goto err; /* Dump any AGFL overflow. */ - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG); - return xrep_reap_extents(sc, &agfl_extents, &oinfo, XFS_AG_RESV_AGFL); + return xrep_reap_extents(sc, &agfl_extents, &XFS_RMAP_OINFO_AG, + XFS_AG_RESV_AGFL); err: xfs_bitmap_destroy(&agfl_extents); return error; diff --git a/fs/xfs/scrub/alloc.c b/fs/xfs/scrub/alloc.c index 376bcb585ae6..44883e9112ad 100644 --- a/fs/xfs/scrub/alloc.c +++ b/fs/xfs/scrub/alloc.c @@ -125,12 +125,10 @@ xchk_allocbt( struct xfs_scrub *sc, xfs_btnum_t which) { - struct xfs_owner_info oinfo; struct xfs_btree_cur *cur; - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG); cur = which == XFS_BTNUM_BNO ? 
sc->sa.bno_cur : sc->sa.cnt_cur; - return xchk_btree(sc, cur, xchk_allocbt_rec, &oinfo, NULL); + return xchk_btree(sc, cur, xchk_allocbt_rec, &XFS_RMAP_OINFO_AG, NULL); } int diff --git a/fs/xfs/scrub/btree.c b/fs/xfs/scrub/btree.c index 4ae959f7ad2c..6f94d1f7322d 100644 --- a/fs/xfs/scrub/btree.c +++ b/fs/xfs/scrub/btree.c @@ -583,31 +583,32 @@ xchk_btree_block_keys( */ int xchk_btree( - struct xfs_scrub *sc, - struct xfs_btree_cur *cur, - xchk_btree_rec_fn scrub_fn, - struct xfs_owner_info *oinfo, - void *private) + struct xfs_scrub *sc, + struct xfs_btree_cur *cur, + xchk_btree_rec_fn scrub_fn, + const struct xfs_owner_info *oinfo, + void *private) { - struct xchk_btree bs = { NULL }; - union xfs_btree_ptr ptr; - union xfs_btree_ptr *pp; - union xfs_btree_rec *recp; - struct xfs_btree_block *block; - int level; - struct xfs_buf *bp; - struct check_owner *co; - struct check_owner *n; - int i; - int error = 0; + struct xchk_btree bs = { + .cur = cur, + .scrub_rec = scrub_fn, + .oinfo = oinfo, + .firstrec = true, + .private = private, + .sc = sc, + }; + union xfs_btree_ptr ptr; + union xfs_btree_ptr *pp; + union xfs_btree_rec *recp; + struct xfs_btree_block *block; + int level; + struct xfs_buf *bp; + struct check_owner *co; + struct check_owner *n; + int i; + int error = 0; /* Initialize scrub state */ - bs.cur = cur; - bs.scrub_rec = scrub_fn; - bs.oinfo = oinfo; - bs.firstrec = true; - bs.private = private; - bs.sc = sc; for (i = 0; i < XFS_BTREE_MAXLEVELS; i++) bs.firstkey[i] = true; INIT_LIST_HEAD(&bs.to_check); diff --git a/fs/xfs/scrub/btree.h b/fs/xfs/scrub/btree.h index aada763cd006..5572e475f8ed 100644 --- a/fs/xfs/scrub/btree.h +++ b/fs/xfs/scrub/btree.h @@ -31,21 +31,21 @@ typedef int (*xchk_btree_rec_fn)( struct xchk_btree { /* caller-provided scrub state */ - struct xfs_scrub *sc; - struct xfs_btree_cur *cur; - xchk_btree_rec_fn scrub_rec; - struct xfs_owner_info *oinfo; - void *private; + struct xfs_scrub *sc; + struct xfs_btree_cur *cur; + xchk_btree_rec_fn scrub_rec; + const struct xfs_owner_info *oinfo; + void *private; /* internal scrub state */ - union xfs_btree_rec lastrec; - bool firstrec; - union xfs_btree_key lastkey[XFS_BTREE_MAXLEVELS]; - bool firstkey[XFS_BTREE_MAXLEVELS]; - struct list_head to_check; + union xfs_btree_rec lastrec; + bool firstrec; + union xfs_btree_key lastkey[XFS_BTREE_MAXLEVELS]; + bool firstkey[XFS_BTREE_MAXLEVELS]; + struct list_head to_check; }; int xchk_btree(struct xfs_scrub *sc, struct xfs_btree_cur *cur, - xchk_btree_rec_fn scrub_fn, struct xfs_owner_info *oinfo, + xchk_btree_rec_fn scrub_fn, const struct xfs_owner_info *oinfo, void *private); #endif /* __XFS_SCRUB_BTREE_H__ */ diff --git a/fs/xfs/scrub/common.c b/fs/xfs/scrub/common.c index 346b02abccf7..0c54ff55b901 100644 --- a/fs/xfs/scrub/common.c +++ b/fs/xfs/scrub/common.c @@ -313,8 +313,8 @@ xchk_set_incomplete( */ struct xchk_rmap_ownedby_info { - struct xfs_owner_info *oinfo; - xfs_filblks_t *blocks; + const struct xfs_owner_info *oinfo; + xfs_filblks_t *blocks; }; STATIC int @@ -347,15 +347,15 @@ int xchk_count_rmap_ownedby_ag( struct xfs_scrub *sc, struct xfs_btree_cur *cur, - struct xfs_owner_info *oinfo, + const struct xfs_owner_info *oinfo, xfs_filblks_t *blocks) { - struct xchk_rmap_ownedby_info sroi; + struct xchk_rmap_ownedby_info sroi = { + .oinfo = oinfo, + .blocks = blocks, + }; - sroi.oinfo = oinfo; *blocks = 0; - sroi.blocks = blocks; - return xfs_rmap_query_all(cur, xchk_count_rmap_ownedby_irec, &sroi); } diff --git a/fs/xfs/scrub/common.h b/fs/xfs/scrub/common.h 
index 2d4324d12f9a..e26a430bd466 100644 --- a/fs/xfs/scrub/common.h +++ b/fs/xfs/scrub/common.h @@ -116,7 +116,7 @@ int xchk_ag_read_headers(struct xfs_scrub *sc, xfs_agnumber_t agno, void xchk_ag_btcur_free(struct xchk_ag *sa); int xchk_ag_btcur_init(struct xfs_scrub *sc, struct xchk_ag *sa); int xchk_count_rmap_ownedby_ag(struct xfs_scrub *sc, struct xfs_btree_cur *cur, - struct xfs_owner_info *oinfo, xfs_filblks_t *blocks); + const struct xfs_owner_info *oinfo, xfs_filblks_t *blocks); int xchk_setup_ag_btree(struct xfs_scrub *sc, struct xfs_inode *ip, bool force_log); diff --git a/fs/xfs/scrub/ialloc.c b/fs/xfs/scrub/ialloc.c index 224dba937492..882dc56c5c21 100644 --- a/fs/xfs/scrub/ialloc.c +++ b/fs/xfs/scrub/ialloc.c @@ -44,6 +44,11 @@ xchk_setup_ag_iallocbt( /* Inode btree scrubber. */ +struct xchk_iallocbt { + /* Number of inodes we see while scanning inobt. */ + unsigned long long inodes; +}; + /* * If we're checking the finobt, cross-reference with the inobt. * Otherwise we're checking the inobt; if there is an finobt, make sure @@ -82,15 +87,12 @@ xchk_iallocbt_chunk_xref( xfs_agblock_t agbno, xfs_extlen_t len) { - struct xfs_owner_info oinfo; - if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT) return; xchk_xref_is_used_space(sc, agbno, len); xchk_iallocbt_chunk_xref_other(sc, irec, agino); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES); - xchk_xref_is_owned_by(sc, agbno, len, &oinfo); + xchk_xref_is_owned_by(sc, agbno, len, &XFS_RMAP_OINFO_INODES); xchk_xref_is_not_shared(sc, agbno, len); } @@ -186,7 +188,6 @@ xchk_iallocbt_check_freemask( struct xchk_btree *bs, struct xfs_inobt_rec_incore *irec) { - struct xfs_owner_info oinfo; struct xfs_imap imap; struct xfs_mount *mp = bs->cur->bc_mp; struct xfs_dinode *dip; @@ -197,19 +198,16 @@ xchk_iallocbt_check_freemask( xfs_agino_t chunkino; xfs_agino_t clusterino; xfs_agblock_t agbno; - int blks_per_cluster; uint16_t holemask; uint16_t ir_holemask; int error = 0; /* Make sure the freemask matches the inode records. */ - blks_per_cluster = xfs_icluster_size_fsb(mp); - nr_inodes = XFS_OFFBNO_TO_AGINO(mp, blks_per_cluster, 0); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES); + nr_inodes = mp->m_inodes_per_cluster; for (agino = irec->ir_startino; agino < irec->ir_startino + XFS_INODES_PER_CHUNK; - agino += blks_per_cluster * mp->m_sb.sb_inopblock) { + agino += mp->m_inodes_per_cluster) { fsino = XFS_AGINO_TO_INO(mp, bs->cur->bc_private.a.agno, agino); chunkino = agino - irec->ir_startino; agbno = XFS_AGINO_TO_AGBNO(mp, agino); @@ -230,17 +228,18 @@ xchk_iallocbt_check_freemask( /* If any part of this is a hole, skip it. */ if (ir_holemask) { xchk_xref_is_not_owned_by(bs->sc, agbno, - blks_per_cluster, &oinfo); + mp->m_blocks_per_cluster, + &XFS_RMAP_OINFO_INODES); continue; } - xchk_xref_is_owned_by(bs->sc, agbno, blks_per_cluster, - &oinfo); + xchk_xref_is_owned_by(bs->sc, agbno, mp->m_blocks_per_cluster, + &XFS_RMAP_OINFO_INODES); /* Grab the inode cluster buffer. 
*/ imap.im_blkno = XFS_AGB_TO_DADDR(mp, bs->cur->bc_private.a.agno, agbno); - imap.im_len = XFS_FSB_TO_BB(mp, blks_per_cluster); + imap.im_len = XFS_FSB_TO_BB(mp, mp->m_blocks_per_cluster); imap.im_boffset = 0; error = xfs_imap_to_bp(mp, bs->cur->bc_tp, &imap, @@ -272,7 +271,7 @@ xchk_iallocbt_rec( union xfs_btree_rec *rec) { struct xfs_mount *mp = bs->cur->bc_mp; - xfs_filblks_t *inode_blocks = bs->private; + struct xchk_iallocbt *iabt = bs->private; struct xfs_inobt_rec_incore irec; uint64_t holes; xfs_agnumber_t agno = bs->cur->bc_private.a.agno; @@ -306,12 +305,11 @@ xchk_iallocbt_rec( /* Make sure this record is aligned to cluster and inoalignmnt size. */ agbno = XFS_AGINO_TO_AGBNO(mp, irec.ir_startino); - if ((agbno & (xfs_ialloc_cluster_alignment(mp) - 1)) || - (agbno & (xfs_icluster_size_fsb(mp) - 1))) + if ((agbno & (mp->m_cluster_align - 1)) || + (agbno & (mp->m_blocks_per_cluster - 1))) xchk_btree_set_corrupt(bs->sc, bs->cur, 0); - *inode_blocks += XFS_B_TO_FSB(mp, - irec.ir_count * mp->m_sb.sb_inodesize); + iabt->inodes += irec.ir_count; /* Handle non-sparse inodes */ if (!xfs_inobt_issparse(irec.ir_holemask)) { @@ -366,7 +364,6 @@ xchk_iallocbt_xref_rmap_btreeblks( struct xfs_scrub *sc, int which) { - struct xfs_owner_info oinfo; xfs_filblks_t blocks; xfs_extlen_t inobt_blocks = 0; xfs_extlen_t finobt_blocks = 0; @@ -388,9 +385,8 @@ xchk_iallocbt_xref_rmap_btreeblks( return; } - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INOBT); - error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, &oinfo, - &blocks); + error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, + &XFS_RMAP_OINFO_INOBT, &blocks); if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur)) return; if (blocks != inobt_blocks + finobt_blocks) @@ -405,21 +401,21 @@ STATIC void xchk_iallocbt_xref_rmap_inodes( struct xfs_scrub *sc, int which, - xfs_filblks_t inode_blocks) + unsigned long long inodes) { - struct xfs_owner_info oinfo; xfs_filblks_t blocks; + xfs_filblks_t inode_blocks; int error; if (!sc->sa.rmap_cur || xchk_skip_xref(sc->sm)) return; /* Check that we saw as many inode blocks as the rmap knows about. */ - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES); - error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, &oinfo, - &blocks); + error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, + &XFS_RMAP_OINFO_INODES, &blocks); if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur)) return; + inode_blocks = XFS_B_TO_FSB(sc->mp, inodes * sc->mp->m_sb.sb_inodesize); if (blocks != inode_blocks) xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0); } @@ -431,14 +427,14 @@ xchk_iallocbt( xfs_btnum_t which) { struct xfs_btree_cur *cur; - struct xfs_owner_info oinfo; - xfs_filblks_t inode_blocks = 0; + struct xchk_iallocbt iabt = { + .inodes = 0, + }; int error; - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INOBT); cur = which == XFS_BTNUM_INO ? sc->sa.ino_cur : sc->sa.fino_cur; - error = xchk_btree(sc, cur, xchk_iallocbt_rec, &oinfo, - &inode_blocks); + error = xchk_btree(sc, cur, xchk_iallocbt_rec, &XFS_RMAP_OINFO_INOBT, + &iabt); if (error) return error; @@ -452,7 +448,7 @@ xchk_iallocbt( * to inode chunks with free inodes. 
*/ if (which == XFS_BTNUM_INO) - xchk_iallocbt_xref_rmap_inodes(sc, which, inode_blocks); + xchk_iallocbt_xref_rmap_inodes(sc, which, iabt.inodes); return error; } diff --git a/fs/xfs/scrub/inode.c b/fs/xfs/scrub/inode.c index e386c9b0b4ab..e213efc194a1 100644 --- a/fs/xfs/scrub/inode.c +++ b/fs/xfs/scrub/inode.c @@ -509,7 +509,6 @@ xchk_inode_xref( xfs_ino_t ino, struct xfs_dinode *dip) { - struct xfs_owner_info oinfo; xfs_agnumber_t agno; xfs_agblock_t agbno; int error; @@ -526,8 +525,7 @@ xchk_inode_xref( xchk_xref_is_used_space(sc, agbno, 1); xchk_inode_xref_finobt(sc, ino); - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES); - xchk_xref_is_owned_by(sc, agbno, 1, &oinfo); + xchk_xref_is_owned_by(sc, agbno, 1, &XFS_RMAP_OINFO_INODES); xchk_xref_is_not_shared(sc, agbno, 1); xchk_inode_xref_bmap(sc, dip); diff --git a/fs/xfs/scrub/refcount.c b/fs/xfs/scrub/refcount.c index e8c82b026083..708b4158eb90 100644 --- a/fs/xfs/scrub/refcount.c +++ b/fs/xfs/scrub/refcount.c @@ -383,7 +383,6 @@ xchk_refcountbt_rec( STATIC void xchk_refcount_xref_rmap( struct xfs_scrub *sc, - struct xfs_owner_info *oinfo, xfs_filblks_t cow_blocks) { xfs_extlen_t refcbt_blocks = 0; @@ -397,17 +396,16 @@ xchk_refcount_xref_rmap( error = xfs_btree_count_blocks(sc->sa.refc_cur, &refcbt_blocks); if (!xchk_btree_process_error(sc, sc->sa.refc_cur, 0, &error)) return; - error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, oinfo, - &blocks); + error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, + &XFS_RMAP_OINFO_REFC, &blocks); if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur)) return; if (blocks != refcbt_blocks) xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0); /* Check that we saw as many cow blocks as the rmap knows about. */ - xfs_rmap_ag_owner(oinfo, XFS_RMAP_OWN_COW); - error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, oinfo, - &blocks); + error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, + &XFS_RMAP_OINFO_COW, &blocks); if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur)) return; if (blocks != cow_blocks) @@ -419,17 +417,15 @@ int xchk_refcountbt( struct xfs_scrub *sc) { - struct xfs_owner_info oinfo; xfs_agblock_t cow_blocks = 0; int error; - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_REFC); error = xchk_btree(sc, sc->sa.refc_cur, xchk_refcountbt_rec, - &oinfo, &cow_blocks); + &XFS_RMAP_OINFO_REFC, &cow_blocks); if (error) return error; - xchk_refcount_xref_rmap(sc, &oinfo, cow_blocks); + xchk_refcount_xref_rmap(sc, cow_blocks); return 0; } diff --git a/fs/xfs/scrub/repair.c b/fs/xfs/scrub/repair.c index 4fc0a5ea7673..1c8eecfe52b8 100644 --- a/fs/xfs/scrub/repair.c +++ b/fs/xfs/scrub/repair.c @@ -299,14 +299,14 @@ xrep_calc_ag_resblks( /* Allocate a block in an AG. */ int xrep_alloc_ag_block( - struct xfs_scrub *sc, - struct xfs_owner_info *oinfo, - xfs_fsblock_t *fsbno, - enum xfs_ag_resv_type resv) + struct xfs_scrub *sc, + const struct xfs_owner_info *oinfo, + xfs_fsblock_t *fsbno, + enum xfs_ag_resv_type resv) { - struct xfs_alloc_arg args = {0}; - xfs_agblock_t bno; - int error; + struct xfs_alloc_arg args = {0}; + xfs_agblock_t bno; + int error; switch (resv) { case XFS_AG_RESV_AGFL: @@ -505,7 +505,6 @@ xrep_put_freelist( struct xfs_scrub *sc, xfs_agblock_t agbno) { - struct xfs_owner_info oinfo; int error; /* Make sure there's space on the freelist. */ @@ -518,9 +517,8 @@ xrep_put_freelist( * create an rmap for the block prior to merging it or else other * parts will break. 
*/ - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG); error = xfs_rmap_alloc(sc->tp, sc->sa.agf_bp, sc->sa.agno, agbno, 1, - &oinfo); + &XFS_RMAP_OINFO_AG); if (error) return error; @@ -538,17 +536,17 @@ xrep_put_freelist( /* Dispose of a single block. */ STATIC int xrep_reap_block( - struct xfs_scrub *sc, - xfs_fsblock_t fsbno, - struct xfs_owner_info *oinfo, - enum xfs_ag_resv_type resv) + struct xfs_scrub *sc, + xfs_fsblock_t fsbno, + const struct xfs_owner_info *oinfo, + enum xfs_ag_resv_type resv) { - struct xfs_btree_cur *cur; - struct xfs_buf *agf_bp = NULL; - xfs_agnumber_t agno; - xfs_agblock_t agbno; - bool has_other_rmap; - int error; + struct xfs_btree_cur *cur; + struct xfs_buf *agf_bp = NULL; + xfs_agnumber_t agno; + xfs_agblock_t agbno; + bool has_other_rmap; + int error; agno = XFS_FSB_TO_AGNO(sc->mp, fsbno); agbno = XFS_FSB_TO_AGBNO(sc->mp, fsbno); @@ -612,15 +610,15 @@ out_free: /* Dispose of every block of every extent in the bitmap. */ int xrep_reap_extents( - struct xfs_scrub *sc, - struct xfs_bitmap *bitmap, - struct xfs_owner_info *oinfo, - enum xfs_ag_resv_type type) + struct xfs_scrub *sc, + struct xfs_bitmap *bitmap, + const struct xfs_owner_info *oinfo, + enum xfs_ag_resv_type type) { - struct xfs_bitmap_range *bmr; - struct xfs_bitmap_range *n; - xfs_fsblock_t fsbno; - int error = 0; + struct xfs_bitmap_range *bmr; + struct xfs_bitmap_range *n; + xfs_fsblock_t fsbno; + int error = 0; ASSERT(xfs_sb_version_hasrmapbt(&sc->mp->m_sb)); diff --git a/fs/xfs/scrub/repair.h b/fs/xfs/scrub/repair.h index 9de321eee4ab..f2fc18bb7605 100644 --- a/fs/xfs/scrub/repair.h +++ b/fs/xfs/scrub/repair.h @@ -21,8 +21,9 @@ int xrep_roll_ag_trans(struct xfs_scrub *sc); bool xrep_ag_has_space(struct xfs_perag *pag, xfs_extlen_t nr_blocks, enum xfs_ag_resv_type type); xfs_extlen_t xrep_calc_ag_resblks(struct xfs_scrub *sc); -int xrep_alloc_ag_block(struct xfs_scrub *sc, struct xfs_owner_info *oinfo, - xfs_fsblock_t *fsbno, enum xfs_ag_resv_type resv); +int xrep_alloc_ag_block(struct xfs_scrub *sc, + const struct xfs_owner_info *oinfo, xfs_fsblock_t *fsbno, + enum xfs_ag_resv_type resv); int xrep_init_btblock(struct xfs_scrub *sc, xfs_fsblock_t fsb, struct xfs_buf **bpp, xfs_btnum_t btnum, const struct xfs_buf_ops *ops); @@ -32,7 +33,7 @@ struct xfs_bitmap; int xrep_fix_freelist(struct xfs_scrub *sc, bool can_shrink); int xrep_invalidate_blocks(struct xfs_scrub *sc, struct xfs_bitmap *btlist); int xrep_reap_extents(struct xfs_scrub *sc, struct xfs_bitmap *exlist, - struct xfs_owner_info *oinfo, enum xfs_ag_resv_type type); + const struct xfs_owner_info *oinfo, enum xfs_ag_resv_type type); struct xrep_find_ag_btree { /* in: rmap owner of the btree we're looking for */ diff --git a/fs/xfs/scrub/rmap.c b/fs/xfs/scrub/rmap.c index 5e293c129813..92a140c5b55e 100644 --- a/fs/xfs/scrub/rmap.c +++ b/fs/xfs/scrub/rmap.c @@ -174,24 +174,21 @@ int xchk_rmapbt( struct xfs_scrub *sc) { - struct xfs_owner_info oinfo; - - xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG); return xchk_btree(sc, sc->sa.rmap_cur, xchk_rmapbt_rec, - &oinfo, NULL); + &XFS_RMAP_OINFO_AG, NULL); } /* xref check that the extent is owned by a given owner */ static inline void xchk_xref_check_owner( - struct xfs_scrub *sc, - xfs_agblock_t bno, - xfs_extlen_t len, - struct xfs_owner_info *oinfo, - bool should_have_rmap) + struct xfs_scrub *sc, + xfs_agblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo, + bool should_have_rmap) { - bool has_rmap; - int error; + bool has_rmap; + int error; if (!sc->sa.rmap_cur || 
xchk_skip_xref(sc->sm)) return; @@ -207,10 +204,10 @@ xchk_xref_check_owner( /* xref check that the extent is owned by a given owner */ void xchk_xref_is_owned_by( - struct xfs_scrub *sc, - xfs_agblock_t bno, - xfs_extlen_t len, - struct xfs_owner_info *oinfo) + struct xfs_scrub *sc, + xfs_agblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo) { xchk_xref_check_owner(sc, bno, len, oinfo, true); } @@ -218,10 +215,10 @@ xchk_xref_is_owned_by( /* xref check that the extent is not owned by a given owner */ void xchk_xref_is_not_owned_by( - struct xfs_scrub *sc, - xfs_agblock_t bno, - xfs_extlen_t len, - struct xfs_owner_info *oinfo) + struct xfs_scrub *sc, + xfs_agblock_t bno, + xfs_extlen_t len, + const struct xfs_owner_info *oinfo) { xchk_xref_check_owner(sc, bno, len, oinfo, false); } diff --git a/fs/xfs/scrub/scrub.h b/fs/xfs/scrub/scrub.h index af323b229c4b..22f754fba8e5 100644 --- a/fs/xfs/scrub/scrub.h +++ b/fs/xfs/scrub/scrub.h @@ -122,9 +122,9 @@ void xchk_xref_is_not_inode_chunk(struct xfs_scrub *sc, xfs_agblock_t agbno, void xchk_xref_is_inode_chunk(struct xfs_scrub *sc, xfs_agblock_t agbno, xfs_extlen_t len); void xchk_xref_is_owned_by(struct xfs_scrub *sc, xfs_agblock_t agbno, - xfs_extlen_t len, struct xfs_owner_info *oinfo); + xfs_extlen_t len, const struct xfs_owner_info *oinfo); void xchk_xref_is_not_owned_by(struct xfs_scrub *sc, xfs_agblock_t agbno, - xfs_extlen_t len, struct xfs_owner_info *oinfo); + xfs_extlen_t len, const struct xfs_owner_info *oinfo); void xchk_xref_has_no_owner(struct xfs_scrub *sc, xfs_agblock_t agbno, xfs_extlen_t len); void xchk_xref_is_cow_staging(struct xfs_scrub *sc, xfs_agblock_t bno, diff --git a/fs/xfs/scrub/trace.h b/fs/xfs/scrub/trace.h index 4e20f0e48232..8344b14031ef 100644 --- a/fs/xfs/scrub/trace.h +++ b/fs/xfs/scrub/trace.h @@ -12,6 +12,71 @@ #include #include "xfs_bit.h" +/* + * ftrace's __print_symbolic requires that all enum values be wrapped in the + * TRACE_DEFINE_ENUM macro so that the enum value can be encoded in the ftrace + * ring buffer. Somehow this was only worth mentioning in the ftrace sample + * code. 
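For readers who have not met the mechanism: TRACE_DEFINE_ENUM() exports an enum value so that the tracing core can decode it in user-visible output, and __print_symbolic() performs the value-to-string lookup against a { value, "name" } table at print time. A condensed sketch of how the pieces below fit together (not a complete event definition):

	TRACE_DEFINE_ENUM(XFS_BTNUM_BNOi);
	/* ... TP_fast_assign() stores cur->bc_btnum in __entry->btnum ... */
	TP_printk("btree %s",
		  __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS))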
+ */ +TRACE_DEFINE_ENUM(XFS_BTNUM_BNOi); +TRACE_DEFINE_ENUM(XFS_BTNUM_CNTi); +TRACE_DEFINE_ENUM(XFS_BTNUM_BMAPi); +TRACE_DEFINE_ENUM(XFS_BTNUM_INOi); +TRACE_DEFINE_ENUM(XFS_BTNUM_FINOi); +TRACE_DEFINE_ENUM(XFS_BTNUM_RMAPi); +TRACE_DEFINE_ENUM(XFS_BTNUM_REFCi); + +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_PROBE); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_SB); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_AGF); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_AGFL); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_AGI); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_BNOBT); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_CNTBT); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_INOBT); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_FINOBT); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_RMAPBT); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_REFCNTBT); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_INODE); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_BMBTD); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_BMBTA); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_BMBTC); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_DIR); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_XATTR); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_SYMLINK); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_PARENT); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_RTBITMAP); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_RTSUM); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_UQUOTA); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_GQUOTA); +TRACE_DEFINE_ENUM(XFS_SCRUB_TYPE_PQUOTA); + +#define XFS_SCRUB_TYPE_STRINGS \ + { XFS_SCRUB_TYPE_PROBE, "probe" }, \ + { XFS_SCRUB_TYPE_SB, "sb" }, \ + { XFS_SCRUB_TYPE_AGF, "agf" }, \ + { XFS_SCRUB_TYPE_AGFL, "agfl" }, \ + { XFS_SCRUB_TYPE_AGI, "agi" }, \ + { XFS_SCRUB_TYPE_BNOBT, "bnobt" }, \ + { XFS_SCRUB_TYPE_CNTBT, "cntbt" }, \ + { XFS_SCRUB_TYPE_INOBT, "inobt" }, \ + { XFS_SCRUB_TYPE_FINOBT, "finobt" }, \ + { XFS_SCRUB_TYPE_RMAPBT, "rmapbt" }, \ + { XFS_SCRUB_TYPE_REFCNTBT, "refcountbt" }, \ + { XFS_SCRUB_TYPE_INODE, "inode" }, \ + { XFS_SCRUB_TYPE_BMBTD, "bmapbtd" }, \ + { XFS_SCRUB_TYPE_BMBTA, "bmapbta" }, \ + { XFS_SCRUB_TYPE_BMBTC, "bmapbtc" }, \ + { XFS_SCRUB_TYPE_DIR, "directory" }, \ + { XFS_SCRUB_TYPE_XATTR, "xattr" }, \ + { XFS_SCRUB_TYPE_SYMLINK, "symlink" }, \ + { XFS_SCRUB_TYPE_PARENT, "parent" }, \ + { XFS_SCRUB_TYPE_RTBITMAP, "rtbitmap" }, \ + { XFS_SCRUB_TYPE_RTSUM, "rtsummary" }, \ + { XFS_SCRUB_TYPE_UQUOTA, "usrquota" }, \ + { XFS_SCRUB_TYPE_GQUOTA, "grpquota" }, \ + { XFS_SCRUB_TYPE_PQUOTA, "prjquota" } + DECLARE_EVENT_CLASS(xchk_class, TP_PROTO(struct xfs_inode *ip, struct xfs_scrub_metadata *sm, int error), @@ -36,10 +101,10 @@ DECLARE_EVENT_CLASS(xchk_class, __entry->flags = sm->sm_flags; __entry->error = error; ), - TP_printk("dev %d:%d ino 0x%llx type %u agno %u inum %llu gen %u flags 0x%x error %d", + TP_printk("dev %d:%d ino 0x%llx type %s agno %u inum %llu gen %u flags 0x%x error %d", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino, - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->agno, __entry->inum, __entry->gen, @@ -78,9 +143,9 @@ TRACE_EVENT(xchk_op_error, __entry->error = error; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d type %u agno %u agbno %u error %d ret_ip %pS", + TP_printk("dev %d:%d type %s agno %u agbno %u error %d ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->agno, __entry->bno, __entry->error, @@ -109,11 +174,11 @@ TRACE_EVENT(xchk_file_op_error, __entry->error = error; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d ino 0x%llx fork %d type %u offset %llu error %d ret_ip %pS", + TP_printk("dev %d:%d ino 0x%llx fork %d type %s offset %llu error %d ret_ip %pS", MAJOR(__entry->dev), 
MINOR(__entry->dev), __entry->ino, __entry->whichfork, - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->offset, __entry->error, __entry->ret_ip) @@ -144,9 +209,9 @@ DECLARE_EVENT_CLASS(xchk_block_error_class, __entry->bno = bno; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d type %u agno %u agbno %u ret_ip %pS", + TP_printk("dev %d:%d type %s agno %u agbno %u ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->agno, __entry->bno, __entry->ret_ip) @@ -176,10 +241,10 @@ DECLARE_EVENT_CLASS(xchk_ino_error_class, __entry->type = sc->sm->sm_type; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d ino 0x%llx type %u ret_ip %pS", + TP_printk("dev %d:%d ino 0x%llx type %s ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino, - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->ret_ip) ) @@ -213,11 +278,11 @@ DECLARE_EVENT_CLASS(xchk_fblock_error_class, __entry->offset = offset; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d ino 0x%llx fork %d type %u offset %llu ret_ip %pS", + TP_printk("dev %d:%d ino 0x%llx fork %d type %s offset %llu ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino, __entry->whichfork, - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->offset, __entry->ret_ip) ); @@ -244,9 +309,9 @@ TRACE_EVENT(xchk_incomplete, __entry->type = sc->sm->sm_type; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d type %u ret_ip %pS", + TP_printk("dev %d:%d type %s ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->ret_ip) ); @@ -278,10 +343,10 @@ TRACE_EVENT(xchk_btree_op_error, __entry->error = error; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d type %u btnum %d level %d ptr %d agno %u agbno %u error %d ret_ip %pS", + TP_printk("dev %d:%d type %s btree %s level %d ptr %d agno %u agbno %u error %d ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->type, - __entry->btnum, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), + __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS), __entry->level, __entry->ptr, __entry->agno, @@ -321,12 +386,12 @@ TRACE_EVENT(xchk_ifork_btree_op_error, __entry->error = error; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d ino 0x%llx fork %d type %u btnum %d level %d ptr %d agno %u agbno %u error %d ret_ip %pS", + TP_printk("dev %d:%d ino 0x%llx fork %d type %s btree %s level %d ptr %d agno %u agbno %u error %d ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino, __entry->whichfork, - __entry->type, - __entry->btnum, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), + __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS), __entry->level, __entry->ptr, __entry->agno, @@ -360,10 +425,10 @@ TRACE_EVENT(xchk_btree_error, __entry->ptr = cur->bc_ptrs[level]; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d type %u btnum %d level %d ptr %d agno %u agbno %u ret_ip %pS", + TP_printk("dev %d:%d type %s btree %s level %d ptr %d agno %u agbno %u ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->type, - __entry->btnum, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), + __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS), __entry->level, __entry->ptr, __entry->agno, @@ -400,12 +465,12 @@ TRACE_EVENT(xchk_ifork_btree_error, __entry->ptr = cur->bc_ptrs[level]; __entry->ret_ip = 
ret_ip; ), - TP_printk("dev %d:%d ino 0x%llx fork %d type %u btnum %d level %d ptr %d agno %u agbno %u ret_ip %pS", + TP_printk("dev %d:%d ino 0x%llx fork %d type %s btree %s level %d ptr %d agno %u agbno %u ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino, __entry->whichfork, - __entry->type, - __entry->btnum, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), + __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS), __entry->level, __entry->ptr, __entry->agno, @@ -439,10 +504,10 @@ DECLARE_EVENT_CLASS(xchk_sbtree_class, __entry->nlevels = cur->bc_nlevels; __entry->ptr = cur->bc_ptrs[level]; ), - TP_printk("dev %d:%d type %u btnum %d agno %u agbno %u level %d nlevels %d ptr %d", + TP_printk("dev %d:%d type %s btree %s agno %u agbno %u level %d nlevels %d ptr %d", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->type, - __entry->btnum, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), + __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS), __entry->agno, __entry->bno, __entry->level, @@ -473,9 +538,9 @@ TRACE_EVENT(xchk_xref_error, __entry->error = error; __entry->ret_ip = ret_ip; ), - TP_printk("dev %d:%d type %u xref error %d ret_ip %pF", + TP_printk("dev %d:%d type %s xref error %d ret_ip %pS", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->type, + __print_symbolic(__entry->type, XFS_SCRUB_TYPE_STRINGS), __entry->error, __entry->ret_ip) ); @@ -598,11 +663,11 @@ TRACE_EVENT(xrep_init_btblock, __entry->agbno = agbno; __entry->btnum = btnum; ), - TP_printk("dev %d:%d agno %u agbno %u btnum %d", + TP_printk("dev %d:%d agno %u agbno %u btree %s", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->agno, __entry->agbno, - __entry->btnum) + __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS)) ) TRACE_EVENT(xrep_findroot_block, TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, xfs_agblock_t agbno, diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h index 494b4338446e..e5c23948a8ab 100644 --- a/fs/xfs/xfs_aops.h +++ b/fs/xfs/xfs_aops.h @@ -10,6 +10,9 @@ extern struct bio_set xfs_ioend_bioset; /* * Types of I/O for bmap clustering and I/O completion tracking. + * + * This enum is used in string mapping in xfs_trace.h; please keep the + * TRACE_DEFINE_ENUMs for it up to date. 
*/ enum { XFS_IO_HOLE, /* covers region without any block allocation */ diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c index d9da66c718bb..74ddf66f4cfe 100644 --- a/fs/xfs/xfs_extfree_item.c +++ b/fs/xfs/xfs_extfree_item.c @@ -494,7 +494,6 @@ xfs_efi_recover( int error = 0; xfs_extent_t *extp; xfs_fsblock_t startblock_fsb; - struct xfs_owner_info oinfo; ASSERT(!test_bit(XFS_EFI_RECOVERED, &efip->efi_flags)); @@ -526,11 +525,11 @@ xfs_efi_recover( return error; efdp = xfs_trans_get_efd(tp, efip, efip->efi_format.efi_nextents); - xfs_rmap_any_owner_update(&oinfo); for (i = 0; i < efip->efi_format.efi_nextents; i++) { extp = &efip->efi_format.efi_extents[i]; error = xfs_trans_free_extent(tp, efdp, extp->ext_start, - extp->ext_len, &oinfo, false); + extp->ext_len, + &XFS_RMAP_OINFO_ANY_OWNER, false); if (error) goto abort_error; diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c index 093c2b8d7e20..ec2e63a7963b 100644 --- a/fs/xfs/xfs_fsops.c +++ b/fs/xfs/xfs_fsops.c @@ -252,7 +252,7 @@ xfs_growfs_data( if (mp->m_sb.sb_imax_pct) { uint64_t icount = mp->m_sb.sb_dblocks * mp->m_sb.sb_imax_pct; do_div(icount, 100); - mp->m_maxicount = icount << mp->m_sb.sb_inopblog; + mp->m_maxicount = XFS_FSB_TO_INO(mp, icount); } else mp->m_maxicount = 0; diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 05db9540e459..ae667ba74a1c 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -2184,8 +2184,6 @@ xfs_ifree_cluster( struct xfs_icluster *xic) { xfs_mount_t *mp = free_ip->i_mount; - int blks_per_cluster; - int inodes_per_cluster; int nbufs; int i, j; int ioffset; @@ -2199,11 +2197,9 @@ xfs_ifree_cluster( inum = xic->first_ino; pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, inum)); - blks_per_cluster = xfs_icluster_size_fsb(mp); - inodes_per_cluster = blks_per_cluster << mp->m_sb.sb_inopblog; - nbufs = mp->m_ialloc_blks / blks_per_cluster; + nbufs = mp->m_ialloc_blks / mp->m_blocks_per_cluster; - for (j = 0; j < nbufs; j++, inum += inodes_per_cluster) { + for (j = 0; j < nbufs; j++, inum += mp->m_inodes_per_cluster) { /* * The allocation bitmap tells us which inodes of the chunk were * physically allocated. Skip the cluster if an inode falls into @@ -2211,7 +2207,7 @@ xfs_ifree_cluster( */ ioffset = inum - xic->first_ino; if ((xic->alloc & XFS_INOBT_MASK(ioffset)) == 0) { - ASSERT(ioffset % inodes_per_cluster == 0); + ASSERT(ioffset % mp->m_inodes_per_cluster == 0); continue; } @@ -2227,7 +2223,7 @@ xfs_ifree_cluster( * to mark all the active inodes on the buffer stale. */ bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno, - mp->m_bsize * blks_per_cluster, + mp->m_bsize * mp->m_blocks_per_cluster, XBF_UNMAPPED); if (!bp) @@ -2242,7 +2238,7 @@ xfs_ifree_cluster( * want it to fail. We can achieve this by adding a write * verifier to the buffer. */ - bp->b_ops = &xfs_inode_buf_ops; + bp->b_ops = &xfs_inode_buf_ops; /* * Walk the inodes already attached to the buffer and mark them @@ -2274,7 +2270,7 @@ xfs_ifree_cluster( * transaction stale above, which means there is no point in * even trying to lock them. */ - for (i = 0; i < inodes_per_cluster; i++) { + for (i = 0; i < mp->m_inodes_per_cluster; i++) { retry: rcu_read_lock(); ip = radix_tree_lookup(&pag->pag_ici_root, diff --git a/fs/xfs/xfs_ioctl32.c b/fs/xfs/xfs_ioctl32.c index fba115f4103a..5001dca361e9 100644 --- a/fs/xfs/xfs_ioctl32.c +++ b/fs/xfs/xfs_ioctl32.c @@ -241,6 +241,32 @@ xfs_compat_ioc_bulkstat( int done; int error; + /* + * Output structure handling functions.
Depending on the command, + * either the xfs_bstat or the xfs_inogrp structures are written out + * to userspace memory via bulkreq.ubuffer. Normally the compat + * functions and structure size are the correct ones to use ... + */ + inumbers_fmt_pf inumbers_func = xfs_inumbers_fmt_compat; + bulkstat_one_pf bs_one_func = xfs_bulkstat_one_compat; + size_t bs_one_size = sizeof(struct compat_xfs_bstat); + +#ifdef CONFIG_X86_X32 + if (in_x32_syscall()) { + /* + * ... but on x32 the input xfs_fsop_bulkreq has pointers + * which must be handled in the "compat" (32-bit) way, while + * the xfs_bstat and xfs_inogrp structures follow native 64- + * bit layout convention. So adjust accordingly, otherwise + * the data written out in compat layout will not match what + * x32 userspace expects. + */ + inumbers_func = xfs_inumbers_fmt; + bs_one_func = xfs_bulkstat_one; + bs_one_size = sizeof(struct xfs_bstat); + } +#endif + /* done = 1 if there are more stats to get and if bulkstat */ /* should be called again (unused here, but used in dmapi) */ @@ -272,15 +298,15 @@ xfs_compat_ioc_bulkstat( if (cmd == XFS_IOC_FSINUMBERS_32) { error = xfs_inumbers(mp, &inlast, &count, - bulkreq.ubuffer, xfs_inumbers_fmt_compat); + bulkreq.ubuffer, inumbers_func); } else if (cmd == XFS_IOC_FSBULKSTAT_SINGLE_32) { int res; - error = xfs_bulkstat_one_compat(mp, inlast, bulkreq.ubuffer, - sizeof(compat_xfs_bstat_t), NULL, &res); + error = bs_one_func(mp, inlast, bulkreq.ubuffer, + bs_one_size, NULL, &res); } else if (cmd == XFS_IOC_FSBULKSTAT_32) { error = xfs_bulkstat(mp, &inlast, &count, - xfs_bulkstat_one_compat, sizeof(compat_xfs_bstat_t), + bs_one_func, bs_one_size, bulkreq.ubuffer, &done); } else error = -EINVAL; @@ -336,6 +362,7 @@ xfs_compat_attrlist_by_handle( { int error; attrlist_cursor_kern_t *cursor; + compat_xfs_fsop_attrlist_handlereq_t __user *p = arg; compat_xfs_fsop_attrlist_handlereq_t al_hreq; struct dentry *dentry; char *kbuf; @@ -370,6 +397,11 @@ xfs_compat_attrlist_by_handle( if (error) goto out_kfree; + if (copy_to_user(&p->pos, cursor, sizeof(attrlist_cursor_kern_t))) { + error = -EFAULT; + goto out_kfree; + } + if (copy_to_user(compat_ptr(al_hreq.buffer), kbuf, al_hreq.buflen)) error = -EFAULT; @@ -547,8 +579,12 @@ xfs_file_compat_ioctl( case FS_IOC_GETFSMAP: case XFS_IOC_SCRUB_METADATA: return xfs_file_ioctl(filp, cmd, p); -#ifndef BROKEN_X86_ALIGNMENT - /* These are handled fine if no alignment issues */ +#if !defined(BROKEN_X86_ALIGNMENT) || defined(CONFIG_X86_X32) + /* + * These are handled fine if there are no alignment issues. To support + * x32, which uses native 64-bit alignment, we must emit these cases in + * addition to the ia-32 compat set below. + */ case XFS_IOC_ALLOCSP: case XFS_IOC_FREESP: case XFS_IOC_RESVSP: @@ -561,8 +597,16 @@ xfs_file_compat_ioctl( case XFS_IOC_FSGROWFSDATA: case XFS_IOC_FSGROWFSRT: case XFS_IOC_ZERO_RANGE: +#ifdef CONFIG_X86_X32 + /* + * x32 special: this gets a different cmd number from the ia-32 compat + * case below; the associated data will match native 64-bit alignment.
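The rule of thumb behind both x32 comments in this file, sketched with illustrative struct names (not from the patch): on x32, pointers are 32 bits wide while 64-bit scalars keep native x86-64 size and alignment, so only pointer-bearing structures diverge from the 64-bit layout.

	/* 8 bytes on x32 vs 16 on x86-64: needs compat handling */
	struct sketch_req {
		__u32		count;
		void __user	*ubuffer;
	};

	/* identical layout on x32 and x86-64: can pass through natively */
	struct sketch_stat {
		__u64		blocks;
		__s64		size;
	};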
+ */ + case XFS_IOC_SWAPEXT: +#endif return xfs_file_ioctl(filp, cmd, p); -#else +#endif +#if defined(BROKEN_X86_ALIGNMENT) case XFS_IOC_ALLOCSP_32: case XFS_IOC_FREESP_32: case XFS_IOC_ALLOCSP64_32: diff --git a/fs/xfs/xfs_itable.c b/fs/xfs/xfs_itable.c index e9508ba01ed1..942e4aa5e729 100644 --- a/fs/xfs/xfs_itable.c +++ b/fs/xfs/xfs_itable.c @@ -167,20 +167,18 @@ xfs_bulkstat_ichunk_ra( { xfs_agblock_t agbno; struct blk_plug plug; - int blks_per_cluster; - int inodes_per_cluster; int i; /* inode chunk index */ agbno = XFS_AGINO_TO_AGBNO(mp, irec->ir_startino); - blks_per_cluster = xfs_icluster_size_fsb(mp); - inodes_per_cluster = blks_per_cluster << mp->m_sb.sb_inopblog; blk_start_plug(&plug); for (i = 0; i < XFS_INODES_PER_CHUNK; - i += inodes_per_cluster, agbno += blks_per_cluster) { - if (xfs_inobt_maskn(i, inodes_per_cluster) & ~irec->ir_free) { - xfs_btree_reada_bufs(mp, agno, agbno, blks_per_cluster, - &xfs_inode_buf_ops); + i += mp->m_inodes_per_cluster, agbno += mp->m_blocks_per_cluster) { + if (xfs_inobt_maskn(i, mp->m_inodes_per_cluster) & + ~irec->ir_free) { + xfs_btree_reada_bufs(mp, agno, agbno, + mp->m_blocks_per_cluster, + &xfs_inode_buf_ops); } } blk_finish_plug(&plug); diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index 1fc9e9042e0e..9fe88d125f0a 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -3850,7 +3850,6 @@ xlog_recover_do_icreate_pass2( unsigned int count; unsigned int isize; xfs_agblock_t length; - int blks_per_cluster; int bb_per_cluster; int cancel_count; int nbufs; @@ -3918,14 +3917,13 @@ xlog_recover_do_icreate_pass2( * buffers for cancellation so we don't overwrite anything written after * a cancellation. */ - blks_per_cluster = xfs_icluster_size_fsb(mp); - bb_per_cluster = XFS_FSB_TO_BB(mp, blks_per_cluster); - nbufs = length / blks_per_cluster; + bb_per_cluster = XFS_FSB_TO_BB(mp, mp->m_blocks_per_cluster); + nbufs = length / mp->m_blocks_per_cluster; for (i = 0, cancel_count = 0; i < nbufs; i++) { xfs_daddr_t daddr; daddr = XFS_AGB_TO_DADDR(mp, agno, - agbno + i * blks_per_cluster); + agbno + i * mp->m_blocks_per_cluster); if (xlog_check_buffer_cancelled(log, daddr, bb_per_cluster, 0)) cancel_count++; } diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 02d15098dbee..b4d8c318be3c 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -798,6 +798,10 @@ xfs_mountfs( if (mp->m_sb.sb_inoalignmt >= XFS_B_TO_FSBT(mp, new_size)) mp->m_inode_cluster_size = new_size; } + mp->m_blocks_per_cluster = xfs_icluster_size_fsb(mp); + mp->m_inodes_per_cluster = XFS_FSB_TO_INO(mp, mp->m_blocks_per_cluster); + mp->m_cluster_align = xfs_ialloc_cluster_alignment(mp); + mp->m_cluster_align_inodes = XFS_FSB_TO_INO(mp, mp->m_cluster_align); /* * If enabled, sparse inode chunk alignment is expected to match the diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index 7964513c3128..7daafe064af8 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -89,6 +89,13 @@ typedef struct xfs_mount { int m_logbsize; /* size of each log buffer */ uint m_rsumlevels; /* rt summary levels */ uint m_rsumsize; /* size of rt summary, bytes */ + /* + * Optional cache of rt summary level per bitmap block with the + * invariant that m_rsum_cache[bbno] <= the minimum i for which + * rsum[i][bbno] != 0. Reads and writes are serialized by the rsumip + * inode lock. 
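Put differently, the cache only ever holds a conservative lower bound, so a stale-low entry costs extra probes but never wrong answers. A hedged sketch of a consumer (the loop body is elided; only m_rsum_cache and m_rsumlevels are taken from this patch):

	int log;

	/* Start the summary search at the cached lower bound, if any. */
	for (log = mp->m_rsum_cache ? mp->m_rsum_cache[bbno] : 0;
	     log < mp->m_rsumlevels; log++) {
		/* ... probe rsum[log][bbno] and stop at the first hit ... */
	}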
+ */ + uint8_t *m_rsum_cache; struct xfs_inode *m_rbmip; /* pointer to bitmap inode */ struct xfs_inode *m_rsumip; /* pointer to summary inode */ struct xfs_inode *m_rootip; /* pointer to root directory */ @@ -101,6 +108,10 @@ typedef struct xfs_mount { uint8_t m_agno_log; /* log #ag's */ uint8_t m_agino_log; /* #bits for agino in inum */ uint m_inode_cluster_size;/* min inode buf size */ + unsigned int m_inodes_per_cluster; + unsigned int m_blocks_per_cluster; + unsigned int m_cluster_align; + unsigned int m_cluster_align_inodes; uint m_blockmask; /* sb_blocksize-1 */ uint m_blockwsize; /* sb_blocksize in words */ uint m_blockwmask; /* blockwsize-1 */ diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c index 322a852ce284..c5b4fa004ca4 100644 --- a/fs/xfs/xfs_reflink.c +++ b/fs/xfs/xfs_reflink.c @@ -623,54 +623,47 @@ out: } /* - * Remap parts of a file's data fork after a successful CoW. + * Remap part of the CoW fork into the data fork. + * + * We aim to remap the range starting at @offset_fsb and ending at @end_fsb + * into the data fork; this function will remap what it can (at the end of the + * range) and update @end_fsb appropriately. Each remap gets its own + * transaction because we can end up merging and splitting bmbt blocks for + * every remap operation and we'd like to keep the block reservation + * requirements as low as possible. */ -int -xfs_reflink_end_cow( - struct xfs_inode *ip, - xfs_off_t offset, - xfs_off_t count) +STATIC int +xfs_reflink_end_cow_extent( + struct xfs_inode *ip, + xfs_fileoff_t offset_fsb, + xfs_fileoff_t *end_fsb) { - struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK); - struct xfs_bmbt_irec got, del; - struct xfs_trans *tp; - xfs_fileoff_t offset_fsb; - xfs_fileoff_t end_fsb; - int error; - unsigned int resblks; - xfs_filblks_t rlen; - struct xfs_iext_cursor icur; - - trace_xfs_reflink_end_cow(ip, offset, count); + struct xfs_bmbt_irec got, del; + struct xfs_iext_cursor icur; + struct xfs_mount *mp = ip->i_mount; + struct xfs_trans *tp; + struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK); + xfs_filblks_t rlen; + unsigned int resblks; + int error; /* No COW extents? That's easy! */ - if (ifp->if_bytes == 0) + if (ifp->if_bytes == 0) { + *end_fsb = offset_fsb; return 0; + } - offset_fsb = XFS_B_TO_FSBT(ip->i_mount, offset); - end_fsb = XFS_B_TO_FSB(ip->i_mount, offset + count); + resblks = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK); + error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, + XFS_TRANS_RESERVE | XFS_TRANS_NOFS, &tp); + if (error) + return error; /* - * Start a rolling transaction to switch the mappings. We're - * unlikely ever to have to remap 16T worth of single-block - * extents, so just cap the worst case extent count to 2^32-1. - * Stick a warning in just in case, and avoid 64-bit division. + * Lock the inode. We have to ijoin without automatic unlock because + * the lead transaction is the refcountbt record deletion; the data + * fork update follows as a deferred log item. 
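+	 *
+	 * (Editor's aside: a condensed view of the per-extent transaction
+	 * lifecycle this function implements, error handling elided:
+	 *
+	 *	xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0,
+	 *			XFS_TRANS_RESERVE | XFS_TRANS_NOFS, &tp);
+	 *	xfs_ilock(ip, XFS_ILOCK_EXCL);
+	 *	xfs_trans_ijoin(tp, ip, 0);
+	 *	... unmap old blocks, free the CoW orphan, map new blocks ...
+	 *	xfs_trans_commit(tp);
+	 *	xfs_iunlock(ip, XFS_ILOCK_EXCL);
+	 * )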
*/ - BUILD_BUG_ON(MAX_RW_COUNT > UINT_MAX); - if (end_fsb - offset_fsb > UINT_MAX) { - error = -EFSCORRUPTED; - xfs_force_shutdown(ip->i_mount, SHUTDOWN_CORRUPT_INCORE); - ASSERT(0); - goto out; - } - resblks = XFS_NEXTENTADD_SPACE_RES(ip->i_mount, - (unsigned int)(end_fsb - offset_fsb), - XFS_DATA_FORK); - error = xfs_trans_alloc(ip->i_mount, &M_RES(ip->i_mount)->tr_write, - resblks, 0, XFS_TRANS_RESERVE | XFS_TRANS_NOFS, &tp); - if (error) - goto out; - xfs_ilock(ip, XFS_ILOCK_EXCL); xfs_trans_ijoin(tp, ip, 0); @@ -679,80 +672,131 @@ xfs_reflink_end_cow( * left by the time I/O completes for the loser of the race. In that * case we are done. */ - if (!xfs_iext_lookup_extent_before(ip, ifp, &end_fsb, &icur, &got)) + if (!xfs_iext_lookup_extent_before(ip, ifp, end_fsb, &icur, &got) || + got.br_startoff + got.br_blockcount <= offset_fsb) { + *end_fsb = offset_fsb; goto out_cancel; + } - /* Walk backwards until we're out of the I/O range... */ - while (got.br_startoff + got.br_blockcount > offset_fsb) { - del = got; - xfs_trim_extent(&del, offset_fsb, end_fsb - offset_fsb); - - /* Extent delete may have bumped ext forward */ - if (!del.br_blockcount) - goto prev_extent; + /* + * Structure copy @got into @del, then trim @del to the range that we + * were asked to remap. We preserve @got for the eventual CoW fork + * deletion; from now on @del represents the mapping that we're + * actually remapping. + */ + del = got; + xfs_trim_extent(&del, offset_fsb, *end_fsb - offset_fsb); - /* - * Only remap real extent that contain data. With AIO - * speculatively preallocations can leak into the range we - * are called upon, and we need to skip them. - */ - if (!xfs_bmap_is_real_extent(&got)) - goto prev_extent; + ASSERT(del.br_blockcount > 0); - /* Unmap the old blocks in the data fork. */ - ASSERT(tp->t_firstblock == NULLFSBLOCK); - rlen = del.br_blockcount; - error = __xfs_bunmapi(tp, ip, del.br_startoff, &rlen, 0, 1); - if (error) - goto out_cancel; + /* + * Only remap real extents that contain data. With AIO, speculative + * preallocations can leak into the range we are called upon, and we + * need to skip them. + */ + if (!xfs_bmap_is_real_extent(&got)) { + *end_fsb = del.br_startoff; + goto out_cancel; + } - /* Trim the extent to whatever got unmapped. */ - if (rlen) { - xfs_trim_extent(&del, del.br_startoff + rlen, - del.br_blockcount - rlen); - } - trace_xfs_reflink_cow_remap(ip, &del); + /* Unmap the old blocks in the data fork. */ + rlen = del.br_blockcount; + error = __xfs_bunmapi(tp, ip, del.br_startoff, &rlen, 0, 1); + if (error) + goto out_cancel; - /* Free the CoW orphan record. */ - error = xfs_refcount_free_cow_extent(tp, del.br_startblock, - del.br_blockcount); - if (error) - goto out_cancel; + /* Trim the extent to whatever got unmapped. */ + xfs_trim_extent(&del, del.br_startoff + rlen, del.br_blockcount - rlen); + trace_xfs_reflink_cow_remap(ip, &del); - /* Map the new blocks into the data fork. */ - error = xfs_bmap_map_extent(tp, ip, &del); - if (error) - goto out_cancel; + /* Free the CoW orphan record. */ + error = xfs_refcount_free_cow_extent(tp, del.br_startblock, + del.br_blockcount); + if (error) + goto out_cancel; - /* Charge this new data fork mapping to the on-disk quota. */ - xfs_trans_mod_dquot_byino(tp, ip, XFS_TRANS_DQ_DELBCOUNT, - (long)del.br_blockcount); + /* Map the new blocks into the data fork. */ + error = xfs_bmap_map_extent(tp, ip, &del); + if (error) + goto out_cancel; - /* Remove the mapping from the CoW fork. 
 */
-	xfs_bmap_del_extent_cow(ip, &icur, &got, &del);
+	/* Charge this new data fork mapping to the on-disk quota. */
+	xfs_trans_mod_dquot_byino(tp, ip, XFS_TRANS_DQ_DELBCOUNT,
+			(long)del.br_blockcount);
 
-		error = xfs_defer_finish(&tp);
-		if (error)
-			goto out_cancel;
-		if (!xfs_iext_get_extent(ifp, &icur, &got))
-			break;
-		continue;
-prev_extent:
-		if (!xfs_iext_prev_extent(ifp, &icur, &got))
-			break;
-	}
+	/* Remove the mapping from the CoW fork. */
+	xfs_bmap_del_extent_cow(ip, &icur, &got, &del);
 
 	error = xfs_trans_commit(tp);
 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
 	if (error)
-		goto out;
+		return error;
+
+	/* Update the caller about how much progress we made. */
+	*end_fsb = del.br_startoff;
 	return 0;
 
 out_cancel:
 	xfs_trans_cancel(tp);
 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
-out:
-	trace_xfs_reflink_end_cow_error(ip, error, _RET_IP_);
+	return error;
+}
+
+/*
+ * Remap parts of a file's data fork after a successful CoW.
+ */
+int
+xfs_reflink_end_cow(
+	struct xfs_inode		*ip,
+	xfs_off_t			offset,
+	xfs_off_t			count)
+{
+	xfs_fileoff_t			offset_fsb;
+	xfs_fileoff_t			end_fsb;
+	int				error = 0;
+
+	trace_xfs_reflink_end_cow(ip, offset, count);
+
+	offset_fsb = XFS_B_TO_FSBT(ip->i_mount, offset);
+	end_fsb = XFS_B_TO_FSB(ip->i_mount, offset + count);
+
+	/*
+	 * Walk backwards until we're out of the I/O range.  The loop function
+	 * repeatedly cycles the ILOCK to allocate one transaction per remapped
+	 * extent.
+	 *
+	 * If we're being called by writeback then the pages will still
+	 * have PageWriteback set, which prevents races with reflink remapping
+	 * and truncate.  Reflink remapping prevents races with writeback by
+	 * taking the iolock and mmaplock before flushing the pages and
+	 * remapping, which means there won't be any further writeback or page
+	 * cache dirtying until the reflink completes.
+	 *
+	 * We should never have two threads issuing writeback for the same file
+	 * region.  There are also post-eof checks in the writeback
+	 * preparation code so that we don't bother writing out pages that are
+	 * about to be truncated.
+	 *
+	 * If we're being called as part of directio write completion, the dio
+	 * count is still elevated, which reflink and truncate will wait for.
+	 * Reflink remapping takes the iolock and mmaplock and waits for
+	 * pending dio to finish, which should prevent any directio until the
+	 * remap completes.  Multiple concurrent directio writes to the same
+	 * region are handled by end_cow processing only occurring for the
+	 * threads which succeed; the outcome of multiple overlapping direct
+	 * writes is not well defined anyway.
+	 *
+	 * It's possible that a buffered write and a direct write could collide
+	 * here (the buffered write stumbles in after the dio flushes and
+	 * invalidates the page cache and immediately queues writeback), but we
+	 * have never supported this 100%.  If either disk write succeeds the
+	 * blocks will be remapped.
+	 */
+	while (end_fsb > offset_fsb && !error)
+		error = xfs_reflink_end_cow_extent(ip, offset_fsb, &end_fsb);
+
+	if (error)
+		trace_xfs_reflink_end_cow_error(ip, error, _RET_IP_);
 	return error;
 }
 
diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
index 926ed314ffba..ac0fcdad0c4e 100644
--- a/fs/xfs/xfs_rtalloc.c
+++ b/fs/xfs/xfs_rtalloc.c
@@ -64,8 +64,12 @@ xfs_rtany_summary(
 	int		log;	/* loop counter, log2 of ext. size */
 	xfs_suminfo_t	sum;	/* summary data */
 
+	/* There are no extents at levels < m_rsum_cache[bbno].
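+	 * (Editor's example: with m_rsum_cache[bbno] == 3, levels 0..2 of
+	 * this bitmap block are known to be empty, so a caller passing
+	 * low == 1 can start scanning at log == 3 without missing anything.)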
*/ + if (mp->m_rsum_cache && low < mp->m_rsum_cache[bbno]) + low = mp->m_rsum_cache[bbno]; + /* - * Loop over logs of extent sizes. Order is irrelevant. + * Loop over logs of extent sizes. */ for (log = low; log <= high; log++) { /* @@ -80,13 +84,17 @@ xfs_rtany_summary( */ if (sum) { *stat = 1; - return 0; + goto out; } } /* * Found nothing, return failure. */ *stat = 0; +out: + /* There were no extents at levels < log. */ + if (mp->m_rsum_cache && log > mp->m_rsum_cache[bbno]) + mp->m_rsum_cache[bbno] = log; return 0; } @@ -853,6 +861,21 @@ out_trans_cancel: return error; } +static void +xfs_alloc_rsum_cache( + xfs_mount_t *mp, /* file system mount structure */ + xfs_extlen_t rbmblocks) /* number of rt bitmap blocks */ +{ + /* + * The rsum cache is initialized to all zeroes, which is trivially a + * lower bound on the minimum level with any free extents. We can + * continue without the cache if it couldn't be allocated. + */ + mp->m_rsum_cache = kmem_zalloc_large(rbmblocks, KM_SLEEP); + if (!mp->m_rsum_cache) + xfs_warn(mp, "could not allocate realtime summary cache"); +} + /* * Visible (exported) functions. */ @@ -881,6 +904,7 @@ xfs_growfs_rt( xfs_extlen_t rsumblocks; /* current number of rt summary blks */ xfs_sb_t *sbp; /* old superblock */ xfs_fsblock_t sumbno; /* summary block number */ + uint8_t *rsum_cache; /* old summary cache */ sbp = &mp->m_sb; /* @@ -937,6 +961,11 @@ xfs_growfs_rt( error = xfs_growfs_rt_alloc(mp, rsumblocks, nrsumblocks, mp->m_rsumip); if (error) return error; + + rsum_cache = mp->m_rsum_cache; + if (nrbmblocks != sbp->sb_rbmblocks) + xfs_alloc_rsum_cache(mp, nrbmblocks); + /* * Allocate a new (fake) mount/sb. */ @@ -1062,6 +1091,20 @@ error_cancel: */ kmem_free(nmp); + /* + * If we had to allocate a new rsum_cache, we either need to free the + * old one (if we succeeded) or free the new one and restore the old one + * (if there was an error). + */ + if (rsum_cache != mp->m_rsum_cache) { + if (error) { + kmem_free(mp->m_rsum_cache); + mp->m_rsum_cache = rsum_cache; + } else { + kmem_free(rsum_cache); + } + } + return error; } @@ -1187,8 +1230,8 @@ xfs_rtmount_init( } /* - * Get the bitmap and summary inodes into the mount structure - * at mount time. + * Get the bitmap and summary inodes and the summary cache into the mount + * structure at mount time. 
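+ *
+ * (Editor's summary of the cache lifecycle this patch introduces:
+ *	xfs_rtmount_inodes()   -> xfs_alloc_rsum_cache(mp, sb_rbmblocks)
+ *	xfs_growfs_rt()        -> new cache if sb_rbmblocks changes
+ *	xfs_rtunmount_inodes() -> kmem_free(mp->m_rsum_cache)
+ * )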
*/ int /* error */ xfs_rtmount_inodes( @@ -1198,19 +1241,18 @@ xfs_rtmount_inodes( xfs_sb_t *sbp; sbp = &mp->m_sb; - if (sbp->sb_rbmino == NULLFSINO) - return 0; error = xfs_iget(mp, NULL, sbp->sb_rbmino, 0, 0, &mp->m_rbmip); if (error) return error; ASSERT(mp->m_rbmip != NULL); - ASSERT(sbp->sb_rsumino != NULLFSINO); + error = xfs_iget(mp, NULL, sbp->sb_rsumino, 0, 0, &mp->m_rsumip); if (error) { xfs_irele(mp->m_rbmip); return error; } ASSERT(mp->m_rsumip != NULL); + xfs_alloc_rsum_cache(mp, sbp->sb_rbmblocks); return 0; } @@ -1218,6 +1260,7 @@ void xfs_rtunmount_inodes( struct xfs_mount *mp) { + kmem_free(mp->m_rsum_cache); if (mp->m_rbmip) xfs_irele(mp->m_rbmip); if (mp->m_rsumip) diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index d3e6cd063688..c9097cb0b955 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -38,6 +38,7 @@ #include "xfs_refcount_item.h" #include "xfs_bmap_item.h" #include "xfs_reflink.h" +#include "xfs_defer.h" #include #include @@ -607,7 +608,7 @@ xfs_set_inode_alloc( } /* Get the last possible inode in the filesystem */ - agino = XFS_OFFBNO_TO_AGINO(mp, sbp->sb_agblocks - 1, 0); + agino = XFS_AGB_TO_AGINO(mp, sbp->sb_agblocks - 1); ino = XFS_AGINO_TO_INO(mp, agcount - 1, agino); /* @@ -1149,7 +1150,7 @@ xfs_fs_statfs( statp->f_bfree = fdblocks - mp->m_alloc_set_aside; statp->f_bavail = statp->f_bfree; - fakeinos = statp->f_bfree << sbp->sb_inopblog; + fakeinos = XFS_FSB_TO_INO(mp, statp->f_bfree); statp->f_files = min(icount + fakeinos, (uint64_t)XFS_MAXINUMBER); if (mp->m_maxicount) statp->f_files = min_t(typeof(statp->f_files), @@ -2085,11 +2086,6 @@ init_xfs_fs(void) printk(KERN_INFO XFS_VERSION_STRING " with " XFS_BUILD_OPTIONS " enabled\n"); - xfs_extent_free_init_defer_op(); - xfs_rmap_update_init_defer_op(); - xfs_refcount_update_init_defer_op(); - xfs_bmap_update_init_defer_op(); - xfs_dir_startup(); error = xfs_init_zones(); diff --git a/fs/xfs/xfs_symlink.c b/fs/xfs/xfs_symlink.c index a3e98c64b6e3..b2c1177c717f 100644 --- a/fs/xfs/xfs_symlink.c +++ b/fs/xfs/xfs_symlink.c @@ -192,6 +192,7 @@ xfs_symlink( pathlen = strlen(target_path); if (pathlen >= XFS_SYMLINK_MAXLEN) /* total string too long */ return -ENAMETOOLONG; + ASSERT(pathlen > 0); udqp = gdqp = NULL; prid = xfs_get_initial_prid(dp); @@ -378,6 +379,12 @@ out_release_inode: /* * Free a symlink that has blocks associated with it. + * + * Note: zero length symlinks are not allowed to exist. When we set the size to + * zero, also change it to a regular file so that it does not get written to + * disk as a zero length symlink. The inode is on the unlinked list already, so + * userspace cannot find this inode anymore, so this change is not user visible + * but allows us to catch corrupt zero-length symlinks in the verifiers. */ STATIC int xfs_inactive_symlink_rmt( @@ -412,13 +419,14 @@ xfs_inactive_symlink_rmt( xfs_trans_ijoin(tp, ip, 0); /* - * Lock the inode, fix the size, and join it to the transaction. - * Hold it so in the normal path, we still have it locked for - * the second transaction. In the error paths we need it + * Lock the inode, fix the size, turn it into a regular file and join it + * to the transaction. Hold it so in the normal path, we still have it + * locked for the second transaction. In the error paths we need it * held so the cancel won't rele it, see below. 
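 *
 * (Editor's illustration of the retype performed below: S_IFMT masks
 * only the file-type bits, so clearing them and OR-ing in S_IFREG
 * turns the symlink into a regular file while preserving the
 * permission bits:
 *
 *	VFS_I(ip)->i_mode = (VFS_I(ip)->i_mode & ~S_IFMT) | S_IFREG;
 * )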
*/ size = (int)ip->i_d.di_size; ip->i_d.di_size = 0; + VFS_I(ip)->i_mode = (VFS_I(ip)->i_mode & ~S_IFMT) | S_IFREG; xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE); /* * Find the block(s) so we can inval and unmap them. @@ -494,17 +502,10 @@ xfs_inactive_symlink( return -EIO; xfs_ilock(ip, XFS_ILOCK_EXCL); - - /* - * Zero length symlinks _can_ exist. - */ pathlen = (int)ip->i_d.di_size; - if (!pathlen) { - xfs_iunlock(ip, XFS_ILOCK_EXCL); - return 0; - } + ASSERT(pathlen); - if (pathlen < 0 || pathlen > XFS_SYMLINK_MAXLEN) { + if (pathlen <= 0 || pathlen > XFS_SYMLINK_MAXLEN) { xfs_alert(mp, "%s: inode (0x%llx) bad symlink length (%d)", __func__, (unsigned long long)ip->i_ino, pathlen); xfs_iunlock(ip, XFS_ILOCK_EXCL); @@ -512,12 +513,12 @@ xfs_inactive_symlink( return -EFSCORRUPTED; } + /* + * Inline fork state gets removed by xfs_difree() so we have nothing to + * do here in that case. + */ if (ip->i_df.if_flags & XFS_IFINLINE) { - if (ip->i_df.if_bytes > 0) - xfs_idata_realloc(ip, -(ip->i_df.if_bytes), - XFS_DATA_FORK); xfs_iunlock(ip, XFS_ILOCK_EXCL); - ASSERT(ip->i_df.if_bytes == 0); return 0; } diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h index 8a6532aae779..6fcc893dfc91 100644 --- a/fs/xfs/xfs_trace.h +++ b/fs/xfs/xfs_trace.h @@ -640,6 +640,16 @@ DEFINE_INODE_EVENT(xfs_inode_set_cowblocks_tag); DEFINE_INODE_EVENT(xfs_inode_clear_cowblocks_tag); DEFINE_INODE_EVENT(xfs_inode_free_cowblocks_invalid); +/* + * ftrace's __print_symbolic requires that all enum values be wrapped in the + * TRACE_DEFINE_ENUM macro so that the enum value can be encoded in the ftrace + * ring buffer. Somehow this was only worth mentioning in the ftrace sample + * code. + */ +TRACE_DEFINE_ENUM(PE_SIZE_PTE); +TRACE_DEFINE_ENUM(PE_SIZE_PMD); +TRACE_DEFINE_ENUM(PE_SIZE_PUD); + TRACE_EVENT(xfs_filemap_fault, TP_PROTO(struct xfs_inode *ip, enum page_entry_size pe_size, bool write_fault), @@ -1208,6 +1218,12 @@ DEFINE_EVENT(xfs_readpage_class, name, \ DEFINE_READPAGE_EVENT(xfs_vm_readpage); DEFINE_READPAGE_EVENT(xfs_vm_readpages); +TRACE_DEFINE_ENUM(XFS_IO_HOLE); +TRACE_DEFINE_ENUM(XFS_IO_DELALLOC); +TRACE_DEFINE_ENUM(XFS_IO_UNWRITTEN); +TRACE_DEFINE_ENUM(XFS_IO_OVERWRITE); +TRACE_DEFINE_ENUM(XFS_IO_COW); + DECLARE_EVENT_CLASS(xfs_imap_class, TP_PROTO(struct xfs_inode *ip, xfs_off_t offset, ssize_t count, int type, struct xfs_bmbt_irec *irec), @@ -1885,11 +1901,11 @@ TRACE_EVENT(xfs_dir2_leafn_moveents, { 0, "target" }, \ { 1, "temp" } -#define XFS_INODE_FORMAT_STR \ - { 0, "invalid" }, \ - { 1, "local" }, \ - { 2, "extent" }, \ - { 3, "btree" } +TRACE_DEFINE_ENUM(XFS_DINODE_FMT_DEV); +TRACE_DEFINE_ENUM(XFS_DINODE_FMT_LOCAL); +TRACE_DEFINE_ENUM(XFS_DINODE_FMT_EXTENTS); +TRACE_DEFINE_ENUM(XFS_DINODE_FMT_BTREE); +TRACE_DEFINE_ENUM(XFS_DINODE_FMT_UUID); DECLARE_EVENT_CLASS(xfs_swap_extent_class, TP_PROTO(struct xfs_inode *ip, int which), @@ -2178,6 +2194,14 @@ DEFINE_DISCARD_EVENT(xfs_discard_exclude); DEFINE_DISCARD_EVENT(xfs_discard_busy); /* btree cursor events */ +TRACE_DEFINE_ENUM(XFS_BTNUM_BNOi); +TRACE_DEFINE_ENUM(XFS_BTNUM_CNTi); +TRACE_DEFINE_ENUM(XFS_BTNUM_BMAPi); +TRACE_DEFINE_ENUM(XFS_BTNUM_INOi); +TRACE_DEFINE_ENUM(XFS_BTNUM_FINOi); +TRACE_DEFINE_ENUM(XFS_BTNUM_RMAPi); +TRACE_DEFINE_ENUM(XFS_BTNUM_REFCi); + DECLARE_EVENT_CLASS(xfs_btree_cur_class, TP_PROTO(struct xfs_btree_cur *cur, int level, struct xfs_buf *bp), TP_ARGS(cur, level, bp), @@ -2197,9 +2221,9 @@ DECLARE_EVENT_CLASS(xfs_btree_cur_class, __entry->ptr = cur->bc_ptrs[level]; __entry->daddr = bp ? 
bp->b_bn : -1; ), - TP_printk("dev %d:%d btnum %d level %d/%d ptr %d daddr 0x%llx", + TP_printk("dev %d:%d btree %s level %d/%d ptr %d daddr 0x%llx", MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->btnum, + __print_symbolic(__entry->btnum, XFS_BTNUM_STRINGS), __entry->level, __entry->nlevels, __entry->ptr, @@ -2276,7 +2300,7 @@ DECLARE_EVENT_CLASS(xfs_defer_pending_class, ), TP_fast_assign( __entry->dev = mp ? mp->m_super->s_dev : 0; - __entry->type = dfp->dfp_type->type; + __entry->type = dfp->dfp_type; __entry->intent = dfp->dfp_intent; __entry->committed = dfp->dfp_done != NULL; __entry->nr = dfp->dfp_count; @@ -2405,7 +2429,7 @@ DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_agfl_free_deferred); DECLARE_EVENT_CLASS(xfs_rmap_class, TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, xfs_agblock_t agbno, xfs_extlen_t len, bool unwritten, - struct xfs_owner_info *oinfo), + const struct xfs_owner_info *oinfo), TP_ARGS(mp, agno, agbno, len, unwritten, oinfo), TP_STRUCT__entry( __field(dev_t, dev) @@ -2440,7 +2464,7 @@ DECLARE_EVENT_CLASS(xfs_rmap_class, DEFINE_EVENT(xfs_rmap_class, name, \ TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, \ xfs_agblock_t agbno, xfs_extlen_t len, bool unwritten, \ - struct xfs_owner_info *oinfo), \ + const struct xfs_owner_info *oinfo), \ TP_ARGS(mp, agno, agbno, len, unwritten, oinfo)) /* simple AG-based error/%ip tracepoint class */ @@ -2610,10 +2634,9 @@ DEFINE_AG_ERROR_EVENT(xfs_ag_resv_init_error); #define DEFINE_AG_EXTENT_EVENT(name) DEFINE_DISCARD_EVENT(name) /* ag btree lookup tracepoint class */ -#define XFS_AG_BTREE_CMP_FORMAT_STR \ - { XFS_LOOKUP_EQ, "eq" }, \ - { XFS_LOOKUP_LE, "le" }, \ - { XFS_LOOKUP_GE, "ge" } +TRACE_DEFINE_ENUM(XFS_LOOKUP_EQi); +TRACE_DEFINE_ENUM(XFS_LOOKUP_LEi); +TRACE_DEFINE_ENUM(XFS_LOOKUP_GEi); DECLARE_EVENT_CLASS(xfs_ag_btree_lookup_class, TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, xfs_agblock_t agbno, xfs_lookup_t dir), diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h index a0c5dbda18aa..c6e1c5704a8c 100644 --- a/fs/xfs/xfs_trans.h +++ b/fs/xfs/xfs_trans.h @@ -223,13 +223,13 @@ void xfs_trans_dirty_buf(struct xfs_trans *, struct xfs_buf *); bool xfs_trans_buf_is_dirty(struct xfs_buf *bp); void xfs_trans_log_inode(xfs_trans_t *, struct xfs_inode *, uint); -void xfs_extent_free_init_defer_op(void); struct xfs_efd_log_item *xfs_trans_get_efd(struct xfs_trans *, struct xfs_efi_log_item *, uint); int xfs_trans_free_extent(struct xfs_trans *, struct xfs_efd_log_item *, xfs_fsblock_t, - xfs_extlen_t, struct xfs_owner_info *, + xfs_extlen_t, + const struct xfs_owner_info *, bool); int xfs_trans_commit(struct xfs_trans *); int xfs_trans_roll(struct xfs_trans **); @@ -248,7 +248,6 @@ extern kmem_zone_t *xfs_trans_zone; /* rmap updates */ enum xfs_rmap_intent_type; -void xfs_rmap_update_init_defer_op(void); struct xfs_rud_log_item *xfs_trans_get_rud(struct xfs_trans *tp, struct xfs_rui_log_item *ruip); int xfs_trans_log_finish_rmap_update(struct xfs_trans *tp, @@ -260,7 +259,6 @@ int xfs_trans_log_finish_rmap_update(struct xfs_trans *tp, /* refcount updates */ enum xfs_refcount_intent_type; -void xfs_refcount_update_init_defer_op(void); struct xfs_cud_log_item *xfs_trans_get_cud(struct xfs_trans *tp, struct xfs_cui_log_item *cuip); int xfs_trans_log_finish_refcount_update(struct xfs_trans *tp, @@ -272,7 +270,6 @@ int xfs_trans_log_finish_refcount_update(struct xfs_trans *tp, /* mapping updates */ enum xfs_bmap_intent_type; -void xfs_bmap_update_init_defer_op(void); struct xfs_bud_log_item *xfs_trans_get_bud(struct 
xfs_trans *tp, struct xfs_bui_log_item *buip); int xfs_trans_log_finish_bmap_update(struct xfs_trans *tp, diff --git a/fs/xfs/xfs_trans_bmap.c b/fs/xfs/xfs_trans_bmap.c index 741c558b2179..11cff449d055 100644 --- a/fs/xfs/xfs_trans_bmap.c +++ b/fs/xfs/xfs_trans_bmap.c @@ -17,6 +17,7 @@ #include "xfs_alloc.h" #include "xfs_bmap.h" #include "xfs_inode.h" +#include "xfs_defer.h" /* * This routine is called to allocate a "bmap update done" @@ -220,8 +221,7 @@ xfs_bmap_update_cancel_item( kmem_free(bmap); } -static const struct xfs_defer_op_type xfs_bmap_update_defer_type = { - .type = XFS_DEFER_OPS_TYPE_BMAP, +const struct xfs_defer_op_type xfs_bmap_update_defer_type = { .max_items = XFS_BUI_MAX_FAST_EXTENTS, .diff_items = xfs_bmap_update_diff_items, .create_intent = xfs_bmap_update_create_intent, @@ -231,10 +231,3 @@ static const struct xfs_defer_op_type xfs_bmap_update_defer_type = { .finish_item = xfs_bmap_update_finish_item, .cancel_item = xfs_bmap_update_cancel_item, }; - -/* Register the deferred op type. */ -void -xfs_bmap_update_init_defer_op(void) -{ - xfs_defer_init_op_type(&xfs_bmap_update_defer_type); -} diff --git a/fs/xfs/xfs_trans_extfree.c b/fs/xfs/xfs_trans_extfree.c index 855c0b651fd4..0710434eb240 100644 --- a/fs/xfs/xfs_trans_extfree.c +++ b/fs/xfs/xfs_trans_extfree.c @@ -18,6 +18,7 @@ #include "xfs_alloc.h" #include "xfs_bmap.h" #include "xfs_trace.h" +#include "xfs_defer.h" /* * This routine is called to allocate an "extent free done" @@ -52,19 +53,20 @@ xfs_trans_get_efd(struct xfs_trans *tp, */ int xfs_trans_free_extent( - struct xfs_trans *tp, - struct xfs_efd_log_item *efdp, - xfs_fsblock_t start_block, - xfs_extlen_t ext_len, - struct xfs_owner_info *oinfo, - bool skip_discard) + struct xfs_trans *tp, + struct xfs_efd_log_item *efdp, + xfs_fsblock_t start_block, + xfs_extlen_t ext_len, + const struct xfs_owner_info *oinfo, + bool skip_discard) { - struct xfs_mount *mp = tp->t_mountp; - uint next_extent; - xfs_agnumber_t agno = XFS_FSB_TO_AGNO(mp, start_block); - xfs_agblock_t agbno = XFS_FSB_TO_AGBNO(mp, start_block); - struct xfs_extent *extp; - int error; + struct xfs_mount *mp = tp->t_mountp; + struct xfs_extent *extp; + uint next_extent; + xfs_agnumber_t agno = XFS_FSB_TO_AGNO(mp, start_block); + xfs_agblock_t agbno = XFS_FSB_TO_AGBNO(mp, + start_block); + int error; trace_xfs_bmap_free_deferred(tp->t_mountp, agno, 0, agbno, ext_len); @@ -206,8 +208,7 @@ xfs_extent_free_cancel_item( kmem_free(free); } -static const struct xfs_defer_op_type xfs_extent_free_defer_type = { - .type = XFS_DEFER_OPS_TYPE_FREE, +const struct xfs_defer_op_type xfs_extent_free_defer_type = { .max_items = XFS_EFI_MAX_FAST_EXTENTS, .diff_items = xfs_extent_free_diff_items, .create_intent = xfs_extent_free_create_intent, @@ -274,8 +275,7 @@ xfs_agfl_free_finish_item( /* sub-type with special handling for AGFL deferred frees */ -static const struct xfs_defer_op_type xfs_agfl_free_defer_type = { - .type = XFS_DEFER_OPS_TYPE_AGFL_FREE, +const struct xfs_defer_op_type xfs_agfl_free_defer_type = { .max_items = XFS_EFI_MAX_FAST_EXTENTS, .diff_items = xfs_extent_free_diff_items, .create_intent = xfs_extent_free_create_intent, @@ -285,11 +285,3 @@ static const struct xfs_defer_op_type xfs_agfl_free_defer_type = { .finish_item = xfs_agfl_free_finish_item, .cancel_item = xfs_extent_free_cancel_item, }; - -/* Register the deferred op type. 
*/ -void -xfs_extent_free_init_defer_op(void) -{ - xfs_defer_init_op_type(&xfs_extent_free_defer_type); - xfs_defer_init_op_type(&xfs_agfl_free_defer_type); -} diff --git a/fs/xfs/xfs_trans_refcount.c b/fs/xfs/xfs_trans_refcount.c index 523c55663954..6c947ff4faf6 100644 --- a/fs/xfs/xfs_trans_refcount.c +++ b/fs/xfs/xfs_trans_refcount.c @@ -16,6 +16,7 @@ #include "xfs_refcount_item.h" #include "xfs_alloc.h" #include "xfs_refcount.h" +#include "xfs_defer.h" /* * This routine is called to allocate a "refcount update done" @@ -227,8 +228,7 @@ xfs_refcount_update_cancel_item( kmem_free(refc); } -static const struct xfs_defer_op_type xfs_refcount_update_defer_type = { - .type = XFS_DEFER_OPS_TYPE_REFCOUNT, +const struct xfs_defer_op_type xfs_refcount_update_defer_type = { .max_items = XFS_CUI_MAX_FAST_EXTENTS, .diff_items = xfs_refcount_update_diff_items, .create_intent = xfs_refcount_update_create_intent, @@ -239,10 +239,3 @@ static const struct xfs_defer_op_type xfs_refcount_update_defer_type = { .finish_cleanup = xfs_refcount_update_finish_cleanup, .cancel_item = xfs_refcount_update_cancel_item, }; - -/* Register the deferred op type. */ -void -xfs_refcount_update_init_defer_op(void) -{ - xfs_defer_init_op_type(&xfs_refcount_update_defer_type); -} diff --git a/fs/xfs/xfs_trans_rmap.c b/fs/xfs/xfs_trans_rmap.c index 05b00e40251f..a42890931ecd 100644 --- a/fs/xfs/xfs_trans_rmap.c +++ b/fs/xfs/xfs_trans_rmap.c @@ -16,6 +16,7 @@ #include "xfs_rmap_item.h" #include "xfs_alloc.h" #include "xfs_rmap.h" +#include "xfs_defer.h" /* Set the map extent flags for this reverse mapping. */ static void @@ -244,8 +245,7 @@ xfs_rmap_update_cancel_item( kmem_free(rmap); } -static const struct xfs_defer_op_type xfs_rmap_update_defer_type = { - .type = XFS_DEFER_OPS_TYPE_RMAP, +const struct xfs_defer_op_type xfs_rmap_update_defer_type = { .max_items = XFS_RUI_MAX_FAST_EXTENTS, .diff_items = xfs_rmap_update_diff_items, .create_intent = xfs_rmap_update_create_intent, @@ -256,10 +256,3 @@ static const struct xfs_defer_op_type xfs_rmap_update_defer_type = { .finish_cleanup = xfs_rmap_update_finish_cleanup, .cancel_item = xfs_rmap_update_cancel_item, }; - -/* Register the deferred op type. 
*/ -void -xfs_rmap_update_init_defer_op(void) -{ - xfs_defer_init_op_type(&xfs_rmap_update_defer_type); -} diff --git a/include/asm-generic/dma-mapping.h b/include/asm-generic/dma-mapping.h index 880a292d792f..c13f46109e88 100644 --- a/include/asm-generic/dma-mapping.h +++ b/include/asm-generic/dma-mapping.h @@ -4,7 +4,7 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) { - return &dma_direct_ops; + return NULL; } #endif /* _ASM_GENERIC_DMA_MAPPING_H */ diff --git a/include/asm-generic/error-injection.h b/include/asm-generic/error-injection.h index 296c65442f00..95a159a4137f 100644 --- a/include/asm-generic/error-injection.h +++ b/include/asm-generic/error-injection.h @@ -8,6 +8,7 @@ enum { EI_ETYPE_NULL, /* Return NULL if failure */ EI_ETYPE_ERRNO, /* Return -ERRNO if failure */ EI_ETYPE_ERRNO_NULL, /* Return -ERRNO or NULL if failure */ + EI_ETYPE_TRUE, /* Return true if failure */ }; struct error_injection_entry { diff --git a/include/asm-generic/export.h b/include/asm-generic/export.h index 4d73e6e3c66c..294d6ae785d4 100644 --- a/include/asm-generic/export.h +++ b/include/asm-generic/export.h @@ -59,16 +59,19 @@ __kcrctab_\name: .endm #undef __put -#if defined(__KSYM_DEPS__) - -#define __EXPORT_SYMBOL(sym, val, sec) === __KSYM_##sym === - -#elif defined(CONFIG_TRIM_UNUSED_KSYMS) +#if defined(CONFIG_TRIM_UNUSED_KSYMS) #include #include +.macro __ksym_marker sym + .section ".discard.ksym","a" +__ksym_marker_\sym: + .previous +.endm + #define __EXPORT_SYMBOL(sym, val, sec) \ + __ksym_marker sym; \ __cond_export_sym(sym, val, sec, __is_defined(__KSYM_##sym)) #define __cond_export_sym(sym, val, sec, conf) \ ___cond_export_sym(sym, val, sec, conf) diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index a9cac82e9a7a..05e61e6c843f 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -1057,6 +1057,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot); int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot); int pud_clear_huge(pud_t *pud); int pmd_clear_huge(pmd_t *pmd); +int p4d_free_pud_page(p4d_t *p4d, unsigned long addr); int pud_free_pmd_page(pud_t *pud, unsigned long addr); int pmd_free_pte_page(pmd_t *pmd, unsigned long addr); #else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */ @@ -1084,6 +1085,10 @@ static inline int pmd_clear_huge(pmd_t *pmd) { return 0; } +static inline int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) +{ + return 0; +} static inline int pud_free_pmd_page(pud_t *pud, unsigned long addr) { return 0; diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h index 22e6f412c595..a3e766dff917 100644 --- a/include/crypto/acompress.h +++ b/include/crypto/acompress.h @@ -234,34 +234,6 @@ static inline void acomp_request_set_params(struct acomp_req *req, req->flags |= CRYPTO_ACOMP_ALLOC_OUTPUT; } -static inline void crypto_stat_compress(struct acomp_req *req, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_acomp *tfm = crypto_acomp_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->compress_err_cnt); - } else { - atomic_inc(&tfm->base.__crt_alg->compress_cnt); - atomic64_add(req->slen, &tfm->base.__crt_alg->compress_tlen); - } -#endif -} - -static inline void crypto_stat_decompress(struct acomp_req *req, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_acomp *tfm = crypto_acomp_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->compress_err_cnt); - } else 
{ - atomic_inc(&tfm->base.__crt_alg->decompress_cnt); - atomic64_add(req->slen, &tfm->base.__crt_alg->decompress_tlen); - } -#endif -} - /** * crypto_acomp_compress() -- Invoke asynchronous compress operation * @@ -274,10 +246,13 @@ static inline void crypto_stat_decompress(struct acomp_req *req, int ret) static inline int crypto_acomp_compress(struct acomp_req *req) { struct crypto_acomp *tfm = crypto_acomp_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int slen = req->slen; int ret; + crypto_stats_get(alg); ret = tfm->compress(req); - crypto_stat_compress(req, ret); + crypto_stats_compress(slen, ret, alg); return ret; } @@ -293,10 +268,13 @@ static inline int crypto_acomp_compress(struct acomp_req *req) static inline int crypto_acomp_decompress(struct acomp_req *req) { struct crypto_acomp *tfm = crypto_acomp_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int slen = req->slen; int ret; + crypto_stats_get(alg); ret = tfm->decompress(req); - crypto_stat_decompress(req, ret); + crypto_stats_decompress(slen, ret, alg); return ret; } diff --git a/include/crypto/aead.h b/include/crypto/aead.h index 0d765d7bfb82..9ad595f97c65 100644 --- a/include/crypto/aead.h +++ b/include/crypto/aead.h @@ -115,7 +115,6 @@ struct aead_request { * @setkey: see struct skcipher_alg * @encrypt: see struct skcipher_alg * @decrypt: see struct skcipher_alg - * @geniv: see struct skcipher_alg * @ivsize: see struct skcipher_alg * @chunksize: see struct skcipher_alg * @init: Initialize the cryptographic transformation object. This function @@ -142,8 +141,6 @@ struct aead_alg { int (*init)(struct crypto_aead *tfm); void (*exit)(struct crypto_aead *tfm); - const char *geniv; - unsigned int ivsize; unsigned int maxauthsize; unsigned int chunksize; @@ -306,34 +303,6 @@ static inline struct crypto_aead *crypto_aead_reqtfm(struct aead_request *req) return __crypto_aead_cast(req->base.tfm); } -static inline void crypto_stat_aead_encrypt(struct aead_request *req, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_aead *tfm = crypto_aead_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->aead_err_cnt); - } else { - atomic_inc(&tfm->base.__crt_alg->encrypt_cnt); - atomic64_add(req->cryptlen, &tfm->base.__crt_alg->encrypt_tlen); - } -#endif -} - -static inline void crypto_stat_aead_decrypt(struct aead_request *req, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_aead *tfm = crypto_aead_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->aead_err_cnt); - } else { - atomic_inc(&tfm->base.__crt_alg->decrypt_cnt); - atomic64_add(req->cryptlen, &tfm->base.__crt_alg->decrypt_tlen); - } -#endif -} - /** * crypto_aead_encrypt() - encrypt plaintext * @req: reference to the aead_request handle that holds all information @@ -356,13 +325,16 @@ static inline void crypto_stat_aead_decrypt(struct aead_request *req, int ret) static inline int crypto_aead_encrypt(struct aead_request *req) { struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct crypto_alg *alg = aead->base.__crt_alg; + unsigned int cryptlen = req->cryptlen; int ret; + crypto_stats_get(alg); if (crypto_aead_get_flags(aead) & CRYPTO_TFM_NEED_KEY) ret = -ENOKEY; else ret = crypto_aead_alg(aead)->encrypt(req); - crypto_stat_aead_encrypt(req, ret); + crypto_stats_aead_encrypt(cryptlen, alg, ret); return ret; } @@ -391,15 +363,18 @@ static inline int crypto_aead_encrypt(struct aead_request *req) static inline int 
crypto_aead_decrypt(struct aead_request *req) { struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct crypto_alg *alg = aead->base.__crt_alg; + unsigned int cryptlen = req->cryptlen; int ret; + crypto_stats_get(alg); if (crypto_aead_get_flags(aead) & CRYPTO_TFM_NEED_KEY) ret = -ENOKEY; else if (req->cryptlen < crypto_aead_authsize(aead)) ret = -EINVAL; else ret = crypto_aead_alg(aead)->decrypt(req); - crypto_stat_aead_decrypt(req, ret); + crypto_stats_aead_decrypt(cryptlen, alg, ret); return ret; } diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h index afac71119396..2d690494568c 100644 --- a/include/crypto/akcipher.h +++ b/include/crypto/akcipher.h @@ -271,62 +271,6 @@ static inline unsigned int crypto_akcipher_maxsize(struct crypto_akcipher *tfm) return alg->max_size(tfm); } -static inline void crypto_stat_akcipher_encrypt(struct akcipher_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->akcipher_err_cnt); - } else { - atomic_inc(&tfm->base.__crt_alg->encrypt_cnt); - atomic64_add(req->src_len, &tfm->base.__crt_alg->encrypt_tlen); - } -#endif -} - -static inline void crypto_stat_akcipher_decrypt(struct akcipher_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->akcipher_err_cnt); - } else { - atomic_inc(&tfm->base.__crt_alg->decrypt_cnt); - atomic64_add(req->src_len, &tfm->base.__crt_alg->decrypt_tlen); - } -#endif -} - -static inline void crypto_stat_akcipher_sign(struct akcipher_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) - atomic_inc(&tfm->base.__crt_alg->akcipher_err_cnt); - else - atomic_inc(&tfm->base.__crt_alg->sign_cnt); -#endif -} - -static inline void crypto_stat_akcipher_verify(struct akcipher_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) - atomic_inc(&tfm->base.__crt_alg->akcipher_err_cnt); - else - atomic_inc(&tfm->base.__crt_alg->verify_cnt); -#endif -} - /** * crypto_akcipher_encrypt() - Invoke public key encrypt operation * @@ -341,10 +285,13 @@ static inline int crypto_akcipher_encrypt(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct akcipher_alg *alg = crypto_akcipher_alg(tfm); + struct crypto_alg *calg = tfm->base.__crt_alg; + unsigned int src_len = req->src_len; int ret; + crypto_stats_get(calg); ret = alg->encrypt(req); - crypto_stat_akcipher_encrypt(req, ret); + crypto_stats_akcipher_encrypt(src_len, ret, calg); return ret; } @@ -362,10 +309,13 @@ static inline int crypto_akcipher_decrypt(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct akcipher_alg *alg = crypto_akcipher_alg(tfm); + struct crypto_alg *calg = tfm->base.__crt_alg; + unsigned int src_len = req->src_len; int ret; + crypto_stats_get(calg); ret = alg->decrypt(req); - crypto_stat_akcipher_decrypt(req, ret); + crypto_stats_akcipher_decrypt(src_len, ret, calg); return ret; } @@ -383,10 +333,12 @@ static inline int crypto_akcipher_sign(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct 
akcipher_alg *alg = crypto_akcipher_alg(tfm); + struct crypto_alg *calg = tfm->base.__crt_alg; int ret; + crypto_stats_get(calg); ret = alg->sign(req); - crypto_stat_akcipher_sign(req, ret); + crypto_stats_akcipher_sign(ret, calg); return ret; } @@ -404,10 +356,12 @@ static inline int crypto_akcipher_verify(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct akcipher_alg *alg = crypto_akcipher_alg(tfm); + struct crypto_alg *calg = tfm->base.__crt_alg; int ret; + crypto_stats_get(calg); ret = alg->verify(req); - crypto_stat_akcipher_verify(req, ret); + crypto_stats_akcipher_verify(ret, calg); return ret; } diff --git a/include/crypto/chacha.h b/include/crypto/chacha.h new file mode 100644 index 000000000000..1fc70a69d550 --- /dev/null +++ b/include/crypto/chacha.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Common values and helper functions for the ChaCha and XChaCha stream ciphers. + * + * XChaCha extends ChaCha's nonce to 192 bits, while provably retaining ChaCha's + * security. Here they share the same key size, tfm context, and setkey + * function; only their IV size and encrypt/decrypt function differ. + * + * The ChaCha paper specifies 20, 12, and 8-round variants. In general, it is + * recommended to use the 20-round variant ChaCha20. However, the other + * variants can be needed in some performance-sensitive scenarios. The generic + * ChaCha code currently allows only the 20 and 12-round variants. + */ + +#ifndef _CRYPTO_CHACHA_H +#define _CRYPTO_CHACHA_H + +#include +#include +#include + +/* 32-bit stream position, then 96-bit nonce (RFC7539 convention) */ +#define CHACHA_IV_SIZE 16 + +#define CHACHA_KEY_SIZE 32 +#define CHACHA_BLOCK_SIZE 64 +#define CHACHAPOLY_IV_SIZE 12 + +/* 192-bit nonce, then 64-bit stream position */ +#define XCHACHA_IV_SIZE 32 + +struct chacha_ctx { + u32 key[8]; + int nrounds; +}; + +void chacha_block(u32 *state, u8 *stream, int nrounds); +static inline void chacha20_block(u32 *state, u8 *stream) +{ + chacha_block(state, stream, 20); +} +void hchacha_block(const u32 *in, u32 *out, int nrounds); + +void crypto_chacha_init(u32 *state, struct chacha_ctx *ctx, u8 *iv); + +int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keysize); +int crypto_chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keysize); + +int crypto_chacha_crypt(struct skcipher_request *req); +int crypto_xchacha_crypt(struct skcipher_request *req); + +#endif /* _CRYPTO_CHACHA_H */ diff --git a/include/crypto/chacha20.h b/include/crypto/chacha20.h deleted file mode 100644 index f76302d99e2b..000000000000 --- a/include/crypto/chacha20.h +++ /dev/null @@ -1,27 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Common values for the ChaCha20 algorithm - */ - -#ifndef _CRYPTO_CHACHA20_H -#define _CRYPTO_CHACHA20_H - -#include -#include -#include - -#define CHACHA20_IV_SIZE 16 -#define CHACHA20_KEY_SIZE 32 -#define CHACHA20_BLOCK_SIZE 64 - -struct chacha20_ctx { - u32 key[8]; -}; - -void chacha20_block(u32 *state, u8 *stream); -void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv); -int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key, - unsigned int keysize); -int crypto_chacha20_crypt(struct skcipher_request *req); - -#endif diff --git a/include/crypto/hash.h b/include/crypto/hash.h index bc7796600338..3b31c1b349ae 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ -412,32 +412,6 @@ static inline void 
*ahash_request_ctx(struct ahash_request *req) int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen); -static inline void crypto_stat_ahash_update(struct ahash_request *req, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) - atomic_inc(&tfm->base.__crt_alg->hash_err_cnt); - else - atomic64_add(req->nbytes, &tfm->base.__crt_alg->hash_tlen); -#endif -} - -static inline void crypto_stat_ahash_final(struct ahash_request *req, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->hash_err_cnt); - } else { - atomic_inc(&tfm->base.__crt_alg->hash_cnt); - atomic64_add(req->nbytes, &tfm->base.__crt_alg->hash_tlen); - } -#endif -} - /** * crypto_ahash_finup() - update and finalize message digest * @req: reference to the ahash_request handle that holds all information @@ -552,10 +526,14 @@ static inline int crypto_ahash_init(struct ahash_request *req) */ static inline int crypto_ahash_update(struct ahash_request *req) { + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int nbytes = req->nbytes; int ret; + crypto_stats_get(alg); ret = crypto_ahash_reqtfm(req)->update(req); - crypto_stat_ahash_update(req, ret); + crypto_stats_ahash_update(nbytes, ret, alg); return ret; } diff --git a/include/crypto/hash_info.h b/include/crypto/hash_info.h index 56f217d41f12..91786b68dbdb 100644 --- a/include/crypto/hash_info.h +++ b/include/crypto/hash_info.h @@ -15,6 +15,7 @@ #include #include +#include #include diff --git a/include/crypto/internal/cryptouser.h b/include/crypto/internal/cryptouser.h index 8db299c25566..40623f4457df 100644 --- a/include/crypto/internal/cryptouser.h +++ b/include/crypto/internal/cryptouser.h @@ -3,6 +3,11 @@ struct crypto_alg *crypto_alg_match(struct crypto_user_alg *p, int exact); -int crypto_dump_reportstat(struct sk_buff *skb, struct netlink_callback *cb); +#ifdef CONFIG_CRYPTO_STATS int crypto_reportstat(struct sk_buff *in_skb, struct nlmsghdr *in_nlh, struct nlattr **attrs); -int crypto_dump_reportstat_done(struct netlink_callback *cb); +#else +static int crypto_reportstat(struct sk_buff *in_skb, struct nlmsghdr *in_nlh, struct nlattr **attrs) +{ + return -ENOTSUPP; +} +#endif diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h index e42f7063f245..453e867b4bd9 100644 --- a/include/crypto/internal/skcipher.h +++ b/include/crypto/internal/skcipher.h @@ -70,8 +70,6 @@ struct skcipher_walk { unsigned int alignmask; }; -extern const struct crypto_type crypto_givcipher_type; - static inline struct crypto_instance *skcipher_crypto_instance( struct skcipher_instance *inst) { diff --git a/include/crypto/kpp.h b/include/crypto/kpp.h index f517ba6d3a27..1a97e1601422 100644 --- a/include/crypto/kpp.h +++ b/include/crypto/kpp.h @@ -268,42 +268,6 @@ struct kpp_secret { unsigned short len; }; -static inline void crypto_stat_kpp_set_secret(struct crypto_kpp *tfm, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - if (ret) - atomic_inc(&tfm->base.__crt_alg->kpp_err_cnt); - else - atomic_inc(&tfm->base.__crt_alg->setsecret_cnt); -#endif -} - -static inline void crypto_stat_kpp_generate_public_key(struct kpp_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); - - if (ret) - 
atomic_inc(&tfm->base.__crt_alg->kpp_err_cnt); - else - atomic_inc(&tfm->base.__crt_alg->generate_public_key_cnt); -#endif -} - -static inline void crypto_stat_kpp_compute_shared_secret(struct kpp_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); - - if (ret) - atomic_inc(&tfm->base.__crt_alg->kpp_err_cnt); - else - atomic_inc(&tfm->base.__crt_alg->compute_shared_secret_cnt); -#endif -} - /** * crypto_kpp_set_secret() - Invoke kpp operation * @@ -323,10 +287,12 @@ static inline int crypto_kpp_set_secret(struct crypto_kpp *tfm, const void *buffer, unsigned int len) { struct kpp_alg *alg = crypto_kpp_alg(tfm); + struct crypto_alg *calg = tfm->base.__crt_alg; int ret; + crypto_stats_get(calg); ret = alg->set_secret(tfm, buffer, len); - crypto_stat_kpp_set_secret(tfm, ret); + crypto_stats_kpp_set_secret(calg, ret); return ret; } @@ -347,10 +313,12 @@ static inline int crypto_kpp_generate_public_key(struct kpp_request *req) { struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); struct kpp_alg *alg = crypto_kpp_alg(tfm); + struct crypto_alg *calg = tfm->base.__crt_alg; int ret; + crypto_stats_get(calg); ret = alg->generate_public_key(req); - crypto_stat_kpp_generate_public_key(req, ret); + crypto_stats_kpp_generate_public_key(calg, ret); return ret; } @@ -368,10 +336,12 @@ static inline int crypto_kpp_compute_shared_secret(struct kpp_request *req) { struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); struct kpp_alg *alg = crypto_kpp_alg(tfm); + struct crypto_alg *calg = tfm->base.__crt_alg; int ret; + crypto_stats_get(calg); ret = alg->compute_shared_secret(req); - crypto_stat_kpp_compute_shared_secret(req, ret); + crypto_stats_kpp_compute_shared_secret(calg, ret); return ret; } diff --git a/include/crypto/nhpoly1305.h b/include/crypto/nhpoly1305.h new file mode 100644 index 000000000000..53c04423c582 --- /dev/null +++ b/include/crypto/nhpoly1305.h @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Common values and helper functions for the NHPoly1305 hash function. + */ + +#ifndef _NHPOLY1305_H +#define _NHPOLY1305_H + +#include +#include + +/* NH parameterization: */ + +/* Endianness: little */ +/* Word size: 32 bits (works well on NEON, SSE2, AVX2) */ + +/* Stride: 2 words (optimal on ARM32 NEON; works okay on other CPUs too) */ +#define NH_PAIR_STRIDE 2 +#define NH_MESSAGE_UNIT (NH_PAIR_STRIDE * 2 * sizeof(u32)) + +/* Num passes (Toeplitz iteration count): 4, to give ε = 2^{-128} */ +#define NH_NUM_PASSES 4 +#define NH_HASH_BYTES (NH_NUM_PASSES * sizeof(u64)) + +/* Max message size: 1024 bytes (32x compression factor) */ +#define NH_NUM_STRIDES 64 +#define NH_MESSAGE_WORDS (NH_PAIR_STRIDE * 2 * NH_NUM_STRIDES) +#define NH_MESSAGE_BYTES (NH_MESSAGE_WORDS * sizeof(u32)) +#define NH_KEY_WORDS (NH_MESSAGE_WORDS + \ + NH_PAIR_STRIDE * 2 * (NH_NUM_PASSES - 1)) +#define NH_KEY_BYTES (NH_KEY_WORDS * sizeof(u32)) + +#define NHPOLY1305_KEY_SIZE (POLY1305_BLOCK_SIZE + NH_KEY_BYTES) + +struct nhpoly1305_key { + struct poly1305_key poly_key; + u32 nh_key[NH_KEY_WORDS]; +}; + +struct nhpoly1305_state { + + /* Running total of polynomial evaluation */ + struct poly1305_state poly_state; + + /* Partial block buffer */ + u8 buffer[NH_MESSAGE_UNIT]; + unsigned int buflen; + + /* + * Number of bytes remaining until the current NH message reaches + * NH_MESSAGE_BYTES. When nonzero, 'nh_hash' holds the partial NH hash. 
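+	 *
+	 * (Editor's arithmetic check from the constants above:
+	 *	NH_MESSAGE_BYTES = 2 * 2 * 64 * 4 = 1024 bytes in,
+	 *	NH_HASH_BYTES    = 4 * 8          = 32 bytes out,
+	 * i.e. the 32x compression factor noted earlier.)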
+ */ + unsigned int nh_remaining; + + __le64 nh_hash[NH_NUM_PASSES]; +}; + +typedef void (*nh_t)(const u32 *key, const u8 *message, size_t message_len, + __le64 hash[NH_NUM_PASSES]); + +int crypto_nhpoly1305_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen); + +int crypto_nhpoly1305_init(struct shash_desc *desc); +int crypto_nhpoly1305_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen); +int crypto_nhpoly1305_update_helper(struct shash_desc *desc, + const u8 *src, unsigned int srclen, + nh_t nh_fn); +int crypto_nhpoly1305_final(struct shash_desc *desc, u8 *dst); +int crypto_nhpoly1305_final_helper(struct shash_desc *desc, u8 *dst, + nh_t nh_fn); + +#endif /* _NHPOLY1305_H */ diff --git a/include/crypto/poly1305.h b/include/crypto/poly1305.h index f718a19da82f..34317ed2071e 100644 --- a/include/crypto/poly1305.h +++ b/include/crypto/poly1305.h @@ -13,13 +13,21 @@ #define POLY1305_KEY_SIZE 32 #define POLY1305_DIGEST_SIZE 16 +struct poly1305_key { + u32 r[5]; /* key, base 2^26 */ +}; + +struct poly1305_state { + u32 h[5]; /* accumulator, base 2^26 */ +}; + struct poly1305_desc_ctx { /* key */ - u32 r[5]; + struct poly1305_key r; /* finalize key */ u32 s[4]; /* accumulator */ - u32 h[5]; + struct poly1305_state h; /* partial buffer */ u8 buf[POLY1305_BLOCK_SIZE]; /* bytes used in partial buffer */ @@ -30,6 +38,22 @@ struct poly1305_desc_ctx { bool sset; }; +/* + * Poly1305 core functions. These implement the ε-almost-∆-universal hash + * function underlying the Poly1305 MAC, i.e. they don't add an encrypted nonce + * ("s key") at the end. They also only support block-aligned inputs. + */ +void poly1305_core_setkey(struct poly1305_key *key, const u8 *raw_key); +static inline void poly1305_core_init(struct poly1305_state *state) +{ + memset(state->h, 0, sizeof(state->h)); +} +void poly1305_core_blocks(struct poly1305_state *state, + const struct poly1305_key *key, + const void *src, unsigned int nblocks); +void poly1305_core_emit(const struct poly1305_state *state, void *dst); + +/* Crypto API helper functions for the Poly1305 MAC */ int crypto_poly1305_init(struct shash_desc *desc); unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx, const u8 *src, unsigned int srclen); diff --git a/include/crypto/rng.h b/include/crypto/rng.h index 6d258f5b68f1..022a1b896b47 100644 --- a/include/crypto/rng.h +++ b/include/crypto/rng.h @@ -122,29 +122,6 @@ static inline void crypto_free_rng(struct crypto_rng *tfm) crypto_destroy_tfm(tfm, crypto_rng_tfm(tfm)); } -static inline void crypto_stat_rng_seed(struct crypto_rng *tfm, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - if (ret && ret != -EINPROGRESS && ret != -EBUSY) - atomic_inc(&tfm->base.__crt_alg->rng_err_cnt); - else - atomic_inc(&tfm->base.__crt_alg->seed_cnt); -#endif -} - -static inline void crypto_stat_rng_generate(struct crypto_rng *tfm, - unsigned int dlen, int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&tfm->base.__crt_alg->rng_err_cnt); - } else { - atomic_inc(&tfm->base.__crt_alg->generate_cnt); - atomic64_add(dlen, &tfm->base.__crt_alg->generate_tlen); - } -#endif -} - /** * crypto_rng_generate() - get random number * @tfm: cipher handle @@ -163,10 +140,12 @@ static inline int crypto_rng_generate(struct crypto_rng *tfm, const u8 *src, unsigned int slen, u8 *dst, unsigned int dlen) { + struct crypto_alg *alg = tfm->base.__crt_alg; int ret; + crypto_stats_get(alg); ret = crypto_rng_alg(tfm)->generate(tfm, src, slen, dst, dlen); - 
crypto_stat_rng_generate(tfm, dlen, ret); + crypto_stats_rng_generate(alg, dlen, ret); return ret; } diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h index 925f547cdcfa..e555294ed77f 100644 --- a/include/crypto/skcipher.h +++ b/include/crypto/skcipher.h @@ -39,19 +39,6 @@ struct skcipher_request { void *__ctx[] CRYPTO_MINALIGN_ATTR; }; -/** - * struct skcipher_givcrypt_request - Crypto request with IV generation - * @seq: Sequence number for IV generation - * @giv: Space for generated IV - * @creq: The crypto request itself - */ -struct skcipher_givcrypt_request { - u64 seq; - u8 *giv; - - struct ablkcipher_request creq; -}; - struct crypto_skcipher { int (*setkey)(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen); @@ -486,32 +473,6 @@ static inline struct crypto_sync_skcipher *crypto_sync_skcipher_reqtfm( return container_of(tfm, struct crypto_sync_skcipher, base); } -static inline void crypto_stat_skcipher_encrypt(struct skcipher_request *req, - int ret, struct crypto_alg *alg) -{ -#ifdef CONFIG_CRYPTO_STATS - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&alg->cipher_err_cnt); - } else { - atomic_inc(&alg->encrypt_cnt); - atomic64_add(req->cryptlen, &alg->encrypt_tlen); - } -#endif -} - -static inline void crypto_stat_skcipher_decrypt(struct skcipher_request *req, - int ret, struct crypto_alg *alg) -{ -#ifdef CONFIG_CRYPTO_STATS - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&alg->cipher_err_cnt); - } else { - atomic_inc(&alg->decrypt_cnt); - atomic64_add(req->cryptlen, &alg->decrypt_tlen); - } -#endif -} - /** * crypto_skcipher_encrypt() - encrypt plaintext * @req: reference to the skcipher_request handle that holds all information @@ -526,13 +487,16 @@ static inline void crypto_stat_skcipher_decrypt(struct skcipher_request *req, static inline int crypto_skcipher_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int cryptlen = req->cryptlen; int ret; + crypto_stats_get(alg); if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) ret = -ENOKEY; else ret = tfm->encrypt(req); - crypto_stat_skcipher_encrypt(req, ret, tfm->base.__crt_alg); + crypto_stats_skcipher_encrypt(cryptlen, ret, alg); return ret; } @@ -550,13 +514,16 @@ static inline int crypto_skcipher_encrypt(struct skcipher_request *req) static inline int crypto_skcipher_decrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_alg *alg = tfm->base.__crt_alg; + unsigned int cryptlen = req->cryptlen; int ret; + crypto_stats_get(alg); if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) ret = -ENOKEY; else ret = tfm->decrypt(req); - crypto_stat_skcipher_decrypt(req, ret, tfm->base.__crt_alg); + crypto_stats_skcipher_decrypt(cryptlen, ret, alg); return ret; } diff --git a/include/crypto/streebog.h b/include/crypto/streebog.h new file mode 100644 index 000000000000..4af119f7e07b --- /dev/null +++ b/include/crypto/streebog.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0+ OR BSD-2-Clause */ +/* + * Copyright (c) 2013 Alexey Degtyarev + * Copyright (c) 2018 Vitaly Chikunov + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the Free + * Software Foundation; either version 2 of the License, or (at your option) + * any later version. 
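+ *
+ * (Editor's note: Streebog is the hash function standardized as
+ * GOST R 34.11-2012; the 256- and 512-bit digest sizes and the
+ * 64-byte block size defined below follow that standard.)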
+ */ + +#ifndef _CRYPTO_STREEBOG_H_ +#define _CRYPTO_STREEBOG_H_ + +#include + +#define STREEBOG256_DIGEST_SIZE 32 +#define STREEBOG512_DIGEST_SIZE 64 +#define STREEBOG_BLOCK_SIZE 64 + +struct streebog_uint512 { + u64 qword[8]; +}; + +struct streebog_state { + u8 buffer[STREEBOG_BLOCK_SIZE]; + struct streebog_uint512 hash; + struct streebog_uint512 h; + struct streebog_uint512 N; + struct streebog_uint512 Sigma; + size_t fillsize; +}; + +#endif /* !_CRYPTO_STREEBOG_H_ */ diff --git a/include/dt-bindings/gpio/tegra186-gpio.h b/include/dt-bindings/gpio/tegra186-gpio.h index 463ad398fe3e..cabc5712e745 100644 --- a/include/dt-bindings/gpio/tegra186-gpio.h +++ b/include/dt-bindings/gpio/tegra186-gpio.h @@ -14,6 +14,34 @@ #include /* GPIOs implemented by main GPIO controller */ +#define TEGRA186_MAIN_GPIO_PORT_A 0 +#define TEGRA186_MAIN_GPIO_PORT_B 1 +#define TEGRA186_MAIN_GPIO_PORT_C 2 +#define TEGRA186_MAIN_GPIO_PORT_D 3 +#define TEGRA186_MAIN_GPIO_PORT_E 4 +#define TEGRA186_MAIN_GPIO_PORT_F 5 +#define TEGRA186_MAIN_GPIO_PORT_G 6 +#define TEGRA186_MAIN_GPIO_PORT_H 7 +#define TEGRA186_MAIN_GPIO_PORT_I 8 +#define TEGRA186_MAIN_GPIO_PORT_J 9 +#define TEGRA186_MAIN_GPIO_PORT_K 10 +#define TEGRA186_MAIN_GPIO_PORT_L 11 +#define TEGRA186_MAIN_GPIO_PORT_M 12 +#define TEGRA186_MAIN_GPIO_PORT_N 13 +#define TEGRA186_MAIN_GPIO_PORT_O 14 +#define TEGRA186_MAIN_GPIO_PORT_P 15 +#define TEGRA186_MAIN_GPIO_PORT_Q 16 +#define TEGRA186_MAIN_GPIO_PORT_R 17 +#define TEGRA186_MAIN_GPIO_PORT_T 18 +#define TEGRA186_MAIN_GPIO_PORT_X 19 +#define TEGRA186_MAIN_GPIO_PORT_Y 20 +#define TEGRA186_MAIN_GPIO_PORT_BB 21 +#define TEGRA186_MAIN_GPIO_PORT_CC 22 + +#define TEGRA186_MAIN_GPIO(port, offset) \ + ((TEGRA186_MAIN_GPIO_PORT_##port * 8) + offset) + +/* need to keep these for backwards-compatibility */ #define TEGRA_MAIN_GPIO_PORT_A 0 #define TEGRA_MAIN_GPIO_PORT_B 1 #define TEGRA_MAIN_GPIO_PORT_C 2 @@ -42,6 +70,19 @@ ((TEGRA_MAIN_GPIO_PORT_##port * 8) + offset) /* GPIOs implemented by AON GPIO controller */ +#define TEGRA186_AON_GPIO_PORT_S 0 +#define TEGRA186_AON_GPIO_PORT_U 1 +#define TEGRA186_AON_GPIO_PORT_V 2 +#define TEGRA186_AON_GPIO_PORT_W 3 +#define TEGRA186_AON_GPIO_PORT_Z 4 +#define TEGRA186_AON_GPIO_PORT_AA 5 +#define TEGRA186_AON_GPIO_PORT_EE 6 +#define TEGRA186_AON_GPIO_PORT_FF 7 + +#define TEGRA186_AON_GPIO(port, offset) \ + ((TEGRA186_AON_GPIO_PORT_##port * 8) + offset) + +/* need to keep these for backwards-compatibility */ #define TEGRA_AON_GPIO_PORT_S 0 #define TEGRA_AON_GPIO_PORT_U 1 #define TEGRA_AON_GPIO_PORT_V 2 diff --git a/include/linux/alcor_pci.h b/include/linux/alcor_pci.h new file mode 100644 index 000000000000..da973e8a2da8 --- /dev/null +++ b/include/linux/alcor_pci.h @@ -0,0 +1,286 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Copyright (C) 2018 Oleksij Rempel + * + * Driver for Alcor Micro AU6601 and AU6621 controllers + */ + +#ifndef __ALCOR_PCI_H +#define __ALCOR_PCI_H + +#define ALCOR_SD_CARD 0 +#define ALCOR_MS_CARD 1 + +#define DRV_NAME_ALCOR_PCI_SDMMC "alcor_sdmmc" +#define DRV_NAME_ALCOR_PCI_MS "alcor_ms" + +#define PCI_ID_ALCOR_MICRO 0x1AEA +#define PCI_ID_AU6601 0x6601 +#define PCI_ID_AU6621 0x6621 + +#define MHZ_TO_HZ(freq) ((freq) * 1000 * 1000) + +#define AU6601_BASE_CLOCK 31000000 +#define AU6601_MIN_CLOCK 150000 +#define AU6601_MAX_CLOCK 208000000 +#define AU6601_MAX_DMA_SEGMENTS 1 +#define AU6601_MAX_PIO_SEGMENTS 1 +#define AU6601_MAX_DMA_BLOCK_SIZE 0x1000 +#define AU6601_MAX_PIO_BLOCK_SIZE 0x200 +#define AU6601_MAX_DMA_BLOCKS 1 +#define AU6601_DMA_LOCAL_SEGMENTS 1 + 
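(Editor's aside: a hedged usage sketch, not part of this driver; the helper
name below is hypothetical. Note that MHZ_TO_HZ(208) expands to 208000000,
which is exactly AU6601_MAX_CLOCK, so the bounds used here are
self-consistent:)

	/* assumes <linux/types.h> and the AU6601_*_CLOCK defines above */
	static inline bool alcor_clk_in_range(u32 hz)
	{
		return hz >= AU6601_MIN_CLOCK && hz <= AU6601_MAX_CLOCK;
	}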
+/* registers spotted by reverse engineering but still + * with unknown functionality: + * 0x10 - ADMA phy address. AU6621 only? + * 0x51 - LED ctrl? + * 0x52 - unknown + * 0x61 - LED related? Always toggled BIT0 + * 0x63 - Same as 0x61? + * 0x77 - unknown + */ + +/* SDMA phy address. Higher than 0x0800.0000? + * The au6601 and au6621 have different DMA engines with different issues. + * For example, the au6621 engine is triggered by an address change; no other + * interaction is needed. This means that if we get two buffers with the same + * address, the engine will stall. + */ +#define AU6601_REG_SDMA_ADDR 0x00 +#define AU6601_SDMA_MASK 0xffffffff + +#define AU6601_DMA_BOUNDARY 0x05 +#define AU6621_DMA_PAGE_CNT 0x05 +/* PIO */ +#define AU6601_REG_BUFFER 0x08 +/* ADMA ctrl? AU6621 only. */ +#define AU6621_DMA_CTRL 0x0c +#define AU6621_DMA_ENABLE BIT(0) +/* CMD index */ +#define AU6601_REG_CMD_OPCODE 0x23 +/* CMD parameter */ +#define AU6601_REG_CMD_ARG 0x24 +/* CMD response 4x4 Bytes */ +#define AU6601_REG_CMD_RSP0 0x30 +#define AU6601_REG_CMD_RSP1 0x34 +#define AU6601_REG_CMD_RSP2 0x38 +#define AU6601_REG_CMD_RSP3 0x3C +/* default timeout set to 125: 125 * 40ms = 5 sec. + * How exactly is it calculated? + */ +#define AU6601_TIME_OUT_CTRL 0x69 +/* Block size for SDMA or PIO */ +#define AU6601_REG_BLOCK_SIZE 0x6c +/* Some power-related reg, used together with AU6601_OUTPUT_ENABLE */ +#define AU6601_POWER_CONTROL 0x70 + +/* PLL ctrl */ +#define AU6601_CLK_SELECT 0x72 +#define AU6601_CLK_OVER_CLK 0x80 +#define AU6601_CLK_384_MHZ 0x30 +#define AU6601_CLK_125_MHZ 0x20 +#define AU6601_CLK_48_MHZ 0x10 +#define AU6601_CLK_EXT_PLL 0x04 +#define AU6601_CLK_X2_MODE 0x02 +#define AU6601_CLK_ENABLE 0x01 +#define AU6601_CLK_31_25_MHZ 0x00 + +#define AU6601_CLK_DIVIDER 0x73 + +#define AU6601_INTERFACE_MODE_CTRL 0x74 +#define AU6601_DLINK_MODE 0x80 +#define AU6601_INTERRUPT_DELAY_TIME 0x40 +#define AU6601_SIGNAL_REQ_CTRL 0x30 +#define AU6601_MS_CARD_WP BIT(3) +#define AU6601_SD_CARD_WP BIT(0) + +/* same register values are used for: + * - AU6601_OUTPUT_ENABLE + * - AU6601_POWER_CONTROL + */ +#define AU6601_ACTIVE_CTRL 0x75 +#define AU6601_XD_CARD BIT(4) +/* AU6601_MS_CARD_ACTIVE - will it activate the MS card section? */ +#define AU6601_MS_CARD BIT(3) +#define AU6601_SD_CARD BIT(0) + +/* card slot state. It should automatically detect the type of + * the card. + */ +#define AU6601_DETECT_STATUS 0x76 +#define AU6601_DETECT_EN BIT(7) +#define AU6601_MS_DETECTED BIT(3) +#define AU6601_SD_DETECTED BIT(0) +#define AU6601_DETECT_STATUS_M 0xf + +#define AU6601_REG_SW_RESET 0x79 +#define AU6601_BUF_CTRL_RESET BIT(7) +#define AU6601_RESET_DATA BIT(3) +#define AU6601_RESET_CMD BIT(0) + +#define AU6601_OUTPUT_ENABLE 0x7a + +#define AU6601_PAD_DRIVE0 0x7b +#define AU6601_PAD_DRIVE1 0x7c +#define AU6601_PAD_DRIVE2 0x7d +/* read EEPROM? */ +#define AU6601_FUNCTION 0x7f + +#define AU6601_CMD_XFER_CTRL 0x81 +#define AU6601_CMD_17_BYTE_CRC 0xc0 +#define AU6601_CMD_6_BYTE_WO_CRC 0x80 +#define AU6601_CMD_6_BYTE_CRC 0x40 +#define AU6601_CMD_START_XFER 0x20 +#define AU6601_CMD_STOP_WAIT_RDY 0x10 +#define AU6601_CMD_NO_RESP 0x00 + +#define AU6601_REG_BUS_CTRL 0x82 +#define AU6601_BUS_WIDTH_4BIT 0x20 +#define AU6601_BUS_WIDTH_8BIT 0x10 +#define AU6601_BUS_WIDTH_1BIT 0x00 + +#define AU6601_DATA_XFER_CTRL 0x83 +#define AU6601_DATA_WRITE BIT(7) +#define AU6601_DATA_DMA_MODE BIT(6) +#define AU6601_DATA_START_XFER BIT(0) + +#define AU6601_DATA_PIN_STATE 0x84 +#define AU6601_BUS_STAT_CMD BIT(15) +/* BIT(4) - BIT(7) are permanently 1.
+ * May be reserved, or DAT4-DAT7 may not be attached. + */ +#define AU6601_BUS_STAT_DAT3 BIT(3) +#define AU6601_BUS_STAT_DAT2 BIT(2) +#define AU6601_BUS_STAT_DAT1 BIT(1) +#define AU6601_BUS_STAT_DAT0 BIT(0) +#define AU6601_BUS_STAT_DAT_MASK 0xf + +#define AU6601_OPT 0x85 +#define AU6601_OPT_CMD_LINE_LEVEL 0x80 +#define AU6601_OPT_NCRC_16_CLK BIT(4) +#define AU6601_OPT_CMD_NWT BIT(3) +#define AU6601_OPT_STOP_CLK BIT(2) +#define AU6601_OPT_DDR_MODE BIT(1) +#define AU6601_OPT_SD_18V BIT(0) + +#define AU6601_CLK_DELAY 0x86 +#define AU6601_CLK_DATA_POSITIVE_EDGE 0x80 +#define AU6601_CLK_CMD_POSITIVE_EDGE 0x40 +#define AU6601_CLK_POSITIVE_EDGE_ALL (AU6601_CLK_CMD_POSITIVE_EDGE \ + | AU6601_CLK_DATA_POSITIVE_EDGE) + + +#define AU6601_REG_INT_STATUS 0x90 +#define AU6601_REG_INT_ENABLE 0x94 +#define AU6601_INT_DATA_END_BIT_ERR BIT(22) +#define AU6601_INT_DATA_CRC_ERR BIT(21) +#define AU6601_INT_DATA_TIMEOUT_ERR BIT(20) +#define AU6601_INT_CMD_INDEX_ERR BIT(19) +#define AU6601_INT_CMD_END_BIT_ERR BIT(18) +#define AU6601_INT_CMD_CRC_ERR BIT(17) +#define AU6601_INT_CMD_TIMEOUT_ERR BIT(16) +#define AU6601_INT_ERROR BIT(15) +#define AU6601_INT_OVER_CURRENT_ERR BIT(8) +#define AU6601_INT_CARD_INSERT BIT(7) +#define AU6601_INT_CARD_REMOVE BIT(6) +#define AU6601_INT_READ_BUF_RDY BIT(5) +#define AU6601_INT_WRITE_BUF_RDY BIT(4) +#define AU6601_INT_DMA_END BIT(3) +#define AU6601_INT_DATA_END BIT(1) +#define AU6601_INT_CMD_END BIT(0) + +#define AU6601_INT_NORMAL_MASK 0x00007FFF +#define AU6601_INT_ERROR_MASK 0xFFFF8000 + +#define AU6601_INT_CMD_MASK (AU6601_INT_CMD_END | \ + AU6601_INT_CMD_TIMEOUT_ERR | AU6601_INT_CMD_CRC_ERR | \ + AU6601_INT_CMD_END_BIT_ERR | AU6601_INT_CMD_INDEX_ERR) +#define AU6601_INT_DATA_MASK (AU6601_INT_DATA_END | AU6601_INT_DMA_END | \ + AU6601_INT_READ_BUF_RDY | AU6601_INT_WRITE_BUF_RDY | \ + AU6601_INT_DATA_TIMEOUT_ERR | AU6601_INT_DATA_CRC_ERR | \ + AU6601_INT_DATA_END_BIT_ERR) +#define AU6601_INT_ALL_MASK ((u32)-1) + +/* MS_CARD mode registers */ + +#define AU6601_MS_STATUS 0xa0 + +#define AU6601_MS_BUS_MODE_CTRL 0xa1 +#define AU6601_MS_BUS_8BIT_MODE 0x03 +#define AU6601_MS_BUS_4BIT_MODE 0x01 +#define AU6601_MS_BUS_1BIT_MODE 0x00 + +#define AU6601_MS_TPC_CMD 0xa2 +#define AU6601_MS_TPC_READ_PAGE_DATA 0x02 +#define AU6601_MS_TPC_READ_REG 0x04 +#define AU6601_MS_TPC_GET_INT 0x07 +#define AU6601_MS_TPC_WRITE_PAGE_DATA 0x0D +#define AU6601_MS_TPC_WRITE_REG 0x0B +#define AU6601_MS_TPC_SET_RW_REG_ADRS 0x08 +#define AU6601_MS_TPC_SET_CMD 0x0E +#define AU6601_MS_TPC_EX_SET_CMD 0x09 +#define AU6601_MS_TPC_READ_SHORT_DATA 0x03 +#define AU6601_MS_TPC_WRITE_SHORT_DATA 0x0C + +#define AU6601_MS_TRANSFER_MODE 0xa3 +#define AU6601_MS_XFER_INT_TIMEOUT_CHK BIT(2) +#define AU6601_MS_XFER_DMA_ENABLE BIT(1) +#define AU6601_MS_XFER_START BIT(0) + +#define AU6601_MS_DATA_PIN_STATE 0xa4 + +#define AU6601_MS_INT_STATUS 0xb0 +#define AU6601_MS_INT_ENABLE 0xb4 +#define AU6601_MS_INT_OVER_CURRENT_ERROR BIT(23) +#define AU6601_MS_INT_DATA_CRC_ERROR BIT(21) +#define AU6601_MS_INT_INT_TIMEOUT BIT(20) +#define AU6601_MS_INT_INT_RESP_ERROR BIT(19) +#define AU6601_MS_INT_CED_ERROR BIT(18) +#define AU6601_MS_INT_TPC_TIMEOUT BIT(16) +#define AU6601_MS_INT_ERROR BIT(15) +#define AU6601_MS_INT_CARD_INSERT BIT(7) +#define AU6601_MS_INT_CARD_REMOVE BIT(6) +#define AU6601_MS_INT_BUF_READ_RDY BIT(5) +#define AU6601_MS_INT_BUF_WRITE_RDY BIT(4) +#define AU6601_MS_INT_DMA_END BIT(3) +#define AU6601_MS_INT_TPC_END BIT(1) + +#define AU6601_MS_INT_DATA_MASK 0x00000038 +#define AU6601_MS_INT_TPC_MASK 0x003d8002 +#define 
AU6601_MS_INT_TPC_ERROR 0x003d0000 + +#define ALCOR_PCIE_LINK_CTRL_OFFSET 0x10 +#define ALCOR_PCIE_LINK_CAP_OFFSET 0x0c +#define ALCOR_CAP_START_OFFSET 0x34 + +struct alcor_dev_cfg { + u8 dma; +}; + +struct alcor_pci_priv { + struct pci_dev *pdev; + struct pci_dev *parent_pdev; + struct device *dev; + void __iomem *iobase; + unsigned int irq; + + unsigned long id; /* idr id */ + + struct alcor_dev_cfg *cfg; + + /* PCI ASPM related vars */ + int pdev_cap_off; + u8 pdev_aspm_cap; + int parent_cap_off; + u8 parent_aspm_cap; + u8 ext_config_dev_aspm; +}; + +void alcor_write8(struct alcor_pci_priv *priv, u8 val, unsigned int addr); +void alcor_write16(struct alcor_pci_priv *priv, u16 val, unsigned int addr); +void alcor_write32(struct alcor_pci_priv *priv, u32 val, unsigned int addr); +void alcor_write32be(struct alcor_pci_priv *priv, u32 val, unsigned int addr); +u8 alcor_read8(struct alcor_pci_priv *priv, unsigned int addr); +u32 alcor_read32(struct alcor_pci_priv *priv, unsigned int addr); +u32 alcor_read32be(struct alcor_pci_priv *priv, unsigned int addr); +#endif diff --git a/include/linux/audit.h b/include/linux/audit.h index 9334fbef7bae..a625c29a2ea2 100644 --- a/include/linux/audit.h +++ b/include/linux/audit.h @@ -115,8 +115,6 @@ extern int audit_classify_compat_syscall(int abi, unsigned syscall); struct filename; -extern void audit_log_session_info(struct audit_buffer *ab); - #define AUDIT_OFF 0 #define AUDIT_ON 1 #define AUDIT_LOCKED 2 @@ -153,8 +151,7 @@ extern void audit_log_link_denied(const char *operation); extern void audit_log_lost(const char *message); extern int audit_log_task_context(struct audit_buffer *ab); -extern void audit_log_task_info(struct audit_buffer *ab, - struct task_struct *tsk); +extern void audit_log_task_info(struct audit_buffer *ab); extern int audit_update_lsm_rules(void); @@ -202,8 +199,7 @@ static inline int audit_log_task_context(struct audit_buffer *ab) { return 0; } -static inline void audit_log_task_info(struct audit_buffer *ab, - struct task_struct *tsk) +static inline void audit_log_task_info(struct audit_buffer *ab) { } #define audit_enabled AUDIT_OFF #endif /* CONFIG_AUDIT */ diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h index b2488055fd1d..7605b5919c3a 100644 --- a/include/linux/avf/virtchnl.h +++ b/include/linux/avf/virtchnl.h @@ -171,7 +171,7 @@ struct virtchnl_msg { VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg); -/* Message descriptions and data structures.*/ +/* Message descriptions and data structures. */ /* VIRTCHNL_OP_VERSION * VF posts its version number to the PF. PF responds with its version number @@ -342,6 +342,8 @@ struct virtchnl_vsi_queue_config_info { struct virtchnl_queue_pair_info qpair[1]; }; +VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info); + /* VIRTCHNL_OP_REQUEST_QUEUES * VF sends this message to request the PF to allocate additional queues to * this VF. Each VF gets a guaranteed number of queues on init but asking for @@ -357,8 +359,6 @@ struct virtchnl_vf_res_request { u16 num_queue_pairs; }; -VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info); - /* VIRTCHNL_OP_CONFIG_IRQ_MAP * VF uses this message to map vectors to queues. 
* The rxq_map and txq_map fields are bitmaps used to indicate which queues @@ -819,8 +819,8 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode, if (msglen >= valid_len) { struct virtchnl_tc_info *vti = (struct virtchnl_tc_info *)msg; - valid_len += vti->num_tc * - sizeof(struct virtchnl_channel_info); + valid_len += (vti->num_tc - 1) * + sizeof(struct virtchnl_channel_info); if (vti->num_tc == 0) err_msg_format = true; } diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h index 9a6bc0951cfa..c31157135598 100644 --- a/include/linux/backing-dev-defs.h +++ b/include/linux/backing-dev-defs.h @@ -258,6 +258,14 @@ static inline void wb_get(struct bdi_writeback *wb) */ static inline void wb_put(struct bdi_writeback *wb) { + if (WARN_ON_ONCE(!wb->bdi)) { + /* + * A driver bug might cause a file to be removed before bdi was + * initialized. + */ + return; + } + if (wb != &wb->bdi->wb) percpu_ref_put(&wb->refcnt); } diff --git a/include/linux/bio.h b/include/linux/bio.h index 056fb627edb3..7380b094dcca 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -491,35 +491,40 @@ do { \ bio_clear_flag(bio, BIO_THROTTLED);\ (bio)->bi_disk = (bdev)->bd_disk; \ (bio)->bi_partno = (bdev)->bd_partno; \ + bio_associate_blkg(bio); \ } while (0) #define bio_copy_dev(dst, src) \ do { \ (dst)->bi_disk = (src)->bi_disk; \ (dst)->bi_partno = (src)->bi_partno; \ + bio_clone_blkg_association(dst, src); \ } while (0) #define bio_dev(bio) \ disk_devt((bio)->bi_disk) #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP) -int bio_associate_blkcg_from_page(struct bio *bio, struct page *page); +void bio_associate_blkg_from_page(struct bio *bio, struct page *page); #else -static inline int bio_associate_blkcg_from_page(struct bio *bio, - struct page *page) { return 0; } +static inline void bio_associate_blkg_from_page(struct bio *bio, + struct page *page) { } #endif #ifdef CONFIG_BLK_CGROUP -int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css); -int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg); -void bio_disassociate_task(struct bio *bio); -void bio_clone_blkcg_association(struct bio *dst, struct bio *src); +void bio_disassociate_blkg(struct bio *bio); +void bio_associate_blkg(struct bio *bio); +void bio_associate_blkg_from_css(struct bio *bio, + struct cgroup_subsys_state *css); +void bio_clone_blkg_association(struct bio *dst, struct bio *src); #else /* CONFIG_BLK_CGROUP */ -static inline int bio_associate_blkcg(struct bio *bio, - struct cgroup_subsys_state *blkcg_css) { return 0; } -static inline void bio_disassociate_task(struct bio *bio) { } -static inline void bio_clone_blkcg_association(struct bio *dst, - struct bio *src) { } +static inline void bio_disassociate_blkg(struct bio *bio) { } +static inline void bio_associate_blkg(struct bio *bio) { } +static inline void bio_associate_blkg_from_css(struct bio *bio, + struct cgroup_subsys_state *css) +{ } +static inline void bio_clone_blkg_association(struct bio *dst, + struct bio *src) { } #endif /* CONFIG_BLK_CGROUP */ #ifdef CONFIG_HIGHMEM diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h index 6d766a19f2bb..f025fd1e22e6 100644 --- a/include/linux/blk-cgroup.h +++ b/include/linux/blk-cgroup.h @@ -21,6 +21,7 @@ #include #include #include +#include /* percpu_counter batch for blkg_[rw]stats, per-cpu drift doesn't matter */ #define BLKG_STAT_CPU_BATCH (INT_MAX / 2) @@ -122,11 +123,8 @@ struct blkcg_gq { /* all non-root blkcg_gq's are guaranteed to 
have access to parent */ struct blkcg_gq *parent; - /* request allocation list for this blkcg-q pair */ - struct request_list rl; - /* reference count */ - atomic_t refcnt; + struct percpu_ref refcnt; /* is this blkg online? protected by both blkcg and q locks */ bool online; @@ -184,6 +182,8 @@ extern struct cgroup_subsys_state * const blkcg_root_css; struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg, struct request_queue *q, bool update_hint); +struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg, + struct request_queue *q); struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg, struct request_queue *q); int blkcg_init_queue(struct request_queue *q); @@ -230,22 +230,62 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol, char *input, struct blkg_conf_ctx *ctx); void blkg_conf_finish(struct blkg_conf_ctx *ctx); +/** + * blkcg_css - find the current css + * + * Find the css associated with either the kthread or the current task. + * This may return a dying css, so it is up to the caller to use tryget logic + * to confirm it is alive and well. + */ +static inline struct cgroup_subsys_state *blkcg_css(void) +{ + struct cgroup_subsys_state *css; + + css = kthread_blkcg(); + if (css) + return css; + return task_css(current, io_cgrp_id); +} static inline struct blkcg *css_to_blkcg(struct cgroup_subsys_state *css) { return css ? container_of(css, struct blkcg, css) : NULL; } -static inline struct blkcg *bio_blkcg(struct bio *bio) +/** + * __bio_blkcg - internal, inconsistent version to get blkcg + * + * DO NOT USE. + * This function is inconsistent and consequently is dangerous to use. The + * first part of the function returns a blkcg where a reference is owned by the + * bio. This means it does not need to be rcu protected as it cannot go away + * with the bio owning a reference to it. However, the latter potentially gets + * it from task_css(). This can race against task migration and the cgroup + * dying. It is also semantically different as it must be called rcu protected + * and is susceptible to failure when trying to get a reference to it. + * Therefore, it is not ok to assume that *_get() will always succeed on the + * blkcg returned here. + */ +static inline struct blkcg *__bio_blkcg(struct bio *bio) { - struct cgroup_subsys_state *css; + if (bio && bio->bi_blkg) + return bio->bi_blkg->blkcg; + return css_to_blkcg(blkcg_css()); +} - if (bio && bio->bi_css) - return css_to_blkcg(bio->bi_css); - css = kthread_blkcg(); - if (css) - return css_to_blkcg(css); - return css_to_blkcg(task_css(current, io_cgrp_id)); +/** + * bio_blkcg - grab the blkcg associated with a bio + * @bio: target bio + * + * This returns the blkcg associated with a bio, %NULL if not associated. + * Callers are expected to either handle %NULL or know association has been + * done prior to calling this. + */ +static inline struct blkcg *bio_blkcg(struct bio *bio) +{ + if (bio && bio->bi_blkg) + return bio->bi_blkg->blkcg; + return NULL; } static inline bool blk_cgroup_congested(void) @@ -328,16 +368,12 @@ static inline struct blkcg_gq *__blkg_lookup(struct blkcg *blkcg, * @q: request_queue of interest * * Lookup blkg for the @blkcg - @q pair. This function should be called - * under RCU read lock and is guaranteed to return %NULL if @q is bypassing - * - see blk_queue_bypass_start() for details. + * under RCU read lock. */
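A minimal sketch of the lookup-and-pin pattern these helpers are built for (the wrapper name is hypothetical; blkg_lookup() is defined just below, and blkg_tryget()/blkg_put() later in this header):

static inline struct blkcg_gq *example_get_blkg(struct blkcg *blkcg,
						struct request_queue *q)
{
	struct blkcg_gq *blkg;

	rcu_read_lock();
	blkg = blkg_lookup(blkcg, q);
	/* the blkg may be dying; pin it before leaving the RCU section */
	if (blkg && !blkg_tryget(blkg))
		blkg = NULL;
	rcu_read_unlock();

	return blkg;	/* caller drops the reference with blkg_put() */
}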
static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg, struct request_queue *q) { WARN_ON_ONCE(!rcu_read_lock_held()); - - if (unlikely(blk_queue_bypass(q))) - return NULL; return __blkg_lookup(blkcg, q, false); } @@ -451,26 +487,35 @@ static inline int blkg_path(struct blkcg_gq *blkg, char *buf, int buflen) */ static inline void blkg_get(struct blkcg_gq *blkg) { - WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0); - atomic_inc(&blkg->refcnt); + percpu_ref_get(&blkg->refcnt); } /** - * blkg_try_get - try and get a blkg reference + * blkg_tryget - try and get a blkg reference * @blkg: blkg to get * * This is for use when doing an RCU lookup of the blkg. We may be in the midst * of freeing this blkg, so we can only use it if the refcnt is not zero. */ -static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg) +static inline bool blkg_tryget(struct blkcg_gq *blkg) { - if (atomic_inc_not_zero(&blkg->refcnt)) - return blkg; - return NULL; + return percpu_ref_tryget(&blkg->refcnt); } +/** + * blkg_tryget_closest - try and get a blkg ref on the closest blkg + * @blkg: blkg to get + * + * This walks up the blkg tree to find the closest non-dying blkg and returns + * the blkg it associated with, as it may not be the blkg that was passed in. + */ +static inline struct blkcg_gq *blkg_tryget_closest(struct blkcg_gq *blkg) +{ + while (blkg && !percpu_ref_tryget(&blkg->refcnt)) + blkg = blkg->parent; -void __blkg_release_rcu(struct rcu_head *rcu); + return blkg; +} /** * blkg_put - put a blkg reference @@ -478,9 +523,7 @@ void __blkg_release_rcu(struct rcu_head *rcu); */ static inline void blkg_put(struct blkcg_gq *blkg) { - WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0); - if (atomic_dec_and_test(&blkg->refcnt)) - call_rcu(&blkg->rcu_head, __blkg_release_rcu); + percpu_ref_put(&blkg->refcnt); } /** @@ -515,94 +558,6 @@ static inline void blkg_put(struct blkcg_gq *blkg) if (((d_blkg) = __blkg_lookup(css_to_blkcg(pos_css), \ (p_blkg)->q, false))) -/** - * blk_get_rl - get request_list to use - * @q: request_queue of interest - * @bio: bio which will be attached to the allocated request (may be %NULL) - * - * The caller wants to allocate a request from @q to use for @bio. Find - * the request_list to use and obtain a reference on it. Should be called - * under queue_lock. This function is guaranteed to return non-%NULL - * request_list. - */ -static inline struct request_list *blk_get_rl(struct request_queue *q, - struct bio *bio) -{ - struct blkcg *blkcg; - struct blkcg_gq *blkg; - - rcu_read_lock(); - - blkcg = bio_blkcg(bio); - - /* bypass blkg lookup and use @q->root_rl directly for root */ - if (blkcg == &blkcg_root) - goto root_rl; - - /* - * Try to use blkg->rl. blkg lookup may fail under memory pressure - * or if either the blkcg or queue is going away. Fall back to - * root_rl in such cases. - */ - blkg = blkg_lookup(blkcg, q); - if (unlikely(!blkg)) - goto root_rl; - - blkg_get(blkg); - rcu_read_unlock(); - return &blkg->rl; -root_rl: - rcu_read_unlock(); - return &q->root_rl; -} - -/** - * blk_put_rl - put request_list - * @rl: request_list to put - * - * Put the reference acquired by blk_get_rl(). Should be called under - * queue_lock. 
- */ -static inline void blk_put_rl(struct request_list *rl) -{ - if (rl->blkg->blkcg != &blkcg_root) - blkg_put(rl->blkg); -} - -/** - * blk_rq_set_rl - associate a request with a request_list - * @rq: request of interest - * @rl: target request_list - * - * Associate @rq with @rl so that accounting and freeing can know the - * request_list @rq came from. - */ -static inline void blk_rq_set_rl(struct request *rq, struct request_list *rl) -{ - rq->rl = rl; -} - -/** - * blk_rq_rl - return the request_list a request came from - * @rq: request of interest - * - * Return the request_list @rq is allocated from. - */ -static inline struct request_list *blk_rq_rl(struct request *rq) -{ - return rq->rl; -} - -struct request_list *__blk_queue_next_rl(struct request_list *rl, - struct request_queue *q); -/** - * blk_queue_for_each_rl - iterate through all request_lists of a request_queue - * - * Should be used under queue_lock. - */ -#define blk_queue_for_each_rl(rl, q) \ - for ((rl) = &(q)->root_rl; (rl); (rl) = __blk_queue_next_rl((rl), (q))) - static inline int blkg_stat_init(struct blkg_stat *stat, gfp_t gfp) { int ret; @@ -797,32 +752,34 @@ static inline bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg struct bio *bio) { return false; } #endif + +static inline void blkcg_bio_issue_init(struct bio *bio) +{ + bio_issue_init(&bio->bi_issue, bio_sectors(bio)); +} + static inline bool blkcg_bio_issue_check(struct request_queue *q, struct bio *bio) { - struct blkcg *blkcg; struct blkcg_gq *blkg; bool throtl = false; rcu_read_lock(); - blkcg = bio_blkcg(bio); - - /* associate blkcg if bio hasn't attached one */ - bio_associate_blkcg(bio, &blkcg->css); - - blkg = blkg_lookup(blkcg, q); - if (unlikely(!blkg)) { - spin_lock_irq(q->queue_lock); - blkg = blkg_lookup_create(blkcg, q); - if (IS_ERR(blkg)) - blkg = NULL; - spin_unlock_irq(q->queue_lock); + + if (!bio->bi_blkg) { + char b[BDEVNAME_SIZE]; + + WARN_ONCE(1, + "no blkg associated for bio on block-device: %s\n", + bio_devname(bio, b)); + bio_associate_blkg(bio); } + blkg = bio->bi_blkg; + throtl = blk_throtl_bio(q, blkg, bio); if (!throtl) { - blkg = blkg ?: q->root_blkg; /* * If the bio is flagged with BIO_QUEUE_ENTERED it means this * is a split bio and we would have already accounted for the @@ -834,6 +791,8 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q, blkg_rwstat_add(&blkg->stat_ios, bio->bi_opf, 1); } + blkcg_bio_issue_init(bio); + rcu_read_unlock(); return !throtl; } @@ -930,6 +889,7 @@ static inline int blkcg_activate_policy(struct request_queue *q, static inline void blkcg_deactivate_policy(struct request_queue *q, const struct blkcg_policy *pol) { } +static inline struct blkcg *__bio_blkcg(struct bio *bio) { return NULL; } static inline struct blkcg *bio_blkcg(struct bio *bio) { return NULL; } static inline struct blkg_policy_data *blkg_to_pd(struct blkcg_gq *blkg, @@ -939,12 +899,7 @@ static inline char *blkg_path(struct blkcg_gq *blkg) { return NULL; } static inline void blkg_get(struct blkcg_gq *blkg) { } static inline void blkg_put(struct blkcg_gq *blkg) { } -static inline struct request_list *blk_get_rl(struct request_queue *q, - struct bio *bio) { return &q->root_rl; } -static inline void blk_put_rl(struct request_list *rl) { } -static inline void blk_rq_set_rl(struct request *rq, struct request_list *rl) { } -static inline struct request_list *blk_rq_rl(struct request *rq) { return &rq->q->root_rl; } - +static inline void blkcg_bio_issue_init(struct bio *bio) { } static inline bool 
blkcg_bio_issue_check(struct request_queue *q, struct bio *bio) { return true; } diff --git a/include/linux/blk-mq-pci.h b/include/linux/blk-mq-pci.h index 9f4c17f0d2d8..0b1f45c62623 100644 --- a/include/linux/blk-mq-pci.h +++ b/include/linux/blk-mq-pci.h @@ -2,10 +2,10 @@ #ifndef _LINUX_BLK_MQ_PCI_H #define _LINUX_BLK_MQ_PCI_H -struct blk_mq_tag_set; +struct blk_mq_queue_map; struct pci_dev; -int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev, +int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev, int offset); #endif /* _LINUX_BLK_MQ_PCI_H */ diff --git a/include/linux/blk-mq-rdma.h b/include/linux/blk-mq-rdma.h index b4ade198007d..7b6ecf9ac4c3 100644 --- a/include/linux/blk-mq-rdma.h +++ b/include/linux/blk-mq-rdma.h @@ -4,7 +4,7 @@ struct blk_mq_tag_set; struct ib_device; -int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set, +int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map, struct ib_device *dev, int first_vec); #endif /* _LINUX_BLK_MQ_RDMA_H */ diff --git a/include/linux/blk-mq-virtio.h b/include/linux/blk-mq-virtio.h index 69b4da262c45..687ae287e1dc 100644 --- a/include/linux/blk-mq-virtio.h +++ b/include/linux/blk-mq-virtio.h @@ -2,10 +2,10 @@ #ifndef _LINUX_BLK_MQ_VIRTIO_H #define _LINUX_BLK_MQ_VIRTIO_H -struct blk_mq_tag_set; +struct blk_mq_queue_map; struct virtio_device; -int blk_mq_virtio_map_queues(struct blk_mq_tag_set *set, +int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap, struct virtio_device *vdev, int first_vec); #endif /* _LINUX_BLK_MQ_VIRTIO_H */ diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 2286dc12c6bc..0e030f5f76b6 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -37,7 +37,8 @@ struct blk_mq_hw_ctx { struct blk_mq_ctx *dispatch_from; unsigned int dispatch_busy; - unsigned int nr_ctx; + unsigned short type; + unsigned short nr_ctx; struct blk_mq_ctx **ctxs; spinlock_t dispatch_wait_lock; @@ -74,10 +75,31 @@ struct blk_mq_hw_ctx { struct srcu_struct srcu[0]; }; +struct blk_mq_queue_map { + unsigned int *mq_map; + unsigned int nr_queues; + unsigned int queue_offset; +}; + +enum hctx_type { + HCTX_TYPE_DEFAULT, /* all I/O not otherwise accounted for */ + HCTX_TYPE_READ, /* just for READ I/O */ + HCTX_TYPE_POLL, /* polled I/O of any kind */ + + HCTX_MAX_TYPES, +}; + struct blk_mq_tag_set { - unsigned int *mq_map; + /* + * map[] holds ctx -> hctx mappings, one map exists for each type + * that the driver wishes to support. There are no restrictions + * on maps being of the same size, and it's perfectly legal to + * share maps between types. 
+ */ + struct blk_mq_queue_map map[HCTX_MAX_TYPES]; + unsigned int nr_maps; /* nr entries in map[] */ const struct blk_mq_ops *ops; - unsigned int nr_hw_queues; + unsigned int nr_hw_queues; /* nr hw queues across maps */ unsigned int queue_depth; /* max hw supported */ unsigned int reserved_tags; unsigned int cmd_size; /* per-request extra data */ @@ -99,6 +121,7 @@ struct blk_mq_queue_data { typedef blk_status_t (queue_rq_fn)(struct blk_mq_hw_ctx *, const struct blk_mq_queue_data *); +typedef void (commit_rqs_fn)(struct blk_mq_hw_ctx *); typedef bool (get_budget_fn)(struct blk_mq_hw_ctx *); typedef void (put_budget_fn)(struct blk_mq_hw_ctx *); typedef enum blk_eh_timer_return (timeout_fn)(struct request *, bool); @@ -109,11 +132,13 @@ typedef int (init_request_fn)(struct blk_mq_tag_set *set, struct request *, typedef void (exit_request_fn)(struct blk_mq_tag_set *set, struct request *, unsigned int); -typedef void (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *, +typedef bool (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *, bool); -typedef void (busy_tag_iter_fn)(struct request *, void *, bool); -typedef int (poll_fn)(struct blk_mq_hw_ctx *, unsigned int); +typedef bool (busy_tag_iter_fn)(struct request *, void *, bool); +typedef int (poll_fn)(struct blk_mq_hw_ctx *); typedef int (map_queues_fn)(struct blk_mq_tag_set *set); +typedef bool (busy_fn)(struct request_queue *); +typedef void (complete_fn)(struct request *); struct blk_mq_ops { @@ -122,6 +147,15 @@ struct blk_mq_ops { */ queue_rq_fn *queue_rq; + /* + * If a driver uses bd->last to judge when to submit requests to + * hardware, it must define this function. In case of errors that + * make us stop issuing further requests, this hook serves the + * purpose of kicking the hardware (which the last request otherwise + * would have done). 
+ */ + commit_rqs_fn *commit_rqs; + /* * Reserve budget before queue request, once .queue_rq is * run, it is driver's responsibility to release the @@ -141,7 +175,7 @@ struct blk_mq_ops { */ poll_fn *poll; - softirq_done_fn *complete; + complete_fn *complete; /* * Called when the block layer side of a hardware queue has been @@ -165,6 +199,11 @@ struct blk_mq_ops { /* Called from inside blk_get_request() */ void (*initialize_rq_fn)(struct request *rq); + /* + * If set, returns whether or not this queue currently is busy + */ + busy_fn *busy; + map_queues_fn *map_queues; #ifdef CONFIG_BLK_DEBUG_FS @@ -218,6 +257,8 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule); void blk_mq_free_request(struct request *rq); bool blk_mq_can_queue(struct blk_mq_hw_ctx *); +bool blk_mq_queue_inflight(struct request_queue *q); + enum { /* return when out of requests */ BLK_MQ_REQ_NOWAIT = (__force blk_mq_req_flags_t)(1 << 0), @@ -264,7 +305,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head, bool kick_requeue_list); void blk_mq_kick_requeue_list(struct request_queue *q); void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs); -void blk_mq_complete_request(struct request *rq); +bool blk_mq_complete_request(struct request *rq); bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list, struct bio *bio); bool blk_mq_queue_stopped(struct request_queue *q); @@ -288,24 +329,12 @@ void blk_mq_freeze_queue_wait(struct request_queue *q); int blk_mq_freeze_queue_wait_timeout(struct request_queue *q, unsigned long timeout); -int blk_mq_map_queues(struct blk_mq_tag_set *set); +int blk_mq_map_queues(struct blk_mq_queue_map *qmap); void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues); void blk_mq_quiesce_queue_nowait(struct request_queue *q); -/** - * blk_mq_mark_complete() - Set request state to complete - * @rq: request to set to complete state - * - * Returns true if request state was successfully set to complete. If - * successful, the caller is responsibile for seeing this request is ended, as - * blk_mq_complete_request will not work again. - */ -static inline bool blk_mq_mark_complete(struct request *rq) -{ - return cmpxchg(&rq->state, MQ_RQ_IN_FLIGHT, MQ_RQ_COMPLETE) == - MQ_RQ_IN_FLIGHT; -} +unsigned int blk_mq_rq_cpu(struct request *rq); /* * Driver command data is immediately after the request. So subtract request @@ -328,4 +357,14 @@ static inline void *blk_mq_rq_to_pdu(struct request *rq) for ((i) = 0; (i) < (hctx)->nr_ctx && \ ({ ctx = (hctx)->ctxs[(i)]; 1; }); (i)++) +static inline blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, + struct request *rq) +{ + if (rq->tag != -1) + return rq->tag | (hctx->queue_num << BLK_QC_T_SHIFT); + + return rq->internal_tag | (hctx->queue_num << BLK_QC_T_SHIFT) | + BLK_QC_T_INTERNAL; +} + #endif diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index 1dcf652ba0aa..5c7e7f859a24 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -174,11 +174,11 @@ struct bio { void *bi_private; #ifdef CONFIG_BLK_CGROUP /* - * Optional ioc and css associated with this bio. Put on bio - * release. Read comment on top of bio_associate_current(). + * Represents the association of the css and request_queue for the bio. + * If a bio goes direct to device, it will not have a blkg as it will + * not have a request_queue associated with it. The reference is put + * on release of the bio. 
*/ - struct io_context *bi_ioc; - struct cgroup_subsys_state *bi_css; struct blkcg_gq *bi_blkg; struct bio_issue bi_issue; #endif @@ -228,6 +228,7 @@ struct bio { #define BIO_TRACE_COMPLETION 10 /* bio_endio() should trace the final completion * of this bio. */ #define BIO_QUEUE_ENTERED 11 /* can use blk_queue_enter_live() */ +#define BIO_TRACKED 12 /* set if bio goes through the rq_qos path */ /* See BVEC_POOL_OFFSET below before adding new flags */ @@ -323,6 +324,8 @@ enum req_flag_bits { /* command specific flags for REQ_OP_WRITE_ZEROES: */ __REQ_NOUNMAP, /* do not free blocks when zeroing */ + __REQ_HIPRI, + /* for driver use */ __REQ_DRV, __REQ_SWAP, /* swapping request. */ @@ -343,8 +346,8 @@ enum req_flag_bits { #define REQ_RAHEAD (1ULL << __REQ_RAHEAD) #define REQ_BACKGROUND (1ULL << __REQ_BACKGROUND) #define REQ_NOWAIT (1ULL << __REQ_NOWAIT) - #define REQ_NOUNMAP (1ULL << __REQ_NOUNMAP) +#define REQ_HIPRI (1ULL << __REQ_HIPRI) #define REQ_DRV (1ULL << __REQ_DRV) #define REQ_SWAP (1ULL << __REQ_SWAP) @@ -422,17 +425,6 @@ static inline bool blk_qc_t_valid(blk_qc_t cookie) return cookie != BLK_QC_T_NONE; } -static inline blk_qc_t blk_tag_to_qc_t(unsigned int tag, unsigned int queue_num, - bool internal) -{ - blk_qc_t ret = tag | (queue_num << BLK_QC_T_SHIFT); - - if (internal) - ret |= BLK_QC_T_INTERNAL; - - return ret; -} - static inline unsigned int blk_qc_t_to_queue_num(blk_qc_t cookie) { return (cookie & ~BLK_QC_T_INTERNAL) >> BLK_QC_T_SHIFT; diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 4293dc1cd160..338604dff7d0 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -58,25 +58,6 @@ struct blk_stat_callback; typedef void (rq_end_io_fn)(struct request *, blk_status_t); -#define BLK_RL_SYNCFULL (1U << 0) -#define BLK_RL_ASYNCFULL (1U << 1) - -struct request_list { - struct request_queue *q; /* the queue this rl belongs to */ -#ifdef CONFIG_BLK_CGROUP - struct blkcg_gq *blkg; /* blkg this request pool belongs to */ -#endif - /* - * count[], starved[], and wait[] are indexed by - * BLK_RW_SYNC/BLK_RW_ASYNC - */ - int count[2]; - int starved[2]; - mempool_t *rq_pool; - wait_queue_head_t wait[2]; - unsigned int flags; -}; - /* * request flags */ typedef __u32 __bitwise req_flags_t; @@ -85,8 +66,6 @@ typedef __u32 __bitwise req_flags_t; #define RQF_SORTED ((__force req_flags_t)(1 << 0)) /* drive already may have started this one */ #define RQF_STARTED ((__force req_flags_t)(1 << 1)) -/* uses tagged queueing */ -#define RQF_QUEUED ((__force req_flags_t)(1 << 2)) /* may not be passed by ioscheduler */ #define RQF_SOFTBARRIER ((__force req_flags_t)(1 << 3)) /* request for flush sequence */ @@ -150,8 +129,8 @@ enum mq_rq_state { struct request { struct request_queue *q; struct blk_mq_ctx *mq_ctx; + struct blk_mq_hw_ctx *mq_hctx; - int cpu; unsigned int cmd_flags; /* op and common flags */ req_flags_t rq_flags; @@ -245,11 +224,7 @@ struct request { refcount_t ref; unsigned int timeout; - - /* access through blk_rq_set_deadline, blk_rq_deadline */ - unsigned long __deadline; - - struct list_head timeout_list; + unsigned long deadline; union { struct __call_single_data csd; @@ -264,10 +239,6 @@ struct request { /* for bidi */ struct request *next_rq; - -#ifdef CONFIG_BLK_CGROUP - struct request_list *rl; /* rl this rq is alloced from */ -#endif }; static inline bool blk_op_is_scsi(unsigned int op) @@ -311,41 +282,21 @@ static inline unsigned short req_get_ioprio(struct request *req) struct blk_queue_ctx; -typedef void (request_fn_proc) (struct request_queue 
*q); typedef blk_qc_t (make_request_fn) (struct request_queue *q, struct bio *bio); -typedef bool (poll_q_fn) (struct request_queue *q, blk_qc_t); -typedef int (prep_rq_fn) (struct request_queue *, struct request *); -typedef void (unprep_rq_fn) (struct request_queue *, struct request *); struct bio_vec; -typedef void (softirq_done_fn)(struct request *); typedef int (dma_drain_needed_fn)(struct request *); -typedef int (lld_busy_fn) (struct request_queue *q); -typedef int (bsg_job_fn) (struct bsg_job *); -typedef int (init_rq_fn)(struct request_queue *, struct request *, gfp_t); -typedef void (exit_rq_fn)(struct request_queue *, struct request *); enum blk_eh_timer_return { BLK_EH_DONE, /* drivers has completed the command */ BLK_EH_RESET_TIMER, /* reset timer and try again */ }; -typedef enum blk_eh_timer_return (rq_timed_out_fn)(struct request *); - enum blk_queue_state { Queue_down, Queue_up, }; -struct blk_queue_tag { - struct request **tag_index; /* map of busy tags */ - unsigned long *tag_map; /* bit map of free/busy tags */ - int max_depth; /* what we will send to device */ - int real_max_depth; /* what the array can hold */ - atomic_t refcnt; /* map can be shared */ - int alloc_policy; /* tag allocation policy */ - int next_tag; /* next tag */ -}; #define BLK_TAG_ALLOC_FIFO 0 /* allocate starting from 0 */ #define BLK_TAG_ALLOC_RR 1 /* allocate starting from last allocated tag */ @@ -389,7 +340,6 @@ struct queue_limits { unsigned char misaligned; unsigned char discard_misaligned; - unsigned char cluster; unsigned char raid_partial_stripes_expensive; enum blk_zoned_model zoned; }; @@ -444,40 +394,15 @@ struct request_queue { struct list_head queue_head; struct request *last_merge; struct elevator_queue *elevator; - int nr_rqs[2]; /* # allocated [a]sync rqs */ - int nr_rqs_elvpriv; /* # allocated rqs w/ elvpriv */ struct blk_queue_stats *stats; struct rq_qos *rq_qos; - /* - * If blkcg is not used, @q->root_rl serves all requests. If blkcg - * is used, root blkg allocates from @q->root_rl and all other - * blkgs from their own blkg->rl. Which one to use should be - * determined using bio_request_list(). - */ - struct request_list root_rl; - - request_fn_proc *request_fn; make_request_fn *make_request_fn; - poll_q_fn *poll_fn; - prep_rq_fn *prep_rq_fn; - unprep_rq_fn *unprep_rq_fn; - softirq_done_fn *softirq_done_fn; - rq_timed_out_fn *rq_timed_out_fn; dma_drain_needed_fn *dma_drain_needed; - lld_busy_fn *lld_busy_fn; - /* Called just after a request is allocated */ - init_rq_fn *init_rq_fn; - /* Called just before a request is freed */ - exit_rq_fn *exit_rq_fn; - /* Called from inside blk_get_request() */ - void (*initialize_rq_fn)(struct request *rq); const struct blk_mq_ops *mq_ops; - unsigned int *mq_map; - /* sw queues */ struct blk_mq_ctx __percpu *queue_ctx; unsigned int nr_queues; @@ -488,17 +413,6 @@ struct request_queue { struct blk_mq_hw_ctx **queue_hw_ctx; unsigned int nr_hw_queues; - /* - * Dispatch queue sorting - */ - sector_t end_sector; - struct request *boundary_rq; - - /* - * Delayed queue handling - */ - struct delayed_work delay_work; - struct backing_dev_info *backing_dev_info; /* @@ -529,13 +443,7 @@ struct request_queue { */ gfp_t bounce_gfp; - /* - * protects queue structures from reentrancy. ->__queue_lock should - * _never_ be used directly, it is queue private. always use - * ->queue_lock. 
- */ - spinlock_t __queue_lock; - spinlock_t *queue_lock; + spinlock_t queue_lock; /* * queue kobject @@ -545,7 +453,7 @@ struct request_queue { /* * mq queue kobject */ - struct kobject mq_kobj; + struct kobject *mq_kobj; #ifdef CONFIG_BLK_DEV_INTEGRITY struct blk_integrity integrity; @@ -561,27 +469,12 @@ struct request_queue { * queue settings */ unsigned long nr_requests; /* Max # of requests */ - unsigned int nr_congestion_on; - unsigned int nr_congestion_off; - unsigned int nr_batching; unsigned int dma_drain_size; void *dma_drain_buffer; unsigned int dma_pad_mask; unsigned int dma_alignment; - struct blk_queue_tag *queue_tags; - - unsigned int nr_sorted; - unsigned int in_flight[2]; - - /* - * Number of active block driver functions for which blk_drain_queue() - * must wait. Must be incremented around functions that unlock the - * queue_lock internally, e.g. scsi_request_fn(). - */ - unsigned int request_fn_active; - unsigned int rq_timeout; int poll_nsec; @@ -590,7 +483,6 @@ struct request_queue { struct timer_list timeout; struct work_struct timeout_work; - struct list_head timeout_list; struct list_head icq_list; #ifdef CONFIG_BLK_CGROUP @@ -645,11 +537,9 @@ struct request_queue { struct mutex sysfs_lock; - int bypass_depth; atomic_t mq_freeze_depth; #if defined(CONFIG_BLK_DEV_BSG) - bsg_job_fn *bsg_job_fn; struct bsg_class_device bsg_dev; #endif @@ -669,12 +559,12 @@ struct request_queue { #ifdef CONFIG_BLK_DEBUG_FS struct dentry *debugfs_dir; struct dentry *sched_debugfs_dir; + struct dentry *rqos_debugfs_dir; #endif bool mq_sysfs_init_done; size_t cmd_size; - void *rq_alloc_data; struct work_struct release_work; @@ -682,10 +572,8 @@ struct request_queue { u64 write_hints[BLK_MAX_WRITE_HINTS]; }; -#define QUEUE_FLAG_QUEUED 0 /* uses generic tag queueing */ #define QUEUE_FLAG_STOPPED 1 /* queue is stopped */ #define QUEUE_FLAG_DYING 2 /* queue being torn down */ -#define QUEUE_FLAG_BYPASS 3 /* act as dumb FIFO queue */ #define QUEUE_FLAG_BIDI 4 /* queue supports bidi requests */ #define QUEUE_FLAG_NOMERGES 5 /* disable merge attempts */ #define QUEUE_FLAG_SAME_COMP 6 /* complete on same CPU-group */ @@ -718,19 +606,15 @@ struct request_queue { (1 << QUEUE_FLAG_ADD_RANDOM)) #define QUEUE_FLAG_MQ_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ - (1 << QUEUE_FLAG_SAME_COMP) | \ - (1 << QUEUE_FLAG_POLL)) + (1 << QUEUE_FLAG_SAME_COMP)) void blk_queue_flag_set(unsigned int flag, struct request_queue *q); void blk_queue_flag_clear(unsigned int flag, struct request_queue *q); bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q); -bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q); -#define blk_queue_tagged(q) test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags) #define blk_queue_stopped(q) test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags) #define blk_queue_dying(q) test_bit(QUEUE_FLAG_DYING, &(q)->queue_flags) #define blk_queue_dead(q) test_bit(QUEUE_FLAG_DEAD, &(q)->queue_flags) -#define blk_queue_bypass(q) test_bit(QUEUE_FLAG_BYPASS, &(q)->queue_flags) #define blk_queue_init_done(q) test_bit(QUEUE_FLAG_INIT_DONE, &(q)->queue_flags) #define blk_queue_nomerges(q) test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags) #define blk_queue_noxmerges(q) \ @@ -757,37 +641,20 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q); extern void blk_set_pm_only(struct request_queue *q); extern void blk_clear_pm_only(struct request_queue *q); -static inline int queue_in_flight(struct request_queue *q) -{ - return q->in_flight[0] + 
q->in_flight[1]; -} - static inline bool blk_account_rq(struct request *rq) { return (rq->rq_flags & RQF_STARTED) && !blk_rq_is_passthrough(rq); } -#define blk_rq_cpu_valid(rq) ((rq)->cpu != -1) #define blk_bidi_rq(rq) ((rq)->next_rq != NULL) -/* rq->queuelist of dequeued request must be list_empty() */ -#define blk_queued_rq(rq) (!list_empty(&(rq)->queuelist)) #define list_entry_rq(ptr) list_entry((ptr), struct request, queuelist) #define rq_data_dir(rq) (op_is_write(req_op(rq)) ? WRITE : READ) -/* - * Driver can handle struct request, if it either has an old style - * request_fn defined, or is blk-mq based. - */ -static inline bool queue_is_rq_based(struct request_queue *q) -{ - return q->request_fn || q->mq_ops; -} - -static inline unsigned int blk_queue_cluster(struct request_queue *q) +static inline bool queue_is_mq(struct request_queue *q) { - return q->limits.cluster; + return q->mq_ops; } static inline enum blk_zoned_model @@ -845,27 +712,6 @@ static inline bool rq_is_sync(struct request *rq) return op_is_sync(rq->cmd_flags); } -static inline bool blk_rl_full(struct request_list *rl, bool sync) -{ - unsigned int flag = sync ? BLK_RL_SYNCFULL : BLK_RL_ASYNCFULL; - - return rl->flags & flag; -} - -static inline void blk_set_rl_full(struct request_list *rl, bool sync) -{ - unsigned int flag = sync ? BLK_RL_SYNCFULL : BLK_RL_ASYNCFULL; - - rl->flags |= flag; -} - -static inline void blk_clear_rl_full(struct request_list *rl, bool sync) -{ - unsigned int flag = sync ? BLK_RL_SYNCFULL : BLK_RL_ASYNCFULL; - - rl->flags &= ~flag; -} - static inline bool rq_mergeable(struct request *rq) { if (blk_rq_is_passthrough(rq)) @@ -902,16 +748,6 @@ static inline unsigned int blk_queue_depth(struct request_queue *q) return q->nr_requests; } -/* - * q->prep_rq_fn return values - */ -enum { - BLKPREP_OK, /* serve it */ - BLKPREP_KILL, /* fatal error, kill, return -EIO */ - BLKPREP_DEFER, /* leave on queue */ - BLKPREP_INVALID, /* invalid command, kill, return -EREMOTEIO */ -}; - extern unsigned long blk_max_low_pfn, blk_max_pfn; /* @@ -983,10 +819,8 @@ extern blk_qc_t direct_make_request(struct bio *bio); extern void blk_rq_init(struct request_queue *q, struct request *rq); extern void blk_init_request_from_bio(struct request *req, struct bio *bio); extern void blk_put_request(struct request *); -extern void __blk_put_request(struct request_queue *, struct request *); extern struct request *blk_get_request(struct request_queue *, unsigned int op, blk_mq_req_flags_t flags); -extern void blk_requeue_request(struct request_queue *, struct request *); extern int blk_lld_busy(struct request_queue *q); extern int blk_rq_prep_clone(struct request *rq, struct request *rq_src, struct bio_set *bs, gfp_t gfp_mask, @@ -996,7 +830,6 @@ extern void blk_rq_unprep_clone(struct request *rq); extern blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq); extern int blk_rq_append_bio(struct request *rq, struct bio **bio); -extern void blk_delay_queue(struct request_queue *, unsigned long); extern void blk_queue_split(struct request_queue *, struct bio **); extern void blk_recount_segments(struct request_queue *, struct bio *); extern int scsi_verify_blk_ioctl(struct block_device *, unsigned int); @@ -1009,15 +842,7 @@ extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t, extern int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags); extern void blk_queue_exit(struct request_queue *q); -extern void blk_start_queue(struct request_queue *q); -extern void 
blk_start_queue_async(struct request_queue *q); -extern void blk_stop_queue(struct request_queue *q); extern void blk_sync_queue(struct request_queue *q); -extern void __blk_stop_queue(struct request_queue *q); -extern void __blk_run_queue(struct request_queue *q); -extern void __blk_run_queue_uncond(struct request_queue *q); -extern void blk_run_queue(struct request_queue *); -extern void blk_run_queue_async(struct request_queue *q); extern int blk_rq_map_user(struct request_queue *, struct request *, struct rq_map_data *, void __user *, unsigned long, gfp_t); @@ -1034,7 +859,7 @@ extern void blk_execute_rq_nowait(struct request_queue *, struct gendisk *, int blk_status_to_errno(blk_status_t status); blk_status_t errno_to_blk_status(int errno); -bool blk_poll(struct request_queue *q, blk_qc_t cookie); +int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin); static inline struct request_queue *bdev_get_queue(struct block_device *bdev) { @@ -1172,13 +997,6 @@ static inline unsigned int blk_rq_count_bios(struct request *rq) return nr_bios; } -/* - * Request issue related functions. - */ -extern struct request *blk_peek_request(struct request_queue *q); -extern void blk_start_request(struct request *rq); -extern struct request *blk_fetch_request(struct request_queue *q); - void blk_steal_bios(struct bio_list *list, struct request *rq); /* @@ -1196,27 +1014,18 @@ void blk_steal_bios(struct bio_list *list, struct request *rq); */ extern bool blk_update_request(struct request *rq, blk_status_t error, unsigned int nr_bytes); -extern void blk_finish_request(struct request *rq, blk_status_t error); -extern bool blk_end_request(struct request *rq, blk_status_t error, - unsigned int nr_bytes); extern void blk_end_request_all(struct request *rq, blk_status_t error); extern bool __blk_end_request(struct request *rq, blk_status_t error, unsigned int nr_bytes); extern void __blk_end_request_all(struct request *rq, blk_status_t error); extern bool __blk_end_request_cur(struct request *rq, blk_status_t error); -extern void blk_complete_request(struct request *); extern void __blk_complete_request(struct request *); extern void blk_abort_request(struct request *); -extern void blk_unprep_request(struct request *); /* * Access functions for manipulating queue properties */ -extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn, - spinlock_t *lock, int node_id); -extern struct request_queue *blk_init_queue(request_fn_proc *, spinlock_t *); -extern int blk_init_allocated_queue(struct request_queue *); extern void blk_cleanup_queue(struct request_queue *); extern void blk_queue_make_request(struct request_queue *, make_request_fn *); extern void blk_queue_bounce_limit(struct request_queue *, u64); @@ -1255,15 +1064,10 @@ extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int); extern int blk_queue_dma_drain(struct request_queue *q, dma_drain_needed_fn *dma_drain_needed, void *buf, unsigned int size); -extern void blk_queue_lld_busy(struct request_queue *q, lld_busy_fn *fn); extern void blk_queue_segment_boundary(struct request_queue *, unsigned long); extern void blk_queue_virt_boundary(struct request_queue *, unsigned long); -extern void blk_queue_prep_rq(struct request_queue *, prep_rq_fn *pfn); -extern void blk_queue_unprep_rq(struct request_queue *, unprep_rq_fn *ufn); extern void blk_queue_dma_alignment(struct request_queue *, int); extern void blk_queue_update_dma_alignment(struct request_queue *, int); -extern void blk_queue_softirq_done(struct request_queue 
*, softirq_done_fn *); -extern void blk_queue_rq_timed_out(struct request_queue *, rq_timed_out_fn *); extern void blk_queue_rq_timeout(struct request_queue *, unsigned int); extern void blk_queue_flush_queueable(struct request_queue *q, bool queueable); extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua); @@ -1299,8 +1103,7 @@ extern long nr_blockdev_pages(void); bool __must_check blk_get_queue(struct request_queue *); struct request_queue *blk_alloc_queue(gfp_t); -struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id, - spinlock_t *lock); +struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id); extern void blk_put_queue(struct request_queue *); extern void blk_set_queue_dying(struct request_queue *); @@ -1317,9 +1120,10 @@ extern void blk_set_queue_dying(struct request_queue *); * schedule() where blk_schedule_flush_plug() is called. */ struct blk_plug { - struct list_head list; /* requests */ struct list_head mq_list; /* blk-mq requests */ struct list_head cb_list; /* md requires an unplug callback */ + unsigned short rq_count; + bool multiple_queues; }; #define BLK_MAX_REQUEST_COUNT 16 #define BLK_PLUG_FLUSH_SIZE (128 * 1024) @@ -1358,31 +1162,10 @@ static inline bool blk_needs_flush_plug(struct task_struct *tsk) struct blk_plug *plug = tsk->plug; return plug && - (!list_empty(&plug->list) || - !list_empty(&plug->mq_list) || + (!list_empty(&plug->mq_list) || !list_empty(&plug->cb_list)); } -/* - * tag stuff - */ -extern int blk_queue_start_tag(struct request_queue *, struct request *); -extern struct request *blk_queue_find_tag(struct request_queue *, int); -extern void blk_queue_end_tag(struct request_queue *, struct request *); -extern int blk_queue_init_tags(struct request_queue *, int, struct blk_queue_tag *, int); -extern void blk_queue_free_tags(struct request_queue *); -extern int blk_queue_resize_tags(struct request_queue *, int); -extern struct blk_queue_tag *blk_init_tags(int, int); -extern void blk_free_tags(struct blk_queue_tag *); - -static inline struct request *blk_map_queue_find_tag(struct blk_queue_tag *bqt, - int tag) -{ - if (unlikely(bqt == NULL || tag >= bqt->real_max_depth)) - return NULL; - return bqt->tag_index[tag]; -} - extern int blkdev_issue_flush(struct block_device *, gfp_t, sector_t *); extern int blkdev_issue_write_same(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp_mask, struct page *page); @@ -1982,4 +1765,17 @@ static inline int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask, #endif /* CONFIG_BLOCK */ +static inline void blk_wake_io_task(struct task_struct *waiter) +{ + /* + * If we're polling, the task itself is doing the completions. For + * that case, we don't need to signal a wakeup, it's enough to just + * mark us as RUNNING. 
+ */ + if (waiter == current) + __set_current_state(TASK_RUNNING); + else + wake_up_process(waiter); +} + #endif diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 33014ae73103..e734f163bd0b 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -23,6 +23,7 @@ struct bpf_prog; struct bpf_map; struct sock; struct seq_file; +struct btf; struct btf_type; /* map is generic key/value storage optionally accesible by eBPF programs */ @@ -52,6 +53,7 @@ struct bpf_map_ops { void (*map_seq_show_elem)(struct bpf_map *map, void *key, struct seq_file *m); int (*map_check_btf)(const struct bpf_map *map, + const struct btf *btf, const struct btf_type *key_type, const struct btf_type *value_type); }; @@ -126,6 +128,7 @@ static inline bool bpf_map_support_seq_show(const struct bpf_map *map) } int map_check_no_btf(const struct bpf_map *map, + const struct btf *btf, const struct btf_type *key_type, const struct btf_type *value_type); @@ -268,15 +271,18 @@ struct bpf_prog_offload_ops { int (*insn_hook)(struct bpf_verifier_env *env, int insn_idx, int prev_insn_idx); int (*finalize)(struct bpf_verifier_env *env); + int (*prepare)(struct bpf_prog *prog); + int (*translate)(struct bpf_prog *prog); + void (*destroy)(struct bpf_prog *prog); }; struct bpf_prog_offload { struct bpf_prog *prog; struct net_device *netdev; + struct bpf_offload_dev *offdev; void *dev_priv; struct list_head offloads; bool dev_state; - const struct bpf_prog_offload_ops *dev_ops; void *jited_image; u32 jited_len; }; @@ -293,9 +299,11 @@ struct bpf_prog_aux { atomic_t refcnt; u32 used_map_cnt; u32 max_ctx_offset; + u32 max_pkt_offset; u32 stack_depth; u32 id; - u32 func_cnt; + u32 func_cnt; /* used by non-func prog as the number of func progs */ + u32 func_idx; /* 0 for non-func prog, the index in func array for func prog */ bool offload_requested; struct bpf_prog **func; void *jit_data; /* JIT specific data. arch dependent */ @@ -312,6 +320,30 @@ struct bpf_prog_aux { void *security; #endif struct bpf_prog_offload *offload; + struct btf *btf; + struct bpf_func_info *func_info; + /* bpf_line_info loaded from userspace. linfo->insn_off + * has the xlated insn offset. + * Both the main and sub prog share the same linfo. + * The subprog can access its first linfo by + * using the linfo_idx. + */ + struct bpf_line_info *linfo; + /* jited_linfo is the jited addr of the linfo. It has a + * one to one mapping to linfo: + * jited_linfo[i] is the jited addr for the linfo[i]->insn_off. + * Both the main and sub prog share the same jited_linfo. + * The subprog can access its first jited_linfo by + * using the linfo_idx. + */ + void **jited_linfo; + u32 func_info_cnt; + u32 nr_linfo; + /* subprog can use linfo_idx to access its first linfo and + * jited_linfo. 
+ * main prog always has linfo_idx == 0 + */ + u32 linfo_idx; union { struct work_struct work; struct rcu_head rcu; @@ -523,7 +555,8 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size) } /* verify correctness of eBPF program */ -int bpf_check(struct bpf_prog **fp, union bpf_attr *attr); +int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, + union bpf_attr __user *uattr); void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth); /* Map specifics */ @@ -691,7 +724,8 @@ int bpf_map_offload_get_next_key(struct bpf_map *map, bool bpf_offload_prog_map_match(struct bpf_prog *prog, struct bpf_map *map); -struct bpf_offload_dev *bpf_offload_dev_create(void); +struct bpf_offload_dev * +bpf_offload_dev_create(const struct bpf_prog_offload_ops *ops); void bpf_offload_dev_destroy(struct bpf_offload_dev *offdev); int bpf_offload_dev_netdev_register(struct bpf_offload_dev *offdev, struct net_device *netdev); diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index d93e89761a8b..c233efc106c6 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -38,6 +38,7 @@ enum bpf_reg_liveness { REG_LIVE_NONE = 0, /* reg hasn't been read or written this branch */ REG_LIVE_READ, /* reg was read, so we're sensitive to initial value */ REG_LIVE_WRITTEN, /* reg was written first, screening off later reads */ + REG_LIVE_DONE = 4, /* liveness won't be updating this register anymore */ }; struct bpf_reg_state { @@ -203,6 +204,7 @@ static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log) struct bpf_subprog_info { u32 start; /* insn idx of function entry point */ + u32 linfo_idx; /* The idx to the main_prog->aux->linfo */ u16 stack_depth; /* max. stack depth used by this function */ }; @@ -223,6 +225,7 @@ struct bpf_verifier_env { bool allow_ptr_leaks; bool seen_direct_write; struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */ + const struct bpf_line_info *prev_linfo; struct bpf_verifier_log log; struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1]; u32 subprog_cnt; @@ -245,7 +248,7 @@ static inline struct bpf_reg_state *cur_regs(struct bpf_verifier_env *env) return cur_func(env)->regs; } -int bpf_prog_offload_verifier_prep(struct bpf_verifier_env *env); +int bpf_prog_offload_verifier_prep(struct bpf_prog *prog); int bpf_prog_offload_verify_insn(struct bpf_verifier_env *env, int insn_idx, int prev_insn_idx); int bpf_prog_offload_finalize(struct bpf_verifier_env *env); diff --git a/include/linux/brcmphy.h b/include/linux/brcmphy.h index 949e9af8d9d6..9cd00a37b8d3 100644 --- a/include/linux/brcmphy.h +++ b/include/linux/brcmphy.h @@ -28,6 +28,7 @@ #define PHY_ID_BCM89610 0x03625cd0 #define PHY_ID_BCM7250 0xae025280 +#define PHY_ID_BCM7255 0xae025120 #define PHY_ID_BCM7260 0xae025190 #define PHY_ID_BCM7268 0xae025090 #define PHY_ID_BCM7271 0xae0253b0 diff --git a/include/linux/bsg-lib.h b/include/linux/bsg-lib.h index 6aeaf6472665..b356e0006731 100644 --- a/include/linux/bsg-lib.h +++ b/include/linux/bsg-lib.h @@ -31,6 +31,9 @@ struct device; struct scatterlist; struct request_queue; +typedef int (bsg_job_fn) (struct bsg_job *); +typedef enum blk_eh_timer_return (bsg_timeout_fn)(struct request *); + struct bsg_buffer { unsigned int payload_len; int sg_cnt; @@ -72,7 +75,8 @@ struct bsg_job { void bsg_job_done(struct bsg_job *job, int result, unsigned int reply_payload_rcv_len); struct request_queue *bsg_setup_queue(struct device *dev, const char *name, - bsg_job_fn *job_fn, int dd_job_size); + 
bsg_job_fn *job_fn, bsg_timeout_fn *timeout, int dd_job_size); +void bsg_remove_queue(struct request_queue *q); void bsg_job_put(struct bsg_job *job); int __must_check bsg_job_get(struct bsg_job *job); diff --git a/include/linux/btf.h b/include/linux/btf.h index e076c4697049..12502e25e767 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -7,6 +7,7 @@ #include struct btf; +struct btf_member; struct btf_type; union bpf_attr; @@ -46,5 +47,24 @@ void btf_type_seq_show(const struct btf *btf, u32 type_id, void *obj, struct seq_file *m); int btf_get_fd_by_id(u32 id); u32 btf_id(const struct btf *btf); +bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s, + const struct btf_member *m, + u32 expected_offset, u32 expected_size); + +#ifdef CONFIG_BPF_SYSCALL +const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id); +const char *btf_name_by_offset(const struct btf *btf, u32 offset); +#else +static inline const struct btf_type *btf_type_by_id(const struct btf *btf, + u32 type_id) +{ + return NULL; +} +static inline const char *btf_name_by_offset(const struct btf *btf, + u32 offset) +{ + return NULL; +} +#endif #endif diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h index 5e1694fe035b..8fcbae1b8db0 100644 --- a/include/linux/cgroup-defs.h +++ b/include/linux/cgroup-defs.h @@ -92,6 +92,7 @@ enum { CFTYPE_NO_PREFIX = (1 << 3), /* (DON'T USE FOR NEW FILES) no subsys prefix */ CFTYPE_WORLD_WRITABLE = (1 << 4), /* (DON'T USE FOR NEW FILES) S_IWUGO */ + CFTYPE_DEBUG = (1 << 5), /* create when cgroup_debug */ /* internal flags, do not use outside cgroup core proper */ __CFTYPE_ONLY_ON_DFL = (1 << 16), /* only on default hierarchy */ diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h index 9d12757a65b0..9968332cceed 100644 --- a/include/linux/cgroup.h +++ b/include/linux/cgroup.h @@ -93,6 +93,8 @@ extern struct css_set init_css_set; bool css_has_online_children(struct cgroup_subsys_state *css); struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss); +struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgroup, + struct cgroup_subsys *ss); struct cgroup_subsys_state *cgroup_get_e_css(struct cgroup *cgroup, struct cgroup_subsys *ss); struct cgroup_subsys_state *css_tryget_online_from_dir(struct dentry *dentry, diff --git a/include/linux/compat.h b/include/linux/compat.h index 88720b443cd6..056be0d03722 100644 --- a/include/linux/compat.h +++ b/include/linux/compat.h @@ -169,6 +169,10 @@ typedef struct { compat_sigset_word sig[_COMPAT_NSIG_WORDS]; } compat_sigset_t; +int set_compat_user_sigmask(const compat_sigset_t __user *usigmask, + sigset_t *set, sigset_t *oldset, + size_t sigsetsize); + struct compat_sigaction { #ifndef __ARCH_HAS_IRIX_SIGACTION compat_uptr_t sa_handler; @@ -558,6 +562,12 @@ asmlinkage long compat_sys_io_pgetevents(compat_aio_context_t ctx_id, struct io_event __user *events, struct old_timespec32 __user *timeout, const struct __compat_aio_sigset __user *usig); +asmlinkage long compat_sys_io_pgetevents_time64(compat_aio_context_t ctx_id, + compat_long_t min_nr, + compat_long_t nr, + struct io_event __user *events, + struct __kernel_timespec __user *timeout, + const struct __compat_aio_sigset __user *usig); /* fs/cookies.c */ asmlinkage long compat_sys_lookup_dcookie(u32, u32, char __user *, compat_size_t); @@ -643,11 +653,21 @@ asmlinkage long compat_sys_pselect6(int n, compat_ulong_t __user *inp, compat_ulong_t __user *exp, struct old_timespec32 __user *tsp, void __user *sig); 
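Looking back at the bsg-lib.h hunk above: bsg_setup_queue() now takes the timeout handler as an explicit argument, and teardown goes through the new bsg_remove_queue(). A minimal sketch of the driver side (the my_* names and queue name are hypothetical, not part of the patch):

	static int my_bsg_job_fn(struct bsg_job *job)
	{
		/* process job->request, then complete it */
		bsg_job_done(job, 0, 0);
		return 0;
	}

	static enum blk_eh_timer_return my_bsg_timeout(struct request *rq)
	{
		return BLK_EH_DONE;	/* timeout handled, do not retry */
	}

	/* at probe time: */
	q = bsg_setup_queue(dev, "my_bsg", my_bsg_job_fn, my_bsg_timeout, 0);
	if (IS_ERR(q))
		return PTR_ERR(q);
	/* at remove time: */
	bsg_remove_queue(q);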
+asmlinkage long compat_sys_pselect6_time64(int n, compat_ulong_t __user *inp, + compat_ulong_t __user *outp, + compat_ulong_t __user *exp, + struct __kernel_timespec __user *tsp, + void __user *sig); asmlinkage long compat_sys_ppoll(struct pollfd __user *ufds, unsigned int nfds, struct old_timespec32 __user *tsp, const compat_sigset_t __user *sigmask, compat_size_t sigsetsize); +asmlinkage long compat_sys_ppoll_time64(struct pollfd __user *ufds, + unsigned int nfds, + struct __kernel_timespec __user *tsp, + const compat_sigset_t __user *sigmask, + compat_size_t sigsetsize); /* fs/signalfd.c */ asmlinkage long compat_sys_signalfd4(int ufd, @@ -768,6 +788,9 @@ asmlinkage long compat_sys_rt_sigpending(compat_sigset_t __user *uset, asmlinkage long compat_sys_rt_sigtimedwait(compat_sigset_t __user *uthese, struct compat_siginfo __user *uinfo, struct old_timespec32 __user *uts, compat_size_t sigsetsize); +asmlinkage long compat_sys_rt_sigtimedwait_time64(compat_sigset_t __user *uthese, + struct compat_siginfo __user *uinfo, + struct __kernel_timespec __user *uts, compat_size_t sigsetsize); asmlinkage long compat_sys_rt_sigqueueinfo(compat_pid_t pid, int sig, struct compat_siginfo __user *uinfo); /* No generic prototype for rt_sigreturn */ @@ -873,6 +896,9 @@ asmlinkage long compat_sys_move_pages(pid_t pid, compat_ulong_t nr_pages, asmlinkage long compat_sys_rt_tgsigqueueinfo(compat_pid_t tgid, compat_pid_t pid, int sig, struct compat_siginfo __user *uinfo); +asmlinkage long compat_sys_recvmmsg_time64(int fd, struct compat_mmsghdr __user *mmsg, + unsigned vlen, unsigned int flags, + struct __kernel_timespec __user *timeout); asmlinkage long compat_sys_recvmmsg(int fd, struct compat_mmsghdr __user *mmsg, unsigned vlen, unsigned int flags, struct old_timespec32 __user *timeout); diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index 3e7dafb3ea80..39f668d5066b 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -16,9 +16,13 @@ /* all clang versions usable with the kernel support KASAN ABI version 5 */ #define KASAN_ABI_VERSION 5 +#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer) /* emulate gcc's __SANITIZE_ADDRESS__ flag */ -#if __has_feature(address_sanitizer) #define __SANITIZE_ADDRESS__ +#define __no_sanitize_address \ + __attribute__((no_sanitize("address", "hwaddress"))) +#else +#define __no_sanitize_address #endif /* diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index 2010493e1040..5776da43da97 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -143,6 +143,12 @@ #define KASAN_ABI_VERSION 3 #endif +#if __has_attribute(__no_sanitize_address__) +#define __no_sanitize_address __attribute__((no_sanitize_address)) +#else +#define __no_sanitize_address +#endif + #if GCC_VERSION >= 50100 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1 #endif diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h index fe07b680dd4a..19f32b0c29af 100644 --- a/include/linux/compiler_attributes.h +++ b/include/linux/compiler_attributes.h @@ -199,19 +199,6 @@ */ #define __noreturn __attribute__((__noreturn__)) -/* - * Optional: only supported since gcc >= 4.8 - * Optional: not supported by icc - * - * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-no_005fsanitize_005faddress-function-attribute - * clang: https://clang.llvm.org/docs/AttributeReference.html#no-sanitize-address-no-address-safety-analysis - */ -#if 
__has_attribute(__no_sanitize_address__)
-# define __no_sanitize_address __attribute__((__no_sanitize_address__))
-#else
-# define __no_sanitize_address
-#endif
-
 /*
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html#index-packed-type-attribute
  * clang: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-packed-variable-attribute
diff --git a/include/linux/cordic.h b/include/linux/cordic.h
index cf68ca4a508c..3d656f54d64f 100644
--- a/include/linux/cordic.h
+++ b/include/linux/cordic.h
@@ -18,6 +18,15 @@
 #include

+#define CORDIC_ANGLE_GEN	39797
+#define CORDIC_PRECISION_SHIFT	16
+#define CORDIC_NUM_ITER		(CORDIC_PRECISION_SHIFT + 2)
+
+#define CORDIC_FIXED(X)	((s32)((X) << CORDIC_PRECISION_SHIFT))
+#define CORDIC_FLOAT(X)	(((X) >= 0) \
+		? ((((X) >> (CORDIC_PRECISION_SHIFT - 1)) + 1) >> 1) \
+		: -((((-(X)) >> (CORDIC_PRECISION_SHIFT - 1)) + 1) >> 1))
+
 /**
  * struct cordic_iq - i/q coordinate.
  *
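The CORDIC_FIXED()/CORDIC_FLOAT() helpers above are plain Q16 fixed-point conversions; note that CORDIC_FLOAT() rounds to nearest rather than truncating. A small worked sketch (the values follow directly from the macro definitions; variable names are illustrative):

	s32 q16 = CORDIC_FIXED(3) / 2;	/* 3 << 16 = 196608, halved: 98304 == 1.5 in Q16 */
	int i   = CORDIC_FLOAT(q16);	/* ((98304 >> 15) + 1) >> 1 == 2: 1.5 rounds up */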
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 3634ad6fe202..902ec171fc6d 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -49,7 +49,6 @@
 #define CRYPTO_ALG_TYPE_BLKCIPHER	0x00000004
 #define CRYPTO_ALG_TYPE_ABLKCIPHER	0x00000005
 #define CRYPTO_ALG_TYPE_SKCIPHER	0x00000005
-#define CRYPTO_ALG_TYPE_GIVCIPHER	0x00000006
 #define CRYPTO_ALG_TYPE_KPP		0x00000008
 #define CRYPTO_ALG_TYPE_ACOMPRESS	0x0000000a
 #define CRYPTO_ALG_TYPE_SCOMPRESS	0x0000000b
@@ -76,12 +75,6 @@
  */
 #define CRYPTO_ALG_NEED_FALLBACK	0x00000100

-/*
- * This bit is set for symmetric key ciphers that have already been wrapped
- * with a generic IV generator to prevent them from being wrapped again.
- */
-#define CRYPTO_ALG_GENIV		0x00000200
-
 /*
  * Set if the algorithm has passed automated run-time testing.  Note that
  * if there is no run-time testing for a given algorithm it is considered
@@ -157,7 +150,6 @@ struct crypto_async_request;
 struct crypto_blkcipher;
 struct crypto_tfm;
 struct crypto_type;
-struct skcipher_givcrypt_request;

 typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);

@@ -246,31 +238,16 @@ struct cipher_desc {
  *	be called in parallel with the same transformation object.
  * @decrypt: Decrypt a single block. This is a reverse counterpart to @encrypt
  *	    and the conditions are exactly the same.
- * @givencrypt: Update the IV for encryption. With this function, a cipher
- *	        implementation may provide the function on how to update the IV
- *	        for encryption.
- * @givdecrypt: Update the IV for decryption. This is the reverse of
- *	        @givencrypt .
- * @geniv: The transformation implementation may use an "IV generator" provided
- *	   by the kernel crypto API. Several use cases have a predefined
- *	   approach how IVs are to be updated. For such use cases, the kernel
- *	   crypto API provides ready-to-use implementations that can be
- *	   referenced with this variable.
  * @ivsize: IV size applicable for transformation. The consumer must provide an
  *	    IV of exactly that size to perform the encrypt or decrypt operation.
  *
- * All fields except @givencrypt , @givdecrypt , @geniv and @ivsize are
- * mandatory and must be filled.
+ * All fields except @ivsize are mandatory and must be filled.
  */
 struct ablkcipher_alg {
	int (*setkey)(struct crypto_ablkcipher *tfm, const u8 *key,
	              unsigned int keylen);
	int (*encrypt)(struct ablkcipher_request *req);
	int (*decrypt)(struct ablkcipher_request *req);
-	int (*givencrypt)(struct skcipher_givcrypt_request *req);
-	int (*givdecrypt)(struct skcipher_givcrypt_request *req);
-
-	const char *geniv;

	unsigned int min_keysize;
	unsigned int max_keysize;
@@ -284,10 +261,9 @@ struct ablkcipher_alg {
  * @setkey: see struct ablkcipher_alg
  * @encrypt: see struct ablkcipher_alg
  * @decrypt: see struct ablkcipher_alg
- * @geniv: see struct ablkcipher_alg
  * @ivsize: see struct ablkcipher_alg
  *
- * All fields except @geniv and @ivsize are mandatory and must be filled.
+ * All fields except @ivsize are mandatory and must be filled.
  */
 struct blkcipher_alg {
	int (*setkey)(struct crypto_tfm *tfm, const u8 *key,
@@ -299,8 +275,6 @@ struct blkcipher_alg {
			  struct scatterlist *dst, struct scatterlist *src,
			  unsigned int nbytes);

-	const char *geniv;
-
	unsigned int min_keysize;
	unsigned int max_keysize;
	unsigned int ivsize;
@@ -369,6 +343,115 @@ struct compress_alg {
			      unsigned int slen, u8 *dst, unsigned int *dlen);
 };

+#ifdef CONFIG_CRYPTO_STATS
+/*
+ * struct crypto_istat_aead - statistics for AEAD algorithm
+ * @encrypt_cnt:	number of encrypt requests
+ * @encrypt_tlen:	total data size handled by encrypt requests
+ * @decrypt_cnt:	number of decrypt requests
+ * @decrypt_tlen:	total data size handled by decrypt requests
+ * @err_cnt:		number of errors for AEAD requests
+ */
+struct crypto_istat_aead {
+	atomic64_t encrypt_cnt;
+	atomic64_t encrypt_tlen;
+	atomic64_t decrypt_cnt;
+	atomic64_t decrypt_tlen;
+	atomic64_t err_cnt;
+};
+
+/*
+ * struct crypto_istat_akcipher - statistics for akcipher algorithm
+ * @encrypt_cnt:	number of encrypt requests
+ * @encrypt_tlen:	total data size handled by encrypt requests
+ * @decrypt_cnt:	number of decrypt requests
+ * @decrypt_tlen:	total data size handled by decrypt requests
+ * @verify_cnt:		number of verify operations
+ * @sign_cnt:		number of sign requests
+ * @err_cnt:		number of errors for akcipher requests
+ */
+struct crypto_istat_akcipher {
+	atomic64_t encrypt_cnt;
+	atomic64_t encrypt_tlen;
+	atomic64_t decrypt_cnt;
+	atomic64_t decrypt_tlen;
+	atomic64_t verify_cnt;
+	atomic64_t sign_cnt;
+	atomic64_t err_cnt;
+};
+
+/*
+ * struct crypto_istat_cipher - statistics for cipher algorithm
+ * @encrypt_cnt:	number of encrypt requests
+ * @encrypt_tlen:	total data size handled by encrypt requests
+ * @decrypt_cnt:	number of decrypt requests
+ * @decrypt_tlen:	total data size handled by decrypt requests
+ * @err_cnt:		number of errors for cipher requests
+ */
+struct crypto_istat_cipher {
+	atomic64_t encrypt_cnt;
+	atomic64_t encrypt_tlen;
+	atomic64_t decrypt_cnt;
+	atomic64_t decrypt_tlen;
+	atomic64_t err_cnt;
+};
+
+/*
+ * struct crypto_istat_compress - statistics for compress algorithm
+ * @compress_cnt:	number of compress requests
+ * @compress_tlen:	total data size handled by compress requests
+ * @decompress_cnt:	number of decompress requests
+ * @decompress_tlen:	total data size handled by decompress requests
+ * @err_cnt:		number of errors for compress requests
+ */
+struct crypto_istat_compress {
+	atomic64_t compress_cnt;
+	atomic64_t compress_tlen;
+	atomic64_t decompress_cnt;
+	atomic64_t decompress_tlen;
+	atomic64_t err_cnt;
+};
+
+/*
+ * struct crypto_istat_hash - statistics for hash algorithm
+ * @hash_cnt:		number of hash requests
+ * @hash_tlen:		total data size hashed
+ * @err_cnt:		number of errors for hash requests
+ */
+struct crypto_istat_hash {
+	atomic64_t hash_cnt;
+	atomic64_t hash_tlen;
+	atomic64_t err_cnt;
+};
+
+/*
+ * struct crypto_istat_kpp - statistics for KPP algorithm
+ * @setsecret_cnt:		number of setsecret operations
+ * @generate_public_key_cnt:	number of generate_public_key operations
+ * @compute_shared_secret_cnt:	number of compute_shared_secret operations
+ * @err_cnt:			number of errors for KPP requests
+ */
+struct crypto_istat_kpp {
+	atomic64_t setsecret_cnt;
+	atomic64_t generate_public_key_cnt;
+	atomic64_t compute_shared_secret_cnt;
+	atomic64_t err_cnt;
+};
+
+/*
+ * struct crypto_istat_rng: statistics for RNG algorithm
+ * @generate_cnt:	number of RNG generate requests
+ * @generate_tlen:	total data size of generated data by the RNG
+ * @seed_cnt:		number of times the RNG was seeded
+ * @err_cnt:		number of errors for RNG requests
+ */
+struct crypto_istat_rng {
+	atomic64_t generate_cnt;
+	atomic64_t generate_tlen;
+	atomic64_t seed_cnt;
+	atomic64_t err_cnt;
+};
+#endif /* CONFIG_CRYPTO_STATS */
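These counters are updated through the crypto_stats_*() helpers declared further down in this patch; the intended pattern is to take a reference with crypto_stats_get() before the operation and to report the size and result afterwards, exactly as the reworked crypto_ablkcipher_encrypt() below does. A sketch of the same pattern for the AEAD case (the function name is illustrative, not part of the patch):

	static inline int example_aead_encrypt(struct aead_request *req)
	{
		struct crypto_aead *tfm = crypto_aead_reqtfm(req);
		struct crypto_alg *alg = tfm->base.__crt_alg;
		unsigned int cryptlen = req->cryptlen;	/* saved up front: req may complete asynchronously */
		int ret;

		crypto_stats_get(alg);
		ret = crypto_aead_alg(tfm)->encrypt(req);
		crypto_stats_aead_encrypt(cryptlen, alg, ret);
		return ret;
	}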
 #define cra_ablkcipher	cra_u.ablkcipher
 #define cra_blkcipher	cra_u.blkcipher
@@ -454,32 +537,14 @@ struct compress_alg {
  * @cra_refcnt: internally used
  * @cra_destroy: internally used
  *
- * All following statistics are for this crypto_alg
- * @encrypt_cnt:	number of encrypt requests
- * @decrypt_cnt:	number of decrypt requests
- * @compress_cnt:	number of compress requests
- * @decompress_cnt:	number of decompress requests
- * @generate_cnt:	number of RNG generate requests
- * @seed_cnt:		number of times the rng was seeded
- * @hash_cnt:		number of hash requests
- * @sign_cnt:		number of sign requests
- * @setsecret_cnt:	number of setsecrey operation
- * @generate_public_key_cnt:	number of generate_public_key operation
- * @verify_cnt:		number of verify operation
- * @compute_shared_secret_cnt:	number of compute_shared_secret operation
- * @encrypt_tlen:	total data size handled by encrypt requests
- * @decrypt_tlen:	total data size handled by decrypt requests
- * @compress_tlen:	total data size handled by compress requests
- * @decompress_tlen:	total data size handled by decompress requests
- * @generate_tlen:	total data size of generated data by the RNG
- * @hash_tlen:		total data size hashed
- * @akcipher_err_cnt:	number of error for akcipher requests
- * @cipher_err_cnt:	number of error for akcipher requests
- * @compress_err_cnt:	number of error for akcipher requests
- * @aead_err_cnt:	number of error for akcipher requests
- * @hash_err_cnt:	number of error for akcipher requests
- * @rng_err_cnt:	number of error for akcipher requests
- * @kpp_err_cnt:	number of error for akcipher requests
+ * @stats: union of all possible crypto_istat_xxx structures
+ * @stats.aead:		statistics for AEAD algorithm
+ * @stats.akcipher:	statistics for akcipher algorithm
+ * @stats.cipher:	statistics for cipher algorithm
+ * @stats.compress:	statistics for compress algorithm
+ * @stats.hash:		statistics for hash algorithm
+ * @stats.rng:		statistics for rng algorithm
+ * @stats.kpp:		statistics for KPP algorithm
  *
  * The struct crypto_alg describes a generic Crypto API algorithm and is common
  * for all of the transformations.
Any variable not documented here shall not @@ -515,46 +580,86 @@ struct crypto_alg { struct module *cra_module; +#ifdef CONFIG_CRYPTO_STATS union { - atomic_t encrypt_cnt; - atomic_t compress_cnt; - atomic_t generate_cnt; - atomic_t hash_cnt; - atomic_t setsecret_cnt; - }; - union { - atomic64_t encrypt_tlen; - atomic64_t compress_tlen; - atomic64_t generate_tlen; - atomic64_t hash_tlen; - }; - union { - atomic_t akcipher_err_cnt; - atomic_t cipher_err_cnt; - atomic_t compress_err_cnt; - atomic_t aead_err_cnt; - atomic_t hash_err_cnt; - atomic_t rng_err_cnt; - atomic_t kpp_err_cnt; - }; - union { - atomic_t decrypt_cnt; - atomic_t decompress_cnt; - atomic_t seed_cnt; - atomic_t generate_public_key_cnt; - }; - union { - atomic64_t decrypt_tlen; - atomic64_t decompress_tlen; - }; - union { - atomic_t verify_cnt; - atomic_t compute_shared_secret_cnt; - }; - atomic_t sign_cnt; + struct crypto_istat_aead aead; + struct crypto_istat_akcipher akcipher; + struct crypto_istat_cipher cipher; + struct crypto_istat_compress compress; + struct crypto_istat_hash hash; + struct crypto_istat_rng rng; + struct crypto_istat_kpp kpp; + } stats; +#endif /* CONFIG_CRYPTO_STATS */ } CRYPTO_MINALIGN_ATTR; +#ifdef CONFIG_CRYPTO_STATS +void crypto_stats_init(struct crypto_alg *alg); +void crypto_stats_get(struct crypto_alg *alg); +void crypto_stats_ablkcipher_encrypt(unsigned int nbytes, int ret, struct crypto_alg *alg); +void crypto_stats_ablkcipher_decrypt(unsigned int nbytes, int ret, struct crypto_alg *alg); +void crypto_stats_aead_encrypt(unsigned int cryptlen, struct crypto_alg *alg, int ret); +void crypto_stats_aead_decrypt(unsigned int cryptlen, struct crypto_alg *alg, int ret); +void crypto_stats_ahash_update(unsigned int nbytes, int ret, struct crypto_alg *alg); +void crypto_stats_ahash_final(unsigned int nbytes, int ret, struct crypto_alg *alg); +void crypto_stats_akcipher_encrypt(unsigned int src_len, int ret, struct crypto_alg *alg); +void crypto_stats_akcipher_decrypt(unsigned int src_len, int ret, struct crypto_alg *alg); +void crypto_stats_akcipher_sign(int ret, struct crypto_alg *alg); +void crypto_stats_akcipher_verify(int ret, struct crypto_alg *alg); +void crypto_stats_compress(unsigned int slen, int ret, struct crypto_alg *alg); +void crypto_stats_decompress(unsigned int slen, int ret, struct crypto_alg *alg); +void crypto_stats_kpp_set_secret(struct crypto_alg *alg, int ret); +void crypto_stats_kpp_generate_public_key(struct crypto_alg *alg, int ret); +void crypto_stats_kpp_compute_shared_secret(struct crypto_alg *alg, int ret); +void crypto_stats_rng_seed(struct crypto_alg *alg, int ret); +void crypto_stats_rng_generate(struct crypto_alg *alg, unsigned int dlen, int ret); +void crypto_stats_skcipher_encrypt(unsigned int cryptlen, int ret, struct crypto_alg *alg); +void crypto_stats_skcipher_decrypt(unsigned int cryptlen, int ret, struct crypto_alg *alg); +#else +static inline void crypto_stats_init(struct crypto_alg *alg) +{} +static inline void crypto_stats_get(struct crypto_alg *alg) +{} +static inline void crypto_stats_ablkcipher_encrypt(unsigned int nbytes, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_ablkcipher_decrypt(unsigned int nbytes, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_aead_encrypt(unsigned int cryptlen, struct crypto_alg *alg, int ret) +{} +static inline void crypto_stats_aead_decrypt(unsigned int cryptlen, struct crypto_alg *alg, int ret) +{} +static inline void crypto_stats_ahash_update(unsigned int nbytes, int ret, 
struct crypto_alg *alg) +{} +static inline void crypto_stats_ahash_final(unsigned int nbytes, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_akcipher_encrypt(unsigned int src_len, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_akcipher_decrypt(unsigned int src_len, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_akcipher_sign(int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_akcipher_verify(int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_compress(unsigned int slen, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_decompress(unsigned int slen, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_kpp_set_secret(struct crypto_alg *alg, int ret) +{} +static inline void crypto_stats_kpp_generate_public_key(struct crypto_alg *alg, int ret) +{} +static inline void crypto_stats_kpp_compute_shared_secret(struct crypto_alg *alg, int ret) +{} +static inline void crypto_stats_rng_seed(struct crypto_alg *alg, int ret) +{} +static inline void crypto_stats_rng_generate(struct crypto_alg *alg, unsigned int dlen, int ret) +{} +static inline void crypto_stats_skcipher_encrypt(unsigned int cryptlen, int ret, struct crypto_alg *alg) +{} +static inline void crypto_stats_skcipher_decrypt(unsigned int cryptlen, int ret, struct crypto_alg *alg) +{} +#endif /* * A helper struct for waiting for completion of async crypto ops */ @@ -800,14 +905,14 @@ static inline struct crypto_ablkcipher *__crypto_ablkcipher_cast( static inline u32 crypto_skcipher_type(u32 type) { - type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV); + type &= ~CRYPTO_ALG_TYPE_MASK; type |= CRYPTO_ALG_TYPE_BLKCIPHER; return type; } static inline u32 crypto_skcipher_mask(u32 mask) { - mask &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV); + mask &= ~CRYPTO_ALG_TYPE_MASK; mask |= CRYPTO_ALG_TYPE_BLKCIPHER_MASK; return mask; } @@ -973,38 +1078,6 @@ static inline struct crypto_ablkcipher *crypto_ablkcipher_reqtfm( return __crypto_ablkcipher_cast(req->base.tfm); } -static inline void crypto_stat_ablkcipher_encrypt(struct ablkcipher_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct ablkcipher_tfm *crt = - crypto_ablkcipher_crt(crypto_ablkcipher_reqtfm(req)); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&crt->base->base.__crt_alg->cipher_err_cnt); - } else { - atomic_inc(&crt->base->base.__crt_alg->encrypt_cnt); - atomic64_add(req->nbytes, &crt->base->base.__crt_alg->encrypt_tlen); - } -#endif -} - -static inline void crypto_stat_ablkcipher_decrypt(struct ablkcipher_request *req, - int ret) -{ -#ifdef CONFIG_CRYPTO_STATS - struct ablkcipher_tfm *crt = - crypto_ablkcipher_crt(crypto_ablkcipher_reqtfm(req)); - - if (ret && ret != -EINPROGRESS && ret != -EBUSY) { - atomic_inc(&crt->base->base.__crt_alg->cipher_err_cnt); - } else { - atomic_inc(&crt->base->base.__crt_alg->decrypt_cnt); - atomic64_add(req->nbytes, &crt->base->base.__crt_alg->decrypt_tlen); - } -#endif -} - /** * crypto_ablkcipher_encrypt() - encrypt plaintext * @req: reference to the ablkcipher_request handle that holds all information @@ -1020,10 +1093,13 @@ static inline int crypto_ablkcipher_encrypt(struct ablkcipher_request *req) { struct ablkcipher_tfm *crt = crypto_ablkcipher_crt(crypto_ablkcipher_reqtfm(req)); + struct crypto_alg *alg = crt->base->base.__crt_alg; + unsigned int nbytes = req->nbytes; int ret; + crypto_stats_get(alg); ret = crt->encrypt(req); - crypto_stat_ablkcipher_encrypt(req, ret); + 
crypto_stats_ablkcipher_encrypt(nbytes, ret, alg); return ret; } @@ -1042,10 +1118,13 @@ static inline int crypto_ablkcipher_decrypt(struct ablkcipher_request *req) { struct ablkcipher_tfm *crt = crypto_ablkcipher_crt(crypto_ablkcipher_reqtfm(req)); + struct crypto_alg *alg = crt->base->base.__crt_alg; + unsigned int nbytes = req->nbytes; int ret; + crypto_stats_get(alg); ret = crt->decrypt(req); - crypto_stat_ablkcipher_decrypt(req, ret); + crypto_stats_ablkcipher_decrypt(nbytes, ret, alg); return ret; } diff --git a/include/linux/dma-debug.h b/include/linux/dma-debug.h index 30213adbb6b9..2ad5c363d7d5 100644 --- a/include/linux/dma-debug.h +++ b/include/linux/dma-debug.h @@ -30,8 +30,6 @@ struct bus_type; extern void dma_debug_add_bus(struct bus_type *bus); -extern int dma_debug_resize_entries(u32 num_entries); - extern void debug_dma_map_single(struct device *dev, const void *addr, unsigned long len); @@ -72,17 +70,6 @@ extern void debug_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size, int direction); -extern void debug_dma_sync_single_range_for_cpu(struct device *dev, - dma_addr_t dma_handle, - unsigned long offset, - size_t size, - int direction); - -extern void debug_dma_sync_single_range_for_device(struct device *dev, - dma_addr_t dma_handle, - unsigned long offset, - size_t size, int direction); - extern void debug_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, int direction); @@ -101,11 +88,6 @@ static inline void dma_debug_add_bus(struct bus_type *bus) { } -static inline int dma_debug_resize_entries(u32 num_entries) -{ - return 0; -} - static inline void debug_dma_map_single(struct device *dev, const void *addr, unsigned long len) { @@ -174,22 +156,6 @@ static inline void debug_dma_sync_single_for_device(struct device *dev, { } -static inline void debug_dma_sync_single_range_for_cpu(struct device *dev, - dma_addr_t dma_handle, - unsigned long offset, - size_t size, - int direction) -{ -} - -static inline void debug_dma_sync_single_range_for_device(struct device *dev, - dma_addr_t dma_handle, - unsigned long offset, - size_t size, - int direction) -{ -} - static inline void debug_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, int direction) diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h index 9e66bfe369aa..b7338702592a 100644 --- a/include/linux/dma-direct.h +++ b/include/linux/dma-direct.h @@ -5,8 +5,6 @@ #include #include -#define DIRECT_MAPPING_ERROR (~(dma_addr_t)0) - #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA #include #else @@ -50,14 +48,6 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr) return __sme_clr(__dma_to_phys(dev, daddr)); } -#ifdef CONFIG_ARCH_HAS_DMA_MARK_CLEAN -void dma_mark_clean(void *addr, size_t size); -#else -static inline void dma_mark_clean(void *addr, size_t size) -{ -} -#endif /* CONFIG_ARCH_HAS_DMA_MARK_CLEAN */ - u64 dma_direct_get_required_mask(struct device *dev); void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs); @@ -67,11 +57,8 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs); void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs); -dma_addr_t dma_direct_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, enum dma_data_direction dir, - unsigned long attrs); -int dma_direct_map_sg(struct 
device *dev, struct scatterlist *sgl, int nents,
-		enum dma_data_direction dir, unsigned long attrs);
+struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
+void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page);
 int dma_direct_supported(struct device *dev, u64 mask);
-int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr);
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index e8ca5e654277..e760dc5d1fa8 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -69,7 +69,6 @@ dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
		size_t size, enum dma_data_direction dir, unsigned long attrs);
 void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir, unsigned long attrs);
-int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

 /* The DMA API isn't _quite_ the whole story, though... */
 void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index d327bdd53716..ba521d5506c9 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -128,13 +128,14 @@ struct dma_map_ops {
			enum dma_data_direction dir);
	void (*cache_sync)(struct device *dev, void *vaddr, size_t size,
			enum dma_data_direction direction);
-	int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
	int (*dma_supported)(struct device *dev, u64 mask);
	u64 (*get_required_mask)(struct device *dev);
 };

-extern const struct dma_map_ops dma_direct_ops;
+#define DMA_MAPPING_ERROR		(~(dma_addr_t)0)
+
 extern const struct dma_map_ops dma_virt_ops;
+extern const struct dma_map_ops dma_dummy_ops;

 #define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))

@@ -220,6 +221,69 @@ static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 }
 #endif

+static inline bool dma_is_direct(const struct dma_map_ops *ops)
+{
+	return likely(!ops);
+}
+
+/*
+ * All the dma_direct_* declarations are here just for the indirect call bypass,
+ * and must not be used directly by drivers!
+ */
+dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size, enum dma_data_direction dir,
+		unsigned long attrs);
+int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
+		enum dma_data_direction dir, unsigned long attrs);
+
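For ordinary drivers the visible part of this rework is the error convention: with the per-ops ->mapping_error hook gone, failure is reported through the single DMA_MAPPING_ERROR sentinel defined above, while dma_is_direct() lets the inline wrappers below short-circuit straight into dma_direct_*() calls. A driver-side sketch (dev, buf and len are assumed to exist in the caller):

	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))	/* i.e. handle == DMA_MAPPING_ERROR */
		return -ENOMEM;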
+#if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
+    defined(CONFIG_SWIOTLB)
+void dma_direct_sync_single_for_device(struct device *dev,
+		dma_addr_t addr, size_t size, enum dma_data_direction dir);
+void dma_direct_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sgl, int nents, enum dma_data_direction dir);
+#else
+static inline void dma_direct_sync_single_for_device(struct device *dev,
+		dma_addr_t addr, size_t size, enum dma_data_direction dir)
+{
+}
+static inline void dma_direct_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sgl, int nents, enum dma_data_direction dir)
+{
+}
+#endif
+
+#if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \
+    defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL) || \
+    defined(CONFIG_SWIOTLB)
+void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs);
+void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
+		int nents, enum dma_data_direction dir, unsigned long attrs);
+void dma_direct_sync_single_for_cpu(struct device *dev,
+		dma_addr_t addr, size_t size, enum dma_data_direction dir);
+void dma_direct_sync_sg_for_cpu(struct device *dev,
+		struct scatterlist *sgl, int nents, enum dma_data_direction dir);
+#else
+static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+}
+static inline void dma_direct_unmap_sg(struct device *dev,
+		struct scatterlist *sgl, int nents, enum dma_data_direction dir,
+		unsigned long attrs)
+{
+}
+static inline void dma_direct_sync_single_for_cpu(struct device *dev,
+		dma_addr_t addr, size_t size, enum dma_data_direction dir)
+{
+}
+static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
+		struct scatterlist *sgl, int nents, enum dma_data_direction dir)
+{
+}
+#endif
+
 static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
					      size_t size,
					      enum dma_data_direction dir,
@@ -230,9 +294,12 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,

	BUG_ON(!valid_dma_direction(dir));
	debug_dma_map_single(dev, ptr, size);
-	addr = ops->map_page(dev, virt_to_page(ptr),
-			     offset_in_page(ptr), size,
-			     dir, attrs);
+	if (dma_is_direct(ops))
+		addr = dma_direct_map_page(dev, virt_to_page(ptr),
+				offset_in_page(ptr), size, dir, attrs);
+	else
+		addr = ops->map_page(dev, virt_to_page(ptr),
+				offset_in_page(ptr), size, dir, attrs);
	debug_dma_map_page(dev, virt_to_page(ptr),
			   offset_in_page(ptr), size,
			   dir, addr, true);
@@ -247,11 +314,19 @@ static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
	const struct dma_map_ops *ops = get_dma_ops(dev);

	BUG_ON(!valid_dma_direction(dir));
-	if (ops->unmap_page)
+	if (dma_is_direct(ops))
+		dma_direct_unmap_page(dev, addr, size, dir, attrs);
+	else if (ops->unmap_page)
		ops->unmap_page(dev, addr, size, dir, attrs);
	debug_dma_unmap_page(dev, addr, size, dir, true);
 }

+static inline void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	return dma_unmap_single_attrs(dev, addr, size, dir, attrs);
+}
+
 /*
  * dma_maps_sg_attrs returns 0 on error and > 0 on success.
* It should never return a value < 0. @@ -264,7 +339,10 @@ static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int ents; BUG_ON(!valid_dma_direction(dir)); - ents = ops->map_sg(dev, sg, nents, dir, attrs); + if (dma_is_direct(ops)) + ents = dma_direct_map_sg(dev, sg, nents, dir, attrs); + else + ents = ops->map_sg(dev, sg, nents, dir, attrs); BUG_ON(ents < 0); debug_dma_map_sg(dev, sg, nents, ents, dir); @@ -279,7 +357,9 @@ static inline void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg BUG_ON(!valid_dma_direction(dir)); debug_dma_unmap_sg(dev, sg, nents, dir); - if (ops->unmap_sg) + if (dma_is_direct(ops)) + dma_direct_unmap_sg(dev, sg, nents, dir, attrs); + else if (ops->unmap_sg) ops->unmap_sg(dev, sg, nents, dir, attrs); } @@ -293,25 +373,15 @@ static inline dma_addr_t dma_map_page_attrs(struct device *dev, dma_addr_t addr; BUG_ON(!valid_dma_direction(dir)); - addr = ops->map_page(dev, page, offset, size, dir, attrs); + if (dma_is_direct(ops)) + addr = dma_direct_map_page(dev, page, offset, size, dir, attrs); + else + addr = ops->map_page(dev, page, offset, size, dir, attrs); debug_dma_map_page(dev, page, offset, size, dir, addr, false); return addr; } -static inline void dma_unmap_page_attrs(struct device *dev, - dma_addr_t addr, size_t size, - enum dma_data_direction dir, - unsigned long attrs) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - - BUG_ON(!valid_dma_direction(dir)); - if (ops->unmap_page) - ops->unmap_page(dev, addr, size, dir, attrs); - debug_dma_unmap_page(dev, addr, size, dir, false); -} - static inline dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, @@ -327,7 +397,7 @@ static inline dma_addr_t dma_map_resource(struct device *dev, BUG_ON(pfn_valid(PHYS_PFN(phys_addr))); addr = phys_addr; - if (ops->map_resource) + if (ops && ops->map_resource) addr = ops->map_resource(dev, phys_addr, size, dir, attrs); debug_dma_map_resource(dev, phys_addr, size, dir, addr); @@ -342,7 +412,7 @@ static inline void dma_unmap_resource(struct device *dev, dma_addr_t addr, const struct dma_map_ops *ops = get_dma_ops(dev); BUG_ON(!valid_dma_direction(dir)); - if (ops->unmap_resource) + if (ops && ops->unmap_resource) ops->unmap_resource(dev, addr, size, dir, attrs); debug_dma_unmap_resource(dev, addr, size, dir); } @@ -354,11 +424,20 @@ static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, const struct dma_map_ops *ops = get_dma_ops(dev); BUG_ON(!valid_dma_direction(dir)); - if (ops->sync_single_for_cpu) + if (dma_is_direct(ops)) + dma_direct_sync_single_for_cpu(dev, addr, size, dir); + else if (ops->sync_single_for_cpu) ops->sync_single_for_cpu(dev, addr, size, dir); debug_dma_sync_single_for_cpu(dev, addr, size, dir); } +static inline void dma_sync_single_range_for_cpu(struct device *dev, + dma_addr_t addr, unsigned long offset, size_t size, + enum dma_data_direction dir) +{ + return dma_sync_single_for_cpu(dev, addr + offset, size, dir); +} + static inline void dma_sync_single_for_device(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir) @@ -366,37 +445,18 @@ static inline void dma_sync_single_for_device(struct device *dev, const struct dma_map_ops *ops = get_dma_ops(dev); BUG_ON(!valid_dma_direction(dir)); - if (ops->sync_single_for_device) + if (dma_is_direct(ops)) + dma_direct_sync_single_for_device(dev, addr, size, dir); + else if (ops->sync_single_for_device) ops->sync_single_for_device(dev, addr, size, dir); 
debug_dma_sync_single_for_device(dev, addr, size, dir); } -static inline void dma_sync_single_range_for_cpu(struct device *dev, - dma_addr_t addr, - unsigned long offset, - size_t size, - enum dma_data_direction dir) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - - BUG_ON(!valid_dma_direction(dir)); - if (ops->sync_single_for_cpu) - ops->sync_single_for_cpu(dev, addr + offset, size, dir); - debug_dma_sync_single_range_for_cpu(dev, addr, offset, size, dir); -} - static inline void dma_sync_single_range_for_device(struct device *dev, - dma_addr_t addr, - unsigned long offset, - size_t size, - enum dma_data_direction dir) + dma_addr_t addr, unsigned long offset, size_t size, + enum dma_data_direction dir) { - const struct dma_map_ops *ops = get_dma_ops(dev); - - BUG_ON(!valid_dma_direction(dir)); - if (ops->sync_single_for_device) - ops->sync_single_for_device(dev, addr + offset, size, dir); - debug_dma_sync_single_range_for_device(dev, addr, offset, size, dir); + return dma_sync_single_for_device(dev, addr + offset, size, dir); } static inline void @@ -406,7 +466,9 @@ dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, const struct dma_map_ops *ops = get_dma_ops(dev); BUG_ON(!valid_dma_direction(dir)); - if (ops->sync_sg_for_cpu) + if (dma_is_direct(ops)) + dma_direct_sync_sg_for_cpu(dev, sg, nelems, dir); + else if (ops->sync_sg_for_cpu) ops->sync_sg_for_cpu(dev, sg, nelems, dir); debug_dma_sync_sg_for_cpu(dev, sg, nelems, dir); } @@ -418,7 +480,9 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, const struct dma_map_ops *ops = get_dma_ops(dev); BUG_ON(!valid_dma_direction(dir)); - if (ops->sync_sg_for_device) + if (dma_is_direct(ops)) + dma_direct_sync_sg_for_device(dev, sg, nelems, dir); + else if (ops->sync_sg_for_device) ops->sync_sg_for_device(dev, sg, nelems, dir); debug_dma_sync_sg_for_device(dev, sg, nelems, dir); @@ -431,16 +495,8 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, #define dma_map_page(d, p, o, s, r) dma_map_page_attrs(d, p, o, s, r, 0) #define dma_unmap_page(d, a, s, r) dma_unmap_page_attrs(d, a, s, r, 0) -static inline void -dma_cache_sync(struct device *dev, void *vaddr, size_t size, - enum dma_data_direction dir) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - - BUG_ON(!valid_dma_direction(dir)); - if (ops->cache_sync) - ops->cache_sync(dev, vaddr, size, dir); -} +void dma_cache_sync(struct device *dev, void *vaddr, size_t size, + enum dma_data_direction dir); extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, dma_addr_t dma_addr, size_t size, @@ -455,107 +511,29 @@ void *dma_common_pages_remap(struct page **pages, size_t size, const void *caller); void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags); -/** - * dma_mmap_attrs - map a coherent DMA allocation into user space - * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices - * @vma: vm_area_struct describing requested user mapping - * @cpu_addr: kernel CPU-view address returned from dma_alloc_attrs - * @handle: device-view address returned from dma_alloc_attrs - * @size: size of memory originally requested in dma_alloc_attrs - * @attrs: attributes of mapping properties requested in dma_alloc_attrs - * - * Map a coherent DMA buffer previously allocated by dma_alloc_attrs - * into user space. The coherent DMA buffer must not be freed by the - * driver until the user space mapping has been released. 
- */ -static inline int -dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, - dma_addr_t dma_addr, size_t size, unsigned long attrs) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - BUG_ON(!ops); - if (ops->mmap) - return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs); - return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); -} +int __init dma_atomic_pool_init(gfp_t gfp, pgprot_t prot); +bool dma_in_atomic_pool(void *start, size_t size); +void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags); +bool dma_free_from_pool(void *start, size_t size); +int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs); #define dma_mmap_coherent(d, v, c, h, s) dma_mmap_attrs(d, v, c, h, s, 0) int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs); -static inline int -dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr, - dma_addr_t dma_addr, size_t size, - unsigned long attrs) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - BUG_ON(!ops); - if (ops->get_sgtable) - return ops->get_sgtable(dev, sgt, cpu_addr, dma_addr, size, - attrs); - return dma_common_get_sgtable(dev, sgt, cpu_addr, dma_addr, size, - attrs); -} - +int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs); #define dma_get_sgtable(d, t, v, h, s) dma_get_sgtable_attrs(d, t, v, h, s, 0) -#ifndef arch_dma_alloc_attrs -#define arch_dma_alloc_attrs(dev) (true) -#endif - -static inline void *dma_alloc_attrs(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag, - unsigned long attrs) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - void *cpu_addr; - - BUG_ON(!ops); - WARN_ON_ONCE(dev && !dev->coherent_dma_mask); - - if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr)) - return cpu_addr; - - /* let the implementation decide on the zone to allocate from: */ - flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM); - - if (!arch_dma_alloc_attrs(&dev)) - return NULL; - if (!ops->alloc) - return NULL; - - cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs); - debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr); - return cpu_addr; -} - -static inline void dma_free_attrs(struct device *dev, size_t size, - void *cpu_addr, dma_addr_t dma_handle, - unsigned long attrs) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - - BUG_ON(!ops); - - if (dma_release_from_dev_coherent(dev, get_order(size), cpu_addr)) - return; - /* - * On non-coherent platforms which implement DMA-coherent buffers via - * non-cacheable remaps, ops->free() may call vunmap(). Thus getting - * this far in IRQ context is a) at risk of a BUG_ON() or trying to - * sleep on some machines, and b) an indication that the driver is - * probably misusing the coherent API anyway. 
- */ - WARN_ON(irqs_disabled()); - - if (!ops->free || !cpu_addr) - return; - - debug_dma_free_coherent(dev, size, cpu_addr, dma_handle); - ops->free(dev, size, cpu_addr, dma_handle, attrs); -} +void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle, + gfp_t flag, unsigned long attrs); +void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr, + dma_addr_t dma_handle, unsigned long attrs); static inline void *dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp) @@ -573,43 +551,16 @@ static inline void dma_free_coherent(struct device *dev, size_t size, static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) { - const struct dma_map_ops *ops = get_dma_ops(dev); - debug_dma_mapping_error(dev, dma_addr); - if (ops->mapping_error) - return ops->mapping_error(dev, dma_addr); - return 0; -} - -static inline void dma_check_mask(struct device *dev, u64 mask) -{ - if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1))) - dev_warn(dev, "SME is active, device will require DMA bounce buffers\n"); -} - -static inline int dma_supported(struct device *dev, u64 mask) -{ - const struct dma_map_ops *ops = get_dma_ops(dev); - - if (!ops) - return 0; - if (!ops->dma_supported) - return 1; - return ops->dma_supported(dev, mask); -} - -#ifndef HAVE_ARCH_DMA_SET_MASK -static inline int dma_set_mask(struct device *dev, u64 mask) -{ - if (!dev->dma_mask || !dma_supported(dev, mask)) - return -EIO; - dma_check_mask(dev, mask); - - *dev->dma_mask = mask; + if (dma_addr == DMA_MAPPING_ERROR) + return -ENOMEM; return 0; } -#endif + +int dma_supported(struct device *dev, u64 mask); +int dma_set_mask(struct device *dev, u64 mask); +int dma_set_coherent_mask(struct device *dev, u64 mask); static inline u64 dma_get_mask(struct device *dev) { @@ -618,21 +569,6 @@ static inline u64 dma_get_mask(struct device *dev) return DMA_BIT_MASK(32); } -#ifdef CONFIG_ARCH_HAS_DMA_SET_COHERENT_MASK -int dma_set_coherent_mask(struct device *dev, u64 mask); -#else -static inline int dma_set_coherent_mask(struct device *dev, u64 mask) -{ - if (!dma_supported(dev, mask)) - return -EIO; - - dma_check_mask(dev, mask); - - dev->coherent_dma_mask = mask; - return 0; -} -#endif - /* * Set both the DMA mask and the coherent DMA mask to the same thing. * Note that we don't check the return value from dma_set_coherent_mask() @@ -676,8 +612,7 @@ static inline unsigned int dma_get_max_seg_size(struct device *dev) return SZ_64K; } -static inline unsigned int dma_set_max_seg_size(struct device *dev, - unsigned int size) +static inline int dma_set_max_seg_size(struct device *dev, unsigned int size) { if (dev->dma_parms) { dev->dma_parms->max_segment_size = size; @@ -709,12 +644,13 @@ static inline unsigned long dma_max_pfn(struct device *dev) } #endif +/* + * Please always use dma_alloc_coherent instead as it already zeroes the memory! 
+ */ static inline void *dma_zalloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag) { - void *ret = dma_alloc_coherent(dev, size, dma_handle, - flag | __GFP_ZERO); - return ret; + return dma_alloc_coherent(dev, size, dma_handle, flag); } static inline int dma_get_cache_alignment(void) diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h index 9051b055beec..69b36ed31a99 100644 --- a/include/linux/dma-noncoherent.h +++ b/include/linux/dma-noncoherent.h @@ -38,7 +38,10 @@ pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot, void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size, enum dma_data_direction direction); #else -#define arch_dma_cache_sync NULL +static inline void arch_dma_cache_sync(struct device *dev, void *vaddr, + size_t size, enum dma_data_direction direction) +{ +} #endif /* CONFIG_DMA_NONCOHERENT_CACHE_SYNC */ #ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE @@ -69,4 +72,6 @@ static inline void arch_sync_dma_for_cpu_all(struct device *dev) } #endif /* CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL */ +void arch_dma_prep_coherent(struct page *page, size_t size); + #endif /* _LINUX_DMA_NONCOHERENT_H */ diff --git a/include/linux/dmar.h b/include/linux/dmar.h index 843a41ba7e28..f8af1d770520 100644 --- a/include/linux/dmar.h +++ b/include/linux/dmar.h @@ -39,6 +39,7 @@ struct acpi_dmar_header; /* DMAR Flags */ #define DMAR_INTR_REMAP 0x1 #define DMAR_X2APIC_OPT_OUT 0x2 +#define DMAR_PLATFORM_OPT_IN 0x4 struct intel_iommu; @@ -170,6 +171,8 @@ static inline int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert) { return 0; } #endif /* CONFIG_IRQ_REMAP */ +extern bool dmar_platform_optin(void); + #else /* CONFIG_DMAR_TABLE */ static inline int dmar_device_add(void *handle) @@ -182,6 +185,11 @@ static inline int dmar_device_remove(void *handle) return 0; } +static inline bool dmar_platform_optin(void) +{ + return false; +} + #endif /* CONFIG_DMAR_TABLE */ struct irte { diff --git a/include/linux/elevator.h b/include/linux/elevator.h index 015bb59c0331..2e9e2763bf47 100644 --- a/include/linux/elevator.h +++ b/include/linux/elevator.h @@ -23,74 +23,6 @@ enum elv_merge { ELEVATOR_DISCARD_MERGE = 3, }; -typedef enum elv_merge (elevator_merge_fn) (struct request_queue *, struct request **, - struct bio *); - -typedef void (elevator_merge_req_fn) (struct request_queue *, struct request *, struct request *); - -typedef void (elevator_merged_fn) (struct request_queue *, struct request *, enum elv_merge); - -typedef int (elevator_allow_bio_merge_fn) (struct request_queue *, - struct request *, struct bio *); - -typedef int (elevator_allow_rq_merge_fn) (struct request_queue *, - struct request *, struct request *); - -typedef void (elevator_bio_merged_fn) (struct request_queue *, - struct request *, struct bio *); - -typedef int (elevator_dispatch_fn) (struct request_queue *, int); - -typedef void (elevator_add_req_fn) (struct request_queue *, struct request *); -typedef struct request *(elevator_request_list_fn) (struct request_queue *, struct request *); -typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *); -typedef int (elevator_may_queue_fn) (struct request_queue *, unsigned int); - -typedef void (elevator_init_icq_fn) (struct io_cq *); -typedef void (elevator_exit_icq_fn) (struct io_cq *); -typedef int (elevator_set_req_fn) (struct request_queue *, struct request *, - struct bio *, gfp_t); -typedef void (elevator_put_req_fn) (struct request *); -typedef void 
(elevator_activate_req_fn) (struct request_queue *, struct request *); -typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct request *); - -typedef int (elevator_init_fn) (struct request_queue *, - struct elevator_type *e); -typedef void (elevator_exit_fn) (struct elevator_queue *); -typedef void (elevator_registered_fn) (struct request_queue *); - -struct elevator_ops -{ - elevator_merge_fn *elevator_merge_fn; - elevator_merged_fn *elevator_merged_fn; - elevator_merge_req_fn *elevator_merge_req_fn; - elevator_allow_bio_merge_fn *elevator_allow_bio_merge_fn; - elevator_allow_rq_merge_fn *elevator_allow_rq_merge_fn; - elevator_bio_merged_fn *elevator_bio_merged_fn; - - elevator_dispatch_fn *elevator_dispatch_fn; - elevator_add_req_fn *elevator_add_req_fn; - elevator_activate_req_fn *elevator_activate_req_fn; - elevator_deactivate_req_fn *elevator_deactivate_req_fn; - - elevator_completed_req_fn *elevator_completed_req_fn; - - elevator_request_list_fn *elevator_former_req_fn; - elevator_request_list_fn *elevator_latter_req_fn; - - elevator_init_icq_fn *elevator_init_icq_fn; /* see iocontext.h */ - elevator_exit_icq_fn *elevator_exit_icq_fn; /* ditto */ - - elevator_set_req_fn *elevator_set_req_fn; - elevator_put_req_fn *elevator_put_req_fn; - - elevator_may_queue_fn *elevator_may_queue_fn; - - elevator_init_fn *elevator_init_fn; - elevator_exit_fn *elevator_exit_fn; - elevator_registered_fn *elevator_registered_fn; -}; - struct blk_mq_alloc_data; struct blk_mq_hw_ctx; @@ -137,17 +69,14 @@ struct elevator_type struct kmem_cache *icq_cache; /* fields provided by elevator implementation */ - union { - struct elevator_ops sq; - struct elevator_mq_ops mq; - } ops; + struct elevator_mq_ops ops; + size_t icq_size; /* see iocontext.h */ size_t icq_align; /* ditto */ struct elv_fs_entry *elevator_attrs; char elevator_name[ELV_NAME_MAX]; const char *elevator_alias; struct module *elevator_owner; - bool uses_mq; #ifdef CONFIG_BLK_DEBUG_FS const struct blk_mq_debugfs_attr *queue_debugfs_attrs; const struct blk_mq_debugfs_attr *hctx_debugfs_attrs; @@ -175,40 +104,25 @@ struct elevator_queue struct kobject kobj; struct mutex sysfs_lock; unsigned int registered:1; - unsigned int uses_mq:1; DECLARE_HASHTABLE(hash, ELV_HASH_BITS); }; /* * block elevator interface */ -extern void elv_dispatch_sort(struct request_queue *, struct request *); -extern void elv_dispatch_add_tail(struct request_queue *, struct request *); -extern void elv_add_request(struct request_queue *, struct request *, int); -extern void __elv_add_request(struct request_queue *, struct request *, int); extern enum elv_merge elv_merge(struct request_queue *, struct request **, struct bio *); extern void elv_merge_requests(struct request_queue *, struct request *, struct request *); extern void elv_merged_request(struct request_queue *, struct request *, enum elv_merge); -extern void elv_bio_merged(struct request_queue *q, struct request *, - struct bio *); extern bool elv_attempt_insert_merge(struct request_queue *, struct request *); -extern void elv_requeue_request(struct request_queue *, struct request *); extern struct request *elv_former_request(struct request_queue *, struct request *); extern struct request *elv_latter_request(struct request_queue *, struct request *); -extern int elv_may_queue(struct request_queue *, unsigned int); -extern void elv_completed_request(struct request_queue *, struct request *); -extern int elv_set_request(struct request_queue *q, struct request *rq, - struct bio *bio, gfp_t gfp_mask); 
-extern void elv_put_request(struct request_queue *, struct request *); -extern void elv_drain_elevator(struct request_queue *); /* * io scheduler registration */ -extern void __init load_default_elevator_module(void); extern int elv_register(struct elevator_type *); extern void elv_unregister(struct elevator_type *); @@ -260,9 +174,5 @@ enum { #define rq_entry_fifo(ptr) list_entry((ptr), struct request, queuelist) #define rq_fifo_clear(rq) list_del_init(&(rq)->queuelist) -#else /* CONFIG_BLOCK */ - -static inline void load_default_elevator_module(void) { } - #endif /* CONFIG_BLOCK */ #endif diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h index 572e11bb8696..2c0af7b00715 100644 --- a/include/linux/etherdevice.h +++ b/include/linux/etherdevice.h @@ -32,6 +32,7 @@ struct device; int eth_platform_get_mac_address(struct device *dev, u8 *mac_addr); unsigned char *arch_get_platform_mac_address(void); +int nvmem_get_mac_address(struct device *dev, void *addrbuf); u32 eth_get_headlen(void *data, unsigned int max_len); __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev); extern const struct header_ops eth_header_ops; diff --git a/include/linux/export.h b/include/linux/export.h index ce764a5d2ee4..fd8711ed9ac4 100644 --- a/include/linux/export.h +++ b/include/linux/export.h @@ -92,22 +92,22 @@ struct kernel_symbol { */ #define __EXPORT_SYMBOL(sym, sec) -#elif defined(__KSYM_DEPS__) +#elif defined(CONFIG_TRIM_UNUSED_KSYMS) + +#include /* * For fine grained build dependencies, we want to tell the build system * about each possible exported symbol even if they're not actually exported. - * We use a string pattern that is unlikely to be valid code that the build - * system filters out from the preprocessor output (see ksym_dep_filter - * in scripts/Kbuild.include). + * We use a symbol pattern __ksym_marker_ that the build system filters + * from the $(NM) output (see scripts/gen_ksymdeps.sh). These symbols are + * discarded in the final link stage. 
*/ -#define __EXPORT_SYMBOL(sym, sec) === __KSYM_##sym === - -#elif defined(CONFIG_TRIM_UNUSED_KSYMS) - -#include +#define __ksym_marker(sym) \ + static int __ksym_marker_##sym[0] __section(".discard.ksym") __used #define __EXPORT_SYMBOL(sym, sec) \ + __ksym_marker(sym); \ __cond_export_sym(sym, sec, __is_defined(__KSYM_##sym)) #define __cond_export_sym(sym, sec, conf) \ ___cond_export_sym(sym, sec, conf) diff --git a/include/linux/fanotify.h b/include/linux/fanotify.h index a5a60691e48b..9e2142795335 100644 --- a/include/linux/fanotify.h +++ b/include/linux/fanotify.h @@ -37,10 +37,11 @@ /* Events that user can request to be notified on */ #define FANOTIFY_EVENTS (FAN_ACCESS | FAN_MODIFY | \ - FAN_CLOSE | FAN_OPEN) + FAN_CLOSE | FAN_OPEN | FAN_OPEN_EXEC) /* Events that require a permission response from user */ -#define FANOTIFY_PERM_EVENTS (FAN_OPEN_PERM | FAN_ACCESS_PERM) +#define FANOTIFY_PERM_EVENTS (FAN_OPEN_PERM | FAN_ACCESS_PERM | \ + FAN_OPEN_EXEC_PERM) /* Extra flags that may be reported with event or control handling of events */ #define FANOTIFY_EVENT_FLAGS (FAN_EVENT_ON_CHILD | FAN_ONDIR) diff --git a/include/linux/fdtable.h b/include/linux/fdtable.h index 41615f38bcff..f07c55ea0c22 100644 --- a/include/linux/fdtable.h +++ b/include/linux/fdtable.h @@ -121,6 +121,7 @@ extern void __fd_install(struct files_struct *files, unsigned int fd, struct file *file); extern int __close_fd(struct files_struct *files, unsigned int fd); +extern int __close_fd_get_file(unsigned int fd, struct file **res); extern struct kmem_cache *files_cachep; diff --git a/include/linux/filter.h b/include/linux/filter.h index a8b9d90a8042..8c8544b375eb 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -675,24 +675,10 @@ static inline u32 bpf_ctx_off_adjust_machine(u32 size) return size; } -static inline bool bpf_ctx_narrow_align_ok(u32 off, u32 size_access, - u32 size_default) -{ - size_default = bpf_ctx_off_adjust_machine(size_default); - size_access = bpf_ctx_off_adjust_machine(size_access); - -#ifdef __LITTLE_ENDIAN - return (off & (size_default - 1)) == 0; -#else - return (off & (size_default - 1)) + size_access == size_default; -#endif -} - static inline bool bpf_ctx_narrow_access_ok(u32 off, u32 size, u32 size_default) { - return bpf_ctx_narrow_align_ok(off, size, size_default) && - size <= size_default && (size & (size - 1)) == 0; + return size <= size_default && (size & (size - 1)) == 0; } #define bpf_classic_proglen(fprog) (fprog->len * sizeof(fprog->filter[0])) @@ -739,6 +725,13 @@ void bpf_prog_free(struct bpf_prog *fp); bool bpf_opcode_in_insntable(u8 code); +void bpf_prog_free_linfo(struct bpf_prog *prog); +void bpf_prog_fill_jited_linfo(struct bpf_prog *prog, + const u32 *insn_to_jit_off); +int bpf_prog_alloc_jited_linfo(struct bpf_prog *prog); +void bpf_prog_free_jited_linfo(struct bpf_prog *prog); +void bpf_prog_free_unused_jited_linfo(struct bpf_prog *prog); + struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags); struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size, gfp_t gfp_extra_flags); diff --git a/include/linux/firmware/intel/stratix10-smc.h b/include/linux/firmware/intel/stratix10-smc.h new file mode 100644 index 000000000000..5be5dab50b13 --- /dev/null +++ b/include/linux/firmware/intel/stratix10-smc.h @@ -0,0 +1,312 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2017-2018, Intel Corporation + */ + +#ifndef __STRATIX10_SMC_H +#define __STRATIX10_SMC_H + +#include +#include + +/** + * This file defines 
the Secure Monitor Call (SMC) message protocol used by the
+ * service layer driver in the normal world (EL1) to communicate with the
+ * secure monitor software at Secure Monitor Exception Level 3 (EL3).
+ *
+ * This file is shared with the secure firmware (FW), which is out of the
+ * kernel tree.
+ *
+ * An ARM SMC instruction takes a function identifier and up to 6 64-bit
+ * register values as arguments, and can return up to 4 64-bit register
+ * values. The operation of the secure monitor is determined by the parameter
+ * values passed in through registers.
+ *
+ * EL1 and EL3 communicate pointers as physical addresses rather than
+ * virtual addresses.
+ *
+ * Functions specified by the ARM SMC Calling Convention:
+ *
+ * FAST call executes atomic operations, returns when the requested operation
+ * has completed.
+ * STD call starts an operation which can be preempted by a non-secure
+ * interrupt. The call can return before the requested operation has
+ * completed.
+ *
+ * a0..a7 are used as register names in the descriptions below; on arm32
+ * that translates to r0..r7 and on arm64 to w0..w7.
+ */
+
+/**
+ * @func_num: function ID
+ */
+#define INTEL_SIP_SMC_STD_CALL_VAL(func_num) \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_STD_CALL, ARM_SMCCC_SMC_64, \
+ ARM_SMCCC_OWNER_SIP, (func_num))
+
+#define INTEL_SIP_SMC_FAST_CALL_VAL(func_num) \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64, \
+ ARM_SMCCC_OWNER_SIP, (func_num))
+
+/**
+ * Return values of INTEL_SIP_SMC_* calls
+ *
+ * INTEL_SIP_SMC_RETURN_UNKNOWN_FUNCTION:
+ * Secure monitor software doesn't recognize the request.
+ *
+ * INTEL_SIP_SMC_STATUS_OK:
+ * FPGA configuration completed successfully. In the case of an FPGA
+ * configuration write operation, it means the secure monitor software can
+ * accept the next chunk of FPGA configuration data.
+ *
+ * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY:
+ * In the case of an FPGA configuration write operation, it means the secure
+ * monitor software is still processing the previous data and can't accept
+ * the next chunk of data. The service driver needs to issue an
+ * INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE call to query the
+ * completed block(s).
+ *
+ * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR:
+ * An error occurred during the FPGA configuration process.
+ *
+ * INTEL_SIP_SMC_REG_ERROR:
+ * An error occurred during a read or write operation on the protected
+ * registers.
+ *
+ * INTEL_SIP_SMC_RSU_ERROR:
+ * An error occurred during a remote status update.
+ */
+#define INTEL_SIP_SMC_RETURN_UNKNOWN_FUNCTION 0xFFFFFFFF
+#define INTEL_SIP_SMC_STATUS_OK 0x0
+#define INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY 0x1
+#define INTEL_SIP_SMC_FPGA_CONFIG_STATUS_REJECTED 0x2
+#define INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR 0x4
+#define INTEL_SIP_SMC_REG_ERROR 0x5
+#define INTEL_SIP_SMC_RSU_ERROR 0x7
+
+/**
+ * Request INTEL_SIP_SMC_FPGA_CONFIG_START
+ *
+ * Sync call used by the service driver at EL1 to request the FPGA in EL3 to
+ * be prepared to receive a new configuration.
+ *
+ * Call register usage:
+ * a0: INTEL_SIP_SMC_FPGA_CONFIG_START.
+ * a1: flag for full or partial configuration. 0 for full and 1 for partial
+ * configuration.
+ * a2-7: not used.
+ *
+ * Return status:
+ * a0: INTEL_SIP_SMC_STATUS_OK, or INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR.
+ * a1-3: not used.
+ */ +#define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_START 1 +#define INTEL_SIP_SMC_FPGA_CONFIG_START \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_START) + +/** + * Request INTEL_SIP_SMC_FPGA_CONFIG_WRITE + * + * Async call used by service driver at EL1 to provide FPGA configuration data + * to secure world. + * + * Call register usage: + * a0: INTEL_SIP_SMC_FPGA_CONFIG_WRITE. + * a1: 64bit physical address of the configuration data memory block + * a2: Size of configuration data block. + * a3-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK, INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY or + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. + * a1: 64bit physical address of 1st completed memory block if any completed + * block, otherwise zero value. + * a2: 64bit physical address of 2nd completed memory block if any completed + * block, otherwise zero value. + * a3: 64bit physical address of 3rd completed memory block if any completed + * block, otherwise zero value. + */ +#define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_WRITE 2 +#define INTEL_SIP_SMC_FPGA_CONFIG_WRITE \ + INTEL_SIP_SMC_STD_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_WRITE) + +/** + * Request INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE + * + * Sync call used by service driver at EL1 to track the completed write + * transactions. This request is called after INTEL_SIP_SMC_FPGA_CONFIG_WRITE + * call returns INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY. + * + * Call register usage: + * a0: INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE. + * a1-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK, INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY or + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. + * a1: 64bit physical address of 1st completed memory block. + * a2: 64bit physical address of 2nd completed memory block if + * any completed block, otherwise zero value. + * a3: 64bit physical address of 3rd completed memory block if + * any completed block, otherwise zero value. + */ +#define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_COMPLETED_WRITE 3 +#define INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE \ +INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_COMPLETED_WRITE) + +/** + * Request INTEL_SIP_SMC_FPGA_CONFIG_ISDONE + * + * Sync call used by service driver at EL1 to inform secure world that all + * data are sent, to check whether or not the secure world had completed + * the FPGA configuration process. + * + * Call register usage: + * a0: INTEL_SIP_SMC_FPGA_CONFIG_ISDONE. + * a1-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK, INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY or + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. + * a1-3: not used. + */ +#define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_ISDONE 4 +#define INTEL_SIP_SMC_FPGA_CONFIG_ISDONE \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_ISDONE) + +/** + * Request INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM + * + * Sync call used by service driver at EL1 to query the physical address of + * memory block reserved by secure monitor software. + * + * Call register usage: + * a0:INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM. + * a1-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. + * a1: start of physical address of reserved memory block. + * a2: size of reserved memory block. + * a3: not used. 
+ */ +#define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_GET_MEM 5 +#define INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_GET_MEM) + +/** + * Request INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK + * + * For SMC loop-back mode only, used for internal integration, debugging + * or troubleshooting. + * + * Call register usage: + * a0: INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK. + * a1-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. + * a1-3: not used. + */ +#define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_LOOPBACK 6 +#define INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_LOOPBACK) + +/* + * Request INTEL_SIP_SMC_REG_READ + * + * Read a protected register at EL3 + * + * Call register usage: + * a0: INTEL_SIP_SMC_REG_READ. + * a1: register address. + * a2-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_REG_ERROR. + * a1: value in the register + * a2-3: not used. + */ +#define INTEL_SIP_SMC_FUNCID_REG_READ 7 +#define INTEL_SIP_SMC_REG_READ \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_READ) + +/* + * Request INTEL_SIP_SMC_REG_WRITE + * + * Write a protected register at EL3 + * + * Call register usage: + * a0: INTEL_SIP_SMC_REG_WRITE. + * a1: register address + * a2: value to program into register. + * a3-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_REG_ERROR. + * a1-3: not used. + */ +#define INTEL_SIP_SMC_FUNCID_REG_WRITE 8 +#define INTEL_SIP_SMC_REG_WRITE \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_WRITE) + +/* + * Request INTEL_SIP_SMC_FUNCID_REG_UPDATE + * + * Update one or more bits in a protected register at EL3 using a + * read-modify-write operation. + * + * Call register usage: + * a0: INTEL_SIP_SMC_REG_UPDATE. + * a1: register address + * a2: write Mask. + * a3: value to write. + * a4-7: not used. + * + * Return status: + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_REG_ERROR. + * a1-3: Not used. + */ +#define INTEL_SIP_SMC_FUNCID_REG_UPDATE 9 +#define INTEL_SIP_SMC_REG_UPDATE \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_UPDATE) + +/* + * Request INTEL_SIP_SMC_RSU_STATUS + * + * Request remote status update boot log, call is synchronous. + * + * Call register usage: + * a0 INTEL_SIP_SMC_RSU_STATUS + * a1-7 not used + * + * Return status + * a0: Current Image + * a1: Last Failing Image + * a2: Version | State + * a3: Error details | Error location + * + * Or + * + * a0: INTEL_SIP_SMC_RSU_ERROR + */ +#define INTEL_SIP_SMC_FUNCID_RSU_STATUS 11 +#define INTEL_SIP_SMC_RSU_STATUS \ + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_STATUS) + +/* + * Request INTEL_SIP_SMC_RSU_UPDATE + * + * Request to set the offset of the bitstream to boot after reboot, call + * is synchronous. 
+ *
+ * Call register usage:
+ * a0 INTEL_SIP_SMC_RSU_UPDATE
+ * a1 64bit physical address of the configuration data memory in flash
+ * a2-7 not used
+ *
+ * Return status
+ * a0 INTEL_SIP_SMC_STATUS_OK
+ */
+#define INTEL_SIP_SMC_FUNCID_RSU_UPDATE 12
+#define INTEL_SIP_SMC_RSU_UPDATE \
+ INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_UPDATE)
+#endif
diff --git a/include/linux/firmware/intel/stratix10-svc-client.h b/include/linux/firmware/intel/stratix10-svc-client.h
new file mode 100644
index 000000000000..e521f172a47a
--- /dev/null
+++ b/include/linux/firmware/intel/stratix10-svc-client.h
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2017-2018, Intel Corporation
+ */
+
+#ifndef __STRATIX10_SVC_CLIENT_H
+#define __STRATIX10_SVC_CLIENT_H
+
+/**
+ * Client names supported by the service layer driver
+ *
+ * fpga: for FPGA configuration
+ * rsu: for remote status update
+ */
+#define SVC_CLIENT_FPGA "fpga"
+#define SVC_CLIENT_RSU "rsu"
+
+/**
+ * Status of the sent command, expressed as bit numbers
+ *
+ * SVC_STATUS_RECONFIG_REQUEST_OK:
+ * Secure firmware accepts the request of FPGA reconfiguration.
+ *
+ * SVC_STATUS_RECONFIG_BUFFER_SUBMITTED:
+ * Service client successfully submits FPGA configuration
+ * data buffer to secure firmware.
+ *
+ * SVC_STATUS_RECONFIG_BUFFER_DONE:
+ * Secure firmware completes data processing, ready to accept the
+ * next WRITE transaction.
+ *
+ * SVC_STATUS_RECONFIG_COMPLETED:
+ * Secure firmware completes FPGA configuration successfully, FPGA should
+ * be in user mode.
+ *
+ * SVC_STATUS_RECONFIG_BUSY:
+ * FPGA configuration is still in process.
+ *
+ * SVC_STATUS_RECONFIG_ERROR:
+ * Error encountered during FPGA configuration.
+ *
+ * SVC_STATUS_RSU_OK:
+ * Secure firmware accepts the request of remote status update (RSU).
+ */
+#define SVC_STATUS_RECONFIG_REQUEST_OK 0
+#define SVC_STATUS_RECONFIG_BUFFER_SUBMITTED 1
+#define SVC_STATUS_RECONFIG_BUFFER_DONE 2
+#define SVC_STATUS_RECONFIG_COMPLETED 3
+#define SVC_STATUS_RECONFIG_BUSY 4
+#define SVC_STATUS_RECONFIG_ERROR 5
+#define SVC_STATUS_RSU_OK 6
+#define SVC_STATUS_RSU_ERROR 7
+
+/**
+ * Flag bit for COMMAND_RECONFIG
+ *
+ * COMMAND_RECONFIG_FLAG_PARTIAL:
+ * Set to the FPGA configuration type (full or partial); the default
+ * is a full reconfiguration.
+ */
+#define COMMAND_RECONFIG_FLAG_PARTIAL 0
+
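Because the SVC_STATUS_* values above are bit numbers rather than masks, a client callback is expected to test the status word it receives with BIT(). A minimal sketch of that convention (inferred from the comment above; the exact encoding is defined by the secure firmware interface, and BIT() comes from linux/bits.h):

	/* Hypothetical helper: true if the reconfig request was accepted. */
	static bool demo_request_accepted(u32 status)
	{
		return status & BIT(SVC_STATUS_RECONFIG_REQUEST_OK);
	}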
+/**
+ * Timeout settings for service clients:
+ * timeout value used by the Stratix10 FPGA manager driver.
+ * timeout value used by the RSU driver.
+ */
+#define SVC_RECONFIG_REQUEST_TIMEOUT_MS 100
+#define SVC_RECONFIG_BUFFER_TIMEOUT_MS 240
+#define SVC_RSU_REQUEST_TIMEOUT_MS 300
+
+struct stratix10_svc_chan;
+
+/**
+ * enum stratix10_svc_command_code - supported service commands
+ *
+ * @COMMAND_NOOP: do a 'dummy' request for integration/debug/trouble-shooting
+ *
+ * @COMMAND_RECONFIG: ask for FPGA configuration preparation; the return
+ * status is SVC_STATUS_RECONFIG_REQUEST_OK
+ *
+ * @COMMAND_RECONFIG_DATA_SUBMIT: submit buffer(s) of bit-stream data for the
+ * FPGA configuration; the return status is
+ * SVC_STATUS_RECONFIG_BUFFER_SUBMITTED or SVC_STATUS_RECONFIG_ERROR
+ *
+ * @COMMAND_RECONFIG_DATA_CLAIM: check the status of the configuration; the
+ * return status is SVC_STATUS_RECONFIG_COMPLETED, SVC_STATUS_RECONFIG_BUSY,
+ * or SVC_STATUS_RECONFIG_ERROR
+ *
+ * @COMMAND_RECONFIG_STATUS: check the status of the configuration; the
+ * return status is SVC_STATUS_RECONFIG_COMPLETED, SVC_STATUS_RECONFIG_BUSY,
+ * or SVC_STATUS_RECONFIG_ERROR
+ *
+ * @COMMAND_RSU_STATUS: request the remote system update boot log; the return
+ * status is the log data or SVC_STATUS_RSU_ERROR
+ *
+ * @COMMAND_RSU_UPDATE: set the offset of the bitstream to boot after reboot;
+ * the return status is SVC_STATUS_RSU_OK or SVC_STATUS_RSU_ERROR
+ */
+enum stratix10_svc_command_code {
+	COMMAND_NOOP = 0,
+	COMMAND_RECONFIG,
+	COMMAND_RECONFIG_DATA_SUBMIT,
+	COMMAND_RECONFIG_DATA_CLAIM,
+	COMMAND_RECONFIG_STATUS,
+	COMMAND_RSU_STATUS,
+	COMMAND_RSU_UPDATE
+};
+
+/**
+ * struct stratix10_svc_client_msg - message sent by client to service
+ * @payload: starting address of the data to be processed
+ * @payload_length: data size in bytes
+ * @command: service command
+ * @arg: args to be passed via registers and not physically mapped buffers
+ */
+struct stratix10_svc_client_msg {
+	void *payload;
+	size_t payload_length;
+	enum stratix10_svc_command_code command;
+	u64 arg[3];
+};
+
+/**
+ * struct stratix10_svc_command_config_type - config type
+ * @flags: flag bit for the type of FPGA configuration
+ */
+struct stratix10_svc_command_config_type {
+	u32 flags;
+};
+
+/**
+ * struct stratix10_svc_cb_data - callback data structure from service layer
+ * @status: the status of the sent command
+ * @kaddr1: address of 1st completed data block
+ * @kaddr2: address of 2nd completed data block
+ * @kaddr3: address of 3rd completed data block
+ */
+struct stratix10_svc_cb_data {
+	u32 status;
+	void *kaddr1;
+	void *kaddr2;
+	void *kaddr3;
+};
+
+/**
+ * struct stratix10_svc_client - service client structure
+ * @dev: the client device
+ * @receive_cb: callback to provide the service client with the received data
+ * @priv: client private data
+ */
+struct stratix10_svc_client {
+	struct device *dev;
+	void (*receive_cb)(struct stratix10_svc_client *client,
+			   struct stratix10_svc_cb_data *cb_data);
+	void *priv;
+};
+
+/**
+ * stratix10_svc_request_channel_byname() - request service channel
+ * @client: identity of the client requesting the channel
+ * @name: supported client name defined above
+ *
+ * Return: a pointer to the channel assigned to the client on success,
+ * or ERR_PTR() on error.
+ */
+struct stratix10_svc_chan
+*stratix10_svc_request_channel_byname(struct stratix10_svc_client *client,
+				      const char *name);
+
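Taken together, the pieces above suggest the following client flow. This is a minimal sketch only, not code from this patch: the demo_* names are hypothetical, stratix10_svc_send()/stratix10_svc_free_channel() are declared further down in this header, and linux/completion.h, linux/device.h, and linux/err.h are assumed to be included:

	static void demo_receive_cb(struct stratix10_svc_client *client,
				    struct stratix10_svc_cb_data *cb_data)
	{
		struct completion *done = client->priv;

		if (cb_data->status & BIT(SVC_STATUS_RECONFIG_ERROR))
			dev_err(client->dev, "reconfig request failed\n");
		complete(done);
	}

	static int demo_request_reconfig(struct device *dev)
	{
		DECLARE_COMPLETION_ONSTACK(done);
		struct stratix10_svc_command_config_type ctype = { .flags = 0 };
		struct stratix10_svc_client client = {
			.dev = dev,
			.receive_cb = demo_receive_cb,
			.priv = &done,
		};
		struct stratix10_svc_client_msg msg = {
			.command = COMMAND_RECONFIG,
			.payload = &ctype,
			.payload_length = sizeof(ctype),
		};
		struct stratix10_svc_chan *chan;
		int ret;

		chan = stratix10_svc_request_channel_byname(&client,
							    SVC_CLIENT_FPGA);
		if (IS_ERR(chan))
			return PTR_ERR(chan);

		ret = stratix10_svc_send(chan, &msg);
		if (!ret && !wait_for_completion_timeout(&done,
				msecs_to_jiffies(SVC_RECONFIG_REQUEST_TIMEOUT_MS)))
			ret = -ETIMEDOUT;

		stratix10_svc_free_channel(chan);
		return ret;
	}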
+/**
+ * stratix10_svc_free_channel() - free service channel
+ * @chan: service channel to be freed
+ */
+void stratix10_svc_free_channel(struct stratix10_svc_chan *chan);
+
+/**
+ * stratix10_svc_allocate_memory() - allocate memory
+ * @chan: service channel assigned to the client
+ * @size: number of bytes the client requests
+ *
+ * The service layer allocates the requested number of bytes from the memory
+ * pool for the client.
+ *
+ * Return: the starting address of the allocated memory on success, or
+ * ERR_PTR() on error.
+ */
+void *stratix10_svc_allocate_memory(struct stratix10_svc_chan *chan,
+				    size_t size);
+
+/**
+ * stratix10_svc_free_memory() - free allocated memory
+ * @chan: service channel assigned to the client
+ * @kaddr: starting address of the memory to be freed back to the pool
+ */
+void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr);
+
+/**
+ * stratix10_svc_send() - send a message to the remote
+ * @chan: service channel assigned to the client
+ * @msg: message data to be sent, in the format of
+ * struct stratix10_svc_client_msg
+ *
+ * Return: 0 for success, -ENOMEM or -ENOBUFS on error.
+ */
+int stratix10_svc_send(struct stratix10_svc_chan *chan, void *msg);
+
+/**
+ * stratix10_svc_done() - complete service request
+ * @chan: service channel assigned to the client
+ *
+ * This function is used by the service client to inform the service layer
+ * that the client's service requests are completed, or that there was an
+ * error in the request process.
+ */
+void stratix10_svc_done(struct stratix10_svc_chan *chan);
+#endif
+
diff --git a/include/linux/font.h b/include/linux/font.h
index d6821769dd1e..51b91c8b69d5 100644
--- a/include/linux/font.h
+++ b/include/linux/font.h
@@ -32,6 +32,7 @@ struct font_desc {
 #define ACORN8x8_IDX 8
 #define MINI4x6_IDX 9
 #define FONT6x10_IDX 10
+#define TER16x32_IDX 11
 
 extern const struct font_desc font_vga_8x8,
 	font_vga_8x16,
@@ -43,7 +44,8 @@ extern const struct font_desc font_vga_8x8,
 	font_sun_12x22,
 	font_acorn_8x8,
 	font_mini_4x6,
-	font_6x10;
+	font_6x10,
+	font_ter_16x32;
 
 /* Find a font with a specific name */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c95c0807471f..811c77743dad 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1044,10 +1044,15 @@ bool opens_in_grace(struct net *);
 * Obviously, the last two criteria only matter for POSIX locks.
*/ struct file_lock { - struct file_lock *fl_next; /* singly linked list for this inode */ + struct file_lock *fl_blocker; /* The lock, that is blocking us */ struct list_head fl_list; /* link into file_lock_context */ struct hlist_node fl_link; /* node in global lists */ - struct list_head fl_block; /* circular list of blocked processes */ + struct list_head fl_blocked_requests; /* list of requests with + * ->fl_blocker pointing here + */ + struct list_head fl_blocked_member; /* node in + * ->fl_blocker->fl_blocked_requests + */ fl_owner_t fl_owner; unsigned int fl_flags; unsigned char fl_type; @@ -1119,7 +1124,7 @@ extern void locks_remove_file(struct file *); extern void locks_release_private(struct file_lock *); extern void posix_test_lock(struct file *, struct file_lock *); extern int posix_lock_file(struct file *, struct file_lock *, struct file_lock *); -extern int posix_unblock_lock(struct file_lock *); +extern int locks_delete_block(struct file_lock *); extern int vfs_test_lock(struct file *, struct file_lock *); extern int vfs_lock_file(struct file *, unsigned int, struct file_lock *, struct file_lock *); extern int vfs_cancel_lock(struct file *filp, struct file_lock *fl); @@ -1209,7 +1214,7 @@ static inline int posix_lock_file(struct file *filp, struct file_lock *fl, return -ENOLCK; } -static inline int posix_unblock_lock(struct file_lock *waiter) +static inline int locks_delete_block(struct file_lock *waiter) { return -ENOENT; } @@ -2021,7 +2026,7 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp) .ki_filp = filp, .ki_flags = iocb_flags(filp), .ki_hint = ki_hint_validate(file_write_hint(filp)), - .ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0), + .ki_ioprio = get_current_ioprio(), }; } @@ -3264,8 +3269,12 @@ extern int generic_check_addressable(unsigned, u64); extern int buffer_migrate_page(struct address_space *, struct page *, struct page *, enum migrate_mode); +extern int buffer_migrate_page_norefs(struct address_space *, + struct page *, struct page *, + enum migrate_mode); #else #define buffer_migrate_page NULL +#define buffer_migrate_page_norefs NULL #endif extern int setattr_prepare(struct dentry *, struct iattr *); diff --git a/include/linux/fsi-occ.h b/include/linux/fsi-occ.h new file mode 100644 index 000000000000..d4cdc2aa6e33 --- /dev/null +++ b/include/linux/fsi-occ.h @@ -0,0 +1,25 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifndef LINUX_FSI_OCC_H +#define LINUX_FSI_OCC_H + +struct device; + +#define OCC_RESP_CMD_IN_PRG 0xFF +#define OCC_RESP_SUCCESS 0 +#define OCC_RESP_CMD_INVAL 0x11 +#define OCC_RESP_CMD_LEN_INVAL 0x12 +#define OCC_RESP_DATA_INVAL 0x13 +#define OCC_RESP_CHKSUM_ERR 0x14 +#define OCC_RESP_INT_ERR 0x15 +#define OCC_RESP_BAD_STATE 0x16 +#define OCC_RESP_CRIT_EXCEPT 0xE0 +#define OCC_RESP_CRIT_INIT 0xE1 +#define OCC_RESP_CRIT_WATCHDOG 0xE2 +#define OCC_RESP_CRIT_OCB 0xE3 +#define OCC_RESP_CRIT_HW 0xE4 + +int fsi_occ_submit(struct device *dev, const void *request, size_t req_len, + void *response, size_t *resp_len); + +#endif /* LINUX_FSI_OCC_H */ diff --git a/include/linux/fsl/mc.h b/include/linux/fsl/mc.h index 9d3f668df7df..741f567253ef 100644 --- a/include/linux/fsl/mc.h +++ b/include/linux/fsl/mc.h @@ -210,8 +210,8 @@ struct mc_cmd_header { }; struct fsl_mc_command { - u64 header; - u64 params[MC_CMD_NUM_OF_PARAMS]; + __le64 header; + __le64 params[MC_CMD_NUM_OF_PARAMS]; }; enum mc_cmd_status { @@ -238,11 +238,11 @@ enum mc_cmd_status { /* Command completion flag */ #define MC_CMD_FLAG_INTR_DIS 0x01 -static inline 
u64 mc_encode_cmd_header(u16 cmd_id, - u32 cmd_flags, - u16 token) +static inline __le64 mc_encode_cmd_header(u16 cmd_id, + u32 cmd_flags, + u16 token) { - u64 header = 0; + __le64 header = 0; struct mc_cmd_header *hdr = (struct mc_cmd_header *)&header; hdr->cmd_id = cpu_to_le16(cmd_id); diff --git a/include/linux/fsnotify.h b/include/linux/fsnotify.h index fd1ce10553bf..2ccb08cb5d6a 100644 --- a/include/linux/fsnotify.h +++ b/include/linux/fsnotify.h @@ -26,30 +26,46 @@ static inline int fsnotify_parent(const struct path *path, struct dentry *dentry return __fsnotify_parent(path, dentry, mask); } -/* simple call site for access decisions */ +/* + * Simple wrapper to consolidate calls fsnotify_parent()/fsnotify() when + * an event is on a path. + */ +static inline int fsnotify_path(struct inode *inode, const struct path *path, + __u32 mask) +{ + int ret = fsnotify_parent(path, NULL, mask); + + if (ret) + return ret; + return fsnotify(inode, mask, path, FSNOTIFY_EVENT_PATH, NULL, 0); +} + +/* Simple call site for access decisions */ static inline int fsnotify_perm(struct file *file, int mask) { + int ret; const struct path *path = &file->f_path; struct inode *inode = file_inode(file); __u32 fsnotify_mask = 0; - int ret; if (file->f_mode & FMODE_NONOTIFY) return 0; if (!(mask & (MAY_READ | MAY_OPEN))) return 0; - if (mask & MAY_OPEN) + if (mask & MAY_OPEN) { fsnotify_mask = FS_OPEN_PERM; - else if (mask & MAY_READ) - fsnotify_mask = FS_ACCESS_PERM; - else - BUG(); - ret = fsnotify_parent(path, NULL, fsnotify_mask); - if (ret) - return ret; + if (file->f_flags & __FMODE_EXEC) { + ret = fsnotify_path(inode, path, FS_OPEN_EXEC_PERM); - return fsnotify(inode, fsnotify_mask, path, FSNOTIFY_EVENT_PATH, NULL, 0); + if (ret) + return ret; + } + } else if (mask & MAY_READ) { + fsnotify_mask = FS_ACCESS_PERM; + } + + return fsnotify_path(inode, path, fsnotify_mask); } /* @@ -180,10 +196,8 @@ static inline void fsnotify_access(struct file *file) if (S_ISDIR(inode->i_mode)) mask |= FS_ISDIR; - if (!(file->f_mode & FMODE_NONOTIFY)) { - fsnotify_parent(path, NULL, mask); - fsnotify(inode, mask, path, FSNOTIFY_EVENT_PATH, NULL, 0); - } + if (!(file->f_mode & FMODE_NONOTIFY)) + fsnotify_path(inode, path, mask); } /* @@ -198,10 +212,8 @@ static inline void fsnotify_modify(struct file *file) if (S_ISDIR(inode->i_mode)) mask |= FS_ISDIR; - if (!(file->f_mode & FMODE_NONOTIFY)) { - fsnotify_parent(path, NULL, mask); - fsnotify(inode, mask, path, FSNOTIFY_EVENT_PATH, NULL, 0); - } + if (!(file->f_mode & FMODE_NONOTIFY)) + fsnotify_path(inode, path, mask); } /* @@ -215,9 +227,10 @@ static inline void fsnotify_open(struct file *file) if (S_ISDIR(inode->i_mode)) mask |= FS_ISDIR; + if (file->f_flags & __FMODE_EXEC) + mask |= FS_OPEN_EXEC; - fsnotify_parent(path, NULL, mask); - fsnotify(inode, mask, path, FSNOTIFY_EVENT_PATH, NULL, 0); + fsnotify_path(inode, path, mask); } /* @@ -233,10 +246,8 @@ static inline void fsnotify_close(struct file *file) if (S_ISDIR(inode->i_mode)) mask |= FS_ISDIR; - if (!(file->f_mode & FMODE_NONOTIFY)) { - fsnotify_parent(path, NULL, mask); - fsnotify(inode, mask, path, FSNOTIFY_EVENT_PATH, NULL, 0); - } + if (!(file->f_mode & FMODE_NONOTIFY)) + fsnotify_path(inode, path, mask); } /* diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h index 135b973e44d1..7639774e7475 100644 --- a/include/linux/fsnotify_backend.h +++ b/include/linux/fsnotify_backend.h @@ -38,6 +38,7 @@ #define FS_DELETE 0x00000200 /* Subfile was deleted */ #define FS_DELETE_SELF 0x00000400 
/* Self was deleted */ #define FS_MOVE_SELF 0x00000800 /* Self was moved */ +#define FS_OPEN_EXEC 0x00001000 /* File was opened for exec */ #define FS_UNMOUNT 0x00002000 /* inode on umount fs */ #define FS_Q_OVERFLOW 0x00004000 /* Event queued overflowed */ @@ -45,6 +46,7 @@ #define FS_OPEN_PERM 0x00010000 /* open event in an permission hook */ #define FS_ACCESS_PERM 0x00020000 /* access event in a permissions hook */ +#define FS_OPEN_EXEC_PERM 0x00040000 /* open/exec event in a permission hook */ #define FS_EXCL_UNLINK 0x04000000 /* do not send events if object is unlinked */ #define FS_ISDIR 0x40000000 /* event occurred against dir */ @@ -62,11 +64,13 @@ #define FS_EVENTS_POSS_ON_CHILD (FS_ACCESS | FS_MODIFY | FS_ATTRIB |\ FS_CLOSE_WRITE | FS_CLOSE_NOWRITE | FS_OPEN |\ FS_MOVED_FROM | FS_MOVED_TO | FS_CREATE |\ - FS_DELETE | FS_OPEN_PERM | FS_ACCESS_PERM) + FS_DELETE | FS_OPEN_PERM | FS_ACCESS_PERM | \ + FS_OPEN_EXEC | FS_OPEN_EXEC_PERM) #define FS_MOVE (FS_MOVED_FROM | FS_MOVED_TO) -#define ALL_FSNOTIFY_PERM_EVENTS (FS_OPEN_PERM | FS_ACCESS_PERM) +#define ALL_FSNOTIFY_PERM_EVENTS (FS_OPEN_PERM | FS_ACCESS_PERM | \ + FS_OPEN_EXEC_PERM) /* Events that can be reported to backends */ #define ALL_FSNOTIFY_EVENTS (FS_ACCESS | FS_MODIFY | FS_ATTRIB | \ @@ -74,7 +78,8 @@ FS_MOVED_FROM | FS_MOVED_TO | FS_CREATE | \ FS_DELETE | FS_DELETE_SELF | FS_MOVE_SELF | \ FS_UNMOUNT | FS_Q_OVERFLOW | FS_IN_IGNORED | \ - FS_OPEN_PERM | FS_ACCESS_PERM | FS_DN_RENAME) + FS_OPEN_PERM | FS_ACCESS_PERM | FS_DN_RENAME | \ + FS_OPEN_EXEC | FS_OPEN_EXEC_PERM) /* Extra flags that may be reported with event or control handling of events */ #define ALL_FSNOTIFY_FLAGS (FS_EXCL_UNLINK | FS_ISDIR | FS_IN_ONESHOT | \ diff --git a/include/linux/futex.h b/include/linux/futex.h index 821ae502d3d8..ccaef0097785 100644 --- a/include/linux/futex.h +++ b/include/linux/futex.h @@ -9,9 +9,6 @@ struct inode; struct mm_struct; struct task_struct; -extern int -handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi); - /* * Futexes are matched on equal values of this key. * The key type depends on whether it's a shared or private mapping. 
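The FS_OPEN_EXEC/FS_OPEN_EXEC_PERM bits above are the in-kernel side of the FAN_OPEN_EXEC and FAN_OPEN_EXEC_PERM fanotify events added earlier in this series. From userspace, the new event is requested like any other fanotify mask bit; a minimal sketch (illustrative only, and it requires a kernel with this series applied):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/fanotify.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		ssize_t len;
		int fd = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);

		if (fd < 0)
			return 1;
		/* Watch a whole mount for opens-for-exec. */
		if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
				  FAN_OPEN_EXEC, AT_FDCWD, "/") < 0)
			return 1;

		while ((len = read(fd, buf, sizeof(buf))) > 0) {
			struct fanotify_event_metadata *md = (void *)buf;

			for (; FAN_EVENT_OK(md, len); md = FAN_EVENT_NEXT(md, len)) {
				if (md->mask & FAN_OPEN_EXEC)
					printf("exec open by pid %d\n", (int)md->pid);
				if (md->fd >= 0)
					close(md->fd);
			}
		}
		return 0;
	}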
@@ -55,11 +52,6 @@ extern void exit_robust_list(struct task_struct *curr); long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout, u32 __user *uaddr2, u32 val2, u32 val3); -#ifdef CONFIG_HAVE_FUTEX_CMPXCHG -#define futex_cmpxchg_enabled 1 -#else -extern int futex_cmpxchg_enabled; -#endif #else static inline void exit_robust_list(struct task_struct *curr) { diff --git a/include/linux/genhd.h b/include/linux/genhd.h index 70fc838e6773..06c0fd594097 100644 --- a/include/linux/genhd.h +++ b/include/linux/genhd.h @@ -17,6 +17,7 @@ #include #include #include +#include #ifdef CONFIG_BLOCK @@ -89,6 +90,7 @@ struct disk_stats { unsigned long merges[NR_STAT_GROUPS]; unsigned long io_ticks; unsigned long time_in_queue; + local_t in_flight[2]; }; #define PARTITION_META_INFO_VOLNAMELTH 64 @@ -122,14 +124,13 @@ struct hd_struct { int make_it_fail; #endif unsigned long stamp; - atomic_t in_flight[2]; #ifdef CONFIG_SMP struct disk_stats __percpu *dkstats; #else struct disk_stats dkstats; #endif struct percpu_ref ref; - struct rcu_head rcu_head; + struct rcu_work rcu_work; }; #define GENHD_FL_REMOVABLE 1 @@ -295,8 +296,11 @@ extern struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, #define part_stat_lock() ({ rcu_read_lock(); get_cpu(); }) #define part_stat_unlock() do { put_cpu(); rcu_read_unlock(); } while (0) -#define __part_stat_add(cpu, part, field, addnd) \ - (per_cpu_ptr((part)->dkstats, (cpu))->field += (addnd)) +#define part_stat_get_cpu(part, field, cpu) \ + (per_cpu_ptr((part)->dkstats, (cpu))->field) + +#define part_stat_get(part, field) \ + part_stat_get_cpu(part, field, smp_processor_id()) #define part_stat_read(part, field) \ ({ \ @@ -333,10 +337,9 @@ static inline void free_part_stats(struct hd_struct *part) #define part_stat_lock() ({ rcu_read_lock(); 0; }) #define part_stat_unlock() rcu_read_unlock() -#define __part_stat_add(cpu, part, field, addnd) \ - ((part)->dkstats.field += addnd) - -#define part_stat_read(part, field) ((part)->dkstats.field) +#define part_stat_get(part, field) ((part)->dkstats.field) +#define part_stat_get_cpu(part, field, cpu) part_stat_get(part, field) +#define part_stat_read(part, field) part_stat_get(part, field) static inline void part_stat_set_all(struct hd_struct *part, int value) { @@ -362,22 +365,33 @@ static inline void free_part_stats(struct hd_struct *part) part_stat_read(part, field[STAT_WRITE]) + \ part_stat_read(part, field[STAT_DISCARD])) -#define part_stat_add(cpu, part, field, addnd) do { \ - __part_stat_add((cpu), (part), field, addnd); \ +#define __part_stat_add(part, field, addnd) \ + (part_stat_get(part, field) += (addnd)) + +#define part_stat_add(part, field, addnd) do { \ + __part_stat_add((part), field, addnd); \ if ((part)->partno) \ - __part_stat_add((cpu), &part_to_disk((part))->part0, \ + __part_stat_add(&part_to_disk((part))->part0, \ field, addnd); \ } while (0) -#define part_stat_dec(cpu, gendiskp, field) \ - part_stat_add(cpu, gendiskp, field, -1) -#define part_stat_inc(cpu, gendiskp, field) \ - part_stat_add(cpu, gendiskp, field, 1) -#define part_stat_sub(cpu, gendiskp, field, subnd) \ - part_stat_add(cpu, gendiskp, field, -subnd) - -void part_in_flight(struct request_queue *q, struct hd_struct *part, - unsigned int inflight[2]); +#define part_stat_dec(gendiskp, field) \ + part_stat_add(gendiskp, field, -1) +#define part_stat_inc(gendiskp, field) \ + part_stat_add(gendiskp, field, 1) +#define part_stat_sub(gendiskp, field, subnd) \ + part_stat_add(gendiskp, field, -subnd) + +#define part_stat_local_dec(gendiskp, 
field) \ + local_dec(&(part_stat_get(gendiskp, field))) +#define part_stat_local_inc(gendiskp, field) \ + local_inc(&(part_stat_get(gendiskp, field))) +#define part_stat_local_read(gendiskp, field) \ + local_read(&(part_stat_get(gendiskp, field))) +#define part_stat_local_read_cpu(gendiskp, field, cpu) \ + local_read(&(part_stat_get_cpu(gendiskp, field, cpu))) + +unsigned int part_in_flight(struct request_queue *q, struct hd_struct *part); void part_in_flight_rw(struct request_queue *q, struct hd_struct *part, unsigned int inflight[2]); void part_dec_in_flight(struct request_queue *q, struct hd_struct *part, @@ -398,8 +412,7 @@ static inline void free_part_info(struct hd_struct *part) kfree(part->info); } -/* block/blk-core.c */ -extern void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part); +void update_io_ticks(struct hd_struct *part, unsigned long now); /* block/genhd.c */ extern void device_add_disk(struct device *parent, struct gendisk *disk, diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 0705164f928c..5f5e25fd6149 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -81,7 +81,7 @@ struct vm_area_struct; * * %__GFP_HARDWALL enforces the cpuset memory allocation policy. * - * %__GFP_THISNODE forces the allocation to be satisified from the requested + * %__GFP_THISNODE forces the allocation to be satisfied from the requested * node with no fallbacks or placement policy enforcements. * * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg. diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h index 8aebcf822082..9ddcf50a3c59 100644 --- a/include/linux/gpio/consumer.h +++ b/include/linux/gpio/consumer.h @@ -163,7 +163,7 @@ int gpiod_is_active_low(const struct gpio_desc *desc); int gpiod_cansleep(const struct gpio_desc *desc); int gpiod_to_irq(const struct gpio_desc *desc); -void gpiod_set_consumer_name(struct gpio_desc *desc, const char *name); +int gpiod_set_consumer_name(struct gpio_desc *desc, const char *name); /* Convert between the old gpio_ and new gpiod_ interfaces */ struct gpio_desc *gpio_to_desc(unsigned gpio); @@ -509,15 +509,17 @@ static inline int gpiod_to_irq(const struct gpio_desc *desc) return -EINVAL; } -static inline void gpiod_set_consumer_name(struct gpio_desc *desc, const char *name) +static inline int gpiod_set_consumer_name(struct gpio_desc *desc, + const char *name) { /* GPIO can never have been requested */ WARN_ON(1); + return -EINVAL; } static inline struct gpio_desc *gpio_to_desc(unsigned gpio) { - return ERR_PTR(-EINVAL); + return NULL; } static inline int desc_to_gpio(const struct gpio_desc *desc) diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h index 2db62b550b95..07cddbf45186 100644 --- a/include/linux/gpio/driver.h +++ b/include/linux/gpio/driver.h @@ -17,6 +17,7 @@ struct device_node; struct seq_file; struct gpio_device; struct module; +enum gpiod_flags; #ifdef CONFIG_GPIOLIB @@ -166,11 +167,6 @@ struct gpio_irq_chip { */ void (*irq_disable)(struct irq_data *data); }; - -static inline struct gpio_irq_chip *to_gpio_irq_chip(struct irq_chip *chip) -{ - return container_of(chip, struct gpio_irq_chip, chip); -} #endif /** @@ -422,7 +418,6 @@ static inline int gpiochip_add(struct gpio_chip *chip) extern void gpiochip_remove(struct gpio_chip *chip); extern int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *chip, void *data); -extern void devm_gpiochip_remove(struct device *dev, struct gpio_chip *chip); extern struct gpio_chip *gpiochip_find(void 
*data, int (*match)(struct gpio_chip *chip, void *data)); @@ -610,7 +605,8 @@ gpiochip_remove_pin_ranges(struct gpio_chip *chip) #endif /* CONFIG_PINCTRL */ struct gpio_desc *gpiochip_request_own_desc(struct gpio_chip *chip, u16 hwnum, - const char *label); + const char *label, + enum gpiod_flags flags); void gpiochip_free_own_desc(struct gpio_desc *desc); #else /* CONFIG_GPIOLIB */ diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 0690679832d4..ea5cdbd8c2c3 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -36,7 +36,31 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size) /* declarations for linux/mm/highmem.c */ unsigned int nr_free_highpages(void); -extern unsigned long totalhigh_pages; +extern atomic_long_t _totalhigh_pages; +static inline unsigned long totalhigh_pages(void) +{ + return (unsigned long)atomic_long_read(&_totalhigh_pages); +} + +static inline void totalhigh_pages_inc(void) +{ + atomic_long_inc(&_totalhigh_pages); +} + +static inline void totalhigh_pages_dec(void) +{ + atomic_long_dec(&_totalhigh_pages); +} + +static inline void totalhigh_pages_add(long count) +{ + atomic_long_add(count, &_totalhigh_pages); +} + +static inline void totalhigh_pages_set(long val) +{ + atomic_long_set(&_totalhigh_pages, val); +} void kmap_flush_unused(void); @@ -51,7 +75,7 @@ static inline struct page *kmap_to_page(void *addr) return virt_to_page(addr); } -#define totalhigh_pages 0UL +static inline unsigned long totalhigh_pages(void) { return 0UL; } #ifndef ARCH_HAS_KMAP static inline void *kmap(struct page *page) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index c6fb869a81c0..66f9ebbb1df3 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -69,6 +69,7 @@ #define LINUX_HMM_H #include +#include #if IS_ENABLED(CONFIG_HMM) @@ -486,6 +487,7 @@ struct hmm_devmem_ops { * @device: device to bind resource to * @ops: memory operations callback * @ref: per CPU refcount + * @page_fault: callback when CPU fault on an unaddressable device page * * This an helper structure for device drivers that do not wish to implement * the gory details related to hotplugging new memoy and allocating struct @@ -493,7 +495,28 @@ struct hmm_devmem_ops { * * Device drivers can directly use ZONE_DEVICE memory on their own if they * wish to do so. + * + * The page_fault() callback must migrate page back, from device memory to + * system memory, so that the CPU can access it. This might fail for various + * reasons (device issues, device have been unplugged, ...). When such error + * conditions happen, the page_fault() callback must return VM_FAULT_SIGBUS and + * set the CPU page table entry to "poisoned". + * + * Note that because memory cgroup charges are transferred to the device memory, + * this should never fail due to memory restrictions. However, allocation + * of a regular system page might still fail because we are out of memory. If + * that happens, the page_fault() callback must return VM_FAULT_OOM. + * + * The page_fault() callback can also try to migrate back multiple pages in one + * chunk, as an optimization. It must, however, prioritize the faulting address + * over all the others. 
*/ +typedef int (*dev_page_fault_t)(struct vm_area_struct *vma, + unsigned long addr, + const struct page *page, + unsigned int flags, + pmd_t *pmdp); + struct hmm_devmem { struct completion completion; unsigned long pfn_first; @@ -503,6 +526,7 @@ struct hmm_devmem { struct dev_pagemap pagemap; const struct hmm_devmem_ops *ops; struct percpu_ref ref; + dev_page_fault_t page_fault; }; /* @@ -512,8 +536,7 @@ struct hmm_devmem { * enough and allocate struct page for it. * * The device driver can wrap the hmm_devmem struct inside a private device - * driver struct. The device driver must call hmm_devmem_remove() before the - * device goes away and before freeing the hmm_devmem struct memory. + * driver struct. */ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, struct device *device, @@ -521,7 +544,6 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, struct device *device, struct resource *res); -void hmm_devmem_remove(struct hmm_devmem *devmem); /* * hmm_devmem_page_set_drvdata - set per-page driver data field diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 4663ee96cf59..381e872bfde0 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -93,7 +93,11 @@ extern bool is_vma_temporary_stack(struct vm_area_struct *vma); extern unsigned long transparent_hugepage_flags; -static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma) +/* + * to be used on vmas which are known to support THP. + * Use transparent_hugepage_enabled otherwise + */ +static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma) { if (vma->vm_flags & VM_NOHUGEPAGE) return false; @@ -117,6 +121,8 @@ static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma) return false; } +bool transparent_hugepage_enabled(struct vm_area_struct *vma); + #define transparent_hugepage_use_zero_page() \ (transparent_hugepage_flags & \ (1< #include #include -#include +#include #include #include #include @@ -50,6 +50,7 @@ struct ide_request { struct scsi_request sreq; u8 sense[SCSI_SENSE_BUFFERSIZE]; u8 type; + void *special; }; static inline struct ide_request *ide_req(struct request *rq) @@ -529,6 +530,10 @@ struct ide_drive_s { struct request_queue *queue; /* request queue */ + bool (*prep_rq)(struct ide_drive_s *, struct request *); + + struct blk_mq_tag_set tag_set; + struct request *rq; /* current request */ void *driver_data; /* extra driver data */ u16 *id; /* identification info */ @@ -612,6 +617,10 @@ struct ide_drive_s { bool sense_rq_armed; struct request *sense_rq; struct request_sense sense_data; + + /* async sense insertion */ + struct work_struct rq_work; + struct list_head rq_list; }; typedef struct ide_drive_s ide_drive_t; @@ -1089,6 +1098,7 @@ extern int ide_pci_clk; int ide_end_rq(ide_drive_t *, struct request *, blk_status_t, unsigned int); void ide_kill_rq(ide_drive_t *, struct request *); +void ide_insert_request_head(ide_drive_t *, struct request *); void __ide_set_handler(ide_drive_t *, ide_handler_t *, unsigned int); void ide_set_handler(ide_drive_t *, ide_handler_t *, unsigned int); @@ -1208,7 +1218,7 @@ extern void ide_stall_queue(ide_drive_t *drive, unsigned long timeout); extern void ide_timer_expiry(struct timer_list *t); extern irqreturn_t ide_intr(int irq, void *dev_id); -extern void do_ide_request(struct request_queue *); +extern blk_status_t ide_queue_rq(struct blk_mq_hw_ctx *, const struct blk_mq_queue_data *); 
extern void ide_requeue_and_plug(ide_drive_t *drive, struct request *rq); void ide_init_disk(struct gendisk *, ide_drive_t *); diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h index 0ef67f837ae1..3b04e72315e1 100644 --- a/include/linux/ieee80211.h +++ b/include/linux/ieee80211.h @@ -812,6 +812,8 @@ enum mesh_config_capab_flags { IEEE80211_MESHCONF_CAPAB_POWER_SAVE_LEVEL = 0x40, }; +#define IEEE80211_MESHCONF_FORM_CONNECTED_TO_GATE 0x1 + /** * mesh channel switch parameters element's flag indicator * @@ -1617,7 +1619,7 @@ struct ieee80211_he_mcs_nss_supp { * struct ieee80211_he_operation - HE capabilities element * * This structure is the "HE operation element" fields as - * described in P802.11ax_D2.0 section 9.4.2.238 + * described in P802.11ax_D3.0 section 9.4.2.238 */ struct ieee80211_he_operation { __le32 he_oper_params; @@ -2009,17 +2011,17 @@ ieee80211_he_ppe_size(u8 ppe_thres_hdr, const u8 *phy_cap_info) } /* HE Operation defines */ -#define IEEE80211_HE_OPERATION_BSS_COLOR_MASK 0x0000003f -#define IEEE80211_HE_OPERATION_DFLT_PE_DURATION_MASK 0x000001c0 -#define IEEE80211_HE_OPERATION_DFLT_PE_DURATION_OFFSET 6 -#define IEEE80211_HE_OPERATION_TWT_REQUIRED 0x00000200 -#define IEEE80211_HE_OPERATION_RTS_THRESHOLD_MASK 0x000ffc00 -#define IEEE80211_HE_OPERATION_RTS_THRESHOLD_OFFSET 10 -#define IEEE80211_HE_OPERATION_PARTIAL_BSS_COLOR 0x00100000 -#define IEEE80211_HE_OPERATION_VHT_OPER_INFO 0x00200000 -#define IEEE80211_HE_OPERATION_MULTI_BSSID_AP 0x10000000 -#define IEEE80211_HE_OPERATION_TX_BSSID_INDICATOR 0x20000000 -#define IEEE80211_HE_OPERATION_BSS_COLOR_DISABLED 0x40000000 +#define IEEE80211_HE_OPERATION_DFLT_PE_DURATION_MASK 0x00000003 +#define IEEE80211_HE_OPERATION_TWT_REQUIRED 0x00000008 +#define IEEE80211_HE_OPERATION_RTS_THRESHOLD_MASK 0x00003ff0 +#define IEEE80211_HE_OPERATION_RTS_THRESHOLD_OFFSET 4 +#define IEEE80211_HE_OPERATION_VHT_OPER_INFO 0x00004000 +#define IEEE80211_HE_OPERATION_CO_LOCATED_BSS 0x00008000 +#define IEEE80211_HE_OPERATION_ER_SU_DISABLE 0x00010000 +#define IEEE80211_HE_OPERATION_BSS_COLOR_MASK 0x3f000000 +#define IEEE80211_HE_OPERATION_BSS_COLOR_OFFSET 24 +#define IEEE80211_HE_OPERATION_PARTIAL_BSS_COLOR 0x40000000 +#define IEEE80211_HE_OPERATION_BSS_COLOR_DISABLED 0x80000000 /* * ieee80211_he_oper_size - calculate 802.11ax HE Operations IE size @@ -2044,7 +2046,7 @@ ieee80211_he_oper_size(const u8 *he_oper_ie) he_oper_params = le32_to_cpu(he_oper->he_oper_params); if (he_oper_params & IEEE80211_HE_OPERATION_VHT_OPER_INFO) oper_len += 3; - if (he_oper_params & IEEE80211_HE_OPERATION_MULTI_BSSID_AP) + if (he_oper_params & IEEE80211_HE_OPERATION_CO_LOCATED_BSS) oper_len++; /* Add the first byte (extension ID) to the total length */ @@ -2685,6 +2687,10 @@ enum ieee80211_tdls_actioncode { */ #define WLAN_EXT_CAPA9_FTM_INITIATOR BIT(7) +/* Defines support for TWT Requester and TWT Responder */ +#define WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT BIT(5) +#define WLAN_EXT_CAPA10_TWT_RESPONDER_SUPPORT BIT(6) + /* TDLS specific payload type in the LLC/SNAP header */ #define WLAN_TDLS_SNAP_RFTYPE 0x2 diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h index c20c7e197d07..627b788ba0ff 100644 --- a/include/linux/if_bridge.h +++ b/include/linux/if_bridge.h @@ -119,6 +119,8 @@ static inline int br_vlan_get_info(const struct net_device *dev, u16 vid, struct net_device *br_fdb_find_port(const struct net_device *br_dev, const unsigned char *addr, __u16 vid); +void br_fdb_clear_offload(const struct net_device *dev, u16 vid); +bool 
br_port_flag_is_set(const struct net_device *dev, unsigned long flag); #else static inline struct net_device * br_fdb_find_port(const struct net_device *br_dev, @@ -127,6 +129,16 @@ br_fdb_find_port(const struct net_device *br_dev, { return NULL; } + +static inline void br_fdb_clear_offload(const struct net_device *dev, u16 vid) +{ +} + +static inline bool +br_port_flag_is_set(const struct net_device *dev, unsigned long flag) +{ + return false; +} #endif #endif diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h index 83ea4df6ab81..4cca4da7a6de 100644 --- a/include/linux/if_vlan.h +++ b/include/linux/if_vlan.h @@ -65,8 +65,7 @@ static inline struct vlan_ethhdr *vlan_eth_hdr(const struct sk_buff *skb) #define VLAN_PRIO_MASK 0xe000 /* Priority Code Point */ #define VLAN_PRIO_SHIFT 13 -#define VLAN_CFI_MASK 0x1000 /* Canonical Format Indicator */ -#define VLAN_TAG_PRESENT VLAN_CFI_MASK +#define VLAN_CFI_MASK 0x1000 /* Canonical Format Indicator / Drop Eligible Indicator */ #define VLAN_VID_MASK 0x0fff /* VLAN Identifier */ #define VLAN_N_VID 4096 @@ -78,10 +77,11 @@ static inline bool is_vlan_dev(const struct net_device *dev) return dev->priv_flags & IFF_802_1Q_VLAN; } -#define skb_vlan_tag_present(__skb) ((__skb)->vlan_tci & VLAN_TAG_PRESENT) -#define skb_vlan_tag_get(__skb) ((__skb)->vlan_tci & ~VLAN_TAG_PRESENT) +#define skb_vlan_tag_present(__skb) ((__skb)->vlan_present) +#define skb_vlan_tag_get(__skb) ((__skb)->vlan_tci) #define skb_vlan_tag_get_id(__skb) ((__skb)->vlan_tci & VLAN_VID_MASK) -#define skb_vlan_tag_get_prio(__skb) ((__skb)->vlan_tci & VLAN_PRIO_MASK) +#define skb_vlan_tag_get_cfi(__skb) (!!((__skb)->vlan_tci & VLAN_CFI_MASK)) +#define skb_vlan_tag_get_prio(__skb) (((__skb)->vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT) static inline int vlan_get_rx_ctag_filter_info(struct net_device *dev) { @@ -133,6 +133,9 @@ struct vlan_pcpu_stats { extern struct net_device *__vlan_find_dev_deep_rcu(struct net_device *real_dev, __be16 vlan_proto, u16 vlan_id); +extern int vlan_for_each(struct net_device *dev, + int (*action)(struct net_device *dev, int vid, + void *arg), void *arg); extern struct net_device *vlan_dev_real_dev(const struct net_device *dev); extern u16 vlan_dev_vlan_id(const struct net_device *dev); extern __be16 vlan_dev_vlan_proto(const struct net_device *dev); @@ -236,6 +239,14 @@ __vlan_find_dev_deep_rcu(struct net_device *real_dev, return NULL; } +static inline int +vlan_for_each(struct net_device *dev, + int (*action)(struct net_device *dev, int vid, void *arg), + void *arg) +{ + return 0; +} + static inline struct net_device *vlan_dev_real_dev(const struct net_device *dev) { BUG(); @@ -461,6 +472,31 @@ static inline struct sk_buff *vlan_insert_tag_set_proto(struct sk_buff *skb, return skb; } +/** + * __vlan_hwaccel_clear_tag - clear hardware accelerated VLAN info + * @skb: skbuff to clear + * + * Clears the VLAN information from @skb + */ +static inline void __vlan_hwaccel_clear_tag(struct sk_buff *skb) +{ + skb->vlan_present = 0; +} + +/** + * __vlan_hwaccel_copy_tag - copy hardware accelerated VLAN info from another skb + * @dst: skbuff to copy to + * @src: skbuff to copy from + * + * Copies VLAN information from @src to @dst (for branchless code) + */ +static inline void __vlan_hwaccel_copy_tag(struct sk_buff *dst, const struct sk_buff *src) +{ + dst->vlan_present = src->vlan_present; + dst->vlan_proto = src->vlan_proto; + dst->vlan_tci = src->vlan_tci; +} + /* * __vlan_hwaccel_push_inside - pushes vlan tag to the payload * @skb: skbuff to tag @@ 
-475,7 +511,7 @@ static inline struct sk_buff *__vlan_hwaccel_push_inside(struct sk_buff *skb)
 	skb = vlan_insert_tag_set_proto(skb, skb->vlan_proto,
 					skb_vlan_tag_get(skb));
 	if (likely(skb))
-		skb->vlan_tci = 0;
+		__vlan_hwaccel_clear_tag(skb);
 	return skb;
 }
@@ -491,7 +527,8 @@ static inline void __vlan_hwaccel_put_tag(struct sk_buff *skb,
 					  __be16 vlan_proto, u16 vlan_tci)
 {
 	skb->vlan_proto = vlan_proto;
-	skb->vlan_tci = VLAN_TAG_PRESENT | vlan_tci;
+	skb->vlan_tci = vlan_tci;
+	skb->vlan_present = 1;
 }
 
 /**
@@ -531,8 +568,6 @@ static inline int __vlan_hwaccel_get_tag(const struct sk_buff *skb,
 	}
 }
 
-#define HAVE_VLAN_GET_TAG
-
 /**
  * vlan_get_tag - get the VLAN ID from the skb
  * @skb: skbuff to query
diff --git a/include/linux/iio/adc/ad_sigma_delta.h b/include/linux/iio/adc/ad_sigma_delta.h
index 730ead1a46df..7e84351fa2c0 100644
--- a/include/linux/iio/adc/ad_sigma_delta.h
+++ b/include/linux/iio/adc/ad_sigma_delta.h
@@ -39,6 +39,8 @@ struct iio_dev;
 * if there is just one read-only sample data shift register.
 * @addr_shift: Shift of the register address in the communications register.
 * @read_mask: Mask for the communications register having the read bit set.
+ * @data_reg: Address of the data register; if 0, the default address of 0x3
+ * will be used.
 */
struct ad_sigma_delta_info {
	int (*set_channel)(struct ad_sigma_delta *, unsigned int channel);
@@ -47,6 +49,7 @@ struct ad_sigma_delta_info {
	bool has_registers;
	unsigned int addr_shift;
	unsigned int read_mask;
+	unsigned int data_reg;
};

/**
diff --git a/include/linux/iio/common/st_sensors.h b/include/linux/iio/common/st_sensors.h
index f9bd6e8ab138..8092b8e7f37e 100644
--- a/include/linux/iio/common/st_sensors.h
+++ b/include/linux/iio/common/st_sensors.h
@@ -40,7 +40,7 @@
 #define ST_SENSORS_DEFAULT_STAT_ADDR 0x27
 
 #define ST_SENSORS_MAX_NAME 17
-#define ST_SENSORS_MAX_4WAI 7
+#define ST_SENSORS_MAX_4WAI 8
 
 #define ST_SENSORS_LSM_CHANNELS(device_type, mask, index, mod, \
 	ch2, s, endian, rbits, sbits, addr) \
diff --git a/include/linux/indirect_call_wrapper.h b/include/linux/indirect_call_wrapper.h
new file mode 100644
index 000000000000..00d7e8e919c6
--- /dev/null
+++ b/include/linux/indirect_call_wrapper.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_INDIRECT_CALL_WRAPPER_H
+#define _LINUX_INDIRECT_CALL_WRAPPER_H
+
+#ifdef CONFIG_RETPOLINE
+
+/*
+ * INDIRECT_CALL_$NR - wrapper for indirect calls with $NR known builtins
+ * @f: function pointer
+ * @f$NR: builtin function names, up to $NR of them
+ * @__VA_ARGS__: arguments for @f
+ *
+ * Avoids retpoline overhead for known builtins by comparing @f against each
+ * of them and, on a match, invoking the builtin function directly. The
+ * functions are checked in the given order, falling back to the indirect
+ * call.
+ */
+#define INDIRECT_CALL_1(f, f1, ...) \
+	({ \
+		likely(f == f1) ? f1(__VA_ARGS__) : f(__VA_ARGS__); \
+	})
+#define INDIRECT_CALL_2(f, f2, f1, ...) \
+	({ \
+		likely(f == f2) ? f2(__VA_ARGS__) : \
+				  INDIRECT_CALL_1(f, f1, __VA_ARGS__); \
+	})
+
+#define INDIRECT_CALLABLE_DECLARE(f) f
+#define INDIRECT_CALLABLE_SCOPE
+
+#else
+#define INDIRECT_CALL_1(f, f1, ...) f(__VA_ARGS__)
+#define INDIRECT_CALL_2(f, f2, f1, ...) f(__VA_ARGS__)
+#define INDIRECT_CALLABLE_DECLARE(f)
+#define INDIRECT_CALLABLE_SCOPE static
+#endif
+
+/*
+ * We can use INDIRECT_CALL_$NR for ipv6 related functions only if ipv6 is
+ * builtin; this macro simplifies dealing with indirect calls that have only
+ * ipv4/ipv6 alternatives.
+ */
+#if IS_BUILTIN(CONFIG_IPV6)
+#define INDIRECT_CALL_INET(f, f2, f1, ...) \
+	INDIRECT_CALL_2(f, f2, f1, __VA_ARGS__)
+#elif IS_ENABLED(CONFIG_INET)
+#define INDIRECT_CALL_INET(f, f2, f1, ...) INDIRECT_CALL_1(f, f1, __VA_ARGS__)
+#else
+#define INDIRECT_CALL_INET(f, f2, f1, ...) f(__VA_ARGS__)
+#endif
+
+#endif
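To make the calling convention concrete, here is a hypothetical sketch of a dispatch site with two well-known handlers (the demo_* names are not from this patch, and struct sk_buff is assumed from linux/skbuff.h). Note that INDIRECT_CALL_2() compares against its first candidate (f2) first, so the most likely target should be passed there; with CONFIG_RETPOLINE disabled the macro degenerates to a plain indirect call:

	INDIRECT_CALLABLE_DECLARE(int demo_tcp_rcv(struct sk_buff *skb);)
	INDIRECT_CALLABLE_DECLARE(int demo_udp_rcv(struct sk_buff *skb);)

	static int demo_deliver(int (*handler)(struct sk_buff *skb),
				struct sk_buff *skb)
	{
		/*
		 * Checks demo_tcp_rcv first, then demo_udp_rcv, and only
		 * then falls back to the indirect call through the
		 * retpoline thunk.
		 */
		return INDIRECT_CALL_2(handler, demo_tcp_rcv, demo_udp_rcv, skb);
	}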
diff --git a/include/linux/init.h b/include/linux/init.h
index 9c2aba1dbabf..5255069f5a9f 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -146,7 +146,6 @@ extern unsigned int reset_devices;
 /* used by init/main.c */
 void setup_arch(char **);
 void prepare_namespace(void);
-void __init load_default_modules(void);
 int __init init_rootfs(void);
 
 #if defined(CONFIG_STRICT_KERNEL_RWX) || defined(CONFIG_STRICT_MODULE_RWX)
diff --git a/include/linux/initrd.h b/include/linux/initrd.h
index 84b423044088..14beaff9b445 100644
--- a/include/linux/initrd.h
+++ b/include/linux/initrd.h
@@ -21,4 +21,7 @@ extern int initrd_below_start_ok;
 extern unsigned long initrd_start, initrd_end;
 extern void free_initrd_mem(unsigned long, unsigned long);
 
+extern phys_addr_t phys_initrd_start;
+extern unsigned long phys_initrd_size;
+
 extern unsigned int real_root_dev;
diff --git a/include/linux/ioprio.h b/include/linux/ioprio.h
index 9e30ed6443db..e9bfe6972aed 100644
--- a/include/linux/ioprio.h
+++ b/include/linux/ioprio.h
@@ -70,6 +70,19 @@ static inline int task_nice_ioclass(struct task_struct *task)
 	return IOPRIO_CLASS_BE;
 }
 
+/*
+ * If the calling process has set an I/O priority, use that. Otherwise, return
+ * the default I/O priority.
+ */
+static inline int get_current_ioprio(void)
+{
+	struct io_context *ioc = current->io_context;
+
+	if (ioc)
+		return ioc->ioprio;
+	return IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
+}
+
 /*
  * For inheritance, return the highest of the two given priorities
  */
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
index b708e5169d1d..0f919d5fe84f 100644
--- a/include/linux/jbd2.h
+++ b/include/linux/jbd2.h
@@ -575,6 +575,7 @@ struct transaction_s
 enum {
 	T_RUNNING,
 	T_LOCKED,
+	T_SWITCH,
 	T_FLUSH,
 	T_COMMIT,
 	T_COMMIT_DFLUSH,
@@ -662,13 +663,13 @@ struct transaction_s
 
 	/*
 	 * Number of outstanding updates running on this transaction
-	 * [t_handle_lock]
+	 * [none]
 	 */
 	atomic_t t_updates;
 
 	/*
 	 * Number of buffers reserved for use by all handles in this transaction
-	 * handle but not yet modified. [t_handle_lock]
+	 * handle but not yet modified. [none]
 	 */
 	atomic_t t_outstanding_credits;
@@ -690,7 +691,7 @@ struct transaction_s
 	ktime_t t_start_time;
 
 	/*
-	 * How many handles used this transaction? [t_handle_lock]
+	 * How many handles used this transaction?
[none] */ atomic_t t_handle_count; diff --git a/include/linux/kasan.h b/include/linux/kasan.h index 46aae129917c..b40ea104dd36 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -14,13 +14,13 @@ struct task_struct; #include #include -extern unsigned char kasan_zero_page[PAGE_SIZE]; -extern pte_t kasan_zero_pte[PTRS_PER_PTE]; -extern pmd_t kasan_zero_pmd[PTRS_PER_PMD]; -extern pud_t kasan_zero_pud[PTRS_PER_PUD]; -extern p4d_t kasan_zero_p4d[MAX_PTRS_PER_P4D]; +extern unsigned char kasan_early_shadow_page[PAGE_SIZE]; +extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE]; +extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD]; +extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD]; +extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D]; -int kasan_populate_zero_shadow(const void *shadow_start, +int kasan_populate_early_shadow(const void *shadow_start, const void *shadow_end); static inline void *kasan_mem_to_shadow(const void *addr) @@ -45,22 +45,24 @@ void kasan_free_pages(struct page *page, unsigned int order); void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags); -void kasan_cache_shrink(struct kmem_cache *cache); -void kasan_cache_shutdown(struct kmem_cache *cache); void kasan_poison_slab(struct page *page); void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); void kasan_poison_object_data(struct kmem_cache *cache, void *object); -void kasan_init_slab_obj(struct kmem_cache *cache, const void *object); +void * __must_check kasan_init_slab_obj(struct kmem_cache *cache, + const void *object); -void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); +void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags); void kasan_kfree_large(void *ptr, unsigned long ip); void kasan_poison_kfree(void *ptr, unsigned long ip); -void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size, - gfp_t flags); -void kasan_krealloc(const void *object, size_t new_size, gfp_t flags); +void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object, + size_t size, gfp_t flags); +void * __must_check kasan_krealloc(const void *object, size_t new_size, + gfp_t flags); -void kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); +void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags); bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip); struct kasan_cache { @@ -97,27 +99,40 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {} static inline void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags) {} -static inline void kasan_cache_shrink(struct kmem_cache *cache) {} -static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} static inline void kasan_poison_slab(struct page *page) {} static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) {} static inline void kasan_poison_object_data(struct kmem_cache *cache, void *object) {} -static inline void kasan_init_slab_obj(struct kmem_cache *cache, - const void *object) {} +static inline void *kasan_init_slab_obj(struct kmem_cache *cache, + const void *object) +{ + return (void *)object; +} -static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {} +static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) +{ + return ptr; +} static inline void kasan_kfree_large(void *ptr, unsigned long ip) {} static inline void kasan_poison_kfree(void *ptr, unsigned 
long ip) {} -static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, - size_t size, gfp_t flags) {} -static inline void kasan_krealloc(const void *object, size_t new_size, - gfp_t flags) {} +static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object, + size_t size, gfp_t flags) +{ + return (void *)object; +} +static inline void *kasan_krealloc(const void *object, size_t new_size, + gfp_t flags) +{ + return (void *)object; +} -static inline void kasan_slab_alloc(struct kmem_cache *s, void *object, - gfp_t flags) {} +static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) +{ + return object; +} static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip) { @@ -140,4 +155,40 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } #endif /* CONFIG_KASAN */ +#ifdef CONFIG_KASAN_GENERIC + +#define KASAN_SHADOW_INIT 0 + +void kasan_cache_shrink(struct kmem_cache *cache); +void kasan_cache_shutdown(struct kmem_cache *cache); + +#else /* CONFIG_KASAN_GENERIC */ + +static inline void kasan_cache_shrink(struct kmem_cache *cache) {} +static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} + +#endif /* CONFIG_KASAN_GENERIC */ + +#ifdef CONFIG_KASAN_SW_TAGS + +#define KASAN_SHADOW_INIT 0xFF + +void kasan_init_tags(void); + +void *kasan_reset_tag(const void *addr); + +void kasan_report(unsigned long addr, size_t size, + bool is_write, unsigned long ip); + +#else /* CONFIG_KASAN_SW_TAGS */ + +static inline void kasan_init_tags(void) { } + +static inline void *kasan_reset_tag(const void *addr) +{ + return (void *)addr; +} + +#endif /* CONFIG_KASAN_SW_TAGS */ + #endif /* LINUX_KASAN_H */ diff --git a/include/linux/key.h b/include/linux/key.h index e58ee10f6e58..7099985e35a9 100644 --- a/include/linux/key.h +++ b/include/linux/key.h @@ -346,6 +346,9 @@ static inline key_serial_t key_serial(const struct key *key) extern void key_set_timeout(struct key *, unsigned); +extern key_ref_t lookup_user_key(key_serial_t id, unsigned long flags, + key_perm_t perm); + /* * The permissions required on a key that we're looking up. */ diff --git a/include/linux/kref.h b/include/linux/kref.h index 29220724bf1c..cb00a0268061 100644 --- a/include/linux/kref.h +++ b/include/linux/kref.h @@ -53,10 +53,7 @@ static inline void kref_get(struct kref *kref) * @release: pointer to the function that will clean up the object when the * last reference to the object is released. * This pointer is required, and it is not acceptable to pass kfree - * in as this function. If the caller does pass kfree to this - * function, you will be publicly mocked mercilessly by the kref - * maintainer, and anyone else who happens to notice it. You have - * been warned. + * in as this function. * * Decrement the refcount, and if 0, call release(). * Return 1 if the object was removed, otherwise return 0. 
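As a usage sketch of the contract restated above: the release callback receives the embedded struct kref and must recover and free the enclosing object itself, which is why passing kfree() directly is not acceptable, it would free the wrong address whenever the kref is not the first member. A minimal, illustrative example (my_obj and its helpers are hypothetical)::

    #include <linux/kref.h>
    #include <linux/slab.h>

    struct my_obj {
            int payload;
            struct kref refcount;   /* deliberately not the first member */
    };

    static void my_obj_release(struct kref *kref)
    {
            /* recover the enclosing object, then free it */
            struct my_obj *obj = container_of(kref, struct my_obj, refcount);

            kfree(obj);
    }

    static void my_obj_put(struct my_obj *obj)
    {
            /* returns 1 and runs my_obj_release() when the count hits 0 */
            kref_put(&obj->refcount, my_obj_release);
    }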
Beware, if this diff --git a/include/linux/lantiq.h b/include/linux/lantiq.h new file mode 100644 index 000000000000..67921169d84d --- /dev/null +++ b/include/linux/lantiq.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __LINUX_LANTIQ_H +#define __LINUX_LANTIQ_H + +#ifdef CONFIG_LANTIQ +#include +#else + +#ifndef LTQ_EARLY_ASC +#define LTQ_EARLY_ASC 0 +#endif + +#ifndef CPHYSADDR +#define CPHYSADDR(a) 0 +#endif + +static inline struct clk *clk_get_fpi(void) +{ + return NULL; +} +#endif /* CONFIG_LANTIQ */ +#endif /* __LINUX_LANTIQ_H */ diff --git a/include/linux/libata.h b/include/linux/libata.h index 38c95d66ab12..68133842e6d7 100644 --- a/include/linux/libata.h +++ b/include/linux/libata.h @@ -135,7 +135,6 @@ enum { ATA_SHT_EMULATED = 1, ATA_SHT_THIS_ID = -1, - ATA_SHT_USE_CLUSTERING = 1, /* struct ata_taskfile flags */ ATA_TFLAG_LBA48 = (1 << 0), /* enable 48-bit LBA and "HOB" */ @@ -1360,7 +1359,6 @@ extern struct device_attribute *ata_common_sdev_attrs[]; .tag_alloc_policy = BLK_TAG_ALLOC_RR, \ .this_id = ATA_SHT_THIS_ID, \ .emulated = ATA_SHT_EMULATED, \ - .use_clustering = ATA_SHT_USE_CLUSTERING, \ .proc_name = drv_name, \ .slave_configure = ata_scsi_slave_config, \ .slave_destroy = ata_scsi_slave_destroy, \ diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h index 097072c5a852..5440f11b0907 100644 --- a/include/linux/libnvdimm.h +++ b/include/linux/libnvdimm.h @@ -38,6 +38,10 @@ enum { NDD_UNARMED = 1, /* locked memory devices should not be accessed */ NDD_LOCKED = 2, + /* memory under security wipes should not be accessed */ + NDD_SECURITY_OVERWRITE = 3, + /* tracking whether or not there is a pending device reference */ + NDD_WORK_PENDING = 4, /* need to set a limit somewhere, but yes, this is likely overkill */ ND_IOCTL_MAX_BUFLEN = SZ_4M, @@ -87,7 +91,7 @@ struct nvdimm_bus_descriptor { ndctl_fn ndctl; int (*flush_probe)(struct nvdimm_bus_descriptor *nd_desc); int (*clear_to_send)(struct nvdimm_bus_descriptor *nd_desc, - struct nvdimm *nvdimm, unsigned int cmd); + struct nvdimm *nvdimm, unsigned int cmd, void *data); }; struct nd_cmd_desc { @@ -155,6 +159,46 @@ static inline struct nd_blk_region_desc *to_blk_region_desc( } +enum nvdimm_security_state { + NVDIMM_SECURITY_DISABLED, + NVDIMM_SECURITY_UNLOCKED, + NVDIMM_SECURITY_LOCKED, + NVDIMM_SECURITY_FROZEN, + NVDIMM_SECURITY_OVERWRITE, +}; + +#define NVDIMM_PASSPHRASE_LEN 32 +#define NVDIMM_KEY_DESC_LEN 22 + +struct nvdimm_key_data { + u8 data[NVDIMM_PASSPHRASE_LEN]; +}; + +enum nvdimm_passphrase_type { + NVDIMM_USER, + NVDIMM_MASTER, +}; + +struct nvdimm_security_ops { + enum nvdimm_security_state (*state)(struct nvdimm *nvdimm, + enum nvdimm_passphrase_type pass_type); + int (*freeze)(struct nvdimm *nvdimm); + int (*change_key)(struct nvdimm *nvdimm, + const struct nvdimm_key_data *old_data, + const struct nvdimm_key_data *new_data, + enum nvdimm_passphrase_type pass_type); + int (*unlock)(struct nvdimm *nvdimm, + const struct nvdimm_key_data *key_data); + int (*disable)(struct nvdimm *nvdimm, + const struct nvdimm_key_data *key_data); + int (*erase)(struct nvdimm *nvdimm, + const struct nvdimm_key_data *key_data, + enum nvdimm_passphrase_type pass_type); + int (*overwrite)(struct nvdimm *nvdimm, + const struct nvdimm_key_data *key_data); + int (*query_overwrite)(struct nvdimm *nvdimm); +}; + void badrange_init(struct badrange *badrange); int badrange_add(struct badrange *badrange, u64 addr, u64 length); void badrange_forget(struct badrange *badrange, phys_addr_t start, @@ -165,6 +209,7 
@@ struct nvdimm_bus *nvdimm_bus_register(struct device *parent, struct nvdimm_bus_descriptor *nfit_desc); void nvdimm_bus_unregister(struct nvdimm_bus *nvdimm_bus); struct nvdimm_bus *to_nvdimm_bus(struct device *dev); +struct nvdimm_bus *nvdimm_to_bus(struct nvdimm *nvdimm); struct nvdimm *to_nvdimm(struct device *dev); struct nd_region *to_nd_region(struct device *dev); struct device *nd_region_dev(struct nd_region *nd_region); @@ -175,10 +220,21 @@ const char *nvdimm_name(struct nvdimm *nvdimm); struct kobject *nvdimm_kobj(struct nvdimm *nvdimm); unsigned long nvdimm_cmd_mask(struct nvdimm *nvdimm); void *nvdimm_provider_data(struct nvdimm *nvdimm); -struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, void *provider_data, - const struct attribute_group **groups, unsigned long flags, - unsigned long cmd_mask, int num_flush, - struct resource *flush_wpq); +struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus, + void *provider_data, const struct attribute_group **groups, + unsigned long flags, unsigned long cmd_mask, int num_flush, + struct resource *flush_wpq, const char *dimm_id, + const struct nvdimm_security_ops *sec_ops); +static inline struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, + void *provider_data, const struct attribute_group **groups, + unsigned long flags, unsigned long cmd_mask, int num_flush, + struct resource *flush_wpq) +{ + return __nvdimm_create(nvdimm_bus, provider_data, groups, flags, + cmd_mask, num_flush, flush_wpq, NULL, NULL); +} + +int nvdimm_security_setup_events(struct nvdimm *nvdimm); const struct nd_cmd_desc *nd_cmd_dimm_desc(int cmd); const struct nd_cmd_desc *nd_cmd_bus_desc(int cmd); u32 nd_cmd_in_size(struct nvdimm *nvdimm, int cmd, @@ -204,6 +260,16 @@ u64 nd_fletcher64(void *addr, size_t len, bool le); void nvdimm_flush(struct nd_region *nd_region); int nvdimm_has_flush(struct nd_region *nd_region); int nvdimm_has_cache(struct nd_region *nd_region); +int nvdimm_in_overwrite(struct nvdimm *nvdimm); + +static inline int nvdimm_ctl(struct nvdimm *nvdimm, unsigned int cmd, void *buf, + unsigned int buf_len, int *cmd_rc) +{ + struct nvdimm_bus *nvdimm_bus = nvdimm_to_bus(nvdimm); + struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus); + + return nd_desc->ndctl(nd_desc, nvdimm, cmd, buf, buf_len, cmd_rc); +} #ifdef CONFIG_ARCH_HAS_PMEM_API #define ARCH_MEMREMAP_PMEM MEMREMAP_WB diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 2fdeac1a420d..5d865a5d5cdc 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -90,7 +90,7 @@ typedef int (nvm_get_chk_meta_fn)(struct nvm_dev *, sector_t, int, struct nvm_chk_meta *); typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *); typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *); -typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *); +typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *, int); typedef void (nvm_destroy_dma_pool_fn)(void *); typedef void *(nvm_dev_dma_alloc_fn)(struct nvm_dev *, void *, gfp_t, dma_addr_t *); @@ -357,6 +357,7 @@ struct nvm_geo { u32 clba; /* sectors per chunk */ u16 csecs; /* sector size */ u16 sos; /* out-of-band area size */ + bool ext; /* metadata in extended data buffer */ /* device write constrains */ u32 ws_min; /* minimum write size */ diff --git a/include/linux/linkmode.h b/include/linux/linkmode.h index 22443d7fb5cd..a99c58866860 100644 --- a/include/linux/linkmode.h +++ b/include/linux/linkmode.h @@ -57,6 +57,15 @@ static inline void 
linkmode_clear_bit(int nr, volatile unsigned long *addr) __clear_bit(nr, addr); } +static inline void linkmode_mod_bit(int nr, volatile unsigned long *addr, + int set) +{ + if (set) + linkmode_set_bit(nr, addr); + else + linkmode_clear_bit(nr, addr); +} + static inline void linkmode_change_bit(int nr, volatile unsigned long *addr) { __change_bit(nr, addr); diff --git a/include/linux/memblock.h b/include/linux/memblock.h index aee299a6aa76..64c41cf45590 100644 --- a/include/linux/memblock.h +++ b/include/linux/memblock.h @@ -154,7 +154,6 @@ void __next_mem_range_rev(u64 *idx, int nid, enum memblock_flags flags, void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start, phys_addr_t *out_end); -void __memblock_free_early(phys_addr_t base, phys_addr_t size); void __memblock_free_late(phys_addr_t base, phys_addr_t size); /** @@ -320,6 +319,7 @@ static inline int memblock_get_region_node(const struct memblock_region *r) /* Flags for memblock allocation APIs */ #define MEMBLOCK_ALLOC_ANYWHERE (~(phys_addr_t)0) #define MEMBLOCK_ALLOC_ACCESSIBLE 0 +#define MEMBLOCK_ALLOC_KASAN 1 /* We are using top down, so it is safe to use 0 here */ #define MEMBLOCK_LOW_LIMIT 0 @@ -414,13 +414,13 @@ static inline void * __init memblock_alloc_node_nopanic(phys_addr_t size, static inline void __init memblock_free_early(phys_addr_t base, phys_addr_t size) { - __memblock_free_early(base, size); + memblock_free(base, size); } static inline void __init memblock_free_early_nid(phys_addr_t base, phys_addr_t size, int nid) { - __memblock_free_early(base, size); + memblock_free(base, size); } static inline void __init memblock_free_late(phys_addr_t base, phys_addr_t size) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 7ab2120155a4..83ae11cbd12c 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -526,9 +526,11 @@ void mem_cgroup_handle_over_high(void); unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg); -void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, +void mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p); +void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg); + static inline void mem_cgroup_enter_user_fault(void) { WARN_ON(current->in_user_fault); @@ -970,7 +972,12 @@ static inline unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg) } static inline void -mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p) +mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p) +{ +} + +static inline void +mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg) { } diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index ffd9cd10fcf3..07da5c6c5ba0 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -107,8 +107,8 @@ static inline bool movable_node_is_enabled(void) } #ifdef CONFIG_MEMORY_HOTREMOVE -extern int arch_remove_memory(u64 start, u64 size, - struct vmem_altmap *altmap); +extern int arch_remove_memory(int nid, u64 start, u64 size, + struct vmem_altmap *altmap); extern int __remove_pages(struct zone *zone, unsigned long start_pfn, unsigned long nr_pages, struct vmem_altmap *altmap); #endif /* CONFIG_MEMORY_HOTREMOVE */ @@ -326,15 +326,14 @@ extern int walk_memory_range(unsigned long start_pfn, unsigned long end_pfn, void *arg, int (*func)(struct memory_block *, void *)); extern int __add_memory(int nid, u64 start, u64 size); extern int add_memory(int nid, u64 start, u64 size); -extern int add_memory_resource(int nid, 
struct resource *resource, bool online); +extern int add_memory_resource(int nid, struct resource *resource); extern int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, bool want_memblock); extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn, unsigned long nr_pages, struct vmem_altmap *altmap); -extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages); extern bool is_memblock_offlined(struct memory_block *mem); -extern int sparse_add_one_section(struct pglist_data *pgdat, - unsigned long start_pfn, struct vmem_altmap *altmap); +extern int sparse_add_one_section(int nid, unsigned long start_pfn, + struct vmem_altmap *altmap); extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms, unsigned long map_offset, struct vmem_altmap *altmap); extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map, diff --git a/include/linux/memremap.h b/include/linux/memremap.h index 0ac69ddf5fc4..f0628660d541 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -4,8 +4,6 @@ #include #include -#include - struct resource; struct device; @@ -66,62 +64,34 @@ enum memory_type { }; /* - * For MEMORY_DEVICE_PRIVATE we use ZONE_DEVICE and extend it with two - * callbacks: - * page_fault() - * page_free() - * * Additional notes about MEMORY_DEVICE_PRIVATE may be found in * include/linux/hmm.h and Documentation/vm/hmm.rst. There is also a brief * explanation in include/linux/memory_hotplug.h. * - * The page_fault() callback must migrate page back, from device memory to - * system memory, so that the CPU can access it. This might fail for various - * reasons (device issues, device have been unplugged, ...). When such error - * conditions happen, the page_fault() callback must return VM_FAULT_SIGBUS and - * set the CPU page table entry to "poisoned". - * - * Note that because memory cgroup charges are transferred to the device memory, - * this should never fail due to memory restrictions. However, allocation - * of a regular system page might still fail because we are out of memory. If - * that happens, the page_fault() callback must return VM_FAULT_OOM. - * - * The page_fault() callback can also try to migrate back multiple pages in one - * chunk, as an optimization. It must, however, prioritize the faulting address - * over all the others. - * - * * The page_free() callback is called once the page refcount reaches 1 * (ZONE_DEVICE pages never reach 0 refcount unless there is a refcount bug. * This allows the device driver to implement its own memory management.) - * - * For MEMORY_DEVICE_PUBLIC only the page_free() callback matter. 
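To make the refcount rule above concrete, here is a hedged sketch of how a ZONE_DEVICE user might wire up the page_free() hook together with the new kill callback that appears in struct dev_pagemap just below (all driver names are hypothetical and error handling is omitted)::

    #include <linux/memremap.h>
    #include <linux/percpu-refcount.h>

    static void my_page_free(struct page *page, void *data)
    {
            /* refcount dropped to 1: the driver owns the page again */
    }

    static void my_kill(struct percpu_ref *ref)
    {
            /* transition the pagemap reference to the dead state */
            percpu_ref_kill(ref);
    }

    static void my_pagemap_init(struct dev_pagemap *pgmap,
                                struct percpu_ref *ref)
    {
            pgmap->ref = ref;
            pgmap->kill = my_kill;
            pgmap->page_free = my_page_free;
    }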
*/ -typedef int (*dev_page_fault_t)(struct vm_area_struct *vma, - unsigned long addr, - const struct page *page, - unsigned int flags, - pmd_t *pmdp); typedef void (*dev_page_free_t)(struct page *page, void *data); /** * struct dev_pagemap - metadata for ZONE_DEVICE mappings - * @page_fault: callback when CPU fault on an unaddressable device page * @page_free: free page callback when page refcount reaches 1 * @altmap: pre-allocated/reserved memory for vmemmap allocations * @res: physical address range covered by @ref * @ref: reference count that pins the devm_memremap_pages() mapping + * @kill: callback to transition @ref to the dead state * @dev: host device of the mapping for debug * @data: private data pointer for page_free() * @type: memory type: see MEMORY_* in memory_hotplug.h */ struct dev_pagemap { - dev_page_fault_t page_fault; dev_page_free_t page_free; struct vmem_altmap altmap; bool altmap_valid; struct resource res; struct percpu_ref *ref; + void (*kill)(struct percpu_ref *ref); struct device *dev; void *data; enum memory_type type; diff --git a/include/linux/mfd/axp20x.h b/include/linux/mfd/axp20x.h index 1293695245df..a353cd22b388 100644 --- a/include/linux/mfd/axp20x.h +++ b/include/linux/mfd/axp20x.h @@ -266,6 +266,7 @@ enum axp20x_variants { #define AXP288_RT_BATT_V_H 0xa0 #define AXP288_RT_BATT_V_L 0xa1 +#define AXP813_ACIN_PATH_CTRL 0x3a #define AXP813_ADC_RATE 0x85 /* Fuel Gauge */ diff --git a/include/linux/mfd/tmio.h b/include/linux/mfd/tmio.h index 1e70060c92ce..e2687a30e5a1 100644 --- a/include/linux/mfd/tmio.h +++ b/include/linux/mfd/tmio.h @@ -54,12 +54,8 @@ * idle before writing to some registers. */ #define TMIO_MMC_HAS_IDLE_WAIT BIT(4) -/* - * A GPIO is used for card hotplug detection. We need an extra flag for this, - * because 0 is a valid GPIO number too, and requiring users to specify - * cd_gpio < 0 to disable GPIO hotplug would break backwards compatibility. 
- */ -#define TMIO_MMC_USE_GPIO_CD BIT(5) + +/* BIT(5) is unused */ /* * Some controllers have CMD12 automatically @@ -104,7 +100,6 @@ struct tmio_mmc_data { unsigned long capabilities2; unsigned long flags; u32 ocr_mask; /* available voltages */ - unsigned int cd_gpio; int alignment_shift; dma_addr_t dma_rx_offset; unsigned int max_blk_count; diff --git a/include/linux/migrate.h b/include/linux/migrate.h index f2b4abbca55e..e13d9bf2f9a5 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -29,7 +29,7 @@ enum migrate_reason { }; /* In mm/debug.c; also keep sync with include/trace/events/migrate.h */ -extern char *migrate_reason_names[MR_TYPES]; +extern const char *migrate_reason_names[MR_TYPES]; static inline struct page *new_page_nodemask(struct page *page, int preferred_nid, nodemask_t *nodemask) @@ -77,8 +77,7 @@ extern void migrate_page_copy(struct page *newpage, struct page *page); extern int migrate_huge_page_move_mapping(struct address_space *mapping, struct page *newpage, struct page *page); extern int migrate_page_move_mapping(struct address_space *mapping, - struct page *newpage, struct page *page, - struct buffer_head *head, enum migrate_mode mode, + struct page *newpage, struct page *page, enum migrate_mode mode, int extra_count); #else diff --git a/include/linux/mii.h b/include/linux/mii.h index 2da85b02e1c0..6fee8b1a4400 100644 --- a/include/linux/mii.h +++ b/include/linux/mii.h @@ -209,7 +209,7 @@ static inline u32 ethtool_adv_to_mii_ctrl1000_t(u32 ethadv) /** * linkmode_adv_to_mii_ctrl1000_t - * advertising: the linkmode advertisement settings + * @advertising: the linkmode advertisement settings * * A small helper function that translates linkmode advertisement * settings to phy autonegotiation advertisements for the @@ -287,6 +287,25 @@ static inline u32 mii_stat1000_to_ethtool_lpa_t(u32 lpa) return result; } +/** + * mii_stat1000_mod_linkmode_lpa_t + * @advertising: target the linkmode advertisement settings + * @lpa: value of the MII_STAT1000 register + * + * A small helper function that translates MII_STAT1000 bits, when in + * 1000Base-T mode, to linkmode advertisement settings. Other bits in + * advertising are not changed. + */ +static inline void mii_stat1000_mod_linkmode_lpa_t(unsigned long *advertising, + u32 lpa) +{ + linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, + advertising, lpa & LPA_1000HALF); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, + advertising, lpa & LPA_1000FULL); +} + /** * ethtool_adv_to_mii_adv_x * @ethadv: the ethtool advertisement settings @@ -353,51 +372,105 @@ static inline u32 mii_lpa_to_ethtool_lpa_x(u32 lpa) return result | mii_adv_to_ethtool_adv_x(lpa); } +/** + * mii_adv_mod_linkmode_adv_t + * @advertising: pointer to destination link mode. + * @adv: value of the MII_ADVERTISE register + * + * A small helper function that translates MII_ADVERTISE bits to + * linkmode advertisement settings. Leaves other bits unchanged.
+ */ +static inline void mii_adv_mod_linkmode_adv_t(unsigned long *advertising, + u32 adv) +{ + linkmode_mod_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, + advertising, adv & ADVERTISE_10HALF); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, + advertising, adv & ADVERTISE_10FULL); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, + advertising, adv & ADVERTISE_100HALF); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, + advertising, adv & ADVERTISE_100FULL); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT, advertising, + adv & ADVERTISE_PAUSE_CAP); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + advertising, adv & ADVERTISE_PAUSE_ASYM); +} + /** * mii_adv_to_linkmode_adv_t * @advertising:pointer to destination link mode. * @adv: value of the MII_ADVERTISE register * * A small helper function that translates MII_ADVERTISE bits - * to linkmode advertisement settings. + * to linkmode advertisement settings. Clears the old value + * of advertising. */ static inline void mii_adv_to_linkmode_adv_t(unsigned long *advertising, u32 adv) { linkmode_zero(advertising); - if (adv & ADVERTISE_10HALF) - linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, - advertising); - if (adv & ADVERTISE_10FULL) - linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, - advertising); - if (adv & ADVERTISE_100HALF) - linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, - advertising); - if (adv & ADVERTISE_100FULL) - linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, - advertising); - if (adv & ADVERTISE_PAUSE_CAP) - linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, advertising); - if (adv & ADVERTISE_PAUSE_ASYM) - linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, advertising); + mii_adv_mod_linkmode_adv_t(advertising, adv); +} + +/** + * mii_lpa_to_linkmode_lpa_t + * @lp_advertising: pointer to destination link mode. + * @lpa: value of the MII_LPA register + * + * A small helper function that translates MII_LPA bits to linkmode LP + * advertisement settings. Clears the old value of lp_advertising. + */ +static inline void mii_lpa_to_linkmode_lpa_t(unsigned long *lp_advertising, + u32 lpa) +{ + mii_adv_to_linkmode_adv_t(lp_advertising, lpa); + + if (lpa & LPA_LPACK) + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + lp_advertising); + +} + +/** + * mii_lpa_mod_linkmode_lpa_t + * @lp_advertising: pointer to destination link mode. + * @lpa: value of the MII_LPA register + * + * A small helper function that translates MII_LPA bits to linkmode LP + * advertisement settings. Leaves other bits unchanged. + */ +static inline void mii_lpa_mod_linkmode_lpa_t(unsigned long *lp_advertising, + u32 lpa) +{ + mii_adv_mod_linkmode_adv_t(lp_advertising, lpa); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, + lp_advertising, lpa & LPA_LPACK); } /** - * ethtool_adv_to_lcl_adv_t - * @advertising:pointer to ethtool advertising + * linkmode_adv_to_lcl_adv_t + * @advertising: pointer to linkmode advertising * - * A small helper function that translates ethtool advertising to LVL + * A small helper function that translates linkmode advertising to LCL * pause capabilities.
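As a usage sketch: with struct phy_device carrying a linkmode bitmap in this series, a driver that used to call ethtool_adv_to_lcl_adv_t() on a u32 mask can hand the bitmap straight to the renamed helper defined just below (illustrative only, my_local_pause_adv is hypothetical)::

    #include <linux/mii.h>
    #include <linux/phy.h>

    static u32 my_local_pause_adv(struct phy_device *phydev)
    {
            /* phydev->advertising is an unsigned long linkmode bitmap */
            return linkmode_adv_to_lcl_adv_t(phydev->advertising);
    }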
*/ -static inline u32 ethtool_adv_to_lcl_adv_t(u32 advertising) +static inline u32 linkmode_adv_to_lcl_adv_t(unsigned long *advertising) { u32 lcl_adv = 0; - if (advertising & ADVERTISED_Pause) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT, + advertising)) lcl_adv |= ADVERTISE_PAUSE_CAP; - if (advertising & ADVERTISED_Asym_Pause) + if (linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, + advertising)) lcl_adv |= ADVERTISE_PAUSE_ASYM; return lcl_adv; diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h index dca6ab4eaa99..36e412c3d657 100644 --- a/include/linux/mlx4/device.h +++ b/include/linux/mlx4/device.h @@ -226,6 +226,7 @@ enum { MLX4_DEV_CAP_FLAG2_SL_TO_VL_CHANGE_EVENT = 1ULL << 37, MLX4_DEV_CAP_FLAG2_USER_MAC_EN = 1ULL << 38, MLX4_DEV_CAP_FLAG2_DRIVER_VERSION_TO_FW = 1ULL << 39, + MLX4_DEV_CAP_FLAG2_SW_CQ_INIT = 1ULL << 40, }; enum { @@ -1136,7 +1137,8 @@ void mlx4_free_hwq_res(struct mlx4_dev *mdev, struct mlx4_hwq_resources *wqres, int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, struct mlx4_mtt *mtt, struct mlx4_uar *uar, u64 db_rec, struct mlx4_cq *cq, - unsigned vector, int collapsed, int timestamp_en); + unsigned int vector, int collapsed, int timestamp_en, + void *buf_addr, bool user_cq); void mlx4_cq_free(struct mlx4_dev *dev, struct mlx4_cq *cq); int mlx4_qp_reserve_range(struct mlx4_dev *dev, int cnt, int align, int *base, u8 flags, u8 usage); diff --git a/include/linux/mlx5/cq.h b/include/linux/mlx5/cq.h index 31a750570c38..612c8c2f2466 100644 --- a/include/linux/mlx5/cq.h +++ b/include/linux/mlx5/cq.h @@ -60,7 +60,7 @@ struct mlx5_core_cq { } tasklet_ctx; int reset_notify_added; struct list_head reset_notify; - struct mlx5_eq *eq; + struct mlx5_eq_comp *eq; u16 uid; }; @@ -125,9 +125,9 @@ struct mlx5_cq_modify_params { }; enum { - CQE_SIZE_64 = 0, - CQE_SIZE_128 = 1, - CQE_SIZE_128_PAD = 2, + CQE_STRIDE_64 = 0, + CQE_STRIDE_128 = 1, + CQE_STRIDE_128_PAD = 2, }; #define MLX5_MAX_CQ_PERIOD (BIT(__mlx5_bit_sz(cqc, cq_period)) - 1) @@ -135,8 +135,8 @@ enum { static inline int cqe_sz_to_mlx_sz(u8 size, int padding_128_en) { - return padding_128_en ? CQE_SIZE_128_PAD : - size == 64 ? CQE_SIZE_64 : CQE_SIZE_128; + return padding_128_en ? CQE_STRIDE_128_PAD : + size == 64 ? CQE_STRIDE_64 : CQE_STRIDE_128; } static inline void mlx5_cq_set_ci(struct mlx5_core_cq *cq) diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index b4c0457fbebd..8c4a820bd4c1 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -212,6 +212,13 @@ enum { MLX5_PFAULT_SUBTYPE_RDMA = 1, }; +enum wqe_page_fault_type { + MLX5_WQE_PF_TYPE_RMP = 0, + MLX5_WQE_PF_TYPE_REQ_SEND_OR_WRITE = 1, + MLX5_WQE_PF_TYPE_RESP = 2, + MLX5_WQE_PF_TYPE_REQ_READ_OR_ATOMIC = 3, +}; + enum { MLX5_PERM_LOCAL_READ = 1 << 2, MLX5_PERM_LOCAL_WRITE = 1 << 3, @@ -294,9 +301,15 @@ enum { MLX5_EVENT_QUEUE_TYPE_DCT = 6, }; +/* mlx5 components can subscribe to any one of these events via + * mlx5_eq_notifier_register API.
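A hedged sketch of the subscription model this comment describes, using the struct mlx5_nb wrapper and MLX5_NB_INIT() helper this patch adds in the new include/linux/mlx5/eq.h, plus mlx5_notifier_register() declared later in driver.h (the PORT_CHANGE event type and the handler body are illustrative)::

    #include <linux/notifier.h>
    #include <linux/mlx5/driver.h>
    #include <linux/mlx5/eq.h>

    static int my_port_event(struct notifier_block *nb,
                             unsigned long type, void *data)
    {
            /* type carries the mlx5_event value, data the raw EQE */
            return NOTIFY_OK;
    }

    static struct mlx5_nb my_nb;

    static void my_subscribe(struct mlx5_core_dev *dev)
    {
            MLX5_NB_INIT(&my_nb, my_port_event, PORT_CHANGE);
            mlx5_notifier_register(dev, &my_nb.nb);
    }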
+ */ enum mlx5_event { + /* Special value to subscribe to any event */ + MLX5_EVENT_TYPE_NOTIFY_ANY = 0x0, + /* HW events enum start: comp events are not subscribable */ MLX5_EVENT_TYPE_COMP = 0x0, - + /* HW Async events enum start: subscribable events */ MLX5_EVENT_TYPE_PATH_MIG = 0x01, MLX5_EVENT_TYPE_COMM_EST = 0x02, MLX5_EVENT_TYPE_SQ_DRAINED = 0x03, @@ -317,6 +330,7 @@ enum mlx5_event { MLX5_EVENT_TYPE_TEMP_WARN_EVENT = 0x17, MLX5_EVENT_TYPE_REMOTE_CONFIG = 0x19, MLX5_EVENT_TYPE_GENERAL_EVENT = 0x22, + MLX5_EVENT_TYPE_MONITOR_COUNTER = 0x24, MLX5_EVENT_TYPE_PPS_EVENT = 0x25, MLX5_EVENT_TYPE_DB_BF_CONGESTION = 0x1a, @@ -334,6 +348,8 @@ enum mlx5_event { MLX5_EVENT_TYPE_FPGA_QP_ERROR = 0x21, MLX5_EVENT_TYPE_DEVICE_TRACER = 0x26, + + MLX5_EVENT_TYPE_MAX = MLX5_EVENT_TYPE_DEVICE_TRACER + 1, }; enum { @@ -405,6 +421,7 @@ enum { MLX5_OPCODE_ATOMIC_MASKED_FA = 0x15, MLX5_OPCODE_BIND_MW = 0x18, MLX5_OPCODE_CONFIG_CMD = 0x1f, + MLX5_OPCODE_ENHANCED_MPSW = 0x29, MLX5_RECV_OPCODE_RDMA_WRITE_IMM = 0x00, MLX5_RECV_OPCODE_SEND = 0x01, @@ -766,6 +783,11 @@ static inline u8 mlx5_get_cqe_format(struct mlx5_cqe64 *cqe) return (cqe->op_own >> 2) & 0x3; } +static inline u8 get_cqe_opcode(struct mlx5_cqe64 *cqe) +{ + return cqe->op_own >> 4; +} + static inline u8 get_cqe_lro_tcppsh(struct mlx5_cqe64 *cqe) { return (cqe->lro_tcppsh_abort_dupack >> 6) & 1; diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index aa5963b5d38e..54299251d40d 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -46,10 +46,11 @@ #include #include #include +#include #include #include -#include +#include #include #include @@ -84,18 +85,6 @@ enum { MLX5_MAX_PORTS = 2, }; -enum { - MLX5_EQ_VEC_PAGES = 0, - MLX5_EQ_VEC_CMD = 1, - MLX5_EQ_VEC_ASYNC = 2, - MLX5_EQ_VEC_PFAULT = 3, - MLX5_EQ_VEC_COMP_BASE, -}; - -enum { - MLX5_MAX_IRQ_NAME = 32 -}; - enum { MLX5_ATOMIC_MODE_OFFSET = 16, MLX5_ATOMIC_MODE_IB_COMP = 1, @@ -205,16 +194,7 @@ struct mlx5_rsc_debug { }; enum mlx5_dev_event { - MLX5_DEV_EVENT_SYS_ERROR, - MLX5_DEV_EVENT_PORT_UP, - MLX5_DEV_EVENT_PORT_DOWN, - MLX5_DEV_EVENT_PORT_INITIALIZED, - MLX5_DEV_EVENT_LID_CHANGE, - MLX5_DEV_EVENT_PKEY_CHANGE, - MLX5_DEV_EVENT_GUID_CHANGE, - MLX5_DEV_EVENT_CLIENT_REREG, - MLX5_DEV_EVENT_PPS, - MLX5_DEV_EVENT_DELAY_DROP_TIMEOUT, + MLX5_DEV_EVENT_SYS_ERROR = 128, /* 0 - 127 are FW events */ }; enum mlx5_port_status { @@ -222,14 +202,6 @@ enum mlx5_port_status { MLX5_PORT_DOWN = 2, }; -enum mlx5_eq_type { - MLX5_EQ_TYPE_COMP, - MLX5_EQ_TYPE_ASYNC, -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - MLX5_EQ_TYPE_PF, -#endif -}; - struct mlx5_bfreg_info { u32 *sys_pages; int num_low_latency_bfregs; @@ -297,6 +269,8 @@ struct mlx5_cmd_stats { }; struct mlx5_cmd { + struct mlx5_nb nb; + void *cmd_alloc_buf; dma_addr_t alloc_dma; int alloc_size; @@ -366,51 +340,6 @@ struct mlx5_frag_buf_ctrl { u8 log_frag_strides; }; -struct mlx5_eq_tasklet { - struct list_head list; - struct list_head process_list; - struct tasklet_struct task; - /* lock on completion tasklet list */ - spinlock_t lock; -}; - -struct mlx5_eq_pagefault { - struct work_struct work; - /* Pagefaults lock */ - spinlock_t lock; - struct workqueue_struct *wq; - mempool_t *pool; -}; - -struct mlx5_cq_table { - /* protect radix tree */ - spinlock_t lock; - struct radix_tree_root tree; -}; - -struct mlx5_eq { - struct mlx5_core_dev *dev; - struct mlx5_cq_table cq_table; - __be32 __iomem *doorbell; - u32 cons_index; - struct mlx5_frag_buf buf; - int size; - unsigned int irqn; - u8 eqn; - int nent; - u64 
mask; - struct list_head list; - int index; - struct mlx5_rsc_debug *dbg; - enum mlx5_eq_type type; - union { - struct mlx5_eq_tasklet tasklet_ctx; -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - struct mlx5_eq_pagefault pf_ctx; -#endif - }; -}; - struct mlx5_core_psv { u32 psv_idx; struct psv_layout { @@ -463,36 +392,6 @@ struct mlx5_core_rsc_common { struct completion free; }; -struct mlx5_core_srq { - struct mlx5_core_rsc_common common; /* must be first */ - u32 srqn; - int max; - size_t max_gs; - size_t max_avail_gather; - int wqe_shift; - void (*event) (struct mlx5_core_srq *, enum mlx5_event); - - atomic_t refcount; - struct completion free; - u16 uid; -}; - -struct mlx5_eq_table { - void __iomem *update_ci; - void __iomem *update_arm_ci; - struct list_head comp_eqs_list; - struct mlx5_eq pages_eq; - struct mlx5_eq async_eq; - struct mlx5_eq cmd_eq; -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - struct mlx5_eq pfault_eq; -#endif - int num_comp_vectors; - /* protect EQs list - */ - spinlock_t lock; -}; - struct mlx5_uars_page { void __iomem *map; bool wc; @@ -542,13 +441,8 @@ struct mlx5_core_health { }; struct mlx5_qp_table { - /* protect radix tree - */ - spinlock_t lock; - struct radix_tree_root tree; -}; + struct notifier_block nb; -struct mlx5_srq_table { /* protect radix tree */ spinlock_t lock; @@ -575,11 +469,6 @@ struct mlx5_core_sriov { int enabled_vfs; }; -struct mlx5_irq_info { - cpumask_var_t mask; - char name[MLX5_MAX_IRQ_NAME]; -}; - struct mlx5_fc_stats { spinlock_t counters_idr_lock; /* protects counters_idr */ struct idr counters_idr; @@ -593,10 +482,12 @@ struct mlx5_fc_stats { unsigned long sampling_interval; /* jiffies */ }; +struct mlx5_events; struct mlx5_mpfs; struct mlx5_eswitch; struct mlx5_lag; -struct mlx5_pagefault; +struct mlx5_devcom; +struct mlx5_eq_table; struct mlx5_rate_limit { u32 rate; @@ -619,37 +510,12 @@ struct mlx5_rl_table { struct mlx5_rl_entry *rl_entry; }; -enum port_module_event_status_type { - MLX5_MODULE_STATUS_PLUGGED = 0x1, - MLX5_MODULE_STATUS_UNPLUGGED = 0x2, - MLX5_MODULE_STATUS_ERROR = 0x3, - MLX5_MODULE_STATUS_NUM = 0x3, -}; - -enum port_module_event_error_type { - MLX5_MODULE_EVENT_ERROR_POWER_BUDGET_EXCEEDED, - MLX5_MODULE_EVENT_ERROR_LONG_RANGE_FOR_NON_MLNX_CABLE_MODULE, - MLX5_MODULE_EVENT_ERROR_BUS_STUCK, - MLX5_MODULE_EVENT_ERROR_NO_EEPROM_RETRY_TIMEOUT, - MLX5_MODULE_EVENT_ERROR_ENFORCE_PART_NUMBER_LIST, - MLX5_MODULE_EVENT_ERROR_UNKNOWN_IDENTIFIER, - MLX5_MODULE_EVENT_ERROR_HIGH_TEMPERATURE, - MLX5_MODULE_EVENT_ERROR_BAD_CABLE, - MLX5_MODULE_EVENT_ERROR_UNKNOWN, - MLX5_MODULE_EVENT_ERROR_NUM, -}; - -struct mlx5_port_module_event_stats { - u64 status_counters[MLX5_MODULE_STATUS_NUM]; - u64 error_counters[MLX5_MODULE_EVENT_ERROR_NUM]; -}; - struct mlx5_priv { char name[MLX5_MAX_NAME_LEN]; - struct mlx5_eq_table eq_table; - struct mlx5_irq_info *irq_info; + struct mlx5_eq_table *eq_table; /* pages stuff */ + struct mlx5_nb pg_nb; struct workqueue_struct *pg_wq; struct rb_root page_root; int fw_pages; @@ -659,8 +525,6 @@ struct mlx5_priv { struct mlx5_core_health health; - struct mlx5_srq_table srq_table; - /* start: qp staff */ struct mlx5_qp_table qp_table; struct dentry *qp_debugfs; @@ -690,28 +554,18 @@ struct mlx5_priv { struct list_head dev_list; struct list_head ctx_list; spinlock_t ctx_lock; - - struct list_head waiting_events_list; - bool is_accum_events; + struct mlx5_events *events; struct mlx5_flow_steering *steering; struct mlx5_mpfs *mpfs; struct mlx5_eswitch *eswitch; struct mlx5_core_sriov sriov; struct mlx5_lag 
*lag; + struct mlx5_devcom *devcom; unsigned long pci_dev_data; struct mlx5_fc_stats fc_stats; struct mlx5_rl_table rl_table; - struct mlx5_port_module_event_stats pme_stats; - -#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING - void (*pfault)(struct mlx5_core_dev *dev, - void *context, - struct mlx5_pagefault *pfault); - void *pfault_ctx; - struct srcu_struct pfault_srcu; -#endif struct mlx5_bfreg_data bfregs; struct mlx5_uars_page *uar; }; @@ -736,44 +590,6 @@ enum mlx5_pagefault_type_flags { MLX5_PFAULT_RDMA = 1 << 2, }; -/* Contains the details of a pagefault. */ -struct mlx5_pagefault { - u32 bytes_committed; - u32 token; - u8 event_subtype; - u8 type; - union { - /* Initiator or send message responder pagefault details. */ - struct { - /* Received packet size, only valid for responders. */ - u32 packet_size; - /* - * Number of resource holding WQE, depends on type. - */ - u32 wq_num; - /* - * WQE index. Refers to either the send queue or - * receive queue, according to event_subtype. - */ - u16 wqe_index; - } wqe; - /* RDMA responder pagefault details */ - struct { - u32 r_key; - /* - * Received packet size, minimal size page fault - * resolution required for forward progress. - */ - u32 packet_size; - u32 rdma_op_len; - u64 rdma_va; - } rdma; - }; - - struct mlx5_eq *eq; - struct work_struct work; -}; - struct mlx5_td { struct list_head tirs_list; u32 tdn; @@ -803,6 +619,8 @@ struct mlx5_pps { }; struct mlx5_clock { + struct mlx5_core_dev *mdev; + struct mlx5_nb pps_nb; seqlock_t lock; struct cyclecounter cycles; struct timecounter tc; @@ -810,7 +628,6 @@ struct mlx5_clock { u32 nominal_c_mult; unsigned long overflow_period; struct delayed_work overflow_work; - struct mlx5_core_dev *mdev; struct ptp_clock *ptp; struct ptp_clock_info ptp_info; struct mlx5_pps pps_info; @@ -843,9 +660,6 @@ struct mlx5_core_dev { /* sync interface state */ struct mutex intf_state_mutex; unsigned long intf_state; - void (*event) (struct mlx5_core_dev *dev, - enum mlx5_dev_event event, - unsigned long param); struct mlx5_priv priv; struct mlx5_profile *profile; atomic_t num_qps; @@ -858,9 +672,6 @@ struct mlx5_core_dev { } roce; #ifdef CONFIG_MLX5_FPGA struct mlx5_fpga_device *fpga; -#endif -#ifdef CONFIG_RFS_ACCEL - struct cpu_rmap *rmap; #endif struct mlx5_clock clock; struct mlx5_ib_clock_info *clock_info; @@ -940,8 +751,8 @@ struct mlx5_hca_vport_context { u64 node_guid; u32 cap_mask1; u32 cap_mask1_perm; - u32 cap_mask2; - u32 cap_mask2_perm; + u16 cap_mask2; + u16 cap_mask2_perm; u16 lid; u8 init_type_reply; /* bitmask: see ib spec 14.2.5.6 InitTypeReply */ u8 lmc; @@ -1070,13 +881,6 @@ struct mlx5_cmd_mailbox *mlx5_alloc_cmd_mailbox_chain(struct mlx5_core_dev *dev, gfp_t flags, int npages); void mlx5_free_cmd_mailbox_chain(struct mlx5_core_dev *dev, struct mlx5_cmd_mailbox *head); -int mlx5_core_create_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, - struct mlx5_srq_attr *in); -int mlx5_core_destroy_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq); -int mlx5_core_query_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, - struct mlx5_srq_attr *out); -int mlx5_core_arm_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq, - u16 lwm, int is_srq); void mlx5_init_mkey_table(struct mlx5_core_dev *dev); void mlx5_cleanup_mkey_table(struct mlx5_core_dev *dev); int mlx5_core_create_mkey_cb(struct mlx5_core_dev *dev, @@ -1095,9 +899,9 @@ int mlx5_core_alloc_pd(struct mlx5_core_dev *dev, u32 *pdn); int mlx5_core_dealloc_pd(struct mlx5_core_dev *dev, u32 pdn); int 
mlx5_core_mad_ifc(struct mlx5_core_dev *dev, const void *inb, void *outb, u16 opmod, u8 port); -void mlx5_pagealloc_init(struct mlx5_core_dev *dev); +int mlx5_pagealloc_init(struct mlx5_core_dev *dev); void mlx5_pagealloc_cleanup(struct mlx5_core_dev *dev); -int mlx5_pagealloc_start(struct mlx5_core_dev *dev); +void mlx5_pagealloc_start(struct mlx5_core_dev *dev); void mlx5_pagealloc_stop(struct mlx5_core_dev *dev); void mlx5_core_req_pages_handler(struct mlx5_core_dev *dev, u16 func_id, s32 npages); @@ -1108,9 +912,6 @@ void mlx5_unregister_debugfs(void); void mlx5_fill_page_array(struct mlx5_frag_buf *buf, __be64 *pas); void mlx5_fill_page_frag_array(struct mlx5_frag_buf *frag_buf, __be64 *pas); -void mlx5_rsc_event(struct mlx5_core_dev *dev, u32 rsn, int event_type); -void mlx5_srq_event(struct mlx5_core_dev *dev, u32 srqn, int event_type); -struct mlx5_core_srq *mlx5_core_get_srq(struct mlx5_core_dev *dev, u32 srqn); int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, unsigned int *irqn); int mlx5_core_attach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn); @@ -1155,6 +956,9 @@ int mlx5_alloc_bfreg(struct mlx5_core_dev *mdev, struct mlx5_sq_bfreg *bfreg, bool map_wc, bool fast_path); void mlx5_free_bfreg(struct mlx5_core_dev *mdev, struct mlx5_sq_bfreg *bfreg); +unsigned int mlx5_comp_vectors_count(struct mlx5_core_dev *dev); +struct cpumask * +mlx5_comp_irq_get_affinity_mask(struct mlx5_core_dev *dev, int vector); unsigned int mlx5_core_reserved_gids_count(struct mlx5_core_dev *dev); int mlx5_core_roce_gid_set(struct mlx5_core_dev *dev, unsigned int index, u8 roce_version, u8 roce_l3_type, const u8 *gid, @@ -1202,23 +1006,21 @@ struct mlx5_interface { void (*remove)(struct mlx5_core_dev *dev, void *context); int (*attach)(struct mlx5_core_dev *dev, void *context); void (*detach)(struct mlx5_core_dev *dev, void *context); - void (*event)(struct mlx5_core_dev *dev, void *context, - enum mlx5_dev_event event, unsigned long param); - void (*pfault)(struct mlx5_core_dev *dev, - void *context, - struct mlx5_pagefault *pfault); - void * (*get_dev)(void *context); int protocol; struct list_head list; }; -void *mlx5_get_protocol_dev(struct mlx5_core_dev *mdev, int protocol); int mlx5_register_interface(struct mlx5_interface *intf); void mlx5_unregister_interface(struct mlx5_interface *intf); +int mlx5_notifier_register(struct mlx5_core_dev *dev, struct notifier_block *nb); +int mlx5_notifier_unregister(struct mlx5_core_dev *dev, struct notifier_block *nb); + int mlx5_core_query_vendor_id(struct mlx5_core_dev *mdev, u32 *vendor_id); int mlx5_cmd_create_vport_lag(struct mlx5_core_dev *dev); int mlx5_cmd_destroy_vport_lag(struct mlx5_core_dev *dev); +bool mlx5_lag_is_roce(struct mlx5_core_dev *dev); +bool mlx5_lag_is_sriov(struct mlx5_core_dev *dev); bool mlx5_lag_is_active(struct mlx5_core_dev *dev); struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev); int mlx5_lag_query_cong_counters(struct mlx5_core_dev *dev, @@ -1306,10 +1108,4 @@ enum { MLX5_TRIGGERED_CMD_COMP = (u64)1 << 32, }; -static inline const struct cpumask * -mlx5_get_vector_affinity_hint(struct mlx5_core_dev *dev, int vector) -{ - return dev->priv.irq_info[vector].mask; -} - #endif /* MLX5_DRIVER_H */ diff --git a/include/linux/mlx5/eq.h b/include/linux/mlx5/eq.h new file mode 100644 index 000000000000..00045cc4ea11 --- /dev/null +++ b/include/linux/mlx5/eq.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2018 Mellanox Technologies. 
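The new header below exposes a small generic-EQ API; here is a sketch of the consumer loop it anticipates, built only from the declarations in this file plus a hypothetical IRQ handler (see the MLX5_NUM_SPARE_EQE comment further down for why the consumer index must be refreshed inside the loop)::

    #include <linux/interrupt.h>
    #include <linux/mlx5/eq.h>

    static irqreturn_t my_eq_irq(int irq, void *eq_ptr)
    {
            struct mlx5_eq *eq = eq_ptr;
            struct mlx5_eqe *eqe;
            u32 cc = 0;

            while ((eqe = mlx5_eq_get_eqe(eq, cc))) {
                    /* ... consume eqe ... */

                    /* keep the consumer index fresh for the HCA */
                    cc = mlx5_eq_update_cc(eq, ++cc);
            }

            mlx5_eq_update_ci(eq, cc, true);    /* final update, re-arm */
            return IRQ_HANDLED;
    }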
*/ + +#ifndef MLX5_CORE_EQ_H +#define MLX5_CORE_EQ_H + +enum { + MLX5_EQ_PAGEREQ_IDX = 0, + MLX5_EQ_CMD_IDX = 1, + MLX5_EQ_ASYNC_IDX = 2, + /* reserved to be used by mlx5_core ulps (mlx5e/mlx5_ib) */ + MLX5_EQ_PFAULT_IDX = 3, + MLX5_EQ_MAX_ASYNC_EQS, + /* completion eqs vector indices start here */ + MLX5_EQ_VEC_COMP_BASE = MLX5_EQ_MAX_ASYNC_EQS, +}; + +#define MLX5_NUM_CMD_EQE (32) +#define MLX5_NUM_ASYNC_EQE (0x1000) +#define MLX5_NUM_SPARE_EQE (0x80) + +struct mlx5_eq; +struct mlx5_core_dev; + +struct mlx5_eq_param { + u8 index; + int nent; + u64 mask; + void *context; + irq_handler_t handler; +}; + +struct mlx5_eq * +mlx5_eq_create_generic(struct mlx5_core_dev *dev, const char *name, + struct mlx5_eq_param *param); +int +mlx5_eq_destroy_generic(struct mlx5_core_dev *dev, struct mlx5_eq *eq); + +struct mlx5_eqe *mlx5_eq_get_eqe(struct mlx5_eq *eq, u32 cc); +void mlx5_eq_update_ci(struct mlx5_eq *eq, u32 cc, bool arm); + +/* The HCA will think the queue has overflowed if we + * don't tell it we've been processing events. We + * create EQs with MLX5_NUM_SPARE_EQE extra entries, + * so we must update our consumer index at + * least that often. + * + * mlx5_eq_update_cc must be called on every EQE @EQ irq handler + */ +static inline u32 mlx5_eq_update_cc(struct mlx5_eq *eq, u32 cc) +{ + if (unlikely(cc >= MLX5_NUM_SPARE_EQE)) { + mlx5_eq_update_ci(eq, cc, 0); + cc = 0; + } + return cc; +} + +struct mlx5_nb { + struct notifier_block nb; + u8 event_type; +}; + +#define mlx5_nb_cof(ptr, type, member) \ + (container_of(container_of(ptr, struct mlx5_nb, nb), type, member)) + +#define MLX5_NB_INIT(name, handler, event) do { \ + (name)->nb.notifier_call = handler; \ + (name)->event_type = MLX5_EVENT_TYPE_##event; \ +} while (0) + +#endif /* MLX5_CORE_EQ_H */ diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h index 5660f07d3be0..9df51da04621 100644 --- a/include/linux/mlx5/fs.h +++ b/include/linux/mlx5/fs.h @@ -86,6 +86,11 @@ struct mlx5_flow_spec { u32 match_value[MLX5_ST_SZ_DW(fte_match_param)]; }; +enum { + MLX5_FLOW_DEST_VPORT_VHCA_ID = BIT(0), + MLX5_FLOW_DEST_VPORT_REFORMAT_ID = BIT(1), +}; + struct mlx5_flow_destination { enum mlx5_flow_destination_type type; union { @@ -96,7 +101,8 @@ struct mlx5_flow_destination { struct { u16 num; u16 vhca_id; - bool vhca_id_valid; + u32 reformat_id; + u8 flags; } vport; }; }; diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 4e77bfe0b580..35fe5217b244 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -76,13 +76,7 @@ enum { }; enum { - MLX5_GENERAL_OBJ_TYPES_CAP_UCTX = (1ULL << 4), - MLX5_GENERAL_OBJ_TYPES_CAP_UMEM = (1ULL << 5), -}; - -enum { - MLX5_OBJ_TYPE_UCTX = 0x0004, - MLX5_OBJ_TYPE_UMEM = 0x0005, + MLX5_SHARED_RESOURCE_UID = 0xffff, }; enum { @@ -144,6 +138,9 @@ enum { MLX5_CMD_OP_DESTROY_XRQ = 0x718, MLX5_CMD_OP_QUERY_XRQ = 0x719, MLX5_CMD_OP_ARM_XRQ = 0x71a, + MLX5_CMD_OP_QUERY_XRQ_DC_PARAMS_ENTRY = 0x725, + MLX5_CMD_OP_SET_XRQ_DC_PARAMS_ENTRY = 0x726, + MLX5_CMD_OP_QUERY_XRQ_ERROR_PARAMS = 0x727, MLX5_CMD_OP_QUERY_VPORT_STATE = 0x750, MLX5_CMD_OP_MODIFY_VPORT_STATE = 0x751, MLX5_CMD_OP_QUERY_ESW_VPORT_CONTEXT = 0x752, @@ -161,6 +158,8 @@ enum { MLX5_CMD_OP_ALLOC_Q_COUNTER = 0x771, MLX5_CMD_OP_DEALLOC_Q_COUNTER = 0x772, MLX5_CMD_OP_QUERY_Q_COUNTER = 0x773, + MLX5_CMD_OP_SET_MONITOR_COUNTER = 0x774, + MLX5_CMD_OP_ARM_MONITOR_COUNTER = 0x775, MLX5_CMD_OP_SET_PP_RATE_LIMIT = 0x780, MLX5_CMD_OP_QUERY_RATE_LIMIT = 0x781, MLX5_CMD_OP_CREATE_SCHEDULING_ELEMENT = 0x782, @@ -245,6 
+244,7 @@ enum { MLX5_CMD_OP_MODIFY_FLOW_TABLE = 0x93c, MLX5_CMD_OP_ALLOC_PACKET_REFORMAT_CONTEXT = 0x93d, MLX5_CMD_OP_DEALLOC_PACKET_REFORMAT_CONTEXT = 0x93e, + MLX5_CMD_OP_QUERY_PACKET_REFORMAT_CONTEXT = 0x93f, MLX5_CMD_OP_ALLOC_MODIFY_HEADER_CONTEXT = 0x940, MLX5_CMD_OP_DEALLOC_MODIFY_HEADER_CONTEXT = 0x941, MLX5_CMD_OP_QUERY_MODIFY_HEADER_CONTEXT = 0x942, @@ -257,9 +257,19 @@ enum { MLX5_CMD_OP_MODIFY_GENERAL_OBJECT = 0xa01, MLX5_CMD_OP_QUERY_GENERAL_OBJECT = 0xa02, MLX5_CMD_OP_DESTROY_GENERAL_OBJECT = 0xa03, + MLX5_CMD_OP_CREATE_UCTX = 0xa04, + MLX5_CMD_OP_DESTROY_UCTX = 0xa06, + MLX5_CMD_OP_CREATE_UMEM = 0xa08, + MLX5_CMD_OP_DESTROY_UMEM = 0xa0a, MLX5_CMD_OP_MAX }; +/* Valid range for general commands that don't work over an object */ +enum { + MLX5_CMD_OP_GENERAL_START = 0xb00, + MLX5_CMD_OP_GENERAL_END = 0xd00, +}; + struct mlx5_ifc_flow_table_fields_supported_bits { u8 outer_dmac[0x1]; u8 outer_smac[0x1]; @@ -349,7 +359,7 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 reformat_l3_tunnel_to_l2[0x1]; u8 reformat_l2_to_l3_tunnel[0x1]; u8 reformat_and_modify_action[0x1]; - u8 reserved_at_14[0xb]; + u8 reserved_at_15[0xb]; u8 reserved_at_20[0x2]; u8 log_max_ft_size[0x6]; u8 log_max_modify_header_context[0x8]; @@ -421,6 +431,16 @@ struct mlx5_ifc_fte_match_set_lyr_2_4_bits { union mlx5_ifc_ipv6_layout_ipv4_layout_auto_bits dst_ipv4_dst_ipv6; }; +struct mlx5_ifc_nvgre_key_bits { + u8 hi[0x18]; + u8 lo[0x8]; +}; + +union mlx5_ifc_gre_key_bits { + struct mlx5_ifc_nvgre_key_bits nvgre; + u8 key[0x20]; +}; + struct mlx5_ifc_fte_match_set_misc_bits { u8 reserved_at_0[0x8]; u8 source_sqn[0x18]; @@ -442,8 +462,7 @@ struct mlx5_ifc_fte_match_set_misc_bits { u8 reserved_at_64[0xc]; u8 gre_protocol[0x10]; - u8 gre_key_h[0x18]; - u8 gre_key_l[0x8]; + union mlx5_ifc_gre_key_bits gre_key; u8 vxlan_vni[0x18]; u8 reserved_at_b8[0x8]; @@ -599,20 +618,28 @@ struct mlx5_ifc_flow_table_eswitch_cap_bits { u8 reserved_at_800[0x7800]; }; +enum { + MLX5_COUNTER_SOURCE_ESWITCH = 0x0, + MLX5_COUNTER_FLOW_ESWITCH = 0x1, +}; + struct mlx5_ifc_e_switch_cap_bits { u8 vport_svlan_strip[0x1]; u8 vport_cvlan_strip[0x1]; u8 vport_svlan_insert[0x1]; u8 vport_cvlan_insert_if_not_exist[0x1]; u8 vport_cvlan_insert_overwrite[0x1]; - u8 reserved_at_5[0x18]; + u8 reserved_at_5[0x17]; + u8 counter_eswitch_affinity[0x1]; u8 merged_eswitch[0x1]; u8 nic_vport_node_guid_modify[0x1]; u8 nic_vport_port_guid_modify[0x1]; u8 vxlan_encap_decap[0x1]; u8 nvgre_encap_decap[0x1]; - u8 reserved_at_22[0x9]; + u8 reserved_at_22[0x1]; + u8 log_max_fdb_encap_uplink[0x5]; + u8 reserved_at_21[0x3]; u8 log_max_packet_reformat_context[0x5]; u8 reserved_2b[0x6]; u8 max_encap_header_size[0xa]; @@ -831,7 +858,7 @@ struct mlx5_ifc_vector_calc_cap_bits { struct mlx5_ifc_calc_op calc2; struct mlx5_ifc_calc_op calc3; - u8 reserved_at_e0[0x720]; + u8 reserved_at_c0[0x720]; }; enum { @@ -885,6 +912,10 @@ enum { MLX5_CAP_UMR_FENCE_NONE = 0x2, }; +enum { + MLX5_UCTX_CAP_RAW_TX = 1UL << 0, +}; + struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_0[0x30]; u8 vhca_id[0x10]; @@ -1045,7 +1076,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 vector_calc[0x1]; u8 umr_ptr_rlky[0x1]; u8 imaicl[0x1]; - u8 reserved_at_232[0x4]; + u8 qp_packet_based[0x1]; + u8 reserved_at_233[0x3]; u8 qkv[0x1]; u8 pkv[0x1]; u8 set_deth_sqpn[0x1]; @@ -1155,7 +1187,10 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_440[0x20]; - u8 reserved_at_460[0x10]; + u8 reserved_at_460[0x3]; + u8 log_max_uctx[0x5]; + u8 reserved_at_468[0x3]; + u8 log_max_umem[0x5]; u8 max_num_eqs[0x10]; u8 
reserved_at_480[0x3]; @@ -1195,7 +1230,19 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 num_vhca_ports[0x8]; u8 reserved_at_618[0x6]; u8 sw_owner_id[0x1]; - u8 reserved_at_61f[0x1e1]; + u8 reserved_at_61f[0x1]; + + u8 max_num_of_monitor_counters[0x10]; + u8 num_ppcnt_monitor_counters[0x10]; + + u8 reserved_at_640[0x10]; + u8 num_q_monitor_counters[0x10]; + + u8 reserved_at_660[0x40]; + + u8 uctx_cap[0x20]; + + u8 reserved_at_6c0[0x140]; }; enum mlx5_flow_destination_type { @@ -1211,8 +1258,10 @@ enum mlx5_flow_destination_type { struct mlx5_ifc_dest_format_struct_bits { u8 destination_type[0x8]; u8 destination_id[0x18]; + u8 destination_eswitch_owner_vhca_id_valid[0x1]; - u8 reserved_at_21[0xf]; + u8 packet_reformat[0x1]; + u8 reserved_at_22[0xe]; u8 destination_eswitch_owner_vhca_id[0x10]; }; @@ -1222,6 +1271,14 @@ struct mlx5_ifc_flow_counter_list_bits { u8 reserved_at_20[0x20]; }; +struct mlx5_ifc_extended_dest_format_bits { + struct mlx5_ifc_dest_format_struct_bits destination_entry; + + u8 packet_reformat_id[0x20]; + + u8 reserved_at_60[0x20]; +}; + union mlx5_ifc_dest_format_struct_flow_counter_list_auto_bits { struct mlx5_ifc_dest_format_struct_bits dest_format_struct; struct mlx5_ifc_flow_counter_list_bits flow_counter_list; @@ -2251,7 +2308,8 @@ struct mlx5_ifc_qpc_bits { u8 st[0x8]; u8 reserved_at_10[0x3]; u8 pm_state[0x2]; - u8 reserved_at_15[0x3]; + u8 reserved_at_15[0x1]; + u8 req_e2e_credit_mode[0x2]; u8 offload_type[0x4]; u8 end_padding_mode[0x2]; u8 reserved_at_1e[0x2]; @@ -2442,7 +2500,8 @@ struct mlx5_ifc_flow_context_bits { u8 reserved_at_60[0x10]; u8 action[0x10]; - u8 reserved_at_80[0x8]; + u8 extended_destination[0x1]; + u8 reserved_at_80[0x7]; u8 destination_list_size[0x18]; u8 reserved_at_a0[0x8]; @@ -3798,6 +3857,83 @@ enum { MLX5_VPORT_STATE_OP_MOD_ESW_VPORT = 0x1, }; +struct mlx5_ifc_arm_monitor_counter_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x20]; + + u8 reserved_at_60[0x20]; +}; + +struct mlx5_ifc_arm_monitor_counter_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x40]; +}; + +enum { + MLX5_QUERY_MONITOR_CNT_TYPE_PPCNT = 0x0, + MLX5_QUERY_MONITOR_CNT_TYPE_Q_COUNTER = 0x1, +}; + +enum mlx5_monitor_counter_ppcnt { + MLX5_QUERY_MONITOR_PPCNT_IN_RANGE_LENGTH_ERRORS = 0x0, + MLX5_QUERY_MONITOR_PPCNT_OUT_OF_RANGE_LENGTH_FIELD = 0x1, + MLX5_QUERY_MONITOR_PPCNT_FRAME_TOO_LONG_ERRORS = 0x2, + MLX5_QUERY_MONITOR_PPCNT_FRAME_CHECK_SEQUENCE_ERRORS = 0x3, + MLX5_QUERY_MONITOR_PPCNT_ALIGNMENT_ERRORS = 0x4, + MLX5_QUERY_MONITOR_PPCNT_IF_OUT_DISCARDS = 0x5, +}; + +enum { + MLX5_QUERY_MONITOR_Q_COUNTER_RX_OUT_OF_BUFFER = 0x4, +}; + +struct mlx5_ifc_monitor_counter_output_bits { + u8 reserved_at_0[0x4]; + u8 type[0x4]; + u8 reserved_at_8[0x8]; + u8 counter[0x10]; + + u8 counter_group_id[0x20]; +}; + +#define MLX5_CMD_SET_MONITOR_NUM_PPCNT_COUNTER_SET1 (6) +#define MLX5_CMD_SET_MONITOR_NUM_Q_COUNTERS_SET1 (1) +#define MLX5_CMD_SET_MONITOR_NUM_COUNTER (MLX5_CMD_SET_MONITOR_NUM_PPCNT_COUNTER_SET1 +\ + MLX5_CMD_SET_MONITOR_NUM_Q_COUNTERS_SET1) + +struct mlx5_ifc_set_monitor_counter_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 num_of_counters[0x10]; + + u8 reserved_at_60[0x20]; + + struct mlx5_ifc_monitor_counter_output_bits monitor_counter[MLX5_CMD_SET_MONITOR_NUM_COUNTER]; +}; + +struct mlx5_ifc_set_monitor_counter_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 
syndrome[0x20]; + + u8 reserved_at_40[0x40]; +}; + struct mlx5_ifc_query_vport_state_in_bits { u8 opcode[0x10]; u8 reserved_at_10[0x10]; @@ -4663,7 +4799,7 @@ enum { MLX5_QUERY_FLOW_GROUP_OUT_MATCH_CRITERIA_ENABLE_OUTER_HEADERS = 0x0, MLX5_QUERY_FLOW_GROUP_OUT_MATCH_CRITERIA_ENABLE_MISC_PARAMETERS = 0x1, MLX5_QUERY_FLOW_GROUP_OUT_MATCH_CRITERIA_ENABLE_INNER_HEADERS = 0x2, - MLX5_QUERY_FLOW_GROUP_IN_MATCH_CRITERIA_ENABLE_MISC_PARAMETERS_2 = 0X3, + MLX5_QUERY_FLOW_GROUP_IN_MATCH_CRITERIA_ENABLE_MISC_PARAMETERS_2 = 0x3, }; struct mlx5_ifc_query_flow_group_out_bits { @@ -5569,7 +5705,7 @@ struct mlx5_ifc_modify_nic_vport_context_out_bits { struct mlx5_ifc_modify_nic_vport_field_select_bits { u8 reserved_at_0[0x12]; u8 affiliation[0x1]; - u8 reserved_at_e[0x1]; + u8 reserved_at_13[0x1]; u8 disable_uc_local_lb[0x1]; u8 disable_mc_local_lb[0x1]; u8 node_guid[0x1]; @@ -6569,7 +6705,7 @@ struct mlx5_ifc_dealloc_transport_domain_out_bits { struct mlx5_ifc_dealloc_transport_domain_in_bits { u8 opcode[0x10]; - u8 reserved_at_10[0x10]; + u8 uid[0x10]; u8 reserved_at_20[0x10]; u8 op_mod[0x10]; @@ -7422,7 +7558,7 @@ struct mlx5_ifc_alloc_transport_domain_out_bits { struct mlx5_ifc_alloc_transport_domain_in_bits { u8 opcode[0x10]; - u8 reserved_at_10[0x10]; + u8 uid[0x10]; u8 reserved_at_20[0x10]; u8 op_mod[0x10]; @@ -7444,7 +7580,7 @@ struct mlx5_ifc_alloc_q_counter_out_bits { struct mlx5_ifc_alloc_q_counter_in_bits { u8 opcode[0x10]; - u8 reserved_at_10[0x10]; + u8 uid[0x10]; u8 reserved_at_20[0x10]; u8 op_mod[0x10]; @@ -8166,7 +8302,9 @@ struct mlx5_ifc_pcam_regs_5000_to_507f_bits { u8 port_access_reg_cap_mask_31_to_13[0x13]; u8 pbmc[0x1]; u8 pptb[0x1]; - u8 port_access_reg_cap_mask_10_to_0[0xb]; + u8 port_access_reg_cap_mask_10_to_09[0x2]; + u8 ppcnt[0x1]; + u8 port_access_reg_cap_mask_07_to_00[0x8]; }; struct mlx5_ifc_pcam_reg_bits { @@ -9030,7 +9168,7 @@ struct mlx5_ifc_dcbx_param_bits { u8 dcbx_cee_cap[0x1]; u8 dcbx_ieee_cap[0x1]; u8 dcbx_standby_cap[0x1]; - u8 reserved_at_0[0x5]; + u8 reserved_at_3[0x5]; u8 port_number[0x8]; u8 reserved_at_10[0xa]; u8 max_application_table_size[6]; @@ -9263,9 +9401,9 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits { }; struct mlx5_ifc_umem_bits { - u8 modify_field_select[0x40]; + u8 reserved_at_0[0x80]; - u8 reserved_at_40[0x5b]; + u8 reserved_at_80[0x1b]; u8 log_page_size[0x5]; u8 page_offset[0x20]; @@ -9276,19 +9414,46 @@ struct mlx5_ifc_umem_bits { }; struct mlx5_ifc_uctx_bits { - u8 modify_field_select[0x40]; + u8 cap[0x20]; - u8 reserved_at_40[0x1c0]; + u8 reserved_at_20[0x160]; }; struct mlx5_ifc_create_umem_in_bits { - struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; - struct mlx5_ifc_umem_bits umem; + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x40]; + + struct mlx5_ifc_umem_bits umem; }; struct mlx5_ifc_create_uctx_in_bits { - struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; - struct mlx5_ifc_uctx_bits uctx; + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x40]; + + struct mlx5_ifc_uctx_bits uctx; +}; + +struct mlx5_ifc_destroy_uctx_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 uid[0x10]; + + u8 reserved_at_60[0x20]; }; struct mlx5_ifc_mtrc_string_db_param_bits { diff --git a/include/linux/mlx5/port.h b/include/linux/mlx5/port.h index 34aed6032f86..bf4bc01ffb0c 100644 --- a/include/linux/mlx5/port.h +++ b/include/linux/mlx5/port.h @@ 
-107,9 +107,6 @@ enum mlx5e_connector_type { #define MLX5E_PROT_MASK(link_mode) (1 << link_mode) -#define PORT_MODULE_EVENT_MODULE_STATUS_MASK 0xF -#define PORT_MODULE_EVENT_ERROR_TYPE_MASK 0xF - int mlx5_set_port_caps(struct mlx5_core_dev *dev, u8 port_num, u32 caps); int mlx5_query_port_ptys(struct mlx5_core_dev *dev, u32 *ptys, int ptys_size, int proto_mask, u8 local_port); diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h index fbe322c966bc..b26ea9077384 100644 --- a/include/linux/mlx5/qp.h +++ b/include/linux/mlx5/qp.h @@ -596,6 +596,11 @@ int mlx5_core_dealloc_q_counter(struct mlx5_core_dev *dev, u16 counter_id); int mlx5_core_query_q_counter(struct mlx5_core_dev *dev, u16 counter_id, int reset, void *out, int out_size); +struct mlx5_core_rsc_common *mlx5_core_res_hold(struct mlx5_core_dev *dev, + int res_num, + enum mlx5_res_type res_type); +void mlx5_core_res_put(struct mlx5_core_rsc_common *res); + static inline const char *mlx5_qp_type_str(int type) { switch (type) { diff --git a/include/linux/mlx5/srq.h b/include/linux/mlx5/srq.h deleted file mode 100644 index 1b1f3c20c6a3..000000000000 --- a/include/linux/mlx5/srq.h +++ /dev/null @@ -1,72 +0,0 @@ -/* - * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved. - * - * This software is available to you under a choice of one of two - * licenses. You may choose to be licensed under the terms of the GNU - * General Public License (GPL) Version 2, available from the file - * COPYING in the main directory of this source tree, or the - * OpenIB.org BSD license below: - * - * Redistribution and use in source and binary forms, with or - * without modification, are permitted provided that the following - * conditions are met: - * - * - Redistributions of source code must retain the above - * copyright notice, this list of conditions and the following - * disclaimer. - * - * - Redistributions in binary form must reproduce the above - * copyright notice, this list of conditions and the following - * disclaimer in the documentation and/or other materials - * provided with the distribution. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. 
- */ - -#ifndef MLX5_SRQ_H -#define MLX5_SRQ_H - -#include - -enum { - MLX5_SRQ_FLAG_ERR = (1 << 0), - MLX5_SRQ_FLAG_WQ_SIG = (1 << 1), - MLX5_SRQ_FLAG_RNDV = (1 << 2), -}; - -struct mlx5_srq_attr { - u32 type; - u32 flags; - u32 log_size; - u32 wqe_shift; - u32 log_page_size; - u32 wqe_cnt; - u32 srqn; - u32 xrcd; - u32 page_offset; - u32 cqn; - u32 pd; - u32 lwm; - u32 user_index; - u64 db_record; - __be64 *pas; - u32 tm_log_list_size; - u32 tm_next_tag; - u32 tm_hw_phase_cnt; - u32 tm_sw_phase_cnt; - u16 uid; -}; - -struct mlx5_core_dev; - -void mlx5_init_srq_table(struct mlx5_core_dev *dev); -void mlx5_cleanup_srq_table(struct mlx5_core_dev *dev); - -#endif /* MLX5_SRQ_H */ diff --git a/include/linux/mlx5/transobj.h b/include/linux/mlx5/transobj.h index 7f5ca2cd3a32..a261d5528ff7 100644 --- a/include/linux/mlx5/transobj.h +++ b/include/linux/mlx5/transobj.h @@ -58,17 +58,6 @@ int mlx5_core_create_tis(struct mlx5_core_dev *dev, u32 *in, int inlen, int mlx5_core_modify_tis(struct mlx5_core_dev *dev, u32 tisn, u32 *in, int inlen); void mlx5_core_destroy_tis(struct mlx5_core_dev *dev, u32 tisn); -int mlx5_core_create_rmp(struct mlx5_core_dev *dev, u32 *in, int inlen, - u32 *rmpn); -int mlx5_core_modify_rmp(struct mlx5_core_dev *dev, u32 *in, int inlen); -int mlx5_core_destroy_rmp(struct mlx5_core_dev *dev, u32 rmpn); -int mlx5_core_query_rmp(struct mlx5_core_dev *dev, u32 rmpn, u32 *out); -int mlx5_core_arm_rmp(struct mlx5_core_dev *dev, u32 rmpn, u16 lwm); -int mlx5_core_create_xsrq(struct mlx5_core_dev *dev, u32 *in, int inlen, - u32 *rmpn); -int mlx5_core_destroy_xsrq(struct mlx5_core_dev *dev, u32 rmpn); -int mlx5_core_arm_xsrq(struct mlx5_core_dev *dev, u32 rmpn, u16 lwm); - int mlx5_core_create_rqt(struct mlx5_core_dev *dev, u32 *in, int inlen, u32 *rqtn); int mlx5_core_modify_rqt(struct mlx5_core_dev *dev, u32 rqtn, u32 *in, diff --git a/include/linux/mm.h b/include/linux/mm.h index 5411de93a363..ea1f12d15365 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -48,7 +48,32 @@ static inline void set_max_mapnr(unsigned long limit) static inline void set_max_mapnr(unsigned long limit) { } #endif -extern unsigned long totalram_pages; +extern atomic_long_t _totalram_pages; +static inline unsigned long totalram_pages(void) +{ + return (unsigned long)atomic_long_read(&_totalram_pages); +} + +static inline void totalram_pages_inc(void) +{ + atomic_long_inc(&_totalram_pages); +} + +static inline void totalram_pages_dec(void) +{ + atomic_long_dec(&_totalram_pages); +} + +static inline void totalram_pages_add(long count) +{ + atomic_long_add(count, &_totalram_pages); +} + +static inline void totalram_pages_set(long val) +{ + atomic_long_set(&_totalram_pages, val); +} + extern void * high_memory; extern int page_cluster; @@ -804,6 +829,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf); #define NODES_PGOFF (SECTIONS_PGOFF - NODES_WIDTH) #define ZONES_PGOFF (NODES_PGOFF - ZONES_WIDTH) #define LAST_CPUPID_PGOFF (ZONES_PGOFF - LAST_CPUPID_WIDTH) +#define KASAN_TAG_PGOFF (LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH) /* * Define the bit shifts to access each section. 
For non-existent @@ -814,6 +840,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf); #define NODES_PGSHIFT (NODES_PGOFF * (NODES_WIDTH != 0)) #define ZONES_PGSHIFT (ZONES_PGOFF * (ZONES_WIDTH != 0)) #define LAST_CPUPID_PGSHIFT (LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0)) +#define KASAN_TAG_PGSHIFT (KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0)) /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */ #ifdef NODE_NOT_IN_PAGE_FLAGS @@ -836,6 +863,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf); #define NODES_MASK ((1UL << NODES_WIDTH) - 1) #define SECTIONS_MASK ((1UL << SECTIONS_WIDTH) - 1) #define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_SHIFT) - 1) +#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1) #define ZONEID_MASK ((1UL << ZONEID_SHIFT) - 1) static inline enum zone_type page_zonenum(const struct page *page) @@ -1101,6 +1129,32 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid) } #endif /* CONFIG_NUMA_BALANCING */ +#ifdef CONFIG_KASAN_SW_TAGS +static inline u8 page_kasan_tag(const struct page *page) +{ + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK; +} + +static inline void page_kasan_tag_set(struct page *page, u8 tag) +{ + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT); + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT; +} + +static inline void page_kasan_tag_reset(struct page *page) +{ + page_kasan_tag_set(page, 0xff); +} +#else +static inline u8 page_kasan_tag(const struct page *page) +{ + return 0xff; +} + +static inline void page_kasan_tag_set(struct page *page, u8 tag) { } +static inline void page_kasan_tag_reset(struct page *page) { } +#endif + static inline struct zone *page_zone(const struct page *page) { return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)]; @@ -1397,6 +1451,8 @@ struct mm_walk { void *private; }; +struct mmu_notifier_range; + int walk_page_range(unsigned long addr, unsigned long end, struct mm_walk *walk); int walk_page_vma(struct vm_area_struct *vma, struct mm_walk *walk); @@ -1405,8 +1461,8 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr, int copy_page_range(struct mm_struct *dst, struct mm_struct *src, struct vm_area_struct *vma); int follow_pte_pmd(struct mm_struct *mm, unsigned long address, - unsigned long *start, unsigned long *end, - pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp); + struct mmu_notifier_range *range, + pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp); int follow_pfn(struct vm_area_struct *vma, unsigned long address, unsigned long *pfn); int follow_phys(struct vm_area_struct *vma, unsigned long address, @@ -1900,13 +1956,6 @@ static inline bool ptlock_init(struct page *page) return true; } -/* Reset page->mapping so free_pages_check won't complain. */ -static inline void pte_lock_deinit(struct page *page) -{ - page->mapping = NULL; - ptlock_free(page); -} - #else /* !USE_SPLIT_PTE_PTLOCKS */ /* * We use mm->page_table_lock to guard all pagetable pages of the mm. 
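(For illustration: a minimal userspace sketch of how the page_kasan_tag helpers added above pack an 8-bit tag into page->flags. The shift value and the bare struct page are stand-ins; in the tree the shift is derived from the neighbouring page-flags fields.)

#include <stdio.h>

#define KASAN_TAG_WIDTH   8
#define KASAN_TAG_PGSHIFT 48	/* placeholder; the kernel computes this from the other field widths */
#define KASAN_TAG_MASK    ((1UL << KASAN_TAG_WIDTH) - 1)

struct page { unsigned long flags; };	/* stand-in for the real struct page */

static unsigned char page_kasan_tag(const struct page *page)
{
	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
}

static void page_kasan_tag_set(struct page *page, unsigned char tag)
{
	/* clear the old tag, then shift the new one into place */
	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
	page->flags |= ((unsigned long)tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
}

int main(void)
{
	struct page p = { .flags = 0 };

	page_kasan_tag_set(&p, 0xab);
	printf("tag = 0x%x\n", page_kasan_tag(&p));	/* 0xab */
	page_kasan_tag_set(&p, 0xff);			/* 0xff is the "reset" tag */
	printf("tag = 0x%x\n", page_kasan_tag(&p));	/* 0xff */
	return 0;
}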
@@ -1917,7 +1966,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline void ptlock_cache_init(void) {} static inline bool ptlock_init(struct page *page) { return true; } -static inline void pte_lock_deinit(struct page *page) {} +static inline void ptlock_free(struct page *page) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ static inline void pgtable_init(void) @@ -1937,7 +1986,7 @@ static inline bool pgtable_page_ctor(struct page *page) static inline void pgtable_page_dtor(struct page *page) { - pte_lock_deinit(page); + ptlock_free(page); __ClearPageTable(page); dec_zone_page_state(page, NR_PAGETABLE); } @@ -2054,7 +2103,7 @@ extern void free_initmem(void); * Return pages freed into the buddy system. */ extern unsigned long free_reserved_area(void *start, void *end, - int poison, char *s); + int poison, const char *s); #ifdef CONFIG_HIGHMEM /* @@ -2202,6 +2251,7 @@ extern void zone_pcp_reset(struct zone *zone); /* page_alloc.c */ extern int min_free_kbytes; +extern int watermark_boost_factor; extern int watermark_scale_factor; /* nommu.c */ diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index 2a5fe75dd082..4d35ff36ceff 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -147,6 +147,9 @@ struct mmc_host_ops { /* Prepare HS400 target operating frequency depending host driver */ int (*prepare_hs400_tuning)(struct mmc_host *host, struct mmc_ios *ios); + /* Prepare switch to DDR during the HS400 init sequence */ + int (*hs400_prepare_ddr)(struct mmc_host *host); + /* Prepare for switching from HS400 to HS200 */ void (*hs400_downgrade)(struct mmc_host *host); @@ -331,7 +334,7 @@ struct mmc_host { #define MMC_CAP_UHS (MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | \ MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | \ MMC_CAP_UHS_DDR50) -/* (1 << 21) is free for reuse */ +#define MMC_CAP_SYNC_RUNTIME_PM (1 << 21) /* Synced runtime PM suspends. 
*/ #define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */ #define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */ #define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */ diff --git a/include/linux/mmc/sdio_ids.h b/include/linux/mmc/sdio_ids.h index 4224902a8e22..4332199c71c2 100644 --- a/include/linux/mmc/sdio_ids.h +++ b/include/linux/mmc/sdio_ids.h @@ -42,6 +42,7 @@ #define SDIO_DEVICE_ID_BROADCOM_4354 0x4354 #define SDIO_DEVICE_ID_BROADCOM_4356 0x4356 #define SDIO_DEVICE_ID_CYPRESS_4373 0x4373 +#define SDIO_DEVICE_ID_CYPRESS_43012 43012 #define SDIO_VENDOR_ID_INTEL 0x0089 #define SDIO_DEVICE_ID_INTEL_IWMC3200WIMAX 0x1402 diff --git a/include/linux/mmc/slot-gpio.h b/include/linux/mmc/slot-gpio.h index 06607c59c4d0..feebd7aa6f5c 100644 --- a/include/linux/mmc/slot-gpio.h +++ b/include/linux/mmc/slot-gpio.h @@ -17,12 +17,7 @@ struct mmc_host; int mmc_gpio_get_ro(struct mmc_host *host); -int mmc_gpio_request_ro(struct mmc_host *host, unsigned int gpio); - int mmc_gpio_get_cd(struct mmc_host *host); -int mmc_gpio_request_cd(struct mmc_host *host, unsigned int gpio, - unsigned int debounce); - int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id, unsigned int idx, bool override_active_level, unsigned int debounce, bool *gpio_invert); diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index 9893a6432adf..4050ec1c3b45 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -25,6 +25,13 @@ struct mmu_notifier_mm { spinlock_t lock; }; +struct mmu_notifier_range { + struct mm_struct *mm; + unsigned long start; + unsigned long end; + bool blockable; +}; + struct mmu_notifier_ops { /* * Called either by mmu_notifier_unregister or when the mm is @@ -146,12 +153,9 @@ struct mmu_notifier_ops { * */ int (*invalidate_range_start)(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end, - bool blockable); + const struct mmu_notifier_range *range); void (*invalidate_range_end)(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end); + const struct mmu_notifier_range *range); /* * invalidate_range() is either called between @@ -216,11 +220,8 @@ extern int __mmu_notifier_test_young(struct mm_struct *mm, unsigned long address); extern void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address, pte_t pte); -extern int __mmu_notifier_invalidate_range_start(struct mm_struct *mm, - unsigned long start, unsigned long end, - bool blockable); -extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm, - unsigned long start, unsigned long end, +extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *r); +extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r, bool only_end); extern void __mmu_notifier_invalidate_range(struct mm_struct *mm, unsigned long start, unsigned long end); @@ -264,33 +265,37 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm, __mmu_notifier_change_pte(mm, address, pte); } -static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline void +mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) { - if (mm_has_notifiers(mm)) - __mmu_notifier_invalidate_range_start(mm, start, end, true); + if (mm_has_notifiers(range->mm)) { + range->blockable = true; + __mmu_notifier_invalidate_range_start(range); + } } -static inline int 
mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline int +mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range) { - if (mm_has_notifiers(mm)) - return __mmu_notifier_invalidate_range_start(mm, start, end, false); + if (mm_has_notifiers(range->mm)) { + range->blockable = false; + return __mmu_notifier_invalidate_range_start(range); + } return 0; } -static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline void +mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { - if (mm_has_notifiers(mm)) - __mmu_notifier_invalidate_range_end(mm, start, end, false); + if (mm_has_notifiers(range->mm)) + __mmu_notifier_invalidate_range_end(range, false); } -static inline void mmu_notifier_invalidate_range_only_end(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline void +mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range) { - if (mm_has_notifiers(mm)) - __mmu_notifier_invalidate_range_end(mm, start, end, true); + if (mm_has_notifiers(range->mm)) + __mmu_notifier_invalidate_range_end(range, true); } static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, @@ -311,6 +316,17 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm) __mmu_notifier_mm_destroy(mm); } + +static inline void mmu_notifier_range_init(struct mmu_notifier_range *range, + struct mm_struct *mm, + unsigned long start, + unsigned long end) +{ + range->mm = mm; + range->start = start; + range->end = end; +} + #define ptep_clear_flush_young_notify(__vma, __address, __ptep) \ ({ \ int __young; \ @@ -420,10 +436,26 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm) extern void mmu_notifier_call_srcu(struct rcu_head *rcu, void (*func)(struct rcu_head *rcu)); -extern void mmu_notifier_synchronize(void); #else /* CONFIG_MMU_NOTIFIER */ +struct mmu_notifier_range { + unsigned long start; + unsigned long end; +}; + +static inline void _mmu_notifier_range_init(struct mmu_notifier_range *range, + unsigned long start, + unsigned long end) +{ + range->start = start; + range->end = end; +} + +#define mmu_notifier_range_init(range, mm, start, end) \ + _mmu_notifier_range_init(range, start, end) + + static inline int mm_has_notifiers(struct mm_struct *mm) { return 0; @@ -451,24 +483,24 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm, { } -static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline void +mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) { } -static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline int +mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range) { return 0; } -static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline +void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { } -static inline void mmu_notifier_invalidate_range_only_end(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline void +mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range) { } diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index db023a92f3a4..cc4a507d7ca4 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ 
-65,7 +65,7 @@ enum migratetype { }; /* In mm/page_alloc.c; keep in sync also with show_migration_types() there */ -extern char * const migratetype_names[MIGRATE_TYPES]; +extern const char * const migratetype_names[MIGRATE_TYPES]; #ifdef CONFIG_CMA # define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA) @@ -269,9 +269,10 @@ enum zone_watermarks { NR_WMARK }; -#define min_wmark_pages(z) (z->watermark[WMARK_MIN]) -#define low_wmark_pages(z) (z->watermark[WMARK_LOW]) -#define high_wmark_pages(z) (z->watermark[WMARK_HIGH]) +#define min_wmark_pages(z) (z->_watermark[WMARK_MIN] + z->watermark_boost) +#define low_wmark_pages(z) (z->_watermark[WMARK_LOW] + z->watermark_boost) +#define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost) +#define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost) struct per_cpu_pages { int count; /* number of pages in the list */ @@ -314,7 +315,7 @@ enum zone_type { * Architecture Limit * --------------------------- * parisc, ia64, sparc <4G - * s390 <2G + * s390, powerpc <2G * arm Various * alpha Unlimited or 0-16MB. * @@ -362,7 +363,8 @@ struct zone { /* Read-mostly fields */ /* zone watermarks, access with *_wmark_pages(zone) macros */ - unsigned long watermark[NR_WMARK]; + unsigned long _watermark[NR_WMARK]; + unsigned long watermark_boost; unsigned long nr_reserved_highatomic; @@ -428,14 +430,8 @@ struct zone { * Write access to present_pages at runtime should be protected by * mem_hotplug_begin/end(). Any reader who can't tolerate drift of * present_pages should get_online_mems() to get a stable value. - * - * Read access to managed_pages should be safe because it's unsigned - * long. Write access to zone->managed_pages and totalram_pages are - * protected by managed_page_count_lock at runtime. Idealy only - * adjust_managed_page_count() should be used instead of directly - * touching zone->managed_pages and totalram_pages. */ - unsigned long managed_pages; + atomic_long_t managed_pages; unsigned long spanned_pages; unsigned long present_pages; @@ -524,6 +520,11 @@ enum pgdat_flags { PGDAT_RECLAIM_LOCKED, /* prevents concurrent reclaim */ }; +static inline unsigned long zone_managed_pages(struct zone *zone) +{ + return (unsigned long)atomic_long_read(&zone->managed_pages); +} + static inline unsigned long zone_end_pfn(const struct zone *zone) { return zone->zone_start_pfn + zone->spanned_pages; @@ -635,9 +636,8 @@ typedef struct pglist_data { #endif #if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_DEFERRED_STRUCT_PAGE_INIT) /* - * Must be held any time you expect node_start_pfn, node_present_pages - * or node_spanned_pages stay constant. Holding this will also - * guarantee that any pfn_valid() stays that way. + * Must be held any time you expect node_start_pfn, + * node_present_pages, node_spanned_pages or nr_zones to stay constant. * * pgdat_resize_lock() and pgdat_resize_unlock() are provided to * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG @@ -691,8 +691,6 @@ typedef struct pglist_data { * is the first PFN that needs to be initialised.
*/ unsigned long first_deferred_pfn; - /* Number of non-deferred pages */ - unsigned long static_init_pgcnt; #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */ #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -820,7 +818,7 @@ static inline bool is_dev_zone(const struct zone *zone) */ static inline bool managed_zone(struct zone *zone) { - return zone->managed_pages; + return zone_managed_pages(zone); } /* Returns true if a zone has memory */ @@ -890,6 +888,8 @@ static inline int is_highmem(struct zone *zone) struct ctl_table; int min_free_kbytes_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *); +int watermark_boost_factor_sysctl_handler(struct ctl_table *, int, + void __user *, size_t *, loff_t *); int watermark_scale_factor_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *); extern int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES]; diff --git a/include/linux/module.h b/include/linux/module.h index fce6b4335e36..d5453eb5a68b 100644 --- a/include/linux/module.h +++ b/include/linux/module.h @@ -432,6 +432,10 @@ struct module { unsigned int num_tracepoints; tracepoint_ptr_t *tracepoints_ptrs; #endif +#ifdef CONFIG_BPF_EVENTS + unsigned int num_bpf_raw_events; + struct bpf_raw_event_map *bpf_raw_events; +#endif #ifdef HAVE_JUMP_LABEL struct jump_entry *jump_entries; unsigned int num_jump_entries; @@ -486,6 +490,13 @@ struct module { #define MODULE_ARCH_INIT {} #endif +#ifndef HAVE_ARCH_KALLSYMS_SYMBOL_VALUE +static inline unsigned long kallsyms_symbol_value(const Elf_Sym *sym) +{ + return sym->st_value; +} +#endif + extern struct mutex module_mutex; /* FIXME: It'd be nice to isolate modules during init, too, so they diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h index ba8fa9072aca..677768b21a1d 100644 --- a/include/linux/mtd/mtd.h +++ b/include/linux/mtd/mtd.h @@ -25,6 +25,7 @@ #include #include #include +#include #include @@ -342,6 +343,7 @@ struct mtd_info { struct device dev; int usecount; struct mtd_debug_info dbg; + struct nvmem_device *nvmem; }; int mtd_ooblayout_ecc(struct mtd_info *mtd, int section, diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 857f8abf7b91..1377d085ef99 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -845,6 +845,8 @@ enum tc_setup_type { TC_SETUP_QDISC_PRIO, TC_SETUP_QDISC_MQ, TC_SETUP_QDISC_ETF, + TC_SETUP_ROOT_QDISC, + TC_SETUP_QDISC_GRED, }; /* These structures hold the attributes of bpf state that are being passed @@ -863,9 +865,6 @@ enum bpf_netdev_command { XDP_QUERY_PROG, XDP_QUERY_PROG_HW, /* BPF program for offload callbacks, invoked at program load time. */ - BPF_OFFLOAD_VERIFIER_PREP, - BPF_OFFLOAD_TRANSLATE, - BPF_OFFLOAD_DESTROY, BPF_OFFLOAD_MAP_ALLOC, BPF_OFFLOAD_MAP_FREE, XDP_QUERY_XSK_UMEM, @@ -891,15 +890,6 @@ struct netdev_bpf { /* flags with which program was installed */ u32 prog_flags; }; - /* BPF_OFFLOAD_VERIFIER_PREP */ - struct { - struct bpf_prog *prog; - const struct bpf_prog_offload_ops *ops; /* callee set */ - } verifier; - /* BPF_OFFLOAD_TRANSLATE, BPF_OFFLOAD_DESTROY */ - struct { - struct bpf_prog *prog; - } offload; /* BPF_OFFLOAD_MAP_ALLOC, BPF_OFFLOAD_MAP_FREE */ struct { struct bpf_offloaded_map *offmap; @@ -1175,7 +1165,7 @@ struct dev_ifalias { * entries to skb and update idx with the number of entries. 
* * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh, - * u16 flags) + * u16 flags, struct netlink_ext_ack *extack) * int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq, * struct net_device *dev, u32 filter_mask, * int nlflags) @@ -1397,10 +1387,16 @@ struct net_device_ops { struct net_device *dev, struct net_device *filter_dev, int *idx); - + int (*ndo_fdb_get)(struct sk_buff *skb, + struct nlattr *tb[], + struct net_device *dev, + const unsigned char *addr, + u16 vid, u32 portid, u32 seq, + struct netlink_ext_ack *extack); int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh, - u16 flags); + u16 flags, + struct netlink_ext_ack *extack); int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq, struct net_device *dev, @@ -2388,13 +2384,13 @@ struct pcpu_sw_netstats { u64 tx_packets; u64 tx_bytes; struct u64_stats_sync syncp; -}; +} __aligned(4 * sizeof(u64)); struct pcpu_lstats { u64 packets; u64 bytes; struct u64_stats_sync syncp; -}; +} __aligned(2 * sizeof(u64)); #define __netdev_alloc_pcpu_stats(type, gfp) \ ({ \ @@ -2459,7 +2455,8 @@ enum netdev_cmd { NETDEV_REGISTER, NETDEV_UNREGISTER, NETDEV_CHANGEMTU, /* notify after mtu change happened */ - NETDEV_CHANGEADDR, + NETDEV_CHANGEADDR, /* notify after the address change */ + NETDEV_PRE_CHANGEADDR, /* notify before the address change */ NETDEV_GOING_DOWN, NETDEV_CHANGENAME, NETDEV_FEAT_CHANGE, @@ -2521,6 +2518,11 @@ struct netdev_notifier_changelowerstate_info { void *lower_state_info; /* is lower dev state */ }; +struct netdev_notifier_pre_changeaddr_info { + struct netdev_notifier_info info; /* must be first */ + const unsigned char *dev_addr; +}; + static inline void netdev_notifier_info_init(struct netdev_notifier_info *info, struct net_device *dev) { @@ -2615,7 +2617,7 @@ struct net_device *dev_get_by_name(struct net *net, const char *name); struct net_device *dev_get_by_name_rcu(struct net *net, const char *name); struct net_device *__dev_get_by_name(struct net *net, const char *name); int dev_alloc_name(struct net_device *dev, const char *name); -int dev_open(struct net_device *dev); +int dev_open(struct net_device *dev, struct netlink_ext_ack *extack); void dev_close(struct net_device *dev); void dev_close_many(struct list_head *head, bool unlink); void dev_disable_lro(struct net_device *dev); @@ -3224,6 +3226,14 @@ static inline void netdev_sent_queue(struct net_device *dev, unsigned int bytes) netdev_tx_sent_queue(netdev_get_tx_queue(dev, 0), bytes); } +static inline bool __netdev_sent_queue(struct net_device *dev, + unsigned int bytes, + bool xmit_more) +{ + return __netdev_tx_sent_queue(netdev_get_tx_queue(dev, 0), bytes, + xmit_more); +} + static inline void netdev_tx_completed_queue(struct netdev_queue *dev_queue, unsigned int pkts, unsigned int bytes) { @@ -3613,8 +3623,10 @@ int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr, int dev_ifconf(struct net *net, struct ifconf *, int); int dev_ethtool(struct net *net, struct ifreq *); unsigned int dev_get_flags(const struct net_device *); -int __dev_change_flags(struct net_device *, unsigned int flags); -int dev_change_flags(struct net_device *, unsigned int); +int __dev_change_flags(struct net_device *dev, unsigned int flags, + struct netlink_ext_ack *extack); +int dev_change_flags(struct net_device *dev, unsigned int flags, + struct netlink_ext_ack *extack); void __dev_notify_flags(struct net_device *, unsigned int old_flags, unsigned int gchanges); int dev_change_name(struct net_device *, const char 
*); @@ -3627,7 +3639,10 @@ int dev_set_mtu_ext(struct net_device *dev, int mtu, int dev_set_mtu(struct net_device *, int); int dev_change_tx_queue_len(struct net_device *, unsigned long); void dev_set_group(struct net_device *, int); -int dev_set_mac_address(struct net_device *, struct sockaddr *); +int dev_pre_changeaddr_notify(struct net_device *dev, const char *addr, + struct netlink_ext_ack *extack); +int dev_set_mac_address(struct net_device *dev, struct sockaddr *sa, + struct netlink_ext_ack *extack); int dev_change_carrier(struct net_device *, bool new_carrier); int dev_get_phys_port_id(struct net_device *dev, struct netdev_phys_item_id *ppid); @@ -4068,6 +4083,16 @@ int __hw_addr_sync_dev(struct netdev_hw_addr_list *list, int (*sync)(struct net_device *, const unsigned char *), int (*unsync)(struct net_device *, const unsigned char *)); +int __hw_addr_ref_sync_dev(struct netdev_hw_addr_list *list, + struct net_device *dev, + int (*sync)(struct net_device *, + const unsigned char *, int), + int (*unsync)(struct net_device *, + const unsigned char *, int)); +void __hw_addr_ref_unsync_dev(struct netdev_hw_addr_list *list, + struct net_device *dev, + int (*unsync)(struct net_device *, + const unsigned char *, int)); void __hw_addr_unsync_dev(struct netdev_hw_addr_list *list, struct net_device *dev, int (*unsync)(struct net_device *, @@ -4332,9 +4357,10 @@ static inline bool can_checksum_protocol(netdev_features_t features, } #ifdef CONFIG_BUG -void netdev_rx_csum_fault(struct net_device *dev); +void netdev_rx_csum_fault(struct net_device *dev, struct sk_buff *skb); #else -static inline void netdev_rx_csum_fault(struct net_device *dev) +static inline void netdev_rx_csum_fault(struct net_device *dev, + struct sk_buff *skb) { } #endif @@ -4360,7 +4386,7 @@ static inline netdev_tx_t netdev_start_xmit(struct sk_buff *skb, struct net_devi struct netdev_queue *txq, bool more) { const struct net_device_ops *ops = dev->netdev_ops; - int rc; + netdev_tx_t rc; rc = __netdev_start_xmit(ops, skb, dev, more); if (rc == NETDEV_TX_OK) diff --git a/include/linux/netfilter/ipset/ip_set.h b/include/linux/netfilter/ipset/ip_set.h index 1d100efe74ec..f2e1e6b13ca4 100644 --- a/include/linux/netfilter/ipset/ip_set.h +++ b/include/linux/netfilter/ipset/ip_set.h @@ -303,11 +303,11 @@ ip_set_put_flags(struct sk_buff *skb, struct ip_set *set) /* Netlink CB args */ enum { IPSET_CB_NET = 0, /* net namespace */ + IPSET_CB_PROTO, /* ipset protocol */ IPSET_CB_DUMP, /* dump single set/all sets */ IPSET_CB_INDEX, /* set index */ IPSET_CB_PRIVATE, /* set private data */ IPSET_CB_ARG0, /* type specific */ - IPSET_CB_ARG1, }; /* register and unregister set references */ diff --git a/include/linux/netfilter/nf_conntrack_proto_gre.h b/include/linux/netfilter/nf_conntrack_proto_gre.h index 14edb795ab43..6989e2e4eabf 100644 --- a/include/linux/netfilter/nf_conntrack_proto_gre.h +++ b/include/linux/netfilter/nf_conntrack_proto_gre.h @@ -41,7 +41,5 @@ int nf_ct_gre_keymap_add(struct nf_conn *ct, enum ip_conntrack_dir dir, /* delete keymap entries */ void nf_ct_gre_keymap_destroy(struct nf_conn *ct); -void nf_nat_need_gre(void); - #endif /* __KERNEL__ */ #endif /* _CONNTRACK_PROTO_GRE_H */ diff --git a/include/linux/netfilter_bridge.h b/include/linux/netfilter_bridge.h index fa0686500970..5f2614d02e03 100644 --- a/include/linux/netfilter_bridge.h +++ b/include/linux/netfilter_bridge.h @@ -17,43 +17,58 @@ static inline void br_drop_fake_rtable(struct sk_buff *skb) skb_dst_drop(skb); } +static inline struct nf_bridge_info * 
+nf_bridge_info_get(const struct sk_buff *skb) +{ + return skb_ext_find(skb, SKB_EXT_BRIDGE_NF); +} + +static inline bool nf_bridge_info_exists(const struct sk_buff *skb) +{ + return skb_ext_exist(skb, SKB_EXT_BRIDGE_NF); +} + static inline int nf_bridge_get_physinif(const struct sk_buff *skb) { - struct nf_bridge_info *nf_bridge; + const struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); - if (skb->nf_bridge == NULL) + if (!nf_bridge) return 0; - nf_bridge = skb->nf_bridge; return nf_bridge->physindev ? nf_bridge->physindev->ifindex : 0; } static inline int nf_bridge_get_physoutif(const struct sk_buff *skb) { - struct nf_bridge_info *nf_bridge; + const struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); - if (skb->nf_bridge == NULL) + if (!nf_bridge) return 0; - nf_bridge = skb->nf_bridge; return nf_bridge->physoutdev ? nf_bridge->physoutdev->ifindex : 0; } static inline struct net_device * nf_bridge_get_physindev(const struct sk_buff *skb) { - return skb->nf_bridge ? skb->nf_bridge->physindev : NULL; + const struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); + + return nf_bridge ? nf_bridge->physindev : NULL; } static inline struct net_device * nf_bridge_get_physoutdev(const struct sk_buff *skb) { - return skb->nf_bridge ? skb->nf_bridge->physoutdev : NULL; + const struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); + + return nf_bridge ? nf_bridge->physoutdev : NULL; } static inline bool nf_bridge_in_prerouting(const struct sk_buff *skb) { - return skb->nf_bridge && skb->nf_bridge->in_prerouting; + const struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb); + + return nf_bridge && nf_bridge->in_prerouting; } #else #define br_drop_fake_rtable(skb) do { } while (0) diff --git a/include/linux/netlink.h b/include/linux/netlink.h index 4da90a6ab536..4e8add270200 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -34,8 +34,8 @@ struct netlink_skb_parms { #define NETLINK_CREDS(skb) (&NETLINK_CB((skb)).creds) -extern void netlink_table_grab(void); -extern void netlink_table_ungrab(void); +void netlink_table_grab(void); +void netlink_table_ungrab(void); #define NL_CFG_F_NONROOT_RECV (1 << 0) #define NL_CFG_F_NONROOT_SEND (1 << 1) @@ -51,7 +51,7 @@ struct netlink_kernel_cfg { bool (*compare)(struct net *net, struct sock *sk); }; -extern struct sock *__netlink_kernel_create(struct net *net, int unit, +struct sock *__netlink_kernel_create(struct net *net, int unit, struct module *module, struct netlink_kernel_cfg *cfg); static inline struct sock * @@ -110,24 +110,33 @@ struct netlink_ext_ack { } \ } while (0) -extern void netlink_kernel_release(struct sock *sk); -extern int __netlink_change_ngroups(struct sock *sk, unsigned int groups); -extern int netlink_change_ngroups(struct sock *sk, unsigned int groups); -extern void __netlink_clear_multicast_users(struct sock *sk, unsigned int group); -extern void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err, - const struct netlink_ext_ack *extack); -extern int netlink_has_listeners(struct sock *sk, unsigned int group); - -extern int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 portid, int nonblock); -extern int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 portid, - __u32 group, gfp_t allocation); -extern int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb, - __u32 portid, __u32 group, gfp_t allocation, - int (*filter)(struct sock *dsk, struct sk_buff *skb, void *data), - void *filter_data); -extern int netlink_set_err(struct sock 
*ssk, __u32 portid, __u32 group, int code); -extern int netlink_register_notifier(struct notifier_block *nb); -extern int netlink_unregister_notifier(struct notifier_block *nb); +static inline void nl_set_extack_cookie_u64(struct netlink_ext_ack *extack, + u64 cookie) +{ + u64 __cookie = cookie; + + memcpy(extack->cookie, &__cookie, sizeof(__cookie)); + extack->cookie_len = sizeof(__cookie); +} + +void netlink_kernel_release(struct sock *sk); +int __netlink_change_ngroups(struct sock *sk, unsigned int groups); +int netlink_change_ngroups(struct sock *sk, unsigned int groups); +void __netlink_clear_multicast_users(struct sock *sk, unsigned int group); +void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err, + const struct netlink_ext_ack *extack); +int netlink_has_listeners(struct sock *sk, unsigned int group); + +int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 portid, int nonblock); +int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 portid, + __u32 group, gfp_t allocation); +int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb, + __u32 portid, __u32 group, gfp_t allocation, + int (*filter)(struct sock *dsk, struct sk_buff *skb, void *data), + void *filter_data); +int netlink_set_err(struct sock *ssk, __u32 portid, __u32 group, int code); +int netlink_register_notifier(struct notifier_block *nb); +int netlink_unregister_notifier(struct notifier_block *nb); /* finegrained unicast helpers: */ struct sock *netlink_getsockbyfilp(struct file *filp); @@ -203,7 +212,7 @@ struct netlink_dump_control { u16 min_dump_alloc; }; -extern int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb, +int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb, const struct nlmsghdr *nlh, struct netlink_dump_control *control); static inline int netlink_dump_start(struct sock *ssk, struct sk_buff *skb, @@ -222,8 +231,8 @@ struct netlink_tap { struct list_head list; }; -extern int netlink_add_tap(struct netlink_tap *nt); -extern int netlink_remove_tap(struct netlink_tap *nt); +int netlink_add_tap(struct netlink_tap *nt); +int netlink_remove_tap(struct netlink_tap *nt); bool __netlink_ns_capable(const struct netlink_skb_parms *nsp, struct user_namespace *ns, int cap); diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h index 496ff759f84c..91745cc3704c 100644 --- a/include/linux/nvme-fc-driver.h +++ b/include/linux/nvme-fc-driver.h @@ -403,7 +403,6 @@ struct nvme_fc_port_template { void **handle); void (*delete_queue)(struct nvme_fc_local_port *, unsigned int qidx, void *handle); - void (*poll_queue)(struct nvme_fc_local_port *, void *handle); int (*ls_req)(struct nvme_fc_local_port *, struct nvme_fc_remote_port *, struct nvmefc_ls_req *); @@ -649,22 +648,6 @@ enum { * sequence in one LLDD operation. Errors during Data * sequence transmit must not allow RSP sequence to be sent. */ - NVMET_FCTGTFEAT_CMD_IN_ISR = (1 << 1), - /* Bit 2: When 0, the LLDD is calling the cmd rcv handler - * in a non-isr context, allowing the transport to finish - * op completion in the calling context. When 1, the LLDD - * is calling the cmd rcv handler in an ISR context, - * requiring the transport to transition to a workqueue - * for op completion. - */ - NVMET_FCTGTFEAT_OPDONE_IN_ISR = (1 << 2), - /* Bit 3: When 0, the LLDD is calling the op done handler - * in a non-isr context, allowing the transport to finish - * op completion in the calling context. 
When 1, the LLDD - * is calling the op done handler in an ISR context, - * requiring the transport to transition to a workqueue - * for op completion. - */ }; diff --git a/include/linux/nvme-tcp.h b/include/linux/nvme-tcp.h new file mode 100644 index 000000000000..03d87c0550a9 --- /dev/null +++ b/include/linux/nvme-tcp.h @@ -0,0 +1,189 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * NVMe over Fabrics TCP protocol header. + * Copyright (c) 2018 Lightbits Labs. All rights reserved. + */ + +#ifndef _LINUX_NVME_TCP_H +#define _LINUX_NVME_TCP_H + +#include <linux/nvme.h> + +#define NVME_TCP_DISC_PORT 8009 +#define NVME_TCP_ADMIN_CCSZ SZ_8K +#define NVME_TCP_DIGEST_LENGTH 4 + +enum nvme_tcp_pfv { + NVME_TCP_PFV_1_0 = 0x0, +}; + +enum nvme_tcp_fatal_error_status { + NVME_TCP_FES_INVALID_PDU_HDR = 0x01, + NVME_TCP_FES_PDU_SEQ_ERR = 0x02, + NVME_TCP_FES_HDR_DIGEST_ERR = 0x03, + NVME_TCP_FES_DATA_OUT_OF_RANGE = 0x04, + NVME_TCP_FES_R2T_LIMIT_EXCEEDED = 0x05, + NVME_TCP_FES_DATA_LIMIT_EXCEEDED = 0x05, + NVME_TCP_FES_UNSUPPORTED_PARAM = 0x06, +}; + +enum nvme_tcp_digest_option { + NVME_TCP_HDR_DIGEST_ENABLE = (1 << 0), + NVME_TCP_DATA_DIGEST_ENABLE = (1 << 1), +}; + +enum nvme_tcp_pdu_type { + nvme_tcp_icreq = 0x0, + nvme_tcp_icresp = 0x1, + nvme_tcp_h2c_term = 0x2, + nvme_tcp_c2h_term = 0x3, + nvme_tcp_cmd = 0x4, + nvme_tcp_rsp = 0x5, + nvme_tcp_h2c_data = 0x6, + nvme_tcp_c2h_data = 0x7, + nvme_tcp_r2t = 0x9, +}; + +enum nvme_tcp_pdu_flags { + NVME_TCP_F_HDGST = (1 << 0), + NVME_TCP_F_DDGST = (1 << 1), + NVME_TCP_F_DATA_LAST = (1 << 2), + NVME_TCP_F_DATA_SUCCESS = (1 << 3), +}; + +/** + * struct nvme_tcp_hdr - nvme tcp pdu common header + * + * @type: pdu type + * @flags: pdu specific flags + * @hlen: pdu header length + * @pdo: pdu data offset + * @plen: pdu wire byte length + */ +struct nvme_tcp_hdr { + __u8 type; + __u8 flags; + __u8 hlen; + __u8 pdo; + __le32 plen; +}; + +/** + * struct nvme_tcp_icreq_pdu - nvme tcp initialize connection request pdu + * + * @hdr: pdu generic header + * @pfv: pdu version format + * @hpda: host pdu data alignment (dwords, 0's based) + * @digest: digest types enabled + * @maxr2t: maximum r2ts per request supported + */ +struct nvme_tcp_icreq_pdu { + struct nvme_tcp_hdr hdr; + __le16 pfv; + __u8 hpda; + __u8 digest; + __le32 maxr2t; + __u8 rsvd2[112]; +}; + +/** + * struct nvme_tcp_icresp_pdu - nvme tcp initialize connection response pdu + * + * @hdr: pdu common header + * @pfv: pdu version format + * @cpda: controller pdu data alignment (dwords, 0's based) + * @digest: digest types enabled + * @maxdata: maximum data capsules per r2t supported + */ +struct nvme_tcp_icresp_pdu { + struct nvme_tcp_hdr hdr; + __le16 pfv; + __u8 cpda; + __u8 digest; + __le32 maxdata; + __u8 rsvd[112]; +}; + +/** + * struct nvme_tcp_term_pdu - nvme tcp terminate connection pdu + * + * @hdr: pdu common header + * @fes: fatal error status + * @fei: fatal error information + */ +struct nvme_tcp_term_pdu { + struct nvme_tcp_hdr hdr; + __le16 fes; + __le32 fei; + __u8 rsvd[8]; +}; + +/** + * struct nvme_tcp_cmd_pdu - nvme tcp command capsule pdu + * + * @hdr: pdu common header + * @cmd: nvme command + */ +struct nvme_tcp_cmd_pdu { + struct nvme_tcp_hdr hdr; + struct nvme_command cmd; +}; + +/** + * struct nvme_tcp_rsp_pdu - nvme tcp response capsule pdu + * + * @hdr: pdu common header + * @cqe: nvme completion queue entry + */ +struct nvme_tcp_rsp_pdu { + struct nvme_tcp_hdr hdr; + struct nvme_completion cqe; +}; + +/** + * struct nvme_tcp_r2t_pdu - nvme tcp
ready-to-transfer pdu + * + * @hdr: pdu common header + * @command_id: nvme command identifier which this relates to + * @ttag: transfer tag (controller generated) + * @r2t_offset: offset from the start of the command data + * @r2t_length: length the host is allowed to send + */ +struct nvme_tcp_r2t_pdu { + struct nvme_tcp_hdr hdr; + __u16 command_id; + __u16 ttag; + __le32 r2t_offset; + __le32 r2t_length; + __u8 rsvd[4]; +}; + +/** + * struct nvme_tcp_data_pdu - nvme tcp data pdu + * + * @hdr: pdu common header + * @command_id: nvme command identifier which this relates to + * @ttag: transfer tag (controller generated) + * @data_offset: offset from the start of the command data + * @data_length: length of the data stream + */ +struct nvme_tcp_data_pdu { + struct nvme_tcp_hdr hdr; + __u16 command_id; + __u16 ttag; + __le32 data_offset; + __le32 data_length; + __u8 rsvd[4]; +}; + +union nvme_tcp_pdu { + struct nvme_tcp_icreq_pdu icreq; + struct nvme_tcp_icresp_pdu icresp; + struct nvme_tcp_cmd_pdu cmd; + struct nvme_tcp_rsp_pdu rsp; + struct nvme_tcp_r2t_pdu r2t; + struct nvme_tcp_data_pdu data; +}; + +#endif /* _LINUX_NVME_TCP_H */ diff --git a/include/linux/nvme.h b/include/linux/nvme.h index 818dbe9331be..bbcc83886899 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -52,15 +52,20 @@ enum { enum { NVMF_TRTYPE_RDMA = 1, /* RDMA */ NVMF_TRTYPE_FC = 2, /* Fibre Channel */ + NVMF_TRTYPE_TCP = 3, /* TCP/IP */ NVMF_TRTYPE_LOOP = 254, /* Reserved for host usage */ NVMF_TRTYPE_MAX, }; /* Transport Requirements codes for Discovery Log Page entry TREQ field */ enum { - NVMF_TREQ_NOT_SPECIFIED = 0, /* Not specified */ - NVMF_TREQ_REQUIRED = 1, /* Required */ - NVMF_TREQ_NOT_REQUIRED = 2, /* Not Required */ + NVMF_TREQ_NOT_SPECIFIED = 0, /* Not specified */ + NVMF_TREQ_REQUIRED = 1, /* Required */ + NVMF_TREQ_NOT_REQUIRED = 2, /* Not Required */ +#define NVME_TREQ_SECURE_CHANNEL_MASK \ + (NVMF_TREQ_REQUIRED | NVMF_TREQ_NOT_REQUIRED) + + NVMF_TREQ_DISABLE_SQFLOW = (1 << 2), /* Supports SQ flow control disable */ }; /* RDMA QP Service Type codes for Discovery Log Page entry TSAS @@ -198,6 +203,11 @@ enum { NVME_PS_FLAGS_NON_OP_STATE = 1 << 1, }; +enum nvme_ctrl_attr { + NVME_CTRL_ATTR_HID_128_BIT = (1 << 0), + NVME_CTRL_ATTR_TBKAS = (1 << 6), +}; + struct nvme_id_ctrl { __le16 vid; __le16 ssvid; @@ -214,7 +224,11 @@ struct nvme_id_ctrl { __le32 rtd3e; __le32 oaes; __le32 ctratt; - __u8 rsvd100[156]; + __u8 rsvd100[28]; + __le16 crdt1; + __le16 crdt2; + __le16 crdt3; + __u8 rsvd134[122]; __le16 oacs; __u8 acl; __u8 aerl; @@ -481,12 +495,21 @@ enum { NVME_AER_NOTICE_NS_CHANGED = 0x00, NVME_AER_NOTICE_FW_ACT_STARTING = 0x01, NVME_AER_NOTICE_ANA = 0x03, + NVME_AER_NOTICE_DISC_CHANGED = 0xf0, }; enum { - NVME_AEN_CFG_NS_ATTR = 1 << 8, - NVME_AEN_CFG_FW_ACT = 1 << 9, - NVME_AEN_CFG_ANA_CHANGE = 1 << 11, + NVME_AEN_BIT_NS_ATTR = 8, + NVME_AEN_BIT_FW_ACT = 9, + NVME_AEN_BIT_ANA_CHANGE = 11, + NVME_AEN_BIT_DISC_CHANGE = 31, +}; + +enum { + NVME_AEN_CFG_NS_ATTR = 1 << NVME_AEN_BIT_NS_ATTR, + NVME_AEN_CFG_FW_ACT = 1 << NVME_AEN_BIT_FW_ACT, + NVME_AEN_CFG_ANA_CHANGE = 1 << NVME_AEN_BIT_ANA_CHANGE, + NVME_AEN_CFG_DISC_CHANGE = 1 << NVME_AEN_BIT_DISC_CHANGE, }; struct nvme_lba_range_type { @@ -639,7 +662,12 @@ struct nvme_common_command { __le32 cdw2[2]; __le64 metadata; union nvme_data_ptr dptr; - __le32 cdw10[6]; + __le32 cdw10; + __le32 cdw11; + __le32 cdw12; + __le32 cdw13; + __le32 cdw14; + __le32 cdw15; }; struct nvme_rw_command { @@ -738,6 +766,15 @@ enum { NVME_HOST_MEM_RETURN = (1 << 1), }; 
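(Aside: the nvme.h hunk above rewrites the AEN config flags as shifts of named bit numbers. A standalone sketch of the payoff; NVME_AEN_CFG() is a hypothetical helper macro here, the in-tree header spells each flag out individually:)

#include <stdio.h>

/* bit numbers mirrored from the hunk above */
enum {
	NVME_AEN_BIT_NS_ATTR     = 8,
	NVME_AEN_BIT_FW_ACT      = 9,
	NVME_AEN_BIT_ANA_CHANGE  = 11,
	NVME_AEN_BIT_DISC_CHANGE = 31,
};

/* each config flag is derived from its bit number instead of a magic shift */
#define NVME_AEN_CFG(bit)	(1U << (bit))

int main(void)
{
	unsigned int aen_cfg = NVME_AEN_CFG(NVME_AEN_BIT_NS_ATTR) |
			       NVME_AEN_CFG(NVME_AEN_BIT_FW_ACT) |
			       NVME_AEN_CFG(NVME_AEN_BIT_ANA_CHANGE) |
			       NVME_AEN_CFG(NVME_AEN_BIT_DISC_CHANGE);

	printf("AEN config mask: 0x%08x\n", aen_cfg);	/* 0x80000b00 */
	return 0;
}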
+struct nvme_feat_host_behavior { + __u8 acre; + __u8 resv1[511]; +}; + +enum { + NVME_ENABLE_ACRE = 1, +}; + /* Admin commands */ enum nvme_admin_opcode { @@ -792,6 +829,7 @@ enum { NVME_FEAT_RRL = 0x12, NVME_FEAT_PLM_CONFIG = 0x13, NVME_FEAT_PLM_WINDOW = 0x14, + NVME_FEAT_HOST_BEHAVIOR = 0x16, NVME_FEAT_SW_PROGRESS = 0x80, NVME_FEAT_HOST_ID = 0x81, NVME_FEAT_RESV_MASK = 0x82, @@ -1030,6 +1068,10 @@ struct nvmf_disc_rsp_page_hdr { struct nvmf_disc_rsp_page_entry entries[0]; }; +enum { + NVME_CONNECT_DISABLE_SQFLOW = (1 << 2), +}; + struct nvmf_connect_command { __u8 opcode; __u8 resv1; @@ -1126,6 +1168,20 @@ struct nvme_command { }; }; +struct nvme_error_slot { + __le64 error_count; + __le16 sqid; + __le16 cmdid; + __le16 status_field; + __le16 param_error_location; + __le64 lba; + __le32 nsid; + __u8 vs; + __u8 resv[3]; + __le64 cs; + __u8 resv2[24]; +}; + static inline bool nvme_is_write(struct nvme_command *cmd) { /* @@ -1243,6 +1299,7 @@ enum { NVME_SC_ANA_TRANSITION = 0x303, NVME_SC_HOST_PATH_ERROR = 0x370, + NVME_SC_CRD = 0x1800, NVME_SC_DNR = 0x4000, }; diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h index 1e3283c2af77..fe051323be0a 100644 --- a/include/linux/nvmem-provider.h +++ b/include/linux/nvmem-provider.h @@ -19,6 +19,13 @@ typedef int (*nvmem_reg_read_t)(void *priv, unsigned int offset, typedef int (*nvmem_reg_write_t)(void *priv, unsigned int offset, void *val, size_t bytes); +enum nvmem_type { + NVMEM_TYPE_UNKNOWN = 0, + NVMEM_TYPE_EEPROM, + NVMEM_TYPE_OTP, + NVMEM_TYPE_BATTERY_BACKED, +}; + /** * struct nvmem_config - NVMEM device configuration * @@ -28,8 +35,10 @@ typedef int (*nvmem_reg_write_t)(void *priv, unsigned int offset, * @owner: Pointer to exporter module. Used for refcounting. * @cells: Optional array of pre-defined NVMEM cells. * @ncells: Number of elements in cells. + * @type: Type of the nvmem storage * @read_only: Device is read-only. * @root_only: Device is accessible to root only. + * @no_of_node: Device should not use the parent's of_node even if it's !NULL. * @reg_read: Callback to read data. * @reg_write: Callback to write data. * @size: Device size. @@ -51,8 +60,10 @@ struct nvmem_config { struct module *owner; const struct nvmem_cell_info *cells; int ncells; + enum nvmem_type type; bool read_only; bool root_only; + bool no_of_node; nvmem_reg_read_t reg_read; nvmem_reg_write_t reg_write; int size; diff --git a/include/linux/objagg.h b/include/linux/objagg.h new file mode 100644 index 000000000000..34f38c186ea0 --- /dev/null +++ b/include/linux/objagg.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ +/* Copyright (c) 2018 Mellanox Technologies.
All rights reserved */ + +#ifndef _OBJAGG_H +#define _OBJAGG_H + +struct objagg_ops { + size_t obj_size; + void * (*delta_create)(void *priv, void *parent_obj, void *obj); + void (*delta_destroy)(void *priv, void *delta_priv); + void * (*root_create)(void *priv, void *obj); + void (*root_destroy)(void *priv, void *root_priv); +}; + +struct objagg; +struct objagg_obj; + +const void *objagg_obj_root_priv(const struct objagg_obj *objagg_obj); +const void *objagg_obj_delta_priv(const struct objagg_obj *objagg_obj); +const void *objagg_obj_raw(const struct objagg_obj *objagg_obj); + +struct objagg_obj *objagg_obj_get(struct objagg *objagg, void *obj); +void objagg_obj_put(struct objagg *objagg, struct objagg_obj *objagg_obj); +struct objagg *objagg_create(const struct objagg_ops *ops, void *priv); +void objagg_destroy(struct objagg *objagg); + +struct objagg_obj_stats { + unsigned int user_count; + unsigned int delta_user_count; /* includes delta object users */ +}; + +struct objagg_obj_stats_info { + struct objagg_obj_stats stats; + struct objagg_obj *objagg_obj; /* associated object */ + bool is_root; +}; + +struct objagg_stats { + unsigned int stats_info_count; + struct objagg_obj_stats_info stats_info[]; +}; + +const struct objagg_stats *objagg_stats_get(struct objagg *objagg); +void objagg_stats_put(const struct objagg_stats *objagg_stats); + +#endif diff --git a/include/linux/of.h b/include/linux/of.h index 0fe5bef81a7e..fe472e5195a9 100644 --- a/include/linux/of.h +++ b/include/linux/of.h @@ -137,11 +137,16 @@ extern struct device_node *of_aliases; extern struct device_node *of_stdout; extern raw_spinlock_t devtree_lock; -/* flag descriptions (need to be visible even when !CONFIG_OF) */ -#define OF_DYNAMIC 1 /* node and properties were allocated via kmalloc */ -#define OF_DETACHED 2 /* node has been detached from the device tree */ -#define OF_POPULATED 3 /* device already created for the node */ -#define OF_POPULATED_BUS 4 /* of_platform_populate recursed to children of this node */ +/* + * struct device_node flag descriptions + * (need to be visible even when !CONFIG_OF) + */ +#define OF_DYNAMIC 1 /* (and properties) allocated via kmalloc */ +#define OF_DETACHED 2 /* detached from the device tree */ +#define OF_POPULATED 3 /* device already created */ +#define OF_POPULATED_BUS 4 /* platform bus created for children */ +#define OF_OVERLAY 5 /* allocated for an overlay */ +#define OF_OVERLAY_FREE_CSET 6 /* in overlay cset being freed */ #define OF_BAD_ADDR ((u64)-1) @@ -984,6 +989,12 @@ static inline int of_map_rid(struct device_node *np, u32 rid, #define of_node_cmp(s1, s2) strcasecmp((s1), (s2)) #endif +static inline int of_prop_val_eq(struct property *p1, struct property *p2) +{ + return p1->length == p2->length && + !memcmp(p1->value, p2->value, (size_t)p1->length); +} + #if defined(CONFIG_OF) && defined(CONFIG_NUMA) extern int of_node_to_nid(struct device_node *np); #else diff --git a/include/linux/of_net.h b/include/linux/of_net.h index 90d81ee9e6a0..9cd72aab76fe 100644 --- a/include/linux/of_net.h +++ b/include/linux/of_net.h @@ -13,7 +13,6 @@ struct net_device; extern int of_get_phy_mode(struct device_node *np); extern const void *of_get_mac_address(struct device_node *np); -extern int of_get_nvmem_mac_address(struct device_node *np, void *addr); extern struct net_device *of_find_net_device_by_node(struct device_node *np); #else static inline int of_get_phy_mode(struct device_node *np) @@ -26,11 +25,6 @@ static inline const void *of_get_mac_address(struct device_node *np) return 
NULL; } -static inline int of_get_nvmem_mac_address(struct device_node *np, void *addr) -{ - return -ENODEV; -} - static inline struct net_device *of_find_net_device_by_node(struct device_node *np) { return NULL; diff --git a/include/linux/of_pdt.h b/include/linux/of_pdt.h index d0b183ab65c6..89e4eb076a01 100644 --- a/include/linux/of_pdt.h +++ b/include/linux/of_pdt.h @@ -35,6 +35,4 @@ extern void *prom_early_alloc(unsigned long size); /* for building the device tree */ extern void of_pdt_build_devicetree(phandle root_node, struct of_pdt_ops *ops); -extern void (*of_pdt_build_more)(struct device_node *dp); - #endif /* _LINUX_OF_PDT_H */ diff --git a/include/linux/oom.h b/include/linux/oom.h index 69864a547663..d07992009265 100644 --- a/include/linux/oom.h +++ b/include/linux/oom.h @@ -15,6 +15,13 @@ struct notifier_block; struct mem_cgroup; struct task_struct; +enum oom_constraint { + CONSTRAINT_NONE, + CONSTRAINT_CPUSET, + CONSTRAINT_MEMORY_POLICY, + CONSTRAINT_MEMCG, +}; + /* * Details of the page allocation that triggered the oom killer that are used to * determine what should be killed. @@ -42,6 +49,9 @@ struct oom_control { unsigned long totalpages; struct task_struct *chosen; unsigned long chosen_points; + + /* Used to print the constraint info. */ + enum oom_constraint constraint; }; extern struct mutex oom_lock; diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h index 7ec86bf31ce4..1dda31825ec4 100644 --- a/include/linux/page-flags-layout.h +++ b/include/linux/page-flags-layout.h @@ -82,6 +82,16 @@ #define LAST_CPUPID_WIDTH 0 #endif +#ifdef CONFIG_KASAN_SW_TAGS +#define KASAN_TAG_WIDTH 8 +#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \ + > BITS_PER_LONG - NR_PAGEFLAGS +#error "KASAN: not enough bits in page flags for tag" +#endif +#else +#define KASAN_TAG_WIDTH 0 +#endif + /* * We are going to use the flags for the page to node mapping if its in * there. This includes the case where there is no node, so it is implicit. 
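(Aside: the new oom_control->constraint field in the oom.h hunk above exists so the OOM report can print which constraint fired. One plausible enum-to-string table, sketched in userspace and not necessarily matching the in-tree one:)

#include <stdio.h>

enum oom_constraint {
	CONSTRAINT_NONE,
	CONSTRAINT_CPUSET,
	CONSTRAINT_MEMORY_POLICY,
	CONSTRAINT_MEMCG,
};

/* designated initializers keep the table in sync with the enum order */
static const char * const oom_constraint_text[] = {
	[CONSTRAINT_NONE]          = "CONSTRAINT_NONE",
	[CONSTRAINT_CPUSET]        = "CONSTRAINT_CPUSET",
	[CONSTRAINT_MEMORY_POLICY] = "CONSTRAINT_MEMORY_POLICY",
	[CONSTRAINT_MEMCG]         = "CONSTRAINT_MEMCG",
};

int main(void)
{
	enum oom_constraint c = CONSTRAINT_CPUSET;

	printf("oom constraint: %s\n", oom_constraint_text[c]);
	return 0;
}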
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 50ce1bddaf56..39b4494e29f1 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -669,6 +669,7 @@ PAGEFLAG_FALSE(DoubleMap) #define PAGE_TYPE_BASE 0xf0000000 /* Reserve 0x0000007f to catch underflows of page_mapcount */ +#define PAGE_MAPCOUNT_RESERVE -128 #define PG_buddy 0x00000080 #define PG_balloon 0x00000100 #define PG_kmemcg 0x00000200 @@ -677,6 +678,11 @@ PAGEFLAG_FALSE(DoubleMap) #define PageType(page, flag) \ ((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE) +static inline int page_has_type(struct page *page) +{ + return (int)page->page_type < PAGE_MAPCOUNT_RESERVE; +} + #define PAGE_TYPE_OPS(uname, lname) \ static __always_inline int Page##uname(struct page *page) \ { \ diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h index 4ae347cbc36d..4eb26d278046 100644 --- a/include/linux/page-isolation.h +++ b/include/linux/page-isolation.h @@ -30,8 +30,11 @@ static inline bool is_migrate_isolate(int migratetype) } #endif +#define SKIP_HWPOISON 0x1 +#define REPORT_FAILURE 0x2 + bool has_unmovable_pages(struct zone *zone, struct page *page, int count, - int migratetype, bool skip_hwpoisoned_pages); + int migratetype, int flags); void set_pageblock_migratetype(struct page *page, int migratetype); int move_freepages_block(struct zone *zone, struct page *page, int migratetype, int *num_movable); @@ -44,10 +47,14 @@ int move_freepages_block(struct zone *zone, struct page *page, * For isolating all pages in the range finally, the caller has to * free all pages in the range. test_pages_isolated() can be used to * test it. + * + * The following flags are allowed (they can be combined in a bit mask) + * SKIP_HWPOISON - ignore hwpoison pages + * REPORT_FAILURE - report details about the failure to isolate the range */ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn, - unsigned migratetype, bool skip_hwpoisoned_pages); + unsigned migratetype, int flags); /* * Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE. diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h index 9132c5cb41f1..06a66327333d 100644 --- a/include/linux/pageblock-flags.h +++ b/include/linux/pageblock-flags.h @@ -25,10 +25,11 @@ #include +#define PB_migratetype_bits 3 /* Bit indices that affect a whole block of pages */ enum pageblock_bits { PB_migrate, - PB_migrate_end = PB_migrate + 3 - 1, + PB_migrate_end = PB_migrate + PB_migratetype_bits - 1, /* 3 bits required for migrate types */ PB_migrate_skip,/* If set the block is skipped by compaction */ diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 226f96f0dee0..e2d7039af6a3 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -537,6 +537,8 @@ static inline int wait_on_page_locked_killable(struct page *page) return wait_on_page_bit_killable(compound_head(page), PG_locked); } +extern void put_and_wait_on_page_locked(struct page *page); + /* * Wait for a page to complete writeback */ diff --git a/include/linux/pci.h b/include/linux/pci.h index 51a5a5217667..1ab78a23ae08 100644 --- a/include/linux/pci.h +++ b/include/linux/pci.h @@ -396,6 +396,14 @@ struct pci_dev { unsigned int is_hotplug_bridge:1; unsigned int shpc_managed:1; /* SHPC owned by shpchp */ unsigned int is_thunderbolt:1; /* Thunderbolt controller */ + /* + * Devices marked as untrusted are the ones that can potentially + * execute DMA attacks and similar.
They are typically connected + * through external ports such as Thunderbolt but not limited to + * that. When an IOMMU is enabled they should be getting full + * mappings to make sure they cannot access arbitrary memory. + */ + unsigned int untrusted:1; unsigned int __aer_firmware_first_valid:1; unsigned int __aer_firmware_first:1; unsigned int broken_intx_masking:1; /* INTx masking can't be used */ diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 349276fbd269..d86d5a2477fc 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -2362,6 +2362,8 @@ #define PCI_VENDOR_ID_SYNOPSYS 0x16c3 +#define PCI_VENDOR_ID_USR 0x16ec + #define PCI_VENDOR_ID_VITESSE 0x1725 #define PCI_DEVICE_ID_VITESSE_VSC7174 0x7174 diff --git a/include/linux/phy.h b/include/linux/phy.h index 3ea87f774a76..da039f211c22 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -58,6 +58,11 @@ extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_full_features) __ro_after_ini #define PHY_10GBIT_FEATURES ((unsigned long *)&phy_10gbit_features) #define PHY_10GBIT_FULL_FEATURES ((unsigned long *)&phy_10gbit_full_features) +extern const int phy_10_100_features_array[4]; +extern const int phy_basic_t1_features_array[2]; +extern const int phy_gbit_features_array[2]; +extern const int phy_10gbit_features_array[1]; + /* * Set phydev->irq to PHY_POLL if interrupts are not supported, * or not desired for this PHY. Set to PHY_IGNORE_INTERRUPT if @@ -66,9 +71,8 @@ extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_full_features) __ro_after_ini #define PHY_POLL -1 #define PHY_IGNORE_INTERRUPT -2 -#define PHY_HAS_INTERRUPT 0x00000001 -#define PHY_IS_INTERNAL 0x00000002 -#define PHY_RST_AFTER_CLK_EN 0x00000004 +#define PHY_IS_INTERNAL 0x00000001 +#define PHY_RST_AFTER_CLK_EN 0x00000002 #define MDIO_DEVICE_IS_PHY 0x80000000 /* Interface Mode definitions */ @@ -178,7 +182,6 @@ static inline const char *phy_modes(phy_interface_t interface) #define PHY_INIT_TIMEOUT 100000 #define PHY_STATE_TIME 1 #define PHY_FORCE_TIMEOUT 10 -#define PHY_AN_TIMEOUT 10 #define PHY_MAX_ADDR 32 @@ -264,57 +267,27 @@ static inline struct mii_bus *devm_mdiobus_alloc(struct device *dev) void devm_mdiobus_free(struct device *dev, struct mii_bus *bus); struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr); -#define PHY_INTERRUPT_DISABLED 0x0 -#define PHY_INTERRUPT_ENABLED 0x80000000 +#define PHY_INTERRUPT_DISABLED false +#define PHY_INTERRUPT_ENABLED true /* PHY state machine states: * * DOWN: PHY device and driver are not ready for anything. probe * should be called if and only if the PHY is in this state, * given that the PHY device exists. - * - PHY driver probe function will, depending on the PHY, set - * the state to STARTING or READY - * - * STARTING: PHY device is coming up, and the ethernet driver is - * not ready. PHY drivers may set this in the probe function. - * If they do, they are responsible for making sure the state is - * eventually set to indicate whether the PHY is UP or READY, - * depending on the state when the PHY is done starting up. - * - PHY driver will set the state to READY - * - start will set the state to PENDING + * - PHY driver probe function will set the state to READY * * READY: PHY is ready to send and receive packets, but the * controller is not. By default, PHYs which do not implement - * probe will be set to this state by phy_probe(). If the PHY - * driver knows the PHY is ready, and the PHY state is STARTING, - * then it sets this STATE. 
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 349276fbd269..d86d5a2477fc 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2362,6 +2362,8 @@
 
 #define PCI_VENDOR_ID_SYNOPSYS		0x16c3
 
+#define PCI_VENDOR_ID_USR		0x16ec
+
 #define PCI_VENDOR_ID_VITESSE		0x1725
 #define PCI_DEVICE_ID_VITESSE_VSC7174	0x7174
 
diff --git a/include/linux/phy.h b/include/linux/phy.h
index 3ea87f774a76..da039f211c22 100644
--- a/include/linux/phy.h
+++ b/include/linux/phy.h
@@ -58,6 +58,11 @@ extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_full_features) __ro_after_init
 #define PHY_10GBIT_FEATURES ((unsigned long *)&phy_10gbit_features)
 #define PHY_10GBIT_FULL_FEATURES ((unsigned long *)&phy_10gbit_full_features)
 
+extern const int phy_10_100_features_array[4];
+extern const int phy_basic_t1_features_array[2];
+extern const int phy_gbit_features_array[2];
+extern const int phy_10gbit_features_array[1];
+
 /*
  * Set phydev->irq to PHY_POLL if interrupts are not supported,
  * or not desired for this PHY.  Set to PHY_IGNORE_INTERRUPT if
@@ -66,9 +71,8 @@ extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_full_features) __ro_after_init
 #define PHY_POLL		-1
 #define PHY_IGNORE_INTERRUPT	-2
 
-#define PHY_HAS_INTERRUPT	0x00000001
-#define PHY_IS_INTERNAL		0x00000002
-#define PHY_RST_AFTER_CLK_EN	0x00000004
+#define PHY_IS_INTERNAL		0x00000001
+#define PHY_RST_AFTER_CLK_EN	0x00000002
 #define MDIO_DEVICE_IS_PHY	0x80000000
 
 /* Interface Mode definitions */
@@ -178,7 +182,6 @@ static inline const char *phy_modes(phy_interface_t interface)
 #define PHY_INIT_TIMEOUT	100000
 #define PHY_STATE_TIME		1
 #define PHY_FORCE_TIMEOUT	10
-#define PHY_AN_TIMEOUT		10
 
 #define PHY_MAX_ADDR	32
 
@@ -264,57 +267,27 @@ static inline struct mii_bus *devm_mdiobus_alloc(struct device *dev)
 void devm_mdiobus_free(struct device *dev, struct mii_bus *bus);
 struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr);
 
-#define PHY_INTERRUPT_DISABLED	0x0
-#define PHY_INTERRUPT_ENABLED	0x80000000
+#define PHY_INTERRUPT_DISABLED	false
+#define PHY_INTERRUPT_ENABLED	true
 
 /* PHY state machine states:
  *
  * DOWN: PHY device and driver are not ready for anything.  probe
  * should be called if and only if the PHY is in this state,
  * given that the PHY device exists.
- * - PHY driver probe function will, depending on the PHY, set
- * the state to STARTING or READY
- *
- * STARTING: PHY device is coming up, and the ethernet driver is
- * not ready.  PHY drivers may set this in the probe function.
- * If they do, they are responsible for making sure the state is
- * eventually set to indicate whether the PHY is UP or READY,
- * depending on the state when the PHY is done starting up.
- * - PHY driver will set the state to READY
- * - start will set the state to PENDING
+ * - PHY driver probe function will set the state to READY
  *
  * READY: PHY is ready to send and receive packets, but the
  * controller is not.  By default, PHYs which do not implement
- * probe will be set to this state by phy_probe().  If the PHY
- * driver knows the PHY is ready, and the PHY state is STARTING,
- * then it sets this STATE.
+ * probe will be set to this state by phy_probe().
  * - start will set the state to UP
 *
- * PENDING: PHY device is coming up, but the ethernet driver is
- * ready.  phy_start will set this state if the PHY state is
- * STARTING.
- * - PHY driver will set the state to UP when the PHY is ready
- *
 * UP: The PHY and attached device are ready to do work.
 * Interrupts should be started here.
- * - timer moves to AN
- *
- * AN: The PHY is currently negotiating the link state.  Link is
- * therefore down for now.  phy_timer will set this state when it
- * detects the state is UP.  config_aneg will set this state
- * whenever called with phydev->autoneg set to AUTONEG_ENABLE.
- * - If autonegotiation finishes, but there's no link, it sets
- * the state to NOLINK.
- * - If aneg finishes with link, it sets the state to RUNNING,
- * and calls adjust_link
- * - If autonegotiation did not finish after an arbitrary amount
- * of time, autonegotiation should be tried again if the PHY
- * supports "magic" autonegotiation (back to AN)
- * - If it didn't finish, and no magic_aneg, move to FORCING.
+ * - timer moves to NOLINK or RUNNING
 *
 * NOLINK: PHY is up, but not currently plugged in.
- * - If the timer notes that the link comes back, we move to RUNNING
- * - config_aneg moves to AN
+ * - irq or timer will set RUNNING if link comes back
 * - phy_stop moves to HALTED
 *
 * FORCING: PHY is being configured with forced settings
@@ -325,11 +298,7 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr);
 *
 * RUNNING: PHY is currently up, running, and possibly sending
 * and/or receiving packets
- * - timer will set CHANGELINK if we're polling (this ensures the
- * link state is polled every other cycle of this state machine,
- * which makes it every other second)
- * - irq will set CHANGELINK
- * - config_aneg will set AN
+ * - irq or timer will set NOLINK if link goes down
 * - phy_stop moves to HALTED
 *
 * CHANGELINK: PHY experienced a change in link state
@@ -349,16 +318,13 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr);
 */
 enum phy_state {
 	PHY_DOWN = 0,
-	PHY_STARTING,
 	PHY_READY,
-	PHY_PENDING,
+	PHY_HALTED,
 	PHY_UP,
-	PHY_AN,
 	PHY_RUNNING,
 	PHY_NOLINK,
 	PHY_FORCING,
 	PHY_CHANGELINK,
-	PHY_HALTED,
 	PHY_RESUMING
 };
 
@@ -390,7 +356,6 @@ struct phy_c45_device_ids {
 * giving up on the current attempt at acquiring a link
 * irq: IRQ number of the PHY's interrupt (-1 if none)
 * phy_timer: The timer for handling the state machine
- * phy_queue: A work_queue for the phy_mac_interrupt
 * attached_dev: The attached enet driver's device instance ptr
 * adjust_link: Callback for the enet controller to respond to
 * changes in the link state.
@@ -427,6 +392,9 @@ struct phy_device {
 	/* The most recently read link state */
 	unsigned link:1;
 
+	/* Interrupts are enabled */
+	unsigned interrupts:1;
+
 	enum phy_state state;
 
 	u32 dev_flags;
@@ -442,14 +410,11 @@ struct phy_device {
 	int pause;
 	int asym_pause;
 
-	/* Enabled Interrupts */
-	u32 interrupts;
-
-	/* Union of PHY and Attached devices' supported modes */
-	/* See mii.h for more info */
-	u32 supported;
-	u32 advertising;
-	u32 lp_advertising;
+	/* Union of PHY and Attached devices' supported link modes */
+	/* See ethtool.h for more info */
+	__ETHTOOL_DECLARE_LINK_MODE_MASK(supported);
+	__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
+	__ETHTOOL_DECLARE_LINK_MODE_MASK(lp_advertising);
 
 	/* Energy efficient ethernet modes which should be prohibited */
 	u32 eee_broken_modes;
@@ -475,7 +440,6 @@ struct phy_device {
 	void *priv;
 
 	/* Interrupt and Polling infrastructure */
-	struct work_struct phy_queue;
 	struct delayed_work state_queue;
 
 	struct mutex lock;
@@ -674,6 +638,10 @@ struct phy_driver {
 #define PHY_ANY_ID "MATCH ANY PHY"
 #define PHY_ANY_UID 0xffffffff
 
+#define PHY_ID_MATCH_EXACT(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 0)
+#define PHY_ID_MATCH_MODEL(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 4)
+#define PHY_ID_MATCH_VENDOR(id) .phy_id = (id), .phy_id_mask = GENMASK(31, 10)
+
 /* A Structure for boards to register fixups with the PHY Lib */
 struct phy_fixup {
 	struct list_head list;
@@ -697,9 +665,31 @@ struct phy_setting {
 const struct phy_setting *
 phy_lookup_setting(int speed, int duplex, const unsigned long *mask,
-		   size_t maxbit, bool exact);
+		   bool exact);
 size_t phy_speeds(unsigned int *speeds, size_t size,
-		  unsigned long *mask, size_t maxbit);
+		  unsigned long *mask);
+
+static inline bool __phy_is_started(struct phy_device *phydev)
+{
+	WARN_ON(!mutex_is_locked(&phydev->lock));
+
+	return phydev->state >= PHY_UP;
+}
+
+/**
+ * phy_is_started - Convenience function to check whether PHY is started
+ * @phydev: The phy_device struct
+ */
+static inline bool phy_is_started(struct phy_device *phydev)
+{
+	bool started;
+
+	mutex_lock(&phydev->lock);
+	started = __phy_is_started(phydev);
+	mutex_unlock(&phydev->lock);
+
+	return started;
+}
 
 void phy_resolve_aneg_linkmode(struct phy_device *phydev);
 
@@ -1050,11 +1040,9 @@ int phy_driver_register(struct phy_driver *new_driver, struct module *owner);
 int phy_drivers_register(struct phy_driver *new_driver, int n,
 			 struct module *owner);
 void phy_state_machine(struct work_struct *work);
-void phy_change_work(struct work_struct *work);
 void phy_mac_interrupt(struct phy_device *phydev);
 void phy_start_machine(struct phy_device *phydev);
 void phy_stop_machine(struct phy_device *phydev);
-void phy_trigger_machine(struct phy_device *phydev);
 int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd);
 void phy_ethtool_ksettings_get(struct phy_device *phydev,
 			       struct ethtool_link_ksettings *cmd);
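[ The trimmed state-machine documentation above collapses the old
STARTING/PENDING/AN states into the remaining ones; the authoritative
transitions live in phy_state_machine() in drivers/net/phy/phy.c. As a
rough userspace model for illustration only, not kernel code:: ]

    enum state { DOWN, READY, UP, RUNNING, NOLINK, HALTED };

    /* Toy transition function: once the PHY is started, the link state
     * is all that matters; phy_stop() halts from any started state.
     */
    static enum state next_state(enum state cur, int link_up, int stopping)
    {
    	if (stopping)
    		return HALTED;

    	switch (cur) {
    	case UP:	/* first irq/poll resolves the link */
    	case RUNNING:
    	case NOLINK:
    		return link_up ? RUNNING : NOLINK;
    	default:	/* DOWN and READY wait for probe()/phy_start() */
    		return cur;
    	}
    }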
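[ With supported/advertising/lp_advertising now declared via
__ETHTOOL_DECLARE_LINK_MODE_MASK, drivers manipulate them with the
linkmode bitmap helpers instead of u32 masks. A short sketch, assuming
a valid phydev pointer; the function name is made up:: ]

    #include <linux/linkmode.h>
    #include <linux/phy.h>

    static void example_tweak_linkmodes(struct phy_device *phydev)
    {
    	/* Advertise gigabit full duplex in addition to what is there... */
    	linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
    			 phydev->advertising);

    	/* ...and check what the link partner offered. */
    	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
    			      phydev->lp_advertising))
    		phydev->pause = 1;
    }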
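[ The PHY_ID_MATCH_* helpers expand to the .phy_id/.phy_id_mask pair of
struct phy_driver; GENMASK(31, 4) in PHY_ID_MATCH_MODEL masks off the
4-bit revision field of the PHY ID. A sketch of a driver table using
them; the ID value and name are made up:: ]

    static struct phy_driver example_phy_driver[] = {
    	{
    		PHY_ID_MATCH_MODEL(0x002b0b20),	/* hypothetical OUI/model */
    		.name	= "Example Gigabit PHY",
    		/* .config_init, .read_status, ... as usual */
    	},
    };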
diff --git a/include/linux/phy/phy-mipi-dphy.h b/include/linux/phy/phy-mipi-dphy.h
new file mode 100644
index 000000000000..c08aacc0ac35
--- /dev/null
+++ b/include/linux/phy/phy-mipi-dphy.h
@@ -0,0 +1,285 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2018 Cadence Design Systems Inc.
+ */
+
+#ifndef __PHY_MIPI_DPHY_H_
+#define __PHY_MIPI_DPHY_H_
+
+#include